2018/03/22
<issue_start>username_0: I am new to Cake build. I want to run a Microsoft App Center command from a Cake script. I tried using Cake.Powershell for it <https://cakebuild.net/addins/powershell/> ``` StartPowershellFile("./appcenter.ps1"); ``` > > // appcenter.ps1 had just the CLI command for App Center "*appcenter test > run uitest --app \"5678999\" --devices \"7738\" --app-path > ./iPhone.ipa --test-series \"master\" --locale \"en_US\" --build-dir > /iPhone/bin/Debug*" > > > This did not work. I also tried ``` StartProcess("./appcenter.ps1"); ``` from <https://cakebuild.net/dsl/process/> Can anyone suggest or provide sample code showing how I can run CLI commands from a Cake script? I need to run the CLI for the Microsoft App Center tool.<issue_comment>username_1: You have a few options here. If you want to continue to use the StartProcess alias, you will need to give it the application that you want to execute, in your case, PowerShell. The file that you want to run would likely then become the argument that you pass in. So, you would need to do something similar to: ``` StartProcess("powershell", new ProcessSettings{ Arguments = "./appcenter.ps1" }); ``` **NOTE:** This is untested, and may need to be altered in order for it to function. More information about the overload used for StartProcess can be found here: <https://cakebuild.net/api/Cake.Common/ProcessAliases/81E648CC> The second, probably preferred option, would be to use the Cake.PowerShell addin: <https://github.com/SharpeRAD/Cake.Powershell> This would allow you to do: ``` StartPowershellFile("./appcenter.ps1"); ``` Finally, I "think" you are trying to utilise the same endpoint that is being used within this Addin: <https://github.com/cake-contrib/Cake.WindowsAppStore> Perhaps you could collaborate with the original author of that addin to extend it to include the required functionality that you are trying to use. 
Upvotes: 2 <issue_comment>username_2: After going through Gary's suggestions, I ended up using `StartProcess`: ``` StartProcess("appcenter", new ProcessSettings{ Arguments = "test run uitest --app \"MyApp\" --devices \"45678\" --app-path /TheApp.ipa --test-series \"master\" --locale \"en_US\" --build-dir /UITest/bin/Debug" }); ``` Upvotes: 2 [selected_answer]
2018/03/22
<issue_start>username_0: Using linux, I want to search a text file for the string `Blah` and then return the full line that contained the string and all the lines following the pattern up until a line that contains the word `Failed`. For example, ``` Test Case Name "Blah" Error 1 Error 2 Error 3 Failed Test Case Name "Foo" Pass Test Case Name "Red" Pass ``` In the above, I want to search for "Blah", and then return: ``` Test Case Name "Blah" Error 1 Error 2 Error 3 ``` Up until the line `Failed`. There can be any number of "Error" lines between `Blah` and `Failed`. **Follow up to make it faster** Both sed and awk options worked. ``` sed '/Blah/!d;:a;n;/Failed/d;ba' file ``` and ``` awk '/Failed/{p=0}/Blah/{p=1}p;' file ``` However, I noticed that while returning the expected outcome is quite fast, it takes ages to exit. Maybe these commands keep searching for `Blah` and, given that it appears only once, they run until end-of-file. This would not be much of a problem, but I'm working with a file that contains 10 million lines, and for now it is painfully slow. Any suggestions on how to exit after finding both lines containing `Blah` and `Failed` would be much appreciated. Thanks!<issue_comment>username_1: Would you like to do it with awk? `awk '/Failed/{p=0}/Blah/{p=1}p;' file` will work for you. Upvotes: 1 <issue_comment>username_2: With sed: ``` sed '/Blah/,/Failed/!d;//{1!d;}' file ``` * `/Blah/,/Failed/`: match lines from `Blah` to `Failed` * `!d`: do not delete previous matching lines * `//{1!d;}`: from lines matching the addresses (that is `Blah` and `Failed`), do not delete the first one `1!d`. Upvotes: 2 <issue_comment>username_3: This might work for you (GNU sed): ``` sed -n '/Blah/,/Failed/{/Failed/!p}' file ``` Print the lines between and including `Blah` to `Failed` unless the line contains `Failed`. ``` sed ':a;/Blah/!d;:b;n;/Failed/ba;bb' file ``` If a line does not contain `Blah` delete it. 
Otherwise, print the current line and fetch the next (`n`). If this line contains `Failed`, delete it and begin the next iteration. Otherwise, repeat until successful or end-of-file. The first solution prevents `Blah` and `Failed` being printed if they inhabit the same line. The second alternative allows this. Upvotes: 2
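None of the answers addresses the follow-up about slow termination. A hedged sketch (not from the thread): both tools can stop reading as soon as the block has been printed, which matters on a 10-million-line file.

```shell
# Build a small sample file matching the question (name `tc.txt` is mine).
printf 'Test Case Name "Blah"\nError 1\nError 2\nError 3\nFailed\nTest Case Name "Foo"\nPass\n' > tc.txt

# awk: once printing has started (p=1), a line matching Failed ends the run
# immediately via exit, instead of scanning the rest of the file.
awk '/Failed/ && p { exit } /Blah/ { p = 1 } p' tc.txt

# sed equivalent: inside the Blah..Failed range, quit on the Failed line
# (with -n, q exits without printing it), print everything else.
sed -n '/Blah/,/Failed/{/Failed/q;p}' tc.txt
```

Both commands print the `Blah` line and the three `Error` lines, then exit without reading past `Failed`.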
2018/03/22
<issue_start>username_0: Firstly, I apologize if my title is misleading/unclear, as I am really not sure what is the best way to put it. I have 3 optional arguments, which use `action='store_true'`. Let's keep the argument flags as `-va, -vb, -vc` ``` var_list = ['a', 'b', 'c'] if args.va: run_this_func(var_list[0]) if args.vb: run_this_func(var_list[1]) if args.vc: run_this_func(var_list[2]) if not args.va and not args.vb and not args.vc: for var in var_list: run_this_func(var) if args.va and args.vb: run_this_func(var_list[:-1]) if args.vb and args.vc: run_this_func(var_list[1:]) if args.vc and args.va: run_this_func(var_list[0], var_list[2]) ``` How can I code this in a more efficient way? The above method that I utilized, while it may work, seems more like a roundabout way to get things going... Initially I am thinking of using a tuple such that it will be something such as `input = (args.va, args.vb, args.vc)` so that it may return me e.g. `(True, False, False)`... Not sure if that is ideal though. Any advice?<issue_comment>username_1: I think you can use a tuple too. ``` var_list = ['a', 'b', 'c'] flags = (args.va, args.vb, args.vc) vars = [item for index, item in enumerate(var_list) if flags[index]] run_this_func(*vars) ``` Upvotes: 1 <issue_comment>username_2: I think this is a case where [`argparse.add_argument`](https://docs.python.org/3.6/library/argparse.html#argparse.ArgumentParser.add_argument)'s [`nargs`](https://docs.python.org/3.6/library/argparse.html#nargs) and [`choices`](https://docs.python.org/3.6/library/argparse.html#choices) keywords can be used to get the list that you want (some subset of `[a,b,c]`) without having to filter based on multiple arguments. 
I would recommend doing something like this: ``` parser = argparse.ArgumentParser() parser.add_argument('-v', choices=['a','b','c'], nargs='+') args = parser.parse_args() ``` This says: create an optional argument `-v` that can take one or multiple values but each of those values must be one of `['a','b','c']` Then you can pass it arguments from the command line like this (for example): ``` $ python my_file.py -v a c ``` which means that `args` will look like: `Namespace(v=['a', 'c'])` and `args.v` looks like `['a', 'c']` which is the list you were looking for (and without any filtering!) You can then call your function as: ``` run_this_func(args.v) ``` Upvotes: 0
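The `nargs`/`choices` approach from the second answer can be exercised without a real command line, since `parse_args` accepts an explicit argument list (a small self-contained sketch, not code from the thread):

```python
import argparse

# Minimal sketch of the nargs/choices approach described above.
parser = argparse.ArgumentParser()
parser.add_argument('-v', choices=['a', 'b', 'c'], nargs='+')

# parse_args accepts an explicit argv list, which is handy for testing:
args = parser.parse_args(['-v', 'a', 'c'])
print(args.v)  # ['a', 'c']
```

Passing `-v a c` yields the filtered list directly, so the run function can simply receive `args.v`.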
2018/03/22
<issue_start>username_0: So I'm trying to get the sum of every score in all games related to the corresponding player, and how many games he has played ``` class Player(models.Model): name = models.CharField() class Game(models.Model): points = models.IntegerField() user = models.ForeignKey(User, on_delete=models.CASCADE) ``` My desired output is: ``` { "name": "asdf", "total_points": "9999", "games_played": "12" } ``` I thought I could do it by getting a values list of my players, then looking up the sum of all games and counting games played for every value on the list, but I don't know how to do it. I know how to do it for a specific Player, but I want to get the entire list. I see this as a loop where every Player counts on its own. Thanks<issue_comment>username_1: Assuming that `Player` has a `User` stored as `.user`, you could do something like: ``` from django.db.models import Sum games = Game.objects.filter(user=player.user) output = { 'name': player.name, 'total_points': games.aggregate(Sum('points')), 'total_games': games.count() } ``` Upvotes: 2 <issue_comment>username_2: Your `models.py` seems incomplete, so I'm assuming there is a FK relationship between `Player` and `Game` as below ``` class Player(models.Model): name = models.CharField(max_length=200) class Game(models.Model): points = models.IntegerField() user = models.ForeignKey(Player, on_delete=models.CASCADE) ``` You can fetch the whole summary from your DB by using a `GROUP BY` query as below, ``` from django.db.models import Count, Sum, F Game.objects.annotate(name=F('user__name')).values('name').annotate(games_played=Count('id'), total_points=Sum('points')) ``` The result is a `QuerySet`, an iterable similar to a `list`; you can iterate over it if you want. Each item from the `QuerySet` will look like this, ``` {'name': 'name_1', 'games_played': 2, 'total_points': 6} ``` Upvotes: 4 [selected_answer]
2018/03/22
<issue_start>username_0: We're getting ready to deploy our app (Cytoscape) using Install4J 7's feature that detects the installed JVM and offers to download a new one. We find that if we download a JVM, let the installer finish, then run the installer again, it offers to download a new JVM again. I would have thought it would have detected the one it just downloaded. **Have we misconfigured something? Or is there a later version of Install4J?** Our JVM range is 1.8.0_152 .. 1.9. The JVM we're downloading is here: <http://chianti.ucsd.edu/jres/macosx-amd64-1.8.0_162.tar.gz> **What could be going wrong?** Thanks!
2018/03/22
<issue_start>username_0: I am trying to read all lines in `out.txt` and check if each starts with the word `No`. If the line starts with `No`, I am echoing `yes` to file `1.txt`, else `no` to `2.txt`. I am using the below code: ``` for /F "tokens=*" %%A in (out.txt) do ( IF "%%A:~0,2%"=="No" ( @echo yes >> 1.txt )else ( @echo no >> 2.txt ) ) ``` but it's not working for me. It seems like the if statement is not working correctly. Can someone please tell me if there is something wrong with the code? I tried to echo `%%A` and it is reading the lines as expected.<issue_comment>username_1: You cannot substring a *metavariable*. Try ``` findstr /b /L "No" out.txt >nul if errorlevel 1 ( echo not found ) else ( echo found ) ``` which finds any line in the file that `/b` begins `/L` with the literal `No` and outputs any found. The `>nul` disposes of the output. `errorlevel` is set to `0` if the string was found, non-zero otherwise. Upvotes: 3 [selected_answer]<issue_comment>username_2: This is faster: ``` @setlocal ENABLEEXTENSIONS @set prompt=$G @for /F "tokens=*" %%A in (out.txt) do @call :DoIt "%%A" @exit /b 0 :DoIt @set _line=%~1 @set _linePrefix=%_line:~0,2% @if "%_linePrefix%" equ "No" (@echo yes >> 1.txt) else (@echo no >> 2.txt) @exit /b 0 ``` username_1's solution and this one both require two processes, but this one involves an executable image that is already loaded into memory (cmd.exe). Upvotes: 1
2018/03/22
<issue_start>username_0: How to query rows after GROUP BY with multiple columns. Below is the table: ``` ord_num | loc | void | pay_type ----------------------- 10 | a101 | Y | CR 10 | a101 | N | AB 10 | a101 | N | CH 11 | a102 | N | CR 11 | a102 | Y | CR 12 | a103 | Y | JK 13 | a104 | N | CR 13 | a104 | Y | JK 14 | a104 | Y | CR ``` I need rows with ord_num, loc where pay_type contains only 'CR'. I am expecting the query result below: ``` ord_num | loc ------------------ 11 | a102 14 | a104 ```<issue_comment>username_1: ``` select distinct ord_num, loc from yourTable where pay_type = 'CR' ``` Upvotes: 0 <issue_comment>username_2: ``` select ord_num, loc from table group by ord_num, loc having max(pay_type) = min(pay_type) and max(pay_type) = 'CR' ``` Upvotes: 3 [selected_answer]
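The accepted answer can be checked against the sample data with sqlite3 from the Python standard library (the table name `orders` is made up for this sketch; the question does not name the table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (ord_num INTEGER, loc TEXT, void TEXT, pay_type TEXT);
    INSERT INTO orders VALUES
        (10,'a101','Y','CR'),(10,'a101','N','AB'),(10,'a101','N','CH'),
        (11,'a102','N','CR'),(11,'a102','Y','CR'),
        (12,'a103','Y','JK'),
        (13,'a104','N','CR'),(13,'a104','Y','JK'),
        (14,'a104','Y','CR');
""")

# The accepted answer: a group qualifies only when every pay_type in it
# is 'CR', i.e. min and max of the group both equal 'CR'.
rows = con.execute("""
    SELECT ord_num, loc FROM orders
    GROUP BY ord_num, loc
    HAVING MAX(pay_type) = MIN(pay_type) AND MAX(pay_type) = 'CR'
    ORDER BY ord_num
""").fetchall()
print(rows)  # [(11, 'a102'), (14, 'a104')]
```

Note that the first answer (`WHERE pay_type = 'CR'`) would also return orders 10 and 13, which merely contain a 'CR' row among others; the HAVING form is what matches the expected result.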
2018/03/22
<issue_start>username_0: I have extended the C++ 11 `std::array`, it is working fine, but when I try to overload the `operator[]`, I got this error: ``` error: lvalue required as left operand of assignment array[0] = 911; ^~~ ``` Is it possible to implement the `operator[]` adding bound checking for the `std::array` type? This is the code: ``` #include <iostream> #include <array> #include <cassert> template< unsigned int array_size, typename array_datatype = long int > struct Array : public std::array< array_datatype, array_size > { Array() { } // std::array constructor inheritance // https://stackoverflow.com/questions/24280521/stdarray-constructor-inheritance Array(std::initializer_list< array_datatype > new_values) { unsigned int data_size = new_values.size(); unsigned int column_index = 0; // std::cout << data_size << std::endl; if( data_size == 1 ) { this->clear(*(new_values.begin())); } else { assert(data_size == array_size); for( auto column : new_values ) { (*this)[column_index] = column; column_index++; } } } array_datatype operator[](unsigned int line) { assert(line < array_size); assert(line > -1); return (*this)[line]; } /** * Prints a more beauty version of the array when called on `std::cout<< array << std::end;` */ friend std::ostream& operator<<( std::ostream &output, const Array &array ) { unsigned int column; output << "{"; for( column=0; column < array_size; column++ ) { output << array[column]; if( column != array_size-1 ) { output << ", "; } } output << "}"; return output; } } ``` --- Related: 1. [Is it possible to enable array bounds checking in g++?](https://stackoverflow.com/questions/4778552/is-it-possible-to-enable-array-bounds-checking-in-g) 2. 
[Accessing an array out of bounds gives no error, why?](https://stackoverflow.com/questions/1239938/accessing-an-array-out-of-bounds-gives-no-error-why)<issue_comment>username_1: You can use: ``` array_datatype& operator[](unsigned int line)& array_datatype const& operator[](unsigned int line)const& array_datatype operator[](unsigned int line)&& ``` Upvotes: 2 <issue_comment>username_2: If you want to use the return value of `operator[]` on the left side of an assignment, you have to return the array element by reference, not by value. You also have a recursive loop, as you are calling your own `operator[]` from inside of itself. You want to call the base class's `operator[]` instead, so you need to qualify it. Try this: ``` array_datatype& operator[](unsigned int line) { assert(line < array_size); assert(line > -1); return std::array< array_datatype, array_size >::operator[](line); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: If you want bound checked access to array elements, just use the `at` method of `std::vector` (or `std::array`) instead of the `[]` operator. It is there for that purpose, don't reinvent the wheel :). See [reference](http://www.cplusplus.com/reference/vector/vector/at/) for documentation of array bounds checking. Upvotes: 2 <issue_comment>username_4: This was my solution for single dimensional arrays: ``` #include <iostream> #include <array> #include <cassert> template< unsigned int array_width, typename array_datatype = long int > struct Array { /** * Is it okay to inherit implementation from STL containers, rather than delegate? 
* https://stackoverflow.com/questions/2034916/is-it-okay-to-inherit-implementation-from-stl-containers-rather-than-delegate */ std::array< array_datatype, array_width > _data; /** * std::array constructor inheritance * https://stackoverflow.com/questions/24280521/stdarray-constructor-inheritance */ Array() { } Array(std::initializer_list< array_datatype > new_values) { unsigned int data_size = new_values.size(); unsigned int column_index = 0; // std::cout << data_size << std::endl; if( data_size == 1 ) { this->clear(*(new_values.begin())); } else { assert(data_size == array_width); for( auto column : new_values ) { this->_data[column_index] = column; column_index++; } } } /** * Overloads the `[]` array access operator, allowing you to access this class objects as the * where usual `C` arrays. * * How to implement bound checking for std::array? * https://stackoverflow.com/questions/49419089/how-to-implement-bound-checking-for-stdarray * * @param line the current line you want to access * @return a pointer to the current line */ array_datatype operator[](unsigned int line)&& { assert(line < array_width); assert(line >= 0); return this->_data[line]; } array_datatype const& operator[](unsigned int line)const& { assert(line < array_width); assert(line >= 0); return this->_data[line]; } array_datatype& operator[](unsigned int line)& { assert(line < array_width); assert(line >= 0); return this->_data[line]; } void clear(array_datatype initial = 0) { unsigned int column_index = 0; for( ; column_index < array_width; column_index++ ) { this->_data[column_index] = initial; } } /** * The Array<> type includes the Matrix<> type, because you can multiply an `Array` by a `Matrix`, * but not vice-versa. 
*/ void multiply(Array< array_width, Array< array_width, array_datatype > > &matrix) { unsigned int column; unsigned int step; array_datatype old_array[array_width]; for(column = 0; column < array_width; column++) { old_array [column] = this->_data[column]; this->_data[column] = 0; } for(column = 0; column < array_width; column++) { for(step = 0; step < array_width; step++) { this->_data[column] += old_array[step] * matrix._data[step][column]; } } // If you would like to preserve the original value, it can be returned here // return old_array; } /** * Prints a more beauty version of the array when called on `std::cout<< array << std::end;` */ friend std::ostream& operator<<( std::ostream &output, const Array &array ) { unsigned int column; output << "{"; for( column=0; column < array_width; column++ ) { output << array._data[column]; if( column != array_width-1 ) { output << ", "; } } output << "}"; return output; } }; ``` --- And this is an extension for a matrix (multi-dimensional): ``` #include <iostream> #include <cassert> #include "array.h" /** * C++ Matrix Class * https://stackoverflow.com/questions/2076624/c-matrix-class * * A proper way to create a matrix in c++ * https://stackoverflow.com/questions/618511/a-proper-way-to-create-a-matrix-in-c * * error: incompatible types in assignment of 'long int (*)[4]' to 'long int [4][4]' * https://stackoverflow.com/questions/49312484/error-incompatible-types-in-assignment-of-long-int-4-to-long-int */ template< unsigned int matrix_height, unsigned int matrix_width, typename matrix_datatype = long int > struct Matrix : public Array< matrix_height, Array< matrix_width, matrix_datatype > > { Matrix() { } Matrix(matrix_datatype initial) { this->clear(initial); } Matrix(std::initializer_list< std::initializer_list< matrix_datatype > > raw_data) { // std::cout << raw_data.size() << std::endl; assert(raw_data.size() == matrix_height); // std::cout << raw_data.begin()->size() << std::endl; assert(raw_data.begin()->size() == matrix_width); unsigned int line_index = 0; unsigned int 
column_index; for( auto line : raw_data ) { column_index = 0; for( auto column : line ) { this->_data[line_index][column_index] = column; column_index++; } line_index++; } } void clear(matrix_datatype initial=0) { unsigned int line; unsigned int column; for( line=0; line < matrix_height; line++ ) { for( column=0; column < matrix_width; column++ ) { this->_data[line][column] = initial; } } } void multiply(Matrix &matrix) { unsigned int line; unsigned int column; unsigned int step; matrix_datatype old_matrix[matrix_height][matrix_width]; for(line = 0; line < matrix_height; line++) { for(column = 0; column < matrix_width; column++) { old_matrix [line][column] = this->_data[line][column]; this->_data[line][column] = 0; } } for(line = 0; line < matrix_height; line++) { for(column = 0; column < matrix_width; column++) { for(step = 0; step < matrix_width; step++) { this->_data[line][column] += old_matrix[line][step] * matrix._data[step][column]; } // std::cout << "this->_data[line][column] = " << this->_data[line][column] << std::endl; } } // If you would like to preserve the original value, it can be returned here // return old_matrix; } /** * Prints a more beauty version of the matrix when called on `std::cout<< matrix << std::end;` */ friend std::ostream& operator<<( std::ostream &output, const Matrix &matrix ) { unsigned int line; unsigned int column; output << "{"; for( line=0; line < matrix_height; line++ ) { output << "{"; for( column=0; column < matrix_width; column++ ) { output << matrix._data[line][column]; if( column != matrix_width-1 ) { output << ", "; } } if( line != matrix_height-1 ) { output << "}, "; } else { output << "}"; } } output << "}"; return output; } }; ``` --- This is a simple test application for it: ``` #include "array.h" #include "matrix.h" void array_tests(); void matrix_tests(); /** * To build it use: * g++ -std=c++11 test.cpp -o main */ int main (int argc, char* argv[]) { array_tests(); std::cout << 
std::endl; matrix_tests(); } // struct Matrixx : public Array< 3, Array< 3, int > > // { // }; void array_tests() { std::cout << "Array tests" << std::endl; Array<3, long int> array; array[1] = 99911; std::cout << array << std::endl; std::cout << array[1] << std::endl; std::cout << array[2] << std::endl; Array<3> array2 = {0,0,0}; std::cout << "array2: " << array2 << std::endl; Array<3> array3 = {3}; std::cout << "array3: " << array3 << std::endl; } void matrix_tests() { std::cout << "Matrix tests" << std::endl; Matrix<3, 3, long int> matrix; std::cout << matrix << std::endl; matrix[0][0] = 911; std::cout << matrix << std::endl; std::cout << matrix[0] << std::endl; std::cout << matrix[0][0] << std::endl; Matrix<3, 3> matrix2{ {0,0,0}, {0,0,0}, {0,0,0} }; std::cout << matrix2 << std::endl; Matrix<3, 3> matrix3 = { 3 }; std::cout << matrix3 << std::endl; Matrix<3, 1, long int> matrix4 = { 4 }; std::cout << matrix4 << std::endl; } ``` Running it, you would see this: ``` Array tests {0, 99911, 0} 99911 0 array2: {0, 0, 0} array3: {3, 3, 3} Matrix tests {{39593264, 0, 1875895727}, {0, 39593264, 0}, {1875566066, 0, -927864272}} {{911, 0, 1875895727}, {0, 39593264, 0}, {1875566066, 0, -927864272}} {911, 0, 1875895727} 911 {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}} {{3, 3, 3}, {3, 3, 3}, {3, 3, 3}} {{4, 4, 4}} ``` Upvotes: 0
2018/03/22
<issue_start>username_0: I want to disable all outgoing connections that are initiated by docker containers to the outside world. I can do this on Linux by adding a rule to the FORWARD chain. How do I do this in Docker for Mac? I found out that Docker for Mac uses an xhyve vm and that's where the docker0 interface lives. What interface in the host does this connect to? I used nettop on Mac and I see that Docker uses my en0 wireless interface. But, I'm not sure if Docker and xhyve are using the same interface. Edit: Added docker-for-windows tag because they might have similar solutions (Hoping) Edit 2: Docker for Mac has changed so the accepted solution changed a bit<issue_comment>username_1: Try Mac's `pfctl` command, it's kind of equivalent to iptables. Here's the man page: <https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man8/pfctl.8.html> Upvotes: 1 <issue_comment>username_2: ### Docker ``` $ docker run --net=host --privileged -ti alpine sh # apk update && apk add iptables # iptables -vnL ``` This and the rules could be turned into a `Dockerfile` and run with a `--restart` option. I think `on-failure` might work to reapply the rules when Docker for Mac starts up. 
### Virtual Machine To get to the linux VM: ``` mac$ brew install screen mac$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty ``` Since the move to [linuxkit](https://github.com/linuxkit/linuxkit), this is not your average linux host, everything's a container: ``` linuxkit:~# ctr -n services.linuxkit tasks ls TASK PID STATUS acpid 925 RUNNING diagnose 967 RUNNING host-timesync-daemon 1116 RUNNING ntpd 1248 RUNNING vpnkit-forwarder 1350 RUNNING docker-ce 1011 RUNNING kubelet 1198 RUNNING trim-after-delete 1303 RUNNING vsudd 1398 RUNNING ``` Use `runc` to move into the `docker-ce` (or `docker`) namespace ``` linuxkit:~# runc --root /run/containerd/runc/default exec -t docker-ce /bin/sh docker-ce # iptables -vnL ``` Note that rules will disappear after a restart of Docker for Mac. I haven't found the secret sauce for persisting system changes yet. Use `ctrl`-`a` then `d` to exit the screen session otherwise you will bork the terminal. ### OSX For the easy but € option, use [Little Snitch](https://www.obdev.at/products/littlesnitch/index.html) and block outbound connections on OSX from `com.docker.supervisor via vpnkit`. Upvotes: 3 [selected_answer]
2018/03/22
<issue_start>username_0: I am trying to find the pid of an Oracle process by using the below command: ``` ps -ef | grep pmon | grep orcl | grep -v grep ``` When trying to use Python: ``` oracle_pid = os.system("echo `ps -ef | grep pmon | grep %s | grep -v grep | awk '{print $2}'`" % (oracle_sid)) print(oracle_pid) ``` it is printing 0 as the value. Any suggestions on how to get just the pid as output? Regards
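A note on the question itself, as a hedged aside: `os.system` returns the command's exit status (0 on success), not its stdout, which is why `0` is printed. Capturing output needs something like `subprocess.check_output`. The sketch below uses a plain `echo` so it is runnable anywhere; for the real case, the same `ps | grep pmon | ... | awk` pipeline string would be passed instead (illustrative, not from the thread):

```python
import os
import subprocess

# os.system returns the exit status, not stdout:
status = os.system("echo 12345 > /dev/null")
print(status)  # 0 on success

# subprocess.check_output captures stdout instead. For the original
# question one would substitute the ps/grep/awk pipeline for "echo 12345".
out = subprocess.check_output("echo 12345", shell=True, text=True).strip()
print(out)  # 12345
```

With the pipeline substituted in, `out` would hold the pid string rather than the exit status.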
2018/03/22
<issue_start>username_0: I am trying to add two columns and create a new one. This new column should become the first column in the dataframe or the output csv file. ``` column_1 column_2 84 test 65 test ``` Output should be ``` column column_1 column_2 trial_84_test 84 test trial_65_test 65 test ``` I tried below given methods but they did not work: ``` sum = str(data['column_1']) + data['column_2'] data['column']=data.apply(lambda x:'%s_%s_%s' % ('trial' + data['column_1'] + data['column_2']),axis=1) ``` Help is surely appreciated.<issue_comment>username_1: Do not use `lambda` for this, as it is just a thinly veiled loop. Here is a vectorised solution. Care needs to be taken to convert non-string values to `str` type. ``` df['column'] = 'trial_' + df['column_1'].astype(str) + '_' + df['column_2'] df = df.reindex_axis(sorted(df.columns), axis=1) # sort columns alphabetically ``` Result: ``` column column_1 column_2 0 trial_84_test 84 test 1 trial_65_test 65 test ``` Upvotes: 2 <issue_comment>username_2: **Create sample data**: ``` df = pd.DataFrame({'column_1': [84, 65], 'column_2': ['test', 'test']}) ``` **Method 1**: Use [assign](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html) to create new column, and then reorder. ``` >>> df.assign(column=['trial_{}_{}'.format(*cols) for cols in df.values])[['column'] + df.columns.tolist()] column column_1 column_2 0 trial_84_test 84 test 1 trial_65_test 65 test ``` **Method 2**: Create a new series and then [concatenate](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html). ``` s = pd.Series(['trial_{}_{}'.format(*cols) for cols in df.values], index=df.index, name='column') >>> pd.concat([s, df], axis=1) column column_1 column_2 0 trial_84_test 84 test 1 trial_65_test 65 test ``` **Method 3**: [Insert](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.insert.html) the new values at the first index of the dataframe (i.e. column 0). 
``` df.insert(0, 'column', ['trial_{}_{}'.format(*cols) for cols in df.values]) >>> df column column_1 column_2 0 trial_84_test 84 test 1 trial_65_test 65 test ``` **Method 3 (alternative way to create values for new column)**: ``` df.insert(0, 'column', df.astype(str).apply(lambda row: 'trial_' + '_'.join(row), axis=1)) ``` By the way, [`sum`](https://docs.python.org/2/library/functions.html#sum) is a built-in function, so you do not want to use it as a variable name. Upvotes: 2 <issue_comment>username_3: You can use `insert` ``` df.insert(0,column='Columns',value='trial_' + df['column_1'].astype(str)+ '_'+df['column_2'].astype(str) ) df Out[658]: Columns column_1 column_2 0 trial_84_test 84 test 1 trial_65_test 65 test ``` Upvotes: 0
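The string-building step shared by all of the answers can be seen in isolation, without pandas (plain Python over the question's sample values, for illustration only):

```python
# Core of every method above: build 'trial_<column_1>_<column_2>' per row.
rows = [(84, 'test'), (65, 'test')]
column = ['trial_{}_{}'.format(a, b) for a, b in rows]
print(column)  # ['trial_84_test', 'trial_65_test']
```

The pandas variants differ only in how they vectorize this per-row formatting and in how the resulting column is placed first (`insert(0, ...)` versus reordering the columns afterwards).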
2018/03/22
<issue_start>username_0: I have a byte array representing a gzipped json array. I send this array in chunks (up to 20 bytes) over a Bluetooth connection to another device. To indicate the start of a new transmission, I send some "reset bytes" (a sequence indicating that we have a new transmission) to the other device. However, for this approach to work, I would need to make sure that the reset bytes are unique in the sense that the gzipped and chunked json array would not contain the same sequence. The following snippet shows a shortened version of the code I use: ``` //the sequence I choose as reset sequence var nullByteSeq = new byte[3] { 0x00, 0x00, 0x00 }; //send the reset sequence to device var reseted = await characteristic.Write(nullByteSeq); //messageJson is gzipped json format of a message var messageJson = BluetoothHelper.GetMessageJson(message); //header: first 10 bytes: message name - byte 10 to 19: message size var header = BluetoothHelper.GetMessageHeader(message); var written = await characteristic.Write(header); while (bytesSentCounter < messageJson.Count()) { var toSend = messageJson.Skip(bytesSentCounter).Take(mtu).ToArray(); var sent = await characteristic.Write(toSend); bytesSentCounter += toSend.Count(); } ``` As you can see I use three 0x00 bytes as "reset bytes". However, in some cases the messageJson is in such a format that the last chunk sent has exactly three 0x00 bytes. Therefore my question: Is there any kind of byte sequence that will never occur at the end of a gzip byte sequence? And if not, how can I achieve what I want?<issue_comment>username_1: Thanks to @Name McChange I was able to solve this problem. As pointed out by him, I used base64 encoding to remove 0x00 bytes from the gzip byte array. However, instead of base64 encoding the whole byte array, I just encoded the last three bytes of that array. 
This is sufficient, since the confusion described in the question can only occur when the last three bytes of the gzip array are 0x00 and the array is chunked in such a way that the last write transmission contains only three bytes. I post this answer in the hope that it might help someone. Upvotes: 0 <issue_comment>username_2: To answer the original question, no. The end of a gzip stream is the number of uncompressed bytes represented in the last gzip member, modulo 2^32, in little-endian order. So for long enough inputs, the last four bytes can be anything. The four bytes before that are the CRC-32 of the uncompressed data, so the last eight bytes can be anything. Before that is compressed data, and that can be anything as well for a few thousand bytes. Upvotes: 1
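The base64 approach from the first answer can be checked directly: the base64 alphabet contains only printable ASCII, so an encoded payload can never contain the `0x00 0x00 0x00` reset marker. A minimal sketch (the JSON payload here is made up for illustration):

```python
import base64
import gzip
import json

# A made-up JSON array standing in for the real message payload
payload = json.dumps([{"id": i, "value": i * 2} for i in range(100)]).encode()
raw = gzip.compress(payload)

# The gzip trailer (CRC-32 plus uncompressed length, little-endian)
# can contain any byte values, so raw chunks may collide with a
# 0x00-based reset marker.
encoded = base64.b64encode(raw)

# Base64 output is limited to A-Z, a-z, 0-9, '+', '/' and '=',
# so a NUL byte can never appear in the encoded message.
assert b"\x00" not in encoded

# The receiver recovers the original bytes with a single decode
assert gzip.decompress(base64.b64decode(encoded)) == payload
```

The trade-off is the usual base64 size overhead of roughly 33% on each message.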
2018/03/22
483
1,741
<issue_start>username_0: ``` exports.admin= function(req, res, next){ if(!req.user) { var err = new Error ('No Valid User'); err.status = 403; return next(err); } else if(!req.user.admin) { var err = new Error ('You must be an administrator!'); err.status = 403; return next(err); } else { console.log('IN'); // <---- I hit this return next(); } } ``` So I have this function and it is called here: ``` routerA.route('/').post( (req,res,next)=> { authenticate.admin(req, res, next) .then(()=>{ // <-----here on this line I receive this exception console.log('AAA') Dishes.create(req.body) .then((dish)=>{ console.log('Dish Created: ', dish); res.statusCode = 200; res.setHeader('Content-Type','application/json'); res.json(dish); }, (err)=>next(err)) .catch((err)=>next(err)); }); }) ``` I don't get it: I return the promise from `admin`, so why do I receive this exception: "Cannot read property 'then' of undefined"?<issue_comment>username_1: But you are returning `next()`, which doesn't return a promise. It returns `undefined`. Upvotes: 0 <issue_comment>username_2: `next()` is not a promise in [express](https://expressjs.com/en/4x/api.html#express). It's a callback to tell express to move on to the next middleware, whatever that may be. Call `authenticate.admin` as [middleware](https://expressjs.com/en/guide/using-middleware.html). Then the call to `next()` will allow express to move on to your request handler. ``` routerA.route('/').post(authenticate.admin, (req,res,next)=> { ... }) ``` Upvotes: 2 [selected_answer]
2018/03/22
258
885
<issue_start>username_0: I have a `TreeMap` whose values are of type `Student`, where `Student` is itself a `TreeMap` and `Person` is an object. When I call `put(String, students)` on it, I am getting the error: ```none no suitable method found for put(String, TreeMap) java ```
2018/03/22
433
1,243
<issue_start>username_0: How can I set the maximum number of rows of a pandas dataframe displayed in the PyCharm console? For example, I just want to see the first ten rows of a dataframe, but the PyCharm console displays most of the rows of this dataframe: [![enter image description here](https://i.stack.imgur.com/1gxgM.jpg)](https://i.stack.imgur.com/1gxgM.jpg) Is there a command or setting for this in PyCharm?<issue_comment>username_1: To see just ten rows: ``` import pandas as pd pd.set_option('display.max_rows',10) ``` This makes pandas always show at most 10 rows, no matter how large the dataframe is. If you only want this from time to time, use ``` df.head(10) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Use indexing to return a portion of a pandas DataFrame ------------------------------------------------------ Some data: ``` date_time = ["2011-09-01", "2011-08-01", "2011-07-01", "2011-06-01", "2011-05-01"] date_time = pd.to_datetime(date_time) temp = [2, 4, 6, 4, 6] DF = pd.DataFrame() DF['temp'] = temp DF = DF.set_index(date_time) DF[:2] # rows to view ``` This returns the first two rows of DF: ``` Out[22]: temp 2011-09-01 2 2011-08-01 4 ``` Upvotes: 0
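If the row limit should only apply temporarily (for one inspection in the console, say), `pandas.option_context` scopes the setting and restores the previous value on exit; a small sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": range(100)})

before = pd.get_option("display.max_rows")

# Only inside this block is printed output limited to 10 rows
with pd.option_context("display.max_rows", 10):
    print(df)  # truncated view: head and tail rows separated by "..."

# The global setting is untouched afterwards
assert pd.get_option("display.max_rows") == before
```

This avoids leaving a global `set_option` in effect for the rest of the session.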
2018/03/22
1,029
2,766
<issue_start>username_0: I am curious why namedtuple is slower than a regular class in Python. Consider the following: ``` In [1]: from collections import namedtuple In [2]: Stock = namedtuple('Stock', 'name price shares') In [3]: s = Stock('AAPL', 750.34, 90) In [4]: %%timeit ...: value = s.price * s.shares ...: 175 ns ± 1.17 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) In [5]: class Stock2: ...: __slots__ = ('name', 'price', 'shares') ...: def __init__(self, name, price, shares): ...: self.name = name ...: self.price = price ...: self.shares = shares In [6]: s2 = Stock2('AAPL', 750.34, 90) In [8]: %%timeit ...: value = s2.price * s2.shares ...: 106 ns ± 0.832 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) In [9]: class Stock3: ...: def __init__(self, name, price, shares): ...: self.name = name ...: self.price = price ...: self.shares = shares In [10]: s3 = Stock3('AAPL', 750.34, 90) In [11]: %%timeit ...: value = s3.price * s3.shares ...: 118 ns ± 3.54 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) In [12]: t = ('AAPL', 750.34, 90) In [13]: %%timeit ...: values = t[1] * t[2] ...: 93.8 ns ± 1.13 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) In [14]: d = dict(name='AAPL', price=750.34, shares=90) In [15]: %%timeit ...: value = d['price'] * d['shares'] ...: 92.5 ns ± 0.37 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) ``` I expected namedtuple to come before a class without slots. This is on Python 3.6. It's also pretty amazing that the dictionary's performance is comparable to a tuple's.
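Part of the gap can be seen without profiling: a namedtuple field is a class-level descriptor that forwards to tuple indexing, so `s.price` resolves a descriptor and then performs the subscript, while a `__slots__` attribute is a single direct slot lookup. A small sketch of that equivalence (an illustration, not a benchmark):

```python
from collections import namedtuple

Stock = namedtuple("Stock", "name price shares")
s = Stock("AAPL", 750.34, 90)

# Attribute access and tuple indexing return the same values; the
# attribute path just goes through one extra descriptor hop on the class.
assert s.price == s[1]
assert s.shares == s[2]
assert s.price * s.shares == s[1] * s[2]

# The instance really is a tuple underneath
assert isinstance(s, tuple)
```

This is why plain tuple subscripts (`t[1] * t[2]`) come out fastest in the timings above.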
2018/03/22
663
1,818
<issue_start>username_0: I have a script that is (supposed to be) assigning a dynamic variable name (s1, s2, s3, ...) to a directory path: ``` savedir() { declare -i n=1 sn=s$n while test "${!sn}" != ""; do n=$n+1 sn=s$n done declare $sn=$PWD echo "SAVED ($sn): ${!sn}" } ``` The idea is that the user is in a directory they'd like to recall later on and can save it to a shell variable by typing 'savedir'. It -does- in fact write out the echo statement successfully: if I'm in the directory /home/mrjones and type 'savedir', the script returns: SAVED (s1): /home/mrjones ...and I can further type `echo $sn` and the script returns: s1 ...but typing either `echo $s1` or `echo ${!sn}` returns nothing (an empty string). What I want, in case it's not obvious, is for `echo $s1` to print `/home/mrjones`. Any help is greatly appreciated! [apologies for the formatting...]
2018/03/22
570
1,649
<issue_start>username_0: Supposing I have three columns, make, year, and msrp, I want to only show rows where the number of make/year combinations is over, for example, 10. This is because some makes only have data for one year and I don't want random car makes in my data. I'm able to get the number of year/make combinations for each make, but I don't know how to include other columns. I have the query: **SELECT make, count(DISTINCT year)** **FROM cars** **GROUP BY make** which gives something like this: ``` make | count -------------- Honda | 13 Ford | 17 Bugatti | 3 ... ``` but I want something like: ``` make | count | year | msrp ---------------------- Honda | 13 | 2001 | 100 Honda | 13 | 2002 | 200 Honda | 13 | 2003 | 300 Ford | 17 | 2001 | 100 Ford | 17 | 2002 | 200 Ford | 17 | 2003 | 300 Bugatti | 1 | 2014 | 1000 ``` and only show the rows that have a count > a number (probably 10) *the data examples are made up*<issue_comment>username_1: ``` SELECT make, count(DISTINCT year) OVER (PARTITION BY make) "Count", year, msrp FROM cars GROUP BY make, year, msrp HAVING count(make) > 10 and count(year) > 10 ``` Upvotes: 0 <issue_comment>username_2: I suspect that you don't really need `count(distinct)`. So, to get the information: ``` select make, year, msrp, count(*) over (partition by make) as num_make_years from cars; ``` If you want to filter, then use a subquery: ``` select my.* from (select make, year, msrp, count(*) over (partition by make, year) as num_make_years from cars ) my where num_make_years > 10; ``` Upvotes: 1
2018/03/22
407
1,358
<issue_start>username_0: I have a Microsoft Access Database file. I wanted to delete records older than 5 years in it. I made a backup before starting to modify the file. I was able to run a query and then run the command below and append it or update it to the database file. ``` DELETE FROM Inspections Report WHERE Date <= #01/01/2013# ``` I used the example: [Delete by Date In Access](https://stackoverflow.com/questions/30877787/delete-by-date-in-access) The records still seem to be in there. **My desired Output:** An analogy to what I am trying to do would be the bottom left corner of a Microsoft Word file where you see page 1 of 10 when it should say page 1 of 5 after deleting pages.
2018/03/22
1,015
2,951
<issue_start>username_0: I'm super new at this and starting in the middle while working forwards and backwards simultaneously, so forgive me if this is a super basic question. I have this code: ``` mh<-read.csv("mkhz_meta.csv") msub<-mh[complete.cases(mh[ , 29]),] msub2 <- subset(msub, Target.Fungal.Phylum!= "a") png("J_Weighted_2_FungalPhylum.png", width= 800, height=600) ggplot(aes(x = Target.Fungal.Phylum, fill= Concl..Weighted.nestedness), data = msub2) + geom_bar(position = "dodge")+ xlab("Target Fungal Phylum")+ ylab("Count")+ ggtitle("Weighted vs. Primer Specificity")+ theme_bw()+ scale_fill_grey()+ theme(axis.text.x =element_text(hjust = 0.5, size =8, angle= 0), axis.text.y =element_text(size= 10), legend.title=element_blank(), legend.text=element_text(size=10), axis.title.x =element_text(size=10), axis.title.y =element_text(size=10))+ theme(panel.grid.minor = element_blank(), panel.grid.major=element_blank(), strip.background = element_blank(), strip.text.x=element_text(size=14), panel.border = element_rect(colour = "black")) dev.off() ``` It returns this graph: ![Binary vs. Sequencing](https://i.stack.imgur.com/pKbZs.png) There is one count of non-nested morphotyping, and no count of nested morphotyping - I can't figure out how to have the 'morphotyping' bar be split to reflect this. Any help is greatly appreciated!<issue_comment>username_1: Try adding: `+ scale_fill_discrete(drop=FALSE) + scale_x_discrete(drop=FALSE)` to your plot command to preserve the empty plot bins. Upvotes: 0 <issue_comment>username_2: You can use `tidyr::complete` to add missing counts to your data. 
Sample data: ``` mydata <- structure(list(platform = c("454", "454", "454", "454", "454", "454", "454", "454", "454", "454", "Morphotyping", "Sanger", "Sanger", "Sanger", "Sanger", "Sanger", "Sanger"), is_nested = c("nested", "not nested", "nested", "nested", "nested", "nested", "nested", "not nested", "nested", "not nested", "not nested", "nested", "nested", "nested", "not nested", "nested", "not nested")), .Names = c("platform", "is_nested"), row.names = c(NA, -17L), class = c("tbl_df", "tbl", "data.frame"), spec = structure(list(cols = structure(list(platform = structure(list(), class = c("collector_character", "collector")), is_nested = structure(list(), class = c("collector_character", "collector"))), .Names = c("platform", "is_nested")), default = structure(list(), class = c("collector_guess", "collector"))), .Names = c("cols", "default"), class = "col_spec")) ``` Code to count and plot: ``` library(tidyverse) mydata %>% count(platform, is_nested) %>% complete(platform, is_nested) %>% ggplot(aes(platform, n)) + geom_col(aes(fill = is_nested), position = position_dodge()) ``` [![enter image description here](https://i.stack.imgur.com/9pmg9.png)](https://i.stack.imgur.com/9pmg9.png) Upvotes: 2 [selected_answer]
2018/03/22
2,254
8,355
<issue_start>username_0: I am trying to have this code determine which element has the closest value to a constant. In this code the variable `boxes = 5`, and any element that has `boxCapacity >= boxes` is added to an ArrayList. From that list, the one with the closest `boxCapacity` to `boxes` should be used. I am able to select those greater than `boxes`, but unable to pick the one with the closest `boxCapacity`. ``` public void deliver(double miles, int boxes) { for (int i = 0; i < cars.size(); i++){ if (cars.get(i).getBoxCapacity() >= boxes){ deliveryCars = new ArrayList(); deliveryCars.add(cars.get(i)); smallest = deliveryCars.get(0).getBoxCapacity(); for(j = 0; j < deliveryCars.size(); j++){ if (deliveryCars.get(j).getBoxCapacity() < smallest) { smallest = deliveryCars.get(j).getBoxCapacity(); k++; } } } } System.out.println("Delivering with " + deliveryCars.get(k).getPlate()); } ``` I tried to make a new list, but it has not been working out.<issue_comment>username_1: So I think what you're saying is that you have a list of values, say `[0, 1, 2, 3, 4, 5, 6]`, and then you are given another number, say `4`, and what you want to do is to select the number from the list that is the smallest of all the numbers greater than `4`, so in this case you'd want to choose `5`, right? Well, there are a ton of ways to do that. But the fastest way to do it is to go through the list one time and keep track of the smallest number greater than your 'targetNumber': ``` ... public Integer getNextBiggest(List<Integer> numbers, int targetNumber) { // Set the default 'nextInt' to the largest possible value. int nextInt = Integer.MAX_VALUE; for (int i = 0; i < numbers.size(); i++) { // If the current number is greater than our targetNumber. if (numbers.get(i) > targetNumber) { // Set the nextInt variable to the MINIMUM of the current number // and the current nextInt value. nextInt = Math.min(nextInt, numbers.get(i)); } } return nextInt; } ...
``` So that would work, but your list is a bit more complicated since you're using objects and not integers; that said, it's a simple conversion: First some assumptions: 1. `cars` is a `List<Car>`. 2. The `Car` object has a `getBoxCapacity()` method that returns an `int`. 3. The `Car` object has a `getPlate()` method that returns a `String`. Ok so it might look like this: ``` ... public Car getBestFitCar(List<Car> cars, int targetBoxCapacity) { // Default best fit is null. No best fit yet. Car bestFit = null; for (int i = 0; i < cars.size(); i++) { Car current = cars.get(i); // If the current Car box capacity is greater than our target box capacity. if (current.getBoxCapacity() > targetBoxCapacity) { // Set the bestFit variable to the Car that has the MINIMUM // 'box capacity' between the current Car and the bestFit Car. if (bestFit == null || bestFit.getBoxCapacity() > current.getBoxCapacity()) { bestFit = current; } } } return bestFit; } public static void main(String[] args) { List<Car> cars = new ArrayList<>(); // add some cars here... int targetBoxCapacity = 5; Car bestFit = getBestFitCar(cars, targetBoxCapacity); if (bestFit != null) { System.out.println("Best fit car is: " + bestFit.getPlate()); } else { System.out.println("No car can fit " + targetBoxCapacity + " boxes."); } } ``` **Update:** I've seen some nice responses using streams, but I'd like to add some caution. Streams make writing the code faster/more readable but would end up being less efficient in time/space than a solution with simple loops. This solution only uses O(1) extra space, and O(n) time in the worst case. I'd figure the stream answers would use O(n) extra space, and O(n * n log n) time in the worst case. So, if you have a tiny list I'd say go with the simple really cool streams solutions, but if you have a list with a lot of elements you'll be better off with a more traditional approach.
Upvotes: 0 <issue_comment>username_2: You can simplify your code to something like this ``` public void deliver(double miles, int boxes){ // check if there are cars available if (!cars.isEmpty()) { // no candidate found yet Car deliveryCar = null; int smallest = Integer.MAX_VALUE; // find the car with the smallest capacity that still fits the boxes for (Car car : cars) { if (car.getBoxCapacity() >= boxes && car.getBoxCapacity() < smallest) { smallest = car.getBoxCapacity(); deliveryCar = car; } } if (deliveryCar != null) { System.out.println("Delivering with " + deliveryCar.getPlate()); } } } ``` Upvotes: 1 <issue_comment>username_3: You do not need a new list. The task you've outlined is a sequential search through an unordered list, and unless I misunderstand your goal, you only need a single `for` loop -- that is, you only need to iterate through the list one time. Since you are looking for a single item and you don't need to look at more than one item at a time to see if it's the best one so far, you only need one variable to keep track of its location in the list. Here's a working sample. Notice the variable names describe the purpose (e.g. "minimumBoxCapacity" instead of the ambiguous "boxes"). This helps me better understand what my code is doing.
``` // print the plate number of the car with the smallest boxCapacity // greater than a specified minimum public void deliver(List<Car> cars, double miles, int minimumBoxCapacity) { if ((cars != null) && (cars.size() > 0)) { int indexOfBestMatch = -1; // negative index means no match yet for (int i = 0; i < cars.size(); i++) { if (cars.get(i).getBoxCapacity() > minimumBoxCapacity) { if (indexOfBestMatch < 0) { // this is the only match seen so far; remember it indexOfBestMatch = i; } else { // found a better match; replace the old best match if (cars.get(i).getBoxCapacity() < cars.get(indexOfBestMatch).getBoxCapacity()) { indexOfBestMatch = i; } } } } if (indexOfBestMatch >= 0) { System.out.println("Delivering with " + cars.get(indexOfBestMatch).getPlate()); } } } ``` This code illustrates how your algorithm would need to change to do what you want. username_1's answer using a Car variable to keep track of the best fit is even clearer, especially where the method returns a Car result and lets the calling logic decide what to do with that result. (Your original code didn't compile, because variables like "smallest" and "deliveryCars" weren't defined before they were used. It would be helpful in the future if you post code that compiles, even if it doesn't yet do what you want it to do.) Upvotes: 0 <issue_comment>username_4: This answer builds from the clarifications that @username_1 provided, namely that the goal is to find the car with the lowest boxCapacity that is greater than the targetCapacity. Given the list of cars, you can filter out any car with a boxCapacity smaller than the target, and then select the minimum boxCapacity from what is left.
``` List<Car> cars = List.of(new Car(8), new Car(3), new Car(5), new Car(6)); int suggestedCapacity = 4; Optional<Car> bestFit = cars.stream() .filter(car -> car.getBoxCapacity() >= suggestedCapacity) .min(Comparator.comparing(Car::getBoxCapacity)); if (bestFit.isPresent()) { System.out.println("found car with capacity " + bestFit.get().getBoxCapacity()); } else { System.out.println("No suitable car found"); } ``` The Streams API takes care of the list manipulation and keeping track of the internal state of the minimum for you. Upvotes: 0 <issue_comment>username_5: Using Java 8 streams... ``` Car deliveryVehicle = cars .stream() .filter(c -> c.getBoxCapacity() > boxes) .min(Comparator.comparingInt(Car::getBoxCapacity)) .orElse(null); ``` Assuming your `cars` was an iterable/streamable collection, this creates a stream, filters it to extract all instances where the capacity is greater than `boxes`, finds the element with the smallest capacity, and returns it, or `null` if there were no cars with more than `boxes` capacity. You can then do whatever you want with the returned `Car` object, like call `getPlate()`. Remember to check for null for the case where no acceptable car was found. Upvotes: 1
2018/03/22
480
1,698
<issue_start>username_0: I have an ArrayList `list` with the values `90, 80, 75` in it that I want to convert to an IntStream. I have tried using the function `list.stream()`, but the issue I come upon is that when I attempt to use a lambda such as: `list.stream().filter(x -> x >= 90).forEach(System.out.println(x);` It will not let me perform the operations because it is a regular stream, not an IntStream. Basically, this line is supposed to print 90 if it is in the list. What am I missing here? I can't figure out how to convert the Stream to an IntStream.<issue_comment>username_1: Use `mapToInt` as ``` list.stream() .mapToInt(Integer::intValue) .filter(x -> x >= 90) .forEach(System.out::println); ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You can convert the stream to an `IntStream` using `.mapToInt(Integer::intValue)`, but only if the stream has the type `Stream<Integer>`, which implies that your source list is a `List<Integer>`. In that case, there is no reason why ``` list.stream().filter(x -> x >= 90).forEach(System.out::println); ``` shouldn’t work. It’s even unlikely that using an `IntStream` improves the performance for this specific task. The reason why your original code doesn’t work is that `.forEach(System.out.println(x);` isn’t syntactically correct. First, there is a missing closing parenthesis. Second, `System.out.println(x)` is neither a lambda expression nor a method reference. You have to use either `.forEach(x -> System.out.println(x));` or `.forEach(System.out::println);`. In other words, the error is not connected to the fact that this is a generic `Stream` instead of an `IntStream`.
2018/03/22
769
2,418
<issue_start>username_0: I encounter a weird problem: a `data.table` function doesn't recognize a well-defined argument if the function is used in another function. Here is a simple example: I get an error when calling the first function, `testFun1`: `Error in fun(value) : could not find function "fun"` However, it is clear that there is a default value for `fun`. There is no issue using `reshape2::dcast`; see `testFun2`. ``` testFun1 <- function(data, formula, fun = sum, value.var = "value") { data.table::dcast(data = data, formula = formula, fun.aggregate = fun, value.var = "value") } testFun2 <- function(data, formula, fun = sum, value.var = "value") { reshape2::dcast(data = data, formula = formula, fun.aggregate = fun, value.var = "value") } d <- data.table(x = c("a", "b"), y = c("c", "d"), value = 1) testFun1(d, x ~ y) # Error in fun(value) : could not find function "fun" testFun2(d, x ~ y) ```<issue_comment>username_1: This seems to be a bug that had existed in a previous version of data.table, was fixed, and has popped up again. The solution was to change the parameter name in your wrapper function from `fun` to `fun.aggregate` so that it matches the name of the `data.table::dcast` parameter. Example: ``` testFun1 <- function(data, formula, fun.aggregate = sum, value.var = "value") { data.table::dcast(data = data, formula = formula, fun.aggregate = fun.aggregate, value.var = "value") } ``` Upvotes: 0 <issue_comment>username_2: This issue has already been resolved by recent improvements to `dcast` made by Arun. They will soon be available on CRAN in version 1.12.2.
```r install.packages("data.table", repos="https://Rdatatable.gitlab.io/data.table") library(data.table) testFun1 <- function(data, formula, fun = sum, value.var = "value") { data.table::dcast(data = data, formula = formula, fun.aggregate = fun, value.var = "value") } d <- data.table(x = c("a", "b"), y = c("c", "d"), value = 1) testFun1(d, x ~ y) testFun2 <- function(data, formula, fun = sum, value.var = "value") { reshape2::dcast(data = data, formula = formula, fun.aggregate = fun, value.var = "value") } d <- data.table(x = c("a", "b"), y = c("c", "d"), value = 1) all.equal(testFun1(d, x ~ y), as.data.table(testFun2(d, x ~ y)), check.attributes=FALSE) #[1] TRUE ``` Upvotes: 1
2018/03/22
371
1,511
<issue_start>username_0: I am currently creating a cross-platform desktop application using Electron. I wish to add analytics to view user metrics. When I tried to find existing packages that provide metrics info, I found [electron-ga](https://github.com/jaystack/electron-ga). The package uses GA to track user metrics, and to set it up I need to include a GA tracking id in my app. My question is, if I include the tracking id in an electron app and distribute my app, everyone can look at the tracking id and steal it, right? I would like to know if using this method is right? Thank you.<issue_comment>username_1: That is OK. Many websites and apps expose their GA id publicly; it is necessary, because without the tracking id in the GA code you cannot track users. Upvotes: 1 <issue_comment>username_2: It will be publicly available anyway (e.g. you can find the ga ID through the source code of any site that uses Google Analytics). It can get very bad if someone wants to harm you, as he can just plug your ga ID in any of his sites unless you are prepared and protected. What you can do to be sure that even if someone has your ga ID, he can't make use of it, is the following. Go to your Google Analytics profile and create a custom filter. Choose "Hostname" for the filter field and fill the filter pattern with your site (e.g. mysite.com). Don't forget to use the "\" before any ".". This way, only this specific address will be able to make use of your ga ID. Upvotes: 5 [selected_answer]
2018/03/22
424
1,647
<issue_start>username_0: In my code I attempt to use AJAX to fetch the content of a template file and return it as a string, which I alert. This results in `undefined`, however if instead of `return xmlhttp.responseText` I do `alert(xmlhttp.responseText)` I get the result I'm after. Can I return the AJAX content as a string outside the function? ```js alert(getFile()); function getFile() { var xmlhttp = new XMLHttpRequest(); xmlhttp.onreadystatechange = function stateChanged() { if (xmlhttp.readyState == 4) return xmlhttp.responseText; } xmlhttp.open("POST","/template.html", true); xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); xmlhttp.send(); } ```
2018/03/22
675
2,180
<issue_start>username_0: Say I have a list of dates `[Mar 2, Mar 6, Mar 7, Mar 26]`, all in the year 2018. The week starts on Saturday and ends on Sunday. I want the following result: ``` [Mar 2] [Mar 6, Mar 7] [Mar 26] ``` How can I do it with LINQ? Or in a functional way.<issue_comment>username_1: You can use the following on `DateTime` [Calendar.GetWeekOfYear Method (DateTime, CalendarWeekRule, DayOfWeek)](https://msdn.microsoft.com/en-us/library/system.globalization.calendar.getweekofyear(v=vs.110).aspx) > > Returns the week of the year that includes the date in the specified > DateTime value. > > > * time > > > + Type: `System.DateTime` > + A date and time value. > * rule > > > + Type: `System.Globalization.CalendarWeekRule` > + An enumeration value that defines a calendar week. > * firstDayOfWeek > > > + Type: `System.DayOfWeek` > + An enumeration value that represents the first day of the week. > > > **Given** ``` List myAwesomeList; ``` **Usage** ``` var result = myAwesomeList.GroupBy(x => CultureInfo.CurrentCulture.Calendar .GetWeekOfYear(x.date, CalendarWeekRule.FirstDay, DayOfWeek.Saturday)) .Select(grp => grp.ToList()) .ToList(); ``` **Returns** ``` List> ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I want to post this as an iterative answer since I don't want to introduce iterative bias thinking into the question, since this might influence the elegance of the answer that I get :) So please don't read this if you understand the question being asked. Here is a quick and dirty way to solve the problem in an iterative way. ``` var orders = new List<int> { 4, 5, 6, 0, 1, 2, 3 }; var nums = new List<int> {2, 5, 6, 2, 2, 4}; var queue = new Queue<int>(nums); var results = new List<List<int>>(); while (queue.Count > 0) { var subLists = new List<int>(); foreach (var order in orders) { if(order == queue.Peek()) subLists.Add(queue.Dequeue()); if (queue.Count == 0) break; } results.Add(subLists); } ```
2018/03/22
954
3,364
<issue_start>username_0: Jest allows you to specify a coverage reporter in the package.json like so ``` { ... "jest": { "coverageDirectory": "./coverage", "coverageReporters": ["json", "text", ...] } ... } ``` However, this only outputs the `.json` summary to the coverage directory. It only prints the text version to the console. Is there a way to output the text version to a `.txt` file in the coverage folder as well? I've been referring to the docs [here](https://facebook.github.io/jest/docs/en/configuration.html#coveragereporters-array-string), which say that it is compatible with any of the [Istanbul reporters](https://github.com/gotwarlost/istanbul/tree/master/lib/report). The text Istanbul reporter [appears to have support for writing to a file](https://github.com/gotwarlost/istanbul/blob/bc84c315271a5dd4d39bcefc5925cfb61a3d174a/lib/report/text.js#L224). Is there any way to utilize this?<issue_comment>username_1: @KerSplosh We ended up writing a custom script with `shelljs`. Basically it runs the tests, and writes the table to a file with `fs`. ``` const shell = require("shelljs"); const path = require("path"); const fs = require("fs"); const result = shell.exec("yarn test --coverage"); fs.writeFileSync( path.resolve(".", "coverage.txt"), result.substring(result.indexOf("|\nFile") + 2) ); if (result.code !== 0) { shell.exit(1); } ``` This isn't ideal though. Preferably this would be done through the Jest configuration. But at the time I implemented this, I don't think it was possible. Upvotes: 0 <issue_comment>username_2: Add `json` to `coverageReporters` in your jest config: `"coverageReporters": ["json"]` Then install istanbul: `npm i -D istanbul` Add this script to `package.json`: ``` "scripts": { "test": "jest --coverage && istanbul report --include coverage/coverage-final.json text > coverage.txt", ...
} ``` The script will generate the code coverage report file `coverage-final.json`, then istanbul will generate the expected output, redirected to `coverage.txt`. Upvotes: 2 <issue_comment>username_3: **If you happen to use react scripts on top of jest:** Add these snippets to their sections in package.json: ``` "scripts": { "cover:report": "react-scripts test --coverage .> coverage/coverage-report.txt", } ... "jest": { "collectCoverageFrom": [ "src/**/*.ts*" ], "coverageReporters": ["text"] } ``` This generates the coverage report in the file coverage/coverage-report.txt. The dot in the ".>" part tells the script to take all files (except for ignored ones) that match the "." pattern, which would typically be all of them. Modify the "collectCoverageFrom" array of strings to include files/folders as needed. Unfortunately this command doesn't exit by itself, so you have to press Ctrl+C to end it when it just hangs there after being done. To run it in the terminal: `"yarn cover:report"` The result contains a plaintext table of coverage results. Upvotes: 0 <issue_comment>username_4: In your jest config, you can specify the `file` option, and optionally the `dir` option, which defaults to the current working directory: ``` "jest": { "coverageReporters": ["text", { file: 'coverage.txt' }] } ``` See docs for available options [here](https://github.com/gotwarlost/istanbul/blob/master/lib/report/text.js#L33-L35). Upvotes: 2
2018/03/22
502
1,356
<issue_start>username_0: As the title states, how do I do that? ``` val a = 3.U val result = a / 2.U ``` result would be `1.U`. However, I want to apply a ceiling to the division: ``` val result = ceil(a / 2.U ) ``` so that I get `2.U` as the result.<issue_comment>username_1: The problem is the expression `a / 2.U` is indeed `1.U`: if you apply `ceil` to `1.U` you'll get `1.U`. Recall that this happens to `Int`s as well, as they use integer division: ``` scala> val result = Math.ceil(3 / 2) result: Double = 1.0 ``` What you should do is to enforce one of the division operands to be a `Double` likewise: ``` scala> val result = Math.ceil(3 / (2: Double)) result: Double = 2.0 ``` And then just convert it back to `UInt`. Upvotes: 0 <issue_comment>username_2: When dividing `a` by `b`, if you know that `a` is not too big (namely that `a <= UInt.MaxValue - (b - 1)`), then you can do ``` def ceilUIntDiv(a: UInt, b: UInt): UInt = (a + b - 1.U) / b ``` If `a` is potentially too big, then the above can overflow, and you'll need to adapt the result after the fact instead: ``` def ceilUIntDiv(a: UInt, b: UInt): UInt = { val c = a / b Mux(b * c === a, c, c + 1.U) } ``` Upvotes: 1 <issue_comment>username_3: ``` def ceilUIntDiv(a: UInt, b: UInt): UInt = { (a / b) + Mux(a % b === 0.U, 0.U, 1.U) } ``` Upvotes: 0
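The `(a + b - 1) / b` identity and its overflow-safe remainder variant are language-agnostic; here is a JavaScript sketch using BigInts, whose division truncates the way integer hardware division does (BigInts themselves cannot overflow, so the distinction only matters for fixed-width types):

```javascript
// Ceiling division on non-negative BigInts (BigInt '/' truncates).

// Variant 1: add-before-divide; requires a + b - 1 to fit the word
// width when the operands are fixed-width unsigned integers.
function ceilDivAdd(a, b) {
  return (a + b - 1n) / b;
}

// Variant 2: divide first, then correct by the remainder; never forms
// a value larger than a, so it cannot overflow a fixed width.
function ceilDivRem(a, b) {
  const q = a / b;
  return a % b === 0n ? q : q + 1n;
}
```

Both give `ceilDiv(3, 2) === 2`, matching the question's expected `2.U`.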
2018/03/22
865
2,208
<issue_start>username_0: There are two RDDs. ``` val pairRDD1 = sc.parallelize(List( ("cat",2), ("girl", 5), ("book", 4),("Tom", 12))) val pairRDD2 = sc.parallelize(List( ("cat",2), ("cup", 5), ("mouse", 4),("girl", 12))) ``` And then I will do this join operation. ``` val kk = pairRDD1.fullOuterJoin(pairRDD2).collect ``` It shows this: ``` kk: Array[(String, (Option[Int], Option[Int]))] = Array((book,(Some(4),None)), (Tom,(Some(12),None)), (girl,(Some(5),Some(12))), (mouse,(None,Some(4))), (cup,(None,Some(5))), (cat,(Some(2),Some(2)))) ``` If I would like to fill the `None` with 0 and transform `Option[Int]` to `Int`, what should I write? Thanks!<issue_comment>username_1: You can use `mapValues` as follows (note this is applied before the `collect`): ``` pairRDD1.fullOuterJoin(pairRDD2).mapValues(pair => (pair._1.getOrElse(0), pair._2.getOrElse(0))) ``` You have to do this on the `RDD` before `collect`; after collecting you could instead do: ``` kk.map { case (k, pair) => (k, (pair._1.getOrElse(0), pair._2.getOrElse(0))) } ``` Upvotes: 1 <issue_comment>username_2: Based on the comments on the first answer, if you are fine with using DataFrames, you can do this with DataFrames with any number of columns. ``` val ss = SparkSession.builder().master("local[*]").getOrCreate() val sc = ss.sparkContext import ss.implicits._ val pairRDD1 = sc.parallelize(List(("cat", 2,9999), ("girl", 5,8888), ("book", 4,9999), ("Tom", 12,6666))) val pairRDD2 = sc.parallelize(List(("cat", 2,9999), ("cup", 5,7777), ("mouse", 4,3333), ("girl", 12,1111))) val df1 = pairRDD1.toDF val df2 = pairRDD2.toDF val joined = df1.join(df2, df1.col("_1") === df2.col("_1"),"fullouter") joined.show() ``` Here `_1`, `_2`, etc. are the default column names provided by Spark; if you wish to have proper names, you can change them as you like.
Result: ``` +----+----+----+-----+----+----+ | _1| _2| _3| _1| _2| _3| +----+----+----+-----+----+----+ |girl| 5|8888| girl| 12|1111| | Tom| 12|6666| null|null|null| | cat| 2|9999| cat| 2|9999| |null|null|null| cup| 5|7777| |null|null|null|mouse| 4|3333| |book| 4|9999| null|null|null| +----+----+----+-----+----+----+ ``` Upvotes: 0
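The `getOrElse(0)` idea is not Spark-specific. A dependency-free JavaScript sketch of the same fill-with-zero full outer join (illustrative only; nothing distributed about it):

```javascript
// Full outer join of two [key, value] pair lists; a missing side is
// filled with the default (0 here, mirroring getOrElse(0)).
function fullOuterJoinWithDefault(left, right, dflt = 0) {
  const l = new Map(left);
  const r = new Map(right);
  const keys = new Set([...l.keys(), ...r.keys()]);
  return [...keys].map((k) => [
    k,
    [l.has(k) ? l.get(k) : dflt, r.has(k) ? r.get(k) : dflt],
  ]);
}

const joined = fullOuterJoinWithDefault(
  [['cat', 2], ['girl', 5], ['book', 4], ['Tom', 12]],
  [['cat', 2], ['cup', 5], ['mouse', 4], ['girl', 12]],
);
```

With the question's data this produces six keys, e.g. `book -> [4, 0]` and `cup -> [0, 5]`.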
2018/03/22
1,284
4,596
<issue_start>username_0: So, I recently made two big changes ... moved my code from bitbucket to github, and set up a pipeline on heroku with a new staging app (original app is now production). I got a new github token and placed it into the auth.json file as was done with the previous bitbucket repo (it's a private repo). However, when I push to heroku to build the code with composer there, I cannot connect with the laravel spark repo. Error: ``` Installing laravel/spark (v3.0.5): Downloading (failed) Failed to download laravel/spark from dist: The "https://api.github.com/repos/laravel/spark/zipball/512af184c15d793c33328ff03313553ea6feacba" file could not be downloaded (HTTP/1.1 404 Not Found) Now trying to download from source Installing laravel/spark (v3.0.5): Cloning 512af184c1 [RuntimeException] Failed to execute git clone --no-checkout 'https://***:***@github.com/laravel/spark.git' '/tmp/build_9916d292e7eb72e0fbe34f47e3d9854c/vendor/laravel/spark' && cd '/tmp/build_9916d292e7eb72e0fbe34f47e3d9854c/vendor/laravel/spark' && git remote add composer 'https://***:***@github.com/laravel/spark.git' && git fetch composer remote: Repository not found. fatal: repository 'https://***:***@github.com/laravel/spark.git/' not found ``` What I have tried ... Setting the github api token on heroku with ``` heroku config:set GITHUB_API_TOKEN= ``` Setting the composer github token ``` composer config -g github-oauth.github.com ``` I am connected to the Laravel Spark repo on github and when I run composer on my local machine I am not prompted for a spark token. Every other dependency that I have runs fine (and if I change the auth.json, that is no longer the case), so I don't think this is a problem with lack of access to my github account. Does anyone know how Laravel Spark checks to grant access, and how we can check to see where we are going wrong? There should be a checklist of things that can be looked at if access is denied. Any help is appreciated. Been stuck for almost a week.
I really need some way to figure out how to connect to the Spark repo. (Edit) Spark is a composer satis repo. I can't really find any info on how to prompt this type of repo to tell me why I can't clone it, or how best to communicate with it. (Edit 2) Also tried changing the git config to ensure that it had the right token. This should be overwritten by the files, but I tried it anyway. ``` git config github.accesstoken ``` The response from the software providers is to use an alternative method and place the code under my source control so that composer is not trying to load it. I do not wish to do this for a number of reasons. Again, I need a way to clone the satis repo in composer. Edit 3: I have also tried going to [the URL](https://spark-satis.laravel.com/) of the repo and attempting to access one of the versions. This displays the same error as when you go to the URL in the error directly (it's the same URL). ``` { "message": "Not Found", "documentation_url": "https://developer.github.com/v3/repos/contents/#get-archive-link" } ``` This seems to back up the belief that this is not a composer issue, but something to do with a github setting or spark setting. Edit 4: It occurred to me that my problems started after upgrading to V6: I am being denied access to the Spark repo containing versions 1-5, while version 6 is in a separate repo. I upgraded my spark version to 6 and had access to that repo. I then tried uploading the code base with version 6 to heroku, but was denied access to the repo there. I also tried ... ``` heroku config:set github_oauth= ``` Edit 5: I noticed that the output from pushing to heroku included the phrase ``` NOTICE: Using $COMPOSER_GITHUB_OAUTH_TOKEN for GitHub OAuth. ``` In response, I found an article asserting that the oauth token should be set in the config portion of composer.json as ...
``` "config": { "github-oauth": { "github.com": "" } } ``` I tried it, but it didn't work.<issue_comment>username_1: So, it turns out there were several issues. The final big one was that, for some reason, I had to delete my personal API token used for github access and create a new one with full privileges for everything. Once that was set up, I had access and was able to reduce the privileges to repo only. Upvotes: 1 <issue_comment>username_2: <https://github.com/ladybirdweb/agorainvoicing> Use the open source Agora Invoicing software. It has all the tools you need to start a software-selling business. It is built on the Laravel framework and is very similar to Laravel Spark Upvotes: -1
2018/03/22
626
2,076
<issue_start>username_0: Hi everyone, I just want some explanation about Vue props data. I'm passing a value from a parent component to a child component. The thing is, when the parent's data changes/updates, it's not updating in the child component. ``` Vue.component('child-component', { template: '{{val}}', props: ['testData'], data: function () { return { val: this.testData } } }); ``` But using the prop name ***{{testData}}*** directly, it displays the data from the parent properly ``` Vue.component('child-component', { template: '{{testData}}', props: ['testData'], data: function () { return { val: this.testData } } }); ``` Thanks in advance Fiddle [link](https://jsfiddle.net/PenAndPapers/waqb81fy/17/)<issue_comment>username_1: This is best explained with a very simple example ```js let a = 'foo' let b = a a = 'bar' console.info('a', a) console.info('b', b) ``` When you assign... ``` val: this.testData ``` you're setting the **initial value** of `val` once when the component is created. Changes to the prop will not be reflected in `val` in the same way that changes to `a` above are not reflected in `b`. See <https://v2.vuejs.org/v2/guide/components.html#One-Way-Data-Flow> Upvotes: 4 [selected_answer]<issue_comment>username_2: I resolved it with `this.$set(this.mesas, i, event);` ```js data() { return { mesas: [] } }, components: { 'table-item': table_l, }, methods: { deleteMesa: function(event) { let i = this.mesas.map(item => item.id).indexOf(event.id); console.log("table to delete", i); this.mesas.splice(i, 1); }, updateMesa: function(event) { let i = this.mesas.map(item => item.id).indexOf(event.id); console.log("table to update", i); // Assigning this.mesas[i] = event directly makes Vue warn; // I resolved it as follows: this.$set(this.mesas, i, event); }, // Adds users online addMesa: function(me) { console.log(me); this.mesas.push(me); } } ``` Upvotes: 0
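The snapshot-versus-live distinction in the accepted answer can be modelled without Vue at all; `parent` and `child` below are made-up stand-ins for the two components:

```javascript
// Copying a prop into `data` takes a one-time snapshot; a computed
// property (modelled as a getter) reads through to the source.
const parent = { testData: 'initial' };

// data: function () { return { val: this.testData } }  -> snapshot
const snapshotVal = parent.testData;

// computed: { val () { return this.testData } }        -> live view
const child = {
  get val() {
    return parent.testData;
  },
};

parent.testData = 'updated';
```

After the update, the snapshot still holds the old value while the getter sees the new one, which is why Vue recommends a computed property (or rendering the prop directly) over copying it into `data`.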
2018/03/22
582
1,814
<issue_start>username_0: I'm not sure what it means to have "pull access" on a private repo. Should I be able to clone it now? I can't. ``` [1045](m7int01)~/codes>git clone https://github.com/glwhart/[repo_name] Initialized empty Git repository in /zhome/glh43/codes/[repo_name]/.git/ error: The requested URL returned error: 403 Forbidden while accessing https://github.com/glwhart/[repo_name]/info/refs fatal: HTTP request failed ``` But I can download a zip file of the source code...
2018/03/22
1,041
3,163
<issue_start>username_0: I have a DXF file that was exported from a drawing of a simple arc that starts at `(0, 0)`, ends at `(2, 0)` and has a radius of `1.0`. I would expect the `LWPOLYLINE` to be made up of two vertices with the first containing the start point and bulge factor, and the second point simply containing the end point. However, the end point contains a bulge factor as well. How is this bulge point to be interpreted? Shouldn't all vertices with a bulge be followed by another point that defines the end point? ``` AcDbPolyline 90 2 70 0 43 0.0 10 0.0 -----------------> x1 20 0.0 -----------------> y1 42 0.9999999999999998 ---> p1 to p2 w/ bulge = 1, makes sense 10 2.0 -----------------> x2 20 0.0 -----------------> y2 42 1.330537671996453 ----> why does p2 have a bulge? Shouldn't all vertices w/ a bulge be followed by another point (to define the end point)? 0 ENDSEC ```<issue_comment>username_1: The best way to find out such details is to test. If you don't have an AutoCAD application, try Autodesk TrueView; it's free. What I found out by testing is: the last bulge value does nothing. You can change it to any value you want or just delete it, and the LWPOLYLINE always looks the same. EDIT: This is only true if the LWPOLYLINE isn't closed. If the LWPOLYLINE is closed, group code 70=1, the last bulge (and also the last start width and end width values) apply to the closing segment from the last vertex to the first vertex. Your example as a closed polyline looks like this: [![enter image description here](https://i.stack.imgur.com/TlRjU.png)](https://i.stack.imgur.com/TlRjU.png) Upvotes: 4 [selected_answer]
the segment spanning the last vertex & first vertex) should appear. You can witness the effect of this value in the following examples: The following `entmake` expression with the final DXF group 42 entry omitted (and therefore interpreted as `0`) produces a polyline as shown in the image: ``` (entmake '( (000 . "LWPOLYLINE") (100 . "AcDbEntity") (100 . "AcDbPolyline") (090 . 3) (070 . 1) (010 0.0 0.0) (010 1.0 1.0) (010 1.0 0.0) ) ) ``` [![Zero Bulge](https://i.stack.imgur.com/EX4QI.png)](https://i.stack.imgur.com/EX4QI.png) Whereas the following `entmake` expression with the final DXF group 42 entry set to `-1` (`=tan(-pi/4)`) produces a polyline as shown in the image: ``` (entmake '( (000 . "LWPOLYLINE") (100 . "AcDbEntity") (100 . "AcDbPolyline") (090 . 3) (070 . 1) (010 0.0 0.0) (010 1.0 1.0) (010 1.0 0.0) (042 . -1.0) ) ) ``` [![Non-zero Bulge](https://i.stack.imgur.com/gJS3C.png)](https://i.stack.imgur.com/gJS3C.png) Upvotes: 2
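For readers decoding group 42 by hand: per the DXF convention, the bulge is the tangent of one quarter of the segment's included angle (its sign giving the arc direction), from which the arc geometry can be recovered. A small JavaScript sketch:

```javascript
// DXF bulge factor = tan(includedAngle / 4); sign encodes direction.
function bulgeToAngle(bulge) {
  return 4 * Math.atan(bulge); // included angle in radians
}

// Radius from the chord between the two vertices and the bulge:
// r = chord * (1 + b^2) / (4 * |b|)  (derived from the sagitta).
function arcRadius(bulge, chordLength) {
  const b = Math.abs(bulge);
  return (chordLength * (1 + b * b)) / (4 * b);
}

// The question's first segment: (0,0) -> (2,0) with bulge ~1.
const angle = bulgeToAngle(1);  // a half turn (semicircle)
const radius = arcRadius(1, 2);
```

For the question's segment this gives a 180-degree arc of radius 1, matching the original drawing.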
2018/03/22
953
3,484
<issue_start>username_0: I have a list of strings, and need to check each item to see if it contains some string `$path`, where the string should contain a UNC path and `$path` is also a UNC path. For example: ``` "RW \\test" -match "\\test" ``` returns `True`, as `\\test` is contained in `RW \\test`. Great. So why does this return `False` ? : ``` "RW \\test\te" -match "\\test\te" ``` At first I thought maybe the single backslash was somehow acting as an escape character (even though in PowerShell the escape character should be the backtick `` ` ``) So I tried ``` "RW \\test\\te" -match "\\test\\te" ``` But this also returns `False` .... Why?<issue_comment>username_1: You need to escape both of the backslashes with backslashes in your regular expression on the right-hand side of the `-match` operator. ``` PS /> "RW \\test\te" -match "\\\\test\\te" True ``` Here's what the result looks like: ``` PS /> $matches[0] \\test\te ``` You could also expand on this to use named captures in regular expressions. Named captures just give friendly names to individual captures inside of a regular expression, making them more easily referenced as a property on the `$matches` variable, instead of a numeric index. ``` PS /> "RW \\test\te" -match "(?<UNCPath>\\\\test\\te)" True PS /> $matches.UNCPath \\test\te ``` Keep in mind that the backtick character is used to escape certain special characters in PowerShell double-quoted strings. However, in the case of the `-match` operator, you're invoking the .NET regular expression engine. In the .NET regex engine, the backslash is used to escape special characters in the regex context. Hence, in this example, the backtick escape character isn't applicable. *Also*, make sure that you are **not** escaping special characters in your source string, on the left-hand side of the `-match` operator. The reason that your final example doesn't match is because you added a second `\`, but only escaped a single `\` in the regex on the right-hand side of the `-match` operator.
Upvotes: 3 [selected_answer]<issue_comment>username_2: To complement [<NAME>'s helpful answer](https://stackoverflow.com/a/49419445/45375) with a tip provided by [PetSerAl](https://stackoverflow.com/users/4003407/petseral) in a comment on the question: To **use a string as a *literal* in a regex context, pass it to `[regex]::Escape()`**: ``` PS> "RW \\test\te" -match [regex]::Escape("\\test\te") True ``` `[regex]::Escape()` conveniently escapes all characters that have special meaning in a regex with escape character `\`, so that the string is matched as a literal: ``` PS> [regex]::Escape("\\test\te") \\\\test\\te ``` Note how the `\` instances were each escaped with `\`, effectively doubling them. If your string does use regex constructs but also contains characters with special meaning in regexes that you want to be treated as literals, you must `\`-escape them individually: ``` PS> '***' -match '\**' # match zero or more (*) '*' chars (\*) True ``` Upvotes: 2 <issue_comment>username_3: Somewhat orthogonal, but for matching paths, you might find the `-like` operator easier to use. It supports *wildcards* instead of *regular expressions* so you could write your example as ``` "RW \\test\te" -like "*\\test\te" ``` Note that the leading '\*' on the RHS is required- wildcard patterns are "anchored" (have to match the whole string). Regular expressions are unanchored by default and only have to match a fragment of the string. Upvotes: 1
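The same escape-everything-special idea exists outside PowerShell; a JavaScript analogue of `[regex]::Escape()` (the character class follows the commonly used escapeRegExp pattern) might look like:

```javascript
// Backslash-escape every character that is special in a regex, so the
// input is matched as a literal -- a JS cousin of [regex]::Escape().
function regexEscape(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// The question's literal: the string  \\test\te
const literal = '\\\\test\\te';
const pattern = new RegExp(regexEscape(literal));
const matched = pattern.test('RW \\\\test\\te'); // true
```

As in the PowerShell case, each backslash in the literal ends up doubled in the resulting pattern.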
2018/03/22
241
736
<issue_start>username_0: Suppose I have this (where my insertion point is designated "|"): ``` col1 col2 thing oeueaoue| another something test ``` What keystroke(s) would I use so the insertion point jumps to here: ``` col1 col2 thing oeueaoue another | something test ``` ? Is `return`+`A`+`tab`… the only way?<issue_comment>username_1: In insert mode: ``` ``` In normal mode: ``` A ``` or: ``` j ``` Upvotes: -1 <issue_comment>username_2: When there are many lines to edit, it can be useful to map the return key temporarily, something like this in your case: ``` inoremap <CR> <Esc>jA ``` When you're done, just cancel the map: ``` iunmap <CR> ``` Upvotes: 2 [selected_answer]
2018/03/22
705
2,511
<issue_start>username_0: I have a tab panel on my html where I am rendering a table. ``` ``` I am rendering the table through a php script from mysql. The function getManual() gets triggered on clicking the tab panel. ``` function getManual() { var folder = $('#workingDir').val(); $.post('manualAnnotation.php', {'folder': folder}, function(data) { $('#annotationTable').html(data).show(); }) } ``` And my php that renders the table is as follows: ``` $result = mysqli_query($connect, $sql) or die(mysqli_error($connect)); echo "<table id='manTab'>"; echo "<thead><tr><th>Select</th><th>Image</th><th>Location</th><th>Brand</th><th>Run</th></tr></thead><tbody>"; while ($row = mysqli_fetch_array($result)) { $ID = $row['ID']; $Image = $row['Image']; $Location = $row['Location']; $Brand = $row['Brand']; echo "<tr><td></td><td>".$Image."</td><td>".$Location."</td><td>".$Brand."</td><td><a href='#'>RUN</a></td></tr>"; } echo "</tbody></table>"; mysqli_close($connect); ``` I am trying to enable the jQuery DataTable on the table id "manTab". On my index page, I have added this code: ``` $(document).ready(function() { $('#manTab').DataTable({ }); }); ``` While this renders the table, DataTable functionality such as search, sort, and pagination is not enabled.<issue_comment>username_1: You can position individual components based on configuration like this. ``` $(document).ready(function() {     $('#example').DataTable( {         "dom": '<"top"i>rt<"bottom"flp><"clear">'     } ); } ); ``` Look into the [official Documentation](https://datatables.net/reference/option/dom) for more detail ``` l - length changing input control f - filtering input t - The table! i - Table information summary p - pagination control r - processing display element ``` Upvotes: 0 <issue_comment>username_2: You are trying to convert a table that has not finished rendering (from the request) yet.
``` function getManual() { var folder = $('#workingDir').val(); $.post('manualAnnotation.php', {'folder': folder}, function(data) { $('#annotationTable').html(data).show(); $('#manTab').DataTable(); // place here }); } ``` The above will work if you have only a few rows in your table, but if you have thousands or more, you need to use a callback/promise to wait until your table has finished rendering. ``` function getManual() { var folder = $('#workingDir').val(); $.post('manualAnnotation.php', {'folder': folder}, function(data) { $('#annotationTable').html(data).show(); }).done(function(){ $('#manTab').DataTable(); // place here }); } ``` Upvotes: 2 [selected_answer]
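The fix in the accepted answer is really about ordering: initialise the plugin only after the markup exists. A jQuery-free model of that ordering, where `fakePost` is a made-up synchronous stand-in for `$.post` and the two `log.push` calls stand in for `.html(data)` and `.DataTable()`:

```javascript
// Ordering model: render the returned markup first, then initialise.
const log = [];

function fakePost(url, onSuccess) {
  onSuccess('<table id="manTab"></table>'); // pretend server response
}

function getManual() {
  fakePost('manualAnnotation.php', function (html) {
    log.push('render'); // inject the returned markup first
    log.push('init');   // ...only then turn it into a DataTable
  });
}

getManual();
```

Initialising outside the success callback (as in the original `$(document).ready` snippet) would flip this order, which is exactly the bug.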
2018/03/22
534
2,035
<issue_start>username_0: I have a simple DAX formula that i want to use to distinct count the number of customers who have met a certain criteria. I have a table that defines these criteria i.e Tiers I am doing this on power bi and this is the measure that i am using; ``` MTierB = var y = SUM('Brand Tiers'[TierB]), var x = SUM('Brand Tiers'[TierA]) return CALCULATE(DISTINCTCOUNT('Sales By Customers'[CUSTOMER_StoreCode]), KEEPFILTERS(FILTER('Sales By Customers', 'Sales By Customers'[Total Sales in MSU] >= y AND <=x ))) ``` I am getting a syntax error. The AND operator is underlined in red and i dont understand why? Please help out
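For what it's worth, the likely cause is the dangling comparison: each side of a logical AND must be a complete expression, so the operand has to be repeated, e.g. `'Sales By Customers'[Total Sales in MSU] >= y && 'Sales By Customers'[Total Sales in MSU] <= x` (infix DAX uses `&&`; `AND` is a two-argument function). The repeat-the-operand rule is the same in most languages; a JavaScript illustration:

```javascript
// Each side of a logical AND must be a complete comparison.
const y = 2;  // lower threshold (stand-in value)
const x = 10; // upper threshold (stand-in value)
const sales = [1, 2, 5, 10, 12];

// sales.filter(v => v >= y && <= x)   // dangling comparison: syntax error
const inRange = sales.filter(v => v >= y && v <= x); // operand repeated
```

The commented-out form fails to parse for the same structural reason the DAX editor underlines `AND`.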
2018/03/22
573
1,932
<issue_start>username_0: What I am trying to do is pretty basic. Given an object, and without caring about the property names, I want to ensure that all its values are of a certain type. Therefore I have something like the following code: ``` // @flow type DynamicStructure = { [string]: number } const key: string = "someKey" const someStructure: DynamicStructure = { [key]: "invalid, should be a number" } ``` The weird thing is that I am getting "no errors!" after applying Flow on the code above, which is clearly wrong. You can verify this behaviour on the [Flow REPL](https://flow.org/try/#0PTAEAEDMBsHsHcBQiAuBPADgU1AETQHYCGAtgJYDGAyigE4CuFK9tOAvKAN6ICQA2gGc6ZAgHMAugC5QBeiQBGWWogC+yCrAJDQAayxppQ2iNGgOAIgGwSWANL7ziDVpSgrNmgyYss0-MXJqOkZmVjMuXj49NClQcxEANyJoMgATABo3AAtYemhU0EVQIhk5RVpHNSA) On the other hand, when I don't use dynamic accessors for the object, everything works as expected. For example, for the following code I get the expected errors: ``` // @flow type DynamicStructure = { [string]: number } const someStructure: DynamicStructure = { "someKey": "invalid, should be a number" } ``` Am I doing something wrong, or is this a Flow issue? Thanks in advance.<issue_comment>username_1: Yes, this looks like a Flowtype bug: <https://github.com/facebook/flow/issues/2928> Upvotes: 2 [selected_answer]<issue_comment>username_2: Flow is a static type checker, and it has a number of weaknesses around the handling of computed properties because of this. You have a perfectly reasonable example of how Flow 'could' work, but you're assuming that it tracks the assignment of a literal to a variable, and that there are no side effects that might invalidate its idea of whether the variable remains the same when used as the computed property. See [refinement invalidations](https://flow.org/en/docs/lang/refinements/#toc-refinement-invalidations) in the docs Upvotes: 0
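Since Flow misses this case at compile time, one workaround is a small runtime guard. Purely illustrative (not a Flow feature; `assertAllValues` is a made-up helper):

```javascript
// Verify at runtime that every value of an object with dynamic keys
// has the expected typeof, throwing on the first mismatch.
function assertAllValues(obj, expected) {
  for (const [k, v] of Object.entries(obj)) {
    if (typeof v !== expected) {
      throw new TypeError('key "' + k + '" holds a ' + typeof v + ', expected ' + expected);
    }
  }
  return obj;
}

const key = 'someKey';
let message = null;
try {
  assertAllValues({ [key]: 'invalid, should be a number' }, 'number');
} catch (e) {
  message = e.message;
}
```

Here the guard catches exactly the computed-property assignment that Flow lets through.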
2018/03/22
388
1,438
<issue_start>username_0: I am trying to use the Facebook SDK for Facebook login. I gave <http://localhost> as a Valid OAuth Redirect URI, but it throws the following error > > HTTPS is required for all Redirect URIs. > > > I used this feature a few days ago and it worked fine, but now it throws this error. And I am not able to disable the > > Enforce HTTPS > > > option<issue_comment>username_1: I ran into this issue with my Rails app that I usually run with <http://localhost:3000>. To use https, I used [ngrok](https://ngrok.com/) which allows you to use https by providing a tunnel. To do this: 1. I went to their website and downloaded their program 2. I extracted the file for the program 3. In my console, I went into the directory where ngrok was extracted to and entered 'ngrok http 3000' on my Windows machine, others may use './ngrok http 3000' 4. After entering that, ngrok provided a https address which I put into the Valid OAuth Redirect URIs field in Facebook 5. Then I started my server and was able to access it using that https address instead of localhost:3000 Upvotes: 2 <issue_comment>username_2: Yep, they changed that recently :-( For testing the login flow locally I installed a self-signed certificate <https://letsencrypt.org/docs/certificates-for-localhost/> btw, it doesn't have to be trusted by the browser if you're OK with a one-time security warning. Don't use this certificate in production! Upvotes: 1
2018/03/22
1,192
4,486
<issue_start>username_0: I was doing authentication for my front end, sending an `ErrorObservable`, but I'm not sure how to correctly handle it. First, a `verify()` used by `AuthGuard`: ``` verify(): Observable<any> { //return this.http.get('/api/verify', this.jwt()).map((response: Response) => response.json()); //Do not do remote api check if there is no token saved if(!localStorage.getItem('acToken')) { return ErrorObservable.create( 'No Token Found?'); } //console.log('verifying Remote'); return this.httpC.get<Array<string>>(Utils.apiBaseUrl + '/oauth/scopes', this.extraHeader()) .pipe( //tap(data => console.log(data)), catchError(Utils.handleError), ); } ``` Second, `canActivate()` in the `AuthGuard`: ``` canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> | boolean { console.log('@' + state.url); return this._userService.verify() .map( data => { console.log('verify Data'); // logged in so return true if (data !== null) { console.log(data); return true; } // error when verify so redirect to login page with the return url return this.defaultReject(state); }, error => { //console.log('verify Error'); // error when verify so redirect to login page with the return url console.log('@err ' + error); return this.defaultReject(state); } ); } ``` Every time `verify()` returns `ErrorObservable.create('No Token Found?')`, there is a red ERROR `"Error: Uncaught (in promise): No Token Found?"` in the console; on the other hand, if the `HttpClient` inside `verify()` gets a `401`, it does not produce an `"ERROR Error: Uncaught"`. Is there a correct way to catch that `ErrorObservable` without it being uncaught? Thanks for helping<issue_comment>username_1: You are calling `.map` the wrong way in your `canActivate` method. If you look at the documentation for `.map` (<http://reactivex.io/rxjs/class/es6/Observable.js~Observable.html#instance-method-map>), you'll see you are using `map` a bit like the `subscribe` method.
`map` can only transform every value yielded by the observable into another, so you shouldn't pass it the second parameter (the function that handles errors). In fact, if you pass `map` a second parameter it'll act as the `this` reference in the projection function (the first parameter). The problem here is that, when you return an `ErrorObservable` you can't really apply `map` to it, you've got to subscribe to handle the error. See it in this example, where `error` is a `ErrorObservable` created via `Observable.throw`: ```js let error = Rx.Observable.throw("error!"); // create an ErrorObservable error.map(data => console.log(data)); // this has no effect, as the observable won't emit values. error.subscribe({ next: value => console.log("value", value), // as previously, won't execute error: error => console.log("error", error) // will execute }); // You also can: error.map(data => console.log('data', data)).subscribe({ next: value => console.log('result', value), error: error => console.log('other way of handling it', error) }); ``` The `map` operator subscribes to the observable, but ignores errors (they pass through it). However, you can chain both results, as demonstrated in my last example. 
This way you always catch an error and can act on it. Upvotes: 2 <issue_comment>username_2: ``` this.getBase().subscribe( (next: BaseInfo) => { if(next === undefined){ console.log('next ', 'hmm?');// because handleError2 returns of(result as T); }else { console.log('next ', next); } }, error =>{ console.log('err ', error);// won't fire unless handleError2 throws the error }, () => { console.log('complete'); } ); getBase() { return this.httpC.get<BaseInfo>(environment.apiServerUri + '/base1', this.extraHeader()) .pipe( catchError(Utils.handleError2('getBase', undefined)) ); } public static handleError2<T>(operation = 'operation', result?: T) { return (error: HttpErrorResponse): Observable<T> => { console.error(error); // log to console instead ////throw(operation + ' failed'); // use this for subscribe(error:) to fire // Let the app keep running by returning an empty result. return of(result as T); }; } ``` After one and a half months, I finally understand this issue. It was my `handleError2` that caught the error but did not rethrow it. Because my `handleError2` did not throw the error back, the `error` callback in `getBase().subscribe()` never fired. Upvotes: 0
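The swallow-versus-rethrow behaviour in this self-answer can be modelled without RxJS or Angular; everything below (`getBase`, `subscribe`, the handlers) is a made-up miniature of the real operators:

```javascript
// An error handler that swallows the error yields a fallback value, so
// the caller's error callback never runs; returning (rethrowing) the
// error makes the error callback fire instead.
function getBase(handleError) {
  try {
    throw new Error('HTTP 500'); // pretend the request failed
  } catch (e) {
    return handleError(e);
  }
}

function subscribe(result, { next, error }) {
  if (result instanceof Error) error(result);
  else next(result);
}

const events = [];

// Variant 1: swallow and return a default -> `next` fires (with undefined).
subscribe(getBase(() => undefined), {
  next: (v) => events.push('next:' + v),
  error: (e) => events.push('error:' + e.message),
});

// Variant 2: pass the error back -> `error` fires.
subscribe(getBase((e) => e), {
  next: (v) => events.push('next:' + v),
  error: (e) => events.push('error:' + e.message),
});
```

Variant 1 mirrors `catchError(...)` returning `of(result as T)`; variant 2 mirrors rethrowing so `subscribe`'s error callback runs.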
2018/03/22
<issue_start>username_0: I want to aggregate (effectively de-duping) and sum nested data the simplest way possible, through map/reduce/lodash or whatever. ES6/ES7 can be used, it doesn't matter. Simplest, cleanest is preferred. Thanks. I have an array, e.g.

```
[{
    "orderNumber": "0001",
    "itemList": [{
            "item_code": "X1000",
            "qty": 10,
            "unit_price": 20
        },
        {
            "item_code": "X1002",
            "qty": 10,
            "unit_price": 20
        }
    ]
},
{
    "orderNumber": "0002",
    "itemList": [{
            "item_code": "X1000",
            "qty": 10,
            "unit_price": 20
        },
        {
            "item_code": "X1003",
            "qty": 10,
            "unit_price": 20
        }
    ]
}]
```

And I want to end up with:

```
[{
    "item_code": "X1000",
    "qty": 20,
    "unit_price": 20
},
{
    "item_code": "X1002",
    "qty": 10,
    "unit_price": 20
},
{
    "item_code": "X1003",
    "qty": 10,
    "unit_price": 20
}]
```

Thanks for your time!
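The aggregation the question asks for — merge `itemList` entries across orders, keyed by `item_code`, summing `qty` — can be sketched as follows. This is shown in Python for illustration (the question asks for JS; the same accumulate-into-a-map idea ports directly to a `reduce` over the orders). It assumes `unit_price` is the same for a given `item_code` across orders and keeps the first one seen:

```python
def aggregate_items(orders):
    """Merge itemList entries across orders, summing qty per item_code."""
    totals = {}  # dicts preserve insertion order, so first-seen order is kept
    for order in orders:
        for item in order["itemList"]:
            code = item["item_code"]
            if code not in totals:
                # assumption: unit_price is uniform per item_code; keep the first seen
                totals[code] = {"item_code": code, "qty": 0,
                                "unit_price": item["unit_price"]}
            totals[code]["qty"] += item["qty"]
    return list(totals.values())


orders = [
    {"orderNumber": "0001", "itemList": [
        {"item_code": "X1000", "qty": 10, "unit_price": 20},
        {"item_code": "X1002", "qty": 10, "unit_price": 20}]},
    {"orderNumber": "0002", "itemList": [
        {"item_code": "X1000", "qty": 10, "unit_price": 20},
        {"item_code": "X1003", "qty": 10, "unit_price": 20}]},
]

print(aggregate_items(orders))
```

For the sample input this yields X1000 with qty 20 and X1002/X1003 with qty 10 each, matching the desired output.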
2018/03/22
<issue_start>username_0: I'd like to check whether a dictionary is a subset of another dictionary recursively. Let's assume both dictionaries have builtin types as items. I've seen there is already a very old thread [Python: Check if one dictionary is a subset of another larger dictionary](https://stackoverflow.com/questions/9323749/python-check-if-one-dictionary-is-a-subset-of-another-larger-dictionary) trying to solve something similar but not quite... Because none of the answers living there will serve for my purposes I've decided to post my own [solution](https://stackoverflow.com/a/49418622/3809375) in there, but it is still not fully complete. The below function works ok for almost all cases, but it fails miserably in cases when the subset has values that don't exist in the superset, i.e.:

```
def is_subset(superset, subset):
    for key, value in superset.items():
        if key not in subset:
            continue

        if isinstance(value, dict):
            if not is_subset(value, subset[key]):
                return False
        elif isinstance(value, str):
            if not subset[key] in value:
                return False
        elif isinstance(value, list):
            if not set(subset[key]) <= set(value):
                return False
        elif isinstance(value, set):
            if not subset[key] <= value:
                return False
        else:
            if not subset[key] == value:
                return False

    return True


superset = {'question': 'mcve', 'metadata': {}}
subset = {'question': 'mcve', 'metadata': {'author': 'BPL'}}
print(is_subset(superset, subset))
```

The function returns True but it should return False. So, how would you fix it?<issue_comment>username_1: Just a guess, but I think because the dict returned by 'metadata' in the superset is empty, none of the if statements trigger a return of False, so you get the final return of True. You could just do a check to see if the length of either dict is zero. If one is empty and the other is not, then return False. Otherwise proceed with your recursive solution. Upvotes: 0 <issue_comment>username_2: Your code logic is upside down.
Notice how you take each element in the *superset* and continue if *they are not in the subset*. What you want to do is take each element in the *subset* and check that *they are in the superset*. Here is a fixed version of your code.

```
def is_subset(superset, subset):
    for key, value in subset.items():
        if key not in superset:
            return False

        if isinstance(value, dict):
            if not is_subset(superset[key], value):
                return False
        elif isinstance(value, str):
            if value not in superset[key]:
                return False
        elif isinstance(value, list):
            if not set(value) <= set(superset[key]):
                return False
        elif isinstance(value, set):
            if not value <= superset[key]:
                return False
        else:
            if not value == superset[key]:
                return False

    return True
```

Here are some examples of the function giving the correct result.

```
superset = {'question': 'mcve', 'metadata': {}}
subset = {'question': 'mcve', 'metadata': {'author': 'BPL'}}
is_subset(superset, subset)  # False

superset = {'question': 'mcve', 'metadata': {'foo': {'bar': 'baz'}}}
subset = {'metadata': {'foo': {}}}
is_subset(superset, subset)  # True

superset = {'question': 'mcve', 'metadata': {'foo': 'bar'}}
subset = {'question': 'mcve', 'metadata': {}, 'baz': 'spam'}
is_subset(superset, subset)  # False
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: Here is a solution that also properly recurses into lists and sets.
(I changed the order of arguments, because that made more sense to me.)

```
def is_subset(subset, superset):
    if isinstance(subset, dict):
        return all(key in superset and is_subset(val, superset[key])
                   for key, val in subset.items())

    if isinstance(subset, list) or isinstance(subset, set):
        return all(any(is_subset(subitem, superitem) for superitem in superset)
                   for subitem in subset)

    if isinstance(subset, str):
        return subset in superset

    # assume that subset is a plain value if none of the above match
    return subset == superset
```

When using Python 3.10, you can use Python's new match statements to do the typechecking:

```
def is_subset(subset, superset):
    match subset:
        case dict(_):
            return all(key in superset and is_subset(val, superset[key])
                       for key, val in subset.items())
        case list(_) | set(_):
            return all(any(is_subset(subitem, superitem) for superitem in superset)
                       for subitem in subset)
        case str(_):
            return subset in superset
        # assume that subset is a plain value if none of the above match
        case _:
            return subset == superset
```

Upvotes: 1 <issue_comment>username_4: I didn't like the original solution much - it didn't work for some of the cases I had, as some of the comments said. Here's a more generalized solution:

```
def is_subvalue(supervalue, subvalue) -> bool:
    """Meant for comparing dictionaries, mainly.

    Note - I don't treat ['a'] as a subvalue of ['a', 'b'], or 'a' as a
    subvalue of 'ab'. For that behavior for a list or set, remove the line:
    `if len(supervalue) != len(subvalue): return False`
    For that behavior for a string, switch `subvalue == supervalue` to
    `subvalue in supervalue` for strings only.
    But NOT in this function, as it's meant to compare dictionaries
    and {'ab': 'a'} is not the same as {'a': 'a'}
    """
    if isinstance(subvalue, list) or isinstance(subvalue, set):
        if isinstance(subvalue, list) and not isinstance(supervalue, list):
            return False
        if isinstance(subvalue, set) and not isinstance(supervalue, set):
            return False
        if len(supervalue) != len(subvalue):
            return False
        return all([is_subvalue(supervalue[i], subvalue[i]) for i in range(len(subvalue))])

    if isinstance(subvalue, dict):
        if not isinstance(supervalue, dict):
            return False
        for key in subvalue:
            if key not in supervalue or not is_subvalue(supervalue[key], subvalue[key]):
                return False
        return True

    # all other types.
    return supervalue == subvalue
```

Here are some tests for it:

```
assert is_subvalue(None, None)
assert is_subvalue(1, 1)
assert is_subvalue('1', '1')
assert is_subvalue([], [])
assert is_subvalue({}, {})
assert is_subvalue(['1'], ['1'])
assert not is_subvalue(['1'], ['1', '2']) and not is_subvalue(['1', '2'], ['1'])
assert is_subvalue({'a': 'b'}, {'a': 'b'})
assert not is_subvalue({'ab': 'b'}, {'a': 'b'})
assert is_subvalue({'a': ['b']}, {'a': ['b']})
assert is_subvalue({'a': ['b', 'c']}, {'a': ['b', 'c']})

# tests for ensuring more complex dictionary checks work
assert not is_subvalue({'a': ['b', 'c']}, {'a': ['b']})
assert not is_subvalue({'a': 'b'}, {'a': 'b', 'b': 'c'})
assert is_subvalue({'a': 'b', 'b': 'c', 'c': 'd'}, {'a': 'b', 'b': 'c'})
assert not is_subvalue({'a': 'b', 'b': 'c', 'c': 'd'}, {'a': 'b', 'b': {'c': 'c'}})
assert is_subvalue({'a': 'b', 'b': {'c': 'c'}, 'c': 'd'}, {'a': 'b', 'b': {'c': 'c'}})
assert is_subvalue({'a': 'b', 'b': ['c', 'c'], 'c': 'd'}, {'a': 'b', 'b': ['c', 'c']})
```

Upvotes: 0
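The core of the accepted fix above is the iteration direction: looping over the superset silently skips keys that exist only in the subset, while looping over the subset forces every one of its keys to be checked. That can be isolated in a stripped-down sketch (flat dictionaries only, no recursion):

```python
def is_subset_flat(superset, subset):
    # Iterate over the SUBSET: every key it holds must exist in the
    # superset with an equal value. Extra superset keys are fine.
    return all(key in superset and superset[key] == value
               for key, value in subset.items())


superset = {'question': 'mcve', 'metadata': {}}
subset = {'question': 'mcve', 'metadata': {'author': 'BPL'}}

# {'author': 'BPL'} != {} at key 'metadata', so this is correctly False.
print(is_subset_flat(superset, subset))
```

On the question's failing pair this prints `False`, which is exactly the result the original superset-driven loop could not produce.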
2018/03/22
<issue_start>username_0: Apps can be transferred from one company to another in both stores, Google Play and the Apple App Store. However, as I was told, each app has a certificate. I don't know much about this, and googling confused me more than it actually helped me. I'm responsible for an app transfer and the initialization of an update procedure. Company X, which gave me the task, has an app. The app was developed by DEV Company. The relationship between the two companies got a bit frozen after some time, regarding the ownership of the source code of the app. In the end, both companies agreed that they will simply transfer the app from the store accounts owned by DEV Company to Company X's own store accounts. To keep the peace without any lawyers involved, Company X would redesign the app from scratch without using any code from the current app. Therefore, DEV Company would not hand over the source code, just the APK. So Company X would create a new app and, once that one goes live, deactivate the old app. I read both documentations about app transfers, on Google <https://support.google.com/googleplay/android-developer/answer/6230247?hl=en-GB> and on Apple <https://developer.apple.com/library/content/documentation/LanguagesUtilities/Conceptual/iTunesConnect_Guide/Chapters/TransferringAndDeletingApps.html>. As long as there are no in-app purchases, there is no problem with transferring an app. However, I can't find any information regarding certificates. Do the certificates come together with the app and will they automatically be transferred too, or does a certificate belong to the owner of the DEV account, in that case DEV Company? That would mean that the app needs to be recompiled with a new certificate, and I believe it would mean that all users would need to update their app just for this cause. Is that correct?
I found this on Apple: <https://developer.apple.com/support/certificates/> But this confuses me more than it helps in that situation.<issue_comment>username_1: Alright, thanks to <NAME>, who gave the answer for this case via comments. Here's some info about signing keys on Android <https://developer.android.com/studio/publish/app-signing.html> and on Apple <https://developer.apple.com/support/code-signing/>, but that's just the explanation of app signing keys. Basically, an app has a certificate, which identifies the creator/developer of the app. This is similar to an SSL certificate. The idea is to tell the end user that the current version of the app was created by the original developer. As Allen states:

> You cannot resign the .apk file. You can imagine the security issues that would pose if you could do that, allowing some 3rd party to use someone else's signature. The new app will not be able to overwrite the existing app with a different key.

Therefore in the current case, after transferring the app to the new account, it must be deactivated and a new one must be published. The users won't be able to use the original app anymore. If the app was properly designed, it should show an error message at startup/login/wherever possible and inform the user that the app is deprecated and the new version must be downloaded from the store. Upvotes: 0 <issue_comment>username_2: username_1's answer is not quite right (at least for Google Play). I can't comment for anywhere but Google Play, so let me try to give the correct answer for there. An app has a certificate, which identifies the owner of the app. This is similar to an SSL certificate. The idea is to tell the end user **but more importantly the Android OS** that the current version of the app was created by the same person as the original app. Otherwise anyone could update an app on a device. So you have three options:

1. In my opinion the better option is to get the certificate from the original developer.
Then the app can be transferred to the new developer account, and when a new version is released, all users will get the update.

2. If the certificate cannot be handed over, this is a worse situation. In this situation there is no option but to publish a new version of the app and unpublish the old version. If you do this:
   * existing users will not get updates; they will have to go and install the new version manually
   * existing users will still be able to use the old app
   * existing users will still be able to install the old app (new users can't)

3. Register the app for [Google Play App Signing](https://support.google.com/googleplay/android-developer/answer/7384423). Then the signing and the certificate will be stored by Google. When the app is transferred, a new upload key can be created for the new owner, which has all the benefits of option 1.

As you can see, the second option is a much worse option for your client and users. Options 1 or 3 are much better. If the app had been registered for Google Play App Signing, then this problem wouldn't exist. I strongly recommend doing this if a new app is created. Upvotes: 1 <issue_comment>username_3: For an iOS app, it's necessary to have the following items:

1. An `iOS Distribution:` certificate in the keychain.
2. A provisioning profile for the App Store, for the Xcode project configuration.
3. The app's bundle identifier, for the Xcode project configuration. This can be found in the `Provisioning profile` if you don't have it. You can find the method to decode the `Provisioning profile` to get it.

If the other company only needs the `ipa` for uploading to the App Store, you need the above items to produce the `ipa` result. If you need to upload to their App Store presence, then you need an account in their [Itunes Connect](https://itunesconnect.apple.com/login) that has the access rights to upload an app to the App Store. You can make the whole thing automatic with [Fastlane](https://fastlane.tools), a command line tool.
I prefer [the tutorial on www.raywenderlich.com about Fastlane.](https://www.raywenderlich.com/136168/fastlane-tutorial-getting-started-2) Upvotes: 0 <issue_comment>username_4: --- For iOS Only ---

I had a similar kind of situation, where I needed to transfer my app from one iTunes account to another. Basically there are 4 identities associated with each iOS app:

**1. Bundle Id** (**for ex:** com.domainname.appIdentifier)
**2. Provisioning profile**
**3. APNS Certificate** (if push notifications are implemented for the app)
**4. Account Certificate** (these are developer-account specific, meaning shared by all the apps published under the same account)

Transferring the app from one iTunes account to another will migrate the app with the bundle id only. But it will not affect the current working of the app; it will work as fine as it was working before the transfer.

If you then wish to upload a new version of the app, you need to create a new provisioning profile and a new APNS certificate, and you will use the account certificate of the new account. (Keep in mind that you need to update the APNS certificate (.p12/PEM file) on the server for your push notifications to work.)

--- Now coming to your question specifically: DEV Company will transfer the app to the iTunes account of your Company X. It will transfer the app with its bundle id, meaning the app on the App Store will no longer be available under DEV Company's account, nor can they use the same bundle id again. Only your Company X can use that bundle id, as you get it from the transfer from DEV Company. From the user's point of view, there will not be any change or problem with app use. Even if you publish a new version of the app, the old app will work fine. The only problem can be with push notification functionality, if you have it in your app, since you will change the .p12/PEM file on your server and so old users will not get push notifications without updating the app.

Do let me know if you need any more explanation or if you have confusion on any point. Upvotes: 0
2018/03/22
<issue_start>username_0: The program works, but when it hits a wall the turtle undoes the last step and tries again. However, it keeps inputting the same forward distance and angle, causing it to move in the same path in a loop. Is there a way to stop the turtle from taking the same value again?

```
from turtle import Turtle, Screen
import random


def createTurtle(color, width):
    tempName = Turtle("arrow")
    tempName.speed("fastest")
    tempName.color(color)
    tempName.width(width)
    return tempName


def inScreen(screen, turt):
    x = screen.window_width() / 2
    y = screen.window_height() / 2

    turtleX, turtleY = turt.pos()

    if not (-x < turtleX < x) and (-y < turtleY < y):
        return False
    return True


def moveTurtle(screen, turt):
    while True:
        while inScreen(screen, turt):
            turt.fd(random.randrange(100))
            turt.left(random.randrange(360))

        if (inScreen(screen, turt) == False):
            turt.undo()
            continue


wn = Screen()

alpha = createTurtle("red", 3)

moveTurtle(wn, alpha)

wn.exitonclick()
```
2018/03/22
<issue_start>username_0: When a substring having a space in the beginning is printed, the leading whitespace is removed.

```
$ line="C , D,E,";
$ echo "Output-`echo ${line:3}`";
Output-D,E,
```

Why is the leading whitespace being removed from the output, and how can I print the whitespace?<issue_comment>username_1: The substring operation

```
${line:3}
```

would extract all characters starting at position 3, which indeed is: `[ D,E,]` (I added [] just for readability). However, in the command substitution `echo ${line:3}`, the shell performs word splitting and removes the leading blank character, giving the result as `D,E,`. Put the substring expression in double quotes to preserve the leading blank, like this:

```
echo "Output-"`echo "${line:3}"`    # => Output- D,E,
```

---

To understand this more clearly, try this:

```
line="C , D,E,"
# $() does command substitution like backquotes; it is a much better syntax
string1=$(echo "${line:3}")    # double quotes prevent word splitting
string2=$(echo ${line:3})      # no quotes, shell does word splitting
string3="$(echo ${line:3})"    # since double quotes are outside but not inside $(), shell still does word splitting
echo "string1=[$string1], string2=[$string2], string3=[$string3]"
```

gives the output:

```
string1=[ D,E,], string2=[D,E,], string3=[D,E,]
```

---

See also:

* [Word splitting - GNU documentation](https://www.gnu.org/software/bash/manual/html_node/Word-Splitting.html)
* [Word splitting - Greg's Wiki](https://mywiki.wooledge.org/WordSplitting)
* [Word splitting in Bash with IFS set to a non-whitespace character](https://stackoverflow.com/questions/43163225/word-splitting-in-bash-with-ifs-set-to-a-non-whitespace-character)
* [What is the benefit of using $() instead of backticks in shell scripts?](https://stackoverflow.com/questions/9449778/what-is-the-benefit-of-using-instead-of-backticks-in-shell-scripts)

Upvotes: 2 <issue_comment>username_2: Shell does not preserve multiple spaces between arguments, so the two spaces collapse
to one.

```
echo "Output-`echo ${line:3}`";
```

Evaluates to:

```
echo "Output-`echo -D,E,`";
```

And:

```
echo -D,E,
```

Evaluates to:

```
-D,E,
```

In other words, echo receives the same arguments for both of those calls:

```
echo -D,E,
echo -D,E,
```

As the arguments arrive as an array of strings, and each string is already trimmed (no spaces around it). Upvotes: -1
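The collapsing described above is a property of word splitting, not of `echo` itself: the unquoted substitution is split on whitespace, and `echo` then rejoins the resulting words with single spaces. The same split-and-rejoin effect can be mimicked outside the shell; here is a small Python sketch of it (an analogy, not an emulation of bash):

```python
line = "C , D,E,"
substring = line[3:]          # bash ${line:3} -> " D,E,"

# Unquoted expansion: split on IFS-style whitespace, echo joins with single spaces.
unquoted = " ".join(substring.split())

# Quoted expansion: the string is passed through as one word, untouched.
quoted = substring

print("Output-" + unquoted)   # leading space lost: Output-D,E,
print("Output-" + quoted)     # leading space preserved: Output- D,E,
```

The unquoted path loses the leading blank exactly as the question observed, while the quoted path keeps it.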
2018/03/22
<issue_start>username_0: I have a test request for executing some test cases multiple times in Robot Framework and having the test cases' pass/fail status individually in the report. Now I use a for loop to execute, but I get only one pass/fail status for all executions.<issue_comment>username_1: To my knowledge there is no way to loop *Test Cases*, therefore I assume you are using repeated execution of a *Keyword* within a test case, like:

```
Test case with loop assertion
    :FOR    ${var}    IN RANGE    3
    \    Click Element    ${MY_BUTTON}
```

You will never see the specific outcome of keywords in the *Report*, only in the *Log*. You will need to produce Test Cases to see the outcome in the Report. For convenient generation of multiple identical test cases (running the same keyword) with different (or the same) sets of data, you can use the [data-driven approach](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#data-driven-style):

```
*** Settings ***
Test Template    Click Element

*** Test Cases ***             OBJECT LOCATOR
Click my button first time     ${MY_BUTTON}
Click my button second time    ${MY_BUTTON}
Click my button third time     ${MY_BUTTON}
```

Of course the Template keyword can be a custom one containing several library keywords. Upvotes: 2 <issue_comment>username_2: Robot Framework will look for matching tests in all of the provided paths; if you pass the same path more than once, robot will run the same test again. For example, if you are running tests in the current folder, you can pass "." as many times as you want the test to run. Ex:

```
robot -t "*My test*" . . .
```

This command will run all tests that match the expression 3 times, and the report will contain all 3 executions and results. Upvotes: 4 [selected_answer]
2018/03/22
<issue_start>username_0: I am migrating my Room database. I want to add a new table. So I created the Entity class like this:

```
@Entity(foreignKeys = {@ForeignKey(entity = Project.class,
        parentColumns = "projectId",
        childColumns = "projectId",
        onDelete = ForeignKey.CASCADE)},
        indices = {
                @Index(name = "projectId_index", value = {"projectId"})
        })
public class ProjectDimension {
    @PrimaryKey(autoGenerate = true)
    private long dimensionId;

    @ColumnInfo
    private long projectId;

    @ColumnInfo
    private String name;

    @ColumnInfo
    private String value;

    // getters and setters here...
}
```

Then my Dao looks like this:

```
@Dao
public interface ProjectDimensionDao {
    @Query("SELECT * FROM ProjectDimension")
    Single<List<ProjectDimension>> getAll();

    @Query("SELECT * FROM ProjectDimension WHERE projectId = :projectId")
    Single<List<ProjectDimension>> getByProject(long projectId);

    @Insert(onConflict = OnConflictStrategy.REPLACE)
    long insert(ProjectDimension projectDimension);

    @Delete
    void delete(ProjectDimension projectDimension);
}
```

Then lastly on my database class:

```
@Database(entities = {
        Contact.class, ContactEmail.class, ContactPhone.class, Monitoring.class,
        Organization.class, OrgEmail.class, OrgPhone.class, Project.class,
        ProjectContact.class, ProjectLocation.class, ProjectDimension.class
}, version = 2)
public abstract class MonitoringDatabase extends RoomDatabase {
    private static MonitoringDatabase instance;

    // other Data Access Objects (DAO) here...
    public abstract ProjectDimensionDao projectDimensionDao();

    public static MonitoringDatabase getInstance(Context context) {
        if (instance == null) {
            instance = Room.databaseBuilder(context.getApplicationContext(),
                    MonitoringDatabase.class, "monitoring-database")
                    .addMigrations(MIGRATION_1_2)
                    .build();
        }
        return instance;
    }

    /**
     * Upgrade database from version 1 to 2.
     * Details: Added new table named ProjectDimension
     */
    private static final Migration MIGRATION_1_2 = new Migration(1, 2) {
        @Override
        public void migrate(@NonNull SupportSQLiteDatabase database) {
            // create ProjectDimension table
            database.execSQL("CREATE TABLE `ProjectDimension` (`dimensionId` INTEGER, `projectId` INTEGER, " +
                    "`name` TEXT, `value` TEXT, " +
                    "PRIMARY KEY(`dimensionId`), " +
                    "FOREIGN KEY(`projectId`) REFERENCES `Project`(`projectId`) ON DELETE CASCADE)");
        }
    };

    public static void destroyInstance() {
        instance = null;
    }
}
```

After I run it, I get an error that looks like this:

```
Expected:
TableInfo{name='ProjectDimension', columns={name=Column{name='name', type='TEXT', notNull=false, primaryKeyPosition=0}, value=Column{name='value', type='TEXT', notNull=false, primaryKeyPosition=0}, projectId=Column{name='projectId', type='INTEGER', notNull=true, primaryKeyPosition=0}, dimensionId=Column{name='dimensionId', type='INTEGER', notNull=true, primaryKeyPosition=1}}, foreignKeys=[ForeignKey{referenceTable='Project', onDelete='CASCADE', onUpdate='NO ACTION', columnNames=[projectId], referenceColumnNames=[projectId]}], indices=[Index{name='projectId_index', unique=false, columns=[projectId]}]}

Found:
TableInfo{name='ProjectDimension', columns={name=Column{name='name', type='TEXT', notNull=false, primaryKeyPosition=0}, value=Column{name='value', type='TEXT', notNull=false, primaryKeyPosition=0}, projectId=Column{name='projectId', type='INTEGER', notNull=false, primaryKeyPosition=0}, dimensionId=Column{name='dimensionId', type='INTEGER', notNull=false, primaryKeyPosition=1}}, foreignKeys=[ForeignKey{referenceTable='Project', onDelete='CASCADE', onUpdate='NO ACTION', columnNames=[projectId], referenceColumnNames=[projectId]}], indices=null}
```

I think the problem is in my CREATE statement. I've been searching for the proper SQL query for this but still failed to find it.
Somebody help!<issue_comment>username_1: You have 2 errors, one regarding the NOT NULL column property, the other regarding foreign keys. Here are the differences: Expected : > > projectId notNull=**true** > > > dimensionId notNull=**true** > > > foreignKeys indices=**[Index{name='projectId\_index', unique=false, columns=[projectId]}]}** > > > Found > > projectId notNull=**false** > > > dimensionId notNull=**false** > > > foreignKeys indices=**null** > > > I can help with the first problem: in your Entity, the primitive type `long` can't be nullable. Change it to `Long` to make it work. For the second one, add something like this to your migrate function (the index belongs on the new table): ``` database.execSQL("CREATE INDEX projectId_index ON ProjectDimension (projectId)"); ``` Upvotes: 2 <issue_comment>username_2: You need to annotate projectId with the @NonNull annotation. Upvotes: 0
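For completeness, the migration DDL can also be made to match what Room expects instead of loosening the entity: declare the primitive-backed columns NOT NULL and create an index that matches the entity's @Index. The following is only a sketch based on the entity shown in the question — the authoritative source is the JSON schema Room exports when `exportSchema` is enabled, so verify the `createSql` against that:

```sql
CREATE TABLE `ProjectDimension` (
    `dimensionId` INTEGER NOT NULL,
    `projectId` INTEGER NOT NULL,
    `name` TEXT,
    `value` TEXT,
    PRIMARY KEY(`dimensionId`),
    FOREIGN KEY(`projectId`) REFERENCES `Project`(`projectId`) ON DELETE CASCADE
);

CREATE INDEX `projectId_index` ON `ProjectDimension` (`projectId`);
```

Note that `@PrimaryKey(autoGenerate = true)` may additionally require `AUTOINCREMENT` on `dimensionId` depending on the Room version — another reason to copy the generated `createSql` from the exported schema rather than hand-writing it.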
2018/03/22
474
1,616
<issue_start>username_0: **Objective:** to fill in one dataframe with another using transpose ``` df = pd.DataFrame({'Attributes': ['love', 'family','tech']}) df.T ``` Produces this output: ``` 0 1 2 Attributes love family tech ``` Secondarily, I have another dataframe that is empty: ``` data = pd.DataFrame(columns = ['Attribute_01', 'Attribute_02', 'Attribute_03']) ``` I would like to bring the two dataframes together to produce the following: ``` Attribute_01 Attribute_02 Attribute_03 love family tech ```<issue_comment>username_1: Use `.loc` accessor to set the first row of `data` using a listified `df['Attributes']`. ``` data.loc[0] = df['Attributes'].tolist() ``` Result: ``` Attribute_01 Attribute_02 Attribute_03 0 love family tech ``` Upvotes: 0 <issue_comment>username_2: I think you just need to change the column name in df1 ``` df.columns=data.columns df Out[741]: Attribute_01 Attribute_02 Attribute_03 Attributes love family tech ``` Upvotes: 2 <issue_comment>username_3: **Setup** ``` df Attributes 0 love 1 family 2 tech ``` **Option 1** `rename` ``` df.T.rename(dict(enumerate(data.columns)), axis=1) Attribute_01 Attribute_02 Attribute_03 Attributes love family tech ``` --- **Option 2** `set_index` ``` df.set_index(data.columns).T Attribute_01 Attribute_02 Attribute_03 Attributes love family tech ``` Upvotes: 3 [selected_answer]
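Putting the accepted `set_index` approach together as a small runnable sketch (same column names as in the question):

```python
import pandas as pd

df = pd.DataFrame({'Attributes': ['love', 'family', 'tech']})
data = pd.DataFrame(columns=['Attribute_01', 'Attribute_02', 'Attribute_03'])

# Relabel the rows with the target column names, then transpose:
# the values line up with Attribute_01..Attribute_03 in one step.
result = df.set_index(data.columns).T

print(result.iloc[0].to_dict())
# {'Attribute_01': 'love', 'Attribute_02': 'family', 'Attribute_03': 'tech'}
```

This assumes `df` has exactly as many rows as `data` has columns; otherwise `set_index` will raise a length-mismatch error.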
2018/03/22
449
1,470
<issue_start>username_0: I found this existing [jsfiddle here](http://jsfiddle.net/6NJ8e/13/) from other questions. My question is, how do I put custom attribute in this code? I have tried it like this but didn't work ``` Instagram for (var i=0; i < inputs.length; i++) { inputs[i].onchange = function() { var add = $(this).attr('othervalue') * (this.checked ? 1 : -1); total.innerHTML = parseFloat(total.innerHTML) + add } } ```
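For what it's worth, the arithmetic the question's snippet is trying to do — read a custom `othervalue` attribute from each checkbox and add or subtract it from a running total on change — can be sketched without a browser. Plain objects stand in for the checkbox elements here (the values are hypothetical); in the real page the attribute would come from `$(this).attr('othervalue')` or `this.getAttribute('othervalue')`:

```javascript
// Hypothetical stand-ins for <input type="checkbox" othervalue="..."> elements.
const inputs = [
  { checked: false, othervalue: '10' },
  { checked: false, othervalue: '25' },
];

let total = 0;

// Same handler body as in the question, applied to a stand-in element.
function onToggle(input) {
  input.checked = !input.checked;
  const add = parseFloat(input.othervalue) * (input.checked ? 1 : -1);
  total += add;
}

onToggle(inputs[0]); // check the first box  -> total 10
onToggle(inputs[1]); // check the second box -> total 35
onToggle(inputs[0]); // uncheck the first    -> total 25
console.log(total);  // 25
```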
2018/03/22
721
3,153
<issue_start>username_0: I want to test whether the strings are updated correctly when the user changes the language. I am using `Espresso` to test whether the string matches the correct locale and I am currently changing it like so: ``` private fun changeLocale(language: String, country: String) { val locale = Locale(language, country) Locale.setDefault(locale) val configuration = Configuration() configuration.locale = locale context.activity.baseContext.createConfigurationContext(configuration) getInstrumentation().runOnMainSync { context.activity.recreate() } } ``` The problem is that the espresso test `onView(withText(expected)).check(matches(isDisplayed()))` is asserting false so I was wondering what is the correct way to set the default locale before running a test?<issue_comment>username_1: according to [this](https://stackoverflow.com/a/21810126/450356) answer, you can change the Locale programmatically: ``` public class ResourcesTestCase extends AndroidTestCase { private void setLocale(String language, String country) { Locale locale = new Locale(language, country); // here we update locale for date formatters Locale.setDefault(locale); // here we update locale for app resources Resources res = getContext().getResources(); Configuration config = res.getConfiguration(); config.locale = locale; res.updateConfiguration(config, res.getDisplayMetrics()); } public void testEnglishLocale() { setLocale("en", "EN"); String cancelString = getContext().getString(R.string.cancel); assertEquals("Cancel", cancelString); } public void testGermanLocale() { setLocale("de", "DE"); String cancelString = getContext().getString(R.string.cancel); assertEquals("Abbrechen", cancelString); } public void testSpanishLocale() { setLocale("es", "ES"); String cancelString = getContext().getString(R.string.cancel); assertEquals("Cancelar", cancelString); } } ``` you can easily convert that code to Kotlin. 
Upvotes: 3 <issue_comment>username_2: In my experience, setting locale at runtime is simply not reliable. This guy has a lot more to say about the topic here: <https://proandroiddev.com/change-language-programmatically-at-runtime-on-android-5e6bc15c758> You should try using Firebase Test Lab or similar services and run your tests on different devices (which have different locales set) Upvotes: 2 [selected_answer]<issue_comment>username_3: Kotlin version, based on the post from @username_1 ``` private fun setLocale(language: String, country: String) { val locale = Locale(language, country) // here we update locale for date formatters Locale.setDefault(locale) // here we update locale for app resources val context: Context = getApplicationContext() val res: Resources = context.resources val config: Configuration = res.configuration config.setLocales(LocaleList(locale)) res.updateConfiguration(config, res.displayMetrics) } ``` Upvotes: 1
2018/03/22
682
2,824
<issue_start>username_0: I've been trying to get the values in a multiple select box and put it in an array. I've tried this: JQUERY ``` var selectedValues = $('#multipleSelect').val(); ``` HTML ``` Text 1 Text 2 Text 3 ``` By <NAME> from [this SO question](https://stackoverflow.com/questions/3243476/how-to-get-multiple-select-box-values-using-jquery). I was wondering if there was a way to get all the values in the multiple select box without having to select anything.
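To answer the last part directly: to get *all* the values (selected or not), iterate the select element's `options` collection instead of calling `.val()`. In a browser that collection comes from `document.querySelector('#multipleSelect').options` (or `$('#multipleSelect option')` with jQuery); a plain array stands in for it in this sketch:

```javascript
// Stand-in for the options collection of the <select> in the question.
const options = [
  { value: '1', text: 'Text 1' },
  { value: '2', text: 'Text 2' },
  { value: '3', text: 'Text 3' },
];

// Array.from also works on a live HTMLOptionsCollection in the browser.
const allValues = Array.from(options, opt => opt.value);
console.log(allValues); // [ '1', '2', '3' ]
```

The jQuery equivalent would be roughly `$('#multipleSelect option').map(function () { return this.value; }).get()`.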
2018/03/22
509
1,854
<issue_start>username_0: I have 32-byte numbers and I store them in a `uint8_t*` buffer. How do I do arithmetic on 32-byte numbers in C? Example: add, mod, multiplication. Thanks!<issue_comment>username_1: You basically need to do math just like we learn in elementary school: Add individual elements and maintain a carry byte; or multiply each pair of byte elements, apply the appropriate offset and sum everything up. This would all be loops, naturally. These can be improved upon in various ways (e.g. [Karatsuba's algorithm](https://en.wikipedia.org/wiki/Karatsuba_algorithm) for multiplication, and beyond that - discrete FFT [can be used](http://www.cs.rug.nl/~ando/pdfs/Ando_Emerencia_multiplying_huge_integers_using_fourier_transforms_paper.pdf) for a multiplication which is O(n log(n)) in the number of fixed-size elements) - but you should start simple. Now, you don't have to reinvent the wheel yourself; there are several FOSS libraries for these kinds of "Big Integer" or BigInt structures, e.g. [this one](https://github.com/andreazevedo/biginteger) or the even more popular [LibTomMath](https://github.com/libtom/libtommath) suggested by @deamentiamundi. There are even more of these in C++ if you're not limited to C only. Finally - instead of working with individual bytes - assuming the number of bytes is divisible by 2, 4 or 8 you can use `uint16_t`, `uint32_t` or `uint64_t` as the basic unit with which you work. Upvotes: 1 <issue_comment>username_2: The public domain big integer C-library LibTomMath, to be found at <https://github.com/libtom/libtommath>, has one advantage: it includes a full and easy to understand description of the algorithms in `libtommath/doc` (you will need LaTeX to build it from scratch, otherwise ask Google for a copy of `tommath.pdf`). Even the source is well readable in contrast to e.g. GMP! Upvotes: 0
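As a concrete starting point for the schoolbook approach described in the first answer, here is a minimal sketch of 32-byte addition with a carry. It assumes the buffer is little-endian (least significant byte first); reverse the loop direction if your numbers are stored big-endian:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* out = a + b over 32-byte little-endian numbers.
 * Returns the final carry; nonzero means the sum overflowed 256 bits. */
uint8_t add256(uint8_t out[32], const uint8_t a[32], const uint8_t b[32])
{
    unsigned carry = 0;
    for (size_t i = 0; i < 32; i++) {
        unsigned sum = (unsigned)a[i] + (unsigned)b[i] + carry;
        out[i] = (uint8_t)(sum & 0xFFu); /* low 8 bits stay in this byte */
        carry = sum >> 8;                /* overflow moves to the next byte */
    }
    return (uint8_t)carry;
}
```

Subtraction is the same loop with a borrow; schoolbook multiplication is a nested loop accumulating partial products into a 64-byte temporary; and mod can then be built from comparison plus repeated subtraction (byte-wise long division).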
2018/03/22
555
1,955
<issue_start>username_0: I'm using JDBC via Clojure and want to search my postgres fields for whether they literally contain a question mark, but can't seem to get anything to work regardless of how I try to escape or double up my question marks. ``` (jdbc/query *db* "SELECT * FROM conversations WHERE full_text @@ to_tsquery('?')") => () ```<issue_comment>username_1: You may have more luck [using the `jdbc/execute!` function](http://clojure-doc.org/articles/ecosystem/java_jdbc/using_sql.html#Deleting%20rows), as it is basically a raw SQL string passed to the DB: ``` (j/execute! db-spec ["update fruit set cost = ( 2 * grade ) where grade > ?" 50.0] ) ``` I would leave off the parameter (at least to start), more like: ``` (j/execute! db-spec ["update fruit set cost = ( 2 * grade ) where grade > 50.0" ]) ``` Except in your case the `?` may be interpreted as a placeholder for a param. Maybe escaping will be required. Upvotes: 0 <issue_comment>username_2: The documentation for query seems to say that the parameter should be a vector (like in Alan's execute samples). Is that just a copy/paste mistake? That said - are you sure you're looking in the right spot? To me it seems your `to_tsquery('?')` is the issue here. That returns nothing. ``` select to_tsquery('?'); NOTICE: text-search query contains only stop words or doesn't contain lexemes, ignored Successfully run. Total query runtime: 73 msec. 1 rows affected. ``` This doesn't seem to be a Clojure or JDBC question, this is just postgresql. Can you describe your schema and what you want to achieve in more detail? Upvotes: 1 <issue_comment>username_3: The trick was fairly obvious: just use a prepared statement where its ? points to a value that includes your searched-for ?. ``` (require '[honeysql.core :as sql]) (defn dbr [s] (jdbc/query *db* s)) (-> {:select [:*] :from [:conversations] :where [:like :full_text "%?%"]} sql/format dbr) ``` Upvotes: 1
2018/03/22
671
2,486
<issue_start>username_0: I have a large VB.net application that does FEM structural analysis. It requires double precision math. The application also uses DirectX for the graphics. I now know that DirectX intentionally sets the "floating-point unit” (FPU) to single precision by default when it starts. That is a big problem. I need to figure out how to start DirectX but preserve the double precision. Currently I start DirectX with the following: ``` Dev = New Device(0, DeviceType.Hardware, Panel2, CreateFlags.SoftwareVertexProcessing, pParams) ``` I have read that using “CreateFlags.FpuPreserve” as shown below will preserve the double precision. But when I try this DirectX does not start. ``` Dev = New Device(0, DeviceType.Hardware, Panel2, CreateFlags.FpuPreserve, pParams) ``` Can anybody tell me how to do start DirectX from with VB.net and preserve the double precision?<issue_comment>username_1: First, to get your code to work, you need to still specify software vertex processing. In VB, this is done as `Dev = New Device(0, DeviceType.Hardware, Panel2, CreateFlags.SoftwareVertexProcessing Or CreateFlags.FpuPreserve, pParams)` Once you've fixed that, you'll find it still doesn't do what you want. `FpuPreserve` corresponds to [D3DCREATE\_FPU\_PRESERVE](https://msdn.microsoft.com/en-us/library/windows/desktop/bb172527.aspx). This affects the floating-point precision calculations on the *CPU*, not the GPU. To get double-precision GPU calculations, you first needs to use Direct3D 11 API or higher (it looks like use are using Direct3D 9). Even then double-precision support is an optional feature; you have to query [ID3D11Device::CheckFeatureSupport](https://msdn.microsoft.com/en-us/library/windows/desktop/ff476497(v=vs.85).aspx) with [D3D11\_FEATURE\_DOUBLES](https://msdn.microsoft.com/en-us/library/windows/desktop/ff476124(v=vs.85).aspx) to see if it exists. 
In SharpDX it would be `device.CheckFeatureSupport(Feature.ShaderDoubles)`, where `device` is of type `Direct3D11.Device`, or `GraphicsDeviceFeatures.HasDoublePrecision` if you are using SharpDX.Toolkit.Graphics. Upvotes: 1 <issue_comment>username_2: Yes that did it thanks! The call needed the "Or CreateFlags.FpuPreserve" added to it as shown above, the "or" being critical. Double precision is then maintained in CPU / FPU calculations, which is exactly what I need. I don't know what precision is being used in the graphics calculations but I don't care about that. Upvotes: 0
2018/03/22
649
2,223
<issue_start>username_0: I have to get specific values from the Request Header -- referer ``` referer: https://xxx.xx.xx/xx/xx?programGroupName=xxx&fundraiserPageID=3041315&participantFirstName=Test&participantLastName=Testerson&displayName=Test%20Testerson&fundraiserPageURL=http://xxx.xxx.xx/wpa/xx/xxx ``` In the above I have to get only the fundraiserPageID value, that is '3041315', and after that the participantFirstName value 'Test' plus other values, and then store them in the previously defined variables so that I can reuse them in the next request. [![enter image description here](https://i.stack.imgur.com/EfAFl.png)](https://i.stack.imgur.com/EfAFl.png) I tried the following attempt using a regular expression and it shows nothing. I'm doing something wrong, not sure how to define the regular expression for it, etc... [![enter image description here](https://i.stack.imgur.com/ALKfk.png)](https://i.stack.imgur.com/ALKfk.png)<issue_comment>username_1: As you show in your screenshot, although it's a URL, it's part of the value of referer in the **request header** and therefore you must choose the `Request Headers` radio button in the `Field to check` parameter. Also put a value in the Match No. field; it's a required field according to the [doc](https://jmeter.apache.org/usermanual/component_reference.html#Regular_Expression_Extractor), use 1 for getting the first match. > > For match number > 0, matching will stop as soon as enough matches have been found. > > > Upvotes: 1 <issue_comment>username_2: Amend your Regular Expression Extractor configuration as follows: 1. Switch "Field to check" to `Request Headers` 2. 
Change your Regular Expression to `fundraiserPageID=(\d+)` as I see neither quotation marks nor trailing `>` in your original Referer header [![JMeter Regular Expressions Extractor](https://i.stack.imgur.com/eHflC.png)](https://i.stack.imgur.com/eHflC.png) References: * [JMeter: Regular Expressions](http://jmeter.apache.org/usermanual/regular_expressions.html) * [Using RegEx (Regular Expression Extractor) with JMeter](https://guide.blazemeter.com/hc/en-us/articles/207421325-Using-RegEx-Regular-Expression-Extractor-with-JMeter) * [Perl 5 Regex Cheat sheet](https://perlmaven.com/regex-cheat-sheet) Upvotes: 0
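As a quick sanity check outside JMeter, the same two patterns can be exercised against the Referer value from the question — here with Python's `re` module, whose regex flavour is compatible for these simple patterns:

```python
import re

referer = ("https://xxx.xx.xx/xx/xx?programGroupName=xxx&fundraiserPageID=3041315"
           "&participantFirstName=Test&participantLastName=Testerson"
           "&displayName=Test%20Testerson")

page_id = re.search(r"fundraiserPageID=(\d+)", referer).group(1)
first_name = re.search(r"participantFirstName=(\w+)", referer).group(1)

print(page_id, first_name)  # 3041315 Test
```

In JMeter itself, each pattern would go into its own Regular Expression Extractor with Template `$1$` and Match No. `1`, storing the value in a variable for the next request.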
2018/03/22
716
2,422
<issue_start>username_0: I have the following function: ``` export const functionName = (key) => { const data = [ { displayOrder: 0, key: 'key-name-1', }, { displayOrder: 2, key: 'key-name-2', }, ]; for (let index in data) { return data[index].displayOrder === 0 ? data[index].key : data[0].key; } }; ``` The purpose of the function is to return the `key` with the lowest displayOrder. This works, but is there a better more slick method to achieve this? Secondly, how best could I create a similar version to re-order the entire array based on `displayOrder`?<issue_comment>username_1: The proper method to use when you want to extract a single thing from an array (and you can't identify the element by itself with `.find`) is to use `.reduce`: ```js const functionName = () => { const data = [ { displayOrder: 0, key: 'key-name-1', }, { displayOrder: 2, key: 'key-name-2', }, ]; const lowestDisplayObj = data.reduce((lowestObjSoFar, currentObj) => { if (currentObj.displayOrder < lowestObjSoFar.displayOrder) return currentObj; return lowestObjSoFar; }, { displayOrder: Infinity, key: null }); return lowestDisplayObj.key; }; console.log(functionName()); ``` Note that your current argument to `functionName`, `key`, was unused in your snippet, I'm not sure what it's for. You could also use .sort() and select the first element in the array, but that's more computationally expensive than it needs to be. Upvotes: 2 [selected_answer]<issue_comment>username_2: You could make use of array reduce and do something like below. ```js const data = [{ displayOrder: 10, key: 'key-name-10', }, { displayOrder: 2, key: 'key-name-2', }, { displayOrder: 1, key: 'key-name-1 Lowest', } ]; let a = data.reduce((prev, item) => { if (!prev) { return item; } return (item.displayOrder < prev.displayOrder) ? 
item : prev; }); console.log(a.key); ``` Upvotes: 0 <issue_comment>username_3: ```js const data = [ { displayOrder: 0, key: 'key-name-1', }, { displayOrder: 2, key: 'key-name-2', }, ]; const min = Math.min.apply(null, data.map(({displayOrder}) => +displayOrder)); const obj = data.find((obj) => obj.displayOrder == min); console.log(obj.key); ``` Upvotes: 0
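For the second part of the question — re-ordering the whole array by `displayOrder` — a non-mutating sort with a numeric comparator is enough; `slice()` copies the array first so the original stays untouched:

```javascript
const data = [
  { displayOrder: 2, key: 'key-name-2' },
  { displayOrder: 0, key: 'key-name-1' },
];

// Ascending by displayOrder; use b.displayOrder - a.displayOrder for descending.
const ordered = data.slice().sort((a, b) => a.displayOrder - b.displayOrder);

console.log(ordered.map(item => item.key)); // [ 'key-name-1', 'key-name-2' ]
```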
2018/03/22
587
1,875
<issue_start>username_0: I want to use a JavaScript variable which is equal to the url of an image in an html img tag. I need the following tag to be able to display the image that is tied to the variable. ```js document.getElementById("id-of-img-tag").src = imgVar; document.getElementById("id-of-img-tag").innerHTML = imgVar; ``` ```html ![img]( ) ``` Both of the lines of JS result with the alternative being displayed for the img tag "img".<issue_comment>username_1: 1. You are using the `getElementById()` method on an element that is declared with `class="id-of-img-tag"`. Try `querySelector(".id-of-img-tag")` instead. 2. The second statement makes no sense. `innerHTML` will parse a string into HTML. `imgVar` is not a string nor is it even defined. `imgVar` should be just a simple string that literally represents the url of the image: ``` var imgVar = "url of img" ``` 3. Assign `imgVar` to the `src` attribute of `![]()` Demo ---- ```js var imgVar = "https://www.jqueryscript.net/images/Simplest-Responsive-jQuery-Image-Lightbox-Plugin-simple-lightbox.jpg"; document.querySelector(".id-of-img-tag").src = imgVar; ``` ```html ![img]() ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: The answer lies in your HTML code. You have assigned the tag a **class** of "id-of-img-tag", and then in your javascript you are using getElementBy**Id**(). Because there isn't an **id** called "id-of-img-tag" nothing is changed, and so the browser sees an 'empty' **src** and displays the **alt** text. Also, in the code you supplied you are not defining imgVar, which is returning the 'undefined' error which you show. Here's a working example: ``` ![img]() let url = "https://static.guim.co.uk/sys-images/Guardian/Pix/pictures/2014/4/11/1397210130748/Spring-Lamb.-Image-shot-2-011.jpg"; document.getElementById("id-of-img-tag").src = url; ``` Upvotes: 0
2018/03/22
700
2,261
<issue_start>username_0: How to use the `.toggle()` method but not for show and hide purposes? For example, when I click on a certain div I would like to animate its position `$(div).animate({"left":"+=50px"});` and then on the second click to return the div to the same position `$(div).animate({"left":"-=50px"})`. I know there are other solutions but I would like to achieve this with `.toggle()` without hiding and showing the div. Any ideas?<issue_comment>username_1: ```js $("#myDiv").toggle(function() { $(this).stop().animate({ left:"+=50" }, 500); }, function() { $(this).stop().animate({ left:"-=50" }, 500); }); ``` ```css #myDiv{ background-color:black; width:100px; height:100px; position:absolute; left:50px; } ``` ```html ``` hope this answers your question. However, jQuery 1.9 and newer do not allow this feature. Upvotes: 1 <issue_comment>username_2: there is no toggle function using click since jQuery 1.9. But you can still do this in two ways: [for more explanation](https://stackoverflow.com/questions/4911577/jquery-click-toggle-between-two-functions) ```js $(function(){ //basic method var clicked = false; $('#mydiv1').click(function(){ if(clicked){ clicked = false; $(this).animate({"left":"+=50px"}); }else{ clicked=true; $(this).animate({"left":"-=50px"}); } }); //create new function like old built-in function $.fn.clickToggle = function(func1, func2) { var funcs = [func1, func2]; this.data('toggleclicked', 0); this.click(function() { var data = $(this).data(); var tc = data.toggleclicked; $.proxy(funcs[tc], this)(); data.toggleclicked = (tc + 1) % 2; }); return this; }; $('#mydiv2').clickToggle(function(){ $(this).animate({"left":"+=50px"}); }, function(){ $(this).animate({"left":"-=50px"}); }); }); ``` ```css #mydiv1{ background-color: yellow; width: 50px; height: 50px; position: absolute; left: 100px; } #mydiv2{ background-color: blue; width: 50px; height: 50px; position: absolute; left: 200px; top: 100px; } ``` ```html ``` Upvotes: 0
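Since the two-callback form of `.toggle()` is gone in jQuery 1.9+, the alternating behaviour is easy to recreate with a tiny framework-free helper — a sketch (wire the returned function up as the click handler and put the two `animate()` calls in the callbacks):

```javascript
// Returns a handler that alternates between two callbacks on successive calls.
function makeToggler(onFirst, onSecond) {
  let clicked = false;
  return function () {
    (clicked ? onSecond : onFirst).call(this);
    clicked = !clicked;
  };
}

// Illustrated with a plain number instead of animate():
let left = 50;
const toggle = makeToggler(
  function () { left += 50; },  // first click: move right
  function () { left -= 50; }   // second click: move back
);

toggle(); console.log(left); // 100
toggle(); console.log(left); // 50
```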
2018/03/22
952
3,533
<issue_start>username_0: I have an MSSQL stored procedure which explicitly sets an output parameter to a value. However, when I execute that stored procedure from an Classic ASP page using an ADODB command, the output parameter is null. Stored procedure: ```sql ALTER PROCEDURE [dbo].[recordResponse] @survey_id smallint OUTPUT, @member_id varchar(10) OUTPUT, @response varchar(1000) OUTPUT, @comment varchar(1000) OUTPUT, @response_id int OUTPUT, @timestamp datetime OUTPUT, @status int OUTPUT AS BEGIN SET NOCOUNT ON; SET @timestamp = getdate(); DECLARE @surveyExists as binary Select @surveyExists = 1 from surveys where survey_id = @survey_id; if (@surveyExists = 1) BEGIN insert into responses(member, [timestamp], response, comments, survey_id) values(@member_id, @timestamp, @response, @comment, @survey_id); set @response_id = SCOPE_IDENTITY(); set @status = 200; END else set @status = 400; END ``` Classic ASP: ```vb Set cmd = Server.CreateObject("ADODB.Command") 'Initiate the command object cmd.CommandType = 4 'Stored Procedure cmd.CommandText = "recordResponse" 'Name of the stored procedure cmd.ActiveConnection = connString 'Using which connection? 'Add the parameters cmd.Parameters.Append cmd.CreateParameter("@survey_id", 2, 3, 0, 1) cmd.Parameters.Append cmd.CreateParameter("@member_id", 200, 3, 10, memberNo) cmd.Parameters.Append cmd.CreateParameter("@response", 200, 3, 1000, answer) cmd.Parameters.Append cmd.CreateParameter("@comment", 200, 3, 1000, comment) cmd.Parameters.Append cmd.CreateParameter("@response_id", 2, 2) cmd.Parameters.Append cmd.CreateParameter("@timestamp", 135, 2) cmd.Parameters.Append cmd.CreateParameter("@status", 3, 2) 'Execute stored procedure Call cmd.Execute() Response.write "[" & cmd("@status") & "]" ``` This results in an output of `[]` whereas I am expecting an output of `[200]` or `[400]`. 
I have looked at various other [similar threads](https://stackoverflow.com/questions/28682604/why-the-output-parameter-of-my-adodb-command-does-not-retrieve-a-value-when-exec) and taken on board suggestions and solutions including iterating over the resulting recordset, but none have solved my problem. Can anyone see where I am going wrong???<issue_comment>username_1: Please try ``` cmd.Parameters("@status").Value ``` Upvotes: 0 <issue_comment>username_2: The CreateParameter arguments are to be set like this: Set objparameter=objcommand.CreateParameter (name,type,direction,size,value) in ADO. Please set each parameter's type, direction & size correctly. ``` Response.write "[" & cmd.Parameters("@status").Value & "]" ``` Upvotes: 1 <issue_comment>username_3: I found the problem: I was using the wrong dataType for the parameter `@response_id`. I had `adSmallInt` (2), whereas it should have been `adInteger` (3) to match the data type declared in the stored procedure. MSSQL was throwing an exception, but my ASP script had `On Error Resume Next` specified which was silently swallowing the error. So the script ran and the SP executed the `INSERT` statement but couldn't get far enough to set the `@status` parameter, but the script kept running anyway. Once I commented out the `On Error Resume Next`, I could see the exception and found the culprit. Thanks for all your suggestions anyway. Much appreciated! Upvotes: 1 [selected_answer]
2018/03/22
1,233
3,862
<issue_start>username_0: ``` import argparse # construct the argument parse and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", required=True, help="path to input image") ap.add_argument("-p", "--prototxt", required=True, help="path to Caffe 'deploy' prototxt file") ap.add_argument("-m", "--model", required=True, help="path to Caffe pre-trained model") ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections") args = vars(ap.parse_args()) ``` I'm running a face recognition example through OpenCV. I use 'argparse' at this point, and get this error. ``` args = vars(ap.parse_args()) ``` from this code. ``` usage: ipykernel_launcher.py [-h] -i IMAGE -p PROTOTXT -m MODEL [-c CONFIDENCE] ipykernel_launcher.py: error: the following arguments are required: -i/-- image, -p/--prototxt, -m/--model An exception has occurred, use %tb to see the full traceback. SystemExit: 2 C:\Users\user\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py:2918: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D. warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1) ``` How can I solve it? This is my computer environment and use the Jupyter-notebook * Python: 3.6.4 64bit [MSC v.1900 64 bit (AMD64)] * IPython: 6.2.1 * OS: Windows 10 10.0.15063 SP0 * argparse: 1.1<issue_comment>username_1: This is hard to answer without you sharing how you try to run the file. The error is telling you it did not find the required arguments passed in when you ran the file. Since you specified `required = True` for the -i, -p, and -m arguments you must always pass them in or make them optional if they are not needed to run your program. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: In an `ipython` session: ``` In [36]: import argparse In [37]: # construct the argument parse and parse the arguments ...: ap = argparse.ArgumentParser() ...: ap.add_argument("-i", "--image", required=True, ...: help="path to input image") ...: ap.add_argument("-p", "--prototxt", required=True, ...: help="path to Caffe 'deploy' prototxt file") ...: ap.add_argument("-m", "--model", required=True, ...: help="path to Caffe pre-trained model") ...: ap.add_argument("-c", "--confidence", type=float, default=0.5, ...: help="minimum probability to filter weak detections") ...: args = vars(ap.parse_args()) ...: usage: ipython3 [-h] -i IMAGE -p PROTOTXT -m MODEL [-c CONFIDENCE] ipython3: error: the following arguments are required: -i/--image, -p/--prototxt, -m/--model An exception has occurred, use %tb to see the full traceback. SystemExit: 2 /usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py:2918: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D. warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1) ``` I can run this parser by modifying `sys.argv`: ``` In [39]: import sys In [40]: sys.argv[1:] Out[40]: ['--pylab', '--nosep', '--term-title', '--InteractiveShellApp.pylab_import_all=False'] In [41]: sys.argv[1:] = '-i image -p proto -m amodel'.split() In [42]: args = ap.parse_args() In [43]: args Out[43]: Namespace(confidence=0.5, image='image', model='amodel', prototxt='proto') ``` or with ``` In [45]: ap.parse_args('-i image -p proto -m amodel'.split()) Out[45]: Namespace(confidence=0.5, image='image', model='amodel', prototxt='proto') ``` I often use this method to test a parser. If this parser was in a script, and I ran it from command line without the arguments, it would print the `usage` and then exit. That exit is what `ipython` is catching and displaying as `SystemExit: 2`. Upvotes: 2 <issue_comment>username_3: I think you should set required=True to False Upvotes: 0
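A related pattern that avoids touching `sys.argv` in a notebook at all: pass the argument list straight to `parse_args` (or use `parse_known_args` so ipykernel's own `-f .../kernel.json` flag is simply ignored). A sketch with the parser from the question:

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True)
ap.add_argument("-p", "--prototxt", required=True)
ap.add_argument("-m", "--model", required=True)
ap.add_argument("-c", "--confidence", type=float, default=0.5)

# Supply the values explicitly instead of letting argparse read sys.argv,
# which inside Jupyter holds the kernel's flags, not yours.
args = vars(ap.parse_args([
    "-i", "image.jpg",
    "-p", "deploy.prototxt",
    "-m", "model.caffemodel",
]))

print(args["image"], args["confidence"])  # image.jpg 0.5
```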
2018/03/22
1,345
4,437
<issue_start>username_0: I'm aware there are many questions on how to turn an array into an object. I know you can do something like this.. `object = {...arr};` but my question is more specific. Say I have an array of objects like so.. ``` [ { dataType: something, dataValue: book }, { dataType: food, dataValue: brocolli }, { dataType: object, dataValue: chair }, ] ``` and I want to create an object that looks like this.. ``` { something: book, food: brocolli, object: chair } ``` I tried to do something like this... ``` arr.forEach(item => { newarray.push({arr.dataType: arr.dataValue}); }) newObject = {...newarray} ``` but that didn't work.. any help would be appreciated!<issue_comment>username_1: Like this one? ``` arr.reduce((acc,val)=>{acc[val.dataType]=val.dataValue;return acc;},{}) ``` Upvotes: 1 <issue_comment>username_2: You can use the function `reduce`. ```js var array = [ { dataType: 'something', dataValue: 'book' }, { dataType: 'food', dataValue: 'brocolli' }, { dataType: 'object', dataValue: 'chair' },]; var result = array.reduce((a, {dataType, dataValue}) => { a[dataType] = dataValue; return a; }, {}); console.log(result); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Using Spread Syntax and computed property names. 
```js var array = [ { dataType: 'something', dataValue: 'book' }, { dataType: 'food', dataValue: 'brocolli' }, { dataType: 'object', dataValue: 'chair' },]; var result = array.reduce((a, {dataType, dataValue}) => ({...a, [dataType]: dataValue}), {}); console.log(result); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: You can use `reduce` ```js var arr = [{ dataType: 'something', dataValue: 'book' }, { dataType: 'food', dataValue: 'brocolli' }, { dataType: 'object', dataValue: 'chair' }, ] var obj = arr.reduce((c, {dataType,dataValue}) => Object.assign(c, {[dataType]: dataValue}), {}); console.log(obj); ``` Upvotes: 2 <issue_comment>username_4: Some [`Array.prototype.reduce()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce), destructuring and spread do the job: ```js var a = [ { dataType: 'something', dataValue: 'book' }, { dataType: 'food', dataValue: 'brocolli' }, { dataType: 'object', dataValue: 'chair' }, ]; // ES6 var new6 = a.reduce((o, {dataType, dataValue}) => ({...o, [dataType]: dataValue}), {}); console.log(new6); //ES5 var new5 = a.reduce(function(o, i) { var prop = {}; prop[i.dataType] = i.dataValue; return Object.assign(o, prop); }, {}); console.log(new5); //ES3.1 (lol) var new31 = {}, i; for (i = 0; i < a.length; i++) { new31[a[i].dataType] = a[i].dataValue; } console.log(new31); ``` Upvotes: 2 <issue_comment>username_5: You probably want to use `newarray[item.dataType] = item.dataValue;` to set those key-value pairs into the newarray. 
```js var arr = [ { dataType: 'something', dataValue: 'book' }, { dataType: 'food', dataValue: 'brocolli' }, { dataType: 'object', dataValue: 'chair' }, ]; var newarray = {}; arr.forEach(item => { newarray[item.dataType] = item.dataValue; }); newObject = {...newarray}; console.log(newObject); ``` Upvotes: 1 <issue_comment>username_6: As an alternative to `reduce`, you can use `Object.assign()` and `Array.map()`. This is very similar to your original attempt, except for using object assign. I prefer this method, because the spread is only performed once, after the mapping has been created. My gut tells me this is more performant than `reduce`, but I haven't tested. ```js var arr = [{ dataType: 'something', dataValue: 'book' }, { dataType: 'food', dataValue: 'brocolli' }, { dataType: 'object', dataValue: 'chair' }, ] var obj = Object.assign({}, ...arr.map(({ dataType, dataValue }) => ({ [dataType]: dataValue }))) console.log(obj); ``` Upvotes: 0
2018/03/22
1,063
3,590
<issue_start>username_0: I'm using `selenium-server` version 3.0.1 and `nightwatch` version ^0.9.12 on node 8.9.0. My e2e tests do run, clicks work, and reading the DOM works, but `setValue` just doesn't. For example, the following test: ``` browser .url("...") .waitForElementPresent('select[name=foo]', 5000) .click('select[name=foo] option:nth-child(2)') .waitForElementPresent('input[name=bar]', 5000) .setValue('input[name=bar]', "hello world") .getValue('input[name=bar]', function(input) { this.assert.equal(input.value, "hello world"); }) .end(); ``` will open the url, wait for `foo` and click the second option. It will wait for `bar`, then fails: ``` Running: test ✔ Element was present after 24 milliseconds. ✔ Element was present after 28 milliseconds. ✖ Failed [equal]: ('' == 'hello world') - expected "hello world" but got: "" at Object. (/test/e2e/specs/test.js:49:21) FAILED: 1 assertions failed and 2 passed (4.692s) \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ TEST FAILURE: 1 assertions failed, 2 passed. (4.9s) ✖ test - run through apply process (4.692s) Failed [equal]: ('' == 'hello world') - expected "hello world" but got: "" ``` If I replace the `setValue` with a delay and enter a value by hand, the test will pass, so `getValue` is working. This does run, and pass, on other systems, but I can't get it working on my own so I think it's a `selenium-server` issue. I've tried a lot of the 101 fixes, clearing the npm cache, re-running an `npm install`, etc. 
But with no errors other than the failure, how do I debug this?<issue_comment>username_1: The error says it all : ``` Failed [equal]: ('' == 'hello world') - expected "hello world" but got: "" ``` In your code block you have induced `wait` for the *WebElement* identified as **'input[name=bar]'** and next invoked `setValue()` which is successful as follows : ``` .waitForElementPresent('input[name=bar]', 5000) .setValue('input[name=bar]', "hello world") ``` Now the *JavaScript* associated with this *WebElement* identified as **'input[name=bar]'** will require some time to render the value within the *HTML DOM*. But in your code block you are trying to access the entered value (in the previous step) too early. Hence your script finds `''` (an empty string) as the *value*. Solution -------- You need to induce a *waiter*, i.e. an [expect](http://nightwatchjs.org/api#expect-text) clause for the value to be rendered within the *HTML DOM* through the associated *JavaScript* as follows : ``` .waitForElementPresent('input[name=bar]', 5000) .setValue('input[name=bar]', "hello world") .expect.element('input[name=bar]').to.have.value.that.equals('hello world') .getValue('input[name=bar]', function(input) { this.assert.equal(input.value, "hello world"); }) ``` Upvotes: 0 <issue_comment>username_2: Assuming you are trying to test using Chrome, you need to update your ChromeDriver. Chrome 65 was released recently and older ChromeDriver versions are apparently incompatible with it. 
Download the latest one from the [downloads](https://sites.google.com/a/chromium.org/chromedriver/downloads) page. Make Nightwatch use it, `nightwatch.json` - ``` { ... "selenium": { ... "cli_args": { "webdriver.chrome.driver": "path/to/chromedriver.exe" ``` Assuming Nightwatch uses it (you can see which one it uses using the Windows Task Manager - assuming you are using Windows - look for the command line of chromedriver.exe), `setValue` should now work again. Upvotes: 2 [selected_answer]
2018/03/22
1,385
4,880
<issue_start>username_0: I found this piece of code online [Countdown Timer](https://codepen.io/SitePoint/pen/MwNPVq) And attempted to implement it on a page but it wasn't counting down. Thanks to all who commented. Here's the NOW WORKING sample code (see below snippet) ```js function getTimeRemaining(endtime) { var t = Date.parse(endtime) - Date.parse(new Date()); var seconds = Math.floor((t / 1000) % 60); var minutes = Math.floor((t / 1000 / 60) % 60); var hours = Math.floor((t / (1000 * 60 * 60)) % 24); var days = Math.floor(t / (1000 * 60 * 60 * 24)); return { 'total': t, 'days': days, 'hours': hours, 'minutes': minutes, 'seconds': seconds }; } function initializeClock(id, endtime) { var clock = document.getElementById(id); var daysSpan = clock.querySelector('.days'); var hoursSpan = clock.querySelector('.hours'); var minutesSpan = clock.querySelector('.minutes'); var secondsSpan = clock.querySelector('.seconds'); function updateClock() { var t = getTimeRemaining(endtime); daysSpan.innerHTML = t.days; hoursSpan.innerHTML = ('0' + t.hours).slice(-2); minutesSpan.innerHTML = ('0' + t.minutes).slice(-2); secondsSpan.innerHTML = ('0' + t.seconds).slice(-2); if (t.total <= 0) { clearInterval(timeinterval); } } updateClock(); var timeinterval = setInterval(updateClock, 1000); } var deadline = new Date(Date.parse(new Date()) + 15 * 24 * 60 * 60 * 1000); initializeClock('clockdiv', deadline); ``` ```css #clockdiv{ font-family: sans-serif; color: #fff; display: inline-block; font-weight: 100; text-align: center; font-size: 30px; } #clockdiv > div{ padding: 10px; border-radius: 3px; background: #00BF96; display: inline-block; } #clockdiv div > span{ padding: 15px; border-radius: 3px; background: #00816A; display: inline-block; } .smalltext{ padding-top: 5px; font-size: 16px; } ``` ```html Days Hours Minutes Seconds ``` Perhaps it needs to do a body onload or I'm missing something because when I try the code it just shows empty boxes and no timer. 
While there are many great clock plugins, it makes sense to just do this in raw JavaScript, as the article says:

* the code will be lightweight because it will have zero external scripts
* the website will perform better because you won't need to load external scripts and style sheets
* you have more control because you will have built the clock to behave exactly the way you want it to (rather than trying to bend a plugin to your will)

It seems like a useful piece of code if it could be made to work with minimal changes. What am I missing? Thanks!
2018/03/22
745
2,627
<issue_start>username_0: I am working on an AngularJS application. I'm iterating over the response and displaying it in a table; when the user clicks on a table row, I'm passing the JSON value to the JavaScript, where I process it further and display information. ``` | {{info.id}} | [product1 Data](javascript:void(0)) | [product2 Data](javascript:void(0)) | [product3 Data](javascript:void(0)) | [Show All Product Data](javascript:void(0)) | ``` In the above HTML code, when the user clicks on **Show All Product Data** I want to pass the `x.product1Data,x.product2Data,x.product3Data` values to showDetails(..). Any inputs on how to pass all 3 objects to showDetails when the user clicks on **Show All Product Data**? js code: ``` $scope.showDetails = function(productInfo) { // productInfo contains the product information angular.forEach(productInfo, function(value, key) { }); }; ```<issue_comment>username_1: You could simply call all `showDetail`s directly: ``` ng-click="showDetails(x.product1Data); showDetails(x.product2Data); showDetails(x.product3Data);"> ``` You could also pass multiple arguments as you would to a normal function, but you would have to refactor `showDetails` for that. Upvotes: 0 <issue_comment>username_2: You could create a function called `showDetailSet()`, which takes an array as an argument. 
It could do something like (you will have to scope it obviously): ``` function showDetailSet(products) { products.forEach(p => showDetails(p)) } ``` Then you would call it like `ng-click="showDetailSet([x.product1Data, x.product2Data, x.product3Data])"` Upvotes: 1 <issue_comment>username_3: Depending on how your Json array is formatted, you may be able to push a concatenated variable such as ``` showDetails(x.product1Data.concat(x.product2Data.concat(x.product3Data))); ``` Upvotes: 0 <issue_comment>username_4: Normally parameters can be sent by the following code, ``` showDetails(x.product1Data, x.product2Data, x.product3Data) ``` Don't know why this is not solving your problem; if it is not working then follow below. Just pass `info` and you will get all the `productData` in the `showDetails` function ``` | {{info.id}} | [product1 Data](javascript:void(0)) | [product2 Data](javascript:void(0)) | [product3 Data](javascript:void(0)) | [Show All Product Data](javascript:void(0)) | ``` and in the controller ``` $scope.showDetails = function(productInfo, index) { if (index == -1) { // do your stuff for show all details } else { // productInfo contains the product information angular.forEach(productInfo, function(value, key) { }); } }; ``` Upvotes: 0
2018/03/22
731
2,531
<issue_start>username_0: I have set up a Signup, Login, Logout with `bcrypt`. I have set up basic routing as shown below, however I keep getting the same error:

> "no route matches GET '/Signup'"

Anyone, please help, I'm confused - '/Signup' should go to sessions#new, which is new.html.erb in sessions, right? Please clarify... There might also be something wrong with my controllers, which I will post if necessary as well. Thanks for any help.

```
Rails.application.routes.draw do
  resources :users, controller: :sessions
  root 'users#index'

  get '/signup', to: 'users#new'
  get '/signup', to: 'users#update'
  post '/signup', to: 'users#create'

  get '/login', to: 'sessions#new'
  post '/login', to: 'sessions#create'
  get '/logout', to: 'sessions#destroy'
end
```
2018/03/22
1,061
3,145
<issue_start>username_0: I have a source string that I want to split the `data` out: ``` String source = "data|junk,data|junk|junk,data,data|junk"; String[] result = source.split(","); ``` The above gives `data|junk, data|junk|junk, data, data|junk`. To further get the data out, I did this: ``` for (int i = 0; i < result.length; i++) { result[i] = result[i].split("\\|")[0]; } ``` Which gives what I wanted `data, data, data, data`. I want to see if it is possible to do it in one split with the right regex: ``` String[] result = source.split("\\|.*?,"); ``` The above gives `data, data, data,data|junk`, in which the last two data are not split. Could you please help with the correct regex to get the result I wanted? Example string: "Ann|xcjiajeaw,Bob|aijife|vdsjisdjfe,Clara,David|rijfidjf" Expected result: "Ann, Bob, Clara, David"<issue_comment>username_1: You can change your regular expression to account for the "junk", then keep matching while it matches data: ``` import java.util.regex.Pattern; import java.util.regex.Matcher; public class RegexTest { public static void main(String[] args) { String input = "Ann|xcjiajeaw,Bob|aijife|vdsjisdjfe,Clara,David|rijfidjf"; Pattern p = Pattern.compile("(\\w+)(\\|\\w+)*,?"); Matcher m = p.matcher(input); while (m.find()) { System.out.println(m.group(1)); } } } ``` The regular expression looks for word characters (letters, digits, and underscores) and captures that. It then looks for a pipe symbol (escaped so that that it does not have a special meaning in the regular expression) with again word characters. This pipe plus word characters can happen any number (zero to many) of times. After that could be a comma, optionally. This prints > > Ann > > > Bob > > > Clara > > > David > > > It also captures the "junk", and you could access that with `m.group(2)` in the loop. 
If you don't want to capture that, insert a `?:` into the regular expression: ``` Pattern.compile("(\\w+)(?:\\|\\w+)*,?"); ``` Upvotes: 2 <issue_comment>username_2: In the string, > > Ann|xcjiajeaw,Bob|aijife|vdsjisdjfe,Clara,David|rijfidjf > > > `\\|.*?,` - this will match `|anynoncommastring,` but this doesn't match the final `|rijfidjf` since that does not end in comma. So to match that, use `(,|$)` instead of just `,`, making the regex `\\|.*?(,|$)` But the above does not match a single isolated comma, so alternating `,` with `\\|.*?(,|$)`, makes the final regex `(\\|.*?(,|$)|,)`. The pattern `(\\|.*?(,|$)|,)` works, ``` String source = "Ann|xcjiajeaw,Bob|aijife|vdsjisdjfe,Clara,David|rijfidjf"; String[] result = source.split("(\\|.*?(,|$)|,)"); for (int i = 0; i < result.length; i++) { System.out.println(result[i]); } ``` Output: ``` Ann Bob Clara David ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: I came up with the following solution: ``` String source = "one|junk,two|junk|junk,three,four|junk|junk"; String[] result = source.split("([|](?:(.*?,(?=[^,]+[|,]|$))|.*$))|,"); System.out.println(Arrays.toString(result)); [one, two, three, four] ``` Upvotes: 1
2018/03/22
984
3,023
<issue_start>username_0: 

```
txt = open("document.txt").read()
spt = txt.split()
new_list = []
for i in spt:
    if i.endswith(','):
        w = i.rstrip(',')
        new_list.append(w)
    elif i.endswith('.'):
        w = i.rstrip('.')
        new_list.append(w)
    else:
        new_list.append(i)
print(new_list)
```

I'm trying to go through my "document.txt" word by word and strip any comma at the end of the word. This is a learning exercise for me. I know there are "better" shorter ways to do this but I'm trying to learn the in's and out's of Python. I have seen a lot of code on here that touches on this question, but I'm still having a hard time getting it to compile anything. Any help would be greatly appreciated.
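For what it's worth, the loop in the question can be collapsed considerably: `str.rstrip` accepts several characters at once and strips any run of them from the end of a word, so one call covers both the comma and the period branches. A minimal sketch (reading from an inline string instead of `document.txt`, which is an assumption for the demo):

```python
def strip_trailing_punctuation(text):
    # rstrip(',.') removes any trailing commas and/or periods in one
    # call, so the if/elif branching from the question is unnecessary.
    return [word.rstrip(',.') for word in text.split()]

sample = "Hello, world. This is a test, honestly."
print(strip_trailing_punctuation(sample))
# → ['Hello', 'world', 'This', 'is', 'a', 'test', 'honestly']
```

The explicit loop in the question is still a fine way to learn; this is just the idiomatic shortcut.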
2018/03/22
632
1,750
<issue_start>username_0: My Table TB\_PRUEBAS has a column called "CltsVcdos" with several repeated customer Ids. Now, I want to add a new column that shows me a 1 when the Id is unique, and 0 when the Id is repeated. Part of the Query [![enter image description here](https://i.stack.imgur.com/C05cq.png)](https://i.stack.imgur.com/C05cq.png) That's what I want [![enter image description here](https://i.stack.imgur.com/I3D2a.png)](https://i.stack.imgur.com/I3D2a.png) Thank you!<issue_comment>username_1: ``` Case When ((ROW_NUMBER() OVER( PARTITION BY CltsVcdos ORDER BY CltsVcdos ASC) ) =1) then 1 else 0 end ``` You can use the above condition to get the UNIQUEVALUES. The following example explains it further: ``` CREATE TABLE [dbo].[TB_PRUEBAS]( [CltsVcdos] [int] NULL ) ON [PRIMARY] INSERT [dbo].[TB_PRUEBAS] ([CltsVcdos]) VALUES (101) INSERT [dbo].[TB_PRUEBAS] ([CltsVcdos]) VALUES (101) INSERT [dbo].[TB_PRUEBAS] ([CltsVcdos]) VALUES (101) INSERT [dbo].[TB_PRUEBAS] ([CltsVcdos]) VALUES (102) INSERT [dbo].[TB_PRUEBAS] ([CltsVcdos]) VALUES (102) INSERT [dbo].[TB_PRUEBAS] ([CltsVcdos]) VALUES (104) SELECT ROW_NUMBER() OVER(ORDER BY CltsVcdos ASC) AS Row#, [CltsVcdos], (ROW_NUMBER() OVER( PARTITION BY CltsVcdos ORDER BY CltsVcdos ASC) ) As RepeatedRowNumber , Case When ((ROW_NUMBER() OVER( PARTITION BY CltsVcdos ORDER BY CltsVcdos ASC) ) =1) then 1 else 0 end As UNIQUEVALUES FROM [dbo].[TB_PRUEBAS] P ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This assumes you have some kind of `identity` *column* which specifies the ordering ``` select CltsVcdos, (case when count(CltsVcdos) over (partition by CltsVcdos) > 1 then 0 else 1 end) as UniqueValues from TB\_PRUEBAS ``` Upvotes: 0
2018/03/22
621
1,709
<issue_start>username_0: So I am using regex101.com to test my string and I can't get the output I need. The sample I made can be viewed here <https://regex101.com/r/YQTW4c/2>. So my regex is this: ``` (.\*?)<\/table> ``` and the sample string: ``` || | | | | ``` I want to get everything inside the table with class datatable which, in this example, is `|| | | | |`. Am I missing something here? Any help would be much appreciated.
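The markup in the sample above did not survive intact, so the exact pattern can't be checked here, but the general technique the question is after — a non-greedy `(.*?)` capture between an opening and a closing tag — can be sketched in Python. The tag and class name below are assumptions for illustration; `re.DOTALL` is the piece that is often missed, since without it `.` will not cross line breaks inside the table:

```python
import re

# Hypothetical sample markup; stands in for the question's table.
html = '<table class="datatable"><tr><td>1</td><td>2</td></tr></table>'

# re.DOTALL lets `.` match newlines; the non-greedy `(.*?)` stops at
# the first closing </table> instead of greedily running to the last.
pattern = re.compile(r'<table class="datatable">(.*?)</table>', re.DOTALL)
match = pattern.search(html)
if match:
    print(match.group(1))  # → <tr><td>1</td><td>2</td></tr>
```

For anything beyond a quick extraction like this, an HTML parser is usually the sturdier choice than a regex.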
2018/03/22
469
1,527
<issue_start>username_0: I need to match a string with specific letters but none of the letters should be duplicated. For example, [AEIOU] should not match 'AAEIOU' or 'AAEEIOU'. It should only match 'AEIOU' and the order of the letters should not matter. I tried using the exact quantifier with {} but it did not work.<issue_comment>username_1: This may be inefficient, but maybe use counters? Construct a counter for each, then check if they are equal? Upvotes: 0 <issue_comment>username_2: You can use negative lookahead: [`^(?!.*(.).*\1)[AEIOU]+$`](https://regex101.com/r/nrTTZz/1) Whatever you put in the brackets will be the subset of characters you select from.

```
>>> import re
>>> tests = ['AAEIOU', 'AAEEIOU', 'AEIOU']
>>> for test in tests:
...     print(re.match(r'^(?!.*(.).*\1)[AEIOU]+$', test))
None
None
<_sre.SRE_Match object; span=(0, 5), match='AEIOU'>
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: Not using regex, but you can use Python string methods to count characters in a string. This can be used to develop a filter function that, effectively, can get you the results you want.

```
def has_all_unique(s):
    """Returns True if the string has exactly 1 of each vowel"""
    return all(s.count(char) == 1 for char in 'AEIOU')

tests = ['AAEIOU', 'AAEEIOU', 'AEIOU']
for match in filter(has_all_unique, tests):
    print(match)
```

Result would be just the one match meeting the condition, ``` AEIOU ``` This isn't the most performant option, but is simple and readable. Upvotes: 0
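One refinement on the accepted lookahead, in case the intent is that *all five* vowels must each appear exactly once (so that shorter non-repeating strings like 'AEI' should also fail): tightening the `+` quantifier to an exact `{5}` forces a permutation of AEIOU, since "no repeated character" plus "exactly five characters from the class" leaves no other option. The `{5}` extension is my reading of the question, not part of the accepted answer:

```python
import re

# Negative lookahead rejects any repeated character; [AEIOU]{5} then
# requires exactly five characters from the class => a permutation.
PATTERN = re.compile(r'^(?!.*(.).*\1)[AEIOU]{5}$')

for s in ['AEIOU', 'UOIEA', 'AAEIOU', 'AAEEIOU', 'AEI']:
    print(s, bool(PATTERN.match(s)))
# → AEIOU True, UOIEA True, AAEIOU False, AAEEIOU False, AEI False
```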
2018/03/22
198
805
<issue_start>username_0: The Jenkins that we use for ETL automation stopped its service and we then restarted it using the command prompt. I wanted to investigate the reason that caused it to stop, but in the system log file of Jenkins I can see only today's log. How can I see logs of previous days? Please help.<issue_comment>username_1: If you are using a Linux machine, logs will be in **/var/log/jenkins/jenkins.log** unless you have set a customized location. If you have set up any logrotate, older logs will be archived and you might need to unzip and check those to see previous logs. Take a look at this [documentation](https://wiki.jenkins.io/display/JENKINS/Logging) for more info Upvotes: 4 <issue_comment>username_2: I found logs from the web panel <https://jenkins.mycompany.com.ua/log/all> Upvotes: 0
2018/03/22
538
1,843
<issue_start>username_0: To begin, this is for a class; I don't like this language. It's a simple DO loop to print the square root of numbers. The objective is to replace the value in the do loop with macro variables. Here is my source code: ``` %LET Start_Value = 1; %LET Stop_Value = 5; DATA sqrt_table; DO &Start_Value. TO &Stop_Value.; Sqrt_n = SQRT(&Start_Value.); OUTPUT; END; RUN; TITLE 'Square root table from 1 to 5'; PROC PRINT DATA = sqrt_table noobs; RUN; TITLE; ``` The Log says the error is in the DO &Start\_Value. "Symbol is not recognized" I followed the source code given, I have declared the macros as they should be, and I am accessing them as I read to do so. What is the issue?<issue_comment>username_1: Macro code in general, and in this case specifically, is just used to replace constant text. First get a working DO loop without any macro variables and then replace the parts that you want to vary with the macro variable references. So the basic syntax for an iterative DO loop is: ``` do VAR=START to END; ... end; ``` Where `VAR` is a variable name and `start` and `end` are numerical expressions. Compare that to the pattern of your attempt and you can see that you have left off the `VAR=` part. Also the assignment statement is going to assign the same value to `SQRT_N` on every iteration of the DO loop, because you have essentially written: ``` Sqrt_n = SQRT(1); ``` Remember macro variables are just ways to help you generate the program that you want SAS to actually run. Upvotes: 2 <issue_comment>username_2: If you are a beginner in SAS, don't mix macro language with 4GL. Here is what you need. ``` %LET Start_Value = 1; %LET Stop_Value = 5; DATA sqrt_table; DO i = &Start_Value. TO &Stop_Value.; Sqrt_n = SQRT(i); OUTPUT; END; RUN; ``` Upvotes: 2 [selected_answer]
2018/03/22
538
2,140
<issue_start>username_0: So, trying to set up my Firestore database: I have a collection called Users that stores the users' information. I also have subcollections of Towers for each user. My users documents have a playerUid field that I use for security settings. Here are my current security rules: ``` service cloud.firestore { match /databases/{database}/documents { match /{document=**} { allow read: if request.auth.uid != null; } match /users/{user=**}{ allow read, create: if request.auth.uid != null; allow update: if request.auth.uid == resource.data.playerUid; } } } ``` This allows users to read and create both their user document and the subcollection of tower documents, but they can't edit the subcollection. There is no playerUid in the tower documents. Is there a way to use the playerUid in the user document to authenticate for updating the towers? Or do I need to add a playerUid field to the tower documents to authenticate against?<issue_comment>username_1: You can `get` the user document in the rules of the `Towers` subcollection as shown in the [Firestore documentation on accessing other documents](https://firebase.google.com/docs/firestore/security/rules-conditions#access_other_documents): > > > ``` > allow delete: if get(/databases/$(database)/documents/users/$(request.auth.uid)).data.admin == true > > ``` > > Alternatively you can indeed include the UID of the user in the documents of the subcollection. That will prevent needing an extra document read in the rules. Upvotes: 3 <issue_comment>username_2: This snippet may be able to help you ``` service cloud.firestore { match /databases/{database}/documents { match /{document=**} { allow read: if request.auth.uid != null; } match /users/{user=**}{ allow read, create: if request.auth.uid != null; allow update: if request.auth.uid == user; // <== THIS LINE } } } ``` Wouldn't this match the 'user' path with the 'uid'? I will try to test this when I have some time. It would save a get call if it does work. 
Upvotes: 2
2018/03/22
415
1,446
<issue_start>username_0: I'm doing a code review of a project, which means cycling through all the files in it. I want to keep my hands on the keyboard, but I don't want to have to `CMD`+`P` and type in the name of each file. I've bound `CMD`+`K`,`CMD`+`E` to `workbench.files.action.focusFilesExplorer`, which enables me to easily get to the Explorer, but then I can only `explorer.openToSide`, which isn't exactly what I want. I want to be able to open them directly, full-screen, even if I have other windows open. Are there commands for this that I can bind to? I suspect this isn't a feature yet.<issue_comment>username_1: To open a file, just press `Enter` once you've selected it. It's bound to the `list.select` command by default. This also works for expanding / collapsing folders. Upvotes: 4 [selected_answer]<issue_comment>username_2: By default on a mac you can use `cmd`+`down` to open the file. Pressing `Enter` will edit the filename. Upvotes: 4 <issue_comment>username_3: After `workbench.files.action.focusFilesExplorer` you can press `Up` or `Down` to navigate through the file list while the File Explorer is focused. Then hit `Enter` to open the selected file. Also I set `Ctrl` + `E` for `workbench.files.action.focusFilesExplorer`. In my case, sequentially navigating through a list of files is a repeating sequence of these shortcuts: * `Ctrl` + `E` * `Down` * `Enter` * ... * `Ctrl` + `E` * `Down` * `Enter` * ... Upvotes: 0
2018/03/22
390
1,716
<issue_start>username_0: When you navigate to: [blockchain.info](https://blockchain.info/wallet/#/login) You will notice that if you click `view-source` on the page, it will show HTML content different from what you see when you `inspect-element`. My question is, how are they doing this? I understand they are using `.pug` templates from the AngularJS framework. But how does my browser know where to read them from if they are not loaded from the client-(browser)-side? Also, if I were to insert jQuery onto the page, would the jQuery know when the events are triggered `on('click', 'submit', 'whatever') etc ...`?<issue_comment>username_1: Any framework that is rendering HTML client-side (React, Angular, Vue) will do that. The actual source code could literally just be some basic HTML boilerplate and a div that then gets loaded with an application through something like JavaScript. Thus, when you view the source of the page, you're seeing this basic templating. But when inspecting an element, Chrome Dev tools (and others) are inspecting the element that is being rendered client side. Your browser has placed those elements on the DOM; they didn't exist in the source code till the code executed. Hope that helps clear things up. Upvotes: 0 <issue_comment>username_2: When you click `View Source`, you see what the server sends back. Many pages do not send back a full HTML page; instead they send some skeleton HTML and add the rest of the functionality via JavaScript. When you `Inspect Element`, you're viewing the browser's representation of the DOM, which includes any manipulations done via JavaScript. For a visual explanation, see this article on css-tricks: <https://css-tricks.com/dom/> Upvotes: 3 [selected_answer]
2018/03/22
651
2,195
<issue_start>username_0: I want to make a symmetric encryption for the file `/tmp/public.txt`. ``` gpg --symmetric /tmp/public.txt ``` The command will invoke the `enter passphrase` window,i want to send the password automatically. [![enter image description here](https://i.stack.imgur.com/6ZbOz.png)](https://i.stack.imgur.com/6ZbOz.png) My try here: ``` echo "<PASSWORD>" | gpg --passphrase-fd 0 --symmetric /tmp/public.txt ``` The `enter passphrase` window still pop up, How to send the password automatically in gpg's symmetric encryption ?<issue_comment>username_1: ``` key="it is a long password to encrypt and decrypt my file in symmetric encryption " ``` Encypt public.txt. ``` openssl enc -des3 -a -salt -in public.txt -k ${key} -out public.asc ``` Decrypt public.asc. ``` openssl enc -d -des3 -a -salt -k ${key} -in public.asc -out public.out ``` Can i draw a conclusion that openssl is a more powerful tool for encryption than gpg? Upvotes: 0 <issue_comment>username_2: Depending on your GnuPG version (>= 2.1.0 ) you need to add "--pinentry-mode loopback" to the command. For GnuPG version >= 2.1.0 but < 2.1.12 you also need to add: "allow-loopback-pinentry" to the ~/.gnupg/gpg-agent.conf Your command would then be: ``` echo "<PASSWORD>" | gpg --pinentry-mode loopback --passphrase-fd 0 --symmetric /tmp/public.txt ``` Alternatively you don't have to use passphrase-fd and the echo but can directly provide the passphrase: ``` gpg --pinentry-mode loopback --passphrase "<PASSWORD>" --symmetric /tmp/public.txt ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: Since I stumbled on this question having the same problem, I'll post the answer that actually helped me (from other SE question). 
The key options here are `--batch --yes`: ``` $ gpg --passphrase <PASSWORD> --batch --yes --symmetric file_to_enc ``` (Taken from [this question](https://unix.stackexchange.com/questions/60213/gpg-asks-for-password-even-with-passphrase) ) That way you can actually encrypt a file symmetrically supplying the key as commandline argument, although this might mean that other users of the system might see the passphrase used. Upvotes: 2
2018/03/22
694
2,442
<issue_start>username_0: I am starting to learn springboot and already encountered an error. I tried searching for this error, but i wasn't able to find it. I have inserted the pictures of the entire error as well as my code for the pom.xml and the main class. pom.xml ``` org.springframework.boot spring-boot-starter-parent 1.5.2.RELEASE org.springframework.boot spring-boot-starter-web 1.8 ``` Main ``` package io.java.springbootstarter; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class CourseApiApp { public static void main(String[] args) { SpringApplication.run(CourseApiApp.class, args); } } ``` This was the description for the error: ``` The Tomcat connector configured to listen on port 8080 failed to start. The port may already be in use or the connector may be misconfigured. Action: Verify the connector's configuration, identify and stop any process that's listening on port 8080, or configure this application to listen on another port. 2018-03-21 22:47:48.794 INFO 9412 --- [ main] ationConfigEmbeddedWebApplicationContext : Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@f75083: startup date [Wed Mar 21 22:47:46 EDT 2018]; root of context hierarchy 2018-03-21 22:47:48.794 INFO 9412 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown ``` [Error](https://i.stack.imgur.com/tdKEW.png), [Error Continued](https://i.stack.imgur.com/vs16r.png) Thank you in advance.<issue_comment>username_1: If you are using linux/mac, u can try this command : ``` lsof -i :8080 ``` This will return the process id along with other information, then use the following command to kill the process : ``` kill -9 your_process_id ``` This way, you need not to change the port anymore. 
In case the other process is a java process as well, you could also just do `jps` which shows all running java processes and kill it accordingly. Upvotes: 2 <issue_comment>username_2: The port 8080 in using, you should use another port. You can config in application.properties by setting server.port Upvotes: 0 <issue_comment>username_3: For me just restarting my computer worked. As the error message says some application was already using the specified port. Upvotes: 0
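As a cross-platform aside to the answers above (this is not part of Spring Boot or the JDK — the helper name `port_in_use` is invented for this sketch), you can also probe whether port 8080 is already taken by attempting a TCP connection to it:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    # connect_ex() returns 0 when something accepted the connection,
    # i.e. when another process is already listening on the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        probe.settimeout(0.5)
        return probe.connect_ex((host, port)) == 0
```

Checking `port_in_use(8080)` before starting the application tells you whether you need to stop the other process or pick a different `server.port`.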
2018/03/22
276
1,027
<issue_start>username_0: Hi I have a question about css. I want to change the color of the first element of class 'active', which is "3". This is my code, but it doesn't work. ```css .item.active:first-child { color: red } ``` ```html 1 2 3 4 5 6 6 ``` And is there any way to solve this problem with jQuery? Please help.<issue_comment>username_1: You can use `first` with `class` like this in jquery. ```js $( ".active:first" ).css( "color", "red" ); ``` ```html 1 2 3 4 5 6 6 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Using the owl carousel `center` event you can also achieve the same: ``` .center { color: red; } $('.nonloop').owlCarousel({ center: true, items: 2, loop: false, margin: 10, responsive: { 600: { items: 4 } } }); ``` Upvotes: 0
2018/03/22
217
534
<issue_start>username_0: I want to do multiplication using a for loop. Here is the code. ``` a = 4 b = 6 for i in [a,b]: i*=2 ``` The values of `a` and `b` remain the same. How can I make it work?<issue_comment>username_1: `int` objects are immutable, so you'll need to rebind `a` and `b` to fresh `int` objects: ``` >>> a = 4 >>> b = 6 >>> a, b = (i*2 for i in [a,b]) >>> a 8 >>> b 12 ``` Upvotes: 1 <issue_comment>username_2: Use a dictionary: ``` z = {'i': a, 'j': b} for k in z.keys(): z[k] *= 2 a = z['i'] b = z['j'] ``` Upvotes: 0
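To make the point about immutability concrete, here is a small self-contained sketch contrasting the failing loop with two working alternatives (plain Python, no assumptions beyond the question's own variables):

```python
a = 4
b = 6

# i is just a name bound to each value in turn; i *= 2 rebinds i
# to a brand-new int and never touches a or b.
for i in [a, b]:
    i *= 2
assert (a, b) == (4, 6)

# Alternative 1: explicitly rebind a and b.
a, b = (i * 2 for i in [a, b])
assert (a, b) == (8, 12)

# Alternative 2: mutate a container in place, then read the values back.
values = [4, 6]
for k in range(len(values)):
    values[k] *= 2
assert values == [8, 12]
```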
2018/03/22
724
2,119
<issue_start>username_0: say I have HTML as ``` I want to be center of content ``` css ``` html, body { height: 100%; } body { margin: 0; } .page { height: 100%; display: flex; flex-direction: column; } .header { flex-shrink: 0; background-color: green; height: 50px; } .content { flex-grow: 1; } .footer { flex-shrink: 0; background-color: steelblue; height: 150px; } .foo { display: flex; justify-content: center; align-items: center; } ``` <https://codepen.io/rrag/pen/PRmOYe> I achieved a sticky footer at the bottom of the page using `flex-grow` but that leads to a content with unknown height How do I center something in the middle of `content`? Is it possible using flexbox or may be an alternative approach?<issue_comment>username_1: You have almost the exact right idea! Instead of setting your vertical centering rules on `.foo` (`display: flex;`, `justify-content: center`; and `align-items: center;`), you instead simply need to set them on the parent `.content`. This can be seen in the following (Click `Run snippet` and then`Full page` to see this in effect): ```css html, body { height: 100%; } body { margin: 0; } .page { height: 100%; display: flex; flex-direction: column; } .header { flex-shrink: 0; background-color: green; height: 50px; } .content { flex-grow: 1; display: flex; justify-content: center; align-items: center; } .footer { flex-shrink: 0; background-color: steelblue; height: 150px; } ``` ```html I want to be center of content ``` This can also be seen **[on CodePen](https://codepen.io/Obsidian-Age/pen/bvWYpO)**. Upvotes: 4 [selected_answer]<issue_comment>username_2: I'll answer your follow up question in comments: You can wrap your content between two flex spacers: HTML: ``` ``` CSS: ``` .page { height: 100vh; display: flex; flex-direction: column; .header { //... } .vertical-spacer { flex: 1; } .wrapper { //... } .footer { //... } } ``` Here it is in CodePen: <https://codepen.io/glf86/pen/rNqPJqw> Upvotes: 1
2018/03/22
644
2,108
<issue_start>username_0: * react:16.3.0-alpha.1 * jest: "22.3.0" * enzyme: 3.3.0 * typescript: 2.7.1 code: ``` class Foo extends React.PureComponent{ bar:number; async componentDidMount() { this.bar = 0; let echarts = await import('echarts'); // async import this.bar = 100; } } ``` test: ``` describe('...', () => { test('...', async () => { const wrapper = shallow(<Foo />); const instance = await wrapper.instance(); expect(instance.bar).toBe(100); }); }); ``` Error: ``` Expected value to be: 100 Received: 0 ```<issue_comment>username_1: Your test also needs to implement async, await. For ex: ``` it('should do something', async function() { const wrapper = shallow(<Foo />); const result = await wrapper.instance(); expect(result.bar).toBe(100); }); ``` Upvotes: 0 <issue_comment>username_2: Something like this should work for you:- ``` describe('...', () => { test('...', async () => { const wrapper = await mount(<Foo />); expect(wrapper.instance().bar).toBe(100); }); }); ``` Upvotes: 4 <issue_comment>username_3: Solution: 1: Use the async/await syntax. 2: Use mount (not shallow). 3: Await the async component lifecycle. For ex: ``` test(' ', async () => { const wrapper = mount(<Foo />); await wrapper.instance().componentDidMount(); }) ``` Upvotes: 6 [selected_answer]<issue_comment>username_4: Try this: ``` it('should do something', async function() { const wrapper = shallow(<Foo />); await wrapper.instance().componentDidMount(); wrapper.update(); expect(wrapper.instance().bar).toBe(100); }); ``` Upvotes: 2 <issue_comment>username_5: None of the solutions provided here fixed all my issues. At the end I found <https://medium.com/@lucksp_22012/jest-enzyme-react-testing-with-async-componentdidmount-7c4c99e77d2d> which fixed my problems. **Summary** ``` function flushPromises() { return new Promise(resolve => setImmediate(resolve)); } it('should do something', async () => { const wrapper = await mount(<Foo />); await flushPromises(); expect(wrapper.instance().bar).toBe(100); }); ``` Upvotes: 2
2018/03/22
1,062
3,418
<issue_start>username_0: My HTML is like this ``` Hello World =========== Click Here ========== ``` I am trying to position `Hello World` in the center and the text `Click Here` 5% from the right edge of the screen. Here is my CSS ``` .maindiv { position: relative; } .maindiv h1:nth-child(1) { text-align: center; } .maindiv h1:nth-child(2) { right: 5%; } ``` I dont know why it is not working. I have set Parent's div to relative and set second child to right's 5% so it should be 5% off from the right but its still on the left<issue_comment>username_1: Style your css like this: ```css .maindiv { position: relative } .maindiv h1:nth-child(1) { position: absolute; left: 50%; top: 50%; } .maindiv h1:nth-child(2) { position: absolute; bottom: 0; right: 5%; } ``` ```html Centered ======== right-aligned ============= ``` Upvotes: 1 <issue_comment>username_2: Use margin-right rather then only right property. ``` .maindiv h1:nth-child(2) { margin-right: 5%; } ``` Upvotes: 2 <issue_comment>username_3: Your rule on `.maindiv` only defines how that `div` behaves to its parent element. It has no effect on the `div`'s children. Your rule on the second `h1` has no effect since `right` is ignored for static positioning and would move the element more to the left if you used it with `relative` positioning. You want it to not be bigger than 100% of the horizontal space, but still be affected by the parent's bounds. This is where `position: absolute` comes in. In order to make it relative to the parent, the parent needs its `relative` without any other settings, though (or anything non-static for that matter). If you want them on the same height, I suggest `float` or `top: 0` with `position: absolute`. 
--- What you might want instead: ```css .maindiv { /* only as a reference for children positioning */ position: relative; } .maindiv > h1 { /* You don't want margins to affect positioning here */ margin: 0; } .maindiv h1:nth-child(1) { text-align: center; } .maindiv h1:nth-child(2) { position: absolute; right: 5%; top: 0; } ``` ```html Foo === Bar === ``` Upvotes: 3 [selected_answer]<issue_comment>username_4: When positioning elements with `right`, `left`, `top`, and `bottom`, imagine the element being pushed in the direction you specify, e.g. `right: 5%;` would push the element 5% to the right. In this case, all you need to do is initially set `text-align: right` on `.maindiv`, like so: ```css .maindiv { position: relative; width: 100%; text-align: right; } .maindiv h1:nth-child(1) { text-align: center; } .maindiv h1:nth-child(2) { position: relative; right: 5%; } ``` ```html centered ======== right aligned ============= ``` Your code will now work. Upvotes: 1 <issue_comment>username_5: It is not good to use multiple `h1` tags in one HTML page. So I changed the Click Here heading into a link, since its text reads like one. In the code below, Click Here is positioned relative to the `h1`, so if you move the `h1` down the page, Click Here will move with it. Just try this. ```css .maindiv h1 { text-align: center; position: relative; line-height: 20px; } .maindiv h1 a { position: absolute; right: 5%; font-size: 16px; } ``` ```html Hello World [Click Here](#) ============================= ``` Upvotes: 1
2018/03/22
341
1,219
<issue_start>username_0: I've added a custom domain to an App Engine project. The TTFB of requests to that project's service on the \*.appspot.com domain is under 15ms. Accessing the service via the custom domain, however, takes about 80ms. What can I do to fix this?<issue_comment>username_1: According to someone who works at Google, if you set up a custom domain for an App Engine project hosted in Japan, requests are then routed via Taiwan, which increases latency. I haven't heard an explanation of *why* they do that, but regardless, GCP has known about this issue for about three years and don't seem to think it's a big problem <https://issuetracker.google.com/issues/64458939> Upvotes: 2 <issue_comment>username_2: Point mentioned in the documentation on mapping custom domain with app engine > > Using custom domains might add noticeable latency to responses that > App Engine sends to your app's users in some regions. The regions are > as follows: > > > ``` us-west2 us-east4 northamerica-northeast1 southamerica-east1 europe-west2 europe-west3 asia-south1 asia-northeast1 australia-southeast1 ``` Link - <https://cloud.google.com/appengine/docs/standard/python3/mapping-custom-domains> Upvotes: 0
2018/03/22
1,138
4,294
<issue_start>username_0: Is there a way to get a list of all files in the "resources" folder in Kotlin? I can read a specific file as ``` Application::class.java.getResourceAsStream("/folder/filename.ext") ``` But sometimes I just want to extract everything from folder "folder" to an external directory.<issue_comment>username_1: There are no methods for it (i.e. `Application::class.java.listFilesInDirectory("/folder/")`), but you can create your own system to list the files in a directory: ``` @Throws(IOException::class) fun getResourceFiles(path: String): List<String> = getResourceAsStream(path).use{ return if(it == null) emptyList() else BufferedReader(InputStreamReader(it)).readLines() } private fun getResourceAsStream(resource: String): InputStream? = Thread.currentThread().contextClassLoader.getResourceAsStream(resource) ?: resource::class.java.getResourceAsStream(resource) ``` Then just call `getResourceFiles("/folder/")` and you'll get a list of files in the folder, assuming it's in the classpath. This works because Kotlin has an extension function that reads lines into a List of Strings. The declaration is: ``` /** * Reads this reader content as a list of lines. * * Do not use this function for huge files. */ public fun Reader.readLines(): List<String> { val result = arrayListOf<String>() forEachLine { result.add(it) } return result } ``` Upvotes: 3 <issue_comment>username_2: As I struggled with the same issue and couldn't find a concrete answer, I had to write one myself. 
Here is my solution: ``` fun getAllFilesInResources() { val projectDirAbsolutePath = Paths.get("").toAbsolutePath().toString() val resourcesPath = Paths.get(projectDirAbsolutePath, "/src/main/resources") Files.walk(resourcesPath) .filter { item -> Files.isRegularFile(item) } .filter { item -> item.toString().endsWith(".txt") } .forEach { item -> println("filename: $item") } } ``` Here I have walked through all the files in the ***/src/main/resources*** folder, filtered down to regular files (no directories included), and then filtered for the text files within the resources directory. The output is a list of all the absolute file paths with the extension ***.txt*** in the resources folder. Now you can use these paths to copy the files to an external folder. Upvotes: 4 <issue_comment>username_3: Two distinct parts: 1. Obtain a file that represents the resource directory 2. Traverse the directory For **1** you can use Java's `getResource`: ``` val dir = File( object {}.javaClass.getResource(directoryPath).file ) ``` For **2** you can use Kotlin's `File.walk` extension function that returns a [sequence](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.sequences/) of files which you can process, e.g: ``` dir.walk().forEach { f -> if(f.isFile) { println("file ${f.name}") } else { println("dir ${f.name}") } } ``` Put together you may end up with the following code: ```kotlin fun onEachResource(path: String, action: (File) -> Unit) { fun resource2file(path: String): File { val resourceURL = object {}.javaClass.getResource(path) return File(checkNotNull(resourceURL, { "Path not found: '$path'" }).file) } with(resource2file(path)) { this.walk().forEach { f -> action(f) } } } ``` so that if you have a `resources/nested` directory, you can: ```kotlin fun main() { val print = { f: File -> when (f.isFile) { true -> println("[F] ${f.absolutePath}") false -> println("[D] ${f.absolutePath}") } } onEachResource("/nested", print) } ``` Upvotes: 3 
<issue_comment>username_4: Here is a solution to iterate over [JAR-packed resources](https://stackoverflow.com/questions/1429172/how-to-list-the-files-inside-a-jar-file) on JVM: ``` fun iterateResources(resourceDir: String) { val resource = MethodHandles.lookup().lookupClass().classLoader.getResource(resourceDir) ?: error("Resource $resourceDir was not found") FileSystems.newFileSystem(resource.toURI(), emptyMap()).use { fs -> Files.walk(fs.getPath(resourceDir)) .filter { it.extension == "ttf" } .forEach { file -> println(file.toUri().toString()) } } } ``` Upvotes: 1
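The Kotlin answers above all reduce to the same two steps: resolve the directory, then walk it recursively and filter. For comparison only, the same walk-and-filter step expressed in Python's `pathlib` (the helper name `files_under` is invented for this sketch, not part of any library):

```python
from pathlib import Path

def files_under(root, suffix=None):
    # Recursively collect regular files below root, optionally
    # keeping only those with the given suffix (e.g. ".txt").
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    if suffix is not None:
        files = [p for p in files if p.suffix == suffix]
    return sorted(files)
```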
2018/03/22
1,236
4,681
<issue_start>username_0: I recently installed XAMPP on a windows machine(Windows 10) just to get a local server up and running so that I could make API calls and not run into any CORS issues and it was working fine. Now what is happening is that whenever I make any changes in the .html or .js files it won't update the page, I have cleared all the browser history , temp files, restarted XAMPP, closed and reopened the browser (in the root folder there is only a single .html file so no clash there) but still it won't update. Is it something related to XAMPP(v3.2.2) or any local settings that I have to configure. Browser: Chrome 65, though it is the same issue with firefox and IE.
2018/03/22
866
3,440
<issue_start>username_0: Started to dabble in Xamarin Forms. Two things I can't figure out: Binding of my Listview: I have a class with: ``` public class Mainlist { public string Title { get; set; } public string Value { get; set; } } ``` My XAML looks like: ``` ``` What happens now is that i have a list of URLs. From every URL I am scraping certain info with an HTMLAgilityPack foreach loop, which is working fine. I would like to add the scraped data after each run of the loop to the listview and have it display. Something like "lazy loading". Up to now i could only figure out how to set the itemsource after all URLs are scraped and have it display at once with something like this: ``` //set itemsource to URL collection mainlist.ItemsSource = new List<Mainlist>() { new Mainlist() { //scraped info from each URL Title = title.ToString().Trim(), Value = value.ToString().Trim(), }, }; ```<issue_comment>username_1: I think you could do something like this: ``` var items = new ObservableCollection<Mainlist>(); mainlist.ItemsSource = items; foreach (var item in yourDataFromHtmlAgilityPackScraping) { items.Add(new Mainlist() { //scraped info from each URL Title = item.title.ToString().Trim(), Value = item.value.ToString().Trim(), }); } ``` The important part here is the ObservableCollection, which allows the ListView to be updated when a new element is added. (The typed local `items` is needed because `ItemsSource` is exposed as `IEnumerable`, which has no `Add` method.) Upvotes: 2 <issue_comment>username_2: First, create a view model class, called *MyViewModel.cs*: ``` public class MyViewModel : INotifyPropertyChanged { // property changed event handler public event PropertyChangedEventHandler PropertyChanged; private ObservableCollection<Mainlist> _list; public ObservableCollection<Mainlist> List { get { return _list; } set { _list = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(List))); } } public MyViewModel() { _list = new ObservableCollection<Mainlist>(); } public async void StartScraping() { // assuming you are 'awaiting' the results of your scraping method... foreach (...) { await ... 
scrape a web page ... var newItem = new Mainlist() { Title = title.ToString().Trim(), Value = value.ToString().Trim() }; // if you instead have multiple items to add at this point, // then just create a new List, add your items to it, // then add that list to the ObservableCollection List. Device.BeginInvokeOnMainThread(() => { List.Add(newItem); }); } } } ``` Now in your page's `xaml.cs` code behind file, set the view model as your `BindingContext`: ``` public class MyPage : ContentPage // (assuming the page is called "MyPage" and is of type ContentPage) { MyViewModel _viewModel; public MyPage() { InitializeComponent(); _viewModel = new MyViewModel(); BindingContext = _viewModel; // bind the view model's List property to the list view's ItemsSource: mainList.SetBinding(ListView.ItemsSourceProperty, "List"); } } ``` And note that in your view model, you'll need to use an `ObservableCollection` instead of a `List`, as `ObservableCollection` will allow the `ListView` to be updated automatically whenever you add or remove items from it. Also, to ease a bit of confusion, I'd recommend changing the class name from `Mainlist` to `MainListItem`. Upvotes: 3 [selected_answer]
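The reason `ObservableCollection` works where `List` does not is the change-notification mechanism: the collection tells its subscribers (in Xamarin's case, the `ListView`) about every added item, so the UI never has to poll. A minimal language-neutral sketch of that mechanism (Python; all names here are invented for illustration, this is not the Xamarin API):

```python
class ObservableList:
    # Tiny stand-in for ObservableCollection: notify subscribers on add.
    def __init__(self):
        self.items = []
        self._subscribers = []

    def subscribe(self, callback):
        # The UI layer registers a callback instead of polling the list.
        self._subscribers.append(callback)

    def add(self, item):
        self.items.append(item)
        for notify in self._subscribers:
            notify(item)
```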
2018/03/22
2,784
9,832
<issue_start>username_0: I have this table on MS SQL Server ``` Customer Month Amount ----------------------------- Tom 1 10 Kate 1 60 Ali 1 70 Tom 2 50 Kate 2 40 Tom 3 80 Ali 3 20 ``` I want the select to get accumulation of the customer for each month ``` Customer Month Amount ----------------------------- Tom 1 10 Kate 1 60 Ali 1 70 Tom 2 60 Kate 2 100 Ali 2 70 Tom 3 140 Kate 3 100 Ali 3 90 ``` Noticing that Ali has no data for the month of 2 and Kate has no data for the month of 3 I have done it but the problem is that for the missing month for each customer no data shows i.e. Kate has to be in month 3 with 100 amount and Ali has to be in Month 2 with 70 amount ``` declare @myTable as TABLE (Customer varchar(50), Month int, Amount int) ; INSERT INTO @myTable (Customer, Month, Amount) VALUES ('Tom', 1, 10), ('Kate', 1, 60), ('Ali', 1, 70), ('Tom', 2, 50), ('Kate', 2, 40), ('Tom', 3, 80), ('Ali', 3, 20); select * from @myTable select SUM(b.Amount),a.Customer, a.Month from @myTable a inner join @myTable b on a.Customer = b.Customer and a.Month >= b.Month group by a.Customer, a.Month ```<issue_comment>username_1: ``` with cte as (select * from (select distinct customer from myTable ) c cross join ( values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)) t(month)) select cte.customer, cte.month, sum(myTable.amount) over (partition by cte.customer order by cte.month) as cumamount from cte left join myTable on cte.customer = myTable.customer and cte.month = myTable.month order by cte.month, cte.customer desc ``` Upvotes: 2 <issue_comment>username_2: Try Sum Over Partition By <https://learn.microsoft.com/en-us/sql/t-sql/functions/sum-transact-sql> This will help you get the idea how to accumulate. If the code i use in postgresql like this `Select sum(amount) over(partition by customer, month)` Upvotes: 0 <issue_comment>username_3: This should do it for you. Also here is a link to the Microsoft docs regarding aggregation functions. 
<https://learn.microsoft.com/en-us/sql/t-sql/functions/aggregate-functions-transact-sql> Example: ``` SELECT Customer, Month, SUM(Amount) as Amount FROM myTable GROUP BY Customer, Month ORDER BY Customer, Month ``` Upvotes: 0 <issue_comment>username_4: Use a *window function*: ``` select Customer, Month, sum(Amount) over (partition by customer order by month) Amount from table t ``` So, you want some kind of look-up table which has the possible months for each customer. ``` with cte as ( select * from ( select Customer from table group by Customer)c cross join (values (1),(2),(3))a(Months) ) -- look-up table select c.Customer, c.Months, sum(t.Amount) over (partition by c.Customer order by c.Months) Amount from cte c left join table t on t.Month = c.Months and t.Customer = c.Customer ``` **Result :** ``` Customer Months Amount Tom 1 10 Kate 1 60 Ali 1 70 Tom 2 60 Ali 2 70 Kate 2 100 Ali 3 90 Kate 3 100 Tom 3 140 ``` Upvotes: 2 <issue_comment>username_5: Do you want to get the running sum for every month for each customer, whether or not the customer has a transaction in that month? 
In the following script, if you have a customer table you can join it directly; you do not need to use (SELECT DISTINCT Customer FROM @myTable). ``` declare @myTable as TABLE (Customer varchar(50), Month int, Amount int); INSERT INTO @myTable(Customer, Month, Amount) VALUES ('Tom', 1, 10), ('Kate', 1, 60), ('Ali', 1, 70), ('Tom', 2, 50), ('Kate', 2, 40), ('Tom', 3, 80), ('Ali', 3, 20), ('Jack', 3, 90); SELECT c.Customer,sv.number AS Month ,SUM(CASE WHEN t.Month<=sv.number THEN t.Amount ELSE 0 END ) AS Amount FROM master.dbo.spt_values AS sv INNER JOIN (SELECT DISTINCT Customer FROM @myTable) AS c ON 1=1 LEFT JOIN @myTable AS t ON t.Customer=c.Customer WHERE sv.type='P' AND sv.number BETWEEN 1 AND MONTH(GETDATE()) GROUP BY sv.number,c.Customer ORDER BY c.Customer,sv.number ``` ``` ---------- Customer Month Amount -------------------------------------------------- ----------- ----------- Ali 1 70 Ali 2 70 Ali 3 90 Jack 1 0 Jack 2 0 Jack 3 90 Kate 1 60 Kate 2 100 Kate 3 100 Tom 1 10 Tom 2 60 Tom 3 140 ``` Upvotes: 1 <issue_comment>username_6: Try this; the table name is "a". It uses a combination of a CTE and a sub query. 
2018/03/22
2,928
9,920
<issue_start>username_0: I know the "how to join the newest record" questions get asked a lot, however, this one exceeds my knowledge. I usually use a "join the max date subquery and then join the row with the matching id and time" approach to the problem, and I can add additional conditions in the where clause of the subquery, but this style doesn't work in this situation. I am looking for a history of all records in the first table, and to join on the most recent record from the second table before the time of the record from the first. Example: ``` CREATE TABLE `A` ( `id` int(11) NOT NULL, `a_time` timestamp NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `A` (`id`, `a_time`) VALUES (1, '2018-03-21 04:30:00'), (2, '2018-03-21 05:30:00'), (3, '2018-03-21 07:30:00'), (4, '2018-03-21 12:30:00'); CREATE TABLE `B` ( `id` int(11) NOT NULL, `b_time` timestamp NOT NULL, `some_text` varchar(128) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `B` (`id`, `b_time`, `some_text`) VALUES (1, '2018-03-21 05:30:00', 'Foo'), (2, '2018-03-21 09:30:00', 'Bar'); ALTER TABLE `A` ADD PRIMARY KEY (`id`); ALTER TABLE `B` ADD PRIMARY KEY (`id`); ``` What I'm hoping for is the id and time from A, and the most recent (<=) some\_text from B ``` a.id a.a_time b.some_text ------------------------------------ 1 2018-03-21 04:30:00 null 2 2018-03-21 05:30:00 Foo 3 2018-03-21 07:30:00 Foo 4 2018-03-21 12:30:00 Bar ``` Can someone help me out with this one? 
Thanks!<issue_comment>username_1: 
```
with cte as (
    select *
    from (select distinct customer from myTable) c
    cross join (values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)) t(month)
)
select cte.customer, cte.month,
       sum(myTable.amount) over (partition by cte.customer order by cte.month) as cumamount
from cte
left join myTable on cte.customer = myTable.customer and cte.month = myTable.month
order by cte.month, cte.customer desc
```
Upvotes: 2 <issue_comment>username_2: Try `SUM` with `OVER (PARTITION BY ...)`: <https://learn.microsoft.com/en-us/sql/t-sql/functions/sum-transact-sql> This will help you get the idea of how to accumulate. In PostgreSQL, the code I use looks like this: `Select sum(amount) over(partition by customer, month)` Upvotes: 0 <issue_comment>username_3: This should do it for you. Also, here is a link to the Microsoft docs regarding aggregation functions: <https://learn.microsoft.com/en-us/sql/t-sql/functions/aggregate-functions-transact-sql> Example:
```
SELECT Customer, Month, SUM(Amount) as Amount
FROM myTable
GROUP BY Customer, Month
ORDER BY Customer, Month
```
Upvotes: 0 <issue_comment>username_4: Use a *window function*:
```
select Customer, Month,
       sum(Amount) over (partition by customer order by month) Amount
from table t
```
So, you want some kind of lookup table which has all possible months for each customer:
```
with cte as (
    select *
    from (select Customer from table group by Customer) c
    cross join (values (1),(2),(3)) a(Months)
) -- look-up table
select c.Customer, c.Months,
       sum(t.Amount) over (partition by c.Customer order by c.Months) Amount
from cte c
left join table t on t.Month = c.Months and t.Customer = c.Customer
```
**Result:**
```
Customer Months Amount
Tom      1      10
Kate     1      60
Ali      1      70
Tom      2      60
Ali      2      70
Kate     2      100
Ali      3      90
Kate     3      100
Tom      3      140
```
Upvotes: 2 <issue_comment>username_5: Do you want to get the cumulative amount for every month for each customer, whether or not the customer has a transaction in that month?
In the following script, if you have a customer table you can join it directly; you do not need the derived table (SELECT DISTINCT Customer FROM @myTable):
```
declare @myTable as TABLE (Customer varchar(50), Month int, Amount int);
INSERT INTO @myTable(Customer, Month, Amount) VALUES
('Tom', 1, 10), ('Kate', 1, 60), ('Ali', 1, 70),
('Tom', 2, 50), ('Kate', 2, 40),
('Tom', 3, 80), ('Ali', 3, 20), ('Jack', 3, 90);

SELECT c.Customer, sv.number AS Month,
       SUM(CASE WHEN t.Month <= sv.number THEN t.Amount ELSE 0 END) AS Amount
FROM master.dbo.spt_values AS sv
INNER JOIN (SELECT DISTINCT Customer FROM @myTable) AS c ON 1 = 1
LEFT JOIN @myTable AS t ON t.Customer = c.Customer
WHERE sv.type = 'P' AND sv.number BETWEEN 1 AND MONTH(GETDATE())
GROUP BY sv.number, c.Customer
ORDER BY c.Customer, sv.number
```

```
Customer Month Amount
-------- ----- ------
Ali      1     70
Ali      2     70
Ali      3     90
Jack     1     0
Jack     2     0
Jack     3     90
Kate     1     60
Kate     2     100
Kate     3     100
Tom      1     10
Tom      2     60
Tom      3     140
```
Upvotes: 1 <issue_comment>username_6: Try this; the table name is "a". It uses a combination of a CTE and a subquery.
Tried it out in MSSQL 2008 R2:
```
with cte as (
    select *
    from (select Customer from a group by Customer) c
    cross join (values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)) a(Months)
)
select Customer, Months,
       (select SUM(total)
        from (select customer, month, sum(amount) as total
              from a
              group by customer, month) as GroupedTable
        where GroupedTable.customer = cte.customer
          and GroupedTable.month <= cte.Months) as total
from cte
Group by Customer, Months
order by Customer, Months
```
Upvotes: 1 <issue_comment>username_7: Try this:
```
create table #tmp (Customer VARCHAR(10), [month] INT, Amount INT)

INSERT INTO #tmp
SELECT 'Tom',1,10 union all
SELECT 'Kate',1,60 union all
SELECT 'Ali',1,70 union all
SELECT 'Tom',2,50 union all
SELECT 'Kate',2,40 union all
SELECT 'Tom',3,80 union all
SELECT 'Ali',3,20

;WITH cte1 AS
(
    SELECT [month], ROW_NUMBER() OVER(order by [month] desc) rn
    FROM (SELECT DISTINCT [month] as [month] FROM #tmp) a
)
, cte2 AS
(
    SELECT customer, ROW_NUMBER() OVER(order by customer desc) rn
    FROM (SELECT DISTINCT customer as customer FROM #tmp) b
)
SELECT t2.Customer, t2.[month], ISNULL(t1.Amount,0) As Amount
into #tmp2
from #tmp t1
RIGHT JOIN (select [month], customer from cte1 cross apply cte2) t2
       ON t1.customer = t2.customer and t1.[month] = t2.[month]
order by t2.[month]

SELECT Customer, [Month],
       SUM(Amount) OVER(partition by customer order by customer ROWS UNBOUNDED PRECEDING) as Amount
FROM #tmp2
order by [month]

drop table #tmp
drop table #tmp2
```
Upvotes: 1 <issue_comment>username_8: To be clear (in the answer: Amount and AmountSum):
```
DECLARE @myTable TABLE(Customer varchar(50), Month int, Amount int);
INSERT INTO @myTable(Customer, Month, Amount) VALUES
('Tom', 1, 10), ('Kate', 1, 60), ('Ali', 1, 70),
('Tom', 2, 50), ('Kate', 2, 40),
('Tom', 3, 80), ('Ali', 3, 20);

DECLARE @FullTable TABLE(Customer varchar(50), Month int, Amount int);
INSERT INTO @FullTable(Customer, Month, Amount)
SELECT c.Customer, m.Month, ISNULL(mt.Amount, 0)
FROM (SELECT DISTINCT [Month] FROM @myTable) AS m
CROSS JOIN (SELECT DISTINCT Customer FROM @myTable) AS c
LEFT JOIN @myTable AS mt ON m.Month = mt.Month AND c.Customer = mt.Customer

SELECT t1.Customer, t1.Month, t1.Amount,
       (t1.Amount + ISNULL(t2.sm, 0)) AS AmountSum
FROM @FullTable AS t1
CROSS APPLY (SELECT SUM(Amount) AS sm
             FROM @FullTable AS t
             WHERE t.Customer = t1.Customer AND t.Month < t1.Month) AS t2
ORDER BY Month, Customer
```
Upvotes: 2 [selected_answer]<issue_comment>username_9: I think this does what you want:
```
declare @myTable as TABLE (Customer varchar(50), Month int, Amount int);
INSERT INTO @myTable (Customer, Month, Amount) VALUES
('Tom', 1, 10), ('Kate', 1, 60), ('Ali', 1, 70),
('Tom', 2, 50), ('Kate', 2, 40),
('Tom', 3, 80), ('Ali', 3, 20);

select dts.Month, cts.Customer, isnull(t.Amount, 0) as Amount,
       sum(isnull(t.Amount, 0)) over(partition by cts.Customer order by dts.Month) as CumAmt
from (select distinct customer from @myTable) cts
cross join (select distinct Month from @myTable) dts
left join @myTable t on t.Customer = cts.Customer and t.Month = dts.Month
order by dts.Month, cts.Customer;

Month Customer Amount CumAmt
----- -------- ------ ------
1     Ali      70     70
1     Kate     60     60
1     Tom      10     10
2     Ali      0      70
2     Kate     40     100
2     Tom      50     60
3     Ali      20     90
3     Kate     0      100
3     Tom      80     140
```
Upvotes: 1
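If you want to sanity-check the expected cumulative totals without a SQL Server instance at hand, the cross-join-plus-running-sum logic used by the answers above can be sketched in plain Python (sample data taken from the answers; this is only an illustration, not T-SQL):

```python
from itertools import product

rows = [("Tom", 1, 10), ("Kate", 1, 60), ("Ali", 1, 70),
        ("Tom", 2, 50), ("Kate", 2, 40),
        ("Tom", 3, 80), ("Ali", 3, 20)]

customers = sorted({c for c, _, _ in rows})
months = sorted({m for _, m, _ in rows})
amount = {(c, m): a for c, m, a in rows}

# "Cross join" customers x months, then a running sum over months per
# customer, treating missing (customer, month) pairs as amount 0 -- the
# same shape the SQL answers build with CROSS JOIN / CROSS APPLY + SUM.
result = [(c, m, sum(amount.get((c, mm), 0) for mm in months if mm <= m))
          for c, m in product(customers, months)]

for row in result:
    print(row)
# ('Ali', 1, 70) ... ('Kate', 2, 100) ... ('Tom', 3, 140)
```

This reproduces the totals shown in the answers, e.g. Ali carries 70 into month 2 even though there is no month-2 row for Ali.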
2018/03/22
309
1,016
<issue_start>username_0: I am using `agm/core` to display a map at given coordinates. I have the code below ``` ``` The view is this: [![enter image description here](https://i.stack.imgur.com/JGSpn.png)](https://i.stack.imgur.com/JGSpn.png) But I want to initially show the streets like this: [![enter image description here](https://i.stack.imgur.com/WvVL6.png)](https://i.stack.imgur.com/WvVL6.png) How can I do that?<issue_comment>username_1: Set the zoom and center attributes. For example: ``` ``` center is usually used to specify which area you want to concentrate on; adjust the zoom level according to it. Upvotes: 4 [selected_answer]<issue_comment>username_2: The updated agm-map no longer supports the center directive. Use the zoom, latitude and longitude inputs instead. ``` ``` Upvotes: 4 <issue_comment>username_3: Add the usePanning attribute to center the map position: [usePanning]='true' ``` ``` Here is the reference link from agm <https://angular-maps.com/api-docs/agm-core/components/agmmap#usePanning> Upvotes: 0
2018/03/22
279
940
<issue_start>username_0: Need some assistance in where to start looking. Is there a workaround using an official API, or any tips for virtualizing Instagram?<issue_comment>username_1: There are some Instagram API endpoints which could help you, such as:
```
"feed/user/{$userId}/reel_media/"
"feed/reels_tray/"
```
Upvotes: 0 <issue_comment>username_2: I think the Instagram API endpoint below could help you:
```
"feed/tag/{tag!s}/"
```
Personally, I am using the Instagram Private API and I think it is super cool. (<https://instagram-private-api.readthedocs.io/en/latest/api.html#instagram_private_api.Client.feed_tag>)
```
from instagram_private_api import Client

username = "XXXXXX"
password = "<PASSWORD>"
hashtag = "football"

api = Client(username, password)
tag_metadata = api.feed_tag(hashtag, api.generate_uuid())
football_stories = tag_metadata["story"]["items"]
for story in football_stories:
    # Do whatever you want with the story.
    print(story)
```
Upvotes: 1
2018/03/22
823
2,609
<issue_start>username_0: I want to align a button to the right corner of the dialog. Below is my HTML:
```html
What's your favorite animal? Ok
```
[demo](https://stackblitz.com/edit/angular-ksmixt?file=app/dialog-overview-example-dialog.html)<issue_comment>username_1: Remove the
```
{display: flex}
```
styling for the class mat-dialog-actions Upvotes: 0 <issue_comment>username_2: You can use the `align` HTML attribute:
```html
What's your favorite animal? Ok
```
[Demo](https://stackblitz.com/edit/mat-dialog-aligned-actions)

---

Note: The reason setting an `align="end"` attribute works on the dialog's actions container is that the `align` attribute is used to add flexbox properties to the dialog actions in the dialog component's theming SCSS file:

*(Extract of `dialog.scss`)*
```scss
.mat-dialog-actions {
  // ...

  &[align='end'] {
    justify-content: flex-end;
  }

  &[align='center'] {
    justify-content: center;
  }
}
```
Here's the [source code](https://github.com/angular/components/blob/07f6a4892cbe44fbb66ff2ded5cdef92087bd5aa/src/material/dialog/dialog.scss#L56-L62). Upvotes: 7 [selected_answer]<issue_comment>username_3: Since the `align` attribute [is not supported in HTML5](https://www.w3schools.com/tags/att_div_align.asp), you should use this CSS instead:
```
.mat-dialog-actions {
  justify-content: flex-end;
}
```
*This is what Angular Material does internally when you put `align="end"`; you can inspect the element to check that.* Upvotes: 6 <issue_comment>username_4: Use the built-in toolbar that is part of Material.
```
#### Edit Task close Modal Content here
```
Custom CSS for the header:
```
.task-header {
  background-color: transparent;
  padding: 0 5px;
  height: 20px;
}

.fx-spacer {
  flex: 1 1 auto;
}
```
Upvotes: 2 <issue_comment>username_5: Guess I'm a bit late, but anyway: you can use the `float` style attribute:
```html
Ok
```
I used inline styling for easier demonstration, but I don't recommend inline styling for your projects.
Use separate style sheets instead. Upvotes: 1 <issue_comment>username_6: Close functionality and button alignment without TypeScript. HTML:
```
X
```
CSS:
```
.button-close {
  justify-self: right;
  font-size: 20px;
  border: none;
  height: 30px;
  background-color: #FFFFFF;
  outline: none;
  color: #c04747;

  &:hover {
    cursor: pointer;
  }
}
```
Upvotes: 1 <issue_comment>username_7: For Angular 15 you can set justify-content: flex-end:
```
```
Or in CSS:
```
.mat-mdc-dialog-actions {
  justify-content: flex-end;
}
```
Upvotes: 1
2018/03/22
336
1,081
<issue_start>username_0: My current MyBatis `mapper.xml` is
```
select id, user_id, mall_id, log, log_type
from user_log
where user_id in ( #{item,jdbcType=VARCHAR} )
and mall_id = #{1}
```
The Java `Mapper.java` is
```
List batchSelect(List userList, Long mallId);
```
When I start the Spring Boot service, the exception is:
```
exception: org.mybatis.spring.MyBatisSystemException: nested exception is
org.apache.ibatis.binding.BindingException: Parameter 'userList' not found.
Available parameters are [0, 1, param1, param2]
```
How can I write this correctly?<issue_comment>username_1: You can use the `@Param` annotation:
```
List batchSelect(@Param("userList") List userList, @Param("mailId") Long mallId);

select id, user_id, mall_id, log, log_type
from user_log
where user_id in ( #{item,jdbcType=VARCHAR} )
and mall_id = #{mailId}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Actually, mybatis-generator supports `selectByExample` and `updateByExample`; they both support the `where in` clause. This is my final choice. Upvotes: 0
2018/03/22
688
1,955
<issue_start>username_0: I want to compare the values for 2 or more keys in a Python dict and return, for each position, the key with the highest value. I can find the max for 2 keys, but I am clueless for 3 keys and above. Here is my code for the 2-key comparison:
```
d = {}
d['right'] = [0.1, 0.3, 0.5]
d['left'] = [0.2, 0.1, 0.4]

result = [list(d)[0] if x > y else list(d)[1]
          for x, y in zip(list(d.values())[0], list(d.values())[1])]
# result: ['left', 'right', 'right']
```
Now what's the best way to do it for 3 keys?
```
# input:
d = {}
d['right'] = [0.1, 0.3, 0.5]
d['left'] = [0.2, 0.1, 0.4]
d['back'] = [0.0, 0.2, 0.8]

# expected result: ['left', 'right', 'back']
```
PS: *I try not to use `numpy`, `pandas`, or any library if possible. If I have to use one, I try to stick with whatever base library is available in Python.*<issue_comment>username_1: You should sort the dictionary key/value pairs by values in descending order:
```
keys, _ = zip(*sorted(d.items(), key=lambda x: x[1], reverse=True))
keys
# ('left', 'right', 'back')
```
You can use `itemgetter` instead of the `lambda`, if you prefer:
```
from operator import itemgetter
keys, _ = zip(*sorted(d.items(), key=itemgetter(1), reverse=True))
```
Upvotes: 2 <issue_comment>username_2: Construct N dicts for the N positions, and compute each one (note: in Python 3, `d.values()` must be wrapped in `list()`, since dict views are not indexable):
```
d = {}
d['right'] = [0.1, 0.3, 0.5]
d['left'] = [0.2, 0.1, 0.4]

dct = [dict(zip(d.keys(), [v[i] for v in d.values()]))
       for i in range(len(list(d.values())[0]))]
keys = [sorted(d.keys(), key=lambda key: d[key], reverse=True)[0] for d in dct]
# ['left', 'right', 'right']
```
Upvotes: 0 <issue_comment>username_3: You can try something like this:
```
l1 = zip(*d.values())
l2 = list(d.keys())

[l2[i.index(max(i))] for i in l1]
```
Output:
```
['left', 'right', 'back']
```
Upvotes: 2
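The per-position comparison generalizes to any number of keys with a single `max()` over the dict's keys; here is a small standard-library-only sketch using the sample data from the question (all lists are assumed to be equally long):

```python
d = {
    "right": [0.1, 0.3, 0.5],
    "left":  [0.2, 0.1, 0.4],
    "back":  [0.0, 0.2, 0.8],
}

# For each index, pick the key whose list holds the largest value there.
n = len(next(iter(d.values())))
result = [max(d, key=lambda k: d[k][i]) for i in range(n)]
print(result)  # → ['left', 'right', 'back']
```

The same expression works unchanged for 2, 3, or any number of keys, which avoids hard-coding `list(d)[0]` / `list(d)[1]` as in the 2-key version.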
2018/03/22
354
1,259
<issue_start>username_0: I have the Vim emulator turned on, which is the desired behavior; turning it off only changes the blinking block cursor into a blinking non-block cursor.<issue_comment>username_1: > > You can turn off cursor blinking by going to Settings, Editor, General, Appearance and unticking `Caret blinking` > > > comment of @errikos above Instead of navigating the settings with clicks only, you may also type `caret blink` into the search field. This way you let IDEA do the search work for you ;) Upvotes: 5 <issue_comment>username_2: Go to File -> Settings -> General -> Appearance -> Upvotes: 0 <issue_comment>username_2: Another option is to deactivate the Vim Emulator. To do this, go to Tools -> select Vim Emulator, or press "ctrl+alt+v", as shown below: [![enter image description here](https://i.stack.imgur.com/79iHB.png)](https://i.stack.imgur.com/79iHB.png) For more information follow this link: <https://www.jetbrains.com/help/idea/using-product-as-the-vim-editor.html> Upvotes: 2 <issue_comment>username_3: Go to File -> Settings -> Editor -> Appearance. You will see an option named Caret blinking :) Upvotes: 0 <issue_comment>username_4: Just go to File --> Settings, type "blinking" in the search bar, untick Caret blinking (ms), and click OK. Upvotes: 2
2018/03/22
578
1,730
<issue_start>username_0: I have an ngFor, a for loop, written for Grocery items. I got this code snippet from a website explaining ngFor versus ngRepeat; I just copy-pasted it, but it doesn't seem to work. It uses an interface, a @Component and an export default class. Can you also please explain those? Please help.
```
import {Component} from '@angular/core';

interface Grocery {
  id: number;
  label: string;
}

@Component({
  selector: 'my-app',
  template: `
    Grocery selected: {{ selectedGrocery.label }}
    * [{{ grocery.label }} {{ i }}](#)
  `
})
export default class App {
  public groceries: Grocery[];

  constructor() {
    this.groceries = [
      { id: 0, label: 'Butter' },
      { id: 1, label: 'Apples' },
      { id: 2, label: 'Paprika' },
      { id: 3, label: 'Potatoes' },
      { id: 4, label: 'Oatmeal' },
      { id: 5, label: 'Spaghetti' },
      { id: 6, label: 'Pears' },
      { id: 7, label: 'Bacon' }
    ];
    this.selectGrocery(this.groceries[0]);
  }

  selectGrocery(grocery: Grocery) {
    this.selectedGrocery = grocery;
  }

  trackByGrocery: (index: number, grocery: Grocery): number => grocery.id;
}
```<issue_comment>username_1: You can take a look at the official Angular 2 documentation. Here is how it recommends using ngFor in an HTML component or template [1]:
```
* {{ hero }}
```
[1] <https://angular.io/guide/displaying-data> Upvotes: 0 <issue_comment>username_2: You should have a look at the official Angular documentation. Take a look at this [stackBlitz](https://angular-ng-for-example.stackblitz.io) There are a few things to note in the question. Change trackByGrocery to be a function:
```
trackByGrocery = (index, grocery) => grocery.id;
```
and declare selectedGrocery before using it. Upvotes: -1
2018/03/22
547
1,804
<issue_start>username_0: I have a vector, `my_class`, that is made up of grades. I am trying to generate a new vector, `top_grades`, that is filled up with grades from `my_class` that are greater than or equal to 85. I wrote the following code, but the `top_grades` vector ended up being the size of the `my_class` vector, with all the grades that were under 85 being returned as NA. I think this happens because some indices of `top_grades` were given no value as the vector was being built.
```
my_class = c(84, 85, 90) #sample vector
top_grades = c() #create vector

for (i in 1:length(my_class)) { #iterate for each index in the length of my_class
  if (my_class[i] >= 85) {
    top_grades[i] <- my_class[i] #the value of top_grades at index i is the value of my_class at index i
  } else {
    next #go to next index if the value of the grade at that index is lower than 85
  }
}
```
I solved this problem by finding a handy function online that removes the NA's from a vector.
```
top_grades = na.omit(top_grades) #remove NA's from filled vector
```
**My question:** Is there a more elegant way to write this loop that builds the `top_grades` vector without NA's?
2018/03/22
517
1,953
<issue_start>username_0: What's the best (most efficient) format to transmit GPS and ID data via a LoRa radio signal, using an Arduino ESP32? I have set up a radio and built JSON strings, but for such a low-bandwidth medium I suspect there's a better, more efficient way. Also, what's the best way to handle security: just base64 and encrypt your own data, or is there a standardized format?<issue_comment>username_1: 1. You could choose whatever format fits your scenario as long as you keep the payloads small. 2. The payload and the LoRaWAN packet are both encrypted. So you don't have to worry about doing your own encryption. Upvotes: 0 <issue_comment>username_2: Please follow my thoughts below: > > What's the best (most efficient) format to transmit GPS and ID data > via a LoRa radio signal, using an Arduino ESP32? I have set up a radio > and built JSON strings, but for such a low-bandwidth medium I suspect > there's a better, more efficient way. > > > LoRa and LoRaWAN are designed for low-power wide area networks; you may define your own data format to keep your payload as short as possible. > > Also, what's the best way to handle security: just base64 and encrypt > your own data, or is there a standardized format? > > > LoRa and LoRaWAN are not the same. LoRa is the PHY layer of LoRaWAN. As you mentioned, you are using LoRa. You may look for an encryption approach that does not increase the payload length too much, to find a balance between energy consumption and security. There is no standard payload format. Additional comment: I have to say that LoRa is not a good choice if security is a critical issue in your app. Upvotes: 0 <issue_comment>username_3: Take a look at Cayenne Low Power Payload (LPP). Cayenne specifies a GPS Location data type. <https://mydevices.com/cayenne/docs/lora/#lora-cayenne-low-power-payload> <https://www.thethingsnetwork.org/docs/devices/arduino/api/cayennelpp.html> Upvotes: 2
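To make the "self-defined compact format" advice concrete, here is one possible sketch (the field widths and scale factors are my own choices, not a standard): a device ID plus a GPS fix packed into 11 bytes with fixed-point integers, similar in spirit to Cayenne LPP's 0.0001° GPS resolution. A JSON string carrying the same data would typically be several times larger.

```python
import struct

def pack_fix(device_id: int, lat: float, lon: float, alt_m: int) -> bytes:
    # 1-byte ID, lat/lon as signed 32-bit integers in 1e-4 degree units,
    # altitude as a signed 16-bit number of metres: 11 bytes in total.
    return struct.pack(">Biih", device_id,
                       round(lat * 10_000), round(lon * 10_000), alt_m)

def unpack_fix(payload: bytes):
    device_id, lat_i, lon_i, alt_m = struct.unpack(">Biih", payload)
    return device_id, lat_i / 10_000, lon_i / 10_000, alt_m

payload = pack_fix(42, 51.5074, -0.1278, 35)
print(len(payload), unpack_fix(payload))  # → 11 (42, 51.5074, -0.1278, 35)
```

The same packing logic translates directly to C on the ESP32 side; the point is simply that fixed-width binary fields beat JSON for LoRa airtime.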
2018/03/22
492
1,821
<issue_start>username_0: I need to count the number of times each method is called when the project is run. Also I need to know both the production and the dev mode, whatever tools code or something it can count for me. I use C# .NETcore vso2017 Enterprise
2018/03/22
736
2,854
<issue_start>username_0: I am using the scikit-learn KNeighborsClassifier for classification on a dataset with 4 output classes. The following is the code that I am using: `knn = neighbors.KNeighborsClassifier(n_neighbors=7, weights='distance', algorithm='auto', leaf_size=30, p=1, metric='minkowski')` The model works correctly. However, I would like to provide user-defined weights for each sample point. The code currently uses the inverse of the distance for scaling via the `weights='distance'` parameter. I would like to keep the inverse distance scaling, but for each sample point I also have a probability weight, and I would like to apply it as a weight in the distance calculation. For example, if `x` is the test point and `y,z` are the two nearest neighbors for which distance is being calculated, then I would like the distance to be calculated as (sum|x-y|)\*wy and (sum|x-z|)\*wz respectively. I tried to define a function that was passed into the `weights` argument, but I also want to keep the inverse distance scaling in addition to the user-defined weight, and I do not know the inverse distance scaling function. I could not find an answer in the documentation. Any suggestions?<issue_comment>username_1: KNN in sklearn doesn't have sample weights, unlike other estimators, e.g. DecisionTree. Personally speaking, I think it is a disappointment. It would not be hard to make KNN support sample weights, since the predicted label is the majority vote of its neighbours. A crude workaround is to generate samples yourself based on the sample weight. E.g., if a sample has weight 2, then make it appear twice. Upvotes: 2 <issue_comment>username_2: `sklearn.neighbors.KNeighborsClassifier.score()` has a `sample_weight` parameter. Is that what you're looking for? Upvotes: -1 <issue_comment>username_3: You can use resampling to adapt your sample weights with K-neighbors, since the sklearn implementation does not include sample weights.
Here is how you could do this:
```py
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Get training and testing data
Xtrain, ytrain, sample_weight_train = get_train_data()
Xtest, ytest, sample_weight_test = get_test_data()

# Derive probability values from your sample weights
prob_train = np.asarray(sample_weight_train) / np.sum(sample_weight_train)

upsample_size = int(np.max(prob_train) / np.min(prob_train) * len(ytrain))
newids = np.random.choice(range(len(ytrain)), size=upsample_size,
                          p=prob_train, replace=True)

# Upsample training data using sample weights as probabilities
# so that the data distribution is upsampled to fit the corresponding sample weights
Xtrain, ytrain = Xtrain[newids, :], ytrain[newids]

# Fit your model
model = KNeighborsClassifier()
model = model.fit(Xtrain, ytrain)
ypred = model.predict(Xtest)
```
Upvotes: 0
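The "make it appear twice" idea from the first answer can be demonstrated with the standard library alone, before bringing sklearn in; the tiny dataset here is made up purely for illustration:

```python
# Duplicate each sample proportionally to its (integer) weight, so that a
# plain majority vote over the duplicated set behaves like a weighted vote.
samples = ["a", "b", "c"]
labels = [0, 1, 1]
weights = [3, 1, 1]  # sample "a" should count three times

upsampled = [(s, y) for s, y, w in zip(samples, labels, weights)
             for _ in range(w)]
print(upsampled)
# → [('a', 0), ('a', 0), ('a', 0), ('b', 1), ('c', 1)]

# An unweighted majority vote over the upsampled labels now favours 0,
# even though labels 1 outnumber label 0 in the original data:
votes = [y for _, y in upsampled]
majority = max(set(votes), key=votes.count)
print(majority)  # → 0
```

The probabilistic `np.random.choice` version in the answer above generalizes this to non-integer weights at the cost of randomness in the resampled set.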
2018/03/22
800
2,637
<issue_start>username_0: The following is the command called and its output. This is during setting up channels on a Hyperledger Fabric network.
```
peer channel update -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/Org1MSPanchors.tx
```
> 2018-03-22 03:19:34.849 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
> 2018-03-22 03:19:34.849 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
> 2018-03-22 03:19:34.850 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
> Error: Invalid channel create transaction : mismatched channel ID channel != mychannel

Has anyone encountered this?<issue_comment>username_1: Try running `peer channel list`; maybe a channel with that name doesn't exist. Upvotes: 0 <issue_comment>username_2: > Endorser and orderer connections initialized Error: Invalid channel create transaction : mismatched channel ID channel != mychannel

You are trying to apply a configuration transaction that was generated for the channel with id `channel` to the channel with id `mychannel`. You either need to regenerate the config transaction for `mychannel` or use the existing one for `channel`, e.g.:
```
peer channel update -o orderer.example.com:7050 -c channel -f ./channel-artifacts/Org1MSPanchors.tx
```
Upvotes: 1 <issue_comment>username_3: This error usually occurs when an Org is part of 2 different channels but only 1 MSPAnchors.tx file has been created from the configtxgen tool for a given profile. Ideally, you would want to generate an MSPAnchors.tx file for each channel that the Org will be part of.
Assume you are planning to keep Org1 as part of 2 different channels (Ch1 and Ch2); then you will have to generate a .tx file for each of these channels:
```
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors1.tx -channelID Ch1 -asOrg Org1MSP
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors2.tx -channelID Ch2 -asOrg Org1MSP
```
This will create two .tx files in the channel-artifacts folder:
```
Org1MSPanchors1.tx
Org1MSPanchors2.tx
```
Then, to update the anchor peers on Ch1:
```
peer channel update -o orderer.example.com:7050 -c ch1 -f ./channel-artifacts/Org1MSPanchors1.tx
```
And on Ch2:
```
peer channel update -o orderer.example.com:7050 -c ch2 -f ./channel-artifacts/Org1MSPanchors2.tx
```
The channel update commands above assume that `CORE_PEER_TLS_ENABLED` is set to false; otherwise you'll have to provide the path to the Orderer's CA file using the `--cafile` flag. Upvotes: 1