Dataset columns: date: stringlengths (10 to 10) | nb_tokens: int64 (60 to 629k) | text_size: int64 (234 to 1.02M) | content: stringlengths (234 to 1.02M)
2018/03/21
311
1,088
<issue_start>username_0: In the following Kotlin code ``` editTextUsername = EditText findViewById(R.id.email_edittext) ``` and the error is : Error:(70, 35) Unexpected tokens (use ';' to separate expressions on the same line) but i can not understand what did i do wrong.<issue_comment>username_1: If you're trying to specify the type of a variable, you can do it like this: ``` val editTextUsername: EditText = findViewById(R.id.email_edittext) ``` If you're below API level 26, and need a cast: ``` editTextUsername = findViewById(R.id.email_edittext) as EditText ``` If you already have the generic `findViewById` method because you're on a newer API level: ``` editTextUsername = findViewById(R.id.email\_edittext) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: if you are finding and casting view inside activity ``` editTextUserName = findViewById(R.id.) as EditText ``` or inside fragment use the ff ``` View view = inflater.inflate(R.layout.article_view, container, false); editTextUserName = view.findViewById(R.id.) as EditText ``` Upvotes: 0
2018/03/21
612
2,109
<issue_start>username_0: I want to call a function dynamically, but I didn't success. ``` available.append({ 'analysis_name': 'Category X Total Payment', 'col_name': 'VALUE', 'required_cols': ['Category','VALUE'], 'agg':'SUM', 'analysis_type': 'pareto-bar', 'func': 'draw_pareto'}) available.append({ 'analysis_name': 'Category X Count', 'col_name': 'Count', 'required_cols': ['Category','VALUE'], 'agg':'Count', 'analysis_type': 'pareto-bar', 'func': 'draw_pareto'}) ``` I have an array which is mapping function names, and I want to call the function by user option like this: ``` def runAnalysis(self, analysis): required_cols = analysis['required_cols'] agg = analysis['agg'] col_name = analysis['col_name'] analysis['data'] = self.draw_pareto(required_cols[0], required_cols[1], agg, col_name) // this line must be dynamically return analysis['data'] ```<issue_comment>username_1: Change your data to contain the function: ``` available.append({ 'analysis_name': 'Category X Count', 'col_name': 'Count', 'required_cols': ['Category','VALUE'], 'agg':'Count', 'analysis_type': 'pareto-bar', 'func': self.draw_pareto}) ``` Then call it: ``` analysis['data'] = analysis['func'](required_cols[0], required_cols[1], agg, col_name) // this line must be dynamically ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: In python everything (`classes`, `functions` etc) is an object, if you want a `dict` to map from keys to functions use function objects. ``` def test(): map = { 'a': func1, 'b': func2 } map['a']() def func1(): pass def func2(): pass ``` Upvotes: 0 <issue_comment>username_3: If you can not change the dictionary to contain the function itself, instead of it's name, as suggested in other answers, when you can use [`getattr`](https://docs.python.org/3.5/library/functions.html#getattr) to get the function corresponding to the name: ``` func = getattr(self, analysis['func']) analysis['data'] = func(required_cols[0], required_cols[1], agg, col_name) ``` Upvotes: 1
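A runnable sketch of the `getattr` approach from the last answer above, keeping the asker's dictionary layout; the `Analyzer` class and the body of `draw_pareto` are placeholder assumptions, not the original code:

```python
class Analyzer:
    # Placeholder implementation; the real draw_pareto would build a pareto chart.
    def draw_pareto(self, col_a, col_b, agg, col_name):
        return f"pareto({col_a}, {col_b}, {agg}, {col_name})"

    def run_analysis(self, analysis):
        required_cols = analysis['required_cols']
        # Resolve the function name stored in the dict to a bound method.
        func = getattr(self, analysis['func'])
        return func(required_cols[0], required_cols[1],
                    analysis['agg'], analysis['col_name'])


available = [{
    'analysis_name': 'Category X Total Payment',
    'col_name': 'VALUE',
    'required_cols': ['Category', 'VALUE'],
    'agg': 'SUM',
    'analysis_type': 'pareto-bar',
    'func': 'draw_pareto',          # just the name; getattr resolves it at call time
}]

print(Analyzer().run_analysis(available[0]))
# -> pareto(Category, VALUE, SUM, VALUE)
```

Storing `self.draw_pareto` itself in the dict (the selected answer) avoids the string lookup but ties the dictionary to a live instance; storing the name keeps the configuration serialisable.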
2018/03/21
1,019
3,060
<issue_start>username_0: Need to find which invoice has the second-lowest total price among invoices that do not include a sale of a FiredAlways stove. I can manage to get the lowest, but not the second lowest. What I have: ``` SELECT TOP 1 WITH TIES I.InvoiceNbr, I.InvoiceDt, I.TotalPrice FROM INVOICE I WHERE EXISTS( SELECT TOP 2 WITH TIES I.InvoiceNbr FROM INVOICE I WHERE EXISTS ( SELECT FK_InvoiceNbr FROM INV_LINE_ITEM WHERE FK_StoveNbr NOT IN (SELECT S.SerialNumber FROM STOVE AS S WHERE S.Type = 'FiredAlways')) ORDER BY I.TotalPrice DESC) GROUP BY I.InvoiceNbr, I.InvoiceDt, I.TotalPrice ORDER BY I.TotalPrice ASC; ``` Data: ``` [INVOICE]( [InvoiceNbr] [numeric](18, 0) NULL, [InvoiceDt] [datetime] NULL, [TotalPrice] [numeric](18, 2) NULL, [FK_CustomerID] [numeric](18, 0) NULL, [FK_EmpID] [numeric](18, 0) NULL [INV_LINE_ITEM]( [LineNbr] [numeric](18, 0) NULL, [Quantity] [numeric](18, 0) NULL, [FK_InvoiceNbr] [numeric](18, 0) NULL, [FK_PartNbr] [numeric](18, 0) NULL, [FK_StoveNbr] [numeric](18, 0) NULL, [ExtendedPrice] [numeric](18, 2) NULL [STOVE]( [SerialNumber] [int] NOT NULL, [Type] [char](15) NOT NULL, [Version] [char](15) NULL, [DateOfManufacture] [smalldatetime] NULL, [Color] [varchar](12) NULL, [FK_EmpId] [int] NULL, ``` Wanted Output: ``` Invoice # date Price --------- ------------ ------- 206 02/03/2002 28.11 ```<issue_comment>username_1: Two general approaches to get the *nth lowest*: ``` DECLARE @tbl TABLE(SomeInt INT); INSERT INTO @tbl VALUES(10),(2),(35),(44),(52),(56),(27); ``` --Use a `TOP n` on the inner select and a `TOP 1` with a reverse order on the outer: ``` SELECT TOP 1 innertbl.SomeInt FROM ( SELECT TOP 2 SomeInt FROM @tbl GROUP BY SomeInt ORDER BY SomeInt ) AS innertbl ORDER BY innertbl.SomeInt DESC; ``` --use a `CTE` with `DENSE_RANK()` (thx to dnoeth for the hint) ``` WITH AddSortNumber AS ( SELECT SomeInt ,DENSE_RANK() OVER(ORDER BY SomeInt) AS SortNumber FROM @tbl ) SELECT SomeInt FROM AddSortNumber WHERE SortNumber=2 ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Try This: ``` WITH cte AS ( SELECT I.InvoiceNbr, I.InvoiceDt, I.TotalPrice, dense_rank() over( ORDER BY I.TotalPrice ASC) as dnrnk FROM INVOICE I INNER JOIN Inv_LineITEM ON InvoiceNbr = FK_InvoiceNbr INNER JOIN Stove ON FK_StoveNbr = SerialNumber WHERE Type != 'FiredAlways' ) select InvoiceNbr, InvoiceDt, TotalPrice from cte where dnrnk=2 ``` Upvotes: 0 <issue_comment>username_3: ``` SELECT TOP 1 TotalPrice FROM ( SELECT TOP 2 TotalPrice FROM invoice ORDER BY TotalPrice DESC ) AS em ORDER BY TotalPrice ASC ``` Essentially: Find the top 2 TotalPrice in descending order. Of those 2, find the top TotalPrice in ascending order. The selected value is the second-highest TotalPrice. If the TotalPrice isn't distinct, you can use `SELECT DISTINCT TOP ...` instead. Upvotes: -1
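The thread is T-SQL; purely as a sanity check of what the `DENSE_RANK` answer computes, here is "second-lowest distinct total" in a few lines of Python with invented totals:

```python
# Invented invoice totals; duplicates collapse the same way DENSE_RANK collapses ties.
totals = [28.11, 19.50, 19.50, 42.00, 55.25]
second_lowest = sorted(set(totals))[1]
print(second_lowest)   # -> 28.11
```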
2018/03/21
1,956
7,451
<issue_start>username_0: angular2 how to use ng-template from a different file? When I place the ng-template within the same HTML where I use it works but when I move ng-template into a separate file then it won't work. Is there a way to move ng-template into its own file and use it in different html file? **info-message.html** ``` Hi ``` above is working fine because ng-template and the usage is in same file **message-template.html** ``` Hi ``` **info-message.html** ``` ``` This is not working. Is there a way to use **"messageTemplate"** which is in a separate file inside another html(Eg: info-message.html) Thanks in advance.<issue_comment>username_1: You can use something like this (template is used from another component): ``` @Component( template: '' ) export class MessageTemplate { infoMessage: InfoMessage; } @Component( .... ) export class InfoMessage{ @ContentChild('columnTemplate') template: TemplateRef; constructor(private messageTemplate: MessageTemplate) { messageTemplate.infoMessage = this; } } ``` Upvotes: 0 <issue_comment>username_2: Have you seen this? <https://github.com/angular/angular/issues/27503> There is an example there provided by dawidgarus The suggestion is that if you want to reuse your template in different files, you should convert what is inside the template into a separate component, then you can reuse that component wherever you want. Upvotes: 2 <issue_comment>username_3: This behaviour can be achieved via a 'portal'. This is a useful and fairly common pattern in Angular applications. For example you may have a global sidebar outlet living near the top app level and then child components may specify a local , as part of their overall template, to be rendered at this location. Note that while the may be defined outside of the file where the desired outlet is defined, it is still necessary to place the inside the template of *some* component. This can be a minimalist component which is only responsible for wrapping the , however it could equally be a complicated component where the of interest only plays a minor part. This code illustrates one possible basic implementation of a portal. ``` @Directive({ selector: '[appPortal]' }) export class PortalDirective implements AfterViewInit { @Input() outlet: string; constructor(private portalService: PortalService, private templateRef: TemplateRef) {} ngAfterViewInit(): void { const outlet: PortalOutletDirective = this.portalService.outlets[this.outlet]; outlet.viewContainerRef.clear(); outlet.viewContainerRef.createEmbeddedView(this.templateRef); } } @Directive({ selector: '[appPortalOutlet]' }) export class PortalOutletDirective implements OnInit { @Input() appPortalOutlet: string; constructor(private portalService: PortalService, public viewContainerRef: ViewContainerRef) {} ngOnInit(): void { this.portalService.registerOutlet(this); } } @Injectable({ providedIn: 'root' }) export class PortalService { outlets = new Map(); registerOutlet(outlet: PortalOutletDirective) { this.outlets[outlet.appPortalOutlet] = outlet; } } ``` It works using three parts: * A 'portal' directive. This lives on the desired and takes as input the name of the outlet at which the content should be rendered. * A 'portal outlet' directive. This lives on an outlet, e.g. an , and defines the outlet. * A 'portal' service. This is provided at the root level and stores references to the portal outlets so they can be accessed by the portals. 
This may seem like a lot of work for something quite simple but once this plumbing is in place it is easy to (re)use. ``` // foo.component.html Foo === RIGHT ===== ``` In general it's not a great idea to reinvent the wheel though when there are already well-tested, documented and stable implementations available. The [Angular CDK](https://material.angular.io/cdk) provides [such an implementation](https://material.angular.io/cdk/portal) and I'd advise to use that one rather than your own in practice. Upvotes: 3 <issue_comment>username_4: If you are loading a separate file, you can define a Component in the separate file (instead of a ). And then inject the entire Component into the using the `*ngComponentOutlet`. You can find the full sulotion with example here: <https://stackoverflow.com/a/59180628/2658683> Upvotes: 0 <issue_comment>username_5: Expanding on the answer by @username_3 for reasons of explanation and portability. This will let you use a template across components. **To use:** ``` 'app.module.ts' ``` ```js import {NgModule} from '@angular/core'; import { IdcPortalDirective, IdcTemplatePortalDirective, PortalService } from './idc-template-portal/idc-template-portal.component'; @NgModule({ declarations: [ IdcPortalDirective, IdcTemplatePortalDirective ], imports: [], exports: [], providers: [ PortalService ], bootstrap: [AppComponent] }) export class AppModule {} ``` ``` './idc-template-portal/idc-template-portal.component.ts' ``` ```js import { AfterViewInit, Directive, Injectable, Input, OnInit, Output, TemplateRef, ViewContainerRef } from '@angular/core'; /*** Input Template ***/ /*** Template Contents ***/ @Directive({ selector: '[idcPortal]' }) export class IdcPortalDirective implements OnInit { @Input() outlet: string; @Output() inlet: string = this.outlet; constructor(private portalService: PortalService, public templateRef: TemplateRef) {} ngOnInit():void { this.portalService.registerInlet(this); } } /\*\*\* Output Container \*\*\*/ /\*\*\* \*\*\*/ @Directive({ selector: '[idcPortalOutlet]' }) export class IdcTemplatePortalDirective implements OnInit, AfterViewInit { @Input() appPortalOutlet: string; @Output() outlet: string = this.appPortalOutlet; constructor(private portalService: PortalService, public viewContainerRef: ViewContainerRef) {} ngOnInit():void { this.portalService.registerOutlet(this); } ngAfterViewInit() { this.portalService.initializePortal(this.appPortalOutlet); } } @Injectable({ providedIn: 'root' }) export class PortalService { outlets = new Map(); inlets = new Map(); registerOutlet(outlet: IdcTemplatePortalDirective) { this.outlets[outlet.outlet] = outlet; } registerInlet(inlet: IdcPortalDirective) { this.inlets[inlet.inlet] = inlet; } initializePortal(portal:string) { const inlet: IdcPortalDirective = this.inlets[portal]; const outlet: IdcTemplatePortalDirective = this.outlets[portal]; outlet.viewContainerRef.clear(); outlet.viewContainerRef.createEmbeddedView(inlet.templateRef); } } ``` He,@username_3, mentions reinventing the wheel in regards to the [Angular CDK portals package](https://material.angular.io/cdk/portal/overview). However, I find his/this implementation to make more sense in the way it's used in the application flow and the ease in which a template can be ported from component to another component that contains the portal outlet (allowing component to component->portal template communication. 
For example within a component template implementing the [Angular Material MatBottomSheet](https://material.angular.io/components/bottom-sheet/overview) (idcBottomSheet)). **The 'input' element:** ``` notes A place for your thoughts: ``` **The 'output' element (inside your MatBottomSheet component template):** ```html ``` Upvotes: 0
2018/03/21
501
1,842
<issue_start>username_0: I am new to Angular. I would like to to print list of products, which are in relation with distributor. For that I thought about combaining \*ngFor \*ngIf so here is a code: ``` Distributors ------------ Operation couldn't be executed: {{errorMessage}} * id: {{distributor.id}} name: {{distributor.name}} products 1. dist ``` So I want to loop over the products array and if the distributor id of product matches with distributor id from other array I will have it printed new li element created. Unfortunately I get an error: > > Can't have multiple template bindings on one element. Use only one attribute named 'template' or prefixed with \* > > > [ERROR ->]\*ngIf="product.distributor.id== distributor.id" > > > Does anyone knows how to refactor this code to make it work?<issue_comment>username_1: You can use to have ngIf ``` products - dist ``` Upvotes: 1 <issue_comment>username_2: You cannot use multiple structural directives on one html element. For example you can instead place your `*ngFor` in a and wrap it around your `-` element. The advantage of is the fact, that it doesn't introduce extra levels of HTML. ```js - dist ``` For more detail information have a look at the [Official Angular Documentation (One structural directive per host element)](https://angular.io/guide/structural-directives#one-structural-directive-per-host-element) Upvotes: 3 <issue_comment>username_3: Angular doesn't support more than one structural directive on the same element, so you should use the `ng-container` helper element here. FYI `ng-container` does not add an extra element to the DOM (the `ul` in Example 2 does get added to the DOM). Example 1 ``` - {{ product?.name }} ``` or Example 2 ``` * {{log(thing)}} {{thing.name}} ``` Upvotes: 6 [selected_answer]
2018/03/21
305
1,144
<issue_start>username_0: The image file is coming from Node js side and an image is uploaded through java backend to s3 temp bucket
2018/03/21
1,050
3,347
<issue_start>username_0: Okay, let met explain. I am getting a obsersable from AngularFire Database, and here is what I pretend to do: - get 2 static copies of the obsersable response - show 1 static copy - allow user to modify this static copy and not save it - when user decides too cancel the changes, i simply get the copy number 2 and set it to copy number 1. getting obsersable: ``` this.afoDatabase.object("usuarios/"+this.userKey+"/pacientes/"+this.pacienteKey).map(res=> { this.model = res; this.model2 = res; }).first().toPromise(); ``` when the users decides to cancel the changes, here is what i am doing: ``` this.model = this.model2; // this.model is the copy number 1 and this.model2 is the 'unchanged' copy number 2 ``` but when i do it, the copy number 2, even that i never showed or changed it, is exactly equal to copy number 1;<issue_comment>username_1: Map is used to modify data received in each event. So you need to assign data to model like ``` .first((data) => model = data) ``` Then use ``` .subscribe((data) => model2 = data) ``` If you want to modify data format you may call ``` .map((data) => { let res = modifyData(data); return res}); ``` Update: ``` import { timer } from 'rxjs/observable/timer'; import 'rxjs/add/operator/first'; let model, model1; let observable = timer(0, 1000); observable.first().subscribe((n) => { model = n; }); observable.subscribe((n) => { model1 = n; }); ``` Upvotes: 0 <issue_comment>username_2: **Updated** You can use [combineLatest](https://www.learnrxjs.io/operators/combination/combinelatest.html) like ``` let supplier = this.afoDatabase.object("usuarios/" + this.userKey + "/pacientes/" + this.pacienteKey); Observable.combineLatest(supplier, supplier) .map([model1, model2]=> { this.model1=model1; this.model2=model2; }).first() ... ``` CombineLatest will emit an array when both of these observables emits. Here `model1` and `model2` should have same value. *Note*: This will create two separate requests. If you want to create single request then look below at Previous Answer. **Previous Answer** Look here ```js let res = {foo:1,bar:2} let model1 = res; let model2 = res; model1.foo=3; console.log(model1.foo) //3 console.log(model2.foo) //3 ``` Because when we declare `model1` and `model2` with value `res`, so both points to the same object. Both holds the reference of same object (here `res`) and changing one changes other. What you need to do is clone `res` to `model2` so it will a different object with same value as `res`. Cloning is of two types shallow and deep. 1. [Efficient deep copy](https://stackoverflow.com/questions/122102/what-is-the-most-efficient-way-to-deep-clone-an-object-in-javascript?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa) 2. [Shallow copy](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) 3. [Difference between shallow and deep copy.](https://stackoverflow.com/questions/184710/what-is-the-difference-between-a-deep-copy-and-a-shallow-copy) If your `res` objects values are string, numbers etc then you need shallow copy. Example ``` res= {a:1,b:"2"} ``` If they contains array, other objects then you need deep copy. Example ``` res={a:{c:1},b:[2,3]} ``` Upvotes: 2 [selected_answer]
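The selected answer's point is that `model1 = res; model2 = res` binds two names to one object. The same behaviour, shown in a short Python sketch (the values are invented):

```python
import copy

res = {'foo': 1, 'bar': 2}
model1 = res
model2 = res              # same object, not an independent backup
model1['foo'] = 3
print(model2['foo'])      # -> 3, the "untouched" copy changed as well

backup = copy.deepcopy(res)   # an actual snapshot to restore from on cancel
res['foo'] = 99
print(backup['foo'])      # -> 3, unaffected by later edits
```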
2018/03/21
385
1,359
<issue_start>username_0: I want to merge 2 table into one by using **UNION ALL** operator. The first table has several fields. The second table groups several field into on JSONB field. In order to simplify the question, I **reproduced the error** by using this simple SQL request (without any dependance on table) : ``` SELECT 10 as price UNION ALL SELECT '{"price":2}'::jsonb->'price' as price; ``` This request return the following error : ``` ERROR: UNION types integer and jsonb cannot be matched LINE 3: SELECT '{"price":2}'::jsonb->'price' as price; ``` How can I merge an integer with JSONB interger property by using UNION ALL operator ? I want to get the following output : ``` price ------- 10 2 (2 rows) ```<issue_comment>username_1: JSON seems so simple and yet it gets a bit complicated when working with types. You want to extract the *element* as a value; then, you can convert to an integer. Hence: ``` SELECT 10 as price UNION ALL SELECT ('{"price":2}'::jsonb->>'price')::int as price ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: The `->` operator returns a JSONB value, not an integer. You need to use the `->>` operator which returns a `text` and then cast that to an integer: ``` SELECT 10 as price UNION ALL SELECT ('{"price":2}'::jsonb->>'price')::int as price; ``` Upvotes: 1
2018/03/21
892
3,191
<issue_start>username_0: I am wondering how to configure the `.npmrc` file so that I can have a default registry and a different scoped registry with authentication. I am using Nexus for the private repository and I am not sure how to set authentication for the scoped registry, only the default registry. For example my `~/.npmrc` file is: ``` registry=https://registry.npmjs.org/ @test-scope:registry=http://nexus:8081/nexus/content/repositories/npm-test/ email=<EMAIL> _auth="…" ``` If I do `npm publish` for a package scoped to `test-scope`, I get an authentication error. AFAIK, the `_auth` only applies to the `registry=...` section. Is there a way of specifying an auth key for the `@test-scope:registry=...` section? Thanks,<issue_comment>username_1: So, after some digging through the NPM source code, it turns out there is a way to do this. My solution is below: ``` registry=https://registry.npmjs.org/ @test-scope:registry=http://nexus:8081/nexus/content/repositories/npm-test/ //nexus:8081/nexus/content/repositories/npm-test/:username=admin //nexus:8081/nexus/content/repositories/npm-test/:_password=<PASSWORD>M= email=… ``` **Explanation:** The scope `@test-scope` specifies that packages with the scope should be published to a different registry than the default `registry=` when executing the `npm publish` command. The two lines starting with `//nexus:8081/...` are used to specify the credentials to the scoped repository for both `username` and `_password` where `_password` is the base64 encoded password component from the previously used `_auth` credentials. Using this approach, only scoped packages will be published and installed from the private registry and all other packages will be installed from the default registry. **Edit:** Additional to this, the password can be specified as an environment variable so that it is not stored in plaintext in the file. For example: ``` registry=https://registry.npmjs.org/ @test-scope:registry=http://nexus:8081/nexus/content/repositories/npm-test/ //nexus:8081/nexus/content/repositories/npm-test/:username=admin //nexus:8081/nexus/content/repositories/npm-test/:_password=${<PASSWORD>} email=… ``` Also, when using Nexus, the `email=` line must be specified. Upvotes: 7 [selected_answer]<issue_comment>username_2: for some strange reason the `_auth` is called `_authToken` when used with scoped packages. If you are using this you don't have to store your plain text password in your `.npmrc` ``` registry=https://registry.npmjs.org/ @test-scope:registry=http://nexus:8081/nexus/content/repositories/npm-test/ //nexus:8081/nexus/content/repositories/npm-test/:_authToken=... email=… ``` Upvotes: 4 <issue_comment>username_3: Run the following command, replacing **@company-scope** with the scope, and **company-registry** with the name of your company’s npm Enterprise registry: ``` npm login --scope=@company-scope --registry=https://registry.company-registry.npme.io/ ``` This information is available on the npm [documention](https://docs.npmjs.com/configuring-your-registry-settings-as-an-npm-enterprise-user#configuring-scopes-to-point-to-different-registries). Upvotes: 3
2018/03/21
1,209
3,910
<issue_start>username_0: I want to apply a style to an element when hovering over another element in a different parent `div`. The classes are `lvcontainer` and `rvcontainer` and when I hover over `lvcontainer`, `rvcontainer` will be set to `display: block`. I'm trying to make a multidimensional drop menu like blackberry navigation. ```js var lvcontainer = document.getElementsByClassName('lvcontainer'); var rvcontainer = document.getElementsByClassName('rvcontainer'); for (i = 0; i < 1; i++) { lvcontainer[i].addEventListener("mouseover", function() { rvcontainer[i].style.display = "block"; }, false); } ``` ```css body { margin: auto; } #container { display: table; } #lcontainer { padding: 0 10px 0 10px; display: table-cell; } #rcontainer { padding: 0 10px 0 10px; display: table-cell; } .rvcontainer { display: none; } ``` ```html Country Genre Japan Korea American Comedy Mystery Horror ```<issue_comment>username_1: Maybe is not the cleaner solution but you can try this: ``` body { margin: auto; } #container { display: table; } #lcontainer { padding: 0 10px 0 10px; display: table-cell; } #rcontainer { padding: 0 10px 0 10px; display: table-cell; } .rvcontainer-c,.rvcontainer-g { display: none; } Country Genre Japan Korea American Comedy Mystery Horror var lvcontainerC = document.getElementsByClassName('lvcontainer-c'); var rvcontainerC = document.getElementsByClassName('rvcontainer-c'); lvcontainerC[0].addEventListener("mouseover", function(){ rvcontainerC[0].style.display = "block"; }, false); lvcontainerC[0].addEventListener("mouseout", function(){ rvcontainerC[0].style.display = "none"; }, false); var lvcontainerG = document.getElementsByClassName('lvcontainer-g'); var rvcontainerG = document.getElementsByClassName('rvcontainer-g'); lvcontainerG[0].addEventListener("mouseover", function(){ rvcontainerG[0].style.display = "block"; }, false); lvcontainerG[0].addEventListener("mouseout", function(){ rvcontainerG[0].style.display = "none"; }, false); ``` Upvotes: 0 <issue_comment>username_2: There are a few issues but you can do what you want by looping through and protecting your index. 
The below assumes there will always be the same number of lvcontainer and rvcontainer ```js var lvcontainer = document.getElementsByClassName('lvcontainer'); var rvcontainer = document.getElementsByClassName('rvcontainer'); for (i = 0; i < lvcontainer.length; i++) { // loop to the length (function(protectedIndex) { // protect the index so clicks won't use last increment of i // show lvcontainer[protectedIndex].addEventListener("mouseover", function() { rvcontainer[protectedIndex].style.display = "block"; }, false); // hide lvcontainer[protectedIndex].addEventListener("mouseout", function() { rvcontainer[protectedIndex].style.display = "none"; }, false); })(i); } ``` ```css body { margin: auto; } #container { display: table; } #lcontainer { padding: 0 10px 0 10px; display: table-cell; } #rcontainer { padding: 0 10px 0 10px; display: table-cell; } .rvcontainer { display: none; } ``` ```html Country Genre Japan Korea American Comedy Mystery Horror ``` After your comment, I would restructure your html to be a bit more semantically correct: ```css #container ul { margin: 0; padding: 0; list-style: none; } #lcontainer { display: inline-block; position: relative; } .lvcontainer { padding: 0.1em 0.5em 0.1em 0.1em; } .rvcontainer { display: none; position: absolute; top: 0; left: 100%; } .lvcontainer:hover>.rvcontainer { display: block; } ``` ```html * Country + Japan + Korea + American * Genre + Comedy + Mystery + Horror ``` Upvotes: 2 [selected_answer]
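The `protectedIndex` wrapper in the selected answer is needed because all of the loop's handlers otherwise see the final value of `i`. The same capture issue, illustrated in Python; binding the value explicitly is the counterpart of the IIFE:

```python
# Each lambda closes over the variable i, not its value at creation time.
handlers = [lambda: i for i in range(3)]
print([h() for h in handlers])      # -> [2, 2, 2]

# Bind the current value explicitly (the counterpart of the IIFE/protectedIndex trick).
handlers = [lambda i=i: i for i in range(3)]
print([h() for h in handlers])      # -> [0, 1, 2]
```

(In modern JavaScript, declaring the loop variable with `let` gives each iteration its own binding and avoids the wrapper entirely.)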
2018/03/21
276
989
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/rwcWK.jpg)](https://i.stack.imgur.com/rwcWK.jpg)I have tried to add HTML template via intent but only spanable string or attached file is working via intent, Is there any way to inject Html template to Gmail via intent?<issue_comment>username_1: yes i think so, try get your format using Html.fromHtml() method and set the type text/HTML: ``` final Intent superIntent = new Intent(FirstScreen.this, SecondScreen.class); superIntent.setType("text/html"); superIntent.putExtra("html", Html.fromHtml(body)); .. ``` Upvotes: 0 <issue_comment>username_2: Use below code ``` final Intent shareIntent = new Intent(Intent.ACTION_SENDTO, Uri.parse("mailto:")); shareIntent.putExtra(Intent.EXTRA_SUBJECT, "The Subject"); shareIntent.putExtra( Intent.EXTRA_TEXT, Html.fromHtml(new StringBuilder() .append("**Some Content** ") .append("More content ") .toString()) ); ``` Upvotes: 1
2018/03/21
393
1,341
<issue_start>username_0: I have Mainactivity, one Fragment and BlogRecycleadapter to populate the Fragment with Blog post. Inside Mainactivity, I have Floating action Button. But When I run the app It hides beneath the items of blogRecycleadapter. How can I make Floating action button always on Top?? [![enter image description here](https://i.stack.imgur.com/9ndAL.png)](https://i.stack.imgur.com/9ndAL.png) My mainactivity layout is ```html xml version="1.0" encoding="utf-8"? ``` and my fragment layout is , ```css xml version="1.0" encoding="utf-8"? ``` and my blogRecycleadapter class is ```css xml version="1.0" encoding="utf-8"? ```
2018/03/21
506
1,906
<issue_start>username_0: I am using Ant Select component inside Dropdown component. Here is my index file which renders Dropdown ``` const getMenu = filter => ( ); this.handleDropdownVisibility(val, searchFilter) } > ... ``` Here is my MenuContainer which return Select Component inside it ``` handleSelectChange = val => { this.setState({ selectedValue: val, }); }; {numberComparision.map((item, i) => { return ( {item.name} ); })} ``` so on clicking select value onVisibleChange fires and closes dropdown [![Select box inside dropdown](https://i.stack.imgur.com/9mC9n.png)](https://i.stack.imgur.com/9mC9n.png)<issue_comment>username_1: You are mixing components that are not meant to be mixed here, I believe. Dropdown expects its overlay to be a menu of some sorts. Or at least something static that does not open yet another dynamic layer. Select already has a dropdown type behaviour. So your Dropdown opens the Select which opens the Select dropdown, and then they both react to the click event and close. It is currently not clear from your question and screenshot what you are actually trying to achieve, that could not be achieved using just a Select. You could try clarifying that. Upvotes: 0 <issue_comment>username_2: In current v3.3.1 there is no API to prevent to close the `Dropdown` list. As a solution I can offer **[this custom component](https://codesandbox.io/s/71ll81l72q)**. [![enter image description here](https://i.stack.imgur.com/kMrLP.png)](https://i.stack.imgur.com/kMrLP.png) `Item` has a property `clickable` which indicates will be the droplist closed after click or not. You can set `true/false` or css name of an element which should not trigger closing drop-list. Upvotes: 1 <issue_comment>username_3: Change Menu.Item where the select is contained to a Menu.ItemGroup, those do not trigger the onVisibleChange when clicked. Upvotes: 1
2018/03/21
294
1,048
<issue_start>username_0: I would get a length of an array that I pass in the parameter of a function I have an array of object ``` groupList:group[]=[]; ``` in `selectItem` I call `testExistinginArray` function ``` selectItem (event) { if(!this.toggle) { this.groupList.push(event); } else{ this.ejList.push(event); if(!this.testExistinginArray(event,this.groupList)) { //I pass groupList as parameter this.groupList.push({code:event.codeBusinessGroup,name:event.nameBusinessGroup}) } } } testExistinginArray(event,type: any[]) { for(let i=0;i ``` Actually I get undefined length error ``` TypeError: Cannot read property 'length' of undefined ```<issue_comment>username_1: You tried to extract attribute `length` from `this.type`, not from function parameter `type`. Looks like typo Upvotes: 0 <issue_comment>username_2: Use `type.length` instead of `this.type.length`. Here `type` is not function variable, it's argument variable. So you can't read using `this` Upvotes: 1
2018/03/21
183
752
<issue_start>username_0: I have a source file which contains 10 records. It doesn't has any primary key. Requirement is to load first two records at first run into target table. And then delete those records from source file. Now the source file will have 8 records. In next run, first 2 records will get loaded into target followed by getting deleted from source file.. How can I achieve this in Informatica Power centre
2018/03/21
1,367
5,054
<issue_start>username_0: i have a simple Selenium Test Code: ``` public static void main(String[] args) { System.setProperty("webdriver.chrome.driver", "/home/chromedriver"); WebDriver driver= new ChromeDriver(); driver.get("http://google.com"); } ``` And i get this Error: ``` Exception in thread "main" java.lang.NoClassDefFoundError: okhttp3/ConnectionPool | Caused by: java.lang.ClassNotFoundException: okhttp3.ConnectionPool ``` I think the jars and Dependency are ok but i still get this Error<issue_comment>username_1: The error says it all : ``` Exception in thread "main" java.lang.NoClassDefFoundError: okhttp3/ConnectionPool | Caused by: java.lang.ClassNotFoundException: okhttp3.ConnectionPool ``` What is **`NoClassDefFoundError`** ---------------------------------- **`NoClassDefFoundError`** in Java occurs when **`Java Virtual Machine`** is not able to find a particular class at runtime which was available at compile time. For example, if we have resolved a method call from a class or accessing any static member of a Class and that Class is not available during run-time then **`JVM`** will throw **`NoClassDefFoundError`**. The error clearly says that you have misconfigured the **classpath**. It would be tough to debug the exact cause of the issue untill and unless you tell us how you run tests, which builder or IDE do you use and the builder config file or project description. What went wrong : ----------------- From all the above mentioned points it's clear that the related **`Class`** or **`Methods`** were resolved from one source **`Compile Time`** which was not available during **`Run Time`**. This situation occurs if there are presence of multiple sources to resolve the Classes and Methods through **`JDK`**/**`Maven`**/**`Gradle`**. Selenium dependency on okhttp ----------------------------- At this point it is worth to mention that **selenium-java-3.9.x** clients does have a dependency on **okhttp** and you can find the [dependency list here](http://search.maven.org/#artifactdetails%7Corg.seleniumhq.selenium%7Cselenium-java%7C3.11.0%7Cjar). It is also to be noted that : * There were some issues with launching of Chrome as per [Can't launch chrome browser using latest selenium 3.9.0](https://github.com/SeleniumHQ/selenium/issues/5453). * To address the issue from *Selenium v3.9.1* **OkHttp backed instances can now connect to servers requiring authorisation** which was based on PR [#5444](https://github.com/SeleniumHQ/selenium/pull/5444). Solution : ---------- Here are a few steps to solve **`NoClassDefFoundError - okhttp3/ConnectionPool`** error : * While using a Build Tool e.g. **`Maven`** or **`Gradle`**, **remove** all the **`External JARs`** from the **`Java Build Path`**. **`Maven`** or **`Gradle`** will download and resolve all the required dependencies. * If using **`Selenium JARs`** within a **`Java Project`** add only required **`External JARs`** within the **`Java Build Path`** and remove the unused one. * While using **`Maven`**, either use **`selenium-java`** or **`selenium-server`**. Avoid using both at the same time. * Upgrade *JDK* to recent levels [**JDK 8u162**](http://www.oracle.com/technetwork/java/javase/8u162-relnotes-4021436.html). * Upgrade *Selenium* to current levels [**Version 3.10.0**](https://docs.seleniumhq.org/download/). * Upgrade *ChromeDriver* [**ChromeDriver v2.37**](https://sites.google.com/a/chromium.org/chromedriver/downloads) level. * Keep *Chrome* version at ***Chrome v64-66*** levels. 
([as per ChromeDriver v2.37 release notes](https://chromedriver.storage.googleapis.com/2.37/notes.txt)) * *Clean* your *Project Workspace* through your *IDE* and *Rebuild* your project with required dependencies only. * Use [*CCleaner*](https://www.ccleaner.com/ccleaner) tool to wipe off all the OS chores before and after the execution of your *test Suite*. * If your base *Chrome* version is too old, then uninstall it through [*Revo Uninstaller*](https://www.revouninstaller.com/revo_uninstaller_free_download.html) and install a recent GA and released version of Chrome. * Execute your `@Test`. Upvotes: 2 <issue_comment>username_2: I hope, your trying a method without having class. Please try to put your main method inside class. please let me know if you are getting any error further. class Test{ public static void main(String[] args) { ``` System.setProperty("webdriver.chrome.driver", "youHaveToUseLocationWhereYouHaveYourChromeDriver"); WebDriver driver= new ChromeDriver(); driver.get("http://google.com"); ``` } } Upvotes: -1 <issue_comment>username_3: please check of this jars files in your project libraries: `okhttp-3.10.0.jar` & `okio1.14.1.jar` may be solve using this jar files your problem : > > java.lang.NoClassDefFoundError: okhttp3/ConnectionPool > > > Upvotes: 2 <issue_comment>username_4: I had the same issue as this. By following the advice above AND adding the Selenium client JARs(including sources) to modulepath and the Selenium server JAR to classpath it worked. Upvotes: 0
2018/03/21
1,642
5,490
<issue_start>username_0: i know it causes due to google new version update of gms to 12.0.0 here is link <https://developers.google.com/android/guides/releases> add in `android/build.gradle` but now its not compiling onesignal coz it also uses Google service with different version other solution on github suggest me to add '+' in dependencies but it's not working ``` configurations.all { // #PlayServicesGate — March, 20 2018 resolutionStrategy { force 'com.google.android.gms:play-services-auth:11.8.0' // Firebase dependencies force "com.google.android.gms:play-services-base:11.8.0" force 'com.google.firebase:firebase-core:11.8.0' force 'com.google.firebase:firebase-auth:11.8.0' } } ``` please let me know if anyone know about this ``` :react-native-onesignal:prepareComGoogleAndroidGmsPlayServicesTagmanagerV4Impl1200Library :react-native-onesignal:prepareComGoogleAndroidGmsPlayServicesTagmanagerV4ImplLicense1200Library :react-native-onesignal:prepareComGoogleAndroidGmsPlayServicesTasks1200Library :react-native-onesignal:prepareComGoogleAndroidGmsPlayServicesTasksLicense1200Library :react-native-onesignal:prepareComOnesignalOneSignal382Library :react-native-onesignal:prepareOrgWebkitAndroidJscR174650Library :react-native-onesignal:prepareReleaseDependencies :react-native-onesignal:compileReleaseAidl :react-native-onesignal:compileReleaseNdk UP-TO-DATE :react-native-onesignal:compileLint :react-native-onesignal:copyReleaseLint UP-TO-DATE :react-native-onesignal:compileReleaseRenderscript :react-native-onesignal:generateReleaseBuildConfig :react-native-onesignal:generateReleaseResValues :react-native-onesignal:generateReleaseResources :react-native-onesignal:mergeReleaseResources :react-native-onesignal:processReleaseManifest :react-native-onesignal:processReleaseResources FAILED FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':react-native-onesignal:processReleaseResources'. > Error: more than one library with package name 'com.google.android.gms.license' * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. BUILD FAILED Total time: 3 mins 7.465 secs Error: /Users/vsts/agent/2.131.0/work/1/s/android/gradlew failed with return code: 1 at ChildProcess. (/Users/vsts/agent/2.131.0/work/\_tasks/Gradle\_8d8eebd8-2b94-4c97-85af-839254cc6da4/1.128.0/node\_modules/vsts-task-lib/toolrunner.js:569:30) at emitTwo (events.js:106:13) at ChildProcess.emit (events.js:191:7) at maybeClose (internal/child\_process.js:886:16) at Socket. 
(internal/child\_process.js:342:11) at emitOne (events.js:96:13) at Socket.emit (events.js:188:7) at Pipe.\_handle.close [as \_onclose] (net.js:497:12) ```<issue_comment>username_1: go to project.properties and change the following lines: cordova.system.library.2=com.google.android.gms:play-services-gcm:+ cordova.system.library.3=com.google.android.gms:play-services-location:+ To cordova.system.library.2=com.google.android.gms:play-services-gcm:11+ cordova.system.library.3=com.google.android.gms:play-services-location:11+ It worked for me :) Upvotes: 1 <issue_comment>username_2: just add these line in your code block ``` configurations.all{ //here include these line force 'com.google.android.gms:play-services-gcm:11.8.0' force 'com.google.android.gms:play-services-analytics:11.8.0' force 'com.google.android.gms:play-services-location:11.8.0' } ``` Upvotes: 0 <issue_comment>username_3: its simple as i am struggling with my project i find out that any dependencies you have used in your project all are using '+' and now its not working anymore so apply specific version to it by doing in : android/build.gradle ``` configurations.all { resolutionStrategy { force "com.google.android.gms:play-services-gcm:11.8.0" .... your other dependencies } } ``` this worked for me as I stuck in onesignal but i got all dependences form onesignal and give it specific version and now everything is working fine Upvotes: 0 <issue_comment>username_4: I have tried some of the solutions posted here without success, and I really did not want to delve into `node_modules`, but that is what I have done since my project is time sensitive. At least until a permanent solution is found. In my app-level `build.gradle`, I have updated all Google dependencies to version 11.8.0, e.g. `compile "com.google.android.gms:play-services-base:11.8.0"`. Then in `react-native-onesignal's build.gradle`: I changed these lines: ``` compile 'com.google.android.gms:play-services-gcm:+' compile 'com.google.android.gms:play-services-analytics:+' compile 'com.google.android.gms:play-services-location:+' ``` to the same specific version as follows: ``` compile 'com.google.android.gms:play-services-gcm:11.8.0' compile 'com.google.android.gms:play-services-analytics:11.8.0' compile 'com.google.android.gms:play-services-location:11.8.0' ``` That seems to fix the error. If anyone has a better solution, I'll gladly implement that in the project. In the meantime, I hope this helps someone else as well. Upvotes: 1 [selected_answer]
2018/03/21
1,604
5,076
<issue_start>username_0: Is it possible to have two fit\_generator? I'm creating a model with two inputs, The model configuration is shown below. [![enter image description here](https://i.stack.imgur.com/FDY0W.png)](https://i.stack.imgur.com/FDY0W.png) Label Y uses the same labeling for X1 and X2 data. The following error will continue to occur. > > *Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected > to see 2 array(s), but instead got the following list of 1 arrays: > [array([[[[0.75686276, 0.75686276, 0.75686276], > [0.75686276, 0.75686276, 0.75686276], > [0.75686276, 0.75686276, 0.75686276], > ..., > [0.65882355, 0.65882355, 0.65882355...* > > > My code looks like this: ```python def generator_two_img(X1, X2, Y,batch_size): generator = ImageDataGenerator(rotation_range=15, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') genX1 = generator.flow(X1, Y, batch_size=batch_size) genX2 = generator.flow(X2, Y, batch_size=batch_size) while True: X1 = genX1.__next__() X2 = genX2.__next__() yield [X1, X2], Y """ ................................. """ hist = model.fit_generator(generator_two_img(x_train, x_train_landmark, y_train, batch_size), steps_per_epoch=len(x_train) // batch_size, epochs=nb_epoch, callbacks = callbacks, validation_data=(x_validation, y_validation), validation_steps=x_validation.shape[0] // batch_size, `enter code here`verbose=1) ```<issue_comment>username_1: Try this generator: ```python def generator_two_img(X1, X2, y, batch_size): genX1 = gen.flow(X1, y, batch_size=batch_size, seed=1) genX2 = gen.flow(X2, y, batch_size=batch_size, seed=1) while True: X1i = genX1.next() X2i = genX2.next() yield [X1i[0], X2i[0]], X1i[1] ``` Generator for 3 inputs: ```python def generator_three_img(X1, X2, X3, y, batch_size): genX1 = gen.flow(X1, y, batch_size=batch_size, seed=1) genX2 = gen.flow(X2, y, batch_size=batch_size, seed=1) genX3 = gen.flow(X3, y, batch_size=batch_size, seed=1) while True: X1i = genX1.next() X2i = genX2.next() X3i = genX3.next() yield [X1i[0], X2i[0], X3i[0]], X1i[1] ``` **EDIT** (add generator, output image and numpy array, and target) ```python #X1 is an image, y is the target, X2 is a numpy array - other data input def gen_flow_for_two_inputs(X1, X2, y): genX1 = gen.flow(X1,y, batch_size=batch_size,seed=666) genX2 = gen.flow(X1,X2, batch_size=batch_size,seed=666) while True: X1i = genX1.next() X2i = genX2.next() #Assert arrasy are equal - this was for peace of mind, but slows down training #np.testing.assert_array_equal(X1i[0],X2i[0]) yield [X1i[0], X2i[1]], X1i[1] ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: I have an implementation for multiple inputs for `TimeseriesGenerator` that I have adapted it (I have not been able to test it unfortunately) to meet this example with `ImageDataGenerator`. 
My approach was to build a wrapper class for the multiple generators from `keras.utils.Sequence` and then implement the base methods of it: `__len__` and `__getitem__`: ``` from keras.preprocessing.image import ImageDataGenerator from keras.utils import Sequence class MultipleInputGenerator(Sequence): """Wrapper of 2 ImageDataGenerator""" def __init__(self, X1, X2, Y, batch_size): # Keras generator self.generator = ImageDataGenerator(rotation_range=15, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') # Real time multiple input data augmentation self.genX1 = self.generator.flow(X1, Y, batch_size=batch_size) self.genX2 = self.generator.flow(X2, Y, batch_size=batch_size) def __len__(self): """It is mandatory to implement it on Keras Sequence""" return self.genX1.__len__() def __getitem__(self, index): """Getting items from the 2 generators and packing them""" X1_batch, Y_batch = self.genX1.__getitem__(index) X2_batch, Y_batch = self.genX2.__getitem__(index) X_batch = [X1_batch, X2_batch] return X_batch, Y_batch ``` You can use this generator with `model.fit_generator()` once the generator has been instanced. Upvotes: 3
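A small check of the shared-seed assumption both answers rely on: two `flow()` iterators created with the same seed shuffle in lockstep, so the labels from either one can be used for the paired batch. The array shapes here are made up:

```python
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

X1 = np.random.rand(8, 32, 32, 3)   # first input stream
X2 = np.random.rand(8, 32, 32, 3)   # second input stream
y = np.arange(8)                    # shared labels

gen = ImageDataGenerator(horizontal_flip=True)
genX1 = gen.flow(X1, y, batch_size=4, seed=1)
genX2 = gen.flow(X2, y, batch_size=4, seed=1)

x1_batch, y1 = next(genX1)
x2_batch, y2 = next(genX2)
assert np.array_equal(y1, y2)   # identical shuffle order -> the two inputs stay paired
```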
2018/03/21
347
1,173
<issue_start>username_0: I am trying to log an error like this ``` logger.error("Entered value [" + val + "] is wrong") ``` But I don't want string concatenation because if you read the strings from right to left, they are in the wrong order. Is there any other way to do this? I need to put `val` in the error message.<issue_comment>username_1: You can use `String.format` function. `String output = String.format("Entered value [%s] is wrong", val); logger.error(output);` Have a look at more examples in the following post <https://dzone.com/articles/java-string-format-examples> Upvotes: 0 <issue_comment>username_2: You can use [`String.format`](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#format-java.lang.String-java.lang.Object...-), like this ``` logger.error(String.format("Entered value[%s] is wrong", val)); ``` This is supposing that `val` is a string. If it is not, you should change `%s` to something else (e.g., if `val` is an integer, you should use `%d`). For further detail about format strings (like `%s`) check [here](https://docs.oracle.com/javase/8/docs/api/java/util/Formatter.html#syntax). Upvotes: 3 [selected_answer]
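The thread is Java; for comparison, Python's `logging` module builds the message from a template and arguments in the same spirit, so no concatenation is needed (the value below is made up):

```python
import logging

logging.basicConfig()
logger = logging.getLogger(__name__)

val = "42x"
logger.error("Entered value [%s] is wrong", val)
```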
2018/03/21
374
1,426
<issue_start>username_0: I wish to iterate over a folder of pdf files containing voucher images. I am viewing each file in a TAcroPDF component. The user can see the amount written on the voucher and can enter this amount into a textbox. On pressing the ENTER key the next voucher is displayed and I wish to automatically refocus on the textBox. For some reason the Form is not passing focus to the TEdit component even though ActiveControl is edtAmount. I have tried edtAmount.SetFocus after the ShowImage function. I have tried PostMessage and edtAmount.Perform. I even have an OnIdle handler with this code ``` if not edtAmount.Focused then PostMessage(Handle, um_AmountFocus,0,0); ``` All handlers are being processed. What I am missing is why the TAcroPDF seems to hog the focus. I can manually double-click into the TEdit but I need a less user-intensive solution<issue_comment>username_1: Using the LoadFile method caused focus issues ``` AcroPDF1.LoadFile(PDFFileName); //was the source of the focusing problem ``` use ``` AcroPDF1.src:=PDFFileName; ``` instead. Upvotes: 0 <issue_comment>username_2: Sorry for the delayed response. One thing that I found that works is to use a TTimer; enable the Timer after you call LoadFile, and in the Timer event set the focus to your edtAmount. A timer delay of 500 milliseconds seems to work in my case. Disable the timer after the SetFocus call. Upvotes: 2
2018/03/21
617
2,242
<issue_start>username_0: I'm working with a basic `std::ofstream` object, created as follows: ``` output_stream = std::ofstream(output_file.c_str()); ``` This creates a file, where some information is put in. Let me show an example of such a message: (Watch window excerpt) ``` full_Message "Error while processing message:\r\n\tForecast Request:" ``` All this is ok, but after having launched following commands, there is a problem: ``` output_stream << full_Message; output_stream.flush(); ``` In order to see what is wrong, let's look at the hexadecimal dump of the file: (this is a hexadecimal display of the file, as seen in Notepad++. For clarity reasons I've taken a screenshot.) [![Notepad++ Hexdump screenshot](https://i.stack.imgur.com/QqJUN.png)](https://i.stack.imgur.com/QqJUN.png) As you can see, the character `0d` is doubled, resulting in following display: ``` Error while processing message: Forecast Request: ``` (There's a newline too much, both lines should be directly one after the other) I am aware of the addition of `#13` characters while doing file conversion from UNIX/Linux to Windows, but this is not relevant here: I'm purely working with a Windows file, on a Windows system, so there should be no need to add any `#13` character. Does anybody have an idea how I can avoid this extra character being added? Thanks in advance<issue_comment>username_1: Because by default the library converts `'\n'` to `"\r\n"` for text streams on platforms where it's needed (like Windows). So you don't need your explicit carriage-return in your string. It's handled automatically. If you want to specify the carriage-return explicitly, then you need to open the file in binary mode. --- When reading a text stream, the opposite conversion happens, with `"\r\n"` being converted to `'\n'`. Upvotes: 2 <issue_comment>username_2: The streams default to *text mode*, which means that in Windows, if you write `\n` then the file gets `\r\n`. Therefore , if you write `\r\n` then the file gets `\r\r\n`. To fix this, either just write `\n` in your code; or open the file in *binary mode*: ``` auto output_stream = std::ofstream(output_file.c_str(), std::ios::binary); ``` Upvotes: 5 [selected_answer]
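The same text-mode translation exists in Python, which makes the doubled 0x0D easy to reproduce and shows why writing plain `\n` (or opening in binary mode) fixes it; the file name is arbitrary:

```python
# Text mode on Windows: every "\n" written is translated to "\r\n",
# so an explicit "\r\n" in the string ends up as "\r\r\n" in the file.
with open("out.txt", "w") as f:
    f.write("Error while processing message:\r\n\tForecast Request:")

# Suppress the translation (the counterpart of opening the ofstream with
# std::ios::binary) and the characters are written exactly as given.
with open("out.txt", "w", newline="") as f:
    f.write("Error while processing message:\r\n\tForecast Request:")
```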
2018/03/21
706
2,755
<issue_start>username_0: I am having an issue connecting to the server. I had the full access to the data yesterday, but suddenly the day after today, the system requires me to add a new firewall rule to enable access to the data. Here it ask me to sign up to Microsoft Azure so I did sign in: ![Here it ask me to sign up to Microsoft Azure](https://i.stack.imgur.com/DY066.png) as I put my email where I subscribed to Microsoft Azure, it showed me that The server specified doesn't exist in any subscription: ![The server specified doesn't exist in any subscription](https://i.stack.imgur.com/C1ui7.png) I need help on how to enable the firewall, and have access to the server from my account, so I can keep working on the data.<issue_comment>username_1: You are seeing that message because your client/local IP address has not yet been added to the firewall rules on the SQL server. To regain access to your SQL server from your local machine: * use your web browser and login to the [Azure portal](https://portal.azure.com) * locate your SQL server: **All services** -> **Databases** -> **SQL Servers** * click on the name of your SQL server to open its properties page * click on **Firewalls and virtual networks** (located under **Settings**) * your current IP address should already be detected and viewable on the page, so go ahead and click the **+ Add client IP** button * you can give the firewall rule a friendly name (it will default to "ClientIP\_X\_X\_X\_X") * click the **Save** button and you're done. Your client IP address is likely to change every few days by your Internet provider unless you are on a plan that gives you a static IP address. Upvotes: 3 <issue_comment>username_2: You are having this problem because your Client Machine's IP does not have access to the Azure database. Like you have mentioned that you did have access till yesterday but today you cannot access it. It is probably your client machine has its IP set to dynamic and it changed its IP today. To fix you will need to logon to your Azure Portal. 1. Go to the Azure SQL Database on your Azure Portal. 2. In the Details (3rd blade) click on `Set Server Firewall` [![enter image description here](https://i.stack.imgur.com/Zg5qq.jpg)](https://i.stack.imgur.com/Zg5qq.jpg) 3. At this point a new window will open and on the very first blade for `Firewall settings` click on `Add client IP`. [![enter image description here](https://i.stack.imgur.com/M9bIH.jpg)](https://i.stack.imgur.com/M9bIH.jpg) 4. Click on the `Save` button and close it. 5. At this point your Client PC's IP would have been added to the Azure Firewall as a rule and you should be able to connect to Azure SQL Database from your Client Machine. Upvotes: 2 [selected_answer]
2018/03/21
706
2,667
<issue_start>username_0: I have some old service accounts on a Google Cloud project. I want to find out if they are still being used or that I can safely delete them. Is there a way to see when they were last used? And maybe how often? I guess I could just delete the key and hope nothing breaks, but that seems a bit iffy. Is there a better solution?<issue_comment>username_1: At this time, there is no tool within the Google cloud platform to list where exactly Service account keys are used, or when last they were used or how often they are used. However, with the [GCloud command line](https://cloud.google.com/sdk/gcloud/reference/iam/service-accounts/keys/list), you can list all Service Account keys (within your project) with the Creation Date(which is often the date of creation of the Service Account - For Default Keys) and the Expiry Date for the Key. $ gcloud iam service-accounts keys list --iam-account [IAM\_account] --project [projectName] You can verify if Keys of your old [User-managed Service account](https://cloud.google.com/iam/docs/service-accounts#user-managed_service_accounts) are not expired yet - Hence, they are probably still being used. If you delete the service accounts that are still used by running instances, the instances may start failing their operations. However, you can contact the [Google Cloud platform team](https://cloud.google.com/support/docs/) to help with re-adding the deleted Service account back to your project. Upvotes: 2 <issue_comment>username_2: Check <https://cloud.google.com/logging/docs/audit/> ``` gcloud logging read "logName=(projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity OR projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Fsystem_events OR projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Fdata_access) ``` or online <https://console.cloud.google.com/home/activity> `gcloud` return YAML with IP addresses! I see SQL connection activities. Upvotes: 1 <issue_comment>username_3: I follow a 2-step process to delete or disable service accounts in a gcp project: 1. List unused service accounts ```sh gcloud recommender insights list \ --insight-type=google.iam.serviceAccount.Insight \ --location=global \ --filter=insightSubtype=SERVICE_ACCOUNT_USAGE \ --project --format=json | jq -r '.[]|select(.content.lastAuthenticatedTime == null)|.content.email, .lastRefreshTime' | paste -d, - - | sort -t, -k1 -k2 > /tmp/unused-sa.csv ``` 2. Disable (or delete) the unused accounts ```sh cat /tmp/unused-sa.csv | cut -d, -f1 | while read line; do echo "$line" gcloud --project iam service-accounts disable "$line" done ``` Upvotes: 1
2018/03/21
512
1,610
<issue_start>username_0: Of course, question was discussed thousand times ([1](https://archive.sap.com/discussions/thread/696015), [2](https://archive.sap.com/discussions/thread/963412), [3](https://archive.sap.com/discussions/thread/1229493)) but nothing was offered except this ugly snippet: ``` data: str type string value 'abcd#', len type i. len = strlen( str ). len = len - 1. str = str+0(len). ``` Is there any elegant one-liner to do this? The only prominent way I found so far is ``` SHIFT str RIGHT DELETING TRAILING `,`. ``` However, it requires that you know what the last char is (TRAILING mask) and mask doesn't support regexp or wildcards. Or I am wrong? This variant doesn't work for me for some reason ``` SHIFT string RIGHT BY 1. ``` Maybe somebody knows more beautiful syntax to do this in one line? Anything new in ABAP 7.40 or 7.50?<issue_comment>username_1: SUBSTRING to get the last character of a string: ``` DATA: str TYPE string VALUE 'abcd#'. str = substring( val = str off = strlen( str ) - 1 len = 1 ). ``` str will be '#' To remove the last character of a string (like in your example): ``` str = substring( val = str off = 0 len = strlen( str ) - 1 ). ``` Type in SUBSTRING in your ABAP Editor and hit F1 on it, there are some more variations (substring\_after, substring\_before, etc.) Upvotes: 5 [selected_answer]<issue_comment>username_2: A trick I've used in the past: ``` DATA l_str TYPE string VALUE `abcd#`. SHIFT l_str RIGHT by 1 PLACES CIRCULAR. " move last char to start, read it from l_str(1) l_str = l_str+1. " remove it. ``` Upvotes: -1
2018/03/21
248
1,021
<issue_start>username_0: I wanted to test my microchip beacon setup.. Is there any way to verify the advertisements sent by the beacon are received by Android without writing an application. I have seen stackoverflow post related to sniffing where you have to enable bluetooth sniffing in Developer options [Sniffing/logging your own Android Bluetooth traffic](https://stackoverflow.com/questions/23877761/sniffing-logging-your-own-android-bluetooth-traffic) Will this apply to Bluetooth Low Energy Module also or is it only for Bluetooth Classic Devices<issue_comment>username_1: There are lots of BLE sniffer apps in the Android marketplace that will pickup and display your beacon advertisement. Search for "ble sniffer" Upvotes: 0 <issue_comment>username_2: Yes it will pick up everything on the HCI link, including BLE advertisements. But you must have at least one app that has told the system to perform a BLE scan of course. Why don't you just use a BLE scan app like nRF Connect? Upvotes: 2 [selected_answer]
2018/03/21
479
1,242
<issue_start>username_0: I want to filter an array that only contains emails, I did this ``` emails = emails.filter((x)=>(x !== (undefined || null || ''))) ``` that delete the empty value, but can accept a value that is not an email.<issue_comment>username_1: Using the regular expression found [here](https://stackoverflow.com/questions/46155/how-to-validate-an-email-address-in-javascript) you can complete your filter like so: ```js var emails = []; emails = emails.filter(e => typeof e == "string" && validEmail(e)); console.log(emails); function validEmail(email) { var re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/; return re.test(email.toLowerCase()); } ``` Upvotes: 1 <issue_comment>username_2: You can use the regex from the accepted answer [here](https://stackoverflow.com/questions/46155/how-to-validate-an-email-address-in-javascript) ``` let re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/; emails = emails.filter(e =>e && e.toLowerCase().match(re)); ``` Upvotes: 3 [selected_answer]
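For comparison, the same filter-plus-validate idea sketched in Python, which can be handy for testing a list of values offline. The pattern here is a deliberately simplified check (no spaces, exactly one "@", a dot in the domain part), not the full regex used in the answers above.

```python
import re

SIMPLE_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def keep_emails(values):
    # Drop None/empty/non-string entries, then keep only strings that look like an email
    return [v for v in values if isinstance(v, str) and SIMPLE_EMAIL.match(v)]

print(keep_emails(["a@b.com", "", None, "not-an-email", "x@y.org"]))
# -> ['a@b.com', 'x@y.org']
```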
2018/03/21
441
1,897
<issue_start>username_0: I am trying to make a 'Would You Rather' app. This basically involves two options that the user picks (e.g. Chocolate or Vanilla). ![enter image description here](https://i.stack.imgur.com/fOiMr.png) I want to be able to store the number of times people chose each option. I can only really think of one way of doing this: having each option as its own row in a database, and incrementing it each time a user picks it. However, surely this would require making a request every time a user answers a question, and therefore handling (depending on the number of users) thousands of requests a minute? And wouldn't there be an issue with two people trying to update the value at once? I'm not sure what the best way to go about doing this is.<issue_comment>username_1: I am not aware of the programming language you are using, so I will give a generalized answer. If you want to save round trips to the database, one solution would be to create a temp file with the votes for your options. This file can be updated when a user selects an option. To put these values in the database you can create a background job which collects the data and stores it in the database on a schedule. The downside is that the database will not always hold current data, but I think the benefits outweigh that. Upvotes: 2 <issue_comment>username_2: I would suggest having a table which holds individual votes. It can have three columns: ID, IP address/username (or another identifying bit of information to stop duplicate votes) and what they voted for. Whenever you want to calculate the current votes you can just do the count query while filtering out duplicate IP addresses/usernames. If you are worried about database scalability it may be worthwhile looking into buffering the inserts into the database, for example storing the votes every 20 seconds or so and doing a batch insert. Upvotes: 1
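To make the "one counter row per option" idea from the question concrete, here is a minimal Python sketch using SQLite from the standard library (standing in for whatever database the app really uses; the table layout is illustrative only). The point is that a single UPDATE of the form votes = votes + 1 is executed atomically by the database, so two users voting at the same moment cannot overwrite each other's increment.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE options (id INTEGER PRIMARY KEY, label TEXT, votes INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO options (label) VALUES (?)", [("Chocolate",), ("Vanilla",)])
conn.commit()

def record_vote(option_id: int) -> None:
    # The increment happens inside the database, so there is no read-modify-write race
    conn.execute("UPDATE options SET votes = votes + 1 WHERE id = ?", (option_id,))
    conn.commit()

record_vote(1)
record_vote(1)
record_vote(2)
print(conn.execute("SELECT label, votes FROM options ORDER BY id").fetchall())
# -> [('Chocolate', 2), ('Vanilla', 1)]
```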
2018/03/21
990
3,345
<issue_start>username_0: Execution failed for task ':react-native-device-info:processReleaseResources'. > > Error: more than one library with package name 'com.google.android.gms.license' > > ><issue_comment>username_1: In android/build.gradle add ``` ... allprojects { repositories { ... configurations.all { resolutionStrategy { ... force 'com.google.android.gms:play-services-gcm:11.6.0' } } } } ``` or you can change directly the modules (not recommendable) .../node\_modules/react-native-device-info/android/build.gradle ``` compile 'com.google.android.gms:play-services-gcm:11.6.0' ``` Upvotes: 2 <issue_comment>username_2: My approach is scan all android project and update version of google play services directly to each module build.gradle. Step 1: Create js file with name android-gradle-fix.js in your project ``` #!/usr/bin/env node const fs = require('fs') try { console.log('Fix Start') var rootDir = process.cwd() // @nhancv: Preparing path var androidSettingGradleFile = `${rootDir}/android/settings.gradle` var androidSettingGradleFileData = fs.readFileSync(androidSettingGradleFile, 'utf8') var pathArr = [] var keySearch = ".projectDir = new File(rootProject.projectDir, '../" var keyIndex = 0 while ((keyIndex = androidSettingGradleFileData.indexOf(keySearch, keyIndex)) !== -1) { var nextIndex = keyIndex + keySearch.length var path = androidSettingGradleFileData.substring(nextIndex, androidSettingGradleFileData.indexOf("')", nextIndex)) pathArr.push(path) keyIndex++ } var newVersion = '11.8.0' var key = 'com.google.android.gms' // @nhancv: Update version for (var i = 0; i < pathArr.length; i++) { var file = `${rootDir}/${pathArr[i]}/build.gradle` var data = fs.readFileSync(file, 'utf8') var result = data var index = 0 var logs = [] while ((index = data.indexOf(key, index)) !== -1) { var versionIndexOf = data.indexOf(':', index + key.length + 1) + 1 var endVersionIndexOf = data.indexOf(data[index-1], versionIndexOf + 1) var moduleOrigin = data.substring(index, endVersionIndexOf) var moduleNew = data.substring(index, versionIndexOf) + newVersion if (moduleOrigin !== moduleNew) { logs.push(`Replace: ${moduleOrigin} -> ${moduleNew}`) result = result.replace(moduleOrigin, moduleNew) fs.writeFileSync(file, result, 'utf8') } index++ } if (logs.length > 0) { console.log(`Fix path: ${pathArr[i]}`) for (var j = 0; j < logs.length; j++) { console.log(`=> ${logs[j]}`) } } } console.log('Fix Done') } catch (error) { console.error(error) } ``` Step 2: Update npm script in **package.json** file ``` "scripts": { "postinstall": "node ./android-gradle-fix.js" }, ``` When ever you run npm install it will scan all android module and update with specific version 11.8.0 (you can change it in android-gradle-fix.js file) --- When run npm install it will skip post install and you may facing with error: postinstall: **cannot run in wd %s %s (wd=%s) node** ==> Fix add — unsafe-perm to npm install like ``` npm install --unsafe-perm ``` Detail here: <https://medium.com/p/2fd245027832> Hope it can help you. Upvotes: 0
2018/03/21
623
2,174
<issue_start>username_0: I am using nodejs mongo driver in my application. I set up below options in the connection: ``` { connectTimeoutMS: 30000, socketTimeoutMS: 30000, // retry to connect for 120 times reconnectTries: 120, // wait 1 second before retrying reconnectInterval: 1000 }; ``` It will try to re-connect 120 times if the connection is broken and 1 second for each delay. I need to listen on the server status changes during re-connect. I added below event listeners: ``` db.on('close', this.onClose.bind(this)); db.on('error', this.onError.bind(this)); db.on('timeout', this.onTimeout.bind(this)); db.on('parseError', this.onParseError.bind(this)); db.on('reconnect', this.onReconnect.bind(this)); ``` All the event listeners are working fine but my problem is how to detect that the reconnect failed after 120 times retries. For example, if the server is down then I will receive a close event. If the server is up during 120 seconds, I will receive reconnect event. But what if the server is not up in 120 seconds. How can I detect this change? Should I implement it by myself?<issue_comment>username_1: You can do it like this: ``` // Do this on your global scope // var connectionAttemps = 0; var doThisInsideYourTriggeredErrorEvent = function () { connectionAttemps++; if (connectionAttemps >= 120) { // Here goes your logic to stop the reconnection attempts // } /* Here goes your implemented logic (if any) for the "onError" event */ } ``` Check if it helps. > > PS: note that the content of *my* function goes inside the content of your *OnError* function. > > > Upvotes: 2 [selected_answer]<issue_comment>username_2: There is a (somewhat) undocumented event type: `reconnectFailed`. This is documented here: <http://mongodb.github.io/node-mongodb-native/core/api/Server.html#event:reconnectFailed> but only for the Server object. However it does seem to also be emitted by the Db object, like so: ``` db.on('reconnectFailed', (err) => { // do something here }); ``` I've verified this works also for the 2.2 version of the nodejs mongodb driver, just not documented there at all. Upvotes: 2
2018/03/21
1,302
3,790
<issue_start>username_0: I have a function to delete outliers `detectaOutliers()`, but somehow my function does not delete all outliers. Can somebody help me to find the mistake? ``` detectaOutliers = function(x) { q = quantile(x, probs = c(0.25, 0.75)) R = IQR(x) OM1 = q[1] - (R * 1.5) # outliers moderados OM3 = q[2] + (R * 1.5) OE1 = q[1] - (R * 3) # outliers extremos OE3 = q[2] + (R * 3) moderados = ifelse(x < OM1 | x > OM3, 1, 0) extremos = ifelse(x < OE1 | x > OE3, 1, 0) cbind(extOut = moderados) } cepas = unique(AbsExtSin$Cepa) concs = unique(AbsExtSin$Concen) outliers = NULL for (cepa in cepas) { for (concen in concs) { datosOE = subset(AbsExtSin, Cepa == cepa & Concen == concen) outs = detectaOutliers(datosOE$Abs) datosOE = cbind(datosOE, outs) outliers = rbind(outliers, datosOE) } } AbsExtSin = subset(outliers, extOut == 0)[, 1:5] ``` This is the data without outliers (I deleted 11 outliers, but I have more) [![enter image description here](https://i.stack.imgur.com/NDWa3.jpg)](https://i.stack.imgur.com/NDWa3.jpg)<issue_comment>username_1: **Answer**: I assume that your problem is the following: First, you detect outliers (just like the boxplot function) and remove them. Afterwards, you produce boxplots with the cleaned data, which again shows outliers. And you expect to see no outliers. This is not necessarily an error of your code, this is an error in your expectations. When you remove the outliers, the statistics of your data set change. For example, the quartiles are not the same anymore. Hence, you might identify "new" outliers. See the following example: ``` ## create example data set.seed(12345) rand <- rexp(100,23) ## plot. gives outliers. boxplot(rand) ## detect outliers with these functions detectaOutliers = function(x) { q = quantile(x, probs = c(0.25, 0.75)) R = IQR(x) OM1 = q[1] - (R * 1.5) # outliers moderados OM3 = q[2] + (R * 1.5) OE1 = q[1] - (R * 3) # outliers extremos OE3 = q[2] + (R * 3) moderados = ifelse(x < OM1 | x > OM3, 1, 0) extremos = ifelse(x < OE1 | x > OE3, 1, 0) cbind(extOut = moderados) } detectOut <- function(x) boxplot(x, plot = FALSE)$out ## clean your data clean1 <- rand[!as.logical(detectaOutliers(rand))] clean2 <- rand[!rand%in%detectOut(rand)] ## check that these functions do the same. all(clean1 == clean2 ) # Fun fact: depending on your data, clean1 and clean2 # are not always the same. See the extra note below. ## plot cleaned data boxplot(clean2) ## Still has outliers. But "new" ones. confirm with: sort(boxplot(rand)$out) # original outlier sort(boxplot(clean2)$out) # new outlier ``` **Note 1:** Your code does not necessarily use the same outlier identification as the boxplot function in R (I am not sure about the ggplot boxplot, but this is at least true for the graphics::boxplot function.): ``` ## The boxplot function (rather: boxplot.stats) ## does not use the quantile function, but the fivenum function ## to identify outliers. 
They produce different results, e.g., here: fivenum(rand)[c(2,4)] quantile(rand,probs=c(0.25,0.75)) ``` **Note 2**: If you want boxplots that exclude outliers, you can use the `outline` parameter of the boxplot function (for ggplot, see [Ignore outliers in ggplot2 boxplot](https://stackoverflow.com/questions/5677885/ignore-outliers-in-ggplot2-boxplot)) Upvotes: 2 <issue_comment>username_2: 6 hours later, I realized that the error was in the variables I was using (my database has 4 variables and I needed to remove the outliers of a column alone, depending on two others and it turns out that I was wrong with the 2 I chose) Finally, I realized and the function works perfectly! I feel the inconvenience and thank you very much to all Upvotes: 0
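For reference, the same moderate/extreme IQR rule sketched in Python with numpy (it mirrors detectaOutliers above; it is not part of the original answer, and quantile interpolation details may differ slightly from R's defaults).

```python
import numpy as np

def detect_outliers(x, k_moderate=1.5, k_extreme=3.0):
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    moderate = (x < q1 - k_moderate * iqr) | (x > q3 + k_moderate * iqr)
    extreme = (x < q1 - k_extreme * iqr) | (x > q3 + k_extreme * iqr)
    return moderate, extreme

values = np.random.default_rng(12345).exponential(1 / 23, size=100)
moderate, extreme = detect_outliers(values)
print(moderate.sum(), "moderate and", extreme.sum(), "extreme outliers flagged")
```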
2018/03/21
832
2,732
<issue_start>username_0: I tried almost all the methods (CLEAN,TRIM,SUBSTITUTE) trying to remove the character hiding in the beginning and the end of a text. In my case, I downloaded the bill of material report from oracle ERP and found that the item codes are a victim of hidden characters. After so many findings, I was able to trace which character is hidden and found out that it's a question mark'?' (via VBA code in another thread) both at the front and the end. You can take this item code‭: ‭11301-21‬ If you paste the above into your excel and see its length =LEN(), you can understand my problem much better. I need a good solution for this problem. Therefore please help! Thank you very much in advance.<issue_comment>username_1: You have characters that **look** like a space character, but are not. They are UniCode 8236 & 8237. Just replace them with a space character *(ASCII 32)*. **EDIT#1:** Based on the string in your post, the following VBA macro will replace UniCode characters 8236 amd 8237 with simple space characters: ``` Sub Kleanup() Dim N1 As Long, N2 As Long Dim Bad1 As String, Bad2 As String N1 = 8237 Bad1 = ChrW(N1) N2 = 8236 Bad2 = ChrW(N2) Cells.Replace what:=Bad1, replacement:=" ", lookat:=xlPart Cells.Replace what:=Bad2, replacement:=" ", lookat:=xlPart End Sub ``` Upvotes: 1 <issue_comment>username_2: Thanks to [username_1](https://stackoverflow.com/users/2474656/garys-student), because his answer inspired me. Also, I used [this answer](https://stackoverflow.com/questions/37024107/excel-vba-remove-unicode-characters-in-a-string) for this code. This function will clean every single char of your data, so it should work for you. You need 2 functions: 1 to clean the Unicode chars, and other one to clean your item codes\_ ``` Public Function CLEAN_ITEM_CODE(ByRef ThisCell As Range) As String If ThisCell.Count > 1 Or ThisCell.Count < 1 Then CLEAN_ITEM_CODE = "Only single cells allowed" Exit Function End If Dim ZZ As Byte For ZZ = 1 To Len(ThisCell.Value) Step 1 CLEAN_ITEM_CODE = CLEAN_ITEM_CODE & GetStrippedText(Mid(ThisCell.Value, ZZ, 1)) Next ZZ End Function Private Function GetStrippedText(txt As String) As String If txt = "–" Then GetStrippedText = "–" Else Dim regEx As Object Set regEx = CreateObject("vbscript.regexp") regEx.Pattern = "[^\u0000-\u007F]" GetStrippedText = regEx.Replace(txt, "") End If End Function ``` And this is what i get using it as formula in Excel. Note the difference in the Len of strings: [![enter image description here](https://i.stack.imgur.com/b0f8r.jpg)](https://i.stack.imgur.com/b0f8r.jpg) Hope this helps Upvotes: 3 [selected_answer]
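If it is easier to clean the data outside Excel, the same idea works in plain Python. The two character codes (8236 and 8237) come straight from the first answer; the sample string is just an illustration.

```python
HIDDEN = {8236, 8237}  # U+202C / U+202D directional formatting characters

def strip_hidden(text: str) -> str:
    return "".join(ch for ch in text if ord(ch) not in HIDDEN)

raw = "\u202d11301-21\u202c"                  # looks like "11301-21" but LEN() reports 10
print(len(raw), "->", len(strip_hidden(raw)))  # 10 -> 8
```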
2018/03/21
267
1,206
<issue_start>username_0: We just migrated our bots as required by Microsoft before the deadline. I chose location "North Europe" when migrating. Now I see that each bot was placed in its own resource group with the correct location "North Europe". However, inside these resource groups the actual Bot Channels Registration has location "global". I cannot find any information about this. What does location "global" mean and how should I interpret the Bot Channels Registration being "global" while within a resource group in "North Europe"? Any information much appreciated.<issue_comment>username_1: From looking at a bot I have, I see this, too. If it's ok, I have a guess as to what it is. I believe the bot itself is global in its location since it's just the code and settings for the bot. The App Service resource, however, is where the bot is deployed to on the web which does have a location. Hopefully, you'll see the App Service in the North Europe location. :) Upvotes: 0 <issue_comment>username_2: Bot registrations contain information about the bot such as App ID/Password, channels, etc. They are not services that are deployed somewhere, that's why their location is global. Upvotes: 1
2018/03/21
236
974
<issue_start>username_0: Is there a way in which I can print a query generated for orator orm. When I print it, I get it as I want to print the query which will be applied on database like "SELECT \* from students" I tried iterating over the object and looking for any key which will return me the query, but haven't found anything like that.
2018/03/21
596
2,025
<issue_start>username_0: If you follow the AWS Glue Add Job Wizard to create a script to write parquet files to S3 you end up with generated code something like this. ``` datasink4 = glueContext.write_dynamic_frame.from_options( frame=dropnullfields3, connection_type="s3", connection_options={"path": "s3://my-s3-bucket/datafile.parquet"}, format="parquet", transformation_ctx="datasink4", ) ``` Is it possible to specify a KMS key so that the data is encrypted in the bucket?<issue_comment>username_1: glue scala job ``` val spark: SparkContext = new SparkContext() val glueContext: GlueContext = new GlueContext(spark) spark.hadoopConfiguration.set("fs.s3.enableServerSideEncryption", "true") spark.hadoopConfiguration.set("fs.s3.serverSideEncryption.kms.keyId", args("ENCRYPTION_KEY")) ``` I think syntax should be differ for Python, but idea the same Upvotes: 4 [selected_answer]<issue_comment>username_2: To spell out the answer using PySpark, you can do either ``` from pyspark.conf import SparkConf [...] spark_conf = SparkConf().setAll([ ("spark.hadoop.fs.s3.enableServerSideEncryption", "true"), ("spark.hadoop.fs.s3.serverSideEncryption.kms.keyId", "") ]) sc = SparkContext(conf=spark\_conf) ``` noticing the `spark.hadoop` prefix - or (uglier but shorter) ``` sc._jsc.hadoopConfiguration().set("fs.s3.enableServerSideEncryption", "true") sc._jsc.hadoopConfiguration().set("fs.s3.serverSideEncryption.kms.keyId", "") ``` where `sc` is your current SparkContext. Upvotes: 2 <issue_comment>username_3: This isn't necessary. Perhaps it was when the question was first posed, but the same can be achieved by creating a security configuration and associating that with the glue job. Just remember to have this in your script, otherwise it won't do it: ``` job = Job(glueContext) job.init(args['JOB_NAME'], args) ``` <https://docs.aws.amazon.com/glue/latest/dg/encryption-security-configuration.html> <https://docs.aws.amazon.com/glue/latest/dg/set-up-encryption.html> Upvotes: 1
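Pulling the first two answers together into the shape of a Glue PySpark job, a sketch might look like the following. Whether these fs.s3.* properties take effect depends on the Glue/EMRFS version, so treat it as a starting point rather than a guaranteed recipe; the KMS key ID is a placeholder.

```python
from awsglue.context import GlueContext
from pyspark.context import SparkContext

sc = SparkContext()
# Ask the S3 filesystem layer to use SSE-KMS with the given key
sc._jsc.hadoopConfiguration().set("fs.s3.enableServerSideEncryption", "true")
sc._jsc.hadoopConfiguration().set("fs.s3.serverSideEncryption.kms.keyId", "<your-kms-key-id>")
glueContext = GlueContext(sc)

# ...build dropnullfields3 exactly as in the generated script, then write as before:
# glueContext.write_dynamic_frame.from_options(
#     frame=dropnullfields3,
#     connection_type="s3",
#     connection_options={"path": "s3://my-s3-bucket/datafile.parquet"},
#     format="parquet",
# )
```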
2018/03/21
871
3,112
<issue_start>username_0: I'm trying to delete lines in specific column from all rows that contains specific words. For example: Remove lines that contain word `apple` and it is always at the beginning of the line. ``` +--+------------------+ |ID|data | +--+------------------+ |1 |sometext1 | | |sometext2 | | |apple sometext3 | | |sometext4 | +--+------------------+ |2 |apple sometext5 | | |sometext6 | +--+------------------+ ``` so the result would be: ``` +--+------------------+ |ID|data | +--+------------------+ |1 |sometext1 | | |sometext2 | | |sometext4 | +--+------------------+ |2 |sometext6 | +--+------------------+ ``` 'SometextX' is different in every line, number of lines is different in every row and it has different number of characters in every line. I really need this in MySQL any help would be appreciated.<issue_comment>username_1: You would use `where`: ``` where textcol not like 'apple%' or textcol is null ``` This can be part of a `select` or a `delete` (the question mentions "result" which suggests the former and "delete" which suggests the latter). It is not clear whether you actually want to change the data or whether you just want the result set without these words. Note: you can do this without `or` and still handle `NULL` values, because MySQL has a `NULL`-safe equality operator: ``` where not left(textcol, 5) <=> 'apple' ``` Upvotes: 0 <issue_comment>username_2: You would be better off using `REGEXP` here to match patterns in each line: ``` DELETE FROM yourTable WHERE text REGEXP '^apple'; ``` `REGEXP` allows for fairly complex regex matching, and would be useful if your requirement changes or gets more complex later on. **Edit:** MySQL has no built in support for regex replacement, so there is no easy way to accomplish what you want. A general regex pattern to remove the word `apple` would be `\bapple\b`. You may search on this pattern and replace with empty string. Upvotes: 1 <issue_comment>username_3: You can use [MySQL functions](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html) to select the right rows and to update with new data as follows: ``` UPDATE `yourTable` SET `yourField` = REPLACE(yourField, 'apple', '') WHERE yourField LIKE '%apple%' ``` Upvotes: 0 <issue_comment>username_4: If you don't want to delete the whole row, you can run these 3 queries in this order ``` update your_table set text=replace(text,substring(text,@start:=locate('\napple',text),locate('\n',text,@start+1)-@start+1),''); update your_table set text=if((@start:=locate('apple',text))=1,replace(text,substring(text,@start,locate('\n',text,@start+1)-@start+1),''),text); update your_table set text=if((@start:=locate('apple',text))=1,replace(text,substring(text,locate('apple',text)),''),text); ``` update #1 will remove apple *in the middle* of the text (prefixed by `\n`) update #2 will remove apple *at the beginning* of its row (nothing before) and having following rows update #3 will remove remaining cases Upvotes: 0
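Because the second answer points out that (older) MySQL has no built-in regex replacement, one workaround is to do the per-line filtering client-side. Below is a rough Python sketch assuming a DB-API connection (for example mysql-connector or PyMySQL); the table name your_table, the text column, and the id primary key are all placeholders.

```python
def remove_lines_starting_with(conn, word: str) -> None:
    cur = conn.cursor()
    cur.execute("SELECT id, text FROM your_table WHERE text LIKE %s", ("%" + word + "%",))
    for row_id, text in cur.fetchall():
        # Keep every line that does not start with the unwanted word
        kept = [line for line in text.split("\n") if not line.startswith(word)]
        cur.execute("UPDATE your_table SET text = %s WHERE id = %s", ("\n".join(kept), row_id))
    conn.commit()

# Usage (with e.g. PyMySQL):
# import pymysql
# conn = pymysql.connect(host="localhost", user="...", password="...", database="...")
# remove_lines_starting_with(conn, "apple")
```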
2018/03/21
926
3,414
<issue_start>username_0: I'm encountering an issue while trying to lazy load module according to the user profile, I defined three default paths (with empty path for each route) and each user has access to a specific module, i'm using a guards in order to detemine the current user profile (actually i'm toggling manually to set the default loaded module by setting **const canGo =true**) The expected behavior is that the actual routing configuration should activate the adequate route according to the profile but that's not the case. ``` export const routes: Routes = [ { path: '', loadChildren: 'app/user/user.module#UserModule', canActivate: [ CanActivateUserModuleGuard ], }, { path: '', loadChildren: 'app/admin/admin.module#AdminModule', canActivate: [ CanActivateAdminModuleGuard ] }, { path: '', loadChildren: 'app/moderator/moderator.module#ModeratorModule', canActivate: [ CanActivateModeratorModuleGuard ] }, { path: '404', component: NotFoundComponent } ]; ``` NB: below the online issue if interested <https://stackblitz.com/edit/angular-vd4oyu?file=app%2Fapp-routing.module.ts> What is the best way to accomplish this requirement?
2018/03/21
1,257
5,946
<issue_start>username_0: Iam implementing a basic authentication for Spring Boot application and iam defining my credentials in application.properties class but I want to hash-encode the password and then check if the hash is the same as the hash for the password in application.properties then I can login. If possible to do all of the logic in the **configure method** then it would be great. **application.properties:** BASIC AUTHENTICATION ==================== ``` user.name=test user.password={<PASSWORD> ``` **SecurityConfig class:** ``` @Configuration @EnableWebSecurity public class SecurityConfig extends WebSecurityConfigurerAdapter { private static final Logger logger = LoggerFactory.getLogger(SecurityConfig.class); private AuthenticationProvider authenticationProvider; @Override protected void configure(HttpSecurity http) throws Exception { http.csrf().disable().authorizeRequests().anyRequest().authenticated().and().httpBasic() .and().sessionManagement().and().authenticationProvider(authenticationProvider) .sessionCreationPolicy(SessionCreationPolicy.STATELESS) } ``` ***UPDATED CODE*** ``` import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Value; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter; import org.springframework.security.config.http.SessionCreationPolicy; import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder; import org.springframework.security.crypto.password.PasswordEncoder; @Configuration @EnableWebSecurity public class SecurityConfig extends WebSecurityConfigurerAdapter { private static final Logger logger = LoggerFactory.getLogger(SecurityConfig.class); @Value("${security.user.password}") private String password; @Value("${security.user.name}") private String username; @Override protected void configure(HttpSecurity http) throws Exception { http.csrf().disable().authorizeRequests().anyRequest().authenticated() .and().logout().and().httpBasic().and().sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS); } @Override public void configure(AuthenticationManagerBuilder auth) throws Exception { auth.inMemoryAuthentication(). passwordEncoder(passwordEncoder()).withUser(username).password(password); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } @Bean public String generateHashedPassword(String password) { return BCrypt.hashpw(password, BCrypt.gensalt(10)); } } ``` **UPDATE 2** Currently the way it works now is when i start the application, i visit localhost:8080 then a login popup appears and i type the username and password (that are defined in application.properties) if I type the right username and password i get logged in but if i manage to login with the username and password defined in application.properties then whats the point with hashing the password? 
I was thinking more like having a list of hashed keys and compare the input password with the list and if success then login.<issue_comment>username_1: I think you need to implement your own AuthenticationProvider like in [this](https://stackoverflow.com/questions/31826233/custom-authentication-manager-with-spring-security-and-java-configuration) question. In the `authenticate()` method you can do the hashing of the retrieved password and check if it matches the one from your application.properties. Upvotes: 0 <issue_comment>username_2: Since you want to define your credentials in properties file, I guess you can take advantage of inmemory authentication. Try the following: ``` @Configuration @EnableWebSecurity public class SecurityConfig extends WebSecurityConfigurerAdapter { private static final Logger logger = LoggerFactory.getLogger(SecurityConfig.class); private AuthenticationProvider authenticationProvider; @Value("${user.name}") private String userName; @Value("${user.password}") private String userHashedPassword; // <PASSWORD> @Override protected void configure(HttpSecurity http) throws Exception { http.csrf().disable().authorizeRequests().anyRequest().authenticated().and().httpBasic() .and().sessionManagement().and().authenticationProvider(authenticationProvider) .sessionCreationPolicy(SessionCreationPolicy.STATELESS) } @Override public void configure(AuthenticationManagerBuilder auth) throws Exception { auth .inMemoryAuthentication() .passwordEncoder(passwordEncoder()) .withUser(userName) .password(<PASSWORD>); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } } ``` Please, note, that in this case your password should be encrypted with `BCryptPasswordEncoder` first, and then you should put it into properties file (you can use its `encoder.encode("password")` method). Or you can use any other implementation of `PasswordEncoder` if you want. I've also noticed that you're using some custom autenticationProvider. Not sure how it works since you didnt share the code, and not sure that it will get along with inmemory autentication. But, anyway, I think it worth a shot and this is the right way to go in your scenario. Hope it helps. Upvotes: 3 [selected_answer]
2018/03/21
662
2,905
<issue_start>username_0: I have a table in the database (big orange one) including parts and prices for two different type. I am looking to find the little orange table as result in summary: **I am looking for common parts in both type R and O Where price has gone up from type O to type R.** This is the script I tried but it is disconnected: SELECT \*FROM Table WHERE type='R'as a SELECT \* FROM Table WHERE type='O'as b SELECT \* FROM a INNER JOIN b ON a.part = b.part WHERE a.price < b.price [![enter image description here](https://i.stack.imgur.com/Vn3SB.png)](https://i.stack.imgur.com/Vn3SB.png)
2018/03/21
606
2,773
<issue_start>username_0: Trying to access an object's data in a constructor returns an "undefined" object. It works on the ngOnInit() function, but the data (going to get reset) is needed **every time** the component starts. ``` import { Component, OnInit, Input } from '@angular/core'; @Input() data: any; constructor(dataService: DataService) { console.log(this.data); // undefined } ngOnInit() { console.log(this.data) // works here } ```
2018/03/21
2,694
9,958
<issue_start>username_0: I installed Flutter following official document and also installed Flutter and Dart plugin on Android Studio. But, I can't see File>New Flutter Project wizard on Android Studio 3.0.1 I run "flutter doctor" command. See the below output. ``` Doctor summary (to see all details, run flutter doctor -v): [✓] Flutter (Channel beta, v0.1.5, on Mac OS X 10.13.3 17D102, locale en-TR) [✓] Android toolchain - develop for Android devices (Android SDK 27.0.3) [✓] iOS toolchain - develop for iOS devices (Xcode 9.2) [✓] Android Studio (version 3.0) [✓] IntelliJ IDEA Community Edition (version 2017.3.3) [!] Connected devices ! No devices available ! Doctor found issues in 1 category. ```<issue_comment>username_1: I have also same problem.but what can you do in that situation is Just create the project with command line : > > flutter create your\_app\_name > > > Now open android studio and open that project. Hope this works well Upvotes: 4 <issue_comment>username_2: Got the same problem, fixed it by installing dart first using the plugin manager of android studio. Then install flutter plugin. Upvotes: 3 <issue_comment>username_3: How to set flutter wizard in Android Studio 3.0 1. File > Close Project 2. Configure > Check for Updates 3. You will find Flutter and Dart updates. Update and Restart. Downloading Patch. 4. File > New Flutter Project or Select new Flutter Project. Hope your problem resolved. Happy Codings!! Upvotes: 2 <issue_comment>username_4: This worked for me. Install Dart and Flutter manually from Plugins: * Open Plugins (For Mac: Configure -> Plugins OR Android Studio -> Preferences -> Plugins) * Search for Dart -> Search in repositories -> Install -> Restart Android studio * Search for Flutter -> Search in repositories -> Install -> Restart Android Studio Upvotes: 3 <issue_comment>username_5: Update flutter using below command > > flutter upgrade > > > and again create flutter application from android studio Upvotes: 2 <issue_comment>username_6: I've got the same problem and finally sorted out. It's usually caused by your upgrade of Android Studio from 2.x to 3.x at the same time. In short, it's because Flutter is not correctly configurated, but behind the scene there might be different reasons, so the universal solution is to run **`flutter doctor -v`** to diagnose and see what's missing. [First make sure you've already followed the setup steps in Flutter's official documentation and have your Android SDK updated.] In my own case, a couple of things to fix: 1. Update the JAVA\_HOME path in `.bash_profile`. Because I have 2 Java versions installed and so I updated it to use the same as Android Studio does. This is critical as `flutter doctor` relies on Java to check some of your configurations. 2. `Some Android licenses not accepted` - follow flutter doctor's advice to accept all licenses. 3. Android Studio's Flutter plugin version too low - simply update it. Upvotes: 0 <issue_comment>username_7: For some reason, Flutter refused to show New `Flutter Project` in android studio 3.1 but when I use android studio 3.2 it works fine after installing Dart and flutter plugins. Upvotes: 2 <issue_comment>username_8: Had the same problem. Check-in Android Studio if the Dart and Flutter plugins are both installed and marked with a lock symbol in Preferences --> Plugins. Anyway, the following procedure helped me: 1. uninstall the Flutter plugin 2. restart Android Studio. 3. uninstall the Dart plugin 4. restart Android Studio again which seemed important to do 5. 
install the Dart plugin again 6. restart Android Studio although it is annoying 7. install the Flutter plugin again 8. and guess: restart Android Studio After the last restart I saw the success message: [![success notification of flutter](https://i.stack.imgur.com/nU2Ek.png)](https://i.stack.imgur.com/nU2Ek.png) I assume we both installed both plugins without restarting after Dart. Upvotes: 7 [selected_answer]<issue_comment>username_9: I also faced it but soon I solved it I simply install **flutter** and **dart** plugins and restarted Android Studio. After restarting I noticed that there is no Wizard for creating flutter apps. But soon I realized that I have disabled some plugins ( Android Studio's default plugins ) so I enabled all plugins and restarted Android Studio again and BOOM! Now there is a wizard for Creating Flutter Apps. Hope this helps you ! Upvotes: 6 <issue_comment>username_10: I also ran into this issue and the steps above didn't work. What I did was check which plugins I have enabled and realized that `Android APK Support` and `Android NDK Support` was disabled in my AS. After enabling this and restarting android studio everything seems to be working correctly. Upvotes: 3 <issue_comment>username_11: You need check the “AndroidAPK Support” in your Plugins. See this screenshot: [![Android APK Support must be ticked in ](https://i.stack.imgur.com/y4YI4.png)](https://i.stack.imgur.com/y4YI4.png) Upvotes: 7 <issue_comment>username_12: 1. Yes first update all your plugins related to flutter and dart . 2. "AndroidAPK Support" plugin(install /enable /update). 3.This will work for all the AS , checked this on AS 3.4 also . 4. Thanks to the guys who answered before me . Upvotes: 4 <issue_comment>username_13: My answer will be nearly same with @username_9 but I'll upload screenshots of before and after ;) First and foremost **Flutter** and **Dart plugins** must be installed before. [![enter image description here](https://i.stack.imgur.com/8ipKL.png)](https://i.stack.imgur.com/8ipKL.png) After installing these plugins you can check if it is ok or not with `flutter doctor` [![enter image description here](https://i.stack.imgur.com/0M1mb.png)](https://i.stack.imgur.com/0M1mb.png) If you hadn't installed the plugins you would have this: [![enter image description here](https://i.stack.imgur.com/YAGXc.png)](https://i.stack.imgur.com/YAGXc.png) **Android Studio** At first, I didn't see "Start a new Flutter project" [![enter image description here](https://i.stack.imgur.com/uoGCE.png)](https://i.stack.imgur.com/uoGCE.png) [![enter image description here](https://i.stack.imgur.com/0depn.png)](https://i.stack.imgur.com/0depn.png) After: [![enter image description here](https://i.stack.imgur.com/fHjrF.png)](https://i.stack.imgur.com/fHjrF.png) Upvotes: 5 <issue_comment>username_14: 1. Uninstall Flutter and Dart plugin 2. Restart Android Studio 3. Install Flutter Plugin (it will prompt you to install Dart plugin as well. Accept it) 4. Restart Android Studio Also, reminder to check whether the plugin is enabled or not. Go to Preferences -> Plugins -> Flutter/Dart. Although, it's enabled by default, you may have had to disable it at some point in the past. In that case, just enable it from Preferences -> Plugin -> Flutter/Dart if you want to. 
Upvotes: 1 <issue_comment>username_15: Make sure you have flutter and dart installed, then enable Android apk support; this helped me Upvotes: 2 <issue_comment>username_16: If you are sure to have both Dart and Flutter plugins correctly installed, check again your Plugins and **be sure that Android APK Support is enabled**. If it isn't, enable it! [![Android APK Support](https://i.stack.imgur.com/cL3je.png)](https://i.stack.imgur.com/cL3je.png) Et voilà! Now everything should be fine: [![New Flutter project](https://i.stack.imgur.com/GHs8J.png)](https://i.stack.imgur.com/GHs8J.png) You can find the plugin menu inside the Welcome page [![Plugin from Welcome page](https://i.stack.imgur.com/hOQKZ.png)](https://i.stack.imgur.com/hOQKZ.png) or, when you have a project opened, inside the Preferences menu: [![Plugin menu inside Preferences menu](https://i.stack.imgur.com/5DVwe.png)](https://i.stack.imgur.com/5DVwe.png) Upvotes: 6 <issue_comment>username_17: Even if you have done everything here,it may not work if you have Android Studio 4.x or canary It works only in lower version Upvotes: 0 <issue_comment>username_18: In my case I did not have APK/NDK support Plugin, added that and it worked. Of course restart IDE couple of times. Upvotes: 1 <issue_comment>username_19: Well go to command line and check flutter doctor , if all good give command to your project directory flutter create appname and import this to androidstudio , this will generate necessary plugins. Upvotes: 2 <issue_comment>username_20: It was little different in my case. After upgrading to newest release of Android studio. The dart flutter and other related plugins were in incompatible mode. You can check that by going to Android Studio=> Plugins. I simply update those plugins and then restart the IDE and it works fine. [![Android studio 4. Plugin update window.](https://i.stack.imgur.com/eXmEo.png)](https://i.stack.imgur.com/eXmEo.png) Upvotes: 2 <issue_comment>username_21: [![Do this](https://i.stack.imgur.com/ySaXo.png)](https://i.stack.imgur.com/ySaXo.png) Just Restore your IDE settings as shown in image File -> Manage Ide Settings -> **Restore default settings** This will remove your installed plugins and ask you again to install them. Upvotes: 2 <issue_comment>username_22: In my case I only miss Dart plugin so i installed and on IDE restart it was there Upvotes: 0 <issue_comment>username_23: I had the same issue sometime ago even after installing both Dart and Flutter plugins for intelliJ, I fixed it like this... STEP ONE: Click "Repair IDE" [![step one](https://i.stack.imgur.com/eMnRI.png)](https://i.stack.imgur.com/eMnRI.png) STEP TWO: Click "Rescan Project Indexes" at the bottom [![step two](https://i.stack.imgur.com/nkuNK.png)](https://i.stack.imgur.com/nkuNK.png) STEP THREE: Click "Reopen Project" at the bottom [![step three](https://i.stack.imgur.com/DkaqD.png)](https://i.stack.imgur.com/DkaqD.png) Upvotes: 0 <issue_comment>username_24: just make sure you enable the Android Apk Support and its will work for sure Upvotes: -1
2018/03/21
433
1,834
<issue_start>username_0: I used StanfordCoreNLP jar file library to split English paragraphs into sentences but I could retrieve the split sentences as CoreMap Object, but I want to convert those split sentences of type CoreMap to type String, is there anyway to achieve this task. The bold text in the code shows the area where CoreMap is used and I want the sentences retrieved to convert it to String The code snippet: ``` props.setProperty("annotators","tokenize,ssplit"); //put that in a pipeline StanfordCoreNLP pipeline = new StanfordCoreNLP(props); //a data structure for the annotation Annotation document = new Annotation(text); // run the pipeline on that data structure pipeline.annotate(document); // access the annotations which has worked on a sentence List sentences = document.get(SentencesAnnotation.class); PrintStream printStream = new PrintStream(new FileOutputStream("/home/sakshi/Desktop/Admin\_System/translate.en")); PrintStream console = System.out; // To store the reference to default output stream to use it to restore the default std output stream System.setOut(printStream);// To change the default output stream \*\*for (CoreMap sentence : sentences) { System.out.println(sentence);\*\* } System.setOut(console); response.setContentType("text/plain"); response.getWriter().write(text); ```<issue_comment>username_1: Not sure about exact types of the CoreMap, but probably Map#values() is a List that you need. To convert it into the single String you could use Java8 Streams API: ``` list.stream ().map (i -> i.toString ()).collect (Collectors.joining (",")); ``` Upvotes: 0 <issue_comment>username_2: `whatever.toString()` since toString works for every java Object since every object inherits from java.lang.Object Upvotes: -1 [selected_answer]
2018/03/21
678
2,621
<issue_start>username_0: I am confused by Laravels .env file VS other settings I have a .env file which has my sql database settings inside, but I also have a file in config/database.php that has these settings inside. When I deploy my application to an Elastic Beanstalk instance, which of these files is it using for the database settings?<issue_comment>username_1: In your ".env" file you have your settings. in the ".php" files like your "database.php" file this is the default value for the property and normally, the corresponding value in the ".env" file is use here with this syntax : `'database' => env('database', 'default_value'),` Upvotes: 2 <issue_comment>username_2: The .env file is the first file it will use for configs. In case values are missing inside the .env file Laravel will check the config files. I primairly use those as backups. Upvotes: 1 <issue_comment>username_3: .env is short for environment and thus that is your environment configurations. The `database.php` configuration contains non-critical information. You obviously won't have your database's username's password in your source control or available in the code. In order to keep everything safe or to keep information saved that is environment-defined... you keep them in `.env` file Laravel will prioritize `.env` variables. Upvotes: 4 [selected_answer]<issue_comment>username_4: A little more detailed answer: The config Files are where you store all configurations. You shouldn't add any username, passwords or other secret informations in them, because they will be in your source control. All secret informations and all environment dependant informations should be stored in your .env file. With this you can have different configuration values in local/testing/production with just a different .env file. In your config files you access the information in you .env files, if necessary. When use what from [another answer](https://stackoverflow.com/questions/40026893/what-is-difference-between-use-envapp-env-configapp-env-or-appenviron/42393294?noredirect=1#comment101701849_42393294) * use env() only in config files * use App::environment() for checking the environment (APP\_ENV in .env). * use config('app.var') for all other env variables, ex. config('app.debug') * create own config files for your own ENV variables. Example: In your .env: MY\_VALUE=foo example config app/myconfig.php ``` return [ 'myvalue' => env('MY_VALUE', 'bar'), // 'bar' is default if MY_VALUE is missing in .env ]; ``` Access in your code: ``` config('myconfig.myvalue') // will result in 'foo' ``` --- Upvotes: 2
2018/03/21
346
1,178
<issue_start>username_0: I have 3 topics: "BEGIN", "CONTINUE" and "END". These three topics need to be joined into one topic message where I can get a result model that is a combination of the 3 topic messages. There are many examples that show how to join 2 topics. Can anyone give me an example or a hint of how to make a join of these 3 topics?<issue_comment>username_1: Until the [cogroup feature](https://cwiki.apache.org/confluence/display/KAFKA/KIP-150+-+Kafka-Streams+Cogroup "coo") gets implemented, you will need to first merge your first 2 topics into an intermediary topic, and then join that one with your 3rd topic. For an example of how to do that, see [the cogroup KIP](https://cwiki.apache.org/confluence/display/KAFKA/KIP-150+-+Kafka-Streams+Cogroup "coo"). Upvotes: 2 <issue_comment>username_2: It depends on what kind of join you want to do. As you say, you have `KStream`s, so you would do two consecutive windowed joins: ```java KStream stream1 = builder.stream(...); KStream stream2 = builder.stream(...); KStream stream3 = builder.stream(...); KStream joined = stream1.join(stream2, ...) .join(stream3, ...); ``` Upvotes: 0
2018/03/21
224
728
<issue_start>username_0: I know there is ``` Collections.max(list); Collections.min(list); ``` to get the max or min of an ArrayList, but I am looking for a way to get the max or min within a certain range. For example, what is the max between index 0 and index 5?<issue_comment>username_1: Use `list.subList` to run the operation on a portion of the `List`: ``` Collections.max(list.subList(0,6)); ``` `list.subList(0,6)` returns a view of the portion of the original list between indices 0 and 5 (inclusive). Upvotes: 4 [selected_answer]<issue_comment>username_2: You can go with a sublist: make a sublist covering the range and then pass it as the parameter to max. For example: ``` Collections.max(list.subList(0,6)); ``` Upvotes: 3
2018/03/21
1,461
4,088
<issue_start>username_0: I have the following pandas dataframe: ``` count event date 0 1544 'strike' 2016-11-01 1 226 'defense' 2016-11-01 2 1524 'strike' 2016-12-01 3 246 'defense' 2016-12-01 4 1592 'strike' 2017-01-01 5 245 'defense' 2017-01-01 ``` I want to pivot/transform it in such a way the final output looks like this: ``` event 2016-11-01 2016-12-01 2017-01-01 2017-02-01 2017-03-01 'strike' 1544 1524 1592 1608 1654 'defense' 226 246 245 210 254 ``` but what i'm getting now upon pivoting is this: ``` count count count count count\ date 2016-11-01 2016-12-01 2017-01-01 2017-02-01 2017-03-01 event 'strike' 1544 1524 1592 1608 1654 'defense' 226 246 245 210 254 ``` is there any way i could remove the entire empty row ahead of the `event` index-name and rename the `date` index-name with `event` as its index-name and also remove the unwanted `count` appearing in the first row of the data frame? The data seems to be transforming correctly i just want to get rid of these headers and indexes and have the renamed and removed properly. I also don't want the row labels in the desired output. This is what i've been trying till now: ``` output = df.pivot(index='event', columns='date') print(output) ```<issue_comment>username_1: Solution is add parameter `values` to [`pivot`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html), then add [`reset_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html) for column from `index` and [`rename_axis`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename_axis.html) fro remove column name: ``` output=df.pivot(index='event',columns='date',values='count').reset_index().rename_axis(None,1) print(output) event 2016-11-01 2016-12-01 2017-01-01 0 'defense' 226 246 245 1 'strike' 1544 1524 1592 ``` What happens if omit it? ``` print (df) count event date count1 0 1544 'strike' 2016-11-01 1 1 226 'defense' 2016-11-01 7 2 1524 'strike' 2016-12-01 8 3 246 'defense' 2016-12-01 3 4 1592 'strike' 2017-01-01 0 5 245 'defense' 2017-01-01 1 ``` `pivot` use each not used column and create `MultiIndex` for distinguish original columns: ``` output = df.pivot(index='event', columns='date') print(output) count count1 date 2016-11-01 2016-12-01 2017-01-01 2016-11-01 2016-12-01 2017-01-01 event 'defense' 226 246 245 7 3 1 'strike' 1544 1524 1592 1 8 0 ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: I would recommend using the more general version of `pd.pivot()`, which is `pd.pivot_table()`, like so: ``` x = pd.pivot_table(df, index = 'event', columns = 'date', values = 'count') ``` You will get: ``` date 01/01/2017 01/11/2016 01/12/2016 event 'defense' 245 226 246 'strike' 1592 1544 1524 ``` Next, you can get rid of the 'date' string by setting: ``` x.columns.name = ' ' ``` Additionally, if you want to change the order of the events, you might want to set the variable up as a categorical variable, before doing the pivoting: ``` df.event = df.event.astype('category') # cast to categorical df.event.cat.set_categories(your_list, inplace = True) # force order ``` where `your_list` is the list of your categories, in order. Hope this helps. Upvotes: 2
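For convenience, here is the accepted approach stitched into a fully runnable snippet using the sample data from the question (only the axis=1 keyword is spelled out explicitly).

```python
import pandas as pd

df = pd.DataFrame({
    "count": [1544, 226, 1524, 246, 1592, 245],
    "event": ["'strike'", "'defense'", "'strike'", "'defense'", "'strike'", "'defense'"],
    "date":  ["2016-11-01", "2016-11-01", "2016-12-01", "2016-12-01", "2017-01-01", "2017-01-01"],
})

output = (
    df.pivot(index="event", columns="date", values="count")
      .reset_index()
      .rename_axis(None, axis=1)   # drop the leftover "date" columns name
)
print(output)
#        event  2016-11-01  2016-12-01  2017-01-01
# 0  'defense'         226         246         245
# 1   'strike'        1544        1524        1592
```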
2018/03/21
792
3,122
<issue_start>username_0: I have a failing Jest test case for code where I'm using promises. It looks like the resolution of the promise is happening after the test has completed, meaning I can't check that my promise resolution code has been executed. It feels like I need to make the event loop tick so the promise is resolved and the resolution code is executed, but haven't found anything that can do that in Jest. Here's a sample case. The code to be tested: ``` const Client = require('SomeClient'); module.exports.init = () => { Client.load().then(() => { console.log('load resolved'); setTimeout(() => { console.log('load setTimeout fired, retrying init'); module.exports.init(); }, 1000); }); }; ``` Test code: ``` jest.useFakeTimers(); const mockLoad = jest.fn().mockImplementation(() => Promise.resolve()); jest.mock('SomeClient', () => { return { load: mockLoad }; }, { virtual: true }); const promiseTest = require('./PromiseTest'); describe('SomeClient Promise Test', () => { it('retries init after 10 secs', () => { promiseTest.init(); expect(mockLoad).toHaveBeenCalledTimes(1); expect(setTimeout).toHaveBeenCalledTimes(1); // <-- FAILS - setTimeout has not been called jest.runAllTimers(); expect(mockLoad).toHaveBeenCalledTimes(2); }); }); ``` The `expect(setTimeout).toHaveBeenCalledTimes(1);` assertion fails (`setTimeout` has not been called at all), I think because the promise has not yet been resolved. Am I doing something wrong here? Can I cause the event loop to tick inside the test?<issue_comment>username_1: To tick the event loop inside your test, you should make it asynchronous. A nice workaround was suggested on [GitHub](https://github.com/facebook/jest/issues/2157#issuecomment-279171856). Having `flushPromises` as suggested there ``` function flushPromises() { return new Promise(resolve => setImmediate(resolve)); } ``` your test will look like ``` describe('SomeClient Promise Test', () => { it('retries init after 10 secs', () => { promiseTest.init(); expect(mockLoad).toHaveBeenCalledTimes(1); // notice return so jest knows that the test is asynchronous return flushPromises() .then(() => { expect(setTimeout).toHaveBeenCalledTimes(1); jest.runAllTimers(); expect(mockLoad).toHaveBeenCalledTimes(2); }); }); }); ``` Or the same using `async/await`: ``` describe('SomeClient Promise Test', () => { it('retries init after 10 secs', async () => { promiseTest.init(); expect(mockLoad).toHaveBeenCalledTimes(1); await flushPromises(); expect(setTimeout).toHaveBeenCalledTimes(1); jest.runAllTimers(); expect(mockLoad).toHaveBeenCalledTimes(2); }); }); ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: Not entirely sure why our React setup was different but Sergey's answer nearly got us there. We needed these two bits in our test: ``` export function flushPromises(): Promise { return new Promise(jest.requireActual("timers").setImmediate); } ``` ``` await flushPromises(); ``` Upvotes: 1
2018/03/21
551
1,763
<issue_start>username_0: I have a python program that has to be running all the time. If for some reason it was stopped I want to restart it automatically. I thought of having a cron that will run every n number of seconds and check the program is running. My shell script is looks like this: ``` #!/usr/bin/env bash CM_COMMAND=`ps aux| grep abc| grep def| grep sudo` LEN_COMMAND=${#CM_COMMAND} if[["$LEN_COMMAND" -le "5"]] then echo "start the python program" fi exit ``` When I run this script I am getting the error: `my_prog.sh: line 4: $'if[[118\r -le 5]]\r': command not found'` What is the alternative of doing this and what is the problem with my script?<issue_comment>username_1: Maybe this would be more robust? 1) save the PID of your process when you start it with: ``` {your_python_command} & echo $! >>/{some_folder}/your_app.pid ``` 2) This script will check and restart if it can't find the PID.. ``` #!/usr/bin/env bash PID=`cat /{some_folder}/your_app.pid` if ! ps -p $PID > /dev/null then rm /{some_folder}/your_app.pid {your_python_command} & echo $! >>/{some_folder}/your_app.pid fi ``` 3) To add it to a cronjob: ``` crontab -e ``` choose your text editor and add this row at the end of the file: ``` */1 * * * * /{your_path}/{your_script_name} ``` exit and save (this will run the script every minute, check crontab manual to set your exact interval) Upvotes: 4 [selected_answer]<issue_comment>username_2: How about making it a service? A very clean solution, in my opinion. For more information on how to do it, you can read [this article](https://www.digitalocean.com/community/tutorials/how-to-configure-a-linux-service-to-start-automatically-after-a-crash-or-reboot-part-1-practical-examples). Upvotes: 2
2018/03/21
812
2,558
<issue_start>username_0: I have a multidimensional array with an arbitrary number of arrays. The array is called `charge_codes`.

**print\_r( $charge\_codes )**

```
Array
(
    [0] => Array
        (
            [charge_code] => 21
            [amount] => 134.57
        )

    [1] => Array
        (
            [charge_code] => 4
            [amount] => 8.05
        )

    [2] => Array
        (
            [charge_code] => 23
            [amount] => 1.68
        )

    [3] => Array
        (
            [charge_code] => 62
            [amount] => 134.12
        )
)
```

I am trying to loop through the array and find the amount for charge code 62 and assign it to the amount for charge code 21. Once The amount has been assigned to charge code 21, I need to remove the array with charge code 62.

**Result I am wanting**

```
Array
(
    [0] => Array
        (
            [charge_code] => 21
            [amount] => 134.12
        )

    [1] => Array
        (
            [charge_code] => 4
            [amount] => 8.05
        )

    [2] => Array
        (
            [charge_code] => 23
            [amount] => 1.68
        )
)
```

Should i loop through using `foreach( $charge_codes as $key = > $value )` ?<issue_comment>username_1:

```
$change_key = 0;
$amount = 0;
foreach($charge_codes as $key=>$value){
    if($value["charge_code"] == 21) {
        $change_key = $key;
    }
    if($value["charge_code"] == 62) {
        $amount = $value["amount"];
        unset($charge_codes[$key]);
    }
}
if($amount != 0){
    $charge_codes[$change_key]["amount"] = $amount;
}
print_r($charge_codes);
```

Try this code.

Upvotes: 2 [selected_answer]<issue_comment>username_2:

```
<?php
$arr = [
    ["charge_code" => 21, "amount" => 134.57],
    ["charge_code" => 4, "amount" => 8.05],
    ["charge_code" => 23, "amount" => 1.68],
    ["charge_code" => 62, "amount" => 134.12]
];

/**
 * @Function to search the index from array
 *
 * @Args: charge code
 *
 * @Returns: null | index
 */
function searchIndexByChargeCode($chargeCode) {
    global $arr;
    foreach ($arr as $index=>$vals) {
        if (!empty($vals["charge_code"])) {
            if ($vals["charge_code"] == $chargeCode) {
                return $index;
            }
        }
    }
    return null;
}

$index62 = searchIndexByChargeCode(62);
$index21 = searchIndexByChargeCode(21);

$arr[$index21]["amount"] = $arr[$index62]["amount"];
unset($arr[$index62]);
?>
```

Upvotes: 0
2018/03/21
526
1,389
<issue_start>username_0: I have a tab separated file A containing several values per row:

```
A B C D E
F G H I J
K L M N O
P Q R S T
U V X Y Z
```

I want to remove from file A the elements contained in the following file B:

```
A D
J M
U V
```

resulting in a file C:

```
B C E
F G H I
K L N O
P Q R S T
X Y Z
```

Is there a way of doing this using bash?<issue_comment>username_1: **`awk`** solution:

```
awk 'NR == FNR{ pat = sprintf("%s%s|%s", (pat? pat "|":""), $1, $2); next }
     { gsub("^(" pat ")[[:space:]]*|[[:space:]]*(" pat ")", ""); if (NF) print }' file_b file_a
```

The output:

```
B C E
F G H I
K L N O
P Q R S T
X Y Z
```

Upvotes: 0 <issue_comment>username_2: In case the entries do not contain any special symbols for `sed` (for instance `()[]/\.*?+`) you can use the following commands:

```
mapfile -t array < <(tr -s '[:space:]' '\n' < B)
sed -r "s/($(IFS='|'; echo "${array[*]}"))\t?//g;/^$/d" A > C
```

The first command reads the entries of file `B` into an array. From the array a `sed` command is constructed. The `sed` command will filter out all entries and delete blank lines. In your example, the constructed command ...

```
sed -r 's/(A|D|J|M|U|V)\t?//g;/^$/d' A > C
```

... generates the following file `C` (spaces are actually tabs)

```
B C E
F G H I
K L N O
P Q R S T
X Y Z
```

Upvotes: 1
2018/03/21
893
2,966
<issue_start>username_0: How can I loop through multiple arrays and "pair" them, when declaring an object? Assuming there are two arrays that contain same amount of data. I would like to map each of them and allocate to objects properties. This is what I would like to achieve: ``` { name: john, surname: doe, datasets: [{ data: 1, vehicle: car, color: red }, { data: 2, vehicle: car, color: blue, }, { data: 3, vehicle: car, color: green }] } ``` this is what I have done: ``` function Constructor (name, surname, data, vehicle, colors) { this.name = name; this.surname = surname; this.data = data; this.vehicle = vehicle; this.colors = colors; this.person = { name: name, surname: surname, datasets: [{ data: data.map(data => ({ data, vehicle: vehicle, color: colors.map(color => ({ color })) })), }] } }; var testing = new Constructor ('john', 'doe', [1,2,3], 'car', ['red', 'blue', 'green']); console.log (testing.person); ```<issue_comment>username_1: If `data` and `color` always have identical lengths, you can use the `index` in your map function, which is passed as a second argument: ``` data.map((data, i) => ({ data, vehicle: vehicle, color: colors[i] }) ) ``` For non-matching data sets (i.e.: one has more data than the other), there are several approaches you could take. 1. Use a default color (if `data` has more items): ``` data.map((data, i) => ({ // ... color: colors[i] || "black" }) ``` 2. Throw an error if the data-sets do not align: ``` if (data.length !== colors.length) throw "data length does not match color length"; ``` 3. Take the shortest set of data and discard the extra's ``` Array.from( { length: Math.min(data.length, colors.length) }, (_, i) => ({ data: data[i], color: colors[i], vehicle }) ); ``` And, of course, many more! Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm just going to propose an alternative solution you may not have thought of. If `colors.length <= data.length` is always `true`, you can consider cycling through the colors using `%` remainder operator like so: ``` data.map((data, i) => ({ data, vehicle: vehicle, color: colors[i % colors.length] }) ``` ```js function Constructor(name, surname, data, vehicle, colors) { this.name = name; this.surname = surname; this.data = data; this.vehicle = vehicle; this.colors = colors; this.person = { name: name, surname: surname, datasets: [{ data: data.map((data, i) => ({ data, vehicle: vehicle, color: colors[i % colors.length] })) }] } } var testing = new Constructor('john', 'doe', [1, 2, 3, 4, 5], 'car', ['red', 'blue', 'green']); console.log(testing.person); ``` Which produces the sequence `red, blue, green, red, blue` for `data.length === 5` Upvotes: 1
2018/03/21
962
3,226
<issue_start>username_0: I am trying to create type information for my event listeners. As all event listeners are set on the same `.on()` function, I am using generics.

```
type Name = "error" | "connected";

type Callback = {
    error: (err: Error) => void,
    connected: (err: number) => void,
};

function on<T extends Name>(eventName: T, callback: Callback[T]): void {
}

on("error", (err) => err.stack);
on("connected", (err) => err.stack);
```

I would expect the above to give me an error for the `connected` event as I attempt to use a `number` as an `Error`; however, I get no type hinting at all for my callback functions.

However, if all function definitions in `Callback` match, it does begin to work. As below:

```
type Callback = {
    error: (err: Error) => void,
    connected: (err: Error) => void,
};
```

GIF of what I mean from VS code:

[![GIF](https://i.stack.imgur.com/nIxu1.gif)](https://i.stack.imgur.com/nIxu1.gif)

Am I doing something wrong?<issue_comment>username_1: It seems you are over-engineering things. Events can be listed in an enum.

```
enum EVENTS{
    ERROR = "ERROR",
    CONNECTED = "CONNECTED"
}
```

The `on` method does not have to be generic:

```
function on(eventName: EVENTS, callback: (v: any) => void): void {
}
```

And just a few examples:

```
on(EVENTS.ERROR, (v: any) => {
    if(v instanceof Error){
        console.log(v.stack);
    }
});

on(EVENTS.CONNECTED, (v:any) => {
    console.log(v);
});
```

As usual, it is hard to set the type of the event value. Sorry. Here is the edited function:

```
function on<T>(eventName: EVENTS, callback: (v: T) => void): void {
}

on<Error>(EVENTS.ERROR, (err) => {
    console.log(err.stack);
});

on<number>(EVENTS.CONNECTED, (v) => {
    console.log(v);
});
```

Upvotes: 1 <issue_comment>username_2: This seems to be a strange behavior on behalf of the inference engine in the compiler.

**Speculation on why**: If you type the second parameter as `Callback[T]`, the engine will try to determine `T` based on the argument type. So if you don't explicitly specify the type for the arrow function, the inference engine will infer the parameter for the arrow function to be `any` and try to guess `T` based on the arrow function type. (If you run with strict you will actually receive an error that the parameter has type `any` implicitly).

There are two possible solutions:

Use a two function approach, where `T` is determined in the first call and is known for the second call where the parameter is passed:

```
type Callback = {
    error: (e: Error) => void
    connected: (e: string) => void
};

function on<T extends keyof Callback>(eventName: T) {
    return function(callback: Callback[T]) {
    };
}

on("error")(e=> e.stack);
on("connected")(e=> e.substr(1));
```

If the only thing that differs between functions is the parameter type and you are using 2.8 or newer (unreleased at the time of writing but it is in RC, you can get it via `npm install -g typescript@rc`) you can get just the argument that differs, and the inference engine will not try to use the second parameter to infer `T`

```
type Arg0<T> = T extends (p1: infer U) => any ? U : never;

function on<T extends keyof Callback>(eventName: T, callback: (e: Arg0<Callback[T]>) => void): void {
}

on("error", e=> e.stack);
on("connected", e=> e.substr(1));
```

Upvotes: 3 [selected_answer]
2018/03/21
928
3,118
<issue_start>username_0: I am catching the status code in my service from below code:

```
add(new: Add): Observable {
    return this.http
      .post(this.addURL, new , httpOptions)
      .map((response: Response) => {
        if (response) {
          if (response.status === 200) {
            return [{ status: response.status, response: response.json() }]
          }
        }
      })
      .catch(this.handleError);
  }
```

Here is my component.ts code where I want to call the list after successful add

```
addDetails(): void {
    var new = this.form.value.array;
    new = JSON.stringify(this.form.value.array);
    this.addService.add(new)
      .subscribe(
      resultArray => this.Response = resultArray,
      error => console.log("Error :: " + error),
    )
    if(Response.status ===200){
      this.getList();
    }}
```

but I am getting the below error:

**Property 'status' does not exist on type 'typeof Response'**
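One way the status check is commonly moved so it only runs once the response has actually arrived — a minimal sketch, assuming the service really emits the `[{ status, response }]` array shown above; `payload` is a placeholder introduced here, not part of the original code:

```
// Sketch: inspect the emitted value inside subscribe(), not the Response class.
// `Response.status` refers to the Response *class* (typeof Response), which is
// why the compiler complains; the instance lives in the callback argument.
this.addService.add(payload)
  .subscribe(
    resultArray => {
      this.Response = resultArray;
      if (resultArray && resultArray[0] && resultArray[0].status === 200) {
        this.getList();   // runs only after a successful add
      }
    },
    error => console.log('Error :: ' + error)
  );
```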
2018/03/21
867
2,990
<issue_start>username_0: I am trying to delegate the html rendering of a tree to a web worker. The tree is composed of nodes, each nodes has references to the next, previous and parent nodes. The data that represents the tree is an array, containing all the root nodes. This array is post to the web worker as is, since the serializer of the web worker is supposed to support circular references. When there's few nodes, everything goes well. Using Chrome browser, when the number of nodes reaches a limit, the web worker do not receive anything ; its message's data is simply null. No error appears in the console. With Firefox, IE and Edge, everything is OK. But I need Chrome to work to. I tried to simplify my code and make a case test (see the jsFiddle below), and it appears that the problem comes from the circular reference to the next node. In this case test, with 100 elements everything goes well, with 1000 it doesn't work. Is there any solution to this problem ? Is the only solution to change my code to remove circular references ? HTML: ``` Test 100Test 1000 ``` Javascript: ``` var workerCode = "self.onmessage = function(e) { self.postMessage(e.data ? 'ok ' + e.data.length : 'ko : null data'); };", blob = new Blob([workerCode], {type: 'text/javascript'}), blobUrl = URL.createObjectURL(blob), worker = new Worker(blobUrl); var btn_100 = document.getElementById('btn_100'), btn_1000 = document.getElementById('btn_1000'); worker.onmessage = function(e) { var log = document.createElement('p'); log.innerHTML = 'Response: ``` ' + e.data + ' ``` '; document.body.appendChild(log); }; btn_100.onclick = function() { send(worker, 100); }; btn_1000.onclick = function() { send(worker, 1000); }; function send(w, n) { var a = []; for (var i = 0; i < n; i++) { a.push({}); if (i > 0) a[i - 1].next = a[i]; } w.postMessage(a); } ``` Link to jsFiddle : <https://jsfiddle.net/jvr4a50r/><issue_comment>username_1: Actuallty,shallow copy happen in next property of object with in array. So for removing circular references, you have to do deep copy in next property. You can remove circular references by updateing the send function in following way. ``` var a = []; for (var i = 0; i < n; i++) { a.push({}); if (i > 0) a[i - 1].next = JSON.parse(JSON.stringify(a[i])); } ``` Upvotes: 0 <issue_comment>username_2: Google has recognized this issue to be a bug: <https://bugs.chromium.org/p/chromium/issues/detail?id=825466> So far, to ensure having no problems, the only solution is to remove circular references in objects. In this case I followed these steps: * give a unique id to each node * flatten the tree by putting all nodes in a "hasmap", the key is the unique id * in nodes, replace all references to other nodes by the unique id * and of course change all the code that used references to access objects indirectly through the hashmap Upvotes: 3 [selected_answer]
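A minimal sketch of the id-based flattening described in the accepted approach. The node shape here (`id`, `next`, `previous`, `parent` plus a `label` payload) and the flat `allNodes` collection are assumptions, not taken from the original code:

```
// Build a plain, reference-free structure that the structured clone can always handle.
function toTransferable(allNodes) {
  const byId = {};
  for (const node of allNodes) {
    byId[node.id] = {
      id: node.id,
      label: node.label,                                   // plain payload fields only
      nextId: node.next ? node.next.id : null,             // ids instead of references
      previousId: node.previous ? node.previous.id : null,
      parentId: node.parent ? node.parent.id : null
    };
  }
  return byId;
}

// The worker looks nodes up in the map by id instead of following object references.
worker.postMessage({
  rootIds: rootNodes.map(n => n.id),
  nodes: toTransferable(allNodes)
});
```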
2018/03/21
1,310
4,312
<issue_start>username_0: I want to create android application "Points of interest". I've read many different tutorials, and I do not understand why I need to convert GPS coordinates to ECEF and then to ENU. Can you explain, please? Thanks!<issue_comment>username_1: Geospatial coordinate systems are a big topic, but the main choice between systems such as ECEF and ENU is about whether you want to describe a large region of the Earth's surface or just a small region. When Android provides a [geolocation](https://developer.android.com/reference/android/location/Location.html) via the [LocationListener](https://developer.android.com/reference/android/location/LocationListener.html) API, it typically does this using latitude/longitude/altitude which is ideal for representing any point over the Earth's surface, but is a "polar" or "geodetic" coordinate system that isn't ideal for plotting 2D locations. Standard [techniques](https://en.wikipedia.org/wiki/Geographic_coordinate_conversion#From_geodetic_to_ECEF_coordinates) allow this coordinate system to be converted into ECEF, which is another coordinate system which is suitable for the whole globe, but is "cartesian" so can be rotated and scaled using much simpler mathematical operations than the original latitude/longitude/altitude coordinates. Earth-Centred Earth Fixed ([ECEF](https://en.wikipedia.org/wiki/ECEF)) uses a coordinate system with its origin at the Earth's centre, so that any point on the ground will have coordinate values that are typically in the millions of metres. This is great for describing satellite orbits, or locations that span multiple continents, but not very convenient for 2D plots of points of interest within a town or city. If you want to draw a 2D map of a small region of the Earth's surface, then the East-North-Up coordinate system may be much more convenient. To use this, you need a reference location (such as the centre of a particular city) about which the local East/North/Up directions can be defined. Those then provide a set of x/y/z axes, where the x & y axes might be directly converted into the 2D screen coordinates. Obviously, as the region of interest grows larger (e.g. more than 100km), the effects of the Earth's curvature become more noticeable, and an ENU coordinate system will be less useful. See [wikipedia](https://en.wikipedia.org/wiki/Geodetic_datum) for more info. Moving from an ECEF to ENU coordinate system can be done by a simple set of matrix additions & multiplications which can be computed from the ECEF location of the centre of the map, and the unit vectors in the east/north/up directions. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can do it this way on Java

```
public List<Double> convertGpsToECEF(double lat, double longi, float alt) {
    // semi-major/semi-minor axes in km; e2 is the first eccentricity squared
    double a = 6378.1;
    double b = 6356.8;
    double N;
    double e2 = 1 - (Math.pow(b, 2) / Math.pow(a, 2));
    N = a / (Math.sqrt(1.0 - (e2 * Math.pow(Math.sin(Math.toRadians(lat)), 2))));
    double cosLatRad = Math.cos(Math.toRadians(lat));
    double cosLongiRad = Math.cos(Math.toRadians(longi));
    double sinLatRad = Math.sin(Math.toRadians(lat));
    double sinLongiRad = Math.sin(Math.toRadians(longi));
    double x = (N + 0.001 * alt) * cosLatRad * cosLongiRad;
    double y = (N + 0.001 * alt) * cosLatRad * sinLongiRad;
    double z = ((Math.pow(b, 2) / Math.pow(a, 2)) * N + 0.001 * alt) * sinLatRad;

    List<Double> ecef = new ArrayList<>();
    ecef.add(x);
    ecef.add(y);
    ecef.add(z);
    return ecef;
}

public List<Double> convertECEFtoENU(List<Double> ecefUser, List<Double> ecefPOI, double lat, double longi) {
    double cosLatRad = Math.cos(Math.toRadians(lat));
    double cosLongiRad = Math.cos(Math.toRadians(longi));
    double sinLatRad = Math.sin(Math.toRadians(lat));
    double sinLongiRad = Math.sin(Math.toRadians(longi));

    // vector from the POI to the user in ECEF coordinates
    List<Double> vector = new ArrayList<>();
    vector.add(ecefUser.get(0) - ecefPOI.get(0));
    vector.add(ecefUser.get(1) - ecefPOI.get(1));
    vector.add(ecefUser.get(2) - ecefPOI.get(2));

    double e = vector.get(0) * (-sinLongiRad) + vector.get(1) * (cosLongiRad);
    double n = vector.get(0) * (-sinLatRad) * (cosLongiRad) + vector.get(1) * (-sinLatRad) * (sinLongiRad) + vector.get(2) * cosLatRad;
    double u = vector.get(0) * (cosLatRad) * (cosLongiRad) + vector.get(1) * (cosLatRad) * (sinLongiRad) + vector.get(2) * sinLatRad;

    List<Double> enu = new ArrayList<>();
    enu.add(e);
    enu.add(n);
    enu.add(u);
    return enu;
}
```

Upvotes: 0
2018/03/21
863
3,122
<issue_start>username_0: The multiplayer game I work with seems to transfer player screen position data all the time player moves with io.emit, I think this must be stress for server, how can I adjust this code to be executed no more than once per 250 ms? ``` // Only emit if the player is moving if (this.speed != 0) { this.emitPlayerData() } }, emitPlayerData () { // Emit the 'move-player' event, updating the player's data on the server socket.emit('move-player', { x: this.sprite.body.x, y: this.sprite.body.y, angle: this.sprite.body.rotation, ```<issue_comment>username_1: You can create temporary timestamp to check what was the last time you emitted the data and only emit if it was more than 250ms ago. For example: ``` var lastUpdated = Date.now(); //get current timestamp if (this.speed != 0 && lastUpdated + 250 <= Date.now()) { // if last updated timestamp + 250ms is smaller than current timestamp lastUpdated = Date.now(); //update the lastUpdated timestamp this.emitPlayerData(); } ``` Upvotes: 1 <issue_comment>username_2: Theoretically you can just ignore the emition of your specific event using something like ``` // Only emit if the player is moving if (this.speed != 0) { this.emitPlayerData() } var blockEmit = false; emitPlayerData () { if (!blockEmit) { // Emit the 'move-player' event, updating the player's data on the server socket.emit('move-player', { x: this.sprite.body.x, y: this.sprite.body.y, angle: this.sprite.body.rotation, } blockEmit = true; setTimeout(function () { blockEmit = false; }, 250) } } ``` It depends on how you are doing it and how you want to achieve it, but that's the main idea Upvotes: 1 <issue_comment>username_3: You can use a `debounce` function. ``` if (this.speed != 0) { this.emitPlayerData() } ``` would be ``` if (this.speed != 0) { debounce(this.emitPlayerData, 250, true) } ``` Notice the debounce API, which is `debounce(expensiveFunction, ms, immediate)` Example of a debounce function ``` // Returns a function, that, as long as it continues to be invoked, will not // be triggered. The function will be called after it stops being called for // N milliseconds. If `immediate` is passed, trigger the function on the // leading edge, instead of the trailing. function debounce(func, wait, immediate) { var timeout; return function() { var context = this, args = arguments; var later = function() { timeout = null; if (!immediate) func.apply(context, args); }; var callNow = immediate && !timeout; clearTimeout(timeout); timeout = setTimeout(later, wait); if (callNow) func.apply(context, args); }; }; ``` For more information and examples: * <https://davidwalsh.name/javascript-debounce-function> * <https://gist.github.com/nmsdvid/8807205> If you're internally using lodash, go for it! <https://www.npmjs.com/package/lodash.debounce> Upvotes: 3 [selected_answer]
2018/03/21
935
3,379
<issue_start>username_0: I have a simple code in which I try to split numbers into digits:

```
for n in 10...12 {
    let numberIntoString = String(n)
    let splitNumber = Array(numberIntoString)
    let charIntoString = splitNumber.map {String($0)}
    let numberIntoInteger = charIntoString.map {Int($0)}
    print(numberIntoInteger)
}
```

and the output is:

```
[Optional(1), Optional(0)]
[Optional(1), Optional(1)]
[Optional(1), Optional(2)]
```

I did some research about "Optional" and find out that this term has something to do with wrapping/unwrapping value but to understand this fully I need someone to explain me this on my own code. So, my questions are: Why optional value appeared in my code and how to get rid of this unexpected output? Also, I would like to know is there any way to write this code "better" (i.e. more readable)?
2018/03/21
1,025
3,865
<issue_start>username_0: I have two API requests, each of which asynchronously fetch data that must be available before running another function. I've been looking into Promises, but have no idea how I'd implement them into this scenario. Using this code, how can I await returned data from these before running my dependent function without using a setTimeout method?: ``` var invoices = null; var expenses = null var invoicesTotal = 0; var expensesTotal = 0; JA.get('api/' + companyId + '/Invoicing', function(err, data) { if (err) { console.error('Error getting Invoicing data:', err); } else { if (data) { invoices = data.Items; console.log('Successfully retrieved Invoicing data:', data.Items); } else { console.log('No invoicing data found'); } } }) JA.get('api/' + companyId + '/Expenses', function(err, data) { if (err) { console.error('Error retrieving expenses data:', err); } else { if (data) { expenses = data.Items; } else { console.log('No expenses data found'); } } }) /* CALCULATE TOTAL REVENUE */ function calcRevenue() { .... } setTimeout(function() { calcRevenue(); }, 1000) ```<issue_comment>username_1: You can use `Promise.all()` to await an array of promises, and only after all have completed do something in the `.then()` See [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) docs for more. Upvotes: 0 <issue_comment>username_2: While promises may be the 'better' solution, you can do this with by checking your data: ``` var invoices = null; var expenses = null; JA.get('api/' + companyId + '/Invoicing', function(err, data) { ... if (data) { invoices = data.Items; // Call calc here calcRevenue(); } } }) JA.get('api/' + companyId + '/Expenses', function(err, data) { ... if (data) { expenses = data.Items; // And here calcRevenue(); } } }) function calcRevenue() { if (invoices == null || expenses == null) return; ... else both have loaded, ready to do the calc } ``` If you need to re-call either ajax call then clear the `invoices`/`expenses` variables first. Upvotes: 0 <issue_comment>username_3: This answer uses jQuery `ajax()` instead of `get()`. You can use the [`.ajaxStop()`](http://api.jquery.com/ajaxStop/) event handler: ``` $(document).ajaxStop(function() { // do something after completion of last ajax call calcRevenue(); }); ``` Alternatively, for a better control, you can count active ajax calls and execute your desired code when there are no more ongoing ajax. ``` var active_ajax_calls = 0; $.ajax({ url: ..., type: 'GET', ... // With each ajax call on the page, increment the count by 1 beforeSend: function(xhr) { active_ajax_calls++; }, ... }) .done(function(response) { // do stuff }) .fail(function() { // do stuff }) .always(function() { // Decrement the count by 1 active_ajax_calls--; if(active_ajax_calls == 0) { // Do your special stuff that must happen after all ajax calcRevenue(); } }); ``` Upvotes: -1 <issue_comment>username_4: I'm presuming `JA` is an alias for `$` You need to use a promise chain: ``` JA.get('api/' + companyId + '/Invoicing') .done(function(result) { ...processing of result //this resurns the promise so the second done will run return JA.get('api/' + companyId + '/Expenses'); }) .done(function(result){ ..processing of second result }); ``` No settimeouts. Upvotes: 1
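To make the `Promise.all()` suggestion above concrete — a minimal sketch, assuming the callback-style `JA.get` client and the `invoices`/`expenses`/`calcRevenue` names from the question; `getAsync` is a helper introduced here, not part of that API:

```
// Wrap the callback-style client in a Promise once...
function getAsync(path) {
  return new Promise((resolve, reject) => {
    JA.get(path, (err, data) => err ? reject(err) : resolve(data));
  });
}

// ...then wait for both requests before calculating revenue.
Promise.all([
  getAsync('api/' + companyId + '/Invoicing'),
  getAsync('api/' + companyId + '/Expenses')
]).then(([invoicingData, expensesData]) => {
  invoices = invoicingData ? invoicingData.Items : null;
  expenses = expensesData ? expensesData.Items : null;
  calcRevenue();                      // both results are available here, no setTimeout
}).catch(err => console.error('Error loading data:', err));
```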
2018/03/21
867
3,032
<issue_start>username_0: I have a list in the following format:

```
mylist = ["a word 1 2 3 4","b word 5 6 7 8"]
```

I'm trying to split this into 2 new lists, one for each string, ie.:

```
newlist1 = ["a","word","1","2","3","4"]
newlist2 = ["b","word","5","6","7","8"]
```

but I've not had any luck. I've tried various methods and looked at lots of similar questions but haven't been able to apply those successfully to my issue. Any help would be greatly appreciated, I'm still very new to Python and I know this is probably a very simple thing to do!
2018/03/21
639
2,178
<issue_start>username_0: I have hyperledger fabric "First Network" up and running in Vagrant on Windows 7. I would like to query the following tables of the fabric CA database:

* Affiliations
* Users
* Certificates

Kindly help me with how and where I can find the fabric CA database. Thank you.<issue_comment>username_1: If you are using version 1.1, you can use the **fabric-ca-client affiliation list** and **fabric-ca-client identity list** commands to see the affiliations and users table, respectively. The ability to list the certificates will be in a future version of the fabric-ca-client.

By default, the **fabric-ca-server** uses SQLITE and the name of the database file is **fabric-ca-server.db** in the home directory of the fabric-ca-server. You can use the **sqlite3 fabric-ca-server.db** command to enter a sqlite shell, and then issue **select * from users** to list the users table for example.

Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to login to the docker container to run sqlite3 fabric-ca-server.db. Follow these steps:

```
$ docker ps
```

find your Fabric-ca container name from the response. Then issue this command:

```
$ sudo docker exec -it <container-name> /bin/bash
```

After logging in, change directories:

```
root@c6cc601b0360:/# cd ca
root@c6cc601b0360:/ca#
```

Then connect to the database:

```
root@c6cc601b0360:/ca# sqlite3 fabric-ca-server.db
```

Run a test query:

```
root@c6cc601b0360:/ca# select * from users
```

Upvotes: 2 <issue_comment>username_3: It is very simple: just install and run an SQL client. For me, I installed the SQuirreL SQL client; you need to install the JDBC jar file for SQLite from this [link](https://bitbucket.org/xerial/sqlite-jdbc/downloads/):

Then connect to the Hyperledger fabric CA DB by setting the path file name

[![enter image description here](https://i.stack.imgur.com/6Ryek.png)](https://i.stack.imgur.com/6Ryek.png)

Connect to the database (without username/password if you keep default values), and you will see the requested tables as shown below

[![enter image description here](https://i.stack.imgur.com/dpVxG.png)](https://i.stack.imgur.com/dpVxG.png)

I hope I could help.

Upvotes: 1
2018/03/21
1,439
4,664
<issue_start>username_0: I am using the Flutter beta 0.15v with Android Studio.The crash only occurs on iOS simulator - IphoneX 11.2. ``` Unhandled exception: NoSuchMethodError: The getter 'stdout' was called on null. Receiver: null Tried calling: stdout #0 Object.noSuchMethod (dart:core-patch/dart:core/object_patch.dart:46) #1 stdout (package:flutter_tools/src/base/io.dart:176) #2 StdoutLogger.writeToStdOut (package:flutter_tools/src/base/logger.dart:100) #3 StdoutLogger.printStatus (package:flutter_tools/src/base/logger.dart:95) #4 _AppRunLogger.printStatus (package:flutter_tools/src/commands/daemon.dart:849) #5 printStatus (package:flutter_tools/src/globals.dart:39) #6 ResidentRunner._serviceDisconnected (package:flutter_tools/src/resident_runner.dart:756) #7 _rootRun (dart:async/zone.dart:1122) #8 _CustomZone.run (dart:async/zone.dart:1023) #9 _FutureListener.handleWhenComplete (dart:async/future_impl.dart:151) #10 _Future._propagateToListeners.handleWhenCompleteCallback (dart:async/future_impl.dart:603) #11 _Future._propagateToListeners (dart:async/future_impl.dart:659) #12 _Future._completeWithValue (dart:async/future_impl.dart:477) #13 _Future._asyncComplete. (dart:async/future\_impl.dart:507) #14 \_rootRun (dart:async/zone.dart:1126) #15 \_CustomZone.run (dart:async/zone.dart:1023) #16 \_CustomZone.bindCallback. (dart:async/zone.dart:949) #17 \_microtaskLoop (dart:async/schedule\_microtask.dart:41) #18 \_startMicrotaskLoop (dart:async/schedule\_microtask.dart:50) #19 \_runPendingImmediateCallback (dart:isolate-patch/dart:isolate/isolate\_patch.dart:113) #20 \_RawReceivePortImpl.\_handleMessage (dart:isolate-patch/dart:isolate/isolate\_patch.dart:166) ``` API call code as follows that uses http.dart ``` import 'package:http/http.dart' as http; _loadData() async { String dataURL = "https://api.github.com/orgs/abc/members"; http.Response response = await http.get(dataURL); setState(() { final membersJSON = JSON.decode(response.body); for (var memberJSON in membersJSON) { final member = new Member( memberJSON["login"], memberJSON["avatar_url"]); _members.add(member); } }); } ``` When you run it on ios simulator, the above exception is thrown everytime. Member data class ``` class Member { final String login; final String avatarUrl; Member(this.login, this.avatarUrl) { if (login == null) { throw new ArgumentError("login of Member cannot be null. " "Received: '$login'"); } if (avatarUrl == null) { throw new ArgumentError("avatarUrl of Member cannot be null. " "Received: '$avatarUrl'"); } } } ```<issue_comment>username_1: I've tried this code and is working fine on my iPhone X with iOS 11.2. ``` _loadData() async { String dataURL = "https://api.github.com/orgs/raywenderlich/members"; http.Response response = await http.get(dataURL); setState(() { final membersJSON = JSON.decode(response.body); for (var memberJSON in membersJSON) { final member = new Member( memberJSON["login"], memberJSON["avatar_url"]); //_members.add(member); debugPrint("${member.login}"); } }); ``` } The error is in the `dart.io` library used with [HTTPClientRequest](https://api.dartlang.org/stable/1.24.3/dart-io/IOSink-class.html) and the exception is in the line 176 [here](https://raw.githubusercontent.com/flutter/flutter/master/packages/flutter_tools/lib/src/base/io.dart). I've no error using `http` on my `iOS emulator`. So I guess the problem is in your SDK version or in the data you are parsing. Or there is something wrong with your emulator... Have you tried on a real device? 
Here is my flutter version:

```
Flutter 0.1.5 • channel beta • https://github.com/flutter/flutter.git
Framework • revision 3ea4d06340 (5 weeks ago) • 2018-02-22 11:12:39 -0800
Engine • revision ead227f118
Tools • Dart 2.0.0-dev.28.0.flutter-0b4f01f759
```

You can try my example project [here](https://github.com/username_11/flutter_app_000006).

Please try it and let me know if it works or you get the same error. Moreover, type `flutter --version` and paste your output.

Upvotes: 2 <issue_comment>username_2: The issue got resolved after creating the whole demo in a new project again. The issue was probably with the build process for iOS only, as the Android counterpart of the old project was executing correctly. I hope such annoying issues will be resolved in future updates.

Upvotes: 1 [selected_answer]
2018/03/21
498
1,718
<issue_start>username_0: I want to rewrite the url ``` http://localhost:56713/Home/UserDetails?Code=223322 ``` with ``` http://localhost:56713/223322 ``` I have written below in StartUp.cs but it is not working ``` var rewrite = new RewriteOptions() .AddRewrite(@"{$1}", "Home/UserDetails?Code={$1}",true); ```<issue_comment>username_1: You need a Regular Expression on the first parameter on AddRewrite function. ``` var rewrite = new RewriteOptions().AddRewrite( @"^Home/UserDetails?Code=(.*)", // RegEx to match Route "Home/{$1}", // URL to rewrite route skipRemainingRules: true // Should skip other rules ); ``` This link maybe can help with more examples <https://learn.microsoft.com/en-us/aspnet/core/fundamentals/url-rewriting?tabs=aspnetcore2x> Upvotes: 4 [selected_answer]<issue_comment>username_2: Adding a rule to match `@"{$1}"` won't work. The term `$1` represents a value parsed using RegEx. You haven't executed any RegEx so you're effectively telling it to "rewrite my URL whenever the URL is `null`". Clearly, that isn't very likely. You want to match the incoming URL using this regular expression: ``` @"^Home/UserDetails?Code=(\d+)" ``` The `(\d+)` tells RegEx to match "one or more digits" and store it as a variable. Since this is the only variable included in the parens, the value is stored in `$1`. You then want to rewrite the URL using the value parsed using that regex: ``` "Home/$1" ``` You pass these two strings into the `AddRewrite` method: ``` AddRewrite( @"^Home/UserDetails?Code=(\d+)", // RegEx to match URL "Home/$1", // URL to rewrite true // Stop processing any aditional rules ); ``` Upvotes: 1
2018/03/21
1,008
2,846
<issue_start>username_0: I have a relationship name `:RELTYPE` with a property array called `LineIds`. Assume, ``` (node1)-[r:RELTYPE {LineIds : [1,2,15]}]->(node2) ``` I want to remove a very specific value from the `r.LineIds` array (let's say 15). The code written doesn't seem to work. I am trying to remove "15" from the `r.LineIds` array. ``` var lineId = 15 var match = "(t:Template)-[r1:DEPENDS_ON]->(e:Template)" this.client.Cypher.Match(match) .Where((Template t) => t.Id == this.source.Id && t.ScenarioId == sid1) .AndWhere((Segment e) => e.Id == this.dest.Id) .Remove(lineId+" In r.LineIds") .ExecuteWithoutResults(); ``` In Neo4j Browser ``` match (b :Template)-[r:Temp_Reffered_DIM_SEG1]->(dim2 :Natural_Account) where b.Id = 228 and b.ScenarioId = 200 and dim2.Id =117 remove 15 in r.LineIds delete 15 ``` I have also tried [15] instead of just 15 in the remove. Nothing seems to work. Can anyone please help me with the syntax?<issue_comment>username_1: As you can see the developer manual of neo4j where for [list operations](https://neo4j.com/docs/developer-manual/3.2/cypher/functions/list/) remove is not a function used for list but used to [remove](https://neo4j.com/docs/developer-manual/3.2/cypher/clauses/remove/) a property/label of a node. Upvotes: 0 <issue_comment>username_2: Not surprising you're having trouble, there isn't a way in Cypher to selectively remove an item from a list. But you can replace the list with one that has the value filtered out. With just Cypher, you can do this: ``` match (b :Template)-[r:Temp_Reffered_DIM_SEG1]->(dim2 :Natural_Account) where b.Id = 228 and b.ScenarioId = 200 and dim2.Id =117 set r.LineIds = filter(val in r.LineIds where val <> 15) ``` If you have access to APOC Procedures, there's one that is a bit easier to use: ``` match (b :Template)-[r:Temp_Reffered_DIM_SEG1]->(dim2 :Natural_Account) where b.Id = 228 and b.ScenarioId = 200 and dim2.Id =117 set r.LineIds = apoc.coll.removeAll(r.LineIds, [15]) ``` Upvotes: 1 <issue_comment>username_3: I finally figured it out. Filtering out the values that I wish to remove was one of the solution that worked for me. The Query written below worked fine. ``` match (b :Template)-[r:Temp_Reffered_DIM_SEG1]->(dim2 :Natural_Account) where b.Id = 228 and b.ScenarioId = 200 and dim2.Id =117 Set r.LineIds = filter(x in r.LineIds where not(x=15)) ``` Code for calling Neo4j DB from .net class ``` var lineId = 15 var match = "(t:Template)-[r1:DEPENDS_ON]->(e:Template)" this.client.Cypher.Match(match) .Where((Template t) => t.Id == this.source.Id && t.ScenarioId == sid1) .AndWhere((Segment e) => e.Id == this.dest.Id) .Set(string.Format("r.LineIds = filter(x in r.LineIds where not(x={0}))",lineId)) .ExecuteWithoutResults(); ``` Thats it. Thanks for your answers. Upvotes: 0
2018/03/21
889
3,202
<issue_start>username_0: Terraform can't find a resource which is declared in the same file where the reference is. It seems that this line is causing trouble: `role_arn = "${aws_iam_role.newsapi_lambda_codepipeline.arn}"`. It can't find `newsapi_lambda_codepipeline` which is declared as `resource "aws_iam_role" "newsapi_lambda_codepipeline" { ... }`. This is my main.tf: ``` resource "aws_s3_bucket" "newsapi_lambda_builds" { bucket = "newsapi-lambda-builds" acl = "private" } resource "aws_iam_role" "newsapi_lambda_codebuild" { name = "newsapi-lambda-codebuild" assume_role_policy = < ``` After executing `terraform apply` I get: ``` Error: Error running plan: 1 error(s) occurred: * aws_codepipeline.newsapi_lambda: 1 error(s) occurred: * aws_codepipeline.newsapi_lambda: Resource 'aws_iam_role.newsapi_lambda_codepipeline' not found for variable 'aws_iam_role.newsapi_lambda_codepipeline.arn' ``` I don't understand why that happens. I have `aws_iam_role.newsapi_lambda_codepipeline` declared, haven't I?<issue_comment>username_1: I believe your role declaration could be slightly wrong. And terraform was not able to generate an arn for that, therefore not found. It looks like you also need to create `resource "aws_iam_role_policy"`. See <https://www.terraform.io/docs/providers/aws/r/codepipeline.html> It's a bit unclear why you'd need to split. If this is not the case, let me know and I'll try to run the code myself to test. Upvotes: 4 [selected_answer]<issue_comment>username_2: For those experiencing an issue with `aws_ecs_task_definition` not finding a variable for the `aws_ecs_task_definition.XXX.arn`, there's a good chance your JSON came out malformed. Here's what I did to remedy my issue * Replace your line with `task_definition = "[]"` * Run `terraform plan` At this point you should get an error. For example, I got > > module.tf.aws\_ecs\_task\_definition.sandbox: ECS Task Definition container\_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go struct field ContainerDefinition.MemoryReservation of type int64 > > > In this case, I had quoted `memSize` in my `template_file` and it didn't implicitly convert to int64, hence an error. I changed `"memoryReservation": "${mem_size}"` to `"memoryReservation": ${mem_size}`, removed the task\_definition placeholder, and everything went smoothly. Upvotes: 3 <issue_comment>username_3: Since the title of the problem is pretty generic, I landed on this link. I was able to find the problem, given the fact that there is `something wrong with the resource which was not found and hence it is not getting created` In my case it was a variable not getting referenced correctly in `aws_cloudwatch_event_rule` "event\_pattern" key ``` event_pattern = < ``` Upvotes: 0 <issue_comment>username_4: To help out with investigating such issues, you can run targeted `terraform plan`. In my case (misconfigured reference to CIDR block from custom AWS VPC module), after running ``` terraform plan --target aws_security_group.something-or-other ``` Terraform actually provided clear error message on what exactly i did wrong this time. Hope it helps :) Upvotes: 2
2018/03/21
330
1,140
<issue_start>username_0: I have downloaded chromedriver.exe and Eclipse, and I have added the jars through Add External JARs, but while executing it gives me the error

> 
> Error: Could not find or load main class demochrome.DemoChrome
> 
> 

```
package demochrome;
public class DemoChrome {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "E://chromedriver.exe");
        System.out.println("Welcome to chrome");
    }
}
```

<issue_comment>username_1: I recommend the page factory pattern; working example:

```
public class NewTest {
    String baseUrl = "http://google.com";
    String driverPath= "C:\\chromedriver.exe";
    WebDriver driver;

    @BeforeTest
    public void beforeTest() {
        System.setProperty("webdriver.chrome.driver", driverPath);
        driver = new ChromeDriver();
    }
}
```

and in @Test

```
driver.get(baseUrl);
```

Maybe you changed the class name in the code (the project has another name for it), and you have an improper slash in "E://chromedriver.exe"

Upvotes: 1 <issue_comment>username_2: `E://chromedriver.exe` is not a valid path. Try `E:/chromedriver.exe` or `E:\\chromedriver.exe`

Upvotes: 0
2018/03/21
405
1,509
<issue_start>username_0: If I use `import { Renderer2 } from '@angular/core';` in my shared.module in angular 4, why can't I also add it in the import array such as: ``` @NgModule({ imports: [ Renderer2, ... ] ``` Adding it in the import gives an error. Here is why I need Renderer2: myComponent.html (part of my shared.module): ``` ``` myComponent.ts ``` constructor(private renderer: Renderer2) { } const element = this.renderer.selectRootElement('#searchElem'); setTimeout(() => element.focus(), 0); ``` This is done to be able to set the focus of an element.<issue_comment>username_1: `Renderer2` is an injectable (service) part of Angular core. It's not a module. The [import property in `@NgModule` is to import other modules](https://angular.io/api/core/NgModule#imports). You should be able to use `Renderer2` in your component the way you have it, but simply remove it from the module imports. Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` imports ``` imports describe which dependencies this module has. If your module depends on other modules, you list them here. example ``` imports: [ BrowserModule, HttpClientModule,DataTableModule, AppRoutingModule, ], ``` The short answer is that you put something in your NgModule’s imports if you’re going to be using it in your *templates* or with *dependency injection*. `import` statement is used to import other typescript files and class ``` import {className} from 'filePath'; ``` Upvotes: 0
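A minimal sketch of where `Renderer2` belongs — injected into the component, while the module's `imports` array lists only other NgModules. The inline template with `#searchElem` is an assumption, since the original template isn't shown:

```
import { NgModule, Component, Renderer2 } from '@angular/core';
import { CommonModule } from '@angular/common';

@Component({
  selector: 'app-search',
  template: `<input id="searchElem">`          // placeholder template
})
export class SearchComponent {
  constructor(private renderer: Renderer2) {}  // injected service, not a module

  focusSearch() {
    const element = this.renderer.selectRootElement('#searchElem');
    setTimeout(() => element.focus(), 0);
  }
}

@NgModule({
  imports: [CommonModule],                     // only NgModules go here
  declarations: [SearchComponent],
  exports: [SearchComponent]
})
export class SharedModule {}
```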
2018/03/21
324
1,239
<issue_start>username_0: Not entirely sure if this is possible but is there a way to start the specified cloud foundry instances in sequence rather than having them start concurrently? Right now using the default cf push command with build-pack, 2 instances to be spun up. Would like to do this on the push if possible. Any ideas if this is possible? thanks, Stefan<issue_comment>username_1: Based on `cf push --help`, that doesn't seem to be possible... However, if your goal is to start instances in sequence, maybe the [`cf scale` command](https://docs.cloudfoundry.org/devguide/deploy-apps/cf-scale.html) is an option for you: ``` $ cf push myapp -i 1 $ cf scale myapp -i 2 ``` You could wrap these commands plus some logic into a script that would start N instances sequentially. Upvotes: 1 <issue_comment>username_2: I needed to run a poller, one instance of my app only - turns out I could do this by using the [`CF_INSTANCE_INDEX`](http://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html#CF-INSTANCE-INDEX) environment variable and ensure that it only ran on one instance. This negated the need for a hackier solution of a sequenced start of the instances and a lock file. Upvotes: 1 [selected_answer]
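A minimal sketch of the `CF_INSTANCE_INDEX` approach mentioned above (Node.js flavoured; `startPoller` is a placeholder for whatever should run on exactly one instance):

```
// Cloud Foundry sets CF_INSTANCE_INDEX on every app instance, starting at 0.
const instanceIndex = parseInt(process.env.CF_INSTANCE_INDEX || '0', 10);

if (instanceIndex === 0) {
  startPoller();                               // singleton work runs here only
} else {
  console.log(`Instance ${instanceIndex}: poller disabled on this instance`);
}
```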
2018/03/21
602
2,021
<issue_start>username_0: I have a DataTable and I want to remove all the rows matching a `List<string>`, how to do that? Following is my code,

```
public static DataTable GetSkills(List EnteredSkills)
    {
        DataTable dt = new DataTable();
        dt = GetDBMaster("SkillMaster");
        List<string> MatchingSkills = EnteredSkills.Select(c => c.Text).ToList();
        //Logic to Delete rows MatchingSkills from dt here
        return dt;
    }
```

**Final Solution**

```
public static DataTable GetSkills(List EnteredSkills)
    {
        DataTable dt = new DataTable();
        dt = GetDBMaster("SkillMaster");
        var MatchingSkills = new HashSet<string>(EnteredSkills.Select(c => c.Text));
        List<DataRow> removeRows = dt.AsEnumerable().Where(r => MatchingSkills.Contains(r.Field<string>("DataTableSkillColumnName"))).ToList();
        removeRows.ForEach(dt.Rows.Remove);
        return dt;
    }
```

<issue_comment>username_1: The simplest way will be:

```
var lstRemoveColumns = new List<string>() { "ColValue1", "ColVal2", "ColValue3", "ColValue4" };

List<DataRow> rowsToDelete = new List<DataRow>();

foreach (DataRow row in dt.Rows)
{
    if (lstRemoveColumns.Contains(row["ColumnName"].ToString()))
    {
        rowsToDelete.Add(row);
    }
}

foreach (DataRow row in rowsToDelete)
{
    dt.Rows.Remove(row);
}

dt.AcceptChanges();
```

Look [here](https://stackoverflow.com/questions/5648339/deleting-specific-rows-from-datatable)

Upvotes: 2 <issue_comment>username_2: Presuming the column is `SkillName`

```
List<DataRow> removeRows = dt.AsEnumerable()
    .Where(r => MatchingSkills.Contains(r.Field<string>("SkillName")))
    .ToList();

removeRows.ForEach(dt.Rows.Remove);
```

Side note: I would use a `HashSet` because it would be more efficient:

```
var MatchingSkills = new HashSet<string>(EnteredSkills.Select(c => c.Text));
```

Upvotes: 4 [selected_answer]<issue_comment>username_3: This solution works great! I'm using it for my solution.

```
List<DataRow> lsFilteredData = dtUnfilteredData.AsEnumerable().Where(q => lsWhatIWanted.Contains(q.Field<string>("Code"))).ToList();

foreach (DataRow drFilteredData in lsFilteredData)
{
    //Do whatever you want here
}
```

Upvotes: 0
2018/03/21
601
2,043
<issue_start>username_0: I am reading sentences from a txt file and creating an array of unique words. First, I read the file line by line. I split the lines with whitespace to get the words as a String array. Then if the words are not my unique words ArrayList, I add the word to the ArrayList. However, there is a couple of problems. The first one is that it also adds empty String to the unique words ArrayList. The second one is that it adds the same words 2 times, and when I compare those two Strings, it acts like they are not equal. My code is as follows:

```
ArrayList uniqueWords = new ArrayList<>();
Scanner scan = new Scanner(new File("input.txt"));
while(scan.hasNext()) {
    String []line = scan.nextLine().split("\\s+");
    for(int i = 0;i
```

and output is as follows:

```
0:
1:adalet
2:adalet
9:
false
```
2018/03/21
1,134
3,529
<issue_start>username_0: hi, I have some issues with my real time plotting for matplotlib. I am using "time" on the X axis and a random number on Y axis. The random number is a static number then multiplied by a random number ``` import matplotlib.pyplot as plt import datetime import numpy as np import time def GetRandomInt(Data): timerCount=0 x=[] y=[] while timerCount < 5000: NewNumber = Data * np.random.randomint(5) x.append(datetime.datetime.now()) y.append(NewNumber) plt.plot(x,y) plt.show() time.sleep(10) a = 10 GetRandomInt(a) ``` This seems to crash python as it cannot handle the updates - I can add a delay but wanted to know if the code is doing the right thing? I have cleaned the code to do the same function as my code, so the idea is we have some static data, then some data which we want to update every 5 seconds or so and then to plot the updates. Thanks!<issue_comment>username_1: You instantiate GetRandomInt which instantiates PlotData which instantiates GetRandomInt which instantiates PlotData which ... etc. This is the source of your problems. Upvotes: 0 <issue_comment>username_2: To draw a continuous set of random line plots, you would need to use animation in matplotlib: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation fig, ax = plt.subplots() max_x = 5 max_rand = 10 x = np.arange(0, max_x) ax.set_ylim(0, max_rand) line, = ax.plot(x, np.random.randint(0, max_rand, max_x)) def init(): # give a clean slate to start line.set_ydata([np.nan] * len(x)) return line, def animate(i): # update the y values (every 1000ms) line.set_ydata(np.random.randint(0, max_rand, max_x)) return line, ani = animation.FuncAnimation( fig, animate, init_func=init, interval=1000, blit=True, save_count=10) plt.show() ``` [![animated graph](https://i.stack.imgur.com/jD4CY.png)](https://i.stack.imgur.com/jD4CY.png) The idea here is that you have a graph containing `x` and `y` values. Where `x`is just a range e.g. 0 to 5. You then call `animation.FuncAnimation()` which tells matplotlib to call your `animate()` function every `1000ms` to let you provide new `y` values. You can speed this up as much as you like by modifying the `interval` parameter. --- One possible approach if you wanted to plot values over time, you could use a `deque()` to hold the `y` values and then use the `x` axis to hold `seconds ago`: ``` from collections import deque import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from matplotlib.ticker import FuncFormatter def init(): line.set_ydata([np.nan] * len(x)) return line, def animate(i): # Add next value data.append(np.random.randint(0, max_rand)) line.set_ydata(data) plt.savefig('e:\\python temp\\fig_{:02}'.format(i)) print(i) return line, max_x = 10 max_rand = 5 data = deque(np.zeros(max_x), maxlen=max_x) # hold the last 10 values x = np.arange(0, max_x) fig, ax = plt.subplots() ax.set_ylim(0, max_rand) ax.set_xlim(0, max_x-1) line, = ax.plot(x, np.random.randint(0, max_rand, max_x)) ax.xaxis.set_major_formatter(FuncFormatter(lambda x, pos: '{:.0f}s'.format(max_x - x - 1))) plt.xlabel('Seconds ago') ani = animation.FuncAnimation( fig, animate, init_func=init, interval=1000, blit=True, save_count=10) plt.show() ``` Giving you: [![moving time plot](https://i.stack.imgur.com/DjRea.png)](https://i.stack.imgur.com/DjRea.png) Upvotes: 3
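If `FuncAnimation` feels heavier than needed, another common pattern is to keep one `Line2D` artist and refresh it inside the loop with `plt.pause()`, which avoids re-plotting on every step and the crash-like freeze of a tight loop that never yields to the GUI. A minimal sketch, with the update interval and iteration count chosen arbitrarily:

```python
import time
import numpy as np
import matplotlib.pyplot as plt

plt.ion()                                    # interactive mode: drawing does not block the loop
fig, ax = plt.subplots()
xs, ys = [], []
line, = ax.plot(xs, ys)
start = time.time()

for _ in range(20):
    xs.append(time.time() - start)           # seconds since start on the x axis
    ys.append(10 * np.random.randint(1, 5))  # static value times a random factor
    line.set_data(xs, ys)                    # update the existing artist in place
    ax.relim()
    ax.autoscale_view()
    plt.pause(0.5)                           # let matplotlib process events and redraw

plt.ioff()
plt.show()
```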
2018/03/21
1,019
3,045
<issue_start>username_0: I wrote following simple select query to fetch data, which is greater than the particular given date. ``` select * from XVIIX.emp_tasks where TASK_START_DATE > to_Date('30-MAR-18','DD-MM-YYYY'); ``` But the result is not what is expected from that. [![enter image description here](https://i.stack.imgur.com/U5MuX.jpg)](https://i.stack.imgur.com/U5MuX.jpg) Can someone explain me what is the root cause for this behavior?
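A plausible root cause (stated as an assumption, since it cannot be verified against the database here): `'30-MAR-18'` has an alphabetic month and a two-digit year, so parsing it with the model `'DD-MM-YYYY'` can silently produce the year 18 AD in Oracle's lenient date parser, which makes every `TASK_START_DATE` compare as greater; using `'DD-MON-RR'`, or writing the literal as `'30-03-2018'`, is the usual fix. The sketch below uses Python's stricter `strptime` only to illustrate which format actually matches that literal — it is not the Oracle engine:

```python
from datetime import datetime

literal = "30-MAR-18"

# The format that really matches: day, abbreviated month name, two-digit year
print(datetime.strptime(literal, "%d-%b-%y"))    # 2018-03-30 00:00:00

# A DD-MM-YYYY style format does not fit an alphabetic month / 2-digit year;
# Python refuses outright, whereas Oracle may quietly yield an unexpected year
try:
    datetime.strptime(literal, "%d-%m-%Y")
except ValueError as err:
    print("format mismatch:", err)
```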
2018/03/21
560
2,173
<issue_start>username_0: I am starting a project and want to know the best and most widely accepted way of using Angular as the front end and Laravel as the back end. Can I simply install Angular inside Laravel's view folder — is that OK? I have tried creating an API in Laravel and calling it from Angular. What is the most common way of doing this? Most of the tutorials I have found target Angular 2 and are outdated, so can you tell me how to make Laravel 5.6 work together with Angular 4 or 5? Please share your suggestions and thoughts.<issue_comment>username_1: I would stick with making a Laravel API and letting your Angular app make calls to that API. This is the most common approach these days. Upvotes: 1 <issue_comment>username_2: So basically there are two ways you can do this. One is to keep everything in one app and, as you suggested, put Angular in /view. Laravel has a `php artisan preset react` command that sets the application up for React; as far as I know there is no equivalent for Angular, but you can find tutorials on how to include Angular in your Laravel project. The second way is to keep them separated: a Laravel/Lumen app that provides the API, and another project that runs Angular, so your Angular project calls Laravel's API across two separate servers (on localhost, for example). I just answered another similar question about the benefits of doing it like this. Read it here: <https://stackoverflow.com/a/49181795/2783379> Upvotes: 3 [selected_answer]<issue_comment>username_3: You can view the documentation page of the laravel-angular project. I think this is the best way, and it can help you build your project using the two technologies together. <https://laravel-angular.io/> Installation: ``` composer require jadjoubran/laravel-angular ``` If you're using Laravel 5.5 or newer, you can skip this step. Inside your config/app.php, add the Service Provider: ``` Jadjoubran\LaravelAngular\Provider\LaravelServiceProvider::class ``` Then run the installation command: ``` php artisan laravelangular:install ``` Source: <https://laravel-angular.io/docs/1/> Upvotes: 2
2018/03/21
301
1,009
<issue_start>username_0: I want to set a custom background color for a canvas. How do I apply a custom color in Oracle Forms Developer?<issue_comment>username_1: Open the canvas' Property Palette window and set its "Background Color" property to some color. Upvotes: 0 <issue_comment>username_2: Right click your canvas, and select "Property Palette": [![enter image description here](https://i.stack.imgur.com/8NKYd.png)](https://i.stack.imgur.com/8NKYd.png) Then, set the property "Background" as you wish (you can use syntax like `r70g85b85` or `red`, for example). Upvotes: 3 [selected_answer]<issue_comment>username_3: Define a Visual Attribute named `myGrayBack` with your desired background color (for example `gray4`), using the toolbar. [![enter image description here](https://i.stack.imgur.com/M86Ij.jpg)](https://i.stack.imgur.com/M86Ij.jpg) In `WHEN-NEW-FORM-INSTANCE` or `PRE-FORM` trigger's code, add the following: ``` SET_CANVAS_PROPERTY('CANVAS1', visual_attribute, 'myGrayBack'); ``` Upvotes: 2
2018/03/21
552
2,152
<issue_start>username_0: I am quite new to docker. I worked a `Dockefile` that copies my project sources and them builds the application. The building process is quite complex and should be done in the container's environment; not the host machine. So the Dockerfile look something like this: ``` FROM base-image RUN mkdir -p /proj/branches/www COPY ./proj/branches/mybranch/* /proj/branches/mybranch/ RUN chown root:www-data -R /proj COPY config-and-build.sh . RUN /config-and-build.sh EXPOSE 80 CMD apachectl -D FOREGROUND ``` The project source folder is big (~3 G). And so, building the image takes a while; copying the files and compiling some sources. Most of the time I will not change the sources of the compiled programs so this is a waste of time. I am trying to understand how volumes work to mount the sources. But my understanding is that the volume is defined at runtime, so the `config-and-build` script cannot be done. Is there a way to get this done?<issue_comment>username_1: A quick way to make the source code available is to use [volume bind-mounts](https://docs.docker.com/storage/bind-mounts/). This will allow you to give the container access to a directory that is on the host machine. You can achieve that via mounting the directory containing the source code using the `-v` argument when starting the container. ``` docker run -v : ... ``` The host directory and the container directory will be in sync, which will enable you to avoid rebuilding the image wherever there are new code changes. Upvotes: 0 <issue_comment>username_2: You can add your script `config-and-build.sh` to ENTRYPOINT & integrate your CMD `apachectl -D FOREGROUND` in that entrypoint script itself. Ref - <https://docs.docker.com/engine/reference/builder/#entrypoint> Once that's done you can use host bind volumes `-v` with `docker run` to mount your project directory directly inside the container which will remove the dependency of copying project using the Dockerfile. Ref - [Bind a directory to a docker container](https://stackoverflow.com/questions/30183978/bind-a-directory-to-a-docker-container) Upvotes: 2 [selected_answer]
2018/03/21
1,437
4,562
<issue_start>username_0: Help please finish the code. I need to count the number of .resp-containers in each container separately. And then so that you can scroll this number in the switch and distribute the corresponding class to each element in the containers <https://codepen.io/anon/pen/VXpZjP> ``` $(document).ready(function(){ var getLength = $(".resp-container").length var item = $(".resp-container"); switch (getLength) { case 1: item.addClass("full-resp"); break; case 2: item.addClass("half-resp"); break; case 3: item.addClass("third-resp"); break; case 4: item.addClass("fourth-resp"); break; default: item.addClass("fourth-resp"); } }); ``` HTML ``` ```<issue_comment>username_1: For that you could use CSS instead, way more efficient than running a script Stack snippet ```css .resp-container:first-child:last-child { width: calc(100% - 15px); } .resp-container:first-child:nth-last-child(2), .resp-container:first-child:nth-last-child(2) ~ .resp-container { width: calc(50% - 15px); } .resp-container:first-child:nth-last-child(3), .resp-container:first-child:nth-last-child(3) ~ .resp-container { width: calc(33.333% - 15px); } .resp-container:first-child:nth-last-child(4), .resp-container:first-child:nth-last-child(4) ~ .resp-container { width: calc(25% - 15px); } /* for this demo */ .resp-container { display: inline-block; height: 30px; background: red; margin: 5px; } ``` ```html ``` Upvotes: 2 <issue_comment>username_2: You can use `.each()` for this, check snippet below.... ```js $(document).ready(function(){ $(".container").each(function(){ var getLength = $(this).find('.resp-container').length; var item = $(this).find('.resp-container'); switch (getLength) { case 1: item.addClass("full-resp"); break; case 2: item.addClass("half-resp"); break; case 3: item.addClass("third-resp"); break; case 4: item.addClass("fourth-resp"); break; default: item.addClass("fourth-resp"); } }) }); ``` ```css .container { width: 100%; } .resp-container { background: blue; height: 50px; margin: 10px; display: inline-block; } /* Респонсив классы */ .full-resp { width: 100%; } .half-resp { width: 50%; } .third-resp { width: 33%; } .fourth-resp { width: 25%; } @media screen and (max-width: 780px){ .half-resp, .third-resp, .fourth-resp { width: 50%; } } @media screen and (max-width: 661px){ .half-resp, .third-resp, .fourth-resp { width: 100%; } } ``` ```html ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: I just learned a lot of jQuery answering your question, but here it is: Using `.each()`, `$(HTMLElement)` to make `this` back into a jQuery object and `.children()` to find the `resp-container` you want to count. 
```js $(document).ready(function() { var container = $(".container"); container.each(function() { var items = $(this).children(".resp-container"); switch (items.length) { case 1: items.addClass("full-resp"); break; case 2: items.addClass("half-resp"); break; case 3: items.addClass("third-resp"); break; case 4: items.addClass("fourth-resp"); break; default: items.addClass("fourth-resp"); } }); }); ``` ```html ``` Upvotes: 0 <issue_comment>username_4: Why not a CSS easy solution without the need of complicating with jQuery or extra class: ```css .container { display: flex; } .resp-container { height: 30px; background: red; margin: 5px; flex:1; } ``` ```html ``` Upvotes: 1 <issue_comment>username_5: Try vanilla JS, not jQuery: ``` const classes = ['full-resp', 'half-resp', 'third-resp', 'fourth-resp'] // array of classes that you want to add to the elements const containers = document.querySelectorAll('.container'); // nodelist of all .container divs containers.forEach(container => { // for each container... const respContainers = container.querySelectorAll('.resp-container'); // ... get nodelist of child .resp-container divs respContainers.forEach((rContainer, index) => { if (index < classes.length) return rContainer.classList.add(classes[index]); rContainer.classList.add(classes[classes.length - 1]); // if the index of the .resp-container div is larger than number of classes defined in classes array, add the last defined class to this element }) }) ``` Upvotes: 0
2018/03/21
753
1,818
<issue_start>username_0: I noticed that the base R `quantile` function does not support date arguments. I appreciate that defining quantiles for dates needs care in the definitions (i.e. if you have 6 dates and ask for the 25th percentile, you need to define the suitable rounding). Is there an efficient implementation of such a quantile function, either as a part of base or another package. The following sample function achieves essentially what I am interested in (with some tweaking to handle the case of the 0'th percentile), but I imagine that more efficient implementations are possible. ``` #Date quantile function. dquantile <- function(x, probs){ sx <- sort(x) pos <- round( probs * length(x) ) return( sx[pos] ) } # Example. dates <- as.Date("01/01/1900", "%d/%m/%Y") + floor( 36500 * runif(100000) ) dquantile(dates, c(0.001, 0.025, 0.975, 0.999) ) ```<issue_comment>username_1: If `x` is a vector of Dates and `probs` is a vector of probabilities: ``` # test input x <- as.Date("2018-03-21") + 0:10 probs <- 1:9/10 as.Date(quantile(unclass(x), probs), origin = "1970-01-01") ``` giving: ``` 10% 20% 30% 40% 50% 60% "2018-03-22" "2018-03-23" "2018-03-24" "2018-03-25" "2018-03-26" "2018-03-27" 70% 80% 90% "2018-03-28" "2018-03-29" "2018-03-30" ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: The `quantile` function does support dates, you just need to specify the `type` argument. Your problem can be solved with: ``` dates <- as.Date("01/01/1900", "%d/%m/%Y") + floor( 36500 * runif(100000) ) quantile(dates, probs = c(0.001, 0.025, 0.975, 0.999), type = 1) 0.1% 2.5% 97.5% 99.9% "1900-02-04" "1902-06-23" "1997-06-10" "1999-10-30" ``` Upvotes: 4
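For comparison only — the thread is R — pandas reaches the same result directly, since quantiles are defined for `datetime64` data; a small sketch with made-up dates:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.to_datetime("1900-01-01") + pd.to_timedelta(
    rng.integers(0, 36500, size=1000), unit="D")

s = pd.Series(dates)
print(s.quantile([0.001, 0.025, 0.975, 0.999]))   # returns Timestamps, no unclass/origin needed
```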
2018/03/21
747
1,796
<issue_start>username_0: I understand that when an APDU is issued through a smart card reader the result has a format like this: ``` [ [data], SW1, SW2 ] ``` I know that when you're issuing the APDU you can specify the size of the expected answer by using `Le` field but I'm wondering if there is any byte (or anything) within the data field that indicates its actual size. For example, say that I want to read the master file: First, I issue a `SELECT FILE` apdu: ``` 00 A4 00 00 ``` which would return `61 1b` for example, where `1b` is number of bytes to read using `GET RESPONSE`. Then I send the `GET RESPONSE` apdu using `Le` for the expected size of the answer: ``` 00 C0 00 00 1B ``` And this returns `[ [00, 01, 02, ...], 90, 00 ]`. What I would like to know is: **Is there a way of figuring out the size of the data field?**
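On the question itself: the response APDU has no dedicated length byte — the data field is simply everything before the two status bytes, so its length is `len(response) - 2`. What does carry lengths is the data content itself when it is BER-TLV encoded, which is typically the case for a SELECT FILE response (the FCI template): every TLV header states how many value bytes follow. A rough Python sketch of reading a BER-TLV length field, with a made-up response for illustration:

```python
def tlv_length(data, offset):
    """Return (value_length, bytes_used_by_length_field) for a BER-TLV length."""
    first = data[offset]
    if first < 0x80:                 # short form: the byte is the length itself
        return first, 1
    n = first & 0x7F                 # long form: 0x81/0x82 say how many length bytes follow
    return int.from_bytes(data[offset + 1:offset + 1 + n], "big"), 1 + n

resp = bytes.fromhex("6F078405A0000000039000")   # hypothetical FCI + SW1 SW2
sw1, sw2 = resp[-2], resp[-1]                    # 0x90 0x00
data = resp[:-2]                                 # data field length = len(resp) - 2 = 9
length, used = tlv_length(data, 1)               # value length of the 0x6F template
print(len(data), hex(data[0]), length)           # 9 0x6f 7
```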
2018/03/21
877
3,403
<issue_start>username_0: I looking for node module (or something else) that can parse in runtime parameters from my program into yaml files. for example in kubernetes yamls ``` metadata: name: $PROJECT_NAME labels: service: $SERVICE_NAME system: $SYSTEM_ID app_version: $PROJECT_VERSION tier: app ``` There is a nice way to build new yaml or change the exist one that contain all my parameters values?<issue_comment>username_1: YAML doesn't always need a template as it is structured data. As long as you don't need formatting/comments, objects can be read or dumped with [js-yaml](https://github.com/nodeca/js-yaml). ``` const yaml = require('js-yaml') const fs = require('fs') const kyaml = { metadata: { name: project_name, service: service_name, system: system_id, app_version: project_version, tier: 'app', } } fs.writeFile('new.yaml', yaml.safeDump(kyaml), 'utf8', err => { if (err) console.log(err) }) ``` Also you could possibly be doing things that [helm](https://docs.helm.sh/) can already do for you with [templates](https://docs.helm.sh/chart_template_guide/#the-chart-template-developer-s-guide). Upvotes: 1 <issue_comment>username_2: I decided to use [Handlebars module](https://www.npmjs.com/package/handlebars) just give to the function a template with the parameters that i want to parse and the function will create new file contain all my changes ``` const Handlebars = require('handlebars'); const source = fs.readFileSync(`${cwd}/${file}`).toString(); const template = Handlebars.compile(source); const contents = template({ PROJECT_NAME: `${name}`, PROJECT_VERSION: `${version}`, DOCKER_IMAGE: `${image}` }); fs.writeFileSync(`${cwd}/target/${file}`, contents); console.log(`${file} -- Finish parsing YAML.`); ``` and the JSON look like ``` spec: containers: - name: {{PROJECT_NAME}}:{{PROJECT_VERSION}} resources: limits: memory: "1Gi" cpu: "1" image: {{DOCKER_IMAGE}} ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: Any template engine should work but the templated values should be escaped appropriately if there is a chance the values will produce encoding errors. Because YAML is a superset of JSON, `JSON.stringify` can be safely used as a valid YAML escape function. Using Mustache templates, we ensure all templated values are escaped by setting the escape function: ```js const mustache = require('mustache') mustache.escape = JSON.stringify mustache.render(template, vars) ``` Using Handlebars we can produce valid YAML by disabling the default escaping and providing a "json" helper: ```js const handlebars = require('handlebars') const render = handlebars.compile(template, { noEscape: true }) render(vars, {helpers: { json: JSON.stringify }})) ``` The json helper needs to be used in the YAML template any time a value may require escaping: ``` metadata: name: {{json projectName}} labels: version: {{buildId}} tier: app ``` Without appropriate YAML escaping there could be encoding errors in the generated YAML document if values contain newlines or YAML special characters like "|". Some templating engines (eg Mustache and Handlebars) will escape values with HTML encoding by default which will also produce encoding errors if HTML special characters are present in values (eg quote which is escaped as "'"). Upvotes: 2
2018/03/21
1,296
4,004
<issue_start>username_0: I have a sqlite-query that gets the following data: ``` private GetAllPlayerAttributesFromDB() { IDbCommand dbcmd = dbconn.CreateCommand(); string sqlQueryPlayer = "SELECT player_id, ability_id, value FROM playerabilities"; dbcmd.CommandText = sqlQueryPlayer; IDataReader reader = dbcmd.ExecuteReader(); IDictionary dict = new Dictionary(); while (reader.Read()) { int player\_id = reader.GetInt32(0); int ability\_id = reader.GetInt32(1); int value= reader.GetInt32(2); // I have no idea how to save the data now dict.Add(ability\_id, value); } reader.Close(); reader = null; dbcmd.Dispose(); dbcmd = null; } ╔════════════╦════════════╦═══════╗ ║ player\_id ║ ability\_id ║ value ║ ╠════════════╬════════════╬═══════╣ ║ "16" ║ "0" ║ "56" ║ ║ "16" ║ "1" ║ "52" ║ ║ "16" ║ "2" ║ "62" ║ ║ "16" ║ "3" ║ "72" ║ ║ "16" ║ "4" ║ "64" ║ ║ "28" ║ "0" ║ "41" ║ ║ "28" ║ "1" ║ "49" ║ ║ "28" ║ "2" ║ "55" ║ ║ "28" ║ "3" ║ "60" ║ ║ "28" ║ "4" ║ "65" ║ ║ "41" ║ "0" ║ "72" ║ ║ "41" ║ "1" ║ "71" ║ ║ "41" ║ "2" ║ "79" ║ ║ "41" ║ "3" ║ "84" ║ ║ "41" ║ "4" ║ "52" ║ ╚════════════╩════════════╩═══════╝ ``` **How I want the use the data:** I want to get all abilities and values from one player by using his player\_id as key. For example if I need the data from player 16, then I want to get (0, 56), (1,52), (2, 62)... or an array with int[] values = [56, 52, 62, 72, 64] for each player. Now I want to save this data in a performant way. In PHP I would use an associative array. How could I do this in C#? (I have read some examples about dictionaries, but I am not able to use them for my problem. Maybe dictionaries are the wrong approach for this?)<issue_comment>username_1: Probably the easiest thing for the code you already have is to populate a `DataTable` which is an in-memory "table". It has rows and columns. ``` IDataReader reader = dbcmd.ExecuteReader(); DataTable dt = new DataTable(); dt.Load(reader); ``` > > Update > > > Once you have a data table loaded you can convert the data to instances of Players (for instance). Given these classes: ``` class PlayerDetails { public string AbilityID { get; set; } public string Value { get; set; } } class Player { public string PlayerID { get; set; } public List Details { get; set; } } ``` You can obtain a `List` each with a `Details` property containing a `List`: ``` var groupedByPlayer = dt.AsEnumerable().GroupBy(d => d["player_id"]).Select(b => new Player { PlayerID = b.Key.ToString(), Details = b.Select(bp => new PlayerDetails { AbilityID = bp[1].ToString(), Value = bp[2].ToString() }).ToList() }); ``` That probably looks pretty ugly if you're not familiar with Linq but if you understand SQL GROUP BY it is similar except you end up with instances of your `Player` class. Upvotes: 2 <issue_comment>username_2: You will need to define a data structure to hold your data. A simple one would be a class that represents one line ``` public class Player { public int Id {get; set;} public int Ability {get;set;} public int Value {get;set;} } ``` And add each row to a `List`. However, then you would have 5 rows for each player. Another option is a class for just Ability/Value ``` public class AbilityValue { public int Ability {get;set;} public int Value {get;set;} } ``` and store that in a `Dictionary>`. 
This needs some explaining: * The key is the player\_id * The value is a list of ability/value pairs that you read The way to use this: * read a row from the data * see if you already have that key in your dictionary * if so, add the new ability/value to that list * if not, add a new item with the player\_id as key. Don't forget to initialize that list If you know that "Ability" is always between 0 and 4, you could also use an array of size 5 as value of your dictionary. Then you use the ability as index into that array, to set the value. Upvotes: 2 [selected_answer]
2018/03/21
1,461
5,215
<issue_start>username_0: Hi Anyone knows how to change Pay button on checkout based on chosen payment method? I found something but I don't know if I could turn it into a snippet in function.php? Thank you. ``` public function __construct() { $this->id = 'ry_ecpay_atm'; $this->has_fields = false; $this->order_button_text = __('Pay via ATM', RY_WT::$textdomain); $this->method_title = __('ECPay ATM', RY_WT::$textdomain); $this->method_description = ''; ```<issue_comment>username_1: This can be done with the following code *(where you will set your payment gateway IDs and the corresponding desired button text)*: ``` add_filter('woocommerce_order_button_text', 'custom_order_button_text' ); function custom_order_button_text( $order_button_text ) { $default = __( 'Place order', 'woocommerce' ); // If needed // Get the chosen payment gateway (dynamically) $chosen_payment_method = WC()->session->get('chosen_payment_method'); // Set your payment gateways IDs in EACH "IF" statement if( $chosen_payment_method == 'bacs'){ // HERE set your custom button text $order_button_text = __( 'Bank wire payment', 'woocommerce' ); } elseif( $chosen_payment_method == 'ry_ecpay_atm'){ // HERE set your custom button text $order_button_text = __( 'Place order via ECPay', 'woocommerce' ); } // jQuery code: Make dynamic text button "on change" event ?> (function($){ $('form.checkout').on( 'change', 'input[name^="payment\_method"]', function() { var t = { updateTimer: !1, dirtyInput: !1, reset\_update\_checkout\_timer: function() { clearTimeout(t.updateTimer) }, trigger\_update\_checkout: function() { t.reset\_update\_checkout\_timer(), t.dirtyInput = !1, $(document.body).trigger("update\_checkout") } }; t.trigger\_update\_checkout(); }); })(jQuery); php return $order_button_text; } </code ``` *Code goes in function.php file of your active child theme (or theme).* Tested and works. Upvotes: 4 [selected_answer]<issue_comment>username_2: ``` add_filter('woocommerce_order_button_text', 'custom_order_button_text' ); function custom_order_button_text( $order_button_text ) { $default = __( 'Place order', 'woocommerce' ); // If needed // Get the chosen payment gateway (dynamically) $chosen_payment_method = WC()->session->get('chosen_payment_method'); ## --- For TESTING raw output on the chosen gateway ID --- ## // echo ' ``` ' . $chosen_payment_method . ' ``` '; // <=== uncomment for testing // Set your payment gateways IDs in EACH "IF" statement if( $chosen_payment_method == 'bacs'){ // HERE set your custom button text $order_button_text = __( 'Bank wire payment', 'woocommerce' ); } elseif( $chosen_payment_method == 'ecpay_shipping_pay'){ // HERE set your custom button text $order_button_text = __( 'Place order via Market', 'woocommerce' ); } elseif( $chosen_payment_method == 'ecpay'){ // HERE set your custom button text $order_button_text = __( 'Place order via ATM/Credit Card', 'woocommerce' ); } // jQuery code: Make dynamic text button "on change" event ?> (function($){ $('form.checkout').on( 'change', 'input[name^="payment\_method"]', function() { var t = { updateTimer: !1, dirtyInput: !1, reset\_update\_checkout\_timer: function() { clearTimeout(t.updateTimer) }, trigger\_update\_checkout: function() { t.reset\_update\_checkout\_timer(), t.dirtyInput = !1, $(document.body).trigger("update\_checkout") } }; t.trigger\_update\_checkout(); }); })(jQuery); php return $order_button_text; } </code ``` and this is the payment in that dropdown. 
``` 'ecpay_payment_methods' => array( 'title' => __( 'Payment Method', 'ecpay' ), 'type' => 'multiselect', 'description' => __( 'Press CTRL and the right button on the mouse to select multi payments.', 'ecpay' ), 'options' => array( 'Credit' => $this->get_payment_desc('Credit'), 'Credit_3' => $this->get_payment_desc('Credit_3'), 'Credit_6' => $this->get_payment_desc('Credit_6'), 'Credit_12' => $this->get_payment_desc('Credit_12'), 'Credit_18' => $this->get_payment_desc('Credit_18'), 'Credit_24' => $this->get_payment_desc('Credit_24'), 'WebATM' => $this->get_payment_desc('WebATM'), 'ATM' => $this->get_payment_desc('ATM'), 'CVS' => $this->get_payment_desc('CVS'), 'BARCODE' => $this->get_payment_desc('BARCODE'), 'ApplePay' => $this->get_payment_desc('ApplePay') ), ``` Upvotes: 1 <issue_comment>username_3: I think this is an easier solution: ``` add_filter('woocommerce_available_payment_gateways', 'change_barion_label'); function change_barion_label($gateways) { if($gateways['ry_ecpay_atm']) { $gateways['ry_ecpay_atm']->order_button_text = 'new label'; } return $gateways; } ``` WooCommerce runs this filter when loading the payment gateways, so it should work site-wide. Upvotes: 2
2018/03/21
331
1,068
<issue_start>username_0: I'm trying to show/hide a field when a checkbox is checked. How can I do this with jQuery? That's my code: ``` jQuery(function() { jQuery('input[type="checkbox"]').on('change', function() { jQuery(this).find('#billing\_eu\_vat\_number\_field').toggle(!this.checked); }); }); }); ``` [![Screenshot](https://i.stack.imgur.com/ou9PM.png)](https://i.stack.imgur.com/ou9PM.png)<issue_comment>username_1: You can do it like that: ```js jQuery(function() { jQuery('.input-checkbox').change(function() { jQuery('#billing_eu_vat_number').toggle(); }); }); ``` ```html ``` Upvotes: 1 <issue_comment>username_2: HTML ---- ``` ``` JS -- ``` $(document).ready(function(){ $("#billing_wcj_checkout_field_2").click(function(){ if($(this).is(":checked")){ $("#billing_eu_vat_number").show(); }else{ $("#billing_eu_vat_number").hide(); } }); }); ``` Working pen link: [CodePen demo](https://codepen.io/smitraval27/pen/YaZBVN) Upvotes: 1 [selected_answer]
2018/03/21
1,922
6,835
<issue_start>username_0: Hey I'm trying to get my ViewModel working, but no luck so far. Android Studio shows error `Cannot resolve symbol 'ViewModelProviders'`. Every other question I found on this topic was correcting `extends Activity` to `extends AppCompatActivity`, but I am extending the right one. Not sure what I'm missing... My code is based on [This YouTube video](https://youtu.be/c9-057jC1ZA) **MainActivity.java** ``` public class MainActivity extends AppCompatActivity implements TileAdapter.TileAdapterOnClickHandler { private BaseViewModel viewModel; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); //set Toolbar Toolbar myToolbar = findViewById(R.id.toolbar); setSupportActionBar(myToolbar); //initialize viewModel viewModel = ViewModelProviders.of(this).get(BaseViewModel.class); ``` **BaseViewModel.java** ``` public class BaseViewModel extends ViewModel { private Movie[] mMovie; public void init (Movie[] movies){ this.mMovie = movies; } public Movie[] getMovie() { return mMovie; } ```<issue_comment>username_1: I didn't have both dependencies in my build, hence the problem. ``` implementation "android.arch.lifecycle:extensions:1.1.0" implementation "android.arch.lifecycle:viewmodel:1.1.0" ``` Thanks @<NAME> Upvotes: 9 [selected_answer]<issue_comment>username_2: In the build.gradle file, add these lines in the dependencies block ``` dependencies { ... def lifecycle_version = "1.1.1" // ViewModel and LiveData implementation "android.arch.lifecycle:extensions:$lifecycle_version" //if not using java 8,use the following line annotationProcessor "android.arch.lifecycle:compiler:$lifecycle_version" //if using java 8,ignore above line and add the following line implementation "android.arch.lifecycle:common-java8:$lifecycle_version" ... } ``` [![Sample Image of build.gradle file](https://i.stack.imgur.com/qLPNV.png)](https://i.stack.imgur.com/qLPNV.png) Upvotes: 2 <issue_comment>username_3: If you are using compiled sdk version 28 or higher you only need to add single dependecy to get `ViewModel` and `LiveData` ``` dependencies { //... def lifecycle_version = "1.1.1" // ViewModel and LiveData implementation "android.arch.lifecycle:extensions:$lifecycle_version" } ``` Upvotes: 4 <issue_comment>username_4: If you are using `androidx` you need this: ``` implementation 'androidx.lifecycle:lifecycle-extensions:2.1.0' ``` Upvotes: 7 <issue_comment>username_5: you should add library in your project's build.gradle def lifecycle\_version = "2.0.0" ``` // ViewModel and LiveData implementation "androidx.lifecycle:lifecycle-extensions:$lifecycle_version" ``` Upvotes: 3 <issue_comment>username_6: Use `androix` libraries Change `implementation 'com.android.support:appcompat-v7:28.0.0'` to `implementation 'androidx.appcompat:appcompat:1.1.0-alpha03'` You can use ``` Refactor>Migrate to AndroidX ``` Upvotes: 2 <issue_comment>username_7: I solve this problem from [Android official documentation](https://developer.android.com/jetpack/androidx/releases/lifecycle). Add below to `build.grale` ``` def lifecycle_version = "2.0.0" // ViewModel and LiveData implementation "androidx.lifecycle:lifecycle-extensions:$lifecycle_version" ``` Upvotes: 3 <issue_comment>username_8: I had the same problem. None of the other solutions helped me. I realized that I was using `import androidx.lifecycle.ViewModelProvider;` instead of `import androidx.lifecycle.ViewModelProviders;`. 
So make sure you are using `import androidx.lifecycle.ViewModelProviders;`. That is `ViewModelProviders` with an `s`. Upvotes: 0 <issue_comment>username_9: Works fine in my app(java) before ```java loginViewModel = ViewModelProviders.of(this, new LoginViewModelFactory()) .get(LoginViewModel.class); ``` changes to ```java loginViewModel = new ViewModelProvider(this,new LoginViewModelFactory()).get(LoginViewModel.class); ``` hope it could help Upvotes: 2 <issue_comment>username_10: > > android.arch.lifecycle:extensions is deprecated use > > > ``` def lifecycle_version = "2.2.0" implementation "androidx.lifecycle:lifecycle-viewmodel:$lifecycle_version" implementation "androidx.lifecycle:lifecycle-livedata:$lifecycle_version" ``` Create **instance** of viewmodel like this: > > Java > > > ``` Yourclass obj = new ViewModelProvider(context).get(ClassViewModel.class); ``` > > Kotlin > > > ``` var obj = ViewModelProvider(context).get(ClassViewModel::class.java) ``` Upvotes: 5 <issue_comment>username_11: In My Case, I am facing below issue i.e: > > \*\* cannot access androidx.lifecycle.has a default viewmodelprovider factory which is a [![enter image description here](https://i.stack.imgur.com/y0XaX.png)](https://i.stack.imgur.com/y0XaX.png)subclass of your class Name, check your module classpath check your model conflicting dependencies \*\* > > > I added below dependencies in my **project build.gradle**. ``` def lifecycle_version = "2.2.0" implementation "androidx.lifecycle:lifecycle-viewmodel:$lifecycle_version" implementation "androidx.lifecycle:lifecycle-extensions:$lifecycle_version" ``` Then I create my class in a module project and I'm facing this issue, then I add these libraries in **module build.gradle** file and the issue is resolved. Upvotes: 2 <issue_comment>username_12: In implementation "androidx.lifecycle:lifecycle-extensions:2.2.0" and up ViewModelProviders is deprecated,use ``` viewModel = ViewModelProvider(this).get(BaseViewModel.class); ``` or in Kotlin ``` viewModel = ViewModelProvider(this).get(BaseViewModel::class.java); ``` instead of ``` viewModel = ViewModelProviders.of(this).get(BaseViewModel.class); ``` Upvotes: 2 <issue_comment>username_13: In my case (Android Studio 3.6.3 ~ 4.1.2), in `AppCompatActivity` to successfully do: ``` MyViewModel myViewModel = new ViewModelProvider(this).get(MyViewModel.class); ``` it requires both of these: ``` implementation 'androidx.appcompat:appcompat:1.1.0' implementation 'androidx.lifecycle:lifecycle-extensions:2.2.0' ``` (alone with `lifecycle-extentions` is not sufficient) Update ------ It seems that you have no longer to include the gradle implementation line (`'androidx.lifecycle:lifecycle-extensions:2.2.0'`) to instantiate `ViewModelProvider`. Upvotes: 3 <issue_comment>username_14: -'**ViewModelProviders**' is **deprecated** now * Now alternative > > * In java > > > `viewModel = ViewModelProvider(this).get(BaseViewModel.class);` > > * In kotlin > > > `var viewModel = ViewModelProvider(this).get(BaseViewModel::class.java)` Refs - <https://developer.android.com/reference/androidx/lifecycle/ViewModelProviders> Upvotes: 2
2018/03/21
1,229
3,829
<issue_start>username_0: I have created a couple of custom post type. To one of them i also want to add a custom field. It should just be a simple textfield where you can enter some text. Similar to the title field. How would you do that? I dont want to use a plugin. Current code (functions.php) ``` register_post_type( 'cases', array( 'labels' => array( 'name' => __( 'Cases' ), 'singular_name' => __( 'Case' ) ), 'publicly_queryable' => true, 'public' => true, 'has_archive' => true, 'rewrite' => array('slug' => 'cases'), 'supports' => array('title','editor','thumbnail') ) ); ```<issue_comment>username_1: You need to create custom meta box and add that field within metabox. **Create Metabox** ``` function add_your_fields_meta_box() { add_meta_box( 'your_fields_meta_box', // $id 'Your Fields', // $title 'show_your_fields_meta_box', // $callback 'your_post', // $screen 'normal', // $context 'high' // $priority ); } add_action( 'add_meta_boxes', 'add_your_fields_meta_box' ); ``` **Html part** ``` function show_your_fields_meta_box() { global $post; $meta = get_post_meta( $post->ID, 'your_fields', true ); ?> php } </code ``` **Save field in database** ``` function save_your_fields_meta( $post_id ) { // verify nonce if ( !wp_verify_nonce( $_POST['your_meta_box_nonce'], basename(__FILE__) ) ) { return $post_id; } // check autosave if ( defined( 'DOING_AUTOSAVE' ) && DOING_AUTOSAVE ) { return $post_id; } // check permissions if ( 'page' === $_POST['post_type'] ) { if ( !current_user_can( 'edit_page', $post_id ) ) { return $post_id; } elseif ( !current_user_can( 'edit_post', $post_id ) ) { return $post_id; } } $old = get_post_meta( $post_id, 'your_fields', true ); $new = $_POST['your_fields']; if ( $new && $new !== $old ) { update_post_meta( $post_id, 'your_fields', $new ); } elseif ( '' === $new && $old ) { delete_post_meta( $post_id, 'your_fields', $old ); } } add_action( 'save_post', 'save_your_fields_meta' ); ``` For more detail you can check here <https://www.taniarascia.com/wordpress-part-three-custom-fields-and-metaboxes/>, This is very nice link which will help you to create custom meta box and field step by step Upvotes: 3 <issue_comment>username_2: If you want to create custom fields, you have to add metabox. See my example that will help you. ``` function show_your_fields_meta_box() { global $post; $meta = get_post_meta( $post->ID, 'your_fields', true ); ?> // Just paste your input here as below. Input Text php } function save_your_fields_meta( $post_id ) { // verify nonce if ( !wp_verify_nonce( $_POST['your_meta_box_nonce'], basename(__FILE__) ) ) { return $post_id; } // check autosave if ( defined( 'DOING_AUTOSAVE' ) && DOING_AUTOSAVE ) { return $post_id; } // check permissions if ( 'page' === $_POST['post_type'] ) { if ( !current_user_can( 'edit_page', $post_id ) ) { return $post_id; } elseif ( !current_user_can( 'edit_post', $post_id ) ) { return $post_id; } } $old = get_post_meta( $post_id, 'your_fields', true ); $new = $_POST['your_fields']; if ( $new && $new !== $old ) { update_post_meta( $post_id, 'your_fields', $new ); } elseif ( '' === $new && $old ) { delete_post_meta( $post_id, 'your_fields', $old ); } } add_action( 'save_post', 'save_your_fields_meta' ); ? ``` Upvotes: 0 <issue_comment>username_3: I know that this is way late, but under your function, you need to add 'custom-fields' to your support: ``` 'supports' => array('title','editor','thumbnail', 'custom-fields' ``` Upvotes: 1
2018/03/21
2,364
7,469
<issue_start>username_0: I have this error: > > Execution failed for task ':processDebugResources'. > Error: more than one library with package name 'com.google.android.gms.license' > > > when i have run > > ionic cordova run android > > > this is info ionic: > > cli packages: (AppData\Roaming\npm\node\_modules) > > > ``` @ionic/cli-utils : 1.19.2 ionic (Ionic CLI) : 3.20.0 ``` global packages: ``` cordova (Cordova CLI) : 7.1.0 ``` local packages: ``` @ionic/app-scripts : 3.1.8 Cordova Platforms : android 6.3.0 Ionic Framework : ionic-angular 3.9.2 ``` System: ``` Android SDK Tools : 26.1.1 Node : v6.11.1 npm : 2.15.12 OS : Windows 7 ``` Environment Variables: ``` ANDROID_HOME : C:\Users\med\AppData\Local\Android\Sdk ``` Misc: ``` backend : pro ``` package.json: ``` { "name": "wetry", "version": "0.0.1", "author": "Ionic Framework", "homepage": "http://ionicframework.com/", "private": true, "scripts": { "clean": "ionic-app-scripts clean", "build": "ionic-app-scripts build", "lint": "ionic-app-scripts lint", "ionic:build": "ionic-app-scripts build", "ionic:serve": "ionic-app-scripts serve" }, "dependencies": { "@angular/common": "5.0.3", "@angular/compiler": "5.0.3", "@angular/compiler-cli": "5.0.3", "@angular/core": "5.0.3", "@angular/forms": "5.0.3", "@angular/http": "5.0.3", "@angular/platform-browser": "5.0.3", "@angular/platform-browser-dynamic": "5.0.3", "@ionic-native/base64": "^4.5.3", "@ionic-native/camera": "^4.5.3", "@ionic-native/core": "4.4.0", "@ionic-native/file": "^4.5.3", "@ionic-native/file-transfer": "^4.5.3", "@ionic-native/google-plus": "^4.6.0", "@ionic-native/image-picker": "^4.5.3", "@ionic-native/splash-screen": "4.4.0", "@ionic-native/status-bar": "4.4.0", "@ionic/pro": "1.0.20", "@ionic/storage": "2.1.3", "com-badrit-base64": "^0.2.0", "com.synconset.imagepicker": "~2.1.8", "cordova-plugin-analytics": "^1.4.3", "cordova-plugin-camera": "^2.4.1", "cordova-plugin-compat": "^1.2.0", "cordova-plugin-device": "^2.0.1", "cordova-plugin-file": "^5.0.0", "cordova-plugin-file-transfer": "^1.7.1", "cordova-plugin-googleplus": "^5.2.1", "cordova-plugin-inappbrowser": "^2.0.2", "cordova-plugin-ionic-keyboard": "^2.0.5", "cordova-plugin-ionic-webview": "^1.1.16", "cordova-plugin-splashscreen": "^5.0.2", "cordova-plugin-telerik-imagepicker": "^2.1.8", "cordova-plugin-whitelist": "^1.3.3", "cordova.plugins.diagnostic": "^4.0.3", "font-awesome": "^4.7.0", "ionic-angular": "3.9.2", "ionicons": "3.0.0", "ng2-cordova-oauth": "0.0.8", "rxjs": "5.5.2", "sw-toolbox": "3.6.0", "zone.js": "0.8.18", "cordova-android": "~6.3.0" }, "config": { "ionic_copy": "./config/copy.config.js" }, "devDependencies": { "@ionic/app-scripts": "3.1.8", "typescript": "2.4.2" }, "description": "An Ionic project", "cordova": { "plugins": { "cordova-plugin-whitelist": {}, "cordova-plugin-device": {}, "cordova-plugin-splashscreen": {}, "cordova-plugin-ionic-webview": {}, "cordova-plugin-ionic-keyboard": {}, "cordova-plugin-camera": {}, "cordova-plugin-file": {}, "cordova-plugin-file-transfer": {}, "cordova-plugin-compat": {}, "com.synconset.imagepicker": { "PHOTO_LIBRARY_USAGE_DESCRIPTION": "wetry" }, "cordova.plugins.diagnostic": {}, "com-badrit-base64": {}, "cordova-plugin-inappbrowser": {}, "cordova-plugin-googleplus": { "REVERSED_CLIENT_ID": "id " }, "cordova-plugin-analytics": {} }, "platforms": [ "android" ] } } ``` and this: ``` dependencies { compile fileTree(dir: 'libs', include: '*.jar') // SUB-PROJECT DEPENDENCIES START debugCompile(project(path: "CordovaLib", configuration: "debug")) 
releaseCompile(project(path: "CordovaLib", configuration: "release")) compile "com.android.support:appcompat-v7:23+" compile "com.google.android.gms:play-services-analytics:+" compile "com.android.support:support-v4:24.1.1+" compile "com.google.android.gms:play-services-auth:+" compile "com.google.android.gms:play-services-identity:+" compile "com.android.support:support-v4:26.+" compile "com.android.support:appcompat-v7:26.+" // SUB-PROJECT DEPENDENCIES END ``` }<issue_comment>username_1: It's a mess with Cordova plugins not specifying dependencies properly in their gradle build files (using different exact versions instead of +). I'm using this another plugin [cordova-android-play-services-gradle-release](http://cordova-android-play-services-gradle-release) which somehow fixes the issue for me (ya...irony it works :D). ``` ionic cordova plugin add cordova-android-play-services-gradle-release ionic cordova platform add android ionic cordova platform remove android ``` Above commands will add this plugin, after which you need to re-add android platform. Edit (other solution): You can instead try editing the project.properties file with the latest or compatible version (11.+) if the above solution doesn't work ``` target=android-26 android.library.reference.1=CordovaLib cordova.system.library.1=com.android.support:appcompat-v7:23+ cordova.gradle.include.1=com.synconset.imagepicker/myapp-ignorelinterrors.gradle cordova.gradle.include.2=com.synconset.imagepicker/myapp-androidtarget.gradle cordova.system.library.2=com.android.support:support-v4:24.1.1+ cordova.system.library.3=com.google.android.gms:play-services-auth:11.+ cordova.system.library.4=com.google.android.gms:play-services-identity:11.+ cordova.system.library.5=com.google.firebase:firebase-core:11.+ cordova.system.library.6=com.google.firebase:firebase-messaging:11.+ cordova.gradle.include.3=cordova-plugin-fcm/myapp-FCMPlugin.gradle ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: I was facing same issue from yesterday.. I have fixed this issue by adding below code in in build.gradle(Module:App(Platforms/android)) ``` Add config in android/build.gradle allprojects { repositories { ... configurations.all { resolutionStrategy { // Add force (11.4.0 is version you want to use) force 'com.google.firebase:firebase-messaging:11.4.0' force 'com.google.firebase:firebase-core:11.4.0' force 'com.google.android.gms:play-services-gcm:11.4.0' } } } } ``` Upvotes: 0 <issue_comment>username_3: i too had this problem , after long research i found out the solution open the file platform/android/project.properties and check the code before my code `cordova.system.library.2=com.google.android.gms:play-services-analytics:+` after my code `cordova.system.library.2=com.google.android.gms:play-services-analytics:11.+` it's working fine ,thanks you Upvotes: 1 <issue_comment>username_4: ``` ionic cordova platform remove android ionic cordova platform add [email protected] ``` Worked for me Upvotes: 0
2018/03/21
975
3,456
<issue_start>username_0: I have a form with a several different field types, all of which need to be complete before submission. I have the submit button disabled and would like to remove the disabled attribute once all the fields have values. I have examples from previous questions of this functionality working with [radios](https://codepen.io/abbasarezoo/pen/vdoMGX) and [checkboxes](https://codepen.io/abbasarezoo/pen/jZgQOV) and I've read a few answers which show how to achieve this using fields only: * [Disabling submit button until all fields have values](https://stackoverflow.com/questions/5614399/disabling-submit-button-until-all-fields-have-values) * [Disable submit button until all form inputs have data](https://stackoverflow.com/questions/35590673/disable-submit-button-until-all-form-inputs-have-data) But is there any way we can get check that all field types have values using jQuery? Including and fields? Here's a [Codepen](https://codepen.io/abbasarezoo/pen/QmpzXQ) of a basic version of my form and here's my HTML: ``` Text field Radio fields Radio 1 Radio 2 Checkbox fields Checkbox 1 Checkbox 2 Select options Option A Option B Text area ``` Is this possible?<issue_comment>username_1: you can do like this with jQuery: (in codepen replace all your javascript with this code, I have tested it in your codepen) ``` $(function() { $('.contact-form').on('input',':input',function() { var inputs = $('.contact-form :input'); var num\_inputs = inputs.length; var num\_filled = inputs.filter(function() { return !!this.value }).length; $('.contact-form :submit').prop('disabled',(num\_filled<num\_inputs)); }); }); ``` good luck! Upvotes: 0 <issue_comment>username_2: A few remarks: 1. In the next snippet, I assume ALL inputs (except radio buttons) have to be filled in before making the form submittable, which does not necessarily represent all real life scenarios. 2. As mentioned, radios (in this example only 2) are given a special treatment since only one per group can be selected. 3. Concerning s, one will be considered invalid if an empty option if selected (if one exists). In this example, your select tag always has a non-empty value, I added another to the markup to show its behavior. Let me know if it works ```js $(document).ready(function() { $(":input").on('input', function(){validate();}); $(":input").on('change', function(){validate();}); }); function validate() { var $submitButton = $("input[type=submit]"); var isValid = true; if($(':checkbox:checked').length != $(':checkbox').length) isValid = false; else if($(':radio:checked').length == 0) isValid = false; else{ $('input:text, textarea').each(function() { if($(this).val() == ''){ isValid = false; return false; } }); if(isValid){ $('select').each(function() { if($(this).val() == ''){ isValid = false; return false; } }); } } $submitButton.prop("disabled", !isValid); } ``` ```css .contact-form { width: 400px; margin: 50px; } .contact-form__row { padding: 0 0 10px; margin: 0 0 10px; border-bottom: 1px solid grey; } p { font-weight: bold; } ``` ```html Text field Radio fields Radio 1 Radio 2 Checkbox fields Checkbox 1 Checkbox 2 Select options Option A Option B Option A Option B Text area ``` Upvotes: 1
2018/03/21
857
3,037
<issue_start>username_0: i am using mongoengine to integrate with flask , i wanted to know how to get document object id every time i try i get `File "/var/www/flask_projects/iot_mongo/app.py", line 244, in post return jsonify(user.id) AttributeError: 'BaseQuerySet' object has no attribute 'id'` ``` class Test(Resource): def post(self): parser = reqparse.RequestParser() parser.add_argument('email',required=True, help='email') args=parser.parse_args() user=AdminUser.objects(email=args['email']) return jsonify(user.id) api.add_resource(Test,'/test') if __name__ == '__main__': app.run(debug=True) ```<issue_comment>username_1: Check out [the documentation](http://docs.mongoengine.org/apireference.html#), [`Document.objects`](http://docs.mongoengine.org/apireference.html#Document.objects) is a [`QuerySet`](http://docs.mongoengine.org/apireference.html#mongoengine.queryset.QuerySet) object. You seem to be expecting that this part of your code ``` user=AdminUser.objects(email=args['email']) # user will be a QuerySet ``` will give you a single result, which is not the case, it will give you a `QuerySet` with zero or more results. It does not have an attribute `id`, this is why you get the error message you are seeing when you try to access this attribute here: ``` return jsonify(user.id) # QuerySet does not have the attribute id ``` You need to fetch the actual result(s) you want from it, assuming you are sure your query will return a single result, or do not care that there might be more than one result and just want the first one, you probably want something along these lines: ``` user=AdminUser.objects(email=args['email']).first() # extract first result return jsonfiy(user) ``` Alernatively, returning all results would look like this: ``` users=AdminUser.objects(email=args['email']).all() # extract all results return jsonfiy(users) ``` Upvotes: 1 <issue_comment>username_2: I've been doing this. An example User model would be like~ ``` class User(Document): first_name = StringField(required=True, max_length=50) last_name = StringField(required=True, max_length=50) username = StringField(required=True) password = StringField(required=True, min_length=6) def to_json(self): return { "_id": str(self.pk), "first_name": self.first_name, "last_name": self.last_name, "username": self.username, "password": <PASSWORD> } ``` I convert the id to a string. I would then get a single object with~ ``` user = User.objects.get(pk=user_id) return user.to_json() ``` for a whole object, but if I just want the id I would do... ``` user.pk() ``` I created my own to\_json method that converts the primary key to a string because otherwise it would return "id": ObjectID("SomeID") instead of neatly printing "id": "SomeID". Hope this helps! If you want to find someone by email I suggest~ ``` User.objects.get(email=args['email']) ``` Upvotes: 2
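A compact version of the accepted approach with an explicit not-found check; `AdminUser` is the model from the question, and the 404 handling is an added assumption:

```python
from flask import jsonify, abort
from flask_restful import Resource, reqparse

class Test(Resource):
    def post(self):
        parser = reqparse.RequestParser()
        parser.add_argument('email', required=True, help='email')
        args = parser.parse_args()

        user = AdminUser.objects(email=args['email']).first()  # QuerySet -> one document or None
        if user is None:
            abort(404)                     # no admin with that email
        return jsonify(str(user.id))       # ObjectId is not JSON serializable, so stringify it
```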
2018/03/21
807
3,342
<issue_start>username_0: It is known that exact mathematical strategies such MILP are not efficient for large instances of the flexible job shop problem. However, still, nowadays it is possible to find proposals of MILP formulations for the FJS problem. It may be due to the fact that it is interesting to use the MILP model for experiments involving non-exact methods as metaheuristics (GA, FA, TS, etc) since it provides lower bounds. I also read that CP should be chosen when finding a feasible solution is more important than an optimal solution. Is that a true statement?<issue_comment>username_1: What you said is about right. For some types of problems it is hard to construct an efficient MILP model to solve them, and they are better off being solved by metaheuristics. However, if a LP can be constructed in a way as to provide a tight and non-trivial bound to a problem then it would be possible to verify if the solution of a good metaheuristic reaches optimality or near-optimality. This means that you can (potentially) solve some instances of some types of NP problems to optimality using only linear programming and metaheuristics. As for CP, it is very good at finding if a problem is feasible (or proving that it is infeasible). CP *can* be used to find optimal solutions, but it is not its strong suit and MILP usually does better in that regard. Upvotes: 2 <issue_comment>username_2: > > I also read that CP should be chosen when finding a feasible solution is more important than an optimal solution. This is true? > > > I think that this statement is becoming less and less true with the recent progress of CP. Especially for scheduling problems. For instance you mention the flexible job-shop scheduling problem. On this problem, generic CP techniques were used to improve and close many of the open instances of the classical benchmarks (both by finding better solutions and by finding tighter lower bounds). See for instance [1]. In this article, the same CP techniques are used to improve/close many other classical scheduling problems (in particular several variants of job-shop and RCPSP). And, yes, CP can provide some lower bounds. If you add the constraint “objective < K” and the search proves this problem is infeasible, then K is a lower bound. It is also to be noted that some modern CP solvers use linear relaxations to guide the search and provide some lower bounds. You can also have a look at [2] for a comparison of the performance of several MIP models and a CP model for the massively studied job-shop scheduling problem. And if you are interested in a more complete view of how different CP techniques can be integrated in a generic CP-based optimization engine, there is also this very recent article [3] (<http://ibm.biz/Constraints2018>). [1] <NAME>, <NAME>, <NAME>. “Failure-directed Search for Constraint-based Scheduling”. Proc. 12th International Conference on the Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, CPAIOR 2015 [2] <NAME>, <NAME>. “Mixed Integer Programming Models for Job Shop Scheduling: a Computational Analysis”. Computers & Operations Research. 2016. [3] <NAME>, <NAME>, <NAME>, <NAME> . “IBM ILOG CP Optimizer for Scheduling”. Constraints journal, 2018 Upvotes: 4 [selected_answer]
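One concrete reason MILP struggles on (flexible) job-shop instances is the disjunctive machine constraint, normally linearised with a big-M pair for every pair of operations that can share a machine — a formulation whose LP relaxation is notoriously weak, which is exactly where CP's interval variables and specialised propagation pay off. A generic sketch of that constraint pair (s = start times, p = processing times, y a binary sequencing variable, M a large constant):

```latex
s_i + p_i \le s_j + M\,(1 - y_{ij})   % operation i precedes j, or ...
s_j + p_j \le s_i + M\, y_{ij}        % ... j precedes i
y_{ij} \in \{0, 1\}
```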
2018/03/21
1,278
3,749
<issue_start>username_0: I'm trying to build a page where people can select the colour and capacity of a product. Only one colour/capacity can be active at a given time and when one has been selected a border needs to appear around it. You can see in my attempt below the problem of the other elements being displaced when the border is applied. [![border displacing items](https://i.stack.imgur.com/tsJqY.png)](https://i.stack.imgur.com/tsJqY.png) I thought about giving all the elements `border: 2px solid rgba(0,0,0,0)` to resolve the displacing issue then using javascript to change the border properties on click, but I think this is probably not an elegant solution. Here is the HTML... ``` * * * ``` and the CSS... ``` ul li { width: 47px; height: 47px; border-radius: 12px; border: 2px solid rgba(0,0,0,0); padding: 3px; } .select-colour { width: 45px; height: 45px; border-radius: 8px; border: 1px solid #c2bebb; -webkit-box-shadow: inset 0px -99px 46px -86px rgba(0,0,0,0.42); -moz-box-shadow: inset 0px -99px 46px -86px rgba(0,0,0,0.42); box-shadow: inset 0px -99px 46px -86px rgba(0,0,0,0.42); } #gold { background-color: #f5e7dc; } #silver { background-color: #e2e2e2; } #space-grey { background-color: #232323; } ``` Does anyone have any ideas about the best way to approach this? Thanks.<issue_comment>username_1: Add `box-sizing: border-box` to the `ul li` selector like this: ``` ul li { width: 47px; height: 47px; border-radius: 12px; border: 2px solid rgba(0,0,0,0); padding: 3px; box-sizing: border-box; } ``` Box sizing: border-box makes the element total width/height include the border and radius. Edit: Two links for documentation: <https://developer.mozilla.org/en-US/docs/Web/CSS/box-sizing> <https://www.w3schools.com/cssref/css3_pr_box-sizing.asp> Upvotes: 3 [selected_answer]<issue_comment>username_2: use box-sizing: border-box; Here is the link to MDN where you can find more details <https://developer.mozilla.org/en-US/docs/Web/CSS/box-sizing> Upvotes: 0 <issue_comment>username_3: Simply keep the border-color: transparent if the div is not selected, and change the color of the border on selection. By the way, you may add a transition to it as well. Upvotes: 0 <issue_comment>username_4: ``` input[type="checkbox"]:checked ~ #hello{ box-shadow: 0 0 0 3px hotpink; } #hello{ width: 50px; margin: 20px; } ``` ``` Hello ``` This is what you want a hidden check box Upvotes: 0 <issue_comment>username_5: Here's a CSS only solution using radio buttons and labels. Hope this helps. ```css * { box-sizing: border-box; } .container { position: relative; padding-top: 30px; } label { color: transparent; position: absolute; top: 0; left: 0; } input:checked+label { color: black; } input[type="radio"] { display: none; } #gold+label:after { content: ""; width: 2rem; height: 2rem; background: gold; position: absolute; border-radius: 5px; top: 30px; left: 0; } #gold:checked+label:after, #silver:checked+label:after, #bronze:checked+label:after { border: 2px solid red; } #silver+label:after { content: ""; width: 2rem; height: 2rem; background: silver; position: absolute; border-radius: 5px; top: 30px; left: 40px; } #bronze+label:after { content: ""; width: 2rem; height: 2rem; background: sandybrown; position: absolute; border-radius: 5px; top: 30px; left: 80px; } ``` ```html Gold Silver Bronze ``` Upvotes: 1
2018/03/21
903
2,824
<issue_start>username_0: I have a vector of strings and want to add a + before each word in each string. ``` strings <- c('string one', 'string two', 'string three') strings_new <- str_replace_all(strings, "\\b\\w", '+') string_new ``` Unfortunately, this is replacing the first character, not adding the + symbol. I'm not too familiar with regex to know how to solve this. Any help would be great. Thanks<issue_comment>username_1: Using captured groups is one way of doing this. Group with parenthesis and recall with `\\1`. ``` strings_new <- str_replace_all(strings, "(\\b\\w)", '+\\1') strings_new [1] "+string +one" "+string +two" "+string +three" ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You may use a **base R** solution using PCRE regex `[[:<:]]` that matches the *starting word boundary*, a location between a non-word and a word char: ``` strings <- c('string one', 'string two', 'string three') gsub("[[:<:]]", "+", strings, perl=TRUE) # => [1] "+string +one" "+string +two" "+string +three" ``` Or, you may use a `(\w+)` (that matches and captures into Group 1 any one or more word chars, i.e. letters, digits, or `_`) TRE regex to replace with a `+` and a replacement backreference `\1` to restore the consumed chars in the output: ``` gsub("(\\w+)", '+\\1', strings) # => [1] "+string +one" "+string +two" "+string +three" ``` Note you do not need a word boundary here since the first word char matched will be already at the word boundary and the consequent word chars will be consumed due to `+` quantifier. See the [regex demo](https://regex101.com/r/dRdsAp/1). And with an ICU regex based `str_replace_all`, you may use ``` > str_replace_all(strings, "\\w+", '+\\0') [1] "+string +one" "+string +two" "+string +three" ``` The `\\0` is a replacement backreference to the whole match. Upvotes: 2 <issue_comment>username_3: Another alternative would be to use `strsplit()` in combination with `paste0()`: ``` res <- lapply(strsplit(strings, " "), function(x) paste0("+", x)) sapply(res, paste0, collapse = " ") # [1] "+string +one" "+string +two" "+string +three" ``` For some people the advantage may be that you don't have to wrestle with a regular expression. However, I would always prefer the direct regex statements by Jasbner and Wictor Upvotes: 0 <issue_comment>username_4: You can do this without capture groups as well (as others have shown) by using the regex `\b(?=\w)` with `perl=T` as shown below. [See code in use here](https://tio.run/##K/r/v7ikKDMvvVjBRlchWUMdwlPIz0tV11GA8UrK85F5GUWpqeqaXOnFpUkaSjExSRr2tjEx5ZpKOgpK2kACaqCOQkFqUY5tiOb//wA) ``` strings <- c('string one', 'string two', 'string three') gsub("\\b(?=\\w)", "+", strings, perl=T) ``` Result ``` [1] "+string +one" "+string +two" "+string +three" ``` Upvotes: 1
2018/03/21
3,868
18,953
<issue_start>username_0: In my case, I have Java 1.6 and want to connect to a remote server which only supports TLS1.2. Server URL is: <https://blagajne-test.fu.gov.si:9002> and certificate public key is here: <http://datoteke.durs.gov.si/dpr/files/test-tls.cer> I have no possibility to upgrade Java because is a part of Oracle Database 11g (11.4). I tried to write a simple program in Java which uses BouncyCastel libraries but got error: Exception in thread "main" ``` org.bouncycastle.crypto.tls.TlsFatalAlertReceived: handshake_failure(40) at org.bouncycastle.crypto.tls.TlsProtocol.handleAlertMessage(Unknown Source) ``` The step I have followed was: 1.) have downloaded test-tls.cer and imported the key into jssacerts and cacerts. [![enter image description here](https://i.stack.imgur.com/hS8KH.jpg)](https://i.stack.imgur.com/hS8KH.jpg) 2.) In Java have done this example: ``` import java.io.IOException; import java.io.BufferedReader; import java.io.InputStream; import java.io.InputStreamReader; import java.net.Socket; import java.net.URL; import java.security.MessageDigest; import java.security.Security; import java.security.Signature; import javax.net.ssl.HttpsURLConnection; import org.bouncycastle.crypto.tls.CertificateRequest; import org.bouncycastle.crypto.tls.DefaultTlsClient; import org.bouncycastle.crypto.tls.TlsAuthentication; import org.bouncycastle.crypto.tls.TlsClientProtocol; import org.bouncycastle.crypto.tls.TlsCredentials; import org.bouncycastle.jce.provider.BouncyCastleProvider; public class Test3 { public static Signature podpis = null; public static MessageDigest md = null; static { try { Security.addProvider(new BouncyCastleProvider()); } catch (Exception ex) { ex.printStackTrace(); } } public static void main(String[] args) throws Exception { String httpsURL = "https://blagajne-test.fu.gov.si:9002"; URL myurl = new URL(httpsURL); HttpsURLConnection con = (HttpsURLConnection )myurl.openConnection(); con.setSSLSocketFactory(new TLSSocketConnectionFactory()); InputStream ins = con.getInputStream(); } } ``` and ``` import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.DataOutputStream; import java.io.File; import java.io.FileInputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.net.InetAddress; import java.net.InetSocketAddress; import java.net.Socket; import java.net.UnknownHostException; import java.security.KeyStore; import java.security.Principal; import java.security.SecureRandom; import java.security.Security; import java.security.cert.CertificateException; import java.security.cert.CertificateExpiredException; import java.security.cert.CertificateFactory; import java.util.Hashtable; import java.util.LinkedList; import java.util.List; import javax.net.ssl.HandshakeCompletedEvent; import javax.net.ssl.HandshakeCompletedListener; import javax.net.ssl.SSLPeerUnverifiedException; import javax.net.ssl.SSLSession; import javax.net.ssl.SSLSessionContext; import javax.net.ssl.SSLSocket; import javax.net.ssl.SSLSocketFactory; import javax.security.cert.X509Certificate; import org.apache.commons.logging.Log; import org.bouncycastle.crypto.tls.Certificate; import org.bouncycastle.crypto.tls.CertificateRequest; import org.bouncycastle.crypto.tls.DefaultTlsClient; import org.bouncycastle.crypto.tls.ExtensionType; import org.bouncycastle.crypto.tls.TlsAuthentication; import org.bouncycastle.crypto.tls.TlsClientProtocol; import org.bouncycastle.crypto.tls.TlsCredentials; import 
org.bouncycastle.jce.provider.BouncyCastleProvider; public class TLSSocketConnectionFactory extends SSLSocketFactory { ////////////////////////////////////////////////////////////////////////////////////////////////////////////// //Adding Custom BouncyCastleProvider /////////////////////////////////////////////////////////////////////////////////////////////////////////////// static { if (Security.getProvider(BouncyCastleProvider.PROVIDER_NAME) == null) { Security.addProvider(new BouncyCastleProvider()); } } ////////////////////////////////////////////////////////////////////////////////////////////////////////////// //SECURE RANDOM ////////////////////////////////////////////////////////////////////////////////////////////////////////////// private SecureRandom _secureRandom = new SecureRandom(); ////////////////////////////////////////////////////////////////////////////////////////////////////////////// //Adding Custom BouncyCastleProvider /////////////////////////////////////////////////////////////////////////////////////////////////////////////// @Override public Socket createSocket(Socket socket, final String host, int port, boolean arg3) throws IOException { if (socket == null) { socket = new Socket(); } if (!socket.isConnected()) { socket.connect(new InetSocketAddress(host, port)); } final TlsClientProtocol tlsClientProtocol = new TlsClientProtocol(socket.getInputStream(), socket.getOutputStream(), _secureRandom); return _createSSLSocket(host, tlsClientProtocol); } ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// // SOCKET FACTORY METHODS ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// @Override public String[] getDefaultCipherSuites() { return null; } @Override public String[] getSupportedCipherSuites() { return null; } @Override public Socket createSocket(String host, int port) throws IOException, UnknownHostException { return null; } @Override public Socket createSocket(InetAddress host, int port) throws IOException { return null; } @Override public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException, UnknownHostException { return null; } @Override public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException { return null; } ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// //SOCKET CREATION ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// private SSLSocket _createSSLSocket(final String host, final TlsClientProtocol tlsClientProtocol) { return new SSLSocket() { private java.security.cert.Certificate[] peertCerts; @Override public InputStream getInputStream() throws IOException { return tlsClientProtocol.getInputStream(); } @Override public OutputStream getOutputStream() throws IOException { return tlsClientProtocol.getOutputStream(); } @Override public synchronized void close() throws IOException { tlsClientProtocol.close(); } @Override public void addHandshakeCompletedListener(HandshakeCompletedListener arg0) { } @Override public boolean getEnableSessionCreation() { return false; } @Override public String[] getEnabledCipherSuites() { return null; } @Override public String[] 
getEnabledProtocols() { return null; } @Override public boolean getNeedClientAuth() { return false; } @Override public SSLSession getSession() { return new SSLSession() { @Override public int getApplicationBufferSize() { return 0; } @Override public String getCipherSuite() { throw new UnsupportedOperationException(); } @Override public long getCreationTime() { throw new UnsupportedOperationException(); } @Override public byte[] getId() { throw new UnsupportedOperationException(); } @Override public long getLastAccessedTime() { throw new UnsupportedOperationException(); } @Override public java.security.cert.Certificate[] getLocalCertificates() { throw new UnsupportedOperationException(); } @Override public Principal getLocalPrincipal() { throw new UnsupportedOperationException(); } @Override public int getPacketBufferSize() { throw new UnsupportedOperationException(); } @Override public X509Certificate[] getPeerCertificateChain() throws SSLPeerUnverifiedException { return null; } @Override public java.security.cert.Certificate[] getPeerCertificates() throws SSLPeerUnverifiedException { return peertCerts; } @Override public String getPeerHost() { throw new UnsupportedOperationException(); } @Override public int getPeerPort() { return 0; } @Override public Principal getPeerPrincipal() throws SSLPeerUnverifiedException { return null; //throw new UnsupportedOperationException(); } @Override public String getProtocol() { throw new UnsupportedOperationException(); } @Override public SSLSessionContext getSessionContext() { throw new UnsupportedOperationException(); } @Override public Object getValue(String arg0) { throw new UnsupportedOperationException(); } @Override public String[] getValueNames() { throw new UnsupportedOperationException(); } @Override public void invalidate() { throw new UnsupportedOperationException(); } @Override public boolean isValid() { throw new UnsupportedOperationException(); } @Override public void putValue(String arg0, Object arg1) { throw new UnsupportedOperationException(); } @Override public void removeValue(String arg0) { throw new UnsupportedOperationException(); } }; } @Override public String[] getSupportedProtocols() { return null; } @Override public boolean getUseClientMode() { return false; } @Override public boolean getWantClientAuth() { return false; } @Override public void removeHandshakeCompletedListener(HandshakeCompletedListener arg0) { } @Override public void setEnableSessionCreation(boolean arg0) { } @Override public void setEnabledCipherSuites(String[] arg0) { } @Override public void setEnabledProtocols(String[] arg0) { } @Override public void setNeedClientAuth(boolean arg0) { } @Override public void setUseClientMode(boolean arg0) { } @Override public void setWantClientAuth(boolean arg0) { } @Override public String[] getSupportedCipherSuites() { return null; } @Override public void startHandshake() throws IOException { tlsClientProtocol.connect(new DefaultTlsClient() { @SuppressWarnings("unchecked") @Override public Hashtable getClientExtensions() throws IOException { Hashtable clientExtensions = super.getClientExtensions(); if (clientExtensions == null) { clientExtensions = new Hashtable(); } //Add host\_name byte[] host\_name = host.getBytes(); final ByteArrayOutputStream baos = new ByteArrayOutputStream(); final DataOutputStream dos = new DataOutputStream(baos); dos.writeShort(host\_name.length + 3); dos.writeByte(0); // dos.writeShort(host\_name.length); dos.write(host\_name); dos.close(); clientExtensions.put(ExtensionType.server\_name, 
baos.toByteArray()); return clientExtensions; } @Override public TlsAuthentication getAuthentication() throws IOException { return new TlsAuthentication() { @Override public void notifyServerCertificate(Certificate serverCertificate) throws IOException { try { KeyStore ks = \_loadKeyStore(); CertificateFactory cf = CertificateFactory.getInstance("X.509"); List certs = new LinkedList(); boolean trustedCertificate = false; for (org.bouncycastle.asn1.x509.Certificate c : serverCertificate.getCertificateList()) { java.security.cert.Certificate cert = cf.generateCertificate(new ByteArrayInputStream(c.getEncoded())); certs.add(cert); String alias = ks.getCertificateAlias(cert); if (alias != null) { if (cert instanceof java.security.cert.X509Certificate) { try { ((java.security.cert.X509Certificate) cert).checkValidity(); trustedCertificate = true; } catch (CertificateExpiredException cee) { cee.printStackTrace(); } } } else { System.out.println("-->"); } } if (!trustedCertificate) { throw new CertificateException("Unknown cert " + serverCertificate); } peertCerts = certs.toArray(new java.security.cert.Certificate[0]); } catch (Exception ex) { ex.printStackTrace(); throw new IOException(ex); } } @Override public TlsCredentials getClientCredentials(CertificateRequest arg0) throws IOException { return null; } /\*\* \* Private method to load keyStore with system or \* default properties. \* \* @return \* @throws Exception \*/ private KeyStore \_loadKeyStore() throws Exception { FileInputStream trustStoreFis = null; try { String sysTrustStore = null; File trustStoreFile = null; KeyStore localKeyStore = null; sysTrustStore = System.getProperty("javax.net.ssl.trustStore"); String javaHome; if (!"NONE".equals(sysTrustStore)) { if (sysTrustStore != null) { trustStoreFile = new File(sysTrustStore); trustStoreFis = \_getFileInputStream(trustStoreFile); } else { javaHome = System.getProperty("java.home"); trustStoreFile = new File(javaHome + File.separator + "lib" + File.separator + "security" + File.separator + "jssecacerts"); if ((trustStoreFis = \_getFileInputStream(trustStoreFile)) == null) { trustStoreFile = new File(javaHome + File.separator + "lib" + File.separator + "security" + File.separator + "cacerts"); trustStoreFis = \_getFileInputStream(trustStoreFile); } } if (trustStoreFis != null) { sysTrustStore = trustStoreFile.getPath(); } else { sysTrustStore = "No File Available, using empty keystore."; } } System.out.println("sysTrustStore: " +sysTrustStore); String trustStoreType = System.getProperty("javax.net.ssl.trustStoreType") != null ? System.getProperty("javax.net.ssl.trustStoreType") : KeyStore.getDefaultType(); String trustStoreProvider = System.getProperty("javax.net.ssl.trustStoreProvider") != null ? System.getProperty("javax.net.ssl.trustStoreProvider") : ""; if (trustStoreType.length() != 0) { if (trustStoreProvider.length() == 0) { localKeyStore = KeyStore.getInstance(trustStoreType); } else { localKeyStore = KeyStore.getInstance(trustStoreType, trustStoreProvider); } char[] keyStorePass = null; String str5 = System.getProperty("javax.net.ssl.trustStorePassword") != null ? 
System.getProperty("javax.net.ssl.trustStorePassword") : ""; if (str5.length() != 0) { keyStorePass = str5.toCharArray(); } localKeyStore.load(trustStoreFis, (char[]) keyStorePass); if (keyStorePass != null) { for (int i = 0; i < keyStorePass.length; i++) { keyStorePass[i] = 0; } } } return (KeyStore) localKeyStore; } finally { if (trustStoreFis != null) { trustStoreFis.close(); } } } private FileInputStream \_getFileInputStream(File paramFile) throws Exception { if (paramFile.exists()) { return new FileInputStream(paramFile); } return null; } }; } }); } };//Socket } } ``` When I execute the main program I got: [![enter image description here](https://i.stack.imgur.com/hY4iP.jpg)](https://i.stack.imgur.com/hY4iP.jpg) What I'm doing wrong or why I get back that exception?<issue_comment>username_1: Please switch to Version 1.55 this will fix the issue... Upvotes: -1 <issue_comment>username_2: 1. Download jce\_policy-6.zip from oracle website 2. unzip the jce\_policy-6.zip you will have two jars local\_policy.jar and US\_export\_policy.jar. 3. Install the jars based on the README file provided in the zip - two jars should be copied to `%JAVA_HOME%/lib/security` 4. Edit the file `%JAVA_HOME%/jre/lib/security/java.security` and locate the security-provider list add this line - `security.provider.10=org.bouncycastle.jce.provider.BouncyCastleProvider` 5. Use the following code to get the SSLContext. ``` Provider provider = new BouncyCastleJsseProvider(); Security.addProvider(provider); SSLContext ctx = SSLContext.getInstance("TLS",provider.getName()); ``` Upvotes: 0
2018/03/21
470
1,634
<issue_start>username_0: When using the very popular swiper.js, it normally works as expected. However, currently loop = true is not working because we have slidesPerView and slidesPerColumn enabled.

Currently have:

```
var mySwiper = new Swiper ('#my-swiper', {
    slidesPerView: 3,
    slidesPerColumn: 2,
    spaceBetween: 30,
    speed: 2000,
    loop: true,
    autoplay: {
      delay: 1000,
      disableOnInteraction: false,
    },
```

Several others have run into a similar issue but with no clear solution. One noted they added the following to help resolve the issue:

```
setTimeout(function(){
    mySwiper.update(true);
    mySwiper.slideTo(0, 0)
}, 100);
```

I tried adding it after the above code block but then there is no motion at all. If I add it inside the above code block then it shows one large thumbnail per slide vs 6.

Any thoughts?
2018/03/21
744
2,079
<issue_start>username_0: I have a txt file with a number of dates, each signifying an event:

```
15MAR18 103000
15MAR18 120518
17MAR18 121203
17MAR18 134443
17MAR18 151733
19MAR18 165013
19MAR18 182253
19MAR18 195533
```

I am trying to get a running tally of how many 'events' occur within a 24 hour time period. I can read the file and parse into datetime objects ok:

```
for line in range(0, len(event_list)):
    eventTime = event_list[line][:14]
    eventTime = datetime.strptime(eventTime, '%d%b%y %H%M%S')
    eventTime_next = event_list[line+1][:14]
    eventTime_next = datetime.strptime(eventTime_next, '%d%b%y %H%M%S')
```

I don't know how to go about the next step. I tried to compare the line ahead with the previous one, but I don't think that's the way to go about it. I need it so the following happens:

```
15MAR18 103000 1
15MAR18 120518 2
17MAR18 121203 1
17MAR18 134443 2
17MAR18 151733 3
19MAR18 165013 1
19MAR18 182253 2
19MAR18 195533 3
```

I.e. the count will go back to 1 when 24 hours have elapsed since the first comparison... and then start again with the new start reference. I hope this makes sense? Or is this something for the pandas library?
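For reference, a minimal sketch of the running tally described above, assuming the lines look exactly like the sample (first 14 characters are the timestamp) and an English locale for the `%b` month abbreviation:

```python
from datetime import datetime, timedelta

event_list = ['15MAR18 103000', '15MAR18 120518',
              '17MAR18 121203', '17MAR18 134443', '17MAR18 151733',
              '19MAR18 165013', '19MAR18 182253', '19MAR18 195533']

window_start = None
tally = 0
for line in event_list:
    event_time = datetime.strptime(line[:14], '%d%b%y %H%M%S')
    # Start a new 24-hour window once the current reference is a day old.
    if window_start is None or event_time - window_start >= timedelta(hours=24):
        window_start = event_time
        tally = 1
    else:
        tally += 1
    print(line, tally)
```

Run against the sample data this prints 1, 2 for 15MAR18, then restarts at 1 for 17MAR18 and again for 19MAR18, matching the desired output.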
2018/03/21
324
1,074
<issue_start>username_0: I have been scratching my head with this one and wonder if anybody would be kind enough to give me a pointer. I am extracting some data as a variable from JSON to PHP, and I can do this with no problem when there are nested nodes - IF the node is a text but not if the node is a number. I am using json\_decode. THIS IS NOT THE SAME AS How do I extract data from JSON with PHP?

This is OK:

```
$get_temp = $jsonobj->main->temp;
```

This is not working:

```
$get_weather = $jsonobj->main->0->weather;
```

So my question is: how do I target the node when it is a number? Thanks<issue_comment>username_1: Probably you have an array in the main node, so you can get its value with an index like this:

```
$get_weather = $jsonobj->main[0]->weather;
```

Where `0` is the index that you want to get

Upvotes: 2 <issue_comment>username_2: ```
$get_weather = $jsonobj->main[$x]->weather;
```

**$x** would be the index

Upvotes: 2 <issue_comment>username_3: This should work:

```
$get_weather = $jsonobj->main[0]->weather;
```

Upvotes: 2 [selected_answer]
2018/03/21
1,269
4,703
<issue_start>username_0: I'm trying to use the express session middleware in a separate route handler but the compiler complains that the property 'session' does not exist on type 'Request'. app.ts ``` import debug = require('debug'); import express = require('express'); import path = require('path'); import db = require('diskdb'); import bodyParser = require('body-parser'); import session = require('express-session'); import fileStore = require('session-file-store'); import routes from './routes/index/index'; import users from './routes/user'; import register from './routes/users/register'; import login from './routes/users/login'; var app = express(); //Setup sesssion middleware var sessionFileStore = fileStore(session); app.use(session({ name: 'server-session-cookie-id', secret: 'my express secret', saveUninitialized: true, resave: true, store: new sessionFileStore() })); //Here we are configuring express to use body-parser as middle-ware. app.use(bodyParser.urlencoded({ extended: false })); app.use(bodyParser.json()); // view engine setup app.set('views', path.join(__dirname, 'views')); app.set('view engine', 'pug'); app.use(express.static(path.join(__dirname, 'public'))); app.use('/', routes); app.use('/users', users); app.use('/register', session, register); app.use('/login', session, login); // catch 404 and forward to error handler app.use(function (req, res, next) { var err = new Error('Not Found'); err['status'] = 404; next(err); }); // error handlers // development error handler // will print stacktrace if (app.get('env') === 'development') { app.use((err: any, req, res, next) => { res.status(err['status'] || 500); res.render('error', { message: err.message, error: err }); }); } // production error handler // no stacktraces leaked to user app.use((err: any, req, res, next) => { res.status(err.status || 500); res.render('error', { message: err.message, error: {} }); }); app.set('port', process.env.PORT || 3000); var server = app.listen(app.get('port'), function () { debug('Express server listening on port ' + server.address().port); }); ``` **login.ts** ``` import express = require('express'); import path = require('path'); import bcrypt = require('bcrypt'); const router = express.Router(); router.get('/', (req: express.Request, res: express.Response) => { res.render(path.join(__dirname, 'login'), { message: 'display login form', username: '' }); }); router.post('/', (req: express.Request, res: express.Response) => { var un = req.body.username; var pw = req.body.password; var db = require('diskdb'); db = db.connect('db', ['users']); var existing = db.users.findOne({ username: un }); var all = db.users.find(); bcrypt.compare(pw, existing != null ? 
existing.password : '', function (err, hashres) { // res == true if (hashres) { res.render(path.join(__dirname, 'login'), { message: ('handle login form submission for ' + un), username: un, result: "the username " + un + " is now logged in.", resulttype: "success" }); req.session.user = un; } else { res.render(path.join(__dirname, 'login'), { message: ('handle login form submission for ' + un), username: un, result: "the username and password combination is incorrect.", resulttype: "error" }); } }); }); export default router; ``` All the examples I've found online say that this is supposed to work, though most don't use a separate file for the route hander...<issue_comment>username_1: In your `app.ts` replace these lines of code ``` app.use('/register', session, register); app.use('/login', session, login); ``` to ``` app.use('/register', register); app.use('/login', login); ``` --- **updated:** try update your code of setup session store by this example ``` var session = require('express-session'); var FileStore = require('session-file-store')(session); app.use(session({ store: new FileStore(options), ... })); ``` if still not work, `express-session` has a default session store, use the default session store and check if there still a compiler error. Upvotes: 1 <issue_comment>username_2: On a hunch I tried to create the same project without TypeScript and it worked. Once I knew that TS was the culprit it was quite straight forward to find that I had to import an additional node package: `@types/express-session` Upvotes: 1 [selected_answer]
2018/03/21
688
2,346
<issue_start>username_0: I have a csv where for one column values are in list of dict like below ``` [{'10': 'i ve been with what is now comcast since 2001 the company has really grown and improved and delivers a great service along with great customer service ', 'aspects':['service']}, {'20': 'good service but lack of options to allow it be more affordable allowing individual channel choices would be great ', 'aspects':['lack', 'service']}, {'30': 'it a good service but very expensive', 'aspects':['service']}, {'40': 'good service', 'aspects':['service']}, {'50': 'good service but over priced ', 'aspects':['service']}] ``` Now because when I am reading this from `CSV` its a `string` I am not able to convert it to original type of `list of dict` and then `json` . How I can actually achieve this . Solution : ``` data = output[output.aspects == aspect]['column1'].tolist() listData=ast.literal_eval(data[0]) return json.dumps(listData) ```<issue_comment>username_1: You can use the `ast` module **Ex:** ``` import ast s = """[{'10': 'i ve been with what is now comcast since 2001 the company has really grown and improved and delivers a great service along with great customer service ', 'aspects':['service']}, {'20': 'good service but lack of options to allow it be more affordable allowing individual channel choices would be great ', 'aspects':['lack', 'service']}, {'30': 'it a good service but very expensive', 'aspects':['service']}, {'40': 'good service', 'aspects':['service']}, {'50': 'good service but over priced ', 'aspects':['service']}]""" print(ast.literal_eval(s)) ``` **Output:** ``` [{'10': 'i ve been with what is now comcast since 2001 the company has really grown and improved and delivers a great service along with great customer service ', 'aspects': ['service']}, {'aspects': ['lack', 'service'], '20': 'good service but lack of options to allow it be more affordable allowing individual channel choices would be great '}, {'30': 'it a good service but very expensive', 'aspects': ['service']}, {'aspects': ['service'], '40': 'good service'}, {'aspects': ['service'], '50': 'good service but over priced '}] ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: To convert to json file: ``` >>> import json >>> json.dump(obj,open(path + '/txt.json','w+')) ``` Upvotes: -1
2018/03/21
1,173
3,635
<issue_start>username_0: I have defined folder for user maciek with permissions using sqlplus. ``` SQL> CREATE OR REPLACE DIRECTORY test AS '\home\oracle\Desktop\test'; SQL> GRANT READ, WRITE ON DIRECTORY test to maciek; ``` The folder I have created earlier manually. When I run the following script. ``` DECLARE OBRAZEK_lob blob; obrazek_SI si_stillimage; DANE_PLIKU BFILE := BFILENAME('test','Pojazd.jpg'); BEGIN DBMS_LOB.CREATETEMPORARY(OBRAZEK_lob, TRUE); DBMS_LOB.fileopen(DANE_PLIKU, DBMS_LOB.file_readonly); DBMS_LOB.LOADFROMFILE(OBRAZEK_lob, DANE_PLIKU, DBMS_LOB.GETLENGTH(DANE_PLIKU)); DBMS_LOB.FILECLOSE(DANE_PLIKU); obrazek_SI :=SI_stillimage(obrazek_lob); INSERT INTO foto_oferty_si (idk,nazwa_pliku,opis,obrazek,oferta_id) VALUES(1, 'Pojazd.jpg','Autko', obrazek_si, 1); DBMS_LOB.FREETEMPORARY(OBRAZEK_lob); COMMIT; END; ``` I get the following error message. ``` Error report: ORA-22285: non-existent directory or file for FILEOPEN operation ORA-06512: at "SYS.DBMS_LOB", line 805 ORA-06512: at line 7 22285. 00000 - "non-existent directory or file for %s operation" *Cause: Attempted to access a directory that does not exist, or attempted ``` Is it the problem that Oracle doesn't see that folder that I created manually? Shoud the command below create folder pointed at path automatically? ``` CREATE OR REPLACE DIRECTORY test AS '\home\oracle\Desktop\test'; ```<issue_comment>username_1: From comments, you are using an Oracle DB inside a VBox and would like to load an image from the host Desktop. Here's 2 options. 1. Map host directory into the VBox then proceed as your above code but adjust the directory to the mapped in drive. > > > ``` > auto-mounted shared folders are mounted into the /media directory, along with the prefix sf_ > > ``` > > Making this example /media/sf\_klrice Use this in the create directory command. [![enter image description here](https://i.stack.imgur.com/qfSWI.png)](https://i.stack.imgur.com/qfSWI.png) VBox Reference: <https://www.virtualbox.org/manual/ch04.html#sf_mount_auto> 2. Use SQLcl which allows for JavaScript logic. Instead of PLSQL and Directories the code would look like the snippet below. A more complete write up and example is on my Blog here: <http://krisrice.io/2015-10-14-sqlcl-blob-loading-from-file/> ``` /* * Function to take in a filename and add or create it to a map * with bind variables */ function addBindToMap(map,bindName,fileName){ /* conn is the actual JDBC connection */ var b = conn.createBlob(); var out = b.setBinaryStream(1); var path = java.nio.file.FileSystems.getDefault().getPath(fileName); /* slurp the file over to the blob */ java.nio.file.Files.copy(path, out); out.flush(); if ( map == null ) { /* java objects as binds needs a hashmap */ var HashMap = Java.type("java.util.HashMap"); map = new HashMap(); } /* put the bind into the map */ map.put("b",b); return map; } /* File name */ var file = "/Users/klrice/workspace/raptor_common/10_5.log"; /* load binds */ binds = addBindToMap(null,"b",file); /* add more binds */ binds.put("path",file); /* exec the insert and pass binds */ var ret = util.execute("insert into k(path,blob_content,when) values(:path , :b, sysdate)",binds); /* print the results */ sqlcl.setStmt("select path,dbms_lob.getlength(blob_content) from k order by when desc;"); sqlcl.run(); ``` Upvotes: 1 <issue_comment>username_2: Interesting but the oracle directory names must be specified in `capital letters`, use `TEST` instead of `test`. Upvotes: 1 [selected_answer]
2018/03/21
1,009
3,463
<issue_start>username_0: I have to parse a CSV. I am using Apache Commons CSV to do this. My CSV data looks like:

```
Name, ErrorType, Location, Error Detail, Mandatory
ABC , E1 , "XYZ\ABC", "Valid Values: 'X','Y','Z'", REQUIRED
```

I am using the configuration below for the CSV parser:

```
CSVFormat.DEFAULT.withIgnoreSurroundingSpaces()
         .withQuote(null)
         .withHeader(excelHeaders)
         .withFirstRecordAsHeader()
         .parse(in).getRecords();
```

I have to use the withQuote(null) option to avoid this error:

```
java.io.IOException: (line 27) invalid char between encapsulated token and delimiter
    at org.apache.commons.csv.Lexer.parseEncapsulatedToken(Lexer.java:281)
    at org.apache.commons.csv.Lexer.nextToken(Lexer.java:158)
    at org.apache.commons.csv.CSVParser.nextRecord(CSVParser.java:586)
    at org.apache.commons.csv.CSVParser.getRecords(CSVParser.java:448)
```

But now it gives me the contents of the Error Detail column whenever I try to read the value of the Mandatory column. Is there any way to ignore the comma inside the quotes? I have already tried withEscape('"'); if I use this, I get an exception that the CSV doesn't have this index.
2018/03/21
358
1,150
<issue_start>username_0: When I write this in TypeScript, I get an error saying `Namespace Bar has no exported member Qux`. Why is that and how do I fix it?

```js
class Foo {}

namespace Bar {
  export const Qux = Foo;
}

let a: Bar.Qux;
```<issue_comment>username_1: The `Qux` in your implementation is a variable rather than a type. Therefore, the code below is valid:

```
class Foo {}

namespace Bar {
  export const Qux = Foo;
}

let a = Bar.Qux;
```

If you'd like to use `Qux` as an `interface` or extended `class`, you can do:

```
class Foo {}

namespace Bar {
  export class Qux extends Foo {}
}

let a = new Bar.Qux();
```

Does this solve your problem?

Upvotes: 0 <issue_comment>username_2: You are exporting a constant, not a type. You can do this: `let a = new Bar.Qux()`, and `a` will be of type `Foo`.

If you want to export both a type and a constant:

```
namespace Bar {
    export const Qux = Foo;
    export type Qux = Foo;
}
```

Then you can have:

```
let a: Bar.Qux = new Bar.Qux();
```

TypeScript will determine, based on context, if you are using the type definition or the constant.

Upvotes: 4 [selected_answer]
2018/03/21
446
1,705
<issue_start>username_0: I am reading from the documentation on OpenLog:

> Because Server-Side JavaScript Script Libraries have no context for which component called them, you will not be able to use this to pass the component to OpenLog. Instead, if you wish to identify which component called the code, you need to pass the component into your SSJS function as a parameter.

<https://wiki.openntf.org/display/XOL/Using+from+SSJS+Script+Libraries>

So I have a link which calls an SSJS function that resides in a script library:

```
#{javascript:openDialogAssessment(this);}
```

The function logs failures to OpenLog as follows:

```
function openDialogAssessment(component){
    try {
        try{
            assessmentBean.removeAssessment("assessment");
        }catch(e){
            openLogBean.addError(e,component);
        }
        //... more code
}
```

If I look at the log result in OpenLog I read:

> null - Developer has passed 'this' directly from an SSJS function in Script Library /proposal.jss. Please note, SSJS Script Libraries have no context for components. You must pass the relevant component into your SSJS function as a parameter.

Can someone tell me what I am doing wrong and how I could pass the component to the function correctly?<issue_comment>username_1: As Ferry stated, `this` refers to a component context issue, not the function itself; therefore the component should be provided to the SSJS function explicitly, e.g. `this.getParent()`.

Upvotes: 0 <issue_comment>username_2: In this context 'this' is the eventHandler. An eventHandler is not a UIComponent. You should pass 'this.getParent()' (the link) in this case.

Upvotes: 3 [selected_answer]
2018/03/21
1,212
3,351
<issue_start>username_0: i have a dictionary as ``` count = {'lt60': {'a': 0, 'b': 0, 'c': 0, 'd': 0}, 'ge60le90': {'a': 4, 'b': 0, 'C': 0, 'd': 0}, 'gt90': {'a': 0, 'b': 1, 'c': 2, 'd': 1} } ``` i want to write this dictionary in a CSV format like this ..as you can see in this picture [![csv file format](https://i.stack.imgur.com/8DOuG.png)](https://i.stack.imgur.com/8DOuG.png) what i want is pic the keys from lt60, ge60le90, gt90 and want to write them in a row. like i pick 'a' and its value from all the nested dictionaries and write its value in that row.<issue_comment>username_1: You can use `pandas` to do this: ``` import pandas as pd count = {'lt60': {'a': 0, 'b': 0, 'c': 0, 'd': 0}, 'ge60le90': {'a': 4, 'b': 0, 'c': 0, 'd': 0}, 'gt90': {'a': 0, 'b': 1, 'c': 2, 'd': 1} } df = pd.DataFrame(count).rename_axis('relation_type').reset_index() df = df.rename(columns={'ge60le90': 'confidence<90', 'gt90': 'confidence>90', 'lt60': 'confidence<60'}) df.to_csv('out.csv', index=False) # relation_type confidence<90 confidence>90 confidence<60 # 0 a 4 0 0 # 1 b 0 1 0 # 2 c 0 2 0 # 3 d 0 1 0 ``` Upvotes: 2 <issue_comment>username_2: This problem can be simplified by iterating through your dict to pull out your keys and values of those keys ``` count = {'lt60': {'a': 0, 'b': 0, 'c': 0, 'd': 0}, 'ge60le90': {'a': 4, 'b': 0, 'c': 0, 'd': 0}, 'gt90': {'a': 0, 'b': 1, 'c': 2, 'd': 1} } # Create outfile f = open("C:\\Users\\\\Desktop\\OUT.csv","w") # Write first row f.write(",a,b,c,d\n") # Iterate through keys for keys in count: print(keys) f.write(keys + ",") KEYS = count[keys] # Iterate though values for values in KEYS: print(KEYS[values]) f.write(str(KEYS[values]) + ",") f.write("\n") f.close() ``` Upvotes: 0 <issue_comment>username_3: Another way of doing it would be to utilize `csv` module. (Note that in your dictionary you have an upper case `C` which I corrected in my code below): ``` import csv lookup = {'ge60le90': 'confidence<90','gt90': 'confidence>90', 'lt60': 'confidence<60'} count = {'lt60': {'a': 0, 'b': 0, 'c': 0, 'd': 0}, 'ge60le90': {'a': 4, 'b': 0, 'c': 0, 'd': 0}, 'gt90': {'a': 0, 'b': 1, 'c': 2, 'd': 1} } # Getting keys from dictionary that match them with titles below. rowKeys = [k for k in count['lt60'].keys()] titles = [['relation type'] + list(lookup[k] for k in count.keys())] # Getting all row variable values for every title. rows = [[count[k][i] for k in count.keys()] for i in rowKeys] # Concatenating variables and values. fields = [[rowKeys[i]] + rows[i] for i in range(len(rowKeys))] # Concatenating final output to be written to file. result = titles + fields print("Final result to be written: ") for r in result: print(r) # Writing to file. with open("output.csv", "w", newline="") as outFile: writer = csv.writer(outFile, delimiter=';',quotechar='|', quoting=csv.QUOTE_MINIMAL) writer.writerows(result) ``` Note that the `;` delimiter works for European Windows and might not work for you. In this case, use `,` instead. Upvotes: 1 [selected_answer]
2018/03/21
742
2,311
<issue_start>username_0: I have conflicts between the default key mappings of plugin jedi-vim and my customized key mapping. ``` " I mapped some cscope functions like below nnoremap g :cscope find g =expand('') nnoremap d :cscope find d =expand('') ``` However, this key binding is overridden by the key binding of `g:jedi#goto_assignments_command` and in `g:jedi#goto_command` jedi-vim. I am wondering if it is possible to set a distinct for jedi-vim only instead of re-mapping conflicted keys.<issue_comment>username_1: Apparently jedi-vim doesn't use the canonical -mappings, but separate configuration variables. Nonetheless ``` let g:jedi#goto_assignments_command = ",g" let g:jedi#goto_command = ",d" ``` in your `~/.vimrc` (i.e. before jedi-vim is sourced) should do the trick, and this is what I would recommend. ### Alternative The key is influenced by the `mapleader` variable. From [`:help`](https://vimhelp.appspot.com/map.txt.html#<Leader>) : > > Note that the value of "mapleader" is used at the moment the mapping is > defined. Changing "mapleader" after that has no effect for already defined > mappings. > > > So, you could also solve it this way: ``` let mapleader = ',' runtime! plugin/jedi.vim unlet mapleader ``` Plugin managers or installing as a *pack plugin* further complicate this, and it changes the ordering of plugin initialization. I do not recommend this. Upvotes: 3 [selected_answer]<issue_comment>username_2: Follow @[Ingo](https://stackoverflow.com/users/813602/ingo-karkat) answer. I remaped most used functions in [jedi-vim docu](https://github.com/davidhalter/jedi-vim/blob/master/doc/jedi-vim.txt): ``` :let g:jedi#goto_command = ",jd" :let g:jedi#goto_assignments_command = ",jg" :let g:jedi#usages_command = ",jn" :let g:jedi#rename_command = ",jr" " will auto-create next visual-map: ,jr *@:call jedi#rename_visual() " rename\_command() fails in normal-mode, but success in visual-mode ! :let g:jedi#goto\_stubs\_command = ",js" :let g:jedi#documentation\_command = ",jK" " s \*@'c'.jedi#complete\_string(0) :let g:jedi#completions\_command = "" " jedi-vim Plug called AFTER remaping its commands only Plug 'davidhalter/jedi-vim', {'for': 'python'} ``` PD: this should be a comment of Mr. Karkat answer, but still not 50 points. Upvotes: 0