date (stringlengths 10–10) | nb_tokens (int64 60–629k) | text_size (int64 234–1.02M) | content (stringlengths 234–1.02M)
---|---|---|---|
2018/03/19 | 330 | 1,232 | <issue_start>username_0: In order to avoid trusting a third party and to skip the full download of the blockchain on my PC, I would like to set up a full node syncing an Ethereum testnet blockchain on a local server and then connect my PC to this server, using a geth instance.
Is that possible in some way?
How can I do that?<issue_comment>username_1: You can just connect to your local geth server via RPC:
```
geth attach http://host:port
```
[There](https://github.com/ethereum/wiki/wiki/JSON-RPC) you can read about geth's RPC APIs.
You can start the RPC listener with this command on your geth server, after or during syncing:
```
admin.startRPC(host, port, cors, apis)
```
As an alternative, you can just use light sync on your main PC, so you won't need to download the full blockchain:
```
geth --light
```
But it depends on what you are going to do with it.
Upvotes: 2 [selected_answer]<issue_comment>username_2: For others having problems with syncing: I think it's a good idea [for development and testing] to set up the Ethereum blockchain locally instead of connecting to any testnet or the mainnet.
Reference:
<https://nodejsera.com/blockchain/how-to-start-a-private-ethereum-blockchain-network.html>
Upvotes: 0 |
2018/03/19 | 1,009 | 3,241 | <issue_start>username_0: There are two unordered string arrays, which contain multiple very long strings that may be repeated. How can I determine whether the two arrays are exactly equal?
Example:
```
["abc", "abc", "bcd", "efg"] == ["bcd", "efg", "abc", "abc"]
```
The easiest way I can think of is to compare two arrays after sorting.
But when each string in the array is very long, it is time-consuming to perform string comparisons each time. Is there any way to improve it?
```
let array1 = ["aaa...(100 a)...aa", "bbb...(100 b)... bbb", "ccc...(100c) ...c", ....]
let array2 = ["bbb...(100 b)... bbb", "aaa...(100 a)...aa", "ccc...(100c) ...c", ....]
```
You can use any language to solve it, but you can't use library functions that directly compare the arrays for equality.<issue_comment>username_1: Instead of sorting the strings, you could take the hash of each string and compare those. (These you may want to sort.)
However this still requires you to compare the strings directly if the hashes are equal, since hash collisions could yield false positives.
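A minimal sketch of this hash-first idea in Python (the function names here are illustrative, not from the answer): compare the multisets of fingerprints first, and fall back to comparing the actual strings only when the hashes agree, to rule out collisions.

```python
import hashlib
from collections import Counter

def digest(s):
    # Reduce each long string to a short, fixed-size fingerprint.
    return hashlib.sha256(s.encode("utf-8")).digest()

def arrays_equal(a1, a2):
    if len(a1) != len(a2):
        return False
    # Cheap pass: compare the sorted fingerprints (32 bytes each),
    # not the long strings themselves.
    if sorted(map(digest, a1)) != sorted(map(digest, a2)):
        return False
    # The hashes match; verify with the real strings to rule out
    # false positives from hash collisions.
    return Counter(a1) == Counter(a2)

print(arrays_equal(["abc", "abc", "bcd"], ["bcd", "abc", "abc"]))  # True
print(arrays_equal(["abc", "abc"], ["abc", "abd"]))                # False
```

Sorting the fingerprints costs O(n log n) comparisons of 32-byte values, which is much cheaper than sorting the original strings when those are long.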
Upvotes: 0 <issue_comment>username_2: If in Python:
```
import collections

def areItersEqual(a1, a2):
    return collections.Counter(a1) == collections.Counter(a2)

>>> a1 = ["abc", "abc", "bcd", "efg"]
>>> a2 = ["bcd", "efg", "abc", "abc"]
>>> areItersEqual(a1, a2)
True
```
With this the upper bounds of time and space are O(m+n), as opposed to O(n log n) + O(m log m) time with sorting.
This can be written without `Counter()` too:
```
def areItersEqual(a1, a2):
    c = {}
    for word in a1:
        if word in c:
            c[word] += 1
        else:
            c[word] = 1  # first occurrence counts as 1, not 0
    for word in a2:
        if word in c:
            c[word] -= 1
        else:
            return False
    return not bool([v for k, v in c.items() if v != 0])
```
Upvotes: 1 <issue_comment>username_3: Just check the code below in JavaScript :)
```
var a1 = ["abc", "abc", "bcd", "efg"];
var a2 = ["bcd", "efg", "abc", "abc"];
var allIndexExist = true;
for (var i = 0; i < a1.length; i++) {
    var index = a2.indexOf(a1[i]);
    if (index == -1) {
        allIndexExist = false;
        break;
    } else {
        a2.splice(index, 1);
    }
}
if (a2.length > 0 || !allIndexExist) {
    console.log("Array is not equal");
} else {
    console.log("Array is equal");
}
```
Upvotes: 0 <issue_comment>username_4: Swift:
```
func test() -> Bool {
    var a1 = ["abc", "abc", "bcd", "efg"]
    var a2 = ["bcd", "efg", "abc", "abc"]
    if a1.count != a2.count {
        return false
    }
    var temp = [String: Int]()
    for ele in a1 {
        if let count = temp[ele] { // if temp contains the key
            temp[ele] = count + 1
        } else {
            temp[ele] = 1
        }
    }
    for ele in a2 {
        if let count = temp[ele] { // if temp contains the key
            if count == 0 {
                return false
            } else if count == 1 {
                temp[ele] = nil
            } else {
                temp[ele] = count - 1
            }
        } else {
            return false
        }
    }
    return temp.count == 0
}
print(test())
```
This function only needs to enumerate the two arrays once each: O(n).
Upvotes: 0 |
2018/03/19 | 1,470 | 5,792 | <issue_start>username_0: I have seen several other questions like this, but they do not solve the issue. I used MailChimp's API to make a simple call to add a member to my mailing list when they sign up.
However when I test, I get a 401 unauthorized and the API complains of no API key submitted. When I inspect the request in Chrome, I don't see any Authorization header. Here is my code:
```
const formData = {
  email_address: this.emailControl.value,
  status: 'subscribed',
  merge_fields: {
    NAME: this.nameControl.value
  }
};
const header = new HttpHeaders({
  'Authorization': 'apikey:' + environment.mailChimpApiKey
});
this.http.post(this.mailChimpUrl, formData, {
  headers: header,
  observe: 'response'
}).subscribe(response => {
  console.log('response', response);
  if (response.status === 200) {
    this.submitted = true;
  }
});
```
I have checked and double-checked the HttpClient.post method signature, and how the MailChimp API expects to receive the Auth header. It seems like I'm doing everything right, so why isn't Angular setting the header?
I am noticing that the value changes only for the `Access-Control-Request-Headers` when I set optional headers. Am I reading the chrome console wrong?
[](https://i.stack.imgur.com/EM05u.png)
Angular version: 5.2.4<issue_comment>username_1: From the screenshot, I assume that you are making `OPTIONS` request, not `POST`.
Probably the problem is on the server-side. Check out this answer: [http post - how to send Authorization header?](https://stackoverflow.com/questions/39408413/http-post-how-to-send-authorization-header/39411480#39411480)
You should also consider adding `withCredentials` to your request options.
```
const formData = {
  email_address: this.emailControl.value,
  status: 'subscribed',
  merge_fields: {
    NAME: this.nameControl.value
  }
};
const header = new HttpHeaders({
  'Authorization': 'apikey:' + environment.mailChimpApiKey
});
this.http.post(this.mailChimpUrl, formData, {
  headers: header,
  observe: 'response',
  withCredentials: true
}).subscribe(response => {
  console.log('response', response);
  if (response.status === 200) {
    this.submitted = true;
  }
});
```
According to [documentation](https://angular.io/api/http/RequestOptions):
>
> withCredentials: boolean | null
>
>
> Enable use credentials for a Request.
>
>
>
Upvotes: 0 <issue_comment>username_2: I'd very much prefer to comment on the original poster's answer, as I feel it is correct and a clarifying comment may help that person get +1 for the answer.
So..
If I am running code between Angular and two of my domains, using, say, Node to serve the data, I have to enter the following during development in the Node server. I keep this very open at first, just to make sure everything works in dev.
```
app.use(function (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
  res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With,content-type');
  res.setHeader('Access-Control-Allow-Credentials', true);
  next();
});
```
This clears up your error but is too open for cross-origin access; you can read up on how to allow specific domains etc. to access it instead of the `*` provided here.
Upvotes: 0 <issue_comment>username_3: To do this we need to import `Headers` and `RequestOptions` along with `Http` from the `@angular/http` library.
And as @Maciej suggested, you can also add `withCredentials: true` to your request options.
>
> ApplicationService.ts
>
>
>
```
import { Injectable } from '@angular/core';
import { Http, Headers, Response, RequestOptions } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';

@Injectable()
export class ApplicationService {

    constructor(private http: Http) {
    }

    myformPost(id: number, formData: any) {
        let header = this.initHeaders();
        let options = new RequestOptions({ headers: header, method: 'post' });
        let body = JSON.stringify(formData);
        return this.http.post(this.myapiUrl, body, options)
            .map(res => {
                return res.json();
            })
            .catch(this.handleError.bind(this));
    }

    private initHeaders(): Headers {
        var headers = new Headers();
        let token = localStorage.getItem(StorageKey.USER_TOKEN);
        if (token !== null) {
            headers.append('Authorization', token);
        }
        headers.append('Pragma', 'no-cache');
        headers.append('Content-Type', 'application/json');
        headers.append('Access-Control-Allow-Origin', '*');
        return headers;
    }

    private handleError(error: any): Observable<any> {
        return Observable.throw(error.message || error);
    }
}
```
Upvotes: 2 <issue_comment>username_4: The issue is not with Angular. Modern browsers send a [preflight OPTIONS request](https://developer.mozilla.org/en-US/docs/Glossary/Preflight_request) for most cross-domain requests, which is not supported by Mailchimp. The Mailchimp API does not support client-side implementations:
>
> MailChimp does not support client-side implementation of our API using
> CORS requests due to the potential security risk of exposing account
> API keys.
>
>
>
It would have been nice if this had been stated a bit more obviously, but I didn't notice it at first. The best solution is to use JSONP.
Upvotes: 3 [selected_answer] |
2018/03/19 | 1,280 | 4,819 | <issue_start>username_0: I'm installing SuiteCRM version 7.10.2. The license agreement works fine and the system check works fine, but when I'm done with the database and admin config and hit next, nothing happens. I check the console and get the ReferenceError: YAHOO is not defined
I'm installing on my domain and not locally.
In the Chrome console I get this error
install.php:14 GET <https://www.digitalinput.pt/SuiteCRM-7.10.2/cache/include/javascript/sugar_grp1_yui.js?s=6.5.25&c=> 404 (Not Found)
but the file exists and I added 777 permissions just to try, and nothing changes! |
2018/03/19 | 334 | 952 | <issue_start>username_0: I am quite new to R, therefore my question might be quite basic, but searching through the forum I haven't really found the right answer.
I have a data.frame of 24 variables, and after computing the mean for each column I would like to calculate the IQR (the 0.25 and 0.75 quantiles) for every column.
As far as I understood, I have to transform the df into a matrix for IQR, which I have done, and I tried several options with do.call or apply but don't really manage.
Any help would be much appreciated!<issue_comment>username_1: Try to adapt this code:
```
df <- data.frame(a = runif(100, 10, 20),
                 b = runif(100, 23, 23))
lapply(df,quantile,probs=c(0.25,0.75))
$a
25% 75%
12.31132 17.72699
$b
25% 75%
23 23
```
Upvotes: 2 <issue_comment>username_2: `summary` gives you the mean, quartiles, and extreme values for each column
```
df <- data.frame(a = runif(100, 10, 20),
                 b = runif(100, 23, 23))
summary(df)
```
Upvotes: 0 |
2018/03/19 | 2,004 | 7,159 | <issue_start>username_0: I am fairly new to SQL and I can't understand why I am receiving an error when establishing foreign keys: the error says that the destination table is invalid.
Below is the SQL code, any advice on how to fix would be brilliant! :)
The error appears regarding tblFilms and tblCinemaScreens.
```
CREATE TABLE tblCustomer (
    CustomerID int,
    CustomerSurname NVARCHAR(25),
    CustomerForename NVARCHAR(20),
    CustomerAge int,
    CustomerPhoneNumber NVARCHAR(12),
    CustomerEmailAddress NVARCHAR(100),
    CONSTRAINT PK_tblCustomer PRIMARY KEY CLUSTERED (CustomerID)
)
GO
CREATE TABLE tblBookings (
    BookingID int,
    FilmShowings TIME,
    PriceOfFilm MONEY,
    DateOfBooking DATE,
    FilmID int,
    CinemaScreenID int,
    CustomerID int,
    CONSTRAINT PK_tblBookings PRIMARY KEY CLUSTERED (BookingID),
    CONSTRAINT FK_FilmID FOREIGN KEY (FilmID) REFERENCES tblFilms(FilmID),
    CONSTRAINT FK_CustomerID FOREIGN KEY (CustomerID) REFERENCES tblCustomer(CustomerID)
)
GO
CREATE TABLE tblFilms (
    FilmID int,
    FilmName VARCHAR(100),
    FilmDuration int,
    AgeRating VARCHAR(3),
    CriticScore int,
    FilmDescription NVARCHAR(300),
    FilmGenre NVARCHAR(20),
    FilmStartScreeningDate DATE,
    FlimEndScreeningDate DATE,
    CinemaScreenID int,
    CONSTRAINT PK_tblFilms PRIMARY KEY CLUSTERED (FilmID),
    CONSTRAINT FK_tblFilms FOREIGN KEY (CinemaScreenID) REFERENCES tblCinemaScreens(CinemaScreenID)
)
GO
CREATE TABLE tblCinemaScreens (
    CinemaScreenID int,
    CinemaScreenType NVARCHAR(10),
    NumberOfSeats int,
    FilmID int,
    CONSTRAINT PK_tblCinemaScreens PRIMARY KEY CLUSTERED (CinemaScreenID),
    CONSTRAINT FK_tblCinemaScreens FOREIGN KEY (FilmID) REFERENCES tblFilms(FilmID)
)
GO
```<issue_comment>username_1: The cinema screens table does not need a film ID, and you need to create the tables BEFORE referencing them with FKs:
```
CREATE TABLE tblCinemaScreens (
    CinemaScreenID int,
    CinemaScreenType NVARCHAR(10),
    NumberOfSeats int,
    CONSTRAINT PK_tblCinemaScreens PRIMARY KEY CLUSTERED (CinemaScreenID)
)
GO
CREATE TABLE tblFilms (
    FilmID int,
    FilmName VARCHAR(100),
    FilmDuration int,
    AgeRating VARCHAR(3),
    CriticScore int,
    FilmDescription NVARCHAR(300),
    FilmGenre NVARCHAR(20),
    FilmStartScreeningDate DATE,
    FlimEndScreeningDate DATE,
    CinemaScreenID int,
    CONSTRAINT PK_tblFilms PRIMARY KEY CLUSTERED (FilmID),
    CONSTRAINT FK_tblFilms FOREIGN KEY (CinemaScreenID) REFERENCES tblCinemaScreens(CinemaScreenID)
)
GO
CREATE TABLE tblCustomer (
    CustomerID int,
    CustomerSurname NVARCHAR(25),
    CustomerForename NVARCHAR(20),
    CustomerAge int,
    CustomerPhoneNumber NVARCHAR(12),
    CustomerEmailAddress NVARCHAR(100),
    CONSTRAINT PK_tblCustomer PRIMARY KEY CLUSTERED (CustomerID)
)
GO
```
Upvotes: 0 <issue_comment>username_2: You're creating a table, tblFilms, and adding a foreign key to tblCinemaScreens before the second table has been created.
As a rule, I usually create all the tables first and then create any foreign key relationships and other constraints.
Upvotes: -1 <issue_comment>username_3: You are trying to create a foreign key on tables before they are made. Comment out the lines in the create table statements that are giving you an error and create the tables. Once the tables are made create the missing foreign keys you need like this:
```
alter table tblFilms
    add CONSTRAINT FK_tblFilms FOREIGN KEY (CinemaScreenID) REFERENCES tblCinemaScreens(CinemaScreenID)

alter table tblBookings
    add CONSTRAINT FK_FilmID FOREIGN KEY (FilmID) REFERENCES tblFilms(FilmID)
```
Upvotes: 0 <issue_comment>username_4: I would change up the schema some to give yourself more flexibility. I would assume that a film can be on more than one cinema screen, so I would take `CinemaScreenID` off of `tblFilms` and remove that foreign key. You can also remove `FilmID` from `tblBookings`, since you already have the `CinemaScreenID`, which has the `FilmID`. Another thing to consider might be having multiple films on the same `CinemaScreen`, which would mean another table called `tblCinemaScreenFilms`, and you would put that `CinemaScreenFilmID` on `tblBookings` instead.
```
CREATE TABLE tblFilms (
    FilmID int,
    ...
    CONSTRAINT PK_tblFilms PRIMARY KEY CLUSTERED (FilmID)
)
GO
CREATE TABLE tblCustomer (
    CustomerID int,
    ...
    CONSTRAINT PK_tblCustomer PRIMARY KEY CLUSTERED (CustomerID)
)
GO
CREATE TABLE tblCinemaScreens (
    CinemaScreenID int,
    FilmID int,
    ...
    CONSTRAINT PK_tblCinemaScreens PRIMARY KEY CLUSTERED (CinemaScreenID),
    CONSTRAINT FK_tblCinemaScreens FOREIGN KEY (FilmID) REFERENCES tblFilms(FilmID)
)
GO
CREATE TABLE tblBookings (
    BookingID int,
    CinemaScreenID int,
    CustomerID int,
    ...
    CONSTRAINT PK_tblBookings PRIMARY KEY CLUSTERED (BookingID),
    CONSTRAINT FK_CinemaScreenID FOREIGN KEY (CinemaScreenID) REFERENCES tblCinemaScreens(CinemaScreenID),
    CONSTRAINT FK_CustomerID FOREIGN KEY (CustomerID) REFERENCES tblCustomer(CustomerID)
)
GO
```
**Option B (preferred)**
```
CREATE TABLE tblCustomer (
    CustomerID int,
    ...
    CONSTRAINT PK_tblCustomer PRIMARY KEY CLUSTERED (CustomerID)
)
GO
CREATE TABLE tblFilms (
    FilmID int,
    ...
    CONSTRAINT PK_tblFilms PRIMARY KEY CLUSTERED (FilmID)
)
GO
CREATE TABLE tblCinemaScreens (
    CinemaScreenID int,
    ...
    CONSTRAINT PK_tblCinemaScreens PRIMARY KEY CLUSTERED (CinemaScreenID)
)
GO
CREATE TABLE tblCinemaScreenFilms (
    CinemaScreenFilmID int,
    CinemaScreenID int,
    FilmID int,
    CONSTRAINT PK_tblCinemaScreenFilms PRIMARY KEY CLUSTERED (CinemaScreenFilmID),
    CONSTRAINT FK_FilmID FOREIGN KEY (FilmID) REFERENCES tblFilms(FilmID),
    CONSTRAINT FK_CinemaScreenID FOREIGN KEY (CinemaScreenID) REFERENCES tblCinemaScreens(CinemaScreenID)
)
GO
CREATE TABLE tblBookings (
    BookingID int,
    CustomerID int,
    CinemaScreenFilmID int,
    ...
    CONSTRAINT PK_tblBookings PRIMARY KEY CLUSTERED (BookingID),
    CONSTRAINT FK_CustomerID FOREIGN KEY (CustomerID) REFERENCES tblCustomer(CustomerID),
    CONSTRAINT FK_CinemaScreenFilmID FOREIGN KEY (CinemaScreenFilmID) REFERENCES tblCinemaScreenFilms(CinemaScreenFilmID)
)
GO
```
Upvotes: 0 <issue_comment>username_5: The answer is that they should have no direct relationship.
A film is shown on 0 or more screens.
You need a join table:
```
CREATE TABLE tblFilmCinema (
    FilmID int,
    CinemaScreenID int,
    CONSTRAINT PK_tblFilmCinema PRIMARY KEY CLUSTERED (FilmID, CinemaScreenID),
    CONSTRAINT FK_FilmCinema_Film FOREIGN KEY (FilmID) REFERENCES tblFilms(FilmID),
    CONSTRAINT FK_FilmCinema_Screen FOREIGN KEY (CinemaScreenID) REFERENCES tblCinemaScreens(CinemaScreenID)
)
```
Upvotes: 0 |
2018/03/19 | 567 | 1,894 | <issue_start>username_0: In my use case I would like to enter numbers consisting of 4 digits, like in the image. The first digit should be 0 or 1.
My code so far is:
```
func numberOfComponents(in weightPickerView: UIPickerView) -> Int {
    return 4
}

func pickerView(_ pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int {
    return loopingMargin * numbers.count
}

func pickerView(_ pickerView: UIPickerView, titleForRow row: Int, forComponent component: Int) -> String? {
    return numbers[row % numbers.count]
}

func pickerView(_ pickerView: UIPickerView, didSelectRow row: Int, inComponent component: Int) {
    digits[component] = row % numbers.count
    let weightString = "\(digits[0])\(digits[1])\(digits[2])\(digits[3])"
    weightField.text = weightString
}
```
How can I achieve that?
[](https://i.stack.imgur.com/5VXHJ.png)<issue_comment>username_1: You can try
```
func pickerView(_ pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int {
    if component == 0 {
        return 2
    } else {
        return loopingMargin * numbers.count
    }
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Considering:
```
let numbers = [2,10,10,10]
var digits = [0,0,0,0]
```
Update your delegate methods like this:
```
func pickerView(_ pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int {
    return numbers[component]
}

func pickerView(_ pickerView: UIPickerView, titleForRow row: Int, forComponent component: Int) -> String? {
    return String(row)
}

func pickerView(_ pickerView: UIPickerView, didSelectRow row: Int, inComponent component: Int) {
    digits[component] = row
    let weightString = "\(digits[0])\(digits[1])\(digits[2])\(digits[3])"
    weightField.text = weightString
}
```
Upvotes: 2 |
2018/03/19 | 336 | 1,169 | <issue_start>username_0: Opening multiple `db.xx.onSnapshot` listeners, is there a way to know how many/which listeners are active from the Firebase SDK? And is there a global method to disconnect all active listeners? |
2018/03/19 | 1,266 | 3,846 | <issue_start>username_0: Examples of looping object creation that I find online use the new keyword and I want to use Object.create().
My attempt returns an error at line 29:
<https://jsfiddle.net/ynfkev6c/2/>
```
function my_game() {
    var cols = 11;
    var rows = 11;
    var num = cols * rows;
    var size = 50;
    var spacing = 5;
    var square = {
        size: 50,
        x: 0,
        y: 0
    }
    var piece = {
        size: 50,
        x: 0,
        y: 0,
        name: king
    }
    function make_grid() {
        var squares = [];
        for (i = 0; i < rows; i++) {
            for (j = 0; j < cols; j++) {
                squares.push({ // ??
                    Object.create(square); // ??
                    x: i; // ??
                    y: j; // ??
                })
            }
        }
    }
}
```
How would you find this answer if you were me?<issue_comment>username_1: **Create the object and then either set or assign the properties:**
```
var obj = Object.create(square);
squares.push(Object.assign(obj, {x: i, y: j }));
```
**Or set the properties:**
```
var obj = Object.create(square);
obj.x = i;
obj.y = j;
squares.push(obj);
```
Upvotes: 0 <issue_comment>username_2: Change your inner loops like this:
```
for (i = 0; i < rows; i++) {
    for (j = 0; j < cols; j++) {
        let sq = Object.create(square)
        sq.x = i
        sq.y = j
        squares.push(sq)
    }
}
```
Upvotes: 1 <issue_comment>username_3: You cannot use `Object.create` inside an object literal. You will need to write
```
function make_grid() {
    var squares = [];
    for (var i = 0; i < rows; i++) {
        for (var j = 0; j < cols; j++) {
            var o = Object.create(square);
            o.x = i;
            o.y = j;
            squares.push(o);
        }
    }
    return squares;
}
```
or if you need a more concise solution without a temporary variable,
```
squares.push(Object.assign(Object.create(square), {
x: i,
y: j,
}));
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: First, in `squares.push`, you begin object notation, which expects commas, not semicolons between keys:
```
squares.push({ // ??
    Object.create(square); // ??
    x: i; // ??
    y: j; // ??
})
// should be
squares.push({
    // Object.create(square), // we'll address this in a bit
    x: i,
    y: j,
})
```
Furthermore, I'm not sure if you intend on just pushing the new Object or merging in the values `x: i, y: j` but you can do the first easily:
```
squares.push(Object.create(square))
```
Or the second easily if you have ES6 support:
```
// This will merge the created object with the new values
squares.push({
    ...square,
    x: i,
    y: j,
})
```
Upvotes: 0 <issue_comment>username_5: I might use a function to create the object by passing in the `i` and `j` parameters and just push that to the array:
```
function createSquare(x, y) {
    return { size: 50, x, y }
}
...
for (j = 0; j < cols; j++) {
    squares.push(createSquare(i, j));
}
```
Upvotes: 1 <issue_comment>username_6: `squares.push({...})` will create an array of objects. Each entry in an object literal needs a key and a value; `Object.create(square);` inside the literal provides neither.
That snippet can be replaced by
```
squares.push({ // ??
    'a': Object.create(square), // ??
    x: i, // ??
    y: j, // ??
})
```
Full snippet
```
function my_game() {
    var cols = 11;
    var rows = 11;
    var num = cols * rows;
    var size = 50;
    var spacing = 5;
    var square = {
        size: 50,
        x: 0,
        y: 0
    }
    var piece = {
        size: 50,
        x: 0,
        y: 0,
        name: king
    }
    function make_grid() {
        var squares = [];
        for (i = 0; i < rows; i++) {
            for (j = 0; j < cols; j++) {
                squares.push({ // ??
                    'a': Object.create(square), // ??
                    x: i, // ??
                    y: j, // ??
                })
            }
        }
        console.log(squares)
    }
}
```
Upvotes: 0 |
2018/03/19 | 1,128 | 4,318 | <issue_start>username_0: I have tried to check the importance of, and the reason for, using the W WinAPI functions vs the A ones (W meaning wide char, A meaning ASCII, right?).
I have made a simple example; I retrieve the temp path for the current user like this:
```
CHAR pszUserTempPathA[MAX_PATH] = { 0 };
WCHAR pwszUserTempPathW[MAX_PATH] = { 0 };
GetTempPathA(MAX_PATH - 1, pszUserTempPathA);
GetTempPathW(MAX_PATH - 1, pwszUserTempPathW);
printf("pathA=%s\r\npathW=%ws\r\n",pszUserTempPathA,pwszUserTempPathW);
```
My current user has a Russian name, so it's written in Cyrillic; printf outputs this:
```
pathA=C:\users\Пыщь\Local\Temp
pathW=C:\users\Пыщь\Local\Temp
```
So both paths are all right. I thought I would get some error, or a mess of symbols, from GetTempPathA, since the current user name is Unicode, but I figured out that Cyrillic characters are actually included in the extended ASCII character set. So I have a question: if my software were used by a Chinese user (assuming they have Chinese symbols in their user name) and it extracted data into that user's temp folder, would I get a mess or an error using the GetTempPathA version? Should I always use the W-prefixed functions for production software that works with the WinAPI directly?<issue_comment>username_1: Of course, you need the wide version. The ASCII version can't even technically handle more than 256 distinct characters. Cyrillic is included in the extended ASCII set (if that's your localization) while Chinese isn't and can't be, due to the much larger set of characters needed to represent it. Moreover, you can get a mess with Cyrillic as well: it will only work properly if the executing machine has a matching localization. So on a machine with non-Cyrillic localization the text will be displayed according to whatever is defined by the localization settings.
Upvotes: -1 [selected_answer]<issue_comment>username_2: First, the -A suffix stands for ANSI, not ASCII. ASCII is a 7-bit character set. ANSI, as Microsoft uses the term, is for an encoding using 8-bit code units (`char`s) and code pages.
Some people use the terms "extended ASCII" or "high ASCII," but that's not actually a standard and, in some cases, isn't quite the same as ANSI. Extended ASCII is the ASCII character set plus (at most) 128 additional characters. For many ANSI code pages this is identical to extended ASCII, but some code pages accommodate variable length characters (which Microsoft calls multi-byte). Some people consider "extended ASCII" to just mean ISO-Latin-1 (which is nearly identical to Windows-1252).
---
Anyway, with an ANSI function, your string can include any characters from your current code page. **If you need characters that aren't part of your current code page, you're out of luck. You'll have to use the wide -W versions.**
In modern versions of Windows, you can generally think of the -A functions as wrappers around the -W functions that use [MultiByteToWideChar](https://msdn.microsoft.com/en-us/library/windows/desktop/dd319072(v=vs.85).aspx) and/or [WideCharToMultiByte](https://msdn.microsoft.com/en-us/library/windows/desktop/dd374130(v=vs.85).aspx) to convert any strings passing through the API. But the latter conversion can be lossy, since wide character strings might include characters that your multibyte strings cannot represent.
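The answers here are about C and the WinAPI, but the same Windows code pages are exposed through Python's codecs, which makes the lossiness easy to demonstrate. This is an illustration, not WinAPI code; cp1251 is the Cyrillic ANSI code page and cp1252 the Western-European one:

```python
name = "Пыщь"  # the Cyrillic user name from the question

# Round-trips fine when the code page actually contains the characters:
print(name.encode("cp1251"))  # Cyrillic ANSI code page: every character maps
print(name.encode("utf-8"))   # UTF-8 can encode any code point

# A non-matching ANSI code page cannot represent the string at all:
try:
    name.encode("cp1252")     # Western-European code page: no Cyrillic
except UnicodeEncodeError as e:
    print("lossy:", e.reason)

# Chinese does not fit any single-byte code page either:
try:
    "中文".encode("cp1251")
except UnicodeEncodeError as e:
    print("lossy:", e.reason)
```

This mirrors what happens on Windows: the -A call succeeds only while the current ANSI code page happens to contain every character involved.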
---
Portable, cross-platform code often stores [all text in UTF-8](http://utf8everywhere.org/), which uses 8-bit code units (`char`s) but can represent any Unicode code point, and anytime text needs to go through a Windows API, you'd explicitly convert to/from wide chars and then call the -W version of the API.
UTF-8 is similar to what Microsoft calls a multibyte ANSI code page, except that Windows does not completely support a UTF-8 code page. There is `CP_UTF8`, but it works only with certain APIs (like WideCharToMultiByte and MultiByteToWideChar). You cannot set your code page to `CP_UTF8` and expect the general -A APIs to do the right thing.
---
As you try to test things, be aware that it's difficult (and sometimes impossible) to get the CMD console window to display characters outside the current code page. If you want to display multi-script strings, you probably should write a GUI application and/or use the debugger to inspect the actual content of the strings.
Upvotes: 3 |
2018/03/19 | 1,013 | 4,120 | <issue_start>username_0: I'm new to Firebase/Firestore and trying to create a Firebase Function that will delete all user data upon deletion of an Auth account.
My function is successfully called on deleting an account, and I'm trying to delete a collection called links for that user and then delete the user document. But I'm getting an error: linksRef.forEach is not a function.
Any guidance on how I'd do this cascading delete?
```
exports.deleteUserData = functions.auth.user().onDelete((event) => {
const userId = event.data.uid;
const store = admin.firestore();
store.collection('users').doc(userId).get().then(user => {
if (user.exists) {
user.collection('links').get().then(links => {
links.forEach(link => {
link.delete();
})
return;
}).catch(reason => {
console.log(reason);
});
user.delete();
return;
}
else {
// User does not exist
return;
}
}
).catch(reason => {
console.log(reason);
});
});
```<issue_comment>username_1: Of course, you need the wide version. ASCII version can't even technically handle more than 256 distinct characters. Cyrillic is included in the extended ASCII set (if that's your localization) while Chinese isn't and can't due to much larger set of characters needed to represent it. Moreover, you can get mess with Cyrillic as well - it will only work properly if the executing machine has matching localization. So on a machine with non-cyrillic localization the text will be displayed according to whatever is defined by the localization settings.
Upvotes: -1 [selected_answer]<issue_comment>username_2: First, the -A suffix stands for ANSI, not ASCII. ASCII is a 7-bit character set. ANSI, as Microsoft uses the term, is for an encoding using 8-bit code units (`char`s) and code pages.
Upvotes: 3 |
2018/03/19 | 1,167 | 3,409 | <issue_start>username_0: If we have a 2D int array that looks like this:
```
6 | 8 | 9 | 16
0 | 6 |-3 | 4
18| 2 | 1 | 11
```
Than the expected output would be:
```
0 | 2 |-3 | 4
6 | 6 | 1 | 11
18| 8 | 9 | 16
```
I get stuck when I think about how to sort vertically.
```
int[][] array = new int[10][10];
for(int i = 0; i < array.length; i++){
for(int j = 0; j < array[0].length; j++){
//here I get stuck because I don't know how I would vertically sort them
}
}
```
I know there are a lot of topics about this, but none of them worked for me. Therefore I apologize for this post, but I am stuck.<issue_comment>username_1: Learn [Linear algebra](https://en.wikipedia.org/wiki/Linear_algebra) please:
An example with the [matrix transpose](https://en.wikipedia.org/wiki/Transpose)
```
import java.util.Arrays;

public class MatrixSort {
private static final int MATRIX[][] = {
{ 6, 8, 9, 16 },
{ 0, 6, -3, 4 },
{ 18, 2, 1, 11 }
};
private static int[][] transpose(int[][] m) {
int[][] ret = new int[m[0].length][m.length];
for (int i = 0; i < m.length; i++)
for (int j = 0; j < m[0].length; j++)
ret[j][i] = m[i][j];
return ret;
}
public static void main(String[] args) {
int ret[][] = transpose(MATRIX);
for(int i=0; i < ret.length; i++) {
Arrays.sort(ret[i]);
}
ret = transpose(ret);
for(int i=0; i < ret.length; i++) {
for(int j=0; j < ret[i].length; j++) {
System.out.print(ret[i][j]);
System.out.print(" | ");
}
System.out.print('\n');
}
}
}
```
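The transpose → sort rows → transpose back idea above can be sketched compactly in Python as well (added for illustration; the thread's own solution is the Java code):

```python
matrix = [
    [6, 8, 9, 16],
    [0, 6, -3, 4],
    [18, 2, 1, 11],
]

# Transpose so that each column becomes a row, sort each row, transpose back.
transposed = [list(col) for col in zip(*matrix)]
for row in transposed:
    row.sort()
result = [list(col) for col in zip(*transposed)]

print(result)  # [[0, 2, -3, 4], [6, 6, 1, 11], [18, 8, 9, 16]]
```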
Upvotes: 1 [selected_answer]<issue_comment>username_2: You can use a temp array that will contain a column and use the method sort on it.
```
int[][] input = new int[numberOfRow][numberOfColumn];
int[][] result = new int[numberOfRow][numberOfColumn];
for (int col = 0; col < numberOfColumn; col++) {
    int[] temp = new int[numberOfRow];
    for (int row = 0; row < numberOfRow; row++) {
        temp[row] = input[row][col];
    }
    Arrays.sort(temp);
    // copy the sorted column back into place
    for (int i = 0; i < numberOfRow; i++) {
        result[i][col] = temp[i];
    }
}
```
Upvotes: 0 <issue_comment>username_3: You can make your own class that makes a `List` from one column of the array and uses the array as the backing data (i.e. implement `set`). You can then use `Collections.sort` to sort it in-place.
```
import java.util.*;

class ColumnList<T> extends AbstractList<T> implements List<T> {
private final T[][] array;
private final int column;
public ColumnList(T[][] array, int column) {
this.array = array;
this.column = column;
}
@Override
public T get(int index) {
return array[index][column];
}
@Override
public T set(int index, T element) {
return array[index][column] = element;
}
@Override
public int size() {
return array.length;
}
}
public void test(String[] args) {
Integer[][] array = {
{6, 8, 9, 16},
{0, 6, -3, 4},
{18, 2, 1, 11}
};
System.out.println("Before: " + Arrays.deepToString(array));
// Sort each column separately.
for (int i = 0; i < array[0].length; i++) {
ColumnList<Integer> column = new ColumnList<>(array, i);
Collections.sort(column);
}
System.out.println("After: " + Arrays.deepToString(array));
}
```
prints
>
> Before: [[6, 8, 9, 16], [0, 6, -3, 4], [18, 2, 1, 11]]
>
>
> After: [[0, 2, -3, 4], [6, 6, 1, 11], [18, 8, 9, 16]]
>
>
>
Upvotes: 1 |
2018/03/19 | 297 | 898 | <issue_start>username_0: I would change the style of p-inputSwitch component of Primeng library
I would get something like this:
[](https://i.stack.imgur.com/IEQmN.png)
Here's my code :
```
SEARCH BY
Business group
Borrower
```
I started by deleting labels but width changes also and I don't know how to increase it
[](https://i.stack.imgur.com/omCT9.png)<issue_comment>username_1: you can replace your div element with the following:
```
```
Upvotes: 0 <issue_comment>username_2: >
> I started by deleting labels but width changes also and I don't know how to increase it
>
>
>
Override PrimeNG `ui-inputswitch` class :
```
.ui-inputswitch {
width: 80px !important;
}
```
See [Plunker](https://plnkr.co/edit/CvJxrMxnGUgc6bFQHIVj?p=preview)
Upvotes: 3 [selected_answer] |
2018/03/19 | 370 | 1,272 | <issue_start>username_0: I have Jenkins 2.46.3 LTS version that i am trying to upgrade to 2.73.3 LTS version. My current java version is 1.7. How do i install and start Jenkins with 1.8 version? I am getting the following error after i copied Jenkins war file under (`/usr/lib/Jenkins`) and restarted the service
```
$ sudo service jenkins start
Starting Jenkins Jenkins requires Java8 or later, but you are running 1.7.0_97-b02 from /app_2/java/jdk1.7.0_97/jre
java.lang.UnsupportedClassVersionError: 51.0
at Main.main(Main.java:124)
```
Is there any way to start Jenkins using Java 8 without changing the system classpath?
Thanks,
Ann<issue_comment>username_1: Set `JENKINS_JAVA` option in `/etc/sysconfig/jenkins` file.
The variable name might be different depending on your operating system and package source but the `/etc/sysconfig/jenkins` file is usually the configuration file for your `jenkins` service.
Upvotes: 2 <issue_comment>username_2: Just run the war with the new java binary:
```
nohup /java -jar jenkins.war > $LOGFILE 2>&1
```
Not sure if JAVA_HOME is needed, but you can execute the line below in the shell, before the command above, or add it to the top of the init script:
```
export JAVA_HOME=/java
```
Hope that helps.
Upvotes: 1 |
2018/03/19 | 1,235 | 3,494 | <issue_start>username_0: I have a list of strings in format as below :
```
xxxxxxxxxx xxxxxxxxxxxxx 100PS xxxxxxxxxxxxxxxx xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 250PS xxxxxxxxxxx xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 350PS xxxxxxxxxxxxx xxxxxxxxx xxxx
xxxxxxxxxx xxxxxxxxxxxxx 100PS xxxxxxxxxxxxx 100PS xxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 200PS xxxxxxxxxxxxxxxx 200PS xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 100PS xxxxxxxxxxxxxxxx xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 250PS xxxxxxxxxxx xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 350PS xxxxxxxxxxxxx xxxxxxxxx xxxx
```
In Excel/VBA, and I am trying to remove duplicate values from the string i.e. 100PS and 200PS where it is printed out twice. Using VBA and Reg-Ex I've come up with :
```
(?<=\d\d\dPS\s.*)(\d\d\dPS\s)
```
And this seems to work when testing it online and on other languages, but in VBA, lookbehind is not supported, and this is absolutely wrecking my brain.
The value always consists of \d\d\d (3 digits) and PS, ends with \s but all the xxxxxx text around it can differ every time and have different lengths etc.
How would I possibly choose the duplicate PS value with regex?
I have looked through stackoverflow and found a couple of reg-ex examples, but they don't seem to be working in VBA..
Any help is greatly appreciated,
Thanks<issue_comment>username_1: Have you considered a worksheet formula?
```
=SUBSTITUTE(A1,MID(A1,SEARCH("???PS",A1),6),"",2)
```
[](https://i.stack.imgur.com/wD9Ba.png)
Upvotes: 2 <issue_comment>username_2: [See regex in use here](http://regexstorm.net/tester?p=%28%5Cs%28%5Cd%7B3%7DPS%29%5Cs.*%5Cs%29%5C2%5Cs&i=xxxxxxxxxx%20xxxxxxxxxxxxx%20100PS%20xxxxxxxxxxxxxxxx%20xxxxxxxxx%20xxxxxx%0D%0Axxxxxxxxxx%20xxxxxxxxxxxxx%20250PS%20xxxxxxxxxxx%20xxxxxxxxx%20xxxxxx%0D%0Axxxxxxxxxx%20xxxxxxxxxxxxx%20350PS%20xxxxxxxxxxxxx%20xxxxxxxxx%20xxxx%0D%0Axxxxxxxxxx%20xxxxxxxxxxxxx%20100PS%20xxxxxxxxxxxxx%20100PS%20xxxxxxx%20xxxxxx%0D%0Axxxxxxxxxx%20xxxxxxxxxxxxx%20200PS%20xxxxxxxxxxxxxxxx%20200PS%20xxxxxxxxx%20xxxxxx%0D%0Axxxxxxxxxx%20xxxxxxxxxxxxx%20100PS%20xxxxxxxxxxxxxxxx%20xxxxxxxxx%20xxxxxx%0D%0Axxxxxxxxxx%20xxxxxxxxxxxxx%20250PS%20xxxxxxxxxxx%20xxxxxxxxx%20xxxxxx%0D%0Axxxxxxxxxx%20xxxxxxxxxxxxx%20350PS%20xxxxxxxxxxxxx%20xxxxxxxxx%20xxxx&r=%241)
```
(\s(\d{3}PS)\s.*\s)\2\s
```
* `(\s(\d{3}PS)\s.*\s)` Capture the following into capture group 1
+ `\s` Matches a single whitespace character
+ `(\d{3}PS)` Capture the following into capture group 2
- `\d{3}` Matches any 3 digits
- `PS` Match this literally
+ `\s` Matches a single whitespace character
+ `.*` Matches any character (except `\n`) any number of times
+ `\s` Matches a single whitespace character
* `\2` Matches the text that was most recently captured by capture group 2
* `\s` Matches a single whitespace character
Replacement: `$1` (puts capture group 1 back into the string)
Result:
```
xxxxxxxxxx xxxxxxxxxxxxx 100PS xxxxxxxxxxxxxxxx xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 250PS xxxxxxxxxxx xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 350PS xxxxxxxxxxxxx xxxxxxxxx xxxx
xxxxxxxxxx xxxxxxxxxxxxx 100PS xxxxxxxxxxxxx xxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 200PS xxxxxxxxxxxxxxxx xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 100PS xxxxxxxxxxxxxxxx xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 250PS xxxxxxxxxxx xxxxxxxxx xxxxxx
xxxxxxxxxx xxxxxxxxxxxxx 350PS xxxxxxxxxxxxx xxxxxxxxx xxxx
```
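The backreference logic is not VBScript-specific; here is a quick check of the same pattern in Python (added for illustration only):

```python
import re

pattern = r"(\s(\d{3}PS)\s.*\s)\2\s"

# A line with a repeated value loses the second occurrence.
line = "aaa bbb 100PS ccc 100PS ddd eee"
print(re.sub(pattern, r"\1", line))
# aaa bbb 100PS ccc ddd eee

# A line without a repeated value is left untouched.
clean = "aaa bbb 100PS ccc ddd eee"
print(re.sub(pattern, r"\1", clean))
# aaa bbb 100PS ccc ddd eee
```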
Upvotes: 1 [selected_answer] |
2018/03/19 | 765 | 3,002 <issue_start>username_0: I am unable to get the OOXML of a header. According to the [documentation](https://dev.office.com/reference/add-ins/word/section), the `getHeader` method will return a `Body` type. The [`Body`](https://dev.office.com/reference/add-ins/word/body) has a method to get OOXML. But it looks like it is not returning the OOXML. Maybe I am missing something?
Here's my code:
```
Word.run(function (context) {
// Create a proxy sectionsCollection object.
var mySections = context.document.sections;
// Queue a commmand to load the sections.
context.load(mySections, 'body/style');
// Synchronize the document state by executing the queued commands,
// and return a promise to indicate task completion.
return context.sync().then(function () {
// header
var headerBody = mySections.items[0].getHeader("primary");
// header OOXML
//// NOT GETTING OOXML HERE
var headerOOXML = headerBody.getOoxml();
// Synchronize the document state by executing the queued commands,
// and return a promise to indicate task completion.
return context.sync().then(function () {
// modify header
var headerOOXMLValue = ModifyHeaderMethod(headerOOXML.value);
headerBody.clear();
headerBody.insertOoxml(headerOOXMLValue, 'Start');
// Synchronize the document state by executing the queued commands,
// and return a promise to indicate task completion.
return context.sync().then(function () {
callBackFunc({
isError: false
});
});
});
});
})
```<issue_comment>username_1: You have a lot of extra code here but the gist of your problem is that `headerOOXML` won't be populated until you `sync()`:
```
Word.run(function (context) {
var header = context.document.sections // Grabv
.getFirst() // Get the first section
.getHeader("primary"); // Get the header
var ooxml = header.getOoxml();
return context.sync().then(function () {
console.log(ooxml.value);
});
});
```
Upvotes: 0 <issue_comment>username_2: The "art" of Office.js is to minimize the number of "syncs" you do. I know that is kind of an unnecessary burden, but that's how it is.
With that in mind, In this case you only need ONE sync.
this code works (assuming that you have only one section in the doc).
btw you can try it in script lab with this [yaml](https://gist.github.com/JuaneloJuanelo/d97fad310e27a265f2ac6a76d74c815e).
if this does not work, please indicate if this is Word for Windows (and what build) or Online, or Mac... thanks!
```js
async function run() {
await Word.run(async (context) => {
let myOOXML = context.document.sections.getFirst()
.getHeader("primary").getOoxml();
await context.sync();
console.log(myOOXML.value);
});
}
```
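A loose analogy in Python (purely illustrative — not an Office.js API): the proxy's value behaves like a future that is only populated once you wait for the round trip, just as `getOoxml().value` is only populated after `context.sync()`:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_ooxml():
    # Stand-in for the host application producing the header OOXML.
    return "<w:hdr>...</w:hdr>"

with ThreadPoolExecutor() as pool:
    future = pool.submit(fetch_ooxml)  # like queuing getOoxml()
    value = future.result()            # like awaiting context.sync()

print(value)  # <w:hdr>...</w:hdr>
```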
Upvotes: 3 [selected_answer] |
2018/03/19 | 631 | 2,693 | <issue_start>username_0: I want to be able to detect when the user has lost connection with the server (closed tab, lost internet connection, etc.) I am using stompjs over Sockjs on my client and spring mvc websockets on my server.
How can I detect when the client has lost connection?
Here is how I configure my websocket message broker:
```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfiguration extends AbstractWebSocketMessageBrokerConfigurer {
@Autowired
private TaskScheduler scheduler;
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
config.enableSimpleBroker("/topic").setHeartbeatValue(new long[]{10000, 10000}).setTaskScheduler(scheduler);
config.setApplicationDestinationPrefixes("/app");
}
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
registry.addEndpoint("/web").setAllowedOrigins("*").withSockJS();
}
}
```
And here is my controller class that actually handles the incoming socket messages:
```
@RestController
@CrossOrigin
public class WebMessagingController {
@MessageMapping("/chat/message")
public void newUserMessage(String json) throws IOException {
messagesProcessor.processUserMessage(json);
}
}
```
I know that if i would have used class that extends from **TextWebSocketHandler** i would have bean able to override methods that are being called on connection and disconnection of the client, but i don`t think this is going to work with sock-js client.
Thank you.<issue_comment>username_1: The `StompSubProtocolHandler` implements `afterSessionEnded()` which is called from the `WebSocketHandler.afterConnectionClosed()`. And the former emits an event like this:
```
/**
* Event raised when the session of a WebSocket client using a Simple Messaging
* Protocol (e.g. STOMP) as the WebSocket sub-protocol is closed.
*
* Note that this event may be raised more than once for a single session and
* therefore event consumers should be idempotent and ignore a duplicate event.
*
* @author <NAME>
* @since 4.0.3
*/
@SuppressWarnings("serial")
public class SessionDisconnectEvent extends AbstractSubProtocolEvent {
```
So, what you need is an `ApplicationListener` for this `SessionDisconnectEvent` and all the information is there in the event.
Upvotes: 3 [selected_answer]<issue_comment>username_2: In addition to Artem's response, you can use @EventListener annotation in any **spring bean**:
```
@EventListener
public void onDisconnectEvent(SessionDisconnectEvent event) {
LOGGER.debug("Client with username {} disconnected", event.getUser());
}
```
Upvotes: 3 |
2018/03/19 | 692 | 1,908 | <issue_start>username_0: I have a dataframe that looks like this:
```
Reference | ID | Length
ref101 |123456 | 10
ref101 |123789 | 5
ref202 |654321 | 20
ref202 |653212 | 40
```
I'm trying to determine which row for each row in the `Reference` column has the greatest length (based on the value in the `Length` column). For example, `ref101` with `ID` `123456` is greater in length than `ref101` with ID 123789.
I've been playing around with `.groupby()`, but am getting nowhere. Is there a way of performing this sort of operation in Pandas?<issue_comment>username_1: There is one way from `idxmax`, it will return the index with the max length of each group
```
df.groupby('Reference').Length.idxmax()
Out[495]:
Reference
ref101 0
ref202 3
Name: Length, dtype: int64
```
Or `nlargest`
```
df.groupby('Reference').Length.nlargest(1)
Out[496]:
Reference
ref101 0 10
ref202 3 40
Name: Length, dtype: int64
```
Upvotes: 2 <issue_comment>username_2: If it's the whole row you want, then use `groupby` + `idxmax`:
```
df.loc[df.groupby('Reference').Length.idxmax()]
Reference ID Length
0 ref101 123456 10
3 ref202 653212 40
```
If you want *just* the length, then `groupby` + `max` will suffice:
```
df.groupby('Reference').Length.max()
Reference
ref101 10
ref202 40
Name: Length, dtype: int64
```
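For readers without pandas at hand, the keep-the-max-`Length`-row-per-`Reference` logic can also be sketched in plain Python (illustrative only, using the rows from the question):

```python
rows = [
    ("ref101", 123456, 10),
    ("ref101", 123789, 5),
    ("ref202", 654321, 20),
    ("ref202", 653212, 40),
]

best = {}
for ref, id_, length in rows:
    # Keep the row with the greatest Length seen so far for each Reference.
    if ref not in best or length > best[ref][2]:
        best[ref] = (ref, id_, length)

print(sorted(best.values()))
# [('ref101', 123456, 10), ('ref202', 653212, 40)]
```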
Upvotes: 4 [selected_answer]<issue_comment>username_3: And another one:
```
df = df.sort_values(['Reference', 'Length'], ascending=False).drop_duplicates(['Reference'])
```
Upvotes: 0 <issue_comment>username_4: With `sort_values` on `Length`, followed by `groupby` and taking first one by `head(1)`:
```
result_df = df.sort_values('Length', ascending=False).groupby('Reference').head(1)
print(result_df)
```
Result:
```
Reference ID Length
3 ref202 653212 40
0 ref101 123456 10
```
Upvotes: 0 |
2018/03/19 | 770 | 2,170 | <issue_start>username_0: The user in this quiz can choose a name:
```
Wie is jouw allergrootste vijand
Loki
Joker
Two Face
Red Skull
Thanos
Killmonger
Black Adam
Hela
```
I want to compare this with the names inside an array. My idea was to use the index numbers to compare this, so I tried to use `.indexOf()`, but this is not working.
```
function vijand(){
var vijanden = ['Loki', 'Joker', 'Two Face', 'Red Skull', 'Thanos', 'Killmonger', 'Black Adam', 'Hela'];
var keuze = document.getElementsByClassName('vijand').value;
var vijandNummer;
//The first 4 from the array
if ( keuze === vijanden.indexOf(0,1,2,3)) {
vijandNummer = 1;
}
else if (keuze === vijanden.indexOf(4,5,6,7) ) {
vijandNummer = 0;
}
return vijandNummer;
}
```
What am I doing wrong?
2018/03/19 | 915 | 3,096 | <issue_start>username_0: I have some Apache Parquet file. I know I can execute `parquet file.parquet` in my shell and view it in terminal. But I would like some GUI tool to view Parquet files in more user-friendly format. Does such kind of program exist?<issue_comment>username_1: Actually I found some Windows 10 specific solution. However, I'm working on Linux Mint 18 so I would like to some Linux (or ideally cross-platform) GUI tool. Is there some other GUI tool?
<https://www.channels.elastacloud.com/channels/parquet-net/how-about-viewing-parquet-files>
Upvotes: 2 <issue_comment>username_2: Check out this utility. Works for all windows versions: <https://github.com/mukunku/ParquetViewer>
Upvotes: 3 <issue_comment>username_3: There is a GUI tool to view [Parquet](https://parquet.apache.org/) and also other binary format data like [ORC](https://orc.apache.org/) and [AVRO](https://avro.apache.org/). It's pure Java application so that can be run at Linux, Mac and also Windows. Please check [Bigdata File Viewer](https://github.com/username_3-Mark/bigdata-file-viewer) for details.
It supports complex data type like array, map, struct etc. And you can save the read file in CSV format.
[](https://i.stack.imgur.com/mCU8z.png)
Upvotes: 2 <issue_comment>username_4: JetBrains (IntelliJ, PyCharm etc) has a plugin for this, if you have a professional version: <https://plugins.jetbrains.com/plugin/12494-big-data-tools>
Upvotes: 1 <issue_comment>username_5: GUI option for Windows, Linux, MAC
You can now use **DBeaver** to
* view parquet data
* view metadata and statistics
* run sql query on one or multiple files. (supports glob expressions)
* generate new parquet files.
DBeaver leverages DuckDB driver to perform operations on parquet file. Features like Projection and predicate pushdown are also supported by DuckDB.
Simply [create an in-memory instance of DuckDB](https://duckdb.org/docs/guides/sql_editors/dbeaver) using Dbeaver and run the queries like mentioned in this [document](https://duckdb.org/docs/data/parquet). Right now Parquet and [CSV](https://duckdb.org/docs/data/csv) is supported.
Here is a Youtube video that explains the same - <https://youtu.be/j9_YmAKSHoA>
[](https://i.stack.imgur.com/rIsKv.png)
Upvotes: 4 <issue_comment>username_6: There is the [Tad](https://www.tadviewer.com/) utility, which is cross-platform. It allows you to open Parquet files, pivot them, and export to CSV. It uses DuckDB as its backend. More info on the [DuckDB page](https://duckdb.org/docs/guides/data_viewers/tad):
GH here:
<https://github.com/antonycourtney/tad>
[](https://i.stack.imgur.com/6TISO.gif)
Upvotes: 4 <issue_comment>username_7: There is webassembly viewer which works fully offline: <https://aloneguid.github.io/parquet-online/>
[](https://i.stack.imgur.com/8BIos.png)
Upvotes: 2 |
2018/03/19 | 381 | 960 | <issue_start>username_0: How can i pull random name from the Array?
```
...
$all = "$a1$b1$c1$d1$e1";
$all = print_r(explode("
",$all));
echo $all;
----
Array ( [0] => lizzy [1] => rony [2] => )
```
I need random text to appear in echo<issue_comment>username_1: ```
echo $all[array_rand($all)];
```
This gets a random index in the array and then echoes the value.
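The same idea in Python, for comparison (illustrative only; the names come from the question's array):

```python
import random

vijanden = ["lizzy", "rony"]      # hypothetical array contents from the question
keuze = random.choice(vijanden)   # picks one element at random

print(keuze)  # e.g. 'rony'
assert keuze in vijanden
```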
Upvotes: 2 <issue_comment>username_2: **Get random characters:**
```
rand_chars("ABCEDFG", 10); // Output: GABGFFGCDA
```
**Get random number:**
```
echo rand(1, 10000); // Output: 5482
```
**If you want to have a random string based on given input:**
```
$chars = "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
$clen = strlen( $chars )-1;
$id = '';
$length = 10;
for ($i = 0; $i < $length; $i++)
{
$id .= $chars[mt_rand(0,$clen)];
}
echo ($id); // Output: Gzt6syUS8M
```
**Documentation:** <http://php.net/manual/en/function.rand.php>
Upvotes: 2 |
2018/03/19 | 426 | 1,151 | <issue_start>username_0: I have set a hover effect in my CSS, but it seems to be having no effect. What am I doing wrong?
```css
.button_link {
display: inline-block;
border: solid black 2px;
border-radius: 15px;
background-color: #ddf;
padding: 10px;
text-decoration: none;
}
.button_link#hover {
color: orange;
background-color: #fff;
}
```
```html
[Return to example.com](http://example.com)
aa
```
2018/03/19 | 1,134 | 4,374 | <issue_start>username_0: too many hours wasted because of this bug. I wrote a custom validation that makes sure that the "currency name" is not repeated, the custom validation function called "validateNameUniqness" is calling a http service from a laravel api.
laravel must determine if the name is repeated or not:
if name is not repeated then laravel will return true, else it will return an error message.
in both cases laravel will mark the response with the status "200".
laravel code:
```
public function validateCurrencyNameUniqness($name){
$request = new Request();
$request->replace(['name' => $name]);
$validator = Validator::make($request->all(),[
'name' => 'unique:currencies',
]);
if ($validator->fails()) {
$uniqness_error = $validator->errors()->first('name');
return response()->json($uniqness_error, 200);
}
return response()->json(true, 200);
}
```
angular service will call the laravel api and return Promise:
```
validateCurrencyNameUniqness(name: string): Promise<any> {
    return this.http.get('http://127.0.0.1:8000/api/validateCurrencyNameUniqness/'
        + name)
        .toPromise()
        .then(this.extractData)
        .catch(this.handleError);
}
private extractData(res: Response) {
let body = res.json();
return body || {};
}
private handleError(error: any): Promise<any> {
console.error('An error occurred', error);
return Promise.reject(error.message || error);
}
```
angular custom validation function will return null if name is not repeated, else it will return an object:
```
validateNameUniqness(control: FormControl ): {[key: string]: any} {
let name = control.value;
if(name){
// if name is valid return null, else return error object
this.services.validateCurrencyNameUniqness(name)
.then(result =>{
if(result !== true){
this.unqiness_msg = result;
console.log(this.unqiness_msg );
return {'not_unique': {value: true}}
//return {'not_unique': true};
}
})
.catch(error => console.log(error));
}
return null;
}
```
angular material & html script:
```
{{currency_name.value?.length || 0}}/30
{{getErrorMessage()}}
```
the validation call:
```
name = new FormControl('', [Validators.required,this.validateNameUniqness.bind(this)]);
getErrorMessage() {
if(this.name.hasError('not_unique'))
return this.unqiness_msg;
if (this.name.hasError('required') || this.symbol.hasError('required') || this.decimals_after_point.hasError('required'))
return 'You must enter a value';
}
```
I think that the form is never marked "Invalid" by the uniqueness validation.
Testing shows that the custom validation function logs the error message to the console if the name is repeated, but it never shows the error message to the user!
any help is appreciated.<issue_comment>username_1: For one, this needs to be an async validator, check that if you didn't do it already. For two, your validator function is not returning anything. Return that promise you get from service.
```
if (name) {
// if name is valid return null, else return error object
return this.services.validateCurrencyNameUniqness(name)
```
Also, do not forget to *also* return from both `.then` and `.catch` handlers.
Upvotes: 0 <issue_comment>username_2: you have some small issues in your validator
1. your validator should always return a value
2. your validator should always return promises
3. you should inject the service into your function
your code should look like this (this is my custom validator implementation)
```
export const buidUsermameValidator = (service: UserService, oldUsername?: string) => (username: FormControl): ValidationErrors | null => {
if (username && username.value !== "" && username.value !== oldUsername) {
return service.usermameExist(username.value).toPromise()
.then((result) => {
if (result) {
return { "exist": true }
}
return null
})
.catch(err => { return null })
} else {
return Promise.resolve(null)
}
}
```
when you are going to use your validator, your code should look like this
```
this.userForm = this.fb.group({
"username": ["", Validators.compose([Validators.required]), buidUsermameValidator(this.service)],
})
```
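The validator contract itself — return `null` when the value is valid and an error object when it is not — is language-agnostic. Here is the same shape sketched in Python for clarity (illustrative only; this is not Angular code):

```python
def validate_name_uniqueness(name, existing_names):
    """Mimic the Angular validator contract:
    return None when valid, an error dict when the name is taken."""
    if name in existing_names:
        return {"not_unique": True}
    return None

print(validate_name_uniqueness("USD", {"USD", "EUR"}))  # {'not_unique': True}
print(validate_name_uniqueness("GBP", {"USD", "EUR"}))  # None
```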
Upvotes: 1 |
2018/03/19 | 1,330 | 4,452 | <issue_start>username_0: I have a 'base' that has a `virtual int` auto implemented property called 'id'.
```
class bbase
{
public virtual int id { get; set; }
}
```
When I inherit from this class to a 'derived' class, I get:
```
class Derived : bbase
{
public override int id
{
get
{
return base.id;
}
set
{
base.id = value;
}
}
}
```
After initializing the base and derived classes and assigning the value to base class property like this:
```
bbase b2 = new bbase();
Derived d2 = new Derived();
b2.id = 6;
```
When I try to output the value from derived class it outputs '0':
```
Console.WriteLine(d2.id);
```
My confusion is when I try the same approach with virtual methods it works fine. The derived class method with `base.method()` works fine. It returns whatever is there in the base method but why not with auto-implemented properties?
```
class bbase
{
public virtual void execute() { Console.WriteLine("base.execute2"); }
}
class Derived : bbase
{
public override void execute()
{
base.execute();
}
}
```<issue_comment>username_1: `b2` and `d2` are different objects. You can tell because you did `new` once for each.
If you made the properties `static`, you'd see the results you expect.
Upvotes: 2 <issue_comment>username_2: Both are two separate objects. So what you are seeing is the expected result.
What you are trying to achieve can be done if you re-write your code so that both variables `b2` and `d2` point to the `Derived` class object, like the following:
```
Derived d2 = new Derived();
bbase b2 = d2;
b2.id = 6;
```
Now if you do :
```
Console.WriteLine(d2.id); // prints 6
```
You will see the same value that was set above.
[See DEMO here.](https://dotnetfiddle.net/32PWuG)
Ideally, I think you should not be calling `base` that way in the property getter and setter, since you are not doing anything special with the base value. Instead, just use the default `get` and `set` in the derived class:
```
class Derived : bbase
{
public override int id
{
get;
set;
}
}
```
and now if you write the following:
```
Derived d2 = new Derived();
bbase b2 = d2;
b2.id = 6;
Console.WriteLine("d2:"+d2.id);
Console.WriteLine("b2: "+b2.id);
```
It will output:
>
> d2: 6
>
>
> b2: 6
>
>
>
See the [working DEMO Fiddle](https://dotnetfiddle.net/pXPlQa) to understand it.
Upvotes: 3 <issue_comment>username_3: Actually `b2` and `d2` are different objects.
So if you set `b2.id = 6;`, then `d2.id` remains 0 unless you also set `d2.id = 6;`.
In your virtual method example, the override calls the base method (`base.execute();`), which is why you get the base method's output even though you call it on a `Derived` instance.
Upvotes: 0 <issue_comment>username_4: If your code looked like this:
```
bbase b2 = new bbase();
bbase d2 = new bbase();
b2.id = 6;
Console.WriteLine(d2.id);
```
Would you expect the result to be anything but 0? It is the same situation.
Overriding, hiding and the rest of polymorphism only matter when you **assign a derived class instance to a reference of the base class type**.
```
Derived d2 = new Derived();
d2.id = 6;
bbase b2 = d2;
Console.WriteLine(b2.id);
```
And that kind of assignment happens a lot in any OOP language. It is between 1/2 and 3/4 of the point of using OOP.
Upvotes: 0 <issue_comment>username_5: I think you are missing out on some fundamentals here.
Your `bbase` class defines an *instance level* property "id" as virtual. This means it *can* be overridden:
```
class bbase
{
public virtual int id { get; set; }
}
```
Your `Derived` class overrides the *instance level property* "id":
```
class Derived : bbase
{
public override int id
{
get
{
return base.id;
}
set
{
base.id = value;
}
}
}
```
All this means is that any instance of `Derived` handles the "id" property in this *overridden* way.
When you instantiate your objects:
```
bbase b2 = new bbase();
Derived d2 = new Derived();
```
You are making *instances* of each class. These instances' properties have their own values. The inheritance is not relevant because they are *two separate instances*. Anything you do to `d2` would not be reflected in `b2` because they are different.
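The same two-separate-instances behavior can be sketched in Python (purely illustrative, not the poster's C# code): a subclass overrides a property by delegating to the base, yet assigning through one instance never affects the other.

```python
class Base:
    def __init__(self):
        self._id = 0  # per-instance storage, like the auto-implemented backing field

    @property
    def id(self):
        return self._id

    @id.setter
    def id(self, value):
        self._id = value


class Derived(Base):
    # override that just delegates to the base property, like the C# example
    @property
    def id(self):
        return Base.id.fget(self)

    @id.setter
    def id(self, value):
        Base.id.fset(self, value)


b2 = Base()
d2 = Derived()
b2.id = 6
print(d2.id)  # 0: d2 is a separate instance, so the assignment to b2 is invisible to it
```

The override changes how any single `Derived` instance handles `id`; it does not link different instances together.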
Upvotes: 2 [selected_answer] |
2018/03/19 | 1,592 | 6,403 | <issue_start>username_0: I am trying to toggle visibility of a div in a stateless component like this:
```
const playerInfo = (props) => {
  let isPanelOpen = false;
  return (
    <div>
      <div onClick={() => isPanelOpen = !isPanelOpen}>Toggle</div>
      {isPanelOpen &&
        <div className="info-panel">
          {props.children}
        </div>
      }
    </div>
  );
};
```
I see that the value of `isPanelOpen` changes to `true`, but the panel is not shown. I assume that is because this is a stateless function that doesn't get called again, so once the JSX is returned it keeps the value of `false` and never updates.
Is there a way of fixing this while avoiding passing this single variable as props through 4 more parent stateless components?<issue_comment>username_1: >
> I assume that is because this is the stateless function that doesn't get called again
>
>
>
Basically, the only way to re-render a component is to change state or props. :)
So when you change a local variable, React doesn't get notified about it and doesn't start [reconciliation](https://reactjs.org/docs/glossary.html#reconciliation).
Upvotes: 1 <issue_comment>username_2: You can't tell React to re-render the UI by assigning new value directly to the variable (in your case you did `isPanelOpen = !isPanelOpen`).
The correct method is to use `setState`.
But you cannot do it in a stateless component; you must do it in a stateful component, so your code should look like this:
```
import React, {Component} from 'react';
class playerInfo extends Component {
constructor(props){
super(props);
this.state = {
isPanelOpen: false
}
}
render() {
    return (
        <div>
            <div onClick={() => this.setState({isPanelOpen: !this.state.isPanelOpen})}>Toggle</div>
            {this.state.isPanelOpen &&
                <div>
                    {this.props.children}
                </div>
            }
        </div>
    );
}
}
```
Explanation
-----------
Remember two things:
1) Your UI should only bind to `this.state.XXXX` (for stateful component) or `props.XXX` (for stateless component).
2) The only way to update UI is by calling `setState()` method, no other way will trigger React to re-render the UI.
### But... how do I update stateless component since it doesn't have the `setState` method?
**ONE ANSWER**: The stateless component should be *contained* in another stateful component.
Example
Let's say your stateless component is called `Kid`, and you have another stateful component called `Mum`.
```
import React, {Component} from 'react';
class Mum extends Component {
constructor(props){
super(props);
this.state = {
isHappy: false
}
}
render() {
    return (
        <div>
            <button onClick={() => this.setState({isHappy: true})}>Eat</button>
            <Kid isHappy={this.state.isHappy} />
        </div>
    );
}
}

const Kid = (props) => (props.isHappy ? <p>I'm happy</p> : <p>I'm sad</p>);
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: A functional component will not re-render unless a previous parent's state changes and passes down updated properties that propagate down to your functional component. You can pass an onClick handler in from a stateful parent and call this from your stateless component when its onClick is triggered. The parent will control the toggling of the display and pass it in as a prop to the child (see snippet below).
To architect this, you should determine if your HOC (higher order component) should be in charge of UI state. If so, then it can make the determination if its child component should be in an open state or not and then pass that as a property to the child state. If this is a component that should open and close independent of the world around it, then it should probably have its own state. For example, if you are making a tabbed widget, it should probably be controlling its own open and closed states.
```js
class App extends React.Component {
state= {
isOpen: false
}
handleClick = () => {
this.setState({
isOpen: !this.state.isOpen
})
}
render() {
    return (
      <YourComponent isOpen={this.state.isOpen} handleClick={this.handleClick} />
    )
  }
}
const YourComponent = ({isOpen, handleClick} = props) => {
const onClick = () => {
if (handleClick) {
handleClick();
}
}
  return (
    <div onClick={onClick}>
      {isOpen ?
        <h2>it is open</h2>
        :
        <h2>it is closed</h2>
      }
    </div>
  )
}

ReactDOM.render(<App />, document.getElementById('root'));
```
If your major concern is about passing properties/methods down to too many components, you could create a [pure component](https://60devs.com/pure-component-in-react.html) which will give you access to state but not all the overhead of a React Component subclass. You could also look into using a state management tool like Redux.
Upvotes: 0 <issue_comment>username_4: You can do this with native JavaScript; otherwise, in React you cannot do this with a stateless component :)
```
const playerInfo = (props) => {
  let isPanelOpen = false;
  return (
    <div>
      <div onClick={() => {
        if (document.getElementsByClassName("info-panel")[0].style.display == 'none') {
          isPanelOpen = true;
          document.getElementsByClassName("info-panel")[0].style.display = '';
        } else {
          isPanelOpen = false;
          document.getElementsByClassName("info-panel")[0].style.display = 'none';
        }
      }}>Toggle</div>
      <div className="info-panel">
        {props.children}
      </div>
    </div>
  );
};
```
Upvotes: 1 <issue_comment>username_5: You can do this by using useState hooks like this:
```
import { useState } from "react";
function playerInfo(props) {
  const [panel, setPanel] = useState(false);

  function toggleButton() {
    if (!panel) setPanel(true);
    else setPanel(false);
  }

  return (
    <div>
      <div onClick={toggleButton}>Toggle</div>
      {panel ? <div>{props.children}</div> : null}
    </div>
  );
}
export default playerInfo;
```
Upvotes: 2 <issue_comment>username_6: As of version 16.8 of React you can handle this for stateless components using a hook - in this case `useState`. In its simplest form it can be implemented like this:
```
import React, { useState } from 'react';
const PlayerInfo = (props) => {
const [showPanel, togglePanel] = useState(false);
  return (
    <div>
      <div onClick={() => togglePanel(!showPanel)}>Toggle</div>
      {showPanel && (
        <div>
          {props.children}
        </div>
      )}
    </div>
  );
};
export default PlayerInfo;
```
Upvotes: 1 |
2018/03/19 | 1,910 | 4,497 | <issue_start>username_0: I need to test connection between a server located in my own datacenter and an Amazon RDS instance. I've tried with
```
time telnet <host> 3306
```
but it tracks the time from when I issued the command until I ended it, which is not what I want.
Are there any ways of measuring this?<issue_comment>username_1: My answer does not assume that ICMP ping is allowed, it uses TCP based measures. But you will have to ensure there are security group rules to allow access from the shell running the tests to the RDS instance
First, ensure some useful packages are installed
```
apt-get install netcat-openbsd traceroute
```
Check that basic connectivity works to the database port. This example is for Oracle, ensure you use the endpoint and port from the console
```
nc -vz dev-fulfil.cvxzodonju67.eu-west-1.rds.amazonaws.com 1521
```
Then see what the latency is. The number you want is the final one (step 12)
```
sudo tcptraceroute dev-fulfil.cvxzodonju67.eu-west-1.rds.amazonaws.com 1521
traceroute to dev-fulfil.cvxzodonju67.eu-west-1.rds.amazonaws.com (10.32.21.12), 30 hops max, 60 byte packets
1 pc-0-3.ioppublishing.com (172.16.0.3) 0.691 ms 3.341 ms 3.400 ms
2 10.100.101.1 (10.100.101.1) 0.839 ms 0.828 ms 0.811 ms
3 xe-10-2-0-12265.lon-001-score-1-re1.interoute.net (172.16.31.10) 10.591 ms 10.608 ms 10.592 ms
4 ae0-0.lon-001-score-2-re0.claranet.net (84.233.200.190) 10.575 ms 10.668 ms 10.668 ms
5 ae2-0.lon-004-score-1-re0.claranet.net (84.233.200.186) 12.708 ms 12.734 ms 12.717 ms
6 169.254.254.6 (169.254.254.6) 12.673 ms * *
7 169.254.254.1 (169.254.254.1) 10.623 ms 10.642 ms 10.823 ms
8 * * *
9 * * *
10 * * *
11 * * *
12 * 10.32.21.12 (10.32.21.12) 20.662 ms 21.305 ms
```
A better measure of "latency" might be "the time a typical transaction takes with no or little data to transfer". To do this, write a script that does this in a loop, maybe 1000 times, and then time it with a high precision timer. But the exact details of this vary according to your needs.
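A minimal sketch of that loop-and-timer approach (Python here, purely illustrative; in practice the loop body would be a trivial round trip such as `SELECT 1` against the RDS endpoint):

```python
import time

def average_latency(op, n=1000):
    """Run op() n times and return the mean wall-clock time per call, in seconds."""
    start = time.perf_counter()
    for _ in range(n):
        op()
    return (time.perf_counter() - start) / n

# stand-in workload; replace with a real round trip to your database
per_call = average_latency(lambda: sum(range(100)))
print(f"{per_call * 1e6:.1f} microseconds per call")
```

Averaging over many iterations smooths out scheduler jitter, which a single timed command cannot do.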
Upvotes: 5 [selected_answer]<issue_comment>username_2: Use ping. You will need to [enable ping on your EC2 instance per this answer](https://stackoverflow.com/questions/21981796/cannot-ping-aws-ec2-instance).
Ping will provide a time for each ping in milliseconds:
```
ping 172.16.58.3
PING 172.16.58.3 (172.16.58.3): 56 data bytes
64 bytes from 172.16.58.3: icmp_seq=0 ttl=227 time=68.873 ms
64 bytes from 172.16.58.3: icmp_seq=1 ttl=227 time=68.842 ms
64 bytes from 172.16.58.3: icmp_seq=2 ttl=227 time=68.959 ms
64 bytes from 172.16.58.3: icmp_seq=3 ttl=227 time=69.053 ms
```
Upvotes: -1 <issue_comment>username_3: Time the query. RDS must be hosting a SQL database server, so issue a trivial SQL query to it and time the execution.
For example, if your RDS instance is PostgreSQL, connect using `psql` and enable `\timing`.
>
> psql -h myhost -U myuser
>
>
>
```
postgres=> \timing
Timing is on.
postgres=> SELECT 1;
?column?
----------
1
(1 row)
Time: 14.168 ms
```
The latency is 14.168 ms in this example. Consult the manual for timing your specific SQL server implementation.
Upvotes: 3 <issue_comment>username_4: Usually RDS instances do not respond to ICMP, so we can use TCP for testing. Make sure your server's IP is in the allow list of the RDS firewall before testing.
So we can use `hping3`.
>
> Hping3 is a command-line packet analyzer, packet crafter and testing
> tool for network administrators and penetration testers.
>
>
>
```bash
sudo apt-get install hping3
```
Then run your test like 5 times.
```
sudo hping3 -S -p <port> <host> -c 5
```
Example:
```
ubuntu@ip-172-30-0-70:~$ sudo hping3 -S -p 3306 my.ap-northeast-1.rds.amazonaws.com -c 5
HPING my.ap-northeast-1.rds.amazonaws.com (ens5 54.95.xxx.xxx): S set, 40 headers + 0 data bytes
len=46 ip=54.95.xxx.xxx ttl=253 DF id=0 sport=3306 flags=SA seq=0 win=29200 rtt=3.8 ms
len=60 ip=54.95.xxx.xxx ttl=253 DF id=0 sport=3306 flags=SA seq=0 win=28960 rtt=0.0 ms
len=134 ip=54.95.xxx.xxx ttl=253 DF id=19591 sport=3306 flags=AP seq=0 win=227 rtt=0.0 ms
len=52 ip=54.95.xxx.xxx ttl=253 DF id=19592 sport=3306 flags=A seq=0 win=235 rtt=0.0 ms
len=63 ip=54.95.xxx.xxx ttl=253 DF id=19593 sport=3306 flags=AP seq=0 win=235 rtt=0.0 ms
--- my.ap-northeast-1.rds.amazonaws.com hping statistic ---
1 packets transmitted, 5 packets received, -400% packet loss
round-trip min/avg/max = 3.8/3.8/3.8 ms
```
Upvotes: 0 |
2018/03/19 | 2,167 | 5,348 | <issue_start>username_0: >
> **[IS\_READ\_ONLY](https://developer.android.com/reference/android/provider/ContactsContract.DataColumns.html#IS_READ_ONLY)**
>
>
> flag: "0" by default, "1" if the row cannot be modified or deleted
> except by a sync adapter. See CALLER\_IS\_SYNCADAPTER. Type: INTEGER
> Constant Value: "is\_read\_only"
>
>
>
When I apply the above in my code, I get **-1** as the output for all the contacts. I am using `IS_READ_ONLY` to identify the read-only contacts synced by WhatsApp, PayTM, Duo, etc.
```
Cursor curContacts = cr.query(ContactsContract.Contacts.CONTENT_URI, null, null, null, null);
if (curContacts != null) {
while (curContacts.moveToNext()) {
int contactsReadOnly = curContacts.getColumnIndex(ContactsContract.Data.IS_READ_ONLY);
Log.d(Config.TAG, String.valueOf(contactsReadOnly));
}
}
```
**Output**
```
-1
-1
-1
```
I have also tried the line below instead of `Data.IS_READ_ONLY`, but the output is the same.
```
int contactsReadOnly = curContacts.getColumnIndex(ContactsContract.RawContacts.RAW_CONTACT_IS_READ_ONLY);
```
|
2018/03/19 | 894 | 3,063 | <issue_start>username_0: I've done some time-series forecasting in R with ARIMA, which predicts a value at a future timepoint given a series of continuous values, but I'm not sure how to approach time-series prediction when dealing with categorical values.
Given these simple training sequences for 5 people's morning dress routine, how can I generate predictions for the final two entries for person6?
```
person1 <- c("underwear", "socks", "pants", "shirt", "tie", "shoes", "jacket")
person2 <- c("underwear", "pants", "socks", "shirt", "tie", "jacket", "shoes")
person3 <- c("socks", "underwear", "pants", "shirt", "tie", "shoes", "jacket")
person4 <- c("underwear", "socks", "shirt", "pants", "tie", "shoes", "jacket")
person5 <- c("underwear", "socks", "shirt", "tie", "pants", "jacket", "shoes")
person6 <- c("underwear", "socks", "pants", "shirt") # Predict next events
```
Thanks in advance!<issue_comment>username_1: Well this is definitely more a graph / markov chain type of problem than what you call time series.
My guess is that the values you should expect are "tie", "shoes", "jacket".
All vectors have the same elements "underwear", "socks", "pants", "shirt", "tie", "shoes", "jacket" occurring once.
You can set logical constraints on the order (you cannot tie a tie without putting on a shirt first; same for the jacket).
3 out of the 5 people present the 3 elements in the order "tie", "shoes", "jacket".
I would be very interested in a formal general solution of such problems !
Upvotes: 1 <issue_comment>username_2: Using the markovchain package we can estimate the next state given the current state using the estimated transition matrix. First create an n x 2 matrix `m` of transition pairs (one per row) and then perform the estimation giving `est`. From that we can form our prediction using `predict` and `plot` the transition matrix.
```
library(markovchain)
L <- list(person1, person2, person3, person4, person5)
m <- do.call("rbind", lapply(L, function(x) cbind(head(x, -1), tail(x, -1))))
mc <- markovchainFit(m)
est <- mc$estimate
est # show transition matrix
# ...snip...
# estimate next step after "shirt"
person6 <- c("underwear", "socks", "pants", "shirt")
prior_state <- tail(person6, 1)
predict(est, prior_state)
## [1] tie
plot(est)
```
(continued after plot)
[](https://i.stack.imgur.com/fVoXK.png)
A variation of the above is to consider the distribution conditional on no state that has appeared appearing again. Here `allowed` is those states that do not appear in `person6`. The prediction in this case is still `"tie"` but had the prediction from the original markov chain above been a state that occurred in person6 then the prediction would have been different. This prediction no longer has the markov property since the prediction of the next state depends on all states that have occurred so far.
```
allowed <- setdiff(states(est), person6)
names(which.max(conditionalDistribution(est, tail(person6, 1))[allowed]))
## [1] "tie"
```
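For readers without R, the same first-order transition-count idea can be sketched in Python (illustrative only; the function names are mine, not the markovchain package's):

```python
from collections import Counter, defaultdict

def fit_transitions(sequences):
    """Count first-order transitions a -> b across all sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, state, exclude=()):
    """Most frequent successor of state, optionally skipping already-seen items."""
    options = {s: c for s, c in counts[state].items() if s not in exclude}
    return max(options, key=options.get) if options else None

people = [
    ["underwear", "socks", "pants", "shirt", "tie", "shoes", "jacket"],
    ["underwear", "pants", "socks", "shirt", "tie", "jacket", "shoes"],
    ["socks", "underwear", "pants", "shirt", "tie", "shoes", "jacket"],
    ["underwear", "socks", "shirt", "pants", "tie", "shoes", "jacket"],
    ["underwear", "socks", "shirt", "tie", "pants", "jacket", "shoes"],
]
counts = fit_transitions(people)
person6 = ["underwear", "socks", "pants", "shirt"]
print(predict_next(counts, person6[-1], exclude=person6))  # "tie": 4 of 5 people follow "shirt" with it
```

Passing `exclude=person6` mirrors the "conditional on no seen state reappearing" variation above; as noted there, the result then depends on the whole history, not just the current state.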
Upvotes: 3 |
2018/03/19 | 1,385 | 5,090 | <issue_start>username_0: I have the following, which works fine. It generates a unique random number for a given empty array, with a max determined by another array's (data) length. I would like to add a check that does the following:
* when the array length equals maxN, I want to store the last value of the array in a variable, so that if it equals a newly generated random number I will call `generateRandomNumber(array, maxN)` again.
```
const generateRandomNumber = (array, maxN, lastN) => {
let randomN = Math.floor(Math.random() * maxN) + 0;
console.log(lastN)
if(lastN == randomN) {
// do your thing
}
if(array.includes(randomN)) {
return generateRandomNumber(array, maxN, lastN);
}
if(array.push(randomN) == maxN) {
lastN = array.length - 1
array.length = 0;
}
return randomN
}
export default generateRandomNumber
```
However, I am always getting `undefined` inside the `console.log`. I am passing `lastN` like so:
`let lastN;`
I would think that that value, which is `undefined` at first, would later get updated inside:
```
if(array.push(randomN) == maxN) {
lastN = array.length - 1
array.length = 0;
}
```
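A note on why `lastN` stays `undefined` (sketched in Python, where the mechanism is the same as in JavaScript): assigning to a function parameter only rebinds the local name; the caller's variable is never updated.

```python
def bump(last_n):
    last_n = 99  # rebinds the local name only; the caller's variable is untouched
    return last_n

outer = None  # plays the role of `let lastN;`
bump(outer)
print(outer)  # still None: the assignment inside bump never reaches outer
```

In the JavaScript above, `lastN = array.length - 1` behaves the same way: it updates the parameter, not the outer `let lastN`, so one fix is to update the outer variable directly (or return the new value) instead of reassigning the parameter.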
component where generateRandomNumber is used:
```
...
const utilityArray = []
const tempQuestions = []
let lastN
class App extends Component {
constructor(props) {
super(props);
this.state = {
collection: gridItemsCollection,
intro: false,
instructions: false,
grid: true,
questions: this.props.questions,
selectedQuestion: ""
}
}
getRandomN = (arr, max, lastN) => {
let s = generateRandomNumber(arr, max, lastN)
return s
}
hideGridItem(e) {
let //index = this.getRandomN(utilityArray, gridItemsCollection.length),
collection = this.state.collection,
newCollection,
//updatedGridItem = collection[index].hidden = true,
questions = this.state.questions.questions,
n = this.getRandomN(tempQuestions, questions.length, lastN);
console.log(lastN)
// this.setState({
// newCollection: [ ...collection, updatedGridItem ]
// })
// if(this.getAnswer(e)) {
this.generateNewQuestion(questions[n])
// }
// else {
// console.log('no')
// }
}
generateNewQuestion(selectedQuestion) {
this.setState({
selectedQuestion
})
}
componentDidMount = () => {
const questions = this.state.questions.questions
let randomNumber = this.getRandomN(tempQuestions, questions.length, lastN)
this.generateNewQuestion(questions[randomNumber])
}
getAnswer = (e) =>
e.target.getAttribute('data-option') == this.state.selectedQuestion.correct_option
render() {
const state = this.state
const { collection, grid, intro, selectedQuestion } = state
console.log(tempQuestions)
return (
      <div>intro screen</div>
    );
}
}
export default App;
```
|
2018/03/19 | 1,391 | 5,105 | <issue_start>username_0: Could somebody explain the difference, in this example of operator overloading, between returning a reference to `Average` (`Average& operator+=(int num)`) and not returning one (`Average operator+=(int num)`)?
Does returning a reference mean returning a reference to the object being assigned to (not a copy), i.e. `*this`? So in this case, a reference to the object `avg` is returned.
How/why does the non reference version work? Where is the result being copied?
```
#include <iostream>
#include <cstdint> // for fixed width integers

class Average
{
private:
    int32_t m_total = 0; // the sum of all numbers we've seen so far
    int8_t m_numbers = 0; // the count of numbers we've seen so far

public:
    Average()
    {
    }

    friend std::ostream& operator<<(std::ostream &out, const Average &average)
    {
        // Our average is the sum of the numbers we've seen divided by the count of the numbers we've seen
        // We need to remember to do a floating point division here, not an integer division
        out << static_cast<double>(average.m_total) / average.m_numbers;
        return out;
    }

    // Because operator+= modifies its left operand, we'll write it as a member
    Average& operator+=(int num)
    {
        // Increment our total by the new number
        m_total += num;

        // And increase the count by 1
        ++m_numbers;

        // return *this in case someone wants to chain +='s together
        return *this;
    }
};

int main()
{
    Average avg;
    avg += 4;
    std::cout << avg << '\n';

    return 0;
}
```<issue_comment>username_1: >
> Does returning a reference mean return a reference to the object being assigned to (not a copy) which is \*this. So in this case return a reference to the object avg.
>
>
>
Yes.
>
> How/why does the non reference version work? Where is the result being copied?
>
>
>
The returned value is a temporary object that is passed to `std::cout <<`.
It's similar to using:
```
int foo()
{
return 10;
}
std::cout << foo() << std::endl;
```
The return value of `foo` is a temporary object that is passed to `std::cout <<`.
Upvotes: 0 <issue_comment>username_2: You should return a reference because that's the convention most code in the standard library uses, and most programmers follow it. Most programmers will expect the code below:
```
std::string s;
(s += "a") = "b";
std::cout << s << std::endl;
```
to print `b`, or in this example:
```
int n = 10;
(n += 20) = 100;
std::cout << n << std::endl;
```
will expect `100` to be printed.
That is why you should return a reference: to keep with the convention, which allows modifying the object on the left side of the assignment.
Otherwise, if you return by value (a copy), an assignment like the ones in the examples above will assign to a temporary.
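As a cross-language aside (Python, purely illustrative): the same convention appears as returning `self` from `__iadd__`, the analogue of `return *this`; leave it out and `avg += 4` rebinds `avg` to `None`.

```python
class Average:
    def __init__(self):
        self.total = 0
        self.count = 0

    def __iadd__(self, num):
        self.total += num
        self.count += 1
        return self  # analogue of C++ `return *this`; omitting it would leave avg as None after `avg += 4`

    def __str__(self):
        return str(self.total / self.count)

avg = Average()
avg += 4
avg += 8
print(avg)  # 6.0
```

So in both languages, compound assignment is expected to hand back the object it just modified.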
Upvotes: 3 [selected_answer]<issue_comment>username_3: When overriding operators in C++, one can provide them in multiple forms. You can define assignment operators which do not return a reference to the modified object, and they would work as expected. But, we have for each operator what we call [a canonical implementation](http://en.cppreference.com/w/cpp/language/operators#Canonical_implementations):
>
> Other than the restrictions above, the language puts no other constraints on what the overloaded operators do, or on the return type (it does not participate in overload resolution), but in general, overloaded operators are expected to behave as similar as possible to the built-in operators: `operator+` is expected to add, rather than multiply its arguments, `operator=` is expected to assign, etc. The related operators are expected to behave similarly (`operator+` and `operator+=` do the same addition-like operation). The return types are limited by the expressions in which the operator is expected to be used: for example, assignment operators return by reference to make it possible to write `a = b = c = d`, because the built-in operators allow that.
>
>
> Commonly overloaded operators have the following typical, canonical forms.
>
>
>
There is a lot to read about those canonical forms, but I'd suggest to start with this really good SO answer on [*What are the basic rules and idioms for operator overloading?*](https://stackoverflow.com/a/4421708/5470596)
---
>
> How/why does the non reference version work?
>
>
>
Because even if the C++ Standard encourages you to use the canonical forms, it doesn't forbid you not to.
>
> Where is the result being copied?
>
>
>
Nowhere, the value is discarded. Implementation will probably optimize them away.
Upvotes: 2 <issue_comment>username_4: Try to check this [similar question](https://stackoverflow.com/a/6375794/8300627).
When you return a reference, you are passing the actual object, while when you return by value, a temporary object is created and passed to the caller.
So if you return without a reference, this assignment may still work, as in your code segment above:
```
Average aa = avg += 4;
```
But it will fail to compile if you try:
```
Average *ptr = &(avg += 4);
```
The above code will work if you return a reference, though, since then you are taking the address of a valid, persistent object.
Upvotes: 0 |
2018/03/19 | 1,093 | 4,083 | <issue_start>username_0: The title may seem weird, but I'll try to explain it briefly... In my admin section I extend a layout with a sidebar displayed as a Bootstrap list on every page.
Depending on the page I'm on, I want to put an active class on a specific li, but I'm not sure what the usual method is to do this in a Laravel project.
I thought about putting a js script in every page, but that's not smart at all...
So i would like your advices guys, thanks.<issue_comment>username_1: >
> Does returning a reference mean return a reference to the object being assigned to (not a copy) which is \*this. So in this case return a reference to the object avg.
>
>
>
Yes.
>
> How/why does the non reference version work? Where is the result being copied?
>
>
>
The returned value is a temporary object that is passed to `std::cout <<`.
It's similar to using:
```
int foo()
{
return 10;
}
std::cout << foo() << std::endl;
```
The return value of `foo` is a temporary object that is passed to `std::cout <<`.
Upvotes: 0 <issue_comment>username_2: You should return a reference because thats a convention most code in standard library is using, and most programmers do. Most programmers will expect below code:
```
std::string s;
(s += "a") = "b";
std::cout << s << std::endl;
```
to print `b`, or in this example:
```
int n = 10;
(n += 20) = 100;
std::cout << n << std::endl;
```
will expect `100` to be printed.
That is why you should return a reference: it keeps with the convention, which allows modifying the object on the left side of the assignment.
Otherwise, if you return by value (a copy), an assignment as in the above examples will assign to a temporary.
Upvotes: 3 [selected_answer]<issue_comment>username_3: When overriding operators in C++, one can provide them in multiple forms. You can define assignment operators which do not return a reference to the modified object, and they would work as expected. But, we have for each operator what we call [a canonical implementation](http://en.cppreference.com/w/cpp/language/operators#Canonical_implementations):
>
> Other than the restrictions above, the language puts no other constraints on what the overloaded operators do, or on the return type (it does not participate in overload resolution), but in general, overloaded operators are expected to behave as similar as possible to the built-in operators: `operator+` is expected to add, rather than multiply its arguments, `operator=` is expected to assign, etc. The related operators are expected to behave similarly (`operator+` and `operator+=` do the same addition-like operation). The return types are limited by the expressions in which the operator is expected to be used: for example, assignment operators return by reference to make it possible to write `a = b = c = d`, because the built-in operators allow that.
>
>
> Commonly overloaded operators have the following typical, canonical forms.
>
>
>
There is a lot to read about those canonical forms, but I'd suggest to start with this really good SO answer on [*What are the basic rules and idioms for operator overloading?*](https://stackoverflow.com/a/4421708/5470596)
---
>
> How/why does the non reference version work?
>
>
>
Because even if the C++ Standard encourages you to use the canonical forms, it doesn't forbid you not to.
>
> Where is the result being copied?
>
>
>
Nowhere, the value is discarded. Implementation will probably optimize them away.
Upvotes: 2 <issue_comment>username_4: Try to check this [similar question](https://stackoverflow.com/a/6375794/8300627).
When you return a reference, you are actually passing the actual object, while when you return by value, a temporary object is created and then passed to the caller.
So if you return without a reference, this assignment may still work, as in your code segment above.
```
Average aa = avg += 4;
```
But it will fail to compile if you try.
```
Average *ptr = &(avg += 4);
```
The above code will work if you return a reference, though, since the returned reference refers to an object that is still in scope.
Upvotes: 0 |
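Coming back to the Blade sidebar question at the top of this thread: one common Laravel pattern (sketched with an illustrative `admin/users` route; `request()->is()` matches the current URL path against a wildcard) is to compute the class inline in the shared sidebar partial:

```
<li class="{{ request()->is('admin/users*') ? 'active' : '' }}">
    <a href="{{ url('admin/users') }}">Users</a>
</li>
```

This keeps the logic in the layout itself, so no per-page JavaScript is needed.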
2018/03/19 | 2,374 | 8,311 | <issue_start>username_0: I'm trying to save current logs to the catalina log with appended current date.
Log4j2.xml looks like:
```
xml version="1.0" encoding="UTF-8"?
```
but is saved as `catalina.log` without current date.
I have moved from log4j where I have properties which were doing that job:
```
log4j.appender.CATALINA=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.CATALINA.RollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.CATALINA.RollingPolicy.FileNamePattern=${catalina.base}/logs/catalina.%d{yyyy-MM-dd}.log
```
I'm using log4j2 version 2.3 because the code uses JDK 6.
Also, per Vasan's suggestion, I went to `catalina.properties` and removed `log4j*.jar` from the `jarsToSkip` property, but the current date is still not appended to the current catalina log.
Tomcat startup logs:
```
Mar 19, 2018 3:00:31 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 619 ms
Mar 19, 2018 3:00:31 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
Mar 19, 2018 3:00:31 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.22
Mar 19, 2018 3:00:31 PM org.apache.catalina.startup.HostConfig deployWAR
INFO: Deploying web application archive monitor.war
Mar 19, 2018 3:00:31 PM org.apache.catalina.loader.WebappClassLoader validateJarFile
INFO: validateJarFile(/usr/local//apache-tomcat-7.0.22/webapps/msg-monitor/WEB-INF/lib/javax.servlet-3.0.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class
Mar 19, 2018 3:00:31 PM org.apache.catalina.loader.WebappClassLoader validateJarFile
INFO: validateJarFile(/usr/local/apache-tomcat-7.0.22/webapps/msg-monitor/WEB-INF/lib/servlet-api-2.5.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class
Mar 19, 2018 3:00:31 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Mar 19, 2018 3:00:31 PM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization started
Mar 19, 2018 3:00:31 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.web.context.support.XmlWebApplicationContext@5e1558dc: display name [Root WebApplicationContext]; startup date [Mon Mar 19 15:00:31 EDT 2018]; root of context hierarchy
Mar 19, 2018 3:00:31 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from ServletContext resource [/WEB-INF/applicationContext.xml]
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Mar 19, 2018 3:00:32 PM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
INFO: Bean factory for application context [org.springframework.web.context.support.XmlWebApplicationContext@5e1558dc]: org.springframework.beans.factory.support.DefaultListableBeanFactory@2e7cf883
Mar 19, 2018 3:00:32 PM org.springframework.core.io.support.PropertiesLoaderSupport loadProperties
INFO: Loading properties file from class path resource [properties/monitor.properties]
Mar 19, 2018 3:00:32 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization
INFO: Bean 'org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter#718ddc48' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
Mar 19, 2018 3:00:32 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization
INFO: Bean 'dataSource' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
Mar 19, 2018 3:00:32 PM org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean createNativeEntityManagerFactory
INFO: Building JPA container EntityManagerFactory for persistence unit 'direct-msg-monitor-store'
Mar 19, 2018 3:00:33 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization
INFO: Bean 'entityManagerFactory' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
Mar 19, 2018 3:00:33 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization
INFO: Bean 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
Mar 19, 2018 3:00:33 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization
INFO: Bean 'org.springframework.transaction.config.internalTransactionAdvisor' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
Mar 19, 2018 3:00:33 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@2e7cf883: defining beans [org.springframework.context.annotation.internalPersistenceAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,txJSONProvider,healthCheckResource,txsResource,aggregationDAOImpl,notificationDuplicationDAOImpl,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0,entityManagerFactory,dataSource,transactionManager,notificationDuplicationDAO,duplicationStateManager,org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0,generalTimeoutCondition,reliableTimeoutCondition,varaiableTimeoutCondition,generalCompletionCondition,reliableCompletionCondition,variableCompletionCondition,aggregationStrategy,msgIdCorrelator,textAssemblerFactory,textBodyGenerator,dsnGenerator,dsnMessageProcessor,smtpClientFactory,dsnSender,aggregationDAO,directMonitoringRepo,msgMonitoringProducerTemplate,consumerTemplate,camel-1:beanPostProcessor,camel-1]; root of factory hierarchy
Mar 19, 2018 3:00:34 PM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization completed in 2804 ms
Mar 19, 2018 3:00:35 PM org.apache.catalina.util.SessionIdGenerator createSecureRandom
INFO: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [1,265] milliseconds.
Mar 19, 2018 3:00:36 PM com.sun.jersey.spi.spring.container.servlet.SpringServlet getContext
INFO: Using default applicationContext
Mar 19, 2018 3:00:36 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.6 03/25/2011 01:14 PM'
Mar 19, 2018 3:00:36 PM org.apache.catalina.startup.HostConfig deployWAR
INFO: Deploying web application archive service.war
```
Any clue what I have set wrong?<issue_comment>username_1: This version of tomcat skips scanning for any JAR matching the pattern log4j\*.jar. The pattern is statically configured in catalina.properties (in `tomcat.util.scan.DefaultJarScanner.jarsToSkip` property) that is bundled with tomcat.
Due to this, it does not read the log4j2 ServletContainerInitializer which sets log4j2 up for web applications. So you'll need to change the properties file to remove the log4j\* pattern.
[Reference](https://logging.apache.org/log4j/2.x/manual/webapp.html)
Upvotes: 1 <issue_comment>username_2: Try below configuration for printing current date in log file name -
```
```
Upvotes: 3 [selected_answer] |
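For reference, a minimal log4j2 configuration in this spirit (the names, paths, and layout pattern here are illustrative) is a `RollingFile` appender whose `filePattern` carries the date:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <RollingFile name="CATALINA"
                 fileName="${sys:catalina.base}/logs/catalina.log"
                 filePattern="${sys:catalina.base}/logs/catalina.%d{yyyy-MM-dd}.log">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n"/>
      <Policies>
        <!-- roll over once per day, as dictated by %d{yyyy-MM-dd} above -->
        <TimeBasedTriggeringPolicy/>
      </Policies>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="CATALINA"/>
    </Root>
  </Loggers>
</Configuration>
```

With this setup the active file stays `catalina.log`, and it is renamed to `catalina.yyyy-MM-dd.log` when the day rolls over.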
2018/03/19 | 507 | 2,182 | <issue_start>username_0: I have a server running NodeJs and expressjs.
It was serving a static website through port 80 fine with the majority of the content and static files being served from a public directory.
I made changes on my personal computer where I created my application (things like changing type and copy and stylesheets etc)
I then went and deleted the folder on my server where I was hosting the nodejs and expressjs app.
I restarted pm2, and then for some reason, even though I deleted the original website, I am still being served the original public website files mixed with some of my newly uploaded files. I deleted all of those, so I am confused about how or where it is getting these deleted files. I physically deleted the app from the server, but it's as if I never did. I also restarted pm2 several times.
To be clear I see some of the changes I made but I'm still seeing content that I thought I deleted. When you delete an application from a Digital Ocean nodejs server does it cache files somewhere?
Any help with why I am still seeing old files I deleted would be greatly appreciated.<issue_comment>username_1: Try [nodemon](https://github.com/remy/nodemon); it will restart your application every time you make a change.
Upvotes: 1 <issue_comment>username_2: It sounds to me like it might be a browser caching issue. Make sure to clear your browser cache to ensure you’re always dealing with “live” results.
If this doesn’t help, the only other possibility I can think of is that when you “restart pm2”, you failed to gracefully stop the previous run of your program, perhaps by killing `pm2` with a CTRL-C, which could leave a spawned sub-process running.
Use `ps` or the equivalent for your particular operating system to ensure you don’t have any of these unmanaged or “dangling” server processes still running.
Upvotes: 0 <issue_comment>username_3: It's been quite a long time and I see there are not many answers. I had the same problem yesterday: I uploaded new files and my website was still showing old files. My solution was to flush my public DNS cache here:
<https://developers.google.com/speed/public-dns/cache>
Upvotes: 0 |
2018/03/19 | 478 | 1,867 | <issue_start>username_0: I'm trying to get a grip on ServiceBus architecture and I am using RawRabbit 1.10.4.
For this I am following an online course where RawRabbit is used.
In the Registration of Command Handlers, the following extension method is created, where the method ctx.UseConsumerConfiguration is called.
The Extension method should be in RawRabbit.Pipe namespace, which is not available.
What is the nuget package I need to install, to get the extension methods, or has this extension method / namespace been removed in Version 1.10.4?
```
public static class Extensions
{
public static Task WithCommandHandlerAsync(this IBusClient bus,
ICommandHandler handler)
where TCommand : ICommand
=> bus.SubscribeAsync(msg => handler.HandleAsync(msg),
ctx => ctx.UseConsumerConfiguration(cfg => cfg.FromDeclaredQueue(q => q.WithName("XXX"))));
}
```<issue_comment>username_1: I think you need to install RawRabbit 2.0; the latest version shown in the NuGet Package Manager is only 1.10.4.
However, you can use the Package Manager Console to install version 2 with the following syntax:
```
Install-Package RawRabbit -Version 2.0.0-beta9
```
Hope this helps
Upvotes: 4 [selected_answer]<issue_comment>username_2: Actually you have to install the nuget package RawRabbit.Compatibility.Legacy
then reference into your class the namespace `RawRabbit.Compatibility.Legacy`
or install the package **RawRabbit.Operations.Subscribe** if you don't want to work with legacy code
Use `UseSubscribeConfiguration` instead of `UseConsumerConfiguration`
Upvotes: 0 <issue_comment>username_3: On the "Add Packages" dialog there is a "Show pre-release packages" checkbox you can tick, so the "Version" combobox will show the beta and rc releases as well.
[](https://i.stack.imgur.com/IjTQS.png)
Upvotes: 0 |
2018/03/19 | 1,434 | 5,493 | <issue_start>username_0: I've set up AWS API Gateway to pass through requests to a service that returns images.
[](https://i.stack.imgur.com/QWCpW.png)
When I use the "Test" functionality in the UI, the logs show the PNG data being returned in the method response, as well as the `Content-Type=image/png:
[](https://i.stack.imgur.com/eIpib.png)
However, when you actually go and visit the endpoint in a browser, the `Content-Type` is `application/json`.
I would have expected that the "Method response headers" displayed in the logs of the "Test" UI to match what would actually be returned.
How do I force API Gateway to return the upstream's Content-Type (`image/png` in this case, but others more generally) to the browser?
Here is the endpoint as defined in the Swagger 2.0 syntax:
```
"/format/{id}/image.png": {
"get": {
"tags": [],
"summary": "",
"deprecated": true,
"operationId": "get-png",
"produces": [
"image/png"
],
"parameters": [
{
"name": "id",
"in": "path",
"description": "My Description",
"required": true,
"type": "string"
}
],
"responses": {
"200": {
"description": "Successful operation",
"schema": {
"type": "file"
},
"headers": {
"Access-Control-Allow-Origin": {
"type": "string",
"description": "URI that may access the resource"
},
"Content-Type": {
"type": "string",
"description": "Response MIME type"
}
}
}
},
"x-amazon-apigateway-integration": {
"responses": {
"default": {
"statusCode": "200",
"responseParameters": {
"method.response.header.Access-Control-Allow-Origin": "'*'",
"method.response.header.Content-Type": "integration.response.header.Content-Type"
}
}
},
"requestParameters": {
"integration.request.path.id": "method.request.path.id"
},
"uri": "https://[image_service]/{id}.png",
"passthroughBehavior": "when_no_match",
"httpMethod": "GET",
"type": "http"
}
}
}
```
Notes:
* This endpoint is somewhat simplified (but still illustrates the problem). However in reality, there is more to the endpoint (ie. I'm not just proxying the requests, but also rewriting paths + query params).
+ As noted in [this answer](https://stackoverflow.com/a/31630676/6460914), if your endpoint is just proxing requests to an image server, you should probably use AWS CloudFront instead. It has edge caching included in its price, and is ~3x cheaper.<issue_comment>username_1: If you want to rewrite paths/query params and/or HTTP responses, you could use AWS lambda to listen for "client response" events coming from your upstream web server, and set the final HTTP response headers, etc. in there.
Upvotes: 0 <issue_comment>username_2: It turned out that I was missing two things:
First, I needed to change the list of types AWS will send to the upstream in the "Accept" header:
```
"x-amazon-apigateway-binary-media-types" : [
"image/png"
]
```
Secondly, I needed to set the Integration Response to "Convert to binary (if needed)":
```
"contentHandling": "CONVERT_TO_BINARY"
```
Here is the modified configuration:
```
{
"swagger": "2.0",
"info": {
"description": "My description",
"title": "My Title",
"version": "1.0.0"
},
"schemes": [
"https",
"http"
],
"paths": {
"/format/{id}/image.png": {
"get": {
"tags": [],
"summary": "My Description",
"deprecated": true,
"operationId": "get-png",
"produces": [
"image/png"
],
"parameters": [
{
"name": "id",
"in": "path",
"description": "",
"required": true,
"type": "string"
}
],
"responses": {
"200": {
"description": "Successful operation",
"schema": {
"type": "file"
},
"headers": {
"Access-Control-Allow-Origin": {
"type": "string",
"description": "URI that may access the resource"
},
"Content-Type": {
"type": "string",
"description": "Response MIME type"
}
}
}
},
"x-amazon-apigateway-integration": {
"responses": {
"default": {
"statusCode": "200",
"responseParameters": {
"method.response.header.Content-Type": "integration.response.header.Content-Type",
"method.response.header.Access-Control-Allow-Origin": "'*'"
},
"contentHandling": "CONVERT_TO_BINARY"
}
},
"requestParameters": {
"integration.request.path.id": "method.request.path.id"
},
"uri": "https://img.shields.io/pypi/format/{id}.png",
"passthroughBehavior": "when_no_match",
"httpMethod": "GET",
"type": "http"
}
}
}
},
"definitions": {},
"x-amazon-apigateway-binary-media-types" : [
"image/png"
]
}
```
Upvotes: 4 [selected_answer] |
2018/03/19 | 672 | 2,309 | <issue_start>username_0: My node.js application needs to upload files to S3. Most of the time, it will upload the file to an existing bucket. But sometimes, the bucket will need to be created first. Is there a way to check whether the bucket already exists, and if not, create it before initiating the upload? Here's what I've got so far:
```
function uploadit () {
'use strict';
console.log('Preparing to upload the verse.')
var s3 = new AWS.S3({apiVersion: '2006-03-01'});
var uploadParams = {Bucket: 'DynamicBucketName', key: '/test.mp3', Body: myvalue, ACL: 'public-read'};
var file = 'MyVerse.mp3';
var fs = require('fs');
var fileStream = fs.createReadStream(file);
fileStream.on('error', function(err) {
console.log('File Error', err);
});
uploadParams.Body = fileStream;
var path = require('path');
uploadParams.Key = book.replace(/ /g, "")+reference.replace(/ /g, "")+"_user"+userid+".mp3";
// call S3 to retrieve upload file to specified bucket
s3.upload (uploadParams, function (err, data) {
if (err) {
console.log("Error", err);
} if (data) {
console.log("Upload Success", data.Location);
linkit();
addanother();
}
});
}
```<issue_comment>username_1: [waitFor](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#waitFor-property) should be able to tell you whether a bucket does not exist.
```
var params = {
Bucket: 'STRING_VALUE' /* required */
};
s3.waitFor('bucketNotExists', params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
```
If a bucket does not exist, then you can create it.
Hope it helps.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can put the create\_bucket in a try catch block in python or you can list\_buckets and search for a match.
```
import boto3
s3 = boto3.client('s3', region_name='us-east-1',
# Set up AWS credentials
aws_access_key_id=AWS_KEY_ID,
aws_secret_access_key=AWS_SECRET)
response = s3.list_buckets()
buckets = [bucket['Name'] for bucket in response['Buckets'] if bucket['Name']==bucket_name]
if len(buckets)==0:
    s3.create_bucket(Bucket=bucket_name)
```
Upvotes: 0 |
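Tying the answers back to the asker's Node.js code, here is one possible sketch. It uses `headBucket` (a single existence check that errors with a 404 when the bucket is missing) rather than `waitFor`; the client is passed in as a parameter purely so the flow can be demonstrated with a stub, and in the real app you would pass `new AWS.S3({apiVersion: '2006-03-01'})`:

```javascript
// Ensure the bucket exists, creating it when the HEAD check reports 404,
// then hand control back so the caller can run s3.upload(uploadParams, ...).
function ensureBucket(s3, bucket, callback) {
  s3.headBucket({ Bucket: bucket }, function (err) {
    if (!err) return callback(null);              // bucket exists and is accessible
    if (err.statusCode === 404) {                 // missing: create it first
      return s3.createBucket({ Bucket: bucket }, callback);
    }
    callback(err);                                // e.g. 403: owned by someone else
  });
}

// Tiny stand-in for the AWS SDK client, for demonstration only.
var created = [];
var fakeS3 = {
  headBucket: function (params, cb) { cb({ statusCode: 404 }); },
  createBucket: function (params, cb) { created.push(params.Bucket); cb(null); }
};

ensureBucket(fakeS3, 'my-bucket-name', function (err) {
  console.log('error:', err, 'buckets created:', created);
});
```

Wrapping the existence check in a function like this keeps the upload code itself unchanged: call `s3.upload` from inside the callback once `ensureBucket` reports success.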
2018/03/19 | 592 | 2,026 | <issue_start>username_0: Currently I have one simple program with 2 player and they can play some calculation card game.
The player turn function I am currently using is:
```
int turn=0;
while(!gameCheck(p1,p2)) {
if(turn%2==0) {
plyGame(p1);}
else {
plyGame(p2);}
turn++;
}
```
If, let's say, I want to allow 4 players, the code becomes like this:
```
int turn=1;
while(!gameCheck(p1,p2,p3,p4)) {
if(turn%4==1) {
plyGame(p1);}
else if(turn%4==2){
plyGame(p2);}
else if(turn%4==3){
plyGame(p3);}
else {
plyGame(p4);}
turn++;
}
```
If I add special effect like reverse the order of player, what method should I do in order to reverse the player order (if currently is a turn of player 3, a random event occur to reverse the order, next player become player 2, then player 1 and then back to player 4)?
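One common way to handle a reverse event (a sketch, not tied to your exact classes: keep the players in a list and track a direction of +1 or -1) is to step the current index with a floor modulus:

```java
import java.util.Arrays;
import java.util.List;

public class TurnOrder {
    // Next player's index; direction is +1 for normal order, -1 for reversed.
    static int nextIndex(int current, int direction, int playerCount) {
        return Math.floorMod(current + direction, playerCount); // stays in [0, playerCount)
    }

    public static void main(String[] args) {
        List<String> players = Arrays.asList("p1", "p2", "p3", "p4");
        int current = 2;        // suppose it is currently player 3's turn
        int direction = 1;

        direction = -direction; // a random "reverse" event occurs on this turn

        for (int i = 0; i < 3; i++) {
            current = nextIndex(current, direction, players.size());
            System.out.println("next: " + players.get(current));
        }
        // prints p2, then p1, then p4 - the order described in the question
    }
}
```

Instead of the if/else-if chain, `plyGame` would then take `players.get(current)`, and adding a fifth player only means adding one element to the list.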
2018/03/19 | 1,160 | 3,638 | <issue_start>username_0: I would like to update the `original_tbl` (url below) to look like the `update_tbl` (URL below). I am uncertain about the T-SQL for this.
original table:
[](https://i.stack.imgur.com/vyWVJ.jpg)
preferred updated table:
[](https://i.stack.imgur.com/jeNmK.jpg)<issue_comment>username_1: if you have just a few than you can do D-Shih's suggestion
If you have many occurences where the column `Attempts`is null, you could first generate all your update commands first.
So actually you are using 2 steps
**Step one: generate all update commands**
```
select 'update original_table set systemID = ''' + SystemID + ''' where StudentID = ' + convert(varchar, t.studentID)
from original_table t
where Attempts is null
```
**step 2: execute all commands**
now you get a recordset with all the update commands you need to execute.
Just copy it and execute them all.
And finally delete the empty attempts
```
delete from original_Table where Attempts is null
```
Do not forget to check the generated update commands off course if they are correct...
Upvotes: 1 <issue_comment>username_2: I assume you have additional systemId's in your table so you can do it for the entire table in one update
```
DECLARE @systemId NVARCHAR(10)
DECLARE @tb table (StudentId int , attempts int , systemId nvarchar(10))
INSERT INTO @tb
VALUES (105,0,'CRU877'),
(105,1,NULL),
(105,2,NULL),
(105,3,NULL),
(106,0,'AUR145'),
(106,1,NULL),
(106,2,NULL),
(106,3,NULL),
(106,4,NULL)
/*Before*/
SELECT *
FROM @tb
UPDATE @tb
SET @systemId = systemId = CASE WHEN systemId IS NULL THEN @systemId ELSE systemId END
/*After*/
SELECT *
FROM @tb
WHERE attempts != 0
```
Upvotes: 1 <issue_comment>username_3: Try this:
Data generation:
```
declare @x table(studentId int, attempts varchar(2), systemId varchar(10))
insert into @x values
(105, '1', ''),
(105, '2', ''),
(105, '3', ''),
(105, '', 'CRU877'),
(106, '1', ''),
(106, '2', ''),
(106, '3', ''),
(106, '4', ''),
(106, '', 'AUR145')
```
Update query:
```
update @x set systemId = sysId from (select studentId [stdId], systemId [sysId] from @x where attempts = '') [x] where studentId = stdId
delete @x where attempts = ''
select * from @x
```
Pictures you attached are pretty ambigious, I don't know whether blank cells are `NULL`s or empty strings. Just in case there's solution considering this:
```
declare @x table(studentId int, attempts int, systemId varchar(10))
insert into @x values
(105, 1, null),
(105, 2, null),
(105, 3, null),
(105, null, 'CRU877'),
(106, 1, null),
(106, 2, null),
(106, 3, null),
(106, 4, null),
(106, null, 'AUR145')
update @x set systemId = sysId from (select studentId [stdId], systemId [sysId] from @x where attempts is null) [x] where studentId = stdId
delete @x where attempts is null
select * from @x
```
Upvotes: 1 [selected_answer]<issue_comment>username_4: Declare a table variable, update the original table using an update + join combo, and then run a delete statement on the original table.
```
declare @systemIDs table (studentID int, systemID varchar(50))

insert into @systemIDs
select studentID, systemID from original_tbl where systemID is not null

update ot set ot.systemID = si.systemID from original_tbl ot inner join @systemIDs si on ot.studentID = si.studentID

delete from original_tbl where attempts is null
```
Upvotes: 0 |
2018/03/19 | 372 | 1,292 | <issue_start>username_0: I want to notify users for new incoming messages when they are browsing other browser tabs.
First I have to set a blinking red dot as a favicon (the problem here is that Google Chrome doesn't support GIF animations as favicons).
```
$('#favicon').attr('href','_/css/img/favicon.gif');
```
Is there a way to loop through two images, one red and one white, for 500ms each?
```
setInterval(function() {
$('#favicon').attr('href','_/css/img/red.png');
}, 500);
```
How do I do a loop of 500ms for two icons?<issue_comment>username_1: Use a variable and toggle it each time you change:
```
var red=1;
setInterval(function() {
if (red==1) {
red=0;
$('#favicon').attr('href','_/css/img/white.png');
} else {
red=1;
$('#favicon').attr('href','_/css/img/red.png');
}
}, 500);
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Plain JavaScript is the simplest solution, and such an answer was already given.
As an alternative, you can use [favico.js](http://lab.ejci.net/favico.js/) to play a video as the favicon. You could achieve this by converting your existing GIF to a video and then playing it with favico.js. The advantage of this solution is that your animation can be as complex as you want.
Upvotes: 2 |
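If you later want more than two frames, the toggle generalizes to cycling an array; the helper below is plain JavaScript (the jQuery call that would drive it in the page is left as a comment so the snippet stands alone):

```javascript
var icons = ['_/css/img/red.png', '_/css/img/white.png']; // add more frames freely
var frame = 0;

function nextIcon() {
  var href = icons[frame];
  frame = (frame + 1) % icons.length; // wrap around at the end of the array
  return href;
}

// In the page:
// setInterval(function () { $('#favicon').attr('href', nextIcon()); }, 500);
```

Adding a third or fourth frame then only means appending to `icons`; the interval callback never changes.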
2018/03/19 | 373 | 1,225 | <issue_start>username_0: I plan to do this using only Pandas, however this is my first time using Pandas. I know that Pandas has a read\_excel function.
My row in excel is the 4th row and has dates but I need these dates in a dataframe on Python in a column.
Any help will be appreciated.
```
import pandas as pd
fp = "G:\\Data\\Data2\\myfile.xlsm"
data = pd.read_excel(fp, skiprows = 4, sheet_name = "CRM View" )
```
This is all I have so far, but to my understanding this will read everything from the fourth row in my excel file, whereas I only want the contents of the fourth row, and then this row is to be fed as a column in my dataframe.<issue_comment>username_1: I'd try this
```
data = pd.read_excel('your_file.xlsx')
other_df['dates'] = data.iloc[3, :]
```
Upvotes: 0 <issue_comment>username_2: So, when you read excel your first row will be header and indices start from `0`.
If you take that into account your desired row is fetched like this:
```
import pandas as pd
fp = "G:\\Data\\Data2\\myfile.xlsm"
data = pd.read_excel(fp, sheet_name = "CRM View" )
dates_row = data.loc[2, :]
```
Now you can make that row into column like this:
```
new_data = pd.DataFrame({'Dates': dates_row})
```
Upvotes: 2 |
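Building on the answers above, here is a self-contained sketch of turning one row into a column. The `sheet` frame below merely stands in for the workbook; with a real file you would get it from `pd.read_excel(fp, sheet_name="CRM View", header=None)`, in which case no row is consumed as a header and Excel row 4 is simply positional index 3:

```python
import pandas as pd

# Stand-in for pd.read_excel(..., header=None): every sheet cell is data,
# so Excel row numbers map directly to positional indices (row 4 -> index 3).
sheet = pd.DataFrame([
    ["title",  None,         None],
    ["misc",   None,         None],
    ["labels", "a",          "b"],
    ["dates",  "2018-03-01", "2018-03-02"],
])

row4 = sheet.iloc[3, 1:]                       # Excel row 4, minus the label cell
dates = pd.DataFrame({"Dates": pd.to_datetime(row4).reset_index(drop=True)})
print(dates)
```

The same two lines work on the frame returned by `read_excel`; `reset_index(drop=True)` just discards the original column positions so the result gets a clean 0..n index.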
2018/03/19 | 1,222 | 3,656 | <issue_start>username_0: I am new to Vue and I am trying to bind an element with a fairly complex data object using Vue with a nested component.
I believe I have set this up correctly according to the documentation but I'm not seeing any examples that match my situation exactly.
The error I'm getting is `vue.js:584 [Vue warn]: Property or method "factions" is not defined on the instance but referenced during render.` This message is displayed for every json object property that is referenced in the markup. I'm guessing there is something relatively simple here that I'm missing but I cannot determine what that is. Any help is appreciated, thank you!
Here is a fiddle: <https://jsfiddle.net/rjz0gfLn/7/>
And the code:
```js
var x = {
"factions": [
{
"id": 0,
"name": "Unknown",
"img": "Unknown.png"
},
{
"id": 1,
"name": "Light",
"img": "Light.png"
},
{
"id": 2,
"name": "Dark",
"img": "Dark.png"
}
],
"roles": [
{
"id": 0,
"name": "Unknown",
"img": "Unknown.png"
},
{
"id": 1,
"name": "Assassin",
"img": "Assassin.png"
},
{
"id": 2,
"name": "Mage",
"img": "Mage.png"
}
],
"cacheDate": 1521495430225
};
console.log("data object", x);
Vue.component("filter-item", {
template: `- ![]()
{{name}}
`});
Vue.component("filter-items", {
template: `*
`});
var v = new Vue({
el: "#filters",
data: x
});
```
```html
```<issue_comment>username_1: `factions` belongs to the root Vue instance, `filter-items` has a different `data` object. You probably want to pass in `factions` as a prop to `filter-items`.
<https://v2.vuejs.org/v2/guide/components.html#Passing-Data-with-Props>
```
Vue.component("filter-items", {
props: ['factions'],
template: `*
`});
var v = new Vue({
el: "#filters",
data: x
});
```
EDIT: `v-repeat` was deprecated, use `v-for`, and you're not passing a prop down to `filter-item`
Upvotes: 1 <issue_comment>username_2: You have to declare **every** data you want to pass to the children as `props` in those child components. Additionally you need to pass each prop from the parent using `:fieldnameinchild="value"` (shorthand for `v-bind:fieldnameinchild="value"`.
In your case, you'll need to do that in several places:
Parent:
```
```
And child:
```
```
Notice, in vue2 you want to use `v-for` instead of `v-repeat`.
Also, add `:key` to the `v-for`:
>
> [Vue tip]: : component lists rendered with `v-for` should have explicit keys. See <https://vuejs.org/guide/list.html#key> for more info.
>
>
>
See updated fiddle: <https://jsfiddle.net/username_2/rjz0gfLn/21/>
```js
var x = {
"factions": [
{
"id": 0,
"name": "Unknown",
"img": "img.png"
},
{
"id": 1,
"name": "Light",
"img": "/a/a6/Light.png"
},
{
"id": 2,
"name": "Dark",
"img": "/0/0e/Dark.png"
}
],
"roles": [
{
"id": 0,
"name": "Unknown",
"img": "img.png"
},
{
"id": 1,
"name": "Assassin",
"img": "/6/69/Assassin.png"
},
{
"id": 2,
"name": "Mage",
"img": "/2/20/Mage.png"
}
],
"cacheDate": 1521495430225
};
console.log("data object", x);
Vue.component("filter-item", {
template: `- ![]()
{{name}}
`,
props: ['img', 'fimg', 'name']
});
Vue.component("filter-items", {
template: `*
`,
props: ['factions', 'roles']
});
var v = new Vue({
el: "#filters",
data: x
});
```
```html
```
Upvotes: 3 [selected_answer] |
2018/03/19 | 1,210 | 4,307 | <issue_start>username_0: I am trying to use the same SimpleForm for creating a new object and for editing. I tried to make it this way:
```xml
```
But the bindings are not in TwoWay mode. Is there a way to make this a TwoWay binding?<issue_comment>username_1: I am not quite sure if using expression binding in values is the right way to do it, but it should work with JSON models without additional coding (except for `data-sap-ui-bindingsyntax="complex"` in your ui5-bootstrap).
With an OData model you need to use `oModel.setDefaultBindingMode("TwoWay")`
as described in [Setting the Default Binding Mode](http://veui5infra.dhcp.wdf.sap.corp:8080/sapui5-sdk-internal/#/topic/1a08f70951a744b1a7962b09665cc92f "Setting the Default Binding Mode") since OData models use OneWay binding as default.
Upvotes: 0 <issue_comment>username_2: Cause
=====
In a property binding, using ...
* **Expression Binding**
* [Binding with **string literals**](https://stackoverflow.com/q/40254435/5846045)
* Binding with a **`formatter`** (regardless of using it in conjunction with `path` or `parts`)
... turns the binding mode into ***OneWay***. Unless only the `path` is defined in the binding info object (not `parts`), all the above cases make use of the module [`sap.ui.model.CompositeBinding`](https://sdk.openui5.org/api/sap.ui.model.CompositeBinding).
Resolution
==========
Expression Binding definition as it's written in the question cannot become TwoWay.
However, `CompositeBinding` does **allow TwoWay binding when the [property binding info](https://sdk.openui5.org/api/sap.ui.base.ManagedObject.PropertyBindingInfo) is assigned a `type` that is derived from [`sap.ui.model.CompositeType`](https://sdk.openui5.org/api/sap.ui.model.CompositeType).**
* Use either one of the existing data type classes derived from the `CompositeType` (Click on "View subclasses" in the API reference)
* Or define your own composite type class.
+ Sample [`Currency` type definition](https://github.com/SAP/openui5/blob/master/src/sap.ui.core/src/sap/ui/model/type/Currency.js) from UI5.
+ Sample *ternary* type that behaves like the above Expression Binding definition: <https://embed.plnkr.co/0MVvfZ/?show=view%2FHome.view.xml,preview>
It takes all three `parts` necessary for the ternary operation; one for the condition, one for binding the *truthy* case (`a`), and one for binding the *falsy* case (`b`):
```xml
```
The actual ternary operation happens in the type definition which could look something like this:
```js
sap.ui.define([
"sap/ui/model/CompositeType"
], function(CompositeType) {
"use strict";
return CompositeType.extend("demo.model.type.MyTernary", {
constructor: function() {
CompositeType.apply(this, arguments);
this.bParseWithValues = true; // Publicly documented. Make 'parts' available in parseValue
},
/**
* Displaying data from the right model (model -> view)
*/
formatValue: parts => parts[0] ? parts[1] : parts[2],
/**
* Assigning entered value to the right model (view -> model)
*/
parseValue: (enteredValue, stuff, parts) => parts[0] ? [
parts[0],
enteredValue,
parts[2],
] : [
parts[0],
parts[1],
enteredValue,
],
validateValue: () => true // Nothing to validate here
});
});
```
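Outside UI5, the `formatValue`/`parseValue` pair above is just a pair of plain functions, so the round trip can be sanity-checked in isolation. A runnable plain-JS sketch of the same logic (no UI5 runtime involved):

```javascript
// Plain-JS sketch of the MyTernary logic above: formatValue picks which
// model part the view shows; parseValue writes an edited value back into
// the part selected by the condition.
const formatValue = (parts) => (parts[0] ? parts[1] : parts[2]);
const parseValue = (entered, parts) =>
  parts[0] ? [parts[0], entered, parts[2]] : [parts[0], parts[1], entered];

console.log(formatValue([true, "yes", "no"]));            // shows "yes"
console.log(parseValue("edited", [false, "yes", "no"]));  // writes into the falsy slot
```

If both directions hold, the TwoWay round trip is consistent: whatever the user types ends up in exactly the part that the current condition displays.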
Prerequisites
-------------
* In order to allow TwoWay composite binding, all binding `parts` must have the TwoWay binding mode enabled:
>
> Note that a composite binding will be forced into mode OneWay when **one** of the binding `parts` is not in mode TwoWay. (Source: [`ManagedObject#bindProperty`](https://sdk.openui5.org/api/sap.ui.base.ManagedObject#methods/bindProperty))
>
>
>
* Ensure that the bootstrap configuration option `sap-ui-compatVersion` is set to `"edge"`.
Upvotes: 1 <issue_comment>username_3: There is another way you can go about this. Make use of any of the events (e.g. `submit`) on the Input control.
**View:**
```
```
**Controller:**
```
onSubmitValue:function(oEvent){
var value=oEvent.getSource().getValue();
var sPath=oEvent.getSource().getBindingContext('YourModelName').getPath();
this.getView().getModel('YourModelName').setProperty(sPath+'/PropertyName',value);
}
```
Upvotes: 0 |
2018/03/19 | 1,290 | 4,586 | <issue_start>username_0: I would like to write a program that reads characters from a file and displays them with two spaces between each character.
However, I am limited to 10 characters per line.
How could I make the program return to a new line every 10 characters?
```
// OUTPUT CHARACTERS FROM FILE
cout << "Characters read from file are: " << endl;
inFile.get(textWritten);
while (inFile) {
if (textWritten == SPACE) cout << " ";
cout << textWritten << " ";
inFile.get(textWritten);
}
```
2018/03/19 | 1,393 | 4,732 | <issue_start>username_0: I've created a stored procedure and I'm inserting values into 3 tables, `Users`, `Workers` and `Vets`. In the `Users` table, I have an identity column `UserID`, which is a foreign key to the `Workers` and `Vets` table.
When inserting into the `Workers` and `Vets` tables, I can't seem to retrieve the value of the `UserID` that was inserted into the `Users` table.
This is the stored procedure for the vets:
```
CREATE PROCEDURE [dbo].[InsertIntoVets]
@PrimeiroNome Varchar(20),
@NomeDoMeio Varchar(50),
@Sobrenome Varchar(30),
@DataDeNascimento Date,
@EndereçoPostal1 Varchar(120),
@EndereçoPostal2 Varchar(120),
@Cidade Varchar(100),
@Especialidade_id Int,
@Clinica_id Int,
@UserID Int OUTPUT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO Veterinarios([Primeiro nome], [Nome do meio], Sobrenome, [Data de Nascimento], [Endereço Postal1], [Endereço Postal2], Cidade, Especialidade_id, Clinica_id, UserID)
VALUES (@PrimeiroNome, @NomeDoMeio, @Sobrenome, @DataDeNascimento, @EndereçoPostal1, @EndereçoPostal2, @Cidade, @Especialidade_id, @Clinica_id, @UserID)
SET @UserID = SCOPE_IDENTITY();
END
```
C# code:
```
// Vets
SqlCommand sqlCommandVets = new SqlCommand();
sqlCommandVets.CommandType = System.Data.CommandType.StoredProcedure;
sqlCommandVets.CommandText = "InsertIntoVets";
sqlCommandVets.Parameters.Add("@PrimeiroNome", SqlDbType.VarChar).Value = (primeiroNome.Text.Trim());
sqlCommandVets.Parameters.Add("@NomeDoMeio", SqlDbType.VarChar).Value = (nomeDoMeio.Text.Trim());
sqlCommandVets.Parameters.Add("@Sobrenome", SqlDbType.VarChar).Value = (sobrenome.Text.Trim());
sqlCommandVets.Parameters.Add("@DataDeNascimento", SqlDbType.Date).Value = (dataDeNascimento.Text.Trim());
sqlCommandVets.Parameters.Add("@EndereçoPostal1", SqlDbType.VarChar).Value = (enderecoPostal1.Text.Trim());
sqlCommandVets.Parameters.Add("@EndereçoPostal2", SqlDbType.VarChar).Value = (enderecoPostal2.Text.Trim());
sqlCommandVets.Parameters.Add("@Cidade", SqlDbType.VarChar).Value = (cidade.Text.Trim());
sqlCommandVets.Parameters.Add("@Especialidade_id", SqlDbType.Int).Value = (dropDownEspecialidade.SelectedValue);
sqlCommandVets.Parameters.Add("@Clinica_id", SqlDbType.VarChar).Value = (dropDownClinica.SelectedValue);
sqlCommandVets.Parameters.Add("@UserID", SqlDbType.Int).Direction = ParameterDirection.Output;
sqlCommandVets.Connection = con;
```
Here you have a screenshot of both the `Users` and `Vets` table:
[UsersTable](https://i.stack.imgur.com/TCZC9.png) AND [VetsTable](https://i.stack.imgur.com/4Rat0.png)
I hope I was clear and provided enough info.
Thank you in advance,
<NAME>.<issue_comment>username_1: You don't need an OUTPUT parameter for this. Just change your stored procedure that inserts the user data into
```
CREATE PROCEDURE [dbo].[InsertIntoUsers]
.....
SELECT SCOPE_IDENTITY();
END
```
Then you could execute the stored procedure with `ExecuteScalar` and get the result from the stored procedure that consist of a single row with a single column
```
int id = Convert.ToInt32(sqlCommandUsers.ExecuteScalar());
```
At this point you have the value for the `UserID` column that you want to set in the Vets and Workers table. Just pass it as a normal parameter to your stored procedure.
Also in this context where you have multiple updates to your database you should be sure to insert everything inside a [Transaction](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/local-transactions). If something goes wrong in the insertions of Vets you want to [Rollback](https://msdn.microsoft.com/en-us/library/zayx5s0h(v=vs.110).aspx) everything you have done in the Users and Workers table or [Commit](https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqltransaction.commit(v=vs.110).aspx) if all goes well
Upvotes: 2 [selected_answer]<issue_comment>username_2: Even though many others will recommend returning via a `SELECT` statement, I would recommend sticking with `ExecuteNonQuery()`. It is more efficient for both SQL Server and your calling application, because no result-set object needs to be created, and no reader object needs to be used (directly or transparently) to consume it.
I generally explicitly declare my output parameters and then attach them. After the ExecuteNonQuery() you can access the parameters via the .Value property.
Note 1: .Value is an object, so be aware of the *NULL* possibility
```
SqlParameter opUserID = new SqlParameter("@UserID", SqlDbType.Int);
opUserID.Direction = ParameterDirection.Output;
sqlCommandVets.Parameters.Add(opUserID);
sqlCommandVets.ExecuteNonQuery();
int NewUserID = (int)opUserID.Value;
```
Upvotes: 0 |
2018/03/19 | 442 | 1,671 | <issue_start>username_0: I need to detect the font size of a Text component after font scaling.
Let's say that I have a Text component with font size 18px
```
My Text
```
The user has set a large font through the OS accessibility settings.
Now my text has been rendered with a larger font size (more than 18px).
I'm aware that I can use `allowFontScaling={false}` but I don't want to lose the text accessibility.
I saw that React Native has an API for getting the font scale, `PixelRatio.getFontScale()`, but it doesn't work for iOS:
>
> Returns the scaling factor for font sizes. This is the ratio that is
> used to calculate the absolute font size, so any elements that heavily
> depend on that should use this to do calculations.
>
>
> If a font scale is not set, this returns the device pixel ratio.
>
>
> Currently this is only implemented on Android and reflects the user
> preference set in Settings > Display > Font size, on iOS it will
> always return the default pixel ratio. @platform android
>
>
>
Any ideas?<issue_comment>username_1: [react-native-device-info](https://github.com/rebeccahughes/react-native-device-info) provides a function, getFontScale(), which appears to work on both Android and iOS.
Upvotes: 3 <issue_comment>username_2: you can use `useWindowDimensions`
```
import { useWindowDimensions } from 'react-native';

const MyScreen = ({ navigation }) => {
  const { fontScale } = useWindowDimensions(); // current font scale factor
  // ...
};
```
Upvotes: 2 <issue_comment>username_3: According to the [documentation](https://reactnative.dev/docs/pixelratio) `PixelRatio.getFontScale()` should work for both Android and iOS
Upvotes: 1 |
2018/03/19 | 621 | 2,558 | <issue_start>username_0: Basically, I know that *RDF*, *RDFS*, and *OWL* are used to define ontologies in order to address the semantics problem on the World Wide Web.
However, these terms confuse me a bit when studying them. This is my conclusion regarding their relationship after reading [this article](https://www.obitko.com/tutorials/ontologies-semantic-web/introduction.html). Please correct me if I am wrong:
1. **Resource Description Framework (RDF)**. As its self-explanatory name suggests, it is just a framework that describes resources in terms of a graph.
2. **RDF Schema (RDFS)**. It is a set of possible *relationships* that could be used in RDF.
3. **Web Ontology Language (OWL)**. It is even an *extended set of possible relationships* for an RDF graph.
Furthermore, what is the status of Semantic Web technology? Is it still in the research phase, or is it already adopted in production?<issue_comment>username_1: This question might be out of scope for Stack Overflow, but basically:
* RDFS is just some extended [vocabulary for RDF](https://stackoverflow.com/questions/9755113/how-are-rdf-and-rdfs-related).
* OWL, on the other hand, is a set of technologies (including a vocabulary) for running reasoners on data sets defined in RDF, to generate new triples.
There are definitely people using Semantic Web Technologies (aka Linked Data) in production, mostly to integrate various data sources. But it definitely hasn't taken off the way HTML or XML have.
Where it's trickled somewhat into the mainstream is with [Facebook's Open Graph](http://ogp.me/) (which is technically not quite RDF unfortunately), Google's [Structured Data](https://developers.google.com/search/docs/guides/intro-structured-data) and [schema.org](http://schema.org/).
Upvotes: 3 [selected_answer]<issue_comment>username_2: RDF is a framework, including a data model, a set of syntaxes, and a vocabulary, the latter of which is extended in RDFS to describe relationships of taxonomical structures, such as subclass-superclass relationships, and OWL is a fully-featured ontology language that can describe far more complex relationships, along with constraints on data values. I already explained this on the following page:
[What is the difference between RDF and OWL?](https://stackoverflow.com/questions/1740341/what-is-the-difference-between-rdf-and-owl)
Semantic Web technologies are indeed viable and already utilized in a wide range of applications on a global scale by industry giants such as Google as well as governments around the world.
Upvotes: 2 |
2018/03/19 | 354 | 1,451 | <issue_start>username_0: I need to download a file, but I am wondering which is the best approach to publishing my resource.
Let's say I have a Document
```
http://api/documents/id
```
When I make a GET request, should I receive the information of the document with that id, including an extra field with the Base64 string representation of the file? Or should I publish another URL resource like
```
http://api/documents/id/download
```
just for getting the file? The first I know how to do, but I don't know if that is the proper way. With the latter I need advice.<issue_comment>username_1: From my understanding, your `document` resource consists of both the metadata and the actual content of the document. So you could support the following:
* `GET /documents/:id`: Return a representation of the metadata of the document
* `GET /documents/:id/content`: Return a representation of the content of the document
---
Alternatively you could support a single endpoint such as `GET /documents/:id` and return both metadata and content in a multipart response.
---
Content negotiation would be the best approach though. You could use a single endpoint such as `GET /documents/:id` and `Accept: application/json` for the metadata and, for example, `Accept: application/octet-stream` for the content.
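To make that concrete, here is a small, purely illustrative sketch (the function name and return values are hypothetical, not any framework's API) of dispatching one endpoint on the `Accept` header:

```python
# Illustrative only: decide which representation GET /documents/:id
# should serve, based on the request's Accept header.
def resolve_document_view(accept: str) -> str:
    if "application/octet-stream" in accept:
        return "content"         # serve the raw file bytes
    if "application/json" in accept or accept in ("*/*", ""):
        return "metadata"        # serve the JSON metadata
    return "not_acceptable"      # would map to an HTTP 406 response

print(resolve_document_view("application/json"))          # metadata
print(resolve_document_view("application/octet-stream"))  # content
```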
Upvotes: 2 <issue_comment>username_2: how about
`GET /documents/:id` for metadata
`GET /documents/:id?action=download` for content
Upvotes: -1 |
2018/03/19 | 712 | 1,929 | <issue_start>username_0: I am trying to use auto-increment in PHP to generate English alphabet letters instead of numbers. I know how to do auto-increment for numbers:
```
for ($i = 0; $i <= 2; $i++) {
echo $i;
}
```
But I want a way to generate ABC instead of 123.<issue_comment>username_1: You could just create an array of letters like so, and use the index of the for loop to loop through them.
```
$letters = array('a', 'b', 'c');
for ($i = 0; $i <= 2; $i++) {
    echo $letters[$i];
}
```
Upvotes: 0 <issue_comment>username_2: You can compare and increment strings the exact same way as numbers, just change the initialisation and exit conditions of the loop:
```
for($i='A'; $i<='C'; $i++) {
echo $i, PHP_EOL;
}
```
>
> A
> B
> C
>
>
>
See <https://eval.in/974692>
Upvotes: 1 <issue_comment>username_3: One option is to use `range` and use `foreach` to loop.
```
$letters = range("A","C");
foreach( $letters as $letter ) {
echo $letter . "
";
}
```
This will result to:
```
A
B
C
```
Upvotes: 1 <issue_comment>username_4: Similar to above, you could set a variable to $x = "abcdefg..." and echo substr($x, $i, 1). You will need to reset $i to 1 whenever it reaches 27.
Upvotes: 0 <issue_comment>username_5: You can use the `chr` function together with ASCII codes to generate them.
For UpperCase:
```
for ($i = 65; $i <= 90; $i++) {
echo chr($i) . PHP_EOL;
}
```
For LowerCase:
```
for ($i = 97; $i <= 122; $i++) {
echo chr($i) . PHP_EOL;
}
```
Here is the complete list of ASCII codes:
<https://www.ascii-code.com/>
Upvotes: 3 [selected_answer]<issue_comment>username_6: Just get the ASCII code for A and loop 26 times:
```
<?php
$a_is = ord('A');
for ($ch = $a_is; $ch < $a_is + 26; $ch++) {
    echo chr($ch) . PHP_EOL;
}
```
Or set a char count
```
<?php
$a_is = ord('A');
$stop = 5;
for ($ch = $a_is; $ch < $a_is + $stop; $ch++) {
    echo chr($ch) . PHP_EOL;
}
```
Upvotes: 2 |
2018/03/19 | 929 | 2,996 | <issue_start>username_0: I have read about the `targetBinary` flag and I have also read [this](https://stackoverflow.com/questions/34869548/codepush-how-to-deploy-to-multiple-build-versions-of-the-same-deployment-config) from a member of the Microsoft team working on CodePush.
I have version `5.0.1` and `5.1.0`.
* The second version has native differences when comparing to `5.0.1`
If a bug comes in for version `5.0.1`, how do I fix it and deploy it for this version only? Bugs may be critical and not everyone will have the latest version of the app.
Or, the bug may only exist on a specific version.
Is my only option to:
* Open Xcode & Android Studio
* Change my Bundle Versions/Build Number from `5.1.0` to `5.0.1` and then make a change, then change my numbering back?
This seems like a long-winded way of updating a version. Is there a more elegant way of managing this?<issue_comment>username_1: You can target a specific version with code push by choosing a different version using the target flag `-t` and the appcenter-cli
Target all versions of the app:
`appcenter codepush release-react -a Org/MyApp -t '*'`
Target version 5.0.1 versions of the app:
`appcenter codepush release-react -a Org/MyApp -t '5.0.1'`
Upvotes: 3 <issue_comment>username_2: The target binary version param supports ranges for this scenario. Here's a helpful table to guide you.
```
Range Expression Who gets the update
---------------- ----------------------
1.2.3 Only devices running the specific binary app store version 1.2.3 of your app
* Any device configured to consume updates from your CodePush app
1.2.x Devices running major version 1, minor version 2 and any patch version of your app
1.2.3 - 1.2.7 Devices running any binary version between 1.2.3 (inclusive) and 1.2.7 (inclusive)
>=1.2.3 <1.2.7 Devices running any binary version between 1.2.3 (inclusive) and 1.2.7 (exclusive)
1.2 Equivalent to >=1.2.0 <1.3.0
~1.2.3 Equivalent to >=1.2.3 <1.3.0
^1.2.3 Equivalent to >=1.2.3 <2.0.0
```
Upvotes: 5 <issue_comment>username_3: [](https://i.stack.imgur.com/kSifx.png)
link to code push:
<https://learn.microsoft.com/zh-cn/appcenter/distribution/codepush/cli#target-binary-version-parameter>
```
1.2.3 Only devices running the specific binary version 1.2.3 of your app
* Any device configured to consume updates from your CodePush app
1.2.x Devices running major version 1, minor version 2, and any patch version of your app
1.2.3 - 1.2.7 Devices running any binary version between 1.2.3 (inclusive) and 1.2.7 (inclusive)
>=1.2.3 <1.2.7 Devices running any binary version between 1.2.3 (inclusive) and 1.2.7 (exclusive)
1.2 Equivalent to >=1.2.0 <1.3.0
~1.2.3 Equivalent to >=1.2.3 <1.3.0
^1.2.3 Equivalent to >=1.2.3 <2.0.0
```
Upvotes: -1 |
2018/03/19 | 454 | 1,643 | <issue_start>username_0: I'm working with OpenCV in my final study project. I have Python 2.7 and OpenCV 3.4 already installed. I developed my Python project on Windows 8 64-bit and converted the application from .py to .exe through PyInstaller, and it works fine.
But when I move my application to the industrial machine, which runs Windows XP SP3 32-bit, and try to import cv2, I get the following error:
```
ImportError: DLL load failed: The specified module could not be found
```
Note that I have tried installing Microsoft Visual C++ 2015, and it didn't solve the problem.
Can anyone help me?<issue_comment>username_1: Which version did you use to build OpenCV?
* Try installing the [Visual C++ Redistributable for Visual Studio](https://www.microsoft.com/en-US/download/details.aspx?id=48145) for your version.
* Try rebuilding the DLL for Windows XP 32-bit.
Upvotes: 0 <issue_comment>username_2: I had the same issue. I solved it by placing two DLLs in the same folder as my .exe file. The DLLs are "api-ms-win-downlevel-shlwapi-l1-1-0.dll", which can be downloaded from the internet, and the other one is "opencv\_ffmpeg\*\*\*\_\*\*.dll", which can be found in the Python site-packages if you have installed python-opencv via pip, where \*\*\* is the version and \*\* is 32 or 64 bit. After you have gathered these two files, compile the script using the following options.
```
pyinstaller -w script.py
```
Now place the two DLLs in dist/your\_folder/ where the .exe is located. Also be careful of all the warnings related to missing DLLs while compiling; if you see any warning about a missing DLL, place that DLL in the .exe folder afterwards.
Upvotes: 2 |
2018/03/19 | 1,419 | 5,214 | <issue_start>username_0: ```
Android Studio 3.1 RC 2
kotlin 1.2.30
```
The signature of the fetchMessage in Java
```
Single fetchMessage(final String Id);
```
The kotlin code
```
fun translate(Id: String): Completable {
return repository.fetchMessage(Id)
.flatMap {
Single.fromCallable({
update(messageId, it, State.COMPLETED)
State.COMPLETED
})
}
.onErrorReturn({
update(Id, null, State.ERROR)
State.ERROR
})
.toCompletable()
}
```
The method I want to run before fetchMessage
```
fun insertMessage(Id: String): Completable {
return Completable.fromCallable {
insert(Id, State.IDLE)
}
}
```
I want `insertMessage` to somehow run before `fetchMessage`. I was thinking of using concatMap, but I am not sure how to combine `translate` and `insertMessage` so that `insertMessage` runs first and, once it completes, `translate` runs.
Many thanks for any suggestions,
**Update solution 1, using `startWith(..)`:**
By changing the return type of the `translate` method to Single, I have done it like this:
```
fun translate(Id: String): Single<State> {
return repository.fetchMessage(Id)
.flatMap {
Single.fromCallable({
update(messageId, it, State.COMPLETED)
State.COMPLETED
})
}
.onErrorReturn({
update(Id, null, State.ERROR)
State.ERROR
})
}
```
Then I can have a method to do the following insertMessage(..) -> translate(..):
```
translate(Id).toCompletable().startWith(insertMessage(id, State.IDLE))
```
Would that be an ideal solution?
**Update solution 2, using `concatWith(..)`:**
By returning an Observable and calling toObservable() in the chain:
```
fun translate(Id: String): Observable<State> {
return repository.fetchMessage(Id)
.flatMap {
Single.fromCallable({
update(messageId, it, State.COMPLETED)
State.COMPLETED
})
}
.onErrorReturn({
update(Id, null, State.ERROR)
State.ERROR
})
.toObservable()
}
```
And I can use concatWith so the sequence would be insertMessage(..) -> translate(..):
```
translate(Id).toCompletable().concatWith(insertMessage(id, State.IDLE).toObservable())
.toCompletable()
```
Are these correct solutions?<issue_comment>username_1: If you have a `Completable`, you can chain any other reactive type with it via [`andThen`](http://reactivex.io/RxJava/2.x/javadoc/io/reactivex/Completable.html#andThen-io.reactivex.CompletableSource-):
```
insertMessage("id")
.andThen(translate("id"))
```
Upvotes: 3 <issue_comment>username_2: Both your options make sense, but I would recommend you to clean them a bit.
First of all you need to clearly understand what return type to use in each case: [Observable](http://reactivex.io/documentation/observable.html), [Single](http://reactivex.io/documentation/single.html) or [Completable](https://static.javadoc.io/io.reactivex/rxjava/1.2.1/rx/Completable.html).
The definition is the following:
* **Single** represents an Observable that emits a single value or an error.
* **Completable** represents an Observable that emits no value, only terminal events: either onError or onCompleted.
In both your cases you don't need any data returned; all you need is to know whether the operation was successful or not. Completable is designed to handle exactly this case.
So I'd recommend you to have:
```
fun translate(Id: String): Completable {
return repository.fetchMessage(Id)
.flatMapCompletable {
Completable.fromAction {
update(messageId, it, State.COMPLETED)
}
}.doOnError {
update(Id, null, State.ERROR)
}
}
fun insertMessage(Id: String): Completable {
return Completable.fromCallable {
insert(Id, State.IDLE)
}
}
```
Nice option to make your code cleaner is to use [Completable.fromAction](https://static.javadoc.io/io.reactivex/rxjava/1.2.1/rx/Completable.html#fromAction(rx.functions.Action0)) instead of [Completable.fromCallable](https://static.javadoc.io/io.reactivex/rxjava/1.2.1/rx/Completable.html#fromCallable(java.util.concurrent.Callable)), so you don't need to return anything.
Then you can use any of your options, [startWith](https://static.javadoc.io/io.reactivex/rxjava/1.2.1/rx/Completable.html#startWith(rx.Completable)) or [concatWith](https://static.javadoc.io/io.reactivex/rxjava/1.2.1/rx/Completable.html#concatWith(rx.Completable)). Both wait until first observable completes before running the second observable. I prefer to use concatWith, because it runs the functions in the same order that they are written.
Eventually we get an elegant solution:
```
insertMessage(id).concatWith(translate(id))
```
or
```
translate(id).startWith(insertMessage(id))
```
More info about concat: <http://reactivex.io/documentation/operators/concat.html>
Here is the implementation of the functions inside the rxJava library if you are curious:
```
public final Completable startWith(Completable other) {
requireNonNull(other);
return concat(other, this);
}
public final Completable concatWith(Completable other) {
requireNonNull(other);
return concat(this, other);
}
```
As you can see the only difference is the order.
Upvotes: 3 [selected_answer] |
2018/03/19 | 516 | 1,899 | <issue_start>username_0: I have a Django project where in the project's root folder (where `manage.py` is) I can do
```
(venv) MyName-MacBook:mydjangoproject myname$ python
>>> import django
>>> from django.core.mail import EmailMessage, send_mail
>>>
```
However, when I create a subfolder, this doesn't work anymore
```
(venv) MyName-MacBook:mydjangoproject myname$ mkdir email && cd email
(venv) MyName-MacBook:email myname$ python
>>> import django
>>> from django.core.mail import EmailMessage, send_mail
Traceback (most recent call last):
File "", line 1, in
File "/Users/myname/Documents/MacBook:mydjangoproject/venv/lib/python3.6/site-packages/django/core/mail/\_\_init\_\_.py", line 11, in
from django.core.mail.message import (
File "/Users/myname/Documents/MacBook:mydjangoproject/venv/lib/python3.6/site-packages/django/core/mail/message.py", line 7, in
from email import (
File "/Users/myname/Documents/MacBook:mydjangoproject/email/email.py", line 2, in
from django.core.mail import EmailMessage, send\_mail
ImportError: cannot import name 'EmailMessage'
>>>
```
I realise this is a very basic question, but I am a bit stuck. It's strange that I can import `django`, but not the other functions.<issue_comment>username_1: You have a package named django in your project, don't you? :) Don't do that; it will make you import the wrong thing by accident.
Upvotes: 1 <issue_comment>username_2: Looking at your traceback I see you have a module named `email`.
Django tries to import from the built-in module `email`, and the Python interpreter thinks your module is the one to import. Try renaming that module to avoid name collisions and you should be fine.
Edit: After your edit the problem becomes very clear. The Python interpreter looks at your working directory before the builtins, finds an `email.py` and tries to import from that.
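A minimal sketch of that shadowing behaviour (plain Python; the file name `email.py` mirrors the question):

```python
import pathlib
import subprocess
import sys
import tempfile

# Sketch of the pitfall: a local file named email.py shadows the standard
# library's "email" package for interpreters started in that directory.
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "email.py").write_text("SHADOWED = True\n")
    probe = "import email; print(getattr(email, 'SHADOWED', False))"
    out = subprocess.run([sys.executable, "-c", probe],
                         cwd=d, capture_output=True, text=True)
print(out.stdout.strip())  # True: the local email.py won, not the stdlib package
```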
Upvotes: 3 [selected_answer] |
2018/03/19 | 715 | 2,444 | <issue_start>username_0: This is my controller
```
public function document(){
$file = File::paginate(6);
return view('admin.document',compact('file'));
}
```
and this is my view
```
@foreach ($file as $key => $value )
| {{ $no++ }} | {{$value->title}} | {{$value->file}} | {{$value->status}} | {{$value->created_at}} |
|
@endforeach
```
I don't know why I'm getting this error.<issue_comment>username_1: I don't see initialization of the `$no` variable, but you use it in your blade template.
Upvotes: 0 <issue_comment>username_2: Check the $no variable you are trying to use in the $no++
Upvotes: 0 <issue_comment>username_2: ```html
@php $no = 0; @endphp
@foreach ($file as $key => $value)
| {{ $no++ }} | {{$value->title}} | {{$value->file}} | {{$value->status}} | {{$value->created_at}} |
|
@endforeach
```
Upvotes: -1 <issue_comment>username_3: As some people noticed, the `$no` variable is the guilty one.
Instead of keeping your own counter, note that starting from Laravel 5.3 there is a new handy `$loop` variable. The $loop variable is a stdClass object that provides meta information about the loop you're currently inside. It exposes the following properties:
* **index:** the 0-based index of the current item in the loop; 0 would mean "first item"
* **iteration:** the 1-based index of the current item in the loop; 1 would mean "first item"
* **remaining:** how many items remain in the loop; if current item is first of three, would return 2
* **count:** the count of items in the loop
* **first:** boolean; whether this is the first item in the loop
* **last:** boolean; whether this is the last item in the loop
* **depth:** integer; how many "levels" deep this loop is; returns 1 for a loop, 2 for a loop within a loop, etc.
* **parent:** if this loop is within another @foreach loop, returns a reference to the $loop variable for the parent loop item; otherwise returns null
In your case you will need `$loop->iteration`.
```
{{ $loop->iteration }} |
```
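For intuition only, the same per-iteration bookkeeping can be sketched outside Blade (plain Python, an illustration, not Laravel):

```python
def loop_meta(items):
    # Mirrors Blade's $loop fields: index, iteration, remaining, count, first, last.
    n = len(items)
    for i, item in enumerate(items):
        yield {"index": i, "iteration": i + 1, "remaining": n - i - 1,
               "count": n, "first": i == 0, "last": i == n - 1}, item

for meta, row in loop_meta(["a", "b", "c"]):
    print(meta["iteration"], meta["first"], meta["last"], row)
```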
Upvotes: 0 <issue_comment>username_4: >
> You got the error because you do not define the variable `$no`.
>
>
> Define as $no = 1 before the loop...
>
>
> Example:
>
>
>
```
@php($no = 1)
```
Upvotes: 0 <issue_comment>username_5: try this
your controller
```
public function document(){
$data['file'] = File::paginate(6);
return view('admin.document',$data);
}
```
foreach in your view
```
@foreach ($file as $value )
@endforeach
```
Upvotes: 0 |
2018/03/19 | 1,069 | 4,217 | <issue_start>username_0: I am new to react and I was playing with react state and components. I made this little app that takes array elements from user and randomly changes its element's position. But when I change one state another state also chances with it. I have one state called element array that contains array of numbers and another array called draw that will contain elements from element array with randomly shifted element positions.problem is when draw[] changes element[] also changes. **I don't want element[] to change**. I only want draw[] contain elements from element[] array with new position within array.
When I click on the draw button, it generates an array with randomly positioned elements, but it also changes the position of elements in the element array, which contains the original positions. I don't know how the element array changes when I am updating the draw array.
```js
const dom=document.getElementById("fn")
class Final extends React.Component{
constructor(props)
{
super(props)
this.state={
element:[],
draw: []
}
this.addElement=this.addElement.bind(this)
this.drawer = this.drawer.bind(this)
}
drawer() {
// shifting logic Fisher–Yates_shuffle
var n = this.state.element.length
var t=this.state.element
var temp;
while (n !== 0) {
var rand = Math.floor(Math.random() * Math.floor(n))
n--;
temp = t[n]
t[n] = t[rand]
t[rand] = temp
}
this.setState( (prev)=> {
return {draw:t} //update one state
})
}
addElement(element)
{
if(!element)
{
return "ADD SOMETHING!"
}
else if(this.state.element.indexOf(element)>-1)
{
return "value Already Exists!"
}
this.setState((prev)=>{
element=prev.element.concat(element)
return {element}
})
}
render()
{
return (
    <div>
        <AddElement
            element={this.state.element}
            addElement={this.addElement}
            drawer={this.drawer}
        />
        <Drawer draw={this.state.draw} />
    </div>
)
}
}
class AddElement extends React.Component{
constructor(props)
{
super(props);
this.add=this.add.bind(this)
this.drawIt = this.drawIt.bind(this)
this.state={
error:undefined
}
}
add(e)
{
e.preventDefault();
const element=e.target.elements.element.value
const v=this.props.addElement(element);
this.setState(()=>{return {error:v}})
}
drawIt()
{
this.props.drawer() //calls drawer function in Final component
}
render()
{
return (
    <div>
        <form onSubmit={this.add}>
            <input type="text" name="element" />
            <button>Add elements</button>
        </form>
        {this.state.error && <p>{this.state.error}</p>}
        <button onClick={this.drawIt}>DRAW IT</button>
        <h3>elements inside Element array</h3>
        <ol>
            {this.props.element.map((x) => { return <Element key={x} v={x} /> })}
        </ol>
    </div>
)
}
}
class Drawer extends React.Component{
render()
{
return (
    <div>
        {this.props.draw.length > 0 && <h3>new randomly generated position of elements</h3>}
        <ol>
            {this.props.draw.map((x) => { return <Element key={x} v={x} /> })}
        </ol>
    </div>
)
}
}
class Element extends React.Component {
render() {
return (
<li>{this.props.v}</li>
)
}
}
ReactDOM.render(<Final />, dom)
```
<issue_comment>username_1: That's because you're mutating your state array, you can just clone or use destructure assignment in your `drawer` method:
**wrong**
```
var t = this.state.element;
```
**right**
```
var t = [...this.state.element];
```
Upvotes: 1 <issue_comment>username_2: In JavaScript, objects and arrays are passed by reference.
So basically your line
```
var t=this.state.element
```
This didn't make a copy of the array; instead it created a new reference to the same array.
```
var t = [...this.state.element];
```
Using destructuring assignment you unpack values from arrays, or properties from objects, into unique or new variables.
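The same aliasing pitfall, sketched in plain Python for comparison (not React):

```python
import random

state_element = [1, 2, 3, 4, 5]
t = state_element              # an alias: both names point at one list
random.shuffle(t)
print(t is state_element)      # True, so shuffling t reordered state_element too

copy = state_element[:]        # like [...this.state.element]: a shallow copy
random.shuffle(copy)
print(copy is state_element)   # False
```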
Upvotes: 1 [selected_answer] |
2018/03/19 | 531 | 1,915 | <issue_start>username_0: I am hoping that someone can help me. I have the below code which is not pulling through any results. I am looking to bring through data which will match multiple criteria.
Column B, which is a date field
Column AC, which (so long as it is `null`) is the condition for displaying the information
The first issue is that I cannot get it to search when using a date for column B.
The second is that when I add the second criterion of Null in column AC, I get a Run-Time Error 424 "Object Required". Can someone please help me? What am I doing wrong?
```
Private Sub CommandButton1_Click()
Dim searchRange As Range
Dim foundCell As Range
Dim mysearch As String
mysearch = Me.txtDate.Value
With Sheets("Acc")
Set searchRange = Sheets("Acc").Range("B2", .Range("B" & .Rows.Count).End(xlUp))
End With
Set foundCell = searchRange.Find(what:=mysearch, Lookat:=xlWhole, MatchCase:=False, SearchFormat:=False)
If Not foundCell Is Nothing And foundCell.Offset(0, 27) Is Null Then
Me.txtRef1 = foundCell.Offset(0, -1).Value
Else
MsgBox "Reference Does Not Exist"
End If
End Sub
```
2018/03/19 | 1,942 | 5,125 | <issue_start>username_0: I have a few columns that have 3 different possible types of strings in it.
1. an empty string
2. a string with 1 number (ex: 2.05)
3. a string with 12 numbers ( ex:
`'1.01, 2.02, 3.03, 4.04, 5.05, 6.06, 7.07, 8.08, 9.09, 10.10, 11.11, 12.12'` )
With that 12-number string, I need to split on every 3rd occurrence of a comma, and then insert these newly separated strings into various columns in a different table.
```
COL 1: COL 2: COL 3: COL 4:
1.01, 2.02, 3.03 | 4.04, 5.05, 6.06 | 7.07, 8.08, 9.09 | 10.10, 11.11, 12.12
```
These columns also have default values that need to be added upon trying to do anything with the empty string or the 1 value string.
The one value string would be entered like so:
```
COL 1: COL 2: COL 3: COL 4:
0, 0, *some number* | 1, 0, 0 | 0, 1, 0 | 0, 0, 1
```
The empty string would just have 0 instead of that value.
That original column once all the data has been converted over and added properly will end up becoming a GUID with a reference key to the location of where I am inserting that into the other table after the split. I'm not really worried about that at the moment though.<issue_comment>username_1: Even though you are on 2016, another option is as follows
**Example**
```
Declare @YourTable table (ID int,SomeCol varchar(max))
Insert Into @YourTable values
(1,''),
(2,'1.01, 2.02, 3.03, 4.04, 5.05, 6.06, 7.07, 8.08, 9.09, 10.10, 11.11, 12.12'),
(3,'2.05')
Select A.ID
,Col1 = case when Pos2 is not null then concat(Pos1 ,', ',Pos2 ,', ',Pos3) else concat('0, 0, ',case when Pos1 = '' then '0' else Pos1 end) end
,Col2 = case when Pos2 is not null then concat(Pos4 ,', ',Pos5 ,', ',Pos6) else '1, 0 ,0' end
,Col3 = case when Pos2 is not null then concat(Pos7 ,', ',Pos8 ,', ',Pos9) else '0, 1 ,0' end
,Col4 = case when Pos2 is not null then concat(Pos10,', ',Pos11,', ',Pos12) else '0, 0 ,1' end
From @YourTable A
Cross Apply (
Select Pos1 = ltrim(rtrim(xDim.value('/x[1]','varchar(max)')))
,Pos2 = ltrim(rtrim(xDim.value('/x[2]','varchar(max)')))
,Pos3 = ltrim(rtrim(xDim.value('/x[3]','varchar(max)')))
,Pos4 = ltrim(rtrim(xDim.value('/x[4]','varchar(max)')))
,Pos5 = ltrim(rtrim(xDim.value('/x[5]','varchar(max)')))
,Pos6 = ltrim(rtrim(xDim.value('/x[6]','varchar(max)')))
,Pos7 = ltrim(rtrim(xDim.value('/x[7]','varchar(max)')))
,Pos8 = ltrim(rtrim(xDim.value('/x[8]','varchar(max)')))
,Pos9 = ltrim(rtrim(xDim.value('/x[9]','varchar(max)')))
,Pos10 = ltrim(rtrim(xDim.value('/x[10]','varchar(max)')))
,Pos11 = ltrim(rtrim(xDim.value('/x[11]','varchar(max)')))
,Pos12 = ltrim(rtrim(xDim.value('/x[12]','varchar(max)')))
From (Select Cast('<x>' + replace(A.SomeCol,',','</x><x>')+'</x>' as xml) as xDim) as A
) B
```
**Returns**
```
ID Col1 Col2 Col3 Col4
1 0, 0, 0 1, 0 ,0 0, 1 ,0 0, 0 ,1
2 1.01, 2.02, 3.03 4.04, 5.05, 6.06 7.07, 8.08, 9.09 10.10, 11.11, 12.12
3 0, 0, 2.05 1, 0 ,0 0, 1 ,0 0, 0 ,1
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your target format is better than the CSV strings, but still not the best. You should try to get this as a row-by-row table, out of which you can generate any output format easily.
This answer will provide a resultset with all the information you will need to get this in your groups (pivot or conditional aggregation):
```
DECLARE @mockupTable TABLE(ID INT IDENTITY,YourStrangeString VARCHAR(150), DefaultValue VARCHAR(5));
INSERT INTO @mockupTable VALUES
('','abc')
,('2.05','xyz')
,('1.01, 2.02, 3.03, 4.04, 5.05, 6.06, 7.07, 8.08, 9.09, 10.10, 11.11, 12.12','blah');
```
--the query
```
WITH Splitted AS
(
SELECT *
,CAST('<x>' + REPLACE(YourStrangeString,',','</x><x>') + '</x>' AS XML) AS Fragment
FROM @mockupTable
)
,ToNumberedFragments AS
(
SELECT ID
,ROW_NUMBER() OVER(PARTITION BY ID ORDER BY (SELECT NULL)) AS FragmentNmbr
,Frg.value('text()[1]','varchar(10)') AS Fragment
,DefaultValue
FROM Splitted
OUTER APPLY Fragment.nodes('/x') AS A(Frg)
)
SELECT ID
,FragmentNmbr
,(FragmentNmbr-1)/3 AS FragmentGroup
,Fragment
,DefaultValue
FROM ToNumberedFragments;
```
the result
```
ID FragmentNmbr FragmentGroup Fragment DefaultValue
1 1 0 NULL abc
2 1 0 2.05 xyz
3 1 0 1.01 blah
3 2 0 2.02 blah
3 3 0 3.03 blah
3 4 1 4.04 blah
3 5 1 5.05 blah
3 6 1 6.06 blah
3 7 2 7.07 blah
3 8 2 8.08 blah
3 9 2 9.09 blah
3 10 3 10.10 blah
3 11 3 11.11 blah
3 12 3 12.12 blah
```
The grouping trick is integer division.
Now, seeing John's answer, I think this will be better for your needs...
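The integer-division grouping can be sketched outside SQL (plain Python, an illustration only; the SQL uses (FragmentNmbr-1)/3 because FragmentNmbr is 1-based):

```python
# Fragment i (0-based) belongs to group i // 3, giving four groups of three.
values = ["1.01", "2.02", "3.03", "4.04", "5.05", "6.06",
          "7.07", "8.08", "9.09", "10.10", "11.11", "12.12"]
groups = {}
for i, v in enumerate(values):
    groups.setdefault(i // 3, []).append(v)
print(groups[0])  # ['1.01', '2.02', '3.03']
print(groups[3])  # ['10.10', '11.11', '12.12']
```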
Upvotes: 1 |
2018/03/19 | 629 | 1,935 | <issue_start>username_0: I have a form with string validation. The validation requires that the string contain at least two words. I am using the `split()` method
```
string.split(' ').length >= 2
```
However, if the user inputs a Japanese string, e.g.:
>
> 公開文書
>
>
>
The result of `公開文書.split(' ').length` is `1`, and the value is not valid for my form. But, if I translate the string from Japanese to English we get the translation: `Public document`, and it looks like a valid string for the form.
Is there a way to handle such a case? I want to make such Japanese strings valid.<issue_comment>username_1: Simply put, you are asking to split a string by a space. Japanese text inherently does not contain spaces. The system doesn't care whether, *when translated*, it would contain a space.
With Japanese, we are really more concerned with whether there are multiple words. If you are addressing Japanese specifically, I would suggest [tiny-segmenter](https://www.npmjs.com/package/tiny-segmenter). For example:
```
var segmenter = new TinySegmenter()
var segs = segmenter.segment("公開文書") // Output [ '公開', '文書' ]
var num_words = segs.length >= 2 // true
```
Do note that this is only for Japanese text. You will need to find a similar solution for other languages, such as Chinese or Korean.
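If no segmenter is available, a crude fallback is to count space-separated words or CJK ideographs (a plain-Python sketch of the heuristic; the function name is made up):

```python
import unicodedata

def looks_multiword(s: str) -> bool:
    # Accept either >=2 space-separated words, or >=2 CJK ideographs
    # (scripts written without spaces). A heuristic, not real segmentation.
    if len(s.split()) >= 2:
        return True
    cjk = sum(1 for ch in s if "CJK" in unicodedata.name(ch, ""))
    return cjk >= 2

print(looks_multiword("Public document"))  # True
print(looks_multiword("公開文書"))          # True
print(looks_multiword("hello"))            # False
```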
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this if you need it for Chinese:
<https://github.com/username_2/node-segment>
```
var Segment = require('novel-segment');
var segment = new Segment();
var ret = segment.doSegment('公開文書');
console.log(ret.length); // => 2
console.log(ret);
```
---
```
{
"w": "公開",
"p": 4096,
"f": 1254,
"ps": "動詞 動語素",
"pp": "0x1000",
"_debug": {
"index": 0,
"ps_en": "v"
}
},
{
"w": "文書",
"p": 1048576,
"f": 6704,
"ps": "名詞 名語素",
"pp": "0x100000",
"_debug": {
"index": 1,
"ps_en": "n"
}
}
```
Upvotes: 1 |
2018/03/19 | 613 | 2,292 | <issue_start>username_0: My MySQL skills are very basic. I want to get the names of team 1 and team 2 of the matches table from the teams table, in a single SELECT statement (if possible). I have a statement, but I only get the name of team 1; for team 2 I get the same name as team 1.
**Teams Table**
```
---------------------------------------
- team_id - team_title - team_image -
---------------------------------------
- 1 - Brazil - br.png -
- 2 - Russia - ru.png -
- 3 - Spain - es.png -
---------------------------------------
```
**Matches Table**
```
------------------------------------------
- match_id - match_team1 - match_team2 -
------------------------------------------
- 1 - 2 - 3 -
- 2 - 1 - 2 -
- 3 - 3 - 1 -
------------------------------------------
```
**Function**
```
function get_all_matches($connect)
{
$sentence = $connect->prepare("
SELECT matches.*
, teams.team_title
FROM matches
, teams
WHERE matches.match_team1 = teams.team_id
ORDER
BY matches.match_id DESC
");
$sentence->execute();
return $sentence->fetchAll();
}
```<issue_comment>username_1: You need two joins, one for each team name you want to retrieve.
Also, note the implicit joins (having more than one table in the `from` clause) have been considered deprecated for quite some time, and you should probably use the modern explicit `join` clause instead:
```sql
SELECT m.*,
t1.team_title AS team_1_title,
t2.team_title AS team_2_title
FROM matches m
JOIN teams t1 ON m.match_team1 = t1.team_id
JOIN teams t2 ON m.match_team2 = t2.team_id
ORDER BY m.match_id DESC
```
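The two-lookup idea can be sketched outside SQL with plain Python dicts (an illustration only, using the sample data from the question):

```python
teams = {1: "Brazil", 2: "Russia", 3: "Spain"}
matches = [(1, 2, 3), (2, 1, 2), (3, 3, 1)]  # (match_id, team1, team2)

# One lookup per side: the analogue of joining `teams` twice.
rows = [(m, teams[t1], teams[t2]) for m, t1, t2 in matches]
print(rows)  # [(1, 'Russia', 'Spain'), (2, 'Brazil', 'Russia'), (3, 'Spain', 'Brazil')]
```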
Upvotes: 2 [selected_answer]<issue_comment>username_2: Have you tried something like
```
Select A.match_team1, t1.team_title,
A.match_team2, t2.team_title
FROM matches A
left outer join teams t2
on A.match_team2 = t2.team_id,
teams T1
WHERE A.match_team1 = t1.team_id
```
I am not familiar with MySql, but joins should be supported I guess.
Upvotes: 0 |
2018/03/19 | 728 | 2,606 | <issue_start>username_0: ```js
function ajaxcall()
{
var namejs=document.getElementById("namejs").value;
var emailjs=document.getElementById("emailjs").value;
var passjs=document.getElementById("passjs").value;
var selected_gender=document.getElementById("selected_gender").value;
var div_id=document.getElementById("div_id").innerHTML;
var xmlhttp=new new XMLHttpRequest();
xmlhttp.open("GET","response_insert.php?name="+namejs+"&email="+emailjs+"&password="+<PASSWORD>+"&gender="+selected_gender);
xmlhttp.send(null);
div_id=xmlhttp.responseText();
}
//Here my response_insert.php page code is written below
<?php
include "insert.php";
extract($_REQUEST);
mysql_connect("localhost","root","");
mysql_select_db("demo");
$name=$_GET["name"];
$email=$_GET["email"];
$password=$_GET["<PASSWORD>"];
$gender=$_GET["gender"];
mysql_query("insert into tbl (name,email,password,gender) value ('$name','$email','$password','$gender')") or die("Data not inserted");
echo "Inserted Successfully";
?>
```
```html
Insert using Ajax,Jquery,Php,MySql
.form-control
{
width: 300px;
}
.s
{
width: 100px;
}
**Name**
**Email**
**Password**
**Gender**
Male
Female
INSERT
```
I want to insert data into a MySQL database without refreshing the whole page. Here is my code.
I have written the response_insert.php page's code below the JavaScript code.
What is the problem there?
How can I do that using Ajax, JavaScript and PHP?
please give the answer.
2018/03/19 | 1,113 | 3,921 | <issue_start>username_0: I'm trying to extract a single word from a line of text. As I understand it, Powershell regexes are *almost* the same as PCREs (and I have a way of easily testing PCREs). I have a file containing (amongst other things) something like...
```
ignore=thisline
username=symcbean
dontRead=thisEither
```
And I want to get the value associated with "username".
I know that the LHS of '=' will contain "username", optionally surrounded by whitespace, and the RHS will contain the value I am trying to extract (optionally surrounded by whitespace). The string I am looking for will match \w+, hence:
```
(?<=username=)\w+
```
works for the case without additional whitespace. But I can't seem to accommodate the optional white space. For brevity I've only shown the case of trying to handle a whitespace before the '=' below:
```
(?<=username\s*=)\w+ - doesn't match with or without additional space
(?<=username\W+)\w+ - doesn't match with or without additional space
(?<=username[\s=]*)\w+ - doesn't match with or without additional space
```
However in each case above, the group in the look-behind zero-width assertion (/username\s*=/, /username\W+/, /username[\s=]*/) matches the relevant part of the string.
I'm hoping to get a single value match (rather than array).<issue_comment>username_1: Meh, you could use regexes but then you would have [two problems.](https://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/) This is how I would do it:
```
# Notice the extra spaces
$initialText = ' username = wombat '
$userName = $initialText.Split('=')[1].Trim()
```
Here's how the key line works:
* The `Split()` method takes the string `$initialText`, and divides it into an array, eliminating the character passed to the split (treating it as a delimiter). So now, you have an array `@(' username ',' wombat ')`.
* You then take the 1th (zero origin) element of the array (`[1]`). This is `' wombat '`.
* You then call the `Trim()` method, which gets rid of all the whitespace at the beginning and the end of the string - so you now have `'wombat'`...
* ...which you assign to `$userName`.
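The same Split/Trim pipeline, sketched in plain Python for comparison:

```python
initial_text = ' username = wombat '           # note the extra spaces
user_name = initial_text.split('=')[1].strip()  # take index 1, trim whitespace
print(user_name)  # wombat
```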
Split would still work; you would just have to find the line that starts with username. Having said that, here is a regex method:
```
$initialText = ' username = wombat '
$initialText -match '^.+=\W+(?<username>.+)\W+$'
$username = $matches.username
```
Or for an entire file:
From the prompt:
```
Get-Content C:\Path\To\Some\File.txt | %{if($_.trim().startswith('username')){$_ -match '^.+=\W?(?<username>.+)\W?$'; $username = $matches.username; $username}}
```
Or if you are doing it in a script:
```
$fileContents = Get-Content C:\Path\To\Some\File.txt
foreach($line in $fileContents){
if($line.Trim().StartsWith('username')){
$line -match '^.+=\W?(?<username>.+)\W?$'
$userName = $matches.username
}
}
$userName
```
Upvotes: 2 <issue_comment>username_2: This should do the trick if you are looking for multiple usernames in a single file. It will just put all values into an array of strings. The regular expressions pointed out should pull out what you want.
```
[regex]$username = "(?<=username\s*=\s*)\w+"
$usernames = @(Select-String -Path $file -Pattern $username -AllMatches | ForEach-Object {
$_.Matches | ForEach-Object{
$_.Value
}
})
```
To explain a little bit of the `Select-String` commandlet, when you use the `-AllMatches` switch with it, it will return a collection of match objects. Inside those match objects are Matches, Groups, captures, etc. For this reason, you have to do the `Foreach-Object { $_.Matches` and then inside each matches object there is a value property hence `| Foreach-Object { $_.Value`
If it is only one username per file, you could just do this per file:
```
$text = get-content $file
[regex]$usernameReg = "(?<=username\s*=\s*)\w+"
$username = $usernameReg.Match($text).Value
```
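For comparison, the same extraction with a named capture group in plain Python (Python's `re` rejects variable-width lookbehind, so the capture-group form is used instead):

```python
import re

text = "ignore=thisline\nusername=symcbean\ndontRead=thisEither\n"
# Named group instead of a lookbehind; re.M makes ^/$ match per line.
m = re.search(r"^\s*username\s*=\s*(?P<username>\w+)\s*$", text, re.M)
print(m.group("username"))  # symcbean
```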
Upvotes: 2 [selected_answer] |
2018/03/19 | 6,414 | 12,286 | <issue_start>username_0: As per [this answer](https://stackoverflow.com/questions/25248766/emulating-shifts-on-32-bytes-with-avx), I've created the following test program:
```
#include <immintrin.h>
#include <stdio.h>
#include <stdint.h>

#define SHIFT_LEFT( N ) \
\
inline __m256i shift_left_##N ( __m256i A ) { \
\
    if ( N == 0 ) return A; \
    else if ( N < 16 ) return _mm256_alignr_epi8 ( A, _mm256_permute2x128_si256 ( A, A, _MM_SHUFFLE ( 0, 0, 2, 0 ) ), ( uint8_t ) ( 16 - N ) ); \
    else if ( N == 16 ) return _mm256_permute2x128_si256 ( A, A, _MM_SHUFFLE ( 0, 0, 2, 0 ) ); \
    else return _mm256_slli_si256 ( _mm256_permute2x128_si256 ( A, A, _MM_SHUFFLE ( 0, 0, 2, 0 ) ), ( uint8_t ) ( N - 16 ) ); \
}

void print ( const size_t n ) {
    size_t i = 0x8000000000000000;
    while ( i ) {
        putchar ( ( int ) ( n & i ) + ( int ) ( 48 ) );
        i >>= 1;
        putchar ( ( int ) ( n & i ) + ( int ) ( 48 ) );
        i >>= 1;
        putchar ( ' ' );
    }
}

SHIFT_LEFT ( 2 );

int main ( ) {
    __m256i a = _mm256_set_epi64x ( 0x00, 0x00, 0x00, 0x03 );
    __m256i b = shift_left_2 ( a );
    size_t * c = ( size_t * ) &b;
    print ( c [ 3 ] ); print ( c [ 2 ] ); print ( c [ 1 ] ); print ( c [ 0 ] ); putchar ( '\n' );
    return 0;
}
```
The above program does not give the expected (by me) output, as far as I can see. I'm stumped as to how these functions work together (read the descriptions). Am I doing something wrong, or is the implementation of shift_left() wrong?
EDIT1: I came to realize (and confirmed in the comments) that this code only intends to shift by max 32 (and those are bytes), so it does not satisfy my goal. Which leaves the question, "How to implement lane crossing logical bit-wise shift (left and right) in AVX2".
EDIT2: Fast forward: In the meanwhile, I'm less stumped as to how it works and have coded what I needed. I've posted the code (shift and rotate) and accepted that as the answer.<issue_comment>username_1: Probably not the kind of answer that you're expecting. But here's a reasonably efficient solution that actually works for a run-time shift amount.
The costs are:
* **Preprocess:** ~12 - 14 instructions
* **Rotation:** 5 instructions
* **Shift:** 6 instructions
In order to shift or rotate anything, you must first preprocess the shift amount. Once you have that, you can efficiently perform shifts/rotations.
Because the preprocessing step is so expensive, this solution utilizes an object to hold the preprocessed shift amount so that it can be reused many times when shifting by the same amount.
For efficiency, the object should be on the stack in the same scope as the code that does the shifting. This allows the compiler to promote all the fields of the object into registers. Furthermore, it's recommended to force-inline all the methods of the class.
```
#include <immintrin.h>
#include <stdint.h>

class LeftShifter_AVX2{
public:
    LeftShifter_AVX2(uint32_t bits){
        // Precompute all the necessary values.
        permL = _mm256_sub_epi32(
            _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7),
            _mm256_set1_epi32(bits / 32)
        );
        permR = _mm256_sub_epi32(permL, _mm256_set1_epi32(1));
        bits %= 32;
        shiftL = _mm_cvtsi32_si128(bits);
        shiftR = _mm_cvtsi32_si128(32 - bits);
        __m256i maskL = _mm256_cmpgt_epi32(_mm256_setzero_si256(), permL);
        __m256i maskR = _mm256_cmpgt_epi32(_mm256_setzero_si256(), permR);
        mask = _mm256_or_si256(maskL, _mm256_srl_epi32(maskR, shiftR));
    }
    __m256i rotate(__m256i x) const{
        __m256i L = _mm256_permutevar8x32_epi32(x, permL);
        __m256i R = _mm256_permutevar8x32_epi32(x, permR);
        L = _mm256_sll_epi32(L, shiftL);
        R = _mm256_srl_epi32(R, shiftR);
        return _mm256_or_si256(L, R);
    }
    __m256i shift(__m256i x) const{
        return _mm256_andnot_si256(mask, rotate(x));
    }

private:
    __m256i permL;
    __m256i permR;
    __m128i shiftL;
    __m128i shiftR;
    __m256i mask;
};
```
**Test Program:**
```
#include <iostream>
using namespace std;

void print_u8(__m256i x){
    union{
        __m256i v;
        uint8_t s[32];
    };
    v = x;
    for (int c = 0; c < 32; c++){
        cout << (int)s[c] << " ";
    }
    cout << endl;
}

int main(){
    union{
        __m256i x;
        char buffer[32];
    };
    for (int c = 0; c < 32; c++){
        buffer[c] = (char)c;
    }
    print_u8(x);
    print_u8(LeftShifter_AVX2(0).shift(x));
    print_u8(LeftShifter_AVX2(8).shift(x));
    print_u8(LeftShifter_AVX2(32).shift(x));
    print_u8(LeftShifter_AVX2(40).shift(x));
}
```
**Output:**
```
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
0 0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
0 0 0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
```
---
Right-shift is very similar. I'll leave that as an exercise for the reader.
Upvotes: 3 <issue_comment>username_2: The following code implements lane-crossing logical bit-wise shift/rotate (left and right) in AVX2:
```
// Prototypes...
__m256i _mm256_sli_si256 ( __m256i, int );
__m256i _mm256_sri_si256 ( __m256i, int );
__m256i _mm256_rli_si256 ( __m256i, int );
__m256i _mm256_rri_si256 ( __m256i, int );
// Implementations...
__m256i left_shift_000_063 ( __m256i a, int n ) { // 6
return _mm256_or_si256 ( _mm256_slli_epi64 ( a, n ), _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), _mm256_permute4x64_epi64 ( _mm256_srli_epi64 ( a, 64 - n ), _MM_SHUFFLE ( 2, 1, 0, 0 ) ), _MM_SHUFFLE ( 3, 3, 3, 0 ) ) );
}
__m256i left_shift_064_127 ( __m256i a, int n ) { // 7
__m256i b = _mm256_slli_epi64 ( a, n );
__m256i d = _mm256_permute4x64_epi64 ( b, _MM_SHUFFLE ( 2, 1, 0, 0 ) );
__m256i c = _mm256_srli_epi64 ( a, 64 - n );
__m256i e = _mm256_permute4x64_epi64 ( c, _MM_SHUFFLE ( 1, 0, 0, 0 ) );
__m256i f = _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), d, _MM_SHUFFLE ( 3, 3, 3, 0 ) );
__m256i g = _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), e, _MM_SHUFFLE ( 3, 3, 0, 0 ) ); // 6
return _mm256_or_si256 ( f, g );
}
__m256i left_shift_128_191 ( __m256i a, int n ) { // 7
__m256i b = _mm256_slli_epi64 ( a, n );
__m256i d = _mm256_permute4x64_epi64 ( b, _MM_SHUFFLE ( 1, 0, 0, 0 ) );
__m256i c = _mm256_srli_epi64 ( a, 64 - n );
__m256i e = _mm256_permute4x64_epi64 ( c, _MM_SHUFFLE ( 1, 0, 0, 0 ) );
__m256i f = _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), d, _MM_SHUFFLE ( 3, 3, 0, 0 ) );
__m256i g = _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), e, _MM_SHUFFLE ( 3, 0, 0, 0 ) );
return _mm256_or_si256 ( f, g );
}
__m256i left_shift_192_255 ( __m256i a, int n ) { // 5
return _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), _mm256_slli_epi64 ( _mm256_permute4x64_epi64 ( a, _MM_SHUFFLE ( 0, 0, 0, 0 ) ), n ), _MM_SHUFFLE ( 3, 0, 0, 0 ) );
}
__m256i _mm256_sli_si256 ( __m256i a, int n ) {
if ( n < 128 ) return n < 64 ? left_shift_000_063 ( a, n ) : left_shift_064_127 ( a, n % 64 );
else return n < 192 ? left_shift_128_191 ( a, n % 64 ) : left_shift_192_255 ( a, n % 64 );
}
__m256i right_shift_000_063 ( __m256i a, int n ) { // 6
return _mm256_or_si256 ( _mm256_srli_epi64 ( a, n ), _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), _mm256_permute4x64_epi64 ( _mm256_slli_epi64 ( a, 64 - n ), _MM_SHUFFLE ( 0, 3, 2, 1 ) ), _MM_SHUFFLE ( 0, 3, 3, 3 ) ) );
}
__m256i right_shift_064_127 ( __m256i a, int n ) { // 7
__m256i b = _mm256_srli_epi64 ( a, n );
__m256i d = _mm256_permute4x64_epi64 ( b, _MM_SHUFFLE ( 3, 3, 2, 1 ) );
__m256i c = _mm256_slli_epi64 ( a, 64 - n );
__m256i e = _mm256_permute4x64_epi64 ( c, _MM_SHUFFLE ( 3, 3, 3, 2 ) );
__m256i f = _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), d, _MM_SHUFFLE ( 0, 3, 3, 3 ) );
__m256i g = _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), e, _MM_SHUFFLE ( 0, 0, 3, 3 ) );
return _mm256_or_si256 ( f, g );
}
__m256i right_shift_128_191 ( __m256i a, int n ) { // 7
__m256i b = _mm256_srli_epi64 ( a, n );
__m256i d = _mm256_permute4x64_epi64 ( b, _MM_SHUFFLE ( 3, 2, 3, 2 ) );
__m256i c = _mm256_slli_epi64 ( a, 64 - n );
__m256i e = _mm256_permute4x64_epi64 ( c, _MM_SHUFFLE ( 3, 2, 1, 3 ) );
__m256i f = _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), d, _MM_SHUFFLE ( 0, 0, 3, 3 ) );
__m256i g = _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), e, _MM_SHUFFLE ( 0, 0, 0, 3 ) );
return _mm256_or_si256 ( f, g );
}
__m256i right_shift_192_255 ( __m256i a, int n ) { // 5
return _mm256_blend_epi32 ( _mm256_setzero_si256 ( ), _mm256_srli_epi64 ( _mm256_permute4x64_epi64 ( a, _MM_SHUFFLE ( 0, 0, 0, 3 ) ), n ), _MM_SHUFFLE ( 0, 0, 0, 3 ) );
}
__m256i _mm256_sri_si256 ( __m256i a, int n ) {
if ( n < 128 ) return n < 64 ? right_shift_000_063 ( a, n ) : right_shift_064_127 ( a, n % 64 );
else return n < 192 ? right_shift_128_191 ( a, n % 64 ) : right_shift_192_255 ( a, n % 64 );
}
__m256i left_rotate_000_063 ( __m256i a, int n ) { // 5
return _mm256_or_si256 ( _mm256_slli_epi64 ( a, n ), _mm256_permute4x64_epi64 ( _mm256_srli_epi64 ( a, 64 - n ), _MM_SHUFFLE ( 2, 1, 0, 3 ) ) );
}
__m256i left_rotate_064_127 ( __m256i a, int n ) { // 6
__m256i b = _mm256_slli_epi64 ( a, n );
__m256i c = _mm256_srli_epi64 ( a, 64 - n );
__m256i d = _mm256_permute4x64_epi64 ( b, _MM_SHUFFLE ( 2, 1, 0, 3 ) );
__m256i e = _mm256_permute4x64_epi64 ( c, _MM_SHUFFLE ( 1, 0, 3, 2 ) );
return _mm256_or_si256 ( d, e );
}
__m256i left_rotate_128_191 ( __m256i a, int n ) { // 6
__m256i b = _mm256_slli_epi64 ( a, n );
__m256i c = _mm256_srli_epi64 ( a, 64 - n );
__m256i d = _mm256_permute4x64_epi64 ( b, _MM_SHUFFLE ( 1, 0, 3, 2 ) );
__m256i e = _mm256_permute4x64_epi64 ( c, _MM_SHUFFLE ( 0, 3, 2, 1 ) );
return _mm256_or_si256 ( d, e );
}
__m256i left_rotate_192_255 ( __m256i a, int n ) { // 5
return _mm256_or_si256 ( _mm256_srli_epi64 ( a, 64 - n ), _mm256_permute4x64_epi64 ( _mm256_slli_epi64 ( a, n ), _MM_SHUFFLE ( 0, 3, 2, 1 ) ) );
}
__m256i _mm256_rli_si256 ( __m256i a, int n ) {
if ( n < 128 ) return n < 64 ? left_rotate_000_063 ( a, n ) : left_rotate_064_127 ( a, n % 64 );
else return n < 192 ? left_rotate_128_191 ( a, n % 64 ) : left_rotate_192_255 ( a, n % 64 );
}
__m256i right_rotate_000_063 ( __m256i a, int n ) { // 5
return _mm256_or_si256 ( _mm256_srli_epi64 ( a, n ), _mm256_permute4x64_epi64 ( _mm256_slli_epi64 ( a, 64 - n ), _MM_SHUFFLE ( 0, 3, 2, 1 ) ) );
}
__m256i right_rotate_064_127 ( __m256i a, int n ) { // 6
__m256i b = _mm256_srli_epi64 ( a, n );
__m256i c = _mm256_slli_epi64 ( a, 64 - n );
__m256i d = _mm256_permute4x64_epi64 ( b, _MM_SHUFFLE ( 0, 3, 2, 1 ) );
__m256i e = _mm256_permute4x64_epi64 ( c, _MM_SHUFFLE ( 1, 0, 3, 2 ) );
return _mm256_or_si256 ( d, e );
}
__m256i right_rotate_128_191 ( __m256i a, int n ) { // 6
__m256i b = _mm256_srli_epi64 ( a, n );
__m256i c = _mm256_slli_epi64 ( a, 64 - n );
__m256i d = _mm256_permute4x64_epi64 ( b, _MM_SHUFFLE ( 1, 0, 3, 2 ) );
__m256i e = _mm256_permute4x64_epi64 ( c, _MM_SHUFFLE ( 2, 1, 0, 3 ) );
return _mm256_or_si256 ( d, e );
}
__m256i right_rotate_192_255 ( __m256i a, int n ) { // 5
return _mm256_or_si256 ( _mm256_slli_epi64 ( a, 64 - n ), _mm256_permute4x64_epi64 ( _mm256_srli_epi64 ( a, n ), _MM_SHUFFLE ( 2, 1, 0, 3 ) ) );
}
__m256i _mm256_rri_si256 ( __m256i a, int n ) {
if ( n < 128 ) return n < 64 ? right_rotate_000_063 ( a, n ) : right_rotate_064_127 ( a, n % 64 );
else return n < 192 ? right_rotate_128_191 ( a, n % 64 ) : right_rotate_192_255 ( a, n % 64 );
}
```
I have tried to make the \_mm256\_permute4x64\_epi64 ops (in the cases where two of them are required anyway) partially overlap, which should keep the overall latency to a minimum.
Most of the suggestions and clues given by commenters were helpful in putting the code together, thanks for those. Obviously, improvements and any other comments are welcome.
I think that Mystical's answer is interesting, but too complicated to be used effectively for generalized shifting/rotating, e.g. in a library.
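As an aside (not part of the original answer): one convenient way to validate routines like these is against a scalar reference model checked on random inputs. Python's arbitrary-precision integers make a 256-bit model trivial; the helper names below are mine:

```python
# Scalar reference model for 256-bit logical shifts and rotates.
# An AVX2 implementation can be compared against these on random inputs.
MASK = (1 << 256) - 1  # 256 one-bits

def srl256(x, n):
    """Logical right shift of a 256-bit value by n bits."""
    return (x & MASK) >> n

def sll256(x, n):
    """Logical left shift of a 256-bit value by n bits."""
    return (x << n) & MASK

def rol256(x, n):
    """Rotate a 256-bit value left by n bits."""
    n %= 256
    x &= MASK
    return ((x << n) | (x >> (256 - n))) & MASK

def ror256(x, n):
    """Rotate a 256-bit value right by n bits."""
    return rol256(x, 256 - (n % 256))

x = (0x0123456789ABCDEF << 192) | 0xFEDC
assert ror256(rol256(x, 77), 77) == x       # rotates are inverses
assert srl256(x, 192) == 0x0123456789ABCDEF  # low bits shifted out
```

A test harness would draw random 256-bit values and shift counts, run both the intrinsic routine and this model, and compare the four 64-bit lanes.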
Upvotes: 3 [selected_answer] |
2018/03/19 | 1,065 | 3,700 | <issue_start>username_0: I'm having issues with running pytest tests from inside PyCharm
I have a file x\_tests.py inside a folder called agents\_automation\_2 in C:\Temp; the content of the file is:
```
import pytest
def test_mytest():
assert False
```
When I run I get the following output
*C:\Python36-32\python.exe "C:\Program Files\JetBrains\PyCharm 2017.3.2\helpers\pycharm\_jb\_pytest\_runner.py" --path C:/Temp/agents\_automation\_2
Launching py.test with arguments C:/Temp/agents\_automation\_2 in C:\Temp\agents\_automation\_2
============================= test session starts =============================
platform win32 -- Python 3.6.4, pytest-3.4.2, py-1.5.2, pluggy-0.6.0
rootdir: C:\Temp\agents\_automation\_2, inifile:
plugins: xdist-1.22.0, forked-0.2
collected 0 items
======================== no tests ran in 0.01 seconds =========================*
However, when I run from a regular windows command line inside the folder the test runs OK
Any idea of what might be the issue ?
Thanks !!!<issue_comment>username_1: The name of your test file does not conform to the default pytest discovery rules. If you change *x\_tests.py* to *x\_test.py*, pytest runs the test and it fails as expected. Take a look at [this page](https://docs.pytest.org/en/latest/goodpractices.html) for more info on pytest discovery rules.
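To make the discovery rule concrete: by default pytest only collects files matching `test_*.py` or `*_test.py` (the `python_files` ini option). A quick sketch, using `fnmatch` to mimic those globs, of why `x_tests.py` is skipped:

```python
from fnmatch import fnmatch

# pytest's default python_files globs (configurable in pytest.ini / setup.cfg)
DEFAULT_PATTERNS = ("test_*.py", "*_test.py")

def is_collected(filename):
    """Return True if pytest's default discovery would pick up this file."""
    return any(fnmatch(filename, pat) for pat in DEFAULT_PATTERNS)

print(is_collected("x_tests.py"))  # False -> "collected 0 items"
print(is_collected("x_test.py"))   # True
print(is_collected("test_x.py"))   # True
```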
Upvotes: 5 [selected_answer]<issue_comment>username_2: Make sure that PyCharm is configured to use PyTest. By default it uses unittests. Checkout this doc: <https://www.jetbrains.com/help/pycharm/choosing-your-testing-framework.html>. Once you've correctly configured PyCharm to use PyTest, when you, for example, open the context menu for the test file, you'll see something like **Run 'PyTest in ... '**
Upvotes: 1 <issue_comment>username_3: In my case I had only to delete the `pytest.ini` file from the project.
Upvotes: 1 <issue_comment>username_4: For those using WSL, problem might be related to WSL being unaccessible.
If you are getting this error on WSL,
`Logon failure: the user has not been granted the requested logon type at this computer.`
Try restarting the `vmcompute` service in PowerShell (run as Administrator):
```
powershell restart-service vmcompute
```
Source: <https://github.com/microsoft/WSL/issues/5401>
Upvotes: 0 <issue_comment>username_5: In my case, the test class contained an `__init__` method, which causes an empty suite as well. What you need to do is avoid writing an `__init__` method.
Upvotes: 2 <issue_comment>username_6: 1. Make sure your conftest.py file is in the same folder.
2. If you are using fixtures, add the "Test" prefix to the class name (example: TestClass).
```py
@pytest.mark.usefixtures("setup1")
class TestClass:
def test_abc(self):
print("testcase execution")
```
Upvotes: 1 <issue_comment>username_7: In my case I only had to change the class name to start with "Test", and that resolved my issue.
[](https://i.stack.imgur.com/3vdvP.png)
Upvotes: 0 <issue_comment>username_8: I got the same error and figured out that the method name didn't start with "test\_".
Example :
Before error :
class TestExample2 :
```
def editProfile(self, dataLoad):
print(dataLoad)
print(dataLoad[1])
```
after fix :
class TestExample2 :
```
def test_editProfile(self, dataLoad):
print(dataLoad)
print(dataLoad[1])
```
[](https://i.stack.imgur.com/Dy5EB.png)
Upvotes: 0 <issue_comment>username_9: I had the same problem. It was caused by a circular import in a super class of my test. When I tried to run this super class (which has no tests itself) directly I saw the error message.
Upvotes: 0 |
2018/03/19 | 916 | 3,156 | <issue_start>username_0: I have a model class:
```
public class mOrderSnapshot
{
[Display(Name = "Birthday")]
[Required(ErrorMessage = "*")]
[DataType(DataType.Date)]
[DisplayFormat(ApplyFormatInEditMode = true, DataFormatString = "{0:d}")]
public DateTime Birthday { get; set; }
}
```
I am trying to insert a date into the input:
```
@Html.EditorFor(model => model.Birthday, new { @class = "usercontrol"})
```
But I see only the mask **дд.мм.гггг** (i.e. dd.mm.yyyy).
Why can't I insert a value (date)?
2018/03/19 | 2,023 | 9,319 | <issue_start>username_0: I have an `activity` with several `EditTexts`. If the user clicks '`Cancel`' button and nothing has changed in these `EditTexts` then the app should go to the previous activity but if something has changed in these `EditTexts` then I want the user to see the `AlertDialog`:
```
Save changes you made?
NO YES
```
I have set up a `TextWatcher` for these `EditTexts` like:
```
//let's set up a textwatcher so if the state of any of the edittexts has changed.
//if it has changed and user clicks 'CANCEL', we'll ask first, '
//You've made changes here. Sure you want to cancel?'
TextWatcher edittw = new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence s, int start, int count, int after) {
}
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
}
@Override
public void afterTextChanged(Editable s) {
Toast.makeText(EditContact.this, "change detected", Toast.LENGTH_SHORT).show();
}
};
categoryname.addTextChangedListener(edittw);
namename.addTextChangedListener(edittw);
phonename.addTextChangedListener(edittw);
addressname.addTextChangedListener(edittw);
commentname.addTextChangedListener(edittw);
```
And here is my `AlertDialog` for the `Cancel` button. It currently appears regardless of whether any of the `EditTexts` have changed, but I want it to appear only if changes were made; otherwise there should be no `AlertDialog` and the current activity should simply go back to the previous activity:
```
private void cancelButton() {
cancel.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
//add a dialogue box
AlertDialog.Builder builder = new AlertDialog.Builder(view.getContext());
builder.setMessage("Save changes you made?").setPositiveButton("Yes", dialogClickListener)
.setNegativeButton("No", dialogClickListener).show();
}
});
}
//Are you sure you want to cancel? dialogue
DialogInterface.OnClickListener dialogClickListener = new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
switch (which){
case DialogInterface.BUTTON_POSITIVE:
//Yes button clicked
pDialog = new ProgressDialog(EditContact.this);
// Showing progress dialog for the review being saved
pDialog.setMessage("Saving...");
pDialog.show();
//post the review_id in the current activity to EditContact.php and
StringRequest stringRequest = new StringRequest(Request.Method.POST, EditContact_URL,
new Response.Listener<String>() {
@Override
public void onResponse(String response) {
//hide the dialogue saying 'Saving...' when page is saved
pDialog.dismiss();
Toast.makeText(EditContact.this, response, Toast.LENGTH_LONG).show();
}
},
new Response.ErrorListener() {
@Override
public void onErrorResponse(VolleyError error) {
Toast.makeText(EditContact.this, "problem here", Toast.LENGTH_LONG).show();
}
}) {
@Override
protected Map<String, String> getParams() {
Map<String, String> params = new HashMap<>();
//we are posting review_id into our EditContact.php file,
//the second value, review_id,
// is the value we get from Android.
// When we see this in our php, $_POST["review_id"],
//put in the value from Android
params.put("review_id", review_id);
return params;
}
};
AppController.getInstance().addToRequestQueue(stringRequest);
//when cancelled, back to the PopulistoListView class
Intent j = new Intent(EditContact.this,PopulistoListView.class);
startActivity(j);
break;
case DialogInterface.BUTTON_NEGATIVE:
//close the activity
finish();
}
}
};
```
I have searched the internet for tutorials and posts with phrases like 'TextWatcher' and 'AlertDialog' but I've not found anything that helps me achieve what I am trying to do.<issue_comment>username_1: Try adding the AlertDialog like below; you can put the dialog inside the afterTextChanged method:
```java
TextWatcher edittw = new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence s, int start, int count, int after) {
}
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
}
@Override
public void afterTextChanged(Editable s) {
Toast.makeText(EditContact.this, "change detected", Toast.LENGTH_SHORT).show();
AlertDialog.Builder builder;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
builder = new AlertDialog.Builder(EditContact.this, android.R.style.Theme_Material_Dialog_Alert); // use the Activity context; a dialog cannot be shown from the Application context
} else {
builder = new AlertDialog.Builder(EditContact.this);
}
builder.setTitle("Delete entry")
.setMessage("Are you sure you want to delete this entry?")
.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
//You want yes
}
})
.setNegativeButton(android.R.string.no, new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
//You want no
}
})
.setIcon(android.R.drawable.ic_dialog_alert)
.show();
}
};
```
Upvotes: 1 <issue_comment>username_2: You can do something like this
```
//onCancel Clicked
if(somethingChanged)//compare old values with new values
showDialog();
else
onBackPressed();
```
Upvotes: 0 <issue_comment>username_3: Create a `boolean` variable to track `textChange`
```
boolean isDirty;
TextWatcher edittw = new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence s, int start, int count, int after) {
}
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
}
@Override
public void afterTextChanged(Editable s) {
Toast.makeText(EditContact.this, "change detected", Toast.LENGTH_SHORT).show();
isDirty = true;
}
};
categoryname.addTextChangedListener(edittw);
namename.addTextChangedListener(edittw);
phonename.addTextChangedListener(edittw);
addressname.addTextChangedListener(edittw);
commentname.addTextChangedListener(edittw);
```
And change you cancel button click event to this
```
private void cancelButton() {
cancel.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(isDirty) {
//add a dialogue box
AlertDialog.Builder builder = new AlertDialog.Builder(view.getContext());
builder.setMessage("Save changes you made?").setPositiveButton("Yes", dialogClickListener)
.setNegativeButton("No", dialogClickListener).show();
}
else {
// this will finish the current activity and the last activity will be popped from the stack.
finish();
}
}
});
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_4: In cancel button onClick method just try this :-
```
@Override
onClick(View v) {
if(!(categoryname.getText().toString().isEmpty() && namename.getText().toString().isEmpty() && phonename.getText().toString().isEmpty() && addressname.getText().toString().isEmpty() && commentname.getText().toString().isEmpty())) {
// show your dialog
}
else {
// normal cancel
}
```
Just `check the text` in all the `edittexts`; if any one of them is not empty, show the dialog!
>
> No need to add `textChangeListener` for the above problem !!
>
>
>
Upvotes: 0 <issue_comment>username_5: You can simply use a boolean variable, like so:
* declare a `changed` flag among the class fields:
boolean changed;
* set this variable to `true` in `afterTextChanged()` of the `TextWatcher`
* change `cancelButton()` to this:
```
private void cancelButton() {
cancel.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(changed){
AlertDialog.Builder builder = new AlertDialog.Builder(view.getContext());
builder.setMessage("Save changes you made?").setPositiveButton("Yes", dialogClickListener)
.setNegativeButton("No", dialogClickListener).show();
}else {
Log.v(TAG,"nothing changed ");
}
}
});
}
```
Upvotes: 1 |
2018/03/19 | 685 | 2,168 | <issue_start>username_0: Input
```
ABC - MMM
```
Expected output
```
MMM
```
Everything after `-` and a `space`
Tried this `(-\s).+`
But I don't get rid of the `-` and the space.
<https://regex101.com/r/0W7lhD/2><issue_comment>username_1: You want to use `positive lookbehind` which means look for things after, but not including, a pattern. The syntax is `?<=`
```
(?<=-\s).+
```
Here is some [further reading](https://www.regular-expressions.info/lookaround.html)
Though this is the *correct* way to do this it's worth noting, as @username_2 points out, support is currently very limited.
**EDIT:**
You can use split to turn the string into an array and then return the last string in the array.
This is a slightly long winded way to do it but I think it is more readable this way.
```js
var string = "ADSD - ASDASD";
var regex = /(-\s)/;
function matchAfter(string, pattern) {
var short = string.split(pattern);
return short[short.length - 1];
}
console.log(matchAfter(string, regex));
```
Upvotes: 1 <issue_comment>username_2: So there are multiple ways to go about this. The easiest and potentially best method is not to use regex at all.
**Method 1 - split**
Of course, you'd add a check to ensure that the element `[1]` exists, but this shows you the general idea of getting `MMM`.
```js
console.log("ABC - MMM".split(" - ")[1])
```
**Method 2 - regex group**
This method groups everything you *want* into capture group 1.
```js
console.log("ABC - MMM".match(/-\s(.+)/)[1])
```
**Method 3 - regex lookbehind**
This one was suggested by [username_1 in his answer here](https://stackoverflow.com/a/49367964/3600709). While this works, it's currently only supported on Chrome 62+ (lookbehinds have little support in browsers at the moment). You can see the [RegExp Lookbehind Assertions proposal here as part of ECMA TC39](https://github.com/tc39/proposal-regexp-lookbehind).
```js
console.log("ABC - MMM".match(/(?<=-\s).+/)[0])
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: You can capture the groups and get the last element from the array.
```js
console.log(("ABC - MMM".match(/-\s+(.+)/) || []).pop());
```
Upvotes: 0 |
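(A cross-language footnote, not part of the original answers.) The same three approaches translate directly to Python's `re` module, which has long supported lookbehind:

```python
import re

s = "ABC - MMM"

# 1. split on the delimiter and keep the last piece
assert s.split(" - ")[-1] == "MMM"

# 2. capture group: keep only what follows "- "
assert re.search(r"-\s(.+)", s).group(1) == "MMM"

# 3. lookbehind: match without consuming the delimiter
assert re.search(r"(?<=-\s).+", s).group(0) == "MMM"

print("all three approaches agree")
```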
2018/03/19 | 1,302 | 4,564 | <issue_start>username_0: ```
struct FOO{
int a;
int b;
int c;
};
volatile struct FOO foo;
int main(void)
{
foo.a = 10;
foo.b = 10;
foo.c = 10;
struct FOO test = foo;
return 0;
}
```
This won't compile, because
`struct FOO test = foo;`
generates an error:
>
> error: binding reference of type 'const FOO&' to 'volatile FOO'
> discards qualifiers
>
>
>
How can I copy a `volatile struct` into another `struct` in C++ (before C++11)?
Many people suggested to just delete volatile, but I can't do that in this case, because I want to copy the current SPI register settings inside a µC, and these are declared volatile by the manufacturer headers.
I want to copy those settings because the manufacturer also provides a library that uses the SPI for EnDat communication, and I don't have access to its source code. Since I have to change the SPI register settings during runtime, I want to easily get back to the library's SPI settings without calling the init\_endat() library function again (it's unspecified what happens if I call it twice).
Could I possibly use memcopy() for that?
As suggested, this is a copy of the following question.
[Why am I not provided with a default copy constructor from a volatile?](https://stackoverflow.com/questions/17217300/why-am-i-not-provided-with-a-default-copy-constructor-from-a-volatile)<issue_comment>username_1: This is ill-formed because `FOO` has an implicit copy constructor defined as:
```
FOO(FOO const&);
```
And you write `FOO test = foo;` with `foo` of type `volatile FOO`, invoking:
```
FOO(volatile FOO const&);
```
But references-to-volatile to references-to-non-volatile implicit conversion is ill-formed.
From here, *three* solutions emerge:
1. don't make volatile to non-volatile conversions;
2. define a suited copy constructor or copy the object members "manually";
3. `const_cast` can remove the volatile qualifier, but this is undefined behavior to use that if your underlying object is effectively volatile.
>
> Could I possibly use memcopy() for that?
>
>
>
No you cannot, `memcpy` is incompatible with volatile objects: there is no overload of it which takes pointers-to-volatile, and there is nothing you can do without invoking undefined behavior.
So, as a conclusion, your best shot if you cannot add a constructor to `FOO` is to define:
```
FOO FOO_copy(FOO volatile const& other)
{
FOO result;
result.a = other.a;
result.b = other.b;
result.c = other.c;
return result;
}
```
Or with C++11's [`std::tie`](http://en.cppreference.com/w/cpp/utility/tuple/tie "cppreference.com"):
```
FOO FOO_copy(FOO volatile const& other)
{
FOO result;
std::tie(result.a, result.b, result.c) = std::tie(other.a, other.b, other.c);
return result;
}
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: You haven't provided enough details about your problem to give a more precise assessment, but the solution to whatever problem you're trying to solve is almost certainly not to use `volatile`. "Volatile" means that the value can change from under your feet: the two typical good use cases are variables changed from within UNIX signal handlers and memory-mapped registers. Volatile is not enough for variables shared among threads, notably.
The reason you are getting this error is that your compiler is trying to find a `FOO(volatile FOO&)` copy constructor, which is never automatically generated.
Upvotes: 2 <issue_comment>username_3: To give another approach to an answer, to address why this doesn't make sense, rather than just where the C++ standard says this is invalid:
The whole point of `volatile` is that you have precise control over which variable gets accessed when. That means given `volatile int i, j;`, `i = 1; j = 2;` and `j = 2; i = 1;` do not do the same thing. The compiler cannot freely transform one into the other. The same applies to reads: given `volatile int i, j; int x, y;`, `x = i; y = j;` and `y = j; x = i;` do not do the same thing. The presence of `volatile` means the accesses *must* happen in *exactly* the order you specified.
Now, in your example, what should `struct FOO test = foo;` do? You've never specified whether you want to first read `foo.a`, then `foo.b`, finally `foo.c`, or perhaps first read `foo.c`, then `foo.b`, finally `foo.a`, or perhaps some other order.
You can, if you wish, do this:
```
struct FOO test;
test.a = foo.a;
test.b = foo.b;
test.c = foo.c;
```
Here, you explicitly specify the order of the accesses to `foo`'s fields, so you avoid the problem.
Upvotes: 4 |
2018/03/19 | 742 | 1,852 | <issue_start>username_0: This is probably trivial, but I can't find the answer. Consider the following code:
```py
from sympy import *
X = Symbol('X')
a=10
b=100
c=1000
d=10000
s = latex ( a*b*c*d / X )
print (s)
```
displays:
```
\frac{10000000000}{X}
```
And I would prefer
```
\frac{10^{10}}{X}
```
Is it possible? Note that a, b, c and d are read from files, so the values will change on each run. That is why the following don't solve my problem:
```
n20 = Symbol('10')
```
neither
```
latex(S('10**10/X', evaluate=False))
```<issue_comment>username_1: ```
>>> from sympy import *
>>> var('X')
X
>>> latex(S('10**20/X', evaluate=False))
'\\frac{10^{20}}{X}'
```
See <https://github.com/sympy/sympy/wiki/Quick-examples>.
EDIT: Your edited question differs considerably from the original. Here's an answer to it.
Because your input values might not be powers of ten, `r` might not be either. Consequently, when it is expressed as a power of ten, its exponent might not be an integer; hence the use of base-ten logarithms.
```
from sympy import latex, sympify, Symbol
from math import log10
a=10
b=100
c=1000
d=10000
r = a * b * c * d
exponent = log10(r)
X = Symbol('X')
s = latex(sympify('10**{}/X'.format(exponent), evaluate=False))
print (s)
```
The result for these values of `a, b, c` and `d` is `\frac{10^{10.0}}{X}`.
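If the float exponent (`10.0`) in that output bothers you, here is a stdlib-only sketch (the `pow10_exponent` helper is mine, not part of sympy) that produces an exact integer exponent whenever the product really is a power of ten, and falls back to the plain number otherwise:

```python
def pow10_exponent(n):
    """Return k if n == 10**k for some integer k >= 0, else None."""
    if n <= 0:
        return None
    k = 0
    while n % 10 == 0:
        n //= 10
        k += 1
    return k if n == 1 else None

r = 10 * 100 * 1000 * 10000
k = pow10_exponent(r)
if k is not None:
    s = r"\frac{10^{%d}}{X}" % k
else:
    s = r"\frac{%d}{X}" % r
print(s)  # \frac{10^{10}}{X}
```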
Upvotes: 2 <issue_comment>username_2: All you need is a little help that will return your number with powers of 10 removed. Then wrap this in an unevaluated Mul and pass it to latex:
```
>>> def u10(n):
... if abs(n) < 10 or int(n) != n: return n
... s = str(n)
... m = s.rstrip('0')
... if len(m) == len(s): return n
... return Mul(int(m), Pow(10, len(s) - len(m), evaluate=0), evaluate=0)
...
>>> u10(12300)
123*10**2
>>> latex(Mul(_,1/x,evaluate=False))
'\\frac{123 \\cdot 10^{2}}{x}'
```
Upvotes: 1 |
2018/03/19 | 1,139 | 2,432 | <issue_start>username_0: I am trying to extract a Date and a Time from a Timestamp:
```
DateTime
31/12/2015 22:45
```
to be:
```
Date | Time |
31/12/2015| 22:45 |
```
however when I use:
```
df['Date'] = pd.to_datetime(df['DateTime']).dt.date
```
I get:
```
2015-12-31
```
Similarly with Time I get:
```
df['Time'] = pd.to_datetime(df['DateTime']).dt.time
```
gives
```
23:45:00
```
but if I try to format it I get an error:
```
df['Date'] = pd.to_datetime(df['DateTime'], format='%d/%m/%Y').dt.date
ValueError: unconverted data remains: 00:00
```<issue_comment>username_1: **Option 1**
Since you don't really need to operate on the *dates* per se, just split your column on space:
```
df = df.DateTime.str.split(expand=True)
df.columns = ['Date', 'Time']
```
```
df
Date Time
0 31/12/2015 22:45
```
---
**Option 2**
Alternatively, just drop the format specifier completely:
```
v = pd.to_datetime(df['DateTime'], errors='coerce')
df['Time'] = v.dt.time
df['Date'] = v.dt.floor('D')
```
```
df
Time Date
0 22:45:00 2015-12-31
```
Upvotes: 2 <issue_comment>username_2: Try strftime
```
df['DateTime'] = pd.to_datetime(df['DateTime'])
df['Date'] = df['DateTime'].dt.strftime('%d/%m/%Y')
df['Time'] = df['DateTime'].dt.strftime('%H:%M')
DateTime Date Time
0 2015-12-31 22:45:00 31/12/2015 22:45
```
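The same format codes work outside pandas too; a stdlib-only sanity check of the `%d/%m/%Y` and `%H:%M` specifiers used above:

```python
from datetime import datetime

stamp = "31/12/2015 22:45"
dt = datetime.strptime(stamp, "%d/%m/%Y %H:%M")  # parse the timestamp
date_part = dt.strftime("%d/%m/%Y")              # format the date half
time_part = dt.strftime("%H:%M")                 # format the time half
print(date_part, time_part)  # 31/12/2015 22:45
```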
Upvotes: 4 [selected_answer]<issue_comment>username_3: If your `DateTime` column is already a datetime type, you shouldn't need to call `pd.to_datetime` on it.
Are you looking for a string (`"12:34"`) or a timestamp (the concept of `12:34` in the afternoon)? If you're looking for the former, there are answers here that cover that. If you're looking for the latter, you can use the `.dt.time` and `.dt.date` accessors.
```
>>> pd.__version__
u'0.20.2'
>>> df = pd.DataFrame({'DateTime':pd.date_range(start='2018-01-01', end='2018-01-10')})
>>> df['date'] = df.DateTime.dt.date
>>> df['time'] = df.DateTime.dt.time
>>> df
DateTime date time
0 2018-01-01 2018-01-01 00:00:00
1 2018-01-02 2018-01-02 00:00:00
2 2018-01-03 2018-01-03 00:00:00
3 2018-01-04 2018-01-04 00:00:00
4 2018-01-05 2018-01-05 00:00:00
5 2018-01-06 2018-01-06 00:00:00
6 2018-01-07 2018-01-07 00:00:00
7 2018-01-08 2018-01-08 00:00:00
8 2018-01-09 2018-01-09 00:00:00
9 2018-01-10 2018-01-10 00:00:00
```
Upvotes: 0 |
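(A stdlib footnote, not from the answers above.) A plain `datetime` object exposes the same split through its `.date()` and `.time()` methods, which return the `date` and `time` objects that the `.dt.date` / `.dt.time` accessors produce per row:

```python
from datetime import datetime

dt = datetime(2018, 1, 1, 0, 0)
print(dt.date())  # 2018-01-01
print(dt.time())  # 00:00:00
```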
2018/03/19 | 465 | 1,376 | <issue_start>username_0: I am trying to generate an object via a for loop. The problem I am having is that the property name is not being generated; instead, the variable name itself is inserted as the key.
Here is an example:
```
for (let key in person) {
let obj = {key : person[key] };
console.log(obj);
}
```
If you run this it prints
```
{ key : "smith" }
```
The desired object would be
```
{ name : "smith" }
```
Any ideas on how to achieve this? Thank you in advance.<issue_comment>username_1: What you want is:
```js
const person = {
age: 18,
size: '1m74',
eyeColor: 'blue',
};
for (let key in person) {
const obj = {
[key] : person[key],
};
console.log(obj);
}
```
---
Look at [this question](https://stackoverflow.com/questions/19837916/creating-object-with-dynamic-keys) for explanations
---
---
Example with `Array.forEach` and `Object.keys`
```js
const person = {
age: 18,
size: '1m74',
eyeColor: 'blue',
};
Object.keys(person).forEach((x) => {
const obj = {
[x]: person[x],
};
console.log(obj);
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can do this by :
```
obj = {name: person[key] }
```
Upvotes: 0 <issue_comment>username_3: You can achieve this using:
```
for (let key in person) {
const obj = {};
obj[key] = person[key];
console.log(obj);
}
```
Upvotes: 1 |
2018/03/19 | 4,365 | 14,832 | <issue_start>username_0: I'm using this jquery timer to collect time spent while it is running.
<https://github.com/walmik/timer.jquery>
<http://jquerytimer.com/>
In a prior Stack Overflow post we were able to post to another page the current accumulated time using jQuery Ajax ([jQuery.timer how to get current value in php?](https://stackoverflow.com/questions/49327105/jquery-timer-how-to-get-current-value-in-php)). Many thanks to @Dakis.
It seems our current solution is trying to save on any Stop and Restart of the Timer. It only needs to do a Save to DB routine IF the “Save Time and Notes” button is selected.
I’ve been researching jQuery Ajax and understand that a key/value pair is needed to be sent to the server/receiving page. I understand the first value identifies the target from which to get the "key", but I could not get a clear understanding of proper formatting for the second “value”.
**'task': $('.ta\_tasks').data('task')** does not seem to be passing the value as expected.
I’ve added a TextArea with an ID of “ta\_tasks” and appended the current working AJAX with:
```
data: {
'time': $('.timer').data('seconds'),
'state': $('.timer').data('state'),
'task': $('.ta_tasks').data('task')
```
On the receiving page I added a simple alert to see if the value is being received but it is not. If I can figure out how to properly send the contents of the TextArea I could also figure out how to submit a value from the “Save Time and Notes” button so that a Pause and Restart will not also submit to the database.
Working page: <http://sgdesign.com/timer2.php>
**Parent page script:**
```
$(document).ready(function () {
var hasTimer = false;
/**
* Save the current timer value.
*
* Performs an ajax request to the server, which will
* save the timer value in a database table and return
* a corresponding message.
*/
function saveTime() {
$.ajax({
method: 'post',
dataType: 'html',
url: 'saveTime.php',
data: {
'time': $('.timer').data('seconds'),
'state': $('.timer').data('state'),
'task': $('.ta_tasks').data('task')
},
success: function (response, textStatus, jqXHR) {
displayAlert('success', response);
},
error: function (jqXHR, textStatus, errorThrown) {
/*
* If the status code of the response is the custom one
* defined by me, the developer, in saveTime.php, then I
* can display the corresponding error message. Otherwise,
* the displayed message will be a general user-friendly
* one - so that no system-related infos will be shown.
*/
var message = (jqXHR.status === 420)
? jqXHR.statusText
: 'An error occurred during your request. Please try again.';
displayAlert('danger', message);
},
complete: function (jqXHR, textStatus) {
//...
}
});
}
/**
* Display a bootstrap alert.
*
* @param type string success|info|warning|danger.
* @param message string Alert message.
* @return void
*/
function displayAlert(type, message) {
var alert = '<div class="alert alert-' + type + ' alert-dismissible" role="alert">'
+ '<button type="button" class="close" data-dismiss="alert" aria-label="Close">'
+ '<span aria-hidden="true">×</span>'
+ '</button>'
+ '<span>' + message + '</span>'
+ '</div>';
$('.messages').html(alert);
}
// Init timer save
$('.save-timer-btn').on('click', function () {
saveTime();
});
// Init timer start
$('.start-timer-btn').on('click', function () {
hasTimer = true;
$('.timer').timer({
editable: true
});
$(this).addClass('d-none');
$('.pause-timer-btn').removeClass('d-none');
});
// Init timer resume
$('.resume-timer-btn').on('click', function () {
$('.timer').timer('resume');
$(this).addClass('d-none');
$('.pause-timer-btn').removeClass('d-none');
});
// Init timer pause
$('.pause-timer-btn').on('click', function () {
$('.timer').timer('pause');
$(this).addClass('d-none');
$('.resume-timer-btn').removeClass('d-none');
saveTime();
});
// Remove timer. Leaves the display intact.
$('.remove-timer-btn').on('click', function () {
hasTimer = false;
$('.timer').timer('remove');
$(this).addClass('d-none');
$('.start-timer-btn').removeClass('d-none');
$('.pause-timer-btn, .resume-timer-btn').addClass('d-none');
});
// Additional focus event for this demo
$('.timer').on('focus', function () {
if (hasTimer) {
$('.pause-timer-btn').addClass('d-none');
$('.resume-timer-btn').removeClass('hidden');
}
});
// Additional blur event for this demo
$('.timer').on('blur', function () {
if (hasTimer) {
$('.pause-timer-btn').removeClass('d-none');
$('.resume-timer-btn').addClass('d-none');
}
});
});
```
**Target Page contents:**
```
<?php
// Price per hour variable
$cost = 50;
# require 'connection.php';
// Validate the timer value.
if (!isset($_POST['time']) || empty($_POST['time'])) {
/*
* This response header triggers the ajax error because the status
* code begins with 4xx (which corresponds to the client errors).
* I defined 420 as the custom status code. You can choose whatever
* code between 401 and 499 which is not officially assigned, e.g.
* which is marked as "Unassigned" in the official HTTP Status Code Registry.
* See the link.
*
* @link https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml HTTP Status Code Registry.
*/
header('HTTP/1.1 420 No time value defined. Did you start the timer?');
exit();
}
// Validate the timer state.
if (!isset($_POST['state']) || empty($_POST['state'])) {
header('HTTP/1.1 420 No timer state recognized. Did you start the timer?');
exit();
}
// Read the posted values.
$time = $_POST['time'];
$state = $_POST['state']; /* The state of the timer when the saving operation was triggered. */
$task = $_POST['ta_tasks'];
$r = $cost / 3600 * $time;
$rate = round($r, 2);
/*
* Save the timer value in a db table using PDO library.
*/
/* $sql = 'INSERT INTO my_timer_table (
time
) VALUES (
:time
)';
$statement = $connection->prepare($sql);
$statement->execute([
':time' => $time,
]);
// Print success message.
echo 'Time (' . $time . ' seconds) successfully saved when timer was ' . $state . '.';
exit(); */
?>
Untitled Document
var a = "<?php echo $task; ?>";
alert ('task: ' + a);
<?php function secondsToTime($seconds) {
$dtF = new \DateTime('@0');
$dtT = new \DateTime("@$seconds");
return $dtF->diff($dtT)->format('%h hours, %i minutes and %s seconds');
// return $dtF->diff($dtT)->format('%a days, %h hours, %i minutes and %s seconds');
}
?>
<?php echo secondsToTime($time);
echo '<br>';
echo 'Tasks: ' . $task . '<br>';
echo 'Cost: $'. $rate;
?>
```
**Goal Summary**
Proper formatting of data in: 'task': $('.ta_tasks').data('task')
Understanding of why, so as to learn how to also have the 'Save Time and Notes' button invoke saving Cost and Notes to the DB
[](https://i.stack.imgur.com/oBmaw.jpg)<issue_comment>username_1: check your ajax request data that you're sending and the one you are getting in your saveTime.php; you are sending 'task' and receiving 'ta_task' in saveTime.php
```js
$.ajax({
method: 'post',
dataType: 'html',
url: 'saveTime.php',
data: {
'time': $('.timer').data('seconds'),
'state': $('.timer').data('state'),
'ta_task': $('.ta_tasks').data('task')
},
//other codes here
```
```html
//saveTime.php
//now get the value with 'ta_task'
$task = $_POST['ta_task'];
```
Upvotes: 0 <issue_comment>username_2: The textarea doesn't have a `ta_tasks` class (you use an `id`), and it doesn't have an html5 `data` attribute, so correct it to `$("#ta_tasks").val()`.
Upvotes: 1 <issue_comment>username_3: Don't define functions inside `$(document).ready`. Bring them outside.
Functions in PHP should reside only in pages destined for this purpose. See [PSR-1 Side Effects](https://www.php-fig.org/psr/psr-1/#23-side-effects). In principle you should definitely read: PSR-1 and PSR-2. Optional, especially PSR-4.
When you try to read a value sent through ajax, then you should read the value, not the CSS selector. So: Wrong: `$task = $_POST['ta_tasks'];`, correct: `$task = $_POST['task'];`.
Before validating posted values (on top of the page *saveTime.php*) you shouldn't declare any variables or do other things - so to say. So no `$cost = 50;` before validations, but after them. Still, if you want to define constants for *saveTime.php*, then better bring them in another file, which you can include.
In this case, the `data()` method is a proprietary method of <http://jquerytimer.com> ! You can use it to fetch some values (timer value, timer state, etc). But, in order to fetch the value of a html control you need to use `val()`, or `text()`, or `innerHtml`, etc. In a word: native js or jquery methods/functions. So, use like this:
```
data: {
'time': $('.timer').data('seconds'),
'state': $('.timer').data('state'),
'task': $('#ta_tasks').val()
}
```
Do you see the selector (`'#ta_tasks'`)? It references an id (because of `#`). You used `.ta_tasks`, therefore referencing a class name. Which you didn't define.
Better: use the *camelCase* naming convention for html id's and names, and the "hyphen-separated" form for css classes:
```
data: {
'time': $('.timer').data('seconds'),
'state': $('.timer').data('state'),
'task': $('#taTasks').val()
}
//...
<textarea id="taTasks" name="taTasks">Doh!</textarea>
```
Avoid as much as possible referencing php code from javascript or css code. If you need a php value inside a javascript code, then pass it through a javascript function - as argument, or save the php value inside an attribute of a html control and read it through referencing the attribute by js/jquery methods/functions. As an example, see the code in *saveTime.php*, which saves the *task* value in a hidden input and alerts it from js code.
---
### index.php
```
Demo - Timer
$(document).ready(function () {
var hasTimer = false;
// Init timer start
$('.save-timer-btn').on('click', function () {
saveTime();
});
// Init timer start
$('.start-timer-btn').on('click', function () {
hasTimer = true;
$('.timer').timer({
editable: true
});
$(this).addClass('d-none');
$('.pause-timer-btn').removeClass('d-none');
});
// Init timer resume
$('.resume-timer-btn').on('click', function () {
$('.timer').timer('resume');
$(this).addClass('d-none');
$('.pause-timer-btn').removeClass('d-none');
});
// Init timer pause
$('.pause-timer-btn').on('click', function () {
$('.timer').timer('pause');
$(this).addClass('d-none');
$('.resume-timer-btn').removeClass('d-none');
saveTime();
});
// Remove timer. Leaves the display intact.
$('.remove-timer-btn').on('click', function () {
hasTimer = false;
$('.timer').timer('remove');
$(this).addClass('d-none');
$('.start-timer-btn').removeClass('d-none');
$('.pause-timer-btn, .resume-timer-btn').addClass('d-none');
});
// Additional focus event for this demo
$('.timer').on('focus', function () {
if (hasTimer) {
$('.pause-timer-btn').addClass('d-none');
$('.resume-timer-btn').removeClass('d-none');
}
});
// Additional blur event for this demo
$('.timer').on('blur', function () {
if (hasTimer) {
$('.pause-timer-btn').removeClass('d-none');
$('.resume-timer-btn').addClass('d-none');
}
});
});
/**
* Save the current timer value.
*
* Performs an ajax request to the server, which will
* save the timer value in a database table and return
* a corresponding message.
*/
function saveTime() {
$.ajax({
method: 'post',
dataType: 'html',
url: 'saveTime.php',
data: {
'time': $('.timer').data('seconds'),
'state': $('.timer').data('state'),
'task': $('#taTasks').val()
},
success: function (response, textStatus, jqXHR) {
displayAlert('success', response);
},
error: function (jqXHR, textStatus, errorThrown) {
var message = (jqXHR.status === 420)
? jqXHR.statusText
: 'An error occurred during your request. Please try again.';
displayAlert('danger', message);
},
complete: function (jqXHR, textStatus) {
//...
}
});
}
/**
* Display a bootstrap alert.
*
* @param type string success|info|warning|danger.
* @param message string Alert message.
* @return void
*/
function displayAlert(type, message) {
var alert = '<div class="alert alert-' + type + ' alert-dismissible" role="alert">'
+ '<button type="button" class="close" data-dismiss="alert" aria-label="Close">'
+ '<span aria-hidden="true">×</span>'
+ '</button>'
+ '<span>' + message + '</span>'
+ '</div>';
$('.messages').html(alert);
}
####
Timer Demo 2
Start
Resume
Pause
Remove Timer
Save Time and Notes
<label for="taTasks">Notes to accompany task:</label>
<textarea id="taTasks" name="taTasks">Doh!</textarea>
```
### saveTime.php
```
<?php
require_once 'functions.php';
// Validate the timer value.
if (!isset($_POST['time']) || empty($_POST['time'])) {
header('HTTP/1.1 420 No time value defined. Did you start the timer?');
exit();
}
// Validate the timer state.
if (!isset($_POST['state']) || empty($_POST['state'])) {
header('HTTP/1.1 420 No timer state recognized. Did you start the timer?');
exit();
}
// Validate the task.
if (!isset($_POST['task']) || empty($_POST['task'])) {
header('HTTP/1.1 420 No task value received.');
exit();
}
// Price per hour variable
$cost = 50;
// Read the posted values.
$time = $_POST['time'];
$state = $_POST['state']; /* The state of the timer when the saving operation was triggered. */
$task = $_POST['task'];
$r = $cost / 3600 * $time;
$rate = round($r, 2);
?>
<input type="hidden" id="task" value="<?php echo $task; ?>">
$(document).ready(function () {
alertTask();
});
function alertTask() {
var task = $('#task').val();
alert(task);
}
<?php
echo secondsToTime($time);
echo '<br>';
echo 'Tasks: ' . $task . '<br>';
echo 'Cost: $' . $rate;
?>
```
### functions.php
```
<?php
function secondsToTime($seconds) {
$dtF = new \DateTime('@0');
$dtT = new \DateTime("@$seconds");
return $dtF->diff($dtT)->format('%h hours, %i minutes and %s seconds');
// return $dtF->diff($dtT)->format('%a days, %h hours, %i minutes and %s seconds');
}
```
---
**Edit 1:** In *index.php*, I brought the js functions outside of `$(document).ready`. I forgot to do it earlier.
**Edit 2:** Changed `hidden` to `d-none` in
```
$('.resume-timer-btn').removeClass('hidden');
```
**Edit 3:** I found the problem about which I commented. It was in my *saveTime.php* code: I loaded the jquery library, but it was already loaded in *index.php*. More of it: since you are loading the content of *saveTime.php* in a html page (*index.php*) which already has all resources loaded, you don't need to structure the *saveTime.php* as a whole structured html (with doctype, head, body, etc). It is completely enough to just define the content and script tags that you need. So, I reedited *saveTime.php* correspondingly.
Upvotes: 2 [selected_answer] |
2018/03/19 | 2,613 | 9,046 | <issue_start>username_0: I'm using a rails API with a React frontend. My Scores table has a course_id column via a foreign key
```
class CreateScores < ActiveRecord::Migration[5.1]
def change
create_table :scores do |t|
t.integer :score
t.float :differential
t.references :user, foreign_key: true
t.references :course, foreign_key: true
t.timestamps
end
end
end
class CreateCourses < ActiveRecord::Migration[5.1]
def change
create_table :courses do |t|
t.string :name
t.float :rating
t.integer :slope
t.timestamps
end
end
end
```
In my React app I want to be able to render the name of a specified course not just the id provided by the course_id column - so something like 'Augusta National' rather than '18'.
I'm not sure of the best practice to accomplish this. I've tried to add a column to my Score table that references the Course :name column but have failed miserably - I've also tried pulling the name of the course in a private method in the Score controller and merging it into the create method but this has also failed - any suggestions? |
2018/03/19 | 1,161 | 4,126 | <issue_start>username_0: I'm taking a class in python and now I'm struggling to complete one of the tasks.
The aim is to ask for an input, iterate through that string and print only words that start with letters > g. If the word starts with a letter larger than g, we print that word. Otherwise, we empty the word and iterate through the next word(s) in the string to do the same check.
This is the code I have, and the output. Would be grateful for some tips on how to solve the problem.
```
# [] create words after "G" following the Assignment requirements use of functions, methods and keywords
# sample quote "Wheresoever you go, go with all your heart" ~ Confucius (551 BC - 479 BC)
# [] copy and paste in edX assignment page
quote = input("Enter a sentence: ")
word = ""
# iterate through each character in quote
for char in quote:
# test if character is alpha
if char.isalpha():
word += char
else:
if word[0].lower() >= "h":
print(word.upper())
else:
word=""
Enter a sentence: Wheresoever you go, go with all your heart
WHERESOEVER
WHERESOEVERYOU
WHERESOEVERYOUGO
WHERESOEVERYOUGO
WHERESOEVERYOUGOGO
WHERESOEVERYOUGOGOWITH
WHERESOEVERYOUGOGOWITHALL
WHERESOEVERYOUGOGOWITHALLYOUR
The output should look like,
Sample output:
WHERESOEVER
YOU
WITH
YOUR
HEART
```<issue_comment>username_1: ```
s = "Wheresoever you go, go with all your heart"
out = s.translate(str.maketrans(string.punctuation, " "*len(string.punctuation)))
desired_result = [word.upper() for word in out.split() if word and word[0].lower() > 'g']
print(*desired_result, sep="\n")
```
Upvotes: 0 <issue_comment>username_2: Simply a list comprehension with `split` will do:
```
s = "Wheresoever you go, go with all your heart"
print(' '.join([word for word in s.split() if word[0].lower() > 'g']))
# Wheresoever you with your heart
```
Modifying to match with the desired output (Making all uppercase and on new lines):
```
s = "Wheresoever you go, go with all your heart"
print('\n'.join([word.upper() for word in s.split() if word[0].lower() > 'g']))
'''
WHERESOEVER
YOU
WITH
YOUR
HEART
'''
```
**Without** list comprehension:
```
s = "Wheresoever you go, go with all your heart"
for word in s.split(): # Split the sentence into words and iterate through each.
if word[0].lower() > 'g': # Check if the first character (lowercased) > g.
print(word.upper()) # If so, print the word all capitalised.
```
Upvotes: 2 <issue_comment>username_3: Your problem is that you're only resetting `word` to an empty string in the `else` clause. You need to reset it to an empty string immediately after the `print(word.upper())` statement as well for the code as you've written it to work correctly.
That being said, if it's not explicitly disallowed for the class you're taking, you should look into string methods, specifically `string.split()`
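A runnable sketch of the fix described above (a fixed sample quote stands in for `input()` so the snippet is self-contained; the appended space acts as a sentinel so the final word is also flushed, and the `elif word` guard skips the empty word produced by consecutive separators):

```python
quote = "Wheresoever you go, go with all your heart"
word = ""
for char in quote + " ":  # trailing sentinel flushes the last word
    if char.isalpha():
        word += char
    elif word:  # only act when a word has actually been collected
        if word[0].lower() >= "h":
            print(word.upper())
        word = ""  # reset after every completed word, not only on failure
```

Run against the sample quote, this prints WHERESOEVER, YOU, WITH, YOUR and HEART, one per line, matching the expected output in the question.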
Upvotes: -1 <issue_comment>username_4: Here is a readable and commented solution. The idea is first to split the sentence into a list of words using `re.findall` (regex package) and iterate through this list, instead of iterating on each character as you did. It is then quite easy to print only the words starting with a letter greater than 'g':
```
import re
# Prompt for an input sentence
quote = input("Enter a sentence: ")
# Split the sentence into a list of words
words = re.findall(r'\w+', quote)
# Iterate through each word
for word in words:
# Print the word if its 1st letter is greater than 'g'
if word[0].lower() > 'g':
print(word.upper())
```
To go further, here is also the one-line style solution based on exactly the same logic, using list comprehension:
```
import re
# Prompt for an input sentence
quote = input("Enter a sentence: ")
# Print each word starting by a letter greater than 'g', in upper case
print(*[word.upper() for word in re.findall(r'\w+', quote) if word[0].lower() > 'g'], sep='\n')
```
Upvotes: 0 |
2018/03/19 | 1,299 | 3,253 | <issue_start>username_0: I need to convert the nested list `result = [[450, 455, 458], [452, 454, 456, 457], [451, 453]]` to a dictionary like:
```
{
0:
{
450: None,
455: 450,
458: 450
},
1: {
452: None,
454: 452,
456: 452,
457: 452
},
2: {
451: None,
453: 451
}
}
```
**Please take a look at this and assist:**
```
result_group = {}
for sub_group in result:
group_count = 0
first_rel_item = 0
result_group[group_count] = dict()
for item in sub_group:
if item == sub_group[0]:
result_group[group_count][item] = None
first_rel_item = item
continue
result_group[group_count]['item'] = first_rel_face
group_count += 1
```
I messed up with this, as I get a KeyError: 1 and can't add to the dictionary.<issue_comment>username_1: Try this:
```
result_group = {}
group_count = 0
for sub_group in result:
first_rel_item = 0
result_group[group_count] = {}
result_group[group_count][sub_group[0]] = None
previtem = sub_group[0]
for item in sub_group[1:]:
result_group[group_count][item] = previtem
previtem = item
group_count += 1
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You could use a list comprehension here:
```
>>> result = [[450, 455, 458], [452, 454, 457], [451, 453]]
>>> dict(enumerate({**{i: a[0] for i in a[1:]}, **{a[0]: None}}
for a in result))
{0: {450: None, 455: 450, 458: 450},
1: {452: None, 454: 452, 457: 452},
2: {451: None, 453: 451}}
```
Note: this uses ["extended"](https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-448) iterable unpacking, which was introduced in Python 3.5. `z = {**x, **y}` merges dictionaries `x` and `y`.
Each `a` is a sublist of `result`. You want to use `a[0]` as the value for the 1st elements and above, and None for the 0th element.
The assumption here is that you only want the 0th element of the sublist to have a corresponding None value. (If the 0th element were repeated, somewhere, it would use the 0th element as its value, as in @username_3's answer.)
Upvotes: 1 <issue_comment>username_3: This is one way:
```
lst = [[450, 455, 458], [452, 454, 457], [451, 453]]
res = {i: {w: None if w == v[0] else v[0] for w in v}
for i, v in enumerate(lst)}
```
**Result**
```
{0: {450: None, 455: 450, 458: 450},
1: {452: None, 454: 452, 457: 452},
2: {451: None, 453: 451}}
```
**Explanation**
* Use a ternary expression to determine whether you choose `None` or `v[0]`.
* Use `enumerate` to extract the index of each nested list.
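As a quick sanity check, the comprehension above can be verified against the exact input and expected output from the question (note that the question's middle sublist also contains 456, which the sample data in the answers dropped):

```python
result = [[450, 455, 458], [452, 454, 456, 457], [451, 453]]

# Same comprehension as above, applied to the question's full input.
res = {i: {w: None if w == v[0] else v[0] for w in v}
       for i, v in enumerate(result)}

expected = {
    0: {450: None, 455: 450, 458: 450},
    1: {452: None, 454: 452, 456: 452, 457: 452},
    2: {451: None, 453: 451},
}
assert res == expected  # matches the desired structure from the question
```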
Upvotes: 2 <issue_comment>username_4: ```
# the nice solutions were already given, so by foot:
d = {}
result = [[450, 455, 458], [452, 454, 457], [451, 453]]
for idx,l in enumerate(result): # returns the index and the sublists data
rMin = min(l)
d[idx] = {} # create a inner dict at key idx
for i in l:
d[idx][i] = None if i == rMin else rMin # fill inner dicts keys
print(d)
```
Output:
```
{0: {450: None, 455: 450, 458: 450},
1: {452: None, 454: 452, 457: 452},
2: {451: None, 453: 451}}
```
Upvotes: 0 |
2018/03/19 | 801 | 2,908 | <issue_start>username_0: I have the perl code to delete the files inside the directory and later the directory.
```
find ( sub {
my $file = $File::Find::name;
if ( -f $file ) {
push (@file_list, $file);
}
}, @find_dirs);
for my $file (@file_list) {
my @stats = stat($file);
if ($now-$stats[9] > $AGE) {
unlink $file;
}
}
```
But the above code is deleting only the contents inside the directories and sub-directories, leaving behind all the empty folders.
Could anyone please help me with the changes to be made to the above code so that it deletes the files and also the directories?<issue_comment>username_1: [`unlink`](https://perldoc.perl.org/functions/unlink.html) does not delete directories, only files.
>
> Note: unlink will not attempt to delete directories unless you are
> superuser and the -U flag is supplied to Perl. Even if these
> conditions are met, be warned that unlinking a directory can inflict
> damage on your filesystem. Finally, using unlink on directories is not
> supported on many operating systems. Use rmdir instead.
>
>
>
You want [`rmdir`](https://perldoc.perl.org/functions/rmdir.html), and you probably want to check with [`-d`](https://perldoc.perl.org/functions/-X.html) which one to use, unless you don't care about warnings.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I am only putting the code together, so you may upvote @username_1, who answered first. Try:
```
finddepth( sub {
my $file = $File::Find::name;
my @stats = stat($file);
if( -f $file && $now - $stats[9] > $AGE ) {
unlink $file;
}
elsif( -d $file ) {
rmdir $file;
}
}, @find_dirs );
```
A few comments:
* `File::Find` will find both files and directories.
* `-f` checks for a file; `-d` for a directory.
* `rmdir` will only remove a directory if the directory is empty. That is why files must be deleted first. `finddepth` takes care of this.
* `-f` and `-d` are simple to use, but [`stat`](http://perldoc.perl.org/functions/stat.html) may also be used for such a check (see second field, *mode*.)
* I have not tested the code; I cannot easily recreate your conditions.
**EDIT**: Now it uses `finddepth` instead of `find` because:
>
> finddepth() works just like find() except that it invokes the &wanted function for a directory after invoking it for the directory's contents. It does a postorder traversal instead of a preorder traversal, working from the bottom of the directory tree up where find() works from the top of the tree down.
>
>
>
This should take care of removing the directories in order, deepest first. Some directories may still not be removed if files remain in them that do not match the delete condition. If you want them removed when empty regardless of their timestamp, then remove the `if -d` condition. The non-empty ones will remain. Directories that cannot be removed may issue a warning...
Upvotes: 1 |
2018/03/19 | 804 | 2,407 | <issue_start>username_0: I'm trying to compare each object in the array.
Let's say below is my array:
```
var objects = [{
'x': 1,
'y': 2
}, {
'x': 2,
'y': 1
},{
'x': 3,
'y': 1
},{
'x': 4,
'y': 1
}];
```
Given two items, say `item1` and `item2`, I need to check the condition `item1.x == item2.y` and `item1.y == item2.x` through the array.
Is there a clean/efficient way to do this in **Lodash**?<issue_comment>username_1: ```
var newArray = _.map(objects, function(item) {
if (item.x == item.y) return item;
});
var filteredNewArray = _.without(newArray, undefined)
console.log(filteredNewArray)
```
Upvotes: 0 <issue_comment>username_2: You can use map.
This may help,
```
objects.map((value, index) => {
return value.x == value.y
})
```
Upvotes: 0 <issue_comment>username_3: ```js
var objects = [
{'x': 1, 'y': 2},
{'x': 2, 'y': 1},
{'x': 3, 'y': 1},
{'x': 4, 'y': 1}
];
var comparedLodash = _.map(objects, function (item) {
return !!_.find(objects, {x: item.y, y: item.x});
});
console.log(comparedLodash);
```
Complexity of this is ***O(n^2)***.
Note: You could make it *O(nlogn)* if you sorted the array before starting the comparison, but this would add significant noise to the code.
Breakdown:
==========
The [**`_.map(somearray, somefunction)` function**](https://lodash.com/docs/4.17.5#map) executes the `somefunction` on every element of `somearray`. We will use it to convert every item of the `objects` array into a boolean. Roughly it is like:
```
var comparedLodash = _.map(objects, functionThatWillConvertAnItemIntoABoolean);
```
Now, each `item1` should be converted `true` if there's `anotherItem` with `item1.x == anotherItem.y` and `item1.y == anotherItem.x`. To find if this another item exists, we use [`_.find()`](https://lodash.com/docs/4.17.5#find).
[**`_.find(somearray, someobject)`**](https://lodash.com/docs/4.17.5#find) tries to find whether the `someobject` exists in the `somearray`. That is why we do, above, something like:
```
function (item) {
var anotherItem = {x: item.y, y: item.x}
return _.find(objects, anotherItem);
}
```
Lastly, `_.find` returns the object, if found, and `undefined` if not found. We use `!!` to convert that to boolean (`!!undefined` becomes `false` and `!!someobject` becomes `true`).
Upvotes: 2 [selected_answer] |
2018/03/19 | 446 | 1,579 | <issue_start>username_0: This should be an easy question.
I am using *Java 8, Hibernate 3.6*
I have an `org.hibernate.Query` object and I am using it to call a
stored procedure. I am calling it with `query.executeUpdate();`.
How can I programmatically set a timeout for this query?
So that when I call the SP and if the DB does not respond in say 1 hour,
it throws an exception of some sort on the Java side. Note that I don't
want to use some global hibernate property which would affect all my queries.
In fact if I wanted, I guess I would need to do what they say here, right?
I mean I would have needed to just set `javax.persistence.query.timeout`
when starting the JVM, correct? But OK, that's not what I need.
[Hibernate: set default query timeout?](https://stackoverflow.com/questions/2101455/hibernate-set-default-query-timeout)
Many thanks in advance.<issue_comment>username_1: I think you can use hibernate `Query#setHint` for that. For more info please visit the [link](http://docs.jboss.org/hibernate/stable/entitymanager/reference/en/html/objectstate.html#d0e1215)
We need to set the timeout before executing the query like below.
```
query.setHint("org.hibernate.timeout", 10) //10 is the query timeout in second.
```
Note: These may also be useful.
```
Statement#setQueryTimeout(seconds)
PreparedStatement#setQueryTimeout(seconds)
```
Both have a setQueryTimeout method.
Upvotes: 0 <issue_comment>username_2: It was in front of my eyes, but initially I didn't see it.
`org.hibernate.Query.setTimeout(int seconds)`
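A minimal usage sketch (not runnable on its own; it assumes an open Hibernate `Session` named `session`, and the stored-procedure name is hypothetical):

```java
org.hibernate.Query query = session.createSQLQuery("{call my_stored_proc()}");
query.setTimeout(3600);  // per-query timeout in seconds; other queries are unaffected
query.executeUpdate();   // throws if the DB does not respond within the timeout
```

This leaves every other query untouched, unlike the global `javax.persistence.query.timeout` property.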
Upvotes: 2 [selected_answer] |
2018/03/19 | 924 | 1,894 <issue_start>username_0: I have a dataframe like below
```
ID Date
111 1.1.2018
222 5.1.2018
333 7.1.2018
444 8.1.2018
555 9.1.2018
666 13.1.2018
```
and I would like to bin them into 5 days intervals.
The output should be
```
ID Date Bin
111 1.1.2018 1
222 5.1.2018 1
333 7.1.2018 2
444 8.1.2018 2
555 9.1.2018 2
666 13.1.2018 3
```
How can I do this in python, please?<issue_comment>username_1: Looks like `groupby` + `ngroup` does it:
```
df['Date'] = pd.to_datetime(df.Date, errors='coerce', dayfirst=True)
df['Bin'] = df.groupby(pd.Grouper(freq='5D', key='Date')).ngroup() + 1
```
```
df
ID Date Bin
0 111 2018-01-01 1
1 222 2018-01-05 1
2 333 2018-01-07 2
3 444 2018-01-08 2
4 555 2018-01-09 2
5 666 2018-01-13 3
```
---
If you don't want to mutate the Date column, then you may first call `assign` for a copy based assignment, and then do the `groupby`:
```
df['Bin'] = df.assign(
Date=pd.to_datetime(df.Date, errors='coerce', dayfirst=True)
).groupby(pd.Grouper(freq='5D', key='Date')).ngroup() + 1
df
ID Date Bin
0 111 1.1.2018 1
1 222 5.1.2018 1
2 333 7.1.2018 2
3 444 8.1.2018 2
4 555 9.1.2018 2
5 666 13.1.2018 3
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: One way is to create an array of your date range and use `numpy.digitize`.
```
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
date_ranges = pd.date_range(df['Date'].min(), df['Date'].max(), freq='5D')\
.astype(np.int64).values
df['Bin'] = np.digitize(df['Date'].astype(np.int64).values, date_ranges)
```
Result:
```
ID Date Bin
0 111 2018-01-01 1
1 222 2018-01-05 1
2 333 2018-01-07 2
3 444 2018-01-08 2
4 555 2018-01-09 2
5 666 2018-01-13 3
```
Upvotes: 0 |
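For completeness, an arithmetic-only alternative (a sketch under the same assumption as the answers above: fixed 5-day bins counted from the earliest date):

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [111, 222, 333, 444, 555, 666],
    "Date": ["1.1.2018", "5.1.2018", "7.1.2018", "8.1.2018",
             "9.1.2018", "13.1.2018"],
})

df["Date"] = pd.to_datetime(df["Date"], dayfirst=True)
# Integer-divide the day offset from the earliest date into 5-day buckets.
df["Bin"] = (df["Date"] - df["Date"].min()).dt.days // 5 + 1
print(df["Bin"].tolist())  # [1, 1, 2, 2, 2, 3]
```

This avoids a groupby entirely.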
2018/03/19 | 304 | 1,062 <issue_start>username_0: For this website I'm building, I have recently run into a small strip of white going down the right side. I have rolled my changes back to when I know there wasn't that line, and it is still there. I am trying to find the CSS that is pushing it over, but I can't. And the only element that is encroaching into that whitespace is my navbar, but its width is 100%, so it shouldn't extend past the viewport.
How can I find the CSS rule that is forcing that whitespace?
The website is: <https://meganandadam2018.com><issue_comment>username_1: Remove your margin on this class. If you already have 100% width, setting more margin will make the div expand:
```
.home-sec-3 #radio-button-container {
display: flex;
flex-flow: column;
justify-content: center;
align-items: center;
margin: 10px 0px 10px 0px; /* here is the problem, set right and left to 0 */
width: 100%;
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can remove `height: 100%` from body and html and add `overflow-x: hidden;`
Upvotes: -1 |
2018/03/19 | 967 | 4,091 | <issue_start>username_0: I want to configure git repository so it will NOT remember the history of one long file, but still will keep its latest version.
Is it possible?
**P.S.** Thank you for the answers --- I am convinced now that I do not want to do that.<issue_comment>username_1: You cannot change past commits without invalidating whole history, it is just how Git works. So if this is big file that need to be versioned, but it's versioning is infeasible via Git then you have alternatives in form of:
* [Git Annex](https://git-annex.branchable.com)
* [Git LFS](https://git-lfs.github.com)
Which allows you to have separate tree that stores blobs outside of "standard" Git's tree.
Upvotes: 1 <issue_comment>username_2: Your question implies that you're ok with editing the commit history, as that's the only way to do what you're asking. Look at the
```
git filter-branch
```
command, as it'll do what you're asking. Leaving the latest revision of the file in question in place is left as an exercise for the reader (though I'd just go and commit it back in after `filter-branch` has done its thing).
Be aware that depending on the history and usage patterns of the repository this will have significant implications. (OTOH, if it's a repo only you use, those implications are probably less.)
If you're just looking to cut down on the clone/repo size, performing shallow clones (or making your clone shallow) may do what you want with no *global* implications for other consumers of the repository. (See `git-fetch` and `git-clone` for more information about shallow.)
Upvotes: -1 <issue_comment>username_3: So... Several people (comments and username_1's answer) are recommending solutions like LFS, which efficiently manage large file needs in git repos. It seems nobody's quite sure why you would want the behavior you specified, rather than the behavior these give... and that's for good reason. A big part of the purpose of git is to be able to reproduce past versions of your code. If we compare your desired behavior to LFS, the big difference is that under your desired behavior it becomes impossible to reproduce a prior version of the system. So it really isn't clear *why* you would want that.
But that's fine... let's get to your question.
Can it be done? Well... by hobbling git so that it can't perform even basic source control functions properly, you could fake it. But it won't be easy and git will work against you every step of the way. Sound good?
And before you go asking "what about username_2's solution"... that's the one that would involve hobbling git's ability to work with your repo. Where they see your question "[implying that you're] ok with editing the commit history", I see it showing that you aren't aware of the consequences of constantly editing your commit history.
It's hard to do correctly, especially as part of a routine workflow.
It's made harder because by definition, your use case requires editing the history either (a) with staged changes pending commit, or (b) right after committing changes that you don't want to edit.
Once you get it working right, every time you do it the refs will be moved in a non-fast-forward way, putting everyone else who shares the repo in a bad state. If they do the wrong thing to recover from that state, extra copies of the file can pop back into the history. And even if everyone plays along, keeping the remote from hoarding all those obsolete copies of the file is still likely to be quite difficult (depending on how your remote is hosted).
Maybe those last concerns don't worry you, because maybe you're the only user of your repo and maybe you don't keep a remote. If that's the case - if you're not using git for its key strengths as a distributed version control system, and you're trying to do something against the grain of its history-tracking model - then git may just not be the best tool for your use case.
Or, as others have implicitly suggested, maybe it's the use case that needs to be reassessed. That's between you and whoever else might be on your team.
Upvotes: 2 |
2018/03/19 | 1,622 | 5,518 | <issue_start>username_0: Consider a recursive data structure like the following:
```
data Tree level
= Leaf String
| Node level [ Tree level ]
```
Now, if `level` is an instance of `Ord`, I would like to impose at the type level the following limitation on the data structure: a node must contain only `Tree`s with a higher `level`.
You can safely assume that `level` is a simple sum type like
```
data Level
= Level1
| Level2
...
| LevelN
```
but where `N` is not known a priori. In this case I would be able to guarantee that all the subnodes of a node have a higher level.
For example
```
tree = Node Level1
[ Node Level2 []
, Node Level3 []
]
```
should compile, while
```
tree = Node Level2
[ Node Level1 []
]
```
should not.
Is it possible to model such a thing in Haskell?<issue_comment>username_1: Here's the basic idea. The easiest way to encode recursion limits like this is to use [Peano numbers](https://en.wikipedia.org/wiki/Peano_axioms). Let's define such a type.
```
data Number = Zero | Succ Number
```
A number is either zero or the successor of another number. This is a nice way to define numbers here, as it will get along nicely with our tree recursion. Now, we want the `Level` to be a type, not a value. If it's a value, we can't limit its value at the type level. So we use GADTs to restrict the way we can initialize things.
```
data Tree (lvl :: Number) where
Leaf :: String -> Tree lvl
Node :: [Tree lvl] -> Tree ('Succ lvl)
```
`lvl` is the depth. A `Leaf` node can have any depth, but a `Node` node is restricted in its depth and must be strictly greater than that of its children (here, strictly one greater, which works in most simple cases. Allowing it to be strictly greater in general would require some more complicated type-level tricks, possibly with `-XTypeInType`). Notice that we use `'Succ` at the type level. This is a *promoted type*, enabled with `-XDataKinds`. We also need `-XKindSignatures` to enable the `:: Number` constraint.
Now let's write a function.
```
f :: Tree ('Succ 'Zero) -> String
f _ = "It works!"
```
This function only takes trees that go at most one level deep. We can try to call it.
```
f (Leaf "A") -- It works!
f (Node [Leaf "A"]) -- It works!
f (Node [Node [Leaf "A"]]) -- Type error
```
So it will fail at compile-time if the depth is too much.
Complete example (including compiler extensions):
```
{-# LANGUAGE GADTs, KindSignatures, DataKinds #-}
data Number = Zero | Succ Number
data Tree (lvl :: Number) where
Leaf :: String -> Tree lvl
Node :: [Tree lvl] -> Tree ('Succ lvl)
f :: Tree ('Succ 'Zero) -> String
f _ = "It works!"
```
This isn't everything you can do with this. There's certainly expansions to be made, but it gets the point across and will hopefully point you in the right direction.
Upvotes: 3 <issue_comment>username_2: So there are a number of difficulties with this question. Peano numbers are a good place to start, though:
```
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE ConstraintKinds #-}
data Nat = Z | S Nat
```
Next, we'll need some way of saying one number is "bigger" than another. We can do so by first defining an inductive class for "n is less than or equal to m"
```
class (n :: Nat) <= (m :: Nat)
instance Z <= n
instance n <= m => (S n <= S m)
```
We can then define "less than" in terms of this:
```
type n < m = S n <= m
```
Finally, here's the Tree and Levels:
```
data Tree n where
Leaf :: String -> Tree n
Node :: n < z => Level z -> [Tree z] -> Tree n
data Level n where
Level0 :: Level Z
Level1 :: Level (S Z)
Level2 :: Level (S (S Z))
Level3 :: Level (S (S (S Z)))
Level4 :: Level (S (S (S (S Z))))
```
And, as desired, the first example compiles:
```
tree = Node Level1
[ Node Level2 []
, Node Level3 []
]
```
While the second does not:
```
tree = Node Level2
[ Node Level1 []
]
```
Just for extra fun, we can now add a "custom type error" (this will need `UndecidableInstances`:
```
import GHC.TypeLits (TypeError, ErrorMessage(Text))
instance TypeError (Text "Nodes must contain trees of a higher level") => S n < Z
```
So when you write:
```
tree = Node Level2
[ Node Level1 []
]
```
You get the following:
>
> • Nodes must contain trees of a higher level
>
>
> • In the expression: Node Level1 []
>
>
> In the second argument of ‘Node’, namely ‘[Node Level1 []]’
>
>
> In the expression: Node Level2 [Node Level1 []]
>
>
>
If you want to make "level" more generic, you'll need a couple more extensions:
```
{-# LANGUAGE TypeApplications, RankNTypes, AllowAmbiguousTypes, TypeFamilies #-}
import qualified GHC.TypeLits as Lits
data Level n where
Level0 :: Level Z
LevelS :: !(Level n) -> Level (S n)
class HasLevel n where level :: Level n
instance HasLevel Z where level = Level0
instance HasLevel n => HasLevel (S n) where level = LevelS level
type family ToPeano (n :: Lits.Nat) :: Nat where
ToPeano 0 = Z
ToPeano n = S (ToPeano (n Lits.- 1))
node :: forall q z n m. (ToPeano q ~ z, HasLevel z, n < z) => [Tree z] -> Tree n
node = Node level
tree =
node @1
[ node @2 []
, node @3 []
]
```
Upvotes: 4 [selected_answer] |
2018/03/19 | 1,499 | 4,987 <issue_start>username_0: I'm looking for an efficient and elegant solution for extracting numbers from HTML tags such as
```
0.00
-0.300
-0,150
-0,150
```
so the desired output is supposed to be
```
0.00
-0.300
-0,150
-0,150
```
Thanks a lot for any of your advice in forward.
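A minimal sketch of one way to do the extraction (assumptions: Python, and markup shaped like the sample above; the real pastebin markup may differ). A single pattern matches an optional sign, digits, and a `.` or `,` decimal separator:

```python
import re

html = "<td>0.00</td><td>-0.300</td><td>-0,150</td><td>-0,150</td>"  # hypothetical markup

# Optional minus sign, digits, then a '.' or ',' separator and more digits.
numbers = re.findall(r"-?\d+[.,]\d+", html)
print(numbers)  # ['0.00', '-0.300', '-0,150', '-0,150']
```

For anything more complex than flat tags, an HTML parser plus this regex on the extracted text is more robust than regexing the markup directly.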
2018/03/19 | 234 | 984 | <issue_start>username_0: I have created my DEV environment without any problem. It's work Fine.
but I'm trying to create a QA environment (or any other) and it does not work.
the only difference between the two environments is the variable that refers to the backend (I have tried putting the same one and the problem persists)
if I try some method in the different environments by means of the "Test" function, both work. But when I try from postman, only work DEV. The only error I see for CloudWatch is the following:
**Execution failed due to configuration error: Invalid endpoint address**.
Any idea? Thanks<issue_comment>username_1: I think the problem that you are having is that you need to deploy your stage. i.e.
API -> Resources -> Actions (on root of api) -> Deploy Api
Then select the stage you want to deploy get the new endpoint and test from postman.
Upvotes: 0 <issue_comment>username_2: the problem was the name of variables in **Stage Variables**
Upvotes: 1 |
2018/03/19 | 1,773 | 6,983 | <issue_start>username_0: I have a docker compose file with the following entries
---
```
version: '2.1'
services:
mysql:
container_name: mysql
image: mysql:latest
volumes:
- ./mysqldata:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: '<PASSWORD>'
ports:
- '3306:3306'
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3306"]
interval: 30s
timeout: 10s
retries: 5
test1:
container_name: test1
image: test1:latest
ports:
- '4884:4884'
- '8443'
depends_on:
mysql:
condition: service_healthy
links:
- mysql
```
The Test-1 container is dependent on mysql and it needs to be up and running.
In docker this can be controlled using health check and depends\_on attributes.
The health check equivalent in Kubernetes is a readinessProbe, which I have already created, but how do we control container startup order within a pod?
Any direction on this is greatly appreciated.
My Kubernetes file:
```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 1
template:
metadata:
labels:
app: deployment
spec:
containers:
- name: mysqldb
image: "dockerregistry:mysqldatabase"
imagePullPolicy: Always
ports:
- containerPort: 3306
readinessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 15
periodSeconds: 10
- name: test1
image: "dockerregistry::test1"
imagePullPolicy: Always
ports:
- containerPort: 3000
```<issue_comment>username_1: While I don't know the direct answer to your question except [this link (k8s-AppController)](https://github.com/Mirantis/k8s-AppController), I don't think it's wise to use the same deployment for DB and app, because you are tightly coupling your DB with the app and losing the awesome k8s option to scale either one of them as needed. Furthermore, if your DB pod dies you lose your data as well.
Personally what I would do is to have a separate [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) with [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) for database and Deployment for app and use [Service](https://kubernetes.io/docs/concepts/services-networking/service/) to make sure their communication.
Yes, I have to run a few different commands and may need at least two separate deployment files, but this way I am decoupling them and can scale them as needed. And my data is persistent as well!
Upvotes: 3 <issue_comment>username_2: In Kubernetes terminology one your docker-compose set is a [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod).
So, there is no `depends_on` equivalent there. Kubernetes will check all containers in a pod and they all have to be alive for a mark that pod as Healthy and will always run them together.
In your case, you need to prepare configuration of Deployment like that:
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 1
template:
metadata:
labels:
app: app-and-db
spec:
containers:
- name: app
image: nginx
ports:
- containerPort: 80
- name: db
image: mysql
ports:
- containerPort: 3306
```
After pod will be started, your database will be available on `localhost` interface for your application, because of [network conception](https://kubernetes.io/docs/concepts/workloads/pods/pod/):
>
> Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory.
>
>
>
But, as @username_1 mentioned, it is not a good idea to run a database and application in the same pod without a Persistent Volume. Here is a good tutorial on how to run a [stateful application in Kubernetes](https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/).
Upvotes: 2 <issue_comment>username_3: As mentioned, you should run the database and the application containers in separate pods and connect them with a service.
Unfortunately, neither Kubernetes nor Helm provides functionality similar to what you've described. We had many issues with that and tried a few approaches until we decided to develop a smallish utility that solved this problem for us.
Here's the link to the tool we've developed: <https://github.com/Opsfleet/depends-on>
You can make pods wait until other pods become ready according to their readinessProbe configuration. It's very close to Docker's depends\_on functionality.
Upvotes: 3 <issue_comment>username_4: This was purposefully left out. The reason being is that applications should be responsible for their connect/re-connect logic for connecting to service(s) such as a database. This is outside the scope of Kubernetes.
Upvotes: 6 [selected_answer]<issue_comment>username_5: That's the beauty of Docker Compose and Docker Swarm... Their simplicity.
We came across this same Kubernetes shortcoming when deploying the ELK stack.
We solved it by using a side-car (initContainer), which is just another container in the same pod that's run first; when it's complete, Kubernetes automatically starts the [main] container. We made it a simple shell script that loops until Elasticsearch is up and running, then it exits and Kibana's container starts.
Below is an example of a side-car that waits until Grafana is ready.
Add this 'initContainer' block just above your other containers in the Pod:
```
spec:
initContainers:
- name: wait-for-grafana
image: darthcabs/tiny-tools:1
args:
- /bin/bash
- -c
- >
set -x;
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' http://grafana:3000/login)" != "200" ]]; do
echo '.'
sleep 15;
done
containers:
.
.
(your other containers)
.
.
```
Upvotes: 6 <issue_comment>username_6: <https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/>
What about liveness and readiness probes? They support commands, HTTP requests and more to check whether another service responds OK.
```
apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-exec
spec:
containers:
- name: liveness
image: k8s.gcr.io/busybox
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
livenessProbe:
exec:
command:
        - sh
        - -c
        - curl -f http://service2/api/v2/health-check
initialDelaySeconds: 5
periodSeconds: 5
```
Upvotes: 0 |
2018/03/19 | 1,023 | 2,948 <issue_start>username_0: I'm writing a recursive copystring function in C and I'm receiving a write access violation. The function doesn't terminate when it reaches the final letter in the string.
Here's the code:
```
#include <stdio.h>
void copy(char*, char*, int);
int main()
{
    char str1[10] = { 'H', 'e', ' j', ' h', 'e', 'j' };
    char str2[10] = { '/0' };
    copy(str1, str2, 0);
    printf(str2);
    getchar();
    return 0;
}
void copy(char* str1, char* str2, int index) {
    while (str1[index] != '/0') {
        str2[index] = str1[index];
        index++;
        copy(str1, str2, index);
    }
    return;
}
```<issue_comment>username_1: There are three main problems here:
1) You use `'/0'` when it should be `'\0'`
2) You have extra spaces in the initializer, i.e. `' j'` and `' h'`.
3) Your recursive function isn't really doing what you want - don't use a `while`. It shall only copy one char and then do the recursive call until it meets a `'\0'`
The code should rather look like:
```
#include <stdio.h>
void copy(char*, char*, int);
int main()
{
    char str1[10] = { 'H', 'e', 'j', 'h', 'e', 'j' };
    //                          ^^   ^^ removed extra space
    char str2[10]; // No initialization needed
    copy(str1, str2, 0);
    printf(str2);
    getchar();
    return 0;
}
void copy(char* str1, char* str2, int index) {
    str2[index] = str1[index];
    // Have we reached end?
    if (str1[index] != '\0') {
        // No - so call again
        index++;
        copy(str1, str2, index);
    }
    return;
}
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: You have a couple of problems.
1) `\0` (null byte) is not the same as `/0`. C-strings are terminated with `\0` (or just 0), which is most certainly what you intended to use.
2) You have spaces in `' j'` and `' h'` - these are multi-character constants. This is, again, probably not what you wanted. If your intention was to have spaces between chars,
then you need to:
```
char str1[] = { 'H', 'e', ' ', 'j',' ', 'h','e', 'j' };
```
or,
```
char str1[] = "He j h ej";
```
If not, remove the spaces from those two elements of `str1` array. You could write it as:
```
char str1[] = "Hejhej";
```
Also, it's better (less error prone) if you leave out the array dimension - compiler will allocate sufficient space. If you needed to know the size of the array then you can use `sizeof str1` to get it.
Your `copy` function doesn't need the `index` parameter at all. Since you want to copy recursively, the use of a while loop is also incorrect. So you could write it as:
```
#include <stdio.h>
void copy(const char*, char*);
int main(void)
{
    char str1[] = "Hejhej";
    /* Equivalent to:
     * char str1[] = { 'H', 'e', 'j', 'h', 'e', 'j', '\0' };
     */
    char str2[sizeof str1] = { '\0' };
    copy(str1, str2);
    printf("%s\n", str2);
    return 0;
}
void copy(const char *str1, char *str2)
{
    if (*str1) {
        *str2 = *str1;
        copy(++str1, ++str2);
    } else {
        /* This isn't strictly needed because the destination string `str2` contains null bytes. */
        *str2 = '\0';
    }
}
```
Upvotes: 1 |
2018/03/19 | 723 | 2,789 <issue_start>username_0: I have a problem with Ionic, Angular and TypeScript; I get a little lost...
I would like to call an external function to my file but I get the error: *"Uncaught (in promise): TypeError: undefined is not an object (evaluating 'this.alertCtrl.create') popup"*
Here is my main file:
```
import { Component } from '@angular/core';
import { NavController } from 'ionic-angular';
import { params } from '../../modules/params';
@Component({
selector: 'page-app',
templateUrl: 'app.html'
})
export class AppPage {
params = new params();
constructor(public navCtrl: NavController) {
this.params.popup("Hello", "test");
}
}
```
As well as the page containing the function:
```
import { AlertController } from 'ionic-angular';
export class params {
public alertCtrl: AlertController;
constructor(){
}
popup(title, text){
let alert = this.alertCtrl.create({
title: title,
subTitle: text,
buttons: ['OK']
});
alert.present();
}
}
```
Where am I going wrong?
Thank you very much for your help.<issue_comment>username_1: For the injection to work properly you must pass it as a constructor param:
```
import { AlertController } from 'ionic-angular';
export class params {
constructor(public alertCtrl: AlertController) {
}
popup(title, text){
let alert = this.alertCtrl.create({
title: title,
subTitle: text,
buttons: ['OK']
});
alert.present();
}
}
```
What you had before was the declaration of a public property called `alertCtrl` of type `AlertController`, but it was never being initialized, so that's why you got the undefined error.
Upvotes: 1 <issue_comment>username_2: my friend.
You'll first need to add the @Injectable() decorator in the class that contains the function you want to use. For example:
```
import { Injectable } from '@angular/core';
import { AlertController } from 'ionic-angular';
@Injectable()
export class params {
constructor(public alertCtrl: AlertController){
}
popup(title, text){
let alert = this.alertCtrl.create({
title: title,
subTitle: text,
buttons: ['OK']
});
alert.present();
}
}
```
Then you'll just need to inject the variable in your constructor to call it in your page like this:
```
import { Component } from '@angular/core';
import { NavController } from 'ionic-angular';
import { params } from '../../modules/params';
@Component({
selector: 'page-app',
templateUrl: 'app.html'
})
export class AppPage {
constructor(public navCtrl: NavController, public params: params) {
this.params.popup("Hello", "test");
}
}
```
Hope I could help you! =)
Upvotes: 1 [selected_answer] |
2018/03/19 | 507 | 1,788 <issue_start>username_0: I have this XML file named update.xml linked in the pastebin below
<https://pastebin.com/kFjX5Wka>
Now I would like to make the output like:
```
Manufacturer Google:
Device codename: angler
Download: download tag link here from
Device codename: bullhead
Download: download tag link here from
Manufacturer OnePlus:
Device codename: cheeseburger
Download: download tag link here from
Device codename: dumpling
Download: download tag link here from
```
What I have so far
```
if (file_exists('update.xml')) {
$xml = simplexml_load_file('update.xml');
foreach($xml as $manufacturer){
echo $manufacturer['id'] . "<br>";
}
} else {
exit('Failed to open update.xml.');
}
```
This returns the manufacturers, but I am blocked at parsing the info from them.
So can someone point me in the right direction?<issue_comment>username_1: You need to go further and examine the `children` of each `manufacturer`:
```
if (file_exists('update.xml')) {
    $xml = simplexml_load_file('update.xml');
    foreach ($xml as $manufacturer) {
        echo $manufacturer['id'] . PHP_EOL;
        foreach ($manufacturer as $k => $v) {
            // `$k` gives you the tag name,
            // `$v->download` gives the value of the `download` tag
            echo $k . $v->download . PHP_EOL;
        }
    }
} else {
    exit('Failed to open update.xml.');
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: This will work!
```
$xml = new SimpleXMLElement($xmlString);
foreach ($xml->manufacturer as $manufacturer) {
    echo "Manufacturer {$manufacturer['id']}:\n";
    foreach ($manufacturer as $codename => $v) {
        echo "Device codename: {$codename}\n";
        echo "Download: {$v->download}\n";
    }
    echo "\n---\n";
}
```
Upvotes: 0 |
2018/03/19 | 380 | 1,243 | <issue_start>username_0: I need to match a file extension for webpack rules using regex. I want to make two separate rules for targeting svg files: `*.svg` and `*.icon.svg`.
These rules *must* be mutually exclusive. It's easy to capture the `*.icon.svg` case by testing for `/\.icon\.svg$/`. But what should the expression be to capture all `.svg` extensions, excluding the files that end with `.icon.svg`?
Rule example for the `.icon.svg`
```
config.module.rules.push({
test: /\.svg$/,
loader: 'vue-svg-loader'
})
```<issue_comment>username_1: To match all `.svg` files EXCEPT `.icon.svg`
```
/(?!.*\.icon\.svg$)^.+\.svg$/
```
To match only the `.icon.svg`:
```
/\.icon\.svg$/
```
---
So, that is going to look something like:
```
{
// match all .svg files EXCEPT .icon.svg
test: /(?!.*\.icon\.svg$)^.+\.svg$/,
loader: 'specific-loader-name'
},
{
// Match only .icon.svg files
test: /\.icon\.svg$/,
loader: 'different-loader'
},
```
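A quick way to sanity-check these two patterns outside webpack is to run them against a few sample file names in plain JavaScript (the file names below are made up for illustration):

```javascript
// The two mutually exclusive patterns from the rules above.
const plainSvg = /(?!.*\.icon\.svg$)^.+\.svg$/; // *.svg but NOT *.icon.svg
const iconSvg = /\.icon\.svg$/;                 // only *.icon.svg

// Hypothetical file names a build might encounter.
const files = ['logo.svg', 'home.icon.svg', 'app.module.ts'];

for (const f of files) {
  console.log(`${f} -> plain: ${plainSvg.test(f)}, icon: ${iconSvg.test(f)}`);
}
```

Each file name matches at most one of the two patterns, so the corresponding webpack rules never both fire for the same file.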
Upvotes: 3 [selected_answer]<issue_comment>username_2: this shall do the trick.
```
{
test: /^(?!.*\.icon\.svg$).+\.svg$/i
...
}
```
This regex uses something called a negative lookahead; basically, it will exclude anything that contains `.icon` before the `.svg`.
Upvotes: 0 |
2018/03/19 | 393 | 1,136 | <issue_start>username_0: I am using this formula to find the last value of a list:
```
=IF(ISBLANK(A55),INDEX(A4:A55,MAX((A4:A55<>"")*(ROW(A4:A55))),A55))
```
However, this time it's not working, as my cells are not "empty": they have a formula that returns either nothing ("") or a cell value:
```
=IF(AND(B50>=$G$4,B50<=$G$5),B50,"")
```
How can I make the index work to find the last cell that actually returns a value?
2018/03/19 | 2,781 | 7,470 | <issue_start>username_0: I have a data.table with 200 obs and 20 variables, and I need to get n-lags from those 20 variables in a loop.
I was trying something like the following, but it is not working properly.
Please, any help?
> nombresvar = names(Model_X)
>
> for (j in nombresvar) for (i in 1:3)
>
> Model_X[, c(paste0(j, i)) := lag(c(paste0('Model_X$', j)), i)]
The problem is coming from the **lag function**, as my code does not seem to work properly when trying to produce the lagged variables for each of the variable names in the nombresvar vector.
The lag function for each loop should have been something like the following:
```
lag ( ModelX$Variable1, 1)
lag ( ModelX$Variable1, 2)
lag ( ModelX$Variable1, 3)
lag ( ModelX$Variable2, 1)
....
lag ( ModelX$VariableN, 3)
```<issue_comment>username_1: Not sure if this is what you're asking:
```
# Random data 200 obs, 20 vars
set.seed(1)
df <- data.frame(replicate(20,sample(0:100,200,rep=TRUE)))
# function for getting lags
lags <- function(x,lag) {
lag(x,lag)
}
require(dplyr) # using lag from dplyr (has opposite lead)
# lapply saves lags to a list
lag1 <- lapply(df,lags, lag =1)
lag2 <- lapply(df,lags, lag =2)
lag3 <- lapply(df,lags, lag =3)
# cbind to data frame
lag_1_df <- as.data.frame(do.call(cbind,lag1))
lag_2_df <- as.data.frame(do.call(cbind,lag2))
lag_3_df <- as.data.frame(do.call(cbind,lag3))
# names
names_lag1 <- rep(1,length(lag_1_df))
var_names <- rep(1:length(lag_1_df),1)
var_names <- paste("var",var_names)
names_lag1 <- paste(var_names,"lag",names_lag1)
colnames(lag_1_df) <- names_lag1
# column names lag 2
names_lag2 <- rep(2,length(lag_2_df))
var_names <- rep(1:length(lag_2_df),1)
var_names <- paste("var",var_names)
names_lag2 <- paste(var_names,"lag",names_lag2)
colnames(lag_2_df) <- names_lag2
# column names lag 3
names_lag3 <- rep(3,length(lag_3_df))
var_names <- rep(1:length(lag_3_df),1)
var_names <- paste("var",var_names)
names_lag3 <- paste(var_names,"lag",names_lag3)
colnames(lag_3_df) <- names_lag3
# Place all in same data frame
all_df <- cbind(lag_1_df,lag_2_df,lag_3_df)
head(all_df)
```
With output:
```
> head(all_df)
var 1 lag 1 var 2 lag 1 var 3 lag 1 var 4 lag 1 var 5 lag 1 var 6 lag 1 var 7 lag 1 var 8 lag 1 var 9 lag 1
1 NA NA NA NA NA NA NA NA NA
2 26 27 66 82 86 53 37 24 13
3 37 22 18 93 3 69 74 65 4
4 57 52 96 14 98 38 94 98 3
5 91 27 90 75 75 96 67 38 92
6 20 18 95 98 27 11 70 46 84
var 10 lag 1 var 11 lag 1 var 12 lag 1 var 13 lag 1 var 14 lag 1 var 15 lag 1 var 16 lag 1 var 17 lag 1 var 18 lag 1
1 NA NA NA NA NA NA NA NA NA
2 6 88 60 6 63 94 81 64 95
3 35 97 95 5 21 56 48 100 7
4 58 87 12 98 91 57 17 51 37
5 54 44 52 80 59 10 40 94 67
6 61 19 94 3 17 39 82 19 1
var 19 lag 1 var 20 lag 1 var 1 lag 2 var 2 lag 2 var 3 lag 2 var 4 lag 2 var 5 lag 2 var 6 lag 2 var 7 lag 2
1 NA NA NA NA NA NA NA NA NA
2 77 28 NA NA NA NA NA NA NA
3 59 75 26 27 66 82 86 53 37
4 72 61 37 22 18 93 3 69 74
5 28 51 57 52 96 14 98 38 94
6 44 88 91 27 90 75 75 96 67
```
Upvotes: 0 <issue_comment>username_2: Using `shift` and `set` from `data.table`
```
library(data.table)
DT <- data.table(foo = seq_len(10),
bar = seq_len(10)*2L,
baz = seq_len(10)*3L)
LagCols <- c("bar","baz")
LagLengths <- seq_len(2)
for(y in LagCols){
for (z in LagLengths) set(DT, j = eval(paste0(y,"_lag_",z)), value = shift(DT[[y]],n = z, type = "lag"))
}
print(DT)
```
gives the following:
```
foo bar baz bar_lag_1 bar_lag_2 baz_lag_1 baz_lag_2
1: 1 2 3 NA NA NA NA
2: 2 4 6 2 NA 3 NA
3: 3 6 9 4 2 6 3
4: 4 8 12 6 4 9 6
5: 5 10 15 8 6 12 9
6: 6 12 18 10 8 15 12
7: 7 14 21 12 10 18 15
8: 8 16 24 14 12 21 18
9: 9 18 27 16 14 24 21
10: 10 20 30 18 16 27 24
```
Upvotes: 0 <issue_comment>username_3: There is a much simpler way to create the additional lag columns. The `n` parameter to `data.table`'s `shift()` function is defined as
>
> Non-negative integer vector denoting the offset to lead or lag the
> input by. To create multiple lead/lag vectors, provide multiple values
> to n
>
>
>
So,
```
DT[, shift(baz, 0:3)]
```
returns
>
>
> ```
> V1 V2 V3 V4
> 1: 3 NA NA NA
> 2: 6 3 NA NA
> 3: 9 6 3 NA
> 4: 12 9 6 3
> 5: 15 12 9 6
> 6: 18 15 12 9
> 7: 21 18 15 12
> 8: 24 21 18 15
> 9: 27 24 21 18
> 10: 30 27 24 21
>
> ```
>
>
Now, the OP has requested to shift each variable and to name the new columns according to the amount of shift. This can be accomplished by
```
DT[, unlist(lapply(.SD, shift, n = 0:3), recursive = FALSE)]
```
>
>
> ```
> foo1 foo2 foo3 foo4 bar1 bar2 bar3 bar4 baz1 baz2 baz3 baz4
> 1: 1 NA NA NA 2 NA NA NA 3 NA NA NA
> 2: 2 1 NA NA 4 2 NA NA 6 3 NA NA
> 3: 3 2 1 NA 6 4 2 NA 9 6 3 NA
> 4: 4 3 2 1 8 6 4 2 12 9 6 3
> 5: 5 4 3 2 10 8 6 4 15 12 9 6
> 6: 6 5 4 3 12 10 8 6 18 15 12 9
> 7: 7 6 5 4 14 12 10 8 21 18 15 12
> 8: 8 7 6 5 16 14 12 10 24 21 18 15
> 9: 9 8 7 6 18 16 14 12 27 24 21 18
> 10: 10 9 8 7 20 18 16 14 30 27 24 21
>
> ```
>
>
### Data
For comparison, the sample data of [Matt's answer](https://stackoverflow.com/a/49369826/3817004) is used
```
library(data.table)
DT <- data.table(foo = seq_len(10),
bar = seq_len(10)*2L,
baz = seq_len(10)*3L)
```
Upvotes: 2 [selected_answer] |
2018/03/19 | 813 | 3,231 | <issue_start>username_0: When I try to use context in my **fragment** I get the error:
>
> constructor Adapter in class Adapter cannot be applied to given types;
> required:Context,List,OnItemClickListener
>
>
>
I have declared my context in my adapter as follows:
```
private Context mContext;
```
Then I initialized the context:
```
public MyAdapter(Context context, List<ListItem> listItems, OnItemClickListener callback) {
    this.listItems = listItems;
    this.callback = callback;
    this.mContext = context;
}
```
And used **mContext** to get my imageurl in the onBindViewHolder using picasso
```
@Override
public void onBindViewHolder(@NonNull ViewHolder holder, int position) {
    ListItem listItem = listItems.get(position);
    Picasso.with(mContext).load(listItem.getImageurl()).into(holder.imageUrl);
}
```
But now I can't seem to get around using this context in my Fragment.
This is what I've tried: In my **Fragment**:
**//an error occurs**
```
adapter = new MyAdapter(this,listItems);
```
*so I tried this:*
**//still get an error**
```
adapter = new MyAdapter(getContext(),this);
```
I also tried getActivity but still get an error
```
adapter = new MyAdapter(getActivity());
```
where am I going wrong?
All I am really trying to do is display the image in my listfragment but I don't know how to use Picasso with using context, and MyAdapter does not require context to function properly. I've been using it without declaring context and the data displays properly. Onclick is also working and displaying strings from firebase, but now I need to display images from Firebase using Picasso into my listfragment. Everything else works fine except this line of code in my Fragment:
```
adapter = new MyAdapter(getActivity());
```<issue_comment>username_1: Fragments are inflated inside an Activity.
1. In Fragment, you either use the context of the Activity or the context of the whole application.
2. Plus you have missed passing one more parameter in the Adapter, ie your click listener.
Define the adapter like this -
```
OnItemClickListener mOnItemClickListener = OnItemClickListener {
void onItemClick(int position) {
}
}
adapter = new MyAdapter(getActivity(), listItems, mOnItemClickListener);
```
or
```
adapter = new MyAdapter(getActivity().getApplicationContext(), listItems, mOnItemClickListener);
```
Upvotes: 1 <issue_comment>username_2: In your adapter initialization you pass 2 parameters, but your constructor requires 3 parameters.
so try to initialize using 3 params:
```
adapter=new MyAdapter(getContext(), listItems, this);
```
getContext()= context of fragment.
lisItems= your list.
this=is your click interface listener (make sure you implemented the listener in your fragment).
Upvotes: 1 <issue_comment>username_3: Try this, I think you forgot the last argument:
```
OnItemClickListener listener = new OnItemClickListener() {
    @Override
    public void onItemClick(int position) {
        // some code
    }
};
adapter = new MyAdapter(this, listItems, listener);
```
Upvotes: 1 <issue_comment>username_4: You can get a `Context` from any `View` object in your `Holder`.
Example:
```
mContext = holder.imageView.getContext()
```
Upvotes: 3 [selected_answer] |
2018/03/19 | 1,644 | 5,809 | <issue_start>username_0: I have a big Python array that I would like to break into chunks, perform a calculation on the chunks, and then "reassemble" into one array. Below is what I have so far; I'm just starting to learn threading in general and threading with Python.
```
def performCalc(binaryArray):
    # perform some operation
    rArray = blah * binaryArray
    return rArray

def main(argv):
    numberOfThreads = 5
    # grey_arr has roughly 100,000 elements
    greyScaleChunks = np.array_split(grey_arr, numberOfThreads)
    for i in range(numberOfThreads):
        t = Thread(target=performCalc, args=(greyScaleChunks[i],))
        t.start()
    # take all the return values and later create one big array to be resized into a matrix.
```
The ordering of the chunks is important and I have to maintain that.<issue_comment>username_1: If you want to solve it with explicit `Thread` objects, and you want to get the results of the thread functions, you need to hold onto those `Thread` objects so you can later `join` them and pull out their results. Like this:
```
ts = []
for i in range(numberOfThreads):
    t = Thread(target=performCalc, args=(greyScaleChunks[i],))
    ts.append(t)
    t.start()
for t in ts:
    t.join()
# When you get here, all threads have finished
```
Also, the default implementation of `Thread.run` just calls your `target` and throws away the result. So you need to store the return value somewhere the main thread can access. Many numpy programs do this by passing in a pre-allocated array to each thread, so they can fill them in, and that isn't too huge a change to your design, but it's not the way you're headed. You can of course pass in any other mutable object to mutate. Or set a global variable, etc. But you've designed this around returning a value, and that's a nice way to think about things, so let's stick with that. The easiest way to make that work is to subclass `Thread`:
```
class ReturningThread(threading.Thread):
    def run(self):
        try:
            if self._target:
                self._result = self._target(*self._args, **self._kwargs)
        finally:
            del self._target, self._args, self._kwargs

    def join(self):
        super().join()
        return self._result
```
This is untested code, but it should work. (I've done similar things in real code, but more complicated, to allow `join` to handle timeouts properly; here I kept it dead simple, just adding a `_result =` in the `run` method and `return`ing it in `join`.)
So:
```
ts = []
for i in range(numberOfThreads):
    t = ReturningThread(target=performCalc, args=(greyScaleChunks[i],))
    ts.append(t)
    t.start()

results = []
for t in ts:
    results.append(t.join())
```
And now you have a list of arrays that you can stack together.
---
However, what I did above is basically turn each thread into a half-assed future. It may be conceptually simpler to just use actual futures. This does mean that we're now using a thread pool that we don't really have any need for—there's exactly one task per thread. There's a probably-negligible performance cost (you're spending a lot more time on the actual work than the queueing, or you wouldn't want to thread this way in the first place), but, more importantly, we're adding significant extra complexity buried under the hood (in a well-tested stdlib module) for a bit less complexity in our code; whether or not that's worth it is up to you. Anyway:
```
with concurrent.futures.ThreadPoolExecutor(max_workers=numberOfThreads) as x:
    results = x.map(performCalc, greyScaleChunks)
```
This handles creating 5 threads, creating a job for each `performCalc(chunk)`, partitioning the 5 jobs out to the 5 threads, joining the threads, and gathering the 5 jobs' results in order, so all you have to do is stack up the results.
---
Another advantage of using an executor is that if it turns out your code isn't benefiting from thread-parallelism because of the GIL (unlikely to be a problem in your case—you should be spending most of your time in a numpy operation over 20000 rows, which will run with the GIL released—but obviously you have to test to verify that's true), you can switch to processes very easily: just change that `ThreadPoolExecutor` to a `ProcessPoolExecutor` and you're done.
It's possible that your args and returns can't be either copied or shared between processes the default way, or that doing so is so expensive that it kills all the benefits of parallelism—but the fact that you can test that with a one-word change, and then only deal with it if it's a problem, is still a win.
Upvotes: 2 <issue_comment>username_2: You can do it by using the largely undocumented `ThreadPool` (mentioned in [this answer](https://stackoverflow.com/a/3386632/355230)) and its `map_async()` method as shown in the following runnable example:
```
import numpy as np
from pprint import pprint
from multiprocessing.pool import ThreadPool
import threading
blah = 2
def performCalc(binaryArray):
    # perform some operation
    rArray = blah * binaryArray
    return rArray

def main(data_array):
    numberOfThreads = 5
    pool = ThreadPool(processes=numberOfThreads)
    greyScaleChunks = np.array_split(data_array, numberOfThreads)
    results = pool.map_async(performCalc, greyScaleChunks)
    pool.close()
    pool.join()  # Block until all threads exit.
    # Final results will be a list of arrays.
    pprint(results.get())

grey_arr = np.array(range(50))
main(grey_arr)
```
Results printed:
```none
[array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18]),
array([20, 22, 24, 26, 28, 30, 32, 34, 36, 38]),
array([40, 42, 44, 46, 48, 50, 52, 54, 56, 58]),
array([60, 62, 64, 66, 68, 70, 72, 74, 76, 78]),
array([80, 82, 84, 86, 88, 90, 92, 94, 96, 98])]
```
Upvotes: 0 |
2018/03/19 | 874 | 2,803 | <issue_start>username_0: I'm trying to figure out how to add a class called active_img to an element on the 1st click, then on the 2nd click remove that class and add a class called reverse_img, and so on in a loop. So on every odd click the element gets the class active_img, and on every even click it gets the class reverse_img.
This is what I have in jQuery, but it doesn't work once I add the 2nd class.
```
$('.menu').click(function() {
  if ($('.menu_img').hasClass('active_img')) {
    $('.menu_img').removeClass('active_img');
  } else {
    $('.menu_img').addClass('active_img');
  }
});
```
html
```

```
and scss
```
.menu {
@include width-height(260px, 220px);
background-color: transparent;
border-radius: 50%;
margin: -150px 0 0 -170px;
transition: .3s;
float: left;
.menu_img {
width: 60px;
position: absolute;
top: 8px;
left: 15px;
margin: 0;
transition: .3s;
transform: rotateZ(0deg);
}
.active_img {
animation-name: img;
animation-duration: .5s;
-webkit-animation-name: img;
-webkit-animation-duration: .5s;
}
.reverse_img {
animation-name: reverse_img;
animation-duration: .5s;
-webkit-animation-name: reverse_img;
-webkit-animation-duration: .5s;
}
}
.menu:hover {
background-color: white;
margin: -110px 0 0 -130px;
cursor: pointer;
transition: .3s;
.menu_img {
top: 3px;
left: 10px;
width: 70px;
transition: .3s;
}
}
@keyframes img {
0% {}
100% {
transform: rotateZ(180deg);
}
}
@keyframes reverse_img {
0% {}
100% {
transform: rotateZ(-180deg);
}
}
```
thx for the answers :D<issue_comment>username_1: You can look into [toggleClass()](http://api.jquery.com/toggleclass/)
This will switch the class on and off every other click
```
$('.menu').click(function() {
$('.menu_img').toggleClass('active_img');
});
```
Upvotes: 1 <issue_comment>username_2: ```
$('.menu').click(function() {
  var menuImage = $('.menu_img'),
      newClass = menuImage.hasClass('active_img') ? 'reverse_img' : 'active_img';
  menuImage.removeClass('active_img reverse_img');
  menuImage.addClass(newClass);
});
```
Think that should work. Or...
```
$('.menu').click(function() {
  var menuImage = $('.menu_img');
  menuImage.toggleClass('active_img');
  menuImage.toggleClass('reverse_img');
});
```
If the page always loads initially with appropriate class.
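Stripped of jQuery, the odd/even behaviour the question asks for is just a two-state toggle. A plain-JavaScript sketch of the logic (the element stand-in here is hypothetical, not a real DOM node):

```javascript
// Minimal stand-in for an element's class list.
const el = { classes: new Set() };

function onMenuClick(el) {
  if (el.classes.has('active_img')) {
    // even click: swap active_img for reverse_img
    el.classes.delete('active_img');
    el.classes.add('reverse_img');
  } else {
    // odd click: swap reverse_img for active_img
    el.classes.delete('reverse_img');
    el.classes.add('active_img');
  }
}

onMenuClick(el); // 1st click -> active_img
onMenuClick(el); // 2nd click -> reverse_img
onMenuClick(el); // 3rd click -> active_img again
console.log([...el.classes]); // [ 'active_img' ]
```

With a real element, the two `Set` operations per branch collapse into the `removeClass`/`addClass` (or paired `toggleClass`) calls shown above.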
Upvotes: 3 [selected_answer]<issue_comment>username_3: Add the reverse_img class to your HTML first and use this script:
```
$('.menu').click(function() {
  $('.menu_img').toggleClass('active_img');
  $('.menu_img').toggleClass('reverse_img');
});
```
Upvotes: 2 |
2018/03/19 | 647 | 2,348 | <issue_start>username_0: I am trying to validate this list of columns for presence, so I wrote this code. It works, but can we create an array of columns for this?
Example that I wrote:
```
class UserAccount < ApplicationRecord
self.table_name = 'accounts'
belongs_to :user
with_options unless: :new_record? do
validates :address, presence: { message: 'This field is required' }
validates :street, presence: { message: 'This field is required' }
validates :house_number, presence: { message: 'This field is required' }
validates :zip_code, presence: { message: 'This field is required' }
validates :city, presence: { message: 'This field is required' }
validates :country, presence: { message: 'This field is required' }
end
end
```
Array of columns to iterate on:
```
REQUIRED_COLUMNS = %w[address street house_number zip_code city country]
```<issue_comment>username_1: [validates](http://api.rubyonrails.org/v5.1.3/classes/ActiveModel/Validations/ClassMethods.html#method-i-validates) can take multiple columns as the first args, so you could just provide all columns on one line, i.e.
```
validates :address, :street, :etc, presence: { message: 'This field is required' }
```
If you do want to use an array for this, you should be able to use the [splat](https://endofline.wordpress.com/2011/01/21/the-strange-ruby-splat/) operator, i.e.
```
REQUIRED_COLUMNS = %i(address street house_number zip_code city country).freeze
validates *REQUIRED_COLUMNS, presence: { message: 'This field is required' }
```
Notice I've switched to `%i()` to generate an array of symbols.
Using the `splat` operator, in a very small nutshell, destructures the array so its contents are passed as individual args, rather than just one array argument.
Hope that helps - let me know how you get on.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You could simply do this:
```
validates_presence_of :address, :street, :house_number, :zip_code, :city, :country, unless: :new_record?
```
Notice that I removed your `message` declaration. This way the validation will use the default message which includes the name of the field that has failed validation – i.e. `"City can't be blank."` – instead of just saying `"This field is required."` with no explanation to the user as to what `"This"` is.
Upvotes: 0 |
2018/03/19 | 570 | 1,821 | <issue_start>username_0: I'm using Cosmos DB with the SQL API and I'm trying to join two collections. I saw a join example within a single document, but it's not what I'm actually looking for.
**RequestLog**
```
{
"DateTimeStamp": "2018-03-16T10:56:52.1411006Z",
"RequestId": "8ce80648-66e2-4357-98a8-7a71e8b65301",
"IPAddress": "0.0.0.173"
}
```
**ResponseLog**
```
{
"DateTimeStamp": "2018-03-16T10:56:52.1411006Z",
"RequestId": "8ce80648-66e2-4357-98a8-7a71e8b65301",
"Body": "Hello"
}
```
Is it possible to join both collections? how?<issue_comment>username_1: It is not possible to write join queries across multiple collections in Cosmos, or even across multiple documents in a single collection for that matter. Your only options here would be to issue separate queries (preferably in parallel) OR if your documents lived together in the same collection, you could retrieve all the relevant logs for a request using the common RequestId property.
`SELECT * from c WHERE c.RequestId = '8ce80648-66e2-4357-98a8-7a71e8b65301'`
This will only work if the object structure across the documents is the same. In this example it's possible because they both share a property of the same name called `RequestId`. You can't do JOIN on arbitrary properties.
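If you go the separate-queries route instead, the join then has to happen in application code. A minimal JavaScript sketch of merging the two result sets by `RequestId` (the sample documents are the ones from the question):

```javascript
// Results of two separate queries, one per collection.
const requestLogs = [
  { DateTimeStamp: '2018-03-16T10:56:52.1411006Z', RequestId: '8ce80648-66e2-4357-98a8-7a71e8b65301', IPAddress: '0.0.0.173' }
];
const responseLogs = [
  { DateTimeStamp: '2018-03-16T10:56:52.1411006Z', RequestId: '8ce80648-66e2-4357-98a8-7a71e8b65301', Body: 'Hello' }
];

// Index one side by RequestId, then merge the other side into it.
const byRequestId = new Map(requestLogs.map(r => [r.RequestId, r]));
const joined = responseLogs.map(res => ({ ...byRequestId.get(res.RequestId), ...res }));

console.log(joined[0].IPAddress, joined[0].Body); // 0.0.0.173 Hello
```

This is effectively the "join" you would otherwise express in SQL, done client-side after both queries return.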
Upvotes: 3 <issue_comment>username_2: Actually Cosmos DB JOIN operation is limited to the scope of a single document. What possible is you can join parent object with child objects under same document.
Cross-document joins are **NOT** supported, so you would have to implement such query yourself.
Upvotes: 5 [selected_answer]<issue_comment>username_3: In Azure Cosmos DB, joins are scoped to a single item. Cross-item and cross-container joins are **not supported**. [check documentation here](https://learn.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-join)
Upvotes: 1 |
2018/03/19 | 325 | 1,270 | <issue_start>username_0: I'm new to Angular and it's concepts, and i'm trying to wrap my head around how to deal with this particular scenario.
I'm using events from Cordova's Keyboard plugin to set a `keyboardVisible` property in my page component, and using it to show/hide the footer.
```
Footer content
```
The footer is hidden/shown correctly, but I need to call `resize()` on the underlying `Content` in the component to update the layout ([as is mentioned here](https://ionicframework.com/docs/api/components/content/Content/#resizing-the-content))...
```
@ViewChild(Content) content: Content;
this.content.resize();
```
... and I can't for the life of me figure out how/where to call this after the `*ngIf` directive has done its thing.<issue_comment>username_1: ```
Footer content
export class YourCom {
  check(item) {
    if (!keyboardVisible) {
      this.content.resize();
    }
  }
}
```
Just to give you a heads up: if you call a function that updates the view somehow, you're going to get an error from change detection that says "Expression updated after view is checked".
Upvotes: 0 <issue_comment>username_2: If resize doesn't return anything you can simply do this
```
{{content.resize()}}
Footer content
```
Upvotes: 1 |
2018/03/19 | 382 | 1,460 | <issue_start>username_0: In my Android app, I wrote
```
return x >= rangeStart && x <= rangeEnd
```
IntelliJ has an inspection that wants me to change this to
```
return x in rangeStart..rangeEnd
```
however, since this code is on the critical path of high-volume event handling, I can accept the transformation only if I know for sure that the second form won't create any garbage. Is there such a guarantee in Kotlin?<issue_comment>username_1: Example from [here](https://kotlinlang.org/docs/reference/ranges.html)
```
if (i in 1..10) { // equivalent of 1 <= i && i <= 10
println(i)
}
```
I think it's just language sugar, and the two forms are equivalent!
Upvotes: 2 <issue_comment>username_2: You can check the bytecode generated by the Kotlin compiler yourself by searching actions and typing in "Show Kotlin Bytecode", or from the menu via Tools -> Kotlin -> Show Kotlin Bytecode.
Optimizing range checks on primitives (Int, Double, etc) is a pretty trivial thing to do, so you'll find that the Kotlin compiler always does the optimization for you with these types. If you have just a few of these on your critical code path, you can check that it happens by hand to feel safe about using ranges.
The [documentation](https://kotlinlang.org/docs/reference/ranges.html) is brief about this, but it does say:
>
> Range is defined for any comparable type, but for integral primitive types it has an optimized implementation.
>
>
>
Upvotes: 4 [selected_answer] |
2018/03/19 | 475 | 1,475 | <issue_start>username_0: How can I find all values in array A where the key is not in array B (a bit like a SQL NOT IN)? I don't want to compare the whole object, just one property, but I do want to return all fields for the differences, using lodash (or something simpler).
```
const arrayA = [
{ sku:"1", name:"one"},
{ sku:"2", name:"two"}
]
const arrayB = [
{ sku:"1", name:"One Product"},
{ sku:"2", name:"Two Product"},
{ sku:"3", name:"Three Product"}
]
```
The results should be :
```
{ sku:"3", name:"Three Product"}
```
Thank you for your time.<issue_comment>username_1: You don't really need lodash for this.
What you need is to filter the second array and eliminate all elements existing in the first array.
```js
const arrayA = [
{ sku:"1", name:"one"},
{ sku:"2", name:"two"}
]
const arrayB = [
{ sku:"1", name:"One Product"},
{ sku:"2", name:"Two Product"},
{ sku:"3", name:"Three Product"}
]
console.log(
// filter B such that we only leave items from B that are not in A
arrayB.filter(b =>
!arrayA.some(a => a.sku === b.sku)
)
)
```
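For larger arrays, building a `Set` of keys first avoids rescanning `arrayA` with `some` for every element of `arrayB`. A small sketch using the same sample data:

```javascript
const arrayA = [{ sku: '1', name: 'one' }, { sku: '2', name: 'two' }];
const arrayB = [
  { sku: '1', name: 'One Product' },
  { sku: '2', name: 'Two Product' },
  { sku: '3', name: 'Three Product' }
];

// One pass to collect the keys, one pass to filter: O(n + m) instead of O(n * m).
const skusInA = new Set(arrayA.map(a => a.sku));
const onlyInB = arrayB.filter(b => !skusInA.has(b.sku));

console.log(onlyInB); // [ { sku: '3', name: 'Three Product' } ]
```

The filtered objects keep all of their fields, as the question requires; only the `sku` property is used for the comparison.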
Upvotes: 1 [selected_answer]<issue_comment>username_2: You could also use:
```
const arrayA = [
{ sku:"1", name:"one"},
{ sku:"2", name:"two"}
];
const arrayB = [
{ sku:"1", name:"One Product"},
{ sku:"2", name:"Two Product"},
{ sku:"3", name:"Three Product"}
];
let diff = _.differenceWith(arrayB, arrayA, (a, b) => _.isEqual(a.sku, b.sku) );
```
Upvotes: 1 |
2018/03/19 | 938 | 3,457 | <issue_start>username_0: Based on some sports results data, I have a `Fixture` object which has `getHome()` and `getAway()` method. I'd like to shorten this method which I've written to only use a single lambda function (instead of creating a new list and two lambdas), is this possible?
```
private Collection<Fixture> finalResults(Team team) {
    List<Fixture> finalResults = new ArrayList<>();
    List<Fixture> homeResults = resultList.stream()
            .filter(fixture -> fixture.getHome().equals(team))
            .collect(toList());
    List<Fixture> awayResults = resultList.stream()
            .filter(fixture -> fixture.getAway().equals(team))
            .collect(toList());
    finalResults.addAll(homeResults);
    finalResults.addAll(awayResults);
    return finalResults;
}
```<issue_comment>username_1: Simple enough
```
resultList.stream()
.filter(fixture -> fixture.getHome().equals(team) || fixture.getAway().equals(team)))
.collect(toList());
```
**EDIT:** This is on the assumption that order does not matter to you. If your final list needs to have home result and then away, have a look at [username_2's answer](https://stackoverflow.com/a/49368438/4405757).
Upvotes: 5 [selected_answer]<issue_comment>username_2: Assuming the order doesn't matter, you can do it on one line. Like,
```
private Collection<Fixture> finalResults(Team team) {
    return resultList.stream()
            .filter(fixture -> fixture.getHome().equals(team)
                    || fixture.getAway().equals(team))
            .collect(toList());
}
```
If the order matters (home results and then away), you can do it with a single `List` like
```
private Collection<Fixture> finalResults(Team team) {
    List<Fixture> al = new ArrayList<>(resultList.stream()
            .filter(fixture -> fixture.getHome().equals(team)).collect(toList()));
    al.addAll(resultList.stream()
            .filter(fixture -> fixture.getAway().equals(team)).collect(toList()));
    return al;
}
```
Upvotes: 2 <issue_comment>username_3: You can simply create a conditions concatenations or can concatenate multiple **filter** call
**Conditions concatenations**
```
myList.stream()
.filter(element -> (condition1 && condition2 && condition3))
```
**Multiple filter call**
```
myList.stream()
.filter(element -> condition1)
.filter(element -> condition2)
.filter(element -> condition3)
```
Upvotes: 2 <issue_comment>username_4: If you wan to get fancy with lambdas:
```
Predicate<Fixture> isHome = fr -> fr.getHome().equals(team);
Predicate<Fixture> isAway = fr -> fr.getAway().equals(team);

resultList.stream()
        .filter(isHome.or(isAway))
        .collect(toList());
```
You could even extract the compose predicate to test it in isolation, with no streams involved, which is good for more complex predicates:
```
Predicate<Fixture> isHomeOrAway = isHome.or(isAway);
assertTrue(isHomeOrAway.test(homeFixture));
...
```
Upvotes: 4 <issue_comment>username_5: You can do the following
```
someStream.filter(((Predicate) someObject-> someCondition).or(someObject-> someOtherCondition))
```
Or you can define your own "or" function that won't cause such a deep hierarchy
```
@SuppressWarnings("unchecked")
Predicate or(Predicate ...predicates) {
return r -> Arrays.stream(predicates).anyMatch(p -> p.test(r));
}
```
That gives you a cleaner interface without casting and the nesting
```
.filter(or(
yourObject -> {
return false;
},
yourObject -> {
return false;
},
yourObject -> {
return false;
},
yourObject -> {
return false;
}
))
```
Upvotes: 2 |
2018/03/19 | 771 | 2,571 | <issue_start>username_0: I would need to obtain the centroid of a queryset of points (PointField) with Django
Here are my models:
```
class GroupOfCities(models.Model):
    geomcentroid = models.PointField(srid=4326, blank=True, null=True)

class City(models.Model):
    centroid = models.PointField(srid=4326, blank=True, null=True)
    groupofcities = models.ForeignKey(GroupOfCities, null=True)
```
I would need to get the centroid of each group of cities and save it to geomcentroid
Example of what I would like to do for one group of cities:
```
from django.contrib.gis.db.models.functions import Centroid
firstgroupofcities = GroupOfCities.objects.get(id=1)
cities = City.objects.filter(groupofcities=firstgroupofcities).annotate(cent=Centroid('centroid'))
firstgroupofcities.geomcentroid = cities.cent
firstgroupofcities.save()
```
But this "Centroid" functionality only works for polygons.
Any clue?<issue_comment>username_1: You could iterate over the queryset and calculate the centroid in Python -- the code below is untested, but the approach should work.
```
from django.contrib.gis.geos import Point

firstgroupofcities = GroupOfCities.objects.get(id=1)
cities = City.objects.filter(groupofcities=firstgroupofcities)

x_average = 0
y_average = 0
for idx, city in enumerate(cities):
    x_average = x_average*(idx/(idx+1)) + city.centroid.coords[0]/(idx+1)
    y_average = y_average*(idx/(idx+1)) + city.centroid.coords[1]/(idx+1)

cent = Point(x_average, y_average)
firstgroupofcities.geomcentroid = cent
firstgroupofcities.save()
```
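The incremental-average update used in that loop can be sanity-checked without Django at all. Here is a minimal pure-Python sketch (the coordinates are hypothetical) showing that the running average matches a plain arithmetic mean:

```python
# Running (incremental) average of point coordinates, using the same
# update rule as the loop above, checked against a plain mean.
points = [(2.0, 4.0), (4.0, 8.0), (6.0, 0.0)]  # hypothetical city centroids

x_avg = 0.0
y_avg = 0.0
for idx, (x, y) in enumerate(points):
    x_avg = x_avg * (idx / (idx + 1)) + x / (idx + 1)
    y_avg = y_avg * (idx / (idx + 1)) + y / (idx + 1)

print((x_avg, y_avg))  # (4.0, 4.0), the mean of each coordinate axis
```

On the first iteration `idx / (idx + 1)` is 0, so the running average starts at the first point; each later iteration reweights the old average by `idx / (idx + 1)` and folds in the new point with weight `1 / (idx + 1)`.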
Additionally, you'll probably want to either:
1) run the calculation each time the fkey relationship on `City` is changed by overriding `save()` or using [signals](https://docs.djangoproject.com/en/2.0/topics/signals/)
OR
2) calculate it each time you need the data by putting it in a method of `GroupOfCities`
Lastly, if none of that works and you're using Postgres and you're required to do the calculation in the DB, you might have an alternative using these:
<https://postgis.net/docs/ST_MakePolygon.html>
<https://docs.djangoproject.com/en/2.0/topics/db/sql/>
Upvotes: 1 [selected_answer]<issue_comment>username_2: You need to use `Union` aggregate function:
```
from django.contrib.gis.db.models.aggregates import Union
from django.contrib.gis.db.models.functions import Centroid
for goc in GroupOfCities.objects.annotate(cent=Centroid(Union('city__centroid'))):
    goc.geomcentroid = goc.cent
    goc.save()
```
(The `Union` aggregate collects the points into a single multipoint geometry, which gives `Centroid` something to operate on.)
Upvotes: 1 |
2018/03/19 <issue_start>username_0: Here's my code
```
import React, { Component } from 'react';
import MarkdownRenderer from 'react-markdown-renderer';

export default class Content extends Component {
  constructor(props) {
    super(props);
    this.state = {
      content: ''
    };
  }

  componentWillMount() {
    let file_path = this.props.mdfPath;
    fetch(file_path)
      .then(response => response.text())
      .then(content => {
        this.setState({ content });
      });
  }

  render() {
    return (
      // the JSX here was lost in extraction; rendering the fetched
      // markdown is the apparent intent
      <MarkdownRenderer markdown={this.state.content} />
    );
  }
}
```
This component fetches the content of any markdown file whose path is passed to it (through props), and then makes use of [`react-markdown-renderer`](https://github.com/InsidersByte/react-markdown-renderer) to turn that markdown into HTML.
I've downloaded the `highlight.js` files and pointed to them in my index.html file. I have also run, inside index.html, the function `initHighlightingOnLoad()`. However, **when the site loads, my code block isn't highlighted**. I'm not sure what's going on... Can someone help please?
This is what outputs to the DOM:

```
<h1>My Site</h1>
<p>This is my site...</p>
<pre><code>const msg = 'Welcome to My Site';
console.log(msg); // Welcome to My Site
</code></pre>
```
<issue_comment>username_1: My guess is that your React app hasn't initialized the component yet at the time when index.html runs `initHighlightingOnLoad()`.
Try moving `initHighlightingOnLoad()` inside `componentDidMount` of your `Content` component
Upvotes: 2 <issue_comment>username_2: @username_1's answered worked for me, I had the same problem. Hope this can help
```
import React, { Component } from 'react';
import hljs from 'highlight.js';
import 'highlight.js/styles/vs2015.css';

hljs.configure({ useBR: true });

export default class Post extends Component {
  componentDidMount() {
    hljs.initHighlightingOnLoad();
  }

  render() {
    return null; // original JSX omitted
  }
}
```
Upvotes: 1 <issue_comment>username_3: For everyone who didn't find any working answer above and have no success with `initHighlightingOnLoad` and others builtin functions.
React: 16.8.2 working example:
```
import React, { Component } from "react";
import hljs from "highlight.js";
import "./dracula.css";

class Preview extends Component {
  componentDidMount() {
    this.updateCodeSyntaxHighlighting();
  }

  componentDidUpdate() {
    this.updateCodeSyntaxHighlighting();
  }

  updateCodeSyntaxHighlighting = () => {
    document.querySelectorAll("pre code").forEach(block => {
      hljs.highlightBlock(block);
    });
  };

  render() {
    return null; // original JSX omitted
  }
}
```
Note that `updateCodeSyntaxHighlighting` should be called in the `componentDidMount` and `componentDidUpdate` methods of every component that renders `<pre><code>...</code></pre>` tags.
Upvotes: 2 <issue_comment>username_4: This looks at the 'code' block in a HTML string
Highlight.js
```
import React, { Component } from 'react';
import hljs from "highlight.js";
import 'highlight.js/styles/vs2015.css';
class Highlight extends Component {
componentDidMount() {
this.updateCodeSyntaxHighlighting();
}
componentDidUpdate() {
this.updateCodeSyntaxHighlighting();
}
updateCodeSyntaxHighlighting = () => {
document.querySelectorAll("code").forEach(block => {
hljs.highlightBlock(block);
});
};
render() {
return (
);
}
}
export default Highlight;
```
To use it
```
import Highlight from "../path/to/Highlight"
<Highlight body={HTML_STRING} />
```
Upvotes: 0 |