date (stringlengths 10-10) | nb_tokens (int64 60-629k) | text_size (int64 234-1.02M) | content (stringlengths 234-1.02M)
---|---|---|---
2018/03/19 | 2,307 | 7,905 | <issue_start>username_0: Hi everyone, I am new to Python and have been trying to work out how to create lists within lists containing multiples of 1 to 5, with each inner list containing as many multiples as the number being multiplied. In particular, I want to apply the `lambda` and `map` functions.
```
[[0],
[0, 2],
[0, 3, 6],
[0, 4, 8, 12],
[0, 5, 10, 15, 20]]
```
I have tried several ways, and this is the code that I am currently working on, but I receive a `NameError: name 'x' is not defined` error. I would be really grateful if someone could point out the mistakes I have overlooked!
```
list_1 = [[0], [0,1], [0,1,2]]
result = list(map(lambda x, i ,j ,k: x[i][j] * k, list_1, range(3),items in x[i], range(1,3)))
print(result)
```
<issue_comment>username_1: If you need to get the absences in a time period (let's say the past 7 days), then you can do something like this:
```
SELECT
ID,
StudentID,
[Date],
AbsenceReasonID
FROM(
SELECT
ID,
StudentID,
[Date],
AbsenceReasonID,
COUNT(StudentID) OVER(PARTITION BY StudentID ORDER BY StudentID) AS con,
((DATEPART(dw, [Date]) + @@DATEFIRST) % 7) AS dw
FROM attendance
) D
WHERE
D.con > 2
AND [Date] >= '2018-02-02'
AND [Date] <= GETDATE()
AND dw NOT IN(0,1)
```
and based on your given data the output will be
```
| ID | StudentID | Date | AbsenceReasonID |
|--------|-----------|------------|-----------------|
| 430957 | 10158 | 2018-02-02 | 2 |
```
you could adjust the output as you like.
[SQL Fiddle](http://www.sqlfiddle.com/#!18/6b2a1c/3)
Upvotes: 0 <issue_comment>username_2: Try this:
The `CTE` contains the absence dates when a student was absent on both the day before and the day after (excluding weekends). The two `UNION`s at the end add back the first and last date of each group and eliminate the duplicates.
```
with cte(id, studentId, dateof , absenceReasonId)
as
(
select a.*
from attendance a
where exists (select 1 from attendance preva
where preva.studentID = a.studentID
and datediff(day, preva.dateof, a.dateof)
<= (case when datepart(dw, preva.dateof) >= 5
then 8 - datepart(dw, preva.dateof)
else 1
end)
and preva.dateof < a.dateof)
and exists (select 1 from attendance nexta
where nexta.studentID = a.studentID
and datediff(day, a.dateof, nexta.dateof)
<= (case when datepart(dw, a.dateof) >= 5
then 8 - datepart(dw, a.dateof)
else 1
end)
and nexta.dateof > a.dateof))
select cte.*
from cte
union -- use union to remove duplicates
select preva.*
from
attendance preva
inner join
cte
on preva.studentID = cte.studentID
and preva.dateof < cte.dateof
and datediff(day, preva.dateof, cte.dateof)
<= (case when datepart(dw, preva.dateof) >= 5
then 8 - datepart(dw, preva.dateof)
else 1
end)
union
select nexta.*
from attendance nexta
inner join
cte
on nexta.studentID = cte.studentID
and datediff(day, cte.dateof, nexta.dateof)
<= (case when datepart(dw, cte.dateof) >= 5
then 8 - datepart(dw, cte.dateof)
else 1
end)
and nexta.dateof > cte.dateof
order by studentId, dateof
```
[sqlfiddle](http://sqlfiddle.com/#!18/8ead7/1)
Upvotes: 0 <issue_comment>username_3: You can use this to find your absence ranges. Here I use a recursive `CTE` to number all days across a few years while also recording their weekday. Then another recursive `CTE` joins absence dates for the same student that fall one day after another, skipping weekends (see the `CASE WHEN` on the join clause). At the end, each absence spree is shown, filtered to N successive days.
```
SET DATEFIRST 1 -- Monday = 1, Sunday = 7
;WITH Days AS
(
-- Recursive anchor: hard-coded first date
SELECT
GeneratedDate = CONVERT(DATE, '2017-01-01')
UNION ALL
-- Recursive expression: all days until day X
SELECT
GeneratedDate = DATEADD(DAY, 1, D.GeneratedDate)
FROM
Days AS D
WHERE
DATEADD(DAY, 1, D.GeneratedDate) <= '2020-01-01'
),
NumberedDays AS
(
SELECT
GeneratedDate = D.GeneratedDate,
DayOfWeek = DATEPART(WEEKDAY, D.GeneratedDate),
DayNumber = ROW_NUMBER() OVER (ORDER BY D.GeneratedDate ASC)
FROM
Days AS D
),
AttendancesWithNumberedDays AS
(
SELECT
A.*,
N.*
FROM
Attendance AS A
INNER JOIN NumberedDays AS N ON A.Date = N.GeneratedDate
),
AbsenceSpree AS
(
-- Recursive anchor: absence day with no previous absence, skipping weekends
SELECT
StartingAbsenceDate = A.Date,
CurrentDateNumber = A.DayNumber,
CurrentDateDayOfWeek = A.DayOfWeek,
AbsenceDays = 1,
StudentID = A.StudentID
FROM
AttendancesWithNumberedDays AS A
WHERE
NOT EXISTS (
SELECT
'no previous absence date'
FROM
AttendancesWithNumberedDays AS X
WHERE
X.StudentID = A.StudentID AND
X.DayNumber = CASE A.DayOfWeek
WHEN 1 THEN A.DayNumber - 3 -- When monday then friday (-3)
WHEN 7 THEN A.DayNumber - 2 -- When sunday then friday (-2)
ELSE A.DayNumber - 1 END)
UNION ALL
-- Recursive expression: find the next absence day, skipping weekends
SELECT
StartingAbsenceDate = S.StartingAbsenceDate,
CurrentDateNumber = A.DayNumber,
CurrentDateDayOfWeek = A.DayOfWeek,
AbsenceDays = S.AbsenceDays + 1,
StudentID = A.StudentID
FROM
AbsenceSpree AS S
INNER JOIN AttendancesWithNumberedDays AS A ON
S.StudentID = A.StudentID AND
A.DayNumber = CASE S.CurrentDateDayOfWeek
WHEN 5 THEN S.CurrentDateNumber + 3 -- When friday then monday (+3)
WHEN 6 THEN S.CurrentDateNumber + 2 -- When saturday then monday (+2)
ELSE S.CurrentDateNumber + 1 END
)
SELECT
StudentID = A.StudentID,
StartingAbsenceDate = A.StartingAbsenceDate,
EndingAbsenceDate = MAX(N.GeneratedDate),
AbsenceDays = MAX(A.AbsenceDays)
FROM
AbsenceSpree AS A
INNER JOIN NumberedDays AS N ON A.CurrentDateNumber = N.DayNumber
GROUP BY
A.StudentID,
A.StartingAbsenceDate
HAVING
MAX(A.AbsenceDays) >= 3
OPTION
(MAXRECURSION 5000)
```
If you want to list the original Attendance table rows, you can replace the last select:
```
SELECT
StudentID = A.StudentID,
StartingAbsenceDate = A.StartingAbsenceDate,
EndingAbsenceDate = MAX(N.GeneratedDate),
AbsenceDays = MAX(A.AbsenceDays)
FROM
AbsenceSpree AS A
INNER JOIN NumberedDays AS N ON A.CurrentDateNumber = N.DayNumber
GROUP BY
A.StudentID,
A.StartingAbsenceDate
HAVING
MAX(A.AbsenceDays) >= 3
```
with this `CTE + SELECT`:
```
,
FilteredAbsenceSpree AS
(
SELECT
StudentID = A.StudentID,
StartingAbsenceDate = A.StartingAbsenceDate,
EndingAbsenceDate = MAX(N.GeneratedDate),
AbsenceDays = MAX(A.AbsenceDays)
FROM
AbsenceSpree AS A
INNER JOIN NumberedDays AS N ON A.CurrentDateNumber = N.DayNumber
GROUP BY
A.StudentID,
A.StartingAbsenceDate
HAVING
MAX(A.AbsenceDays) >= 3
)
SELECT
A.*
FROM
Attendance AS A
INNER JOIN FilteredAbsenceSpree AS F ON A.StudentID = F.StudentID
WHERE
A.Date BETWEEN F.StartingAbsenceDate AND F.EndingAbsenceDate
OPTION
(MAXRECURSION 5000)
```
Upvotes: 2 [selected_answer] |
2018/03/19 | 769 | 2,039 | <issue_start>username_0: Given the DataFrame:
```
import pandas as pd
df = pd.DataFrame([6, 4, 2, 4, 5], index=[2, 6, 3, 4, 5], columns=['A'])
```
Results in:
```
A
2 6
6 4
3 2
4 4
5 5
```
Now, I would like to sort by values of Column A AND the index.
e.g.
```
df.sort_values(by='A')
```
Returns
```
A
3 2
6 4
4 4
5 5
2 6
```
Whereas I would like
```
A
3 2
4 4
6 4
5 5
2 6
```
How can I get a sort on the column first and index second?<issue_comment>username_1: You can sort by index and then by column A using `kind='mergesort'`.
This works because [mergesort is stable](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sort_values.html).
```
res = df.sort_index().sort_values('A', kind='mergesort')
```
Result:
```
A
3 2
4 4
6 4
5 5
2 6
```
Upvotes: 4 <issue_comment>username_2: Using `lexsort` from NumPy is another way, and a little faster as well:
```
df.iloc[np.lexsort((df.index, df.A.values))] # Sort by A.values, then by index
```
Result:
```
A
3 2
4 4
6 4
5 5
2 6
```
Comparing with `timeit`:
```
%%timeit
df.iloc[np.lexsort((df.index, df.A.values))] # Sort by A.values, then by index
```
Result:
```
1000 loops, best of 3: 278 µs per loop
```
With reset index and set index again:
```
%%timeit
df.reset_index().sort_values(by=['A','index']).set_index('index')
```
Result:
```
100 loops, best of 3: 2.09 ms per loop
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: The other answers are great. I'll throw in one other option, which is to provide a name for the index first using [rename\_axis](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename_axis.html) and then reference it in `sort_values`. I have not tested the performance but expect the accepted answer to still be faster.
`df.rename_axis('idx').sort_values(by=['A', 'idx'])`
```
A
idx
3 2
4 4
6 4
5 5
2 6
```
You can clear the index name afterward if you want with `df.index.name = None`.
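Putting the two steps together on the example frame from the question (a quick sketch):

```python
import pandas as pd

df = pd.DataFrame([6, 4, 2, 4, 5], index=[2, 6, 3, 4, 5], columns=['A'])

# Name the index, sort by column A then by the index level, then clear the name again.
out = df.rename_axis('idx').sort_values(by=['A', 'idx'])
out.index.name = None

print(out)
```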
Upvotes: 2 |
2018/03/19 | 1,440 | 3,884 | <issue_start>username_0: I am having a hard time finding a way to filter this array by id value and generate a new one that preserves the coords alongside the filtered items.
Array example:
```
var herbs =[
{
"coords":[3300,2796],"items":[
{id: "dandelion",qty: 72},
{id: "sage",qty: 4},
{id: "valerian",qty: 1},
]},
{
"coords":[3300,2800],"items":[
{id: "dandelion",qty: 26},
{id: "valerian",qty: 7},
{id: "sage",qty: 2},
]},
{
"coords":[3300,2804],"items":[
{id: "dandelion",qty: 57},
{id: "sage",qty: 4},
{id: "wormwood",qty: 1},
]}]
```
I want to filter it by id, generating a new one with its coords.
Example:
Filtering by `id = dandelion`
```
var dandelion =[
{
"coords":[3300,2796],"items":[
{id: "dandelion",qty: 72},
]},
{
"coords":[3300,2800],"items":[
{id: "dandelion",qty: 26},
]},
{
"coords":[3300,2804],"items":[
{id: "dandelion",qty: 57},
]}]
```
Filtering by `id = sage`
```
var sage =[
{
"coords":[3300,2796],"items":[
{id: "sage",qty: 4},
]},
{
"coords":[3300,2800],"items":[
{id: "sage",qty: 2},
]},
{
"coords":[3300,2804],"items":[
{id: "sage",qty: 4},
]}]
```
Also, this array is pretty big: I have 467,000 coords. So I plan to filter it and save each filtered result to a new file.<issue_comment>username_1: You can use `reduce` for this to push to a new array, with `items` being the result of a `filter` call within the reduce. It only pushes to the new array when the search term is found somewhere in the `items`:
```js
var herbs =[
{
"coords":[3300,2796],"items":[
{id: "dandelion",qty: 72},
{id: "sage",qty: 4},
{id: "valerian",qty: 1},
]},
{
"coords":[3300,2800],"items":[
{id: "dandelion",qty: 26},
{id: "valerian",qty: 7},
{id: "sage",qty: 2},
]},
{
"coords":[3300,2804],"items":[
{id: "dandelion",qty: 57},
{id: "sage",qty: 4},
{id: "wormwood",qty: 1},
]}]
function filterByID(array, id) {
return array.reduce((a, c) => {
let items = c.items.filter(i => i.id === id )
if (items.length){
a.push({
coords: c.coords,
items: items
})
}
return a
}, [])
}
console.log(filterByID(herbs, "dandelion"))
console.log(filterByID(herbs, "sage"))
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: It is very straightforward. Use standard `Array` methods `map` and `filter` to get what you want.
```js
var herbs = [{
"coords": [3300, 2796],
"items": [{
id: "dandelion",
qty: 72
},
{
id: "sage",
qty: 4
},
{
id: "valerian",
qty: 1
},
]
},
{
"coords": [3300, 2800],
"items": [{
id: "dandelion",
qty: 26
},
{
id: "valerian",
qty: 7
},
{
id: "sage",
qty: 2
},
]
},
{
"coords": [3300, 2802],
"items": [{
id: "dandelion",
qty: 26
},
{
id: "valerian",
qty: 7
},
//no sage
]
},
{
"coords": [3300, 2804],
"items": [{
id: "dandelion",
qty: 57
},
{
id: "sage",
qty: 4
},
{
id: "wormwood",
qty: 1
},
]
}
];
function filterHerbs(id) {
return herbs.map((h) => {
return {
coords: h.coords,
items: h.items.filter((i) => i.id == id)
}
})//filter result in case of empty item arrays
.filter((h) => h.items.length);
}
var dand = filterHerbs('dandelion')
console.log(dand);
var sage = filterHerbs('sage');
console.log(sage);
```
Upvotes: 0 <issue_comment>username_3: ```
const dandelion = herbs.map(
({coords, items}) => items.filter(
({id}) => id === 'dandelion'
).map(ob => ({coords, ...ob})));
```
Upvotes: 0 |
2018/03/19 | 2,970 | 8,114 | <issue_start>username_0: I'm trying to learn R, and I decided to figure it out by building a thing to read the live election results that my state puts up on election night. Unfortunately, I've hit a snag in computing a `Margin` value to use for map fills. My state (WA) uses a Top 2 primary, which means that in some races there are two people of the same party in the November election. That's probably too much background, but anyway here's the coding problem:
I have a data frame that looks like this:
```
Dist Party Votes
1 (Prefers Democratic Party) 124151
1 (Prefers Republican Party) 101428
2 (Prefers Democratic Party) 122173
2 (Prefers Republican Party) 79518
3 (Prefers Republican Party) 124796
3 (Prefers Democratic Party) 78018
4 (Prefers Republican Party) 75307
4 (Prefers Republican Party) 77772
5 (Prefers Republican Party) 135470
5 (Prefers Democratic Party) 87772
6 (Prefers Democratic Party) 141265
6 (Prefers Republican Party) 83025
7 (Prefers Democratic Party) 203954
7 (Prefers Republican Party) 47921
8 (Prefers Republican Party) 125741
8 (Prefers Democratic Party) 73003
9 (Prefers Democratic Party) 118132
9 (Prefers Republican Party) 48662
10 (Prefers Democratic Party) 99279
10 (Prefers Republican Party) 82213
```
And I want to make it look like this:
```
Dist (Prefers Democratic Party) (Prefers Republican Party)
1 124151 101428
2 122173 79518
3 78018 124796
4 [NA or 0] 153079
5 87772 135470
6 141265 83025
7 203954 47921
8 73003 125741
9 118132 48662
10 99279 82213
```
`spread()` doesn't work because of the duplicates in `Dist = 4`. I've managed to put this together from some other questions on here, but I'm not satisfied with it, and I'm almost positive there's a better way:
```
library(tidyr)
library(dplyr)
CongressTidy %>%
group_by(Dist) %>%
mutate(GOPVotes = sum(ifelse(Party == "(Prefers Republican Party)", Votes, 0))) %>%
mutate(DemVotes = sum(ifelse(Party == "(Prefers Democratic Party)", Votes, 0)))
```
That returns this:
```
Dist Party Votes GOPVotes DemVotes
1 (Prefers Democratic Party) 124151 101428 124151
1 (Prefers Republican Party) 101428 101428 124151
2 (Prefers Democratic Party) 122173 79518 122173
2 (Prefers Republican Party) 79518 79518 122173
3 (Prefers Republican Party) 124796 124796 78018
3 (Prefers Democratic Party) 78018 124796 78018
4 (Prefers Republican Party) 75307 153079 0
4 (Prefers Republican Party) 77772 153079 0
5 (Prefers Republican Party) 135470 135470 87772
5 (Prefers Democratic Party) 87772 135470 87772
6 (Prefers Democratic Party) 141265 83025 141265
6 (Prefers Republican Party) 83025 83025 141265
7 (Prefers Democratic Party) 203954 47921 203954
7 (Prefers Republican Party) 47921 47921 203954
8 (Prefers Republican Party) 125741 125741 73003
8 (Prefers Democratic Party) 73003 125741 73003
9 (Prefers Democratic Party) 118132 48662 118132
9 (Prefers Republican Party) 48662 48662 118132
10 (Prefers Democratic Party) 99279 82213 99279
10 (Prefers Republican Party) 82213 82213 99279
```
That's fine, as far as it goes, and I can add selector column and select by that:
```
CongressMargins <- CongressTidy %>%
group_by(Dist) %>%
mutate(GOPVotes = sum(ifelse(Party == "(Prefers Republican Party)", Votes, 0))) %>%
mutate(DemVotes = sum(ifelse(Party == "(Prefers Democratic Party)", Votes, 0))) %>%
mutate(selector = c(1,2)) %>%
subset(selector == 1, select = c(Dist, GOPVotes, DemVotes))
```
Which gives me what I want, and I can calculate the Margin just fine from there:
```
Dist GOPVotes DemVotes
1 101428 124151
2 79518 122173
3 124796 78018
4 153079 0
5 135470 87772
6 83025 141265
7 47921 203954
8 125741 73003
9 48662 118132
10 82213 99279
```
But if there were 2 unopposed races, that would get screwed up, because it's based on vector recycling. And it's just ugly. And **there's gotta be a better way.** Any ideas?<issue_comment>username_1: We can calculate the group sum first and then spread. If you want the missing cell to be 0, use `spread(Party, Votes, fill = 0)`.
```
library(tidyverse)
dat2 <- dat %>%
group_by(Dist, Party) %>%
summarise(Votes = sum(Votes)) %>%
spread(Party, Votes) %>%
ungroup()
dat2
# # A tibble: 10 x 3
# Dist `(Prefers Democratic Party)` `(Prefers Republican Party)`
#
# 1 1 124151 101428
# 2 2 122173 79518
# 3 3 78018 124796
# 4 4 NA 153079
# 5 5 87772 135470
# 6 6 141265 83025
# 7 7 203954 47921
# 8 8 73003 125741
# 9 9 118132 48662
# 10 10 99279 82213
```
**DATA**
```
dat <- read.table(text = "Dist Party Votes
1 '(Prefers Democratic Party)' 124151
1 '(Prefers Republican Party)' 101428
2 '(Prefers Democratic Party)' 122173
2 '(Prefers Republican Party)' 79518
3 '(Prefers Republican Party)' 124796
3 '(Prefers Democratic Party)' 78018
4 '(Prefers Republican Party)' 75307
4 '(Prefers Republican Party)' 77772
5 '(Prefers Republican Party)' 135470
5 '(Prefers Democratic Party)' 87772
6 '(Prefers Democratic Party)' 141265
6 '(Prefers Republican Party)' 83025
7 '(Prefers Democratic Party)' 203954
7 '(Prefers Republican Party)' 47921
8 '(Prefers Republican Party)' 125741
8 '(Prefers Democratic Party)' 73003
9 '(Prefers Democratic Party)' 118132
9 '(Prefers Republican Party)' 48662
10 '(Prefers Democratic Party)' 99279
10 '(Prefers Republican Party)' 82213",
header = TRUE, stringsAsFactors = FALSE)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use `dcast` from the `reshape2` package, specifying the aggregation function as `sum`:
```
library(reshape2)
dcast(dat,Dist~Party,sum,value.var = "Votes")
Dist (Prefers Democratic Party) (Prefers Republican Party)
1 1 124151 101428
2 2 122173 79518
3 3 78018 124796
4 4 0 153079
5 5 87772 135470
6 6 141265 83025
7 7 203954 47921
8 8 73003 125741
9 9 118132 48662
10 10 99279 82213
```
Using base R:
```
xtabs(Votes~Dist+Party,dat)
Party
Dist (Prefers Democratic Party) (Prefers Republican Party)
1 124151 101428
2 122173 79518
3 78018 124796
4 0 153079
5 87772 135470
6 141265 83025
7 203954 47921
8 73003 125741
9 118132 48662
10 99279 82213
```
The above output is of class `table` you can make it a dataframe by:
`as.data.frame.matrix(xtabs(Votes~Dist+Party,dat))`. Now that this is a dataframe, you can subset it the way you want.
Upvotes: 1 |
2018/03/19 | 1,328 | 5,067 | <issue_start>username_0: I'm using Redux-Saga in a React Native app. When I get the authentication token back from the server, how do I save it to local storage?
I tried using
`await AsyncStorage.setItem("token", token);`
but React Native complained and said `await` was a reserved word.
Am I misunderstanding something? Is the saga code not where I should be doing this?
Here is my code
```
function* loginFlow(action) {
try {
let username = action.username;
let password = action.password;
const response = yield call(getUser, username, password);
let token = response.headers.get("access-token");
const result = yield response.json();
if (token) {
console.log("success: ", token);
yield put({ type: LOGIN_SUCCESS, result });
} else {
if (result.error) {
yield put({ type: LOGIN_FAILURE, error: result.error });
}
}
} catch (e) {
yield put({ type: LOGIN_FAILURE, error: e.message });
console.log("error", e);
}
}
```
**Edit:**
Here is the getUser function:
```
const getUser = (username, password) => {
return fetch(`${apiURL}/${apiVersion}/${apiType}/${apiEndpoint_auth}`, {
method: "POST",
headers: {
Accept: "application/json",
"Content-Type": "application/json"
},
body: JSON.stringify({
email: username,
password: password
})
});
};
```<issue_comment>username_1: >
> Remember, the await keyword is only valid inside async functions. If you use it outside of an async function's body, you will get a SyntaxError.
>
>
>
Reference: <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function>
The function generator needs to have ***async*** before *function*.
Upvotes: 1 <issue_comment>username_2: Since each method of the AsyncStorage API returns a Promise object, you could use redux-saga `call(fn, ...args)` function.
From the documentation of `call(fn, ...args)`, you could use it on a normal function that returns a Promise as a result.
>
> Creates an Effect description that instructs the middleware to call the function fn with args as arguments.
>
>
> **fn**: Function - A Generator function, or normal function which either returns a Promise as result, or any other value.
>
>
> **args**: Array - An array of values to be passed as arguments to fn
>
>
>
In this case, we could use `yield call(fn, ...args)` this way:
```js
yield call(AsyncStorage.setItem, "token", token)
```
This would have the same effect as `await`, where it would block the execution until the Promise is resolved / rejected.
Full code snippet with minor comments:
```js
function* loginFlow(action) {
try {
let username = action.username;
let password = action.password;
const response = yield call(getUser, username, password);
let token = response.headers.get("access-token");
const result = yield response.json();
if (token) {
console.log("success: ", token);
// Wait / block until the Promise is resolved
yield call(AsyncStorage.setItem, "token", token);
// Will be only executed once the previous line have been resolved
yield put({ type: LOGIN_SUCCESS, result });
} else {
if (result.error) {
yield put({ type: LOGIN_FAILURE, error: result.error });
}
}
} catch (e) {
yield put({ type: LOGIN_FAILURE, error: e.message });
console.log("error", e);
}
}
```
Reference:
* <https://redux-saga.js.org/docs/api/#callfn-args>
Upvotes: 2 <issue_comment>username_3: **Pass your auth token in below function.**
```
saveToken = async (token) => {
try {
await AsyncStorage.setItem("token", "" + token);
} catch (error) {
console.log('AsyncStorage error: ' + error.message);
}
}
```
Upvotes: 1 <issue_comment>username_4: **This is how i managed to store token inside redux-saga generator.**
```
function* loginFlow(email, password) {
try {
// get your token
const token = yield call(loginApi, email, password);
// store token to local storage
yield call(storeToken, token);
yield put({ type: LOGIN_SUCCESS });
} catch (error) {
yield put({ type: LOGIN_ERROR, error });
}
}
```
---
```
function loginApi(email, password) {
return fetch('https://yourApiUrl', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ email, password }),
})
.then(response => response.json())
.then(json => json)
.catch((error) => {
throw error;
});
}
```
---
```
async function storeToken(token) {
try {
await AsyncStorage.setItem('token', token);
} catch (error) {
console.log('AsyncStorage error during token store:', error);
}
}
```
---
**Note:** Store your token before you dispatch your LOGIN\_SUCCESS action, so that your token is available in your React component when the re-render triggered by the LOGIN\_SUCCESS action occurs.
Upvotes: 3 |
2018/03/19 | 1,033 | 3,886 | <issue_start>username_0: Researching this has been a little difficult because I'm not precisely sure how the question should be worded. Here is some pseudo code summarizing my goal.
```
public class TestService {
Object someBigMehtod(String A, Integer I) {
{ //block A
//do some long database read
}
{ //block B
//do another long database read at the same time as block B
}
{ //block C
//get in this block when both A & B are complete
//and access result returned or pushed from A & B
//to build up some data object to push out to a class that called
//this service or has subscribed to it
return null;
}
}
}
```
I am thinking I can use RxJava or Spring Integration to accomplish this or maybe just instantiating multiple threads and running them. Just the layout of it though makes me think Rx has the solution because I am thinking data is pushed to block C. Thanks in advance for any advice you might have.<issue_comment>username_1: You can do this with [`CompletableFuture`](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html). In particular, its [`thenCombine`](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html#thenCombine-java.util.concurrent.CompletionStage-java.util.function.BiFunction-) method, which waits for two tasks to complete.
```
CompletableFuture<A> fa = CompletableFuture.supplyAsync(() -> {
    // do some long database read
    return a;
});
CompletableFuture<B> fb = CompletableFuture.supplyAsync(() -> {
    // do another long database read
    return b;
});
CompletableFuture<C> fc = fa.thenCombine(fb, (a, b) -> {
    // use a and b to build object c
    return c;
});
return fc.join();
```
These methods will all execute on the [`ForkJoinPool.commonPool()`](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html#commonPool--). You can control where they run if you pass in optional [`Executor`s](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executor.html).
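For example, here is a minimal sketch using the two-argument `supplyAsync(Supplier, Executor)` overload so both tasks run on a pool you control (the class and method names here are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ExecutorDemo {
    static int combineOnPool() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // Both suppliers run on our pool instead of ForkJoinPool.commonPool().
            CompletableFuture<Integer> fa = CompletableFuture.supplyAsync(() -> 1, pool);
            CompletableFuture<Integer> fb = CompletableFuture.supplyAsync(() -> 2, pool);
            return fa.thenCombine(fb, Integer::sum).join();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(combineOnPool()); // prints 3
    }
}
```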
Upvotes: 2 <issue_comment>username_2: You can use the **Zip** operator from **RxJava**. This operator can run multiple processes in parallel and then zip the results.
Some documentation: <http://reactivex.io/documentation/operators/zip.html>
And here is an example of how it works: <https://github.com/politrons/reactive/blob/master/src/test/java/rx/observables/combining/ObservableZip.java>
Upvotes: 0 <issue_comment>username_3: For now I just went with John's suggestion. This is getting the desired effect. I mix in RxJava1 and RxJava2 syntax a bit which is probably poor practice. Looks like I have some reading cut out for me on java.util.concurrent package . Time permitting I would like to do the zip solution.
```
@Test
public void myBigFunction(){
System.out.println("starting ");
CompletableFuture<List<String>> fa = CompletableFuture.supplyAsync(() ->
{ //block A
//do some long database read
try {
Thread.sleep(3000);
System.out.println("part A");
return asList(new String[] {"abc","def"});
} catch (InterruptedException e) {
e.printStackTrace();
}
return null;
}
);
CompletableFuture<List<Integer>> fb = CompletableFuture.supplyAsync(() ->
{ //block B
//do some long database read
try {
Thread.sleep(6000);
System.out.println("Part B");
return asList(new Integer[] {123,456});
} catch (InterruptedException e) {
e.printStackTrace();
}
return null;
}
);
CompletableFuture<List<String>> fc = fa.thenCombine(fb, (a, b) -> {
//block C
//get in this block when both A & B are complete
int sum = b.stream().mapToInt(i -> i.intValue()).sum();
return a.stream().map(new Function<String, String>() {
@Override
public String apply(String s) {
return s+sum;
}
}).collect(Collectors.toList());
});
System.out.println(fc.join());
}
```
It only takes 6 seconds to run.
Upvotes: 0 |
2018/03/19 | 1,088 | 3,574 | <issue_start>username_0: I am a Common Lisp beginner, but not so in C++.
There's a simple C++ program that I am trying to mirror in CL (see [Pollard's Rho algorithm variant example in C++](https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm#Variants)
).
The C++ program runs without errors. One requirement is that all the outputs from both the programs must match.
C++ version
-----------
```
int gcd(int a, int b) {
int remainder;
while (b != 0) {
remainder = a % b;
a = b;
b = remainder;
}
return a;
}
int prime () {
int n = 10403, x_fixed = 2, cycle_size = 2, x = 2, factor = 1;
while (factor == 1) {
for (int count=1;count <= cycle_size && factor <= 1;count++) {
x = (x*x+1)%n;
factor = gcd(x - x_fixed, n);
}
cycle_size *= 2;
x_fixed = x;
}
cout << "\nThe factor is " << factor;
return 0;
}
```
Common Lisp version
-------------------
Here is what I've come up with. Debugging is giving me nightmares; I have tried many times and stepped through the entire code, yet I still have no idea where I have gone wrong :(
```
(defun prime ()
(setq n 10403)
(setq x_fixed 2)
(setq cycle_size 2)
(setq x 2)
(setq factor 1)
(setq count 1)
(while_loop))
(defun while_loop ()
(print
(cond ((= factor 1)
(for_loop)
(setf cycle_size (* cycle_size 2))
(setf x_fixed x)
(setf count 1)
(while_loop))
((/= factor 1) "The factor is : ")))
(print factor))
(defun for_loop ()
(cond ((and (<= count cycle_size) (<= factor 1))
(setf x (rem (* x (+ x 1)) n))
(setf factor (gcd (- x x_fixed) n)))
((or (<= count cycle_size) (<= factor 1))
(setf count (+ count 1)) (for_loop))))
```
Notes
-----
* I named all variables and constants the same as in the C++ version.
* I took half a day to decide whether or not to ask this question
* If my Common Lisp code looks funny or silly you free to not-help<issue_comment>username_1: You really need to do more reading on Common Lisp. It has all the basic imperative constructs of C++, so there's no need to go through the contortions you have just to translate a simple algorithm. See for example [Guy Steele's classic, available for free](http://www.cs.cmu.edu/Groups/AI/html/cltl/cltl2.html).
Here is a more reasonable and idiomatic trans-coding:
```
(defun prime-factor (n &optional (x 2))
(let ((x-fixed x)
(cycle-size 2)
(factor 1))
(loop while (= factor 1)
do (loop for count from 1 to cycle-size
while (<= factor 1)
do (setq x (rem (1+ (* x x)) n)
factor (gcd (- x x-fixed) n)))
(setq cycle-size (* 2 cycle-size)
x-fixed x)))
factor))
(defun test ()
(prime-factor 10403))
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to define local variables.
A basic translation of the C code would look similar to this:
```
(defun my-gcd (a b)
(let ((remainder 0))
(loop while (/= b 0) do
(setf remainder (mod a b)
a b
b remainder))
a))
```
or with type declarations:
```
(defun my-gcd (a b)
(declare (integer a b))
(let ((remainder 0))
(declare (integer remainder))
(loop while (/= b 0) do
(setf remainder (mod a b)
a b
b remainder))
a))
```
The `integer` data type in Common Lisp is unbounded - unlike an `int` in C++.
Upvotes: 3 |
2018/03/19 | 1,938 | 14,879 | <issue_start>username_0: I have implemented the logic for this problem but it succeeds only for smaller strings and time limit exceeds as well as the memory usage is very high for larger string inputs ( as below ). My intent is to implement the problem in the same approach as I did below but with possible enhancements. Could someone tell me where am I doing wrong?
Input :
```
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabcde"
```
Output :
```
5
```
Code :
```
class Solution {
    public static int lengthOfLongestSubstring(String s) {
        int len = 0;
        char[] cstr = null;
        LinkedHashSet<Character> lhs = new LinkedHashSet<Character>();
        HashMap<String, Integer> hm = new HashMap<String, Integer>();
        String newstr = "";
        // Generating all possible substrings using for loops as below and storing them in a hashmap.
        // (The loop bodies below are reconstructed; the original post was mangled by HTML '<' escaping.)
        for (int i = 0; i < s.length(); i++) {
            for (int j = i; j < s.length(); j++) {
                hm.put(s.substring(i, j + 1), 1);
            }
        }
        // For each substring, push its characters through a LinkedHashSet to drop duplicates;
        // if nothing was dropped (all characters unique) and it beats the current maximum, keep it.
        for (Map.Entry<String, Integer> m : hm.entrySet()) {
            //System.out.println( m.getKey()+"-"+m.getValue() );
            cstr = m.getKey().toCharArray();
            //System.out.println(cstr);
            for (int i = 0; i < cstr.length; i++) {
                lhs.add(cstr[i]);
            }
            for (char c : lhs) {
                newstr += c;
            }
            if (newstr.length() == cstr.length && newstr.length() > len)
                len = newstr.length();
            cstr = null;
            lhs.clear();
            newstr = "";
        }
        return len;
    }
public static void main(String[] args) {
System.out.println(lengthOfLongestSubstring("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabcde"));
}
}
```<issue_comment>username_1: This is my naive brute-force solution using plain indices, avoiding the creation of additional strings.
```
public static String findLongestSubstringWithDistinctChars(String str)
{
Objects.requireNonNull(str);
int maxFrameBeginIndex = 0;
int maxFrameLength = 0;
for (int frameBeginIndex = 0; frameBeginIndex < str.length(); frameBeginIndex++)
{
for (int frameLength = 1; frameBeginIndex + frameLength <= str.length(); frameLength++)
{
// Check that last char in frame is not contained in the rest of the frame.
char lastChar = str.charAt(frameBeginIndex + frameLength - 1);
int frameEndIndexExclusive = frameBeginIndex + frameLength - 1;
if (containsChar(str, frameBeginIndex, frameEndIndexExclusive, lastChar))
{
break;
}
if (frameLength > maxFrameLength)
{
maxFrameBeginIndex = frameBeginIndex;
maxFrameLength = frameLength;
}
}
}
return str.substring(maxFrameBeginIndex, maxFrameBeginIndex + maxFrameLength);
}
/**
 * Checks whether str's substring of range [beginIndex, endIndexExclusive[
* contains the character c.
*
* @param str
* the string
* @param beginIndex
* the begin index
* @param endIndexExclusive
* the end index, exclusive
* @param c
* the character to check
* @return whether c is contained in the substring
*/
private static boolean containsChar(String str, int beginIndex, int endIndexExclusive, char c)
{
for (int i = beginIndex; i < endIndexExclusive; i++)
{
if (str.charAt(i) == c)
{
return true;
}
}
return false;
}
```
Your sample input needs ~1sec on coderpad.io (interesting benchmarking ^_^).
Upvotes: 0 <issue_comment>username_2: I might have missed something in the OP, but a simple solution might just use `substring`, `indexOf` etc. on the string itself. My solution looks like:
```
public class LengthOfLongestSubstringWithUniqueCharacters {
public static void main(String[] args) {
String input = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabcde";
int maxLength = -1;
int startPos = -1;
for (int i = 0; i < input.length(); i++) {
int len = getSubstringWithUniqueCharacters(input, i);
if (len > maxLength) {
maxLength = len;
startPos = i;
}
}
System.out.println("String: " + input.substring(startPos, startPos + maxLength));
System.out.println("Max: " + maxLength);
}
public static int getSubstringWithUniqueCharacters(String s, int pos) {
int i = 0;
for (; i < s.length() - pos; i++) {
if (s.substring(pos, pos + i).indexOf(s.charAt(pos + i)) != -1) {
return i;
}
}
return i;
}
}
/**
Output:
String: abcde
Max: 5
*/
```
Upvotes: 0 <issue_comment>username_3: I'll give a solution in Python, which has good (linear) time complexity. You can translate it to Java if need be.
```
def fun(s):
last_index_of = [-1]*26
last_index_of[ord(s[0])-ord('a')]=0
length, max = 1,1
for i in range(1,len(s)):
index = ord(s[i])-ord('a')
if last_index_of[index]==-1 or i-last_index_of[index]>length:
length = length+1
if length>max:
max=length
else:
length = i - last_index_of[index]
last_index_of[index]=i
return max
```
Some results:
```
fun("abccab")
3
fun("abacc")
3
fun("abcdeabcdeff")
6
fun("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabcde")
5
```
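The `[-1]*26` table above assumes the input is limited to lowercase `a`–`z`. As a hedged sketch (the function name is mine, not from the answer), the same linear sliding-window idea can be written with a dict so it handles arbitrary characters:

```python
def longest_unique_substring(s):
    """Length of the longest substring of s without repeating characters.

    Dict-based variant of the array solution above: last_seen maps each
    character to the index of its most recent occurrence, and start marks
    the left edge of the current window of unique characters.
    """
    last_seen = {}
    start = 0
    best = 0
    for i, c in enumerate(s):
        # If c already occurs inside the current window, slide the window
        # just past its previous occurrence.
        if c in last_seen and last_seen[c] >= start:
            start = last_seen[c] + 1
        last_seen[c] = i
        best = max(best, i - start + 1)
    return best
```

It produces the same results as `fun` on the examples above, and also copes with spaces, digits and punctuation.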
Upvotes: 2 |
2018/03/19 | 387 | 1,260 | <issue_start>username_0: I would like to check them in one line of code. Is it possible? Otherwise, the code I have currently is:
```
while True:
player_number = int(input('Determine first player Random(0) or AI(1):'))
player_number2 = int(input('Determine second player Random (0) or AI (1):'))
if player_number, player_number2 in range(0,2): # shows error
break
else:
print ('Out of range. Please choose a player')
```<issue_comment>username_1: I would just use `and`:
```
if player_number in (0, 1) and player_number2 in (0, 1):
```
You can also use sets but it's a little less clear:
```
if {player_number, player_number2} <= {0, 1}:
```
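A small self-contained sketch (helper names are mine, not from the answer) showing that the two forms agree:

```python
def valid_and(a, b):
    # Explicit form: each value must be 0 or 1.
    return a in (0, 1) and b in (0, 1)

def valid_set(a, b):
    # Set form: {a, b} must be a subset of {0, 1}.
    return {a, b} <= {0, 1}

# Both forms accept and reject the same inputs.
for a in range(-2, 4):
    for b in range(-2, 4):
        assert valid_and(a, b) == valid_set(a, b)
```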
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use an `all` statement:
```
if all(i in range(0, 2) for i in (player_number, player_number2)):
```
The benefit of this method is it is easily extendable to multiple players.
An even better option would be to use dictionaries and compare to a set:
```
while True:
players = {k: int(input('Determine player {0} Random(0) or AI(1):'.format(k)))\
for k in range(0, 2)}
if set(players.values()) <= {0, 1}:
break
else:
print('Out of range. Please choose a player')
```
Upvotes: 1 |
2018/03/19 | 751 | 2,614 | <issue_start>username_0: In Access VBA, I am trying to print the values of a parsed Parameter array but keep getting a Runtime Error 13 - Type Mismatch. The values in the array are mixed types i.e. Double, String, Long.
Code as follows:
```
Function MyArray() as Variant
Dim MyParams(2) as Variant
MyParams(0) = "3459"
MyParams(1) = "3345"
MyParams(2) = "34.666"
MyArray = MyParams
End Function
Sub PrintArray(ParamArray Params() As Variant)
Dim p_param as Variant
For Each p_param in Params
Debug.Print params ' <- Error occurs here
Next p_param
End Sub
```
I tried converting to string etc. but it still won't work.
Any suggestions?<issue_comment>username_1: In order to iterate the `ParamArray` values, you need to understand what arguments you're receiving.
Say you have this:
```
Public Sub DoSomething(ParamArray values() As Variant)
```
The cool thing about `ParamArray` is that it allows the calling code to do this:
```
DoSomething 1, 2, "test"
```
If you're in `DoSomething`, what you receive in `values()` is 3 items: the numbers `1` & `2`, and a string containing the word `test`.
However what's happening in your case, is that you're doing something like this:
```
DoSomething Array(1, 2, "test")
```
And when you're in `DoSomething`, what you receive in `values()` is 1 item: an array containing the numbers `1` & `2`, and a string containing the word `test`.
The bad news is that you can't control how the calling code will be invoking that function.
The good news is that you can be flexible about it:
```
Public Sub DoSomething(ParamArray values() As Variant)
    If ArrayLength(values) = 1 Then
If IsArray(values(0)) Then
PrintArray values(0)
End If
Else
PrintArray values
End If
End Sub
```
```
Public Function ArrayLength(ByRef target As Variant) As Long
Debug.Assert IsArray(target)
ArrayLength = UBound(target) - LBound(target) + 1
End Function
```
Now either way can work:
```
DoSomething 1, 2, "test"
DoSomething Array(1, 2, "test")
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If an element of the passed in `Params()` array is an array itself then treat it as such, otherwise just print...
```
Private Sub PrintArray(ParamArray Params() As Variant)
Dim p_param As Variant
Dim i As Long
For Each p_param In Params
If IsArray(p_param) Then
For i = LBound(p_param) To UBound(p_param)
Debug.Print p_param(i)
Next
Else
Debug.Print p_param
End If
Next p_param
End Sub
```
Upvotes: 0 |
2018/03/19 | 831 | 3,137 | <issue_start>username_0: I have been trying to figure out how to find a Gridlayout in a fragment using findViewById. I've looked everywhere and am surprised to not have found this instructed by anyone in a similar situation of mine. I have used a tab layout in android studios, the tabs are different fragments, and within them are Gridlayouts which have cardviews that open new activities. I have provided the code below to show what I am working with:
```
public class PCpage extends Fragment {
GridLayout pcGrid;
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View mainPc = inflater.inflate(R.layout.pc_main, container, false);
return mainPc;
pcGrid = (GridLayout) mainPc.findViewById(R.id.pcGrid);
setSingleEvent(pcGrid);
}
    private void setSingleEvent(GridLayout pcGrid){
        for(int i = 0; i < pcGrid.getChildCount(); i++){
            // ... sets the click listener on each CardView (the loop body was cut off in the post)
        }
    }
}
```
The following line doesn't seem to want to work out and is returning an error "unreachable statement":
```
pcGrid = (GridLayout) mainPc.findViewById(R.id.pcGrid);
```
Any feedback on why this is happening or how to fix this and make it work would be greatly appreciated!
2018/03/19 | 290 | 1,069 | <issue_start>username_0: I don't know how to customize a calendar view to be the same as the picture for my application. Here is my example picture.
[](https://i.stack.imgur.com/sjRyr.png)<issue_comment>username_1: Yes, you can define a custom calendar view like that. Here is my suggestion:
* ListView/RecyclerView
* Calendar class
The easy way is to make the user pick the month/year first, so your list only has to display days.
Upvotes: 0 <issue_comment>username_2: In the past, I had the same problem so I have created a Kotlin library which solves this problem. You can create a calendar with your custom UI and use selection features with few lines of code. You can [find the repository here](https://github.com/username_2/SingleRowCalendar) and [here is an article about my sample app](https://medium.com/@misosvec01/single-row-horizontal-calendar-has-never-been-easier-6ad989fafee8).
Additionally, [there is a similar library for Java here](https://github.com/TechIsFun/OneLineCalendar).
Upvotes: 3 [selected_answer] |
2018/03/19 | 2,000 | 7,387 | <issue_start>username_0: ```
let message = "heY, WHAt are you dOING?";
let count_changes = 0;
let isLetter = (letter) => {
if (('a'<=letter && letter >='z') || ('A'<=letter && letter >='Z')) {
return letter;
} else {
return -1;
}
}
for(let i = 0; i <= message.length; i++) {
if (isLetter(i) && message[i].toLowerCase()) {
message[i].toUpperCase();
count_changes++;
console.log(message[i].toLowerCase());
}
else if (isLetter(i) && message[i].toUpperCase()) {
message[i].toLowerCase();
count_changes++;
}
else {
console.error('Bad stirng');
}
}
```
Hello, I want to use the function `isLetter` to check every character of the string `message`. In the for loop, the first if statement should check whether the character is a letter and is lowercase; when it gets changed to uppercase I increment `count_changes++`. Likewise, the second if statement should check whether the character is a letter and is uppercase; when it gets changed to lowercase I increment `count_changes++`, so that `count_changes` is my final result.
Thank you<issue_comment>username_1: **TL;DR:**
```js
let message = "heY, WHAt are you dOING?";
let newMessage = "";
let count_changes = 0;
let isLowerCaseLetter = (letter) => 'a' <= letter && letter <= 'z';
let isUpperCaseLetter = (letter) => 'A' <= letter && letter <= 'Z';
/* Iterate over every character of the message. */
for (let i = 0; i < message.length; i++) {
/* Cache the character at the current index. */
let character = message[i];
/* Check whether the character is a lowercase letter. */
if (isLowerCaseLetter(character)) {
newMessage += character.toUpperCase();
count_changes++;
}
/* Check whether the character is an uppercase letter. */
else if (isUpperCaseLetter(character)) {
newMessage += character.toLowerCase();
count_changes++;
}
/* Otherwise, just add the current character to the new message. */
else newMessage += character;
}
console.log("New Message: ", newMessage);
console.log("Changes: ", count_changes);
```
---
**Your Mistakes:**
1. The way you're checking if a character is a letter is wrong, due to `>='z'`. It should be `<='z'`. The same goes for the check against `'Z'`.
2. Functions that have a Boolean connotation had better return `true` or `false` instead of `-1` or the character itself as you do.
3. Inside `isLetter` you pass the index instead of the character itself. The function call should be `isLetter(message[i])` instead of `isLetter(i)`.
4. The very message you are testing will be deemed a *'bad string'*, because of the comma and the spaces between the words.
5. In your loop, the condition should be `i < message.length`, otherwise, every message will be deemed a *'bad string'*, because you'll exceed all characters and get an `undefined` value.
6. The methods `toLowerCase` and `toUpperCase` do not affect the original string but create a new one instead. If you want to assemble the resulting characters together, you have to initialise a `newMessage` string and concatenate it the processed character each loop.
**Suggested solution:**
1. Instead of one `isLetter` function create one checking if a character is a lowercase letter and one checking if it's an uppercase letter. That way you combine your checks and your `if` clause will be much simpler and more readable.
2. Ditch the `isLetter` check and the good string / bad string thing completely, so as not to have problems with in-between characters such as spaces and punctuation.
3. Attempt to minimise function calls, as for large strings, they will slow down your code a lot. In the code below, only **2 function calls per loop** are used, compared to the accepted answer, which makes:
* 3 function calls per loop plus,
* 3 function calls when a character is letter *(the majority of the time)*
* 3 one-time function calls for `from`, `map` and `join`, which will matter for large strings.
**Speedtest:**
In a series of 5 tests using a massive string (2,825,856 chars long) the answers stack up as follows:
* this answer *([jsFiddle](https://jsfiddle.net/vqbkf9vb/34/) used)*:
`[1141.91ms, 1150.93ms, 1093.75ms, 1048.50ms, 1183.03ms]`
* accepted answer *([jsFiddle](https://jsfiddle.net/vqbkf9vb/35/) used)*:
`[2211.30ms, 2985.22ms, 2136.73ms, 2279.26ms, 2482.34ms]`
Upvotes: 1 <issue_comment>username_2: From what I understand, you want to return a string where all uppercase characters are replaced with lowercase characters and all lowercase characters are replaced with uppercase characters. Additionally, you want to increment countChanges once for every character changed.
This code should do what you want:
```
let message = "heY, WHAt are you dOING?";
let countChanges = 0;
let isLetter = c => c.toLowerCase() !== c.toUpperCase();
let isLowerCase = c => c.toLowerCase() === c;
let flippedMessage = Array.from(message).map((c)=>{
if(!isLetter(c)){
return c;
}
countChanges++;
// return uppercase character if c is a lowercase char
if(isLowerCase(c)){
return c.toUpperCase();
}
// Here, we know c is an uppercase character, so return the lowercase
return c.toLowerCase();
}).join('');
// flippedMessage is "HEy, whaT ARE YOU Doing?"
// countChanges is 18
```
Upvotes: 1 [selected_answer]<issue_comment>username_3: By default, javascript's comparison of strings is case sensitive, therefore you can check a character's case by comparing it to either an upper or lower case converted value.
If it is the same, then the case is what you checked against, if not, the case is different.
`"TRY" == "TrY"` would return `false`, whereas `"TRY" == "TRY"` would return `true`;
So, use a variable to indicate the case of the last letter checked, then compare the next letter to the opposite case. If it matches, the case has changed, otherwise it is still the same case.
The `isLetter` function checks a value to be a single character, and using a regex test ensures that it is a letter - no punctuation or digits etc.
Your loop would always produce an error because you were iterating outside the length of the message string - arrays are 0 based.
```
let message = "heY, WHAt are you dOING?";
let count_changes = 0;
let lowerCase = message[0] == message[0].toLowerCase();
let messageLength = message.length;
function isLetter (val) {
// Check val is a letter of the alphabet a - z ignoring case.
return val.length == 1 && val.match(/[a-z]/i);
}
for (let i = 0; i < messageLength; i++) {
var char = message[i];
if (isLetter(char)) {
if(lowerCase) {
// Check to see if the next letter is upper case when the last one was lower case.
if(char == char.toUpperCase()) {
lowerCase = false;
count_changes++;
}
}
else {
// Check to see if the next letter is lower case when the last one was upper case.
if(char == char.toLowerCase()) {
lowerCase = true;
count_changes++;
}
}
}
else {
// Found a non-letter character.
console.error('Not a letter.');
}
}
console.log("Number of times the case changed: " + count_changes);
```
Upvotes: 1 |
2018/03/19 | 612 | 2,081 | <issue_start>username_0: Every time I change any of the settings in VSCode, the command palette comes up with "null password (Press 'Enter' to confirm or 'Escape' to cancel)". Hitting enter is enough to make things work fine, however, it is still annoying. I am not sure of the reason behind this and was wondering if anybody came across something similar.
Attached is a picture of the command palette (just in case...).[](https://i.stack.imgur.com/rS1Tm.png)<issue_comment>username_1: I just ran into the same issue. It looks like the culprit is [SQLTools](https://github.com/mtxr/vscode-sqltools).
Here's the line of code that produces the dialog:
<https://github.com/mtxr/vscode-sqltools/blob/0865cf0/src/sqltools.ts#L419-L424>
This was super hard to track down, since it took me a while to figure out that the "Press 'Enter' to confirm" part is built-in to all VSCode prompt dialogs, and that string is not provided by the extension.
Putting this in my `settings.json` made the extension stop prompting me:
```json
"sqltools.autoConnectTo": "blah",
```
(The default value is `null`. Putting a string somehow causes it to generate an error in the console but whatever, it solves the problem.)
I hope this works for you! You can also:
* Disable or uninstall the extension if you don't need it at all
* File an issue to let the author know: <https://github.com/mtxr/vscode-sqltools/issues>
Upvotes: 4 [selected_answer]<issue_comment>username_2: Like [username_1](https://stackoverflow.com/users/782045/username_1) said, this is likely caused by the [SQLTools](https://marketplace.visualstudio.com/items?itemName=mtxr.sqltools) extension.
To fix, do this:
1. Press `Ctrl`+`,` to open `settings.json`
2. Add line `"sqltools.autoConnectTo": "Prevent Dialog Box",`
[](https://i.stack.imgur.com/vCCdy.png)
3. Save and open the command palette (`Ctrl`+`Shift`+`P`) to reload with the command :
```
>Reload Window
```
Done!
Upvotes: 2 |
2018/03/19 | 1,195 | 4,271 | <issue_start>username_0: I currently have a list hard coded into my python code. As it keeps expanding, I wanted to make it more dynamic by reading the list from a file. I have read through many articles about how to do this, but in practice I can't get this working. So firstly, here is an example of the existing hardcoded list:
```
serverlist = []
serverlist.append(("abc.com", "abc"))
serverlist.append(("def.com", "def"))
serverlist.append(("hji.com", "hji"))
```
When I enter the command 'print serverlist' the output is shown below and my list works perfectly when I access it:
```
[('abc.com', 'abc'), ('def.com', 'def'), ('hji.com', 'hji')]
```
Now I've replaced the above code with the following:
```
serverlist = []
with open('/server.list', 'r') as f:
serverlist = [line.rstrip('\n') for line in f]
```
With the contents of server.list being:
```
'abc.com', 'abc'
'def.com', 'def'
'hji.com', 'hji'
```
When I now enter the command `print serverlist`, the output is shown below:
```
["'abc.com', 'abc'", "'def.com', 'def'", "'hji.com', 'hji'"]
```
And the list is not working correctly. So what exactly am I doing wrong? Am I reading the file incorrectly or am I formatting the file incorrectly? Or something else?<issue_comment>username_1: Try this:
```
serverlist = []
with open('/server.list', 'r') as f:
for line in f:
serverlist.append(tuple(line.rstrip('\n').split(',')))
```
**Explanation**
* You want an explicit `for` loop so you cycle through each line as expected.
* You need `list.append` for each line to append to your list.
* You need to use `split(',')` in order to split by commas.
* Convert to `tuple` as this is your desired output.
**List comprehension method**
The `for` loop can be condensed as below:
```
with open('/server.list', 'r') as f:
serverlist = [tuple(line.rstrip('\n').split(',')) for line in f]
```
Upvotes: 0 <issue_comment>username_2: The contents of the file are not interpreted as Python code. When you read a `line in f`, it is a string; and the quotation marks, commas etc. in your file are just those characters as parts of a string.
If you want to create some other data structure from the string, you need to *parse* it. The program has no way to know that you want to turn the string `"'abc.com', 'abc'"` into the tuple `('abc.com', 'abc')`, unless you instruct it to.
This is the point where the question becomes "too broad".
If you are in control of the file contents, then you can simplify the data format to make this more straightforward. For example, if you just have `abc.com abc` on the line of the file, so that your string ends up as `'abc.com abc'`, you can then just `.split()` that; this assumes that you don't need to represent whitespace inside either of the two items. You could instead split on another character (like the comma, in your case) if necessary (`.split(',')`). If you need a general-purpose hammer, you might want to look into JSON. There is also `ast.literal_eval` which can be used to treat text as simple Python literal expressions - in this case, you would need the lines of the file to include the enclosing parentheses as well.
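To illustrate the `ast.literal_eval` route on the asker's original file format, here is a minimal sketch (the function name is mine); each line such as `'abc.com', 'abc'` is wrapped in parentheses so it parses as a tuple literal:

```python
import ast

def load_servers(path):
    """Parse lines like  'abc.com', 'abc'  into tuples via ast.literal_eval.

    Sketch of the literal_eval approach mentioned above; wrapping each
    line in parentheses makes it an explicit tuple literal.
    """
    servers = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                servers.append(ast.literal_eval('(' + line + ')'))
    return servers
```

Unlike `eval`, `ast.literal_eval` only accepts Python literals, so a malformed or malicious line raises an error instead of executing code.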
Upvotes: 1 <issue_comment>username_3: If you are willing to let go of the quotes in your file and rewrite it as
```
abc.com, abc
def.com, def
hji.com, hji
```
the code to load can be reduced to a one liner using the fact that files are iterables
```
with open('servers.list') as f:
    servers = [tuple(line.rstrip('\n').split(', ')) for line in f]
```
Note that iterating over a file keeps the trailing newline on each line, so it is stripped with `rstrip('\n')` before splitting.
You can allow arbitrary whitespace by doing something like
```
servers = [tuple(word.strip() for word in line.split(',')) for line in f]
```
It might be easier to use something like regex to parse the original format. You could use an expression that captures the parts of the line you care about and matches but discards the rest:
```
import re
pattern = re.compile('\'(.+)\',\\s*\'(.+)\'')
```
You could then extract the names from the matched groups
```
with open('servers.list') as f:
    servers = [pattern.fullmatch(line.rstrip('\n')).groups() for line in f]
```
This is just a trivialized example. You can make it as complicated as you wish for your real file format.
Upvotes: 1 |
2018/03/19 | 462 | 1,390 | <issue_start>username_0: Based on the Google Form input I have the following data collected from the user
```
+-------+--------+--------+--------+
| Name | Col1 | Col2 | Col3 |
+-------+--------+--------+--------+
| name1 | | | 1 |
| name2 | 3 | | |
| name3 | | 2 | |
+-------+--------+--------+--------+
```
which only one of the Col1, Col2 or Col3 will contain value
What I want is to created a view like this
```
+-------+--------+
| Name | Col |
+-------+--------+
| name1 | 1 |
| name3 | 2 |
| name2 | 3 |
+-------+--------+
```
The SQL command should not only merge Col1, Col2 and Col3 but also sort the new Col based on it value.
Thanks in advance<issue_comment>username_1: You can use `coalesce()`:
```
select name, coalesce(col1, col2, col3) as col
from t
order by col;
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Sheets doesn't support full SQL.
```
=ARRAYFORMULA(SORT({A2:A4,MMULT(--B2:D4,TRANSPOSE(B2:D2^0))},2,1))
```
Upvotes: 0 <issue_comment>username_3: The simplest method below (I'-'I's answer is good but a little overcomplicated) given the particularities of your data
```
=sort(arrayformula({H1:H6, I1:I6 + J1:J6 + K1:K6}), 2, true)
```
[](https://i.stack.imgur.com/PzQE9.png)
Upvotes: 1 |
2018/03/19 | 534 | 1,761 | <issue_start>username_0: I'm using jQuery to create a to-do list and centered everything on the page using `text-align: center` with body in my CSS file. But how can I left-align newly added list items below the form?
Here's the JavaScript portion of the code I'm currently using (my guess is this is what I'd need to change), but you can also view my [CodePen](https://codepen.io/nataliecardot/pen/VXmPrO) to view the CSS and HTML.
```
$(function() {
$('.button').click(function() {
let toAdd = $('input[name=checkListItem]').val();
// inserts specified element as last child of target element
$('.list').append('' + toAdd + '');
// clears input box after clicking 'add'
$('input[name=checkListItem]').val('');
});
$('input[name=checkListItem]').keypress(function(e) {
if (e.which == 13) {
$('.button').click();
// e.preventDefault() prevents default event from occuring, e.stopPropagation() prevents event from bubbling up, and return false does both
return false;
}
});
$(document).on('click', '.item', function() {
$(this).remove();
});
});
```
|
2018/03/19 | 1,041 | 3,602 | <issue_start>username_0: I'm using *ionic 3.2* and Angular. To install `HTTP` (<https://ionicframework.com/docs/native/http/>) I used these commands:
```
ionic cordova plugin add cordova-plugin-advanced-http
npm install --save @ionic-native/http
```
In script `autenticar.ts` I added the `import { HTTP } from '@ionic-native/http';` like this:
```
import { Component, ViewChild } from '@angular/core';
import { NavController } from 'ionic-angular';
import { HTTP } from '@ionic-native/http';
@Component({
selector: 'page-autenticar',
templateUrl: 'autenticar.html'
})
export class AutenticarPage {
@ViewChild('username') username;
@ViewChild('password') password;
constructor(public navCtrl: NavController, public http: HTTP) {
console.log(http)
}
...
```
After reload app I get this error:
>
>
> ```
> Runtime Error
> Uncaught (in promise): Error: StaticInjectorError(AppModule)
> [AutenticarPage -> HTTP]: StaticInjectorError(Platform: core)[AutenticarPage -> HTTP]:
> NullInjectorError: No provider for HTTP!
> Error: StaticInjectorError(AppModule)[AutenticarPage -> HTTP]:
> StaticInjectorError(Platform: core)[AutenticarPage -> HTTP]:
> NullInjectorError: No provider for HTTP! at _NullInjector.get
> (http://localhost:8100/build/vendor.js:1376:19) at resolveToken
> (http://localhost:8100/build/vendor.js:1674:24) at tryResolveToken
> (http://localhost:8100/build/vendor.js:1616:16) at StaticInjector.get
> (http://localhost:8100/build/vendor.js:1484:20) at resolveToken
> (http://localhost:8100/build/vendor.js:1674:24) at tryResolveToken
> (http://localhost:8100/build/vendor.js:1616:16) at StaticInjector.get
> (http://localhost:8100/build/vendor.js:1484:20) at resolveNgModuleDep
> (http://localhost:8100/build/vendor.js:11228:25) at NgModuleRef_.get
> (http://localhost:8100/build/vendor.js:12461:16) at resolveDep
> (http://localhost:8100/build/vendor.js:12951:45)
>
> ```
>
>
I try this [**answer**](https://stackoverflow.com/a/47492777/1518921), says I have to add the `app.module.ts` this `import { HttpModule } from '@angular/http';`, like this:
```
import { NgModule, ErrorHandler } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular';
import { HttpModule } from '@angular/http';
import { MyApp } from './app.component';
....
imports: [
BrowserModule,
HttpModule,
IonicModule.forRoot(MyApp)
],
...
```
But still the same error.<issue_comment>username_1: You need to add HTTP under **[`providers`](https://github.com/monkeyDledger/uppayDemo/blob/b4996f9e2bcbd4781c7c9ed7f6ebc674cdfa8c39/src/app/app.module.ts#L56)**
```
providers: [
HTTP
]
```
Upvotes: 2 <issue_comment>username_2: You need to add HTTP in providers
app.module.ts
```
...
import { HTTP } from '@ionic-native/http';
...
@NgModule({
...
providers: [
...
HTTP
...
]
...
})
export class AppModule { }
```
Upvotes: 5 [selected_answer] |
2018/03/19 | 1,300 | 4,249 | <issue_start>username_0: I'm very new to web development so have mercy with your responses.
I have a grid of images that I want to modify. When the mouse hovers over an image, I want an overlay with text to appear over the image (this may require the cell in the grid to expand to contain all the text).
When the image with the overlay is clicked, it should open a modal (I already have this working) with the full text and info inside.
All changes need to look smooth with transitions (overlay shouldn't just *be* there when the mouse touches, it should animate in, etc.) when they enter/exit.
I'm not sure what the right terminology is for this, so I'm struggling to find info searching on Google. So, if you could provide me with some resources to learn this, or provide some examples, it'd be much appreciated. Thanks in advance :)
Edit: Here's close to what I want to happen
There will be an image, like this:[](https://i.stack.imgur.com/NK55l.png)
After the mouse hover over this image, an overlay should animate in to look like this: [](https://i.stack.imgur.com/PSWiX.png)
The difference between this and what I want, is I want to show text instead of an icon, and I also want the cell in the grid upon which the mouse is hovering to expand to more pleasantly present the text that will be shown on the overlay.<issue_comment>username_1: Sounds like you're looking for tooltip, see
<https://getbootstrap.com/docs/4.0/components/tooltips/>
Upvotes: 0 <issue_comment>username_2: You can do this with just css first you need to wrap your image in a div and set the position to relative. Then place the image and the overlay inside of it. Then you can use css transitions to achieve the desired effect. You will set the original opacity of the overlay to 0 and set the hover opacity to 1. Below is an example. Since you haven't posted any code I can't tell what your markup will be so I just made an example.
```css
.img-container{
position:relative;
display:inline-block;
}
.img-container .overlay{
position:absolute;
top:0;
left:0;
width:100%;
height:100%;
background:rgb(0,170,170);
opacity:0;
transition:opacity 500ms ease-in-out;
}
.img-container:hover .overlay{
opacity:1;
}
.overlay span{
position:absolute;
top:50%;
left:50%;
transform:translate(-50%,-50%);
color:#fff;
}
```
```html

overlay content
```
Upvotes: 5 [selected_answer]<issue_comment>username_3: Set image as "block"
```css
.img-container{
position:relative;
display:inline-block;
}
.img-container img{
display:block
}
.img-container .overlay{
position:absolute;
top:0;
left:0;
width:100%;
height:100%;
background:rgb(0,170,170);
opacity:0;
transition:opacity 500ms ease-in-out;
}
.img-container:hover .overlay{
opacity:1;
}
.overlay span{
position:absolute;
top:50%;
left:50%;
transform:translate(-50%,-50%);
color:#fff;
}
```
```html

overlay content
```
Upvotes: 2 <issue_comment>username_4: try below css for manage "simple overlay text", "display overlay text only on hover event", "simple overlate with Bg"
```css
.img-container {
position: relative;
display: inline-block;
}
.img-container .overlay {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
display: flex;
justify-content: center;
align-items: center;
}
.img-container .overlay.overlay-bg:not(.hover-overlay) {
background: rgba(0, 0, 0, 0.35);
}
.img-container .overlay.hover-overlay {
background: rgba(0, 0, 0, 0.6);
opacity: 0;
transition: opacity 500ms ease-in-out;
}
.img-container:hover .overlay.hover-overlay {
opacity: 1;
}
.overlay span {
position: absolute;
color: #fff;
}
```
```html
simple overlay text

overlay content
simple overlay text with Bg

overlay content
on hover overlay text

overlay content
```
Upvotes: 0 |
2018/03/19 | 764 | 2,578 | <issue_start>username_0: I am having a problem with my D3 code.
```
const hexagon = this.hexagonSVG.append('path')
.attr('id', 'active')
.attr('d', lineGenerator(hexagonData))
.attr('stroke', 'url(#gradient)')
.attr('stroke-width', 3.5)
.attr('fill', 'none')
const totalLength = (hexagon).node().getTotalLength()
const \_transition = this.d3.transition()
.duration(DASH\_ANIMATION)
.ease(this.d3.easeLinear)
hexagon
.attr('stroke-dasharray', totalLength + ' ' + totalLength)
.attr('stroke-dashoffset', totalLength)
.attr('stroke-dashoffset', 0)
.transition(\_transition)
```
This code was working perfectly fine for almost 6 months, but an error just came out of nowhere today.
"hexagon.attr(...).attr(...).attr(...).transition is not a function"
Can someone please tell me how I solve this issue? Thank you.<issue_comment>username_1: For future reference: I ran into a similar issue and it seems to be a problem between webpack, yarn and d3-transition. The later extends the function of d3-selection, which somehow results in multiple d3-selection versions in the yarn.lock file (as described in [this issue](https://github.com/d3/d3-selection/issues/185)).
In my case explicitly adding d3-selection, removing the lock file and then running `yarn install` again fixed the issue.
It seems like every update of d3-transition recreates this problem.
Upvotes: 3 <issue_comment>username_2: I encountered this when I was mixing `import * as d3 from 'd3';` with individual imports like `import { select } from 'd3-selection'`.
I overcame the issue by just using individual imports, which I guess is the suggested way of doing things.
Upvotes: 2 <issue_comment>username_3: For posterity's sake on this old issue. I found another option for `[email protected]`, which is to add this to your `package.json`:
```
"resolutions": {
"d3-selection": "1.3.0"
}
```
Then deleting your `yarn.lock` and then `yarn install` the project again to help yarn handle the version resolutions.
FYI: My `d3` dependencies in my `package.json` are (showing that I don't have `d3-selection` as a direct dependency):
```
...
"d3": "4.13.0",
"d3-geo-projection": "2.9.0",
"d3-scale": "1.0.0",
"d3-scale-chromatic": "3.0.0",
...
```
This seems to mainly be an issue with how yarn handle it's dependency resolution because, as a test, I was able to temporarily switch my project over to `npm` and had no issues.
This thread on Github lists a few alternative solutions to what I posted above (and worked for me): <https://github.com/d3/d3-selection/issues/185>
Upvotes: 0 |
2018/03/19 | 1,046 | 3,602 | <issue_start>username_0: I am using `split()` and `split(" ")` on the same [string](http://www.ccs.neu.edu/home/vip/teach/Algorithms//7_hash_RBtree_simpleDS/hw_hash_RBtree/alice_in_wonderland.txt). But why is `split(" ")` returning less number of elements than `split()`? I want to know in what specific input case this would happen.<issue_comment>username_1: [`str.split`](https://docs.python.org/2/library/stdtypes.html#str.split) with the `None` argument (or, no argument) splits on *all* whitespace characters, and this isn't limited to *just* the space you type in using your spacebar.
```
In [457]: text = 'this\nshould\rhelp\tyou\funderstand'
In [458]: text.split()
Out[458]: ['this', 'should', 'help', 'you', 'understand']
In [459]: text.split(' ')
Out[459]: ['this\nshould\rhelp\tyou\x0cunderstand']
```
List of all whitespace characters that `split(None)` splits on can be found at [All the Whitespace Characters? Is it language independent?](https://stackoverflow.com/questions/18169006/all-the-whitespace-characters-is-it-language-independent)
Upvotes: 4 [selected_answer]<issue_comment>username_2: If you run the help command on the split() function you'll see this:
>
> split(...) S.split([sep [,maxsplit]]) -> list of strings
>
>
> Return a list of the words in the string S, using sep as the delimiter
> string. If maxsplit is given, at most maxsplit splits are done. If sep
> is not specified or is None, any whitespace string is a separator and
> empty strings are removed from the result.
>
>
>
Therefore the difference between the two is that `split()` without specifying the delimiter will delete the empty strings while the one with the delimiter won't.
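A quick illustration of that difference:

```python
s = " foo  bar "

# No argument: splits on runs of whitespace and drops empty strings
print(s.split())     # ['foo', 'bar']

# Explicit ' ' separator: every single space splits, empty strings kept
print(s.split(" "))  # ['', 'foo', '', 'bar', '']
```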
Upvotes: 2 <issue_comment>username_3: In Python, the split function splits on a specific string if specified, otherwise on any run of whitespace (and then you can access the result list by index as usual):
```
s = "Hello world! How are you?"
s.split()
Out[9]:['Hello', 'world!', 'How', 'are', 'you?']
s.split("!")
Out[10]: ['Hello world', ' How are you?']
s.split("!")[0]
Out[11]: 'Hello world'
```
Upvotes: 1 <issue_comment>username_4: The method `str.split` called without arguments has a somewhat different behaviour.
First it splits by any whitespace character.
```
'foo bar\nbaz\tmeh'.split() # ['foo', 'bar', 'baz', 'meh']
```
But it also remove the empty strings from the output list.
```
' foo bar '.split(' ') # ['', 'foo', 'bar', '']
' foo bar '.split() # ['foo', 'bar']
```
Upvotes: 2 <issue_comment>username_5: From my own experience, the most confusion had come from `split()`'s different treatments on whitespace.
Having a separator like `' '` vs `None`, triggers different behavior of `split()`. According to the [Python documentation](https://docs.python.org/3/library/stdtypes.html#str.split).
>
> If sep is not specified or is None, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace.
>
>
>
Below is an example, in which the sample string has a trailing space `' '`, which is the same whitespace as the one passed in the second `split()`. Hence, this method behaves differently, not because of some whitespace character mismatch, but it's more of how this method was designed to work, maybe for convenience in common scenarios, but it can also be confusing for people who expect the `split()` to just `split`.
```
sample = "a b "
sample.split()
>>> ['a', 'b']
sample.split(' ')
>>> ['a', 'b', '']
```
Upvotes: 0 |
2018/03/19 | 510 | 1,762 | <issue_start>username_0: Can't get my SQL `LIMIT` to work:
```
$sql = mysqli_query($conn, "SELECT url, code, count FROM zipit WHERE uid='".$uid."' LIMIT '".$this_page_first_result."','".$results_per_page."'");
```<issue_comment>username_1: Try this:
```
$sql = mysqli_query($conn, "SELECT url, code, count FROM zipit WHERE uid='".$uid."' LIMIT ".$this_page_first_result.", ".$results_per_page);
```
Upvotes: -1 <issue_comment>username_2: The way you've got your query written translates to:
```
SELECT url ,
code ,
count
FROM zipit
WHERE uid='uid1234'
LIMIT '123','50'
```
The single quotes are invalid for your [`LIMIT`](https://dev.mysql.com/doc/refman/5.5/en/limit-optimization.html) clause. . .
What you should use is:
```
$sql = mysqli_query($conn,"SELECT url,code,count
FROM zipit
WHERE uid='".$uid."'
LIMIT ".$this_page_first_result.",".$results_per_page);
```
---
>
> LIMIT takes one or two numeric arguments, which must both be
> nonnegative integer constants, with these exceptions:
>
>
> Within prepared statements, LIMIT parameters can be specified using ?
> placeholder markers.
>
>
> Within stored programs, LIMIT parameters can be specified using
> integer-valued routine parameters or local variables.
>
>
> With two arguments, the first argument specifies the offset of the
> first row to return, and the second specifies the maximum number of
> rows to return. The offset of the initial row is 0 (not 1)
>
>
>
Upvotes: 1 [selected_answer]<issue_comment>username_3: $sql = mysqli_query($conn, "SELECT url, code, count FROM zipit WHERE uid=$uid LIMIT $this_page_first_result,$results_per_page");
Upvotes: 0 |
2018/03/19 | 744 | 2,460 | <issue_start>username_0: I have 2 different tables and I want to combine them into one drop-down list. Is this possible?
This is my drop-down list controller.
I added some items in code because those items are not in the table.
```
var engineer =
(from x in up5.V_Pekerja.Where(x => x.KodeBagian == "E15320" &&
x.NamaJabatan.ToLower().Contains("section head") == false &&
x.NamaJabatan.ToLower().Contains("lead of") == false)
select new
SelectListItem { Text = x.Nopek + " - " + x.Nama, Value = x.Nopek }).ToList();
engineer.Add(new SelectListItem { Text = "663693 - Nana Sukarna", Value = "663693" });
engineer.Add(new SelectListItem { Text = "653479 - Tri Bambang H", Value = "653479" });
engineer.Add(new SelectListItem { Text = "747522 - <NAME>", Value = "747522" });
ViewBag.Engineer = engineer;
```
can someone please help me if this possible.. sorry for my bad english |
2018/03/19 | 837 | 3,003 | <issue_start>username_0: I have four cells in a table (UITableView); the first and second cells take me to a "ViewController", and the following code works perfectly for me.
```
func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
let segueIdentifier: String
switch indexPath.row {
case 0: //for first cell
segueIdentifier = "ubicacion"
case 1: //for second cell
segueIdentifier = "companias"
case 3: // For third cell
// Open un link using SFSafariViewController
default: //For fourth cell
// call phone
}
self.performSegue(withIdentifier: segueIdentifier, sender: self)
}
```
My question is with respect to the third and fourth cell, how do I send an action?
The third cell: you must open a link using "SFSafariViewController"
The fourth: when you click you must call a specified number.
[Here an image of my table](https://i.stack.imgur.com/OP1dL.png)
I will appreciate if you can guide me<issue_comment>username_1: To open link in Safari, use
```
if let url = URL(string: "YOUR URL") {
UIApplication.shared.openURL(url)
}
```
To call a number, use
```
if let url = URL(string: "tel://\(PHONE NUMBER)"), UIApplication.shared.canOpenURL(url) {
UIApplication.shared.openURL(url)
}
```
**Note:**
You should only use `performSegue` for case 0 & 1. Also, I think your case 3 would actually be case 2. You can update your code to be as below
```
func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
switch indexPath.row {
case 0: //for first cell
performSegue(withIdentifier: "ubicacion", sender: self)
case 1: //for second cell
performSegue(withIdentifier: "companias", sender: self)
case 2: // For third cell
if let url = URL(string: "YOUR URL") {
UIApplication.shared.openURL(url)
}
default: //For fourth cell
        if let url = URL(string: "tel://\(PHONE NUMBER)"), UIApplication.shared.canOpenURL(url) {
UIApplication.shared.openURL(url)
}
}
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: My final code in swift 4
```
func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
switch indexPath.row {
case 0: //for first cell
performSegue(withIdentifier: "ubicacion", sender: self)
case 1: //for second cell
performSegue(withIdentifier: "companias", sender: self)
case 2: // For third cell
let urlGruasWeb = URL(string: "https://www.google.com/")
let vistaGruas = SFSafariViewController(url: urlGruasWeb!)
present(vistaGruas, animated: true, completion: nil)
vistaGruas.delegate = self as? SFSafariViewControllerDelegate
default: //For fourth cell
let url: NSURL = URL(string: "tel://\(911)")! as NSURL
UIApplication.shared.open(url as URL, options: [:], completionHandler: nil)
}
}
```
Upvotes: 0 |
2018/03/19 | 876 | 3,078 | <issue_start>username_0: I just want to have a loading animation with a grayed-out background upon clicking the TestLoading button. But I can't get it right. The loading gif is slightly to the left, but I want it centered. Also, the grayed-out background is not working.
Here's my css:
```
.divLoader {
margin: 0px;
padding: 0px;
position: fixed;
right: 0px;
top: 0px;
width: 100%;
height: 100%;
background-color: rgb(67, 71, 75);
z-index: 30001;
opacity: 0.8;
}
.divLoaderContent {
position: absolute;
color: White;
top: 50%;
left: 40%;
}
```
In my view, I have this:
```

```
and
```
$('#btnRoster1').click(function(e) {
$("#divProcessing").show();
});
```
[](https://i.stack.imgur.com/9foYT.png) |
2018/03/19 | 698 | 2,205 | <issue_start>username_0: I have the following string:
```
my_string = '1) ServerName sn = ProtobufUtil.toServerName(request.getServer());\\n2) String msg = "Region server " + sn +\\n3) " reported a fatal error:\\\\n" + errorText;\\n4) LOG.error(msg);'
```
I need to convert that string into a list split by symbol `\\n`. So, the list will be like this:
```
my_list = ['1) ServerName sn = ProtobufUtil.toServerName(request.getServer());',
'2) String msg = "Region server " + sn +',
'3) " reported a fatal error:\\\\n" + errorText;',
'4) LOG.error(msg);'
]
```
I used the symbol `\\n` as the splitter in my code:
```
my_list = my_string.split("\\n")
```
However, the output for the third element in the list is not as I expected.
Output:
```
my_list = ['1) ServerName sn = ProtobufUtil.toServerName(request.getServer());',
'2) String msg = "Region server " + sn +',
'3) " reported a fatal error:\\',
'" + errorText;',
'4) LOG.error(msg);']
```
How should the splitter be defined in the code?<issue_comment>username_1: You've got no option but the regex option. You can do this with `re.split`, and a negative lookbehind.
```
>>> import re
>>> re.split(r'(?<!\\)\\n', my_string)
```
```
[
 '1) ServerName sn = ProtobufUtil.toServerName(request.getServer());',
 '2) String msg = "Region server " + sn +',
 '3) " reported a fatal error:\\\\n" + errorText;',
'4) LOG.error(msg);'
]
```
The lookbehind specifies that the split must occur only when `\\n` isn't preceded by more backslashes.
Upvotes: 3 [selected_answer]<issue_comment>username_2: you can try this pattern , which is Positive Lookahead :
```
pattern r'\\n(?=\d)'
```
code:
```
my_string = '1) ServerName sn = ProtobufUtil.toServerName(request.getServer());\\n2) String msg = "Region server " + sn +\\n3) " reported a fatal error:\\\\n" + errorText;\\n4) LOG.error(msg);'
import re
for i in re.split(r'\\n(?=\d)',my_string):
print(i)
```
output:
```
1) ServerName sn = ProtobufUtil.toServerName(request.getServer());
2) String msg = "Region server " + sn +
3) " reported a fatal error:\\n" + errorText;
4) LOG.error(msg);
```
Upvotes: 1 |
2018/03/19 | 1,211 | 2,906 | <issue_start>username_0: I have dictionary by the name of temp
```
dict_items([('/history/apollo/', ['6245', '6245', '6245', '6245', '6245', '6245', '6245',
'6245']), ('/shuttle/countdown/', ['3985', '3985', '3985', '3985', '3985', '3985', '3985',
'-', '-', '-', '0', '3985', '4247', '3985', '3985', '3998', '0',
'3985', '3985', '3985', '3985', '4247', '3985', '3985', '398, '3985']), ('/', ['7074', '7074', '7074',
'7074', '7074', '7074', '7074', '7074', '7074', '7074', '70]), ('/images/dual-pad.gif', ['141308', '141308',
'0', '141308', '141308', '141308', '0', '141308', '0', '141308', '141308']),
('/images/NASA-logosmall.gif', ['0', '786', '786', '0', '786', '786', '786',
'786', '786', '786', '786', '786', '786', '786', '786', '0',
'786', '786', '786'])])
```
It's basically a URL and the bandwidth associated with that URL.
I need the sum of all the string values in the list for each key, while ignoring the hyphens that appear among the values.
```
desired output:
dict_items([('/history/apollo/', ['4996'], ('/', ['70810']), ('/images/dual-
pad.gif', ['113040']), ('/images/NASA-logosmall.gif', ['12576'])])
#Or total value for a key without string
#dict_items([(/history/apollo/, [4996], (/, [70810])(/images/dual-
pad.gif, [113040]), (/images/NASA-logosmall.gif, [12576])])
my code is
sums = {k: sum(i for i in v if isinstance(i, int)) for k, v in temp.items()}
```
it gives me the error `TypeError: unsupported operand type(s) for +: 'int' and 'str'`. Then I tried
```
sums = {k: sum(int(i) for i in v) for k, v in [temp.values()]}
```
then I tried
```
print({k:sum(map(int, [v])) for k, v in temp.items()})
```
didn't work
getting this error:
```
TypeError: expected string or bytes-like object
```
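A minimal sketch of the desired per-key filtering and summing, assuming entries like `'-'` should simply be skipped (the abbreviated `temp` here is illustrative):

```python
# Abbreviated, illustrative version of the temp dictionary above
temp = {'/history/apollo/': ['6245', '-', '6245'],
        '/images/NASA-logosmall.gif': ['0', '786', '786']}

# Keep only the all-digit entries, convert them, and sum per key
sums = {k: sum(int(i) for i in v if i.isdigit()) for k, v in temp.items()}
print(sums)  # {'/history/apollo/': 12490, '/images/NASA-logosmall.gif': 1572}
```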
will appreciate any help |
2018/03/19 | 510 | 1,480 | <issue_start>username_0: I have database like below:
```
WITH TB AS(
SELECT 1 NONB FROM DUAL UNION ALL
SELECT 89 NONB FROM DUAL UNION ALL
SELECT 193 NONB FROM DUAL
)
SELECT * FROM TB
```
I want to convert column NONB with `to_char(NONB)` and pad it with leading zeros, like below.
>
> 001
>
>
> 089
>
>
> 193
>
>
>
How can I do this? Thanks. |
2018/03/19 | 836 | 2,803 | <issue_start>username_0: When I save a python source code file, I want to re-run the script. Is there a command that works like this (sort of like nodemon for node)?<issue_comment>username_1: While there are probably ways to do this within the python ecosystem such as watchdog/watchmedo ( <https://github.com/gorakhargosh/watchdog> ), and maybe even linux scripting options with inotifywait ( <https://linux.die.net/man/1/inotifywait> ), for me, the easiest solution by far was... to just use nodemon! What I didn't know is that although the github tagline of nodemon is "Monitor for any changes in your node.js application and automatically restart the server - perfect for development" actually nodemon is a delicously generic tool and knows that .py files should be executed with python for example. Here's where I think the magic happens: <https://github.com/remy/nodemon/blob/c1211876113732cbff78eb1ae10483eaaf77e5cf/lib/config/defaults.js>
End result is that the command line below totally works. Yay!
```python
$ nodemon hello.py
[nodemon] starting `python hello.py`
```
Upvotes: 8 [selected_answer]<issue_comment>username_2: You can install nodemon to watch for file changes.
e.g.
```
npm i -g nodemon
```
Then to use:
```
nodemon --exec python3 hello.py
```
This is for when you use python3 in the command line. On Windows you can also use 'py' instead.
Upvotes: 6 <issue_comment>username_3: The most similar way to nodemon I found is by using the watchdog package:
```
pip install watchdog
```
This comes with a utility called watchmedo:
```
watchmedo shell-command \
--patterns="*.py" \
--command='python "${watch_src_path}"' \
.
```
Now just work on your `.py` and it will be executed every time you save the file.
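For comparison, the core of what these watchers do can be sketched with the standard library alone — a hypothetical polling loop (function names are my own, not part of watchdog or nodemon) that re-runs a script whenever its modification time changes:

```python
import os
import subprocess
import sys
import time

def mtime_snapshot(paths):
    """Map each existing path to its last-modification time."""
    return {p: os.stat(p).st_mtime for p in paths if os.path.exists(p)}

def watch(script, interval=0.5):
    """Re-run `script` (waiting for it to exit) whenever it changes on disk; Ctrl-C to stop."""
    last = None
    while True:
        current = mtime_snapshot([script])
        if current != last:
            last = current
            subprocess.run([sys.executable, script])
        time.sleep(interval)

if __name__ == "__main__" and len(sys.argv) > 1:
    watch(sys.argv[1])
```

Unlike nodemon, this only handles a single file and restarts scripts that exit on their own — it is a sketch of the idea, not a replacement for the tools above.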
Upvotes: 5 <issue_comment>username_3: You actually can use nodemon with python, from their docs:
>
> Running non-node scripts nodemon can also be used to execute and
> monitor other programs. nodemon will read the file extension of the
> script being run and monitor that extension instead of .js if there's
> no nodemon.json:
>
>
> `nodemon --exec "python -v" ./app.py`
>
>
> Now nodemon will run app.py with python in verbose mode (note that if
> you're not passing args to the exec program, you don't need the
> quotes), and look for new or modified files with the .py extension.
>
>
>
<https://github.com/remy/nodemon#running-non-node-scripts>
Upvotes: 2 <issue_comment>username_4: I just use `npx nodemon pythonfile.py`
and it works. Make sure you are using nodemon v2.0.x
or above
Upvotes: 3 <issue_comment>username_5: I have used **py-mon** to watch for file changes.
**Installation**
```
pip install py-mon
```
**Execution**
```
pymon filename.py
```
Below is the link to the package:
<https://pypi.org/project/py-mon/>
Upvotes: 1 |
2018/03/19 | 902 | 3,078 | <issue_start>username_0: In an `NSAttributedString`, I want to keep the existing attributed value and give it a new attributed value.
The problem is that `replacingOccurrences` is only possible for string types, as I want to give a new value every time the word appears in the entire sentence.
If I change `NSAttributedString` to string type, the attributed value is deleted. I must keep the existing values.
How can I do that? |
2018/03/19 | 1,364 | 4,323 | <issue_start>username_0: I am running Tomcat on a RHEL 7 machine with 1GB RAM. I have set up both Tomcat and Java to have Xmx=1G, and the statements below support that:
>
> [root@ip-172-31-28-199 bin]# java -XX:+PrintFlagsFinal -version | grep HeapSize
> Picked up \_JAVA\_OPTIONS: -Xmx1g
> uintx ErgoHeapSizeLimit = 0 {product}
> uintx HeapSizePerGCThread = 87241520 {product}
> uintx InitialHeapSize := 16777216 {product}
> uintx LargePageHeapSizeThreshold = 134217728 {product}
> uintx MaxHeapSize := 1073741824 {product} openjdk version "1.8.0\_161"
>
>
>
and
>
> tomcat 2799 1 1 02:21 ? 00:00:07 /usr/bin/java
> -Djava.util.logging.config.file=/opt/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.awt.headless=true -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Xmx1024M -Dignore.endorsed.dirs= -classpath /opt/tomcat/bin/bootstrap.jar:/opt/tomcat/bin/tomcat-juli.jar
> -Dcatalina.base=/opt/tomcat -Dcatalina.home=/opt/tomcat -Djava.io.tmpdir=/opt/tomcat/temp org.apache.catalina.startup.Bootstrap start
>
>
>
But when I get an exception, I get the following message:
>
> There is insufficient memory for the Java Runtime Environment to continue.
> ==========================================================================
>
>
> Native memory allocation (mmap) failed to map 244043776 bytes for committing reserved memory.
> =============================================================================================
>
>
>
I know java can never claim 1GB memory as that is the total memory of the machine. But why am I getting an error with this size mentioned? |
2018/03/19 | 1,352 | 3,784 | <issue_start>username_0: I have some unix times that I convert to timestamps in `sparklyr` and for some reasons I also need to convert them into strings.
Unfortunately, it seems that during the conversion to string `hive` converts to EST (my locale).
```
df_new <- spark_read_parquet(sc, "/mypath/parquet_*",
overwrite = TRUE,
name = "df_new",
memory = FALSE,
options = list(mergeSchema = "true"))
> df_new %>%
mutate(unix_t = from_utc_timestamp(timestamp(t) ,'UTC'),
date_str = date_format(unix_t, 'yyyy-MM-dd HH:mm:ss z'),
date_alt = to_date(from_utc_timestamp(timestamp(t) ,'UTC'))) %>%
select(t, unix_t, date_str, date_alt) %>% head(5)
# Source: lazy query [?? x 4]
# Database: spark_connection
t unix_t date_str date_alt
1 1419547405. 2014-12-25 22:43:25 2014-12-25 17:43:25 EST 2014-12-25
2 1418469714. 2014-12-13 11:21:54 2014-12-13 06:21:54 EST 2014-12-13
3 1419126103. 2014-12-21 01:41:43 2014-12-20 20:41:43 EST 2014-12-20
4 1419389856. 2014-12-24 02:57:36 2014-12-23 21:57:36 EST 2014-12-23
5 1418271811. 2014-12-11 04:23:31 2014-12-10 23:23:31 EST 2014-12-10
```
As you can see both `date_str` and `date_alt` use the `EST` timezone. I need `UTC` here. How can I do that?
Thanks!<issue_comment>username_1: It's possible that sparklyr is doing some weird translation of timezones into the hive functions. I'd try registering the dataframe as a table and doing the manipulation with pure HQL:
```
createOrReplaceTempView(df_new, "df_new")
result <- sql("select from_utc_timestamp(timestamp(t) ,'UTC'),
cast(from_utc_timestamp(timestamp(t) ,'UTC') as STRING),
cast(from_utc_timestamp(timestamp(t) ,'UTC') as DATE)
from df_new")
head(result)
```
**edit**
If you're unfamiliar with SQL-languages, you can add any of the variables from `df_new` as a comma separated list like so (and rename your selections with `as`)
```
select var1, var2, t,
from_utc_timestamp(timestamp(t) ,'UTC') as unix_t,
cast(from_utc_timestamp(timestamp(t) ,'UTC') as STRING) as date_str,
cast(from_utc_timestamp(timestamp(t) ,'UTC') as DATE) as date_alt
from df_new
```
You can also use \* to represent all variables from the data frame:
```
select *,
from_utc_timestamp(timestamp(t) ,'UTC') as unix_t,
cast(from_utc_timestamp(timestamp(t) ,'UTC') as STRING) as date_str,
cast(from_utc_timestamp(timestamp(t) ,'UTC') as DATE) as date_alt
from df_new
```
Upvotes: 0 <issue_comment>username_2: Try using as.POSIXct() ?
```
format(as.POSIXct(unix_t, origin = "1970-01-01", tz = "UTC", usetz = TRUE), "%Y-%m-%d %H:%M:%S")
```
This will first convert unix timestamp to UTC and then formatted to desired string.
Upvotes: -1 <issue_comment>username_3: From the Hive function reference, [date\_format](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF) uses Java's [SimpleDateFormat](https://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html), which I believe always defaults to the JVM time zone, this explains why this gets you a character string converted to your time zone.
One option is to detect the time zone and manually add the hours to get UTC.
Another option would be to use `lubridate` with `spark_apply()`:
```
sdf_len(sc, 1) %>%
mutate(unix_t = from_utc_timestamp(timestamp(1522371003) , 'UDT')) %>%
spark_apply(
function(e) {
dplyr::mutate(
e,
time_str = as.character(
lubridate::with_tz(
as.POSIXct(unix_t, origin="1970-01-01"),
"GMT"
)
)
)
},
columns = c("id", "unix_t", "time_str"))
```
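For reference, the target values are just the ordinary epoch-to-UTC mapping; here is a quick Python check (illustration only, not part of the sparklyr answer) of the first timestamp from the question:

```python
from datetime import datetime, timezone

t = 1419547405  # first value of t in the question's table
utc_str = datetime.fromtimestamp(t, tz=timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')
print(utc_str)  # 2014-12-25 22:43:25 UTC
```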
Upvotes: 4 [selected_answer] |
2018/03/19 | 1,005 | 3,264 | <issue_start>username_0: I was trying to write a function that took in N bytes of little endian hex and made it into an unsigned int.
```
unsigned int endian_to_uint(char* buf, int num_bytes)
{
if (num_bytes == 0)
return (unsigned int) buf[0];
return (((unsigned int) buf[num_bytes -1]) << num_bytes * 8) | endian_to_uint(buf, num_bytes - 1);
}
```
however, the value returned is approx ~256 times larger than the expected value. Why is that?
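To see where the factor of ~256 comes from, here is a sketch of the same recursion in Python (the names are mine): shifting the byte at index `num_bytes - 1` by `num_bytes * 8` bits puts every byte one position (8 bits) too high.

```python
buf = [0x01, 0x02, 0x03, 0x04]  # little-endian bytes of 0x04030201

def as_written(b, n):
    """Mirrors the question's recursion: top byte shifted by n * 8."""
    if n == 0:
        return b[0]
    return (b[n - 1] << n * 8) | as_written(b, n - 1)

def shifted_by_index(b, n):
    """Shift each byte by 8 * its index instead."""
    if n == 1:
        return b[0]
    return (b[n - 1] << (n - 1) * 8) | shifted_by_index(b, n - 1)

print(hex(as_written(buf, 4)))        # 0x403020101 -- roughly 256x too large
print(hex(shifted_by_index(buf, 4)))  # 0x4030201
```

(A separate C-only pitfall worth noting: a plain `char` may be signed, so bytes of 0x80 and above can sign-extend before the shift — `unsigned char` avoids that.)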
If I needed to do use it for a 4 byte buffer, normally you'd do:
```
unsigned int endian_to_uint32(char* buf)
{
return (((unsigned int) buf[3]) << 24)
| (((unsigned int) buf[2]) << 16)
| (((unsigned int) buf[1]) << 8)
| (((unsigned int) buf[0]));
}
```
which should be reproduced by the recursive function I wrote, or is there some arithmetic error that I haven't caught? |
2018/03/19 | 508 | 1,985 | <issue_start>username_0: I'm doing what I would have expected to be a fairly straightforward query on a modified version of the imdb database:
```
select primary_name, release_year, max(rating)
from titles natural join primary_names natural join title_ratings
group by year
having title_category = 'film' and year > 1989;
```
However, I'm immediately running into
>
> "column must appear in the GROUP BY clause or be used in an aggregate function."
>
>
>
I've tried researching this but have gotten confusing information; some examples I've found for this problem look structurally identical to mine, where others state that you must group every single selected parameter, which defeats the whole purpose of a group as I'm only wanting to select the maximum entry per year.
What am I doing wrong with this query?
Expected result: table with 3 columns which displays the highest-rated movie of each year.<issue_comment>username_1: If you want the maximum entry per year, then you should do something like this:
```
select r.*
from ratings r
where r.rating = (select max(r2.rating) from ratings r2 where r2.year = r.year) and
r.year > 1989;
```
In other words, `group by` is the wrong approach to writing this query.
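The per-group maximum pattern above can be tried out in a self-contained sqlite3 sketch (table name and data invented for illustration — this is not the IMDb schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ratings (title TEXT, year INT, rating REAL)")
con.executemany(
    "INSERT INTO ratings VALUES (?, ?, ?)",
    [("A", 1990, 7.1), ("B", 1990, 8.9), ("C", 1991, 6.0), ("D", 1991, 9.2)],
)
# one row per year: the title whose rating equals that year's maximum
rows = con.execute(
    """
    SELECT r.title, r.year, r.rating
    FROM ratings r
    WHERE r.rating = (SELECT MAX(r2.rating) FROM ratings r2 WHERE r2.year = r.year)
    ORDER BY r.year
    """
).fetchall()
print(rows)  # [('B', 1990, 8.9), ('D', 1991, 9.2)]
```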
I would also strongly encourage you to forget that `natural join` exists at all. It is an abomination. It uses the *names* of common columns for joins. It does not even use properly declared foreign key relationships. In addition, you cannot see what columns are used for the join.
While I am at it, another piece of advice: qualify all column names in queries that have more than one table reference. That is, include the table alias in the column name.
Upvotes: 2 <issue_comment>username_2: If you want to display all the columns you can use a window function like:
```
select primary_name, year, max(rating) Over (Partition by year) as rating
from titles natural
join primary_names natural join ratings
where title_type = 'film' and year > 1989;
```
Upvotes: 0 |
2018/03/19 | 865 | 3,042 | <issue_start>username_0: I made an app with Construct 2 and I exported to Intel XDK. Then I exported to cordova and everytime I try to build with "cordova build android" I get this error:
ERROR: In FontFamilyFont, unable to find attribute android:ttcIndex
FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':processDebugResources'.
>
> com.android.ide.common.process.ProcessException: Failed to execute aapt
>
>
>
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug
option to get more log output.
BUILD FAILED
Total time: 47.337 secs
```
Command finished with error code 1: cmd /s /c "C:\Users\Gustavo\app\platforms\android\gradlew.bat cdvBuildDebug -b C:\Users\Gustavo\app\platforms\android\build.gradle -Dorg.gradle.daemon=true -Dorg.gradle.jvmargs=-Xmx2048m -Pandroid.useDeprecatedNdk=true"
Error: cmd: Command failed with exit code 1 Error output:
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
ERROR: In FontFamilyFont, unable to find attribute android:fontVariationSettings
ERROR: In FontFamilyFont, unable to find attribute android:ttcIndex
FAILURE: Build failed with an exception.
\* What went wrong:
Execution failed for task ':processDebugResources'.
> com.android.ide.common.process.ProcessException: Failed to execute aapt
```
It's my first time posting here, so if I am doing something wrong, please tell me.<issue_comment>username_1: Gustavo,
It is possible that you have a conflict in your Cordova plugins or platforms. A second possibility is that you have added components via npm but haven't installed them. For both scenarios, I suggest that you list out the currently installed versions, note them down, then do an update to Cordova. Here is how to list out the versions, like what I have:
```
cd projectfolder
$ cordova plugin
cordova-plugin-console 1.1.0 "Console"
cordova-plugin-device 2.0.1 "Device"
cordova-plugin-whitelist 1.3.3 "Whitelist"
$ cordova platform
Installed platforms:
android 7.0.0
browser 5.0.3
Available platforms:
ios ~4.5.4
osx ~4.0.1
windows ~5.0.0
www ^3.12.0
$ npm -v
3.10.10
```
Here is how you update Cordova for the project. This example assumes you are using the Android platform. If you have other plugins/platforms, do the same for them.
```
npm install
npm update
cordova platform rm android --nosave
cordova platform add android
```
Alternatively
```
cordova platform update android
```
If you notice a specific plugin with a version error, remove and re-add the plugin with the required version.
Upvotes: 1 [selected_answer]<issue_comment>username_2: Thanks for the help!
I got a successful build by adding this to the build-extras.gradle file:
```
configurations.all {
resolutionStrategy {
force 'com.android.support:support-v4:27.1.0'
}
}
```
And by installing the cordova-android-support-gradle-release.
Upvotes: 2 |
2018/03/19 | 1,198 | 3,773 | <issue_start>username_0: I'm having a hard time trying to figure this out. New to coding. I'm trying to read a .txt file, tokenize it, pos tag the words in it.
Here's what I've got so far:
```
import nltk
from nltk import word_tokenize
import re
file = open('1865-Lincoln.txt', 'r').readlines()
text = word_tokenize(file)
string = str(text)
nltk.pos_tag(string)
```
My problem is, it keeps giving me the `TypeError: expected string or bytes-like object` error.<issue_comment>username_1: word\_tokenize is expecting a string but file.readlines() gives you a list.
Just converting the list to a string will solve the issue.
```
import nltk
from nltk import word_tokenize
import re
file = open('test.txt', 'r').readlines()
text =''
for line in file:
text+=line
text = word_tokenize(text)
nltk.pos_tag(text) # pos_tag expects the list of tokens; passing str(text) would tag individual characters
```
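The `expected string or bytes-like object` message itself comes from the regex machinery the tokenizer uses internally; it can be reproduced with the standard library alone (a sketch, no NLTK required):

```python
import re

print(re.findall(r"\w+", "a plain string is fine"))  # ['a', 'plain', 'string', 'is', 'fine']
try:
    re.findall(r"\w+", ["a", "list", "is", "not"])
except TypeError as exc:
    # message reads "expected string or bytes-like object" (exact wording varies by Python version)
    print(type(exc).__name__, exc)
```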
Upvotes: 1 <issue_comment>username_2: I suggest you do the following:
```
import nltk
# nltk.download('all') # only for the first time when you use nltk
from nltk import word_tokenize
import re
with open('1865-Lincoln.txt') as f: # with - open is recommended for file reading
lines = f.readlines() # first get all the lines from file, store it
for i in range(0, len(lines)): # for each line, do the following
token_text = word_tokenize(lines[i]) # tokenize each line, store in token_text
print (token_text) # for debug purposes
pos_tagged_token = nltk.pos_tag(token_text) # pass the token_text to pos_tag()
print (pos_tagged_token)
```
For a text file containing:
>
> user is here
>
>
> pass is there
>
>
>
The output was:
>
> ['user', 'is', 'here']
>
>
> [('user', 'NN'), ('is', 'VBZ'), ('here', 'RB')]
>
>
> ['pass', 'is', 'there']
>
>
> [('pass', 'NN'), ('is', 'VBZ'), ('there', 'RB')]
>
>
>
It worked for me, I'm on Python 3.6, if that should matter. Hope this helps!
**EDIT 1:**
So your issue was you were passing a *list of strings* to `pos_tag()`, whereas doc says
>
> A part-of-speech tagger, or POS-tagger, processes a sequence of words, and attaches a part of speech tag to each word
>
>
>
Hence you needed to pass it line by line, i.e. *string by string*. That is why you were getting a `TypeError: expected string or bytes-like object` error.
Upvotes: 0 <issue_comment>username_3: Most probably the `1865-Lincoln.txt` refers to the inaugural speech of president Lincoln. It's available in NLTK from <https://github.com/nltk/nltk_data/blob/gh-pages/packages/corpora/inaugural.zip>
The original source of the document comes from the [Inaugural Address Corpus](http://search.language-archives.org/record.html?id=languagecommons_org_Inaugural-Address-Corpus-1789-2009)
If we check how [NLTK is reading the file using LazyCorpusReader](https://github.com/nltk/nltk/blob/develop/nltk/corpus/__init__.py#L125), we see that the files are Latin-1 encoded.
```
inaugural = LazyCorpusLoader(
'inaugural', PlaintextCorpusReader, r'(?!\.).*\.txt', encoding='latin1')
```
If you have the default encoding set to `utf8`, most probably that's where the `TypeError: expected string or bytes-like object` is occurring
You should open the file with an explicit encoding and decode the string properly, i.e.
```
import nltk
from nltk import word_tokenize, pos_tag
tagged_lines = []
with open('test.txt', encoding='latin1') as fin:
for line in fin:
tagged_lines.append(pos_tag(word_tokenize(line)))
```
But technically, you can access the `inaugural` corpus directly as a corpus object in NLTK, i.e.
```
>>> from nltk.corpus import inaugural
>>> from nltk import pos_tag
>>> tagged_sents = [pos_tag(sent) for sent in inaugural.sents('1865-Lincoln.txt')]
```
Upvotes: -1 |
2018/03/19 | 617 | 1,875 | <issue_start>username_0: In Google Sheet, I have a list of name
```
+-------+
| Name |
+-------+
| name1 |
| name2 |
| name3 |
+-------+
```
And then I have a list of attendance
```
+-------+
| Name |
+-------+
| name1 |
| name3 |
+-------+
```
I would like to generate a list that didn't attend the event
```
+-------+
| Name |
+-------+
| name2 |
+-------+
```
How can I generate the last table? Thanks in advance
EDIT: Since Google Sheets supports query, it is better to add the SQL tag.<issue_comment>username_1: Is this what you want? The formula shown in C2 is dragged down into C2:C4
[](https://i.stack.imgur.com/ciBlr.png)
Upvotes: 0 <issue_comment>username_2: To generate values like you described you might want to create yourself a script `Tools -> Script editor`.
*Script:*
```
function myFunction() {
var sheet = SpreadsheetApp.getActiveSheet();
var values = sheet.getDataRange().getValues();
var sourceColumnIndex = 0;
var compareColumnIndex = 1;
var setColumnIndex = 2;
var currentSetRow = 1;
for(var i=1; i<values.length; i++) {
var sourceCell = values[i][sourceColumnIndex];
var sameValues = false;
for(var j=1; j<values.length; j++) {
var compareCell = values[j][compareColumnIndex];
if(sourceCell == compareCell) {
Logger.log("\"" + sourceCell + "\"(" + i + "/" + sourceColumnIndex + ") == \"" + compareCell + "(" + j + "/" + compareColumnIndex + ")\" => true.");
sameValues = true;
break;
} else {
Logger.log("\"" + sourceCell + "\"(" + i + "/" + sourceColumnIndex + ") == \"" + compareCell + "(" + j + "/" + compareColumnIndex + ")\" => false.");
}
}
if(sameValues == false) {
sheet.getRange(currentSetRow + 1, setColumnIndex + 1).setValue(sourceCell);
currentSetRow++;
}
}
}
```
*Result before running the script:*
[](https://i.stack.imgur.com/6KqyO.png)
*Result after running the script:*
[](https://i.stack.imgur.com/cXEr4.png)
Upvotes: 0 <issue_comment>username_3: ```
=FILTER(A2:A,ISNA(VLOOKUP(A2:A,B2:B,1,0)),A2:A)
```
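This formula is effectively an anti-join: keep the names that have no match in the attendance column. The same logic, as a tiny Python sketch using the sample data from the question:

```python
names = ["name1", "name2", "name3"]
attended = ["name1", "name3"]

absent = [n for n in names if n not in attended]
print(absent)  # ['name2']
```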
Upvotes: 3 [selected_answer] |
2018/03/19 | 605 | 2,064 | <issue_start>username_0: I'm having a hard time figuring out a solution to this situation. I need assistance, I am trying to disable the send button if my fields are not filled using a textwatcher. Here is part of the code:
```
public class Main5Activity extends AppCompatActivity {
TextView shoppinglist, fullname;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main5);
fullname = (TextView) findViewById(R.id.fullname);
shoppinglist = (TextView) findViewById(R.id.shoppinglist);
}

public void send(View v) {
new Send().execute();
}
}
```
 |
2018/03/19 | 742 | 3,024 | <issue_start>username_0: I am trying to pass props and functions from a parent to a child component in React. However, when I try to call the function in the child component, I receive the error: "**Uncaught TypeError: Cannot read property 'bind' of undefined**".
This would suggest that the function created in the parent element is not being accessed by the child component. Am I missing a step in my code that would allow me to reference a parent component function from a child component?
I am defining states as props in my parent component, for use in conditional rendering in my child component. I am also defining functions in the parent component, which I am trying to call in my child component.
Provided below is my code:
**Note:** The flow is supposed to work as follows: Click Sign Up button > Call SignUpClick function > render "SignUp" component (based on the conditional rendering logic outlined in the child component)
The same flow concept would apply if someone clicked the Sign In button.
**Parent Component**
```
export default class Parent extends Component{
constructor(props) {
super(props);
this.state = {
SignUpClicked: false,
SignInClicked: false,
};
this.SignUpClick = this.SignUpClick.bind(this);
this.SignInClick = this.SignInClick.bind(this);
}
SignUpClick() {
this.setState({
SignUpClicked: true,
});
}
SignInClick() {
this.setState({
SignInClicked: true,
});
}
render () {
return (
<Child />
)
}
}
```
**Child Component**
```
export default class Child extends Component {
render () {
if (this.props.SignUpClicked) {
return(
<SignUp />
) } else if (this.props.SignInClicked) {
return (
<SignIn />
)
} else {
return (
<div>
<button onClick={this.SignUpClick.bind(this)}>Sign Up</button>
<button onClick={this.SignInClick.bind(this)}>Sign In</button>
</div>
)
}
}
}
```<issue_comment>username_1: Change the render method in parent class to pass SignUpClick and SignInClick to child.
```
return (
<Child
SignUpClicked={this.state.SignUpClicked}
SignInClicked={this.state.SignInClicked}
SignUpClick={this.SignUpClick}
SignInClick={this.SignInClick}
/>
)
```
Also, in the child class, access the methods as this.props.SignUpClick and this.props.SignInClick
Upvotes: 2 <issue_comment>username_2: If you think about it, where would the child component grab the references for those functions? The parent needs to give the child access to those methods, and how does a parent pass data to a child? With props! In your case you are not passing any props at all to the child, so here is how your parent render method should look:
```
render () {
return (
<Child
signUpClicked={this.state.SignUpClicked}
signInClicked={this.state.SignInClicked}
onSignUpClick={this.SignUpClick}
onSignInClick={this.SignInClick}
/>
);
}
```
This is how a parent communicates with a child passing down props that are now accessible with `this.props`. At this point your Child component render method would look like this:
```
render () {
const {
signUpClicked,
signInClicked,
onSignUpClick,
onSignInClick,
} = this.props;
if (signUpClicked) {
return (<SignUp />);
} else if (signInClicked) {
return (<SignIn />);
} else {
return (
<div>
<button onClick={onSignUpClick}>Sign Up</button>
<button onClick={onSignInClick}>Sign In</button>
</div>
)
}
}
```
I used destructuring to help with readability. Hope this helps!
Upvotes: 2 [selected_answer] |
2018/03/19 | 489 | 1,485 | <issue_start>username_0: I am following this tutorial to publish a topic to Pub/Sub from a golang project and here's the code I have for that project at the moment:
```
package main
import "cloud.google.com/go/pubsub"
import "fmt"
func main() {
fmt.Printf("hello, world\n")
}
```
All it does is simply imports the pubsub but when I run `go get` I get this error: `undefined: ocgrpc.NewClientStatsHandler`
```
C:\Users\iha001\Dev\golang-projects\src\github.com\naguibihab\golang-playarea\src\gcloud>go get
# cloud.google.com/go/pubsub
..\..\..\..\..\cloud.google.com\go\pubsub\go18.go:34:51: undefined: ocgrpc.NewClientStatsHandler
```
Is there anything else I need to install to get this running?<issue_comment>username_1: It seems to have been an issue on the repo:
>
> @naguibihab This is NOT a windows issue. This commit fixes the problem
> be072a5. Short explanation: breaking changes where pushed on a minor
> release of a google pubsub dependency:
> census-instrumentation/opencensus-go@ac82455, method
> NewClientStatsHandler was deleted. (They don't claim anywhere they
> comply with semver).
>
>
>
Here's the fix mentioned in that comment: <https://github.com/GoogleCloudPlatform/google-cloud-go/commit/be072a5d1d73144ae0ce1071e9bd43d1ad221581>
Upvotes: 1 [selected_answer]<issue_comment>username_2: I was having the same issue on mac, using "cloud.google.com/go/pubsub" version 0.19.0. The fix for me was bumping the version down to 0.18.0.
Upvotes: 1
<issue_start>username_0: I'm implementing an algorithm to get the factorial of a certain number for a programming class.
```
fn factorial(number: u64) -> u64 {
if number < 2 {
1
} else {
number * factorial(number - 1)
}
}
```
When I tried with 100 or even with 25 I get this error `"thread 'main' panicked at 'attempt to multiply with overflow'"`, so I tried wrapping, and the result function was:
```
fn factorial(number: u64) -> u64 {
if number < 2 {
1
} else {
number.wrapping_mul(factorial(number - 1))
}
}
```
This way there is no panic, but the result is always zero, so I tried using `f64` and the result was
100! = 93326215443944100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
instead of
100! = 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
Is there another way to store the result so the right value is returned?<issue_comment>username_1: 100! is a *really* big number. In fact, the largest factorial that will fit in a `u64` is just 20!. For numbers that don't fit in a `u64`, [`num::bigint::BigUint`](https://huonw.github.io/primal/num/bigint/struct.BigUint.html) is an appropriate storage option.
The following code calculates a value for 100!. You can run it in your browser [here](https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=0d709ccaf80f0976ec04239e07e7e84f).
```
extern crate num;
use num::BigUint;
fn factorial(number: BigUint) -> BigUint {
let big_1 = 1u32.into();
let big_2 = 2u32.into();
if number < big_2 {
big_1
} else {
let prev_factorial = factorial(number.clone() - big_1);
number * prev_factorial
}
}
fn main() {
let number = 100u32.into();
println!("{}", factorial(number));
}
```
To give some insight into why `u64` doesn't work, you can call the `bits` method on the result. If you do so, you will find that the value of 100! requires **525** bits to store. That's more than 8 `u64`'s worth of storage.
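As a quick cross-check outside Rust, Python's arbitrary-precision integers confirm both figures (this is just a sanity check, not part of the Rust solution):

```python
import math

# 100! needs 525 bits -- far more than the 64 bits a u64 provides.
print(math.factorial(100).bit_length())  # 525

# 20! is the largest factorial that still fits in a u64 ...
print(math.factorial(20) < 2**64)  # True
# ... while 21! already overflows it.
print(math.factorial(21) < 2**64)  # False
```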
Upvotes: 5 [selected_answer]<issue_comment>username_2: I wanted to complement @username_1 answer with an iterative solution using [`Iterator::fold`](https://doc.rust-lang.org/std/iter/trait.Iterator.html):
```rust
extern crate num;
use num::{bigint::BigUint, One};
fn factorial(value: u32) -> BigUint {
(2..=value).fold(BigUint::one(), |res, n| res * n)
}
fn main() {
let result = factorial(10);
assert_eq!(result, 3628800u32.into());
}
```
Upvotes: 1
<issue_start>username_0: I want a regex to match something like `_Hello_` or `_Hell No_`
So the requirements are `_` on both sides and some text in between. However the text cannot start with or end with whitespace (they can however contain whitespace inside the text).
I have tried `_[\S]+.*[\S]+_` but this fails to match when i have less than 2 characters in the text eg `_H_`
Please help<issue_comment>username_1: You can just treat the single-character case separately:
```
_(\S.*\S|\S)_
```
You probably want to replace `.*` with `.*?` to make it non-greedy. Otherwise, you'll fail to properly match multiple instances of this pattern in a row:
```
_foo_ _bar_
^^^^^^^^^
one match
```
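As a quick sanity check of the non-greedy variant (shown here with Python's `re` module; any flavor with lazy quantifiers behaves the same):

```python
import re

# Non-greedy version of the pattern above.
pattern = re.compile(r"_(\S.*?\S|\S)_")

# Multi-word text, inner spaces, and single characters all match.
print(pattern.findall("_Hello_ _Hell No_ _H_"))  # ['Hello', 'Hell No', 'H']

# Leading/trailing whitespace inside the underscores is rejected.
print(pattern.findall("_ nope _"))  # []
```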
Upvotes: 1 [selected_answer]<issue_comment>username_2: Try the following pattern:
```
_\S(.*\S)?_
```
This pattern allows for a single non whitespace character. It also allows for more characters beyond this, but they must end with a non whitespace character.
Upvotes: 1 <issue_comment>username_3: How about this:
```
_[^\s+]?.*[^\s+]_
```
Using `^` inside the brackets means NOT any whitespace.
`?` means zero or one of (any character that's not whitespace)
`.*` means any number of any characters
Upvotes: 0 <issue_comment>username_4: I would suggest the following pattern:
```
^_.+_$
```
This will match the first three lines but not lines 4 and 5:
```
_Hello_
_Hello_Hello_
_Hello Hello_
 _Hello_
_Hello_ 
       ^---Trailing space here
```
Upvotes: 0
<issue_start>username_0: The following code is using page control to display images. I would like to use the same loop to display a single element of the array on each page. Right now the code displays a,b,c on all of the pages. I want it to display just one letter, so page 1 shows a, page 2 shows b, etc.
```
@IBOutlet var lz: UILabel!
var judo = ["a","b","c"]
var output = ""
override func viewDidLoad() {
super.viewDidLoad()
scrol.delegate = self
for image in 0...2 {
output += " \(judo[image])"
let imageTo = UIImage(named: "\(image).png")
let imageView = UIImageView(image: imageTo)
let xCord = view.frame.midX + view.frame.width * CGFloat(image)
contenetWidth += view.frame.width
scrol.addSubview(imageView)
}
lz.text = output
}
```<issue_comment>username_1: Try this
```
for image in 0...2 {
output += " \(judo[image])"
...
let xCord = view.frame.midX + view.frame.width * CGFloat(image)
let label = UILabel.init(frame: CGRect(x:xCord,y:0,width:50,height:30))
label.text = output
scrol.addSubview(label)
...
}
```
Upvotes: 0 <issue_comment>username_2: Use `UIScrollViewDelegate` method `scrollViewWillEndDragging`
```
func scrollViewWillEndDragging(_ scrollView: UIScrollView, withVelocity velocity: CGPoint, targetContentOffset: UnsafeMutablePointer) {
let index = targetContentOffset.pointee.x / view.frame.width
self.lz.text = self.judo[Int(index)]
}
```
Upvotes: 2 [selected_answer]
<issue_start>username_0: As you can see on my website: <http://erdemcalikoglu.cf/> I have the menu at the top of the screen, but my problem is: when I press the 2nd and the 4th button the content is not centered. Can you help me with that please? [I want this when I press the SERVICES button on menu](https://i.stack.imgur.com/irmF2.jpg)
[but it looks like this](https://i.stack.imgur.com/YPdvD.jpg)<issue_comment>username_1: You did not provide enough info but by using Inspect Element, you need to add a width of 100% to your text or the container that it is in. Right now, it's centering the text in the provided width - which is dynamic upon your text. When you make the width 100%, it's setting the text container to fit the screen size 100%.
Add this to your CSS:
```
.servis p {
width: 100%;
}
```
Upvotes: 0 <issue_comment>username_2: Remove the padding left from .iletisim p, Corrected code below
```
.iletisim p {
position: absolute;
text-align: left;
color: white;
font-family: 'Lobster', cursive;
z-index: 1;
font-size: 30px;
color: #000;
text-align: center;
padding-left: 0px; }
```
try this
Upvotes: 2
<issue_start>username_0: Saw this question in an interview repository. Is there a way for it to be done?
My guess is, it is a trick question. There is no way to do it in constant space, as strings are immutable.
<issue_start>username_0: I have a C# .NET program (4.7.1) in which I want to use the default system proxy if one is available.
When I put the following code in my App.Config file:
```
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.net>
    <!-- assumed reconstruction: a typical defaultProxy section -->
    <defaultProxy>
      <proxy usesystemdefault="true" />
    </defaultProxy>
  </system.net>
  ... rest of the file
```
The application crashes on startup in KERNELBASE.dll with no error and exits immediately.
I have setup a proxy on localhost using fiddler (to do some testing)
I can find the following error in the event logs which is not very useful:
```
Faulting application name: myprogram.exe, version: 0.01.6652.23883, time stamp: 0x5aaf246f
Faulting module name: KERNELBASE.dll, version: 10.0.16299.15, time stamp: 0x2cd1ce3d
Exception code: 0xe0434352
Fault offset: 0x001008b2
Faulting process id: 0x1220
Faulting application start time: 0x01d3bf2e95ca9d05
Faulting application path: C:\source\myprogram.exe
Faulting module path: C:\Windows\System32\KERNELBASE.dll
Report Id: 5a60273b-637f-4dac-ae09-5539fb563884
Faulting package full name:
Faulting package-relative application ID:
```
Any ideas where I am going wrong and how to get default proxy working in a C# .NET program?<issue_comment>username_1: According to [the docs](https://learn.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/network/proxy-element-network-settings):
>
> The `proxy` element defines a proxy server for an application. If this element is missing from the configuration file, then the .NET Framework will use the `proxy` settings in Internet Explorer.
>
>
>
I would venture to guess that the machine you are attempting to run this on doesn't have Internet Explorer, which is causing the crash.
In any case, it would make sense to add the proxy server settings to ensure that your application *will* run on a machine without Internet Explorer installed.
```
<system.net>
  <defaultProxy>
    <!-- assumed example values: point this at your actual proxy -->
    <proxy usesystemdefault="false"
           proxyaddress="http://proxyserver:8080"
           bypassonlocal="true" />
  </defaultProxy>
</system.net>
```
If you want to *detect* a proxy, there is no way to do it using `app.config` because that functionality doesn't exist in .NET. Instead, you have to do something along the lines of:
```
WebProxy proxy = (WebProxy) WebRequest.DefaultWebProxy;
if (proxy.Address.AbsoluteUri != string.Empty)
{
Console.WriteLine("Proxy URL: " + proxy.Address.AbsoluteUri);
wc.Proxy = proxy; // wc: a WebClient instance created earlier
}
```
Reference: [C# auto detect proxy settings](https://stackoverflow.com/a/6279271)
Upvotes: 2 <issue_comment>username_2: This is what I ended up doing in the end (basically what username_1 suggested) with GetSystemWebProxy added
```
var proxy = WebRequest.GetSystemWebProxy();
_webRequestHandler = new WebRequestHandler { ClientCertificateOptions = ClientCertificateOption.Automatic };
_webRequestHandler.Proxy = proxy;
_client = new HttpClient(_webRequestHandler);
_client.BaseAddress = new Uri(connectionUrl);
_client.Timeout = new TimeSpan(0,0,0,timeoutSeconds);
```
Upvotes: 1 [selected_answer]
<issue_start>username_0: Hi, I am currently getting an "illegal string offset" error. I have already searched here, and I know that you get that warning if you treat a string as if it were an array, but I am certain that I am using it as an array. Can anybody help me? Thanks.
```
$data2 = array('EquipmentName' => $this->input->post('txt_equipb'),
'EquipmentType' => $this->input->post('txt_equiptype'),
'RequirementID' => $id2);
foreach($data2 as $d) {
$data2s = array('EquipmentName' => $d['EquipmentName'],
'EquipmentType' => $d['EquipmentType'],
'RequirementID' => $d['RequirementID']);
}
```<issue_comment>username_1: If you're just gonna loop once to set array values to a variable, you may just simply do it this way without using a foreach loop.
```
$data2 = array('EquipmentName' => $this->input->post('txt_equipb'),
'EquipmentType' => $this->input->post('txt_equiptype'),
'RequirementID' => $id2);
$data2s = $data2;
print_r( $data2s ); // check if has values
```
Upvotes: 0 <issue_comment>username_2: You've misunderstood the meaning of foreach. (sigh)
Suggestions Provided:
just `var_dump($d);` before assignment of $data2s, and you'll know the result.
------------------------------------------------------------------------------
In the foreach, as you can see, each `$d` is only a value from `$data2`, which means that in every assignment of `$data2s` there is no key such as `'EquipmentName'`; each `$d` is just a simple string.
Upvotes: 2 [selected_answer]
<issue_start>username_0: When I copy Java Code A to a Kotlin project in Android Studio 3.0.1, Code A is converted to Code B automatically.
And I added `override` to `fun onMenuItemClick(item: MenuItem)` in Code B, following the hint from Android Studio 3.0.1.
But I still get the error "Expecting member declaration" in Code B. What is wrong with my Kotlin Code B?
**Code A**
```
import android.support.v7.widget.PopupMenu;
public static void showPopup(View v, final Context mContext) {
PopupMenu popup = new PopupMenu(mContext, v);
popup.inflate(R.menu.menu_more);
popup.setOnMenuItemClickListener(new PopupMenu.OnMenuItemClickListener() {
public boolean onMenuItemClick(MenuItem item) {
return HandleMenu(item, mContext);
}
});
popup.show();
}
```
**Code B**
```
import android.support.v7.widget.PopupMenu;
fun showPopup(v: View, mContext: Context) {
val popup = PopupMenu(mContext, v)
popup.inflate(R.menu.menu_more)
popup.setOnMenuItemClickListener(object : PopupMenu.OnMenuItemClickListener() {
fun override onMenuItemClick(item: MenuItem): Boolean {
return HandleMenu(item, mContext)
}
})
popup.show()
}
```<issue_comment>username_1: You could simply replace this:
```
popup.setOnMenuItemClickListener(object : PopupMenu.OnMenuItemClickListener() {
fun override onMenuItemClick(item: MenuItem): Boolean {
return HandleMenu(item, mContext)
}
})
```
with this:
```
popup.setOnMenuItemClickListener { item -> HandleMenu(item, mContext) }
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: It should be `override fun` instead of `fun override`.
Also you can make use of Kotlin SAM and simplify it to `popup.setOnMenuItemClickListener { item -> HandleMenu(item, mContext) }` as the IDE suggests.
Upvotes: 2
<issue_start>username_0: I want to create the following function
```
Left_padded(n, width)
```
That returns, for example:
```
Left_padded(6, 4):
'   6' #number 6 in a 4-character space
```
```
Left_padded(54, 5)
'   54' #number 54 in a 5-character space
```<issue_comment>username_1: You can use [`rjust`](https://docs.python.org/3/library/stdtypes.html#str.rjust):
```
>>> def Left_padded(n, width):
...     return str(n).rjust(width)
>>> Left_padded(54, 5)
'   54'
```
Upvotes: 2 <issue_comment>username_2: Assuming you want to put the number next to another string, you can also use [`%`](https://pyformat.info/#string_pad_align) formatting to achieve the same result:
```
>>> w1 = "your number is:"
>>> num = 20
>>> line = '%s%10s' % (w1, num)
>>> print(line)
your number is:        20
```
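On Python 3.6+ the same right-alignment can also be written with an f-string width specifier (function name kept from the question):

```python
def Left_padded(n, width):
    # '>' right-aligns n in a field 'width' characters wide.
    return f"{n:>{width}}"

print(repr(Left_padded(6, 4)))   # '   6'
print(repr(Left_padded(54, 5)))  # '   54'
```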
Upvotes: 1
<issue_start>username_0: Apologies, I'm very new to unix. I've searched for the answer, but to be honest would probably not recognise it at this stage of my unix journey.
I have a tab-delimited file (large - 800 columns by 5000 rows).
I would like to change a word to a number every time that word is found, but only within a range of columns. If the word is matched in other columns it remains unmodified.
i.e. change x to 9 but only in columns 2,3, and 4.
Input file:
```
1 2 x 4 5 6
1 x 3 4 5 6
1 2 x 4 x 6
1 x 3 x x 6
```
Expected output file:

```
1 2 9 4 5 6
1 9 3 4 5 6
1 2 9 4 x 6
1 9 3 9 x 6
```
Within the real input file, I need to modify a large number of columns within a continuous range (i.e. columns 7-487).
Any help welcome.
Cheers
GTed<issue_comment>username_1: A straight-forward way to do that,
```
awk '$2=="x"{$2=9}$3=="x"{$3=9}$4=="x"{$4=9}1' file
```
Brief explanation:
* Change `$2`, `$3`, and `$4` to 9 if the field is "x"
* The appended `1` is an always-true condition, so awk performs its default action of printing the line
Following up on the OP's further request, the answer can be modified to handle a range of fields:
```
awk '{for(i=2;i<=4;i++)$i=($i=="x")?9:$i}1' file
```
Upvotes: 2 <issue_comment>username_2: The following makes the replacement on columns 2, 3, and 4 as requested:
```
awk '{ for (i = 2; i <= 4; i++)
if ($i == "x") $i = 9
print}' InputFile.txt
```
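If awk isn't a hard requirement, the same column-limited substitution is easy to sketch in Python (assuming tab-separated input, as in the question):

```python
import sys

FIRST, LAST = 2, 4  # 1-based range of columns to modify

def fix_line(line: str, sep: str = "\t") -> str:
    """Replace 'x' with '9', but only in columns FIRST..LAST."""
    fields = line.split(sep)
    for i in range(FIRST - 1, min(LAST, len(fields))):
        if fields[i] == "x":
            fields[i] = "9"
    return sep.join(fields)

if __name__ == "__main__":
    # Filter stdin to stdout, like the awk one-liners above.
    for raw in sys.stdin:
        print(fix_line(raw.rstrip("\n")))
```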
Upvotes: 1 <issue_comment>username_3: To make this answer generic, the following may help. I am assuming a few things:
i- Let's say you want to replace the strings `x`, `y`, `z` at places `2`, `3`, `4` on each line with `9`, `11`, `12` respectively.
ii- The number of strings to be replaced and the number of replacement strings are always the same.
iii- The following Input\_file is used to test that the code behaves well.
Let's say the following is a test Input\_file:
```
cat Input_file
1 2 x 4 5 6
1 x 3 4 5 6
1 2 x 4 x 6
1 x 3 x x 6
1 z 6 y 6 y
```
Following is the code for same.
```
awk -v strings="x,y,z" -v places="2,3,4" -v replaces="9,11,12" '
BEGIN{
num=split(strings, a,",");
num1=split(places, b,",");
num2=split(replaces, c,",")}
{
for(i=1;i<=NF;i++){
for(j=1;j<=num1;j++){
if($i==a[j]){
$i=c[j]}}}
}
1
' Input_file
```
Output will be as follows (you could also add `OFS="\t"` in the above code to get TAB-delimited output):
```
1 2 9 4 5 6
1 9 3 4 5 6
1 2 9 4 9 6
1 9 3 9 9 6
1 12 6 11 6 11
```
Upvotes: 0 <issue_comment>username_4: `awk` to the rescue!
Specify the columns for replacement as a space-delimited list in the `cols` variable.
```
$ awk -v cols='2 3 4' 'BEGIN{split(cols,c)}
{for(i in c) sub("x",9,$c[i])}1' file
1 2 9 4 5 6
1 9 3 4 5 6
1 2 9 4 x 6
1 9 3 9 x 6
```
**UPDATE**
To give a range of columns instead, you can use below
```
$ awk -v cols='2-4' 'BEGIN{split(cols,c,"-")}
{for(i=c[1];i<=c[2];i++) sub("x",9,$i)}1' file
```
**NB** To get tab delimited output set `-v OFS='\t'`
Upvotes: 0
<issue_start>username_0: I'm trying to use TypeScript's compiler API to perform very basic type inference, but I couldn't find anything helpful from the documentation or google search.
Essentially, I want to have a function `inferType` that takes a variable and return its inferred definition:
```
let bar = [1, 2, 3];
let bar2 = 5;
function foo(a: number[], b: number) {
return a[0] + b;
}
inferType(bar); // => "number[]"
inferType(bar2); // => "number"
inferType(foo); // "(number[], number) => number"
```
Is there anyway I can achieve this through the compiler API? If not, is there anyway I can achieve this any other way?<issue_comment>username_1: >
> Is there anyway I can achieve this through the compiler API
>
>
>
The compiler API can let you inspect *code* when *code is a string*. e.g.
```
const someObj = ts.someApi(`
// Code
let bar = [1, 2, 3];
let bar2 = 5;
function foo(a: number[], b: number) {
return a[0] + b;
}
`);
// use someObj to infer things about the code
```
>
> If not, is there anyway I can achieve this any other way?
>
>
>
Use `typeof` although its [significantly limited](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/typeof).
Alternatively load *self* code using Node.js's `__filename` (will only work in Node, and only if running through `ts-node`, i.e. raw TS): <https://nodejs.org/api/globals.html#globals_filename>.
Upvotes: 0 <issue_comment>username_2: Another way is to use decorators.
```
function inspectType(target: Object, propKey: string): any {
}
class MyClass {
@inspectType foo: number;
@inspectType elem: HTMLElement;
}
console.info(Reflect.getMetadata("design:type", MyClass.prototype, "foo")); // Constructor of the Number
console.info(Reflect.getMetadata("design:type", MyClass.prototype, "elem")); // Constructor of the HTMLElement
```
Note, to make it work its needed enable options in config:
```
"compilerOptions": {
"target": "ES5",
"experimentalDecorators": true,
"emitDecoratorMetadata": true
}
```
And use the [reflect-metadata](https://www.npmjs.com/package/reflect-metadata) polyfill. More details on decorators in [my article (rus)](https://habrahabr.ru/company/docsvision/blog/310870/).
Upvotes: 0 <issue_comment>username_3: *Option 1*
----------
You can use the compiler API to achieve this by using an emit transformer. The emit transformer receives the AST during the emit process and it can modify it. Transformers are used internally by the compiler to transform the TS AST into a JS AST. The resulting AST is then written to file.
What we will do is create a transformer that, when it encounters a function named `inferType` it will add an extra argument to the call that will be the typescript type name.
**transformation.ts**
```
import * as ts from 'typescript'
// The transformer factory
function transformer(program: ts.Program): ts.TransformerFactory<ts.SourceFile> {
let typeChecker = program.getTypeChecker();
function transformFile(program: ts.Program, context: ts.TransformationContext, file: ts.SourceFile): ts.SourceFile {
function visit(node: ts.Node, context: ts.TransformationContext): ts.Node {
// If we have a call expression
if (ts.isCallExpression(node)) {
let target = node.expression;
// that calls inferType
if(ts.isIdentifier(target) && target.escapedText == 'inferType'){
// We get the type of the argument
var type = typeChecker.getTypeAtLocation(node.arguments[0]);
// And then we get the name of the type
var typeName = typeChecker.typeToString(type)
// And we update the original call expression to add an extra parameter to the function
return ts.updateCall(node, node.expression, node.typeArguments, [
... node.arguments,
ts.createLiteral(typeName)
]);
}
}
return ts.visitEachChild(node, child => visit(child, context), context);
}
const transformedFile = ts.visitEachChild(file, child => visit(child, context), context);
return transformedFile;
}
return (context: ts.TransformationContext) => (file: ts.SourceFile) => transformFile(program, context, file);
}
// Compile a file
var cmd = ts.parseCommandLine(['test.ts']);
// Create the program
let program = ts.createProgram(cmd.fileNames, cmd.options);
//Emit the program with our extra transformer
var result = program.emit(undefined, undefined, undefined, undefined, {
before: [
transformer(program)
]
} );
```
**test.ts**
```
let bar = [1, 2, 3];
let bar2 = 5;
function foo(a: number[], b: number) {
return a[0] + b;
}
function inferType<T>(arg: T, typeName?: string) {
return typeName;
}
inferType(bar); // => "number[]"
inferType(bar2); // => "number"
inferType(foo); // "(number[], number) => number"
```
**result file test.js**
```
var bar = [1, 2, 3];
var bar2 = 5;
function foo(a, b) {
return a[0] + b;
}
function inferType(arg, typeName) {
return typeName;
}
inferType(bar, "number[]"); // => "number[]"
inferType(bar2, "number"); // => "number"
inferType(foo, "(a: number[], b: number) => number"); // "(number[], number) => number"
```
**Note**
This is just a proof of concept; you would need to test it further. Also, integrating this into your build process might be non-trivial: basically you would need to replace the original compiler with this custom version that performs the transform.
*Option 2*
----------
Another option would be to use the compiler API to perform a transformation of the source code before compilation. The transformation would insert the type name into the source file. The disadvantage is that you would see the type parameter as a string in the source file, but if you include this transformation in your build process it will get updated automatically. The advantage is that you can use the original compiler and tools without changing anything.
**transformation.ts**
```
import * as ts from 'typescript'
function transformFile(program: ts.Program, file: ts.SourceFile): ts.SourceFile {
let empty = ()=> {};
// Dummy transformation context
let context: ts.TransformationContext = {
startLexicalEnvironment: empty,
suspendLexicalEnvironment: empty,
resumeLexicalEnvironment: empty,
endLexicalEnvironment: ()=> [],
getCompilerOptions: ()=> program.getCompilerOptions(),
hoistFunctionDeclaration: empty,
hoistVariableDeclaration: empty,
readEmitHelpers: ()=>undefined,
requestEmitHelper: empty,
enableEmitNotification: empty,
enableSubstitution: empty,
isEmitNotificationEnabled: ()=> false,
isSubstitutionEnabled: ()=> false,
onEmitNode: empty,
onSubstituteNode: (hint, node)=>node,
};
let typeChecker = program.getTypeChecker();
function visit(node: ts.Node, context: ts.TransformationContext): ts.Node {
// If we have a call expression
if (ts.isCallExpression(node)) {
let target = node.expression;
// that calls inferType
if(ts.isIdentifier(target) && target.escapedText == 'inferType'){
// We get the type of the argument
var type = typeChecker.getTypeAtLocation(node.arguments[0]);
// And then we get the name of the type
var typeName = typeChecker.typeToString(type)
// And we update the original call expression to add an extra parameter to the function
var argument = [
... node.arguments
]
argument[1] = ts.createLiteral(typeName);
return ts.updateCall(node, node.expression, node.typeArguments, argument);
}
}
return ts.visitEachChild(node, child => visit(child, context), context);
}
const transformedFile = ts.visitEachChild(file, child => visit(child, context), context);
return transformedFile;
}
// Compile a file
var cmd = ts.parseCommandLine(['test.ts']);
// Create the program
let host = ts.createCompilerHost(cmd.options);
let program = ts.createProgram(cmd.fileNames, cmd.options, host);
let printer = ts.createPrinter();
let transformed = program.getSourceFiles()
.map(f=> ({ o: f, n:transformFile(program, f) }))
.filter(x=> x.n != x.o)
.map(x=> x.n)
.forEach(f => {
host.writeFile(f.fileName, printer.printFile(f), false, msg => console.log(msg), program.getSourceFiles());
})
```
**test.ts**
```
let bar = [1, 2, 3];
let bar2 = 5;
function foo(a: number[], b: number) {
return a[0] + b;
}
function inferType<T>(arg: T, typeName?: string) {
return typeName;
}
let f = { test: "" };
// The type name parameter is added/updated automatically when you run the code above.
inferType(bar, "number[]");
inferType(bar2, "number");
inferType(foo, "(a: number[], b: number) => number");
inferType(f, "{ test: string; }");
```
Upvotes: 2 <issue_comment>username_4: You can play with my TypeScript Compiler API Playground example of LanguageService type checking example: <https://typescript-api-playground.glitch.me/#example=ts-type-checking-source>
Also this is node.js script that parses input typescript code and it infer the type of any symbol according on how it's used. It uses TypeScript Compiler API , creates a Program, and then the magic is just "program.getTypeChecker().getTypeAtLocation(someNode)"
Working example: <https://github.com/username_4Sgx/typescript-plugins-of-mine/blob/master/typescript-ast-util/spec/inferTypeSpec.ts>
If you are not familiar with the Compiler API, start here. Also, there are a couple of projects that could make it easier:
* <https://dsherret.github.io/ts-simple-ast/>
* <https://github.com/RyanCavanaugh/dts-dom>
good luck
Upvotes: 3
<issue_start>username_0:
```
let nums = [-1, 50, 75, 200, 350, 525, 1000];
nums.every(function(num) {
console.log(num < 0);
});
```
*true*
*=> false*
---
When I run this code in <https://repl.it/@super8989/BraveFunctionalSale>, this returns "true" then "=> false".
According to .every() description, the return value is "true if the callback function returns a truthy value for every array element; otherwise, false."
**Why does it show "true" and then "=> false"?**
---
Furthermore, when I change the array so that "- value" is in the middle of the array, it returns "false" then "=> false".
```
let nums = [1, 50, -75, 200, 350, 525, 1000];
nums.every(function(num) {
console.log(num < 0);
});
```
*false*
*=> false*
<https://repl.it/@super8989/CyberInterestingPhase>
---
```
let nums = [-1, 50, 75, 200, 350, 525, 1000];
console.log(nums.every(num => num < 0));
```
*false*
*=> undefined*
But if I write it this way, this returns false then undefined.
<https://repl.it/@super8989/MonstrousAjarDimension>
---
I'm very confused... Please help!
.map(x=> x.n)
.forEach(f => {
host.writeFile(f.fileName, printer.printFile(f), false, msg => console.log(msg), program.getSourceFiles());
})
```
**test.ts**
```
let bar = [1, 2, 3];
let bar2 = 5;
function foo(a: number[], b: number) {
return a[0] + b;
}
function inferType(arg: T, typeName?: string) {
return typeName;
}
let f = { test: "" };
// The type name parameter is added/updated automatically when you run the code above.
inferType(bar, "number[]");
inferType(bar2, "number");
inferType(foo, "(a: number[], b: number) => number");
inferType(f, "{ test: string; }");
```
Upvotes: 2 <issue_comment>username_4: You can play with my TypeScript Compiler API Playground example of LanguageService type checking example: <https://typescript-api-playground.glitch.me/#example=ts-type-checking-source>
Also this is node.js script that parses input typescript code and it infer the type of any symbol according on how it's used. It uses TypeScript Compiler API , creates a Program, and then the magic is just "program.getTypeChecker().getTypeAtLocation(someNode)"
Working example: <https://github.com/username_4Sgx/typescript-plugins-of-mine/blob/master/typescript-ast-util/spec/inferTypeSpec.ts>
If you are not familiar with Compiler API start here. Also you have a couple of projects that could make it easier:
* <https://dsherret.github.io/ts-simple-ast/>
* <https://github.com/RyanCavanaugh/dts-dom>
good luck
Upvotes: 3 |
2018/03/19 | 2,609 | 9,592 | <issue_start>username_0: I need to click a div to make the options of a select pop up. My code is as follows.
```
<select ref="selectNative">
 <option>one</option>
 <option>two</option>
 <option>three</option>
</select>
```
I use Vue to get the DOM element by its ref, and in the div's onClick handler I dispatch a click on it:
```
this.$refs.selectNative.dispatchEvent(new MouseEvent('click', { 'bubbles': false }));
```
but it doesn't work.
I want the options to appear when the div is clicked.
[](https://i.stack.imgur.com/O3PLT.jpg)
[](https://i.stack.imgur.com/iXPYR.jpg)<issue_comment>username_1: >
> Is there anyway I can achieve this through the compiler API
>
>
>
The compiler API can let you inspect *code* when *code is a string*. e.g.
```
const someObj = ts.someApi(`
// Code
let bar = [1, 2, 3];
let bar2 = 5;
function foo(a: number[], b: number) {
return a[0] + b;
}
`);
// use someObj to infer things about the code
```
>
> If not, is there anyway I can achieve this any other way?
>
>
>
Use `typeof`, although it is [significantly limited](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/typeof).
Alternatively, load the script's own source using the Node.js `__filename` global (this will only work in Node, and only if running through `ts-node`, i.e. raw TS): <https://nodejs.org/api/globals.html#globals_filename>.
Upvotes: 0 <issue_comment>username_2: Another way is to use decorators.
```
function inspectType(target: Object, propKey: string): any {
}
class MyClass {
@inspectType foo: number;
@inspectType elem: HTMLElement;
}
console.info(Reflect.getMetadata("design:type", MyClass.prototype, "foo")); // Constructor of the Number
console.info(Reflect.getMetadata("design:type", MyClass.prototype, "elem")); // Constructor of the HTMLElement
```
Note: to make this work, you need to enable these options in the config:
```
"compilerOptions": {
"target": "ES5",
"experimentalDecorators": true,
"emitDecoratorMetadata": true
}
```
And use [reflect-metadata](https://www.npmjs.com/package/reflect-metadata) polyfil. More details on decorators in [my article (rus)](https://habrahabr.ru/company/docsvision/blog/310870/).
Upvotes: 0 <issue_comment>username_3: *Option 1*
----------
You can use the compiler API to achieve this by using an emit transformer. The emit transformer receives the AST during the emit process and it can modify it. Transformers are used internally by the compiler to transform the TS AST into a JS AST. The resulting AST is then written to file.
What we will do is create a transformer that, when it encounters a call to a function named `inferType`, adds an extra argument to the call containing the TypeScript type name.
**transformation.ts**
```
import * as ts from 'typescript'
// The transformer factory
function transformer(program: ts.Program): ts.TransformerFactory<ts.SourceFile> {
let typeChecker = program.getTypeChecker();
function transformFile(program: ts.Program, context: ts.TransformationContext, file: ts.SourceFile): ts.SourceFile {
function visit(node: ts.Node, context: ts.TransformationContext): ts.Node {
// If we have a call expression
if (ts.isCallExpression(node)) {
let target = node.expression;
// that calls inferType
if(ts.isIdentifier(target) && target.escapedText == 'inferType'){
// We get the type of the argument
var type = typeChecker.getTypeAtLocation(node.arguments[0]);
// And then we get the name of the type
var typeName = typeChecker.typeToString(type)
// And we update the original call expression to add an extra parameter to the function
return ts.updateCall(node, node.expression, node.typeArguments, [
... node.arguments,
ts.createLiteral(typeName)
]);
}
}
return ts.visitEachChild(node, child => visit(child, context), context);
}
const transformedFile = ts.visitEachChild(file, child => visit(child, context), context);
return transformedFile;
}
return (context: ts.TransformationContext) => (file: ts.SourceFile) => transformFile(program, context, file);
}
// Compile a file
var cmd = ts.parseCommandLine(['test.ts']);
// Create the program
let program = ts.createProgram(cmd.fileNames, cmd.options);
//Emit the program with our extra transformer
var result = program.emit(undefined, undefined, undefined, undefined, {
before: [
transformer(program)
]
} );
```
**test.ts**
```
let bar = [1, 2, 3];
let bar2 = 5;
function foo(a: number[], b: number) {
return a[0] + b;
}
function inferType<T>(arg: T, typeName?: string) {
return typeName;
}
inferType(bar); // => "number[]"
inferType(bar2); // => "number"
inferType(foo); // "(number[], number) => number"
```
**result file test.js**
```
var bar = [1, 2, 3];
var bar2 = 5;
function foo(a, b) {
return a[0] + b;
}
function inferType(arg, typeName) {
return typeName;
}
inferType(bar, "number[]"); // => "number[]"
inferType(bar2, "number"); // => "number"
inferType(foo, "(a: number[], b: number) => number"); // "(number[], number) => number"
```
**Note**
This is just a proof of concept; you would need to test it further. Also, integrating this into your build process might be non-trivial: basically you would need to replace the original compiler with this custom version that performs the custom transform.
*Option 2*
----------
Another option would be to use the compiler API to perform a transformation of the source code before compilation. The transformation would insert the type name into the source file. The disadvantage is that you would see the type parameter as a string in the source file, but if you include this transformation in your build process it will get updated automatically. The advantage is that you can use the original compiler and tools without changing anything.
**transformation.ts**
```
import * as ts from 'typescript'
function transformFile(program: ts.Program, file: ts.SourceFile): ts.SourceFile {
let empty = ()=> {};
// Dummy transformation context
let context: ts.TransformationContext = {
startLexicalEnvironment: empty,
suspendLexicalEnvironment: empty,
resumeLexicalEnvironment: empty,
endLexicalEnvironment: ()=> [],
getCompilerOptions: ()=> program.getCompilerOptions(),
hoistFunctionDeclaration: empty,
hoistVariableDeclaration: empty,
readEmitHelpers: ()=>undefined,
requestEmitHelper: empty,
enableEmitNotification: empty,
enableSubstitution: empty,
isEmitNotificationEnabled: ()=> false,
isSubstitutionEnabled: ()=> false,
onEmitNode: empty,
onSubstituteNode: (hint, node)=>node,
};
let typeChecker = program.getTypeChecker();
function visit(node: ts.Node, context: ts.TransformationContext): ts.Node {
// If we have a call expression
if (ts.isCallExpression(node)) {
let target = node.expression;
// that calls inferType
if(ts.isIdentifier(target) && target.escapedText == 'inferType'){
// We get the type of the argument
var type = typeChecker.getTypeAtLocation(node.arguments[0]);
// And then we get the name of the type
var typeName = typeChecker.typeToString(type)
// And we update the original call expression to add an extra parameter to the function
var argument = [
... node.arguments
]
argument[1] = ts.createLiteral(typeName);
return ts.updateCall(node, node.expression, node.typeArguments, argument);
}
}
return ts.visitEachChild(node, child => visit(child, context), context);
}
const transformedFile = ts.visitEachChild(file, child => visit(child, context), context);
return transformedFile;
}
// Compile a file
var cmd = ts.parseCommandLine(['test.ts']);
// Create the program
let host = ts.createCompilerHost(cmd.options);
let program = ts.createProgram(cmd.fileNames, cmd.options, host);
let printer = ts.createPrinter();
let transformed = program.getSourceFiles()
.map(f=> ({ o: f, n:transformFile(program, f) }))
.filter(x=> x.n != x.o)
.map(x=> x.n)
.forEach(f => {
host.writeFile(f.fileName, printer.printFile(f), false, msg => console.log(msg), program.getSourceFiles());
})
```
**test.ts**
```
let bar = [1, 2, 3];
let bar2 = 5;
function foo(a: number[], b: number) {
return a[0] + b;
}
function inferType<T>(arg: T, typeName?: string) {
return typeName;
}
let f = { test: "" };
// The type name parameter is added/updated automatically when you run the code above.
inferType(bar, "number[]");
inferType(bar2, "number");
inferType(foo, "(a: number[], b: number) => number");
inferType(f, "{ test: string; }");
```
Upvotes: 2 <issue_comment>username_4: You can play with my TypeScript Compiler API Playground example of LanguageService type checking: <https://typescript-api-playground.glitch.me/#example=ts-type-checking-source>
There is also a Node.js script that parses input TypeScript code and infers the type of any symbol according to how it is used. It uses the TypeScript Compiler API, creates a Program, and then the magic is just "program.getTypeChecker().getTypeAtLocation(someNode)".
Working example: <https://github.com/username_4Sgx/typescript-plugins-of-mine/blob/master/typescript-ast-util/spec/inferTypeSpec.ts>
If you are not familiar with the Compiler API, start here. Also, here are a couple of projects that could make it easier:
* <https://dsherret.github.io/ts-simple-ast/>
* <https://github.com/RyanCavanaugh/dts-dom>
good luck
Upvotes: 3 |
2018/03/19 | 1,378 | 2,889 | <issue_start>username_0: I have a huge data set in R. Each observation has a categorical label to it and a numerical value in this case a mass.
I’m looking to find summary statistics (Mean, Median, Mode) for my mass values grouped by each subset label I have.
I’m completely stumped so any help would be appreciated.
A snippet of the data is
```
Order_or_higher First_appearance_mya Last_appearance_mya Mass_kg
Rodentia -13.9 -11.3 0.006665867
Rodentia -11.8 -7.5 0.005259311
Rodentia -14.4 -14.4 0.036379302
Rodentia -16.7 -13.7 0.056373546
Rodentia -14.1 -14.1 0.008149854
Rodentia -28.4 -20.3 0.009393331
Rodentia -2.4 -2.4 0.02126367
Rodentia -0.9 0 0.014909521
Rodentia -3.8 -3.7 0.027798999
Rodentia -2.8 -0.5 0.01889694
Rodentia -1.6 -1.6 0.017115766
Carnivora -5.8 -5.7 63.51300709
Carnivora -17.4 -14.5 281.8132792
Carnivora -20.1 -15.5 130.4832311
```
With many many more categorical values<issue_comment>username_1: The dplyr package has these functions and is designed for these tasks.
Assume that `d` is your dataset.
```
d %>%
  group_by(Order_or_higher) %>%
  summarise(mean = mean(Mass_kg),
            median = median(Mass_kg),
            mode = ModeFunction(Mass_kg))
```
Where you'd need to define `ModeFunction` yourself; username_2's function below works well and simply.
Upvotes: 1 <issue_comment>username_2: There are a lot of ways you can do this in R. Here is one approach using `tidyverse`. But first, take note that the function `mode()` in R does not return your *mode estimate*. To learn more about the `mode()` function, type `?mode` in your console. So we have to create a function that returns the mode, i.e. the most frequent value. Counting occurrences with `tabulate(match(x, uniqx))` keeps the counts aligned with the appearance order of `unique(x)` (indexing `table(x)` directly can pick the wrong element, because `table()` sorts its values).
```
Mode <- function(x) {
  uniqx <- unique(x)
  uniqx[which.max(tabulate(match(x, uniqx)))]
}
```
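If you want to sanity-check the "first value with the highest frequency" behaviour outside R, the same idea is easy to express in Python (this is only an illustration of the logic, not part of the R solution):

```python
def mode_first(values):
    # Mirror of the R helper: unique values in order of appearance,
    # then the first one with the maximal count.
    uniq = list(dict.fromkeys(values))
    return max(uniq, key=values.count)

print(mode_first([1, 2, 2, 3]))     # 2
print(mode_first(["a", "b", "a"]))  # a
```

Like `which.max`, `max` here returns the first value that reaches the maximal count, so ties resolve to the earliest-seen value.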
Let's now apply this new function and the existing built-in functions in R.
```
tt <- "Rodentia -13.9 -11.3 0.006665867
Rodentia -11.8 -7.5 0.005259311
Rodentia -14.4 -14.4 0.036379302
Rodentia -16.7 -13.7 0.056373546
Rodentia -14.1 -14.1 0.008149854
Rodentia -28.4 -20.3 0.009393331
Rodentia -2.4 -2.4 0.02126367
Rodentia -0.9 0 0.014909521
Rodentia -3.8 -3.7 0.027798999
Rodentia -2.8 -0.5 0.01889694
Rodentia -1.6 -1.6 0.017115766
Carnivora -5.8 -5.7 63.51300709
Carnivora -17.4 -14.5 281.8132792
Carnivora -20.1 -15.5 130.4832311"
df <- read.table(text = tt, header = F)
library(tidyverse)
df %>%
group_by(V1) %>%
summarise_at(vars(V2:V4), funs(mean, median, Mode))
```
And here is the output:
```
# V1 V2_mean V3_mean V4_mean V2_median V3_median V4_median V2_Mode V3_Mode V4_Mode
#   <fct>     <dbl>   <dbl>   <dbl>     <dbl>     <dbl>     <dbl>   <dbl>   <dbl>   <dbl>
# 1 Carniv… -14.4 -11.9 1.59e+2 -17.4 -14.5 130. -5.80 -5.70 6.35e+1
# 2 Rodent… -10.1 -8.14 2.02e-2 -11.8 -7.50 0.0171 -13.9 -11.3 6.67e-3
```
Upvotes: 2 |
2018/03/19 | 640 | 2,068 | <issue_start>username_0: Let's say I have a struct with a bool field
```
struct pBanana {
bool free;
};
```
Now I have another struct that contains a vector of the pBanana struct
```
struct handler_toto {
    std::vector<pBanana> listBananas;
};
```
Now I would love to return how many times the **boolean free** is false in the listBananas
```
int someFunction (mainHandler& gestionnaire)
{
return std::count(gestionnaire.listBananas.begin(), gestionnaire.listBananas.end(), gestionnaire.listBananas.operator[/*How to do it here */]);
}
```
After consulting the [documentation](http://en.cppreference.com/w/cpp/container/vector) I'm having difficulties understanding how to use `operator[]` properly in my case<issue_comment>username_1: That's because you can't use `operator[]` to do this.
You could do this using `std::count_if` (not `count`) and a *lambda function* a.k.a. *anonymous function*:
```
return std::count_if(
gestionnaire.listBananas.begin(),
gestionnaire.listBananas.end(),
    [](const pBanana &b) {return !b.free;});
```
`std::count_if` will call your function for each item in the list, and count the number of times it returns `true`.
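For reference, here is the same idea as a complete, self-contained helper (the `pBanana` struct is copied from the question; the helper name is made up for the example):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct pBanana {
    bool free;
};

// Count the bananas whose `free` flag is still false.
int countNotFree(const std::vector<pBanana>& bananas) {
    return static_cast<int>(std::count_if(
        bananas.begin(), bananas.end(),
        [](const pBanana& b) { return !b.free; }));
}
```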
Upvotes: 3 [selected_answer]<issue_comment>username_2: Using `std::count_if()`, like demonstrated in immibis's answer, is the best option.
To answer your question, to use `operator[]` properly, you have to loop through the vector manually, eg:
```
int someFunction (mainHandler& gestionnaire) {
    int count = 0;
    std::vector<pBanana> &bananas = gestionnaire.listBananas;
    size_t size = bananas.size();
    for(size_t i = 0; i < size; ++i) {
        if (!bananas[i].free) // <-- operator[] called here
            ++count;
    }
    return count;
}
```
Or, you can use iterators instead and not use `operator[]` at all:
```
int someFunction (mainHandler& gestionnaire) {
    int count = 0;
    std::vector<pBanana> &bananas = gestionnaire.listBananas;
    for(std::vector<pBanana>::iterator iter = bananas.begin(); iter != bananas.end(); ++iter) {
        if (!iter->free)
            ++count;
    }
    return count;
}
```
Upvotes: 1 |
2018/03/19 | 377 | 1,263 | <issue_start>username_0: I have a table like this
```
id alias_word language original
1 word1 es changed_word
2 word1 en orig_word
3 word1 fr changed_word
4 word2 de other_original
```
Supposing reference column `language` = `es`
How can I write a query that returns all rows where the `alias_word` column = `word1` and the `original` column is different from the `original` value of the row where `language` = `es`?
Expected result:
`2 word1 en orig_word`
I have tried this and got an empty result:
```
SELECT * FROM words WHERE alias_word = 'word1' AND original <> original
```<issue_comment>username_1: `AND original <> original` will always return nothing. I don't quite follow what you are doing with that `original` column, but I think you want rows that do not have the word 'original' in the `original` column:
```
SELECT *
FROM words
WHERE alias_word = 'word1'
AND original <> 'original'
AND language = 'es'
```
Upvotes: 0 <issue_comment>username_2: Try using a self join:
```
SELECT w2.*
FROM words w1
INNER JOIN words w2
ON w1.alias_word = w2.alias_word AND
w1.original <> w2.original
WHERE
w1.language = 'es';
```
[Demo
----](http://rextester.com/QHWWYF53290)
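To convince yourself that the self-join returns exactly the expected row, you can replay it against an in-memory SQLite copy of the sample table (the join syntax used here works in both MySQL and SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (id INTEGER, alias_word TEXT, language TEXT, original TEXT)")
conn.executemany("INSERT INTO words VALUES (?, ?, ?, ?)", [
    (1, "word1", "es", "changed_word"),
    (2, "word1", "en", "orig_word"),
    (3, "word1", "fr", "changed_word"),
    (4, "word2", "de", "other_original"),
])

# w1 is pinned to the reference row (language = 'es'); w2 collects the
# rows sharing its alias_word but carrying a different original value.
rows = conn.execute("""
    SELECT w2.*
    FROM words w1
    INNER JOIN words w2
      ON w1.alias_word = w2.alias_word AND w1.original <> w2.original
    WHERE w1.language = 'es'
""").fetchall()
print(rows)  # [(2, 'word1', 'en', 'orig_word')]
```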
Upvotes: 2 [selected_answer] |
2018/03/19 | 719 | 2,141 | <issue_start>username_0: So I have a problem on a homework assignment that asks for a text file to be used to create a dictionary containing keys as students names and corresponding values as NumPy 1-dimensional Matrixes.
The text file would be formatted as follows:
```
John 23 53 54 56 58
Jane 89 54 56 76 93
Marie-Claire 56 68 76 86 92
```
All names are one word or a hyphened name and all students have the same number of grades on their line of the file. The problem is I can't figure out how to use just the first word (the name) of each line in the text file to be the key. This was my attempt:
```
def student_grade(filename):
with open('filename','r') as file:
Grade_Dict = {}
for line in file:
words = line.split(' ')
Grade_Dict[words[0]]= np.array(words[1:])
```
note: the problem asks for the dictionary to be made and the file to be read within a function.
I'm just confused on how to test if I got any of this code correct or not.<issue_comment>username_1: You are very close. A couple of small changes:
### Code:
```
def student_grade(filename):
with open(filename, 'rU') as file:
grade_dict = {}
for line in file:
name = line.strip().split(' ')
grade_dict[name[0]] = np.array(name[1:])
return grade_dict
```
The filename passed to `open()` should be the variable, not a string literal: use `filename`, not `'filename'`.
### Test Code:
```
print(student_grade('file1'))
```
### Results:
```
{'John': array(['23', '53', '54', '56', '58'], dtype='<U2'), 'Jane': array(['89', '54', '56', '76', '93'], dtype='<U2'), 'Marie-Claire': array(['56', '68', '76', '86', '92'], dtype='<U2')}
```
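If you want to verify the parsing logic without creating a file on disk, the same loop works against an in-memory buffer (using `dtype=int` here so the grades come back as numbers rather than strings):

```python
import io
import numpy as np

data = io.StringIO(
    "John 23 53 54 56 58\n"
    "Jane 89 54 56 76 93\n"
    "Marie-Claire 56 68 76 86 92\n"
)

grade_dict = {}
for line in data:
    parts = line.split()  # split() with no argument also strips the newline
    grade_dict[parts[0]] = np.array(parts[1:], dtype=int)

print(sorted(grade_dict))        # ['Jane', 'John', 'Marie-Claire']
print(grade_dict["John"].sum())  # 244
```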
Upvotes: 1 <issue_comment>username_2: You can try something like this :
```
grade={}
with open('file.txt','r') as f:
for line in f:
data=line.split()
grade[data[0]]=np.array(list(map(int,data[1:])))
print(grade)
```
output:
```
{'Marie-Claire': array([56, 68, 76, 86, 92]), 'Jane': array([89, 54, 56, 76, 93]), 'John': array([23, 53, 54, 56, 58])}
```
if you don't want numbers in int then:
```
grade={}
with open('file.txt','r') as f:
for line in f:
data=line.split()
grade[data[0]]=np.array(data[1:])
print(grade)
```
Upvotes: 0 |
2018/03/19 | 151 | 568 | <issue_start>username_0: In React given a prop that provides true/false how would one conditionally add `readOnly` to an input field in JSX?
So if I had `this.props.boolean`, what's a terse option for adding readOnly? When this.props.boolean == false, readOnly isn't attached; when this.props.boolean == true, readOnly is attached.<issue_comment>username_1: ```
<input readOnly={this.props.boolean} />
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: This is ugly but it works:
```
{props.readonly ?
  <input value={props.value} readOnly /> :
  <input value={props.value} onChange={props.onChange} />
}
```
Make the two controls custom to avoid duplicating as much as possible...
Upvotes: 2 |
2018/03/19 | 773 | 2,699 | <issue_start>username_0: Hi, I currently have a list of dictionaries I am trying to insert into DynamoDB (each dict as an item). Each item has a hash key and label1, label2, ..., label3000 key/value pairs, with label# being the key and a string being the value. Some of my items have up to 3000 label fields. Is this a problem when using put_item in DynamoDB? Currently, the label# keys within each dictionary are unordered, and when I go to insert each item it is only adding 19 of the fields.
>
> **Items**
>
>
> ***Item Size***
>
>
> The maximum item size in DynamoDB is 400 KB, which includes both
> attribute name binary length (UTF-8 length) and attribute value
> lengths (again binary length). The attribute name counts towards the
> size limit.
>
>
> For example, consider an item with two attributes: one attribute named
> "shirt-color" with value "R" and another attribute named "shirt-size"
> with value "M". The total size of that item is 23 bytes.
>
>
> **Attributes**
>
>
> ***Attribute Name-Value Pairs Per Item***
>
>
> The cumulative size of attributes per item must fit within the maximum
> DynamoDB item size (400 KB).
>
>
> ***Number of Values in List, Map, or Set***
>
>
> There is no limit on the number of values in a List, a Map, or a Set,
> as long as the item containing the values fits within the 400 KB item
> size limit.
>
>
>
[Docs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-items)
Upvotes: 3 <issue_comment>username_2: The "limit" is just in view, by default is 20 columns.
You can select to view all columns using the settings button.
[](https://i.stack.imgur.com/KUNQZ.jpg)
Upvotes: 0 <issue_comment>username_3: A **Dynamodb table can have any number of attributes** but while writing a particular row the number of attributes is limited as the cumulative size of an attribute name and attribute value **should not exceed 400KB.**
Mathematically speaking, if the row you are writing has an average attribute-name size of 7 bytes and the corresponding value for each has a size of 6 bytes, then you can have approximately **400 \* 1024/13 ~ 31507** attributes for that entry.
If you make another such entry with **an entirely different attribute name set** then overall your table will have **63014 attributes.**
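The back-of-envelope arithmetic above can be checked directly:

```python
# 400 KB item cap, ~13 bytes per name/value pair (7-byte name + 6-byte value).
item_limit_bytes = 400 * 1024
pair_cost_bytes = 7 + 6
max_pairs_per_item = item_limit_bytes // pair_cost_bytes
print(max_pairs_per_item)      # 31507, matching the estimate above
print(2 * max_pairs_per_item)  # 63014 across two disjoint attribute sets
```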
Keep in mind each entry has a limit of 400KB, but the table can have any number of attributes.
<https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-attributes>
Upvotes: 2 |
2018/03/19 | 1,030 | 3,606 | <issue_start>username_0: I recently compiled Qt 5.10.1 statically (mingw32) and the routine below now fails to work. I modified the code to include the full path for windows cmd "c:\windows\system32\cmd.exe" but that still doesn't work. Attempted with Windows 7 & 10. The code below works fine with Qt 5.6. Its job is to open a Windows terminal. Similar code to open a console in macOS and Linux works.
NOTE: This behavior is a bug introduced in Qt 5.8 see:
<https://bugreports.qt.io/browse/QTBUG-57687>
```
QString commstr = adbdir+"cpath.bat";
QFile file(commstr);
if(!file.open(QFile::WriteOnly |
QFile::Text))
{
logfile("error creating cpath.bat!");
QMessageBox::critical(this,"","Error creating bat file!");
return;
}
QTextStream out(&file);
out << "set PATH=%PATH%;"+QDir::currentPath()+";"<< endl;
file.flush();
file.close();
cstring = "cmd /k " +QDir::currentPath()+"/cpath.bat";
QProcess::startDetached(cstring);
```<issue_comment>username_1: The correct way in Qt 5.10 is to pass the arguments to the QProcess program separately: when you want to run `cmd /k cpath.bat`, the program is `cmd` and the arguments are `/k cpath.bat`. Also, according to bug report [QTBUG-64662](https://bugreports.qt.io/browse/QTBUG-64662), Qt does start the process, but the console window does not appear; making it appear is related to the Win32 API, so `QProcess::setCreateProcessArgumentsModifier` can be used to show the shell, as described in the [QProcess Documentation](http://doc.qt.io/qt-5/qprocess.html#CreateProcessArgumentModifier-typedef).
So in your case:
```
#include <QApplication>
#include <QProcess>
#include <QDir>
#include <QStringList>
#include "Windows.h"

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QProcess process;
    QString program = "cmd.exe";
    QStringList arguments = QStringList() << "/K" << QDir::currentPath()+"/cpath.bat";
    process.setCreateProcessArgumentsModifier([] (QProcess::CreateProcessArguments *args)
    {
        args->flags |= CREATE_NEW_CONSOLE;
        args->startupInfo->dwFlags &= ~STARTF_USESTDHANDLES;
    });
    process.start(program, arguments);
    process.waitForStarted();
    return app.exec();
}
```
---
And in order to create a detached process, you can inherit from QProcess and detach it after it starts, like this:
```
#include <QProcess>
#include <QString>
#include <QStringList>
#include <QDir>
#include "Windows.h"
class QDetachableProcess
: public QProcess {
public:
    QDetachableProcess(QObject *parent = 0)
: QProcess(parent) {
}
void detach() {
waitForStarted();
setProcessState(QProcess::NotRunning);
}
};
int main(int argc, char *argv[]) {
QDetachableProcess process;
QString program = "cmd.exe";
QStringList arguments = QStringList() << "/K" << QDir::currentPath()+"/cpath.bat";
process.setCreateProcessArgumentsModifier(
        [](QProcess::CreateProcessArguments *args) {
            args->flags |= CREATE_NEW_CONSOLE;
            args->startupInfo->dwFlags &= ~STARTF_USESTDHANDLES;
});
process.start(program, arguments);
process.detach();
return 0;
}
```
Upvotes: 1 <issue_comment>username_2: NOTE: This behavior with startDetached is a Windows-specific Qt bug introduced in Qt 5.8. The workaround is referenced at:
<https://bugreports.qt.io/browse/QTBUG-57687>
```
QProcess p;
p.setProgram("cmd.exe");
p.setArguments({"/k", QDir::currentPath()+"/cpath.bat"});
p.setCreateProcessArgumentsModifier([] (
QProcess::CreateProcessArguments
*args) {
args->flags &= ~CREATE_NO_WINDOW;
});
p.startDetached();
```
Upvotes: 3 [selected_answer] |
2018/03/19 | 344 | 1,245 | <issue_start>username_0: How do I navigate to another webpage using the same driver with Selenium in python?
I do not want to open a new page. I want to keep on using the same driver.
I thought that the following would work:
```
driver.navigate().to("https://support.tomtom.com/app/contact/")
```
But it doesn't! Navigate seems not to be a 'WebDriver' method<issue_comment>username_1: The line of code which you have tried as :
```
driver.navigate().to("https://support.tomtom.com/app/contact/")
```
It is a typical *Java* based line of code.
However, as per the current **Python API Docs** of [The WebDriver implementation](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_remote/selenium.webdriver.remote.webdriver.html#module-selenium.webdriver.remote.webdriver), the **navigate()** method is yet to be supported/implemented.
Instead, you can use the **get(url)** method, which is defined as:
```
def get(self, url):
"""
Loads a web page in the current browser session.
"""
self.execute(Command.GET, {'url': url})
```
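Since `get()` just loads a URL into the current session, you can call it repeatedly on the same driver object. A minimal stand-in (no real browser involved; `FakeDriver` is invented purely to illustrate the calling pattern):

```python
class FakeDriver:
    """Records visited URLs the way a real driver would load them."""
    def __init__(self):
        self.history = []

    def get(self, url):
        self.history.append(url)

driver = FakeDriver()
driver.get("https://support.tomtom.com/")              # first page
driver.get("https://support.tomtom.com/app/contact/")  # same driver, new page
print(driver.history)
```

With a real `webdriver.Firefox()` or `webdriver.Chrome()` instance, the calls look identical; each `get()` simply navigates the existing browser window.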
Upvotes: 2 <issue_comment>username_2: To navigate to a webpage you just write
```
driver.get(__url__)
```
You can do this multiple times in your program with the same driver.
Upvotes: 3 [selected_answer] |
2018/03/19 | 830 | 3,117 | <issue_start>username_0: I'm kind of a newbie to PowerShell and I am currently making a simple service monitoring script. Right now I have a list of computer names and a list of service names that I scan for.
I save the scan to a log. I am wondering if there is any way I can speed up my PowerShell code? I'm not sure if I am using the quickest methods for the job.
Are there any known alternatives to this code that would scan services quicker?
```
$myServices = $PSScriptRoot + "\services.txt" # $PSScriptRoot references current directory
$myServers = $PSScriptRoot + "\servers.txt"
$Log = $PSScriptRoot + "\svclog.csv"
$LogLive = $PSScriptRoot + "\svclogLive.csv"
$serviceList = Get-Content $myServices
Remove-Item -Path $Log
$results = Get-Content $myServers | ForEach-Object {
foreach ($service in $serviceList) {
if ($s=get-service -computer $_ -name $service -ErrorAction SilentlyContinue)
{
$s | select MachineName, ServiceName, Status, StartType
} else {
# "$_ - Service '$service' does not exist."
}
}
}
$results | Export-CSV $Log -notypeinformation
# Create a second current log that Python can read while this script runs
Copy-Item -Path $Log -Destination $LogLive
```<issue_comment>username_1: You can try capturing all of the target server's services in an array and looking through it rather than calling `get-service` on every service you are searching for:
```
$myServices = $PSScriptRoot + "\services.txt" # $PSScriptRoot references current directory
$myServers = $PSScriptRoot + "\servers.txt"
$Log = $PSScriptRoot + "\svclog.csv"
$LogLive = $PSScriptRoot + "\svclogLive.csv"
$serviceList = Get-Content $myServices
Remove-Item -Path $Log
$results = Get-Content $myServers | ForEach-Object {
# All of the services in one grab
$serverServices = @(Get-Service -computer $_ -ErrorAction SilentlyContinue)
if ($serverServices) {
foreach ($service in $serviceList) {
#Note: this inner use of another $_ may confuse PowerShell...
if ($s = ($serverServices | Where {$_.Name -eq $service}))
{
$s | select MachineName, ServiceName, Status, StartType
} else {
# "$_ - Service '$service' does not exist."
}
}
}
}
$results | Export-CSV $Log -notypeinformation
# Create a second current log that Python can read while this script runs
Copy-Item -Path $Log -Destination $LogLive
```
Upvotes: 1 <issue_comment>username_2: Use `Invoke-command`
```
$serviceList = Get-Content $myServices
#some code
$results = Get-Content $myServers
Invoke-command -ComputerName $results -ScriptBlock {
Param($MyServices)
Get-Service -Name $MyServices | Select-Object -Property ServiceName, Status, StartType
} -ArgumentList (,$serviceList) | Select-Object -Property ServiceName, Status, StartType,PSComputerName |
Export-Csv -NoTypeInformation -Path $Log
#For getting starttype in Version 2.0
Get-wmiObject -class Win32_Service -Filter "Name='BITS'" | Select-Object -Property Name, State, startMode
```
Upvotes: 2 |
2018/03/19 | 579 | 2,226 | <issue_start>username_0: I am building an application on Laravel, and the application itself should be PCI DSS compliant, hence we can't store card details in a file. We don't store the details anywhere ourselves, but when a request hits the server, Laravel logs that information into laravel.log.
Is there a programmable way so that we could remove that entry from laravel.log?<issue_comment>username_1: You can try capturing all of the target server's services in an array and looking through it rather than calling `get-service` on every service you are searching for:
```
$myServices = $PSScriptRoot + "\services.txt" # $PSScriptRoot references current directory
$myServers = $PSScriptRoot + "\servers.txt"
$Log = $PSScriptRoot + "\svclog.csv"
$LogLive = $PSScriptRoot + "\svclogLive.csv"
$serviceList = Get-Content $myServices
Remove-Item -Path $Log
$results = Get-Content $myServers | ForEach-Object {
# All of the services in one grab
$serverServices = @(Get-Service -computer $_ -ErrorAction SilentlyContinue)
if ($serverServices) {
foreach ($service in $serviceList) {
#Note: this inner use of another $_ may confuse PowerShell...
if ($s = ($serverServices | Where {$_.Name -eq $service}))
{
$s | select MachineName, ServiceName, Status, StartType
} else {
# "$_ - Service '$service' does not exist."
}
}
}
}
$results | Export-CSV $Log -notypeinformation
# Create a second current log that Python can read while this script runs
Copy-Item -Path $Log -Destination $LogLive
```
Upvotes: 1 <issue_comment>username_2: Use `Invoke-command`
```
$serviceList = Get-Content $myServices
#some code
$results = Get-Content $myServers
Invoke-command -ComputerName $results -ScriptBlock {
Param($MyServices)
Get-Service -Name $MyServices | Select-Object -Property ServiceName, Status, StartType
} -ArgumentList $MyServices,$Null | Select-Object -Property ServiceName, Status, StartType,PSComputerName |
Export-Csv -NoTypeInformation -Path $Log
#For getting starttype in Version 2.0
Get-wmiObject -class Win32_Service -Filter "Name='BITS'" | Select-Object -Property Name, State, startMode
```
Upvotes: 2 |
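On the original question of keeping card data out of laravel.log: whatever the final approach, the core operation is a masking pass over the text. Below is a minimal Python sketch of such a scrubber, purely for illustration; the patterns are assumptions, and a robust Laravel fix would mask at the logger level (e.g. a log processor) before anything is written, rather than rewriting the file afterwards.

```python
import re

# Illustrative patterns only; tune them to the data you actually log.
PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")      # card-number-like digit runs
CVV_RE = re.compile(r'("cvv"\s*:\s*")\d{3,4}(")')   # hypothetical JSON field

def scrub_log_text(text: str) -> str:
    """Replace card-number-like values with masked placeholders."""
    text = PAN_RE.sub("[REDACTED-PAN]", text)
    text = CVV_RE.sub(r"\1[REDACTED]\2", text)
    return text
```

Running such a pass over an already-written log is a stopgap; masking before logging is the only approach that keeps sensitive data off disk entirely.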
2018/03/19 | 1,581 | 5,984 | <issue_start>username_0: I am trying to create a custom object that passes all non-existent method calls down to a member attribute. This works under normal custom method invocations, but fails when attempting to call arithmetic operators.
Below is a console snippet of an example class, a test function, and a cleaned up disassembly of the test function.
```
>>> class NoAdd(object):
... member = 0
... def __getattr__(self, item):
... print('NoAdd __getattr__')
... # return getattr(self.member, item)
... if item == '__add__':
... return self.member.__add__
>>> def test():
... print('Call __add__ directly.')
... NoAdd().__add__(5) # 5
... print('Implicit __add__.')
... NoAdd() + 5 # TypeError
>>> dis(test)
3 8 LOAD_GLOBAL 1 (NoAdd)
10 CALL_FUNCTION 0
12 LOAD_ATTR 2 (__add__)
14 LOAD_CONST 2 (5)
16 CALL_FUNCTION 1
18 POP_TOP
5 28 LOAD_GLOBAL 1 (NoAdd)
30 CALL_FUNCTION 0
32 LOAD_CONST 2 (5)
34 BINARY_ADD
36 POP_TOP
38 LOAD_CONST 0 (None)
40 RETURN_VALUE
>>> test()
Call __add__ directly.
NoAdd __getattr__
Implicit __add__.
Traceback (most recent call last):
File "", line 1, in
File "", line 5, in test
TypeError: unsupported operand type(s) for +: 'NoAdd' and 'int'
```
I thought that the Python interpreter would look for the `__add__` method using the standard procedure of invoking `__getattr__` when the method was not found in the object's method list, before looking for `__radd__` in the int. This is apparently not what is happening.
Can someone help me out by explaining what is going on, or point me to where in the Python source code I can find what `BINARY_ADD` does? I'm not sure if I can fix this, but a workaround would be appreciated.<issue_comment>username_1: I don't understand. Why can't you do something like:
```
class MyInt(int):
pass
def test():
print(MyInt(5) + 5)
test()
```
Upvotes: -1 <issue_comment>username_2: The methods `__getattr__` and `__getattribute__` are only used for recovering attributes when you call them explicitly, for example when you do `foo.bar`. They are not used for implicit invocation.
This behaviour is specified in the [documentation](https://docs.python.org/3/reference/datamodel.html#object.__getattribute__)
>
> **Note:** This method may still be bypassed when looking up special methods
> as the result of implicit invocation via language syntax or built-in
> functions.
>
>
>
The reason for such an implementation is explained [here](https://docs.python.org/3/reference/datamodel.html#special-method-lookup).
>
> Bypassing the `__getattribute__()` machinery in this fashion provides
> significant scope for speed optimisations within the interpreter, at
> the cost of some flexibility in the handling of special methods
>
>
>
In conclusion what you are trying to do, *i.e.* using `__getattr__` to redirect implicit special method calls, has been voluntarily sacrificed in favor of speed.
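To see the rule concretely: special methods are looked up on the type, not the instance, so an instance-level `__add__` is invisible to the `+` operator while a class-level one is found.

```python
class C:
    pass

c = C()
c.__add__ = lambda other: 42        # instance attribute: ignored by '+'
try:
    c + 1
except TypeError as e:
    print("TypeError:", e)          # implicit lookup never saw c.__add__

C.__add__ = lambda self, other: 42  # class attribute: found by type-level lookup
print(c + 1)  # 42
```

This is exactly why the question's `__getattr__`-based forwarding works for `obj.__add__(5)` (an explicit attribute access) but not for `obj + 5`.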
Inheritance
-----------
The behaviour you are describing can be achieved by class inheritance. However, passing arguments to your class constructor requires the following fiddling with the `__new__` method.
```
class NoAdd(int):
def __new__(cls, x, *args, **kwargs):
return super().__new__(cls, x)
def __init__(self, x, *args, **kwargs):
...
x = NoAdd(0)
x + 5 # 5
x * 2 # 0
```
Metaclass
---------
Now, suppose you really need to catch implicit calls to special methods. I see few cases where this could be useful, but it is a fun exercise. In this case we can rely on a metaclass to fill in missing methods with the ones from `member`.
```
class ProxyToMember(type):
def __init__(cls, name, bases, name_space):
super().__init__(name, bases, name_space)
if hasattr(cls, 'member'):
proxy_attrs = (attr for attr in dir(cls.member) if not hasattr(cls, attr))
def make_proxy(attr):
attr_value = getattr(cls.member, attr)
def proxy(_, *args, **kwargs):
return attr_value(*args, **kwargs)
if callable(attr_value):
return proxy
else:
return property(lambda _: getattr(cls.member, attr))
for attr in proxy_attrs:
setattr(cls, attr, make_proxy(attr))
class A(metaclass=ProxyToMember):
member = 0
class B(metaclass=ProxyToMember):
member = 'foo'
A() + 1 # 1
B().startswith('f') # True
```
Upvotes: 2 <issue_comment>username_3: After some more intense research (and following some initially unlikely trails) I found my answer. My thought process didn't come up with the same search keywords, which is why I didn't find these before. My question could probably be considered a duplicate of one of the ones linked below.
I found the simplest/best explanation for why this happens [here](https://stackoverflow.com/questions/13511386/python-built-in-functions-vs-magic-functions-and-overriding?noredirect=1&lq=1). I found some good references for resolving this [here](https://stackoverflow.com/questions/9057669/how-can-i-intercept-calls-to-pythons-magic-methods-in-new-style-classes) and [here](https://stackoverflow.com/questions/8637254/intercept-operator-lookup-on-metaclass). The links in [Oliver's](https://stackoverflow.com/a/49356663) answer are also helpful.
In summary, Python does not use the standard method lookup process for the magic methods such as `__add__` and `__str__`. It looks like the workaround is to make a `Wrapper` class from [here](https://stackoverflow.com/questions/9057669/how-can-i-intercept-calls-to-pythons-magic-methods-in-new-style-classes).
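As a minimal sketch of that workaround (not the exact `Wrapper` from the linked answer): define the operator methods explicitly on the wrapper class so that the type-level lookup performed by `BINARY_ADD` can find them, and keep `__getattr__` only for ordinary attribute access.

```python
class Wrapper:
    """Forward attribute access, and selected operators, to a wrapped member."""
    def __init__(self, member):
        self.member = member

    def __getattr__(self, item):          # covers explicit attribute access
        return getattr(self.member, item)

    # Implicit operators bypass __getattr__, so define them on the class:
    def __add__(self, other):
        return self.member + other

    def __radd__(self, other):
        return other + self.member

w = Wrapper(0)
print(w + 5)   # 5 -- the '+' operator now finds Wrapper.__add__ on the type
print(5 + w)   # 5
```

Every operator you need (`__mul__`, `__eq__`, `__str__`, ...) has to be listed out this way, by hand or generated, since none of them goes through `__getattr__`.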
Upvotes: -1 |
2018/03/19 | 1,212 | 4,398 | <issue_start>username_0: I want to crawl data from a website. In this website :
HTML :
```
* [Place1](http://.../place1)
* [Place2](http://.../place2)
```
Inside "<http://.../place1>":
```
Place 1

```
How can I crawl the data behind each href using the 'Nokogiri' gem? (That is, the data on the page reached when the link is clicked.)
When I research, I only find the way to crawl data in a page. Not find how to crawl data inside href page. Thanks
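For reference, the two-step crawl the question describes (parse the listing page, collect each `href`, fetch the linked page, then parse it) can be sketched as below. Python's stdlib `html.parser` is used purely for illustration, with network fetches stubbed out by canned pages; a Ruby/Nokogiri version would follow the same shape, fetching each URL with open-uri and parsing it with Nokogiri.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attributes from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

LISTING = ('<ul><li><a href="http://x/place1">Place1</a></li>'
           '<li><a href="http://x/place2">Place2</a></li></ul>')

# In a real crawler this dict would be replaced by HTTP GETs.
PAGES = {
    "http://x/place1": "<h1>Place 1</h1>",
    "http://x/place2": "<h1>Place 2</h1>",
}

def crawl(listing_html):
    collector = LinkCollector()
    collector.feed(listing_html)
    # Step 2: follow every collected href and keep its page content.
    return {href: PAGES[href] for href in collector.links}

print(crawl(LISTING))
```

The key point is simply that the second fetch is an ordinary page request: once you extract an `href`, you crawl it exactly the way you crawled the first page.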
2018/03/19 | 1,281 | 4,973 | <issue_start>username_0: I am using codeigniter and the problem is that when i submit this ajax:
```
$("#checkstudnumbutton").click(function(e){
e.preventDefault();
$.ajax({
type: "POST",
url: '<?php echo base_url();?>member/Uploadv2/checkAuthor',
dataType: 'json',
data: {studnum:$("#authorstudentnum").val()},
success:function(data)
{
if(data.exist===true)
{
// populate modal inputs
alert('exists');
}
else
{
alert("yo");
// show div alert
$('#alertnostud').show();
$('#alertnostud').delay(5000).hide();
}
},
error:function()
{
alert('ajax failed');
}
});
});
```
the console shows 500 (internal server error) please help
2018/03/19 | 957 | 3,268 | <issue_start>username_0: I have several float values that have necessary zeros at the ends.
One number that I have is `0.0013790`.
When finding the length of this, I get `8` when I should be getting `9`, since the zero at the end is dropped. I can not use `.format()`, since some numbers are shorter than others and there is no concrete length that I want them set to. If I had a `float` that was seven digits long after the decimal and set the format to `8`, I would get an extra zero which should NOT belong there.
I can not afford to have my program adding zeros through format when they are not always necessary, since some numbers will be shorter than others. How do I find the actual length of these numbers when a zero is at the end?
I can not make an if statement that checks if the number `.endswith 0`, because it never does. The zero is always dropped! I am already checking the length of the string of the float and still the zero is dropped! Many numbers will not end with zero, so I can not simply add one to the length found. Please help!
**Numbers to test:**
When inputting the first number in each pair below, you should get the second. If you can get these to work along with some other numbers, please give me the solution. I've been racking my brain for hours!! Thanks.
WANTED RESULTS: `0.12345 -> 7, 0.123450 -> 8, 0.1234500 -> 9`.
UPDATE:
Thank you for your solutions, but the numbers are not inputs. I have them set to the eval() function, since I have roughly 1000 variables that need to be accessed dynamically from a websocket. Values are retrieved just fine, but if I am not mistaken, eval() defaults to float. Switching it from float to string has not done me much good, since I am guessing that eval() is always a float. Any solutions??<issue_comment>username_1: Then we have to store the float as a string value. The following lines may answer the question, since `input()` returns a string by default.
```
mynum = input('Enter your number: ')
print('Hello', mynum)
print(len(mynum))
```
Upvotes: -1 <issue_comment>username_2: If you first convert the input to a number, and then to a string, you'll lose any insignificant digits.
If you are asking the user to enter the value:
```
>>> foo = input('Enter the number here: ')
Enter the number here: 0.0013790
>>> len(foo)
9
```
*If you are using Python 2, make sure you use `raw_input` and not `input`*
As long as you don't cast the value to a float, you should get correct values for `len()`.
Upvotes: 0 <issue_comment>username_3: You need to store your values as strings if you want to track length independent of the value of the float.
Floating point values have no length, and trailing 0s do not affect the value so they produce identical floats. This means after it gets defined, there is **no** way to determine whether `0.12345` was defined using `0.12345` or `0.12345000000`.
```
0.12345 is 0.123450 # True
0.12345 is 0.1234500 # True
len(0.12345) # TypeError: object of type 'float' has no len()
```
Everything works fine for the string representation of those floats:
```
"0.12345" is "0.123450" # False
"0.12345" is "0.1234500" # False
len("0.12345") # 7
```
Thus you should store these values as strings, and convert them to `float` when necessary.
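Concretely, with illustrative values: keep the raw text for display and length checks, and convert on demand for arithmetic. If the values currently arrive through `eval()`, capture the source text before evaluating it, because the trailing zeros are unrecoverable once the float exists.

```python
raw_values = ["0.12345", "0.123450", "0.1234500", "0.0013790"]

for raw in raw_values:
    value = float(raw)            # use the float for arithmetic only
    print(raw, "->", len(raw), value)

# The string keeps the trailing zeros; the float never had them.
assert [len(r) for r in raw_values] == [7, 8, 9, 9]
```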
Upvotes: 2 [selected_answer] |
2018/03/19 | 1,317 | 3,146 | <issue_start>username_0: I am new to writing queries in Postgres and am interested in understanding how one can count the **number of unique first time users per day**.
The table has only two columns: `user_id` and `start_time`, a timestamp that indicates the time of use. If a user has used the service on a previous day, that `user_id` should not be counted.
Why does the following query not work? Shouldn't it be possible to select distinct on two variables at once?
```
SELECT COUNT (DISTINCT min(start_time::date), user_id),
start_time::date as date
FROM mytable
GROUP BY date
```
produces
>
> ERROR: function count(date, integer) does not exist
>
>
>
The output would look like this
```
date count
1 2017-11-22 56
2 2017-11-23 73
3 2017-11-24 13
4 2017-11-25 91
5 2017-11-26 107
6 2017-11-27 33...
```
Any suggestions about how to count distinct min Date and user\_id and then group by date in psql would be appreciated.<issue_comment>username_1: Try this
```
select start_time,count(*) as count from
(
select user_id,min(start_time::date) as start_time
from mytable
group by user_id
)distinctRecords
group by start_time;
```
This will count each user only once for min date.
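The same first-appearance logic can be sanity-checked with an in-memory SQLite database from Python (the sample rows are invented for the demo; `MIN` and `GROUP BY` behave the same way here as in Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (user_id INTEGER, start_time TEXT);
INSERT INTO mytable VALUES
  (1, '2017-11-22 08:00'), (1, '2017-11-23 09:00'),
  (2, '2017-11-22 10:00'), (3, '2017-11-23 11:00');
""")

# Inner query: each user's first day; outer query: users per first day.
rows = conn.execute("""
    SELECT start_time, COUNT(*) AS count
    FROM (SELECT user_id, MIN(DATE(start_time)) AS start_time
          FROM mytable GROUP BY user_id) first_seen
    GROUP BY start_time ORDER BY start_time
""").fetchall()
print(rows)  # [('2017-11-22', 2), ('2017-11-23', 1)]
```

User 1 appears on both days but is only counted on 2017-11-22, which is exactly the behaviour the question asks for.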
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use the following query:
```
select count(user_id ) total_user , start_time
from (
SELECT min (date (start_time)) start_time, user_id
FROM mytable )tmp
group by start_time
```
Upvotes: 0 <issue_comment>username_3: You may try this logic:
* First find the first login time of each `user_id`: `MIN(start_time)`.
* Joining the above results with the main table, count a row only if it is the user's first login. `COUNT` does not add 1 to the count when its argument is `NULL`.
[SQL Fiddle](http://sqlfiddle.com/#!17/b56d82/1)
**PostgreSQL 9.6 Schema Setup**:
```
CREATE TABLE yourtable
(user_id int, start_time varchar(19))
;
INSERT INTO yourtable
(user_id, start_time)
VALUES
(1, '2018-03-19 08:05:01'),
(2, '2018-03-19 08:05:01'),
(1, '2018-03-19 08:05:04'),
(3, '2018-03-19 08:05:01'),
(1, '2018-03-20 08:05:04'),
(2, '2018-03-20 08:05:04'),
(4, '2018-03-20 08:05:04'),
(3, '2018-03-20 08:05:06'),
(3, '2018-03-20 08:05:04'),
(3, '2018-03-20 08:05:05'),
(1, '2018-03-21 08:05:06'),
(3, '2018-03-21 08:05:05'),
(6, '2018-03-21 08:05:06'),
(3, '2018-03-22 08:05:05'),
(4, '2018-03-22 08:05:05'),
(5, '2018-03-23 08:05:05')
;
```
**Query 1**:
```
WITH f
AS ( SELECT user_id, MIN (start_time) first_start_time
FROM yourtable
GROUP BY user_id)
SELECT t.start_time::DATE
,count( CASE WHEN t.start_time > f.first_start_time
THEN NULL ELSE 1 END )
FROM yourtable t JOIN f ON t.user_id = f.user_id
GROUP BY start_time::DATE
ORDER BY 1
```
**[Results](http://sqlfiddle.com/#!17/b56d82/1/0)**:
```
| start_time | count |
|------------|-------|
| 2018-03-19 | 3 |
| 2018-03-20 | 1 |
| 2018-03-21 | 1 |
| 2018-03-22 | 0 |
| 2018-03-23 | 1 |
```
Upvotes: 1 |
2018/03/19 | 1,083 | 4,688 | <issue_start>username_0: I have developed a login form in Android and need to add validation to my login class. This is my login class. I'm using an API to connect to the DB. Please help me with this.
public class LoginActivity extends AppCompatActivity {
```
private static final int REQUEST_READ_CONTACTS = 0;
SoapPrimitive resultString;
String Response;
String url = "url";
private EditText usernamee;
private EditText passwordd;
private View mLoginFormView;
String username = "";
String password = "";
String response = "";
private String count = "";
JSONArray LocArray = new JSONArray();
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_login);
final Context context = getApplicationContext();
// Set up the login form.
usernamee = (EditText) findViewById(R.id.usernamee);
passwordd = (EditText) findViewById(R.id.passwordd);
Button mEmailSignInButton = (Button) findViewById(R.id.btnLogin);
mEmailSignInButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
username = usernamee.getText().toString();
password = passwordd.getText().toString();
RequestQueue queue = Volley.newRequestQueue(context);
StringRequest postRequest = new StringRequest(Request.Method.POST, url,
new Response.Listener()
{
@Override
public void onResponse(String response) {
// response
Log.d("Response", response);
try {
JSONObject obj = new JSONObject(response);
if(obj.get("Role").equals("Admin"))
{
Intent adminIntent = new Intent(LoginActivity.this, Main2Activity.class);
startActivity(adminIntent);
}
else if (obj.get("Role").equals("User"))
{
Intent userIntent = new Intent(LoginActivity.this, Main3Activity.class);
startActivity(userIntent);
}
} catch (JSONException e) {
e.printStackTrace();
}
}
},
new Response.ErrorListener()
{
@Override
public void onErrorResponse(VolleyError error) {
// error
Log.d("Error.Response", response);
}
}
) {
@Override
protected Map getParams()
{
Map params = new HashMap();
params.put("username", username);
params.put("password", password);
return params;
//return null;
}
};
queue.add(postRequest);
}
});
}
```
// @TargetApi(Build.VERSION_CODES.HONEYCOMB_MR2)
}
How can I implement validation in my login class?<issue_comment>username_1: Try this
```
btnLogin.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
String email = usernamee.getText().toString().trim();
String pass = passwordd.getText().toString().trim();
if (pass.length() < 6) {
    passwordd.setError("Password must contain at least 6 characters");
    passwordd.requestFocus();
}
if (TextUtils.isEmpty(pass)) {
    passwordd.setError("Please enter Password");
    passwordd.requestFocus();
}
if (!Patterns.EMAIL_ADDRESS.matcher(email).matches()) {
    usernamee.setError("Please enter a valid email");
    usernamee.requestFocus();
}
if (TextUtils.isEmpty(email)) {
    usernamee.setError("Please enter User Name");
    usernamee.requestFocus();
}
if (!TextUtils.isEmpty(email) &&
pass.length() >= 6 &&
Patterns.EMAIL_ADDRESS.matcher(email).matches() &&
!TextUtils.isEmpty(pass)) {
//call your login service here
}
}
});
```
Upvotes: 1 <issue_comment>username_2: try this
```
private EditText input_email,input_password;
public boolean validate() {
boolean valid = true;
String email = input_email.getText().toString();
String password = input_password.getText().toString();
if (email.isEmpty() || !android.util.Patterns.EMAIL_ADDRESS.matcher(email).matches()) {
input_email.setError("enter a valid email address");
valid = false;
} else {
input_email.setError(null);
}
if (password.isEmpty() || password.length() < 4 || password.length() > 10) {
input_password.setError("between 4 and 10 alphanumeric characters");
valid = false;
} else {
input_password.setError(null);
}
return valid;
}
```
Upvotes: 4 [selected_answer] |
2018/03/19 | 1,388 | 5,501 | <issue_start>username_0: I'm using ASP.NET Core 2 with Entity Framework Core 2.0.2. I created a context, and the `Add-Migration` command in the Package Manager Console works fine.
However when `Update-Database` command is used, I get an error:
>
> System.Data.SqlClient is not supported on this platform
>
>
>
I can't figure out where the problem is. Can you help me? Thanks.
My `.csproj` file:
```
netcoreapp2.0
portable
true
..\docker-compose.dcproj
```<issue_comment>username_1: I ran into the same issue a couple of days ago - I'm not sure what the underlying issue is, but reverting some of the `EntityFrameworkCore` nuget packages back to 2.0.0 seems to have resolved the issue for me. These are the packages I downgraded:
```
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: I ran into this issue recently with .net standard 2.0 classes being consumed by a regular .net framework app. (.net 4.7.x). The only thing that ultimately fixed my issue was migrating from packages.config to PackageReference on the regular .net app.
Upvotes: 3 <issue_comment>username_3: Same problem here, but for me it is a failure on the part of System.Data.SqlClient to load dynamically as part of a plugin. Our plugin dlls are loaded dynamically via Autofac and a controlling service selects the correct one at run time. Unfortunately System.Data.SqlClient will not load dynamically like this, resulting in the above error message. So I had to load it when the controlling service starts up. This is obviously not ideal, but for now it is a usable workaround as all our plugins are still under our control.
I'll be more specific, following a question in comments.
A service selects plug-ins at run time. The plug-ins register their own dependencies via Autofac and if that dependency is a Nuget package they will also include the package as a normal Nuget dependency.
The controlling service registers the plug-in dlls on start up and the first time they are used the plug-in dependencies are also loaded. When System.Data.SqlClient load is attempted following a call to the plug-in that uses SqlClient the "not supported" error results.
Setting System.Data.SqlClient as a Nuget dependency in the controlling service works OK and the library is loaded correctly without error. However, this is not ideal, because the SqlClient library always has to be loaded by the controlling service even if the plug-in selected to run does not need it.
In other words the SqlClient library is always loaded at service start up occupying resources, etc when it may not even be needed. But at least it works.
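The workaround, eagerly loading the dependency when the controlling service starts instead of on first plugin use, can be shown as a generic sketch. Python's `importlib` is used only for illustration here; the real issue concerns .NET assembly loading, and the module name below is a stand-in.

```python
import importlib

EAGER_DEPS = ["json"]  # stand-in for the plugin dependency (e.g. SqlClient)

def service_startup():
    # Force-load known plugin dependencies up front, so a later,
    # dynamically selected plugin never has to trigger the load itself.
    return {name: importlib.import_module(name) for name in EAGER_DEPS}

loaded = service_startup()
```

The trade-off is the one described above: the dependency occupies resources from startup even when the selected plugin never needs it.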
Upvotes: 4 <issue_comment>username_4: Change the framework to .NetCore 3.x or .NetFramework 4.x...
Upvotes: -1 <issue_comment>username_5: Just in case somebody lands here who is trying to run System.Data.SqlClient on net50/netstandard on rid freebsd-x64: Microsoft.Data.SqlClient worked for me.
Maybe this works on every portable option and/or for all System.[...] ->Microsoft.[...] dll.
Upvotes: 2 <issue_comment>username_6: I spent a couple of hours on this but managed to resolve it. Posting here in case it helps someone else save some time.
In my .csproj files I had
```
true
```
Removing this solved my problem. Some information can be found [here](https://weblog.west-wind.com/posts/2019/Apr/30/NET-Core-30-SDK-Projects-Controlling-Output-Folders-and-Content#sending-output-to-a-custom-folder-with-dependencies). Setting the value to true causes all dependencies to be copied to the output folder and for me, maybe when loading the application, it was getting confused about which System.Data.SqlClient.dll to load.
Upvotes: 2 <issue_comment>username_7: I had this exact same issue with a .NET 5.0 console application i was deploying. I discovered that when i published the application the publish profiles target framework was set to 3.1 instead of 5.0 and that is what caused this error for me. After re-publishing with the correct target framework everything worked as expected.
Upvotes: 1 <issue_comment>username_8: String connectionString, String contextType)
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.UpdateDatabaseImpl(String targetMigration, String connectionString, String contextType)
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.UpdateDatabase.<>c__DisplayClass0_0.<.ctor>b__0()
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.OperationBase.Execute(Action action)
"Microsoft.Data.SqlClient is not supported on this platform."
Go to Manage NuGet Packages and do a downgrade. Hope it works for you.
Upvotes: -1 <issue_comment>username_9: If you copy the dll from the unzipped nupkg package and got this error, make sure you use the right one with included dependencies like System.Runtime ... :
```
~/runtimes/unix or win/lib/netcoreapp2.1
```
e.g. for `.net6.0`
Upvotes: 0 <issue_comment>username_10: For me, it got resolved by having the runtimes subdirectory available within the used bin directory (using .net 6.0 and Microsoft.EntityFrameworkCore.SqlServer Version="7.0.7" )
Upvotes: 0 <issue_comment>username_11: I was in the exact same situation: `add-migration` works fine, but `update-database` gets the "SqlClient is not supported on this platform" error. I am using Linux (Pop!_OS).
The accepted solution didn't work for me, so I tried a lot of alternatives.
The solution for me was to add `linux-x64` to the .csproj of my WebApi. Just answering here in case other people are still getting the same error.
Upvotes: 0 |
2018/03/19 | 3,059 | 8,114 | <issue_start>username_0: I have a transparent firewall (running VyOS) that passes BGP traffic between the routers on each side. When the link on one side of the bridge goes down, I want to bring down the link on the other side so the router will clear its BGP information without waiting for the 2:30 minute timer to expire.
([More background information](https://serverfault.com/q/899110/64818))
This is my script:
```bash
#!/bin/bash
## This script will bounce a br interface if a member interface goes down.
## This will cause router BGP timers to reset, making outages last only seconds instead of minutes.
##
## This script is called by netplug on Vyos:
## /etc/netplug/linkdown.d/my-brdown
##
## Version History
## 1.0 - Initial version
##
LOCKDIR=/var/run/my-bridge-ctl
# Since we only have one br, not going to implement this right now.
#IGNORE_BRIDGES=()
IFACE=$1
#Remove the lock directory
function cleanup {
if rmdir $LOCKDIR; then
logger -is -t "my-bridge-ctl" -p "kern.info" "Finished"
else
logger -is -t "my-bridge-ctl" -p "kern.error" "Failed to remove lock directory '$LOCKDIR'"
exit 1
fi
}
if mkdir $LOCKDIR; then
#Ensure that if we "grabbed a lock", we release it
#Works for SIGTERM and SIGINT(Ctrl-C)
trap "cleanup" EXIT
logger -is -t "my-bridge-ctl" -p "kern.info" "Acquired lock, running"
# Processing starts here
IFACE_DESC=$(<"/sys/class/net/${IFACE}/ifalias")
IFACE_BR_DIR="/sys/class/net/${IFACE}/brport"
if [ ! -d "$IFACE_BR_DIR" ]; then
logger -is -t "my-bridge-ctl" -p "kern.warning" "Interface ${IFACE} (${IFACE_DESC:-no desc}) went down. Not a member of a bridge. Skipping."
else
IFACE_BR_LINK=$(realpath "/sys/class/net/${IFACE}/master")
IFACE_BR_NAME=$(basename $IFACE_BR_LINK)
IFACE_BR_DESC=$(<"${IFACE_BR_LINK}/ifalias")
logger -is -t "my-bridge-ctl" -p "kern.warning" "Interface ${IFACE} (${IFACE_DESC:-no desc}) went down. Member of bridge ${IFACE_BR_NAME} (${IFACE_BR_DESC:-no desc})."
# TODO: Insert IGNORE_BRIDGE check here
find "${IFACE_BR_LINK}/brif" -type l -print0 | while IFS= read -r -d $'\0' IFACE_BR_MEMBER_LINK; do
IFACE_BR_MEMBER_NAME=$(basename $IFACE_BR_MEMBER_LINK)
logger -is -t "my-bridge-ctl" -p "kern.info" "Handling ${IFACE_BR_NAME} member interface ${IFACE_BR_MEMBER_NAME} (${IFACE_BR_MEMBER_LINK})."
# Actually do the bounce
ip link set dev ${IFACE_BR_MEMBER_NAME} down && sleep 2 && ip link set dev ${IFACE_BR_MEMBER_NAME} up
logger -is -t "my-bridge-ctl" -p "kern.info" "Interface ${IFACE_BR_MEMBER_NAME} bounced."
done
fi
sleep 5
else
logger -is -t "my-bridge-ctl" -p "kern.info" "Could not create lock directory '$LOCKDIR'"
exit 1
fi
```
When I run my script manually, it works fine. When `netplugd` runs it, it causes `netplugd` to crash. I ran `netplugd` in the foreground to make sure I captured all the output:
```none
root@firewall00:~# netplugd -F
/etc/netplug/netplug bond0 probe -> pid 10277
/etc/netplug/netplug bond1 probe -> pid 10278
/etc/netplug/netplug bond2 probe -> pid 10279
/etc/netplug/netplug bond3 probe -> pid 10280
/etc/netplug/netplug bond4 probe -> pid 10281
/etc/netplug/netplug bond5 probe -> pid 10282
/etc/netplug/netplug bond6 probe -> pid 10283
/etc/netplug/netplug bond7 probe -> pid 10284
/etc/netplug/netplug bond8 probe -> pid 10285
/etc/netplug/netplug bond9 probe -> pid 10286
/etc/netplug/netplug bond10 probe -> pid 10287
/etc/netplug/netplug bond11 probe -> pid 10288
/etc/netplug/netplug bond12 probe -> pid 10289
/etc/netplug/netplug bond13 probe -> pid 10290
/etc/netplug/netplug bond14 probe -> pid 10291
/etc/netplug/netplug bond15 probe -> pid 10292
/etc/netplug/netplug br0 probe -> pid 10293
/etc/netplug/netplug br1 probe -> pid 10294
/etc/netplug/netplug br2 probe -> pid 10295
/etc/netplug/netplug br3 probe -> pid 10296
/etc/netplug/netplug br4 probe -> pid 10297
/etc/netplug/netplug br5 probe -> pid 10298
/etc/netplug/netplug br6 probe -> pid 10299
/etc/netplug/netplug br7 probe -> pid 10300
/etc/netplug/netplug br8 probe -> pid 10301
/etc/netplug/netplug br9 probe -> pid 10302
/etc/netplug/netplug br10 probe -> pid 10303
/etc/netplug/netplug br11 probe -> pid 10304
/etc/netplug/netplug br12 probe -> pid 10305
/etc/netplug/netplug br13 probe -> pid 10306
/etc/netplug/netplug br14 probe -> pid 10307
/etc/netplug/netplug br15 probe -> pid 10308
/etc/netplug/netplug eth0 probe -> pid 10309
/etc/netplug/netplug eth1 probe -> pid 10310
/etc/netplug/netplug eth2 probe -> pid 10311
/etc/netplug/netplug eth3 probe -> pid 10312
/etc/netplug/netplug eth4 probe -> pid 10313
/etc/netplug/netplug eth5 probe -> pid 10314
/etc/netplug/netplug eth6 probe -> pid 10315
/etc/netplug/netplug eth7 probe -> pid 10316
/etc/netplug/netplug eth8 probe -> pid 10317
/etc/netplug/netplug eth9 probe -> pid 10318
/etc/netplug/netplug eth10 probe -> pid 10319
/etc/netplug/netplug eth11 probe -> pid 10320
/etc/netplug/netplug eth12 probe -> pid 10321
/etc/netplug/netplug eth13 probe -> pid 10322
/etc/netplug/netplug eth14 probe -> pid 10323
/etc/netplug/netplug eth15 probe -> pid 10324
/etc/netplug/netplug eth3 in -> pid 10325
/etc/netplug/netplug eth0 in -> pid 10326
/etc/netplug/netplug eth1 in -> pid 10327
/etc/netplug/netplug br0 in -> pid 10328
br0: state INNING pid 10328 exited status 0
eth3: state INNING pid 10325 exited status 0
eth0: state INNING pid 10326 exited status 0
eth1: state INNING pid 10327 exited status 0
eth2: state DOWN flags 0x00001003 UP,BROADCAST,MULTICAST -> 0x00011043 UP,BROADCAST,RUNNING,MULTICAST,10000
/etc/netplug/netplug eth2 in -> pid 10337
eth2: state INNING pid 10337 exited status 0
eth2: state ACTIVE flags 0x00011043 UP,BROADCAST,RUNNING,MULTICAST,10000 -> 0x00001003 UP,BROADCAST,MULTICAST
/etc/netplug/netplug eth2 out -> pid 10340
my-bridge-ctl[10344]: Acquired lock, running
my-bridge-ctl[10349]: Interface eth2 (br0 inside - net1138a) went down. Member of bridge br0 (no desc).
my-bridge-ctl[10353]: Handling br0 member interface eth2 (/sys/devices/virtual/net/br0/brif/eth2).
eth2: state OUTING flags 0x00001003 UP,BROADCAST,MULTICAST -> 0x00001002 BROADCAST,MULTICAST
eth2: state DOWNANDOUT flags 0x00001002 BROADCAST,MULTICAST -> 0x00001003 UP,BROADCAST,MULTICAST
Error: eth2: unexpected state DOWNANDOUT for UP
root@firewall00:~# my-bridge-ctl[10357]: Interface eth2 bounced.
my-bridge-ctl[10359]: Handling br0 member interface eth3 (/sys/devices/virtual/net/br0/brif/eth3).
my-bridge-ctl[10386]: Interface eth3 bounced.
my-bridge-ctl[10389]: Finished
```
The error is `Error: eth2: unexpected state DOWNANDOUT for UP`.
I can't figure out what is causing `netplugd` to get to that point.<issue_comment>username_1: This may be a bug in `netplugd`, [reported in Debian in December 2011](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=652418). The following patch was proposed in January 2013, accepted ([in 1.2.9.2-2](http://metadata.ftp-master.debian.org/changelogs/main/n/netplug/netplug_1.2.9.2-3_changelog)) in November 2014, and released in May 2015. (That is one *very* slow bugfix process.)
```
--- a/if_info.c
+++ b/if_info.c
@@ -186,6 +186,7 @@
if (newflags & IFF_UP) {
switch(info->state) {
case ST_DOWN:
+ case ST_DOWNANDOUT:
info->state = ST_INACTIVE;
break;
```
The VyOS netplug repository [does not have](https://github.com/vyos/netplug/blob/lithium/if_info.c#L186-L190) this patch.
I suggest you talk to the VyOS people about adding that patch.
Upvotes: 1 <issue_comment>username_2: Moshe's answer made me look more carefully at the netplugd source. The state `ST_DOWNANDOUT` is used when netplugd is processing a link-down script and the link goes down again. The mentioned patch just hides that condition. I added per-interface locks to my script and it works fine now.
The final-ish code is here: <https://gist.github.com/username_2/f16824ea3a4597d35737463c612eb8a3#file-my-bridge-ctl-sh>
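In case the gist link rots, the shape of that change is roughly the following; the lock path, default interface, and messages here are illustrative, not copied from the linked script:

```shell
#!/bin/bash
# Per-interface locking via atomic mkdir: a second link event for the SAME
# interface is skipped while one is still being handled, but events for
# other interfaces proceed independently. $1 is the interface name that
# netplug passes to the script.
IFACE="${1:-eth0}"
LOCKDIR="/tmp/my-bridge-ctl.${IFACE}"   # illustrative path; original used /var/run

if mkdir "$LOCKDIR" 2>/dev/null; then
    trap 'rmdir "$LOCKDIR"' EXIT        # always release the lock on exit
    echo "lock acquired for ${IFACE}"
    # ... bounce only this interface's bridge peers here ...
else
    echo "event for ${IFACE} already being handled, skipping" >&2
    exit 1
fi
```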
Upvotes: 0
<issue_start>username_0: I'm looking to use API Gateway + Lambda + Cognito User Pools to build a simple REST API.
The API will be used in two ways. The first is to support a basic web app (hosted on CloudFront + S3). Authentication for the web application uses the hosted Cognito sign in / sign up flow and is working fine (with API Gateway setup to use the user pool authenticator).
The second method will be for customers to use the REST API to communicate with the system.
As an example, the client might use the web app to configure a workflow and then use an API to invoke that workflow.
What is the recommended method of authenticating the API for use with backend services?
Traditionally, I'd expect to use an API key + secret token for this purpose. I have no issue creating API keys in the API Gateway interface however I can't see anyway to link that to a specific user, nor can I see any method of specifying a secret token alongside the API key.
And assuming the above is possible, how would I set it up in such a way that I could use the JWT-based approach for the web application and the API key + secret token for customers to use.
EDIT: Additionally, I notice that app clients have an ID and a secret. Are they intended to be used for 3rd-party API-based authentication (similar to how other systems make you create an app for API access)? I'm a bit skeptical because there's a limit of 25 per user pool, although it is a soft limit...<issue_comment>username_1: When I was starting out using API Gateway and Cognito, I referenced <https://github.com/awslabs/aws-serverless-auth-reference-app> a lot and found it very helpful in demonstrating the integration between the different AWS components.
Upvotes: 1 <issue_comment>username_2: I have been searching for an answer to this myself and my searching led me to your question. I will give you my best answer from my research, assuming you want to utilize the well-known key/secret approach. Maybe others can provide a better approach.
Basically, the approach is:
1. Your REST API accounts are just Cognito users in a (possibly separate) user pool
* The management of API accounts is done from the back end
* The username and password will be the API key and secret; they are [administratively created](https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_Operations.html) (see the Admin\* operations) and can be in whatever format you want (within Cognito limits)
2. The REST API is [authorized via Cognito JWT tokens](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-authentication-flow.html)
3. API account key and secret are only used to retrieve or refresh tokens
* This requires the REST API to have a set of endpoints to support token retrieval and refresh using account keys and secrets
* Based upon how long you set up the Cognito refresh interval, you can require API accounts to submit their key/secret credentials [from very often to almost never](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-with-identity-providers.html#amazon-cognito-user-pools-using-the-refresh-token)
Structuring the authorization of your REST API to use Cognito tokens will allow you to integrate the REST API directly with [API Gateway's support for Cognito](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-enable-cognito-user-pool.html).
I think the biggest headache of this whole thing is that you will have to create the supporting pieces for, e.g., registered users to request API accounts and for the administration of those accounts, as well as some extra helper REST endpoints for token exchange. Additionally, clients will have to keep track of keys/secrets AND token(s) as well as add client-side logic to know when to supply tokens or credentials.
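That last bit of client-side bookkeeping (when to present a cached token versus the key/secret) can be quite small. A minimal Python sketch; the `authenticate` callable here is a stand-in for whatever token-exchange endpoint you expose, not an actual AWS or Cognito API:

```python
import time


class TokenCache:
    """Reuse a cached access token until shortly before expiry, then fall
    back to re-authenticating with the API key/secret.

    `authenticate` is a stand-in callable returning (token, ttl_seconds);
    in practice it would call your own token-exchange endpoint.
    """

    def __init__(self, authenticate, skew=60):
        self._authenticate = authenticate
        self._skew = skew            # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Re-authenticate only when there is no token or it is near expiry.
        if self._token is None or time.time() >= self._expires_at - self._skew:
            self._token, ttl = self._authenticate()
            self._expires_at = time.time() + ttl
        return self._token
```

Callers then just use `cache.get()` on every request and never deal with refresh timing themselves.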
Upvotes: 3 <issue_comment>username_3: If I understand you correctly, you want to create a "long-lived API key + secret" for programmatic access to your API?
I have exactly this need, and am sadly finding that it appears to not be possible. The longest a key can be valid for is 1 hour. You can have a refresh token that's valid for 10 years. <https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html>
I'm currently looking for an elegant solution to this. I'd be interested to hear if you ever found a solution, or if you rolled your own.
Upvotes: 1 <issue_comment>username_4: Did anyone ever find a more elegant solution to this problem?
The first answer seems like pushing too much work into the hands of my customers. I don't know the skill level of the developers calling my API, and I wouldn't wish becoming a Cognito developer on anyone lol. More seriously, I don't want them to have to store multiple pieces of information and then have to deal with refreshing tokens.
I *might* be Ok with giving them a refresh token. Then I could do one of two things:
1. Give them a refresh method. I'd figure out all the weird Cognito kinks and keep their method to a simple payload of just the refresh token. I'd give them back the access token to use on subsequent calls.
2. Let them pass me the refresh token as if it was an access token. I would use it on each call to get an access token and then use that to call the interior APIs.
Upvotes: 0
<issue_start>username_0: I am following this [link](https://sendgrid.com/docs/API_Reference/SMTP_API/getting_started_smtp.html) for a test. But I receive
>
> 451 Authentication failed: Could not authenticate
>
>
>
at step 4:
>
> 4.Enter your Base64 converted API key in the next line as the password.
>
>
>
Does it mean I entered a wrong Base64 converted API key? But I have double checked the key. What's going on?
By the way, I am also using Postfix, and in `/var/log/maillog` it says
>
> certificate verification failed for [smtp.sendgrid.net](http://smtp.sendgrid.net/)[192.168.3.11]:587: untrusted issuer /C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
>
>
>
Then I followed this [link](https://sendgrid.com/docs/API_Reference/SMTP_API/errors_and_troubleshooting.html) to add the certificate, but I still cannot send the email by Postfix, perhaps the reason is `451 Authentication failed: Could not authenticate`?<issue_comment>username_1: I've had the same issue. Run the TELNET test again (following the instructions [here](https://sendgrid.com/docs/API_Reference/SMTP_API/getting_started_smtp.html)),
but instead of using the API username `apikey` and your API key, use your base64 converted login username and password for Step 3 and step 4.
This should return a `235 Authentication successful` response. This will mean you've successfully connected.
Given that the username/password test is successful, consider opening your API key Permissions to Full Access and try sending an email. If that works, you can use [this link](https://sendgrid.com/docs/Classroom/Basics/API/api_key_permissions.html) to adjust your permissions for your application.
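One more thing worth checking when the key itself is correct is how it was Base64-encoded: `echo` appends a trailing newline, which ends up inside the encoded credential, while `printf` does not:

```shell
# Wrong: 'echo' adds a trailing newline to the bytes being encoded.
echo 'apikey' | base64            # -> YXBpa2V5Cg==
# Right: encode exactly the credential bytes.
printf 'apikey' | base64          # -> YXBpa2V5
printf 'SG.your-api-key' | base64 # placeholder; substitute your real key
```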
Upvotes: 0 <issue_comment>username_2: Another cause of '451 Authentication failed: Could not authenticate' is that your IP is not whitelisted.
You need to whitelist your NAT or the public IP of the instance on the SendGrid site; then the steps described here will work:
<https://sendgrid.com/docs/API_Reference/SMTP_API/getting_started_smtp.html>
Upvotes: 1
<issue_start>username_0: I'm planning to build an IoT project for an oil palm plantation through the use of an Arduino and an Android mobile application for my final year project at university. As plantations have low to no communication signal, including WiFi, is it possible to implement LoRaWAN without access to the internet or the use of a web-based application?<issue_comment>username_1: The LoRaWAN *node* does not need any other communications channel aside from LoRaWAN, of course. Would not make any sense otherwise. ;-)
The *gateway* however does need a connection to the server application that is to be used as a central instance for your use case. Usually this is an existing LoRaWAN cloud service such as The Things Network (TTN) with your application connected behind, but in theory you could connect the gateway to your very own central, making your whole network independent. This is possible because LoRa uses frequency bands free for use (ISM bands) so anyone can become a „network operator“. The TTN software is available as Open Source, for example.
Connection from the gateway to the central is usually done via existing Ethernet/WiFi infrastructures or mobile internet (3G/4G), whatever suits best.
Besides, the LoRa modules available for Arduinos can be used for a low-level, point-to-point LoRa (not LoRaWAN) connection between two such modules. No gateway here. Maybe that is an option, too, for your use case.
Upvotes: 2 <issue_comment>username_2: The LoraWAN is using the Gateway connected to some kind of cloud, for example the TTN network which is community based. If you live in a bigger city you have good chances to have a TTN Gateway in your area.
You can however connect two Lora nodes together to get a point to point connection. You can send data from Node1, which is connected to some kind of sensor and batterypowered, to Node2, which is stationary and stores all the data to a flashdrive for example. From this flashdrive you can import the data to a website or you could use an application like Node-Red to display the data on a Dashboard.
[Here](http://www.instructables.com/id/Dragino-LoRa-GPS-Tracker-1/) you will find instructions on how to send Data from one Lora-Node to another.
[Here](http://www.instructables.com/id/Lora-Temperature-Dashboard/) you will find instructions on how to use Node-RED to display your LoRa data. You will have to change the input from the TTN cloud to a text file on your Raspberry, or whatever gateway you use. (Optional)
Upvotes: 2
<issue_start>username_0: *All version numbers are shown at the bottom of this question.*
When I add a new Unit Test (Universal Windows) project to my solution and build it, it builds fine, and the template `TestMethod1` shows up in VS Test Explorer. When I run this test from Test Explorer however, I get the above error, followed by:
```
Updating the layout...
Deployment complete (0:00:00.502). Full package name:
```
Breakpoints that I set within `TestMethod1` do not get hit when I debug this test and I receive the same error.
The UWP test app launches for a split second then closes, and VS Test Explorer continues trying to run the test indefinitely.
I found this thread, which suggested an issue with network adapters (tried it, didn't work for me):
<https://developercommunity.visualstudio.com/content/problem/153784/unit-tests-to-not-execute-for-uwp-application.html>
Things I've tried so far:
Restarting VS, Clean/Build, removing obj/bin, disabling all network adapters except the ethernet one that's actually being used, removing `%TEMP%\VisualStudioTestExplorerExtensions`, running `CheckNetIsolation.exe LoopbackExempt -c`, restarting machine, repairing Visual Studio installation, reconsidering my chosen industry.
The Unit Test project references a UWP blank app which itself references a netstandard1.4 project (all vanilla).
**Question:** How do I fix this error and get the unit test running?
**Version numbers:**
Visual Studio: Enterprise 2017 15.5.6
UWP target/min: Windows 10 Anniversary Edition (10.0; Build 14393)
NuGet packages: Microsoft.NETCore.UniversalWindowsPlatform v6.0.8, MSTest.TestAdapter v1.2.0, MSTest.TestFramework v1.2.0
**Edit 1:**
I tried with Visual Studio v15.6.3 on another machine and everything worked fine there so I upgraded this machine to v15.6.3 as well, but it still doesn't work (I get the same error/behaviour).<issue_comment>username_1: If it works correctly on a different PC, it really seems like a PC-specific problem. I would suggest doing full Visual Studio uninstall and reinstall.
Upvotes: 1 <issue_comment>username_2: I got the same error message
>
> DEP3000: Attempts to stop the application failed. This may cause the
> deployment to fail. [0xD000A003] Exception from HRESULT: 0xD000A003
>
>
>
I found the process of the app in Task Manager and killed it, then I could start the app without any issue.
Upvotes: 0
<issue_start>username_0: I just started using Castle.Windsor for a couple of reasons, one of which is to get away from using both static classes and the singleton pattern.
I'm not new to the general concepts of DI, but am a little new to implementation details. The one I'm dealing with now: how do I get the instance of my class that I registered from within the `Application_BeginRequest()` method in the `Global.asax` file? I obviously can't add parameters to this method, and I setup the container in the `Application_Start()` method of this class so it's not like I can create a constructor to inject it.
Is this an edge case where I would have to use the Service Locator pattern, or what is the proper way to do this?
<issue_start>username_0: I want to implement some navigation buttons (no text, only an image, like a flat button).
I added a default button with the BackgroundImage property, and it has a small gap between the border and the image.
How can I remove that gap?<issue_comment>username_1: Setting the border property to None will make the button fully flat, and just the flat image will be shown.
Upvotes: 0 <issue_comment>username_2: Set FlatStyle to Flat
```
button1.FlatStyle = FlatStyle.Flat;
```
Upvotes: 0 <issue_comment>username_3: As far as I understand your concern:
You need to set two properties of Button.
1. `button.FlatStyle = System.Windows.Forms.FlatStyle.Flat;`
2. `button1.FlatAppearance.BorderSize = 0;`
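Put together (the `button1` name and the image path are illustrative only), a minimal sketch:

```csharp
// Flat, borderless image button in WinForms.
button1.FlatStyle = System.Windows.Forms.FlatStyle.Flat;
button1.FlatAppearance.BorderSize = 0;
button1.BackgroundImage = System.Drawing.Image.FromFile("nav.png");
button1.BackgroundImageLayout = System.Windows.Forms.ImageLayout.Stretch; // fill the button face
```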
Upvotes: 3 [selected_answer]
<issue_start>username_0: I'm trying to upload multiple files/images using Vue.js and axios. My backend is ASP.NET Core. But the problem is, whenever I put a breakpoint in my request method, I always get Count = 0 in my controller.
Here are my simple HTML and my codes:
HTML
```
<div id="app">
  <input type="file" @change="fileChange($event.target.files)" multiple />
  <button @click="upload">Upload</button>
</div>
```
My JS
```
import axios from "axios"

var app = new Vue({
    el: "#app",
    data: {
        files: new FormData()
    },
    methods: {
        fileChange(fileList) {
            this.files.append("file", fileList[0], fileList[0].name);
        },
        upload() {
            const files = this.files;
            axios.post(`/MyController/Upload`, files, {
                headers: {
                    'Content-Type': 'multipart/form-data'
                }
            }).then(response => {
                alert(response.data);
            }).catch(error => {
                console.log(error);
            });
        }
    }
});
```
My Controller
```
public IActionResult Upload(IList<IFormFile> files)
{
return Json("Hey");
}
```
Any help please?<issue_comment>username_1: I had the same problem. The fix for me was instead to look at the request. You'll see there's a **Form** property and in there is your file(s). Here's a code snippet that worked for me.
And to give credit where credit is due: I found the answer [on this blog post](http://www.talkingdotnet.com/upload-file-angular-5-asp-net-core-2-1-web-api/). It's about Angular and the code snippet is from there.
```
[HttpPost]
public IActionResult UploadLogo()
{
try
{
var file = Request.Form.Files[0];
string folderName = "Upload";
string webRootPath = _hostingEnvironment.WebRootPath;
string newPath = Path.Combine(webRootPath, folderName);
if (!Directory.Exists(newPath))
{
Directory.CreateDirectory(newPath);
}
if (file.Length > 0)
{
string fileName = ContentDispositionHeaderValue.Parse(file.ContentDisposition).FileName.Trim('"');
string fullPath = Path.Combine(newPath, fileName);
using (var stream = new FileStream(fullPath, FileMode.Create))
{
file.CopyTo(stream);
}
}
return Json("Upload Successful");
}
catch (System.Exception ex)
{
return Json("Upload Failed: " + ex.Message);
}
}
```
Upvotes: 0 <issue_comment>username_2: This github repo might be helpful for you [ASP.NET Core FileUpload with Vue.JS & Axios](https://github.com/deanilvincent/ASP.NET-Core-FileUpload-with-VueJS-Axios)
Basically, he used `IFormCollection files` instead of `IList<IFormFile> files`.
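A related detail, not from the original answers: if you want to keep the `IList<IFormFile>` signature instead, the form-data key has to match the parameter name (`files`), since ASP.NET Core binds file collections by name; the question's code appends under the key `"file"`. A sketch of the client side:

```javascript
// Give every FormData entry the key "files", matching the controller
// parameter name, so model binding can find the uploaded files.
function buildUpload(files) {
    const form = new FormData();
    for (const f of files) {
        form.append("files", f);
    }
    return form;
}
```

The returned `FormData` can then be posted with axios exactly as in the question.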
Upvotes: 5 [selected_answer]
<issue_start>username_0: It tries to connect to the npm server and throws a red screen; I want to test some offline on-load features.
Or is there some way to debug without the npm server?<issue_comment>username_1: To debug on an Android device (USB connected) you need to do the following:
1. Enable USB debugging on your android device
2. Navigate to your `AndroidDevelopment/SDKManagerForAndroidStudio/Platform-tools/` directory in a terminal and type `adb devices` (this should pull up your connected device)
3. Again in the terminal type `adb reverse tcp:8081 tcp:8081`
You should then be able to debug on your device. [Here](https://facebook.github.io/react-native/docs/running-on-device.html) is a link to the docs (I think they explain how to go about this on ios, I've only tried it on Android)
Upvotes: 1 <issue_comment>username_2: To debug offline on iOS (React Native) you can run two commands:
- Once you have turned off your WiFi, run this command in the terminal:
`sudo ifconfig lo0 alias 192.xx.xx.xx` (192.xx.xx.xx is your IP address; you can get it by running the `ifconfig` command)
After running the above command, you should no longer get the red screen while offline.
To undo the above command you run:
`sudo ifconfig lo0 -alias 192.xx.xx.xx`; while still offline, you will now get the red screen again.
Hope this helps.
Upvotes: 2
<issue_start>username_0: So I have an entity Book
```
public class Book {
    private String id;
    private String name;
    private String description;
    private Image coverImage;
    private Set<Chapter> chapters;

    // Sets & Gets

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Book)) return false;
        Book book = (Book) o;
        return Objects.equals(name, book.name) &&
                Objects.equals(description, book.description) &&
                Objects.equals(coverImage, book.coverImage) &&
                Objects.equals(chapters, book.chapters);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, description, coverImage, chapters);
    }
}
```
an entity Chapter
```
public class Chapter {
    private String id;
    private String title;
    private String number;
    private LocalDate releaseDate;
    private Set<Distributor> distributors;

    // Sets & Gets

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Chapter)) return false;
        Chapter chapter = (Chapter) o;
        return Objects.equals(title, chapter.title) &&
                Objects.equals(number, chapter.number) &&
                Objects.equals(releaseDate, chapter.releaseDate) &&
                Objects.equals(distributors, chapter.distributors);
    }

    @Override
    public int hashCode() {
        return Objects.hash(title, number, releaseDate, distributors);
    }
}
```
and a Distributor entity
```
public class Distributor {
    private String id;
    private String name;
    private Image logoImage;

    // Sets & Gets

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Distributor)) return false;
        Distributor that = (Distributor) o;
        return Objects.equals(name, that.name) &&
                Objects.equals(logoImage, that.logoImage);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, logoImage);
    }
}
```
I have a `List<Chapter>` of old and new chapters, and I have to add only the new ones to the Book.
Hibernate fetches and populates the `Set<Chapter>` with all the chapters in the database using its custom implementation, `PersistentSet`.
The problem I'm trying to solve is to add only those chapters from the List that are not present in the PersistentSet. For this I thought that, as Chapter does not use the id field to calculate hashCode/equals, I could just add all the chapters from the List to the PersistentSet, and the result should be a Set that excluded those from the list that already exist and included those that are not in the set. Well... this is not happening.
Hibernate's `PersistentSet` is not using the hashCode/equals functions I defined for the `Distributor` entity but some internal implementation of its own, resulting in the Chapters from the List and the Set having different hashCodes and not being equal. Let's call a chapter from the List `lChapter` and a chapter from the PersistentSet `psChapter`, and assume they are equal except for the Id.
If I do
```
lChapter.equals(psChapter); //True
```
but If I do
```
psChapter.equals(lChapter); //False
```
And if I do
```
book.getChapters().addAll(chapters);
```
With `book` an attached entity with 20 chapters and `chapters` the List with 21 chapters, the result is a set with 41 chapters.
Am I doing something wrong here? It seems like a very trivial problem, yet I haven't found any solution that doesn't involve going through the List and checking whether each chapter is contained before adding it. It's an unnecessary extra step that I can't afford.
Edit 1: Image is a custom implementation that does implement hashCode/equals, and I have already verified it's not the problem. Even if I remove it from the entities, the results of the above experiments don't change.
Edit 2: I debugged the code and when doing `lChapter.equals(psChapter);` if you go into `Objects.equals(distributors, chapter.distributors)` of the Chapter's equals function, it goes to the Distributor equals function whereas on the `psChapter.equals(lChapter);` it goes into the PersistentSet one.<issue_comment>username_1: **UPDATE**
After going through the Hibernate JIRA issue [PersistentSet does not honor hashcode/equals contract when loaded eagerly](https://hibernate.atlassian.net/browse/HHH-3799), it turns out this is a known issue.
*The collection loaded eagerly in some situations calls hashCode on its items before their field values are populated, and thus the other methods like contains(), remove(), etc. are impacted.*
The fix is planned for 6.0.0 Alpha.
And as per one of the suggestions in the JIRA as a workaround, *it's much better to stick to LAZY collections. EAGER fetching is bad for performance, can lead to Cartesian Products, and you can't paginate the collection.*
And that should explain why
`lChapter.equals(psChapter);` returns `true`, since it uses the normal Set.equals
`psChapter.equals(lChapter);` returns `false`.
This goes via PersistentSet, which violates the hashCode/equals contract, and thus it is not guaranteed to return true even if the element is present in the Set. Further, it results in allowing duplicate elements to be added to the Set as well.
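As a stop-gap until that fix, a defensive merge can sidestep the broken contract by re-hashing the persistent set into a plain `HashSet` first (so hashCodes are computed from fully populated fields). The helper below is an illustrative sketch, not Hibernate API:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MergeSketch {
    // Copy the (possibly lazily-hashed) persistent set into a plain HashSet,
    // then add only the elements that are genuinely new. Returns how many
    // elements were added.
    static <T> int addNew(Set<T> target, List<T> candidates) {
        Set<T> seen = new HashSet<>(target); // entity hashCodes recomputed here
        int added = 0;
        for (T item : candidates) {
            if (seen.add(item)) {            // true only for new elements
                target.add(item);
                added++;
            }
        }
        return added;
    }
}
```

Here `target` would be `book.getChapters()` and `candidates` the incoming list of chapters.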
Upvotes: 1 <issue_comment>username_2: I tried your code with main method, and it is working as expected, as below
Could you please try with your code ?
```
public static void main(String[] args) {
    LocalDate now = LocalDate.now();

    Distributor db = new Distributor();
    db.setId("1");
    db.setLogoImage(null);
    db.setName("Name");
    Set<Distributor> dbs = new HashSet<>();
    dbs.add(db);

    Chapter c = new Chapter();
    c.setDistributors(dbs);
    c.setId("1");
    c.setNumber("123");
    c.setReleaseDate(now);
    c.setTitle("10:30");
    Set<Chapter> chapters = new HashSet<>();
    chapters.add(c);

    Book b = new Book();
    b.setChapters(chapters);
    b.setCoverImage(null);
    b.setDescription("Description");
    b.setId("1");
    b.setName("Name");

    Set<Distributor> dbs1 = new HashSet<>();
    Distributor db1 = new Distributor();
    db1.setId("1");
    db1.setLogoImage(null);
    db1.setName("Name");
    dbs1.add(db1);

    Chapter c1 = new Chapter();
    c1.setDistributors(dbs1);
    c1.setId("1");
    c1.setNumber("123");
    c1.setReleaseDate(now);
    c1.setTitle("10:30");

    System.out.println(chapters.add(c1));
    System.out.println(chapters.size());
}
```
Upvotes: 0 <issue_comment>username_3: A workaround for this issue is to add an `@OrderColumn` to the relationship between **Book** and **Chapter** like this:
```
public class Book {
    private String id;
    private String name;
    private String description;
    private Image coverImage;

    @OrderColumn
    private Set<Chapter> chapters;
    ...
}
```
This will get rid of the `PersistentSet` and instead use the `PersistentSortedSet` of Hibernate which does not violate the equals/hashCode contract.
By the way I was wondering why you don't have any `@OneToMany` / `@ManyToMany` annotations? Seems Hibernate does it automatically (which I find creepy). How does Hibernate decide whether it's a one-to-many or many-to-many relationship?
Upvotes: 2
<issue_start>username_0: I can change the color of the checkbox, but I cannot seem to get the color of the text label to change. I want to do this with CSS. Here is my code.
```css
.container {
display: block;
position: relative;
padding-left: 35px;
margin-bottom: 12px;
cursor: pointer;
font-size: 22px;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
.container input {
position: absolute;
opacity: 0;
cursor: pointer;
}
.checkmark {
position: absolute;
top: 0;
left: 0;
height: 25px;
width: 25px;
background-color: #eee;
}
.container:hover input~.checkmark {
background-color: #ccc;
}
.container+label input:checked~.checkmark {
background-color: #2196F3;
color: blue;
}
.checkmark:after {
content: "";
position: absolute;
display: none;
}
.container input:checked~.checkmark:after {
display: block;
}
.container .checkmark:after {
left: 9px;
top: 5px;
width: 5px;
height: 10px;
border: solid white;
border-width: 0 3px 3px 0;
-webkit-transform: rotate(45deg);
-ms-transform: rotate(45deg);
transform: rotate(45deg);
}
```
```html
<label class="container">One
  <input type="checkbox" checked="checked">
  <span class="checkmark"></span>
</label>
<label class="container">Two
  <input type="checkbox">
  <span class="checkmark"></span>
</label>
<label class="container">Three
  <input type="checkbox">
  <span class="checkmark"></span>
</label>
<label class="container">Four
  <input type="checkbox">
  <span class="checkmark"></span>
</label>
```
I tried adding the label selector with the checkbox as mentioned on many other websites, but it does not work.
I appreciate your help so much, I had spent many hours on this and I would be so relieved to have a solution. Thanks in advance!!<issue_comment>username_1: Add the following css rule, that should change the color:
```
label.container {
color: #7cbb7c;
}
```
**Update**
Change the html for the checkboxes like so:
```
<label class="container">
  <input type="checkbox">
  <span class="checkmark"></span>
  <span class="label">One</span>
</label>
```
Add the following css rule:
```
input:checked ~ span.label {
color: #ff00ff;
}
```
You can find more about it here: [CSS element1~element2 Selector](https://www.w3schools.com/cssref/sel_gen_sibling.asp)
### Example
```css
.container {
display: block;
position: relative;
padding-left: 35px;
margin-bottom: 12px;
cursor: pointer;
font-size: 22px;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
.container input {
position: absolute;
opacity: 0;
cursor: pointer;
}
.checkmark {
position: absolute;
top: 0;
left: 0;
height: 25px;
width: 25px;
background-color: #eee;
}
.container:hover input~.checkmark {
background-color: #ccc;
}
.container input:checked~.checkmark {
background-color: #2196F3;
}
/* Add the following css rule: */
.container input:checked~span.label {
color: #ff00ff;
}
.checkmark:after {
content: "";
position: absolute;
display: none;
}
.container input:checked~.checkmark:after {
display: block;
}
.container .checkmark:after {
left: 9px;
top: 5px;
width: 5px;
height: 10px;
border: solid white;
border-width: 0 3px 3px 0;
-webkit-transform: rotate(45deg);
-ms-transform: rotate(45deg);
transform: rotate(45deg);
}
```
```html
<label class="container">
  <input type="checkbox" checked="checked">
  <span class="checkmark"></span>
  <span class="label">One</span>
</label>
<label class="container">
  <input type="checkbox">
  <span class="checkmark"></span>
  <span class="label">Two</span>
</label>
<label class="container">
  <input type="checkbox">
  <span class="checkmark"></span>
  <span class="label">Three</span>
</label>
<label class="container">
  <input type="checkbox">
  <span class="checkmark"></span>
  <span class="label">Four</span>
</label>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Next selector `+`
-----------------
>
> Selects elements that are placed immediately after (not inside) the first specified element.
>
>
>
Wrap the label text with a `span` and move it next to `.checkmark`:
```
<label class="container">
  <input type="checkbox">
  <span class="checkmark"></span>
  <span class="label">text</span>
</label>
```
Now you can select it like this:
```
input:checked~.checkmark + .label
```
Example
-------
```css
.container {
display: block;
position: relative;
padding-left: 35px;
margin-bottom: 12px;
cursor: pointer;
font-size: 22px;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
.container input {
position: absolute;
opacity: 0;
cursor: pointer;
}
.checkmark {
position: absolute;
top: 0;
left: 0;
height: 25px;
width: 25px;
background-color: #eee;
}
.container:hover input~.checkmark {
background-color: #ccc;
}
.container input:checked~.checkmark {
background-color: #2196F3;
}
/* Add the following css rule: */
.container input:checked~.checkmark+.label {
color: #2196F3;
}
.checkmark:after {
content: "";
position: absolute;
display: none;
}
.container input:checked~.checkmark:after {
display: block;
}
.container .checkmark:after {
left: 9px;
top: 5px;
width: 5px;
height: 10px;
border: solid white;
border-width: 0 3px 3px 0;
-webkit-transform: rotate(45deg);
-ms-transform: rotate(45deg);
transform: rotate(45deg);
}
```
```html
<label class="container">
  <input type="checkbox">
  <span class="checkmark"></span>
  <span class="label">First</span>
</label>
<label class="container">
  <input type="checkbox">
  <span class="checkmark"></span>
  <span class="label">Second</span>
</label>
<label class="container">
  <input type="checkbox">
  <span class="checkmark"></span>
  <span class="label">Third</span>
</label>
```
Upvotes: 1 |
2018/03/19 | 1,393 | 4,179 | <issue_start>username_0: When I try to use the function `List.nth`, the sml shell returns this error message:
```
- List.nth([1,2,3],0);
[autoloading]
unexpected exception (bug?) in SML/NJ: Io [Io: openIn failed on "/Users/jhr/Work/smlnj/osx-dist/smlnj.dst/sml.boot.x86-unix/smlnj/basis/.cm/x86-unix/basis.cm", No such file or directory]
raised at: Basis/Implementation/IO/bin-io-fn.sml:617.25-617.71
../cm/util/safeio.sml:30.11
../compiler/TopLevel/interact/evalloop.sml:42.54
```
It returns the same error message if I use `nth` without `List.` as well.
So I can guess that there is some problem with 'autoloading' the library.
But there are some more strange things.
If I use another basis library function `length`, it works fine. Like this:
```
- length ([1,2,3]);
val it = 3 : int
```
But what if I use `List.length`? It goes to error. Like this:
```
- List.length([1,2,3]);
[autoloading]
unexpected exception (bug?) in SML/NJ: Io [Io: openIn failed on "/Users/jhr/Work/smlnj/osx-dist/smlnj.dst/sml.boot.x86-unix/smlnj/basis/.cm/x86-unix/basis.cm", No such file or directory]
raised at: Basis/Implementation/IO/bin-io-fn.sml:617.25-617.71
../cm/util/safeio.sml:30.11
../compiler/TopLevel/interact/evalloop.sml:42.54
```
So it seems there is definitely something wrong with 'autoloading' stuff, but I can't figure out how to fix it.
Please help me find the problem and fix it!
Note:
1. I use Mac OS X 10.13.3 and version 110.81 of the SML/NJ compiler.
2. 'jhr' in the path is the previous 'user name'. I changed to 'cadenzah'. That is why that path does not exist. There is 'cadenzah' directory in 'Users' directory.
ps. Maybe there is some problem with the directory structure of the compiler itself between the previous version and this one (v110.81)?<issue_comment>username_1: How did you install SML/NJ on your Mac?
It seems that the compiler resides in a user-owned directory; I would recommend that you try and install SML/NJ via Homebrew as [this blog post](http://islovely.co/posts/painless-installation-of-sml-on-os-x/) instructs:
```
$ ruby <(curl -fsSk https://raw.github.com/mxcl/homebrew/go)
$ brew update
$ brew install smlnj
```
Since you're not asking how to install SML/NJ, this isn't a duplicate of the following questions:
* [SML not detecting OS on OS X Mavericks](https://stackoverflow.com/questions/20009628/sml-not-detecting-os-on-os-x-mavericks)
* [How do I install a working version of Standard ML on Mac?](https://stackoverflow.com/questions/19024116/how-do-i-install-a-working-version-of-standard-ml)
But perhaps you should be asking that question instead and not this one. :)
Otherwise, try and set the current username to 'Cadenzah' instead of 'cadenzah' so it matches the capitalisation of your user directory. Unix filesystems tend to be case sensitive. Even though MacOS is not, by default, this may cause some conflicts in software that don't respect local filesystem laws.
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you are having this issue and can't resolve it by installing via Homebrew, try setting the environment variable `SMLNJ_HOME` to the installation directory. This may resolve some "File not found" errors.
Example:
```
% pwd
/usr/local/smlnj
% ls
MLRISC/ bin/ cml/ doc/ ml-burg/ ml-lpt/ nlffi/ smlnj-lib/ license.html
base/ ckit/ config/ lib/ ml-lex/ ml-yacc/ null trace-debug-profile/
% export SMLNJ_HOME=/usr/local/smlnj/
```
You can then add that to your `.zshrc`, etc.
Upvotes: 2 <issue_comment>username_3: I solved the same issue by exporting `SMLNJ_HOME` mentioned in SML's MacOS installation documentation.
Execute:
`echo 'export SMLNJ_HOME="/usr/local/smlnj"' >> $HOME/.bash_profile`
Then, source it into your current command line environment:
`source $HOME/.bash_profile`
SML is then capable of loading extra functions from the core library:
```
Standard ML of New Jersey v110.79 [built: Sun Oct 4 14:45:06 2015]
- List.nth;
[autoloading]
[library $SMLNJ-BASIS/basis.cm is stable]
[library $SMLNJ-BASIS/(basis.cm):basis-common.cm is stable]
[autoloading done]
val it = fn : 'a list * int -> 'a
```
Upvotes: 0 |
2018/03/19 | 962 | 3,053 | <issue_start>username_0: I am getting this error in the log:
Though I am running this on the simulator, will this matter in the testing stage?
>
> canOpenURL: failed for URL: "tel://0478733797" - error: "This app is not allowed to query for scheme tel"
> callNumber button pressed
>
>
>
Here is my function.
The string is "0478733797"
```
func callNumber(phoneNumber:String) {
if let phoneCallURL = URL(string: "tel://\(phoneNumber)") {
let application:UIApplication = UIApplication.shared
if (application.canOpenURL(phoneCallURL)) {
application.open(phoneCallURL, options: [:], completionHandler: nil)
}
}
}
```
2018/03/19 | 209 | 720 | <issue_start>username_0: I have to display the names of employees, salary and job where the salary must be greater than the minimum salary in the table and his/her job should start with the letter 'M'.
I tried this
```
select ename,sal,job from emp where sal>min(sal) and job like "M%";
```
but it says
```
Invalid use of group function.
```<issue_comment>username_1: You can try the following:
```
SELECT ename, sal, job from emp where sal > (SELECT min(`sal`) from emp) and job like "M%"
```
Upvotes: 0 <issue_comment>username_2: You can use an inner query for the minimum salary.
```
SELECT ename, sal, job
from emp
where sal > (SELECT min(sal) from emp) -- inner query
and job like "M%"
```
Upvotes: 2 [selected_answer] |
2018/03/19 | 890 | 2,567 | <issue_start>username_0: ```
@echo off
goto :food
SETLOCAL EnableDelayedExpansion
for /F "tokens=1,2 delims=#" %%a in ('"prompt #$H#$E# & echo on & for %%b in (1) do rem"') do (
set "DEL=%%a"
)
:fruits
set i=0
for %%a in (apple banana grape lime) do (
set /A i+=1
set fruit[!i!]=%%a
)
set /a fruit=%random%%%4+1
set fruit=!fruit[%fruit%]!
exit /B
:food
for /l %%x in (1, 1, 5) do (
call :fruits
call :colorEcho 70 !fruit!
echo/
)
pause
exit
:colorEcho
echo off
<nul set /p ".=%DEL%" > "%~2"
findstr /v /a:%1 /R "^$" "%~2" nul
del "%~2" > nul 2>&1
```
For some reason that I don't know, this code does not output correctly. It outputs just blank spaces because for some reason the "fruit" variable is never being filled. Can anyone explain this to me? I have another script which works fine using a similar structure but extracting these parts out of that script totally breaks it...any help is much appreciated!<issue_comment>username_1: You have to make the backspace first. The `colorecho` function requires that variable be defined. You are skipping over it because the second line of your code does a `GOTO FOOD`. You are also skipping over the `SETLOCAL` command which will screw up the delayed expansion for your fruit array.
```
@echo off
SETLOCAL EnableDelayedExpansion
for /F "tokens=1,2 delims=#" %%a in ('"prompt #$H#$E# & echo on & for %%b in (1) do rem"') do (
set "DEL=%%a"
)
:fruits
set i=0
for %%a in (apple banana grape lime) do (
set /A i+=1
set fruit[!i!]=%%a
)
:food
for /l %%x in (1, 1, 5) do (
call :rand
call :colorEcho 70 !fruit!
echo/
)
pause
exit
:rand
set /a rand=%random%%%4+1
set fruit=!fruit[%rand%]!
GOTO :EOF
:colorEcho
echo off
<nul set /p ".=%DEL%" > "%~2"
findstr /v /a:%1 /R "^$" "%~2" nul
del "%~2" > nul 2>&1
GOTO :EOF
```
Upvotes: 1 <issue_comment>username_2: You have the right code in the [accepted answer](https://stackoverflow.com/a/49305748/778560) at your previous question. The "some reason" why your code does not work is the modifications you introduced into that code, so you just need to duplicate the same scheme...
There are multiple ways to achieve the same thing. I always prefer the simplest one; for example:
```
@echo off
setlocal EnableDelayedExpansion
for /F %%a in ('echo prompt $H ^| cmd') do set "BS=%%a"
:food
for /l %%x in (1, 1, 5) do (
call :fruits
call :colorEcho 70 !fruit!
echo/
)
pause
goto :EOF
:fruits
set /A i=%random%%%4+1
for /F "tokens=%i%" %%a in ("apple banana grape lime") do set fruit=%%a
exit /B
:colorEcho color text
set /P "=%BS% " > "%~2" <nul
findstr /v /a:%1 /R "^$" "%~2" nul
del "%~2" > nul 2>&1
```
Upvotes: 3 [selected_answer] |
2018/03/19 | 1,149 | 3,637 | <issue_start>username_0: I want to get table contents from this website: "<https://www.premierleague.com/stats/top/players/red_card?se=42&cl=2>". When
I Inspect Element, on Chrome browser, I can find the table entries in the DOMTree as displayed in the browser. But when I run the following code, I get a different table which corresponds to the table in <https://www.premierleague.com/stats/top/players/red_card>.
```
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
BASEURL = "https://www.premierleague.com/stats/top/players/"
driver = webdriver.Chrome("/Users/manpreet/Downloads/chromedriver")
driver.get("https://www.premierleague.com/stats/top/players/red_card?se=42&cl=2")
##for i in range(5000):
## print i
## time.sleep(1)
try:
elem = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.XPATH, '//*[@id="mainContent"]/div[2]/div/div[2]/div[1]/div[2]/table'))
)
finally:
print('10 secs over')
print(elem.text)
```
I called the WebDriverWait function for up to 30 seconds but I don't get the correct table. I noticed that when I use WebDriverWait, the browser opened by Selenium displays the table in <https://www.premierleague.com/stats/top/players/red_card> for the entire duration of 30 seconds. But when I don't use WebDriverWait, the driver first displays the table in <https://www.premierleague.com/stats/top/players/red_card>, the page loads for a few seconds and then displays the table in <https://www.premierleague.com/stats/top/players/red_card?se=42&cl=2>. The whole process only takes about 5-6 seconds (at most). I think the Ajax call is getting stuck when I use WebDriverWait. And this might be the reason Selenium doesn't return the correct table, because Selenium scrapes the displayed content.
Can anybody tell me how to get the correct table?
2018/03/19 | 367 | 1,256 | <issue_start>username_0: I am trying to define vectorizer parameters for use in a model, but Python keeps saying that I am missing a parameter. Reviews is a list of restaurant reviews I have web scraped from Yelp. The problem occurs with `.fit_transform()`; I have the following:
```
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000,
min_df=0.2, stop_words='english',
use_idf=True, tokenizer=tokenize_and_stem, ngram_range=(1,3))
%time tfidf_matrix = TfidfVectorizer.fit_transform(Reviews)
print(tfidf_matrix)
```<issue_comment>username_1: You created `tfidf_vectorizer` object but it is not used. You should use `tfidf_vectorizer.fit_transform(Reviews)`.
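As an aside, the failure mode is ordinary Python method-call mechanics rather than anything sklearn-specific: calling an instance method through the class makes the first positional argument fill the `self` slot, so Python then complains about a missing parameter. A minimal stdlib-only sketch (the `Greeter` class is made up purely for illustration):

```python
class Greeter:
    def greet(self, name):
        return "hello " + name

# Calling through the class: "world" lands in the `self` slot and `name`
# goes unfilled -- the same shape of error as TfidfVectorizer.fit_transform(Reviews).
try:
    Greeter.greet("world")
except TypeError as err:
    print("TypeError:", err)

# Calling on an instance supplies `self` implicitly, so the call succeeds.
print(Greeter().greet("world"))
```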
Upvotes: 2 <issue_comment>username_2: When you use `.fit_transform`, you need to call it on a vectorizer instance and pass it an iterable (a list, tuple, etc.) of documents to iterate over.
Example:
```
reviews = ["first review", "second review", "third review"]  # here is your data
vectorizer = TfidfVectorizer()
vectorizer.fit_transform(reviews)
```
It is important that you don't have null or None values in your data.
If you have only one value, you can also do this and it works:
```
reviews = ["Only Value"]
vectorizer = TfidfVectorizer()
vectorizer.fit_transform(reviews)
```
Upvotes: 0 |
2018/03/19 | 1,055 | 3,353 | <issue_start>username_0: I'm trying to learn C++ template metaprogramming by implementing some functions. I know the solution to this specific problem has been provided on Stack Overflow; what I'm interested in is *understanding why this solution doesn't work*. Here's the code:
```
template < std::size_t... Ns , typename... Ts >
auto tail_impl( std::index_sequence<Ns...> , std::tuple<Ts...> t )
{
    return std::make_tuple( std::get<Ns + 1u>(t)... );
}

template < typename T , typename... Ts >
tuple<Ts...> tail( std::tuple<T, Ts...> t )
{
    return tail_impl( std::make_index_sequence<sizeof...(Ts)>() , t );
}

template <typename V, typename T, typename... Ts>
constexpr bool check_for_type(tuple<T, Ts...> t) {
    if constexpr(is_same<V, T>::value) {
        return true;
    }
    return check_for_type<V>(tail(t));
}

template <typename V>
constexpr bool check_for_type(tuple<> t) {
    return false;
}

int main( int argc, const char *argv[]) {
    auto t2 = make_tuple(4, "qw", 6.5);
    double f = check_for_type<char>(t2);
return 0;
}
```
This template is supposed to check if a tuple contains an element of a certain type, but compiling it gives the following error:
```
> clang++ t.cpp -std=c++17
t.cpp:45:12: error: call to function 'check_for_type' that is neither visible in the
template definition nor found by argument-dependent lookup
return check_for_type(tail(t));
^
t.cpp:45:12: note: in instantiation of function template specialization
'check_for_type' requested here
t.cpp:45:12: note: in instantiation of function template specialization
'check_for_type' requested here
t.cpp:66:16: note: in instantiation of function template specialization
'check_for_type' requested here
double f = check_for_type(t2);
^
t.cpp:58:16: note: 'check_for_type' should be declared prior to the call site
constexpr bool check_for_type(tuple<> t) {
^
1 error generated.
```
What's wrong with this piece of code?<issue_comment>username_1: >
>
> ```
> template <typename V, typename T, typename... Ts>
> constexpr bool check_for_type(tuple<T, Ts...> t) {
>     if constexpr(is_same<V, T>::value) {
>         return true;
>     }
>     return check_for_type<V>(tail(t));
> }
>
> ```
>
>
You are calling `check_for_type(...)` before it is declared. Helpfully, your error message states:
>
>
> ```
> t.cpp:58:16: note: 'check_for_type' should be declared prior to the call site
> constexpr bool check_for_type(tuple<> t) {
>
> ```
>
>
Once you do that, the code [compiles](https://godbolt.org/g/x6zR2e):
```
// Put this function first
template <typename V>
constexpr bool check_for_type(tuple<> t) {
    return false;
}

template <typename V, typename T, typename... Ts>
constexpr bool check_for_type(tuple<T, Ts...> t) {
    if constexpr(is_same<V, T>::value) {
        return true;
    }
    // Right here, the compiler looks up `check_for_type` and doesn't find
    // an overload that can be called with just one explicit type parameter.
    return check_for_type<V>(tail(t));
}
```
Upvotes: 2 <issue_comment>username_2: Since you are using c++17 in your code, I thought it would make sense to point out that there are a lot of new tools to avoid having to make these kind of recursive templates.
You could condense the whole thing to this:
```
#include <iostream>
#include <tuple>
#include <type_traits>

template <typename V, typename... Ts>
constexpr bool check_for_type(std::tuple<Ts...>) {
    return std::disjunction_v<std::is_same<V, Ts>...>;
}

int main() {
    std::tuple<int, float, char> tup;
    std::cout << check_for_type<double>(tup) << '\n';
    std::cout << check_for_type<char>(tup) << std::endl;
    return 0;
}
```
If `std::disjunction` gets no parameters it defaults to false, so passing an empty tuple is also covered here.
Upvotes: 3 |
2018/03/19 | 682 | 2,735 | <issue_start>username_0: Suppose, while the user is using the app he long taps the home button and siri opens up. Is there any way to know this through some event or notification or delegate methods?
I want to know if Siri is launched while my app is running. Is there any sure way to know?<issue_comment>username_1: Based on your requirements, there is no method that will notify the app when Siri opens or closes.
But if you want to catch related interruptions, there are some methods where you will get notified when the actions below occur:
1. A phone call is received.
2. A WhatsApp or any other video call is received.
3. Siri is opened with a long press on the home button.
In all of the scenarios above the app is notified in the methods below, and you can handle the event there.
**==>** `applicationWillResignActive`
```
func applicationWillResignActive(_ application: UIApplication) {
}
```
>
> * Sent when the application is about to move from active to inactive state. This can occur for certain types of temporary interruptions
> (such as an incoming phone call or SMS message) or when the user quits
> the application and it begins the transition to the background state.
> * Use this method to pause ongoing tasks, disable timers, and invalidate graphics rendering callbacks. Games should use this method
> to pause the game.
>
>
>
**==>** `applicationWillEnterForeground` & `applicationDidBecomeActive` (called when the user comes back to your app from the background)
```
func applicationWillEnterForeground(_ application: UIApplication) {
}
```
>
> * Called as part of the transition from the background to the active
> state; here you can undo many of the changes made on entering the
> background.
>
>
>
```
func applicationDidBecomeActive(_ application: UIApplication) {
}
```
>
> * Restart any tasks that were paused (or not yet started) while the application was inactive. If the application was previously in the
> background, optionally refresh the user interface.
>
>
>
Other than the methods above, you cannot get any notification that tells you Siri was opened from within your app.
Hope this helps you understand the app delegate's notification flow.
Upvotes: -1 <issue_comment>username_2: Swift 4.1 - iOS 12.0.0
So far there is only a *work-around solution* to detect whether Siri is running or not for my use-case.
**Caution**: Please test it carefully in your project to double check (*since we are relying on AVAudioSession which is not related with Siri Kit*)
```
func hasSiri() -> Bool {
let audioSession = AVAudioSession.sharedInstance()
return audioSession.inputDataSource == nil ||
audioSession.inputDataSources == nil ||
audioSession.inputDataSources?.isEmpty == true
}
```
Upvotes: 1 |
2018/03/19 | 841 | 3,124 | <issue_start>username_0: My problem is that I have br's in my html in order to make the nav vertical for the 480px viewport, but I need the nav to be horizontal and centered for the 1024 and 1280px viewports. Here is my html:
```
<nav>
  <h3>Links</h3>
  <a href="index.html">Home</a><br>
  <a href="mission.html">Mission</a><br>
  <a href="about.html">About Us</a><br>
  <a href="products.html">Products</a><br>
  <a href="http://www.somewebsiteidkyet.net">Somewebsitehere.net</a>
</nav>
```
Here is the CSS:
```
@media screen and (min-width: 1024px) {
#wrapper {
width: 900px;
}
header {}
article.left {
float: left;
clear: left;
}
aside.complementary {
float: right;
clear: right;
}
nav {
display: inline-block;
}
```
2018/03/19 | 306 | 970 | <issue_start>username_0: The following code raises `TypeError: input expected at most 1 arguments but got 3`. I am unsure how to fix this.
```
def leg_count(w):
x = input("How many legs does a", w, "have? ")
print("A", w, "has", x, "legs")
leg_count("crocodile")
```<issue_comment>username_1: The function `input` takes a single argument. It cannot be used the same way as `print` which will take and print multiple arguments. You will need to use `str.format` to do what you want.
```
def leg_count(w):
x = input("How many legs does a {} have? ".format(w))
print("A", w, "has", x, "legs")
```
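The same prompt can also be built with an f-string (Python 3.6+). A small sketch; the extra `ask` parameter is my own addition so the function can be demonstrated without interactive input:

```python
def leg_count(w, ask=input):
    # `ask` defaults to the builtin input(); pass a stand-in callable
    # to exercise the function without a real prompt.
    x = ask(f"How many legs does a {w} have? ")
    print(f"A {w} has {x} legs")
    return x

# Non-interactive usage: the lambda plays the role of the user typing "4".
answer = leg_count("crocodile", ask=lambda prompt: "4")
# prints: A crocodile has 4 legs
```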
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```
def leg_count(w):
x = input("How many legs does a " + w + " have ?: ")
print ("A " + w + " has " + str(x) + " legs")
leg_count("crocodile")
```
If you look at the `input` documentation:
`input` has one optional parameter, the prompt string, so you need to pass the whole prompt as a single string.
Upvotes: 0 |
2018/03/19 | 2,168 | 6,633 | <issue_start>username_0: Hi, I am trying to design a multi-select drop-down list using a CSS class. I am trying to develop a multi-select drop-down list box exactly like the one below.
[](https://i.stack.imgur.com/Z9gDW.png)
I have the HTML code below.
```css
.checkbox {
position: relative;
display: block;
margin-top: 10px;
margin-bottom: 10px;
}
.dropdown-menu {
position: absolute;
top: 100%;
left: 0;
z-index: 1000;
float: left;
min-width: 150px;
max-height: 600px;
overflow-x: visible;
overflow-y: visible;
padding: 0;
margin: 0;
list-style: none;
font-size: 13px;
font-weight: 500;
text-align: left;
background-color: #FFFFFF;
border: 1px solid rgba(0, 0, 0, 0.15);
border-radius: 0;
-webkit-box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
-webkit-background-clip: padding-box;
background-clip: padding-box;
color: #464646;
}
.btn-group, .btn-group-vertical {
position: relative;
display: inline-block;
vertical-align: middle;
}
toggle.btn-default {
background: #dedede;
background: rgba(0, 0, 0, 0.13);
border: 1px solid #2E92FA;
color: #464646;
outline: none;
}
```
```html
<div class="btn-group">
  <button type="button" class="toggle btn-default" data-toggle="dropdown">Filter by</button>
  <ul class="dropdown-menu open">
    <li><label class="checkbox-inline"><input type="checkbox">Sensors</label></li>
    <li><label class="checkbox-inline"><input type="checkbox">Actuators</label></li>
    <li><label class="checkbox-inline"><input type="checkbox">Digital inputs</label></li>
    <li><label class="checkbox-inline"><input type="checkbox">Outputs</label></li>
    <li><label class="checkbox-inline"><input type="checkbox">Converters</label></li>
  </ul>
</div>
```
In the above code the checkboxes appear in the middle, and the drop-down should expand on click but be collapsed by default. Can someone help me make this work as expected? Any help would be appreciated. Thank you.<issue_comment>username_1: Remove the `open` class from the `<ul>` and add it only when the filter button is clicked.
Upvotes: 0 <issue_comment>username_2: You can find the code in this demo: [demo](http://jsfiddle.net/x2z08o3r/)
Upvotes: 0 <issue_comment>username_3: Try to remove the `open` class first.
```js
$('.btn-group').click(function(e) { $('.dropdown-menu').toggleClass('open'); });
```
```css
body {
margin:0;
}
.checkbox {
position: relative;
display: block;
margin-top: 10px;
margin-bottom: 10px;
}
.dropdown-menu {
position: absolute;
top: 100%;
left: 0;
z-index: 1000;
float: left;
min-width: 150px;
max-height: 600px;
overflow-x: visible;
overflow-y: visible;
padding: 0;
margin: 0;
list-style: none;
font-size: 13px;
font-weight: 500;
text-align: left;
background-color: #FFFFFF;
border: 1px solid rgba(0, 0, 0, 0.15);
border-radius: 0;
-webkit-box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
-webkit-background-clip: padding-box;
background-clip: padding-box;
color: #464646;
transition:all .3s;
transform: translate(-100%);
}
.dropdown-menu.open {
transform: translate(0%);
}
.btn-group, .btn-group-vertical {
position: relative;
display: inline-block;
vertical-align: middle;
}
toggle.btn-default {
background: #dedede;
background: rgba(0, 0, 0, 0.13);
border: 1px solid #2E92FA;
color: #464646;
outline: none;
}
label.checkbox-inline {
display: contents;
}
```
```html
<div class="btn-group">
  <button type="button" class="toggle btn-default">Filter by</button>
  <ul class="dropdown-menu">
    <li><label class="checkbox-inline"><input type="checkbox">Sensors</label></li>
    <li><label class="checkbox-inline"><input type="checkbox">Actuators</label></li>
    <li><label class="checkbox-inline"><input type="checkbox">Digital inputs</label></li>
    <li><label class="checkbox-inline"><input type="checkbox">Outputs</label></li>
    <li><label class="checkbox-inline"><input type="checkbox">Converters</label></li>
  </ul>
</div>
```
Upvotes: 2 <issue_comment>username_4: What I believe you want to do is add a JavaScript onclick function that shows and hides your checkbox list when Filter is clicked, like so:
```
let btn = document.querySelector('.btn-group');
let ul = document.querySelector('.dropdown-menu');
btn.addEventListener('click', function () {
    if (ul.style.display === 'block') {
        ul.style.display = 'none';
    } else {
        // first click shows the menu (the stylesheet default is display: none)
        ul.style.display = 'block';
    }
});
```
and here is the updated css
```
.checkbox {
position: relative;
display: block;
margin-top: 10px;
margin-bottom: 10px;
}
.dropdown-menu {
display:none;
position: absolute;
top: 100%;
left: 0;
z-index: 1000;
float: left;
min-width: 150px;
max-height: 600px;
overflow-x: visible;
overflow-y: visible;
padding: 0;
margin: 0;
list-style: none;
font-size: 13px;
font-weight: 500;
text-align: left;
background-color: #FFFFFF;
border: 1px solid rgba(0, 0, 0, 0.15);
border-radius: 0;
-webkit-box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
-webkit-background-clip: padding-box;
background-clip: padding-box;
color: #464646;
}
.btn-group, .btn-group-vertical {
position: relative;
display: inline-block;
vertical-align: middle;
}
toggle.btn-default {
background: #dedede;
background: rgba(0, 0, 0, 0.13);
border: 1px solid #2E92FA;
color: #464646;
outline: none;
}
```
Upvotes: 1 [selected_answer]<issue_comment>username_5: You can check the following
**CCS**
```
.checkbox {
position: relative;
display: inline-block;
margin-top: 10px;
margin-bottom: 10px;
}
.dropdown-menu {
position: relative;
top: 100%;
left: 0;
z-index: 1000;
float: left;
min-width: 250px;
max-height: 600px;
overflow-x: visible;
overflow-y: visible;
padding: 0;
margin: 0;
list-style: none;
font-size: 13px;
font-weight: 500;
text-align: left;
background-color: #FFFFFF;
border: 1px solid rgba(0, 0, 0, 0.15);
border-radius: 0;
-webkit-box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
-webkit-background-clip: padding-box;
background-clip: padding-box;
color: #464646;
}
.dropbtn {
background-color: #4CAF50;
color: white;
padding: 16px;
font-size: 16px;
border: none;
}
.dropdown:hover .dropbtn {
background-color: #3e8e41;
}
```
**HTML**
```
Filter by
* Sensors
* Actuators
* Digital inputs
* Outputs
* Converters
```
Upvotes: 0 |
2018/03/19 | 329 | 1,001 | <issue_start>username_0: ```
Dim i As Integer = 0
While i < 10
    gridview.RowCount = gridview.RowCount + 1
    gridview.Row(i).Cells(0) = i
    i += 1
End While
```
I want to increase the grid view's row count each time I add a new row with the above code, but it only updates the last row and skips the existing rows in the data grid view, so only the last row gets a value.<issue_comment>username_1: Not sure what you want to do. You can always get the row count as **gridview.RowCount - 1**
The code below will set the first column's cells to the row sequence 0, 1, 2, 3, ...
```
Dim rowcount As Integer = gridview.RowCount - 1
Dim i As Integer
For i = 0 To rowcount
    gridview.Row(i).Cells(0) = i
Next
```
Upvotes: 2 <issue_comment>username_2: ```vb
gridview.AllowUserAddRows = False
Dim i As Integer = 0
While i < 10
    gridview.RowCount = gridview.RowCount + 1
    gridview.Row(i).Cells(0) = i
    i += 1
End While
```
This worked after I just put the following line at the top:
`gridview.AllowUserAddRows = False`
Upvotes: 2 [selected_answer] |
2018/03/19 | 657 | 2,183 | <issue_start>username_0: I tried to go through numa\_distance() and other related functions (from the 1st link), but couldn't understand them. I am just trying to understand how Linux calculates the NUMA distance between two nodes, since this distance is said to vary based on architecture and the NUMA interconnect.
I referred following links:
1. <https://github.com/jmesmon/numactl/blob/0df3f720e606a3706700e0487ba19d720f50c4b8/distance.c>
2. <https://github.com/jmesmon/numactl/blob/0df3f720e606a3706700e0487ba19d720f50c4b8/numa.h>
3. <https://github.com/jmesmon/numactl/blob/0df3f720e606a3706700e0487ba19d720f50c4b8/libnuma.c><issue_comment>username_1: Inside (recent versions of) the ACPI specification you'll find a description of a table called "SLIT"/System Locality (Distance) Information Table. This table is just an array (like `d = array[numa_node][numa_node]`) that an operating system uses to determine the relative distance between any 2 NUMA nodes; where the values in the array range from 10 to 254 (255 is used for "no connection between these NUMA domains"), where the value 10 represents how quickly something in a NUMA domain can access something in the same NUMA domain (the fastest case) and the value 254 would be 25.4 times slower.
I'd assume that firmware populates this table using hard-coded values - e.g. the motherboard manufacturer might do a few measurements and determine value/s that are good enough for all models of CPUs that the motherboard supports.
Firmware provides this table to the OS. The OS doesn't calculate anything.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The distances are hard coded by the firmware in ACPI SLIT tables and represent relative memory latency between NUMA nodes -- a distance of "10" means a latency of 1x and a distance of "20" is "2x" more latency than local node access. Linux exposes those values in sysfs, but there are a bunch of ways to access them (including dumping the ACPI tables directly if you fancy it).
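For instance, a short shell loop can dump that sysfs view. This is a sketch assuming a NUMA-enabled Linux kernel, where each `distance` file holds one row of the SLIT (10 meaning local):

```shell
# Print each node's SLIT distance row; on a kernel without NUMA
# support the glob matches nothing and the loop prints nothing.
for d in /sys/devices/system/node/node*/distance; do
    if [ -r "$d" ]; then
        printf '%s: %s\n' "$d" "$(cat "$d")"
    fi
done
```

On a single-socket machine this typically prints a single row containing just `10`.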
The actual memory latency between nodes is rarely as bad as the ACPI tables claim, at least from my testing, <http://www.codeblueprint.co.uk/2019/07/12/what-are-slit-tables.html>.
Upvotes: 2 |
2018/03/19 | 562 | 2,040 | <issue_start>username_0: I would like to replace a string containing ${user.home} in a file using sed on Linux, but I'm unable to do it. I tried the options below with sed, but they failed.
The input file:
===============
```
```
Tried code to replace `${user.home}`:
=====================================
```
sed -i "s/$${user.home}/r_str/g" 1.xml
sed -i "s/${user.home}/r_str/g" 1.xml
sed -i "s/\$\{user\.home\}/r_str/g" 1.xml
```
Actual:
=======
```
```
Expected:
=========
```
```
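For reference, a hedged sketch of a working form: the double-quoted attempts fail because the shell either expands `${user.home}` itself (a `bad substitution` error) or hands sed a malformed pattern, while single quotes pass the text through untouched and `\$`/`\.` escape the regex metacharacters. The sample file content is hypothetical, since the original input file was not shown:

```shell
# Hypothetical stand-in for the question's 1.xml (original content not shown).
printf '%s\n' '<app home="${user.home}/data"/>' > 1.xml

# Single quotes stop the shell from touching ${user.home};
# \$ and \. make sed treat '$' and '.' as literal characters.
sed -i 's/\${user\.home}/r_str/g' 1.xml
cat 1.xml   # -> <app home="r_str/data"/>
```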
2018/03/19 | 1,575 | 5,001 | <issue_start>username_0: I am trying to send a float value between two Arduinos via SPI. Right now I am working on sending a static value of 2.25 across and then reading it via the `Serial.println()` command. I would then want to pass a float value from a linear displacement sensor. My end goal is to have the master ask for information, the slave gather the appropriate data and package it, and the master receive said data and do what it needs with it.
Currently I am getting the error "call of overloaded 'println(byte [7])' is ambiguous" and I am not too sure why. I am a mechanical engineering student crash-coursing myself through C/C++, so I am not entirely positive about what I am doing. I know that a float is 4 bytes, and I am attempting to create a buffer of 7 bytes to store the float and the '\n' char with room to spare. My current code is below.
Master:
```
#include <SPI.h>
void setup() {
pinMode(SS,OUTPUT);
digitalWrite(SS,HIGH);
SPI.begin();
SPI.setClockDivider(SPI_CLOCK_DIV4);
}
void loop() {
digitalWrite(SS,LOW);
float a = 2.25;
SPI.transfer(a);
SPI.transfer('\n');
digitalWrite(SS,HIGH);
}
```
My slave code is as follows:
```
#include <SPI.h>

byte buf[7];
volatile byte pos = 0;
volatile boolean process_it = false;

void setup() {
  Serial.begin(9600);
  pinMode(MISO, OUTPUT);
  digitalWrite(MISO, LOW);
  SPCR |= _BV(SPE);  // SPI Enable, sets this Arduino to Slave
  SPCR |= _BV(SPIE); // SPI interrupt enabled
}

ISR(SPI_STC_vect) {
  // Interrupt Service Routine (SPI_STC = SPI Transfer Complete vector)
  byte c = SPDR;
  // SPDR = SPI Data Register, so you are saving the byte of information in that register to byte c
  if (pos < sizeof buf) {
    buf[pos++] = c;
    if (c == '\n') {
      process_it = true;
    }
  }
}

void loop() {
  if (process_it) {
    Serial.println(buf);
    pos = 0;
    process_it = false;
  }
}
```<issue_comment>username_1: The immediate problem is that Serial.print doesn't know what to do with a byte array. Either declare it as a char array or cast it in the print statement:
```
char buf[7];
```
OR
```
Serial.print((char*) buf);
```
Either way, though, it's not going to show up as a float like you want.
An easier way to do all this is to use memcpy or a union to go back and forth between float and bytes. On the master end:
```
uint8_t buf[4];
memcpy(buf, &a, 4);
```
Then use SPI to send 4 bytes. Reverse it on the peripheral end.
Note that sending '\n' as the termination byte is a bad idea because it can lead to weird behavior, since one of the bytes in the float could easily be 0x0a, the hexadecimal equivalent of '\n'.
Upvotes: 0 <issue_comment>username_2: I figured out what I needed to do and I wanted to post my finished code. I also added an ability to transfer more than one float value.
Master:
```
#include <SPI.h>

float a = 3.14;
float b = 2.25;
uint8_t storage[12];
float buff[2] = {a, b};

void setup()
{
  digitalWrite(SS, HIGH);
  SPI.begin();
  Serial.begin(9600);
  SPI.setClockDivider(SPI_CLOCK_DIV8);
}

void loop()
{
  digitalWrite(SS, LOW);
  memcpy(storage, &buff, 8);
  // The following serial prints were to check I was getting the right
  // decimal numbers for the floats.
  Serial.print("storage[0] = "); Serial.println(storage[0]);
  Serial.print("storage[1] = "); Serial.println(storage[1]);
  Serial.print("storage[2] = "); Serial.println(storage[2]);
  Serial.print("storage[3] = "); Serial.println(storage[3]);
  Serial.print("storage[4] = "); Serial.println(storage[4]);
  Serial.print("storage[5] = "); Serial.println(storage[5]);
  Serial.print("storage[6] = "); Serial.println(storage[6]);
  Serial.print("storage[7] = "); Serial.println(storage[7]);
  // The SPI library lets you transfer a whole array of bytes;
  // you need to include the size of the array.
  SPI.transfer(storage, sizeof storage);
  digitalWrite(SS, HIGH);
  delay(1000);
}
```
For my Slave code:
```
#include <SPI.h>

byte storage[8];
volatile byte pos;
volatile boolean process;
float buff[2];

void setup()
{
  pinMode(MISO, OUTPUT);
  SPCR |= _BV(SPE);
  SPCR |= _BV(SPIE);
  pos = 0;
  process = false;
  Serial.begin(9600);
}

ISR(SPI_STC_vect)
{
  byte gathered = SPDR;
  if (pos < sizeof storage)
  {
    storage[pos++] = gathered;
  }
  else
    process = true;
}

void loop()
{
  if (process)
  {
    Serial.print("storage[0] = "); Serial.println(storage[0]);
    Serial.print("storage[1] = "); Serial.println(storage[1]);
    Serial.print("storage[2] = "); Serial.println(storage[2]);
    Serial.print("storage[3] = "); Serial.println(storage[3]);
    Serial.print("storage[4] = "); Serial.println(storage[4]);
    Serial.print("storage[5] = "); Serial.println(storage[5]);
    Serial.print("storage[6] = "); Serial.println(storage[6]);
    Serial.print("storage[7] = "); Serial.println(storage[7]);
    memcpy(buff, &storage, 8);
    Serial.print("This is buff[0]"); Serial.println(buff[0]);
    Serial.print("This is buff[1]"); Serial.println(buff[1]);
    storage[pos] = 0;
    pos = 0;
    process = false;
  }
}
```
Upvotes: 2 |
2018/03/19 | 2,057 | 6,445 | <issue_start>username_0: <NAME>
ISBN-13: 978-0321714114
Page 280-281, it says:
>
> **Making A Member Function a Friend**
>
>
> Rather than making the entire Window\_mgr class a friend, Screen can
> instead specify that only the clear member is allowed access. When we
> declare a member function to be a friend, we must specify the class of
> which that function is a member:
>
>
>
> ```
> class Screen {
> // Window_mgr::clear must have been declared before class Screen
> friend void Window_mgr::clear(ScreenIndex);
> // ... rest of the Screen class
> };
>
> ```
>
> Making a member function a friend requires careful structuring of our
> programs to accommodate interdependencies among the declarations and
> definitions. In this example, we must order our program as follows:
>
>
> * First, define the Window\_mgr class, which declares, but cannot define, clear. Screen must be declared before clear can use the
> members of Screen.
> * Next, define class Screen, including a friend declaration for clear.
> * Finally, define clear, which can now refer to the members in Screen.
>
>
>
The problem is: class Window\_mgr has a data member that depends on the definition of class Screen. See:
```
class Window_mgr {
public:
// location ID for each screen on the window
using ScreenIndex = std::vector<Screen>::size_type;
// reset the Screen at the given position to all blanks
void clear(ScreenIndex);
private:
std::vector<Screen> screens{Screen(24, 80, ' ')};
};
```
So it is impossible to define Window\_mgr first without previously defining Screen!
And at the same time, it is impossible to define Screen without having defined Window\_mgr!!!
How can this problem be solved???
Is the book wrong?
I will paste code here so that you can reproduce the problem with a minimal example:
```
#include <iostream>
#include <string>
#include <vector>
class A
{
friend void B::hello();
public:
A(int i) : number{i} {}
private:
void f() {
std::cout << "hello" << std::endl;
}
int number;
};
class B {
private:
std::vector<A> x{A(10)};
public:
void hello()
{
for(A &elem : x)
{
elem.f();
}
}
};
int main()
{
A x(10);
return 0;
}
```
If I compile this code, the result is:
error: use of undeclared identifier 'B'
friend void B::hello();
And if I invert the position (A <--> B), I have:
error: use of undeclared identifier 'A'
std::vector<A> x{A(10)};
Is there a correct way to do that??
Thank you!
---
EDIT:
Thank you, <NAME>
Solution:
```
#include <iostream>
#include <string>
#include <vector>
class A;
class B {
private:
std::vector<A> x;
public:
B();
void hello();
};
class A
{
friend void B::hello();
public:
A(int i) : number{i} {}
private:
void f() {
std::cout << "hello" << std::endl;
}
int number;
};
B::B() : x{A(10)}
{
}
void B::hello()
{
for(A &elem : x)
{
elem.f();
}
}
int main()
{
return 0;
}
```
Conclusion:
* the book is incomplete in that it doesn't spell out the need to forward-declare class A first, nor the impossibility of in-class initialization in this case.
* I didn't notice that the problem was the A(10), not the vector! That is, we can use the incomplete type A (only a declaration, without a definition) as a template argument to vector (because that doesn't create an A object itself), but we cannot use the incomplete type A when defining an object, for example: A(10);
2018/03/19 | 1,101 | 3,414 | <issue_start>username_0: Say I have copied the string for a cookie from a browser request.
```
_some_session=RXF6SVF5RHdV...
```
I want to open the rails console and paste something like
```
> session[RXF6SVF5RHdV...]
```
To retrieve the decrypted data from the session. If this is possible, how do I do it?
2018/03/19 | 1,455 | 5,449 | <issue_start>username_0: I have this structure
```
struct Event {
const string event;
const int order;
Event(const string& _event, const int& _order):event(_event),order(_order) {}
};
struct EventCompare {
bool operator()(const Event& lhs, const Event& rhs)
{
return (lhs.order < rhs.order);
}
};
```
which I would like to use in a `set`:
```
set<Event, EventCompare> events;
```
I do know that sets don't allow duplicates. However, I would like to define duplicates as two instances of the struct with equal `event` members *regardless* of their orders; in other words, `A = B` iff `A.event == B.event`. This definition has to affect the way the set works, which means that, for example, the set has to ignore `Event("holiday", 1)` if it already contains `Event("holiday", 0)`.
How can I do that? I've tried to add
```
if (lhs.event == rhs.event)
return false;
```
in my `EventCompare`, but that didn't work. Will using `pair` instead of `struct` help anyhow?<issue_comment>username_1: A `set` doesn't look for equality. It only checks for an ordering.
Since you want two events to be equal if the `event` is the same, that means that neither comes before the other, and your comparison function should return `false` in that case.
```
bool operator()(const Event& lhs, const Event& rhs) const {
return lhs.event != rhs.event && lhs.order < rhs.order;
}
```
**However**, this won't work since it no longer defines a *strict weak ordering*, since you can have events where `A < B` and `B < C` but `!(A < C)` if `A` and `C` have matching `event` strings but `B`'s order is between `A`'s and `C`'s.
So no, you can't use a set to store elements where a 2nd non-ordering attribute overrides the ordering one. You'd have to change the ordering to be based on `event`, but then you won't be able to look things up based on the `order`.
You could use a `map` to map the `event` strings to the `order` value used to store them into the `set`. Then you check the map to see if it is already there, and decide which element to keep in the set. Otherwise update both the set and map with the new entry.
Upvotes: 0 <issue_comment>username_2: If under the conditions you specified they are considered to be equal, then it's obvious that the result of the `<` comparison would be false. One is not less than the other, they are considered to be equal. The comparison operator, for the purpose of being used with associative containers, only needs to indicate if one instance is "less" than the other. Since, under these circumstances, they are considered to be equal, neither one is less than the other.
Therefore:
```
struct EventCompare {
bool operator()(const Event& lhs, const Event& rhs)
{
if (lhs.event == rhs.event)
return false;
return (lhs.order < rhs.order);
}
};
```
However, this does not address the situation where two instances of the `Event` object have the same `order` but different `event`s. If such a situation cannot arise, you don't have to worry about it. If it can, simply decide what their ordering would be, and set the return value of the comparison operator, in that case, accordingly.
Upvotes: 2 <issue_comment>username_3: The closest you can use is:
```
struct EventCompare {
bool operator()(const Event& lhs, const Event& rhs)
{
if (lhs.event == rhs.event)
return false;
return (lhs.order < rhs.order);
}
};
```
However, the compare criteria you are asking for does not meet the [strictly weak ordering](https://stackoverflow.com/questions/979759/operator-and-strict-weak-ordering), which is required to put objects in a `std::set`.
Let's say you have three objects with the following data:
```
obj1 = {"foo", 200}
obj2 = {"bar", 300}
obj3 = {"foo", 400}
```
If you add objects to the set in the order `obj1`, `obj2`, `obj3`, you will see only `obj1` and `obj2`, in that order, in the set.
If you add objects to the set in the order `obj2`, `obj3`, `obj1`, you will see only `obj2` and `obj3`, in that order, in the set.
Not only do you get different objects in the set depending on which object is added to the set first but even the objects appear in different order based on which object was added to the set first. I can only see problems in the future if you follow this strategy.
I think you should take a fresh look at your requirements and look for a cleaner solution. I am not able to suggest a solution without a deeper understanding of what your are trying to do.
Upvotes: 2 <issue_comment>username_4: I seem to be missing something blindingly obvious. But surely all you need to do is compare according to your requirements?
```
struct EventCompareName
{
bool operator()(const Event& lhs, const Event& rhs)
{
// If you want to compare by event and not order, then do so.
return (lhs.event < rhs.event);
// Rather than comparing by order and not event.
//return (lhs.order < rhs.order);
}
};
std::set<Event, EventCompareName> events;
```
Of course, you might *also* want to compare by `order` in some cases (even though your question gives absolutely zero indication of that requirement). In which case:
```
struct EventCompareNameOrder
{
bool operator()(const Event& lhs, const Event& rhs)
{
if (lhs.event != rhs.event)
return (lhs.event < rhs.event);
return (lhs.order < rhs.order);
}
};
std::set<Event, EventCompareNameOrder> allEvents;
```
Upvotes: 0 |
2018/03/19 | 395 | 1,493 | <issue_start>username_0: My friend and I have been working on a `react-native` project for a company, where he works on Windows on the Android side and I work on the iOS part, besides `Google Maps` and `PlacesPicker`.
But now, after my friend added `googleSignin`, when I try to add it to my `Podfile` for iOS and install the pod, I get an error which is stopping me from working on it any more.
```
[!] The `Project [Release]` target overrides the `HEADER_SEARCH_PATHS` build setting defined in `Pods/Target Support Files/Pods-Project/Pods-Project.release.xcconfig'. This can lead to problems with the CocoaPods installation
- Use the `$(inherited)` flag, or
- Remove the build settings from the target.
```
Thank you in advance.<issue_comment>username_1: Go to Xcode project and select Build settings.
Scroll down and find the Library Search Paths dropdown under the 'Search Paths' section.
In Release mode, use $(inherited) at the beginning of the path string.
[](https://i.stack.imgur.com/ZruOM.png)
Upvotes: 2 <issue_comment>username_2: Thanks for your tips @username_1, you really helped me dig into the path settings in my Xcode. But the issue was actually resolved by @SamirChenon's [tricks](https://stackoverflow.com/a/30099524/8817388), which I applied to every path. Additionally, I also removed the spaces between my `pod` packages inside the `Podfile` and ran `pod install`.
Thank you all for helping :D
Upvotes: 1 [selected_answer] |