qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string)
---|---|---|---|---|---
1,209,941 |
If $M, N$ are finite dimensional vector spaces with the same dimension, and $M$ is a subset of $N$, then $M=N$.
I think I need to show that both vector spaces are spanned by the same basis in order to do this, or to prove $N$ is a subset of $M$?
But I am not sure how to do this.
Thanks
|
2015/03/28
|
[
"https://math.stackexchange.com/questions/1209941",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/225132/"
] |
Let me reformulate the problem:
>
> If $M$ is a (finite dimensional) subspace of a finite dimensional vector space $N$ and $\dim M=\dim N$, then $M=N$.
>
>
>
The assumption that $M$ is finite dimensional is redundant (a subspace of a finite dimensional space is automatically finite dimensional), but it does no harm, as you have it in your assignment.
Suppose $M\ne N$ and let $v\in N$, $v\notin M$. If $\{v\_1,\dots,v\_n\}$ is a basis for $M$, then $\{v\_1,\dots,v\_n,v\}$ is a linearly independent set in $N$: this is impossible, because a linearly independent set in $N$ has at most $n=\dim N=\dim M$ elements.
|
I think the notation etc. can get over-engineered and technical. Here is an attempt to look at things more straightforwardly.
Here are two facts you should know about vector spaces:
* Every basis of a finite dimensional vector space has the same number of elements.
* Any linearly independent subset of a vector space can be extended to a basis.
You know that $M$ is contained in $N$, and they have the same dimension $r$. A basis $B$ of $M$ therefore has $r$ elements, and as a basis is a linearly independent subset of $M$ and hence of $N$ (because $N$ contains $M$).
$B$, as a basis of $M$, can therefore be extended to a basis of $N$, which will have $r$ elements - so $B$ is already a basis of $N$ as well as of $M$.
Since $B$ is a basis of $N$, any element $v\in N$ can be written as a linear combination of the basis vectors. But this is just an expression of $v$ as an element of $M$. So every element of $N$ is also an element of $M$ and the two are equal.
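The whole argument can be condensed into one chain of implications (same $B$ and $r$ as above):

```latex
B \text{ a basis of } M \subseteq N,\quad |B| = r = \dim N
\;\Longrightarrow\; B \text{ extends to a basis } B' \text{ of } N \text{ with } |B'| = r
\;\Longrightarrow\; B' = B
\;\Longrightarrow\; N = \operatorname{span}(B) = M.
```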
|
11,252,066 |
I have variable `WCHAR sDisplayName[1024];`
How can I check if `sDisplayName` contains the string "example"?
|
2012/06/28
|
[
"https://Stackoverflow.com/questions/11252066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/508284/"
] |
```
if(wcscmp(sDisplayName, L"example") == 0)
    ; //then sDisplayName is exactly "example"
else
    ; //it is not
```
This does not cover the case where the string in `sDisplayName` starts with "example" or has "example" in the middle. For those cases, you can use `wcsncmp` and `wcsstr`.
Also, this check is case-sensitive.
Also, this will break if `sDisplayName` contains garbage, i.e., is not null-terminated.
Consider using `std::wstring` instead. That's the C++ way.
EDIT: if you want to match the beginning of the string:
```
if(wcsncmp(sDisplayName, L"Adobe", 5) == 0)
//Starts with "Adobe"
```
If you want to find the string in the middle
```
if(wcsstr(sDisplayName, L"Adobe") != 0)
//Contains "Adobe"
```
Note that `wcsstr` returns a *non-null pointer* if the string is found, unlike the others, which return 0 on a match.
|
You can use the `wchar_t` [variants of standard C functions](http://msdn.microsoft.com/en-us/library/z9da80kz%28v=vs.80%29.aspx) (e.g., `wcsstr`).
|
11,252,066 |
I have variable `WCHAR sDisplayName[1024];`
How can I check if `sDisplayName` contains the string "example"?
|
2012/06/28
|
[
"https://Stackoverflow.com/questions/11252066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/508284/"
] |
```
if(wcscmp(sDisplayName, L"example") == 0)
    ; //then sDisplayName is exactly "example"
else
    ; //it is not
```
This does not cover the case where the string in `sDisplayName` starts with "example" or has "example" in the middle. For those cases, you can use `wcsncmp` and `wcsstr`.
Also, this check is case-sensitive.
Also, this will break if `sDisplayName` contains garbage, i.e., is not null-terminated.
Consider using `std::wstring` instead. That's the C++ way.
EDIT: if you want to match the beginning of the string:
```
if(wcsncmp(sDisplayName, L"Adobe", 5) == 0)
//Starts with "Adobe"
```
If you want to find the string in the middle
```
if(wcsstr(sDisplayName, L"Adobe") != 0)
//Contains "Adobe"
```
Note that `wcsstr` returns a *non-null pointer* if the string is found, unlike the others, which return 0 on a match.
|
`wcsstr` will find your string anywhere in `sDisplayName`; `wcscmp` will check whether `sDisplayName` is exactly your string.
|
11,252,066 |
I have variable `WCHAR sDisplayName[1024];`
How can I check if `sDisplayName` contains the string "example"?
|
2012/06/28
|
[
"https://Stackoverflow.com/questions/11252066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/508284/"
] |
You can use the `wchar_t` [variants of standard C functions](http://msdn.microsoft.com/en-us/library/z9da80kz%28v=vs.80%29.aspx) (e.g., `wcsstr`).
|
`wcsstr` will find your string anywhere in `sDisplayName`; `wcscmp` will check whether `sDisplayName` is exactly your string.
|
24,094 |
At the end of A Most Wanted Man, Gunther
>
> drives somewhere, gets out of his car, and walks off
>
>
>
And then the movie ends. Where did he drive, and what is the significance of that scene?
|
2014/08/22
|
[
"https://movies.stackexchange.com/questions/24094",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/13422/"
] |
I think it is as simple as this: Gunther was angry that he had been taken advantage of, but there was nothing he could do -- so he left.
|
The camera never focuses on the second car that pulls up after he parks, but its color, and its driver's clothing, match the American lady and her car. Also, when he gets out of his van to smoke, he gives a long stare and a possible signal toward the American agent's vehicle as it drives up. The significance would be that he was in on it as well, but it could just be coincidence.
|
24,094 |
At the end of A Most Wanted Man, Gunther
>
> drives somewhere, gets out of his car, and walks off
>
>
>
And then the movie ends. Where did he drive, and what is the significance of that scene?
|
2014/08/22
|
[
"https://movies.stackexchange.com/questions/24094",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/13422/"
] |
I think it is as simple as this: Gunther was angry that he had been taken advantage of, but there was nothing he could do -- so he left.
|
To me, the plot of the movie is about different agencies working against each other. If you remember, the beginning of the movie shows a message about how inefficiencies and competing interests in the German intelligence community led to Muhammad Atta's successful planning of 9/11. To me the movie was about how the intelligence agencies are going back to their "old ways", which was alluded to by the CIA agent who said "old habits die hard". Gunther leaving his car signifies that he quit, because his way of "making the world a safer place" wasn't going to work when people kept screwing him over. It also brings into question torture, and how it can prevent people from cooperating with the intelligence community.
|
24,094 |
At the end of A Most Wanted Man, Gunther
>
> drives somewhere, gets out of his car, and walks off
>
>
>
And then the movie ends. Where did he drive, and what is the significance of that scene?
|
2014/08/22
|
[
"https://movies.stackexchange.com/questions/24094",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/13422/"
] |
I think it is as simple as this: Gunther was angry that he had been taken advantage of, but there was nothing he could do -- so he left.
|
Gunther thought he could trust the American agents, but finally realized he had been set up and betrayed by them. Instead of working together for a positive outcome, the Americans had their own agenda, and never understood his or his country's priorities. This symbolizes something we don't often realize: our image in the world is not always that positive. Gunther is totally disillusioned and disgusted. He's sure the US agents are laughing at him for being too trusting, and for thereby losing the man he thought he could save. It was just senseless. Good doesn't always triumph over evil.
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
You can do something simple instead of insane regex; just pad `+` with white space:
```
String number = "100+500";
number = number.replace("+", " + ");
```
Now you can split it at the white space:
```
String[] split = number.split(" ");
```
Now your indices will be set:
```
split[0] = "100";
split[1] = "+";
split[2] = "500";
```
To check for all arithmetic symbols, you can use the following method if you wish to avoid regex:
```
public static String replacing(String s) {
    String[] chars = {"+", "-", "*", "/", "="}; // "*" included so multiplication is covered too
    for (String character : chars) {
        if (s.contains(character)) {
            s = s.replace(character, " " + character + " "); // not exactly elegant, but it works
        }
    }
    return s;
}
//in main method
number = replacing(number);
String[] split = number.split(" ");
```
|
Since `+`, `*`, and most other mathematical symbols are special characters in a regular expression, you put a `\\` before them inside the split function, like this:
```
String number = "100+500";
String[] numbers = number.split("\\+");
for (String n:numbers) {
System.out.println(n);
}
```
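As a quick sanity check of what this escaping actually produces (a sketch; note that `split` consumes the delimiter, so the `+` itself does not appear in the result):

```java
import java.util.Arrays;

public class EscapedSplitDemo {
    public static void main(String[] args) {
        String number = "100+500";
        // "\\+" escapes the regex metacharacter; split drops the matched "+".
        String[] numbers = number.split("\\+");
        System.out.println(Arrays.toString(numbers)); // [100, 500]
    }
}
```

So if you need the operator kept as its own token, this approach alone is not enough.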
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
Off the bat, I don't know any library routine for the split. A custom splitting routine could be like this:
```
/**
* Splits the given {@link String} at the operators +, -, * and /
*
* @param string
* the {@link String} to be split.
* @throws NullPointerException
* when the given {@link String} is null.
* @return a {@link List} containing the split string and the operators.
*/
public List<String> split(String string) throws NullPointerException {
if (string == null)
throw new NullPointerException("the given string is null!");
List<String> result = new ArrayList<String>();
// operators to split upon
String[] operators = new String[] { "+", "-", "*", "/" };
int index = 0;
while (index < string.length()) {
// find the index of the nearest operator
int minimum = string.length();
for (String operator : operators) {
int i = string.indexOf(operator, index);
if (i > -1)
minimum = Math.min(minimum, i);
}
// if an operator is found, split the string
if (minimum < string.length()) {
result.add(string.substring(index, minimum));
result.add("" + string.charAt(minimum));
index = minimum + 1;
} else {
result.add(string.substring(index));
break;
}
}
return result;
}
```
Some test code:
```
System.out.println(split("100+10*6+3"));
System.out.println(split("100+"));
```
Output:
```
[100, +, 10, *, 6, +, 3]
[100, +]
```
|
You can do something simple instead of insane regex; just pad `+` with white space:
```
String number = "100+500";
number = number.replace("+", " + ");
```
Now you can split it at the white space:
```
String[] split = number.split(" ");
```
Now your indices will be set:
```
split[0] = "100";
split[1] = "+";
split[2] = "500";
```
To check for all arithmetic symbols, you can use the following method if you wish to avoid regex:
```
public static String replacing(String s) {
    String[] chars = {"+", "-", "*", "/", "="}; // "*" included so multiplication is covered too
    for (String character : chars) {
        if (s.contains(character)) {
            s = s.replace(character, " " + character + " "); // not exactly elegant, but it works
        }
    }
    return s;
}
//in main method
number = replacing(number);
String[] split = number.split(" ");
```
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
You can also use the Pattern/Matcher classes in Java:
```
String expression = "100+34";
Pattern p = Pattern.compile("(\\d+)|(\\+)");
Matcher m = p.matcher(expression);
// groupCount() returns the number of capture groups (2 here), not the number
// of matches, so sizing an array as groupCount()+1 only fits "100+34" by
// coincidence. Collecting into a list is safe for any input:
List<String> elems = new ArrayList<>();
while (m.find())
{
    elems.add(m.group());
}
```
|
You can split your expression string into tokens, getting back both the raw tokens and their categories. The [mXparser](http://mathparser.org/) library supports this, as well as the calculation process. Please follow the example below:
Your very simple example "100+500":
```
import org.mariuszgromada.math.mxparser.*;
...
...
Expression e = new Expression("100+500");
mXparser.consolePrintTokens( e.getCopyOfInitialTokens() );
```
Result:
```
[mXparser-v.4.0.0] --------------------
[mXparser-v.4.0.0] | Expression tokens: |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | TokenIdx | Token | KeyW | TokenId | TokenTypeId | TokenLevel | TokenValue | LooksLike |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | 0 | 100 | _num_ | 1 | 0 | 0 | 100.0 | |
[mXparser-v.4.0.0] | 1 | + | + | 1 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 2 | 500 | _num_ | 1 | 0 | 0 | 500.0 | |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
```
More sophisticated example "2\*sin(x)+(3/cos(y)-e^(sin(x)+y))+10":
```
import org.mariuszgromada.math.mxparser.*;
...
...
Argument x = new Argument("x");
Argument y = new Argument("y");
Expression e = new Expression("2*sin(x)+(3/cos(y)-e^(sin(x)+y))+10", x, y);
mXparser.consolePrintTokens( e.getCopyOfInitialTokens() );
```
Result:
```
[mXparser-v.4.0.0] --------------------
[mXparser-v.4.0.0] | Expression tokens: |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | TokenIdx | Token | KeyW | TokenId | TokenTypeId | TokenLevel | TokenValue | LooksLike |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | 0 | 2 | _num_ | 1 | 0 | 0 | 2.0 | |
[mXparser-v.4.0.0] | 1 | * | * | 3 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 2 | sin | sin | 1 | 4 | 1 | NaN | |
[mXparser-v.4.0.0] | 3 | ( | ( | 1 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 4 | x | x | 0 | 101 | 2 | NaN | |
[mXparser-v.4.0.0] | 5 | ) | ) | 2 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 6 | + | + | 1 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 7 | ( | ( | 1 | 20 | 1 | NaN | |
[mXparser-v.4.0.0] | 8 | 3 | _num_ | 1 | 0 | 1 | 3.0 | |
[mXparser-v.4.0.0] | 9 | / | / | 4 | 1 | 1 | NaN | |
[mXparser-v.4.0.0] | 10 | cos | cos | 2 | 4 | 2 | NaN | |
[mXparser-v.4.0.0] | 11 | ( | ( | 1 | 20 | 3 | NaN | |
[mXparser-v.4.0.0] | 12 | y | y | 1 | 101 | 3 | NaN | |
[mXparser-v.4.0.0] | 13 | ) | ) | 2 | 20 | 3 | NaN | |
[mXparser-v.4.0.0] | 14 | - | - | 2 | 1 | 1 | NaN | |
[mXparser-v.4.0.0] | 15 | e | e | 2 | 9 | 1 | NaN | |
[mXparser-v.4.0.0] | 16 | ^ | ^ | 5 | 1 | 1 | NaN | |
[mXparser-v.4.0.0] | 17 | ( | ( | 1 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 18 | sin | sin | 1 | 4 | 3 | NaN | |
[mXparser-v.4.0.0] | 19 | ( | ( | 1 | 20 | 4 | NaN | |
[mXparser-v.4.0.0] | 20 | x | x | 0 | 101 | 4 | NaN | |
[mXparser-v.4.0.0] | 21 | ) | ) | 2 | 20 | 4 | NaN | |
[mXparser-v.4.0.0] | 22 | + | + | 1 | 1 | 2 | NaN | |
[mXparser-v.4.0.0] | 23 | y | y | 1 | 101 | 2 | NaN | |
[mXparser-v.4.0.0] | 24 | ) | ) | 2 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 25 | ) | ) | 2 | 20 | 1 | NaN | |
[mXparser-v.4.0.0] | 26 | + | + | 1 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 27 | 10 | _num_ | 1 | 0 | 0 | 10.0 | |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
```
To understand what `Token.tokenId` and `Token.tokenTypeId` mean, you need to refer to the [API documentation](http://mathparser.org/api/) and its parsertokens section. For instance, in the `Operator` class you have:
1. `Operator.TYPE_ID` - this corresponds to `Token.tokenTypeId` if the token is recognized as an operator.
2. `Operator.OPERATOR_NAME_ID` - this corresponds to `Token.tokenId` if the token is recognized as the particular OPERATOR_NAME.
Please follow the [mXparser tutorial](http://mathparser.org/mxparser-tutorial/) for better understanding.
Best regards
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
Off the bat, I don't know any library routine for the split. A custom splitting routine could be like this:
```
/**
* Splits the given {@link String} at the operators +, -, * and /
*
* @param string
* the {@link String} to be split.
* @throws NullPointerException
* when the given {@link String} is null.
* @return a {@link List} containing the split string and the operators.
*/
public List<String> split(String string) throws NullPointerException {
if (string == null)
throw new NullPointerException("the given string is null!");
List<String> result = new ArrayList<String>();
// operators to split upon
String[] operators = new String[] { "+", "-", "*", "/" };
int index = 0;
while (index < string.length()) {
// find the index of the nearest operator
int minimum = string.length();
for (String operator : operators) {
int i = string.indexOf(operator, index);
if (i > -1)
minimum = Math.min(minimum, i);
}
// if an operator is found, split the string
if (minimum < string.length()) {
result.add(string.substring(index, minimum));
result.add("" + string.charAt(minimum));
index = minimum + 1;
} else {
result.add(string.substring(index));
break;
}
}
return result;
}
```
Some test code:
```
System.out.println(split("100+10*6+3"));
System.out.println(split("100+"));
```
Output:
```
[100, +, 10, *, 6, +, 3]
[100, +]
```
|
Since `+`, `*`, and most other mathematical symbols are special characters in a regular expression, you put a `\\` before them inside the split function, like this:
```
String number = "100+500";
String[] numbers = number.split("\\+");
for (String n:numbers) {
System.out.println(n);
}
```
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
You can also use the Pattern/Matcher classes in Java:
```
String expression = "100+34";
Pattern p = Pattern.compile("(\\d+)|(\\+)");
Matcher m = p.matcher(expression);
// groupCount() returns the number of capture groups (2 here), not the number
// of matches, so sizing an array as groupCount()+1 only fits "100+34" by
// coincidence. Collecting into a list is safe for any input:
List<String> elems = new ArrayList<>();
while (m.find())
{
    elems.add(m.group());
}
```
|
Since `+`, `*`, and most other mathematical symbols are special characters in a regular expression, you put a `\\` before them inside the split function, like this:
```
String number = "100+500";
String[] numbers = number.split("\\+");
for (String n:numbers) {
System.out.println(n);
}
```
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
You want to split between digits and non-digits without consuming any input... you need look arounds:
```
String[] split = number.split("(?<=\\d)(?=\\D)|(?<=\\D)(?=\\d)");
```
What the heck is that train wreck of a regex?
It's expressing the initial sentence of this answer:
* `(?<=\d)` means the previous character is a digit
* `(?=\D)` means the next character is a non-digit
* `(?<=\d)(?=\D)` together will match *between* a digit and a non-digit
* `regexA|regexB` means either regexA *or* regexB is matched; the second alternative here handles the vice-versa case, a non-digit followed by a digit
An important point is that look arounds are *non-consuming*, so the split doesn't gobble up any of the input during the split.
---
Here's some test code:
```
String number = "100+500-123/456*789";
String[] split = number.split("(?<=\\d)(?=\\D)|(?<=\\D)(?=\\d)");
System.out.println(Arrays.toString(split));
```
Output:
```
[100, +, 500, -, 123, /, 456, *, 789]
```
---
To work with numbers that may have a decimal point, use this regex:
```
"(?<=[\\d.])(?=[^\\d.])|(?<=[^\\d.])(?=[\\d.])"
```
which effectively just adds `.` to the characters that count as part of a "number".
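A quick check of the decimal-point variant (a sketch, using a made-up input containing a decimal number):

```java
import java.util.Arrays;

public class DecimalSplitDemo {
    public static void main(String[] args) {
        String number = "3.14*2+10";
        // '.' is grouped with the digits, so "3.14" stays a single token.
        String[] split = number.split("(?<=[\\d.])(?=[^\\d.])|(?<=[^\\d.])(?=[\\d.])");
        System.out.println(Arrays.toString(split)); // [3.14, *, 2, +, 10]
    }
}
```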
|
Off the bat, I don't know any library routine for the split. A custom splitting routine could be like this:
```
/**
* Splits the given {@link String} at the operators +, -, * and /
*
* @param string
* the {@link String} to be split.
* @throws NullPointerException
* when the given {@link String} is null.
* @return a {@link List} containing the split string and the operators.
*/
public List<String> split(String string) throws NullPointerException {
if (string == null)
throw new NullPointerException("the given string is null!");
List<String> result = new ArrayList<String>();
// operators to split upon
String[] operators = new String[] { "+", "-", "*", "/" };
int index = 0;
while (index < string.length()) {
// find the index of the nearest operator
int minimum = string.length();
for (String operator : operators) {
int i = string.indexOf(operator, index);
if (i > -1)
minimum = Math.min(minimum, i);
}
// if an operator is found, split the string
if (minimum < string.length()) {
result.add(string.substring(index, minimum));
result.add("" + string.charAt(minimum));
index = minimum + 1;
} else {
result.add(string.substring(index));
break;
}
}
return result;
}
```
Some test code:
```
System.out.println(split("100+10*6+3"));
System.out.println(split("100+"));
```
Output:
```
[100, +, 10, *, 6, +, 3]
[100, +]
```
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
You want to split between digits and non-digits without consuming any input... you need look arounds:
```
String[] split = number.split("(?<=\\d)(?=\\D)|(?<=\\D)(?=\\d)");
```
What the heck is that train wreck of a regex?
It's expressing the initial sentence of this answer:
* `(?<=\d)` means the previous character is a digit
* `(?=\D)` means the next character is a non-digit
* `(?<=\d)(?=\D)` together will match *between* a digit and a non-digit
* `regexA|regexB` means either regexA *or* regexB is matched; the second alternative here handles the vice-versa case, a non-digit followed by a digit
An important point is that look arounds are *non-consuming*, so the split doesn't gobble up any of the input during the split.
---
Here's some test code:
```
String number = "100+500-123/456*789";
String[] split = number.split("(?<=\\d)(?=\\D)|(?<=\\D)(?=\\d)");
System.out.println(Arrays.toString(split));
```
Output:
```
[100, +, 500, -, 123, /, 456, *, 789]
```
---
To work with numbers that may have a decimal point, use this regex:
```
"(?<=[\\d.])(?=[^\\d.])|(?<=[^\\d.])(?=[\\d.])"
```
which effectively just adds `.` to the characters that count as part of a "number".
|
You can do something simple instead of insane regex; just pad `+` with white space:
```
String number = "100+500";
number = number.replace("+", " + ");
```
Now you can split it at the white space:
```
String[] split = number.split(" ");
```
Now your indices will be set:
```
split[0] = "100";
split[1] = "+";
split[2] = "500";
```
To check for all arithmetic symbols, you can use the following method if you wish to avoid regex:
```
public static String replacing(String s) {
    String[] chars = {"+", "-", "*", "/", "="}; // "*" included so multiplication is covered too
    for (String character : chars) {
        if (s.contains(character)) {
            s = s.replace(character, " " + character + " "); // not exactly elegant, but it works
        }
    }
    return s;
}
//in main method
number = replacing(number);
String[] split = number.split(" ");
```
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
You can do something simple instead of insane regex; just pad `+` with white space:
```
String number = "100+500";
number = number.replace("+", " + ");
```
Now you can split it at the white space:
```
String[] split = number.split(" ");
```
Now your indices will be set:
```
split[0] = "100";
split[1] = "+";
split[2] = "500";
```
To check for all arithmetic symbols, you can use the following method if you wish to avoid regex:
```
public static String replacing(String s) {
    String[] chars = {"+", "-", "*", "/", "="}; // "*" included so multiplication is covered too
    for (String character : chars) {
        if (s.contains(character)) {
            s = s.replace(character, " " + character + " "); // not exactly elegant, but it works
        }
    }
    return s;
}
//in main method
number = replacing(number);
String[] split = number.split(" ");
```
|
You can split your expression string into tokens, getting back both the raw tokens and their categories. The [mXparser](http://mathparser.org/) library supports this, as well as the calculation process. Please follow the example below:
Your very simple example "100+500":
```
import org.mariuszgromada.math.mxparser.*;
...
...
Expression e = new Expression("100+500");
mXparser.consolePrintTokens( e.getCopyOfInitialTokens() );
```
Result:
```
[mXparser-v.4.0.0] --------------------
[mXparser-v.4.0.0] | Expression tokens: |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | TokenIdx | Token | KeyW | TokenId | TokenTypeId | TokenLevel | TokenValue | LooksLike |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | 0 | 100 | _num_ | 1 | 0 | 0 | 100.0 | |
[mXparser-v.4.0.0] | 1 | + | + | 1 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 2 | 500 | _num_ | 1 | 0 | 0 | 500.0 | |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
```
More sophisticated example "2\*sin(x)+(3/cos(y)-e^(sin(x)+y))+10":
```
import org.mariuszgromada.math.mxparser.*;
...
...
Argument x = new Argument("x");
Argument y = new Argument("y");
Expression e = new Expression("2*sin(x)+(3/cos(y)-e^(sin(x)+y))+10", x, y);
mXparser.consolePrintTokens( e.getCopyOfInitialTokens() );
```
Result:
```
[mXparser-v.4.0.0] --------------------
[mXparser-v.4.0.0] | Expression tokens: |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | TokenIdx | Token | KeyW | TokenId | TokenTypeId | TokenLevel | TokenValue | LooksLike |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | 0 | 2 | _num_ | 1 | 0 | 0 | 2.0 | |
[mXparser-v.4.0.0] | 1 | * | * | 3 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 2 | sin | sin | 1 | 4 | 1 | NaN | |
[mXparser-v.4.0.0] | 3 | ( | ( | 1 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 4 | x | x | 0 | 101 | 2 | NaN | |
[mXparser-v.4.0.0] | 5 | ) | ) | 2 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 6 | + | + | 1 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 7 | ( | ( | 1 | 20 | 1 | NaN | |
[mXparser-v.4.0.0] | 8 | 3 | _num_ | 1 | 0 | 1 | 3.0 | |
[mXparser-v.4.0.0] | 9 | / | / | 4 | 1 | 1 | NaN | |
[mXparser-v.4.0.0] | 10 | cos | cos | 2 | 4 | 2 | NaN | |
[mXparser-v.4.0.0] | 11 | ( | ( | 1 | 20 | 3 | NaN | |
[mXparser-v.4.0.0] | 12 | y | y | 1 | 101 | 3 | NaN | |
[mXparser-v.4.0.0] | 13 | ) | ) | 2 | 20 | 3 | NaN | |
[mXparser-v.4.0.0] | 14 | - | - | 2 | 1 | 1 | NaN | |
[mXparser-v.4.0.0] | 15 | e | e | 2 | 9 | 1 | NaN | |
[mXparser-v.4.0.0] | 16 | ^ | ^ | 5 | 1 | 1 | NaN | |
[mXparser-v.4.0.0] | 17 | ( | ( | 1 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 18 | sin | sin | 1 | 4 | 3 | NaN | |
[mXparser-v.4.0.0] | 19 | ( | ( | 1 | 20 | 4 | NaN | |
[mXparser-v.4.0.0] | 20 | x | x | 0 | 101 | 4 | NaN | |
[mXparser-v.4.0.0] | 21 | ) | ) | 2 | 20 | 4 | NaN | |
[mXparser-v.4.0.0] | 22 | + | + | 1 | 1 | 2 | NaN | |
[mXparser-v.4.0.0] | 23 | y | y | 1 | 101 | 2 | NaN | |
[mXparser-v.4.0.0] | 24 | ) | ) | 2 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 25 | ) | ) | 2 | 20 | 1 | NaN | |
[mXparser-v.4.0.0] | 26 | + | + | 1 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 27 | 10 | _num_ | 1 | 0 | 0 | 10.0 | |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
```
To understand what `Token.tokenId` and `Token.tokenTypeId` mean, you need to refer to the [API documentation](http://mathparser.org/api/) and its parsertokens section. For instance, in the `Operator` class you have:
1. `Operator.TYPE_ID` - this corresponds to `Token.tokenTypeId` if the token is recognized as an operator.
2. `Operator.OPERATOR_NAME_ID` - this corresponds to `Token.tokenId` if the token is recognized as the particular OPERATOR_NAME.
Please follow the [mXparser tutorial](http://mathparser.org/mxparser-tutorial/) for better understanding.
Best regards
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
You want to split between digits and non-digits without consuming any input... you need look arounds:
```
String[] split = number.split("(?<=\\d)(?=\\D)|(?<=\\D)(?=\\d)");
```
What the heck is that train wreck of a regex?
It's expressing the initial sentence of this answer:
* `(?<=\d)` means the previous character is a digit
* `(?=\D)` means the next character is a non-digit
* `(?<=\d)(?=\D)` together will match *between* a digit and a non-digit
* `regexA|regexB` means either regexA *or* regexB is matched; the second alternative here handles the vice-versa case, a non-digit followed by a digit
An important point is that look arounds are *non-consuming*, so the split doesn't gobble up any of the input during the split.
---
Here's some test code:
```
String number = "100+500-123/456*789";
String[] split = number.split("(?<=\\d)(?=\\D)|(?<=\\D)(?=\\d)");
System.out.println(Arrays.toString(split));
```
Output:
```
[100, +, 500, -, 123, /, 456, *, 789]
```
---
To work with numbers that may have a decimal point, use this regex:
```
"(?<=[\\d.])(?=[^\\d.])|(?<=[^\\d.])(?=[\\d.])"
```
which effectively just adds `.` to the set of characters treated as part of a "number".
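As a quick sanity check (an aside, not part of the original answer): the same lookaround pattern behaves identically in Python's `re.split`, which supports splitting on zero-width matches as of Python 3.7:

```python
import re

# Same lookaround pattern as the Java answer: split at every boundary
# between "number characters" (digits or '.') and anything else.
PATTERN = r"(?<=[\d.])(?=[^\d.])|(?<=[^\d.])(?=[\d.])"

def tokenize(expr):
    return re.split(PATTERN, expr)

print(tokenize("3.14+2.5*10"))  # ['3.14', '+', '2.5', '*', '10']
```

Because the lookarounds are non-consuming, no characters are lost in the split here either.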
|
You can split your expression string, then in result having pure tokens and categorized tokens. The [mXparser](http://mathparser.org/) library supports this as well as the calculation process. Please follow the below example:
Your very simple example "100+500":
```
import org.mariuszgromada.math.mxparser.*;
...
...
Expression e = new Expression("100+500");
mXparser.consolePrintTokens( e.getCopyOfInitialTokens() );
```
Result:
```
[mXparser-v.4.0.0] --------------------
[mXparser-v.4.0.0] | Expression tokens: |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | TokenIdx | Token | KeyW | TokenId | TokenTypeId | TokenLevel | TokenValue | LooksLike |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | 0 | 100 | _num_ | 1 | 0 | 0 | 100.0 | |
[mXparser-v.4.0.0] | 1 | + | + | 1 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 2 | 500 | _num_ | 1 | 0 | 0 | 500.0 | |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
```
More sophisticated example "2\*sin(x)+(3/cos(y)-e^(sin(x)+y))+10":
```
import org.mariuszgromada.math.mxparser.*;
...
...
Argument x = new Argument("x");
Argument y = new Argument("y");
Expression e = new Expression("2*sin(x)+(3/cos(y)-e^(sin(x)+y))+10", x, y);
mXparser.consolePrintTokens( e.getCopyOfInitialTokens() );
```
Result:
```
[mXparser-v.4.0.0] --------------------
[mXparser-v.4.0.0] | Expression tokens: |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | TokenIdx | Token | KeyW | TokenId | TokenTypeId | TokenLevel | TokenValue | LooksLike |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
[mXparser-v.4.0.0] | 0 | 2 | _num_ | 1 | 0 | 0 | 2.0 | |
[mXparser-v.4.0.0] | 1 | * | * | 3 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 2 | sin | sin | 1 | 4 | 1 | NaN | |
[mXparser-v.4.0.0] | 3 | ( | ( | 1 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 4 | x | x | 0 | 101 | 2 | NaN | |
[mXparser-v.4.0.0] | 5 | ) | ) | 2 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 6 | + | + | 1 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 7 | ( | ( | 1 | 20 | 1 | NaN | |
[mXparser-v.4.0.0] | 8 | 3 | _num_ | 1 | 0 | 1 | 3.0 | |
[mXparser-v.4.0.0] | 9 | / | / | 4 | 1 | 1 | NaN | |
[mXparser-v.4.0.0] | 10 | cos | cos | 2 | 4 | 2 | NaN | |
[mXparser-v.4.0.0] | 11 | ( | ( | 1 | 20 | 3 | NaN | |
[mXparser-v.4.0.0] | 12 | y | y | 1 | 101 | 3 | NaN | |
[mXparser-v.4.0.0] | 13 | ) | ) | 2 | 20 | 3 | NaN | |
[mXparser-v.4.0.0] | 14 | - | - | 2 | 1 | 1 | NaN | |
[mXparser-v.4.0.0] | 15 | e | e | 2 | 9 | 1 | NaN | |
[mXparser-v.4.0.0] | 16 | ^ | ^ | 5 | 1 | 1 | NaN | |
[mXparser-v.4.0.0] | 17 | ( | ( | 1 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 18 | sin | sin | 1 | 4 | 3 | NaN | |
[mXparser-v.4.0.0] | 19 | ( | ( | 1 | 20 | 4 | NaN | |
[mXparser-v.4.0.0] | 20 | x | x | 0 | 101 | 4 | NaN | |
[mXparser-v.4.0.0] | 21 | ) | ) | 2 | 20 | 4 | NaN | |
[mXparser-v.4.0.0] | 22 | + | + | 1 | 1 | 2 | NaN | |
[mXparser-v.4.0.0] | 23 | y | y | 1 | 101 | 2 | NaN | |
[mXparser-v.4.0.0] | 24 | ) | ) | 2 | 20 | 2 | NaN | |
[mXparser-v.4.0.0] | 25 | ) | ) | 2 | 20 | 1 | NaN | |
[mXparser-v.4.0.0] | 26 | + | + | 1 | 1 | 0 | NaN | |
[mXparser-v.4.0.0] | 27 | 10 | _num_ | 1 | 0 | 0 | 10.0 | |
[mXparser-v.4.0.0] ---------------------------------------------------------------------------------------------------------------
```
To understand what Token.tokenId and Token.tokenTypeId mean, you need to refer to the [API documentation](http://mathparser.org/api/) and the parsertokens section. For instance, in the Operator class you have
1. Operator.TYPE\_ID - this corresponds to Token.tokenTypeId if Token is recognized as Operator
2. Operator.OPERATOR\_NAME\_ID - this corresponds to Token.tokenId if Token is recognized as particular OPERATOR\_NAME.
Please follow [mXparser tutorial](http://mathparser.org/mxparser-tutorial/) for better understanding.
Best regards
|
27,808,112 |
I am trying to split a Math Expression.
```
String number = "100+500";
String[] split = new String[3];
```
I want to make
* split[0] = "100"
* split[1] = "+"
* split[2] = "500"
I tried this but I don't know what to write for splitting.
```
split = number.split(????);
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27808112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195580/"
] |
You want to split between digits and non-digits without consuming any input... you need look arounds:
```
String[] split = number.split("(?<=\\d)(?=\\D)|(?<=\\D)(?=\\d)");
```
What the heck is that train wreck of a regex?
It's expressing the initial sentence of this answer:
* `(?<=\d)` means the previous character is a digit
* `(?=\D)` means the next character is a non-digit
* `(?<=\d)(?=\D)` together will match *between* a digit and a non-digit
* `regexA|regexB` means either regexA *or* regexB is matched; here it combines the boundary above with its mirror image (non-digit then digit) to cover the reverse case
An important point is that look arounds are *non-consuming*, so the split doesn't gobble up any of the input during the split.
---
Here's some test code:
```
String number = "100+500-123/456*789";
String[] split = number.split("(?<=\\d)(?=\\D)|(?<=\\D)(?=\\d)");
System.out.println(Arrays.toString(split));
```
Output:
```
[100, +, 500, -, 123, /, 456, *, 789]
```
---
To work with numbers that may have a decimal point, use this regex:
```
"(?<=[\\d.])(?=[^\\d.])|(?<=[^\\d.])(?=[\\d.])"
```
which effectively just adds `.` to the set of characters treated as part of a "number".
|
Since `+`, `-`, `*`, and most other mathematical symbols are regex special characters, you put `\\` before them inside the split function, like this:
```
String number = "100+500";
String[] numbers = number.split("\\+");
for (String n : numbers) {
    System.out.println(n);
}
```
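A related trick (an aside, not from the original answer): if the goal is to keep the operators themselves in the result, as in the question, Python's `re.split` can do that with a capturing group, since captured delimiters are included in the output list:

```python
import re

def split_keep_ops(expr):
    # A capturing group around the operator class makes re.split
    # return the delimiters alongside the operands.
    return re.split(r"([+\-*/])", expr)

print(split_keep_ops("100+500"))  # ['100', '+', '500']
```

This yields exactly the `["100", "+", "500"]` shape the question asks for, without any lookarounds.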
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
Pandas has portions of its code written in C to make it run faster. If you tried to install pandas manually you would need to build it. Try reinstalling it with miniconda package manager here: <http://conda.pydata.org/miniconda.html>
and then you can just do
```
conda install pandas
```
There are very simple instructions on how to do it in the link below. Just do ctrl-f miniconda to find the section that talks about it
<http://pandas.pydata.org/pandas-docs/dev/install.html>
|
I was having this problem with Python 2.7.13.
Here is my solution:
1. install Cython with
```
pip install Cython
```
2. install g++ and gcc
```
apt-get install gcc g++
```
3. uninstall pandas
```
pip uninstall pandas
```
4. reinstall pandas
```
pip install pandas
```
then everything will be OK.
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
Actually, none of these answers worked for me in the following environment:
```
docker-compose # multiple containers, the managing one based on debian
Python 2.7
Django 1.8.19
numpy==1.11.3 # pinned to version, because of https://github.com/rbgirshick/py-faster-rcnn/issues/481
... more requirements
```
The following solution worked, after reading
<https://github.com/pandas-dev/pandas/issues/18281>
and
<https://github.com/pandas-dev/pandas/issues/16715>
which both addressed interim solutions and later recommended upgrading,
so I integrated into the Dockerfile
```
pip install -r requirements.txt \
&& pip install \
pandas==0.21.0 \
--force-reinstall \
--upgrade \
--no-deps \
--no-cache \
--find-links https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/ \
--no-index
```
which is mentioned in <https://github.com/pandas-dev/pandas/issues/16715#issuecomment-310063504>
I tried all solutions mentioned here except the accepted answer, because a) I don't want Anaconda in a web production environment and b) pushing frameworks or CLI solutions is not a good answer for architectures where a package is not used standalone...
Furthermore, I dislike that @colo's answer was downvoted, because it actually is a feasible solution in certain environments.
For anyone finding this thread with similar requirements and expectations to mine, I hope to have saved you some minutes.
|
Ok, I tried more than 20 different ways of installing/uninstalling (conda and pip, --force, --upgrade, ==THEGOODVERSION, etc...), and it was still not working.
In the end I found that I had the wrong PATH...
[](https://i.stack.imgur.com/O1L1F.jpg)
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
Instead of installing it with conda or pip, try to install it with your package manager:
sudo apt-get install python3-pandas
|
try
```
/miniconda3/bin/conda install python
python: 3.6.0-0 --> 3.6.1-2
```
and
```
/miniconda3/bin/conda install pandas
```
Try the same with your Anaconda version.
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
Actually, none of these answers worked for me in the following environment:
```
docker-compose # multiple containers, the managing one based on debian
Python 2.7
Django 1.8.19
numpy==1.11.3 # pinned to version, because of https://github.com/rbgirshick/py-faster-rcnn/issues/481
... more requirements
```
The following solution worked, after reading
<https://github.com/pandas-dev/pandas/issues/18281>
and
<https://github.com/pandas-dev/pandas/issues/16715>
which both addressed interim solutions and later recommended upgrading,
so I integrated into the Dockerfile
```
pip install -r requirements.txt \
&& pip install \
pandas==0.21.0 \
--force-reinstall \
--upgrade \
--no-deps \
--no-cache \
--find-links https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/ \
--no-index
```
which is mentioned in <https://github.com/pandas-dev/pandas/issues/16715#issuecomment-310063504>
I tried all solutions mentioned here except the accepted answer, because a) I don't want Anaconda in a web production environment and b) pushing frameworks or CLI solutions is not a good answer for architectures where a package is not used standalone...
Furthermore, I dislike that @colo's answer was downvoted, because it actually is a feasible solution in certain environments.
For anyone finding this thread with similar requirements and expectations to mine, I hope to have saved you some minutes.
|
try
```
/miniconda3/bin/conda install python
python: 3.6.0-0 --> 3.6.1-2
```
and
```
/miniconda3/bin/conda install pandas
```
Try the same with your Anaconda version.
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
Pandas has portions of its code written in C to make it run faster. If you tried to install pandas manually you would need to build it. Try reinstalling it with miniconda package manager here: <http://conda.pydata.org/miniconda.html>
and then you can just do
```
conda install pandas
```
There are very simple instructions on how to do it in the link below. Just do ctrl-f miniconda to find the section that talks about it
<http://pandas.pydata.org/pandas-docs/dev/install.html>
|
I had the same problem, and it came from an encoding issue.
My OS was previously set up in French and everything was fine. But when I switched to English I got the error above.
You can type
```
locale
```
in the terminal to check the local environment variables.
When set up in French, I had this configuration:
[French config](http://i.stack.imgur.com/6ZmgS.png).
Then, after I switched to English, I had:
[English config](http://i.stack.imgur.com/sp2N1.png).
I then added the following lines in the .bash\_profile under /Users/myName and everything went back to normal.
```
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
Instead of installing it with conda or pip, try to install it with your package manager:
sudo apt-get install python3-pandas
|
Ok, I tried more than 20 different ways of installing/uninstalling (conda and pip, --force, --upgrade, ==THEGOODVERSION, etc...), and it was still not working.
In the end I found that I had the wrong PATH...
[](https://i.stack.imgur.com/O1L1F.jpg)
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
Pandas has portions of its code written in C to make it run faster. If you tried to install pandas manually you would need to build it. Try reinstalling it with miniconda package manager here: <http://conda.pydata.org/miniconda.html>
and then you can just do
```
conda install pandas
```
There are very simple instructions on how to do it in the link below. Just do ctrl-f miniconda to find the section that talks about it
<http://pandas.pydata.org/pandas-docs/dev/install.html>
|
I had this issue when I needed to upgrade from 32-bit to 64-bit Python to use TensorFlow.
Running this command uninstalled pandas 0.21 and reinstalled 0.22:
pip install --upgrade pandas
Sorted.
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
I had the same problem, and it came from an encoding issue.
My OS was previously set up in French and everything was fine. But when I switched to English I got the error above.
You can type
```
locale
```
in the terminal to check the local environment variables.
When set up in French, I had this configuration:
[French config](http://i.stack.imgur.com/6ZmgS.png).
Then, after I switched to English, I had:
[English config](http://i.stack.imgur.com/sp2N1.png).
I then added the following lines in the .bash\_profile under /Users/myName and everything went back to normal.
```
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```
|
I was unable to upgrade pandas with the regular
```
pip install --upgrade pandas
"tensorflow 1.6.0 has requirement numpy>=1.13.3, but you'll have numpy 1.13.1 which is incompatible."
```
However bumping it with:
```
pip install --upgrade pandas --force
```
solved the issue completely.
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
Instead of installing it with conda or pip, try to install it with your package manager:
sudo apt-get install python3-pandas
|
I had this issue when I needed to upgrade from 32-bit to 64-bit Python to use TensorFlow.
Running this command uninstalled pandas 0.21 and reinstalled 0.22:
pip install --upgrade pandas
Sorted.
|
30,761,152 |
I installed Anaconda with python 2.7.7.
However, whenever I run "import pandas" I get the error:
`"ImportError: C extension: y not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first."`
I tried running the suggested command but it stated that
```
skipping 'pandas\index.c' Cython extension (up-to-date)
skipping 'pandas\src\period.c' Cython extension (up-to-date)
skipping 'pandas\algos.c' Cython extension (up-to-date)
skipping 'pandas\lib.c' Cython extension (up-to-date)
skipping 'pandas\tslib.c' Cython extension (up-to-date)
skipping 'pandas\parser.c' Cython extension (up-to-date)
skipping 'pandas\hashtable.c' Cython extension (up-to-date)
skipping 'pandas\src\sparse.c' Cython extension (up-to-date)
skipping 'pandas\src\testing.c' Cython extension (up-to-date)
skipping 'pandas\msgpack.cpp' Cython extension (up-to-date)
```
**Has anyone encountered this before and found a solution?**
|
2015/06/10
|
[
"https://Stackoverflow.com/questions/30761152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3453901/"
] |
I was having this problem with Python 2.7.13.
Here is my solution:
1. install Cython with
```
pip install Cython
```
2. install g++ and gcc
```
apt-get install gcc g++
```
3. uninstall pandas
```
pip uninstall pandas
```
4. reinstall pandas
```
pip install pandas
```
then everything will be OK.
|
I tried all the solutions above, but nothing worked...
Error Message
=============
I got an error message with `ipython`
```
ImportError: C extension: iNaT not built. If you want to import pandas
from the source directory,
you may need to run 'python setup.py build_ext --inplace --force'
to build the C extensions first.
```
and it suggests
```
$ python setup.py build_ext --inplace --force
```
Solution
========
**My suggestion**: Be careful about the version issue!
I cloned `pandas` from the official GitHub repo, then built it myself and installed it with `pip`.
The following are the commands I typed in the terminal:
```
$ cd pandas
$ python setup.py build_ext --inplace --force
$ sudo pip install . # don't forget the dot
```
or, if you want to install it under your personal Linux account instead of system-wide (e.g. on a multi-user machine),
you can add the `--user` flag
```
$ pip install --user . # don't forget the dot, too
```
Now, everything works fine on my laptop
### My configuration
```
Ubuntu 16.04
Python 2.7
Numpy 1.13.1
```
Good luck!
|
68,565,520 |
I'm very new when it comes to axios and React.
My goal is to have half of the array rendered in one column and the other half in the second column, so I can style each of the columns separately.
I have an array of JSON objects that I would like to render in to columns like
this:
OmOss.js
--------
```
const OmOss = () => {
  return (
    <ContentContainer>
      <Column>
        <Element></Element>
      </Column>
      <Column>
        <Element></Element>
      </Column>
    </ContentContainer>
  );
};
```
Here is my get request with axios that returns all of the objects in the array as a single component.
But I can't figure out how to separate each of those objects and render them individually.
Element.js
----------
```
export function Element() {
  const url = "http://localhost:1337/om-osses";
  const [result, setResult] = useState(null);
  const [isLoading, setLoading] = useState(true);

  useEffect(() => {
    axios.get(url).then((response) => {
      setResult(response.data);
      setLoading(false);
    });
  }, []);

  if (isLoading) {
    return (
      <ContentContainer>
        <div> Loading... </div>
      </ContentContainer>
    );
  }

  return (
    <TextContainer>
      {result.map((text) => (
        <div>
          <Title text={text.Title} fontSize="2.3rem"></Title>
          <p>{text.Content}</p>
        </div>
      ))}
    </TextContainer>
  );
}
```
This is what i've tried.
* I've tried to push to an array like so
```
let data = [];

useEffect(() => {
  axios.get(url).then((response) => {
    setResult(response.data);
    setLoading(false);
  });
  data.push(response.data);
}, []);
```
And accessing it like this:
```
<Title text={array[0].title} fontSize="2.3rem"></Title>
```
But the array element was undefined despite the axios request coming through successfully.
|
2021/07/28
|
[
"https://Stackoverflow.com/questions/68565520",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11637117/"
] |
This fixed the problem:
(1) Use postgresql instead of sqlite.
(2) Switch from SequentialExecutor to LocalExecutor.
|
Just to add to that - we had other similar reports and we decided to make a very clear warning in such case in the UI (will be released in the next version):
<https://github.com/apache/airflow/pull/17133>
[](https://i.stack.imgur.com/BEkjY.png)
|
17,060,152 |
Currently I load values like so (using configparser)
```
my_value = config.get("my", "value")
```
But I find myself loading a lot of values as my program grows over time. Is there any "prettier"/better way to load all the values in a config file?
I was perhaps thinking of JSON, but I'm not really sure. One problem with JSON is that it's not very easy for newbies to understand, and I can't put comments in the sections of a JSON file.
|
2013/06/12
|
[
"https://Stackoverflow.com/questions/17060152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
I would suggest saving the configuration as a Python dict in a YAML file. YAML is human-readable and supports comments with a # sign at the beginning of a line. However, it does not support block comments.
```
import yaml

conf = {"name": "john", "password": "asdf"}
with open("conf.yaml", "w") as f:
    yaml.dump(conf, f)
```
will give you a file conf.yaml with the content:
```
{name: john, password: asdf}
```
You can then read this with:
```
import yaml

with open("conf.yaml", "r") as f:
    conf = yaml.safe_load(f)  # safe_load avoids executing arbitrary YAML tags
```
You might consider using XML if you need block comments.
|
In a Java application I developed I had hundreds of parameters in a static class called Resources that I needed to make configurable by the user via a XML file without breaking the existing application.
What I did was to use introspection to read the name and type of each property of the Resources class and then search the XML file for an element with the same name and assign the value, if found, to that property.
It should be easier in Python since it's a dynamic language so you don't need to bother with type safety and casting (which is not necessarily a good thing...)
|
17,060,152 |
Currently I load values like so (using configparser)
```
my_value = config.get("my", "value")
```
But I find myself loading a lot of values as my program grows over time. Is there any "prettier"/better way to load all the values in a config file?
I was perhaps thinking of JSON, but I'm not really sure. One problem with JSON is that it's not very easy for newbies to understand, and I can't put comments in the sections of a JSON file.
|
2013/06/12
|
[
"https://Stackoverflow.com/questions/17060152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
If you want to use ConfigParser, you can store your config values in a dictionary indexed by `[section name][item name]`, and load them without having to explicitly specify each variable name:
```
import ConfigParser
from pprint import pprint

cfg = ConfigParser.ConfigParser()
cfg.read('config.cfg')

CONFIG_DATA = {}
for section_name in cfg.sections():
    CONFIG_DATA[section_name] = {}
    # items() yields (name, value) pairs, so no extra get() call is needed
    for item_name, item_value in cfg.items(section_name):
        CONFIG_DATA[section_name][item_name] = item_value

pprint(CONFIG_DATA)
```
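On Python 3 the same nested dict can be built more compactly, because `configparser` sections behave like mappings. A small self-contained sketch of the same idea (the sample config is inlined here just for illustration):

```python
import configparser

# Inline sample standing in for a real config.cfg file.
SAMPLE = """
[db]
host = localhost
port = 5432

[app]
debug = true
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)  # cfg.read('config.cfg') works the same for real files

# items(section) yields (name, value) pairs, so dict() converts directly.
CONFIG_DATA = {section: dict(cfg.items(section)) for section in cfg.sections()}

print(CONFIG_DATA["db"]["port"])  # 5432
```

Note that all values come back as strings; use `cfg.getint()`/`cfg.getboolean()` if you need typed access.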
|
In a Java application I developed I had hundreds of parameters in a static class called Resources that I needed to make configurable by the user via a XML file without breaking the existing application.
What I did was to use introspection to read the name and type of each property of the Resources class and then search the XML file for an element with the same name and assign the value, if found, to that property.
It should be easier in Python since it's a dynamic language so you don't need to bother with type safety and casting (which is not necessarily a good thing...)
|
17,060,152 |
Currently I load values like so (using configparser)
```
my_value = config.get("my", "value")
```
But I find myself loading a lot of values as my program grows over time. Is there any "prettier"/better way to load all values in a config file?
I was perhaps thinking of JSON, but I'm not really sure. One problem with JSON is that it's not very easy for newbies to understand, and I can't comment the sections in JSON.
|
2013/06/12
|
[
"https://Stackoverflow.com/questions/17060152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
What about simply using Python modules?
You can just have a config.py file including something like the following:
```
# config.py
MY = "value"
DB = "mysql://..."
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
    },
    'handlers': {
    },
    'loggers': {
    }
}
```
And then import that file when you need it:
```
import config
import logging.config
logging.config.dictConfig(config.LOGGING)
print config.MY
```
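One optional refinement of this approach (not part of the original answer) is letting a machine-specific module override the defaults; a sketch, where `local_config.py` is a hypothetical optional file:

```python
# Defaults would normally live in config.py; inlined here for the sketch.
DEFAULTS = {'MY': 'value', 'DEBUG': False}

def load_settings():
    """Return defaults, overridden by any UPPERCASE names in local_config."""
    settings = dict(DEFAULTS)
    try:
        import local_config  # optional, machine-specific overrides
        for name in dir(local_config):
            if name.isupper():
                settings[name] = getattr(local_config, name)
    except ImportError:
        pass  # no local overrides present
    return settings

settings = load_settings()
```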
|
In a Java application I developed, I had hundreds of parameters in a static class called Resources that I needed to make configurable by the user via an XML file without breaking the existing application.
What I did was to use introspection to read the name and type of each property of the Resources class and then search the XML file for an element with the same name and assign the value, if found, to that property.
It should be easier in Python since it's a dynamic language so you don't need to bother with type safety and casting (which is not necessarily a good thing...)
|
17,060,152 |
Currently I load values like so (using configparser)
```
my_value = config.get("my", "value")
```
But I find myself loading a lot of values as my program grows over time. Is there any "prettier"/better way to load all values in a config file?
I was perhaps thinking of JSON, but I'm not really sure. One problem with JSON is that it's not very easy for newbies to understand, and I can't comment the sections in JSON.
|
2013/06/12
|
[
"https://Stackoverflow.com/questions/17060152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
If you want to use ConfigParser, you can store your config values in a dictionary indexed by `[section name][item name]`, and load them without having to explicitly specify each variable name:
```
import ConfigParser
from pprint import pprint
cfg = ConfigParser.ConfigParser()
cfg.read('config.cfg')
CONFIG_DATA = {}
for section_name in cfg.sections():
    CONFIG_DATA[section_name] = {}
    # cfg.items() yields (name, value) pairs, so unpack both
    for item_name, item_value in cfg.items(section_name):
        CONFIG_DATA[section_name][item_name] = item_value
pprint(CONFIG_DATA)
```
|
I would suggest saving the configuration as a Python dict in a YAML file. YAML is human-readable and supports comments introduced by a `#` sign. However, it does not support block comments.
```
import yaml
conf = {"name":"john", "password":"asdf"}
with open("conf.yaml", "w") as f:
    yaml.dump(conf, f)
```
will give you a file conf.yaml with the content:
```
{name: john, password: asdf}
```
You can then read this with:
```
import yaml
with open("conf.yaml", "r") as f:
    conf = yaml.safe_load(f)  # safe_load avoids constructing arbitrary objects
```
You might consider using XML if you need block comments.
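For reference, a hand-written `conf.yaml` can carry `#` comments freely (though comments are lost if the file is rewritten by `yaml.dump`); the keys below match the example above:

```yaml
# connection credentials (example values)
name: john
password: asdf  # inline comments work too
```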
|
17,060,152 |
Currently I load values like so (using configparser)
```
my_value = config.get("my", "value")
```
But I find myself loading a lot of values as my program grows over time. Is there any "prettier"/better way to load all values in a config file?
I was perhaps thinking of JSON, but I'm not really sure. One problem with JSON is that it's not very easy for newbies to understand, and I can't comment the sections in JSON.
|
2013/06/12
|
[
"https://Stackoverflow.com/questions/17060152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
If you want to use ConfigParser, you can store your config values in a dictionary indexed by `[section name][item name]`, and load them without having to explicitly specify each variable name:
```
import ConfigParser
from pprint import pprint
cfg = ConfigParser.ConfigParser()
cfg.read('config.cfg')
CONFIG_DATA = {}
for section_name in cfg.sections():
    CONFIG_DATA[section_name] = {}
    # cfg.items() yields (name, value) pairs, so unpack both
    for item_name, item_value in cfg.items(section_name):
        CONFIG_DATA[section_name][item_name] = item_value
pprint(CONFIG_DATA)
```
|
What about simply using Python modules?
You can just have a config.py file including something like the following:
```
# config.py
MY = "value"
DB = "mysql://..."
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
    },
    'handlers': {
    },
    'loggers': {
    }
}
```
And then import that file when you need it:
```
import config
import logging.config
logging.config.dictConfig(config.LOGGING)
print config.MY
```
|
13,334,207 |
I'm trying to learn how to pause and resume a thread in Java. I'm using an `Applet` that `implements Runnable` and has 2 buttons, "Start" and "Stop".
```
public void init(){
    th = new Thread(this);
    th.start();
    btn_increment = new Button("Start");
    btn_increment.addActionListener(new ActionListener(){
        public void actionPerformed(ActionEvent ev){
            th.notify();
        }
    });
    add(btn_increment);
    btn_decrement = new Button("Stop");
    btn_decrement.addActionListener(new ActionListener(){
        public void actionPerformed(ActionEvent ev){
            try{
                th.wait();
            } catch(InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    add(btn_decrement);
}
```
The run method of the thread:
```
public void run(){
    while(true){
        repaint();
        try{
            Thread.sleep(20);
        } catch(InterruptedException e) {
            e.printStackTrace();
        }
    }
}
```
Now whenever I try to pause or resume the thread, an exception is thrown:
`Exception in thread "AWT-EventQueue-1" java.lang.IllegalMonitorStateException`
Notes:
The previous code runs perfectly if I use the deprecated methods `suspend()` and `resume()`, but the documentation points at using `wait()` and `notify()` instead, with synchronization. I tried adding the word `synchronized` to the `actionPerformed` method, but it still throws the exception.
Can someone please explain **WHY** this isn't working and how to solve the synchronization problem? A few explanation points would really be of much help ;)
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13334207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/636342/"
] |
You can't just call `notify` and `wait`. You have to wait *for something*. And before calling `notify`, you have to make it so that there's nothing to wait for anymore.
If your blocks aren't already synchronized, then something is wrong in your design.
How can you call `wait` unless you have something to wait for? And how can you know that there is something to wait for if you haven't checked? And how can you check without synchronizing with the code that controls whether that thing has happened yet or not?
How can you call `notify` unless something just happened that you need to notify the thread about? And how could something have happened that another thread cares about if you don't hold the lock that would tell that thread about it?
You should use `wait` like this:
```
while (something_to_wait_for()) wait();
```
And that `something_to_wait_for` should check something that is protected by synchronization. And you can't make `something_to_wait_for` synchronized because then you have a race condition -- what if that something happens after `something_to_wait_for` returns but before you enter `wait`? Then you are waiting for something that already happened! So you *need* synchronization fundamentally. If you are just adding it at the end, your design is broken.
The solution in your case is probably to add something to wait for. Perhaps a simple boolean variable is all you need. Then your code can be `while (should_wait) wait();` to pause, `should_wait = true;` to request a pause, and `should_wait = false; notifyAll();` to resume. You'll need `synchronized` blocks to protect the boolean and the `wait`/`notify` logic.
|
I think you have to synchronize on the thread in order to call wait and notify. Try to use
```
synchronized (th) {
    th.notify();
}
```
and the same with `wait()`.
|
13,334,207 |
I'm trying to learn how to pause and resume a thread in Java. I'm using an `Applet` that `implements Runnable` and has 2 buttons, "Start" and "Stop".
```
public void init(){
    th = new Thread(this);
    th.start();
    btn_increment = new Button("Start");
    btn_increment.addActionListener(new ActionListener(){
        public void actionPerformed(ActionEvent ev){
            th.notify();
        }
    });
    add(btn_increment);
    btn_decrement = new Button("Stop");
    btn_decrement.addActionListener(new ActionListener(){
        public void actionPerformed(ActionEvent ev){
            try{
                th.wait();
            } catch(InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    add(btn_decrement);
}
```
The run method of the thread:
```
public void run(){
    while(true){
        repaint();
        try{
            Thread.sleep(20);
        } catch(InterruptedException e) {
            e.printStackTrace();
        }
    }
}
```
Now whenever I try to pause or resume the thread, an exception is thrown:
`Exception in thread "AWT-EventQueue-1" java.lang.IllegalMonitorStateException`
Notes:
The previous code runs perfectly if I use the deprecated methods `suspend()` and `resume()`, but the documentation points at using `wait()` and `notify()` instead, with synchronization. I tried adding the word `synchronized` to the `actionPerformed` method, but it still throws the exception.
Can someone please explain **WHY** this isn't working and how to solve the synchronization problem? A few explanation points would really be of much help ;)
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13334207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/636342/"
] |
You have misunderstood how `wait()` works. Calling `wait` on a `Thread` object does not pause that thread; it instead tells the currently running thread to wait for something else to happen. To explain why, I'll need to back up a bit and explain what `synchronized` actually does.
When you enter a `synchronized` block you obtain the *monitor* associated with an object. For example,
```
synchronized(foo) {
```
obtains the monitor associated with the object `foo`.
Once you have the monitor, no other threads can obtain it until you exit the synchronized block. This is where `wait` and `notify` come in.
`wait` is a method on the Object class that tells the currently running thread to temporarily release the monitor it holds. This allows other threads to synchronize on `foo`.
```
foo.wait();
```
This thread will not resume until someone else calls `notify` or `notifyAll` on `foo` (or the thread is interrupted). Once that happens, this thread will attempt to re-acquire the monitor for `foo` and then continue. Note that if any other threads are waiting to obtain the monitor then they might get in first - there is no guarantee of the order the JVM will hand out locks. Note that `wait()` will wait forever if no-one calls `notify` or `notifyAll`. It's usually best to use the other form of `wait` that takes a timeout. That version will wake up when someone calls `notify`/`notifyAll` or when the timeout has expired.
So, you need one thread to do the waiting and a different thread to do the notifying. Both `wait` and `notify` must hold the monitor on the object they are trying to wait on or notify; this is why you are seeing the IllegalMonitorStateException.
An example might help you understand:
```
class RepaintScheduler implements Runnable {
    private boolean paused = false;
    private final Object LOCK = new Object();

    public void run() {
        while (true) {
            synchronized(LOCK) {
                if (paused) {
                    try {
                        LOCK.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                } else {
                    repaint();
                }
            }
            try {
                Thread.sleep(20);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public void pause() {
        synchronized(LOCK) {
            paused = true;
            LOCK.notifyAll();
        }
    }

    public void resume() {
        synchronized(LOCK) {
            paused = false;
            LOCK.notifyAll();
        }
    }
}
```
Your Applet code can then do this:
```
public void init() {
    RepaintScheduler scheduler = new RepaintScheduler();
    // Add listeners that call scheduler.pause and scheduler.resume
    btn_increment.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            scheduler.resume();
        }
    });
    btn_decrement.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            scheduler.pause();
        }
    });
    // Now start everything up
    Thread t = new Thread(scheduler);
    t.start();
}
```
Note that the Applet class does not care about how the scheduler pauses/resumes and does not have any synchronized blocks.
So a possible sequence of events here is:
* Thread A starts running the repaint scheduler.
* Thread A goes to sleep for 20ms.
* Thread B (the event dispatch thread) receives a button click; calls 'pause'.
* Thread B obtains the monitor on LOCK.
* Thread B updates the 'paused' variable and calls LOCK.notifyAll.
* No threads are waiting on LOCK so nothing interesting happens.
* Thread B releases the monitor on LOCK.
* Thread A wakes up, goes through its loop again.
* Thread A obtains the monitor on LOCK.
* Thread A sees that it should be paused, so it calls LOCK.wait.
* At this point Thread A suspends, waiting for someone to call notifyAll. Thread A releases the monitor on LOCK.
* Some time later, the user clicks 'resume'.
* Thread B calls scheduler.resume.
* Thread B obtains the monitor on LOCK.
* Thread B updates the 'paused' variable and calls LOCK.notifyAll.
* Thread A sees the 'notifyAll' and wakes up. It tries to obtain the monitor on LOCK but it is held by Thread B so Thread A blocks.
* Thread B releases the monitor on LOCK.
* Thread A obtains the monitor and carries on.
Does that all make sense?
Having a separate LOCK variable is not required; I've done that to highlight the fact that you are not calling wait/notify on a `Thread` instance. Similarly, the logic inside the RepaintScheduler is not ideal but is just there to illustrate how wait/notify could be used.
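The pattern itself is not specific to Java. As a cross-language illustration (not part of the original answer), here is a rough equivalent of the RepaintScheduler using Python's `threading.Condition`, with `repaint()` replaced by a counter so the structure is testable:

```python
import threading
import time

class RepaintScheduler:
    """Pause/resume via a condition variable, mirroring the Java example."""
    def __init__(self):
        self._cond = threading.Condition()
        self._paused = False
        self._running = True
        self.ticks = 0  # stands in for repaint()

    def run(self):
        while True:
            with self._cond:
                while self._paused and self._running:
                    self._cond.wait()  # releases the lock while blocked
                if not self._running:
                    return
                self.ticks += 1
            time.sleep(0.01)

    def pause(self):
        with self._cond:
            self._paused = True
            self._cond.notify_all()

    def resume(self):
        with self._cond:
            self._paused = False
            self._cond.notify_all()

    def stop(self):
        with self._cond:
            self._running = False
            self._cond.notify_all()
```

As in the Java version, both the flag updates and the `wait`/`notify_all` calls happen while holding the same lock, which is what prevents the lost-wakeup race.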
|
I think you have to synchronize on the thread in order to call wait and notify. Try to use
```
synchronized (th) {
    th.notify();
}
```
and the same with `wait()`.
|
13,334,207 |
I'm trying to learn how to pause and resume a thread in Java. I'm using an `Applet` that `implements Runnable` and has 2 buttons, "Start" and "Stop".
```
public void init(){
    th = new Thread(this);
    th.start();
    btn_increment = new Button("Start");
    btn_increment.addActionListener(new ActionListener(){
        public void actionPerformed(ActionEvent ev){
            th.notify();
        }
    });
    add(btn_increment);
    btn_decrement = new Button("Stop");
    btn_decrement.addActionListener(new ActionListener(){
        public void actionPerformed(ActionEvent ev){
            try{
                th.wait();
            } catch(InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    add(btn_decrement);
}
```
The run method of the thread:
```
public void run(){
    while(true){
        repaint();
        try{
            Thread.sleep(20);
        } catch(InterruptedException e) {
            e.printStackTrace();
        }
    }
}
```
Now whenever I try to pause or resume the thread, an exception is thrown:
`Exception in thread "AWT-EventQueue-1" java.lang.IllegalMonitorStateException`
Notes:
The previous code runs perfectly if I use the deprecated methods `suspend()` and `resume()`, but the documentation points at using `wait()` and `notify()` instead, with synchronization. I tried adding the word `synchronized` to the `actionPerformed` method, but it still throws the exception.
Can someone please explain **WHY** this isn't working and how to solve the synchronization problem? A few explanation points would really be of much help ;)
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13334207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/636342/"
] |
You have misunderstood how `wait()` works. Calling `wait` on a `Thread` object does not pause that thread; it instead tells the currently running thread to wait for something else to happen. To explain why, I'll need to back up a bit and explain what `synchronized` actually does.
When you enter a `synchronized` block you obtain the *monitor* associated with an object. For example,
```
synchronized(foo) {
```
obtains the monitor associated with the object `foo`.
Once you have the monitor, no other threads can obtain it until you exit the synchronized block. This is where `wait` and `notify` come in.
`wait` is a method on the Object class that tells the currently running thread to temporarily release the monitor it holds. This allows other threads to synchronize on `foo`.
```
foo.wait();
```
This thread will not resume until someone else calls `notify` or `notifyAll` on `foo` (or the thread is interrupted). Once that happens, this thread will attempt to re-acquire the monitor for `foo` and then continue. Note that if any other threads are waiting to obtain the monitor then they might get in first - there is no guarantee of the order the JVM will hand out locks. Note that `wait()` will wait forever if no-one calls `notify` or `notifyAll`. It's usually best to use the other form of `wait` that takes a timeout. That version will wake up when someone calls `notify`/`notifyAll` or when the timeout has expired.
So, you need one thread to do the waiting and a different thread to do the notifying. Both `wait` and `notify` must hold the monitor on the object they are trying to wait on or notify; this is why you are seeing the IllegalMonitorStateException.
An example might help you understand:
```
class RepaintScheduler implements Runnable {
    private boolean paused = false;
    private final Object LOCK = new Object();

    public void run() {
        while (true) {
            synchronized(LOCK) {
                if (paused) {
                    try {
                        LOCK.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                } else {
                    repaint();
                }
            }
            try {
                Thread.sleep(20);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public void pause() {
        synchronized(LOCK) {
            paused = true;
            LOCK.notifyAll();
        }
    }

    public void resume() {
        synchronized(LOCK) {
            paused = false;
            LOCK.notifyAll();
        }
    }
}
```
Your Applet code can then do this:
```
public void init() {
    RepaintScheduler scheduler = new RepaintScheduler();
    // Add listeners that call scheduler.pause and scheduler.resume
    btn_increment.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            scheduler.resume();
        }
    });
    btn_decrement.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            scheduler.pause();
        }
    });
    // Now start everything up
    Thread t = new Thread(scheduler);
    t.start();
}
```
Note that the Applet class does not care about how the scheduler pauses/resumes and does not have any synchronized blocks.
So a possible sequence of events here is:
* Thread A starts running the repaint scheduler.
* Thread A goes to sleep for 20ms.
* Thread B (the event dispatch thread) receives a button click; calls 'pause'.
* Thread B obtains the monitor on LOCK.
* Thread B updates the 'paused' variable and calls LOCK.notifyAll.
* No threads are waiting on LOCK so nothing interesting happens.
* Thread B releases the monitor on LOCK.
* Thread A wakes up, goes through its loop again.
* Thread A obtains the monitor on LOCK.
* Thread A sees that it should be paused, so it calls LOCK.wait.
* At this point Thread A suspends, waiting for someone to call notifyAll. Thread A releases the monitor on LOCK.
* Some time later, the user clicks 'resume'.
* Thread B calls scheduler.resume.
* Thread B obtains the monitor on LOCK.
* Thread B updates the 'paused' variable and calls LOCK.notifyAll.
* Thread A sees the 'notifyAll' and wakes up. It tries to obtain the monitor on LOCK but it is held by Thread B so Thread A blocks.
* Thread B releases the monitor on LOCK.
* Thread A obtains the monitor and carries on.
Does that all make sense?
Having a separate LOCK variable is not required; I've done that to highlight the fact that you are not calling wait/notify on a `Thread` instance. Similarly, the logic inside the RepaintScheduler is not ideal but is just there to illustrate how wait/notify could be used.
|
You can't just call `notify` and `wait`. You have to wait *for something*. And before calling `notify`, you have to make it so that there's nothing to wait for anymore.
If your blocks aren't already synchronized, then something is wrong in your design.
How can you call `wait` unless you have something to wait for? And how can you know that there is something to wait for if you haven't checked? And how can you check without synchronizing with the code that controls whether that thing has happened yet or not?
How can you call `notify` unless something just happened that you need to notify the thread about? And how could something have happened that another thread cares about if you don't hold the lock that would tell that thread about it?
You should use `wait` like this:
```
while (something_to_wait_for()) wait();
```
And that `something_to_wait_for` should check something that is protected by synchronization. And you can't make `something_to_wait_for` synchronized because then you have a race condition -- what if that something happens after `something_to_wait_for` returns but before you enter `wait`? Then you are waiting for something that already happened! So you *need* synchronization fundamentally. If you are just adding it at the end, your design is broken.
The solution in your case is probably to add something to wait for. Perhaps a simple boolean variable is all you need. Then your code can be `while (should_wait) wait();` to pause, `should_wait = true;` to request a pause, and `should_wait = false; notifyAll();` to resume. You'll need `synchronized` blocks to protect the boolean and the `wait`/`notify` logic.
|
9,754,562 |
I am trying to use twitter search web service in python. I want to call a web service like:
```
http://search.twitter.com/search.json?q=blue%20angels&rpp=5&include_entities=true&result_type=mixed
```
from my python program.
Can anybody tell me
1. how to use the xmlhttprequest object in Python,
2. how to pass parameters to it, and
3. how to get the data into a dictionary?
Here is my try:
```
import urllib
import sys
url = "http://search.twitter.com/search.json?q=blue%20angels&rpp=5&include_entities=true&result_type=mixed"
urlobj = urllib.urlopen(url)
data = urlobj.read()
print data
```
Thanks.
|
2012/03/17
|
[
"https://Stackoverflow.com/questions/9754562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/929701/"
] |
You don't need an "asynchronous httprequest" to use the Twitter search API:
```
import json
import urllib
import urllib2
# make query
query = urllib.urlencode(dict(q="blue angel", rpp=5, include_entities=1,
                              result_type="mixed"))
# make request
resp = urllib2.urlopen("http://search.twitter.com/search.json?" + query)
# make dictionary (parse json response)
d = json.load(resp)
```
There are probably several libraries that provide a nice OO interface around these http requests.
To make multiple requests concurrently you could use [`gevent`](http://www.gevent.org/):
```
import gevent
import gevent.monkey; gevent.monkey.patch_all() # patch stdlib
import json
import urllib
import urllib2
def f(querystr):
    query = urllib.urlencode(dict(q=querystr, rpp=5, include_entities=1,
                                  result_type="mixed"))
    resp = urllib2.urlopen("http://search.twitter.com/search.json?" + query)
    d = json.load(resp)
    print('number of results %d' % (len(d['results']),))

jobs = [gevent.spawn(f, q) for q in ['blue angel', 'another query']]
gevent.joinall(jobs)  # wait for completion
```
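On Python 3, the standard library's `concurrent.futures` gives a similar fan-out without gevent. A minimal sketch (not from the original answer), with a stand-in for the fetch function so there are no real network calls:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_result_count(querystr):
    # stand-in for the urlopen + json.load call above
    return len(querystr.split())

queries = ['blue angel', 'another query']
# each query runs in its own worker thread; map preserves input order
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(fetch_result_count, queries))
print(counts)
```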
|
I'd recommend checking out [requests](http://docs.python-requests.org/en/latest/) and its [`async` module](http://docs.python-requests.org/en/latest/user/advanced/#asynchronous-requests).
Simple request:
---------------
```
import json
import requests
params = {'rpp': 5, 'include_entities': 1, 'result_type': 'mixed',
          'q': 'blue angel'}
r = requests.get('http://search.twitter.com/search.json', params=params)
print json.loads(r.text)
```
Async:
------
```
import json
from requests import async
def add_option_params(args):
    options = {'rpp': 5, 'include_entities': 1, 'result_type': 'mixed'}
    args['params'].update(options)
    return args

requests = []
for search_term in ['test1', 'test2', 'test3']:
    request = async.get('http://search.twitter.com/search.json',
                        params={'q': search_term},
                        hooks={'args': add_option_params})
    requests.append(request)

for result in async.map(requests):
    print result.url, json.loads(result.text)['completed_in']
```
|
117,429 |
My current client has several internal products which the IT department supports. All product owners are non-technical, and each application is always developed by one developer (as are all change requests).
The current methodology is a bit like waterfall. All use cases are defined first by the product owner. Then the application is developed and released into a test environment. The user tests it and says yes/no. Any change results in a change request and a new release (including setup packages). And it continues like that until the user is happy, which results in the product going to production.
I want to introduce an agile method instead, to capture changes quicker and focus on the most important features from the start. Scrum seems to be overkill since there is only one developer per project. Any other suggestions?
**Edit**
My client isn't going to use anything that isn't documented. So I'm looking for an established and documented method (if there is one which suits a single developer).
|
2011/11/02
|
[
"https://softwareengineering.stackexchange.com/questions/117429",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12629/"
] |
You don't have to follow full Scrum but you can definitely take advantage of its incremental iterative approach which will replace your waterfall.
How will the process change:
* POs will not need to define all use cases upfront.
* POs will define only use cases they are absolutely sure about at the moment and prioritize them.
* The developer will follow a commitment process, taking the most important use cases into the next iteration (which has a fixed length).
* The PO will be able to verify those use cases during (or, worse, after) the iteration.
* The developer will deliver a new "release" after each iteration (this can take great advantage of automation tools).
Pros of the change:
* Better visibility to development process
* Better change management where problems are discovered much earlier in development phase and can be targeted immediately in the next iteration without cumulative effect (adding more functionality based on incorrectly implemented feature)
* Incremental delivery where after each iteration developer should provide working product with new features
* More important features will be delivered more quickly and thus will be more used when developing / testing rest of the product = there can be more feedback about their long term usability
Cons of the change:
* POs will have to accept a new model and communicate with the developer continuously. If POs are not willing to participate in development, provide continuous feedback and clarifications, and validate use cases as soon as possible, you can give up on any attempt at change.
* POs will have to change the way they define use cases. Use cases will have to be small enough for an iteration and as independent as possible. If the developer must work on five use cases concurrently to complete them all, it will not work.
The main point of the change: if you want to go agile, you must have much more frequent feedback (small iterations with incremental delivery of a working product) and much more communication between the developer and the PO.
|
In a one developer environment the key things are:
* Source control
* Continuous build server
* Task/bug tracking system
* Defined sprints
* Unit tests
* Code coverage
* Independent testing in parallel to development
Doing this will make you agile enough without meeting the strict definition of one of documented methodologies. If the teams grow, then look at the communication aspects of how the team works.
|
117,429 |
My current client has several internal products which the IT department supports. All product owners are non-technical, and each application is always developed by one developer (as are all change requests).
The current methodology is a bit like waterfall. All use cases are defined first by the product owner. Then the application is developed and released into a test environment. The user tests it and says yes/no. Any change results in a change request and a new release (including setup packages). And it continues like that until the user is happy, which results in the product going to production.
I want to introduce an agile method instead, to capture changes quicker and focus on the most important features from the start. Scrum seems to be overkill since there is only one developer per project. Any other suggestions?
**Edit**
My client isn't going to use anything that isn't documented. So I'm looking for an established and documented method (if there is one which suits a single developer).
|
2011/11/02
|
[
"https://softwareengineering.stackexchange.com/questions/117429",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12629/"
] |
You don't have to follow full Scrum but you can definitely take advantage of its incremental iterative approach which will replace your waterfall.
How will the process change:
* POs will not need to define all use cases upfront.
* POs will define only use cases they are absolutely sure about at the moment and prioritize them.
* The developer will follow a commitment process, taking the most important use cases into the next iteration (which has a fixed length).
* The PO will be able to verify those use cases during (or, worse, after) the iteration.
* The developer will deliver a new "release" after each iteration (this can take great advantage of automation tools).
Pros of the change:
* Better visibility to development process
* Better change management where problems are discovered much earlier in development phase and can be targeted immediately in the next iteration without cumulative effect (adding more functionality based on incorrectly implemented feature)
* Incremental delivery where after each iteration developer should provide working product with new features
* More important features will be delivered more quickly and thus will be more used when developing / testing rest of the product = there can be more feedback about their long term usability
Cons of the change:
* POs will have to accept a new model and communicate with the developer continuously. If POs are not willing to participate in development, provide continuous feedback and clarifications, and validate use cases as soon as possible, you can give up on any attempt at change.
* POs will have to change the way they define use cases. Use cases will have to be small enough for an iteration and as independent as possible. If the developer must work on five use cases concurrently to complete them all, it will not work.
The main point of the change: if you want to go agile, you must have much more frequent feedback (small iterations with incremental delivery of a working product) and much more communication between the developer and the PO.
|
Scrum is overkill, but Agile is more natural for small teams, even one person teams. The point of Agile is laying out a backlog of user stories upfront that accurately describe the client's use cases.
Before the start of each sprint, priority and LOE (Level of Effort or Points) are determined for user stories, and based on what is possible in the 2-3 week sprint timeframe, user stories are added to the sprint.
At the end of the sprint, all the user stories should be developed **and tested** and the most important aspect of all is that all functionality from previous sprints should be unaffected and the software should not be left at the end of the sprint in a broken state.
*Does it make sense to release to the client after every sprint?* No, it doesn't and this is a common misconception of Agile that I see all the time. I encounter few clients who want a working release after every sprint. Some may want a release every quarter for instance, and if Agile is being followed, then the last released sprint should always be in a usable state where a release can be prepared for the client on a regular interval. Some clients may also want an environment where they can authorize, demo and evaluate the latest sprint so that they can keep up to date on the real time progress of the project.
*What about the challenges of providing software in a usable state within the first few sprints of a fresh new project?* The first few sprints of a new project can be a big challenge because the amount of architecture and design work as well as foundation work that needs to occur always peaks at the beginning of a new project and gradually diminishes throughout development.
This can be dealt with in a number of ways; many shops will use the initial backlog of user stories to make and document core architecture decisions before the first sprint even begins. Other times the project is envisioned to be structurally similar to previous projects and a project template is used that lays out the foundation for the core design and future development. I have also worked on teams where the architecture team kicks off the project with a 2-3 week **Preamble** before the first sprint is officially supposed to begin. The first sprint can start out small for the developers too; for a typical CRUD application I always like to start off with a login page and authentication. It is easy to start, it is clearly defined when it is finished, and it can be clearly tested by QA at the early stages of the project.
**EDIT:** So how this helps your client is that changes can be made to user stories throughout the process, and this can reflect in modified timelines, estimations, quicker turnaround for the customer, more feedback to the client and more transparency to the client. This benefits you in that you will be more **agile** in dealing with changes to the original scope in the middle of a project.
|
30,310,542 |
I need to **compare hundreds of objects** stored in a unique list to find duplicates:
```
object_list = {Object_01, Object_02, Object_03, Object_04, Object_05, ...}
```
I've written a custom function which returns `True` if the objects are equal and `False` if not:
```
object_01.compare(object_02)
>>> True
```
The compare method works well, but takes a lot of time per execution. I'm currently using `itertools.combinations(x, 2)` to **iterate through all combinations**. I thought it would be a good idea to use a dict for storing already compared objects and to create new sets dynamically, like:
```
dct = {'Compared': {}}
dct['Compared'] = set()

import itertools
for a, b in itertools.combinations(x, 2):
    if b.name not in dct['Compared']:
        if compare(a,b) == True:
            #print (a,b)
            key = a.name
            value = b.name
            if key not in dct:
                dct[key] = set()
                dct[key].add(value)
            else:
                dct[key].add(value)
            dct[key].add(key)
            dct['Compared'].add(b)
```
**Current Output:**
```
Compared: {'Object_02', 'Object_01', 'Object_03', 'Object_04', 'Object_05'}
Object_01: {'Object_02', 'Object_03', 'Object_01'}
Object_04: {'Object_05', 'Object_04'}
Object_05: {'Object_04'}
...
```
I would like to know: **Is there a faster way to iterate through all combinations** and how to *break/prevent* the iteration of an object, which is already assigned to a list of duplicates?
**Desired Output:**
```
Compared: {'Object_02', 'Object_01', 'Object_03', 'Object_04', 'Object_05'}
Object_01: {'Object_02', 'Object_03', 'Object_01'}
Object_04: {'Object_05', 'Object_04'}
...
```
**Note:** The compare method is a C wrapper. The requirement is to find an algorithm around it.
|
2015/05/18
|
[
"https://Stackoverflow.com/questions/30310542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3091066/"
] |
You don't need to calculate all combinations, you just need to check if a given item is a duplicate:
```
for i, a in enumerate(x):
    if any(a.compare(b) for b in x[:i]):
        # a is a duplicate of an already seen item, so do something
```
This is still technically O(n^2), but you've cut out at least half the checks required, and should be a bit faster.
In short, `x[:i]` returns all items in the list before index `i`. If the item `x[i]` appears in that list, you know it's a duplicate. If not, there may be a duplicate *after* it in the list, but you worry about that when you get there.
Using `any` is also important here: if it finds any true item, it will immediately stop, without checking the rest of the iterable.
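A tiny runnable illustration of the idea; here `compare` is stand-in equality on integers rather than the asker's expensive C-wrapped method:

```python
# compare() stands in for the expensive C-wrapped method; here it is
# plain equality on integers so the example is runnable.
def compare(a, b):
    return a == b

x = [1, 2, 1, 3, 2]
duplicates = []
for i, a in enumerate(x):
    # any() stops at the first match, skipping the remaining checks
    if any(compare(a, b) for b in x[:i]):
        duplicates.append(a)

print(duplicates)  # → [1, 2]: each item that duplicates an earlier one
```

Each item is compared only against the items before it, and `any` bails out on the first hit.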
You could also improve the number of checks by removing known duplicates from the list you're checking against:
```
x_copy = x[:]
removed = 0
for i, a in enumerate(x):
    if any(a.compare(b) for b in x_copy[:i-removed]):
        del x_copy[i-removed]
        removed += 1
        # a is a duplicate of an already seen item, so do something
```
Note that we use a copy, to avoid changing the sequence we're iterating over, and we need to account for the number of items we've removed when using indexes.
Next, we just need to figure out how to build the dictionary.
This might be a little more complex. The first step is to figure out exactly which element is a duplicate. This can be done by realising `any` is just a wrapper around a `for` loop:
```
def any(iterable):
    for item in iterable:
        if item: return True
    return False
```
We can then make a minor change, and pass in a function:
```
def first(iterable, fn):
for item in iterable:
if fn(item): return item
return None
```
Now, we change our duplicate finder as follows:
```
import collections

d = collections.defaultdict(list)
x_copy = x[:]
removed = 0
for i, a in enumerate(x):
    b = first(x_copy[:i-removed], a.compare)
    if b is not None:
        # b is the first occurring duplicate of a
        del x_copy[i-removed]
        removed += 1
        d[b.name].append(a)
    else:
        # we've not seen a yet, but might see it later
        d[a.name].append(a)
```
This will put every element in the list into a dict(-like). If you only want the duplicates, it's then just a case of getting all the entries with a length greater than 1.
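Putting the pieces together, here is a runnable sketch; the `Obj` class and its `compare` are stand-ins for the asker's real objects and their expensive comparison:

```python
import collections

class Obj:
    # Stand-in for the real objects: a name plus a value that the
    # expensive compare() method would really be checking.
    def __init__(self, name, value):
        self.name, self.value = name, value
    def compare(self, other):
        return self.value == other.value

def first(iterable, fn):
    # Return the first item for which fn(item) is truthy, else None.
    for item in iterable:
        if fn(item):
            return item
    return None

def group_duplicates(x):
    d = collections.defaultdict(list)
    x_copy = x[:]
    removed = 0
    for i, a in enumerate(x):
        b = first(x_copy[:i - removed], a.compare)
        if b is not None:
            # a duplicates the earlier object b; drop it from the pool
            del x_copy[i - removed]
            removed += 1
            d[b.name].append(a.name)
        else:
            d[a.name].append(a.name)
    return d

objs = [Obj("A", 1), Obj("B", 2), Obj("C", 1), Obj("D", 2), Obj("E", 3)]
groups = group_duplicates(objs)
print(dict(groups))  # → {'A': ['A', 'C'], 'B': ['B', 'D'], 'E': ['E']}
```

Entries with more than one name are the duplicate groups; singletons like `E` had no match.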
|
If you want to find the duplicates by grouping on attributes, group the objects into a dict keyed by those attributes:
```
class Foo:
    def __init__(self, i, j):
        self.i = i
        self.j = j

object_list = {Foo(1,2), Foo(3,4), Foo(1,2), Foo(3,4), Foo(5,6)}

from collections import defaultdict

d = defaultdict(list)
for obj in object_list:
    d[(obj.i, obj.j)].append(obj)

print(d)
defaultdict(<type 'list'>, {(1, 2): [<__main__.Foo instance at 0x7fa44ee7d098>, <__main__.Foo instance at 0x7fa44ee7d128>],
(5, 6): [<__main__.Foo instance at 0x7fa44ee7d1b8>],
(3, 4): [<__main__.Foo instance at 0x7fa44ee7d0e0>, <__main__.Foo instance at 0x7fa44ee7d170>]})
```
If it is not these attributes, use a tuple storing all the attributes you check in the comparison.
Or sort the list by the attributes that matter and use `groupby` to group:
```
class Foo:
    def __init__(self, i, j):
        self.i = i
        self.j = j

object_list = {Foo(1,2), Foo(3,4), Foo(1,2), Foo(3,4), Foo(5,6)}

from itertools import groupby
from operator import attrgetter

groups = [list(v) for k, v in groupby(sorted(object_list, key=attrgetter("i","j")), key=attrgetter("i","j"))]
print(groups)
[[<__main__.Foo instance at 0x7f794a944d40>, <__main__.Foo instance at 0x7f794a944dd0>], [<__main__.Foo instance at 0x7f794a944d88>, <__main__.Foo instance at 0x7f794a944e18>], [<__main__.Foo instance at 0x7f794a944e60>]]
```
You could also implement `__lt__`, `__eq__` and `__hash__` to make your objects sortable and hashable:
```
class Foo(object):
    def __init__(self, i, j):
        self.i = i
        self.j = j
    def __lt__(self, other):
        return (self.i, self.j) < (other.i, other.j)
    def __hash__(self):
        return hash((self.i, self.j))
    def __eq__(self, other):
        return (self.i, self.j) == (other.i, other.j)

object_list = [Foo(1,2), Foo(3,4), Foo(1,2), Foo(3,4), Foo(5,6)]
print(set(object_list))
object_list.sort()
print(map(lambda x: (getattr(x,"i"), getattr(x,"j")), object_list))
set([<__main__.Foo object at 0x7fdff2fc08d0>, <__main__.Foo object at 0x7fdff2fc09d0>, <__main__.Foo object at 0x7fdff2fc0810>])
[(1, 2), (1, 2), (3, 4), (3, 4), (5, 6)]
```
Obviously the attributes need to be hashable; if you had lists you could change them to tuples, etc.
|
3,268,491 |
We run an web-app with built-in SSRS reports, which are integrated via an Iframe.
A user has set her display percentage to 150% in Windows 7.
(Control Panel > Appearance and Personalization > Display).
She is displaying the webpage with the report in Firefox and she has to scroll horizontally and vertically within the Iframe to view the whole report.
I managed to get some screenshots
* no scrollbars in Firefox
with a display percentage set to 100% set in Windows 7
<http://screencast.com/t/MGM3MmQzOTgt>
* no scrollbars in IE8 (with display 150% set in Windows 7)
<http://screencast.com/t/ZjE3NGVj>
* scrollbars in Firefox
with a display percentage set to 150% set in Windows 7
<http://screencast.com/t/ZmI1Yjkz>
Somehow Iframe and SSRS don't seem to work together well, but how to solve this?
|
2010/07/16
|
[
"https://Stackoverflow.com/questions/3268491",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/233226/"
] |
It looks like `AbstractAutowireCapableBeanFactory` (where most of the work with BeanWrapper is done) is hardcoded to use `BeanWrapperImpl`. No point of extension there. `BeanWrapperImpl` uses `CachedIntrospectionResults` which uses `Introspector` in turn. Looks like there is no way to configure any of these dependencies. We can try to use standard points of extension: `BeanPostProcessor` or `BeanFactoryPostProcessor`.
Using just `BeanPostProcessor` will not work, because if we are doing something like this:
```
<bean id="beanForInjection" class="com.test.BeanForInjection">
    <property name="bean" ref="beanToBeInjected"/>
</bean>
```
where `BeanForInjection` is a Scala class
```
package com.test
import com.other.BeanToBeInjected
class BeanForInjection {
  var bean : BeanToBeInjected = null;
}
```
and `BeanToBeInjected` is a bean we want to inject, then we will get an exception before `BeanPostProcessor` has a chance to step in. Beans get populated with values before any `BeanPostProcessor` callbacks are called.
But we can use `BeanFactoryPostProcessor` to 'hide' properties that are expected to be injected through Scala-like setters and apply them later.
Something like this:
```
package com.other;

import ...

public class ScalaAwareBeanFactoryPostProcessor implements BeanFactoryPostProcessor, PriorityOrdered {

    ... PriorityOrdered related methods...

    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
        String[] beanNames = beanFactory.getBeanDefinitionNames();
        for (String currentName : beanNames) {
            BeanDefinition beanDefinition = beanFactory.getBeanDefinition(currentName);
            processScalaProperties(beanDefinition);
        }
    }

    protected void processScalaProperties(BeanDefinition beanDefinition) {
        String className = beanDefinition.getBeanClassName();
        try {
            Set<PropertyValue> scalaProperties = new HashSet<PropertyValue>();
            for (PropertyValue propertyValue : beanDefinition.getPropertyValues().getPropertyValueList()) {
                String scalaSetterName = ScalaAwarePostProcessorUtils.getScalaSetterName(propertyValue.getName());
                BeanInfo beanInfo = getBeanInfo(className);
                PropertyDescriptor[] propertyDescriptors = beanInfo.getPropertyDescriptors();
                MethodDescriptor[] methodDescriptors = beanInfo.getMethodDescriptors();
                for (MethodDescriptor md : methodDescriptors) {
                    if (scalaSetterName.equals(md.getName())) {
                        boolean isScalaProperty = true;
                        for (PropertyDescriptor pd : propertyDescriptors) {
                            if (propertyValue.getName().equals(pd.getName())) {
                                isScalaProperty = false;
                            }
                        }
                        if (isScalaProperty) {
                            scalaProperties.add(propertyValue);
                        }
                    }
                }
            }
            if (!scalaProperties.isEmpty()) {
                beanDefinition.setAttribute(ScalaAwarePostProcessorUtils.SCALA_ATTRIBUTES_KEY, scalaProperties);
            }
            for (PropertyValue propertyValue : scalaProperties) {
                beanDefinition.getPropertyValues().removePropertyValue(propertyValue);
            }
        } catch (ClassNotFoundException e) {
        } catch (IntrospectionException e) {
        }
    }

    private BeanInfo getBeanInfo(String className) throws ClassNotFoundException, IntrospectionException {
        Class beanClass = Class.forName(className);
        BeanInfo beanInfo = Introspector.getBeanInfo(beanClass);
        cleanIntrospectorCache(beanClass);
        return beanInfo;
    }

    private void cleanIntrospectorCache(Class beanClass) {
        Class classToFlush = beanClass;
        do {
            Introspector.flushFromCaches(classToFlush);
            classToFlush = classToFlush.getSuperclass();
        }
        while (classToFlush != null);
    }
}
```
This implementation simply checks whether any bean has properties that are not listed as JavaBean properties but do have Scala-like setters. All properties that match this contract are removed from the properties list and saved as attributes of the bean. Now all we need is to pull these attributes (if any) for every bean and apply them. This is where we need `BeanPostProcessor` (`AutowiredAnnotationBeanPostProcessor` can be a good example of a `BeanPostProcessor`).
```
package com.other;

public class ScalaAwareBeanPostProcessor extends InstantiationAwareBeanPostProcessorAdapter
        implements PriorityOrdered, BeanFactoryAware {

    private ConfigurableListableBeanFactory beanFactory;

    ... Order related stuff...

    public void setBeanFactory(BeanFactory beanFactory) {
        if (beanFactory instanceof ConfigurableListableBeanFactory) {
            this.beanFactory = (ConfigurableListableBeanFactory) beanFactory;
        }
    }

    @Override
    public PropertyValues postProcessPropertyValues(PropertyValues pvs, PropertyDescriptor[] pds, Object bean, String beanName) throws BeansException {
        try {
            InjectionMetadata metadata = findScalaMetadata(beanFactory.getBeanDefinition(beanName), bean.getClass());
            metadata.inject(bean, beanName, pvs);
        }
        catch (Throwable ex) {
            throw new BeanCreationException(beanName, "Injection of Scala dependencies failed", ex);
        }
        return pvs;
    }

    private InjectionMetadata findScalaMetadata(BeanDefinition beanDefinition, Class<?> beanClass) throws IntrospectionException {
        LinkedList<InjectionMetadata.InjectedElement> elements = new LinkedList<InjectionMetadata.InjectedElement>();
        Set<PropertyValue> scalaProperties = (Set<PropertyValue>) beanDefinition.getAttribute(ScalaAwarePostProcessorUtils.SCALA_ATTRIBUTES_KEY);
        if (scalaProperties != null) {
            for (PropertyValue pv : scalaProperties) {
                Method setter = ScalaAwarePostProcessorUtils.getScalaSetterMethod(beanClass, pv.getName());
                if (setter != null) {
                    Method getter = ScalaAwarePostProcessorUtils.getScalaGetterMethod(beanClass, pv.getName());
                    PropertyDescriptor pd = new PropertyDescriptor(pv.getName(), getter, setter);
                    elements.add(new ScalaSetterMethodElement(setter, pd));
                }
            }
        }
        return new InjectionMetadata(beanClass, elements);
    }

    private class ScalaSetterMethodElement extends InjectionMetadata.InjectedElement {

        protected ScalaSetterMethodElement(Member member, PropertyDescriptor pd) {
            super(member, pd);
        }

        @Override
        protected Object getResourceToInject(Object target, String requestingBeanName) {
            Method method = (Method) this.member;
            MethodParameter methodParam = new MethodParameter(method, 0);
            DependencyDescriptor dd = new DependencyDescriptor(methodParam, true);
            return beanFactory.resolveDependency(dd, requestingBeanName);
        }
    }
}
}
```
Simply create these two beans in your context:
```
<bean class="com.other.ScalaAwareBeanFactoryPostProcessor"/>
<bean class="com.other.ScalaAwareBeanPostProcessor"/>
```
Note:
This is not a final solution. It will work for classes, but it will not work for simple types:
```
<bean id="beanForInjection" class="com.test.BeanForInjection">
    <property name="bean" ref="beanToBeInjected"/>
    <property name="name" value="skaffman"/>
</bean>
```
The solution will work for `bean`, but not for `name`. This can be fixed, but at this point I think you will be better off just using the @BeanInfo annotation.
|
Interesting question, you might find the following links useful:
<http://www.grails.org/Extended+Data+Binding+Plugin#Application-wide> (application-wide DataBinder and BeanWrapper configuration)
<http://blog.krecan.net/2008/06/17/spring-field-injection/>
|
48,763,802 |
This is my data.
```
Mod <- as.factor(c(rep("GLM",5),rep("MLP",5),rep("RF",5),rep("SVML",5),rep("SVMR",5)))
Manifold <- as.factor(rep(c("LLE","Iso","PCA","MDS","kPCA"),5))
ROC <- runif(25,0,1)
Sens <- runif(25,0,1)
Spec <- runif(25,0,1)
df <- data.frame("Mod"= Mod, "Manifold"= Manifold, "ROC" = ROC, "Sens" = Sens, "Spec" = Spec)
```
And I am making this graph
```
resul3 <- ggplot(df, aes(x = Mod, y = ROC, fill= Manifold)) +
geom_bar(stat = "identity", position = "dodge", color = "black") +
ylab("ROC & Specificity") +
xlab("Classifiers") +
theme_bw() +
ggtitle("Classifiers' ROC per Feature Extraction Plasma") +
geom_point(aes(y=Spec), color = "black", position=position_dodge(.9)) +
scale_fill_manual(name = "Feature \nExtraction", values = c("#FFEFCA",
"#EDA16A" ,"#C83741", "#6C283D", "#62BF94"))
```
[first graph](https://i.stack.imgur.com/bHB0H.png)
And what I want is another legend with the title "Specificity" and a single black point. I don't want the point to be inside the Manifold legend.
[Something like this but without the points inside the manifold squares](https://i.stack.imgur.com/Bql46.png)
|
2018/02/13
|
[
"https://Stackoverflow.com/questions/48763802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7222370/"
] |
Changing the `geom_point` line, adding a `scale_color_manual` and using the override as seen in @drmariod's answer will result in this plot:
```
ggplot(df, aes(x = Mod, y = ROC, fill= Manifold)) +
geom_bar(stat = "identity", position = "dodge", color = "black") +
ylab("ROC & Specificity") +
xlab("Classifiers") +
theme_bw() +
ggtitle("Classifiers' ROC per Feature Extraction Plasma") +
geom_point(aes(y=Spec, color = "Specificity"), position=position_dodge(.9)) +
scale_fill_manual(name = "Feature \nExtraction", values = c("#FFEFCA",
"#EDA16A" ,"#C83741", "#6C283D", "#62BF94")) +
scale_color_manual(name = NULL, values = c("Specificity" = "black")) +
guides(fill = guide_legend(override.aes = list(shape = NA)))
```
[](https://i.stack.imgur.com/CAgf7.png)
|
You can overwrite the aesthetics for shape and set it to `NA` like this
```
ggplot(df, aes(x = Mod, y = ROC, fill= Manifold)) +
geom_bar(stat = "identity", position = "dodge", color = "black") +
ylab("ROC & Specificity") +
xlab("Classifiers") +
theme_bw() +
ggtitle("Classifiers' ROC per Feature Extraction Plasma") +
geom_point(aes(y=Spec), color = "black", position=position_dodge(.9)) +
scale_fill_manual(name = "Feature \nExtraction", values = c("#FFEFCA",
"#EDA16A" ,"#C83741", "#6C283D", "#62BF94")) +
guides(fill = guide_legend(override.aes = list(shape = NA)))
```
|
44,764,311 |
I am writing a Python script to find the latest zip files in a given directory. I just finished writing it, but it's taking a really long time (~45 min) to produce output on data that is >30 GB. Any tips on how I can improve the performance of my script so it runs faster? I am using Python 2.7 on Windows 10.
```
import os, glob
from datetime import datetime

def findZip_Dir_list(cwd):
    file_dirs = []
    for (dirname, dirs, files) in os.walk(cwd):
        for filename in files:
            if filename.endswith('.zip'):
                file_dirs.append(dirname)
    return file_dirs

cwd = os.getcwd()
zip_dirs = findZip_Dir_list(cwd)

new_dirs = []
for i in zip_dirs:
    if i not in new_dirs:
        new_dirs.append(i)

latest_zip = []
for j in new_dirs:
    list_of_files = glob.glob(j+'/*.zip')
    latest_zip.append(max(list_of_files, key=os.path.getctime))

for k in latest_zip:
    print k
```
Thanks in advance.
|
2017/06/26
|
[
"https://Stackoverflow.com/questions/44764311",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8056376/"
] |
The best and simplest idea is to use [PyPy](https://pypy.org/).
It is an alternative python interpreter that is very optimized. However, if you use modules written in C that are not from the standard library, you won't be able to use them directly with PyPy.
|
The compiler used in Python isn't the fastest (it isn't as close to the machine code as, for example, C), which is why C, C++, etc. are used for programs that require better performance. Unfortunately I don't think you can improve the speed short of getting a faster/better performing computer.
Hope this explains it :)
|
44,764,311 |
I am writing a Python script to find the latest zip files in a given directory. I just finished writing it, but it's taking a really long time (~45 min) to produce output on data that is >30 GB. Any tips on how I can improve the performance of my script so it runs faster? I am using Python 2.7 on Windows 10.
```
import os, glob
from datetime import datetime

def findZip_Dir_list(cwd):
    file_dirs = []
    for (dirname, dirs, files) in os.walk(cwd):
        for filename in files:
            if filename.endswith('.zip'):
                file_dirs.append(dirname)
    return file_dirs

cwd = os.getcwd()
zip_dirs = findZip_Dir_list(cwd)

new_dirs = []
for i in zip_dirs:
    if i not in new_dirs:
        new_dirs.append(i)

latest_zip = []
for j in new_dirs:
    list_of_files = glob.glob(j+'/*.zip')
    latest_zip.append(max(list_of_files, key=os.path.getctime))

for k in latest_zip:
    print k
```
Thanks in advance.
|
2017/06/26
|
[
"https://Stackoverflow.com/questions/44764311",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8056376/"
] |
Without seeing the code and the data it's working on, I can only guess, but if you only need to find the latest files, the running time should depend on the number of files in the directory, not their size. To get the last time of modification of a file, you can use `os.stat()` for example.
EDIT:
Ok, now that I see it, here are some things that could make it faster:
* in the function, make the file\_dirs a set instead of a list, that way you can avoid duplicate elements
* if you already found a zip file in a directory, you can break the inner loop in the os.walk()
* if there won't be any duplicates, you can use that set directly, and don't need to make the new\_dirs list (checking if an element is in a list is a O(n) operation, and that is what makes the code so slow)
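Putting those points together, a sketch of the improved script might look like this (directory names in the demo are purely illustrative):

```python
import glob
import os
import tempfile
import time

def latest_zips(root):
    # Collect each directory containing .zip files exactly once.
    # Set membership is O(1), unlike the O(n) list scan in the original,
    # and we break out of the inner loop on the first hit.
    zip_dirs = set()
    for dirname, dirs, files in os.walk(root):
        for filename in files:
            if filename.endswith('.zip'):
                zip_dirs.add(dirname)
                break  # one hit is enough for this directory
    # Newest zip per directory, by creation time.
    return [max(glob.glob(os.path.join(d, '*.zip')), key=os.path.getctime)
            for d in sorted(zip_dirs)]

# tiny demo on a throwaway directory
tmp = tempfile.mkdtemp()
sub = os.path.join(tmp, "reports")
os.makedirs(sub)
open(os.path.join(sub, "old.zip"), "w").close()
time.sleep(0.2)  # ensure distinct creation times
open(os.path.join(sub, "new.zip"), "w").close()
print(latest_zips(tmp))  # the newest zip in each directory
```

The file-size of the zips never matters here; only the number of directory entries walked does, which is why the deduplication fix dominates the speedup.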
|
The compiler used in python isn't the fastet (It isn't as close to the machine code compared too for example C), that's why C, C++, etc is used for programs that require better performance. Unfortunately I don't think you can improve the speed if not betting a faster/better performing computer.
Hope this explains it :)
|
421,771 |
Is there a formal mathematical proof that the solution to the [German Tank Problem](https://en.wikipedia.org/wiki/German_tank_problem) is a function of **only** the parameters *k* (number of observed samples) and *m* (maximum value among observed samples)? In other words, can one prove that the solution is independent of the other sample values besides the maximum value?
|
2019/08/12
|
[
"https://stats.stackexchange.com/questions/421771",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55245/"
] |
### Likelihood
Common problems in [probability theory](https://en.wikipedia.org/wiki/Probability_theory) refer to the probability of observations $x\_1, x\_2, ... , x\_n$ given a certain model and given the parameters (let's call them $\theta$) involved. For instance the probabilities for specific situations in card games or dice games are often very straightforward.
However, in many practical situations we are dealing with an *inverse* situation ([inferential statistics](https://en.wikipedia.org/wiki/Statistical_inference)). That is: the observation $x\_1, x\_2, ... , x\_k$ is given and now the model is *unknown*, or at least we do not know certain parameters $\theta$.
In these types of problems we often refer to a term called the likelihood of the parameters, $\mathcal{L}(\theta)$, which is a degree of belief in a specific parameter $\theta$ given observations $x\_1, x\_2, .. x\_k$. This term is expressed as being proportional to the probability of the observations $x\_1, x\_2, .. x\_k$ assuming that a model parameter $\theta$ would hypothetically be true. $$\mathcal{L}(\theta,x\_1, x\_2, .. x\_k) \propto \text{probability observations $x\_1, x\_2, .. x\_k$ given $\theta$ }$$
For a given parameter value $\theta$, the more probable a certain observation $x\_1, x\_2, .. x\_n$ is (relative to its probability under other parameter values), the more the observation supports this particular parameter (or the theory/hypothesis that assumes this parameter). A (relatively) high likelihood will reinforce our beliefs about that parameter value (there is a lot more to say about this [philosophically](https://en.wikipedia.org/wiki/Likelihood_function#Background_and_interpretation)).
---
### Likelihood in the German tank problem
Now for the German tank problem the likelihood function for a set of samples $x\_1, x\_2, .. x\_k$ is:
$$\mathcal{L}(\theta,x\_1, x\_2, .. x\_k ) = \Pr(x\_1, x\_2, .. x\_k, \theta) = \begin{cases} 0 &\text{if } \max(x\_1, x\_2, .. x\_k) > \theta \\ {{\theta}\choose{k}}^{-1} &\text{if } \max(x\_1, x\_2, .. x\_k) \leq \theta, \end{cases}$$
Whether you observe samples {1, 2, 10} or samples {8, 9, 10} should not matter *when the samples are considered from a uniform distribution* with parameter $\theta$. Both samples are equally likely with probability ${{\theta}\choose{3}}^{-1}$ and using the idea of likelihood the one sample does not tell more about the parameter $\theta$ than the other sample.
The high values {8, 9, 10} might make you think/believe that $\theta$ should be higher. But it is *only* the value {10} that truly gives you relevant information about the likelihood of $\theta$ (the value 10 tells you that $\theta$ will be ten or higher; the other values 8 and 9 do not contribute anything to this information).
---
### Fisher Neyman factorization theorem
[This theorem](https://en.wikipedia.org/wiki/Sufficient_statistic#Fisher%E2%80%93Neyman_factorization_theorem) tells you that a certain statistic $T(x\_1, x\_2, … , x\_k)$ (i.e. some function of the observations, like the mean, the median, or, as in the German tank problem, the maximum) is sufficient (contains all information) when you can factor the likelihood function so that the factor depending on the rest of the observations $x\_1, x\_2, … , x\_k$ does not depend on the parameter $\theta$, and the part that relates the data to the hypothetical parameter values depends on the data only through the statistic.
The case of the German tank problem is simple. You can see above that the entire expression for the Likelihood above is already only dependent on the statistic $\max(x\_1, x\_2, .. x\_k)$ and the rest of the values $x\_1, x\_2, .. x\_k$ does not matter.
---
### Little game as example
Let's say we play the following game repeatedly: $\theta$ is itself a random variable, drawn with equal probability to be either 100 or 110. Then we draw a sample $x\_1,x\_2,...,x\_k$.
We want to choose a strategy for guessing $\theta$, based on the observed $x\_1,x\_2,...,x\_k$ that maximizes our probability to have the right guess of $\theta$.
The proper strategy will be to choose 100 unless one of the numbers in the sample is >100.
We could be tempted to choose the parameter value 110 already when many of the $x\_1,x\_2,...,x\_k$ tend to be all high values close to hundred (but none exactly over hundred), but that would be wrong. The probability for such an observation will be larger when the true parameter value is 100 than when it is 110. So if we guess, in such situation, 100 as the parameter value, then we will be less likely to make a mistake (because the situation with these high values close to hundred, yet still below it, occurs more often in the case that the true value is 100 rather than the case that the true value is 110).
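The game is easy to check empirically; here is a small Monte Carlo sketch (my own illustration, not part of the original argument) that samples serial numbers without replacement and applies the "guess 110 only if the maximum exceeds 100" rule:

```python
import random

def play(trials=20000, k=5, seed=0):
    # theta is 100 or 110 with equal probability; serial numbers are
    # sampled without replacement from 1..theta; we guess 110 only
    # when the sample maximum exceeds 100, otherwise we guess 100.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        theta = rng.choice([100, 110])
        sample = rng.sample(range(1, theta + 1), k)
        wins += (110 if max(sample) > 100 else 100) == theta
    return wins / trials

accuracy = play()
print(accuracy)  # well above the 0.5 of blind guessing
```

With $k=5$ the max-based rule is right roughly 70% of the time: it is never wrong when $\theta=100$, and wrong when $\theta=110$ only if the whole sample happens to land at or below 100.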
|
You haven't presented a precise formulation of "the problem", so it's not exactly clear what you're asking to be proved. From a Bayesian perspective, the posterior probability does depend on all the data. However, each observation of a particular serial number will support that number the most. That is, given any observation $n$, the odds ratio between posterior and prior will be greater for the hypothesis "the actual number of tanks is $n$" than it will be for "the actual number of tanks is [number other than $n$]". Thus, if we start with a uniform prior, then $n$ will have the highest posterior after seeing that observation.
Consider a case where we have the data point $13$, and hypotheses $N=10,13,15$. Obviously, the posterior for $N=10$ is zero. And our posteriors for $N=13,15$ will be larger than their prior. The reason for this is that in Bayesian reasoning, absence of evidence *is* evidence of absence. Any time we have an opportunity where we *could have* made an observation that would have decreased our probability, but don't, the probability increases. Since we *could* have seen $16$, which would have set our posteriors for $N=13,15$ to zero, the fact that we didn't see it means that we should increase our posteriors for $N=13,15$. But note that the smaller the number, the more numbers we could have seen that would have excluded that number. For $N=13$, we would have rejected that hypothesis after seeing $14,15,16,...$. But for $N=15$, we would have needed at least $16$ to reject the hypothesis. Since the hypothesis $N=13$ is more falsifiable than $N=15$, the fact that we *didn't* falsify $N=13$ is more evidence for $N=13$, than not falsifying $N=15$ is evidence for $N=15$.
So every time we see a data point, it sets the posterior of everything below it to zero, and increases the posterior of everything else, with smaller numbers getting the largest boost. Thus, the number that gets the overall largest boost will be the smallest number whose posterior wasn't set to zero, i.e. the maximum value of the observations.
Numbers less than the maximum affect *how much* larger a boost the maximum gets, but it doesn't affect the general trend of the maximum getting largest boost. Consider the above example, where we've already seen $13$. If the next number we see is $5$, what effect will that have? It helps out $5$ more than $6$, but both numbers have already been rejected, so that's not relevant. It helps out $13$ more than $15$, but $13$ already has been helped out more than $15$, so that doesn't affect which number has been helped out the most.
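To make the posterior bookkeeping concrete, here is a small sketch (hypothesis set and observation chosen purely for illustration) computing exact posteriors under a uniform prior:

```python
from math import comb

def posterior(hypotheses, observations):
    # Uniform prior over the hypothesised numbers of tanks N.
    # The likelihood of an unordered sample of k distinct serials
    # from 1..N is 1/C(N, k), and zero whenever max(sample) > N.
    k, m = len(observations), max(observations)
    w = {N: (1 / comb(N, k) if N >= m else 0.0) for N in hypotheses}
    total = sum(w.values())
    return {N: v / total for N, v in w.items()}

p = posterior([10, 13, 15], [13])
print(p)  # N=10 is excluded; N=13 gets a bigger boost than N=15
```

As described above, seeing 13 zeroes out $N=10$ and boosts every surviving hypothesis, with the smallest survivor ($N=13$) boosted the most.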
|
17,196,402 |
To give some background information, I am processing a saved file, and after using a regular expression to split the file into its component objects, I then need to process the object's data based on which type of object it is.
My current thought is to use parallelism to get a little bit of a performance gain as loading each object is independent of each other. So I was going to define a `LoadObject` function accepting a `std::string` for each type of object I'm going to be handling and then calling `std::async` as follows:
```
void LoadFromFile( const std::string& szFileName )
{
    static const std::regex regexObject( "=== ([^=]+) ===\\n((?:.|\\n)*)\\n=== END \\1 ===", std::regex_constants::ECMAScript | std::regex_constants::optimize );

    std::ifstream inFile( szFileName );
    inFile.exceptions( std::ifstream::failbit | std::ifstream::badbit );
    std::string szFileData( (std::istreambuf_iterator<char>(inFile)), (std::istreambuf_iterator<char>()) );
    inFile.close();

    std::vector<std::future<void>> vecFutures;

    for( std::sregex_iterator itObject( szFileData.cbegin(), szFileData.cend(), regexObject ), end; itObject != end; ++itObject )
    {
        // Determine what type of object we're loading:
        if( (*itObject)[1] == "Type1" )
        {
            vecFutures.emplace_back( std::async( LoadType1, (*itObject)[2].str() ) );
        }
        else if( (*itObject)[1] == "Type2" )
        {
            vecFutures.emplace_back( std::async( LoadType2, (*itObject)[2].str() ) );
        }
        else
        {
            throw std::runtime_error( "Unexpected type encountered whilst reading data file." );
        }
    }

    // Make sure all our tasks completed:
    for( auto& future : vecFutures )
    {
        future.get();
    }
}
```
Note that there will be more than 2 types in the application (this was just a short example) and potentially thousands of objects in the file to be read.
I am aware that creating too many threads is often a bad thing for performance when it exceeds the maximum hardware concurrency due to context switches, but if my memory serves me correctly the C++ runtime is supposed to monitor the number of threads created and schedule `std::async` appropriately (I believe in Microsoft's case their ConcRT library is responsible for this?), so the above code may still result in a performance improvement?
Thanks in advance!
|
2013/06/19
|
[
"https://Stackoverflow.com/questions/17196402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/860566/"
] |
>
> the C++ runtime is supposed to monitor the number of threads created and schedule std::async appropriately
>
>
>
No. If the asynchronous tasks are in fact run asynchronously (rather than deferred) then all that's required is that they are run as if on a new thread. It is perfectly valid for a new thread to be created and started for every task, without any regard for the hardware's limited capacity for parallelism.
There's a note:
>
>
> >
> > [ Note: If this policy is specified together with other policies, such as when using a policy value of launch::async | launch::deferred,
> > implementations should defer invocation or the selection of the policy
> > when no more concurrency can be effectively exploited. —end note ]
> >
> >
> >
>
>
>
However, this is non-normative and in any case it indicates that once no more concurrency can be exploited the tasks may become deferred, and therefore get executed when someone waits on the result, rather than still being asynchronous and running immediately after one of the previous asynchronous tasks is finished, as would be desirable for maximum parallelism.
That is, if we have 10 long running tasks and the implementation can only execute 4 in parallel, then the first 4 will be asynchronous and then the last 6 may be deferred. Waiting on the futures in sequence would execute the deferred tasks on a single thread in sequence, eliminating parallel execution for those tasks.
The note also says that instead of deferring invocation, the selection of the policy may be deferred. That is, the function may still run asynchronously, but that decision may be delayed until, say, one of the earlier tasks completes, freeing up a core for a new task. But again, this is not required, the note is non-normative, and as far as I know Microsoft's implementation is the only one that behaves this way. When I looked at another implementation, libc++, it simply ignored this note altogether, so that using either the `std::launch::async` or `std::launch::any` policy results in asynchronous execution on a new thread.
>
> (I believe in Microsoft's case their ConcRT library is responsible for this?)
>
>
>
Microsoft's implementation does indeed behave as you describe; however, this is not required, and a portable program cannot rely on that behavior.
One way to portably limit how many threads are actually running is to use something like a semaphore:
```
#include <future>
#include <vector>
#include <mutex>
#include <condition_variable>
#include <cstdio>
// a semaphore class
//
// All threads can wait on this object. When a waiting thread
// is woken up, it does its work and then notifies another waiting thread.
// In this way only n threads will be doing work at any time.
//
class Semaphore {
private:
std::mutex m;
std::condition_variable cv;
unsigned int count;
public:
Semaphore(int n) : count(n) {}
void notify() {
std::unique_lock<std::mutex> l(m);
++count;
cv.notify_one();
}
void wait() {
std::unique_lock<std::mutex> l(m);
cv.wait(l, [this]{ return count!=0; });
--count;
}
};
// an RAII class to handle waiting and notifying the next thread
// Work is done between when the object is created and destroyed
class Semaphore_waiter_notifier {
Semaphore &s;
public:
Semaphore_waiter_notifier(Semaphore &s) : s{s} { s.wait(); }
~Semaphore_waiter_notifier() { s.notify(); }
};
// some inefficient work for our threads to do
int fib(int n) {
if (n<2) return n;
return fib(n-1) + fib(n-2);
}
// for_each algorithm for iterating over a container but also
// making an integer index available.
//
// f is called like f(index, element)
template<typename Container, typename F>
F for_each(Container &c, F f) {
typename Container::size_type i = 0;
for (auto &e : c)
f(i++, e);
return f;
}
// global semaphore so that lambdas don't have to capture it
Semaphore thread_limiter(4);
int main() {
std::vector<int> input(100);
for_each(input, [](int i, int &e) { e = (i%10) + 35; });
std::vector<std::future<int>> output;
for_each(input, [&output](int i, int e) {
output.push_back(std::async(std::launch::async, [] (int task, int n) -> int {
Semaphore_waiter_notifier w(thread_limiter);
std::printf("Starting task %d\n", task);
int res = fib(n);
std::printf("\t\t\t\t\t\tTask %d finished\n", task);
return res;
}, i, e));
});
for_each(output, [](int i, std::future<int> &e) {
std::printf("\t\t\tWaiting on task %d\n", i);
int res = e.get();
std::printf("\t\t\t\t\t\t\t\t\tTask %d result: %d\n", i, res);
});
}
```
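One refinement, in case you want the limit to track the machine rather than the hard-coded 4 used above: `std::thread::hardware_concurrency()` reports the number of concurrent threads the hardware supports, but it is allowed to return 0 when the value is not computable, so clamp it. A small sketch (the helper name is invented here):

```cpp
#include <algorithm>
#include <thread>

// Size the semaphore from the hardware rather than a magic number.
// hardware_concurrency() may return 0 if the value is not computable,
// so fall back to a limit of 1 in that case.
inline unsigned default_thread_limit()
{
    return std::max(1u, std::thread::hardware_concurrency());
}
```

You would then construct the semaphore as `Semaphore thread_limiter(default_thread_limit());`.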
|
Posting to an old thread here but an excellent book is [https://www.amazon.com/C-Concurrency-Action-Practical-Multithreading/dp/1933988770](https://rads.stackoverflow.com/amzn/click/com/1933988770)
However, GPU / CPU cooperation and concurrency may be where the bigger performance benefits can be found based on recent technology trends.
|
41,874 |
At a job fair I was talking to a recruiting agent and he asked what I am getting my degree in. I said computer science with a specialization in networking and communications. He asked what made me go into that specialization.
The reason is I was already in the computer science program and had taken some networking courses and done well in them and found them enjoyable. I basically already had taken 90% of the courses required for the specialization so in a sense it happened organically. What would be a good answer? It seems like the general answer to such a question would be "because I enjoy it and am good at it" but this wouldn't win many points.
|
2015/02/24
|
[
"https://workplace.stackexchange.com/questions/41874",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/32200/"
] |
You said:
>
> I basically already had taken 90% of the courses required for the specialization so in a sense it happened organically.
>
>
>
This is your answer. It's honest, it's admirable, and it's pretty well put apart from the jumbling of the first four words.
You say your answer would be "because I enjoy it, and am good at it", but the above quote already covers that implicitly and obviously. At the same time it conveys that the field seems to genuinely be one of your talents, and it does so without even a whiff of bragging.
I should be asking *you* for tips.
|
Actually, "because I enjoy it and am good at it" isn't a bad place to start, but you can build that out more. As you studied this area more you realized that you really enjoyed it and were good at it, which led you to take more courses and ultimately to specialize. That shows that it's an ongoing interest, while "because I'd done 90% of it already" sounds more "opportunistic" -- you did this because it was there, but it might not be something you'll want to be doing in a year or two.
You don't say if you're an undergraduate or graduate student, but undergrads especially have very little to distinguish themselves in *coursework* -- you're all taking mostly the same courses along the same trajectory. Pursuing a specialization is good, but when we interview students what I and my peers (technical people, not HR people) are mostly looking for, after basic qualifications, is that passion and enthusiasm that tells us that you actually *want* to be doing this. That turns out to be a pretty good predictor for success.
While it's not exactly your situation, I've used a similar approach to explain a career shift. When asked about it I say something like: "I was a perfectly capable X, but as I spent more time doing Y I found that I was really *good* at it and really *enjoyed* it, so instead of being a capable X I set out to be an excellent Y."
|
53,138,975 |
I am sorry if this has already been covered before. I know how to do this in C and Java but not C++. Without using a pre-existing class such as std::vector, how would you increase the size of an array given the code below?
The array expansion and assignment to the array takes place in push() noted with the all caps comment.
EDIT: As I have mentioned in comments below this is a question regarding manually reallocating arrays rather than using std::vector or "Dynamic Arrays."
Line.h
```
#include <iostream>
#include "Point.h"
using namespace std;
class Line {
public:
Line();
virtual ~Line();
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const Point& p);
private:
unsigned int index; // size of "points" array
Point* points;
};
```
Main.cpp
```
#include <iostream>
#include "Point.h"
#include "Line.h"
using namespace std;
int main() {
int x, y;
int size; // Some user defined size for the array
Line line;
Point a[size]; // Some points that are already filled
// Push the data in a[] to the variable "line"
for(int i = 0; i < size; i++){
// Increase array size of Point* points in variable line and add a[i] to the end of the array
line.push(a[i]);
}
return 0;
}
```
|
2018/11/04
|
[
"https://Stackoverflow.com/questions/53138975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4397145/"
] |
The simple answer is that you should always use std::vector in this case. However, it might be useful to explain just why that is. So let's consider how you would implement this without std::vector, to see why you would want to use it:
```
// Naive approach
Line::push(const Point& p)
{
Point* new_points = new Point[index + 1];
std::copy(std::make_move_iterator(points), std::make_move_iterator(points+index), new_points);
new_points[index] = p;
delete[] points;
points = new_points;
index += 1;
}
```
This approach has many problems. We are forced to reallocate and move the entire array every time an entry is inserted. A vector, however, pre-allocates a reserve and uses space out of that reserve for each insert, only reallocating once the reserve is exhausted. This means a vector will far outperform your code, as far less time is spent allocating and moving data unnecessarily. Next is the issue of exceptions: this implementation has no exception guarantees, whereas std::vector provides you with a strong exception guarantee: <https://en.wikipedia.org/wiki/Exception_safety>. Implementing a strong exception guarantee for your class is non-trivial, but you would have got it automatically had you implemented push in terms of std::vector, like so:
```
Line::push(const Point& p)
{
points.push_back(p);
}
```
There are also other, more subtle problems with your approach: your class does not define copy or assignment operators, so it gets compiler-generated shallow-copy versions, which means that if someone copies your class the allocated members will be deleted twice. To resolve this you need to follow the rule of three pre-C++11 and the rule of five from C++11 onwards: <https://en.wikipedia.org/wiki/Rule_of_three_(C%2B%2B_programming)>. Had you used a vector, none of this would be needed: you would benefit from the rule of zero and could rely on the compiler-generated defaults: <https://blog.rmf.io/cxx11/rule-of-zero>
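To make the rule-of-zero point concrete, here is a sketch of the same class written on top of std::vector (member names are mine, and `Point` is a stand-in for the asker's class). Copying it just works, because std::vector owns the only resource:

```cpp
#include <cstddef>
#include <vector>

// Rule-of-zero sketch: the only resource is owned by std::vector, so the
// compiler-generated copy, move and destructor are already correct and no
// user-declared special member functions are needed.
struct Point { int x = 0, y = 0; };

class Line {
public:
    void push(const Point& p) { points_.push_back(p); }
    std::size_t size() const { return points_.size(); }
private:
    std::vector<Point> points_;
};
```

Copying a `Line` now performs a correct deep copy with no extra code, which is exactly what the hand-rolled version gets wrong.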
|
You basically have no way but to allocate a new array, copy the existing values into it, and `delete[]` the old one. That's why vector reallocates by a multiplicative factor (say, each reallocation doubles the capacity). This is one of the reasons you want to use the standard library structures instead of reimplementing them.
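To illustrate the multiplicative-growth point, here is a minimal sketch (int elements, and all names invented here) where the capacity doubles whenever it is exhausted. With this policy, N pushes cost O(N) element copies in total instead of the O(N²) you get from growing by one each time:

```cpp
#include <cstddef>

// Minimal dynamic array with doubling growth. Illustration only,
// not a replacement for std::vector (no exception safety, int only).
class IntArray {
    int* data_ = nullptr;
    std::size_t size_ = 0, cap_ = 0;
public:
    IntArray() = default;
    ~IntArray() { delete[] data_; }
    IntArray(const IntArray&) = delete;            // rule of three:
    IntArray& operator=(const IntArray&) = delete; // forbid shallow copies
    std::size_t size() const { return size_; }
    std::size_t capacity() const { return cap_; }
    int operator[](std::size_t i) const { return data_[i]; }
    void push_back(int v) {
        if (size_ == cap_) {                       // reserve exhausted:
            std::size_t new_cap = cap_ ? cap_ * 2 : 1;
            int* nd = new int[new_cap];
            for (std::size_t i = 0; i < size_; ++i) nd[i] = data_[i];
            delete[] data_;
            data_ = nd;
            cap_ = new_cap;
        }
        data_[size_++] = v;                        // usually no allocation
    }
};
```

Most calls to `push_back` touch no memory allocator at all; only the occasional push pays for a reallocation, which is the amortized-constant-time behavior std::vector gives you for free.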
|
53,138,975 |
I am sorry if this has already been covered before. I know how to do this in C and Java but not C++. Without using a pre-existing class such as std::vector, how would you increase the size of an array given the code below?
The array expansion and assignment to the array takes place in push() noted with the all caps comment.
EDIT: As I have mentioned in comments below this is a question regarding manually reallocating arrays rather than using std::vector or "Dynamic Arrays."
Line.h
```
#include <iostream>
#include "Point.h"
using namespace std;
class Line {
public:
Line();
virtual ~Line();
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const Point& p);
private:
unsigned int index; // size of "points" array
Point* points;
};
```
Main.cpp
```
#include <iostream>
#include "Point.h"
#include "Line.h"
using namespace std;
int main() {
int x, y;
int size; // Some user defined size for the array
Line line;
Point a[size]; // Some points that are already filled
// Push the data in a[] to the variable "line"
for(int i = 0; i < size; i++){
// Increase array size of Point* points in variable line and add a[i] to the end of the array
line.push(a[i]);
}
return 0;
}
```
|
2018/11/04
|
[
"https://Stackoverflow.com/questions/53138975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4397145/"
] |
You basically have no way but to allocate a new array, copy the existing values into it, and `delete[]` the old one. That's why vector reallocates by a multiplicative factor (say, each reallocation doubles the capacity). This is one of the reasons you want to use the standard library structures instead of reimplementing them.
|
**Keep It Simple**
In my opinion, in this case, it's better to use a Linked-List of `CPoint` in `CLine`:
```
struct CPoint
{
int m_x = 0, m_y = 0;
CPoint * m_next = nullptr;
};
class CLine
{
public:
CLine() {};
virtual ~CLine()
{
// Free Linked-List:
while (m_points != nullptr) {
m_current = m_points->m_next;
delete m_points;
m_points = m_current;
}
};
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const CPoint& p)
{
m_current = (((m_points == nullptr) ? (m_points) : (m_current->m_next)) = new CPoint);
m_current->m_x = p.m_x;
m_current->m_y = p.m_y;
m_index++;
};
private:
unsigned int m_index = 0; // size of "points" array
CPoint * m_points = nullptr, * m_current = nullptr;
};
```
.
Or, even better with smart pointers:
```
#include <memory>
struct CPoint
{
int m_x = 0, m_y = 0;
std::shared_ptr<CPoint> m_next;
};
class CLine
{
public:
CLine() {};
virtual ~CLine() {}
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const CPoint& p)
{
m_current = (((m_points == nullptr) ? (m_points) : (m_current->m_next)) = std::make_shared<CPoint>());
m_current->m_x = p.m_x;
m_current->m_y = p.m_y;
m_index++;
};
private:
unsigned int m_index = 0; // size of "points" array
std::shared_ptr<CPoint> m_points, m_current;
};
```
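One caveat worth knowing about the shared_ptr version: destroying a long chain runs the node destructor recursively (each node destroys the next), which can overflow the stack for lists with many thousands of elements. Releasing the nodes in an explicit loop avoids the recursion. A sketch, with `Node` standing in for `CPoint`:

```cpp
#include <memory>

// Each node owns the next via shared_ptr, so ~Node would normally
// recurse down the whole chain when the head is released.
struct Node {
    int value = 0;
    std::shared_ptr<Node> next;
};

// Detach the tail before releasing each node, so every destructor
// runs with next == nullptr and no recursion occurs.
inline void destroy_iteratively(std::shared_ptr<Node>& head)
{
    while (head) {
        std::shared_ptr<Node> next = std::move(head->next);
        head = std::move(next); // old head dies here, tail already detached
    }
}
```

The same idea applies to the raw-pointer version, where the destructor above already iterates correctly.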
|
53,138,975 |
I am sorry if this has already been covered before. I know how to do this in C and Java but not C++. Without using a pre-existing class such as std::vector, how would you increase the size of an array given the code below?
The array expansion and assignment to the array takes place in push() noted with the all caps comment.
EDIT: As I have mentioned in comments below this is a question regarding manually reallocating arrays rather than using std::vector or "Dynamic Arrays."
Line.h
```
#include <iostream>
#include "Point.h"
using namespace std;
class Line {
public:
Line();
virtual ~Line();
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const Point& p);
private:
unsigned int index; // size of "points" array
Point* points;
};
```
Main.cpp
```
#include <iostream>
#include "Point.h"
#include "Line.h"
using namespace std;
int main() {
int x, y;
int size; // Some user defined size for the array
Line line;
Point a[size]; // Some points that are already filled
// Push the data in a[] to the variable "line"
for(int i = 0; i < size; i++){
// Increase array size of Point* points in variable line and add a[i] to the end of the array
line.push(a[i]);
}
return 0;
}
```
|
2018/11/04
|
[
"https://Stackoverflow.com/questions/53138975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4397145/"
] |
The simple answer is that you should always use std::vector in this case. However, it might be useful to explain just why that is. So let's consider how you would implement this without std::vector, to see why you would want to use it:
```
// Naive approach
Line::push(const Point& p)
{
Point* new_points = new Point[index + 1];
std::copy(std::make_move_iterator(points), std::make_move_iterator(points+index), new_points);
new_points[index] = p;
delete[] points;
points = new_points;
index += 1;
}
```
This approach has many problems. We are forced to reallocate and move the entire array every time an entry is inserted. A vector, however, pre-allocates a reserve and uses space out of that reserve for each insert, only reallocating once the reserve is exhausted. This means a vector will far outperform your code, as far less time is spent allocating and moving data unnecessarily. Next is the issue of exceptions: this implementation has no exception guarantees, whereas std::vector provides you with a strong exception guarantee: <https://en.wikipedia.org/wiki/Exception_safety>. Implementing a strong exception guarantee for your class is non-trivial, but you would have got it automatically had you implemented push in terms of std::vector, like so:
```
Line::push(const Point& p)
{
points.push_back(p);
}
```
There are also other, more subtle problems with your approach: your class does not define copy or assignment operators, so it gets compiler-generated shallow-copy versions, which means that if someone copies your class the allocated members will be deleted twice. To resolve this you need to follow the rule of three pre-C++11 and the rule of five from C++11 onwards: <https://en.wikipedia.org/wiki/Rule_of_three_(C%2B%2B_programming)>. Had you used a vector, none of this would be needed: you would benefit from the rule of zero and could rely on the compiler-generated defaults: <https://blog.rmf.io/cxx11/rule-of-zero>
|
Essentially the only way is to use a *dynamic array* (one created using `new[]`) and to create an entirely new *dynamic array* and copy (or move) the objects from the old *array* to the new one.
Something like this:
```
class Line {
public:
Line(): index(0), points(nullptr) {} // initialize
virtual ~Line() { delete[] points; } // Clean up!
void push(const Point& p)
{
// create new array one element larger than before
auto new_points = new Point[index + 1];
// copy old elements to new array (if any)
for(unsigned int i = 0; i < index; ++i)
new_points[i] = points[i];
new_points[index] = p; // then add our new Point to the end
++index; // increase the recorded number of elements
delete[] points; // out with the old
points = new_points; // in with the new
}
private:
unsigned int index; // size of "points" array
Point* points;
};
```
But this approach is very inefficient. To do this well is quite complex. The main problems with doing things this way are:
* Exception safety - avoiding a memory leak if an exception is thrown.
* Allocation - avoiding having to reallocate (and re-copy) every single time.
* Move semantics - taking advantage of some objects ability to be *moved* much more efficiently than they are *copied*.
A (slightly) better version:
```
class Line {
public:
Line(): index(0) {} // initialize
virtual ~Line() { } // No need to clean up because of `std::unique_ptr`
void push(const Point& p)
{
// create new array one element larger than before
auto new_points = std::unique_ptr<Point[]>(new Point[index + 1]);
// first add our new Point to the end (in case of an exception)
new_points[index] = p;
// then copy/move old elements to new array (if any)
for(unsigned int i = 0; i < index; ++i)
new_points[i] = std::move(points[i]); // try to move else copy
++index; // increase the recorded number of elements
std::swap(points, new_points); // swap the pointers
}
private:
unsigned int index; // size of "points" array
std::unique_ptr<Point[]> points; // Exception safer
};
```
That takes care of *exception safety* and (to some degree - but not entirely) *move semantics*. However it must be pointed out that *exception safety* is only going to be **complete** if the elements stored in the array (type `Point`) are themselves *exception safe* when being *copied* or *moved*.
But this does not deal with efficient allocation. A `std::vector` will *over allocate* so it doesn't have to do it with every new element. This code also misses a few other tricks that a `std::vector` would employ (like allocating **uninitialized** memory and constructing/destructing the elements manually as and when they are needed/discarded).
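As a side note, the element-by-element move loop in the answer can also be expressed with the `std::move` algorithm from `<algorithm>`, which move-assigns every element of the source range into the destination (falling back to copying for types without move support). A sketch using `std::string` elements to make the moves observable; `grow_by_one` is an invented helper, not part of the answer's class:

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>
#include <string>

// Allocate a buffer one element larger and move the old contents across
// with the three-argument std::move algorithm (first, last, d_first).
inline std::unique_ptr<std::string[]> grow_by_one(
    std::unique_ptr<std::string[]>& old, std::size_t n)
{
    std::unique_ptr<std::string[]> fresh(new std::string[n + 1]);
    std::move(old.get(), old.get() + n, fresh.get());
    return fresh;
}
```

This keeps the same semantics as the hand-written loop while making the intent (move, don't copy) explicit in one line.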
|
53,138,975 |
I am sorry if this has already been covered before. I know how to do this in C and Java but not C++. Without using a pre-existing class such as std::vector, how would you increase the size of an array given the code below?
The array expansion and assignment to the array takes place in push() noted with the all caps comment.
EDIT: As I have mentioned in comments below this is a question regarding manually reallocating arrays rather than using std::vector or "Dynamic Arrays."
Line.h
```
#include <iostream>
#include "Point.h"
using namespace std;
class Line {
public:
Line();
virtual ~Line();
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const Point& p);
private:
unsigned int index; // size of "points" array
Point* points;
};
```
Main.cpp
```
#include <iostream>
#include "Point.h"
#include "Line.h"
using namespace std;
int main() {
int x, y;
int size; // Some user defined size for the array
Line line;
Point a[size]; // Some points that are already filled
// Push the data in a[] to the variable "line"
for(int i = 0; i < size; i++){
// Increase array size of Point* points in variable line and add a[i] to the end of the array
line.push(a[i]);
}
return 0;
}
```
|
2018/11/04
|
[
"https://Stackoverflow.com/questions/53138975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4397145/"
] |
The simple answer is that you should always use std::vector in this case. However, it might be useful to explain just why that is. So let's consider how you would implement this without std::vector, to see why you would want to use it:
```
// Naive approach
Line::push(const Point& p)
{
Point* new_points = new Point[index + 1];
std::copy(std::make_move_iterator(points), std::make_move_iterator(points+index), new_points);
new_points[index] = p;
delete[] points;
points = new_points;
index += 1;
}
```
This approach has many problems. We are forced to reallocate and move the entire array every time an entry is inserted. A vector, however, pre-allocates a reserve and uses space out of that reserve for each insert, only reallocating once the reserve is exhausted. This means a vector will far outperform your code, as far less time is spent allocating and moving data unnecessarily. Next is the issue of exceptions: this implementation has no exception guarantees, whereas std::vector provides you with a strong exception guarantee: <https://en.wikipedia.org/wiki/Exception_safety>. Implementing a strong exception guarantee for your class is non-trivial, but you would have got it automatically had you implemented push in terms of std::vector, like so:
```
Line::push(const Point& p)
{
points.push_back(p);
}
```
There are also other, more subtle problems with your approach: your class does not define copy or assignment operators, so it gets compiler-generated shallow-copy versions, which means that if someone copies your class the allocated members will be deleted twice. To resolve this you need to follow the rule of three pre-C++11 and the rule of five from C++11 onwards: <https://en.wikipedia.org/wiki/Rule_of_three_(C%2B%2B_programming)>. Had you used a vector, none of this would be needed: you would benefit from the rule of zero and could rely on the compiler-generated defaults: <https://blog.rmf.io/cxx11/rule-of-zero>
|
**Keep It Simple**
In my opinion, in this case, it's better to use a Linked-List of `CPoint` in `CLine`:
```
struct CPoint
{
int m_x = 0, m_y = 0;
CPoint * m_next = nullptr;
};
class CLine
{
public:
CLine() {};
virtual ~CLine()
{
// Free Linked-List:
while (m_points != nullptr) {
m_current = m_points->m_next;
delete m_points;
m_points = m_current;
}
};
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const CPoint& p)
{
m_current = (((m_points == nullptr) ? (m_points) : (m_current->m_next)) = new CPoint);
m_current->m_x = p.m_x;
m_current->m_y = p.m_y;
m_index++;
};
private:
unsigned int m_index = 0; // size of "points" array
CPoint * m_points = nullptr, * m_current = nullptr;
};
```
.
Or, even better with smart pointers:
```
#include <memory>
struct CPoint
{
int m_x = 0, m_y = 0;
std::shared_ptr<CPoint> m_next;
};
class CLine
{
public:
CLine() {};
virtual ~CLine() {}
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const CPoint& p)
{
m_current = (((m_points == nullptr) ? (m_points) : (m_current->m_next)) = std::make_shared<CPoint>());
m_current->m_x = p.m_x;
m_current->m_y = p.m_y;
m_index++;
};
private:
unsigned int m_index = 0; // size of "points" array
std::shared_ptr<CPoint> m_points, m_current;
};
```
|
53,138,975 |
I am sorry if this has already been covered before. I know how to do this in C and Java but not C++. Without using a pre-existing class such as std::vector, how would you increase the size of an array given the code below?
The array expansion and assignment to the array takes place in push() noted with the all caps comment.
EDIT: As I have mentioned in comments below this is a question regarding manually reallocating arrays rather than using std::vector or "Dynamic Arrays."
Line.h
```
#include <iostream>
#include "Point.h"
using namespace std;
class Line {
public:
Line();
virtual ~Line();
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const Point& p);
private:
unsigned int index; // size of "points" array
Point* points;
};
```
Main.cpp
```
#include <iostream>
#include "Point.h"
#include "Line.h"
using namespace std;
int main() {
int x, y;
int size; // Some user defined size for the array
Line line;
Point a[size]; // Some points that are already filled
// Push the data in a[] to the variable "line"
for(int i = 0; i < size; i++){
// Increase array size of Point* points in variable line and add a[i] to the end of the array
line.push(a[i]);
}
return 0;
}
```
|
2018/11/04
|
[
"https://Stackoverflow.com/questions/53138975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4397145/"
] |
Essentially the only way is to use a *dynamic array* (one created using `new[]`) and to create an entirely new *dynamic array* and copy (or move) the objects from the old *array* to the new one.
Something like this:
```
class Line {
public:
Line(): index(0), points(nullptr) {} // initialize
virtual ~Line() { delete[] points; } // Clean up!
void push(const Point& p)
{
// create new array one element larger than before
auto new_points = new Point[index + 1];
// copy old elements to new array (if any)
for(unsigned int i = 0; i < index; ++i)
new_points[i] = points[i];
new_points[index] = p; // then add our new Point to the end
++index; // increase the recorded number of elements
delete[] points; // out with the old
points = new_points; // in with the new
}
private:
unsigned int index; // size of "points" array
Point* points;
};
```
But this approach is very inefficient. To do this well is quite complex. The main problems with doing things this way are:
* Exception safety - avoiding a memory leak if an exception is thrown.
* Allocation - avoiding having to reallocate (and re-copy) every single time.
* Move semantics - taking advantage of some objects ability to be *moved* much more efficiently than they are *copied*.
A (slightly) better version:
```
class Line {
public:
Line(): index(0) {} // initialize
virtual ~Line() { } // No need to clean up because of `std::unique_ptr`
void push(const Point& p)
{
// create new array one element larger than before
auto new_points = std::unique_ptr<Point[]>(new Point[index + 1]);
// first add our new Point to the end (in case of an exception)
new_points[index] = p;
// then copy/move old elements to new array (if any)
for(unsigned int i = 0; i < index; ++i)
new_points[i] = std::move(points[i]); // try to move else copy
++index; // increase the recorded number of elements
std::swap(points, new_points); // swap the pointers
}
private:
unsigned int index; // size of "points" array
std::unique_ptr<Point[]> points; // Exception safer
};
```
That takes care of *exception safety* and (to some degree - but not entirely) *move semantics*. However it must be pointed out that *exception safety* is only going to be **complete** if the elements stored in the array (type `Point`) are themselves *exception safe* when being *copied* or *moved*.
But this does not deal with efficient allocation. A `std::vector` will *over allocate* so it doesn't have to do it with every new element. This code also misses a few other tricks that a `std::vector` would employ (like allocating **uninitialized** memory and constructing/destructing the elements manually as and when they are needed/discarded).
|
**Keep It Simple**
In my opinion, in this case, it's better to use a Linked-List of `CPoint` in `CLine`:
```
struct CPoint
{
int m_x = 0, m_y = 0;
CPoint * m_next = nullptr;
};
class CLine
{
public:
CLine() {};
virtual ~CLine()
{
// Free Linked-List:
while (m_points != nullptr) {
m_current = m_points->m_next;
delete m_points;
m_points = m_current;
}
};
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const CPoint& p)
{
m_current = (((m_points == nullptr) ? (m_points) : (m_current->m_next)) = new CPoint);
m_current->m_x = p.m_x;
m_current->m_y = p.m_y;
m_index++;
};
private:
unsigned int m_index = 0; // size of "points" array
CPoint * m_points = nullptr, * m_current = nullptr;
};
```
.
Or, even better with smart pointers:
```
#include <memory>
struct CPoint
{
int m_x = 0, m_y = 0;
std::shared_ptr<CPoint> m_next;
};
class CLine
{
public:
CLine() {};
virtual ~CLine() {}
// TAKE IN NEW POINT, INCREASE THE ARRAY SIZE AND ADD NEW POINT TO THE END OF THE ARRAY
void push(const CPoint& p)
{
m_current = (((m_points == nullptr) ? (m_points) : (m_current->m_next)) = std::make_shared<CPoint>());
m_current->m_x = p.m_x;
m_current->m_y = p.m_y;
m_index++;
};
private:
unsigned int m_index = 0; // size of "points" array
std::shared_ptr<CPoint> m_points, m_current;
};
```
|
1,443,146 |
We're starting a new project in Python with a few proprietary algorithms and sensitive bits of logic that we'd like to keep private. We also will have a few outsiders (select members of the public) working on the code. We cannot grant the outsiders access to the small, private bits of code, but we'd like a public version to work well enough for them.
Say that our project, Foo, has a module, `bar`, with one function, `get_sauce()`. What really happens in `get_sauce()` is secret, but we want a public version of `get_sauce()` to return an acceptable, albeit incorrect, result.
We also run our own Subversion server so we have total control over who can access what.
**Symlinks**
My first thought was symlinking — instead of `bar.py`, provide `bar_public.py` to everybody and `bar_private.py` to internal developers only. Unfortunately, creating symlinks is tedious, manual work — especially when there are really going to be about two dozen of these private modules.
More importantly, it makes management of the Subversion authz file difficult, since for each module we want to protect an exception must be added on the server. Someone might forget to do this and accidentally check in secrets... Then the module is in the repo and we have to rebuild the repository without it and hope that an outsider didn't download it in the meantime.
**Multiple repositories**
The next thought was to have two repositories:
```
private
└── trunk/
├── __init__.py
└── foo/
├── __init__.py
└── bar.py
public
└── trunk/
├── __init__.py
└── foo/
├── __init__.py
├── bar.py
├── baz.py
└── quux.py
```
The idea is that only internal developers will be able to checkout both `private/` and `public/`. Internal developers will set their `PYTHONPATH=private/trunk:public/trunk`, but everyone else will just set `PYTHONPATH=public/trunk`. Then, both insiders and outsiders can `from foo import bar` and get the right module, right?
Let's try this:
```
% PYTHONPATH=private/trunk:public/trunk python
Python 2.5.1
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo.bar
>>> foo.bar.sauce()
'a private bar'
>>> import foo.quux
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named quux
```
I'm not a Python expert, but it seems that Python has already made up its mind about module `foo` and searches relative to that:
```
>>> foo
<module 'foo' from '/path/to/private/trunk/foo/__init__.py'>
```
Not even deleting `foo` helps:
```
>>> import sys
>>> del foo
>>> del sys.modules['foo']
>>> import foo.quux
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named quux
```
Can you provide me with a better solution or suggestion?
|
2009/09/18
|
[
"https://Stackoverflow.com/questions/1443146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/102704/"
] |
[In the `__init__.py` of the `foo` package you can change `__path__` to make it look for its modules in other directories.](http://docs.python.org/tutorial/modules.html#packages-in-multiple-directories)
So create a directory called `secret` and put it in your private Subversion repository. In `secret` put your proprietary `bar.py`. In the `__init__.py` of the public `foo` package put in something like:
```
__path__.insert(0,'secret')
```
This means that users who have the private repository, and therefore the `secret` directory, will get the proprietary `bar.py` as `foo.bar`, since `secret` is the first directory in the search path. For other users, Python won't find `secret`, will look at the next directory in `__path__`, and so will load the normal `bar.py` from `foo`.
So it will look something like this:
```
private
└── trunk/
└── secret/
└── bar.py
public
└── trunk/
├── __init__.py
└── foo/
├── __init__.py
├── bar.py
├── baz.py
└── quux.py
```
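For completeness, here is a self-contained sketch of this trick. The layout is built in a temporary directory purely for illustration; the directory names (`foo`, `secret`) follow the answer, and `get_sauce` comes from the question:
```
import os
import sys
import tempfile
import textwrap

# Build the "public" package plus a sibling "secret" dir in a temp root.
root = tempfile.mkdtemp()
foo_dir = os.path.join(root, "foo")
secret_dir = os.path.join(root, "secret")
os.makedirs(foo_dir)
os.makedirs(secret_dir)

# foo/__init__.py: prepend ../secret to __path__ when it exists, so that
# insiders (who have the private checkout) shadow the public modules.
with open(os.path.join(foo_dir, "__init__.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import os
        _secret = os.path.join(os.path.dirname(__path__[0]), 'secret')
        if os.path.isdir(_secret):  # only internal checkouts have this dir
            __path__.insert(0, _secret)
    """))

with open(os.path.join(foo_dir, "bar.py"), "w") as f:
    f.write("def get_sauce():\n    return 'a public bar'\n")

with open(os.path.join(secret_dir, "bar.py"), "w") as f:
    f.write("def get_sauce():\n    return 'a private bar'\n")

sys.path.insert(0, root)
import foo.bar

print(foo.bar.get_sauce())  # prints: a private bar
```
Deleting the `secret` directory and re-running yields `'a public bar'`, which is exactly the public-fallback behaviour the question asks for.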
|
Use some sort of plugin system: keep your private plugins to yourself, but also have publicly available plugins that get shipped with the open code.
Plugin systems abound, and you can easily make dead-simple ones yourself. If you want something more advanced, I prefer the Zope Component Architecture, but there are also options like setuptools entry\_points, etc.
Which one to use in your case would be a good second question.
|
1,443,146 |
We're starting a new project in Python with a few proprietary algorithms and sensitive bits of logic that we'd like to keep private. We also will have a few outsiders (select members of the public) working on the code. We cannot grant the outsiders access to the small, private bits of code, but we'd like a public version to work well enough for them.
Say that our project, Foo, has a module, `bar`, with one function, `get_sauce()`. What really happens in `get_sauce()` is secret, but we want a public version of `get_sauce()` to return an acceptable, albeit incorrect, result.
We also run our own Subversion server so we have total control over who can access what.
**Symlinks**
My first thought was symlinking — Instead of `bar.py`, provide `bar_public.py` to everybody and `bar_private.py` to internal developers only. Unfortunately, creating symlinks is tedious, manual work — especially when there are really going to be about two dozen of these private modules.
More importantly, it makes management of the Subversion authz file difficult, since for each module we want to protect an exception must be added on the server. Someone might forget to do this and accidentally check in secrets... Then the module is in the repo and we have to rebuild the repository without it and hope that an outsider didn't download it in the meantime.
**Multiple repositories**
The next thought was to have two repositories:
```
private
└── trunk/
├── __init__.py
└── foo/
├── __init__.py
└── bar.py
public
└── trunk/
├── __init__.py
└── foo/
├── __init__.py
├── bar.py
├── baz.py
└── quux.py
```
The idea is that only internal developers will be able to checkout both `private/` and `public/`. Internal developers will set their `PYTHONPATH=private/trunk:public/trunk`, but everyone else will just set `PYTHONPATH=public/trunk`. Then, both insiders and outsiders can `from foo import bar` and get the right module, right?
Let's try this:
```
% PYTHONPATH=private/trunk:public/trunk python
Python 2.5.1
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo.bar
>>> foo.bar.sauce()
'a private bar'
>>> import foo.quux
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named quux
```
I'm not a Python expert, but it seems that Python has already made up its mind about module `foo` and searches relative to that:
```
>>> foo
<module 'foo' from '/path/to/private/trunk/foo/__init__.py'>
```
Not even deleting `foo` helps:
```
>>> import sys
>>> del foo
>>> del sys.modules['foo']
>>> import foo.quux
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named quux
```
Can you provide me with a better solution or suggestion?
|
2009/09/18
|
[
"https://Stackoverflow.com/questions/1443146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/102704/"
] |
[In the `__init__.py` of the `foo` package you can change `__path__` to make it look for its modules in other directories.](http://docs.python.org/tutorial/modules.html#packages-in-multiple-directories)
So create a directory called `secret` and put it in your private Subversion repository. In `secret` put your proprietary `bar.py`. In the `__init__.py` of the public `foo` package put in something like:
```
__path__.insert(0,'secret')
```
This means that users who have the private repository, and therefore the `secret` directory, will get the proprietary `bar.py` as `foo.bar`, since `secret` is the first directory in the search path. For other users, Python won't find `secret`, will look at the next directory in `__path__`, and so will load the normal `bar.py` from `foo`.
So it will look something like this:
```
private
└── trunk/
└── secret/
└── bar.py
public
└── trunk/
├── __init__.py
└── foo/
├── __init__.py
├── bar.py
├── baz.py
└── quux.py
```
|
Here's an alternate solution I noticed when reading the docs for [Flask](http://flask.pocoo.org/docs/):
>
> **`flaskext/__init__.py`**
>
>
> The only purpose of this file is to mark the package as namespace package. This is required so that multiple modules from different PyPI packages can reside in the same Python package:
>
>
>
> ```
> __import__('pkg_resources').declare_namespace(__name__)
>
> ```
>
> If you want to know exactly what is happening there, checkout the distribute or setuptools docs which explain how this works.
>
>
>
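A closely related stdlib mechanism is `pkgutil.extend_path`, which merges same-named package directories found on `sys.path`. A runnable sketch (the `nspkg` package and its modules are made up for the demo):
```
import os
import sys
import tempfile

# Each source root contains an "nspkg" package whose __init__.py calls
# pkgutil.extend_path, so modules from both roots import as one package.
INIT = "__path__ = __import__('pkgutil').extend_path(__path__, __name__)\n"

for name in ("pub", "priv"):
    root = os.path.join(tempfile.mkdtemp(), name + "_root")
    pkg = os.path.join(root, "nspkg")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write(INIT)
    with open(os.path.join(pkg, name + "_mod.py"), "w") as f:
        f.write("WHO = %r\n" % name)
    sys.path.insert(0, root)

import nspkg.pub_mod
import nspkg.priv_mod  # both resolve, despite living in different roots

print(nspkg.pub_mod.WHO, nspkg.priv_mod.WHO)  # prints: pub priv
```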
|
1,443,146 |
We're starting a new project in Python with a few proprietary algorithms and sensitive bits of logic that we'd like to keep private. We also will have a few outsiders (select members of the public) working on the code. We cannot grant the outsiders access to the small, private bits of code, but we'd like a public version to work well enough for them.
Say that our project, Foo, has a module, `bar`, with one function, `get_sauce()`. What really happens in `get_sauce()` is secret, but we want a public version of `get_sauce()` to return an acceptable, albeit incorrect, result.
We also run our own Subversion server so we have total control over who can access what.
**Symlinks**
My first thought was symlinking — Instead of `bar.py`, provide `bar_public.py` to everybody and `bar_private.py` to internal developers only. Unfortunately, creating symlinks is tedious, manual work — especially when there are really going to be about two dozen of these private modules.
More importantly, it makes management of the Subversion authz file difficult, since for each module we want to protect an exception must be added on the server. Someone might forget to do this and accidentally check in secrets... Then the module is in the repo and we have to rebuild the repository without it and hope that an outsider didn't download it in the meantime.
**Multiple repositories**
The next thought was to have two repositories:
```
private
└── trunk/
├── __init__.py
└── foo/
├── __init__.py
└── bar.py
public
└── trunk/
├── __init__.py
└── foo/
├── __init__.py
├── bar.py
├── baz.py
└── quux.py
```
The idea is that only internal developers will be able to checkout both `private/` and `public/`. Internal developers will set their `PYTHONPATH=private/trunk:public/trunk`, but everyone else will just set `PYTHONPATH=public/trunk`. Then, both insiders and outsiders can `from foo import bar` and get the right module, right?
Let's try this:
```
% PYTHONPATH=private/trunk:public/trunk python
Python 2.5.1
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo.bar
>>> foo.bar.sauce()
'a private bar'
>>> import foo.quux
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named quux
```
I'm not a Python expert, but it seems that Python has already made up its mind about module `foo` and searches relative to that:
```
>>> foo
<module 'foo' from '/path/to/private/trunk/foo/__init__.py'>
```
Not even deleting `foo` helps:
```
>>> import sys
>>> del foo
>>> del sys.modules['foo']
>>> import foo.quux
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named quux
```
Can you provide me with a better solution or suggestion?
|
2009/09/18
|
[
"https://Stackoverflow.com/questions/1443146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/102704/"
] |
Use some sort of plugin system: keep your private plugins to yourself, but also have publicly available plugins that get shipped with the open code.
Plugin systems abound, and you can easily make dead-simple ones yourself. If you want something more advanced, I prefer the Zope Component Architecture, but there are also options like setuptools entry\_points, etc.
Which one to use in your case would be a good second question.
|
Here's an alternate solution I noticed when reading the docs for [Flask](http://flask.pocoo.org/docs/):
>
> **`flaskext/__init__.py`**
>
>
> The only purpose of this file is to mark the package as namespace package. This is required so that multiple modules from different PyPI packages can reside in the same Python package:
>
>
>
> ```
> __import__('pkg_resources').declare_namespace(__name__)
>
> ```
>
> If you want to know exactly what is happening there, checkout the distribute or setuptools docs which explain how this works.
>
>
>
|
5,401,448 |
I've a client server architecture implemented in C++ with blocking sockets under Windows 7. Everything is running well up to a certain level of load. If there are a couple of clients (e.g. > 4) receiving or sending megabytes of data, sometimes the communication with one client freezes for approximately 5 seconds. All other clients are working as expected in that case.
The buffer size is 8192 bytes and logging on the server side reads as follows:
TimeStamp (s.ms) - received bytes
…
1299514524.618 - 8192
1299514524.618 - 8192
**1299514524.618 - 0004**
**1299514529.641 - 8192**
1299514529.641 - 3744
1299514529.641 - 1460
1299514529.641 - 1460
1299514529.641 - 8192
…
It seems that only 4 bytes can be read within those 5 seconds. Furthermore, I found out that the freezing time is always around 5 seconds - never 4 or less and never 6 or more...
Any ideas?
Best regards
Michael
|
2011/03/23
|
[
"https://Stackoverflow.com/questions/5401448",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/534960/"
] |
This is a Windows bug.
[KB 2020447 - **Socket communication using the loopback address will intermittently encounter a five second delay**](http://support.microsoft.com/kb/2020447)
A Hotfix is available in
[KB 2861819 - **Data transfer stops for five seconds in a Windows Socket-based application in Windows 7 and Windows Server 2008 R2**](http://support.microsoft.com/kb/2861819)
|
I've had this problem in situations of high load: the last TCP packet sometimes arrived before the second-to-last one, as the default stack configuration did not provide the ordering we assumed.
That reordering produced, on the receiving side, results similar to what you describe.
The solution we adopted was to distribute the load across more servers.
|
1,831,373 |
Changing the text of a label (or, more sophisticatedly, a text-based progress bar):
In WinForms you just Invalidate/Update.
But how do you do this in WPF without using background threads?
|
2009/12/02
|
[
"https://Stackoverflow.com/questions/1831373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/185022/"
] |
```
public static class ExtensionMethods
{
private static Action EmptyDelegate = delegate() { };
public static void Refresh(this UIElement uiElement)
{
uiElement.Dispatcher.Invoke(DispatcherPriority.Render, EmptyDelegate);
}
}
private void LoopingMethod()
{
for (int i = 0; i < 10; i++)
{
label1.Content = i.ToString();
label1.Refresh();
Thread.Sleep(500);
}
}
```
Reference: <http://geekswithblogs.net/NewThingsILearned/archive/2008/08/25/refresh--update-wpf-controls.aspx>
|
Perhaps you should read more on the subject of [Bindings](http://msdn.microsoft.com/en-us/magazine/cc163299.aspx).
Basically, bindings will manage this for you.
|
9,551,015 |
I'm trying to set up replication between a SQL Server 2008 R2 database and SQL Server CE 3.5. I have set up IIS 7 accordingly and get a nice "Microsoft SQL Server Compact Server Agent" when checking the publication URL (<http://winserver2008/SQLReplication/sqlcesa35.dll>).
However when I try <http://winserver2008/SQLReplication/sqlcesa35.dll?diag> I get the following:

When I try to connect to the publisher via a C# app I'm writing, I get "Initializing SQL Server Reconciler has failed. Try again". I assume that means I need that 9.0 database reconciler? If so, how do I install it?
On a side note this only happens when I try to connect via Integrated Windows Authentication. If I try via SQL Server authentication instead I just get an error about SQL Server missing or having an incorrect login/password.
|
2012/03/04
|
[
"https://Stackoverflow.com/questions/9551015",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/41348/"
] |
Finally figured it out. After locating the actual error message in SQL Server Profiler it became obvious there was an issue with the access privileges of the publication. As it turns out I had to add the database user to the PAL instead of the corresponding windows account.
|
The IIS web site and web application utilized by SQL CE 3.5 have to be set to allow "directory browsing". Once I turned that on, the ReadWriteDeleteMessageFile status went to SUCCESS.
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
Leave out the `<p>` tag unless the content of your table cell is *truly* a paragraph. It's certainly possible to have paragraphs in tabular data, and in that case a semantic `<p>` would be appropriately placed. But for the common table with data in the cells, e.g. numbers, names, etc., don't include the `<p>`.
|
If the tabular cell data is text:
```
<td>
content text
</td>
```
If the tabular cell data is paragraph(s):
```
<td>
<p>
content text
</p>
...
</td>
```
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
It depends on your intention. If the cell is going to have just ONE paragraph then it makes no sense to add the `<p>` tag to it.
If you intend to have a few paragraphs in the `<td>` cell then it makes sense to use the `<p>` tag.
|
If the tabular cell data is text:
```
<td>
content text
</td>
```
If the tabular cell data is paragraph(s):
```
<td>
<p>
content text
</p>
...
</td>
```
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
Leave out the `<p>` tag unless the content of your table cell is *truly* a paragraph. It's certainly possible to have paragraphs in tabular data, and in that case a semantic `<p>` would be appropriately placed. But for the common table with data in the cells, e.g. numbers, names, etc., don't include the `<p>`.
|
They are both valid. However, if you are going to have multiple paragraphs, obviously use the `<p>` tags
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
If the tabular cell data is text:
```
<td>
content text
</td>
```
If the tabular cell data is paragraph(s):
```
<td>
<p>
content text
</p>
...
</td>
```
|
Both are valid; if that is the only content of `<td>`, and the content is not being used in JavaScript code, then the second is better.
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
They are both valid. However, if you are going to have multiple paragraphs, obviously use the `<p>` tags
|
Both are valid; if that is the only content of `<td>`, and the content is not being used in JavaScript code, then the second is better.
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
Leave out the `<p>` tag unless the content of your table cell is *truly* a paragraph. It's certainly possible to have paragraphs in tabular data, and in that case a semantic `<p>` would be appropriately placed. But for the common table with data in the cells, e.g. numbers, names, etc., don't include the `<p>`.
|
It depends on your intention. If the cell is going to have just ONE paragraph then it makes no sense to add the `<p>` tag to it.
If you intend to have a few paragraphs in the `<td>` cell then it makes sense to use the `<p>` tag.
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
It depends on your intention. If the cell is going to have just ONE paragraph then it makes no sense to add the `<p>` tag to it.
If you intend to have a few paragraphs in the `<td>` cell then it makes sense to use the `<p>` tag.
|
They are both valid. However, if you are going to have multiple paragraphs, obviously use the `<p>` tags
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
They are both valid. However, if you are going to have multiple paragraphs, obviously use the `<p>` tags
|
Depends on whether you subscribe to the "tables are for tabular data" or the "tables are for layout" school. If you prefer to use your tables for tabular data, and the content is a true paragraph, the `<p>` is valid; if tables are for layout, and you have the `<p>` tag reserved for other layout semantics, then it's not required.
Short answer: it's really up to you. :)
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
Leave out the `<p>` tag unless the content of your table cell is *truly* a paragraph. It's certainly possible to have paragraphs in tabular data, and in that case a semantic `<p>` would be appropriately placed. But for the common table with data in the cells, e.g. numbers, names, etc., don't include the `<p>`.
|
Depends on whether you subscribe to the "tables are for tabular data" or the "tables are for layout" school. If you prefer to use your tables for tabular data, and the content is a true paragraph, the `<p>` is valid; if tables are for layout, and you have the `<p>` tag reserved for other layout semantics, then it's not required.
Short answer: it's really up to you. :)
|
1,944,213 |
Which is more semantic and valid?
```
<td>
<p>
content text
</p>
</td>
```
or
```
<td>
content text
</td>
```
|
2009/12/22
|
[
"https://Stackoverflow.com/questions/1944213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84201/"
] |
Leave out the `<p>` tag unless the content of your table cell is *truly* a paragraph. It's certainly possible to have paragraphs in tabular data, and in that case a semantic `<p>` would be appropriately placed. But for the common table with data in the cells, e.g. numbers, names, etc., don't include the `<p>`.
|
Both are valid; if that is the only content of `<td>`, and the content is not being used in JavaScript code, then the second is better.
|
951,078 |
**Problem:** Prove that the equation $5^x+2=17^y$ doesn't have any solutions with $x,y$ in $\mathbb{N}$.
I've been analyzing the remainder while dividing by $4$, but I'm getting nowhere.
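For reference, the mod-$4$ computation mentioned above does settle it once both sides are reduced:
$$5 \equiv 1 \pmod 4 \;\Rightarrow\; 5^x + 2 \equiv 1 + 2 \equiv 3 \pmod 4,$$
$$17 \equiv 1 \pmod 4 \;\Rightarrow\; 17^y \equiv 1 \pmod 4,$$
so the two sides lie in different residue classes mod $4$ for every $x, y \in \mathbb{N}$, and the equation has no solutions.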
|
2014/09/29
|
[
"https://math.stackexchange.com/questions/951078",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/143201/"
] |
I'll refer to (one of) my preferred textbooks :
* Herbert Enderton, [A Mathematical Introduction to Logic](http://rads.stackoverflow.com/amzn/click/0122384520) (2nd ed - 2001), page 112 :
he defines *substitution* by recursion :
>
> 1. For atomic $\alpha, \alpha\_t^x$ is the expression obtained from $\alpha$ by replacing
> the variable $x$ by $t$.
>
>
>
[...]
>
> 4. $(∀y \alpha)\_t^x$ is $∀y \alpha$, if $x = y$, $∀y(\alpha\_t^x)$ if $x \ne y$.
>
>
>
Then he discusses the fallacy of substituting $y$ for $x$ in $∀x¬∀y(x=y)$, getting the wrong result $¬∀y(y=y)$, which again is exactly your example.
>
> The problem is that when $y$ was substituted for $x$, it was immediately “captured” by the $∀y$ quantifier. We must impose a restriction [...] that will preclude this sort of quantifier capture. Informally, we can say that a term $t$ is **not** substitutable for $x$ in $\alpha$ if there is some variable $y$ in $t$ that is captured by a $∀y$ quantifier in $\alpha\_t^x$.
>
>
>
Thus [page 113] he defines :
>
> Let $x$ be a variable, $t$ a term. We define the phrase “$t$ *is substitutable for* $x$ in $\alpha$” as follows:
>
>
> 1. For atomic $\alpha, t$ is always substitutable for $x$ in $\alpha$. (There are no
> quantifiers in $\alpha$, so no capture could occur.)
>
>
>
[...]
>
> 3. $t$ is substitutable for $x$ in $∀y\alpha$ iff either
>
>
>
> >
> > (a) $x$ does not occur free in $∀y\alpha$, or
> >
> >
> > (b) $y$ does not occur in $t$ and $t$ is substitutable for $x$ in $\alpha$. (The point here is to be sure that nothing in $t$ will be captured by the $∀y$ prefix and that nothing has gone wrong inside $\alpha$ earlier.)
> >
> >
> >
>
>
> For example, $x$ is always substitutable for itself in any formula. If $t$ contains no variables that occur in $\alpha$, then $t$ is substitutable for $x$ in $\alpha$.
>
>
> The reader is cautioned not to be confused about the choice of words. Even if $t$ is not substitutable for $x$ in $\alpha$, still $\alpha\_t^x$ is obtained from $\alpha$ by
> replacing $x$ wherever it occurs free by $t$.
>
>
>
The two definitions are exactly the same as in Christopher Leary, *A Friendly Introduction to Mathematical Logic* (2000), pages 39-41.
---
I think that you are "not seeing" the difference between the two concepts because you are thinking only about one part of the problem:
>
> $(∀y \alpha)\_t^x$ is $∀y \alpha$, if $x = y$, $∀y(\alpha\_t^x)$ if $x \ne y$.
>
>
>
This clause prevents us from replacing the *bound* variable. I.e., with $∀y(x=y)$ we can perform $(∀y(x=y))\_t^x$ to get $∀y(t=y)$, but we cannot perform $(∀y(x=y))\_t^y$.
But we have another possible (bad) case : when $t$ has free variables inside, included the "degenerate" case when $t$ is $y$.
In this case the above clause tells us nothing, because we are trying to put $t$ (i.e. $y$) in place of $x$ inside $∀y(x=y)$. In this case $x$ is **not** the *bound* variable (which is $y$) but the result is still not what we expect.
The formula :
>
> $∀y(x=y)$
>
>
>
is *true* in a model with only one object, while :
>
> $∀y(y=y)$
>
>
>
is *true* in every model.
|
To be able to formally replace $y$ by $x$, it must be true that $x=y$ (or $y=x$) where $x$ and $y$ are *free* variables. In the statement $\forall x\exists y ¬(x=y)$, $x$ and $y$ are not free variables; they are *bound* variables. For this reason, we cannot make the substitution.
|
3,820,348 |
I am trying to shear an Image in WPF (C#) using pure XAML. I have used transformations such as skew, scale and matrix but I am not getting the result as I want. Following is the code I am using
```
<Image Source="G:\Demo\virtualization\virtualization\img3.bmp" Stretch="Fill" Height="70" Width="240" Margin="0,170,-312,-29" >
<Image.RenderTransform>
<TransformGroup>
<MatrixTransform Matrix="1,.1,-4.2,1,0,0" />
<!--<ScaleTransform ScaleX=".2" ScaleY=".6"/>-->
</TransformGroup>
</Image.RenderTransform>
</Image>
```
With the above code, below is the similar effect that I am getting:
```
/----/
/ /
/ /
/----/
```
But I want following effect:
```
/---\
/ \
/ \
/---------\
```
Due to new user, I am not allowed to post Images right now...
Thanks
***EDIT:***
I have tried viewport3d to get this effect. Following is the code:
```
<Grid>
<Image Source="G:\Demo\virtualization\virtualization\2.jpg"/>
<Viewport3D Name="mainViewport" ClipToBounds="True" HorizontalAlignment="Stretch" Height="300">
<Viewport3D.Camera>
<PerspectiveCamera Position="0, 0, 20" />
</Viewport3D.Camera>
<Viewport2DVisual3D>
<Viewport2DVisual3D.Transform>
<MatrixTransform3D Matrix="1,0,0,0
0,1,0,0
0,0,1,0
0,0,0,1" />
</Viewport2DVisual3D.Transform>
<Viewport2DVisual3D.Geometry>
<MeshGeometry3D Positions="-1,1,0 -1,-1,0 1,-1,0 1,1,0"
TextureCoordinates="0,0 0,1 1,1 1,0" TriangleIndices="0 1 2 0 2 3"/>
</Viewport2DVisual3D.Geometry>
<Viewport2DVisual3D.Material>
<DiffuseMaterial Viewport2DVisual3D.IsVisualHostMaterial="True" Brush="White"/>
</Viewport2DVisual3D.Material>
<Grid>
<Image Source="G:\Demo\virtualization\virtualization\img3.bmp"/>
</Grid>
</Viewport2DVisual3D>
<ModelVisual3D>
<ModelVisual3D.Content>
<DirectionalLight Color="#FFFFFFFF" Direction="0,-1,-1"/>
</ModelVisual3D.Content>
</ModelVisual3D>
</Viewport3D>
</Grid>
```
I am using matrix transformation to get the effect.
However, I am looking for alternate ways to get this effect, maybe by rotating the camera of the viewport.
|
2010/09/29
|
[
"https://Stackoverflow.com/questions/3820348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/461537/"
] |
You cannot achieve this kind of effect with a linear (affine) transformation: affine maps preserve parallelism, so they can never turn a rectangle into a trapezoid.
You could get this effect with WPF 3D.
|
You can achieve this effect with a custom pixel shader.
Download a copy of Shazzam (it's free) and check out the paperfold sample (shazzam-tool.com). While it's not exactly the shear effect you showed in your question, it is close.
Have you worked with shaders before? If you want to use a custom shader I have a prototype shear effect that I worked on a couple months ago. I can clean up the HLSL and post it here.
Edit
----
Shazzam contains a tutorial section and about 80 sample shaders. That's a good place to start. The hardest part about using custom shaders is that you have to learn a new language (HLSL).
There is a list of shader articles at <http://blog.shazzam-tool.com/index.php/shader-articles/>
|
54,165,098 |
I'm trying to create a small project, but I'm stuck and don't even know where to begin with the JS code. Essentially, when you type your order in the textbox (as you type it, or when you click the button: it doesn't have to add everything up in real time, though that would be "cooler"), it adds a number to the price based on what word is detected. So if soup is 5 dollars, for example, and a drink is 2, and you write both, it adds the two prices up based on your order.
I thought about creating an array (I'm not too good with those) and assigning a number to each item with the `=>` thingy, but then how would I apply it to the textarea on keyup, or to the button, so that it adds up the prices of whichever array words are detected in the input? I also thought about writing a function for each word, so that if that word is in the box it adds its price to the price tag below, but that would be about 30 else-if statements, and I'd probably mess up and miss certain words or only add 2 prices instead of however many. Besides, I want to learn something new that I haven't tried before. Maybe a forEach function?
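For what it's worth, the lookup-object idea described above can be sketched like this (the item subset and prices mirror the posted menu; `orderTotal` is a made-up name):
```js
// Map menu words to prices, then sum the price of every known word
// that appears in the typed order.
const prices = { soup: 5, noodles: 5, bread: 2, tea: 1.5, cake: 4 };

function orderTotal(text) {
  return text
    .toLowerCase()
    .split(/\W+/)                                   // crude word split
    .reduce((sum, word) => sum + (prices[word] || 0), 0);
}

console.log(orderTotal("One soup and a tea please")); // 6.5
```
Wire it to the textarea with a `keyup` listener (or to the button's `click`) and write the result into the price element. Note that two-word items like "Fruit Salad" would need phrase matching rather than this single-word split.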
Here is the HTML & CSS: <https://codepen.io/Filizof/pen/zyRvOG?editors=1010>
```css
body {
background: #2b2b2b;
text-align: center;
}
#menudiv {
background: #2b2b2b;
height: 1000px;
width: 500px;
border: 4px dashed white;
}
#menutitle {
color: white;
background: #7b2302;
height: 200px;
width: 200px;
border-radius: 100px;
margin: auto;
font-size: 21px;
}
#menutext {
position: relative;
bottom: -55px;
font-family: papyrus;
}
.listitem {
list-style: none;
color: white;
font-family: papyrus;
font-size: 20px;
float: left;
margin-bottom: 10px;
}
#orderdiv {
width: 1000px;
height: 800px;
background: #2b2b2b;
position: relative;
top: -1008px;
right: -550px;
border: 4px dashed white;
}
#ordertitlediv {
background: #7b2302;
color: white;
width: 450px;
height: 100px;
}
#ordertext {
font-family: papyrus;
font-size: 40px;
position: relative;
bottom: -18px;
}
.listorderitem {
display: inline-block;
position: relative;
}
#firstitem {
position: relative;
bottom: -60px;
}
#orderlist {
position: relative;
right: 220px;
top: -50px;
}
#txtarea {
background: #2b2b2b;
color: white;
border: 2px solid white;
font-family: papyrus;
font-size: 30px;
}
.takelist {
display: inline-block;
color: white;
font-size: 60px;
margin: 60px;
position: relative;
top: -100px;
font-family: papyrus;
}
#mybtn {
width: 100px;
height: 50px;
position: relative;
top: -12px;
background: none;
border: 2px solid white;
color: white;
font-family: papyrus;
}
```
```html
<div id="menudiv">
<div id="menutitle"><p id="menutext">FILIP'S BAKEHOUSE<p/></div>
<ul id="foodlist">
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fmedia.glamour.com%2Fphotos%2F57597064f4c7c09c2559f60e%2Fmaster%2Fw_1024%2Cc_limit%2Fcarrot-soup.png&f=1">Soup: $5</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fcentra.ie%2Fimage%2Fvar%2Ffiles%2FADF%2FLoveDinner%2FCentra_honey_sauce_noodles.png&f=1">Noodles: $5</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F08%2FPNGPIX-COM-Bread-PNG-Transparent-Image.png&f=1">Bread: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Ftea%2Ftea_PNG16896.png&f=1">Tea: $1.50</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pizzainn-me.com%2Fcontent%2Fthemes%2Fbernelli%2Fimg%2Fappetizers%2Flasagna.png&f=1">Lasagna: $6</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2F78.media.tumblr.com%2F32f23f3391e3b515d6b430b895458f9c%2Ftumblr_ns6b039Uzg1uyxczto1_500.png&f=1">Mousaka: $6</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Fcake%2Fcake_PNG13111.png&f=1">Cake: $4</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngall.com%2Fwp-content%2Fuploads%2F2016%2F05%2FSalad-PNG-Image.png&f=1">Salad: $4</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.govegan.org.uk%2Fwp-content%2Fuploads%2F2015%2F11%2Ffruitsalad300x3007.png&f=1">Fruit Salad: $4</li>
<li class="listitem"><img height="50" width="50" src="http://miam-images.m.i.pic.centerblog.net/0837921e.png">Orange Juice: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngonly.com%2Fwp-content%2Fuploads%2F2017%2F05%2FApple-Juice-PNG-Picture.png&f=1">Apple Juice: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pkdrecipes.com%2Fwp-content%2Fuploads%2F2010%2F09%2F032411-Cranberry-Juice-400.png&f=1">Cranberry Juice: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fimg15.deviantart.net%2Fddeb%2Fi%2F2013%2F329%2F3%2Fe%2Fcookie__png_by_darksideofgraphic-d6vlc5l.png&f=1">Cookies: $3</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Fbiscuit%2Fbiscuit_PNG88.png&f=1">Biscuits: $3</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F07%2FPNGPIX-COM-Ice-Cream-PNG-Transparent-Image.png&f=1">Ice Cream: $3</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F11%2FPNGPIX-COM-Potato-Chips-PNG-Transparent-Image.png&f=1">Chips: $1</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fselflender.s3.amazonaws.com%2Fblog%2Ftacos_1.png&f=1">Tacos: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpluspng.com%2Fimg-png%2Ffree-png-jelly-free-range-snack-co-452.png&f=1">Jelly: $1</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fbodyfuelcafe.co.uk%2Fwp-content%2Fuploads%2F2014%2F07%2FTHMB-Mango-Smoothie.png&f=1">Smoothie: $1</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2F37.media.tumblr.com%2F605762b2e80fbdda35364d5b0db4a540%2Ftumblr_n5bxf2lI5k1rx3d1wo1_r2_500.png&f=1">Pancakes: $8</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F10%2FPNGPIX-COM-Sandwich-PNG-Transparent-Image-500x282.png&f=1">Sandwich: $4</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Fburger_sandwich%2Fburger_sandwich_PNG4114.png&f=1">Burger: $5</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Ffries%2Ffries_PNG77.png&f=1">Fries: $7</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F10%2FPNGPIX-COM-Almond-Nut-PNG-Image-500x404.png&f=1">Nuts: $3</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpluspng.com%2Fimg-png%2Frice-png-rice-png-601.png&f=1">Rice: $8</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fpruebeydisfrute.files.wordpress.com%2F2015%2F09%2Fchai-latte.png&f=1">Chai</li>
</ul>
</div>
<div id="orderdiv">
<ul id="orderlist">
<li id="firstitem" class="listorderitem"><img id="orderimg" height="100" width="100" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fcdn.pixabay.com%2Fphoto%2F2016%2F09%2F13%2F18%2F38%2Fsilverware-1667988__340.png&f=1"></li>
<li class="listorderitem"><div id="ordertitlediv"><p id="ordertext">DM YOUR ORDER</p></div></li>
</ul>
<textarea id="txtarea" rows="5" cols="50"></textarea>
<ul id="takeorderlist">
<li class="takelist">$<p id="price"></p></li>
<li class="takelist"><p id="wall">|</p></li>
<li class="takelist"><button id="mybtn">ORDER</button></li>
</ul>
</div>
```
```
create array w/ foods and a numbered price
create variables for the txtarea in the html
create the function (maybe foreach) that adds the price of each word
```
|
2019/01/13
|
[
"https://Stackoverflow.com/questions/54165098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You are probably looking for something as follows.
Note the code below only includes mappings for `soup`, `noodles` and `bread`, but you can easily add the remaining items to the `priceMap`.
Also, a space is assumed as the delimiter in the textarea. You can change it by passing a different character into `split`.
```
const text = document.getElementById('txtarea');
const totalPrice = document.getElementById('price');
const priceMap = {
soup: 5,
noodles: 5,
bread: 2
};
text.addEventListener('keyup', (e) => {
totalPrice.textContent = text.value.split(' ')
.filter(Boolean)
.map(v => v.toLowerCase())
.map(v => priceMap[v])
.filter(Boolean)
.reduce((sum, price) => sum + price, 0);
});
```
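If you also want multi-word menu items such as "fruit salad" or "orange juice" to match (my own extension, not part of the code above — splitting on spaces can't catch them), one option is to scan the lowercased text for each phrase instead. `phrasePrices` below is a made-up subset of the menu:

```javascript
// Sketch: sum prices by counting occurrences of each menu phrase
// in the lowercased text, so multi-word items also match.
const phrasePrices = {
  soup: 5,
  "fruit salad": 4,
  "orange juice": 2
};

function totalFor(text) {
  const t = text.toLowerCase();
  let sum = 0;
  for (const [item, price] of Object.entries(phrasePrices)) {
    // count non-overlapping occurrences of the phrase
    const occurrences = t.split(item).length - 1;
    sum += occurrences * price;
  }
  return sum;
}

console.log(totalFor("Soup and fruit salad")); // 9
```

Note that this naive phrase scan would double-count if the map contained both `"salad"` and `"fruit salad"`, so overlapping names need extra care.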
|
To make this fully functional, you should separate order items with `,` and give a quantity for each item, even when you're only buying 1.
Try `1 lasagna, 2 cake`
```js
let input = document.querySelector("#txtarea")
let selectedFood = [];
let menu = [
{
item: "soup",
price: 5
},
{
item: "noodles",
price: 5
},
{
item: "bread",
price: 2
},
{
item: "tea",
price: 1.5
},
{
item: "lasagna",
price: 6
},
{
item: "mousaka",
price: 6
},
{
item: "cake",
price: 4
},
{
item: "salad",
price: 4
},
{
item: "fruit salad",
price: 4
},
{
item: "orange juice",
price: 2
},
{
item: "apple juice",
price: 2
},
{
item: "cranberry juice",
price: 2
},
{
item: "cookies",
price: 3
},
{
item: "biscuits",
price: 3
},
{
item: "ice cream",
price: 3
},
{
item: "chips",
price: 1
},
{
item: "tacos",
price: 2
},
{
item: "jelly",
price: 1
},
{
item: "smoothie",
price: 1
},
{
item: "pancakes",
price: 8
},
{
item: "sandwich",
price: 4
},
{
item: "burger",
price: 5
},
{
item: "fries",
price: 7
},
{
item: "nuts",
price: 3
},
{
item: "rice",
price: 8
},
{
item: "chai",
price: 5
}
];
input.onkeyup = () => {
let selection = input.value.replace(/ /g, '').split(",")
let total = 0;
selection.map(s => {
let cost = 0;
let found = menu.filter(m => m.item == s.replace(/\d+/g, ''))
if(found.length > 0) {
cost = found[0].price * parseInt(s)
total += cost;
}
})
document.querySelector("#price").innerText = total;
}
```
```css
body {
background: #2b2b2b;
text-align: center;
}
#menudiv {
background: #2b2b2b;
height: 1000px;
width: 500px;
border: 4px dashed white;
}
#menutitle {
color: white;
background: #7b2302;
height: 200px;
width: 200px;
border-radius: 100px;
margin: auto;
font-size: 21px;
}
#menutext {
position: relative;
bottom: -55px;
font-family: papyrus;
}
.listitem {
list-style: none;
color: white;
font-family: papyrus;
font-size: 20px;
float: left;
margin-bottom: 10px;
}
#orderdiv {
width: 1000px;
height: 800px;
background: #2b2b2b;
position: relative;
top: -1008px;
right: -550px;
border: 4px dashed white;
}
#ordertitlediv {
background: #7b2302;
color: white;
width: 450px;
height: 100px;
}
#ordertext {
font-family: papyrus;
font-size: 40px;
position: relative;
bottom: -18px;
}
.listorderitem {
display: inline-block;
position: relative;
}
#firstitem {
position: relative;
bottom: -60px;
}
#orderlist {
position: relative;
right: 220px;
top: -50px;
}
#txtarea {
background: #2b2b2b;
color: white;
border: 2px solid white;
font-family: papyrus;
font-size: 30px;
}
.takelist {
display: inline-block;
color: white;
font-size: 60px;
margin: 60px;
position: relative;
top: -100px;
font-family: papyrus;
}
#mybtn {
width: 100px;
height: 50px;
position: relative;
top: -12px;
background: none;
border: 2px solid white;
color: white;
font-family: papyrus;
}
```
```html
<div id="menudiv">
<div id="menutitle"><p id="menutext">FILIP'S BAKEHOUSE</p></div>
<ul id="foodlist">
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fmedia.glamour.com%2Fphotos%2F57597064f4c7c09c2559f60e%2Fmaster%2Fw_1024%2Cc_limit%2Fcarrot-soup.png&f=1">Soup: $5</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fcentra.ie%2Fimage%2Fvar%2Ffiles%2FADF%2FLoveDinner%2FCentra_honey_sauce_noodles.png&f=1">Noodles: $5</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F08%2FPNGPIX-COM-Bread-PNG-Transparent-Image.png&f=1">Bread: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Ftea%2Ftea_PNG16896.png&f=1">Tea: $1.50</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pizzainn-me.com%2Fcontent%2Fthemes%2Fbernelli%2Fimg%2Fappetizers%2Flasagna.png&f=1">Lasagna: $6</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2F78.media.tumblr.com%2F32f23f3391e3b515d6b430b895458f9c%2Ftumblr_ns6b039Uzg1uyxczto1_500.png&f=1">Mousaka: $6</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Fcake%2Fcake_PNG13111.png&f=1">Cake: $4</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngall.com%2Fwp-content%2Fuploads%2F2016%2F05%2FSalad-PNG-Image.png&f=1">Salad: $4</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.govegan.org.uk%2Fwp-content%2Fuploads%2F2015%2F11%2Ffruitsalad300x3007.png&f=1">Fruit Salad: $4</li>
<li class="listitem"><img height="50" width="50" src="http://miam-images.m.i.pic.centerblog.net/0837921e.png">Orange Juice: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngonly.com%2Fwp-content%2Fuploads%2F2017%2F05%2FApple-Juice-PNG-Picture.png&f=1">Apple Juice: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pkdrecipes.com%2Fwp-content%2Fuploads%2F2010%2F09%2F032411-Cranberry-Juice-400.png&f=1">Cranberry Juice: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fimg15.deviantart.net%2Fddeb%2Fi%2F2013%2F329%2F3%2Fe%2Fcookie__png_by_darksideofgraphic-d6vlc5l.png&f=1">Cookies: $3</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Fbiscuit%2Fbiscuit_PNG88.png&f=1">Biscuits: $3</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F07%2FPNGPIX-COM-Ice-Cream-PNG-Transparent-Image.png&f=1">Ice Cream: $3</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F11%2FPNGPIX-COM-Potato-Chips-PNG-Transparent-Image.png&f=1">Chips: $1</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fselflender.s3.amazonaws.com%2Fblog%2Ftacos_1.png&f=1">Tacos: $2</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpluspng.com%2Fimg-png%2Ffree-png-jelly-free-range-snack-co-452.png&f=1">Jelly: $1</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fbodyfuelcafe.co.uk%2Fwp-content%2Fuploads%2F2014%2F07%2FTHMB-Mango-Smoothie.png&f=1">Smoothie: $1</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2F37.media.tumblr.com%2F605762b2e80fbdda35364d5b0db4a540%2Ftumblr_n5bxf2lI5k1rx3d1wo1_r2_500.png&f=1">Pancakes: $8</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F10%2FPNGPIX-COM-Sandwich-PNG-Transparent-Image-500x282.png&f=1">Sandwich: $4</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Fburger_sandwich%2Fburger_sandwich_PNG4114.png&f=1">Burger: $5</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpngimg.com%2Fuploads%2Ffries%2Ffries_PNG77.png&f=1">Fries: $7</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.pngpix.com%2Fwp-content%2Fuploads%2F2016%2F10%2FPNGPIX-COM-Almond-Nut-PNG-Image-500x404.png&f=1">Nuts: $3</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=http%3A%2F%2Fpluspng.com%2Fimg-png%2Frice-png-rice-png-601.png&f=1">Rice: $8</li>
<li class="listitem"><img height="50" width="50" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fpruebeydisfrute.files.wordpress.com%2F2015%2F09%2Fchai-latte.png&f=1">Chai</li>
</ul>
</div>
<div id="orderdiv">
<ul id="orderlist">
<li id="firstitem" class="listorderitem"><img id="orderimg" height="100" width="100" src="https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fcdn.pixabay.com%2Fphoto%2F2016%2F09%2F13%2F18%2F38%2Fsilverware-1667988__340.png&f=1"></li>
<li class="listorderitem"><div id="ordertitlediv"><p id="ordertext">DM YOUR ORDER</p></div></li>
</ul>
<textarea id="txtarea" rows="5" cols="50"></textarea>
<ul id="takeorderlist">
<li class="takelist">$<p id="price"></p></li>
<li class="takelist"><p id="wall">|</p></li>
<li class="takelist"><button id="mybtn">ORDER</button></li>
</ul>
</div>
```
|
35,900,044 |
So I'm following a set of tutorials and have come to a point where I have the app working (a simple calculator), but the UI could still be improved.
The current UI works great on phones, but when going up to an iPad, parts of the UI expand as needed while the buttons stay quite small. I have no constraints keeping the buttons small, and the stack that the buttons are in does expand to fill the space.
My project files can be found here:
<https://github.com/maza256/retrocalc>
An image to explain is found here:
[iPhone and iPad Preview image](http://i.stack.imgur.com/0G3OW.png)
Any help would be great.
|
2016/03/09
|
[
"https://Stackoverflow.com/questions/35900044",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4644541/"
] |
Searching for the message in your exception returns the following SO answer:
[Mixed mode assembly is built against version 'v1.1.4322'](https://stackoverflow.com/questions/4018924/mixed-mode-assembly-is-built-against-version-v1-1-4322)
>
> You need to add an app.Config file and set useLegacyV2RuntimeActivationPolicy to true.
>
>
> This is required to use mixed mode CLR 2 assemblies in a .NET 4 application.
>
>
>
|
You can temporarily add an exception block to your code with a messagebox show:
```
try
{
... (your existing code)
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
```
Or, you could create a handler for unhandled exceptions and rely on that. To do so, make your project's Program.cs look something like this:
```
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
AppDomain.CurrentDomain.UnhandledException += Unhandled;
Application.Run(new FormMain());
}
static void Unhandled(object sender, UnhandledExceptionEventArgs exArgs)
{
// log the exception, display it, put a breakpoint here, whatever
}
```
Note, too, that "*== true*" is redundant when testing booleans; you can simply use "*if (soundSpeakers.Headphone)*" instead (and "*if (!soundSpeakers.Headphone)*" for the opposite).
|
48,962,743 |
For the following Codility task, my solution is 100% correct from the performance point of view but only 80% from the correctness point of view. My solution is failing for two elements, yet when I test it myself with different two-element values I get the required output. Please let me know how I can fix this issue.
**Codility Task link**
<https://app.codility.com/programmers/lessons/4-counting_elements/perm_check/>
My Code:
```
import java.util.*;
class Solution {
public int solution(int[] A) {
if(A.length == 1){
if(A[0] != 1)
return 0;
else
return 1;
}
Arrays.sort(A);
for(int i =1; i<A.length; i++){
if(A[i] - A[i-1] != 1){
return 0;
}
}
return 1;
}
}
```
**Failing Test Case:**
```
double two elements ✘WRONG ANSWER got 1 expected 0
1. 0.004 s OK
2. 0.004 s OK
3. 0.004 s WRONG ANSWER, got 1 expected 0
4. 0.004 s OK
```
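For reference, the likely cause (my reading of the failing case, not a confirmed diagnosis): the loop only verifies that consecutive sorted values differ by 1, so a sorted array like `[2, 3]` returns 1 even though a permutation of 1..N must start at 1. A minimal sketch of the corrected check:

```java
import java.util.Arrays;

class Solution {
    public int solution(int[] A) {
        Arrays.sort(A);
        // A permutation of 1..N must start at 1 once sorted;
        // the original code never checked this, so e.g. [2, 3] passed.
        if (A.length == 0 || A[0] != 1) return 0;
        for (int i = 1; i < A.length; i++) {
            // a difference of 0 catches duplicates, > 1 catches gaps
            if (A[i] - A[i - 1] != 1) return 0;
        }
        return 1;
    }
}
```

With this check the single-element special case is also covered, since `A[0] != 1` handles it.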
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48962743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8494430/"
] |
#### A) To uninstall `Docker Engine`:
```
sudo yum remove docker-ce docker-ce-cli containerd.io docker-compose-plugin
```
#### B) To uninstall old versions of docker (previously called `docker` or `docker-engine`):
```
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
```
---
And delete
* the directories `/var/lib/docker` and/or `/var/lib/containerd`, which contains your images, containers and volumes
* and `/etc/docker`, which contains docker configuration files.
Source: [Official docs (uninstall Docker Engine)](https://docs.docker.com/engine/install/centos/#uninstall-docker-engine) | [Official docs (uninstall old versions)](https://docs.docker.com/engine/install/centos/#uninstall-old-versions).
|
For newer versions you need to remove the CLI as well:
```sh
sudo yum remove -y docker-ce docker-ce-cli
```
That will do the trick.
|
48,962,743 |
For the following Codility task, my solution is 100% correct from the performance point of view but only 80% from the correctness point of view. My solution is failing for two elements, yet when I test it myself with different two-element values I get the required output. Please let me know how I can fix this issue.
**Codility Task link**
<https://app.codility.com/programmers/lessons/4-counting_elements/perm_check/>
My Code:
```
import java.util.*;
class Solution {
public int solution(int[] A) {
if(A.length == 1){
if(A[0] != 1)
return 0;
else
return 1;
}
Arrays.sort(A);
for(int i =1; i<A.length; i++){
if(A[i] - A[i-1] != 1){
return 0;
}
}
return 1;
}
}
```
**Failing Test Case:**
```
double two elements ✘WRONG ANSWER got 1 expected 0
1. 0.004 s OK
2. 0.004 s OK
3. 0.004 s WRONG ANSWER, got 1 expected 0
4. 0.004 s OK
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48962743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8494430/"
] |
#### A) To uninstall `Docker Engine`:
```
sudo yum remove docker-ce docker-ce-cli containerd.io docker-compose-plugin
```
#### B) To uninstall old versions of docker (previously called `docker` or `docker-engine`):
```
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
```
---
And delete
* the directories `/var/lib/docker` and/or `/var/lib/containerd`, which contains your images, containers and volumes
* and `/etc/docker`, which contains docker configuration files.
Source: [Official docs (uninstall Docker Engine)](https://docs.docker.com/engine/install/centos/#uninstall-docker-engine) | [Official docs (uninstall old versions)](https://docs.docker.com/engine/install/centos/#uninstall-old-versions).
|
A newer answer for this is:
Uninstall the Docker Engine, CLI, and Containerd packages:
```
$ sudo yum remove docker-ce docker-ce-cli containerd.io
```
Images, containers, volumes, or customized configuration files on your host are not automatically removed. To delete all images, containers, and volumes:
```
$ sudo rm -rf /var/lib/docker
```
Source: <https://docs.docker.com/engine/install/centos/#uninstall-docker-engine>
|
48,962,743 |
For the following Codility task, my solution is 100% correct from the performance point of view but only 80% from the correctness point of view. My solution is failing for two elements, yet when I test it myself with different two-element values I get the required output. Please let me know how I can fix this issue.
**Codility Task link**
<https://app.codility.com/programmers/lessons/4-counting_elements/perm_check/>
My Code:
```
import java.util.*;
class Solution {
public int solution(int[] A) {
if(A.length == 1){
if(A[0] != 1)
return 0;
else
return 1;
}
Arrays.sort(A);
for(int i =1; i<A.length; i++){
if(A[i] - A[i-1] != 1){
return 0;
}
}
return 1;
}
}
```
**Failing Test Case:**
```
double two elements ✘WRONG ANSWER got 1 expected 0
1. 0.004 s OK
2. 0.004 s OK
3. 0.004 s WRONG ANSWER, got 1 expected 0
4. 0.004 s OK
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48962743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8494430/"
] |
#### A) To uninstall `Docker Engine`:
```
sudo yum remove docker-ce docker-ce-cli containerd.io docker-compose-plugin
```
#### B) To uninstall old versions of docker (previously called `docker` or `docker-engine`):
```
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
```
---
And delete
* the directories `/var/lib/docker` and/or `/var/lib/containerd`, which contains your images, containers and volumes
* and `/etc/docker`, which contains docker configuration files.
Source: [Official docs (uninstall Docker Engine)](https://docs.docker.com/engine/install/centos/#uninstall-docker-engine) | [Official docs (uninstall old versions)](https://docs.docker.com/engine/install/centos/#uninstall-old-versions).
|
I'm on CentOS 7 and followed all the above suggestions which worked to remove docker's files and commands, but I still found it in my repo list.
```
yum repolist
```
Showed up...
```
docker-ce-stable/7/x86_64 Docker CE Stable - x86_64 117
```
I removed it via the following commands:
```
cd /etc/yum.repos.d/
rm docker-ce.repo
rm: remove regular file ‘docker-ce.repo’? y
```
This removed it then I ran these two commands (add sudo before each if you are not logged in as root) to clean the cache and update:
```
yum clean all
yum makecache
```
I then found 2 further entries in the yum cache that needed removing manually:
```
rm -rf /var/cache/yum/x86_64/7/docker-ce-nightly
rm -rf /var/cache/yum/x86_64/7/docker-ce-stable
```
|
48,962,743 |
For the following Codility task, my solution is 100% correct from the performance point of view but only 80% from the correctness point of view. My solution is failing for two elements, yet when I test it myself with different two-element values I get the required output. Please let me know how I can fix this issue.
**Codility Task link**
<https://app.codility.com/programmers/lessons/4-counting_elements/perm_check/>
My Code:
```
import java.util.*;
class Solution {
public int solution(int[] A) {
if(A.length == 1){
if(A[0] != 1)
return 0;
else
return 1;
}
Arrays.sort(A);
for(int i =1; i<A.length; i++){
if(A[i] - A[i-1] != 1){
return 0;
}
}
return 1;
}
}
```
**Failing Test Case:**
```
double two elements ✘WRONG ANSWER got 1 expected 0
1. 0.004 s OK
2. 0.004 s OK
3. 0.004 s WRONG ANSWER, got 1 expected 0
4. 0.004 s OK
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48962743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8494430/"
] |
For newer versions you need to remove the CLI as well:
```sh
sudo yum remove -y docker-ce docker-ce-cli
```
That will do the trick.
|
A newer answer for this is:
Uninstall the Docker Engine, CLI, and Containerd packages:
```
$ sudo yum remove docker-ce docker-ce-cli containerd.io
```
Images, containers, volumes, or customized configuration files on your host are not automatically removed. To delete all images, containers, and volumes:
```
$ sudo rm -rf /var/lib/docker
```
Source: <https://docs.docker.com/engine/install/centos/#uninstall-docker-engine>
|
48,962,743 |
For the following Codility task, my solution is 100% correct from the performance point of view but only 80% from the correctness point of view. My solution is failing for two elements, yet when I test it myself with different two-element values I get the required output. Please let me know how I can fix this issue.
**Codility Task link**
<https://app.codility.com/programmers/lessons/4-counting_elements/perm_check/>
My Code:
```
import java.util.*;
class Solution {
public int solution(int[] A) {
if(A.length == 1){
if(A[0] != 1)
return 0;
else
return 1;
}
Arrays.sort(A);
for(int i =1; i<A.length; i++){
if(A[i] - A[i-1] != 1){
return 0;
}
}
return 1;
}
}
```
**Failing Test Case:**
```
double two elements ✘WRONG ANSWER got 1 expected 0
1. 0.004 s OK
2. 0.004 s OK
3. 0.004 s WRONG ANSWER, got 1 expected 0
4. 0.004 s OK
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48962743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8494430/"
] |
For newer versions you need to remove the CLI as well:
```sh
sudo yum remove -y docker-ce docker-ce-cli
```
That will do the trick.
|
I'm on CentOS 7 and followed all the above suggestions which worked to remove docker's files and commands, but I still found it in my repo list.
```
yum repolist
```
Showed up...
```
docker-ce-stable/7/x86_64 Docker CE Stable - x86_64 117
```
I removed it via the following commands:
```
cd /etc/yum.repos.d/
rm docker-ce.repo
rm: remove regular file ‘docker-ce.repo’? y
```
This removed it then I ran these two commands (add sudo before each if you are not logged in as root) to clean the cache and update:
```
yum clean all
yum makecache
```
I then found 2 further entries in the yum cache that needed removing manually:
```
rm -rf /var/cache/yum/x86_64/7/docker-ce-nightly
rm -rf /var/cache/yum/x86_64/7/docker-ce-stable
```
|
48,962,743 |
For the following Codility task, my solution is 100% correct from the performance point of view but only 80% from the correctness point of view. My solution is failing for two elements, yet when I test it myself with different two-element values I get the required output. Please let me know how I can fix this issue.
**Codility Task link**
<https://app.codility.com/programmers/lessons/4-counting_elements/perm_check/>
My Code:
```
import java.util.*;
class Solution {
public int solution(int[] A) {
if(A.length == 1){
if(A[0] != 1)
return 0;
else
return 1;
}
Arrays.sort(A);
for(int i =1; i<A.length; i++){
if(A[i] - A[i-1] != 1){
return 0;
}
}
return 1;
}
}
```
**Failing Test Case:**
```
double two elements ✘WRONG ANSWER got 1 expected 0
1. 0.004 s OK
2. 0.004 s OK
3. 0.004 s WRONG ANSWER, got 1 expected 0
4. 0.004 s OK
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48962743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8494430/"
] |
A newer answer for this is:
Uninstall the Docker Engine, CLI, and Containerd packages:
```
$ sudo yum remove docker-ce docker-ce-cli containerd.io
```
Images, containers, volumes, or customized configuration files on your host are not automatically removed. To delete all images, containers, and volumes:
```
$ sudo rm -rf /var/lib/docker
```
Source: <https://docs.docker.com/engine/install/centos/#uninstall-docker-engine>
|
I'm on CentOS 7 and followed all the above suggestions which worked to remove docker's files and commands, but I still found it in my repo list.
```
yum repolist
```
Showed up...
```
docker-ce-stable/7/x86_64 Docker CE Stable - x86_64 117
```
I removed it via the following commands:
```
cd /etc/yum.repos.d/
rm docker-ce.repo
rm: remove regular file ‘docker-ce.repo’? y
```
This removed it then I ran these two commands (add sudo before each if you are not logged in as root) to clean the cache and update:
```
yum clean all
yum makecache
```
I then found 2 further entries in the yum cache that needed removing manually:
```
rm -rf /var/cache/yum/x86_64/7/docker-ce-nightly
rm -rf /var/cache/yum/x86_64/7/docker-ce-stable
```
|
44,642,544 |
I'm trying to pass a boolean parameter from the 'GetClientValidationRules' method of a validator.
However the parameter comes through as a string.
Is it possible to pass the parameter as an actual boolean value?
Code for reference:
Server-side:
```
public override IEnumerable<ModelClientValidationRule> GetClientValidationRules()
{
var rule = new ModelClientValidationRule()
{
    ErrorMessage = "This is an error.",
    ValidationType = "test"
};
rule.ValidationParameters.Add("test", true);
yield return rule;
}
```
Client-side:
```
$.validator.unobtrusive.adapters.add("test", ["test"], function(options) {
var parameter = options.params.test;
if (parameter) { // Always true, because parameter === "True" (string value)
...
}
});
```
I understand that I could just use 'if (parameter === "True")' but would like to avoid doing so if possible.
Any help would be greatly appreciated.
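One hedged client-side workaround — assuming the parameter always arrives serialised as the string `"True"`/`"False"` — is to centralise the conversion in a small helper instead of scattering string comparisons across adapters:

```javascript
// Coerce an unobtrusive-validation parameter to a real boolean.
// Assumption: server-side bools are serialised as "True"/"False".
function toBool(param) {
  if (typeof param === "boolean") return param;
  return String(param).toLowerCase() === "true";
}

console.log(toBool("True"));  // true
console.log(toBool("False")); // false
```

Alternatively the server could add the value already lower-cased (e.g. `true.ToString().ToLowerCase()`) so it round-trips as `"true"`, but both approaches are workarounds rather than documented behaviour.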
|
2017/06/20
|
[
"https://Stackoverflow.com/questions/44642544",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4127388/"
] |
Here's a spec that simulates Spark's avro auto-schema generation in a non-spark environment and tests the ser/de.
```
private val sconf = new SparkConf().set("spark.sql.avro.compression.codec", "snappy")
implicit val spark: SparkSession =
SparkSession.builder().appName("SparkTest").master("local[*]").config(sconf).getOrCreate()
implicit def sc: SparkContext = spark.sparkContext
"a foo" should {
"serialise to/from avro" in {
val foo = Foo("bar")
val tempPath = "./foo_avro"
val rdd = sc.parallelize(List(foo), 1)
val jsonRdd = rdd.map(_.toJson)
spark.read.json(jsonRdd).write.mode(SaveMode.Overwrite).avro(tempPath)
val readItBack = sc.newAPIHadoopFile(tempPath,
classOf[AvroKeyInputFormat[GenericRecord]],
classOf[AvroKey[GenericRecord]],
classOf[NullWritable])
val foos: Set[Foo] =
readItBack.keys.map(x => FooAvroParser(x.datum()).parse).collect().toSet
FileUtils.deleteDirectory(new File(tempPath))
foos mustEqual Set(foo)
}.set(minTestsOk = 5)
```
**build.sbt**
```
libraryDependencies ++= Seq(
"com.databricks" %% "spark-avro" % "3.2.0" % "test" exclude ("org.apache.avro", "avro"),
"org.apache.spark" %% "spark-sql" % "2.1.1" % "test",
"org.apache.avro" % "avro" % "1.7.7"
)
dependencyOverrides ++= Set(
"com.fasterxml.jackson.core" % "jackson-core" % "2.8.7" % "test",
"com.fasterxml.jackson.core" % "jackson-databind" % "2.8.7" % "test",
"com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.8.7" % "test"
)
```
|
You don't need to emulate serialization; you just need to create a `GenericRecord`:
```
val gr = new GenericRecordBuilder(schema)
.put("bar", 10)
.put("baz", "bat")
.build
foo(gr) shouldBe Foo(10, "bat")
```
|
388,681 |
I am trying to write a .awk source file to filter a .txt, and I want to know how to use the max variable afterwards in the second command.
```
BEGIN {max1=0}
```
Find the max value in $4 between two patterns (0 and 1) and set it as a variable
================================================================================
```
{if ($4>0 && $4<1)
max1=$4
else if ($4==1)
max=max1}
END {print max}
/Nodes/ {f=1} /EndNodes/ {f=0} #Gives lines after Nodes and before EndNodes
{if ($2+0>=0 && $3+0==0 && max==$4) #Filters the given lines between Nodes and EndNodes
{print $1}}
```
My guess is that I need to run the program from the start after the max variable is defined (because it uses the same lines as the second command).
The desired output should be: 20, 31 and 32
**INPUT**
```
$Nodes
34
1 0.0000000E+000 0.0000000E+000 0.0000000E+000
2 6.0000000E-003 0.0000000E+000 0.0000000E+000
3 0.0000000E+000 6.0000000E-003 0.0000000E+000
4 -6.0000000E-003 0.0000000E+000 0.0000000E+000
5 0.0000000E+000 -6.0000000E-003 0.0000000E+000
6 2.1213203E-003 2.1213203E-003 0.0000000E+000
7 -2.1213203E-003 2.1213203E-003 0.0000000E+000
8 -2.1213203E-003 -2.1213203E-003 0.0000000E+000
9 2.1213203E-003 -2.1213203E-003 0.0000000E+000
10 4.2426407E-003 4.2426407E-003 0.0000000E+000
11 -4.2426407E-003 4.2426407E-003 0.0000000E+000
12 -4.2426407E-003 -4.2426407E-003 0.0000000E+000
13 4.2426407E-003 -4.2426407E-003 0.0000000E+000
14 2.1213203E-003 0.0000000E+000 0.0000000E+000
15 0.0000000E+000 2.1213203E-003 0.0000000E+000
16 -2.1213203E-003 0.0000000E+000 0.0000000E+000
17 0.0000000E+000 -2.1213203E-003 0.0000000E+000
18 0.0000000E+000 2.1213203E-003 6.0000000E-003
19 0.0000000E+000 6.0000000E-003 6.0000000E-003
20 0.0000000E+000 0.0000000E+000 6.0000000E-003
21 -4.2426407E-003 4.2426407E-003 6.0000000E-003
22 -2.1213203E-003 2.1213203E-003 6.0000000E-003
23 -6.0000000E-003 0.0000000E+000 6.0000000E-003
24 -2.1213203E-003 0.0000000E+000 6.0000000E-003
25 -4.2426407E-003 -4.2426407E-003 6.0000000E-003
26 -2.1213203E-003 -2.1213203E-003 6.0000000E-003
27 0.0000000E+000 -6.0000000E-003 6.0000000E-003
28 0.0000000E+000 -2.1213203E-003 6.0000000E-003
29 4.2426407E-003 -4.2426407E-003 6.0000000E-003
30 2.1213203E-003 -2.1213203E-003 6.0000000E-003
31 6.0000000E-003 0.0000000E+000 6.0000000E-003
32 2.1213203E-003 0.0000000E+000 6.0000000E-003
33 4.2426407E-003 4.2426407E-003 6.0000000E-003
34 2.1213203E-003 2.1213203E-003 6.0000000E-003
$EndNodes
$Elements
#And some more data
$EndElements
```
|
2017/08/27
|
[
"https://unix.stackexchange.com/questions/388681",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248391/"
] |
This is a one-pass solution:
```
/Nodes/ { read = 1 }
/EndNodes/ { read = 0 }
!read { next }
NF == 4 { n = $1; x = $2; y = $3; z = $4 }
z > max { delete set; i = 1; max = z }
x >= 0 && y == 0 && z == max { set[i++] = n }
END { for (i in set) { print set[i] } }
```
The `read` variable determines if we should act on the current record or not. If it's 1, then we do.
The 3rd block discards the current input if we're not interested in it and continues from the top with the next record.
The 4th block sets up four convenience variables, `n`, `x`, `y` and `z`. They are nicer to read than `$1` etc.
The 5th block deletes the array `set`. The `set` array is the set of all node numbers that we've found so far and that fulfill the criteria. Since this block executes if we've found a new maximum for `z`, all previously found nodes are invalidated. We also save the new maximum (`max`). The variable `i` is just an index into the array (a counter, basically). If no maximum has yet been found, then the uninitialized `max` will be treated as zero in the test.
The 6th block executes when we've found a node that fulfills the criteria. The node number is saved in the `set` array and `i` is incremented.
At the end, we loop through the `set` array and output its contents.
The result when running under GNU `awk` is
```
20
31
32
```
BSD `awk` and `mawk` running on OpenBSD produces the list in the opposite order.
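The same one-pass logic can be sketched in Python as a cross-check of the algorithm outside awk (this is not part of the original answer; it uses an abbreviated version of the question's sample input):

```python
# Abbreviated sample input (a few node lines from the question's file).
data = """$Nodes
34
1 0.0000000E+000 0.0000000E+000 0.0000000E+000
20 0.0000000E+000 0.0000000E+000 6.0000000E-003
21 -4.2426407E-003 4.2426407E-003 6.0000000E-003
31 6.0000000E-003 0.0000000E+000 6.0000000E-003
32 2.1213203E-003 0.0000000E+000 6.0000000E-003
$EndNodes
"""

reading = False
nodes, max_z = [], None
for line in data.splitlines():
    # mirror the awk pattern blocks: /Nodes/ turns reading on, /EndNodes/ off
    if "Nodes" in line:
        reading = True
    if "EndNodes" in line:
        reading = False
    if not reading:
        continue
    fields = line.split()
    if len(fields) != 4:              # NF == 4: only real node records
        continue
    n = fields[0]
    x, y, z = (float(v) for v in fields[1:])
    if max_z is None or z > max_z:    # new maximum: invalidate earlier hits
        nodes, max_z = [], z
    if x >= 0 and y == 0 and z == max_z:
        nodes.append(n)

print(nodes)  # ['20', '31', '32']
```

Note that node 1 is collected first (its `z` is the running maximum at that point) and then discarded when a larger `z` appears, exactly as the `delete set` block does in awk.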
|
***awk*** solution:
***get\_max\_nodes.awk*** script:
```
#!/bin/awk -f
BEGIN{ max=0 }
NR==FNR{ # processing the 1st input file
if ($4~/^[0-9]/) { # if the 4th field is a number
if($4+0 > max) max=$4+0 # capturing maximal number
}
next
}
{ # processing the 2nd input file (same file)
if ($4~/^[0-9]/ && $2+0>=0 && $3+0==0 && $4+0==max) {
print $1
}
}
```
*Usage:*
```
awk -f get_max_nodes.awk input.txt input.txt
```
The output:
```
20
31
32
```
|
388,681 |
I am trying to write a .awk source file to filter a .txt file, and I want to know how to use the max variable later, in the second command
```
BEGIN {max1=0}
```
Find the max value in $4 between two patterns (0 and 1) and set it as a variable
================================================================================
```
{if ($4>0 && $4<1)
max1=$4
else if ($4==1)
max=max1}
END {print max}
/Nodes/ {f=1} /EndNodes/ {f=0} #Gives lines after Nodes and before EndNodes
{if ($2+0>=0 && $3+0==0 && max==$4) #Filters the given lines between Nodes and EndNodes
{print $1}}
```
My guess is that I need to run the program from the start after the max variable is defined (because it uses the same lines as the second command).
The desired output should be: 20, 31 and 32
**INPUT**
```
$Nodes
34
1 0.0000000E+000 0.0000000E+000 0.0000000E+000
2 6.0000000E-003 0.0000000E+000 0.0000000E+000
3 0.0000000E+000 6.0000000E-003 0.0000000E+000
4 -6.0000000E-003 0.0000000E+000 0.0000000E+000
5 0.0000000E+000 -6.0000000E-003 0.0000000E+000
6 2.1213203E-003 2.1213203E-003 0.0000000E+000
7 -2.1213203E-003 2.1213203E-003 0.0000000E+000
8 -2.1213203E-003 -2.1213203E-003 0.0000000E+000
9 2.1213203E-003 -2.1213203E-003 0.0000000E+000
10 4.2426407E-003 4.2426407E-003 0.0000000E+000
11 -4.2426407E-003 4.2426407E-003 0.0000000E+000
12 -4.2426407E-003 -4.2426407E-003 0.0000000E+000
13 4.2426407E-003 -4.2426407E-003 0.0000000E+000
14 2.1213203E-003 0.0000000E+000 0.0000000E+000
15 0.0000000E+000 2.1213203E-003 0.0000000E+000
16 -2.1213203E-003 0.0000000E+000 0.0000000E+000
17 0.0000000E+000 -2.1213203E-003 0.0000000E+000
18 0.0000000E+000 2.1213203E-003 6.0000000E-003
19 0.0000000E+000 6.0000000E-003 6.0000000E-003
20 0.0000000E+000 0.0000000E+000 6.0000000E-003
21 -4.2426407E-003 4.2426407E-003 6.0000000E-003
22 -2.1213203E-003 2.1213203E-003 6.0000000E-003
23 -6.0000000E-003 0.0000000E+000 6.0000000E-003
24 -2.1213203E-003 0.0000000E+000 6.0000000E-003
25 -4.2426407E-003 -4.2426407E-003 6.0000000E-003
26 -2.1213203E-003 -2.1213203E-003 6.0000000E-003
27 0.0000000E+000 -6.0000000E-003 6.0000000E-003
28 0.0000000E+000 -2.1213203E-003 6.0000000E-003
29 4.2426407E-003 -4.2426407E-003 6.0000000E-003
30 2.1213203E-003 -2.1213203E-003 6.0000000E-003
31 6.0000000E-003 0.0000000E+000 6.0000000E-003
32 2.1213203E-003 0.0000000E+000 6.0000000E-003
33 4.2426407E-003 4.2426407E-003 6.0000000E-003
34 2.1213203E-003 2.1213203E-003 6.0000000E-003
$EndNodes
$Elements
#And some more data
$EndElements
```
|
2017/08/27
|
[
"https://unix.stackexchange.com/questions/388681",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248391/"
] |
This is a one-pass solution:
```
/Nodes/ { read = 1 }
/EndNodes/ { read = 0 }
!read { next }
NF == 4 { n = $1; x = $2; y = $3; z = $4 }
z > max { delete set; i = 1; max = z }
x >= 0 && y == 0 && z == max { set[i++] = n }
END { for (i in set) { print set[i] } }
```
The `read` variable determines if we should act on the current record or not. If it's 1, then we do.
The 3rd block discards the current input if we're not interested in it and continues from the top with the next record.
The 4th block sets up four convenience variables, `n`, `x`, `y` and `z`. They are nicer to read than `$1` etc.
The 5th block deletes the array `set`. The `set` array is the set of all node numbers that we've found so far and that fulfill the criteria. Since this block executes if we've found a new maximum for `z`, all previously found nodes are invalidated. We also save the new maximum (`max`). The variable `i` is just an index into the array (a counter, basically). If no maximum has yet been found, then the uninitialized `max` will be treated as zero in the test.
The 6th block executes when we've found a node that fulfills the criteria. The node number is saved in the `set` array and `i` is incremented.
At the end, we loop through the `set` array and output its contents.
The result when running under GNU `awk` is
```
20
31
32
```
BSD `awk` and `mawk` running on OpenBSD produces the list in the opposite order.
|
I do not have enough reputation to comment, so I am forced to respond with an answer. My first comment would have been that awk is not the best tool for doing real number math. It is better at strings and integer numbers.
Other points on awk: The BEGIN paragraph happens before any lines are read from your input. The END paragraph happens after all lines have been read.
Note: awk does not save or care about anything except the current line/record of input, unless you take steps to store the records/fields within your code. This needs to happen between BEGIN and END.
Your code between these paragraphs is only setting/resetting variables but doing nothing with their values. Essentially, you could tail -1 your input file and pipe the output to cut, to achieve similar results.
It seems that your intention is to test the number in column 4 to find max or max1, and only print column 1 if column 4 contains this max and columns 2 and 3 pass their tests. This logic needs to be moved before the END unless you only care about the last line of the file.
Remember, in awk, every line of input (by default) is compared against every condition. If the condition is true, the action or list of actions is executed. It is highly possible to have multiple conditions trigger actions on the same line.
My first impression is that you need to re-evaluate your process. Decide the order of importance and take actions accordingly. For example, to me, the first order of importance is only acting on the data between two flags in my input file. Next, determine whether the real numbers in each of the 4 fields may be converted to (or treated as) integers (or strings) without losing their meaning. The actual data itself does not need to change, just its representation within your code. They could be converted back again, but the probability of losing the accuracy of the original number is high. Lastly, choose whether or not you need to store all/any of this unsorted, random data for later processing, or give immediate per-line output.
The following meta example could be done much more efficiently...
Begin with a flag to let you know if it is time to start parsing. Have you seen the `$0 ~ /Nodes/` line yet? `start=0`. You may also want to prepare for knowing when to stop parsing data by testing for the `$0 ~ /EndNodes/` line, `stop=0`. And you might want a counter if you are storing data, `count=0`.
```
BEGIN {
start=0
stop=0
count=0
max=0
}
/EndNodes/ {
stop=1
}
/Nodes/ {
start=1
}
NF==4 {
if (start==1 && stop==0) {
count++
column1[count]=$1
column2[count]=substr($2,1,index($2,".")-1)
column3[count]=substr($3,1,index($3,".")-1)
column4[count]=substr($4,1,index($4,".")-1)
}
}
# Now print column1 if column2 is non-negative and column3=0 and column4=max
# In the first loop through the array/list, find max
END {
for (loop=1;loop<=count;loop++) {
if (column4[loop]>max) {
max=column4[loop]
}
}
for (loop=1;loop<=count;loop++) {
if (column4[loop]==max && column3[loop]==0 && column2[loop]>=0) {
print column1[loop]
}
}
}
```
As written, the output will be
```
20
31
32
```
|
20,938,797 |
I want to create a custom control ideally like this:
```
<foo:Frobnicate runat="server">
<DoStuffWithThese>
<asp:TextBox runat="server" ID="fizzbot" />
</DoStuffWithThese>
<!-- other controls can be in the wrapper but outside the DoSomeStuffWithThese collection -->
<asp:DropDownList runat="server" ID="othercontrol" />
</foo:Frobnicate>
```
although I would settle for this if necessary:
```
<foo:Frobnicate runat="server">
<DoStuffWithThese>
<asp:TextBox runat="server" ID="fizzbot" />
</DoStuffWithThese>
<AndOtherStuffWithThese>
<asp:DropDownList runat="server" ID="othercontrol" />
</AndOtherStuffWithThese>
</foo:Frobnicate>
```
I can access the controls in the codebehind OK (in the second, not-ideal example), but **for some reason their IDs (which I need) are NULL**.
Here is the codebehind:
```
[ParseChildren(true)]
public class Frobnicate : WebControl {
[PersistenceMode(PersistenceMode.InnerProperty)]
public List<WebControl> DoStuffWithThese { get; set; }
[PersistenceMode(PersistenceMode.InnerProperty)]
public List<WebControl> AndOtherStuffWithThese { get; set; }
protected override void OnLoad(EventArgs e) {
base.OnLoad(e);
foreach(Control currentControl in DoStuffWithThese) {
// the control can be accessed (e.g. I can see it's a TextBox, etc.
// but currentControl.ID == null here -- why? :(
}
    }
}
```
Does anyone know why this is? And more importantly, how I can fix it so I can get a custom control in one of those formats and able to access the IDs?
|
2014/01/05
|
[
"https://Stackoverflow.com/questions/20938797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
I believe you have to implement `INamingContainer` in your class, WebControl doesn't do that on its own. Note that it's just a marker interface, you don't need any code. Also, instead of a list of controls, you should probably use `ITemplate` (which allows you to actually create the controls and use them in the output of your control).
This should be helpful: <http://msdn.microsoft.com/en-us/library/36574bf6(v=vs.85).aspx>
If this isn't what you want, please elaborate on what you're actually trying to do, and why you don't want to use templates.
To explain a bit more - just by adding a reference to a server control, you're not actually adding it anywhere to the page / control tree, which means it isn't *actually* created, and it doesn't work as part of the page, really (which includes a useful UniqueID/ClientID, and pretty much any logic except for the constructor).
When you have the template, you can use data binding to fill in the required data etc., and you can access the controls using `FindControl`, for example (or just use the `Controls` collection, but be aware that the template will likely also contain literals and other stuff than just your controls).
|
Can you get the id of the dropdown? If so, I think it has to do with the other items missing runat=server.
|
481,868 |
Show that the function $x^4 – 3x^2 + x-10$ cannot have a root inside $(0,2)$.
Please note that the roots of $f'(x)$ cannot be found using a calculator. I attempted the question by calculating $f'(x)$ and assuming that at least one root of $f(x)$ exists in $(0,2)$. However, I had difficulty locating the points of maxima/minima, which cannot be done without finding the roots of $f'(x)$. I think there's another way to do it. Any suggestions?
|
2013/09/02
|
[
"https://math.stackexchange.com/questions/481868",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/92679/"
] |
You can show that in the domain $(0,2)$,
$$ x^4 - 3x^2 + x - 10 < -10 + 3x.$$
This is equivalent to
$$x (x-2) (x+1)^2 < 0. $$
Since $-10 + 3x < 0 $ in the domain $(0,2)$, it follows that the function is never 0 in the domain.
---
The RHS of the inequality is obtained by finding the linear function which satisfies the values at the end points. This helps us by ensuring that we know 2 of the roots.
|
Note that the function $g(x) = x^4-3x^2$ achieves maximum modulus in $[0,2]$ at $x=2$, with $|g(2)|=4$. Also note that in $(0,2)$ we have
$$|x^4-3x^2+x-10|>10-|x|-|g(x)| > 8-|g(x)| \geq 4.$$
|
481,868 |
Show that the function $x^4 – 3x^2 + x-10$ cannot have a root inside $(0,2)$.
Please note that the roots of $f'(x)$ cannot be found using a calculator. I attempted the question by calculating $f'(x)$ and assuming that at least one root of $f(x)$ exists in $(0,2)$. However, I had difficulty locating the points of maxima/minima, which cannot be done without finding the roots of $f'(x)$. I think there's another way to do it. Any suggestions?
|
2013/09/02
|
[
"https://math.stackexchange.com/questions/481868",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/92679/"
] |
Note that the function $g(x) = x^4-3x^2$ achieves maximum modulus in $[0,2]$ at $x=2$, with $|g(2)|=4$. Also note that in $(0,2)$ we have
$$|x^4-3x^2+x-10|>10-|x|-|g(x)| > 8-|g(x)| \geq 4.$$
|
Calvin's method looks great [+1]. If you really want other ways, you could try
If $0 < x < 2$, then $x^3 < 8$ and $3x^2 + 10 > 2 \sqrt{30} x$ by AM-GM.
So $x^4 + x = x(x^3+1) < 9 x < 2 \sqrt{30} x < 3x^2 + 10$
|
481,868 |
Show that the function $x^4 – 3x^2 + x-10$ cannot have a root inside $(0,2)$.
Please note that the roots of $f'(x)$ cannot be found using a calculator. I attempted the question by calculating $f'(x)$ and assuming that at least one root of $f(x)$ exists in $(0,2)$. However, I had difficulty locating the points of maxima/minima, which cannot be done without finding the roots of $f'(x)$. I think there's another way to do it. Any suggestions?
|
2013/09/02
|
[
"https://math.stackexchange.com/questions/481868",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/92679/"
] |
Note that the function $g(x) = x^4-3x^2$ achieves maximum modulus in $[0,2]$ at $x=2$, with $|g(2)|=4$. Also note that in $(0,2)$ we have
$$|x^4-3x^2+x-10|>10-|x|-|g(x)| > 8-|g(x)| \geq 4.$$
|
Each term of a polynomial function $f:x\mapsto \sum\_{k=0}^na\_kx^k$ is monotonic in $x$ over any interval $[a,b]$ with $0\leqslant a\leqslant b$. Therefore $$f(x)\le \sum\_{k=0}^n a\_kc\_k^k\quad(a\leqslant x\leqslant b),$$where $c\_k=a$ if $a\_k\leqslant 0$ and $c\_k=b$ if $a\_k>0.$ Now you can apply this to your example by checking this upper bound in each of (say) the intervals $[0,\sqrt2]$, $[\sqrt2,\sqrt3],$ and $[\sqrt3,2]$, which is easy to calculate.
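A quick numeric check of this bound (not part of the original answer) can be run in Python, using the interval endpoints suggested above:

```python
import math

# f(x) = x^4 - 3x^2 + x - 10, written as {power: coefficient}
coeffs = {4: 1, 2: -3, 1: 1, 0: -10}

def upper_bound(a, b):
    # On [a, b] with 0 <= a <= b each term c * x^k is monotonic,
    # so evaluate at x = b for positive coefficients and x = a for negative ones.
    return sum(c * (b if c > 0 else a) ** k for k, c in coeffs.items())

intervals = [(0, math.sqrt(2)), (math.sqrt(2), math.sqrt(3)), (math.sqrt(3), 2)]
bounds = [upper_bound(a, b) for a, b in intervals]
print(bounds)
# every upper bound is negative, so f has no root in (0, 2)
assert all(ub < 0 for ub in bounds)
```

The three bounds come out to roughly $-4.59$, $-5.27$ and $-1$, confirming that $f<0$ on each piece.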
|
481,868 |
Show that the function $x^4 – 3x^2 + x-10$ cannot have a root inside $(0,2)$.
Please note that the roots of $f'(x)$ cannot be found using a calculator. I attempted the question by calculating $f'(x)$ and assuming that at least one root of $f(x)$ exists in $(0,2)$. However, I had difficulty locating the points of maxima/minima, which cannot be done without finding the roots of $f'(x)$. I think there's another way to do it. Any suggestions?
|
2013/09/02
|
[
"https://math.stackexchange.com/questions/481868",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/92679/"
] |
You can show that in the domain $(0,2)$,
$$ x^4 - 3x^2 + x - 10 < -10 + 3x.$$
This is equivalent to
$$x (x-2) (x+1)^2 < 0. $$
Since $-10 + 3x < 0 $ in the domain $(0,2)$, it follows that the function is never 0 in the domain.
---
The RHS of the inequality is obtained by finding the linear function which satisfies the values at the end points. This helps us by ensuring that we know 2 of the roots.
|
Calvin's method looks great [+1]. If you really want other ways, you could try
If $0 < x < 2$, then $x^3 < 8$ and $3x^2 + 10 > 2 \sqrt{30} x$ by AM-GM.
So $x^4 + x = x(x^3+1) < 9 x < 2 \sqrt{30} x < 3x^2 + 10$
|
481,868 |
Show that the function $x^4 – 3x^2 + x-10$ cannot have a root inside $(0,2)$.
Please note that the roots of $f'(x)$ cannot be found using a calculator. I attempted the question by calculating $f'(x)$ and assuming that at least one root of $f(x)$ exists in $(0,2)$. However, I had difficulty locating the points of maxima/minima, which cannot be done without finding the roots of $f'(x)$. I think there's another way to do it. Any suggestions?
|
2013/09/02
|
[
"https://math.stackexchange.com/questions/481868",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/92679/"
] |
You can show that in the domain $(0,2)$,
$$ x^4 - 3x^2 + x - 10 < -10 + 3x.$$
This is equivalent to
$$x (x-2) (x+1)^2 < 0. $$
Since $-10 + 3x < 0 $ in the domain $(0,2)$, it follows that the function is never 0 in the domain.
---
The RHS of the inequality is obtained by finding the linear function which satisfies the values at the end points. This helps us by ensuring that we know 2 of the roots.
|
Each term of a polynomial function $f:x\mapsto \sum\_{k=0}^na\_kx^k$ is monotonic in $x$ over any interval $[a,b]$ with $0\leqslant a\leqslant b$. Therefore $$f(x)\le \sum\_{k=0}^n a\_kc\_k^k\quad(a\leqslant x\leqslant b),$$where $c\_k=a$ if $a\_k\leqslant 0$ and $c\_k=b$ if $a\_k>0.$ Now you can apply this to your example by checking this upper bound in each of (say) the intervals $[0,\sqrt2]$, $[\sqrt2,\sqrt3],$ and $[\sqrt3,2]$, which is easy to calculate.
|
6,522,960 |
I want to join an arbitrary-length list of filters with `or`. If the list were fixed-length, it would look like this:
```
query.filter(filters(0) || filters(1) || … || filters(n))
```
Joining filters with `and` would be easy:
```
for (filter ← filters)
query = query.filter(filter)
```
Joining things that evaluate to `Boolean`s with `or` is easy, too:
```
val any = evaluateToBools.foldLeft(false)(
(left: Boolean, right: Eval2Bool) =>
left || right.evaluate
)
```
---
Update:
=======
As I wrote it, it would be easy if ScalaQuery's `filter` were a standard one. Unfortunately, ScalaQuery only allows these filters to be executed by the SQL engine.
So my specific question would be:
If I have a set of string tuples:
```
val tms = Set( ("A","a"), ("B", "b"), ... )
```
and a query with the two columns “t” and “m”,
how can I generate a filter that represents the following SQL:
```
... WHERE/AND ( (t="A" and m="a") or (t="B" and m="b") or ... )
```
…or can SQL `IN` operators be used with tuples like this?
```
... WHERE (t,m) IN (("A","a"), ("B","b"), ...)
```
And if so, how can it be done in ScalaQuery?
---
Hack:
-----
Currently, I do the following:
```
val tms = markers map { tm ⇒ tm._1 +"||"+ tm._2 }
query.filter(d ⇒ d._4 ++"||"++ d._5 inSet tms)
```
…but that’s unbearably hacky.
Solution
========
I implemented Stefan’s solution like this:
```
rq = rq filter { d ⇒
markers map { tm ⇒
(d._4 is tm._1) && (d._5 is tm._2)
} reduceLeft { _||_ }
}
```
|
2011/06/29
|
[
"https://Stackoverflow.com/questions/6522960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247482/"
] |
There is really nothing about `Query.filter` which would make this any different than combining predicates for filtering a Scala collection. Yes, it does have a more complicated type:
```
def filter[T](f: E => T)(implicit wt: CanBeQueryCondition[T]): Query[E, U] = ...
```
But you can safely ignore the `CanBeQueryCondition` typeclass and assume `T` to be `Column[Boolean]` or `Column[Option[Boolean]]` as long as you use that same type for all of your predicates (which you can always do).
So what is the type of your `filters` sequence? I assume this is where your problem lies. Let's start with filtering a Scala collection `List[User]`. Here the predicates should have the type `User => Boolean` and you can reduce the applied predicates with `||` to combine them:
```
case class User(id: Int, name: String)
val users = List(
User(1, "foo"),
User(2, "bar"),
User(3, "blub")
)
val filters = List(
{ u: User => u.id == 1 },
{ u: User => u.name == "bar" }
)
val filtered = users filter { u =>
filters map { _(u) } reduceLeft { _ || _ }
}
```
Now we add a database table for these `User` objects:
```
class DBUsers extends Table[User]("USERS") {
def id = column[Int]("ID")
def name = column[String]("NAME")
def * = id ~ name <> (User, User.unapply _)
}
object DBUsers extends DBUsers
```
Filtering a `Query[DBUsers]` requires predicates of type `DBUsers => Column[Boolean]`:
```
val dbFilters = List(
{ u: DBUsers => u.id === 1 },
{ u: DBUsers => u.name === "bar" }
)
```
Combining and applying the filters is exactly the same as before:
```
val dbFiltered = DBUsers filter { u =>
dbFilters map { _(u) } reduceLeft { _ || _ }
}
```
Regarding an `inSet` method for tuples: I think it's a good idea. Please file an enhancement request for it. Some database systems could support it natively, and for the others the encoding outlined in this answer could be used.
|
How about this?
```
query.filter(filters reduceLeft (_ || _))
```
|
6,008,940 |
Howdie
I've classified all the regular expressions I use the most as library files. To give a dummy example, I have a file called `/mysdk/library/regex/email/match` with the contents:
```
^[a-z]@[a-z]\.com$
```
(I know this is not the good regex, but that's for the example :)). And I have a lot of folders with commonly used regular expressions:
```
/library
/regex
/email
/url
/social_security
...
```
Now I'm doing a Perl script that will tell me if a given string matches a given regular expression from my library. For example,
```
perl my-script.pl email [email protected]
```
And the script should print 0 on false and 1 on true. But it does not work :( This is my script:
```
if($ARGV[1])
{
if(open(REGEX_CONSTANT,"/mysdk/library/regex/$ARGV[0]/match"))
{
$regex_constant=<REGEX_CONSTANT>;
close(REGEX_CONSTANT);
if($ARGV[1] =~ m/$regex_constant/) { print 1; exit }
}
print 0
}
else
{
print 0
}
```
I've also tried
```
if($ARGV[1] =~ m/($regex_constant)/) { print 1; exit }
```
Even when the string is supposed to match, it prints 0. I know it finds the file and successfully reads the content from it because I've debugged that. What am I doing wrong?
|
2011/05/15
|
[
"https://Stackoverflow.com/questions/6008940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/754522/"
] |
It's possible you have trailing newlines on those, or maybe headers. Try running chomp on the strings you read in. Hard to tell without knowing what the files look like.
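The failure mode can be demonstrated outside Perl; here is a Python sketch of the same problem (the behaviour of the Perl pattern is analogous): the newline read along with the pattern becomes part of the regex, so the anchored match fails until it is stripped.

```python
import re

# Simulates $regex_constant = <REGEX_CONSTANT>:
# the line comes back with its trailing newline.
pattern_from_file = "^[a-z]+@[a-z]+\\.com$\n"

# The embedded "\n" must match after the end anchor, so plain input fails...
assert re.search(pattern_from_file, "foo@bar.com") is None

# ...while the chomp-ed (stripped) pattern matches as intended.
assert re.search(pattern_from_file.rstrip("\n"), "foo@bar.com") is not None
```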
|
Have you considered looking at `Regexp::Common`
This might not solve your problem directly, but could help you classify and its already includes some very common regex's you may find useful.
|
6,008,940 |
Howdie
I 've classified all the regular expressions I used the most as library files. To give a dummy example, I have a file called `/mysdk/library/regex/email/match` with the contents:
```
^[a-z]@[a-z]\.com$
```
(I know this is not the good regex, but that's for the example :)). And I have a lot of folders with commonly used regular expressions:
```
/library
/regex
/email
/url
/social_security
...
```
Now I'm doing a Perl script that will tell me if a given string matches a given regular expression from my library. For example,
```
perl my-script.pl email [email protected]
```
And the script should print 0 on false and 1 on true. But it does not work :( This my script:
```
if($ARGV[1])
{
if(open(REGEX_CONSTANT,"/mysdk/library/regex/$ARGV[0]/match"))
{
$regex_constant=<REGEX_CONSTANT>;
close(REGEX_CONSTANT);
if($ARGV[1] =~ m/$regex_constant/) { print 1; exit }
}
print 0
}
else
{
print 0
}
```
I've also tried
```
if($ARGV[1] =~ m/($regex_constant)/) { print 1; exit }
```
Even when the string is supposed to match, it prints 0. I know it finds the file and successfully reads the content from it because I've debugged that. What am I doing wrong?
|
2011/05/15
|
[
"https://Stackoverflow.com/questions/6008940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/754522/"
] |
It's possible you have trailing newlines on those, or maybe headers. Try running chomp on the strings you read in. Hard to tell without knowing what the files look like.
|
Rather than putting the regular expressions in files, you might consider putting them in a module:
```
# In FavoriteRegex.pm.
package FavoriteRegex;
use strict;
use warnings;
use parent qw(Exporter);
our @EXPORT = qw();
our @EXPORT_OK = qw(%FAVS);
our %FAVS = (
foo => qr/foo/,
integer => qr/\A\d+\Z/,
);
1;
```
Scripts that need those regular expressions become much simpler:
```
# In some_script.pl.
use strict;
use warnings;
use feature qw(say);   # "say" needs this (or: use v5.10;)
use FavoriteRegex qw(%FAVS);
say 'foo' if 'blah foo blah' =~ $FAVS{foo};
say 'integer' if '1234' =~ $FAVS{integer};
say 'integer?!?' if '1234.0' =~ $FAVS{integer};
```
Even if your project requires that the regular expressions to live in their own non-Perl files, the work of reading those files should be done in the module (`FavoriteRegex.pm` in this example), not the scripts.
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` are true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
Initialize a variable `check` with value 1 and update it in the `else` branch; then check the value of `check` after the loop has executed:
```
if (future.isDone()) {
System.out.println("true");
}
else
{System.out.println("false");
check=0;}
```
|
If the method `future.get` returns, then the computation done by the future is finished, so calling `isDone` is redundant. And yes, after all the futures have finished, all of the threads in the `ThreadPool` should be available.
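The same invariant can be illustrated with Python's `concurrent.futures`, whose `Future` API is analogous to Java's (this is an illustrative sketch, not Java code):

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=4) as executor:
    # submit a batch of tasks, roughly like invokeAll on a collection
    futures = [executor.submit(pow, n, 2) for n in range(10)]
    # result() (Java: get()) blocks until the task has finished...
    results = [f.result() for f in futures]

# ...so done() (Java: isDone()) is guaranteed true afterwards: the check is redundant.
assert all(f.done() for f in futures)
assert results == [n * n for n in range(10)]
```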
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` are true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
Using a boolean variable is the simplest way to understand that all threads are done.
Another way could be using a primitive, e.g. an integer. You could simply increment/decrement the counter to see if all threads are done.
And another way could be checking the return value of `awaitTermination()` invocation on your `ExecutorService` object.
```
int counter = 0;
for(Future<Boolean> future : futures)
{
future.get();
if (future.isDone())
System.out.println("true");
else
{
counter++;
System.out.println("false");
}
}
if(counter == 0)
System.out.println("DONE!");
// Another way of checking all threads are done
// (call executor.shutdown() first, otherwise this just waits out the timeout).
if(executor.awaitTermination(100, TimeUnit.SECONDS))
System.out.println("DONE!");
```
|
Calling `invokeAll` waits until all the submitted tasks are complete, and only then does execution reach the for loop that checks for completion.
Or
do we need a do...while loop to check for completion of the tasks?
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` are true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
Using a boolean variable is the simplest way to understand that all threads are done.
Another way could be using a primitive, e.g. an integer. You could simply increment/decrement the counter to see if all threads are done.
And another way could be checking the return value of `awaitTermination()` invocation on your `ExecutorService` object.
```
int counter = 0;
for (Future<Boolean> future : futures) {
    future.get();
    if (future.isDone()) {
        System.out.println("true");
    } else {
        counter++;
        System.out.println("false");
    }
}
if (counter == 0)
    System.out.println("DONE!");
// Another way of checking all tasks are done: awaitTermination()
// only returns true once shutdown() has been called and all tasks finished.
executor.shutdown();
if (executor.awaitTermination(100, TimeUnit.SECONDS))
    System.out.println("DONE!");
```
|
Initialize a variable `check` with the value 1 and update it in the else branch; then test the value of `check` after the loop has executed.
```
if (future.isDone()) {
    System.out.println("true");
} else {
    System.out.println("false");
    check = 0;
}
```
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` calls return true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
Meant as a comment on vainolo's reply but I don't have enough reputation points:
`isDone` is also redundant because `invokeAll` returns a list of futures for which `isDone` is true. The javadocs [mention this](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html#invokeAll(java.util.Collection)).
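A small sketch of that point (the class name `InvokeAllDemo` and the trivial tasks are just for illustration): every `Future` returned by `invokeAll` already reports `isDone() == true` by the time the call returns:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllDemo {
    // Runs a batch of trivial tasks and checks every returned Future.
    static boolean allDone() {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        List<Callable<Boolean>> tasks = new ArrayList<>();
        for (int n = 0; n < 10; n++) {
            tasks.add(() -> Boolean.TRUE);
        }
        try {
            // invokeAll blocks until every task has completed, so each
            // returned Future is already done.
            List<Future<Boolean>> futures = executor.invokeAll(tasks);
            return futures.stream().allMatch(Future::isDone);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(allDone()); // prints "true"
    }
}
```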
|
`invokeAll` blocks until all the tasks have completed, so execution only reaches the for loop once everything is already done.
Or
Do we need a do-while loop to check for completion of the tasks?
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` calls return true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
Normally you add Runnable tasks to a Thread pool. Adding Threads to a thread pool isn't going to do what you think.
When future.get() returns for each task, all the tasks have completed. The threads which ran the tasks will still be running.
If you want to stop all the threads once the tasks have completed you can use `executor.shutdown();` and `executor.awaitTermination`
---
The way I would write it is
```
ExecutorService executor = Executors.newFixedThreadPool(10);
List<Future<Boolean>> futures = new ArrayList<>();
for (int n = 0; n < 100; n++)
    futures.add(executor.submit(new MyTask(n)));
for(Future<Boolean> future : futures) {
boolean result = future.get();
// do something with the result.
}
```
If the result is not needed, you can make the type `Future<Void>`, `Future<?>`, or just `Future`
|
If the method `future.get` returns, then the computation done by the future is finished, so calling `isDone` is redundant. And yes, after all futures have finished, all of the threads in the `ThreadPool` should be available.
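To illustrate this (a minimal sketch; the class name `GetBlocksDemo` and the sleeping task are just for illustration): once `get()` has returned normally, `isDone()` is necessarily true:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class GetBlocksDemo {
    // Submits one slow task and checks isDone() right after get() returns.
    static boolean isDoneAfterGet() {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Boolean> future = executor.submit(() -> {
            Thread.sleep(200); // simulate some work
            return true;
        });
        try {
            future.get(); // blocks until the task has finished
        } catch (InterruptedException | ExecutionException e) {
            return false;
        } finally {
            executor.shutdown();
        }
        return future.isDone(); // always true once get() has returned
    }

    public static void main(String[] args) {
        System.out.println(isDoneAfterGet()); // prints "true"
    }
}
```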
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` calls return true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
Meant as a comment on vainolo's reply but I don't have enough reputation points:
`isDone` is also redundant because `invokeAll` returns a list of futures for which `isDone` is true. The javadocs [mention this](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html#invokeAll(java.util.Collection)).
|
Normally you add Runnable tasks to a Thread pool. Adding Threads to a thread pool isn't going to do what you think.
When future.get() returns for each task, all the tasks have completed. The threads which ran the tasks will still be running.
If you want to stop all the threads once the tasks have completed you can use `executor.shutdown();` and `executor.awaitTermination`
---
The way I would write it is
```
ExecutorService executor = Executors.newFixedThreadPool(10);
List<Future<Boolean>> futures = new ArrayList<>();
for (int n = 0; n < 100; n++)
    futures.add(executor.submit(new MyTask(n)));
for(Future<Boolean> future : futures) {
boolean result = future.get();
// do something with the result.
}
```
If the result is not needed, you can make the type `Future<Void>`, `Future<?>`, or just `Future`
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` calls return true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
Meant as a comment on vainolo's reply but I don't have enough reputation points:
`isDone` is also redundant because `invokeAll` returns a list of futures for which `isDone` is true. The javadocs [mention this](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html#invokeAll(java.util.Collection)).
|
Initialize a variable `check` with the value 1 and update it in the else branch; then test the value of `check` after the loop has executed.
```
if (future.isDone()) {
    System.out.println("true");
} else {
    System.out.println("false");
    check = 0;
}
```
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` calls return true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
Normally you add Runnable tasks to a Thread pool. Adding Threads to a thread pool isn't going to do what you think.
When future.get() returns for each task, all the tasks have completed. The threads which ran the tasks will still be running.
If you want to stop all the threads once the tasks have completed you can use `executor.shutdown();` and `executor.awaitTermination`
---
The way I would write it is
```
ExecutorService executor = Executors.newFixedThreadPool(10);
List<Future<Boolean>> futures = new ArrayList<>();
for (int n = 0; n < 100; n++)
    futures.add(executor.submit(new MyTask(n)));
for(Future<Boolean> future : futures) {
boolean result = future.get();
// do something with the result.
}
```
If the result is not needed, you can make the type `Future<Void>`, `Future<?>`, or just `Future`
|
Initialize a variable `check` with the value 1 and update it in the else branch; then test the value of `check` after the loop has executed.
```
if (future.isDone()) {
    System.out.println("true");
} else {
    System.out.println("false");
    check = 0;
}
```
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` calls return true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
To check if all are true, you can do something like this:
```
boolean works = true;
for (Future<Boolean> future : futures) {
    future.get();
    if (future.isDone()) {
        System.out.println("true");
    } else {
        System.out.println("false");
        works = false;
    }
}
if (works) System.out.println("yea it works");
```
|
Initialize a variable `check` with the value 1 and update it in the else branch; then test the value of `check` after the loop has executed.
```
if (future.isDone()) {
    System.out.println("true");
} else {
    System.out.println("false");
    check = 0;
}
```
|
12,896,755 |
Having the following code at hand:
```
ExecutorService executor = Executors.newFixedThreadPool(10);
Collection collection = new ArrayList();
for (int n=1; n<100; n++)
collection.add(new MyThread(n));
try {
List<Future<Boolean>> futures = executor.invokeAll(collection);
for(Future<Boolean> future : futures){
future.get();
if (future.isDone()) {
System.out.println("true");
}
else
System.out.println("false");
}
} catch (Exception e) {
e.printStackTrace();
}
```
Is the above correct?
And if all `future.isDone()` calls return true, have all of the threads finished?
How can I make a flag to be sure that all of them are done?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12896755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018686/"
] |
To check if all are true, you can do something like this:
```
boolean works = true;
for (Future<Boolean> future : futures) {
    future.get();
    if (future.isDone()) {
        System.out.println("true");
    } else {
        System.out.println("false");
        works = false;
    }
}
if (works) System.out.println("yea it works");
```
|
Normally you add Runnable tasks to a Thread pool. Adding Threads to a thread pool isn't going to do what you think.
When future.get() returns for each task, all the tasks have completed. The threads which ran the tasks will still be running.
If you want to stop all the threads once the tasks have completed you can use `executor.shutdown();` and `executor.awaitTermination`
---
The way I would write it is
```
ExecutorService executor = Executors.newFixedThreadPool(10);
List<Future<Boolean>> futures = new ArrayList<>();
for (int n = 0; n < 100; n++)
    futures.add(executor.submit(new MyTask(n)));
for(Future<Boolean> future : futures) {
boolean result = future.get();
// do something with the result.
}
```
If the result is not needed, you can make the type `Future<Void>`, `Future<?>`, or just `Future`
|