qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
3,816,718 | I have an enum that I'd like to display all possible values of. Is there a way to get an array or list of all the possible values of the enum instead of manually creating such a list? e.g. If I have an enum:
```
public enum Enumnum { TypeA, TypeB, TypeC, TypeD }
```
how would I be able to get a `List<Enumnum>` that contains `{ TypeA, TypeB, TypeC, TypeD }`? | 2010/09/28 | [
"https://Stackoverflow.com/questions/3816718",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/388739/"
]
| If you prefer a more generic approach, here it is. You can add more converters as needed.
```
public static class EnumConverter
{
public static string[] ToNameArray<T>()
{
return Enum.GetNames(typeof(T)); // GetNames already returns string[]
}
public static Array ToValueArray<T>()
{
return Enum.GetValues(typeof(T));
}
public static List<T> ToListOfValues<T>()
{
return Enum.GetValues(typeof(T)).Cast<T>().ToList();
}
public static IEnumerable<T> ToEnumerable<T>()
{
return (T[])Enum.GetValues(typeof(T));
}
}
```
Sample usage:
```
string[] roles = EnumConverter.ToNameArray<ePermittedRoles>();
List<ePermittedRoles> roles2 = EnumConverter.ToListOfValues<ePermittedRoles>();
Array data = EnumConverter.ToValueArray<ePermittedRoles>();
``` | The OP asked how to get an array of all `enum` values in C#.
**What if you want to get an array of only selected `enum` values in C#?**
**Your Enum**
```
enum WeekDays
{
Sunday,
Monday,
Tuesday
}
```
If you want to select just `Sunday` from your `Enum`:
```
WeekDays[] weekDaysArray1 = new WeekDays[] { WeekDays.Sunday };
WeekDays[] weekDaysArray2 = Enum.GetValues(typeof(WeekDays)).Cast<WeekDays>().Where
(x => x == WeekDays.Sunday).ToArray();
```
Credit goes to the knowledgeable tl.
References:
[1.](https://stackoverflow.com/questions/3160267/how-to-create-an-array-of-enums)
[2.](https://stackoverflow.com/questions/17123548/convert-enum-to-list)
Hope this helps someone. |
3,816,718 | I have an enum that I'd like to display all possible values of. Is there a way to get an array or list of all the possible values of the enum instead of manually creating such a list? e.g. If I have an enum:
```
public enum Enumnum { TypeA, TypeB, TypeC, TypeD }
```
how would I be able to get a `List<Enumnum>` that contains `{ TypeA, TypeB, TypeC, TypeD }`? | 2010/09/28 | [
"https://Stackoverflow.com/questions/3816718",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/388739/"
]
| This gets you a plain array of the enum values using [`Enum.GetValues`](http://msdn.microsoft.com/en-us/library/system.enum.getvalues.aspx):
```
var valuesAsArray = Enum.GetValues(typeof(Enumnum));
```
And this gets you a generic list:
```
var valuesAsList = Enum.GetValues(typeof(Enumnum)).Cast<Enumnum>().ToList();
``` | Try this code:
```
Enum.GetNames(typeof(Enumnum));
```
This returns a `string[]` with all the names of the chosen enum. |
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| F0=0, F1=1, F2=1, F3=2, F4=3 ...
```
for i in range (102):
c = a + b
a = b
b = c
print c
```
Notice that in your first loop, c = a+b = 0+1 = F0+F1 = F2, so the result you print is F2, not F1; thus F103 is printed in the 102nd iteration. | Examine:
```
for f in [1, 2, "foo"]:
print f
```
Then recall that `range(102)` creates a list `[0, 1, 2,` ... `100, 101]`. |
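As a quick sanity check of `range`'s endpoints (a small Python 3 sketch; the question's code is Python 2, but `range` has the same bounds here):

```python
# range(n) yields the integers 0, 1, ..., n-1 -- exactly n of them,
# so `for i in range(102)` runs the loop body 102 times.
values = list(range(102))

print(values[0])    # → 0
print(values[-1])   # → 101
print(len(values))  # → 102
```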
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| Let's expand the for loop and see what happens:
```
a = 0
b = 1 # this is the first fibonacci number
# First iteration of for loop
c = a + b # c is now 1, the second fibonacci number
a = b # a is now 1
b = c # b is now 1
# Second iteration
c = a + b # c is now 2, the third fibonacci number
a = b # a is now 1
b = c # b is now 2
# Third iteration
c = a + b # c is now 3, the fourth fibonacci number
a = b # a is now 2
b = c # b is now 3
# Fourth iteration
c = a + b # c is now 5, the fifth fibonacci number
a = b # a is now 3
b = c # b is now 5
# Remaining 98 iterations omitted
```
You see that after 4 iterations `c` holds the 5th Fibonacci number. After 102 iterations `c` will hold the 103rd Fibonacci number. That is why you are using `range(102)` and not `range(103)`. If you wanted your Fibonacci series to start with 0 ([as it sometimes does](http://en.wikipedia.org/wiki/Fibonacci_number)), i.e. `0, 1, 1, 2, 3`, you would need to use `range(101)`.
The Python for loop iterates over a sequence until the sequence is exhausted, or the loop is exited early with a `break` statement. `range(5)` creates a list of 5 elements: `[0, 1, 2, 3, 4]`, which, when used with the for loop, causes it to repeat 5 times. The loop body in the following example is therefore evaluated 5 times:
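The iteration count can also be verified mechanically (a Python 3 sketch mirroring the question's loop; the helper name `fib_after` is introduced here for illustration):

```python
def fib_after(iterations):
    """Run the question's loop `iterations` times and return the final c."""
    a, b = 0, 1
    c = b
    for _ in range(iterations):
        c = a + b
        a, b = b, c
    return c

# After 4 iterations c holds the 5th Fibonacci number (1, 1, 2, 3, 5),
# matching the hand expansion above.
print(fib_after(4))  # → 5
```

Likewise, `fib_after(102)` returns the 103rd number of the series 1, 1, 2, 3, 5, ...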
```
sum = 0
for i in range(5):
sum = sum + i
print sum
```
We just calculated the fifth [Triangle number](http://en.wikipedia.org/wiki/Triangular_number): 10
More on python for loops:
<https://wiki.python.org/moin/ForLoop> | F0=0, F1=1, F2=1, F3=2, F4=3 ...
```
for i in range (102):
c = a + b
a = b
b = c
print c
```
Notice that in your first loop, c = a+b = 0+1 = F0+F1 = F2, so the result you print is F2, not F1; thus F103 is printed in the 102nd iteration. |
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| The `for` statement has countless uses; in your case, it simply repeats its body 102 times. So, **in your specific case**, the `for` statement does...
```
c = a + b
a = b
b = c
print c
c = a + b
a = b
b = c
print c
c = a + b
a = b
b = c
print c
# ... (99 more times)
```
and when it is done, the program finishes with `print "The 103rd number is", c`, whose effect you already understand. | This line will repeat anything below it that is indented 102 times.
```
for i in range(102):
``` |
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| Let's expand the for loop and see what happens:
```
a = 0
b = 1 # this is the first fibonacci number
# First iteration of for loop
c = a + b # c is now 1, the second fibonacci number
a = b # a is now 1
b = c # b is now 1
# Second iteration
c = a + b # c is now 2, the third fibonacci number
a = b # a is now 1
b = c # b is now 2
# Third iteration
c = a + b # c is now 3, the fourth fibonacci number
a = b # a is now 2
b = c # b is now 3
# Fourth iteration
c = a + b # c is now 5, the fifth fibonacci number
a = b # a is now 3
b = c # b is now 5
# Remaining 98 iterations omitted
```
You see that after 4 iterations `c` holds the 5th Fibonacci number. After 102 iterations `c` will hold the 103rd Fibonacci number. That is why you are using `range(102)` and not `range(103)`. If you wanted your Fibonacci series to start with 0 ([as it sometimes does](http://en.wikipedia.org/wiki/Fibonacci_number)), i.e. `0, 1, 1, 2, 3`, you would need to use `range(101)`.
The Python for loop iterates over a sequence until the sequence is exhausted, or the loop is exited early with a `break` statement. `range(5)` creates a list of 5 elements: `[0, 1, 2, 3, 4]`, which, when used with the for loop, causes it to repeat 5 times. The loop body in the following example is therefore evaluated 5 times:
```
sum = 0
for i in range(5):
sum = sum + i
print sum
```
We just calculated the fifth [Triangle number](http://en.wikipedia.org/wiki/Triangular_number): 10
More on python for loops:
<https://wiki.python.org/moin/ForLoop> | The `for` statement has countless uses; in your case, it simply repeats its body 102 times. So, **in your specific case**, the `for` statement does...
```
c = a + b
a = b
b = c
print c
c = a + b
a = b
b = c
print c
c = a + b
a = b
b = c
print c
# ... (99 more times)
```
and when it is done, the program finishes with `print "The 103rd number is", c`, whose effect you already understand. |
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| Let's expand the for loop and see what happens:
```
a = 0
b = 1 # this is the first fibonacci number
# First iteration of for loop
c = a + b # c is now 1, the second fibonacci number
a = b # a is now 1
b = c # b is now 1
# Second iteration
c = a + b # c is now 2, the third fibonacci number
a = b # a is now 1
b = c # b is now 2
# Third iteration
c = a + b # c is now 3, the fourth fibonacci number
a = b # a is now 2
b = c # b is now 3
# Fourth iteration
c = a + b # c is now 5, the fifth fibonacci number
a = b # a is now 3
b = c # b is now 5
# Remaining 98 iterations omitted
```
You see that after 4 iterations `c` holds the 5th Fibonacci number. After 102 iterations `c` will hold the 103rd Fibonacci number. That is why you are using `range(102)` and not `range(103)`. If you wanted your Fibonacci series to start with 0 ([as it sometimes does](http://en.wikipedia.org/wiki/Fibonacci_number)), i.e. `0, 1, 1, 2, 3`, you would need to use `range(101)`.
The Python for loop iterates over a sequence until the sequence is exhausted, or the loop is exited early with a `break` statement. `range(5)` creates a list of 5 elements: `[0, 1, 2, 3, 4]`, which, when used with the for loop, causes it to repeat 5 times. The loop body in the following example is therefore evaluated 5 times:
```
sum = 0
for i in range(5):
sum = sum + i
print sum
```
We just calculated the fifth [Triangle number](http://en.wikipedia.org/wiki/Triangular_number): 10
More on python for loops:
<https://wiki.python.org/moin/ForLoop> | `range` is a function that returns a list of integers. If you specify a single argument `n`, you get all integers from 0 through `n-1`.
A statement of the form `for x in y: <do something>`, where `y` is iterable, iterates over every element in `y`. For each such element `z`, it binds the variable `x` to `z` and then executes the body of the loop one time.
So the line `for i in range(102)` executes everything inside the loop once for each integer between 0 and 101. NB: you aren't actually referencing the variable `i` inside your loop, so you could also write `for _ in range(102)`.
Regarding the block inside the loop, it goes roughly like this:
* set `c` equal to the sum of `a` and `b`
* set `a` to the value of `b`
* set `b` to the value of `c`
So each time you iterate, you find the next number, store it in `c`, and then update your state variables `a` and `b`. `a` and `b` hold the last two Fibonacci numbers.
If you really want the 103rd Fibonacci number, there is a bug in your program - do you see it? |
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| `range` is a function that returns a list of integers. If you specify a single argument `n`, you get all integers from 0 through `n-1`.
A statement of the form `for x in y: <do something>`, where `y` is iterable, iterates over every element in `y`. For each such element `z`, it binds the variable `x` to `z` and then executes the body of the loop one time.
So the line `for i in range(102)` executes everything inside the loop once for each integer between 0 and 101. NB: you aren't actually referencing the variable `i` inside your loop, so you could also write `for _ in range(102)`.
Regarding the block inside the loop, it goes roughly like this:
* set `c` equal to the sum of `a` and `b`
* set `a` to the value of `b`
* set `b` to the value of `c`
So each time you iterate, you find the next number, store it in `c`, and then update your state variables `a` and `b`. `a` and `b` hold the last two Fibonacci numbers.
If you really want the 103rd Fibonacci number, there is a bug in your program - do you see it? | Fibonacci's formula is:
```
f(n) = f(n-1) + f(n-2); (n > 1, n is integer)
f(0) = 0
f(1) = 1
```
And range(102) = [0, 1, 2, ..., 101]
so
`for i in range (102):` means the for loop runs 102 times
```
c = a + b
a = b
b = c
```
means b stands for f(n-1) and a stands for f(n-2), so c stands for f(n).
The for loop runs 102 times to get the 103rd element, because the loop starts with the 2nd element. |
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| F0=0, F1=1, F2=1, F3=2, F4=3 ...
```
for i in range (102):
c = a + b
a = b
b = c
print c
```
Notice that in your first loop, c = a+b = 0+1 = F0+F1 = F2, so the result you print is F2, not F1; thus F103 is printed in the 102nd iteration. | This line will repeat anything below it that is indented 102 times.
```
for i in range(102):
``` |
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| Let's expand the for loop and see what happens:
```
a = 0
b = 1 # this is the first fibonacci number
# First iteration of for loop
c = a + b # c is now 1, the second fibonacci number
a = b # a is now 1
b = c # b is now 1
# Second iteration
c = a + b # c is now 2, the third fibonacci number
a = b # a is now 1
b = c # b is now 2
# Third iteration
c = a + b # c is now 3, the fourth fibonacci number
a = b # a is now 2
b = c # b is now 3
# Fourth iteration
c = a + b # c is now 5, the fifth fibonacci number
a = b # a is now 3
b = c # b is now 5
# Remaining 98 iterations omitted
```
You see that after 4 iterations `c` holds the 5th Fibonacci number. After 102 iterations `c` will hold the 103rd Fibonacci number. That is why you are using `range(102)` and not `range(103)`. If you wanted your Fibonacci series to start with 0 ([as it sometimes does](http://en.wikipedia.org/wiki/Fibonacci_number)), i.e. `0, 1, 1, 2, 3`, you would need to use `range(101)`.
The Python for loop iterates over a sequence until the sequence is exhausted, or the loop is exited early with a `break` statement. `range(5)` creates a list of 5 elements: `[0, 1, 2, 3, 4]`, which, when used with the for loop, causes it to repeat 5 times. The loop body in the following example is therefore evaluated 5 times:
```
sum = 0
for i in range(5):
sum = sum + i
print sum
```
We just calculated the fifth [Triangle number](http://en.wikipedia.org/wiki/Triangular_number): 10
More on python for loops:
<https://wiki.python.org/moin/ForLoop> | Examine:
```
for f in [1, 2, "foo"]:
print f
```
Then recall that `range(102)` creates a list `[0, 1, 2,` ... `100, 101]`. |
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| Examine:
```
for f in [1, 2, "foo"]:
print f
```
Then recall that `range(102)` creates a list `[0, 1, 2,` ... `100, 101]`. | This line will repeat anything below it that is indented 102 times.
```
for i in range(102):
``` |
19,041,174 | ```
a = 0
b = 1
print a
print b
for i in range (102):
c = a + b
a = b
b = c
print c
print "The 103rd number is", c
```
I don't understand how the for statement works. I understand everything except how the program continues till the 103rd element and understands that it has to add the last two numbers to get the next number. I understand that a is equal to b and b is equal to c.
Is it because after adding c, it changes the values of a and b and then goes back to the for statement? That is where I am confused. What does the program do next?
Does it go back to the for statement to check which term/element this is, to make sure it is less than 102?
Also, when I specify the range to be 102, does it do the action once and then 102 more times (basically listing the number 103 times), or does it do it 102 times?
What I am basically asking is: if I need to find the 103rd element, why do I specify range = 102 and not range = 103? | 2013/09/27 | [
"https://Stackoverflow.com/questions/19041174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2821664/"
]
| The `for` statement has countless uses; in your case, it simply repeats its body 102 times. So, **in your specific case**, the `for` statement does...
```
c = a + b
a = b
b = c
print c
c = a + b
a = b
b = c
print c
c = a + b
a = b
b = c
print c
# ... (99 more times)
```
and when it is done, the program finishes with `print "The 103rd number is", c`, whose effect you already understand. | Fibonacci's formula is:
```
f(n) = f(n-1) + f(n-2); (n > 1, n is integer)
f(0) = 0
f(1) = 1
```
And range(102) = [0, 1, 2, ..., 101]
so
`for i in range (102):` means the for loop runs 102 times
```
c = a + b
a = b
b = c
```
means b stands for f(n-1) and a stands for f(n-2), so c stands for f(n).
The for loop runs 102 times to get the 103rd element, because the loop starts with the 2nd element. |
38,788,888 | I have a problem with my Python code. I am trying to display the ordinal number of the user's input. So if I typed 32 it would display 32nd, or if I typed 576 it would display 576th. The only thing that doesn't work is 93; it displays 93th. Every other number works and I am not sure why. Here is my code:
```
num = input ('Enter a number: ')
end = ''
if num[len(num) - 2] != '1' or len(num) == 1:
if num.endswith('1'):
end = 'st'
elif num.endswith('2'):
end = 'nd'
elif num == '3':
end = 'rd'
else:
end = 'th'
else:
end = 'th'
ornum = num + end
print (ornum)
``` | 2016/08/05 | [
"https://Stackoverflow.com/questions/38788888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6232881/"
]
| You use `endswith()` in only 2 of the 3 places:
```
if num.endswith('1'):
end = 'st'
elif num.endswith('2'):
end = 'nd'
#elif num == '3': WRONG
elif num.endswith('3'):
end = 'rd'
```
In your code, it will test 'if num is equal to 3' instead of 'if num ends with 3'. | For some reason you forgot to check `endswith()` when it comes to `3`:
```
elif num.endswith('3'):
end = 'rd'
```
On a side note, you can improve your code by reading [this question](https://codereview.stackexchange.com/questions/41298/producing-ordinal-numbers) on SE Code Review, which included this awesome version:
```
SUFFIXES = {1: 'st', 2: 'nd', 3: 'rd'}
def ordinal(num):
if 10 <= num % 100 <= 20:
suffix = 'th'
else:
suffix = SUFFIXES.get(num % 10, 'th')
return str(num) + suffix
``` |
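For completeness, here is how the dictionary version above behaves on the inputs from the question (restated so the sketch is self-contained; Python 3 `print`):

```python
SUFFIXES = {1: 'st', 2: 'nd', 3: 'rd'}

def ordinal(num):
    # 11, 12, 13 (and 111, 112, ...) always take 'th', so check the tens first.
    if 10 <= num % 100 <= 20:
        suffix = 'th'
    else:
        suffix = SUFFIXES.get(num % 10, 'th')
    return str(num) + suffix

print(ordinal(93))   # → 93rd (the case the original code got wrong)
print(ordinal(32))   # → 32nd
print(ordinal(576))  # → 576th
print(ordinal(11))   # → 11th
```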
12,532 | I am about to train a big LSTM network with 2-3 million articles and am struggling with memory errors (I use an AWS EC2 g2.2xlarge).
I found out that one solution is to reduce the `batch_size`. However, I am not sure if this parameter is only related to memory efficiency issues or if it will affect my results. As a matter of fact, I also noticed that the `batch_size` used in examples is usually a power of two, which I don't understand either.
I don't mind if my network takes longer to train, but I would like to know if reducing the `batch_size` will decrease the quality of my predictions.
Thanks. | 2016/07/01 | [
"https://datascience.stackexchange.com/questions/12532",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/17484/"
]
| After one and a half years, I come back to my answer because my previous answer was wrong.
Batch size impacts learning significantly. What happens when you put a batch through your network is that you average the gradients. The idea is that if your batch size is big enough, this will provide a stable enough estimate of what the gradient of the full dataset would be. By taking samples from your dataset, you estimate the gradient while reducing computational cost significantly. The lower you go, the less accurate your estimate will be; however, in some cases these noisy gradients can actually help escape local minima. When the batch size is too low, your network weights can just jump around if your data is noisy, and it might be unable to learn, or it converges very slowly, negatively impacting total computation time.
Another advantage of batching is for GPU computation: GPUs are very good at parallelizing the calculations that happen in neural networks when part of the computation is the same (for example, repeated matrix multiplication over the same weight matrix of your network). This means that a batch size of 16 will take less than twice the time of a batch size of 8.
In the case that you do need bigger batch sizes but they will not fit on your GPU, you can feed a small batch, save the gradient estimates, feed one or more further batches, and then do a single weight update. This way you get a more stable gradient because you have increased your virtual batch size. | Oddly enough, I found that larger batch sizes with keras require more epochs to converge.
For example, the output of [this script](https://github.com/shadiakiki1986/test-ml/blob/master/t4-LSTM%20stateful/test_temporal_regression.py) based on keras' [integration test](https://github.com/fchollet/keras/blob/master/tests/integration_tests/test_temporal_data_tasks.py#L78) is
```
epochs 15 , batch size 16 , layer type Dense: final loss 0.56, seconds 1.46
epochs 15 , batch size 160 , layer type Dense: final loss 1.27, seconds 0.30
epochs 150 , batch size 160 , layer type Dense: final loss 0.55, seconds 1.74
```
Related
* Keras [issue 4708](https://github.com/fchollet/keras/issues/4708): the user turned out to be using `BatchNormalization`, which affected the results.
* [This tutorial](https://machinelearningmastery.com/tune-lstm-hyperparameters-keras-time-series-forecasting/) on LSTM, section **Tuning the Batch Size**
* search results for [keras lstm batch size effect on result](https://duckduckgo.com/?q=keras+lstm+batch+size+effect+on+result&t=canonical&ia=web)
* [My Neural Network isn't working! What should I do?](http://theorangeduck.com/page/neural-network-not-working?imm_mid=0f6562&cmp=em-data-na-na-newsltr_20170920), point 5 (You Used a too Large Batch Size) discusses exactly this
>
> Using too large a batch size can have a negative effect on the
> accuracy of your network during training since it reduces the
> stochasticity of the gradient descent.
>
>
>
Edit: most of the time, increasing `batch_size` is desired to speed up computation, but there are other, simpler ways to do this, like using data types with a smaller footprint via the `dtype` argument, whether in *keras* or *tensorflow*, e.g. `float32` instead of `float64` |
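The "virtual batch size" trick from the accepted answer above (accumulate gradient estimates over several small batches, then apply one weight update) can be sketched in plain Python; the toy model y = w*x, the data, and the learning rate are all made up for illustration:

```python
# Fit y = w*x by accumulating gradients over several micro-batches
# before each weight update (gradient accumulation).
data = [(x, 2.0 * x) for x in range(1, 9)]  # toy data from y = 2x

def grad(w, batch):
    """Average gradient of the squared error 0.5*(w*x - y)**2 over a batch."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

w = 0.0
lr = 0.01
micro_batches = [data[i:i + 2] for i in range(0, len(data), 2)]  # 4 micro-batches of 2

for step in range(200):
    accumulated = 0.0
    for micro in micro_batches:          # forward/backward on each small batch
        accumulated += grad(w, micro)
    accumulated /= len(micro_batches)    # averaging -> same as one big batch of 8
    w -= lr * accumulated                # single weight update per step

print(round(w, 2))  # → 2.0
```

Since the micro-batches are equal-sized, averaging their gradients gives exactly the full-batch gradient, so memory per step stays small while the update behaves like a larger batch.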
12,532 | I am about to train a big LSTM network with 2-3 million articles and am struggling with memory errors (I use an AWS EC2 g2.2xlarge).
I found out that one solution is to reduce the `batch_size`. However, I am not sure if this parameter is only related to memory efficiency issues or if it will affect my results. As a matter of fact, I also noticed that the `batch_size` used in examples is usually a power of two, which I don't understand either.
I don't mind if my network takes longer to train, but I would like to know if reducing the `batch_size` will decrease the quality of my predictions.
Thanks. | 2016/07/01 | [
"https://datascience.stackexchange.com/questions/12532",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/17484/"
]
| After one and a half years, I come back to my answer because my previous answer was wrong.
Batch size impacts learning significantly. What happens when you put a batch through your network is that you average the gradients. The concept is that if your batch size is big enough, this will provide a stable enough estimate of what the gradient of the full dataset would be. By taking samples from your dataset, you estimate the gradient while reducing computational cost significantly. The lower you go, the less accurate your esttimate will be, however in some cases these noisy gradients can actually help escape local minima. When it is too low, your network weights can just jump around if your data is noisy and it might be unable to learn or it converges very slowly, thus negatively impacting total computation time.
Another advantage of batching is GPU computation: GPUs are very good at parallelizing the calculations that happen in neural networks if part of the computation is the same (for example, repeated matrix multiplication over the same weight matrix of your network). This means that a batch size of 16 will take less than twice the time of a batch size of 8.
In the case that you do need bigger batch sizes but they will not fit on your GPU, you can feed a small batch, save the gradient estimates, feed one or more further batches, and only then do a weight update. This way you get a more stable gradient because you increased your virtual batch size. | I feel the accepted answer is possibly wrong. There are variants in *Gradient Descent Algorithms*.
1. **Vanilla Gradient Descent**: Here the gradient is calculated on all the data points in a single shot and the average is taken. Hence we have a smoother version of the gradient, but it takes a longer time to learn.
2. **Stochastic Gradient Descent**: Here one data point is used at a time, so the gradient is aggressive (noisy gradients) and there is going to be a lot of oscillation (we use momentum parameters, e.g. Nesterov, to control this). So there is a chance that the oscillations make the algorithm not reach a local minimum (diverge).
3. **Mini-Batch Gradient Descent**: This takes the perks of both of the previous ones by averaging the gradients of a small batch. Hence it is not as aggressive as SGD, and it allows online learning, which vanilla GD never allowed.
The smaller the mini-batch, the better the performance of your model tends to be (not always), and of course it also affects how fast you learn across epochs. If you are training on a large dataset, you want fast convergence with good performance, hence we pick mini-batch GD.
SGD has a fixed learning rate, hence we move to adaptive optimizers like Adam, AdaDelta, RMSProp, etc., which change the learning rate based on the history of gradients. |
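The "virtual batch size" trick mentioned above (save gradient estimates over several small batches, then do a single weight update) can be sketched in plain Python. This is a toy illustration with a hand-coded gradient for the loss $(w-x)^2$, not any particular framework's API:

```python
def grad(w, x):
    # Gradient of the per-sample loss (w - x)^2 with respect to w.
    return 2.0 * (w - x)

def accumulated_update(w, samples, micro_batch, lr):
    """Accumulate per-sample gradients over micro-batches, then apply
    a single averaged weight update -- the 'virtual batch size' trick."""
    total, n = 0.0, 0
    for i in range(0, len(samples), micro_batch):
        batch = samples[i:i + micro_batch]
        total += sum(grad(w, x) for x in batch)  # save the estimates
        n += len(batch)
    return w - lr * (total / n)                  # one update at the end

data = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
# Accumulating over micro-batches of 2 matches one full-batch step.
print(accumulated_update(0.0, data, micro_batch=2, lr=0.1))
print(accumulated_update(0.0, data, micro_batch=6, lr=0.1))
```

Because the loss here is a plain mean over samples, averaging the accumulated micro-batch gradients reproduces the full-batch step exactly.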
12,532 | I am about to train a big LSTM network with 2-3 million articles and am struggling with Memory Errors (I use AWS EC2 g2x2large).
I found out that one solution is to reduce the `batch_size`. However, I am not sure if this parameter is only related to memory efficiency issues or if it will affect my results. As a matter of fact, I also noticed that the `batch_size` used in examples is usually a power of two, which I don't understand either.
I don't mind if my network takes longer to train, but I would like to know if reducing the `batch_size` will decrease the quality of my predictions.
Thanks. | 2016/07/01 | [
"https://datascience.stackexchange.com/questions/12532",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/17484/"
]
| After one and a half years, I come back to my answer because my previous answer was wrong.
Batch size impacts learning significantly. What happens when you put a batch through your network is that you average the gradients. The concept is that if your batch size is big enough, this will provide a stable enough estimate of what the gradient of the full dataset would be. By taking samples from your dataset, you estimate the gradient while reducing computational cost significantly. The lower you go, the less accurate your estimate will be; however, in some cases these noisy gradients can actually help escape local minima. When it is too low, your network weights can just jump around if your data is noisy, and it might be unable to learn, or it converges very slowly, thus negatively impacting total computation time.
Another advantage of batching is GPU computation: GPUs are very good at parallelizing the calculations that happen in neural networks if part of the computation is the same (for example, repeated matrix multiplication over the same weight matrix of your network). This means that a batch size of 16 will take less than twice the time of a batch size of 8.
In the case that you do need bigger batch sizes but they will not fit on your GPU, you can feed a small batch, save the gradient estimates, feed one or more further batches, and only then do a weight update. This way you get a more stable gradient because you increased your virtual batch size. | A couple of papers have been published showing (and conventional wisdom in 2020 still seems persuaded) that, as [Yann LeCun](https://twitter.com/ylecun/status/989610208497360896) put it, large batches are bad for your health.
Two relevant papers are
* [Revisiting Small Batch Training For Deep Neural Networks, Dominic Masters and Carlo Luschi](https://arxiv.org/abs/1804.07612) which implies that anything over 32 may degrade training in SGD.
and
* [On Large-batch Training For Deep Learning: Generalization Gap And Sharp Minima](https://arxiv.org/abs/1609.04836)
which offers possible reasons. To paraphrase badly, big batches are likely to get stuck in local (“sharp”) minima, small batches not. There is some interplay with choice of learning rate. |
12,532 | I am about to train a big LSTM network with 2-3 million articles and am struggling with Memory Errors (I use AWS EC2 g2x2large).
I found out that one solution is to reduce the `batch_size`. However, I am not sure if this parameter is only related to memory efficiency issues or if it will affect my results. As a matter of fact, I also noticed that the `batch_size` used in examples is usually a power of two, which I don't understand either.
I don't mind if my network takes longer to train, but I would like to know if reducing the `batch_size` will decrease the quality of my predictions.
Thanks. | 2016/07/01 | [
"https://datascience.stackexchange.com/questions/12532",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/17484/"
]
| I feel the accepted answer is possibly wrong. There are variants in *Gradient Descent Algorithms*.
1. **Vanilla Gradient Descent**: Here the gradient is calculated on all the data points in a single shot and the average is taken. Hence we have a smoother version of the gradient, but it takes a longer time to learn.
2. **Stochastic Gradient Descent**: Here one data point is used at a time, so the gradient is aggressive (noisy gradients) and there is going to be a lot of oscillation (we use momentum parameters, e.g. Nesterov, to control this). So there is a chance that the oscillations make the algorithm not reach a local minimum (diverge).
3. **Mini-Batch Gradient Descent**: This takes the perks of both of the previous ones by averaging the gradients of a small batch. Hence it is not as aggressive as SGD, and it allows online learning, which vanilla GD never allowed.
The smaller the mini-batch, the better the performance of your model tends to be (not always), and of course it also affects how fast you learn across epochs. If you are training on a large dataset, you want fast convergence with good performance, hence we pick mini-batch GD.
SGD has a fixed learning rate, hence we move to adaptive optimizers like Adam, AdaDelta, RMSProp, etc., which change the learning rate based on the history of gradients. | Oddly enough, I found that larger batch sizes with keras require more epochs to converge.
For example, the output of [this script](https://github.com/shadiakiki1986/test-ml/blob/master/t4-LSTM%20stateful/test_temporal_regression.py) based on keras' [integration test](https://github.com/fchollet/keras/blob/master/tests/integration_tests/test_temporal_data_tasks.py#L78) is
```
epochs 15 , batch size 16 , layer type Dense: final loss 0.56, seconds 1.46
epochs 15 , batch size 160 , layer type Dense: final loss 1.27, seconds 0.30
epochs 150 , batch size 160 , layer type Dense: final loss 0.55, seconds 1.74
```
Related
* Keras [issue 4708](https://github.com/fchollet/keras/issues/4708): the user turned out to be using `BatchNormalization`, which affected the results.
* [This tutorial](https://machinelearningmastery.com/tune-lstm-hyperparameters-keras-time-series-forecasting/) on LSTM, section **Tuning the Batch Size**
* search results for [keras lstm batch size effect on result](https://duckduckgo.com/?q=keras+lstm+batch+size+effect+on+result&t=canonical&ia=web)
* [My Neural Network isn't working! What should I do?](http://theorangeduck.com/page/neural-network-not-working?imm_mid=0f6562&cmp=em-data-na-na-newsltr_20170920), point 5 (You Used a too Large Batch Size) discusses exactly this
>
> Using too large a batch size can have a negative effect on the
> accuracy of your network during training since it reduces the
> stochasticity of the gradient descent.
>
>
>
Edit: most of the time, increasing `batch_size` is desirable to speed up computation, but there are other, simpler ways to do this, like using data types with a smaller memory footprint via the `dtype` argument, whether in *keras* or *tensorflow*, e.g. `float32` instead of `float64` |
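The three gradient-descent variants contrasted in the answer above differ only in how many samples feed each parameter update. A minimal pure-Python sketch on a toy quadratic loss; the data, learning rate, and epoch count are made-up illustration values:

```python
import random

def gradient_descent(samples, batch_size, lr=0.1, epochs=50, seed=0):
    """Minimise mean((w - x)^2) over the samples.
    batch_size == len(samples): vanilla (full-batch) GD
    batch_size == 1:            stochastic GD
    otherwise:                  mini-batch GD"""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        order = samples[:]
        rng.shuffle(order)                 # fresh pass over the data
        for i in range(0, len(order), batch_size):
            batch = order[i:i + batch_size]
            g = sum(2.0 * (w - x) for x in batch) / len(batch)
            w -= lr * g                    # one update per (mini-)batch
    return w

data = [1.0, 2.0, 3.0, 4.0]                # the optimum is the mean, 2.5
for bs in (len(data), 2, 1):
    print(bs, gradient_descent(data, bs))
```

Full-batch GD converges smoothly to the mean; the smaller batch sizes land close to it but with visible gradient noise, matching the trade-off described above.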
12,532 | I am about to train a big LSTM network with 2-3 million articles and am struggling with Memory Errors (I use AWS EC2 g2x2large).
I found out that one solution is to reduce the `batch_size`. However, I am not sure if this parameter is only related to memory efficiency issues or if it will affect my results. As a matter of fact, I also noticed that the `batch_size` used in examples is usually a power of two, which I don't understand either.
I don't mind if my network takes longer to train, but I would like to know if reducing the `batch_size` will decrease the quality of my predictions.
Thanks. | 2016/07/01 | [
"https://datascience.stackexchange.com/questions/12532",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/17484/"
]
| Oddly enough, I found that larger batch sizes with keras require more epochs to converge.
For example, the output of [this script](https://github.com/shadiakiki1986/test-ml/blob/master/t4-LSTM%20stateful/test_temporal_regression.py) based on keras' [integration test](https://github.com/fchollet/keras/blob/master/tests/integration_tests/test_temporal_data_tasks.py#L78) is
```
epochs 15 , batch size 16 , layer type Dense: final loss 0.56, seconds 1.46
epochs 15 , batch size 160 , layer type Dense: final loss 1.27, seconds 0.30
epochs 150 , batch size 160 , layer type Dense: final loss 0.55, seconds 1.74
```
Related
* Keras [issue 4708](https://github.com/fchollet/keras/issues/4708): the user turned out to be using `BatchNormalization`, which affected the results.
* [This tutorial](https://machinelearningmastery.com/tune-lstm-hyperparameters-keras-time-series-forecasting/) on LSTM, section **Tuning the Batch Size**
* search results for [keras lstm batch size effect on result](https://duckduckgo.com/?q=keras+lstm+batch+size+effect+on+result&t=canonical&ia=web)
* [My Neural Network isn't working! What should I do?](http://theorangeduck.com/page/neural-network-not-working?imm_mid=0f6562&cmp=em-data-na-na-newsltr_20170920), point 5 (You Used a too Large Batch Size) discusses exactly this
>
> Using too large a batch size can have a negative effect on the
> accuracy of your network during training since it reduces the
> stochasticity of the gradient descent.
>
>
>
Edit: most of the time, increasing `batch_size` is desirable to speed up computation, but there are other, simpler ways to do this, like using data types with a smaller memory footprint via the `dtype` argument, whether in *keras* or *tensorflow*, e.g. `float32` instead of `float64` | A couple of papers have been published showing (and conventional wisdom in 2020 still seems persuaded) that, as [Yann LeCun](https://twitter.com/ylecun/status/989610208497360896) put it, large batches are bad for your health.
Two relevant papers are
* [Revisiting Small Batch Training For Deep Neural Networks, Dominic Masters and Carlo Luschi](https://arxiv.org/abs/1804.07612) which implies that anything over 32 may degrade training in SGD.
and
* [On Large-batch Training For Deep Learning: Generalization Gap And Sharp Minima](https://arxiv.org/abs/1609.04836)
which offers possible reasons. To paraphrase badly, big batches are likely to get stuck in local (“sharp”) minima, small batches not. There is some interplay with choice of learning rate. |
12,532 | I am about to train a big LSTM network with 2-3 million articles and am struggling with Memory Errors (I use AWS EC2 g2x2large).
I found out that one solution is to reduce the `batch_size`. However, I am not sure if this parameter is only related to memory efficiency issues or if it will affect my results. As a matter of fact, I also noticed that the `batch_size` used in examples is usually a power of two, which I don't understand either.
I don't mind if my network takes longer to train, but I would like to know if reducing the `batch_size` will decrease the quality of my predictions.
Thanks. | 2016/07/01 | [
"https://datascience.stackexchange.com/questions/12532",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/17484/"
]
| I feel the accepted answer is possibly wrong. There are variants in *Gradient Descent Algorithms*.
1. **Vanilla Gradient Descent**: Here the gradient is calculated on all the data points in a single shot and the average is taken. Hence we have a smoother version of the gradient, but it takes a longer time to learn.
2. **Stochastic Gradient Descent**: Here one data point is used at a time, so the gradient is aggressive (noisy gradients) and there is going to be a lot of oscillation (we use momentum parameters, e.g. Nesterov, to control this). So there is a chance that the oscillations make the algorithm not reach a local minimum (diverge).
3. **Mini-Batch Gradient Descent**: This takes the perks of both of the previous ones by averaging the gradients of a small batch. Hence it is not as aggressive as SGD, and it allows online learning, which vanilla GD never allowed.
The smaller the mini-batch, the better the performance of your model tends to be (not always), and of course it also affects how fast you learn across epochs. If you are training on a large dataset, you want fast convergence with good performance, hence we pick mini-batch GD.
SGD has a fixed learning rate, hence we move to adaptive optimizers like Adam, AdaDelta, RMSProp, etc., which change the learning rate based on the history of gradients. | A couple of papers have been published showing (and conventional wisdom in 2020 still seems persuaded) that, as [Yann LeCun](https://twitter.com/ylecun/status/989610208497360896) put it, large batches are bad for your health.
Two relevant papers are
* [Revisiting Small Batch Training For Deep Neural Networks, Dominic Masters and Carlo Luschi](https://arxiv.org/abs/1804.07612) which implies that anything over 32 may degrade training in SGD.
and
* [On Large-batch Training For Deep Learning: Generalization Gap And Sharp Minima](https://arxiv.org/abs/1609.04836)
which offers possible reasons. To paraphrase badly, big batches are likely to get stuck in local (“sharp”) minima, small batches not. There is some interplay with choice of learning rate. |
325,305 | Debian Lenny. For every user, including root:
```
# cat /proc/sys/fs/file-max
262144
# sysctl fs.file-max
fs.file-max = 262144
# ulimit -Hn
1024
# ulimit -Sn
1024
```
File `/etc/security/limits.conf` has no uncommented lines.
Where does it get that 1024? | 2011/08/19 | [
"https://superuser.com/questions/325305",
"https://superuser.com",
"https://superuser.com/users/88337/"
]
| The [`fs.file-max` sysctl](http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/sysctl/fs.txt;h=88fd7f5c8dcd61307171b3af852541d06a984380;hb=HEAD#87) shows how many file handles can be allocated *system-wide*, while `ulimit` resource limits are per-process (or per-UID). The former is described in [`Documentation/sysctl/fs.txt:90`](http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/sysctl/fs.txt;h=88fd7f5c8dcd61307171b3af852541d06a984380;hb=HEAD#87):
```
file-max & file-nr:
The value in file-max denotes the maximum number of file-
handles that the Linux kernel will allocate. When you get lots
of error messages about running out of file handles, you might
want to increase this limit.
```
The 1024 files rlimit is not explicitly set anywhere; it's hardcoded into the kernel as the default value for pid 1, at [`include/asm-generic/resource.h:81`](http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=include/asm-generic/resource.h;h=61fa862fe08d61fb326bd66c20e31c567e3fb6f8;hb=HEAD#l68):
```
/*
* boot-time rlimit defaults for the init task:
*/
#define INIT_RLIMITS \
{ \
...
[RLIMIT_NOFILE] = { INR_OPEN_CUR, INR_OPEN_MAX }, \
...
}
```
which references `INR_OPEN_CUR` and `INR_OPEN_MAX` from [`include/linux/fs.h:26`](http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=include/linux/fs.h;h=178cdb4f1d4afe81a66e631872de4b587e95d1c4;hb=HEAD#l26):
```
#define INR_OPEN_CUR 1024 /* Initial setting for nfile rlimits */
#define INR_OPEN_MAX 4096 /* Hard limit for nfile rlimits */
```
Other processes simply inherit the limit from `init` (or whatever is pid 1).
Why does `/proc/1/limits` on Debian report 1024 as both soft *and hard* nfile limit? I don't know: neither the sysvinit sources nor Debian kernel patches change it. It could be the initramfs scripts, maybe. (I run Arch, which has the 1024/4096 default.) | For the question @grawity brought up, this kernel commit explains it:
```
commit 0ac1ee0bfec2a4ad118f907ce586d0dfd8db7641
Author: Tim Gardner
Date: Tue May 24 17:13:05 2011 -0700
ulimit: raise default hard ulimit on number of files to 4096
```
At least in RHEL5.4 it's 1024/1024, and in RHEL6.2 it's 1024/4096. |
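The distinction the answers draw (system-wide `fs.file-max` versus the per-process soft/hard `RLIMIT_NOFILE` pair) can be inspected from Python's standard library on a Unix system; note that the `resource` module is Unix-only:

```python
import resource

# Per-process limit: the pair that ulimit -Sn / ulimit -Hn reports.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("per-process nofile: soft=%s hard=%s" % (soft, hard))

# System-wide limit: what sysctl fs.file-max reports (Linux only).
try:
    with open("/proc/sys/fs/file-max") as fh:
        print("system-wide file-max:", fh.read().strip())
except IOError:  # e.g. on non-Linux Unixes without procfs
    print("no /proc/sys/fs/file-max on this system")
```

On the Debian system from the question, this would be expected to print soft=1024 alongside the much larger file-max value.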
9,133 | I get regularly the feedback in comments to use Latex or MathJax. Also I would write beautiful formulas what I can do only on pen & paper. Where can I get info about these? | 2016/09/10 | [
"https://physics.meta.stackexchange.com/questions/9133",
"https://physics.meta.stackexchange.com",
"https://physics.meta.stackexchange.com/users/32426/"
]
| (La)TeX is an extensible textual document format for mathematical notation. Although it is huge, you only need to know a very small subset of it to be able to write real formulas in your posts.
MathJax is a JavaScript library that makes LaTeX formatting usable in an HTML/JavaScript/CSS environment. The Stack Exchange network uses MathJax on most of the sites that require it; Physics SE is one of them.
Its markup, at the level you need, is very simple, even easier than HTML. Here is what is enough for you; you can learn it in around 5 minutes.
* Anything you want to be shown in the page as a formula has to be included between two \$s. So, `a text like $a^b$` will be shown like this: [](https://i.stack.imgur.com/NJJvP.png).
* If you use double dollars, the formula will be center-aligned in a new line.
* `+` and `-` work as usual: [](https://i.stack.imgur.com/XMr03.png).
* For multiplication, you can use 1) nothing (`$ab$` is [](https://i.stack.imgur.com/XgYls.png)) 2) `\cdot` (`$a\cdot b$` is [](https://i.stack.imgur.com/DDgEa.png)) 3) and some others.
* For fractions, there is a more complex syntax: [](https://i.stack.imgur.com/g4uL6.png) is expressed by `$\frac{a}{b}$`.
* Here you can see that you can use `{}` for grouping terms. It is essentially like `()` in math, but it is only for positioning the elements, and it will be invisible in the result.
* You can use powers, or you can write anything in the top-right index with a `^`. For example, `$a^b$` will be [](https://i.stack.imgur.com/zB3j0.png).
* You can get things into the bottom-right index by a `_`: `$a_b$` will be [](https://i.stack.imgur.com/pM117.png).
* Square roots go like this: `$\sqrt{a}$` will be [](https://i.stack.imgur.com/bx4aO.png).
* The common constants e (in the sense of 2.71...), h (the Planck constant) and c (the speed of light) can be written simply as text (thus, without a leading `\`).
* [](https://i.stack.imgur.com/YThrA.png) and the other greek letters can be produced with a leading backslash, like: `$\pi$`, `$\delta$`, `$\epsilon$`. The capital greek letters can usually be produced by capitalizing the markup: `$\Pi$`, `$\Delta$` will be [](https://i.stack.imgur.com/y2Via.png).
* Degree signs are a little bit tricky. There is `$\circ$` to show a little circle: [](https://i.stack.imgur.com/bEW9g.png). Put it into the top-right index: `$42^\circ C$` will be [](https://i.stack.imgur.com/nIJ0f.png).
* Infinity is `\infty` ([](https://i.stack.imgur.com/l7myD.png)).
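As a worked combination of several pieces above (grouping, `\frac`, `\sqrt`, superscripts, and a greek letter), here is one compound expression; the formula chosen is just an arbitrary example:

```latex
% Typed between double dollars for a centered display formula:
$$x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}, \qquad A = \pi r^{2}$$
```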
---
A good way to practice is to start writing a question or an answer on the main site. Below your textbox you will see how it will look after posting (**Warning: don't actually post your tries; it would contaminate the site! Discard them when you're done!**).
---
Homework: formulate in MathJax the well-known time dilation formula of the SR:
[](https://i.stack.imgur.com/pZOmF.png) | While this seems like a well-intentioned thread, there is already a comprehensive tutorial over on the [Mathematics Meta Stack Exchange](https://math.meta.stackexchange.com/) site, at
>
> [MathJax basic tutorial and quick reference](https://math.meta.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference)
> -------------------------------------------------------------------------------------------------------------------------------------------
>
>
>
and there really isn't that much point in duplicating the enormous amount of work that has gone into writing that. |
314,892 | I've been using MongoDB on CentOS with ext4 for a few months now with no problems. I'm potentially going to be moving to Joyentcloud, and their SmartOS is based off of OpenSolaris and uses ZFS. I'm wondering if there could be some downsides to switching to this for MongoDB? | 2011/09/23 | [
"https://serverfault.com/questions/314892",
"https://serverfault.com",
"https://serverfault.com/users/8964/"
]
| If you want all mail redirected to Gmail, then you can accept it there instead, without redirecting: all mail is handled at Google directly. This is called [Google Apps](http://www.google.com/apps/intl/en/business/index.html) and is free of charge for a limited number of users. And as I understood, you only have one per domain. | MailEnable is a lightweight and free Windows mail server, dead easy too
Google for it; you can set it up to be as simple or as complex as you like
Good luck |
45,825,541 | I seem to be doing something improper here and I am dumbfounded on what it is. I am trying to launch The Foundry NUKE from within Maya, and when I try I get this error relating to a module not found. But if I load up Nuke on a regular command line it loads up perfect fine. Seems something related to Maya's Python interpreter not able to find this module? I can't seem to find it...
Update #1: I have even tried doing some sys.path.appends of the Nuke plugin, DLLs, lib, and include dirs before all of this to no avail...
Update #2: I've reinstalled my Python and verified that it is 64 bit. Also checked my Maya and Nuke versions which are 64 bit. I've tried the following as well... opening up a normal Python command prompt outside of Maya to load Nuke via an os.system call and it works. It is only when doing an os.system call of Nuke in Maya that it fails with problems importing this \_socket module. When checking what \_socket module Maya is loading I get:
```
import _socket
print _socket.__file__
C:\Program Files\Autodesk\Maya2016\Python\DLLs\_socket.pyd
```
Leading me to believe that Maya's Python is loading a different version of this \_socket than the one Nuke expects, and that something is going wrong there.
Original Code/Errors:
```
C:\Program Files\Nuke9.0v8\Nuke9.0.exe
Traceback (most recent call last):
File "C:/Program Files/Nuke9.0v8/plugins/init.py", line 22, in <module>
import nukescripts.ViewerProcess
File "C:/Program Files/Nuke9.0v8/plugins\nukescripts\__init__.py", line 22, in <module>
from nukeprofiler import *
File "C:/Program Files/Nuke9.0v8/plugins\nukescripts\nukeprofiler.py", line 2, in <module>
import socket
File "C:\Python27\lib\socket.py", line 47, in <module>
import _socket
ImportError: DLL load failed: The specified module could not be found.
C:/Program Files/Nuke9.0v8/plugins/init.py : error interpreting this plugin
```
---
```
from PySide import QtCore, QtGui
import maya.cmds as cmds
import os, sys
#import subprocess
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setObjectName("Dialog")
Dialog.resize(314, 216)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Ignored, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(Dialog.sizePolicy().hasHeightForWidth())
Dialog.setSizePolicy(sizePolicy)
self.blastBtn = QtGui.QPushButton(Dialog)
self.blastBtn.setGeometry(QtCore.QRect(110, 130, 75, 23))
self.blastBtn.setObjectName("blastBtn")
self.blastBtn.clicked.connect(self.RunPlayblast)
self.retranslateUi(Dialog)
QtCore.QMetaObject.connectSlotsByName(Dialog)
def retranslateUi(self, Dialog):
Dialog.setWindowTitle(QtGui.QApplication.translate("Dialog", "Playblast & Nuke Me", None, QtGui.QApplication.UnicodeUTF8))
self.blastBtn.setText(QtGui.QApplication.translate("Dialog", "Blast", None, QtGui.QApplication.UnicodeUTF8))
def RunPlayblast(self):
cmds.playblast(fmt="qt", f="myMovie.mov", fo=True)
self.RunNuke()
def RunNuke(self):
nukeExe = r'C:\Program Files\Nuke9.0v8\Nuke9.0.exe'
myTemplate = r'B:\home\nukePBTemplate.nk'
os.system('"'+nukeExe+'" -x ' +myTemplate)
#command = nukeExe+" -x "+myTemplate
#subprocess.Popen(command)
if __name__ == '__main__':
app = QtGui.QApplication.instance()
Dialog = QtGui.QDialog()
blastMe = Ui_Dialog()
blastMe.setupUi(Dialog)
Dialog.show()
app.exec_()
``` | 2017/08/22 | [
"https://Stackoverflow.com/questions/45825541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5416347/"
]
| I think you can use the built-in `subprocess` module to call Nuke like this:
```
import subprocess
nukeProcess = subprocess.Popen(["C:/Program Files/Nuke9.0v8/Nuke9.0.exe", "-x", "B:/home/nukePBTemplate.nk"])
```
And I think you should never use `mel.eval("system /whatever")` since you are calling from Python; Python already has many built-in methods to deal with system commands, like `subprocess`, `commands`, etc.
**Update**
```
import subprocess, os
newEnv = os.environ.copy()
newEnv["PYTHONPATH"] = newEnv["PATH"] + "/local/share/python/2.7/libs"
nukeProcess = subprocess.Popen(["C:/Program Files/Nuke9.0v8/Nuke9.0.exe", "-x", "B:/home/nukePBTemplate.nk"], env=newEnv)
``` | You should run the `system` MEL command inside your Python script. To do that you need to use the `mel.eval` method with a string argument (a string inside a string). The purpose of the `eval` command is to provide a way for the user to execute a MEL command or procedure which can only be determined at runtime.
```
import maya.mel as mel
def RunNuke(self):
mel.eval('system "/Applications/Nuke10.5v5/Nuke10.5v5.app/Contents/MacOS/Nuke10.5v5 --nc --nukex"')
```
**I'm running Non-commercial version of NUKEX on a macOS (not Windows) and it works.**
Full version of your code should look like this:
```
from PySide import QtCore, QtGui
import maya.cmds as cmds
import os, sys
import maya.mel as mel
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setObjectName("Dialog")
Dialog.resize(314, 216)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Ignored, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth( Dialog.sizePolicy().hasHeightForWidth() )
Dialog.setSizePolicy(sizePolicy)
self.blastBtn = QtGui.QPushButton(Dialog)
self.blastBtn.setGeometry(QtCore.QRect(110, 130, 75, 23))
self.blastBtn.setObjectName("blastBtn")
self.blastBtn.clicked.connect(self.RunPlayblast)
self.retranslateUi(Dialog)
QtCore.QMetaObject.connectSlotsByName(Dialog)
def retranslateUi(self, Dialog):
Dialog.setWindowTitle(QtGui.QApplication.translate("Dialog", "Playblast & Nuke Me", None, QtGui.QApplication.UnicodeUTF8))
self.blastBtn.setText(QtGui.QApplication.translate("Dialog", "Blast", None, QtGui.QApplication.UnicodeUTF8))
def RunPlayblast(self):
cmds.playblast(fmt="qt", f="myMovie.mov", fo=True)
self.RunNuke()
def RunNuke(self):
mel.eval('system "/Applications/Nuke10.5v5/Nuke10.5v5.app/Contents/MacOS/Nuke10.5v5 --nc --nukex "')
if __name__ == '__main__':
app = QtGui.QApplication.instance()
Dialog = QtGui.QDialog()
blastMe = Ui_Dialog()
blastMe.setupUi(Dialog)
Dialog.show()
app.exec_()
```
[](https://i.stack.imgur.com/BBOYh.gif)
If you are still experiencing problems launching NUKE from within Maya, try running it in `safe mode`:
```
import maya.mel as mel
def RunNuke(self):
    # Flags belong inside the MEL string; escaped inner quotes keep
    # the space in "Program Files" intact for the shell.
    mel.eval('system "\\"C:/Program Files/Nuke9.0v8/Nuke9.0.exe\\" --safe --nc --nukex"')
```
>
> Read it: [Launching NUKE and HIERO in safe mode](https://support.foundry.com/hc/en-us/articles/206986449-Q100038-Launching-Nuke-NukeX-NukeStudio-Hiero-in-safe-mode).
>
>
>
If it still doesn't work, I think it's a bug in your Python library, or it's an issue in the content of init.py. |
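A common thread in the answers above is that the child process inherits Maya's Python environment. A minimal, framework-free sketch of stripping the interpreter-related variables before launching an external tool (the Nuke path would be the real target and is the hypothetical part; here the child is just another Python, so the example is runnable anywhere):

```python
import os
import subprocess
import sys

def clean_python_env():
    """Copy the current environment, minus the variables that leak
    this interpreter's module search path into child processes."""
    env = os.environ.copy()
    for var in ("PYTHONPATH", "PYTHONHOME"):
        env.pop(var, None)
    return env

# The child just reports what it inherited, proving the variable
# was stripped before launch.
out = subprocess.check_output(
    [sys.executable, "-c",
     "import os; print(os.environ.get('PYTHONPATH', 'unset'))"],
    env=clean_python_env(),
)
print(out.decode().strip())  # -> unset
```

The same `env=` argument works with the Nuke command line in place of `sys.executable`, which is essentially what the `subprocess` answer above does, but subtractively instead of additively.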
130,682 | As far as I can see, a cape and a hood are mostly identical types of clothing. Does that mean that they are synonyms? What's the difference between them?
I was watching the cartoon and the movie, and she isn't wearing a hood there; she wears a cape and a hat. Why is her name Red Riding Hood, and what does riding mean? And why is her clothing called a hood?
P.S. Why *red riding hood* when it should be *red hat*? | 2017/05/27 | [
"https://ell.stackexchange.com/questions/130682",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/44134/"
]
| Capes and hoods are not identical. They can be worn together, but they are separate pieces of clothing. Capes are sleeveless cloaks, often associated with superhero attire. Long, flowing pieces of material affixed to the neck/shoulder area in some fashion.
[](https://i.stack.imgur.com/MUFOo.jpg)
Hoods are a form of headwear, that cover the head and neck and expose the face. They often form part of a coat or cloak, and inspire the name *hoodie* for hooded sweatshirts and jackets.
[](https://i.stack.imgur.com/FU1tV.jpg)
As you can see, these are very different pieces of clothing.
With regards to your postscript, this is because hats and hoods are different classes of headwear. Hats generally do not cover the neck and have brims, which hoods do not have. | A riding hood is [*an enveloping hood or hooded cloak worn for riding and as an outdoor wrap by women and children.* - M-W](https://www.merriam-webster.com/dictionary/riding%20hood) This kind of garb was probably more common when people routinely got around on horseback (like in the days when popular old children's stories were written). Pulling the hood back off the head would make it blend in with the rest of the cloak.
But in re-envisioning old stories into modern picture books or cartoons, original meaning is often lost. "Little Red Riding Hood" becomes the girl's name instead of a nickname based on a reference to her red cloak, and her outfit might be revised to be more modern and familiar to what children relate to. Similarly, "Goldilocks" becomes the girl's literal name instead of a reference to her hair color (if the story had been written today, she might have been called "Blondie"). |
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| HINT:
As $x\to0$, $x\ne0$, so we can safely divide the numerator and denominator by $x$
and use
$$\lim\_{h\to0}\dfrac{e^h-1}h=1$$
Observe that the exponent of $e$, the limit variable, and the denominator are the **same**. | **Hint**: We can divide by $x $ to get $$\lim\_{x \to 0} \frac {5-(\frac {e^{2x}-1}{2x}(2))}{3+3 (\frac {e^{4x}-1}{4x}(4))} $$ Can you take it from here? |
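For completeness, combining the two hints (divide by $x$, then apply $\lim\_{h\to0}\dfrac{e^h-1}h=1$ with $h=2x$ and $h=4x$) gives:

```latex
$$\lim_{x \to 0} \frac{5 - 2\,\frac{e^{2x}-1}{2x}}{3 + 12\,\frac{e^{4x}-1}{4x}}
  = \frac{5 - 2\cdot 1}{3 + 12\cdot 1} = \frac{3}{15} = \frac{1}{5}.$$
```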
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| Using Taylor's series:
$$\frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \frac{5x - (1+2x + O(x^2))+1}{3x +3(1+4x+O(x^2))-3} = \frac{3x - O(x^2)}{15x+ O(x^2)} .$$
Thus, $$\lim\_{x \rightarrow 0} \frac{3x - O(x^2)}{15x+ O(x^2)} = \frac{1}{5}.$$ | **Hint**: We can divide by $x $ to get $$\lim\_{x \to 0} \frac {5-(\frac {e^{2x}-1}{2x}(2))}{3+3 (\frac {e^{4x}-1}{4x}(4))} $$ Can you take it from here? |
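As a quick numerical sanity check of the $1/5$ result (an editorial sketch, not part of the original answers), one can evaluate the expression at small $x$, using `math.expm1` to avoid catastrophic cancellation:

```python
import math

def f(x):
    # (5x - e^(2x) + 1) / (3x + 3e^(4x) - 3), with e^t - 1 computed
    # via expm1 to avoid cancellation near x = 0
    return (5 * x - math.expm1(2 * x)) / (3 * x + 3 * math.expm1(4 * x))

for x in (1e-2, 1e-4, 1e-6):
    print(x, f(x))  # values approach 1/5 = 0.2
```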
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| $$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\\
\lim\_{x \to 0} \frac{5x - (1+2x+\frac{(2x)^2}{2!}+O(x^3))+1}{3x +3(1+4x+\frac{(4x)^2}{2!}+O(x^3))-3} =\\
\lim\_{x \to 0} \frac{3x-\frac{(2x)^2}{2!}+O(x^3)}{15x +3\frac{(4x)^2}{2!}+O(x^3)} =
\lim\_{x \to 0} \frac{3x}{15x } =\frac{3}{15}$$ | $$ \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\frac12\frac{\dfrac52-\dfrac{e^{2x}-1}{2x}}{\dfrac34+3\dfrac{e^{4x}-1}{4x}}\to\frac12\frac{\dfrac52-L}{\dfrac34+3L}.$$ (by a scaling of $x$, the two ratios tend to $L$).
Then you can take for granted that $L=1$, giving the answer $\dfrac15$. |
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| HINT:
As $x\to0$ we have $x\ne0$, so we can safely divide the numerator and denominator by $x$
and use
$$\lim\_{h\to0}\dfrac{e^h-1}h=1$$
Observe that the exponent of $e$, the limit variable, and the denominator are the **same**. | Do you know the Taylor formula? If so, then
$$
e^x = 1 + x + o(x)
$$
as $x \to 0$. Here, $o(x)$ is a function such that $o(x)/x \to 0$ as $x \to 0$.
Hence
$$
\frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \frac{5x - (1 + 2x + o(x)) + 1}{3x + 3(1 + 4x + o(x))-3} = \frac{3x+ o(x)}{15x+o(x)} = \frac{3+o(x)/x}{15+o(x)/x}
$$
which shows that your limit is $1/5$. |
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| Using Taylor's series:
$$\frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \frac{5x - (1+2x + O(x^2))+1}{3x +3(1+4x+O(x^2))-3} = \frac{3x - O(x^2)}{15x+ O(x^2)} .$$
Thus, $$\lim\_{x \rightarrow 0} \frac{3x - O(x^2)}{15x+ O(x^2)} = \frac{1}{5}.$$ | $$ \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\frac12\frac{\dfrac52-\dfrac{e^{2x}-1}{2x}}{\dfrac34+3\dfrac{e^{4x}-1}{4x}}\to\frac12\frac{\dfrac52-L}{\dfrac34+3L}.$$ (by a scaling of $x$, the two ratios tend to $L$).
Then you can take for granted that $L=1$, giving the answer $\dfrac15$. |
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| HINT:
As $x\to0$ we have $x\ne0$, so we can safely divide the numerator and denominator by $x$
and use
$$\lim\_{h\to0}\dfrac{e^h-1}h=1$$
Observe that the exponent of $e$, the limit variable, and the denominator are the **same**. | Using Taylor's series:
$$\frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \frac{5x - (1+2x + O(x^2))+1}{3x +3(1+4x+O(x^2))-3} = \frac{3x - O(x^2)}{15x+ O(x^2)} .$$
Thus, $$\lim\_{x \rightarrow 0} \frac{3x - O(x^2)}{15x+ O(x^2)} = \frac{1}{5}.$$ |
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| Using Taylor's series:
$$\frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \frac{5x - (1+2x + O(x^2))+1}{3x +3(1+4x+O(x^2))-3} = \frac{3x - O(x^2)}{15x+ O(x^2)} .$$
Thus, $$\lim\_{x \rightarrow 0} \frac{3x - O(x^2)}{15x+ O(x^2)} = \frac{1}{5}.$$ | Do you know the Taylor formula? If so, then
$$
e^x = 1 + x + o(x)
$$
as $x \to 0$. Here, $o(x)$ is a function such that $o(x)/x \to 0$ as $x \to 0$.
Hence
$$
\frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \frac{5x - (1 + 2x + o(x)) + 1}{3x + 3(1 + 4x + o(x))-3} = \frac{3x+ o(x)}{15x+o(x)} = \frac{3+o(x)/x}{15+o(x)/x}
$$
which shows that your limit is $1/5$. |
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| $$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\\
\lim\_{x \to 0} \frac{5x - (1+2x+\frac{(2x)^2}{2!}+O(x^3))+1}{3x +3(1+4x+\frac{(4x)^2}{2!}+O(x^3))-3} =\\
\lim\_{x \to 0} \frac{3x-\frac{(2x)^2}{2!}+O(x^3)}{15x +3\frac{(4x)^2}{2!}+O(x^3)} =
\lim\_{x \to 0} \frac{3x}{15x } =\frac{3}{15}$$ | **Hint**: We can divide by $x $ to get $$\lim\_{x \to 0} \frac {5-(\frac {e^{2x}-1}{2x}(2))}{3+3 (\frac {e^{4x}-1}{4x}(4))} $$ Can you take it from here? |
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| HINT:
As $x\to0$ we have $x\ne0$, so we can safely divide the numerator and denominator by $x$
and use
$$\lim\_{h\to0}\dfrac{e^h-1}h=1$$
Observe that the exponent of $e$, the limit variable, and the denominator are the **same**. | $$ \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\frac12\frac{\dfrac52-\dfrac{e^{2x}-1}{2x}}{\dfrac34+3\dfrac{e^{4x}-1}{4x}}\to\frac12\frac{\dfrac52-L}{\dfrac34+3L}.$$ (by a scaling of $x$, the two ratios tend to $L$).
Then you can take for granted that $L=1$, giving the answer $\dfrac15$. |
2,085,878 | Need to calculate the following limit without using L'Hopital's Rule:
$$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$
The problem I'm facing is that no matter what I do I still get expression of the form
$$ \frac{0}{0} $$
I thought maybe to use
$$ t = e^{2x} $$
But I still can't simplify it enough..
Thank you | 2017/01/06 | [
"https://math.stackexchange.com/questions/2085878",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/306907/"
]
| HINT:
As $x\to0$ we have $x\ne0$, so we can safely divide the numerator and denominator by $x$
and use
$$\lim\_{h\to0}\dfrac{e^h-1}h=1$$
Observe that the exponent of $e$, the limit variable, and the denominator are the **same**. | $$ \lim\_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\\
\lim\_{x \to 0} \frac{5x - (1+2x+\frac{(2x)^2}{2!}+O(x^3))+1}{3x +3(1+4x+\frac{(4x)^2}{2!}+O(x^3))-3} =\\
\lim\_{x \to 0} \frac{3x-\frac{(2x)^2}{2!}+O(x^3)}{15x +3\frac{(4x)^2}{2!}+O(x^3)} =
\lim\_{x \to 0} \frac{3x}{15x } =\frac{3}{15}$$ |
2,410 | We often seem to accept the idea that there were periods of time in which the entire surface of Earth was frozen, for the most part. This implies that there were periods of time in which the entire surface was NOT frozen over. Thus, there must have been heat and energy present on the surface. How did all that energy move to cause an ice age? It seems absurd for all that energy to just radiate into space or move deep into the Earth. | 2014/08/27 | [
"https://earthscience.stackexchange.com/questions/2410",
"https://earthscience.stackexchange.com",
"https://earthscience.stackexchange.com/users/813/"
]
| I'm not quite sure if the question is asking about glacials, ice ages, or snowball Earth, and whether it's about the onset or end of a glacial period. I'll try to hit all three.
Ice Ages and Milankovitch Cycles
--------------------------------
Ice ages are long spans of time marked by periods during which ice reaches far from the poles, interspersed with periods during which the ice retreats (but never quite goes away). The periods of time during which ice covers a good-sized fraction of the Earth are called glacials; the periods during which ice retreats to only cover areas in the far north and far south are called interglacials. We are living in ice age conditions right now. There's still ice on Antarctica and Greenland. We are also in an interglacial period within that larger ice age. The current ice age began about 33 million years ago, while the current interglacial began about 11,700 years ago.
The [Milankovitch cycles](http://en.wikipedia.org/wiki/Milankovitch_cycles) determine whether the Earth is in a glacial or interglacial period. Conditions are right for ice to form and spread when precession puts northern hemisphere summer near aphelion and winter near perihelion and when both obliquity and eccentricity are low. The Earth currently satisfies the first of those conditions, but obliquity and eccentricity are a bit too high. That makes our northern hemisphere summers a bit too warm and our winters a bit too cold.
The Milankovitch cycles provide several answers to the question "where does the energy go?" Those times when conditions are ripe for glaciation have energy in the northern hemisphere spread more uniformly across the year than times not conducive to glaciation. Summers are milder, which means accumulated snow doesn't melt as much. Winters are milder, which means more snow falls.
Once ice does become ubiquitous, another answer to the "where does the energy go" question is into space. Ice and snow are white. Their presence reduces the amount of sunlight absorbed by the Earth.
The first paper listed below, by Hays et al., is the seminal paper that brought the concept of Milankovitch cycles to the forefront. The second paper, by Abe-Ouchi et al., discusses a recent climate simulation that successfully recovers many salient features of the most recent glaciations. Most importantly, this paper appears to have solved the 100,000-year mystery and shows why deglaciation operates so quickly.
[Hays, J. D., Imbrie, J., & Shackleton, N. J. (1976, December). "Variations in the Earth's orbit: Pacemaker of the ice ages." *American Association for the Advancement of Science.*](http://courses.washington.edu/proxies/Hays-Pacemaker_of_Ice_Ages-Sci76.pdf)
[Abe-Ouchi, A., Saito, F., Kawamura, K., Raymo, M. E., Okuno, J. I., Takahashi, K., & Blatter, H. (2013). "Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume." *Nature*, 500(7461), 190-193.](http://www.nature.com/nature/journal/v500/n7461/abs/nature12374.html)
Icehouse Earth vs Greenhouse Earth
----------------------------------
The Earth's climate appears to have been toggling between two climate extremes for much of the Earth's existence, one where things are cold and ice is likely to form and the other where ice is absent worldwide except maybe right at the poles. Dinosaurs roamed the Arctic and Antarctica when the Earth was in one of its greenhouse phases. The Earth has been in a greenhouse phase for most of its existence.
Milankovitch cycles don't cause glaciation during greenhouse Earth periods. Ice ages happen when the Earth is in an icehouse phase. What appears to distinguish greenhouse and icehouse phases are the positions and orientations of the continents. Having a continent located over a pole helps cool the climate. Having continents oriented so they channel ocean circulation in a way that keeps the ocean cool also helps cool the climate.
The Earth transitioned from its hothouse mode to its icehouse mode 33 million years ago or so. That's right about when two key events happened in the ocean. Up until then, Antarctica was still connected to both Australia and South America. The separation from Tasmania formed the Tasmanian Gateway, while the separation from South America formed the Drake Passage. This marked the birth of the very cold Southern Ocean, it marked the buildup of ice on Antarctica, and it marked the end of the Eocene.
[Bijl, P. K., Bendle, J. A., Bohaty, S. M., Pross, J., Schouten, S., Tauxe, L., ... & Yamane, M. (2013). "Eocene cooling linked to early flow across the Tasmanian Gateway." *Proceedings of the National Academy of Sciences*, 110(24), 9645-9650.](http://www.pnas.org/content/110/24/9645.abstract)
[Exon, N., Kennett, J., & Malone, M. Leg 189 Shipboard Scientific Party (2000). "The opening of the Tasmanian gateway drove global Cenozoic paleoclimatic and paleoceanographic changes: Results of Leg 189." *JOIDES J*, 26(2), 11-18.](http://a.ennyu.com/pdf/Exon_JoiJour_00.pdf)
Snowball Earth and the Faint Young Sun Paradox
----------------------------------------------
Snowball Earth episodes were not your average ice age. Ice typically doesn't come near the tropics, even in the worst of ice ages. Snowball Earth means just that; the snow and ice encroached well into the tropics, possibly extending all the way to the equator.
The problem with snowball Earth isn't explaining where all the energy went. The real problem is explaining why the ancient Earth wasn't in a permanent snowball Earth condition, starting from shortly after the Earth radiated away the initial heat from the formation of the Earth.
The solar constant is not quite constant. While it doesn't change much at all from year to year, or even century to century, it changes a lot over the course of billions of years. Young G class stars produce considerably less energy than do middle aged G class stars, which in turn produce considerably less energy than do older G class stars. When our Sun was young, it produced only 75% or so as much energy than it does now.

By all rights, the Earth should have been completely covered with ice. The young Sun did not produce enough energy to support open oceans. This obviously was not the case. There is plenty of geological evidence that the Earth had open oceans even when the Earth was quite young.
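A back-of-the-envelope check of this point (an editorial sketch; the solar constant of 1361 W/m² and Bond albedo of 0.3 are assumed modern values, and the greenhouse effect is deliberately ignored): the effective radiating temperature $T = (S(1-a)/4\sigma)^{1/4}$ drops by roughly 18 K at 75% solar output.

```python
# Rough equilibrium (effective radiating) temperature of Earth:
# T = (S * (1 - a) / (4 * sigma)) ** 0.25
# Assumed values: solar constant S = 1361 W/m^2, Bond albedo a = 0.3.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eff(solar_constant, albedo=0.3):
    return (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25

t_now = t_eff(1361.0)           # ~255 K with today's Sun
t_young = t_eff(0.75 * 1361.0)  # ~237 K with the faint young Sun
print(round(t_now, 1), round(t_young, 1))
```

Both figures sit below freezing because the greenhouse effect is left out; the point is only the gap between them: the young Sun needs a much stronger greenhouse or a lower albedo to keep oceans open.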
This 40-year-old conundrum, first raised by Carl Sagan and George Mullen, is the [faint young Sun paradox](http://en.wikipedia.org/wiki/Faint_young_Sun_paradox). There have been a number of proposed ways out of the paradox, but none of them quite line up with the geological evidence.
One obvious way out is that the Earth's early atmosphere was very different from our nitrogen-oxygen atmosphere and contained significantly more greenhouse gases. The amount of greenhouse gases needed to avert a permanent snowball Earth is highly debated, ranging from not much at all to extreme amounts. Another way out is a reduced albedo due to the significantly smaller early continents and lack of life. The young Earth would have been mostly ocean, and ocean water is rather dark (unless it's covered with ice). Lack of life means no biogenic cloud condensation nuclei, which means fewer clouds.
[Goldblatt, C., & Zahnle, K. J. (2011). "Faint young Sun paradox remains." *Nature*, 474(7349), E1-E1.](http://www.nature.com/nature/journal/v474/n7349/abs/nature09961.html)
[Kienert, H., Feulner, G., & Petoukhov, V. (2012). "Faint young Sun problem more severe due to ice‐albedo feedback and higher rotation rate of the early Earth." *Geophysical Research Letters*, 39(23).](http://onlinelibrary.wiley.com/doi/10.1029/2012GL054381/abstract)
[Rosing, M. T., Bird, D. K., Sleep, N. H., & Bjerrum, C. J. (2010). "No climate paradox under the faint early Sun." *Nature*, 464(7289), 744-747.](http://www.nature.com/nature/journal/v464/n7289/abs/nature08955.html) | Of course it isn't "absurd", and looking at the ball-park energy budget figures you'll see why:
First, I don't think anyone is claiming the Earth was completely frozen. More of a "slushy at the Equator" scenario. But let's assume an average 1 km thickness of ice for argument's sake (probably an exaggeration, although polar ice would be thicker).
The current thermal budget [can be found here](http://en.wikipedia.org/wiki/Earth%27s_energy_budget) (including references).
Pertinent figures:
Geothermal heat flow (vertically through the rock, primarily from radioactive decay & cooling) is ~$0.084\ \mathrm{W/m^2}$.
Solar input: $340\ \mathrm{W/m^2}$. About a quarter of this is reflected back, but that fraction will vary according to conditions (e.g. ice cover, clouds, etc.). Assume no reflection. (Yes, the presence of ice would cause reflection - but it may also reduce cloud cover due to reduced evaporation?)
Note: geothermal energy is tiny relative to solar flux, so we'll ignore it.
That 1 km of ice = $1000\ \mathrm{m^3}$ of ice per square meter of surface.
Let's assume that the melting process would also involve a temperature increase of 10 °C in addition to the latent heat of melting (i.e. we melt the ice, but we also warm the resulting water a bit).
Total energy required = (temperature increase × specific heat capacity of water + latent heat of melting) × mass of the ice column (density × volume).
So if we plug in our numbers, that would be:
$$(10 \times 4.2 + 334) \times 10^6 \times 1000$$
$$= 3.8 \times 10^{11}\ \mathrm{Joules}$$
Heat flux through that same square meter is $340\ \mathrm{J/s}$.
So the time for the required energy to melt the ice = $3.8 \times 10^{11} / 340 \approx 1.1 \times 10^{9}\ \mathrm{s} \approx 35\ \mathrm{years}$.
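The ball-park arithmetic above as a small script (an editorial sketch using the answer's round numbers: a 1 km ice column per square meter, 10 °C of warming, 340 W/m² absorbed with no reflection):

```python
# Ball-park time to melt a 1 km ice column per square meter of surface.
C_WATER = 4200.0      # specific heat of water, J/(kg K)
L_MELT = 334_000.0    # latent heat of melting, J/kg
RHO = 1000.0          # density used in the answer, kg/m^3
COLUMN_M = 1000.0     # ice column height: 1 km
FLUX = 340.0          # absorbed solar flux, W/m^2 (no reflection)

mass = RHO * COLUMN_M                      # kg of ice per m^2
energy = mass * (10.0 * C_WATER + L_MELT)  # J: melt, then warm by 10 C
years = energy / FLUX / (365.25 * 24 * 3600)
print(f"{energy:.2e} J, {years:.0f} years")  # ~3.8e11 J, ~35 years
```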
You could argue about my ball-park estimates. For example, there would be more albedo reflection to melt the ice (=> takes longer). And you may not necessarily have to increase the temperature of the water as much (=> takes less time), and much of the Earth would not have 1 km of ice (=> takes less time). But this gives you a ball-park "yes, it is feasible".
---
Edit: I read the question a bit quickly - the above shows it is 'easy' to get from an ice planet to an ice-free planet. But the converse is also true. The amount of energy that keeps the Earth ice free can easily be added/subtracted over a timescale of a few centuries, just from solar and atmospheric effects. |
2,410 | We often seem to accept the idea that there were periods of time in which the entire surface of Earth was frozen, for the most part. This implies that there were periods of time in which the entire surface was NOT frozen over. Thus, there must have been heat and energy present on the surface. How did all that energy move to cause an ice age? It seems absurd for all that energy to just radiate into space or move deep into the Earth. | 2014/08/27 | [
"https://earthscience.stackexchange.com/questions/2410",
"https://earthscience.stackexchange.com",
"https://earthscience.stackexchange.com/users/813/"
]
| Of course it isn't "absurd", and looking at the ball-park energy budget figures you'll see why:
First, I don't think anyone is claiming the Earth was completely frozen. More of a "slushy at the Equator" scenario. But let's assume an average 1 km thickness of ice for argument's sake (probably an exaggeration, although polar ice would be thicker).
The current thermal budget [can be found here](http://en.wikipedia.org/wiki/Earth%27s_energy_budget) (including references).
Pertinent figures:
Geothermal heat flow (vertically through the rock, primarily from radioactive decay & cooling) is ~$0.084\ \mathrm{W/m^2}$.
Solar input: $340\ \mathrm{W/m^2}$. About a quarter of this is reflected back, but that fraction will vary according to conditions (e.g. ice cover, clouds, etc.). Assume no reflection. (Yes, the presence of ice would cause reflection - but it may also reduce cloud cover due to reduced evaporation?)
Note: geothermal energy is tiny relative to solar flux, so we'll ignore it.
That 1 km of ice = $1000\ \mathrm{m^3}$ of ice per square meter of surface.
Let's assume that the melting process would also involve a temperature increase of 10 °C in addition to the latent heat of melting (i.e. we melt the ice, but we also warm the resulting water a bit).
Total energy required = (temperature increase × specific heat capacity of water + latent heat of melting) × mass of the ice column (density × volume).
So if we plug in our numbers, that would be:
$$(10 \times 4.2 + 334) \times 10^6 \times 1000$$
$$= 3.8 \times 10^{11}\ \mathrm{Joules}$$
Heat flux through that same square meter is $340\ \mathrm{J/s}$.
So the time for the required energy to melt the ice = $3.8 \times 10^{11} / 340 \approx 1.1 \times 10^{9}\ \mathrm{s} \approx 35\ \mathrm{years}$.
You could argue about my ball-park estimates. For example, there would be more albedo reflection to melt the ice (=> takes longer). And you may not necessarily have to increase the temperature of the water as much (=> takes less time), and much of the Earth would not have 1 km of ice (=> takes less time). But this gives you a ball-park "yes, it is feasible".
---
Edit: I read the question a bit quickly - the above shows it is 'easy' to get from an ice planet to an ice-free planet. But the converse is also true. The amount of energy that keeps the Earth ice free can easily be added/subtracted over a timescale of a few centuries, just from solar and atmospheric effects. | What's not been touched yet, and the current answers do cover a lot of ground, is the variability of solar input.
Even IF the amount of energy radiated out by the earth remains the same (and it probably would, roughly), solar input is highly variable and is a major factor in determining the total energy budget of the planet.
Even a small change can have far reaching consequences. Thus if the sun's output goes down by only a few percent (and that's well within its variability even over the 11 year sunspot cycle) temperatures on earth will swing with that (over that short a cycle that's pretty much averaged out by changes in ocean currents). If such a "dip" lasts longer, think a few centuries, you get a "little ice age" as we just emerged from in the 19th century (and by some accounts may be on the verge of slipping into the next, as the sun is again seemingly rather inactive).
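For a rough sense of scale (an editorial sketch; it uses only the $T \propto S^{1/4}$ scaling of the no-atmosphere effective temperature, with no feedbacks, so real swings would be larger):

```python
# Linearized response of the effective temperature to a change in
# solar output: T ~ S**0.25 implies dT ≈ T * dS / (4 * S).
T_EFF = 255.0  # K, assumed effective radiating temperature of Earth

def delta_t(fractional_change_in_s):
    return T_EFF * fractional_change_in_s / 4.0

print(delta_t(0.001))  # a 0.1% dip in solar output: ~0.06 K
print(delta_t(0.01))   # a 1% dip: ~0.64 K before any feedbacks
```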
And the sun has longer cycles; a low-activity cycle lasting a few thousand years can well drop the Earth into a full-scale ice age. And as the ice sheets grow, reflection goes up, less energy heats the planet, and fewer clouds form. Until the sun enters a high-activity phase again, the planet will remain (relatively) cold.
As said, this is unlikely to cause the entire planet to disappear under a blanket of ice. The last major ice age "only" came down to about the latitude of the Rhine delta in Europe for example. |
2,410 | We often seem to accept the idea that there were periods of time in which the entire surface of Earth was frozen, for the most part. This implies that there were periods of time in which the entire surface was NOT frozen over. Thus, there must have been heat and energy present on the surface. How did all that energy move to cause an ice age? It seems absurd for all that energy to just radiate into space or move deep into the Earth. | 2014/08/27 | [
"https://earthscience.stackexchange.com/questions/2410",
"https://earthscience.stackexchange.com",
"https://earthscience.stackexchange.com/users/813/"
]
| Of course it isn't "absurd", and looking at the ball-park energy budget figures you'll see why:
First, I don't think anyone is claiming the Earth was completely frozen. More of a "slushy at the Equator" scenario. But let's assume an average 1 km thickness of ice for argument's sake (probably an exaggeration, although polar ice would be thicker).
The current thermal budget [can be found here](http://en.wikipedia.org/wiki/Earth%27s_energy_budget) (including references).
Pertinent figures:
Geothermal heat flow (vertically through the rock, primarily from radioactive decay & cooling) is ~$0.084\ \mathrm{W/m^2}$.
Solar input: $340\ \mathrm{W/m^2}$. About a quarter of this is reflected back, but that fraction will vary according to conditions (e.g. ice cover, clouds, etc.). Assume no reflection. (Yes, the presence of ice would cause reflection - but it may also reduce cloud cover due to reduced evaporation?)
Note: geothermal energy is tiny relative to solar flux, so we'll ignore it.
That 1 km of ice = $1000\ \mathrm{m^3}$ of ice per square meter of surface.
Let's assume that the melting process would also involve a temperature increase of 10 °C in addition to the latent heat of melting (i.e. we melt the ice, but we also warm the resulting water a bit).
Total energy required = (temperature increase × specific heat capacity of water + latent heat of melting) × mass of the ice column (density × volume).
So if we plug in our numbers, that would be:
$$(10 \times 4.2 + 334) \times 10^6 \times 1000$$
$$= 3.8 \times 10^{11}\ \mathrm{Joules}$$
Heat flux through that same square meter is $340\ \mathrm{J/s}$.
So the time for the required energy to melt the ice = $3.8 \times 10^{11} / 340 \approx 1.1 \times 10^{9}\ \mathrm{s} \approx 35\ \mathrm{years}$.
You could argue about my ball-park estimates. For example, there would be more albedo reflection to melt the ice (=> takes longer). And you may not necessarily have to increase the temperature of the water as much (=> takes less time), and much of the Earth would not have 1 km of ice (=> takes less time). But this gives you a ball-park "yes, it is feasible".
---
Edit: I read the question a bit quickly - the above shows it is 'easy' to get from an ice planet to an ice-free planet. But the converse is also true. The amount of energy that keeps the Earth ice free can easily be added/subtracted over a timescale of a few centuries, just from solar and atmospheric effects. |
I propose that the "snowball Earth" was brought to a close by the dust from a huge meteorite (the largest known on Earth) impacting Australia (see <http://charles_w.tripod.com/antipode.html>) settling onto the ice and melting it by a bare-soil warming effect (see <http://charles_w.tripod.com/climate.html>), and thus initiating the Cambrian. The dust fertilizing the ocean probably contributed considerably to the explosion of life then. That initiation was probably considerably assisted by the subsequent release of methane gas from methane ice under the ocean floor and by dust from volcanic eruptions from the Bahamas Islands, which are located at the antipode (opposite side of a sphere) of the above impact. The close correlation of volcanoes on Mars with meteorite impacts at their antipodes gives supporting evidence for such a phenomenon.
Sincerely, Charles Weber |
2,410 | We often seem to accept the idea that there were periods of time in which the entire surface of Earth was frozen, for the most part. This implies that there were periods of time in which the entire surface was NOT frozen over. Thus, there must have been heat and energy present on the surface. How did all that energy move to cause an ice age? It seems absurd for all that energy to just radiate into space or move deep into the Earth. | 2014/08/27 | [
"https://earthscience.stackexchange.com/questions/2410",
"https://earthscience.stackexchange.com",
"https://earthscience.stackexchange.com/users/813/"
]
| Of course it isn't "absurd", and looking at the ball-park energy budget figures you'll see why:
First, I don't think anyone is claiming the Earth was completely frozen. More of a "slushy at the Equator" scenario. But let's assume an average 1 km thickness of ice for argument's sake (probably an exaggeration, although polar ice would be thicker).
The current thermal budget [can be found here](http://en.wikipedia.org/wiki/Earth%27s_energy_budget) (including references).
Pertinent figures:
Geothermal heat flow (vertically through the rock, primarily from radioactive decay & cooling) is ~$0.084\ \mathrm{W/m^2}$.
Solar input: $340\ \mathrm{W/m^2}$. About a quarter of this is reflected back, but that fraction will vary according to conditions (e.g. ice cover, clouds, etc.). Assume no reflection. (Yes, the presence of ice would cause reflection - but it may also reduce cloud cover due to reduced evaporation?)
Note: geothermal energy is tiny relative to solar flux, so we'll ignore it.
That 1 km of ice = $1000\ \mathrm{m^3}$ of ice per square meter of surface.
Let's assume that the melting process would also involve a temperature increase of 10 °C in addition to the latent heat of melting (i.e. we melt the ice, but we also warm the resulting water a bit).
Total energy required = (temperature increase × specific heat capacity of water + latent heat of melting) × mass of the ice column (density × volume).
So if we plug in our numbers, that would be:
$$(10 \times 4.2 + 334) \times 10^6 \times 1000$$
$$= 3.8 \times 10^{11}\ \mathrm{Joules}$$
Heat flux through that same square meter is $340\ \mathrm{J/s}$.
So the time for the required energy to melt the ice = $3.8 \times 10^{11} / 340 \approx 1.1 \times 10^{9}\ \mathrm{s} \approx 35\ \mathrm{years}$.
You could argue about my ball-park estimates. For example, there would be more albedo reflection to melt the ice (=> takes longer). And you may not necessarily have to increase the temperature of the water as much (=> takes less time), and much of the Earth would not have 1 km of ice (=> takes less time). But this gives you a ball-park "yes, it is feasible".
---
Edit: I read the question a bit quickly - the above shows it is 'easy' to get from an ice planet to an ice-free planet. But the converse is also true. The amount of energy that keeps the Earth ice free can easily be added/subtracted over a timescale of a few centuries, just from solar and atmospheric effects. | One cause of glaciation on Earth is dust from volcanic activity; you have heard the terms volcanic winter and nuclear winter.
During glaciation, continental plates with a mile or two of ice piled upon them increase vastly in weight and actually sink. When we see a rock we see a hard stone object, but rock within the earth under a vast amount of pressure is not "rock" solid; it is more like peanut butter, soft and pliable. This is how continental plates are able to sink under the weight of a sheet of ice a mile or two thick.
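A crude Airy-isostasy estimate of how far such an ice load pushes the crust down (an editorial sketch; the densities are assumed textbook values, and real rebound is slowed by mantle viscosity):

```python
# Airy isostasy: the crust sinks until the displaced "peanut butter"
# (mantle) balances the ice load: depression = h_ice * rho_ice / rho_mantle.
RHO_ICE = 917.0      # kg/m^3 (assumed)
RHO_MANTLE = 3300.0  # kg/m^3 (assumed)

def depression(ice_thickness_m):
    return ice_thickness_m * RHO_ICE / RHO_MANTLE

# A sheet of ice "a mile or two thick" (~2 km, a hypothetical round number):
print(round(depression(2000.0)), "m")  # roughly 550-560 m of sinking
```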
Not only do the continental plates have weight pushing down on this peanut-butter-like rock beneath us; the oceans also have weight, and they likewise push the plates beneath them down into this soft stone.
During glacial maximums, ocean levels drop by as much as 400 feet, so what is currently 400 feet underwater was dry land during the last glacial maximum. This loss of 400 feet of water across all the oceans of the earth greatly reduces the weight upon the ocean floors. That same water is turned to snow and collects as ice on the continental plates, adding that same weight to the continental plates. So the oceanic plates rise while the continental plates drop.
During interglacial periods you get a lightening of continental plates, and the oceanic plates become heavier with the return of high ocean water levels, i.e. global sea level rise such as we are experiencing currently. Continental plates rise and oceanic plates drop. This reaction is not immediate: if you put a board over several inches of peanut butter and add weight to the board, it takes some time for the peanut butter to squish out the sides, and the same is true of the continental plates and the soft stone they sit atop. It takes thousands of years for the continental plates to rebound to their original height. During this rebounding of the continental plates and dropping of the oceanic plates, volcanic activity slowly starts to increase. Eventually it is only a matter of time before the plate movement produces another great volcanic eruption that puts a great deal of ash in the atmosphere, and this continues again and again with increased levels of volcanic activity. This ash reflects sunlight back into space before it can reach and warm the surface, thereby reducing average global temperatures.
This is certainly not the only mechanism that controls glacial maximum and interglacial periods, but it does work in concert with the answers written above. The normal condition for our planet during human existence is glacial maximum conditions; what we are experiencing now, an "interglacial" period, exists only about 20% of the time, on average about 20k years out of every 100k years. This is the second interglacial period in the last 100k years. You have to go back 120k to 130k years to find the previous interglacial period.
2,410 | We often seem to accept the idea that there were periods of time in which the entire surface of Earth was frozen, for the most part. This implies that there were periods of time in which the entire surface was NOT frozen over. Thus, there must have been heat and energy present on the surface. How did all that energy move to cause an ice age? It seems absurd for all that energy to just radiate into space or move deep into the Earth. | 2014/08/27 | [
"https://earthscience.stackexchange.com/questions/2410",
"https://earthscience.stackexchange.com",
"https://earthscience.stackexchange.com/users/813/"
]
| I'm not quite sure if the question is asking about glacials, ice ages, or snowball Earth, and whether it's about the onset or end of a glacial period. I'll try to hit all three.
Ice Ages and Milankovitch Cycles
--------------------------------
Ice ages are long spans of time marked by periods during which ice reaches far from the poles, interspersed with periods during which the ice retreats (but never quite goes away). The periods of time during which ice covers a good sized fraction of the Earth are called glacials; the periods during which ice retreats to only cover areas in the far north and far south are called interglacials. We are living in ice age conditions, right now. There's still ice on Antarctica and Greenland. We are also in an interglacial period within that larger ice age. The current ice age began about 33 million years ago while the current interglacial began about 11,700 years ago.
The [Milankovitch cycles](http://en.wikipedia.org/wiki/Milankovitch_cycles) determine whether the Earth is in a glacial or interglacial period. Conditions are right for ice to form and spread when precession puts northern hemisphere summer near aphelion and winter near perihelion and when both obliquity and eccentricity are low. The Earth currently satisfies the first of those conditions, but obliquity and eccentricity are a bit too high. That makes our northern hemisphere summers a bit too warm and our winters a bit too cold.
The Milankovitch cycles provide several answers to the question "where does the energy go?" Those times when conditions are ripe for glaciation have energy in the northern hemisphere spread more uniformly across the year than times not conducive to glaciation. Summers are milder, which means accumulated snow doesn't melt as much. Winters are milder, which means more snow falls.
Once ice does become ubiquitous, another answer to the "where does the energy go" question is into space. Ice and snow are white. Their presence reduces the amount of sunlight absorbed by the Earth.
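To get a feel for how strong this ice-albedo effect is, here is a back-of-the-envelope sketch (my own illustration, not part of the original answer): a zero-dimensional Stefan-Boltzmann balance, with an assumed modern planetary albedo of roughly 0.3 versus an assumed ice-covered albedo of roughly 0.6, and the greenhouse effect ignored entirely.

```python
# Zero-dimensional radiative-balance toy model (illustrative only).
# Equilibrium: S * (1 - albedo) / 4 = sigma * T^4, greenhouse effect ignored.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # present-day solar constant, W m^-2

def equilibrium_temp(albedo: float) -> float:
    """Effective temperature (K) for a given planetary albedo."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t_dark = equilibrium_temp(0.3)  # roughly today's albedo: ~255 K
t_ice = equilibrium_temp(0.6)   # assumed ice-covered albedo: ~221 K
print(f"{t_dark:.0f} K vs {t_ice:.0f} K")
```

Even this crude model shows a drop of over 30 K once ice raises the albedo, which is why the feedback is self-reinforcing.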
The first paper listed below by Hays et al. is the seminal paper that brought the concept of Milankovitch cycles to the forefront. The second paper by Abe-Ouchi et al. discusses a recent climate simulation that successfully recovers many salient features of the most recent glaciations. Most importantly, this paper appears to have solved the 100,000 year mystery and shows why deglaciation operates so quickly.
[Hays, J. D., Imbrie, J., & Shackleton, N. J. (1976, December). "Variations in the Earth's orbit: Pacemaker of the ice ages." *American Association for the Advancement of Science.*](http://courses.washington.edu/proxies/Hays-Pacemaker_of_Ice_Ages-Sci76.pdf)
[Abe-Ouchi, A., Saito, F., Kawamura, K., Raymo, M. E., Okuno, J. I., Takahashi, K., & Blatter, H. (2013). "Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume." *Nature*, 500(7461), 190-193.](http://www.nature.com/nature/journal/v500/n7461/abs/nature12374.html)
Icehouse Earth vs Greenhouse Earth
----------------------------------
The Earth's climate appears to have been toggling between two climate extremes for much of the Earth's existence, one where things are cold and ice is likely to form and the other where ice is absent worldwide except maybe right at the poles. Dinosaurs roamed the arctic and Antarctica when the Earth was in one of its greenhouse phases. The Earth has been in a greenhouse phase for most of the Earth's existence.
Milankovitch cycles don't cause glaciation during greenhouse Earth periods. Ice ages happen when the Earth is in an icehouse phase. What appears to distinguish greenhouse and icehouse phases are the positions and orientations of the continents. Having a continent located over a pole helps cool the climate. Having continents oriented so they channel ocean circulation in a way that keeps the ocean cool also helps cool the climate.
The Earth transitioned from its hothouse mode to its icehouse mode 33 million years ago or so. That's right about when two key events happened in the ocean. Up until then, Antarctica was still connected to both Australia and South America. The separation from Tasmania formed the Tasmanian Gateway, while the separation from South America formed the Drake Passage. This marked the birth of the very cold Southern Ocean, it marked the buildup of ice on Antarctica, and it marked the end of the Eocene.
[Bijl, P. K., Bendle, J. A., Bohaty, S. M., Pross, J., Schouten, S., Tauxe, L., ... & Yamane, M. (2013). "Eocene cooling linked to early flow across the Tasmanian Gateway." *Proceedings of the National Academy of Sciences*, 110(24), 9645-9650.](http://www.pnas.org/content/110/24/9645.abstract)
[Exon, N., Kennett, J., & Malone, M. Leg 189 Shipboard Scientific Party (2000). "The opening of the Tasmanian gateway drove global Cenozoic paleoclimatic and paleoceanographic changes: Results of Leg 189." *JOIDES J*, 26(2), 11-18.](http://a.ennyu.com/pdf/Exon_JoiJour_00.pdf)
Snowball Earth and the Faint Young Sun Paradox
----------------------------------------------
Snowball Earth episodes were not your average ice age. Ice typically doesn't come near the tropics, even in the worst of ice ages. Snowball Earth means just that; the snow and ice encroached well into the tropics, possibly extending all the way to the equator.
The problem with snowball Earth isn't explaining where all the energy went. The real problem is explaining why the ancient Earth wasn't in a permanent snowball Earth condition, starting from shortly after the Earth radiated away the initial heat from the formation of the Earth.
The solar constant is not quite constant. While it doesn't change much at all from year to year, or even century to century, it changes a lot over the course of billions of years. Young G class stars produce considerably less energy than do middle aged G class stars, which in turn produce considerably less energy than do older G class stars. When our Sun was young, it produced only 75% or so as much energy as it does now.

By all rights, the Earth should have been completely covered with ice. The young Sun did not produce enough energy to support open oceans. This obviously was not the case. There is plenty of geological evidence that the Earth had open oceans even when the Earth was quite young.
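As a rough illustration of that ~75% figure (my own sketch, not part of the original answer): a commonly used approximation for solar luminosity over the Sun's main-sequence life, due to Gough (1981), is $L(t) = L_0 / \left(1 + \tfrac{2}{5}(1 - t/t_0)\right)$, which gives a comparable value for the newly formed Sun. The assumed current solar age is a stated input.

```python
# Gough (1981) approximation for solar luminosity over the main sequence.
T_SUN = 4.57e9  # assumed present solar age in years

def luminosity_fraction(age_years: float) -> float:
    """L(t)/L0, with t0 taken as the present solar age."""
    return 1.0 / (1.0 + 0.4 * (1.0 - age_years / T_SUN))

young = luminosity_fraction(0.0)  # Sun at formation: ~0.71 of today's output
print(f"young Sun: {young:.2f} of present output")
```

That ~71% is in line with the "75% or so" quoted above, and it makes the paradox concrete: a substantially dimmer Sun, yet geological evidence of liquid oceans.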
This 40 year old conundrum, first raised by Carl Sagan and George Mullen, is the [faint young Sun paradox](http://en.wikipedia.org/wiki/Faint_young_Sun_paradox). There have been a number of proposed ways out of the paradox, but none of them quite line up with the geological evidence.
One obvious way out is that the Earth's early atmosphere was very different from our nitrogen-oxygen atmosphere and contained significantly more greenhouse gases. The amount of greenhouse gases needed to avert a permanent snowball Earth is highly debated, ranging from not much at all to extreme amounts. Another way out is a reduced albedo due to the significantly smaller early continents and lack of life. The young Earth would have been mostly ocean, and ocean water is rather dark (unless it's covered with ice). Lack of life means no biogenic cloud condensation nuclei, which means fewer clouds.
[Goldblatt, C., & Zahnle, K. J. (2011). "Faint young Sun paradox remains." *Nature*, 474(7349), E1-E1.](http://www.nature.com/nature/journal/v474/n7349/abs/nature09961.html)
[Kienert, H., Feulner, G., & Petoukhov, V. (2012). "Faint young Sun problem more severe due to ice‐albedo feedback and higher rotation rate of the early Earth." *Geophysical Research Letters*, 39(23).](http://onlinelibrary.wiley.com/doi/10.1029/2012GL054381/abstract)
[Rosing, M. T., Bird, D. K., Sleep, N. H., & Bjerrum, C. J. (2010). "No climate paradox under the faint early Sun." *Nature*, 464(7289), 744-747.](http://www.nature.com/nature/journal/v464/n7289/abs/nature08955.html) | What's not been touched yet, and the current answers do cover a lot of ground, is the variability of solar input.
Even IF the amount of energy radiated out by the earth remains the same (and it probably would, roughly), solar input is highly variable and is a major factor in determining the total energy budget of the planet.
Even a small change can have far reaching consequences. Thus if the sun's output goes down by only a few percent (and that's well within its variability even over the 11 year sunspot cycle) temperatures on earth will swing with that (over that short a cycle that's pretty much averaged out by changes in ocean currents). If such a "dip" lasts longer, think a few centuries, you get a "little ice age" as we just emerged from in the 19th century (and by some accounts may be on the verge of slipping into the next, as the sun is again seemingly rather inactive).
And the sun has longer cycles, such a low activity cycle lasting a few thousand years can well drop the earth into a full scale ice age. And as the ice sheets grow, reflection goes up, less energy heats the planet, less clouds form. Until the sun enters a high activity phase again, the planet will remain (relatively) cold.
As said, this is unlikely to cause the entire planet to disappear under a blanket of ice. The ice sheets of the last major glaciation "only" came down to about the latitude of the Rhine delta in Europe, for example.
2,410 | We often seem to accept the idea that there were periods of time in which the entire surface of Earth was frozen, for the most part. This implies that there were periods of time in which the entire surface was NOT frozen over. Thus, there must have been heat and energy present on the surface. How did all that energy move to cause an ice age? It seems absurd for all that energy to just radiate into space or move deep into the Earth. | 2014/08/27 | [
"https://earthscience.stackexchange.com/questions/2410",
"https://earthscience.stackexchange.com",
"https://earthscience.stackexchange.com/users/813/"
]
| I'm not quite sure if the question is asking about glacials, ice ages, or snowball Earth, and whether it's about the onset or end of a glacial period. I'll try to hit all three.
Ice Ages and Milankovitch Cycles
--------------------------------
Ice ages are long spans of time marked by periods during which ice reaches far from the poles, interspersed with periods during which the ice retreats (but never quite goes away). The periods of time during which ice covers a good sized fraction of the Earth are called glacials; the periods during which ice retreats to only cover areas in the far north and far south are called interglacials. We are living in ice age conditions, right now. There's still ice on Antarctica and Greenland. We are also in an interglacial period within that larger ice age. The current ice age began about 33 million years ago while the current interglacial began about 11,700 years ago.
The [Milankovitch cycles](http://en.wikipedia.org/wiki/Milankovitch_cycles) determine whether the Earth is in a glacial or interglacial period. Conditions are right for ice to form and spread when precession puts northern hemisphere summer near aphelion and winter near perihelion and when both obliquity and eccentricity are low. The Earth currently satisfies the first of those conditions, but obliquity and eccentricity are a bit too high. That makes our northern hemisphere summers a bit too warm and our winters a bit too cold.
The Milankovitch cycles provide several answers to the question "where does the energy go?" Those times when conditions are ripe for glaciation have energy in the northern hemisphere spread more uniformly across the year than times not conducive to glaciation. Summers are milder, which means accumulated snow doesn't melt as much. Winters are milder, which means more snow falls.
Once ice does become ubiquitous, another answer to the "where does the energy go" question is into space. Ice and snow are white. Their presence reduces the amount of sunlight absorbed by the Earth.
The first paper listed below by Hays et al. is the seminal paper that brought the concept of Milankovitch cycles to the forefront. The second paper by Abe-Ouchi et al. discusses a recent climate simulation that successfully recovers many salient features of the most recent glaciations. Most importantly, this paper appears to have solved the 100,000 year mystery and shows why deglaciation operates so quickly.
[Hays, J. D., Imbrie, J., & Shackleton, N. J. (1976, December). "Variations in the Earth's orbit: Pacemaker of the ice ages." *American Association for the Advancement of Science.*](http://courses.washington.edu/proxies/Hays-Pacemaker_of_Ice_Ages-Sci76.pdf)
[Abe-Ouchi, A., Saito, F., Kawamura, K., Raymo, M. E., Okuno, J. I., Takahashi, K., & Blatter, H. (2013). "Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume." *Nature*, 500(7461), 190-193.](http://www.nature.com/nature/journal/v500/n7461/abs/nature12374.html)
Icehouse Earth vs Greenhouse Earth
----------------------------------
The Earth's climate appears to have been toggling between two climate extremes for much of the Earth's existence, one where things are cold and ice is likely to form and the other where ice is absent worldwide except maybe right at the poles. Dinosaurs roamed the arctic and Antarctica when the Earth was in one of its greenhouse phases. The Earth has been in a greenhouse phase for most of the Earth's existence.
Milankovitch cycles don't cause glaciation during greenhouse Earth periods. Ice ages happen when the Earth is in an icehouse phase. What appears to distinguish greenhouse and icehouse phases are the positions and orientations of the continents. Having a continent located over a pole helps cool the climate. Having continents oriented so they channel ocean circulation in a way that keeps the ocean cool also helps cool the climate.
The Earth transitioned from its hothouse mode to its icehouse mode 33 million years ago or so. That's right about when two key events happened in the ocean. Up until then, Antarctica was still connected to both Australia and South America. The separation from Tasmania formed the Tasmanian Gateway, while the separation from South America formed the Drake Passage. This marked the birth of the very cold Southern Ocean, it marked the buildup of ice on Antarctica, and it marked the end of the Eocene.
[Bijl, P. K., Bendle, J. A., Bohaty, S. M., Pross, J., Schouten, S., Tauxe, L., ... & Yamane, M. (2013). "Eocene cooling linked to early flow across the Tasmanian Gateway." *Proceedings of the National Academy of Sciences*, 110(24), 9645-9650.](http://www.pnas.org/content/110/24/9645.abstract)
[Exon, N., Kennett, J., & Malone, M. Leg 189 Shipboard Scientific Party (2000). "The opening of the Tasmanian gateway drove global Cenozoic paleoclimatic and paleoceanographic changes: Results of Leg 189." *JOIDES J*, 26(2), 11-18.](http://a.ennyu.com/pdf/Exon_JoiJour_00.pdf)
Snowball Earth and the Faint Young Sun Paradox
----------------------------------------------
Snowball Earth episodes were not your average ice age. Ice typically doesn't come near the tropics, even in the worst of ice ages. Snowball Earth means just that; the snow and ice encroached well into the tropics, possibly extending all the way to the equator.
The problem with snowball Earth isn't explaining where all the energy went. The real problem is explaining why the ancient Earth wasn't in a permanent snowball Earth condition, starting from shortly after the Earth radiated away the initial heat from the formation of the Earth.
The solar constant is not quite constant. While it doesn't change much at all from year to year, or even century to century, it changes a lot over the course of billions of years. Young G class stars produce considerably less energy than do middle aged G class stars, which in turn produce considerably less energy than do older G class stars. When our Sun was young, it produced only 75% or so as much energy as it does now.

By all rights, the Earth should have been completely covered with ice. The young Sun did not produce enough energy to support open oceans. This obviously was not the case. There is plenty of geological evidence that the Earth had open oceans even when the Earth was quite young.
This 40 year old conundrum, first raised by Carl Sagan and George Mullen, is the [faint young Sun paradox](http://en.wikipedia.org/wiki/Faint_young_Sun_paradox). There have been a number of proposed ways out of the paradox, but none of them quite line up with the geological evidence.
One obvious way out is that the Earth's early atmosphere was very different from our nitrogen-oxygen atmosphere and contained significantly more greenhouse gases. The amount of greenhouse gases needed to avert a permanent snowball Earth is highly debated, ranging from not much at all to extreme amounts. Another way out is a reduced albedo due to the significantly smaller early continents and lack of life. The young Earth would have been mostly ocean, and ocean water is rather dark (unless it's covered with ice). Lack of life means no biogenic cloud condensation nuclei, which means fewer clouds.
[Goldblatt, C., & Zahnle, K. J. (2011). "Faint young Sun paradox remains." *Nature*, 474(7349), E1-E1.](http://www.nature.com/nature/journal/v474/n7349/abs/nature09961.html)
[Kienert, H., Feulner, G., & Petoukhov, V. (2012). "Faint young Sun problem more severe due to ice‐albedo feedback and higher rotation rate of the early Earth." *Geophysical Research Letters*, 39(23).](http://onlinelibrary.wiley.com/doi/10.1029/2012GL054381/abstract)
[Rosing, M. T., Bird, D. K., Sleep, N. H., & Bjerrum, C. J. (2010). "No climate paradox under the faint early Sun." *Nature*, 464(7289), 744-747.](http://www.nature.com/nature/journal/v464/n7289/abs/nature08955.html) | ,
I propose that the "snowball Earth" was brought to a close by the dust from a huge meteorite (the largest known on Earth) impacting Australia (see <http://charles_w.tripod.com/antipode.html> ) settling onto the ice and melting it by a bare soil warming effect (see <http://charles_w.tripod.com/climate.html> ) and thus initiating the Cambrian. The dust fertilizing the ocean probably contributed considerably to the explosion of life then. That initiation was probably considerably assisted by the subsequent release of methane gas from methane ice under the ocean floor and by dust from volcanic eruptions from the Bahamas Islands, which are located at the antipode (opposite side of a sphere) of the above impact. The close correlation of volcanoes on Mars with meteorite impacts at their antipodes gives supporting evidence for such a phenomenon.
Sincerely, Charles Weber |
2,410 | We often seem to accept the idea that there were periods of time in which the entire surface of Earth was frozen, for the most part. This implies that there were periods of time in which the entire surface was NOT frozen over. Thus, there must have been heat and energy present on the surface. How did all that energy move to cause an ice age? It seems absurd for all that energy to just radiate into space or move deep into the Earth. | 2014/08/27 | [
"https://earthscience.stackexchange.com/questions/2410",
"https://earthscience.stackexchange.com",
"https://earthscience.stackexchange.com/users/813/"
]
| I'm not quite sure if the question is asking about glacials, ice ages, or snowball Earth, and whether it's about the onset or end of a glacial period. I'll try to hit all three.
Ice Ages and Milankovitch Cycles
--------------------------------
Ice ages are long spans of time marked by periods during which ice reaches far from the poles, interspersed with periods during which the ice retreats (but never quite goes away). The periods of time during which ice covers a good sized fraction of the Earth are called glacials; the periods during which ice retreats to only cover areas in the far north and far south are called interglacials. We are living in ice age conditions, right now. There's still ice on Antarctica and Greenland. We are also in an interglacial period within that larger ice age. The current ice age began about 33 million years ago while the current interglacial began about 11,700 years ago.
The [Milankovitch cycles](http://en.wikipedia.org/wiki/Milankovitch_cycles) determine whether the Earth is in a glacial or interglacial period. Conditions are right for ice to form and spread when precession puts northern hemisphere summer near aphelion and winter near perihelion and when both obliquity and eccentricity are low. The Earth currently satisfies the first of those conditions, but obliquity and eccentricity are a bit too high. That makes our northern hemisphere summers a bit too warm and our winters a bit too cold.
The Milankovitch cycles provide several answers to the question "where does the energy go?" Those times when conditions are ripe for glaciation have energy in the northern hemisphere spread more uniformly across the year than times not conducive to glaciation. Summers are milder, which means accumulated snow doesn't melt as much. Winters are milder, which means more snow falls.
Once ice does become ubiquitous, another answer to the "where does the energy go" question is into space. Ice and snow are white. Their presence reduces the amount of sunlight absorbed by the Earth.
The first paper listed below by Hays et al. is the seminal paper that brought the concept of Milankovitch cycles to the forefront. The second paper by Abe-Ouchi et al. discusses a recent climate simulation that successfully recovers many salient features of the most recent glaciations. Most importantly, this paper appears to have solved the 100,000 year mystery and shows why deglaciation operates so quickly.
[Hays, J. D., Imbrie, J., & Shackleton, N. J. (1976, December). "Variations in the Earth's orbit: Pacemaker of the ice ages." *American Association for the Advancement of Science.*](http://courses.washington.edu/proxies/Hays-Pacemaker_of_Ice_Ages-Sci76.pdf)
[Abe-Ouchi, A., Saito, F., Kawamura, K., Raymo, M. E., Okuno, J. I., Takahashi, K., & Blatter, H. (2013). "Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume." *Nature*, 500(7461), 190-193.](http://www.nature.com/nature/journal/v500/n7461/abs/nature12374.html)
Icehouse Earth vs Greenhouse Earth
----------------------------------
The Earth's climate appears to have been toggling between two climate extremes for much of the Earth's existence, one where things are cold and ice is likely to form and the other where ice is absent worldwide except maybe right at the poles. Dinosaurs roamed the arctic and Antarctica when the Earth was in one of its greenhouse phases. The Earth has been in a greenhouse phase for most of the Earth's existence.
Milankovitch cycles don't cause glaciation during greenhouse Earth periods. Ice ages happen when the Earth is in an icehouse phase. What appears to distinguish greenhouse and icehouse phases are the positions and orientations of the continents. Having a continent located over a pole helps cool the climate. Having continents oriented so they channel ocean circulation in a way that keeps the ocean cool also helps cool the climate.
The Earth transitioned from its hothouse mode to its icehouse mode 33 million years ago or so. That's right about when two key events happened in the ocean. Up until then, Antarctica was still connected to both Australia and South America. The separation from Tasmania formed the Tasmanian Gateway, while the separation from South America formed the Drake Passage. This marked the birth of the very cold Southern Ocean, it marked the buildup of ice on Antarctica, and it marked the end of the Eocene.
[Bijl, P. K., Bendle, J. A., Bohaty, S. M., Pross, J., Schouten, S., Tauxe, L., ... & Yamane, M. (2013). "Eocene cooling linked to early flow across the Tasmanian Gateway." *Proceedings of the National Academy of Sciences*, 110(24), 9645-9650.](http://www.pnas.org/content/110/24/9645.abstract)
[Exon, N., Kennett, J., & Malone, M. Leg 189 Shipboard Scientific Party (2000). "The opening of the Tasmanian gateway drove global Cenozoic paleoclimatic and paleoceanographic changes: Results of Leg 189." *JOIDES J*, 26(2), 11-18.](http://a.ennyu.com/pdf/Exon_JoiJour_00.pdf)
Snowball Earth and the Faint Young Sun Paradox
----------------------------------------------
Snowball Earth episodes were not your average ice age. Ice typically doesn't come near the tropics, even in the worst of ice ages. Snowball Earth means just that; the snow and ice encroached well into the tropics, possibly extending all the way to the equator.
The problem with snowball Earth isn't explaining where all the energy went. The real problem is explaining why the ancient Earth wasn't in a permanent snowball Earth condition, starting from shortly after the Earth radiated away the initial heat from the formation of the Earth.
The solar constant is not quite constant. While it doesn't change much at all from year to year, or even century to century, it changes a lot over the course of billions of years. Young G class stars produce considerably less energy than do middle aged G class stars, which in turn produce considerably less energy than do older G class stars. When our Sun was young, it produced only 75% or so as much energy as it does now.

By all rights, the Earth should have been completely covered with ice. The young Sun did not produce enough energy to support open oceans. This obviously was not the case. There is plenty of geological evidence that the Earth had open oceans even when the Earth was quite young.
This 40 year old conundrum, first raised by Carl Sagan and George Mullen, is the [faint young Sun paradox](http://en.wikipedia.org/wiki/Faint_young_Sun_paradox). There have been a number of proposed ways out of the paradox, but none of them quite line up with the geological evidence.
One obvious way out is that the Earth's early atmosphere was very different from our nitrogen-oxygen atmosphere and contained significantly more greenhouse gases. The amount of greenhouse gases needed to avert a permanent snowball Earth is highly debated, ranging from not much at all to extreme amounts. Another way out is a reduced albedo due to the significantly smaller early continents and lack of life. The young Earth would have been mostly ocean, and ocean water is rather dark (unless it's covered with ice). Lack of life means no biogenic cloud condensation nuclei, which means fewer clouds.
[Goldblatt, C., & Zahnle, K. J. (2011). "Faint young Sun paradox remains." *Nature*, 474(7349), E1-E1.](http://www.nature.com/nature/journal/v474/n7349/abs/nature09961.html)
[Kienert, H., Feulner, G., & Petoukhov, V. (2012). "Faint young Sun problem more severe due to ice‐albedo feedback and higher rotation rate of the early Earth." *Geophysical Research Letters*, 39(23).](http://onlinelibrary.wiley.com/doi/10.1029/2012GL054381/abstract)
[Rosing, M. T., Bird, D. K., Sleep, N. H., & Bjerrum, C. J. (2010). "No climate paradox under the faint early Sun." *Nature*, 464(7289), 744-747.](http://www.nature.com/nature/journal/v464/n7289/abs/nature08955.html) | One cause of glaciation on Earth is dust from volcanic activity; you have heard the term volcanic or nuclear winter.
During glaciation continental plates with a mile or two of ice piled upon them increase vastly in weight and actually sink. When we see a rock we see a hard stone object, but that rock within the earth, under a vast amount of pressure, is not "rock" solid; it is more like peanut butter, soft and pliable. This is how continental plates are able to sink under the weight of a mile or two thick sheet of ice.
Not only do the continental plates have weight pushing down on this peanut butter like rock beneath us, the oceans also have weight and they as well push the plates beneath them down into this peanut butter like stone.
During glacial maximums ocean levels drop by as much as 400 feet, so what is currently 400 feet underwater was dry land during the last glacial maximum. This loss of 400 feet of water across all the oceans of the earth greatly reduces the weight upon the ocean floors. That same water falls as snow and collects as ice on the continental plates, adding that same weight to the continental plates instead. So the oceanic plates rise while the continental plates drop.
During interglacial periods you get a lightening of continental plates, and the oceanic plates become heavier with the return of high ocean water levels, i.e. global sea level rise such as we are experiencing currently. Continental plates rise and oceanic plates drop. This reaction is not immediate: if you put a board over several inches of peanut butter and add weight to the board, it takes some time for the peanut butter to squish out the sides, and the same is true of the continental plates and the soft stone they sit atop. It takes thousands of years for the continental plates to rebound to their original height. During this rebounding of the continental plates and dropping of the oceanic plates, volcanic activity slowly starts to increase. Eventually it is only a matter of time before the plate movement produces another great volcanic eruption that puts a great deal of ash in the atmosphere, and this continues again and again with increased levels of volcanic activity. This ash reflects sunlight back into space before it can reach and warm the surface, thereby reducing average global temperatures.
This is certainly not the only mechanism that controls glacial maximum and interglacial periods, but it does work in concert with the answers written above. The normal condition for our planet during human existence is glacial maximum conditions; what we are experiencing now, an "interglacial" period, exists only about 20% of the time, on average about 20k years out of every 100k years. This is the second interglacial period in the last 100k years. You have to go back 120k to 130k years to find the previous interglacial period.
2,410 | We often seem to accept the idea that there were periods of time in which the entire surface of Earth was frozen, for the most part. This implies that there were periods of time in which the entire surface was NOT frozen over. Thus, there must have been heat and energy present on the surface. How did all that energy move to cause an ice age? It seems absurd for all that energy to just radiate into space or move deep into the Earth. | 2014/08/27 | [
"https://earthscience.stackexchange.com/questions/2410",
"https://earthscience.stackexchange.com",
"https://earthscience.stackexchange.com/users/813/"
]
| What's not been touched yet, and the current answers do cover a lot of ground, is the variability of solar input.
Even IF the amount of energy radiated out by the earth remains the same (and it probably would, roughly), solar input is highly variable and is a major factor in determining the total energy budget of the planet.
Even a small change can have far reaching consequences. Thus if the sun's output goes down by only a few percent (and that's well within its variability even over the 11 year sunspot cycle) temperatures on earth will swing with that (over that short a cycle that's pretty much averaged out by changes in ocean currents). If such a "dip" lasts longer, think a few centuries, you get a "little ice age" as we just emerged from in the 19th century (and by some accounts may be on the verge of slipping into the next, as the sun is again seemingly rather inactive).
And the sun has longer cycles; a low-activity cycle lasting a few thousand years can well drop the earth into a full-scale ice age. And as the ice sheets grow, reflection goes up, less energy heats the planet, and fewer clouds form. Until the sun enters a high-activity phase again, the planet will remain (relatively) cold.
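The claimed sensitivity to a few-percent dip in solar output can be sanity-checked with the quarter-power (Stefan–Boltzmann) scaling of equilibrium temperature with solar input. The baseline temperature below is an assumed round number, not a figure from the answer:

```python
# Equilibrium surface temperature scales as the 1/4 power of solar input
# (radiative balance). Baseline value is an assumed, illustrative figure.
T_BASELINE_K = 288.0  # rough present-day global mean surface temperature

def temp_after_solar_change(fractional_output: float) -> float:
    """New equilibrium temperature if solar output is scaled by `fractional_output`."""
    return T_BASELINE_K * fractional_output ** 0.25

# A 2% dip in solar output shifts the equilibrium by about 1.5 K --
# small in absolute terms, but on the scale of "little ice age" swings:
delta = T_BASELINE_K - temp_after_solar_change(0.98)
print(round(delta, 2))
```

This ignores feedbacks (ice albedo, clouds), which is exactly why the answer argues the real response can be much larger than the bare radiative estimate.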
As said, this is unlikely to cause the entire planet to disappear under a blanket of ice. The last major ice age "only" came down to about the latitude of the Rhine delta in Europe for example. | ,
I propose that the “snow ball Earth” was brought to a close by the dust from a huge meteorite (the largest known on Earth) impacting Australia (see <http://charles_w.tripod.com/antipode.html> ) settling onto the ice and melting it by a bare-soil warming effect (see <http://charles_w.tripod.com/climate.html> ), thus initiating the Cambrian. The dust fertilizing the ocean probably contributed considerably to the explosion of life then. That initiation was probably considerably assisted by the subsequent release of methane gas from methane ice under the ocean floor and by dust from volcanic eruptions from the Bahamas Islands, which are located at the antipode (opposite side of a sphere) of the above impact. The close correlation of volcanoes on Mars with meteorite impacts at their antipodes gives supporting evidence for such a phenomenon.
.
Sincerely, Charles Weber |
2,410 | We often seem to accept the idea that there were periods of time in which the entire surface of Earth was frozen, for the most part. This implies that there were periods of time in which the entire surface was NOT frozen over. Thus, there must have been heat and energy present on the surface. How did all that energy move to cause an ice age? It seems absurd for all that energy to just radiate into space or move deep into the Earth. | 2014/08/27 | [
"https://earthscience.stackexchange.com/questions/2410",
"https://earthscience.stackexchange.com",
"https://earthscience.stackexchange.com/users/813/"
]
| What's not been touched yet, and the current answers do cover a lot of ground, is the variability of solar input.
Even IF the amount of energy radiated out by the earth remains the same (and it probably would, roughly), solar input is highly variable and is a major factor in determining the total energy budget of the planet.
Even a small change can have far reaching consequences. Thus if the sun's output goes down by only a few percent (and that's well within its variability even over the 11 year sunspot cycle) temperatures on earth will swing with that (over that short a cycle that's pretty much averaged out by changes in ocean currents). If such a "dip" lasts longer, think a few centuries, you get a "little ice age" as we just emerged from in the 19th century (and by some accounts may be on the verge of slipping into the next, as the sun is again seemingly rather inactive).
And the sun has longer cycles; a low-activity cycle lasting a few thousand years can well drop the earth into a full-scale ice age. And as the ice sheets grow, reflection goes up, less energy heats the planet, and fewer clouds form. Until the sun enters a high-activity phase again, the planet will remain (relatively) cold.
As said, this is unlikely to cause the entire planet to disappear under a blanket of ice. The last major ice age "only" came down to about the latitude of the Rhine delta in Europe for example. | One cause of glaciation on earth is dust from volcanic activity, i.e. you have heard the terms volcanic or nuclear winter.
During glaciation, continental plates with a mile or two of ice piled upon them increase vastly in weight and actually sink. When we see a rock we see a hard stone object, but that rock within the earth, under a vast amount of pressure, is not "rock" solid; it is more like peanut butter, soft and pliable. This is how continental plates are able to sink under the weight of a mile- or two-mile-thick sheet of ice.
Not only do the continental plates have weight pushing down on this peanut-butter-like rock beneath us; the oceans also have weight, and they likewise push the plates beneath them down into this peanut-butter-like stone.
During glacial maximums, ocean levels drop by as much as 400 feet, so what is currently 400 feet underwater was dry land during the last glacial maximum. This loss of 400 feet of water across all the oceans of the earth greatly reduces the weight upon the ocean floors. That same water is turned to snow and collects as ice on the continental plates, adding that same weight to the continental plates. So the oceanic plates rise while the continental plates drop.
During interglacial periods you get a lightening of continental plates and the oceanic plates become heavier with the return of high ocean water levels ie global sea level rise such as we are experiencing currently. Continental plates rise and oceanic plates drop. This reaction is not immediate, again if you imagine peanut butter if you put a board over several inches of peanut butter and add weight to the board it takes some time for the peanut butter to squish out the sides, the same with the continental plates and the soft stone they sit atop of. It takes thousands of years for the continental plates to rebound to their original height. During this rebounding of the continental plates and the dropping of the oceanic plates volcanic activity slowly starts to increase. Eventually it is only a matter of time before you get another great volcanic eruption due the plate movement that produces a great deal of ash in the atmosphere, this continues again and again with increased levels of volcanic activity. This ash reflects light back into space before it is able to reach and warm the surface, thereby reducing average global temperatures.
This is certainly not the only mechanism that controls glacial maximums and interglacial periods, but it does work in concert with the previously written answers to your question. The normal condition for our planet during human existence is glacial maximum conditions; what we are experiencing now, an "interglacial" period, exists only about 20% of the time, on average about 20k years per 100k years. This is the second interglacial period in the last 100k years; you have to go back 120k to 130k years for the previous interglacial period. |
157,981 | On MySQL 5.7.11 running on Amazon RDS with InnoDB
I have a fairly heavy reporting query that takes about 3 minutes to run. During this time I can't access my reporting screen, which reads some min and max dates from one of the tables that is included in the report.
I would have thought that by setting "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED", read-only queries would not affect each other at all. I have this set both at the session level and in my parameter group on RDS.
Is there anything I'm missing?
**SHOW ENGINE INNODB STATUS**
<http://pastebin.com/XvU1AdNM>
**show create table ...**
auto\_increment numbers indicate rough row counts
```
CREATE TABLE `transaction` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`file_id` int(11) DEFAULT NULL,
`countid` int(11) NOT NULL,
`txn_date` datetime NOT NULL,
`txn_id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`user_rmn` bigint(20) NOT NULL,
`customer_no` varchar(20) COLLATE utf8_unicode_ci DEFAULT NULL,
`aggregator_name` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
`trans_amount` decimal(15,4) NOT NULL,
`incoming_commission` decimal(15,4) NOT NULL,
`mmplt_txn_id` int(11) NOT NULL,
`product_type` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`txn_category` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`circle` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`status` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`role` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`number` int(11) DEFAULT NULL,
`user_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`city_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`state_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`retailer_commission` decimal(15,4) NOT NULL,
`total_commission` decimal(15,4) NOT NULL,
`net_revenue` decimal(15,4) NOT NULL,
`ad_name` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
`ad_commission` decimal(15,4) DEFAULT NULL,
`md_name` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
`md_commission` decimal(15,4) DEFAULT NULL,
`cnf_name` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
`cnf_commission` decimal(15,4) DEFAULT NULL,
`ad_id` bigint(20) DEFAULT NULL,
`md_id` bigint(20) DEFAULT NULL,
`cnf_id` bigint(20) DEFAULT NULL,
`operator_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `txnId` (`txn_id`),
KEY `IDX_723705D193CB796C` (`file_id`),
KEY `date_idx` (`txn_date`),
KEY `user_idx` (`user_id`),
KEY `cnf_idx` (`cnf_id`),
KEY `md_idx` (`md_id`),
KEY `ad_idx` (`ad_id`),
KEY `user_rmn_idx` (`user_rmn`),
KEY `trans_amount_idx` (`trans_amount`),
KEY `incoming_commission_idx` (`incoming_commission`),
KEY `retailer_commission_idx` (`retailer_commission`),
KEY `ad_commission_idx` (`ad_commission`),
KEY `md_commission_idx` (`md_commission`),
KEY `cnf_commission_idx` (`cnf_commission`),
KEY `cnf_date_idx` (`txn_date`,`cnf_id`),
KEY `md_date_idx` (`txn_date`,`md_id`),
KEY `ad_date_idx` (`txn_date`,`ad_id`),
KEY `user_rmn_date_idx` (`txn_date`,`user_rmn`),
CONSTRAINT `FK_723705D193CB796C` FOREIGN KEY (`file_id`) REFERENCES `file_to_sync` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=11370410 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `operator` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(200) COLLATE utf8_unicode_ci NOT NULL,
`category_low_id` int(11) DEFAULT NULL,
`category_medium_id` int(11) DEFAULT NULL,
`category_high_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `IDX_D7A6A781B596C062` (`category_low_id`),
KEY `IDX_D7A6A78125326495` (`category_medium_id`),
KEY `IDX_D7A6A7818196AB83` (`category_high_id`),
CONSTRAINT `FK_D7A6A78125326495` FOREIGN KEY (`category_medium_id`) REFERENCES `operator_category_medium` (`id`),
CONSTRAINT `FK_D7A6A7818196AB83` FOREIGN KEY (`category_high_id`) REFERENCES `operator_category_high` (`id`),
CONSTRAINT `FK_D7A6A781B596C062` FOREIGN KEY (`category_low_id`) REFERENCES `operator_category_low` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=56 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `operator_category_medium` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(200) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `operator_category_high` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(200) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `depositor` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`depositor_id` bigint(20) NOT NULL,
`name` varchar(256) COLLATE utf8_unicode_ci NOT NULL,
`amount` decimal(15,4) NOT NULL,
`deposited` datetime NOT NULL,
`details` longtext COLLATE utf8_unicode_ci NOT NULL,
`netsuite_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `depositor_idx` (`depositor_id`),
KEY `netsuite_id_idx` (`netsuite_id`),
KEY `deposited_idx` (`deposited`)
) ENGINE=InnoDB AUTO_INCREMENT=62650 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
```
**Queries**
```
DROP temporary TABLE IF EXISTS `depositor_type`;
CREATE temporary TABLE `depositor_type` (
`depositor_id` bigint(20) NOT NULL,
`type` varchar(4) NOT NULL,
amount decimal(20,4) NULL,
count_deposit int(11) NULL,
PRIMARY KEY (depositor_id, type),
KEY type_idx (type),
KEY amount_idx (amount)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
insert into depositor_type (type, depositor_id)
select distinct 'u', user_rmn as depositor_id from transaction
    where txn_date between :from and :to and user_rmn is not null
union DISTINCT
select distinct 'a', ad_id as depositor_id from transaction
    where txn_date between :from and :to and ad_id is not null
union DISTINCT
select distinct 'm', md_id as depositor_id from transaction
    where txn_date between :from and :to and md_id is not null
union DISTINCT
select distinct 'c', cnf_id as depositor_id from transaction
    where txn_date between :from and :to and cnf_id is not null;
update depositor_type
set amount = (select sum(amount) from depositor d
              where d.depositor_id = depositor_type.depositor_id),
    count_deposit = (select count(amount) from depositor d
              where d.depositor_id = depositor_type.depositor_id);
DROP TABLE IF EXISTS `9bf92fsums`;
CREATE TABLE `9bf92fsums` ( `cnf_id` bigint(20) NOT NULL,
`md_id` bigint(20) NOT NULL,
`ad_id` bigint(20) NOT NULL,
`user_rmn` bigint(20) NOT NULL,
`operator_id` int(11) not null,
`trans_amount` decimal(20,4) NOT NULL,
`incoming_commission` decimal(20,4) NOT NULL,
`retailer_commission` decimal(20,4) NOT NULL,
`ad_commission` decimal(20,4) NOT NULL,
`md_commission` decimal(20,4) NOT NULL,
`cnf_commission` decimal(20,4) NOT NULL,
`count_trans` int(11) not null,
PRIMARY KEY (`cnf_id`,`md_id`,`ad_id`,`user_rmn`, `operator_id`),
KEY `md_id_idx` (`cnf_id`),
KEY `ad_id_idx` (`ad_id`) USING BTREE,
KEY `user_rmn_idx` (`user_rmn`) USING BTREE,
KEY `operator_id_idx` (`operator_id`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
insert into 9bf92fsums
select distinct coalesce(cnf_id,0), coalesce(md_id,0), coalesce(ad_id,0),
       coalesce(user_rmn,0), operator_id,
       sum(trans_amount),
       sum(incoming_commission) as incoming_commission,
       sum(retailer_commission) as retailer_commission,
       sum(ad_commission) as ad_commission,
       sum(md_commission) as md_commission,
       sum(cnf_commission),
       count(txn_id) as count_trans
from transaction
where txn_date between :from and :to
group by coalesce(cnf_id,0), coalesce(md_id,0), coalesce(ad_id,0),
         coalesce(user_rmn,0), operator_id
select 'User' as type, user_rmn as phone, t.amount, sum(trans_amount), sum(incoming_commission) as incoming_commission, sum(retailer_commission) as retailer_commission, sum(ad_commission) as ad_commission, sum(md_commission) as md_commission, sum(cnf_commission), sum(count_deposit) as cnt_depositors, sum(count_trans) as count_trans, och.name as operator_category_high, ocm.name as operator_category_medium, ocl.name as operator_category_low from 9bf92fsums s inner join depositor_type t on s.user_rmn=t.depositor_id and t.type='u' inner join operator o on s.operator_id=o.id left join operator_category_high och on (o.category_high_id=och.id) left join operator_category_medium ocm on (o.category_medium_id=ocm.id) left join operator_category_low ocl on (o.category_low_id=ocl.id) where t.amount > 0 group by user_rmn, och.name, ocm.name, ocl.name
select 'AD' as type, ad_id as phone, t.amount, sum(trans_amount), sum(incoming_commission) as incoming_commission, sum(retailer_commission) as retailer_commission, sum(ad_commission) as ad_commission, sum(md_commission) as md_commission, sum(cnf_commission), sum(count_deposit) as cnt_depositors, sum(count_trans) as count_trans, och.name as operator_category_high, ocm.name as operator_category_medium, ocl.name as operator_category_low from 9bf92fsums s inner join depositor_type t on s.ad_id=t.depositor_id and t.type='a' inner join operator o on s.operator_id=o.id left join operator_category_high och on (o.category_high_id=och.id) left join operator_category_medium ocm on (o.category_medium_id=ocm.id) left join operator_category_low ocl on (o.category_low_id=ocl.id) where t.amount > 0 group by ad_id, och.name, ocm.name, ocl.name
select 'MD' as type, md_id as phone, t.amount, sum(trans_amount), sum(incoming_commission) as incoming_commission, sum(retailer_commission) as retailer_commission, sum(ad_commission) as ad_commission, sum(md_commission) as md_commission, sum(cnf_commission), sum(count_deposit) as cnt_depositors, sum(count_trans) as count_trans, och.name as operator_category_high, ocm.name as operator_category_medium, ocl.name as operator_category_low from 9bf92fsums s inner join depositor_type t on s.md_id=t.depositor_id and t.type='m' inner join operator o on s.operator_id=o.id left join operator_category_high och on (o.category_high_id=och.id) left join operator_category_medium ocm on (o.category_medium_id=ocm.id) left join operator_category_low ocl on (o.category_low_id=ocl.id) where t.amount > 0 group by md_id, och.name, ocm.name, ocl.name
select 'CNF' as type, cnf_id as phone, t.amount, sum(trans_amount), sum(incoming_commission) as incoming_commission, sum(retailer_commission) as retailer_commission, sum(ad_commission) as ad_commission, sum(md_commission) as md_commission, sum(cnf_commission), sum(count_deposit) as cnt_depositors, sum(count_trans) as count_trans, och.name as operator_category_high, ocm.name as operator_category_medium, ocl.name as operator_category_low from 9bf92fsums s inner join depositor_type t on s.cnf_id=t.depositor_id and t.type='c' inner join operator o on s.operator_id=o.id left join operator_category_high och on (o.category_high_id=och.id) left join operator_category_medium ocm on (o.category_medium_id=ocm.id) left join operator_category_low ocl on (o.category_low_id=ocl.id) where t.amount > 0 group by cnf_id, och.name, ocm.name, ocl.name
select distinct 'no deposits' as type, null as depositor_id, 0 as sum_dep_amount, sum(trans_amount), sum(incoming_commission) as incoming_commission, sum(retailer_commission) as retailer_commission, sum(ad_commission) as ad_commission, sum(md_commission) as md_commission, sum(cnf_commission), 0 as cnt_depositors, sum(count_trans) as count_trans, och.name as operator_category_high, ocm.name as operator_category_medium, ocl.name as operator_category_low from 9bf92fsums s inner join operator o on s.operator_id=o.id left join operator_category_high och on (o.category_high_id=och.id) left join operator_category_medium ocm on (o.category_medium_id=ocm.id) left join operator_category_low ocl on (o.category_low_id=ocl.id) group by 'no deposits', och.name, ocm.name, ocl.name
select distinct 'no transactions' as type, depositor_id, sum(amount) as sum_dep_amount, 0 as trans_amount, 0 as incoming_commission, 0 as retailer_commission, 0 as ad_commission, 0 as md_commission, 0 as cnf_commission, count(d.id) as cnt_depositors, 0 as cnt_txn_id, 'n/a' as operator_category_high, 'n/a' as operator_category_medium, 'n/a' as operator_category_low from depositor d where depositor_id not in (select user_rmn from 9bf92fsums) and depositor_id not in (select ad_id from 9bf92fsums) and depositor_id not in (select md_id from 9bf92fsums) and depositor_id not in (select cnf_id from 9bf92fsums) group by 'no transactions', depositor_id
``` | 2016/12/13 | [
"https://dba.stackexchange.com/questions/157981",
"https://dba.stackexchange.com",
"https://dba.stackexchange.com/users/2599/"
]
| Here is the original model -- at least as worded in the question.
[](https://i.stack.imgur.com/OV4gj.png)
Though there is nothing cyclical here, it does have a few problems. For example, the model has no relationship between `County` and `District`, although you do state that "District has many Counties". It is also possible to have `StudentRecord {RecordId, TeacherId, CountyId}` such that the teacher does not teach in the district the county is located in.
---
One possible fix may look like this; I do not think this is a complete solution, but it may be a good starting point. I will use predicates -- marked by **|** -- and constraints (italics) to describe the model. Predicates map to tables and constraints to PK, AK, FK.
**|** District (DistrictId) exists.
*District is identified by DistrictId.*
**|** County (CountyCode) is located in district (DistrictId).
*County is identified by CountyCode.*
*Each county is located in exactly one district; for each district that district may have more than one county.*
*If a county is located in a district, then that district must exist.*
**|** Teacher (TeacherId) exists.
*Teacher is identified by TeacherId.*
**|** Teacher (TeacherId) is licensed to teach in district (DistrictId).
*For each teacher, that teacher may be licensed to teach in more than one district.*
*For each district, more than one teacher may be licensed to teach in that district.*
*If a teacher is licensed to teach in a district, then that teacher must exist.*
*If a teacher is licensed to teach in a district, then that district must exist.*
**|** School (SchoolId) is located in county (CountyCode), district (DistrictId).
*School is identified by SchoolId.*
*For each school, that school is located in exactly one county, district.*
*For each county, district; more than one school may be located in that county, district.*
*If a school is located in a county, district; then that county must be located in that district.*
**|** Teacher (TeacherId) teaches in school (SchoolId), in district (DistrictId).
*Each teacher may teach in more than one school in a district.*
*For each school in a district it is possible that more than one teacher teaches in that school.*
*If a teacher teaches in school in a district, then that school must be located in that district.*
*If a teacher teaches in school in a district, then that teacher must be licensed to teach in that district.*
**|** Student (StudentId) exists.
*Student is identified by StudentId.*
**|** Student (StudentId) attends school (SchoolId).
*Each student may attend more than one school; for each school it is possible that more than one student attends that school.*
*If a student attends school, then that student must exist.*
*If a student attends school, then that school must exist.*
**|** Student record (RecordId) was generated for student (StudentId) by teacher (TeacherID) in school (SchoolID).
*Student record is identified by RecordId.*
*If a student record was generated for a student by a teacher in a school, then that student must attend that school.*
*If a student record was generated for a student by a teacher in a school, then that teacher must teach in that school.*
[](https://i.stack.imgur.com/q9wcS.png) | It is ok to have relationships with everyone. As long as there's a reason for that, meaning, as long as the concepts have an independent relationship amongst themselves. If your student lives in a county and you want to keep that info; and their teacher is a teacher in a different county, that's ok. But if the student must only have teachers in their own county, it's not ok.
You must be thinking about normalization. Check "[Database normalization](http://en.wikipedia.org/wiki/Database_normalization)" for more details, might some of your questions as to why tables are designed in some ways. |
39,165,196 | so I have a variable called multiplier that contains a certain value depending on what the user registered with. What I am trying to write here is
If multiplier is equal to "sedentary" then give it the value of `$sedmultiplier`
If multiplier is equal to "lightly" then give it the value of `$lightmultiplier`
I'm stuck on this and can't seem to figure out how this would be written.
```
switch ($multiplier==) {
case "sedentary":
$multiplier=$sedmultiplier;
break;
case "lightly":
$multiplier=$lightmultiplier;
break;
case "moderately":
$multiplier=$modmultiplier;
break;
case "very":
$multiplier=$verymultiplier;
break;
case "extremely":
$multiplier=$extrememultiplier;
break;
default:
multiplier==0;
}
``` | 2016/08/26 | [
"https://Stackoverflow.com/questions/39165196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2023527/"
]
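Setting the answers below aside, the mapping the question asks for (activity-level string to multiplier) is a plain lookup table. A sketch in Python, with placeholder multiplier values standing in for the question's `$sedmultiplier` etc.:

```python
# Map activity-level strings to multiplier values. The numbers here are
# made-up placeholders; the question's $sedmultiplier etc. would supply
# the real values.
MULTIPLIERS = {
    "sedentary": 1.2,
    "lightly": 1.375,
    "moderately": 1.55,
    "very": 1.725,
    "extremely": 1.9,
}

def resolve_multiplier(level: str) -> float:
    # lookup with a default replaces the switch statement's `default:` branch
    return MULTIPLIERS.get(level, 0.0)

print(resolve_multiplier("lightly"))   # one of the known levels
print(resolve_multiplier("unknown"))   # falls back to the default
```

The same dictionary/associative-array approach works in PHP with `$multipliers[$level] ?? 0`, avoiding the switch entirely.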
| The option to preview all screen sizes is not available in Android Studio 2.2 Beta 2. However, you can resize the preview screen to see how your layout would look on different screen sizes, as shown in the attached gif.
[](https://i.stack.imgur.com/um1y0.gif) | While the preview-all-screen-sizes option seems to be gone, it is possible in 2.3 to view more than one screen size at the same time.
Right-click on the activity\_main.xml (or whichever xml file) tab, and from the pop-up menu select split-vertically or split-horizontally.
This will create a second xml tab that can use a different screen for design/text. This can be repeated for more tabs, and the tabs can be "torn off" to float separately. |
39,165,196 | so I have a variable called multiplier that contains a certain value depending on what the user registered with. What I am trying to write here is
If multiplier is equal to "sedentary" then give it the value of `$sedmultiplier`
If multiplier is equal to "lightly" then give it the value of `$lightmultiplier`
I'm stuck on this and can't seem to figure out how this would be written.
```
switch ($multiplier==) {
case "sedentary":
$multiplier=$sedmultiplier;
break;
case "lightly":
$multiplier=$lightmultiplier;
break;
case "moderately":
$multiplier=$modmultiplier;
break;
case "very":
$multiplier=$verymultiplier;
break;
case "extremely":
$multiplier=$extrememultiplier;
break;
default:
multiplier==0;
}
``` | 2016/08/26 | [
"https://Stackoverflow.com/questions/39165196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2023527/"
]
| The option to preview all screen sizes is not available in Android Studio 2.2 Beta 2. However, you can resize the preview screen to see how your layout would look on different screen sizes, as shown in the attached gif.
[](https://i.stack.imgur.com/um1y0.gif) | The Previews are back!
Try this:
1. open your layout xml
2. open the new tab **Layout Validation**
If you can't find the tab, try double-shift and search for "Layout Validation".
[](https://i.stack.imgur.com/2s6yk.gif)
([source](https://developer.android.com/studio/debug/layout-inspector#layout-validation)) |
39,165,196 | so I have a variable called multiplier that contains a certain value depending on what the user registered with. What I am trying to write here is
If multiplier is equal to "sedentary" then give it the value of `$sedmultiplier`
If multiplier is equal to "lightly" then give it the value of `$lightmultiplier`
I'm stuck on this and can't seem to figure out how this would be written.
```
switch ($multiplier==) {
case "sedentary":
$multiplier=$sedmultiplier;
break;
case "lightly":
$multiplier=$lightmultiplier;
break;
case "moderately":
$multiplier=$modmultiplier;
break;
case "very":
$multiplier=$verymultiplier;
break;
case "extremely":
$multiplier=$extrememultiplier;
break;
default:
multiplier==0;
}
``` | 2016/08/26 | [
"https://Stackoverflow.com/questions/39165196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2023527/"
]
| While the preview-all-screen-sizes option seems to be gone, it is possible in 2.3 to view more than one screen size at the same time.
Right-click on the activity\_main.xml (or whichever xml file) tab, and from the pop-up menu select split-vertically or split-horizontally.
This will create a second xml tab that can use a different screen for design/text. This can be repeated for more tabs, and the tabs can be "torn off" to float separately. | The Previews are back!
Try this:
1. open your layout xml
2. open the new tab **Layout Validation**
If you can't find the tab, try double-shift and search for "Layout Validation".
[](https://i.stack.imgur.com/2s6yk.gif)
([source](https://developer.android.com/studio/debug/layout-inspector#layout-validation)) |
9,323,338 | I am having trouble coding a homework program that generates tests with multiple choice and essay questions. Everything works except that my program skips lines when it goes to read part of the essay class. I know it has to do with the Scanner and scan.nextLine, scan.nextInt, scan.next, etc., but I am confused about how exactly to fix it.
Thank you for your help.
```
import java.util.*;
public class TestWriter
{
public static void main (String [] args)
{
Scanner scan = new Scanner (System.in);
String type=null;
System.out.println ("How many questions are on your test?");
int num = scan.nextInt ();
Question [] test = new Question [num];
for (int i=0; i <num; i++)
{
System.out.println ("Question " + (i+1) + ": Essay or multiple choice question? (e/m)");
type = scan.next ();
scan.nextLine ();
if (type.equals ("e"))
{
test [i] = new Essay ();
test [i].readQuestion ();
}
if (type.equals ("m"))
{
test [i] = new MultChoice ();
test [i].readQuestion ();
}
}
for (int i=0; i <num; i++)
{
System.out.println ("Question " + (i+1)+": "+ type);
test [i].print ();
}
}
}
```
here is the essay class
```
public class Essay extends Question
{
String question;
int line;
public void readQuestion ()
{
System.out.println ("How many lines?");
line = scan.nextInt ();
scan.next ();
System.out.println ("Enter the question");
question = scan.nextLine ();
}
public void print ()
{
System.out.println (question);
for (int i=0; i <line; i++)
System.out.println ("");
}
}
``` | 2012/02/17 | [
"https://Stackoverflow.com/questions/9323338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1049017/"
]
| Using scan.nextInt() will generate the following problems
If your input is "5 5", nextInt() will get the next integer leaving the remaining " 5" of the buffer line. Of which the remaining " 5" will be caught by
`type = scan.next();`
In the class test writer:
```
System.out.println("How many questions are on your test?");
int num = scan.nextInt();
Question[] test = new Question[num]; for(int i=0; i<num; i++)
{
System.out.println("Question " + (i+1) + ": Essay or multiple choice question? (e/m)");
type = scan.next();
```
This will trigger the issue mentioned above.
To fix this you can either:
a) ensure that the input is solely a number, or
b) get the entire line like so, `String temp = scan.nextLine();`, and then convert it to an integer. This way you can inspect the string and check whether it is the input you require, i.e. whether the first token is an e/m or an integer.
The problem with scan.nextInt() is that it only consumes the next integer token of the input line. If there are characters after it, e.g. "5 5", it will grab only the first int 5 and leave " 5" behind.
Thus I would recommend using scan.nextLine() and parsing the string yourself, so that the input can be validated and you always know where the scanner is positioned.
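To make the leftover-buffer behaviour concrete, here is a minimal, self-contained Java sketch (the class and method names are invented for the demo; `Scanner` can read from a `String` just as it reads from `System.in`):

```java
import java.util.Scanner;

public class ScannerDemo {
    // Shows what nextInt() and a following nextLine() each see for a given input.
    static String splitInput(String input) {
        Scanner scan = new Scanner(input);
        int n = scan.nextInt();          // consumes "5", leaves " 5\n" in the buffer
        String rest = scan.nextLine();   // consumes the remainder of that same line: " 5"
        return n + "|" + rest;
    }

    public static void main(String[] args) {
        System.out.println(splitInput("5 5\n"));  // prints: 5| 5
    }
}
```

Running it shows that `nextLine()` returns the leftover `" 5"` rather than waiting for fresh input, which is exactly why the program appears to "skip" a prompt.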
You should use .next() / .nextInt() when you are handling an input line with several parameters you want to capture individually, such as "25 Male Student 1234"; in that case the code would look like this:
```
int age = scan.nextInt();
String sex = scan.next();
String job = scan.next();
int score = scan.nextInt();
``` | Your `readQuestion` function should be ...
```
public void readQuestion()
{
System.out.println("How many lines?");
line = scan.nextInt();
scan.nextLine();
System.out.println("Enter the question");
question = scan.nextLine();
}
```
It should be `scan.nextLine();` so that the scanner consumes the newline left over from `nextInt()` before reading the question.
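For illustration, here is the corrected nextInt-then-nextLine pattern run against a canned input `String` (the class and method names are invented for the demo):

```java
import java.util.Scanner;

public class ReadQuestionDemo {
    // Mimics the corrected readQuestion() flow against a fixed input string.
    static String readQuestion(String input) {
        Scanner scan = new Scanner(input);
        int lines = scan.nextInt();        // reads "3", leaves the newline behind
        scan.nextLine();                   // discards that leftover newline
        return scan.nextLine();            // now reads the whole question line
    }

    public static void main(String[] args) {
        System.out.println(readQuestion("3\nDescribe polymorphism.\n"));
        // prints: Describe polymorphism.
    }
}
```

Without the intermediate `scan.nextLine()`, the final read would return the empty remainder of the first line instead of the question.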
9,323,338 | I am having trouble coding a hw program that is made to generate test with multiple choice and essay questions. Everything works except my program skips lines when it goes to read a part of the essay class. I know it has to do with the scanner and scan.nextline, scan.nextInt and scan.next, etc but I am confused on how exactly to fix it.
Thank you for your help.
```
import java.util.*;
public class TestWriter
{
public static void main (String [] args)
{
Scanner scan = new Scanner (System.in);
String type=null;
System.out.println ("How many questions are on your test?");
int num = scan.nextInt ();
Question [] test = new Question [num];
for (int i=0; i <num; i++)
{
System.out.println ("Question " + (i+1) + ": Essay or multiple choice question? (e/m)");
type = scan.next ();
scan.nextLine ();
if (type.equals ("e"))
{
test [i] = new Essay ();
test [i].readQuestion ();
}
if (type.equals ("m"))
{
test [i] = new MultChoice ();
test [i].readQuestion ();
}
}
for (int i=0; i <num; i++)
{
System.out.println ("Question " + (i+1)+": "+ type);
test [i].print ();
}
}
}
```
here is the essay class
```
public class Essay extends Question
{
String question;
int line;
public void readQuestion ()
{
System.out.println ("How many lines?");
line = scan.nextInt ();
scan.next ();
System.out.println ("Enter the question");
question = scan.nextLine ();
}
public void print ()
{
System.out.println (question);
for (int i=0; i <line; i++)
System.out.println ("");
}
}
``` | 2012/02/17 | [
"https://Stackoverflow.com/questions/9323338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1049017/"
]
| Using scan.nextInt() will cause the following problem:
If your input is "5 5", nextInt() will consume only the next integer, leaving the remaining " 5" in the buffer. That leftover " 5" will then be caught by
`type = scan.next();`
In the class test writer:
```
System.out.println("How many questions are on your test?");
int num = scan.nextInt();
Question[] test = new Question[num]; for(int i=0; i<num; i++)
{
System.out.println("Question " + (i+1) + ": Essay or multiple choice question? (e/m)");
type = scan.next();
```
This will trigger the issue mentioned above.
To fix this you can either:
a) ensure that the input is solely a number, or
b) get the entire line like so, `String temp = scan.nextLine();`, and then convert it to an integer. This way you can inspect the string and check whether it is the input you require, i.e. whether the first token is an e/m or an integer.
The problem with scan.nextInt() is that it only consumes the next integer token of the input line. If there are characters after it, e.g. "5 5", it will grab only the first int 5 and leave " 5" behind.
Thus I would recommend using scan.nextLine() and parsing the string yourself, so that the input can be validated and you always know where the scanner is positioned.
You should use .next() / .nextInt() when you are handling an input line with several parameters you want to capture individually, such as "25 Male Student 1234"; in that case the code would look like this:
```
int age = scan.nextInt();
String sex = scan.next();
String job = scan.next();
int score = scan.nextInt();
``` | In your TestWriter.main() method what are you expecting at 3 line in following code:
```
System.out.println("Question " + (i+1) + ": Essay or multiple choice question? (e/m)");
type = scan.next();
scan.nextLine(); // LINE 3: What are you expecting the user to enter here?
```
The control flow will get stuck at this point until you enter something on the console.
5,612,791 | So, I heard that C++ templates shouldn't be separated into a header (.h) and source (.cpp) files.
For instance, a template like this:
```
template <class T>
class J
{
T something;
};
```
Is this true? Why is it so?
If because of that I'm gonna have to put both declaration and implementation in the same file, should I put it in a .h file or a .cpp file? | 2011/04/10 | [
"https://Stackoverflow.com/questions/5612791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/380714/"
]
| Headers.
It's because templates are instantiated at compile-time, not link-time, and different translation units (roughly equivalent to your `.cpp` files) only "know about" each other at link-time. Headers tend to be widely "known about" at compile-time because you `#include` them in any translation unit that needs them.
Read <https://isocpp.org/wiki/faq/templates> for more. | If you need the template code to be usable by other translation units (.cpp files), you need to put the implementation in the .h file or else those other units won't be able to instantiate the template (expand it according to the types they use).
If your template function is only instantiated in one .cpp file, you can define it there. This happens sometimes when a class has a private member function which is a template (and it is only called from the implementation file, not the class header file). |
5,612,791 | So, I heard that C++ templates shouldn't be separated into a header (.h) and source (.cpp) files.
For instance, a template like this:
```
template <class T>
class J
{
T something;
};
```
Is this true? Why is it so?
If because of that I'm gonna have to put both declaration and implementation in the same file, should I put it in a .h file or a .cpp file? | 2011/04/10 | [
"https://Stackoverflow.com/questions/5612791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/380714/"
]
| The reason you can't put a templated class into a .cpp file is that in order to "compile" a .cpp file you need to know what type is being used in place of T. As it stands, a templated class (like your class J) doesn't have enough information to compile. Thus it must all be in headers.
If you want to break up the implementation into another file for cleanliness, the best practice is to use an .hxx file. Like this: inside your header file, J.h, put:
```
#ifndef _J_H__
#define _J_H__
template <class T>
class J {
    // member definitions
};
#include "j.hxx"
#endif // _J_H__
```
and then, in j.hxx you'll have
```
template <class T>
J<T>::J() {
    // constructor implementation
}

template <class T>
J<T>::~J() {
    // destructor implementation
}

template <class T>
void J<T>::memberFunc() {
    // memberFunc implementation
}
// etc.
```
Finally in your .cpp file that uses the templated class, let's call it K.cpp you'll have:
```
#include "J.h" // note that this always automatically includes J.hxx
void f(void)
{
J<double> jinstance; // now the compiler knows what the exact type is.
}
``` | If you need the template code to be usable by other translation units (.cpp files), you need to put the implementation in the .h file or else those other units won't be able to instantiate the template (expand it according to the types they use).
If your template function is only instantiated in one .cpp file, you can define it there. This happens sometimes when a class has a private member function which is a template (and it is only called from the implementation file, not the class header file). |
5,612,791 | So, I heard that C++ templates shouldn't be separated into a header (.h) and source (.cpp) files.
For instance, a template like this:
```
template <class T>
class J
{
T something;
};
```
Is this true? Why is it so?
If because of that I'm gonna have to put both declaration and implementation in the same file, should I put it in a .h file or a .cpp file? | 2011/04/10 | [
"https://Stackoverflow.com/questions/5612791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/380714/"
]
| Headers.
It's because templates are instantiated at compile-time, not link-time, and different translation units (roughly equivalent to your `.cpp` files) only "know about" each other at link-time. Headers tend to be widely "known about" at compile-time because you `#include` them in any translation unit that needs them.
Read <https://isocpp.org/wiki/faq/templates> for more. | Yes, it's true. Declaration and implementation are generally put into the header file all together. Some compilers experimented with an `export` keyword that would allow them to be separated, but that has been removed from C++0x. Check out [this FAQ entry](http://www.parashift.com/c++-faq-lite/templates.html#faq-35.12) for all the dirty details. |
5,612,791 | So, I heard that C++ templates shouldn't be separated into a header (.h) and source (.cpp) files.
For instance, a template like this:
```
template <class T>
class J
{
T something;
};
```
Is this true? Why is it so?
If because of that I'm gonna have to put both declaration and implementation in the same file, should I put it in a .h file or a .cpp file? | 2011/04/10 | [
"https://Stackoverflow.com/questions/5612791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/380714/"
]
| The reason you can't put a templated class into a .cpp file is that in order to "compile" a .cpp file you need to know what type is being used in place of T. As it stands, a templated class (like your class J) doesn't have enough information to compile. Thus it must all be in headers.
If you want to break up the implementation into another file for cleanliness, the best practice is to use an .hxx file. Like this: inside your header file, J.h, put:
```
#ifndef _J_H__
#define _J_H__
template <class T>
class J {
    // member definitions
};
#include "j.hxx"
#endif // _J_H__
```
and then, in j.hxx you'll have
```
template <class T>
J<T>::J() {
    // constructor implementation
}

template <class T>
J<T>::~J() {
    // destructor implementation
}

template <class T>
void J<T>::memberFunc() {
    // memberFunc implementation
}
// etc.
```
Finally in your .cpp file that uses the templated class, let's call it K.cpp you'll have:
```
#include "J.h" // note that this always automatically includes J.hxx
void f(void)
{
J<double> jinstance; // now the compiler knows what the exact type is.
}
``` | Yes, it's true. Declaration and implementation are generally put into the header file all together. Some compilers experimented with an `export` keyword that would allow them to be separated, but that has been removed from C++0x. Check out [this FAQ entry](http://www.parashift.com/c++-faq-lite/templates.html#faq-35.12) for all the dirty details. |
5,612,791 | So, I heard that C++ templates shouldn't be separated into a header (.h) and source (.cpp) files.
For instance, a template like this:
```
template <class T>
class J
{
T something;
};
```
Is this true? Why is it so?
If because of that I'm gonna have to put both declaration and implementation in the same file, should I put it in a .h file or a .cpp file? | 2011/04/10 | [
"https://Stackoverflow.com/questions/5612791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/380714/"
]
| Headers.
It's because templates are instantiated at compile-time, not link-time, and different translation units (roughly equivalent to your `.cpp` files) only "know about" each other at link-time. Headers tend to be widely "known about" at compile-time because you `#include` them in any translation unit that needs them.
Read <https://isocpp.org/wiki/faq/templates> for more. | The reason you can't put a templated class into a .cpp file is that in order to "compile" a .cpp file you need to know what type is being used in place of T. As it stands, a templated class (like your class J) doesn't have enough information to compile. Thus it must all be in headers.
If you want to break up the implementation into another file for cleanliness, the best practice is to use an .hxx file. Like this: inside your header file, J.h, put:
```
#ifndef _J_H__
#define _J_H__
template <class T>
class J {
    // member definitions
};
#include "j.hxx"
#endif // _J_H__
```
and then, in j.hxx you'll have
```
template <class T>
J<T>::J() {
    // constructor implementation
}

template <class T>
J<T>::~J() {
    // destructor implementation
}

template <class T>
void J<T>::memberFunc() {
    // memberFunc implementation
}
// etc.
```
Finally in your .cpp file that uses the templated class, let's call it K.cpp you'll have:
```
#include "J.h" // note that this always automatically includes J.hxx
void f(void)
{
J<double> jinstance; // now the compiler knows what the exact type is.
}
``` |
How do I set my settings such that even if I close my laptop's lid,
my downloads don't stop or pause in utransmission?
I have set the setting in power options to not suspend after closing the lid, and the time-out is off as well in power options,
but this doesn't help.
Also, wifi goes off when I close the lid.
I just want my torrents running even if I close the lid or keep the laptop untouched for a long time | 2014/08/29 | [
"https://askubuntu.com/questions/517830",
"https://askubuntu.com",
"https://askubuntu.com/users/321468/"
]
| `/etc/environment` takes a proper re-login to take effect, because it is processed by PAM at login. Further, as @przemo noted, it is not run or sourced as a script, so variables are not expanded. Put such variables in a `.sh` file in `/etc/profile.d/`:
```bash
sudo tee -a /etc/profile.d/my_vars.sh <<"EOF"
export M2_HOME=/usr/local/apache-maven/apache-maven-3.1.1
export M2=$M2_HOME/bin
export PATH=$M2:$PATH
export JAVA_HOME=/usr/local/jdk1.6.0_45
export PATH=$JAVA_HOME:$PATH
EOF
```
This will also take a re-login to take full effect, but you can test it out immediately by running a login shell:
```
$ bash -l
$ echo $PATH
/usr/local/jdk1.6.0_45:/usr/local/apache-maven/apache-maven-3.1.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
```
As Gunnar has pointed out, the default assignment to `PATH` should remain in `/etc/environment`, and shouldn't be added to the above script. See [this community wiki page](https://help.ubuntu.com/community/EnvironmentVariables#A.2BAC8-etc.2BAC8-profile.d.2BAC8.2A.sh) for more information. | `/etc/environment` is not a script file, you cannot use variables, for further reading I recommend <https://help.ubuntu.com/community/EnvironmentVariables> |
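A small shell sketch (paths invented for the demo) of why the `profile.d` approach can use variables while `/etc/environment` cannot: the `.sh` file is *sourced* by a shell at login, so `$M2_HOME` actually expands:

```shell
# Write a profile.d-style script (using /tmp here just for the demo).
cat > /tmp/demo_vars.sh <<'EOF'
export M2_HOME=/opt/maven
export M2="$M2_HOME/bin"
EOF

# Sourcing it, as a login shell would, expands the variable:
. /tmp/demo_vars.sh
echo "$M2"    # prints: /opt/maven/bin
```

The same `KEY="$OTHER/suffix"` line placed in `/etc/environment` would be taken literally by PAM, with the `$` kept as a plain character.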
54,372,246 | I'm trying to use lit-element component in my React project, and I'd like to pass in the callback function to lit-element component from React but with no luck.
I've tried a couple of different ways, like change property type, and pass function in as a string, but none of them works.
**lit-element component code:**
```
import { LitElement, html } from "lit-element";
class MyButton extends LitElement {
static get properties() {
return {
clickHandler: {
type: String
},
bar: {
type: String
}
};
}
render() {
const foo = this.clickHandler; //value is undefined
const bar = this.bar; //value is "it's bar"
return html`
<button @click=${this.clickHandler}>click me</button>
`;
}
}
customElements.define("my-button", MyButton);
```
**react side code:**
```
<my-button clickHandler={() => alert("clicked")} bar="it's bar" />
```
I put a break point in the render section of the component, and I can see the 'bar' value get passed in correctly, but the value of 'clickHandler' is undefined.
Does anyone have any idea on how to pass function from React to lit-element?
Thanks! | 2019/01/25 | [
"https://Stackoverflow.com/questions/54372246",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10958745/"
]
| What I found that works is adding a ref in the react component to the lit element, then literally setting the property on it.
So, for the following JSX:
```
<some-webcomponent ref={this.myRef}></some-webcomponent>
```
You can pass a property to ‘some-webcomponent’ in i.e. componentDidMount:
```
componentDidMount () {
const element = this.myRef.current;
element.someCallback = () => // ...
}
```
It’s not too pretty, but I wouldn’t consider it a hack either. Requires quite a lot of boilerplate though :/
Here’s a full React component for reference:
```
class MyComponent extends React.Component {
constructor(props) {
super(props);
this.myRef = React.createRef();
}
render() {
return <some-webcomponent ref={this.myRef}></some-webcomponent>;
}
componentDidMount() {
const element = this.myRef.current;
element.someCallback = () => console.log("call!");
}
}
```
Where the lit element is:
```
import { LitElement, html } from "lit-element";
class SomeWebcomponent extends LitElement {
static get properties() {
return {
someCallback: { type: Function }
};
}
render() {
return html`
<button @click=${this.someCallback}>click me</button>
`;
}
}
customElements.define("some-webcomponent", SomeWebcomponent);
``` | Have a look at the menu button click function for the no redux pwa-starter-kit at <https://github.com/Polymer/pwa-starter-kit/blob/template-no-redux/src/components/my-app.js>. I believe that provides the example that may work for you. |
54,372,246 | I'm trying to use lit-element component in my React project, and I'd like to pass in the callback function to lit-element component from React but with no luck.
I've tried a couple of different ways, like change property type, and pass function in as a string, but none of them works.
**lit-element component code:**
```
import { LitElement, html } from "lit-element";
class MyButton extends LitElement {
static get properties() {
return {
clickHandler: {
type: String
},
bar: {
type: String
}
};
}
render() {
const foo = this.clickHandler; //value is undefined
const bar = this.bar; //value is "it's bar"
return html`
<button @click=${this.clickHandler}>click me</button>
`;
}
}
customElements.define("my-button", MyButton);
```
**react side code:**
```
<my-button clickHandler={() => alert("clicked")} bar="it's bar" />
```
I put a break point in the render section of the component, and I can see the 'bar' value get passed in correctly, but the value of 'clickHandler' is undefined.
Does anyone have any idea on how to pass function from React to lit-element?
Thanks! | 2019/01/25 | [
"https://Stackoverflow.com/questions/54372246",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10958745/"
]
| This question probably isn't just valid for React. It's quite general: how do I pass a handler from a parent to a child component, whether it's a lit-element or a plain HTML element?
According to the property doc, <https://lit-element.polymer-project.org/guide/properties>
```
<my-element
mystring="hello world"
mynumber="5"
mybool
myobj='{"stuff":"hi"}'
myarray='[1,2,3,4]'>
</my-element>
```
It doesn't seem to support (callback) functions at the moment. So how does an element handle an event from the parent level?
According to the event doc, <https://lit-element.polymer-project.org/guide/events>, you can dispatch any event into the DOM tree, including to your parent. The DOM event system is much broader than the React prop system.
```
class MyElement extends LitElement {
  // ... inside one of the element's methods:
  _notify() {
    let event = new CustomEvent('my-event', {
      detail: {
        message: 'Something important happened'
      }
    });
    this.dispatchEvent(event);
  }
}
```
and then in either lit or non-lit context, use the following to handle the event,
```
const myElement = document.querySelector('my-element');
myElement.addEventListener('my-event', (e) => {console.log(e)});
```
This way you can let children trigger behavior in the parent, which is exactly what a callback does. | Have a look at the menu button click function for the no redux pwa-starter-kit at <https://github.com/Polymer/pwa-starter-kit/blob/template-no-redux/src/components/my-app.js>. I believe that provides the example that may work for you.
54,372,246 | I'm trying to use lit-element component in my React project, and I'd like to pass in the callback function to lit-element component from React but with no luck.
I've tried a couple of different ways, like change property type, and pass function in as a string, but none of them works.
**lit-element component code:**
```
import { LitElement, html } from "lit-element";
class MyButton extends LitElement {
static get properties() {
return {
clickHandler: {
type: String
},
bar: {
type: String
}
};
}
render() {
const foo = this.clickHandler; //value is undefined
const bar = this.bar; //value is "it's bar"
return html`
<button @click=${this.clickHandler}>click me</button>
`;
}
}
customElements.define("my-button", MyButton);
```
**react side code:**
```
<my-button clickHandler={() => alert("clicked")} bar="it's bar" />
```
I put a break point in the render section of the component, and I can see the 'bar' value get passed in correctly, but the value of 'clickHandler' is undefined.
Does anyone have any idea on how to pass function from React to lit-element?
Thanks! | 2019/01/25 | [
"https://Stackoverflow.com/questions/54372246",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10958745/"
]
| This question probably isn't just valid for React. It's quite general: how do I pass a handler from a parent to a child component, whether it's a lit-element or a plain HTML element?
According to the property doc, <https://lit-element.polymer-project.org/guide/properties>
```
<my-element
mystring="hello world"
mynumber="5"
mybool
myobj='{"stuff":"hi"}'
myarray='[1,2,3,4]'>
</my-element>
```
It doesn't seem to support (callback) functions at the moment. So how does an element handle an event from the parent level?
According to the event doc, <https://lit-element.polymer-project.org/guide/events>, you can dispatch any event into the DOM tree, including to your parent. The DOM event system is much broader than the React prop system.
```
class MyElement extends LitElement {
  // ... inside one of the element's methods:
  _notify() {
    let event = new CustomEvent('my-event', {
      detail: {
        message: 'Something important happened'
      }
    });
    this.dispatchEvent(event);
  }
}
```
and then in either lit or non-lit context, use the following to handle the event,
```
const myElement = document.querySelector('my-element');
myElement.addEventListener('my-event', (e) => {console.log(e)});
```
This way you can let children trigger behavior in the parent, which is exactly what a callback does. | What I found that works is adding a ref in the React component to the lit element, then literally setting the property on it.
So, for the following JSX:
```
<some-webcomponent ref={this.myRef}></some-webcomponent>
```
You can pass a property to ‘some-webcomponent’ in i.e. componentDidMount:
```
componentDidMount () {
const element = this.myRef.current;
element.someCallback = () => // ...
}
```
It’s not too pretty, but I wouldn’t consider it a hack either. Requires quite a lot of boilerplate though :/
Here’s a full React component for reference:
```
class MyComponent extends React.Component {
constructor(props) {
super(props);
this.myRef = React.createRef();
}
render() {
return <some-webcomponent ref={this.myRef}></some-webcomponent>;
}
componentDidMount() {
const element = this.myRef.current;
element.someCallback = () => console.log("call!");
}
}
```
Where the lit element is:
```
import { LitElement, html } from "lit-element";
class SomeWebcomponent extends LitElement {
static get properties() {
return {
someCallback: { type: Function }
};
}
render() {
return html`
<button @click=${this.someCallback}>click me</button>
`;
}
}
customElements.define("some-webcomponent", SomeWebcomponent);
``` |
3,579,638 | Below is a sample:
```
private function loadLevel(name:String):void{
level1 = new Level1(name);
level1.onInitialize.addOnce(onLevelReady);
level1.loadData();
}
```
This loads only the Level1 class; now I don't want to create another function just to load another level.
I want loadLevel to load Level1, Level2, etc. in the same function.
ideas would help. :)
thanks! | 2010/08/26 | [
"https://Stackoverflow.com/questions/3579638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115330/"
]
| ```
private function loadLevel(levelNum : int, name : String) : void {
    try {
        var LevelClass : Class = getDefinitionByName("Level" + levelNum) as Class;
        // I think you should also create an ILevel interface for your levels
        var level : ILevel = new LevelClass(name) as ILevel;
        level.onInitialize.addOnce(onLevelReady);
        level.loadData();
    } catch(e:Error) {
        trace("Failed: " + e.message);
    }
}

private function somewhere() : void {
    // The downside is that you need to make sure the actual Level1, Level2, ... classes
    // are included by mxmlc when compiling your swf. This can be done by referencing them somewhere in the code.
    Level1;
    loadLevel(1, "some Level");
}
```
another solution might look like this:
```
private function loadLevel(levelNum : int, name : String) : void {
    var level : ILevel;
    switch(levelNum) {
        case 1:
            level = new Level1(name);
            break;
        case 2:
            level = new Level2(name);
            break;
        default:
            throw new Error("No such Level");
    }
    level.onInitialize.addOnce(onLevelReady);
    level.loadData();
}
``` | ```
private function loadLevel(name:String, level:Number):void {
    var Level:Class = getDefinitionByName("Level" + level) as Class;
    level1 = new Level(name);
    level1.onInitialize.addOnce(onLevelReady);
    level1.loadData();
}
```
If your class requires a class path.
```
var Level:Class = getDefinitionByName("com.domain.Level" + level) as Class;
``` |
3,579,638 | Below is a sample:
```
private function loadLevel(name:String):void{
level1 = new Level1(name);
level1.onInitialize.addOnce(onLevelReady);
level1.loadData();
}
```
This loads only the Level1 class; now I don't want to create another function just to load another level.
I want loadLevel to load Level1, Level2, etc. in the same function.
ideas would help. :)
thanks! | 2010/08/26 | [
"https://Stackoverflow.com/questions/3579638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115330/"
]
| ```
private function loadLevel(levelNum : int, name : String) : void {
    try {
        var LevelClass : Class = getDefinitionByName("Level" + levelNum) as Class;
        // I think you should also create an ILevel interface for your levels
        var level : ILevel = new LevelClass(name) as ILevel;
        level.onInitialize.addOnce(onLevelReady);
        level.loadData();
    } catch(e:Error) {
        trace("Failed: " + e.message);
    }
}

private function somewhere() : void {
    // The downside is that you need to make sure the actual Level1, Level2, ... classes
    // are included by mxmlc when compiling your swf. This can be done by referencing them somewhere in the code.
    Level1;
    loadLevel(1, "some Level");
}
```
another solution might look like this:
```
private function loadLevel(levelNum : int, name : String) : void {
    var level : ILevel;
    switch(levelNum) {
        case 1:
            level = new Level1(name);
            break;
        case 2:
            level = new Level2(name);
            break;
        default:
            throw new Error("No such Level");
    }
    level.onInitialize.addOnce(onLevelReady);
    level.loadData();
}
``` | This is what I use:
```
private function loadLevel(level:Level):void {
    level.onInitialize.addOnce(onLevelReady);
    level.loadData();
}
```
and call it using:
```
loadLevel(new Level1());
```
And make sure that all levels extends from the class Level
```
class Level1 extends Level
```
But I think the other solutions work better for classes like Level1, Level2, Level3. I use this to control scenes in my game: Scene\_Title, Scene\_Game, Scene\_Credits
3,579,638 | Below is a sample:
```
private function loadLevel(name:String):void{
level1 = new Level1(name);
level1.onInitialize.addOnce(onLevelReady);
level1.loadData();
}
```
This loads only the Level1 class; now I don't want to create another function just to load another level.
I want loadLevel to load Level1, Level2, etc. in the same function.
ideas would help. :)
thanks! | 2010/08/26 | [
"https://Stackoverflow.com/questions/3579638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115330/"
]
| ```
private function loadLevel(levelNum : int, name : String) : void {
    try {
        var LevelClass : Class = getDefinitionByName("Level" + levelNum) as Class;
        // I think you should also create an ILevel interface for your levels
        var level : ILevel = new LevelClass(name) as ILevel;
        level.onInitialize.addOnce(onLevelReady);
        level.loadData();
    } catch(e:Error) {
        trace("Failed: " + e.message);
    }
}

private function somewhere() : void {
    // The downside is that you need to make sure the actual Level1, Level2, ... classes
    // are included by mxmlc when compiling your swf. This can be done by referencing them somewhere in the code.
    Level1;
    loadLevel(1, "some Level");
}
```
another solution might look like this:
```
private function loadLevel(levelNum : int, name : String) : void {
    var level : ILevel;
    switch(levelNum) {
        case 1:
            level = new Level1(name);
            break;
        case 2:
            level = new Level2(name);
            break;
        default:
            throw new Error("No such Level");
    }
    level.onInitialize.addOnce(onLevelReady);
    level.loadData();
}
``` | Here's another answer:
Keep your levels in an array.
```
private const levels:Array = [null, Level1, Level2, Level3];
```
And then:
```
private function loadLevel(name:String, n:int):void{
level = new (levels[n])(name);
level.onInitialize.addOnce(onLevelReady);
level.loadData();
}
``` |
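For illustration only, here is the same lookup-table idea in modern JavaScript, which shares ECMAScript roots with ActionScript 3, so the pattern maps over almost directly (all class and function names here are invented for the demo):

```javascript
// A base class plus two concrete levels, mirroring the ActionScript setup.
class Level { constructor(name) { this.name = name; } }
class Level1 extends Level { loadData() { return "data for " + this.name; } }
class Level2 extends Level { loadData() { return "more data for " + this.name; } }

const levels = [null, Level1, Level2];   // index 0 unused so levels[1] is Level1

function loadLevel(name, n) {
  const level = new (levels[n])(name);   // look the class up by index, then construct it
  return level.loadData();
}

console.log(loadLevel("forest", 1));     // prints: data for forest
```

Keeping the classes in a table like this avoids both the `switch` statement and string-based class lookup.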
3,579,638 | Below is a sample:
```
private function loadLevel(name:String):void{
level1 = new Level1(name);
level1.onInitialize.addOnce(onLevelReady);
level1.loadData();
}
```
This loads only the Level1 class; now I don't want to create another function just to load another level.
I want loadLevel to load Level1, Level2, etc. in the same function.
ideas would help. :)
thanks! | 2010/08/26 | [
"https://Stackoverflow.com/questions/3579638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115330/"
]
| Here's another answer:
Keep your levels in an array.
```
private const levels:Array = [null, Level1, Level2, Level3];
```
And then:
```
private function loadLevel(name:String, n:int):void{
level = new (levels[n])(name);
level.onInitialize.addOnce(onLevelReady);
level.loadData();
}
``` | ```
private function loadLevel(name:String, level:Number):void {
    var Level:Class = getDefinitionByName("Level" + level) as Class;
    level1 = new Level(name);
    level1.onInitialize.addOnce(onLevelReady);
    level1.loadData();
}
```
If your class requires a class path.
```
var Level:Class = getDefinitionByName("com.domain.Level" + level) as Class;
``` |
3,579,638 | Below is a sample:
```
private function loadLevel(name:String):void{
level1 = new Level1(name);
level1.onInitialize.addOnce(onLevelReady);
level1.loadData();
}
```
This loads only the Level1 class; now I don't want to create another function just to load another level.
I want loadLevel to load Level1, Level2, etc. in the same function.
ideas would help. :)
thanks! | 2010/08/26 | [
"https://Stackoverflow.com/questions/3579638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115330/"
]
| Here's another answer:
Keep your levels in an array.
```
private const levels:Array = [null, Level1, Level2, Level3];
```
And then:
```
private function loadLevel(name:String, n:int):void{
level = new (levels[n])(name);
level.onInitialize.addOnce(onLevelReady);
level.loadData();
}
``` | This is what I use:
```
private function loadLevel(level:Level):void {
    level.onInitialize.addOnce(onLevelReady);
    level.loadData();
}
```
and call it using:
```
loadLevel(new Level1());
```
And make sure that all levels extends from the class Level
```
class Level1 extends Level
```
But I think the others solutions work better for a classes like Level1, Level2, Level3. I use this to control scenes in my game, Scene\_Title, Scene\_Game, Scene\_Credits |
12,459,390 | Our customer has their public RSA key stored in a certificate.
We need this key hardcoded in our WinRT app, so we can encrypt client-side. However, we're having issues importing the key into an instance of the CryptographicKey class.
We're using the ImportPublicKey on the RSAProvider:
```
rsaProvider = AsymmetricKeyAlgorithmProvider.OpenAlgorithm(AsymmetricAlgorithmNames.RsaPkcs1);
key = rsaProvider.ImportPublicKey(publicKeyBuffer);
```
We've tried loading several things into the publicKeyBuffer: The certificate, the public key exported from the certificate in several formats.
How do we load their public key? | 2012/09/17 | [
"https://Stackoverflow.com/questions/12459390",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/71515/"
]
| I found this article in the MSDN Forum very helpful. Carlos Lopez posted some code to get the public key out of a Base64-encoded certificate.
<http://social.msdn.microsoft.com/Forums/en-US/17e1467a-2de7-47d2-8d8c-130518eaac68/how-to-use-a-x509-certificate-not-a-pfx-to-verify-a-signature>
Here is the code:
```
public static CryptographicKey GetCryptographicPublicKeyFromCert(string strCert)
{
int length;
CryptographicKey CryptKey = null;
byte[] bCert = Convert.FromBase64String(strCert);
// Assume Cert contains RSA public key
// Find matching OID in the certificate and return public key
byte[] rsaOID = EncodeOID("1.2.840.113549.1.1.1");
int index = FindX509PubKeyIndex(bCert, rsaOID, out length);
// Found X509PublicKey in certificate so copy it.
if (index > -1)
{
byte[] X509PublicKey = new byte[length];
Array.Copy(bCert, index, X509PublicKey, 0, length);
AsymmetricKeyAlgorithmProvider AlgProvider = AsymmetricKeyAlgorithmProvider.OpenAlgorithm(AsymmetricAlgorithmNames.RsaPkcs1);
CryptKey = AlgProvider.ImportPublicKey(CryptographicBuffer.CreateFromByteArray(X509PublicKey));
}
return CryptKey;
}
static int FindX509PubKeyIndex(byte[] Reference, byte[] value, out int length)
{
int index = -1;
bool found;
length = 0;
for (int n = 0; n < Reference.Length; n++)
{
if ((Reference[n] == value[0]) && (n + value.Length < Reference.Length))
{
index = n;
found = true;
for (int m = 1; m < value.Length; m++)
{
if (Reference[n + m] != value[m])
{
found = false;
break;
}
}
if (found) break;
else index = -1;
}
}
if (index > -1)
{
// Find outer Sequence
while (index > 0 && Reference[index] != 0x30) index--;
index--;
while (index > 0 && Reference[index] != 0x30) index--;
}
if (index > -1)
{
// Find the length of encoded Public Key
if ((Reference[index + 1] & 0x80) == 0x80)
{
int numBytes = Reference[index + 1] & 0x7F;
for (int m = 0; m < numBytes; m++)
{
length += (Reference[index + 2 + m] << ((numBytes - 1 - m) * 8));
}
length += 4;
}
else
{
length = Reference[index + 1] + 2;
}
}
return index;
}
static public byte[] EncodeOID(string szOID)
{
int[] OIDNums;
byte[] pbEncodedTemp = new byte[64];
byte[] pbEncoded = null;
int n, index, num, count;
OIDNums = ParseOID(szOID);
pbEncodedTemp[0] = 6;
pbEncodedTemp[2] = Convert.ToByte(OIDNums[0] * 40 + OIDNums[1]);
count = 1;
for (n = 2, index = 3; n < OIDNums.Length; n++)
{
num = OIDNums[n];
if (num >= 16384)
{
pbEncodedTemp[index++] = Convert.ToByte(num / 16384 | 0x80);
num = num % 16384;
count++;
}
if (num >= 128)
{
pbEncodedTemp[index++] = Convert.ToByte(num / 128 | 0x80);
num = num % 128;
count++;
}
pbEncodedTemp[index++] = Convert.ToByte(num);
count++;
}
pbEncodedTemp[1] = Convert.ToByte(count);
pbEncoded = new byte[count + 2];
Array.Copy(pbEncodedTemp, 0, pbEncoded, 0, count + 2);
return pbEncoded;
}
static public int[] ParseOID(string szOID)
{
int nlast, n = 0;
bool fFinished = false;
int count = 0;
int[] dwNums = null;
do
{
nlast = n;
n = szOID.IndexOf(".", nlast);
if (n == -1) fFinished = true;
count++;
n++;
} while (fFinished == false);
dwNums = new int[count];
count = 0;
fFinished = false;
do
{
nlast = n;
n = szOID.IndexOf(".", nlast);
if (n != -1)
{
dwNums[count] = Convert.ToInt32(szOID.Substring(nlast, n - nlast), 10);
}
else
{
fFinished = true;
dwNums[count] = Convert.ToInt32(szOID.Substring(nlast, szOID.Length - nlast), 10);
}
n++;
count++;
} while (fFinished == false);
return dwNums;
}
``` | Two things:
1. The argument to ImportPublicKey is an IBuffer. The easiest way to get one is the
ToBuffer extension method for a byte[].
2. Use the override of [ImportPublicKey](http://msdn.microsoft.com/en-us/library/windows/apps/windows.security.cryptography.core.cryptographicpublickeyblobtype.aspx) that takes both an IBuffer and a [CryptographicPublicKeyBlobType](http://msdn.microsoft.com/en-us/library/windows/apps/windows.security.cryptography.core.cryptographicpublickeyblobtype.aspx), specifically CryptographicPublicKeyBlobType.X509SubjectPublicKeyInfo. Pass in the subject public key info field from the certificate. |
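As an aside, the `EncodeOID` routine in the code above is standard DER object-identifier encoding: the first two arcs are packed into a single byte as `40*a + b`, and every later arc is written base-128 with the high bit set on all bytes but the last. A minimal sketch of the same idea (plain Ruby, for illustration only):

```ruby
# DER-encode a dotted OID string, e.g. "1.2.840.113549.1.1.1" (rsaEncryption).
def encode_oid(oid)
  arcs = oid.split(".").map(&:to_i)
  body = [arcs[0] * 40 + arcs[1]]           # first two arcs share one byte
  arcs[2..].each do |arc|
    bytes = []
    begin                                   # base-128, most significant first
      bytes.unshift(arc & 0x7F)
      arc >>= 7
    end while arc > 0
    bytes[0...-1] = bytes[0...-1].map { |b| b | 0x80 }  # continuation bit
    body.concat(bytes)
  end
  ([0x06, body.length] + body).pack("C*")   # 0x06 = OBJECT IDENTIFIER tag
end

encode_oid("1.2.840.113549.1.1.1").unpack1("H*")
# => "06092a864886f70d010101"
```

That output matches the byte pattern `FindX509PubKeyIndex` searches for inside the certificate.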
12,459,390 | Our customer has their public RSA key stored in a certificate.
We need this key hardcoded in our WinRT app, so we can encrypt client-side. However, we're having issues importing the key into an instance of the CryptographicKey class.
We're using the ImportPublicKey on the RSAProvider:
```
rsaProvider = AsymmetricKeyAlgorithmProvider.OpenAlgorithm(AsymmetricAlgorithmNames.RsaPkcs1);
key = rsaProvider.ImportPublicKey(publicKeyBuffer);
```
We've tried loading several things into the publicKeyBuffer: The certificate, the public key exported from the certificate in several formats.
How do we load their public key? | 2012/09/17 | [
"https://Stackoverflow.com/questions/12459390",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/71515/"
]
| I found this article in the MSDN Forum very helpful. Carlos Lopez posted some code to get the public key out of a Base64-encoded certificate.
<http://social.msdn.microsoft.com/Forums/en-US/17e1467a-2de7-47d2-8d8c-130518eaac68/how-to-use-a-x509-certificate-not-a-pfx-to-verify-a-signature>
Here is the code:
```
public static CryptographicKey GetCryptographicPublicKeyFromCert(string strCert)
{
int length;
CryptographicKey CryptKey = null;
byte[] bCert = Convert.FromBase64String(strCert);
// Assume Cert contains RSA public key
// Find matching OID in the certificate and return public key
byte[] rsaOID = EncodeOID("1.2.840.113549.1.1.1");
int index = FindX509PubKeyIndex(bCert, rsaOID, out length);
// Found X509PublicKey in certificate so copy it.
if (index > -1)
{
byte[] X509PublicKey = new byte[length];
Array.Copy(bCert, index, X509PublicKey, 0, length);
AsymmetricKeyAlgorithmProvider AlgProvider = AsymmetricKeyAlgorithmProvider.OpenAlgorithm(AsymmetricAlgorithmNames.RsaPkcs1);
CryptKey = AlgProvider.ImportPublicKey(CryptographicBuffer.CreateFromByteArray(X509PublicKey));
}
return CryptKey;
}
static int FindX509PubKeyIndex(byte[] Reference, byte[] value, out int length)
{
int index = -1;
bool found;
length = 0;
for (int n = 0; n < Reference.Length; n++)
{
if ((Reference[n] == value[0]) && (n + value.Length < Reference.Length))
{
index = n;
found = true;
for (int m = 1; m < value.Length; m++)
{
if (Reference[n + m] != value[m])
{
found = false;
break;
}
}
if (found) break;
else index = -1;
}
}
if (index > -1)
{
// Find outer Sequence
while (index > 0 && Reference[index] != 0x30) index--;
index--;
while (index > 0 && Reference[index] != 0x30) index--;
}
if (index > -1)
{
// Find the length of encoded Public Key
if ((Reference[index + 1] & 0x80) == 0x80)
{
int numBytes = Reference[index + 1] & 0x7F;
for (int m = 0; m < numBytes; m++)
{
length += (Reference[index + 2 + m] << ((numBytes - 1 - m) * 8));
}
length += 4;
}
else
{
length = Reference[index + 1] + 2;
}
}
return index;
}
static public byte[] EncodeOID(string szOID)
{
int[] OIDNums;
byte[] pbEncodedTemp = new byte[64];
byte[] pbEncoded = null;
int n, index, num, count;
OIDNums = ParseOID(szOID);
pbEncodedTemp[0] = 6;
pbEncodedTemp[2] = Convert.ToByte(OIDNums[0] * 40 + OIDNums[1]);
count = 1;
for (n = 2, index = 3; n < OIDNums.Length; n++)
{
num = OIDNums[n];
if (num >= 16384)
{
pbEncodedTemp[index++] = Convert.ToByte(num / 16384 | 0x80);
num = num % 16384;
count++;
}
if (num >= 128)
{
pbEncodedTemp[index++] = Convert.ToByte(num / 128 | 0x80);
num = num % 128;
count++;
}
pbEncodedTemp[index++] = Convert.ToByte(num);
count++;
}
pbEncodedTemp[1] = Convert.ToByte(count);
pbEncoded = new byte[count + 2];
Array.Copy(pbEncodedTemp, 0, pbEncoded, 0, count + 2);
return pbEncoded;
}
static public int[] ParseOID(string szOID)
{
int nlast, n = 0;
bool fFinished = false;
int count = 0;
int[] dwNums = null;
do
{
nlast = n;
n = szOID.IndexOf(".", nlast);
if (n == -1) fFinished = true;
count++;
n++;
} while (fFinished == false);
dwNums = new int[count];
count = 0;
fFinished = false;
do
{
nlast = n;
n = szOID.IndexOf(".", nlast);
if (n != -1)
{
dwNums[count] = Convert.ToInt32(szOID.Substring(nlast, n - nlast), 10);
}
else
{
fFinished = true;
dwNums[count] = Convert.ToInt32(szOID.Substring(nlast, szOID.Length - nlast), 10);
}
n++;
count++;
} while (fFinished == false);
return dwNums;
}
``` | For those banging their heads over how to use a public key stored in a certificate in a WinRT app, let me ease your pain: you can't, at least not directly.
The `AsymmetricKeyAlgorithmProvider.ImportPublicKey` function takes an IBuffer and a CryptographicPublicKeyBlobType; the keyBlob (IBuffer) parameter is the certificate's public key, not the full certificate.
But you can't get the public key of the certificate without parsing it first, and here is where the problem lies: there is no way to parse the certificate on WinRT. The most commonly used class for this task, X509Certificate, is not available, nor is its namespace, and the certificate facilities are only meant for web service connections.
The only way to work around this is to implement a certificate parser, or to port such functionality from an open-source project like [Bouncy Castle](http://www.bouncycastle.org/). So, if you know of one, please leave it in the comments.
By the way, to export the public key from the certificate (in plain .NET) in a format that can be used in a WinRT app, use this:
```
X509Certificate2 Certificate;
....
byte[] CertificatePublicKey = Certificate.PublicKey.EncodedKeyValue.RawData;
```
Then in the WinRT app use this:
```
AsymmetricKeyAlgorithmProvider algorithm = AsymmetricKeyAlgorithmProvider.OpenAlgorithm(AsymmetricAlgorithmNames.RsaSignPkcs1Sha1);
IBuffer KeyBuffer = CryptographicBuffer.DecodeFromBase64String(CertificatePublicKeyContent);
CryptographicKey key = algorithm.ImportPublicKey(KeyBuffer, CryptographicPublicKeyBlobType.Pkcs1RsaPublicKey);
```
Note that i encoded the public key in base 64 first, but you may use raw binary data instead (the CryptographicBuffer class has more methods for this purpose). |
52,429,600 | I wish to validate the attribute `amount`. The allowed amount differs per admin user role, so I need to allow the update (or not) based on the role of the admin who updates the funds, not the user the funds belong to.
```
class UserFund
validate :validate_amount
def validate_amount
if current_user.admin?
amount <= 100
elsif current_user.cs?
amount <= 25
elsif current_user.staff?
amount <= 15
end
end
end
```
`User has_many :user_funds`
`UserFund belongs_to :user`
BUT
current\_user != user.
The user who edits the funds (current\_user) is an admin, not the user the funds belong to.
But I think you should not use `current_user` in the model. What would be the best practice to solve this? | 2018/09/20 | [
"https://Stackoverflow.com/questions/52429600",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9612162/"
]
| If `current_user` has no relation to the `UserFund` object being saved, then I think it would be better to turn the validator into a separate method. And since that method implies some knowledge about the `User` class, it makes sense to move it to the `User` class as an instance method (or, if admin and user are different classes, to the admin class):
```
class User < ApplicationRecord
def amount_too_big?(amount)
return admin? && amount > 100 || cs? && amount > 25 || staff? && amount > 15
end
end
```
and call this method in the controller before saving `UserFund` object:
```
if current_user.amount_too_big?(amount)
# Render error message about unacceptably big funds
return
end
```
Why so? Very likely the `current_user` object is defined by some controller filter, so it's available only in controllers and views, while a `UserFund` object has access only to the `User` its funds belong to. Giving `UserFund` access to `current_user` would require passing it in as an argument, which validations do not accept.
Another point: when running from the Rails console, `current_user` is not defined; a `UserFund` object would have no idea what `current_user` is, so it could not use it at all. | If this validation is based on data from the same model, you can just skip `current_user`. For example, assuming this validation is for the `User` model with instance methods `admin?`, `cs?` and `staff?`, it can be written as:
```
class User < ApplicationRecord
validate :validate_amount
def validate_amount
if admin?
amount <= 100
elsif cs?
amount <= 25
elsif staff?
amount <= 15
end
end
end
```
However, the `valid?` method of ActiveRecord checks that the errors collection is empty, so your custom validation methods should add errors to it when you want validation to fail:
```
class User < ApplicationRecord
validate :validate_amount
def validate_amount
if admin? && amount > 100 || cs? && amount > 25 || staff? && amount > 15
errors.add(:amount, "is too big")
end
end
end
```
More details are here:
<https://guides.rubyonrails.org/active_record_validations.html#custom-methods> |
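Stripped of the Rails machinery, the core of either version is a role-to-limit lookup. A plain-Ruby sketch (role names and limits taken from the question; the method name is made up):

```ruby
# Per-role amount limits from the question.
LIMITS = { admin: 100, cs: 25, staff: 15 }.freeze

# True when a user with the given role may set the given amount.
def amount_allowed?(role, amount)
  limit = LIMITS[role]
  !limit.nil? && amount <= limit
end

amount_allowed?(:admin, 80)   # => true
amount_allowed?(:staff, 20)   # => false
amount_allowed?(:other, 5)    # => false, unknown roles are rejected
```

A table like this also keeps the limits in one place if more roles are added later.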
52,429,600 | I wish to validate the attribute `amount`. The allowed amount differs per admin user role, so I need to allow the update (or not) based on the role of the admin who updates the funds, not the user the funds belong to.
```
class UserFund
validate :validate_amount
def validate_amount
if current_user.admin?
amount <= 100
elsif current_user.cs?
amount <= 25
elsif current_user.staff?
amount <= 15
end
end
end
```
`User has_many :user_funds`
`UserFund belongs_to :user`
BUT
current\_user != user.
The user who edits the funds (current\_user) is an admin, not the user the funds belong to.
But I think you should not use `current_user` in the model. What would be the best practice to solve this? | 2018/09/20 | [
"https://Stackoverflow.com/questions/52429600",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9612162/"
]
| If `current_user` has no relation to the `UserFund` object being saved, then I think it would be better to turn the validator into a separate method. And since that method implies some knowledge about the `User` class, it makes sense to move it to the `User` class as an instance method (or, if admin and user are different classes, to the admin class):
```
class User < ApplicationRecord
def amount_too_big?(amount)
return admin? && amount > 100 || cs? && amount > 25 || staff? && amount > 15
end
end
```
and call this method in the controller before saving `UserFund` object:
```
if current_user.amount_too_big?(amount)
# Render error message about unacceptably big funds
return
end
```
Why so? Very likely the `current_user` object is defined by some controller filter, so it's available only in controllers and views, while a `UserFund` object has access only to the `User` its funds belong to. Giving `UserFund` access to `current_user` would require passing it in as an argument, which validations do not accept.
Another point: when running from the Rails console, `current_user` is not defined; a `UserFund` object would have no idea what `current_user` is, so it could not use it at all. | I think this is a good case for the "context" option on validations: <https://guides.rubyonrails.org/active_record_validations.html#on>
You can define different validations for different contexts (i.e. different roles).
```
validate :admin_amount, on: :admin
validate :cs_amount, on: :cs
validate :staff_amount, on: :staff
def admin_amount
  errors.add(:amount, 'is too big') if amount > 100
end
def cs_amount
  errors.add(:amount, 'is too big') if amount > 25
end
def staff_amount
  errors.add(:amount, 'is too big') if amount > 15
end
# Of course, you can DRY this up a lot; I'm just repeating the code to make it clearer
```
Now you can save a record (or check whether it is valid) using a "context" key.
```
fund = UserFund.new
fund.save(context: current_user.role.to_sym) #assuming the role is a string
# or fund.valid?(context: current_user.role.to_sym)
``` |
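On the DRY point: the three near-identical validators can be generated from a role-to-limit table with `define_method`. A hedged sketch of the idea in plain Ruby (the `FundCheck` class and its stubbed `errors` array stand in for the Rails model):

```ruby
ROLE_LIMITS = { admin: 100, cs: 25, staff: 15 }.freeze

class FundCheck
  attr_reader :errors

  def initialize(amount)
    @amount = amount
    @errors = []
  end

  # One validator method per role, generated from the table.
  ROLE_LIMITS.each do |role, limit|
    define_method("#{role}_amount") do
      errors << "amount is too big for #{role}" if @amount > limit
    end
  end
end

check = FundCheck.new(50)
check.admin_amount    # 50 <= 100, no error
check.cs_amount       # 50 > 25, adds an error
check.errors.length   # => 1
```

In the Rails model, the generated method body would call `errors.add(:amount, ...)` instead of appending to a plain array.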
8,581 | ### Context:
I'm a Psychology PhD student. As with many psychology PhD students, I know how to perform various statistical analyses using statistical software, up to techniques such as PCA, classification trees, and cluster analysis.
But it's not really satisfying because though I can explain why I did an analysis and what the indicators mean, I can't explain how the technique works.
The real problem is that mastering statistical software is easy, but it is limited. To learn new techniques in articles requires that I understand how to read mathematical equations. At present I couldn't calculate eigenvalues or K-means. Equations are like a foreign language to me.
### Question:
* Is there a comprehensive guide that helps with understanding equations in journal articles?
---
### Edit:
I thought the question would be more self-explanatory: above a certain complexity, statistical notation becomes gibberish to me. Let's say I would like to code my own functions in R or C++ to understand a technique, but there's a barrier: I can't transform an equation into a program.
And really: I don't know the situation in US doctoral schools, but in mine (France), the only courses I can follow are about some 16th-century literary movement... | 2011/03/21 | [
"https://stats.stackexchange.com/questions/8581",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3827/"
]
| ### Overview:
* My impression is that your experience is common to a lot of students in the social sciences.
* The starting point is a motivation to learn.
* You can go down either **self-taught** or **formal instruction** routes.
### Formal instruction:
There are many options in this regard.
You might consider a masters in statistics or just taking a few subjects in a statistics department.
However, you'd probably want to check that you have the necessary mathematical background. Depending on the course, you may find that you need to revisit pre-calculus mathematics, and perhaps some material such as calculus and linear algebra before tackling university-level mathematically rigorous statistics subjects.
### Self-taught
Alternatively, you could go down the self-taught route.
There are heaps of good resources on the internet.
In particular, reading and doing exercises in mathematics text books is important, but probably not sufficient. It's important to listen to instructors talking about mathematics and watch them solve problems.
It's also important to think about your mathematical goals and the mathematical prerequisites required to achieve those goals. If equations are like a foreign language to you, then you may find that you need to first study elementary mathematics.
I've prepared a few resources aimed at assisting people who are making the transition from using statistical software to understanding the underlying mathematics.
* **Videos**: [List of Free Online Mathematics Videos](http://jeromyanglim.blogspot.com/2009/05/online-mathematics-video-courses-for.html) - This post also provides some guidance regarding what would be an appropriate mathematical sequence starting from pre-calculus and working through calculus, linear algebra, probability, and mathematical statistics.
Also see this question on [mathematical statistics videos](https://stats.stackexchange.com/questions/485/mathematical-statistics-videos).
* **Reading and pronunciation** - One of the first challenges is learning how to pronounce and read mathematical equations. I wrote two posts, one on [pronunciation](http://jeromyanglim.blogspot.com/2009/05/pronunciation-guides-for-mathematical.html) and another on [tips for reading mathematics for the non-mathematician](http://jeromyanglim.blogspot.com/2010/01/tips-on-reading-mathematics-for-non.html).
* **Writing** - Learning to write mathematics can help with reading mathematics. Try learning LaTeX and check out some of the guides on [mathematics in LaTeX](http://en.wikibooks.org/wiki/LaTeX/Mathematics)
* **Books**: When it comes to learning mathematics, I think it is worth investing in a few good textbooks. However, there are a lot of [free online options](http://jeromyanglim.blogspot.com/2010/01/free-online-mathematics-books.html) these days | I understand your difficulty, as I have a similar problem when I try to do something new in statistics (I'm also a grad student, but in a different field). I have found examining the R code quite useful for getting an idea of how something is calculated. For example, I have recently been learning how to use `kmeans` clustering and have many basic questions, both conceptual and about how it is implemented. Using an `R` installation (I recommend `R Studio`, <http://www.rstudio.org/>, but any installation works), just type `kmeans` into the command line. Here is an example of *part* of the output:
```
x <- as.matrix(x)
m <- nrow(x)
if (missing(centers))
stop("'centers' must be a number or a matrix")
nmeth <- switch(match.arg(algorithm), `Hartigan-Wong` = 1,
Lloyd = 2, Forgy = 2, MacQueen = 3)
if (length(centers) == 1L) {
if (centers == 1)
nmeth <- 3
k <- centers
if (nstart == 1)
centers <- x[sample.int(m, k), , drop = FALSE]
if (nstart >= 2 || any(duplicated(centers))) {
cn <- unique(x)
mm <- nrow(cn)
if (mm < k)
stop("more cluster centers than distinct data points.")
centers <- cn[sample.int(mm, k), , drop = FALSE]
}
}
```
I'm not sure how practical it is to examine the source every time, but it really helps me get an idea of what is going on, assuming you have some familiarity with the syntax.
A previous question I asked on [stackoverflow](https://stackoverflow.com/questions/5297889/how-do-i-find-the-parameters-used-in-kmeans-to-create-clusters) pointed me in this direction, but also helpfully told me that the comments about the code are sometimes included [here](http://svn.r-project.org/R/trunk/src/library/stats/R/kmeans.R).
---
More generally, the [Journal of Statistical Software](http://www.jstatsoft.org/) illustrates this link between theory and implementation; it is frequently about advanced topics (that I personally have difficulty understanding), but it is useful as an example.
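To make the "equation to program" jump concrete: one full k-means iteration is just two steps, assign each point to its nearest centre, then move each centre to the mean of its assigned points. A naive one-dimensional sketch (plain Ruby, no libraries; real implementations such as R's `kmeans` add initialization strategies and convergence checks):

```ruby
# One naive k-means iteration on 1-D data.
def kmeans_step(points, centers)
  clusters = Array.new(centers.length) { [] }
  points.each do |p|
    # Assignment step: index of the nearest centre.
    k = (0...centers.length).min_by { |i| (p - centers[i]).abs }
    clusters[k] << p
  end
  # Update step: each centre moves to the mean of its points.
  centers.each_index.map do |i|
    clusters[i].empty? ? centers[i] : clusters[i].sum / clusters[i].length
  end
end

points  = [1.0, 2.0, 10.0, 11.0, 12.0]
centers = kmeans_step(points, [0.0, 5.0])
# => [1.5, 11.0]
```

Iterating `kmeans_step` until the centres stop moving is the whole algorithm, which is often easier to see in ten lines of code than in the summation notation.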
8,581 | ### Context:
I'm a Psychology PhD student. As with many psychology PhD students, I know how to perform various statistical analyses using statistical software, up to techniques such as PCA, classification trees, and cluster analysis.
But it's not really satisfying because though I can explain why I did an analysis and what the indicators mean, I can't explain how the technique works.
The real problem is that mastering statistical software is easy, but it is limited. To learn new techniques in articles requires that I understand how to read mathematical equations. At present I couldn't calculate eigenvalues or K-means. Equations are like a foreign language to me.
### Question:
* Is there a comprehensive guide that helps with understanding equations in journal articles?
---
### Edit:
I thought the question would be more self explanatory: above a certain complexity, statistical notation becomes gibberish for me; let's say I would like to code my own functions in R or C++ to understand a technique but there's a barrier. I can't transform an equation into a program.
And really: I don't know the situation in US doctoral schools, but in mine (France), the only courses I can follow is about some 16th century litterary movement... | 2011/03/21 | [
"https://stats.stackexchange.com/questions/8581",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3827/"
]
| ### Overview:
* My impression is that your experience is common to a lot of students in the social sciences.
* The starting point is a motivation to learn.
* You can go down either **self-taught** or **formal instruction** routes.
### Formal instruction:
There are many options in this regard.
You might consider a masters in statistics or just taking a few subjects in a statistics department.
However, you'd probably want to check that you have the necessary mathematical background. Depending on the course, you may find that you need to revisit pre-calculus mathematics, and perhaps some material such as calculus and linear algebra before tackling university-level mathematically rigorous statistics subjects.
### Self-taught
Alternatively, you could go down the self-taught route.
There are heaps of good resources on the internet.
In particular, reading and doing exercises in mathematics text books is important, but probably not sufficient. It's important to listen to instructors talking about mathematics and watch them solve problems.
It's also important to think about your mathematical goals and the mathematical prerequisites required to achieve those goals. If equations are like a foreign language to you, then you may find that you need to first study elementary mathematics.
I've prepared a few resources aimed at assisting people who are making the transition from using statistical software to understanding the underlying mathematics.
* **Videos**: [List of Free Online Mathematics Videos](http://jeromyanglim.blogspot.com/2009/05/online-mathematics-video-courses-for.html) - This post also provides some guidance regarding what would be an appropriate mathematical sequence starting from pre-calculus and working through calculus, linear algebra, probability, and mathematical statistics.
Also see this question on [mathematical statistics videos](https://stats.stackexchange.com/questions/485/mathematical-statistics-videos).
* **Reading and pronunciation** - One of the first challenges is learning how to pronounce and read mathematical equations. I wrote two posts, one on [pronunciation](http://jeromyanglim.blogspot.com/2009/05/pronunciation-guides-for-mathematical.html) and another on [tips for reading mathematics for the non-mathematician](http://jeromyanglim.blogspot.com/2010/01/tips-on-reading-mathematics-for-non.html).
* **Writing** - Learning to write mathematics can help with reading mathematics. Try learning LaTeX and check out some of the guides on [mathematics in LaTeX](http://en.wikibooks.org/wiki/LaTeX/Mathematics)
* **Books**: When it comes to learning mathematics, I think it is worth investing in a few good textbooks. However, there are a lot of [free online options](http://jeromyanglim.blogspot.com/2010/01/free-online-mathematics-books.html) these days | I get the impression that you think that you can get insight into a statistical equation
by programming it into either R or C++; you can't. To understand a statistical equation,
find an "undergraduate" textbook that contains the equation and has lots of homework problems at the end of each chapter, and then do the homework for the chapter containing the equation.
For example, to understand PCA you need a good understanding of linear algebra, and in particular of singular value decomposition. While learning quantum computing through Michael Nielsen's book, it became apparent to me that I needed to review linear algebra. I came across Gilbert Strang's videos; they were extremely helpful in establishing a foundational understanding of the concepts. However, the nuance of the material did not get through until I found a linear algebra book containing a lot of homework problems, and then I needed to do them.
24,211,129 | * I'm looking for a generic solution so sysctl.conf is just an example.
I have several lines of configuration I need to be sure exist in all my servers' sysctl.conf:
```
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 1519344680
kernel.shmall = 4294967296
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.rmem_default = 33554432
net.core.wmem_default = 33554432
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 131072 16777216
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
```
The sysctl.conf file expects no leading blank spaces in the file.
I'd like to stuff all the above lines into a yaml file so as to comply with Craig Dunn's Roles/Profiles methodology.
If I try to use this format:
```
`content: "kernel.msgmnb = 65536\n kernel.msgmax = 65536\n kernel.shmmax = 1519344680\n kernel.shmall = 4294967296\n net.core.rmem_max = 33554432\n net.core.wmem_max = 33554432\n net.core.rmem_default = 33554432\n net.core.wmem_default = 33554432\n"`
```
the resultant file is written with a whitespace character in column 0 for each line but the first. I need a newline but I don't want the leading whitespace.
I'm trying to nail down a simple, easily replicable way to reproduce a config file on all my nodes WITHOUT using template erb files. I want all my lines of config in a yaml file.
Here is my module's init.pp:
```
class sysctl_conf {
$sysctl_lines = hiera('sysctl_conf')
file { '/tmp/test.txt':
content => $sysctl_lines,
}
}
```
If in common.yaml I use:
sysctl\_conf:
'content' : "net.ipv4.ip\_forward = 0\n net.ipv4.conf.default.rp\_filter = 1\n net.ipv4.conf.default.accept\_source\_route = 0 kernel.sysrq = 0 kernel.core\_uses\_pid = 1 net.ipv4.tcp\_syncookies = 1 net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 kernel.msgmnb = 65536\n kernel.msgmax = 65536\n"
I end up with " Munging failed for value in class content: can't convert Hash into String" | 2014/06/13 | [
"https://Stackoverflow.com/questions/24211129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/189395/"
]
| There are different ways to go about this.
Personally I think keeping the whole file as one string in your YAML is a poor choice, but the [syntax](http://en.wikipedia.org/wiki/YAML#Syntax) for that would be
```
sysctl_content: |
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 0
...
```
A better alternative is an array
```
sysctl_lines:
- net.ipv4.tcp_syncookies=1
- net.bridge.bridge-nf-call-ip6tables=0
```
Then get them into the file using a template such as
```
# This file is managed by Puppet
<% scope.function_hiera("sysctl_lines", []).each do |line| -%>
<%= line %>
<% end -%>
```
Or better yet, make it a hash
```
sysctl_settings:
net.ipv4.tcp_syncookies: 1
net.bridge.bridge-nf-call-ip6tables: 0
...
```
And a template like
```
# This file is managed by Puppet
<% scope.function_hiera("sysctl_settings", []).each do |key,val| -%>
<%= key %> = <%= val %>
<% end -%>
```
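For illustration, the hash-driven template renders like this in plain Ruby/ERB (stdlib only; a local hash stands in for the hiera lookup):

```ruby
require "erb"

# Stand-in for scope.function_hiera("sysctl_settings", []).
settings = {
  "net.ipv4.tcp_syncookies" => 1,
  "kernel.msgmax"           => 65536,
}

template = ERB.new(<<~TPL, trim_mode: "-")
  # This file is managed by Puppet
  <% settings.each do |key, val| -%>
  <%= key %> = <%= val %>
  <% end -%>
TPL

puts template.result(binding)
# # This file is managed by Puppet
# net.ipv4.tcp_syncookies = 1
# kernel.msgmax = 65536
```

Note there is no leading whitespace on the rendered lines, which is exactly the sysctl.conf requirement from the question.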
This has the advantage that you can spread distinct value overrides throughout your hierarchy if you use the `hiera_hash` lookup method instead. | Managing sysctl.conf with templates and files is an awful idea.
There are dozens of applications that might need to make alterations, so you don't want one module to control them all.
This is a much better way to do it : [link](http://projects.puppetlabs.com/projects/1/wiki/puppet_augeas#etcsysctlconf)
Now you can set individual settings in individual modules, or have your base class set certain settings, and have your database class (for example) configure other settings without having to redefine the entire file.
24,211,129 | * I'm looking for a generic solution so sysctl.conf is just an example.
I have several lines of configuration I need to be sure exist in all my servers' sysctl.conf:
```
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 1519344680
kernel.shmall = 4294967296
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.rmem_default = 33554432
net.core.wmem_default = 33554432
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 131072 16777216
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
```
The sysctl.conf file expects no leading blank spaces in the file.
I'd like to stuff all the above lines into a yaml file so as to comply with Craig Dunn's Roles/Profiles methodology.
If I try to use this format:
```
`content: "kernel.msgmnb = 65536\n kernel.msgmax = 65536\n kernel.shmmax = 1519344680\n kernel.shmall = 4294967296\n net.core.rmem_max = 33554432\n net.core.wmem_max = 33554432\n net.core.rmem_default = 33554432\n net.core.wmem_default = 33554432\n"`
```
the resultant file is written with a whitespace character in column 0 for each line but the first. I need a newline but I don't want the leading whitespace.
I'm trying to nail down a simple, easily replicable way to reproduce a config file on all my nodes WITHOUT using template erb files. I want all my lines of config in a yaml file.
Here is my module's init.pp:
```
class sysctl_conf {
$sysctl_lines = hiera('sysctl_conf')
file { '/tmp/test.txt':
content => $sysctl_lines,
}
}
```
If in common.yaml I use:
sysctl_conf:
'content' : "net.ipv4.ip_forward = 0\n net.ipv4.conf.default.rp_filter = 1\n net.ipv4.conf.default.accept_source_route = 0 kernel.sysrq = 0 kernel.core_uses_pid = 1 net.ipv4.tcp_syncookies = 1 net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 kernel.msgmnb = 65536\n kernel.msgmax = 65536\n"
I end up with "Munging failed for value in class content: can't convert Hash into String" | 2014/06/13 | [
"https://Stackoverflow.com/questions/24211129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/189395/"
]
| There are different ways to go about this.
Personally I think keeping the whole file as one string in your YAML is a poor choice, but the [syntax](http://en.wikipedia.org/wiki/YAML#Syntax) for that would be
```
sysctl_content: |
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 0
...
```
A better alternative is an array
```
sysctl_lines:
- net.ipv4.tcp_syncookies=1
- net.bridge.bridge-nf-call-ip6tables=0
```
Then get them into the file using a template such as
```
# This file is managed by Puppet
<% scope.function_hiera("sysctl_lines", []).each do |line| -%>
<%= line %>
<% end -%>
```
Or better yet, make it a hash
```
sysctl_settings:
net.ipv4.tcp_syncookies: 1
net.bridge.bridge-nf-call-ip6tables: 0
...
```
And a template like
```
# This file is managed by Puppet
<% scope.function_hiera("sysctl_settings", []).each do |key,val| -%>
<%= key %> = <%= val %>
<% end -%>
```
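To make the template's output concrete, here is a rough Python analogue of what that ERB loop emits from the `sysctl_settings` hash (the keys below are just the example values from above; this is an illustration, not part of the Puppet run):

```python
# Illustrative stand-in for the ERB template: render "key = value" lines
# from a settings hash, the way the hiera-backed template does.
def render_sysctl(settings):
    lines = ["# This file is managed by Puppet"]
    for key, value in settings.items():
        lines.append(f"{key} = {value}")
    return "\n".join(lines) + "\n"

example = {
    "net.ipv4.tcp_syncookies": 1,
    "net.bridge.bridge-nf-call-ip6tables": 0,
}
print(render_sysctl(example))
```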
This has the advantage that you can spread distinct value overrides throughout your hierarchy if you use the `hiera_hash` lookup method instead. | Although your approach may not be the best idea and the other answers show better approaches:
Your whitespace in front of each line but the first is there because you put it there:
```
content: "kernel.msgmnb = 65536\n kernel.msgmax = 65536\n kernel.shmmax = 1519344680\n kernel.shmall = 4294967296\n net.core.rmem_max = 33554432\n net.core.wmem_max = 33554432\n net.core.rmem_default = 33554432\n net.core.wmem_default = 33554432\n"
```
You've put a `" "` after each and every `"\n"`. That's the problem.
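To see the effect in isolation (sketched in Python purely to illustrate the string handling, not anything Puppet-specific):

```python
# The content string pattern from the YAML: note the space after each "\n".
broken = "kernel.msgmnb = 65536\n kernel.msgmax = 65536\n"
# Every line after the first now starts with a leading space.
second_line = broken.splitlines()[1]
assert second_line.startswith(" ")

# Dropping the stray space after each newline removes the indentation.
fixed = broken.replace("\n ", "\n")
assert fixed.splitlines()[1] == "kernel.msgmax = 65536"
```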
Replace each `"\n "` with `"\n"` and you are fine. |
62,128,845 | "A specified logon session does not exist. It may already have been terminated" after I joined the device to Azure Active Directory
I can't access our shared folder on our server after I joined the device to Azure AD using an Office 365 account (please click the link below to see the error image for reference), but if I use the local administrator of the device I can access the file server using the credentials with no problem. Please note that we don't have an on-premises Active Directory or GPO. Kindly help me.
[a specified logon session does not exist. it may already have been terminated](https://i.stack.imgur.com/iQJ7W.png) | 2020/06/01 | [
"https://Stackoverflow.com/questions/62128845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4002256/"
]
| The repository run by Boundless has been replaced by one hosted by OSGeo. The OSGeo webdav repo was merged into the new OSGeo repo. Details are here (<https://docs.geotools.org/latest/userguide/tutorial/quickstart/maven.html>)
Replace this block
```
<repository>
<id>osgeo</id>
<name>Open Source Geospatial Foundation Repository</name>
<url>http://download.osgeo.org/webdav/geotools/</url>
</repository>
<repository> <!--Add the snapshot repository here-->
<snapshots>
<enabled>true</enabled>
</snapshots>
<id>opengeo</id>
<name>OpenGeo Maven Repository</name>
<url>http://repo.opengeo.org</url>
</repository>
```
with
```
<repository>
<id>osgeo</id>
<name>OSGeo Release Repository</name>
<url>https://repo.osgeo.org/repository/release/</url>
<snapshots><enabled>false</enabled></snapshots>
<releases><enabled>true</enabled></releases>
</repository>
<repository>
<id>osgeo-snapshot</id>
<name>OSGeo Snapshot Repository</name>
<url>https://repo.osgeo.org/repository/snapshot/</url>
<snapshots><enabled>true</enabled></snapshots>
<releases><enabled>false</enabled></releases>
</repository>
``` | If you paste the `Open Source Geospatial Foundation Repository` URL in your web browser and hit enter, it will return a `404 Not Found` error. This happens when Maven attempts to connect to that resource to fetch a dependency for you but it's no longer available. However, if you pay close attention to the `Maven Repo` link, there is a note:
>
> Note: this artefact is located at Boundless repository (<https://repo.boundlessgeo.com/main/>)
>
>
>
Try to replace `http://download.osgeo.org/webdav/geotools/` with the URL given in the note and run `mvn clean install`.
Please let me know if that worked for you. |
7,903,550 | I made a web page with the following code and viewed it in Google Chrome.
```
<html>
<head>
<style type="text/css">
html {padding:30px; background-color:blue;}
body {margin:0px; background-color:red;}
</style>
</head>
<body>
hello world
</body>
</html>
```
The result is what I expected, a red box with a 30 pixel blue border that fills the entire web browser window. However, when I view it in Firefox, the red box is only the height of one line-height. In IE8, there is no blue border.
How do I make Firefox and IE8 display the same thing as what I see in Google Chrome?
**Additional notes** I tried adding different doctype tags to the page, but that only made it appear like Firefox, that is, the 1 line-height of red. | 2011/10/26 | [
"https://Stackoverflow.com/questions/7903550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/27305/"
]
| If I understand you correctly, set your html & body width to 100% and height to 100%:
>
> <http://jsfiddle.net/Diezel23/Lv6Vw/#base>
>
>
> | You could add an additional div:
```
<html>
<head>
<style>
body {
padding: 30px;
margin: 0px;
}
div {
width: 100%;
height: 100%;
background-color: blue;
}
</style>
</head>
<body>
<div>
ABC
</div>
</body>
</html>
``` |
7,903,550 | I made a web page with the following code and viewed it in Google Chrome.
```
<html>
<head>
<style type="text/css">
html {padding:30px; background-color:blue;}
body {margin:0px; background-color:red;}
</style>
</head>
<body>
hello world
</body>
</html>
```
The result is what I expected, a red box with a 30 pixel blue border that fills the entire web browser window. However, when I view it in Firefox, the red box is only the height of one line-height. In IE8, there is no blue border.
How do I make Firefox and IE8 display the same thing as what I see in Google Chrome?
**Additional notes** I tried adding different doctype tags to the page, but that only made it appear like Firefox, that is, the 1 line-height of red. | 2011/10/26 | [
"https://Stackoverflow.com/questions/7903550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/27305/"
]
| For this I think you have to resort to absolute or relative positioning; otherwise, your height/margin combo will push the bottom blue line off the screen. This works cross browser for this simple case. Hopefully it works for your more complicated use case.
```
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
body { background:blue; }
.first{
position:absolute; /* fixed also works */
background:red;
top:30px;
left:30px;
right:30px;
bottom:30px;
}
</style>
</head>
<body>
<div class="first">hello world</div>
</body>
</html>
``` | If I understand you correctly, set your html & body width to 100% and height to 100%:
>
> <http://jsfiddle.net/Diezel23/Lv6Vw/#base>
>
>
> |
7,903,550 | I made a web page with the following code and viewed it in Google Chrome.
```
<html>
<head>
<style type="text/css">
html {padding:30px; background-color:blue;}
body {margin:0px; background-color:red;}
</style>
</head>
<body>
hello world
</body>
</html>
```
The result is what I expected, a red box with a 30 pixel blue border that fills the entire web browser window. However, when I view it in Firefox, the red box is only the height of one line-height. In IE8, there is no blue border.
How do I make Firefox and IE8 display the same thing as what I see in Google Chrome?
**Additional notes** I tried adding different doctype tags to the page, but that only made it appear like Firefox, that is, the 1 line-height of red. | 2011/10/26 | [
"https://Stackoverflow.com/questions/7903550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/27305/"
]
| For this I think you have to resort to absolute or relative positioning; otherwise, your height/margin combo will push the bottom blue line off the screen. This works cross browser for this simple case. Hopefully it works for your more complicated use case.
```
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
body { background:blue; }
.first{
position:absolute; /* fixed also works */
background:red;
top:30px;
left:30px;
right:30px;
bottom:30px;
}
</style>
</head>
<body>
<div class="first">hello world</div>
</body>
</html>
``` | You could add an additional div:
```
<html>
<head>
<style>
body {
padding: 30px;
margin: 0px;
}
div {
width: 100%;
height: 100%;
background-color: blue;
}
</style>
</head>
<body>
<div>
ABC
</div>
</body>
</html>
``` |
210,811 | So I created the text with the add text button - I'm using version 2.91 - then I went to the right tab with all the properties functions: world properties, materials properties, physics properties, etc., then I extruded the text a bit(stretched it).
Next, I went to the materials tab to color it, clicked on new , the sphere showed up, then I clicked on the color line(base color) and chose a color, but nothing happened to the text. | 2021/02/06 | [
"https://blender.stackexchange.com/questions/210811",
"https://blender.stackexchange.com",
"https://blender.stackexchange.com/users/116368/"
]
| You probably need to switch preview modes. Press Z in the 3D viewport and pick either "Material Preview" or "Rendered" from the pie menu. The Preview Modes and Viewport Shading Options can also be accessed from the 4 circles and the little down arrow, respectively, at the top right of the viewport:
[](https://i.stack.imgur.com/zQgRN.png) | Make sure you are on the Render preview or Material preview tab.
[](https://i.stack.imgur.com/3dQ7G.png) |
696,551 | We had feedback from our pentest report saying we should turn off server tokens. This is to stop people from being able to see which version of PHP we are using, and to limit their ability to target the specific PHP version.
I have added the following to nginx.conf, under the http block:
```
server_tokens off;
```
But what tools can I use to check this change has taken effect? | 2015/06/04 | [
"https://serverfault.com/questions/696551",
"https://serverfault.com",
"https://serverfault.com/users/202300/"
]
| From the [manual](http://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens) you know what the setting does:
>
> **Syntax**: `server_tokens on | off`;
>
> **Default**: `server_tokens on`;
>
> **Context**: http, server, location
>
>
> *Enables or disables emitting nginx version in error messages and in the “Server” response header field.*
>
>
>
So your options are:
* generate an error message, for instance if you don't have a custom 404 error message simply request a non-existing page and in the footer you won't see the version information `nginx/1.2.3` any more.
* inspect the server headers and confirm that the version is no longer displayed.
A simple check to see the HTTP response headers is to manually connect i.e. with: `telnet www.example.com 80` where the client lines are what you enter:
>
> client: HEAD / HTTP/1.1
>
> client: Host: www.example.com
>
>
> server: HTTP/1.1 200 OK
>
> server: Date: Wed, 1 Jan 1970 22:13:05 GMT
>
> server: **Server: Nginx/1.2.3**
>
> server: Connection: close
>
> server: Content-Type: text/html
>
>
> | Take a look at InSpec, a tool that allows you to "turn your compliance, security, and other policy requirements into automated tests."
<https://www.inspec.io>
It can do all the configuration testing that you need for your Nginx server. Here's one way to test for the existence of the conf file and the value of `server_tokens`:
```
conf_path = '/etc/nginx/nginx.conf'
control 'Server tokens should be off' do
describe file(conf_path) do
it 'The config file should exist and be a file.' do
expect(subject).to(exist)
expect(subject).to(be_file)
end
end
if (File.exist?(conf_path))
Array(nginx_conf(conf_path).params['http']).each do |http|
describe "http:" do
it 'server_tokens should be off if found in the http context.' do
Array(http["server_tokens"]).each do |tokens|
expect(tokens).to(cmp 'off')
end
end
end
end
end
end
```
If set correctly, InSpec returns:
```
✔ Server tokens should be off: File /etc/nginx/nginx.conf
✔ File /etc/nginx/nginx.conf The config file should exist and be a file.
✔ http: server_tokens should be off if found in the http context.
```
If not:
```
× Server tokens should be off: File /etc/nginx/nginx.conf (1 failed)
✔ File /etc/nginx/nginx.conf The config file should exist and be a file.
× http: server_tokens should be off if found in the http context.
expected: "off"
got: ["on"]
(compared using `cmp` matcher)
``` |
696,551 | We had feedback from our pentest report saying we should turn off server tokens. This is to stop people from being able to see which version of PHP we are using, and to limit their ability to target the specific PHP version.
I have added the following to nginx.conf, under the http block:
```
server_tokens off;
```
But what tools can I use to check this change has taken effect? | 2015/06/04 | [
"https://serverfault.com/questions/696551",
"https://serverfault.com",
"https://serverfault.com/users/202300/"
]
| From the [manual](http://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens) you know what the setting does:
>
> **Syntax**: `server_tokens on | off`;
>
> **Default**: `server_tokens on`;
>
> **Context**: http, server, location
>
>
> *Enables or disables emitting nginx version in error messages and in the “Server” response header field.*
>
>
>
So your options are:
* generate an error message, for instance if you don't have a custom 404 error message simply request a non-existing page and in the footer you won't see the version information `nginx/1.2.3` any more.
* inspect the server headers and confirm that the version is no longer displayed.
A simple check to see the HTTP response headers is to manually connect i.e. with: `telnet www.example.com 80` where the client lines are what you enter:
>
> client: HEAD / HTTP/1.1
>
> client: Host: www.example.com
>
>
> server: HTTP/1.1 200 OK
>
> server: Date: Wed, 1 Jan 1970 22:13:05 GMT
>
> server: **Server: Nginx/1.2.3**
>
> server: Connection: close
>
> server: Content-Type: text/html
>
>
> | Also, if you serve PHP projects, you may need to change this in `/etc/nginx/{fastcgi,fastcgi_params}.conf`:
```
fastcgi_param SERVER_SOFTWARE nginx;
``` |
696,551 | We had feedback from our pentest report saying we should turn off server tokens. This is to stop people from being able to see which version of PHP we are using, and to limit their ability to target the specific PHP version.
I have added the following to nginx.conf, under the http block:
```
server_tokens off;
```
But what tools can I use to check this change has taken effect? | 2015/06/04 | [
"https://serverfault.com/questions/696551",
"https://serverfault.com",
"https://serverfault.com/users/202300/"
]
| After a bit more googling, I have found that the curl command can check the server headers, which show both server tokens and PHP versions:
```
curl -I -L www.example.com
```
Thanks to Alexey for pointing out the change needed in PHP.
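If you want to assert on the header programmatically rather than eyeballing the output, here is a small sketch (it parses a captured raw response; the `raw` string below is made-up sample data, not output from a real server):

```python
# Extract the Server header from a raw HTTP response and check that it
# carries no version number.
def server_header(raw_response):
    for line in raw_response.split("\r\n"):
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return ""

raw = "HTTP/1.1 200 OK\r\nServer: nginx\r\nContent-Type: text/html\r\n\r\n"
value = server_header(raw)
# "nginx" is fine; "nginx/1.2.3" would still leak the version.
assert "/" not in value
print(value)  # nginx
```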
```
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 04 Jun 2015 10:49:35 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://www.example.com
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 04 Jun 2015 10:49:36 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Thu, 04 Jun 2015 10:49:35 GMT
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
ETag: "1433414975"
Content-Language: en
``` | Take a look at InSpec, a tool that allows you to "turn your compliance, security, and other policy requirements into automated tests."
<https://www.inspec.io>
It can do all the configuration testing that you need for your Nginx server. Here's one way to test for the existence of the conf file and the value of `server_tokens`:
```
conf_path = '/etc/nginx/nginx.conf'
control 'Server tokens should be off' do
describe file(conf_path) do
it 'The config file should exist and be a file.' do
expect(subject).to(exist)
expect(subject).to(be_file)
end
end
if (File.exist?(conf_path))
Array(nginx_conf(conf_path).params['http']).each do |http|
describe "http:" do
it 'server_tokens should be off if found in the http context.' do
Array(http["server_tokens"]).each do |tokens|
expect(tokens).to(cmp 'off')
end
end
end
end
end
end
```
If set correctly, InSpec returns:
```
✔ Server tokens should be off: File /etc/nginx/nginx.conf
✔ File /etc/nginx/nginx.conf The config file should exist and be a file.
✔ http: server_tokens should be off if found in the http context.
```
If not:
```
× Server tokens should be off: File /etc/nginx/nginx.conf (1 failed)
✔ File /etc/nginx/nginx.conf The config file should exist and be a file.
× http: server_tokens should be off if found in the http context.
expected: "off"
got: ["on"]
(compared using `cmp` matcher)
``` |
696,551 | We had feedback from our pentest report saying we should turn off server tokens. This is to stop people from being able to see which version of PHP we are using, and to limit their ability to target the specific PHP version.
I have added the following to nginx.conf, under the http block:
```
server_tokens off;
```
But what tools can I use to check this change has taken effect? | 2015/06/04 | [
"https://serverfault.com/questions/696551",
"https://serverfault.com",
"https://serverfault.com/users/202300/"
]
| After a bit more googling, I have found that the curl command can check the server headers, which show both server tokens and PHP versions:
```
curl -I -L www.example.com
```
Thanks to Alexey for pointing out the change needed in PHP.
```
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 04 Jun 2015 10:49:35 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://www.example.com
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 04 Jun 2015 10:49:36 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Thu, 04 Jun 2015 10:49:35 GMT
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
ETag: "1433414975"
Content-Language: en
``` | Also, if you serve PHP projects, you may need to change this in `/etc/nginx/{fastcgi,fastcgi_params}.conf`:
```
fastcgi_param SERVER_SOFTWARE nginx;
``` |
696,551 | We had feedback from our pentest report saying we should turn off server tokens. This is to stop people from being able to see which version of PHP we are using, and to limit their ability to target the specific PHP version.
I have added the following to nginx.conf, under the http block:
```
server_tokens off;
```
But what tools can I use to check this change has taken effect? | 2015/06/04 | [
"https://serverfault.com/questions/696551",
"https://serverfault.com",
"https://serverfault.com/users/202300/"
]
| Also, if you serve PHP projects, you may need to change this in `/etc/nginx/{fastcgi,fastcgi_params}.conf`:
```
fastcgi_param SERVER_SOFTWARE nginx;
``` | Take a look at InSpec, a tool that allows you to "turn your compliance, security, and other policy requirements into automated tests."
<https://www.inspec.io>
It can do all the configuration testing that you need for your Nginx server. Here's one way to test for the existence of the conf file and the value of `server_tokens`:
```
conf_path = '/etc/nginx/nginx.conf'
control 'Server tokens should be off' do
describe file(conf_path) do
it 'The config file should exist and be a file.' do
expect(subject).to(exist)
expect(subject).to(be_file)
end
end
if (File.exist?(conf_path))
Array(nginx_conf(conf_path).params['http']).each do |http|
describe "http:" do
it 'server_tokens should be off if found in the http context.' do
Array(http["server_tokens"]).each do |tokens|
expect(tokens).to(cmp 'off')
end
end
end
end
end
end
```
If set correctly, InSpec returns:
```
✔ Server tokens should be off: File /etc/nginx/nginx.conf
✔ File /etc/nginx/nginx.conf The config file should exist and be a file.
✔ http: server_tokens should be off if found in the http context.
```
If not:
```
× Server tokens should be off: File /etc/nginx/nginx.conf (1 failed)
✔ File /etc/nginx/nginx.conf The config file should exist and be a file.
× http: server_tokens should be off if found in the http context.
expected: "off"
got: ["on"]
(compared using `cmp` matcher)
``` |
36,077,108 | ```
private void btnAssemble_Click(object sender, EventArgs e)
{
txtAssembled.Text = (cboTitle.Text + txtFirstName.Text[0] + txtMiddle.Text + txtLastName.Text + "\r\n" +txtStreet.Text + "\r\n"+ cboCity.Text);
}
```
I'm trying to get one character of white space in between cboTitle.Text, txtFirstName.Text, txtMiddle.Text, and txtLastName.Text, but they all output the information together and I want them spaced evenly. What do I need to do? Thanks in advance.
I'm going to post some other code that's below the one above in my project, just in case it might be relevant.
```
string AssembleText(string Title, string FirstName, string MiddleInitial, string LastName, string AddressLines, string City )
{
string Result = "";
Result += Title + " ";
Result += FirstName.Substring(0, 2) + " ";
// Only append middle initial if it is entered
if (MiddleInitial != "")
{
Result += MiddleInitial + " ";
}
Result += LastName + "\r\n";
// Only append items from the multiline address box
// if they are entered
if ( AddressLines != "")
{
Result += AddressLines + "\r\n";
}
//if (AddressLines.Length > 0 && AddressLines.ToString() != "")
//{
// Result += AddressLines + "\r\n";
//}
Result += City;
return Result;
}
}
```
} | 2016/03/18 | [
"https://Stackoverflow.com/questions/36077108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5650181/"
]
| If you just want a space between those specific fields in btnAssemble\_Click, you can just insert them like this:
`string myStr = foo + " " + bar + " " + baz;`
So your first function would be modified to read:
`private void btnAssemble_Click(object sender, EventArgs e)
{
txtAssembled.Text = (cboTitle.Text + " " + txtFirstName.Text[0] + " " + txtMiddle.Text + " " + txtLastName.Text + "\r\n" + txtStreet.Text + "\r\n" + cboCity.Text);
}`
A few other comments:
* It's not clear to me what the AssembleText() function you posted has to do with this. I am confused though, as I see a few lines appending spaces at the end just like I mentioned above.
* Using the String.Format() function may make this code easier to read and maintain.
* Using Environment.NewLine instead of "\r\n" will make the string contain the newline character defined for that specific environment.
* Using a StringBuilder object may be faster over concatenation when building strings inside of a loop (which may not apply here). | It seems that you want `String.Join`; whenever you want to combine strings with a delimiter, say, `" "` (space) all you need is to put
```
String combined = String.Join(" ",
cboTitle.Text,
txtFirstName.Text[0],
txtMiddle.Text,
txtLastName.Text);
```
Complete implementation (joining by *space* and *new line*) could be
```
txtAssembled.Text = String.Join(Environment.NewLine,
String.Join(" ",
cboTitle.Text,
txtFirstName.Text[0],
txtMiddle.Text,
txtLastName.Text),
txtStreet.Text,
cboCity.Text);
``` |
36,077,108 | ```
private void btnAssemble_Click(object sender, EventArgs e)
{
txtAssembled.Text = (cboTitle.Text + txtFirstName.Text[0] + txtMiddle.Text + txtLastName.Text + "\r\n" +txtStreet.Text + "\r\n"+ cboCity.Text);
}
```
I'm trying to get one character of white space in between cboTitle.Text, txtFirstName.Text, txtMiddle.Text, and txtLastName.Text, but they all output the information together and I want them spaced evenly. What do I need to do? Thanks in advance.
I'm going to post some other code that's below the one above in my project, just in case it might be relevant.
```
string AssembleText(string Title, string FirstName, string MiddleInitial, string LastName, string AddressLines, string City )
{
string Result = "";
Result += Title + " ";
Result += FirstName.Substring(0, 2) + " ";
// Only append middle initial if it is entered
if (MiddleInitial != "")
{
Result += MiddleInitial + " ";
}
Result += LastName + "\r\n";
// Only append items from the multiline address box
// if they are entered
if ( AddressLines != "")
{
Result += AddressLines + "\r\n";
}
//if (AddressLines.Length > 0 && AddressLines.ToString() != "")
//{
// Result += AddressLines + "\r\n";
//}
Result += City;
return Result;
}
}
```
} | 2016/03/18 | [
"https://Stackoverflow.com/questions/36077108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5650181/"
]
| Using String.Format() should fit the bill. It also makes your code easier to read.
```
txtAssembled.Text = String.Format("{0} {1} {2} {3}",
cboTitle.Text,
txtFirstName.Text[0],
txtMiddle.Text,
txtLastName.Text
);
``` | It seems that you want `String.Join`; whenever you want to combine strings with a delimiter, say, `" "` (space) all you need is to put
```
String combined = String.Join(" ",
cboTitle.Text,
txtFirstName.Text[0],
txtMiddle.Text,
txtLastName.Text);
```
Complete implementation (joining by *space* and *new line*) could be
```
txtAssembled.Text = String.Join(Environment.NewLine,
String.Join(" ",
cboTitle.Text,
txtFirstName.Text[0],
txtMiddle.Text,
txtLastName.Text),
txtStreet.Text,
cboCity.Text);
``` |
36,077,108 | ```
private void btnAssemble_Click(object sender, EventArgs e)
{
txtAssembled.Text = (cboTitle.Text + txtFirstName.Text[0] + txtMiddle.Text + txtLastName.Text + "\r\n" +txtStreet.Text + "\r\n"+ cboCity.Text);
}
```
I'm trying to get one character of white space in between cboTitle.Text, txtFirstName.Text, txtMiddle.Text, and txtLastName.Text, but they all output the information together and I want them spaced evenly. What do I need to do? Thanks in advance.
I'm going to post some other code that's below the one above in my project, just in case it might be relevant.
```
string AssembleText(string Title, string FirstName, string MiddleInitial, string LastName, string AddressLines, string City )
{
string Result = "";
Result += Title + " ";
Result += FirstName.Substring(0, 2) + " ";
// Only append middle initial if it is entered
if (MiddleInitial != "")
{
Result += MiddleInitial + " ";
}
Result += LastName + "\r\n";
// Only append items from the multiline address box
// if they are entered
if ( AddressLines != "")
{
Result += AddressLines + "\r\n";
}
//if (AddressLines.Length > 0 && AddressLines.ToString() != "")
//{
// Result += AddressLines + "\r\n";
//}
Result += City;
return Result;
}
}
```
} | 2016/03/18 | [
"https://Stackoverflow.com/questions/36077108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5650181/"
]
| It would be like this:
```
private void btnAssemble_Click(object sender, EventArgs e)
{
txtAssembled.Text = (cboTitle.Text + " " + txtFirstName.Text[0] + " " +txtMiddle.Text + " " + txtLastName.Text + "\r\n" +txtStreet.Text + "\r\n"+ cboCity.Text);
}
``` | It seems that you want `String.Join`; whenever you want to combine strings with a delimiter, say, `" "` (space) all you need is to put
```
String combined = String.Join(" ",
cboTitle.Text,
txtFirstName.Text[0],
txtMiddle.Text,
txtLastName.Text);
```
Complete implementation (joining by *space* and *new line*) could be
```
txtAssembled.Text = String.Join(Environment.NewLine,
String.Join(" ",
cboTitle.Text,
txtFirstName.Text[0],
txtMiddle.Text,
txtLastName.Text),
txtStreet.Text,
cboCity.Text);
``` |
11,193,183 | I use backbone.js and have a model without a collection.
In the view I call fetch on the model with a callback to render the view.
```
this.user.fetch({success: function(d) { self.randomUserView.render() }})
```
How can I make the view update automatically when the model changes? E.g. I don't want to specify the above callback every time I call fetch. I tried to bind the view to many model events on initialize but this did not work. | 2012/06/25 | [
"https://Stackoverflow.com/questions/11193183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69636/"
]
| On the view, add an event handler to the view's model:
```
initialize: function() {
this.model.on('change',this.render,this);
}
``` | Backbone is an event-driven, not a callback-driven, framework (although technically they are callbacks). And your approach does not seem native to Backbone. When you do fetch(), the user model will automatically trigger an "add" event. All you need to do is add this in the corresponding view's initialize:
```
initialize: function() {
... your code...
this.model.bind('add', this.render);
}
```
This way you subscribe to this event only once in the view init and don't have to ever pass explicit callbacks. |
11,193,183 | I use backbone.js and have a model without a collection.
In the view I call fetch on the model with a callback to render the view.
```
this.user.fetch({success: function(d) { self.randomUserView.render() }})
```
How can I make the view update automatically when the model changes? E.g. I don't want to specify the above callback every time I call fetch. I tried to bind the view to many model events on initialize but this did not work. | 2012/06/25 | [
"https://Stackoverflow.com/questions/11193183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69636/"
]
| On the view, add an event handler to the view's model:
```
initialize: function() {
this.model.on('change',this.render,this);
}
``` | Actually, if you want to have a view refresh on fetch on a collection, you need to bind RESET!
```
this.model.bind('reset', this.render, this);
```
Update is only fired if the current collection is edited.
PS: bindAll is dangerous and lazy (and probably going to cause you problems down the line). |
11,193,183 | I use backbone.js and have a model without a collection.
In the view I call fetch on the model with a callback to render the view.
```
this.user.fetch({success: function(d) { self.randomUserView.render() }})
```
How can I make the view update automatically when the model changes? E.g. I don't want to specify the above callback every time I call fetch. I tried to bind the view to many model events on initialize but this did not work. | 2012/06/25 | [
"https://Stackoverflow.com/questions/11193183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69636/"
]
| Backbone is an event-driven, not a callback-driven, framework (although technically they are callbacks). And your approach does not seem native to Backbone. When you do fetch(), the user model will automatically trigger an "add" event. All you need to do is add this in the corresponding view's initialize:
```
initialize: function() {
... your code...
this.model.bind('add', this.render);
}
```
This way you subscribe to this event only once in the view init and never have to pass explicit callbacks. | Actually, if you want a view to refresh on fetch of a collection, you need to bind RESET!
```
this.model.bind('reset', this.render, this);
```
Update is only fired if the current collection is edited.
P.S. bindAll is dangerous and lazy (and probably going to cause you problems down the line). |
11,140,010 | I've found a method using reflection (and got its `MethodInfo`). How can I invoke it without getting `TargetInvocationException` when exceptions are thrown?
**Update**
I'm creating a command implementation where the commands are handled by classes which implement
```
public interface ICommandHandler<T> where T : class, ICommand
{
public void Invoke(T command);
}
```
Since there is one dispatcher which takes care of finding and mapping all handlers to the correct command, I can't invoke the methods directly but must use reflection. Something like:
```
var handlerType = typeof(IHandlerOf<>).MakeGenericType(command.GetType());
var method = handlerType.GetMethod("Invoke", new [] { command.GetType() });
method.Invoke(theHandler, new object[]{command});
```
It works fine, but I want all exceptions to get passed on to the code that invoked the command.
So that the caller can use:
```
try
{
_dispatcher.Invoke(new CreateUser("Jonas", "Gauffin"));
}
catch (SomeSpecificException err)
{
//handle it.
}
```
Instead of having to catch `TargetInvocationException`.
(I know that I can throw the inner exception, but that's pretty worthless since the stack trace is destroyed)
**Update2**
[Here](http://timwi.blogspot.se/2010/03/dynamic-method-invocation-without.html) is a possible solution.
But it seems more like a hack. Isn't there a better solution? Maybe with expressions or something? | 2012/06/21 | [
"https://Stackoverflow.com/questions/11140010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/70386/"
]
| You can't. That's the specified way that exceptions are propagated by invoking a method via reflection. You can always catch `TargetInvocationException` and then throw the "inner" exception obtained via the [`InnerException`](http://msdn.microsoft.com/en-us/library/system.exception.innerexception.aspx) property, if you want the effect to be the original exception being thrown.
(You'll lose the original stack trace, mind you. It's possible that there's a way to prevent that, but it's tricky. I believe there *may* be more support for this in .NET 4.5; I'm not sure.) | You can call Invoke on the MethodInfo instance, but the first argument of the call is the target (the object that the method belongs to). If you pass `this` and it has access to call it, you should not get the exception. |
11,140,010 | I've found a method using reflection (and got its `MethodInfo`). How can I invoke it without getting `TargetInvocationException` when exceptions are thrown?
**Update**
I'm creating a command implementation where the commands are handled by classes which implement
```
public interface ICommandHandler<T> where T : class, ICommand
{
public void Invoke(T command);
}
```
Since there is one dispatcher which takes care of finding and mapping all handlers to the correct command, I can't invoke the methods directly but must use reflection. Something like:
```
var handlerType = typeof(IHandlerOf<>).MakeGenericType(command.GetType());
var method = handlerType.GetMethod("Invoke", new [] { command.GetType() });
method.Invoke(theHandler, new object[]{command});
```
It works fine, but I want all exceptions to get passed on to the code that invoked the command.
So that the caller can use:
```
try
{
_dispatcher.Invoke(new CreateUser("Jonas", "Gauffin"));
}
catch (SomeSpecificException err)
{
//handle it.
}
```
Instead of having to catch `TargetInvocationException`.
(I know that I can throw the inner exception, but that's pretty worthless since the stack trace is destroyed)
**Update2**
[Here](http://timwi.blogspot.se/2010/03/dynamic-method-invocation-without.html) is a possible solution.
But it seems more like a hack. Isn't there a better solution? Maybe with expressions or something? | 2012/06/21 | [
"https://Stackoverflow.com/questions/11140010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/70386/"
]
| Create a `Delegate` from the `MethodInfo` (through one of the overloads of [`Delegate.CreateDelegate`](http://msdn.microsoft.com/en-us/library/system.delegate.createdelegate.aspx)) and invoke that instead. This *won't* wrap any exception thrown by the method inside a `TargetInvocationException` like `MethodInfo.Invoke` does.
```
class Foo
{
static void ThrowingMethod()
{
throw new NotImplementedException();
}
static MethodInfo GetMethodInfo()
{
return typeof(Foo)
.GetMethod("ThrowingMethod", BindingFlags.NonPublic | BindingFlags.Static);
}
// Will throw a NotImplementedException
static void DelegateWay()
{
Action action = (Action)Delegate.CreateDelegate
(typeof(Action), GetMethodInfo());
action();
}
// Will throw a TargetInvocationException
// wrapping a NotImplementedException
static void MethodInfoWay()
{
GetMethodInfo().Invoke(null, null);
}
}
```
**EDIT**:
(As the OP has pointed out, DynamicInvoke won't work here since it wraps too)
Based on your update, I would just use `dynamic`:
```
((dynamic)theHandler).Invoke(command);
``` | You can call Invoke on the MethodInfo instance, but the first argument of the call is the target (the object that the method belongs to). If you pass `this` and it has access to call it, you should not get the exception. |
11,140,010 | I've found a method using reflection (and got its `MethodInfo`). How can I invoke it without getting `TargetInvocationException` when exceptions are thrown?
**Update**
I'm creating a command implementation where the commands are handled by classes which implement
```
public interface ICommandHandler<T> where T : class, ICommand
{
public void Invoke(T command);
}
```
Since there is one dispatcher which takes care of finding and mapping all handlers to the correct command, I can't invoke the methods directly but must use reflection. Something like:
```
var handlerType = typeof(IHandlerOf<>).MakeGenericType(command.GetType());
var method = handlerType.GetMethod("Invoke", new [] { command.GetType() });
method.Invoke(theHandler, new object[]{command});
```
It works fine, but I want all exceptions to get passed on to the code that invoked the command.
So that the caller can use:
```
try
{
_dispatcher.Invoke(new CreateUser("Jonas", "Gauffin"));
}
catch (SomeSpecificException err)
{
//handle it.
}
```
Instead of having to catch `TargetInvocationException`.
(I know that I can throw the inner exception, but that's pretty worthless since the stack trace is destroyed)
**Update2**
[Here](http://timwi.blogspot.se/2010/03/dynamic-method-invocation-without.html) is a possible solution.
But it seems more like a hack. Isn't there a better solution? Maybe with expressions or something? | 2012/06/21 | [
"https://Stackoverflow.com/questions/11140010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/70386/"
]
| Create a `Delegate` from the `MethodInfo` (through one of the overloads of [`Delegate.CreateDelegate`](http://msdn.microsoft.com/en-us/library/system.delegate.createdelegate.aspx)) and invoke that instead. This *won't* wrap any exception thrown by the method inside a `TargetInvocationException` like `MethodInfo.Invoke` does.
```
class Foo
{
static void ThrowingMethod()
{
throw new NotImplementedException();
}
static MethodInfo GetMethodInfo()
{
return typeof(Foo)
.GetMethod("ThrowingMethod", BindingFlags.NonPublic | BindingFlags.Static);
}
// Will throw a NotImplementedException
static void DelegateWay()
{
Action action = (Action)Delegate.CreateDelegate
(typeof(Action), GetMethodInfo());
action();
}
// Will throw a TargetInvocationException
// wrapping a NotImplementedException
static void MethodInfoWay()
{
GetMethodInfo().Invoke(null, null);
}
}
```
**EDIT**:
(As the OP has pointed out, DynamicInvoke won't work here since it wraps too)
Based on your update, I would just use `dynamic`:
```
((dynamic)theHandler).Invoke(command);
``` | You can't. That's the specified way that exceptions are propagated by invoking a method via reflection. You can always catch `TargetInvocationException` and then throw the "inner" exception obtained via the [`InnerException`](http://msdn.microsoft.com/en-us/library/system.exception.innerexception.aspx) property, if you want the effect to be the original exception being thrown.
(You'll lose the original stack trace, mind you. It's possible that there's a way to prevent that, but it's tricky. I believe there *may* be more support for this in .NET 4.5; I'm not sure.) |
63,254,737 | I would like some help with my code. It is a menu where the customer must choose three options and a quantity; the base of it is done, but I am having difficulty when the customer chooses the same product twice: my code does not add both purchases, it replaces the first.
For example, the person chooses the second option with quantity 5, and then chooses the second option again with quantity 2; the code will only count 2, since that was the last value the variable received.
I appreciate any help.
```
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
#include <string.h>
int main()
{
int opcao, quantidade[7];
float total[7], final = 0;
char produto[7][20];
strcpy_s(produto[0], "Cachorro-Quente");
strcpy_s(produto[1], "Xis Salada");
strcpy_s(produto[2], "Xis Bacon");
strcpy_s(produto[3], "Misto");
strcpy_s(produto[4], "Salada");
strcpy_s(produto[5], "Água");
strcpy_s(produto[6], "Refrigerante");
setlocale(LC_ALL, "");
    //Menu for the orders
    printf("**********************CARDÁPIO**************************\n");
printf("ITEM PRODUTO CÓDIGO PREÇO UNITÁRIO\n");
printf("---------------------------------------------------------\n");
printf("1 Cachorro-Quente 100 5,00\n");
printf("---------------------------------------------------------\n");
printf("2 Xis Salada 101 8,79\n");
printf("---------------------------------------------------------\n");
printf("3 Xis Bacon 102 9,99\n");
printf("---------------------------------------------------------\n");
printf("4 Misto 103 6,89\n");
printf("---------------------------------------------------------\n");
printf("5 Salada 104 4,80\n");
printf("---------------------------------------------------------\n");
printf("6 Água 105 3,49\n");
printf("---------------------------------------------------------\n");
printf("7 Refrigerante 106 4,99\n");
printf("*********************************************************\n");
printf("\nDigite o número do item : ");
scanf_s("%d", &opcao);
while ((opcao >= 1) && (opcao <= 7))
{
switch (opcao)
{
case 1: // cachorro-quente
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[0]);
total[0] = quantidade[0] * 5.00;
break;
case 2: // xis salada
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[1]);
total[1] = quantidade[1] * 8.79;
break;
case 3: // xis bacon
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[2]);
total[2] = quantidade[2] * 9.99;
break;
case 4: // misto
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[3]);
total[3] = quantidade[3] * 6.89;
break;
case 5: // salada
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[4]);
total[4] = quantidade[4] * 4.80;
break;
case 6: // agua
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[5]);
total[5] = quantidade[5] * 3.49;
break;
case 7: // refrigerante
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[6]);
total[6] = quantidade[6] * 4.99;
break;
}
printf("\nDigite o número do item : ");
scanf_s("%d", &opcao);
}
printf("\nItem Quantidade Valor\n");
for (int i = 0; i < 7; i++)
{
if (quantidade[i] > 0)
{
printf(produto[i]);
printf(" %.i %.2f\n", quantidade[i], total[i]);
final = final + total[i];
}
}
printf("\n");
printf("Total da compra dos pedidos: %.2f\n", final);
system("pause");
return 0;
}
``` | 2020/08/04 | [
"https://Stackoverflow.com/questions/63254737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14050309/"
]
| With a reasonable sum limit this problem might be solved using an extension of the dynamic programming approach for the subset sum problem, or for the coin change problem with a predetermined number of coins. Note that we can `count` all variants in pseudopolynomial time `O(x*n)`, but the output size might grow exponentially, so generating all variants might be a problem.
Make a 3D array, list, or vector with outer dimension `x+1` (indices `0..x`), for example `A[][][]`. Every element `A[p]` of this list contains the list of possible subsets with sum `p`.
We can walk through all elements (call the current element `item`) of the initial "set" (I noticed repeating elements in your example, so it is not a true set).
Now scan the `A[]` list from the last entry to the beginning. (This trick helps avoid repeated usage of the same item.)
If `A[i - item]` contains subsets of size < `k`, we can add all these subsets to `A[i]`, appending `item`.
After the full scan, `A[x]` will contain the subsets of size `k` and less that have sum `x`, and we can filter only those of size `k`.
Example output of my quickly-made Delphi program for the following data:
```
Lst := [1,2,3,3,4,5,6,7];
k := 3;
sum := 10;
3 3 4
2 3 5 //distinct 3's
2 3 5
1 4 5
1 3 6
1 3 6 //distinct 3's
1 2 7
```
To exclude variants with distinct repeated elements (if needed), we can use a non-first occurrence only for subsets already containing the first occurrence of the item (so 3 3 4 will be valid while the second 2 3 5 won't be generated).
I literally translated my Delphi code into C++ (weird, I think :)
```
int main()
{
vector<vector<vector<int>>> A;
vector<int> Lst = { 1, 2, 3, 3, 4, 5, 6, 7 };
int k = 3;
int sum = 10;
A.push_back({ {0} }); //fictive array to make non-empty variant
for (int i = 0; i < sum; i++)
A.push_back({{}});
for (int item : Lst) {
for (int i = sum; i >= item; i--) {
for (int j = 0; j < A[i - item].size(); j++)
if (A[i - item][j].size() < k + 1 &&
A[i - item][j].size() > 0) {
vector<int> t = A[i - item][j];
t.push_back(item);
A[i].push_back(t); //add new variant including current item
}
}
}
//output needed variants
for (int i = 0; i < A[sum].size(); i++)
if (A[sum][i].size() == k + 1) {
for (int j = 1; j < A[sum][i].size(); j++) //excluding fictive 0
cout << A[sum][i][j] << " ";
cout << endl;
}
}
``` | You should first sort the array. Secondly, you should determine whether the problem is actually solvable, to save time: take the largest k elements and see if their sum is larger than or equal to x. If it is smaller, you are done; no such combination is possible. If it is exactly equal, you are also done; there are no other combinations. O(n) feels nice, doesn't it? If it is larger, you have a lot of work to do. You need to store the found combinations in a separate array. Then you replace the smallest of the k numbers with the smallest element in the array; if the sum is still larger than x, you do the same for the second and third smallest, and so on, until you get something smaller than x. Once the sum is smaller than x, you can start increasing the value at the last position you stopped at until you hit x; once you hit x, that is your combination. Then you can move on to the previous element: if you had 1, 1, 5, 6, you can grab the 1 as well, add it to your smallest element 5 to get 6, and then check whether you can write this 6 as a combination of two values, stopping once you hit the value. Then you can repeat for the others as well. Your problem can be solved in O(n!) time in the worst case. I would not suggest trying 10^27 combinations; if you have more than 10^27 elements, at 3 bits for the header and 8 bits per integer, you would need about 9.8765*10^25 terabytes just to store that colossal array, more memory than a supercomputer has. You should worry about whether your computer can even store this monster rather than whether you can solve the problem; with that many combinations, even a quadratic solution would crash your computer, and quadratic is a long way off from O(n!). |
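The feasibility pre-check this answer opens with (compare x against the sum of the k largest elements) can be sketched in a few lines. This is an illustrative Python sketch, extended with the symmetric lower bound, and not part of the original answer:

```python
def feasible(values, k, x):
    """Pre-check sketched in the answer above: after sorting, the k
    smallest elements give the minimum possible k-subset sum and the
    k largest give the maximum, so a k-subset summing to x can exist
    only if x lies between those two bounds."""
    s = sorted(values)
    return sum(s[:k]) <= x <= sum(s[-k:])
```

For the sample data used elsewhere on this question, `feasible([1, 2, 3, 3, 4, 5, 6, 7], 3, 10)` holds, so running the expensive search is at least worthwhile.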
63,254,737 | I would like some help with my code. It is a menu where the customer must choose three options and a quantity; the base of it is done, but I am having difficulty when the customer chooses the same product twice: my code does not add both purchases, it replaces the first.
For example, the person chooses the second option with quantity 5, and then chooses the second option again with quantity 2; the code will only count 2, since that was the last value the variable received.
I appreciate any help.
```
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
#include <string.h>
int main()
{
int opcao, quantidade[7];
float total[7], final = 0;
char produto[7][20];
strcpy_s(produto[0], "Cachorro-Quente");
strcpy_s(produto[1], "Xis Salada");
strcpy_s(produto[2], "Xis Bacon");
strcpy_s(produto[3], "Misto");
strcpy_s(produto[4], "Salada");
strcpy_s(produto[5], "Água");
strcpy_s(produto[6], "Refrigerante");
setlocale(LC_ALL, "");
    //Menu for the orders
    printf("**********************CARDÁPIO**************************\n");
printf("ITEM PRODUTO CÓDIGO PREÇO UNITÁRIO\n");
printf("---------------------------------------------------------\n");
printf("1 Cachorro-Quente 100 5,00\n");
printf("---------------------------------------------------------\n");
printf("2 Xis Salada 101 8,79\n");
printf("---------------------------------------------------------\n");
printf("3 Xis Bacon 102 9,99\n");
printf("---------------------------------------------------------\n");
printf("4 Misto 103 6,89\n");
printf("---------------------------------------------------------\n");
printf("5 Salada 104 4,80\n");
printf("---------------------------------------------------------\n");
printf("6 Água 105 3,49\n");
printf("---------------------------------------------------------\n");
printf("7 Refrigerante 106 4,99\n");
printf("*********************************************************\n");
printf("\nDigite o número do item : ");
scanf_s("%d", &opcao);
while ((opcao >= 1) && (opcao <= 7))
{
switch (opcao)
{
case 1: // cachorro-quente
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[0]);
total[0] = quantidade[0] * 5.00;
break;
case 2: // xis salada
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[1]);
total[1] = quantidade[1] * 8.79;
break;
case 3: // xis bacon
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[2]);
total[2] = quantidade[2] * 9.99;
break;
case 4: // misto
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[3]);
total[3] = quantidade[3] * 6.89;
break;
case 5: // salada
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[4]);
total[4] = quantidade[4] * 4.80;
break;
case 6: // agua
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[5]);
total[5] = quantidade[5] * 3.49;
break;
case 7: // refrigerante
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[6]);
total[6] = quantidade[6] * 4.99;
break;
}
printf("\nDigite o número do item : ");
scanf_s("%d", &opcao);
}
printf("\nItem Quantidade Valor\n");
for (int i = 0; i < 7; i++)
{
if (quantidade[i] > 0)
{
printf(produto[i]);
printf(" %.i %.2f\n", quantidade[i], total[i]);
final = final + total[i];
}
}
printf("\n");
printf("Total da compra dos pedidos: %.2f\n", final);
system("pause");
return 0;
}
``` | 2020/08/04 | [
"https://Stackoverflow.com/questions/63254737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14050309/"
]
| Here is a complete solution in Python. Translation to C++ is left to the reader.
Like the usual subset sum, generation of the doubly linked summary of the solutions is pseudo-polynomial: it is `O(count_values * distinct_sums * depths_of_sums)`. However, actually iterating through the solutions can be exponential. Using generators the way I did avoids using a lot of memory to generate that list, even if it can take a long time to run.
```
from collections import namedtuple
# This is a doubly linked list.
# (value, tail) will be one group of solutions. (next_answer) is another.
SumPath = namedtuple('SumPath', 'value tail next_answer')
def fixed_sum_paths (array, target, count):
# First find counts of values to handle duplications.
value_repeats = {}
for value in array:
if value in value_repeats:
value_repeats[value] += 1
else:
value_repeats[value] = 1
# paths[depth][x] will be all subsets of size depth that sum to x.
paths = [{} for i in range(count+1)]
# First we add the empty set.
paths[0][0] = SumPath(value=None, tail=None, next_answer=None)
# Now we start adding values to it.
for value, repeats in value_repeats.items():
# Reversed depth avoids seeing paths we will find using this value.
for depth in reversed(range(len(paths))):
for result, path in paths[depth].items():
for i in range(1, repeats+1):
if count < i + depth:
# Do not fill in too deep.
break
result += value
if result in paths[depth+i]:
path = SumPath(
value=value,
tail=path,
next_answer=paths[depth+i][result]
)
else:
path = SumPath(
value=value,
tail=path,
next_answer=None
)
paths[depth+i][result] = path
# Subtle bug fix, a path for value, value
# should not lead to value, other_value because
# we already inserted that first.
path = SumPath(
value=value,
tail=path.tail,
next_answer=None
)
return paths[count][target]
def path_iter(paths):
if paths.value is None:
# We are the tail
yield []
else:
while paths is not None:
value = paths.value
for answer in path_iter(paths.tail):
answer.append(value)
yield answer
paths = paths.next_answer
def fixed_sums (array, target, count):
paths = fixed_sum_paths(array, target, count)
return path_iter(paths)
for path in fixed_sums([1,2,3,3,4,5,6,9], 10, 3):
print(path)
```
Incidentally for your example, here are the solutions:
```
[1, 3, 6]
[1, 4, 5]
[2, 3, 5]
[3, 3, 4]
``` | You should first sort the array. Secondly, you should determine whether the problem is actually solvable, to save time: take the largest k elements and see if their sum is larger than or equal to x. If it is smaller, you are done; no such combination is possible. If it is exactly equal, you are also done; there are no other combinations. O(n) feels nice, doesn't it? If it is larger, you have a lot of work to do. You need to store the found combinations in a separate array. Then you replace the smallest of the k numbers with the smallest element in the array; if the sum is still larger than x, you do the same for the second and third smallest, and so on, until you get something smaller than x. Once the sum is smaller than x, you can start increasing the value at the last position you stopped at until you hit x; once you hit x, that is your combination. Then you can move on to the previous element: if you had 1, 1, 5, 6, you can grab the 1 as well, add it to your smallest element 5 to get 6, and then check whether you can write this 6 as a combination of two values, stopping once you hit the value. Then you can repeat for the others as well. Your problem can be solved in O(n!) time in the worst case. I would not suggest trying 10^27 combinations; if you have more than 10^27 elements, at 3 bits for the header and 8 bits per integer, you would need about 9.8765*10^25 terabytes just to store that colossal array, more memory than a supercomputer has. You should worry about whether your computer can even store this monster rather than whether you can solve the problem; with that many combinations, even a quadratic solution would crash your computer, and quadratic is a long way off from O(n!). |
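The bug the question itself describes (re-choosing an item replaces its quantity instead of adding to it) comes down to assigning with = where the code should accumulate with +=. A minimal Python sketch of the accumulating pattern; the helper name is hypothetical, and the prices come from the question's menu:

```python
# Prices keyed by menu option, taken from the question's menu.
PRICES = {1: 5.00, 2: 8.79, 3: 9.99, 4: 6.89, 5: 4.80, 6: 3.49, 7: 4.99}

def take_orders(choices):
    """choices is a list of (option, quantity) pairs. Quantities for a
    repeated option are accumulated with +=, not replaced with =."""
    quantities = {}
    for option, qty in choices:
        quantities[option] = quantities.get(option, 0) + qty  # accumulate
    total = sum(PRICES[opt] * qty for opt, qty in quantities.items())
    return quantities, total
```

Choosing option 2 with quantity 5 and then again with quantity 2 now yields 7 units of that item instead of 2; the same += change applies to the C arrays `quantidade[]` and `total[]` (which should also be zero-initialized).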
63,254,737 | I would like some help with my code. It is a menu where the customer must choose three options and a quantity; the base of it is done, but I am having difficulty when the customer chooses the same product twice: my code does not add both purchases, it replaces the first.
For example, the person chooses the second option with quantity 5, and then chooses the second option again with quantity 2; the code will only count 2, since that was the last value the variable received.
I appreciate any help.
```
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
#include <string.h>
int main()
{
int opcao, quantidade[7];
float total[7], final = 0;
char produto[7][20];
strcpy_s(produto[0], "Cachorro-Quente");
strcpy_s(produto[1], "Xis Salada");
strcpy_s(produto[2], "Xis Bacon");
strcpy_s(produto[3], "Misto");
strcpy_s(produto[4], "Salada");
strcpy_s(produto[5], "Água");
strcpy_s(produto[6], "Refrigerante");
setlocale(LC_ALL, "");
    //Menu for the orders
    printf("**********************CARDÁPIO**************************\n");
printf("ITEM PRODUTO CÓDIGO PREÇO UNITÁRIO\n");
printf("---------------------------------------------------------\n");
printf("1 Cachorro-Quente 100 5,00\n");
printf("---------------------------------------------------------\n");
printf("2 Xis Salada 101 8,79\n");
printf("---------------------------------------------------------\n");
printf("3 Xis Bacon 102 9,99\n");
printf("---------------------------------------------------------\n");
printf("4 Misto 103 6,89\n");
printf("---------------------------------------------------------\n");
printf("5 Salada 104 4,80\n");
printf("---------------------------------------------------------\n");
printf("6 Água 105 3,49\n");
printf("---------------------------------------------------------\n");
printf("7 Refrigerante 106 4,99\n");
printf("*********************************************************\n");
printf("\nDigite o número do item : ");
scanf_s("%d", &opcao);
while ((opcao >= 1) && (opcao <= 7))
{
switch (opcao)
{
case 1: // cachorro-quente
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[0]);
total[0] = quantidade[0] * 5.00;
break;
case 2: // xis salada
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[1]);
total[1] = quantidade[1] * 8.79;
break;
case 3: // xis bacon
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[2]);
total[2] = quantidade[2] * 9.99;
break;
case 4: // misto
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[3]);
total[3] = quantidade[3] * 6.89;
break;
case 5: // salada
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[4]);
total[4] = quantidade[4] * 4.80;
break;
case 6: // agua
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[5]);
total[5] = quantidade[5] * 3.49;
break;
case 7: // refrigerante
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[6]);
total[6] = quantidade[6] * 4.99;
break;
}
printf("\nDigite o número do item : ");
scanf_s("%d", &opcao);
}
printf("\nItem Quantidade Valor\n");
for (int i = 0; i < 7; i++)
{
if (quantidade[i] > 0)
{
printf(produto[i]);
printf(" %.i %.2f\n", quantidade[i], total[i]);
final = final + total[i];
}
}
printf("\n");
printf("Total da compra dos pedidos: %.2f\n", final);
system("pause");
return 0;
}
``` | 2020/08/04 | [
"https://Stackoverflow.com/questions/63254737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14050309/"
]
| With a reasonable sum limit this problem might be solved using an extension of the dynamic programming approach for the subset sum problem, or for the coin change problem with a predetermined number of coins. Note that we can `count` all variants in pseudopolynomial time `O(x*n)`, but the output size might grow exponentially, so generating all variants might be a problem.
Make a 3D array, list, or vector with outer dimension `x+1` (indices `0..x`), for example `A[][][]`. Every element `A[p]` of this list contains the list of possible subsets with sum `p`.
We can walk through all elements (call the current element `item`) of the initial "set" (I noticed repeating elements in your example, so it is not a true set).
Now scan the `A[]` list from the last entry to the beginning. (This trick helps avoid repeated usage of the same item.)
If `A[i - item]` contains subsets of size < `k`, we can add all these subsets to `A[i]`, appending `item`.
After the full scan, `A[x]` will contain the subsets of size `k` and less that have sum `x`, and we can filter only those of size `k`.
Example output of my quickly-made Delphi program for the following data:
```
Lst := [1,2,3,3,4,5,6,7];
k := 3;
sum := 10;
3 3 4
2 3 5 //distinct 3's
2 3 5
1 4 5
1 3 6
1 3 6 //distinct 3's
1 2 7
```
To exclude variants with distinct repeated elements (if needed), we can use a non-first occurrence only for subsets already containing the first occurrence of the item (so 3 3 4 will be valid while the second 2 3 5 won't be generated).
I literally translated my Delphi code into C++ (weird, I think :)
```
int main()
{
vector<vector<vector<int>>> A;
vector<int> Lst = { 1, 2, 3, 3, 4, 5, 6, 7 };
int k = 3;
int sum = 10;
A.push_back({ {0} }); //fictive array to make non-empty variant
for (int i = 0; i < sum; i++)
A.push_back({{}});
for (int item : Lst) {
for (int i = sum; i >= item; i--) {
for (int j = 0; j < A[i - item].size(); j++)
if (A[i - item][j].size() < k + 1 &&
A[i - item][j].size() > 0) {
vector<int> t = A[i - item][j];
t.push_back(item);
A[i].push_back(t); //add new variant including current item
}
}
}
//output needed variants
for (int i = 0; i < A[sum].size(); i++)
if (A[sum][i].size() == k + 1) {
for (int j = 1; j < A[sum][i].size(); j++) //excluding fictive 0
cout << A[sum][i][j] << " ";
cout << endl;
}
}
``` | A brute force method using recursion might look like this...
For example, given variables set, x, k, the following pseudo code might work:
```
setSumStructure find(int[] set, int x, int k, int setIdx)
{
int sz = set.length - setIdx;
    if (sz < k) return null;
    if (sz == k) check whether the sum of set[setIdx] .. set[set.length - 1] equals x; if it does, return that set together with its sum, else return null;
    for (int i = setIdx; i < set.length - (k - 1); i++)
filter(find (set, x - set[i], k - 1, i + 1));
return filteredSets;
}
``` |
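A runnable Python translation of the brute-force recursion above may make it concrete; the function and variable names are illustrative, not from the original pseudocode:

```python
def find_combinations(values, x, k, start=0):
    """Brute-force recursion sketched above: try each candidate at an
    index >= start, shrink the target sum x and the count k, and collect
    every k-element pick (distinct by index) that sums exactly to x."""
    if k == 0:
        return [[]] if x == 0 else []
    results = []
    # Leave at least k - 1 elements after index i for the remaining picks.
    for i in range(start, len(values) - k + 1):
        for tail in find_combinations(values, x - values[i], k - 1, i + 1):
            results.append([values[i]] + tail)
    return results
```

On the sample data used in the other answer, `find_combinations([1, 2, 3, 3, 4, 5, 6, 7], 10, 3)` yields 7 index-distinct triples, matching the 7 lines of Delphi output shown there.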
63,254,737 | I would like some help with my code. It is a menu where the customer must choose three options and a quantity; the base of it is done, but I am having difficulty when the customer chooses the same product twice: my code does not add both purchases, it replaces the first.
For example, the person chooses the second option with quantity 5, and then chooses the second option again with quantity 2; the code will only count 2, since that was the last value the variable received.
I appreciate any help.
```
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
#include <string.h>
int main()
{
int opcao, quantidade[7];
float total[7], final = 0;
char produto[7][20];
strcpy_s(produto[0], "Cachorro-Quente");
strcpy_s(produto[1], "Xis Salada");
strcpy_s(produto[2], "Xis Bacon");
strcpy_s(produto[3], "Misto");
strcpy_s(produto[4], "Salada");
strcpy_s(produto[5], "Água");
strcpy_s(produto[6], "Refrigerante");
setlocale(LC_ALL, "");
    //Menu for the orders
    printf("**********************CARDÁPIO**************************\n");
printf("ITEM PRODUTO CÓDIGO PREÇO UNITÁRIO\n");
printf("---------------------------------------------------------\n");
printf("1 Cachorro-Quente 100 5,00\n");
printf("---------------------------------------------------------\n");
printf("2 Xis Salada 101 8,79\n");
printf("---------------------------------------------------------\n");
printf("3 Xis Bacon 102 9,99\n");
printf("---------------------------------------------------------\n");
printf("4 Misto 103 6,89\n");
printf("---------------------------------------------------------\n");
printf("5 Salada 104 4,80\n");
printf("---------------------------------------------------------\n");
printf("6 Água 105 3,49\n");
printf("---------------------------------------------------------\n");
printf("7 Refrigerante 106 4,99\n");
printf("*********************************************************\n");
printf("\nDigite o número do item : ");
scanf_s("%d", &opcao);
while ((opcao >= 1) && (opcao <= 7))
{
switch (opcao)
{
case 1: // cachorro-quente
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[0]);
total[0] = quantidade[0] * 5.00;
break;
case 2: // xis salada
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[1]);
total[1] = quantidade[1] * 8.79;
break;
case 3: // xis bacon
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[2]);
total[2] = quantidade[2] * 9.99;
break;
case 4: // misto
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[3]);
total[3] = quantidade[3] * 6.89;
break;
case 5: // salada
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[4]);
total[4] = quantidade[4] * 4.80;
break;
case 6: // agua
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[5]);
total[5] = quantidade[5] * 3.49;
break;
case 7: // refrigerante
printf("Digite quantos itens deseja comprar: ");
scanf_s("%d", &quantidade[6]);
total[6] = quantidade[6] * 4.99;
break;
}
printf("\nDigite o número do item : ");
scanf_s("%d", &opcao);
}
printf("\nItem Quantidade Valor\n");
for (int i = 0; i < 7; i++)
{
if (quantidade[i] > 0)
{
printf(produto[i]);
printf(" %.i %.2f\n", quantidade[i], total[i]);
final = final + total[i];
}
}
printf("\n");
printf("Total da compra dos pedidos: %.2f\n", final);
system("pause");
return 0;
}
``` | 2020/08/04 | [
"https://Stackoverflow.com/questions/63254737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14050309/"
]
| Here is a complete solution in Python. Translation to C++ is left to the reader.
Like the usual subset sum, generating the doubly linked summary of the solutions is pseudo-polynomial: it is `O(count_values * distinct_sums * depths_of_sums)`. However, actually iterating through the solutions can be exponential. Using generators the way I did avoids holding the whole list in memory, even though it can still take a long time to run.
```
from collections import namedtuple
# This is a doubly linked list.
# (value, tail) will be one group of solutions. (next_answer) is another.
SumPath = namedtuple('SumPath', 'value tail next_answer')
def fixed_sum_paths (array, target, count):
# First find counts of values to handle duplications.
value_repeats = {}
for value in array:
if value in value_repeats:
value_repeats[value] += 1
else:
value_repeats[value] = 1
# paths[depth][x] will be all subsets of size depth that sum to x.
paths = [{} for i in range(count+1)]
# First we add the empty set.
paths[0][0] = SumPath(value=None, tail=None, next_answer=None)
# Now we start adding values to it.
for value, repeats in value_repeats.items():
# Reversed depth avoids seeing paths we will find using this value.
for depth in reversed(range(len(paths))):
for result, path in paths[depth].items():
for i in range(1, repeats+1):
if count < i + depth:
# Do not fill in too deep.
break
result += value
if result in paths[depth+i]:
path = SumPath(
value=value,
tail=path,
next_answer=paths[depth+i][result]
)
else:
path = SumPath(
value=value,
tail=path,
next_answer=None
)
paths[depth+i][result] = path
# Subtle bug fix, a path for value, value
# should not lead to value, other_value because
# we already inserted that first.
path = SumPath(
value=value,
tail=path.tail,
next_answer=None
)
return paths[count][target]
def path_iter(paths):
if paths.value is None:
# We are the tail
yield []
else:
while paths is not None:
value = paths.value
for answer in path_iter(paths.tail):
answer.append(value)
yield answer
paths = paths.next_answer
def fixed_sums (array, target, count):
paths = fixed_sum_paths(array, target, count)
return path_iter(paths)
for path in fixed_sums([1,2,3,3,4,5,6,9], 10, 3):
print(path)
```
Incidentally for your example, here are the solutions:
```
[1, 3, 6]
[1, 4, 5]
[2, 3, 5]
[3, 3, 4]
``` | A brute force method using recursion might look like this...
For example, given variables set, x, k (where `x` is the remaining target sum and `k` is the number of elements still to choose), the following pseudo code might work:
```
setSumStructure find(int[] set, int x, int k, int setIdx)
{
int sz = set.length - setIdx;
    if (sz < k) return null;
    if (sz == k) check whether set[setIdx] + ... + set[set.length - 1] == x; if it does, return the set together with the sum, else return null;
    for (int i = setIdx; i < set.length - (k - 1); i++)
        filter(find(set, x - set[i], k - 1, i + 1));
return filteredSets;
}
``` |
21,419 | For us, the IIS URL Rewrite module is the way to go.
How can the site manager maintain URLs and publish them on demand?
Do we need to make an exception in the CD deployer to publish it to disk?
Before DXA we published Rewrite.config to the filesystem and referenced that file from web.config.
With DXA it can be created [like this](http://www.mrgn.co/2015/05/25/redirects-friendly-urls-with-sdl-tridion/). | 2021/01/24 | [
"https://tridion.stackexchange.com/questions/21419",
"https://tridion.stackexchange.com",
"https://tridion.stackexchange.com/users/3158/"
]
| You can publish the rewrite.config to the broker and extend something like the [URLRewrite.NET](https://github.com/Bikeman868/UrlRewrite.Net) module to load the configuration from the database instead of the filesystem. The cool thing is that you can use it ahead of the DXA Static context module and before DXA starts processing the URLs.
One thing to consider is how you are loading the config from the broker since you could have many topology websites with multiple web applications hosted as a single website in IIS. Coming up with regex patterns for all of them and managing the configuration would be key to keeping the solution scalable.
A poor man's solution would be to have a Razor template (or anything else, for that matter) publish a custom extension to the deployer. You can then configure your deployer to copy the custom extension to a filesystem instead of the broker database. You can include this in your IIS module for it to be picked up OOTB. This would still work, but you would need to consider scaling, among other considerations and configuration. Hope it helps! | Please refer to the link from the documentation: [link](https://docs.sdl.com/792164/573605/sdl-digital-experience-accelerator-2-0/adding-a-redirect-page)
If you still want to manage the redirects yourself, you can use the IIS URL Rewrite module in web.config. |
21,419 | For us, the IIS URL Rewrite module is the way to go.
How can the site manager maintain URLs and publish them on demand?
Do we need to make an exception in the CD deployer to publish it to disk?
Before DXA we published Rewrite.config to the filesystem and referenced that file from web.config.
With DXA it can be created [like this](http://www.mrgn.co/2015/05/25/redirects-friendly-urls-with-sdl-tridion/). | 2021/01/24 | [
"https://tridion.stackexchange.com/questions/21419",
"https://tridion.stackexchange.com",
"https://tridion.stackexchange.com/users/3158/"
]
| You can publish the rewrite.config to the broker and extend something like the [URLRewrite.NET](https://github.com/Bikeman868/UrlRewrite.Net) module to load the configuration from the database instead of the filesystem. The cool thing is that you can use it ahead of the DXA Static context module and before DXA starts processing the URLs.
One thing to consider is how you are loading the config from the broker since you could have many topology websites with multiple web applications hosted as a single website in IIS. Coming up with regex patterns for all of them and managing the configuration would be key to keeping the solution scalable.
A poor man's solution would be to have a Razor template (or anything else, for that matter) publish a custom extension to the deployer. You can then configure your deployer to copy the custom extension to a filesystem instead of the broker database. You can include this in your IIS module for it to be picked up OOTB. This would still work, but you would need to consider scaling, among other considerations and configuration. Hope it helps! | We have come across a legacy solution which maintained URL redirects and rewrites in a component and published them as a config file to the file system (you need to configure the .config extension in the storage config to publish it to the file system).
Here are the issues with this approach:
* The config file has its size limit (256 KB) as per Microsoft, so if you have a couple of thousand redirects and rewrites across your sites, this won't work unless you hack some settings in the registry.
* The solution is not long-term, considering that publishing to the file system is on a deprecated path.
So, while this is possible, I can confirm we are on a path to revamp this as a whole, as the solution is not flexible or scalable (and becomes insecure if you have to hack the registry to support thousands of redirects/rewrites). |
21,412,021 | In my database I have a column named `IsStaff` whose value is a bit: a staff member in the company either had an illness (1) or had no illness (0). How would I write a SQL query that counts all the 1's and 0's between two specific dates and represents the result in a jQuery table? This is what I have done:
```
public List<Staff> Method(string Date1, string Date2)
{
DateTime d = Convert.ToDateTime(Date1);
string date1 = d.ToLongDateString();
DateTime dd = Convert.ToDateTime(Date2);
string date2 = dd.ToLongDateString();
List<Staff> LBD = new List<Staff>();
SqlConnection conn = new SqlConnection etc...
SqlCommand command = new SqlCommand(@"SELECT * From TableName
WHERE Cast([Time] AS DATE) > @Time
AND CAST([Time] AS DATE) < @Time2
ORDER BY Time Desc", conn);
command.Parameters.AddWithValue("@Time", date1);
command.Parameters.AddWithValue("@Time2", date2);
conn.Open();
    SqlDataReader data = command.ExecuteReader();
while (data.Read())
{
Staff l = new Staff();
l.IsStaff = data["IsStaff"].ToString();
l.Name = data["Name"].ToString();
........
LBD.Add(l);
}
conn.Close();
return LBD;
}
```
>
> I can successfully get the data between two dates, but how do I get the total number of times a specific staff member has been ill?
>
>
>
```
function Table(data) {
    var table = '<table><tr><th>Name</th><th>Sum of ill staff</th><th>Sum of not ill staff</th></tr>';
var rowID = 0;
for (var staff in data) {
        var row = '<tr class=\'staff-row\' id=\'' + data[staff].StaffID + '\'>';
        row += '<td>' + data[staff].Name + '</td>';
        row += '<td>' + data[staff].IsStaff + '</td>';
        row += '<td>' + data[staff].IsStaff + '</td>';
        row += '</tr>';
        rowID++;
        table += row;
}
table += '</table>';
$('#displayTable').html(table);
}
```
This is my dynamically generated table: the first column is 'Name', which lists all the staff; the second column is 'Sum of ill staff', which should show how many times a staff member was ill within the specific dates; and the final column is 'Sum of not ill staff', which should show how many times a staff member was not ill.
**Q1** - What would my SQL query be for counting these staff numbers?
**Q2** - How do I add up all the 1's and display the total in my table? | 2014/01/28 | [
"https://Stackoverflow.com/questions/21412021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
| Why don't you compute the values in the SQL?
```
SqlCommand command = new SqlCommand(@"SELECT StaffID, Name,
        SUM(CAST(IsStaff AS INT)) AS IllCount,
        SUM(CASE WHEN IsStaff = 1 THEN 0 ELSE 1 END) AS NotIllCount
        FROM TableName
        WHERE CAST([Time] AS DATE) > @Time
        AND CAST([Time] AS DATE) < @Time2
        GROUP BY StaffID, Name
        ORDER BY Name", conn);
```
or use Linq to get the values computed from the list of Staff. | To count all the numbers of 1's and 0's between a specific date's use:
```
SELECT StaffID, count(IsStaff) From TableName
WHERE Cast([Time] AS DATE) > @Time
AND CAST([Time] AS DATE) < @Time2
GROUP BY StaffID
ORDER BY count(IsStaff) desc
```
To get the total number of times a specific staff member has been ill, use:
```
SELECT StaffID, SUM(CAST(IsStaff AS INT)) FROM TableName
WHERE Cast([Time] AS DATE) > @Time
AND CAST([Time] AS DATE) < @Time2
GROUP BY StaffID
ORDER BY SUM(CAST(IsStaff AS INT)) DESC
``` |
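As a sanity check for the grouped sums in the queries above, here is a rough in-memory equivalent in Python (the `(staff_id, name, is_staff)` row layout is an assumption for illustration, not taken from the actual table):

```python
from collections import defaultdict


def group_counts(rows):
    """rows: iterable of (staff_id, name, is_staff) with is_staff in {0, 1}.

    Returns {(staff_id, name): (ill_days, well_days)}, mirroring
    SUM(IsStaff) and SUM(CASE WHEN IsStaff = 1 THEN 0 ELSE 1 END).
    """
    totals = defaultdict(lambda: [0, 0])
    for staff_id, name, is_staff in rows:
        totals[(staff_id, name)][0] += is_staff              # ill days
        totals[(staff_id, name)][1] += 0 if is_staff else 1  # well days
    return {key: tuple(counts) for key, counts in totals.items()}
```

Each key of the result corresponds to one output row of the GROUP BY query, with the two tuple entries matching the two aggregate columns.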
45,860,963 | As the title says, I want to remove the `/presta` from my URL.
I can access it now at `domain.com/presta`, but if I change the Base URL under `Preferences > SEO & URL` to "/", I can't access the site anymore, and if I open the admin panel it loads without the GUI, just plain text.
Seems a little bit confusing.
Thanks in advance, and sorry for any newbie mistakes; I've been working on this the whole day and it's my first try with Presta. | 2017/08/24 | [
"https://Stackoverflow.com/questions/45860963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6161289/"
]
| The problem is that your `Function1` and `Function2` (horrible names!) return `Unit`, while `getValue` is declared to return a `String`.
A declaration of a function looks like `def functionName(args): ReturnType = { ... }`
Some parts of this can be omitted, and then defaults are assumed.
In your case, you omitted the `ReturnType` declaration, and (more importantly) the `=` sign. When there is no `=` before the function body, the function will always return `Unit`. If you want it to return a `String`, you need to add an `=` before the body, and make sure that the last statement in the body is indeed a `String`.
Additionally, the default `case` clause does not return anything. This does not work, because, again, `getValue` is declared to return a `String`. You need to either throw an exception in that case, or think of a default value to return (empty string?) or else use `Options`, like the other answer suggests. | `Function1` and `Function2` should return strings. Also, you cannot use `println`, since its result is `Unit`; in this case, you can throw an exception instead:
```
def getValue(x: Any): String = x match {
case "Value1"=> function1(1)
case "Value2"=> function2(2)
case _ => throw new IllegalArgumentException("This is an invalid value")
}
def function1(v: Int): String = {
// SOME STUF THAT RETURNS STRING
}
def function2(v: Int): String = {
// SOME STUF THAT RETURNS STRING
}
``` |
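The same pitfall — a branch that prints instead of returning a value — exists in most languages. A quick illustration in Python rather than Scala, purely as an analogy (in Python, a function with no `return` yields `None`, the rough counterpart of `Unit`):

```python
def get_value_bad(x):
    if x == "Value1":
        print("one")  # prints, but the function still returns None


def get_value_good(x):
    if x == "Value1":
        return "one"
    if x == "Value2":
        return "two"
    raise ValueError("This is an invalid value")
```

The "good" version makes every branch produce a string (or raise), which is exactly what the Scala fix above enforces through the declared return type.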
15,363,595 | I'm practicing ACM problems to become a better programmer, but I'm still fairly new to C++ and I'm having trouble interpreting some of the judge's code I'm reading. The beginning of a class starts with
```
public:
State(int n) : _n(n), _p(2*n+1)
{
```
and then later it's initialized with
```
State s(n);
s(0,0) = 1;
```
I'm trying to read the code but I can't make sense of that. The State class only seems to have 1 argument passed, but the programmer is passing 2 in his initialization. Also, what exactly is being set = to 1? As far as I can tell, the = operator isn't being overloaded but just in case I missed something I've included the full code below.
Any help would be greatly appreciated.
Thanks in advance
```
/*
* D - Maximum Random Walk solution
* ICPC 2012 Greater NY Regional
* Solution by Adam Florence
* Problem by Adam Florence
*/
#include <cstdio> // for printf
#include <cstdlib> // for exit
#include <algorithm> // for max
#include <iostream>
#include <vector>
using namespace std;
class State
{
public:
State(int n) : _n(n), _p(2*n+1)
{
if (n < 1)
{
cout << "Ctor error, n = " << n << endl;
exit(1);
}
for (int i = -n; i <= n; ++i)
_p.at(i+_n) = vector<double>(n+1, 0.0);
}
void zero(const int n)
{
for (int i = -n; i < n; ++i)
for (int m = 0; m <= n; ++m)
_p[i+_n][m] = 0;
}
double operator()(int i, int m) const
{
#ifdef DEBUG
if ((i < -_n) || (i > _n))
{
cout << "Out of range error, i = " << i << ", n = " << _n << endl;
exit(1);
}
if ((m < 0) || (m > _n))
{
cout << "Out of range error, m = " << m << ", n = " << _n << endl;
exit(1);
}
#endif
return _p[i+_n][m];
}
double& operator()(int i, int m)
{
#ifdef DEBUG
if ((i < -_n) || (i > _n))
{
cout << "Out of range error, i = " << i << ", n = " << _n << endl;
exit(1);
}
if ((m < 0) || (m > _n))
{
cout << "Out of range error, m = " << m << ", n = " << _n << endl;
exit(1);
}
#endif
return _p[i+_n][m];
}
static int min(int x, int y)
{
return(x < y ? x : y);
}
static int max(int x, int y)
{
return(x > y ? x : y);
}
private:
int _n;
// First index is the current position, from -n to n.
// Second index is the maximum position so far, from 0 to n.
// Value is probability.
vector< vector<double> > _p;
};
void go(int ds)
{
// Read n, l, r
int n, nds;
double l, r;
cin >> nds >> n >> l >> r;
const double c = 1 - l - r;
if(nds != ds){
cout << "Dataset number " << nds << " does not match " << ds << endl;
return;
}
// Initialize state, probability 1 at (0,0)
State s(n);
s(0,0) = 1;
State t(n);
State* p1 = &s;
State* p2 = &t;
for (int k = 1; k <= n; ++k)
{
// Compute probabilities at step k
p2->zero(k);
// At step k, the farthest from the origin you can be is k
for (int i = -k; i <= k; ++i)
{
const int mm = State::min( State::max(0, i+k), k);
for (int m = 0; m <= mm; ++m)
{
// At step k-1, p = probability of (i,m)
const double p = p1->operator()(i,m);
if (p > 0)
{
// Step left
p2->operator()(i-1, m) += p*l;
// Step right
p2->operator()(i+1, State::max(i+1,m)) += p*r;
// Stay put
p2->operator()(i, m) += p*c;
}
}
}
swap(p1, p2);
}
// Compute expected maximum position
double p = 0;
for (int i = -n; i <= n; ++i)
for (int m = 0; m <= n; ++m)
p += m * p1->operator()(i,m);
printf("%d %0.4f\n", ds, p);
}
int main(int argc, char* argv[])
{
// Read number of data sets to process
int num;
cin >> num;
// Process each data set identically
for (int i = 1; i <= num; ++i)
go(i);
// We're done
return 0;
}
``` | 2013/03/12 | [
"https://Stackoverflow.com/questions/15363595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1874415/"
]
| You are confusing a call to `state::operator()(int, int)` with an initialization. That operator call lets you set the value of an element of the class instance.
```
State s(n); // this is the only initialization
s(0,0) = 1; // this calls operator()(int, int) on instance s
``` | In this line:
```
s(0,0) = 1;
```
it's calling this:
```
double& operator()(int i, int m)
```
and because it returns a reference to a double, you can assign to it. |
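For contrast with other languages: Python has no reference-returning call operator, so the same read/write pair is usually expressed with `__getitem__`/`__setitem__` instead. A rough analogy to the `State` class above (an illustration, not a translation of the judge's code):

```python
class State:
    """2-D probability table indexed by (position i in [-n, n], max m in [0, n])."""

    def __init__(self, n):
        self.n = n
        self._p = [[0.0] * (n + 1) for _ in range(2 * n + 1)]

    def __getitem__(self, key):          # read access, like the const operator()
        i, m = key
        return self._p[i + self.n][m]

    def __setitem__(self, key, value):   # write access, like double& operator()
        i, m = key
        self._p[i + self.n][m] = value


s = State(3)
s[0, 0] = 1  # analogous to s(0,0) = 1 in the C++ code
```

In C++ the single `double& operator()` covers both reads and writes because the returned reference is assignable; Python splits the two directions into separate special methods.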
15,363,595 | I'm practicing ACM problems to become a better programmer, but I'm still fairly new to C++ and I'm having trouble interpreting some of the judge's code I'm reading. The beginning of a class starts with
```
public:
State(int n) : _n(n), _p(2*n+1)
{
```
and then later it's initialized with
```
State s(n);
s(0,0) = 1;
```
I'm trying to read the code but I can't make sense of that. The State class only seems to have 1 argument passed, but the programmer is passing 2 in his initialization. Also, what exactly is being set = to 1? As far as I can tell, the = operator isn't being overloaded but just in case I missed something I've included the full code below.
Any help would be greatly appreciated.
Thanks in advance
```
/*
* D - Maximum Random Walk solution
* ICPC 2012 Greater NY Regional
* Solution by Adam Florence
* Problem by Adam Florence
*/
#include <cstdio> // for printf
#include <cstdlib> // for exit
#include <algorithm> // for max
#include <iostream>
#include <vector>
using namespace std;
class State
{
public:
State(int n) : _n(n), _p(2*n+1)
{
if (n < 1)
{
cout << "Ctor error, n = " << n << endl;
exit(1);
}
for (int i = -n; i <= n; ++i)
_p.at(i+_n) = vector<double>(n+1, 0.0);
}
void zero(const int n)
{
for (int i = -n; i < n; ++i)
for (int m = 0; m <= n; ++m)
_p[i+_n][m] = 0;
}
double operator()(int i, int m) const
{
#ifdef DEBUG
if ((i < -_n) || (i > _n))
{
cout << "Out of range error, i = " << i << ", n = " << _n << endl;
exit(1);
}
if ((m < 0) || (m > _n))
{
cout << "Out of range error, m = " << m << ", n = " << _n << endl;
exit(1);
}
#endif
return _p[i+_n][m];
}
double& operator()(int i, int m)
{
#ifdef DEBUG
if ((i < -_n) || (i > _n))
{
cout << "Out of range error, i = " << i << ", n = " << _n << endl;
exit(1);
}
if ((m < 0) || (m > _n))
{
cout << "Out of range error, m = " << m << ", n = " << _n << endl;
exit(1);
}
#endif
return _p[i+_n][m];
}
static int min(int x, int y)
{
return(x < y ? x : y);
}
static int max(int x, int y)
{
return(x > y ? x : y);
}
private:
int _n;
// First index is the current position, from -n to n.
// Second index is the maximum position so far, from 0 to n.
// Value is probability.
vector< vector<double> > _p;
};
void go(int ds)
{
// Read n, l, r
int n, nds;
double l, r;
cin >> nds >> n >> l >> r;
const double c = 1 - l - r;
if(nds != ds){
cout << "Dataset number " << nds << " does not match " << ds << endl;
return;
}
// Initialize state, probability 1 at (0,0)
State s(n);
s(0,0) = 1;
State t(n);
State* p1 = &s;
State* p2 = &t;
for (int k = 1; k <= n; ++k)
{
// Compute probabilities at step k
p2->zero(k);
// At step k, the farthest from the origin you can be is k
for (int i = -k; i <= k; ++i)
{
const int mm = State::min( State::max(0, i+k), k);
for (int m = 0; m <= mm; ++m)
{
// At step k-1, p = probability of (i,m)
const double p = p1->operator()(i,m);
if (p > 0)
{
// Step left
p2->operator()(i-1, m) += p*l;
// Step right
p2->operator()(i+1, State::max(i+1,m)) += p*r;
// Stay put
p2->operator()(i, m) += p*c;
}
}
}
swap(p1, p2);
}
// Compute expected maximum position
double p = 0;
for (int i = -n; i <= n; ++i)
for (int m = 0; m <= n; ++m)
p += m * p1->operator()(i,m);
printf("%d %0.4f\n", ds, p);
}
int main(int argc, char* argv[])
{
// Read number of data sets to process
int num;
cin >> num;
// Process each data set identically
for (int i = 1; i <= num; ++i)
go(i);
// We're done
return 0;
}
``` | 2013/03/12 | [
"https://Stackoverflow.com/questions/15363595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1874415/"
]
| You are confusing a call to `State::operator()(int, int)` with an initialization. That operator call lets you set the value of an element of the class instance.
```
State s(n); // this is the only initialization
s(0,0) = 1; // this calls operator()(int, int) on instance s
``` | The second line is not an initialization. The constructor was invoked on line 1; the second line invokes
>
> double& operator()(int i, int m)
>
>
>
with i=0 and m=0, writing 1 to the reference that is returned. |