Dataset columns: `qid` (int64, 1 to 74.7M), `question` (string, 0 to 58.3k chars), `date` (string, 10 chars), `metadata` (list), `response_j` (string, 2 to 48.3k chars), `response_k` (string, 2 to 40.5k chars).
10,784
I plan on running PEX pipes both hot and cold across 16 feet of unheated attic space above a foyer. The foyer is 16' long and is off the kitchen, it isn't heated directly but is part of the house; it is usually colder than the rest of the house, mostly due to the cat door and slider the dogs go in and out of. There is insulation along the ceiling, underneath where I will be running the pipes. What is the best way to keep the pipes from freezing? Would an insulation pipe jacket be enough to keep the water in it from freezing or should I also run a long heating cable? I also considered insulating along the roof but it is a tight crawlspace and not much room in there to work. This is in New England so some days it gets pretty cold.
2011/12/20
[ "https://diy.stackexchange.com/questions/10784", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/4651/" ]
Additional information, as this winter tests out badly routed plumbing. PEX that gets water frozen inside it stretches and expands over time. The weak point is the fittings. Depending on the fitting, it can crack, or the crimp rings can be stretched, leading to a leak from the fitting or the PEX slipping off under pressure. They don't use the type of fittings around here where the PEX is expanded to go over a plastic fitting, so I don't know how those hold up. Keep the fittings out of the freeze zone; summer-cabin owners recommend having a gravity drain so the pipe doesn't grow, or else charging the system with RV antifreeze. My former neighbor had a blowout in the soffit of his quite recently built house. A couple of weeks ago, when we had a snap down into the teens, a fitting let go. As a plumbing system, PEX is a lot more forgiving, but not invulnerable. If you do freeze up, remember that PEX is an insulator: after **heating the containing area**, it will take longer for the water in it to melt. Be careful with any sort of direct heat application (should I have to say no torches?). Why water lines are being run through a soffit area, only the builder knows. Cheapness or ignorance; either is unforgivable.
If you are using PEX, you can run it through your attic. PEX will NOT burst unless you live in a place that gets like 50 below. And it does not freeze as readily as metal pipe. We used PEX once and ended up replumbing our entire house with it. You can put foam insulation tubing on it to make you feel more secure, but you do need to give it some slack, which causes it to snake. The foam tubing insulation works great, and even under our house - which is open - our PEX has been fine.
10,784
I plan on running PEX pipes both hot and cold across 16 feet of unheated attic space above a foyer. The foyer is 16' long and is off the kitchen, it isn't heated directly but is part of the house; it is usually colder than the rest of the house, mostly due to the cat door and slider the dogs go in and out of. There is insulation along the ceiling, underneath where I will be running the pipes. What is the best way to keep the pipes from freezing? Would an insulation pipe jacket be enough to keep the water in it from freezing or should I also run a long heating cable? I also considered insulating along the roof but it is a tight crawlspace and not much room in there to work. This is in New England so some days it gets pretty cold.
2011/12/20
[ "https://diy.stackexchange.com/questions/10784", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/4651/" ]
Pex in the attic simply needs to be run BELOW the insulation. Put it against the ceiling drywall, and it will never get particularly cold. The problem is, lots of installers don't do this. My contractor actually went to some trouble to hang the pex up high. I had to go through and undo all the clamps and put it down below the insulation, but it wasn't too big a deal. Get it LOW and make sure there's more insulation above it than below it, and it will be fine in all but the most ridiculous of climates.
My research on the topic:

1. Use PEX-A - it's better designed to flex. PEX is not guaranteed not to freeze or rupture, so use the better PEX grade; if possible avoid grades B and C, and use PEX-A if you have to go this route.
2. It will freeze and certainly can rupture, but is less likely to than copper pipe.
3. Make sure to have a good manifold system with individual line shutoffs.
4. Have an indoor water-supply cutoff and an indoor pressure-relief valve on or below the manifold to relieve pressure on the tubing during deep freezes; simply allowing some of the water lines to drip may not be sufficient, as a vacuum may prevent all lines from being relieved of pressure.
5. Stay out of the attic if possible; but if you must go through the attic, insulate the tubing well, and pay attention to #3 and #4 above.
6. Keep a lot of towels ready, just in case.
10,784
I plan on running PEX pipes both hot and cold across 16 feet of unheated attic space above a foyer. The foyer is 16' long and is off the kitchen, it isn't heated directly but is part of the house; it is usually colder than the rest of the house, mostly due to the cat door and slider the dogs go in and out of. There is insulation along the ceiling, underneath where I will be running the pipes. What is the best way to keep the pipes from freezing? Would an insulation pipe jacket be enough to keep the water in it from freezing or should I also run a long heating cable? I also considered insulating along the roof but it is a tight crawlspace and not much room in there to work. This is in New England so some days it gets pretty cold.
2011/12/20
[ "https://diy.stackexchange.com/questions/10784", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/4651/" ]
Pex in the attic simply needs to be run BELOW the insulation. Put it against the ceiling drywall, and it will never get particularly cold. The problem is, lots of installers don't do this. My contractor actually went to some trouble to hang the pex up high. I had to go through and undo all the clamps and put it down below the insulation, but it wasn't too big a deal. Get it LOW and make sure there's more insulation above it than below it, and it will be fine in all but the most ridiculous of climates.
Additional information, as this winter tests out badly routed plumbing. PEX that gets water frozen inside it stretches and expands over time. The weak point is the fittings. Depending on the fitting, it can crack, or the crimp rings can be stretched, leading to a leak from the fitting or the PEX slipping off under pressure. They don't use the type of fittings around here where the PEX is expanded to go over a plastic fitting, so I don't know how those hold up. Keep the fittings out of the freeze zone; summer-cabin owners recommend having a gravity drain so the pipe doesn't grow, or else charging the system with RV antifreeze. My former neighbor had a blowout in the soffit of his quite recently built house. A couple of weeks ago, when we had a snap down into the teens, a fitting let go. As a plumbing system, PEX is a lot more forgiving, but not invulnerable. If you do freeze up, remember that PEX is an insulator: after **heating the containing area**, it will take longer for the water in it to melt. Be careful with any sort of direct heat application (should I have to say no torches?). Why water lines are being run through a soffit area, only the builder knows. Cheapness or ignorance; either is unforgivable.
63,878,170
The code:

```
if __name__ == '__main__':
    n = int(input())
    arr = list(map(int, input().rstrip().split()))
    for i in range(n-1):
        arr += list(map(int, input().rstrip().split()))
    arr = arr[::-1]
    for i in arr:
        print(i, '', end='')
```

The error which I get:

```
Compiler Message
Runtime Error
Error (stderr)
Traceback (most recent call last):
  File "Solution.py", line 16, in <module>
    arr+=list(map(int, input().rstrip().split()))
EOFError: EOF when reading a line
```

Please correct me if I am going wrong somewhere, as I am a beginner and self-taught.
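A common cause of this `EOFError` is that the input has fewer lines than the loop expects (for example, all the numbers arriving on a single line). One robust sketch - hypothetical, not taken from the thread; the function name `reversed_array` is mine - is to read every token at once and ignore how the input is split across lines:

```python
def reversed_array(text):
    # The first token is the declared count n; the remaining tokens are
    # the array elements, however they happen to be split across lines.
    tokens = text.split()
    n = int(tokens[0])
    arr = list(map(int, tokens[1:1 + n]))
    return arr[::-1]
```

With `import sys`, the whole input can then be passed in as `reversed_array(sys.stdin.read())`, so no individual `input()` call can hit end-of-file early.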
2020/09/14
[ "https://Stackoverflow.com/questions/63878170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14032073/" ]
You can try the script below:

```
SELECT id,
       MAX(CASE WHEN name = 'car1' THEN name END) car1,
       MAX(CASE WHEN name = 'car2' THEN name END) car2,
       MAX(CASE WHEN name = 'car3' THEN name END) car3
FROM your_table
GROUP BY id
```
You can go for the PIVOT feature.

```sql
;WITH src as
(
    SELECT *
    FROM (
        VALUES (1, 'Car1', 'nissan'),
               (1, 'Car2', 'audi'),
               (1, 'Car3', 'toyota')
    ) as t (id, name, value)
)
SELECT *
FROM src
PIVOT (
    max(VALUE) FOR NAME IN ([Car1], [Car2], [Car3])
) as pvt
```

```
+----+--------+------+--------+
| id | Car1   | Car2 | Car3   |
+----+--------+------+--------+
| 1  | nissan | audi | toyota |
+----+--------+------+--------+
```
51,940,312
I found a function which performs the same as `strcmp`, but I was unable to see where the comparison `s1 == s2` is happening. I need help. Thanks.

```
int MyStrcmp (const char *s1, const char *s2) {
  int i;
  for (i = 0; s1[i] != 0 && s2[i] != 0; i++) {
    if (s1[i] > s2[i]) return +1;
    if (s1[i] < s2[i]) return -1;
  }
  if (s1[i] != 0) return +1;
  if (s2[i] != 0) return -1;
  return 0;
}
```
2018/08/21
[ "https://Stackoverflow.com/questions/51940312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10146920/" ]
If `s1 == s2`, it also means that the lengths of the two strings are equal. Keeping this in mind, going through the for loop, none of the if statements in the loop is ever true. Therefore, we exit the for loop with `s1[i] == s2[i] == 0` and `i` set to the length of the strings. Now, for the remaining two if statements, neither condition is true, since `s1[i] == s2[i] == 0`. The code thus returns `0`.
Indeed, we cannot see `==` anywhere, because the function uses a process of elimination: it tries to filter out all the inequality cases. The first half of the function compares each character of `s1` and `s2`; if any pair of characters is unequal, the function finishes and returns the corresponding comparison result. The second half checks whether `s1` and `s2` have the same length; if their lengths are not equal, it returns the corresponding comparison result. Finally, having excluded every inequality case, its verdict is 'equal'.
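To see the elimination logic in one place, here is a line-for-line Python mirror of the C function (an illustrative sketch; the name `my_strcmp` is mine, not from the thread):

```python
def my_strcmp(s1, s2):
    # Walk both strings together; any character mismatch decides the result.
    i = 0
    while i < len(s1) and i < len(s2):
        if s1[i] > s2[i]:
            return 1
        if s1[i] < s2[i]:
            return -1
        i += 1
    # Identical prefix: the longer string compares greater.
    if i < len(s1):
        return 1
    if i < len(s2):
        return -1
    # Every inequality case has been excluded, so the strings are equal.
    return 0
```

`my_strcmp("abc", "abc")` returns `0` even though no `==` on whole strings ever runs.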
42,879,075
**Question:**

> Write a function called `sumDigits`.
>
> Given a number, `sumDigits` returns the sum of all its digits.
>
> `var output = sumDigits(1148);`
> `console.log(output); // --> 14`
>
> If the number is negative, the first digit should count as negative.
>
> `var output = sumDigits(-316);`
> `console.log(output); // --> 4`

This is what I currently have coded and it works for positive values, but I can't wrap my head around how to tackle the problem when given a negative value. When `-316` is put into the function, `NaN` is returned, and I understand that when I `toString().split('')` the number, this is what is returned: `['-', '3', '1', '6']`. How do I deal with combining index 0 and 1?

```
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

sumDigits(1148);
```

Any hints on what methods I should be using? And is there a smarter way to even look at this?
2017/03/18
[ "https://Stackoverflow.com/questions/42879075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7711351/" ]
This should do it:

```js
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    if (newString[i] === '-') { // check to see if the first char is -
      i++;                      // if it is, let's move to the negative number
      var converted = parseInt(newString[i]); // parse negative number
      total -= converted;       // subtract value from total
      continue;                 // move to the next item in the loop
    }
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

console.log(sumDigits(-316));
```
One way to do this is to do a split that keeps the minus and the first digit together, not split apart. You can do that with a regular expression, using `match` instead of `split`:

```
var newString = num.toString().match(/-?\d/g);
```

```js
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().match(/-?\d/g);
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

var result = sumDigits(-316);
console.log(result);
```

In a bit shorter version, you could use `map` and `reduce`, like this:

```js
function sumDigits(num) {
  return String(num).match(/-?\d/g).map(Number).reduce((a, b) => a + b);
}

console.log(sumDigits(-316));
```
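The same regex idea carries over directly to Python (a sketch for comparison, assuming integer input; the name `sum_digits` is mine):

```python
import re

def sum_digits(num):
    # '-?\d' keeps a leading minus attached to the digit that follows it,
    # so the first digit of a negative number is parsed as negative.
    return sum(int(tok) for tok in re.findall(r'-?\d', str(num)))
```

For `-316`, `re.findall` yields `['-3', '1', '6']`, giving the required sum of 4.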
42,879,075
**Question:**

> Write a function called `sumDigits`.
>
> Given a number, `sumDigits` returns the sum of all its digits.
>
> `var output = sumDigits(1148);`
> `console.log(output); // --> 14`
>
> If the number is negative, the first digit should count as negative.
>
> `var output = sumDigits(-316);`
> `console.log(output); // --> 4`

This is what I currently have coded and it works for positive values, but I can't wrap my head around how to tackle the problem when given a negative value. When `-316` is put into the function, `NaN` is returned, and I understand that when I `toString().split('')` the number, this is what is returned: `['-', '3', '1', '6']`. How do I deal with combining index 0 and 1?

```
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

sumDigits(1148);
```

Any hints on what methods I should be using? And is there a smarter way to even look at this?
2017/03/18
[ "https://Stackoverflow.com/questions/42879075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7711351/" ]
This should do it:

```js
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    if (newString[i] === '-') { // check to see if the first char is -
      i++;                      // if it is, let's move to the negative number
      var converted = parseInt(newString[i]); // parse negative number
      total -= converted;       // subtract value from total
      continue;                 // move to the next item in the loop
    }
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

console.log(sumDigits(-316));
```
You could always use `String#replace` with a function as a parameter:

```js
function sumDigits(n) {
  var total = 0
  n.toFixed().replace(/-?\d/g, function (d) { total += +d })
  return total
}

console.log(sumDigits(-1148)) //=> 12
```
42,879,075
**Question:**

> Write a function called `sumDigits`.
>
> Given a number, `sumDigits` returns the sum of all its digits.
>
> `var output = sumDigits(1148);`
> `console.log(output); // --> 14`
>
> If the number is negative, the first digit should count as negative.
>
> `var output = sumDigits(-316);`
> `console.log(output); // --> 4`

This is what I currently have coded and it works for positive values, but I can't wrap my head around how to tackle the problem when given a negative value. When `-316` is put into the function, `NaN` is returned, and I understand that when I `toString().split('')` the number, this is what is returned: `['-', '3', '1', '6']`. How do I deal with combining index 0 and 1?

```
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

sumDigits(1148);
```

Any hints on what methods I should be using? And is there a smarter way to even look at this?
2017/03/18
[ "https://Stackoverflow.com/questions/42879075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7711351/" ]
This should do it:

```js
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    if (newString[i] === '-') { // check to see if the first char is -
      i++;                      // if it is, let's move to the negative number
      var converted = parseInt(newString[i]); // parse negative number
      total -= converted;       // subtract value from total
      continue;                 // move to the next item in the loop
    }
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

console.log(sumDigits(-316));
```
> Is there a smarter way to even look at this?

You can avoid the conversion from number to string and back by using the modulo operator to extract the last digit. Repeat this step until you have got all the digits:

```js
function sumDigits(num) {
  let total = 0, digit = 0;
  while (num != 0) {
    total += digit = num % 10;
    num = (num - digit) * 0.1;
  }
  return total < 0 ? digit + digit - total : total;
}

console.log(sumDigits(-316));  // 4
console.log(sumDigits(1148));  // 14
console.log(sumDigits(Number.MAX_SAFE_INTEGER)); // 76
```
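Note that the modulo trick does not port verbatim: in JavaScript `%` keeps the sign of the dividend, while in Python `%` returns a non-negative remainder for a positive divisor. A Python sketch of the same idea (hypothetical, not from the thread) therefore works on the absolute value and corrects the leading digit's sign at the end:

```python
def sum_digits(num):
    # Extract the digits of |num| with the modulo operator.
    n, total, digit = abs(num), 0, 0
    while n:
        digit = n % 10   # after the loop this holds the most significant digit
        total += digit
        n //= 10
    if num < 0:
        # The leading digit should have counted as negative:
        # subtract it twice to flip its contribution.
        total -= 2 * digit
    return total
```

For `-316` the loop yields 6, 1, 3 (total 10), and subtracting `2 * 3` gives 4.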
42,879,075
**Question:**

> Write a function called `sumDigits`.
>
> Given a number, `sumDigits` returns the sum of all its digits.
>
> `var output = sumDigits(1148);`
> `console.log(output); // --> 14`
>
> If the number is negative, the first digit should count as negative.
>
> `var output = sumDigits(-316);`
> `console.log(output); // --> 4`

This is what I currently have coded and it works for positive values, but I can't wrap my head around how to tackle the problem when given a negative value. When `-316` is put into the function, `NaN` is returned, and I understand that when I `toString().split('')` the number, this is what is returned: `['-', '3', '1', '6']`. How do I deal with combining index 0 and 1?

```
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

sumDigits(1148);
```

Any hints on what methods I should be using? And is there a smarter way to even look at this?
2017/03/18
[ "https://Stackoverflow.com/questions/42879075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7711351/" ]
This should do it:

```js
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    if (newString[i] === '-') { // check to see if the first char is -
      i++;                      // if it is, let's move to the negative number
      var converted = parseInt(newString[i]); // parse negative number
      total -= converted;       // subtract value from total
      continue;                 // move to the next item in the loop
    }
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

console.log(sumDigits(-316));
```
```
function sumDigits(num) {
  let string = num.toString();
  let zero = 0;
  let total = 0;
  for (var i = 0; i < string.length; i++) {
    if (Math.sign(num) === 1) {
      total = zero += Number(string[i]);
    } else {
      for (var i = 2; i < string.length; i++) {
        total = (zero += Number(string[i])) - Number(string[1]);
      }
    }
  }
  return total;
}
```
42,879,075
**Question:**

> Write a function called `sumDigits`.
>
> Given a number, `sumDigits` returns the sum of all its digits.
>
> `var output = sumDigits(1148);`
> `console.log(output); // --> 14`
>
> If the number is negative, the first digit should count as negative.
>
> `var output = sumDigits(-316);`
> `console.log(output); // --> 4`

This is what I currently have coded and it works for positive values, but I can't wrap my head around how to tackle the problem when given a negative value. When `-316` is put into the function, `NaN` is returned, and I understand that when I `toString().split('')` the number, this is what is returned: `['-', '3', '1', '6']`. How do I deal with combining index 0 and 1?

```
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

sumDigits(1148);
```

Any hints on what methods I should be using? And is there a smarter way to even look at this?
2017/03/18
[ "https://Stackoverflow.com/questions/42879075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7711351/" ]
One way to do this is to do a split that keeps the minus and the first digit together, not split apart. You can do that with a regular expression, using `match` instead of `split`:

```
var newString = num.toString().match(/-?\d/g);
```

```js
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().match(/-?\d/g);
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

var result = sumDigits(-316);
console.log(result);
```

In a bit shorter version, you could use `map` and `reduce`, like this:

```js
function sumDigits(num) {
  return String(num).match(/-?\d/g).map(Number).reduce((a, b) => a + b);
}

console.log(sumDigits(-316));
```
> Is there a smarter way to even look at this?

You can avoid the conversion from number to string and back by using the modulo operator to extract the last digit. Repeat this step until you have got all the digits:

```js
function sumDigits(num) {
  let total = 0, digit = 0;
  while (num != 0) {
    total += digit = num % 10;
    num = (num - digit) * 0.1;
  }
  return total < 0 ? digit + digit - total : total;
}

console.log(sumDigits(-316));  // 4
console.log(sumDigits(1148));  // 14
console.log(sumDigits(Number.MAX_SAFE_INTEGER)); // 76
```
42,879,075
**Question:**

> Write a function called `sumDigits`.
>
> Given a number, `sumDigits` returns the sum of all its digits.
>
> `var output = sumDigits(1148);`
> `console.log(output); // --> 14`
>
> If the number is negative, the first digit should count as negative.
>
> `var output = sumDigits(-316);`
> `console.log(output); // --> 4`

This is what I currently have coded and it works for positive values, but I can't wrap my head around how to tackle the problem when given a negative value. When `-316` is put into the function, `NaN` is returned, and I understand that when I `toString().split('')` the number, this is what is returned: `['-', '3', '1', '6']`. How do I deal with combining index 0 and 1?

```
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

sumDigits(1148);
```

Any hints on what methods I should be using? And is there a smarter way to even look at this?
2017/03/18
[ "https://Stackoverflow.com/questions/42879075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7711351/" ]
One way to do this is to do a split that keeps the minus and the first digit together, not split apart. You can do that with a regular expression, using `match` instead of `split`:

```
var newString = num.toString().match(/-?\d/g);
```

```js
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().match(/-?\d/g);
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

var result = sumDigits(-316);
console.log(result);
```

In a bit shorter version, you could use `map` and `reduce`, like this:

```js
function sumDigits(num) {
  return String(num).match(/-?\d/g).map(Number).reduce((a, b) => a + b);
}

console.log(sumDigits(-316));
```
```
function sumDigits(num) {
  let string = num.toString();
  let zero = 0;
  let total = 0;
  for (var i = 0; i < string.length; i++) {
    if (Math.sign(num) === 1) {
      total = zero += Number(string[i]);
    } else {
      for (var i = 2; i < string.length; i++) {
        total = (zero += Number(string[i])) - Number(string[1]);
      }
    }
  }
  return total;
}
```
42,879,075
**Question:**

> Write a function called `sumDigits`.
>
> Given a number, `sumDigits` returns the sum of all its digits.
>
> `var output = sumDigits(1148);`
> `console.log(output); // --> 14`
>
> If the number is negative, the first digit should count as negative.
>
> `var output = sumDigits(-316);`
> `console.log(output); // --> 4`

This is what I currently have coded and it works for positive values, but I can't wrap my head around how to tackle the problem when given a negative value. When `-316` is put into the function, `NaN` is returned, and I understand that when I `toString().split('')` the number, this is what is returned: `['-', '3', '1', '6']`. How do I deal with combining index 0 and 1?

```
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

sumDigits(1148);
```

Any hints on what methods I should be using? And is there a smarter way to even look at this?
2017/03/18
[ "https://Stackoverflow.com/questions/42879075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7711351/" ]
You could always use `String#replace` with a function as a parameter:

```js
function sumDigits(n) {
  var total = 0
  n.toFixed().replace(/-?\d/g, function (d) { total += +d })
  return total
}

console.log(sumDigits(-1148)) //=> 12
```
> Is there a smarter way to even look at this?

You can avoid the conversion from number to string and back by using the modulo operator to extract the last digit. Repeat this step until you have got all the digits:

```js
function sumDigits(num) {
  let total = 0, digit = 0;
  while (num != 0) {
    total += digit = num % 10;
    num = (num - digit) * 0.1;
  }
  return total < 0 ? digit + digit - total : total;
}

console.log(sumDigits(-316));  // 4
console.log(sumDigits(1148));  // 14
console.log(sumDigits(Number.MAX_SAFE_INTEGER)); // 76
```
42,879,075
**Question:**

> Write a function called `sumDigits`.
>
> Given a number, `sumDigits` returns the sum of all its digits.
>
> `var output = sumDigits(1148);`
> `console.log(output); // --> 14`
>
> If the number is negative, the first digit should count as negative.
>
> `var output = sumDigits(-316);`
> `console.log(output); // --> 4`

This is what I currently have coded and it works for positive values, but I can't wrap my head around how to tackle the problem when given a negative value. When `-316` is put into the function, `NaN` is returned, and I understand that when I `toString().split('')` the number, this is what is returned: `['-', '3', '1', '6']`. How do I deal with combining index 0 and 1?

```
function sumDigits(num) {
  var total = 0;
  var newString = num.toString().split('');
  for (var i = 0; i < newString.length; i++) {
    var converted = parseInt(newString[i]);
    total += converted;
  }
  return total;
}

sumDigits(1148);
```

Any hints on what methods I should be using? And is there a smarter way to even look at this?
2017/03/18
[ "https://Stackoverflow.com/questions/42879075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7711351/" ]
You could always use `String#replace` with a function as a parameter:

```js
function sumDigits(n) {
  var total = 0
  n.toFixed().replace(/-?\d/g, function (d) { total += +d })
  return total
}

console.log(sumDigits(-1148)) //=> 12
```
```
function sumDigits(num) {
  let string = num.toString();
  let zero = 0;
  let total = 0;
  for (var i = 0; i < string.length; i++) {
    if (Math.sign(num) === 1) {
      total = zero += Number(string[i]);
    } else {
      for (var i = 2; i < string.length; i++) {
        total = (zero += Number(string[i])) - Number(string[1]);
      }
    }
  }
  return total;
}
```
3,328,005
Let $X$ be a compact metric space. Take a sequence $\{\mu\_n\}\_{n=1}^\infty$ of Borel probability measures on $X$. Assume that this sequence converges (weak-$\ast$) to a Borel probability measure $\mu$ on $X$. Let $A$ be a Borel subset of $X$ such that $\mu\_n(A)=0$ for all $n\geq 1$. Is it necessarily true that $\mu(A)=0$?
2019/08/19
[ "https://math.stackexchange.com/questions/3328005", "https://math.stackexchange.com", "https://math.stackexchange.com/users/634463/" ]
No: take $X=[0,1]$, $\mu\_n=\delta\_{1/n}$, $\mu=\delta\_0$, and $A=\{0\}$.
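Spelling out why this counterexample works: for every $n\ge 1$,
$$\mu\_n(A)=\delta\_{1/n}(\{0\})=0,$$
while for any $f\in C([0,1])$,
$$\int f\,d\mu\_n=f(1/n)\to f(0)=\int f\,d\delta\_0,$$
so $\mu\_n\to\delta\_0$ weak-$\ast$ by continuity of $f$; yet $\mu(A)=\delta\_0(\{0\})=1\neq 0$.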
If $\mu\_n\to\mu$ weak-$\ast$, there's not much that can be said about convergence of $\mu\_n(A)$. If I recall correctly (assuming, of course, we're talking about regular Borel measures): 1. If $A$ is compact then $\mu(A)\ge\limsup\mu\_n(A)$. 2. If $A$ is open then $\mu(A)\le\liminf\mu\_n(A)$, and I think that's about the whole story. Thanks to @Mindlack for pointing out something I should have included here: 3. Hence, if $\mu(\partial A)=0$ then $\mu(A)=\lim\mu\_n(A)$. Proof: $A\cup\partial A=\overline A$, so, noting that $\mu\_n(\overline A)\ge\mu\_n(A)$, (1) shows that $$\mu(A)=\mu(\overline A)\ge\limsup\mu\_n(\overline A)\ge\limsup\mu\_n(A).$$ Similarly $A\setminus\partial A=A^0$, the interior of $A$, so (2) shows $$\mu(A)=\mu(A^0)\le\liminf\mu\_n(A^0)\le\liminf\mu\_n(A).$$ And for any sequence $t\_n$, if $\limsup t\_n\le\liminf t\_n$ then $(t\_n)$ is convergent, with limit $\limsup t\_n=\liminf t\_n$.
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just to this application). Obviously it's slow on application\_startup, which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy, with 0.5ms page loads (we are using `Mini-Profiler`). Now, if you stop using the site for, say, 5-10 minutes (we have the app pool recycle set to 2 hours, and we are logging, so we know that it hasn't been recycled), then the first page load is ridiculously slow, 10-15 seconds, but then you can navigate around again without issue (0.5ms). It is not the `SQL queries` that are slow, as all queries seem to work fine after the first page hit, even ones that haven't been run yet, so nothing is being cached anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to pre-generate the EF views, but this has not helped. Looking at `SQL Server Profiler`, after 5 minutes (give or take 30 seconds) with no activity in the profiler and no site interaction, a couple of "Audit Logout" entries appear for the application, and as soon as that happens it seems to take 10-15 seconds to refresh the application. Is there an idle timeout on `SQL Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
Are you using an LMHOSTS file? We had the same issue. The LMHOSTS file cache expires after 10 minutes by default. After the system had been sitting idle for 10 minutes, the host would use a broadcast message before reloading the LMHOSTS file, causing the delay.
Since it is the first run (per app?) that is slow, you may be experiencing the compilation of the EDMX or of the LINQ into SQL. Possible solutions:

1. Use precompiled views and precompiled queries (may require a lot of refactoring). <http://msdn.microsoft.com/en-us/library/bb896240.aspx> <http://blogs.msdn.com/b/dmcat/archive/2010/04/21/isolating-performance-with-precompiled-pre-generated-views-in-the-entity-framework-4.aspx> <http://msdn.microsoft.com/en-us/library/bb896297.aspx> <http://msdn.microsoft.com/en-us/magazine/ee336024.aspx>
2. Dry-run all your queries on app start (before the first request is received). You can run queries with fake input (e.g. non-existent zero keys) on a default connection string (which can point to an empty database). Just make sure you don't throw exceptions (use SingleOrDefault() instead of Single(), and handle null results and 0-length list results on .ToList()).
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just for this application). Obviously it's slow on application\_startup which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy 0.5ms page loads (we are using `Mini-Profiler`). Now if you stop using the site for say 5 - 10 minutes (we have the app pool recycle set to 2 hours and we are logging so we know that it hasn't been recycled) then the first page load is ridiculously slow, 10 - 15 seconds, but then you can navigate around again without issue (0.5ms). This is not `SQL queries` that are slow as all queries seem to work fine after the first page hit even if they haven't been run yet so not caching anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to Pre generate EF views but this has not helped. It seems after looking at `Sql Server Profiler` after 5 minutes give or take 30 seconds with no activity in Sql Server Profiler and no site interaction a couple of "Audit Logout" entries appear for the application and as soon as that happens it then seems to take 10 - 15 seconds to refresh the application. Is there an idle timeout on `Sql Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
> > It seems after looking at Sql Server Profiler after 5 minutes give or > take 30 seconds with no activity in Sql Server Profiler and no site > interaction a couple of "Audit Logout" entries appear for the > application and as soon as that happens it then seems to take 10 - 15 > seconds to refresh the application. Is there an idle timeout on Sql > Server? > > > This is telling me that the issue most likely lies with your SQL server and/or the connection to it rather than with your application. SQL server uses [connection pooling](http://msdn.microsoft.com/en-us/library/8xx3tyca%28v=vs.80%29.aspx) and SQL will scavenge these pools every so often and clean them up. The delay you appear to be experiencing is when your connection pools have been cleaned up (the audit logouts) and you are having to establish a new connection. I would talk/work with your SQL database people. For testing, do you have access to a local/dev copy of the database not running on the same SQL server as your production app? If not, try and get one set up and see if you suffer from the same issues.
I would run "SQL Server Profiler" against SQL Server and capture a new trace while reproducing the problem by accessing the site after it has been idle for 5-10 minutes. After that, look at the trace. Specifically, look for entries in which ApplicationName starts with "EntityFramework...". This will tell you what EF is doing at that moment. There could be some issue with custom caching on top of EF, or some session state that is expiring (check the sessionState timeout in web.config).
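If session-state expiry does turn out to be involved, the timeout lives in web.config; the 60-minute value below is only an example, not a recommendation:

```
<system.web>
  <sessionState mode="InProc" timeout="60" />
</system.web>
```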
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just for this application). Obviously it's slow on application\_startup which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy 0.5ms page loads (we are using `Mini-Profiler`). Now if you stop using the site for say 5 - 10 minutes (we have the app pool recycle set to 2 hours and we are logging so we know that it hasn't been recycled) then the first page load is ridiculously slow, 10 - 15 seconds, but then you can navigate around again without issue (0.5ms). This is not `SQL queries` that are slow as all queries seem to work fine after the first page hit even if they haven't been run yet so not caching anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to Pre generate EF views but this has not helped. It seems after looking at `Sql Server Profiler` after 5 minutes give or take 30 seconds with no activity in Sql Server Profiler and no site interaction a couple of "Audit Logout" entries appear for the application and as soon as that happens it then seems to take 10 - 15 seconds to refresh the application. Is there an idle timeout on `Sql Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
This should work if you use the below in your connection string: ``` server=MyServer;database=MyDatabase;Min Pool Size=1;Max Pool Size=100 ``` It will force your connection pool to always maintain at least one connection. I must say I don't recommend this (persistent connections) but it will solve your problem.
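If you prefer building the string in code rather than hard-coding it, `SqlConnectionStringBuilder` exposes the same pool settings; the server and database names below are placeholders:

```csharp
using System;
using System.Data.SqlClient;

class PoolConfigDemo
{
    static void Main()
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "MyServer",       // placeholder
            InitialCatalog = "MyDatabase", // placeholder
            IntegratedSecurity = true,
            MinPoolSize = 1,               // keep at least one pooled connection alive
            MaxPoolSize = 100
        };
        // The generated string includes "Min Pool Size=1;Max Pool Size=100".
        Console.WriteLine(builder.ConnectionString);
    }
}
```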
Try to override your timeout in the web.config as in this example: ``` Data Source=mydatabase;Initial Catalog=Match;Persist Security Info=True ;User ID=User;Password=password;Connection Timeout=120 ``` If it works, this is not a solution, just a workaround.
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just for this application). Obviously it's slow on application\_startup which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy 0.5ms page loads (we are using `Mini-Profiler`). Now if you stop using the site for say 5 - 10 minutes (we have the app pool recycle set to 2 hours and we are logging so we know that it hasn't been recycled) then the first page load is ridiculously slow, 10 - 15 seconds, but then you can navigate around again without issue (0.5ms). This is not `SQL queries` that are slow as all queries seem to work fine after the first page hit even if they haven't been run yet so not caching anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to Pre generate EF views but this has not helped. It seems after looking at `Sql Server Profiler` after 5 minutes give or take 30 seconds with no activity in Sql Server Profiler and no site interaction a couple of "Audit Logout" entries appear for the application and as soon as that happens it then seems to take 10 - 15 seconds to refresh the application. Is there an idle timeout on `Sql Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
> > It seems after looking at Sql Server Profiler after 5 minutes give or > take 30 seconds with no activity in Sql Server Profiler and no site > interaction a couple of "Audit Logout" entries appear for the > application and as soon as that happens it then seems to take 10 - 15 > seconds to refresh the application. Is there an idle timeout on Sql > Server? > > > This is telling me that the issue most likely lies with your SQL server and/or the connection to it rather than with your application. SQL server uses [connection pooling](http://msdn.microsoft.com/en-us/library/8xx3tyca%28v=vs.80%29.aspx) and SQL will scavenge these pools every so often and clean them up. The delay you appear to be experiencing is when your connection pools have been cleaned up (the audit logouts) and you are having to establish a new connection. I would talk/work with your SQL database people. For testing, do you have access to a local/dev copy of the database not running on the same SQL server as your production app? If not, try and get one set up and see if you suffer from the same issues.
Try to override your timeout in the web.config as in this example: ``` Data Source=mydatabase;Initial Catalog=Match;Persist Security Info=True ;User ID=User;Password=password;Connection Timeout=120 ``` If it works, this is not a solution, just a workaround.
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just for this application). Obviously it's slow on application\_startup which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy 0.5ms page loads (we are using `Mini-Profiler`). Now if you stop using the site for say 5 - 10 minutes (we have the app pool recycle set to 2 hours and we are logging so we know that it hasn't been recycled) then the first page load is ridiculously slow, 10 - 15 seconds, but then you can navigate around again without issue (0.5ms). This is not `SQL queries` that are slow as all queries seem to work fine after the first page hit even if they haven't been run yet so not caching anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to Pre generate EF views but this has not helped. It seems after looking at `Sql Server Profiler` after 5 minutes give or take 30 seconds with no activity in Sql Server Profiler and no site interaction a couple of "Audit Logout" entries appear for the application and as soon as that happens it then seems to take 10 - 15 seconds to refresh the application. Is there an idle timeout on `Sql Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
Are you using an LMHOSTS file? We had the same issue. The LMHOSTS file cache expires after 10 minutes by default. After the system has been sitting idle for 10 minutes, the host would use a Broadcast message before reloading the LMHOSTS file, causing the delay.
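If LMHOSTS is in play, entries tagged `#PRE` are preloaded into the NetBIOS name cache and are not subject to the 10-minute expiry. The address and host name below are placeholders; after editing the file, reload the cache with `nbtstat -R`:

```
192.168.1.10    SQLSRV01    #PRE
```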
In our case the application was hosted in an Azure App Service Plan and had a similar problem. It turned out to be a problem of not configuring the virtual network. See the question/answer here - [EF Core 3.1.14 Recurring Cold Start](https://stackoverflow.com/questions/67409483/ef-core-3-1-14-recurring-cold-start/67445410#67445410)
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just for this application). Obviously it's slow on application\_startup which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy 0.5ms page loads (we are using `Mini-Profiler`). Now if you stop using the site for say 5 - 10 minutes (we have the app pool recycle set to 2 hours and we are logging so we know that it hasn't been recycled) then the first page load is ridiculously slow, 10 - 15 seconds, but then you can navigate around again without issue (0.5ms). This is not `SQL queries` that are slow as all queries seem to work fine after the first page hit even if they haven't been run yet so not caching anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to Pre generate EF views but this has not helped. It seems after looking at `Sql Server Profiler` after 5 minutes give or take 30 seconds with no activity in Sql Server Profiler and no site interaction a couple of "Audit Logout" entries appear for the application and as soon as that happens it then seems to take 10 - 15 seconds to refresh the application. Is there an idle timeout on `Sql Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
This should work if you use the below in your connection string: ``` server=MyServer;database=MyDatabase;Min Pool Size=1;Max Pool Size=100 ``` It will force your connection pool to always maintain at least one connection. I must say I don't recommend this (persistent connections) but it will solve your problem.
Are you using an LMHOSTS file? We had the same issue. The LMHOSTS file cache expires after 10 minutes by default. After the system has been sitting idle for 10 minutes, the host would use a Broadcast message before reloading the LMHOSTS file, causing the delay.
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just for this application). Obviously it's slow on application\_startup which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy 0.5ms page loads (we are using `Mini-Profiler`). Now if you stop using the site for say 5 - 10 minutes (we have the app pool recycle set to 2 hours and we are logging so we know that it hasn't been recycled) then the first page load is ridiculously slow, 10 - 15 seconds, but then you can navigate around again without issue (0.5ms). This is not `SQL queries` that are slow as all queries seem to work fine after the first page hit even if they haven't been run yet so not caching anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to Pre generate EF views but this has not helped. It seems after looking at `Sql Server Profiler` after 5 minutes give or take 30 seconds with no activity in Sql Server Profiler and no site interaction a couple of "Audit Logout" entries appear for the application and as soon as that happens it then seems to take 10 - 15 seconds to refresh the application. Is there an idle timeout on `Sql Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
> > It seems after looking at Sql Server Profiler after 5 minutes give or > take 30 seconds with no activity in Sql Server Profiler and no site > interaction a couple of "Audit Logout" entries appear for the > application and as soon as that happens it then seems to take 10 - 15 > seconds to refresh the application. Is there an idle timeout on Sql > Server? > > > This is telling me that the issue most likely lies with your SQL server and/or the connection to it rather than with your application. SQL server uses [connection pooling](http://msdn.microsoft.com/en-us/library/8xx3tyca%28v=vs.80%29.aspx) and SQL will scavenge these pools every so often and clean them up. The delay you appear to be experiencing is when your connection pools have been cleaned up (the audit logouts) and you are having to establish a new connection. I would talk/work with your SQL database people. For testing, do you have access to a local/dev copy of the database not running on the same SQL server as your production app? If not, try and get one set up and see if you suffer from the same issues.
Create a simple webpage that accesses the SQL Server with a trivial query like "Select getDate()" or some other cheap query. Then use an external service like Pingdom or another monitor to hit that page every 30 seconds or so. That should keep the connections warm.
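One possible sketch of such a page as an MVC action — the connection string name "Default" is a placeholder, and the query is deliberately trivial:

```csharp
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web.Mvc;

public class KeepAliveController : Controller
{
    // Hit by the external monitor; opening the connection and running one
    // cheap query keeps a pooled connection from going cold.
    public ActionResult Index()
    {
        var cs = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand("SELECT GETDATE()", conn))
        {
            conn.Open();
            var now = (DateTime)cmd.ExecuteScalar();
            return Content(now.ToString("o")); // cheap payload for the monitor
        }
    }
}
```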
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just for this application). Obviously it's slow on application\_startup which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy 0.5ms page loads (we are using `Mini-Profiler`). Now if you stop using the site for say 5 - 10 minutes (we have the app pool recycle set to 2 hours and we are logging so we know that it hasn't been recycled) then the first page load is ridiculously slow, 10 - 15 seconds, but then you can navigate around again without issue (0.5ms). This is not `SQL queries` that are slow as all queries seem to work fine after the first page hit even if they haven't been run yet so not caching anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to Pre generate EF views but this has not helped. It seems after looking at `Sql Server Profiler` after 5 minutes give or take 30 seconds with no activity in Sql Server Profiler and no site interaction a couple of "Audit Logout" entries appear for the application and as soon as that happens it then seems to take 10 - 15 seconds to refresh the application. Is there an idle timeout on `Sql Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
This should work if you use the below in your connection string: ``` server=MyServer;database=MyDatabase;Min Pool Size=1;Max Pool Size=100 ``` It will force your connection pool to always maintain at least one connection. I must say I don't recommend this (persistent connections) but it will solve your problem.
Since it is the first run (per app?) that is slow, you may be experiencing the compilation of the EDMX or the LINQ into SQL. Possible solutions: 1. Use precompiled views and precompiled queries (may require a lot of refactoring). <http://msdn.microsoft.com/en-us/library/bb896240.aspx> <http://blogs.msdn.com/b/dmcat/archive/2010/04/21/isolating-performance-with-precompiled-pre-generated-views-in-the-entity-framework-4.aspx> <http://msdn.microsoft.com/en-us/library/bb896297.aspx> <http://msdn.microsoft.com/en-us/magazine/ee336024.aspx> 2. Dry run all your queries on app start (before the first request is received). You can run queries with fake input (e.g. non-existent zero keys) on a default connection string (it can point to an empty database). Just make sure you don't throw exceptions (use SingleOrDefault() instead of Single() and handle null results and 0-length list results on .ToList()).
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just for this application). Obviously it's slow on application\_startup which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy 0.5ms page loads (we are using `Mini-Profiler`). Now if you stop using the site for say 5 - 10 minutes (we have the app pool recycle set to 2 hours and we are logging so we know that it hasn't been recycled) then the first page load is ridiculously slow, 10 - 15 seconds, but then you can navigate around again without issue (0.5ms). This is not `SQL queries` that are slow as all queries seem to work fine after the first page hit even if they haven't been run yet so not caching anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to Pre generate EF views but this has not helped. It seems after looking at `Sql Server Profiler` after 5 minutes give or take 30 seconds with no activity in Sql Server Profiler and no site interaction a couple of "Audit Logout" entries appear for the application and as soon as that happens it then seems to take 10 - 15 seconds to refresh the application. Is there an idle timeout on `Sql Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
> > It seems after looking at Sql Server Profiler after 5 minutes give or > take 30 seconds with no activity in Sql Server Profiler and no site > interaction a couple of "Audit Logout" entries appear for the > application and as soon as that happens it then seems to take 10 - 15 > seconds to refresh the application. Is there an idle timeout on Sql > Server? > > > This is telling me that the issue most likely lies with your SQL server and/or the connection to it rather than with your application. SQL server uses [connection pooling](http://msdn.microsoft.com/en-us/library/8xx3tyca%28v=vs.80%29.aspx) and SQL will scavenge these pools every so often and clean them up. The delay you appear to be experiencing is when your connection pools have been cleaned up (the audit logouts) and you are having to establish a new connection. I would talk/work with your SQL database people. For testing, do you have access to a local/dev copy of the database not running on the same SQL server as your production app? If not, try and get one set up and see if you suffer from the same issues.
Since it is the first run (per app?) that is slow, you may be experiencing the compilation of the EDMX or the LINQ into SQL. Possible solutions: 1. Use precompiled views and precompiled queries (may require a lot of refactoring). <http://msdn.microsoft.com/en-us/library/bb896240.aspx> <http://blogs.msdn.com/b/dmcat/archive/2010/04/21/isolating-performance-with-precompiled-pre-generated-views-in-the-entity-framework-4.aspx> <http://msdn.microsoft.com/en-us/library/bb896297.aspx> <http://msdn.microsoft.com/en-us/magazine/ee336024.aspx> 2. Dry run all your queries on app start (before the first request is received). You can run queries with fake input (e.g. non-existent zero keys) on a default connection string (it can point to an empty database). Just make sure you don't throw exceptions (use SingleOrDefault() instead of Single() and handle null results and 0-length list results on .ToList()).
10,700,957
We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an `ASP.NET MVC C#` website using `EF4 POCO` in `IIS 7` (highly specced servers, dedicated just for this application). Obviously it's slow on application\_startup which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy 0.5ms page loads (we are using `Mini-Profiler`). Now if you stop using the site for say 5 - 10 minutes (we have the app pool recycle set to 2 hours and we are logging so we know that it hasn't been recycled) then the first page load is ridiculously slow, 10 - 15 seconds, but then you can navigate around again without issue (0.5ms). This is not `SQL queries` that are slow as all queries seem to work fine after the first page hit even if they haven't been run yet so not caching anywhere either. We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to Pre generate EF views but this has not helped. It seems after looking at `Sql Server Profiler` after 5 minutes give or take 30 seconds with no activity in Sql Server Profiler and no site interaction a couple of "Audit Logout" entries appear for the application and as soon as that happens it then seems to take 10 - 15 seconds to refresh the application. Is there an idle timeout on `Sql Server`?
2012/05/22
[ "https://Stackoverflow.com/questions/10700957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351711/" ]
> > It seems after looking at Sql Server Profiler after 5 minutes give or > take 30 seconds with no activity in Sql Server Profiler and no site > interaction a couple of "Audit Logout" entries appear for the > application and as soon as that happens it then seems to take 10 - 15 > seconds to refresh the application. Is there an idle timeout on Sql > Server? > > > This is telling me that the issue most likely lies with your SQL server and/or the connection to it rather than with your application. SQL server uses [connection pooling](http://msdn.microsoft.com/en-us/library/8xx3tyca%28v=vs.80%29.aspx) and SQL will scavenge these pools every so often and clean them up. The delay you appear to be experiencing is when your connection pools have been cleaned up (the audit logouts) and you are having to establish a new connection. I would talk/work with your SQL database people. For testing, do you have access to a local/dev copy of the database not running on the same SQL server as your production app? If not, try and get one set up and see if you suffer from the same issues.
In our case the application was hosted in an Azure App Service Plan and had a similar problem. It turned out to be a problem of not configuring the virtual network. See the question/answer here - [EF Core 3.1.14 Recurring Cold Start](https://stackoverflow.com/questions/67409483/ef-core-3-1-14-recurring-cold-start/67445410#67445410)
62,223,854
I have been struggling for the past hours trying to upload an image to firestore storage but I can't make it... The image seems to be corrupted once on Firestore ``` func (fs *FS) Upload(fileInput []byte, fileName string) error { ctx, cancel := context.WithTimeout(context.Background(), fs.defaultTransferTimeout) defer cancel() bucket, err := fs.client.DefaultBucket() if err != nil { return err } object := bucket.Object(fileName) writer := object.NewWriter(ctx) defer writer.Close() if _, err := io.Copy(writer, bytes.NewReader(fileInput)); err != nil { return err } if err := object.ACL().Set(context.Background(), storage.AllUsers, storage.RoleReader); err != nil { return err } return nil } ``` I get no error but once uploaded... I get this: [![enter image description here](https://i.stack.imgur.com/fpEB1.png)](https://i.stack.imgur.com/fpEB1.png) Meanwhile on Google Cloud Storage: [![enter image description here](https://i.stack.imgur.com/7fISi.png)](https://i.stack.imgur.com/7fISi.png) Any thoughts?
2020/06/05
[ "https://Stackoverflow.com/questions/62223854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6093604/" ]
You can iterate over the outer array two at a time. Then, iterate over the inner ones, and save the semester in the first array with the corresponding course in the second array in a `JSON` object as key-value pairs. You need to make sure that the length of the outer array is even, and that the lengths of the paired inner arrays are equal. I am assuming that semesters are unique; otherwise, you need to store them as objects in an array. ```js const content = [ ['Spring 2017', 'Spring 2018', 'Spring 2019', 'Spring 2020'], ['Calc 1', 'Calc 2', 'Economics 1', 'Psychology 1'], ['Summer 2017', 'Summer 2018', 'Summer 2019', 'Summer 2020'], ['Swimming', 'English 1', 'History 1', 'Cooking 1'] ]; let coursesPerSemester = {}; if(content.length%2==0){ for(let i = 0; i < content.length; i+=2){ let semesters = content[i]; if(content[i+1].length == semesters.length){ for(let j = 0; j < semesters.length; j++){ coursesPerSemester[semesters[j]] = content[i+1][j]; } } } console.log(coursesPerSemester); } ```
A for loop processing two sets of arrays at once. It assumes that paired arrays have the same length. ```js const content = [ ['Spring 2017', 'Spring 2018', 'Spring 2019', 'Spring 2020'], ['Calc 1', 'Calc 2', 'Economics 1', 'Psychology 1'], ['Summer 2017', 'Summer 2018', 'Summer 2019', 'Summer 2020','x'], ['Swimming', 'English 1', 'History 1', 'Cooking 1','y'] ] const res = [] for(let i = 0; i < content.length; i+=2) for(let j = 0; j < content[i].length; j++) res.push({term: content[i][j], name: content[i+1][j]}) console.log(res) ```
62,614,793
I'd like to extend the CaseIterable protocol so that all CaseIterable enums have a `random` static var that returns a random case. This is the code I've tried ``` public extension CaseIterable { static var random<T: CaseIterable>: T { let allCases = self.allCases return allCases[Int.random(n: allCases.count)] } } ``` But this fails to compile. Is there a way to achieve this using a static var? Or if not, how would I write the equivalent static func? ps Int.random extension for anyone playing along at home: ``` public extension Int { static func random(n: Int) -> Int { return Int(arc4random_uniform(UInt32(n))) } } ```
2020/06/27
[ "https://Stackoverflow.com/questions/62614793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3261886/" ]
So I figured it out. When adding a TensorFlow model to help with the object detection, it apparently has to contain metadata (so that when you call "getLabels()" and its associated methods, it will actually return a label; otherwise it will return nothing and cause errors). [![MLKit screen showing metadata requirement](https://i.stack.imgur.com/epZXn.png)](https://i.stack.imgur.com/epZXn.png) This is the one I used that worked: mobilenet\_v1\_0.50\_192\_quantized\_1\_metadata\_1.tflite
To answer your #3 question: > > Once it detects an object, the app won't "change" (i.e when I move the phone, to try to detect another object, nothing in the display changes. > > > I'm guessing this is caused by the fact that your `imageProxy.close()` needs to be part of an OnCompleteListener, else it will cause various threading issues and possibly lead to the blocking of any additional images from being processed that you mention. i.e.: Change this: ```java objectDetector .process(image) .addOnFailureListener(new OnFailureListener() { @Override public void onFailure(@NonNull Exception e) { //Toast.makeText(getApplicationContext(), "Fail. Sad!", Toast.LENGTH_SHORT).show(); //textView.setText("Fail. Sad!"); imageProxy.close(); } }) .addOnSuccessListener(new OnSuccessListener<List<DetectedObject>>() { @Override public void onSuccess(List<DetectedObject> results) { for (DetectedObject detectedObject : results) { Rect box = detectedObject.getBoundingBox(); for (DetectedObject.Label label : detectedObject.getLabels()) { String text = label.getText(); int index = label.getIndex(); float confidence = label.getConfidence(); textView.setText(text); }} mediaImage.close(); imageProxy.close(); } }); ``` to this: ```java objectDetector .process(image) .addOnFailureListener(new OnFailureListener() { @Override public void onFailure(@NonNull Exception e) { //Toast.makeText(getApplicationContext(), "Fail. Sad!", Toast.LENGTH_SHORT).show(); //textView.setText("Fail. Sad!"); imageProxy.close(); } }) .addOnSuccessListener(new OnSuccessListener<List<DetectedObject>>() { @Override public void onSuccess(List<DetectedObject> results) { for (DetectedObject detectedObject : results) { Rect box = detectedObject.getBoundingBox(); for (DetectedObject.Label label : detectedObject.getLabels()) { String text = label.getText(); int index = label.getIndex(); float confidence = label.getConfidence(); textView.setText(text); }} } }).addOnCompleteListener(new OnCompleteListener<List<Barcode>>() { @Override public void onComplete(@NonNull Task<List<Barcode>> task) { imageProxy.close(); } }); ``` Note I did not check the accuracy of your curly brace locations/levels, so ensure you have those correct as well. I had similar issues, among others all related to this missing OnCompleteListener. See the original reasoning I found [here](https://stackoverflow.com/a/63022377/2760299) and how this applies more specifically to the Task object created by your `objectDetector.process(image)`, or in my case `Task<List<Barcode>> result = scanner.process(image)` [here](https://stackoverflow.com/a/63242768/2760299) for details.
34,079,561
I am trying to do something like this: ``` string foo = "Hello, this is a string"; //and then search for it. Kind of like this string foo2 = foo.Substring(0,2); //then return the rest. Like for example foo2 returns "He". //I want it to return the rest "llo, this is a string" ``` Thanks.
2015/12/04
[ "https://Stackoverflow.com/questions/34079561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5635030/" ]
**Xcode 8 • Swift 3** ``` extension Collection where Iterator.Element == UInt8 { var bytes: [UInt8] { return Array(self) } var data: Data { return Data(self) } var string: String? { return String(data: data, encoding: .utf8) } } extension String { var data: Data { return Data(utf8) } } ``` usage: ``` let sentence = "Hello World" let utf8View = sentence.utf8 let bytes = utf8View.bytes // [72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100] let data1 = sentence.data print(data1 as NSData) // <48656c6c 6f20576f 726c64> let data2 = utf8View.data let data3 = bytes.data let string1 = utf8View.string // "Hello World" let string2 = bytes.string // "Hello World" let string3 = data1.string // "Hello World" ```
I actually ended up needing to do this for a stream of `UInt8` and was curious how hard utf8 decoding is. It's definitely not a one liner, but I threw the following direct implementation together: ``` import UIKit let bytes:[UInt8] = [0xE2, 0x82, 0xEC, 0x00] var g = bytes.generate() extension String { init(var utf8stream:IndexingGenerator<[UInt8]>) { var result = "" var codepoint:UInt32 = 0 while let byte = utf8stream.next() where byte != 0x00 { codepoint = UInt32(byte) var extraBytes = 0 if byte & 0b11100000 == 0b11000000 { extraBytes = 1 codepoint &= 0b00011111 } else if byte & 0b11110000 == 0b11100000 { extraBytes = 2 codepoint &= 0b00001111 } else if byte & 0b11111000 == 0b11110000 { extraBytes = 3 codepoint &= 0b00000111 } else if byte & 0b11111100 == 0b11111000 { extraBytes = 4 codepoint &= 0b00000011 } else if byte & 0b11111110 == 0b11111100 { extraBytes = 5 codepoint &= 0b00000001 } for _ in 0..<extraBytes { if let additionalByte = utf8stream.next() { codepoint <<= 6 codepoint |= UInt32(additionalByte & 0b00111111) } } result.append(UnicodeScalar(codepoint)) } self = result } } String(utf8stream: g) ```
34,079,561
I am trying to do something like this: ``` string foo = "Hello, this is a string"; //and then search for it. Kind of like this string foo2 = foo.Substring(0,2); //then return the rest. Like for example foo2 returns "He". //I want it to return the rest "llo, this is a string" ``` Thanks.
2015/12/04
[ "https://Stackoverflow.com/questions/34079561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5635030/" ]
``` let buffUInt8: Array<UInt8> = [97, 98, 115, 100, 114, 102, 103, 104, 0] // you need an Int8 array let buffInt8 = buffUInt8.map{ Int8(bitPattern: $0)} let str = String.fromCString(buffInt8) // "absdrfgh" ``` Alternatively, you can use ``` String.fromCStringRepairingIllFormedUTF8(cs: UnsafePointer<CChar>) -> (String?, hadError: Bool) ```
I actually ended up needing to do this for a stream of `UInt8` and was curious how hard UTF-8 decoding is. It's definitely not a one-liner, but I threw together the following direct implementation: ``` import UIKit let bytes:[UInt8] = [0xE2, 0x82, 0xEC, 0x00] var g = bytes.generate() extension String { init(var utf8stream:IndexingGenerator<[UInt8]>) { var result = "" var codepoint:UInt32 = 0 while let byte = utf8stream.next() where byte != 0x00 { codepoint = UInt32(byte) var extraBytes = 0 if byte & 0b11100000 == 0b11000000 { extraBytes = 1 codepoint &= 0b00011111 } else if byte & 0b11110000 == 0b11100000 { extraBytes = 2 codepoint &= 0b00001111 } else if byte & 0b11111000 == 0b11110000 { extraBytes = 3 codepoint &= 0b00000111 } else if byte & 0b11111100 == 0b11111000 { extraBytes = 4 codepoint &= 0b00000011 } else if byte & 0b11111110 == 0b11111100 { extraBytes = 5 codepoint &= 0b00000001 } for _ in 0..<extraBytes { if let additionalByte = utf8stream.next() { codepoint <<= 6 codepoint |= UInt32(additionalByte & 0b00111111) } } result.append(UnicodeScalar(codepoint)) } self = result } } String(utf8stream: g) ```
5,679,880
What does this do? ``` const wchar_t *s = L"test"; ``` If wchar\_t is two bytes on my machine, then why should we tell the compiler that the string should be treated in a way that each element is long i.e, *four* bytes in size?
2011/04/15
[ "https://Stackoverflow.com/questions/5679880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570078/" ]
The `L` means that string is a string of `wchar_t` characters, rather than the normal string of `char` characters. I'm not sure where you got the bit about four bytes from. From the spec section **6.4.5 String literals**, paragraph 2: > > A *character string literal* is a sequence of zero or more multibyte characters enclosed in double-quotes, as in **`"xyz"`**. A *wide string literal* is the same, except prefixed by the letter **`L`**. > > > And an excerpt from paragraph 5: > > For character string literals, the array elements have type **`char`**, and are initialized with the individual bytes of the multibyte character sequence; for wide string literals, the array elements have type **`wchar_t`**, and are initialized with the sequence of wide characters corresponding to the multibyte character > sequence, as defined by the **`mbstowcs`** function with an implementation-defined current locale. > > >
It indicates a string of wide characters, of type `wchar_t`.
5,679,880
What does this do? ``` const wchar_t *s = L"test"; ``` If wchar\_t is two bytes on my machine, then why should we tell the compiler that the string should be treated in a way that each element is long i.e, *four* bytes in size?
2011/04/15
[ "https://Stackoverflow.com/questions/5679880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570078/" ]
The `L` means that string is a string of `wchar_t` characters, rather than the normal string of `char` characters. I'm not sure where you got the bit about four bytes from. From the spec section **6.4.5 String literals**, paragraph 2: > > A *character string literal* is a sequence of zero or more multibyte characters enclosed in double-quotes, as in **`"xyz"`**. A *wide string literal* is the same, except prefixed by the letter **`L`**. > > > And an excerpt from paragraph 5: > > For character string literals, the array elements have type **`char`**, and are initialized with the individual bytes of the multibyte character sequence; for wide string literals, the array elements have type **`wchar_t`**, and are initialized with the sequence of wide characters corresponding to the multibyte character > sequence, as defined by the **`mbstowcs`** function with an implementation-defined current locale. > > >
If you don't know what that `L` does, then why are you making an assertive statement about each array element being `long` ("four bytes in size")? Where did that idea with the `long` come from? That `L` has as much relation to `long` as it has to "leprechaun" - no relation at all. The `L` prefix means that the following string literal consists of wide characters, i.e. each character has `wchar_t` type. P.S. Finally, it is always a good idea to use const-qualified pointers when pointing to string literals: `const wchar_t *s = L"test";`.
5,679,880
What does this do? ``` const wchar_t *s = L"test"; ``` If wchar\_t is two bytes on my machine, then why should we tell the compiler that the string should be treated in a way that each element is long i.e, *four* bytes in size?
2011/04/15
[ "https://Stackoverflow.com/questions/5679880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570078/" ]
The `L` means that string is a string of `wchar_t` characters, rather than the normal string of `char` characters. I'm not sure where you got the bit about four bytes from. From the spec section **6.4.5 String literals**, paragraph 2: > > A *character string literal* is a sequence of zero or more multibyte characters enclosed in double-quotes, as in **`"xyz"`**. A *wide string literal* is the same, except prefixed by the letter **`L`**. > > > And an excerpt from paragraph 5: > > For character string literals, the array elements have type **`char`**, and are initialized with the individual bytes of the multibyte character sequence; for wide string literals, the array elements have type **`wchar_t`**, and are initialized with the sequence of wide characters corresponding to the multibyte character > sequence, as defined by the **`mbstowcs`** function with an implementation-defined current locale. > > >
`L` does not mean `long integer` when prefixing a string. It means each character in the string is a wide character. Without this prefix, you are assigning a string of `char` to a `wchar_t` pointer, which would be a mismatch.
5,679,880
What does this do? ``` const wchar_t *s = L"test"; ``` If wchar\_t is two bytes on my machine, then why should we tell the compiler that the string should be treated in a way that each element is long i.e, *four* bytes in size?
2011/04/15
[ "https://Stackoverflow.com/questions/5679880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570078/" ]
If in doubt, consult the standard (§6.4.5, String Literals): > > A *character string literal* is a > sequence of zero or more multibyte > characters enclosed in double-quotes, > as in `"xyz"`. A *wide string literal* is > the same, except prefixed by the > letter `L`. > > > Note that it **does not** indicate that each character is a `long`, despite being prefixed with the same letter as the `long` literal suffix.
It indicates a string of wide characters, of type `wchar_t`.
5,679,880
What does this do? ``` const wchar_t *s = L"test"; ``` If wchar\_t is two bytes on my machine, then why should we tell the compiler that the string should be treated in a way that each element is long i.e, *four* bytes in size?
2011/04/15
[ "https://Stackoverflow.com/questions/5679880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570078/" ]
`L` does not mean `long integer` when prefixing a string. It means each character in the string is a wide character. Without this prefix, you are assigning a string of `char` to a `wchar_t` pointer, which would be a mismatch.
It indicates a string of wide characters, of type `wchar_t`.
5,679,880
What does this do? ``` const wchar_t *s = L"test"; ``` If wchar\_t is two bytes on my machine, then why should we tell the compiler that the string should be treated in a way that each element is long i.e, *four* bytes in size?
2011/04/15
[ "https://Stackoverflow.com/questions/5679880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570078/" ]
If in doubt, consult the standard (§6.4.5, String Literals): > > A *character string literal* is a > sequence of zero or more multibyte > characters enclosed in double-quotes, > as in `"xyz"`. A *wide string literal* is > the same, except prefixed by the > letter `L`. > > > Note that it **does not** indicate that each character is a `long`, despite being prefixed with the same letter as the `long` literal suffix.
If you don't know what that `L` does, then why are you making an assertive statement about each array element being `long` ("four bytes in size")? Where did that idea with the `long` come from? That `L` has as much relation to `long` as it has to "leprechaun" - no relation at all. The `L` prefix means that the following string literal consists of wide characters, i.e. each character has `wchar_t` type. P.S. Finally, it is always a good idea to use const-qualified pointers when pointing to string literals: `const wchar_t *s = L"test";`.
5,679,880
What does this do? ``` const wchar_t *s = L"test"; ``` If wchar\_t is two bytes on my machine, then why should we tell the compiler that the string should be treated in a way that each element is long i.e, *four* bytes in size?
2011/04/15
[ "https://Stackoverflow.com/questions/5679880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570078/" ]
If in doubt, consult the standard (§6.4.5, String Literals): > > A *character string literal* is a > sequence of zero or more multibyte > characters enclosed in double-quotes, > as in `"xyz"`. A *wide string literal* is > the same, except prefixed by the > letter `L`. > > > Note that it **does not** indicate that each character is a `long`, despite being prefixed with the same letter as the `long` literal suffix.
`L` does not mean `long integer` when prefixing a string. It means each character in the string is a wide character. Without this prefix, you are assigning a string of `char` to a `wchar_t` pointer, which would be a mismatch.
5,679,880
What does this do? ``` const wchar_t *s = L"test"; ``` If wchar\_t is two bytes on my machine, then why should we tell the compiler that the string should be treated in a way that each element is long i.e, *four* bytes in size?
2011/04/15
[ "https://Stackoverflow.com/questions/5679880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570078/" ]
`L` does not mean `long integer` when prefixing a string. It means each character in the string is a wide character. Without this prefix, you are assigning a string of `char` to a `wchar_t` pointer, which would be a mismatch.
If you don't know what that `L` does, then why are you making an assertive statement about each array element being `long` ("four bytes in size")? Where did that idea with the `long` come from? That `L` has as much relation to `long` as it has to "leprechaun" - no relation at all. The `L` prefix means that the following string literal consists of wide characters, i.e. each character has `wchar_t` type. P.S. Finally, it is always a good idea to use const-qualified pointers when pointing to string literals: `const wchar_t *s = L"test";`.
205,390
Is there any way to tell postfix to send ALL bounces to ONE mailbox? Right now bounces are sent to the Sender, but I would like to collect them all at one central place for further analyzing etc. I read about bounces and address rewriting, but found nothing to clearly state if this is possible or not -- to be exact: I don't want an additional bounce, I only want one bounce to be send to a centralized mailbox, NOT to the sender. Thanks a lot for your help :-)
2010/11/23
[ "https://serverfault.com/questions/205390", "https://serverfault.com", "https://serverfault.com/users/61316/" ]
Usually your intrusion detection log for a rogue IP address would list the MAC, but since it does not, you can try the following. Log onto your Cisco device. Ping the rogue IP. Of course if your ACL is blocking access, this might be problematic. ``` ping 169.254.X.X ``` This will hopefully get the device's MAC address into the ARP table of the Cisco. ``` show arp | include 169.254.X.X ``` This will list the MAC address as well as the IP it is associated with. It will look something like: ``` Internet 169.254.X.X 0 2222.aaaa.bbbb ... ``` Where 2222.aaaa.bbbb is the MAC address. Finally run: ``` show mac-address-table dynamic | include 2222.aaaa.bbbb ``` to show the port, where 2222.aaaa.bbbb is the MAC address.
``` show mac-address-table dynamic ``` That will show you MAC-to-port mappings.
601,678
I was having a discussion with someone about why 3.5" hard drive adapters don't exist that run solely off USB. Please forgive me but my electrical engineering knowledge is minimal and been a long time since I've used it for anything practical. I know typical USB 3 can provide 5V / 900mA (4.5W) DC. 12V conversion end up with 5v/12v (0.416667) x 900mA = 12V 375mA So effectively you should be able to get 9W out of each USB port. 5v 900mA or 12V 375mA (or somewhere in between, right?) Hard drives require 12V and 5V DC power. Would it be possible to run a hard drive off USB power, and how many? Looking at the Seagate Exos Product Page as an example (pg 12): <https://www.seagate.com/www-content/product-content/enterprise-hdd-fam/exos-x-16/en-us/docs/100845789f.pdf> Peak spinup current 12V: 2.04A Peak spinup current 5V: 0.90A Maximum Operating Current 12V: 0.65A Maximum Operating Current 5V: 0.43A Question 1: Assuming spinup current could use a capacitor? How to calculate time required to charge capacitor and/or what capacitor(s) to use to charge from 5V/900mA (or multiple lines) to deliver 12V/2A + 5V/0.9A? 12V / 0.65 A = 5V / 1.56A So 1.56A + 0.43A ~ 2A @ 5V Question 2: So is it correct to assume 3 USB port should be able to accomplish this? If I can understand the math I'm almost tempted to try to build such a circuit and see if I can get it to work. Maybe I'm way off base here? Thanks for any responses.
2021/12/24
[ "https://electronics.stackexchange.com/questions/601678", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/303794/" ]
If I'm not mistaken, a USB 3.1 port can even deliver 2A (nb: it might require power negotiation first). But basically, it's just a question of power: each USB port gives you 4.5W (if 900mA) or 10W (if 2A). For your hard drive, you need: 5\*0.43 + 12 \* 0.65 \* 1.3 = 12.3W (nb: I added a 1.3 multiplier for the 12V, to take into account the losses in the converter; it might be a bit more or a bit less depending on the efficiency of your converter). So basically, for normal operation, you need 3 USB ports @900mA or 2 @ 2A. Then there is the question of bringing the drive up to speed: you need 5 \* 0.9 + 12 \* 2.04 \* 1.3 = 36.3W. That's about 3 times normal operation, so if you store energy with the same current as in normal use, you will need to store it for (at least) 3 times the start-up duration (no idea how long that is). In practice, you will have trouble finding capacitors big enough to supply 2A during start-up. Basically, for a capacitor, you have: * Q=C*U (Q the charge, C the capacitance, U the voltage), which gives dQ=C*dU * I = dQ/dt (I the current, dQ the change in charge, dt the duration) So you get C = dQ/dU = I \* dt / dU. So (for the 12V), if you allow a change of voltage of dU=0.25V and you need 1s for start-up (it might be far more), and want a current of 2A, you will need a capacitance of C = 2 \* 1 / 0.25 = 8 F. You won't find any capacitor that big (I quickly checked on Digi-Key: for 12V, the biggest capacitance is 1.5 F, so you would need 6 of them; it costs more than 200€, and is 10cm diameter / 25cm high). So as @thexeno pointed out, the only way to go is to add a small battery (for example a small LiPo 1S with the corresponding converters). PS: be careful when playing with USB: if you make a mistake, you can destroy the motherboard of your computer.
You made the math wrong: from a USB 3.0 port you can get up to 5V at 900mA (4.5W), and with a converter, you can get an output of 12V at 375mA. At this point you have NO power available for the 5V rail needed by the HDD, and it is not even enough for the 12V itself. You can't power an HDD which demands 12V at 0.65A. Secondly, it's not just a capacitor that solves the problem: capacitors vary their voltage A LOT (linearly) when providing constant current, and they have very, VERY limited energy; ideally you need a battery to sustain the duration of an HDD's peak. How do you charge it then? When the HDD is off? You get the point :) You need to design a dedicated converter, which, coincidentally, is what the market already offers.
601,678
I was having a discussion with someone about why 3.5" hard drive adapters don't exist that run solely off USB. Please forgive me but my electrical engineering knowledge is minimal and been a long time since I've used it for anything practical. I know typical USB 3 can provide 5V / 900mA (4.5W) DC. 12V conversion end up with 5v/12v (0.416667) x 900mA = 12V 375mA So effectively you should be able to get 9W out of each USB port. 5v 900mA or 12V 375mA (or somewhere in between, right?) Hard drives require 12V and 5V DC power. Would it be possible to run a hard drive off USB power, and how many? Looking at the Seagate Exos Product Page as an example (pg 12): <https://www.seagate.com/www-content/product-content/enterprise-hdd-fam/exos-x-16/en-us/docs/100845789f.pdf> Peak spinup current 12V: 2.04A Peak spinup current 5V: 0.90A Maximum Operating Current 12V: 0.65A Maximum Operating Current 5V: 0.43A Question 1: Assuming spinup current could use a capacitor? How to calculate time required to charge capacitor and/or what capacitor(s) to use to charge from 5V/900mA (or multiple lines) to deliver 12V/2A + 5V/0.9A? 12V / 0.65 A = 5V / 1.56A So 1.56A + 0.43A ~ 2A @ 5V Question 2: So is it correct to assume 3 USB port should be able to accomplish this? If I can understand the math I'm almost tempted to try to build such a circuit and see if I can get it to work. Maybe I'm way off base here? Thanks for any responses.
2021/12/24
[ "https://electronics.stackexchange.com/questions/601678", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/303794/" ]
You made the math wrong: from a USB 3.0 port you can get up to 5V at 900mA (4.5W), and with a converter, you can get an output of 12V at 375mA. At this point you have NO power available for the 5V rail needed by the HDD, and it is not even enough for the 12V itself. You can't power an HDD which demands 12V at 0.65A. Secondly, it's not just a capacitor that solves the problem: capacitors vary their voltage A LOT (linearly) when providing constant current, and they have very, VERY limited energy; ideally you need a battery to sustain the duration of an HDD's peak. How do you charge it then? When the HDD is off? You get the point :) You need to design a dedicated converter, which, coincidentally, is what the market already offers.
> > I know typical USB 3 can provide 5V / 900mA (4.5W) DC. > > > That's not the power a USB 3.x port can provide "typically", that's the minimum power a USB 3.x port can provide and still meet the USB 3.x spec. I don't know if this is "typical" but I've seen many USB 3.x ports that will provide 7.5 watts, 5 volts at 1.5 amps. I've seen USB 3.x ports on computers rated for 12 watts as well, the maximum power allowed by USB-A under the spec. USB-C ports on some computers will provide 15 watts, that's using the USB-PD protocol on top of USB 3.x on the USB-C port. Drawing more than 4.5 watts from a USB 3.x port is possible while meeting the USB spec but it would be using optional extensions to the protocol. Pulling a random 3.5" drive off my shelf and I see it draws 12 volts at 0.5 amps and 5 volts at 0.35 amps, so less power than you saw. Then the adapter board will be drawing power on top of that. That's going to exceed 7.5 watts that would be available on many USB 3.x ports but should still be under 12 or 15 watts from other USB 3.x ports. I didn't see any mention of extra spin-up power on the label of this random drive from my shelf. If spin up power gets in the 30 watt range you saw then that's not going to work even under the 15 watts I've seen from USB-C. To power a 3.5" drive and USB adapter, and not run the risk of frying some USB ports, the drive and adapter needs to stay within the allowed power of the spec. First thing is the adapter needs to *ask* for power before powering up the drive. To draw even 4.5 watts requires getting permission from the host to draw that much power. To draw 12 or 15 watts means asking for more power using the optional extensions to the USB spec. Not all ports will support more than 4.5 watts, and hubs powered by the host (as opposed to a separate power supply) may provide less than a watt per port. Some hubs don't meet the USB spec and so fail to provide sufficient power for many devices. 
The number of 3.5" drive adapters that will run only off USB power will be slim to none because the market of people wanting these will be small. Even smaller will be the people that will have 3.5" drives that draw low enough power, and USB 3.x ports that can provide high enough power, that the two overlap. The people that want to comply with the USB spec will find compliance difficult while keeping costs to something people would be willing to pay. People that don't much care about compliance with the USB spec do so because compliance checking costs money, and if that's the case then they will find it cheaper to make it work with an external power supply. Devices that violate the USB spec still need to work to sell so adapters that fry USB ports from drawing 30 watts won't last long on the market.
601,678
I was having a discussion with someone about why 3.5" hard drive adapters don't exist that run solely off USB. Please forgive me but my electrical engineering knowledge is minimal and been a long time since I've used it for anything practical. I know typical USB 3 can provide 5V / 900mA (4.5W) DC. 12V conversion end up with 5v/12v (0.416667) x 900mA = 12V 375mA So effectively you should be able to get 9W out of each USB port. 5v 900mA or 12V 375mA (or somewhere in between, right?) Hard drives require 12V and 5V DC power. Would it be possible to run a hard drive off USB power, and how many? Looking at the Seagate Exos Product Page as an example (pg 12): <https://www.seagate.com/www-content/product-content/enterprise-hdd-fam/exos-x-16/en-us/docs/100845789f.pdf> Peak spinup current 12V: 2.04A Peak spinup current 5V: 0.90A Maximum Operating Current 12V: 0.65A Maximum Operating Current 5V: 0.43A Question 1: Assuming spinup current could use a capacitor? How to calculate time required to charge capacitor and/or what capacitor(s) to use to charge from 5V/900mA (or multiple lines) to deliver 12V/2A + 5V/0.9A? 12V / 0.65 A = 5V / 1.56A So 1.56A + 0.43A ~ 2A @ 5V Question 2: So is it correct to assume 3 USB port should be able to accomplish this? If I can understand the math I'm almost tempted to try to build such a circuit and see if I can get it to work. Maybe I'm way off base here? Thanks for any responses.
2021/12/24
[ "https://electronics.stackexchange.com/questions/601678", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/303794/" ]
If I'm not mistaken, a USB 3.1 port can even deliver 2A (nb: it might require power negotiation first). But basically, it's just a question of power: each USB port gives you 4.5W (if 900mA) or 10W (if 2A). For your hard drive, you need: 5\*0.43 + 12 \* 0.65 \* 1.3 = 12.3W (nb: I added a 1.3 multiplier for the 12V, to take into account the losses in the converter; it might be a bit more or a bit less depending on the efficiency of your converter). So basically, for normal operation, you need 3 USB ports @900mA or 2 @ 2A. Then there is the question of bringing the drive up to speed: you need 5 \* 0.9 + 12 \* 2.04 \* 1.3 = 36.3W. That's about 3 times normal operation, so if you store energy with the same current as in normal use, you will need to store it for (at least) 3 times the start-up duration (no idea how long that is). In practice, you will have trouble finding capacitors big enough to supply 2A during start-up. Basically, for a capacitor, you have: * Q=C*U (Q the charge, C the capacitance, U the voltage), which gives dQ=C*dU * I = dQ/dt (I the current, dQ the change in charge, dt the duration) So you get C = dQ/dU = I \* dt / dU. So (for the 12V), if you allow a change of voltage of dU=0.25V and you need 1s for start-up (it might be far more), and want a current of 2A, you will need a capacitance of C = 2 \* 1 / 0.25 = 8 F. You won't find any capacitor that big (I quickly checked on Digi-Key: for 12V, the biggest capacitance is 1.5 F, so you would need 6 of them; it costs more than 200€, and is 10cm diameter / 25cm high). So as @thexeno pointed out, the only way to go is to add a small battery (for example a small LiPo 1S with the corresponding converters). PS: be careful when playing with USB: if you make a mistake, you can destroy the motherboard of your computer.
They don't exist because they are not practical. If you need 10 watts and need to connect three USB cables just to run a single drive, and even require a complex energy-storage solution to get it started, it will be quite complex to draw power from three cables in a way that fulfills the USB specs. You can't just short the USB 5V output pins together. So it's not impossible, it can be done, but it all boils down to whether you want a complex, expensive solution that requires three free USB ports, or a cheap single-cable solution with an external power supply.
601,678
I was having a discussion with someone about why 3.5" hard drive adapters don't exist that run solely off USB. Please forgive me but my electrical engineering knowledge is minimal and been a long time since I've used it for anything practical. I know typical USB 3 can provide 5V / 900mA (4.5W) DC. 12V conversion end up with 5v/12v (0.416667) x 900mA = 12V 375mA So effectively you should be able to get 9W out of each USB port. 5v 900mA or 12V 375mA (or somewhere in between, right?) Hard drives require 12V and 5V DC power. Would it be possible to run a hard drive off USB power, and how many? Looking at the Seagate Exos Product Page as an example (pg 12): <https://www.seagate.com/www-content/product-content/enterprise-hdd-fam/exos-x-16/en-us/docs/100845789f.pdf> Peak spinup current 12V: 2.04A Peak spinup current 5V: 0.90A Maximum Operating Current 12V: 0.65A Maximum Operating Current 5V: 0.43A Question 1: Assuming spinup current could use a capacitor? How to calculate time required to charge capacitor and/or what capacitor(s) to use to charge from 5V/900mA (or multiple lines) to deliver 12V/2A + 5V/0.9A? 12V / 0.65 A = 5V / 1.56A So 1.56A + 0.43A ~ 2A @ 5V Question 2: So is it correct to assume 3 USB port should be able to accomplish this? If I can understand the math I'm almost tempted to try to build such a circuit and see if I can get it to work. Maybe I'm way off base here? Thanks for any responses.
2021/12/24
[ "https://electronics.stackexchange.com/questions/601678", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/303794/" ]
If I'm not mistaken, a USB 3.1 port can even deliver 2A (nb: it might require power negotiation first). But basically, it's just a question of power: each USB port gives you 4.5W (if 900mA) or 10W (if 2A). For your hard drive, you need: 5\*0.43 + 12 \* 0.65 \* 1.3 = 12.3W (nb: I added a 1.3 multiplier for the 12V, to take into account the losses in the converter; it might be a bit more or a bit less depending on the efficiency of your converter). So basically, for normal operation, you need 3 USB ports @900mA or 2 @ 2A. Then there is the question of bringing the drive up to speed: you need 5 \* 0.9 + 12 \* 2.04 \* 1.3 = 36.3W. That's about 3 times normal operation, so if you store energy with the same current as in normal use, you will need to store it for (at least) 3 times the start-up duration (no idea how long that is). In practice, you will have trouble finding capacitors big enough to supply 2A during start-up. Basically, for a capacitor, you have: * Q=C*U (Q the charge, C the capacitance, U the voltage), which gives dQ=C*dU * I = dQ/dt (I the current, dQ the change in charge, dt the duration) So you get C = dQ/dU = I \* dt / dU. So (for the 12V), if you allow a change of voltage of dU=0.25V and you need 1s for start-up (it might be far more), and want a current of 2A, you will need a capacitance of C = 2 \* 1 / 0.25 = 8 F. You won't find any capacitor that big (I quickly checked on Digi-Key: for 12V, the biggest capacitance is 1.5 F, so you would need 6 of them; it costs more than 200€, and is 10cm diameter / 25cm high). So as @thexeno pointed out, the only way to go is to add a small battery (for example a small LiPo 1S with the corresponding converters). PS: be careful when playing with USB: if you make a mistake, you can destroy the motherboard of your computer.
> > I know typical USB 3 can provide 5V / 900mA (4.5W) DC. > > > That's not the power a USB 3.x port can provide "typically", that's the minimum power a USB 3.x port can provide and still meet the USB 3.x spec. I don't know if this is "typical" but I've seen many USB 3.x ports that will provide 7.5 watts, 5 volts at 1.5 amps. I've seen USB 3.x ports on computers rated for 12 watts as well, the maximum power allowed by USB-A under the spec. USB-C ports on some computers will provide 15 watts, that's using the USB-PD protocol on top of USB 3.x on the USB-C port. Drawing more than 4.5 watts from a USB 3.x port is possible while meeting the USB spec but it would be using optional extensions to the protocol. Pulling a random 3.5" drive off my shelf and I see it draws 12 volts at 0.5 amps and 5 volts at 0.35 amps, so less power than you saw. Then the adapter board will be drawing power on top of that. That's going to exceed 7.5 watts that would be available on many USB 3.x ports but should still be under 12 or 15 watts from other USB 3.x ports. I didn't see any mention of extra spin-up power on the label of this random drive from my shelf. If spin up power gets in the 30 watt range you saw then that's not going to work even under the 15 watts I've seen from USB-C. To power a 3.5" drive and USB adapter, and not run the risk of frying some USB ports, the drive and adapter needs to stay within the allowed power of the spec. First thing is the adapter needs to *ask* for power before powering up the drive. To draw even 4.5 watts requires getting permission from the host to draw that much power. To draw 12 or 15 watts means asking for more power using the optional extensions to the USB spec. Not all ports will support more than 4.5 watts, and hubs powered by the host (as opposed to a separate power supply) may provide less than a watt per port. Some hubs don't meet the USB spec and so fail to provide sufficient power for many devices. 
The number of 3.5" drive adapters that will run only off USB power will be slim to none because the market of people wanting these will be small. Even smaller will be the people that will have 3.5" drives that draw low enough power, and USB 3.x ports that can provide high enough power, that the two overlap. The people that want to comply with the USB spec will find compliance difficult while keeping costs to something people would be willing to pay. People that don't much care about compliance with the USB spec do so because compliance checking costs money, and if that's the case then they will find it cheaper to make it work with an external power supply. Devices that violate the USB spec still need to work to sell so adapters that fry USB ports from drawing 30 watts won't last long on the market.
601,678
I was having a discussion with someone about why 3.5" hard drive adapters don't exist that run solely off USB. Please forgive me but my electrical engineering knowledge is minimal and been a long time since I've used it for anything practical. I know typical USB 3 can provide 5V / 900mA (4.5W) DC. 12V conversion end up with 5v/12v (0.416667) x 900mA = 12V 375mA So effectively you should be able to get 9W out of each USB port. 5v 900mA or 12V 375mA (or somewhere in between, right?) Hard drives require 12V and 5V DC power. Would it be possible to run a hard drive off USB power, and how many? Looking at the Seagate Exos Product Page as an example (pg 12): <https://www.seagate.com/www-content/product-content/enterprise-hdd-fam/exos-x-16/en-us/docs/100845789f.pdf> Peak spinup current 12V: 2.04A Peak spinup current 5V: 0.90A Maximum Operating Current 12V: 0.65A Maximum Operating Current 5V: 0.43A Question 1: Assuming spinup current could use a capacitor? How to calculate time required to charge capacitor and/or what capacitor(s) to use to charge from 5V/900mA (or multiple lines) to deliver 12V/2A + 5V/0.9A? 12V / 0.65 A = 5V / 1.56A So 1.56A + 0.43A ~ 2A @ 5V Question 2: So is it correct to assume 3 USB port should be able to accomplish this? If I can understand the math I'm almost tempted to try to build such a circuit and see if I can get it to work. Maybe I'm way off base here? Thanks for any responses.
2021/12/24
[ "https://electronics.stackexchange.com/questions/601678", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/303794/" ]
They don't exist because they are not practical. If you need 10 watts, you need to connect three USB cables just to run a single drive, and even then you need a complex energy-storage solution to get it started. It is also quite complex to draw power from three cables in a way that still meets the USB specs; you can't just short the USB 5V output pins together. So it's not impossible, it can be done, but it all boils down to whether you want a complex, expensive solution that requires three free USB ports, or a cheap single-cable solution with an external power supply.
> > I know typical USB 3 can provide 5V / 900mA (4.5W) DC. > > > That's not the power a USB 3.x port can provide "typically", that's the minimum power a USB 3.x port can provide and still meet the USB 3.x spec. I don't know if this is "typical" but I've seen many USB 3.x ports that will provide 7.5 watts, 5 volts at 1.5 amps. I've seen USB 3.x ports on computers rated for 12 watts as well, the maximum power allowed by USB-A under the spec. USB-C ports on some computers will provide 15 watts; that's using the USB-PD protocol on top of USB 3.x on the USB-C port. Drawing more than 4.5 watts from a USB 3.x port is possible while meeting the USB spec, but it would be using optional extensions to the protocol. Pulling a random 3.5" drive off my shelf, I see it draws 12 volts at 0.5 amps and 5 volts at 0.35 amps, so less power than you saw. Then the adapter board will be drawing power on top of that. That's going to exceed the 7.5 watts that would be available on many USB 3.x ports, but should still be under the 12 or 15 watts from other USB 3.x ports. I didn't see any mention of extra spin-up power on the label of this random drive from my shelf. If spin-up power gets into the 30-watt range you saw, then that's not going to work even under the 15 watts I've seen from USB-C. To power a 3.5" drive and USB adapter, and not run the risk of frying some USB ports, the drive and adapter need to stay within the allowed power of the spec. The first thing is that the adapter needs to *ask* for power before powering up the drive. To draw even 4.5 watts requires getting permission from the host to draw that much power. To draw 12 or 15 watts means asking for more power using the optional extensions to the USB spec. Not all ports will support more than 4.5 watts, and hubs powered by the host (as opposed to a separate power supply) may provide less than a watt per port. Some hubs don't meet the USB spec and so fail to provide sufficient power for many devices.
The number of 3.5" drive adapters that run solely off USB power will be slim to none because the market of people wanting them is small. Even smaller is the group of people who have both 3.5" drives that draw low enough power and USB 3.x ports that provide high enough power for the two to overlap. Vendors that want to comply with the USB spec will find compliance difficult while keeping the cost at something people are willing to pay. Vendors that don't much care about compliance usually skip it because compliance testing costs money, and in that case it's cheaper to make the adapter work with an external power supply. Devices that violate the USB spec still need to work in order to sell, so adapters that fry USB ports by drawing 30 watts won't last long on the market.
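As a rough sanity check on the figures discussed above, the power arithmetic can be sketched in a few lines. The drive numbers are the Seagate Exos values quoted in the question, and the port budgets are the 4.5 W / 7.5 W / 15 W levels mentioned in the answers; everything here is illustrative, not a measurement:

```python
# Rough USB power-budget check for a 3.5" drive. Figures are the
# Seagate Exos numbers quoted in the question; treat as illustrative.
def watts(volts, amps):
    return volts * amps

# Steady-state operating draw, 12 V and 5 V rails combined.
operating = watts(12, 0.65) + watts(5, 0.43)

# Peak spin-up draw is far higher.
spinup = watts(12, 2.04) + watts(5, 0.90)

# Port budgets discussed above: base USB 3.x, a 1.5 A port, 15 W USB-C.
budgets = {"usb3_base": 4.5, "usb3_1.5A": 7.5, "usbc_pd": 15.0}

print(f"operating ~ {operating:.2f} W, spin-up ~ {spinup:.2f} W")
for name, budget in budgets.items():
    print(f"{name}: covers operating draw = {operating <= budget}")
```

Even before converter losses, the roughly 10 W operating draw already exceeds a 7.5 W port, and the near-29 W spin-up peak is beyond even a 15 W USB-C budget, which is exactly the gap the answers describe.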
15,167,390
I am currently using `getc()` in a loop to receive input from a user: ``` char x; while (x != 'q') { printf("(c)ontinue or (q)uit?"); x = getc(stdin); } ``` If the user enters `c` the loop executes, presumably taking an additional character (the terminator or possibly a newline, I am guessing?) as input the first time around. I can prevent this by using something like: ``` char toss; char x; while (x != 'q') { printf("(c)ontinue or (q)uit?"); x = getc(stdin); toss = getc(stdin); } ``` But this strikes me as just a lazy newbie way of dealing with it. Is there a cleaner way to do this with `getc` or should I be using it as a string and using the first character of the array? Is there another even cleaner way that I haven't even considered?
2013/03/01
[ "https://Stackoverflow.com/questions/15167390", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2124880/" ]
> > or should I be using it as a string and using the first character of the array? > > > Exactly. ``` char buf[32] = { 0 }; while (buf[0] != 'q') { fgets(buf, sizeof(buf), stdin); /* do stuff here */ } ```
You could just skip over any whitespace: ``` int x = 0; while (x != 'q' && x != EOF) { printf("(c)ontinue or (q)uit?"); while ((x = getc(stdin)) != EOF && isspace(x)) { /* ignore whitespace */ } } ``` Also note that `getc()` returns an `int`, not a `char`. This is important if you want to detect `EOF`, which you should also check for to avoid an infinite loop (for example if the user hits Ctrl-D on Unix systems or Ctrl-Z on Windows). To use `isspace()` you will need to include `ctype.h`.
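For anyone comparing across languages, the same skip-the-leftover-whitespace idea can be sketched in Python against an in-memory stream standing in for stdin (the helper name `next_nonspace` is made up for illustration):

```python
import io

def next_nonspace(stream):
    """Read one character at a time, skipping whitespace; '' means EOF."""
    while True:
        ch = stream.read(1)
        if ch == "" or not ch.isspace():
            return ch

# Simulated user input: 'c' then Enter, 'q' then Enter.
fake_stdin = io.StringIO("c\nq\n")
print(next_nonspace(fake_stdin))  # c
print(next_nonspace(fake_stdin))  # q
print(next_nonspace(fake_stdin) == "")  # True: stream exhausted
```

The inner loop plays the same role as the `isspace()` check in the C answer: the trailing newline left in the buffer after each keypress is consumed instead of being treated as the next command.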
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
Use list comprehension: ```py [i+j for i,j in zip(a, b)] ``` Results in: ```py [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
You can use `zip` and concatenation: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] c = [item[0] + item[1] for item in zip(a, b)] ``` Which yields: ``` [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
You almost got it right; you should use `extend` instead of `append`: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] for i, j in zip(a, b): i.extend(j) print(a) ``` Output: ``` [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
You can use `zip` and concatenation: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] c = [item[0] + item[1] for item in zip(a, b)] ``` Which yields: ``` [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
Use list comprehension: ```py [i+j for i,j in zip(a, b)] ``` Results in: ```py [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
You almost got it right, you should use `extend` instead of `append`: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] for i, j in zip(a, b): i.extend(j) print(a) ``` Output: ``` [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
``` for el1, el2 in zip(a, b): el1.extend(el2) ```
You can use list comprehension: ```py result = [a[i] + b[i] for i in range(len(a))] ``` Results in: ```py [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
Use list comprehension: ```py [i+j for i,j in zip(a, b)] ``` Results in: ```py [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
Using `map()` & `lambda` function, `*` unpacking to a list and `+` list concatenation: ``` res = [*map(lambda x, y: x + y, a, b)] print(res) ```
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
You can use `zip` and concatenation: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] c = [item[0] + item[1] for item in zip(a, b)] ``` Which yields: ``` [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
Using `map()` & `lambda` function, `*` unpacking to a list and `+` list concatenation: ``` res = [*map(lambda x, y: x + y, a, b)] print(res) ```
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
You almost got it right; you should use `extend` instead of `append`: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] for i, j in zip(a, b): i.extend(j) print(a) ``` Output: ``` [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
Using `map()` & `lambda` function, `*` unpacking to a list and `+` list concatenation: ``` res = [*map(lambda x, y: x + y, a, b)] print(res) ```
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
Use list comprehension: ```py [i+j for i,j in zip(a, b)] ``` Results in: ```py [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
``` for el1, el2 in zip(a, b): el1.extend(el2) ```
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
``` for el1, el2 in zip(a, b): el1.extend(el2) ```
Using `map()` & `lambda` function, `*` unpacking to a list and `+` list concatenation: ``` res = [*map(lambda x, y: x + y, a, b)] print(res) ```
72,789,446
I have two lists and I want to append one with another, like this: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] ``` new list should be like this: ``` n = [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ``` I try this but it didn't work: ``` for i, j in a, b: a[i].append(b[j]) ```
2022/06/28
[ "https://Stackoverflow.com/questions/72789446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14481446/" ]
You almost got it right; you should use `extend` instead of `append`: ``` a = [['bmw'], ['audi'], ['benz'], ['honda']] b = [[12,2], [3,4], [7,5], [6,23]] for i, j in zip(a, b): i.extend(j) print(a) ``` Output: ``` [['bmw', 12, 2], ['audi', 3, 4], ['benz', 7, 5], ['honda', 6, 23]] ```
``` for el1, el2 in zip(a, b): el1.extend(el2) ```
47,622,664
Hi Can any one suggest how to read data from datagrid in windowsforms application which has two columns(FileName and FilePath). Below is the code I tried its returning all Filename and FilePath in single column(FileName). Any suggestions would be helpful to me.. ``` ` public System.Data.DataTable ExportToExcel() { System.Data.DataTable table = new System.Data.DataTable(); table.Columns.Add("FileName", typeof(string)); table.Columns.Add("FilePath", typeof(string)); for (int rows = 0; rows < dataGridView1.Rows.Count; rows++) { for (int col= 0; col < dataGridView1.Rows[rows].Cells.Count; col++) { table.Rows.Add(dataGridView1.Rows[rows].Cells[col].Value.ToString()); } } ```
2017/12/03
[ "https://Stackoverflow.com/questions/47622664", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8409752/" ]
You can use the regex `((\d+)\sMB)`. If there can be one or more spaces between the number and `MB`, use `\s+` to match one or more whitespace characters. You can do all of this with `Pattern`: ``` String text = "Your Day Traffic is 150 MB and your Night Traffic is 136 MB "; String regex = "((\\d+)\\sMB)"; Pattern pattern = Pattern.compile(regex); Matcher matcher = pattern.matcher(text); int group = 1; while (matcher.find()) { System.out.println("group " + group++ + " ==> " + matcher.group(2)); } ``` In your case the outputs are: ``` group 1 ==> 150 group 2 ==> 136 ```
Please read the Java docs on regex [here](https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html). Essentially you have to skip the characters between two "[number] MB" occurrences. In that situation you can use a regex like this: ``` /.*\s+(\d+)\s+MB.*\s+(\d+)\s+MB/ ``` The full code is given here: ``` import java.util.regex.*; public class MatchMB { public static void main(String args[]) { Pattern p = Pattern.compile(".*\\s+(\\d+)\\s+MB.*\\s+(\\d+)\\s+MB"); Matcher m = p.matcher("Your Day Traffic is 150 MB and your Night Traffic is 136 MB"); while (m.find()) { System.out.println("group 1 ==>" + m.group(1)); System.out.println("group 2 ==>" + m.group(2)); } } } ```
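For comparison, the same extraction translated to Python's `re` module, in case it helps to see the pattern outside Java. This mirrors the first answer's approach of matching each `<number> MB` occurrence in turn rather than anchoring two capture groups in one pattern:

```python
import re

text = "Your Day Traffic is 150 MB and your Night Traffic is 136 MB"

# \s+ tolerates one or more spaces between the number and "MB".
values = [int(m.group(1)) for m in re.finditer(r"(\d+)\s+MB", text)]
print(values)  # [150, 136]
```

Iterating with `finditer` scales to any number of occurrences, whereas the two-group pattern in the second answer is fixed to exactly two.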
11,709,441
I'm looking to get live values for stock indexes, such as the Dow (DJIA) or Hang Seng Index (HSI). These need to be generated from a (configurable) set of index symbols, and saved to VBA variables without any interaction with the sheets. Ideally this would be from Bloomberg, or Yahoo if need be (though any other source would be ok too, as long as it's live). I understand this is a simple task, though I can't find any direct way of doing it- only examples of getting option price or stock data etc. I understand I start with a reference to the Bloomberg API, but I can't seem to get further than this. Thanks for your help
2012/07/29
[ "https://Stackoverflow.com/questions/11709441", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1173672/" ]
If you want to retrieve live data using the Bloomberg API, you need to be a Bloomberg subscriber ($$$). As you also mention Yahoo, which is free, I suspect it is not what you want.
This isn't a simple task. You'll need to initiate an HTTP GET request for <http://www.google.com/finance?q=GOOG>, and parse the return string yourself. The HTTP request is sent with this code: ``` Set HttpReq = CreateObject("MSXML2.ServerXMLHTTP") HttpReq.Open "GET", "http://www.google.com/finance?q=GOOG", False HttpReq.send MsgBox HttpReq.responseText ```
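A sketch of the same fetch-then-parse approach in Python, showing only the parsing half. The response markup and the `ref_` span id below are invented for illustration, since the answer does not show the real page HTML and such markup changes over time:

```python
import re

# Stand-in for HttpReq.responseText; real quote-page markup will differ,
# so this snippet is purely illustrative of "parse the string yourself".
response_text = '<span id="ref_12345_l">1,234.56</span>'

# Pull the quoted value out of the (hypothetical) tag and drop the
# thousands separator before converting to a number.
match = re.search(r'<span id="ref_[^"]*">([\d,.]+)</span>', response_text)
price = float(match.group(1).replace(",", "")) if match else None
print(price)  # 1234.56
```

Scraping page HTML like this is fragile; the parsing step has to be rewritten whenever the provider changes its markup, which is one reason a paid data API is the robust option for live quotes.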
68,893
I told my friend that I would create a dual boot system with his Windows Vista and not screw it up. It only boots to Ubuntu with no boot menu. At least it kept the compressed Windows partition. How do I get a dual boot system without being a rocket scientist before he comes home and breaks my neck?
2011/10/18
[ "https://askubuntu.com/questions/68893", "https://askubuntu.com", "https://askubuntu.com/users/28751/" ]
Install startupmanager, an application to help configure the boot up menu. Open up the Software Center and Search for "Startup Manager" or "startupmanager" and click install. Once it has installed run it by clicking on the ubuntu logo on the top left of the screen and searching for Startup Manager. There should be a section that says "Timeout: in seconds". Choose a period of time that you want to see the boot menu for. (You can also select an operating system to boot into by default). Close the application (it will make you wait as it cleans up). Restart your computer. This time you should see a list of options that will stay for the period of time you choose earlier. If you choose a short period of time, just press the up or down arrow keys to cancel the automatic boot and make it wait for your choice. Look for the option that should say something like "Windows Vista Loader on ...." and select it and press enter. This should boot you into your windows partition :)
Boot Ubuntu. Open a terminal (`Ctrl`+`Alt`+`T`) and run `sudo update-grub`. Reboot and you should see a menu to choose Windows.
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
I'd say both statements are correct. If you have a header file that contains only a pointer or a reference to some data type, then you only require a forward declaration. If however your header contains an object of a particular type, then you should include the header where that type is defined. The advice from Sutter and Alexandrescu is telling you not to rely on the consumers of your header to resolve such references by including the required type definitions. You may also want to check out the [Pimpl Idiom](http://en.wikibooks.org/wiki/C++_Programming/Idioms#Pointer_To_Implementation_.28pImpl.29) and [Compilation firewalls](http://www.gotw.ca/gotw/024.htm) (or [C++11 version](http://herbsutter.com/2011/11/04/gotw-100-compilation-firewalls/)).
In addition to what's covered in the other answers, there are cases where you *must* use a forward declaration because of mutual dependencies. FWIW, if I only need a type name, I usually forward declare it. If it's a function declaration, I generally include it (though those cases are rare, since non-member functions are rare and member functions must be included). If it's an `extern "C"` function, I *never* forward declare it, since the linker can't tell me if I've messed up the argument types.
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
Dependency management is very important in C++: if you change a header file, all translation units depending on that header file need to be recompiled. This can be very expensive. As a result you want your header files to be minimal in the sense that they don't include anything they don't need to include. This is what Google's advice is about. If you need a certain component, you should include the component's header. However, you should not be required to include anything but the component's header to *get* the declarations. That is, each header file has to compile without including anything else. This is the advice given by Herb and Andrei. Note that this **only** applies to getting the declaration: if you want to use any of these declarations and this requires another component, you might need to include the header file for this component as well. These two pieces of advice go together, however! They are both very valuable and they should be followed without compromise. This basically means that you prefer to declare a class over including its header iff you only need the class declared. That is, if the class appears only in declarations, in pointer or reference definitions, in the parameter list, or as the return type, it is sufficient to have the class *declared*. If you need to know more about the class, e.g. because it is a base or a member of a class being defined, or an object of it is used e.g. in an inline function, you need the *definition*. Effectively, you need the definition of the class if you need to know any of its members or the class's size. One interesting twist is class templates: only the first class template declaration can define default arguments! A class template can be declared multiple times, however. To make your header files minimal while declaring class templates, you probably want to have special forwarding headers for the involved class templates which only declare the class template and its default arguments. However, this is way into implementation land...
In addition to what's covered in the other answers, there are cases where you *must* use a forward declaration because of mutual dependencies. FWIW, if I only need a type name, I usually forward declare it. If it's a function declaration, I generally include it (though those cases are rare, since non-member functions are rare and member functions must be included). If it's an `extern "C"` function, I *never* forward declare it, since the linker can't tell me if I've messed up the argument types.
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
Sutter and Alexandrescu on item #22 say "Don't be over-dependent: Don't #include a definition when a forward declaration will do". Personally, I agree with this statement. If in my class A I don't use any functionality of a class B, nor do I instantiate an object of class B, then my code doesn't need to know how class B is made. It just needs to know that it exists. Forward declaration is also useful for breaking cyclic dependencies... Edit: I also very much like the observation that Mark B pointed out: it sometimes happens that you don't include the file a.hpp because it's already included in the file b.hpp you are including, which chose to include a.hpp even though a forward declaration was enough. If you then stop using the functions defined in b.hpp (and remove its include), your code won't compile anymore. If the programmer of b.hpp had used a forward declaration, that wouldn't have happened, since you would have included a.hpp somewhere else in your code...
I believe they're both saying exactly the same thing. Suppose you have a method that takes a `Bar` by reference in your header file, but the method is defined in your source file. A forward declaration in the header is clearly sufficient for the header to compile standalone. Now let's look at the user code. If the client is just passing around a reference that's forwarded from somewhere else, then they don't need the definition of `Bar` at all and everything is fine. If they use a `Bar` in some other way, then that source file is what mandates the use of the include for `Bar`, *NOT* your header. And in fact, if you include the unneeded `Bar` include in your header, then if a client stops needing your include and removes it, all of a sudden their other `Bar` code quits working because they never properly included it in their own source file. Now suppose that your header uses `std::string`. In that case, in order for the header to compile standalone you must include `<string>`, which both guidelines would tell you to do (Google because a forward declaration won't do, and Sutter and Alexandrescu to allow the header to compile standalone).
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
Sutter and Alexandrescu on item #22 say "Don't be over-dependent: Don't #include a definition when a forward declaration will do". Personally, I agree with this statement. If in my class A I don't use any functionality of a class B, nor do I instantiate an object of class B, then my code doesn't need to know how class B is made. It just needs to know that it exists. Forward declaration is also useful for breaking cyclic dependencies... Edit: I also very much like the observation that Mark B pointed out: it sometimes happens that you don't include the file a.hpp because it's already included in the file b.hpp you are including, which chose to include a.hpp even though a forward declaration was enough. If you then stop using the functions defined in b.hpp (and remove its include), your code won't compile anymore. If the programmer of b.hpp had used a forward declaration, that wouldn't have happened, since you would have included a.hpp somewhere else in your code...
In addition to what's covered in the other answers, there are cases where you *must* use a forward declaration because of mutual dependencies. FWIW, if I only need a type name, I usually forward declare it. If it's a function declaration, I generally include it (though those cases are rare, since non-member functions are rare and member functions must be included). If it's an `extern "C"` function, I *never* forward declare it, since the linker can't tell me if I've messed up the argument types.
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
Sutter and Alexandrescu on item #22 say "Don't be over-dependent: Don't #include a definition when a forward declaration will do". Personally, I agree with this statement. If in my class A I don't use any functionality of a class B, nor do I instantiate an object of class B, then my code doesn't need to know how class B is made. It just needs to know that it exists. Forward declaration is also useful for breaking cyclic dependencies... Edit: I also very much like the observation that Mark B pointed out: it sometimes happens that you don't include the file a.hpp because it's already included in the file b.hpp you are including, which chose to include a.hpp even though a forward declaration was enough. If you then stop using the functions defined in b.hpp (and remove its include), your code won't compile anymore. If the programmer of b.hpp had used a forward declaration, that wouldn't have happened, since you would have included a.hpp somewhere else in your code...
I'd say both statements are correct. If you have a header file that contains only a pointer or a reference to some data type, then you only require a forward declaration. If however your header contains an object of a particular type, then you should include the header where that type is defined. The advice from Sutter and Alexandrescu is telling you not to rely on the consumers of your header to resolve such references by including the required type definitions. You may also want to check out the [Pimpl Idiom](http://en.wikibooks.org/wiki/C++_Programming/Idioms#Pointer_To_Implementation_.28pImpl.29) and [Compilation firewalls](http://www.gotw.ca/gotw/024.htm) (or [C++11 version](http://herbsutter.com/2011/11/04/gotw-100-compilation-firewalls/)).
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
Sutter and Alexandrescu say in item #22, "Don't be over-dependent: Don't #include a definition when a forward declaration will do". Personally, I agree with this statement. If my class A doesn't use any functionality of a class B, nor instantiate an object of class B, then my code doesn't need to know how class B is made; it just needs to know that it exists. A forward declaration is also useful for breaking cyclic dependencies... Edit: I also really like the observation that Mark B pointed out: it sometimes happens that you don't include the file a.hpp because it's already included by the file b.hpp you are including, which chose to include a.hpp even though a forward declaration was enough. If you then stop using the functions defined in b.hpp, your code won't compile anymore. If the programmer of b.hpp had used a forward declaration, that wouldn't have happened, since you would have had to include a.hpp somewhere else in your code...
Dependency management is very important in C++: if you change a header file, all translation units depending on that header file need to be recompiled. This can be very expensive. As a result you want your header files to be minimal, in the sense that they don't include anything they don't need to include. This is what Google's advice is about. If you need a certain component, you should include the component's header. However, you should not be required to include anything but the component's header to *get* the declarations. That is, each header file has to compile without including anything else. This is the advice given by Herb and Andrei. Note that this **only** applies to getting the declarations: if you want to use any of these declarations and this requires another component, you might need to include the header file for that component as well. These two pieces of advice go together, however! They are both very valuable and they should be followed without compromise. This basically means that you prefer to declare a class over including its header iff you only need the class declared. That is, if the class appears only in declarations, in pointer or reference definitions, in the parameter list, or as the return type, it is sufficient to have the class *declared*. If you need to know more about the class, e.g. because it is a base or a member of the class being defined, or because an object of it is used e.g. in an inline function, you need the *definition*. Effectively, you need the definition of the class if you need to know anything about its members or about the class's size. One interesting twist is class templates: only the first class template declaration can define default arguments! A class template can be declared multiple times, however. To keep your header files minimal while declaring class templates, you probably want special forwarding headers for the involved class templates which only declare the class template and its default arguments. However, this is well into implementation land...
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
Dependency management is very important in C++: if you change a header file, all translation units depending on that header file need to be recompiled. This can be very expensive. As a result you want your header files to be minimal, in the sense that they don't include anything they don't need to include. This is what Google's advice is about. If you need a certain component, you should include the component's header. However, you should not be required to include anything but the component's header to *get* the declarations. That is, each header file has to compile without including anything else. This is the advice given by Herb and Andrei. Note that this **only** applies to getting the declarations: if you want to use any of these declarations and this requires another component, you might need to include the header file for that component as well. These two pieces of advice go together, however! They are both very valuable and they should be followed without compromise. This basically means that you prefer to declare a class over including its header iff you only need the class declared. That is, if the class appears only in declarations, in pointer or reference definitions, in the parameter list, or as the return type, it is sufficient to have the class *declared*. If you need to know more about the class, e.g. because it is a base or a member of the class being defined, or because an object of it is used e.g. in an inline function, you need the *definition*. Effectively, you need the definition of the class if you need to know anything about its members or about the class's size. One interesting twist is class templates: only the first class template declaration can define default arguments! A class template can be declared multiple times, however. To keep your header files minimal while declaring class templates, you probably want special forwarding headers for the involved class templates which only declare the class template and its default arguments. However, this is well into implementation land...
These two recommendations are *completely* compatible with each other. You can follow both of them at the same time. They are in no way mutually exclusive -- it's not an "either or" situation. A header may use a forward declaration, when that is all that is needed, and still be compilable stand-alone: ``` // Foo.hpp class Bar; // Forward declaration class Foo { public: void doSomethingFancy (); private: Bar *bar; }; ``` In the above example, `Foo.hpp` is self-sufficient even though the `Bar` class is declared using a forward declaration rather than by inclusion of the `Bar.hpp` header.
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
Dependency management is very important in C++: if you change a header file, all translation units depending on that header file need to be recompiled. This can be very expensive. As a result you want your header files to be minimal, in the sense that they don't include anything they don't need to include. This is what Google's advice is about. If you need a certain component, you should include the component's header. However, you should not be required to include anything but the component's header to *get* the declarations. That is, each header file has to compile without including anything else. This is the advice given by Herb and Andrei. Note that this **only** applies to getting the declarations: if you want to use any of these declarations and this requires another component, you might need to include the header file for that component as well. These two pieces of advice go together, however! They are both very valuable and they should be followed without compromise. This basically means that you prefer to declare a class over including its header iff you only need the class declared. That is, if the class appears only in declarations, in pointer or reference definitions, in the parameter list, or as the return type, it is sufficient to have the class *declared*. If you need to know more about the class, e.g. because it is a base or a member of the class being defined, or because an object of it is used e.g. in an inline function, you need the *definition*. Effectively, you need the definition of the class if you need to know anything about its members or about the class's size. One interesting twist is class templates: only the first class template declaration can define default arguments! A class template can be declared multiple times, however. To keep your header files minimal while declaring class templates, you probably want special forwarding headers for the involved class templates which only declare the class template and its default arguments. However, this is well into implementation land...
Sutter and Alexandrescu are probably more correct here. If I forward declare `class Foo;` that for some reason resides in `bar.h` and do not include `bar.h`, good luck trying to find the declaration of `Foo` (especially if the code-base is large).
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
I believe they're both saying exactly the same thing. Suppose you have a method that takes a `Bar` by reference in your header file, but the method is defined in your source file. A forward declaration in the header is clearly sufficient for the header to compile standalone. Now let's look at the user code. If the client is just passing around a reference that's forwarded from somewhere else, then they don't need the definition of `Bar` at all and everything is fine. If they use a `Bar` in some other way, then that source file is what's mandating the inclusion of `Bar`'s header, *NOT* your header. And in fact, if you put the unneeded `Bar` include in your header, then if a client stops needing your include and removes it, all of a sudden their other `Bar` code quits working, because they never properly included it in their own source file. Now suppose that your header uses `std::string`. In that case, in order for the header to compile standalone you must include `<string>`, which both guidelines would tell you to do (Google because a forward declaration won't do, and Sutter and Alexandrescu to allow the header to compile standalone).
Sutter and Alexandrescu are probably more correct here. If I forward declare `class Foo;` that for some reason resides in `bar.h` and do not include `bar.h`, good luck trying to find the declaration of `Foo` (especially if the code-base is large).
8,947,198
Google's C++ style guide [says](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_File_Dependencies), "Don't use an #include when a forward declaration would suffice." But [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) (Sutter and Alexandrescu), item #23 is, "Make header files self-sufficient. Behave responsibly: Ensure that each header you write is compilable standalone, by having it include any headers its contents depend upon." Which strategy is best?
2012/01/20
[ "https://Stackoverflow.com/questions/8947198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/966071/" ]
I believe they're both saying exactly the same thing. Suppose you have a method that takes a `Bar` by reference in your header file, but the method is defined in your source file. A forward declaration in the header is clearly sufficient for the header to compile standalone. Now let's look at the user code. If the client is just passing around a reference that's forwarded from somewhere else, then they don't need the definition of `Bar` at all and everything is fine. If they use a `Bar` in some other way, then that source file is what's mandating the inclusion of `Bar`'s header, *NOT* your header. And in fact, if you put the unneeded `Bar` include in your header, then if a client stops needing your include and removes it, all of a sudden their other `Bar` code quits working, because they never properly included it in their own source file. Now suppose that your header uses `std::string`. In that case, in order for the header to compile standalone you must include `<string>`, which both guidelines would tell you to do (Google because a forward declaration won't do, and Sutter and Alexandrescu to allow the header to compile standalone).
In addition to what's covered in the other answers, there are cases where you *must* use a forward declaration because of mutual dependencies. FWIW, if I only need a type name, I usually forward declare it. If it's a function declaration, I generally include the header (though those cases are rare, since non-member functions are rare and member function declarations come with their class's header). If it's an `extern "C"` function, I *never* forward declare it, since the linker can't tell me if I've messed up the argument types.
29,956,131
I am having this issue and have tried almost everything. I want one column with images and one with strings. I can get the strings, but not the images. Here is what I have: ``` self.browserList=wx.ListCtrl(panel, pos=(20,150), size=(250,100), style.wx.LC_REPORT|wx.BORDER_SUNKEN) self.browserList.InsertColumn(0, '', width=50) self.browserList.InsertColumn(1, 'Browser: ', width=200) self.list=wx.ImageList(40,40) self.browserList.SetImageList(self.list, wx.IMAGE_LIST_NORMAL) images=['Users/Default/Desktop/Project/firefoxlogo.png','Users/Default/Desktop/Project/chromelogo.png'] x=0 for i in images: img=wx.Image(i, wx.BITMAP_TYPE_ANY) img=wx.BitmapFromImage(img) browserimg=self.list.Add(img) self.browserList.InsertImageItem(x, 0) self.browserList.InsertImageItem(x, 0, browserimg) self.browserList.SetStringItem(0, 1, "Mozilla Firefox") self.browserList.SetStringItem(1, 1, "Google Chrome") ```
2015/04/29
[ "https://Stackoverflow.com/questions/29956131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4696214/" ]
I get an error running your code. Anyway, I can't explain why, but I think you can resolve it by changing `wx.IMAGE_LIST_NORMAL` to `wx.IMAGE_LIST_SMALL`. Here is some simple code that I tried, and it worked for me. ``` import wx class MyFrame(wx.Frame): def __init__(self, parent, id, title): wx.Frame.__init__(self, parent, id, title,size=(250, 250)) panel = wx.Panel(self, -1) panel.SetBackgroundColour('white') self.browserList=wx.ListCtrl(panel, pos=(20,150), size=(250,100),style = wx.LC_REPORT|wx.BORDER_SUNKEN) self.browserList.InsertColumn(0, '', width=50) self.browserList.InsertColumn(1, 'Browser: ', width=200) self.list=wx.ImageList(40,40) self.browserList.SetImageList(self.list, wx.IMAGE_LIST_SMALL) images=['mozilla.png','chrome.png'] x=0 for i in images: img=wx.Image(i, wx.BITMAP_TYPE_ANY) img=wx.BitmapFromImage(img) browserimg=self.list.Add(img) self.browserList.InsertImageItem(x, 0) self.browserList.InsertImageItem(x, 1) self.browserList.SetStringItem(0, 1, "Mozilla Firefox") self.browserList.SetStringItem(1, 1, "Google Chrome") class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, 'frame') frame.Show(True) return True app = MyApp(0) app.MainLoop() ``` Hope that helps.
First, an update of Deepa's answer to the new wxPython Phoenix: ``` import wx class MyFrame(wx.Frame): def __init__(self, parent, id, title): wx.Frame.__init__(self, parent, id, title,size=(250, 250)) panel = wx.Panel(self, -1) panel.SetBackgroundColour('white') self.browserList=wx.ListCtrl(panel, pos=(20,150), size=(300,300),style = wx.LC_REPORT|wx.BORDER_SUNKEN) self.browserList.InsertColumn(0, '', width=50) self.browserList.InsertColumn(1, 'Browser: ', width=200) self.list=wx.ImageList(100,100) self.browserList.SetImageList(self.list, wx.IMAGE_LIST_SMALL) images=['mozilla.png','chrome.png'] x=0 for i in images: img=wx.Image(i, wx.BITMAP_TYPE_ANY) img=wx.Bitmap(img) browserimg=self.list.Add(img) self.browserList.InsertItem(x, 0) self.browserList.InsertItem(x, 1) self.browserList.SetItem(0, 1, "Mozilla Firefox") self.browserList.SetItem(1, 1, "Google Chrome") class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, 'frame') frame.Show(True) return True app = MyApp(0) app.MainLoop() ``` Then a general comment. It seems wxPython is not reference counting the ImageList, so the user-created owning class has to store it, or no images will be shown (since it is garbage collected, I guess). That is, the line ``` self.list=wx.ImageList(100,100) ``` **can not** be changed to ``` list=wx.ImageList(100,100) ``` or no images will be shown.
45,490,612
I have two tables like this: **Table1** `emp_leave_summary(id,emp_id,leave_from_date,leave_to_date,leave_type)` **Table2** `emp_leave_daywise(id,emp_id,leave_date,leave_type)` I would want to select `emp_id, leave_type` from **Table1** and insert into **Table2**. **for example:** In table1 I have this ``` id,emp_id,leave_from_date,leave_to_date,leave_type 1, 12345,2017-07-01 ,2017-07-03 ,Sick Leave ``` In table 2, I want to have this ``` id,emp_id,leave_date,leave_type 1,12345,2017-07-01,Sick Leave 2,12345,2017-07-02,Sick Leave 3,12345,2017-07-03,Sick Leave ``` **table structure with sample data** ``` CREATE TABLE `emp_leave_summary` ( `id` int(11) NOT NULL, `emp_id` int(11) NOT NULL, `leave_from_date` date NOT NULL, `leave_to_date` date NOT NULL, `leave_type` varchar(30) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `emp_leave_summary` (`id`, `emp_id`, `leave_from_date`, `leave_to_date`, `leave_type`) VALUES (1, 123, '2017-02-01', '2017-02-15', 'Earned Vacation Leave'), (2, 123, '2017-07-12', '2017-07-26', 'Earned Vacation Leave'), (3, 456, '2017-03-20', '2017-04-20', 'Earned Vacation Leave'), (4, 789, '2017-01-15', '2017-02-23', 'Earned Vacation Leave'), (5, 789, '2017-02-26', '2017-02-27', 'Sick Leave'); CREATE TABLE `emp_leave_daywise` ( `id` int(11) NOT NULL, `emp_id` int(11) NOT NULL, `leave_date` date NOT NULL, `leave_type` varchar(30) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ALTER TABLE `emp_leave_daywise` ADD PRIMARY KEY (`id`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_date` (`leave_date`), ADD KEY `leave_type` (`leave_type`); ALTER TABLE `emp_leave_summary` ADD PRIMARY KEY (`id`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_type` (`leave_type`), ADD KEY `leave_from_date` (`leave_from_date`), ADD KEY `leave_to_date` (`leave_to_date`); ```
2017/08/03
[ "https://Stackoverflow.com/questions/45490612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1795210/" ]
Try: ``` select * from (select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0, (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1, (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2, (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3, (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v where selected_date between '2012-02-10' and '2012-02-15' ``` -for date ranges up to nearly 300 years in the future. from [How to get list of dates between two dates in mysql select query](https://stackoverflow.com/questions/9295616/how-to-get-list-of-dates-between-two-dates-in-mysql-select-query)
I have managed to get the preferred output using the solution below. **Step 1:** create a calendar table and insert the dates (all possible required dates), something like this: ``` CREATE TABLE `db_calender` ( `c_date` date NOT NULL); ``` Then insert dates into the calendar table. To insert them easily I used the procedure answered in [How to populate a table with a range of dates?](https://stackoverflow.com/questions/10132024/how-to-populate-a-table-with-a-range-of-dates), thanks to Mr. Leniel Macaferi: ``` DROP PROCEDURE IF EXISTS filldates; DELIMITER | CREATE PROCEDURE filldates(dateStart DATE, dateEnd DATE) BEGIN WHILE dateStart <= dateEnd DO INSERT INTO db_calender (c_date) VALUES (dateStart); SET dateStart = date_add(dateStart, INTERVAL 1 DAY); END WHILE; END; | DELIMITER ; CALL filldates('2017-02-01','2017-07-31'); ``` Then I ran the query below to get the preferred output: ``` SELECT els.*, c.c_date FROM emp_leave_summary els JOIN db_calender c ON c.c_date BETWEEN els.leave_from_date AND els.leave_to_date ORDER BY els.id,c.c_date ``` I thank everyone for their contribution and support.
45,490,612
I have two tables like this: **Table1** `emp_leave_summary(id,emp_id,leave_from_date,leave_to_date,leave_type)` **Table2** `emp_leave_daywise(id,emp_id,leave_date,leave_type)` I would want to select `emp_id, leave_type` from **Table1** and insert into **Table2**. **for example:** In table1 I have this ``` id,emp_id,leave_from_date,leave_to_date,leave_type 1, 12345,2017-07-01 ,2017-07-03 ,Sick Leave ``` In table 2, I want to have this ``` id,emp_id,leave_date,leave_type 1,12345,2017-07-01,Sick Leave 2,12345,2017-07-02,Sick Leave 3,12345,2017-07-03,Sick Leave ``` **table structure with sample data** ``` CREATE TABLE `emp_leave_summary` ( `id` int(11) NOT NULL, `emp_id` int(11) NOT NULL, `leave_from_date` date NOT NULL, `leave_to_date` date NOT NULL, `leave_type` varchar(30) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `emp_leave_summary` (`id`, `emp_id`, `leave_from_date`, `leave_to_date`, `leave_type`) VALUES (1, 123, '2017-02-01', '2017-02-15', 'Earned Vacation Leave'), (2, 123, '2017-07-12', '2017-07-26', 'Earned Vacation Leave'), (3, 456, '2017-03-20', '2017-04-20', 'Earned Vacation Leave'), (4, 789, '2017-01-15', '2017-02-23', 'Earned Vacation Leave'), (5, 789, '2017-02-26', '2017-02-27', 'Sick Leave'); CREATE TABLE `emp_leave_daywise` ( `id` int(11) NOT NULL, `emp_id` int(11) NOT NULL, `leave_date` date NOT NULL, `leave_type` varchar(30) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ALTER TABLE `emp_leave_daywise` ADD PRIMARY KEY (`id`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_date` (`leave_date`), ADD KEY `leave_type` (`leave_type`); ALTER TABLE `emp_leave_summary` ADD PRIMARY KEY (`id`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_type` (`leave_type`), ADD KEY `leave_from_date` (`leave_from_date`), ADD KEY `leave_to_date` (`leave_to_date`); ```
2017/08/03
[ "https://Stackoverflow.com/questions/45490612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1795210/" ]
Try: ``` select * from (select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0, (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1, (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2, (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3, (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v where selected_date between '2012-02-10' and '2012-02-15' ``` -for date ranges up to nearly 300 years in the future. from [How to get list of dates between two dates in mysql select query](https://stackoverflow.com/questions/9295616/how-to-get-list-of-dates-between-two-dates-in-mysql-select-query)
Thanks for your schema. It makes it easy to work with your question. I changed your schema a little to make use of auto\_increment ``` CREATE TABLE `emp_leave_summary` ( `id` int(11) NOT NULL AUTO_INCREMENT, `emp_id` int(11) NOT NULL, `leave_from_date` date NOT NULL, `leave_to_date` date NOT NULL, `leave_type` varchar(30) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `emp_leave_summary` (`emp_id`, `leave_from_date`, `leave_to_date`, `leave_type`) VALUES ( 123, '2017-02-01', '2017-02-15', 'Earned Vacation Leave'), ( 123, '2017-07-12', '2017-07-26', 'Earned Vacation Leave'), ( 456, '2017-03-20', '2017-04-20', 'Earned Vacation Leave'), ( 789, '2017-01-15', '2017-02-23', 'Earned Vacation Leave'), ( 789, '2017-02-26', '2017-02-27', 'Sick Leave'); CREATE TABLE `emp_leave_daywise` ( `id` int(11) NOT NULL AUTO_INCREMENT, `emp_id` int(11) NOT NULL, `leave_date` date NOT NULL, `leave_type` varchar(30) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ``` Here I added a unique constraint on the emp\_leave\_daywise table, as a primary key on an id integer does not ensure records are not duplicated. ``` ALTER TABLE `emp_leave_daywise` ADD UNIQUE KEY `emp_leave_daywise_unique_key` (`emp_id`,`leave_date`,`leave_type`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_date` (`leave_date`), ADD KEY `leave_type` (`leave_type`); ``` The unique key for emp\_leave\_summary needs some thought. For example ... do you allow summaries covering overlapping date ranges? ... ``` ALTER TABLE `emp_leave_summary` ADD UNIQUE KEY `emp_leave_summary_unique_key` (`emp_id`,`leave_from_date`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_type` (`leave_type`), ADD KEY `leave_from_date` (`leave_from_date`), ADD KEY `leave_to_date` (`leave_to_date`); ``` And now for the data extraction, using a left join on existing data: ``` /* insert any missing records using a left join on existing records */ insert into emp_leave_daywise ( emp_id, leave_date, leave_type ) select `new`.* from ( select summary.emp_id, dates.date_ leave_date, summary.leave_type from emp_leave_summary summary inner join ( /* get dates to match against https://stackoverflow.com/questions/9295616/how-to-get-list-of-dates-between-two-dates-in-mysql-select-query */ select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) date_ from ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0, ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1, ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2, ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3, ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) dates on dates.date_ >= summary.leave_from_date and dates.date_ <= summary.leave_to_date ) `new` left join emp_leave_daywise old on `new`.emp_id = old.emp_id and `new`.leave_date = old.leave_date and `new`.leave_type = old.leave_type where old.id is null ; select * from emp_leave_daywise order by leave_date, emp_id; ``` returns 104 rows based on the data given ``` id emp_id leave_date leave_type 1 789 2017-01-15 Earned Vacation Leave 2 789 2017-01-16 Earned Vacation Leave 3 789 2017-01-17 Earned Vacation Leave 4 789 2017-01-18 Earned Vacation Leave 5 789 2017-01-19 Earned Vacation Leave 6 789 2017-01-20 Earned Vacation Leave ... 102 123 2017-07-24 Earned Vacation Leave 103 123 2017-07-25 Earned Vacation Leave 104 123 2017-07-26 Earned Vacation Leave ``` The date range creation SQL is from here: [How to get list of dates between two dates in mysql select query](https://stackoverflow.com/questions/9295616/how-to-get-list-of-dates-between-two-dates-in-mysql-select-query)
45,490,612
I have two tables like this: **Table1** `emp_leave_summary(id,emp_id,leave_from_date,leave_to_date,leave_type)` **Table2** `emp_leave_daywise(id,emp_id,leave_date,leave_type)` I would want to select `emp_id, leave_type` from **Table1** and insert into **Table2**. **for example:** In table1 I have this ``` id,emp_id,leave_from_date,leave_to_date,leave_type 1, 12345,2017-07-01 ,2017-07-03 ,Sick Leave ``` In table 2, I want to have this ``` id,emp_id,leave_date,leave_type 1,12345,2017-07-01,Sick Leave 2,12345,2017-07-02,Sick Leave 3,12345,2017-07-03,Sick Leave ``` **table structure with sample data** ``` CREATE TABLE `emp_leave_summary` ( `id` int(11) NOT NULL, `emp_id` int(11) NOT NULL, `leave_from_date` date NOT NULL, `leave_to_date` date NOT NULL, `leave_type` varchar(30) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `emp_leave_summary` (`id`, `emp_id`, `leave_from_date`, `leave_to_date`, `leave_type`) VALUES (1, 123, '2017-02-01', '2017-02-15', 'Earned Vacation Leave'), (2, 123, '2017-07-12', '2017-07-26', 'Earned Vacation Leave'), (3, 456, '2017-03-20', '2017-04-20', 'Earned Vacation Leave'), (4, 789, '2017-01-15', '2017-02-23', 'Earned Vacation Leave'), (5, 789, '2017-02-26', '2017-02-27', 'Sick Leave'); CREATE TABLE `emp_leave_daywise` ( `id` int(11) NOT NULL, `emp_id` int(11) NOT NULL, `leave_date` date NOT NULL, `leave_type` varchar(30) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ALTER TABLE `emp_leave_daywise` ADD PRIMARY KEY (`id`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_date` (`leave_date`), ADD KEY `leave_type` (`leave_type`); ALTER TABLE `emp_leave_summary` ADD PRIMARY KEY (`id`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_type` (`leave_type`), ADD KEY `leave_from_date` (`leave_from_date`), ADD KEY `leave_to_date` (`leave_to_date`); ```
2017/08/03
[ "https://Stackoverflow.com/questions/45490612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1795210/" ]
Thanks for your schema. It makes it easy to work with your question. I changed your schema a little to make use of auto\_increment ``` CREATE TABLE `emp_leave_summary` ( `id` int(11) NOT NULL AUTO_INCREMENT, `emp_id` int(11) NOT NULL, `leave_from_date` date NOT NULL, `leave_to_date` date NOT NULL, `leave_type` varchar(30) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `emp_leave_summary` (`emp_id`, `leave_from_date`, `leave_to_date`, `leave_type`) VALUES ( 123, '2017-02-01', '2017-02-15', 'Earned Vacation Leave'), ( 123, '2017-07-12', '2017-07-26', 'Earned Vacation Leave'), ( 456, '2017-03-20', '2017-04-20', 'Earned Vacation Leave'), ( 789, '2017-01-15', '2017-02-23', 'Earned Vacation Leave'), ( 789, '2017-02-26', '2017-02-27', 'Sick Leave'); CREATE TABLE `emp_leave_daywise` ( `id` int(11) NOT NULL AUTO_INCREMENT, `emp_id` int(11) NOT NULL, `leave_date` date NOT NULL, `leave_type` varchar(30) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ``` Here I added a unique constraint on the emp\_leave\_daywise table as a primary key on a id integer does not ensure records are not duplicated. ``` ALTER TABLE `emp_leave_daywise` ADD UNIQUE KEY `emp_leave_daywise_unique_key` (`emp_id`,`leave_date`,`leave_type`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_date` (`leave_date`), ADD KEY `leave_type` (`leave_type`); ``` The unique key for emp\_leave\_summary needs some thought. For example ... do you allow summaries covering overlapping date ranges? ... ``` ALTER TABLE `emp_leave_summary` ADD UNIQUE KEY `emp_leave_summary_unique_key` (`emp_id`,`leave_from_date`), ADD KEY `emp_id` (`emp_id`), ADD KEY `leave_type` (`leave_type`), ADD KEY `leave_from_date` (`leave_from_date`), ADD KEY `leave_to_date` (`leave_to_date`); ``` and now for the data extraction using a left join on existing data. 
``` /* insert any missing records using a left join on existing records */ insert into emp_leave_daywise ( emp_id, leave_date, leave_type ) select `new`.* from ( select summary.emp_id, dates.date_ leave_date, summary.leave_type from emp_leave_summary summary inner join ( /* get dates to match against https://stackoverflow.com/questions/9295616/how-to-get-list-of-dates-between-two-dates-in-mysql-select-query */ select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) date_ from ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0, ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1, ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2, ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3, ( select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) dates on dates.date_ >= summary.leave_from_date and dates.date_ <= summary.leave_to_date ) `new` left join emp_leave_daywise old on `new`.emp_id = old.emp_id and `new`.leave_date = old.leave_date and `new`.leave_type = old.leave_type where old.id is null ; select * from emp_leave_daywise order by leave_date, emp_id; ``` returns 104 rows based on the data given ``` id emp_id leave_date leave_type 1 789 2017-01-15 Earned Vacation Leave 2 789 2017-01-16 Earned Vacation Leave 3 789 2017-01-17 Earned Vacation Leave 4 789 2017-01-18 Earned Vacation Leave 5 789 2017-01-19 Earned Vacation Leave 6 789 2017-01-20 Earned Vacation Leave ... 
102 123 2017-07-24 Earned Vacation Leave 103 123 2017-07-25 Earned Vacation Leave 104 123 2017-07-26 Earned Vacation Leave ``` The date range creation SQL is from here [How to get list of dates between two dates in mysql select query](https://stackoverflow.com/questions/9295616/how-to-get-list-of-dates-between-two-dates-in-mysql-select-query)
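The insert-only-missing-rows pattern above (left join plus `WHERE old.id IS NULL`) can be reproduced in miniature with SQLite, which ships with Python. A recursive CTE stands in for the digits-table date generator, but the anti-join logic is the same, and running the statement twice shows it is idempotent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp_leave_summary (
    id INTEGER PRIMARY KEY, emp_id INT,
    leave_from_date TEXT, leave_to_date TEXT, leave_type TEXT);
CREATE TABLE emp_leave_daywise (
    id INTEGER PRIMARY KEY AUTOINCREMENT, emp_id INT,
    leave_date TEXT, leave_type TEXT);
INSERT INTO emp_leave_summary VALUES
    (5, 789, '2017-02-26', '2017-02-27', 'Sick Leave');
""")

insert_missing = """
WITH RECURSIVE d(emp_id, leave_date, leave_type, stop) AS (
    SELECT emp_id, leave_from_date, leave_type, leave_to_date
    FROM emp_leave_summary
    UNION ALL
    SELECT emp_id, date(leave_date, '+1 day'), leave_type, stop
    FROM d WHERE leave_date < stop
)
INSERT INTO emp_leave_daywise (emp_id, leave_date, leave_type)
SELECT d.emp_id, d.leave_date, d.leave_type
FROM d
LEFT JOIN emp_leave_daywise old
    ON  old.emp_id     = d.emp_id
    AND old.leave_date = d.leave_date
    AND old.leave_type = d.leave_type
WHERE old.id IS NULL;
"""
conn.execute(insert_missing)
conn.execute(insert_missing)  # second run finds nothing new: idempotent
days = [r[0] for r in
        conn.execute("SELECT leave_date FROM emp_leave_daywise ORDER BY 1")]
```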
I have managed to get the preferred output using the solution below. **Step 1:** create a calendar table and insert the dates (all possible required dates), something like this: ``` CREATE TABLE `db_calender` ( `c_date` date NOT NULL); ``` Then insert dates into the calendar table. To insert them easily I used the procedure answered in [How to populate a table with a range of dates?](https://stackoverflow.com/questions/10132024/how-to-populate-a-table-with-a-range-of-dates), thanks to Mr. Leniel Macaferi. ``` DROP PROCEDURE IF EXISTS filldates; DELIMITER | CREATE PROCEDURE filldates(dateStart DATE, dateEnd DATE) BEGIN WHILE dateStart <= dateEnd DO INSERT INTO db_calender (c_date) VALUES (dateStart); SET dateStart = date_add(dateStart, INTERVAL 1 DAY); END WHILE; END; | DELIMITER ; CALL filldates('2017-02-01','2017-07-31'); ``` Then I ran the query below to get the preferred output: ``` SELECT els.*, c.c_date FROM emp_leave_summary els JOIN db_calender c ON c.c_date BETWEEN els.leave_from_date AND els.leave_to_date ORDER BY els.id, c.c_date ``` I thank everyone for their contribution and support.
47,469,063
Forewarning: I'm very new to Django (and web development, in general). I'm using Django to host a web-based UI that will take user input from a short survey, feed it through some analyses that I've developed in Python, and then present the visual output of these analyses in the UI. My survey consists of 10 questions asking a user how much they agree with a specific topic. Example of UI for survey: [Example of UI input screen](https://i.stack.imgur.com/UZMvP.png) For *models.py*, I have 2 fields: Question & Choice ``` class Question(models.Model): question_text = models.CharField(max_length=200) def __str__(self): return self.question_text class Choice(models.Model): question = models.ForeignKey(Question, on_delete=models.CASCADE) choice_text = models.CharField(max_length=200) votes = models.IntegerField(default=0) def __str__(self): return self.choice_text ``` I want to have a user select their response to all 10 questions, and then click submit to submit all responses at once, but I'm having trouble with how that would be handled in Django. Here is the HTML form that I'm using, but this code snippet places a "submit" button after each question, and only allows for a single submission at a time. 
**NOTE: The code below is creating a question-specific form for each iteration.** ``` {% for question in latest_question_list %} <form action="{% url 'polls:vote' question.id %}" method="post"> {% csrf_token %} <div class="row"> <div class="col-topic"> <label>{{ question.question_text }}</label> </div> {% for choice in question.choice_set.all %} <div class="col-select"> <input type="radio" name="choice" id="choice{{ forloop.counter }}" value="{{ choice.id }}" /> <label for="choice{{ forloop.counter }}">{{ choice.choice_text }}</label><br /> </div> {% endfor %} </div> <input type="submit" value="Vote" /> </form> {% endfor %} ``` I'm interested in how I would take multiple inputs (all for Question/Choice) in a single submission and return that back to *views.py*. **EDIT: ADDING VIEWS.PY** Currently, my *views.py* script handles a single question/choice pair. Once I figure out how to allow users to submit the form one time for all 10 question/choices, I will need to reflect this in *views.py*. This could sort of be part 2 of the question. First, how do I enable a user to submit all of their responses to all 10 questions with one "submit" button? Second, how do I set up *views.py* to accept more than 1 value at a time? **views.py** ``` def vote(request, question_id): question = get_object_or_404(Question, pk=question_id) try: selected_choice = question.choice_set.get(pk=request.POST['choice']) except (KeyError, Choice.DoesNotExist): return render(request, 'polls/survey.html', { 'error_message': "You didn't select a choice.", }) else: selected_choice.votes += 1 selected_choice.save() return HttpResponseRedirect(reverse('polls:analysis')) ``` **Please let me know if additional context is needed.** Thanks in advance! -C
2017/11/24
[ "https://Stackoverflow.com/questions/47469063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4580366/" ]
Use `getlist()`. In your views.py: ``` if request.method == "POST": choices = request.POST.getlist('choice') ``` I feel you should change the input type from radio to checkbox. Radio won't allow multiple selection but checkbox will. Refer here: <https://docs.djangoproject.com/en/dev/ref/request-response/#django.http.QueryDict.getlist>
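A dependency-free illustration of why `getlist()` is needed: when several form inputs share one `name`, the urlencoded body carries repeated keys, and the standard library parses them into a list, which is the same behaviour `request.POST.getlist('choice')` exposes in Django. The `choice_<id>` naming in the second example is a hypothetical convention for pairing answers with questions, not something from the original template:

```python
from urllib.parse import parse_qs

# Body a browser sends when ten radios all share name="choice"
parsed = parse_qs("choice=3&choice=7&choice=12")
# Each value arrives as a list under the shared key

# With one name per question (hypothetical choice_<question_id> convention),
# each answer stays unambiguously paired with its question
per_question = parse_qs("choice_1=3&choice_2=7")
```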
You just need to organize your template a bit differently in order to have multiple questions within the same `form`. Literally, in HTML it would translate into multiple inputs and then one submit input below, all within one single form: ``` <form action="{% url 'polls:vote' question.id %}" method="post"> {% for question in latest_question_list %} {% csrf_token %} <div class="row"> <div class="col-topic"> <label>{{ question.question_text }}</label> </div> {% for choice in question.choice_set.all %} <div class="col-select"> <input type="radio" name="choice" id="choice{{ forloop.counter }}" value="{{ choice.id }}" /> <label for="choice{{ forloop.counter }}">{{ choice.choice_text }}</label><br /> </div> {% endfor %} </div> {% endfor %} <input type="submit" value="Vote" /> </form> ``` Is it working now?
47,469,063
Forewarning: I'm very new to Django (and web development, in general). I'm using Django to host a web-based UI that will take user input from a short survey, feed it through some analyses that I've developed in Python, and then present the visual output of these analyses in the UI. My survey consists of 10 questions asking a user how much they agree with a specific topic. Example of UI for survey: [Example of UI input screen](https://i.stack.imgur.com/UZMvP.png) For *models.py*, I have 2 fields: Question & Choice ``` class Question(models.Model): question_text = models.CharField(max_length=200) def __str__(self): return self.question_text class Choice(models.Model): question = models.ForeignKey(Question, on_delete=models.CASCADE) choice_text = models.CharField(max_length=200) votes = models.IntegerField(default=0) def __str__(self): return self.choice_text ``` I want to have a user select their response to all 10 questions, and then click submit to submit all responses at once, but I'm having trouble with how that would be handled in Django. Here is the HTML form that I'm using, but this code snippet places a "submit" button after each question, and only allows for a single submission at a time. 
**NOTE: The code below is creating a question-specific form for each iteration.** ``` {% for question in latest_question_list %} <form action="{% url 'polls:vote' question.id %}" method="post"> {% csrf_token %} <div class="row"> <div class="col-topic"> <label>{{ question.question_text }}</label> </div> {% for choice in question.choice_set.all %} <div class="col-select"> <input type="radio" name="choice" id="choice{{ forloop.counter }}" value="{{ choice.id }}" /> <label for="choice{{ forloop.counter }}">{{ choice.choice_text }}</label><br /> </div> {% endfor %} </div> <input type="submit" value="Vote" /> </form> {% endfor %} ``` I'm interested in how I would take multiple inputs (all for Question/Choice) in a single submission and return that back to *views.py*. **EDIT: ADDING VIEWS.PY** Currently, my *views.py* script handles a single question/choice pair. Once I figure out how to allow users to submit the form one time for all 10 question/choices, I will need to reflect this in *views.py*. This could sort of be part 2 of the question. First, how do I enable a user to submit all of their responses to all 10 questions with one "submit" button? Second, how do I set up *views.py* to accept more than 1 value at a time? **views.py** ``` def vote(request, question_id): question = get_object_or_404(Question, pk=question_id) try: selected_choice = question.choice_set.get(pk=request.POST['choice']) except (KeyError, Choice.DoesNotExist): return render(request, 'polls/survey.html', { 'error_message': "You didn't select a choice.", }) else: selected_choice.votes += 1 selected_choice.save() return HttpResponseRedirect(reverse('polls:analysis')) ``` **Please let me know if additional context is needed.** Thanks in advance! -C
2017/11/24
[ "https://Stackoverflow.com/questions/47469063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4580366/" ]
Ideally, this should have been done with Django Forms. Django forms have `widgets` and `RadioSelect` is one of them. You can use that to render your form and get the answer to each question at once. But that will need a lot of change in the way you are currently doing things. So, what you can do is, on click on a submit button, collect all the question/choice pairs and send them at once with a POST request. ``` {% for question in latest_question_list %} <form> <div class="row"> <div class="col-topic"> <label>{{ question.question_text }}</label> </div> {% for choice in question.choice_set.all %} <div class="col-select"> <input type="radio" name="choice" value="{{ choice.id }}" /> <label for="choice{{ forloop.counter }}">{{ choice.choice_text }}</label><br /> </div> {% endfor %} </div> </form> {% endfor %} <input id="submit-btn" type="submit" value="Vote" /> <script> $(document).on('click', '#submit-btn', function(event){ var response_data = [] var question_objs = $('.col-topic'); var choice_objs = $('.col-select'); for(i=0;i<question_objs.length;i++){ var question_text = $(question_objs[i]).find('label').text(); var choice_id = $(choice_objs[i]).find('input').val(); var choice_text = $(choice_objs[i]).find('label').text(); var question_choice = { "question_text": question_text, "choice_id": choice_id, "choice_text": choice_text } response_data.push(question_choice); } $.ajax({ type: "POST", url: "url_to_your_view", data: response_data, success: function(response){ alert("Success"); } }); }); </script> ``` This is how your view should look like. ``` def question_choice_view(request): if request.method == "POST": question_choice_data = request.POST['data'] # further logic ``` Now, question\_choice\_data is a list of dictionaries. Each dict in the list will have the question\_text, choice\_text and choice id of user's response.
You just need to organize your template a bit differently in order to have multiple questions within the same `form`. Literally, in HTML it would translate into multiple inputs and then one submit input below, all within one single form: ``` <form action="{% url 'polls:vote' question.id %}" method="post"> {% for question in latest_question_list %} {% csrf_token %} <div class="row"> <div class="col-topic"> <label>{{ question.question_text }}</label> </div> {% for choice in question.choice_set.all %} <div class="col-select"> <input type="radio" name="choice" id="choice{{ forloop.counter }}" value="{{ choice.id }}" /> <label for="choice{{ forloop.counter }}">{{ choice.choice_text }}</label><br /> </div> {% endfor %} </div> {% endfor %} <input type="submit" value="Vote" /> </form> ``` Is it working now?
54,054
I am forming a universe of liquid futures/liquid FX forwards. I want a list of all liquid contracts, the key word being liquid. This is for an academic project, but you could imagine liquid being loosely defined as securities that could form the core trading portfolio of a mid-sized systematic trend-following CTA. This question is not meant to be a debate about the liquidity of individual contracts or markets- I'm asking for a systematic, repeatable procedure for determining a list of what I expect to be around 100-300 markets. I am flexible in terms of how I form this list, provided the method is systematic. For example, one approach might be to rank all generic contracts in Bloomberg by (say) 100-day trailing ADV as of (say) EOY 2019. Unfortunately, I have no idea how to do this, and help desk didn't seem to have a solution. Another issue here is narrowing down the list of securities that I am ranking such that I don't trip the Bloomberg API limits (for those less familiar with Bloomberg, this means I'd be hesitant to query more than 1000 or so securities). With equities, I would get a list of all stocks from the NYSE/NASDAQ/AMEX. But on the CME website, there seem to be a lot of contracts without listed volume, so this approach seems inappropriate. Another possible way might be to include the constituents of 1 or more futures indices. For instance, with equities I would use the Russell 3000. However, I haven't found an analogous index for commodity futures. BCOM constituents form a very small and limited list (for instance, they do not include Financials). GSCI is another small list. But a broader list seems hard to find. A word on data sources: I have access to Bloomberg and limited access to WRDS. My preference is for Bloomberg since the generic contracts are particularly convenient, though obviously I can make a crosswalk if necessary.
2020/05/11
[ "https://quant.stackexchange.com/questions/54054", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/34436/" ]
Systematically finding most liquid futures instruments ====================================================== --- Can we put together a better list than the academic articles? ------------------------------------------------------------- Yes! The lists in existing publications [[1](https://www.sciencedirect.com/science/article/abs/pii/S0304405X17302908), [2](https://www.researchgate.net/publication/304456445_Are_There_Exploitable_Trends_in_Commodity_Futures_Prices)] are great, but fall slightly short of your goal: > > I'm asking for a **systematic**, **repeatable** procedure for determining a list of what I expect to be around 100-300 ~~markets~~ instruments. [3] > > > What's liquid in 2012 and 2016 might not be in 2020. For example, the Micro E-mini S&Ps didn't launch until 2019, but are currently more active than many of the instruments in those academic publications. Moreover, you can't control the number of instruments that qualify. For these reasons, generating them systematically is a lot better. Using secdef files ------------------ One strategy is to use open interest as a proxy for liquidity instead of volume. Then, you can use secdef files which CME updates regularly on their public FTP server [here](ftp://ftp.cmegroup.com/SBEFix/Production/). The secdef files are written in plaintext and can be parsed like any FIX data. You can see a dictionary of all the fields [here](https://www.cmegroup.com/confluence/display/EPICSANDBOX/MDP+3.0+-+Security+Definition). In your case, you're probably interested in `207=SecurityExchange`, `1151=SecurityGroup`, `55=Symbol`, `167=SecurityType`, `462=UnderlyingProduct`, `5792=OpenInterestQty`. You'll need to decide whether you want to aggregate the open interest across all contract months when ranking your instruments. To make it simple, I assume that you do in my example code, but my solution can be extended easily. 
If you're not familiar with the Globex product codes, you can use the [product slate](https://www.cmegroup.com/trading/products/). Example code ------------ I made [example code available on GitHub to demonstrate how this is done](https://github.com/databento/secdef-parser). Here's the top 10: ``` XCME,GE,Interest Rate,10743955 XCBT,ZF,Interest Rate,3618711 XCBT,ZN,Interest Rate,3391115 XCME,ES,Equity,3265839 XNYM,CL,Energy,3003110 XCBT,ZT,Interest Rate,2447405 XNYM,NG,Energy,2217482 XCBT,ZS,Commodity/Agriculture,1729122 XCBT,ZQ,Interest Rate,1719442 XCBT,ZC,Commodity/Agriculture,1395498 ``` This approach lets you rank instruments and easily qualifies a larger set of instruments (70+) including the Micro E-mini S&Ps. It also lets you test the stability of your selection. You'll just need to store the secdef files over time by yourself. But for your convenience, you can get a free batch of historical secdef files from Dec 2019, hosted by Databento\* [here](https://s3.amazonaws.com/databento.com/samples/sample-cme-secdef-201912.zip), together with [a complete list of 163 instrument groups](https://s3.amazonaws.com/databento.com/samples/ranking.csv) generated from the 05/11/2020 secdef file. *For full disclosure: I work at said firm.* --- Other issues ------------ Also, to fill the gaps on 2 issues you're experiencing: > > Another issue here is narrowing down the list of securities that I am ranking such that I don't trip the Bloomberg API limits (for those less familiar with Bloomberg, this means I'd be hesitant to query more than 1000 or so securities). > > > The generic terminal isn't designed for systematic querying and ranking of instruments across the entire market. To do this with Bloomberg, you'll likely need a **B-PIPE** subscription instead. > > One approach might be to rank all generic contracts in Bloomberg by (say) 100-day trailing ADV as of (say) EOY 2019. > > > This isn't easy because futures contracts **expire**. 
You will need some systematic way to map from the individual contract months (e.g. ESM0, ESU0) to their root symbol (ES), otherwise you won't be able to compute ADV over a long horizon like 100 days. You might be able to get away with discretizing into month buckets, but you'll still need to account for the fact that rollover dates, lead months, and active months differ from contract to contract.
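A minimal sketch of the aggregate-and-rank step described in the answer, in plain Python. The records below are made-up stand-ins for already-parsed secdef entries (real files are SOH-delimited FIX tag=value messages, and the open-interest numbers here are invented for illustration):

```python
from collections import defaultdict

# Hypothetical parsed secdef records, keyed by FIX tag:
# 207=SecurityExchange, 1151=SecurityGroup, 55=Symbol, 5792=OpenInterestQty
records = [
    {"207": "XCME", "1151": "ES", "55": "ESM0", "5792": "2000000"},
    {"207": "XCME", "1151": "ES", "55": "ESU0", "5792": "1265839"},
    {"207": "XCBT", "1151": "ZN", "55": "ZNM0", "5792": "3391115"},
]

# Aggregate open interest across all contract months per (exchange, group)
open_interest = defaultdict(int)
for rec in records:
    open_interest[(rec["207"], rec["1151"])] += int(rec.get("5792", "0") or 0)

# Rank instrument groups by total open interest, most liquid first
ranking = sorted(open_interest.items(), key=lambda kv: kv[1], reverse=True)
```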
You could consider using the list of "liquid futures contracts" used in some previously published paper(s) on this subject; there are many. Alternatively, if you think previous studies missed some important contracts, you could try to establish your own list independently. I thought, for example, of using the following from a well-known paper: ``` Moskowitz, Ooi and Pedersen (2012) Appendix A ``` And you have also suggested two others: ``` Koijen Moskowitz Pederson Vrugt (2018) Appendix D Han Hu Yang (2016) Table 1 ```
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* in the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
I think Oracle is smart enough to convert the less efficient one (whichever that is) into the other, so I think the answer should rather depend on the readability of each (where I think that `IN` clearly wins).
`OR` makes sense (from a readability point of view) when there are fewer values to be compared. `IN` is useful especially when you have a dynamic source with which you want values to be compared. Another alternative is to use a `JOIN` with a temporary table. I don't think performance should be a problem, provided you have the necessary indexes.
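The temporary-table `JOIN` alternative mentioned above can be sketched in miniature with SQLite from Python; the table and value names are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
INSERT INTO orders (status) VALUES ('new'), ('shipped'), ('cancelled'), ('new');

-- The values that would otherwise appear in IN (...) or an OR chain
CREATE TEMP TABLE wanted (status TEXT PRIMARY KEY);
INSERT INTO wanted VALUES ('new'), ('shipped');
""")

# Equivalent to: SELECT ... WHERE status IN ('new', 'shipped')
rows = conn.execute("""
    SELECT o.id, o.status
    FROM orders o
    JOIN wanted w ON w.status = o.status
    ORDER BY o.id
""").fetchall()
```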
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* in the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
`OR` makes sense (from a readability point of view) when there are fewer values to be compared. `IN` is useful especially when you have a dynamic source with which you want values to be compared. Another alternative is to use a `JOIN` with a temporary table. I don't think performance should be a problem, provided you have the necessary indexes.
I ran a SQL query with a large number of OR conditions (350). Postgres does it in **437.80ms**. ![Use OR](https://i.stack.imgur.com/Cv0fU.png) Now using IN: ![Use IN](https://i.stack.imgur.com/1UUga.png) **23.18ms**
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* in the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
I assume you want to know the performance difference between the following: ``` WHERE foo IN ('a', 'b', 'c') WHERE foo = 'a' OR foo = 'b' OR foo = 'c' ``` According to the [manual for MySQL](http://dev.mysql.com/doc/refman/5.5/en/comparison-operators.html#function_in), if the values are constant, `IN` sorts the list and then uses a binary search. I would imagine that `OR` evaluates them one by one in no particular order. So `IN` is faster in some circumstances. The best way to know is to profile both on your database with your specific data to see which is faster. I tried both on a MySQL table with 1,000,000 rows. When the column is indexed there is no discernible difference in performance - both are nearly instant. When the column is not indexed I got these results: ``` SELECT COUNT(*) FROM t_inner WHERE val IN (1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000); 1 row fetched in 0.0032 (1.2679 seconds) SELECT COUNT(*) FROM t_inner WHERE val = 1000 OR val = 2000 OR val = 3000 OR val = 4000 OR val = 5000 OR val = 6000 OR val = 7000 OR val = 8000 OR val = 9000; 1 row fetched in 0.0026 (1.7385 seconds) ``` So in this case the method using OR is about 30% slower. Adding more terms makes the difference larger. Results may vary on other databases and on other data.
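The sorted-list-plus-binary-search behaviour the MySQL manual describes for constant `IN` lists can be mimicked in a few lines of Python. This is only an illustration of the two evaluation strategies, not what the server literally runs:

```python
from bisect import bisect_left

# The IN list is sorted once up front; each row's value is then binary-searched
constants = sorted([9000, 1000, 5000, 3000, 7000, 2000, 8000, 4000, 6000])

def in_test(value):
    """Mimic constant IN: binary search over the pre-sorted list, O(log n)."""
    i = bisect_left(constants, value)
    return i < len(constants) and constants[i] == value

def or_test(value):
    """Mimic a chain of OR equality predicates: linear scan, O(n)."""
    return any(value == c for c in constants)
```

Both functions always agree on the result; the difference is purely in how many comparisons each one performs per row.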
I did a SQL query in a large number of OR (350). Postgres do it **437.80ms**. ![Use OR](https://i.stack.imgur.com/Cv0fU.png) Now use IN: ![Use IN](https://i.stack.imgur.com/1UUga.png) **23.18ms**
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* in the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
The best way to find out is to look at the Execution Plan. --- I tried it with **Oracle**, and it was exactly the same. ``` CREATE TABLE performance_test AS ( SELECT * FROM dba_objects ); SELECT * FROM performance_test WHERE object_name IN ('DBMS_STANDARD', 'DBMS_REGISTRY', 'DBMS_LOB' ); ``` Even though the query uses `IN`, the Execution Plan says that it uses `OR`: ``` -------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 8 | 1416 | 163 (2)| 00:00:02 | |* 1 | TABLE ACCESS FULL| PERFORMANCE_TEST | 8 | 1416 | 163 (2)| 00:00:02 | -------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("OBJECT_NAME"='DBMS_LOB' OR "OBJECT_NAME"='DBMS_REGISTRY' OR "OBJECT_NAME"='DBMS_STANDARD') ```
I ran a SQL query with a large number of OR conditions (350). Postgres does it in **437.80ms**. ![Use OR](https://i.stack.imgur.com/Cv0fU.png) Now using IN: ![Use IN](https://i.stack.imgur.com/1UUga.png) **23.18ms**
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* in the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
I assume you want to know the performance difference between the following: ``` WHERE foo IN ('a', 'b', 'c') WHERE foo = 'a' OR foo = 'b' OR foo = 'c' ``` According to the [manual for MySQL](http://dev.mysql.com/doc/refman/5.5/en/comparison-operators.html#function_in), if the values are constant, `IN` sorts the list and then uses a binary search. I would imagine that `OR` evaluates them one by one in no particular order. So `IN` is faster in some circumstances. The best way to know is to profile both on your database with your specific data to see which is faster. I tried both on a MySQL table with 1,000,000 rows. When the column is indexed there is no discernible difference in performance - both are nearly instant. When the column is not indexed I got these results: ``` SELECT COUNT(*) FROM t_inner WHERE val IN (1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000); 1 row fetched in 0.0032 (1.2679 seconds) SELECT COUNT(*) FROM t_inner WHERE val = 1000 OR val = 2000 OR val = 3000 OR val = 4000 OR val = 5000 OR val = 6000 OR val = 7000 OR val = 8000 OR val = 9000; 1 row fetched in 0.0026 (1.7385 seconds) ``` So in this case the method using OR is about 30% slower. Adding more terms makes the difference larger. Results may vary on other databases and on other data.
The best way to find out is to look at the Execution Plan. --- I tried it with **Oracle**, and it was exactly the same. ``` CREATE TABLE performance_test AS ( SELECT * FROM dba_objects ); SELECT * FROM performance_test WHERE object_name IN ('DBMS_STANDARD', 'DBMS_REGISTRY', 'DBMS_LOB' ); ``` Even though the query uses `IN`, the Execution Plan says that it uses `OR`: ``` -------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 8 | 1416 | 163 (2)| 00:00:02 | |* 1 | TABLE ACCESS FULL| PERFORMANCE_TEST | 8 | 1416 | 163 (2)| 00:00:02 | -------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("OBJECT_NAME"='DBMS_LOB' OR "OBJECT_NAME"='DBMS_REGISTRY' OR "OBJECT_NAME"='DBMS_STANDARD') ```
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* in the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
I assume you want to know the performance difference between the following: ``` WHERE foo IN ('a', 'b', 'c') WHERE foo = 'a' OR foo = 'b' OR foo = 'c' ``` According to the [manual for MySQL](http://dev.mysql.com/doc/refman/5.5/en/comparison-operators.html#function_in), if the values are constant, `IN` sorts the list and then uses a binary search. I would imagine that `OR` evaluates them one by one in no particular order. So `IN` is faster in some circumstances. The best way to know is to profile both on your database with your specific data to see which is faster. I tried both on a MySQL table with 1,000,000 rows. When the column is indexed there is no discernible difference in performance - both are nearly instant. When the column is not indexed I got these results: ``` SELECT COUNT(*) FROM t_inner WHERE val IN (1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000); 1 row fetched in 0.0032 (1.2679 seconds) SELECT COUNT(*) FROM t_inner WHERE val = 1000 OR val = 2000 OR val = 3000 OR val = 4000 OR val = 5000 OR val = 6000 OR val = 7000 OR val = 8000 OR val = 9000; 1 row fetched in 0.0026 (1.7385 seconds) ``` So in this case the method using OR is about 30% slower. Adding more terms makes the difference larger. Results may vary on other databases and on other data.
The OR operator needs a much more complex evaluation process than the IN construct because it allows many conditions, not only equality like IN. Here is a list of what you can use with OR that is not compatible with IN: greater, greater or equal, less, less or equal, LIKE, and some more like the Oracle REGEXP\_LIKE. In addition, consider that the conditions may not always compare the same value. For the query optimizer it's easier to manage the IN operator because it is only a construct that defines the OR operator on multiple conditions with the = operator on the same value. If you use the OR operator, the optimizer may not consider that you're always using the = operator on the same value, and if it doesn't perform a deeper and much more complex elaboration, it could probably exclude that there may be only = operators for the same values on all the involved conditions, with a consequent preclusion of optimized search methods like the already mentioned binary search. [EDIT] Probably an optimizer may not implement an optimized IN evaluation process, but this doesn't exclude that at some point it could happen (with a database version upgrade). So if you use the OR operator, that optimized elaboration will not be used in your case.
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* in the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
The OR operator needs a much more complex evaluation process than the IN construct because it allows many conditions, not only equality like IN. Here is a list of what you can use with OR that is not compatible with IN: greater, greater or equal, less, less or equal, LIKE, and some more like the Oracle REGEXP\_LIKE. In addition, consider that the conditions may not always compare the same value. For the query optimizer it's easier to manage the IN operator because it is only a construct that defines the OR operator on multiple conditions with the = operator on the same value. If you use the OR operator, the optimizer may not consider that you're always using the = operator on the same value, and if it doesn't perform a deeper and much more complex elaboration, it could probably exclude that there may be only = operators for the same values on all the involved conditions, with a consequent preclusion of optimized search methods like the already mentioned binary search. [EDIT] Probably an optimizer may not implement an optimized IN evaluation process, but this doesn't exclude that at some point it could happen (with a database version upgrade). So if you use the OR operator, that optimized elaboration will not be used in your case.
I think oracle is smart enough to convert the less efficient one (whichever that is) into the other. So I think the answer should rather depend on the readability of each (where I think that `IN` clearly wins)
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* about the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
I think oracle is smart enough to convert the less efficient one (whichever that is) into the other. So I think the answer should rather depend on the readability of each (where I think that `IN` clearly wins)
I ran a SQL query with a large number of OR conditions (350). Postgres did it in **437.80ms**. ![Use OR](https://i.stack.imgur.com/Cv0fU.png) Now using IN: ![Use IN](https://i.stack.imgur.com/1UUga.png) **23.18ms**
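The benchmark above can be reproduced in miniature. A hedged sketch using SQLite (not Postgres, so the planner and timings will differ) with a hypothetical table: the point is only that `IN` and a chain of `OR`s are semantically equivalent, while the engine may plan them differently.

```python
import sqlite3

# Hypothetical in-memory table standing in for the large table in the answer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [(i % 1000,) for i in range(100_000)])
conn.execute("CREATE INDEX t_val ON t (val)")

targets = list(range(0, 1000, 3))  # 334 values, echoing the 350-term query
in_sql = "SELECT COUNT(*) FROM t WHERE val IN (%s)" % ",".join("?" * len(targets))
or_sql = "SELECT COUNT(*) FROM t WHERE " + " OR ".join(["val = ?"] * len(targets))

in_count = conn.execute(in_sql, targets).fetchone()[0]
or_count = conn.execute(or_sql, targets).fetchone()[0]
assert in_count == or_count  # same result set either way
```

Wrapping each `execute` in a timer (or prefixing the queries with `EXPLAIN QUERY PLAN`) shows how a given engine treats the two forms; the Postgres numbers in the answer are one such measurement.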
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* about the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
I think oracle is smart enough to convert the less efficient one (whichever that is) into the other. So I think the answer should rather depend on the readability of each (where I think that `IN` clearly wins)
I'll add info for **PostgreSQL** version 11.8 (released 2020-05-14). `IN` may be significantly faster. E.g. table with ~23M rows. Query with `OR`: ```sql explain analyse select sum(mnozstvi_rozdil) from product_erecept where okres_nazev = 'Brno-město' or okres_nazev = 'Pardubice'; -- execution plan Finalize Aggregate (cost=725977.36..725977.37 rows=1 width=32) (actual time=4536.796..4540.748 rows=1 loops=1) -> Gather (cost=725977.14..725977.35 rows=2 width=32) (actual time=4535.010..4540.732 rows=3 loops=1) Workers Planned: 2 Workers Launched: 2 -> Partial Aggregate (cost=724977.14..724977.15 rows=1 width=32) (actual time=4519.338..4519.339 rows=1 loops=3) -> Parallel Bitmap Heap Scan on product_erecept (cost=15589.71..724264.41 rows=285089 width=4) (actual time=135.832..4410.525 rows=230706 loops=3) Recheck Cond: (((okres_nazev)::text = 'Brno-město'::text) OR ((okres_nazev)::text = 'Pardubice'::text)) Rows Removed by Index Recheck: 3857398 Heap Blocks: exact=11840 lossy=142202 -> BitmapOr (cost=15589.71..15589.71 rows=689131 width=0) (actual time=140.985..140.986 rows=0 loops=1) -> Bitmap Index Scan on product_erecept_x_okres_nazev (cost=0.00..8797.61 rows=397606 width=0) (actual time=99.371..99.371 rows=397949 loops=1) Index Cond: ((okres_nazev)::text = 'Brno-město'::text) -> Bitmap Index Scan on product_erecept_x_okres_nazev (cost=0.00..6450.00 rows=291525 width=0) (actual time=41.612..41.612 rows=294170 loops=1) Index Cond: ((okres_nazev)::text = 'Pardubice'::text) Planning Time: 0.162 ms Execution Time: 4540.829 ms ``` Query with `IN`: ``` explain analyse select sum(mnozstvi_rozdil) from product_erecept where okres_nazev in ('Brno-město', 'Pardubice'); -- execution plan Aggregate (cost=593199.90..593199.91 rows=1 width=32) (actual time=855.706..855.707 rows=1 loops=1) -> Index Scan using product_erecept_x_okres_nazev on product_erecept (cost=0.56..591477.07 rows=689131 width=4) (actual time=1.326..645.597 rows=692119 loops=1) Index Cond: ((okres_nazev)::text = ANY ('{Brno-město,Pardubice}'::text[])) Planning Time: 0.136 ms Execution Time: 855.743 ms ```
3,074,713
When dealing with big databases, which performs better: `IN` or `OR` in the SQL `WHERE` clause? Is there any *difference* about the way they are executed?
2010/06/19
[ "https://Stackoverflow.com/questions/3074713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
I assume you want to know the performance difference between the following: ``` WHERE foo IN ('a', 'b', 'c') WHERE foo = 'a' OR foo = 'b' OR foo = 'c' ``` According to the [manual for MySQL](http://dev.mysql.com/doc/refman/5.5/en/comparison-operators.html#function_in) if the values are constant `IN` sorts the list and then uses a binary search. I would imagine that `OR` evaluates them one by one in no particular order. So `IN` is faster in some circumstances. The best way to know is to profile both on your database with your specific data to see which is faster. I tried both on a MySQL with 1000000 rows. When the column is indexed there is no discernable difference in performance - both are nearly instant. When the column is not indexed I got these results: ``` SELECT COUNT(*) FROM t_inner WHERE val IN (1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000); 1 row fetched in 0.0032 (1.2679 seconds) SELECT COUNT(*) FROM t_inner WHERE val = 1000 OR val = 2000 OR val = 3000 OR val = 4000 OR val = 5000 OR val = 6000 OR val = 7000 OR val = 8000 OR val = 9000; 1 row fetched in 0.0026 (1.7385 seconds) ``` So in this case the method using OR is about 30% slower. Adding more terms makes the difference larger. Results may vary on other databases and on other data.
`OR` makes sense (from readability point of view), when there are less values to be compared. `IN` is useful esp. when you have a dynamic source, with which you want values to be compared. Another alternative is to use a `JOIN` with a temporary table. I don't think performance should be a problem, provided you have necessary indexes.
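The MySQL-manual claim cited above — a constant `IN` list is sorted once and probed with binary search, while an `OR` chain is checked condition by condition — can be sketched in Python. This is an illustration of the two lookup strategies, not of MySQL internals.

```python
import bisect

# Sorted copy of the constant list, as the manual says IN prepares it.
in_list = sorted([1000, 2000, 3000, 4000, 5000,
                  6000, 7000, 8000, 9000])

def in_binary(v):
    """Binary-search probe, O(log n) per row value."""
    i = bisect.bisect_left(in_list, v)
    return i < len(in_list) and in_list[i] == v

def or_linear(v):
    """One-by-one OR evaluation, O(n) per row value."""
    return any(v == c for c in in_list)

# Both strategies agree on every input; only the cost differs.
assert in_binary(5000) and or_linear(5000)
assert not in_binary(5001) and not or_linear(5001)
```

On a nine-element list the difference is negligible, which matches the measurements above: the gap only becomes visible with many terms and no index.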
55,737,964
I have the following data and want to match certain strings as commented below. ``` FTUS80 KWBC 081454 AAA\r\r TAF AMD #should match 'AAA' LTUS41 KCTP 082111 RR3\r\r TMLLNS\r #should match 'RR3' and 'TMLLNS' SRUS55 KSLC 082010\r\r HM5SLC\r\r #should match 'HM5SLC' SRUS55 KSLC 082010\r\r SIGC \r\r #should match 'SIGC ' including whitespace ``` I need the following conditions met. But it doesn't work when I put it all together so I know I have mistakes. Thanks in advance. * Start match after 6 digit string: (?<=\d{6}) * match if 3 character mixed uppercase/digits and before first 2 carriage returns: ([A-Z0-9]{3})(?=\r) * match if 6 characters mixed uppercase/digits after carriage returns: (?<=\r\r[A-Z0-9]{6}) * match if 4 characters and two spaces: ([A-Z0-9]{4} )
2019/04/18
[ "https://Stackoverflow.com/questions/55737964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3093032/" ]
There is probably a more elegant way, but you could do something like the following: ``` (?:\d{6}\s?)([A-Z\d]{3})?(?:[\r\n]{2}\s)([A-Z\d]{6}|[A-Z\d]{4}\s{2})? ``` * `(?:\d{6}\s?)` non capture group of 6 digits followed by an optional space * `([A-Z\d]{3})?` optional capture group of 3 uppercase letters / digits * `(?:[\r\n]{2}\s)` non capture group of two line endings followed by 1 space * `([A-Z\d]{6}|[A-Z\d]{4}\s{2})?` optional capture group of either 6 uppercase letters / digits OR 4 uppercase letters / digits followed by 2 spaces
It's not clear what's the end of line here but assuming it's Unix one `\n`, the following expression captures strings as requested (double quotes added to show white space) ``` sed -rne 's/^.{18} ?([A-Z0-9]{3,3})?\r{2}?([^\r]+)?\r.*$/"\1\2"/p' text.txt ``` Result ``` "AAA" "RR3 TMLLNS" " HM5SLC" " SIGC " ``` * `.{18}` first 18 characters * `?([A-Z0-9]{3,3})?` matches AAA or RR3 without leading space * `\r{2}?([^\r]+)?\r` matches `TMLLNS`, `HM5SLC` or `SIGC` preceded by 2 `\r` and followed by 1 `\r` characters.
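Both approaches above can be checked directly in Python (the language of the question). A hedged sketch that splits the job into two fixed-width lookbehind passes instead of one combined expression; the sample strings and expected matches are taken from the question itself.

```python
import re

bulletins = [
    "FTUS80 KWBC 081454 AAA\r\r TAF AMD",
    "LTUS41 KCTP 082111 RR3\r\r TMLLNS\r",
    "SRUS55 KSLC 082010\r\r HM5SLC\r\r",
    "SRUS55 KSLC 082010\r\r SIGC \r\r",
]

# Pass 1: optional 3-char designator right after the 6-digit timestamp.
after_time = re.compile(r"(?<=\d{6} )[A-Z0-9]{3}(?=\r\r)")
# Pass 2: 6-char product id after the double \r, or 4 chars plus a space.
after_crs = re.compile(r"(?<=\r\r )[A-Z0-9]{6}|(?<=\r\r )[A-Z0-9]{4} ")

results = [after_time.findall(b) + after_crs.findall(b) for b in bulletins]
# -> [['AAA'], ['RR3', 'TMLLNS'], ['HM5SLC'], ['SIGC ']]
```

Two small passes keep each lookbehind fixed-width (a requirement of Python's `re`), at the cost of scanning each bulletin twice.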
5,699,339
I've used 'uniq -d -c file' in many shell scripts on linux machines, and it works. On my MAC (OS X 10.6.7 with developer tools installed) it doesn't seem to work: ``` $ uniq -d -c testfile.txt usage: uniq [-c | -d | -u] [-i] [-f fields] [-s chars] [input [output]] ``` It would be nice if anyone could check this.
2011/04/18
[ "https://Stackoverflow.com/questions/5699339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/95914/" ]
Well, it's right there in the `Usage` message. `[ -c | -d | -u]` means you can use *one* of those possibilities, not two. Since OSX is based on BSD, you can check that [here](http://www.manpages.info/freebsd/uniq.1.html) or, thanks to Ignacio, the more Apple-specific one [here](http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/uniq.1.html). If you want to achieve a similar output, you could use: ``` do_your_thing | uniq -c | grep -v '^ *1 ' ``` which will strip out all those coalesced lines that have a count of one.
You can try this `awk` solution ``` awk '{a[$0]++}END{for(i in a)if(a[i]>1){ print i ,a[i] } }' file ```
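Alongside the shell workarounds above, the same `uniq -d -c` effect can be sketched in Python with `collections.Counter`. Note the assumption: like GNU `uniq`, this expects the input already sorted, since `uniq` only counts *adjacent* duplicates while `Counter` counts globally.

```python
from collections import Counter

# Pre-sorted input stream, as uniq expects.
lines = ["a", "a", "b", "c", "c", "c"]

counts = Counter(lines)
# Keep only lines that occurred more than once -- what -d reports,
# together with their counts -- what -c adds.
dups = {line: n for line, n in counts.items() if n > 1}
# -> {'a': 2, 'c': 3}
```

This mirrors the `uniq -c | grep -v '^ *1 '` pipeline from the accepted answer without needing BSD `uniq` to accept `-d -c` together.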
8,063,599
Let's see the following simplest code snippet in Java. ``` final public class Parsing { public static void main(String[] args) { int p=10; int q=10; System.out.println(p==q); } } ``` --- The above code in Java is fine and displays **true** as both p and q of the same type (int) contain the same value (10). Now, I want to concatenate the argument of println() so that it contains a new line escape sequence **\n** and the result is displayed in a new line as below. ``` System.out.println("\n"+p==q); ``` Which is not allowed at all because the expression **p==q** evaluates a boolean value and a boolean type in Java (not Boolean, a wrapper type) can never be converted to any other types available in Java. Therefore, the above mentioned statement is invalid and it issues a compile-time error. What is the way to get around such situations in Java? --- and surprisingly, the following statements are perfectly valid in Java. ``` System.out.println("\n"+true); System.out.println("\n"+false); ``` and displays true and false respectively. How? Why is the same thing in the statement `System.out.println("\n"+p==q);` not allowed?
2011/11/09
[ "https://Stackoverflow.com/questions/8063599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1037210/" ]
You have to add parentheses () around p==q (the way you write it, it will be interpreted as ("\n"+p) == q, and String cannot be compared to a boolean). This operator precedence is desired for expressions like ``` if(a+b == c+d) ``` etc. So, ``` System.out.println("\n"+(p==q)); ``` Should work just fine.
The order of precedence of operators means that your expression gets evaluated as ``` ("\n" + p) == q ``` It is nonsensical to compare a string to an int so compilation fails, try: ``` "\n" + (p == q) ```
8,063,599
Let's see the following simplest code snippet in Java. ``` final public class Parsing { public static void main(String[] args) { int p=10; int q=10; System.out.println(p==q); } } ``` --- The above code in Java is fine and displays **true** as both p and q of the same type (int) contain the same value (10). Now, I want to concatenate the argument of println() so that it contains a new line escape sequence **\n** and the result is displayed in a new line as below. ``` System.out.println("\n"+p==q); ``` Which is not allowed at all because the expression **p==q** evaluates a boolean value and a boolean type in Java (not Boolean, a wrapper type) can never be converted to any other types available in Java. Therefore, the above mentioned statement is invalid and it issues a compile-time error. What is the way to get around such situations in Java? --- and surprisingly, the following statements are perfectly valid in Java. ``` System.out.println("\n"+true); System.out.println("\n"+false); ``` and displays true and false respectively. How? Why is the same thing in the statement `System.out.println("\n"+p==q);` not allowed?
2011/11/09
[ "https://Stackoverflow.com/questions/8063599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1037210/" ]
You have to add parentheses () around p==q (the way you write it, it will be interpreted as ("\n"+p) == q, and String cannot be compared to a boolean). This operator precedence is desired for expressions like ``` if(a+b == c+d) ``` etc. So, ``` System.out.println("\n"+(p==q)); ``` Should work just fine.
Ah. The statement is wrong. ``` System.out.println("\n"+(p==q)); ``` ~Dheeraj
8,063,599
Let's see the following simplest code snippet in Java. ``` final public class Parsing { public static void main(String[] args) { int p=10; int q=10; System.out.println(p==q); } } ``` --- The above code in Java is fine and displays **true** as both p and q of the same type (int) contain the same value (10). Now, I want to concatenate the argument of println() so that it contains a new line escape sequence **\n** and the result is displayed in a new line as below. ``` System.out.println("\n"+p==q); ``` Which is not allowed at all because the expression **p==q** evaluates a boolean value and a boolean type in Java (not Boolean, a wrapper type) can never be converted to any other types available in Java. Therefore, the above mentioned statement is invalid and it issues a compile-time error. What is the way to get around such situations in Java? --- and surprisingly, the following statements are perfectly valid in Java. ``` System.out.println("\n"+true); System.out.println("\n"+false); ``` and displays true and false respectively. How? Why is the same thing in the statement `System.out.println("\n"+p==q);` not allowed?
2011/11/09
[ "https://Stackoverflow.com/questions/8063599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1037210/" ]
You have to add parentheses () around p==q (the way you write it, it will be interpreted as ("\n"+p) == q, and String cannot be compared to a boolean). This operator precedence is desired for expressions like ``` if(a+b == c+d) ``` etc. So, ``` System.out.println("\n"+(p==q)); ``` Should work just fine.
> > System.out.println("\n"+p==q); > > > compiler treat it as ``` System.out.println(("\n"+p)==q); ``` Use ``` System.out.println("\n"+(p==q)); ```
8,063,599
Let's see the following simplest code snippet in Java. ``` final public class Parsing { public static void main(String[] args) { int p=10; int q=10; System.out.println(p==q); } } ``` --- The above code in Java is fine and displays **true** as both p and q of the same type (int) contain the same value (10). Now, I want to concatenate the argument of println() so that it contains a new line escape sequence **\n** and the result is displayed in a new line as below. ``` System.out.println("\n"+p==q); ``` Which is not allowed at all because the expression **p==q** evaluates a boolean value and a boolean type in Java (not Boolean, a wrapper type) can never be converted to any other types available in Java. Therefore, the above mentioned statement is invalid and it issues a compile-time error. What is the way to get around such situations in Java? --- and surprisingly, the following statements are perfectly valid in Java. ``` System.out.println("\n"+true); System.out.println("\n"+false); ``` and displays true and false respectively. How? Why is the same thing in the statement `System.out.println("\n"+p==q);` not allowed?
2011/11/09
[ "https://Stackoverflow.com/questions/8063599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1037210/" ]
You have to add parentheses () around p==q (the way you write it, it will be interpreted as ("\n"+p) == q, and String cannot be compared to a boolean). This operator precedence is desired for expressions like ``` if(a+b == c+d) ``` etc. So, ``` System.out.println("\n"+(p==q)); ``` Should work just fine.
> > Which is not allowed at all because the expression p==q evaluates a boolean value and a boolean type in Java (not Boolean, a wrapper type) can never be converted to any other types available in Java. > > > This is completely wrong. Concatenating anything to a String is implemented by the compiler via `String.valueOf()`, which is also overloaded to accept booleans. The reason for the compiler error is simply that `+` has a higher operator precedence than `==`, so your code is equivalent to ``` System.out.println(("\n"+p)==q); ``` and the error occurs because you have a `String` and an `int` being compared with `==` On the other hand, this works just as intended: ``` System.out.println("\n"+(p==q)); ```
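The precedence point above is not Java-specific. A hedged Python analogue: Python also binds `+` tighter than `==`, though it needs an explicit `str()` because, unlike Java, it never implicitly converts an int for concatenation.

```python
p, q = 10, 10

# Parsed as ("\n" + str(p)) == q: the string "\n10" is compared to the
# int 10, which is simply False (Java instead rejects it at compile time).
ambiguous = "\n" + str(p) == q

# Parentheses force the comparison first, then concatenate its result.
explicit = "\n" + str(p == q)

assert ambiguous is False
assert explicit == "\nTrue"
```

The fix is identical in both languages: parenthesize the comparison so it is evaluated before the concatenation.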
8,063,599
Let's see the following simplest code snippet in Java. ``` final public class Parsing { public static void main(String[] args) { int p=10; int q=10; System.out.println(p==q); } } ``` --- The above code in Java is fine and displays **true** as both p and q of the same type (int) contain the same value (10). Now, I want to concatenate the argument of println() so that it contains a new line escape sequence **\n** and the result is displayed in a new line as below. ``` System.out.println("\n"+p==q); ``` Which is not allowed at all because the expression **p==q** evaluates a boolean value and a boolean type in Java (not Boolean, a wrapper type) can never be converted to any other types available in Java. Therefore, the above mentioned statement is invalid and it issues a compile-time error. What is the way to get around such situations in Java? --- and surprisingly, the following statements are perfectly valid in Java. ``` System.out.println("\n"+true); System.out.println("\n"+false); ``` and displays true and false respectively. How? Why is the same thing in the statement `System.out.println("\n"+p==q);` not allowed?
2011/11/09
[ "https://Stackoverflow.com/questions/8063599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1037210/" ]
The order of precedence of operators means that your expression gets evaluated as ``` ("\n" + p) == q ``` It is nonsensical to compare a string to an int so compilation fails, try: ``` "\n" + (p == q) ```
Ah. The statement is wrong. ``` System.out.println("\n"+(p==q)); ``` ~Dheeraj
8,063,599
Let's see the following simplest code snippet in Java. ``` final public class Parsing { public static void main(String[] args) { int p=10; int q=10; System.out.println(p==q); } } ``` --- The above code in Java is fine and displays **true** as both p and q of the same type (int) contain the same value (10). Now, I want to concatenate the argument of println() so that it contains a new line escape sequence **\n** and the result is displayed in a new line as below. ``` System.out.println("\n"+p==q); ``` Which is not allowed at all because the expression **p==q** evaluates a boolean value and a boolean type in Java (not Boolean, a wrapper type) can never be converted to any other types available in Java. Therefore, the above mentioned statement is invalid and it issues a compile-time error. What is the way to get around such situations in Java? --- and surprisingly, the following statements are perfectly valid in Java. ``` System.out.println("\n"+true); System.out.println("\n"+false); ``` and displays true and false respectively. How? Why is the same thing in the statement `System.out.println("\n"+p==q);` not allowed?
2011/11/09
[ "https://Stackoverflow.com/questions/8063599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1037210/" ]
> > System.out.println("\n"+p==q); > > > compiler treat it as ``` System.out.println(("\n"+p)==q); ``` Use ``` System.out.println("\n"+(p==q)); ```
Ah. The statement is wrong. ``` System.out.println("\n"+(p==q)); ``` ~Dheeraj
8,063,599
Let's see the following simplest code snippet in Java. ``` final public class Parsing { public static void main(String[] args) { int p=10; int q=10; System.out.println(p==q); } } ``` --- The above code in Java is fine and displays **true** as both p and q of the same type (int) contain the same value (10). Now, I want to concatenate the argument of println() so that it contains a new line escape sequence **\n** and the result is displayed in a new line as below. ``` System.out.println("\n"+p==q); ``` Which is not allowed at all because the expression **p==q** evaluates a boolean value and a boolean type in Java (not Boolean, a wrapper type) can never be converted to any other types available in Java. Therefore, the above mentioned statement is invalid and it issues a compile-time error. What is the way to get around such situations in Java? --- and surprisingly, the following statements are perfectly valid in Java. ``` System.out.println("\n"+true); System.out.println("\n"+false); ``` and displays true and false respectively. How? Why is the same thing in the statement `System.out.println("\n"+p==q);` not allowed?
2011/11/09
[ "https://Stackoverflow.com/questions/8063599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1037210/" ]
> > Which is not allowed at all because the expression p==q evaluates a boolean value and a boolean type in Java (not Boolean, a wrapper type) can never be converted to any other types available in Java. > > > This is completely wrong. Concatenating anything to a String is implemented by the compiler via `String.valueOf()`, which is also overloaded to accept booleans. The reason for the compiler error is simply that `+` has a higher operator precedence than `==`, so your code is equivalent to ``` System.out.println(("\n"+p)==q); ``` and the error occurs because you have a `String` and an `int` being compared with `==` On the other hand, this works just as intended: ``` System.out.println("\n"+(p==q)); ```
Ah. The statement is wrong. ``` System.out.println("\n"+(p==q)); ``` ~Dheeraj
62,471
What is the most basic valid module one can create in Drupal? It would be useful to have a simple blueprint available, for anyone just getting started with Drupal.
2013/02/16
[ "https://drupal.stackexchange.com/questions/62471", "https://drupal.stackexchange.com", "https://drupal.stackexchange.com/users/1625/" ]
The below instructions allow one to create an empty module, and are helpful for anyone just getting started with module building. If you have trouble getting your first module working, or even showing up in Drupal, ensure you've read all the instructions below. Drupal 8 ======== A `project` must at least have 1. A machine name. 2. A [yaml](http://en.wikipedia.org/wiki/YAML) info file named after the machine name, in the form `module-machine-name.info.yml`, with the following attributes: 1. `name:` A humanly readable name 2. `type:` A type, defined to be `module`. 3. `core:` The major Drupal core version the module is compatible with, `8.x` in this case. 3. An empty module file, named in the form `module-machine-name.module` Drupal looks for modules in these locations, as seen from the web root: 1. `/modules/` 2. `sites/[example.com]/modules` 3. `sites/default/modules` 4. `profiles/[install-profile]/modules` Technically, Drupal also looks for modules in `core/modules`, but one should *never* place modules there, hence it's not on the list above. An example module structure, for a module with the machine name `helloworld`, would look like this: `/modules/helloworld/helloworld.info.yml` `/modules/helloworld/helloworld.module` Notice that both the info and module file are named exactly the same as the machine name, which is important. The module file may be empty, but the info file must contain a few minimum values for Drupal to recognize it as a module. For our helloworld module, that could look like: ``` name: 'Hello world module to demonstrate module building' core: 8.x type: module ``` If you follow the above instructions, you should be able to get a new module listed in your Drupal site, although it won't do anything at this stage. Drupal 7 ======== A module must at least have 1. A machine name. 2. A humanly readable name 3. An info file named after the machine name. 4. An empty module file. To be loadable by Drupal, it must also define which core version it is compatible with. Further, Drupal looks for modules in these locations: 1. `sites/all/modules/` 2. `sites/[example.com]/modules` 3. `sites/default/modules` 4. `profiles/[install-profile]/modules` Technically, Drupal also looks for modules in `modules`, but one should *never* place modules there, hence it's not on the list. An example module structure, for a module with the machine name `helloworld`, would look like this: `sites/all/modules/helloworld/helloworld.info` `sites/all/modules/helloworld/helloworld.module` Notice that both the info and module file are named exactly the same as the machine name, which is important. The module file may be empty, but the info file must contain the humanly readable name of the module, and the core version the module is compatible with. For our helloworld module, that could look like: ``` name = Hello world module to demonstrate module building core = 7.x ``` If you follow the above instructions, you should be able to get a new module listed in your Drupal site, although it won't do anything at this stage.
For Drupal 8, since the question mentioned a blueprint, I figure I'd mention the [Drupal Console](https://www.drupal.org/project/console) project. Once that is installed, one can generate module code (and other things like scaffolding for custom entities, plugins, etc). To generate a basic module from the command line: `drupal generate:module`. One is then prompted through some basic information gathering, and left with the module files at the end: [![Drupal console module generation](https://i.stack.imgur.com/APYFG.png)](https://i.stack.imgur.com/APYFG.png)
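The minimal Drupal 8 layout described above can be scripted as well. A sketch that writes the two required files for the `helloworld` machine name used in the answer, into a throwaway directory rather than a real Drupal web root:

```python
import tempfile
from pathlib import Path

# Throwaway stand-in for the Drupal web root; in a real site this would be
# the /modules/ directory under the web root.
root = Path(tempfile.mkdtemp()) / "modules" / "helloworld"
root.mkdir(parents=True)

# The info file must carry name, core, and type for Drupal 8.
(root / "helloworld.info.yml").write_text(
    "name: 'Hello world module to demonstrate module building'\n"
    "core: 8.x\n"
    "type: module\n"
)
# The .module file may be empty, but it must exist.
(root / "helloworld.module").write_text("")

assert (root / "helloworld.info.yml").exists()
assert (root / "helloworld.module").exists()
```

Both file names match the machine name exactly, which — as the answer stresses — is what lets Drupal discover the module.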
5,450,076
If unary `+`/`-` operators are used to perform conversions as the `Number()` casting function, then why do we need unary operators? What's the special need of these unary operators?
2011/03/27
[ "https://Stackoverflow.com/questions/5450076", "https://Stackoverflow.com", "https://Stackoverflow.com/users/629305/" ]
The Unary `+` operator converts its operand to Number type. The Unary `-` operator converts its operand to Number type, and then negates it. (per the [ECMAScript spec](http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf)) In practice, Unary `-` is used for simply putting negative numbers in normal expressions, e.g.: ``` var x = y * -2.0; ``` That's the unary minus operator at work. The Unary `+` is equivalent to the Number() constructor called as a function, as implied by the spec. I can only speculate on the history, but the unary +/- operators behave similarly in many C-derived languages. I suspect the Number() behavior is the addition to the language here.
The practical side of this is that if you have a function that needs to return a number, you can use ``` const result = +yourFunc() ``` instead of ``` const result = Number(yourFunc()) ``` or ``` const result = -yourFunc() ``` instead of ``` const result = -Number(yourFunc()) ``` It will return NaN the same way that Number() does. IMO it makes things a little harder to read, but could be useful to keep your lines short.
58,013,030
I created a navigation drawer activity using the Navigation Architecture component. I put an icon on the toolbar, and I want a new fragment to open when the button is clicked. I got this **error:** ``` android.view.InflateException: Couldn't resolve menu item onClick handler addShareFragment in class com.example.myapplication.MainActivity ``` Firstly, I created a `menu.xml`: ``` <?xml version="1.0" encoding="utf-8"?> <menu xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto"> <item android:id="@+id/shopping" android:orderInCategory="200" app:showAsAction="always" android:icon="@drawable/ic_menu_camera" android:title="shopping" android:onClick="addShareFragment"/> </menu> ``` So now I can see the icon on my toolbar: [![enter image description here](https://i.stack.imgur.com/NaJST.png)](https://i.stack.imgur.com/NaJST.png) I wrote this function in my main activity: ``` fun addFragmentA(v: View) { var addShareFragment= ShareFragment() // -> First we create the object. var transaction = manager.beginTransaction()//-> Start the fragment transaction. transaction.add(R.id.container, addShareFragment, "FragA") //-> transaction.commit() } ```
2019/09/19
[ "https://Stackoverflow.com/questions/58013030", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11758532/" ]
Instead of catching the brackets, you can replace the spaces that are preceded by `[` or followed by `]` with an empty string: ``` import re my_string = "[ 0.53119281 1.53762345 ]" my_regex_both = r"(?<=\[)\s+|\s+(?=\])" replaced = re.sub(my_regex_both, '', my_string) print(replaced) ``` Output: ``` [0.53119281 1.53762345] ```
Another option you can use aside from MrGeek's answer would be to use a capture group to catch everything between your `my_regex_start` and `my_regex_end` like so: ``` import re string1 = " [ 0.53119281 1.53762345 ]" result = re.sub(r"(\[\s+)(.*?)(\s+\])", r"[\2]", string1) print(result) ``` I have just sandwiched `(.*?)` between your two expressions. This will lazily catch what is between which can be used with `\2` OUTPUT ``` [0.53119281 1.53762345] ```
20,510,600
I just upgraded from OSX Snow Leopard to Mavericks, and now fetchmail fails to invoke procmail. Mutt is also not working, but that is a different story. The following poll (with names changed) has worked for several years: `poll pop.1and1.com protocol: pop3 username: [email protected] password: 123123123 nokeep fetchall mda "/opt/local/bin/procmail -d %T" # pass message to the local MDA` After upgrading to Mavericks, it correctly polls the POP3 server, but fails with the following message: `fetchmail: about to deliver with: /opt/local/bin/procmail -d 'tbaker' #****fetchmail: MDA died of signal 6 not flushed` The newly installed /opt/local/bin/procmail is the super-stable v3.22 of 2001/09/10, and my default $HOME/.procmailrc and system mailbox have not changed. I assume I'm not the only one with this problem so am surprised not to find any threads about this. Tom
2013/12/11
[ "https://Stackoverflow.com/questions/20510600", "https://Stackoverflow.com", "https://Stackoverflow.com/users/689003/" ]
Solution: I found a similar post in another forum from someone who solved the problem by getting procmail from the backup of his old system and installing under Mavericks. I retrieved fetchmail, procmail, and mutt from the Time Machine, installed them. Also installed putmail.py, which had been deleted from /usr/bin. Everything works now! Problem solved. Lessons learned: The Mavericks upgrade hoses Unix. Unix tools compiled under Mavericks may not work correctly. Unix tools from previous versions of OSX may continue to work fine.
Talked to Apple a few days ago. They are aware of the problem and plan on fixing it with their next update. In the meantime I was told to take the account offline and put it back online when you want to fetch mail. This is kind of a pain in the butt but it works and hopefully they will get it fixed soon.
54,642,211
Firstly, I'm totally new to Xcode 10 and Swift 4, and I've searched here but haven't found code that works. What I'm after: On launching app to play a video which is stored locally (called "launchvideo"). On completion of video to display/move to a UIviewcontroller with a storyboard ID of "menu" So far I have my main navigation controller with it's linked view controller. I'm guessing I need a UIview to hold the video to be played in on this page? Is there someone who can help a new guy out? Thanks
2019/02/12
[ "https://Stackoverflow.com/questions/54642211", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11048482/" ]
Firstly change your launch screen storyboard to Main storyboard from project settings in General tab. Create one view controller with following name and write code to implement AVPlayer to play video. ``` import UIKit import AVFoundation class VideoLaunchVC: UIViewController { func setupAVPlayer() { let videoURL = Bundle.main.url(forResource: "Video", withExtension: "mov") // Get video url let avAssets = AVAsset(url: videoURL!) // Create assets to get duration of video. let avPlayer = AVPlayer(url: videoURL!) // Create avPlayer instance let avPlayerLayer = AVPlayerLayer(player: avPlayer) // Create avPlayerLayer instance avPlayerLayer.frame = self.view.bounds // Set bounds of avPlayerLayer self.view.layer.addSublayer(avPlayerLayer) // Add avPlayerLayer to view's layer. avPlayer.play() // Play video // Add observer for every second to check video completed or not, // If video play is completed then redirect to desire view controller. avPlayer.addPeriodicTimeObserver(forInterval: CMTime(seconds: 1, preferredTimescale: 1) , queue: .main) { [weak self] time in if time == avAssets.duration { let vc = UIStoryboard(name: "Main", bundle: nil).instantiateViewController(withIdentifier: "ViewController") as! ViewController self?.navigationController?.pushViewController(vc, animated: true) } } } //------------------------------------------------------------------------------ override func viewDidLoad() { super.viewDidLoad() } //------------------------------------------------------------------------------ override func viewDidAppear(_ animated: Bool) { super.viewDidAppear(animated) self.setupAVPlayer() // Call method to setup AVPlayer & AVPlayerLayer to play video } } ``` **Main.Storyboard:** [![Video Launch VC](https://i.stack.imgur.com/y9SYe.png)](https://i.stack.imgur.com/y9SYe.png) **Project Launch Screen File:** [![Project Launch Screen File](https://i.stack.imgur.com/0Nhx1.png)](https://i.stack.imgur.com/0Nhx1.png) See following video also: <https://youtu.be/dvi0JKEpNTc>
You have to load the video in **launchvideoVC**, along the following lines in **Swift 4 and above**:

```
import AVFoundation
import AVKit

var player: AVPlayer? // keep a strong reference so the player isn't deallocated mid-playback

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    initVideo()
}

func initVideo() {
    do {
        try AVAudioSession.sharedInstance().setCategory(.ambient, mode: .default)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print(error)
    }
    let path = Bundle.main.path(forResource: "yourlocalvideo", ofType: "mp4")
    player = AVPlayer(url: NSURL(fileURLWithPath: path!) as URL)
    // Get notified when the item plays to its natural end.
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(launchvideoVC.itemDidFinishPlaying(_:)),
                                           name: .AVPlayerItemDidPlayToEndTime,
                                           object: player?.currentItem)
    DispatchQueue.main.async {
        let playerLayer = AVPlayerLayer(player: self.player)
        playerLayer.frame = self.view.bounds
        playerLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        playerLayer.zPosition = 1
        self.view.layer.addSublayer(playerLayer)
        self.player?.seek(to: CMTime.zero)
        self.player?.play()
    }
}

@objc func itemDidFinishPlaying(_ notification: Notification?) {
    // move to whatever UIViewController has a storyboard ID of "menu"
}
```

I think it may help you. Happy coding :)
54,642,211
Firstly, I'm totally new to Xcode 10 and Swift 4, and I've searched here but haven't found code that works. What I'm after: On launching app to play a video which is stored locally (called "launchvideo"). On completion of video to display/move to a UIviewcontroller with a storyboard ID of "menu" So far I have my main navigation controller with it's linked view controller. I'm guessing I need a UIview to hold the video to be played in on this page? Is there someone who can help a new guy out? Thanks
2019/02/12
[ "https://Stackoverflow.com/questions/54642211", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11048482/" ]
Firstly, change your launch screen storyboard to the Main storyboard in your project settings, under the General tab. Then create a view controller named as follows and implement an AVPlayer to play the video.

```
import UIKit
import AVFoundation

class VideoLaunchVC: UIViewController {

    func setupAVPlayer() {
        let videoURL = Bundle.main.url(forResource: "Video", withExtension: "mov") // Get the video URL from the bundle
        let avAssets = AVAsset(url: videoURL!)               // Create an asset to read the video's duration
        let avPlayer = AVPlayer(url: videoURL!)              // Create the AVPlayer instance
        let avPlayerLayer = AVPlayerLayer(player: avPlayer)  // Create the AVPlayerLayer instance
        avPlayerLayer.frame = self.view.bounds               // Set the bounds of the player layer
        self.view.layer.addSublayer(avPlayerLayer)           // Add the player layer to the view's layer
        avPlayer.play()                                      // Play the video

        // Observe playback once per second to check whether the video has finished;
        // when it has, redirect to the desired view controller. (>= is used because
        // the duration is rarely an exact whole number of seconds, so == may never fire.)
        avPlayer.addPeriodicTimeObserver(forInterval: CMTime(seconds: 1, preferredTimescale: 1), queue: .main) { [weak self] time in
            if time >= avAssets.duration {
                let vc = UIStoryboard(name: "Main", bundle: nil).instantiateViewController(withIdentifier: "ViewController") as! ViewController
                self?.navigationController?.pushViewController(vc, animated: true)
            }
        }
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        self.setupAVPlayer() // Set up AVPlayer and AVPlayerLayer to play the video
    }
}
```

**Main.Storyboard:**

[![Video Launch VC](https://i.stack.imgur.com/y9SYe.png)](https://i.stack.imgur.com/y9SYe.png)

**Project Launch Screen File:**

[![Project Launch Screen File](https://i.stack.imgur.com/0Nhx1.png)](https://i.stack.imgur.com/0Nhx1.png)

See the following video as well: <https://youtu.be/dvi0JKEpNTc>
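The per-second time observer in the answer above can also be avoided entirely. As a minimal alternative sketch (assuming the same `Video.mov` bundle resource and `"ViewController"` storyboard identifier as the answer's code), one can observe `AVPlayerItemDidPlayToEndTime`, which fires exactly once when the item reaches its natural end:

```swift
import UIKit
import AVFoundation

class VideoLaunchVC: UIViewController {
    var avPlayer: AVPlayer? // strong reference so the player survives past viewDidAppear

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        guard let videoURL = Bundle.main.url(forResource: "Video", withExtension: "mov") else { return }
        let player = AVPlayer(url: videoURL)
        let layer = AVPlayerLayer(player: player)
        layer.frame = view.bounds
        view.layer.addSublayer(layer)
        // Fires once, when playback reaches the end of the item.
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(playerDidFinish),
                                               name: .AVPlayerItemDidPlayToEndTime,
                                               object: player.currentItem)
        player.play()
        avPlayer = player
    }

    @objc private func playerDidFinish() {
        let vc = UIStoryboard(name: "Main", bundle: nil)
            .instantiateViewController(withIdentifier: "ViewController")
        navigationController?.pushViewController(vc, animated: true)
    }
}
```

This avoids both the polling and the risk of the exact-equality duration check never matching.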
First, make a new view controller and change your launch screen storyboard to the Main storyboard in your project settings (General tab).

[![enter image description here](https://i.stack.imgur.com/oAse7.png)](https://i.stack.imgur.com/oAse7.png)

Also add your video file to the project.

[![enter image description here](https://i.stack.imgur.com/arnDb.png)](https://i.stack.imgur.com/arnDb.png)

Then just add the code below to your launch screen view controller:

```
import UIKit
import MediaPlayer
import AVKit

class LaunchViewController: UIViewController {

    fileprivate var rootViewController: UIViewController? = nil
    var player: AVPlayer?
    var playerController = AVPlayerViewController()

    override func viewDidLoad() {
        super.viewDidLoad()
        showSplashViewController()
    }

    func playVideo() {
        // Load the video from the app bundle (NSURL(string:) would not yield a playable file URL)
        guard let videoURL = Bundle.main.url(forResource: "videoplayback", withExtension: "mp4") else { return }
        player = AVPlayer(url: videoURL)
        playerController.player = player
        self.addChildViewController(playerController)
        // Match the player view's frame to this view, then add it as a subview
        playerController.view.frame = self.view.frame
        self.view.addSubview(playerController.view)
        player?.play()
    }

    private func loadVideo() {
        // this line is important to prevent background music from stopping
        do {
            try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryAmbient)
        } catch {
        }
        let path = Bundle.main.path(forResource: "videoplayback", ofType: "mp4")
        let filePathURL = NSURL.fileURL(withPath: path!)
        let player = AVPlayer(url: filePathURL)
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.frame = self.view.frame
        playerLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        playerLayer.zPosition = -1
        self.view.layer.addSublayer(playerLayer)
        player.seek(to: kCMTimeZero)
        player.play()
    }

    func showSplashViewControllerNoPing() {
        if rootViewController is LaunchViewController {
            return
        }
        loadVideo()
    }

    /// Simulates an API handshake success and transitions to the main screen
    func showSplashViewController() {
        showSplashViewControllerNoPing()
        delay(6.00) {
            self.showMenuNavigationViewController()
        }
    }

    public func delay(_ delay: Double, closure: @escaping () -> ()) {
        DispatchQueue.main.asyncAfter(
            deadline: DispatchTime.now() + Double(Int64(delay * Double(NSEC_PER_SEC))) / Double(NSEC_PER_SEC),
            execute: closure)
    }

    /// Displays the Home view controller
    func showMenuNavigationViewController() {
        guard !(rootViewController is Home) else { return }
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        let nav = storyboard.instantiateViewController(withIdentifier: "homeTab") as! Home
        nav.willMove(toParentViewController: self)
        addChildViewController(nav)
        if let rootViewController = self.rootViewController {
            self.rootViewController = nav
            rootViewController.willMove(toParentViewController: nil)
            transition(from: rootViewController, to: nav, duration: 0.55,
                       options: [.transitionCrossDissolve, .curveEaseOut],
                       animations: { () -> Void in },
                       completion: { _ in
                           nav.didMove(toParentViewController: self)
                           rootViewController.removeFromParentViewController()
                           rootViewController.didMove(toParentViewController: nil)
                       })
        } else {
            rootViewController = nav
            view.addSubview(nav.view)
            nav.didMove(toParentViewController: self)
        }
    }

    override var prefersStatusBarHidden: Bool {
        switch rootViewController {
        case is LaunchViewController:
            return true
        case is Home:
            return false
        default:
            return false
        }
    }
}
```
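The hardcoded `delay(6.00)` in the answer above only lines up if the clip is exactly six seconds long. A hedged sketch (assuming the same `videoplayback.mp4` bundle resource) reads the asset's actual duration and schedules the transition from that instead:

```swift
import AVFoundation

// Compute the clip length from the asset rather than hardcoding it.
// Returns nil if the resource is missing from the bundle.
func videoDuration(forResource name: String, ofType ext: String) -> Double? {
    guard let path = Bundle.main.path(forResource: name, ofType: ext) else { return nil }
    let asset = AVAsset(url: URL(fileURLWithPath: path))
    return CMTimeGetSeconds(asset.duration)
}

// Possible usage inside showSplashViewController(), falling back to 6 s:
// let seconds = videoDuration(forResource: "videoplayback", ofType: "mp4") ?? 6.0
// delay(seconds) { self.showMenuNavigationViewController() }
```

Note that reading `asset.duration` synchronously is fine for a small bundled file in this Swift 4-era code, though newer AVFoundation APIs prefer asynchronous loading.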
1,419
As a follow-on question to the 2012 [Uh oh. We have a [beginner] tag!](https://robotics.meta.stackexchange.com/questions/88/uh-oh-we-have-a-beginner-tag), I’ve noticed that a couple of the meta tags discussed for deletion are still being used. Was this intentional, or have they crept back into use somehow? Examples (see below): `beginner` and `research`. [![enter image description here](https://i.stack.imgur.com/b3fJ4.jpg)](https://i.stack.imgur.com/b3fJ4.jpg) [![enter image description here](https://i.stack.imgur.com/DN9au.jpg)](https://i.stack.imgur.com/DN9au.jpg)
2021/10/18
[ "https://robotics.meta.stackexchange.com/questions/1419", "https://robotics.meta.stackexchange.com", "https://robotics.meta.stackexchange.com/users/27817/" ]
This is a complex question that doesn't have a great answer - while Ben's answer does give some good explanations for how the system has worked in the past and it's worth considering and following that guidance - we've changed how and when we take sites out of beta in recent years and that's impacted many of our older sites. For the longest time, the guidance was that a site had to have various metrics (the stats in Area 51 Ben cites), with the most important being 10 questions per day. We set this restriction because we thought about sites leaving beta as ones that should be very active rather than relatively slow, at least in part due to the fact that leaving beta meant a site would have the higher privilege levels of graduated sites. We were also limited in how many sites could leave beta at any given time by the fact that "Graduation" was tied to having a site theme created for the site by our design team, which was very difficult for us to keep up with. So back in 2015 we made a change - [Graduation, site closure, and a clearer outlook on the health of SE sites](https://meta.stackexchange.com/questions/257614/graduation-site-closure-and-a-clearer-outlook-on-the-health-of-se-sites) While we didn't remove the beta label from sites, we recognized that our practice of leaving sites in beta was having an impact and so we went to the effort of trying to help sites understand that being small was OK! That we wouldn't be closing a site down for being low-activity unless the moderation of the site started to fail. Shortly after that discussion I (as a user, not staff) [proposed the idea](https://meta.stackexchange.com/questions/257317/can-beta-sites-slated-for-graduation-get-full-site-abilities-without-site-design) of separating site designs from graduation so that more sites could leave beta sooner without having to wait for artwork. 
This was [discussed](https://meta.stackexchange.com/questions/260754/feedback-requested-design-independent-graduation?rq=1) and then made into [policy](https://meta.stackexchange.com/questions/263905/design-independent-graduation-is-on-for-early-september?rq=1), so back in September 2015, we kicked a handful of sites out of beta without site designs. In 2017 (still as a user) I asked a new question - [Let's break up with "Graduation" and remove a bunch of "Beta" labels](https://meta.stackexchange.com/questions/303727/lets-break-up-with-graduation-and-remove-a-bunch-of-beta-labels) - essentially, proposing the idea that the CMs consider changing their way of determining when a site should leave beta. After getting hired, I worked with some coworkers to consider a new site lifecycle that included removing sites from beta sooner, without considering their stats on Area 51 at all. While that project was cancelled before it was completed, we [removed the beta label from 29 sites](https://meta.stackexchange.com/questions/331708/congratulations-to-our-29-oldest-beta-sites-theyre-now-no-longer-beta) two years ago because we felt that seven years was too long to be in beta. Now, we're on the cusp of making such a change again. In the next six months or so, I'm expecting that we'll be able to remove the beta label from most sites across the network because they are - as Robotics is - not really a "beta" site and we need to change our practices about keeping sites in a perpetual beta phase. We'll have more on this in the coming months but I want you to know that we, as a team, don't see beta status as a detriment to any site - though we know many of the people using the sites do - and that's something we want to change. We don't really treat beta sites any differently than non-beta sites. In the end, there's nothing that y'all have done or failed to do that's kept you in beta - it's on us and we're working to address it.
There are some good links in the comments, but to answer your question: > > How can newcomers help? > > > The site stats on [Area51](https://area51.stackexchange.com/proposals/40020/robotics) give a good indication of what needs to happen to graduate to a non-Beta site. So the best thing to do to help the site is to get engaged. Ask questions. Answer questions. Even if the question already has 1 or 2 answers, if you have something to add go for it. Encourage your robotics colleagues to join and participate. Spread the word and help build a community. Find old questions that don't have any answers and answer them. Close questions that don't belong here. You get the idea. :)
1,419
As a follow-on question to the 2012 [Uh oh. We have a [beginner] tag!](https://robotics.meta.stackexchange.com/questions/88/uh-oh-we-have-a-beginner-tag), I’ve noticed that a couple of the meta tags discussed for deletion are still being used. Was this intentional, or have they crept back into use somehow? Examples (see below): `beginner` and `research`. [![enter image description here](https://i.stack.imgur.com/b3fJ4.jpg)](https://i.stack.imgur.com/b3fJ4.jpg) [![enter image description here](https://i.stack.imgur.com/DN9au.jpg)](https://i.stack.imgur.com/DN9au.jpg)
2021/10/18
[ "https://robotics.meta.stackexchange.com/questions/1419", "https://robotics.meta.stackexchange.com", "https://robotics.meta.stackexchange.com/users/27817/" ]
This is a complex question that doesn't have a great answer - while Ben's answer does give some good explanations for how the system has worked in the past and it's worth considering and following that guidance - we've changed how and when we take sites out of beta in recent years and that's impacted many of our older sites. For the longest time, the guidance was that a site had to have various metrics (the stats in Area 51 Ben cites), with the most important being 10 questions per day. We set this restriction because we thought about sites leaving beta as ones that should be very active rather than relatively slow, at least in part due to the fact that leaving beta meant a site would have the higher privilege levels of graduated sites. We were also limited in how many sites could leave beta at any given time by the fact that "Graduation" was tied to having a site theme created for the site by our design team, which was very difficult for us to keep up with. So back in 2015 we made a change - [Graduation, site closure, and a clearer outlook on the health of SE sites](https://meta.stackexchange.com/questions/257614/graduation-site-closure-and-a-clearer-outlook-on-the-health-of-se-sites) While we didn't remove the beta label from sites, we recognized that our practice of leaving sites in beta was having an impact and so we went to the effort of trying to help sites understand that being small was OK! That we wouldn't be closing a site down for being low-activity unless the moderation of the site started to fail. Shortly after that discussion I (as a user, not staff) [proposed the idea](https://meta.stackexchange.com/questions/257317/can-beta-sites-slated-for-graduation-get-full-site-abilities-without-site-design) of separating site designs from graduation so that more sites could leave beta sooner without having to wait for artwork. 
This was [discussed](https://meta.stackexchange.com/questions/260754/feedback-requested-design-independent-graduation?rq=1) and then made into [policy](https://meta.stackexchange.com/questions/263905/design-independent-graduation-is-on-for-early-september?rq=1), so back in September 2015, we kicked a handful of sites out of beta without site designs. In 2017 (still as a user) I asked a new question - [Let's break up with "Graduation" and remove a bunch of "Beta" labels](https://meta.stackexchange.com/questions/303727/lets-break-up-with-graduation-and-remove-a-bunch-of-beta-labels) - essentially, proposing the idea that the CMs consider changing their way of determining when a site should leave beta. After getting hired, I worked with some coworkers to consider a new site lifecycle that included removing sites from beta sooner, without considering their stats on Area 51 at all. While that project was cancelled before it was completed, we [removed the beta label from 29 sites](https://meta.stackexchange.com/questions/331708/congratulations-to-our-29-oldest-beta-sites-theyre-now-no-longer-beta) two years ago because we felt that seven years was too long to be in beta. Now, we're on the cusp of making such a change again. In the next six months or so, I'm expecting that we'll be able to remove the beta label from most sites across the network because they are - as Robotics is - not really a "beta" site and we need to change our practices about keeping sites in a perpetual beta phase. We'll have more on this in the coming months but I want you to know that we, as a team, don't see beta status as a detriment to any site - though we know many of the people using the sites do - and that's something we want to change. We don't really treat beta sites any differently than non-beta sites. In the end, there's nothing that y'all have done or failed to do that's kept you in beta - it's on us and we're working to address it.
I don't have much to add to [Ben](https://robotics.meta.stackexchange.com/a/1417/37) and [Catija](https://robotics.meta.stackexchange.com/a/1418/37)'s answers, and things haven't changed substantially since [my post](https://robotics.meta.stackexchange.com/a/1355/37) in 2017. We still don't have enough users with a high enough reputation to move to graduated-site reputation requirements. We now have two 10k users (one of whom is a former moderator), but we still only have 8 (non-♦) users with 3k reputation, who would be able to cast ordinary close votes. Full graduation would require us to be a much bigger site, with many more users, votes and much more activity. We have already seen some sites offered a partial graduation path, though, and while we missed out on the first round of beta tag removals, I'm glad to hear that a solution for other sites is in the works.