qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
70,417,075 | I am creating a war game and I have this dictionary of weapons and the damage they do to other types of troops.
Also, I have a list which has keys from the dictionary stored in it.
```
weapon_specs = {
'rifle': {'air': 1, 'ground': 2, 'human': 5},
'pistol': {'air': 0, 'ground': 1, 'human': 3},
'rpg': {'air': 5, 'ground': 5, 'human': 3},
'warhead': {'air': 10, 'ground': 10, 'human': 10},
'machine gun': {'air': 3, 'ground': 3, 'human': 10}
}
inv = ['rifle', 'machine gun', 'pistol']
```
I need to get this output:
```
{'air': 4, 'ground': 6, 'human': 18}
```
I tried this:
```
for i in weapon_specs:
for k in inv:
if i == k:
list.update(weapon_specs[k])
``` | 2021/12/20 | [
"https://Stackoverflow.com/questions/70417075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17411307/"
] | You can use `collections.Counter`:
```
from collections import Counter
count = Counter()
for counter in [weapon_specs[weapon] for weapon in inv]:
count += counter
out = dict(count)
```
If you don't want to use the `collections` library, you can also do:
```
out = {}
for weapon in inv:
for k,v in weapon_specs[weapon].items():
out[k] = out.get(k, 0) + v
```
Output:
```
{'air': 4, 'ground': 6, 'human': 18}
``` | First take a subset of your dictionary according to your list.
Then use `Counter`
```
from collections import Counter
subset = {k: weapon_specs[k] for k in inv}
out = dict(sum((Counter(d) for d in subset.values()), Counter()))
```
**Result**
```
{'air': 4, 'ground': 6, 'human': 18}
``` |
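As a further illustration (not from the answers above), the same aggregation can be written as a single dict comprehension, assuming every weapon's spec uses the same damage categories:

```python
weapon_specs = {
    'rifle': {'air': 1, 'ground': 2, 'human': 5},
    'pistol': {'air': 0, 'ground': 1, 'human': 3},
    'rpg': {'air': 5, 'ground': 5, 'human': 3},
    'warhead': {'air': 10, 'ground': 10, 'human': 10},
    'machine gun': {'air': 3, 'ground': 3, 'human': 10},
}
inv = ['rifle', 'machine gun', 'pistol']

# Take the category names from any one spec, then, for each category,
# sum that category across the weapons in inv.
categories = next(iter(weapon_specs.values()))
out = {cat: sum(weapon_specs[w][cat] for w in inv) for cat in categories}
print(out)  # → {'air': 4, 'ground': 6, 'human': 18}
```

This avoids the intermediate `Counter` objects, at the cost of iterating `inv` once per category.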
70,417,075 | I was creating a war game and I have this dictionary of weapons and the damage they do on another type of troops.
also, I have a list which has the keys from the dictionary stored in it.
```
weapon_specs = {
'rifle': {'air': 1, 'ground': 2, 'human': 5},
'pistol': {'air': 0, 'ground': 1, 'human': 3},
'rpg': {'air': 5, 'ground': 5, 'human': 3},
'warhead': {'air': 10, 'ground': 10, 'human': 10},
'machine gun': {'air': 3, 'ground': 3, 'human': 10}
}
inv = ['rifle', 'machine gun', 'pistol']
```
I need to get this output:
```
{'air': 4, 'ground': 6, 'human': 18}
```
I tried this:
```
for i in weapon_specs:
for k in inv:
if i == k:
list.update(weapon_specs[k])
``` | 2021/12/20 | [
"https://Stackoverflow.com/questions/70417075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17411307/"
] | You can use `collections.Counter`:
```
from collections import Counter
count = Counter()
for counter in [weapon_specs[weapon] for weapon in inv]:
count += counter
out = dict(count)
```
If you don't want to use the `collections` library, you can also do:
```
out = {}
for weapon in inv:
for k,v in weapon_specs[weapon].items():
out[k] = out.get(k, 0) + v
```
Output:
```
{'air': 4, 'ground': 6, 'human': 18}
``` | I used `Counter` from `collections` to add up the dictionary values for the weapons listed in `inv`:
```
from collections import Counter
weapon_specs = {'rifle': {'air': 1, 'ground': 2, 'human': 5},'pistol': {'air': 0, 'ground': 1, 'human': 3},'rpg': {'air': 5, 'ground': 5, 'human': 3},'warhead': {'air': 10, 'ground': 10, 'human': 10},'machine gun': {'air': 3, 'ground': 3, 'human': 10}}
inv = ['rifle', 'machine gun', 'pistol']
summation = Counter()
for category, values in weapon_specs.items():
if category in inv:
summation.update(values)
summation = dict(summation)
print(summation)
```
output:
```
{'air': 4, 'ground': 6, 'human': 18}
``` |
70,417,075 | I am creating a war game and I have this dictionary of weapons and the damage they do to other types of troops.
Also, I have a list which has keys from the dictionary stored in it.
```
weapon_specs = {
'rifle': {'air': 1, 'ground': 2, 'human': 5},
'pistol': {'air': 0, 'ground': 1, 'human': 3},
'rpg': {'air': 5, 'ground': 5, 'human': 3},
'warhead': {'air': 10, 'ground': 10, 'human': 10},
'machine gun': {'air': 3, 'ground': 3, 'human': 10}
}
inv = ['rifle', 'machine gun', 'pistol']
```
I need to get this output:
```
{'air': 4, 'ground': 6, 'human': 18}
```
I tried this:
```
for i in weapon_specs:
for k in inv:
if i == k:
list.update(weapon_specs[k])
``` | 2021/12/20 | [
"https://Stackoverflow.com/questions/70417075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17411307/"
] | Without having to import anything, you can just combine the dictionaries with two loops.
```
out = {'air': 0, 'ground': 0, 'human': 0}
for weapon in inv:
for k, v in weapon_specs[weapon].items():
out[k] += v
print(out)
```
Output:
```
{'air': 4, 'ground': 6, 'human': 18}
``` | First take a subset of your dictionary according to your list.
Then use `Counter`
```
from collections import Counter
subset = {k: weapon_specs[k] for k in inv}
out = dict(sum((Counter(d) for d in subset.values()), Counter()))
```
**Result**
```
{'air': 4, 'ground': 6, 'human': 18}
``` |
70,417,075 | I am creating a war game and I have this dictionary of weapons and the damage they do to other types of troops.
Also, I have a list which has keys from the dictionary stored in it.
```
weapon_specs = {
'rifle': {'air': 1, 'ground': 2, 'human': 5},
'pistol': {'air': 0, 'ground': 1, 'human': 3},
'rpg': {'air': 5, 'ground': 5, 'human': 3},
'warhead': {'air': 10, 'ground': 10, 'human': 10},
'machine gun': {'air': 3, 'ground': 3, 'human': 10}
}
inv = ['rifle', 'machine gun', 'pistol']
```
I need to get this output:
```
{'air': 4, 'ground': 6, 'human': 18}
```
I tried this:
```
for i in weapon_specs:
for k in inv:
if i == k:
list.update(weapon_specs[k])
``` | 2021/12/20 | [
"https://Stackoverflow.com/questions/70417075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17411307/"
] | I used `Counter` from `collections` to add up the dictionary values for the weapons listed in `inv`:
```
from collections import Counter
weapon_specs = {'rifle': {'air': 1, 'ground': 2, 'human': 5},'pistol': {'air': 0, 'ground': 1, 'human': 3},'rpg': {'air': 5, 'ground': 5, 'human': 3},'warhead': {'air': 10, 'ground': 10, 'human': 10},'machine gun': {'air': 3, 'ground': 3, 'human': 10}}
inv = ['rifle', 'machine gun', 'pistol']
summation = Counter()
for category, values in weapon_specs.items():
if category in inv:
summation.update(values)
summation = dict(summation)
print(summation)
```
output:
```
{'air': 4, 'ground': 6, 'human': 18}
``` | First take a subset of your dictionary according to your list.
Then use `Counter`
```
from collections import Counter
subset = {k: weapon_specs[k] for k in inv}
out = dict(sum((Counter(d) for d in subset.values()), Counter()))
```
**Result**
```
{'air': 4, 'ground': 6, 'human': 18}
``` |
13,990,574 | I have a menu formed from an unordered list with nested lists set to visibility: hidden, then shown on hover.
The menu is dynamic, so I can't predict which item could be close to the window's edge; when a dropdown is invoked near the edge, a scrollbar appears as it overflows the bounds of the window. What I need is to be able to add a class if this happens.
Any help would be gratefully received.
Edit: just done a brief fiddle of this issue jsfiddle.net/TP8v9 | 2012/12/21 | [
"https://Stackoverflow.com/questions/13990574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/905792/"
] | Use:
```
if ($(document).width() > $(window).width()) {
// Overflowing
}
```
[Example JS Fiddle](http://jsfiddle.net/8PJTG/) | When the mouseover event is triggered, you can check the width and position of the dropdown that is about to appear and see whether it exceeds the width of the window. |
13,990,574 | I have a menu formed from an unordered list with nested lists set to visibility: hidden, then shown on hover.
The menu is dynamic, so I can't predict which item could be close to the window's edge; when a dropdown is invoked near the edge, a scrollbar appears as it overflows the bounds of the window. What I need is to be able to add a class if this happens.
Any help would be gratefully received.
Edit: just done a brief fiddle of this issue jsfiddle.net/TP8v9 | 2012/12/21 | [
"https://Stackoverflow.com/questions/13990574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/905792/"
] | Use:
```
if ($(document).width() > $(window).width()) {
// Overflowing
}
```
[Example JS Fiddle](http://jsfiddle.net/8PJTG/) | Try something like this.
```
$(document).ready(function() {
  if ($(document).width() > $(window).width()) {
    $('#some_element').addClass('some_class');
/* Assuming you want to add some_class to some_element
if the width document width is more than window's
width
*/
}
});
``` |
378,670 | I need to install and configure a VNC server on Manjaro. I want to configure it so that after a reboot I can still connect without starting the service manually, and it should connect to display 0 rather than a new session. | 2017/07/15 | [
"https://unix.stackexchange.com/questions/378670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171780/"
] | Install it via the command line:
`yaourt -S realvnc-vnc-server`
Run it and configure (login) with:
`systemctl start vncserver-x11-serviced.service`
If you have problems with no icon being displayed in the system tray (as I did), use:
`vncserver-x11-serviced-fg`
Since Manjaro uses systemd, you can use this to start the daemon automatically at system boot time:
`systemctl enable vncserver-x11-serviced.service`
Note: This is all assuming you have a basic (HOME) subscription.
Please read the [Realvnc docs](https://www.realvnc.com/en/connect/docs/man/vncserver-x11-serviced.html) for more information. | You should get used to looking at the Arch Wiki. I'm going to guess that English isn't your first language, and that is OK because the Wiki covers most languages and you can just switch it.
This is the site you need,
[This is for TigerVNC](https://wiki.archlinux.org/index.php/TigerVNC)
Now you should know that VNC is not safe if you are using it with other people on your local network or if you want to use it over the public internet.
You should take a look at X2Go; it is way nicer and easier to set up, and it also has encryption and passwords. [This is the X2Go Wiki page](https://wiki.archlinux.org/index.php/X2Go) |
6,955,250 | See, I am using malloc & free multiple times.
So at the end of the application I want to make sure there is no memory leak:
all mallocs are freed.
Is there any method or function to check that?
Another question:
Most OSes reclaim memory only when the application exits, but if the application is supposed to run for a long time and it continuously leaks memory, then at some point there will be no unallocated memory left and the application will crash or the system will reboot. Is that true? | 2011/08/05 | [
"https://Stackoverflow.com/questions/6955250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775964/"
] | At the end of a process the OS reclaims used memory (so it cannot "leak").
>
> so at the end of application i want to make sure there is no memory
> leakage
>
>
>
### EDIT
James raised an interesting point in the comments: "*Any decent programmer should not rely on the OS to do his job*". I must underline I was thinking of the following scenario:
```
/* mallocs */
/* frees <- useless */
exit(0);
``` | You could wrap `malloc()` and `free()`, and count some basic statistics by yourself
```
#include <stdlib.h>

static int counter = 0;

void* malloc_stat(size_t s) {
    counter++;
    return malloc(s);
}

void free_stat(void* p) {
    counter--;
    free(p);
}

/* Define the macros *after* the wrappers; otherwise the malloc/free
   calls inside the wrappers would expand back into the wrappers. */
#define malloc(x) malloc_stat(x)
#define free(x) free_stat(x)
``` |
6,955,250 | See, I am using malloc & free multiple times.
So at the end of the application I want to make sure there is no memory leak:
all mallocs are freed.
Is there any method or function to check that?
Another question:
Most OSes reclaim memory only when the application exits, but if the application is supposed to run for a long time and it continuously leaks memory, then at some point there will be no unallocated memory left and the application will crash or the system will reboot. Is that true? | 2011/08/05 | [
"https://Stackoverflow.com/questions/6955250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775964/"
] | It is not guaranteed that the OS will reclaim your memory. A desktop or a server OS usually will; an embedded OS might not.
There are several debugging malloc libraries out there; google for `debug malloc` and use one that suits you. GNU `libc` has a debugging `malloc` built in. | You could wrap `malloc()` and `free()`, and count some basic statistics by yourself
```
#include <stdlib.h>

static int counter = 0;

void* malloc_stat(size_t s) {
    counter++;
    return malloc(s);
}

void free_stat(void* p) {
    counter--;
    free(p);
}

/* Define the macros *after* the wrappers; otherwise the malloc/free
   calls inside the wrappers would expand back into the wrappers. */
#define malloc(x) malloc_stat(x)
#define free(x) free_stat(x)
``` |
6,955,250 | See, I am using malloc & free multiple times.
So at the end of the application I want to make sure there is no memory leak:
all mallocs are freed.
Is there any method or function to check that?
Another question:
Most OSes reclaim memory only when the application exits, but if the application is supposed to run for a long time and it continuously leaks memory, then at some point there will be no unallocated memory left and the application will crash or the system will reboot. Is that true? | 2011/08/05 | [
"https://Stackoverflow.com/questions/6955250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775964/"
First, you should compile your code with debugging support (in gcc, it is `-g`). Note that this isn't strictly necessary, but it enables the tool to provide you with line numbers, among other advantages.
Then you should run your code under a tool like Valgrind, or through a debugger like gdb.
They should tell you the lines where the memory was allocated but not freed.
Valgrind is a very powerful tool for debugging. You'd need to use the `--tool=memcheck` option (which I think is enabled by default, but it doesn't hurt to know). | You could wrap `malloc()` and `free()`, and count some basic statistics by yourself
```
#include <stdlib.h>

static int counter = 0;

void* malloc_stat(size_t s) {
    counter++;
    return malloc(s);
}

void free_stat(void* p) {
    counter--;
    free(p);
}

/* Define the macros *after* the wrappers; otherwise the malloc/free
   calls inside the wrappers would expand back into the wrappers. */
#define malloc(x) malloc_stat(x)
#define free(x) free_stat(x)
``` |
46,109,766 | I have a UIImageView within another UIView. The UIImageView is slightly taller than the UIView. Though, I want the UIImageView to only be viewable within the UIView. Any part of the UIImageView outside of the UIView should not be seen.
I'm using a UITableView and inside the UITableViewCell will be that UIView that gives it a "card" look. As you can see below with the screenshots, the colors (red, blue, purple, green, and orange) is the background color for the cells. The UIView is the content inside the cell.
From looking around, it seemed I had to set `clipsToBounds` to `true` on the UIView. I did that, and the sample view in the storyboard seemed to work.
[](https://i.stack.imgur.com/tKfvd.png)
The images inside the "What is truth?" UIView and "About Us" UIView crop the image correctly. Now, when I run this in a simulator, it doesn't crop the images.
[](https://i.stack.imgur.com/mhR1l.png)
As you can see the images bleed above and below the UIView.
FYI the images have an opacity of `0.35`.
How do I get the images to properly crop to the UIView border? | 2017/09/08 | [
"https://Stackoverflow.com/questions/46109766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1363270/"
] | ```
let htmlString:String = webView.stringByEvaluatingJavaScript(from: "document.getElementById('link').innerHTML")!
``` | The **`NSDataDetector`** class can match dates, addresses, links, phone numbers, and transit information. *[Reference](https://developer.apple.com/documentation/foundation/nsdatadetector)*.
```
import Foundation

let htmlString = "<p><a href=\"https://www.youtube.com/watch?v=i2yscjyIBsk\">https://www.youtube.com/watch?v=i2yscjyIBsk</a></p>\n"
// Detect links only.
if let detector = try? NSDataDetector(types: NSTextCheckingResult.CheckingType.link.rawValue) {
    let range = NSRange(location: 0, length: htmlString.utf16.count)
    for match in detector.matches(in: htmlString, options: [], range: range) {
        if let url = match.url {
            print(url)
        }
    }
}
``` |
198,632 | First of all my PC config:
Host: **Windows 7 (32-bit)**
Guest: **Windows XP SP3 as XP Mode.**
Integration Features are disabled (I don't think they have anything to do with networking).
Network Connections on Host:
1. Local Area Connection (Realtek...) - **unplugged** (as no other PC is connected).
2. Nokia 2730 classic USB Modem - I use this to connect to the internet.
Now, how can I connect the guest to the host as if they were on a LAN?
Please give specific steps as I am using Windows XP Mode/Virtual machine for the first time. | 2010/10/12 | [
"https://superuser.com/questions/198632",
"https://superuser.com",
"https://superuser.com/users/51044/"
] | To establish a Guest to Host connection, follow the steps in [this blog post](http://blog.hmobius.com/post/2009/12/24/How-to-get-Windows-7-XP-Mode-Apps-to-talk-to-SQL-2008-on-your-Windows-7-Host-OS.aspx#comment):
>
> In brief
>
>
> * Install the Loopback Network Adapter on your Windows 7 machine
> * Set the
> Loopback Adapter to have a static IP address which the VPC network
> adapter will use as its internet gateway
> * Configure the XP Mode Virtual
> Machine to use the loopback adapter as its gateway.
> * You should now be
> able to ping your host machine from your guest OS.
> * Open SQL Server connections via TCP/IP on host machine [updated 24/12]
> * Open incoming
> connections on port 1433/1434 for windows firewall
> * You should now be able to telnet to port 1433 on the host OS from XP mode
>
>
>
If those steps do not work for you, then try using different IP addresses (I used `10.0.0.199` (Host), `10.0.0.200` (Guest), and of course `255.255.255.0` (subnet mask). You can confirm by pinging the Host IP (`10.0.0.199`) from within XP Mode. Make sure on Windows 7 you create Windows Firewall exceptions for the ports you need.
In addition to those steps, you may also add a second network adapter to your XP Mode through the Virtual PC settings, selecting the network adapter on your host that has the internet connection. In the XP Mode IPv4 properties for the second network adapter, use all automatic settings.
You will now have a network path to your Windows 7 host and internet access. | In the VM network setup, you need to make sure the connection is set up as a `Bridge`. This will allow the VM's network connection to go through the host's connection and act as if it is on the network. |
143,146 | If all programming languages more or less compile down to the same machine code, have there been attempts at, or is there a "field" around, the concept of even more modular programming languages, where a language can be built up based on exactly what is needed? If arrays aren't handled, the language to simplify writing a program that is then compiled into machine code won't need to know what an array is. The same goes for loops of different sorts; many languages have both *for* and *while*.
I'm not a computer scientist. As a novice, there are so many different "languages", but they're all mostly the same thing, and it seems like there might be a better way to organize the benefits each language gives relative to how much they actually overlap. I just watched Bjarne Stroustrup explain that C++ made managing complex data types easier, and that C should be seen as a subset of C++, but it seems like another way of doing it is to have an even more modular approach, treating the more complex abilities C++ has as modular add-ons, programs that can be imported, since this is what is being done anyway, except in a "one-size-fits-all" way. Basically, not to have a "one-size-fits-all" programming language, but to construct what is needed depending on need.
If anyone understands what I'm fishing for, feel free to reply. It also seems such a language would make it easier for noobs to understand "programming languages", even if they could also have the choice to import a more one-size-fits-all configuration. | 2021/08/14 | [
"https://cs.stackexchange.com/questions/143146",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/141449/"
] | The syntactic features of computer languages only exist for the benefit of human understanding; they are abstractions that are helpful for us to conceptualize what is happening in the programs. The programs themselves are agnostic to these syntactic features and the fact that the source language uses them is not necessarily reflected in the final code. The resultant machine code doesn't care whether you used a for-loop or a while-loop; the same CPU operations may be used for both. And what is an array? For us it is a complex data-structure with many rules that we must follow in order to make it work correctly, but in the computer it is just a series of adjacent data.
To be able to add or remove syntactic features on the fly would make programming languages much more complex and allow more opportunity for ambiguity. Since the resultant programs aren't really changed by the presence of these features, it probably isn't beneficial to have to deal with the extra complexity of allowing syntactic features to be added or removed.
Some features that are not syntactic do have a big effect on the performance of the resultant programs. For example, the memory allocation and cleanup system used might differ from language to language. Some other features include runtime safety checks, like bounds checks on arrays. A notable feature that has a significant effect on the performance of resultant code is whether the language compiles to native machine code or must be interpreted by another program in order to run. | Not that I know of. There's no clear value to doing that, that I can see.
Once you have a compiler that can handle all language constructs, there is no need to construct new compilers that handle only a subset of constructs. There is value to programmers of having a single language to learn, rather than many variants. And most large programs are likely to use most language constructs somewhere or other.
There is a lot of work on constructing domain-specific languages (DSLs), including using functional languages like Haskell to help with that. |
143,146 | If all programming languages more or less compile down to the same machine code, have there been attempts at, or is there a "field" around, the concept of even more modular programming languages, where a language can be built up based on exactly what is needed? If arrays aren't handled, the language to simplify writing a program that is then compiled into machine code won't need to know what an array is. The same goes for loops of different sorts; many languages have both *for* and *while*.
I'm not a computer scientist. As a novice, there are so many different "languages", but they're all mostly the same thing, and it seems like there might be a better way to organize the benefits each language gives relative to how much they actually overlap. I just watched Bjarne Stroustrup explain that C++ made managing complex data types easier, and that C should be seen as a subset of C++, but it seems like another way of doing it is to have an even more modular approach, treating the more complex abilities C++ has as modular add-ons, programs that can be imported, since this is what is being done anyway, except in a "one-size-fits-all" way. Basically, not to have a "one-size-fits-all" programming language, but to construct what is needed depending on need.
If anyone understands what I'm fishing for, feel free to reply. It also seems such a language would make it easier for noobs to understand "programming languages", even if they could also have the choice to import a more one-size-fits-all configuration. | 2021/08/14 | [
"https://cs.stackexchange.com/questions/143146",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/141449/"
] | Yes, there are minimalistic programming languages where many of the things considered "primitives" in other programming languages are built up from even simpler things.
For example, most programming languages have if-then-else, repeat-until, etc. built in.
However, assembly language, as well as a few minimalistic implementations of Forth and Lisp do not have these control structures built-in -- instead,
they are built up from simpler (to the machine) primitives.
Yossi Kreinin, in ["My history with Forth & stack machines"](https://yosefk.com/blog/my-history-with-forth-stack-machines.html),
describes an implementation of Forth
that doesn't even have comments built in!
[Karl Lunt has a macro package](https://www.seanet.com/%7Ekarllunt/picmacro.htm) that you can #INCLUDE from another PIC assembly-language file to support FOR-NEXT, SELECT-CASE, REPEAT-UNTIL, REPEAT-ALWAYS, etc.
It would be pretty easy to split that up into a bunch of files, each of which implemented only one control structure,
and then you could #INCLUDE only the control structures you actually use.
Assembly language even lets you custom-design new calling conventions.
People designing new instruction sets sometimes wonder about what's the minimal set of instructions that's really "necessary" -- how many different primitive instructions are necessary for a [minimal instruction set computer](https://en.wikipedia.org/wiki/minimal_instruction_set_computer) that can do all the stuff that any other CPU can do in a "reasonable" number of primitive instructions?
Alas, trying to "simplify" the number of primitive operations to an extreme often leads to a ["Turing Tarpit"](http://david.carybros.com/html/computer_architecture.html#tarpit).
Making the base language simpler by supporting very few things
(so that everything else must be built up from those things)
often leads to more complexity *elsewhere*.
(See: "[law of conservation of complexity](https://en.wikipedia.org/wiki/law_of_conservation_of_complexity)", aka "Waterbed Theory"). | Not that I know of. There's no clear value to doing that, that I can see.
Once you have a compiler that can handle all language constructs, there is no need to construct new compilers that handle only a subset of constructs. There is value to programmers of having a single language to learn, rather than many variants. And most large programs are likely to use most language constructs somewhere or other.
There is a lot of work on constructing domain-specific languages (DSLs), including using functional languages like Haskell to help with that. |
143,146 | If all programming languages more or less compile down to the same machine code, have there been attempts at, or is there a "field" around, the concept of even more modular programming languages, where a language can be built up based on exactly what is needed? If arrays aren't handled, the language to simplify writing a program that is then compiled into machine code won't need to know what an array is. The same goes for loops of different sorts; many languages have both *for* and *while*.
I'm not a computer scientist. As a novice, there are so many different "languages", but they're all mostly the same thing, and it seems like there might be a better way to organize the benefits each language gives relative to how much they actually overlap. I just watched Bjarne Stroustrup explain that C++ made managing complex data types easier, and that C should be seen as a subset of C++, but it seems like another way of doing it is to have an even more modular approach, treating the more complex abilities C++ has as modular add-ons, programs that can be imported, since this is what is being done anyway, except in a "one-size-fits-all" way. Basically, not to have a "one-size-fits-all" programming language, but to construct what is needed depending on need.
If anyone understands what I'm fishing for, feel free to reply. It also seems such a language would make it easier for noobs to understand "programming languages", even if they could also have the choice to import a more one-size-fits-all configuration. | 2021/08/14 | [
"https://cs.stackexchange.com/questions/143146",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/141449/"
] | The syntactic features of computer languages only exist for the benefit of human understanding; they are abstractions that are helpful for us to conceptualize what is happening in the programs. The programs themselves are agnostic to these syntactic features and the fact that the source language uses them is not necessarily reflected in the final code. The resultant machine code doesn't care whether you used a for-loop or a while-loop; the same CPU operations may be used for both. And what is an array? For us it is a complex data-structure with many rules that we must follow in order to make it work correctly, but in the computer it is just a series of adjacent data.
To be able to add or remove syntactic features on the fly would make programming languages much more complex and allow more opportunity for ambiguity. Since the resultant programs aren't really changed by the presence of these features, it probably isn't beneficial to have to deal with the extra complexity of allowing syntactic features to be added or removed.
Some features that are not syntactic do have a big effect on the performance of the resultant programs. For example, the memory allocation and cleanup system used might differ from language to language. Some other features include runtime safety checks, like bounds checks on arrays. A notable feature that has a significant effect on the performance of resultant code is whether the language compiles to native machine code or must be interpreted by another program in order to run. | Yes, there are minimalistic programming languages where many of the things considered "primitives" in other programming languages are built up from even simpler things.
For example, most programming languages have if-then-else, repeat-until, etc. built in.
However, assembly language, as well as a few minimalistic implementations of Forth and Lisp do not have these control structures built-in -- instead,
they are built up from simpler (to the machine) primitives.
Yossi Kreinin, in ["My history with Forth & stack machines"](https://yosefk.com/blog/my-history-with-forth-stack-machines.html),
describes an implementation of Forth
that doesn't even have comments built in!
[Karl Lunt has a macro package](https://www.seanet.com/%7Ekarllunt/picmacro.htm) that you can #INCLUDE from another PIC assembly-language file to support FOR-NEXT, SELECT-CASE, REPEAT-UNTIL, REPEAT-ALWAYS, etc.
It would be pretty easy to split that up into a bunch of files, each of which implemented only one control structure,
and then you could #INCLUDE only the control structures you actually use.
Assembly language even lets you custom-design new calling conventions.
People designing new instruction sets sometimes wonder about what's the minimal set of instructions that's really "necessary" -- how many different primitive instructions are necessary for a [minimal instruction set computer](https://en.wikipedia.org/wiki/minimal_instruction_set_computer) that can do all the stuff that any other CPU can do in a "reasonable" number of primitive instructions?
Alas, trying to "simplify" the number of primitive operations to an extreme often leads to a ["Turing Tarpit"](http://david.carybros.com/html/computer_architecture.html#tarpit).
Making the base language simpler by supporting very few things
(so that everything else must be built up from those things)
often leads to more complexity *elsewhere*.
(See: "[law of conservation of complexity](https://en.wikipedia.org/wiki/law_of_conservation_of_complexity)", aka "Waterbed Theory"). |
143,146 | If all programming languages more or less compile down to the same machine code. Have there been attempts at, or is there a "field" around the concept of even more modular programming languages, where it can be built-up based on exactly what is needed? If arrays aren't handled, the language to simplify writing a program that is then compiled into machine code won't need to know what an array is. And, same with loops of different sorts, many languages have both *for*, and *while*.
I'm not a computer scientist. As a novice, there are so many different "languages", but they're all mostly the same thing, and it seems like there might be a better way to organize the benefits each language gives relative to how much they actually overlap. I just watched Bjarne Stroustrup explain that C++ made managing complex data types easier, and that C should be seen as a subset of C++, but it seems like another way of doing it is to have an even more modular approach, treating the more complex abilities C++ has as modular add-ons, programs that can be imported, since this is what is being done anyway, except in a "one-size-fits-all" way. Basically, to not have a "one-size-fits-all" programming language, but to construct what is needed depending on need.
If anyone understands what I'm fishing for, feel free to reply. It also seems such a language would make it easier for noobs to understand "programming languages", even if they could also have the choice to import more one-size-fits-all configuration. | 2021/08/14 | [
"https://cs.stackexchange.com/questions/143146",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/141449/"
] | The syntactic features of computer languages only exist for the benefit of human understanding; they are abstractions that are helpful for us to conceptualize what is happening in the programs. The programs themselves are agnostic to these syntactic features and the fact that the source language uses them is not necessarily reflected in the final code. The resultant machine code doesn't care whether you used a for-loop or a while-loop; the same CPU operations may be used for both. And what is an array? For us it is a complex data-structure with many rules that we must follow in order to make it work correctly, but in the computer it is just a series of adjacent data.
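Both of these points can be glimpsed from within Python itself (a sketch using CPython's `dis` and `array` modules; bytecode is not machine code and exact opcode names vary by version, so the loop check below only looks for jump instructions of some kind):

```python
import array
import dis
import sys

# 1) A for-loop and an equivalent while-loop both lower to conditional jumps.
def sum_for():
    s = 0
    for i in range(5):
        s += i
    return s

def sum_while():
    s = 0
    i = 0
    while i < 5:
        s += i
        i += 1
    return s

for_ops = {ins.opname for ins in dis.get_instructions(sum_for)}
while_ops = {ins.opname for ins in dis.get_instructions(sum_while)}
assert any('JUMP' in op for op in for_ops)    # exact opcodes differ by version,
assert any('JUMP' in op for op in while_ops)  # but both loops are jumps underneath
assert sum_for() == sum_while() == 10

# 2) An array is just a series of adjacent data: three machine ints laid out
#    back to back, each exactly itemsize bytes after the previous one.
a = array.array('i', [10, 20, 30])
raw = a.tobytes()
assert len(raw) == 3 * a.itemsize
second = int.from_bytes(raw[a.itemsize:2 * a.itemsize], sys.byteorder)
assert second == 20
```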
To be able to add or remove syntactic features on-the-fly would make programming languages much more complex and allow more opportunity for ambiguity. Since the resultant programs aren't really changed by the presence of these features, it probably isn't beneficial to have to deal with the extra complexity of allowing syntactic features to be added or removed.
Some features that are not syntactic do have a big effect on the performance of the resultant programs. For example, the memory allocation and cleanup system used might differ from language to language. Some other features include runtime safety checks, like bounds-checks on arrays. A notable feature that has a significant effect on the performance of resultant code is whether the language compiles to native machine code or must be interpreted by another program in order to run. | The problem with the idea of a modular or a-la-carte programming language (if I understand the question correctly) is that the main challenge of designing programming languages in the first place is *integrating* a wide variety of features and requirements without undue verbosity or a wild proliferation of syntactical or conceptual complexity.
It's always possible in principle to write everything from scratch in assembly language, but the volume of code that would ensue to express any given operation is itself the problem.
If a language is not already designed to provide a feature, it's very unlikely to be able to provide it without either being verbose or changing what seem to be fundamental principles.
Another thing about languages is that they are most useful when shared, and when shareable and communicable. Highly individual languages may multiply the effort of the designer, but they also require extreme design effort from the sole user. Shared languages have a community who can be engaged in design together, who can study and solve different problems with it, and devise the patterns and idioms for particular applications. |
143,146 | If all programming languages more or less compile down to the same machine code, have there been attempts at, or is there a "field" around, the concept of even more modular programming languages, where a language can be built up based on exactly what is needed? If arrays aren't handled, the language (which exists to simplify writing a program that is then compiled into machine code) won't need to know what an array is. The same goes for loops of different sorts; many languages have both *for* and *while*.
I'm not a computer scientist. As a novice, there are so many different "languages", but they're all mostly the same thing, and it seems like there might be a better way to organize the benefits each language gives relative to how much they actually overlap. I just watched Bjarne Stroustrup explain that C++ made managing complex data types easier, and that C should be seen as a subset of C++, but it seems like another way of doing it is to have an even more modular approach, treating the more complex abilities C++ has as modular add-ons, programs that can be imported, since this is what is being done anyway, except in a "one-size-fits-all" way. Basically, to not have a "one-size-fits-all" programming language, but to construct what is needed depending on need.
If anyone understands what I'm fishing for, feel free to reply. It also seems such a language would make it easier for noobs to understand "programming languages", even if they could also have the choice to import more one-size-fits-all configuration. | 2021/08/14 | [
"https://cs.stackexchange.com/questions/143146",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/141449/"
] | The syntactic features of computer languages only exist for the benefit of human understanding; they are abstractions that are helpful for us to conceptualize what is happening in the programs. The programs themselves are agnostic to these syntactic features and the fact that the source language uses them is not necessarily reflected in the final code. The resultant machine code doesn't care whether you used a for-loop or a while-loop; the same CPU operations may be used for both. And what is an array? For us it is a complex data-structure with many rules that we must follow in order to make it work correctly, but in the computer it is just a series of adjacent data.
To be able to add or remove syntactic features on-the-fly would make programming languages much more complex and allow more opportunity for ambiguity. Since the resultant programs aren't really changed by the presence of these features, it probably isn't beneficial to have to deal with the extra complexity of allowing syntactic features to be added or removed.
Some features that are not syntactic do have a big effect on the performance of the resultant programs. For example, the memory allocation and cleanup system used might differ from language to language. Some other features include runtime safety checks, like bounds-checks on arrays. A notable feature that has a significant effect on the performance of resultant code is whether the language compiles to native machine code or must be interpreted by another program in order to run. | You could argue that something like Swift kind of fits into this category - because the language is actually quite small, but with a massive standard library. Not even +, -, \*, / are part of the language - the fact that they are binary or sometimes unary operators, left associative, with certain priorities is defined in the standard library (which is enough for parsing) and their implementation for many types is defined elsewhere in the standard library (so you can add int + int, or float + float). The basic types like integers, floating point, character, are all defined in the standard library.
And there are aspects of the language that allow you to write code that looks *as if* you had extended the language.
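A rough analogy in Python (Python's mechanism rather than Swift's, but it shows the same idea that `+` on user types resolves to ordinary library-level code):

```python
# "+" on a user-defined type is dispatched to an ordinary method, __add__,
# defined next to the type itself -- no compiler change required.
class Meters:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        return Meters(self.value + other.value)

total = Meters(2) + Meters(3)
assert total.value == 5
```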
Not that you can do much of anything without pulling in most of the standard library. |
143,146 | If all programming languages more or less compile down to the same machine code, have there been attempts at, or is there a "field" around, the concept of even more modular programming languages, where a language can be built up based on exactly what is needed? If arrays aren't handled, the language (which exists to simplify writing a program that is then compiled into machine code) won't need to know what an array is. The same goes for loops of different sorts; many languages have both *for* and *while*.
I'm not a computer scientist. As a novice, there are so many different "languages", but they're all mostly the same thing, and it seems like there might be a better way to organize the benefits each language gives relative to how much they actually overlap. I just watched Bjarne Stroustrup explain that C++ made managing complex data types easier, and that C should be seen as a subset of C++, but it seems like another way of doing it is to have an even more modular approach, treating the more complex abilities C++ has as modular add-ons, programs that can be imported, since this is what is being done anyway, except in a "one-size-fits-all" way. Basically, to not have a "one-size-fits-all" programming language, but to construct what is needed depending on need.
If anyone understands what I'm fishing for, feel free to reply. It also seems such a language would make it easier for noobs to understand "programming languages", even if they could also have the choice to import more one-size-fits-all configuration. | 2021/08/14 | [
"https://cs.stackexchange.com/questions/143146",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/141449/"
] | Yes, there are minimalistic programming languages where many of the things considered "primitives" in other programming languages are built up from even simpler things.
For example, most programming languages have if-then-else, repeat-until, etc. built in.
However, assembly language, as well as a few minimalistic implementations of Forth and Lisp, does not have these control structures built in -- instead,
they are built up from simpler (to the machine) primitives.
Yossi Kreinin, in ["My history with Forth & stack machines"](https://yosefk.com/blog/my-history-with-forth-stack-machines.html),
describes an implementation of Forth
that doesn't even have comments built in!
[Karl Lunt has a macro package](https://www.seanet.com/%7Ekarllunt/picmacro.htm) that you can #INCLUDE from another PIC assembly-language file to support FOR-NEXT, SELECT-CASE, REPEAT-UNTIL, REPEAT-ALWAYS, etc.
It would be pretty easy to split that up into a bunch of files, each of which implemented only one control structure,
and then you could #INCLUDE only the control structures you actually use.
Assembly language even lets you custom-design new calling conventions.
People designing new instruction sets sometimes wonder about what's the minimal set of instructions that's really "necessary" -- how many different primitive instructions are necessary for a [minimal instruction set computer](https://en.wikipedia.org/wiki/minimal_instruction_set_computer) that can do all the stuff that any other CPU can do in a "reasonable" number of primitive instructions?
Alas, trying to "simplify" the number of primitive operations to an extreme often leads to a ["Turing Tarpit"](http://david.carybros.com/html/computer_architecture.html#tarpit).
Making the base language simpler by supporting very few things
(so that everything else must be built up from those things)
often leads to more complexity *elsewhere*.
(See: "[law of conservation of complexity](https://en.wikipedia.org/wiki/law_of_conservation_of_complexity)", aka "Waterbed Theory"). | The problem with the idea of a modular or a-la-carte programming language (if I understand the question correctly) is that the main challenge of designing programming languages in the first place is *integrating* a wide variety of features and requirements without undue verbosity or a wild proliferation of syntactical or conceptual complexity.
It's always possible in principle to write everything from scratch in assembly language, but the volume of code that would ensue to express any given operation is itself the problem.
If a language is not already designed to provide a feature, it's very unlikely to be able to provide it without either being verbose or changing what seem to be fundamental principles.
Another thing about languages is that they are most useful when shared, and when shareable and communicable. Highly individual languages may multiply the effort of the designer, but they also require extreme design effort from the sole user. Shared languages have a community who can be engaged in design together, who can study and solve different problems with it, and devise the patterns and idioms for particular applications. |
143,146 | If all programming languages more or less compile down to the same machine code, have there been attempts at, or is there a "field" around, the concept of even more modular programming languages, where a language can be built up based on exactly what is needed? If arrays aren't handled, the language (which exists to simplify writing a program that is then compiled into machine code) won't need to know what an array is. The same goes for loops of different sorts; many languages have both *for* and *while*.
I'm not a computer scientist. As a novice, there are so many different "languages", but they're all mostly the same thing, and it seems like there might be a better way to organize the benefits each language gives relative to how much they actually overlap. I just watched Bjarne Stroustrup explain that C++ made managing complex data types easier, and that C should be seen as a subset of C++, but it seems like another way of doing it is to have an even more modular approach, treating the more complex abilities C++ has as modular add-ons, programs that can be imported, since this is what is being done anyway, except in a "one-size-fits-all" way. Basically, to not have a "one-size-fits-all" programming language, but to construct what is needed depending on need.
If anyone understands what I'm fishing for, feel free to reply. It also seems such a language would make it easier for noobs to understand "programming languages", even if they could also have the choice to import more one-size-fits-all configuration. | 2021/08/14 | [
"https://cs.stackexchange.com/questions/143146",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/141449/"
] | Yes, there are minimalistic programming languages where many of the things considered "primitives" in other programming languages are built up from even simpler things.
For example, most programming languages have if-then-else, repeat-until, etc. built in.
However, assembly language, as well as a few minimalistic implementations of Forth and Lisp, does not have these control structures built in -- instead,
they are built up from simpler (to the machine) primitives.
Yossi Kreinin, in ["My history with Forth & stack machines"](https://yosefk.com/blog/my-history-with-forth-stack-machines.html),
describes an implementation of Forth
that doesn't even have comments built in!
[Karl Lunt has a macro package](https://www.seanet.com/%7Ekarllunt/picmacro.htm) that you can #INCLUDE from another PIC assembly-language file to support FOR-NEXT, SELECT-CASE, REPEAT-UNTIL, REPEAT-ALWAYS, etc.
It would be pretty easy to split that up into a bunch of files, each of which implemented only one control structure,
and then you could #INCLUDE only the control structures you actually use.
Assembly language even lets you custom-design new calling conventions.
People designing new instruction sets sometimes wonder about what's the minimal set of instructions that's really "necessary" -- how many different primitive instructions are necessary for a [minimal instruction set computer](https://en.wikipedia.org/wiki/minimal_instruction_set_computer) that can do all the stuff that any other CPU can do in a "reasonable" number of primitive instructions?
Alas, trying to "simplify" the number of primitive operations to an extreme often leads to a ["Turing Tarpit"](http://david.carybros.com/html/computer_architecture.html#tarpit).
Making the base language simpler by supporting very few things
(so that everything else must be built up from those things)
often leads to more complexity *elsewhere*.
(See: "[law of conservation of complexity](https://en.wikipedia.org/wiki/law_of_conservation_of_complexity)", aka "Waterbed Theory"). | You could argue that something like Swift kind of fits into this category - because the language is actually quite small, but with a massive standard library. Not even +, -, \*, / are part of the language - the fact that they are binary or sometimes unary operators, left associative, with certain priorities is defined in the standard library (which is enough for parsing) and their implementation for many types is defined elsewhere in the standard library (so you can add int + int, or float + float). The basic types like integers, floating point, character, are all defined in the standard library.
And there are aspects of the language that allow you to write code that looks *as if* you had extended the language.
Not that you can do much of anything without pulling in most of the standard library. |
195,925 | I have a Microsoft Virtual PC running Windows 2003. I intend to copy over the .vmc(Configuration file) and .vhd (Hard Disk file) to my Windows 7 laptop.
What do I need to do to run the Virtual PC in my Windows 7 laptop?
The whole Windows XP mode is not understandable to me and I quite honestly do not think that Windows XP mode will work here. | 2010/10/05 | [
"https://superuser.com/questions/195925",
"https://superuser.com",
"https://superuser.com/users/13000/"
] | If you install Virtual PC on your laptop, you are more or less good to go. Just copy the two files across...
Be aware that the Virtual PC console has "gone", and has been replaced by an even naffer Windows Explorer window.
You can install Windows 7 Virtual PC on versions other than the ones which support the XP Mode extensions to Virtual PC (at least, I have installed it on Windows 7 Home Premium), but Microsoft doesn't make it easy. If you go to the [Virtual PC and XP Mode](http://www.microsoft.com/windows/virtual-pc/download.aspx) download page, you can enter a version that does allow one to download XP Mode, and download and install "Windows Virtual PC". | In order to run Windows 7's Virtual PC, you'll need Professional, Enterprise or Ultimate. Home Premium won't work.
[Virtual PC 2007 SP1](http://www.microsoft.com/downloads/en/details.aspx?FamilyId=28C97D22-6EB8-4A09-A7F7-F6C7A1F000B5&displaylang=en) should work on Windows 7, but it's not "officially" supported.
[VirtualBox](http://www.virtualbox.org/) is another option if this doesn't work for you, though I'm not familiar with the conversion process. |
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | In your manifest, when you declare the activity, use theme `"@android:style/Theme.Translucent.NoTitleBar"`
Ex:
```
<activity android:name="yourActivityName" android:label="@string/app_name" android:theme="@android:style/Theme.Translucent.NoTitleBar">
``` | If you are not interacting with the UI, what you are trying to do sounds more like an Android service.
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | I am using `AppCompatActivity` and the solutions provided in this SO did not solve my problem. Here is what worked for me.
I added the following in my `styles.xml`.
```
<style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
<!-- Customize your theme here. -->
</style>
<style name="AppTheme.NoDisplay">
<item name="android:windowBackground">@null</item>
<item name="android:windowContentOverlay">@null</item>
<item name="android:windowIsTranslucent">true</item>
<item name="android:windowAnimationStyle">@null</item>
<item name="android:windowDisablePreview">true</item>
<item name="android:windowNoDisplay">true</item>
</style>
```
Then, for any activity whose display I want to disable, I modified it like so:
```
<activity
android:name=".NoDisplayActivity"
android:theme="@style/AppTheme.NoDisplay">
```
Cheers! | I had used `moveTaskToBack(true)` in `onResume()` to put the entire activity stack in the background.
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | Android also provides a theme specifically for this:
```
android:theme="@android:style/Theme.NoDisplay"
``` | I am using `AppCompatActivity` and the solutions provided in this SO did not solve my problem. Here is what worked for me.
I added the following in my `styles.xml`.
```
<style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
<!-- Customize your theme here. -->
</style>
<style name="AppTheme.NoDisplay">
<item name="android:windowBackground">@null</item>
<item name="android:windowContentOverlay">@null</item>
<item name="android:windowIsTranslucent">true</item>
<item name="android:windowAnimationStyle">@null</item>
<item name="android:windowDisablePreview">true</item>
<item name="android:windowNoDisplay">true</item>
</style>
```
Then, for any activity whose display I want to disable, I modified it like so:
```
<activity
android:name=".NoDisplayActivity"
android:theme="@style/AppTheme.NoDisplay">
```
Cheers! |
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | You need to add the Intent flag,
```
intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
```
Or
call "`finish();`" after firing the intent. | I had used `moveTaskToBack(true)` in `onResume()` to put the entire activity stack in the background.
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | In your manifest, when you declare the activity, use theme `"@android:style/Theme.Translucent.NoTitleBar"`
Ex:
```
<activity android:name="yourActivityName" android:label="@string/app_name" android:theme="@android:style/Theme.Translucent.NoTitleBar">
``` | I am using `AppCompatActivity` and the solutions provided in this SO did not solve my problem. Here is what worked for me.
I added the following in my `styles.xml`.
```
<style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
<!-- Customize your theme here. -->
</style>
<style name="AppTheme.NoDisplay">
<item name="android:windowBackground">@null</item>
<item name="android:windowContentOverlay">@null</item>
<item name="android:windowIsTranslucent">true</item>
<item name="android:windowAnimationStyle">@null</item>
<item name="android:windowDisablePreview">true</item>
<item name="android:windowNoDisplay">true</item>
</style>
```
Then, for any activity whose display I want to disable, I modified it like so:
```
<activity
android:name=".NoDisplayActivity"
android:theme="@style/AppTheme.NoDisplay">
```
Cheers! |
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | In your manifest, when you declare the activity, use theme `"@android:style/Theme.Translucent.NoTitleBar"`
Ex:
```
<activity android:name="yourActivityName" android:label="@string/app_name" android:theme="@android:style/Theme.Translucent.NoTitleBar">
``` | I think this would help you a lot:
```
<activity android:name = "MyActivity"
android:label = "@string/app_name"
android:theme = "@android:style/Theme.NoDisplay" >
``` |
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | I think this would help you a lot:
```
<activity android:name = "MyActivity"
android:label = "@string/app_name"
android:theme = "@android:style/Theme.NoDisplay" >
``` | Looks similar to the question asked here: [Removing an activity from the history stack](https://stackoverflow.com/questions/1898886/removing-an-activity-from-the-history-stack)
If it is, then you can use:
>
> FLAG\_ACTIVITY\_NO\_HISTORY
>
>
>
This should work to wipe activities off of the stack.
If you need to exclude from recent apps (long press home key) you can use this flag:
>
> FLAG\_ACTIVITY\_EXCLUDE\_FROM\_RECENTS
>
>
> |
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | You need to add the Intent flag,
```
intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
```
Or
call "`finish();`" after firing the intent. | In your manifest add `@android:style/Theme.Translucent.NoTitleBar"` as mentioned in some of the answers above.
Also remove the `setContentView(R.layout.your_activity);` line from your activity.java file. |
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | Using
```
<activity android:name="yourActivityName" android:label="@string/app_name" android:theme="@android:style/Theme.Translucent.NoTitleBar">
```
mentioned by Brian515 works great. This method is useful for creating an entry-point Activity that decides which activity, service, etc. to start, without having to show a UI to the user. Remember to use `finish()` after you have started your intent. | I had used `moveTaskToBack(true)` in `onResume()` to put the entire activity stack in the background.
2,704,084 | Is it in any way possible to launch an activity from the main function without having a UI? i.e. is there a way to create a sort of "wrapper" around another activity, i.e. by launching the main activity, it takes you to another activity automatically.
If that is not possible, is there a way to remove the main activity from the stack so that clicking the back button does not take you to a blank UI?
Here's an example of what I'm trying to do:
```
public class WrapperActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
final Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:555-1212"));
startActivity(intent);
}
}
``` | 2010/04/24 | [
"https://Stackoverflow.com/questions/2704084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236689/"
] | Android also provides a theme specifically for this:
```
android:theme="@android:style/Theme.NoDisplay"
``` | Looks similar to the question asked here: [Removing an activity from the history stack](https://stackoverflow.com/questions/1898886/removing-an-activity-from-the-history-stack)
If it is, then you can use:
>
> FLAG\_ACTIVITY\_NO\_HISTORY
>
>
>
This should work to wipe activities off of the stack.
If you need to exclude from recent apps (long press home key) you can use this flag:
>
> FLAG\_ACTIVITY\_EXCLUDE\_FROM\_RECENTS
>
>
> |
26,272 | I believe the adjective **vouvoyante** in this expression comes from **vouvoyant**, the present participle of the verb **vouvoyer** (to address someone with the formal *vous*).
But I don't know what this word really means in this expression, which I saw in the following context:
>
> Pour justifier le subjonctif présent habeas (« que vous ayez », **[en
> traduction vouvoyante](https://fr.wikipedia.org/wiki/Habeas_corpus)**), on peut considérer oportet (« il faut »)
> comme sous-entendu : oportet corpus habeas (« il faut que vous ayez le
> corps »).
>
>
>
I haven't found the word **vouvoyante** used as an adjective anywhere. | 2017/06/24 | [
"https://french.stackexchange.com/questions/26272",
"https://french.stackexchange.com",
"https://french.stackexchange.com/users/13276/"
] | Normally, a direct object (COD) should always be supplied with a transitive verb.
However, usage without a direct object is observed for certain very common verbs, especially verbs of **appreciation**:
* tu aimes les pommes ? j'aime (or je n'aime pas).
* je déteste, j'apprécie, j'adore.
* je ne supporte pas. (*je supporte* is rarer).
* je pense, je ne pense pas.
* je comprends.
* ... | In songs and poetry, one can still get away with turns of phrase that would not normally be acceptable in writing.
In conversation, I don't think it is particularly rare to elide the direct object, especially when it is clear from context.
Here, I think Patrick Bruel simply avoided saying either "ne le supportent pas" or "ne supportent pas ça" because it didn't fit the melody of his song.
I'm not sure there is a firm rule defining this linguistic phenomenon, but I think a "*ça*" at the end of a sentence, as here, can often be dropped when the direct object it replaces has just been mentioned. |
46,682,455 | Assume that I have this declaration in Java, it's okay.
```
abstract class Start<T extends End> {
public T end;
}
abstract class End<T extends Start> {
public T start;
}
```
However, it's not okay in Kotlin, since Kotlin has a restriction on "cyclic" type parameters.
```
abstract class Start<T : End<*>> {
lateinit var end: T
}
abstract class End<T : Start<*>> {
lateinit var start: T
}
```
Is there any approach to solve this in Kotlin, so that I can have generic types that depend on each other? | 2017/10/11 | [
"https://Stackoverflow.com/questions/46682455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1197317/"
] | It is impossible to use just one type parameter. Introducing a `Self` type, which is natively supported in some other languages, is necessary. However, in Kotlin you will have to introduce the `Self` type yourself, because [JetBrains officially turned down the request of adding a self type](https://youtrack.jetbrains.com/issue/KT-6494#comment=27-882961).
```
abstract class Start<Self: Start<Self, T>, T: End<T, Self>> {
lateinit var end: T
}
abstract class End<Self: End<Self, T>, T: Start<T, Self>> {
lateinit var start: T
}
```
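For comparison, the same mutually recursive bounds can be written out in the Java that the question started from, using the identical `Self` trick — it is the F-bounded pattern behind `java.lang.Enum<E extends Enum<E>>`. A minimal sketch; the `RoadStart`/`RoadEnd` names are illustrative and not from the question:

```java
// Each concrete class passes itself as the Self parameter, mirroring the
// Kotlin declarations above (F-bounded / recursive generics).
abstract class Start<Self extends Start<Self, T>, T extends End<T, Self>> {
    public T end;
}

abstract class End<Self extends End<Self, T>, T extends Start<T, Self>> {
    public T start;
}

// Illustrative concrete pair tying the knot.
class RoadStart extends Start<RoadStart, RoadEnd> { }
class RoadEnd extends End<RoadEnd, RoadStart> { }

public class Main {
    public static void main(String[] args) {
        RoadStart s = new RoadStart();
        RoadEnd e = new RoadEnd();
        s.end = e;   // statically typed as RoadEnd — no casts or wildcards
        e.start = s; // statically typed as RoadStart
        System.out.println(s.end.start == s); // prints true
    }
}
```

The payoff over the wildcard version is that `s.end` comes back as the concrete `RoadEnd` rather than a projected `End<*>`.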
PS: This `Self` may later induce tediously long types. Proceed with caution. | Let G be a directed graph whose vertices are all type-parameters of all generic type declarations in the program. For every projection type-argument A in every generic type B<...> in the set of constituent types of every type in the B-closure of the set of declared upper bounds of every type-parameter T in G, add an edge from T to U, where U is the type-parameter of the declaration of B<...> corresponding to the type-argument A. It is a compile-time error if the graph G has a cycle.
Note:
An intuitive meaning of an edge X → Y in the graph G is "the exact meaning of bounds for the type-parameter X depends on bounds for the type-parameter Y".
Example:
The following declaration is invalid, because there is an edge T → T, forming a cycle:
```
interface A<T : A<*>>
```
The bound `A<*>` is a projection with an implicit bound. If that bound is made explicit, the type `A<*>` takes an equivalent form `A<out A<*>>`. In the same way, it can be further rewritten in an equivalent from `A<out A<out A<*>>>`, and so on. In its fully expanded form this bound would be infinite. The purpose of this rule is to avoid such infinite types, and type checking difficulties associated with them.
The following pair of declarations is invalid, because there are edges T → S and S → T, forming a cycle:
```
interface B<T : C<*>>
interface C<S : B<*>>
```
The following declaration is invalid, because there are edges K → V and V → K, forming a cycle:
```
interface D<K: D<K, *>, V: D<*, V>>
```
On the other hand, each of the following declarations is valid:
```
interface A<T : A<T>>
interface D<K, V : D<*, V>>
```
TODO: Interaction of these algorithms with flexible types. TODO: Importing types declared in Java that violate these rules.
Subtyping relationships are to be decided inductively, i.e. they must have a finite proof.
```
interface N<in T>
interface A<S> : N<N<A<A<S>>>>
```
[Official Link](https://jetbrains.github.io/kotlin-spec/kotlin-spec.pdf) |
370,429 | I have a problem with Windows 7 not sleeping.
```
PowerCfg -requests
```
says a "**Legacy Kernel Caller**" driver prevents the sleep mode.
This is not very helpful or informative.
How do I get more details about that object?
EDIT:
I found that
```
Powercfg -requestsoverride
```
is the best way of dealing with such misbehaving drivers and software.
The option [-requestsoverride](http://msdn.microsoft.com/en-us/library/ff794903.aspx) is not very well documented.
MSDN doesn't mention that NAME is case sensitive, or that to remove a request from the overrides list you give the option with a blank REQUEST parameter. | 2011/12/21 | [
"https://superuser.com/questions/370429",
"https://superuser.com",
"https://superuser.com/users/76982/"
] | Thanks for all the suggestions!
Finally I narrowed down the problem simply by trial and error, disabling devices and rebooting. It was a TV card driver that had hung and was not releasing the power request despite no longer being in use.
EDIT:
Unfortunately, the problem with the TV card is intermittently recurring. Googling shows it's also quite common. I found that disallowing the driver from making power requests with
```
Powercfg -requestsoverride Driver "Legacy Kernel Caller" System
```
solves it.
"Legacy Kernel Caller" is translated on different Windows language versions. On my Polish system it says "Starszego typu obiekt wywołujący jądro". | From the start menu, type in "Performance Information and Tools".
Click the Advanced Tools and click generate a System Health Report. It should point out legacy driver issues.
Edit:
Also try `powercfg -requests`. |
370,429 | I have a problem with Windows 7 not sleeping.
```
PowerCfg -requests
```
says a "**Legacy Kernel Caller**" driver prevents the sleep mode.
This is not very helpful or informative.
How do I get more details about that object?
EDIT:
I found that
```
Powercfg -requestsoverride
```
is the best way of dealing with such misbehaving drivers and software.
The option [-requestsoverride](http://msdn.microsoft.com/en-us/library/ff794903.aspx) is not very well documented.
MSDN doesn't mention that NAME is case sensitive, or that to remove a request from the overrides list you give the option with a blank REQUEST parameter. | 2011/12/21 | [
"https://superuser.com/questions/370429",
"https://superuser.com",
"https://superuser.com/users/76982/"
] | From the start menu, type in "Performance Information and Tools".
Click the Advanced Tools and click generate a System Health Report. It should point out legacy driver issues.
Edit:
Also try `powercfg -requests`. | In my case it was Spotify that misbehaved. People are [going ballistic](https://community.spotify.com/t5/Help-Desktop-Linux-Windows-Web/Wake-timer-on-windows/td-p/1205648/page/4) in their forums over this bug.
Solution: Quit Spotify before putting the computer to sleep/hibernate.
I still question why on earth Windows allows a poorly programmed piece of software to override all power plan settings and create wake timers. Microsoft should take their share of the blame here.
edit: Seems like the issue was [closed 3 days ago](https://svn.boost.org/trac/boost/ticket/11368) so I guess we should expect a fix soon. |
370,429 | I have a problem with Windows 7 not sleeping.
```
PowerCfg -requests
```
says a "**Legacy Kernel Caller**" driver prevents the sleep mode.
This is not very helpful or informative.
How do I get more details about that object?
EDIT:
I found that
```
Powercfg -requestsoverride
```
is the best way of dealing with such misbehaving drivers and software.
The option [-requestsoverride](http://msdn.microsoft.com/en-us/library/ff794903.aspx) is not very well documented.
MSDN doesn't mention that NAME is case sensitive, or that to remove a request from the overrides list you give the option with a blank REQUEST parameter. | 2011/12/21 | [
"https://superuser.com/questions/370429",
"https://superuser.com",
"https://superuser.com/users/76982/"
] | From the start menu, type in "Performance Information and Tools".
Click the Advanced Tools and click generate a System Health Report. It should point out legacy driver issues.
Edit:
Also try `powercfg -requests`. | I had this issue and the Legacy Kernel Caller kept coming back intermittently, even though it was verifiably on the list of things to be ignored.
In case anyone still has problems like that, here's a link to a batch file + explanation of how to set up a task.....both were a learning curve which I never want to repeat!!
<https://github.com/richdyer2000/Sleepy>
The batch file basically performs the sleep management:
a main loop runs for ~300s (a standard loop with 300 iterations and a ping command with count=2 to control the duration), reading the output of 'powercfg -requests' each time. If it finds anything other than "DISPLAY:", "SYSTEM:", ..."ACTIVE LOCK SCREEN:", "None." or "[DRIVER] Legacy Kernel Caller" on a line, then the main loop is restarted.
If the end of the main loop is reached, the command "rundll32.exe powrprof.dll,SetSuspendState 0,1,0" is executed. On Windows 10, it seems it is necessary to run "powercfg -hibernate off" to get a proper sleep state, so I include this in the code prior to the sleep command to make sure. |
370,429 | I have a problem with Windows 7 not sleeping.
```
PowerCfg -requests
```
says a "**Legacy Kernel Caller**" driver prevents the sleep mode.
This is not very helpful or informative.
How do I get more details about that object?
EDIT:
I found that
```
Powercfg -requestsoverride
```
is the best way of dealing with such misbehaving drivers and software.
The option [-requestsoverride](http://msdn.microsoft.com/en-us/library/ff794903.aspx) is not very well documented.
MSDN doesn't mention that NAME is case sensitive, or that to remove a request from the overrides list you give the option with a blank REQUEST parameter. | 2011/12/21 | [
"https://superuser.com/questions/370429",
"https://superuser.com",
"https://superuser.com/users/76982/"
] | Thanks for all the suggestions!
Finally I narrowed down the problem simply by trial and error, disabling devices and rebooting. It was a TV card driver that had hung and was not releasing the power request despite no longer being in use.
EDIT:
Unfortunately, the problem with the TV card is intermittently recurring. Googling shows it's also quite common. I found that disallowing the driver from making power requests with
```
Powercfg -requestsoverride Driver "Legacy Kernel Caller" System
```
solves it.
"Legacy Kernel Caller" is translated on different Windows language versions. On my Polish system it says "Starszego typu obiekt wywołujący jądro". | In my case it was Spotify that misbehaved. People are [going ballistic](https://community.spotify.com/t5/Help-Desktop-Linux-Windows-Web/Wake-timer-on-windows/td-p/1205648/page/4) in their forums over this bug.
Solution: Quit Spotify before putting the computer to sleep/hibernate.
I still question why on earth Windows allows a poorly programmed piece of software to override all power plan settings and create wake timers. Microsoft should take their share of the blame here.
edit: Seems like the issue was [closed 3 days ago](https://svn.boost.org/trac/boost/ticket/11368) so I guess we should expect a fix soon. |
370,429 | I have a problem with Windows 7 not sleeping.
```
PowerCfg -requests
```
says a "**Legacy Kernel Caller**" driver prevents the sleep mode.
This is not very helpful or informative.
How do I get more details about that object?
EDIT:
I found that
```
Powercfg -requestsoverride
```
is the best way of dealing with such misbehaving drivers and software.
The option [-requestsoverride](http://msdn.microsoft.com/en-us/library/ff794903.aspx) is not very well documented.
MSDN doesn't mention NAME is case sensitive, and to remove a request from overrides list you give the option with blank REQUEST parameter. | 2011/12/21 | [
"https://superuser.com/questions/370429",
"https://superuser.com",
"https://superuser.com/users/76982/"
] | Thanks for all the suggestions!
Finally I narrowed down the problem simply by trial and error, disabling devices and rebooting. It was a TV card driver that had hung and was not releasing the power request despite no longer being in use.
EDIT:
Unfortunately, the problem with the TV card is intermittently recurring. Googling shows it's also quite common. I found that disallowing the driver from making power requests with
```
Powercfg -requestsoverride Driver "Legacy Kernel Caller" System
```
solves it.
"Legacy Kernel Caller" is translated on different Windows language versions. On my Polish system it says "Starszego typu obiekt wywołujący jądro". | I had this issue and the Legacy Kernel Caller kept coming back intermittently, even though it was verifiably on the list of things to be ignored.
In case anyone still has problems like that, here's a link to a batch file + explanation of how to set up a task.....both were a learning curve which I never want to repeat!!
<https://github.com/richdyer2000/Sleepy>
The batch file basically performs the sleep management:
a main loop runs for ~300s (a standard loop with 300 iterations and a ping command with count=2 to control the duration), reading the output of 'powercfg -requests' each time. If it finds anything other than "DISPLAY:", "SYSTEM:", ..."ACTIVE LOCK SCREEN:", "None." or "[DRIVER] Legacy Kernel Caller" on a line, then the main loop is restarted.
If the end of the main loop is reached, the command "rundll32.exe powrprof.dll,SetSuspendState 0,1,0" is executed. On Windows 10, it seems it is necessary to run "powercfg -hibernate off" to get a proper sleep state, so I include this in the code prior to the sleep command to make sure. |
18,888,865 | I've looked at and tried a few of the existing solutions on the site (for example [CSS Problem to make 2 divs float side by side](https://stackoverflow.com/questions/4882206/css-problem-to-make-2-divs-float-side-by-side) and [CSS layout - Aligning two divs side by side](https://stackoverflow.com/questions/2716955/css-layout-aligning-two-divs-side-by-side)) but none of them work for me.
I'm a bit of a newbie to CSS, but I'm trying to align the title and menu on my WordPress site <http://photography.stuartbrown.name/> in a similar way to <http://www.kantryla.net/>. Whenever I use float:right on the menu area, however, the menu disappears below the image, and with float:left on the menu the image is pushed way out to the right.
I know that in order to achieve what I want I will need to reduce the size of the site title and reduce the width of the menu (perhaps by reducing the gaps between the items in the list?), but I'd really appreciate some advice on how to achieve the title and menu layout of kantryla.
You may notice that I edited the PHP of the theme to include a DIV
`<div class="stuart_menu">`
that surrounds both the title and menu, thinking that this would make the enclosed items easier to control. Not sure if that's right or not, but I can easily remove it if necessary.
Thanks for any help! | 2013/09/19 | [
"https://Stackoverflow.com/questions/18888865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1699434/"
] | Place these styles in your CSS
```
#logo {
float: left;
margin: 0 0 25px;
position: relative;
width: 20%;
}
#logo h1 {
color: #555555;
display: inline-block;
font-family: "Terminal Dosis",Arial,Helvetica,Geneva,sans-serif;
font-size: 25px;
font-weight: 200;
margin-bottom: 0.2em;
}
#menu {
float: left;
width: 80%;
}
.stuart_menu {
overflow:auto;
}
```
I guess that's it. | The menu is kinda messed up; I can't make much sense of it with all the (unneeded) elements and classes.
But basically you're on the right track: you'll need to reduce the size of both main elements (logo and menu) so they fit inside the parent div.
For instance, like this:
HTML
```
<div class="stuart_menu">
<div class="logo">logo</div>
<ul class="nav">
<li><a href="#">Home</a></li>
<li><a href="#">Blog</a></li>
<li><a href="#">Photos</a></li>
<li><a href="#">Delicious</a></li>
<li><a href="#">Twitter</a></li>
<li><a href="#">Google+</a></li>
<li><a href="#">FOAF Description</a></li>
</ul>
</div>
```
CSS:
```
.stuart_menu {
width: 600px;
}
.logo {
width: 150px;
background: red;
float: left;
}
.nav {
list-style: none;
margin: 0;
padding: 10px 0;
border-top: 1px solid gray;
border-bottom: 1px solid gray;
float: left;
}
.nav li {
display: inline-block;
}
```
Also check this [**demo**](http://jsfiddle.net/LinkinTED/ZWnBG/).
You can choose if you want to align the menu next to the logo (using `float: left`) or align it to the right side of the parent (changing the float to right). |
18,888,865 | I've looked at and tried a few of the existing solutions on the site (for example [CSS Problem to make 2 divs float side by side](https://stackoverflow.com/questions/4882206/css-problem-to-make-2-divs-float-side-by-side) and [CSS layout - Aligning two divs side by side](https://stackoverflow.com/questions/2716955/css-layout-aligning-two-divs-side-by-side)) but none of them work for me.
I'm a bit of a newbie to CSS, but I'm trying to align the title and menu on my WordPress site <http://photography.stuartbrown.name/> in a similar way to <http://www.kantryla.net/>. Whenever I use float:right on the menu area, however, the menu disappears below the image, and with float:left on the menu the image is pushed way out to the right.
I know that in order to achieve what I want I will need to reduce the size of the site title and reduce the width of the menu (perhaps by reducing the gaps between the items in the list?), but I'd really appreciate some advice on how to achieve the title and menu layout of kantryla.
You may notice that I edited the PHP of the theme to include a DIV
`<div class="stuart_menu">`
that surrounds both the title and menu, thinking that this would make the enclosed items easier to control. Not sure if that's right or not, but I can easily remove it if necessary.
Thanks for any help! | 2013/09/19 | [
"https://Stackoverflow.com/questions/18888865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1699434/"
] | Place these styles in your CSS
```
#logo {
float: left;
margin: 0 0 25px;
position: relative;
width: 20%;
}
#logo h1 {
color: #555555;
display: inline-block;
font-family: "Terminal Dosis",Arial,Helvetica,Geneva,sans-serif;
font-size: 25px;
font-weight: 200;
margin-bottom: 0.2em;
}
#menu {
float: left;
width: 80%;
}
.stuart_menu {
overflow:auto;
}
```
I guess that's it. | Any solution you try could end up modifying the look & feel of your site.
Maybe you can try to achieve this by reducing the width of the elements and making them float left.
BTW, this could mess up the entire design of the site, because the "menu" section is inserted into the main container element. So I'd rather separate the two sections.
what I'd do is:
```
#logo{ width:60%;float:left;}
nav {width:35%;float:left;}
```
To reduce the gap between the nav li elements you can reduce the margins and, to make them more recognizable, add a border-right:
```
#menu ul li{margin:22px 15px; border-right:1px solid #ccc;}
```
Hope this works |
18,888,865 | I've looked at and tried a few of the existing solutions on the site (for example [CSS Problem to make 2 divs float side by side](https://stackoverflow.com/questions/4882206/css-problem-to-make-2-divs-float-side-by-side) and [CSS layout - Aligning two divs side by side](https://stackoverflow.com/questions/2716955/css-layout-aligning-two-divs-side-by-side)) but none of them work for me.
I'm a bit of a newbie to CSS, but I'm trying to align the title and menu on my WordPress site <http://photography.stuartbrown.name/> in a similar way to <http://www.kantryla.net/>. Whenever I use float:right on the menu area, however, the menu disappears below the image, and with float:left on the menu the image is pushed way out to the right.
I know that in order to achieve what I want I will need to reduce the size of the site title and reduce the width of the menu (perhaps by reducing the gaps between the items in the list?), but I'd really appreciate some advice on how to achieve the title and menu layout of kantryla.
You may notice that I edited the PHP of the theme to include a DIV
`<div class="stuart_menu">`
that surrounds both the title and menu, thinking that this would make the enclosed items easier to control. Not sure if that's right or not, but I can easily remove it if necessary.
Thanks for any help! | 2013/09/19 | [
"https://Stackoverflow.com/questions/18888865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1699434/"
] | Place these styles in your CSS
```
#logo {
float: left;
margin: 0 0 25px;
position: relative;
width: 20%;
}
#logo h1 {
color: #555555;
display: inline-block;
font-family: "Terminal Dosis",Arial,Helvetica,Geneva,sans-serif;
font-size: 25px;
font-weight: 200;
margin-bottom: 0.2em;
}
#menu {
float: left;
width: 80%;
}
.stuart_menu {
overflow:auto;
}
```
I guess that's it. | Just changing `#logo` to include `float: left;` should put the menu up alongside the logo, to the right of it. It's then just a matter of downsizing both the logo and menu to fit within the container. The other answer should also work. |
290,003 | I have some raster files of an agricultural region. There are more than 50 small fields in the agricultural region. For each field, a shapefile has been provided. As shown in the image below, each polygon represents a field. In the attribute table of the each shapefile there is the field number along with other information (shown in image).
[](https://i.stack.imgur.com/4WxAl.png)
I also have some satellite raster data and none of them cover the whole agricultural region. Only a portion of the agricultural region is covered by one raster (an example is shown below).
[](https://i.stack.imgur.com/IDZMN.jpg)
I need to know the shapefiles' (polygon actually) name that falls inside the raster in this image. I can turn the labels on of all shapefiles and find the shapefiles' name one by one. As there are more than 20 rasters, doing that manually will take a lot of time.
Is there any way to know the shapefiles' name that falls inside the shown raster?
I am using ArcMap 10.5. | 2018/07/19 | [
"https://gis.stackexchange.com/questions/290003",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/70366/"
] | Totally agree with @stev_k that having fields in separate shapefiles is not the way to go. Do as he/she suggests: run the Merge tool to combine them into a single feature class (shapefile); this will make subsequent processing simpler.
Looking at your sample screen shots your raster appears to be a multi-band raster and what you want to do is extract the extent of the data (pixels with colour).
Below is a simple model showing you the steps.
[](https://i.stack.imgur.com/DemWs.png)
Step 1 is to extract band 1 into a raster layer, which I call "one band", so you go from your multiband raster to a single band displayed in grey scale. Note the black edge: this is the NODATA region of this raster.
[](https://i.stack.imgur.com/KFAyc.png)
Step 2 is to run the Con tool; the where clause sets any value greater than 0 to 1, and the false value is left blank, which means it gets set to NODATA. This creates a mask raster where pixels are 1 where there had been data and NODATA where there had been none.
Step 3 is to convert your raster into a polygon:
[](https://i.stack.imgur.com/lfJIq.png)
Step 4 is simply to run the Select By Location tool, using your mask polygon to select field polygons in your merged field dataset. | If all the shapefiles have the same schema I would probably merge them into a single shapefile using the Merge tool (and keep the file name as an attribute) and then run a raster analysis. And then tell whoever gave you the data that this is not the right way to do it - as has been pointed out, there is no need to have a separate shapefile for each feature - it merely creates more work for the users of the data.
You may also need to convert your raster to vector depending on what exact query you need - for example are you looking to select all shapes which are completely within the raster, or which just touch it? |
44,278,900 | In my limited experience with Scala, there is a error in my code:
```
class TagCalculation {
def test_string(arg1: String,arg2: String) = arg1 + " " + arg2
def test_int(arg1: Int,arg2: Int) = arg1 + arg2
}
val get_test = new TagCalculation
//test int, by the way it's Ok for String
val test_int_para = Array(1,2)
val argtypes2 = test_int_para.map(_.getClass)
val method2 = get_test.getClass.getMethod("test_int", argtypes2: _*)
method2.invoke(get_test,test_int_para: _*)
```
Outputs:
>
> console>:29: error: type mismatch;
> found : Array[Int]
> required:
> Array[\_ <: Object]
>
> method2.invoke(get\_test,test\_int\_para: \_\*)
>
>
>
Why did this error happened? | 2017/05/31 | [
"https://Stackoverflow.com/questions/44278900",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6449529/"
] | `invoke()`'s second parameter has type `Array[_ <: Object]`. That means the element type of the passed array must extend `Object`. In Scala, `Object` is a synonym for `AnyRef`, and `Int` isn't a subtype of `AnyRef`; it is actually a subtype of `AnyVal`. Here is a more detailed explanation: <http://docs.scala-lang.org/tutorials/tour/unified-types.html>
You passed `Array[Int]` as the second parameter; that's why the error happened. You need to pass an `Array[_ <: Object]` instead.
So, what you really need is to pass `Integer` wrappers, which are `AnyRef`s, explicitly. You can transform all the array's `Int`s to `Integer`s using the `Integer.valueOf` method. Then the `invoke()` method just works:
```
scala> method2.invoke(get_test, test_int_para.map(Integer.valueOf): _*)
res2: Object = 3
``` | Instead of using `Integer.valueOf` (or `Double`, or `Long`, etc. etc.) there is a much simpler solution:
```
val test_int_para = Array[AnyRef](1,2)
```
Or if you already have `val params: Array[Any]`, e.g. `val params = Array(1,"string",df)`:
```
val paramsAsObjects = params.map(_.asInstanceOf[AnyRef])
someMethod.invoke(someObject, paramsAsObjects: _*)
``` |
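The boxing requirement that both answers above work around comes from the JVM itself rather than from Scala: `Method.invoke(Object, Object...)` only accepts object references, so primitive `int`s must travel as `java.lang.Integer`. A minimal plain-Java sketch of the equivalent reflective call (the class and method names are illustrative):

```java
import java.lang.reflect.Method;

// Parameter types are looked up with the primitive int.class, but the
// argument values must be boxed, because Method.invoke only takes Objects.
class TagCalculation {
    public int testInt(int a, int b) { return a + b; }
}

public class Main {
    public static void main(String[] args) throws Exception {
        TagCalculation calc = new TagCalculation();
        Method m = TagCalculation.class.getMethod("testInt", int.class, int.class);
        // The boxed Integers are auto-unboxed to ints for the call, and the
        // int result is boxed back into an Integer by invoke.
        Object result = m.invoke(calc, Integer.valueOf(1), Integer.valueOf(2));
        System.out.println(result); // prints 3
    }
}
```

This is also why Scala's `Array(1, 2)` — a primitive `int[]` — is rejected where `Array[_ <: Object]` is expected: mapping to `Integer` produces the object array the JVM needs.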
18,214,739 | I'm aware I can use the [ShareLinkTask](http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh394009%28v=vs.105%29.aspx) class to share something on my favourite social network. I'm trying to add a button to share on twitter only. I don't want to enable the user to chose.
I can't find a workaround for that class, is it possible to do that? | 2013/08/13 | [
"https://Stackoverflow.com/questions/18214739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/779341/"
] | There are many ways to share something with social networks. A few are:
* [ShareLinkTask](http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh394009%28v=vs.105%29.aspx)
The user will not have to sign in, but will be presented with many networks.
* External Browser Mechanism (Recommended)
You can launch a popup with a BrowserControl in it and redirect it to a URL of the form
`<https://twitter.com/intent/tweet?text=your-tweet-text&url=http://google.com>`.
The user will be asked to sign in, with all the text filled in and ready to tweet, as shown (desktop browser):

* Dedicated WP libraries such as [TweetSharp](https://github.com/danielcrenna/tweetsharp), [LINQtoTwitter](http://linqtotwitter.codeplex.com/) and [others](https://dev.twitter.com/docs/twitter-libraries).
These will require you to use API 1.1 and send OAuth-authenticated requests. It's a little complex if you only want the Share capability. | It is not possible using the `ShareLinkTask`. You need to implement it yourself, e.g. using [TweetSharp](https://github.com/danielcrenna/tweetsharp).
9,049,474 | >
> **Possible Duplicate:**
>
> [Is there any NoSQL that is ACID compliant?](https://stackoverflow.com/questions/2608103/is-there-any-nosql-that-is-acid-compliant)
>
>
>
So, I heard NoSQL Databases are not ACID compliant, why is this? | 2012/01/28 | [
"https://Stackoverflow.com/questions/9049474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/434089/"
] | This isn't necessarily true - it depends on which particular database you're referring to. Some of them (for example Neo4j) are fully ACID compliant. Check out this link for a comparison of some NoSQL databases: <http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis> | Generally speaking, ACID compliance imposes a performance overhead. NoSQL databases gain performance by forgoing that overhead. |
20,988 | How do I run Firefox inside Wine with Windows-compatible plugins? I am a newbie. I have to complete an online training program and need the Adobe Flash Player plugin.
It seems it is not available for Linux, so what should I do here? How can Wine help?
Please tell me in baby steps. Thanks,
amith | 2011/01/11 | [
"https://askubuntu.com/questions/20988",
"https://askubuntu.com",
"https://askubuntu.com/users/-1/"
] | Open a terminal (Applications > Accessories > Terminal) and type:
```
sudo apt-get install flashplugin-installer
```
It will ask for your password.
Restart firefox afterwards. (Firefox is installed by default). | Flash player is available for Linux.
Honestly I'm a bit confused by your question, but here is a link to help you out:
<https://help.ubuntu.com/community/RestrictedFormats/Flash> |
20,988 | How do I run Firefox inside Wine with Windows-compatible plugins? I am a newbie. I have to complete an online training program and need the Adobe Flash Player plugin.
It seems it is not available for Linux, so what should I do here? How can Wine help?
Please tell me in baby steps. Thanks,
amith | 2011/01/11 | [
"https://askubuntu.com/questions/20988",
"https://askubuntu.com",
"https://askubuntu.com/users/-1/"
] | Open a terminal (Applications > Accessories > Terminal) and type:
```
sudo apt-get install flashplugin-installer
```
It will ask for your password.
Restart firefox afterwards. (Firefox is installed by default). | [PlayOnLinux](http://www.playonlinux.com/en/) is a great application for managing Wine and installing Windows programs on Linux. There's an Ubuntu package on the website, so it's easy to install.
After downloading and installing the program, you can launch it by going to **Applications > Games > PlayOnLinux**.
Follow the prompts to install Firefox for Windows and the application will also install Flash for Windows. |
2,944,232 | Is it possible to implement some kind of decorator component in wicket ?
Specially while honoring the id of the decorated component ?
Currently i try to solve this using a Border Component acting as a decorator:
Given:
```
public XXXPage()
{
MyBorder border = new MyBorder("xxx");
border.add( new Label("xxx", "Foo") ); // label just as simplification. Its really a Component that we want to surround the anchor (linkify) around.
add(border);
}
```
XXXPage.html
```
<body>
<span wicket:id="xxx"/>
</body>
```
MyBorder.html
```
<wicket:border>
<a wicket:id="link"><wicket:body/></a>
</wicket:border>
```
MyBorder.java
```
public MyBorder(String id)
{
super(id);
AjaxLink link = new AjaxLink("link")
{
@Override public void onClick(AjaxRequestTarget arg0)
{
}
};
add(link);
link.add(getBodyContainer()); // from javadoc of Wicket's Border
}
```
Gives:
```
org.apache.wicket.WicketRuntimeException: The component(s) below failed to render. A common problem is that you have added a component in code but forgot to reference it in the markup (thus the component will never be rendered).
1. [Component id = xxx]
at org.apache.wicket.Page.checkRendering(Page.java:1182)
```
So in essence, the xxx component should be wrapped in a transparent manner with a link component. This xxx is out of control.
In a "banana wicket" implementation this would be a possible pseudo code:
```
Component wrapWithAction(Component a) {
// now wrap
Link b = new Link() {
// handle link onClick and all that
}
b.wrap(a);
return b;
}
//
``` | 2010/05/31 | [
"https://Stackoverflow.com/questions/2944232",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1990802/"
] | I think you'd have better luck adding either a WebMarkupContainer or Fragment within your ListView than anything else.. These two can both contain other Components such as Links, Labels, etc..
Let me know if you need more help.. | Add an [AjaxEventBehavior](https://wicket.apache.org/docs/1.4/org/apache/wicket/ajax/AjaxEventBehavior.html) for "onclick" to the Component. The example in the Javadoc I linked does what you want.
You can add Behaviors to almost anything in Wicket, that's one of its most powerful features. |
2,944,232 | Is it possible to implement some kind of decorator component in wicket ?
Specially while honoring the id of the decorated component ?
Currently i try to solve this using a Border Component acting as a decorator:
Given:
```
public XXXPage()
{
MyBorder border = new MyBorder("xxx");
border.add( new Label("xxx", "Foo") ); // label just as simplification. Its really a Component that we want to surround the anchor (linkify) around.
add(border);
}
```
XXXPage.html
```
<body>
<span wicket:id="xxx"/>
</body>
```
MyBorder.html
```
<wicket:border>
<a wicket:id="link"><wicket:body/></a>
</wicket:border>
```
MyBorder.java
```
public MyBorder(String id)
{
super(id);
AjaxLink link = new AjaxLink("link")
{
@Override public void onClick(AjaxRequestTarget arg0)
{
}
};
add(link);
link.add(getBodyContainer()); // from javadoc of Wicket's Border
}
```
Gives:
```
org.apache.wicket.WicketRuntimeException: The component(s) below failed to render. A common problem is that you have added a component in code but forgot to reference it in the markup (thus the component will never be rendered).
1. [Component id = xxx]
at org.apache.wicket.Page.checkRendering(Page.java:1182)
```
So in essence, the xxx component should be wrapped in a transparent manner with a link component. This xxx is out of control.
In a "banana wicket" implementation this would be a possible pseudo code:
```
Component wrapWithAction(Component a) {
// now wrap
Link b = new Link() {
// handle link onClick and all that
}
b.wrap(a);
return b;
}
//
``` | 2010/05/31 | [
"https://Stackoverflow.com/questions/2944232",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1990802/"
] | Depends on what kind of stuff your decorator wants to do. The regular meaning of decorating is to have object B assume the role of object A, providing exactly the same contract, using A to implement that contract, but do something extra on top of that. I think that that's not a very common case with Widgets. Rather you are looking to reuse some part (the UI and state, maybe behavior). In general, in my opinion, this works best through using panels and markup inheritance.
Technically, borders are the out-of-the-box reusable solution for decorating, but in practice they prove to be a bit hairy to work with. For relatively straightforward way of doing simple decorations, see my answer on [Generating commented-out content with Wicket](https://stackoverflow.com/questions/3933921/generating-commented-out-content-with-wicket/) (which uses the somewhat undocumented way Wicket's rendering pipeline works). Also shows that Wicket's behaviors are a very flexible way to modify behavior of existing components without requiring those components themselves to be changed. Other than that, I would just design for reuse explicitly. | Add an [AjaxEventBehavior](https://wicket.apache.org/docs/1.4/org/apache/wicket/ajax/AjaxEventBehavior.html) for "onclick" to the Component. The example in the Javadoc I linked does what you want.
You can add Behaviors to almost anything in Wicket, that's one of its most powerful features. |
9,140 | We Have a web site that employees must check into at a specific time each day. How can I make the site automatically open on each users computer at a certain time each day.
Thanks | 2009/05/17 | [
"https://serverfault.com/questions/9140",
"https://serverfault.com",
"https://serverfault.com/users/2786/"
] | You could set up a Windows scheduled task that opens the link to the website.
And push the task out through your Windows server with a GPO to each of the computers on the network. | You could keep it simple with [Windows Task Scheduler](http://support.microsoft.com/kb/308569). Just create a shortcut to IE and add the URL to the shortcut.
9,140 | We Have a web site that employees must check into at a specific time each day. How can I make the site automatically open on each users computer at a certain time each day.
Thanks | 2009/05/17 | [
"https://serverfault.com/questions/9140",
"https://serverfault.com",
"https://serverfault.com/users/2786/"
] | You could set up a Windows scheduled task that opens the link to the website.
And push the task out through your Windows server with a GPO to each of the computers on the network. | What specifically do you need the users to do?
* Open the page?
* Read the page?
* Read and submit a response?
If the first, a simple scheduled task should suffice, as suggested by the accepted answer.
But if the second or third? Then it gets a lot harder.
Unfortunately there is no way to force someone to read the page, since for all you know, the person you expect to read it might not even be near the computer at that time.
And if you don't require anyone to read it, or can't guarantee that anyone will read it, why exactly do you need *everyone* to open that page then?
Sounds like a process with a flaw to me, to be honest.
Questions to ponder:
* What if the user has his machine on, but isn't around at that time? (gone for coffee, or even gone for the day, forgot to log off and turn off his computer)
* What if the machine is off at that time? What was the user expected to do on the website that he won't be doing that day
+ He could have just left
+ or he could arrive at work after the fact
* What if the user initiated a shutdown just as the page pops open? (race condition) |
9,140 | We Have a web site that employees must check into at a specific time each day. How can I make the site automatically open on each users computer at a certain time each day.
Thanks | 2009/05/17 | [
"https://serverfault.com/questions/9140",
"https://serverfault.com",
"https://serverfault.com/users/2786/"
] | You could set up a Windows scheduled task that opens the link to the website.
And push the task out through your Windows server with a GPO to each of the computers on the network. | Threaten the employees with termination if they don't.
Simply opening a website at a certain time every day will accomplish nothing. I'd treat it just like I treat a popup ad that gets past Firefox's adblock plugin - closing it before it ever gets a chance to load.
If it's truly vital to your business that people check this page every so often, it should be noticeable if no one checks it, and there should be a consequence for not doing it. |
9,140 | We Have a web site that employees must check into at a specific time each day. How can I make the site automatically open on each users computer at a certain time each day.
Thanks | 2009/05/17 | [
"https://serverfault.com/questions/9140",
"https://serverfault.com",
"https://serverfault.com/users/2786/"
] | What specifically do you need the users to do?
* Open the page?
* Read the page?
* Read and submit a response?
If the first, a simple scheduled task should suffice, as suggested by the accepted answer.
But if the second or third? Then it gets a lot harder.
Unfortunately there is no way to force someone to read the page, since for all you know, the person you expect to read it might not even be near the computer at that time.
And if you don't require anyone to read it, or can't guarantee that anyone will read it, why exactly do you need *everyone* to open that page then?
Sounds like a process with a flaw to me, to be honest.
Questions to ponder:
* What if the user has his machine on, but isn't around at that time? (gone for coffee, or even gone for the day, forgot to log off and turn off his computer)
* What if the machine is off at that time? What was the user expected to do on the website that he won't be doing that day
+ He could have just left
+ or he could arrive at work after the fact
* What if the user initiated a shutdown just as the page pops open? (race condition) | You could keep it simple with [Windows Task Scheduler](http://support.microsoft.com/kb/308569). Just create a shortcut to IE and add the URL to the shortcut. |
9,140 | We Have a web site that employees must check into at a specific time each day. How can I make the site automatically open on each users computer at a certain time each day.
Thanks | 2009/05/17 | [
"https://serverfault.com/questions/9140",
"https://serverfault.com",
"https://serverfault.com/users/2786/"
] | You could keep it simple with [Windows Task Scheduler](http://support.microsoft.com/kb/308569). Just create a shortcut to IE and add the URL to the shortcut. | Threaten the employees with termination if they don't.
Simply opening a website at a certain time every day will accomplish nothing. I'd treat it just like I treat a popup ad that gets past Firefox's adblock plugin - closing it before it ever gets a chance to load.
If it's truly vital to your business that people check this page every so often, it should be noticeable if no one checks it, and there should be a consequence for not doing it. |
9,140 | We Have a web site that employees must check into at a specific time each day. How can I make the site automatically open on each users computer at a certain time each day.
Thanks | 2009/05/17 | [
"https://serverfault.com/questions/9140",
"https://serverfault.com",
"https://serverfault.com/users/2786/"
] | What specifically do you need the users to do?
* Open the page?
* Read the page?
* Read and submit a response?
If the first, a simple scheduled task should suffice, as suggested by the accepted answer.
But if the second or third? Then it gets a lot harder.
Unfortunately there is no way to force someone to read the page, since for all you know, the person you expect to read it might not even be near the computer at that time.
And if you don't require anyone to read it, or can't guarantee that anyone will read it, why exactly do you need *everyone* to open that page then?
Sounds like a process with a flaw to me, to be honest.
Questions to ponder:
* What if the user has his machine on, but isn't around at that time? (gone for coffee, or even gone for the day, forgot to log off and turn off his computer)
* What if the machine is off at that time? What was the user expected to do on the website that he won't be doing that day
+ He could have just left
+ or he could arrive at work after the fact
* What if the user initiated a shutdown just as the page pops open? (race condition) | Threaten the employees with termination if they don't.
Simply opening a website at a certain time every day will accomplish nothing. I'd treat it just like I treat a popup ad that gets past Firefox's adblock plugin - closing it before it ever gets a chance to load.
If it's truly vital to your business that people check this page every so often, it should be noticeable if no one checks it, and there should be a consequence for not doing it. |
35,546,288 | I am new to Rust and I was reading the [Dining Philosophers' tutorial](http://doc.rust-lang.org/book/dining-philosophers.html) when I found this:
```
Mutex::new(())
```
I don't know what the argument inside `new` means. I read the [documentation for `Mutex`](http://doc.rust-lang.org/std/sync/struct.Mutex.html#methods) and I still have no idea what it means. I would appreciate an explanation about what is happening under the hood. | 2016/02/22 | [
"https://Stackoverflow.com/questions/35546288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2115632/"
] | `()` is simply a tuple with no values; a 0-tuple. The type and the value are spelled the same, both `()`. The type is sometimes known as the "unit type"; it used to actually be a distinct type in the compiler, but now is just treated as a degenerate tuple. It is a 0-sized type; objects of this type won't ever actually take up any space, though it is a [`Sized`](http://doc.rust-lang.org/std/marker/trait.Sized.html) type, just with a size of 0.
It is used for cases where you need to have a value or a type, but you have nothing relevant to put there. For instance, if you have a function that doesn't return a value, and call it in a place that expects a value, you find that it actually returns the value `()` of type `()`.
```
fn nothing() {}
fn main() {
println!("{:?}", nothing());
}
```
That prints `()` ([playpen](https://play.rust-lang.org/?gist=cd99c94c951220b00e17&version=stable)).
Another use is when you have a generic type like `Result<T, E>`, which indicates a success or failure of some operation, and can hold either the result of the successful operation, or an error indicating why it failed. Some operations, such as [`std::io::write`](http://doc.rust-lang.org/std/io/trait.Write.html) which have no value to return if successful but want to be able to indicate an error, will return a [`std::io::Result`](http://doc.rust-lang.org/std/io/type.Result.html)`<()>`, which is actually a synonym for [`Result`](http://doc.rust-lang.org/std/result/enum.Result.html)`<(), std::io::Error>`; that allows the function to return `Ok(())` in the success case, but some meaningful error when it fails.
You might compare it to `void` in C or C++, which are also used for a lack of return value. However, you cannot ever write an object that has type `void`, which makes `void` much less useful in generic programming; you could never have an equivalent `Result<void, Error>` type, because you couldn't ever construct the `Ok` case.
In this case, a `Mutex` normally wraps and object that you want to access; so you can put that object into the mutex, and then access it from the guard that you get when you lock the mutex. However, in this example there is no actual data being guarded, so `()` is used since you need to put something in there, and `Mutex` is generic over the type so it can accept any type. | `()` is the empty [tuple](http://rustbyexample.com/primitives/tuples.html), also called the [unit type](https://doc.rust-lang.org/grammar.html#unit-expressions) -- a tuple with no member types. It is also the only valid value of said type. It has [a size of zero](https://doc.rust-lang.org/nomicon/exotic-sizes.html#zero-sized-types-zsts) (note that it is still `Sized`, just with a size of 0), making it nonexistent at runtime. This has several useful effects, one of which is being used here.
Here, `()` is used to create a `Mutex` with no owned data -- it's just an unlockable and lockable mutex. If we explicitly write out the type inference with the [turbofish operator](https://github.com/steveklabnik/rust/commit/4f22b4d1dbaa14da92be77434d9c94035f24ca5d#diff-ced4ae040c5c8672d936a581401ef9ceR1333) `::<>`, we could also write:
```
Mutex::<()>::new( () )
```
That is, we're creating a `new` `Mutex` that contains a `()` with the initial value `()`. |
102,052 | I've got a new Windows 7 PC added to my LAN.
I have two Windows XP (SP3) PCs connected to it also and one of them is visible but when I go into:
Control Panel>Network and Internet>Network Map that XP machine is listed as "discovered device(s) could not be placed on map.
And the 2nd XP machine (also SP3) doesn't even show up at all.
However, both XP Machines are accessible if I browse through Windows Exlorer to Network and expand that branch. But if I click on Network they are not shown in the Detail pane to the right.
All computers are:
* Up to date (XP machines on SP3)
* in the same workgroup
* Win 7 PC can browse to the XP machines but they don't show up properly in the Network diagram which makes me think something is amiss. | 2010/01/29 | [
"https://superuser.com/questions/102052",
"https://superuser.com",
"https://superuser.com/users/5001/"
] | Last time I did this, with an ASUS motherboard and Windows 7, the flashing tool crashed in the middle of the flash operation. Bricked motherboard! I had to buy a new motherboard (and I still don't know what to do with the bricked one).
I strongly recommend against flashing inside the OS. You'd better use other means: most recent motherboards support flashing from a USB key nowadays. | There is no risk when you flash from Windows, as long as you stop all running programs and do not power off while updating the BIOS.
I have been flashing from Windows from the beginning (for several years now), on lots of motherboards (ASUS, Gigabyte, Biostar, ASRock), and there was never a problem.
So yes, just run the flash program without any concern.
17,182,425 | I'm trying to make a page with a form which would include a select dropdown menu. I'd like to have the select options come from the collection, rather than manually type them in the HTML. So far no luck. This is my code:
**html:**
```
<template name="addPage">
<div id="addForm">
<form>
<ul>
<li>
<label>Select a genre:</label>
<select id = "genreList">
{{#each genres}}
{{> genre}}
{{/each}}
</select>
</li>
<li><input type="submit" value="Submit"></li>
</ul>
</form>
</div>
</template>
<template name="genre">
<option value="{{genre}}">{{genre}}</option>
</template>
```
**js:** (Using mongodb-aggregation for the distinct call)
```
Template.addPage.genres = function () {
Activities.distinct("genre", function(error, result){
var returnArray = new Array();
for(var i in result) {
returnArray[i] = { 'genre': result[i] };
}
return returnArray;
});
}
```
With this code, the select dropdown form is empty. Is what I'm trying to do possible?
PS. I think perhaps the function Template.addPage.genres is returning before the array is filled...
Thank you! | 2013/06/19 | [
"https://Stackoverflow.com/questions/17182425",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/838485/"
] | The problem is in dequeue where you decrement `queue->memory_allocated`. What is happening is this: you create an empty\_queue. You start adding elements to the array - this increases the size by 16. We keep entering elements until the 16th time and then we increase the size to 32. And finish using the first 20 of these.
The array in memory at this time is used for the first twenty and then unused for the last 12. Then we start calling dequeue and remove 10 items. Size is now equal to 10 and memory\_allocated/u32 is equal to 22. You start adding more elements and add 12 more elements (in the 12 free spaces that we had between number 20 and 32) at which point size == memory\_allocated/u32 so we increment the memory\_allocated by 16. Memory allocated is now equal to 38.
The array in memory at this point, though, has the first 10 slots free (already dequeued) and slots 10 through 31 in use, so Size is 22.
We start adding more elements and on the 6th one of these we write *past* the end of the array. Anything that happens now is fair game. You have corrupted memory and obviously write over something of importance eventually. | Dave is correct, but I really think you want to re-think this code. After you add 20 values, and then subtract 10, you get memory that looks like this (not to scale):
```
queue->array        beginning of queue  end of queue            end of buffer
v                   v                   v                       v
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
0                   10                  20                      32
```
So 'Size' is 10, and memory\_allocated is really bizarre, because you increment it first by 16\*sizeof(u32), and then by just plain 16 in increment\_queue() (which I think is a bug).
But most importantly, then you call realloc() with queue->array as the pointer... if you're really trying to resize the queue to 10 elements at that point, what you'll actually do is truncate the buffer to 10 elements starting at 0.. throwing away all the values in the valid part of the queue (between 10 and 20).
If this example doesn't help.. think about what happens when you add 20 elements, and then dequeue 20 elements. FirstElem and LastElem = 20, size = 0, first and last never get reset.. if you go on to add 16 more elements, you'll call realloc on queue->array with size 16, but none of the new 16 elements you added will be in the reallocated buffer. realloc will likely truncate from the queue->array to queue->array+16\*sizeof(u32).. but your elements will be at queue->array+20, and now in unallocated memory.
I think you need to rethink your whole algorithm. Or if you're just looking for a simple queue implementation on a unix system, look at 'man queue' (or look in /usr/include/sys/queue.h). |
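Both answers describe the same failure mode: the head/tail indices and `memory_allocated` drift out of sync with where the live elements actually sit, so `realloc` ends up truncating or leaking the valid region. One resize strategy that avoids this is to copy the live elements into the new buffer in logical order and reset the head index on every growth. A minimal sketch in Python for brevity (the original is C; the class and method names here are ours, not from the question):

```python
class RingQueue:
    """Growable FIFO queue over a fixed-size buffer.

    On every resize, live elements (which may wrap around the end of
    the buffer) are copied into the new buffer in logical order and the
    head index is reset, so indices and capacity can never disagree."""

    def __init__(self, capacity=16):
        self._buf = [None] * capacity
        self._head = 0   # index of the oldest element
        self._size = 0   # number of live elements

    def _grow(self):
        new_buf = [None] * (len(self._buf) * 2)
        # Copy live elements in FIFO order; they may wrap around.
        for i in range(self._size):
            new_buf[i] = self._buf[(self._head + i) % len(self._buf)]
        self._buf = new_buf
        self._head = 0   # indices are reset after every resize

    def enqueue(self, value):
        if self._size == len(self._buf):
            self._grow()
        self._buf[(self._head + self._size) % len(self._buf)] = value
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("dequeue from empty queue")
        value = self._buf[self._head]
        self._buf[self._head] = None
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return value

    def __len__(self):
        return self._size
```

Note how the mixed add/remove pattern from the question (add 20, remove 10, keep adding) is exactly the case the wrap-around copy in `_grow` handles.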
29,390,356 | Are IceCandidate and SDP fixed values? Is this a good idea to store them in a server database instead of retrieving on every connection? If updating these data is unavoidable, when should I do it? | 2015/04/01 | [
"https://Stackoverflow.com/questions/29390356",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3247703/"
] | No, they are not fixed values. ICE candidates locate the user in the network topology they reside in *at present*; unless you have a static IP (which almost nobody has) AND a wired internet connection AND a static LAN address AND a desktop computer that connects solely through these means (and never also through, say, wifi), this will likely change hourly, daily or weekly.
SDP additionally contains the media setup for a call and other information, which can change from call to call, and even mid-call (requiring re-negotiation) if video or audio sources are added, removed or altered during the call. SDP may additionally contain other things that expire, but hopefully this is enough to dissuade you. | No, they are not. ICE candidates contain the endpoint's IP and port combination, which can change. Even if you have a static IP address, a new port number is generated every time.
29,158,103 | The BSD/POSIX socket API `recvfrom()` call (made available to C or C++ programmers via the `<sys/socket.h>` header file) provides a source address "out" parameter, `struct sockaddr *src_addr`, which stores the IP address of the remote server that sent the received datagram.
For any application that *sends* UDP datagrams to some remote endpoint, and then receives a response (such as, for example, a DNS resolver), is it considered a necessary security precaution to always make sure that any received datagram is from the same IP address as the last *sent* datagram (i.e. the address used in the previous `sendto` call?)
In other words, if we call `sendto` and send a datagram to some address, should we always make sure that a corresponding `recvfrom` call is from the *same address*?
It seems that this might not be feasible, considering that a response datagram might legitimately originate from a different IP if the remote server is behind a firewall, or part of some distributed system with multiple IP addresses.
But, if we don't verify that a received datagram is from the *same* IP address as the address from the last `sendto` call, what's to prevent some attacker from intercepting datagrams, and then sending malicious datagrams to the client? | 2015/03/20 | [
"https://Stackoverflow.com/questions/29158103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2923952/"
] | How do you know that the received packet is a reply? Normally that's done using the source address and port from the received packet.
However, it is impossible to verify the source address in a UDP packet. A sender can place any source address they want to. So the check is only sufficient if you trust all the packets floating around on the internet, which is obviously not feasible.
So you need some additional mechanism: randomized cookies, sequence numbers, etc. etc. | Some NATs support UDP hole-punching, which does exactly the IP validation you mentioned, so it's not necessary to do it in the application.
For a custom protocol, you may want to implement some sort of sequence number in your payload to further increase the security level.
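To make the "additional mechanism" concrete, here is a small sketch in Python (whose `socket` module wraps the same BSD `sendto`/`recvfrom` calls) combining both checks: the reply must come from the address we sent to, *and* must echo back a random transaction ID we attached. The helper names and the 4-byte cookie format are illustrative assumptions, not part of any real protocol:

```python
import os
import socket

def send_query(sock, server_addr, payload):
    # Prepend a random 4-byte transaction ID ("cookie"). An off-path
    # attacker who spoofs the server's source address must also guess
    # this value for a forged reply to be accepted.
    txid = os.urandom(4)
    sock.sendto(txid + payload, server_addr)
    return txid

def recv_reply(sock, server_addr, txid, bufsize=512):
    data, src = sock.recvfrom(bufsize)
    if src != server_addr:   # reply must come from where we sent
        return None
    if data[:4] != txid:     # ...and must echo our random cookie
        return None
    return data[4:]
```

A real client would loop on `recvfrom` with a timeout, discarding datagrams that fail either check; the address check alone is weak, since UDP source addresses are trivially forged.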
29,158,103 | The BSD/POSIX socket API `recvfrom()` call (made available to C or C++ programmers via the `<sys/socket.h>` header file) provides a source address "out" parameter, `struct sockaddr *src_addr`, which stores the IP address of the remote server that sent the received datagram.
For any application that *sends* UDP datagrams to some remote endpoint, and then receives a response (such as, for example, a DNS resolver), is it considered a necessary security precaution to always make sure that any received datagram is from the same IP address as the last *sent* datagram (i.e. the address used in the previous `sendto` call?)
In other words, if we call `sendto` and send a datagram to some address, should we always make sure that a corresponding `recvfrom` call is from the *same address*?
It seems that this might not be feasible, considering that a response datagram might legitimately originate from a different IP if the remote server is behind a firewall, or part of some distributed system with multiple IP addresses.
But, if we don't verify that a received datagram is from the *same* IP address as the address from the last `sendto` call, what's to prevent some attacker from intercepting datagrams, and then sending malicious datagrams to the client? | 2015/03/20 | [
"https://Stackoverflow.com/questions/29158103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2923952/"
] | >
> For any application that sends UDP datagrams to some remote endpoint, and then receives a response (such as, for example, a DNS resolver),
>
>
>
You could use a random outbound port number. This is how [DNS can mitigate spoofing attacks](https://security.stackexchange.com/a/15321/8340).
>
> is it considered a necessary security precaution to always make sure that any received datagram is from the same IP address as the last sent datagram (i.e. the address used in the previous sendto call?)
>
>
> In other words, if we call sendto and send a datagram to some address, should we always make sure that a corresponding recvfrom call is from the same address?
>
>
>
Not just for security, but for functionality. If you have multiple outbound connections you need to know which UDP replies correspond to each connection. This can be achieved via IP address & port combination on the remote side.
>
> It seems that this might not be feasible, considering that a response
> datagram might legitimately originate from a different IP if the
> remote server is behind a firewall, or part of some distributed system
> with multiple IP addresses.
>
>
>
A remote system should send the reply via the same interface that it received it on. If it doesn't, it will be "non-standard" and won't work with your application, or others where it needs to receive and reply to UDP packets.
>
> But, if we don't verify that a received datagram is from the same IP
> address as the address from the last sendto call, what's to prevent
> some attacker from intercepting datagrams, and then sending malicious
> datagrams to the client?
>
>
>
Nothing. If an attacker can MITM a UDP connection then they can intercept and change whatever they want. They could just send UDP packets with spoofed IP addresses to your service.
If you need to preserve the integrity and confidentiality of packets, you need to implement some sort of encryption. For example, check out [DTLS](http://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security) if you need to support datagrams. | Some NATs support UDP hole-punching, which does exactly the IP validation you mentioned, so it's not necessary to do it in the application.
For a custom protocol, you may want to implement some sort of sequence number in your payload to further increase the security level.
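The source-port randomization mentioned in the first answer can be sketched as follows (Python for brevity; the port range and retry count are arbitrary illustrative choices, not from any spec). The client binds its UDP socket to a randomly chosen port, so a blind spoofer has to guess the source port in addition to any transaction ID:

```python
import random
import socket

def bind_random_port(host="127.0.0.1", tries=32):
    # Pick the source port ourselves instead of letting the OS assign a
    # (possibly predictable) ephemeral port; retry if the port is taken.
    for _ in range(tries):
        port = random.randint(20000, 60000)  # illustrative range
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sock.bind((host, port))
            return sock, port
        except OSError:
            sock.close()
    raise RuntimeError("no free port found")
```

This is the same idea DNS resolvers adopted after the Kaminsky-style cache-poisoning attacks: each extra random bit in the port multiplies the attacker's guessing work.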
29,158,103 | The BSD/POSIX socket API `recvfrom()` call (made available to C or C++ programmers via the `<sys/socket.h>` header file) provides a source address "out" parameter, `struct sockaddr *src_addr`, which stores the IP address of the remote server that sent the received datagram.
For any application that *sends* UDP datagrams to some remote endpoint, and then receives a response (such as, for example, a DNS resolver), is it considered a necessary security precaution to always make sure that any received datagram is from the same IP address as the last *sent* datagram (i.e. the address used in the previous `sendto` call?)
In other words, if we call `sendto` and send a datagram to some address, should we always make sure that a corresponding `recvfrom` call is from the *same address*?
It seems that this might not be feasible, considering that a response datagram might legitimately originate from a different IP if the remote server is behind a firewall, or part of some distributed system with multiple IP addresses.
But, if we don't verify that a received datagram is from the *same* IP address as the address from the last `sendto` call, what's to prevent some attacker from intercepting datagrams, and then sending malicious datagrams to the client? | 2015/03/20 | [
"https://Stackoverflow.com/questions/29158103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2923952/"
] | How do you know that the received packet is a reply? Normally that's done using the source address and port from the received packet.
However, it is impossible to verify the source address in a UDP packet. A sender can place any source address they want to. So the check is only sufficient if you trust all the packets floating around on the internet, which is obviously not feasible.
So you need some additional mechanism: randomized cookies, sequence numbers, etc. etc. | In the general case it isn't true that an arbitrarily received datagram is a response to the previous request. You have to filter, e.g. via `connect()`, to ensure that you only process datagrams that really are responses.
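The `connect()` filtering the second answer mentions works for datagram sockets too: after connecting a UDP socket to a peer address, the kernel (on POSIX systems) silently discards incoming datagrams whose source is not that peer, and you can use plain `send()`/`recv()` instead of `sendto()`/`recvfrom()`. A minimal Python sketch, assuming loopback addresses for illustration:

```python
import socket

def make_filtered_client(peer_addr):
    # connect() on a UDP socket sends nothing on the wire; it just fixes
    # the default destination and makes the kernel discard datagrams
    # that do not originate from peer_addr.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect(peer_addr)
    return sock
```

Note that this only addresses misdirected or casually spoofed traffic; an attacker who can forge the peer's address and port still gets through, which is why the cookie/sequence-number mechanisms above remain necessary.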
29,158,103 | The BSD/POSIX socket API `recvfrom()` call (made available to C or C++ programmers via the `<sys/socket.h>` header file) provides a source address "out" parameter, `struct sockaddr *src_addr`, which stores the IP address of the remote server that sent the received datagram.
For any application that *sends* UDP datagrams to some remote endpoint, and then receives a response (such as, for example, a DNS resolver), is it considered a necessary security precaution to always make sure that any received datagram is from the same IP address as the last *sent* datagram (i.e. the address used in the previous `sendto` call?)
In other words, if we call `sendto` and send a datagram to some address, should we always make sure that a corresponding `recvfrom` call is from the *same address*?
It seems that this might not be feasible, considering that a response datagram might legitimately originate from a different IP if the remote server is behind a firewall, or part of some distributed system with multiple IP addresses.
But, if we don't verify that a received datagram is from the *same* IP address as the address from the last `sendto` call, what's to prevent some attacker from intercepting datagrams, and then sending malicious datagrams to the client? | 2015/03/20 | [
"https://Stackoverflow.com/questions/29158103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2923952/"
] | How do you know that the received packet is a reply? Normally that's done using the source address and port from the received packet.
However, it is impossible to verify the source address in a UDP packet. A sender can place any source address they want to. So the check is only sufficient if you trust all the packets floating around on the internet, which is obviously not feasible.
So you need some additional mechanism: randomized cookies, sequence numbers, etc. etc. | >
> For any application that sends UDP datagrams to some remote endpoint, and then receives a response (such as, for example, a DNS resolver),
>
>
>
You could use a random outbound port number. This is how [DNS can mitigate spoofing attacks](https://security.stackexchange.com/a/15321/8340).
>
> is it considered a necessary security precaution to always make sure that any received datagram is from the same IP address as the last sent datagram (i.e. the address used in the previous sendto call?)
>
>
> In other words, if we call sendto and send a datagram to some address, should we always make sure that a corresponding recvfrom call is from the same address?
>
>
>
Not just for security, but for functionality. If you have multiple outbound connections you need to know which UDP replies correspond to each connection. This can be achieved via IP address & port combination on the remote side.
>
> It seems that this might not be feasible, considering that a response
> datagram might legitimately originate from a different IP if the
> remote server is behind a firewall, or part of some distributed system
> with multiple IP addresses.
>
>
>
A remote system should send the reply via the same interface that it received it on. If it doesn't, it will be "non-standard" and won't work with your application, or others where it needs to receive and reply to UDP packets.
>
> But, if we don't verify that a received datagram is from the same IP
> address as the address from the last sendto call, what's to prevent
> some attacker from intercepting datagrams, and then sending malicious
> datagrams to the client?
>
>
>
Nothing. If an attacker can MITM a UDP connection then they can intercept and change whatever they want. They could just send UDP packets with spoofed IP addresses to your service.
If you need to preserve integrity and confidentiality of packets, you need to implement some sort of encryption. For example, check out [DTLS](http://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security) if you need to support datagrams.
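The transaction-ID mitigation mentioned above can be sketched on loopback sockets — a hypothetical Python stand-in for a resolver exchange (the payload format and names are made up for illustration):

```python
import os
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
client.settimeout(2.0)

# DNS-style mitigation: prepend a random transaction ID the reply must echo.
txid = os.urandom(2)
client.sendto(txid + b"query", server.getsockname())

payload, peer = server.recvfrom(512)
server.sendto(payload[:2] + b"answer", peer)  # echo the ID back

data, src = client.recvfrom(512)
# Accept the datagram only if BOTH the source address and the ID match;
# an off-path attacker would have to guess the ID (and the client port).
accepted = (src == server.getsockname()) and data.startswith(txid)
print(accepted)  # True
```

Combined with a randomized client port, this raises the bar for blind spoofing, but it still does nothing against an on-path attacker — that is what DTLS-style encryption is for.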
29,158,103 | The BSD/POSIX socket API `recvfrom()` call (made available to C or C++ programmers via the `<sys/socket.h>` header file) provides a source address "out" parameter, `struct sockaddr *src_addr`, which stores the IP address of the remote server that sent the received datagram.
For any application that *sends* UDP datagrams to some remote endpoint, and then receives a response (such as, for example, a DNS resolver), is it considered a necessary security precaution to always make sure that any received datagram is from the same IP address as the last *sent* datagram (i.e. the address used in the previous `sendto` call?)
In other words, if we call `sendto` and send a datagram to some address, should we always make sure that a corresponding `recvfrom` call is from the *same address*?
It seems that this might not be feasible, considering that a response datagram might legitimately originate from a different IP if the remote server is behind a firewall, or part of some distributed system with multiple IP addresses.
But, if we don't verify that a received datagram is from the *same* IP address as the address from the last `sendto` call, what's to prevent some attacker from intercepting datagrams, and then sending malicious datagrams to the client? | 2015/03/20 | [
"https://Stackoverflow.com/questions/29158103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2923952/"
] | >
> For any application that sends UDP datagrams to some remote endpoint, and then receives a response (such as, for example, a DNS resolver),
>
>
>
You could use a random outbound port number. This is how [DNS can mitigate spoofing attacks](https://security.stackexchange.com/a/15321/8340).
>
> is it considered a necessary security precaution to always make sure that any received datagram is from the same IP address as the last sent datagram (i.e. the address used in the previous sendto call?)
>
>
> In other words, if we call sendto and send a datagram to some address, should we always make sure that a corresponding recvfrom call is from the same address?
>
>
>
Not just for security, but for functionality. If you have multiple outbound connections you need to know which UDP replies correspond to each connection. This can be achieved via IP address & port combination on the remote side.
>
> It seems that this might not be feasible, considering that a response
> datagram might legitimately originate from a different IP if the
> remote server is behind a firewall, or part of some distributed system
> with multiple IP addresses.
>
>
>
A remote system should send the reply via the same interface that it received it on. If it doesn't, it will be "non-standard" and won't work with your application, or others where it needs to receive and reply to UDP packets.
>
> But, if we don't verify that a received datagram is from the same IP
> address as the address from the last sendto call, what's to prevent
> some attacker from intercepting datagrams, and then sending malicious
> datagrams to the client?
>
>
>
Nothing. If an attacker can MITM a UDP connection then they can intercept and change whatever they want. They could just send UDP packets with spoofed IP addresses to your service.
If you need to preserve integrity and confidentiality of packets, you need to implement some sort of encryption. For example, check out [DTLS](http://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security) if you need to support datagrams. | In the general case it isn't true that an arbitrarily received datagram is a response to the previous request. You have to filter, e.g. via `connect()`, to ensure that you only process responses that are responses.
67,783,082 | I'm having trouble with Node 16 and ES6. I'm trying to make a file upload controller, but I'm stuck with req.file.stream, which is undefined.
I'm using multer to handle upload files.
The first issue was `__dirname` being undefined, which I was able to fix with `path` and `new URL()`.
The error I got with pipeline:
```
node:internal/process/promises:246
triggerUncaughtException(err, true /* fromPromise */);
^
TypeError [ERR_INVALID_ARG_TYPE]: The "source" argument must be of type function or an instance of Stream, Iterable, or AsyncIterable. Received undefined
```
my userRoutes.js
```
import express from "express";
import { signin, signup, logout } from "../Controller/AuthController.js";
import {
getUsers,
getUser,
updateUser,
deleteUser,
follow,
unfollow,
} from "../Controller/UserController.js";
import { upload } from "../Controller/UploadController.js";
import multer from "multer";
const router = express.Router();
// Auth
router.post("/signin", signin);
router.post("/signup", signup);
router.post("/logout", logout);
// users
router.get("/", getUsers);
router.get("/:id", getUser);
router.patch("/:id", updateUser);
router.delete("/:id", deleteUser);
router.patch("/follow/:id", follow);
router.patch("/unfollow/:id", unfollow);
// upload
router.post("/upload", multer().single("file"), upload);
export default router;
```
And my UploadController.js
```
import fs from "fs";
import { promisify } from "util";
import stream from "stream";
const pipeline = promisify(stream.pipeline);
// const { uploadErrors } = require("../utils/errors.utils");
import path from "path";
const __dirname = path.dirname(new URL(import.meta.url).pathname);
export const upload = async (req, res) => {
try {
// console.log(req.file);
console.log(__dirname);
if (
!req.file.mimetype == "image/jpg" ||
!req.file.mimetype == "image/png" ||
!req.file.mimetype == "image/jpeg"
)
throw Error("invalid file");
if (req.file.size > 2818128) throw Error("max size");
} catch (err) {
const errors = uploadErrors(err);
return res.status(201).json({ err });
}
const fileName = req.body.name + ".jpg";
await pipeline(
req.file.stream,
fs.createWriteStream(
`${__dirname}/../client/public/uploads/profil/${fileName}`
)
);
try {
await User.findByIdAndUpdate(
req.body.userId,
{ $set: { picture: "./uploads/profil/" + fileName } },
{ new: true, upsert: true, setDefaultsOnInsert: true },
(err, docs) => {
if (!err) return res.send(docs);
else return res.status(500).send({ message: err });
}
);
} catch (err) {
return res.status(500).send({ message: err });
}
};
``` | 2021/06/01 | [
"https://Stackoverflow.com/questions/67783082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13483916/"
Multer gives you the file as a Buffer, not a Stream. `req.file.stream` is not a valid property, but `req.file.buffer` is: <https://github.com/expressjs/multer#file-information>.
From the look of your code, you're trying to save the file on disk. You can use multer's [`DiskStorage`](https://github.com/expressjs/multer#diskstorage) for that. Create a storage instance and pass it to the multer instance as a configuration:
```js
const storage = multer.diskStorage({
destination: function (req, file, cb) {
cb(null, `${__dirname}/../client/public/uploads/profil/`);
},
filename: function (req, file, cb) {
cb(null, req.body.name + '.jpg');
},
});
const uploadMiddleware = multer({ storage });

// `upload` here is the controller imported from UploadController.js —
// naming the multer instance differently avoids shadowing it
router.post('/upload', uploadMiddleware.single('file'), upload);
```
Have a look at this free [Request Parsing in Node.js Guide](https://maximorlov.com/request-parsing-nodejs-guide/) for working with file uploads in Node.js. | If you want to use `req.file.stream`, you will need to install this version of multer:
```
npm install --save multer@^2.0.0-rc.1
```
and your code will work perfectly; just change your `req.file.mimetype` to `req.file.detectedMimeType`!
41,789,697 | I am new to Android app development. While creating a spinner, I noticed an extra space / padding vertically in the drop-down list of the spinner, at the start and end of the drop-down.
MainActivity.java:
```
public class MainActivity extends AppCompatActivity
{
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Spinner sr = (Spinner) findViewById(R.id.spinner);
String[] days = getResources().getStringArray(R.array.days);
ArrayAdapter<String> ar = new ArrayAdapter<>(this, R.layout.single_row, days);
sr.setAdapter(ar);
}
}
```
activity\_main.xml
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/activity_main"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="#666666"
tools:context="com.xxxxx.defaultspinner.MainActivity">
<Spinner
android:id="@+id/spinner"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_marginTop="20dp"
android:spinnerMode="dialog"
android:background="#898989">
</Spinner>
</RelativeLayout>
```
single\_row.xml
```
<TextView
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/app_name"
android:textColor="#FFFFFF"
android:textSize="26sp"
android:background="#214161">
</TextView>
```
[](https://i.stack.imgur.com/Z17th.png)
[](https://i.stack.imgur.com/1Kpo7.png)
I set the background of all the views to different colors so that I can identify the source of the extra space / padding. But the extra space / padding has a white background, which none of the views has.
**Note** this is not because of the **spinnerMode="dialog"** option; this behavior also happens when **spinnerMode="dropdown"**. How can I remove this space? Or am I doing something wrong? | 2017/01/22 | [
"https://Stackoverflow.com/questions/41789697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7277357/"
] | You just need to override the getDropDownView method in the adapter.
```
@Override
public View getDropDownView(int position, View convertView, ViewGroup parent) {
    // create/recycle the row view via super; returning convertView
    // alone can be null the first time this is called
    View view = super.getDropDownView(position, convertView, parent);
    parent.setPadding(0, 0, 0, 0);
    return view;
}
``` | try adding this to your `TextView`
```
android:includeFontPadding="false"
```
it will remove the `TextView` extra top and bottom padding |
39,490 | The profit-loss diagram for the "long stock, short call" position and the "short put" position are almost exactly the same, except that for the former, you may be able to profit more when the stock goes up as, in addition to the premium collected from selling the option, you also get to profit from capital gains and dividends from ownership of the stock. Additionally, you are able to allow for a greater decrease in the stock price before you start incurring a loss. However, to take two positions, there may be more transaction costs compared to just taking a "short put" alone. Are there any other pros and cons that I have not considered? | 2020/09/01 | [
"https://economics.stackexchange.com/questions/39490",
"https://economics.stackexchange.com",
"https://economics.stackexchange.com/users/18212/"
] | I am not sure how you want to solve inflation, but I assume you want to know if production could be connected with the overall price level in the economy.
Let's take a look at the equation of exchange:
$ M \cdot V = P \cdot T$
Where M equals money supply, V the velocity of money (how often one unit of money is used), P the price level, and T a number for aggregate transactions. However, the number of transactions is correlated with the goods produced in the economy. We can thus write the income version of the equation as:
$ M \cdot V = P \cdot Y$
Dividing by Y verifies your assumption. Given that money supply and circulation speed are constant, higher production decreases the price level in the economy.
$ \frac{M \cdot V}{Y} = P$ | In the real world, the answer is that anything can happen.
* If the inflation is associated with a shortage of a key good - e.g., oil supply squeezes by OPEC in the 1970s, or raw food prices for countries with a low weight of processed food in the CPI - bringing more supply in will lower prices.
* However, countries can grow in real terms and inflation rise, as seen in the U.S. in the 1970s: [link to FRED database.](https://fred.stlouisfed.org/series/GDPC1)
The reason for the indeterminacy is straightforward. New output does not magically appear. Workers need to be hired, and quite often there is fixed investment needed for new productive capacity. That hiring and/or fixed investment can make supply bottlenecks worse.
However, the question’s focus on the money supply is not helpful. The velocity of money is unstable, and so one cannot draw any useful conclusions about the relationship between monetary aggregate growth and inflation on any reasonable forecast horizon. [M1 velocity in the United States.](https://fred.stlouisfed.org/series/M1V)
53,653 | When doing physics with two-level systems and introducing rotations, a term that appears quite often is the rotation of a Pauli matrix by another one:
$$e^{- i \sigma\_j \theta/2} \sigma\_k e^{i \sigma\_j \theta/2}$$
The way I know to evaluate this is by using the identity
$$\exp(i \sigma\_j \theta/2) = I \cos \frac{\theta}{2} + i \sigma\_j \sin \frac{\theta}{2}$$
and expanding the exponentials in the previous equation. If $j=k$ it's trivial, but in the other cases it gets quite tedious. Is there an easier way to do that (or some shortcut)? | 2013/02/11 | [
"https://physics.stackexchange.com/questions/53653",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5533/"
] | As TMS mentioned, if you play around with the Pauli matrix properties and the double-angle trig formulae, you should get a nice result. (Of course, what "nice" means may depend on what you're going to use it for.)
I find it useful, however, to step back and let the geometrical nature of the object you're dealing with take the reins. How would I do this? It's often the particulars of the formulation that bog you down, and it's best to generalize the setting a bit. Thus, in particular, consider the frame-of-reference-free expression
$$
e^{-i\vec{\sigma}\cdot\hat{n}\theta/2} \, \vec{a}\cdot\vec{\sigma} \, e^{i\vec{\sigma}\cdot\hat{n}\theta/2}
$$
where $\hat{n}\cdot\hat{n}=1$. You need to apply the identity you mention and the property that
$$
(\vec{a}\cdot\vec{\sigma})(\hat{n}\cdot\vec{\sigma})=\vec{a}\cdot\hat{n}+i\vec{\sigma}\cdot(\vec{a}\times\hat{n}),
$$
as well as double-angle formulae and a triple vector product (i.e. nothing intimidating).
It's definitely a worthwhile exercise to work through that; the result is
$$
e^{-i\vec{\sigma}\cdot\hat{n}\theta/2} \, \vec{a}\cdot\vec{\sigma} \, e^{i\vec{\sigma}\cdot\hat{n}\theta/2}
=
\left[
\hat{n}(\hat{n}\cdot \vec a)+
\cos(\theta)(\vec{a}-\hat n(\hat n\cdot \vec a))
+\sin(\theta)\hat{n}\times\vec{a}
\right]\cdot\vec{\sigma}.
$$
Here the vector in square brackets is the rotated one: $\vec{a}$ rotated by angle $\theta$ around unit $\hat{n}$. (To see what each term does, simply take $\hat n$ along the $z$ axis; the cosine and sine terms then describe the on- and off-diagonal terms of the $x,y$ submatrix.)
I'm not sure what kind of a shortcut you're looking for or whether this is useful in that sense. I just want to say that sometimes stepping back to a greater generality can ease the computational burden, or at least make clearer what's going on. | For people who like mathiness:
The [*Hadamard Lemma*](http://en.wikipedia.org/wiki/Baker%E2%80%93Campbell%E2%80%93Hausdorff_formula#An_important_lemma) says the following:
Let $X$ and $Y$ be square, complex matrices, then
$$
e^{X} Y e^{-X} = Y + [X,Y] + \frac{1}{2!}[X,[X,Y]] + \cdots.
$$
You should (in principle) be able to use this to derive whatever expression you get.
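For instance, taking $X = -i\sigma\_z\theta/2$ and $Y = \sigma\_x$ (the $j\neq k$ case from the question) and using $[\sigma\_z,\sigma\_x]=2i\sigma\_y$ and $[\sigma\_z,\sigma\_y]=-2i\sigma\_x$, the nested commutators cycle between two terms and the series resums:
$$
[X,Y] = \theta\sigma\_y, \qquad [X,[X,Y]] = -\theta^2\sigma\_x,
$$
$$
e^{-i\sigma\_z\theta/2}\,\sigma\_x\,e^{i\sigma\_z\theta/2}
= \sigma\_x\left(1 - \frac{\theta^2}{2!} + \cdots\right)
+ \sigma\_y\left(\theta - \frac{\theta^3}{3!} + \cdots\right)
= \cos\theta\,\sigma\_x + \sin\theta\,\sigma\_y.
$$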
This lemma can also be written using the operations $\mathrm{Ad}$ and $\mathrm{ad}$ called "adjoint" which are important in the representation theory of Lie groups and Lie algebras (in fact all this Pauli matrix spin stuff falls under this umbrella). These operations are defined as
$$
\mathrm{Ad}\_g(X) = g X g^{-1}, \qquad \mathrm{ad}\_X(Y) = [X,Y]
$$
This notation allows us to write the lemma as
$$
\mathrm{Ad}\_{e^X} = e^{\mathrm{ad}\_X}
$$ |
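Both routes can be sanity-checked numerically for the $j\neq k$ case. The throwaway script below (not from either answer; plain nested lists stand in for $2\times 2$ complex matrices) verifies $e^{-i\sigma\_z\theta/2}\,\sigma\_x\,e^{i\sigma\_z\theta/2} = \cos\theta\,\sigma\_x + \sin\theta\,\sigma\_y$:

```python
import cmath

def matmul(a, b):
    # product of two 2x2 complex matrices given as nested lists
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = 0.7
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]

# exp(-i*sigma_z*theta/2) is diagonal, so it can be written down directly.
u     = [[cmath.exp(-1j * theta / 2), 0], [0, cmath.exp(1j * theta / 2)]]
u_dag = [[cmath.exp(1j * theta / 2), 0], [0, cmath.exp(-1j * theta / 2)]]

rotated = matmul(u, matmul(sx, u_dag))
expected = [[cmath.cos(theta) * sx[i][j] + cmath.sin(theta) * sy[i][j]
             for j in range(2)] for i in range(2)]

err = max(abs(rotated[i][j] - expected[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)  # True
```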
3,772,671 | I am building a site that requires a lot of MySQL inserts and lookups from different tables in a (hopefully) secure part of the site. I want to use an abstraction layer for the whole process. Should I use a PHP framework (like Zend or CakePHP) for this, or just use a simple library (like Crystal or Doctrine)?
I would also like to make sure that the DB inserts are done in a relatively secure part of the site (though not SSL). Currently, I am using the method outlined [here](http://marakana.com/blog/examples/php-implementing-secure-login-with-php-javascript-and-sessions-without-ssl.html) (MD5 encryption and random challenge string), but maybe some of the frameworks come with similar functionality that would simplify the process?
What I'm trying to implement: a table of forms filled out with DB values. If you change a value or add a new row, pressing "save" will update or insert DB rows. I'm sure this has been done before, so I wouldn't want to reinvent the wheel. | 2010/09/22 | [
"https://Stackoverflow.com/questions/3772671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/444979/"
] | Most PHP backends have secure access to a private database. Normally, there's little difficulty in keeping the database secure, mostly by not making it reachable directly. That way the security of access depends on the inability for anyone to tamper with the PHP code, and not any software security scheme. | I would recommend the [Symfony Framework](http://www.symfony-project.org/) for this. There is a great online tutorial on this at [Practical Symfony](http://www.symfony-project.org/jobeet/1_4/Doctrine/en/). The Framework's [Form class](http://www.symfony-project.org/jobeet/1_4/Doctrine/en/10) handles most of the security for you. It also has a nice [login plugin](http://www.symfony-project.org/plugins/sfDoctrineGuardPlugin) to make the application secure.
3,772,671 | I am building a site that requires a lot of MySQL inserts and lookups from different tables in a (hopefully) secure part of the site. I want to use an abstraction layer for the whole process. Should I use a PHP framework (like Zend or CakePHP) for this, or just use a simple library (like Crystal or Doctrine)?
I would also like to make sure that the DB inserts are done in a relatively secure part of the site (though not SSL). Currently, I am using the method outlined [here](http://marakana.com/blog/examples/php-implementing-secure-login-with-php-javascript-and-sessions-without-ssl.html) (MD5 encryption and random challenge string), but maybe some of the frameworks come with similar functionality that would simplify the process?
What I'm trying to implement: a table of forms filled out with DB values. If you change a value or add a new row, pressing "save" will update or insert DB rows. I'm sure this has been done before, so I wouldn't want to reinvent the wheel. | 2010/09/22 | [
"https://Stackoverflow.com/questions/3772671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/444979/"
] | Most PHP backends have secure access to a private database. Normally, there's little difficulty in keeping the database secure, mostly by not making it reachable directly. That way the security of access depends on the inability for anyone to tamper with the PHP code, and not any software security scheme. | Unless by Data Abstraction you mean an implementation of a Data Access Pattern like ActiveRecord or Table Data Gateway, or something ORMish (in both cases you should update your question accordingly), you don't need a framework, because PHP has a DB abstraction layer with [PDO](http://de2.php.net/manual/en/book.pdo.php).
3,772,671 | I am building a site that requires a lot of MySQL inserts and lookups from different tables in a (hopefully) secure part of the site. I want to use an abstraction layer for the whole process. Should I use a PHP framework (like Zend or CakePHP) for this, or just use a simple library (like Crystal or Doctrine)?
I would also like to make sure that the DB inserts are done in a relatively secure part of the site (though not SSL). Currently, I am using the method outlined [here](http://marakana.com/blog/examples/php-implementing-secure-login-with-php-javascript-and-sessions-without-ssl.html) (MD5 encryption and random challenge string), but maybe some of the frameworks come with similar functionality that would simplify the process?
What I'm trying to implement: a table of forms filled out with DB values. If you change a value or add a new row, pressing "save" will update or insert DB rows. I'm sure this has been done before, so I wouldn't want to reinvent the wheel. | 2010/09/22 | [
"https://Stackoverflow.com/questions/3772671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/444979/"
] | Most PHP backends have secure access to a private database. Normally, there's little difficulty in keeping the database secure, mostly by not making it reachable directly. That way the security of access depends on the inability for anyone to tamper with the PHP code, and not any software security scheme. | It sounds like you are really asking two different questions. One being should I use a framework (Zend, Symfony, Cake, etc) for the development of a website? The other being whether or not to use something along the lines of an ORM (Doctrine, Propel, etc)?
The answer to the first one is a resounding "yes". Frameworks are designed to keep you from having to reinvent the wheel for common/basic functionality. The time you spend learning how to (correctly) use a framework will pay off greatly in the long run. You'll eventually be much more productive than "rolling your own". Not to mention you'll gain a community of people who have likely been through similar situations and overcome issues similar to what you will face (that in and of itself could be the best reason to use a framework). I'm not going to suggest a particular framework since they all have strengths and weaknesses, and that is another topic in and of itself (however, I do use and prefer Zend Framework, but don't let that influence your decision).
Concerning whether or not to use an ORM is a slightly more difficult question. I've recently begun to work with them more and in general I would recommend them, but it all boils down to using the right tool for the right job. They solve some specific problems very well, others not so much. However, since you specifically mention security, I'll quickly address that. I don't think that an ORM inherently "increases security", however it can force you into making better decisions. That said, bad coding and bad coding practices will result in security issues no matter what technology/framework you are using.
Hope that helps! |
40,724,337 | I'm getting an error which I can't find the solution to: my model does not contain a definition for 'AsEnumerable' (I put the whole error lower in the post). I've read about it around Stack Overflow; all answers seem to point towards `using System.Linq` and `using System.Data`, but I have them in the code.
The program is supposed to take some data from the tables Contatti, Contacts and Companies, which are at different locations, and bring them together (which happens in the list of the GET method in the controller). The code is the following:
Model:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Globalization;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
public partial class ContactsUni2
{
public IEnumerable<Contatti> Contattis { get; set; }
public IEnumerable<Companies> Companies { get; set; }
public IEnumerable<Contact> Contacts { get; set; }
}
```
Controller GET method, which gives the error:
```
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Entity;
using System.Linq;
using System.Net;
using System.Web;
using System.Web.Mvc;
using ContactManager.Models;
namespace ContactManager.Controllers
{
public class ContactsUni21Controller : Controller
{
private ApplicationDbContext db = new ApplicationDbContext();
private ContattiDB2 db2 = new ContattiDB2();
// GET: ContactsUni21
public ActionResult Index(String Page)
{
ContactsUni2 CU = new ContactsUni2();
CU.Contattis = db.Contattis.Include(i => i.ContattoID);
CU.Contacts = db.Contacts;
CU.Companies = db.Companies;
List<ContactsUni2> contactlist = new List<ContactsUni2>();
contactlist.Add(CU.AsEnumerable()); // The error is HERE at CU.AsEnumerable(), cannot convert from method group to 'contactsuni2'
return View(contactlist);
}
```
View, which also gives a similar error but for Contatti.Count()
```
@model IEnumerable<ContactManager.ViewModels.ContactsUni3>
@{
ViewBag.Title = "Index";
}
<h2>Contacts Unified</h2>
<html>
<head>
<meta name="viewport" content="width=device-width" />
<title>Index</title>
</head>
<body>
<p>
@Html.ActionLink("Create New", "Create")
</p>
<table class="table">
<tr>
<th>
@Html.DisplayNameFor(model => model.Contatti.Nome)
@Html.DisplayNameFor(model => model.Contatti.Citta)
@Html.DisplayNameFor(model => model.Contatti.ContattoID)
@Html.DisplayNameFor(model => model.Contatti.CodicePostale)
@Html.DisplayNameFor(model => model.Contatti.Email)
@Html.DisplayNameFor(model => model.Contact.Address)
@Html.DisplayNameFor(model => model.Contact.CompanyId)
@Html.DisplayNameFor(model => model.Contact.ContactId)
@Html.DisplayNameFor(model => model.Company.CompanyName)
</th>
<th></th>
</tr>
@foreach (var item in Model)
{
for (int i = 0; i < @item.Contatti.Count(); i++) // Second Error Here
{
<tr>
<td>
@item.Contatti[i].Nome
@item.Contatti[i].Citta
@item.Contatti[i].ContattoID
@item.Contatti[i].CodicePostale
@item.Contatti[i].Email
@item.Contatti[i].Address
@item.Contatti[i].CompanyId
@item.Contatti[i].ContactId
@item.Contatti[i].CompanyName
</td>
<td>
@Html.ActionLink("Edit", "Edit", new { id = item.ContattoID }) |
@Html.ActionLink("Details", "Details", new { id = item.ContattoID }) |
@Html.ActionLink("Delete", "Delete", new { id = item.ContattoID })
</td>
</tr>
}
}
</table>
</body>
</html>
```
First error in Controller GET Method:
>
> Error CS1929 'ContactsUni2' does not contain a definition for 'AsEnumerable' and the best extension method overload 'DataTableExtensions.AsEnumerable(DataTable)' requires a receiver of type 'DataTable'
>
>
>
when I try doing that, it says:
>
> cannot convert from method group to 'contactsuni2'
>
>
>
Second Error in View loop:
>
> Error CS1061 'Contatti' does not contain a definition for 'Count' and no extension method 'Count' accepting a first argument of type 'Contatti' could be found (are you missing a using directive or an assembly reference?)
>
>
> | 2016/11/21 | [
"https://Stackoverflow.com/questions/40724337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4431178/"
] | Just use this:
```
contactlist.Add(CU);
```
The method `List.Add()` expects a single item, and CU is a single item.
And to fix the problem in your View: simply add a line with `@using System.Linq` on top so the compiler will know where to look.
*Additional info about AsEnumerable:*
You only need `AsEnumerable()` if you have a collection or sequence of items, *but* that collection is incompatible with a construct such as `foreach (var item in collection)`.
In that case you can try using `foreach (var item in collection.AsEnumerable())`.
In some cases the .NET Framework may then already provide the implementation of `.AsEnumerable()`, in other cases you may have to provide it yourself. But in the case of your question you don't even need it at all. | You have 2 separate issues:
1. In your `ContactsUni21Controller` you're calling `AsEnumerable` on an object that does not implement `IEnumerable`; you're trying to add an instance of `ContactsUni2` to a `List<ContactsUni2>`, so there's no need for a cast here since it's an instance of the same type that the list contains
2. As @CodeCaster mentioned, you're not putting a `@using System.Linq;` statement in your Razor view, preventing you from accessing the `IEnumerable.Count` method |
8,973,665 | I get the error:
```
Column 'dbo.Saved_ORDER_IMPORT.Company' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
When I execute:
```
SELECT [Order No], Company
FROM [dbo].[Saved_ORDER_IMPORT]
where [sent] = 1 and datesent between '01/01/2009' and '01/27/2012'
group by [Order No]
```
Do I have to change the column or my query? | 2012/01/23 | [
"https://Stackoverflow.com/questions/8973665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569654/"
] | **Description**:
You can't group your query by only one column when you select two.
>
> **MSDN** - Groups a selected set of rows into a set of summary rows by the values of one or more columns or expressions in SQL Server. One row is returned for each group. Aggregate functions in the SELECT clause list provide information about each group instead of individual rows.
>
>
>
We need more information but maybe this helps
**Sample**:
```
SELECT [Order No], Company
FROM [dbo].[Saved_ORDER_IMPORT]
WHERE [sent] = 1 AND datesent BETWEEN '01/01/2009' AND '01/27/2012'
GROUP BY [Order No], Company
```
More information:
* [MSDN - GROUP BY (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms177673.aspx) | You must put the Company field in the group by clause. |
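The corrected query can be sketched with Python's built-in `sqlite3`; the table name and sample data below are invented for illustration, and note that SQLite is laxer than SQL Server here (it would not raise the original error), so this only demonstrates the fixed query:

```python
import sqlite3

# Throwaway table resembling Saved_ORDER_IMPORT (names and data are illustrative)
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE saved_order_import"
    " (order_no INTEGER, company TEXT, sent INTEGER, datesent TEXT)"
)
conn.executemany(
    "INSERT INTO saved_order_import VALUES (?, ?, ?, ?)",
    [
        (1, "Acme", 1, "2010-05-01"),
        (1, "Acme", 1, "2010-06-01"),   # duplicate pair collapses under GROUP BY
        (2, "Globex", 1, "2011-01-15"),
        (3, "Acme", 0, "2011-02-02"),   # sent = 0, filtered out by WHERE
    ],
)

# Grouping by BOTH selected columns avoids the "not contained in ... GROUP BY" error
rows = conn.execute(
    """
    SELECT order_no, company
    FROM saved_order_import
    WHERE sent = 1 AND datesent BETWEEN '2009-01-01' AND '2012-01-27'
    GROUP BY order_no, company
    ORDER BY order_no, company
    """
).fetchall()
print(rows)  # [(1, 'Acme'), (2, 'Globex')]
```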
8,973,665 | I get the error:
```
Column 'dbo.Saved_ORDER_IMPORT.Company' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
When I execute:
```
SELECT [Order No], Company
FROM [dbo].[Saved_ORDER_IMPORT]
where [sent] = 1 and datesent between '01/01/2009' and '01/27/2012'
group by [Order No]
```
Do I have to change the column or my query? | 2012/01/23 | [
"https://Stackoverflow.com/questions/8973665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569654/"
] | **Description**:
You can't group your query by only one column when you select two.
>
> **MSDN** - Groups a selected set of rows into a set of summary rows by the values of one or more columns or expressions in SQL Server. One row is returned for each group. Aggregate functions in the SELECT clause list provide information about each group instead of individual rows.
>
>
>
We need more information but maybe this helps
**Sample**:
```
SELECT [Order No], Company
FROM [dbo].[Saved_ORDER_IMPORT]
WHERE [sent] = 1 AND datesent BETWEEN '01/01/2009' AND '01/27/2012'
GROUP BY [Order No], Company
```
More information:
* [MSDN - GROUP BY (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms177673.aspx) | The GROUP BY clause tells SQL to return 1 row for each item in the group by list. In your case, you want one row per order number. If you want company too, you need to include it in the group by.
But if all you want is the result sorted by order #, company, remove the GROUP BY and use ORDER BY instead.
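The GROUP BY vs ORDER BY distinction drawn above can be sketched with `sqlite3` (illustrative data): grouping collapses duplicate (order, company) rows into one, while sorting keeps every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_no INTEGER, company TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "Acme"), (1, "Acme"), (2, "Globex")],
)

# GROUP BY: one row per distinct (order_no, company) pair
grouped = conn.execute(
    "SELECT order_no, company FROM orders GROUP BY order_no, company"
).fetchall()

# ORDER BY: all three rows survive, merely sorted
sorted_rows = conn.execute(
    "SELECT order_no, company FROM orders ORDER BY order_no, company"
).fetchall()

print(len(grouped), len(sorted_rows))  # 2 3
```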
8,973,665 | I get the error:
```
Column 'dbo.Saved_ORDER_IMPORT.Company' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
When I execute:
```
SELECT [Order No], Company
FROM [dbo].[Saved_ORDER_IMPORT]
where [sent] = 1 and datesent between '01/01/2009' and '01/27/2012'
group by [Order No]
```
Do I have to change the column or my query? | 2012/01/23 | [
"https://Stackoverflow.com/questions/8973665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569654/"
] | **Description**:
You can't group your query by only one column when you select two.
>
> **MSDN** - Groups a selected set of rows into a set of summary rows by the values of one or more columns or expressions in SQL Server. One row is returned for each group. Aggregate functions in the SELECT clause list provide information about each group instead of individual rows.
>
>
>
We need more information but maybe this helps
**Sample**:
```
SELECT [Order No], Company
FROM [dbo].[Saved_ORDER_IMPORT]
WHERE [sent] = 1 AND datesent BETWEEN '01/01/2009' AND '01/27/2012'
GROUP BY [Order No], Company
```
More information:
* [MSDN - GROUP BY (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms177673.aspx) | Remove company from the select or group by company, too.
To give you an example:
You have five order numbers and two companies
1 - Company1
1 - Company2
2 - Company1
2 - Company2
3 - Company1
Now you group by order number, so you have 3 results:
1
2
3
... but what's the company for the order number results? Because order number 1 can be either Company1 or Company2.
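The ambiguity described can be made concrete in Python with the same example data: grouping by order number alone leaves some orders with two candidate companies, so there is no single correct value to return:

```python
from collections import defaultdict

# Same example rows as above: (order number, company)
rows = [
    (1, "Company1"), (1, "Company2"),
    (2, "Company1"), (2, "Company2"),
    (3, "Company1"),
]

companies_per_order = defaultdict(set)
for order_no, company in rows:
    companies_per_order[order_no].add(company)

# Orders with more than one company have no single answer for SELECT company
ambiguous = {o for o, cs in companies_per_order.items() if len(cs) > 1}
print(sorted(ambiguous))  # [1, 2]
```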
8,973,665 | I get the error:
```
Column 'dbo.Saved_ORDER_IMPORT.Company' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
When I execute:
```
SELECT [Order No], Company
FROM [dbo].[Saved_ORDER_IMPORT]
where [sent] = 1 and datesent between '01/01/2009' and '01/27/2012'
group by [Order No]
```
Do I have to change the column or my query? | 2012/01/23 | [
"https://Stackoverflow.com/questions/8973665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569654/"
] | **Description**:
You can't group your query by only one column when you select two.
>
> **MSDN** - Groups a selected set of rows into a set of summary rows by the values of one or more columns or expressions in SQL Server. One row is returned for each group. Aggregate functions in the SELECT clause list provide information about each group instead of individual rows.
>
>
>
We need more information but maybe this helps
**Sample**:
```
SELECT [Order No], Company
FROM [dbo].[Saved_ORDER_IMPORT]
WHERE [sent] = 1 AND datesent BETWEEN '01/01/2009' AND '01/27/2012'
GROUP BY [Order No], Company
```
More information:
* [MSDN - GROUP BY (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms177673.aspx) | Your query is returning that error because you are not using an [AGGREGATE](http://msdn.microsoft.com/en-us/library/ms173454.aspx) function in your select statement. Including the [Company] field in the GROUP BY clause would remove the error, but check that the grouped result is really what you intend.
10,719,498 | I'm new to Android development. Now I'm trying to use a SQLite db.
I created a SQLite database file using SQLite Manager.
I imported it into the project at /data/data/packagename/dbname, and it works fine in the emulator, but when I run a release build on a device the app crashes. I don't know what happens or why. Any suggestions? | 2012/05/23 | [
"https://Stackoverflow.com/questions/10719498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/756941/"
] | You cannot use an external DB in that manner. Importing it into your project does not make it available on all devices thereafter (assuming you used DDMS for this); it only makes that DB available on that particular emulator. Follow the link below to find out how to add an external DB to your application:
<http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/> | View this link; it will be very helpful if you are using your own database in Android.
>
> <http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/>
>
>
>
In this tutorial you put your database in the assets folder of your project, and the database is automatically transferred to the databases folder on your device.
10,719,498 | I'm new to Android development. Now I'm trying to use a SQLite db.
I created a SQLite database file using SQLite Manager.
I imported it into the project at /data/data/packagename/dbname, and it works fine in the emulator, but when I run a release build on a device the app crashes. I don't know what happens or why. Any suggestions? | 2012/05/23 | [
"https://Stackoverflow.com/questions/10719498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/756941/"
] | You cannot use an external DB in that manner. Importing it into your project does not make it available on all devices thereafter (assuming you used DDMS for this); it only makes that DB available on that particular emulator. Follow the link below to find out how to add an external DB to your application:
<http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/> | See <https://github.com/jgilfelt/android-sqlite-asset-helper> for a helper lib to take care of this.
(I haven't used this library personally but I came across it yesterday while searching for something else. It appears to do what you need, though). |
10,719,498 | I'm new to Android development. Now I'm trying to use a SQLite db.
I created a SQLite database file using SQLite Manager.
I imported it into the project at /data/data/packagename/dbname, and it works fine in the emulator, but when I run a release build on a device the app crashes. I don't know what happens or why. Any suggestions? | 2012/05/23 | [
"https://Stackoverflow.com/questions/10719498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/756941/"
] | You cannot use an external DB in that manner. Importing it into your project does not make it available on all devices thereafter (assuming you used DDMS for this); it only makes that DB available on that particular emulator. Follow the link below to find out how to add an external DB to your application:
<http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/> | ```
private void StoreDatabase() {
    // Copy the database bundled in assets only if it is not already present.
    File dbFile = new File("data/data/packagename/DBName.sqlite");
    if (dbFile.exists()) {
        System.out.println("File already exists, no need to create it");
    } else {
        try {
            dbFile.createNewFile();
            System.out.println("File created successfully");
            InputStream is = this.getAssets().open("DBName.sqlite");
            FileOutputStream fos = new FileOutputStream(dbFile);
            byte[] buffer = new byte[1024];
            int length = 0;
            while ((length = is.read(buffer)) > 0) {
                fos.write(buffer, 0, length);
            }
            System.out.println("Database successfully copied from assets");
            // Close the streams
            fos.flush();
            fos.close();
            is.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
``` |
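The copy-only-if-missing logic of the Java snippet is language-agnostic; here is a minimal Python sketch of the same idea (paths are placeholders, and on a real device the asset would be read via `AssetManager` as above):

```python
import os
import shutil
import tempfile

def install_db(asset_path, target_path):
    """Copy the bundled database to its target location, but only once."""
    if os.path.exists(target_path):
        return False  # already installed; nothing to do
    shutil.copyfile(asset_path, target_path)
    return True

# Demo with throwaway files standing in for the asset and the data directory
workdir = tempfile.mkdtemp()
asset = os.path.join(workdir, "DBName.sqlite")
target = os.path.join(workdir, "databases", "DBName.sqlite")
os.makedirs(os.path.dirname(target), exist_ok=True)
with open(asset, "wb") as f:
    f.write(b"sqlite-bytes")

first = install_db(asset, target)   # copies the file
second = install_db(asset, target)  # no-op: file already exists
print(first, second)  # True False
```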
69,625,063 | [](https://i.stack.imgur.com/8jep6.jpg)Uploading react native app bundle to Google play console for testing but it shows me : You uploaded an APK or app bundle with a shortcuts XML configuration with the following error: Element '<shortcut>' is missing a required attribute, 'android:shortcutId'.
My shortcuts.xml File:
```
<?xml version="1.0" encoding="utf-8"?>
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
<shortcut android:shortcutId="ID_test" android:enabled="true"
android:shortcutShortLabel="@string/compose_shortcut_short_label"
android:shortcutLongLabel="@string/compose_shortcut_long_label">
<intent android:action="android.intent.action.VIEW"
android:targetPackage="com.nativebasetest2"
android:targetClass="com.nativebasetest2.MainActivity" />
<!-- If your shortcut is associated with multiple intents, include them
here. The last intent in the list determines what the user sees when
they launch this shortcut. -->
<categories android:name="android.shortcut.conversation" />
<capability-binding android:key="actions.intent.OPEN_APP_FEATURE" />
```
```
<capability android:name="actions.intent.OPEN_APP_FEATURE">
<intent android:action="android.intent.action.VIEW"
android:targetPackage="com.nativebasetest2"
android:targetClass="com.nativebasetest2.MainActivity">
<extra android:key="requiredForegroundActivity"
android:value="com.nativebasetest2/MainActivity"/>
</intent>
</capability>
```
```
</shortcuts>
```
AndroidManifest.xml File:
```
<activity
android:name=".MainActivity"
android:label="@string/app_name"
android:configChanges=
"keyboard|keyboardHidden|orientation|screenSize|uiMod"
android:launchMode="singleTask"
android:screenOrientation="portrait"
android:windowSoftInputMode="adjustResize">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<meta-data android:name="android.app.shortcuts"
android:resource="@xml/shortcuts" />
</activity>
```
Please help me to find what is wrong here, TIA | 2021/10/19 | [
"https://Stackoverflow.com/questions/69625063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16811720/"
] | `Element '<shortcut>' is missing a required attribute, 'android:shortcutId'.`
is showing because the minSdkVersion must be 25.
That being said, if you still want your app to support OS versions below SDK 25, you can create an xml folder for SDK 25:
`xml-v25`
put your shortcuts file in it.
`xml-v25/shortcuts.xml`
Edit the `xml/shortcuts.xml` and remove any `<shortcut>` tag.
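The `xml-v25` trick works because Android picks the most specific resource folder whose version qualifier does not exceed the device's SDK level; the selection principle can be sketched as follows (a simplification of the real resource-matching rules):

```python
def pick_resource_dir(device_sdk, candidates):
    """Pick the best 'xml' resource folder: the highest xml-vN with N <= device_sdk,
    falling back to the unqualified 'xml' folder."""
    best, best_version = "xml", -1
    for name in candidates:
        if not name.startswith("xml-v"):
            continue
        version = int(name[len("xml-v"):])
        if best_version < version <= device_sdk:
            best, best_version = name, version
    return best

print(pick_resource_dir(24, ["xml", "xml-v25"]))  # xml
print(pick_resource_dir(30, ["xml", "xml-v25"]))  # xml-v25
```

So devices below SDK 25 load the stripped `xml/shortcuts.xml`, while newer devices load `xml-v25/shortcuts.xml` with the `<shortcut>` entries.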
Shortcuts will be available for SDK 25 and above, and your app can be installed on anything that matches your minSdkVersion. | Add a reference to shortcuts.xml in your app manifest by following these steps:
1. In your app's manifest file (AndroidManifest.xml), find an activity whose intent filters are set to the android.intent.action.MAIN action and the android.intent.category.LAUNCHER category.
2. Add a reference to shortcuts.xml in AndroidManifest.xml using a tag in the Activity that has intent filters for both MAIN and LAUNCHER, as follows:
```
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.myapplication">
<application ... >
<activity android:name="Main">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<meta-data android:name="android.app.shortcuts"
android:resource="@xml/shortcuts" />
</activity>
</application>
</manifest>
``` |
69,625,063 | [](https://i.stack.imgur.com/8jep6.jpg)Uploading react native app bundle to Google play console for testing but it shows me : You uploaded an APK or app bundle with a shortcuts XML configuration with the following error: Element '<shortcut>' is missing a required attribute, 'android:shortcutId'.
My shortcuts.xml File:
```
<?xml version="1.0" encoding="utf-8"?>
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
<shortcut android:shortcutId="ID_test" android:enabled="true"
android:shortcutShortLabel="@string/compose_shortcut_short_label"
android:shortcutLongLabel="@string/compose_shortcut_long_label">
<intent android:action="android.intent.action.VIEW"
android:targetPackage="com.nativebasetest2"
android:targetClass="com.nativebasetest2.MainActivity" />
<!-- If your shortcut is associated with multiple intents, include them
here. The last intent in the list determines what the user sees when
they launch this shortcut. -->
<categories android:name="android.shortcut.conversation" />
<capability-binding android:key="actions.intent.OPEN_APP_FEATURE" />
```
```
<capability android:name="actions.intent.OPEN_APP_FEATURE">
<intent android:action="android.intent.action.VIEW"
android:targetPackage="com.nativebasetest2"
android:targetClass="com.nativebasetest2.MainActivity">
<extra android:key="requiredForegroundActivity"
android:value="com.nativebasetest2/MainActivity"/>
</intent>
</capability>
```
```
</shortcuts>
```
AndroidManifest.xml File:
```
<activity
android:name=".MainActivity"
android:label="@string/app_name"
android:configChanges=
"keyboard|keyboardHidden|orientation|screenSize|uiMod"
android:launchMode="singleTask"
android:screenOrientation="portrait"
android:windowSoftInputMode="adjustResize">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<meta-data android:name="android.app.shortcuts"
android:resource="@xml/shortcuts" />
</activity>
```
Please help me to find what is wrong here, TIA | 2021/10/19 | [
"https://Stackoverflow.com/questions/69625063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16811720/"
] | `Element '<shortcut>' is missing a required attribute, 'android:shortcutId'.`
is showing because the minSdkVersion must be 25.
That being said, if you still want your app to support OS versions below SDK 25, you can create an xml folder for SDK 25:
`xml-v25`
put your shortcuts file in it.
`xml-v25/shortcuts.xml`
Edit the `xml/shortcuts.xml` and remove any `<shortcut>` tag.
Shortcuts will be available for SDK 25 and above, and your app can be installed on anything that matches your minSdkVersion. | I solved it by making a change in android/build.gradle.
1. minSdkVersion = 30
2. targetSdkVersion = 30
1,044,667 | I am conducting a sanctioned pentest in a closed reference environment, and stumbled upon a seemingly simple issue I currently cannot solve.
When attempting to execute a directory traversal attack against a vulnerable Femitter FTP server running on MS Windows OS, it is possible to do a LIST on the system root (addresses and content listings changed here for reference only):
```
# ftp 192.168.13.22
Connected to 192.168.13.22.
220 Femitter FTP Server ready.
Name (192.168.13.22:root):
331 Password required for root.
Password:
230 User root logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls ../../../../
200 Port command successful.
150 Opening data connection for directory list.
-rwxrwxrwx 1 ftp ftp 0 Sep 23 2015 AUTOEXEC.BAT
-rw-rw-rw- 1 ftp ftp 0 Sep 23 2015 CONFIG.SYS
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 Documents and Settings
dr--r--r-- 1 ftp ftp 0 Sep 23 2015 Program Files
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 WINDOWS
226 File sent ok
```
However, if I want to list the contents of a folder containing white spaces such as `Documents and settings`, I am not able to list the directory contents because the white spaces are ignored.
```
ftp> ls ../../../../documents and settings/
usage: ls remote-directory local-file
ftp> ls ../../../../documents\ and\ settings
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents not found
226 File sent ok
ftp> ls ../../../../documents%20and%20settings
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents%20and%20settings not found
226 File sent ok
ftp> ls ../../../../'documents and settings'/
usage: ls remote-directory local-file
ftp> ls ../../../../"documents and settings"/
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents not found
226 File sent ok
ftp> ls "../../../../documents and settings/"
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents not found
226 File sent ok
```
I already tried using different FTP clients (CLI and GUI, on Linux and Windows) and either they ignore white spaces or disallow directory traversal.
Also tried scripting the attack in Python, at first using raw sockets and then ftplib, to send the commands in hex format directly to the FTP server, but with no success.
Googling for a couple of hours did not yield a working solution (yes, there were a lot of options, none of which worked), which is why I am asking whether someone here has had the same issue. Pretty sure this is not the first time such a directory traversal with white spaces is needed. | 2016/02/23 | [
"https://superuser.com/questions/1044667",
"https://superuser.com",
"https://superuser.com/users/562934/"
] | Solution suggested by @Dogeatcatworld: use MS Windows short directory notation such as `C:\Docume~1\`.
```
ftp> ls ../../../../Docume~1/
200 Port command successful.
150 Opening data connection for directory list.
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 .
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 ..
drw-rw-rw- 1 ftp ftp 0 Sep 26 2015 Administrateur
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 All Users
226 File sent ok
```
A really good article from the MS Knowledge Base explains the 8.3 directory notation: [How Windows Generates 8.3 File Names from Long File Names](https://support.microsoft.com/en-us/kb/142982) | FTP doesn't use URL encoding, so %xx won't work unless you're using FTP in a browser which can translate it for you.
Try using quotes around it instead, i.e.: ls "../../some dir"
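The short name used in the accepted workaround can be approximated with a small helper; this is a naive sketch, since the real Windows algorithm also strips dots and illegal characters and bumps the `~N` suffix on collisions:

```python
def short_name_guess(long_name, index=1):
    """Naive 8.3 guess: first 6 non-space characters, uppercased, plus ~index."""
    stripped = "".join(ch for ch in long_name if ch != " ")
    return f"{stripped[:6].upper()}~{index}"

print(short_name_guess("Documents and Settings"))  # DOCUME~1
print(short_name_guess("Program Files"))           # PROGRA~1
```

This is why `ls ../../../../Docume~1/` reaches `Documents and Settings` without ever sending a space over the control channel.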
1,044,667 | I am conducting a sanctioned pentest in a closed reference environment, and stumbled upon a seemingly simple issue I currently cannot solve.
When attempting to execute a directory traversal attack against a vulnerable Femitter FTP server running on MS Windows OS, it is possible to do a LIST on the system root (addresses and content listings changed here for reference only):
```
# ftp 192.168.13.22
Connected to 192.168.13.22.
220 Femitter FTP Server ready.
Name (192.168.13.22:root):
331 Password required for root.
Password:
230 User root logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls ../../../../
200 Port command successful.
150 Opening data connection for directory list.
-rwxrwxrwx 1 ftp ftp 0 Sep 23 2015 AUTOEXEC.BAT
-rw-rw-rw- 1 ftp ftp 0 Sep 23 2015 CONFIG.SYS
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 Documents and Settings
dr--r--r-- 1 ftp ftp 0 Sep 23 2015 Program Files
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 WINDOWS
226 File sent ok
```
However, if I want to list the contents of a folder containing white spaces such as `Documents and settings`, I am not able to list the directory contents because the white spaces are ignored.
```
ftp> ls ../../../../documents and settings/
usage: ls remote-directory local-file
ftp> ls ../../../../documents\ and\ settings
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents not found
226 File sent ok
ftp> ls ../../../../documents%20and%20settings
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents%20and%20settings not found
226 File sent ok
ftp> ls ../../../../'documents and settings'/
usage: ls remote-directory local-file
ftp> ls ../../../../"documents and settings"/
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents not found
226 File sent ok
ftp> ls "../../../../documents and settings/"
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents not found
226 File sent ok
```
I already tried using different FTP clients (CLI and GUI, on Linux and Windows) and either they ignore white spaces or disallow directory traversal.
Also tried scripting the attack in Python, at first using raw sockets and then ftplib, to send the commands in hex format directly to the FTP server, but with no success.
Googling for a couple of hours did not yield a working solution (yes, there were a lot of options, none of which worked), which is why I am asking whether someone here has had the same issue. Pretty sure this is not the first time such a directory traversal with white spaces is needed. | 2016/02/23 | [
"https://superuser.com/questions/1044667",
"https://superuser.com",
"https://superuser.com/users/562934/"
] | Solution suggested by @Dogeatcatworld: use MS Windows short directory notation such as `C:\Docume~1\`.
```
ftp> ls ../../../../Docume~1/
200 Port command successful.
150 Opening data connection for directory list.
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 .
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 ..
drw-rw-rw- 1 ftp ftp 0 Sep 26 2015 Administrateur
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 All Users
226 File sent ok
```
A really good article from the MS Knowledge Base explains the 8.3 directory notation: [How Windows Generates 8.3 File Names from Long File Names](https://support.microsoft.com/en-us/kb/142982) | >
> The "short name" is really the old DOS 8.3 naming convention, so all the directories will be the first 6 letters followed by ~1 assuming there is only one name that matches, for example:
>
>
> C:\ABCDEF~1 - C:\ABCDEFG I AM DIRECTORY
>
> C:\BCDEFG~1 - C:\BCDEFGHIJKL M Another Directory
>
>
> Here is the only exception:
>
>
> C:\ABCDEF~1 - C:\ABCDEFG I AM DIRECTORY
>
> C:\ABCDEF~2 - C:\ABCDEFGHI Directory as well
>
>
>
Source: [How can I find the short path of a Windows directory/file?](https://superuser.com/questions/348079/how-can-i-find-the-short-path-of-a-windows-directory-file) |
1,044,667 | I am conducting a sanctioned pentest in a closed reference environment, and stumbled upon a seemingly simple issue I currently cannot solve.
When attempting to execute a directory traversal attack against a vulnerable Femitter FTP server running on MS Windows OS, it is possible to do a LIST on the system root (addresses and content listings changed here for reference only):
```
# ftp 192.168.13.22
Connected to 192.168.13.22.
220 Femitter FTP Server ready.
Name (192.168.13.22:root):
331 Password required for root.
Password:
230 User root logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls ../../../../
200 Port command successful.
150 Opening data connection for directory list.
-rwxrwxrwx 1 ftp ftp 0 Sep 23 2015 AUTOEXEC.BAT
-rw-rw-rw- 1 ftp ftp 0 Sep 23 2015 CONFIG.SYS
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 Documents and Settings
dr--r--r-- 1 ftp ftp 0 Sep 23 2015 Program Files
drw-rw-rw- 1 ftp ftp 0 Sep 23 2015 WINDOWS
226 File sent ok
```
However, if I want to list the contents of a folder containing white spaces such as `Documents and settings`, I am not able to list the directory contents because the white spaces are ignored.
```
ftp> ls ../../../../documents and settings/
usage: ls remote-directory local-file
ftp> ls ../../../../documents\ and\ settings
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents not found
226 File sent ok
ftp> ls ../../../../documents%20and%20settings
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents%20and%20settings not found
226 File sent ok
ftp> ls ../../../../'documents and settings'/
usage: ls remote-directory local-file
ftp> ls ../../../../"documents and settings"/
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents not found
226 File sent ok
ftp> ls "../../../../documents and settings/"
200 Port command successful.
150 Opening data connection for directory list.
/C:/Program Files/Femitter/Shared/../../../../documents not found
226 File sent ok
```
I already tried using different FTP clients (CLI and GUI, on Linux and Windows) and either they ignore white spaces or disallow directory traversal.
Also tried scripting the attack in Python, at first using raw sockets and then ftplib, to send the commands in hex format directly to the FTP server, but with no success.
Googling for a couple of hours did not yield a working solution (yes, there were a lot of options, none of which worked), which is why I am asking whether someone here has had the same issue. Pretty sure this is not the first time such a directory traversal with white spaces is needed. | 2016/02/23 | [
"https://superuser.com/questions/1044667",
"https://superuser.com",
"https://superuser.com/users/562934/"
] | >
> The "short name" is really the old DOS 8.3 naming convention, so all the directories will be the first 6 letters followed by ~1 assuming there is only one name that matches, for example:
>
>
> C:\ABCDEF~1 - C:\ABCDEFG I AM DIRECTORY
>
> C:\BCDEFG~1 - C:\BCDEFGHIJKL M Another Directory
>
>
> Here is the only exception:
>
>
> C:\ABCDEF~1 - C:\ABCDEFG I AM DIRECTORY
>
> C:\ABCDEF~2 - C:\ABCDEFGHI Directory as well
>
>
>
Source: [How can I find the short path of a Windows directory/file?](https://superuser.com/questions/348079/how-can-i-find-the-short-path-of-a-windows-directory-file) | FTP doesn't use URL encoding, so %xx won't work unless you're using FTP in a browser which can translate it for you.
Try using quotes around it instead, i.e.: ls "../../some dir"
27,623,389 | I have an ASP.NET Web API running locally on some port and I have an angularjs app running on 8080. I want to access the api from the client.
I can successfully login and register my application because my OAuthAuthorizationProvider explicitly sets the response headers in the /Token endpoint.
```
public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });
```
That's good. However, my other API methods do not seem to work. In my WebApiConfig.Register, I enable CORS and I add the EnableCors attribute to my controllers to allow all origins, all headers, and all methods. I can set a break point in my get method on the controller and it gets hit just fine. Here is what I found watching the Network tab in Chrome.
Two requests are sent to the same API method: one with method type OPTIONS and one with method type GET. The OPTIONS request header includes these two lines
>
> Access-Control-Request-Headers:accept, authorization
>
>
> Access-Control-Request-Method:GET
>
>
>
And the response includes these lines
>
> Access-Control-Allow-Headers:authorization
>
>
> Access-Control-Allow-Origin:\*
>
>
>
However, the GET method request looks quite different. It returns OK with a status code of 200, but it does not include any access control headers in the request or response. And like I said, it hits the API just fine. I can even do a POST and save to the database, but the client complains about the response!!
I've looked at every single SO question and tried every combination of enabling CORS. I'm using Microsoft.AspNet.Cors version 5.2.2. I'm using AngularJS version 1.3.8. I'm also using the $resource service instead of $http, which doesn't seem to make a difference either.
If I can provide more information, please let me know.
BTW, I can access the Web API using Fiddler and/or Postman by simply including the Bearer token. | 2014/12/23 | [
"https://Stackoverflow.com/questions/27623389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/81766/"
] | You don't seem to be handling the preflight `Options` *requests*.
`Web API` needs to respond to the `Options` request in order to confirm that it is indeed configured to support `CORS`.
To handle this, all you need to do is send an *empty response* back. You can do this inside your actions, or you can do it globally like this:
```
protected void Application_BeginRequest()
{
if (Request.Headers.AllKeys.Contains("Origin") && Request.HttpMethod == "OPTIONS")
{
Response.Flush();
}
}
```
This extra check was added to ensure that old `APIs` that were designed to accept only `GET` and `POST` requests will not be exploited. Imagine sending a `DELETE` request to an `API` designed when this *verb* didn't exist. The outcome is *unpredictable* and the results might be *dangerous*.
Also I suggest enabling *Cors* by **web.config** instead of `config.EnableCors(cors);`
This can be done by adding some custom headers inside the `<system.webServer>` node.
```
<httpProtocol>
<customHeaders>
<add name="Access-Control-Allow-Origin" value="*" />
<add name="Access-Control-Allow-Headers" value="Content-Type" />
<add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
</customHeaders>
</httpProtocol>
```
Please note that the *Methods* are all individually specified, instead of using `*`. This is because there is a bug occurring when using `*`. | This ended up being a simple fix. Simple, but it still doesn't take away from the bruises on my forehead. It seems like the more simple, the more frustrating.
I created my own custom cors policy provider attribute.
```
public class CorsPolicyProvider : Attribute, ICorsPolicyProvider
{
private CorsPolicy _policy;
public CorsPolicyProvider()
{
// Create a CORS policy.
_policy = new CorsPolicy
{
AllowAnyMethod = true,
AllowAnyHeader = true,
AllowAnyOrigin = true
};
// Magic line right here
_policy.Origins.Add("*");
}
public Task<CorsPolicy> GetCorsPolicyAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
return Task.FromResult(_policy);
}
}
```
I played around with this for hours. Everything should work right?? I mean the EnableCors attribute should work too?? But it didn't. So I finally added the line above to explicitly add the origin to the policy. BAM!! It worked like magic. To use this just add the attribute to your api class or method you want to allow.
```
[Authorize]
[RoutePrefix("api/LicenseFiles")]
[CorsPolicyProvider]
//[EnableCors(origins: "*", headers: "*", methods: "*")] does not work!!!!! at least I couldn't get it to work
public class MyController : ApiController
{
``` |
27,623,389 | I have an ASP.NET Web API running locally on some port and I have an angularjs app running on 8080. I want to access the api from the client.
I can successfully log in and register my application because my OAuthAuthorizationProvider explicitly sets the response headers in the /Token endpoint.
```
public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });
```
That's good. However, my other API methods do not seem to work. In my WebApiConfig.Register, I enable CORS and I add the EnableCors attribute to my controllers to allow all origins, all headers, and all methods. I can set a breakpoint in my GET method on the controller and it gets hit just fine. Here is what I found watching the Network tab in Chrome.
Two requests are sent to the same API method: one with method type OPTIONS and one with method type GET. The OPTIONS request header includes these two lines
>
> Access-Control-Request-Headers:accept, authorization
>
>
> Access-Control-Request-Method:GET
>
>
>
And the response includes these lines
>
> Access-Control-Allow-Headers:authorization
>
>
> Access-Control-Allow-Origin:\*
>
>
>
However, the GET method request looks quite different. It returns OK with a status code of 200, but it does not include any access-control headers in the request or response. And like I said, it hits the API just fine. I can even do a POST and save to the database, but the client complains about the response!!
I've looked at every single SO question and tried every combination of enabling CORS. I'm using Microsoft.AspNet.Cors version 5.2.2 and AngularJS version 1.3.8. I'm also using the $resource service instead of $http, which doesn't seem to make a difference either.
If I can provide more information, please let me know.
BTW, I can access the Web API using Fiddler and/or Postman by simply including the Bearer token. | 2014/12/23 | [
"https://Stackoverflow.com/questions/27623389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/81766/"
] | You don't seem to be handling the preflight `Options` *requests*.
`Web API` needs to respond to the `Options` request in order to confirm that it is indeed configured to support `CORS`.
To handle this, all you need to do is send an *empty response* back. You can do this inside your actions, or you can do it globally like this:
```
protected void Application_BeginRequest()
{
if (Request.Headers.AllKeys.Contains("Origin") && Request.HttpMethod == "OPTIONS")
{
Response.Flush();
}
}
```
This extra check was added to ensure that old `APIs` that were designed to accept only `GET` and `POST` requests will not be exploited. Imagine sending a `DELETE` request to an `API` designed when this *verb* didn't exist. The outcome is *unpredictable* and the results might be *dangerous*.
Also I suggest enabling *Cors* by **web.config** instead of `config.EnableCors(cors);`
This can be done by adding some custom headers inside the `<system.webServer>` node.
```
<httpProtocol>
<customHeaders>
<add name="Access-Control-Allow-Origin" value="*" />
<add name="Access-Control-Allow-Headers" value="Content-Type" />
<add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
</customHeaders>
</httpProtocol>
```
Please note that the *Methods* are all individually specified, instead of using `*`. This is because there is a bug occurring when using `*`. | in my case, after I changed the Identity option of my AppPool under IIS from ApplicationPoolIdentity to NetworkService, CORS stopped working in my app. |
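As an aside, the preflight handshake described in the first answer can be illustrated language-agnostically. Below is a minimal Python sketch of the headers a server must return for an OPTIONS preflight; the `preflight_response_headers` helper is hypothetical (not part of Web API), but the header names are the standard CORS ones:

```python
# Hypothetical helper: given the headers of an incoming OPTIONS request,
# return the headers the preflight response needs so the browser will
# allow the follow-up GET/POST. Echoing the requested headers back is
# what satisfies "Access-Control-Request-Headers: accept, authorization".
ALLOWED_METHODS = "GET, POST, PUT, DELETE, OPTIONS"

def preflight_response_headers(request_headers):
    if "Origin" not in request_headers:
        return {}  # not a CORS request; nothing to add
    return {
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Headers": request_headers.get(
            "Access-Control-Request-Headers", "Content-Type"),
        "Access-Control-Allow-Methods": ALLOWED_METHODS,
    }

print(preflight_response_headers({
    "Origin": "http://localhost:8080",
    "Access-Control-Request-Headers": "accept, authorization",
    "Access-Control-Request-Method": "GET",
}))
```

Note that the actual GET response must also carry `Access-Control-Allow-Origin`; answering only the preflight (as in the question, where the 200 GET response came back without any CORS headers) is exactly what makes the browser reject an otherwise-successful response.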
27,623,389 | I have an ASP.NET Web API running locally on some port and I have an angularjs app running on 8080. I want to access the api from the client.
I can successfully log in and register my application because my OAuthAuthorizationProvider explicitly sets the response headers in the /Token endpoint.
```
public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });
```
That's good. However, my other API methods do not seem to work. In my WebApiConfig.Register, I enable CORS and I add the EnableCors attribute to my controllers to allow all origins, all headers, and all methods. I can set a breakpoint in my GET method on the controller and it gets hit just fine. Here is what I found watching the Network tab in Chrome.
Two requests are sent to the same API method: one with method type OPTIONS and one with method type GET. The OPTIONS request header includes these two lines
>
> Access-Control-Request-Headers:accept, authorization
>
>
> Access-Control-Request-Method:GET
>
>
>
And the response includes these lines
>
> Access-Control-Allow-Headers:authorization
>
>
> Access-Control-Allow-Origin:\*
>
>
>
However, the GET method request looks quite different. It returns OK with a status code of 200, but it does not include any access-control headers in the request or response. And like I said, it hits the API just fine. I can even do a POST and save to the database, but the client complains about the response!!
I've looked at every single SO question and tried every combination of enabling CORS. I'm using Microsoft.AspNet.Cors version 5.2.2 and AngularJS version 1.3.8. I'm also using the $resource service instead of $http, which doesn't seem to make a difference either.
If I can provide more information, please let me know.
BTW, I can access the Web API using Fiddler and/or Postman by simply including the Bearer token. | 2014/12/23 | [
"https://Stackoverflow.com/questions/27623389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/81766/"
] | This ended up being a simple fix. Simple, but it still doesn't take away from the bruises on my forehead. It seems like the more simple, the more frustrating.
I created my own custom cors policy provider attribute.
```
public class CorsPolicyProvider : Attribute, ICorsPolicyProvider
{
private CorsPolicy _policy;
public CorsPolicyProvider()
{
// Create a CORS policy.
_policy = new CorsPolicy
{
AllowAnyMethod = true,
AllowAnyHeader = true,
AllowAnyOrigin = true
};
// Magic line right here
_policy.Origins.Add("*");
}
public Task<CorsPolicy> GetCorsPolicyAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
return Task.FromResult(_policy);
}
}
```
I played around with this for hours. Everything should work right?? I mean the EnableCors attribute should work too?? But it didn't. So I finally added the line above to explicitly add the origin to the policy. BAM!! It worked like magic. To use this just add the attribute to your api class or method you want to allow.
```
[Authorize]
[RoutePrefix("api/LicenseFiles")]
[CorsPolicyProvider]
//[EnableCors(origins: "*", headers: "*", methods: "*")] does not work!!!!! at least I couldn't get it to work
public class MyController : ApiController
{
``` | in my case, after I changed the Identity option of my AppPool under IIS from ApplicationPoolIdentity to NetworkService, CORS stopped working in my app. |
255,399 | Is there a security risk to disabling the Windows user account password, since my PC is already unlocked with a complex PIN at boot time? I have my PC configured with sleep disabled. I'm running Windows 10 Pro.
For example, is Windows network security reduced? | 2021/09/20 | [
"https://security.stackexchange.com/questions/255399",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/267434/"
] | Weaknesses of running passwordless with strong BitLocker:
* BitLocker might get temporarily suspended during certain updates (this is required with TPM-based protection when updating certain boot code, and happens automatically) which presents a window to steal the machine and get everything.
* An attacker who steals your computer while it's on can get everything.
+ You can't meaningfully "lock" the computer except via shutdown/hibernate, which take time or risk losing data.
* A malicious process running as a different low-privilege user can access your account easily.
+ This is a problem if you have multiple user accounts for different people.
+ This is a problem if there's a low-privilege service account that gets compromised.
* Authentication mechanisms that aren't technically "network log in" operations (as Windows defines them) will still work against you.
+ People won't be able to Remote Desktop in as you (by default), but they might be able to SSH in as you (if you enable the SSH server).
* Your cryptographic secrets (EFS keys, DPAPI keys, certificate private keys, passwords saved in the credential vault, etc.) will be essentially unprotected (though this might not matter to you since it would need to be a local attacker). | A user with a blank password cannot, by default, perform network logons.
This is controlled by the local security policy option "Limit local account use of blank passwords to console logon only", which is enabled by default. What this option means is that a local user account that has a blank password cannot be used to log onto the system from anywhere other than the computer's physical location.
[](https://i.stack.imgur.com/ITWi4.png) |
57,847,827 | I am trying to connect to a database hosted on mongo atlas from a service running on elastic beanstalk. I am getting the error:
`UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [youmaylike-shard-00-01-necsu.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to youmaylike-shard-00-01-necsu.mongodb.net:27017 closed]`
I believe this is happening because I don't have the IP address of my service whitelisted on Atlas. I am unsure how to get the IP address for my service; I tried running `eb ssh`, but I'm not sure the value it gave me is correct. | 2019/09/09 | [
"https://Stackoverflow.com/questions/57847827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9650710/"
] | There are multiple ways to get it, below two:
Before using the AWS console or the AWS CLI, run `eb health` and get the instance ID or IDs for your deployment.
1. Using the AWS Console, go to EC2 and then Instances, find the instance ID or IDs, click it, and in the pane below the IP will be listed under "IPv4 Public IP"
2. Using the AWS CLI: `aws ec2 describe-instances --instance-ids <YOUR INSTANCE ID or IDS HERE>` | The public IP depends upon the configuration of your Elastic Beanstalk instances.
**Internet Access:**
Instances must have access to the Internet through one of the following methods.
**Public Subnet**
* Instances have a public IP address and use an Internet Gateway to access the Internet.
**Private Subnet**
* Instances use a NAT device to access the Internet.
So, if it's behind an Internet Gateway then you can check [here](https://forums.aws.amazon.com/thread.jspa?threadID=93137) for whitelisting, or this might help too:
```
aws ec2 describe-instances --instance-ids i-0c9c9b44b --query 'Reservations[*].Instances[*].PublicIpAddress' --output text
```
or
```
curl http://checkip.amazonaws.com
```
If it's behind NAT then you need to whitelist the NAT Gateway IP.
Go to [VPC](https://us-west-2.console.aws.amazon.com/vpc/home?region=us-west-2#) -> Select NAT Gateways -> Copy the Elastic IP or public IP address of the NAT Gateway and whitelist this IP on the Atlas side. |
57,847,827 | I am trying to connect to a database hosted on mongo atlas from a service running on elastic beanstalk. I am getting the error:
`UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [youmaylike-shard-00-01-necsu.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to youmaylike-shard-00-01-necsu.mongodb.net:27017 closed]`
I believe this is happening because I don't have the IP address of my service whitelisted on Atlas. I am unsure how to get the IP address for my service; I tried running `eb ssh`, but I'm not sure the value it gave me is correct. | 2019/09/09 | [
"https://Stackoverflow.com/questions/57847827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9650710/"
] | There are multiple ways to get it, below two:
Before using the AWS console or the AWS CLI, run `eb health` and get the instance ID or IDs for your deployment.
1. Using the AWS Console, go to EC2 and then Instances, find the instance ID or IDs, click it, and in the pane below the IP will be listed under "IPv4 Public IP"
2. Using the AWS CLI `aws ec2 describe-instances --instance-ids <YOUR INSTANCE ID or IDS HERE>` | From MongoDB Atlas support:
If you have dynamic IP addresses, you have the following options;
1. You can use the Atlas Public API to dynamically add and remove IPs from your whitelist. For MongoDB Atlas documentation on configuring Atlas API Access, please click here.
2. You can use VPC Peering (M10+ instances only) to link your Atlas cluster to your existing VPC. For documentation Setting up a VPC peering connection in MongoDB Atlas, please click here.
3. Or you can set your whitelist to 0.0.0.0/0 to allow the entire Internet into your IP whitelist. For MongoDB Atlas documentation on adding entries to your IP Whitelist, please click here. Please note that adding 0.0.0.0/0 to the cluster’s whitelist can expose the cluster to denial of service attacks. Also, please be aware that Heroku uses dynamic IPs, so you will have to add 0.0.0.0/0 to the whitelist when using Heroku to connect to your Atlas Cluster.
See [asked question on their FAQ](https://intercom.help/mongodb-atlas/en/articles/1560740-using-dynamic-ip-addresses-with-atlas). |
46,627,955 | I'm attempting to make a conditional GPU LED color-changing script; however, this has been more of a challenge than I thought it would be. I was really sure it would be pretty straightforward, but I can't seem to find anything on this GPU feature. So, has anyone heard of an API for such purposes, or am I going to need to deal directly with the hardware (which I've never done before)?
I know this question is a little too unspecific, and I apologize for that; however, I really need at least a direction on where to start, as I've never ever ventured this deep into my electric charge conductors (a.k.a. hardware).
Obs: The GPU in question is a Gigabyte GTX 1060 D5 Rev. 2 (I also failed to find any documentation on it… besides the "User's guide" at least, which is "not really helpful" at best)
"https://Stackoverflow.com/questions/46627955",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7432363/"
] | If you're using gcc 6, you're most likely running into [this](https://lists.gnu.org/archive/html/bug-binutils/2017-02/msg00262.html) bug (note that the bug is not specific to Debian but depends on how gcc was built). A workaround is simply to compile using the `-no-pie` option, which disables position-independent code generation.
[This](https://stackoverflow.com/questions/2463150/what-is-the-fpie-option-for-position-independent-executables-in-gcc-and-ld) is a good start if you want to know more about PIE. | >
> gprof seems to fail to collect data from my program. Here is my command line:
>
>
>
> ```
> g++ -Wall -O3 -g -pg -o fftw_test fftw_test.cpp -lfftw3 -lfftw3_threads -lm && ./fftw_test
>
> ```
>
>
Your program uses the fftw library and probably consists almost entirely of fftw library calls. What is the running time? Your program may be too fast to be profiled with gprof. *Update* And the library may not be seen by gprof, as it was compiled without gprof profiling enabled.
GNU gprof has two parts. First, it instruments function calls in c/cpp files which were compiled with the `-pg` option (with mcount function calls - <https://en.wikipedia.org/wiki/Gprof>) - to get caller/callee info. Second, it links an additional profiling library into your executable to add periodic sampling, to find which code was executed for more of the time. Sampling is done with profil (setitimer). Setitimer profiling has limited resolution and can't resolve intervals smaller than 10 ms or 1 ms (100 or 1000 samples per second).
And in your example, the fftw library was probably compiled without instrumentation, so there are no `mcount` calls in it. It can still be captured by the sampling part, but only for the main thread of the program (<https://en.wikipedia.org/wiki/Gprof> - "typically it only profiles the main thread of application").
The `perf` profiler has no instrumentation with `mcount` (it gets caller/callee info from stack unwinding when recorded with the `-g` option), but it has much better statistics/sampling variants (it can use hardware PMU counters), without the 100 or 1000 Hz limit, and it supports (profiles) threads correctly. Try `perf record -F1000 ./fftw_test` (with 1 kHz sampling frequency) and `perf report` or `perf report > report.txt`. There are some GUI/HTML frontends to perf too: <https://github.com/KDAB/hotspot> <https://github.com/jrfonseca/gprof2dot>
For a better setitimer-style profiler, check google-perftools <https://github.com/gperftools/gperftools> for "CPU PROFILER".
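The instrumentation-versus-sampling distinction is not specific to C/C++. Purely as an analogy: Python's built-in cProfile is a deterministic (instrumentation-style) profiler, so it counts every call even on a run far too short for timer-based sampling — a quick sketch:

```python
import cProfile
import io
import pstats
import random

def rand_scale():  # analogous to the rand_scale1() wrapper above
    return random.random()

def main(n=100_000):
    return sum(rand_scale() for _ in range(n))

# Deterministic profiling: every function entry/exit is recorded,
# so rand_scale() shows its full call count even on a sub-second run.
profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats()
report = buf.getvalue()
print(report)
```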
======
With your test I have some gprof results on Debian 8.6 Linux kernel version 3.16.0-4-amd64 x86\_64 machine, g++ (Debian 4.9.2-10), gprof is "GNU gprof (GNU Binutils for Debian) 2.27"
```
$ cat gprof_test.cpp
#include <cstdlib>
#include <ctime>
#include <iostream>
int main()
{
std::srand( std::time( 0 ) );
double sum = 0.0;
for ( int i = 0; i < 100000000; ++i )
sum += std::rand() / ( double ) RAND_MAX;
std::cout << sum << '\n';
return 0;
}
$ g++ -Wall -O3 -g -pg -o gprof_test gprof_test.cpp && time ./gprof_test
5.00069e+06
real 0m0.992s
$ gprof -b gprof_test gmon.out
Flat profile:
Each sample counts as 0.01 seconds.
no time accumulated
% cumulative self self total
time seconds seconds calls Ts/call Ts/call name
0.00 0.00 0.00 1 0.00 0.00 _GLOBAL__sub_I_main
```
So, gprof did not catch any time samples in this 1 second example and has no information about calls into the libraries (**they were compiled without `-pg`**). After adding some wrapper functions and prohibiting inline optimization I get some data from gprof, but library time was not accounted for (it sees 0.72 seconds of the 2-second runtime):
```
$ cat *cpp
#include <cstdlib>
#include <ctime>
#include <iostream>
int rand_wrapper1()
{
return std::rand();
}
int rand_scale1()
{
return rand_wrapper1() / ( double ) RAND_MAX;
}
int main()
{
std::srand( std::time( 0 ) );
double sum = 0.0;
for ( int i = 0; i < 100000000; ++i )
sum+= rand_scale1();
// sum += std::rand() / ( double ) RAND_MAX;
std::cout << sum << '\n';
return 0;
}
$ g++ -Wall -O3 -fno-inline -g -pg -o gprof_test gprof_test.cpp && time ./gprof_test
real 0m2.345s
$ gprof -b gprof_test gmon.out
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls ns/call ns/call name
80.02 0.57 0.57 rand_scale1()
19.29 0.71 0.14 100000000 1.37 1.37 rand_wrapper1()
2.14 0.72 0.02 frame_dummy
0.00 0.72 0.00 1 0.00 0.00 _GLOBAL__sub_I__Z13rand_wrapper1v
0.00 0.72 0.00 1 0.00 0.00 __static_initialization_and_destruction_0(int, int) [clone .constprop.0]
Call graph
granularity: each sample hit covers 2 byte(s) for 1.39% of 0.72 seconds
index % time self children called name
<spontaneous>
[1] 97.9 0.57 0.14 rand_scale1() [1]
0.14 0.00 100000000/100000000 rand_wrapper1() [2]
-----------------------------------------------
0.14 0.00 100000000/100000000 rand_scale1() [1]
[2] 19.0 0.14 0.00 100000000 rand_wrapper1() [2]
```
And perf sees all parts:
```
$ perf record ./gprof_test
0
[ perf record: Woken up 2 times to write data ]
[ perf record: Captured and wrote 0.388 MB perf.data (~16954 samples) ]
$ perf report |more
# Samples: 9K of event 'cycles'
# Event count (approx.): 7373484231
#
# Overhead Command Shared Object Symbol
# ........ .......... ................. .........................
#
25.91% gprof_test gprof_test [.] rand_scale1()
21.65% gprof_test libc-2.19.so [.] __mcount_internal
13.88% gprof_test libc-2.19.so [.] _mcount
12.54% gprof_test gprof_test [.] main
9.35% gprof_test libc-2.19.so [.] __random_r
8.40% gprof_test libc-2.19.so [.] __random
3.97% gprof_test gprof_test [.] rand_wrapper1()
2.79% gprof_test libc-2.19.so [.] rand
1.41% gprof_test gprof_test [.] mcount@plt
0.03% gprof_test [kernel.kallsyms] [k] memset
``` |
46,627,955 | I'm attempting to make a conditional GPU LED color-changing script; however, this has been more of a challenge than I thought it would be. I was really sure it would be pretty straightforward, but I can't seem to find anything on this GPU feature. So, has anyone heard of an API for such purposes, or am I going to need to deal directly with the hardware (which I've never done before)?
I know this question is a little too unspecific, and I apologize for that; however, I really need at least a direction on where to start, as I've never ever ventured this deep into my electric charge conductors (a.k.a. hardware).
Obs: The GPU in question is a Gigabyte GTX 1060 D5 Rev. 2 (I also failed to find any documentation on it… besides the "User's guide" at least, which is "not really helpful" at best)
"https://Stackoverflow.com/questions/46627955",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7432363/"
] | I presume the problem comes from the fact you are using `O3` level of optimisation. With gcc-8.4.0 I get nothing with `O3`, limited data (e.g. number of function calls missing) with `O2` and proper profile for `O1` and `O0`.
This seems to have been a [known bug](https://bugs.launchpad.net/ubuntu/+source/gcc-6/+bug/1678510) with older versions of gcc, yet I did not come across any sources regarding such a problem with later versions. I can only hypothesize whether it is a bug with the compiler or whether more aggressive optimisations prevent some performance data from being collected. | >
> gprof seems to fail to collect data from my program. Here is my command line:
>
>
>
> ```
> g++ -Wall -O3 -g -pg -o fftw_test fftw_test.cpp -lfftw3 -lfftw3_threads -lm && ./fftw_test
>
> ```
>
>
Your program uses the fftw library and probably consists almost entirely of fftw library calls. What is the running time? Your program may be too fast to be profiled with gprof. *Update* And the library may not be seen by gprof, as it was compiled without gprof profiling enabled.
GNU gprof has two parts. First, it instruments function calls in c/cpp files which were compiled with the `-pg` option (with mcount function calls - <https://en.wikipedia.org/wiki/Gprof>) - to get caller/callee info. Second, it links an additional profiling library into your executable to add periodic sampling, to find which code was executed for more of the time. Sampling is done with profil (setitimer). Setitimer profiling has limited resolution and can't resolve intervals smaller than 10 ms or 1 ms (100 or 1000 samples per second).
And in your example, the fftw library was probably compiled without instrumentation, so there are no `mcount` calls in it. It can still be captured by the sampling part, but only for the main thread of the program (<https://en.wikipedia.org/wiki/Gprof> - "typically it only profiles the main thread of application").
The `perf` profiler has no instrumentation with `mcount` (it gets caller/callee info from stack unwinding when recorded with the `-g` option), but it has much better statistics/sampling variants (it can use hardware PMU counters), without the 100 or 1000 Hz limit, and it supports (profiles) threads correctly. Try `perf record -F1000 ./fftw_test` (with 1 kHz sampling frequency) and `perf report` or `perf report > report.txt`. There are some GUI/HTML frontends to perf too: <https://github.com/KDAB/hotspot> <https://github.com/jrfonseca/gprof2dot>
For a better setitimer-style profiler, check google-perftools <https://github.com/gperftools/gperftools> for "CPU PROFILER".
======
With your test I have some gprof results on Debian 8.6 Linux kernel version 3.16.0-4-amd64 x86\_64 machine, g++ (Debian 4.9.2-10), gprof is "GNU gprof (GNU Binutils for Debian) 2.27"
```
$ cat gprof_test.cpp
#include <cstdlib>
#include <ctime>
#include <iostream>
int main()
{
std::srand( std::time( 0 ) );
double sum = 0.0;
for ( int i = 0; i < 100000000; ++i )
sum += std::rand() / ( double ) RAND_MAX;
std::cout << sum << '\n';
return 0;
}
$ g++ -Wall -O3 -g -pg -o gprof_test gprof_test.cpp && time ./gprof_test
5.00069e+06
real 0m0.992s
$ gprof -b gprof_test gmon.out
Flat profile:
Each sample counts as 0.01 seconds.
no time accumulated
% cumulative self self total
time seconds seconds calls Ts/call Ts/call name
0.00 0.00 0.00 1 0.00 0.00 _GLOBAL__sub_I_main
```
So, gprof did not catch any time samples in this 1 second example and has no information about calls into the libraries (**they were compiled without `-pg`**). After adding some wrapper functions and prohibiting inline optimization I get some data from gprof, but library time was not accounted for (it sees 0.72 seconds of the 2-second runtime):
```
$ cat *cpp
#include <cstdlib>
#include <ctime>
#include <iostream>
int rand_wrapper1()
{
return std::rand();
}
int rand_scale1()
{
return rand_wrapper1() / ( double ) RAND_MAX;
}
int main()
{
std::srand( std::time( 0 ) );
double sum = 0.0;
for ( int i = 0; i < 100000000; ++i )
sum+= rand_scale1();
// sum += std::rand() / ( double ) RAND_MAX;
std::cout << sum << '\n';
return 0;
}
$ g++ -Wall -O3 -fno-inline -g -pg -o gprof_test gprof_test.cpp && time ./gprof_test
real 0m2.345s
$ gprof -b gprof_test gmon.out
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls ns/call ns/call name
80.02 0.57 0.57 rand_scale1()
19.29 0.71 0.14 100000000 1.37 1.37 rand_wrapper1()
2.14 0.72 0.02 frame_dummy
0.00 0.72 0.00 1 0.00 0.00 _GLOBAL__sub_I__Z13rand_wrapper1v
0.00 0.72 0.00 1 0.00 0.00 __static_initialization_and_destruction_0(int, int) [clone .constprop.0]
Call graph
granularity: each sample hit covers 2 byte(s) for 1.39% of 0.72 seconds
index % time self children called name
<spontaneous>
[1] 97.9 0.57 0.14 rand_scale1() [1]
0.14 0.00 100000000/100000000 rand_wrapper1() [2]
-----------------------------------------------
0.14 0.00 100000000/100000000 rand_scale1() [1]
[2] 19.0 0.14 0.00 100000000 rand_wrapper1() [2]
```
And perf sees all parts:
```
$ perf record ./gprof_test
0
[ perf record: Woken up 2 times to write data ]
[ perf record: Captured and wrote 0.388 MB perf.data (~16954 samples) ]
$ perf report |more
# Samples: 9K of event 'cycles'
# Event count (approx.): 7373484231
#
# Overhead Command Shared Object Symbol
# ........ .......... ................. .........................
#
25.91% gprof_test gprof_test [.] rand_scale1()
21.65% gprof_test libc-2.19.so [.] __mcount_internal
13.88% gprof_test libc-2.19.so [.] _mcount
12.54% gprof_test gprof_test [.] main
9.35% gprof_test libc-2.19.so [.] __random_r
8.40% gprof_test libc-2.19.so [.] __random
3.97% gprof_test gprof_test [.] rand_wrapper1()
2.79% gprof_test libc-2.19.so [.] rand
1.41% gprof_test gprof_test [.] mcount@plt
0.03% gprof_test [kernel.kallsyms] [k] memset
``` |
46,627,955 | I'm attempting to make a conditional GPU LED color-changing script; however, this has been more of a challenge than I thought it would be. I was really sure it would be pretty straightforward, but I can't seem to find anything on this GPU feature. So, has anyone heard of an API for such purposes, or am I going to need to deal directly with the hardware (which I've never done before)?
I know this question is a little too unspecific, and I apologize for that; however, I really need at least a direction on where to start, as I've never ever ventured this deep into my electric charge conductors (a.k.a. hardware).
Obs: The GPU in question is a Gigabyte GTX 1060 D5 Rev. 2 (I also failed to find any documentation on it… besides the "User's guide" at least, which is "not really helpful" at best)
"https://Stackoverflow.com/questions/46627955",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7432363/"
] | If you're using gcc 6, you're most likely running into [this](https://lists.gnu.org/archive/html/bug-binutils/2017-02/msg00262.html) bug (note that the bug is not specific to Debian but depends on how gcc was built). A workaround is simply to compile using the `-no-pie` option, which disables position-independent code generation.
[This](https://stackoverflow.com/questions/2463150/what-is-the-fpie-option-for-position-independent-executables-in-gcc-and-ld) is a good start if you want to know more about PIE. | I presume the problem comes from the fact you are using `O3` level of optimisation. With gcc-8.4.0 I get nothing with `O3`, limited data (e.g. number of function calls missing) with `O2` and proper profile for `O1` and `O0`.
This seems to have been a [known bug](https://bugs.launchpad.net/ubuntu/+source/gcc-6/+bug/1678510) with older versions of gcc, yet I did not come across any sources regarding such a problem with later versions. I can only hypothesize whether it is a bug with the compiler or whether more aggressive optimisations prevent some performance data from being collected. |
29,118,085 | **Description:**
I have a case of finding a solution to a problem. The rules to find the solution are as follows:
>
> Case 1: `IF T01 AND T02 AND T03 THEN S01`
>
>
> Case 2: `IF T04 THEN S02`
>
>
> Case 3: `IF T04 AND T05 AND T06 THEN S03`
>
>
>
The questions are presented in an order determined by the decision tree. When problem 1 (T1) is asked, the user answers yes or no: 'yes' if they have that problem, 'no' if they do not. The next problem is then asked, and so on, until a solution (S) is found.
**My question:**
1. How do I apply the rules or the decision tree in a database?
2. Are there other ways to find a solution (S), given that the problem questions must be asked in a sequence based on the decision tree?
please see the decision tree on the following link [here](http://oi59.tinypic.com/24cw9ye.jpg).
**Caption:**
```
T = trouble/problem;
S = solution;
Y = if answer is YES;
N = if answer is NO;
```
Thank you | 2015/03/18 | [
"https://Stackoverflow.com/questions/29118085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4684421/"
] | In bash, square brackets are used for test conditions. Hence change
```
if ( $? == 0 ) then
```
to
```
if [ $? == 0 ]; then
```
Edit: Change
```
mail -s "Linux backup" "[email protected]"
$Mailtext
```
to
```
echo $Mailtext | mail -s "Linux backup" [email protected]
```
To verify that you are able to send and receive mail, try sending a mail with dummy text as below.
```
echo "Testing Mail" | mail -s "Linux backup" [email protected]
``` | As commented, your variable assignment for Mailtext is inefficient (it works, but it makes no sense to use an echo command to assign a text value).
As for your email sending, your mail command invocation should be:
```
echo $Mailtext | mail -s "Linux backup" "[email protected]"
``` |
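Back to the decision-tree question itself: before worrying about database storage, the three IF/THEN rules can be represented as plain data and matched against the user's yes-answers. A minimal Python illustration (the rule table is copied from the question; checking more specific rules first avoids Case 2 shadowing Case 3):

```python
# Rule table from the question: each rule maps a set of confirmed
# problems (T...) to a solution (S...).
RULES = [
    ({"T01", "T02", "T03"}, "S01"),  # Case 1
    ({"T04"}, "S02"),                # Case 2
    ({"T04", "T05", "T06"}, "S03"),  # Case 3
]

def find_solution(confirmed_problems):
    """Return the solution of the most specific rule whose conditions
    are all answered 'yes', or None if no rule fully matches."""
    confirmed = set(confirmed_problems)
    # Most specific (largest condition set) first, so T04+T05+T06
    # resolves to S03 rather than stopping at the single-condition S02.
    for conditions, solution in sorted(RULES, key=lambda r: -len(r[0])):
        if conditions <= confirmed:
            return solution
    return None

print(find_solution({"T04"}))                # S02
print(find_solution({"T04", "T05", "T06"}))  # S03
print(find_solution({"T01", "T02"}))         # None: Case 1 needs T03 too
```

Stored in a database, each rule row maps naturally onto a table of (rule_id, condition) pairs plus a (rule_id, solution) table; the decision tree then only dictates the order in which the T answers are collected, not the matching itself.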
9,282 | I am looking for algorithms to prioritize equipment renewals.
Input: (years since last renewal, cost of renewal, importance of renewal).
Output: An ordering of the equipment according to which it will be renewed.
I do not know if there are any algorithms for this particular problem. If you have any idea how to fit this problem into a more general context, that would be useful too.
A way to rephrase the problem:
You have $n$ pieces of equipment $E\_1,\ldots,E\_n$. For each piece $E\_i$ you have a triple $(\text{age}\_i,\text{cost}\_i,\text{importance}\_i)$. At the beginning of the year you have $X$ amount of money. You want to spend this money in order to *minimize* the function $\sum\_i \text{age}\_i\cdot \text{importance}\_i$ at the end of the year. So, during the year you have to select a subset $S$ of $\{1,\ldots,n\}$ such that:
$$\sum\_{i\in S} \text{cost}\_i\le X \text{ (cost constraint)}$$ and the sum $$\sum\_{i\in S} \text{age}\_i\cdot \text{importance}\_i$$ is *maximal* among all subsets of $\{1,\ldots,n\}$ that satisfy the cost constraint.
Any help? | 2013/01/29 | [
"https://cs.stackexchange.com/questions/9282",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/6610/"
] | Given the clarification to the question, it is a 0-1 knapsack problem (your knapsack is the money available, the value is the objective function), and look for ways to solve that one, for example following the [Wikipedia lead](http://en.wikipedia.org/wiki/Knapsack_problem#0-1_knapsack_problem).
**Edit:** A simple approximate algorithm would be just to sort equipment on increasing cost per unit of value (age \* importance), and in that order include each item if it fits (doesn't run over the maximal cost). In the case of the continuous knapsack (i.e., where you can include a fraction of an item) that strategy is optimal. It *should* work fine if there are many items and the data isn't too spread out. | It all depends on how you decide whether a piece of equipment is to be renewed. But in the end, you may sort the lists lexicographically. That is, define a *comparison function* that takes as input two sets $X = (x \_1, ..., x\_k)$ and $Y = (y \_1, ..., y \_k)$; it will return true if $X \succ Y$ and false otherwise, where $X \succ Y$ if:
* $x \_1 \succ y \_1$ or:
* if there is a $j$ s.t. $1 < j \leq k$, $x \_j \succ y \_j$ and $x \_i = y \_i$ for each $1 \leq i < j$.
Use this function with any sorting algorithm you know. (but instead of sorting numbers, you sort lists according to the set comparison function you define). |
6,315,262 | I am trying to create a number of jQuery dialogs, but I would like to constrain their positions to inside a parent div. I am using the following code to create them (on a side note, the opacity option is not working either...):
```
var d= $('<div title="Title goes here"></div>').dialog({
autoOpen: true,
closeOnEscape: false,
draggable: true,
resizable: false,
width: dx,
height: dy
});
d.draggable('option', 'containment', 'parent');
d.draggable('option', 'opacity', 0.45);
$('#controlContent').append(d.parent());
``` | 2011/06/11 | [
"https://Stackoverflow.com/questions/6315262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | A more helpful and complete version of the above solution.
It even keeps resizing from going outside of the div too!
And the JavaScript is fully commented.
```
// Public Domain
// Feel free to use any of the JavaScript, HTML, and CSS for any commercial or private projects. No attribution required.
// I'm not responsible if you blow anything up with this code (or anything else you could possibly do with this code).
jQuery(function($)
{
// When the document is ready run the code inside the brackets.
$("button").button(); // Makes the button fancy (ie. jquery-ui styled)
$("#dialog").dialog(
{
// Set the settings for the jquery-ui dialog here.
autoOpen: false, // Don't open the dialog instantly. Let an event such as a button press open it. Optional.
position: { my: "center", at: "center", of: "#constrained_div" } // Set the position to center of the div.
}).parent().resizable(
{
// Settings that will execute when resized.
containment: "#constrained_div" // Constrains the resizing to the div.
}).draggable(
{
// Settings that execute when the dialog is dragged. If parent isn't used the text content will have dragging enabled.
containment: "#constrained_div", // The element the dialog is constrained to.
opacity: 0.70 // Fancy opacity. Optional.
});
$("#button").click(function()
{
// When our fancy button is pressed the stuff inside the brackets will be executed.
$("#dialog").dialog("open"); // Opens the dialog.
});
});
```
<http://jsfiddle.net/emilhem/rymEh/33/> | I have found a way to do it. This is now my method for creating a dialog:
```
var d = $('<div title="Title"></div>').dialog({
autoOpen: true,
closeOnEscape: false,
resizable: false,
width: 100,
height: 100
});
d.parent().find('a').find('span').attr('class', 'ui-icon ui-icon-minus');
d.parent().draggable({
containment: '#controlContent',
opacity: 0.70
});
$('#controlContent').append(d.parent());
``` |
20,640,249 | I have just set up Flurry to track uncaught exceptions but it is not being called.
1. I have the most recent Flurry SDK.
2. In the AppDelegate.m I have imported "Flurry.h"
3. I have the following method to log errors:
```
void uncaughtExceptionHandler(NSException *exception){
[Flurry logError:@"Uncaught" message:@"Crash!" exception:exception];
}
```
4. In the application didFinishLaunchingWithOptions method I have set the following:
```
- [Flurry setCrashReportingEnabled:YES];
- NSSetUncaughtExceptionHandler(&uncaughtExceptionHandler);
- [Flurry startSession:@"flurry key"];
```
I have purposely written some code to make the app crash but I don't see anything getting logged in Flurry. (Flurry.com/Events/Event Logs) I have been crashing the app since yesterday.
I am using an ipad not the simulator to test. | 2013/12/17 | [
"https://Stackoverflow.com/questions/20640249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3071579/"
] | The calling order should be this way:
```
[Flurry setCrashReportingEnabled:YES];
[Flurry startSession:@"flurry key"];
NSSetUncaughtExceptionHandler(&uncaughtExceptionHandler);
It may have to be in the App Store for Flurry to report crashes.
Try `bugsnag` for error handling; it is much better. Flurry is awesome at analytics, but bugs are better reported at `bugsnag`. |
3,422,761 | I would like to convert an int to BSTR. I'm using createTextNode in MSXML which accepts BSTR. How can I do that please? | 2010/08/06 | [
"https://Stackoverflow.com/questions/3422761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/402816/"
] | Probably not efficient but first convert to a string and then you can simply convert that (untested):
```
std::wstring convertToString(int value)
{
std::wstringstream ss;
ss << value;
return ss.str();
}
_bstr_t theConverted(convertToString(42).c_str());
``` | [Data Type Conversion Functions [Automation]](http://msdn.microsoft.com/en-us/library/b69504cf-6c80-4de1-a26e-9281ab848c71%28VS.85%29#functions_to_convert_to_type_bstr) (MSDN), see "Functions to convert to type BSTR" section. |
3,422,761 | I would like to convert an int to BSTR. I'm using createTextNode in MSXML which accepts BSTR. How can I do that please? | 2010/08/06 | [
"https://Stackoverflow.com/questions/3422761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/402816/"
] | Probably not efficient but first convert to a string and then you can simply convert that (untested):
```
std::wstring convertToString(int value)
{
std::wstringstream ss;
ss << value;
return ss.str();
}
_bstr_t theConverted(convertToString(42).c_str());
``` | ```
int number = 123;
_bstr_t bstr = (long)number;
```
([Source](http://www.codeguru.com/forum/showthread.php?t=122576)) |
3,422,761 | I would like to convert an int to BSTR. I'm using createTextNode in MSXML which accepts BSTR. How can I do that please? | 2010/08/06 | [
"https://Stackoverflow.com/questions/3422761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/402816/"
] | ```
int number = 123;
_bstr_t bstr = (long)number;
```
([Source](http://www.codeguru.com/forum/showthread.php?t=122576)) | [Data Type Conversion Functions [Automation]](http://msdn.microsoft.com/en-us/library/b69504cf-6c80-4de1-a26e-9281ab848c71%28VS.85%29#functions_to_convert_to_type_bstr) (MSDN), see "Functions to convert to type BSTR" section. |
3,313,370 | We are building a daily newsletter based on member preferences. The member can choose a city and some categories among a list of 10. Basically each email will be different. Each email is generated by our server.
We are unable to find a provider with an API that can do that. Would you have any solution that ensures 99% delivery?
Thank you
Damien | 2010/07/22 | [
"https://Stackoverflow.com/questions/3313370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/399619/"
] | That sounds like a problem more with your Hadoop setup than with Eclipse. Make sure you have all the pieces of your cluster running, i.e. DataNode(s), TaskTracker(s), JobTracker. If those are all running, it might be a problem with the way you're setting up the job. | Are you bent on doing this in Java? If not, you can use a Ruby gem called WUKONG that has a pagerank example <http://github.com/mrflip/wukong/tree/master/examples/pagerank/> |