qid: int64 (1 to 74.7M)
question: string (length 0 to 58.3k)
date: string (length 10)
metadata: list
response_j: string (length 2 to 48.3k)
response_k: string (length 2 to 40.5k)
70,405,640
What should the CSS be to align them on the same line? ``` <div class="discount-tab"> <h1 class="discount-heading">Book your first adventure with us at 10% discount</h1> <button class="mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect mdl-button--accent">Book now</button> </div> ```
2021/12/18
[ "https://Stackoverflow.com/questions/70405640", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17223877/" ]
You can set the `h1` tag to: `display: inline-block`. It'll only take up the space it needs that way. Another approach, which might be even better is to make use of flexbox. Flexbox can align all the items vertically and/or horizontally. ``` .discount-tab { display: flex; flex-direction: row; align-items: center; } ```
You can use [grid](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout) `display: grid;` and add a little styling to the button. ```css .discount-tab { display: grid; align-items: center; justify-content: center; grid-template-columns: 4fr 1fr; } .mdl-button { max-width: 100px; } ``` ```html <div class="discount-tab"> <h1 class="discount-heading">Book your first adventure with us at 10% discount</h1> <button class="mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect mdl-button--accent">Book now</button> </div> ```
15,710
I was travelling on an Airbus A320 (night flight) and during the safety briefing noticed that the fluorescent floor lighting strip referred to in the evacuation instructions was totally absent, on both sides of the aisle. I could see the marks on the floor carpet where it must have been tacked on earlier. It was removed and went unreplaced for whatever reason. Is it OK for an aircraft to fly without this item? It seemed pretty essential to me for a safe evacuation in the dark. Edit: I did try to take a photo, but unfortunately it didn't come out so well. ![enter image description here](https://i.stack.imgur.com/hPf7r.jpg)
2015/06/11
[ "https://aviation.stackexchange.com/questions/15710", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/7611/" ]
According to [this document](http://fsims.faa.gov/wdocs/mmel/a-320%20r21.pdf), the answer would appear to be no if all the lights were missing. ![enter image description here](https://i.stack.imgur.com/vj58n.png) The 1/1 columns indicate the number installed and the number required for dispatch.
Those lights don't have to be mounted to the floor. They can also be seat mounted. <http://www.bruceind.com/index.php?option=com_k2&view=item&id=90:escape-path-lighting-systems&Itemid=189> <http://www.astronics.com/_images/aircraft-safety/EPM%20033010.pdf> ![Seat mounted escape path lights](https://i.stack.imgur.com/8FvPa.png) This is the AC that provides guidance on the requirements of the system. <http://www.faa.gov/documentLibrary/media/Advisory_Circular/AC25.812-1A.pdf> The regulation is [14CFR 25.812](http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&SID=67a8813bf9d9da0aa64e74e2e5ced957&rgn=div8&view=text&node=14:1.0.1.3.11.4.178.62&idno=14 "14CFR25.812")
25,284
Why do summary routes get a lower AD than other routing protocols? For example: The AD of EIGRP is 90, whereas the AD of a summary route is 5.
2015/12/17
[ "https://networkengineering.stackexchange.com/questions/25284", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/21408/" ]
Since a summarized route means that the router advertising it has knowledge of the individual routes within the summarized prefix, it is more trustworthy than the same (summarized) prefix being advertised as an individual route by a router without knowledge of the individual routes that make up the summary. This doesn't mean that the summarized route is preferred over one of the routes within the summary, since the longest match is always preferred over the summary. It simply means that if the same route is advertised both as a summary and as a non-summary, the summary route is more trustworthy.
The AD of the EIGRP summary route is 5 only on the router that has the summary route configured. When the summary is advertised to other routers it has an AD of 90. The reason for the low AD is to ensure that the summary route (to Null0) is preferred, to prevent routing loops.
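A hedged sketch of how this looks in Cisco IOS (the AS number, interface, and addresses here are hypothetical, not from the question):

```
! Hypothetical example: advertise a /16 summary out Gi0/0 for an EIGRP AS.
router eigrp 100
 network 10.0.0.0

interface GigabitEthernet0/0
 ip summary-address eigrp 100 10.1.0.0 255.255.0.0
```

On the summarizing router, `show ip route` would then list the summary as a locally generated route to Null0 with AD 5 (shown as `[5/...]`), while neighbors that receive the summary install it with the normal EIGRP internal AD of 90.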
44,268,716
I have a test data service written in Angular 4. It currently looks like this: ``` import { Injectable } from '@angular/core'; import { Http } from '@angular/http'; import 'rxjs/add/operator/map'; import 'rxjs/add/operator/toPromise' @Injectable() export class DataService { constructor(private http: Http) { } fetchData(){ return this.http.get('https://dinstruct-d4b62.firebaseio.com/.json').map( (res) => res.json()).toPromise(); } } ``` With thanks to "The Net Ninja" for this code, as this section of the app is basically exactly the same as the tutorial code (I prefer to have something that should be a known working example for testing purposes when building new apps)... The problem is that though there is definitely test data at <https://dinstruct-d4b62.firebaseio.com/.json>, which is not hidden or firewalled in any way as far as I can tell (it is directly accessible via a browser), when the app enters the `fetchData()` function, it logs: ``` ERROR Error: Uncaught (in promise): Error: Response with status: 404 Not Found for URL: https://dinstruct-d4b62.firebaseio.com/.json Error: Response with status: 404 Not Found for URL: https://dinstruct-d4b62.firebaseio.com/.json ``` at the start of the stack trace. What could be going on here? **Update:** I also noticed something in the calling function: ``` ngOnInit(): void { this.customerService.getCustomers() .then(customers => this.customers = customers); this.dataService.fetchData().subscribe( (data) => console.log(data)); } ``` When I have `this.dataService.fetchData().subscribe((data) => console.log(data));` in the code, clicking a link to the dashboard momentarily shows `localhost:3000/dashboard` in the browser address bar, but it then immediately flicks back to the previous URL. However, when I remove this line, the app correctly shows `localhost:3000/dashboard` the whole time. I assume this is probably related to the console-logged 404 error. Also perplexing is that when I check the network traffic, no 404 is shown.
**Update:** When the observable is changed to a promise I get this output in the console: ``` Response {_body: Object, status: 404, ok: false, statusText: "Not Found", headers: Headers…} headers : Headers ok : false status : 404 statusText : "Not Found" type : null url : "https://dinstruct-d4b62.firebaseio.com/.json" _body : Object error : "Collection 'undefined' not found" __proto__ : Object constructor : function Object() hasOwnProperty : function hasOwnProperty() isPrototypeOf : function isPrototypeOf() propertyIsEnumerable : function propertyIsEnumerable() toLocaleString : function toLocaleString() toString : function () valueOf : function valueOf() __defineGetter__ : function __defineGetter__() __defineSetter__ : function __defineSetter__() __lookupGetter__ : function __lookupGetter__() __lookupSetter__ : function __lookupSetter__() get __proto__ : function __proto__() set __proto__ : function __proto__() __proto__ : Body constructor : function Response(responseOptions) toString : function () __proto__ : Object ``` There is still no 404 in the network traffic. I have now updated the calling function to this: ``` ngOnInit(): void { this.customerService.getCustomers() .then(customers => this.customers = customers); this.dataService.fetchData().then((data) => { console.log(data); }) .catch((error) => console.error(error)); } ```
2017/05/30
[ "https://Stackoverflow.com/questions/44268716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The in-memory-web-api will intercept your "outside" requests. You need to remove it from your NgModule; otherwise Angular always looks in the in-memory-web-api for your requests, and the requested resource obviously doesn't exist there. So remove the equivalent of ``` InMemoryWebApiModule.forRoot(InMemoryDataService) ``` from your NgModule and that should clear it out! :)
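As a sketch of what that removal looks like (the module and service names here are assumptions based on the standard Tour of Heroes setup, so adjust them to your app):

```
// app.module.ts - hypothetical sketch, names assumed
@NgModule({
  imports: [
    BrowserModule,
    HttpModule,
    // InMemoryWebApiModule.forRoot(InMemoryDataService), // removed so real HTTP requests reach Firebase
  ],
  declarations: [ AppComponent ],
  bootstrap: [ AppComponent ]
})
export class AppModule { }
```

With the in-memory module gone, `Http` requests go out over the network as usual.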
Try importing `import 'rxjs/add/operator/toPromise';` and add `toPromise` to the end of the HTTP get in the `fetchData()` function. ``` fetchData(){ return this.http.get('https://dinstruct-d4b62.firebaseio.com/.json').map( (res) => res.json()).toPromise(); } ``` Your calling function should then look like this: ``` this.dataService.fetchData() .then((data) => { console.log(data); }) .catch((error) => console.error(error)); ```
44,268,716
I have a test data service written in Angular 4. It currently looks like this: ``` import { Injectable } from '@angular/core'; import { Http } from '@angular/http'; import 'rxjs/add/operator/map'; import 'rxjs/add/operator/toPromise' @Injectable() export class DataService { constructor(private http: Http) { } fetchData(){ return this.http.get('https://dinstruct-d4b62.firebaseio.com/.json').map( (res) => res.json()).toPromise(); } } ``` With thanks to "The Net Ninja" for this code, as this section of the app is basically exactly the same as the tutorial code (I prefer to have something that should be a known working example for testing purposes when building new apps)... The problem is that though there is definitely test data at <https://dinstruct-d4b62.firebaseio.com/.json>, which is not hidden or firewalled in any way as far as I can tell (it is directly accessible via a browser), when the app enters the `fetchData()` function, it logs: ``` ERROR Error: Uncaught (in promise): Error: Response with status: 404 Not Found for URL: https://dinstruct-d4b62.firebaseio.com/.json Error: Response with status: 404 Not Found for URL: https://dinstruct-d4b62.firebaseio.com/.json ``` at the start of the stack trace. What could be going on here? **Update:** I also noticed something in the calling function: ``` ngOnInit(): void { this.customerService.getCustomers() .then(customers => this.customers = customers); this.dataService.fetchData().subscribe( (data) => console.log(data)); } ``` When I have `this.dataService.fetchData().subscribe((data) => console.log(data));` in the code, clicking a link to the dashboard momentarily shows `localhost:3000/dashboard` in the browser address bar, but it then immediately flicks back to the previous URL. However, when I remove this line, the app correctly shows `localhost:3000/dashboard` the whole time. I assume this is probably related to the console-logged 404 error. Also perplexing is that when I check the network traffic, no 404 is shown.
**Update:** When the observable is changed to a promise I get this output in the console: ``` Response {_body: Object, status: 404, ok: false, statusText: "Not Found", headers: Headers…} headers : Headers ok : false status : 404 statusText : "Not Found" type : null url : "https://dinstruct-d4b62.firebaseio.com/.json" _body : Object error : "Collection 'undefined' not found" __proto__ : Object constructor : function Object() hasOwnProperty : function hasOwnProperty() isPrototypeOf : function isPrototypeOf() propertyIsEnumerable : function propertyIsEnumerable() toLocaleString : function toLocaleString() toString : function () valueOf : function valueOf() __defineGetter__ : function __defineGetter__() __defineSetter__ : function __defineSetter__() __lookupGetter__ : function __lookupGetter__() __lookupSetter__ : function __lookupSetter__() get __proto__ : function __proto__() set __proto__ : function __proto__() __proto__ : Body constructor : function Response(responseOptions) toString : function () __proto__ : Object ``` There is still no 404 in the network traffic. I have now updated the calling function to this: ``` ngOnInit(): void { this.customerService.getCustomers() .then(customers => this.customers = customers); this.dataService.fetchData().then((data) => { console.log(data); }) .catch((error) => console.error(error)); } ```
2017/05/30
[ "https://Stackoverflow.com/questions/44268716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The in-memory-web-api will intercept your "outside" requests. You need to remove it from your NgModule; otherwise Angular always looks in the in-memory-web-api for your requests, and the requested resource obviously doesn't exist there. So remove the equivalent of ``` InMemoryWebApiModule.forRoot(InMemoryDataService) ``` from your NgModule and that should clear it out! :)
In AppModule.ts, in your `imports: []`, you will have imported the **HttpClientInMemoryWebApiModule** like so: ``` HttpClientInMemoryWebApiModule .forRoot( InMemoryService, { dataEncapsulation: false } ) ``` What is happening here is that your application is searching for the public API in the in-memory web API only. To solve this, tell the in-memory module not to behave that way by setting `passThruUnknownUrl` to true, like so: ``` HttpClientInMemoryWebApiModule .forRoot( InMemoryService, { dataEncapsulation: false, passThruUnknownUrl: true } ) ```
8,621
I'm setting up a storage server in a typical (albeit very large) computer case. I have like 20 2 TB SATA hard drives, and I want to connect all of them to one motherboard. The most ports I found on a consumer motherboard was 15. Is there some sort of way to get like 20 SATA ports from USB C or PCIe? I don't mind slower speeds because I have SSDs to handle high-speed uploads, and most of the time I am accessing the data from a computer connected through ethernet, which has proved manageable with slower drives. Thanks for the help!
2017/12/29
[ "https://hardwarerecs.stackexchange.com/questions/8621", "https://hardwarerecs.stackexchange.com", "https://hardwarerecs.stackexchange.com/users/6353/" ]
TL;DR: Your motherboard + an LSI 9211-8i, or just a motherboard with more SATA ports. ===================================================================================== ### Motherboard upgrade If you're willing to manage with a slightly lower number of ports, just get a motherboard with a bunch of SATA ports! The most I know of is 22 on the [ASRock Z87 Extreme11/ac](http://www.asrock.com/mb/Intel/Z87%20Extreme11ac/). You'll notice two things though: 1. You can't find this board. Seriously, good luck. 2. The specifications say "22 x SATA3 (16 x SAS3 12.0 Gb/s + 6 x SATA3 6.0 Gb/s) from LSI SAS 3008 Controller+ 3X24R Expander" Intel only supports up to 6 SATA-III ports on the various [LGA 1150](https://en.wikipedia.org/wiki/LGA_1150) and [LGA 1151 chipsets](https://en.wikipedia.org/wiki/LGA_1151), 10 on [X99](https://en.wikipedia.org/wiki/LGA_2011) and 8 (???) on [X299](https://www.intel.com/content/www/us/en/products/chipsets/desktop-chipsets/x299.html) (though the [MSI X299 XPOWER GAMING AC](https://www.msi.com/Motherboard/X299-XPOWER-GAMING-AC.html) supports 10.) AMD supports 6 on [X370](https://en.wikipedia.org/wiki/Socket_AM4) and [X399 apparently caps out at 8.](https://pcpartpicker.com/products/motherboard/#sort=price&s=36) Basically, that ASRock motherboard just has another third-party SATA controller built in. That's no fun! This leads us to the second option: ### Add another SATA controller @cybernard has the right idea, but I'm going to disagree with his hardware choice. Though it's not 100% clear, your post seems to imply that you don't need hardware RAID support, at least at the controller level. Basically, we just want to present the system with just a bunch of disks (JBOD.) This is going to allow us to save a massive amount of money compared to using a dedicated hardware RAID card like the Adaptec RAID 71685 (retails for $1120!) 
which would need to have every drive connected to it (if we used the hardware RAID), since we can continue to use the onboard SATA ports. To accomplish this, we're going to use a much, much cheaper HBA: the LSI 9211-8i (also known as the IBM M1015, or the compatible Dell PERC H200/H310.) [![enter image description here](https://i.stack.imgur.com/tiAru.jpg)](https://i.stack.imgur.com/tiAru.jpg) It has two SFF-8087 mini-SAS connectors, each supporting four 6 Gb/s connections (the max for SATA-III) and up to 256 physical devices, and it can easily be connected to standard SATA drives using [a breakout cable.](https://rads.stackoverflow.com/amzn/click/B012BPLYJC) [![enter image description here](https://i.stack.imgur.com/aOJ3G.jpg)](https://i.stack.imgur.com/aOJ3G.jpg) It's popular, [cheap](https://www.ebay.com/itm/292371003988) (roughly $40), and, with a little work, supports JBOD. [Here's a quick guide on how to set it up.](https://nguvu.org/freenas/Convert-LSI-HBA-card-to-IT-mode/) There's a wealth of other cards as well, like the monstrous [40-Channel SATA 6Gbps HighPoint Rocket 750](https://rads.stackoverflow.com/amzn/click/B00C7JNPSQ), but they're far, far more expensive, and probably overkill for your needs. Remember, you can use multiple cards, and as long as you're not bottlenecking, port multiplication is fine. Internal SATA III 1-to-5 cards can be had for as low as [$60](https://www.newegg.com/Product/Product.aspx?Item=9SIA24G2HZ1956); you don't need a fancy backplane (though if you're into fancy cases with fancy hot swap, [boy have I got a post for you!](https://hardwarerecs.stackexchange.com/questions/8557/mini-itx-case-with-10-sata-hot-swap-bays/8577#8577))
Assuming this is a typical logic board, PCIe is a lower-level connection. In other words, PCIe is effectively a direct connection to the PCI bus. USB, on the other hand, is usually routed through the PCI bus, then through a USB controller chip, and then over the USB circuitry. This means data is handled faster through PCIe than through the USB buses; the speed advantage of an SSD would be somewhat lost by going through USB. Additionally, a USB-based assembly might well cause you further trouble with things such as auto-mounting, etc.
8,621
I'm setting up a storage server in a typical (albeit very large) computer case. I have like 20 2 TB SATA hard drives, and I want to connect all of them to one motherboard. The most ports I found on a consumer motherboard was 15. Is there some sort of way to get like 20 SATA ports from USB C or PCIe? I don't mind slower speeds because I have SSDs to handle high-speed uploads, and most of the time I am accessing the data from a computer connected through ethernet, which has proved manageable with slower drives. Thanks for the help!
2017/12/29
[ "https://hardwarerecs.stackexchange.com/questions/8621", "https://hardwarerecs.stackexchange.com", "https://hardwarerecs.stackexchange.com/users/6353/" ]
TL;DR: Your motherboard + an LSI 9211-8i, or just a motherboard with more SATA ports. ===================================================================================== ### Motherboard upgrade If you're willing to manage with a slightly lower number of ports, just get a motherboard with a bunch of SATA ports! The most I know of is 22 on the [ASRock Z87 Extreme11/ac](http://www.asrock.com/mb/Intel/Z87%20Extreme11ac/). You'll notice two things though: 1. You can't find this board. Seriously, good luck. 2. The specifications say "22 x SATA3 (16 x SAS3 12.0 Gb/s + 6 x SATA3 6.0 Gb/s) from LSI SAS 3008 Controller+ 3X24R Expander" Intel only supports up to 6 SATA-III ports on the various [LGA 1150](https://en.wikipedia.org/wiki/LGA_1150) and [LGA 1151 chipsets](https://en.wikipedia.org/wiki/LGA_1151), 10 on [X99](https://en.wikipedia.org/wiki/LGA_2011) and 8 (???) on [X299](https://www.intel.com/content/www/us/en/products/chipsets/desktop-chipsets/x299.html) (though the [MSI X299 XPOWER GAMING AC](https://www.msi.com/Motherboard/X299-XPOWER-GAMING-AC.html) supports 10.) AMD supports 6 on [X370](https://en.wikipedia.org/wiki/Socket_AM4) and [X399 apparently caps out at 8.](https://pcpartpicker.com/products/motherboard/#sort=price&s=36) Basically, that ASRock motherboard just has another third-party SATA controller built in. That's no fun! This leads us to the second option: ### Add another SATA controller @cybernard has the right idea, but I'm going to disagree with his hardware choice. Though it's not 100% clear, your post seems to imply that you don't need hardware RAID support, at least at the controller level. Basically, we just want to present the system with just a bunch of disks (JBOD.) This is going to allow us to save a massive amount of money compared to using a dedicated hardware RAID card like the Adaptec RAID 71685 (retails for $1120!) 
which would need to have every drive connected to it (if we used the hardware RAID), since we can continue to use the onboard SATA ports. To accomplish this, we're going to use a much, much cheaper HBA: the LSI 9211-8i (also known as the IBM M1015, or the compatible Dell PERC H200/H310.) [![enter image description here](https://i.stack.imgur.com/tiAru.jpg)](https://i.stack.imgur.com/tiAru.jpg) It has two SFF-8087 mini-SAS connectors, each supporting four 6 Gb/s connections (the max for SATA-III) and up to 256 physical devices, and it can easily be connected to standard SATA drives using [a breakout cable.](https://rads.stackoverflow.com/amzn/click/B012BPLYJC) [![enter image description here](https://i.stack.imgur.com/aOJ3G.jpg)](https://i.stack.imgur.com/aOJ3G.jpg) It's popular, [cheap](https://www.ebay.com/itm/292371003988) (roughly $40), and, with a little work, supports JBOD. [Here's a quick guide on how to set it up.](https://nguvu.org/freenas/Convert-LSI-HBA-card-to-IT-mode/) There's a wealth of other cards as well, like the monstrous [40-Channel SATA 6Gbps HighPoint Rocket 750](https://rads.stackoverflow.com/amzn/click/B00C7JNPSQ), but they're far, far more expensive, and probably overkill for your needs. Remember, you can use multiple cards, and as long as you're not bottlenecking, port multiplication is fine. Internal SATA III 1-to-5 cards can be had for as low as [$60](https://www.newegg.com/Product/Product.aspx?Item=9SIA24G2HZ1956); you don't need a fancy backplane (though if you're into fancy cases with fancy hot swap, [boy have I got a post for you!](https://hardwarerecs.stackexchange.com/questions/8557/mini-itx-case-with-10-sata-hot-swap-bays/8577#8577))
Most companies that produce RAID controllers have similar options. What you're doing isn't even hard; people run 256 hard drives via SAS/SATA expanders and chassis like the one listed below. Clearly you need an Adaptec RAID controller card. The 71685 supports 24 devices natively. <https://storage.microsemi.com/en-us/support/raid/sas_raid/sas-71685/> [![71685](https://i.stack.imgur.com/BEc6K.png)](https://i.stack.imgur.com/BEc6K.png) 4 internal and 2 external connectors, each supporting 4 drives natively. <https://ark.intel.com/products/60273/Intel-RAID-Expander-RES2CV240#@productimages> Then you add SATA/SAS expanders and you can attach up to 255/6 devices. [https://www.newegg.com/Product/Product.aspx?Item=N82E16811192419&ignorebbr=1&nm\_mc=KNC-GoogleAdwords-PC&cm\_mmc=KNC-GoogleAdwords-PC-*-pla-*-Server+-+Chassis-\_-N82E16811192419&gclid=CjwKCAiA7JfSBRBrEiwA1DWSGyLCmrUMtGV2pRlZ-JC6baIFzL5alLwQiVeEXKzQUsiiONg6cUxbYBoCLmwQAvD\_BwE&gclsrc=aw.ds](https://www.newegg.com/Product/Product.aspx?Item=N82E16811192419&ignorebbr=1&nm_mc=KNC-GoogleAdwords-PC&cm_mmc=KNC-GoogleAdwords-PC-_-pla-_-Server+-+Chassis-_-N82E16811192419&gclid=CjwKCAiA7JfSBRBrEiwA1DWSGyLCmrUMtGV2pRlZ-JC6baIFzL5alLwQiVeEXKzQUsiiONg6cUxbYBoCLmwQAvD_BwE&gclsrc=aw.ds) [![enter image description here](https://i.stack.imgur.com/vgvsM.png)](https://i.stack.imgur.com/vgvsM.png)
700
I know someone who bought earphones that shine light in your ears. According to what he was told, there are neurons that sense light and then make you feel wide awake when activated, which seemed like snake oil to me. Apparently the pineal gland may be able to sense light, and it does secrete melatonin, a sleep-regulating hormone. I'm still sceptical though, as it's stuck in the middle of your brain. Would shining lights in your ears be able to have any effect on how awake you feel?
2012/01/17
[ "https://biology.stackexchange.com/questions/700", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/368/" ]
There is no known mechanism for light detection through the ears in humans, as far as I know. It is certainly true that the pineal gland is part of the system that regulates the circadian rhythm (briefly, the daily sleep-wake cycle). However, while the pineal gland in birds and other non-mammalian vertebrates is directly sensitive to light, the mammalian pineal gland is not (see, for review, [Doyle and Menaker, 2007](http://www.ncbi.nlm.nih.gov/pubmed/18419310) and [Csernus, 2006](http://www.ncbi.nlm.nih.gov/pubmed/16687306)). In all animals, the circadian rhythm is regulated by a photoperiod cue and therefore requires light detection. In mammals, the light sensors are found exclusively in the retina, the sensory portion of the eye. There are two classes of light-detecting cells in the retina. First, rod and cone photoreceptors mediate vision in the usual sense of the word. These cells contain proteins called opsins that absorb photons of light and thereby excite the photoreceptors that contain them, informing the brain that light was detected. A second class of photosensitive cells in the retina are called intrinsically photosensitive retinal ganglion cells (ipRGCs) (see [Do and Yau, 2010](http://www.ncbi.nlm.nih.gov/pubmed/20959623) for review). These cells mediate "non-image-forming" vision and are an important part of the circadian rhythm pathway. They also contain an opsin called *melanopsin* which is a photosensitive pigment. This is not to be confused with *melatonin*, which is the sleep hormone released by the pineal gland. The ipRGCs in the retina send the photoperiod cue to a brain area called the suprachiasmatic nucleus (SCN). The SCN then signals to the pineal gland. 
If we are generous and assume that these light-emitting headphones are the result of misunderstandings, we can guess that the confusion arises from (1) the fact that some animals have a directly photosensitive pineal gland, but not mammals and (2) that the pineal gland secretes melatonin but not the photosensitive pigment melanopsin. --- **Update**: From a bit of research, it turns out that the company selling the headphones is not "confused" as I politely offered. I don't think this site is the appropriate forum to refute their research or claims. Suffice to say that the retina is the only part of the human brain shown to be photosensitive.
I believe there are light sensors ([TRPV3](http://en.wikipedia.org/wiki/TRPV3)) in the skin for infrared light (heat) that convey that information back to the brain from the skin. This is a kind of light detection, but it is not direct detection like the rhodopsins in the eye. By the way, even without passing information on to neurons, cells probably have a lot of sensors they may use to respond to their local environment. This recent article talks about how [olfactory receptors can be found in lung and gut cells](http://the-scientist.com/2011/12/01/taste-in-the-mouth-gut-and-airways/), so it's quite possible that the conventional light-detecting genes (rhodopsins) would be found in skin cells, but they may not convey information to neurons.
11,986,428
I have the following function for which I want to find the extrema using MATLAB. ![enter image description here](https://i.stack.imgur.com/UMKYe.png) That function has to use the `normcdf` function in MATLAB in order to get the results, but when I'm trying to create the symbolic function I get back some errors. The input I give is the following: ``` syms z fz t sz fv = 1000 * ((z * fz * normcdf(t,fz,sz)) / (20 * 50 * normcdf(t,50,20))) + 1000 * normcdf((20 * 50 * normcdf(t,50,20) + z * fz * normcdf(t,fz,sz)) / 2000, 50 * normcdf(t,50,20), 20) - 10 * z ``` and the errors I get back are the following: ``` ??? Error using ==> sym.le at 11 Function 'le' is not implemented for MuPAD symbolic objects. Error in ==> normcdf at 57 sigma(sigma <= 0) = NaN; ``` Does anyone know how I can get around that? Thanks in advance. I forgot to mention that I use MATLAB version R2009a.
2012/08/16
[ "https://Stackoverflow.com/questions/11986428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1221734/" ]
<http://dev.elevationblog.com/2009/07/23/event-calendar-rails-plugin/> After reading almost all the comments there I found the answer to that question: just override the `color` method in your event model. ``` def color # color logic based on event type # or maybe the associated calendar?, i.e. # self.calendar.color if self.type == correct_type return "#3399BB" # any HTML-acceptable color end end ``` Answered by Jeff: do your logic, then return the color you want used.
I did things a little differently for more flexibility based on info I found on this site. **Output** ``` <div class="btn-group event_color" data-toggle="buttons-radio"> <button type="button" class="btn btn-success" value="#00a651">Active</button> <button type="button" class="btn btn-warning" value="#ecef0b">Pending</button> <button type="button" class="btn btn-danger" value="#ef0b0b">On Hold</button> <%=f.hidden_field :color %> <%=f.hidden_field :event_status %> </div> ``` **JS** ``` $(".event_color .btn").click(function() { // whenever a button is clicked, set the hidden helper $('#event_color').val($(this).val()); }); $(".event_color .btn").click(function() { // whenever a button is clicked, set the hidden helper $("#event_event_status").val($(this).text()); }); ```
145,653
If I do `$collection->getSelect()` I am moving from the `collection` object to the `select` object, and I can use some useful stuff closer to raw SQL. But what can I do to get back from the `select` to the Magento `collection`? If, for example, I want to use `$collection->getFirstItem()->getId()`?
2016/11/14
[ "https://magento.stackexchange.com/questions/145653", "https://magento.stackexchange.com", "https://magento.stackexchange.com/users/36253/" ]
``` $collection = your collection here; $select = $collection->getSelect(); //do nasty stuff with the $select object. ``` Since this is an object, it is passed by reference, and anything you change in it will be reflected in the collection's select. Then you can use again: ``` $collection->getFirstItem(); ```
As long as you don't store the result of `getSelect()` into a new variable and try to call the collection methods on that new variable, you should still be able to use collection methods. For example: ``` $collection->getSelect()->where('is_enabled=?', 0); return $collection->getFirstItem(); ``` This would work perfectly fine.
806,506
Where is it written whether my hard disk is an SSD or HDD? I have tried searching: * msinfo32 * Device Manager * Disk Management I need to see the words solid state drive or hard disk drive in Windows 7. It may be either through the CLI or GUI. I found the same information for Windows 8 here: right-click on the C drive -> *Properties* -> *Tools* -> *Optimize/Defragment now* -> here you should see the disk listed with its media type.
2014/09/03
[ "https://superuser.com/questions/806506", "https://superuser.com", "https://superuser.com/users/303024/" ]
1. Find the drive in Device Manager (devmgmt.msc). 2. Look up the model number in Google. Example: ![enter image description here](https://i.stack.imgur.com/yW1B7.jpg) [KINGSTON SH103S3120G](http://lmgtfy.com/?q=kingston+sh103s3120g&l=1) - Kingston 120 GB SSD [ST1000LM014-1EJ164-SSHD](http://lmgtfy.com/?q=ST1000LM014-1EJ164-SSHD) - Seagate 1 TB SSHD --- So far, every search I've done to find a proper solution for this seems to indicate that one doesn't exist. Every Windows 7 solution I've found has been either a hack based on finding some string like "SSD" in the model number (which is horribly unreliable, as demonstrated by my Kingston above) or testing read/write performance and comparing it against some threshold. The fact of the matter is, the OS really has little reason to actually care what type of physical media resides within the hard drive. All the physical reading and writing is done by the hard drive controller, which translates the (generally media-agnostic) commands given to it from the OS via its drivers. Effectively, the OS only needs to worry about declaring what data it needs read/written and the controller handles the how and where of reading/writing it. (Yes, the OS knows a "where" too - but that's a *logical* location defined in software, not a *physical* one that's hardware-dependent.) Windows 8, and the newer devices it supports, has a bit more intelligence built-in. However, these features appear to not have been back-ported to Windows 7.
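For completeness, on Windows 8 / Server 2012 and later the Storage PowerShell module reports the media type directly; this cmdlet is one of the features that was not back-ported to Windows 7:

```
# Windows 8 / Server 2012 and later only - not available on Windows 7.
Get-PhysicalDisk | Select-Object FriendlyName, MediaType
```

`MediaType` shows `SSD`, `HDD`, or `Unspecified`, depending on what the drive reports to the OS.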
I'm not completely clear on your question; however, in My Computer, right click on drive, select properties, select Hardware tab. In my case it shows *Patriot Pyro SSd SATA Disk Device*.
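The response_j answer above calls the model-string heuristic "horribly unreliable"; a minimal Python sketch (the function name is a hypothetical helper, not any Windows API) makes the failure mode concrete using drive models cited in this record:

```python
def looks_like_ssd(model: str) -> bool:
    """Naive heuristic from the answer above: search the model string
    for 'SSD'. Known to be unreliable -- shown for illustration only."""
    return "ssd" in model.lower()

# The Kingston SH103S3120G *is* an SSD, but the heuristic misses it:
print(looks_like_ssd("KINGSTON SH103S3120G"))               # False
# ...while a model that spells it out is caught:
print(looks_like_ssd("Patriot Pyro SSd SATA Disk Device"))  # True
```

This is exactly why the answer recommends looking the model number up instead of string-matching it.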
806,506
Where is it written that my hard disk is SSD or HDD? I have tried searching: * msinfo32 * Device Manager * Disk Management I need to see the words solid state drive or hard disk drive in Windows 7. It may be either through CLI or GUI. I found the same information for Windows 8 here. Right-click on C drive -> *Properties* -> *Tools* -> *Optimize/Defragment now* -> Here you should see the disk listed with its media type.
2014/09/03
[ "https://superuser.com/questions/806506", "https://superuser.com", "https://superuser.com/users/303024/" ]
1. Find the drive in Device Manager (devmgmt.msc). 2. Look up the model number in Google. Example: ![enter image description here](https://i.stack.imgur.com/yW1B7.jpg) [KINGSTON SH103S3120G](http://lmgtfy.com/?q=kingston+sh103s3120g&l=1) - Kingston 120 GB SSD [ST1000LM014-1EJ164-SSHD](http://lmgtfy.com/?q=ST1000LM014-1EJ164-SSHD) - Seagate 1 TB SSHD --- So far, every search I've done to find a proper solution for this seems to indicate that one doesn't exist. Every Windows 7 solution I've found has been either a hack based on finding some string like "SSD" in the model number (which is horribly unreliable, as demonstrated by my Kingston above) or testing read/write performance and comparing it against some threshold. The fact of the matter is, the OS really has little reason to actually care what type of physical media resides within the hard drive. All the physical reading and writing is done by the hard drive controller, which translates the (generally media-agnostic) commands given to it from the OS via its drivers. Effectively, the OS only needs to worry about declaring what data it needs read/written and the controller handles the how and where of reading/writing it. (Yes, the OS knows a "where" too - but that's a *logical* location defined in software, not a *physical* one that's hardware-dependent.) Windows 8, and the newer devices it supports, has a bit more intelligence built-in. However, these features appear to not have been back-ported to Windows 7.
Go to the control panel -> system -> find the "device manager", and click on it to get a listing of all devices present. It should list the storage media, as in model **WD 500000000-XYZ.abc** You then check to see what that model# refers to by googling the exact model# provided. Once done it explains the specs of that storage device.
32,538,758
I am working on Caffe framework and using PyCaffe interface. I am using a Python script obtained from converting the IPython Notebook **00-classification.ipynb** for testing the classification by a trained model for ImageNet. But any **get\_ipython()** statement in the script is giving the following error: ``` $ python python/my_test_imagenet.py Traceback (most recent call last): File "python/my_test_imagenet.py", line 23, in <module> get_ipython().magic(u'matplotlib inline') ``` In the script, I'm importing the following: ``` import numpy as np import matplotlib.pyplot as plt get_ipython().magic(u'matplotlib inline') # Make sure that caffe is on the python path: caffe_root = '/path/to/caffe/' import sys sys.path.insert(0, caffe_root + 'python') import caffe plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' import os # ... Rest of the code... ``` Can someone please help me to resolve this error?
2015/09/12
[ "https://Stackoverflow.com/questions/32538758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2452617/" ]
You have to run your script with ipython: ``` $ ipython python/my_test_imagenet.py ``` Then `get_ipython` will already be in the global context. Note: Importing it via `from IPython import get_ipython` in an ordinary `python` shell will not work, as you really need `ipython` running.
If your intention is to run the converted .py notebook file as a plain script, then you should just comment out the `get_ipython()` statements. The matplotlib output can't be shown inside the console, so you would have some work to do. Ideally, IPython shouldn't have generated these statements. You can use the following to show plots: ``` plt.show(block=True) ```
32,538,758
I am working on Caffe framework and using PyCaffe interface. I am using a Python script obtained from converting the IPython Notebook **00-classification.ipynb** for testing the classification by a trained model for ImageNet. But any **get\_ipython()** statement in the script is giving the following error: ``` $ python python/my_test_imagenet.py Traceback (most recent call last): File "python/my_test_imagenet.py", line 23, in <module> get_ipython().magic(u'matplotlib inline') ``` In the script, I'm importing the following: ``` import numpy as np import matplotlib.pyplot as plt get_ipython().magic(u'matplotlib inline') # Make sure that caffe is on the python path: caffe_root = '/path/to/caffe/' import sys sys.path.insert(0, caffe_root + 'python') import caffe plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' import os # ... Rest of the code... ``` Can someone please help me to resolve this error?
2015/09/12
[ "https://Stackoverflow.com/questions/32538758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2452617/" ]
You have to run your script with ipython: ``` $ ipython python/my_test_imagenet.py ``` Then `get_ipython` will already be in the global context. Note: Importing it via `from IPython import get_ipython` in an ordinary `python` shell will not work, as you really need `ipython` running.
`get_ipython` is available only if the IPython module was imported, which happens implicitly if you run the ipython shell (or a Jupyter notebook). If not, the call will fail, but you can still import the function explicitly with: ``` from IPython import get_ipython ```
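The explicit import in the answer above can be wrapped in a guard so the same converted script runs under both `ipython` and plain `python`; this is a sketch with a hypothetical helper name, not part of the IPython API:

```python
def get_ipython_or_none():
    """Return the active IPython shell, or None when running under a
    plain `python` interpreter (or when IPython is not installed)."""
    try:
        from IPython import get_ipython
    except ImportError:
        return None          # IPython is not even installed
    return get_ipython()     # returns None outside an IPython session

ip = get_ipython_or_none()
if ip is not None:
    # Only run magics when a shell is actually present.
    ip.run_line_magic("matplotlib", "inline")
```

With this guard the `%matplotlib inline` line is simply skipped when the script is run with `python`, avoiding the `NameError`/traceback shown in the question.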
32,538,758
I am working on Caffe framework and using PyCaffe interface. I am using a Python script obtained from converting the IPython Notebook **00-classification.ipynb** for testing the classification by a trained model for ImageNet. But any **get\_ipython()** statement in the script is giving the following error: ``` $ python python/my_test_imagenet.py Traceback (most recent call last): File "python/my_test_imagenet.py", line 23, in <module> get_ipython().magic(u'matplotlib inline') ``` In the script, I'm importing the following: ``` import numpy as np import matplotlib.pyplot as plt get_ipython().magic(u'matplotlib inline') # Make sure that caffe is on the python path: caffe_root = '/path/to/caffe/' import sys sys.path.insert(0, caffe_root + 'python') import caffe plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' import os # ... Rest of the code... ``` Can someone please help me to resolve this error?
2015/09/12
[ "https://Stackoverflow.com/questions/32538758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2452617/" ]
You have to run your script with ipython: ``` $ ipython python/my_test_imagenet.py ``` Then `get_ipython` will already be in the global context. Note: Importing it via `from IPython import get_ipython` in an ordinary `python` shell will not work, as you really need `ipython` running.
Just want to add that converting an ipynb file to py when there are magic functions in your notebook triggers the same error, since for instance `%%time` converts to `get_ipython().run_cell_magic('time')` Why so? [Magic functions](https://ipython.readthedocs.io/en/stable/interactive/python-ipython-diff.html?highlight=magics) (magics) are often present in the form of shell-like syntax, but they are python functions under the hood. The conversion from cell magics to get\_ipython() commands is part of nbconvert and is required to get a runnable python script, as cell magics are not valid python outside of a notebook cell (things like magics or aliases are turned into function calls).
32,538,758
I am working on Caffe framework and using PyCaffe interface. I am using a Python script obtained from converting the IPython Notebook **00-classification.ipynb** for testing the classification by a trained model for ImageNet. But any **get\_ipython()** statement in the script is giving the following error: ``` $ python python/my_test_imagenet.py Traceback (most recent call last): File "python/my_test_imagenet.py", line 23, in <module> get_ipython().magic(u'matplotlib inline') ``` In the script, I'm importing the following: ``` import numpy as np import matplotlib.pyplot as plt get_ipython().magic(u'matplotlib inline') # Make sure that caffe is on the python path: caffe_root = '/path/to/caffe/' import sys sys.path.insert(0, caffe_root + 'python') import caffe plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' import os # ... Rest of the code... ``` Can someone please help me to resolve this error?
2015/09/12
[ "https://Stackoverflow.com/questions/32538758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2452617/" ]
If your intention is to run the converted .py notebook file as a plain script, then you should just comment out the `get_ipython()` statements. The matplotlib output can't be shown inside the console, so you would have some work to do. Ideally, IPython shouldn't have generated these statements. You can use the following to show plots: ``` plt.show(block=True) ```
`get_ipython` is available only if the IPython module was imported, which happens implicitly if you run the ipython shell (or a Jupyter notebook). If not, the call will fail, but you can still import the function explicitly with: ``` from IPython import get_ipython ```
32,538,758
I am working on Caffe framework and using PyCaffe interface. I am using a Python script obtained from converting the IPython Notebook **00-classification.ipynb** for testing the classification by a trained model for ImageNet. But any **get\_ipython()** statement in the script is giving the following error: ``` $ python python/my_test_imagenet.py Traceback (most recent call last): File "python/my_test_imagenet.py", line 23, in <module> get_ipython().magic(u'matplotlib inline') ``` In the script, I'm importing the following: ``` import numpy as np import matplotlib.pyplot as plt get_ipython().magic(u'matplotlib inline') # Make sure that caffe is on the python path: caffe_root = '/path/to/caffe/' import sys sys.path.insert(0, caffe_root + 'python') import caffe plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' import os # ... Rest of the code... ``` Can someone please help me to resolve this error?
2015/09/12
[ "https://Stackoverflow.com/questions/32538758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2452617/" ]
If your intention is to run the converted .py notebook file as a plain script, then you should just comment out the `get_ipython()` statements. The matplotlib output can't be shown inside the console, so you would have some work to do. Ideally, IPython shouldn't have generated these statements. You can use the following to show plots: ``` plt.show(block=True) ```
Just want to add that converting an ipynb file to py when there are magic functions in your notebook triggers the same error, since for instance `%%time` converts to `get_ipython().run_cell_magic('time')` Why so? [Magic functions](https://ipython.readthedocs.io/en/stable/interactive/python-ipython-diff.html?highlight=magics) (magics) are often present in the form of shell-like syntax, but they are python functions under the hood. The conversion from cell magics to get\_ipython() commands is part of nbconvert and is required to get a runnable python script, as cell magics are not valid python outside of a notebook cell (things like magics or aliases are turned into function calls).
32,538,758
I am working on Caffe framework and using PyCaffe interface. I am using a Python script obtained from converting the IPython Notebook **00-classification.ipynb** for testing the classification by a trained model for ImageNet. But any **get\_ipython()** statement in the script is giving the following error: ``` $ python python/my_test_imagenet.py Traceback (most recent call last): File "python/my_test_imagenet.py", line 23, in <module> get_ipython().magic(u'matplotlib inline') ``` In the script, I'm importing the following: ``` import numpy as np import matplotlib.pyplot as plt get_ipython().magic(u'matplotlib inline') # Make sure that caffe is on the python path: caffe_root = '/path/to/caffe/' import sys sys.path.insert(0, caffe_root + 'python') import caffe plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' import os # ... Rest of the code... ``` Can someone please help me to resolve this error?
2015/09/12
[ "https://Stackoverflow.com/questions/32538758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2452617/" ]
`get_ipython` is available only if the IPython module was imported, which happens implicitly if you run the ipython shell (or a Jupyter notebook). If not, the call will fail, but you can still import the function explicitly with: ``` from IPython import get_ipython ```
Just want to add that converting an ipynb file to py when there are magic functions in your notebook triggers the same error, since for instance `%%time` converts to `get_ipython().run_cell_magic('time')` Why so? [Magic functions](https://ipython.readthedocs.io/en/stable/interactive/python-ipython-diff.html?highlight=magics) (magics) are often present in the form of shell-like syntax, but they are python functions under the hood. The conversion from cell magics to get\_ipython() commands is part of nbconvert and is required to get a runnable python script, as cell magics are not valid python outside of a notebook cell (things like magics or aliases are turned into function calls).
53,037
Suppose the following problem: I have $n$ models, $M\_k$, each with parameters $\mathbf{\theta}\_k$ for a data set $D$. There were previous observations of a subset of the parameters which are common to every model $M\_k$ (i.e., I have well defined priors for a subset of the parameters $\theta\_k$), so I performed an MCMC algorithm in order to obtain the posterior distribution of each model using that prior information, i.e., I have $p(\theta\_k|D,M\_k)$, and have to decide which of those models is the 'correct' one. I was thinking about defining what I mean by 'the correct' one, and came up with the idea that I have to decide which of the posterior distributions is closest to the 'real' posterior distribution that generated the data (which may or may not be in my set of posterior distributions). I was thinking of using Bayes factors, but I keep thinking that I need something like the AIC which, instead of using the likelihood and the corresponding MLE estimates, uses the posterior distributions and the corresponding maximum-a-posteriori estimates. My idea is to obtain an unbiased (or nearly unbiased) estimator of the KL divergence between the real posterior and my posteriors (understanding that the AIC is an estimator of the KL divergence between the 'real' likelihood and the likelihood of my models). Is there something like this in the statistical literature? Or am I just crazy for thinking about the problem like this?
2013/03/22
[ "https://stats.stackexchange.com/questions/53037", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9174/" ]
Strictly speaking, the question **"to decide which of those models is the 'correct' one"** makes no sense in a Bayesian analysis. In the Bayesian framework, what you do is to compare the models *with respect to each other*. Bayesian inference always gives you a *relative* comparison of competing models. There is a lot of information in chapter 7 of O'Hagan and Forster's nice [book](http://rads.stackoverflow.com/amzn/click/0470685697). And yes, this kind of analysis will rely on the full posteriors.
BIC and DIC are "Bayesian" tools similar to AIC. A slightly different Bayesian model selection tool is the Log-Predictive Score. Note that, with the exception of the BIC, Bayesian tools are based on the posterior distribution (or the posterior sample) rather than on point estimators. This is common in Bayesian statistics, since the goal is to account for the variability of the parameters, which is not captured by point estimators.
53,037
Suppose the following problem: I have $n$ models, $M\_k$, each with parameters $\mathbf{\theta}\_k$ for a data set $D$. There were previous observations of a subset of the parameters which are common to every model $M\_k$ (i.e., I have well defined priors for a subset of the parameters $\theta\_k$), so I performed an MCMC algorithm in order to obtain the posterior distribution of each model using that prior information, i.e., I have $p(\theta\_k|D,M\_k)$, and have to decide which of those models is the 'correct' one. I was thinking about defining what I mean by 'the correct' one, and came up with the idea that I have to decide which of the posterior distributions is closest to the 'real' posterior distribution that generated the data (which may or may not be in my set of posterior distributions). I was thinking of using Bayes factors, but I keep thinking that I need something like the AIC which, instead of using the likelihood and the corresponding MLE estimates, uses the posterior distributions and the corresponding maximum-a-posteriori estimates. My idea is to obtain an unbiased (or nearly unbiased) estimator of the KL divergence between the real posterior and my posteriors (understanding that the AIC is an estimator of the KL divergence between the 'real' likelihood and the likelihood of my models). Is there something like this in the statistical literature? Or am I just crazy for thinking about the problem like this?
2013/03/22
[ "https://stats.stackexchange.com/questions/53037", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9174/" ]
None of these information criteria are unbiased, but under some conditions they are consistent estimators of the out-of-sample deviance. They also all utilize the likelihood in some fashion, but the WAIC and the LOOIC differ from the AIC and the DIC in that the former two average the likelihood for each observation over (draws from) the posterior distribution, whereas the latter two plug in point estimates. In this sense, the WAIC and LOOIC are preferable because they do not make an assumption that the posterior distribution is multivariate normal, with the LOOIC being somewhat preferable to the WAIC because it can be made more robust to outliers and has a diagnostic that can be evaluated to see if its assumptions are met. [Overview article](http://www.stat.columbia.edu/~gelman/research/published/waic_understand3.pdf) [More detail about the practicalities](http://www.stat.columbia.edu/~gelman/research/published/loo_stan.pdf) [R package](https://cran.r-project.org/web/packages/loo/)
BIC and DIC are "Bayesian" tools similar to AIC. A slightly different Bayesian model selection tool is the Log-Predictive Score. Note that, with the exception of the BIC, Bayesian tools are based on the posterior distribution (or the posterior sample) rather than on point estimators. This is common in Bayesian statistics, since the goal is to account for the variability of the parameters, which is not captured by point estimators.
53,037
Suppose the following problem: I have $n$ models, $M\_k$, each with parameters $\mathbf{\theta}\_k$ for a data set $D$. There were previous observations of a subset of the parameters which are common to every model $M\_k$ (i.e., I have well defined priors for a subset of the parameters $\theta\_k$), so I performed an MCMC algorithm in order to obtain the posterior distribution of each model using that prior information, i.e., I have $p(\theta\_k|D,M\_k)$, and have to decide which of those models is the 'correct' one. I was thinking about defining what I mean by 'the correct' one, and came up with the idea that I have to decide which of the posterior distributions is closest to the 'real' posterior distribution that generated the data (which may or may not be in my set of posterior distributions). I was thinking of using Bayes factors, but I keep thinking that I need something like the AIC which, instead of using the likelihood and the corresponding MLE estimates, uses the posterior distributions and the corresponding maximum-a-posteriori estimates. My idea is to obtain an unbiased (or nearly unbiased) estimator of the KL divergence between the real posterior and my posteriors (understanding that the AIC is an estimator of the KL divergence between the 'real' likelihood and the likelihood of my models). Is there something like this in the statistical literature? Or am I just crazy for thinking about the problem like this?
2013/03/22
[ "https://stats.stackexchange.com/questions/53037", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9174/" ]
Strictly speaking, the question **"to decide which of those models is the 'correct' one"** makes no sense in a Bayesian analysis. In the Bayesian framework, what you do is to compare the models *with respect to each other*. Bayesian inference always gives you a *relative* comparison of competing models. There is a lot of information in chapter 7 of O'Hagan and Forster's nice [book](http://rads.stackoverflow.com/amzn/click/0470685697). And yes, this kind of analysis will rely on the full posteriors.
Nestor: You seem to be misinterpreting BIC and DIC. They are based on a Bayesian approach. The fact that you observe the likelihood in their expressions is due to an approximation. > > The BIC was developed by Gideon E. Schwarz, who gave a Bayesian argument for adopting it. > > > It [DIC] is particularly useful in Bayesian model selection problems where the posterior distributions of the models have been obtained by Markov chain Monte Carlo (MCMC) simulation. > > > Log-Predictive scores ARE purely Bayesian and they are NOT plug-in estimators. They are complementary to Bayes factors since they evaluate the predictive performance of a model. There also seems to be a contradiction between "It is particularly useful in Bayesian model selection problems where the posterior distributions of the models have been obtained by Markov chain Monte Carlo (MCMC) simulation." and "And, if you read carefully my question, I AM searching for something like an information criterion which includes the posterior distribution (and therefore, the variability of the parameters on each model)." Anyway, good luck ...
53,037
Suppose the following problem: I have $n$ models, $M\_k$, each with parameters $\mathbf{\theta}\_k$ for a data set $D$. There were previous observations of a subset of the parameters which are common to every model $M\_k$ (i.e., I have well defined priors for a subset of the parameters $\theta\_k$), so I performed an MCMC algorithm in order to obtain the posterior distribution of each model using that prior information, i.e., I have $p(\theta\_k|D,M\_k)$, and have to decide which of those models is the 'correct' one. I was thinking about defining what I mean by 'the correct' one, and came up with the idea that I have to decide which of the posterior distributions is closest to the 'real' posterior distribution that generated the data (which may or may not be in my set of posterior distributions). I was thinking of using Bayes factors, but I keep thinking that I need something like the AIC which, instead of using the likelihood and the corresponding MLE estimates, uses the posterior distributions and the corresponding maximum-a-posteriori estimates. My idea is to obtain an unbiased (or nearly unbiased) estimator of the KL divergence between the real posterior and my posteriors (understanding that the AIC is an estimator of the KL divergence between the 'real' likelihood and the likelihood of my models). Is there something like this in the statistical literature? Or am I just crazy for thinking about the problem like this?
2013/03/22
[ "https://stats.stackexchange.com/questions/53037", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9174/" ]
None of these information criteria are unbiased, but under some conditions they are consistent estimators of the out-of-sample deviance. They also all utilize the likelihood in some fashion, but the WAIC and the LOOIC differ from the AIC and the DIC in that the former two average the likelihood for each observation over (draws from) the posterior distribution, whereas the latter two plug in point estimates. In this sense, the WAIC and LOOIC are preferable because they do not make an assumption that the posterior distribution is multivariate normal, with the LOOIC being somewhat preferable to the WAIC because it can be made more robust to outliers and has a diagnostic that can be evaluated to see if its assumptions are met. [Overview article](http://www.stat.columbia.edu/~gelman/research/published/waic_understand3.pdf) [More detail about the practicalities](http://www.stat.columbia.edu/~gelman/research/published/loo_stan.pdf) [R package](https://cran.r-project.org/web/packages/loo/)
Nestor: You seem to be misinterpreting BIC and DIC. They are based on a Bayesian approach. The fact that you observe the likelihood in their expressions is due to an approximation. > > The BIC was developed by Gideon E. Schwarz, who gave a Bayesian argument for adopting it. > > > It [DIC] is particularly useful in Bayesian model selection problems where the posterior distributions of the models have been obtained by Markov chain Monte Carlo (MCMC) simulation. > > > Log-Predictive scores ARE purely Bayesian and they are NOT plug-in estimators. They are complementary to Bayes factors since they evaluate the predictive performance of a model. There also seems to be a contradiction between "It is particularly useful in Bayesian model selection problems where the posterior distributions of the models have been obtained by Markov chain Monte Carlo (MCMC) simulation." and "And, if you read carefully my question, I AM searching for something like an information criterion which includes the posterior distribution (and therefore, the variability of the parameters on each model)." Anyway, good luck ...
53,037
Suppose the following problem: I have $n$ models, $M\_k$, each with parameters $\mathbf{\theta}\_k$ for a data set $D$. There were previous observations of a subset of the parameters which are common to every model $M\_k$ (i.e., I have well defined priors for a subset of the parameters $\theta\_k$), so I performed an MCMC algorithm in order to obtain the posterior distribution of each model using that prior information, i.e., I have $p(\theta\_k|D,M\_k)$, and have to decide which of those models is the 'correct' one. I was thinking about defining what I mean by 'the correct' one, and came up with the idea that I have to decide which of the posterior distributions is closest to the 'real' posterior distribution that generated the data (which may or may not be in my set of posterior distributions). I was thinking of using Bayes factors, but I keep thinking that I need something like the AIC which, instead of using the likelihood and the corresponding MLE estimates, uses the posterior distributions and the corresponding maximum-a-posteriori estimates. My idea is to obtain an unbiased (or nearly unbiased) estimator of the KL divergence between the real posterior and my posteriors (understanding that the AIC is an estimator of the KL divergence between the 'real' likelihood and the likelihood of my models). Is there something like this in the statistical literature? Or am I just crazy for thinking about the problem like this?
2013/03/22
[ "https://stats.stackexchange.com/questions/53037", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9174/" ]
None of these information criteria are unbiased, but under some conditions they are consistent estimators of the out-of-sample deviance. They also all utilize the likelihood in some fashion, but the WAIC and the LOOIC differ from the AIC and the DIC in that the former two average the likelihood for each observation over (draws from) the posterior distribution, whereas the latter two plug in point estimates. In this sense, the WAIC and LOOIC are preferable because they do not make an assumption that the posterior distribution is multivariate normal, with the LOOIC being somewhat preferable to the WAIC because it can be made more robust to outliers and has a diagnostic that can be evaluated to see if its assumptions are met. [Overview article](http://www.stat.columbia.edu/~gelman/research/published/waic_understand3.pdf) [More detail about the practicalities](http://www.stat.columbia.edu/~gelman/research/published/loo_stan.pdf) [R package](https://cran.r-project.org/web/packages/loo/)
Strictly speaking, the question **"to decide which of those models is the 'correct' one"** makes no sense in a Bayesian analysis. In the Bayesian framework, what you do is to compare the models *with respect to each other*. Bayesian inference always gives you a *relative* comparison of competing models. There is a lot of information in chapter 7 of O'Hagan and Forster's nice [book](http://rads.stackoverflow.com/amzn/click/0470685697). And yes, this kind of analysis will rely on the full posteriors.
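The WAIC discussed in the answers above averages pointwise likelihoods over posterior draws rather than plugging in point estimates. Following the Gelman et al. overview article linked in one of the responses, here is a minimal numpy sketch; the function name and the deviance-scale convention are choices for illustration, not a fixed API:

```python
import numpy as np

def waic(log_lik):
    """WAIC on the deviance scale from a matrix of pointwise
    log-likelihoods: log_lik[s, i] = log p(y_i | theta^(s)) for
    S posterior draws and n observations."""
    log_lik = np.asarray(log_lik, dtype=float)
    # lppd: log pointwise predictive density -- average the *likelihood*
    # (not the log-likelihood) over draws, then take logs and sum.
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    # p_waic: effective number of parameters, the posterior variance of
    # the pointwise log-likelihood summed over observations.
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)
```

Because each observation's likelihood is averaged over the posterior sample, the criterion uses the full posterior variability, which is exactly the property the question is asking for in contrast to AIC's plug-in MLE.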
23,320,128
I'm running a pretty time-consuming method in a thread, and one of the things it does, is if there is no image available it sets a grid named Background's background to a solid color. Here's how that snippet looks: ``` SolidColorBrush scb = new SolidColorBrush(); scb.Color = Color.FromRgb(21, 21, 21); Dispatcher.BeginInvoke(new Action(() => Background.Background = scb)); ``` But I always get errors at this place saying `"Cannot use a DependencyObject that belongs to a different thread than its parent Freezable"` Does anyone know why this is happening? The Dispatcher should make this problem go away, right? Here's how I am calling the method by the way (if needed) ``` Thread BGthread = new Thread(HandleBackgrounds); BGthread.Start(); ```
2014/04/27
[ "https://Stackoverflow.com/questions/23320128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2860324/" ]
`SolidColorBrush` is a dependency object - and you're creating it in the non-UI thread, then trying to use it in the UI thread. Try this instead: ``` Action action = () => { SolidColorBrush scb = new SolidColorBrush(Color.FromRgb(21, 21, 21)); Background.Background = scb; }; Dispatcher.BeginInvoke(action); ``` Or of course just in one statement: ``` Dispatcher.BeginInvoke((Action)(() => Background.Background = new SolidColorBrush(Color.FromRgb(21, 21, 21)))); ``` Either way, you're creating the `SolidColorBrush` in the action that you're passing to the dispatcher.
A `Brush` is a `DispatcherObject`, and as such it has a thread affinity - it belongs to the thread that created it, and normally can only be used by it. However, WPF has a sub-class of dispatcher objects, called `Freezable`s, for which you can remove the thread affinity by making them read-only. Brushes are freezable, so you can create one on a thread and pass it to another: ``` var scb = new SolidColorBrush(Color.FromRgb(21, 21, 21)); scb.Freeze(); Dispatcher.BeginInvoke(new Action(() => Background.Background = scb)); ``` This can be useful if you're creating a brush in a view model that's not created on a UI thread. Another common use case is decoding images on a different thread, which can improve performance (`ImageSource` is also a freezable). Freezing freezables is also considered a [performance optimization](http://msdn.microsoft.com/en-us/library/bb613565.aspx), so use it whenever possible.
26,006,385
I need to keep the overall layout of an object. I pass it into a method and: 1. I need to delete things from the object, then do something, 2. once I get back to the main function I was in 3. I need the object to be UNTOUCHED. The way it is set up now, it deletes it from the main object as well as the one in the bottom method. SEE JSFiddle for code <http://jsfiddle.net/rwux4rta/> To get the results from the run, see console Please HELP! ``` $( document ).ready(function() { var pList = new Object(); pList["test"] = "test"; //this is being deleted from BOTH instances of the Obj pList["test1"] = "test1"; pList["test2"] = "test2"; pList["test3"] = "test3"; pList["test4"] = "test4"; displayData(pList); console.log(pList); }); function displayData(badData){ badData.test.removeData(); console.log(badData); } ```
2014/09/23
[ "https://Stackoverflow.com/questions/26006385", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2888257/" ]
The [MPVolumeView class](https://developer.apple.com/Library/ios/documentation/MediaPlayer/Reference/MPVolumeView_Class/index.html) is designed to let you do exactly this. It's in MediaPlayer.framework, so add that to your app to make things build correctly. You create it and make it visible the way you instantiate any other subclass of UIView, which you probably know by now. You disable the routing button by setting the `showsRouteButton` property to false. "How am I supposed to know that this will work without having to test it on an actual device?" By seeing that it's been there since iOS 2.0, and is used in countless apps?
The process of writing such a slider is incredibly simple: look into `UISlider` (<https://developer.apple.com/library/ios/documentation/UIKit/Reference/UISlider_Class/>) and then use the float value from the slider to set the volume. If you do not want to write your own slider, look on GitHub ([github.com](http://github.com)) for controls that do this for you. After doing a quick search, I found this `UISlider` subclass that adjusts [volume](https://github.com/OopsMouse/SNVolumeSlider).
68,761
*[Beyond Libertarianism: Interpretations of Mill's Harm Principle and the Economic Implications Therein](https://scholarworks.gsu.edu/cgi/viewcontent.cgi?article=1051&context=political_science_theses#page=26)* > > The harm principle does not stipulate strict rights of the individual, applied uniformly. > > > To justify a system of redistribution via Mill’s harm principle, we must first grant that taxation, in a general, nonspecific guise is a legitimate action of the state.1 > > > I am trying to make the goal of reducing inequality and providing for social security and insurance compatible with the harm principle. Adhering to the harm principle, the state should only act (restrict people's free will, coerce them into doing something) in order to prevent harm and safeguard third persons' rights. Further, the presumption in favor of liberty (in dubio pro libertate) makes a liberal state do so only when the harm (or danger, since the probability of harm is harm in itself) to third persons' rights is actually known and proven and not presumed. I can't see how amassing wealth (in itself, when it is devoid of any enriching actions that have a negative externality) can harm third persons. I can't see how I am harming anyone by inheriting or by dying and having my inheritance passed to my heirs only (obviously there is an exclusion of the general public and any other person). People (that have been infected) transmit (probabilistically) SARS-CoV-2; now the (specific) vaccines, after more than one year of testing, have finally been proven to reduce transmission. I appreciate that not preventing (reducing) a harm (danger) to others that you know of (or should and could know of) is in itself a harm (i.e. states that coerce their subjects/citizens into getting vaccinated are not illiberal). I don't feel that reducing inequality and reducing the transmission of SARS-CoV-2 are of the same nature. One is clearly a harm, while I find it difficult to accept that the other is.
I don't feel taxes are illiberal, but they seem to go against the harm principle. **How could we adjust the Harm Principle so as to allow Taxation not to infringe upon it?** I obviously don't mean by simply adding a perfunctory exemption (e.g. excluding taxes or excluding reasonable burdens) but essentially and substantially altering the Harm Principle's content, without allowing obviously tyrannical and despotic state actions either. --- 1Towery, Matthew A., "Beyond Libertarianism: Interpretations of Mill's Harm Principle and the Economic Implications Therein." Thesis, Georgia State University, 2012. <https://scholarworks.gsu.edu/political_science_theses/45>
2021/09/14
[ "https://politics.stackexchange.com/questions/68761", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/39834/" ]
This is where the difference between [intrinsic goals and instrumental goals](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) is important to discuss. Mill is expressing an intrinsic goal in the Harm Principle, that is, a thing which self-justifies. Any moral philosophy that results in a self-defeating decision tree is incoherent and therefore not one we need to bother ourselves with. The principle of philosophical charity holds that when considering a philosopher's argument, one should do so in the terms that represent it as strongly as possible - so any interpretation of the Harm Principle (that governments should act to protect citizens from harm, including infringements on their liberties, including from the gov't itself) which leads us to recommend a government incapable of protecting anyone is not reasonable for us to offer. Without taxation, governments have no resources and no capacity to enact policies or offer protection to anyone. Taxation is, therefore, an instrumental goal in support of the intrinsic goal of the preservation of liberty. You can't have the one without the other, period. Even if your soldiers/police work for free, they still need equipment, training, facilities, and other things which must be paid for. It's not that wealth is itself harmful, it's that the whole political/moral philosophy becomes incoherent if you insist that the government should protect your person and freedoms... but may not have any resources with which to do so. Similarly, redistributive policies are not punishments against wealth but protective measures to stave off ruinous poverty (the harms from which are obvious and well recorded). Taxing the poor to turn around and provide for them is as incoherent as insisting that laws be enforced without resources. Taxing the wealthy to fund these protections is an instrumental goal that serves the intrinsic goal of protection. No modification of Mill's Harm Principle is needed, per se.
Only the context within which it is being contemplated. If you try to apply the Harm Principle as if every act happens in a vacuum, wholly independent of all other acts, then you wind up with a Harm Principle that only permits total anarchy - since in order for a government to act at all it must have the resources and capacity to do so. Considering taxation as an independent act, as you discuss in the comments, does result in the HP proscribing against it, thus a government may never have resources or capacity, and thus a government may never rightfully exist. John Stuart Mill, however, was not an anarchist - and this line of argument is absurd on its face besides. Therefore we *must* consider the Harm Principle in the context of the interdependent nature of acts, which forces the acknowledgement of the existence of the intrinsic vs. the instrumental. An instrumental act is justified by the ends it is made in pursuit of. This means that insofar as a government's intrinsic acts are solely to prevent harm, all instrumental acts *necessary to that end* are similarly permitted by the Harm Principle. An interesting consequence here is that if the final, intrinsic end, is NOT to prevent harm (or has elements besides the prevention of harm) then the *entire* chain of acts is now in violation of the Harm Principle, without exception. If you read the rest of Mill's body of work, however, you'll find that he (at the least) flirted with the beginnings of what became Rule Utilitarianism - which permits actors to make errors, so long as they have evidence to support their conclusions that their acts are *likely* in furtherance of greater utility to the greatest number.
In the words of Adam Smith, commonly accepted to be the father of modern Capitalism > > "A power to dispose of estates for ever is manifestly absurd. The earth and the fulness of it belongs to every generation, and the preceding one can have no right to bind it up from posterity. Such extension of property is quite unnatural. There is no point more difficult to account for than the right we conceive men to have to dispose of their goods after death." > > > Capitalism works when the excess wealth gained from enterprise is spent by the person who earned it - stimulating demand and creating jobs. However, in the failure of "trickle-down economics" we can see the disconnect between theory and reality - people are not obligated to spend the fortunes they amass. An estate tax handles this in part, incentivizing the spending of accumulated wealth, but a wealth tax is of the same vein. However, it is incredibly challenging if not impossible in practice to evaluate everyone's wealth in both cash and assets, with things like offshore tax havens and ownership of anything in foreign countries complicating the issue. Thus taxing income became the next best thing. Income tax is not an ideal way of preventing infinite accumulation of wealth, but it was the accepted alternative to the challenging task of enacting a true wealth tax. This may prove easier in the modern era, and we have seen a strong political push recently for a wealth tax. Note that this does not defend such regressive taxes as sales tax or excise tax. **Addendum - Oligarchy** This is a problem not dealt with by taxation generally speaking, but addresses your question of how amassing wealth can harm third persons. The primary problem with accumulation of massive wealth wrt harming others is that infinite amassing of wealth (this necessarily including assets, not just cash) can lead to Oligarchy. 
Granted, it is not guaranteed to lead to Oligarchy, but there are real-life examples (to your statement - known and not just presumed) such as the botched privatization of Russia. Oligarchy is a problem not because it inherently harms third persons, but because it prevents the State from its minimalist position of preventing someone from harming others. To quote J. Paul Getty, "If you owe the bank $100 that's your problem. If you owe the bank $100 million, that's the bank's problem." There comes a certain point of accumulation in the private sector where the State is at the mercy of private individuals, and it can thus no longer stop said individuals from engaging in anti-competitive practices. This is generally addressed with antitrust legislation and deeming some goods and services utilities.
68,761
*[Beyond Libertarianism: Interpretations of Mill's Harm Principle and the Economic Implications Therein](https://scholarworks.gsu.edu/cgi/viewcontent.cgi?article=1051&context=political_science_theses#page=26)* > > The harm principle does not stipulate strict rights of the individual, applied uniformly. > > > To justify a system of redistribution via Mill’s harm principle, we must first grant that taxation, in a general, nonspecific guise is a legitimate action of the state.1 > > > I am trying to make the goal of reducing inequality and providing for social security and insurance compatible with the harm principle. Adhering to the harm principle, the state should only act (restrict people's free will, coerce them into doing something) in order to prevent harm and safeguard third persons' rights. Further, the presumption in favor of liberty (in dubio pro libertate) makes a liberal state do so only when the harm (or danger, since the probability of harm is harm in itself) to third persons' rights is actually known and proven and not presumed. I can't see how amassing wealth (in itself, when it is devoid of any enriching actions that have a negative externality) can harm third persons. I can't see how I am harming anyone by inheriting or by dying and having my inheritance passed to my heirs only (obviously there is an exclusion of the general public and any other person). People (that have been infected) transmit (probabilistically) SARS-CoV-2; now the (specific) vaccines, after more than one year of testing, have finally been proven to reduce transmission. I appreciate that not preventing (reducing) a harm (danger) to others that you know of (or should and could know of) is in itself a harm (i.e. states that coerce their subjects/citizens into getting vaccinated are not illiberal). I don't feel that reducing inequality and reducing the transmission of SARS-CoV-2 are of the same nature. One is clearly a harm, while I find it difficult to accept that the other is.
I don't feel taxes are illiberal, but they seem to go against the harm principle. **How could we adjust the Harm Principle so as to allow Taxation not to infringe upon it?** I obviously don't mean by simply adding a perfunctory exemption (e.g. excluding taxes or excluding reasonable burdens) but essentially and substantially altering the Harm Principle's content, without allowing obviously tyrannical and despotic state actions either. --- 1Towery, Matthew A., "Beyond Libertarianism: Interpretations of Mill's Harm Principle and the Economic Implications Therein." Thesis, Georgia State University, 2012. <https://scholarworks.gsu.edu/political_science_theses/45>
2021/09/14
[ "https://politics.stackexchange.com/questions/68761", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/39834/" ]
This is where the difference between [intrinsic goals and instrumental goals](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) is important to discuss. Mill is expressing an intrinsic goal in the Harm Principle, that is, a thing which self-justifies. Any moral philosophy that results in a self-defeating decision tree is incoherent and therefore not one we need to bother ourselves with. The principle of philosophical charity holds that when considering a philosopher's argument, one should do so in the terms that represent it as strongly as possible - so any interpretation of the Harm Principle (that governments should act to protect citizens from harm, including infringements on their liberties, including from the gov't itself) which leads us to recommend a government incapable of protecting anyone is not reasonable for us to offer. Without taxation, governments have no resources and no capacity to enact policies or offer protection to anyone. Taxation is, therefore, an instrumental goal in support of the intrinsic goal of the preservation of liberty. You can't have the one without the other, period. Even if your soldiers/police work for free, they still need equipment, training, facilities, and other things which must be paid for. It's not that wealth is itself harmful, it's that the whole political/moral philosophy becomes incoherent if you insist that the government should protect your person and freedoms... but may not have any resources with which to do so. Similarly, redistributive policies are not punishments against wealth but protective measures to stave off ruinous poverty (the harms from which are obvious and well recorded). Taxing the poor to turn around and provide for them is as incoherent as insisting that laws be enforced without resources. Taxing the wealthy to fund these protections is an instrumental goal that serves the intrinsic goal of protection. No modification of Mill's Harm Principle is needed, per se.
Only the context within which it is being contemplated. If you try to apply the Harm Principle as if every act happens in a vacuum, wholly independent of all other acts, then you wind up with a Harm Principle that only permits total anarchy - since in order for a government to act at all it must have the resources and capacity to do so. Considering taxation as an independent act, as you discuss in the comments, does result in the HP proscribing against it, thus a government may never have resources or capacity, and thus a government may never rightfully exist. John Stuart Mill, however, was not an anarchist - and this line of argument is absurd on its face besides. Therefore we *must* consider the Harm Principle in the context of the interdependent nature of acts, which forces the acknowledgement of the existence of the intrinsic vs. the instrumental. An instrumental act is justified by the ends it is made in pursuit of. This means that insofar as a government's intrinsic acts are solely to prevent harm, all instrumental acts *necessary to that end* are similarly permitted by the Harm Principle. An interesting consequence here is that if the final, intrinsic end, is NOT to prevent harm (or has elements besides the prevention of harm) then the *entire* chain of acts is now in violation of the Harm Principle, without exception. If you read the rest of Mill's body of work, however, you'll find that he (at the least) flirted with the beginnings of what became Rule Utilitarianism - which permits actors to make errors, so long as they have evidence to support their conclusions that their acts are *likely* in furtherance of greater utility to the greatest number.
The text of the 'harm principle', as given in the linked document, reads as follows: > > That principle is, that the sole end for which mankind are warranted, > individually or collectively, in interfering with the liberty of > action of any of their number is self-protection. That the only > purpose for which power can be rightfully exercised over any member of > a civilized community, against his will, is to prevent harm to others. > > > There are actually *two* formulations here, and it's useful to consider the difference. The two formulations are: * The 'prevention of harm', from the second sentence, and from which we get the common name of the principle, and... * The activity of 'self-protection', from the first sentence. I suspect people focus on the concept of *harm* because harm seems like a quantifiable, measurable, objective concept. Intuitively, pinching someone does less harm than punching them, which does less harm than hitting them with a baseball bat, and we like to think that we can extend that intuitive rank-ordering to any sort of harm whatsoever. Obviously this suffers serious problems in practice — I mean, is living with the lingering effects of 300 years of slavery and oppression more harm or less harm than getting hit with a baseball bat? — but it is difficult to shake that intuition completely. On the other hand, the principle of 'self-protection' is intuitive in a different, more subjective sense. We all know more or less what we want to protect ourselves *from*, and there is a broad range of events and activities that most of us would agree everyone wants to protect themselves from. This also changes the nature of our relationship to government. Instead of government being an aloof, paternalistic entity that determine what is objectively harmful and tasks itself with preventing it from happening, government becomes a tool that we actively use for collective self-protection. 
The question is no longer that ambiguous determination of what is and is not objectively harmful, with all the caveats and pitfalls that entails; it is a more Kantian question of what things we collectively decide that we collectively want to protect ourselves against. This naturally changes our perspectives on the issue. We no longer try to measure (say) the harm of taxation against the harm of poverty (which are deeply incommensurate metrics in any case). Now we concern ourselves with the idea that people in general want to protect themselves against abject poverty (not to mention heritable poverty), and we take the least invasive approach to ensure that people can protect themselves against abject poverty. We no longer care about the wealth divide, or how wealthy any individual gets, so long as everyone can protect themselves against falling into poverty. It's worth noting that this move is inherent in Marxism. Marx shifted the metric away from harm to *property* and towards harm to *labor*; one must protect the effort one expends towards producing a good, because one must live by the profits of the labor one expends. And if we follow the Marxist thread all the way we find that the ultimate *harm* (in his view) is the segregation of people into 'groups' or 'classes' that are treated differentially under government and law. This leads us straight into social democratic and left-Libertarian principles, where the creation of 'others', of second class citizens and excluded groups, is the root of all structural harm within society.
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
There is a way to find out why pods were deleted and who deleted them: set the `ttl` for k8s events to be greater than the default 1h, then search through the events: `kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1`
``` kubectl get pods -a ``` You will get the list of running pods and the terminated pods, in case you are searching for this.
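Not part of the answers above, but a sketch of what the `cut` stage of the suggested pipeline does: event names printed by `kubectl get event -o custom-columns=NAME:.metadata.name` have the form `<object-name>.<event-id>`, and `cut -d "." -f1` keeps only the part before the first dot. The event names below are hypothetical, so this runs without a cluster:

```shell
# Simulated `kubectl get event` name column (hypothetical pod names);
# the cut/sort stages reduce it to unique object names.
printf 'mypod-7d4b9.16c3a2f\nmypod-7d4b9.16c3a30\nother-pod.16c3a31\n' |
  cut -d "." -f1 |
  sort -u
# prints:
# mypod-7d4b9
# other-pod
```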
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
You can try `kubectl logs --previous` to list the logs of a previously stopped pod <http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/> You may also want to check out these debugging tips <http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/>
There is a way to find out why pods were deleted and who deleted them: set the `ttl` for k8s events to be greater than the default 1h, then search through the events: `kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1`
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
As of today, `kubectl get pods -a` is deprecated, and as a result you cannot get deleted pods. What you can do, though, is get a list of recently deleted pod names - up to 1 hour in the past unless you changed the `ttl` for kubernetes events - by running: `kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1` You can then investigate further issues within your logging pipeline if you have one in place.
If you want to see all the previously deleted `pods` and are trying to fetch them, run: `kubectl get pods` which will give you all the pod details, because every service has one or more pods and each has a unique IP address. Here you can check the lifecycle of pods and what phases a pod has: <https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle> and you can see the previous pod logs by typing the command: `kubectl logs --previous`
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
As of today, `kubectl get pods -a` is deprecated, and as a result you cannot get deleted pods. What you can do, though, is get a list of recently deleted pod names - up to 1 hour in the past unless you changed the `ttl` for kubernetes events - by running: `kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1` You can then investigate further issues within your logging pipeline if you have one in place.
If your container has previously crashed, you can access the previous container’s crash log with: `kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}`
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
As of today, `kubectl get pods -a` is deprecated, and as a result you cannot get deleted pods. What you can do, though, is get a list of recently deleted pod names - up to 1 hour in the past unless you changed the `ttl` for kubernetes events - by running: `kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1` You can then investigate further issues within your logging pipeline if you have one in place.
There is a way to find out why pods were deleted and who deleted them: set the `ttl` for k8s events to be greater than the default 1h, then search through the events: `kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1`
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
There is a way to find out why pods were deleted and who deleted them: set the `ttl` for k8s events to be greater than the default 1h, then search through the events: `kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1`
If you want to see all the previously deleted `pods` and are trying to fetch them, run: `kubectl get pods` which will give you all the pod details, because every service has one or more pods and each has a unique IP address. Here you can check the lifecycle of pods and what phases a pod has: <https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle> and you can see the previous pod logs by typing the command: `kubectl logs --previous`
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
There is a way to find out why pods were deleted and who deleted them: set the `ttl` for k8s events to be greater than the default 1h, then search through the events: `kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1`
If your container has previously crashed, you can access the previous container’s crash log with: `kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}`
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
As far as I know you may not get the pod details once the pod is deleted. Can I know what is the use case? Example: 1. if a pod is created using - kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/false you will have a pod with status terminated:Error 2. if a pod is created using - kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/true you will have a pod with status terminated:Completed 3. if a container in a pod restarts: the pod will be alive and you can get the logs of the previous container (only the previous container) using kubectl logs --container <container_name> --previous=true <pod_name> 4. if you are doing an upgrade of your app and you are creating pods using deployments: if you update the deployment, say with a new image, the pod will be terminated and a new pod will be created. You can get the pod details from the deployment yaml. If you want to get details of the previous pod you have to see the "spec" section of the previous deployment yaml.
``` kubectl get pods -a ``` You will get the list of running pods and the terminated pods, in case you are searching for this.
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
As of today, `kubectl get pods -a` is deprecated, and as a result you cannot get deleted pods. What you can do though, is to get a list of recently deleted pod names - up to 1 hour in the past unless you changed the `ttl` for kubernetes events - by running: `kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1` You can then investigate further issues within your logging pipeline if you have one in place.
You can try `kubectl logs --previous` to list the logs of a previously stopped pod <http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/> You may also want to check out these debugging tips <http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/>
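The event-name extraction described above (keeping everything before the first dot) can also be done in Python instead of `cut`. The sample event names below are invented stand-ins for real `kubectl get event` output:

```python
# Event names look like "<pod-name>.<unique-suffix>"; keeping the part
# before the first dot recovers the pod name. This sample list stands
# in for the NAME column of `kubectl get event`.
event_names = [
    "web-5d4f7c9b8-abcde.16c1a2b3d4e5f607",
    "web-5d4f7c9b8-abcde.16c1a2b3d4e5f608",
    "worker-7f6b5d4c3-xyz12.16c1a2b3d4e5f609",
]

# Deduplicate (a pod usually has several events) and sort for stable output.
deleted_pods = sorted({name.split(".", 1)[0] for name in event_names})
print(deleted_pods)
```

As with the `cut` pipeline, this only reaches back as far as the event TTL, so it is a best-effort list of recently deleted pods rather than a full history.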
40,636,021
Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)? I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one. Commands like ``` kubectl get pods kubectl get pod <pod-name> ``` work only with current pods (live or stopped). How could I get more details about old pods? I would like to see 1. when they were created 2. which environment variables they had when created 3. why and when they were stopped
2016/11/16
[ "https://Stackoverflow.com/questions/40636021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849837/" ]
You can try `kubectl logs --previous` to list the logs of a previously stopped pod <http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/> You may also want to check out these debugging tips <http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/>
``` kubectl get pods -a ``` will give you the list of running pods and the terminated pods, in case that is what you are searching for
31,161,509
Client will fetch json formatted data from server: ``` {"html": "<hr>a<br>"} ``` And I want to render the html code into a div. How can I do that? And in fact, I want to display the rendered html page inside a wysiwyg editor. Is there any Opensource wysiwyg(js, html5, or sth) can do this?
2015/07/01
[ "https://Stackoverflow.com/questions/31161509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2038901/" ]
The problem is not in the ng-repeat. It is correct based on your data structure. I would change the approach to the way you're adding data to your cart. Presently, you're pushing all of the "item" sizes to the cart regardless of whether the user selects that particular size or not, and you're just using ng-class to hide the sizes that have a quantity of 0. That doesn't make a lot of sense to me. That said, you can use a filter in your view to filter out the items with a quantity of 0. The difference between what you're doing now, which is just using CSS to hide those values, and using the filter is that the filter won't actually generate any DOM elements for the values that don't match the filter. So you could do something like this in your controller: ``` $scope.greaterThan = function(prop, val){ return function(item){ return item[prop] > val; } } ``` Then in your repeat: ``` ng-repeat="size in item.sizes | filter: greaterThan('numOrders', 0) track by $index" ``` Alternatively, you can use ngIf, which will also not render the DOM elements for sizes without orders, such as: ``` <input ng-repeat="size in item.sizes track by $index" ng-if="size.numOrders > 0" type="hidden" name="amount_{{$index}}" value="{{size.price}}"> ``` Still, I highly recommend that you take a look at how you might optimize your cart functionality. There is a very good book called [Pro AngularJS](https://rads.stackoverflow.com/amzn/click/com/B00HX4PJ9I). I seem to recall that the main example used in the book goes step by step through creating an online shop. You might want to get it just for the example code. It would definitely help you streamline what you've got, set up a service for the cart and use custom directives, for example for the cart dropdown.
Your HTML ``` <input type="image" src="http://www.paypal.com/en_US/i/btn/x-click-but01.gif" name="submit" alt="Make payments with PayPal - it's fast, free and secure!"> ``` Replace it with : ``` <input type="submit" style="background:url(http://www.paypal.com/en_US/i/btn/x-click-but01.gif); width:100px; height:100px;" name="Submit" alt="Make payments with PayPal - it's fast, free and secure!"> ```
175,794
I've heard and read enough programmers firmly advocating automatic tests. According to many, tests are themselves part of a code's functionality, untested code is broken and/or legacy by definition, long-term manual testing is more time-consuming and provides far weaker guarantees against failures than automatic testing... etc. I'm trying to develop, as a hobby, a turn-based Pokemon-like game. While talking to a far more experienced developer I said that when I add a new attack I run my game, go into 1v1 combat, use this move and see if it does what it's supposed to do. His answer was: *Why won't you instead write a test that goes into a 1v1 combat, uses this move and checks if it does what it's supposed to do? Doing what you're doing manually may be faster than writing tests if you do it once, twice, thrice. But after 10 times? One hundred times?* I'm thinking about what he said. I'm thinking and I still don't feel convinced. About any other kind of project, perhaps. But a game? Assume I'm adding lifesteal to a monster's attack. The problem is I'll have to boot my game and play it **anyway**! To see if it feels right, if it plays right, if not for anything else. Tests on the other hand add maintenance cost: for example, once I (for balance reasons) change the lifesteal coefficient, I'll have to reflect that change in tests. (Or I won't have to reflect this change in tests if I copy&paste the formula, parametrized by attack strength, lifesteal coefficient, etc., but doesn't this defeat the purpose of testing?) Automatic tests are supposed to replace manual tests; but given I have to manually play my game anyway, isn't this duplication of work? Or is my thinking wrong? What am I missing?
2019/09/25
[ "https://gamedev.stackexchange.com/questions/175794", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/101389/" ]
Testing is an investment in your future. While an individual test might duplicate some aspect of manual testing you are about to do in any given run of the game, a robust suite of tests can in the long run cover far more scenarios than that small bit of overlapping work (manual testing can also more effectively cover complex scenarios where writing automatic tests might be too difficult). Writing tests is an overhead, yes. So is maintaining them when the things you are testing change. It's also much harder to retrofit tests onto a big project not built from scratch with that goal in mind, as there are some technical design choices that are less compatible with easy, isolated testing than others. Whether or not that overhead is worth the benefit depends on you and the scale and scope of your project. The payoff is that automated tests can be run automatically and require far, far less work on your part than equivalent manual testing coverage would. In your example, sure, you manually run the game and ad-hoc test your change. But do you do it on every platform you are shipping on? On every build configuration? Automated tests can do so easily, which means you can more quickly catch errors that only manifest on one or two platforms/configurations that you don't regularly test. Similarly, while you may ad-hoc test that one ability change, do you manually go back and ad-hoc test *every other ability*? Probably not, because you're operating under the assumption that you didn't make a change that could impact those other abilities. But you are a programmer, and therefore you make mistakes ("bugs"), and therefore you *could have* accidentally made your change in such a way as to unintentionally break something you were not expecting. An automated test suite could catch that.
Granted, the value of the test suite depends on what you test and how, and there is a point of diminishing returns in the investment (100% test coverage is often impractical, for instance). For example, it's probably worthwhile to write a test to validate the results of a math function that presumes a left- or right-handed convention, as a change in that function to prefer the other convention will probably destabilize a lot and should be flagged. But a test for the outcome of some damage calculation may not be a good candidate, at least not early on when you are iterating rapidly on game balance. That sort of test is perhaps best added later, once balance has hardened a little, to alert you that you may have made a change that has balance implications.
First off, 100% test coverage is not a realistic goal, and games especially tend to incorporate elements that are difficult to test reliably, for example when aspects like timing, physics or (nontrivial) AI become involved. In addition, automated tests are neither infallible nor by definition superior to manual testing, so your friend's stance appears a little... overzealous to me. However, that does not mean that games can't massively benefit from automated tests. A common approach in test automation is to prioritize the areas that would benefit the most and work your way down the list until you reach a point where the effort outweighs the advantages. Typical criteria are: * How often does this code **change** / how likely is it that a bug gets introduced here? * How **severe** are the consequences if it breaks? * How **easy** is it to test? * How **obvious** (or not) would a bug be during manual tests? I'd recommend the following starting points: ### Sanity checks If game stats and actions depend on complicated and/or frequently tweaked formulae or functions, it's rarely necessary to check for minor deviations (unless your game is really balanced on a knife's edge), and updating the test cases every time can take a lot of work. What you **do** want to guard against, however, are **gamebreaking** numbers and unexpected interactions. Say, an ability that does 20 times the expected damage on a critical hit, but only against a specific enemy type. Instead of setting up a test case that checks the exact damage output and secondary effects of every attack, I would write a test that goes through all attacks and checks if the numbers are **within a specific range** that's appropriate for their power level. Tweaking an ability by 10% for balance reasons shouldn't cause any tests to fail, but extreme outliers (which are probably unintended) should.
### Core rules and algorithms There will probably be pieces of code (say, the save/load mechanism, network code, pathfinding, core game rules...) that are critical to your game working as intended. They are also likely to interact with a lot of different modules, and thus be easily affected by changes to those. You might, for example, introduce a new enemy type, forget to serialize one of their stats and mess up the savegame format, which only becomes apparent during a longer playthrough, which you can't do after every little change. For these central elements, automated tests are usually worth the effort because you absolutely don't want them to break. ### Rapid iteration Few finished games look anything like their early versions. You will probably make some sweeping changes that affect most of the existing content in one way or another, for example by adding a new stat or game mechanic. This, essentially, resets the "tested and mature" status of all your code and assets, and the amount of extra testing required will grow with the size of your game. You can't just skip manual tests, of course. There's no replacement for actually playing your game (and having others play it as well). But having a battery of tests ready to go, even if they're imperfect and incomplete, can allow you to vet and tweak these sweeping changes *much* faster, and find many inconsistencies the second they are introduced. You're not limited to testing for correctness, by the way. If you're worried about balance, a script that runs a battery of different combinations, items, matchups or whatever can yield a lot more valuable (and reliable) data points than a handful of manual tests.
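The range-based sanity check suggested above can be sketched as a small automated test. The ability table and the per-level damage bands below are invented for illustration; the point is that minor balance tweaks pass while extreme outliers fail:

```python
# Sanity-check sketch: instead of pinning exact damage numbers, assert
# that every ability's damage falls inside a band appropriate for its
# power level. The bands and the ability table are invented.
DAMAGE_BANDS = {1: (5, 20), 2: (15, 45), 3: (40, 100)}

abilities = [
    {"name": "scratch", "level": 1, "damage": 8},
    {"name": "fireball", "level": 2, "damage": 30},
    {"name": "meteor", "level": 3, "damage": 900},  # ~20x outlier: likely a bug
]

def out_of_band(abilities):
    """Return names of abilities whose damage falls outside their band."""
    bad = []
    for a in abilities:
        lo, hi = DAMAGE_BANDS[a["level"]]
        if not (lo <= a["damage"] <= hi):
            bad.append(a["name"])
    return bad

# Flags only the gamebreaking outlier, not ordinary balance tweaks.
print(out_of_band(abilities))
```

Retuning "fireball" from 30 to 33 would still pass, so this kind of test survives normal balance iteration while catching the 20x-damage class of mistake.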
175,794
I've heard and read enough programmers firmly advocating automatic tests. According to many, tests are themselves part of a code's functionality, untested code is broken and/or legacy by definition, long-term manual testing is more time-consuming and provides far weaker guarantees against failures than automatic testing... etc. I'm trying to develop, as a hobby, a turn-based Pokemon-like game. While talking to a far more experienced developer I said that when I add a new attack I run my game, go into 1v1 combat, use this move and see if it does what it's supposed to do. His answer was: *Why won't you instead write a test that goes into a 1v1 combat, uses this move and checks if it does what it's supposed to do? Doing what you're doing manually may be faster than writing tests if you do it once, twice, thrice. But after 10 times? One hundred times?* I'm thinking about what he said. I'm thinking and I still don't feel convinced. About any other kind of project, perhaps. But a game? Assume I'm adding lifesteal to a monster's attack. The problem is I'll have to boot my game and play it **anyway**! To see if it feels right, if it plays right, if not for anything else. Tests on the other hand add maintenance cost: for example, once I (for balance reasons) change the lifesteal coefficient, I'll have to reflect that change in tests. (Or I won't have to reflect this change in tests if I copy&paste the formula, parametrized by attack strength, lifesteal coefficient, etc., but doesn't this defeat the purpose of testing?) Automatic tests are supposed to replace manual tests; but given I have to manually play my game anyway, isn't this duplication of work? Or is my thinking wrong? What am I missing?
2019/09/25
[ "https://gamedev.stackexchange.com/questions/175794", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/101389/" ]
Testing is an investment in your future. While an individual test might duplicate some aspect of manual testing you are about to do in any given run of the game, a robust suite of tests can in the long run cover far more scenarios than that small bit of overlapping work (manual testing can also more effectively cover complex scenarios where writing automatic tests might be too difficult). Writing tests is an overhead, yes. So is maintaining them when the things you are testing change. It's also much harder to retrofit tests onto a big project not built from scratch with that goal in mind, as there are some technical design choices that are less compatible with easy, isolated testing than others. Whether or not that overhead is worth the benefit depends on you and the scale and scope of your project. The payoff is that automated tests can be run automatically and require far, far less work on your part than equivalent manual testing coverage would. In your example, sure, you manually run the game and ad-hoc test your change. But do you do it on every platform you are shipping on? On every build configuration? Automated tests can do so easily, which means you can more quickly catch errors that only manifest on one or two platforms/configurations that you don't regularly test. Similarly, while you may ad-hoc test that one ability change, do you manually go back and ad-hoc test *every other ability*? Probably not, because you're operating under the assumption that you didn't make a change that could impact those other abilities. But you are a programmer, and therefore you make mistakes ("bugs"), and therefore you *could have* accidentally made your change in such a way as to unintentionally break something you were not expecting. An automated test suite could catch that.
Granted, the value of the test suite depends on what you test and how, and there is a point of diminishing returns in the investment (100% test coverage is often impractical, for instance). For example, it's probably worthwhile to write a test to validate the results of a math function that presumes a left- or right-handed convention, as a change in that function to prefer the other convention will probably destabilize a lot and should be flagged. But a test for the outcome of some damage calculation may not be a good candidate, at least not early on when you are iterating rapidly on game balance. That sort of test is perhaps best added later, once balance has hardened a little, to alert you that you may have made a change that has balance implications.
> > The problem is I'll have to boot my game and play it anyway! To see if it feels right, if it plays right, if not for anything else. > > > Yes, you need to do that while you are iterating on that specific situation you are implementing right now. But think ahead a couple of years into the future. You might be working on a completely different aspect of the game and accidentally break this situation you thought you had finished. Are you going to test every single thing in your game after every change? Certainly not manually, all by yourself. So how long will it take you to notice the bug? Will you then be able to easily connect it to that specific change you made? But if you have an automated test suite, you *can* easily test your entire codebase after every change. > > Tests on the other hand add maintenance cost: for example, once I (for balance reasons) change the lifesteal coefficient, I'll have to reflect that change in tests. > > > Yes, and that's a good thing, because now you have to be aware of every ramification of your balance change. Does the tutorial still play out the way you intended? Is that one boss fight still winnable with the intended strategy? Is that other boss still immune to life steal? In order to detect those problems you might have to play through your whole game from start to finish. But an automated test can tell you within seconds to minutes. So an automated test allows you to go through every situation where the outcome is affected and confirm one by one that this is still the outcome you intended. > > Automatic tests are supposed to replace manual tests; but given I have to manually play my game anyway, isn't this duplication of work? > > > Automatic tests are not supposed to replace manual tests. They are supposed to augment them. While they cannot replace the "how does it feel?" playtesting, they can save you a ton of work in regression testing (testing again and again whether the things which used to work still work).
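One of the invariants mentioned above ("is that other boss still immune to life steal?") can be encoded as a regression test that keeps passing across refactors. The combat model below is a toy stand-in invented for illustration, not a real game engine:

```python
# Regression-test sketch: encode an intended rule ("this boss is immune
# to life steal") as an assertion. The combat model is a toy stand-in.
def apply_lifesteal(attacker_hp, damage_dealt, coeff, target_immune):
    """Return the attacker's HP after an attack with a lifesteal coefficient."""
    if target_immune:
        return attacker_hp  # immunity: no healing, regardless of coeff
    return attacker_hp + int(damage_dealt * coeff)

def test_boss_immune_to_lifesteal():
    hp_after = apply_lifesteal(attacker_hp=50, damage_dealt=40,
                               coeff=0.25, target_immune=True)
    assert hp_after == 50  # no HP gained against an immune target

def test_lifesteal_heals_normal_targets():
    hp_after = apply_lifesteal(attacker_hp=50, damage_dealt=40,
                               coeff=0.25, target_immune=False)
    assert hp_after == 60  # 50 + 40 * 0.25

test_boss_immune_to_lifesteal()
test_lifesteal_heals_normal_targets()
print("regression checks passed")
```

Note that retuning the lifesteal coefficient for balance only touches the second test's expected value; the immunity invariant, which is the rule the designer actually cares about, survives any coefficient change untouched.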
74,994
I was trying to understand how the following reaction occurs (taken from a synthesis of homopipecolic acid[1]): ![Reduction of hemiaminal by triethylsilane-boron trifluoride diethyl etherate](https://i.stack.imgur.com/c4KaC.png) I thought that triethylsilane, $\ce{Et3SiH}$, would reduce the ester to an aldehyde. However, it turns out that the hemiaminal carbon has also been reduced with cleavage of the $\ce{C-N}$ bond. How does this happen? ### References: 1. Chiou, W.; Chen, G.; Kao, C.; Gao, Y. Syntheses of (−)-pelletierine and (−)-homopipecolic acid. *Org. Biomol. Chem.* **2012,** *10* (13), 2518. [DOI: 10.1039/C2OB06984A](https://doi.org/10.1039/C2OB06984A).
2017/05/23
[ "https://chemistry.stackexchange.com/questions/74994", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/17541/" ]
My organic chemistry is rusty by now, but the full mechanism for the reduction itself is likely something along these lines: [![Proposed mechanism for hemiaminal reduction](https://i.stack.imgur.com/LcoaW.png)](https://i.stack.imgur.com/LcoaW.png) In a "typical" lactone / ester, the alkyl oxygen can't be lost so easily, but in this case there's a nitrogen lone pair that can assist in that. The resulting iminium ion is reduced by the silane. The acid at the end will remove the remaining boron trifluoride.
This is what I came up with. The starting material first reacts with the Lewis acid - $\ce{BF_3}$-etherate. There are two sites which can attack the acid - the carbonyl of the ester, or the carbonyl of the amide in the protecting group. Considering the first step to be kinetically controlled (which it generally is), the sterically less hindered carbonyl of the ester group will react. Now, the lone pair of the nitrogen plays its part, breaking the $\ce{C-O}$ bond, relieving the positive charge on the oxygen. The iminium formed will be converted to the corresponding amine by the action of the triethylsilane. Hydrolysis will lead to the carboxylic acid. You can see the stereochemistry in the images. Justification: Intramolecular rearrangement is faster than intermolecular encounters.
69,367,375
I want to use do notation to combine pseudo-random values: ``` g :: StdGen g = mkStdGen 100 example1 :: Bool example1 = fst $ runState (do x <- state (uniformR (False,True)) y <- state (uniformR (False,True)) return $ x == y ) g ``` Function `uniformR` is defined in terms of the System.Random.Stateful module: ``` uniformR :: (RandomGen g, UniformRange a) => (a, a) -> g -> (a, g) uniformR r g = runStateGen g (uniformRM r) ``` so in my example, it seems silly for `uniformR` to create and run state, only for my example to create and run state again. Is there a way to rewrite example 1, using System.Random.Stateful and do notation? This is the only thing I could get to work (which is ridiculous): ``` example3 :: Bool example3 = fst $ runStateGen g (do x <- uniformRM (False,True) y <- uniformRM (False,True) return $ do x' <- x y' <- y return $ x'==y') ``` It seems like what I need is some type of monad transformer?
2021/09/28
[ "https://Stackoverflow.com/questions/69367375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1264328/" ]
It is much simpler than one would imagine: ```hs example4 :: Bool example4 = runStateGen_ (mkStdGen 100) $ \gen -> do x <- uniformRM (False,True) gen y <- uniformRM (False,True) gen pure (x == y) ``` Of course `uniformRM (False, True) == uniformM`, but that is probably irrelevant, since that was just an example to demonstrate the question, I imagine. More info can be found in the haddock as well as in [this blogpost](https://alexey.kuleshevi.ch/blog/2021/01/29/random-interface/) and [this video presentation](https://www.youtube.com/watch?v=GGbPqSM1ADw)
The second argument to `runStateGen` is `StateGenM g -> State g a`, which is `ReaderT` in disguise: ``` import Control.Monad.Reader (ReaderT(..), runReaderT) import Control.Applicative (liftA2) import qualified System.Random.Stateful as Random uniformRM :: (Random.UniformRange a, Random.StatefulGen g m) => (a, a) -> ReaderT g m a uniformRM r = ReaderT (Random.uniformRM r) g :: Random.StdGen g = Random.mkStdGen 100 example1 = Random.runStateGen_ g $ runReaderT $ liftA2 (==) (uniformRM (False,True)) (uniformRM (False,True)) ```
9,306,045
The default action to expand a node on ExtJS tree is double-click. Before version 4, there is `singleClickExpand` property in TreeNode configuration. How to apply `singleClickExpand` behavior on ExtJS version 4 tree ?? Is there a configuration property for this behavior without setting event listener?? Thank you.
2012/02/16
[ "https://Stackoverflow.com/questions/9306045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/560305/" ]
I've spent some time looking for the same thing. I feel I can definitively answer your question with... No there isn't a config option for it. I had to set a click handler. I needed one anyway though to implement functionality for leaf clicks: ``` var tree = Ext.create('Ext.tree.Panel', { store: store, rootVisible: false, lines: false, useArrows: true, listeners: { itemclick: function(view, node) { if(node.isLeaf()) { // some functionality to open the leaf(document) in a tabpanel } else if(node.isExpanded()) { node.collapse(); } else { node.expand(); } } } }); ```
If you're using keyboard navigation, you probably need to use the selectionchange event so that you cover all the scenarios, but anyway, here's an approach I'm using in my case to achieve the single-click behaviour. Define a new event in the tree. For instance, imagine you have defined a class which inherits from the tree panel; then in `initComponent` you would create the event: ``` Ext.define('MY.view.CheckList', { extend: 'Ext.tree.Panel', alias: 'widget.checklist', store: 'CheckPoints', useArrows: true, initComponent: function () { this.callParent(arguments); this.on("selectionchange", this.onSelectionChange); this.addEvents({ singleClick: true }); }, onSelectionChange: function(model, nodes){ [....] // fire the singleClick event this.fireEvent("singleClick", this, model, nodes); } }); ``` Then you need to listen to the event you created, for instance: ``` var mytree = Ext.create("MY.view.CheckList"); mytree.on("singleClick", function(tree, model, nodes){ console.log(">>>>>>>>>>>>>>>>>>>>> EVENT TRIGGERED <<<<<<<<<<<<<<<<<<<<<<<<"); console.log(tree, model, nodes); var currSelPath = nodes[0].getPath(); // This will expand the node at the path you just clicked tree.expandPath(currSelPath); }); ``` Two things: * You need to tweak it a bit to suit your code scenario * Of course you could just listen to the "selectionchange" event directly and do this, but I think that from a code-reusability standpoint this is cleaner and more obvious: if you add a bit more logic and checks before you fire the "singleClick" event, and you have more trees that inherit from this base tree, you only need to do it once. It's not the most perfect solution, I'm sure, but in my case it worked fine. HTH!
9,306,045
The default action to expand a node on ExtJS tree is double-click. Before version 4, there is `singleClickExpand` property in TreeNode configuration. How to apply `singleClickExpand` behavior on ExtJS version 4 tree ?? Is there a configuration property for this behavior without setting event listener?? Thank you.
2012/02/16
[ "https://Stackoverflow.com/questions/9306045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/560305/" ]
I've spent some time looking for the same thing. I feel I can definitively answer your question with... No there isn't a config option for it. I had to set a click handler. I needed one anyway though to implement functionality for leaf clicks: ``` var tree = Ext.create('Ext.tree.Panel', { store: store, rootVisible: false, lines: false, useArrows: true, listeners: { itemclick: function(view, node) { if(node.isLeaf()) { // some functionality to open the leaf(document) in a tabpanel } else if(node.isExpanded()) { node.collapse(); } else { node.expand(); } } } }); ```
In fact you use the expand function of the treeview. Just implement the itemclick function on you treepanel: ``` listeners: { itemclick: function (treeview, record, item, index, e, eOpts) { treeview.expand(record); } } ```
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Let's say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Let's ignore rep points, as they are not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example let's ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
Remembering or recalling words or images is highly individual and depends on which hemisphere of the brain is dominant. The right half of my brain is slightly more dominant than the left, which makes me remember a face rather than the name. Sometimes I'm embarrassed when meeting people on the street: I recognize the person, but don't recall their name. If we start talking it takes a while before I remember the name because I need other details as well. That's why the avatar and username together are important, to support both memory styles. > > Some people are just better at remembering in a certain way—even identical twins may vary in that regard—and it can relate to which hemisphere of the brain is dominant. Visual memory (which we call episodic memory) relates to the right hemisphere of the brain, which is associated with intuition. Verbal (semantic) memory is primarily a function of the left hemisphere, which we link with analytical thinking. The difference between the two kinds of memory becomes most obvious when it comes to recalling a deeply emotional event—9/11 or the day JFK was shot, for example. People with a strong semantic memory recall headlines, quotes, and phrases; those inclined toward visual memory retain the pictures and images of the event more vividly. > > > Reference: [Ask Dr. Gupta: Why Do I Recall Words Better Than Pictures?](http://www.prevention.com/health/brain-games/dr-sanjay-gupta-visual-memory)
As most people have said, it depends on the user. For me personally it depends entirely on the context. * On forums I'm heavily dependent on usernames for identifying people. I tend to remember people by their usernames on forums (and here). I think this is because forum users tend to use avatars detached from their identity, such as cartoon characters or memes. * On Twitter and Facebook I'm heavily reliant on the avatars of users, because those people tend to represent themselves with their own face. I either learn what they look like (if I don't know them personally), or if it's someone I already know from the real world I know their face. So my answer would be: think about the context of use and base your decision on that.
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Let's say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Let's ignore rep points, as they are not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example let's ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
To the other questions I would add accessibility concerns. For many with poor sight a username is much more easily used, either with zoomed text or some form of text-to-speech. Even for those who don't need screen enhancements (e.g. me), avatars can be hard to tell apart. On Twitter, for example, where users post photos, it can be difficult to tell who people are because the photos are not well taken, or the user is trying to be clever and shows something that works at a reasonable photo size but not as a thumbnail. Given that last thought, I would add a concern about general differentiability: it is much easier to make two avatars very nearly the same without noticing than two usernames, if the users are trying to confuse people, which some will do.
Certain avatar images are very memorable and distinct. Some are almost indistinguishable. Likewise with user names. Someone who was asked to remember the username of "Jon Skeet" and was asked a day later to identify it from a list of the ten most similar usernames might have a good chance at identifying it, while someone who was shown a generic gravatar and asked to identify it from a list including nine randomly-selected ones would have a relatively poor chance even five minutes later. On the other hand, someone asked to remember a username written in an unfamiliar script [e.g. a hypothetical user "絕對沒有"] would have a harder time than someone asked to remember the gravatar of Jon Skeet (compared with the ten most similar gravatars, assuming nobody copied Mr. Skeet's picture). The differences in memorability between different usernames and avatars would seem to outweigh any general advantage usernames would have over avatars or vice versa. Incidentally, without looking back at the previous hypothetical username, can one remember whether it was "絕對伏特加", "絕對沒有", or "絕對值" [Chinese characters from Google Translate]?
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Lets say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post if from) or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Lets ignore rep points, as it is not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
It depends not only on the person, but also on the service. If you need to type the username often (or see it in your messages), then the username is more memorable (e.g. Twitter). Making the avatar very small also prevents the avatar from being recognized. Some services might allow non-Latin letters in the username, which can make usernames look unfamiliar. Speaking of unfamiliar: it also depends on the chosen username and avatar. A long username is hard to memorize, and a photo of a user's face in a 32x32 avatar is hard to remember or recognize. In conclusion, it depends very much on the service where both of them are going to be used.
Although it is not exactly about avatar and user name, there is a research paper about distinctive file icons called [VisualIDs: Automatic Distinctive Icons for Desktop Interfaces](http://scribblethink.org/Work/VisualIDs/visualids.html) in SIGGRAPH 2004. Visual distinctiveness is unsurprisingly useful for both short term memory task (browsing for a specific file) and long term memory task (sketching and describing icons two days later). One interesting principle is that arbitrarily unique icons can be recalled regardless of their contents or meanings to the user. Note that this research assumes a priori that "Search and memory for images is known to be generally faster and more robust than search and memory for words" with a reference to [Data Mountain (UIST 1998)](http://research.microsoft.com/apps/pubs/default.aspx?id=64329). Despite no direct comparison to textual memory, I believe that this research can supplement that visual memory is *stickier* i.e. changing a distinctive and registered avatar would throw users off more than changing a username.
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Lets say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post if from) or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Lets ignore rep points, as it is not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
Standard reminder: Graphics are often of little or no use to folks who are reading the screen through assistive technology. The simplest answer -- as here in Stack Exchange -- is to display *both*.
Although it is not exactly about avatar and user name, there is a research paper about distinctive file icons called [VisualIDs: Automatic Distinctive Icons for Desktop Interfaces](http://scribblethink.org/Work/VisualIDs/visualids.html) in SIGGRAPH 2004. Visual distinctiveness is unsurprisingly useful for both short term memory task (browsing for a specific file) and long term memory task (sketching and describing icons two days later). One interesting principle is that arbitrarily unique icons can be recalled regardless of their contents or meanings to the user. Note that this research assumes a priori that "Search and memory for images is known to be generally faster and more robust than search and memory for words" with a reference to [Data Mountain (UIST 1998)](http://research.microsoft.com/apps/pubs/default.aspx?id=64329). Despite no direct comparison to textual memory, I believe that this research can supplement that visual memory is *stickier* i.e. changing a distinctive and registered avatar would throw users off more than changing a username.
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Lets say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post if from) or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Lets ignore rep points, as it is not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
Some people have a great memory for words, other people a great memory for faces. Some have both or neither. Some avatars can be completely generic and difficult to remember, such as Gravatar's autogenerated avatars. ![enter image description here](https://i.stack.imgur.com/zdA2T.png) Others can be very unique and memorable. Your DVK example is a good one. Some usernames can be completely generic, such as this site's "user3216857". Others can be very unique and memorable. This is also very individual, since topics or references that impress me might not impress someone else (e.g. the username Gandalf wouldn't be especially memorable to someone unfamiliar with LoTR, but it's safe to assume that more SO newcomers would remember the name Gandalf than Jon Skeet - which is only memorable because he is Jon Skeet). People process images faster than written words, even in their native language. Also, images contain more information and they are much more diverse. If you squint a little, all words will look pretty much the same, while you can still tell apart your average avatars. So they're usually easier to identify. This is separate from memorability.
Standard reminder: Graphics are often of little or no use to folks who are reading the screen through assistive technology. The simplest answer -- as here in Stack Exchange -- is to display *both*.
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Lets say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post if from) or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Lets ignore rep points, as it is not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
As most people have said, it depends on the user. For me personally it depends entirely on the context. * On forums I'm heavily dependent on usernames for identifying people. I tend to remember people by their usernames on forums (and here). I think this is because forum users tend to use avatars detached from their identity, such as cartoon characters or memes. * On Twitter and Facebook I'm heavily reliant on the avatars of users because those people tend to represent themselves by their own face. I either learn what they look like (if I don't know them personally) or, if it's someone I already know from the real world, I know their face. So my answer would be: think about the context of use and base your decision on that.
Although it is not exactly about avatar and user name, there is a research paper about distinctive file icons called [VisualIDs: Automatic Distinctive Icons for Desktop Interfaces](http://scribblethink.org/Work/VisualIDs/visualids.html) in SIGGRAPH 2004. Visual distinctiveness is unsurprisingly useful for both short term memory task (browsing for a specific file) and long term memory task (sketching and describing icons two days later). One interesting principle is that arbitrarily unique icons can be recalled regardless of their contents or meanings to the user. Note that this research assumes a priori that "Search and memory for images is known to be generally faster and more robust than search and memory for words" with a reference to [Data Mountain (UIST 1998)](http://research.microsoft.com/apps/pubs/default.aspx?id=64329). Despite no direct comparison to textual memory, I believe that this research can supplement that visual memory is *stickier* i.e. changing a distinctive and registered avatar would throw users off more than changing a username.
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Lets say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post if from) or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Lets ignore rep points, as it is not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
It depends on the person. A bit of an extreme example, but a dyslexic for example might struggle telling apart John Skeet and Jonno Teeks, whereas a color-blind person might not be able to tell two people apart that have combinations of certain colors in their avatar. In general though, avatars tend to offer a wider variety of options. You can use letters, words, colors, shapes, etc. whereas usernames can only do a certain amount of characters, mostly in a single color. Then again, usernames are often unique, and people won't be able to change them. Combining those characteristics: avatars serve as a great "first glance" recognition but aren't set in stone, whereas usernames are good for specifics and certainty. So in general, **avatars are more recognizable, but less authoritative**.
It depends not only on person, but also on service. if you need to type username often ( or see it in your message ) then username is more memorable ( twitter ) Making avatar very small also prevent avatar to recognition. Some services might allow to use not only latin letters in the username which make them weird Speaking about weird. It also depends on chosen username and avatar. Long username is hard to memorize, photo of user face in an avatar 32x32 is hard to remember or recognize As a conclusion very depends on service where both of their are going to be used.
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Lets say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post if from) or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Lets ignore rep points, as it is not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
It depends on the person. A bit of an extreme example, but a dyslexic for example might struggle telling apart John Skeet and Jonno Teeks, whereas a color-blind person might not be able to tell two people apart that have combinations of certain colors in their avatar. In general though, avatars tend to offer a wider variety of options. You can use letters, words, colors, shapes, etc. whereas usernames can only do a certain amount of characters, mostly in a single color. Then again, usernames are often unique, and people won't be able to change them. Combining those characteristics: avatars serve as a great "first glance" recognition but aren't set in stone, whereas usernames are good for specifics and certainty. So in general, **avatars are more recognizable, but less authoritative**.
If it's only about being recognizable or memorable, then it's the avatar. The [Wikipedia page on avatars](http://en.wikipedia.org/wiki/Avatar_%28computing%29) states this (too bad no research or article backs it up): > > ...the avatar is placed in order for other users to easily identify who has written the post without having to read their username. > > > which implies an avatar is indeed easier to recognize. You can catch a glimpse of an image without having to focus on it (or is it just me?), yet a username has to be read. And that statement more or less matches my experience. In a forum with big avatars, the avatar is the first thing I recognize. I opened many threads and saw many posts. After some time, I realized, "eyyy that ava again," and that's where I 'realize' the username and, if I'm lucky, remember it. I also found that if user X uses, say, a Son Goku avatar for a long time, or keeps changing the avatar but always to a Son Goku image that clearly displays the face, then every time I see a Son Goku avatar in the forum, I will immediately associate it with that user.
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Lets say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post if from) or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Lets ignore rep points, as it is not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
Some people have a great memory for words, other people a great memory for faces. Some have both or neither. Some avatars can be completely generic and difficult to remember, such as Gravatar's autogenerated avatars. ![enter image description here](https://i.stack.imgur.com/zdA2T.png) Others can be very unique and memorable. Your DVK example is a good one. Some usernames can be completely generic, such as this site's "user3216857". Others can be very unique and memorable. This is also very individual, since topics or references that impress me might not impress someone else (e.g. the username Gandalf wouldn't be especially memorable to someone unfamiliar with LoTR, but it's safe to assume that more SO newcomers would remember the name Gandalf than Jon Skeet - which is only memorable because he is Jon Skeet). People process images faster than written words, even in their native language. Also, images contain more information and they are much more diverse. If you squint a little, all words will look pretty much the same, while you can still tell apart your average avatars. So they're usually easier to identify. This is separate from memorability.
If it's only about being recognizable or memorable, then it's the avatar. The [Wikipedia page on avatars](http://en.wikipedia.org/wiki/Avatar_%28computing%29) states this (too bad no research or article backs it up): > > ...the avatar is placed in order for other users to easily identify who has written the post without having to read their username. > > > which implies an avatar is indeed easier to recognize. You can catch a glimpse of an image without having to focus on it (or is it just me?), yet a username has to be read. And that statement more or less matches my experience. In a forum with big avatars, the avatar is the first thing I recognize. I opened many threads and saw many posts. After some time, I realized, "eyyy that ava again," and that's where I 'realize' the username and, if I'm lucky, remember it. I also found that if user X uses, say, a Son Goku avatar for a long time, or keeps changing the avatar but always to a Son Goku image that clearly displays the face, then every time I see a Son Goku avatar in the forum, I will immediately associate it with that user.
58,525
**Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?** I am interested in understanding this from a usability perspective. Lets say on a site such as this network, a user has both a username as well as a photo/avatar. * On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar** * On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username** Which of these is more recognisable/memorable?1 i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post if from) or would there be more of an issue if **Jon Skeet** changed his username?3 --- **1.** Lets ignore rep points, as it is not relevant to my question. **2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that. **3. Please reference research where possible.**
2014/06/06
[ "https://ux.stackexchange.com/questions/58525", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/4430/" ]
It depends not only on the person, but also on the service. If you need to type the username often (or see it in your messages), then the username is more memorable (e.g. Twitter). Making the avatar very small also prevents the avatar from being recognized. Some services might allow non-Latin letters in the username, which can make usernames look unfamiliar. Speaking of unfamiliar: it also depends on the chosen username and avatar. A long username is hard to memorize, and a photo of a user's face in a 32x32 avatar is hard to remember or recognize. In conclusion, it depends very much on the service where both of them are going to be used.
Certain avatar images are very memorable and distinct. Some are almost indistinguishable. Likewise with user names. Someone who was asked to remember the username of "Jon Skeet" and was asked a day later to identify it from a list of the ten most similar usernames might have a good chance at identifying it, while someone who was shown a generic gravatar and asked to identify it from a list including nine randomly-selected ones would have a relatively poor chance even five minutes later. On the other hand, someone asked to remember a username written in an unfamiliar script [e.g. a hypothetical user "絕對沒有"] would have a harder time than someone asked to remember the gravatar of Jon Skeet (compared with the ten most similar gravatars, assuming nobody copied Mr. Skeet's picture). The differences in memorability between different usernames and avatars would seem to outweigh any general advantage usernames would have over avatars or vice versa. Incidentally, without looking back at the previous hypothetical username, can one remember whether it was "絕對伏特加", "絕對沒有", or "絕對值" [Chinese characters from Google Translate]?
9,849,475
I would like to select a value from a database table or its inverse with only one SQL request. I previously posted a question, but now I have more specifications, which change the problem. My table fields are: `id, rate, from_value, to_value, date_modified` This is the pseudo-code I would like to achieve with only one request : ``` SELECT `rate` if `from_value` = $from_value and `to_value` = $to_value ``` OR ``` SELECT (1/`rate`) if `from_value` = $to_value AND `to_value` = $from_value WHERE UNIX_TIMESTAMP('NOW()')-".$expirationTime." < UNIX_TIMESTAMP('`date_modified`')) ``` I'm sure one of the rows is present in the table, so I would like to return only the `rate` value, and NULL only if the WHERE clause is not satisfied. I don't want it to return NULL for the other rows (for which no conditions are satisfied). Thanks
2012/03/24
[ "https://Stackoverflow.com/questions/9849475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1288927/" ]
Looks like a simple [CASE](http://dev.mysql.com/doc/refman/5.5/en/case-statement.html) will do the trick: ``` select case when from_value = $from_value and to_value = $to_value then rate when from_value = $to_value and to_value = $from_value then 1 / rate end ... ``` You might want an ELSE on the CASE if your WHERE clause doesn't force one of the CASE's branches to match. If you only want rows that match one of the branches in the above CASE then use the WHERE clause: ``` select case when from_value = $from_value and to_value = $to_value then rate when from_value = $to_value and to_value = $from_value then 1 / rate end where (from_value = $from_value and to_value = $to_value) or (from_value = $to_value and to_value = $from_value) ```
Haven't tried this out, but you can use MySQL's `IF()` function as follows (the `IF ... THEN ... END IF` statement form is only valid inside stored programs, so a plain SELECT needs the function form): ``` SELECT IF(from_value = $from_value AND to_value = $to_value, rate, 1/rate) AS rate FROM tablename ```
27,205,874
I decided to create my own `SortableDictionary` struct in Swift, building it in the Xcode playground so I could test it as I went. `SortableDictionary` works by taking a dictionary and a sort function and using the sort function to create a sorted array of keys from the dictionary. It can sort by dictionary keys or by dictionary values, and has separate sorts for each (with a Bool value to toggle between the two). I know that `findInsertionIndex` works as it is supposed to, and I'm confident of `insert` and `sort`. But whenever I tried to create a `SortableDictionary` instance I kept getting ``` Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0) ``` no matter what I tried. After posting here, I found that the error only occurs when SortableDictionary has to deal with a dictionary with more than one value. Pass it a dictionary with one or no values and it will work. Thus this will work: ``` let collection: [String: Double] = ["Bananas": 5] var sortedDictionary2 = SortableDictionary(dictionary: collection, sortByValues: true, valueSortKey: {$0 < $1}, keySortKey: {$1 < $0}) ``` But this will not: ``` let collection: [String: Double] = ["Bananas": 5, "Dates": 3] var sortedDictionary2 = SortableDictionary(dictionary: collection, sortByValues: true, valueSortKey: {$0 < $1}, keySortKey: {$1 < $0}) ``` I then cut out sort, insert, and the two findInsertionIndex methods from the SortableDictionary struct along with all of the non-computed global variables. 
This moved the error to the line in findInsertionIndex(key sorting version): ``` if (keySortKey!(sortedKeys[upperBound], key)) ``` This despite the fact that sortingByValues was true which meant that function should not have ever even been called (I double checked this, sortingByValues is true at the time of the if statement, yet Xcode insists on executing the else branch instead). So then I rewrote the code the bare minimum to switch from using global variables to parameters passed into each function. This moved the error code to the line in insert: ``` sortedKeys = sortedKeys.filter {$0 != key} ``` The number of Key:Value pairs in the dictionary no longer appear to have any effect on the error. I still have no idea what is going on, but I've created a Gist for my sortable dictionary and append both versions of my modified function code to the end (only use one at a time). Here is my Sortable Dictionary code: <https://gist.github.com/7OOTnegaTerces/6277116470d03b4676c5>
2014/11/29
[ "https://Stackoverflow.com/questions/27205874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4307013/" ]
I would suggest a *structure* (i.e. a `struct` or a `class`) for the top level, and have a `std::map` of those top-level structures. Then each structure in turn contains a `std::map` for the contained symbols, again with a structure that contains, among other things, the type of the symbol. Something as simple as this: ``` struct LocalSymbol { std::string name; enum { FLOAT, INT } type; // Possibly other information needed for local symbols }; struct GlobalSymbol { std::string name; // Possibly other information needed for global symbols std::map<std::string, LocalSymbol> locals; }; std::map<std::string, GlobalSymbol> globals; ``` This will very easily give you the nested structure you seem to want, as well as keeping all related data tightly together in smaller structures. Your other big problem seems to be parsing, and I suggest you read more about compilers and parsing, and try to implement a more traditional lexer-parser kind of parser, where you split up the input handling and parsing into two components. If you want to hand-code the parser part, I suggest a [recursive descent style parser](http://en.wikipedia.org/wiki/Recursive_descent_parser) which will make it very easy to handle scoping and levels.
Here it is! If you accept an advice, start reading the main function instead of the types on the top. I didn't need multimap. Instead of copying the parent block's variables, I only reference the parent container with its index. Just before printing there is a traversal to the top-most block, and it collects all the variables visible in the current block. ``` #include <algorithm> #include <cassert> #include <fstream> #include <iomanip> #include <iostream> #include <map> #include <stack> #include <string> #include <vector> using namespace std; typedef string type_t; //type of variable typedef string variable_t; //name of variable typedef string class_t; //name of 'class' (container) const int NO_PARENT = -1; //top-most typedef vector<class ClassSymbolTable> symbols_t; //we use vector to preserve order // main class, it stores symbols of a single class, and references its parent class ClassSymbolTable { class_t _name; //class name map<variable_t, type_t> _types; //map of variable types vector<variable_t> _variables; //we use this vector to preserve order symbols_t& _symbols; //reference to the symbol table int _parent_index = NO_PARENT; //reference to parent index in symbol vector //!! parent class, nullptr if top-level ClassSymbolTable* parent() const { return _parent_index != NO_PARENT ? &_symbols[_parent_index] : nullptr; } // does this class directly declares var ? 
bool declares_variable(const variable_t& var) const { return _types.find(var) != _types.end(); } // print variable info in desired format void print_variable(const variable_t& var) { if (declares_variable(var)) { cout << " -> <" << _types[var] << ", " << _name << ">"; } if (parent()) { parent()->print_variable(var); } } // traverse classes up to top-level and collect variables in order void collect_variables_to_print(vector<variable_t>& vars) { if (ClassSymbolTable* p = parent()) { p->collect_variables_to_print(vars); // add variables defined on this level vector<variable_t> add_vars; for (size_t i = 0; i < _variables.size(); ++i) { if (find(vars.begin(), vars.end(), _variables[i]) == vars.end()) { // defined on this level add_vars.push_back(_variables[i]); } } vars.insert(vars.end(), add_vars.begin(), add_vars.end()); } else { //top-level vars = _variables; } } // get depth for indentation int get_depth() const { int depth = 0; for (ClassSymbolTable* p = parent(); p; p = p->parent()) { ++depth; } return depth; } static size_t s_max_class_name_length; //for printing public: // ctor ClassSymbolTable(const string& name, int parent_index, symbols_t& symbols) : _name(name), _parent_index(parent_index), _symbols(symbols) { s_max_class_name_length = max(s_max_class_name_length, name.length()); } // add variable void add(const variable_t& var, const type_t& type) { _variables.push_back(var); _types[var] = type; } // print this class' vars in desired format void print() { cout.fill(' '); const int indent = get_depth() + s_max_class_name_length + 3 /*for ':' */; vector<variable_t> vars; collect_variables_to_print(vars); // print class name string classname = _name + ": "; cout.fill(' '); cout.width(indent); cout << classname; // print vars cout.width(0); cout << vars[0]; print_variable(vars[0]); cout << endl; for (size_t i = 1; i < vars.size(); ++i) { cout.width(indent); cout << ' '; //pad before cout.width(0); cout << vars[i]; print_variable(vars[i]); cout << endl; } 
cout.width(0); } }; size_t ClassSymbolTable::s_max_class_name_length = 0; int main(int argc, char* argv[]) { ifstream in("input1.txt"); assert(in.is_open()); symbols_t symbols; //collect symbols const char* delimiters = ":;{}"; vector<string> current_tokens; string buffer; stack<int> class_stack; //to manage class hierarchy, we stack the classes' index in the symbol vector class_stack.push(NO_PARENT); //so we dont have to branch at first level while (in >> buffer) { size_t found = buffer.find_first_of(delimiters); current_tokens.push_back(buffer.substr(0, found)); //either whole or until delimiter if (found != string::npos) { //delimiter found char delimiter = buffer[found]; switch (delimiter) { case ':': //class name assert(current_tokens.size() == 1); { // add new class symbol table and refer to parent class symbols.emplace_back(current_tokens[0], class_stack.top(), symbols); // we rather store index in '{' for symmetric code } break; case '{': //block open assert(!symbols.empty()); { class_stack.push(symbols.size()-1); //stack the index for nested classes } break; case '}': //block close assert(!class_stack.empty()); { class_stack.pop(); //done with this class } break; case ';': //variable assert(!symbols.empty()); assert(current_tokens.size() == 2); { // add variable to the current class symbol table ClassSymbolTable& current_class = symbols.back(); current_class.add(current_tokens[1], current_tokens[0]); } break; } //put back the remaining characters current_tokens.clear(); if (found < buffer.size() - 1) { current_tokens.push_back(buffer.substr(found + 1)); } } } assert(class_stack.size() == 1 && class_stack.top() == NO_PARENT); //just to be sure for (ClassSymbolTable& c : symbols) { c.print(); } cout << "." << endl; return 0; } ``` It can be optimized to avoid lookup during printing, and you can also avoid storing the symbols if you only want to print them. You can store local variables here and there, but the main idea will be the same. 
And yes, I'm using yet another container to manage the nesting: a stack :) With only a multimap, your variables would get shuffled; somehow you have to keep track of the order, and I am using vectors to do that. (If you can't compile C++11, just replace the range-based for loop at the very end of main.)
48,835,065
The question seems to be pretty easy, but I haven't found a solution myself. I have a folder with a single jpg file inside it (foo bar.jpg) and this bat-file:

```
for /f %%f in ('dir /b /a:-d "*.jpg"') do echo "%%f"
pause
```

For some reason instead of something like this:

```
C:\Test>echo foo bar.jpg
foo bar.jpg
```

I see this:

```
C:\Test>echo "foo"
"foo"
```

Even though I already put `%%f` inside quotes, the command prompt doesn't understand the space in the file name. How do I fix it? (In my real code I will use `copy` instead of `echo`.)
2018/02/16
[ "https://Stackoverflow.com/questions/48835065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5587480/" ]
Cypress automatically sends the cookie named "csrftoken" with the request, but Django expects the csrf token to be called "csrfmiddlewaretoken". Therefore, I had to get the token and pass it by hand as follows: ``` cy.getCookie('csrftoken') .then((csrftoken) => { cy.request({ method: 'POST', url: your_url_here, // "form: true" is required here for the submitted information to be accessible via request.POST in Django (even though the docs make it sound like a bare 'POST' request can be made without the "form: true") form: true, body: { csrfmiddlewaretoken: csrftoken.value, testing: true, obj_model: 'Customer', field_name: 'name', field_value: 'Customer - Testing' } }) .then((result) => { expect(result.body.success).to.equal(true) }) .then(() => { //additional processing here if needed }) }) ```
You are correct: Cypress is not sending the token in the body because it is `undefined`, because of the way you are using `.get()` on the `input` to get the token.

You are using `.get()` as a *synchronous* call, but it's actually **async**. This is because Cypress will intelligently retry finding the DOM element, and that takes an indeterminate amount of time. This is a core concept of Cypress that enables its built-in waiting and retrying. The Cypress documentation details this better than I can, so check that out here: <https://docs.cypress.io/guides/core-concepts/introduction-to-cypress.html#Default-Assertions>

Accessing a property on a DOM element should be done in a callback; in your case:

```
cy.get("input[name='csrfmiddlewaretoken']").then($input => {
  const hidden_token = $input.val()
  cy.request({
    method: 'POST',
    form: true,
    url: login_url,
    // body: {'username': 'guest', 'password': 'password', 'csrfmiddlewaretoken': cy.getCookie('csrftoken').value}
    body: {'username': 'guest', 'password': 'password', 'csrfmiddlewaretoken': hidden_token}
  })
})
```

...

**Pro-tip: using Cypress's doc search will usually lend you what you need**

[![enter image description here](https://i.stack.imgur.com/1rc5c.png)](https://i.stack.imgur.com/1rc5c.png)
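The same pitfall can be sketched outside Cypress with a plain promise. This is my own illustration; the hypothetical `getToken` stands in for the chained `cy.get(...).val()` lookup:

```javascript
// An async lookup returns a promise-like object, not the value itself,
// so the value must be read inside the callback.
function getToken() {
  return Promise.resolve("abc123"); // stands in for cy.get(...).val()
}

// Wrong: 'token' here is a Promise, not the string "abc123".
const token = getToken();

// Right: consume the value inside .then(), as in the Cypress code above.
getToken().then((value) => {
  console.log(value); // prints: abc123
});
```

Reading the value synchronously is exactly the bug in the question: the request body ends up holding a pending object instead of the token string.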
48,835,065
The question seems to be pretty easy, but I haven't found a solution myself. I have a folder with a single jpg file inside it (foo bar.jpg) and this bat-file:

```
for /f %%f in ('dir /b /a:-d "*.jpg"') do echo "%%f"
pause
```

For some reason instead of something like this:

```
C:\Test>echo foo bar.jpg
foo bar.jpg
```

I see this:

```
C:\Test>echo "foo"
"foo"
```

Even though I already put `%%f` inside quotes, the command prompt doesn't understand the space in the file name. How do I fix it? (In my real code I will use `copy` instead of `echo`.)
2018/02/16
[ "https://Stackoverflow.com/questions/48835065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5587480/" ]
You can get the first CSRF token required for login using a `HEAD` request and looking at the cookies (no need to parse the page). Also you can have your custom `cy.login()` return the token (asynchronously, so you need to use `.then()`) instead of having to call `cy.getCookie('csrftoken')` again if you need a token afterwards for POST requests and such: ```js Cypress.Commands.add('login', (username, password) => { return cy.request({ url: '/login/', method: 'HEAD' // cookies are in the HTTP headers, so HEAD suffices }).then(() => { cy.getCookie('sessionid').should('not.exist') cy.getCookie('csrftoken').its('value').then((token) => { let oldToken = token cy.request({ url: '/login/', method: 'POST', form: true, followRedirect: false, // no need to retrieve the page after login body: { username: username, password: password, csrfmiddlewaretoken: token } }).then(() => { cy.getCookie('sessionid').should('exist') return cy.getCookie('csrftoken').its('value') }) }) }) }) ``` Note: The token changes after login, therefore two `cy.getCookie('csrftoken')` calls. Afterwards you can just use it in the following way in your tests (see <https://docs.djangoproject.com/en/dev/ref/csrf/> for why the header is needed): ```js cy.login().then((csrfToken) => { cy.request({ method: 'POST', url: '/api/baz/', body: { 'foo': 'bar' }, headers: { 'X-CSRFToken': csrfToken } }) }) ```
You are correct: Cypress is not sending the token in the body because it is `undefined`, because of the way you are using `.get()` on the `input` to get the token.

You are using `.get()` as a *synchronous* call, but it's actually **async**. This is because Cypress will intelligently retry finding the DOM element, and that takes an indeterminate amount of time. This is a core concept of Cypress that enables its built-in waiting and retrying. The Cypress documentation details this better than I can, so check that out here: <https://docs.cypress.io/guides/core-concepts/introduction-to-cypress.html#Default-Assertions>

Accessing a property on a DOM element should be done in a callback; in your case:

```
cy.get("input[name='csrfmiddlewaretoken']").then($input => {
  const hidden_token = $input.val()
  cy.request({
    method: 'POST',
    form: true,
    url: login_url,
    // body: {'username': 'guest', 'password': 'password', 'csrfmiddlewaretoken': cy.getCookie('csrftoken').value}
    body: {'username': 'guest', 'password': 'password', 'csrfmiddlewaretoken': hidden_token}
  })
})
```

...

**Pro-tip: using Cypress's doc search will usually lend you what you need**

[![enter image description here](https://i.stack.imgur.com/1rc5c.png)](https://i.stack.imgur.com/1rc5c.png)
48,835,065
The question seems to be pretty easy, but I haven't found a solution myself. I have a folder with a single jpg file inside it (foo bar.jpg) and this bat-file:

```
for /f %%f in ('dir /b /a:-d "*.jpg"') do echo "%%f"
pause
```

For some reason instead of something like this:

```
C:\Test>echo foo bar.jpg
foo bar.jpg
```

I see this:

```
C:\Test>echo "foo"
"foo"
```

Even though I already put `%%f` inside quotes, the command prompt doesn't understand the space in the file name. How do I fix it? (In my real code I will use `copy` instead of `echo`.)
2018/02/16
[ "https://Stackoverflow.com/questions/48835065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5587480/" ]
To use Cypress to login programmatically with Django (i.e., without using the UI), the simplest solution is to change two words in the CSRF testing recipe that Cypress provides. The two changes that I made in the below compared to the Cypress recipe at <https://github.com/cypress-io/cypress-example-recipes/blob/master/examples/logging-in__csrf-tokens/cypress/integration/logging-in-csrf-tokens-spec.js> are:

1. Changing `_csrf` to `csrfmiddlewaretoken`; and
2. Changing `$html.find("input[name=_csrf]").val()` to `$html.find("input[name=csrfmiddlewaretoken]").val()`

Recipe updated for Django 2.2:

```
// This recipe expands on the previous 'Logging in' examples
// and shows you how to use cy.request when your backend
// validates POSTs against a CSRF token
//
describe('Logging In - CSRF Tokens', function(){
  const username = 'cypress'
  const password = 'password123'

  Cypress.Commands.add('loginByCSRF', (csrfToken) => {
    cy.request({
      method: 'POST',
      url: '/login',
      failOnStatusCode: false, // dont fail so we can make assertions
      form: true, // we are submitting a regular form body
      body: {
        username,
        password,
        csrfmiddlewaretoken: csrfToken // insert this as part of form body
      }
    })
  })

  it('strategy #1: parse token from HTML', function(){
    cy.request('/login')
      .its('body')
      .then((body) => {
        const $html = Cypress.$(body)
        const csrf = $html.find("input[name=csrfmiddlewaretoken]").val()
        cy.loginByCSRF(csrf)
          .then((resp) => {
            expect(resp.status).to.eq(200)
          })
      })
  })
})
```
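For what it's worth, the token-parsing step can also be sketched outside Cypress in plain JavaScript. This is my own illustration; `extractCsrfToken` is a hypothetical helper assuming Django's standard hidden-input markup, not part of the recipe:

```javascript
// Extract Django's CSRF token from a login page's HTML.
// Assumes the markup Django renders for {% csrf_token %}:
//   <input type="hidden" name="csrfmiddlewaretoken" value="...">
function extractCsrfToken(html) {
  const match = html.match(
    /name=["']csrfmiddlewaretoken["']\s+value=["']([^"']+)["']/
  );
  return match ? match[1] : null; // null when no token is present
}

const page =
  '<form method="post">' +
  '<input type="hidden" name="csrfmiddlewaretoken" value="abc123">' +
  '</form>';

console.log(extractCsrfToken(page)); // prints: abc123
```

In the recipe above, `Cypress.$(body).find(...)` does the same job with a real DOM query instead of a regex, which is more robust against attribute ordering and formatting.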
You are correct: Cypress is not sending the token in the body because it is `undefined`, because of the way you are using `.get()` on the `input` to get the token.

You are using `.get()` as a *synchronous* call, but it's actually **async**. This is because Cypress will intelligently retry finding the DOM element, and that takes an indeterminate amount of time. This is a core concept of Cypress that enables its built-in waiting and retrying. The Cypress documentation details this better than I can, so check that out here: <https://docs.cypress.io/guides/core-concepts/introduction-to-cypress.html#Default-Assertions>

Accessing a property on a DOM element should be done in a callback; in your case:

```
cy.get("input[name='csrfmiddlewaretoken']").then($input => {
  const hidden_token = $input.val()
  cy.request({
    method: 'POST',
    form: true,
    url: login_url,
    // body: {'username': 'guest', 'password': 'password', 'csrfmiddlewaretoken': cy.getCookie('csrftoken').value}
    body: {'username': 'guest', 'password': 'password', 'csrfmiddlewaretoken': hidden_token}
  })
})
```

...

**Pro-tip: using Cypress's doc search will usually lend you what you need**

[![enter image description here](https://i.stack.imgur.com/1rc5c.png)](https://i.stack.imgur.com/1rc5c.png)
48,835,065
The question seems to be pretty easy, but I haven't found a solution myself. I have a folder with a single jpg file inside it (foo bar.jpg) and this bat-file:

```
for /f %%f in ('dir /b /a:-d "*.jpg"') do echo "%%f"
pause
```

For some reason instead of something like this:

```
C:\Test>echo foo bar.jpg
foo bar.jpg
```

I see this:

```
C:\Test>echo "foo"
"foo"
```

Even though I already put `%%f` inside quotes, the command prompt doesn't understand the space in the file name. How do I fix it? (In my real code I will use `copy` instead of `echo`.)
2018/02/16
[ "https://Stackoverflow.com/questions/48835065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5587480/" ]
According to the documentation

> In addition, for HTTPS requests, strict referer checking is done by CsrfViewMiddleware. This means that even if a subdomain can set or modify cookies on your domain, it can’t force a user to post to your application since that request won’t come from your own exact domain.

<https://docs.djangoproject.com/en/3.0/ref/csrf/#how-it-works>

Thus, if you're testing a site over HTTPS, you might have to explicitly set your `Referer` header. This solved it for me:

```
cy.request({
    url: '/accounts/login/',
    method: 'HEAD'
});

cy.getCookie('csrftoken').then(v => {
    cy.request({
        method: 'POST',
        form: true,
        url: '/accounts/login/',
        headers: {
            Referer: `${Cypress.config('baseUrl')}/accounts/login/`
        },
        body: {
            csrfmiddlewaretoken: v.value,
            login: Cypress.env('agentUsername'),
            password: Cypress.env('agentPassword')
        }
    });
});
```
You are correct: Cypress is not sending the token in the body because it is `undefined`, because of the way you are using `.get()` on the `input` to get the token.

You are using `.get()` as a *synchronous* call, but it's actually **async**. This is because Cypress will intelligently retry finding the DOM element, and that takes an indeterminate amount of time. This is a core concept of Cypress that enables its built-in waiting and retrying. The Cypress documentation details this better than I can, so check that out here: <https://docs.cypress.io/guides/core-concepts/introduction-to-cypress.html#Default-Assertions>

Accessing a property on a DOM element should be done in a callback; in your case:

```
cy.get("input[name='csrfmiddlewaretoken']").then($input => {
  const hidden_token = $input.val()
  cy.request({
    method: 'POST',
    form: true,
    url: login_url,
    // body: {'username': 'guest', 'password': 'password', 'csrfmiddlewaretoken': cy.getCookie('csrftoken').value}
    body: {'username': 'guest', 'password': 'password', 'csrfmiddlewaretoken': hidden_token}
  })
})
```

...

**Pro-tip: using Cypress's doc search will usually lend you what you need**

[![enter image description here](https://i.stack.imgur.com/1rc5c.png)](https://i.stack.imgur.com/1rc5c.png)
48,835,065
The question seems to be pretty easy, but I haven't found a solution myself. I have a folder with a single jpg file inside it (foo bar.jpg) and this bat-file:

```
for /f %%f in ('dir /b /a:-d "*.jpg"') do echo "%%f"
pause
```

For some reason instead of something like this:

```
C:\Test>echo foo bar.jpg
foo bar.jpg
```

I see this:

```
C:\Test>echo "foo"
"foo"
```

Even though I already put `%%f` inside quotes, the command prompt doesn't understand the space in the file name. How do I fix it? (In my real code I will use `copy` instead of `echo`.)
2018/02/16
[ "https://Stackoverflow.com/questions/48835065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5587480/" ]
You can get the first CSRF token required for login using a `HEAD` request and looking at the cookies (no need to parse the page). Also you can have your custom `cy.login()` return the token (asynchronously, so you need to use `.then()`) instead of having to call `cy.getCookie('csrftoken')` again if you need a token afterwards for POST requests and such: ```js Cypress.Commands.add('login', (username, password) => { return cy.request({ url: '/login/', method: 'HEAD' // cookies are in the HTTP headers, so HEAD suffices }).then(() => { cy.getCookie('sessionid').should('not.exist') cy.getCookie('csrftoken').its('value').then((token) => { let oldToken = token cy.request({ url: '/login/', method: 'POST', form: true, followRedirect: false, // no need to retrieve the page after login body: { username: username, password: password, csrfmiddlewaretoken: token } }).then(() => { cy.getCookie('sessionid').should('exist') return cy.getCookie('csrftoken').its('value') }) }) }) }) ``` Note: The token changes after login, therefore two `cy.getCookie('csrftoken')` calls. Afterwards you can just use it in the following way in your tests (see <https://docs.djangoproject.com/en/dev/ref/csrf/> for why the header is needed): ```js cy.login().then((csrfToken) => { cy.request({ method: 'POST', url: '/api/baz/', body: { 'foo': 'bar' }, headers: { 'X-CSRFToken': csrfToken } }) }) ```
Cypress automatically sends the cookie named "csrftoken" with the request, but Django expects the csrf token to be called "csrfmiddlewaretoken". Therefore, I had to get the token and pass it by hand as follows: ``` cy.getCookie('csrftoken') .then((csrftoken) => { cy.request({ method: 'POST', url: your_url_here, // "form: true" is required here for the submitted information to be accessible via request.POST in Django (even though the docs make it sound like a bare 'POST' request can be made without the "form: true") form: true, body: { csrfmiddlewaretoken: csrftoken.value, testing: true, obj_model: 'Customer', field_name: 'name', field_value: 'Customer - Testing' } }) .then((result) => { expect(result.body.success).to.equal(true) }) .then(() => { //additional processing here if needed }) }) ```
48,835,065
The question seems to be pretty easy, but I haven't found a solution myself. I have a folder with a single jpg file inside it (foo bar.jpg) and this bat-file:

```
for /f %%f in ('dir /b /a:-d "*.jpg"') do echo "%%f"
pause
```

For some reason instead of something like this:

```
C:\Test>echo foo bar.jpg
foo bar.jpg
```

I see this:

```
C:\Test>echo "foo"
"foo"
```

Even though I already put `%%f` inside quotes, the command prompt doesn't understand the space in the file name. How do I fix it? (In my real code I will use `copy` instead of `echo`.)
2018/02/16
[ "https://Stackoverflow.com/questions/48835065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5587480/" ]
You can get the first CSRF token required for login using a `HEAD` request and looking at the cookies (no need to parse the page). Also you can have your custom `cy.login()` return the token (asynchronously, so you need to use `.then()`) instead of having to call `cy.getCookie('csrftoken')` again if you need a token afterwards for POST requests and such: ```js Cypress.Commands.add('login', (username, password) => { return cy.request({ url: '/login/', method: 'HEAD' // cookies are in the HTTP headers, so HEAD suffices }).then(() => { cy.getCookie('sessionid').should('not.exist') cy.getCookie('csrftoken').its('value').then((token) => { let oldToken = token cy.request({ url: '/login/', method: 'POST', form: true, followRedirect: false, // no need to retrieve the page after login body: { username: username, password: password, csrfmiddlewaretoken: token } }).then(() => { cy.getCookie('sessionid').should('exist') return cy.getCookie('csrftoken').its('value') }) }) }) }) ``` Note: The token changes after login, therefore two `cy.getCookie('csrftoken')` calls. Afterwards you can just use it in the following way in your tests (see <https://docs.djangoproject.com/en/dev/ref/csrf/> for why the header is needed): ```js cy.login().then((csrfToken) => { cy.request({ method: 'POST', url: '/api/baz/', body: { 'foo': 'bar' }, headers: { 'X-CSRFToken': csrfToken } }) }) ```
To use Cypress to login programmatically with Django (i.e., without using the UI), the simplest solution is to change two words in the CSRF testing recipe that Cypress provides. The two changes that I made in the below compared to the Cypress recipe at <https://github.com/cypress-io/cypress-example-recipes/blob/master/examples/logging-in__csrf-tokens/cypress/integration/logging-in-csrf-tokens-spec.js> are:

1. Changing `_csrf` to `csrfmiddlewaretoken`; and
2. Changing `$html.find("input[name=_csrf]").val()` to `$html.find("input[name=csrfmiddlewaretoken]").val()`

Recipe updated for Django 2.2:

```
// This recipe expands on the previous 'Logging in' examples
// and shows you how to use cy.request when your backend
// validates POSTs against a CSRF token
//
describe('Logging In - CSRF Tokens', function(){
  const username = 'cypress'
  const password = 'password123'

  Cypress.Commands.add('loginByCSRF', (csrfToken) => {
    cy.request({
      method: 'POST',
      url: '/login',
      failOnStatusCode: false, // dont fail so we can make assertions
      form: true, // we are submitting a regular form body
      body: {
        username,
        password,
        csrfmiddlewaretoken: csrfToken // insert this as part of form body
      }
    })
  })

  it('strategy #1: parse token from HTML', function(){
    cy.request('/login')
      .its('body')
      .then((body) => {
        const $html = Cypress.$(body)
        const csrf = $html.find("input[name=csrfmiddlewaretoken]").val()
        cy.loginByCSRF(csrf)
          .then((resp) => {
            expect(resp.status).to.eq(200)
          })
      })
  })
})
```
48,835,065
The question seems to be pretty easy, but I haven't found a solution myself. I have a folder with a single jpg file inside it (foo bar.jpg) and this bat-file:

```
for /f %%f in ('dir /b /a:-d "*.jpg"') do echo "%%f"
pause
```

For some reason instead of something like this:

```
C:\Test>echo foo bar.jpg
foo bar.jpg
```

I see this:

```
C:\Test>echo "foo"
"foo"
```

Even though I already put `%%f` inside quotes, the command prompt doesn't understand the space in the file name. How do I fix it? (In my real code I will use `copy` instead of `echo`.)
2018/02/16
[ "https://Stackoverflow.com/questions/48835065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5587480/" ]
You can get the first CSRF token required for login using a `HEAD` request and looking at the cookies (no need to parse the page). Also you can have your custom `cy.login()` return the token (asynchronously, so you need to use `.then()`) instead of having to call `cy.getCookie('csrftoken')` again if you need a token afterwards for POST requests and such: ```js Cypress.Commands.add('login', (username, password) => { return cy.request({ url: '/login/', method: 'HEAD' // cookies are in the HTTP headers, so HEAD suffices }).then(() => { cy.getCookie('sessionid').should('not.exist') cy.getCookie('csrftoken').its('value').then((token) => { let oldToken = token cy.request({ url: '/login/', method: 'POST', form: true, followRedirect: false, // no need to retrieve the page after login body: { username: username, password: password, csrfmiddlewaretoken: token } }).then(() => { cy.getCookie('sessionid').should('exist') return cy.getCookie('csrftoken').its('value') }) }) }) }) ``` Note: The token changes after login, therefore two `cy.getCookie('csrftoken')` calls. Afterwards you can just use it in the following way in your tests (see <https://docs.djangoproject.com/en/dev/ref/csrf/> for why the header is needed): ```js cy.login().then((csrfToken) => { cy.request({ method: 'POST', url: '/api/baz/', body: { 'foo': 'bar' }, headers: { 'X-CSRFToken': csrfToken } }) }) ```
According to the documentation

> In addition, for HTTPS requests, strict referer checking is done by CsrfViewMiddleware. This means that even if a subdomain can set or modify cookies on your domain, it can’t force a user to post to your application since that request won’t come from your own exact domain.

<https://docs.djangoproject.com/en/3.0/ref/csrf/#how-it-works>

Thus, if you're testing a site over HTTPS, you might have to explicitly set your `Referer` header. This solved it for me:

```
cy.request({
    url: '/accounts/login/',
    method: 'HEAD'
});

cy.getCookie('csrftoken').then(v => {
    cy.request({
        method: 'POST',
        form: true,
        url: '/accounts/login/',
        headers: {
            Referer: `${Cypress.config('baseUrl')}/accounts/login/`
        },
        body: {
            csrfmiddlewaretoken: v.value,
            login: Cypress.env('agentUsername'),
            password: Cypress.env('agentPassword')
        }
    });
});
```
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing.

I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".

Is there a better way of approaching this? Do I make something up?

I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
Have the players roll a bunch of perception rolls at the beginning, list them all, and cross them off as you go. They know they rolled them, but have no idea whether they see nothing because of a bad roll, or because there is truly nothing to see.
Perception rolls, like Stealth rolls, should be rolled by the GM out of sight of the players. The players should not know if they rolled high or low, just what they find.
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing.

I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".

Is there a better way of approaching this? Do I make something up?

I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
When I GM, there are generally four ways a perception check plays out.

1) There is something to be discovered. If I know that there is a trap or an ambush in the room, then the players might spot that thing. Depending on the amount of success on the perception roll, the players may get different levels of information. Maybe they realise that there is a pit trap in the middle of the corridor. Or maybe they can just see that the floor looks weird, like it doesn't fit in with the flagstones around that spot.

2) If you have nothing interesting prepared, there might still be something to be discovered. If I haven't prepared anything in the room, I'll quickly consider whether there still might be something the players can discover. Are there monsters in other rooms nearby that might be heard? Tracks or other signs of activity? Murals or furniture that have hints about the history or nature of the place? Like with 1, the level of success determines the amount of detail. It could be vague, like a sound from the next room over that might just be the ruins crumbling, or could be something alive moving around. Or explicit, like a mural showing the ritual used to open the magically sealed door elsewhere. Or inconsequential, such as destroyed machinery that reveals this to have been a torture room.

3/4) The perception check might fail, or there might be nothing that the players can discover. In both cases my answer is something along the lines of "Well.. there doesn't ***seem*** to be anything in the room." My players quickly learn that my suggestive tone means nothing in this case. I generally find it easier to always make it sound like the group missed something interesting, instead of trying to always make it sound like there was nothing interesting. The important thing is that your reaction is the same in both cases, so the players get no clue from you.

The way I play it, perception checks are used for finding interesting stuff.
I don't go into minute detail about the architecture or furniture or how the air smells, unless that detail is either useful or interesting for what it reveals. My players won't care about those minute details, and I see no reason for punishing them for looking around. I want them to look around. So they can find all the interesting stuff. And when there is no interesting stuff to find, I get it over with quickly with the simply words "Well.. there doesn't ***seem*** to be anything in the room." So we can quickly move onwards with the action.
I would say, allow a character an opportunity to roll a perception check, and treat it as a "gut feeling" type of thing about a room. If they want to go into more depth in a room, treat it as a Search attempt (X amount of time at a DC of Y) and do the whole DM trick of secretly rolling for random encounters or dual perception checks (do your characters notice the guards patrolling, and do the guards notice the characters searching where they shouldn't be?).

However, I would distinguish between "You don't see anything obvious..." and "There is nothing here..." -- by that, I mean, EXPLICITLY make it obvious that additional checks are not going to find anything, so don't waste time rolling. Maybe in the first case they failed the roll or there wasn't anything to find... but I would definitely try to make it obvious when additional checking is not going to be productive.

Though, I agree, you should wean the players off suggesting checks and more into describing actions/intentions...
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, find traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: what do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest". Is there a better way of approaching this? Do I make something up? I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
We assume that players are by default going around being perceptive at an ordinary level. If there's an interesting detail that might escape the players' notice, the DM will call for the relevant characters (maybe everyone, maybe the guy in front, maybe the characters with darkvision, etc...) to make a perception check. If everyone who was given the opportunity fails it, there's no opportunity for anyone else to metagame and make the check. The downside is, if everyone rolls a low number and the DM says "ok, you don't notice anything interesting" then the players know that they have missed something. Sometimes there are immediate consequences (the orcs jump out from behind the bushes and surprise you, you fall into the pit trap you failed to notice, etc...) but other times not, and you're left wondering if you missed something important. A partial antidote to that is to sometimes tell them they found something, but it's not the thing that you were offering the perception check for. If they were close, maybe they get a hint as to what they failed to perceive. Otherwise they "perceive" something totally different, and unimportant. You can also sometimes offer a perception check to find some small detail that isn't plot relevant, like "You notice some initials carved in that tree over there." This has the advantage of not "giving away" that something is significant, just because the players rolled a high perception check and found something. (If you're going to play this way, you should warn your players that not everything they perceive is going to be important to the plot, or they may waste too much time on red herrings.) To answer the question of what to do when players want to examine something, and roll high, but there's nothing to find, you can occasionally also insert irrelevant details. You notice that the second drawer of the dresser sticks when you try to open it. You notice scuff marks on the floor near the window. And so on. 
So yes, I would sometimes "make stuff up". I think it's fine to also say, "nothing appears exceptional or unusual in this room" a fair amount of the time.
Perception rolls, like Stealth rolls, should be rolled by the GM out of sight of the players. The players should not know if they rolled high or low, just what they find.
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, find traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: what do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest". Is there a better way of approaching this? Do I make something up? I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
So, what's the downside of saying, "nope, you find nothing"? You're committed to letting players make their own rolls (which is perfectly fine, though not everyone plays that way), so they already know that they hit a DC of 23 or less (or whatever). There is no need to punish yourself or them by pretending otherwise. It's not even particularly bad in terms of meta-gaming, if you are willing to decide that characters have some sense of how effective their perception rolls have been. If you don't like that, then by letting the players roll they are taking responsibility for not using knowledge of the die roll to meta-game. (This is often why people advocate having the GM do perception rolls in secret). I've had some success as a GM by using the "PCs search an empty room" situation to advantage to add some realism and depth of involvement, by having things that aren't really important but which fit the location, or that I think are cool. So things like utensils that fell behind something, a damaged straw doll, a broken weapon, dice, pottery. (Yes, you do run the risk of a player spending an hour on "the mystery of the rusty spoon"). That also helps a bit with the "the GM mentioned it so it must be part of the plot" meta-gaming. Ultimately, if you have something that contributes to your game when there is a perception check, use it. If you don't, move on fast to get to things that do.
Have the players roll a bunch of perception rolls at the beginning, list them all, and cross them off as you go. They know they rolled them, but they have no idea whether they see nothing because of a bad roll or because there is truly nothing to see.
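The bookkeeping here is just a first-in-first-out list the GM keeps hidden. A small Python sketch of the idea (the class and method names are made up for illustration):

```python
class PrerolledChecks:
    """A hidden queue of d20 results collected from a player at session start."""

    def __init__(self, rolls):
        self.rolls = list(rolls)  # the numbers the player rolled up front

    def next_check(self, modifier, dc):
        """Consume the next pre-rolled d20 and resolve it against a DC."""
        roll = self.rolls.pop(0)  # cross this roll off the list
        return roll + modifier >= dc
```

For example, with pre-rolled results `[15, 3]` and a +5 modifier against DC 18, the first check succeeds (20 vs 18) and the second fails (8 vs 18) -- and the player never learns which of their rolls was spent on which room.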
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, find traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: what do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest". Is there a better way of approaching this? Do I make something up? I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
**There is ALWAYS something to be found!**\* *What* the players find, of course, may not be at all relevant or useful to the story or to the characters' progress. But in fact, a high roll when searching or observing an area is a great chance to use some creativity to both enhance the overall experience, **and also to make the players think more carefully about their use of pointless random checks.** A five-minute discourse on the detailed state of the area should do the trick! Consider this possible response: > > There's clearly nothing of significant interest here to be seen -- the room is completely empty. But, feeling keenly observant and abnormally curious, you inspect the area with intense scrutiny anyway. You notice the roughness of each stony brick, and the slight decay of the grout between them. > > > Then you observe that your shoes make a satisfying *clop, clop, clop* sound as you walk, and you find yourself considering the subtle unevenness of the floor -- the sandy but firm texture of the sandstone, the slim gap between each tile. You doubt you could force a sheet of good paper between them. Indeed, despite the slight lip where some tiles have sunken a millimeter or two on one edge or another, the mason did their job well -- there's no way you could ever work a tile out of place without inflicting tremendous damage. What effort it must have taken him or her to cut the tiles from some hillside afar off, to apply the mortar and then painstakingly lay the tiles out perfectly adjacent to one another and in their proper order. You ponder the craftsman's trade a moment longer, and then turn your attention to the ceiling... *and so on.* > > > If you know your world, this isn't as hard as it may sound. Just be creative with it! --- \* *Unless you have a PC who happens to be floating isolated and disembodied in an absolute void. But then I'd think it's unlikely to matter.*
Have the players roll a bunch of perception rolls at the beginning, list them all, and cross them off as you go. They know they rolled them, but they have no idea whether they see nothing because of a bad roll or because there is truly nothing to see.
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, find traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: what do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest". Is there a better way of approaching this? Do I make something up? I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
**There is ALWAYS something to be found!**\* *What* the players find, of course, may not be at all relevant or useful to the story or to the characters' progress. But in fact, a high roll when searching or observing an area is a great chance to use some creativity to both enhance the overall experience, **and also to make the players think more carefully about their use of pointless random checks.** A five-minute discourse on the detailed state of the area should do the trick! Consider this possible response: > > There's clearly nothing of significant interest here to be seen -- the room is completely empty. But, feeling keenly observant and abnormally curious, you inspect the area with intense scrutiny anyway. You notice the roughness of each stony brick, and the slight decay of the grout between them. > > > Then you observe that your shoes make a satisfying *clop, clop, clop* sound as you walk, and you find yourself considering the subtle unevenness of the floor -- the sandy but firm texture of the sandstone, the slim gap between each tile. You doubt you could force a sheet of good paper between them. Indeed, despite the slight lip where some tiles have sunken a millimeter or two on one edge or another, the mason did their job well -- there's no way you could ever work a tile out of place without inflicting tremendous damage. What effort it must have taken him or her to cut the tiles from some hillside afar off, to apply the mortar and then painstakingly lay the tiles out perfectly adjacent to one another and in their proper order. You ponder the craftsman's trade a moment longer, and then turn your attention to the ceiling... *and so on.* > > > If you know your world, this isn't as hard as it may sound. Just be creative with it! --- \* *Unless you have a PC who happens to be floating isolated and disembodied in an absolute void. But then I'd think it's unlikely to matter.*
We assume that players are by default going around being perceptive at an ordinary level. If there's an interesting detail that might escape the players' notice, the DM will call for the relevant characters (maybe everyone, maybe the guy in front, maybe the characters with darkvision, etc...) to make a perception check. If everyone who was given the opportunity fails it, there's no opportunity for anyone else to metagame and make the check. The downside is, if everyone rolls a low number and the DM says "ok, you don't notice anything interesting" then the players know that they have missed something. Sometimes there are immediate consequences (the orcs jump out from behind the bushes and surprise you, you fall into the pit trap you failed to notice, etc...) but other times not, and you're left wondering if you missed something important. A partial antidote to that is to sometimes tell them they found something, but it's not the thing that you were offering the perception check for. If they were close, maybe they get a hint as to what they failed to perceive. Otherwise they "perceive" something totally different, and unimportant. You can also sometimes offer a perception check to find some small detail that isn't plot relevant, like "You notice some initials carved in that tree over there." This has the advantage of not "giving away" that something is significant, just because the players rolled a high perception check and found something. (If you're going to play this way, you should warn your players that not everything they perceive is going to be important to the plot, or they may waste too much time on red herrings.) To answer the question of what to do when players want to examine something, and roll high, but there's nothing to find, you can occasionally also insert irrelevant details. You notice that the second drawer of the dresser sticks when you try to open it. You notice scuff marks on the floor near the window. And so on. 
So yes, I would sometimes "make stuff up". I think it's fine to also say, "nothing appears exceptional or unusual in this room" a fair amount of the time.
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, find traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: what do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest". Is there a better way of approaching this? Do I make something up? I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
Have the players roll a bunch of perception rolls at the beginning, list them all, and cross them off as you go. They know they rolled them, but they have no idea whether they see nothing because of a bad roll or because there is truly nothing to see.
If they fail, make them perceive something wrong. A misperception can happen anytime, especially if they're really looking for something. Characters can misinterpret, misunderstand, and become so obsessed with something that they end up completely wrong about it. This will get them in trouble and give you plenty of ideas. Example: if someone is in a room where there's nothing to see but fails a perception test, tell them they have a gut feeling this room is important. If they succeed, tell them it's not important. And in this time they could be discovered, attacked, or lose a good opportunity. This gives you time to move NPCs, create situations, and so on. It depends on your story.
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, find traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: what do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest". Is there a better way of approaching this? Do I make something up? I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
We assume that players are by default going around being perceptive at an ordinary level. If there's an interesting detail that might escape the players' notice, the DM will call for the relevant characters (maybe everyone, maybe the guy in front, maybe the characters with darkvision, etc...) to make a perception check. If everyone who was given the opportunity fails it, there's no opportunity for anyone else to metagame and make the check. The downside is, if everyone rolls a low number and the DM says "ok, you don't notice anything interesting" then the players know that they have missed something. Sometimes there are immediate consequences (the orcs jump out from behind the bushes and surprise you, you fall into the pit trap you failed to notice, etc...) but other times not, and you're left wondering if you missed something important. A partial antidote to that is to sometimes tell them they found something, but it's not the thing that you were offering the perception check for. If they were close, maybe they get a hint as to what they failed to perceive. Otherwise they "perceive" something totally different, and unimportant. You can also sometimes offer a perception check to find some small detail that isn't plot relevant, like "You notice some initials carved in that tree over there." This has the advantage of not "giving away" that something is significant, just because the players rolled a high perception check and found something. (If you're going to play this way, you should warn your players that not everything they perceive is going to be important to the plot, or they may waste too much time on red herrings.) To answer the question of what to do when players want to examine something, and roll high, but there's nothing to find, you can occasionally also insert irrelevant details. You notice that the second drawer of the dresser sticks when you try to open it. You notice scuff marks on the floor near the window. And so on. 
So yes, I would sometimes "make stuff up". I think it's fine to also say, "nothing appears exceptional or unusual in this room" a fair amount of the time.
I would say, allow a character an opportunity to roll a perception check, and treat it as a "gut feeling" type of thing about a room. If they want to go into more depth in a room, treat it as a Search attempt (X amount of time at a DC of Y) and do the whole DM trick of secretly rolling for random encounters or dual perception checks (do your characters notice the guards patrolling, and do the guards notice the characters searching where they shouldn't be?). However, I would distinguish between "You don't see anything obvious..." and "There is nothing here..." -- by that I mean, EXPLICITLY make it obvious when additional checks are not going to find anything, so they don't waste time rolling. Maybe in the first case they failed the roll or there wasn't anything to find... but I would definitely try to make it obvious when additional checking is not going to be productive. Though, I agree, you should wean the players off suggesting checks and into describing actions/intentions instead...
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, find traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: what do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest". Is there a better way of approaching this? Do I make something up? I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
Have the players roll a bunch of perception rolls at the beginning, list them all, and cross them off as you go. They know they rolled them, but they have no idea whether they see nothing because of a bad roll or because there is truly nothing to see.
The situation should be handled exactly as if there were something interesting to find but they did not meet the DC for finding it. That is effectively what happened; in this case the DC is just infinite, as there is nothing to be found. Explicitly telling the players that there is nothing to be found should be avoided - if you get in the habit of telling them so, they will know when they have merely failed the roll.
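To illustrate the "infinite DC" framing, here is a minimal Python sketch (purely hypothetical, not from any rulebook): if an empty room is modelled as a check with DC = infinity, the same resolution logic produces the same answer for "nothing there" and "failed roll", so the players can never tell them apart.

```python
import math
import random

def perception_check(modifier, dc=math.inf):
    """Resolve a d20 perception check.

    An 'empty room' is just a check with dc = infinity: the roll can
    never succeed, so the GM's reply is identical to a failed roll.
    """
    roll = random.randint(1, 20) + modifier
    if roll >= dc:
        return "You notice something unusual."
    return "You don't seem to notice anything."
```

Whether `dc` is 15 (and the roll came up short) or infinity (nothing to find), the player hears exactly the same sentence.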
60,558
I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves. My question is about a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, find traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: what do I do with all the "useless" rolls? Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest". Is there a better way of approaching this? Do I make something up? I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea.
2015/05/04
[ "https://rpg.stackexchange.com/questions/60558", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/22662/" ]
We assume that players are by default going around being perceptive at an ordinary level. If there's an interesting detail that might escape the players' notice, the DM will call for the relevant characters (maybe everyone, maybe the guy in front, maybe the characters with darkvision, etc...) to make a perception check. If everyone who was given the opportunity fails it, there's no opportunity for anyone else to metagame and make the check. The downside is, if everyone rolls a low number and the DM says "ok, you don't notice anything interesting" then the players know that they have missed something. Sometimes there are immediate consequences (the orcs jump out from behind the bushes and surprise you, you fall into the pit trap you failed to notice, etc...) but other times not, and you're left wondering if you missed something important. A partial antidote to that is to sometimes tell them they found something, but it's not the thing that you were offering the perception check for. If they were close, maybe they get a hint as to what they failed to perceive. Otherwise they "perceive" something totally different, and unimportant. You can also sometimes offer a perception check to find some small detail that isn't plot relevant, like "You notice some initials carved in that tree over there." This has the advantage of not "giving away" that something is significant, just because the players rolled a high perception check and found something. (If you're going to play this way, you should warn your players that not everything they perceive is going to be important to the plot, or they may waste too much time on red herrings.) To answer the question of what to do when players want to examine something, and roll high, but there's nothing to find, you can occasionally also insert irrelevant details. You notice that the second drawer of the dresser sticks when you try to open it. You notice scuff marks on the floor near the window. And so on. 
So yes, I would sometimes "make stuff up". I think it's fine to also say, "nothing appears exceptional or unusual in this room" a fair amount of the time.
Have the players roll a bunch of perception rolls at the beginning, list them all, and cross them off as you go. They know they rolled them, but they have no idea whether they see nothing because of a bad roll or because there is truly nothing to see.
11,705,339
I have a UITableView inside another view. I want to make the height of the UITableView constant. I have a variable number of rows, but I want the height of the UITableView to stay constant so that it does not hide the views below it when the number of rows increases. How do I do this? I tried setting the autoresizingMask but that did not help. Thanks in advance.
2012/07/28
[ "https://Stackoverflow.com/questions/11705339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1415788/" ]
[EPyLEpSY](https://stackoverflow.com/users/1039901/epylepsy) is correct. ``` CGRect tbFrame = [myTableView frame]; tbFrame.size.height = 100; [myTableView setFrame:tbFrame]; ```
You can use the method: ``` - (id)initWithFrame:(CGRect)frame style:(UITableViewStyle)style ``` and initialize the frame with the width and height you want. The style can be one of the following options: UITableViewStylePlain or UITableViewStyleGrouped. I recommend reviewing the [UITableView class reference](http://developer.apple.com/library/ios/#documentation/uikit/reference/UITableView_Class/Reference/Reference.html) for more information.
62,281,484
First, I am finding an average of some data and then formatting that average to two decimal places. In the end, I want to use ROLLUP to generate a total row for all columns. The problem is: I want the rollup to sum the data as they appear; however, when adding up the averages it does not add them as formatted, but rather uses their full actual values, and I don't want that. For example, if the average is 25.66666667 and the formatted number is shown as 25.67, and 10.5192 is shown as 10.52, I want the rollup to add 25.67 + 10.52, NOT 25.66666667 + 10.5192. Any idea how, using Oracle SQL? Or any alternatives to rollups that would give me the required result? Note that I need to generate the total summary row within my SQL query.
2020/06/09
[ "https://Stackoverflow.com/questions/62281484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13563566/" ]
your column 'time' is of dtype `timedelta` as the error tells you; you could use the `total_seconds()` method to convert to seconds and divide by 60 to get the minutes. If you want a full-featured datetime column, combine 'date' and 'time'. Then you can use `.dt.minute`. **Ex:** ``` import pandas as pd df = pd.DataFrame({'time': pd.to_timedelta(['00:30:45','00:30:45','00:21:06','00:21:06','00:21:06']), 'date': pd.to_datetime(['2020-02-28','2020-02-28','2020-03-09','2020-03-09','2020-03-09'])}) # to get the "total minutes": df['minutes'] = df['time'].dt.total_seconds()/60 df['minutes'] # 0 30.75 # 1 30.75 # 2 21.10 # 3 21.10 # 4 21.10 # Name: minutes, dtype: float64 ``` [[pd.Timedelta docs]](https://pandas.pydata.org/pandas-docs/stable/user_guide/timedeltas.html) ``` # to get a column of dtype datetime: df['DateTime'] = df['date'] + df['time'] # now you can do: df['DateTime'].dt.minute # 0 30 # 1 30 # 2 21 # 3 21 # 4 21 # Name: DateTime, dtype: int64 ```
If you have not converted to a datetime dataframe, do that first; then you can create a new column like this ``` df['minute'] = df['date'].dt.minute ``` or with `map` like this ``` df['minute'] = df['date'].map(lambda x: x.minute) ```
62,281,484
First, I am finding an average of some data and then formatting that average to two decimal places. In the end, I want to use ROLLUP to generate a total row for all columns. The problem is: I want the rollup to sum the data as they appear; however, when adding up the averages it does not add them as formatted, but rather uses their full actual values, and I don't want that. For example, if the average is 25.66666667 and the formatted number is shown as 25.67, and 10.5192 is shown as 10.52, I want the rollup to add 25.67 + 10.52, NOT 25.66666667 + 10.5192. Any idea how, using Oracle SQL? Or any alternatives to rollups that would give me the required result? Note that I need to generate the total summary row within my SQL query.
2020/06/09
[ "https://Stackoverflow.com/questions/62281484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13563566/" ]
your column 'time' is of dtype `timedelta` as the error tells you; you could use the `total_seconds()` method to convert to seconds and divide by 60 to get the minutes. If you want a full-featured datetime column, combine 'date' and 'time'. Then you can use `.dt.minute`. **Ex:** ``` import pandas as pd df = pd.DataFrame({'time': pd.to_timedelta(['00:30:45','00:30:45','00:21:06','00:21:06','00:21:06']), 'date': pd.to_datetime(['2020-02-28','2020-02-28','2020-03-09','2020-03-09','2020-03-09'])}) # to get the "total minutes": df['minutes'] = df['time'].dt.total_seconds()/60 df['minutes'] # 0 30.75 # 1 30.75 # 2 21.10 # 3 21.10 # 4 21.10 # Name: minutes, dtype: float64 ``` [[pd.Timedelta docs]](https://pandas.pydata.org/pandas-docs/stable/user_guide/timedeltas.html) ``` # to get a column of dtype datetime: df['DateTime'] = df['date'] + df['time'] # now you can do: df['DateTime'].dt.minute # 0 30 # 1 30 # 2 21 # 3 21 # 4 21 # Name: DateTime, dtype: int64 ```
@Fobersteiner's answer is very good, but just for completeness, I would like to add that you could also divide your column of dtype `timedelta` by a fixed `timedelta`. For instance: ```py from datetime import timedelta import pandas as pd df = pd.DataFrame({'time': pd.to_timedelta(['00:30:45','00:30:45','00:21:06','00:21:06','00:21:06']), 'date': pd.to_datetime(['2020-02-28','2020-02-28','2020-03-09','2020-03-09','2020-03-09'])}) # to get the "total minutes": df['minutes'] = df['time'] / timedelta(minutes=1) # <-- df['minutes'] Out[9]: 0 30.75 1 30.75 2 21.10 3 21.10 4 21.10 Name: minutes, dtype: float64 ``` Though personally, I prefer @Fobersteiner's method.
58,324,162
I'm experimenting with WSASockets and I'm very new to it. I tried to send the input and the output of a process (cmd.exe in this case, to act like a remote shell) through a socket using handles, but whenever I try to use : ``` si.dwFlags = (STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW); si.hStdInput = si.hStdOutput = si.hStdError = (HANDLE)sock; ``` the program exit, without prompting the result to the other end of the socket : `nc -lvnp 8081`. At some point I also tried to switch to normal sockets but I heard that using handles like this would only work with WSA ones because they are non-overlapped. Here is my code so far : ``` #include <winsock2.h> #include <windows.h> #include <ws2tcpip.h> #include <stdio.h> #pragma comment(lib, "Ws2_32.lib") #define DEFAULT_BUFLEN 1024 void BindSock(char* rhost, int rport); int main(int argc, char** argv) { //FreeConsole(); // This is the way to make the cmd vanish char rhost[] = "xxxxxxxxxxxxxxxx"; // ip to connect to int rport = 8081; BindSock(rhost, rport); return 0; } void BindSock(char* rhost, int rport) { /*while (1) {*/ SECURITY_ATTRIBUTES saAttr; saAttr.nLength = sizeof(SECURITY_ATTRIBUTES); saAttr.bInheritHandle = TRUE; saAttr.lpSecurityDescriptor = NULL; // Initialize Winsock WSADATA wsaData; int iResult = WSAStartup(MAKEWORD(2, 2), &wsaData); if (iResult != NO_ERROR) { printf("WSAStartup function failed with error: %d\n", iResult); return; } printf("[*] Winsock init ... \n"); //init socket props SOCKET sock; sock = WSASocketW(AF_INET, SOCK_STREAM, IPPROTO_TCP, 0,0,0); if (sock == INVALID_SOCKET) { printf("socket function failed with error: %ld\n", WSAGetLastError()); WSACleanup(); return; } printf("[*] Sock init ... 
\n"); //Filling struc props struct sockaddr_in clientService; clientService.sin_family = AF_INET; InetPton(AF_INET, rhost, &(clientService.sin_addr)); clientService.sin_port = htons(rport); printf("[*] attempting to connect \n"); iResult = WSAConnect(sock, (SOCKADDR*)&clientService, sizeof(clientService),NULL,NULL,NULL,NULL); if (iResult == SOCKET_ERROR) { printf("[!] connect function failed with error: %ld\n", WSAGetLastError()); iResult = closesocket(sock); if (iResult == SOCKET_ERROR) printf("[!] closesocket function failed with error: %ld\n", WSAGetLastError()); WSACleanup(); return; } printf("[X] Sock Connected\n"); STARTUPINFO si; PROCESS_INFORMATION pi; ZeroMemory(&si, sizeof(si)); si.cb = sizeof(si); ZeroMemory(&pi, sizeof(pi)); si.dwFlags = (STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW); si.hStdInput = si.hStdOutput = si.hStdError = (HANDLE)sock; printf("[*] Created process props\n"); CreateProcessA(NULL, "\"cmd.exe\"", NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi); WaitForSingleObject(pi.hProcess, INFINITE); CloseHandle(pi.hProcess); CloseHandle(pi.hThread); //} } ```
2019/10/10
[ "https://Stackoverflow.com/questions/58324162", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10512647/" ]
There are two small issues with your code: 1. You want to combine two conditions with `&`, but you should wrap each of these conditions in parentheses to clearly separate them: `(x == ...) & (y == ...)` 2. The result of this check has the form of a Series (with only one observation in it). Python is not sure how to convert this Series of booleans into one boolean, because if the Series had multiple values it wouldn't know how to aggregate them (should the Series only result in a single True if all values are True, or is it enough if at least one of them is True, ...). Therefore you should clarify that by adding `series.all()` or `series.any()` to your check. ``` def rec_date(row): if row['flag'] == '2.1': if (df_workable[(df_workable['workable_day'] == int(row['wday_a1'])) & (df_workable['month'] == 1)]['day'] <= dt.datetime.today().day).all(): val = "this" else: val = "that" else: val = "Still missing" return val ``` **Output:** ``` day_a1 wday_a1 iwday_a1 flag rec_date 0 24.0 4.0 6.0 2.1 this 1 NaN NaN NaN NaN Still missing 3 31.0 22.0 1.0 2.2 Still missing 4 27.0 18.0 5.0 3.3.2.1.3 Still missing 26816 25.0 19.0 5.0 1 Still missing 26817 31.0 NaN NaN 3.2 Still missing ```
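Both points can be seen on a toy example (the Series here are invented for illustration, not taken from the question's data):

```python
import pandas as pd

s = pd.Series([1, 2, 3])
t = pd.Series([1, 0, 3])

# Point 1: & binds more tightly than ==, so without parentheses
# `s == 1 & t == 1` is parsed as the chained comparison `s == (1 & t) == 1`,
# which raises "truth value of a Series is ambiguous".
mask = (s == t) & (s > 2)   # parentheses keep each condition separate
print(mask.tolist())        # [False, False, True]

# Point 2: reducing the boolean Series to a single bool.
print(mask.any())           # True  -- at least one row matches
print(mask.all())           # False -- not every row matches
```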
So, let's break this statement down: `df_workable[df_workable['workable_day'] == 4 & df_workable['month'] == 1]['day']` 1. `df_workable`: the complete DataFrame 2. `df_workable[df_workable['workable_day'] == int(row['wday_a1']) & df_workable['month'] == 1]`: you are filtering the DataFrame based on specific values for `workable_day` and `month`. This returns a new DataFrame, with the filtered results of the **whole** DataFrame. 3. `df_workable[df_workable['workable_day'] == int(row['wday_a1']) & df_workable['month'] == 1]['day']`: this takes the DataFrame returned in step 2 and accesses its `['day']` column. This returns a [`pandas.Series`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html) object, which contains all values for the DataFrame's `day`column. Which means, when you do `df_workable[df_workable['workable_day'] == int(row['wday_a1']) & df_workable['month'] == 1]['day'] <= dt.datetime.today().day`, you are trying to compare a whole Series object (which contains multiple values corresponding to each row) to a single datetime value, **NOT** iterating through the rows. I don't really get the comparison you are trying to do, but it doesn't seem possible to be done following your current logic.
29,998,196
I just got an NFC reader (ACS ACR122U), but the SDK available is for Windows only. Where can I find an SDK for Mac OS X? I have already contacted the hardware provider's helpdesk, but so far I have not received any response from them.
2015/05/02
[ "https://Stackoverflow.com/questions/29998196", "https://Stackoverflow.com", "https://Stackoverflow.com/users/421066/" ]
libnfc works with the ACR122U, according to <http://nfc-tools.org/index.php?title=ACR122> . To install libnfc on OSX, see: <http://nfc-tools.org/index.php?title=Libnfc#Mac_OS_X>
To add privileges you run `vigr` and `vigr -s` on OSX. On linux you can do the same or use ``` usermod <user> -G <group> ``` Where group is `libnfc` or `nfc` or something like that.
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
Try this: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 30, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = NSTextAlignmentCenter; label.textColor = [UIColor whiteColor]; label.numberOfLines = 0; label.lineBreakMode = UILineBreakModeWordWrap; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ```
One minor change: on iOS 6 or later, ``` label.textAlignment = UITextAlignmentCenter; ``` is deprecated, so use ``` label.textAlignment = NSTextAlignmentLeft; ``` instead.
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
To show the UILabel as you displayed in your image, you need to set the following properties of [UILabel](http://developer.apple.com/library/ios/#documentation/uikit/reference/UILabel_Class/Reference/UILabel.html#jumpTo_9) and also increase the height of your label. ``` @property(nonatomic) NSInteger numberOfLines; @property(nonatomic) UILineBreakMode lineBreakMode; ``` It should look like this: ``` UILabel * label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 100)]; ................................. label.numberOfLines=0; label.lineBreakMode=UILineBreakModeCharacterWrap; ............................ ```
One minor change: on iOS 6 or later, ``` label.textAlignment = UITextAlignmentCenter; ``` is deprecated, so use ``` label.textAlignment = NSTextAlignmentLeft; ``` instead.
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
One minor change: on iOS 6 or later, ``` label.textAlignment = UITextAlignmentCenter; ``` is deprecated, so use ``` label.textAlignment = NSTextAlignmentLeft; ``` instead.
Here is how to create a UILabel programmatically. 1) Write this in the .h file of your project. ``` UILabel *label; ``` 2) Write this in the .m file of your project. ``` label=[[UILabel alloc]initWithFrame:CGRectMake(10, 70, 50, 50)];//Set frame of label in your viewcontroller. [label setBackgroundColor:[UIColor lightGrayColor]];//Set background color of label. [label setText:@"Label"];//Set text in label. [label setTextColor:[UIColor blackColor]];//Set text color in label. [label setTextAlignment:NSTextAlignmentCenter];//Set text alignment in label. [label setBaselineAdjustment:UIBaselineAdjustmentAlignBaselines];//Set line adjustment. [label setLineBreakMode:NSLineBreakByCharWrapping];//Set line-breaking mode. [label setNumberOfLines:1];//Set number of lines in label. [label.layer setCornerRadius:25.0];//Set corner radius of label to change the shape. [label.layer setBorderWidth:2.0f];//Set border width of label. [label setClipsToBounds:YES];//Set it to YES for corner radius to work. [label.layer setBorderColor:[UIColor blackColor].CGColor];//Set border color. [self.view addSubview:label];//Add it to the view of your choice. ```
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
If you know the number of lines, e.g. if the number of lines is 3, then you can write ``` label.numberOfLines=3; label.lineBreakMode=UILineBreakModeCharacterWrap; ``` and if you don't know the exact number of lines for the label, then you can write ``` label.numberOfLines=0; label.lineBreakMode=UILineBreakModeCharacterWrap; ```
Set the numberOfLines property of your label and also increase the width of your label so that it displays properly. This property controls the maximum number of lines used to fit the label's text into its bounding rectangle. The default value for this property is 1. To remove any maximum limit, and use as many lines as needed, set the value of this property to 0. If you constrain your text using this property, any text that does not fit within the maximum number of lines and inside the bounding rectangle of the label is truncated using the appropriate line break mode. [read more](http://developer.apple.com/library/IOs/#documentation/UIKit/Reference/UILabel_Class/Reference/UILabel.html)
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
Try this: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 30, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = NSTextAlignmentCenter; label.textColor = [UIColor whiteColor]; label.numberOfLines = 0; label.lineBreakMode = UILineBreakModeWordWrap; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ```
Set the **numberOfLines** property of UILabel. ``` label.lineBreakMode = UILineBreakModeWordWrap; label.numberOfLines = 3; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; ```
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
Try this: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 30, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = NSTextAlignmentCenter; label.textColor = [UIColor whiteColor]; label.numberOfLines = 0; label.lineBreakMode = UILineBreakModeWordWrap; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ```
Here is how to create a UILabel programmatically. 1) Write this in the .h file of your project. ``` UILabel *label; ``` 2) Write this in the .m file of your project. ``` label=[[UILabel alloc]initWithFrame:CGRectMake(10, 70, 50, 50)];//Set frame of label in your viewcontroller. [label setBackgroundColor:[UIColor lightGrayColor]];//Set background color of label. [label setText:@"Label"];//Set text in label. [label setTextColor:[UIColor blackColor]];//Set text color in label. [label setTextAlignment:NSTextAlignmentCenter];//Set text alignment in label. [label setBaselineAdjustment:UIBaselineAdjustmentAlignBaselines];//Set line adjustment. [label setLineBreakMode:NSLineBreakByCharWrapping];//Set line-breaking mode. [label setNumberOfLines:1];//Set number of lines in label. [label.layer setCornerRadius:25.0];//Set corner radius of label to change the shape. [label.layer setBorderWidth:2.0f];//Set border width of label. [label setClipsToBounds:YES];//Set it to YES for corner radius to work. [label.layer setBorderColor:[UIColor blackColor].CGColor];//Set border color. [self.view addSubview:label];//Add it to the view of your choice. ```
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
To show the UILabel as you displayed in your image, you need to set the following properties of [UILabel](http://developer.apple.com/library/ios/#documentation/uikit/reference/UILabel_Class/Reference/UILabel.html#jumpTo_9) and also increase the height of your label. ``` @property(nonatomic) NSInteger numberOfLines; @property(nonatomic) UILineBreakMode lineBreakMode; ``` It should look like this: ``` UILabel * label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 100)]; ................................. label.numberOfLines=0; label.lineBreakMode=UILineBreakModeCharacterWrap; ............................ ```
If you know the number of lines, e.g. if the number of lines is 3, then you can write ``` label.numberOfLines=3; label.lineBreakMode=UILineBreakModeCharacterWrap; ``` and if you don't know the exact number of lines for the label, then you can write ``` label.numberOfLines=0; label.lineBreakMode=UILineBreakModeCharacterWrap; ```
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
Try this: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 30, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = NSTextAlignmentCenter; label.textColor = [UIColor whiteColor]; label.numberOfLines = 0; label.lineBreakMode = UILineBreakModeWordWrap; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ```
``` UILabel *mycoollabel = [[UILabel alloc] initWithFrame:CGRectMake(10, 70, 50, 50)]; mycoollabel.text = @"I am cool"; // for multiple lines, if the text length is long, use the next line mycoollabel.numberOfLines = 0; [self.view addSubview:mycoollabel]; ```
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
Set the numberOfLines property of your label and also increase the width of your label so that it displays properly. This property controls the maximum number of lines used to fit the label's text into its bounding rectangle. The default value for this property is 1. To remove any maximum limit, and use as many lines as needed, set the value of this property to 0. If you constrain your text using this property, any text that does not fit within the maximum number of lines and inside the bounding rectangle of the label is truncated using the appropriate line break mode. [read more](http://developer.apple.com/library/IOs/#documentation/UIKit/Reference/UILabel_Class/Reference/UILabel.html)
``` UILabel *mycoollabel = [[UILabel alloc] initWithFrame:CGRectMake(10, 70, 50, 50)]; mycoollabel.text = @"I am cool"; // for multiple lines, if the text length is long, use the next line mycoollabel.numberOfLines = 0; [self.view addSubview:mycoollabel]; ```
7,428,554
I did the following in code: ``` UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(40, 70, 300, 50)]; label.backgroundColor = [UIColor clearColor]; label.textAlignment = UITextAlignmentCenter; // UITextAlignmentCenter, UITextAlignmentLeft label.textColor = [UIColor whiteColor]; label.text = @"Telechargez et consultez les catalogues et les tarifs de la gamme Audi au format PDF"; [self.view addSubview:label]; ``` It looks like [this](http://i56.tinypic.com/2w2oig9.png) but I want it to look like [this](http://i51.tinypic.com/fxux50.png). How can I change the label's properties?
2011/09/15
[ "https://Stackoverflow.com/questions/7428554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778076/" ]
Set the numberOfLines property of your label and also increase the width of your label so that it displays properly. This property controls the maximum number of lines used to fit the label's text into its bounding rectangle. The default value for this property is 1. To remove any maximum limit, and use as many lines as needed, set the value of this property to 0. If you constrain your text using this property, any text that does not fit within the maximum number of lines and inside the bounding rectangle of the label is truncated using the appropriate line break mode. [read more](http://developer.apple.com/library/IOs/#documentation/UIKit/Reference/UILabel_Class/Reference/UILabel.html)
Here is how to create a UILabel programmatically. 1) Write this in the .h file of your project. ``` UILabel *label; ``` 2) Write this in the .m file of your project. ``` label=[[UILabel alloc]initWithFrame:CGRectMake(10, 70, 50, 50)];//Set frame of label in your viewcontroller. [label setBackgroundColor:[UIColor lightGrayColor]];//Set background color of label. [label setText:@"Label"];//Set text in label. [label setTextColor:[UIColor blackColor]];//Set text color in label. [label setTextAlignment:NSTextAlignmentCenter];//Set text alignment in label. [label setBaselineAdjustment:UIBaselineAdjustmentAlignBaselines];//Set line adjustment. [label setLineBreakMode:NSLineBreakByCharWrapping];//Set line-breaking mode. [label setNumberOfLines:1];//Set number of lines in label. [label.layer setCornerRadius:25.0];//Set corner radius of label to change the shape. [label.layer setBorderWidth:2.0f];//Set border width of label. [label setClipsToBounds:YES];//Set it to YES for corner radius to work. [label.layer setBorderColor:[UIColor blackColor].CGColor];//Set border color. [self.view addSubview:label];//Add it to the view of your choice. ```
26,831,439
Is it possible to have a LinearLayout with layout_height="match_parent" and a layout_width equal to half the height? I would then put those layouts in a larger HorizontalScrollView and let the layouts keep the desired aspect ratio. Is this possible without creating my own layout that would handle it?
2014/11/09
[ "https://Stackoverflow.com/questions/26831439", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1109920/" ]
You need a custom ViewGroup for this. Google has one called [ProportionalLayout](https://android.googlesource.com/platform/packages/apps/ContactsCommon/+/master/src/com/android/contacts/common/widget/ProportionalLayout.java) -- it's not part of the Android SDK, so you have to copy the source code into your app in order to use it in your XML layouts. You also need to add this to `res/values/attrs.xml`: ``` <declare-styleable name="ProportionalLayout"> <attr name="direction" format="string"/> <attr name="ratio" format="float"/> </declare-styleable> ```
You can do that programmatically by getting the layout height and then setting the width to half that size. ``` LinearLayout layout = (LinearLayout) findViewById(R.id.numberPadLayout); int height = layout.getHeight(); LayoutParams params = layout.getLayoutParams(); params.height = height; params.width = height / 2; layout.setLayoutParams(params); // apply the changed params ```
26,831,439
Is it possible to have a LinearLayout with layout_height="match_parent" and a layout_width equal to half the height? I would then put those layouts in a larger HorizontalScrollView and let the layouts keep the desired aspect ratio. Is this possible without creating my own layout that would handle it?
2014/11/09
[ "https://Stackoverflow.com/questions/26831439", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1109920/" ]
You need a custom ViewGroup for this. Google has one called [ProportionalLayout](https://android.googlesource.com/platform/packages/apps/ContactsCommon/+/master/src/com/android/contacts/common/widget/ProportionalLayout.java) -- it's not part of the Android SDK, so you have to copy the source code into your app in order to use it in your XML layouts. You also need to add this to `res/values/attrs.xml`: ``` <declare-styleable name="ProportionalLayout"> <attr name="direction" format="string"/> <attr name="ratio" format="float"/> </declare-styleable> ```
I faced this question a month ago. I found only one way to solve it: In the layout file make width = match_parent, and height = 0dp or whatever! In the onCreate method of your activity, add an OnGlobalLayoutListener. This method will be called when the heights and widths of all widgets have been measured, so you can get your width via mLinearLayout.getWidth(); ``` void onCreate(..) { ... mLinearLayout = findViewById(R.id.your_layout); mLinearLayout.getViewTreeObserver().addOnGlobalLayoutListener(myOnGlobalLayoutListener); } OnGlobalLayoutListener myOnGlobalLayoutListener = new OnGlobalLayoutListener(){ @Override public void onGlobalLayout() { if (!mFlag) { mFlag = true; mLinearLayout.setLayoutParams(new LinearLayout.LayoutParams(mLinearLayout.getWidth(), mLinearLayout.getWidth() / 2)); } } }; ``` Sorry for my English! Hope it will help you!
422,052
The [AMD Catalyst™ 13.12 Proprietary Linux x86 Display Driver](http://support.amd.com/en-us/kb-articles/Pages/amdcatalyst13-12linreleasenotes.aspx) can be used with Ubuntu `12.04.2` and `13.04`, and the [14.1 Beta](http://support.amd.com/en-us/kb-articles/Pages/Latest-LINUX-Beta-Driver.aspx) version can be used with Ubuntu `12.04.3` and `13.10`. Since the beta version is known to have a lot of issues, I need to use the stable driver. Unfortunately, it is not working with `13.10` as the documentation says it should. So I need to download and install `12.04.2` and disable future updates to `12.04.3`, for example. How can I do this when the version of Ubuntu that can be downloaded from the [official website](http://www.ubuntu.com/download/desktop) is `12.04.4`, and how can I prevent updates to later builds?
2014/02/17
[ "https://askubuntu.com/questions/422052", "https://askubuntu.com", "https://askubuntu.com/users/95857/" ]
You can download old Ubuntu versions from <http://old-releases.ubuntu.com/releases/12.04.2/> Now choose the version you want (12.04.2), make sure whether you want the 32- or 64-bit image, then click on it to download. As for the updates, you can run the Update Manager from the Dash, go to Settings, and next to "Check for updates" choose "Never" (see the image below) ![enter image description here](https://i.stack.imgur.com/DSLMA.png)
In such a case I would pin down the dependencies but let the updates continue. In this manner security issues and high-impact bugs in other packages will still be dealt with. In the case of this driver this means: Xserver <=1.14, Kernel <=3.11, Glib <=2.3 The way to do this is described [here](https://serverfault.com/questions/435132/how-to-version-lock-packages-in-ubuntu). One issue with pinning is that you cannot *(that I know of)* pin down to a version which is not available yet *(do not upgrade above 3.5 while current is 3.3)*. So directly pinning these dependencies is not possible. A way to deal with this is to pin down the current versions and, if an update of one of these packages occurs, check whether it still meets the dependencies. If so, update the pinning; else leave it pinned. To pin down to 12.04.2 on these dependencies you install 12.04.2 using the [link](http://old-releases.ubuntu.com/releases/12.04.2/) provided by maythux and make a file **/etc/apt/preferences** with the following lines: ``` Package: xserver-common Pin: version 2:1.11* Pin-Priority: 550 Package: linux-image-generic* Pin: version 3.2.* Pin-Priority: 550 Package: libglib2.0-0 Pin: version 2.3* Pin-Priority: 550 ``` But the funny part is that, looking at the dependencies, I see no reason why the driver should not work with 12.04.4. Looking at all the distros they claim to support, it is obvious they made a list of the distros on the market in the first half of 2013. So probably it is just a list of tested versions. **If I were you, I would give it a go and install 12.04.4** To be sure it won't break with future updates, put the following lines in the apt preferences file: ``` Package: xserver-common Pin: version 2:1.11* Pin-Priority: 550 Package: linux-image-generic* Pin: version 3.11.* Pin-Priority: 550 Package: libglib2.0-0 Pin: version 2.3* Pin-Priority: 550 ```
72,315
I am rather new to Unity; I only started using it after upgrading to 11.10 several days ago, and one thing really worries me. I have started to miss incoming messages in Empathy and Skype because sometimes I'm not there when they arrive. And when I'm back, I don't use the launcher because I have a maximized window, so I don't see that there are windows requesting my attention. Is there a way to make the launcher remind me of such windows? Or maybe make them stick out halfway, like they do when an event happens, and keep them like that until I click on them?
2011/10/26
[ "https://askubuntu.com/questions/72315", "https://askubuntu.com", "https://askubuntu.com/users/202/" ]
You can use the [Recent Notifications Indicator](https://launchpad.net/recent-notifications) for catching up on missed notifications. It will display a postbox icon in your notification area. To install, use the following commands: ``` sudo apt-add-repository ppa:jconti/recent-notifications sudo apt-get update sudo apt-get install indicator-notifications ``` To start the indicator you have to restart Unity. The simplest way to do this is just to *log out and log back in*. Once you have any incoming notifications, the icon turns from "white" to green and you can use its pull-down menu to review the messages.
This seems to be an Empathy user-interface bug, as Pidgin does not exhibit the same behaviour. Also try following this guide, [Pidgin Skype](http://www.omgubuntu.co.uk/2011/04/run-skype-as-a-daemon-and-manage-it-from-empathy-in-ubuntu-11-04/), to integrate Skype messages into the messaging menu so you don't miss those either.
70,606,414
**I want to get a single image from the bitmap array in my RecyclerView and show it in my detail activity.**

[recycler view adapter class]

```
public void onBindViewHolder(@NonNull viewholder holder, @SuppressLint("RecyclerView") int position) {
    model model = list.get(position);
    byte[] image = list.get(position).getImage();
    Bitmap bitmap = BitmapFactory.decodeByteArray(image, 0, image.length);
    holder.imageView.setImageBitmap(bitmap);
    holder.t1.setText(list.get(position).getBrand());
    holder.t2.setText(list.get(position).getModel());
    holder.t3.setText(list.get(position).getYear());
    holder.t4.setText(list.get(position).getPrice());

    holder.itemView.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View view) {
            Intent intent = new Intent(context, cardetail.class);
            intent.putExtra("im", bitmap);
            intent.putExtra("branddetail", String.valueOf(list.get(position).getBrand()));
            intent.putExtra("modeldetail", String.valueOf(list.get(position).getModel()));
            intent.putExtra("yeardetail", String.valueOf(list.get(position).getYear()));
            intent.putExtra("pricedetail", String.valueOf(list.get(position).getPrice()));
            intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
            context.startActivity(intent);
        }
    });
}
```

[detail activity]

```
ImageView imgview;
TextView t1, t2, t3, t4;
Bitmap bitmap;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_cardetail);

    imgview = findViewById(R.id.imgdetail);
    t1 = findViewById(R.id.cname);
    t2 = findViewById(R.id.cmodel);
    t3 = findViewById(R.id.cyear);
    t4 = findViewById(R.id.cprice);

    Bitmap bitmap = getIntent().getParcelableExtra("im");
    imgview.setImageBitmap(bitmap);
    t1.setText(getIntent().getStringExtra("branddetail"));
    t2.setText(getIntent().getStringExtra("modeldetail"));
    t3.setText(getIntent().getStringExtra("yeardetail"));
    t4.setText(getIntent().getStringExtra("pricedetail"));
}
```
2022/01/06
[ "https://Stackoverflow.com/questions/70606414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16262113/" ]
You can pass the table name as the second parameter of the 'classe' relation, like this: ``` public function classe() { return $this->belongsToMany(Classes::class, 'class_student'); } ```
In the relations (in both models) you must provide the table name `class_student` as the second parameter, because you don't follow the naming convention: your model name is `Classes`.
57,139,486
Having trouble adding a gradient color to my fileInput progress bar. Right now, I am able to change the color of the progress bar from the regular blue to something else using the code provided here: [Colour of progress bar in fileInput -- Shiny](https://stackoverflow.com/questions/44401763/colour-of-progress-bar-in-fileinput-shiny)

```
ui <- fluidPage(
  tags$head(tags$style(".progress-bar{background-color:#3c763d;}")),
  fileInput(inputId = "fileInp", label = "Input file:", multiple = FALSE,
            accept = c(
              "text/csv",
              "text/comma-separated-values,text/plain",
              ".csv"))
)

server <- function(input, output){
}

shinyApp(ui = ui, server = server)

## also tried replacing background-color with code from
## https://www.w3schools.com/css/css3_gradients.asp but no luck:
## background-color: linear-gradient(to right, red, yellow);
```

However, what I want is a gradient just like this: <https://imgur.com/XdFBUIt>
2019/07/22
[ "https://Stackoverflow.com/questions/57139486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9996679/" ]
To set a gradient in CSS, the property is `background-image`, not `background-color`. You also have to set the `background-size` to `auto`, otherwise it is set to `40px 40px` and the progress bar is striped. Here is the CSS: ``` tags$head( tags$style( ".progress-bar { background-image: linear-gradient(to right, red , yellow) !important; background-size: auto !important; }") ) ```
Try creating a colour ramp from a predefined color palette: <https://www.rdocumentation.org/packages/dichromat/versions/1.1/topics/colorRampPalette>
60,613,031
I want to load the previous and next pages by default when the PageView is initialized. But in the default implementation, the next page only loads when the view is scrolled to it. Is there any way to do that? In Android we can simply achieve it by using `offscreenPageLimit`, but in Flutter I couldn't find any equivalent. PS: I am showing images from the network in the PageView.
2020/03/10
[ "https://Stackoverflow.com/questions/60613031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4862911/" ]
There is an issue on GitHub about preloading tabs: <https://github.com/flutter/flutter/issues/31191> Please check their solutions.
Try this: ``` PageView( cacheExtents: your_offset_limit_here, /// etc ) ```
25,931,651
In mobile Safari (iOS 8.0, iPad Mini) it seems that the more inputs a web page has (I've tried different types), the slower typing gets (I've even managed to freeze Safari just by typing). Just to make sure it was not the keyboard, I tested with SwiftKey, but the same problem arose: with SwiftKey, the input was fast, but the text was inserted slowly into the text input. I've created a [jsFiddle](http://jsfiddle.net/9cmmg79u/1/embedded/result/) with some inputs and it really goes slow (as hell). In this fiddle, I added some selects with lots of "option" tags because I found out that this makes the situation even worse. I also tried adding the autocomplete attribute (set to false) but it doesn't seem to affect anything. This is more or less how a "problematic" page would look:

```
<input class="" type="text" autocorrect="off" />
<input class="" type="text" autocorrect="off" />
<input class="" type="text" autocorrect="off" />
<input class="" type="text" autocorrect="off" />
<input class="" type="text" autocorrect="off" />
<input class="" type="text" autocorrect="off" />
<input class="" type="text" autocorrect="off" />
<input class="" type="text" autocorrect="off" />
<input class="" type="text" autocorrect="off" />
<input class="" type="email" autocorrect="off" />
<select><!-- lots of "option" tags --></select>
```

And now the weird thing: **this doesn't happen on an iPhone 5S** (I didn't test it on any other iPad). Does anybody know why this happens? Or if it happens on any other device? Thanks in advance.
2014/09/19
[ "https://Stackoverflow.com/questions/25931651", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1346273/" ]
Same issue as this one: [Why does Safari Mobile have trouble handling many input fields on iOS 8](https://stackoverflow.com/questions/26149532/why-does-safari-mobile-have-trouble-handling-many-input-fields-on-ios-8) A workaround is to wrap each input element in a form element like this: ``` <form><input class="" type="text" autocorrect="off" /></form> <form><input class="" type="text" autocorrect="off" /></form> <form><input class="" type="text" autocorrect="off" /></form> <form><input class="" type="text" autocorrect="off" /></form> <form><input class="" type="text" autocorrect="off" /></form> <form><input class="" type="text" autocorrect="off" /></form> <form><input class="" type="text" autocorrect="off" /></form> <form><input class="" type="text" autocorrect="off" /></form> <form><input class="" type="text" autocorrect="off" /></form> <form><input class="" type="email" autocorrect="off" /></form> ```
In order for the community to have context, would you please share the entire source for one of the pages this issue is occurring on? If you're trying to debug on a physical iPad, I strongly recommend downloading [Xcode](https://developer.apple.com/xcode/downloads/) and opening the iPad simulator. From there you can view the console and a variety of other debugging tools. That should show you an error explaining the slowness and possibly suggest a solution to fix it.
1,568,881
The examples they use in my book go basically from basic arithmetic, like 1, 2 and 3, to calculus and calculating derivatives, which is really annoying because I can't build the fundamental skills required to do harder questions. Anyhow, I have to solve a non-factorable inequality. Normally I would use the quadratic formula, but it's not quadratic, nor cubic. The question is $x^4 + 2x^3 - 4x^2 - 6x \leq -3$. And I have to get an $x$ = something. I am actually really lost here because I can't even apply the factor theorem in this case, since no values work out. I think this is a problem with my fundamentals though; please tell me how to solve things like these when they cannot be factored. Or we can just have a one-on-one discussion... Any help is appreciated, thank you!
2015/12/10
[ "https://math.stackexchange.com/questions/1568881", "https://math.stackexchange.com", "https://math.stackexchange.com/users/272298/" ]
After manipulating a little bit and rearranging terms when the whole thing is $\leq 0$: $$x^4 + 2x^3 - 4x^2 - 6x + 3 \\ x^4+2x^3-6x-4x^2+3 \\ x^4 + 2x(x^2-3)-4x^2+3 \\ x^4 + 2x(x^2-3)-3x^2-x^2+3 \\ x^4 + 2x(x^2-3)-3x^2-(x^2-3) \\ x^4 -3x^2 + 2x(x^2-3)-(x^2-3) \\ x^4 -3x^2 + (x^2-3)(2x-1) \\ x^2(x^2-3) + (x^2-3)(2x-1) \\ (x^2-3)(x^2+2x-1).$$ So your problem is now $(x^2-3)(x^2+2x-1) \leq 0$. So it appears that $x=\pm \sqrt{3}$ will bring about equality. If you use the quadratic formula, you will also find that $x=\pm \sqrt{2}-1$ are 2 more solutions. That means we've found all 4 roots(!). After testing sample values within each of the intervals delimited by the 4 roots, you can show that the total valid domain is $x \in [-1-\sqrt{2}, -\sqrt{3}] \cup [\sqrt{2}-1,\sqrt{3}]$.
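As a quick numerical sanity check of the factorisation and the final intervals (an illustrative Python sketch, not part of the algebra above), we can evaluate the quartic at the four claimed roots and test the sign of the polynomial inside and outside the two intervals:

```python
import math

# p(x) = x^4 + 2x^3 - 4x^2 - 6x + 3, i.e. the original inequality moved to one side
def p(x):
    return x**4 + 2*x**3 - 4*x**2 - 6*x + 3

# The four roots found above
roots = [-1 - math.sqrt(2), -math.sqrt(3), math.sqrt(2) - 1, math.sqrt(3)]

# Each claimed root should make p(x) (numerically) zero
for r in roots:
    assert abs(p(r)) < 1e-9

# p(x) <= 0 exactly on [-1-sqrt(2), -sqrt(3)] and [sqrt(2)-1, sqrt(3)]
assert p(-2.0) < 0 and p(1.0) < 0                  # sample points inside the two intervals
assert p(-3.0) > 0 and p(0.0) > 0 and p(2.0) > 0   # sample points outside them
```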
There is a [quartic formula](http://www.sosmath.com/algebra/factor/fac12/fac12.html), but I do not recommend using it, since it is incredibly convoluted. Since you are trying to solve an inequality, it is probably fine to use numerical approximations for the zeroes, which are about $.414, -1.732, 1.732,$ and $ -2.414$. I know from experience that $\sqrt{3} \approx 1.732$, and plugging in $\sqrt{3}$ and $-\sqrt{3}$ shows that those are in fact roots. Dividing your polynomial by $x^2 - 3$ gives the polynomial $x^2 + 2x - 1$. Using the quadratic formula, we get that the other two roots are $-1 + \sqrt{2}$ and $-1 - \sqrt{2}$. If you are not allowed to use numerical approximations to find your roots, I would recommend plugging in square roots of small numbers (especially divisors of the constant term) in addition to small integers.
42,643,410
Does anyone know how to select (check) all the check boxes in a column of Handsontable by checking the first check box of that column? I tried all the possible options to get it done but I'm not able to do it by any means. Here is the code:

```
function getCarData() {
  return [
    { available: true, available2: true, available3: true, comesInBlack: 'yes' },
    { available: false, available2: true, available3: true, comesInBlack: 'yes' },
    { available: true, available2: true, available3: true, comesInBlack: 'no' },
    { available: false, available2: true, available3: true, comesInBlack: 'yes' },
    { available: false, available2: true, available3: true, comesInBlack: 'no' }
  ];
}

var example1 = document.getElementById('example1'),
  hot1;

hot1 = new Handsontable(example1, {
  data: getCarData(),
  colHeaders: ['Available', 'Available2', 'Available3'],
  columns: [
    { data: 'available', type: 'checkbox' },
    { data: 'available2', type: 'checkbox' },
    { data: 'available3', type: 'checkbox' }
  ]
});
```
2017/03/07
[ "https://Stackoverflow.com/questions/42643410", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You are trying to open an AlertDialog from a BroadcastReceiver, which cannot provide the context the dialog needs. Instead, redirect the user to an Activity from the BroadcastReceiver and show the AlertDialog there, in onCreate.

Problem one: you have to pass a theme to the AlertDialog:

> AlertDialog.Builder builder = new AlertDialog.Builder(context, R.style.AppTheme);

**Secondly**, you can't show a dialog from a BroadcastReceiver, since it does not get an Activity context. As a developer, I hope you know how to redirect the user from a BroadcastReceiver to an Activity. This is the crash you must be getting:

> java.lang.RuntimeException: Unable to start receiver com.stackoverflow.Kommit: android.view.WindowManager$BadTokenException: Unable to add window -- token null is not for an application

So redirect the user to MainActivity instead of showing the AlertDialog, and show the dialog in onCreate of MainActivity, where you will have the Activity context.

**UPDATED**

Kommit.java (onReceive):

```
Intent i = new Intent(context, MainActivity.class);
i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);

Bundle b = new Bundle();
b.putInt("notify", 1);
i.putExtras(b); // Put your id into your next Intent
context.startActivity(i);
```

**In MainActivity**, onCreate():

```
Bundle b = getIntent().getExtras();
int value = 0; // or other default value
if (b != null)
    value = b.getInt("notify");

if (value != 0) {
    // show the AlertDialog
}
```

EDIT by P. Dee: Corrected your notify.
try to change the id for different notifications: ``` notificationManager.notify(0, notificationBuilder); ``` here change `0` to some dynamic number
42,643,410
Does anyone know how to select (check) all the check boxes in a column of Handsontable by checking the first check box of that column? I tried all the possible options to get it done but I'm not able to do it by any means. Here is the code:

```
function getCarData() {
  return [
    { available: true, available2: true, available3: true, comesInBlack: 'yes' },
    { available: false, available2: true, available3: true, comesInBlack: 'yes' },
    { available: true, available2: true, available3: true, comesInBlack: 'no' },
    { available: false, available2: true, available3: true, comesInBlack: 'yes' },
    { available: false, available2: true, available3: true, comesInBlack: 'no' }
  ];
}

var example1 = document.getElementById('example1'),
  hot1;

hot1 = new Handsontable(example1, {
  data: getCarData(),
  colHeaders: ['Available', 'Available2', 'Available3'],
  columns: [
    { data: 'available', type: 'checkbox' },
    { data: 'available2', type: 'checkbox' },
    { data: 'available3', type: 'checkbox' }
  ]
});
```
2017/03/07
[ "https://Stackoverflow.com/questions/42643410", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You are trying to open an AlertDialog from a BroadcastReceiver, which cannot provide the context the dialog needs. Instead, redirect the user to an Activity from the BroadcastReceiver and show the AlertDialog there, in onCreate.

Problem one: you have to pass a theme to the AlertDialog:

> AlertDialog.Builder builder = new AlertDialog.Builder(context, R.style.AppTheme);

**Secondly**, you can't show a dialog from a BroadcastReceiver, since it does not get an Activity context. As a developer, I hope you know how to redirect the user from a BroadcastReceiver to an Activity. This is the crash you must be getting:

> java.lang.RuntimeException: Unable to start receiver com.stackoverflow.Kommit: android.view.WindowManager$BadTokenException: Unable to add window -- token null is not for an application

So redirect the user to MainActivity instead of showing the AlertDialog, and show the dialog in onCreate of MainActivity, where you will have the Activity context.

**UPDATED**

Kommit.java (onReceive):

```
Intent i = new Intent(context, MainActivity.class);
i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);

Bundle b = new Bundle();
b.putInt("notify", 1);
i.putExtras(b); // Put your id into your next Intent
context.startActivity(i);
```

**In MainActivity**, onCreate():

```
Bundle b = getIntent().getExtras();
int value = 0; // or other default value
if (b != null)
    value = b.getInt("notify");

if (value != 0) {
    // show the AlertDialog
}
```

EDIT by P. Dee: Corrected your notify.
Normally a dialog needs an Activity context; alternatively, make the dialog a system alert dialog:

> alert.getWindow().setType(WindowManager.LayoutParams.TYPE\_SYSTEM\_ALERT);

This requires the `android.permission.SYSTEM_ALERT_WINDOW` permission to be declared in the manifest.
42,643,410
Does anyone know how to select (check) all the check boxes in a column of Handsontable by checking the first check box of that column? I tried all the possible options to get it done but I'm not able to do it by any means. Here is the code:

```
function getCarData() {
  return [
    { available: true, available2: true, available3: true, comesInBlack: 'yes' },
    { available: false, available2: true, available3: true, comesInBlack: 'yes' },
    { available: true, available2: true, available3: true, comesInBlack: 'no' },
    { available: false, available2: true, available3: true, comesInBlack: 'yes' },
    { available: false, available2: true, available3: true, comesInBlack: 'no' }
  ];
}

var example1 = document.getElementById('example1'),
  hot1;

hot1 = new Handsontable(example1, {
  data: getCarData(),
  colHeaders: ['Available', 'Available2', 'Available3'],
  columns: [
    { data: 'available', type: 'checkbox' },
    { data: 'available2', type: 'checkbox' },
    { data: 'available3', type: 'checkbox' }
  ]
});
```
2017/03/07
[ "https://Stackoverflow.com/questions/42643410", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You are trying to open an AlertDialog from a BroadcastReceiver, which cannot provide the context the dialog needs. Instead, redirect the user to an Activity from the BroadcastReceiver and show the AlertDialog there, in onCreate.

Problem one: you have to pass a theme to the AlertDialog:

> AlertDialog.Builder builder = new AlertDialog.Builder(context, R.style.AppTheme);

**Secondly**, you can't show a dialog from a BroadcastReceiver, since it does not get an Activity context. As a developer, I hope you know how to redirect the user from a BroadcastReceiver to an Activity. This is the crash you must be getting:

> java.lang.RuntimeException: Unable to start receiver com.stackoverflow.Kommit: android.view.WindowManager$BadTokenException: Unable to add window -- token null is not for an application

So redirect the user to MainActivity instead of showing the AlertDialog, and show the dialog in onCreate of MainActivity, where you will have the Activity context.

**UPDATED**

Kommit.java (onReceive):

```
Intent i = new Intent(context, MainActivity.class);
i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);

Bundle b = new Bundle();
b.putInt("notify", 1);
i.putExtras(b); // Put your id into your next Intent
context.startActivity(i);
```

**In MainActivity**, onCreate():

```
Bundle b = getIntent().getExtras();
int value = 0; // or other default value
if (b != null)
    value = b.getInt("notify");

if (value != 0) {
    // show the AlertDialog
}
```

EDIT by P. Dee: Corrected your notify.
In my opinion, `onReceive` can only survive for about 10 seconds. So if your random delay is longer than 10 seconds, the notification object created in `onReceive` is destroyed. My advice is to do this in a new thread in a `Service` instead of a `BroadcastReceiver`.