text (string, lengths 3–1.74M) | label (class label, 2 classes) | source (string, 3 values)
---|---|---
Apache Reverse Proxy Bypass Exploitation. | 1cybersec
| Reddit |
Position Relative vs Absolute?. <p>What is the difference between <code>position: relative</code> and <code>position: absolute</code> in CSS? And when should you use which?</p>
| 0non-cybersec
| Stackexchange |
How to check if SQL Server Tables are System Tables. <p>Using the stored procedure <code>sp_msforeachtable</code> it's possible to execute a script for all tables in a database.</p>
<p>However, there are system tables which I'd like to exclude from that. Instinctively, I would check the properties <code>IsSystemTable</code> or <code>IsMSShipped</code>. These don't work like I expect - I have for example a table called <code>__RefactorLog</code>:</p>
<p><img src="https://i.stack.imgur.com/Wkfjg.png" alt="System table"></p>
<p>But when I query if this is a system or MS Shipped table, SQL Server reports none of my tables are system tables:</p>
<pre><code>exec (N'EXEC Database..sp_msforeachtable "PRINT ''? = '' + CAST(ObjectProperty(Object_ID(''?''), ''IsSystemTable'') AS VARCHAR(MAX))"') AS LOGIN = 'MyETLUser'
-- Results of IsSystemTable:
[dbo].[__RefactorLog] = 0
[schema].[myUserTable] = 0
</code></pre>
<p>and</p>
<pre><code>exec (N'EXEC Database..sp_msforeachtable "PRINT ''? = '' + CAST(ObjectProperty(Object_ID(''?''), ''IsMSShipped'') AS VARCHAR(MAX))"') AS LOGIN = 'MyETLUser'
-- Results of IsMSShipped:
[dbo].[__RefactorLog] = 0
[schema].[myUserTable] = 0
</code></pre>
<p>When I look into the properties of the table (inside SSMS), the table is marked as a system object. An object property like <code>IsSystemObject</code> doesn't exist though (AFAIK).</p>
<p>How do I check if a table is a system object, apart from the object property? How does SSMS check if a table is a system object?</p>
| 0non-cybersec
| Stackexchange |
[UPDATE] I[29F] just walked in on my husband[30M] making out with my sister[33]. Please help.. Thank you everyone for your kind words and PMs. Your words really helped me when I was in the lowest possible spot I have been in. A lot has happened since I woke up. First of all, I am no longer going to refer to Lisa as my sister because she is not my sister any longer.
I woke up this morning and felt like complete shit and didn't want to get up. I went and got a pregnancy test and thank fucking GOD I am not pregnant. It was bittersweet because we have been trying to get pregnant for a while now and I never thought I'd be so glad to see I wasn't pregnant. I am really upset over the way things have turned out and now I am having these weird feelings that I want to be pregnant after all. I don't know what's going on but it's just adding another difficult layer onto the shit going on right now.
A little while later I left the house to get groceries and when I opened my mailbox there was a letter in it from Lisa. After I got home I opened it and read it. I am not going to rewrite it because I can not even stand to look at the letter again. Basically it said that she apologizes for how things turned out and she explained to me that she was very vulnerable after losing her husband. That after spending so much time with my husband she started to fall for him and that she thought she wouldn't ever love anyone else again so when she realized she loved my husband she knew she couldn't let him go. No one else can fill the hole in her heart.. Lisa promised they never physically did anything before that kiss I caught them in and she went on to say she needs me in her life and that she hopes I can forgive her. I can't write anymore about this right now I might add in the rest later. I am a fucking mess.
Rick called me a little while ago. I didn't pick up the phone so he texted me and told me that he still loves me and that we can find a way to work this out. I don't know what the fuck that means since he just left me for Lisa. Now I'm really confused because now that I'm not pregnant, I want to be, and I want my marriage to not be over even though I hate him for what he's done to me. And why would he text me that? Is he changing his mind? I am so confused. I wish these past few days never happened so there would still be nothing wrong. I know I shouldn't forgive him if he wants another chance but 10 years of marriage... We were going to be parents... Fuck, I am so confused and hurt I can't even think straight.
TL;DR: Rick left me for Lisa. I'm not pregnant but I am having weird feelings about that. Lisa left me a letter in my mailbox and then Rick attempted to call me. He texted me something that confused me even more. Now I have no idea what is going on and I don't know what to do. Is he changing his mind? | 0non-cybersec
| Reddit |
Rotated table (90 degrees in whole page) in two-column article. <p>I tried to make a table rotated on its own page (following ferahfeza's post in <a href="https://tex.stackexchange.com/questions/170205/rotate-table-90-degrees-and-stretch-to-fill-whole-page">rotate table (90 degrees) and stretch to fill whole page</a>), but when I paste the same code into a two-column article, it doesn't work properly: the table is printed on top of the text on the same page instead of taking a new page.</p>
<p>How can I do that?</p>
<p>Thank you</p>
| 0non-cybersec
| Stackexchange |
I just found out I was dating a communist... I can’t believe I missed all the red flags | 0non-cybersec
| Reddit |
How the GoT theme was meant to be heard.... | 0non-cybersec
| Reddit |
Reference for subsemigroups of $\mathbb{N}^n$. <p>A well-known result about the natural numbers $\mathbb{N}$ says that for any finite subset $A \subset \mathbb{N}$ there exists $R \ge 0$ such that if $n$ is in the subgroup of $\mathbb{Z}$ generated by $A$ and if $n \ge R$ then $n$ is in the semigroup generated by $A$.</p>
<p>Are there any references to a higher dimensional version of this result? </p>
<p>The version I want goes like this. </p>
<ul>
<li>Take a finite subset $U$ of $\mathbb{N}^n$. Let $C_U$ be the smallest closed cone in $\mathbb{R}^n$ containing $U$, i.e. all non-negative real linear combinations of $U$. Let $G_U$ be the subgroup of $\mathbb{N}^n$ generated by $U$, i.e. all integer linear combinations. Let $S_U$ be the subsemigroup generated by $U$, i.e. all non-negative integer linear combinations. Then there exists $R>0$ such that for every $v \in G_U$, if the ball around $v$ of radius $R$ is contained in $C_U$ then $v \in S_U$.</li>
</ul>
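For intuition, here is the one-dimensional case ($n=1$) made concrete; the particular generating set is my own illustration, not from the question.

```latex
% Take A = {3,5}. Since gcd(3,5) = 1, the subgroup of Z generated by A is all of Z.
% The largest integer NOT in the semigroup generated by {3,5} is
% 3*5 - 3 - 5 = 7 (the Frobenius number of {3,5}), so R = 8 works:
8 = 3+5,\quad 9 = 3+3+3,\quad 10 = 5+5,\quad 11 = 3+3+5,\quad 12 = 3+3+3+3,\ \dots
```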
| 0non-cybersec
| Stackexchange |
[WP] One day it happened in an instant, everyone with a Facebook account was infected with a bioelectronic virus that turned them into a soldier of Zuckerberg. They called it the Facebookening. This is the legend of the man who rose up to save us: they call him Tom. From Myspace.. | 0non-cybersec
| Reddit |
Policeman caught taking selfie as man attempted suicide on Istanbul bridge. | 0non-cybersec
| Reddit |
I was tired of my 9 to 5 lifestyle so I stored all of my vacation days and took a month long trip to SE Asia.. | 0non-cybersec
| Reddit |
What is wrong with A Scanner Darkly?. I just watched A Scanner Darkly for the first time. I didn't have high expectations, because I knew that it generally wasn't well-received. I love sci-fi films and stories. I love unique or astonishing visuals in film. I love Philip K. Dick. I love *A Scanner Darkly* (the novel). I love Keanu Reeves, Robert Downey Jr, Winona Ryder, and Woody Harrelson. I love stories about drugs and drug addiction. This film seemed like the perfect movie to me. So why didn't I like it very much? I'm really asking. Did anybody else have this experience?
I think I would have liked to see more about the company at which Keanu Reeves' character worked, like more explanation about their operations. I think I would have also liked to know more about the origins and history of Substance D. Instead it kind of just focused on the day-to-day lives of these addicts, which got kind of boring at times (such as the bike scene). | 0non-cybersec
| Reddit |
In the film Unbreakable (2000) Samuel L Jackson’s character can be seen admiring some comic book displays; looking closely, all three are Marvel titles. From left to right they are: Thor, Shield and Daredevil. | 0non-cybersec
| Reddit |
Becoming Human: NOVA (so well-made and interesting!). | 0non-cybersec
| Reddit |
I'm bavk. | 0non-cybersec
| Reddit |
Kerberos realm understanding. <p>Could someone summarise why realms are necessary in Kerberos and the advantages of the concept.</p>
<p>I'm struggling to distill everything I know, or am beginning to understand, into some well-defined points for revision. My research just uncovers articles with so much depth that I can barely make sense of them. I understand what realms are. I am aware that using them means that data is distributed, which is advantageous in the event of a system failure, and that it is easier to manage many small realms instead of one huge one.</p>
<p>Thanks in advance</p>
| 0non-cybersec
| Stackexchange |
The Google calculator says that $\left(\frac00\right)^0=1$. Is this true?. <p><a href="https://www.google.com/#q=(0/0)%5E0" rel="nofollow">According to Google</a>, $\left(\frac00\right)^0=1$.
Is this true? Why or why not?</p>
| 0non-cybersec
| Stackexchange |
General Solution of $\sin(mx)+\sin(nx)=0$. <p>Problem:</p>
<blockquote>
<p>Find the general solution of $$\sin(mx)+\sin(nx)=0$$</p>
</blockquote>
<p>My attempt:
$$\sin(mx)=-\sin(nx)$$
$$\cos\left(\dfrac{\pi}{2}-mx\right)=\cos\left(\dfrac{\pi}{2}+nx\right)$$
Using $\cos\theta=\cos\alpha\Rightarrow \theta=2n\pi\pm \alpha$:
$$\text{CASE } 1:\ \theta=2n\pi+\alpha$$
$$\dfrac{\pi}{2}-mx=2p\pi+\left(\dfrac{\pi}{2}+nx\right)$$
$$\Rightarrow x=\dfrac{-2p\pi}{m+n}$$
$$\text{CASE } 2:\ \theta=2n\pi-\alpha$$
$$\dfrac{\pi}{2}-mx=2q\pi-\left(\dfrac{\pi}{2}+nx\right)$$
$$\Rightarrow x=\dfrac{(2q-1)\pi}{n-m}$$
$$\Longrightarrow x=\dfrac{-2p\pi}{m+n} \text{ or } x=\dfrac{(2q-1)\pi}{n-m}$$</p>
<p>I checked my calculations again and again, but was unable to notice any flaw. However, the book I use categorically gives the solutions for $x$ as $x=\dfrac{2j\pi}{m+n}$ or $x=\dfrac{(2k+1)\pi}{m-n}$.
I would be truly grateful if somebody could please show me my errors. Many thanks in advance!</p>
<p>PS. Kindly note that $p,q,j,k\in \mathbb Z$</p>
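For what it's worth, the two sets of solutions appear to coincide after relabelling the integer parameters; this observation is mine, not part of the original post.

```latex
% Since p ranges over all of Z, substituting j = -p gives
\frac{-2p\pi}{m+n} = \frac{2j\pi}{m+n},
% and substituting k = -q (again a bijection of Z) gives
\frac{(2q-1)\pi}{n-m} = \frac{(1-2q)\pi}{m-n} = \frac{(2k+1)\pi}{m-n}.
% So both answers describe the same solution sets.
```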
| 0non-cybersec
| Stackexchange |
The Number of Standard Young Tableaux of a Frame. <p>Suppose $\mathbb{F}$ is a frame corresponding to a partition $m_1 \geqslant m_2\geqslant...\geqslant m_r>0$ and $f(\mathbb{F})$ represents the number of standard Young tableaux.
Then I want to prove that</p>
<blockquote>
<p>$(n+1)f(\mathbb{F})=\sum_{j=1}^{r+1}f(m_i,m_j+1)$</p>
</blockquote>
<p>where we assume $m_{r+1}=0$ and $ n= \sum_{i=1}^{r}m_i$ and use $f(\mathbb{F})=\sum_{j=1}^{r}f(m_i,m_j-1)$ </p>
<p>Notation:
$f(m_i,m_j-1)$ denotes the number of standard tableaux corresponding to the partition obtained from the mother partition by subtracting $1$ from the $j$th part, all other parts unchanged; we assume $f(m_1,...,m_k,0,0,0....)=f(m_1,...,m_k)$</p>
<p>My try:
I tried to do it inductively, going from right to left as follows.</p>
<blockquote>
<p>$f(m_1+1,m_2,...,m_r)= f(m_i) +
f(m_1+1,m_2-1,...,m_r)+...+f(m_1+1,m_2,...,m_r-1)$
$f(m_1,m_2+1,...,m_r)=.............$</p>
</blockquote>
<p>Adding these side by side, I got the proof. But then I suddenly realised that for the second equation, if $m_1=m_2$, the quantity on the left side is always $0$ while the quantity on the right side is strictly positive, since $f(m_i)>0$. So,</p>
<blockquote>
<p>Where am I wrong?</p>
</blockquote>
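As a small sanity check of the identity (my own example, not from the question): take the partition $(2,1)$, so $n=3$ and $f(\mathbb{F})=2$. Adding one box in each admissible place gives $(3,1)$, $(2,2)$ and $(2,1,1)$, whose tableau counts (by the hook length formula) are $3$, $2$ and $3$.

```latex
(n+1)\,f(\mathbb{F}) = 4 \cdot 2 = 8 = 3 + 2 + 3
= f(3,1) + f(2,2) + f(2,1,1).
```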
| 0non-cybersec
| Stackexchange |
I want his baby. The guy I fuck has an impregnation fantasy. I play along to get him aroused but secretly I imagine and wish I would accidently get pregnant. I'm at a place in life where a baby would be a horrible mistake and I don't know how I would even care for one but I still want one. By him. Up till recently I didn't want a kid ever so this is a big change for me. | 0non-cybersec
| Reddit |
Schrödinger operator with delta (zero range) interaction. <p>I am reading the book of Albeverio named Solvable models in quantum mechanics. In the first chapter it is explained how to realize the operator $"-\Delta+\delta_0"$ as a self-adjoint operator on $L^2(\mathbb{R}^3)$. I explain the main idea.
At the beginning one has to consider $H:=-\Delta_{|_{C^{\infty}_0}(\mathbb{R}^3\setminus \{0\})}$. One has to show that this operator admits some self-adjoint extensions on $L^2(\mathbb{R}^3)$, which will be, by definition, our operators $"-\Delta+\delta_0"$.
There are two technical things that I didn't understand.
The first one: the book says that one can show (it is not trivial) that the adjoint of $H$ is $H^*=-\Delta$ with domain
$$\mathcal{D}(H^*)=\{g\in H^{2,2}_{loc}(\mathbb{R}^3\setminus \{0\})\cap L^2(\mathbb{R^3})\,\,s.t.\,\, \Delta g\in L^2(\mathbb{R^3})\}.$$
I really do not understand, maybe it is a very stupid thing, why this space is not equal to $H^2(\mathbb{R}^3)$.</p>
<p>I explain my doubt: $g\in L^2(\mathbb{R}^3)$ and $\Delta g\in L^2(\mathbb{R}^3)$ allow me to consider their Fourier transforms, getting $(1+|\xi|^2)\hat{g}\in L^2(\mathbb{R}^3)$, which says to me that $g\in H^2(\mathbb{R}^3)$. Why am I wrong?</p>
<p>The second thing that I do not understand is the following.
The author says that a straightforward calculation shows that
$$\psi(k,x)=\frac{e^{ik|x|}}{|x|},\,\, x\neq 0,\,\, \mathfrak{Im}(k)>0,$$
is the only one solution of
$$H^* \psi(k)=k^2\psi(k),\,\, \psi(k)\in\mathcal{D}(H^*),\,\, k^2\in \mathbb{C}\setminus\mathbb{R},\,\, \mathfrak{Im}(k)>0.$$
I do not know how to proceed in proving this fact.</p>
<p>Can someone help me to understand these two facts?
I apologize for the technical, maybe naive, question.
Thanks.</p>
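Regarding the second fact, the "straightforward calculation" is presumably the following radial computation (my reconstruction, not from the book): for a radial function the 3D Laplacian reduces to $\Delta f = \frac{1}{r}\,\partial_r^2(rf)$ with $r=|x|$, so

```latex
% With r\psi = e^{ikr}:
\Delta\,\frac{e^{ik|x|}}{|x|}
= \frac{1}{r}\,\frac{d^2}{dr^2}\bigl(e^{ikr}\bigr)
= -k^2\,\frac{e^{ik|x|}}{|x|},
\qquad x \neq 0,
```

and the condition $\mathfrak{Im}(k)>0$ makes $e^{ik|x|}/|x|$ decay at infinity, hence square-integrable on $\mathbb{R}^3$.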
| 0non-cybersec
| Stackexchange |
House Watchdog Says Cyber-Security War Can’t Be Won With Men Only. | 1cybersec
| Reddit |
She's Still Here. I don't tell very many people this story because most just tell me "they were just dreams" or "you're thinking about it too much." I'm telling you right now, I have had my fair share of nightmares, and these are different. The story is a bit long, and for that I'm sorry, but I needed to get it out there.
It all started when I was in high school. I had this weird, really vivid dream where my sister and I were taking a tour of this huge mansion. Before we stepped through the door, the tour guide gave us a warning.
"Whatever you do, don't say her name."
She never said who, and she never mentioned the name or what would happen if we said it, but I was immediately filled with a sense of foreboding. My sister, just like in real life, didn't believe in these things. She laughed as we started the tour and asked if I was scared. I told her I was, that it was really creepy, and I asked her not to say the name. I begged her not to say the name.
I covered my ears as she yelled out the name, hoping that if I didn't hear it, it wouldn't be a problem. I never heard the name, and to this day I don't know what it was. The tour guide looked at us with pity and shook her head. I yelled at my sister, but she just kept laughing.
I was creeped out now, expecting something to happen. I calmed down after a while, thinking maybe it was just a story after all. Then I noticed the extra footsteps. I told myself I was imagining it, but then I noticed in the mirrors we passed a head that kept peeking around corners. She was following us.
Finally, in the dream we returned home to go to sleep. In those days my sister and I slept in the same room in a bunk bed. She had the top bunk, and I had the bottom. This is where the dream started to get even more strangely vivid. Normally in my dreams, the rooms are distorted in some way, but in this one everything was perfect, just like in real life. I sat in the dark and said a quick prayer, hoping that I had just imagined everything in the mansion. Then I noticed the figure in the middle of the room. It was a young girl, maybe ten years old. She had bobbed brown hair and was wearing a tattered, light colored dress with frills and patched with mud. She was staring at me, though in the dark it almost looked like she didn't have eyes.
My heart started pounding as she started walking toward me. I started chanting to myself, trying to wake up as she inched closer and closer.
"It's just a dream. It's just a dream. IT'S JUST A DREAM!"
I woke up. I looked around my room, still terrified. No matter what anyone says, I know I was awake. I made absolutely sure, just in case I was still asleep and the girl popped up again. Finally, satisfied that the nightmare was over, I lay back and drifted off into an uneasy sleep.
Only to open my eyes in my dream, back in my bedroom. I heard a sound above me and leaned out to look up at my sister's bunk. There, sitting on my sister's chest, was the girl. This time she really didn't have eyes, just black holes that seemed to go back forever into her head and she had an insane, toothless smile that was far too wide. She turned to me slowly with her head cocked unnaturally far to the right, and I will never, ever forget that creaky voice that escaped her lips.
"Nice try."
Needless to say, I found that whole dream very creepy. I told a few friends, we laughed about it, and I thought it was all over. But then I started noticing that in almost every dream I had, whether it was a nightmare or even just a silly or weird dream, it always seemed like someone was following me, watching me. If there was a mirror anywhere in the dream, there was always an extra figure lurking in it. Whenever I saw her or heard her footsteps, I would be seized by terror until she was gone, and then whatever the dream was would continue until she showed up again or I woke up. Then came the next vivid dream.
This time, I was walking around my house, when suddenly, my house slowly morphed into an entirely different house. I shrugged it off, as often happens in dreams, and began to explore. I walked down some stairs into what appeared to be the living room.
There she was in the middle of the room. She was smiling at me, but not the unnatural smile she'd had in the first dream. She looked normal this time, if a little pale. I thought about running, but she pointed to a little table she had set up next to a sliding glass door.
"We're going to have a tea party."
She didn't say it in a threatening way, but there was something about the way she smiled, or maybe it was the way she held her head at an odd angle, that made me feel like something wasn't right with her. I felt that if I didn't do what she wanted, she would snap and become the girl I had seen on my sister's chest that first night.
I sat down on the floor across from her at the little table, and she poured some tea for me and for herself. I didn't dare touch it, my arms pasted firmly to my side with fear, but she sipped at hers and began chatting cheerily at me. I don't remember much of what she said, my heart was pounding too loudly in my ears for me to hear clearly.
She babbled on for what seemed like forever before I noticed the other person. He was a shadow figure, lurking in the corner behind a TV set. He was very large and wide shouldered, wearing what looked like a hat and a coat. As the girl talked, he became clearer, more solid, but as he became more solid, her babbling became more and more frantic and crazed. Her smile was frozen in place, but her eyes became wider and more terrified as she spoke. Suddenly I became aware of what she was saying.
"...No need to be afraid. He won't hurt anyone. He's my friend. Won't hurt anyone. He's just wants to be my friend. He talks to me and plays with me. Never hurt anyone. He's my friend. Don't have to be afraid." And so on, getting faster and faster, as he kept getting bigger and bigger, almost touching the ceiling and blocking out most of the light.
She kept staring at me in terror as she spoke, until I finally woke up.
This time, I only told the story to friends that I knew would believe me. We talked about it, tried to figure out who the man was. Did he kill her? Was his presence linked with her insanity? Was he human or something else? She continued to haunt my every dream, always around the corner, always behind me in mirrors, always there.
Then came the third vivid dream. By this time, I was a sophomore in college, living in the dorms with a roommate, but in this dream I was back in a one-person dorm room that I had lived in while taking a summer course. I was doing my hair in the mirror when I started noticing things moving behind me. I would see books move, but they would be right where I left them when I turned around. The bed covers would crinkle up, but would be made up perfectly when I turned around. I realized that it was her again, though she never showed herself.
Suddenly, the covers lifted off the bed, a human shape underneath them with glowing eyes creeping towards me in the mirror. Finally, I had had it. I was scared and crying and I whirled around to face the girl and screamed with all my might.
"You always do this to me! You can't treat me like this! It's not fair!"
I was woken up by my roommate who said I had been whimpering in my sleep and mumbling.
It's been two years since then, and I haven't had another dream about the girl. I hope it stays that way, but every now and then in my dreams I feel someone watching me, or see a glimpse of a tattered dress whipping around a corner in a mirror, or hear extra footsteps when I walk, and I know she's still here. | 0non-cybersec
| Reddit |
How to detect dark photos in Android. <p>I have an Android app where the user takes a photo of himself with the front camera and then the photo is uploaded to my server. I notice that many photos come to my server too dark (sometimes it is almost impossible to clearly see the user's face).</p>
<p>I would like to filter out such photos and show a notification (e.g. "Photo is too dark. Take one more picture") to the user on the app side. How could I accomplish such a task in Android?</p>
<p><strong>EDIT:</strong></p>
<p>I have found out how to calculate the brightness of one single pixel (thanks to this answer: <a href="https://stackoverflow.com/a/16313099/2999943">https://stackoverflow.com/a/16313099/2999943</a>):</p>
<pre><code>private boolean isPixelColorBright(int color) {
    if (android.R.color.transparent == color)
        return true;

    boolean rtnValue = false;
    int[] rgb = {Color.red(color), Color.green(color), Color.blue(color)};
    // Weighted (perceived) brightness of the pixel, in the range 0-255.
    int brightness = (int) Math.sqrt(rgb[0] * rgb[0] * .299 + rgb[1]
            * rgb[1] * .587 + rgb[2] * rgb[2] * .114);
    if (brightness >= 200) { // light color
        rtnValue = true;
    }
    return rtnValue;
}
</code></pre>
<p>But still I don't have a clear idea how to determine the brightness "status" of the whole image. Any suggestions?</p>
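To extend the per-pixel check to a whole photo, one common approach is to average the per-pixel brightness (possibly over a subsample of pixels, for speed) and compare it to a threshold. Below is a minimal sketch in plain Java; the class name and threshold are hypothetical, and in an app the pixel array would come from Bitmap.getPixels rather than being hard-coded.

```java
public class BrightnessCheck {
    // Perceived brightness (0-255) of one ARGB pixel, using the same
    // weighted formula as the per-pixel check in the question.
    static int brightness(int color) {
        int r = (color >> 16) & 0xFF;
        int g = (color >> 8) & 0xFF;
        int b = color & 0xFF;
        return (int) Math.sqrt(r * r * .299 + g * g * .587 + b * b * .114);
    }

    // Average brightness over an array of pixels; a photo could be
    // flagged "too dark" when this falls below a chosen threshold.
    static double averageBrightness(int[] pixels) {
        long sum = 0;
        for (int p : pixels) {
            sum += brightness(p);
        }
        return (double) sum / pixels.length;
    }

    public static void main(String[] args) {
        int dark = 0xFF202020, light = 0xFFDDDDDD;
        // A mostly dark image averages well below a threshold like 60.
        System.out.println(averageBrightness(new int[]{dark, dark, dark}) < 60);
        System.out.println(averageBrightness(new int[]{light, light}) > 200);
    }
}
```

The threshold (60 here) would need tuning against real photos; alternatively, one could look at the fraction of pixels below a brightness cutoff, so that a few bright spots cannot mask an otherwise dark face.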
| 0non-cybersec
| Stackexchange |
My multi-purpose battlestation. | 0non-cybersec
| Reddit |
Existence of a convergent series with a "sub-series" of smaller radius of convergence.. <p>This is a past qualifying exam question:</p>
<blockquote>
<p>True or false? There is a sequence of complex numbers $\{a_n\}_{n=0}^\infty$ and strictly increasing sequence of integers $\{p_n\}$ with $p_n \ge n$, such that the radius of convergence of
$$\sum_{n=0}^\infty a_nz^n$$
is one, but
$$\sum_{n=0}^\infty a_n z^{p_n}$$
has radius of convergence less than one.</p>
</blockquote>
<p>I can see that if we took the "sub-series" to be $\sum a_{p_n} z^{p_n}$, then the opposite would be true: that the radius of convergence for the "sub-series" would be $\ge$ the radius of convergence for the regular series. But this is not the problem at hand.</p>
| 0non-cybersec
| Stackexchange |
lower and upper bound for $\sum_{k=1}^n \frac{(-1)^{\Omega(k)}}k$?. <p>Are there any known lower and upper bounds for
$$
\sum_{k=1}^n \frac{(-1)^{\Omega(k)}}k,
$$
where $\Omega(n)$ is the number of prime factors counting multiplicities of $n$?</p>
<p>Or at least is it known if it is always positive?</p>
| 0non-cybersec
| Stackexchange |
How to protect potentially destructive command line options?. <p>I'm curious if anyone can help me with the best way to protect potentially destructive command-line options for a Linux command-line application.</p>
<p>To give a very hypothetical scenario: imagine a command line program that sets the maximum thermal setting for a processor before emergency power off. Let's further pretend that there are two main options, one of which is --max-temperature (in Celsius), which can be set to any integer between 30 & 50. There is also an override flag --melt which would disable the processor from shutting down via software regardless of how hot the processor got, until the system electrically/mechanically failed.</p>
<p>Certainly an option like --melt is dangerous, and could cause physical destruction in the worst case. But again, let's pretend that this type of functionality is a requirement (albeit a strange one). The application has to run as root, but if there was a desire to help ensure the --melt option wasn't accidentally triggered by confused or inexperienced users, how would you do that?</p>
<p>Certainly a very common anti-pattern (IMO) is to hide the option, so that --help or the man page doesn't reveal its existence, but that is security through obscurity and could have the unintended consequence of a user triggering it, but not being able to find out what it means.</p>
<p>Another possibility is to change the flag to a command line argument that requires the user to pass --melt OVERRIDE, or some other token as a signifier that they REALLY mean to do this. </p>
<p>Are there other mechanisms to accomplish the same goal?</p>
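The "confirmation token" idea from the question can be sketched in a few lines; everything here (program name, method names, the OVERRIDE token) is hypothetical, shown in Java only for concreteness since a real CLI tool might do this in C or a shell wrapper.

```java
public class ThermalCtl {
    // The dangerous flag only takes effect when immediately followed by
    // an explicit confirmation token, e.g.:  thermalctl --melt OVERRIDE
    static boolean meltRequested(String[] args) {
        for (int i = 0; i < args.length; i++) {
            if (args[i].equals("--melt")) {
                // A bare "--melt" is rejected; the user must spell out intent.
                return i + 1 < args.length && args[i + 1].equals("OVERRIDE");
            }
        }
        return false;
    }

    public static void main(String[] args) {
        if (meltRequested(args)) {
            System.out.println("thermal shutdown DISABLED (operator override)");
        } else {
            System.out.println("normal thermal protection active");
        }
    }
}
```

Other mechanisms in the same spirit: an interactive prompt that makes the user type a specific string back (as some destructive admin tools do), or gating the flag behind an environment variable so it cannot be triggered by a stray flag alone.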
| 0non-cybersec
| Stackexchange |
Why are salmon fisheries in Canada collapsing? Scientists are silenced once again.. | 0non-cybersec
| Reddit |
Cannot enable migrations for Entity Framework in class library. <p>I just got on board with EF 5 and am using their code-first migrations tool but I seem to get an error when I try to enable migrations.</p>
<p>I type <code>Enable-Migrations</code> into the package manager console and then it says</p>
<blockquote>
<p>No classes deriving from DbContext found in the current project.<br>
Edit the generated Configuration class to specify the context to enable migrations for.<br>
Code First Migrations enabled for project MyApp.MvcUI.</p>
</blockquote>
<p>It then creates a Migrations folder and a Configuration class in my MvcUI project. Thing is, my DbContext lives in a class library project called MyApp.Domain. It should be doing all that in that project and should have no problem finding my DbContext.</p>
| 0non-cybersec
| Stackexchange |
[Image] Showing up. | 0non-cybersec
| Reddit |
Sick mashup of big hits and big plays. | 0non-cybersec
| Reddit |
Ctrl + Page UP / Page Down are reversed. <p>I use Ubuntu, KDE and xfce. For all the shortcuts for changing tabs, or anything else involving the keys <kbd>Ctrl</kbd> + <kbd>Page Up</kbd>, instead of going right, it goes left (down).</p>
<ol>
<li>How can I change that? </li>
<li>Is that the correct way? </li>
</ol>
<p>Am I the reverse here? :)</p>
| 0non-cybersec
| Stackexchange |
How to use spot instances with Amazon Elastic Beanstalk?. <p>I have an infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.</p>
<p>So I created a second autoscaling group from a launch configuration with spot instances.
The autoscaling group uses the same load balancer created by Beanstalk.</p>
<p>To bring up instances with the latest version of my app, I copied the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This works fine, but:</p>
<ol>
<li><p>how do I update the spot instances that have come up from the second autoscaling group when Beanstalk updates the instances it manages with a new version of the app?</p>
</li>
<li><p>is there another way, as easy and elegant, to use spot instances and enjoy the benefits of Beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk has supported spot instances since 2019; see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
| 0non-cybersec
| Stackexchange |
halloween makeup👻🎃🎃. | 0non-cybersec
| Reddit |
My friend just got this puppy and he is the sweetest thing. I mean just look at that face!. | 0non-cybersec
| Reddit |
Unix Shell/SSH config to allow TCP port forwarding without showing a command prompt. <p>I'm running a Debian Linux. I'd like to have a user account that is able to connect via SSH for TCP-forwarding <em>only</em>, without a command prompt.</p>
<p>e.g the following would work (from a remote computer):</p>
<pre><code>ssh -D1234 user@myhost
</code></pre>
<p>but no command prompt would appear.</p>
<p>Using a shell like /bin/false or /sbin/nologin is too restrictive as it doesn't even allow the user to log in. A shell that only allows the "exit" or Ctrl+D commands would do the job.</p>
<p>I know that something similar is possible to allow only SFTP, but I can't find the equivalent for TCP forwarding.</p>
<p>Thanks</p>
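One commonly suggested approach (an untested sketch; the username is a placeholder) is a Match block in /etc/ssh/sshd_config that allows forwarding but forces a blocking, prompt-less command. /bin/cat simply waits on stdin, so the user sees no shell and can only exit with Ctrl+D:

```
Match User tunneluser
    AllowTcpForwarding yes
    X11Forwarding no
    PermitTTY no
    ForceCommand /bin/cat
```

The client then connects exactly as in the question (ssh -D1234 tunneluser@myhost). Alternatively, clients that pass -N request no command at all, which also avoids a prompt.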
| 0non-cybersec
| Stackexchange |
Eventlog entry for allowed connection in Windows Firewall. <p>I was seeing a lot of entries in the eventlog:</p>
<pre><code>The Windows Filtering Platform has permitted a connection.
Application Information:
Process ID: 4
Application Name: System
Network Information:
Direction: Inbound
Source Address: 10.xxx.xxx.xxx
Source Port: 80
Destination Address: 10.xxx.xxx.xxx
Destination Port: 31773
Protocol: 6
Filter Information:
Filter Run-Time ID: 67903
Layer Name: Receive/Accept
Layer Run-Time ID: 44
</code></pre>
<p>We have a load balancer which checks every second to see if the application is still running (a health check).
The logs contain large numbers of these entries, which makes the Event Viewer slow and makes it difficult to find the more interesting logs.</p>
<p>How do I make sure these messages don't end up in the event logs?</p>
| 0non-cybersec
| Stackexchange |
The Cost of Information∗
Luciano Pomatto† Philipp Strack‡ Omer Tamuz§
February 6, 2019
Abstract
We develop an axiomatic theory of information acquisition that captures the idea
of constant marginal costs in information production: the cost of generating two
independent signals is the sum of their costs, and generating a signal with probability
half costs half its original cost. Together with a monotonicity and a continuity
condition, these axioms determine the cost of a signal up to a vector of parameters.
These parameters have a clear economic interpretation and determine the difficulty
of distinguishing states. We argue that this cost function is a versatile modeling tool
that leads to more realistic predictions than mutual information.
1 Introduction
“The choice of information structures must be subject to some limits,
otherwise, of course, each agent would simply observe the entire state of the
world. There are costs of information, and it is an important and incompletely
explored part of decision theory in general to formulate reasonable cost functions
for information structures.” – Arrow (1985).
Much of contemporary economic theory is built on the idea that information is scarce
and valuable. A proper understanding of information as an economic commodity requires
theories for its value, as well as for its production cost. While the literature on the value
of information (Bohnenblust, Shapley, and Sherman, 1949; Blackwell, 1951) is by now well
established, modeling the cost of producing information has remained an unsolved problem.
In this paper, we develop an axiomatic theory of costly information acquisition.
∗We thank Kim Border, Ben Brooks, Simone Cerreia-Vioglio, Tommaso Denti, Federico Echenique,
Drew Fudenberg, Ed Green, Adam Kapor, Massimo Marinacci, Jeffrey Mensch, Filip Matějka, Stephen
Morris, and Doron Ravid for their comments. All errors and omissions are our own.
†Caltech. Email: [email protected].
‡UC Berkeley. Email: [email protected].
§Caltech. Email: [email protected]. Omer Tamuz was supported by a grant from the Simons
Foundation (#419427).
arXiv:1812.04211v2 [econ.TH] 4 Feb 2019
We characterize all cost functions over signals (i.e., Blackwell experiments or information
structures) that satisfy three main axioms: First, signals that are more informative in
the sense of Blackwell (1951) are more costly. Second, the cost of generating independent
signals equals the sum of their individual costs. Third, the cost of generating a signal with
probability half equals half the cost of generating it with probability one.
As an example, the second axiom implies that the cost of collecting n independent
random samples (for example by surveying n customers) is linear in n. The third axiom
implies that the cost of an experiment that produces a sample with probability α is a
fraction α of the cost of acquiring the same sample with probability one.
Our three axioms admit a straightforward economic interpretation. The first one is a
simple form of monotonicity: more precise information is more costly. The second and
third axioms aim to capture the idea of constant marginal costs. In the study of traditional
commodities, a standard avenue for studying cost functions is to categorize them in terms of decreasing, increasing, or constant marginal costs. The case of linear cost is,
arguably, the conceptually simplest one.
With this motivation in mind, the second axiom states that the cost of generating a
signal is the same regardless of which additional independent signals a decision maker
decides to acquire. Consider, as an example, a company surveying customers by calling
them to learn about the demand for a new product. Our axiom implies that the cost of
calling an additional customer is constant, i.e. calling 100 customers is 10 times more costly
than calling 10. Whether this assumption is a reasonable approximation depends on the
application at hand: for instance, it depends on whether or not large fixed costs are a
crucial ingredient of the economic environment under consideration.
The third axiom posits constant marginal costs with respect to the probability that an
experiment is successful. To formalize this idea we study experiments that succeed with
probability α, and produce no information with probability 1− α. The axiom states that
for such experiments the cost is linear in α, so that the marginal cost of success is constant.
We propose the constant marginal cost assumption as a natural starting point for
thinking about the cost of information acquisition. It has the advantage that it admits
a clear economic interpretation, making it easy to judge for which applications it is
appropriate.
Representation. The main result of this paper is a characterization theorem for cost
functions over experiments. We are given a finite set Θ of states of nature. An experiment
µ produces a signal realization s with probability µi(s) in state i ∈ Θ. We show that
for any cost function C that satisfies the above postulates, together with a continuity
assumption, there exist non-negative coefficients βij , one for each ordered pair of states of
nature i and j, such that1

    C(µ) = ∑_{i,j∈Θ} βij ∑_{s∈S} µi(s) log( µi(s) / µj(s) ).    (1)
The coefficients βij can be interpreted as capturing the difficulty of discriminating between
state i and state j. To see this, note that the cost can be expressed as a linear combination
    C(µ) = ∑_{i,j∈Θ} βij DKL(µi‖µj),
where the Kullback-Leibler divergence

    DKL(µi‖µj) = ∑_{s∈S} µi(s) log( µi(s) / µj(s) )
is the expected log-likelihood ratio between state i and state j when the state equals i.
DKL(µi‖µj) is thus large if the experiment µ on average produces evidence that strongly
favors state i over j, conditional on the state being i. Hence, the greater βij the more
costly it is to reject the hypothesis that the state is j when it truly is i. Formally, βij is
the marginal cost of increasing the expected log-likelihood ratio of an experiment with
respect to states i and j, conditional on i being the true state. We refer to the cost (1) as
the log-likelihood ratio cost (or LLR cost).
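As an illustration (not from the paper), the LLR cost of a finite experiment can be computed directly from definition (1). A minimal Python sketch; the state names, signal probabilities, and coefficients below are all illustrative assumptions:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D_KL(p‖q) between two finite
    distributions, given as dicts mapping signal -> probability."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p)

def llr_cost(mu, beta):
    """LLR cost (1): mu maps each state to a distribution over signals;
    beta maps ordered state pairs (i, j) to nonnegative coefficients."""
    return sum(b * kl(mu[i], mu[j]) for (i, j), b in beta.items())

# Illustrative binary experiment: signal 1 has probability 0.8 in state H
# and 0.2 in state L; both coefficients are set to 1.
mu = {"H": {1: 0.8, 0: 0.2}, "L": {1: 0.2, 0: 0.8}}
beta = {("H", "L"): 1.0, ("L", "H"): 1.0}
cost = llr_cost(mu, beta)
```

In this symmetric example the cost reduces to 2 × DKL(B(0.8)‖B(0.2)) = 1.2 log 4.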
In many common information acquisition problems, states of the world are one dimen-
sional. This is the case when, for instance, the unknown state is a physical quantity to be
measured, or the future level of interest rates. In these examples, a signal can be seen as a
noisy measurement of the unknown underlying state i ∈ R. We provide a framework for
choosing the coefficients βij in these contexts. Our main hypotheses are that the difficulty
of distinguishing between two states i and j is a function of the distance between them,
and that the cost of performing a measurement with standard Gaussian noise does not
depend on the set of states Θ in the particular information acquisition problem; this is a
feature that is commonly assumed in models that exogenously restrict attention to normal
signals.
Under these assumptions (Axioms a and b) Proposition 3 shows that there exists a
constant κ such that, for every pair of states i, j ∈ Θ,
    βij = κ / (i − j)².
Thus, states that are closer are more difficult to distinguish. As we show in the paper, this
1Throughout the paper we assume that the set of states of nature Θ is finite. We do not assume a finite
set S of signal realizations and the generalization of (1) to infinitely many signal realizations is given in (3).
choice of parameters offers a simple and tractable framework for analyzing the implications
of the LLR cost.
The concept of a Blackwell experiment makes no direct reference to subjective proba-
bilities nor to Bayesian reasoning.2 Likewise, our axioms and characterization theorem do
not presuppose the existence of a prior over the states of nature. Nevertheless, given a
prior q over Θ, an experiment induces a distribution over posteriors p, making p a random
variable. Under this formulation, the LLR cost (1) of an experiment can be represented as
the expected change of the function
    F(p) = ∑_{i,j} βij (pi/qi) log( pi / pj )
from the prior q to the posterior p induced by the signal.3 That is, the cost of an experiment
equals
    E[F(p) − F(q)].
This alternative formulation makes it possible to apply techniques and insights derived
for posterior-separable cost functions (Caplin and Dean, 2013; Caplin, Dean, and Leahy,
2018).
Relation to Mutual Information Cost. Following Sims’ seminal work on rational
inattention, cost functions based on mutual information have been commonly applied
to model costly information processing (Sims, 2003, 2010). Mackowiak, Matějka, and
Wiederholt (2018) review the literature on rational inattention. Mutual information costs
are defined as the expected change
    E[H(q) − H(p)]

of the Shannon entropy H(p) = −∑_{i∈Θ} pi log pi between the decision maker’s prior belief
q and posterior p. Equivalently, in this formulation, the cost of an experiment is given by
the mutual information between the state of nature and the signal.
Compared to Sims’ work—and the literature in rational inattention—our work aims
at modeling a different kind of phenomenon. While Sims’ goal is to model the cost of
processing information, our goal is to model the cost of generating information. Due to this
difference in motivation, Sims’ axioms postulate that signals which are harder to encode
are more costly, while we assume that signals which are harder to generate are more costly.
As an illustrative example of this difference consider a newspaper. Rational inattention
2Blackwell experiments have been studied both within and outside the Bayesian framework. See, for
instance, Le Cam (1996) for a review of the literature on Blackwell experiments.
3By Bayes’ rule the posterior belief p associated with the signal realization s is given by pi = qiµi(s)/∑_j qjµj(s).
theory models the readers’ effort of processing the information contained in the newspaper.
In contrast, our goal is to model the cost that the newspaper incurs in producing this
information.
Given the different motivation, it is perhaps not surprising that the LLR cost leads to
predictions which are profoundly different from those induced by mutual information cost.
We illustrate the differences by four stylized examples in §5.
2 Model
A decision maker acquires information on an unknown state of nature belonging to a finite
set Θ. Elements of Θ will be denoted by i, j, k, etc. Following Blackwell (1951), we model
the information acquisition process by means of signals, or experiments. An experiment
µ = (S, (µi)i∈Θ) consists of a set S of signal realizations equipped with a sigma-algebra Σ,
and, for each state i ∈ Θ, a probability measure µi defined on (S,Σ). The set S represents
the possible outcomes of the experiment, and each measure µi describes the distribution of
outcomes when the true state is i.
We assume throughout that the measures (µi) are mutually absolutely continuous, so that each derivative (i.e., ratio between densities) dµi/dµj is finite almost everywhere. In the case of finitely many signal realizations these derivatives are simply the ratios between probabilities µi(s)/µj(s), as in (1). This assumption means that no signal can ever rule out any state, and in particular can never completely reveal the true state.
Given an experiment µ, we denote by
    ℓij(s) = log (dµi/dµj)(s)
the log-likelihood ratio between states i and j upon observing the realization s. We define
the vector
    L(s) = (ℓij(s))_{i,j}
of log-likelihood ratios among all pairs of states. The distribution of L depends on the true
state generating the data. Given an experiment µ, we denote by µ̄i the distribution of L
conditional on state i.4
We restrict our attention to signals where the induced log-likelihood ratios (ℓij) have finite moments. That is, experiments such that for every state i and every integer vector α ∈ N^Θ the expectation ∫_S |∏_{k≠i} ℓik^{αk}| dµi is finite. We denote by E the class of all such
experiments.5 The restriction to E is a technical condition that rules out experiments whose
log-likelihood ratios have very heavy tails, but, to the best of our knowledge, includes all
4The measure µ̄i is defined as µ̄i(A) = µi({s : L(s) ∈ A}) for every measurable A ⊆ RΘ×Θ.
5We refer to E as a class, rather than a set, since Blackwell experiments do not form a well-defined set.
In doing so we follow a standard convention in set theory (see, for instance, Jech, 2013, p. 5).
(not fully revealing) experiments commonly used in applications. In particular, we do not
restrict our attention to a parametric family of experiments such as normally distributed
signals.
The cost of producing information is described by an information cost function
C : E → R+
assigning to each experiment µ ∈ E its cost C(µ). In the next section we introduce and
characterize four basic properties for information cost functions.
2.1 Axioms
Our first axiom postulates that the cost of an experiment should depend only on its
informational content. For instance, it should not be sensitive to the way signal realizations
are labelled. In making this idea formal we follow Blackwell (1951, Section 4).
Let q ∈ P(Θ) be the uniform prior assigning equal probability to each element of Θ.6
Let µ and ν be two experiments, inducing the distributions over posteriors πµ and πν given
the uniform prior q. Then µ dominates ν in the Blackwell order if

    ∫_{P(Θ)} f(p) dπµ(p) ≥ ∫_{P(Θ)} f(p) dπν(p)

for every convex function f : P(Θ) → R.
As is well-known, dominance with respect to the Blackwell order is equivalent to the
requirement that in any decision problem, a Bayesian decision maker achieves a (weakly)
higher expected utility when basing her action on µ rather than ν. We say that two
experiments are Blackwell equivalent if they dominate each other. It is a standard result
that two experiments µ and ν are Blackwell equivalent if and only if for every state
i they induce the same distribution µ̄i = ν̄i of log-likelihood ratios (see, for example,
Lemma 1 in the Appendix).
As discussed in the introduction, it is natural to require the cost of information to be
increasing in the Blackwell order. For our main result, it is sufficient to require that any
two experiments that are Blackwell equivalent lead to the same cost. Nevertheless, it will
turn out that the cost function axiomatized in this paper will satisfy the stronger property
of Blackwell monotonicity (see Proposition 1).
Axiom 1. If µ and ν are Blackwell equivalent, then C(ν) = C(µ).
The lower envelope of a cost function assigns to each µ the minimum cost of producing
an experiment that is Blackwell equivalent to µ. If experiments are optimally chosen by a
6Throughout the paper, P(Θ) denotes the set of probability measures on Θ identified with their
representation in RΘ, so that for every q ∈ P(Θ), qi is the probability of the state i.
decision maker then we can, without loss of generality, identify a cost function with its
lower envelope. This results in a cost function for which Axiom 1 is automatically satisfied.
For the next axiom, we study the cost of performing multiple independent experiments.
Given µ = (S, (µi)) and ν = (T, (νi)) we define the signal
µ⊗ ν = (S × T, (µi × νi))
where µi × νi denotes the product of the two measures.7 Under the experiment µ ⊗ ν,
the realizations of both experiments µ and ν are observed, and the two observations are
independent conditional on the state. To illustrate, suppose µ and ν consist of drawing a
random sample from two possible populations. Then µ⊗ ν is the experiment where two
independent samples, one for each population, are collected.
Our second axiom states that the cost function is additive with respect to independent
experiments:
Axiom 2. The cost of performing two independent experiments is the sum of their costs:
C(µ⊗ ν) = C(µ) + C(ν) for all µ and ν.
An immediate implication of Axioms 1 and 2 is that a completely uninformative signal
has zero cost. This follows from the fact that an uninformative experiment µ is Blackwell
equivalent to the product experiment µ⊗ µ.
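Axiom 2 can be checked numerically for costs of the form (1), since the Kullback-Leibler divergence is additive over independent experiments. A small Python sketch; the two binary experiments below are illustrative assumptions, not taken from the paper:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between finite distributions (dicts)."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p)

def product(p, q):
    """Product measure: observe independent draws from p and q."""
    return {(s, t): p[s] * q[t] for s in p for t in q}

# Two illustrative experiments on states {H, L}.
mu = {"H": {1: 0.8, 0: 0.2}, "L": {1: 0.3, 0: 0.7}}
nu = {"H": {1: 0.6, 0: 0.4}, "L": {1: 0.1, 0: 0.9}}

# D_KL is additive over independent experiments, so any cost of the form
# sum_ij beta_ij * D_KL(mu_i‖mu_j) satisfies C(mu ⊗ nu) = C(mu) + C(nu).
joint = kl(product(mu["H"], nu["H"]), product(mu["L"], nu["L"]))
separate = kl(mu["H"], mu["L"]) + kl(nu["H"], nu["L"])
```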
In many settings an experiment can, with non-negligible probability, fail to produce new
evidence. The next axiom states that the cost of an experiment is linear in the probability
that the experiment will generate information. Given µ, we define a new experiment, which
we call a dilution of µ and denote by α · µ. In this new experiment, with probability α
the signal µ is produced, and with probability 1− α a completely uninformative signal is
observed. Formally, given µ = (S, (µi)), fix a new signal realization o /∈ S and α ∈ [0, 1].
We define
α · µ = (S ∪ {o}, (νi)),
where νi(E) = αµi(E) for every measurable E ⊆ S, and νi({o}) = 1− α. The next axiom
specifies the cost of such an experiment:
Axiom 3. The cost of a dilution α · µ is linear in the probability α:
C(α · µ) = αC(µ) for every µ and α ∈ [0, 1] .
Our final assumption is a continuity condition. We first introduce a (pseudo)-metric
over E . Recall that for every experiment µ, µ̄i denotes its distribution of log-likelihood
7When the set of signal realizations is finite, the measure µi × νi assigns to each realization (s, t) the
probability µi(s)νi(t).
ratios conditional on state i. We denote by dtv the total-variation distance.8 Given a vector
α ∈ N^Θ, let M^µ_i(α) = ∫_S |∏_{k≠i} ℓik^{αk}| dµi be the α-moment of the vector of log-likelihood ratios (ℓik)_{k≠i}. Given an upper bound N ≥ 1, we define the distance:

    dN(µ, ν) = max_{i∈Θ} dtv(µ̄i, ν̄i) + max_{i∈Θ} max_{α∈{0,...,N}^n} |M^µ_i(α) − M^ν_i(α)|.
According to the metric dN , two signals µ and ν are close if, for each state i, the induced
distributions of log-likelihood ratios are close in total-variation and, in addition, have
similar moments, for any moment α less than or equal to (N, . . . , N).
Axiom 4. For some N ≥ 1 the function C is uniformly continuous with respect to dN .
As is well known, convergence with respect to the total-variation distance is a demanding
requirement, as compared to other topologies such as the weak topology. So, continuity
with respect to dtv is a relatively weak assumption. Continuity with respect to the stronger
metric dN is, therefore, an even weaker assumption.9
2.2 Discussion
Additivity assumptions in the spirit of Axiom 2 have appeared in multiple parametric models
of information acquisition. A common assumption in Wald’s classic model of sequential
sampling and its variations (Wald, 1945; Arrow, Blackwell, and Girshick, 1949), is that
the cost of acquiring n independent samples from a population is linear in n.10 Likewise,
in models where information is acquired by means of normally distributed experiments,
a standard specification is that the cost of an experiment is inversely proportional to its
variance (see, e.g. Wilson, 1975; Van Nieuwerburgh and Veldkamp, 2010). This amounts to
an additivity assumption, since the product of two independent normal signals is Blackwell
equivalent to a normal signal whose precision (that is, the inverse of its variance) is equal
to the sum of the precisions of the two original signals.
Underlying these different models is the notion that the cost of an additional independent
experiment is constant. Axiom 2 captures this idea in a non-parametric context, where no
a priori restrictions are imposed over the domain of feasible experiments. As discussed in
the introduction, we focus on linear cost structures as we view those as a natural starting
point to reason about the cost of information, in the same way the assumption of constant
8That is, dtv(µ̄i, ν̄i) = sup |µ̄i(A)− ν̄i(A)|, where the supremum is over all measurable subsets of RΘ×Θ.
9We discuss this topology in detail in §A. Any information cost function that is continuous with respect
to the metric dN satisfies Axiom 1. For expositional clarity, we maintain the two axioms as separate
throughout the paper.
10A similar condition appears in the continuous-time formulation of the sequential sampling problem,
where the information structure consists of observing a signal with Brownian noise over a time period of
length t, under a cost that is linear in t (Dvoretzky, Kiefer, Wolfowitz, et al., 1953; Chan, Lizzeri, Suen,
and Yariv, 2017; Morris and Strack, 2018).
marginal cost is a benchmark for the analysis of traditional commodities. Whether this
assumption fits a particular application well is inevitably an empirical question.
Axiom 3 expresses the idea that the marginal cost of increasing the probability of
success of an experiment is constant. The axiom admits an additional interpretation. In
an extended framework where the decision maker is allowed to randomize her choice of
experiment, the property
C(α · µ) ≤ αC(µ) (2)
ensures that the cost of the diluted experiment α · µ is not greater than the expected cost
of performing µ with probability α and collecting no information with probability 1− α.
Hence, if (2) was violated, the experiment α · µ could be replicated at a strictly lower cost
through a simple randomization by the decision maker. Now assume Axiom 2 holds. Then,
the converse inequality
C(α · µ) ≥ αC(µ)
ensures that the cost C(µ) of an experiment is not greater than the expected cost (1/α)C(α · µ) of performing repeated independent copies of the diluted experiment α · µ until it
succeeds.11 Axiom 3 is thus automatically satisfied once one allows for dynamic and mixed
strategies of information acquisition.
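The identity C(α · µ) = αC(µ) can likewise be verified numerically for KL-based costs: the uninformative outcome o contributes zero to every divergence, and the informative outcomes are scaled by α. A sketch with illustrative numbers:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between finite distributions (dicts)."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p)

def dilute(p, alpha, null="o"):
    """Dilution alpha·mu: run the experiment with probability alpha,
    otherwise emit the state-independent null outcome."""
    d = {s: alpha * pr for s, pr in p.items()}
    d[null] = 1.0 - alpha
    return d

# Illustrative experiment on states {H, L}.
mu = {"H": {1: 0.8, 0: 0.2}, "L": {1: 0.3, 0: 0.7}}
alpha = 0.4
diluted_kl = kl(dilute(mu["H"], alpha), dilute(mu["L"], alpha))
full_kl = kl(mu["H"], mu["L"])
```

Since every divergence scales by α, so does any weighted sum of divergences, which is exactly Axiom 3 for the LLR cost.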
3 Representation
Theorem 1. An information cost function C satisfies Axioms 1-4 if and only if there
exists a collection (βij)i,j∈Θ in R+ such that for every experiment µ = (S, (µi)),
    C(µ) = ∑_{i,j∈Θ} βij ∫_S log (dµi/dµj)(s) dµi(s).    (3)
Moreover, the collection (βij)_{i≠j} is unique given C.
We refer to a cost function that satisfies Axioms 1-4 as a log-likelihood ratio (LLR) cost.
As shown by the theorem, this class of information cost functions is uniquely determined
up to the parameters (βij). The expression ∫_S log(dµi/dµj) dµi is the Kullback-Leibler
divergence DKL(µi‖µj) between the two distributions, a well understood and tractable
measure of informational content (Kullback and Leibler, 1951). This implies that (3) can
alternatively be formulated as
    C(µ) = ∑_{i,j∈Θ} βij DKL(µi‖µj).
11Implicit in this interpretation is the assumption, common in the literature on rational inattention, that
the decision maker’s cost of an experiment is expressed in the same unit as her payoffs.
A higher value of DKL(µi‖µj) describes an experiment which, conditional on state i,
produces stronger evidence in favor of state i compared to j, as represented by a higher
expected value of the log-likelihood ratio log(dµi/dµj). The coefficient βij thus measures
the marginal cost of increasing the expected log-likelihood ratio between states i and j,
conditional on i, while keeping all other expected log-likelihood ratios fixed.12
The specification of the parameters (βij) must of course depend on the particular
application under consideration. Consider, for instance, a doctor who must choose a
treatment for a patient displaying a set of symptoms, and who faces uncertainty regarding
their cause. In this example, the state of the world i represents the pathology affecting the
patient. In order to distinguish between two possible diseases i and j it is necessary to
collect samples and run tests, whose costs will depend on factors that are specific to the
two conditions, such as their similarity, or the prominence of their physical manifestations.
These differences in cost can then be reflected by the coefficients βij and βji. For example,
if i and j are two types of viral infections, and k is a bacterial infection, then βij > βik if
it is harder to tell apart the two viral infection than to tell apart a viral infection from
a bacterial one. In §6 we discuss environments where the coefficients might naturally be
asymmetric, in the sense that βij 6= βji.
In environments where no pair of states is a priori harder to distinguish than another,13
a natural choice is to set all the coefficients (βij) to be equal. Finally, in §4 we propose a
specific functional form in the more structured case where states represent a one-dimensional
quantity.
Closed-form solutions for the Kullback-Leibler divergence between standard distributions, such as the normal, exponential, or binomial, are readily available. This makes it
immediate to compute the cost C(µ) of common parametric families of experiments.
Normal Signals. Consider a normal experiment µm,σ according to which the signal s is
given by
s = mi + ε
where the mean mi ∈ R depends on the true state i, and ε is state independent and
normally distributed with standard deviation σ. Combining (3) with the well-known expression for the Kullback-Leibler divergence between normal distributions, we obtain
12As we formally show in Lemma 2 in the Appendix, this operation of increasing a single expected log-
likelihood ratio while keeping all other expectations fixed is well-defined: for every experiment µ and every
ε > 0, if DKL(µi‖µj) > 0 then there exists a new experiment ν such that DKL(νi‖νj) = DKL(µi‖µj) + ε,
and all other divergences are equal. Hence the difference in cost between ν and the experiment µ is given by
βij times the difference ε in the expected log-likelihood ratio. The result formally justifies the interpretation
of each coefficient βij as a marginal cost.
13An example is that of a country that faces uncertainty regarding which of its political rivals is responsible
for a cyber attack.
that the cost of such an experiment is given by
    C(µm,σ) = ∑_{i,j∈Θ} βij (mj − mi)² / (2σ²).    (4)
The cost is decreasing in the variance σ², as one may expect. Increasing βij increases the cost of a signal µm,σ by an amount proportional to the squared distance between the two states.
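A hedged numerical sketch of formula (4); the state names, means, and coefficients are illustrative assumptions:

```python
def normal_llr_cost(means, sigma, beta):
    """Cost (4): D_KL(N(m_i, s^2) ‖ N(m_j, s^2)) = (m_i - m_j)^2 / (2 s^2),
    weighted by the coefficients beta over ordered state pairs."""
    return sum(b * (means[i] - means[j]) ** 2 / (2.0 * sigma ** 2)
               for (i, j), b in beta.items())

# Illustrative two-state problem with unit coefficients.
means = {"H": 1.0, "L": 0.0}
beta = {("H", "L"): 1.0, ("L", "H"): 1.0}
cost_sigma1 = normal_llr_cost(means, 1.0, beta)  # precise measurement
cost_sigma2 = normal_llr_cost(means, 2.0, beta)  # noisier measurement
```

Halving the noise standard deviation quadruples the cost, reflecting that the cost is proportional to the signal's precision 1/σ².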
Binary Signals. Another canonical example is the binary-binary setting in which the set
of states is Θ = {H,L}, and the signal νp = (S, (νi)) is also binary: S = {0, 1}, νH = B(p)
and νL = B(1− p) for some p > 1/2, where B(p) is the Bernoulli distribution on {0, 1}
assigning probability p to 1. In this case
    C(νp) = (βHL + βLH) [ p log( p/(1 − p) ) + (1 − p) log( (1 − p)/p ) ].    (5)
Hence the cost is increasing in the coefficients (βij) and in p.
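The closed form (5) can be checked against the general definition (3); the success probability and (possibly asymmetric) coefficients below are illustrative:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between finite distributions (dicts)."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p)

p = 0.75
b_HL, b_LH = 1.0, 2.0           # illustrative, asymmetric coefficients

nu_H = {1: p, 0: 1.0 - p}       # Bernoulli(p) in state H
nu_L = {1: 1.0 - p, 0: p}       # Bernoulli(1 - p) in state L

general = b_HL * kl(nu_H, nu_L) + b_LH * kl(nu_L, nu_H)
closed = (b_HL + b_LH) * (p * math.log(p / (1.0 - p))
                          + (1.0 - p) * math.log((1.0 - p) / p))
```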
In the above examples, more informative experiments are more costly: for normal signals the cost is decreasing in σ, and for binary signals the cost is increasing in p. The next result establishes that any LLR cost function is monotone
with respect to the Blackwell order:
Proposition 1. Let µ and ν be experiments such that µ Blackwell dominates ν. Then
every LLR cost C satisfies C(µ) ≥ C(ν).
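Proposition 1 can be illustrated numerically: garbling the outcomes of an experiment through a stochastic channel yields a Blackwell-dominated experiment, and by the data-processing inequality every Kullback-Leibler divergence, hence the LLR cost, weakly decreases. The experiment and channel below are illustrative:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between finite distributions (dicts)."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p)

def garble(p, channel):
    """Post-process outcomes of p through a stochastic channel,
    channel[s][t] = P(report t | outcome s)."""
    out = {}
    for s, pr in p.items():
        for t, c in channel[s].items():
            out[t] = out.get(t, 0.0) + pr * c
    return out

mu = {"H": {1: 0.9, 0: 0.1}, "L": {1: 0.2, 0: 0.8}}
# A channel that flips each outcome with probability 0.25; the garbled
# experiment nu is Blackwell-dominated by mu.
channel = {1: {1: 0.75, 0: 0.25}, 0: {1: 0.25, 0: 0.75}}
nu = {i: garble(mu[i], channel) for i in mu}

# LLR costs with unit coefficients.
cost_mu = kl(mu["H"], mu["L"]) + kl(mu["L"], mu["H"])
cost_nu = kl(nu["H"], nu["L"]) + kl(nu["L"], nu["H"])
```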
Bayesian Representation. The framework we considered so far makes no reference to subjective beliefs over the states of nature. Nevertheless, an LLR cost function can be
easily embedded in a standard Bayesian framework. Consider, to illustrate, a decision
maker endowed with a prior q ∈ P(Θ). Each experiment µ then induces a distribution
over posteriors πµ. As shown by the next result, the cost of an experiment C(µ) can be
reformulated in terms of the distribution πµ.
Proposition 2. Let C admit the representation (3) and fix a prior q ∈ P(Θ) with full
support. For every experiment µ inducing a distribution over posteriors πµ,

    C(µ) = ∫_{P(Θ)} [F(p) − F(q)] dπµ(p),  where  F(p) = ∑_{i,j∈Θ} βij (pi/qi) log(pi/pj).    (6)
In this representation the cost of the experiment µ is expressed as the expected change
of the function F from the prior q to the realized posterior p. Each coefficient βij is
normalized by the prior probability of the state qi.
Representations of the form (6) have been studied in the literature under the name of
“posterior separable” (Caplin, Dean, and Leahy, 2018, Definition 5). For example, Sims’
mutual information cost has the same functional form, but where F (p) is replaced by the
Shannon entropy H(p) = −∑_i pi log pi. An important implication of Proposition 2 is that general techniques for posterior-separable cost functions, as developed by Caplin and Dean (2013), can be applied to the LLR cost function.
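A numerical sanity check of this posterior-based representation (the prior, coefficients, and experiment below are illustrative): the expected change of F from prior to Bayesian posterior matches the direct sum of weighted divergences.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between finite distributions (dicts)."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p)

# Illustrative experiment, coefficients, and full-support prior.
mu = {"H": {1: 0.8, 0: 0.2}, "L": {1: 0.3, 0: 0.7}}
beta = {("H", "L"): 1.5, ("L", "H"): 0.5}
q = {"H": 0.6, "L": 0.4}

def F(p):
    """The potential F(p) = sum_ij beta_ij (p_i/q_i) log(p_i/p_j)."""
    return sum(b * (p[i] / q[i]) * math.log(p[i] / p[j])
               for (i, j), b in beta.items())

direct = sum(b * kl(mu[i], mu[j]) for (i, j), b in beta.items())

# Expected change of F from prior to Bayesian posterior, signal by signal.
posterior_form = 0.0
for s in (0, 1):
    ps = sum(q[i] * mu[i][s] for i in q)          # marginal prob. of s
    post = {i: q[i] * mu[i][s] / ps for i in q}   # Bayes' rule
    posterior_form += ps * (F(post) - F(q))
```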
4 One-Dimensional Information Acquisition Problems
Up to now we have been intentionally silent on how to specify the coefficients (βij). Each
parameter βij captures how costly it is to distinguish between particular states, and thus will necessarily be context dependent.
A commonly encountered context is that of learning about a one-dimensional charac-
teristic, so that each state i is a real number.14 In macroeconomic applications, the state
may represent the future level of interest rates. In perceptual experiments in neuroscience
and economics, the state can correspond to the number of red/blue dots on a screen (see
§5.1 below). More generally, i might represent a physical quantity to be measured.
In this section we propose a natural choice of parameters (βij) for one-dimensional
information acquisition problems. Given a problem where each state i ∈ Θ ⊂ R is a real
number, we propose to set each coefficient βij equal to κ/(i − j)² for some constant κ ≥ 0.
So, each βij is inversely proportional to the squared distance between the corresponding
states i and j. Therefore, under this specification, two states that are closer to each other
are harder to distinguish.
The main result of this section shows that this choice of parameters captures two main
hypotheses: (a) the difficulty of producing a signal that distinguishes between states i and j is a function only of the distance |i − j| between the two states, and (b) the
cost of a noisy measurement of the state with standard normal error is the same across
information acquisition problems. Both assumptions express the idea that the cost of
making a measurement depends only on its precision, and not on the other details of the
model, such as the set of states Θ. For example, the cost of measuring a person’s height
should depend only on the precision of the measurement instrument, but not on what
modeling assumptions are made about the set of possible heights.
We denote by T the collection of finite subsets of R with at least two elements. Each
set Θ ∈ T represents the set of states of nature in a different, one-dimensional, information
acquisition problem. To simplify the language, we refer to each Θ as a problem. For each
Θ ∈ T we are given an LLR cost function CΘ with coefficients (β^Θ_ij). The next two axioms
formalize the two hypotheses described above by imposing restrictions, across problems,
14We opt, in this section, to deviate from notational convention and use the letters i, j to refer to real
numbers, in order to maintain consistency with the rest of the paper.
on the cost of information.
The first axiom states that β^Θ_ij, the marginal cost of increasing the expected LLR
between two states i, j ∈ Θ, is a function of the distance between the two, and is unaffected
by changing the values of the other states.
Axiom a. For all Θ,Ξ ∈ T such that |Θ| = |Ξ|, and for all i, j ∈ Θ and k, l ∈ Ξ,
if |i − j| = |k − l| then β^Θ_ij = β^Ξ_kl.
For each i ∈ R we denote by ζi a normal probability measure on the real line with
mean i and variance 1. Given a problem Θ, we denote by ζΘ the experiment (R, (ζi)i∈Θ).
Hence, ζΘ is the canonical experiment consisting of a noisy measurement of the state plus
standard normal error.15 The next axiom states that the cost of such a measurement does
not depend on the particular values that the state can take.
Axiom b. For all Θ,Ξ ∈ T , CΘ(ζΘ) = CΞ(ζΞ).
Axioms a and b lead to a simple parametrization for the coefficients of the LLR cost in
one-dimensional information acquisition problems:
Proposition 3. The collection CΘ,Θ ∈ T , satisfies Axioms a and b if and only if there
exists a constant κ > 0 such that for all i, j ∈ Θ and Θ ∈ T ,
    β^Θ_ij = ( κ / n(n − 1) ) · 1/(i − j)²,

where n is the cardinality of Θ.
Proposition 3 implies that for any Θ ∈ T, a normal signal with mean i and variance σ² has a cost proportional to its precision σ⁻²; this can be seen by applying (4), the expression
for the cost of normal signals. Thus, the functional form given in Proposition 3 generalizes
a specification often found in the literature, where the cost of a normal signal is assumed
to be proportional to its precision (Wilson, 1975; Van Nieuwerburgh and Veldkamp, 2010)
to arbitrary (non-normal) information structures.
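Both features can be verified numerically: with the coefficients of Proposition 3, the cost of a noisy Gaussian measurement is proportional to its precision σ⁻² and does not depend on the particular set of states, as required by Axiom b. The state sets and κ below are arbitrary illustrations:

```python
def prop3_beta(states, kappa):
    """Coefficients from Proposition 3: beta_ij = kappa / (n(n-1)(i-j)^2)."""
    n = len(states)
    return {(i, j): kappa / (n * (n - 1) * (i - j) ** 2)
            for i in states for j in states if i != j}

def normal_cost(states, sigma, beta):
    """Cost (4) of the measurement s = i + eps with eps ~ N(0, sigma^2)."""
    return sum(b * (i - j) ** 2 / (2.0 * sigma ** 2)
               for (i, j), b in beta.items())

kappa = 2.0
states_a = [1.0, 2.0, 4.0, 7.0]   # arbitrary one-dimensional problem
states_b = [0.0, 10.0]            # a different problem
cost_a = normal_cost(states_a, 1.0, prop3_beta(states_a, kappa))
cost_b = normal_cost(states_b, 1.0, prop3_beta(states_b, kappa))
cost_a_noisy = normal_cost(states_a, 2.0, prop3_beta(states_a, kappa))
```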
5 Examples
5.1 Information Acquisition in Decision Problems
We now study the log-likelihood ratio cost in the context of decision problems. We consider
a decision maker choosing an action a from a finite set A of actions. The payoff from a
15Expressed differently, if i ∈ Θ is the true state, then the outcome of the experiment ζΘ is distributed
as s = i + ε, where ε is normally distributed with mean zero and variance 1, independent of the state.
depends on the state of nature i ∈ Θ and is given by u(a, i). The agent is endowed with a
prior q over the set of states.
Before making her choice, the agent can acquire a signal µ at cost C(µ). As is well
known, if the cost function C is monotone with respect to the Blackwell order, then it is
without loss of generality to restrict attention to signals where the set of realizations S
equals the set of actions A, and to assume that upon observing a signal s = a the decision
maker will choose the action recommended by the signal. We can therefore identify an experiment µ with a vector of probability measures (µi) in P(A).
An optimal signal µ* = (µ*_i) solves

    µ* ∈ argmax_µ  ∑_{i∈Θ} qi ( ∑_{a∈A} µi(a) u(a, i) ) − C(µ).    (7)

Hence, action a is chosen in state i with probability µ*_i(a). The maximization problem (7)
is strictly concave, provided all coefficients (βij) are strictly positive (Proposition 7 in the
Appendix). Thus, it admits a unique solution.
First Order Conditions. Denote the support of µ by supp(µ): this is the set of
actions which are played with strictly positive probability under µ.16 The next result
characterizes the optimal choice probabilities under the LLR cost:
Proposition 4. Assume that βij 6= 0 for all i 6= j. Let µ = (µi)i∈Θ be a state-dependent
distribution over actions which solves the optimization problem (7). Then, for every state
i ∈ Θ and every pair of actions a1, a2 ∈ supp(µ) it holds that
qi [u(i, a1)− u(i, a2)] = c̃(i, a1)− c̃(i, a2) (8)
where

    c̃(i, a) = − Σ_{j≠i} [ βij log( µj(a)/µi(a) ) + βji · µj(a)/µi(a) ].
Condition (8) can be interpreted as follows. The expression qi [u(i, a1)− u(i, a2)]
measures the expected benefit of choosing action a1 instead of a2 in state i. Up to an
additive constant, c̃(i, a) is the informational cost of choosing action a marginally more often
in state i. This marginal cost is increasing in the probability µi(a), due to the convexity of
C. Hence the right-hand-side of (8) measures the change in information acquisition cost
necessary to choose action a1 marginally more often and action a2 marginally less often.
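This interpretation can be checked by finite differences (a sketch; the two-state, two-action instance, the coefficients β, and the distributions µ below are made up for illustration). The partial derivative of the LLR cost Σ_{i≠j} βij KL(µi ‖ µj) with respect to µi(a) equals c̃(i, a) plus the constant Σ_{j≠i} βij:

```python
import math

# made-up instance: states {0, 1}, actions {0, 1}
beta = {(0, 1): 1.3, (1, 0): 0.7}
mu = {0: [0.6, 0.4], 1: [0.2, 0.8]}            # mu[i][a]: prob of action a in state i

def llr_cost(mu):
    # C(mu) = sum over i != j of beta_ij * KL(mu_i || mu_j)
    return sum(b * sum(mu[i][a] * math.log(mu[i][a] / mu[j][a]) for a in range(2))
               for (i, j), b in beta.items())

def c_tilde(i, a, mu):
    # the marginal-cost term of Proposition 4 (defined up to an additive constant)
    return -sum(beta[i, j] * math.log(mu[j][a] / mu[i][a]) + beta[j, i] * mu[j][a] / mu[i][a]
                for j in mu if j != i)

h = 1e-6                                        # bump mu_0(action 0) by h
mu_plus = {0: [0.6 + h, 0.4], 1: [0.2, 0.8]}
numeric = (llr_cost(mu_plus) - llr_cost(mu)) / h
analytic = c_tilde(0, 0, mu) + sum(beta[0, j] for j in mu if j != 0)
print(abs(numeric - analytic))                  # small: the two derivatives agree
```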
An Application to Perception Tasks. Consider a perception task (see, e.g. Dean and
Neligh, 2017) where subjects observe 100 dots of different colors on a screen. Each dot is
16supp(µ) = {a ∈ A : µi(a) > 0 for some i ∈ Θ} .
either red or blue. A parameter r ∈ {1, . . . , 50} is fixed. Subjects are told the value of r and
that the number of blue dots i is drawn uniformly in Θ = {50− r, . . . , 49, 51, . . . , 50 + r}.
The state where the number of blue and red dots is equal to 50 is ruled out to simplify the
exposition.17
Subjects are asked to guess whether there are more blue or red dots, and get rewarded
if they guess correctly. So the set of actions is A = {R,B} and
u(a, i) = 1 if (a = B and i > 50) or (a = R and i < 50), and u(a, i) = 0 otherwise.
For a tuple of distributions over actions (µi)i∈Θ, in state i an agent guesses correctly with
probability
m(i) = µi(B) if i > 50, and m(i) = µi(R) if i < 50.
Intuitively, it should be harder to guess whether there are more blue or red dots when
the difference in the number of dots is small, i.e. when i is close to 50. Indeed, it is a
well established fact in the psychology18, neuroscience19, economics20 literatures that so
called psychometric functions—the relation between the strength of a stimulus offered to
a subject and the probability that the subject identifies this stimulus—are sigmoidal (or
S-shaped), so that the probability that a subject chooses B transitions smoothly from
values close to 0 to values close to 1 when the number of blue dots increases.
As Dean and Neligh (2017) note, under mutual information cost (and a uniform prior, as
in the experimental setup described above), the optimal signal µ∗ must induce a probability
of guessing correctly that is state-independent.21 As shown by Matějka and McKay (2015),
Caplin and Dean (2013), and Steiner, Stewart, and Matějka (2017), conditional on a
state i, the likelihood ratio µ∗i(B)/µ∗i(R) between the two actions must equal the ratio
e^{u(i,B)}/e^{u(i,R)}. Hence, the probability that a subject chooses correctly must be the same
for any two states that lead to the same utility function over actions, such as the state in
which there are 51 blue dots and the state in which there are 99 blue dots.
This unrealistic prediction is driven by the fact that under mutual information the states
17This means that the prior is qi = 1/(2r), for i ∈ Θ.
18See, e.g., Chapter 7 in Green and Swets (1966) or Chapter 4 in Gescheider (1997).
19E.g., Krajbich et al. (2010); Tavares et al. (2017).
20See, e.g., Mosteller and Nogee (1951).
21It is well known that under mutual information costs the physical features of the states (such as distance
or similarity) do not affect the cost of information acquisition. For instance, Mackowiak, Matějka, and
Wiederholt (2018) write “[..] entropy does not depend on a metric, i.e., the distance between states does
not matter. With entropy, it is as difficult to distinguish the temperature of 10°C from 20°C, as
1°C from 2°C. In each case the agent needs to ask one binary question, resolve the uncertainty of one bit.”
[Figure 1 here; x-axis: number of blue dots (20–100), y-axis: probability of answering blue (0–1)]
Figure 1: Predicted probability of guessing that there are more red dots as a function of
the state for LLR cost with βij = 1/(i− j)2 (in blue) and mutual information cost (in red).
are devoid of meaning and thus equally hard to distinguish. Indeed, the same conclusion
holds for any cost function C in (7) that, like mutual information, is invariant with respect
to a permutation of the states and is convex as a function of the state-dependent action
distributions (µi).
Our model accounts for the difficulty of distinguishing different states through the
coefficients β. As this is a one-dimensional information acquisition problem, we apply the
specification βij = κ/(i − j)² of the LLR cost described in §4. As can be seen in Figure 1,
the LLR cost predicts a sigmoidal relation between the state and the choice probabilities.
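Figure 1's qualitative pattern can be reproduced by solving (7) numerically (a sketch, not the paper's computation: we use a smaller state space with r = 10, an illustrative κ = 0.005, and an off-the-shelf solver):

```python
import numpy as np
from scipy.optimize import minimize

r, kappa = 10, 0.005                               # illustrative parameters
states = np.array([i for i in range(50 - r, 50 + r + 1) if i != 50])
q = np.full(len(states), 1.0 / (2 * r))            # uniform prior
u_B = (states > 50).astype(float)                  # payoff of guessing "more blue"
u_R = (states < 50).astype(float)                  # payoff of guessing "more red"

def neg_objective(p):                              # p[k]: prob of guessing "blue" in state k
    payoff = np.sum(q * (p * u_B + (1 - p) * u_R))
    cost = 0.0
    for a, i in enumerate(states):
        for b, j in enumerate(states):
            if i != j:                             # beta_ij = kappa / (i - j)^2
                kl = (p[a] * np.log(p[a] / p[b])
                      + (1 - p[a]) * np.log((1 - p[a]) / (1 - p[b])))
                cost += kappa / (i - j) ** 2 * kl
    return -(payoff - cost)

res = minimize(neg_objective, np.full(len(states), 0.5),
               bounds=[(1e-6, 1 - 1e-6)] * len(states), method="L-BFGS-B")
p = res.x                                          # rises smoothly through 1/2 near i = 50
```

Plotting p against the states yields an S-shaped curve, with choice probabilities symmetric around i = 50.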
Continuous Choice. The main insight emerging from the above example is that under
the LLR cost closer states are harder to distinguish, in the sense that acquiring information
that finely discriminates between them is more costly. This, in turn, implies that the choice
probabilities cannot vary abruptly across nearby states.
We now extend this intuition to more general decision problems. We assume that the
state space Θ is endowed with a distance d : Θ×Θ→ R. In the previous example, d is
simply the difference |i− j| in the number of blue dots.
We say that nearby states are hard to distinguish if for all i, j ∈ Θ
    min{βij , βji} ≥ 1/d(i, j)².        (9)
So, the cost of acquiring information that discriminates between states i and j is high for
states that are close to each other.22 Our next result shows that when nearby states are
hard to distinguish, the optimal choice probabilities are Lipschitz continuous in the state:
the agent will choose actions with similar probabilities in similar states. For this result, we
22As we show in the proof of the next proposition, the results of this section extend with minor variations
to the case where the exponent in (9) is taken to be some γ > 0 rather than 2.
denote by ‖u‖ = max_{a,i} |u(a, i)| the norm of the decision maker’s utility function.
Proposition 5 (Continuity of Choice). Suppose that nearby states are hard to distinguish.
Then the optimal choice probabilities µ⋆ solving (7) are uniformly Lipschitz continuous with
constant √‖u‖, i.e. satisfy

    |µ⋆i(a) − µ⋆j(a)| ≤ √‖u‖ · d(i, j)   for all a ∈ A and i, j ∈ Θ.        (10)
Lipschitz continuity is a standard notion of continuity in discrete settings, such as
the one of this paper, where the relevant variable i takes finitely many values. A crucial
feature of the bound (10) is that the Lipschitz constant depends only on the norm ‖u‖ of
the utility function, independently of the exact form of the coefficients (βij), and of the
number of states.23
This result highlights a contrast between the predictions of mutual information cost
and LLR cost. Mutual information predicts behavior that displays counter-intuitive
discontinuities with respect to the state. Under the log-likelihood ratio cost, when nearby
states are harder to distinguish, the change in choice probabilities across states can be
bounded by the distance between them.
This difference has stark implications in coordination games. Morris and Yang (2016)
study information acquisition in coordination problems. In their model, continuity of the
choice probabilities with respect to the state leads to a unique equilibrium; if continuity fails,
then there are multiple equilibria. This suggests that mutual information and LLR costs
lead to very different predictions in coordination games and their economic applications
(bank-runs, currency attacks, models of regime change, etc).
5.2 Acquiring Precise Information
In this section we use a simple example to illustrate how our additivity axiom captures
constant marginal costs, a principle that is natural in settings of physical production of
information, and contrast it with the sub-additivity—i.e., decreasing marginal costs—of
mutual information.
Consider, for instance, the classical problem of learning the bias of a coin by flipping it
multiple times. In this context, mutual information and LLR cost behave quite differently.
Suppose the coin either yields heads 80% of the time or tails 80% of the time and either
bias is equally likely. We are interested in comparing the cost of observing a single coin
flip versus a long sequence of coin flips.
23Proposition 5 suggests that the analysis of choices probabilities might be extended to the case where
the set of states Θ is an interval in R, or, more generally, a metric space. Given a (possibly infinite) state
space Θ endowed with a metric, and a sequence of finite discretizations (Θn) converging to Θ, the bound
(10) implies that if the corresponding sequence of choice probabilities converges, then it must converge to a
collection of choice probabilities that are continuous, and moreover Lipschitz.
Under LLR cost, the additivity axiom implies that the cost of observing k coin flips is
linear in k. Hence the cost of observing a sequence of k flips goes to infinity with k. Under
mutual information cost with constant λ > 0, the cost of a single coin flip equals

    [ {0.8 log(0.8) + 0.2 log(0.2)} − log(1/2) ] λ ≈ 0.19λ.
Seeing an infinite sequence of coins reveals the state and thus leads to a posterior of 0 or 1.
The cost of seeing an infinite sequence of coin flips and thus learning the state is given by
    lim_{p→1} [ {p log p + (1 − p) log(1 − p)} − log(1/2) ] λ = log(2)λ ≈ 0.69λ.
Thus, the cost of observing infinitely many coin flips is only approximately 3.6 times
the cost of observing a single coin flip. The low—and arguably in many applications
unrealistic—cost of acquiring perfect information is caused by the sub-additivity of mutual
information as a cost function, which contrasts with the additivity of the log-likelihood
ratio cost we propose (see Figure 2).
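The two numbers above can be reproduced directly (a sketch; λ is normalized to 1, and natural logarithms are used as in the text):

```python
import math

lam = 1.0

def H(p):
    # Shannon entropy of a binary distribution (p, 1 - p), in nats
    return 0.0 if p in (0.0, 1.0) else -(p * math.log(p) + (1 - p) * math.log(1 - p))

prior_entropy = math.log(2)                  # uniform prior over the two biases
one_flip = lam * (prior_entropy - H(0.8))    # posterior is (0.8, 0.2) after either outcome
all_flips = lam * (prior_entropy - H(1.0))   # perfect learning: posterior entropy is zero
print(round(one_flip, 2), round(all_flips, 2), round(all_flips / one_flip, 1))
# -> 0.19 0.69 3.6
```

Under the LLR cost, by contrast, additivity makes the cost of k flips exactly k times the one-flip cost, so it grows without bound in k.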
[Figure 2 here; x-axis: number of coin flips k (0–20), y-axis: entropy cost (0–4)]
Figure 2: The LLR cost (in red) and the mutual information cost (in blue) of observing
multiple independent coin flips/binary signals.
These simple calculations suggest that using Sims’ mutual information cost as a model
of information production rather than information processing (as originally intended by
Sims) may lead to counterintuitive predictions.
This difference in the marginal cost of information is not merely a mathematical
difference, but could lead to substantially different predictions in economic applications.
For example, it might lead to different predictions about whether investors tend to learn and
ultimately invest in domestic or foreign stocks, as shown in Section 2.5 of Van Nieuwerburgh
and Veldkamp (2010), for the case where signals are exogenously restricted to be normal.
5.3 Hypothesis Testing
In this section we apply the log-likelihood ratio cost to a standard hypothesis testing
problem. We consider a decision maker performing an experiment with the goal of learning
about an hypothesis, i.e. whether the state is in a subset24 H ⊂ Θ.
We consider an experiment that reveals with some probability whether the hypothesis is
true or not, and study how its cost depends on the structure of H. For a given hypothesis
H and a precision α consider the binary signal µ, with signal realizations S = {H,Hc} and

    µi(s) = α for i ∈ s,   and   µi(s) = 1 − α for i ∉ s.        (11)
Conditional on each state i, this experiment yields a correct signal with probability α.
Under LLR cost, the cost of such a signal is given by
    [ Σ_{i∈H, j∈Hc} (βij + βji) ] · ( α log( α/(1−α) ) + (1−α) log( (1−α)/α ) )        (12)
The first term captures the difficulty of discerning between H and Hc. The harder the
states in H and Hc are to distinguish, the larger the coefficients βij and βji will be, and
the more costly it will thus be to learn whether the hypothesis H is true. The second term
is monotone in the signal precision α and is independent of the hypothesis.
Learning about the GDP. For concreteness, consider the case where the state is
represented by a natural number i in the interval Θ = {20000, . . . , 80000}, representing, for
instance, the current US GDP per capita. Consider the following two different hypotheses:25
(H1) The GDP is above 50000.
(H2) The GDP is an even number.
Intuitively, producing enough information to answer with high accuracy whether (H1) is
true should be less expensive than producing enough information to answer whether (H2)
is true, a practically impossible task. Our model captures this intuition. As the state is
one-dimensional we set βij = κ/(i − j)², following §4. Then

    Σ_{i∈H1, j∈H1c} (βij + βji) ≈ 22κ        and        Σ_{i∈H2, j∈H2c} (βij + βji) ≈ 148033κ.
24We denote the complement of H by Hc = Θ \H.
25Formally, H1 = {i ∈ Θ: i > 50000} and H2 = {i ∈ Θ: i even}
That is, learning whether the GDP is even or odd is by an order of magnitude more costly
than learning whether the GDP is above or below 50000.
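The two sums can be computed efficiently by grouping ordered cross pairs (i, j) by their distance d = |i − j| (a sketch with κ = 1; the pair counts follow from the interval structure of Θ):

```python
import numpy as np

lo, hi = 20000, 80000
n = hi - lo + 1                         # 60001 states
d = np.arange(1, n)                     # possible distances |i - j|

# H1: GDP > 50000. Ordered cross pairs at distance d: min(d, 30000, n - d).
pairs_h1 = np.minimum(np.minimum(d, 30000), n - d)
s1 = np.sum(2.0 * pairs_h1 / d.astype(float) ** 2)   # sum of beta_ij + beta_ji

# H2: GDP even. Only odd distances cross the hypothesis; n - d ordered pairs each.
odd = d[d % 2 == 1]
s2 = np.sum(2.0 * (n - odd) / odd.astype(float) ** 2)

print(s1, s2)                           # approximately 22 and 148033, as in the text
```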
It is useful to compare these observations with the results that would be obtained under
mutual information and a uniform prior on Θ. In such a model, the cost of a symmetric
binary signal with precision α is determined solely by the cardinality of H. In particular,
under mutual information learning whether the GDP is above or below 50000 is equally
costly as learning whether it is even or odd. This follows from the fact that the mutual
information cost is invariant with respect to a relabelling of the states.
This example demonstrates that the LLR cost function can capture different phenomena
from mutual information cost. Rational inattention theory models the cost of paying
attention to information that is freely available. In the above example, it is equally costly
to read the last digit and the first digit of the per capita GDP in a newspaper. In contrast
to rational inattention, we aim at modeling the cost of generating information, and capture
the intuitive fact that measuring the most significant digit of the GDP is much easier than
measuring the least significant one.
6 Verification and Falsification
It is well understood that verification and falsification are fundamentally different forms of
empirical research. This can be seen most clearly through Karl Popper’s famous example
of the statement “all swans are white.” Regardless of how many white swans are observed,
no amount of evidence can imply that the next one will be white. However, observing a
single black swan is enough to prove the statement false.
Popper’s argument highlights a crucial asymmetry between verification and falsification.
A given experiment, such as the observation of swans, can make it feasible to reject an
hypothesis, yet have no power to prove that the same hypothesis is true.
This principle extends from science to everyday life. In a legal case, the type of evidence
necessary to prove that a person is guilty can be quite different from the type of evidence
necessary to demonstrate that a person is innocent. In a similar way, corroborating the
claim “Ann has a sibling” might require empirical evidence (such as the outcome of a
DNA test) that is distinct from the sort of evidence necessary to prove that she has no
siblings. These examples lead to the question of how to capture Popper’s distinction
between verification and falsification in a formal model of information acquisition.
In this section we show that the asymmetry between verification and falsification can
be captured by the LLR cost. As an example, we consider a state space Θ = {a, e} that
consists of two hypotheses. For simplicity, let a correspond to the hypothesis “all swans
are white” and e to the event “there exists a nonwhite swan.” Imagine a decision maker
who attaches equal probability to each state, and consider the experiments described
         s1        s2                       s1        s2
  a    1 − ε²      ε²                a    1 − ε       ε
  e    1 − ε       ε                 e    1 − ε²      ε²
      (a) Experiment I                   (b) Experiment II
Table 1: The set of states is Θ = {a, e}. In both experiments the signal space is S = {s1, s2}. Under
experiment I, observing the signal realization s2 rejects the hypothesis that the state is a (up to a small
probability of error ε²). Under experiment II, observing s2 verifies the same hypothesis.
in Table 1:26
• In experiment I, regardless of the state, an uninformative signal realization s1 occurs
with probability greater than 1 − ε, where ε is positive and small. If a nonwhite
swan exists, then one is observed with probability ε. Formally, this corresponds to
observing the signal realization s2. If all swans are white, then signal s1 is observed,
up to an infinitesimal probability of error ε². Hence, conditional on observing s2, the
decision maker’s belief in state a approaches zero, while conditional on observing s1
the decision maker’s belief remains close to the prior. So, the experiment can reject
the hypothesis that the state is a, but cannot verify it.27
• In experiment II the roles of the two states are reversed: if all swans are white,
then this fact is revealed to the decision maker with probability ε. If there is a
non-white swan, then the uninformative signal s1 is observed (up to an infinitesimal
probability of error ε²). Conditional on observing s2, the decision maker’s belief in
state a approaches one, and conditional on observing s1 the decision maker’s belief is
essentially unchanged. Thus, the experiment can verify the hypothesis that the state
is a, but cannot reject it.
As shown by the example, permuting the state-dependent distributions of an experiment
may affect its power to verify or falsify an hypothesis. However, permuting the role of the
states may, in reality, correspond to a completely different type of empirical investigation.
For instance, experiment I can be easily implemented in practice: as an extreme example,
26Popper (1959) intended verification and falsifications as deterministic procedures, which exclude even
small probabilities of error. In our informal discussion we do not distinguish between events that are
deemed extremely unlikely (such as thinking of having observed a black swan in world where all swans
are white) and events that have zero probability. We refer the reader to (Popper, 1959, chapter 8) and
Olszewski and Sandroni (2011) for a discussion of falsifiability and small probability events.
27The error term ε² can be interpreted as small noise in the observation. Its role is simply to ensure that
log-likelihood ratios are finite for each observation.
the decision maker may look up in the sky. There is a small chance a nonwhite swan will
be observed; if not, the decision maker’s belief will not change by much. It is not obvious
exactly what tests or samples would be necessary to implement experiment II, let alone to
conclude that the two experiments should be equally costly to perform.
We conclude that in order for a model of information acquisition to capture the difference
between verification and falsification, the cost of an experiment should not necessarily be
invariant with respect to a permutation of the states. In our model, this can be captured
by assuming that the coefficients (βij) are non-symmetric, i.e. that βij and βji are
not necessarily equal. For instance, the cost of experiments I and II in Table 1 will differ
whenever the coefficients of the LLR cost satisfy βae 6= βea. For example, if we set βae = κ
and βea = 0, and if we consider small ε, then the cost of experiment I is κε, to first order
in ε. In comparison, the cost of experiment II is—again to first order—a factor of log(1/ε)
higher. Hence the ratio between the costs of these experiments is arbitrarily high for small
ε.
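These asymptotics are easy to verify numerically (a sketch with βae = 1 and βea = 0, so that only log-likelihood ratios under state a are costly; the cost of each experiment is then the KL divergence from µa to µe):

```python
import math

def kl(p, q):
    # KL divergence between binary distributions (p, 1-p) and (q, 1-q)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

for eps in [1e-2, 1e-4, 1e-6]:
    cost_I = kl(1 - eps ** 2, 1 - eps)    # experiment I:  mu_a = (1-e^2, e^2), mu_e = (1-e, e)
    cost_II = kl(1 - eps, 1 - eps ** 2)   # experiment II: mu_a = (1-e, e), mu_e = (1-e^2, e^2)
    # cost_I ~ eps while cost_II ~ eps * log(1/eps): the ratio blows up as eps -> 0
    print(eps, cost_I / eps, cost_II / cost_I)
```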
We note that a difference between the costs of these experiments is impossible under
mutual information and a uniform prior, since in that model the cost of an experiment is
invariant with respect to a permutation of the states.
7 Related Literature
The question of how to quantify the amount of information provided by an experiment is
the subject of a long-standing and interdisciplinary literature. Kullback and Leibler (1951)
introduced the notion of Kullback-Leibler divergence as a measure of distance between
statistical populations. Kelly (1956), Lindley (1956), Marschak (1959) and Arrow (1971)
apply Shannon’s entropy to the problem of ordering information structures.
More recently, Hansen and Sargent (2001) and Strzalecki (2011) adopted KL-divergence
as a tool to model robust decision criteria under uncertainty. Cabrales, Gossner, and
Serrano (2013) derive Shannon entropy as an index of informativeness for experiments in
the context of portfolio choice problems (see also Cabrales, Gossner, and Serrano, 2017).
Frankel and Kamenica (2018) put forward an axiomatic framework for quantifying the
value and the amount of information in an experiment.
Rational Inattention. As discussed in the introduction, our work is also motivated by
the recent literature on rational inattention and models of costly information acquisition
based on Shannon’s entropy. A complete survey of this area is beyond the scope of this
paper; we instead refer the interested reader to Caplin (2016) and Mackowiak, Matějka,
and Wiederholt (2018) for perspectives on this growing literature.
Decision Theory. Our axiomatic approach differs both in terms of motivation and
techniques from other results in the literature. Caplin and Dean (2015) study the revealed
preference implications of rational inattention models, taking as a primitive state-dependent
random choice data. Within the same framework, Caplin, Dean, and Leahy (2018)
characterize mutual information cost, Chambers, Liu, and Rehbeck (2017) study non-
separable models of costly information acquisition, and Denti (2018) provides a revealed
preference of posterior separability. Decision theoretic foundations for models of information
acquisition have been put forward by de Oliveira (2014), De Oliveira, Denti, Mihm, and
Ozbek (2017), and Ellis (2018). Mensch (2018) provides an axiomatic characterization of
posterior-separable cost functions.
The Wald Model of Sequential Sampling. The notion of constant marginal costs
over independent experiments goes back to Wald’s (1945) classic sequential sampling model;
our axioms extend some of Wald’s ideas to a model of flexible information acquisition. In
its most general form, Wald’s model considers a decision maker who acquires information
by collecting multiple independent copies of a fixed experiment, and incurs a cost equal to
the number of repetitions. In this model, every stopping strategy corresponds to an experiment,
and so every such model defines a cost over some family of experiments. It is easy to see
that such a cost satisfies our axioms.
Morris and Strack (2018) consider a continuous-time version where the decision maker
observes a one-dimensional diffusion process whose drift depends on the state, and incurs
a cost proportional to the expected time spent observing. This cost is again easily seen
to satisfy our axioms, and indeed, for the experiments that can be generated using this
sampling process, they show that the expected cost of a given distribution over posteriors
is of the form obtained in Proposition 3. Outside of the binary state case, only a restricted
family of distributions over posteriors can be implemented by means of a sampling strategy.
This has to be expected, since in Wald’s model the decision maker has in each period a
single, exogenously fixed, signal at their disposal.
One could imagine modifying the exercise in their paper by considering families of
processes other than one-dimensional diffusion processes; for example, one could take
Poisson processes with rates depending on the state. One of the contributions of our
paper is to abstract away from such parametric assumptions, and show that a few simple
axioms which capture the most basic intuition behind Wald’s model suffice to pin down a
specific family of cost functions over experiments. Nevertheless, one may view the result in
Morris and Strack (2018) as complementary evidence that the cost function obtained in
Proposition 3 is a natural choice for one-dimensional information acquisition problems.
Dynamic Information Acquisition Models. Hébert and Woodford (2018), Zhong
(2017, 2019), and Morris and Strack (2018) relate cost functions over experiments and
sequential models of costly information acquisition. In these papers, the cost C(µ) is the
minimum expected cost of generating the experiment µ by means of a dynamic sequential
sampling strategy.
Hébert and Woodford (2018) analyze a continuous-time model where the decision
maker’s beliefs follow a diffusion process and the decision maker can acquire information
by varying its volatility. They propose and characterize a family of “neighborhood-based”
cost functions that generalize mutual information, and allow for the cost of learning about
states to be affected by their proximity. In a perception task, these costs are flexible enough
to accommodate optimal response probabilities that are S-shaped, similarly to our analysis
in §5.1. The LLR cost does not generalize mutual information, but has a structure similar
to a neighborhood-based cost where the neighboring structure consists of all pairs of states.
Zhong (2017) provides general conditions for a cost function over experiments to be
induced by some dynamic model of information acquisition. Zhong (2019) studies a dynamic
model of non-parametric information acquisition, where a decision maker can choose any
dynamic signal process as an information source, and pays a flow cost that is a function of
the informativeness of the process. A key assumption is discounting of delayed payoffs.
The paper shows that the optimal strategy corresponds to a Poisson signal.
Information Theory. This paper is also related to the axiomatic literature in informa-
tion theory characterizing different notions of entropy and information measures. Ebanks,
Sahoo, and Sander (1998) and Csiszár (2008) survey and summarize the literature in the
field. In the special case where |Θ| = 2 and the coefficients (βij) are set to 1, the function
(1) is also known as J-divergence. Kannappan and Rathie (1988) provide an axiomatization
of J-divergence, under axioms very different from the ones in this paper. A more general
representation appears in Zanardo (2017).
Ebanks, Sahoo, and Sander (1998) characterize functions over tuples of measures with
finite support. They show that a condition equivalent to our additivity axiom leads to a
functional form similar to (1). Their analysis is however quite different from ours: their
starting point is an assumption which, in the notation of this paper, states the existence of
a map F : RΘ → R such that the cost of an experiment (S, (µi)) with finite support takes
the form C(µ) = Σ_{s∈S} F((µi(s))i∈Θ). This assumption of additive separability does not
seem to have an obvious economic interpretation, nor to be related to our motivation of
capturing constant marginal costs in information production.
Probability Theory. The results in Mattner (1999, 2004) have, perhaps, the closest
connection with this paper. Mattner studies functionals over the space of probability
measures on R that are additive with respect to convolution. As we explain in the next section,
additivity with respect to convolution is a property that is closely related to Axiom 2. We
draw inspiration from Mattner (1999) in applying the study of cumulants to the proof of
Theorem 1. However, the difference in domain makes the techniques in Mattner (1999,
2004) not applicable to this paper.
8 Proof Sketch
In this section we informally describe some of the ideas involved in the proof of Theorem 1.
We consider the binary case where Θ = {0, 1} and so there is only one relevant log-likelihood
ratio ` = `10. The proof of the general case is more involved, but conceptually similar.
Step 1. Let C satisfy Axioms 1-4. Conditional on each state i, an experiment µ induces a
distribution σi for `. Two experiments that induce the same pair of distributions (σ0, σ1)
are equivalent in the Blackwell order. Thus, by Axiom 1, C can be identified with a map
c(σ0, σ1) defined over all pairs of distributions induced by some experiment µ.
Step 2. Axioms 2 and 3 translate into the following properties of c. The product µ⊗ ν
of two experiments induces, conditional on i, a distribution for ` that is the convolution
of the distributions induced by the two experiments. Axiom 2 is equivalent to c being
additive with respect to convolution, i.e.
c(σ0 ∗ τ0, σ1 ∗ τ1) = c(σ0, σ1) + c(τ0, τ1)
Axiom 3 is equivalent to c satisfying for all α ∈ [0, 1],
c(ασ0 + (1− α)δ0, ασ1 + (1− α)δ0) = αc(σ0, σ1)
where δ0 is the degenerate measure at 0. Axiom 4 translates into continuity of c with
respect to total variation and the first N moments of σ0 and σ1.
Step 3. As is well known, many properties of a probability distribution can be analyzed by
studying its moments. We apply this idea to the study of experiments, and show that under
our axioms the cost c(σ0, σ1) is a function of the first N moments of the two measures, for
some (arbitrarily large) N . Given an experiment µ, we consider the experiment
    µn = (1/n) · (µ ⊗ · · · ⊗ µ)
in which with probability 1/n no information is produced, and with the remaining proba-
bility the experiment µ is carried out n times. By Axioms 2 and 3, the cost of µn is equal
to the cost of µ.28 We show that these properties, together with the continuity axiom,
imply that the cost of an experiment is a function G of the moments of (σ0, σ1):
c(σ0, σ1) = G [mσ0(1), . . . ,mσ0(N),mσ1(1), . . . ,mσ1(N)] (13)
28For n large, the experiment µn has a very simple structure: With high probability it is uninformative,
and with probability 1/n is highly revealing about the states.
where mσi(n) is the n-th moment of σi. Each mσi(n) is affine in σi, hence Step 2 implies
that G is affine with respect to mixtures with the zero vector.
Step 4. It will be useful to analyze a distribution not only through its moments but
also through its cumulants. The n-th cumulant κσ(n) of a probability measure σ is the
n-th derivative at 0 of the logarithm of its characteristic function. By a combinatorial
characterization due to Leonov and Shiryaev (1959), κσ(n) is a polynomial function of
the first n moments mσ(1), . . . ,mσ(n). For example, the first cumulant is the expectation
κσ(1) = mσ(1), the second is the variance κσ(2) = mσ(2) − mσ(1)², and the third is κσ(3) =
mσ(3) − 3mσ(2)mσ(1) + 2mσ(1)³. Step 3 and the result by Leonov and Shiryaev (1959) imply that the cost of an
experiment is a function H of the cumulants of (σ0, σ1):
c(σ0, σ1) = H [κσ0(1), . . . , κσ0(N), κσ1(1), . . . , κσ1(N)] (14)
where κσi(n) is the n-th cumulant of σi.
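The cumulant machinery can be illustrated numerically (a sketch; the two discrete distributions below are made up): the first three cumulants are polynomial in the raw moments, and, for the convolution of two distributions, cumulants add — the property exploited in the next step.

```python
import numpy as np

def raw_moments(vals, probs, N=3):
    return [float(np.sum(probs * vals ** k)) for k in range(1, N + 1)]

def first_cumulants(m):
    # low-order Leonov-Shiryaev formulas: mean, variance, third central moment
    k1 = m[0]
    k2 = m[1] - m[0] ** 2
    k3 = m[2] - 3 * m[1] * m[0] + 2 * m[0] ** 3
    return np.array([k1, k2, k3])

# two independent discrete distributions (values and weights are arbitrary)
x, px = np.array([-1.0, 0.5, 2.0]), np.array([0.2, 0.5, 0.3])
y, py = np.array([0.0, 1.0]), np.array([0.6, 0.4])

# distribution of the independent sum: the convolution of the two laws
s = (x[:, None] + y[None, :]).ravel()
ps = (px[:, None] * py[None, :]).ravel()

kx = first_cumulants(raw_moments(x, px))
ky = first_cumulants(raw_moments(y, py))
ks = first_cumulants(raw_moments(s, ps))
print(np.max(np.abs(ks - (kx + ky))))   # numerically zero: cumulants add under convolution
```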
Step 5. Cumulants satisfy a crucial property: the cumulant of a sum of two independent
random variables is the sum of their cumulants. So, they are additive with respect to
convolution. By Step 2, this implies that H is additive. We show that H is in fact a
linear funtion. This step is reminiscent of the classic Cauchy equation problem. That
is, understanding under what conditions a function φ : R → R that satisfies φ(x + y) =
φ(x) + φ(y) must be linear. In Theorem 4 we show, very generally, that any additive
function from a subset K ⊂ Rd to R+ is linear, provided K is closed under addition and has
a non-empty interior. We then proceed to show that both of these conditions are satisfied
if K is taken to be the domain of H, and thus deduce that H is linear.
Step 6. In the last step we study the implications of (13) and (14). We apply the
characterization by Leonov and Shiryaev (1959) and show that the affinity with respect
to the origin of the map G, and the linearity of H, imply that H must be a function
solely of the first cumulants κσ0(1) and κσ1(1). That is, C must be a weighted sum of the
expectations of the log-likelihood ratio ` conditional on each state.
9 Conclusions
In this paper we put forward an axiomatic approach to modeling the cost of information
acquisition, characterizing a family of cost functions that capture a notion of constant
marginal returns in the production of information. We study the predictions implied
by our assumptions in various settings, and compare them to the predictions of mutual
information costs.
We propose a number of possible avenues for future research, all of which would
require the solution of some non-trivial technical challenges: The first is an extension
of our framework beyond the setting of a finite set of states to a continuum of states.
In particular, this is natural in the context of one-dimensional problems we study in §4.
Second, one could consider a generalization of the study of one-dimensional problems in §4
to multidimensional problems in which Θ is a subset of Rd. This would constitute a rather
general, widely applicable setting. Third, there are a number of important additional
settings which have been modeled using mutual information cost, where it may be of
interest to understand the sensitivity of the conclusions to this assumption, and how it may
change if we assume constant marginal costs (see, e.g., Van Nieuwerburgh and Veldkamp,
2010).
Finally, if one accepts our axioms (and hence LLR costs) as capturing constant marginal
costs, a natural definition for a convex cost is one given by the supremum over a
family of LLR costs. Likewise, concave costs would be infima over LLR costs. It may
be interesting to understand if such costs are characterized by simple axioms (e.g., by
substituting the appropriate inequalities in our axioms) and whether they admit a simple
functional form.
References
Arrow, K. J. (1971). The value of and demand for information. Decision and organization 2,
131–139.
Arrow, K. J. (1985). Informational structure of the firm. The American Economic
Review 75 (2), 303–307.
Arrow, K. J., D. Blackwell, and M. A. Girshick (1949). Bayes and minimax solutions
of sequential decision problems. Econometrica, Journal of the Econometric Society,
213–244.
Austin, T. D. (2006). Entropy and Sinai theorem. Mimeo.
Blackwell, D. (1951). Comparison of experiments. In Proceedings of the Second Berkeley
Symposium on Mathematical Statistics and Probability. The Regents of the University of
California.
Bohnenblust, H. F., L. S. Shapley, and S. Sherman (1949). Reconnaissance in game theory.
Brouwer, L. (1911). Beweis der invarianz des n-dimensionalen gebiets. Mathematische
Annalen 71 (3), 305–313.
Cabrales, A., O. Gossner, and R. Serrano (2013). Entropy and the value of information for
investors. American Economic Review 103 (1), 360–77.
Cabrales, A., O. Gossner, and R. Serrano (2017). A normalized value for information
purchases. Journal of Economic Theory 170, 266–288.
Caplin, A. (2016). Measuring and modeling attention. Annual Review of Economics 8,
379–403.
Caplin, A. and M. Dean (2013). Behavioral implications of rational inattention with
Shannon entropy. Technical report, National Bureau of Economic Research.
Caplin, A. and M. Dean (2015). Revealed preference, rational inattention, and costly
information acquisition. American Economic Review 105 (7), 2183–2203.
Caplin, A., M. Dean, and J. Leahy (2018). Rational inattentive behavior: Characterizing
and generalizing Shannon entropy. Technical report, National Bureau of Economic
Research.
Chambers, C. P., C. Liu, and J. Rehbeck (2017). Nonseparable costly information acquisition
and revealed preference.
Chan, J., A. Lizzeri, W. Suen, and L. Yariv (2017). Deliberating collective decisions. The
Review of Economic Studies 85 (2), 929–963.
Cover, T. M. and J. A. Thomas (2012). Elements of information theory. John Wiley &
Sons.
Csiszár, I. (2008). Axiomatic characterizations of information measures. Entropy 10 (3),
261–273.
de Oliveira, H. (2014). Axiomatic foundations for entropic costs of attention. Technical
report, Mimeo.
De Oliveira, H., T. Denti, M. Mihm, and K. Ozbek (2017). Rationally inattentive preferences
and hidden information costs. Theoretical Economics 12 (2), 621–654.
Dean, M. and N. Neligh (2017). Experimental tests of rational inattention.
Denti, T. (2018). Posterior-separable cost of information.
Dvoretzky, A., J. Kiefer, J. Wolfowitz, et al. (1953). Sequential decision problems for pro-
cesses with continuous time parameter. Testing hypotheses. The Annals of Mathematical
Statistics 24 (2), 254–264.
Ebanks, B., P. Sahoo, and W. Sander (1998). Characterizations of information measures.
World Scientific.
Ellis, A. (2018). Foundations for optimal inattention. Journal of Economic Theory 173,
56–94.
Frankel, A. and E. Kamenica (2018). Quantifying information and uncertainty. Technical
report, Working paper.
Gescheider, G. A. (1997). Psychophysics: the fundamentals (3 ed.). Psychology Press.
Green, D. M. and J. A. Swets (1966). Signal detection theory and psychophysics. New
York: Wiley.
Hansen, L. and T. J. Sargent (2001). Robust control and model uncertainty. American
Economic Review 91 (2), 60–66.
Hébert, B. and M. Woodford (2018). Information costs and sequential information sampling.
Jech, T. (2013). Set theory. Springer Science & Business Media.
Kannappan, P. and P. Rathie (1988). An axiomatic characterization of j-divergence. In
Transactions of the Tenth Prague Conference on Information Theory, Statistical Decision
Functions, Random Processes, pp. 29–36. Springer.
Kelly, J. (1956). A new interpretation of information rate. Bell System Technical Journal.
Krajbich, I., C. Armel, and A. Rangel (2010). Visual fixations and the computation and
comparison of value in simple choice. Nature Neuroscience 13 (10), 1292.
Kullback, S. and R. A. Leibler (1951). On information and sufficiency. The Annals of
Mathematical Statistics 22 (1), 79–86.
Le Cam, L. (1996). Comparison of experiments: A short review. Lecture Notes-Monograph
Series, 127–138.
Leonov, V. and A. N. Shiryaev (1959). On a method of calculation of semi-invariants.
Theory of Probability & Its Applications 4 (3), 319–329.
Lindley, D. V. (1956). On a measure of the information provided by an experiment. The
Annals of Mathematical Statistics, 986–1005.
Mackowiak, B., F. Matějka, and M. Wiederholt (2018). Rational inattention: A disciplined
behavioral model.
Marschak, J. (1959). Remarks on the economics of information. Technical report, Cowles
Foundation for Research in Economics, Yale University.
Matějka, F. and A. McKay (2015). Rational inattention to discrete choices: A new
foundation for the multinomial logit model. American Economic Review 105 (1), 272–98.
Mattner, L. (1999). What are cumulants? Documenta Mathematica 4, 601–622.
Mattner, L. (2004). Cumulants are universal homomorphisms into hausdorff groups.
Probability theory and related fields 130 (2), 151–166.
Mensch, J. (2018). Cardinal representations of information.
Morris, S. and P. Strack (2018). The Wald problem and the relation of sequential sampling
and static information costs.
Morris, S. and M. Yang (2016). Coordination and continuous choice.
Mosteller, F. and P. Nogee (1951). An experimental measurement of utility. Journal of
Political Economy 59 (5), 371–404.
Olszewski, W. and A. Sandroni (2011). Falsifiability. American Economic Review 101 (2),
788–818.
Popper, K. (1959). The logic of scientific discovery. Routledge.
Shiryaev, A. N. (1996). Probability. Springer.
Sims, C. (2010). Rational inattention and monetary economics. Handbook of Monetary
Economics 3, 155–181.
Sims, C. A. (2003). Implications of rational inattention. Journal of Monetary Economics 50 (3), 665–690.
Steiner, J., C. Stewart, and F. Matějka (2017). Rational inattention dynamics: Inertia
and delay in decision-making. Econometrica 85 (2), 521–553.
Strzalecki, T. (2011). Axiomatic foundations of multiplier preferences. Econometrica 79 (1),
47–73.
Tao, T. (2011). Brouwer’s fixed point and invariance of domain theorems,
and Hilbert’s fifth problem. https://terrytao.wordpress.com/2011/06/13/
brouwers-fixed-point-and-invariance-of-domain-theorems-and-hilberts-fifth-problem.
Tavares, G., P. Perona, and A. Rangel (2017). The attentional drift diffusion model of
simple perceptual decision-making. Frontiers in neuroscience 11, 468.
Van Nieuwerburgh, S. and L. Veldkamp (2010). Information acquisition and under-
diversification. The Review of Economic Studies 77 (2), 779–805.
Wald, A. (1945). Sequential tests of statistical hypotheses. The Annals of Mathematical
Statistics 16 (2), 117–186.
Wilson, R. (1975). Informational economies of scale. The Bell Journal of Economics,
184–195.
Zanardo, E. (2017). How to measure disagreement. Technical report.
Zhong, W. (2017). Indirect information measure and dynamic learning.
Zhong, W. (2019). Optimal dynamic information acquisition.
Appendix A Discussion of the Continuity Axiom
Our continuity axiom may seem technical, and in a sense it is. However, there are some
interesting technical subtleties involved with its choice. Indeed, it seems that a more
natural choice of topology would be the topology of weak convergence of likelihood ratios.
Under that topology, two experiments would be close if they had close expected utilities
for decision problems with continuous bounded utilities. The disadvantage of this topology
is that no cost that satisfies the rest of the axioms is continuous in this topology. To see
this, consider the sequence of experiments in which a coin (whose bias depends on the
state) is tossed n times with probability 1/n, and otherwise is not tossed at all. Under
our axioms these experiments all have the same cost—the cost of tossing the coin once.
However, in the weak topology these experiments converge to the trivial experiment that
yields no information and therefore has zero cost.
In fact, even the stronger total variation topology suffers from the same problem, which
is demonstrated using the same sequence of experiments. Therefore, one must consider a
finer topology (which makes for a weaker continuity assumption), which we do by also
requiring the first N moments to converge. Note that increasing N makes for a finer
topology and therefore a weaker continuity assumption, and that our results hold for all
N > 0. An even stronger topology (which requires the convergence of all moments) is used
by Mattner (1999, 2004) to find additive linear functionals on the space of all random
variables on R.
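The coin-tossing example above can be made concrete. In the sketch below (our own illustration; the biases 0.6 vs. 0.4 are arbitrary), the probability that the diluted n-fold experiment produces any toss at all is 1/n, so its total variation distance to the trivial experiment vanishes, while the first moment of the log-likelihood ratio stays fixed at the per-toss Kullback-Leibler divergence, which is why the sequence does not converge under dN:

```python
import math

p0, p1 = 0.4, 0.6  # hypothetical coin biases in states 0 and 1
kl = p1 * math.log(p1 / p0) + (1 - p1) * math.log((1 - p1) / (1 - p0))

def llr_dist(n):
    """Distribution of the log-likelihood ratio under state 1: with
    probability 1 - 1/n the coin is not tossed (LLR = 0); with
    probability 1/n it is tossed n times (k heads out of n)."""
    dist = {0.0: 1 - 1 / n}
    for k in range(n + 1):
        prob = (1 / n) * math.comb(n, k) * p1**k * (1 - p1)**(n - k)
        llr = k * math.log(p1 / p0) + (n - k) * math.log((1 - p1) / (1 - p0))
        dist[llr] = dist.get(llr, 0.0) + prob
    return dist

for n in (10, 100, 1000):
    d = llr_dist(n)
    mass_informative = 1 - d[0.0]                # at most 1/n: dtv to trivial vanishes
    mean_llr = sum(x * p for x, p in d.items())  # stays equal to kl for every n
```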
Nevertheless, the continuity axiom is technical. We state here without proof that it is
not required when there are only two states, and we conjecture that it is not required in
general.
Appendix B Preliminaries
For the rest of this section, in order to simplify the notation, we let Θ = {0, 1, . . . , n}, so
that |Θ| = n+ 1.
B.1 Properties of the Kullback-Leibler Divergence
In this section we summarize some well known properties of the Kullback-Leibler divergence,
and derive from them straightforward properties of the LLR cost.
Given a measurable space (X,Σ) we denote by P(X,Σ) the space of probability
measures on (X,Σ). If X = Rd for some d ∈ N then Σ is implicitly assumed to be the
corresponding Borel σ-algebra and we simply write P(Rd).
For the next result, given two measurable spaces (Ω,Σ) and (Ω′,Σ′), a measurable
map F : Ω → Ω′, and a measure η ∈ P(Ω,Σ), we can define the push-forward measure
F∗η ∈ P(Ω′,Σ′) by [F∗η](A) = η(F−1(A)) for all A ∈ Σ′.
Proposition 6. Let ν1, ν2, η1, η2 be measures in P(Ω,Σ), and let µ1, µ2 be probability
measures in P(Ω′,Σ′). Assume that DKL(ν1‖ν2), DKL(η1‖η2) and DKL(µ1‖µ2) are all
finite. Let F : Ω→ Ω′ be measurable. Then:
1. DKL(ν1‖ν2) ≥ 0 with equality if and only if ν1 = ν2.
2. DKL(ν1 × µ1‖ν2 × µ2) = DKL(ν1‖ν2) +DKL(µ1‖µ2).
3. For all α ∈ (0, 1),
DKL(αν1 + (1− α)η1‖αν2 + (1− α)η2) ≤ αDKL(ν1‖ν2) + (1− α)DKL(η1‖η2).
and this inequality is strict unless ν1 = η1 and ν2 = η2.
4. DKL(F∗ν1‖F∗ν2) ≤ DKL(ν1‖ν2).
It is well known that KL-divergence satisfies the first three properties in the statement
of the proposition. We refer the reader to (Austin, 2006, Proposition 2.4) for a proof of
the last property.
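For finite sample spaces, the second, third, and fourth properties can be checked numerically. The sketch below is our own illustration (the distributions are arbitrary); the garbling in the last check is a particular deterministic map F that merges pairs of outcomes:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between two finite distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p1, p2 = [0.2, 0.8], [0.5, 0.5]   # one pair of measures on a 2-point space
q1, q2 = [0.6, 0.4], [0.3, 0.7]   # another pair

# Statement 2: additivity on product measures.
prod1 = [a * b for a in p1 for b in q1]
prod2 = [a * b for a in p2 for b in q2]
additivity_gap = kl(prod1, prod2) - (kl(p1, p2) + kl(q1, q2))  # ~ 0

# Statement 3: joint convexity.
a = 0.3
mix_left = [a * x + (1 - a) * y for x, y in zip(p1, q1)]
mix_right = [a * x + (1 - a) * y for x, y in zip(p2, q2)]
convexity_ok = kl(mix_left, mix_right) <= a * kl(p1, p2) + (1 - a) * kl(q1, q2)

# Statement 4: merging the first two and the last two outcomes
# (a push-forward) cannot increase the divergence.
coarse1 = [prod1[0] + prod1[1], prod1[2] + prod1[3]]
coarse2 = [prod2[0] + prod2[1], prod2[2] + prod2[3]]
data_processing_ok = kl(coarse1, coarse2) <= kl(prod1, prod2)
```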
Lemma 1. Two experiments µ = (S, (µi)) and ν = (T, (νi)) that satisfy µ̄i = ν̄i for every
i ∈ Θ are equivalent in the Blackwell order.
Proof. The result is standard, but we include a proof for completeness. Suppose µ̄i = ν̄i for
every i ∈ Θ. Given the experiment µ and a uniform prior on Θ, the posterior probability
of state i conditional on s is given almost surely by
p_i(s) = (dµi / d∑_{j∈Θ} µj)(s) = 1 / ∑_{j∈Θ} (dµj/dµi)(s) = 1 / ∑_{j∈Θ} e^{`_{ji}}     (15)
and the corresponding expression applies to experiment ν. By assumption, conditional on
each state the two experiments induce the same distribution of log-likelihood ratios (`ij).
Hence, by (15) they must induce the same distribution over posteriors, hence be equivalent
in the Blackwell order.
A consequence of Proposition 6 is that the LLR cost is monotone with respect to the
Blackwell order.
Proof of Proposition 1. Let C be a LLR cost. It is immediate that if µ̄i = ν̄i for every i
then C(µ) = C(ν). We can assume without loss of generality that S = T = P(Θ), endowed
with the Borel σ-algebra. This follows from the fact that we can define a new experiment
ρ = (P(Θ), (ρi)) such that µ̄i = ρ̄i for every i (see, e.g. Le Cam (1996)), and apply the
same result to ν. By Blackwell’s Theorem there exists a probability space (R, λ) and
a “garbling” map G : S × R → T such that for each i ∈ Θ it holds that νi = G∗(µi × λ).
Hence, by the first, second and fourth statements in Proposition 6,
DKL(νi‖νj) = DKL(G∗(µi × λ)‖G∗(µj × λ))
≤ DKL(µi × λ‖µj × λ)
= DKL(µi‖µj) +DKL(λ‖λ)
= DKL(µi‖µj).
Therefore, by Theorem 1, we have
C(ν) = ∑_{i,j∈Θ} βij DKL(νi‖νj) ≤ ∑_{i,j∈Θ} βij DKL(µi‖µj) = C(µ).
We note that a similar argument shows that if all the coefficients βij are positive then
C(µ) > C(ν) whenever µ Blackwell dominates ν but ν does not dominate µ.
An additional direct consequence of Proposition 6 is that the LLR cost is convex:
Proposition 7. Let µ = (S, (µi)) and ν = (S, (νi)) be experiments in E. Given α ∈ (0, 1),
define the experiment η = (S, (ηi)) by ηi = ανi + (1− α)µi for each i. Then any LLR cost
C satisfies
C(η) ≤ αC(ν) + (1− α)C(µ).
This follows immediately from the third statement in Proposition 6. We note that if
ν and µ are not Blackwell equivalent, and if all the coefficients βij are positive, then the
inequality above is strict.
We now study the set
D = {(DKL(µi‖µj))_{i≠j} : µ ∈ E} ⊆ R^{(n+1)n}_+
of all possible pairs of expected log-likelihood ratios induced by some experiment µ. The
next result shows that D contains the strictly positive orthant.
Lemma 2. R^{(n+1)n}_{++} ⊆ D.
Proof. The set D is convex. To see this, let µ = (S, (µi)) and ν = (T, (νi)) be two
experiments. Without loss of generality, we can suppose that S = T , and S = S1 ∪ S2,
where S1, S2 are disjoint, and µi(S1) = νi(S2) = 1 for every i.
Fix α ∈ (0, 1) and define the new experiment τ = (S, (τi)) where τi = αµi + (1− α)νi
for every i. It can be verified that, τi-almost surely, the density dτi/dτj satisfies (dτi/dτj)(s) = (dµi/dµj)(s) if s ∈ S1 and (dτi/dτj)(s) = (dνi/dνj)(s) if s ∈ S2. It then follows that

DKL(τi‖τj) = αDKL(µi‖µj) + (1− α)DKL(νi‖νj).
Hence D is convex. We now show D is a convex cone. First notice that the zero vector
belongs to D, since it corresponds to the totally uninformative experiment. In addition
(see §B.1),
DKL((µ⊗ µ)i‖(µ⊗ µ)j) = DKL(µi × µi‖µj × µj) = 2DKL(µi‖µj).
Hence D is closed under addition. Because D is also convex and contains the zero vector,
it follows that it is a convex cone.
Suppose, by way of contradiction, that the inclusion R^{(n+1)n}_{++} ⊆ D does not hold. This implies we can find a vector z ∈ R^{(n+1)n}_+ that does not belong to the closure of D. Therefore, there exists a nonzero vector w ∈ R^{(n+1)n} and t ∈ R such that w · z > t ≥ w · y for all y ∈ D. Because D is a cone, t ≥ 0 and 0 ≥ w · y for all y ∈ D. Let i0j0 be a coordinate such that w_{i0j0} > 0.
Consider the following three cumulative distribution functions on [2,∞):

F1(x) = 1 − 2/x,   F2(x) = 1 − (log 2)²/(log x)²,   F3(x) = 1 − (log 2)/(log x),
and denote by π1, π2, π3 the corresponding measures. A simple calculation shows that
DKL(π3‖π1) =∞, whereas DKL(πa‖πb) <∞ for any other choice of a, b ∈ {1, 2, 3}.
Let π^ε_a = (1− ε)δ₂ + επ_a for every a ∈ {1, 2, 3}, where δ₂ is the point mass at 2. Then still DKL(π^ε_3‖π^ε_1) = ∞, but, for any other choice of a and b in {1, 2, 3}, the divergence DKL(π^ε_a‖π^ε_b) vanishes as ε goes to zero. Let π^{ε,M}_a be the measure π^ε_a conditioned on [2,M]. Then DKL(π^{ε,M}_a‖π^{ε,M}_b) tends to DKL(π^ε_a‖π^ε_b) as M tends to infinity, for any a, b. It follows that for every N ∈ N there exist ε small enough and M large enough such that DKL(π^{ε,M}_3‖π^{ε,M}_1) > N and, for any other choice of a, b, DKL(π^{ε,M}_a‖π^{ε,M}_b) < 1/N.

Consider the experiment µ = (R, (µi)) where µ_{i0} = π^{ε,M}_3, µ_{j0} = π^{ε,M}_1, and µ_k = π^{ε,M}_2 for all k ∉ {i0, j0}, with ε and M chosen so that the above holds for N large enough. Then µ ∈ E since all measures have bounded support. It satisfies DKL(µ_{i0}‖µ_{j0}) > N and DKL(µi‖µj) < 1/N for every other pair ij.
Now let y ∈ D be the vector defined by µ. Then w · y > 0 for N large enough. A
contradiction.
B.2 Experiments and Log-likelihood Ratios
It will be convenient to consider, for each experiment, the distribution over log-likelihood
ratios with respect to the state i = 0 conditional on a state j. Given an experiment, we
define `i = `i0 for every i ∈ Θ. We say that a vector σ = (σ0, σ1, . . . , σn) ∈ P(Rn)n+1 of
measures is derived from the experiment (S, (µi)) if for every i = 0, 1, . . . , n,
σi(E) = µi ({s : (`1(s), . . . , `n(s)) ∈ E}) for all measurable E ⊆ Rn
That is, σi is the distribution of the vector (`1, . . . , `n) of log-likelihood ratios (with respect
to state 0) conditional on state i. There is a one-to-one relation between the vector σ and
the collection (µ̄i) of distributions defined in the main text. Notice that `ij = `i0−`j0 almost
surely, hence knowing the distribution of (`0i)i∈Θ is enough to recover the distribution
of (`ij)i,j∈Θ. Nevertheless, working directly with σ (rather than (µ̄i)) will simplify the
notation considerably.
We call a vector σ ∈ P(Rn)n+1 admissible if it is derived from some experiment. The
next result provides a straightforward characterization of admissible vectors of measures.
Lemma 3. A vector of measures σ = (σ0, σ1, . . . , σn) is admissible if and only if the measures are mutually absolutely continuous and, for every i, satisfy (dσi/dσ0)(ξ) = e^{ξi} for σi-almost every ξ ∈ Rn.
Proof. If (σ0, σ1, . . . , σn) is admissible then there exists an experiment µ = (S, (µi)) such that for any measurable E ⊆ Rn

∫_E e^{ξi} dσ0(ξ) = ∫ 1_E((`1(s), . . . , `n(s))) e^{`i(s)} dµ0(s) = ∫ 1_E((`1(s), . . . , `n(s))) dµi(s),

where 1_E is the indicator function of E. So ∫_E e^{ξi} dσ0(ξ) = σi(E) for every measurable E ⊆ Rn. Hence e^{ξi} is a version of dσi/dσ0.

Conversely, assume (dσi/dσ0)(ξ) = e^{ξi} for σi-almost every ξ ∈ Rn. Define an experiment (Rn, (µi)) where µi = σi for every i. This experiment is such that `i(ξ) = ξi for every i > 0. Hence, for i > 0, µi({ξ : (`1(ξ), . . . , `n(ξ)) ∈ E}) is equal to

∫ 1_E((`1(ξ), . . . , `n(ξ))) e^{ξi} dσ0(ξ) = ∫ 1_E(ξ) e^{ξi} dσ0(ξ) = σi(E),

and similarly µ0({ξ : (`1(ξ), . . . , `n(ξ)) ∈ E}) = σ0(E). So (σ0, . . . , σn) is admissible.
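Lemma 3 can be illustrated concretely for two states (n = 1). In the sketch below (our own construction; the support points are arbitrary), σ0 is chosen so that ∫ e^ξ dσ0 = 1, which is exactly what makes the measure σ1 with density e^ξ a probability measure:

```python
import math

# Hypothetical support points for sigma0 (any a < 0 < b works here).
a, b = -1.0, 0.5

# Choose the weight p on a so that p*e^a + (1-p)*e^b = 1, i.e. ∫ e^ξ dσ0 = 1.
p = (math.exp(b) - 1) / (math.exp(b) - math.exp(a))
sigma0 = {a: p, b: 1 - p}

# sigma1 is defined through the density dσ1/dσ0(ξ) = e^ξ of Lemma 3.
sigma1 = {x: w * math.exp(x) for x, w in sigma0.items()}

total = sum(sigma1.values())  # equals 1: sigma1 is a probability measure
```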
B.3 Properties of Cumulants
The purpose of this section is to formally describe cumulants and their relation to moments.
We follow Leonov and Shiryaev (1959) and (Shiryaev, 1996, p. 289). Given a vector ξ ∈ Rn and an integral vector α ∈ Nn we write ξ^α = ξ1^{α1} ξ2^{α2} · · · ξn^{αn} and use the notational conventions α! = α1!α2! · · ·αn! and |α| = α1 + · · ·+ αn.
Let A = {0, . . . , N}^n \ {(0, . . . , 0)}, for some constant N ∈ N greater than or equal to 1. For every probability measure σ1 ∈ P(Rn) and ξ ∈ Rn, let ϕσ1(ξ) = ∫_{Rn} e^{i⟨z,ξ⟩} dσ1(z) denote the characteristic function of σ1 evaluated at ξ. We denote by P_A ⊆ P(Rn) the subset of measures σ1 such that ∫_{Rn} |ξ^α| dσ1(ξ) < ∞ for every α ∈ A. Every σ1 ∈ P_A is such that in a neighborhood of 0 ∈ Rn the cumulant generating function log ϕσ1(ξ) is well defined and the partial derivatives

∂^{|α|}/(∂ξ1^{α1} ∂ξ2^{α2} · · · ∂ξn^{αn}) log ϕσ1(ξ)

exist and are continuous for every α ∈ A.
For every σ1 ∈ P_A and α ∈ A let κσ1(α) be defined as

κσ1(α) = i^{−|α|} ∂^{|α|}/(∂ξ1^{α1} ∂ξ2^{α2} · · · ∂ξn^{αn}) log ϕσ1(0).

With slight abuse of terminology, we refer to κσ1 ∈ R^A as the vector of cumulants of σ1. In addition, for every σ1 ∈ P_A and α ∈ A we denote by mσ1(α) = ∫_{Rn} ξ^α dσ1(ξ) the mixed moment of σ1 of order α and refer to mσ1 ∈ R^A as the vector of moments of σ1.
Given two measures σ1, σ2 ∈ P(Rn) we denote by σ1 ∗ σ2 ∈ P(Rn) the corresponding
convolution.
Lemma 4. For every σ1, σ2 ∈ PA, and α ∈ A, κσ1∗σ2(α) = κσ1(α) + κσ2(α).
Proof. The result follows from the well known fact that ϕσ1∗σ2(ξ) = ϕσ1(ξ)ϕσ2(ξ) for every
ξ ∈ Rn.
The next result, due to Leonov and Shiryaev (1959), establishes a one-to-one relation between the moments {mσ1(α) : α ∈ A} and the cumulants {κσ1(α) : α ∈ A} of a probability measure σ1 ∈ P_A. Given α ∈ A, let Λ(α) be the set of all ordered collections (λ¹, . . . , λ^q) of non-zero vectors in Nn such that ∑_{p=1}^{q} λ^p = α.
Theorem 2. For every σ1 ∈ P_A and α ∈ A,

1. mσ1(α) = ∑_{(λ¹,...,λ^q)∈Λ(α)} (1/q!) · α!/(λ¹! · · · λ^q!) · ∏_{p=1}^{q} κσ1(λ^p)

2. κσ1(α) = ∑_{(λ¹,...,λ^q)∈Λ(α)} ((−1)^{q−1}/q) · α!/(λ¹! · · · λ^q!) · ∏_{p=1}^{q} mσ1(λ^p)
B.4 Admissible Measures and the Cumulants Manifold
We denote by A the set of vectors of measures σ = (σ0, σ1, . . . , σn) that are admissible and such that σi ∈ P_A for every i. To each σ ∈ A we associate the vector

mσ = (mσ0, mσ1, . . . , mσn) ∈ R^d

of dimension d = (n+ 1)|A|. Similarly, we define

κσ = (κσ0, κσ1, . . . , κσn) ∈ R^d.

In this section we study properties of the sets M = {mσ : σ ∈ A} and K = {κσ : σ ∈ A}.
Lemma 5. Let I and J be disjoint finite sets and let (φk)_{k∈I∪J} be a collection of real valued functions defined on Rn. Assume {φk : k ∈ I ∪ J} ∪ {1_{Rn}} is linearly independent and the unit vector (1, . . . , 1) ∈ R^J belongs to the interior of {(φk(ξ))_{k∈J} : ξ ∈ Rn}. Then

C = {(∫_{Rn} φk dσ1)_{k∈I} : σ1 ∈ P(Rn) has finite support and ∫_{Rn} φk dσ1 = 1 for all k ∈ J}

is a convex subset of R^I with nonempty interior.
Proof. To ease the notation, let Y = Rn and denote by Po the set of probability measures on Y with finite support. Consider F = {φk : k ∈ I ∪ J} ∪ {1_Y} as a subset of the vector space R^Y, where the latter is endowed with the topology of pointwise convergence. The topological dual of R^Y is the vector space of signed measures on Y with finite support. Let

D = {(∫_Y φk dσ1)_{k∈I∪J} : σ1 ∈ Po} ⊆ R^{I∪J}.

Fix k ∈ I ∪ J. Since φk does not belong to the linear space V generated by {φ ∈ F : φ ≠ φk}, there exists a signed measure

ρ = ασ1 − βσ2,

where α, β ≥ 0, α + β > 0 and σ1, σ2 ∈ Po, such that ρ satisfies ∫ φk dρ > 0 ≥ ∫ φ dρ for every φ ∈ V. Because V is a linear space, this implies ∫ φ dρ = 0 for every φ ∈ V. By taking φ = 1_Y, we obtain ρ(Y) = 0. Hence α = β. Therefore ∫ φk dσ1 > ∫ φk dσ2 and ∫ φm dσ1 = ∫ φm dσ2 for every φm in F that is distinct from φk. Because k is arbitrary, it follows that the linear space generated by D equals R^{I∪J}. Because D is convex and spans R^{I∪J}, D has nonempty interior.
Now consider the hyperplane

H = {z ∈ R^{I∪J} : zk = 1 for all k ∈ J}.

Let D° be the interior of D. It remains to show that the hyperplane H satisfies H ∩ D° ≠ ∅. This will imply that the projection of H ∩ D on R^I, which equals C, has non-empty interior. Let w ∈ D°. By assumption, (1, . . . , 1) ∈ R^J is in the interior of {(φk(ξ))_{k∈J} : ξ ∈ Y}. Hence, there exist α ∈ (0, 1) small enough and ξ ∈ Y such that φk(ξ) = 1/(1−α) − (α/(1−α))wk for every k ∈ J. Define z = αw + (1−α)(φk(ξ))_{k∈I∪J} ∈ D. Then zk = 1 for every k ∈ J. In addition, because w ∈ D° and D is convex, z ∈ D° as well. Hence z ∈ H ∩ D°.
Lemma 6. The set M = {mσ : σ ∈ A} has nonempty interior.

Proof. For every α ∈ A define the functions (φ_{i,α})_{i∈Θ} as

φ_{0,α}(ξ) = ξ^α and φ_{i,α}(ξ) = ξ^α e^{ξi} for all i > 0.

Define ψ0 = 1_{Rn} and ψi(ξ) = e^{ξi} for all i > 0. It is immediate to verify that

{φ_{i,α} : i ∈ Θ, α ∈ A} ∪ {ψi : i ∈ Θ}

is a linearly independent set of functions. In addition, (1, . . . , 1) ∈ Rn is in the interior of {(e^{ξ1}, . . . , e^{ξn}) : ξ ∈ Rn}. Lemma 5 implies that the set

C = {(∫_{Rn} φ_{i,α} dσ0)_{i∈Θ, α∈A} : σ0 ∈ P(Rn) has finite support and ∫_{Rn} e^{ξi} dσ0(ξ) = 1 for all i}

has nonempty interior. Given σ0 as in the definition of C, construct a vector σ = (σ0, σ1, . . . , σn) where for each i > 0 the measure σi is defined so that (dσi/dσ0)(ξ) = e^{ξi}, σ0-almost surely. Then Lemma 3 implies σ is admissible. Because each σi has finite support, σ ∈ A. In addition,

mσ = (∫_{Rn} φ_{i,α} dσ0)_{i∈Θ, α∈A},

hence C ⊆ M. Thus, M has nonempty interior.
Theorem 3. The set K = {κσ : σ ∈ A} has nonempty interior.
Proof. Theorem 2 establishes the existence of a continuous one-to-one map mσ0 ↦ κσ0, σ0 ∈ P_A. Therefore, we can define a one-to-one function H : M → R^d such that H(mσ) = κσ for every σ ∈ A. Lemma 6 shows there exists an open set U ⊆ R^d included in M. Let H_U be the restriction of H to U. Then H_U satisfies all the assumptions of Brouwer’s Invariance of Domain Theorem,29 which implies that H_U(U) is an open subset of R^d. Since H(M) ⊆ K, it follows that K has nonempty interior.
29Brouwer (1911). See also (Tao, 2011, Theorem 2).
Appendix C Automatic continuity in the Cauchy problem for subsemigroups of Rd.
A subsemigroup of Rd is a subset S ⊆ Rd that is closed under addition, so that x+ y ∈ S
for all x, y ∈ S. We say that a map F : S → R+ is additive if F (x + y) = F (x) + F (y)
for all x, y, x+ y ∈ S. We say that F is linear if there exists (a1, . . . , ad) ∈ Rd such that
F (x) = F (x1, . . . , xd) = a1x1 + · · ·+ adxd for all x ∈ S.
We can now state the main result of this section:
Theorem 4. Let S be a subsemigroup of Rd with a nonempty interior. Then every additive
function F : S → R+ is linear.
Before proving the theorem we will establish a number of claims.
Claim 1. Let S be a subsemigroup of Rd with a nonempty interior. Then there exists an
open ball B ⊂ Rd such that aB ⊂ S for all real a ≥ 1.
Proof. Let B0 be an open ball contained in S, with center x0 and radius r. Given a positive integer k, note that kB0 is the ball of radius kr centered at kx0, and that it is contained in S, since S is a semigroup. Choose a positive integer M ≥ 4 such that (2/3)Mr > ‖x0‖, and let B be the open ball with center at Mx0 and radius r (see Figure 3). Fix any a ≥ 1, and write a = (1/M)(n + γ) for some integer n ≥ M and γ ∈ [0, 1). Then (n/M)B is the ball of radius (n/M)r centered at nx0, which is contained in nB0, since nB0 also has center nx0, but has a larger radius nr. So (n/M)B ⊂ nB0. We claim that furthermore ((n+1)/M)B is also contained in nB0. To see this, observe that the center of ((n+1)/M)B is (n+1)x0 and its radius is ((n+1)/M)r. Hence the center of ((n+1)/M)B is at distance ‖x0‖ from the center of nB0, and so the furthest point in ((n+1)/M)B is at distance ‖x0‖ + ((n+1)/M)r from the center of nB0. But the radius of nB0 is

nr = (2/3)nr + (1/3)nr ≥ (2/3)Mr + (1/3)nr > ‖x0‖ + ((n+1)/M)r,

where the first inequality follows since n ≥ M, and the second since (2/3)Mr > ‖x0‖ and (1/3)nr > ((n+1)/M)r for n ≥ M ≥ 4. So nB0 indeed contains both (n/M)B and ((n+1)/M)B. Thus it also contains aB, and so S contains aB.
Figure 3: Illustration of the proof of Claim 1. The dark ball B is contained in the light
ones, and it is apparent from this image that so is any multiple of B by a ≥ 1.
Claim 2. Let S be a subsemigroup of Rd with a nonempty interior. Let F : S → R+ be
additive and satisfy F (ay) = aF (y) for every y ∈ S and a ∈ R+ such that ay ∈ S. Then
F is linear.
Proof. If S does not include zero, then without loss of generality we add zero to it and set
F (0) = 0. Let B be an open ball such that aB ⊂ S for all a ≥ 1; the existence of such a
ball is guaranteed by Claim 1. Choose a basis {b1, . . . , bd} of Rd that is a subset of B, and
let x = β1b1 + · · ·+ βdbd be an arbitrary element of S. Let b = max{1/|βi| : βi ≠ 0}, and
let a = max {1, b}. Then
F (ax) = F (aβ1b1 + · · ·+ aβdbd).
Assume without loss of generality that for some 0 ≤ k ≤ d it holds that the first k coefficients
βi are non-negative, and the rest are negative. Then for i ≤ k it holds that aβibi ∈ S and
for i > k it holds that −aβibi ∈ S; this follows from the defining property of the ball B,
since each bi is in B, and since |aβi| ≥ 1. Hence we can add F (−aβk+1bk+1 − · · · − aβdbd)
to both sides of the above displayed equation, and then by additivity,
F (ax) + F (−aβk+1bk+1 − · · · − aβdbd)
= F (aβ1b1 + · · ·+ aβdbd) + F (−aβk+1bk+1 − · · · − aβdbd)
= F (aβ1b1 + · · ·+ aβkbk).
Using additivity again yields
F (ax) + F (−aβk+1bk+1) + · · ·+ F (−aβdbd) = F (aβ1b1) + · · ·+ F (aβkbk).
Applying now the claim hypothesis that F (ay) = aF (y) whenever y, ay ∈ S yields
aF (x) + (−aβk+1)F (bk+1) + · · ·+ (−aβd)F (bd) = aβ1F (b1) + · · ·+ aβkF (bk).
Rearranging and dividing by a, we arrive at
F (x) = β1F (b1) + · · ·+ βdF (bd).
We can therefore extend F to a function that satisfies this on all of Rd, which is then
clearly linear.
Claim 3. Let B be an open ball in Rd, and let B be the semigroup given by ∪a≥1aB. Then
every additive F : B → R+ is linear.
Proof. Fix any x ∈ B, and assume ax ∈ B for some a ∈ R+. Since B is open, by Claim
2 it suffices to show that F (ax) = aF (x). The defining property of B implies that the
intersection of B and the ray {bx : b ≥ 0} is of the form {bx : b > a0} for some a0 ≥ 0.
By the additive property of F , we have that F (qx) = qF (x) for every rational q > a0.
Furthermore, if b > b′ > a0 then n(b− b′)x ∈ S for n large enough. Hence
F(bx) = (1/n) F(nbx)
      = (1/n) F(nb′x + n(b − b′)x)
      = (1/n) F(nb′x) + (1/n) F(n(b − b′)x)
      = F(b′x) + (1/n) F(n(b − b′)x)
      ≥ F(b′x).
Thus the map f : (a0,∞) → R+ given by f(b) = F(bx) is monotone increasing, and its restriction to the rationals is linear. A monotone function that agrees with a linear function on the rationals must itself be linear (for any b, squeeze f(b) between qF(x) and q′F(x) over rationals q < b < q′), so f is linear, and hence F(ax) = aF(x).
Given these claims, we are ready to prove our theorem.
Proof of Theorem 4. Fix any x ∈ S, and assume ax ∈ S for some a ∈ R+. By Claim 2 it
suffices to show that F (ax) = aF (x). Let B be a ball with the property described in Claim
1, and denote its center by x0 and its radius by r. As in Claim 3, let B be the semigroup
given by ∪a≥1aB; note that B ⊆ S. Then there is some y such that x+y, a(x+y), y, ay ∈ B;
in fact, we can take y = bx0 for b = max{a, 1/a, ‖x‖/r} (see Figure 4). Then, on the one
hand, by additivity,
F (ax+ ay) = F (ax) + F (ay).
On the other hand, since x+ y, a(x+ y), y, ay ∈ B, and since, by Claim 3, the restriction of
F to B is linear, we have that
F (ax+ ay) = F (a(x+ y)) = aF (x+ y) = aF (x) + aF (y) = aF (x) + F (ay),
thus
F (ax) + F (ay) = aF (x) + F (ay)
and so F (ax) = aF (x).
Figure 4: An illustration of the proof of Theorem 4.
Appendix D Proof of Theorem 1
Throughout this section we maintain the notation and terminology introduced in §B. It
follows from the results in §B.1 that a LLR cost satisfies Axioms 1-4. For the rest of this
section, we denote by C a cost function that satisfies the axioms. Let N be such that C is
uniformly continuous with respect to the distance dN . We use the same N to define the
set A = {0, . . . , N}^n \ {(0, . . . , 0)} introduced in §B.3.
Lemma 7. Let µ and ν be two experiments that induce the same vector σ ∈ A. Then
C(µ) = C(ν).
Proof. Conditional on each k ∈ Θ, the two experiments induce the same distribution for
(`0i)i∈Θ. Because `ij = `i0 − `j0 almost surely, it follows that conditional on each state
the two experiments induce the same distribution over the vector of all log-likelihood
ratios (`ij)i,j∈Θ. Hence, µ̄i = ν̄i for every i. Hence, by Lemma 1 the two experiments are
equivalent in the Blackwell order. The result now follows directly from Axiom 1.
Lemma 7 implies we can define a function c : A → R+ as c(σ) = C(µ) where µ is an
experiment inducing σ.
Lemma 8. Consider two experiments µ = (S, (µi)) and ν = (T, (νi)) inducing σ and τ in
A, respectively. Then
1. The experiment µ⊗ ν induces the vector (σ0 ∗ τ0, . . . , σn ∗ τn) ∈ A;
2. The experiment α · µ induces the measure ασ + (1− α)δ0.
Proof. (1) For every E ⊆ Rⁿ and every state i,
\[
(\mu_i \times \nu_i)\big(\{(s,t) : (\ell_1(s,t),\dots,\ell_n(s,t)) \in E\}\big)
= (\mu_i \times \nu_i)\Big(\Big\{(s,t) : \Big(\log\tfrac{d\mu_1}{d\mu_0}(s) + \log\tfrac{d\nu_1}{d\nu_0}(t),\ \dots,\ \log\tfrac{d\mu_n}{d\mu_0}(s) + \log\tfrac{d\nu_n}{d\nu_0}(t)\Big) \in E\Big\}\Big)
= (\sigma_i * \tau_i)(E)
\]
where the last equality follows from the definition of σi and τi. This concludes the proof of the claim.
(2) Immediate from the definition of α · µ.
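As a quick numerical sanity check of part (1), the following sketch (with made-up toy signal distributions — the names `mu`, `nu`, `llr` are ours, not the paper's) verifies that the log-likelihood ratio of a product experiment is the sum of the component log-likelihood ratios, which is exactly why the induced measure is the convolution σi ∗ τi:

```python
import math

# Toy binary-signal experiments over two states {0, 1}; the numbers are arbitrary.
# mu[i][s] is the probability of signal s conditional on state i.
mu = {0: {"a": 0.7, "b": 0.3}, 1: {"a": 0.4, "b": 0.6}}
nu = {0: {"x": 0.2, "y": 0.8}, 1: {"x": 0.5, "y": 0.5}}

def llr(exp, s):
    """Log-likelihood ratio log(d exp_1 / d exp_0) at signal realization s."""
    return math.log(exp[1][s] / exp[0][s])

# The product experiment draws (s, t) independently; its LLR decomposes as a sum,
# so the distribution of the LLR under each state is the convolution of the parts.
for s in mu[0]:
    for t in nu[0]:
        joint_llr = math.log((mu[1][s] * nu[1][t]) / (mu[0][s] * nu[0][t]))
        assert abs(joint_llr - (llr(mu, s) + llr(nu, t))) < 1e-12
```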
Lemma 9. The function c : A → R satisfies, for all σ, τ ∈ A and α ∈ [0, 1]:
1. c(σ0 ∗ τ0, . . . , σn ∗ τn) = c(σ) + c(τ);
2. c(ασ + (1− α)δ0) = αc(σ).
Proof. (1) Suppose µ induces σ and ν induces τ . Then C(µ) = c(σ), C(ν) = c(τ) and, by
Axiom 2 and Lemma 8, c(σ0 ∗ τ0, . . . , σn ∗ τn) = C(µ⊗ ν) = c(σ) + c(τ). Claim (2) follows
directly from Axiom 3 and Lemma 8.
Lemma 10. If σ, τ ∈ A satisfy mσ = mτ then c(σ) = c(τ).
Proof. Let µ and ν be two experiments inducing σ and τ , respectively. Let µ⊗r =
µ⊗ . . .⊗ µ be the experiment obtained as the r-th fold independent product of µ. Axioms
2 and 3 imply
C((1/r) · µ⊗r) = C(µ) and C((1/r) · ν⊗r) = C(ν)
In order to show that C(µ) = C(ν) we now prove that C((1/r) · µ⊗r)−C((1/r) · ν⊗r)→ 0
as r →∞. To simplify the notation let, for every r ∈ N,
µ[r] = (1/r) · µ⊗r and ν[r] = (1/r) · ν⊗r
Let σ[r] = (σ[r]0, . . . , σ[r]n) and τ [r] = (τ [r]0, . . . , τ [r]n) in A be the vectors of measures
induced by µ[r] and ν[r].
We claim that dN (µ[r], ν[r]) → 0 as r → ∞. First, notice that µ[r]i and ν[r]i assign probability (r − 1)/r to the zero vector 0 ∈ R^{(n+1)²}. Hence
\[
d_{tv}(\mu[r]_i, \nu[r]_i) = \sup_E \frac{1}{r}\,\big|\mu_i^{\otimes r}(E) - \nu_i^{\otimes r}(E)\big| \le \frac{1}{r}.
\]
For every α ∈ A we have
\[
M_i^{\mu[r]}(\alpha) = \int \ell_{10}^{\alpha_1} \cdots \ell_{n0}^{\alpha_n}\, d\mu[r]_i = \int_{\mathbb{R}^n} \xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}\, d\sigma[r]_i(\xi) = m_{\sigma[r]_i}(\alpha) \qquad (16)
\]
We claim that mσ[r] = mτ [r]. Theorem 2 shows the existence of a bijection H : M → K such that H(mυ) = κυ for every υ ∈ A. The experiment µ⊗r induces the vector (σ^{*r}_0, . . . , σ^{*r}_n) ∈ A, where σ^{*r}_i denotes the r-th fold convolution of σi with itself. Denote such a vector as σ^{*r}. Let τ^{*r} ∈ A be the corresponding vector induced by ν⊗r. Thus we have κσ = H(mσ) = H(mτ ) = κτ , and
\[
H(m_{\sigma^{*r}}) = \kappa_{\sigma^{*r}} = (\kappa_{\sigma_0^{*r}}, \dots, \kappa_{\sigma_n^{*r}}) = (r\kappa_{\sigma_0}, \dots, r\kappa_{\sigma_n}) = r\kappa_\sigma = r\kappa_\tau = \kappa_{\tau^{*r}} = H(m_{\tau^{*r}})
\]
Hence m_{σ^{*r}} = m_{τ^{*r}}. It now follows from
\[
m_{\sigma[r]_i}(\alpha) = \frac{1}{r}\, m_{\sigma_i^{*r}}(\alpha) + \frac{r-1}{r}\cdot 0
\]
that mσ[r] = mτ [r], concluding the proof of the claim.
Equation (16) therefore implies that M_i^{\mu[r]}(\alpha) = M_i^{\nu[r]}(\alpha). Thus
\[
d_N(\mu[r], \nu[r]) = \max_i\, d_{tv}(\mu[r]_i, \nu[r]_i) \le \frac{1}{r}.
\]
Hence dN (µ[r], ν[r]) converges to 0. Since C is uniformly continuous, C(µ[r]) − C(ν[r]) → 0; as C(µ[r]) = C(µ) and C(ν[r]) = C(ν) for every r, it follows that C(µ) = C(ν).
Lemma 11. There exists an additive function F : K → R such that c(σ) = F (κσ).
Proof. It follows from Lemma 10 that we can define a map G : M → R such that c(σ) = G(mσ) for every σ ∈ A. We can use Theorem 2 to define a bijection H : M → K such that H(mσ) = κσ. Hence F = G ◦ H⁻¹ satisfies c(σ) = F (κσ) for every σ. For every
σ, τ ∈ A, Lemmas 8 and 9 imply
F (κσ)+F (κτ ) = c(σ)+ c(τ) = c(σ0 ∗τ0, . . . , σn ∗τn) = F (κσ0∗τ0 , . . . , κσn∗τn) = F (κσ +κτ )
where the last equality follows from the additivity of the cumulants with respect to
convolution.
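The last step uses the additivity of cumulants under convolution. For the first two cumulants (mean and variance) this can be checked exactly on toy discrete measures; the pmfs below are arbitrary illustrative choices, not objects from the paper:

```python
from itertools import product

p = {0: 0.5, 1: 0.2, 3: 0.3}   # pmf of X (arbitrary toy numbers)
q = {1: 0.6, 2: 0.4}           # pmf of Y, independent of X

# Exact pmf of X + Y: the convolution p * q.
conv = {}
for (x, px), (y, qy) in product(p.items(), q.items()):
    conv[x + y] = conv.get(x + y, 0.0) + px * qy

def first_two_cumulants(pmf):
    """Return (kappa_1, kappa_2) = (mean, variance) of a finite pmf."""
    m1 = sum(k * v for k, v in pmf.items())
    m2 = sum(k * k * v for k, v in pmf.items())
    return m1, m2 - m1 * m1

kx, ky, kz = map(first_two_cumulants, (p, q, conv))
# Cumulants add under convolution: kappa_{p*q} = kappa_p + kappa_q.
assert abs(kz[0] - (kx[0] + ky[0])) < 1e-12
assert abs(kz[1] - (kx[1] + ky[1])) < 1e-12
```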
Lemma 12. There exist (λi,α)i∈Θ\{0},α∈A in R such that
\[
c(\sigma) = \sum_{i\in\Theta} \sum_{\alpha\in A} \lambda_{i,\alpha}\, \kappa_{\sigma_i}(\alpha) \quad \text{for every } \sigma \in A.
\]
Proof. As implied by Theorem 3, the set K ⊆ Rd has nonempty interior. It is closed under
addition, i.e. a subsemigroup. We can therefore apply Theorem 4 and conclude that the
function F in Lemma 11 is linear.
Lemma 13. Let (λi,α)i∈Θ\{0},α∈A be as in Lemma 12. Then
\[
c(\sigma) = \sum_{i\in\Theta} \sum_{\alpha\in A} \lambda_{i,\alpha}\, m_{\sigma_i}(\alpha) \quad \text{for every } \sigma \in A
\]
Proof. Fix σ ∈ A. Given t ∈ (0, 1), the Leonov–Shiryaev identity implies
\[
c(t\sigma + (1-t)\delta_0) = \sum_{i\in\Theta}\sum_{\alpha\in A} \lambda_{i,\alpha} \sum_{(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha)} \frac{(-1)^{q-1}}{q}\, \frac{\alpha!}{\lambda^1!\cdots\lambda^q!} \prod_{p=1}^{q} m_{t\sigma_i+(1-t)\delta_0}(\lambda^p)
\]
\[
= \sum_{i\in\Theta}\sum_{\alpha\in A} \lambda_{i,\alpha} \sum_{(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha)} \frac{(-1)^{q-1}}{q}\, \frac{\alpha!}{\lambda^1!\cdots\lambda^q!}\; t^q \prod_{p=1}^{q} m_{\sigma_i}(\lambda^p)
\]
\[
= \sum_{i\in\Theta}\sum_{\alpha\in A} \lambda_{i,\alpha} \sum_{\lambda=(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha)} \rho(\lambda)\, t^q \prod_{p=1}^{q} m_{\sigma_i}(\lambda^p)
\]
where for every tuple λ = (λ¹, . . . , λ^q) ∈ Λ(α) we let
\[
\rho(\lambda) = \frac{(-1)^{q-1}}{q}\, \frac{\alpha!}{\lambda^1!\cdots\lambda^q!}
\]
Lemma 9 implies c(σ) = (1/t) c(tσ + (1 − t)δ0) for every t. Hence
\[
c(\sigma) = \sum_{i\in\Theta}\sum_{\alpha\in A} \lambda_{i,\alpha} \sum_{\lambda=(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha)} \rho(\lambda)\, t^{q-1} \prod_{p=1}^{q} m_{\sigma_i}(\lambda^p)
\]
for all t ∈ (0, 1).
By considering the limit t ↓ 0, we have t^{q−1} → 0 whenever q ≠ 1. Therefore
\[
c(\sigma) = \sum_{i\in\Theta}\sum_{\alpha\in A} \lambda_{i,\alpha}\, m_{\sigma_i}(\alpha) \quad \text{for all } \sigma \in A.
\]
Lemma 14. Let (λi,α)i∈Θ\{0},α∈A be as in Lemmas 12 and 13. Then, for every i, if |α| > 1
then λi,α = 0.
Proof. Let γ = max {|α| : λ_{i,α} ≠ 0 for some i}. Assume, as a way of contradiction, that γ > 1. Fix σ ∈ A. Theorem 2 implies
\[
c(\sigma) = \sum_{i\in\Theta}\sum_{\alpha\in A} \lambda_{i,\alpha}\, m_{\sigma_i}(\alpha)
= \sum_{i\in\Theta}\sum_{\alpha\in A} \lambda_{i,\alpha} \sum_{(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha)} \frac{1}{q!}\, \frac{\alpha!}{\lambda^1!\cdots\lambda^q!} \prod_{p=1}^{q} \kappa_{\sigma_i}(\lambda^p)
\]
Let σ^{*r} = (σ^{*r}_0, . . . , σ^{*r}_n), where each σ^{*r}_i is the r-th fold convolution of σi with itself. Hence, using the fact that κ_{σ_i^{*r}} = rκ_{σ_i} for all r ∈ N,
\[
c(\sigma^{*r}) = \sum_{i\in\Theta}\sum_{\alpha\in A} \lambda_{i,\alpha} \sum_{(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha)} \frac{1}{q!}\, \frac{\alpha!}{\lambda^1!\cdots\lambda^q!}\; r^q \prod_{p=1}^{q} \kappa_{\sigma_i}(\lambda^p) \qquad (17)
\]
By the additivity of c, c(σ^{*r}) = rc(σ). Hence, because γ > 1, c(σ^{*r})/r^γ → 0 as r → ∞. Therefore, dividing (17) by r^γ we obtain
\[
\sum_{i\in\Theta}\sum_{\alpha\in A} \lambda_{i,\alpha} \sum_{(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha)} \frac{1}{q!}\, \frac{\alpha!}{\lambda^1!\cdots\lambda^q!}\; r^{q-\gamma} \prod_{p=1}^{q} \kappa_{\sigma_i}(\lambda^p) \;\to\; 0 \quad \text{as } r \to \infty. \qquad (18)
\]
We now show that (18) leads to a contradiction. By construction, if (λ¹, . . . , λ^q) ∈ Λ(α) then q ≤ |α|. Hence q ≤ γ whenever λ_{i,α} ≠ 0. So, in equation (18) we have r^{q−γ} → 0 as r → ∞ whenever q < γ. Hence (18) implies
\[
\sum_{i\in\Theta}\ \sum_{\alpha\in A : |\alpha|=\gamma} \lambda_{i,\alpha} \sum_{(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha),\, q=\gamma} \frac{1}{q!}\, \frac{\alpha!}{\lambda^1!\cdots\lambda^q!} \prod_{p=1}^{q} \kappa_{\sigma_i}(\lambda^p) = 0.
\]
If q = γ and λ_{i,α} > 0 then γ = |α|. In this case, in order for λ = (λ¹, . . . , λ^q) to satisfy Σ_{p=1}^q λ^p = α, it must be that each λ^p is a unit vector. Every such λ satisfies³⁰
\[
\prod_{p=1}^{q} \kappa_{\sigma_i}(\lambda^p) = \Big(\int_{\mathbb{R}^n} \xi_1\, d\sigma_i(\xi)\Big)^{\alpha_1} \cdots \Big(\int_{\mathbb{R}^n} \xi_n\, d\sigma_i(\xi)\Big)^{\alpha_n}
\]
and
\[
\sum_{(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha),\, q=|\alpha|} \frac{1}{q!}\, \frac{\alpha!}{\lambda^1!\cdots\lambda^q!} = \sum_{(\lambda^1,\dots,\lambda^q)\in\Lambda(\alpha),\, q=|\alpha|} \frac{\alpha!}{|\alpha|!} = 1
\]
so we obtain that
\[
\sum_{i\in\Theta}\ \sum_{\alpha\in A : |\alpha|=\gamma} \lambda_{i,\alpha} \Big(\int_{\mathbb{R}^n} \xi_1\, d\sigma_i(\xi)\Big)^{\alpha_1} \cdots \Big(\int_{\mathbb{R}^n} \xi_n\, d\sigma_i(\xi)\Big)^{\alpha_n} = 0. \qquad (19)
\]
By replicating the argument in the proof of Lemma 6 we obtain that the set
\[
\Big\{ \Big(\int_{\mathbb{R}^n} \xi_j\, d\sigma_i(\xi)\Big)_{i,j\in\Theta,\, j>0} : \sigma \in A \Big\} \subseteq \mathbb{R}^{(n+1)n}
\]
contains an open set U. Consider now the function f : R^{(n+1)n} → R defined as
\[
f(z) = \sum_{i\in\Theta}\ \sum_{\alpha\in A : |\alpha|=\gamma} \lambda_{i,\alpha}\, z_{i,1}^{\alpha_1} \cdots z_{i,n}^{\alpha_n}, \qquad z \in \mathbb{R}^{(n+1)n}
\]
Then (19) implies that f equals 0 on U. Hence, for every z ∈ U, i ∈ Θ and α ∈ A such that |α| = γ,
\[
\lambda_{i,\alpha} = \frac{\partial^\gamma}{\partial^{\alpha_1} z_{i,1} \cdots \partial^{\alpha_n} z_{i,n}}\, f(z) = 0
\]
This contradicts the assumption that γ > 1 and concludes the proof.
For every j ∈ {1, . . . , n} let 1_j ∈ A be the corresponding unit vector. We write λ_{ij} for λ_{i,1_j}. Lemma 14 implies that for every distribution σ ∈ A induced by an experiment (S, (µi)), the function c satisfies
\[
c(\sigma) = \sum_{i\in\Theta}\ \sum_{j\in\{1,\dots,n\}} \lambda_{ij} \int_{\mathbb{R}^n} \xi_j\, d\sigma_i(\xi)
= \sum_{i\in\Theta}\ \sum_{j\in\{1,\dots,n\}} \lambda_{ij} \int_S \log\frac{d\mu_j}{d\mu_0}(s)\, d\mu_i(s)
\]
\[
= \sum_{i\in\Theta}\ \sum_{j\in\{1,\dots,n\}} \lambda_{ij} \int_S \Big( \log\frac{d\mu_j}{d\mu_0}(s) + \log\frac{d\mu_0}{d\mu_i}(s) - \log\frac{d\mu_0}{d\mu_i}(s) \Big)\, d\mu_i(s)
\]

³⁰It follows from the definition of cumulant that for every unit vector 1_j ∈ Rⁿ, κ_{σ_i}(1_j) = ∫_{Rⁿ} ξ_j dσ_i(ξ).
Hence
\[
c(\sigma) = \sum_{i\in\Theta}\ \sum_{j\in\{1,\dots,n\}} \lambda_{ij} \int_S \log\frac{d\mu_j}{d\mu_i}(s)\, d\mu_i(s) + \sum_{i\in\Theta} \Big( -\sum_{j\in\{1,\dots,n\}} \lambda_{ij} \Big) \int_S \log\frac{d\mu_0}{d\mu_i}(s)\, d\mu_i(s)
\]
\[
= \sum_{i,j\in\Theta} \beta_{ij} \int_S \log\frac{d\mu_i}{d\mu_j}(s)\, d\mu_i(s)
\]
where in the last step, for every i, we set βij = −λij if j ≠ 0 and βi0 = Σ_{j≠0} λij.
It remains to show that the coefficients (βij) are positive and unique. Because C takes
positive values, Lemma 2 immediately implies βij ≥ 0 for all i, j. The same Lemma easily
implies that the coefficients are unique given C.
Appendix E Additional Proofs
Proof of Proposition 2. Consider a signal (S, (µi)). Recall that ℓi = dµi/dµ0. The posterior probability of state i given a signal realization s is, almost surely,
\[
p_i(s) = \frac{q_i\, d\mu_i}{d\big(\sum_{j\in\Theta} q_j \mu_j\big)}(s) = \frac{q_i\, \ell_i(s)}{\sum_{j\in\Theta} q_j\, \ell_j(s)}.
\]
Thus p_i(s)/p_j(s) = q_i ℓ_i(s)/(q_j ℓ_j(s)). We denote by µ̄ = Σ_{i∈Θ} q_i µ_i the unconditional distribution over S.
Letting γij = βij/qi we have
\[
C(\mu) = \sum_{i,j\in\Theta} \gamma_{ij}\, q_i \int_S \log\frac{d\mu_i}{d\mu_j}(s)\, d\mu_i(s)
= \int_S \sum_{i,j\in\Theta} \gamma_{ij} \log\frac{\ell_i(s)}{\ell_j(s)}\; q_i\, \ell_i(s)\, d\mu_0(s)
= \int_S \sum_{i,j\in\Theta} \gamma_{ij} \log\!\Big(\frac{p_i(s)\, q_j}{p_j(s)\, q_i}\Big)\, \frac{q_i\, \ell_i(s)}{\sum_k q_k\, \ell_k(s)}\, d\bar\mu(s)
\]
which equals
\[
\int_S \sum_{i,j\in\Theta} \gamma_{ij} \Big[ \log\frac{p_i(s)}{p_j(s)} - \log\frac{q_i}{q_j} \Big] \underbrace{\frac{q_i\, \ell_i(s)}{\sum_k q_k\, \ell_k(s)}}_{p_i(s)}\, d\bar\mu(s)
\]
\[
= \int_S \sum_{i,j\in\Theta} \gamma_{ij}\, p_i(s) \log\frac{p_i(s)}{p_j(s)}\, d\bar\mu(s) - \int_S \sum_{i,j\in\Theta} \gamma_{ij}\, p_i(s) \log\frac{q_i}{q_j}\, d\bar\mu(s)
= \int \sum_{i,j\in\Theta} \gamma_{ij}\, p_i \log\frac{p_i}{p_j}\, d\pi_\mu(p) - \sum_{i,j\in\Theta} \beta_{ij} \log\frac{q_i}{q_j}.
\]
The proof is then concluded by applying the definition of F .
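The algebra above can be checked numerically. The sketch below is a toy two-state, binary-signal example with made-up values of q, µ and β (not objects from the paper); it verifies that Σ_ij β_ij D_KL(µ_i‖µ_j) equals the posterior-based expectation of Σ_ij γ_ij p_i log(p_i/p_j) minus the constant prior term Σ_ij γ_ij q_i log(q_i/q_j):

```python
import math

q = [0.3, 0.7]                    # prior (toy numbers)
mu = [[0.8, 0.2], [0.35, 0.65]]   # mu[i][s]: probability of signal s in state i
beta = [[0.0, 1.3], [0.7, 0.0]]   # cost coefficients (arbitrary, nonnegative)
gamma = [[beta[i][j] / q[i] for j in range(2)] for i in range(2)]

# Left-hand side: C(mu) = sum_ij beta_ij * KL(mu_i || mu_j).
C = sum(beta[i][j] * sum(mu[i][s] * math.log(mu[i][s] / mu[j][s]) for s in range(2))
        for i in range(2) for j in range(2) if i != j)

# Right-hand side: expectation of sum_ij gamma_ij p_i log(p_i / p_j) under the
# unconditional signal distribution, minus the constant prior term.
posterior_term = 0.0
for s in range(2):
    pbar = sum(q[i] * mu[i][s] for i in range(2))      # marginal probability of s
    p = [q[i] * mu[i][s] / pbar for i in range(2)]     # posterior given s
    posterior_term += pbar * sum(gamma[i][j] * p[i] * math.log(p[i] / p[j])
                                 for i in range(2) for j in range(2) if i != j)
prior_term = sum(gamma[i][j] * q[i] * math.log(q[i] / q[j])
                 for i in range(2) for j in range(2) if i != j)

assert abs(C - (posterior_term - prior_term)) < 1e-9
```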
Proof of Proposition 5. We prove a slightly stronger result: Suppose min{βij , βji} ≥ 1/d(i, j)^γ for any i, j ∈ Θ. Then for every action a, and every pair of states i, j,
\[
\big|\mu^\star_i(a) - \mu^\star_j(a)\big| \le \sqrt{\|u\|}\; d(i,j)^{\gamma/2}.
\]
Clearly, the cost of the optimal experiment C(µ⋆) cannot exceed ‖u‖∞. Thus for any action â ∈ A and any pair of states k, m
\[
\|u\| \ge C(\mu^\star) = \sum_{i,j} \beta_{ij} \sum_{a\in A} \mu_i(a) \log\frac{\mu_i(a)}{\mu_j(a)}
\ge \sum_{i,j} \min\{\beta_{ij}, \beta_{ji}\} \sum_{a\in A} \Big( \mu_i(a)\log\frac{\mu_i(a)}{\mu_j(a)} + \mu_j(a)\log\frac{\mu_j(a)}{\mu_i(a)} \Big)
\]
\[
= \sum_{i,j} \min\{\beta_{ij}, \beta_{ji}\} \sum_{a\in A} |\mu_i(a) - \mu_j(a)| \times \Big| \log\frac{\mu_i(a)}{\mu_j(a)} \Big|
\ge \sum_{i,j} \min\{\beta_{ij}, \beta_{ji}\}\, |\mu_i(\hat a) - \mu_j(\hat a)| \times |\log\mu_i(\hat a) - \log\mu_j(\hat a)|
\]
Thus
\[
\|u\| \ge \min\{\beta_{km}, \beta_{mk}\}\, |\mu_k(\hat a) - \mu_m(\hat a)| \times |\log\mu_k(\hat a) - \log\mu_m(\hat a)|
\ge \min\{\beta_{km}, \beta_{mk}\}\, |\mu_k(\hat a) - \mu_m(\hat a)|^2
\ge \frac{1}{d(k,m)^\gamma}\, |\mu_k(\hat a) - \mu_m(\hat a)|^2.
\]
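The second inequality in the last display uses the elementary fact that |log p − log q| ≥ |p − q| for p, q ∈ (0, 1] (by the mean value theorem, since the derivative of log is 1/t ≥ 1 on that interval), so that (p − q)(log p − log q) ≥ (p − q)². A quick brute-force check of this fact (our sketch, not the paper's):

```python
import math
import random

random.seed(0)

# For p, q in (0, 1]: |log p - log q| >= |p - q|, and the two differences have
# the same sign, hence (p - q) * (log p - log q) >= (p - q)^2.
for _ in range(10_000):
    p = random.uniform(1e-9, 1.0)
    q = random.uniform(1e-9, 1.0)
    assert (p - q) * (math.log(p) - math.log(q)) >= (p - q) ** 2 - 1e-12
```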
Proof of Proposition 3. Let |Θ| = n. By Axiom a there exists a function f : R₊ → R₊ such that β^Θ_{ij} = f(|i − j|). Let g : R₊ → R₊ be given by g(t) = f(t)t². The Kullback–Leibler divergence between two normal distributions with unit variance and expectations i and j is (i − j)²/2. Hence, by Axiom b there exists a constant κ ≥ 0, independent of n, so that for each Θ ∈ T
\[
\kappa = C^\Theta(\nu^\Theta) = \sum_{i\ne j\in\Theta} \beta^\Theta_{ij}\, \frac{(i-j)^2}{2} = \sum_{i\ne j\in\Theta} g(|i-j|). \qquad (20)
\]
We show that g must be constant, which will complete the proof. The case n = 2 is
immediate, since then Θ = {i, j} and so (20) reduces to
κ = g(|i− j|).
For n > 2, let Θ = {i₁, i₂, . . . , i_{n−1}, x} with i₁ < i₂ < · · · < i_{n−1} < x. Then (20) implies
\[
\kappa = \sum_{\ell=1}^{n-1} g(x - i_\ell) + \sum_{k=1}^{n-1} \sum_{\ell=1}^{k-1} g(i_k - i_\ell).
\]
Taking the difference between this equation and the analogous one corresponding to Θ′ = {i₁, i₂, . . . , i_{n−1}, y} with y > i_{n−1} yields
\[
0 = \sum_{\ell=1}^{n-1} g(x - i_\ell) - g(y - i_\ell).
\]
Denoting i₁ = −z, we can write this as
\[
0 = g(x+z) - g(y+z) + \sum_{\ell=2}^{n-1} g(x - i_\ell) - g(y - i_\ell).
\]
Again taking a difference, this time of this equation with the analogous one obtained by setting i₁ = −w, we get
\[
g(x+w) - g(y+w) = g(x+z) - g(y+z),
\]
which by construction holds for all x, y > −z, −w. Consider in particular the case that x, y > 0, w = 0 and z > 0. Then
\[
g(x) - g(y) = g(x+z) - g(y+z) \quad \text{for all } x, y, z > 0. \qquad (21)
\]
Since g is non-negative, it follows from (20) that g is bounded by κ. Let
\[
A = \sup_{t>0} g(t) \le \kappa \quad \text{and} \quad B = \inf_{t>0} g(t) \ge 0.
\]
For every ε > 0, there are some x, y > 0 such that g(x) ≥ A − ε/2 and g(y) ≤ B + ε/2, and so g(x) − g(y) ≥ A − B − ε. By (21) it holds for all z > 0 that g(x+z) − g(y+z) ≥ A − B − ε. For this to hold, since A and B are, respectively, the supremum and infimum of g, it must be that g(x + z) ≥ A − ε and that g(y + z) ≤ B + ε for every z > 0. By choosing z appropriately, it follows that A − ε ≤ g(max{x, y} + 1) ≤ B + ε. Since this holds for any ε > 0, we have shown that A = B and so g is constant.
Proof of Proposition 4. Let µ⋆ be an optimal experiment. As argued in the text, µ⋆ is such that S = A, so that it reveals to the decision maker what actions to play. Let A⋆ = supp(µ⋆) be the set of actions played in µ⋆. It solves
\[
\max_{\mu \in \mathbb{R}_+^{|\Theta|\times|A^\star|}}\; \sum_{i\in\Theta} q_i \Big( \sum_{a\in A^\star} \mu_i(a)\, u(a,i) \Big) - \sum_{i,j\in\Theta} \beta_{ij} \sum_{a\in A^\star} \mu_i(a) \log\frac{\mu_i(a)}{\mu_j(a)} \qquad (22)
\]
subject to
\[
\sum_{a\in A^\star} \mu_i(a) = 1 \quad \text{for all } i \in \Theta. \qquad (23)
\]
Reasoning as in (Cover and Thomas, 2012, Theorem 2.7.2), the log-sum inequality implies that the function D_KL is convex when its domain is extended from pairs of probability distributions to pairs of positive measures. Moreover, expected utility is linear in the choice probabilities. It then follows that the objective function in (22) is concave over R₊^{|Θ|×|A⋆|}.
As (22) equals −∞ whenever µi(a) = 0 for some i and µj(a) > 0 for some j ≠ i, we have that µ⋆i(a) > 0 for all i ∈ Θ, a ∈ A⋆. For every λ ∈ R^{|Θ|} we define the Lagrangian Lλ(µ) as
\[
\mathcal{L}_\lambda(\mu) = \sum_{i\in\Theta} q_i \Big( \sum_{a\in A^\star} \mu_i(a)\, u(a,i) \Big) - \sum_{i,j\in\Theta} \beta_{ij} \sum_{a\in A^\star} \mu_i(a) \log\frac{\mu_i(a)}{\mu_j(a)} - \sum_{i\in\Theta} \lambda_i \Big( \sum_{a\in A^\star} \mu_i(a) \Big).
\]
As µ⋆ is an interior maximizer it follows from the Karush–Kuhn–Tucker conditions that there exist Lagrange multipliers λ ∈ R^{|Θ|} such that µ⋆ maximizes Lλ(·) over R₊^{|Θ|×|A⋆|}. As µ⋆ is interior it satisfies the first order condition
\[
\nabla \mathcal{L}_\lambda(\mu^\star) = 0.
\]
We thus have that for every state i ∈ Θ and every action a ∈ A⋆
\[
0 = q_i u_i(a) - \lambda_i - \sum_{j\ne i} \Big\{ \beta_{ij} \Big[ \log\Big(\frac{\mu^\star_i(a)}{\mu^\star_j(a)}\Big) - 1 \Big] - \beta_{ji}\, \frac{\mu^\star_j(a)}{\mu^\star_i(a)} \Big\}. \qquad (24)
\]
Subtracting (24) evaluated at a′ from (24) evaluated at a yields that (8) is a necessary condition for the optimality of µ⋆.
1 Introduction
2 Model
2.1 Axioms
2.2 Discussion
3 Representation
4 One-Dimensional Information Acquisition Problems
5 Examples
5.1 Information Acquisition in Decision Problems
5.2 Acquiring Precise Information
5.3 Hypothesis Testing
6 Verification and Falsification
7 Related Literature
8 Proof Sketch
9 Conclusions
A Discussion of the Continuity Axiom
B Preliminaries
B.1 Properties of the Kullback-Leibler Divergence
B.2 Experiments and Log-likelihood Ratios
B.3 Properties of Cumulants
B.4 Admissible Measures and the Cumulants Manifold
C Automatic continuity in the Cauchy problem for subsemigroups of Rd.
D Proof of Theorem 1
E Additional Proofs
| 0non-cybersec
| arXiv |
scrbook: remove margin notes for index entries. <p>I'm using the <code>scrbook</code> class. I don't like the way that my index entries are included as margin notes. How can I prevent the index entries from being shown in the margins? I consulted the KOMA-Script manual but was unable to find anything that addresses this issue.</p>
<p>Sample code follows:</p>
<pre><code>\documentclass[letterpaper,openright,12pt,chapterprefix=true,index=totoc]{scrbook}
\usepackage[english]{babel}
\usepackage{makeidx,showidx}
\title{The Title}
\author{The Author}
\date{}
\makeindex
\begin{document}
\frontmatter
\pagenumbering{roman}
\tableofcontents
\chapter*{Preface}
\addcontentsline{toc}{chapter}{Preface}
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
\mainmatter
\pagenumbering{arabic}
\chapter{The Chapter Name}
\addcontentsline{toc}{chapter}{The Chapter Name}
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
\index{Lorem ipsum!dolor sit amet}
Sed a tellus augue. Phasellus ut ultrices velit.
\index{Sed a tellus!augue}
\backmatter
\printindex
\end{document}
</code></pre>
| 0non-cybersec
| Stackexchange |
Partition the rationals with respect to a multivariate polynomial which sends classes to classes. <p>Let $R$ be a commutative ring and let $f\in R[x_1,x_2,\cdots,x_{n-1}],n\geq 2$ be a polynomial.</p>
<blockquote>
<p><strong>Definition:</strong> We say $f$ is <strong>$n$-severable</strong> over $R$ if there exists a partition (of set) $$R=\coprod_{i=1}^n R_i,R_i\neq \varnothing$$
such that for any sequence $\mathbf{a}=(a_{i_1},\cdots,a_{i_{n-1}}), a_{i_j}\in R_{i_j}$, $f(\mathbf{a})$ lies in $R_{i_n}$. Here $(i_1,\cdots,i_n)$ is a rearrangement of $(1,\cdots,n)$.</p>
</blockquote>
<p>Let me give a somewhat trivial example. Put $R=\Bbb{Z}$ and $n=2$, one can easily show that $f(x)=x+1$ is $2$-severable with partition $\Bbb{Z}=$ {odd numbers} $\cup$ {even numbers}. On the other hand, $f(x)=2x+1$ is not $2$-severable, since it possesses a fixed point $-1$. In general, one has following result, whose proof is straightforward.</p>
<blockquote>
<p><strong>Claim:</strong> For any $R$ and $f\in R[x]$, $f$ is $2$-severable if and only if there is no periodic element in $R$ of odd period under the iteration $f$.</p>
</blockquote>
<p>To be honest, I haven't tried much beyond the above examples, and I feel that it is hopeless to obtain an explicit criterion for the severability of a general polynomial. So to narrow down the question, here is what I mainly interest in:</p>
<blockquote>
<p><strong>Question:</strong> Is there any $n$-severable polynomial $f$ over $\Bbb{Q}$ with $n\geq3$?</p>
</blockquote>
<p>For the case $n=3$, I only calculated a few linear functions $f=ax_1+bx_2$ and didn't find any satisfied one yet. Also, for any $n\geq 2$, there is an $n$-severable polynomial $n(n-1)/2-(x_1+\cdots+x_{n-1})$ over $\Bbb{Z}/n\Bbb{Z}$, so it induces a collection of $n$-severable polynomials over $\Bbb{Z}$, but I've no idea whether any of them is severable over $\Bbb{Q}$.</p>
<p>Any advise or guidance would be appreciated.</p>
<p><strong>Update:</strong> I also posted it on <a href="https://mathoverflow.net/questions/292701/partition-the-rationals-with-respect-to-a-multivariate-polynomial-which-sends-cl">MO</a>.</p>
| 0non-cybersec
| Stackexchange |
LPT: If you ever hit the space bar accidentally and you scroll down, you can scroll back up with Shift+Space.. Also, you can pause YouTube videos by pressing 'K', so now you'll never accidentally scroll down! | 0non-cybersec
| Reddit |
How to overwrite server-side forced download window?. <p>When clicking at a link to a file, three things can happen by default (depending on the configuration of your browser and also of the server):</p>
<ul>
<li><strong>a)</strong> the file gets opened in the browser</li>
<li><strong>b)</strong> a download window gets opened, where you can choose to open the file with a specific program or to save it locally</li>
<li><strong>c)</strong> a download window gets opened, which only asks if you want to save the file locally</li>
</ul>
<p>My question is about <strong>c)</strong>, which, I assume, is triggered by a server-side configuration (specific HTTP header resp. MIME type). See the example below.</p>
<p>Is there a way to "overwrite" this behaviour, i.e., to get the usual download window <strong>b)</strong> instead?</p>
<h3>Example</h3>
<p>On <a href="http://www.heise.de/newsticker/meldung/Abmahnungen-wegen-Redtube-Porno-Streaming-erste-juristische-Gegenwehr-2064084.html" rel="nofollow noreferrer">this (German) page</a> there is an external <a href="http://www.lg-koeln.nrw.de/Presse/Pressemitteilungen/10_12_2013---Abmahnungen-_The-Archive_.pdf" rel="nofollow noreferrer">link to a PDF</a>, which triggers this download window:</p>
<p><img src="https://i.stack.imgur.com/ktz5T.png" alt="Iceweasel screenshot of download window"></p>
<p>Translation: Do you want to save this file? <kbd>Cancel</kbd> <kbd>Save file</kbd></p>
<p>As you can see, it doesn’t offer to open this file with a specific program.</p>
<hr>
<p>Update: <a href="https://superuser.com/a/688216/151741">sahmeepee suggested a solution</a> that works for "known" MIME types, i.e., you have to find a different download for a file of the same MIME type, so that this MIME type can be added to the mentionend download settings list. <strong>So I’m still looking for an "on the fly" way of overwriting such forced downloads.</strong></p>
| 0non-cybersec
| Stackexchange |
OOM-Killer called every now and then. <p>I have a dedicated server where I've installed <code>apache2</code>, as well as <code>rails-passenger</code>. Although i have 2GBs of RAM and most times about 1,5GB is free, there are some random times where I loose <code>ssh</code> and generic connectivity because <code>oom-killer</code> is killing processes. </p>
<p>I suppose there is a memory leak but I cannot find out where it comes from. <code>oom-killer</code> kills <code>apache2</code>, <code>mysql</code>, <code>passenger</code> and whatever.</p>
<p>Yesterday, I did a <code>cat syslog | grep -c oom-killer</code> and got 57 occurences!</p>
<p>It seems that something seriously destroys the memory. Once I reboot, everything comes back to normal. I suspect that it can be related to <code>passenger</code>, but I'm still trying to figure it out.</p>
<p>Can you think of another cause, or do you have anything to suggest that will make the leak identification procedure easier? I was even thinking of writing a bash script, to be run with <code>cron</code> for like every 5 minutes. </p>
| 0non-cybersec
| Stackexchange |
Find the intersection between point and circle. <p>given a line segment with endpoints P1 and P2 and a Circle with Center C and Radius R where it is known that P1 lies outside the circle and P2 lies inside the circle, what is an efficient way to find the intersection point between the two, P3? </p>
| 0non-cybersec
| Stackexchange |
How are instincts passed from one generation to the next?. This is something I have long wondered, and done some looking into, but I just cannot find a satisfactory answer. How are thoughts and behaviors passed from an animal to it's offspring? How can a shepherd dog, who has never seen a sheep and was not raised with any other shepherd dogs who could pass down the skill, know how to herd sheep the first time it sees one? | 0non-cybersec
| Reddit |
Redux configure store's createStore method. <p>I found two ways to configure redux createStore ,<br>
1.<a href="https://github.com/TeamWithBR/SampleProjectTodo/blob/master/src/store/configureStore.js" rel="noreferrer">https://github.com/TeamWithBR/SampleProjectTodo/blob/master/src/store/configureStore.js</a>
2.<a href="https://github.com/aknorw/piHome/blob/9f01bc4807a8dfe2a75926589508285bff8b1ea6/app/configureStore.js" rel="noreferrer">https://github.com/aknorw/piHome/blob/9f01bc4807a8dfe2a75926589508285bff8b1ea6/app/configureStore.js</a> </p>
<p>And I try it , both can work </p>
<p><strong>test1</strong></p>
<pre><code>import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import { createStore, applyMiddleware } from 'redux';
import { Router, browserHistory } from 'react-router';
import reduxThunk from 'redux-thunk';
import routes from './routes';
import App from './components/app';
import reducers from './reducers';
const createStoreWithMiddleware = applyMiddleware(reduxThunk)(createStore);
ReactDOM.render(
<Provider store={createStoreWithMiddleware(reducers)}>
<Router history={browserHistory} routes={routes} />
</Provider>, document.querySelector('#app'));
</code></pre>
<p><strong>test2</strong></p>
<pre><code>const createStoreWithMiddlewareTest = createStore(reducers, applyMiddleware(reduxThunk));
ReactDOM.render(
<Provider store={createStoreWithMiddlewareTest}>
<Router history={browserHistory} routes={routes} />
</Provider> , document.querySelector('#app'));
</code></pre>
<p>But I don't know what's the difference between them??
Please guide me </p>
| 0non-cybersec
| Stackexchange |
Java EE's mysterious message policy. | 0non-cybersec
| Reddit |
Bronze Adventures- Support Braum Dragon Steal. https://youtu.be/evFIXJYApnA
I know, I'm bronze scum, although I believe things like this make it all worth it :p | 0non-cybersec
| Reddit |
nvidia 820m problem. <p>How to disable graphic card permanently on Ubuntu 14.04.</p>
<ul>
<li>graphic card is a 820m</li>
<li>laptop is hp 15 r203tx</li>
<li>bios version is up to date</li>
</ul>
| 0non-cybersec
| Stackexchange |
How to use spot instance with amazon elastic beanstalk?. <p>I have one infra that use amazon elastic beanstalk to deploy my application.
I need to scale my app adding some spot instances that EB do not support.</p>
<p>So I create a second autoscaling from a launch configuration with spot instances.
The autoscaling use the same load balancer created by beanstalk.</p>
<p>To up instances with the last version of my app, I copy the user data from the original launch configuration (created with beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This work fine, but:</p>
<ol>
<li><p>how to update spot instances that have come up from the second autoscaling when the beanstalk update instances managed by him with a new version of the app?</p>
</li>
<li><p>is there another way so easy as, and elegant, to use spot instances and enjoy the benefits of beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk add support to spot instance since 2019... see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
| 0non-cybersec
| Stackexchange |
Vue plugin with Typescript - Cannot read property '_init' of null. <p>I need to create Vue plugin containing a few reusable components. I want to create this using TypeScript. There are my files:</p>
<hr>
<p><em>components/SampleButton.vue</em></p>
<pre><code><template>
<button>Sample button</button>
</template>
<script lang="ts">
import Vue from 'vue';
export default Vue.extend({
name: 'sample-button',
});
</script>
</code></pre>
<hr>
<p><em>main.ts</em></p>
<pre><code>import SampleButton from './components/SampleButton.vue';
export {
SampleButton,
};
export default {
install(Vue: any, _options: object = {}): void {
Vue.component(SampleButton.name, SampleButton);
},
};
</code></pre>
<hr>
<p><em>shims-vue.d.ts</em></p>
<pre><code>declare module '*.vue' {
import Vue from 'vue';
export default Vue;
}
</code></pre>
<hr>
<p>Plugin building is successful, but when I try to user the plugin in my app I get an error:</p>
<pre><code>import MyPlugin from 'my-plugin'
Vue.use(MyPlugin)
</code></pre>
<p><a href="https://i.stack.imgur.com/TU2rP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TU2rP.png" alt="enter image description here"></a></p>
<p>Please help me find a mistake :)</p>
| 0non-cybersec
| Stackexchange |
Wells Fargo security system… (take a look at the source – maybe this should be xposted to r/gifs). | 1cybersec
| Reddit |
Nginx not redirecting but is working. <p>This is my first setup with nginx and I'm using it to proxy to nodejs. HTTP is on port 3000, and HTTPS is on port 3001.</p>
<p>If I go to <a href="http://test.domain.com" rel="noreferrer">http://test.domain.com</a> it loads the regular unsecure pages. If I go to <a href="https://test.domain.com" rel="noreferrer">https://test.domain.com</a> it loads the secure pages. But I want it to redirect from non-https to https. </p>
<p>What's wrong with my config? This is the whole domain.conf file I'm using.</p>
<pre><code>server {
listen 80;
server_name test.domain.com
return 301 https://test.domain.com$request_uri;
}
server {
listen 443 ssl;
server_name test.domain.com;
ssl_certificate /etc/nginx/ssl/domain.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header HOST $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass https://127.0.0.1:3001;
proxy_redirect off;
}
}
</code></pre>
<p>I have restarted nginx multiple times.</p>
<p>Thanks!</p>
| 0non-cybersec
| Stackexchange |
WiFi connected devices experience packet loss over OpenVPN. Hey guys, I'm using a Netgear Nighthawk AC2600 with the OpenVPN configuration files from the router. I have setup VPN to 3 computers and a windows tablet. 1 PC is hardwired and experiences no packet loss, the other 3 computers which are on different wifi connections altogether experience packet-loss. The wifi connected PC's have a great connection with low latency, no packet loss otherwise, and high bandwidth. I'm scratching my head here, wondering if you guys can help. Thanks. | 1cybersec
| Reddit |
Please review my connection log errors. <p>I am using the current version of Tails. I more often than not get the warning icon at the bottom left of the connection window telling me that the log file has been copied to the clip board. I will eventually connect with a green onion yet it takes quite a bit of time. I could use some feedback on this log. This log is from today at around 1pm, so the date/time is wrong. Also, there were so many repeats of the first line that I had to delete most of them in order to make them this post fit here on this forum.</p>
<pre><code>02/12/2016 00:09:47.800 [NOTICE] New control connection opened from 127.0.0.1.
02/12/2016 00:09:56.900 [NOTICE] DisableNetwork is set. Tor will not make or accept non-control network connections. Shutting down all existing connections.
02/12/2016 00:09:56.900 [NOTICE] Opening Socks listener on 127.0.0.1:9150
02/12/2016 00:09:56.900 [NOTICE] Opening DNS listener on 127.0.0.1:5353
02/12/2016 00:09:56.900 [NOTICE] Opening Transparent pf/netfilter listener on 127.0.0.1:9040
02/12/2016 00:09:56.900 [NOTICE] Renaming old configuration file to "/etc/tor/torrc.orig.1"
02/12/2016 00:09:57.200 [NOTICE] New control connection opened from 127.0.0.1.
02/12/2016 00:09:58.600 [NOTICE] Bootstrapped 5%: Connecting to directory server
02/12/2016 00:09:58.600 [NOTICE] Bootstrapped 10%: Finishing handshake with directory server
02/12/2016 00:09:58.800 [WARN] Proxy Client: unable to connect to 194.132.209.190:51867 ("server rejected connection")
02/12/2016 00:10:33.900 [NOTICE] Closing no-longer-configured Transparent pf/netfilter listener on 127.0.0.1:9040
02/12/2016 00:10:33.900 [NOTICE] Closing no-longer-configured DNS listener on 127.0.0.1:5353
02/12/2016 00:06:38.900 [NOTICE] Bootstrapped 5%: Connecting to directory server
02/12/2016 00:06:38.900 [NOTICE] Bootstrapped 10%: Finishing handshake with directory server
02/12/2016 00:06:39.000 [WARN] Proxy Client: unable to connect to 198.23.141.168:58693 ("server rejected connection")
02/12/2016 00:06:39.000 [WARN] Proxy Client: unable to connect to 194.132.209.158:44912 ("server rejected connection")
02/12/2016 00:06:39.400 [NOTICE] Bootstrapped 15%: Establishing an encrypted directory connection
02/12/2016 00:06:39.500 [NOTICE] Bootstrapped 20%: Asking for networkstatus consensus
02/12/2016 00:06:39.800 [NOTICE] new bridge descriptor 'consolsmeringue' (fresh): $9B1E39F667DBD7749CC653A7B2632A9D75DB1D27~consolsmeringue at 45.55.174.204
02/12/2016 00:06:39.800 [NOTICE] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
02/12/2016 00:06:40.000 [NOTICE] Bootstrapped 25%: Loading networkstatus consensus
02/12/2016 00:06:40.900 [NOTICE] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
] Closing old Socks listener on 127.0.0.1:9150
</code></pre>
| 0non-cybersec
| Stackexchange |
Three-year-old boy shoots pregnant mother and father in New Mexico. | 0non-cybersec
| Reddit |
How to run a python script on startup?. <p>First, I know that this has been asked before. Tried guides, didn't work...</p>
<p>So I have a python script that has a never ending loop. It accesses the internet.</p>
<p>All I want to do is that after I log in or the computer start, this python script is executed in the background until I shut down the computer.</p>
<p>Let's say my script is currently in my home directory named myscript.py</p>
<p>How could I achieve this task in ubuntu 12.10?</p>
<p>Thank you for your answer</p>
| 0non-cybersec
| Stackexchange |
How do I know if robocopy skipped the file I am copying?. <p>I am using a batch file to copy a database backup file using robocopy (on Windows 7/2008) and need to restore (with replace) the database only after the backup file is really changed (not skipped by robocopy). </p>
<pre><code>robocopy \\server\share . foo.bak /TBD /NP
</code></pre>
<p>I tried testing ERRORLEVEL but it didn't help. Does anyone have a suggestion on how to achieve this?</p>
<p>Thanks.</p>
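<p>For reference, a sketch that leans on robocopy's documented exit codes: 0 means nothing was copied (the file was skipped), 1 means at least one file was copied, and values of 8 and above indicate failures. The restore command is a placeholder, not your actual setup:</p>

```
robocopy \\server\share . foo.bak /TBD /NP
if %ERRORLEVEL% GEQ 8 (
    echo Robocopy failed with code %ERRORLEVEL%
) else if %ERRORLEVEL% EQU 0 (
    echo foo.bak unchanged - skipping restore
) else (
    echo foo.bak was copied - restoring database
    rem sqlcmd -S . -Q "RESTORE DATABASE MyDb FROM DISK='foo.bak' WITH REPLACE"
)
```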
| 0non-cybersec
| Stackexchange |
Late night redditors of Reddit, why do you stay up so late?. | 0non-cybersec
| Reddit |
Resize Image without losing quality. <p>I'm using the functions below to resize my images width & height but I noticed that it ruins the image quality.</p>
<pre><code>class func imageWithSize(image: UIImage,size: CGSize)->UIImage{
if UIScreen.mainScreen().respondsToSelector("scale"){
UIGraphicsBeginImageContextWithOptions(size,false,UIScreen.mainScreen().scale);
}
else
{
UIGraphicsBeginImageContext(size);
}
image.drawInRect(CGRectMake(0, 0, size.width, size.height));
var newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
class func resizeImageWithAspect(image: UIImage,scaledToMaxWidth width:CGFloat,maxHeight height :CGFloat)->UIImage
{
let scaleFactor = width / height;
let newSize = CGSizeMake(width, height);
return imageWithSize(image, size: newSize);
}
</code></pre>
<p>Is there another way to resize images without ruining the quality? Or how can I fix my functions below so it doesn't ruin the image quality after resizing?</p>
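<p>For what it's worth, one knob worth checking (an assumption about the cause, not a confirmed fix) is the context's interpolation quality before drawing; a sketch in the same Swift 2-era style as the code above:</p>

```swift
// Sketch: scale 0 asks for the device's screen scale, so no manual respondsToSelector check
UIGraphicsBeginImageContextWithOptions(size, false, 0)
let context = UIGraphicsGetCurrentContext()
CGContextSetInterpolationQuality(context, .High)   // smoother resampling when downscaling
image.drawInRect(CGRectMake(0, 0, size.width, size.height))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
```

<p>Also note that <code>resizeImageWithAspect</code> computes <code>scaleFactor</code> but never uses it, so the aspect ratio is not actually preserved, which can look like quality loss.</p>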
| 0non-cybersec
| Stackexchange |
How to forward a domain without an absolute link. <p>We have multiple sites within our server:</p>
<p>We mostly use a www.example.com/businessid=21 type URI format.</p>
<p>Unfortunately, we've changed the DNS for domain www.example.com to another server.</p>
<p>One of our clients will not change their DNS and will only use forwarding on their domain. </p>
<p>Is there a way we can still point to the /businessid=21 site, given that we no longer have the www.example.com part of the URL (i.e. just using the IP address of the server)?</p>
| 0non-cybersec
| Stackexchange |
Seattle's support for USMNT last night was off the charts: "We are going to Brazil!". | 0non-cybersec
| Reddit |
Borderline r/insanepeoplefacebook. | 0non-cybersec
| Reddit |
Alphas- A 2012 TV series just like X-Men but instead of being superheroes the Alphas have realistic problems arising from their powers. Very fun.. | 0non-cybersec
| Reddit |
How to disable "Extraction completed successfully" dialog in Nautilus?. <p>In Ubuntu 16.10, every time I extract a file, Nautilus displays the dialog "Extraction completed successfully". </p>
<p><img src="https://i.stack.imgur.com/WvnPB.png" alt="nautilus display dialog[1]"></p>
<p>How can I make Nautilus finish extraction silently (like it did in 16.04)?</p>
| 0non-cybersec
| Stackexchange |
Postfix SASL authentication not applied. <p>I have a problem: I set up Postfix and want to apply SASL user authentication via Cyrus.
But the problem is that Postfix won't accept the SASL auth or even the TLS encryption which I configured.</p>
<p>In the following my Config:</p>
<p>postconf -m</p>
<pre><code>btree
cidr
environ
fail
hash
internal
memcache
nis
proxy
regexp
sdbm
sqlite
static
tcp
texthash
unix
</code></pre>
<p>postconf -M</p>
<pre><code>smtp inet n - - - - smtpd -v
smtpd pass - - - - - smtpd
smtps inet n - - - - smtpd -o syslog_name=postfix/smtps -o smtpd_tls_wrappermode=yes -o smtpd_sasl_auth_enable=yes
pickup fifo n - - 60 1 pickup
cleanup unix n - - - 0 cleanup
qmgr fifo n - n 300 1 qmgr
tlsmgr unix - - - 1000? 1 tlsmgr
rewrite unix - - - - - trivial-rewrite
bounce unix - - - - 0 bounce
defer unix - - - - 0 bounce
trace unix - - - - 0 bounce
verify unix - - - - 1 verify
flush unix n - - 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - - - - smtp
relay unix - - - - - smtp
showq unix n - - - - showq
error unix - - - - - error
retry unix - - - - - error
discard unix - - - - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - - - - lmtp
anvil unix - - - - 1 anvil
scache unix - - - - 1 scache
maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
</code></pre>
<p>postconf -n</p>
<pre><code>alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
append_dot_mydomain = no
biff = no
config_directory = /etc/postfix
inet_interfaces = all
inet_protocols = all
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
mydestination = localhost, localhost.localdomain, localhost
myhostname = XXX
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
myorigin = /etc/mailname
readme_directory = no
recipient_delimiter = +
relayhost =
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_tls_note_starttls_offer = yes
smtp_use_tls = yes
smtpd_banner = $myhostname ESMTP
smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
smtpd_sasl_path = smtpd
smtpd_sasl_type = cyprus
smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
smtpd_tls_auth_only = no
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom
</code></pre>
<p>If I connect over Telnet:</p>
<pre><code>telnet localhost smtp
Trying ::1...
Connected to localhost.localdomain.
Escape character is '^]'.
220 XXX ESMTP
ehlo test
250-XXX
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
auth plain 12345678
503 5.5.1 Error: authentication not enabled
</code></pre>
<p>Does anyone have an idea why Postfix won't apply my TLS and, more importantly, my SASL auth config?</p>
| 0non-cybersec
| Stackexchange |
Shopping for a new stereo at Kmart, circa 1973, in a Kresge annual report photo.. | 0non-cybersec
| Reddit |
Lugaru HD running on the Raspberry Pi 3. The frame rate tends to go all over the place, but it's still playable.. | 0non-cybersec
| Reddit |
How to handle authentication with HttpWebRequest.AllowAutoRedirect?. <p>According to <a href="http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.allowautoredirect.aspx" rel="noreferrer">MSDN</a>, when <code>HttpWebRequest.AllowAutoRedirect</code> property is true, redirects will clear authentication headers. The workaround given is to implement IAuthenticationModule to handle authentication:</p>
<blockquote>
<p>The Authorization header is cleared on auto-redirects and HttpWebRequest automatically tries to re-authenticate to the redirected location. In practice, this means that an application can't put custom authentication information into the Authorization header if it is possible to encounter redirection. Instead, the application must implement and register a custom authentication module. The System.Net.AuthenticationManager and related class are used to implement a custom authentication module. The AuthenticationManager.Register method registers a custom authentication module. </p>
</blockquote>
<p>I created a basic implementation of this interface:</p>
<pre><code>public class CustomBasic : IAuthenticationModule
{
public CustomBasic() { }
public string AuthenticationType { get { return "Basic"; } }
public bool CanPreAuthenticate { get { return true; } }
private bool checkChallenge(string challenge, string domain)
{
if (challenge.IndexOf("Basic", StringComparison.InvariantCultureIgnoreCase) == -1) { return false; }
if (!string.IsNullOrEmpty(domain) && challenge.IndexOf(domain, StringComparison.InvariantCultureIgnoreCase) == -1) { return false; }
return true;
}
public Authorization PreAuthenticate(WebRequest request, ICredentials credentials)
{
return authenticate(request, credentials);
}
public Authorization Authenticate(String challenge, WebRequest request, ICredentials credentials)
{
if (!checkChallenge(challenge, string.Empty)) { return null; }
return this.authenticate(request, credentials);
}
private Authorization authenticate(WebRequest webRequest, ICredentials credentials)
{
NetworkCredential requestCredentials = credentials.GetCredential(webRequest.RequestUri, this.AuthenticationType);
return (new Authorization(string.Format("{0} {1}", this.AuthenticationType, Convert.ToBase64String(Encoding.ASCII.GetBytes(string.Format("{0}:{1}", requestCredentials.UserName, requestCredentials.Password))))));
}
}
</code></pre>
<p>and a simple driver to exercise the functionality:</p>
<pre><code>public class Program
{
static void Main(string[] args)
{
// replaces the existing handler for Basic authentication
AuthenticationManager.Register(new CustomBasic());
// make a request that requires authentication
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(@"https://www.SomeUrlThatRequiresAuthentication.com");
request.Method = "GET";
request.KeepAlive = false;
request.ContentType = "text/plain";
request.AllowAutoRedirect = true;
request.Credentials = new NetworkCredential("userName", "password");
HttpWebResponse result = (HttpWebResponse)request.GetResponse();
}
}
</code></pre>
<p>When I make a request that doesn't redirect, the <code>Authenticate</code> method on my class is called, and authentication succeeds. When I make a request that returns a 307 (temporary redirect) response, no methods of my class are called, and authentication fails. What's going on here?</p>
<p>I'd rather not disable auto redirect and write custom logic to handle 3xx responses myself. How can I get my authentication logic to work with auto redirect?</p>
| 0non-cybersec
| Stackexchange |
concurrent scheduled jobs and ora-27477. <p>Need:
To have Oracle kick off a shell script that performs processing on tables and data in the Oracle database. The script is instantiated by specific activity from web clients.</p>
<p>Issue:
Web clients will instantiate this job concurrently. In order to execute shell scripts from Oracle, you must do so using the Oracle scheduler. This job is set to execute immediately from Oracle. What I am seeing with the ORA-27477 documentation is that Oracle does not allow jobs with the same name to run concurrently.</p>
<p>Background:
We need to run this particular processing job from the shell because it uses C to do the heavy lifting. Porting that code to PL/SQL is not an option. Lots of legacy code in play here too that makes me sad, but that's life.</p>
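<p>One common workaround for the name collision (a sketch; the job attributes and script path are assumptions, not your exact setup) is to generate a unique job name per web request, so concurrent submissions never share a name:</p>

```sql
DECLARE
  l_job_name VARCHAR2(128);
BEGIN
  -- A unique name per invocation means concurrent runs never collide (no ORA-27477)
  l_job_name := DBMS_SCHEDULER.GENERATE_JOB_NAME('WEB_PROC_');

  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => l_job_name,
    job_type   => 'EXECUTABLE',
    job_action => '/path/to/processing_script.sh',  -- placeholder path
    enabled    => TRUE,   -- start immediately, run once
    auto_drop  => TRUE);  -- drop the job definition after it completes
END;
/
```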
| 0non-cybersec
| Stackexchange |
Ubuntu 12.04 Desktop CentrifyDC problems. <p>I have a problem with Ubuntu 12.04 when I try to integrate it with Active Directory.</p>
<p>I installed CentrifyDC and joined the DC. Everything is OK until I reboot my computer: when I try to log in with my username and domain password, I can't. I then log in with my default Ubuntu username, and if I run the adinfo command, CentrifyDC appears in disconnected mode.
After 3 minutes, without any intervention, when I type the adinfo command CentrifyDC appears connected and I can log in with my domain user and password.</p>
<p>The same thing happens if I execute the command /etc/init.d/centrifydc restart: it appears connected.</p>
<p>Can anyone tell me why it can't connect at boot? The error when I type adinfo --diag is: Cannot find SPNs: unable to bind to DC.</p>
| 0non-cybersec
| Stackexchange |
How do you calculate exacty when a domain in pending delete status will become available?. <p>A domain I want is in the "pendingDelete" stage according to WHOIS.</p>
<p>I have been monitoring it since "redemptionPeriod", and it entered into pendingDelete five days ago today.</p>
<p>After checking a few services (SnapNames, etc), they report it is scheduled to drop on the 11th (7 days, by my calculations), but I'm not quite sure what to believe.</p>
<p>The domain isn't highly valuable. It is only valuable to me and one other company. I can see no backorders placed on the big name sites, so I'm thinking of trying to get it without a backorder service.</p>
<p>Any insight as to when it will <strong>actually</strong> drop? I've read 11AM-2PM PST, but I'm unsure.</p>
| 0non-cybersec
| Stackexchange |
Fuck me, right?. | 0non-cybersec
| Reddit |
Where does Windows 10 save the wifi login password for automatic wifi login?. <p>I have a Windows 10 system and I connect to a Wifi network via a login and password system.</p>
<p>First, after a power-on, I have to select the SSID of my network; there is a password which is common for all users.
Then I get a link <a href="http://msftconnecttest.com" rel="nofollow noreferrer">http://msftconnecttest.com</a> which redirects me to our internal link <a href="http://192.0.2.254/?redirect=https%3A//secure.myportalac.in%3A8090" rel="nofollow noreferrer">http://192.0.2.254/?redirect=https%3A//secure.myportalac.in%3A8090</a><br>
I get something like this:
<a href="https://i.stack.imgur.com/SdhKv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SdhKv.jpg" alt="login page"></a>
Here I enter my username and password. I mostly use Edge and do not save my sessions.
I am then connected to the internet.
Now, if I reboot, I usually have to go through this whole process manually, entering the username and password again.
However, for some days I have noticed that if I reboot after having been connected, I do not have to enter my username and password.
I am automatically connected to my wifi after a reboot.
Here is a screenshot I took today after a reboot<br>
<a href="https://i.stack.imgur.com/oA7f1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oA7f1.jpg" alt="automatic logged in"></a></p>
<p>I do not use any browser extensions or automatic password fillers.
I prefer to enter all this manually.
I checked Windows Credential manager
<a href="https://support.microsoft.com/en-au/help/4026814/windows-accessing-credential-manager" rel="nofollow noreferrer">https://support.microsoft.com/en-au/help/4026814/windows-accessing-credential-manager</a>
and I do not see any entry for my network in the form of a saved login password here; or maybe it exists and I am just not able to recognize it.
I am sharing a screenshot:
<a href="https://i.stack.imgur.com/6flkw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6flkw.jpg" alt="windows credential manager"></a><br>
<a href="https://i.stack.imgur.com/9krz3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9krz3.jpg" alt="web credentials manager"></a>
Since I don't have a login password on this laptop, I don't see any such entry:
<a href="https://i.stack.imgur.com/K9cQT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K9cQT.jpg" alt="no password"></a>
I don't see any entry for my wifi SSID or even for the portal.
What I am wondering is: I do not save passwords in the browser, so how is it possible for the wifi to remember all the previous passwords and connect me automatically after a reboot without asking for my credentials? What has changed on this system? What setting has been affected that is doing this?</p>
| 0non-cybersec
| Stackexchange |
How to create a Hash Computed Column for Many Columns?. <p>Does anyone know how to create a persisted hash computed column? I keep receiving the error below. Otherwise, I will have to use an update or trigger statement every time. I know how to do this for a single column; however, this involves multiple columns now.</p>
<pre><code>CREATE TABLE [dbo].[CustomerTransactiont]
(
CustomerTransactionId int primary key identity(1,1),
CustomerName varchar(255),
Price decimal(10,2),
Quantity int,
RowHash as hashbytes('SHA2_512', (select CustomerName
,Price
,Quantity for xml raw)) persisted
)
Msg 1046, Level 15, State 1, Line 7
Subqueries are not allowed in this context. Only scalar expressions are allowed.
</code></pre>
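<p>For comparison, a commonly used workaround (a sketch; the <code>'|'</code> delimiter and the lack of explicit NULL handling are assumptions you may need to adjust) is to build one scalar string with CONCAT, which implicitly converts its arguments, instead of a subquery:</p>

```sql
CREATE TABLE [dbo].[CustomerTransaction]
(
    CustomerTransactionId int PRIMARY KEY IDENTITY(1,1),
    CustomerName varchar(255),
    Price decimal(10,2),
    Quantity int,
    -- CONCAT returns a single scalar expression, so HASHBYTES over it can be PERSISTED
    RowHash AS HASHBYTES('SHA2_512',
        CONCAT(CustomerName, '|', Price, '|', Quantity)) PERSISTED
);
```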
| 0non-cybersec
| Stackexchange |
More convenient form of derivative of $\mathrm{sinc}(x)$. <p>$\mathrm{sinc}(x)$ is defined as $\frac{\sin(x)}{x}$ except continuous at $x=0$ (insert the removable singularity). </p>
<p>The derivative of $\mathrm{sinc}(x)$ is usually given as the derivative of $\frac{\sin(x)}{x}$, namely $$\frac{\cos(x)}{x} - \frac{\sin(x)}{x^2},$$ but this has the same problem as $\frac{\sin(x)}{x}$. You have to re-insert the removable singularity at $x=0$. </p>
<p>Is there a more convenient form of the derivative of $\mathrm{sinc}(x)$, say perhaps using $\mathrm{sinc}(x)$ itself, that doesn't have this issue? (don't say piecewise; if piecewise was convenient we wouldn't have $\mathrm{sinc}$ in the first place.)</p>
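<p>For what it's worth, one way to see that the singularity in the derivative is removable (with value $0$), and to get a formula valid at every $x$, is the power series:
$$\mathrm{sinc}(x)=\sum_{n=0}^{\infty}\frac{(-1)^n\,x^{2n}}{(2n+1)!}
\qquad\Longrightarrow\qquad
\mathrm{sinc}'(x)=\sum_{n=1}^{\infty}\frac{(-1)^n\,2n\,x^{2n-1}}{(2n+1)!},$$
an everywhere-convergent series with $\mathrm{sinc}'(0)=0$; for $x\neq 0$ it sums to $\frac{\cos(x)-\mathrm{sinc}(x)}{x}$, which is the usual formula rearranged.</p>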
| 0non-cybersec
| Stackexchange |
You don't often see things this funny in journals: "Moreover, we realized after our article had been published that major parts of the text had been plagiarized almost verbatim". | 0non-cybersec
| Reddit |
Windows 10 dialog "Ask for permission" even though I am the admin account. <p>Windows 10 Home. I set up family safety years ago. After a random update a while ago (I can't quite remember when), a dialog "You’ll need to ask an adult in your family if you can use: - Microsoft Sticky Notes" appears every time I fresh-boot my PC and log into the parent (also admin) account. No other account, not even the children's, has this issue.</p>
<p>Trace ID: 2S6baCQs+0K/jnPS.3 (PrdID:9NBLGGH4QGHW)</p>
<p>Any ideas?</p>
<p>Edit: Parents can't have restrictions set in parental controls on family safety.</p>
| 0non-cybersec
| Stackexchange |
Geometric intuition for mixed partial derivatives. <p>I'm trying to better understand exactly what $f_{xy}(x,y)$ at a point is geometrically, and possibly understand why $f_{xy}$ and $f_{yx}$ should be equivalent, not just because the math happened to make it so. For example, $f_{xx}$ would be like looking at the concavity of the function in only the $x$ direction.</p>
<p>My real question is about how the second partial derivative test works.</p>
<p>$$D(x,y)=f_{xx}(x,y)f_{yy}(x,y)-(f_{xy}(x,y))^2$$</p>
<p>I can see that the sign of the first term can be interpreted as whether or not the concavity in both the $x$ and $y$ directions are in the same direction. For a critical point $(a,b)$, if the concavities are opposite, then we have a saddle point. It's also easy to see that $D(a,b)<0$.</p>
<p>When the concavities do agree, then $f_{xy}$ starts to play a role in the sign of $D$. My current understanding on why this is necessary, is because looking at the second derivatives at only the $x$ and $y$ directions doesn't quite give the entire picture on what is happening at $(a,b)$, but including $f_{xy}$ gives sufficient information to determine if $(a,b)$ is truly a local extremum or a saddle point instead (given that $D\neq0$).</p>
<p>My problem is that I still don't exactly get what sort of information $f_{xy}$ entails for a given point.</p>
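<p>A concrete example may help with that last point (a standard one, added here as a sketch): $f(x,y)=xy$ has no concavity along either axis, yet the origin is a saddle, and $f_{xy}$ is exactly the term that detects it:
$$f(x,y)=xy:\qquad f_{xx}=f_{yy}=0,\quad f_{xy}=1,\qquad D(0,0)=0\cdot 0-1^2=-1<0.$$
Along the rotated directions $y=x$ and $y=-x$ the function restricts to $x^2$ and $-x^2$, so the opposite concavities are there, just invisible to the pure $x$- and $y$-direction second derivatives; the cross term $f_{xy}$ is what picks them up.</p>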
| 0non-cybersec
| Stackexchange |
A work rate problem. <p><strong>Michael paints $\frac{1}{p}$ of a building in 20 mins. What fraction of the same building can Hena paint in 20 mins, if, working together, they paint the building in an hour?</strong></p>
<p><strong>Answer options:</strong></p>
<blockquote>
<p>$\frac{1}{3p}$</p>
<p>$\frac{3p}{p}$-3</p>
<p>$\frac{p-3}{3p}$</p>
</blockquote>
<p>I have done it this way:
Michael's work rate is $\frac{3}{p}$ per hour,
therefore Hena will do $p-\frac{3}{p}$ in an hour.
Hence in 20 mins Hena will do $\frac{p-\frac{3}{p}}{3} = \frac{p}{3}-\frac{1}{p}$.
But this does not appear in the answer list. Where is my error?</p>
| 0non-cybersec
| Stackexchange |
Personal Finance Careers (Canada). I'm wondering about careers in personal finance, as it's my strongest passion. What I don't want is to get a job in a bank as a 'personal financial advisor' or whatever the hell they call themselves these days, as I don't want to sit under a dozen layers of bureaucracy just pushing products. We all know that the last thing a bank cares about is anyone's personal financial well-being.
Are there other options to make a living off my interests in personal finance? All I can think of is 'for-fee' planners but honestly I don't know of anyone that actually goes to these because 'the banks are free.'
Enlighten me please! | 0non-cybersec
| Reddit |
Adding Dropbox files over insecure wi-fi. <p>So far I've always used private ISP connections (paid for by myself) to access the internet.
Now I'm using a wi-fi connection that requires a login, but I'm certain it's shared by dozens of people with the same login, and this connection also shows a little shield in Windows 8.1, which I know means nothing good.</p>
<p>This raises the question above: if I add or change a file on my Dropbox, will someone on this wi-fi be able to steal the file, since it will have to "navigate" through the wi-fi to reach the Dropbox server?</p>
<p>Edit: I'd also like to know what I could do to make my connection on this wi-fi more secure, other than disabling network discovery on Windows.
Thank you</p>
| 0non-cybersec
| Stackexchange |
Ubuntu 14.04 standard mode boot issue. <p>I have installed Ubuntu 14.04 as stand-alone OS on my Asus-1215P laptop a couple of weeks ago. After installation, it worked fine; but yesterday when I turned on the laptop, it went to the grub menu as usual. However, after that it started having problems: </p>
<blockquote>
<p>There were messages like <strong>ACPI Probing Failed</strong>, then <strong>^[[6~^</strong> kept printing. Then another message said that my cifs drive could not be mounted; after that everything goes into a <strong>loop</strong>....<strong>a bunch of lines printing over and over again, with the Ubuntu loading screen flashing from time to time</strong></p>
</blockquote>
<p>I tried to boot from recovery mode and it worked then, but the resolution is very low and <strong>^[[6~^</strong> was still printed a bunch of times. After googling some more, I tried using the command parameter from the grub menu multiple times in combination with <strong>acpi=off, nomodeset and nosplash</strong>. Now, although Ubuntu successfully booted using all three options, the performance and graphics of the OS are like booting from recovery mode. So, what should I do to solve this problem?</p>
<p><strong>P.S: My laptop's 'L' key and "Enter"/"Return" key do not work, so keep that in mind when giving helpful suggestions/solutions</strong></p>
| 0non-cybersec
| Stackexchange |
[video] WTF Japan? I don't even know what to think anymore.. | 0non-cybersec
| Reddit |
Possible to print more than 100 rows of a data.table?. <p>data.table has a nice feature that truncates printed output to the head and tail of the table.</p>
<p>Is it possible to view / print more than 100 rows at once?</p>
<pre><code>library(data.table)
## Convert the ubiquitous "iris" data to a data.table
dtIris = as.data.table(iris)
## Printing 100 rows is possible
dtIris[1:100, ]
## Printing 101 rows is truncated
dtIris[1:101, ]
</code></pre>
<p>I often have data.table results that are somewhat large (e.g. 200 rows) that I just want to view.</p>
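<p>For reference, a sketch of the knobs I'm aware of (argument and option names assume a reasonably recent data.table version):</p>

```r
## Per-call: ask print() for more rows explicitly
print(dtIris[1:101, ], nrows = 200)

## Session-wide: raise the threshold at which truncation kicks in
options(datatable.print.nrows = 500)   # default is 100
options(datatable.print.topn = 10)     # rows shown at head/tail when truncated
```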
| 0non-cybersec
| Stackexchange |
Rename a file in the internal storage. <p>What's the best/easiest way to rename a file in the application's internal storage? I find it a bit strange that there is a <code>Context.deleteFile()</code> method, but no "move" or "rename" function. Do I have to go all the way through saving the file's contents, deleting it, creating a new one and then copying the contents into that one? Or is there a way to copy a file over an existing file?</p>
<p>Update (Aug. 30, 2012):</p>
<p>As per the suggested solution below, which I cannot get to work:</p>
<ul>
<li>I have a file called shoppinglists.csv</li>
<li>Then I create a new file called shoppinglists.tmp, and copy the contents from shoppinglists.csv AND some new entries into that. The shoppinglist.tmp file is then a new version of the shoppinglists.csv file</li>
<li>Then I delete the old shoppinglists.csv file</li>
<li>Then I need to rename the shoppinglists.tmp file to shoppinglists.csv</li>
</ul>
<p>I tried this:</p>
<pre><code>ctx.deleteFile("shoppinglists.csv"); <--- delete the old file
File oldfile = new File("shoppinglists.tmp");
File newfile = new File("shoppinglists.csv");
oldfile.renameTo(newfile);
</code></pre>
<p>However, this doesn't work. After deleteFile(), nothing more happens, and I'm left with the new shoppinglists.tmp file.</p>
<p>What am I missing?</p>
<p>NB: There are no errors or anything in LogCat.</p>
| 0non-cybersec
| Stackexchange |
unable to run the script. <pre><code>d1=$(date --date="-10 min" "+%Y-%m-%d %H:%M")
d2=$(date --date="-1 min" "+%Y-%m-%d %H:%M")
# Pipe the extracted log window into the loop (the original never fed sed's output to the while loop)
sed -n "/$d1/,/$d2/p" /tmp/samba.log |
while read -r line; do
    # [[ ... -eq ... ]] is an integer comparison; use a string pattern match for log lines
    if [[ $line == *'- Exception from external service:'* ]]; then
        # sendmail's options must stay on one line (or be continued with a trailing backslash)
        echo "Subject: Samba is Down " | /usr/sbin/sendmail -f [email protected] -t [email protected]
    fi
done
</code></pre>
| 0non-cybersec
| Stackexchange |
I made a logo for /r/cats. | 0non-cybersec
| Reddit |
Tangent is one-to-one with a restricted domain.. <p>I want to show the function $ \tan(\pi x - \frac \pi2) $ is one-to-one if <em>x</em> $ \in (0,1) $. But an argument I would normally use to prove a function is one-to-one (letting $ \tan(\pi x_1 - \frac \pi2) $ = $ \tan(\pi x_2 - \frac \pi2) $ and then showing $ x_1 $ = $ x_2 $) doesn't seem to work.</p>
<p>I was hoping someone could give me any suggestions.</p>
<p>Thanks.</p>
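<p>For what it's worth, one standard route (sketched here) avoids solving the tangent equation directly and instead uses strict monotonicity:
for $x\in(0,1)$ we have $\pi x-\frac{\pi}{2}\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, a single branch of $\tan$. With $f(x)=\tan\!\left(\pi x-\frac{\pi}{2}\right)$,
$$f'(x)=\pi\sec^2\!\left(\pi x-\frac{\pi}{2}\right)>0,$$
so $f$ is strictly increasing on $(0,1)$, and a strictly increasing function is one-to-one: $x_1<x_2$ forces $f(x_1)<f(x_2)$.</p>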
| 0non-cybersec
| Stackexchange |
WCF maxes CPU when waiting on _TransparentProxyStub_CrossContext function during call. <p>I'm getting heavy CPU usage when making calls to Cisco's AXL SOAP API using WCF. I start by creating a ServiceModel ClientBase using classes generated from the WSDL, with BasicHttpBinding and TransferMode set to Buffered. When executing a call, the CPU maxes out, and a CPU profile shows that 96% of CPU time is in <code>_TransparentProxyStub_CrossContext@0</code> from clr.dll, called after calls such as <code>base.Channel.getPhone(request);</code>. More precisely, the call maxes out the CPU core that the process is running on.</p>
<p>Here's a snippet of the client created from the WSDL generation:</p>
<pre><code>[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
public partial class AXLPortClient : System.ServiceModel.ClientBase<AxlNetClient.AXLPort>, AxlNetClient.AXLPort
{
public AXLPortClient()
{
}
public AXLPortClient(string endpointConfigurationName) :
base(endpointConfigurationName)
{
}
...
</code></pre>
<p>This is how I create the client:</p>
<pre><code>public class AxlClientFactory : IAxlClientFactory
{
private const string AxlEndpointUrlFormat = "https://{0}:8443/axl/";
public AXLPortClient CreateClient(IUcClientSettings settings)
{
ServicePointManager.ServerCertificateValidationCallback = (sender, certificate, chain, errors) => true;
ServicePointManager.Expect100Continue = false;
var basicHttpBinding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
basicHttpBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
basicHttpBinding.MaxReceivedMessageSize = 20000000;
basicHttpBinding.MaxBufferSize = 20000000;
basicHttpBinding.MaxBufferPoolSize = 20000000;
basicHttpBinding.ReaderQuotas.MaxDepth = 32;
basicHttpBinding.ReaderQuotas.MaxArrayLength = 20000000;
basicHttpBinding.ReaderQuotas.MaxStringContentLength = 20000000;
basicHttpBinding.TransferMode = TransferMode.Buffered;
//basicHttpBinding.UseDefaultWebProxy = false;
var axlEndpointUrl = string.Format(AxlEndpointUrlFormat, settings.Server);
var endpointAddress = new EndpointAddress(axlEndpointUrl);
var axlClient = new AXLPortClient(basicHttpBinding, endpointAddress);
axlClient.ClientCredentials.UserName.UserName = settings.User;
axlClient.ClientCredentials.UserName.Password = settings.Password;
return axlClient;
}
}
</code></pre>
<p>The generated wsdl code for the AXL API is very large. Both initial and subsequent calls have the CPU issue, although subsequent calls are faster. Is there anything else I can do to debug this issue? Is there a way to reduce this high CPU usage?</p>
<p><strong>Update</strong></p>
<p>A bit more info with the bounty:</p>
<p>I've created the C# classes like so:</p>
<pre><code>svcutil AXLAPI.wsdl AXLEnums.xsd AXLSoap.xsd /t:code /l:C# /o:Client.cs /n:*,AxlNetClient
</code></pre>
<p>You have to download the WSDL for Cisco's AXL API from a Call Manager system. I'm using the 10.5 version of the API. I believe a major slowdown is related to XML processing. The WSDL for the API is huge, with the resulting classes coming to 538406 lines of code!</p>
<p><strong>Update 2</strong></p>
<p>I've turned on WCF tracing with all levels. The largest time difference is in the process action activity between "A message was written" and "Sent a message over a channel" in which nearly a full minute passes between these two actions. Other activities (construct channel, open clientbase and close clientbase) all execute relatively fast.</p>
<p><strong>Update 3</strong></p>
<p>I've made two changes to the generated client classes. First, I removed the <code>ServiceKnownTypeAttribute</code> from all the operation contracts. Second, I removed the XmlIncludeAtribute from some of the serializable classes. These two changes reduced the file size of the generated client by more than 50% and had a small impact on test times (a reduction of about 10s on a 70s test result).</p>
<p>I also noticed that I have roughly 900 operation contracts for a single service interface and endpoint. This is due to the wsdl for the AXL API grouping all operations under a single namespace. I'm thinking about breaking this up, but that would mean creating multiple clientbases that would each implement a reduced interface and end up breaking everything that implements this wcf library.</p>
<p><strong>Update 4</strong></p>
<p>It looks like the number of operations is the central problem. I was able to separate out operations and interface definitions by verb (e.g. gets, adds, etc) into their own clientbase and interface (a very slow process using sublime text and regex as resharper and codemaid couldn't handle the large file that's still 250K+ lines). A test of the "Get" client with about 150 operations defined resulted in a 10 second execution for getPhone compared to a previous 60 second result. This is still a lot slower than it should be as simply crafting this operation in fiddler results in a 2 second execution. The solution will probably be reducing the operation count even more by trying to separate operations further. However, this adds a new problem of breaking all systems that used this library as a single client.</p>
| 0non-cybersec
| Stackexchange |
Can't locate import javax.inject.Inject package. <p>I'm trying to implement Dagger as a dependency injector in an IntelliJ project, but my code is failing on:</p>
<pre><code>import javax.inject.Inject;
</code></pre>
<p>Intellij is finding the '<code>javax</code>' package, but not the '<code>inject</code>' package, so it fails.</p>
<p>I am new to Android, so I apologize if this is a no brainer, but can anyone tell me why the inject package is not being found?</p>
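<p>In case it helps: <code>javax.inject</code> is a separate artifact, not part of the JDK or the Android SDK, so it has to be on the classpath. A sketch of the usual Gradle declaration (the version number is an assumption; some Dagger artifacts also pull it in transitively):</p>

```groovy
// build.gradle (module); on older Gradle plugin versions the configuration is `compile`
dependencies {
    implementation 'javax.inject:javax.inject:1'
}
```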
| 0non-cybersec
| Stackexchange |
How to optimize MongoDB query with both $gt and $lte?. <p>I have the following query that is kind of like a reverse range lookup:</p>
<pre><code>db.ip_ranges.find({ $and: [{ start_ip_num: { $lte: 1204135028 } }, { end_ip_num: { $gt: 1204135028 } }] })
</code></pre>
<p>When run with only the $lte identifier, the query returns right away. But when I run with both the $gt and $lte in the same query, it is extremely slow (in seconds).</p>
<p>Both the start_ip_num and end_ip_num fields are indexed.</p>
<p>How can I go about optimizing this query?</p>
<p><strong>EDIT</strong></p>
<p>I get the following when I use the explain() function on the query:</p>
<pre><code>{
"cursor" : "BtreeCursor start_ip_num_1",
"nscanned" : 452336,
"nscannedObjects" : 452336,
"n" : 1,
"millis" : 2218,
"nYields" : 0,
"nChunkSkips" : 0,
"isMultiKey" : false,
"indexOnly" : false,
"indexBounds" : {
"start_ip_num" : [
[
-1.7976931348623157e+308,
1204135028
]
]
}
}
</code></pre>
<p><strong>EDIT 2</strong></p>
<p>Once I added the compound index, the explain() function returns the following:</p>
<pre><code>{
"cursor" : "BtreeCursor start_ip_num_1_end_ip_num_1",
"nscanned" : 431776,
"nscannedObjects" : 1,
"n" : 1,
"millis" : 3433,
"nYields" : 0,
"nChunkSkips" : 0,
"isMultiKey" : false,
"indexOnly" : false,
"indexBounds" : {
"start_ip_num" : [
[
-1.7976931348623157e+308,
1204135028
]
],
"end_ip_num" : [
[
1204135028,
1.7976931348623157e+308
]
]
}
}
</code></pre>
<p>However, the perf is still poor (in seconds).</p>
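<p>A common workaround for this reverse range lookup — when the ranges don't overlap, as IP blocks typically don't — is to query on <code>start_ip_num</code> alone with <code>sort({start_ip_num: -1}).limit(1)</code> and then verify <code>end_ip_num</code> afterwards, turning the long index scan into a single index seek. This isn't from the question itself; as a rough illustration of why that seek is O(log n), here is a stdlib-Python sketch with <code>bisect</code> standing in for the B-tree index:</p>

```python
import bisect

# Sorted, non-overlapping (start, end) ranges stand in for the
# {start_ip_num: 1} index; end is exclusive here.
ranges = [(100, 200), (500, 900), (1204000000, 1205000000)]
starts = [r[0] for r in ranges]  # what the B-tree is ordered by

def lookup(ip):
    """Find the range containing ip, or None: one index seek, no scan."""
    i = bisect.bisect_right(starts, ip) - 1  # last range with start <= ip
    if i >= 0 and ranges[i][1] > ip:         # verify end_ip_num > ip
        return ranges[i]
    return None

print(lookup(1204135028))  # -> (1204000000, 1205000000)
print(lookup(300))         # -> None (falls in a gap between ranges)
```

<p>The single <code>bisect_right</code> call mirrors what the descending sort + limit 1 does against the index: it jumps straight to the candidate range instead of walking the ~450K entries that <code>nscanned</code> shows.</p>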
| 0non-cybersec
| Stackexchange |
What was this hack-bot trying to achieve via directory traversal?. <p>I was walking through my server logs and the following kinda stunned me:</p>
<pre><code>ACCESS ERROR 404 from IP 62.112.152.161:
Path: /index.php?page=weblog&env=../../../../../../../../etc/passwd%00
ACCESS ERROR 404 from IP 62.112.152.161:
Path: /download.php?dlfilename=../../../../../../../../etc/passwd%00
ACCESS ERROR 404 from IP 62.112.152.161:
Path: /download.php?filename=../../../../../../../../etc/passwd%00
ACCESS ERROR 404 from IP 62.112.152.161:
Path: /agb.php?lang=../../../../../../../../etc/passwd%00
ACCESS ERROR 404 from IP 62.112.152.161:
Path: /angemeldet.php?lang=../../../../../../../../etc/passwd%00
</code></pre>
<p>There were lots more variations, always trying to get the same file. They wouldn't succeed even if there were some "download.php"*, but I'm curious what it would do if they did.<br>
Also, I wonder if there's something I can do against this bot in a more global scope, to help other webmasters who might have their site vulnerable to such an attack.</p>
<p>Unfortunately, HTTP headers were not registered.</p>
<p>*I'm running on virtual server where PHP's top dir is the website's root.</p>
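<p>For context on what the payload is after: if a script passed that parameter straight into <code>include()</code> or <code>fopen()</code>, the <code>../</code> sequences would walk up to <code>/etc/passwd</code>, and the <code>%00</code> null byte would (on old PHP versions) truncate any extension the script appended. The generic defense is to canonicalize the user-supplied path and verify it stays under the intended base directory. A minimal sketch — in Python rather than PHP, with illustrative names, not from the question:</p>

```python
import os

def safe_join(base_dir, user_path):
    """Resolve a user-supplied path and reject anything escaping base_dir."""
    base = os.path.realpath(base_dir)
    # Null bytes were historically used to truncate paths in C-backed APIs.
    if "\x00" in user_path:
        raise ValueError("null byte in path")
    candidate = os.path.realpath(os.path.join(base, user_path))
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError("path escapes base directory")
    return candidate
```

<p>The key point is resolving first (<code>realpath</code> collapses <code>..</code> and symlinks) and only then comparing against the base, rather than trying to blacklist <code>../</code> substrings.</p>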
| 1cybersec
| Stackexchange |
$E$ is sequentially closed $\iff$ $E$ is closed always hold or only in metrizable spaces?. <p>Let $(X,\mathcal T)$ a topological space. I know that if $\mathcal T$ is metrizable, and $E\subset X$, then $E$ is closed $\iff$ $E$ is sequentially. I was wondering if this is true on every topological spaces or in metrizable space only ? </p>
<p>I guess that if $E$ is closed, then it's sequentially closed. Indeed, let $(x_n)$ a sequence of $E$ that converge in $E$ and denote $x$ it's limit. Then we can easily show that $x\in \bar E=E$ where the last equality hold because $E$ is closed. </p>
<p>For the converse, I suppose that $E$ is not closed, i.e. there is $x\in \bar E$ s.t. $x\notin E$. I know that for all open set that contain $x$, we have that $U\cap E\neq \emptyset$. But I can't construct a sequence that converge to $x$. I also know that if $X$ is not metrizable, then there is $x\in \bar E$ s.t. no sequence from $E$ converge to $x$. But unfortunately, this doesn't contradict the fact that $E$ is sequentially closed. </p>
<p>So may be this property is not true ? If it's not true, what would be the "weakest" condition on the topology to make this property true ? Hausdorff maybe ?</p>
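<p><em>A standard counterexample, added here as a sketch (not from the question):</em> the equivalence fails in general. Let $X$ be an uncountable set with the co-countable topology, where the open sets are $\emptyset$ and the complements of countable sets. A sequence converges there only if it is eventually constant, so <em>every</em> subset of $X$ is sequentially closed; but the closed sets are exactly the countable sets and $X$ itself, so any uncountable $E \subsetneq X$ is sequentially closed without being closed. The spaces in which "sequentially closed $\implies$ closed" holds are precisely the <em>sequential spaces</em>; first-countability (strictly weaker than metrizability) suffices, while Hausdorff does not — $[0,\omega_1]$ is compact Hausdorff, yet $[0,\omega_1)$ is sequentially closed in it and not closed, since no sequence in $[0,\omega_1)$ converges to $\omega_1$.</p>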
| 0non-cybersec
| Stackexchange |
Why is x <> 1 false when x is null?. <p>I have a query where a row is selected when <code>field1 <> 10</code>.</p>
<p>When <code>field1</code> is null, this predicate is <code>false</code>!</p>
<p>According to my primitive maths, if <code>null <> 10 = false</code>, then <code>10 = null</code>.</p>
<p>What is the rational for this?</p>
| 0non-cybersec
| Stackexchange |
What did 0 say to 8?. Nice belt!
| 0non-cybersec
| Reddit |
Which is our reality?. | 0non-cybersec
| Reddit |
Maximum Length of Android versionName / versionCode (Manifest). <p>I am trying to find out the maximum length of both the android:versionName and android:versionCode attributes of the android manifest file? </p>
<pre><code><manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.xxxx.xxxx"
android:versionCode="185" <--- THIS ATTRIBUTE
android:versionName="1.0.185"> <--- AND THIS ATTRIBUTE
</code></pre>
<p>Is there a maximum value, or will it pretty much allow anything? If there is no maximum, are there certain rules in place?</p>
| 0non-cybersec
| Stackexchange |