Columns: text (string, lengths 3 to 1.74M), label (class label, 2 classes), source (string, 3 distinct values)
Is there any alternate way to avoid deprecation in Apache POI, for HSSF built-in colors?. <p>In my code I want to change the cell colors of a particular column of an HSSFWorkbook if the text is "PASS". But when I was writing the code, lots of methods and constants like <strong>BRIGHT_GREEN.index</strong>, <strong>setFillPattern</strong>, and <strong>SOLID_FOREGROUND</strong> turned out to be deprecated. I have searched for an alternative on the official Apache POI website, but the code given there is also deprecated. I know there is no problem if I just suppress the deprecation warning, but sometimes, after 100-150 lines (rows), the cell color stops changing. Can anyone please tell me whether there is an alternative that avoids the deprecation? FYI: I am using <strong><em>poi-bin-3.17-beta1-20170701</em></strong> jars. Thanks in advance :)</p> <pre><code>if(cell.getStringCellValue().equalsIgnoreCase("Pass")){ HSSFCellStyle style = workbook.createCellStyle(); style.setFillForegroundColor(HSSFColor.BRIGHT_GREEN.index); style.setFillPattern(HSSFCellStyle.SOLID_FOREGROUND); cell.setCellStyle(style); } </code></pre>
0non-cybersec
Stackexchange
High praise for Ant Man from schmoesknows - "favourite movie of the summer along with mad max".
0non-cybersec
Reddit
Just finished The Wheel of Time series. This book series has been a journey of about 5 years. I started reading the first book in the series while a Peace Corps Volunteer in Kenya - my brother had given me a kindle for my birthday and had bought me some books to read. The authors of the series (RIP Robert Jordan) took me through an epic journey. While I am happy to have finished the book series, I am also saddened at the completion of this epic journey. Of particular note, I wanted to highlight and share one of my favorite things of this series. Each book always began with a reference to the wind: >In one Age, called the Third Age by some, an Age yet to come, an Age long past, a wind rose in the Mountains of Mist. The wind was not the beginning. There are neither beginnings nor endings to the turning of the Wheel of Time. But it was *a* beginning. (The Eye of the World by Robert Jordan) The final book ended with the wind: >The wind blew southward, through knotted forests, over shimmering plains and toward lands unexplored. This wind, it was not ending. There are no endings, and never will be endings, to the turning of the Wheel of Time. But it was *an* ending. (A Memory of Light by Brandon Sanderson and Robert Jordan) Such a fitting ending that also reminds me that the cycle of finding great book series, like the Wheel of Time itself, continues without ending, though finishing a series is *an* ending.
0non-cybersec
Reddit
Calculating spans for hollow structural steel tubes. I'm looking to build an 18' long floating plywood workspace along a wall. The back and sides will rest on supports attached directly to studs, but I'm not sure how to support the front edge. It's for light-duty work, but I want it to support someone sitting on it (400 lbs front dead centre). Yes, I tend to always overbuild things. My first thought was square steel tubing, but I have no idea how to determine the appropriate size/thickness. I know of the Sagulator for wood calculations. Is there something similar for steel tubing? I'd have to build it as 2x 9' lengths bolted together with mending plates, if that matters. Or am I crazy for trying to span 18' and should just add a centre support leg/cantilever?
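(Added note, not from the original post; the standard hand calculation, sketched as a sanity check only, not engineering advice. For a simply supported beam with a point load P at mid-span, the maximum deflection is δ = P·L³ / (48·E·I), where E is the modulus of elasticity (about 29,000,000 psi for steel) and I is the tube's moment of inertia, which comes from the manufacturer's tables. Pick a candidate tube, look up I, and check that δ stays under a comfortable limit such as L/360. Note also that the bolted splice at mid-span would sit exactly at the point of maximum bending moment, which is worth accounting for.)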
0non-cybersec
Reddit
Error -32 while establishing USB connection. <p>We are working on a custom board based on an <code>STM32f769ni</code>, with an FT232RL chip, under Ubuntu 16.04. We connect a micro USB cable from the PC to the custom board (the micro USB port is adjacent to the FT232RL chip on our board). When the USB cable is inserted, the following log appears:</p> <p><a href="https://i.stack.imgur.com/4ggl1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ggl1.png" alt="Terminal log"></a></p> <p>I read that there is no need to install any FTDI drivers. On Windows 10 the device likewise reports an unknown device descriptor, and no serial port is displayed under Ports.</p> <p><code>lsusb</code> also displays only the keyboard and mouse, while the kernel log reports:</p> <pre class="lang-none prettyprint-override"><code>$ sudo lsusb Terminal log : device not accepting address, error -32 Unable to enumerate USB device </code></pre> <p>Schematic of the <code>ft232rl</code>-to-USB design in our project:</p> <p><img src="https://i.stack.imgur.com/7dLnq.png" alt="FT232RL ofSchematic"></p>
0non-cybersec
Stackexchange
ITAP of a bunch of fleeting red sevens.
0non-cybersec
Reddit
Prove that $\left|f^{(n)} \left(z_0 \right) \right| \leqslant \frac{n! M}{R^n}$ for every $n$.. <p>Let $f$ be analytic on a domain $\Omega$ containing the closed disk of radius $R$ centered at $z_0$. Show that if $\left|z-z_0 \right| = R \implies \left| f(z) \right| \leqslant M$ then $\left|f^{(n)} \left(z_0 \right) \right| \leqslant \frac{n! M}{R^n}$ for every $n$.</p> <p>I tried applying Cauchy's Integral formula derivatives (since $f$ is analytic and $\Omega$ contains the simple closed curve $C_R(z_0)$ =circle of radius $R$ centered at $z_0$): </p> <p>Since $f^{(n)}(z_0)= \frac{n!}{2 \pi i} \int_{\Gamma} \frac{f(z)}{(z-z_0)^{n+1}}dz$, we have $$\begin{align*} \left|f^{(n)}(z_0) \right| &amp;= \left|\frac{n!}{2 \pi i} \int_{\Gamma} \frac{f(z)}{(z-z_0)^{n+1}}dz \right| \\&amp;= \left|\frac{n!}{2 \pi i} \right| \int_{C_{R}(z_0)} \frac{\left|f(z) \right|}{\left|(z-z_0) \right|^{n+1}}dz \\&amp; \leqslant \left|\frac{n!}{2 \pi i} \right| \int_{C_{R}(z_0)} \frac{M}{R^{n+1} }dz \end{align*}.$$</p> <p>Is this the right way to go about it? I wasn't really sure what to do with the modulus when the integral was involved, but I used facts like $\left|z^n \right|= \left|z \right|^n$ and $\left|uv \right| = \left|u \right| \left|v \right|$. Where do I go from here?</p>
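<p><em>(Added sketch, not from the original question, with one correction: the second line of the derivation should be an inequality using $|dz|$, since the modulus of an integral is at most the integral of the modulus.)</em> Parametrize $C_R(z_0)$ by $z = z_0 + Re^{i\theta}$, $\theta \in [0, 2\pi]$, so that $|z - z_0| = R$ and $|dz| = R\,d\theta$. Then $$\left|f^{(n)}(z_0)\right| \leqslant \frac{n!}{2\pi} \int_{C_R(z_0)} \frac{|f(z)|}{|z-z_0|^{n+1}}\,|dz| \leqslant \frac{n!}{2\pi}\cdot\frac{M}{R^{n+1}}\cdot 2\pi R = \frac{n!\,M}{R^n},$$ since the circle has length $2\pi R$. This is exactly Cauchy's estimate.</p>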
0non-cybersec
Stackexchange
Converting int into NSString. <p>I want to convert a random int number in the <code>NSString</code> and then assign the NSString to another <code>NSString</code> but it crashes the app</p> <p>I am doing the following</p> <pre><code>int mynumber =(arc4random() % 1000 ); unique = [NSString stringWithFormat:@"%d",mynumber]; NSLog(unique) NSString*test=unique; </code></pre> <p>it gives crash when i write last line;</p> <p>It also prints values when I nslog the <code>unique</code> string. </p>
0non-cybersec
Stackexchange
Evaluation of $\lim\limits_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$. <p>One of the previous posts made me think of the following question: Is it possible to evaluate this limit without L'Hopital and Taylor?</p> <p>$$\lim_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$$</p>
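<p><em>(Added sketch, not from the original question; one classical approach, assuming the limit exists, which needs separate justification.)</em> Let $L = \lim_{x\to 0} \frac{\tan x - x}{x^3}$. Replacing $x$ by $2x$ gives $\lim_{x\to 0} \frac{\tan 2x - 2x}{8x^3} = L$. Using $\tan 2x = \frac{2\tan x}{1-\tan^2 x}$, $$\frac{\tan 2x - 2x}{x^3} = \frac{2(\tan x - x) + 2x\tan^2 x}{x^3(1-\tan^2 x)} = \frac{2\cdot\frac{\tan x - x}{x^3} + 2\cdot\frac{\tan^2 x}{x^2}}{1-\tan^2 x} \longrightarrow 2L + 2,$$ using $\tan x/x \to 1$. Hence $8L = 2L + 2$, so $L = \tfrac{1}{3}$.</p>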
0non-cybersec
Stackexchange
Disk full error despite free space. <p>In my Ubuntu 20.04 install I have about 150 GB of disk space, and my root partition only holds about 15 GB of data. Here is the output of the <code>df</code> command:</p> <pre><code>Filesystem Size Used Avail Use% Mounted on /dev/sda3 147G 14G 126G 10% / /dev/sda2 240M 7.8M 232M 4% /boot/efi /dev/sda1 782G 64G 718G 9% /media/mirhosseini/ADATA HD710 </code></pre> <p>And the output of <code>df -i</code>:</p> <pre><code>Filesystem Inodes IUsed IFree IUse% Mounted on udev 478466 604 477862 1% /dev tmpfs 485498 1049 484449 1% /run /dev/sda3 9822208 351043 9471165 4% / tmpfs 485498 79 485419 1% /dev/shm tmpfs 485498 6 485492 1% /run/lock tmpfs 485498 18 485480 1% /sys/fs/cgroup /dev/loop1 24339 24339 0 100% /snap/gnome-3-34-1804/33 /dev/loop0 10765 10765 0 100% /snap/core18/1705 /dev/loop2 10764 10764 0 100% /snap/core18/1754 /dev/loop4 62342 62342 0 100% /snap/gtk-common-themes/1506 /dev/loop3 24339 24339 0 100% /snap/gnome-3-34-1804/36 /dev/loop5 15827 15827 0 100% /snap/snap-store/433 /dev/loop6 15827 15827 0 100% /snap/snap-store/454 /dev/loop7 459 459 0 100% /snap/snapd/7264 /dev/loop8 462 462 0 100% /snap/snapd/7777 /dev/sda2 0 0 0 - /boot/efi tmpfs 485498 102 485396 1% /run/user/1000 /dev/sda1 752931592 14911 752916681 1% /media/mirhosseini/ADATA HD710 </code></pre> <p>As you can see, there is about 126 GB of free space, but I still face "insufficient space" or "disk full" errors. This is the screenshot of the disk usage analyzer: <a href="https://i.stack.imgur.com/6Req9.png" rel="nofollow noreferrer">screenshot of disk usage analyzer</a></p>
0non-cybersec
Stackexchange
Windows feature that makes cursor turn into block in a text box. <p>I have searched around online and cannot seem to find an adequate answer to my question.</p> <p>Every once in a while, I accidentally press some key combination on my keyboard that makes my cursor turn from the regular thin vertical line to a blue box that acts and behaves the exact same as the regular cursor but looks different. </p> <p><img src="https://i.stack.imgur.com/tBlFz.jpg" alt="(Screenshot of the irregular cursor)"> </p> <p>Since this only affects the cursor while I am in Google Chrome, I assume it is a setting in Chrome, but I don't know. It also appears to be a problem isolated to one tab at a time, which to me is baffling.</p> <p>It annoys me and the only way I know of to make it go away is to restart my computer. </p> <p>Thanks in advance!</p>
0non-cybersec
Stackexchange
Brothers never do part.
0non-cybersec
Reddit
Tesla made a monumental announcement about batteries last week and everyone missed it. Tesla had a shareholders' meeting last week and made an announcement which absolutely blew my mind. They believe they will be able to produce batteries for under $100/kWh within two years. If you had told anyone in the industry that a company would be achieving these prices before the end of the decade, they would have smiled and told you politely that you have no idea what you're talking about. A couple of years ago, $350/kWh was considered the industry standard. Now look where we are. These prices will have some truly impressive implications. It basically means that Tesla's vehicles can be price-competitive with every vehicle in the market, and there will be nothing standing in the way of electric vehicles getting 80-90% market share except the time it takes to build the factories to produce all these batteries and cars. So we are now at the beginning of the real electric revolution: one where electric cars are not limited by technology or price, but rather by the rate at which companies can build new factories to produce batteries for these cars. This is why Volkswagen recently announced they'll be investing [$48 billion in electric vehicle production](https://cleantechnica.com/2018/05/04/volkswagen-doubles-ev-battery-order-to-48-billion/). They are the first big auto company outside China to recognize how important it is to produce batteries at scale.
0non-cybersec
Reddit
Trying to log in to Ubuntu in VirtualBox but not getting the GUI. <p>I have created a VM using VirtualBox Version 5.2.4 r119785 (Qt5.6.2). The host OS is Windows 7 and I have installed Ubuntu 16.04.3 on the VM. The initial boot was successful after the installation, but when I shut the VM down and restarted it, it boots to a command-line login. I can use the credentials, but it won't log in to the GUI. Can someone help? <a href="https://i.stack.imgur.com/34TnS.jpg" rel="nofollow noreferrer">enter image description here</a>. Please see the attached image. Thank you. </p>
0non-cybersec
Stackexchange
Bruce Schneier: Attacking Tor - how the NSA targets users' online anonymity.
1cybersec
Reddit
How to use spot instance with amazon elastic beanstalk?. <p>I have one infrastructure that uses Amazon Elastic Beanstalk to deploy my application. I need to scale my app by adding some spot instances, which EB does not support.</p> <p>So I create a second autoscaling group from a launch configuration with spot instances. The autoscaling group uses the same load balancer created by Beanstalk.</p> <p>To bring up instances with the latest version of my app, I copy the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p> <p>This works fine, but:</p> <ol> <li><p>how do I update the spot instances that have come up from the second autoscaling group when Beanstalk updates the instances it manages with a new version of the app?</p> </li> <li><p>is there another way, as easy and elegant, to use spot instances and enjoy the benefits of Beanstalk?</p> </li> </ol> <p><strong>UPDATE</strong></p> <p>Elastic Beanstalk has supported spot instances since 2019; see: <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
0non-cybersec
Stackexchange
[WP] Nuclear war is no longer an option without risking the future of the planet. Following the ancient laws of battle, each country involved in a military dispute shall elect a champion to face off against the opposing country's champion in a fight to the death.
0non-cybersec
Reddit
How to compute $7^{7^{7^{100}}} \bmod 100$?. <p>How to compute $7^{7^{7^{100}}} \bmod 100$? Is $$7^{7^{7^{100}}} \equiv7^{7^{\left(7^{100} \bmod 100\right)}} \bmod 100?$$ Thank you very much.</p>
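<p><em>(Added worked sketch, not from the original question.)</em> Not quite: exponents reduce modulo the multiplicative order of the base (or modulo $\lambda(100)$ or $\varphi(100)$), not modulo $100$ itself. Here $\gcd(7,100)=1$ and $7^4 = 2401 \equiv 1 \pmod{100}$, so only the top exponent mod $4$ matters. Since $7 \equiv -1 \pmod 4$ and $7^{100}$ is odd, $$7^{7^{100}} \equiv (-1)^{\text{odd}} \equiv 3 \pmod 4, \qquad\text{hence}\qquad 7^{7^{7^{100}}} \equiv 7^3 = 343 \equiv 43 \pmod{100}.$$ (In this particular case the proposed reduction mod $100$ happens to give the same answer, since $7^{100} \equiv 1 \pmod{100}$ and $7^7 \equiv 43 \pmod{100}$, but the step is not valid in general.)</p>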
0non-cybersec
Stackexchange
[PIC] My 7 year old fur baby, Zekea..
0non-cybersec
Reddit
For those in the U.S. - Are we actually screwed?. I want someone to clarify this.
0non-cybersec
Reddit
"Thirty-four percent of workers have nothing set aside for retirement and 2.2 million Americans over the age of 60 are still saddled with $43 billion of student loan debt.". My god... http://finance.yahoo.com/news/-i-m-never-going-to-be-able-to-retire--134736593.html
0non-cybersec
Reddit
Is there any Android version of R (without rooting the device)?. <p>Is there any version/equivalent of <a href="http://www.r-project.org/" rel="noreferrer">R</a> for the Android platform, specifically a <code>.apk</code> file?</p> <p>If not, how does one build it from the <a href="http://cran.r-project.org/sources.html" rel="noreferrer">source</a> without <em>rooting</em> the device?</p> <p>(R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS.)</p>
0non-cybersec
Stackexchange
This kid had the best Halloween costume.
0non-cybersec
Reddit
How?. With all the scientific and philosophical advancements and discoveries we've made, how the fuck do millions of people still believe in religion?
0non-cybersec
Reddit
The Consequences of War - a war photographer speaks about the effects of war on soldiers and their families, himself, and the country. .
0non-cybersec
Reddit
How to position a p:dialog in PrimeFaces after its resizing. <p>I have a PrimeFaces <code>p:dialog</code> that is resized as new components are inserted while it is open ('show' state). However, its position doesn't change, and its size keeps increasing from the bottom-left corner down past the bottom of the page.</p> <p>I need to reposition it every time I render new components dynamically. Is there any JavaScript function I can call on its widget to reposition it?</p> <p>I'm using PrimeFaces 3.5 with Mojarra 2.1.13.</p>
0non-cybersec
Stackexchange
It is a tragedy that the Pokemon Platinum Soundtrack is not on Spotify, it's time to change that..
0non-cybersec
Reddit
MS Excel 2010 merge two sheets. <p>I have an inventory I do each month and have a fairly simple question but not sure what is the best way to complete this task. I have 3 columns. ItemNumber, ItemName, and TotalQTY. The first sheet is what I actually have on hand in the warehouse the second sheet is what our system says we should have. So far very simple. The problem is the system has items that we do not have in our warehouse and some items that are in the warehouse are not in the system. I want to merge the two sheets so it will have all of the data on one sheet while showing the TotalQTY of each so I can see the variance between the two. I understand that some rows will not have a variance since they will not be in both sheets. The end product I desire is columns ItemNumber, ItemName, TotalQtyWarehouse, TotalQtySystem, and Variance.</p>
0non-cybersec
Stackexchange
Auto create and name new sheet in Google Spreadsheet. <p>How can I automatically create and name new sheets in a google spreadsheet from a list of names, such as a student roll sheet? I would like each new sheet to be created when I add corresponding names to a list in a spreadsheet column. The new sheets can be in the same spreadsheet.</p>
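<p><em>(Added sketch, not from the original question. The native route would be an Apps Script onEdit trigger; below is a minimal batch sketch using the third-party Python <code>gspread</code> library. The spreadsheet name, the roster living in column A of the first sheet, and a configured service account are all assumptions.)</em></p> <pre><code># Minimal sketch with gspread (pip install gspread); assumes a Google service
# account configured per gspread's docs and shared on the target spreadsheet.
import gspread

gc = gspread.service_account()            # reads service-account credentials
sh = gc.open("Class Roster")              # hypothetical spreadsheet name
roster = sh.sheet1.col_values(1)          # student names listed in column A
existing = {ws.title for ws in sh.worksheets()}

for name in roster:
    if name and name not in existing:     # one new sheet per new name
        sh.add_worksheet(title=name, rows=100, cols=26)
</code></pre> <p>Re-running the script after adding names to the column creates only the missing sheets, since existing titles are skipped.</p>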
0non-cybersec
Stackexchange
Prey and its connection to System/Bioshock. I was playing through the new 1-hour demo of Prey tonight (which I am absolutely in love with at the moment) and couldn't help but constantly think about how it feels so much like a spiritual successor to System Shock and Bioshock. I mean, you start with a wrench, nothing is as it seems, and it's a sort of horror game with RPG elements. However, a little Easter egg I found and enjoyed was immediately after you see the video of yourself in your office: after it's interrupted, it says, "LOOKING GLASS SERVER: CONNECTION LOST" Of course the studio that made System Shock (which then became Irrational games) was called Looking Glass Studios. Just thought this was a neat little tidbit I'd share! (No image because my phone cam is potato quality)
0non-cybersec
Reddit
Dragon Breath Ice Cream.
0non-cybersec
Reddit
to look tough.
0non-cybersec
Reddit
Passing length in form of a string instead of measure. <p>I'd like to pass a measure of length in the form of a string, i.e.,</p> <pre><code>\setlength{string} </code></pre> <p>or</p> <pre><code>\setlength{'string'} </code></pre> <p>instead of</p> <pre><code>\setlength{2.5cm} </code></pre> <p>I seem to remember seeing something like that somewhere in LaTeX, but can't find it. I appreciate your input!</p>
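<p><em>(Added sketch, not from the original question; the macro names here are made up.)</em> <code>\setlength</code> fully expands its second argument while TeX scans for a dimension, so a macro whose expansion is a dimension string such as <code>2.5cm</code> works directly:</p> <pre><code>\newlength{\mylen}
\newcommand{\lenstring}{2.5cm}   % the "string" holding the measure
\setlength{\mylen}{\lenstring}   % behaves like \setlength{\mylen}{2.5cm}
\hspace{\mylen}
</code></pre>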
0non-cybersec
Stackexchange
Punctured open set is not contractible. <p>Let <span class="math-container">$U\subseteq\mathbb{R}^2$</span> be an open subset, and let <span class="math-container">$x\in U$</span>. Then <span class="math-container">$U\setminus\{x\}$</span> is not contractible. A space <span class="math-container">$X$</span> is called contractible if the identity map on <span class="math-container">$X$</span> is homotopic to a constant map <span class="math-container">$X\to X$</span>. So, I have to show that <span class="math-container">$Id_{U\setminus\{x\}}$</span> can not be homotopic to a constant map. What are tools to show that maps are not homotopic?</p>
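<p><em>(Added sketch, not from the original question.)</em> The usual tool is a homotopy invariant such as the fundamental group: homotopic maps induce the same homomorphism, and a contractible space has trivial $\pi_1$. Since $U$ is open, choose $r > 0$ with $\overline{B_r(x)} \subseteq U$, and consider $$\rho: U\setminus\{x\} \to S^1, \qquad \rho(z) = \frac{z-x}{|z-x|}.$$ The composite of the inclusion $\iota: \partial B_r(x) \hookrightarrow U\setminus\{x\}$ with $\rho$ is a homeomorphism of circles, hence an isomorphism on $\pi_1 \cong \mathbb{Z}$. If $U\setminus\{x\}$ were contractible, then $\pi_1(U\setminus\{x\})$ would be trivial and the induced composite $\pi_1(S^1) \to \pi_1(U\setminus\{x\}) \to \pi_1(S^1)$ would be the zero map, a contradiction.</p>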
0non-cybersec
Stackexchange
Watch a memory range in gdb?. <p>I am debugging a program in gdb and I want the program to stop when the memory region 0x08049000 to 0x0804a000 is accessed. When I try to set memory breakpoints manually, gdb does not seem to support more than two locations at a time.</p> <pre><code>(gdb) awatch *0x08049000 Hardware access (read/write) watchpoint 1: *0x08049000 (gdb) awatch *0x08049001 Hardware access (read/write) watchpoint 2: *0x08049001 (gdb) awatch *0x08049002 Hardware access (read/write) watchpoint 3: *0x08049002 (gdb) run Starting program: /home/iblue/git/some-code/some-executable Warning: Could not insert hardware watchpoint 3. Could not insert hardware breakpoints: You may have requested too many hardware breakpoints/watchpoints. </code></pre> <p>There is already a question where this has been asked, and the answer was that it may be possible to do this with valgrind. Unfortunately the answer does not contain any examples or a reference to the valgrind manual, so it was not very enlightening: <a href="https://stackoverflow.com/questions/6764544/how-can-gdb-be-used-to-watch-for-any-changes-in-an-entire-region-of-memory">How can gdb be used to watch for any changes in an entire region of memory?</a></p> <p>So: how can I watch the whole memory region?</p>
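<p><em>(Added note, not from the original question; a hedged sketch, so verify against your gdb version. x86 exposes only four hardware debug registers, which is where the limit comes from. One known workaround is to force software watchpoints and watch the whole range as a single cast expression; this is very slow, since gdb then single-steps the program, but it covers the full region.)</em></p> <pre><code>(gdb) set can-use-hw-watchpoints 0
(gdb) watch *(char (*)[0x1000]) 0x08049000
</code></pre>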
0non-cybersec
Stackexchange
Finally have a decent battlestation!.
0non-cybersec
Reddit
Walter White motivates..
0non-cybersec
Reddit
My monitors turn off when playing specific types of computer games (battle royale type). ###Troubleshooting Help: **What is your parts list? [Consider formatting your parts list.](http://www.reddit.com/r/buildapc/wiki/pcpp)** [PCPartPicker part list](https://pcpartpicker.com/list/3pmsHN) / [Price breakdown by merchant](https://pcpartpicker.com/list/3pmsHN/by_merchant/)

Type|Item|Price
:----|:----|:----
**CPU** | [Intel - Core i5-6500 3.2GHz Quad-Core Processor](https://pcpartpicker.com/product/xwhj4D/intel-cpu-bx80662i56500) | $194.99 @ Amazon
**CPU Cooler** | [Cooler Master - Hyper 212 EVO 82.9 CFM Sleeve Bearing CPU Cooler](https://pcpartpicker.com/product/hmtCmG/cooler-master-cpu-cooler-rr212e20pkr2) | $19.99 @ Newegg
**Motherboard** | [ASRock - Fatal1ty Z170 Gaming K4 ATX LGA1151 Motherboard](https://pcpartpicker.com/product/GYH48d/asrock-motherboard-fatal1tyz170gamingk4) | $135.99 @ Amazon
**Memory** | [G.Skill - Ripjaws V Series 8GB (2 x 4GB) DDR4-2400 Memory](https://pcpartpicker.com/product/xjp323/gskill-memory-f42400c15d8gvr) | $91.99 @ Newegg
**Video Card** | [MSI - Radeon R9 390 8GB Video Card](https://pcpartpicker.com/product/ymqbt6/msi-video-card-r9390gaming8g) |-
**Power Supply** | [Rosewill - 750W 80+ Platinum Certified Fully-Modular ATX Power Supply](https://pcpartpicker.com/product/7wVBD3/rosewill-power-supply-quark750) | $119.99 @ Newegg Marketplace
| *Prices include shipping, taxes, rebates, and discounts* | |
| Total (before mail-in rebates) | $572.95 |
| Mail-in rebates | -$10.00 |
| **Total** | **$562.95** |
| Generated by [PCPartPicker](http://pcpartpicker.com) 2018-02-01 04:00 EST-0500 |

**Describe your problem. List any error messages and symptoms. Be descriptive.** So when I play battle royale type games (Fortnite, PUBG), my monitors turn off. My PC stays on, though. I know this because usually when I have a video playing on the second monitor, the sound from the video keeps going for a while before that eventually stops. This doesn't happen with any other games that I play, like League, OW, MMORPGs, etc. Also, I should note that this only happens sometimes. There are days where I can play these types of games without a problem all day, and then there are days where it just turns off on its own. **List anything you've done in attempt to diagnose or fix the problem.** I've tried cleaning out the dust from my PC, but I don't really have any of the proper equipment to clean all of the dust in the right way, and I'm too poor right now to buy the cans of compressed air that are meant for computers and computer parts, so I just blow off what I can using my mouth. edit: I checked the task manager to see my GPU usage in the performance tab. It stays at a regular 99%. I'm not sure what that means, though, but I don't think that's very good. edit 2: So I checked the temp while I was running the game and it hovered around 80-82 degrees when the monitor shut off on its own. I manually set the fan to run at 100% to be able to get these temps.
0non-cybersec
Reddit
Apple's new 5se phone releases in March.
0non-cybersec
Reddit
Women that suck dick are hoes.
0non-cybersec
Reddit
Is it natural that $\overline{\int f}=\int\bar f$?. <p>Is it natural that $$\overline{\int f}=\int\bar f\ \ ?$$ </p> <p>I tried to prove it, but with no success. </p>
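<p><em>(Added sketch, not from the original question.)</em> Yes, for an integrable complex-valued $f$: write $f = u + iv$ with $u, v$ real-valued. Then, by (real-)linearity of the integral, $$\overline{\int f} = \overline{\int u + i\int v} = \int u - i\int v = \int (u - iv) = \int \bar f.$$</p>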
0non-cybersec
Stackexchange
What am I doing wrong to find this expected value?. <p>If I have 100,000 dollars to invest in stocks, and I invest 1000 dollars in any particular stock, my profit will be 200, 100, 0 or -100 dollars with probability .25 each. There are 100 different stocks to choose from and they all behave independently of each other.</p> <p>a) How do I find the expected value if I invest 100,000 dollars in one stock, and</p> <p>b) if I invest 1000 in 100 different stocks?</p> <p>What I did was $$E(X)=100[200*.25+100*.25+0*.25-100*.25]=5000$$</p> <p>However, the actual problem is asking about the probability of the profit being 8000 dollars or more, and in order to do that I'm assuming the distribution is normal, so I'm looking for $Var(X)$ so I can get the standard deviation. My $E(X^2)-(E(X))^2$ isn't adding up because I'm getting a higher $(E(X))^2$ than $E(X^2)$. So my expected value must be wrong. Or am I doing something else wrong?</p>
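<p><em>(Added worked sketch, not from the original question.)</em> The expected value is fine; the variance is where it goes wrong: compute it per stock, then scale. For one 1000-dollar position, $$E(X_i) = 50, \qquad E(X_i^2) = \tfrac{1}{4}\left(200^2 + 100^2 + 0 + (-100)^2\right) = 15000, \qquad Var(X_i) = 15000 - 50^2 = 12500.$$ (a) All 100,000 in one stock makes the profit $100X_i$: mean $5000$, variance $100^2\cdot 12500 = 1.25\times 10^8$, sd $\approx 11180$. (b) 100 independent stocks: mean $5000$, variance $100\cdot 12500 = 1.25\times 10^6$, sd $\approx 1118$; here the normal approximation is justified by the CLT, giving $$P(\text{profit} \ge 8000) \approx P\!\left(Z \ge \tfrac{3000}{1118}\right) \approx P(Z \ge 2.68) \approx 0.004.$$</p>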
0non-cybersec
Stackexchange
removing SSLv3 does not seem to fix POODLE problem. <p>I have removed SSLv3 from my Apache 2.2.15 ssl.conf file with the line</p> <pre><code>SSLProtocol All -SSLv2 -SSLv3 </code></pre> <p>I have restarted httpd but POODLE tests still show vulnerability exists with POODLE. What else could be the issue? Thanks</p>
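<p><em>(Added note, not from the original question; two hedged things to check, since the directive itself looks right.)</em> A global <code>SSLProtocol</code> setting can be silently overridden by a per-<code>VirtualHost</code> <code>SSLProtocol</code> line, so confirm the setting appears in (or is inherited by) every SSL vhost, and that the restarted httpd is the one actually serving the tested IP and port. The result can also be verified directly; if SSLv3 is really disabled, this handshake must fail (requires an openssl build that still supports <code>-ssl3</code>):</p> <pre><code>openssl s_client -connect example.com:443 -ssl3
</code></pre>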
0non-cybersec
Stackexchange
#1 Song in the World Zovii Da Great- 2020.
0non-cybersec
Reddit
Ottawa Senators players caught knocking coaches, laughing about team on video.
0non-cybersec
Reddit
Solving $c(n, n-2) - p(n, 2) = 7 - n$. <p>To help with a friend's homework, I was asked how to solve this equation for $n$: $$c(n, n-2) - p(n, 2) = 7 - n$$</p> <p>Having no further information about what $c$ and $p$ are supposed to be (and they don't seem to know either), I assumed they were probably combination and permutation. But when I expanded that out, I was left with the polynomial equation $n^2 - 3n + 14 = 0$, which has complex solutions. Is there something else obvious I'm missing that $c$ and $p$ could be here?</p>
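<p><em>(Added check, not from the original question.)</em> The algebra under that reading does come out as stated: $$\binom{n}{n-2} - P(n,2) = \frac{n(n-1)}{2} - n(n-1) = -\frac{n(n-1)}{2},$$ and setting $-\frac{n(n-1)}{2} = 7 - n$ gives $n(n-1) = 2n - 14$, i.e. $n^2 - 3n + 14 = 0$, whose discriminant is $9 - 56 = -47 < 0$. So with $c$ as combination and $p$ as permutation there is no real (let alone integer) solution, which suggests the problem statement itself is off; the algebra is not the issue.</p>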
0non-cybersec
Stackexchange
[Spoilers] "I don't understand" Parody Rant [Inou-Battle wa Nichijou-kei no Naka de].
0non-cybersec
Reddit
How do you get a percentage of how far a value is from the expected value. <p>I have an expected value, say 5 (minimum) or 6 (minimum).</p> <p>Then I have 4 values: [3, 10, 6, 8].</p> <p>I would like to compute, as a percentage, how far the 4 values are from the expected value. 100% means they are all within the minimum and maximum, and the farther they are from the expected value, the lower the percentage.</p> <p>I was looking for a mathematical way of computing this. One way is to assign a true-or-false score to each, meaning it either matches the minimum/maximum value or it does not.</p> <p>[3(0.0), 10(0.0), 6(1.0), 8(0.0)] would give 25%</p> <p>But this solution is too crude, because I want a more fine-grained computation of how far each value is from the expected value.</p> <p>[3(0.0), 10(0.0), 6(1.0), 8(0.0)] would give X% [1(0.0), 10(0.0), 6(1.0), 9(0.0)] would give Y% </p> <p>Y% should be lower than X%, since those values deviate farther from the mean.</p> <p>I do have a boundary on the expected values, 1 to 30. I tried to read up on statistics like standard deviation, but that is the deviation from the mean. Even if I change it to deviation from the expected value, how do I get a percentage of how far it is from the expected value?</p> <p>Updated: I guess I already have an idea for this one. Let's say there are N=20 available numbers to be taken, and X=4 items that share these 20 slots. I get the average, A=N/X=5, and compute the sum of the absolute differences of each item from this average, S=sum(abs(Ni-A)). The percentage would then be 1-(S/N).</p>
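<p><em>(Added sketch, not from the original question; a direct Python rendering of the poster's updated idea, with the function name and the clamp at zero being my own choices.)</em></p> <pre><code># score = 1 - S/N, where S is the total absolute deviation from the
# per-item average A = N / len(values).
def closeness_score(values, total=20):
    avg = total / len(values)                  # expected share per item
    s = sum(abs(v - avg) for v in values)      # summed absolute deviation
    return max(0.0, 1 - s / total)             # clamp so extreme inputs don't go negative

print(closeness_score([3, 10, 6, 8]))   # 0.45 for the example in the post
</code></pre>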
0non-cybersec
Stackexchange
Vectors parallel to plane, perpendicular to another vector. <p>From Anton, I have this simple-looking LA question:</p> <p><em>Find all unit vectors parallel to the yz plane that are perpendicular to the vector (3, 1, -2)</em></p> <p>Since this vector is sloped in all 3 dimensions, and since the yz plane is flat in the x dimension, I am baffled as to how this can have a solution.</p> <p>I found a near-exact version of this question on another forum, but the vector given there <em>did</em> have a zero component.</p> <p>I'm clearly missing some obvious detail; a quick pointer would be much appreciated.</p>
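<p><em>(Added sketch, not from the original question.)</em> It is the sought vector, not the given one, that carries the zero component: parallel to the yz plane means $v = (0, b, c)$. Perpendicularity only requires the dot product to vanish, $$v \cdot (3, 1, -2) = b - 2c = 0 \implies b = 2c,$$ and the unit condition $b^2 + c^2 = 5c^2 = 1$ gives $c = \pm\tfrac{1}{\sqrt5}$, hence $$v = \pm\left(0, \tfrac{2}{\sqrt5}, \tfrac{1}{\sqrt5}\right).$$</p>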
0non-cybersec
Stackexchange
BBC News - Updated Sherlock 'will be back'.
0non-cybersec
Reddit
RAID 5 - How do I detect if a disk has failed, and which one?. <p>I bought an IBM eSeries with 5 disks of 36GB each.</p> <p>I would like to make a RAID 5 array out of them.</p> <p>I want to know how I can detect whether a disk has failed or needs to be changed. If one fails, will the system continue to operate anyway? And how can I know which disk has failed? How does the system tell me which disk to change? And how can I monitor the disk rebuild and know when the rebuild is done?</p> <p>I have so many questions about this RAID thing, sorry :)</p> <p>P.S.: I'll use Debian 6.</p>
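<p><em>(Added note, not from the original question; a hedged sketch assuming Linux software RAID via mdadm and an array named /dev/md0. With a hardware RAID controller you would use the vendor's tool instead.)</em> A RAID 5 array keeps running in degraded mode after a single-disk failure, and both commands below identify the failed member and show rebuild progress:</p> <pre><code>cat /proc/mdstat          # a [UUU_U]-style map marks the failed slot; resync % shows below it
mdadm --detail /dev/md0   # per-disk state (active/faulty/spare) and rebuild status
</code></pre>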
0non-cybersec
Stackexchange
Larger fit joggers?. Been trying on joggers lately (H&M, vans) and can't seem to find ones that fit loose enough. I'm 6'4 with some pretty huge calves and thighs from mountain biking every summer throughout jasper and banff. Wondering if any other long/muscular legged guys on this sub have found any joggers that fit nice, cause I sure am having a hard time. First post on sub, please be gentle ;). Thanks for the read
0non-cybersec
Reddit
org.koin.android.error.MissingAndroidContextException: when trying to test app with context. <p>I want to write a test for <code>koin</code>. I use a <code>RoomDatabase</code>, which receives a context in its constructor. The app works well, but the test fails:</p> <blockquote> <p>Can't resolve Application instance. Please use androidContext() function in your KoinApplication configuration.</p> </blockquote>
0non-cybersec
Stackexchange
The funniest account of how following due process will get you five star service from huge corporations.
0non-cybersec
Reddit
'Homosexuality is not an illness': Germany plans to ban conversion therapy this year, health minister announces.
0non-cybersec
Reddit
UK with the clapback.
0non-cybersec
Reddit
Don’t steal from your family.
0non-cybersec
Reddit
If you haven't seen this Cirque du Soleil stunt...you are missing the internets..
0non-cybersec
Reddit
Inequality with concave functions in expected values. <p>I'm working on an engineering problem and I manage to reduce it to the following claim, but I'm not sure if it is true. It will be great if someone can give me some ideas!</p> <p>Let $u(x)$ is an increasing and concave function such that $u(0)=0$.</p> <p>Let $X_1,..X_n$ be n random variables independently and identically distributed with support $S\subseteq[0,\infty)$, and define $Y_j=r_j X_j$ where $0&lt;r_1\leq r_2 \leq...\leq r_n$ are given.</p> <p>Problem: Is it always possible to find a probability vector $p\in\mathbb{R}^n$ (i.e $p\geq0$ and $\sum_{i=1}^{n} p_i = 1$), such that $\forall i$: $\mathbb{E}[u(p_i\sum_{j=1}^{n} Y_j)-u(Y_i)]&gt;0$</p> <p>Idea: Maybe this helps: I can prove that the claim is true IF the next claim is also true (but clearly I don't know if it is): $\forall p \exists i$: $\mathbb{E}[u(p_i\sum_{j=1}^{n} Y_j)-u(Y_i)]&gt;0$</p> <p>Notes: I believe it is possible, and I have tried using Jensen, Karamata, subadditivity, and Sperner's lemma, but none of these work. Any idea will be super welcome. Thanks!</p>
0non-cybersec
Stackexchange
Fuck The Police :The Mixtape (Ft. Boosie, 2pac, Lil Baby, NWA).
0non-cybersec
Reddit
Implementing the TD-Gammon algorithm. <p>I am attempting to implement the algorithm from the <a href="https://cling.csd.uwo.ca/cs346a/extra/tdgammon.pdf" rel="noreferrer">TD-Gammon article</a> by Gerald Tesauro. The core of the learning algorithm is described in the following paragraph:</p> <blockquote> <p><a href="https://i.stack.imgur.com/ZaXPB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZaXPB.png" alt="enter image description here"></a></p> </blockquote> <p>I have decided to have a single hidden layer (if that was enough to play world-class backgammon in the early 1990's, then it's enough for me). I am pretty certain that everything except the <code>train()</code> function is correct (they are easier to test), but I have no idea whether I have implemented this final algorithm correctly.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np class TD_network: """ Neural network with a single hidden layer and a Temporal Displacement training algorithm taken from G. Tesauro's 1995 TD-Gammon article. """ def __init__(self, num_input, num_hidden, num_output, hnorm, dhnorm, onorm, donorm): self.w21 = 2*np.random.rand(num_hidden, num_input) - 1 self.w32 = 2*np.random.rand(num_output, num_hidden) - 1 self.b2 = 2*np.random.rand(num_hidden) - 1 self.b3 = 2*np.random.rand(num_output) - 1 self.hnorm = hnorm self.dhnorm = dhnorm self.onorm = onorm self.donorm = donorm def value(self, input): """Evaluates the NN output""" assert(input.shape == self.w21[1,:].shape) h = self.w21.dot(input) + self.b2 hn = self.hnorm(h) o = self.w32.dot(hn) + self.b3 return(self.onorm(o)) def gradient(self, input): """ Calculates the gradient of the NN at the given input. Outputs a list of dictionaries where each dict corresponds to the gradient of an output node, and each element in a given dict gives the gradient for a subset of the weights. """ assert(input.shape == self.w21[1,:].shape) J = [] h = self.w21.dot(input) + self.b2 hn = self.hnorm(h) o = self.w32.dot(hn) + self.b3 for i in range(len(self.b3)): db3 = np.zeros(self.b3.shape) db3[i] = self.donorm(o[i]) dw32 = np.zeros(self.w32.shape) dw32[i, :] = self.donorm(o[i])*hn db2 = np.multiply(self.dhnorm(h), self.w32[i,:])*self.donorm(o[i]) dw21 = np.transpose(np.outer(input, db2)) J.append(dict(db3 = db3, dw32 = dw32, db2 = db2, dw21 = dw21)) return(J) def train(self, input_states, end_result, a = 0.1, l = 0.7): """ Trains the network using a single series of input states representing a game from beginning to end, and a final (supervised / desired) output for the end state """ outputs = [self.value(input_state) for input_state in input_states] outputs.append(end_result) for t in range(len(input_states)): delta = dict( db3 = np.zeros(self.b3.shape), dw32 = np.zeros(self.w32.shape), db2 = np.zeros(self.b2.shape), dw21 = np.zeros(self.w21.shape)) grad = self.gradient(input_states[t]) for i in range(len(self.b3)): for key in delta.keys(): td_sum = sum([l**(t-k)*grad[i][key] for k in range(t + 1)]) delta[key] += a*(outputs[t + 1][i] - outputs[t][i])*td_sum self.w21 += delta["dw21"] self.w32 += delta["dw32"] self.b2 += delta["db2"] self.b3 += delta["db3"] </code></pre> <p>The way I use this is I play through a whole game (or rather, the neural net plays against itself), and then I send the states of that game, from start to finish, into <code>train()</code>, along with the final result. 
It then takes this game log, and applies the above formula to alter weights using the first game state, then the first and second game states, and so on until the final time, when it uses the entire list of game states. Then I repeat that many times and hope that the network learns.</p> <p>To be clear, I am not after feedback on my code writing. This was never meant to be more than a quick and dirty implementation to see that I have all the nuts and bolts in the right spots.</p> <p>However, I have no idea whether it is correct, as I have thus far been unable to make it capable of playing tic-tac-toe at any reasonable level. There could be many reasons for that. Maybe I'm not giving it enough hidden nodes (I have used 10 to 12). Maybe it needs more games to train (I have used 200 000). Maybe it would do better with different normalisation functions (I've tried sigmoid and ReLU, leaky and non-leaky, in different variations). Maybe the learning parameters are not tuned right. Maybe tic-tac-toe and its deterministic gameplay means it "locks in" on certain paths in the game tree. Or maybe the training implementation is just wrong. Which is why I'm here.</p> <p>Have I misunderstood Tesauro's algorithm?</p>
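<p><em>(Added restatement, not from the original question: the update rule from the excerpt, rewritten here since the quoted paragraph is an image; this is the standard TD($\lambda$) form Tesauro gives.)</em> $$w_{t+1} - w_t = \alpha\,(Y_{t+1} - Y_t)\sum_{k=1}^{t} \lambda^{\,t-k}\,\nabla_w Y_k.$$ One detail worth checking against the posted <code>train()</code>: the gradient $\nabla_w Y_k$ is evaluated at each earlier time step $k$, whereas the code computes a single <code>self.gradient(input_states[t])</code> and reuses it for every $k$ in the inner sum.</p>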
0non-cybersec
Stackexchange
101,117 strong crowd watch 1905 FA Cup Final (x-post r/historyporn).
0non-cybersec
Reddit
What are the most confusing lyrics you ever heard?.
0non-cybersec
Reddit
Do dihedral groups $D_n$ for $n\geq 5$ exist?. <p>I know we can generate the dihedral groups $D_3$ and $D_4$, but my question is whether we can generate the dihedral group $D_5$.</p>
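<p><em>(Added note, not from the original question.)</em> Yes: $D_n$ exists for every $n \ge 3$ as the symmetry group of the regular $n$-gon, with $|D_n| = 2n$ (beware the competing convention that writes this same group as $D_{2n}$). For example, the regular pentagon gives $$D_5 = \langle r, s \mid r^5 = s^2 = e,\; srs^{-1} = r^{-1} \rangle,$$ a group of order 10: five rotations and five reflections.</p>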
0non-cybersec
Stackexchange
Cheeseburger on a pretzel bun with a fried egg.[2056x1132].
0non-cybersec
Reddit
Running scipy.integrate.ode in multiprocessing Pool results in huge performance hit. <p>I'm using python's <code>scipy.integrate</code> to simulate a 29-dimensional linear system of differential equations. Since I need to solve several problem instances, I thought I could speed it up by doing computations in parallel using <code>multiprocessing.Pool</code>. Since there is no shared data or synchronization necessary between threads (the problem is embarrassingly parallel), I thought this should obviously work. After I wrote the code to do this, however, I got very strange performance measurements:</p> <ul> <li>Single-threaded, without jacobian: 20-30 ms per call</li> <li>Single-threaded, with jacobian: 10-20 ms per call</li> <li>Multi-threaded, without jacobian: 20-30 ms per call</li> <li><strong>Multi-threaded, with jacobian: 10-5000 ms per call</strong></li> </ul> <p>What's shocking is that what I thought should be the fastest setup was actually the slowest, and the variability was <em>two orders of magnitude</em>. It's a deterministic computation; computers aren't supposed to work this way. What could possibly be causing this?</p> <h3>Effect seems system-dependent</h3> <p>I tried the same code on another computer and I didn't see this effect. </p> <p>Both machines were using Ubuntu 64 bit, Python 2.7.6, scipy version 0.18.0, and numpy version 1.8.2. I didn't see the variability with an Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz processor. I did see the issue with an <a href="http://www.cpu-world.com/CPUs/Core_i7/Intel-Core%20i7%20Mobile%20i7-2670QM.html" rel="noreferrer">Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz</a>.</p> <h3>Theories</h3> <p>One thought was that there might be a shared cache among processors, and by running it in parallel I can't fit two instances of the jacobian matrix in the cache, so they constantly battle each other for the cache, slowing each other down compared with if they are run serially or without the jacobian. But it's not a million-variable system. The jacobian is a 29x29 matrix, which takes up 6728 bytes. The level 1 cache on the processor is <a href="http://www.cpu-world.com/CPUs/Core_i7/Intel-Core%20i7%20Mobile%20i7-2670QM.html" rel="noreferrer">4 x 32 KB</a>, much larger. Are there any other shared resources between processors that might be to blame? How can we test this?</p> <p>Another thing I noticed is that each python process seems to take several hundred percent of the CPU as it's running. This seems to mean that the code is already parallelized at some point (perhaps in the low-level library). This could mean that further parallelization wouldn't help, but I wouldn't expect such a dramatic slowdown.</p> <h3>Code</h3> <p>It would be good to try this out on more machines to see if (1) other people can experience the slowdown at all and (2) what are the common features of systems where the slowdown occurs. The code does 10 trials of two parallel computations using a multiprocessing pool of size two, printing out the time per scipy.ode.integrate call for each of the 10 trials. </p> <pre><code>'odeint with multiprocessing variable execution time demonstration' from numpy import dot as npdot from numpy import add as npadd from numpy import matrix as npmatrix from scipy.integrate import ode from multiprocessing import Pool import time def main(): "main function" pool = Pool(2) # try Pool(1) params = [0] * 2 for trial in xrange(10): res = pool.map(run_one, params) print "{}. 
times: {}ms, {}ms".format(trial, int(1000 * res[0]), int(1000 * res[1])) def run_one(_): "perform one simulation" final_time = 2.0 init_state = [0.1 if d &lt; 7 else 0.0 for d in xrange(29)] (a_matrix, b_vector) = get_dynamics() derivative = lambda dummy_t, state: npadd(npdot(a_matrix, state), b_vector) jacobian = lambda dummy_t, dummy_state: a_matrix #jacobian = None # try without the jacobian #print "jacobian bytes:", jacobian(0, 0).nbytes solver = ode(derivative, jacobian) solver.set_integrator('vode') solver.set_initial_value(init_state, 0) start = time.time() solver.integrate(final_time) dif = time.time() - start return dif def get_dynamics(): "return a tuple (A, b), which are the system dynamics x' = Ax + b" return \ ( npmatrix([ [0, 0, 0, 0.99857378006, 0.053384274244, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ], [0, 0, 1, -0.003182219341, 0.059524655342, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ], [0, 0, -11.570495605469, -2.544637680054, -0.063602626324, 0.106780529022, -0.09491866827, 0.007107574493, -5.20817921341, -23.125876742495, -4.246931301528, -0.710743697134, -1.486697327603, -0.044548215175, 0.03436637817, 0.022990248611, 0.580153205353, 1.047552018229, 11.265023544535, 2.622275290571, 0.382949404795, 0.453076470454, 0.022651889536, 0.012533628369, 0.108399390974, -0.160139432044, -6.115359574845, -0.038972389136, 0, ], [0, 0, 0.439356565475, -1.998182296753, 0, 0.016651883721, 0.018462046981, -0.001187470742, -10.778778281386, 0.343052863546, -0.034949331535, -3.466737362551, 0.013415853489, -0.006501746896, -0.007248032248, -0.004835912875, -0.152495086764, 2.03915052839, -0.169614300211, -0.279125393264, -0.003678218266, -0.001679708185, 0.050812027754, 0.043273505033, -0.062305315646, 0.979162836629, 0.040401368402, 0.010697028656, 0, ], [0, 0, -2.040895462036, -0.458999156952, -0.73502779007, 0.019255757332, -0.00459562242, 0.002120360732, -1.06432932386, -3.659159530947, -0.493546966858, -0.059561101143, -1.953512259413, -0.010939065041, -0.000271004496, 0.050563886711, 1.58833954495, 0.219923768171, 1.821923233098, 2.69319056633, 0.068619628466, 0.086310028398, 0.002415425662, 0.000727041422, 0.640963888079, -0.023016712545, -1.069845542887, -0.596675149197, 0, ], [-32.103607177734, 0, -0.503355026245, 2.297859191895, 0, -0.021215811372, -0.02116791904, 0.01581159234, 12.45916782984, -0.353636907076, 0.064136531117, 4.035326800046, -0.272152744884, 0.000999589868, 0.002529691904, 0.111632959213, 2.736421830861, -2.354540136198, 0.175216915979, 0.86308171287, 0.004401276193, 0.004373406589, -0.059795009475, -0.051005479746, 0.609531777761, -1.1157829788, -0.026305051933, -0.033738880627, 0, ], [0.102161169052, 32.057830810547, -2.347217559814, -0.503611564636, 0.83494758606, 0.02122657001, -0.037879735231, 0.00035400386, -0.761479736492, -5.12933410588, -1.131382179292, -0.148788337148, 1.380741054924, -0.012931029503, 0.007645723855, 0.073796656681, 1.361745395486, 0.150700793731, 2.452437244444, -1.44883919298, 0.076516270282, 0.087122640348, 0.004623192159, 0.002635233443, -0.079401941141, -0.031023369979, -1.225533436977, 0.657926151362, 0, ], [-1.910972595215, 1.713829040527, -0.004005432129, -0.057411193848, 0, 0.013989634812, -0.000906753354, -0.290513515472, -2.060635522957, -0.774845915178, -0.471751979387, -1.213891560083, 5.030515136324, 0.126407660877, 0.113188603433, -2.078420624662, -50.18523312358, 0.340665548784, 0.375863242926, -10.641168797333, -0.003634153255, -0.047962774317, 
0.030509705209, 0.027584169642, -10.542357589006, -0.126840767097, -0.391839285172, 0.420788121692, 0, ], [0.126296110212, -0.002898250629, -0.319316070797, 0.785201711657, 0.001772374259, 0.00000584372, 0.000005233812, -0.000097899495, -0.072611454126, 0.001666291957, 0.195701043078, 0.517339177294, 0.05236528267, -0.000003359731, -0.000003009077, 0.000056285381, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ], [-0.018114066432, 0.077615035084, 0.710897211118, 2.454275059389, -0.012792968774, 0.000040510624, 0.000036282541, -0.000678672106, 0.010414324729, -0.044623231468, 0.564308412696, -1.507321670112, 0.066879720068, -0.000023290783, -0.00002085993, 0.000390189123, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ], [-0.019957254425, 0.007108972111, 122.639137999354, 1.791704310155, 0.138329792976, 0.000000726169, 0.000000650379, -0.000012165459, -8.481152717711, -37.713895394132, -93.658221074435, -4.801972165378, -2.567389718833, 0.034138340146, -0.038880106034, 0.044603217363, 0.946016722396, 1.708172458034, 18.369114490772, 4.275967542224, 0.624449778826, 0.738801257357, 0.036936909247, 0.020437742859, 0.176759579388, -0.261128576436, -9.971904607075, -0.063549647738, 0, ], [0.007852964982, 0.003925745426, 0.287856349997, 58.053471054491, 0.030698062827, -0.000006837601, -0.000006123962, 0.000114549925, -17.580742026275, 0.55713614874, 0.205946900184, -43.230778067404, 0.004227082975, 0.006053854501, 0.006646690253, -0.009138926083, -0.248663457912, 3.325105302428, -0.276578605231, -0.455150962257, -0.005997822569, -0.002738986905, 0.082855748293, 0.070563187482, -0.101597078067, 1.596654829885, 0.065879787896, 0.017442923517, 0, ], [0.011497315687, -0.012583019909, 13.848373855148, 22.28881517216, 0.042287331657, 0.000197558695, 0.000176939544, -0.003309689199, -1.742140233901, -5.959510415282, -11.333020298294, -14.216479234895, -3.944800806497, 0.001304578929, -0.005139259078, 0.08647432259, 2.589998222025, 0.358614863147, 2.970887395829, 4.39160430183, 0.111893402319, 0.140739944934, 0.003938671797, 0.001185537435, 1.045176603318, -0.037531801533, -1.744525005833, -0.972957942438, 0, ], [-16.939142002537, 0.618053512295, 107.92089190414, 204.524147386814, 0.204407545189, 0.004742101706, 0.004247169746, -0.079444150933, -2.048456967261, -0.931989524708, -66.540858220883, -116.470289129818, -0.561301215495, -0.022312225275, -0.019484747345, 0.243518778973, 4.462098610572, -3.839389874682, 0.285714413078, 1.40736916669, 0.007176864388, 0.007131419303, -0.097503691021, -0.083171197416, 0.993922379938, -1.819432085819, -0.042893874898, -0.055015718216, 0, ], [-0.542809857455, 7.081822285872, -135.012404429101, 460.929268260027, 0.036498617908, 0.006937238413, 0.006213200589, -0.116219147061, -0.827454697348, 19.622217613195, 78.553728334274, -283.23862765888, 3.065444785639, -0.003847616297, -0.028984525722, 0.187507140282, 2.220506417769, 0.245737625222, 3.99902408961, -2.362524402134, 0.124769923797, 0.142065016461, 0.007538727793, 0.004297097528, -0.129475392736, -0.050587718062, -1.998394759416, 1.072835822585, 0, ], [-1.286456393795, 0.142279456389, -1.265748910581, 65.74306027738, -1.320702989799, -0.061855995532, -0.055400100872, 1.036269854556, -4.531489334771, 0.368539277612, 0.002487097952, -42.326462719738, 8.96223401238, 0.255676968878, 0.215513465742, -4.275436802385, -81.833676543035, 0.555500345288, 0.612894852362, -17.351836610113, -0.005925968725, -0.078209662789, 0.049750119549, 0.044979645917, -17.190711833803, -0.206830688253, -0.638945907467, 0.686150823668, 0, ], [0, 0, 0, 0, 
0, -0.009702263896, -0.008689641059, 0.162541456323, 0, 0, 0, 0, 0, 0, 0, 0, -0.012, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ], [-8.153162937544, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -0.005, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ], [0, -3.261265175018, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -0.005, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ], [0, 0, 0, 0.17441246156, -3.261265175018, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, ], [0, 0, -3.261265175018, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -8.5, -18, 0, 0, 0, 0, 0, 0, 0, ], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, ], [0, 0, 0, -8.153162937544, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -8.5, -18, 0, 0, 0, 0, 0, ], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, ], [0, 0, 0, 0, 0, 0, 0, 0, 0.699960862226, 0.262038222227, 0.159589891262, 0.41155156501, -1.701619176699, -0.0427567124, -0.038285155304, 0.703045934017, 16.975651534025, -0.115788018654, -0.127109026104, 3.599544290134, 0.001229743857, 0.016223661959, -0.01033400498, -0.00934235613, -6.433934989563, 0.042639567847, 0.132540852847, -0.142338323726, 0, ], [0, 0, 0, 0, 0, 0, 0, 0, -37.001496211974, 0.783588795613, -0.183854784348, -11.869599790688, -0.106084318011, -0.026306590251, -0.027118088888, 0.036744952758, 0.76460150301, 7.002366574508, -0.390318898363, -0.642631203146, -0.005701671024, 0.003522251111, 0.173867535377, 0.147911422248, 0.056092715216, -6.641979472328, 0.039602243105, 0.026181724138, 0, ], [0, 0, 0, 0, 0, 0, 0, 0, 1.991401999957, 13.760045912368, 2.53041689113, 0.082528789604, 0.728264862053, 0.023902766734, -0.022896554363, 0.015327568208, 0.370476566397, -0.412566245022, -6.70094564846, -1.327038338854, -0.227019235965, -0.267482033427, -0.008650986307, -0.003394359441, 0.098792645471, 0.197714179668, -6.369398456151, -0.011976840769, 0, ], [0, 0, 0, 0, 0, 0, 0, 0, 1.965859332057, -3.743127938662, -1.962645156793, 0.018929412474, 11.145046656101, -0.03600197464, -0.001222148117, 0.602488409354, 11.639787952728, -0.407672972316, 1.507740702165, -12.799953897143, 0.005393102236, -0.014208764492, -0.000915158115, -0.000640326416, -0.03653528842, 0.012458973237, -0.083125038259, -5.472831842357, 0, ], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ], ]) , npmatrix([1.0 if d == 28 else 0.0 for d in xrange(29)]) ) if __name__ == "__main__": main() </code></pre> <h3>Example Output</h3> <p>Here's an example of the output that demonstrates the problem (each run is slightly different). Notice the large variability in execution times (over two orders of magnitude!). Again, this all goes away if I either use a pool of size 1 (or run the code without a pool), or if I don't use an explicit jacobian in the call to <code>integrate</code>.</p> <blockquote> <ol> <li>times: 5847ms, 5760ms</li> <li>times: 4177ms, 3991ms</li> <li>times: 229ms, 36ms</li> <li>times: 1317ms, 1544ms</li> <li>times: 87ms, 100ms</li> <li>times: 113ms, 102ms</li> <li>times: 4747ms, 5077ms</li> <li>times: 597ms, 48ms</li> <li>times: 9ms, 49ms</li> <li>times: 135ms, 109ms</li> </ol> </blockquote>
0non-cybersec
Stackexchange
Incorrect colours when printing after upgrading to Ubuntu 18.04.2. <p>The sample output below shows text being printed as a series of differently coloured echoes. What could cause this? My printer is an EPSON TX420w.</p> <p><a href="https://i.stack.imgur.com/dLq1C.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dLq1C.jpg" alt="enter image description here"></a></p>
0non-cybersec
Stackexchange
Group policy issues. <p>We are having an issue with one of our client's relatively new SBS installs.</p> <p>The domain consists of a single SBS 2011 server with 4 Windows 7 clients and 3 XP clients. Most of the time everything is fine; however, roughly every 3 days the Windows 7 clients start timing out when trying to receive computer group policy.</p> <p>This results in hour-long delays before getting to the login screen in the morning, accompanied by event ID 6006 Winlogon errors stating it took 3599 seconds to process policy. Once they've booted, they can log in without issue; however, gpupdate fails again on computer policy and gpresult comes back with access denied, even when run as domain admin... At this point, if we restart the server, the network is fine for another 3 days.</p> <p>I thought perhaps it might be IPv6 or SMB2, but disabling IPv6 on the clients doesn't help, and the clients can browse the sysvol folder freely over SMB2 anyway. Does anyone have any ideas or routes I can take to further diagnose the issue?</p>
0non-cybersec
Stackexchange
Sparse estimation of large covariance matrices via a nested Lasso penalty

arXiv:0803.3872v1 [stat.AP] 27 Mar 2008

The Annals of Applied Statistics 2008, Vol. 2, No. 1, 245–263. DOI: 10.1214/07-AOAS139. © Institute of Mathematical Statistics, 2008

SPARSE ESTIMATION OF LARGE COVARIANCE MATRICES VIA A NESTED LASSO PENALTY

By Elizaveta Levina, Adam Rothman and Ji Zhu, University of Michigan

The paper proposes a new covariance estimator for large covariance matrices when the variables have a natural ordering. Using the Cholesky decomposition of the inverse, we impose a banded structure on the Cholesky factor, and select the bandwidth adaptively for each row of the Cholesky factor, using a novel penalty we call nested Lasso. This structure has more flexibility than regular banding, but, unlike regular Lasso applied to the entries of the Cholesky factor, results in a sparse estimator for the inverse of the covariance matrix. An iterative algorithm for solving the optimization problem is developed. The estimator is compared to a number of other covariance estimators and is shown to do best, both in simulations and on a real data example. Simulations show that the margin by which the estimator outperforms its competitors tends to increase with dimension.

1. Introduction. Estimating covariance matrices has always been an important part of multivariate analysis, and estimating large covariance matrices (where the dimension of the data p is comparable to or larger than the sample size n) has gained particular attention recently, since high-dimensional data are so common in modern applications (gene arrays, fMRI, spectroscopic imaging, and many others). There are many statistical methods that require an estimate of a covariance matrix. They include principal component analysis (PCA), linear and quadratic discriminant analysis (LDA and QDA) for classification, regression for multivariate normal data, inference about functions of the means of the components (e.g., about the mean response curve in longitudinal studies), and analysis of independence and conditional independence relationships between components in graphical models. Note that in many of these applications (LDA, regression, conditional independence analysis) it is not the population covariance $\Sigma$ itself that needs estimating, but its inverse $\Sigma^{-1}$, also known as the precision or concentration matrix. When p is small, an estimate of one of these matrices can easily be inverted to obtain an estimate of the other one; but when p is large, inversion is problematic, and it may make more sense to estimate the needed matrix directly.

Received March 2007; revised September 2007. Supported in part by NSF Grant DMS-05-05424 and NSA Grant MSPF-04Y-120, and by NSF Grants DMS-05-05432 and DMS-07-05532. Key words and phrases. Covariance matrix, high dimension low sample size, large p small n, Lasso, sparsity, Cholesky decomposition. This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Applied Statistics, 2008, Vol. 2, No. 1, 245–263. This reprint differs from the original in pagination and typographic detail.
It has long been known that the sample covariance matrix is an extremely noisy estimator of the population covariance matrix when p is large, although it is always unbiased [Dempster (1972)]. There is a fair amount of theoreti- cal work on eigenvalues of sample covariance matrices of Gaussian data [see Johnstone (2001) for a review] that shows that unless p/n → 0, the eigenval- ues of the sample covariance matrix are more spread out than the population eigenvalues, even asymptotically. Consequently, many alternative estimators of the covariance have been proposed. Regularizing large covariance matrices by Steinian shrinkage has been proposed early on, and is achieved by either shrinking the eigenvalues of the sample covariance matrix [Haff (1980); Dey and Srinivasan (1985)] or replacing the sample covariance with its linear combination with the iden- tity matrix [Ledoit and Wolf (2003)]. A linear combination of the sample covariance and the identity matrix has also been used in some applications— for example, as original motivation for ridge regression [Hoerl and Kennard (1970)] and in regularized discriminant analysis [Friedman (1989)]. These approaches do not affect the eigenvectors of the covariance, only the eigen- values, and it has been shown that the sample eigenvectors are also not consistent when p is large [Johnstone and Lu (2007)]. Hence, shrinkage esti- mators may not do well for PCA. In the context of a factor analysis model, Fan et al. (2008) developed high-dimensional estimators for both the covari- ance and its inverse. Another general approach is to regularize the sample covariance or its inverse by making it sparse, usually by setting some of the off-diagonal elements to 0. A number of methods exist that are particularly useful when components have a natural ordering, for example, for longitudinal data, where the need for imposing a structure on the covariance has long been recognized [see Diggle and Verbyla (1998) for a review of the longitudinal data literature]. Such structure often implies that variables far apart in this ordering are only weakly correlated. Banding or tapering the covariance matrix in this context has been proposed by Bickel and Levina (2004) and Furrer and Bengtsson (2007). Bickel and Levina (2007) showed consistency of banded estimators under mild conditions as long as (log p)/n → 0, for both banding the covariance matrix and the Cholesky factor of the inverse discussed below. They also proposed a cross-validation approach for selecting the bandwidth. Sparsity in the inverse is particularly useful in graphical models, since zeroes in the inverse imply a graph structure. Banerjee et al. (2006) and SPARSE ESTIMATION OF LARGE COVARIANCE MATRICES 3 Yuan and Lin (2007), using different semi-definite programming algorithms, both achieve sparsity by penalizing the normal likelihood with an L1 penalty imposed directly on the elements of the inverse. This approach is compu- tationally very intensive and does not scale well with dimension, but it is invariant under variable permutations. When a natural ordering of the variables is available, sparsity in the in- verse is usually introduced via the modified Cholesky decomposition [Pourahmadi (1999)], Σ−1 = T⊤D−1T. Here T is a lower triangular matrix with ones on the diagonal, D is a di- agonal matrix, and the elements below diagonal in the ith row of T can be interpreted as regression coefficients of the ith component on its predeces- sors; the elements of D give the corresponding prediction variances. 
Several approaches to introducing zeros in the Cholesky factor T have been proposed. While they are not invariant to permutations of variables and are thus most natural when variables are ordered, they do introduce shrink- age, and in some cases, sparsity, into the estimator. Wu and Pourahmadi (2003) propose a k-diagonal (banded) estimator, which is obtained by smooth- ing along the first k sub-diagonals of T , and setting the rest to 0. The number k is chosen via an AIC penalty on the normal likelihood of the data. The re- sulting estimate of the inverse is also k-banded. Wu and Pourahmadi (2003) showed element-wise consistency of their estimator (although that is a prop- erty shared by the sample covariance matrix), and Bickel and Levina (2007) showed that banding the Cholesky factor produces a consistent estimator in the matrix L2 norm under weak conditions on the covariance matrix, the most general theoretical result on banding available to date. Huang et al. (2006) proposed adding an L1 penalty on the elements of T to the nor- mal likelihood, which leads to Lasso-type shrinkage of the coefficients in T , and introduces zeros in T which can be placed in arbitrary locations. This approach is more flexible than banding, but the resulting estimate of the inverse may not have any zeros at all, hence, the sparsity is lost. No con- sistency results are available for this method. A related Bayesian approach [Smith and Kohn (2002)] introduces zeros in the Cholesky factor via a hier- archical prior, while Wong et al. (2003) use a prior that allows elements of the inverse itself to be zero. Our approach, which we will call adaptive banding in contrast to regular banding, also relies on the Cholesky decomposition and a natural ordering of the variables. By introducing a novel nested Lasso penalty on the co- efficients of regressions that form the matrix T , we select the best model that regresses the jth variable on its k closest predecessors, but, unlike in simple banding, we allow k = kj to depend on j. The resulting structure of 4 E. LEVINA, A. ROTHMAN AND J. ZHU Fig. 1. The placement of zeros in the Cholesky factor T : (a) Banding; (b) Lasso penalty of Huang et al.; (c) Adaptive banding. the Cholesky factor is illustrated in Figure 1(c). It is obviously more flexible than banding, and hence, should produce a better estimate of the covariance by being better able to adapt to the data. Unlike the Lasso of Huang et al. (2006), adaptive banding preserves sparsity in the resulting estimate of the inverse, since the matrix T is still banded, with the overall k = maxj kj . We show that adaptive banding, in addition to preserving sparsity, outperforms the estimator of Huang et al. (2006) in simulations and on real data. One may also reasonably expect that as long as the penalty tuning parameter is selected appropriately, the theoretical consistency results established for banding in Bickel and Levina (2007) will hold for adaptive banding as well. The rest of the paper is organized as follows: Section 2 summarizes the penalized estimation approach in general, and presents the nested Lasso penalty and the adaptive banding algorithm, with a detailed discussion of optimization issues. Section 3 presents numerical results for adaptive band- ing and a number of other methods, for simulated data and a real example. Section 4 concludes with discussion. 2. Methods for penalized estimation of the Cholesky factor. 
For the sake of completeness, we start from a brief summary of the formal derivation of the Cholesky decomposition of $\Sigma^{-1}$. Suppose we have a random vector $X = (X_1, \dots, X_p)^\top$, with mean 0 and covariance $\Sigma$. Let $X_1 = \varepsilon_1$ and, for $j > 1$, let

$$X_j = \sum_{l=1}^{j-1} \phi_{jl} X_l + \varepsilon_j, \tag{1}$$

where $\phi_{jl}$ are the coefficients of the best linear predictor of $X_j$ from $X_1, \dots, X_{j-1}$ and $\sigma_j^2 = \mathrm{Var}(\varepsilon_j)$ the corresponding residual variance. Let $\Phi$ be the lower triangular matrix with $j$th row containing the coefficients $\phi_{jl}$, $l = 1, \dots, j-1$, of the $j$th regression (1). Note that $\Phi$ has zeros on the diagonal. Let $\varepsilon = (\varepsilon_1, \dots, \varepsilon_p)^\top$, and let $D = \mathrm{diag}(\sigma_j^2)$ be a diagonal matrix with $\sigma_j^2$ on the diagonal. Rewriting (1) in matrix form gives

$$\varepsilon = (I - \Phi)X, \tag{2}$$

where $I$ is the identity matrix. It follows from standard regression theory that the residuals are uncorrelated, so taking covariance of both sides of (2) gives $D = (I - \Phi)\Sigma(I - \Phi)^\top$. Letting $T = I - \Phi$, we can now write down the modified Cholesky decompositions of $\Sigma$ and $\Sigma^{-1}$:

$$\Sigma = T^{-1} D (T^{-1})^\top, \qquad \Sigma^{-1} = T^\top D^{-1} T. \tag{3}$$

Note that the only assumption on $X$ was mean 0; normality is not required to derive the Cholesky decomposition. The natural question is how to estimate the matrices $T$ and $D$ from data. The standard regression estimates can be computed as long as $p \le n$, but in high-dimensional situations one expects to do better by regularizing the coefficients in $T$ in some way, for the same reasons one achieves better prediction from regularized regression [Hastie et al. (2001)]. If $p > n$, the regression problem becomes singular, and some regularization is necessary for the estimator to be well defined.

2.1. Adaptive banding with a nested Lasso penalty. The methods proposed by Huang et al. (2006) and Wu and Pourahmadi (2003) both assume the data $x_i$, $i = 1, \dots, n$, are sampled from a normal distribution $\mathcal{N}(0, \Sigma)$ and use the normal likelihood as the loss function. As the derivation above shows, the normality assumption is not necessary for estimating covariance using the Cholesky decomposition. We start, however, with the normal likelihood as the loss function and demonstrate how a new penalty can be applied to produce an adaptively banded estimator. The negative log-likelihood of the data, up to a constant, is given by

$$\ell(\Sigma, x_1, \dots, x_n) = n \log|\Sigma| + \sum_{i=1}^n x_i^\top \Sigma^{-1} x_i = n \log|D| + \sum_{i=1}^n x_i^\top T^\top D^{-1} T x_i. \tag{4}$$

The negative log-likelihood can be decomposed into $\ell(\Sigma, x_1, \dots, x_n) = \sum_{j=1}^p \ell_j(\sigma_j, \phi_j, x_1, \dots, x_n)$, where

$$\ell_j(\sigma_j, \phi_j, x_1, \dots, x_n) = n \log \sigma_j^2 + \sum_{i=1}^n \frac{1}{\sigma_j^2} \Bigl( x_{ij} - \sum_{l=1}^{j-1} \phi_{jl} x_{il} \Bigr)^2. \tag{5}$$

Minimizing (4) is equivalent to minimizing each of the functions $\ell_j$ in (5), which is in turn equivalent to computing the best least squares fit for each of the regressions (1). Wu and Pourahmadi (2003) suggested using an AIC or BIC penalty to select a common order for the regressions (1). They also subsequently smooth the sub-diagonals of $T$, and their method's performance depends on the exact choice of the smoother and the selection of the smoothing parameters as much as on the choice of order. This makes a direct comparison to Huang et al. (2006) and our own method difficult. Bickel and Levina (2007) proposed a cross-validation method for selecting the common order for the regressions, and we will use their method for all the (nonadaptive) banding results below. Huang et al. (2006) proposed adding a penalty to (4) and minimizing

$$\ell(\Sigma, x_1, \dots, x_n) + \lambda \sum_{j=2}^p P(\phi_j), \tag{6}$$

where the penalty $P$ on the entries of $\phi_j = (\phi_{j1}, \dots, \phi_{j,j-1})$ is

$$P(\phi_j) = \|\phi_j\|_d^d, \tag{7}$$

and $\|\cdot\|_d$ is the $L_d$ vector norm with $d = 1$ or 2. The $L_2$ penalty ($d = 2$) does not result in a sparse estimate of the covariance, so we will not focus on it here. The $L_1$ penalty ($d = 1$), that is, the Lasso penalty [Tibshirani (1996)], results in zeros irregularly placed in $T$ as shown in Figure 1(b), which also does not produce a sparse estimate of $\Sigma^{-1}$. Again, minimizing (6) is equivalent to separately minimizing

$$\ell_j(\sigma_j, \phi_j, x_1, \dots, x_n) + \lambda P(\phi_j), \tag{8}$$

with $P(\phi_1) = 0$. We propose replacing the $L_1$ penalty $\lambda \sum_{l=1}^{j-1} |\phi_{jl}|$ with a new nested Lasso penalty,

$$J_0(\phi_j) = \lambda \Bigl( |\phi_{j,j-1}| + \frac{|\phi_{j,j-2}|}{|\phi_{j,j-1}|} + \frac{|\phi_{j,j-3}|}{|\phi_{j,j-2}|} + \cdots + \frac{|\phi_{j,1}|}{|\phi_{j,2}|} \Bigr), \tag{9}$$

where we define $0/0 = 0$. The effect of this penalty is that if the $l$th variable is not included in the $j$th regression ($\phi_{jl} = 0$), then all the subsequent variables ($l-1$ through 1) are also excluded, since giving them nonzero coefficients would result in an infinite penalty. Hence, the $j$th regression only uses $k_j \le j-1$ closest predecessors of the $j$th coordinate, and each regression has a different order $k_j$. The scaling of coefficients in (9) could be an issue: the sole coefficient $\phi_{j,j-1}$ and the ratios $|\phi_{j,t}|/|\phi_{j,t+1}|$ can, in principle, be on different scales, and penalizing them with the same tuning parameter $\lambda$ may not be appropriate. In situations where the data have natural ordering, the variables are often measurements of the same quantity over time (or over some other index, e.g., spatial location or spectral wavelength), so both the individual coefficients $\phi_{j,t}$ and their ratios are on the order of 1; if variables are rescaled, in this context they would all be rescaled at the same time (e.g., converting between different units). However, the nested Lasso penalty is of independent interest and may be used in other contexts, for example, for group variable selection. To address the scaling issue in general, we propose two easy modifications of the penalty (9):

$$J_1(\phi_j) = \lambda \Bigl( \frac{|\phi_{j,j-1}|}{|\hat\phi^*_{j,j-1}|} + \frac{|\phi_{j,j-2}|}{|\phi_{j,j-1}|} + \frac{|\phi_{j,j-3}|}{|\phi_{j,j-2}|} + \cdots + \frac{|\phi_{j,1}|}{|\phi_{j,2}|} \Bigr), \tag{10}$$

$$J_2(\phi_j) = \lambda_1 \sum_{t=1}^{j-1} |\phi_{j,t}| + \lambda_2 \sum_{t=1}^{j-2} \frac{|\phi_{j,t}|}{|\phi_{j,t+1}|}, \tag{11}$$

where $\hat\phi^*_{j,j-1}$ is the coefficient from regressing $X_j$ on $X_{j-1}$ alone. The advantage of the first penalty, $J_1$, is that it still requires only one tuning parameter $\lambda$; the disadvantage is the ad hoc use of the regression coefficient $\hat\phi^*_{j,j-1}$, which may not be close to $\hat\phi_{j,j-1}$, but we can reasonably hope is on the same scale. The second penalty, $J_2$, does not require this extra regression coefficient, but it does require selection of two tuning parameters. It turns out, however, that, in practice, the value of $\lambda_2$ is not as important as that of $\lambda_1$, as the ratio term will be infinite whenever a coefficient in the denominator is shrunk to 0. In practice, on both simulations and real data, we have not found much difference between the three versions $J_0$, $J_1$ and $J_2$, although in general $J_1$ tends to be better than $J_0$, and $J_2$ better than $J_1$. In what follows, we will write $J$ for the three nested penalties $J_0$, $J_1$ and $J_2$ if any one of them can be substituted.

Adaptive banding for covariance estimation:

1. For $j = 1$, $\hat\sigma_1^2 = \mathrm{Var}(X_1)$.
2. For each $j = 2, \dots, p$, let
$$(\hat\sigma_j, \hat\phi_j) = \operatorname*{argmin}_{\sigma_j, \phi_j}\; \ell_j(\sigma_j, \phi_j, x_1, \dots, x_n) + J(\phi_j). \tag{12}$$
3. Compute $\hat\Sigma^{-1}$ according to (3); let $\hat\Sigma = (\hat\Sigma^{-1})^{-1}$.

2.2. The algorithm.
The minimization of (12) is a nontrivial problem, since the penalties J are not convex. We developed an iterative procedure for this minimization, which we found to work well and converge quickly in practice. The algorithm requires an initial estimate of the coefficients φj . In the case p < n, one could initialize with coefficients φ̂j fitted without a penalty, which are given by the usual least squares estimates from regressing Xj on Xj−1, . . . ,X1. If p > n, however, these are not defined. Instead, we ini- tialize with φ̂ (0) jt = φ̂ ∗ jt, which are found by regressing Xj on Xt alone, for t = 1, . . . , j − 1. Then we iterate between steps 1 and 2 until convergence. Step 1. Given φ̂ (k) j , solve for σ̂ (k) j (the residual sum of squares is given in closed form): (σ̂ (k) j ) 2 = 1 n n ∑ i=1 ( xij − j−1 ∑ t=1 φ̂ (k) jt xit )2 . Step 2. Given φ̂ (k) j and σ̂ (k) j , solve for φ̂ (k+1) j . Here we use the following standard local quadratic approximation [also used by Fan and Li (2001) and Huang et al. (2006), among others]: |φ (k+1) jt | ≈ (φ (k+1) jt ) 2 2|φ (k) jt | + |φ (k) jt | 2 .(13) This approximation, together with substituting the previous values φ (k) jt in the denominator of the ratios in the penalty, converts the minimization into a ridge (quadratic) problem, which can be solved in closed form. For example, for the J2 penalty, we solve φ̂ (k+1) j = argmin φj 1 (σ̂ (k) j ) 2 n ∑ i=1 ( xij − j−1 ∑ t=1 φjtxit )2 + λ1 j−1 ∑ t=1 φ2jt 2|φ̂ (k) jt | + λ2 j−2 ∑ t=1 φ2jt 2|φ̂ (k) jt | · |φ̂ (k) j,t+1| . For numerical stability, we threshold the absolute value of every estimate at 10−10 over different iterations, and at the end of the iterations, set all estimates equal to 10−10 to zero. The approximation for J0 and J1 penalties is analogous. The function we are minimizing in (12) is not convex, there- fore, there is no guarantee of finding the global minimum. However, in the simulations we have tried, where we know the underlying truth (see Section 3.1 for details), we have not encountered any problems with spurious local minima with the choice of starting values described above. SPARSE ESTIMATION OF LARGE COVARIANCE MATRICES 9 Another approach in Step 2 above is to use the “shooting” strategy [Fu (1998); Friedman et al. (2007)]. That is, we sequentially solve for φjt: for each t = 1, . . . , j − 1, we fix σj and φ−jt = (φj1, . . . , φj,t−1, φj,t+1, . . . , φj,j−1) ⊤ at their most recent estimates and minimize (12) over φjt, and iterate until convergence. Since each minimization over φjt involves only one parameter and the objective function is piecewise convex, the computation is trivial. Also, since at each iteration the value of the objective function decreases, convergence is guaranteed. In our experience, these two approaches, the local quadratic approximation and the shooting strategy, do not differ much in terms of the computational cost and the solutions they offer. The algorithm also requires selecting the tuning parameter λ, or, in the case of J2, two tuning parameters λ1 and λ2. We selected tuning parameters on a validation set which we set aside from the original training data; alter- natively, 5-fold cross-validation can be used. As discussed above, we found that the value of λ2 in J2 is not as important; however, in all examples in this paper the computational burden was small enough to optimize over both parameters. 3. Numerical results. In this section we compare adaptive banding to other methods of regularizing the inverse. 
Our primary comparison is with the Lasso method of Huang et al. (2006) and with nonadaptive banding of Bickel and Levina (2007); these methods are closest to ours and also provide a sparse estimate of the Cholesky factor. As a benchmark, we also include the shrinkage estimator of Ledoit and Wolf (2003), which does not depend on the order of variables. 3.1. Simulation data. Simulations were carried out for three different covariance models. The first one has a tri-diagonal Cholesky factor and, hence, a tri-diagonal inverse: Σ1 :φj,j−1 = 0.8; φj,j′ = 0, j ′ < j − 1; σ2j = 0.01. The second one has entries of the Cholesky factor exponentially decaying as one moves away from the diagonal. Its inverse is not sparse, but instead has many small entries: Σ2 :φj,j′ = 0.5 |j−j′|, j′ < j; σ2j = 0.01. Both these models were considered by Huang et al. (2006), and similar models were also considered by Bickel and Levina (2007). In both Σ1 and Σ2, all the rows have the same structure, which favors regular nonadaptive banding. 10 E. LEVINA, A. ROTHMAN AND J. ZHU To test the ability of our algorithm to adapt, we also considered the following structure: Σ3 :kj ∼ U(1, ⌈j/2⌉); φj,j′ = 0.5, kj ≤ j ′ ≤ j − 1; φj,j′ = 0, j ′ < kj ; σ 2 j = 0.01. Here U(k1, k2) denotes an integer selected at random from all integers from k1 to k2. For moderate values of p, this structure is stable, and this is what we generate for p = 30 in the simulations below. For larger p, some realizations can generate a poorly conditioned true covariance matrix, which is not a problem in principle, but makes computing performance measures awkward. To avoid this problem, we divided the variables for p = 100 and p = 200 into 3 and 6 independent blocks, respectively, and generated a random structure from the model described above for each of the blocks. We will refer to all these models as Σ3. The structure of Σ3 should benefit more from adaptive banding. For each of the covariance models, we generated n = 100 training observa- tions, along with a separate set of 100 validation observations. We considered three different values of p: 30,100 and 200, and two different distributions: normal and multivariate t with 3 degrees of freedom, to test the behav- ior of the estimator on heavy-tailed data. The estimators were computed on the training data, with tuning parameters for all methods selected by maximizing the likelihood on the validation data. Using these values of the tuning parameters, we then computed the estimated covariance matrix on the training data and compared it to the true covariance matrix. There are many criteria one can use to evaluate covariance matrix esti- mation, for example, any one of the matrix norms can be calculated for the difference (L1, L2, L∞, or Frobenius norm). There is no general agreement on which loss to use in which situation. Here we use the Kullback–Leibler loss for the concentration matrix, which was used in Yuan and Lin (2007). The Kullback–Leibler loss is defined as follows: ∆KL(Σ, Σ̂) = tr(Σ̂ −1 Σ)− ln |Σ̂ −1 Σ| − p.(14) Another popular loss is the entropy loss for the covariance matrix, which was used by Huang et al. (2006). The entropy loss is the same as the Kullback– Leibler loss except the roles of the covariance matrix and its inverse are switched. The entropy loss can be derived from the Wishart likelihood [Anderson (1958)]. 
While one does not expect major disagreements between these losses, the entropy loss is a more appropriate measure if the covariance matrix itself is the primary object of interest (as in PCA, e.g.), and the Kullback–Leibler loss is a more direct measure of the estimate of the con- centration matrix. Both these losses are not normalized by dimension and SPARSE ESTIMATION OF LARGE COVARIANCE MATRICES 11 therefore cannot be compared directly for different p’s. We have also tried matrix norms and the so-called quadratic loss from Huang et al. (2006) and found that, while there is no perfect agreement between results every time, qualitatively they are quite similar. The main conclusions we draw from comparing estimators using the Kullback–Leibler loss would be the same if any other loss had been used. The results for the normal data and the three models are summarized in Table 1, which gives the average losses and the corresponding standard errors over 50 replications. The NA values for the sample appear when the matrix is singular. The J0 penalty has been omitted because it is dominated by J1 and J2. In general, we see that banding and adaptive banding perform better on all three models than the sample, Ledoit–Wolf’s estimator and Lasso. On Σ1 and Σ2, as expected, banding and adaptive banding are very similar (par- ticularly once standard errors are taken into account); but on Σ3, adaptive banding does better, and the larger p, the bigger the difference. Also, for normal data the J2 penalty always dominates J1, though they are quite close. To test the behavior of the methods with heavy-tailed data, we also per- formed simulations for the same three covariance models under the multi- variate t3 distribution (the heaviest-tail t distribution with finite variance). Table 1 Multivariate normal simulations for models Σ1 (banded Cholesky factor), Σ2 (nonsparse Cholesky factor with elements decaying exponentially as one moves away from the diagonal) and Σ3 (sparse Cholesky factor with variable length rows). The Kullback–Leibler losses (means and, in parentheses, standard errors for the means over 50 replications) are reported for sample covariance, the shrinkage estimator of Ledoit and Wolf (2003), the Lasso method of Huang et al. (2006), the nonadaptive banding method of Bickel and Levina (2007), and our adaptive banding with penalties J1 and J2 p Sample Ledoit–Wolf Lasso J1 J2 Banding Σ1 30 8.38(0.14) 3.59(0.04) 1.26(0.04) 0.79(0.02) 0.64(0.02) 0.63(0.02) 100 NA 29.33(0.12) 6.91(0.11) 2.68(0.04) 2.21(0.03) 2.21(0.03) 200 NA 90.86(0.19) 14.57(0.13) 5.10(0.06) 4.35(0.05) 4.34(0.05) Σ2 30 8.38(0.14) 3.59(0.02) 2.81(0.04) 1.42(0.03) 1.32(0.02) 1.29(0.03) 100 NA 18.16(0.02) 16.12(0.09) 5.01(0.07) 4.68(0.06) 4.55(0.05) 200 NA 40.34(0.02) 32.84(0.11) 9.88(0.06) 9.28(0.06) 8.95(0.06) Σ3 30 8.68(0.12) 171.31(1.00) 4.62(0.07) 3.26(0.05) 3.14(0.06) 3.82(0.05) 100 NA 945.65(2.15) 35.60(0.71) 11.82(0.13) 11.24(0.12) 14.34(0.09) 200 NA 1938.32(3.04) 118.84(1.54) 23.30(0.17) 22.70(0.16) 29.50(0.14) 12 E. LEVINA, A. ROTHMAN AND J. ZHU These results are given in Table 2. All methods perform worse than they do for normal data, but banding and adaptive banding still do better than other methods. Because the standard errors are larger, it is harder to establish a uniform winner among J1, J2 and banding, but generally these results are consistent with results obtained for normal data. 
Finally, we note that the differences between estimators are amplified with growing dimension p: while the patterns remain the same for all three values of p considered (30, 100 and 200), quantitatively the improvement of adaptive banding over the Ledoit–Wolf estimator and Lasso is the largest at p = 200, and is expected to be even more for higher dimensions. Since one advantage of adaptive banding as compared to Lasso is pre- serving sparsity in the inverse itself, we also compared percentages of true zeros both in the Cholesky factor and in the inverse that were estimated as zeros, for the models Σ1 and Σ3 (Σ2 is not sparse). The results are shown in Table 3. While for the easier case of Σ1 all methods do a reasonable job of finding zeros in the Cholesky factor, Lasso loses them in the inverse, whereas both kinds of banding do not. This effect is even more apparent on the more challenging case of Σ3. To gain additional insight into the sparsity of structures produced by the different methods, we also show heatmap plots of the percentage of times each entry of the Cholesky factor (Figure 2) and the inverse itself (Figure 3) were estimated as zeros. It is clear that only adaptive banding reflects the true underlying structure. Overall, the simulations show that the adaptive banding achieves the goals that it was designed for: it has more flexibility than banding, and therefore is better able to capture the underlying sparse structure, but, unlike the Table 2 Multivariate t3 simulations for models Σ1, Σ2, Σ3. Descriptions for the entries are the same as those in Table 1 p Sample Ledoit–Wolf Lasso J1 J2 Banding Σ1 30 30.33(0.65) 9.22(0.65) 7.60(0.74) 4.32(0.21) 3.68(0.19) 4.22(0.60) 100 NA 58.24(2.61) 38.99(1.44) 15.58(0.78) 13.85(0.72) 13.74(0.72) 200 NA 139.21(3.02) 111.62(2.73) 31.45(1.80) 28.22(1.71) 27.95(1.70) Σ2 30 30.33(0.65) 6.20(0.15) 8.44(0.20) 5.91(0.24) 5.21(0.22) 5.23(0.24) 100 NA 24.37(0.67) 31.92(0.83) 21.76(0.76) 18.87(0.71) 19.33(0.85) 200 NA 50.40(1.41) 64.28(1.98) 44.58(2.00) 38.46(1.75) 39.81(1.98) Σ3 30 30.77(0.74) 199.73(4.32) 14.48(0.40) 11.47(0.44) 11.57(0.47) 11.69(0.39) 100 NA 1061.54(12.62) 82.05(1.47) 43.38(1.14) 45.01(1.13) 42.78(1.04) 200 NA 2182.54(21.29) 182.82(9.51) 87.5(2.75) 91.25(2.79) 85.65(2.49) SPARSE ESTIMATION OF LARGE COVARIANCE MATRICES 13 Table 3 Percentage of true zeros in the Cholesky factor and in the inverse estimated as zeros for multivariate normal data, for models Σ1 and Σ3 (means and, in parentheses, standard errors for the means over 50 replications) Zeros in the Cholesky factor (%) Zeros in Σ−1 (%) p Lasso J1 J2 Banding Lasso J1 J2 Banding Σ1 30 70.5(0.4) 94.5(0.3) 96.3(0.4) 100(0) 31.4(0.8) 94.5(0.3) 96.3(0.4) 100(0) 100 92.7(0.1) 98.6(0.04) 99.1(0.1) 100(0) 76.4(0.5) 98.6(0.3) 99.1(0.04) 100(0) 200 93.7(0.06) 99.3(0.01) 99.5(0.04) 100(0) 69.9(0.4) 99.3(0.01) 99.5(0.04) 100(0) Σ3 30 55.6(1.5) 83.2(0.5) 80.9(0.7) 72.1(0.9) 10.2(0.7) 75.3(0.9) 70.4(1.5) 73.1(0.7) 100 88.3(0.1) 94.9(0.1) 94.9(0.1) 92.8(0.2) 55.1(0.5) 92.3(0.3) 92.3(0.2) 93.5(0.2) 200 92.4(0.1) 97.6(0.04) 97.7(0.04) 96.7(0.1) 84.4(0.9) 96.6(0.1) 96.7(0.1) 97.1(0.1) Fig. 2. Heatmap plots of percentage of zeros at each location in the inverse (out of 50 replications) for Σ3, p = 30. Black represents 100%, white 0%. Lasso, it has the ability to preserve the sparsity in the inverse as well as in the Cholesky factor. 14 E. LEVINA, A. ROTHMAN AND J. ZHU Fig. 3. Heatmap plots of percentage of zeros at each location in the inverse (out of 50 replications) for Σ3, p = 30. 
Black represents 100%, white 0%. 3.2. Prostate cancer data. In this section we consider an application to a prostate cancer dataset [Adam et al. (2002)]. The current standard screening approach for prostate cancer is a serum test for a prostate-specific antigen. However, the accuracy is far from satisfactory [Pannek and Partin (1998) and Djavan et al. (1999)], and it is believed that a combination or a panel of biomarkers will be required to improve the detection of prostate can- cer [Stamey et al. (2002)]. Recent advances in high-throughput mass spec- troscopy have allowed one to simultaneously resolve and analyze thousands of proteins. In protein mass spectroscopy, we observe, for each blood serum sample i, the intensity xij for many time-of-flight values. Time of flight is related to the mass over charge ratio m/z of the constituent proteins in the blood. The full dataset we consider [Adam et al. (2002)] consists of 157 healthy patients and 167 with prostate cancer. The goal is to discriminate between the two groups. Following the original researchers, we ignored m/z- sites below 2000, where chemical artifacts can occur. To smooth the intensity profile, we average the data in consecutive blocks of 10, giving a total of 218 sites. Thus, each observation x = (x1, . . . , x218) consists of an intensity pro- file of length 218, with a known class (cancer or noncancer) membership. The prostate cancer dataset we consider comes with pre-specified training SPARSE ESTIMATION OF LARGE COVARIANCE MATRICES 15 (n = 216) and test sets (N = 108). Figure 4 displays the mean intensities for “cancer” and “noncancer” from the training data. We consider the linear discriminant method (LDA) and the quadratic discriminant method (QDA). The linear and quadratic discriminant analysis assume the class-conditional density of x in class k is normal N (µk,Σk). The LDA arises in the special case when one assumes that the classes have a common covariance matrix Σk = Σ,∀k. If the Σk are not assumed to be equal, we then get QDA. The linear and quadratic discriminant scores are as follows: LDA: δk(x) = x ⊤Σ̂−1µ̂k − 1 2 µ̂⊤k Σ̂ −1µ̂k + log π̂k, QDA: δk(x) = − 1 2 log |Σ̂k| − 1 2 (x− µ̂k) ⊤ Σ̂−1 k (x− µ̂k) + log π̂k, where π̂k = nk/n is the proportion of the number of class-k observations in the training data, and the classification rule is given by argmaxk δk(x). Detailed information on LDA and QDA can be found in Mardia et al. (1979). Using the training data, we estimate µ̂k = 1 nk ∑ i∈classk xi and estimate Σ̂−1 or Σ̂−1 k using five different methods: the shrinkage toward the identity estimator of Ledoit and Wolf (2003), banding the Cholesky fac- tor, the Lasso estimator, and our adaptive banding method; we also include the Naive Bayes method as a benchmark, since it corresponds to LDA with the covariance matrix estimated by a diagonal matrix. Tuning parameters in different methods are chosen via five-fold cross-validation based on the training data. Mean vectors and covariance matrices were estimated on the training data and plugged into the classification rule, which was then ap- plied to the test data. Note that for this dataset p is greater than n, hence, Fig. 4. The mean intensity for “cancer” and “noncancer” from the training data. 16 E. LEVINA, A. ROTHMAN AND J. ZHU the sample covariance matrix is not invertible and cannot be used in LDA or QDA. The results as measured by the test set classification error are summa- rized in Table 4. 
For this particular dataset, we can see that overall the QDA performs much worse than the LDA, so we focus on the results of the LDA. The Naive Bayes method, which assumes independence among variables, has the worst performance. Banding (with a common bandwidth) and the Lasso method perform similarly and better than the Naive Bayes method. Our adaptive banding method performs the best, with either the J1 or the J2 penalty. To gain more insight about the sparsity structures of different estimators, we plot the structures of the estimated Cholesky factors and the corresponding Σ̂−1 of different methods in Figures 5 and 6 (black represents nonzero, and white represents zero). Based on the differences in the clas- sification performance, these plots imply that the Lasso method may have included many unimportant elements in Σ̂−1 (estimating zeros as nonze- ros), while the banding method may have missed some important elements (estimating nonzeros as zeros). The estimated Cholesky factors and the cor- responding Σ̂−1’s from the adaptive banding method (J1 and J2) represent an interesting structure: the intensities at higher m/z-values are more or less conditionally independent, while the intensities at lower m/z-values show a “block-diagonal” structure. We note that in the above analysis we used likelihood on the cross- validation data as the criterion for selecting tuning parameters. As an al- ternative, we also considered using the classification error as the selection criterion. The results from the two selection criteria are similar. For simplic- ity of exposition, we only presented results from using the likelihood as the selection criterion. Finally, we note that Adam et al. (2002) reported an error rate around 5% for a four-class version of this problem, using a peak finding proce- dure followed by a decision tree algorithm. However, we have had diffi- culty replicating their results, even when using their extracted peaks. In Tibshirani et al. (2005) the following classification errors were reported for other methods applied to the two-class dataset we used here: 30 for Nearest Shrunken Centroids [Tibshirani et al. (2003)], and 16 for both Lasso and fused Lasso [Tibshirani et al. (2005)]. However, note that these authors did not apply block smoothing. Table 4 Test errors (out of 108 samples) on the prostate cancer dataset Method Naive Bayes Ledoit & Wolf Banding Lasso J1 J2 Test error (LDA) 32 16 16 18 11 12 Test error (QDA) 32 51 46 49 31 29 SPARSE ESTIMATION OF LARGE COVARIANCE MATRICES 17 Fig. 5. Structure of Cholesky factors estimated for the prostate data. White corresponds to zero, black to nonzero. 4. Summary and discussion. We have presented a new covariance esti- mator for ordered variables with a banded structure, which, by selecting the bandwidth adaptively for each row of the Cholesky factor, achieves more flexibility than regular banding but still preserves sparsity in the inverse. Adaptive banding is achieved using a novel nested Lasso penalty, which takes into account the ordering structure among the variables. The estima- tor has been shown to do well both in simulations and a real data exam- ple. Zhao et al. (2006) proposed a related penalty, the composite absolute penalty (CAP), for handling hierarchical structures in variables. However, 18 E. LEVINA, A. ROTHMAN AND J. ZHU Fig. 6. Structure of the inverse covariance matrix estimated for the prostate data. White corresponds to zero, black to nonzero. Zhao et al. 
(2006) only considered a hierarchy with two levels, while, in our setting, there are essentially p − 1 hierarchical levels; hence, it is not clear how to directly apply CAP without dramatically increasing the number of tuning parameters. The theoretical properties of the estimator are a subject for future work. The nested Lasso penalty is not convex in the parameters; it is likely that the theory developed by Fan and Li (2001) for nonconvex penalized maxi- mum likelihood estimation can be extended to cover the nested Lasso (it is not directly applicable since our penalty cannot be decomposed into a sum SPARSE ESTIMATION OF LARGE COVARIANCE MATRICES 19 of identical penalties on the individual coefficients). However, that theory was developed only for the case of fixed p, n →∞, and the more relevant analysis for estimation of large covariance matrices would be under the as- sumption p →∞, n →∞, with p growing at a rate equal to or possibly faster than that of n, as was done for the banded estimator by Bickel and Levina (2007). Another interesting question for future work is extending this idea to estimators invariable under variable permutations. Acknowledgments. We thank the Editor, Michael Stein, and two referees for their feedback which led us to improve the paper, particularly the data section. We also thank Jianqing Fan, Xiaoli Meng and Bin Yu for helpful comments. REFERENCES Adam, B., Qu, Y., Davis, J., Ward, M., Clements, M., Cazares, L., Semmes, O., Schellhammer, P., Yasui, Y., Feng, Z. and Wright, G. (2002). Serum protein fingerprinting coupled with a pattern-matching algorithm distinguishes prostate cancer from benign prostate hyperplasia and healthy men. Cancer Research 62 3609–3614. Anderson, T. W. (1958). An Introduction to Multivariate Statistical Analysis. Wiley, New York. MR0091588 Banerjee, O., d’Aspremont, A. and El Ghaoui, L. (2006). Sparse covariance selection via robust maximum likelihood estimation. In Proceedings of ICML. Bickel, P. J. and Levina, E. (2004). Some theory for Fisher’s linear discriminant func- tion, “naive Bayes,” and some alternatives when there are many more variables than observations. Bernoulli 10 989–1010. MR2108040 Bickel, P. J. and Levina, E. (2007). Regularized estimation of large covariance matrices. Ann. Statist. To appear. Dempster, A. (1972). Covariance selection. Biometrics 28 157–175. Dey, D. K. and Srinivasan, C. (1985). Estimation of a covariance matrix under Stein’s loss. Ann. Statist. 13 1581–1591. MR0811511 Diggle, P. and Verbyla, A. (1998). Nonparametric estimation of covariance structure in longitudinal data. Biometrics 54 401–415. Djavan, B., Zlotta, A., Kratzik, C., Remzi, M., Seitz, C., Schulman, C. and Marberger, M. (1999). Psa, psa density, psa density of transition zone, free/total psa ratio, and psa velocity for early detection of prostate cancer in men with serum psa 2.5 to 4.0 ng/ml. Urology 54 517–522. Fan, J., Fan, Y. and Lv, J. (2008). High dimensional covariance matrix estimation using a factor model. J. Econometrics. To appear. Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96 1348–1360. MR1946581 Friedman, J. (1989). Regularized discriminant analysis. J. Amer. Statist. Assoc. 84 165– 175. MR0999675 Friedman, J., Hastie, T., Höfling, H. G. and Tibshirani, R. (2007). Pathwise coor- dinate optimization. Ann. Appl. Statist. 1 302–332. Fu, W. (1998). Penalized regressions: The bridge versus the lasso. J. Comput. Graph. Statist. 7 397–416. 
MR1646710
Furrer, R. and Bengtsson, T. (2007). Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants. J. Multivariate Anal. 98 227–255. MR2301751
Haff, L. R. (1980). Empirical Bayes estimation of the multivariate normal covariance matrix. Ann. Statist. 8 586–597. MR0568722
Hastie, T., Tibshirani, R. and Friedman, J. (2001). The Elements of Statistical Learning. Springer, Berlin. MR1851606
Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 12 55–67.
Huang, J., Liu, N., Pourahmadi, M. and Liu, L. (2006). Covariance matrix selection and estimation via penalised normal likelihood. Biometrika 93 85–98. MR2277742
Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis. Ann. Statist. 29 295–327. MR1863961
Johnstone, I. M. and Lu, A. Y. (2007). Sparse principal components analysis. J. Amer. Statist. Assoc. To appear.
Ledoit, O. and Wolf, M. (2003). A well-conditioned estimator for large-dimensional covariance matrices. J. Multivariate Anal. 88 365–411. MR2026339
Mardia, K. V., Kent, J. T. and Bibby, J. M. (1979). Multivariate Analysis. Academic Press, New York. MR0560319
Pannek, J. and Partin, A. (1998). The role of psa and percent free psa for staging and prognosis prediction in clinically localized prostate cancer. Semin. Urol. Oncol. 16 100–105.
Pourahmadi, M. (1999). Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterisation. Biometrika 86 677–690. MR1723786
Smith, M. and Kohn, R. (2002). Parsimonious covariance matrix estimation for longitudinal data. J. Amer. Statist. Assoc. 97 1141–1153. MR1951266
Stamey, T., Johnstone, I., McNeal, J., Lu, A. and Yemoto, C. (2002). Preoperative serum prostate specific antigen levels between 2 and 22 ng/ml correlate poorly with post-radical prostatectomy cancer morphology: Prostate specific antigen cure rates appear constant between 2 and 9 ng/ml. J. Urol. 167 103–111.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267–288. MR1379242
Tibshirani, R., Hastie, T., Narasimhan, B. and Chu, G. (2003). Class prediction by nearest shrunken centroids, with applications to DNA microarrays. Statist. Sci. 18 104–117. MR1997067
Tibshirani, R., Saunders, M., Rosset, S., Zhu, J. and Knight, K. (2005). Sparsity and smoothness via the fused lasso. J. Roy. Statist. Soc. Ser. B 67 91–108. MR2136641
Wong, F., Carter, C. and Kohn, R. (2003). Efficient estimation of covariance selection models. Biometrika 90 809–830. MR2024759
Wu, W. B. and Pourahmadi, M. (2003). Nonparametric estimation of large covariance matrices of longitudinal data. Biometrika 90 831–844. MR2024760
Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika 94 19–35.
Zhao, P., Rocha, G. and Yu, B. (2006). Grouped and hierarchical model selection through composite absolute penalties. Technical Report 703, Dept. Statistics, UC Berkeley.
E. Levina, A. Rothman, J. Zhu
Department of Statistics
University of Michigan
Ann Arbor, Michigan 48109–1107
USA
E-mail: [email protected], [email protected], [email protected]
0non-cybersec
arXiv
Blade - Fly Away.
0non-cybersec
Reddit
ITAP of dust on an old DVD of mine.
0non-cybersec
Reddit
Making fairy buns 😋.
0non-cybersec
Reddit
[NSFW] curious cat.
0non-cybersec
Reddit
PsBattle: Danny Devito walking his dress wearing dog..
0non-cybersec
Reddit
How to use .where() after a populate in Mongoose. <p>So I have two schemas</p> <pre><code>var subcategories = new Schema({ //the category being populated needs to be the same case ; categoryId: [{ type: Schema.ObjectId, ref: 'categories' }], name: String, description: String, display: Boolean, active: Boolean, sortOrder: Number, createDate: Date, updateDate: Date, type: String, startDate: Date, endDate: Date, authorId: String }); </code></pre> <p>And</p> <pre><code>var categories = new Schema({ name: String, description: String, display: Boolean, active: Boolean, sortOrder: Number, createDate: Number, updateDate: Number, type: String, startDate: Date, endDate: Date, authorId: String }); </code></pre> <p>And I want a query that only returns documents where active/display is true in both the category and the subcategory. What I'm having trouble with is how to properly set the filter for categoryId after a populate. Here is what I have so far:</p> <pre><code>exports.generateList = function (req, res) { subcategories .find({})//grabs all subcategories .where('categoryId').ne([])//filter out the ones that don't have a category .populate('categoryId') .where('active').equals(true) .where('display').equals(true) .where('categoryId.active').equals(true) .where('display').in('categoryId').equals(true) .exec(function (err, data) { if (err) { console.log(err); console.log('error returned'); return res.send(500, { error: 'Failed insert' }); //return so we don't send twice } if (!data) { return res.send(403, { error: 'Authentication Failed' }); } res.send(200, data); console.log('success generate List'); }); }; </code></pre> <p>The only problem is that even when I have a category with display = false, it still gets returned.</p>
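<p>One pattern that could work here (a sketch against the schemas above, not a tested drop-in): <code>populate()</code> runs as a second query after <code>find()</code>, so <code>.where('categoryId.active')</code> never reaches the categories collection; on the subcategories collection that path is just an ObjectId. Mongoose's populate <code>match</code> option can filter the referenced documents instead, and subcategories whose <code>categoryId</code> array ends up empty after matching are then dropped in code:</p> <pre><code>// Sketch: filter both sides, assuming the subcategories/categories schemas above
exports.generateList = function (req, res) {
    subcategories
        .find({ active: true, display: true })      // filter subcategories directly
        .populate({
            path: 'categoryId',
            match: { active: true, display: true }  // filter the referenced categories
        })
        .exec(function (err, data) {
            if (err) return res.send(500, { error: 'Failed insert' });
            // populate() removes non-matching refs from the array,
            // so drop subcategories whose category didn't match:
            data = data.filter(function (doc) {
                return doc.categoryId.length > 0;
            });
            return res.send(200, data);
        });
};
</code></pre>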
0non-cybersec
Stackexchange
Hair questions... Two of 'em!. A) What is the evolutionary explanation for male hair loss? B) Why does it hurt so much to pull hair? It seems like quite a weak spot of the human anatomy.
0non-cybersec
Reddit
Generalized Way of Treating Extrema under Certain Constraints (Inequalities). <p>Let's take a simple example $f: \mathbb R^{2} \to \mathbb R$, $f(x,y)=xy$, and then I want to treat $f$ for a constraint $M$ under all possible inequalities:</p> <p><strong>Case 1)</strong> $M:=\{(x,y)\in \mathbb R^{2}|x^2+y^2=1\}$</p> <p><strong>Case 2)</strong> $M:=\{(x,y)\in \mathbb R^{2}|x^2+y^2\leq1\}$</p> <p><strong>Case 3)</strong> $M:=\{(x,y)\in \mathbb R^{2}|x^2+y^2\geq 1\}$</p> <p><strong>Case 4)</strong> $M:=\{(x,y)\in \mathbb R^{2}|x^2+y^2&lt;1\}$</p> <p><strong>Case 5)</strong> $M:=\{(x,y)\in \mathbb R^{2}|x^2+y^2&gt;1\}$</p> <p><em>Ideas:</em></p> <p><strong>Case 1</strong>) This is just the Lagrange multiplier method, which is the standard constraint we have used thus far.</p> <p><strong>Case 2</strong>) Similar to <strong>Case 1)</strong> because the extrema are <strong>ALWAYS</strong> attained on the boundary <strong>(question: is this always true?)</strong>. We simply apply Lagrange multipliers on $\partial M$.</p> <p><strong>Case 3</strong>) $M$ is closed, $f$ is continuous and $f|_{M}$ is bounded below, therefore $f|_{M}$ takes on a minimum <strong>(question: is this the correct reasoning?)</strong>. In order to evaluate the critical points on the open set $M^{\circ}$, I would then simply let $\nabla f(x,y)=\begin{pmatrix} y\\ x \end{pmatrix}=0$, which implies $y=0, x=0$, so $f$ has no critical points on $M^{\circ}$, seeing as the only critical point $(0,0)\notin M$. Since $f|_{M}$ does take on a minimum, the minimum must therefore be located on the boundary. To find it I once again use the Lagrange multiplier method. <strong>(question: if I had found critical points $(x,y) \in M^{\circ}$, would I still have to compare the Lagrange multiplier results on $\partial M$ with my critical points found on $f|_{M^{\circ}}$, and the largest/smallest values would be the global maximum/minimum?)</strong></p> <p><strong>Case 4</strong>) Setting $\nabla f(x,y)=0$, it follows that $y=0, x=0$ and $(0,0) \in M$, therefore we use the Hessian matrix to assess whether it is a saddle point or a strict extremum. </p> <p><strong>Case 5</strong>): Since the <strong>only</strong> critical point $(0,0) \notin M$ and $M$ is open, $f|_{M}$ does not have any extrema. </p> <p>Answering the questions and identifying any fallacies in my reasoning would be of great help to me!</p>
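<p>As a quick worked check of <strong>Case 1</strong> (my own computation, not part of the question): $\nabla f = \lambda \nabla g$ with $g(x,y)=x^2+y^2-1$ gives $y = 2\lambda x$ and $x = 2\lambda y$, hence $x = 4\lambda^2 x$ and so $x^2 = y^2$. Combined with $x^2+y^2=1$ this yields $x^2=y^2=\tfrac12$, so $f(x,y)=xy=\pm\tfrac12$: maxima $\tfrac12$ at $\pm(\tfrac{1}{\sqrt2},\tfrac{1}{\sqrt2})$ and minima $-\tfrac12$ at $\pm(\tfrac{1}{\sqrt2},-\tfrac{1}{\sqrt2})$. Any answer to Case 2 should reproduce these boundary values, since the only interior critical point $(0,0)$ is a saddle.</p>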
0non-cybersec
Stackexchange
Smoking rate in U.S. hits all-time low, CDC says.
0non-cybersec
Reddit
Book Recommendation for mathematical finance. <p>Does anyone know a book which covers topics on:</p> <p>Brownian Motion </p> <p>Martingales </p> <p>Stochastic Calculus</p> <p>Stochastic Differential Equations </p> <p>Options pricing. Black-Scholes model</p> <p>Fundamental Theorems. Interest Rates </p> <p>Random Walk </p> <p>Applications in Insurance </p> <p>Simulations. Convergence </p> <p>Simulation methods </p> <p>I would like something in-depth, but at an undergraduate level.</p>
0non-cybersec
Stackexchange
Why is it that the surface integral of the flux of a vector field is the same as the surface integral of the vector field itself?. <p>In other words, this:</p> <p><img src="https://i.stack.imgur.com/2sggf.png" alt="definition"></p> <p><a href="http://www.math.ucla.edu/~archristian/teaching/32b-w17/week-7.pdf" rel="nofollow noreferrer">http://www.math.ucla.edu/~archristian/teaching/32b-w17/week-7.pdf</a></p> <p>Is this just a definition because what we really care about is how much the vectors are "pushing" through the surface? Or is it an actual equality?</p>
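<p>Spelling the definitions out (standard notation, nothing beyond the linked notes): for an oriented surface $S$ parametrized by $\mathbf r(u,v)$ over a domain $D$,</p> <p>$$\iint_S \mathbf F\cdot d\mathbf S \;=\; \iint_S (\mathbf F\cdot \mathbf n)\, dS \;=\; \iint_D \mathbf F(\mathbf r(u,v))\cdot(\mathbf r_u\times\mathbf r_v)\,du\,dv,$$</p> <p>because $d\mathbf S = \mathbf n\,dS = (\mathbf r_u\times \mathbf r_v)\,du\,dv$. So the two expressions are equal by definition: "the flux of $\mathbf F$ through $S$" and "the surface integral of $\mathbf F$ over $S$" are two names for the same scalar quantity.</p>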
0non-cybersec
Stackexchange
How to calculate percentage for battery level indicator?. <p>I'm doing a hobby project in which I want to display the battery voltage on an OLED display. I have a battery indicator with a width of 30 pixels.</p> <p>I want to display the full 30-pixel width when the battery is at 8.4 V, and 0 pixels when it is at 6 V.</p> <p>Currently I'm calculating how many pixels to display as below:</p> <p>indicatorWidth = (((voltage / 8.4) * 100) * 30) / 100;</p> <p>When the battery is at 6 V this shows 21 pixels (since 6 / 8.4 × 30 ≈ 21).</p> <p>How do I fix this to show 0 width at 6 V and 30 width at 8.4 V?</p> <p>Regards</p>
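<p>A sketch of the usual fix (assuming the 6.0–8.4 V range stated above; note that battery discharge curves are not linear, so a voltage-based percentage is only an approximation): subtract the empty-level voltage before scaling, then clamp.</p> <pre><code>/* Map 6.0 V .. 8.4 V linearly onto 0 .. 30 pixels (hypothetical helper). */
int batteryBarWidth(float voltage) {
    float frac = (voltage - 6.0f) / (8.4f - 6.0f); /* 0.0 at 6 V, 1.0 at 8.4 V */
    if (frac &lt; 0.0f) frac = 0.0f;                  /* clamp below empty */
    if (frac &gt; 1.0f) frac = 1.0f;                  /* clamp above full */
    return (int)(frac * 30.0f + 0.5f);             /* round to nearest pixel */
}
</code></pre>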
0non-cybersec
Stackexchange
MRW I'm the white house press secretary and I've dropped all my spagett.
0non-cybersec
Reddit
Multi screen is not working on Dell OptiPlex 5050 with Ubuntu 19.10. <p>I have installed Kubuntu 19.10 on my new Dell OptiPlex 5050. I wanted to use dual screens, so I have plugged in two monitors, one with a VGA cable and the other with a DisplayPort cable. The problem is that only one display gets output at a time; the one that gets recognized is whichever is plugged in when the system boots. The other display won't be detected.</p> <p>I have tried extending across the monitors under Windows, and there it works perfectly after a display driver update.</p> <p>My question is how I can extend my screen across dual monitors on Ubuntu. Thank you for your help in advance.</p>
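<p>Under X11 the second output can usually be inspected and forced on with <code>xrandr</code> (the output names below are placeholders; use the names your own query prints):</p> <pre><code>xrandr --query
# e.g. enable the DisplayPort monitor to the right of the VGA one:
xrandr --output DP-1 --auto --right-of VGA-1
</code></pre>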
0non-cybersec
Stackexchange
Remove debuild buildsystem=cmake linker flags. <p>I'm trying to create a *.deb file from debian using cmake with a mingw cross-compiler. CMake's compiler test fails when using <code>dpkg-buildpackage</code>.</p> <p>Building normally is fine: </p> <pre><code>mkdir build &amp;&amp; cd build cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -- The C compiler identification is GNU 6.3.0 -- The CXX compiler identification is GNU 6.3.0 -- Check for working C compiler: /etc/alternatives/i686-w64-mingw32-gcc -- Check for working C compiler: /etc/alternatives/i686-w64-mingw32-gcc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Check for working CXX compiler: /etc/alternatives/i686-w64-mingw32-g++ -- Check for working CXX compiler: /etc/alternatives/i686-w64-mingw32-g++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Configuring done -- Generating done </code></pre> <p>However when I build this using <code>dpkg-buildpackage</code> it fails to configure:</p> <pre><code>dpkg-buildpackage -uc -us dpkg-buildpackage: info: source package foo dpkg-buildpackage: info: source version 1.0 dpkg-buildpackage: info: source distribution stretch dpkg-buildpackage: info: source changed by $USER dpkg-buildpackage: info: host architecture amd64 dpkg-source --before-build hw fakeroot debian/rules clean dh clean --buildsystem=cmake --parallel dh_testdir -O--buildsystem=cmake -O--parallel dh_auto_clean -O--buildsystem=cmake -O--parallel dh_clean -O--buildsystem=cmake -O--parallel dpkg-source -b hw dpkg-source: info: using source format '3.0 (native)' dpkg-source: info: building sim-honeywell-ease-control in sim-honeywell-ease-control_1.0.tar.xz dpkg-source: info: building sim-honeywell-ease-control in sim-honeywell-ease-control_1.0.dsc debian/rules build make: 'build' is up to date. fakeroot debian/rules binary dh binary --buildsystem=cmake --parallel dh_testdir -O--buildsystem=cmake -O--parallel dh_update_autotools_config -O--buildsystem=cmake -O--parallel dh_auto_configure -O--buildsystem=cmake -O--parallel cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -- The C compiler identification is GNU 6.3.0 -- The CXX compiler identification is GNU 6.3.0 -- Check for working C compiler: /etc/alternatives/i686-w64-mingw32-gcc -- Check for working C compiler: /etc/alternatives/i686-w64-mingw32-gcc -- broken CMake Error at /usr/share/cmake-3.7/Modules/CMakeTestCCompiler.cmake:51 (message): The C compiler "/etc/alternatives/i686-w64-mingw32-gcc" is not able to compile a simple test program. </code></pre> <p>The interesting part of the full log is a failure during linking: </p> <pre><code>/etc/alternatives/i686-w64-mingw32-gcc -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wl,-z,relro -Wl,--whole-archive CMakeFiles/cmTC_fc912.dir/objects.a -Wl,--no-whole-archive -o cmTC_fc912.exe -Wl,--out-implib,libcmTC_fc912.dll.a -Wl,--major-image-version,0,--minor-image-version,0 @CMakeFiles/cmTC_fc912.dir/linklibs.rsp /usr/bin/i686-w64-mingw32-ld: unrecognized option '-z' </code></pre> <p>The mingw linker fails to recognize the <code>-z</code> option. 
When I <code>diff</code> the CMakeCache.txt, I can see that dpkg-buildpackage adds some linker flags by default:</p> <pre><code>&lt; CMAKE_EXE_LINKER_FLAGS:STRING=-Wl,-z,relro --- &gt; CMAKE_EXE_LINKER_FLAGS:STRING= </code></pre> <p>How can I prevent <code>dpkg-buildpackage</code> from doing this?</p> <p>FYI: my <code>debian/rules</code> file looks like this: </p> <pre><code>#!/usr/bin/make -f %: dh $@ --buildsystem=cmake --parallel </code></pre>
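<p><em>A possible workaround (a sketch, not verified against this exact setup):</em> the <code>-Wl,-z,relro</code> flag comes from the hardening defaults injected by <code>dpkg-buildflags</code>. Assuming <code>dh</code> takes its flags from <code>dpkg-buildflags</code>, the maintainer variables should let <code>debian/rules</code> strip just that flag, or disable hardening entirely for the cross build:</p> <pre><code>#!/usr/bin/make -f

# Strip the linker flag that mingw's ld does not understand
export DEB_LDFLAGS_MAINT_STRIP = -Wl,-z,relro
# ...or, more bluntly, turn off all hardening flags:
#export DEB_BUILD_MAINT_OPTIONS = hardening=-all

%:
	dh $@ --buildsystem=cmake --parallel
</code></pre>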
0non-cybersec
Stackexchange
AttributeError: &#39;module&#39; object has no attribute &#39;SSL_ST_INIT&#39;. <p>I am getting an SSL error using twilio. Does anybody have any suggestions?</p> <h3>Error:</h3> <blockquote> <pre><code>Traceback (most recent call last): File "communication_easy/that_guy/communicate.py", line 4, in &lt;module&gt; from twilio.rest import Client File "/usr/local/lib/python2.7/dist-packages/twilio/rest/__init__.py", line 14, in &lt;module&gt; from twilio.http.http_client import TwilioHttpClient File "/usr/local/lib/python2.7/dist-packages/twilio/http/http_client.py", line 1, in &lt;module&gt; from requests import Request, Session, hooks File "/usr/local/lib/python2.7/dist-packages/requests/__init__.py", line 84, in &lt;module&gt; from urllib3.contrib import pyopenssl File "/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 46, in &lt;module&gt; import OpenSSL.SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in &lt;module&gt; from OpenSSL import rand, crypto, SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 118, in &lt;module&gt; SSL_ST_INIT = _lib.SSL_ST_INIT AttributeError: 'module' object has no attribute 'SSL_ST_INIT' </code></pre> </blockquote> <h3>Code:</h3> <pre><code>import random from twilio.rest import Client TWILIO_ACCOUNT_SID = "asdfasdfsdfsdf" TWILIO_AUTH_TOKEN = "asdfasdfasdfasf" TWILIO_NUMBER = "+5555555" def send_text(body, target_phone_number): client = Client(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN) message = client.messages.create(body=body, from_=TWILIO_NUMBER, to=target_phone_number) send_text("Test message", "+15551234567")  # placeholder arguments; original used undefined names </code></pre>
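<p><em>A hedged suggestion (assuming the usual cause):</em> this traceback typically means an old system <code>pyOpenSSL</code> (note it loads from <code>/usr/lib/python2.7/dist-packages</code>) is being imported against a newer <code>cryptography</code>/OpenSSL in which <code>SSL_ST_INIT</code> no longer exists. Upgrading the pair in place often clears it:</p> <pre><code>sudo pip install --upgrade pyopenssl cryptography
</code></pre>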
0non-cybersec
Stackexchange
Fuck me, right?.
0non-cybersec
Reddit
Spamassassin doesn&#39;t filter out all high score mails. <p>Some mails with high scores pass through to users' mailboxes even though many of the same spam messages get filtered correctly into the server's spambox. Here is one example of a mail that was correctly filtered as spam:</p> <pre><code>Date: Thu, 04 Aug 2016 15:08:33 +0300 From: Erich Gibbs &lt;[email protected]&gt; To: **** &lt;*****@****.**&gt; Subject: please sign [-- Attachment #1 --] [-- Type: multipart/related, Encoding: 7bit, Size: 16K --] [-- Attachment #1 --] [-- Type: text/plain, Encoding: 8bit, Size: 0.1K --] Dear **** Please sign the receipt attached for the arrival of new office facilities. Best regards, Erich Gibbs [-- Attachment #2: fe12f845f8ff.zip --] [-- Type: application/zip, Encoding: base64, Size: 15K --] [-- application/zip is unsupported (use 'v' to view this part) --] [-- Attachment #2: SpamAssassinReport.txt --] [-- Type: text/plain, Encoding: 7bit, Size: 1.0K --] Spam detection software, running on the system "****.****.**", has identified this incoming email as possible spam. The original message has been attached to this so you can view it or label similar future email. If you have any questions, see the administrator of that system for details. Content preview: Dear **** Please sign the receipt attached for the arrival of new office facilities. Best regards, Erich Gibbs [...] Content analysis details: (5.1 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- 2.9 HELO_DYNAMIC_SPLIT_IP Relay HELO'd using suspicious hostname (Split IP) 0.2 CK_HELO_GENERIC Relay used name indicative of a Dynamic Pool or Generic rPTR 0.0 TVD_RCVD_IP Message was received from an IP address 0.7 SPF_NEUTRAL SPF: sender does not match SPF record (neutral) 1.3 RDNS_NONE Delivered to internal network by a host with no rDNS </code></pre> <p>Here is a similar message which reached my inbox:</p> <pre><code>Return-Path: &lt;[email protected]&gt; Received: from 108.subnet110-136-45.speedy.telkom.net.id (108.subnet110-136-45.speedy.telkom.net.id [110.136.45.108] (may be forged)) by (8.14.7/8.14.7) with ESMTP id u74CAuvv038162 for &lt;****@****.**&gt;; Thu, 4 Aug 2016 14:11:07 +0200 Received: from root by telkom.net.id with local (Exim 4.80) (envelope-from &lt;[email protected]&gt;) id kcxAKb-MGbTTg-NC for ****@****.**; Thu, 04 Aug 2016 19:10:52 +0700 To: "*****" &lt;****@****.**&gt; Subject: please sign Date: Thu, 04 Aug 2016 19:10:52 +0700 From: "Earlene Blankenship" &lt;[email protected]&gt; Message-ID: &lt;[email protected]&gt; X-Priority: 3 MIME-Version: 1.0 Content-Type: multipart/related; type="text/html"; boundary="b1_560b0ac54766d9148a54052f9a46e5ef" X-SPF-Scan-By: smf-spf v2.0.2 - http://smfs.sf.net/ Received-SPF: None (****.****.**: domain of [email protected] does not designate permitted sender hosts) receiver=****.****.**; client-ip=110.136.45.108; envelope-from=&lt;[email protected]&gt;; helo=108.subnet110-136-45.speedy.telkom.net.id; X-Virus-Scanned: clamav-milter 0.99.2 at ****.****.** X-Virus-Status: Clean X-Scanned-By: MIMEDefang 2.78 on 62.168.116.66 --b1_560b0ac54766d9148a54052f9a46e5ef Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 8bit Dear **** Please sign the receipt attached for the arrival of new office facilities. 
Best regards, Earlene Blankenship --b1_560b0ac54766d9148a54052f9a46e5ef Content-Type: application/zip; name="d8bc18159378.zip" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="d8bc18159378.zip" </code></pre> <p>When I check the score of the same message with spamc, the score is high. I don't know why it is not flagged correctly before delivery.</p> <pre><code># spamc -R &lt;'1470312683.38275_0.****.****.**:2,Sa' 8.3/5.0 Spam detection software, running on the system "****.****.**", has identified this incoming email as possible spam. The original message has been attached to this so you can view it or label similar future email. If you have any questions, see the administrator of that system for details. Content preview: Dear servis Please sign the receipt attached for the arrival of new office facilities. Best regards, Earlene Blankenship [...] Content analysis details: (8.3 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- 2.9 HELO_DYNAMIC_SPLIT_IP Relay HELO'd using suspicious hostname (Split IP) 0.0 CK_HELO_DYNAMIC_SPLIT_IP Relay HELO'd using suspicious hostname (Split IP) 2.7 RCVD_IN_PSBL RBL: Received via a relay in PSBL [110.136.45.108 listed in psbl.surriel.com] 3.6 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL [110.136.45.108 listed in zen.spamhaus.org] -1.2 RP_MATCHES_RCVD Envelope sender domain matches handover relay domain 0.4 RDNS_DYNAMIC Delivered to internal network by host with dynamic-looking rDNS # </code></pre> <p>Any ideas what may be the cause? Thank you.</p>
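<p><em>A hedged diagnostic hint (an assumption about the likely cause, not a confirmed answer):</em> differences like this are often timing-dependent: the network tests (PSBL, Spamhaus PBL) account for 6.3 of the 8.3 points here, and they depend on DNS answers at scan time, so a slow or failing DNSBL lookup during delivery can silently drop the score below the threshold. Re-scanning the stored message with debug output may show whether those tests actually fire at delivery:</p> <pre><code>spamassassin -D dns &lt; '1470312683.38275_0.****.****.**:2,Sa' 2&gt;&amp;1 | grep -Ei 'rbl|psbl|spamhaus|timeout'
</code></pre>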
0non-cybersec
Stackexchange
IBM Proposes WiFi Security Approach After Firesheep.
1cybersec
Reddit
How to search Google Groups for keywords. <p>I went to <a href="https://groups.google.com" rel="nofollow noreferrer">https://groups.google.com</a> and searched for <code>boot ring seat</code>, looking for a specific Usenet message that I know has those three words in it. <a href="https://groups.google.com/forum/#!search/boot$20ring$20seat" rel="nofollow noreferrer">No results.</a> All right, I thought, maybe I'm misremembering the Usenet message. So I searched instead for <code>whenever</code> &mdash; there <strong>must</strong> be some Usenet messages with that word in it! But still <a href="https://groups.google.com/forum/#!search/whenever" rel="nofollow noreferrer">no results</a>. (It says "Posts: 0, groups: 1354".) (These searches were done when not logged in to Google.)</p> <p>What am I doing wrong? How do I search for Usenet posts by keywords in them?</p> <hr> <p>Some notes (that you can skip) about the research I already did toward answering this question:</p> <p>There <em>is</em> a help link on that search-result page, but there's nothing about searching listed there. The most promising subtopic is "Learn how to read and create posts", which doesn't actually answer this question.</p> <p>I also did a Google Web search on <code>searching Google Groups Usenet</code>, but the top few results were unhelpful. The top one is from 2010, so I skipped it as likely <a href="https://www.merriam-webster.com/dictionary/overtaken%20by%20events" rel="nofollow noreferrer">OBE</a>; the next promising one is <a href="//superuser.com/q/733111">this SU post</a>, which is about browsing a newsgroup rather than searching by keyword; etc.</p>
0non-cybersec
Stackexchange
Carragher: The brilliant thing about Arsène Wenger that I always admire was he didn't have the finances that United had. When a team competes with a team that has more money, they'll do it a different way like how we did at Liverpool, but they actually played as good football if not better at times..
0non-cybersec
Reddit
What is the maximum number of dimensions allowed for an array in C++?. <p>You can declare a very simple array with 10 elements and use it this way:</p> <pre><code>int myArray[10]; myArray[4] = 3; std::cout &lt;&lt; myArray[4]; </code></pre> <p>Or declare a 2d array with 10x100 elements as <code>int myArray[10][100];</code></p> <p>You can even create more complicated 3-d arrays with <code>int myArray[30][50][70];</code></p> <p>I can even go as far as writing:</p> <pre><code>int complexArray[4][10][8][11][20][3]; complexArray[3][9][5][10][15][3] = 5; std::cout &lt;&lt; complexArray[3][9][5][10][15][3]; </code></pre> <p>So, <strong>what is the maximum number of dimensions that you can use when declaring an array?</strong></p>
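<p><em>A hedged note (from the standard's recommendations, not a hard guarantee):</em> the C++ standard sets no fixed maximum; Annex B only <em>recommends</em> that implementations support at least 256 nested pointer/array/function declarators, so the real ceiling is implementation-defined. A quick sketch that any mainstream compiler should accept:</p> <pre><code>#include &lt;iostream&gt;

int main() {
    // 8 dimensions, beyond the question's 6; compiles fine.
    // The practical limit is the implementation's, not the language's.
    int deep[2][2][2][2][2][2][2][2] = {};
    deep[1][1][1][1][1][1][1][1] = 42;
    std::cout &lt;&lt; deep[1][1][1][1][1][1][1][1] &lt;&lt; '\n';  // prints 42
}
</code></pre>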
0non-cybersec
Stackexchange
Why should the primes used in RSA be distinct?. <p>The two primes $p$ and $q$ that form the modulus of the public key need to be <strong>distinct</strong>. What's the reason for them to be distinct? Is it because the factorization of $p^2$, where $p$ is a prime, is relatively easier, or is there some other reason?</p>
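<p><em>A sketch of the standard argument (hedged, but elementary):</em> if $p = q$, the modulus is a perfect square, so anyone can factor it with a single integer square root:</p> <p>$$n = p^2 \;\Longrightarrow\; p = \lfloor\sqrt{n}\rfloor,$$</p> <p>so all security is lost immediately; this is far stronger than "relatively easier". The key-generation arithmetic also changes, since $\varphi(p^2) = p(p-1)$ rather than $(p-1)^2$, so the textbook formula $\varphi(n) = (p-1)(q-1)$ would no longer apply.</p>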
0non-cybersec
Stackexchange
Java: Cannot use the &quot;USE&quot; keyword from MySQL?. <p>I have two SQL files:</p> <p>query1.sql</p> <pre><code>SELECT * FROM laptop_store.gpu; </code></pre> <p>query2.sql</p> <pre><code>USE laptop_store; SELECT * FROM gpu </code></pre> <p>Executing both in MySQL Workbench 8.0 CE displays the same result:</p> <p><img src="https://i.stack.imgur.com/PoZ6J.png" alt="result:"></p> <p>When I copy the contents of the two SQL files and run them in Java:</p> <pre><code>import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.Statement; public class NewClass { static String queryString1 = "SELECT * FROM laptop_store.gpu"; static String queryString2 = "USE laptop_store;\n" + "SELECT * FROM gpu"; public static void main(String[] args) { try{ Class.forName("com.mysql.cj.jdbc.Driver"); Connection con = DriverManager.getConnection( "jdbc:mysql://localhost:3306/laptop_store","root","tomnisa123"); Statement statement = con.createStatement(); //Change SQL code here: ResultSet rs = statement.executeQuery(queryString1); ResultSetMetaData rsmd = rs.getMetaData(); int colCount = rsmd.getColumnCount(); while(rs.next()) { for (int i = 1; i &lt;= colCount; i++){ System.out.print(rs.getString(i) + " "); } System.out.println(); } con.close(); } catch(Exception e){ System.out.println(e); } } } </code></pre> <p>Only the first one succeeds:</p> <p><img src="https://i.stack.imgur.com/olzAX.png" alt="success">.</p> <p>But the second one shows an error:</p> <pre><code>java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT * FROM gpu' at line 2 </code></pre> <p>Why can I not use the "USE" keyword from MySQL?</p>
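<p><em>A hedged note (the usual JDBC behavior, assuming MySQL Connector/J):</em> <code>Statement.executeQuery</code> sends a single statement, so the two-statement string from <code>query2.sql</code> is rejected as a syntax error. Two sketches that may help:</p> <pre><code>// Option 1: run USE as its own statement, then the query.
Statement statement = con.createStatement();
statement.execute("USE laptop_store");
ResultSet rs = statement.executeQuery("SELECT * FROM gpu");

// Option 2: opt in to multi-statement strings in the JDBC URL and
// iterate the results with execute()/getMoreResults(), e.g.:
// jdbc:mysql://localhost:3306/laptop_store?allowMultiQueries=true
</code></pre>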
0non-cybersec
Stackexchange
arXiv:1206.2660v4 [cs.CR] 11 Apr 2013

Privacy-Preserving Data Aggregation without Secure Channel: Multivariate Polynomial Evaluation

Taeho Jung†, XuFei Mao‡, Xiang-Yang Li†, Shao-Jie Tang†, Wei Gong‡, and Lan Zhang‡
†Department of Computer Science, Illinois Institute of Technology, Chicago, IL
‡School of Software, TNLIST, Tsinghua University, Beijing

Abstract—Much research has been conducted to securely outsource multiple parties' data aggregation to an untrusted aggregator without disclosing each individual's privately owned data, or to enable multiple parties to jointly aggregate their data while preserving privacy. However, those works either require secure pair-wise communication channels or suffer from high complexity. In this paper, we consider how an external aggregator or multiple parties can learn some algebraic statistics (e.g., sum, product) over participants' privately owned data while preserving the data privacy. We assume all channels are subject to eavesdropping attacks, and all the communications throughout the aggregation are open to others. We propose several protocols that successfully guarantee data privacy under this weak assumption while limiting both the communication and computation complexity of each participant to a small constant.

Index Terms—Privacy, aggregation, secure channels, SMC, homomorphic.

I. INTRODUCTION

The privacy-preserving data aggregation problem has long been a hot research issue in the field of applied cryptography. In numerous real-life applications such as crowd sourcing or mobile cloud computing, individuals need to provide their sensitive data (location-related or personal-information-related) to receive specific services from the entire system (e.g., location based services or mobile based social networking services). There are usually two different models in this problem: 1) an external aggregator collects the data and wants to conduct an aggregation function on participants' data (e.g., crowd sourcing); 2) participants themselves are willing to jointly compute a specific aggregation function whose input data is co-provided by themselves (e.g., social networking services). However, the individual's data should be kept secret, and the aggregator or other participants are not supposed to learn any useful information about it. Secure Multi-party Computation (SMC), Homomorphic Encryption (HE) and other cryptographic methodologies can be partially or fully exploited to solve this problem, but they are subject to some restrictions in this setting.

¹ The research of the authors is partially supported by NSFC under Grant No. 61170216, No. 61228202, and No. 61272426, China 973 Program under Grant No. 2011CB302705, China Postdoctoral Science Foundation funded project under grant No. 2012M510029, NSF CNS-0832120, NSF CNS-1035894, NSF ECCS-1247944.

Secure Multi-party Computation (SMC) was first formally introduced by Yao [22] in 1982 as Secure Two-Party Computation. Generally, it enables $n$ parties to jointly and privately compute a function
$$f(x_1, x_2, \cdots, x_n) = \{y_1, y_2, \cdots, y_n\}$$
where $x_i$ is the input of participant $i$, and the result $y_i$ is returned to participant $i$ only. Each result can be relevant to all inputs $x_i$, and each participant $i$ knows nothing but his own result $y_i$. One could let the function in SMC output only one uniform result to all or parts of the participants, which is the algebraic aggregation of their input data.
Then the privacy-preserving data aggregation problem seems to be solved by this approach. However, this does not completely solve our problem, because interactive invocation is required for participants in synchronous SMC (e.g., [13]), which leads to high communication and computation complexity; this will be compared in Section VIII. Even in asynchronous SMC, the computation complexity is still too high for practical applications.

Homomorphic Encryption (HE) allows direct addition and multiplication of ciphertexts while preserving decryptability. That is,
$$Enc(m_1) \otimes Enc(m_2) = Enc(m_1 \times m_2),$$
where $Enc(m)$ stands for the ciphertext of $m$, and $\otimes$, $\times$ refer to the homomorphic operations on the ciphertexts and plaintexts respectively. One could also try to solve our problem using this technique, but HE uses the same decryption key for the original data and the aggregated data. That is, the operator who executes homomorphic operations on the ciphertexts is not authorized to obtain the final result. This forbids the aggregator from decrypting the aggregated result, because if the aggregator were allowed to decrypt the final result, he could also decrypt the individual ciphertexts received, which contradicts our motivation. Also, because the size of the plaintext space is limited, the number of addition and multiplication operations executed upon ciphertexts was limited until Gentry et al. proposed a fully homomorphic encryption scheme [11] and implemented it in [12]. However, Lauter et al. pointed out in [16] that the complexity of general HE is too high for real applications. Lauter also proposed a HE scheme which sacrifices the possible number of multiplications for speed, but it still needs too much time to execute homomorphic operations on ciphertexts.

Besides the aforementioned drawbacks, both SMC and HE require an initialization phase during which participants request keys from key issuers via a secure channel. This could be a security hole, since the security of those schemes relies on the assumption that keys are disclosed to authorized participants only.

[Unnumbered comparison table at the top of page 2:]

  Secure Multi-party Computation
    Pros: different outputs for different participants
    Cons: high complexity due to the computation based on garbled circuits;
          frequent interactions required for synchronous SMC
  Homomorphic Encryption
    Pros: efficient if # of multiplications is restricted
    Cons: decrypter can decrypt both aggregated data and individual data;
          trade-off between # of multiplications and complexity exists

In this paper, we revisit the classic privacy-preserving data aggregation problem. Our goal is to design efficient protocols without relying on a trusted authority and secure pair-wise communication channels. The main contributions of this paper are:

• Formulation of a model without secure channel: Different from many other models of the privacy-preserving data aggregation problem, our model does not require a secure communication channel throughout the protocol.
• Efficient protocol in linear time: The total communication and computation complexity of our work is proportional to the number of participants $n$, while the complexities of many similar works are proportional to $n^2$. We do not use complicated encryption protocols, which makes our system much faster than other proposed systems.
• General Multivariate Polynomial Evaluation: We generalize the privacy-preserving data aggregation to secure multivariate polynomial evaluation whose inputs are jointly provided by multiple parties.
That is, our scheme enables multiple parties to securely compute
$$f(\{x_1, \cdots, x_n\}) = \sum_{k=1}^{m} c_k \Big( \prod_{i=1}^{n} x_i^{d_{i,k}} \Big)$$
where the data $x_i$ is privately known by user $i$. Note that our general format of data aggregation can be directly used to express various statistical values. For example, $\sum_{i=1}^{n} x_i$ can easily be obtained while preserving privacy, and thus the mean $\mu = \sum_{i=1}^{n} x_i / n$ can be computed with privacy preserved. Given the mean $\mu$, the quantity $n\mu^2 + \sum_{i=1}^{n} (x_i^2 - 2 x_i \mu)$ can be obtained from the polynomial, and this divided by $n$ is the population variance. Similarly, other statistical values are also achievable (e.g., sample skewness, $k$-th moment, mean square weighted deviation, regression, and randomness tests) based on our general multivariate polynomial.
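As a quick numeric illustration (a toy check, not part of the protocol itself), the variance expression above agrees with the textbook definition:

    # Toy check: the paper's variance form matches the direct definition.
    x = [2.0, 3.0, 4.0, 2.0, 3.0]
    n = len(x)
    mu = sum(x) / n                                   # from the sum aggregate
    var_direct = sum((v - mu) ** 2 for v in x) / n    # textbook definition
    var_poly = (n * mu ** 2 + sum(v * v - 2 * v * mu for v in x)) / n
    print(mu, var_direct, var_poly)                   # the two variances agree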
Although our methods are proposed for computing the value of a multivariate polynomial function where the input of each participant is assumed to be an integer, our methods can be generalized to functions (such as dot product) where the input of each participant is a vector.

The rest of the paper is organized as follows. We present the system model and necessary background in Section III. In Section IV, we analyze the needed number of communications with secure communication channels when users communicate randomly. We first address privacy-preserving summation and production in Section V by presenting two efficient protocols. Based on these protocols, we then present an efficient protocol for general multivariate polynomial evaluation in Section VI. In Section VII, we present a detailed analysis of the correctness, complexity, and security of our protocols. Performance evaluation of our protocols is reported in Section VIII, where we compare our protocol with the ones based on SMC or HE. We then conclude the paper with a discussion of some future work in Section IX.

II. RELATED WORK

Many novel protocols have been proposed for privacy-preserving data aggregation or, in general, secure multi-party computation. Castelluccia et al. [5] presented a provably secure and efficient aggregation of encrypted data in WSN, which is extended from [6]. They designed a symmetric-key homomorphic encryption scheme which is additively homomorphic, to conduct the aggregation operations on the ciphertexts. Their scheme uses modular addition, so it is good for CPU-bounded devices such as sensor nodes in WSN. Their scheme can also efficiently compute various statistical values such as mean, variance and deviation. However, since they used symmetric homomorphic encryption, their aggregator could decrypt each individual sensor's data, and they assumed a trusted aggregator in their model.

Sheikh et al. [19] proposed a k-secure sum protocol, which is motivated by the work of Clifton et al. [7]. They significantly reduced the probability of data leakage in [7] by segmenting the data block of an individual party and distributing the segments to other parties. Here, the sum of each party's segments is his data, therefore the final sum of all segments is the sum of all parties' data. This scheme can be easily converted to a k-secure product protocol by converting each addition to multiplication. Similar to our protocol, one can combine their sum protocol and the converted product protocol to achieve a privacy-preserving multivariate polynomial evaluation protocol. However, pair-wise unique secure communication channels must be given between each pair of users such that only the receiver and the sender know the transmitted segment. Otherwise, each party's secret data can be calculated by performing $O(k)$ computations. In this paper, we remove the limitation of using secure communication channels.

The work of He et al. [14] is similar to Sheikh et al.'s work. They proposed two privacy-preserving data aggregation schemes for wireless sensor networks: the Cluster-Based Private Data Aggregation (CPDA) and the Slice-Mix-AggRegaTe (SMART). In CPDA, sensor nodes form clusters randomly and collectively compute the aggregate result within each cluster. In the improved SMART, each node segments its data into $n$ slices and distributes $n-1$ slices to the nearest nodes via a secure channel. However, they only support additions, and since each data item is segmented, the communication overhead per node is linear in the number of slices $n$.

Shi et al. [20] proposed a construction in which $n$ participants periodically upload encrypted values to an aggregator, and the aggregator computes the sum of those values without learning anything else. This scheme is close to our solution to the multivariate polynomial evaluation problem, but they assumed a trusted key dealer in their model. The key dealer distributes a random key $k_i$ to participant $i$ and a key $k_0$ to the aggregator, where $\prod_{i=0}^{n} k_i = 1$, and the ciphertext is of the form $C_i = k_i \cdot g^{x_i}$. Here, $g$ is a generator, $k_i$ is a participant's key and $x_i$ is his data (for $i = 1, 2, \cdots, n$). Then, the aggregator can recover the sum $\sum_{i=1}^{n} x_i$ iff he received ciphertexts from all of the participants. He computes $k_0 \prod_{i=1}^{n} C_i$ to get $g^{\sum_{i=1}^{n} x_i}$, and uses brute-force search to find $\sum_{i=1}^{n} x_i$ or uses Pollard's lambda method [18] to calculate it. This kind of brute-force decryption limits the space of the plaintext due to the hardness of the discrete logarithm problem; otherwise no deterministic algorithm can decrypt their ciphertext in polynomial time. The security of their scheme relies on the security of the keys $k_i$.

In our scheme, the trusted aggregator of [5][6] is removed, since data privacy against the aggregator is also a top concern these days. Unlike [14][19], we assume insecure channels, which enables us to get rid of expensive and vulnerable key pre-distribution. We do not segment each individual's data; our protocols only incur constant communication overhead for each participant. Our scheme is also based on the hardness of the discrete logarithm problem like [20], but we do not trivially employ a brute-force manner in decryption; instead, we employ our novel efficient protocols for sum and product calculation.

III. SYSTEM MODELS AND PRELIMINARY

A. System Model and Problem Definition

Assume that there are $n$ participants $\{p_1, p_2, \cdots, p_n\}$, and each participant $p_i$ has a privately known data $x_i$ from a group $G_1$. The privacy-preserving data aggregation problem (or secure multivariate polynomial evaluation problem) is to compute some multivariate polynomial of the $x_i$ jointly or by an aggregator while preserving the data privacy. Assume that there is a group of $m$ powers $\{d_{i,k} \in Z_q \mid k = 1, 2, \cdots, m\}$ for each $p_i$ and $m$ coefficients $\{c_k \mid k = 1, \cdots, m,\ c_k \in G_1\}$. The objective of the aggregator or the participants is to compute the following polynomial without knowing any individual $x_i$:

$$f(\mathbf{x}) = \sum_{k=1}^{m} \Big( c_k \prod_{i=1}^{n} x_i^{d_{i,k}} \Big) \qquad (1)$$

Here the vector $\mathbf{x} = (x_1, x_2, \cdots, x_n)$. For simplicity, we assume that the final result $f(\mathbf{x})$ is positive and bounded from above by a large prime number $P$. We assume all of the powers $d_{i,k}$ and coefficients $c_k$ are open to any participant as well as to attackers.
This is a natural assumption since the powers and coefficients uniquely determine a multivariate polynomial, and the polynomial is supposed to be public.

We employ two different models in this paper: the One Aggregator Model and the Participants Only Model. These two models are general cases we are faced with in real applications.

One Aggregator Model: In the first model, we have one aggregator $A$ who wants to compute the function $f(\mathbf{x})$. We assume the aggregator is untrusted and curious. That is, he always eavesdrops on the communications between participants and tries to harvest their input data. We also assume participants do not trust each other and that they are curious as well; however, they will follow the protocol in general. We could also consider having multiple aggregators, but this is a simple extension which can be trivially achieved from our first model. We call this model the One Aggregator Model. Note that in this model, no single participant $p_i$ is allowed to compute the final result $f(\mathbf{x})$.

Participants Only Model: The second model is similar to the first one except that there are $n$ participants only and there is no aggregator. In this model, all the participants are equal and they all will calculate the final aggregation result $f(\mathbf{x})$. We call this model the Participants Only Model.

In both models, participants are assumed not to collude with each other. Relaxing this assumption is one of our future works.

B. Additional Assumptions

We assume that all the communication channels in our protocol are insecure. Anyone can eavesdrop on them to intercept the data being transferred. To address the challenges of the insecure communication channel, we assume that the discrete logarithm problem is computationally hard if: 1) the orders of the integer groups are large prime numbers; 2) the involved integer numbers are large numbers. The security of our scheme relies on this assumption. We further assume that there is a secure pseudorandom function (PRF) which can choose a random element from a group such that this element is computationally indistinguishable from uniform random. We also assume that user authentication is in place to authenticate each participant if needed. We note that Dong et al. [9] investigated verifiable privacy-preserving dot products of two vectors and Zhang et al. [24] proposed verifiable multi-party computation, both of which can be partially or fully exploited later. Designing privacy-preserving data aggregation while providing verification of the correctness of the provided data is future work.

C. Discrete Logarithm Problem

Let $G \subset Z_p$ be a cyclic multiplicative integer group, where $p$ is a large prime number, and let $g$ be a generator of it. Then, for all $h \in G$, $h$ can be written as $h = g^k$ for some integer $k$, and all arithmetic is taken modulo $p$. The discrete logarithm problem is defined as follows: given an element $h \in G$, find the integer $k$ such that $g^k = h$.

The famous Decision Diffie-Hellman (DDH) problem proposed by Diffie and Hellman in [8] is derived from this assumption. The DDH problem is widely exploited in the field of cryptography (e.g., El Gamal encryption [10] and other cryptographic security protocols such as CP-ABE [3]), as discussed in [4]. Our protocol is based on the assumption that it is computationally expensive to solve the discrete logarithm problem, as in other similar research works ([15], [17], [24]).
IV. ACHIEVING SUM UNDER SECURED COMMUNICATION CHANNEL

Before introducing our aggregation scheme without secure communication channels, we first describe the basic idea of randomized secure sum calculation under secured communication channels (it can be trivially converted to secure product calculation). The basic idea came from Clifton et al. [7], and is also reviewed in [21], but we found their setting imposed unnecessary communication overhead, and we reduced it while maintaining the same security level.

Assume participants $p_1, p_2, \cdots, p_n$ are arranged in a ring for computation purposes. Each participant $p_i$ breaks its privately owned data block $x_i$ into $k$ segments $s_{i,j}$ such that the sum of all $k$ segments is equal to the value of the data block. The value of each segment is randomly decided. For the sum, we can simply assign random values to the segments $s_{i,j}$ ($1 \le j \le k-1$) and let $s_{i,k} = x_i - \sum_{j=1}^{k-1} s_{i,j}$. A similar method can be used for the product. In this scheme, each participant randomly selects $k-1$ participants and transmits to each of them a distinct segment $s_{i,j}$. Thus, at the end of this redistribution, each participant holds several segments, of which one belongs to itself and the rest belong to other participants. The receiving participant adds all its received segments and transmits the result to the next participant in the ring. This process is repeated until all the segments of all the participants are added and the sum is announced by the aggregator.

Recall that there are $n$ participants and each participant randomly selects $k-1$ participants to distribute its segments. Clearly, a larger $k$ provides better computation privacy; however, it also causes larger communication overhead, which is not desirable. In the rest of this section, we are interested in finding an appropriate $k$ in order to reduce the communication cost while preserving computation privacy. In particular, we aim at selecting the smallest $k$ that ensures each participant holds at least one segment from the other participants after redistribution. We can view this problem as placing identical and indistinguishable balls into $n$ distinguishable (numbered) bins. This problem has been extensively studied and is well understood, and the following lemma can be proved by a simple union bound:

Lemma IV.1. Let $\epsilon \in (0,1)$ be a constant. If we randomly place $(1+\epsilon)\, n \ln n$ balls into $n$ bins, then with probability at least $1 - \frac{1}{n^{\epsilon}}$, all the $n$ bins are filled.

Assume that each participant randomly selects $k-1$ participants (including itself) for redistribution. By treating each round of redistribution as one trial in the coupon collector's problem, we are able to prove that each participant only needs to redistribute $((1+\epsilon) n \ln n)/n = (1+\epsilon) \ln n$ segments to other participants to ensure that every participant receives at least one segment with high probability. However, different from the previous assumption, in our scheme each participant selects $k-1$ participants other than itself to redistribute its segments. Therefore, we need one more round of redistribution per participant to ensure that every participant receives at least one copy from other participants with high probability.

Theorem IV.2. Let $\epsilon \in (0,1)$ be a constant. If each participant randomly selects $(1+\epsilon)\ln n + 1$ participants to redistribute its segments, then with probability at least $1 - \frac{1}{n^{\epsilon}}$, each participant receives at least one segment from the other participants.

This theorem reveals that by setting $k$ to the order of $\ln n$, we are able to preserve the computation privacy. Compared with the traditional secure sum protocol, our scheme dramatically reduces the communication complexity.
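As an illustration, the coverage bound is easy to check empirically. The following Python sketch (an illustrative toy, assuming uniformly random peer selection; not part of the measured implementation) estimates how often every participant receives at least one foreign segment:

    import math
    import random

    # Toy check of Lemma IV.1 / Theorem IV.2: each of n participants sends
    # (1 + eps) * ln(n) + 1 segments to uniformly random peers (not itself).
    def all_covered(n, eps=0.5):
        received = [0] * n
        k = int((1 + eps) * math.log(n)) + 1   # segments sent per participant
        for sender in range(n):
            for _ in range(k):
                r = random.randrange(n - 1)    # uniform peer other than sender
                receiver = r if r < sender else r + 1
                received[receiver] += 1
        return all(c > 0 for c in received)

    n, trials = 1000, 200
    hits = sum(all_covered(n) for _ in range(trials))
    print(f"coverage rate: {hits / trials:.3f}")   # close to 1, as predicted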
However, we assumed that the communication channels among participants are secure in the above scheme. In the rest of this paper, we try to tackle the secure aggregation problem under unsecured channels.

V. EFFICIENT PROTOCOLS FOR SUM AND PRODUCT

In this section, we present two novel calculation protocols for each model which preserve individuals' data privacy. These four protocols will serve as the basis of our solution to the privacy-preserving data aggregation problem. For simplicity, we assume all coefficients $c_k$ ($k \in [1,m]$) and powers $d_{i,k}$ ($i \in [1,n]$, $k \in [1,m]$) of the polynomial $f(\mathbf{x}) = \sum_{k=1}^{m} c_k (\prod_{i=1}^{n} x_i^{d_{i,k}})$ are known to every participant $p_i$. Table I summarizes the main notations used in this paper.

TABLE I
NOTATIONS OF SYMBOLS USED IN OUR PROTOCOLS

  $p_i$                  $i$-th participant in data aggregation
  $A$                    Aggregator
  $G_1, G_2$             multiplicative cyclic integer groups
  $g_1, g_2$             generators of the above groups
  $d_{i,k}$              power of $x_i$ in the $k$-th term $x_i^{d_{i,k}}$
  $c_k$                  coefficient of the $k$-th term
  $r_i, \hat{r}_i$       randomly chosen numbers

A. Product Protocol - Participants Only Model

Firstly, we assume that all participants together want to compute the value $f(\mathbf{x}) = \prod_i x_i$ given their privately known values $x_i \in Z_p$. The basic idea of our protocol is to find random integers $R_i \in Z_p$ such that $\prod_i R_i = 1 \bmod p$, where the user $p_i$ can compute the random number $R_i$ easily while it is computationally expensive for other participants to compute the value $R_i$. Let $G_1 \subset Z_p$ be a cyclic multiplicative group of prime order $q$ and let $g_1$ be its generator. Then our protocol for the privacy-preserving production $\prod_i x_i$ has the following steps: Setup, Encrypt, Product.

Setup → $r_i \in Z_q$, $R_i = (g_1^{r_{i+1}} / g_1^{r_{i-1}})^{r_i} \in G_1$:
We assume all participants are arranged in a ring for computation purposes. The ring can be formed according to the lexicographical order of the MAC addresses or even the geographical location; it is out of our scope to consider this problem. Each $p_i$ ($i \in \{1, \cdots, n\}$) randomly chooses a secret integer $r_i \in Z_q$ using the PRF and calculates a public parameter $g_1^{r_i} \in G_1$. Then, each $p_i$ shares $Y_i = g_1^{r_i} \bmod p$ with $p_{i-1}$ and $p_{i+1}$ (here $p_{n+1} = p_1$ and $p_0 = p_n$). After a round of exchanges, participant $p_i$ computes the number
$$R_i = (Y_{i+1}/Y_{i-1})^{r_i} = (g_1^{r_{i+1}} / g_1^{r_{i-1}})^{r_i} \bmod p$$
and keeps this number $R_i$ secret. Note that $p_1$ calculates $(g_1^{r_2}/g_1^{r_n})^{r_1}$ and $p_n$ calculates $(g_1^{r_1}/g_1^{r_{n-1}})^{r_n}$.
Communications in Encrypt Product({C1, C2, · · · , Cn}) → ∏n i=1 xi ∈ G1 Any pi, after receiving n ciphertexts {C1, C2, · · · , Cn} from all of the pi’s, calculates the following product: n ∏ i=1 Ci = n ∏ i=1 xi mod p To make sure that we can get a correct result ∏n i=1 xi without modular, we can choose p to be large enough, say p ≥ Mn, where M is a known upper bound on xi. B. Product Protocol - One Aggregator Model We use the same group used in Participants Only Model. Everything is same as the protocol above, except that the aggregator A acts as the (n + 1)-th participant pn+1. In other words, there are n + 1 “participants” now. The second difference is that, each participant pi will send the ciphertext Ci to the aggregator, instead of broadcasting to all participants. The aggregator A will not announce its random number Rn+1 = (g r1 1 /g rn 1 ) rn+1 to any regular participants. Each participant pi, i ∈ [1, n], sends the ciphertext Ci = Ri · xi to the aggregator A. The aggregator A then calculates (gr11 /g rn 1 ) rn+1 n ∏ i=1 xi = n ∏ i=1 xi to achieve the final product, where rn+1 is the random number generated by A. C. Sum Protocol - Participants Only Model Here we assume that all participants together want to compute the value f(x) = ∑n i=1 xi given their privately known values xi ∈ Zp. It seems that we can still exploit the method used for computing product by finding random numbers Ri such that ∑n i=1 Ri = 0. We found that it is challenging to find such a number Ri while preserve privacy and security. The basic idea of our protocol is to convert the sum of numbers into production of numbers. Previous solution [20] essentially applied this approach also by computing the product of ∏n i=1 g xi = g ∑ n i=1 xi . Then find ∑n i=1 xi by computing the discrete logarithm of the product. As discrete logarithm is computational expensive, we will not adopt this method. Instead, we propose a computational efficient method here. In a nutshell, we exploit the modular property below to achieve the privacy preserving sum protocol. (1 + p)m = m ∑ i=0 ( m i ) pi = 1 +mp mod p2 (2) From the Equation (2), we conclude that n ∏ i=1 (1 + p)xi = n ∏ i=1 (1 + p · xi) = (1 + p ∑ i xi) mod p 2. Our protocol works as follows. Let G2 ⊂ Zp2 be a cyclic multiplicative group of order p(p− 1) and g2 be its generator, where p is a prime number. Then our protocol for privacy preserving summation Πixi has the following steps: Setup, Encrypt, Sum. Setup → ri ∈ Zpq , Ri = (g ri+1 2 /g ri−1 2 ) ri Remember that participants are arranged in a circle. pi uses PRF to randomly pick a secret number ri ∈ Zpq , and calculates a public parameter gri2 . Then, he shares g ri 2 with pi+1 and pi−1. Similar to the product calculation protocol, pn shares his public parameter with his p(n−1) and p1, and p1 shares his public parameter with p2 and pn. After a round of exchanges, each pi calculates Ri = (g ri+1 2 /g ri−1 2 ) ri and keeps this secret. Encrypt(xi, Ri) → Ci ∈ G2 This algorithm crosses over two different integer groups: G1 and G2. Each pi first calculates (1+xi ·p) mod p 2. Note that xi ∈ G1, and it is temporarily treated as an element in G2, but this does not affect the last value of the result since operations in G2 are modulo p 2. Then, he multiplies the secret parameter Ri = (g ri+1 2 /g ri−1 2 ) ri to it to get the ciphertext: Ci = (1 + xi · p) · Ri 6 After all, each participant broadcasts his ciphertext to each others. Sum({C1, C2, · · · , Cn}) → ∑n k=1 xi ∈ G1. 
Each participant, after receiving the ciphertexts from all of other participants, calculates the following C ∈ G2: C = n ∏ i=1 Ci = (1 + p n ∑ i=1 xi) mod p 2 Then, he calculates (C − 1)/p mod p = ∑n i=1 xi mod p to recover the final sum. D. Sum Protocol - One Aggregator Model Similar to the product protocol for One Aggregator Model, everything is the same except that A acts as (n+1)-th partic- ipant in this model. The participants send their ciphertexts to A, and A calculates C = (gr12 /g rn 2 ) rn+1 n ∏ i=1 Ci = (1 + p n ∑ i=1 xi) mod p 2 Then, he can compute the final sum result ∑n i=1 xi. VI. EFFICIENT PROTOCOLS FOR GENERAL MULTIVARIATE POLYNOMIAL Now we are ready to present our efficient privacy preserving protocols for evaluating a multivariate polynomials. Our proto- col is based on the efficient protocols for sum and production presented in the previous section. A. One Aggregator Model The calculation of the polynomial 1 can be divided into nm multiplications and m additions. In this section we show how to conduct a joint calculation of m products and one sum while preserving individual’s data privacy in the One Aggregator Model. Different from the protocols in the Section V, those broadcast ciphertexts are not broadcast this time, they are sent to the aggregator instead. The purpose of this small change is only for reducing communication complexity, and from the security perspective, this is just same as broadcasting since our communication channels are insecure. 1) Basic Scheme: All the participants execute Setup to initiate the system. Then, for each k, all the participants need to calculate x di,k i ’s first, where di,k’s are powers specified by the aggregator A, and run the aforementioned product protocol for each k ∈ [1,m]. If A does not need the data from some participant pi, A can set his powers to be 0, and if pi does not want to participate in the aggregation, he can simply set his input as 1. Then, the aggregator is able to calculate ∑m k=1 (ck ∏n i=1 x di,k i ). 2) Advanced Scheme: The above Basic Scheme pre- serves data privacy in our problem as long as there are at least two x di,k i ’s not equal 1 in each following set {x d1,k 1 , x d2,k 2 · · · , x dn,k n }k∈{1,··· ,m}, which will be further dis- cussed in the Section VII-B1. Therefore, we exploit the aforementioned sum protocol to achieve Secure Scheme. All the participants execute Setup. Then, when executing the Encrypt of the product protocol, each participant checks whether his input is the only one not equal to 1 for each product ∏n i=1 x di,l i (i.e., his di,l is the only one not equal to 0 in {d1,l, d2,l, · · · , dn,l}). If it is, the product equals to his input data, which will directly disclose his data, so he skips it. The elements that are omitted form a set Dsum = {x di,k i }k∈Isum , where Isum is the set of indices k’s corresponding to the elements in Dsum. For each x di,k i ∈ Dsum, find his owner pi and add him into the set Psum. There can be duplicate pi’s in the set Psum. The pi’s in Psum need to calculate the following without knowing each other’s input: ∑ pi∈Psum ckx di,k i They are called sum participants, and we assume they are ordered by non-decreasing order of their indices in Psum and arranged in a circle. In what follows, we denote pi’s successor and predecessor in the Psum as pi,suc and pi,pre respectively. These sum participants run the sum protocol to encrypt their data and sends to the aggregator A. A, after receiving all the sum ciphertexts, is able to calculate ∑ k∈Isum ckx di,k i . 
Then, he is able to calculate ∑m k=1 (ck ∏n i=1 x di,k i ). B. Participants Only Model From the One Aggregator Model, we know the combination of two protocols (product protocol and second sum protocol) proposed in Section V gives the best scheme. Therefore we only show the scheme which employs both product and sum protocols. Advanced Scheme: Every participant executes Setup, and when he executes the Encrypt of the product protocol, he conducts the same examination as in the Section VI-A2 above. Then, the sum participants run the sum protocol to share their sum with each other. Finally, all participants are able to calculate ∑m k=1 (ck ∏n i=1 x di,k i ) based on others’ ciphertexts. VII. CORRECTNESS, COMPLEXITY AND SECURITY ANALYSIS Here we provide rigorous correctness proofs, complexity and security analysis of the protocols presented in this paper. We also discuss when our protocols could leak information about the privately known data xi and provide methods to address this when possible. A. Correctness Next we show the correctness of the product protocol in Section V. 1) Product Protocol: We only provide the analysis for Par- ticipants Only model, but the correctness in One Aggregator model is easily derivable from it. After participants receive {C1, · · · , Cn} they conduct the following calculation: 7 n ∏ i=1 Ci = n ∏ i=1 (xi(g ri+1 1 /g ri−1 1 ) ri) = ( n ∏ i=1 xi) n ∏ i=1 ((g ri+1 1 /g ri−1 1 ) ri) = ( n ∏ i=1 xi)g ∑ n i=1 (ri+1ri−riri−1) 1 = n ∏ i=1 xi Here rn+1 = r1, r0 = rn. Thus, the products are correctly calculated. 2) Sum Protocol: Similar to above, we only discuss the correctness for Participants Only Model. After participants receive {C1, · · · , Cn}, they conduct the following calculation: C = n ∏ i=1 Ci = n ∏ i=1 (1 + xip)(g ri+1 2 /g ri−1 2 ) ri = (1 + p n ∑ i=1 xi)g ∑ n i=1 ri+1ri−riri−1 2 = (1 + p n ∑ i=1 xi) mod p 2 Thus, (C − 1)/p mod p is indeed equal to ∑n i=1 xi mod p. B. Security We discuss the security of the schemes in both One Aggre- gator Model and Participants Only Model in this section. 1) Special Case of Products Calculation: As mentioned in the Section VI-A2, if there is only one ciphertext di,k is not equal to 0 in any set {d1,k, d2,k · · · , dn,k}k∈{1,··· ,m} during the products calculation, the individual data xi can be disclosed to others. This is because: (suppose that only di,k is the only ciphertext not equal to 1 in the set {d1,k, d2,k · · · , dn,k}) Decrypt({C1,k, C2,k · · · , Cn,k}) = xi and xi is disclosed to others if ck 6= 0. Therefore, in this case, the participants should conduct additional secure sum calculation before sending the ciphertexts to others. 2) Randomness and Group Selection: In fact, in the product calculation protocol, the group G1 should be carefully selected to make the input xi indistinguishable to a random element. We select a cyclic multiplicative group G1 ⊂ Zp of prime order q as follows. Find two large prime numbers p, q such that p = kq + 1 for some integer k. Then, find a generator h for Zp, and set g1 := h (p−1)/q modulo p (clearly g1 6= 1 modulo p). Then group G1 is generated by g1, whose order is q. Here the powers of the numbers in G1 belong to an integer group Zq . Next, we show that any input data xi is computationally indistinguishable to any random element chosen from Zp via PRF. For any i, we have Ci = xi(g ri+1 1 /g ri−1)ri = xig (ri+1−ri−1)ri 1 . Let xi be g χi 1 and ri+1 − ri−1 be γi, where χi ∈ Zq and γi ∈ Zq (This is possible since g1 is a generator of the group G1). Then, Ci = g χi 1 g γiri 1 . 
Theorem VII.1. ∀xi, ri ∈ Zq , ∃r̂i, χ̂i ∈ Zq such that g χi 1 g γiri 1 = g χ̂i 1 g γir̂i 1 mod p. Proof: For any ri, r̂i ∈ Zq , there exists χ̂i ∈ Zq such that: γi(ri − r̂i) = χ̂i − χi mod q because q and (ri − r̂i) are relatively prime (q is a prime number). Then we have χ̂i ∈ Zq for any ri ∈ Zq such that: g γi(ri−r̂i) 1 = g χ̂i−χi 1 mod p ⇒ g χi 1 g γiri 1 = g χ̂i 1 g γir̂i 1 mod p This implies that given the ciphertext Ci, any value xi is a possible valid data that can produce this ’ciphertext’ Ci. According to the Theorem VII.1, we can deduce that χi has the same level of randomness as ri. Therefore, g χi 1 is indistinguishable to a random element in G1 from other participants’ or attackers’ perspective, which implies Theorem VII.2. The input xi is computationally indistin- guishable to a random element chosen from G1. 3) Closure and Group Selection: We need to guarantee that all the multiplications in the sum protocol are closed in G2. Since (1+ xip) · (g r+i 2 /g r−1 2 ) ri is the only multiplication throughout the sum protocol, we must carefully choose the group G2 such that 1+xip ∈ G2. We let G2 ⊂ Zp2 be a cyclic multiplicative group generated by h, which is the generator of Zp. Then, the order of G2 is p(p − 1), and the powers of the numbers in G2 belong to an integer group Zp(p−1). Since G2 = Zp2 − {x|x = k · p, for some integer k} and ∀k : 1 + xip 6= kp, 1 + xip belongs to the group G2. 4) Restriction of the Product and Sum Protocol: In both protocols, we require that number of participants is at least 3 in Participants Only Model and at least 2 in One Aggregator Model. In Participants Only Model, if there are only 2 partic- ipants, privacy is not preservable since it is impossible to let p1 know x1 + x2 or x1x2 without knowing x2. However, in One Aggregator Model, since only the aggregator A knows the final result, as long as there are two participants, A is not able to infer any individual’s input data. C. Complexity We discuss the computation and communication complexity of the Advanced Scheme for each model in this section. 1) One Aggregator Model: It is easy to see that the computation complexities of Setup, Encrypt and Product of the product protocol are O(1), O(1) and O(n) respectively. Also, Encrypt is executed for m times by each participant and Product is executed for m times by the aggregator in the Advanced Scheme. Every participant and the aggregator exchanges gri’s with each adjacent neighbor in the ring, which incurs communi- cation of O(|p|) bits in Setup, where |p| represents the bit 8 length of p. In Encrypt, each participant sends m cipher- texts ck ∏n i=1 x di,k i ’s to the aggregator, so the communication overhead of Encrypt is O(m|p|) bits. Since n participants are sending the ciphertexts to the aggregator, the aggregator’s communication overhead is O(mn|p|). Similarly, the computation complexities of Setup, Encrypt and Sum in the sum protocol are O(1), O(1) and O(m) respectively, and they are executed for only once in the scheme. Hence, the communication overhead of Setup, En- crypt and Sum are O(|p2|) bits, O(|p2|) bits and O(m|p2|) bits respectively (|p2| is the big length of p2). Note that |p2| = 2|p|. 
Then, the total complexity of aggregator and participants are as follows: TABLE II ONE AGGREGATOR MODEL Aggregator Computation Communication (bits) Product (Product) O(mn) O(mn|p|) Sum (sum) O(m) O(m|p|) Per Participant Computation Communication (bits) Setup (Product) O(1) O(|p|) Encrypt (Product) O(m) O(m|p|) Setup (sum) O(1) O(|p|) Encrypt (sum) O(1) O(|p|) 2) Participants Only Model: In the Participants Only Model, participants broadcast ciphertexts to others, and cal- culates the products and sums themselves, therefore the com- plexities are shown as below: TABLE III PARTICIPANTS ONLY MODEL Per Participant Computation Communication (bits) Setup (Product) O(1) O(|p|) Encrypt (Product) O(m) O(mn|p|) Product (Product) O(mn) O(mn|p|) Setup (sum) O(1) O(|p|) Encrypt (sum) O(1) O(m|p|) Sum (sum) O(m) O(m|p|) Note that the communication overhead is balanced in the Participants Only Model, but the system-wide communication overhead is increased a lot. In the One Aggregator Model, the system-wide communication overhead is: O(mn|p|) +O(m|p|) + n · O(|p|) = O(mn|p|) (bits) However, in the Participants Only Model, the system-wide communication complexity is: n ·O(|p|)+n ·O(m|p|)+n ·O(mn|p|) = O(mn2|p|) (bits) VIII. PERFORMANCE EVALUATION BY IMPLEMENTATION We conduct extensive evaluations of our protocols. Our simulation result shows that the computation complexity of our protocol is indeed linear to the number of participants. To simulate and measure the computation overhead, we used GMP library to implement large number operations in our protocol in a computer with Intel i7-2620M @ 2.70GHz CPU and 2GB of RAM, and each result is the average time measured in the 100,000 times of executions. Also, the input data xi is of 20-bit length, the q is of 256-bit length, and p is roughly of 270-bit length. That is, xi is a number from [0, 220 − 1] and q is a uniform random number chosen from [0, 2256 − 1]. In this simulation, we measured the total overhead of our novel product protocol and sum protocol (the second sum protocol) proposed in the Section V). Here, we measured the total computation time spent in calculating the final result of n data (including encryption by n participants and the decryption by the aggregator). Since we only measure the computation overhead, there is no difference between One Aggregator Model and Participants Only Model. 0 20 40 60 80 100 0 10 20 30 40 50 60 70 80 tim e (m ic ro se co nd s) # of participants Our Product 0 20 40 60 80 100 0 50 100 150 200 250 300 350 400 tim e (m ic ro se co nd s) # of participants Our Summation (a) product (b) sum Fig. 3. Running time for product and sum calculation. First of all, the computation overhead of each protocol is indeed proportional to the number of participants. Also, the sum protocol needs much more time. This is natural because parameters in the sum protocol are in Zp2 , which are twice of the parameters in the product protocol in big length (they are in Zp). Multivariate polynomial evaluation is composed of m prod- ucts and one sum, so its computation overhead is barely the combination of the above two protocols’ overhead. We further compare the performance of our protocol with other existing multi party computation system implemented by Ben et al. [2] (FairplayMP). They implemented the BMR protocol [1], which requires constant number of communica- tion rounds regardless of the function being computed. 
Their system provides a platform for general secure multi-party computation (SMC), where one can program their secure com- putation with Secure Function Definition Language (SFDL). The programs wrote in SFDL enable multiple parties to jointly evaluate an arbitrary sized boolean circuit. This boolean circuit is same as the garbled circuit proposed by Yao’s 2 Party Computation (2PC) [22][23]. In Ben’s setting, where they used a grid of computers, each with two Intel Xeon 3GHz CPU and 4GB of RAM, they achieved the computation time as following tables when they have 5 participants: TABLE IV RUN TIME (MILLISECONDS) OF FAIRPLAYMP[2] Gates 32 64 128 256 512 1024 Per Participant 64 130 234 440 770 1394 One addition of two k-bit numbers can be expressed with 9 k + 1 XOR gates and k AND gates. Therefore, if we set the length of input data as 20 bits (which is approximately 1 million), we need 41 gates per addition in FairplayMP system. When we conduct 26 additions (which is equivalent to 1066 gates) in our system, the total computation time is 72.2 microseconds, which is 2 × 104 times faster than the FairplayMP, which needs 1.394 seconds to evaluate a boolean circuit of 1024 gates. Even if we did not consider the aggregator’s computation time in FairplayMP because they did not provide pure computation time (they provided the total run time including communication delay for the aggregator), our addition is already faster than their system. Obviously, the multiplication is much faster since it is roughly 8 times faster than the addition in our system. We also compare our system with an efficient homomorphic encryption implementation [16]. Lauter et al. proposed an efficient homomorphic encryption scheme which limits the total number of multiplications to a small number less than 100. If only one multiplication is allowed in their scheme (the fastest setting) and length of the modulus q is 1024, it takes 1 millisecond to conduct an addition and 41 milliseconds to conduct a multiplication. In our system, under the same condition, it takes 16.2 microseconds to conduct an addition and 0.7 microseconds to conduct a multiplication, which are approximately 100 times and 6×104 times faster respectively. They implemented the system in a computer with two Intel 2.1GHz CPU and 2GB of RAM. Even if considering our computer has a higher clock CPU, their scheme is still much slower than ours. TABLE V COMPARISON BETWEEN [16] AND OUR SYSTEM Addition Multiplication Lauter [16] 1 millisecond 41 milliseconds Ours 16.2 microseconds 0.7 microseconds The purpose of above two systems are quite different from ours, the first FairplayMP is for general multi-party computation and the second homomorphic encryption system is for general homomorphic encryption. They also provide a much higher level of security than ours since they achieve differential privacy, however, the comparison above does show the high speed of our system while our security level is still acceptable in real life applications, and this is one of the main contributions of this paper. IX. CONCLUSION In this paper, we successfully achieve a privacy-preserving multivariate polynomial evaluation without secure communi- cation channels by introducing our novel secure product and sum calculation protocol. We also show in the discussion that our proposed construction is efficient and secure enough to be applicable in real life. However, our scheme discloses each product part in the polynomial, which gives unnecessary information to attackers. 
We also compare our system with an efficient homomorphic encryption implementation [16]. Lauter et al. proposed an efficient homomorphic encryption scheme that limits the total number of multiplications to a small number, less than 100. If only one multiplication is allowed in their scheme (the fastest setting) and the modulus q is 1024 bits long, it takes 1 millisecond to conduct an addition and 41 milliseconds to conduct a multiplication. In our system, under the same conditions, it takes 16.2 microseconds to conduct an addition and 0.7 microseconds to conduct a multiplication, which are approximately 100 times and 6 × 10^4 times faster, respectively. They implemented their system on a computer with two Intel 2.1GHz CPUs and 2GB of RAM; even allowing for our higher-clocked CPU, their scheme is still much slower than ours.

TABLE V
COMPARISON BETWEEN [16] AND OUR SYSTEM

             Addition            Multiplication
Lauter [16]  1 millisecond       41 milliseconds
Ours         16.2 microseconds   0.7 microseconds

The purposes of the above two systems are quite different from ours: FairplayMP targets general multi-party computation, and the second system targets general homomorphic encryption. They also provide a much higher level of security than ours, since they achieve differential privacy. However, the comparison above does show the high speed of our system, while our security level remains acceptable for real-life applications; this is one of the main contributions of this paper.

IX. CONCLUSION

In this paper, we achieve privacy-preserving multivariate polynomial evaluation without secure communication channels by introducing novel secure product and sum calculation protocols. We also show in the discussion that our proposed construction is efficient and secure enough to be applicable in real life. However, our scheme discloses each product term in the polynomial, which gives unnecessary information to attackers. Therefore, our next research goal is to minimize the information leakage during computation and communication. Another direction for future work is to design privacy-preserving data-release protocols such that certain functions can be evaluated correctly while certain functional privacy is protected.

REFERENCES

[1] D. Beaver, S. Micali, and P. Rogaway, "The round complexity of secure protocols," in Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, 1990, pp. 503–513.
[2] A. Ben-David, N. Nisan, and B. Pinkas, "FairplayMP: a system for secure multi-party computation," in Proceedings of the 15th ACM Conference on Computer and Communications Security, 2008, pp. 257–266.
[3] J. Bethencourt, A. Sahai, and B. Waters, "Ciphertext-policy attribute-based encryption," in IEEE Symposium on Security and Privacy, 2007, pp. 321–334.
[4] D. Boneh, "The decision Diffie-Hellman problem," Algorithmic Number Theory, pp. 48–63, 1998.
[5] C. Castelluccia, A. Chan, E. Mykletun, and G. Tsudik, "Efficient and provably secure aggregation of encrypted data in wireless sensor networks," ACM Transactions on Sensor Networks (TOSN), vol. 5, no. 3, p. 20, 2009.
[6] C. Castelluccia, E. Mykletun, and G. Tsudik, "Efficient aggregation of encrypted data in wireless sensor networks," in The Second Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services, 2005, pp. 109–117.
[7] C. Clifton, M. Kantarcioglu, J. Vaidya, X. Lin, and M. Zhu, "Tools for privacy preserving distributed data mining," ACM SIGKDD Explorations Newsletter, vol. 4, no. 2, pp. 28–34, 2002.
[8] W. Diffie and M. Hellman, "New directions in cryptography," IEEE Transactions on Information Theory, vol. 22, no. 6, pp. 644–654, 1976.
[9] W. Dong, V. Dave, L. Qiu, and Y. Zhang, "Secure friend discovery in mobile social networks," in IEEE INFOCOM, 2011, pp. 1647–1655.
[10] T. ElGamal, "A public key cryptosystem and a signature scheme based on discrete logarithms," in Advances in Cryptology, Springer, 1985, pp. 10–18.
[11] C. Gentry, "Fully homomorphic encryption using ideal lattices," in Proceedings of the 41st Annual ACM Symposium on Theory of Computing, 2009, pp. 169–178.
[12] C. Gentry and S. Halevi, "Implementing Gentry's fully-homomorphic encryption scheme," Advances in Cryptology–EUROCRYPT 2011, pp. 129–148, 2011.
[13] O. Goldreich, "Secure multi-party computation," Manuscript, preliminary version, 1998.
[14] W. He, X. Liu, H. Nguyen, K. Nahrstedt, and T. Abdelzaher, "PDA: Privacy-preserving data aggregation in wireless sensor networks," in IEEE INFOCOM, 2007, pp. 2045–2053.
[15] T. Jung, X. Li, Z. Wan, and M. Wan, "Privacy preserving cloud data access with multi-authorities," in IEEE INFOCOM, 2013.
[16] K. Lauter, M. Naehrig, and V. Vaikuntanathan, "Can homomorphic encryption be practical?" Preprint, 2011.
[17] X. Li and T. Jung, "Search me if you can: privacy-preserving location query service," in IEEE INFOCOM, 2013.
[18] A. Menezes, P. Van Oorschot, and S. Vanstone, Handbook of Applied Cryptography, CRC, 1997.
[19] R. Sheikh, B. Kumar, and D. Mishra, "Privacy preserving k secure sum protocol," arXiv preprint arXiv:0912.0956, 2009.
[20] E. Shi, T. Chan, E. Rieffel, R. Chow, and D. Song, "Privacy-preserving aggregation of time-series data," in Proceedings of NDSS, vol. 17, 2011.
[21] V. Verykios, E. Bertino, I. Fovino, L. Provenza, Y. Saygin, and Y. Theodoridis, "State-of-the-art in privacy preserving data mining," ACM SIGMOD Record, vol. 33, no. 1, pp. 50–57, 2004.
[22] A. Yao, "Protocols for secure computations," in Proceedings of the 23rd Annual Symposium on Foundations of Computer Science, 1982, pp. 160–164.
[23] ——, "How to generate and exchange secrets," in 27th Annual Symposium on Foundations of Computer Science, 1986, pp. 162–167.
[24] L. Zhang, X. Li, Y. Liu, and T. Jung, "Verifiable private multi-party computation: ranging and ranking," in IEEE INFOCOM Mini-Conference, 2013.
1cybersec
arXiv
Django null and empty string. <p>I find that when I submit a string <code>form field</code> with nothing entered, Django stores it as an empty string <code>''</code>.</p> <p>However, when I submit an integer <code>form field</code> with nothing entered, it is stored as <code>[null]</code>.</p> <p>Is this good practice? Should the string be <code>null</code> in the db as well?</p>
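<p>To illustrate, here is a minimal model sketch (the model and field names are made up for the example) showing the behavior I mean:</p> <pre><code># models.py -- illustrative only
from django.db import models

class Thing(models.Model):
    # String field: the Django convention is blank=True, which stores ''
    # on empty input; the docs discourage null=True on string-based fields.
    name = models.CharField(max_length=100, blank=True)
    # Integer field: an empty form input cleans to None, so the column
    # needs null=True for it to be stored as NULL.
    count = models.IntegerField(blank=True, null=True)
</code></pre>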
0non-cybersec
Stackexchange