16.04 - How To Purge Intel Default Drivers & Reinstall Intel Graphic Drivers. <p>I posted <a href="https://askubuntu.com/questions/763617/graphic-issues-after-ubuntu-16-04-upgrade">this question</a> as I am encountering display problems after upgrading Ubuntu 15.10 to 16.04. I went into 'additional drivers' & noticed this:</p>
<p><a href="https://i.stack.imgur.com/IOWbX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IOWbX.png" alt="enter image description here"></a></p>
<p>I was wondering whether purging the current drivers & 'reinstalling' them would help detect the graphics hardware I have on my system. For info, my system is a Lenovo ThinkPad X220:
i5-2420M
6 GB RAM
onboard graphics</p>
<p>Any suggestions would be great help. Many thanks,</p>
| 0non-cybersec
| Stackexchange |
Google analytics - Find what time users entered the site from traffic source. <p>The title says it all: I'm trying to find what time each visit came from a particular traffic source, e.g. all the times that users visited from Google (Organic). Can I do this in GA?</p>
| 0non-cybersec
| Stackexchange |
With great power comes great responsibility. | 0non-cybersec
| Reddit |
How to prove that there is a constant $C$ such that $\arcsin \frac{1-x}{1+x}+2\arctan\sqrt{x}=C$?. <p>How to prove that there is a constant $C$ such that $\arcsin \frac{1-x}{1+x}+2\arctan\sqrt{x}=C$?</p>
<p>I have no idea using which theorem to prove. Could someone show me how to start the problem?</p>
| 0non-cybersec
| Stackexchange |
My friend found this in his garden [NSFW]. | 0non-cybersec
| Reddit |
Foster kitten stole the remains of the treat my dog was chomping on. | 0non-cybersec
| Reddit |
How to use spot instances with Amazon Elastic Beanstalk?. <p>I have an infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.</p>
<p>So I created a second autoscaling group from a launch configuration with spot instances.
The autoscaling group uses the same load balancer created by Beanstalk.</p>
<p>To bring up instances with the latest version of my app, I copied the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This works fine, but:</p>
<ol>
<li><p>how do I update the spot instances that come up from the second autoscaling group when Beanstalk rolls out a new version of the app to the instances it manages?</p>
</li>
<li><p>is there another way, as easy and elegant as this one, to use spot instances and enjoy the benefits of Beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk has supported spot instances since 2019; see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
| 0non-cybersec
| Stackexchange |
Oracle: How can I get a value 'TRUE' or 'FALSE' comparing two NUMBERS in a query?. <p>I want to compare two numbers. Let's take i.e. 1 and 2.</p>
<p>I've tried to write the following query but it simply doesn't work as expected (Toad says: ORA-00923: FROM keyword not found where expected):</p>
<pre><code>SELECT 1 > 2 from dual
</code></pre>
<p>DECODE is something like a switch-case, so how can I get the result of an expression evaluation (i.e. a number comparison) by putting it in the select list?</p>
<p>I have found a solution using a function instead of an expression in the SELECT list, i.e.</p>
<pre><code>select DECODE(SIGN(actual - target)
, -1, 'NO Bonus for you'
, 0,'Just made it'
, 1, 'Congrats, you are a winner')
from some_table
</code></pre>
<p>Is there a more elegant way? </p>
<p>Also how do I compare two dates? </p>
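A searched CASE expression is the usual, more readable alternative to nesting DECODE and SIGN, and it works for dates as well as numbers. The sketch below runs the query through Python's sqlite3 purely to illustrate the shape; the same CASE syntax is valid in Oracle, where a bare boolean such as `1 > 2` in the select list is exactly what triggers ORA-00923.

```python
import sqlite3

# Illustration via SQLite: its searched CASE syntax matches Oracle's.
# Wrapping the comparison in CASE avoids the ORA-00923 a bare boolean raises.
conn = sqlite3.connect(":memory:")
row = conn.execute(
    """
    SELECT CASE
             WHEN 1 > 2 THEN 'TRUE'
             ELSE 'FALSE'
           END
    """
).fetchone()
print(row[0])  # FALSE
```

In Oracle the same pattern applies to dates directly, e.g. `CASE WHEN some_date > DATE '2020-01-01' THEN ... END` (column names here are placeholders).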
| 0non-cybersec
| Stackexchange |
How do set theory and formal logic fit together?. <p>I'm at that stage in my mathematical understanding where I kind of understand what set theory is and what first-order logic is, but I don't really understand how they fit together to create mathematics. I assume that the ZF system uses first-order logic to create the foundations of mathematics and that, in the grand scheme of things, set theory is dependent on logic for its existence, whereas logic or any formal system can exist on its own. Is this the correct view?</p>
| 0non-cybersec
| Stackexchange |
I'm pretty sure I should find this offensive.... | 0non-cybersec
| Reddit |
Reload choices dynamically when using MultipleChoiceFilter. <p>I am trying to construct a <code>MultipleChoiceFilter</code> where the choices are the set of possible dates that exist on a related model (<code>DatedResource</code>).</p>
<p>Here is what I am working with so far...</p>
<pre><code>resource_date = filters.MultipleChoiceFilter(
field_name='dated_resource__date',
choices=[
(d, d.strftime('%Y-%m-%d')) for d in
sorted(resource_models.DatedResource.objects.all().values_list('date', flat=True).distinct())
],
label="Resource Date"
)
</code></pre>
<p>When this is displayed in a html view...</p>
<p><a href="https://i.stack.imgur.com/5a0ro.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5a0ro.png" alt="enter image description here"></a></p>
<p>This works fine at first, however if I create new <code>DatedResource</code> objects with new distinct <code>date</code> values I need to re-launch my webserver in order for them to get picked up as a valid choice in this filter. I believe this is because the <code>choices</code> list is evaluated once when the webserver starts up, not every time my page loads.</p>
<p>Is there any way to get around this? Maybe through some creative use of a <code>ModelMultipleChoiceFilter</code>?</p>
<p>Thanks!</p>
<p><strong>Edit:</strong>
I tried some simple <code>ModelMultipleChoice</code> usage, but hit some issues.</p>
<pre><code>resource_date = filters.ModelMultipleChoiceFilter(
field_name='dated_resource__date',
queryset=resource_models.DatedResource.objects.all().values_list('date', flat=True).order_by('date').distinct(),
label="Resource Date"
)
</code></pre>
<p>The HTML form is showing up just fine, however the choices are not accepted values to the filter. I get <code>"2019-04-03" is not a valid value.</code> validation errors, I am assuming because this filter is expecting <code>datetime.date</code> objects. I thought about using the <code>coerce</code> parameter, however those are not accepted in <code>ModelMultipleChoice</code> filters.</p>
<p>Per dirkgroten's comment, I tried to use what was suggested in the <a href="https://stackoverflow.com/questions/26210217/how-to-use-modelmultiplechoicefilter">linked question</a>. This ends up being something like</p>
<pre><code>resource_date = filters.ModelMultipleChoiceFilter(
field_name='dated_resource__date',
to_field_name='date',
queryset=resource_models.DatedResource.objects.all(),
label="Resource Date"
)
</code></pre>
<p>This also isn't what I want, as the HTML form now a) displays the <code>str</code> representation of each <code>DatedResource</code> instead of the <code>DatedResource.date</code> field, and b) the entries are not unique (e.g. if I have two <code>DatedResource</code> objects with the same <code>date</code>, both of their <code>str</code> representations appear in the list). This also isn't sustainable because I have 200k+ <code>DatedResources</code>, and the page hangs when attempting to load them all (as compared to the <code>values_list</code> filter, which is able to pull all distinct dates out in seconds).</p>
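One way around the stale-choices problem, assuming your django-filter version forwards <code>choices</code> to Django's <code>ChoiceField</code> (which accepts a callable), is to pass a function instead of a precomputed list, so the queryset is re-evaluated on each form render. The pure-Python sketch below mimics that behavior with a list standing in for the database; verify the callable-choices detail against your django-filter version.

```python
# Why a callable fixes stale choices: a list is evaluated once at import
# time, while a function is re-evaluated on every call (every form render).
_dates_in_db = ["2019-04-01", "2019-04-02"]  # stand-in for DatedResource dates

def date_choices():
    # re-reads the "database" each time, like choices=date_choices would
    return [(d, d) for d in sorted(set(_dates_in_db))]

static_choices = date_choices()     # analogous to choices=[...] at import time
_dates_in_db.append("2019-04-03")   # a new DatedResource appears later

print(len(static_choices))   # 2 -> stale
print(len(date_choices()))   # 3 -> fresh
```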
| 0non-cybersec
| Stackexchange |
Wylie Dufresne's Turkey Hash. | 0non-cybersec
| Reddit |
Privacy-preserving parametric inference: a case for robust statistics
Marco Avella-Medina ∗
November 20, 2019 (First version: May 15, 2018)
Abstract
Differential privacy is a cryptographically-motivated approach to privacy that has become a very active field of research over the last decade in theoretical computer science and machine learning. In this paradigm one assumes there is a trusted curator who holds the data of individuals in a database, and the goal of privacy is to simultaneously protect individual data while allowing the release of global characteristics of the database. In this setting we introduce a general framework for parametric inference with differential privacy guarantees. We first obtain differentially private estimators based on bounded-influence M-estimators by leveraging their gross-error sensitivity in the calibration of a noise term added to them in order to ensure privacy. We then show how a similar construction can also be applied to construct differentially private test statistics analogous to the Wald, score and likelihood ratio tests. We provide statistical guarantees for all our proposals via an asymptotic analysis. An interesting consequence of our results is to further clarify the connection between differential privacy and robust statistics. In particular, we demonstrate that differential privacy is a weaker stability requirement than infinitesimal robustness, and show that robust M-estimators can be easily randomized in order to guarantee both differential privacy and robustness towards the presence of contaminated data. We illustrate our results both on simulated and real data.
∗Columbia University, Department of Statistics, New York, NY, USA, email:
[email protected]. The author is grateful for the financial support of the Swiss National
Science Foundation and would like to thank Roy Welsch for many helpful discussions.
arXiv:1911.10167v1 [cs.LG] 22 Nov 2019
1 Introduction
Differential privacy is a cryptographically-motivated approach to privacy which has become a
very active field of research over the last decade in theoretical computer science and machine
learning (Dwork and Roth, 2014). In this paradigm one assumes there is a trusted curator
who holds the data of individuals in a database that might, for instance, consist of n
individual rows. The goal of privacy is to simultaneously protect every individual row while
releasing global characteristics of the database. Differential privacy provides such guarantees
in the context of remote access query systems where the data analysts do not get to see the
actual data, but can ask a server for the output of some statistical model. Here the trusted
curator processes the queries of the user and releases noisy versions of the desired output in
order to protect individual level data.
The interest in remote access systems was prompted by the recognition of fundamental
failures of anonymization approaches. Indeed, it is now well acknowledged that releasing
data sets without obvious individual identifiers such as names and home addresses is not
sufficient to preserve privacy. The problem with such approaches is that an ill-intentioned
user might be able to link the anonymized data with external non anonymous data. Hence
auxiliary information could help intruders break anonymization and learn sensitive information. One prominent example of a privacy breach is the de-anonymization of a Massachusetts hospital discharge database by joining it with a public voter database in Sweeney (1997).
In fact combining anonymization with sanitization techniques such as adding noise to the
dataset directly or removing certain entries of the data matrix are also fundamentally flawed
(Narayanan and Shmatikov, 2008). On the other hand, differential privacy provides a rigorous
mathematical framework to the notion of privacy by guaranteeing protection against identity
attacks regardless of the auxiliary information that may be available to the attackers. This
is achieved by requiring that the output of a query does not change too much if we add or
remove any individual from the data set. Therefore the user cannot learn much about any
individual data record from the output requested.
There is now a large body of literature in this topic and recent work has sought to
link differential privacy to statistical problems by developing privacy-preserving algorithms
for empirical risk minimization, point estimation and density estimation (Dwork and Lei,
2009; Wasserman and Zhou, 2010; Smith, 2011; Chaudhuri et al., 2011; Bassily et al., 2014).
Despite the numerous developments made in the area of differential privacy since the seminal
work of Dwork et al. (2006), one can argue that their practical utility in applied scientific work
is very limited by the lack of broad guidelines for statistical inference. In particular, there
are no generic procedures for performing statistical hypothesis testing for general parametric
models, which arguably constitutes one of the cornerstones of a statistician's data analysis
toolbox.
1.1 Our contribution
The basic idea of our work is to introduce differentially private algorithms leveraging tools
from robust statistics. In particular, we use the Gaussian mechanism studied in the differential privacy literature in combination with robust statistics sensitivity measures. At a high
level, this mechanism provides a generic way to release a noisy version of a statistical query,
where the noise level is carefully calibrated to ensure privacy. For this purpose, appropriate notions of sensitivity have been studied in the computer science literature. By focusing
on the class of parametric M-estimators, we show that the well studied statistics notion of
sensitivity given by the influence function can also be used to calibrate the Gaussian mechanism. This logic extends to tests derived from M-estimators since their sensitivity can also
be understood via the influence function.
To the best of our knowledge, our work is the first one to provide a systematic treatment
of estimation and hypothesis testing with differential privacy guarantees in the context of
general parametric models. The main contributions of this paper are the following:
(a) We introduce a general class of differentially private parametric estimators under mild
conditions. Our estimators are computationally efficient and can be tuned to trade off
statistical efficiency and robustness.
(b) We propose differentially private counterparts of the Wald, score and likelihood ratio
tests for parametric models. Our proposals are by construction robust in a contamination neighborhood of the assumed generative model and are easily constructed from
readily available statistics.
(c) We further clarify the connections between differential privacy and robust statistics
by showing that the influence function can be used to bound the smooth sensitivity of
Nissim et al. (2007). It follows that bounded-influence estimators can naturally be used
to construct differentially private estimators. The converse is not true as our analysis
shows that one can construct differentially private estimators that asymptotically do
not have a bounded influence function.
1.2 Related work
The notion of differential privacy is very similar to the intuitive one of robustness in statistics.
The latter requires that no small portion of the data should influence too much a statistical
analysis (Huber and Ronchetti, 2009; Hampel et al., 1986; Belsley et al., 2005; Maronna et al.,
2006). This connection has been noticed in previous works that have shown how to construct
differentially private robust estimators. In particular, the estimators of (Dwork and Lei,
2009; Smith, 2011; Lei, 2011; Chaudhuri and Hsu, 2012) are the most closely related to ours
since they all provide differentially private parametric estimators building on M-estimators
and establish statistical convergence rates. However, our construction compares favorably
to previous proposals in many regards. Our estimators preserve the optimal parametric
√n-consistency, and hence our privacy guarantees do not come at the expense of slower
statistical rates of convergence as in (Dwork and Lei, 2009; Lei, 2011). Furthermore we do
not assume a known diameter of the parameter space as in Smith (2011). Our construction
is inspired by the univariate estimator of Chaudhuri and Hsu (2012) which is in general
computationally inefficient as it requires the computation of the smooth sensitivity defined
in Section 2.2. We broaden the scope of their technique to general multivariate M-estimators
and more importantly, we overcome the computational barrier intrinsic to their method by
showing that the empirical influence function can be used in the noise calibration of the
Gaussian mechanism. There are however other possible approaches to construct differentially
private estimators. Here we discuss three popular alternatives that have been explored in
the literature.
The first approach seeks to design a mechanism to release differentially private data instead of constructing new estimators. This can be achieved by constructing a differentially
private density estimator such as a perturbed histogram of the data. Once such a density
estimator is available it can be used to either sample private data (Wasserman and Zhou,
2010) or to construct a weighted differentially private objective function for empirical risk
minimization (Lei, 2011). Although the latter approach leads to better rates of convergence for parametric estimation, they remain slow and have a bad dimension dependence
max{1/√n, (√(log n)/n)^{2/(2+p)}}, where n is the sample size and p is the dimension of the
estimated parameter. Indeed, this approach suffers from the curse of dimensionality since
it relies on the computation of multivariate density estimators. Interestingly, a somewhat
related approach for releasing synthetic data existed in the statistics literature prior to the
advent of differential privacy (Rubin, 1993; Reiter, 2002, 2005) and consequently also lacks
formal theoretical privacy guarantees.
A second approach consists of releasing estimators that are defined as the minimizers of a
perturbed objective function. Representative work in this direction includes Chaudhuri and
Monteleoni (2008) in the context of penalized logistic regression, Chaudhuri et al. (2011)
in the general learning problem of empirical risk minimization and Kifer et al. (2012) in a
high dimensional regression setting. A related idea to perturbing the objective function is
to run a stochastic gradient descent algorithm where at each update step an
appropriately scaled noise term is added to the gradient in order to ensure privacy. This idea
was used for example by Rajkumar and Agarwal (2012) in the context of multiparty classification, Bassily et al. (2014) in the general learning setting of empirical risk minimization
and Wang et al. (2015) for Bayesian learning. Although the potential applicability of these
two perturbation approaches to a wide variety of models makes them appealing, it remains
unclear how to construct test statistics in these settings.
A third alternative approach is to draw samples from a well-suited probability distribution. The exponential mechanism of McSherry and Talwar (2007) is a main example of a
general method for achieving (ε, 0)-differential privacy via random sampling. This idea leads
naturally to connections with posterior sampling in Bayesian statistics. Some papers exploring these ideas include Chaudhuri and Hsu (2012) and Dimitrakakis et al. (2014, 2017). See
also Foulds et al. (2016) for a broader discussion of different mechanisms for constructing privacy-preserving Bayesian methods. Bayesian approaches that provide differentially private
posterior distributions seem to be naturally amenable to the construction of confidence intervals and test statistics, as explored in Liu (2016). However it does not seem obvious to us
how to use Bayesian privacy-preserving results such as Dimitrakakis et al. (2014, 2017); Foulds
et al. (2016) in order to provide analogous constructions to ours for estimation and testing.
Interestingly, in this line of work the typical regularity conditions required on the likelihood
and prior distribution are reminiscent of the regularity conditions required in frequentist
setups as discussed below in Section 3.1.
The literature on hypothesis testing with differential privacy guarantees is much more
recent and limited than the one focusing on estimation. A few papers tackling this problem
are the work of (Uhler et al., 2013; Wang et al., 2015; Gaboardi et al., 2016) who consider differentially private chi-squared tests and (Sheffet, 2017; Barrientos et al., 2019) who provide
differentially private t-tests for the regression coefficients of a linear regression model. Our
approach is more broadly applicable since it extends to general parametric models and also
weakens the distributional assumptions required by existing differentially private estimation
and testing techniques. Roughly speaking, this is due to the fact that our M-estimators are
robust by construction and will therefore have an associated bounded influence function. It
is worth noting that the latter property automatically guarantees gradient Lipschitz conditions that have previously been assumed for differentially private empirical risk minimizers
(Chaudhuri et al., 2011; Bassily et al., 2014). After submitting the first version of this paper,
we have noticed some interesting new developments on differentially private inference in the
work of (Awan and Slavkovic, 2018, 2019; Canonne et al., 2019a,b).
One interesting new development in the literature that we do not cover in this work
is local differential privacy. This new paradigm accounts for settings in which even the
statistician collecting the data is not trusted (Duchi et al., 2018). This scenario leads to slower
minimax optimal convergence rates of estimation for many important problems including
mean estimation and logistic regression. Sheffet (2018) seems to be the first work exploring
the problem of hypothesis testing under local differential privacy.
1.3 Organization of the paper
In Section 2 we overview some key background notions from differential privacy and robust
statistics that we use throughout the paper. In Section 3 we introduce our technique for constructing differentially private estimators and study their theoretical properties. In Section
4 we show how to further extend our construction to test functionals in order to perform
differentially private hypothesis testing using M-estimators. In Section 5 we illustrate the
numerical performance of our methods on both synthetic and real data. We conclude the paper in Section 6 with a discussion of our results and future research directions. We relegate
to the Appendix all the proofs and some auxiliary results and discussions.
Notation: ‖V‖ denotes either the Euclidean norm if V ∈ RN or its induced operator norm if
V ∈ RN×N . The smallest and largest eigenvalues of a matrix A are denoted by λmin(A) and
λmax(A). For two probability measures P and Q, the notations d∞(P,Q) and dTV(P,Q) stand
for the sup-norm (Kolmogorov–Smirnov) and total variation distances. We reserve calligraphic
letters such as S for sets and denote their cardinality by |S|. For two sets S and S′ of the
same size, we denote their Hamming distance by dH(S,S′) := |S \ S′| = |S′ \ S|.
2 Preliminaries
Let us first review some important background concepts from differential privacy, robust
statistics and the M-estimation framework for parametric models.
2.1 Differential privacy
Consider a database consisting of a set of data points D = {x1, . . . , xn} ∈ Xn, where X ⊂ Rm
is some data space. We also use the notation D(Fn) to emphasize that D can be viewed as
a data set associated with an empirical distribution Fn induced by {x1, . . . , xn}. Differential
privacy seeks to release useful information from the data set while protecting information
about any individual data entry.
Definition 1. A randomized function A(D) is (ε, δ)–differentially private if for all pairs of
databases (D,D′) with dH(D,D′) = 1 and all measurable subsets of outputs O:
P(A(D) ∈ O) ≤ eεP(A(D′) ∈ O) + δ.
Intuitively, (ε, 0)-differential privacy ensures that for every run of algorithm A the output
is almost equally likely to be observed on every neighboring database. This condition is
relaxed by (ε, δ)-differential privacy since it allows that given a random output O drawn
from A(D), it may be possible to find a database D′ such that O is more likely to be
produced on D′ than it is when the database is D. However, such an event will be extremely
unlikely. In both cases the similarity is defined by the factor eε while the probability of
deviating from this similarity is δ.
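To make Definition 1 concrete, consider the classical randomized response mechanism (a standard textbook example, not one used in this paper): each individual reports their true bit with probability 3/4 and the flipped bit otherwise. Changing one individual's record changes the probability of any output by at most a factor of 3, so the mechanism is (ln 3, 0)-differentially private.

```python
import math

# Randomized response: report truth w.p. 3/4, lie w.p. 1/4.
# Worst-case ratio of output probabilities over neighboring databases
# (one bit flipped) is (3/4)/(1/4) = 3, i.e. eps = ln 3, delta = 0.
p_same = 0.75   # P(report = b | true bit = b)
p_flip = 0.25   # P(report = b | true bit = 1 - b)
eps = math.log(p_same / p_flip)
print(round(eps, 4))  # 1.0986, i.e. ln(3)
```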
The magnitudes of the privacy parameters (ε, δ) are typically considered to be quite different. We are particularly interested in negligible values of δ that are smaller than the inverse
of any polynomial in the size n of the database. The rationale behind this requirement is that
values of δ of the order of 1/‖x‖1, for some vector-valued database x, are problematic since
they “preserve privacy” while allowing the publication of the complete records of a small
number of individuals in the database. On the other hand, the privacy parameter ε is typically thought
of as a moderately small constant and in fact “the nature of privacy guarantees with differing
but small epsilons are quite similar” (Dwork and Roth, 2014, p.25). Indeed, failing to be
(ε, 0)-differentially private for some large ε (e.g. ε = 10) is just saying that there is at least a
pair of neighboring datasets and an output O for which the ratio of probabilities of observing
O conditioned on the database being D or D′ is large.
One can naturally wonder how to compare two differentially private algorithms A1 and A2
with different associated privacy parameters (ε1, δ1) and (ε2, δ2). It seems natural to prefer
the algorithm that ensures the smallest privacy loss incurred by observing some output, i.e.
log(P(A(D) ∈ O)/P(A(D′) ∈ O)). Since we only consider negligible δ1 and δ2, the privacy
loss will be approximately proportional to the privacy parameter ε. One could consequently
prefer the algorithm with the smallest parameter ε even though we say that roughly speaking
“all small epsilons are alike” (Dwork and Roth, 2014, p.24).
Differential privacy enjoys certain appealing properties that facilitate the design
and analysis of complicated algorithms with privacy guarantees. Perhaps the two most important
ones are that (ε, δ)-differential privacy is immune to post-processing and that combining two
differentially private algorithms preserves differential privacy. More precisely, if A is (ε, δ)-
differentially private, then the composition of any data independent mapping f with A is
also (ε, δ)-differentially private. In other words, releasing f(A(D)) for any D still guarantees
(ε, δ)-differential privacy. Furthermore, if we have two algorithms A1 and A2 with different
associated privacy parameters (ε1, δ1) and (ε2, δ2), then releasing the outputs of A1(D) and
A2(D) guarantees (ε1 + ε2, δ1 + δ2)-differential privacy. We refer interested readers to (Dwork
and Roth, 2014, Chapters 2–3) for a more extensive discussion of the concepts presented in
this subsection.
2.2 Constructing differentially private algorithms
A general and very popular technique for constructing differentially private algorithms is the
Laplace mechanism, which consists of adding some well-calibrated noise to the output of a
standard query (Dwork et al., 2006). This procedure relies on suitable notions of sensitivity
of the function that is queried. All the following definitions of sensitivity are standard in the
differential privacy literature and are typically defined with respect to the L1 norm. We will
instead use the Euclidean norm for the construction of our estimators as explained below.
Definition 2. The global sensitivity of a function ϕ : Xn → Rp is
GS(ϕ) := sup_{D,D′} { ‖ϕ(D) − ϕ(D′)‖ : dH(D,D′) = 1 }.
The local sensitivity of a function ϕ : Xn → Rp at a data set D ∈ Xn is
LS(ϕ,D) := sup_{D′} { ‖ϕ(D) − ϕ(D′)‖ : dH(D,D′) = 1 }.
For ξ > 0, the ξ-smooth sensitivity of ϕ at D is
SS_ξ(ϕ,D) := sup_{D′} { e^{−ξ dH(D,D′)} LS(ϕ,D′) : D′ ∈ Xn }.
We are now ready to describe two versions of the Laplace mechanism using the above sensitivity notions defined with respect to the L1 norm. Denote by Lap(b) a scaled symmetric Laplace distribution with density function h_b(x) = (1/2b) exp(−|x|/b) and let Lap_p(b) be the multivariate distribution obtained from p independent and identically distributed X_j ∼ Lap(b) for j = 1, . . . , p. A key idea introduced in the seminal paper Dwork et al. (2006) is that for a function f : Xn → Rp and an input database D, one can simply compute f(D) and then generate an independent noise term U ∼ Lap_p(GS(f)/ε) in order to construct an (ε, 0)-differentially private output f(D) + U. A related idea introduced by Nissim et al. (2007) is to calibrate the noise using the smooth sensitivity instead of the local sensitivity. These authors showed that provided ξ = ε/(4(p + 2 log(2/δ))) and Ũ ∼ Lap_p(SS_ξ(f)/ε), then the output f(D) + Ũ is (ε, δ)-differentially private. Our proposals will build on the latter idea for the construction of private estimation and inferential procedures for parametric models.
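The basic Laplace mechanism of Dwork et al. (2006) can be sketched in a few lines; the counting-query example and the inverse-CDF sampler below are our illustration, not the paper's (a counting query has global sensitivity 1, since changing one record changes the count by at most one).

```python
import math
import random

def laplace_noise(b):
    # Sample Lap(b) by inverse CDF: X = -b * sgn(U) * ln(1 - 2|U|), U ~ Unif(-1/2, 1/2)
    u = random.random() - 0.5
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_value, global_sensitivity, eps):
    # (eps, 0)-DP release: add Lap(GS(f)/eps) noise to the query answer
    return true_value + laplace_noise(global_sensitivity / eps)

# Counting query example: GS = 1.
random.seed(0)
private_count = laplace_mechanism(true_value=42, global_sensitivity=1.0, eps=0.5)
```

Smaller ε means a larger noise scale b = GS(f)/ε, i.e. stronger privacy at the cost of accuracy.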
We would like to point out that the different notions of sensitivity introduced in Definition
2 are usually defined with respect to the L1 norm. We chose to instead present these
definitions in terms of the Euclidean distance as they are more naturally connected to well
studied concepts in robust statistics. In particular, it leads to connections with the standard
way of presenting the notion of gross-error sensitivity in robust statistics and the related
problem of optimal B-robust estimation (Hampel et al., 1986, Chapter 4). Because we
focus on sensitivities with respect to the Euclidean metric, our construction follows the same logic as the Laplace mechanism, but naturally replaces the noise distribution with an appropriately scaled normal random variable as proposed in Nissim et al. (2007). In this case the output f(D) + Ũ is (ε, δ)-differentially private if Ũ ∼ N_p(0, σ²I) where σ = 5√(2 log(2/δ)) SS_ξ(f)/ε and ξ = ε/(4(p + 2 log(2/δ))). For obvious reasons the resulting procedure has
been called the Gaussian mechanism in Dwork and Roth (2014). As we were completing the
revision of the current manuscript we noticed that Cai et al. (2019) have also worked with
this mechanism for the derivation of the optimal statistical minimax rates of convergence for
parametric estimation under (ε, δ)-differential privacy.
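The Gaussian mechanism can be sketched directly from the stated constants, σ = 5√(2 log(2/δ)) SS_ξ(f)/ε with ξ = ε/(4(p + 2 log(2/δ))). Computing the smooth sensitivity itself is problem-specific, so the sketch below takes it as an input; the numeric values are placeholders.

```python
import math
import random

def smoothing_parameter(eps, delta, p):
    # xi = eps / (4 (p + 2 log(2/delta))), per Nissim et al. (2007)
    return eps / (4.0 * (p + 2.0 * math.log(2.0 / delta)))

def gaussian_mechanism(f_of_D, smooth_sensitivity, eps, delta):
    # Release f(D) + N_p(0, sigma^2 I), sigma = 5 sqrt(2 log(2/delta)) SS_xi / eps
    sigma = 5.0 * math.sqrt(2.0 * math.log(2.0 / delta)) * smooth_sensitivity / eps
    return [v + random.gauss(0.0, sigma) for v in f_of_D]

p = 2
eps, delta = 0.5, 1e-6
xi = smoothing_parameter(eps, delta, p)   # used when computing SS_xi(f, D)
private_out = gaussian_mechanism([1.0, 2.0], smooth_sensitivity=0.01,
                                 eps=eps, delta=delta)
```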
2.3 Robust statistics
Robust statistics provides a theoretical framework that allows one to take into account that
models are only idealized approximations of reality, and develops methods that give results
that are stable when slight deviations from the stochastic assumptions of the model occur.
Book-length expositions on the topic include (Huber, 1981; Huber and Ronchetti, 2009;
Hampel et al., 1986; Maronna et al., 2006). We will focus on the infinitesimal robustness
approach that considers the impact of moderate distributional deviations from ideal models
on a statistical procedure (Hampel et al., 1986). In this setting the statistics of interest are
viewed as functionals of the underlying distribution and the influence function is the key
tool used to assess the robustness of a statistical functional.
Definition 3. Given a measurable space Z, a distribution space F, a parameter space Θ ⊂ Rp
and a functional T : F → Θ, the influence function of T at a point z ∈ Z for a distribution
F is defined as
IF(z;T, F) := lim_{t→0+} [T(F_t) − T(F)] / t,
where F_t = (1 − t)F + t∆_z and ∆_z is a point mass at z.
The influence function has the heuristic interpretation of describing the effect of an
infinitesimal contamination at the point z on the estimate, standardized by the mass of
contamination. Furthermore, if a statistical functional T (F ) is sufficiently regular, a von
Mises expansion (von Mises, 1947; Hampel, 1974; Hampel et al., 1986) yields
T(G) = T(F) + ∫ IF(z;T, F) d(G − F)(z) + o(d∞(G,F)).    (1)
Considering the approximation (1) over a neighborhood of the form F_t = {F(t) | F(t) = (1 − t)F + tG, G an arbitrary distribution}, we see that the influence function can be used to linearize the asymptotic bias in a neighborhood of the idealized model F. Therefore, a statistical functional with bounded influence function is robust in the sense that it will have a bounded approximate bias in a neighborhood of F. A related notion of robustness is the gross-error sensitivity, which measures the worst case value of the influence function.
Definition 4. The gross-error sensitivity of a functional T : F→ Θ at the distribution F is
γ(T, F) := sup_{x∈X} ‖IF(x; T, F)‖.
Clearly if the space X is unbounded, the gross-error sensitivity of T will be infinite unless
its influence function is uniformly bounded. In Sections 3 and 4 we will show how to use the
robust statistics tools described here in the construction of differentially private estimators
and tests.
2.4 M-estimators for parametric models
M-estimators are a simple class of estimators that is appealing from a robust statistics
perspective and constitute a very general approach to parametric inference (Huber, 1964;
Huber and Ronchetti, 2009). They will be the focus of the rest of this paper. An M-estimator
θ̂ = T (Fn) of θ0 ∈ Θ ⊂ Rp is defined as a solution to
∑_{i=1}^{n} Ψ(xi, T(Fn)) = 0,
where Ψ : Rm ×Θ→ Rp, x1, . . . , xn ∈ Rm are independent identically distributed according
to F and Fn denotes the empirical distribution function. This class of estimators is a strict
generalization of the class of regular maximum likelihood estimators. Assuming that T(F) = θ0 and under some mild conditions (Huber and Ronchetti, 2009, Ch. 6), as n → ∞ they are
asymptotically normally distributed as
√n (T(Fn) − θ0) →d N(0, V(T, F)),
where V(T, F) = EF[IF(X; T, F) IF(X; T, F)ᵀ] and EF[IF(X; T, F)] = 0. Furthermore, their
influence function is
IF(x; T, F) = M(T, F)⁻¹ Ψ(x, T(F)), (2)
where M(T, F) = −EF[Ψ̇(X, T(F))] = −(∂/∂θ) EF[Ψ(X, θ)]|θ=θ0. Therefore M-estimators defined by bounded functions Ψ are said to be infinitesimally robust since their influence function is bounded and by (1) their asymptotic bias will also be bounded for small amounts of contamination.
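To make the estimating-equation view concrete, the following sketch solves a Huber location M-estimating equation with the classical modified-residuals iteration; this is a standard algorithm used here only for illustration, and the function names are ours.

```python
import numpy as np

def huber_location(x, sigma=1.0, c=1.345, tol=1e-10, max_iter=200):
    """Solve the location M-estimating equation sum_i psi_c((x_i - mu)/sigma) = 0
    with the modified-residuals iteration mu <- mu + sigma * mean(psi_c(r));
    psi_c is the Huber function, so Psi is bounded by c (Condition 1 with K = c)."""
    mu = float(np.median(x))
    for _ in range(max_iter):
        r = (x - mu) / sigma
        step = sigma * float(np.mean(np.clip(r, -c, c)))
        mu += step
        if abs(step) < tol:
            break
    return mu

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier
mu_rob = huber_location(x)  # stays near the bulk of the data, unlike the mean
```

Because ψ is bounded, the single outlier moves the estimate only slightly, whereas the sample mean is dragged far to the right.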
3 Differentially private estimation
3.1 Assumptions
In the following we allow Ψ to depend on n, but we do not stress it in the notation to make
it less cumbersome. Here are the main conditions required in our analysis:
Condition 1. The function Ψ(x, θ) is differentiable with respect to θ almost everywhere for
all x ∈ X, and we denote this derivative by Ψ̇(x, θ). Furthermore, for all θ ∈ Θ there exists
constants Kn, Ln > 0 such that
sup_{x∈X} ‖Ψ(x, θ)‖ ≤ Kn and sup_{x∈X} ‖Ψ̇(x, θ)‖ ≤ Ln.
Condition 2. The matrix MF = M(T, F) = −EF[Ψ̇(X, T(F))] is positive definite at the generative distribution F = Fθ0. Furthermore, the space of data sets Xⁿ is such that for all empirical distributions Gn ∈ {G | D(G) ∈ Xⁿ} with n ≥ N0 we have that 0 < b ≤ λmin(MGn) ≤ λmax(MGn) ≤ B < ∞.
Condition 3. There exist r1 > 0, r2 > 0, r3 > 0 and constants C1, C2 > 0 such that

‖EFn[Ψ̇(X, θ)] − EGn[Ψ̇(X, θ)]‖ ≤ C1 d∞(Fn, Gn) and
‖EFn[Ψ̇(X, θ)] − EFn[Ψ̇(X, T(Fn))]‖ ≤ C2 ‖T(Fn) − θ‖

whenever d∞(Fn, Gn) ≤ r1, ‖θ − T(Fn)‖ ≤ r2 and ‖θ − θ0‖ ≤ r3.
Condition 1 requires Ψ and Ψ̇ to be uniformly bounded in X by some potentially diverging constants Kn and Ln. The case Kn = K < ∞ is particularly appealing from a robust statistics perspective as it guarantees that the resulting M-estimator has a bounded influence function. If additionally Ln = L < ∞, then the resulting M-estimator will also
be second order infinitesimally robust as defined in La Vecchia et al. (2012) and will have
a bounded change of variance function; see Hampel et al. (1981) and our Appendix C for
more details. Condition 2 restricts the space of data sets to one where some minimal reg-
ularity conditions on the Jacobian of Ψ hold. Similar assumptions are usually required to
guarantee the asymptotic normality and Fréchet differentiability of M-estimators, see for
example Huber (1967), (Huber and Ronchetti, 2009, Corollary 6.7) and Clarke (1986). Our
assumptions are stronger in order to guarantee that MGn is invertible and hence that the
empirical influence function is computable. Even though such requirements are not always
explicitly stated, common statistical practice implicitly assumes them when computing es-
timated asymptotic variances with plug-in formulas. In a standard linear regression setting
these conditions boil down to assuming that the design matrix is full rank. Even such a
seemingly harmless condition seems stronger in the differential privacy context. Indeed, it
might not be checkable by the users and one would like to have such a guarantee to hold
over all possible configurations of the data. One possible way of tackling this problem is to
let the algorithm halt with an output “No Reply” when this assumption fails (Dwork and
Lei, 2009; Avella-Medina and Brunel, 2019). Condition 3 is a smoothness condition on Ψ̇ at
Fn, similar to Condition 4 in Chaudhuri and Hsu (2012). It is a technical assumption used
when upper bounding the smooth sensitivity by the gross-error sensitivity. The constants
C1 and C2 are effectively Lipschitz constants.
We would like to highlight that since the differential privacy paradigm assumes a remote
access query framework where the user does not get to see the data, in principle it is not
immediate that the user will be able to check basic features of the data e.g. whether the
design matrix is full rank before performing an analysis. This is a serious limitation of
this paradigm as it more generally prevents users from performing exploratory data analysis
before fitting a model and it is also unclear how to do model checking and run diagnostics on
fitted models. One would have to develop differentially private analogues of the whole data
analysis pipeline in order to allow a data analyst to perform rigorous statistical analysis. An
interesting recent development in this direction in a regression setting is the work of Chen
et al. (2018).
3.2 A general construction
Let us now introduce our mechanism for constructing differentially private M-estimators.
Given a statistical M-functional T , we propose the randomized estimator
AT(Fn) := T(Fn) + γ(T, Fn) · (5√(2 log(n) log(2/δ))/(εn)) Z, (3)
where Z is a p dimensional standard normal random variable. The intuition behind our
proposal is simple: the gross-error sensitivity γ(T, Fn) should be roughly of the same order
as the smooth sensitivity. Therefore multiplying it by √log(n) will guarantee that it upper bounds the smooth sensitivity. This in turn suffices to guarantee (ε, δ)-differential privacy.
From a computational perspective, using the empirical gross-error-sensitivity is much more
appealing than computing the exact smooth sensitivity. Indeed, the former can be further
upper bounded in practice using the empirical influence function whereas the latter can be
very difficult to compute in general as discussed in Nissim et al. (2007).
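A minimal sketch of the release (3), assuming that T(Fn) and an upper bound on γ(T, Fn) have already been computed; the function name and interface are ours.

```python
import numpy as np

def release_private_estimate(theta_hat, gamma_hat, n, eps, delta, rng):
    """Gaussian-mechanism release A_T(F_n) of display (3): add Gaussian noise
    scaled by the empirical gross-error sensitivity gamma_hat times
    5 * sqrt(2 * log(n) * log(2/delta)) / (eps * n)."""
    scale = gamma_hat * 5.0 * np.sqrt(2.0 * np.log(n) * np.log(2.0 / delta)) / (eps * n)
    return theta_hat + scale * rng.normal(size=np.shape(theta_hat))

rng = np.random.default_rng(0)
theta_priv = release_private_estimate(np.array([0.5, -1.2]), gamma_hat=2.0,
                                      n=10_000, eps=1.0, delta=1e-6, rng=rng)
```

Note that the noise scale decays at the rate 1/n up to logarithmic factors, which is what drives the asymptotic equivalence results of Section 3.4.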
Theorem 1. Let n ≥ max[N0, (1/C²) m log(2/δ){1 + (4/ε)(p + 2 log(2/δ)) log(λmax(MFn)/b)}², (C′)² m log(2/δ){2Ln/b + (1/λmin(MFn))(C1 + C2 Kn/b)}²] and assume that Conditions 1–3 hold. Then AT is (ε, δ)-differentially private.
Theorem 1 shows that our proposal leads to differentially private estimation. It builds
on two lemmas, relegated to the Appendix, that show that the smooth sensitivity of T
can indeed be upper bounded by twice its empirical gross error sensitivity. Note that the
minimum sample size requirement depends on the values of {N0, b,Kn, Ln, C1, C2} defined in
Conditions 1–3, as well as some constants C and C ′ resulting from our bounds on the error
incurred by approximating the smooth sensitivity with the empirical gross-error-sensitivity.
We provide a discussion about the evaluation of these constants in the Appendix.
3.3 Examples
Let us now present three important examples in order to illustrate how one can use readily
available robust M-estimators and their influence functions to derive bounds on their em-
pirical gross-error sensitivities. These quantities can in turn be used to release differentially
private estimates AT (Fn) defined in (3).
Example 1: Location-scale model
We consider the location-scale model discussed in (Huber and Ronchetti, 2009, Chapter
6). Here we observe an iid random sample of univariate random variables X1, . . . , Xn with
density function of the form (1/σ)f((x − µ)/σ), where f is some known density function, µ is some
unknown location parameter and σ is an unknown positive scale parameter. The problem
of simultaneous location and scale parameter estimation is motivated by invariance consid-
erations. In particular, in order to make an M-estimate of location scale invariant, we must
couple it with an estimate of scale. If the underlying distribution F is symmetric, location
estimates T and scale estimates S typically are asymptotically independent, and the asymp-
totic behavior of T depends on S only through the asymptotic value S(F ). We can therefore
afford to choose S on criteria other than low statistical variability. Huber (1964) generalized
the maximum likelihood system of equations by considering as simultaneous M-estimates of location and scale any pair of statistics (Tn, Sn) determined by two equations of the form
∑_{i=1}^{n} ψ((xi − Tn)/Sn) = 0 and ∑_{i=1}^{n} χ((xi − Tn)/Sn) = 0,
which lead Tn = T (Fn) and Sn = S(Fn) to be expressed in terms of functionals T and S
defined by the population equations

∫ ψ((x − T(F))/S(F)) dF(x) = 0 and ∫ χ((x − T(F))/S(F)) dF(x) = 0.
From the latter equations one can show that, if ψ is odd and χ is even, the influence functions
of T and S are
IF(x; T, F) = S(F) ψ((x − T(F))/S(F)) / ∫ ψ′((x − T(F))/S(F)) dF(x) and

IF(x; S, F) = S(F) χ((x − T(F))/S(F)) / ∫ χ′((x − T(F))/S(F)) ((x − T(F))/S(F)) dF(x). (4)
The problem of robust joint estimation of location and scale was introduced in the seminal
paper of Huber (1964). In the important case of the normal model, where F = Φ is the
standard normal distribution, a prominent example of the above system of equations is
Huber’s Proposal 2. In this case, ψ(r) = ψc(r) = min{c,max(−c, r)} is the Huber function
and χ(r) = χc(r) = ψc(r)² − κ, where κ = ∫ min(c², x²) dΦ(x) is a constant that ensures Fisher consistency at the normal model, i.e. T(Φ) = µ and S(Φ) = σ. This particular choice of estimating equations and (4) show that the empirical gross-error sensitivities of µ̂ = Tn = T(Fn) and σ̂ = Sn = S(Fn) are
γ(T, Fn) = cσ̂ / [(1/n) ∑_{i=1}^{n} I{|xi−µ̂|/σ̂ < c}] and γ(S, Fn) = (c² − κ)σ̂ / [(1/n) ∑_{i=1}^{n} ((xi − µ̂)/σ̂)² I{|xi−µ̂|/σ̂ < c}], (5)
where the last equation used that χ′c(r) = ψ′c(r) r almost everywhere and I{E} is the indicator function taking the value 1 under the event E and 0 otherwise. The formulas obtained in (5)
can be used in the Gaussian mechanism (3) for obtaining private location and scale estimates.
We refer the reader to Chapter 6 in Huber and Ronchetti (2009) for more discussion and
details on joint robust estimation of location and scale parameters.
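The computations of this example can be sketched as follows; the fixed-point iteration used to solve Huber's Proposal 2 is a standard scheme chosen for illustration, not necessarily the implementation used by the authors, and the function names are ours.

```python
import numpy as np
from scipy.stats import norm

def proposal2_private_inputs(x, c=1.5, tol=1e-10, max_iter=500):
    """Solve Huber's Proposal 2 for (mu, sigma) at the normal model and return
    the empirical gross-error sensitivities of display (5)."""
    # kappa = int min(c^2, t^2) dPhi(t) = c^2 * P(|Z| > c) + E[Z^2 1{|Z| <= c}]
    kappa = c**2 * 2.0 * norm.sf(c) + (2.0 * norm.cdf(c) - 1.0 - 2.0 * c * norm.pdf(c))
    mu = float(np.median(x))
    sigma = float(np.median(np.abs(x - mu))) / 0.6745  # MAD starting value
    for _ in range(max_iter):
        r = (x - mu) / sigma
        psi = np.clip(r, -c, c)
        mu_new = mu + sigma * float(np.mean(psi))             # location update
        sigma_new = sigma * np.sqrt(np.mean(psi**2) / kappa)  # scale update from chi
        done = abs(mu_new - mu) < tol and abs(sigma_new - sigma) < tol
        mu, sigma = mu_new, sigma_new
        if done:
            break
    r = (x - mu) / sigma
    inside = np.abs(r) < c
    gamma_mu = c * sigma / np.mean(inside)                         # gamma(T, F_n)
    gamma_sigma = (c**2 - kappa) * sigma / np.mean(r**2 * inside)  # gamma(S, F_n)
    return mu, sigma, gamma_mu, gamma_sigma

rng = np.random.default_rng(1)
mu, sigma, g_mu, g_sigma = proposal2_private_inputs(rng.normal(2.0, 1.0, 5000))
```

The returned sensitivities can be plugged directly into the Gaussian mechanism (3) to release private location and scale estimates.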
Example 2: Linear regression
One can naturally build on the construction of the previous example to obtain robust esti-
mators for the linear regression model
yi = xiᵀβ + ui, for i = 1, . . . , n, (6)

where yi is the response variable, xi ∈ Rᵖ the covariates and the noise terms are ui iid∼ N(0, σ²). The estimator discussed here is a Mallows-type robust M-estimator defined as
(β̂, σ̂) = argmin_{β,σ} { ∑_{i=1}^{n} σ ρc((yi − xiᵀβ)/σ) w(xi) + κnσ }, (7)

where ρc is the Huber loss function with tuning parameter c, κ = ∫ min{c², r²} dΦ(r) is a Fisher consistency constant for σ and w : Rᵖ → R≥0 is a downweighting function that controls
the impact of outlying covariates on the estimators of β̂ and σ̂ (Hampel et al., 1986). This
robust estimator uses Huber’s Proposal 2 for the estimation of the scale parameter. In this
case, the influence function of the estimator β̂ = T (Fn) is
IF(x, y; T, F) = MF⁻¹ ψc((y − xᵀT(F))/S(F)) x w(x),
where MF = ∫ xxᵀ w(x) ψ′c(r) dF and r = (y − xᵀT(F))/S(F). Therefore MFn = (1/n) ∑_{i=1}^{n} xi xiᵀ w(xi) I{|r̂i| ≤ c} with r̂i = (yi − xiᵀβ̂)/σ̂, and assuming that sup_x ‖x w(x)‖ ≤ K̃, we see that γ(T, Fn) ≤ λmin(MFn)⁻¹ c K̃. This last bound can be used for the release of differentially private estimates of β. Note also that using the derivations from Example 1 we also have that the empirical gross-error sensitivity of σ̂ = S(Fn) is γ(S, Fn) = [(1/n) ∑_{i=1}^{n} r̂i² I{|r̂i| ≤ c}]⁻¹ (c² − κ) σ̂.
Example 3: Generalized linear models
Generalized linear models (McCullagh and Nelder, 1989) assume that conditional on some
covariates, the response variables belong to the exponential family i.e. the response variables
Y1, . . . , Yn are drawn independently from the densities of the form
f(yi; θi) = exp[{yi θi − b(θi)}/φ + c(yi, φ)],

where a(·), b(·) and c(·) are specific functions and φ is a nuisance parameter. Thus E(Yi) = µi = b′(θi), var(Yi) = v(µi) = φ b″(θi) and g(µi) = ηi = xiᵀβ, where β ∈ Rᵖ is the vector of parameters, xi ∈ Rᵖ is the set of explanatory variables and g(·) the link function.
Cantoni and Ronchetti (2001) proposed a class of M-estimators for GLM which can be
viewed as a natural robustification of the quasilikelihood estimators of Wedderburn (1974).
Their robust quasilikelihood is
ρn(β) = (1/n) ∑_{i=1}^{n} QM(yi, xiᵀβ),
where the functions QM(yi, xiᵀβ) can be written as

QM(yi, xiᵀβ) = ∫_{s̃}^{µi} ν(yi, t) w(xi) dt − (1/n) ∑_{j=1}^{n} ∫_{t̃}^{µj} E{ν(yj, t)} w(xj) dt
with ν(yi, t) = ψ{(yi − t)/√v(t)}/√v(t), s̃ such that ψ{(yi − s̃)/√v(s̃)} = 0 and t̃ such that E[ψ{(yi − t̃)/√v(t̃)}] = 0. The function ψ(·) is bounded and protects against large outliers in the responses, and w(·) downweights leverage points. The estimator β̂ of β derived
from the minimization of this loss function is the solution of the estimating equation
Ψ(n)(β) = (1/n) ∑_{i=1}^{n} Ψ(yi, xiᵀβ) = (1/n) ∑_{i=1}^{n} { ψ(ri) (1/√v(µi)) w(xi) ∂µi/∂β − a(β) } = 0, (8)
where ri = (yi − µi)/√v(µi) and a(β) = n⁻¹ ∑_{i=1}^{n} E{ψ(ri)/√v(µi)} w(xi) ∂µi/∂β ensures Fisher consistency and can be computed using the formulas in Appendix A of Cantoni and Ronchetti (2001). We note that Appendix B of the same paper shows that MF is of the form (1/n) XᵀBX and that these estimators and formulas are implemented in the function glmrob of the R package robustbase. They can be used to bound the empirical gross-error sensitivity via γ(T, Fn) ≤ λmin(MFn)⁻¹ Kn, where Kn is as in Condition 1 and will depend on the choices of ψ and w as was the case in Example 2.
3.4 Convergence rates
We provide upper bounds for the convergence rates of AT (Fn). Our result is an extension of
Theorem 3 in Chaudhuri and Hsu (2012).
Theorem 2. Suppose Conditions 1–2 hold. Then, for τ ∈ (0, 1), with probability at least
1− τ
‖AT(Fn) − T(F)‖ ≤ ‖T(Fn) − T(F)‖ + C √(log(n) log(2/δ)) Kn {√p + √log(1/τ)} / (εn)
for some positive constant C. If in addition Kn √(m log(n) log(1/δ)) / (√n ε) → 0 as n → ∞, then

AT(Fn) − T(F) = T(Fn) − T(F) + op(1/√n).
A direct consequence of the above result and (Huber and Ronchetti, 2009, Corollary 6.7)
is that AT (Fn) is asymptotically normally distributed as stated next.
Corollary 1. Assume that p is fixed and that Conditions 1–2 hold. Further assume that EFθ0[‖Ψ(X, θ0)‖²] is nonzero and finite. If Kn √(log(n) log(1/δ)) / (ε√n) → 0 as n → ∞, then we have that

√n (AT(Fn) − T(F)) →d N(0, V(T, F)).
Remark 1. This asymptotic normality result can be easily extended to the case where p diverges as n increases. In particular, invoking the results of He and Shao (2000), asymptotic normality holds assuming p² log p / n → 0. Note also that when p diverges, Kn will be diverging even for robust estimators, as componentwise boundedness of Ψ implies that Kn = O(√p).
3.5 Efficiency, truncation and robustness properties
Smith (2008, 2011) introduced a class of asymptotically efficient point estimators obtained
by averaging subsampled estimators and adding well calibrated noise using the Laplace
mechanism of Dwork et al. (2006). Unfortunately his construction relies heavily on the
assumption that the diameter of the parameter space is known when calibrating the noise
added to the output. Furthermore it is also assumed that we observe bounded random
variables. Variants of this assumption are common in the differential privacy literature
(Smith, 2011; Lei, 2011; Bassily et al., 2014). Our estimators can bypass these issues as long as the bound on Ψ diverges slower than √n. In particular, this is easily achievable with robust
M-estimators since by construction they have a bounded Ψ. Alternatively, we could use
truncated maximum likelihood score equations to obtain asymptotically efficient estimators
as shown next.
Corollary 2. Let Tn denote the M-functional defined by the truncated score function sc(x, θ) = (∂ log fθ(x)/∂θ) wc,θ(x), where wc,θ(x) = min{1, c/‖∂ log fθ(x)/∂θ‖}, c is some positive constant and fθ0 denotes the density of F. If c → ∞ and c log(n)/(√n ε) → 0 as n → ∞, then we have that

√n (ATn(Fn) − θ0) →d N(0, I⁻¹(θ0)),
where I(θ0) denotes the Fisher information matrix.
The truncated maximum likelihood construction is reminiscent of the estimator of Catoni
(2012). The latter also uses a diverging level of truncation, but as a tool for achieving op-
timal non-asymptotic sub-Gaussian-type deviations for mean estimators under heavy tailed
assumptions.
From a robust statistics point of view a diverging level of truncation is not a fully satis-
factory solution. Indeed, it is well known that maximum likelihood estimators can be highly
sensitive to the presence of small fractions of contamination in the data. This remains
true for the truncated maximum likelihood estimator if the truncation level is allowed to
diverge as it entails that the estimator will fail to have a bounded influence function asymp-
totically and will therefore not be robust in this sense. Interestingly, Chaudhuri and Hsu
(2012) showed that any differentially private algorithm needs to satisfy a somewhat weaker degree of robustness. Our next theorem provides a result in the same spirit for multivariate
M-estimators.
Theorem 3. Let ε ∈ (0, log 2/2) and δ ∈ (0, ε/17). Let F be the family of all distributions over X ⊂ Rᵖ and let A be any (ε, δ)-differentially private algorithm of T(F). For all n ∈ N and F ∈ F there exists a radius ρ = ρ(n) = (1/n)⌈log 2/(2ε)⌉ and a distribution G ∈ F with dTV(F, G) ≤ ρ, such that either

EFn EA[‖A(D(Fn)) − T(F)‖] ≥ (ρ/16) γ(T, F) + o(ρ)

or

EGn EA[‖A(D(Gn)) − T(G)‖] ≥ (ρ/16) γ(T, F) + o(ρ),
where Fn and Gn denote empirical distributions obtained from F and G respectively.
Theorem 3 states that the convergence rate of any differentially private algorithm A estimating the M-functional T is lower bounded by ργ(T, F) in a small neighborhood of F. Therefore M-functionals T with diverging influence functions will have slower convergence rates for any algorithm A in all such neighborhoods. In this sense some degree of robustness is needed in order to obtain informative differentially private algorithms, and the theorem suggests that the influence function has to scale at most as ρ⁻¹ = O(εn).
4 Differentially private inference
We now present our core results for privacy-preserving hypothesis testing building on the
randomization scheme introduced in the previous section.
4.1 Background
We denote the partition of a p-dimensional vector v into p − k and k components by v = (v(1)ᵀ, v(2)ᵀ)ᵀ. We are interested in testing hypotheses of the form H0 : θ = θ0, where θ0 = (θ0(1)ᵀ, 0ᵀ)ᵀ and θ0(1) is unspecified, against the alternative H1 : θ0(2) ≠ 0 where θ0(1) is unspecified. We assume throughout that the dimension k is fixed. A well known result in
statistics states that the Wald, score and likelihood ratio tests are asymptotically optimal
and equivalent in the sense that they converge to the uniformly most powerful test (Lehmann
and Romano, 2006). The level functionals of these test statistics can be approximated by
functionals of the form
α(Fn) := 1 − Hk(n U(Fn)ᵀ U(Fn)), (9)

where Hk(·) is the cumulative distribution function of a χ²k random variable, U(Fn) is a standardized functional such that under the null hypothesis U(F) = 0 and

√n (U(Fn) − U(F)) →d N(0, Ik). (10)
Heritier and Ronchetti (1994) proposed robust tests based on M-estimators. Their main advantage over their classical counterparts is that they have bounded level and power influence
functions. Therefore these tests are stable under small arbitrary contamination under both
the null hypothesis and the alternative. Following Heritier and Ronchetti (1994) we therefore
consider the three classes of tests described next.
1. A Wald-type test statistic is a quadratic statistic of the form

W(Fn) := T(Fn)(2)ᵀ (V(T, F)(22))⁻¹ T(Fn)(2). (11)
2. A score (or Rao)-type test statistic has the form

R(Fn) := Z(T, Fn)ᵀ U(T, F)⁻¹ Z(T, Fn), (12)

where Z(T, Fn) = (1/n) ∑_{i=1}^{n} Ψ(Xi, TR(Fn))(2), TR is the restricted M-functional defined as the solution of

∫ Ψ(x, TR(F))(1) dF = 0 and TR(F)(2) = 0,

U(T, F) = M(22.1) V(T, F)(22) M(22.1)ᵀ is a positive definite matrix and M(22.1) = M(22) − M(21) M(11)⁻¹ M(12) with M = M(T, F).
3. A likelihood ratio-type test has the form

S(Fn) := (2/n) ∑_{i=1}^{n} {ρ(xi, T(Fn)) − ρ(xi, TR(Fn))}, (13)

where ρ(x, 0) = 0, (∂/∂θ)ρ(x, θ) = Ψ(x, θ) and T and TR are the M-functionals of the full and restricted models respectively. As shown in Heritier and Ronchetti (1994), the likelihood ratio functional is asymptotically equivalent to the quadratic form S̃(F) := ULR(F)ᵀ ULR(F) where ULR(F) = M(22.1)^{1/2} T(F)(2).
Note that in practice the matrices M(T, F ), U(T, F ) and V (T, F ) need to be estimated. We
discuss this point in Section 4.6.
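As an illustration of the Wald-type statistic (11) and the level functional (9), the following sketch computes W(Fn) and the corresponding p-value for H0 : θ(2) = 0; the interface is ours and V̂ is assumed to be a consistent estimate of V(T, F).

```python
import numpy as np
from scipy.stats import chi2

def wald_level(theta_hat, V_hat, tested, n):
    """Wald-type statistic (11) for H0: theta_(2) = 0 together with the level
    functional alpha(F_n) = 1 - H_k(n W(F_n)) of display (9); `tested` indexes
    the k components collected in theta_(2) and V_hat estimates V(T, F)."""
    t2 = theta_hat[tested]
    V22 = V_hat[np.ix_(tested, tested)]
    W = float(t2 @ np.linalg.solve(V22, t2))
    alpha = 1.0 - chi2.cdf(n * W, df=len(tested))
    return W, alpha

W, alpha = wald_level(np.array([1.0, 0.0, 0.1]), np.eye(3), tested=[1, 2], n=100)
# n * W = 1.0, so alpha = 1 - chi2.cdf(1, df=2) = exp(-1/2) ≈ 0.6065
```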
4.2 Private inference based on the level gross-error sensitivity
We can use any of the robust test statistics described above to provide differential private
p-values using an analogue construction to the one introduced for estimation in Section 3.
Our proposal for differentially private testing is to build p-values of the form
Aα(Fn) := α(Fn) + γ(α; Fn) · (5√(2 log(n) log(2/δ))/(εn)) Z,
where Z is an independent standard normal random variable. The rationale behind our
construction is that γ(α, Fn) is the right scaling factor for applying the Gaussian mechanism
to α(Fn) since it should roughly be of the same order as its smooth sensitivity. Note also
that one can use Aα(Fn) to construct randomized counterparts to the test statistics (11),
(12) and (13) by simply computing
Q(Fn) := Hk⁻¹(Aα(Fn)),
that is by evaluating the quantile function of a χ²k at Aα(Fn). Note that we can also apply
the Gaussian mechanism to the Wald, score and likelihood ratio type statistics of Section 4.1
and construct differentially private p-values from them. Indeed postprocessing preserves dif-
ferential privacy so computing the induced p-values preserves the privacy guarantees (Dwork
and Roth, 2014, Proposition 2.1). Our theoretical results extend straightforwardly to this
alternative approach and the numerical performance is nearly identical to the one presented
in this paper in our experiments. The following theorem establishes the differential privacy
guarantee of our proposal.
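A minimal sketch of the release Aα(Fn) and the induced statistic Q(Fn); the clipping of the noisy p-value back to [0, 1] before inverting Hk is a post-processing step we add for numerical safety and is not part of the construction above.

```python
import numpy as np
from scipy.stats import chi2

def release_private_pvalue(alpha_hat, gamma_alpha, n, eps, delta, rng, k=1):
    """Release A_alpha(F_n) with Gaussian noise scaled by the empirical level
    gross-error sensitivity, and compute Q(F_n) = H_k^{-1}(A_alpha(F_n));
    clipping to [0, 1] is our safeguard and preserves privacy (post-processing)."""
    scale = gamma_alpha * 5.0 * np.sqrt(2.0 * np.log(n) * np.log(2.0 / delta)) / (eps * n)
    a_priv = alpha_hat + scale * rng.normal()
    q_priv = chi2.ppf(np.clip(a_priv, 0.0, 1.0), df=k)
    return a_priv, q_priv

rng = np.random.default_rng(3)
a_priv, q_priv = release_private_pvalue(alpha_hat=0.04, gamma_alpha=1.0,
                                        n=5000, eps=1.0, delta=1e-6, rng=rng)
```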
Theorem 4. Let n ≥ max[N0, (1/C²) m log(2/δ){1 + (4/ε)(p + 2 log(2/δ)) log(Cn,k,U)}², CU² m log(1/δ) (Kn²/λmax(MFn)){1 + 2Ln/b + (1/λmin(MFn))(C1 + C2 Kn/b)}²], where CU and Cn,k,U are constants depending on the test functional. If Conditions 1–3 hold, then Aα is (ε, δ)-differentially private.
The minimum sample size required in Theorem 4 is similar to that of Theorem 1. In particular it also depends on the same constants {N0, b, Kn, Ln, C1, C2, C}, as well as the test-specific constants CU and Cn,k,U resulting from our bounds on the error incurred by approximating the smooth sensitivity of the level functional by its empirical gross-error sensitivity. A discussion of these constants can be found in the Appendix.
4.3 Examples
The following two examples show how to upper bound the empirical level gross-error sensitivity γ(α, Fn) required for the construction of our differentially private p-values.
Example 4: Testing and confidence intervals in linear regression
We consider the problem of hypothesis testing in the setting of Example 2. We focus on the same Mallows-type estimator in combination with the Wald statistic Wn = W(Fn) defined in (11). We first note that from the chain rule, the influence function of W at Fn is
IF(x; W, Fn) = 2 T(Fn)(2)ᵀ (V(T, F)(22))⁻¹ IF(x; T, Fn)(2).
It follows that γ(W, Fn) ≤ 2 λmin(V(T, F)(22))⁻¹ ‖T(Fn)(2)‖ γ(T(2), Fn) and the respective level gross-error sensitivity can be bounded as γ(αW, Fn) ≤ n H′k(nWn) γ(W, Fn). In the case of a univariate null hypothesis of the form H0 : βj = 0 these expressions become

IF(x; W, Fn) = (2 T(Fn)j / V(T, F)jj) IF(x; T, Fn)j and γ(αW, Fn) ≤ 2n H′1(nWn) (|T(Fn)j| / V(T, F)jj) ‖(MFn⁻¹)j·‖ Kn,

where (MFn⁻¹)j· denotes the jth row of MFn⁻¹. The above bound on γ(αW, Fn) can be used in
the Gaussian mechanism suggested in Section 4.2 for reporting differentially private p-values
AαW (Fn) accompanying the regression slope estimates AT (Fn) of Example 2.
We further note that since (ε, δ)-differential privacy is not affected by post-processing,
one can also construct confidence intervals using the reported p-value AαW (Fn). Since the
asymptotic distribution of the Wald test is a χ²1 for the null hypothesis H0 : βj = 0, a natural way to construct a confidence interval is to map the value AαW(Fn) to the quantile of a χ²1 and output the interval defined by its square root. More precisely, one can first compute Qn^(ε,δ) = H1⁻¹(AαW(Fn)) and then report the differentially private confidence interval (−√(Qn^(ε,δ)), √(Qn^(ε,δ))).
Example 5: Testing and confidence intervals in logistic regression
Let us now return to the robust quasilikelihood estimator discussed in Example 3 and focus
on the special case of binary regression with canonical link. Note that if one chooses ψ(r) = r
and w(x) = 1 in (8), the resulting estimator is equivalent to logistic regression. In general
(8) will take the form
(1/n) ∑_{i=1}^{n} { ψ(ri) (e^{xiᵀβ/2} / (1 + e^{xiᵀβ})) w(xi) xi − a(β) } = 0,

where ri = (yi − pi)/√(pi(1 − pi)) and pi = e^{xiᵀβ}/(1 + e^{xiᵀβ}). In this case, if sup_x ‖x w(x)‖ ≤ K̃ and |ψ(r)| ≤ cψ, then the gross-error sensitivity of β̂ = T(Fn) can be bounded as γ(T, Fn) ≤ 2 λmin(MFn)⁻¹ cψ K̃. For example, if we consider the weight function w(x) = min{1, 1/‖x‖} and the Huber function ψ(r) = ψc(r), then K̃ = 1 and cψ is the constant of the Huber function. Note also that Appendix B in Cantoni and Ronchetti (2001) provides formulas for MF when ψ(r) = ψc(r), and this bound is readily obtained using standard functions in R. Furthermore, the computation of the gross-error sensitivity for the level functional of the Wald statistic follows from the same arguments discussed in Example 4. The extension of the proposed construction of confidence intervals is also immediate.
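An illustrative sketch of the sensitivity bound in the unhuberized special case ψ(r) = r (plain quasilikelihood with downweighting), where the estimating function and MFn take closed forms; this simplification is ours and is not the robust estimator with bounded ψ discussed above.

```python
import numpy as np

def logistic_gross_error_bound(X, beta_hat):
    """Bound on gamma(T, F_n) in the special case psi(r) = r of (8) with
    w(x) = min(1, 1/||x||): the estimating function reduces to
    (y_i - p_i) w(x_i) x_i with |y - p| <= 1 and K_tilde = 1, and
    M_Fn = (1/n) sum_i p_i (1 - p_i) w(x_i) x_i x_i^T, so
    gamma(T, F_n) <= lambda_min(M_Fn)^{-1}."""
    w = np.minimum(1.0, 1.0 / np.linalg.norm(X, axis=1))
    p = 1.0 / (1.0 + np.exp(-X @ beta_hat))
    M = (X * (p * (1.0 - p) * w)[:, None]).T @ X / X.shape[0]
    return 1.0 / np.linalg.eigvalsh(M).min()

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(1000), rng.normal(size=1000)])
bound = logistic_gross_error_bound(X, beta_hat=np.array([0.2, 0.5]))
```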
4.4 Validity of the tests
In this subsection we establish statistical consistency guarantees for our differentially private
tests. The next theorem establishes rates of convergence and demonstrates the asymptotic
equivalence between them and their non-private counterparts under both the null distribution
and a local alternative.
Theorem 5. Assume Conditions 1 and 2 hold and let α(·) be the level functional of any of
the tests (11)–(13). Then, for τ ∈ (0, 1), with probability at least 1− τ
|Aα(Fn) − α(F)| ≤ |α(Fn) − α(F)| + C √(log(n) log(2/δ) log(2/τ)) Kn / (√(n/k) ε)

for some positive constant C. Furthermore, if Kn √(log(n) log(1/δ)) / (√(n/k) ε) → 0 as n → ∞, then

Q(Fn) = Q0(Fn) + oP(1),

where Q0(Fn) = Hk⁻¹(α(Fn)).
A direct consequence of Theorem 5 is that the asymptotic distribution of Q(Fn) is the
same as the one of its non-private counterpart Q0(Fn) computed from the level functional
of any of the tests (11)–(13). Therefore the results of Heritier and Ronchetti (1994) also
give the asymptotic distributions of Q(Fn) under both H0 : θ = θ0 and H1,n : θ = θ0 + ∆/√n for some ∆ > 0. In particular, Propositions 1 and 2 of that paper establish that (11) and (12) are asymptotically equivalent as they both converge to a χ²k under H0 and to a χ²k(δ) with δ = ∆ᵀ (V(T, F)(22))⁻¹ ∆ under H1,n. Proposition 3 of the same paper shows that (13) converges instead to a weighted sum of k independent random variables distributed as χ²1 under H0 and to a weighted sum of k independent random variables χ²1(δi) for some δ1, . . . , δk > 0 under H1,n.
4.5 Robustness properties of differentially private tests
The tests associated with the differentially private p-values proposed in Section 4.2 enjoy
some degree of robustness by construction. In particular, it is not difficult to extend the
lower bound of Theorem 3 to the level functionals considered in this section.
Theorem 6. Assume the conditions of Theorem 3, but letting A be any (ε, δ)-differentially private algorithm of the level functional α(F) of either of the tests (11)–(13). Then either

EFn EA[|A(D(Fn)) − α(F)|] ≥ (ρ/16) ⌈log 2/(2ε)⌉ µ γ(U, F)² + o(ρ ⌈log 2/(2ε)⌉)

or

EGn EA[|A(D(Gn)) − α(G)|] ≥ (ρ/16) ⌈log 2/(2ε)⌉ µ γ(U, F)² + o(ρ ⌈log 2/(2ε)⌉),

where µ = −(∂/∂ζ) Hk(q1−α0 ; ζ)|ζ=0, Hk(·, ζ) is the cumulative distribution function of a non-central χ²k(ζ) with non-centrality parameter ζ ≥ 0, q1−α0 is the 1 − α0 quantile of a χ²k distribution and α0 = α(F) is the nominal level of the test.
Similar to Theorem 3, Theorem 6 states that the convergence rate of any differentially private algorithm A estimating the level functional α is lower bounded by the gross-error sensitivity of U(F) in a small neighborhood of F, where U is defined in (9) and (10). Therefore functionals U with diverging influence functions will lead to slower convergence rates for any algorithm A in all such neighborhoods. The result suggests that the influence function has to scale at most as ρ⁻¹ = O(ε√n).
Note that the appearance of the quadratic term γ(U, F)² in the lower bound is intuitive from the definition of α(F) and is in line with the robustness characterization of the level influence function of Heritier and Ronchetti (1994) and Ronchetti and Trojani (2001). In fact we can extend the robustness results of these papers to our setting and show that our tests have stable level and power functions in shrinking contamination neighborhoods of the model when Ψ is bounded.
We need to introduce additional notation in order to state the result. Consider the
(t, n)-contamination neighborhoods of Fθ0 defined by
Ut,n(Fθ0) := { F⁰t,n,G = (1 − t/√n) Fθ0 + (t/√n) G, G arbitrary }
and let Un = U(Fn) be a statistical functional with bounded influence function and such
that U(F ) = 0 and
√n (U(Fn) − U(Ft,n,G)) →d N(0, Ik)
uniformly over the sequence of (t, n)-neighborhoods Ut,n(Fθ0). Further let
{F^alt_η,n}n∈N := { (1 − η/√n) Fθ0 + (η/√n) Fθ1 }n∈N
be a sequence of local alternatives to Fθ0 and
Ut,n(F^alt_η,n) := { F¹t,n,G := (1 − t/√n) F^alt_η,n + (t/√n) G, G arbitrary }
be the corresponding neighborhood of F^alt_η,n for a given n. We denote by {F⁰t,n,G}n∈N a sequence of (t, n, G)-contaminations of the underlying null distribution Fθ0, each of them belonging to the neighborhood Ut,n(Fθ0). Similarly, we denote by {F¹t,n,G}n∈N a sequence of (t, n, G)-contaminations of the underlying local alternatives F^alt_η,n, each of them belonging to the neighborhood Ut,n(F^alt_η,n). Finally, we denote by Aβ and β the power functionals of the tests based on Aα and α respectively.
The following corollary follows from (Ronchetti and Trojani, 2001, Theorems 1–3) and Theorem 5. It shows that the level and power of our differentially private tests are stable in the contamination neighborhoods Ut,n(Fθ0) and Ut,n(F^alt_η,n) when the influence function of the functional U is bounded.
Corollary 3. Our differentially private Wald, score and likelihood ratio type tests have stable level and power functionals when Kn < ∞ in the sense that for all G

lim_{n→∞} Aα(Ft,n,G) = lim_{n→∞} α(Ft,n,G) = α0 + t²µ ‖∫ IF(x; U, Fθ0) dG(x)‖² + o(t²)

and

lim_{n→∞} Aβ(F¹t,n,G) = lim_{n→∞} β(F¹t,n,G) = lim_{n→∞} β(F^alt_η,n) + 2µtη ∫ IF(x; U, F^alt_η,n)ᵀ dG(x) ∫ IF(x; U, Fθ0) dFθ1(x) + o(η),

where µ is as in Theorem 6.
4.6 Accounting for the change of variance sensitivity
In practice the standardizing matrices $M(T, F)$, $U(T, F)$ and $V(T, F)$ are estimated, so the actual form of the functional U defining the test functional is
$$U(F_n) = S(F_n)^{-1/2}\tilde{T}(F_n),$$
where $\tilde{T}$ is such that $\tilde{T}(F) = 0$ and $\sqrt{n}\bigl(\tilde{T}(F_n) - \tilde{T}(F)\bigr) \to_d N(0, S(F))$. The general construction of Section 4.2 is still valid provided additional regularity conditions on $\Psi$ hold. In particular, it remains true that $\gamma(\alpha, F)$ can be used to upper bound $\tilde{\Gamma}_n$ provided $\bigl|\frac{\partial}{\partial\theta_j}\dot{\Psi}\bigr| < \infty$ for all $j = 1, \ldots, p$. This condition implies third order infinitesimal robustness in the sense of La Vecchia et al. (2012). From a practical point of view, an upper bound on $\gamma(\alpha, F_n)$ can be computed in this case using both the influence function and the change-of-variance function of T. The latter accounts for the fact that $S(F)$ is also estimated. We refer the reader to the Appendix for the precise form of the change-of-variance function of general M-estimators and a more detailed discussion of the implications of estimating the variance in the noise calibration of our Gaussian mechanism.
5 Numerical examples
We investigate the finite sample performance of our proposals with simulated and real data. We focus on a linear regression setting where we obtain consistent slope parameter estimates at the model and show that our differentially private tests reach the desired nominal level and have power under the alternative even in mildly contaminated scenarios. We first present a simulation experiment that shows the statistical performance of our methods in small samples before turning to a real data example with a large sample size. For the sake of space we relegate to the Appendix a more extended discussion of other existing methods, some complementary simulation results and a discussion of the evaluation of the constants of Theorems 1 and 4.
5.1 Synthetic data
We consider a simulation setting similar to the one of Salibian-Barrera et al. (2016) in order
to explore the behavior of our consistent differentially private estimates and illustrate the
efficiency loss incurred by them, relative to their non private counterparts. We generate
the linear regression model (6) with $\beta = (1, 1, 0, 0)^T$, $x_i \sim N(0, V)$ and $V = \{0.5^{|j-k|}\}_{j,k=1}^4$.
We illustrate the effect of small amounts of contaminated data by generating outliers in the
responses as well as bad leverage points. This was done by replacing 1% of the values of y
and $x_2$ with observations following a $N(12, 0.1^2)$ and a $N(5, 0.1^2)$ distribution respectively.
All the results reported below were obtained over 5000 replications and sample sizes ranging
from n = 100 to n = 1000.
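As an illustration, the data-generating process just described, including the 1% contamination of the responses and of the second covariate, can be sketched as follows (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, contaminate=False):
    # Covariates x_i ~ N(0, V) with V_{jk} = 0.5^{|j-k|}, p = 4
    p = 4
    V = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    x = rng.multivariate_normal(np.zeros(p), V, size=n)
    beta = np.array([1.0, 1.0, 0.0, 0.0])
    y = x @ beta + rng.normal(size=n)  # model (6) with unit error scale
    if contaminate:
        # Replace 1% of the responses by N(12, 0.1^2) outliers and 1% of
        # the second covariate by N(5, 0.1^2) bad leverage points
        k = max(1, n // 100)
        idx_y = rng.choice(n, size=k, replace=False)
        idx_x = rng.choice(n, size=k, replace=False)
        y[idx_y] = rng.normal(12.0, 0.1, size=k)
        x[idx_x, 1] = rng.normal(5.0, 0.1, size=k)
    return x, y

x, y = simulate(500, contaminate=True)
```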
The differentially private estimator considered here is the same Mallows-type robust regression estimator of Example 2. In particular, we consider the robust estimators of $\beta$ defined by
$$(\hat\beta_0, \hat\beta, \hat\sigma) = \operatorname*{argmin}_{\beta,\sigma}\Bigl\{\sum_{i=1}^n \sigma\,\rho_c\Bigl(\frac{y_i - \beta_0 - x_i^T\beta}{\sigma}\Bigr)w(x_i) + \kappa_c n\sigma\Bigr\},$$
where $\rho_c$ is the Huber loss function with tuning parameter c, $w : \mathbb{R}^p \to \mathbb{R}_{\geq 0}$ is a downweighting function and $\kappa_c = \int \min\{x^2, c^2\}\,d\Phi(x)$ is a constant ensuring that $\hat\sigma$ is consistent. In all our simulations we set $c = 1.345$ and $w(x) = \min\{1, 2/\|x\|^2\}$. This robust estimator uses Huber's Proposal 2 for the estimation of the scale parameter (Huber and Ronchetti, 2009).
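A simplified version of this fit can be sketched via iteratively reweighted least squares with a Huber ψ-function and Mallows-type covariate downweighting. This mimics, but does not reproduce, the R fit used in the paper: in particular, the scale is estimated here by the MAD rather than Huber's Proposal 2, for brevity, and all names are ours:

```python
import numpy as np

def huber_weight(r, c=1.345):
    # Huber weights w(r) = psi_c(r)/r = min(1, c/|r|)
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def mallows_huber_fit(x, y, c=1.345, n_iter=50):
    n, p = x.shape
    X = np.column_stack([np.ones(n), x])  # add intercept
    # Mallows covariate weights w(x) = min{1, 2/||x||^2}
    wx = np.minimum(1.0, 2.0 / np.maximum(np.sum(x**2, axis=1), 1e-12))
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # LS start
    for _ in range(n_iter):
        r = y - X @ beta
        # MAD residual scale (stand-in for Huber's Proposal 2)
        sigma = np.median(np.abs(r - np.median(r))) / 0.6745
        w = huber_weight(r / sigma, c) * wx
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

# tiny usage example at the clean model
rng = np.random.default_rng(1)
x = rng.normal(size=(200, 4))
y = x @ np.array([1.0, 1.0, 0.0, 0.0]) + rng.normal(size=200)
beta_hat = mallows_huber_fit(x, y)
```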
We computed it using the function rlm of the R package “MASS”. Figure 1 shows how the level of privacy affects the performance of estimation relative to that of the target robust estimator. In particular, it illustrates the slower convergence of our differentially private estimators for the range of privacy parameters ε ∈ {0.2, 0.1, 0.05} and δ = 1/n². Figure 2 shows
Figure 1: The plots show the componentwise estimation error of the parameter β for clean data sets ranging from size n = 100 to n = 1000. The dotted dark blue line shows the median estimated value of the target robust estimator, while the light blue shaded area gives pointwise quartiles of the same estimator. The larger shaded gray areas give the pointwise quartiles of the differentially private estimators with privacy parameters ε ∈ {0.2, 0.1, 0.05}.
the empirical level of the Wald statistics for testing the null hypothesis H0 : β3 = β4 = 0
with increasing sample sizes and nominal level of 5%. We see that all the tests have good
empirical coverage and that as expected the differentially private tests are not too sensitive
to the presence of a small amount of contamination. Interestingly, the empirical levels of the
robust test and the differentially private one are nearly identical when the privacy parameters ε ∈ {1, 0.1} and n ≥ 200. When we choose the very stringent ε = 0.001, the noise added
to the target p-value is so large that the resulting test amounts to flipping a coin.
In order to explore the power of our tests we set the regression parameter β to (1, 1, ν, 0)T ,
where ν varied in the range [−0.5, 0.5]. As seen in Figure 3 (a) the power function of the
three tests considered is almost indistinguishable when the data follows the normal model
(6). Figure 3 (b) shows that the power functions of the robust Wald test and the derived differentially private test remain almost identical to the ones they have without contamination.
This reflects the power function stability result established in Theorem 3. From the same
figure, we clearly see that the power function of the Wald test constructed using least squares
estimator is shifted as a result of a small amount of contamination.
Figure 2: (a) shows the convergence of our Wald statistic to the nominal level 0.05 at the model, while (b) shows its behavior under 1% contamination. We report four empirical differentially private level curves: dotted lines, ε = 1; dashed lines, ε = 0.1; dash-dotted lines, ε = 0.01; two-dashed lines, ε = 0.001.
Figure 3: (a) shows the power function of our Wald statistic at the model when n = 200 and β3 ∈ [−0.5, 0.5]; (b) shows its behavior under 1% contamination. We report four empirical differentially private power curves: dotted lines, ε = 1; dashed lines, ε = 0.1; dash-dotted lines, ε = 0.01; two-dashed lines, ε = 0.001.
5.2 Application to housing price data
We revisit the housing price data set considered in Lei (2011). The data consist of 348′189 houses sold in the San Francisco Bay Area between 2003 and 2006, for which we have the price, size, year of transaction, and county in which the house is located. The data set has two continuous covariates (price and size), one ordinal variable with 4 levels (year), and one categorical variable (county) with 9 levels. We exclude the observations with missing entries and follow the preprocessing suggested in Lei (2011), i.e. we filter out data points with price outside the range $10^5 \sim \$9\times 10^5$ or with size larger than 3′000 square feet. After preprocessing, we have 250′070 observations and the county variable has 6 levels after combination. We also consider the same data without filtering price and size, in which case we are left with 286′537 observations. We fitted a simple linear regression model in order to predict the housing price using ordinary least squares, a robust estimator and differentially private estimators. We computed the private estimator described in Section 5.1 as well as the differentially private M-estimators based on a perturbed histogram with enhanced thresholding as in Lei (2011). We assess the performance of the differentially private regression coefficients by comparing them with their non-private counterparts. More specifically, we look at the componentwise relative deviance from the non-private estimates $d_j = |\hat\beta_j^{DP}/\hat\beta_j - 1|$, where $\hat\beta_j$ stands for the jth regression coefficient of either the ordinary least squares or the robust estimator, and $\hat\beta_j^{DP}$ is its differentially private counterpart. In order to account for the randomness of the Gaussian mechanism, we report the mean square error of the deviations $d_j$ obtained over 500 realizations. The results are summarized in Tables 1 and 2.
It is interesting to notice that with the preprocessed data the least squares fit and the
robust fit are very similar. However with the raw data, the large unfiltered values of price
and size affect to a greater extent the estimator of Lei (2011). The accuracy of this estimator
also deteriorates for the raw data as reflected by the larger mean squared deviations obtained
in this case. On the other hand, our differentially private estimators give similar results for
both preprocessed and raw data, in terms of values of the fitted regression coefficients and
mean squared deviations from the target robust estimates. This is a particularly desirable
feature when privacy is an issue since researchers are likely to have limited access to the data
and hence carrying out a careful preprocessing might not be possible. Note also that for the
same level of privacy ε = 10−1, our method provides much more accurate estimation. The
poorer performance of the histogram estimator is to be expected as it suffers from the curse
of dimensionality. In this particular example Lei’s estimator effectively reduces the sample
size to only 2400 pseudo observations that can be sampled from the differentially private
estimated histogram.
Table 1: Linear regression coefficients using the Bay housing data after preprocessing. The second and third columns give the regression coefficients obtained by ordinary least squares and the robust Mallows estimator without privacy guarantees. We compare the performance of their differentially private counterparts using the perturbed histogram approach and our Gaussian mechanism for a fixed privacy level ε = 0.1. The reported numbers are the componentwise root mean square relative errors over 1000 realizations.
ε = 0.1
Method       OLS        Rob        PHOLS    PHRob    DP
Intercept    135141     118479     8.9      10.4     1.4×10⁻⁴
Size         209        216        4.0      5.1      7.3×10⁻²
Year         56375      58136      2.6      5.2      2.8×10⁻⁴
County 2     -53765     -59605     8.1      7.6      2.9×10⁻⁴
County 3     146593     149202     2.7      3.8      1.1×10⁻⁴
County 4     -27546     -29681     37.7     28.4     5.2×10⁻⁴
County 5     45828      41184     7.8      16.5     4.1×10⁻⁴
County 6     -140738    -139780    3.6      7.7      1.1×10⁻⁴
Table 2: Linear regression coefficients using the raw Bay housing data without preprocessing.
The reported numbers are as in Table 1.
ε = 0.1
Method       OLS        Rob        PHOLS    PHRob    DP
Intercept    456344     101524     33.4     28.6     1.5×10⁻⁴
Size         0.5        229        247.1    229.3    6.2×10⁻²
Year         71241      65170      87.8     85.7     2.2×10⁻⁴
County 2     -11261     -53727     416.8    376.9    2.9×10⁻⁴
County 3     275058     196967     82.4     80.7     7.5×10⁻⁵
County 4     -16425     -29337     569.0    519.1    4.8×10⁻⁴
County 5     98775      57524      101.9    95.9     2.6×10⁻⁴
County 6     -149027    -152499    143.3    141.2    9.2×10⁻⁵
We see from the reported values in Tables 1–2 that the accuracy of our private estimator is comparable with that of the perturbed histogram if we impose the much stronger privacy requirement ε = 10⁻³. This feature is also very appealing in practice and confirms what our theory predicts and what we observed in simulations: we can afford a fixed privacy budget with a smaller sample size or, equivalently, for a fixed sample size we can ensure a higher level of privacy using our methods. Note that given the large sample size of this data set, unsurprisingly all the covariates are significantly predictive for the non-private estimators. All univariate Wald statistics for the slope parameters in this example yield p-values smaller than 10⁻¹⁶ for the non-private estimators. Since our differentially private p-values give similar results we chose not to report them.
6 Concluding remarks
We introduced a general framework for differentially private statistical inference for para-
metric models based on M-estimators. The central idea of our approach is to leverage tools
from robust statistics in the design of a mechanism for the release of differentially private
statistical outputs. In particular, we release noisy versions of statistics of interest that we
view as functionals of the empirical distribution induced by the data. We use a bound on their influence function in order to scale the random perturbation added to the desired statistics
to guarantee privacy. As a result, we propose a new class of consistent differentially private
estimators that can be easily and efficiently computed, and provide a general framework for
parametric hypothesis testing with privacy guarantees.
An interesting extension to be explored in the future is the construction of differentially
private tests in the context of nonparametric and high-dimensional regression. In principle
the idea of using the influence function to calibrate the noise added to test functionals also
seems intuitive in these settings, but the technical challenge of these extensions is twofold.
First, there are no general results regarding the level influence function of tests for these
settings. Second, the influence function of nonparametric and high-dimensional penalized
estimators has been formulated for a fixed tuning parameter (Christmann and Steinwart,
2007; Avella-Medina, 2017). Since in practice this parameter is usually chosen by some data
driven criterion, it would be necessary to account for this selection step in the derivation of
differentially private statistics following the approach of this work. Another interesting di-
rection for future research is to explore whether information-standardized influence functions
could be used to derive better or more general differentially private estimators (Hampel et
al., 1986; He and Simpson, 1992). It would also be interesting to explore the construction
of tests based on alternative approaches to differential privacy such as objective function
perturbation (Chaudhuri and Monteleoni, 2008; Chaudhuri et al., 2011; Kifer et al., 2012)
or stochastic gradient descent (Rajkumar and Argawal, 2012; Bassily et al., 2014; Wang et
al., 2015).
References
Abadi, M., Chu, A., Goodfellow, I., McMahan,H.B., Mironov, I., Talwar, K.
and Zhang, L. (2016) Deep learning with differential privacy. In Proceedings of the 2016
ACM SIGSAC Conference on Computer and Communications Security, pp. 308-318.
Avella-Medina, M. (2017). Influence functions for penalized M-estimators. Bernoulli, 23
(4B), p.3178–3196.
Avella-Medina, M. and Brunel, V.E. (2019). Differentially private sub-Gaussian loca-
tion estimators arXiv:1906.11923
Awan, J. and Slavkovic, A. (2018). Differentially private uniformly most powerful tests
for binomial data. Advances in Neural Information Processing Systems pp. 4208–4218.
Awan, J. and Slavkovic, A. (2019). Differentially Private Inference for Binomial Data
arXiv preprint arXiv:1904.00459.
Barrientos, A.F., Reiter, J.P., Machanavajjhala, A., and Chen, Y. (2019). Dif-
ferentially private significance tests for regression coefficients. Journal of Computational
and Graphical Statistics 28, 2, 440–453.
Bassily, R., Smith, A. and Thakurta, A. (2014). Private empirical risk minimization:
efficient algorithms and tight error bounds. IEEE 55th Annual Symposium on Foundations
of Computer Science, p. 464–473.
Belsley, D.A., Kuh, E. and Welsch, R.E. (2005). Regression diagnostics: Identifying
influential data and sources of collinearity. Wiley, New York.
Bernstein, G. and Sheldon, D. (2018). Differentially Private Bayesian Inference for
Exponential Families Advances in Neural Information Processing Systems, pp.2924–2934.
Cai, T.T., Wang, Y. and Zhang, L. (2019). The cost of differential privacy: optimal rates of convergence for parameter estimation with differential privacy. (manuscript).
Canonne, C.L., Kamath, G., McMillan, A., Smith, A. and Ullman, J. (2019). The
structure of optimal private tests for simple hypotheses. Proceedings of the 51st Annual
ACM SIGACT Symposium on Theory of Computing pp. 310–321.
Canonne, C.L., Kamath, G., McMillan, A., Ullman, J. and Zakynthinou, L.
(2019). Private Identity Testing for High-Dimensional Distributions. arXiv:1905.11947.
Cantoni, E. & Ronchetti, E. (2001). Robust inference for generalized linear models.
Journal of the American Statistical Association 96, 1022–30.
Catoni, O. (2012). Challenging the empirical mean and empirical variance: a deviation study. Annales de l’Institut Henri Poincaré, 48(4), p.1148–1185.
Chaudhuri, K. and Monteleoni, C.(2008) Privacy-preserving logistic regression. Ad-
vances in Neural Information Processing Systems, p.289–296.
Chaudhuri, K., Monteleoni, C. and Sarwate, A.D. (2011) Differentially private empir-
ical risk minimization. Journal of Machine Learning Research, 12, p.1069–1109.
Chaudhuri, K. and Hsu, D. (2012). Convergence rates for differentially private statistical estimation. International Conference on Machine Learning, p.155–186.
Chen, Y., Barrientos, A. F., Machanavajjhala, A. and Reiter, J.P (2018) Is my
model any good: differentially private regression diagnostics. Knowledge and Information
Systems, 54(1), 33-64.
Christmann, A. and Steinwart, I. (2007). Consistency and robustness of kernel-based regression in convex risk minimization. Bernoulli, 13(3), pp.799–819.

Clarke, B.R. (1986). Nonsmooth analysis and Fréchet differentiability of M-functionals. Probability Theory and Related Fields, 73(2), pp.197–209.
Dimitrakakis, C., Nelson, B., Mitrokotsa, A. and Rubinstein, B.I. (2014). Robust and private Bayesian inference. In International Conference on Algorithmic Learning Theory, pp. 291–305.

Dimitrakakis, C., Nelson, B., Mitrokotsa, A., Zhang, Z. and Rubinstein, B.I. (2017). Differential privacy for Bayesian inference through posterior sampling. Journal of Machine Learning Research, 18(1), pp. 343–382.
Duchi, J.C., Jordan, M.I. and Wainwright, M.J. (2018). Minimax optimal procedures
for locally private estimation. Journal of the American Statistical Association, 113(521),
pp.182–215.
Dwork, C., McSherry, F., Nissim, K. and Smith, A. (2006). Calibrating noise to
sensitivity in private data analysis. Theory of Cryptography Conference, p. 265–284.
Dwork, C. and Lei, J. (2009). Differential privacy and robust statistics. In Proceedings
of the 41st annual ACM Symposium on Theory of Computing 2009, p.371–380.
Dwork, C. and Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4), 211–407.
Foulds, J., Geumlek, J., Welling, M. and Chaudhuri, K. (2016). On the theory and practice of privacy-preserving Bayesian data analysis. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence.
Gaboardi, M., Lim, H.W., Rogers, R.M. and Vadhan, S.P. (2016). Differentially
private chi-squared hypothesis testing: goodness of fit and independence testing. Interna-
tional Conference on Machine Learning 2016, p.2111–2120.
Hampel, F.R. (1974). The influence curve and its role in robust estimation. Journal of the
American Statistical Association, 69, p.383–393.
Hampel, F.R., Rousseeuw, P.J. and Ronchetti, E. (1981). The change-of-variance curve and optimal redescending M-estimators. Journal of the American Statistical Association 76, p.643–648.
Hampel, F. R., Ronchetti, E.M. Rousseeuw, P. J and Stahel, W. A. (1986). Robust
statistics: the approach based on influence functions. Wiley, New York.
He, X. and Shao, Q.M. (2000). On parameters of increasing dimension. Journal of
Multivariate Analysis 73, p.120–135.
He, X. and Simpson, D.G. (1992). Robust direction estimation. Annals of Statistics 20,1,
p.351–369.
Heritier, S. and Ronchetti, E. (1994). Robust bounded-influence tests in general para-
metric models. Journal of the American Statistical Association 89, p.897–904.
Huber, P. (1964). Robust estimation of a location parameter. Ann. Math. Statist., 35,
p.73–101.
Huber, P. (1967). The behavior of maximum likelihood estimates under nonstandard
conditions. Proceedings of the fifth Berkeley symposium on mathematical statistics and
probability, Vol 1, 163–168.
Huber, P. (1981). Robust Statistics. Wiley, New York.
Huber, P. and Ronchetti, E. (2009). Robust Statistics, 2nd edition. Wiley, New York.
Kifer, D., Smith, A. and Thakurta, A. (2012). Private convex empirical risk minimization and high-dimensional regression. Conference on Learning Theory, 25, p.1–25.
La Vecchia, D., Ronchetti, E. and Trojani, F. (2012). Higher-order infinitesimal
robustness. Journal of the American Statistical Association, 107, p.1546–1557.
Lehmann, E. L. and Romano, J.P. (2005). Testing Statistical Hypothesis, 3rd edition.
Springer, New York.
Lei, J. (2011). Differentially private M-estimators. Advances in Neural Information Pro-
cessing Systems, p.361–369.
Liu, F. (2016). Model-based differentially private data synthesis. arXiv:1606.08052.
Machanavajjhala, A., Kifer, D., Abowd, J., Gehrke, J. and Vilhuber, L. (2008)
Privacy: Theory meets practice on the map. In Proceedings of the 2008 IEEE 24th Inter-
national Conference on Data Engineering pp. 277–286.
Maronna, R., Martin, R. and Yohai, V. (2006). Robust Statistics: Theory and Methods.
Wiley, New York.
McCullagh, P. & Nelder, J. A. (1989). Generalized Linear Models, 2nd edition. London: Chapman & Hall/CRC.
McSherry, F. and Talwar, K. (2007). Mechanism design via differential privacy. In IEEE Symposium on Foundations of Computer Science 2007.
Narayanan, A. and Shmatikov, V. (2008) Robust de-anonymization of large sparse
datasets In IEEE Symposium on Security and Privacy 2008, p.111–125.
Nissim, K, Rashkodnikova, S. and Smith, A. (2007) Smooth sensitivity and sampling
in private data analysis. In Proceedings of the 39th annual ACM Symposium on Theory
of Computing 2007, p.75–84.
Rajkumar, A. and Argawal, S. (2012). A differentially private stochastic gradient de-
scent algorithm for multiparty classification. International Conference on Artificial Intel-
ligence and Statistics, p.933–941.
Reiter, J. (2002). Satisfying disclosure restrictions with synthetic data sets. Journal of
official Statistics, 18(4), 531–544.
Reiter, J. (2005). Releasing multiply-imputed, synthetic public use microdata: An il-
lustration and empirical study. Journal of Royal Statistical Society, Series A, 168, 185
–205.
Hall, R., Rinaldo, A. and Wasserman, L. (2012). Random Differential Privacy. Journal
of Privacy and Confidentiality 4(2), p.43–59
Ronchetti, E. and Trojani, F. (2001). Robust inference with GMM estimators. Journal
of Econometrics, 101, p.37–69.
Rubin, D. B. (1993). Statistical disclosure limitation. Journal of official Statistics, 9(2),
461–468.
Salibian-Barrera, M., Van Aelst, S. and Yohai, V.J. (2016). Robust tests for linear
regression models based on τ -estimates. Computational Statistics and Data Analysis, 93,
p.436–455.
Sheffet, O. (2017). Differentially private ordinary least squares. International Conference
on Machine Learning, p.3105–3114.
Sheffet, O. (2018). Locally private hypothesis testing. International Conference on Ma-
chine Learning, p.4605–4614.
Sheffet, O. (2019). Old techniques in differentially private linear regression. International
Conference on Algorithmic Learning Theory, p.788–826.
Smith, A. (2008). Efficient, differentially private point estimators. arXiv preprint
arXiv:0809.4794.
Smith, A. (2011). Privacy-preserving statistical estimation with optimal convergence rates.
In Proceedings of the 43rd annual ACM symposium on Theory of computing, p.813–822.
Sweeney, L. (1997). Weaving technology and policy together to maintain confidentiality. The Journal of Law, Medicine & Ethics, 25(2-3), p.98–110.
Uhler, C., Slavkovic, A. and Fienberg, S. (2013). Privacy-preserving data sharing for
genome-wide association studies. Journal of Privacy and Confidentiality, 5. p.137–166.
Vershynin, R. (2018). High-dimensional probability: An introduction with applications in
data science. Cambridge University Press, 2018.
von Mises, R. (1947). On the asymptotic distribution of differentiable statistical function-
als. Annals of Mathematical Statistics, 18, p.309–348.
Wang, Y., Lee, J. and Kifer, D. (2015). Revisiting differentially private hypothesis tests
for categorical data. ArXiv:1511.03376v4.
Wang, Y.X., Fienberg, S. and Smola, A. (2015). Privacy for free: posterior sam-
pling and stochastic gradient descent Monte Carlo. International Conference on Machine
Learning, p.2493–2502
Wasserman, L. and Zhou, S. (2010). A statistical framework for differential privacy.
Journal of the American Statistical Association, 105, p.375–389.
Wedderburn, R. (1974). Quasi-likelihood functions, generalized linear models, and the
Gauss-Newton method. Biometrika 61, 439–47.
Zhelonkin, M. (2013). Robustness in sample selection models, Ph.D. thesis, University of
Geneva.
Supplementary File for
“Privacy-preserving parametric
inference: a case for robust
statistics”
Marco Avella-Medina∗
November 20, 2019 (First version: May 15, 2018)
Appendix A: proof of main results
Proof of Theorem 1
Proof. Our argument consists of using Lemmas 1 and 2 to show that
√
log(n)
n
γ(T, Fn) upper
bounds the ξ-smooth sensitivity of the M-functional T . This suffices to show the desired re-
sult since choosing ξ = ε
4{p+2 log(2/δ)} guarantees (ε, δ)-differential privacy as shown in (Nissim
et al., 2007, Lemmas 2.6 and 2.9).
From Lemma 2 we have that
√
log nγ(T, Fn) > Γn for n ≥ (C ′)2m log(1/δ){ 2Lnbλmin(MFn )(C1+
C2Kn/b)}2. Given Lemma 1, it therefore remains to show that
√
log nγ(T, Fn)
n
≥
1
bn
Kn exp
(
− ξC
√
mn log(2/δ) + ξ
)
. (14)
∗Columbia University, Department of Statistics, New York, NY, USA, email:
[email protected]. The author is grateful for the financial support of the Swiss National
Science Foundation and would like to thank Roy Welsch for many helpful discussions.
Further note that $\gamma(T, F_n) \geq K_n/B_n$, where $B_n = \lambda_{\max}(M_F)$. Hence in order to show (14) it would suffice to establish that
$$\sqrt{\log(n)} \geq \frac{B_n}{b}\exp\bigl(-\xi C\sqrt{mn\log(2/\delta)} + \xi\bigr)$$
or equivalently
$$2C\xi\sqrt{mn\log(2/\delta)} - 2\xi - 2\log(B_n/b) \geq -\log\log(n). \qquad (15)$$
Since $\xi \leq \frac{\varepsilon}{4\{p+2\log(2/\delta)\}} \leq 1$, the left hand side of (15) will be nonnegative if
$$n \geq \Bigl\{1 + \frac{\log(B_n/b)}{\xi}\Bigr\}^2\frac{1}{C^2 m\log(2/\delta)} \geq \Bigl[1 + \frac{4\{p + 2\log(2/\delta)\}\log(B_n/b)}{\varepsilon}\Bigr]^2\frac{1}{C^2 m\log(2/\delta)}$$
which holds by assumption. We have thus established (14) and hence that $\frac{\sqrt{\log(n)}}{n}\gamma(T, F_n)$ upper bounds the smooth sensitivity of T. Therefore the Gaussian mechanism with scaling $\gamma(T, F_n)\frac{5\sqrt{2\log(n)\log(2/\delta)}}{\varepsilon n}$ guarantees (ε, δ)-differential privacy (Nissim et al., 2007, Lemmas 2.6 and 2.9).
In addition to Conditions 1–3 discussed in the main document, the statements of Lemmas 1 and 2 require three additional definitions introduced in Chaudhuri and Hsu (2012). The first two are fixed scale versions of the influence function and the gross error sensitivity, i.e. for a fixed ρ > 0, we define
$$\mathrm{IF}_\rho(x; T, F) := \frac{T((1-\rho)F + \rho\delta_x) - T(F)}{\rho}$$
and
$$\gamma_\rho(T, F) := \sup_{x\in\mathcal{X}}\|\mathrm{IF}_\rho(x; T, F)\|.$$
The third important quantity appearing in our analysis is the supremum, over a Borel–Cantelli type neighborhood, of the gross-error sensitivity, i.e.
$$\Gamma_n := \sup\Bigl\{\gamma_{1/n}(T, G) : d_\infty(F_n, G) \leq C\sqrt{\frac{m\log(2/\delta)}{n}}\Bigr\}. \qquad (16)$$
We are now ready to state the two main auxiliary lemmas.
Lemma 1. Assume Conditions 1 and 2 hold. Then
$$SS_\xi(T, \mathcal{D}(F_n)) \leq \max\Bigl\{\frac{2\Gamma_n}{n},\ \frac{1}{bn}K_n\exp\bigl(-C\xi\sqrt{mn\log(2/\delta)} + \xi\bigr)\Bigr\},$$
where C is as in (16).
Proof. We adapt Lemma 1 in Chaudhuri and Hsu (2012) to our setting. We will show that for any $\mathcal{D}(G_1) \in \mathbb{R}^{n\times m}$ we have that
$$e^{-\xi d_H(\mathcal{D}(F_n),\mathcal{D}(G_1))} LS\bigl(T, \mathcal{D}(G_1)\bigr) \leq \max\Bigl\{2\Gamma_n/n,\ \frac{1}{bn}K_n\exp\bigl(-\xi n r_n + \xi\bigr)\Bigr\},$$
where $r_n = C\sqrt{\frac{m\log(2/\delta)}{n}}$. For this we consider two possible cases. First suppose that $[d_H(\mathcal{D}(F_n),\mathcal{D}(G_1)) + 1]/n > r_n$. Letting $G_1'$ be such that $d_H(\mathcal{D}(G_1),\mathcal{D}(G_1')) = 1$ and taking $\rho = 1$ in Lemma 7 we get that $LS(T,\mathcal{D}(G_1)) \leq \frac{K_n}{bn}$ since
$$\begin{aligned}
\|T(G_1) - T(G_1')\| &\leq \Bigl\|\int_0^1\!\int \mathrm{IF}\bigl(x; T, (1-t)G_1 + tG_1'\bigr)\,d(G_1 - G_1')\,dt\Bigr\| \\
&\leq d_\infty(G_1, G_1')\sup_{t\in[0,1]}\gamma\bigl(T, (1-t)G_1 + tG_1'\bigr) \leq \frac{K_n}{bn}. \qquad (17)
\end{aligned}$$
Therefore
$$e^{-\xi d_H(\mathcal{D}(F_n),\mathcal{D}(G_1))} LS\bigl(T, \mathcal{D}(G_1)\bigr) \leq \frac{K_n}{bn}\exp\bigl(-\xi n r_n + \xi\bigr).$$
Suppose now that $[d_H(\mathcal{D}(F_n),\mathcal{D}(G_1)) + 1]/n \leq r_n$ and fix $\mathcal{D}(G_2) \in \mathbb{R}^{n\times m}$ such that $d_H(\mathcal{D}(G_1),\mathcal{D}(G_2)) = 1$. Let $j \in \{1, \ldots, n\}$ be the index at which $\mathcal{D}(G_1)$ and $\mathcal{D}(G_2)$ differ. Finally let $\mathcal{D}(G_3) \in \mathbb{R}^{(n-1)\times m}$ be the data set obtained by removing the $j$th element of $\mathcal{D}(G_1)$. Then by the triangle inequality
$$d_\infty(F_n, G_3) \leq d_\infty(F_n, G_1) + d_\infty(G_3, G_1) \leq \bigl[d_H(\mathcal{D}(F_n),\mathcal{D}(G_1)) + 1\bigr]/n \leq r_n$$
and hence $\gamma_{1/n}(T, G_3) \leq \Gamma_n$. Furthermore, using the triangle inequality we have that
$$\|T(G_1) - T(G_2)\| = \|T(G_1) - T(G_3) + T(G_3) - T(G_2)\| = \frac{1}{n}\|\mathrm{IF}_{1/n}(x_j; T, G_3) - \mathrm{IF}_{1/n}(x_j'; T, G_3)\| \leq \frac{2}{n}\gamma_{1/n}(T, G_3) \leq \frac{2\Gamma_n}{n}.$$
Since the last bound holds for any choice of $\mathcal{D}(G_2)$ we see that $LS(T, \mathcal{D}(G_1)) \leq 2\Gamma_n/n$ and consequently $e^{-\xi d_H(\mathcal{D}(F_n),\mathcal{D}(G_1))} LS(T, \mathcal{D}(G_1)) \leq 2\Gamma_n/n$.
Lemma 2. Assume Conditions 1–3 hold. Then
$$\Gamma_n \leq 2\gamma(T, F_n) + C'\sqrt{\frac{m\log(2/\delta)}{n}}\,K_n\lambda_{\max}(M_{F_n}^{-1})\bigl\{2L_n/b + \lambda_{\max}(M_{F_n}^{-1})(C_1 + 2C_2K_n/b)\bigr\}$$
for some positive constant C′.
Proof. First note that by Lemma 10
$$\gamma_{1/n}(T, G) \leq 2\gamma(T, G) + O\Bigl[\frac{K_n\lambda_{\max}(M_G^{-1})}{n}\bigl\{2L_n/b + \lambda_{\max}(M_G^{-1})(C_1 + 2C_2K_n/b)\bigr\}\Bigr] \qquad (18)$$
as long as $M_G$ is positive definite for all $G \in \bigl\{H : d_\infty(F_n, H) \leq C\sqrt{\frac{m\log(2/\delta)}{n}}\bigr\}$. Provided this condition holds, it would suffice to show that
$$\gamma(T, G) \leq \gamma(T, F_n) + O\Bigl[\sqrt{\frac{m\log(2/\delta)}{n}}\,K_n\lambda_{\max}(M_{F_n}^{-1})\bigl\{2L_n/b + \lambda_{\max}(M_G^{-1})(C_1 + 2C_2K_n/b)\bigr\}\Bigr].$$
This last inequality is a consequence of Lemma 11.
Proof of Theorem 2
Proof. First note that Theorem 3.1.1 in Vershynin (2018) guarantees that a p-dimensional standard Gaussian random vector concentrates around $\sqrt{p}$. Specifically, for $Z \sim N_p(0, I)$ and with probability $1 - \tau$, we have that $\|Z\| - \sqrt{p} \leq C\sqrt{\log(1/\tau)}$ for some universal constant C. Applying this result to our Gaussian mechanism shows that with probability $1 - \tau$
$$\|A_T(F_n) - T(F_n)\| \leq C\gamma(T, F_n)\frac{5\sqrt{2\log(n)\log(2/\delta)}}{\varepsilon n}\bigl(\sqrt{p} + \sqrt{\log(1/\tau)}\bigr).$$
The first claimed result follows from the above expression since by Conditions 1 and 2 we have that $\gamma(T, F_n) = O(K_n)$. The second claim is verified by further noting that
$$\begin{aligned}
A_T(F_n) - T(F) &= T(F_n) - T(F) + \gamma(T, F_n)\frac{5\sqrt{2\log(n)\log(2/\delta)}}{\varepsilon n}Z \\
&= T(F_n) - T(F) + O_p\Bigl(\frac{K_n\sqrt{\log(n)\log(2/\delta)}}{\varepsilon n}\Bigr) \\
&= T(F_n) - T(F) + o_p(1/\sqrt{n}),
\end{aligned}$$
where the last equality leveraged the assumed scaling $\frac{K_n\sqrt{\log(n)\log(1/\delta)}}{\varepsilon\sqrt{n}} = o(1)$.
Proof of Corollary 2
Proof. It is easy to check that Tn satisfies the conditions of Theorem 2 and that Tn converges
to the maximum likelihood M-functional.
Proof of Theorem 3
Proof. First note that
$$\begin{aligned}
\gamma_\rho(T, F) &= \sup_x \frac{1}{\rho}\bigl\|T((1-\rho)F + \rho\Delta_x) - T(F)\bigr\| \\
&\geq \frac{1}{\rho}\bigl\|T((1-\rho)F + \rho\Delta_x) - T(F)\bigr\| \\
&= \frac{1}{\rho}\Bigl\|\rho\int \mathrm{IF}(z; T, F)\,d(\Delta_x - F)(z) + o(\rho)\Bigr\| \\
&= \frac{1}{\rho}\bigl\|\rho\,\mathrm{IF}(x; T, F) + o(\rho)\bigr\| \\
&\geq \bigl\|\mathrm{IF}(x; T, F)\bigr\| + o(1),
\end{aligned}$$
where a von Mises expansion justifies the second equality and the third one follows from (2). Taking the supremum over x in the last inequality we obtain
$$\gamma_\rho(T, F) \geq \gamma(T, F) + o(1).$$
The proof is completed by incorporating this result in the lower bound provided by Proposition 1 below.
Proposition 1 is a generalization of Theorem 1 in Chaudhuri and Hsu (2012) and it constitutes a somewhat more general result than Theorem 3, since it gives a lower bound for any differentially private algorithm without restricting $T(F)$ to be an M-functional.

Proposition 1. Let $\varepsilon \in (0, \frac{\log 2}{2})$ and $\delta \in (0, \frac{\varepsilon}{17})$. Let $\mathcal{F}$ be the family of all distributions over $\mathcal{X} \subset \mathbb{R}^m$ and let $A$ be any $(\varepsilon, \delta)$-differentially private algorithm approximating $T(F)$, where $T : \mathcal{F} \mapsto \mathbb{R}^k$ with $k \in \{1, \dots, p\}$. For all $n \in \mathbb{N}$ and $F \in \mathcal{F}$, there exists a radius $\rho = \rho(n) = \frac{1}{n}\lceil\frac{\log 2}{2\varepsilon}\rceil$ and a distribution $G \in \mathcal{F}$ with $d_{TV}(F, G) \leq \rho$, such that either
$$\mathbb{E}_{F_n}\mathbb{E}_A\big[\|A(\mathcal{D}(F_n)) - T(F)\|\big] \geq \frac{\rho}{16}\gamma_\rho(T, F) \quad \text{or} \quad \mathbb{E}_{G_n}\mathbb{E}_A\big[\|A(\mathcal{D}(G_n)) - T(G)\|\big] \geq \frac{\rho}{16}\gamma_\rho(T, F),$$
where $F_n$ and $G_n$ denote empirical distributions obtained from $F$ and $G$ respectively.
Proof. The claimed result can be established by extending to our multivariate setting the arguments provided in Theorem 1 of Chaudhuri and Hsu (2012). The only missing ingredient is a multivariate version of their Lemma 3, which we derive in Lemma 4 below.

We will use the following result in the proof of Lemma 4.
Lemma 3. Let $A : \mathbb{R}^{n \times m} \mapsto \mathbb{R}^k$ for $k \in \{1, \dots, p\}$ be any $(\varepsilon, \delta)$-differentially private algorithm, and let $D \in \mathcal{X}^{n \times m}$ and $D' \in \mathcal{X}^{n \times m}$ be two data sets which differ in fewer than $n_0 < n$ entries. Then, for any $S$,
$$\mathbb{P}[A(D) \in S] \geq e^{-n_0\varepsilon}\,\mathbb{P}[A(D') \in S] - \frac{\delta}{1 - e^{-\varepsilon}}.$$
Proof. The same arguments of Lemma 2 in Chaudhuri and Hsu (2012) apply here.
Lemma 4. Let $D \in \mathcal{X}^{n \times m}$ and $D' \in \mathcal{X}^{n \times m}$ be two data sets that differ in the value of at most $n_0 < n$ entries. Furthermore let $A : \mathbb{R}^{n \times m} \mapsto \mathbb{R}^k$ for $k \in \{1, \dots, p\}$ be any $(\varepsilon, \delta)$-differentially private algorithm. For all $0 < \gamma < 1/3$ and for all $\tau, \tau' \in \mathbb{R}^k$, if $n_0 \leq \frac{\log(1/(2\gamma))}{\varepsilon}$ and if $\delta \leq \frac{1}{4}\gamma(1 - e^{-\varepsilon})$, then
$$\mathbb{E}_A\big[\|A(D) - \tau\| + \|A(D') - \tau'\|\big] \geq \gamma\|\tau - \tau'\|.$$
Proof. We adapt the proof of Lemma 3 in Chaudhuri and Hsu (2012) to our setting. It suffices to construct two disjoint hyperrectangles $I$ and $I'$ such that
$$\mathbb{P}_A[A(D) \in I] + \mathbb{P}_A[A(D') \in I'] \leq 2(1 - \gamma) \tag{19}$$
$$\mathbb{E}_A\big[\|A(D) - \tau\| \,\big|\, A(D) \notin I\big] \geq \tfrac{1}{2}\|\tau - \tau'\| \quad \text{and} \quad \mathbb{E}_A\big[\|A(D') - \tau'\| \,\big|\, A(D') \notin I'\big] \geq \tfrac{1}{2}\|\tau - \tau'\| \tag{20}$$
since they imply that
$$\mathbb{E}_A\big[\|A(D) - \tau\| + \|A(D') - \tau'\|\big] > \mathbb{E}_A\big[\|A(D) - \tau\| \,\big|\, A(D) \notin I\big]\mathbb{P}_A[A(D) \notin I] + \mathbb{E}_A\big[\|A(D') - \tau'\| \,\big|\, A(D') \notin I'\big]\mathbb{P}_A[A(D') \notin I'] \geq \tfrac{1}{2}\|\tau - \tau'\|\big(\mathbb{P}_A[A(D) \notin I] + \mathbb{P}_A[A(D') \notin I']\big) \geq \gamma\|\tau - \tau'\|.$$
Let us now build $I$ and $I'$. Write $\tau = (\tau_1, \dots, \tau_k)$ and $\tau' = (\tau'_1, \dots, \tau'_k)$. Without loss of generality assume that $\tau_j < \tau'_j$ and let $t_j = \frac{1}{2}(\tau'_j - \tau_j)$ for all $j = 1, \dots, k$. Further let $I_j = (\tau_j - t_j, \tau_j + t_j)$ and $I'_j = (\tau'_j - t_j, \tau'_j + t_j)$. By construction the hyperrectangles $I := I_1 \times I_2 \times \cdots \times I_k$ and $I' := I'_1 \times I'_2 \times \cdots \times I'_k$ are disjoint and satisfy (20). It remains to show that (19) holds. We proceed by contradiction. Suppose (19) does not hold; then
$$2\gamma > \mathbb{P}_A\big[A(D) \notin I\big] + \mathbb{P}_A\big[A(D') \notin I'\big] \geq \mathbb{P}_A\big[A(D) \in I'\big] + \mathbb{P}_A\big[A(D') \in I\big] \geq e^{-n_0\varepsilon}\big(\mathbb{P}_A\big[A(D') \in I'\big] + \mathbb{P}_A\big[A(D) \in I\big]\big) - \frac{2\delta}{1 - e^{-\varepsilon}} \geq e^{-n_0\varepsilon}2(1 - \gamma) - \frac{\gamma}{2}.$$
The first inequality follows by assumption, the second from $I \cap I' = \emptyset$, the third one from Lemma 3 and the last one by assumption and $\delta \leq \frac{1}{4}\gamma(1 - e^{-\varepsilon})$. Further note that the last inequality leads to
$$e^{-n_0\varepsilon}2(1 - \gamma) - \frac{\gamma}{2} \geq 4\gamma(1 - \gamma) - \frac{\gamma}{2} \geq \frac{7}{2}\gamma - 4\gamma^2 > 2\gamma$$
for $\gamma < 1/3$, since $n_0 \leq \frac{\log(1/(2\gamma))}{\varepsilon}$ implies $e^{-n_0\varepsilon} \geq 2\gamma$. This is a contradiction and therefore (19) holds.
Proof of Theorem 4

Proof. The proof follows from the arguments of Theorem 1 by combining Lemmas 5, 6 and the results of Nissim et al. (2007).

From Lemma 6 we have that $\sqrt{\log n}\,\gamma(\alpha, F_n) > \tilde{\Gamma}_n$ for $n \geq C'\sqrt{m\log(2/\delta)}K_n\lambda_{\max}(M_F^{-1})\big\{1 + 2L_n/b + \lambda_{\max}(M_F^{-1})(C_1 + 2C_2K_n/b)\big\}$. Given Lemma 5, it therefore remains to show that
$$\frac{\sqrt{\log n}\,\gamma(\alpha, F_n)}{n} \geq C_{n,k}\Gamma_{U,n}\exp\big(-C\xi\sqrt{mn\log(2/\delta)} + \xi\big). \tag{21}$$
Hence in order to show (21) it would suffice to establish that
$$2C\xi\sqrt{mn\log(2/\delta)} - 2\xi - 2\log\Big(\frac{nC_{n,k}\Gamma_{U,n}}{\gamma(\alpha, F_n)}\Big) \geq -\log\log(n). \tag{22}$$
Letting $C_{n,k,U} = \frac{nC_{n,k}\Gamma_{U,n}}{\gamma(\alpha, F_n)}$ and since $\xi \leq \frac{\varepsilon}{4\{p + 2\log(2/\delta)\}} \leq 1$, the left hand side of (22) will be nonnegative if
$$n \geq \Big\{1 + \frac{\log(C_{n,k,U})}{\xi}\Big\}^2\frac{1}{C^2m\log(2/\delta)} \geq \Big[1 + \frac{4\{p + 2\log(2/\delta)\}\log(C_{n,k,U})}{\varepsilon}\Big]^2\frac{1}{C^2m\log(2/\delta)}.$$
This last inequality holds by assumption.

We introduce an analogue of the term $\Gamma_n$ used in the proof of Theorem 1, but in the context of level functionals, namely
$$\tilde{\Gamma}_n := \sup\Big\{\gamma_{1/n}(\alpha, G) : d_\infty(F_n, G) \leq C\sqrt{\frac{m\log(2/\delta)}{n}}\Big\}. \tag{23}$$
$\tilde{\Gamma}_n$ plays an important role in the analysis of our differentially private p-values. Lemma 5 guarantees that for large $n$ it suffices to control $\tilde{\Gamma}_n$ in order to bound the smooth sensitivity, while Lemma 6 shows that $\tilde{\Gamma}_n$ is roughly of the same order as the empirical level gross-error sensitivity.
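To make the smooth-sensitivity quantity $SS_\xi$ concrete, the sketch below implements the classical closed-form smooth sensitivity of the median of bounded data from Nissim et al. (2007). The function name and the $O(n^2)$ loop are our own illustrative choices; this is not the paper's algorithm for the level functional $\alpha$:

```python
import numpy as np

def smooth_sensitivity_median(x, xi, lam):
    """Smooth sensitivity of the median for data in [0, lam], following the
    closed form of Nissim, Raskhodnikova and Smith (2007).  Illustrative
    implementation: SS = max_k exp(-k*xi) * LS_k, where LS_k is the local
    sensitivity at Hamming distance k."""
    x = np.sort(np.clip(np.asarray(x, dtype=float), 0.0, lam))
    n = x.size
    m = n // 2                               # index of the median
    # pad with the extreme values 0 and lam so out-of-range indices are valid
    padded = np.concatenate((np.zeros(n + 1), x, np.full(n + 1, lam)))
    off = n + 1                              # padded[off + i] == x[i]
    ss = 0.0
    for k in range(n + 1):
        ls_k = max(padded[off + m + t] - padded[off + m + t - k - 1]
                   for t in range(k + 2))
        ss = max(ss, np.exp(-k * xi) * ls_k)
    return ss

x = np.linspace(0.1, 0.9, 101)
print(smooth_sensitivity_median(x, xi=0.5, lam=1.0))
```

The exponential discount $e^{-k\xi}$ is exactly what allows the two-case analysis in the proof of Lemma 5 below: data sets far from $\mathcal{D}(F_n)$ contribute only exponentially small terms.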
Lemma 5. Assume that Conditions 1 and 2 hold. Then
$$SS_\xi(\alpha, \mathcal{D}(F_n)) \leq \max\Big\{\frac{2\tilde{\Gamma}_n}{n},\; C_{n,k}\Gamma_{U,n}\exp\big(-C\xi\sqrt{mn\log(2/\delta)} + \xi\big)\Big\},$$
where $C$ is as in (23), $\Gamma_{U,n} = \sup\big\{\sup_{t \in [0,1]}\gamma\big(U, (1-t)G_n + tG'_n\big) : d_H\big(\mathcal{D}(G_n), \mathcal{D}(G'_n)\big) = 1,\; G_n, G'_n \in \mathcal{G}_n\big\}$ and $C_{n,k} = \frac{2(k-1)^{(k-1)/2}e^{-(k-1)/2}}{\sqrt{n}\,2^{k/2}\Gamma(k/2)}$, where $\Gamma(\cdot)$ is the gamma function.
Proof. The result follows from arguments similar to those of Lemma 1. We will show that for any $\mathcal{D}(G_1) \in \mathbb{R}^{n \times m}$ we have that
$$e^{-\xi d_H(\mathcal{D}(F_n), \mathcal{D}(G_1))}LS(\alpha, \mathcal{D}(G_1)) \leq \max\Big\{\frac{2\tilde{\Gamma}_n}{n},\; C_{n,k}\Gamma_{U,n}\exp\big(-\xi(nr_n - 1)\big)\Big\},$$
where $r_n = C\sqrt{\frac{m\log(2/\delta)}{n}}$. For this we consider two possible cases. First suppose that $[d_H(\mathcal{D}(F_n), \mathcal{D}(G_1)) + 1]/n > r_n$. Letting $G'_1$ be such that $d_H\big(\mathcal{D}(G_1), \mathcal{D}(G'_1)\big) = 1$ and taking $\rho = 1$ in Lemma 7, we get that $LS\big(\alpha, \mathcal{D}(G_1)\big) \leq C_{n,k}\Gamma_{U,n}$ since
$$|\alpha(G_1) - \alpha(G'_1)| \leq \Big|\int_0^1\int\mathrm{IF}\big(x; \alpha, (1-t)G_1 + tG'_1\big)\,d(G_1 - G'_1)\,dt\Big| \leq d_\infty(G_1, G'_1)\sup_{t \in [0,1]}\gamma\big(\alpha, (1-t)G_1 + tG'_1\big) \leq \frac{1}{n}\sup_{t \in [0,1]}\sup_x\Big|H'_k\big(n\|U((1-t)G_1 + tG'_1)\|^2\big)2nU\big((1-t)G_1 + tG'_1\big)^T\mathrm{IF}\big(x; U, (1-t)G_1 + tG'_1\big)\Big| \leq 2\sup_{z > 0}\{H'_k(nz^2)z\}\sup_{t \in [0,1]}\sup_x\big\|\mathrm{IF}\big(x; U, (1-t)G_1 + tG'_1\big)\big\| \leq C_{n,k}\Gamma_{U,n}. \tag{24}$$
The last inequality used the definition of $\Gamma_{U,n}$ and $\sup_{z > 0}\{H'_k(nz^2)z\} = \frac{(k-1)^{(k-1)/2}e^{-(k-1)/2}}{\sqrt{n}\,2^{k/2}\Gamma(k/2)}$. Therefore
$$e^{-\xi d_H(\mathcal{D}(F_n), \mathcal{D}(G_1))}LS\big(\alpha, \mathcal{D}(G_1)\big) \leq C_{n,k}\Gamma_{U,n}\exp\big(-\xi nr_n + \xi\big).$$
Suppose now that $[d_H(\mathcal{D}(F_n), \mathcal{D}(G_1)) + 1]/n \leq r_n$ and fix $\mathcal{D}(G_2) \in \mathbb{R}^{n \times m}$ such that $d_H(\mathcal{D}(G_1), \mathcal{D}(G_2)) = 1$. Let $j \in \{1, \dots, n\}$ be the index at which $\mathcal{D}(G_1)$ and $\mathcal{D}(G_2)$ differ. Finally let $\mathcal{D}(G_3) \in \mathbb{R}^{(n-1) \times m}$ be the data set obtained by removing the $j$th element of $\mathcal{D}(G_1)$. Then by the triangle inequality
$$d_\infty(F_n, G_3) \leq d_\infty(F_n, G_1) + d_\infty(G_3, G_1) \leq [d_H(\mathcal{D}(F_n), \mathcal{D}(G_1)) + 1]/n \leq r_n$$
and hence $\gamma_{1/n}(\alpha, G_3) \leq \tilde{\Gamma}_n$. Therefore simple calculations show that
$$\|\alpha(G_1) - \alpha(G_2)\| = \|\alpha(G_1) - \alpha(G_3) + \alpha(G_3) - \alpha(G_2)\| \leq \frac{2}{n}\gamma_{1/n}(\alpha, G_3) \leq \frac{2\tilde{\Gamma}_n}{n}.$$
Since the bound holds for any choice of $\mathcal{D}(G_2)$, we see that $LS(\alpha, \mathcal{D}(G_1)) \leq 2\tilde{\Gamma}_n/n$ and consequently $e^{-\xi d_H(\mathcal{D}(F_n), \mathcal{D}(G_1))}LS(\alpha, \mathcal{D}(G_1)) \leq 2\tilde{\Gamma}_n/n$.
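The closed form of $\sup_{z>0}\{H'_k(nz^2)z\}$ used in (24) can be checked by a grid search, taking $H_k$ to be the chi-square$(k)$ cdf and $H'_k$ its density. A small sketch (assuming `scipy` is available; names are illustrative):

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import gamma as gamma_fn

def sup_Hk_prime(n, k):
    """Closed form of sup_{z>0} H'_k(n z^2) z, with H_k the chi-square(k) cdf
    and H'_k its density; this equals C_{n,k}/2 in Lemma 5."""
    return ((k - 1) ** ((k - 1) / 2) * np.exp(-(k - 1) / 2)
            / (np.sqrt(n) * 2 ** (k / 2) * gamma_fn(k / 2)))

n, k = 50, 3
z = np.linspace(1e-6, 2.0, 200001)
numeric = np.max(chi2.pdf(n * z ** 2, df=k) * z)   # grid search over z
print(numeric, sup_Hk_prime(n, k))                 # the two values agree
```

The maximizer is $z^* = \sqrt{(k-1)/n}$, which is how the $(k-1)^{(k-1)/2}e^{-(k-1)/2}$ factor arises.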
Lemma 6. Assume Conditions 1–3. Then
$$\tilde{\Gamma}_n \leq 2\gamma(\alpha, F_n) + C'\sqrt{\frac{m\log(2/\delta)}{n}}K_n\lambda_{\max}(M_{F_n}^{-1})\big\{1 + 2L_n/b + \lambda_{\max}(M_{F_n}^{-1})(C_1 + 2C_2K_n/b)\big\}.$$
Proof. We adapt the arguments developed for the estimation problem in Lemma 2. By Lemma 15 we have that
$$\gamma_{1/n}(\alpha, G) \leq 2\gamma(\alpha, G) + O\Big[\frac{1}{n}K_n\lambda_{\max}(M_F^{-1})\big\{1 + 2L_n/b + \lambda_{\max}(M_G^{-1})(C_1 + 2C_2K_n/b)\big\}\Big] \tag{25}$$
for all $G \in \{H : d_\infty(F_n, H) \leq C\sqrt{\frac{m\log(2/\delta)}{n}}\}$. Furthermore, it follows from Lemma 16 that
$$\gamma(\alpha, G) \leq \gamma(\alpha, F_n) + O\Big[\sqrt{\frac{m\log(2/\delta)}{n}}K_n\lambda_{\max}(M_{F_n}^{-1})\big\{1 + L_n/b + \lambda_{\max}(M_{F_n}^{-1})(C_1 + 2C_2K_n/b)\big\}\Big]. \tag{26}$$
Using (26) in (25) shows the desired result.
Proof of Theorem 5

Proof. Let us first consider the Wald functional. The proof of the first claim is very similar to that of Theorem 2. The main difference is that $\|U(F_n)\| = O_P(\sqrt{k/n})$ and hence
$$\gamma(\alpha, F_n) = \sup_x\big|2nH'_k\big(n\|U(F_n)\|^2\big)U(F_n)^T\mathrm{IF}(x; U, F_n)\big| \leq \big|2nH'_k\big(n\|U(F_n)\|^2\big)\big|\,\|U(F_n)\|\,\gamma(U, F_n) \leq \big|2nH'_k\big(n\|U(F_n)\|^2\big)\big|\,\|U(F_n)\|\,\|V^{-1/2}\|\,\gamma(T, F_n) \leq O_P(\sqrt{nk}\,K_n).$$
For the second claim it suffices to notice that since $\frac{2\log(n)\gamma(\alpha, F_n)}{n\varepsilon}Z \leq O_P\big(\frac{K_n\log(n)}{\sqrt{n/k}\,\varepsilon}\big) = o_P(1)$, a Taylor expansion of $H_k^{-1}$ yields
$$Q(F_n) = H_k^{-1}\big(\alpha(F_n) + o_P(1)\big) = H_k^{-1}\big(\alpha(F_n)\big) + o_P(1) = Q_0(F_n) + o_P(1).$$
It is easy to see that the proof for $\tilde{S}$ is very similar. The same arguments also work for the Rao test since direct calculations show that
$$\mathrm{IF}(x; Z, F_n) = \Big(\frac{1}{n}\sum_{i=1}^n\dot{\Psi}\big(x_i; T_R(F_n)\big)_{(2)}\Big)\mathrm{IF}(x; T_R, F_n) + Z(T, F_n - \Delta_x).$$
Proof of Theorem 6

Proof. First note that Proposition 1 yields lower bounds of the form $\frac{\rho}{16}\gamma_\rho(\alpha, F)$ for our problem. We will simply further lower bound $\gamma_\rho(\alpha, F)$ in order to establish the claimed result. Writing $\rho_n = \sqrt{n}\rho = \frac{1}{\sqrt{n}}\lceil\frac{\log 2}{2\varepsilon}\rceil$ and $F_{\rho_n,n} = (1 - \frac{\rho_n}{\sqrt{n}})F + \frac{\rho_n}{\sqrt{n}}\Delta_x$ for a fixed $x$, we see that
$$\sqrt{n}\big(U(F_n) - U(F_{\rho_n,n})\big) \to_d N(0, I_k).$$
Furthermore, let $\alpha(F_{\rho_n,n}) = 1 - H_k(q_{1-\alpha_0}; t(\rho_n))$, where $t(\rho_n) = n\|U(F_{\rho_n,n})\|^2$, and let $b(\rho_n) = -H_k(q_{1-\alpha_0}; t(\rho_n))$. Following the computations of Proposition 4 in Heritier and Ronchetti (1994) we have that
$$\alpha(F_{\rho_n,x}) - \alpha(F) = \rho_nb'(0) + \frac{1}{2}\rho_n^2b''(0) + o(\rho_n^2) + O(n^{-1}) = \rho_n^2\mu\|\mathrm{IF}(x; U, F)\|^2 + o(\rho_n^2) + O(n^{-1}).$$
Using the last equality we can see that
$$\gamma_\rho(\alpha, F) = \sup_{x'}\frac{1}{\rho}\big|\alpha\big((1-\rho)F + \rho\Delta_{x'}\big) - \alpha(F)\big| \geq \frac{1}{\rho}\big|\alpha\big((1-\rho)F + \rho\Delta_x\big) - \alpha(F)\big| = \frac{1}{\rho}\big|\rho_n^2\mu\|\mathrm{IF}(x; U, F)\|^2 + o(\rho_n^2) + O(n^{-1})\big| \geq \Big\lceil\frac{\log 2}{2\varepsilon}\Big\rceil\mu\|\mathrm{IF}(x; U, F)\|^2 + o\Big(\Big\lceil\frac{\log 2}{2\varepsilon}\Big\rceil\Big).$$
Taking the supremum over $x$ on the right hand side of the last expression completes the proof.
Appendix B: properties of the influence function

The influence function is a particular case of the Gâteaux derivative, which constitutes a more general notion of differentiability. We say that a functional $T : \mathcal{F} \to \Theta$ is Gâteaux differentiable at $F$ if there is a linear functional $L = L_F$ such that for all $G \in \mathcal{F}$
$$\lim_{t \to 0}\frac{T(F_t) - T(F)}{t} = L_F(G - F)$$
with $F_t = (1 - t)F + tG$. In this section we derive some useful properties of the influence function and the gross-error sensitivity that we use to establish our differential privacy guarantees. The next lemma states a useful identity relating the gross-error sensitivity to its fixed scale counterpart for M-estimators.

Lemma 7. Let $F_t = (1 - t)F + tG$; then for $\rho \in (0, 1]$ we have
$$T(F_\rho) - T(F) = \int_0^\rho\int\mathrm{IF}(x; T, F_t)\,d(G - F)\,dt.$$
Proof. We reproduce the arguments of Huber and Ronchetti (2009), pp. 38–39, for completeness. By construction
$$T(F_\rho) - T(F_0) = \int_0^\rho\frac{d}{dt}T(F_t)\,dt, \quad \text{where} \quad \frac{d}{dt}T(F_t) = \lim_{h \to 0}\frac{T(F_{t+h}) - T(F_t)}{h}.$$
Noting that $F_{t+h}$ can be rewritten as
$$F_{t+h} = \Big(1 - \frac{h}{1 - t}\Big)F_t + \frac{h}{1 - t}G,$$
we get that
$$\frac{d}{dt}T(F_t) = \frac{1}{1 - t}\int\mathrm{IF}(x; T, F_t)\,d(G - F_t) = \int\mathrm{IF}(x; T, F_t)\,d(G - F).$$
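Lemma 7 can be checked numerically for a concrete M-functional. The sketch below does so for a Huber location functional on a discrete distribution, with $G = \Delta_{x_0}$; the function names, the tuning constant $c = 1.345$ and the quadrature grid are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import brentq

c = 1.345                                   # illustrative Huber tuning constant
psi = lambda r: np.clip(r, -c, c)
dpsi = lambda r: (np.abs(r) <= c).astype(float)

def T(pts, w):
    """Huber location M-functional of the discrete distribution (pts, w)."""
    return brentq(lambda th: np.sum(w * psi(pts - th)),
                  pts.min() - 1.0, pts.max() + 1.0)

def IF(z, pts, w):
    """Influence function psi(z - T) / E[psi'] of the Huber functional."""
    th = T(pts, w)
    return psi(z - th) / np.sum(w * dpsi(pts - th))

rng = np.random.default_rng(1)
pts = np.sort(rng.standard_normal(200))     # discrete approximation of F
wF = np.full(200, 1.0 / 200)
x0, rho = 3.0, 0.05                         # G = point mass at x0
pts_all = np.concatenate((pts, [x0]))

lhs = T(pts_all, np.concatenate(((1 - rho) * wF, [rho]))) - T(pts, wF)

ts = np.linspace(0.0, rho, 201)             # quadrature grid in t
vals = np.array([IF(x0, pts_all, np.concatenate(((1 - t) * wF, [t])))
                 - np.sum(wF * IF(pts, pts_all, np.concatenate(((1 - t) * wF, [t]))))
                 for t in ts])
rhs = np.sum((vals[:-1] + vals[1:]) / 2) * (ts[1] - ts[0])   # trapezoid rule
print(lhs, rhs)
```

Here the inner integral $\int\mathrm{IF}(z;T,F_t)\,d(\Delta_{x_0}-F)$ reduces to $\mathrm{IF}(x_0) - \int\mathrm{IF}\,dF$, and the two sides of the identity agree up to quadrature error.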
Gross-error sensitivity bounds

The next lemmas provide a series of upper bounds relating $\gamma_\rho(T, F)$, $\gamma(T, F_\rho)$ and $\gamma(T, G)$, where $T$ is an M-functional defined by the equation
$$\int\Psi(x, T(F))\,dF = 0.$$
The M-functional is assumed to satisfy $\sup_x\|\Psi(x, \theta)\| \leq K$ and the following smoothness assumptions, guaranteed by Condition 3 in the main text. There exist $r_1 > 0$, $r_2 > 0$, $C_1$ and $C_2 > 0$ such that
$$\|\mathbb{E}_F[\dot{\Psi}(X, \theta)] - \mathbb{E}_G[\dot{\Psi}(X, \theta)]\| \leq C_1d_\infty(F, G) \quad \text{and} \quad \|\mathbb{E}_G[\dot{\Psi}(X, \theta)] - \mathbb{E}_G[\dot{\Psi}(X, T(G))]\| \leq C_2\|T(G) - \theta\|$$
whenever $d_\infty(F, G) \leq r_1$ and $\|\theta - T(G)\| \leq r_2$. We will further assume that $\lambda_{\min}(M_G) \geq b > 0$ for all $G$.
Lemma 8. Let $T$ be an M-functional defined by a bounded function $\Psi$ and such that $M_F = M(T, F)$ is positive definite. Then, for $\rho \in (0, 1]$ we have that
$$\gamma_\rho(T, F) \leq 2\sup_{t \in [0,\rho]}\gamma(T, F_t),$$
where $F_t = (1 - t)F + t\Delta_x$.
Proof. Lemma 7 and direct calculations show that
$$\gamma_\rho(T, F) = \sup_x\Big\|\rho^{-1}\int_0^\rho\int\mathrm{IF}(z; T, F_t)\,d(\Delta_x - F)\,dt\Big\| \leq \sup_x\Big\|\rho^{-1}\int_0^\rho M_{F_t}^{-1}\Psi(x, T(F_t))\,dt\Big\| + \Big\|\rho^{-1}\int_0^\rho\Big(\int M_{F_t}^{-1}\Psi(z, T(F_t))\,dF(z)\Big)dt\Big\| \leq \sup_{t \in [0,\rho]}\sup_x\big\|M_{F_t}^{-1}\Psi(x, T(F_t))\big\| + \sup_{t \in [0,\rho]}\Big\|\int M_{F_t}^{-1}\Psi(z, T(F_t))\,dF(z)\Big\| \leq 2\sup_{t \in [0,\rho]}\gamma(T, F_t).$$
Lemma 9. Under the assumptions of Lemma 8, if $\sup_x\|\dot{\Psi}(x, \theta)\| \leq L$ for all $\theta \in \Theta$ and $\lambda_{\max}(M_F^{-1})\rho\{C_1 + 2C_2K/b\} < 1$, we have that
$$\gamma(T, F_\rho) \leq \gamma(T, F) + \rho K\lambda_{\max}(M_F^{-1})\big\{2L/b + \lambda_{\max}(M_F^{-1})(C_1 + 2C_2K/b)\big\} + O\Big[\rho^2K(C_1 + 2C_2K/b)\big\{C_1 + (K + L)C_2K/b\big\}\Big].$$
Proof. Simple manipulations and a first order (integral form) Taylor expansion show that
$$\mathrm{IF}(x; T, F_\rho) - \mathrm{IF}(x; T, F) = M_{F_\rho}^{-1}\Psi(x, T(F_\rho)) - M_F^{-1}\Psi(x, T(F)) = M_F^{-1}\{\Psi(x, T(F_\rho)) - \Psi(x, T(F))\} + (M_{F_\rho}^{-1} - M_F^{-1})\Psi(x, T(F_\rho)) = M_F^{-1}\tilde{M}_{\Delta_x}\{T(F_\rho) - T(F)\} + (M_{F_\rho}^{-1} - M_F^{-1})\big[\Psi(x, T(F)) + \tilde{M}_{\Delta_x}\{T(F_\rho) - T(F)\}\big], \tag{27}$$
where $\tilde{M}_{\Delta_x} = \int_0^1\dot{\Psi}\big[x, T(F) + t\{T(F_\rho) - T(F)\}\big]dt$. Therefore
$$\|\mathrm{IF}(x; T, F_\rho)\| \leq \|\mathrm{IF}(x; T, F)\| + \big\|M_F^{-1}\tilde{M}_{\Delta_x}\{T(F_\rho) - T(F)\}\big\| + \big\|(M_{F_\rho}^{-1} - M_F^{-1})\big[\Psi(x, T(F)) + \tilde{M}_{\Delta_x}\{T(F_\rho) - T(F)\}\big]\big\| \leq \gamma(T, F) + L\lambda_{\max}(M_F^{-1})\|T(F_\rho) - T(F)\| + \|M_{F_\rho}^{-1} - M_F^{-1}\|\big(K + L\|T(F_\rho) - T(F)\|\big). \tag{28}$$
Furthermore, using Neumann series we have that
$$M_{F_\rho}^{-1} = M_F^{-1} - M_F^{-1}(M_{F_\rho} - M_F)M_F^{-1} + \sum_{j \geq 2}\big(-M_F^{-1}(M_{F_\rho} - M_F)\big)^jM_F^{-1}, \tag{29}$$
and by Condition 3 and Lemma 8
$$\|M_{F_\rho} - M_F\| = \big\|\mathbb{E}_{F_\rho}[\dot{\Psi}(X, T(F_\rho)) - \dot{\Psi}(X, T(F))] + \mathbb{E}_{F_\rho}[\dot{\Psi}(X, T(F))] - \mathbb{E}_F[\dot{\Psi}(X, T(F))]\big\| \leq \big\|\mathbb{E}_{F_\rho}[\dot{\Psi}(X, T(F))] - \mathbb{E}_F[\dot{\Psi}(X, T(F))]\big\| + \big\|\mathbb{E}_{F_\rho}[\dot{\Psi}(X, T(F_\rho)) - \dot{\Psi}(X, T(F))]\big\| \leq C_1d_\infty(F_\rho, F) + C_2\|T(F_\rho) - T(F)\| \leq \rho\{C_1 + C_2\gamma_\rho(T, F)\} \leq \rho(C_1 + 2C_2K/b). \tag{30}$$
Since $\|M_F^{-1}\|\|M_{F_\rho} - M_F\| \leq \lambda_{\max}(M_F^{-1})\rho\{C_1 + 2C_2K/b\} < 1$ we also have
$$\Big\|\sum_{j \geq 2}\big(-M_F^{-1}(M_{F_\rho} - M_F)\big)^jM_F^{-1}\Big\| \leq \|M_F^{-1}\|\sum_{j \geq 2}\|M_F^{-1}\|^j\|M_{F_\rho} - M_F\|^j \leq \|M_F^{-1}\|\Big(\frac{1}{1 - \|M_F^{-1}\|\|M_{F_\rho} - M_F\|} - 1 - \|M_F^{-1}\|\|M_{F_\rho} - M_F\|\Big) \leq \frac{\|M_F^{-1}\|^3\|M_{F_\rho} - M_F\|^2}{1 - \|M_F^{-1}\|\|M_{F_\rho} - M_F\|} \leq \frac{\rho^2(C_1 + 2C_2K/b)^2\lambda_{\max}^3(M_F^{-1})}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)}. \tag{31}$$
Therefore, using Lemma 7, (29)–(31) and taking the supremum over (28), we obtain
$$\gamma(T, F_\rho) \leq \gamma(T, F) + L\lambda_{\max}(M_F^{-1})(2\rho K/b) + \Big\{\lambda_{\max}^2(M_F^{-1})\rho(C_1 + 2C_2K/b) + \frac{\rho^2(C_1 + 2C_2K/b)^2\lambda_{\max}^3(M_F^{-1})}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)}\Big\}(K + 2\rho LK/b) = \gamma(T, F) + \rho K\lambda_{\max}(M_F^{-1})\big\{2L/b + \lambda_{\max}(M_F^{-1})(C_1 + 2C_2K/b)\big\} + \rho^2K\lambda_{\max}^2(M_F^{-1})(C_1 + 2C_2K/b)\Big\{2L/b + \frac{\lambda_{\max}(M_F^{-1})(C_1 + 2C_2K/b)}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)}\Big\} + 2\rho^3b^{-1}KL\frac{\lambda_{\max}^3(M_F^{-1})(C_1 + 2C_2K/b)^2}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)}. \tag{32}$$
The desired result follows from (32).
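The Neumann-series tail bound in (31) is straightforward to validate numerically. The following sketch compares the tail $\sum_{j\geq 2}(-M^{-1}\Delta)^jM^{-1}$ against the bound $\|M^{-1}\|^3\|\Delta\|^2/(1-\|M^{-1}\|\|\Delta\|)$ for a random positive definite $M$ (sizes and the perturbation scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def neumann_tail_and_bound(Minv, Delta):
    """Tail sum_{j>=2} (-Minv @ Delta)^j @ Minv and the operator-norm bound
    ||Minv||^3 ||Delta||^2 / (1 - ||Minv|| ||Delta||) from (31)."""
    A = -Minv @ Delta
    term = A @ A @ Minv                     # j = 2 term
    tail = np.zeros_like(Minv)
    for _ in range(200):                    # geometric series, converges fast
        tail += term
        term = A @ term
    a = np.linalg.norm(Minv, 2)
    d = np.linalg.norm(Delta, 2)
    return np.linalg.norm(tail, 2), a ** 3 * d ** 2 / (1 - a * d)

B = rng.standard_normal((4, 4))
M = B @ B.T + 4 * np.eye(4)                 # well-conditioned positive definite
Delta = 0.05 * rng.standard_normal((4, 4))  # small perturbation M_rho - M
val, bound = neumann_tail_and_bound(np.linalg.inv(M), Delta)
print(val, bound)                           # val <= bound
```

The bound follows from submultiplicativity of the operator norm, exactly as in the display (31).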
Lemma 10. Under the assumptions of Lemma 9 we have that
$$\gamma_\rho(T, F) \leq 2\gamma(T, F) + 2\rho K\lambda_{\max}(M_F^{-1})\big\{2L/b + \lambda_{\max}(M_F^{-1})(C_1 + 2C_2K/b)\big\} + O\Big[\rho^2K(C_1 + 2C_2K/b)\big\{C_1 + (K + L)C_2K/b\big\}\Big].$$
Proof. This is a direct consequence of Lemmas 8 and 9.
Lemma 11. Assume the conditions of Lemma 10 and let $d_\infty(F, G) \leq \rho$. Then we have that
$$\gamma(T, G) \leq \gamma(T, F) + \rho K\lambda_{\max}(M_F^{-1})\big\{2L/b + \lambda_{\max}(M_F^{-1})(C_1 + 2C_2K/b)\big\} + O\Big[\rho^2K(C_1 + 2C_2K/b)\big\{C_1 + (K + L)C_2/b\big\}\Big].$$
Proof. The proof is similar to that of Lemma 9. First note that
$$\mathrm{IF}(x; T, G) - \mathrm{IF}(x; T, F) = M_G^{-1}\Psi(x, T(G)) - M_F^{-1}\Psi(x, T(F)) = M_F^{-1}\{\Psi(x, T(G)) - \Psi(x, T(F))\} + (M_G^{-1} - M_F^{-1})\Psi(x, T(G)) = M_F^{-1}\tilde{M}\{T(G) - T(F)\} + (M_G^{-1} - M_F^{-1})\big[\Psi(x, T(F)) + \tilde{M}\{T(G) - T(F)\}\big], \tag{33}$$
where $\tilde{M} = \int_0^1\dot{\Psi}\big[x, T(F) + t\{T(G) - T(F)\}\big]dt$. Further note that
$$M_G^{-1} = M_F^{-1} - M_F^{-1}(M_G - M_F)M_F^{-1} + \sum_{j \geq 2}\big(-M_F^{-1}(M_G - M_F)\big)^jM_F^{-1}, \tag{34}$$
and that applying Lemma 7 with $\rho = 1$ we have that
$$\|T(G) - T(F)\| \leq \Big\|\int_0^1\int\mathrm{IF}(x; T, F_t)\,d(G - F)\,dt\Big\| \leq d_\infty(F, G)\sup_{t \in [0,1]}\gamma\big(T, (1 - t)F + tG\big) \leq \frac{\rho K}{b}. \tag{35}$$
Therefore, by Condition 3,
$$\|M_G - M_F\| = \big\|\mathbb{E}_G[\dot{\Psi}(X, T(G)) - \dot{\Psi}(X, T(F))] + \mathbb{E}_G[\dot{\Psi}(X, T(F))] - \mathbb{E}_F[\dot{\Psi}(X, T(F))]\big\| \leq C_2\|T(G) - T(F)\| + C_1d_\infty(G, F) \leq \rho(C_1 + C_2K/b), \tag{36}$$
where the second inequality used (35). Furthermore, adapting (31) we see that
$$\Big\|\sum_{j \geq 2}\big(-M_F^{-1}(M_G - M_F)\big)^jM_F^{-1}\Big\| \leq \frac{\rho^2(C_1 + 2C_2K/b)^2\lambda_{\max}^3(M_F^{-1})}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)}. \tag{37}$$
Combining (33)–(37) we see that
$$\|\mathrm{IF}(x; T, G)\| \leq \|\mathrm{IF}(x; T, F)\| + \big\|M_F^{-1}\tilde{M}\{T(G) - T(F)\}\big\| + \big\|(M_G^{-1} - M_F^{-1})\big[\Psi(x, T(F)) + \tilde{M}\{T(G) - T(F)\}\big]\big\| \leq \|\mathrm{IF}(x; T, F)\| + L\lambda_{\max}(M_F^{-1})\rho K/b + \|M_G^{-1} - M_F^{-1}\|(K + L\rho K/b) \leq \gamma(T, F) + \rho L\lambda_{\max}(M_F^{-1})K/b + \Big(\lambda_{\max}^2(M_F^{-1})\rho(C_1 + C_2K/b) + \frac{\rho^2(C_1 + C_2K/b)^2\lambda_{\max}^3(M_F^{-1})}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + C_2K/b)}\Big)\big\{K + L\rho K/b\big\} \leq \gamma(T, F) + \rho\lambda_{\max}(M_F^{-1})K\big\{L/b + \lambda_{\max}(M_F^{-1})(C_1 + C_2K/b)\big\} + \rho^2K\lambda_{\max}^2(M_F^{-1})(C_1 + C_2K/b)\Big\{L/b + \frac{\lambda_{\max}(M_F^{-1})(C_1 + C_2K/b)}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)}\Big\} + \rho^3b^{-1}KL\frac{\lambda_{\max}(M_F^{-1})(C_1 + C_2K/b)^2}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)}. \tag{38}$$
Taking the supremum over $x$ in (38) we obtain the desired result.
Generalized gross-error sensitivity bounds

We now provide a series of bounds on the gross-error sensitivity of a general functional $g$ that we use to study the three test functionals $W$, $R$ and $\tilde{S}$ described in Section 4.1. Our results rely on the following assumptions on $g$.

Condition 4. The function $g : \mathbb{R}^p \times \mathcal{F} \to \mathbb{R}$ has two continuous partial derivatives with respect to its two arguments. Furthermore its first and second order partial derivatives with respect to the corresponding two arguments, $\nabla_1g$, $\nabla_2g$, $\nabla_{11}g$, $\nabla_{12}g$, $\nabla_{21}g$ and $\nabla_{22}g$, are bounded in sup norm by some constant $\bar{C}$.

Lemma 12 shows that, under usual regularity conditions, the test functionals $W$, $R$ and $\tilde{S}$ satisfy Condition 4. Lemmas 13–16 provide inequalities relating different gross-error sensitivity functions of $h(F) = g(T(F), F)$, namely $\gamma(h, F)$, $\gamma(h, F_\rho)$, $\gamma_\rho(h, F)$ and $\gamma(h, G)$. These results are in the spirit of Lemmas 8–11.

Lemma 12. The test functionals $W(F)$, $R(F)$ and $\tilde{S}(F)$ can be written as $g(T(F), F)$. If in addition Condition 1 holds with $K_n = K < \infty$ and $L_n = L < \infty$, then $g$ satisfies Condition 4.
Proof. We verify the claims separately for $W(F)$, $R(F)$ and $\tilde{S}(F)$ in the three points below.

1. Wald functional: it is immediate from the definition of $W$ that $W(F) = f \circ T(F) = g(T(F), F)$, where $f : \theta \mapsto \theta_{(2)}^T(V(T, F)_{22})^{-1}\theta_{(2)}$. Therefore $g$ is a constant function of its second argument and $\nabla_2g = 0$, $\nabla_{21}g = 0$ and $\nabla_{22}g = 0$. Furthermore $g$ is quadratic in its first argument, with $\nabla_1g(T(F), F) = \big(0^T, 2\theta_{(2)}^T(V(T, F)_{22})^{-1}\big)^T$ and $\nabla_{11}g(T(F), F) = \mathrm{blockdiag}\big\{0, 2(V(T, F)_{22})^{-1}\big\}$.

2. Rao functional: since $R$ is quadratic in the functional $Z(T, F) = \int\Psi(X, T_R(F))_{(2)}\,dF = f(T_R(F), F)$, we have that $R(F) = g(T_R(F), F)$. Hence in order to check Condition 4 it suffices to see that the derivatives of $f$ are bounded, because
$$\nabla_1f(T_R(F), F) = \int\dot{\Psi}(X, T_R(F))_{(2)}\,dF, \quad \nabla_2f(T_R(F), F) = \Psi(X, T_R(F))_{(2)},$$
$$\nabla_{11}f(T_R(F), F) = \int\frac{\partial}{\partial\theta}\dot{\Psi}(X, \theta)_{(2)}\,dF\Big|_{\theta = T_R(F)}, \quad \nabla_{12}f(T_R(F), F) = \dot{\Psi}(X, T_R(F))_{(2)}$$
and $\nabla_{22}f(T_R(F), F) = 0$.

3. Likelihood ratio-type functional: since $\tilde{S}(F)$ is quadratic in $T(F)_{(2)}$, the arguments given for $W$ apply.
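The gradient claimed in point 1 can be checked by finite differences. A small sketch with an illustrative $\theta_{(2)}$ and $(V(T,F)_{22})^{-1}$ (both made up for the check):

```python
import numpy as np

rng = np.random.default_rng(5)
q = 3                                        # dimension of theta_(2), illustrative
theta2 = rng.standard_normal(q)
B = rng.standard_normal((q, q))
V22inv = np.linalg.inv(B @ B.T + np.eye(q))  # (V(T,F)_22)^{-1}, illustrative

f = lambda th2: th2 @ V22inv @ th2           # Wald-type quadratic form
grad = 2 * V22inv @ theta2                   # gradient claimed in point 1

h = 1e-6
fd = np.array([(f(theta2 + h * e) - f(theta2 - h * e)) / (2 * h)
               for e in np.eye(q)])
print(np.max(np.abs(grad - fd)))             # tiny: f is exactly quadratic
```

Because $f$ is quadratic, the central difference is exact up to rounding, so the agreement is essentially at machine precision.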
Lemma 13. Under the assumptions of Lemma 8 and with $h(F) = g(T(F), F)$, we have that
$$\gamma_\rho(h, F) \leq 2\sup_{t \in [0,\rho]}\gamma(h, F_t),$$
where $F_t = (1 - t)F + t\Delta_x$.
Proof. Since Lemma 7 applies to $h(F_\rho) - h(F)$, the same arguments used in the proof of Lemma 8 show the claimed result.
Lemma 14. Under the assumptions of Lemma 9 and with $h(F) = g(T(F), F)$, we have that
$$\gamma(h, F_\rho) \leq \gamma(h, F) + O\Big[\rho K\big\{L/b + (C_1 + 2C_2K/b)\big\}\Big].$$
Proof. First note that
$$\mathrm{IF}(x; h, F) = \nabla_1g(T(F), F)\mathrm{IF}(x; T, F) + \nabla_2g(T(F), F)(\Delta_x - F)$$
and
$$\mathrm{IF}(x; h, F_\rho) - \mathrm{IF}(x; h, F) = \big\{\nabla_1g(T(F_\rho), F_\rho)\mathrm{IF}(x; T, F_\rho) - \nabla_1g(T(F), F)\mathrm{IF}(x; T, F)\big\} + \big\{\nabla_2g(T(F_\rho), F_\rho)(\Delta_x - F_\rho) - \nabla_2g(T(F), F)(\Delta_x - F)\big\} = I_1 + I_2. \tag{39}$$
We will proceed to bound $|I_1|$ and $|I_2|$ separately since from (39) we see that
$$|\mathrm{IF}(x; h, F_\rho)| \leq |\mathrm{IF}(x; h, F)| + |I_1| + |I_2|.$$
Let us first focus on $I_2$. Note that
$$I_2 = (1 - \rho)\nabla_2g(T(F_\rho), F_\rho)(\Delta_x - F) - \nabla_2g(T(F), F)(\Delta_x - F) = (1 - \rho)\big\{\nabla_2g(T(F_\rho), F_\rho)(\Delta_x - F) - \nabla_2g(T(F), F)(\Delta_x - F)\big\} - \rho\nabla_2g(T(F), F)(\Delta_x - F) \tag{40}$$
and that $\nabla_2g(\theta, F)(\Delta_x - F)$ can be viewed as a function $g_2 : \Theta \times \mathcal{F} \to \mathbb{R}$. Viewing $g_2$ as a function of its first argument allows us to get a first order Taylor expansion of the form
$$g_2(\theta, F) = g_2(\theta', F) + \Big[\int_0^1\nabla_1g_2\{\theta' + t(\theta - \theta'), F\}\,dt\Big](\theta - \theta') = g_2(\theta', F) + (\nabla_1\bar{g}_2)(\theta - \theta'),$$
while viewing $g_2$ as a function of its second argument leads to
$$g_2(\theta, F_\rho) = g_2(\theta, F) + \int_0^\rho\frac{d}{dt}g_2(\theta, F_t)\,dt = g_2(\theta, F) + \int_0^\rho\nabla_2g_2(\theta, F_t)(\Delta_x - F)\,dt.$$
Applying consecutively the above expressions in the identity (40) yields
$$I_2 = (1 - \rho)\big\{\nabla_2g(T(F), F_\rho)(\Delta_x - F) - \nabla_2g(T(F), F)(\Delta_x - F) + (\nabla_1\bar{g}_2)\big(T(F_\rho) - T(F)\big)\big\} - \rho\nabla_2g(T(F), F)(\Delta_x - F) = (1 - \rho)\Big\{\int_0^\rho\nabla_2g_2(T(F), F_t)(\Delta_x - F)\,dt + \rho(\nabla_1\bar{g}_2)\mathrm{IF}_\rho(x; T, F)\Big\} - \rho\nabla_2g(T(F), F)(\Delta_x - F). \tag{41}$$
Since Condition 4 guarantees that all the first two partial derivatives of $g$ are bounded, from (41) and the triangle inequality we see that
$$|I_2| = O\big[\rho\big\{1 + \gamma_\rho(T, F)\big\}\big]. \tag{42}$$
Let us now study $I_1$. Note that $\nabla_1g(\theta, F)$ can be viewed as a function $g_1 : \Theta \times \mathcal{F} \to \mathbb{R}^p$ and admits expansions analogous to the ones considered for $g_2$ in (41). Therefore
$$I_1 = \nabla_1g(T(F), F)\big\{\mathrm{IF}(x; T, F_\rho) - \mathrm{IF}(x; T, F)\big\} + \big\{\nabla_1g(T(F_\rho), F_\rho) - \nabla_1g(T(F), F)\big\}\mathrm{IF}(x; T, F_\rho) = \nabla_1g(T(F), F)\big\{\mathrm{IF}(x; T, F_\rho) - \mathrm{IF}(x; T, F)\big\} + \big\{\nabla_1g(T(F), F_\rho) - \nabla_1g(T(F), F) + (\nabla_2\bar{g}_1)\big(T(F_\rho) - T(F)\big)\big\}\mathrm{IF}(x; T, F_\rho) = \nabla_1g(T(F), F)\big\{\mathrm{IF}(x; T, F_\rho) - \mathrm{IF}(x; T, F)\big\} + \Big\{\int_0^\rho\nabla_1g_1(T(F), F_t)(\Delta_x - F)\,dt + \rho(\nabla_2\bar{g}_1)\mathrm{IF}_\rho(x; T, F)\Big\}\mathrm{IF}(x; T, F_\rho). \tag{43}$$
Using the Cauchy–Schwarz inequality, (43) and (27)–(30) we see that
$$|I_1| \leq \|\nabla_1g(T(F), F)\|\,\|\mathrm{IF}(x; T, F_\rho) - \mathrm{IF}(x; T, F)\| + \Big\|\int_0^\rho\nabla_1g_1(T(F), F_t)(\Delta_x - F)\,dt\Big\|\,\|\mathrm{IF}(x; T, F)\| + \rho\|\nabla_2\bar{g}_1\|\,\|\mathrm{IF}(x; T, F)\|\,\|\mathrm{IF}_\rho(x; T, F)\| \leq \bar{C}\big\{\|\mathrm{IF}(x; T, F_\rho) - \mathrm{IF}(x; T, F)\| + \rho\gamma(T, F) + \rho\gamma_\rho(T, F)\gamma(T, F)\big\} \leq \bar{C}\Big\{\big\|(M_{F_\rho}^{-1} - M_F^{-1})\big[\Psi(x, T(F)) + \tilde{M}_{\Delta_x}\{T(F_\rho) - T(F)\}\big] + M_F^{-1}\tilde{M}_{\Delta_x}\{T(F_\rho) - T(F)\}\big\| + \rho\big(1 + \gamma_\rho(T, F)\big)\gamma(T, F)\Big\} \leq \bar{C}\Big[\Big\{\lambda_{\max}^2(M_F^{-1})\rho(C_1 + 2C_2K/b) + \frac{\rho^2(C_1 + 2C_2K/b)^2\lambda_{\max}^3(M_F^{-1})}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)}\Big\}(K + 2\rho LK/b) + L\lambda_{\max}(M_F^{-1})(2\rho K/b) + \rho L\lambda_{\max}(M_F^{-1})\gamma_\rho(T, F) + \rho\big(1 + \gamma_\rho(T, F)\big)\gamma(T, F)\Big] \leq \bar{C}\Big[\rho K\lambda_{\max}(M_F^{-1})\big\{2L/b + \lambda_{\max}(M_F^{-1})(C_1 + 2C_2K/b)\big\} + \rho^2K\lambda_{\max}^2(M_F^{-1})(C_1 + 2C_2K/b)\Big\{2L/b + \frac{\lambda_{\max}(M_F^{-1})(C_1 + 2C_2K/b)}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)}\Big\} + 2\rho^3b^{-1}KL\frac{\lambda_{\max}^3(M_F^{-1})(C_1 + 2C_2K/b)^2}{1 - \lambda_{\max}(M_F^{-1})\rho(C_1 + 2C_2K/b)} + \rho\big(1 + 2\rho K/b\big)\lambda_{\max}(M_F^{-1})K\Big]. \tag{44}$$
Using Lemma 10 to further upper bound (42) and (44), and taking the supremum over $x$ in the resulting inequalities, yields the desired result.
Lemma 15. Under the assumptions of Lemma 10 we have that
$$\gamma_\rho(h, F) \leq 2\gamma(h, F) + O\Big[\rho K\lambda_{\max}(M_F^{-1})\big\{1 + 2L/b + \lambda_{\max}(M_F^{-1})(C_1 + 2C_2K/b)\big\}\Big].$$
Proof. The result is immediate from Lemmas 13 and 14.

Lemma 16. Under the assumptions of Lemma 11 we have that
$$\gamma(h, G) \leq \gamma(h, F) + O\Big[\rho K\lambda_{\max}(M_F^{-1})\big\{1 + L/b + \lambda_{\max}(M_F^{-1})(C_1 + 2C_2K/b)\big\}\Big].$$
Proof. The proof is similar to that of Lemma 14 and is omitted for the sake of space.
Appendix C: variance sensitivity of test statistics

In this appendix we detail the consequences of estimating the standardizing matrices $M(T, F)$, $U(T, F)$ and $V(T, F)$ on our construction. For this we need the change of variance function as a complementary tool to the influence function for the analysis of the sensitivity of the test functionals.

The change of variance function of M-estimators

The change of variance function of an M-functional $T$ at the model distribution $F$ is defined as
$$\mathrm{CVF}(x; T, F) := \frac{\partial}{\partial t}V\big(T, (1 - t)F + t\Delta_x\big)\Big|_{t = 0}$$
for all $x$ where this expression exists; see Hampel et al. (1981) and Hampel et al. (1986). It is essentially the influence function of the asymptotic variance functional $V(T, F)$. It reflects the impact of small amounts of contamination on the variance of the estimator $T(F_n)$ and hence on the length of the confidence intervals. We reproduce below the form of the change of variance functions for general M-estimators as derived in Zhelonkin (2013).

For the sake of simplicity, we write $V = V(T, F)$, $\Psi = \Psi(x, T(F)) = (\Psi_1\ \Psi_2\ \dots\ \Psi_p)^T$ and
$$\frac{\partial\Psi}{\partial\theta} = \begin{pmatrix}\frac{\partial\Psi_1}{\partial\theta_1} & \frac{\partial\Psi_1}{\partial\theta_2} & \dots & \frac{\partial\Psi_1}{\partial\theta_p} \\ \frac{\partial\Psi_2}{\partial\theta_1} & \frac{\partial\Psi_2}{\partial\theta_2} & \dots & \frac{\partial\Psi_2}{\partial\theta_p} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial\Psi_p}{\partial\theta_1} & \frac{\partial\Psi_p}{\partial\theta_2} & \dots & \frac{\partial\Psi_p}{\partial\theta_p}\end{pmatrix}.$$
Using this notation, the change of variance function of M-estimators is
$$\mathrm{CVF}(x; T, F) = V - M^{-1}\Big(\int D\,dF + \frac{\partial}{\partial\theta}\Psi\Big)V - V\Big(\int D\,dF + \frac{\partial}{\partial\theta}\Psi\Big)M^{-1} + M^{-1}\Big(\int R\,dF + \int R^T\,dF + \Psi\Psi^T\Big)M^{-1},$$
where
$$D = \Big\{\Big(\frac{\partial}{\partial\theta}\frac{\partial}{\partial\theta_j}\Psi_k\Big)^T\mathrm{IF}(x; T, F)\Big\}_{j,k=1}^p \quad \text{and} \quad R = \Big(\frac{\partial}{\partial\theta}\Psi\Big)\mathrm{IF}(x; T, F)\Psi^T.$$
Change of variance sensitivity for tests

The following result shows how the influence function of standardized M-functionals depends on both the influence function and the change of variance function of the corresponding unstandardized M-functional.

Proposition 2. Let $T(F)$ be an M-functional with associated asymptotic variance matrix $V(T, F)$. Then the influence function of the standardized functional $U(F) = V(T, F)^{-1/2}T(F)$ has the form
$$\mathrm{IF}(x; U, F) = V(T, F)^{-1/2}\mathrm{IF}(x; T, F) - \frac{1}{2}V(T, F)^{-1/2}\mathrm{CVF}(x; T, F)V(T, F)^{-1}T(F). \tag{45}$$
Proof. The result follows by applying the chain rule to the derivative of $U(F_t)$ with respect to $t$, with $F_t = (1 - t)F + t\Delta_x$, and evaluating the resulting expression at $t = 0$. Indeed the derivative of $V(T, F_t)$ is $\mathrm{CVF}(x; T, F)$, the derivative of $T(F_t)$ is $\mathrm{IF}(x; T, F)$, and $dA^{-1/2}(H) = -\frac{1}{2}A^{-1/2}HA^{-1}$ for a symmetric $p$-dimensional matrix $A$ and $H \in \mathbb{R}^{p \times p}$.

One can use Proposition 2 to get an upper bound on the gross-error sensitivity of the differentially private Wald test resulting from the construction of Section 4. Note that if the change of variance function is bounded it suffices to use the simpler bound based only on the influence function of $T$ as described in the main text. Assuming that $\dot{\Psi}$ and its derivatives are bounded, it suffices to multiply the first term of (45) by $\log(n)$ in order to guarantee a bound on the smooth sensitivity. This can be shown by extending the arguments developed in Appendix B. The same type of expansions work using the more complicated influence function (45) at the expense of more tedious calculations. We could obtain results similar to Proposition 2 for the standardized functionals used in the score and likelihood ratio tests. However, as long as $\dot{\Psi}$ and its derivatives are bounded, the simple bound discussed in the main paper suffices to yield differential privacy.
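Formula (45) can be verified by finite differences in the scalar case, taking $T$ to be the mean of a discrete distribution, so that $V$ is the variance and $\mathrm{CVF}(x) = (x-\mu)^2 - \sigma^2$. The distribution and contamination point below are illustrative:

```python
import numpy as np

pts = np.array([0.5, 1.0, 1.5, 2.5, 4.0])   # illustrative discrete distribution
w = np.array([0.1, 0.3, 0.3, 0.2, 0.1])

def U(pts, w):
    """Standardized functional U(F) = T(F) / sqrt(V(F)) with T the mean."""
    mu = np.sum(w * pts)
    return mu / np.sqrt(np.sum(w * (pts - mu) ** 2))

mu = np.sum(w * pts)
var = np.sum(w * (pts - mu) ** 2)
x = 3.0
if_T = x - mu                                # IF of the mean
cvf = (x - mu) ** 2 - var                    # CVF of the mean (IF of the variance)
if_U = if_T / np.sqrt(var) - 0.5 * cvf * mu / var ** 1.5   # formula (45), p = 1

t = 1e-6                                     # finite-difference contamination
fd = (U(np.append(pts, x), np.append((1 - t) * w, t)) - U(pts, w)) / t
print(if_U, fd)                              # the two values agree
```

The second term in (45) is precisely the contribution of the estimated standardization, which vanishes only when the change of variance function is zero at $x$.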
Appendix D: Further discussions and simulations

D.1 Competing methods

Let us begin by making some general remarks regarding differential privacy in practical settings. We note that published work in the area usually presents numerical illustrations with sample sizes of the order of ∼100'000 for the methods to yield acceptable results; see for example Lei (2011); Chaudhuri et al. (2011); Sheffet (2017); Barrientos et al. (2019) among many others. It transpires from the existing literature that differential privacy is perceived as a very strong requirement that leads to very conservative analyses. As such, it also needs large sample sizes in order to give meaningful statistical results. This has sometimes been mentioned explicitly in different contexts (Machanavajjhala et al., 2008; Hall et al., 2012; Abadi et al., 2016) and is usually reflected in the very large sample sizes used in examples, or implicitly by assuming that the variables of interest are bounded. The latter is used in the computation of the sensitivity of the statistics being queried. One of the messages of our paper is that if we want to enforce differential privacy constraints on non-robust estimators, this will inevitably require us to inject large amounts of noise into the analysis. However, estimators that are robust by construction will require less noise in order to ensure differential privacy. It is precisely because of this that our methods can outperform existing alternatives that rely on truncation strategies or apply bounds that assume that the variables are bounded.

In the context of linear regression, there are a number of existing methods that can achieve differential privacy. One is tempted to take an off-the-shelf method that works for general empirical risk minimization problems based on either objective function perturbations or stochastic gradient descent algorithms, e.g. Chaudhuri et al. (2011); Bassily et al. (2014). However such methods typically require some Lipschitz constant that is unknown in practice, which makes the tuning of such algorithms tricky. There are a couple of estimation methods tailored specifically for the linear regression framework. In particular Sheffet (2019) uses random projections and compression via the Johnson–Lindenstrauss transform in order to achieve differential privacy. We do not include this estimator in our simulations since the reported results in Appendix E of that paper require very large sample sizes. We restrict our comparisons to the estimator that Cai et al. (2019) introduced for the linear regression model, as it was shown to be minimax optimal and it exhibited good numerical performance. We note that there are fewer alternatives for hypothesis testing in the linear model context. Only the work of Sheffet (2017) and Barrientos et al. (2019) seems to directly target this issue. However, in both cases their algorithms require some delicate tuning of the respective random projection/compression step for the former and of the subsampling and truncation steps for the latter. More importantly, both methods seem to require sample sizes of the order ∼10'000–100'000 to give satisfactory statistical results.
D.2 Simulations

We consider the following six simulation settings for the linear regression model in order to better illustrate the behavior of our method and compare it with the minimax optimal estimator of Cai et al. (2019).

(a) The covariates are iid Bernoulli random variables with mean $\pi = 0.15$, the variance of the Gaussian error is 0.25 and all the slope parameters are set to $\beta_1 = \cdots = \beta_5 = 1$. A similar setting was considered in Cai et al. (2019).

(b) The covariates are iid standard normal, $\beta = (0.5, -0.25, 0)^T$ and $\sigma = 1 - 0.5^2 - 0.25^2$, as considered in the simulated example of Sheffet (2017).

(c) The same normal linear regression model considered in Section 5.1 in the main document.

(d) Same as model (c) but with heavy tailed errors generated from a $t$-distribution with 4 degrees of freedom.

(e) Same as model (c) but with heavy tailed errors and covariates generated from a $t$-distribution with 4 degrees of freedom.

(f) The contaminated linear regression model considered in Section 5.1.

Figure 4 reports the mean $L_2$ error $\|\hat{\beta} - \beta\|/\|\beta\|$ obtained over 100 simulations with sample sizes ranging from $n = 500$ to $n = 5000$ for the six settings described above. We report the classic non-private maximum likelihood estimator, the robust estimator used in Section 5.1 as well as the truncated estimator of Cai et al. (2019). The latter is essentially a least squares estimator for truncated responses that is rendered differentially private with the Gaussian mechanism. The level of truncation of the responses diverges as $K\sigma\sqrt{\log n}$ for some $K > 0$ and hence requires knowledge of the noise level $\sigma$. We note that our proposal also matches the derived optimal minimax rates of convergence up to a logarithmic term. As our simulations indicate, robust differentially private estimators can significantly outperform the truncated least squares estimator of Cai et al. (2019), especially in the presence of heavy tails in the covariates. In this case the truncated estimator can be expected to perform poorly since it was constructed under the assumption that the covariates are bounded.
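For concreteness, setting (a) can be generated as in the following hedged sketch (the exact design details of Cai et al. (2019) may differ; the function name is ours):

```python
import numpy as np

def simulate_setting_a(n, rng):
    """Setting (a): iid Bernoulli(0.15) covariates, beta_1 = ... = beta_5 = 1,
    Gaussian errors with variance 0.25.  Illustrative sketch of the design."""
    p = 5
    beta = np.ones(p)
    X = rng.binomial(1, 0.15, size=(n, p)).astype(float)
    y = X @ beta + np.sqrt(0.25) * rng.standard_normal(n)
    return X, y

rng = np.random.default_rng(3)
X, y = simulate_setting_a(1000, rng)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]     # non-private baseline fit
print(np.linalg.norm(beta_ols - np.ones(5)) / np.linalg.norm(np.ones(5)))
```

The printed quantity is the relative $L_2$ error reported in Figure 4, here for the non-private least squares baseline.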
Figure 4: Panels (a)–(f) show the mean $L_2$-loss as a function of $n$ (from 500 to 5000) for the MLE ("MLE"), the differentially private truncated least squares estimator ("trunc") and the differentially private Mallows estimator ("rob").
D.3 Assessing technical constants

The privacy guarantees of Theorem 1 require that $n \geq \max\{N_0, N_1, N_2\}$, where
$$N_1 \geq \frac{1}{C^2m\log(2/\delta)}\Big[1 + \frac{4}{\varepsilon}\{p + 2\log(2/\delta)\}\log\Big(\frac{\lambda_{\max}(M_{F_n})}{b}\Big)\Big]^2 \quad \text{and} \quad N_2 \geq (C')^2m\log(2/\delta)\Big\{2L_n/b + \lambda_{\min}^{-1}(M_{F_n})\Big(C_1 + C_2\frac{K_n}{b}\Big)\Big\}^2,$$
for some constants $C$ and $C'$ defined in (16) and Lemma 2. Consequently, a user who wishes to check these conditions for a given data set and an M-estimator defined by $\Psi$ needs to know the value of the constants $(C_1, C_2, N_0, b, C, C')$. Let us therefore focus on the evaluation of these constants. For concreteness and simplicity we focus on robust regression with the Tukey biweight loss function, as it admits three continuous derivatives almost everywhere. More precisely we consider
$$\rho_c(t) = \begin{cases}1 - \big(1 - (t/c)^2\big)^3 & \text{for } |t| \leq c, \\ 1 & \text{for } |t| > c,\end{cases}$$
and
$$\hat{\beta} = \operatorname*{argmin}_\beta\sum_{i=1}^n\rho_c\Big(\frac{y_i - x_i^T\beta}{\sigma}\Big)w(x_i), \tag{46}$$
where $\sigma$ is some known scale estimate and we can choose $c = 4.685$ for 95% efficiency at the normal model when $w(x) = 1$. For this loss function we have
$$\rho'_c(t) = \frac{6t}{c^2}\Big(1 - \frac{t^2}{c^2}\Big)^2I_{|t| \leq c}, \quad \rho''_c(t) = \frac{6}{c^2}\Big(1 - \frac{t^2}{c^2}\Big)\Big(1 - \frac{5t^2}{c^2}\Big)I_{|t| \leq c}, \quad \rho'''_c(t) = \Big(\frac{120t^3}{c^6} - \frac{72t}{c^4}\Big)I_{|t| \leq c}.$$
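The three derivatives can be validated by central finite differences, which also guards against sign and coefficient slips in $\rho''_c$ and $\rho'''_c$. A minimal check on the interior of $[-c, c]$:

```python
import numpy as np

c = 4.685

def rho(t):
    """Tukey biweight loss, normalized to equal 1 outside [-c, c]."""
    return np.where(np.abs(t) <= c, 1 - (1 - (t / c) ** 2) ** 3, 1.0)

def rho1(t):
    return np.where(np.abs(t) <= c, 6 * t / c ** 2 * (1 - t ** 2 / c ** 2) ** 2, 0.0)

def rho2(t):
    return np.where(np.abs(t) <= c,
                    6 / c ** 2 * (1 - t ** 2 / c ** 2) * (1 - 5 * t ** 2 / c ** 2), 0.0)

def rho3(t):
    return np.where(np.abs(t) <= c, 120 * t ** 3 / c ** 6 - 72 * t / c ** 4, 0.0)

# central finite-difference check of each derivative on the interior of [-c, c]
t = np.linspace(-4.0, 4.0, 81)
h = 1e-5
errs = [np.max(np.abs((f(t + h) - f(t - h)) / (2 * h) - df(t)))
        for f, df in [(rho, rho1), (rho1, rho2), (rho2, rho3)]]
print(errs)   # all three errors are tiny
```

The check is restricted to $|t| < c - h$ so that the finite difference never straddles the boundary of the support, where $\rho'''_c$ has a jump.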
Let us first consider $C_1$ and $C_2$. Similar to $K_n$ and $L_n$ in Condition 1, their values are direct consequences of the choice of $\Psi$. Indeed, one can take $C_1 = L_n$ since
$$\|\mathbb{E}_{F_n - G_n}[\dot{\Psi}(z, \beta)]\| \leq L_n\|F_n - G_n\|_2 \leq L_nd_\infty(F_n, G_n).$$
Furthermore, one can take $C_2 = \max_t|\rho'''_c(t)|\lambda_{\max}\big(\frac{1}{n}X^TX\big)\big\|\max_{1 \leq i \leq n}w(x_i)x_i\big\|$ since, for some intermediate points $\bar{\beta}^{(i)}$ such that $x_i^T\bar{\beta}^{(i)}$ lies between $x_i^T\hat{\beta}$ and $x_i^T\beta$, we have that
$$\|\mathbb{E}_{F_n}[\dot{\Psi}(x, y, \hat{\beta}) - \dot{\Psi}(x, y, \beta)]\| = \Big\|\frac{1}{n}\sum_{i=1}^nx_ix_i^Tw(x_i)\rho'''_c\big(y_i - x_i^T\bar{\beta}^{(i)}\big)x_i^T(\hat{\beta} - \beta)\Big\| \leq \max_t|\rho'''_c(t)|\lambda_{\max}\Big(\frac{1}{n}X^TX\Big)\Big\|\max_{1 \leq i \leq n}w(x_i)x_i\Big\|\,\|\hat{\beta} - \beta\|.$$
The minimum sample size $N_0$ defined in Condition 2 is related to the unknown minimum eigenvalue $b$, which cannot be computed in general for M-estimators. A simple remedy for this issue is to incorporate a ridge penalty with a vanishing tuning parameter $\tau_n$, guaranteeing that $\lambda_{\min}(M_{G_n}) \geq \tau_n$ for all $n$ and all empirical distributions $G_n$, hence also implying $N_0 = n$. More specifically, choosing $\tau_n = 1/n$ one would minimize
$$\hat{\beta} = \operatorname*{argmin}_\beta\Big\{\sum_{i=1}^n\rho_c\Big(\frac{y_i - x_i^T\beta}{\sigma}\Big)w(x_i) + \frac{1}{2n}\|\beta\|^2\Big\}.$$
This ridge penalty would guarantee that b ≥ 1
n
and can be used in order to evaluate N1
since the term log(nλmax(MFn)) will remain small relative to n. This approach would not
lead to a meaningful way to evaluate N2 and at first glance seems to suggest the sample size
condition n ≥ N2 might be hard to meet. We note however while the term b in N1 comes
from a worst case consideration over all empirical distributions in (17), the term b in N2
24
was computed over all distributions G such that d∞(Fn, G) ≤ C
√
m log(2/δ)
n
. Consequently
one should think of b in N2 as a constant that is not too different from λmin(MFn). A non
completely rigorous, but practical solution is to replace C2Kn/b by 2C2λmax(M
−1
Fn
)Kn in the
inequality defining N2. Indeed, the von Mises expansion (1) leads to
‖T (G)− T (Fn)‖ ≤ ρλmax(M−1Fn )Kn + o(ρ)
and
‖MG −MFn‖ ≤ ρ(C1 + C2λmax(M
−1
Fn
)Kn) + o(ρ)
instead of ‖T (G)− T (Fn)‖ ≤ ρKn/b and ‖MG −MFn‖ ≤ ρ(C1 + C2Kn/b) in (35) and (36).
The constant C > 0 is arbitrary in Lemma 1, but it should not be too small in order to
meet the requirement n ≥ N1. Similarly, C ′ > 0 should be large in Lemma 2 but not too
large in practice in order to guarantee that n ≥ N2. A closer inspection of the arguments
used in proof of Lemma 2 shows that the choice of C ′ comes from (32) and (38), and could be
chosen to be 2C, and one could take C = 1/
√
m log(2/δ) in order to simplify the expressions
of N1 and N2. A much more conservative choice of C would be pick a large constant that
gives (16) the interpreation of leading to a usual Borel-Cantelli neighborhood around F .
We note that Theorem 4 also involves some constants CU and Cn,k,U . The former is
an upper bound on the test functional used and from the arguments of Lemma 12 we
get that it is 2λmax(V (T, F )22)
−1 for the Wald functional, CU = 2λmax((T, F )22.1) for the
Likelihood ratio type functional and CU = max{Ln, L′n} for the Rao functional, where
Ln = supx max1≤j≤p ‖
∂2
∂θ∂θT
Ψj(x; θ)‖ and Ψj is the jth component of Ψ. On the other hand
Cn,k,U =
nCn,kΓU
γ(α,Fn)
where
γ(α,Fn)
n
= 2H ′k(n‖Un‖
2)|UTn IF(x;U, Fn)|, and Cn,k and ΓU are defined
in Lemma 5. Note that ΓU can be evaluated using (35) to get the bound ‖T (Gn)−T (G′n)‖ ≤
Kn
bn
for any Gn, G
′
n such that dH(Gn, G
′
n) = 1. Therefore ΓU ≤ λmax(V (T, F )22)−1/2)
Kn
bn
for
the Wald functional, ΓU ≤ λmax(M(T, F )22.1)1/2)Knbn for the likelihood ratio type functional
and ΓU ≤ λmax(U(T, F ))KnLnbn for the Rao functional.
Finally, we note that in the more realistic case where σ is unknown, one could either use
a preliminary scale estimate in (46) or estimate it conconmitantly with β by solving a system
of equations similar to the one considered in Example 1. In both cases the formula of the
influence function of T (Fn) = β̂ becomes slightly more involved as they will now depend on
the influence function of S(Fn) = σ̂ (Huber and Ronchetti, 2009, Ch. 6.4). Consequently
the assessments of the constants (C1, C2, N0, b, C, C
′) discussed above would also need to be
adapted for the estimation of σ. We leave for future research the important issue of providing
a systematic treatment of evaluating such constants for wider class of M-estimators. Not only
would it render the privacy guarantees of our proposals easier to assess, it might also give
25
some further insights into which classes of robust M-estimators could be more convenient for
differential privacy.
26
1 Introduction
1.1 Our contribution
1.2 Related work
1.3 Organization of the paper
2 Preliminaries
2.1 Differential privacy
2.2 Constructing differentially private algorithms
2.3 Robust statistics
2.4 M-estimators for parametric models
3 Differentially private estimation
3.1 Assumptions
3.2 A general construction
3.3 Examples
3.4 Convergence rates
3.5 Efficiency, truncation and robustness properties
4 Differentially private inference
4.1 Background
4.2 Private inference based on the level gross-error sensitivity
4.3 Examples
4.4 Validity of the tests
4.5 Robustness properties of differentially private tests
4.6 Accounting for the change of variance sensitivity
5 Numerical examples
5.1 Synthetic data
5.2 Application to housing price data
6 Concluding remarks
Appendices
Appendix A proof of main results
Appendix B properties of the influence function
Appendix C variance sensitivity of test statistics
Appendix D Further discussions and simulations
D.1 Competing methods
D.2 Simulations
D.3 Assessing technical constants
| 1cybersec
| arXiv |
Why do American movies always portray saying ‘I love you’ as such a big, scary thing?. I get that it’s a special thing to say, but usually most movies and series I watch, even people who’re dating for a longer time and meeting each other’s family/friends are super embarrassed when they accidentally speak their mind.
Is this a culture difference, or is this just a thing in media? I’m Dutch myself and I believe we say I love you (Ik hou van je) a lot easier. | 0non-cybersec
| Reddit |
How can I make Internet Explorer 6 render Web pages like Internet Explorer 11?. <p>Now, I know that this may seem like a bad question in that I can just upgrade to Internet Explorer 8, but I am sticking with IE6 in that IE8 removes valuable features, like the ability to save favorites offline and the fact that a file path turns into a Windows Explorer window and typing a Web address into Windows Explorer changes it into an IE window.</p>
<p>I know that Internet Explorer 6 does a <em>really</em> bad job at rendering some pages. I know of the Google Chrome Frame extension that brings Chrome-style rendering into IE, but that will soon be discontinued. So, I tried another thing: I know that <code>C:\Windows\System32\mshtml.dll</code> contains the Trident rendering engine that is used by IE, so I tried something: I first backed up the original file by renaming it on Windows XP to <code>mshtml-old.dll</code>, then I tried to copy in the DLL from a computer running Windows 7 with Internet Explorer 10. I noticed that, after copying, the system had replaced the new DLL with the old one, but left the one I backed up intact.</p>
<p>Is there any way I can get the system to not replace the DLL like that so that I can transfer in IE11's <code>mshtml.dll</code> into Windows XP and make IE6 render like IE11?</p>
<p>I'm looking for an answer that describes how to tweak my system to make IE6 render like IE11 (or IE10), not one that tells me to upgrade IE or install another browser. I don't care how tedious the method is, just as long as it works.</p>
<blockquote>
<p>In case you think that I am on outdated hardware, the Windows XP machine is actually Windows XP Mode running on Windows 7. The <em>real</em> reason why I don't want to switch that to another browser is because I want to experiment.</p>
</blockquote>
| 0non-cybersec
| Stackexchange |
Geometric Meaning of Conditions on Curve Shortening. <p>The following shows the meaning of the notations.
<a href="https://i.stack.imgur.com/c3k3o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c3k3o.png" alt="enter image description here" /></a></p>
<p>Here is the definition of curve shortening.
<a href="https://i.stack.imgur.com/d1qh2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d1qh2.png" alt="enter image description here" /></a></p>
<p>I can understand the geometric meaning of (1) & (2), but not (3) & (4). Can someone explain more on the geometric meaning of (3) & (4). Thanks!</p>
| 0non-cybersec
| Stackexchange |
The Hype Train is making a return stop, Home [OC]. | 0non-cybersec
| Reddit |
How to Disable Apport Error Reporting in Ubuntu 16.04 LTS. <p>Well, I use ubuntu 16.04 LTS.
Once I could not change the input language. I tried to reboot, and the system dropped into initramfs.
I tried a forced fsck, which fixed some issues, but when I rebooted again I always got an apport report and a failure to download updates. Again reboot, initramfs, forced fsck ...
How to disable apport reports on ubuntu 16.04 LTS?
Both methods from <a href="https://askubuntu.com/questions/615478/constant-startup-error-issue-what-is-com-ubuntu-apport-support-gtk-root">constant-startup-error-issue-what-is-com-ubuntu-apport-support-gtk-root</a> do not work for 16.04.</p>
| 0non-cybersec
| Stackexchange |
Recovering the SDE of Vasicek model.. <p>Suppose we have the solution to the ordinary Vasicek model:</p>
<p><span class="math-container">$$r_t = r_0 e^{-a t} + b(1 - e^{-a t}) + \sigma \int^{t}_0 e^{-a (t-s) } dW_s$$</span></p>
<p>How do I use the Ito's lemma to recover the SDE</p>
<p><span class="math-container">$$dr_t = a(b - r_t)dt + \sigma dW_t$$</span></p>
<p>Thank you for your help.</p>
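<p>For reference, one way to carry out the computation (a sketch: take the stated solution as given, write the stochastic-integral term as $\sigma e^{-at}\int_0^t e^{as}\,dW_s$, and differentiate with the product rule; the factor $e^{-at}$ has finite variation, so no Itô correction term appears):</p>

```latex
dr_t = -a r_0 e^{-at}\,dt + ab\, e^{-at}\,dt
       - a\Big(\sigma e^{-at}\int_0^t e^{as}\,dW_s\Big)dt + \sigma e^{-at}e^{at}\,dW_t
     = -a\Big(r_0 e^{-at} + b(1-e^{-at}) + \sigma e^{-at}\int_0^t e^{as}\,dW_s - b\Big)dt + \sigma\,dW_t
     = a(b - r_t)\,dt + \sigma\,dW_t .
```

<p>Here the second equality uses $ab\,e^{-at} = -a\big(b(1-e^{-at}) - b\big)$, and the bracketed sum is exactly $r_t$.</p>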
| 0non-cybersec
| Stackexchange |
How to import node_modules in typescript using Vue CLI 3 created project?. <p>I have created a Vue.js project using Vue CLI 3 and enabled typescript and I'm trying to get Cesium working.</p>
<p>I have performed the following:</p>
<pre><code>npm install cesium
npm install @types/cesium
</code></pre>
<p>But when I perform a,</p>
<pre><code>npm run serve
</code></pre>
<p>I see the cesium globe fine, except that in VS Code, I get</p>
<pre><code>Cannot find module cesium/Cesium
</code></pre>
<p>For all my imports of Cesium</p>
<p>The relevant files can be found below.</p>
<p>Cesium.vue:</p>
<pre><code><template>
<div id="cesiumContainer"></div>
</template>
<script lang='ts'>
import Vue from 'vue';
import Cesium from 'cesium/Cesium';
export default Vue.extend({
name: 'Cesium',
data() {
return {
// viewer: null,
};
},
mounted() {
let viewer = new Cesium.Viewer('cesiumContainer', {
imageryProvider: Cesium.createTileMapServiceImageryProvider({
url: Cesium.buildModuleUrl('Assets/Textures/NaturalEarthII'),
}),
baseLayerPicker: false,
geocoder: false,
// requestRenderMode: true
// skyBox: false
});
},
});
</script>
<style>
#cesiumContainer {
/* width: 100%; */
/* height: 100%; */
width: 1024;
height: 768;
margin: 0;
padding: 0;
overflow: hidden;
}
</style>
</code></pre>
<p>main.ts:</p>
<pre><code>import Vue from 'vue';
import App from './App.vue';
import router from './router';
import store from './store';
import Cesium from 'cesium/Cesium';
// noinspection ES6UnusedImports
import widget from 'cesium/Widgets/widgets.css';
Vue.use(Cesium);
Vue.use(widget);
Vue.config.productionTip = false;
new Vue({
router,
store,
render: h => h(App),
}).$mount('#app');
</code></pre>
<p>vue.config.js:</p>
<pre><code>const CopyWebpackPlugin = require('copy-webpack-plugin');
const webpack = require('webpack');
const path = require('path');
const debug = process.env.NODE_ENV !== 'production';
let cesiumSource = './node_modules/cesium/Source';
let cesiumWorkers = '../Build/Cesium/Workers';
module.exports = {
baseUrl: '',
devServer: {
port: 8080,
},
configureWebpack: {
output: {
sourcePrefix: ' ',
},
amd: {
toUrlUndefined: true,
},
resolve: {
alias: {
vue$: 'vue/dist/vue.esm.js',
'@': path.resolve('src'),
cesium: path.resolve(__dirname, cesiumSource),
},
},
plugins: [
new CopyWebpackPlugin([
{ from: path.join(cesiumSource, cesiumWorkers), to: 'Workers' },
]),
new CopyWebpackPlugin([
{ from: path.join(cesiumSource, 'Assets'), to: 'Assets' },
]),
new CopyWebpackPlugin([
{ from: path.join(cesiumSource, 'Widgets'), to: 'Widgets' },
]),
new CopyWebpackPlugin([
{
from: path.join(cesiumSource, 'ThirdParty/Workers'),
to: 'ThirdParty/Workers',
},
]),
new webpack.DefinePlugin({
CESIUM_BASE_URL: JSON.stringify('./'),
}),
],
module: {
unknownContextCritical: /^.\/.*$/,
unknownContextCritical: false,
},
},
};
</code></pre>
<p>my tsconfig.json includes:</p>
<pre><code>{
"compilerOptions": {
"types": ["cesium"],
}
</code></pre>
<p>What am I doing wrong??</p>
| 0non-cybersec
| Stackexchange |
What happens when minmax is: minmax(auto, auto). <p>In the bottom of this article (<a href="https://bitsofco.de/how-the-minmax-function-works/" rel="nofollow noreferrer">How the minmax() Function Works</a>) it says for <code>minmax(auto, auto)</code>:</p>
<blockquote>
<p>If used as a maximum, the auto value is equivalent to the max-content value. If used as a minimum, the auto value represents the largest minimum size the cell can be. This “largest minimum size” is different from the min-content value, and specified by min-width/min-height.</p>
</blockquote>
<p>Would someone mind elaborating on the difference between <code>min-content</code> and the auto here that’s ‘specified by min-width/min-height’?</p>
<p>I understand min-content to be smallest possible width the cell can be that does not lead to an overflow. What does ‘specified by min-width/min-height’ mean?</p>
<p>Thanks</p>
| 0non-cybersec
| Stackexchange |
Subaru STI Sedan... convertible?. | 0non-cybersec
| Reddit |
How to use spot instance with amazon elastic beanstalk?. <p>I have one infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.</p>
<p>So I created a second autoscaling group from a launch configuration with spot instances.
The autoscaling group uses the same load balancer created by Beanstalk.</p>
<p>To bring up instances with the latest version of my app, I copied the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This works fine, but:</p>
<ol>
<li><p>how do I update the spot instances launched by the second autoscaling group when Beanstalk rolls out a new version of the app to the instances it manages?</p>
</li>
<li><p>is there another way, as easy and elegant, to use spot instances while still enjoying the benefits of Beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk has supported spot instances since 2019... see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
| 0non-cybersec
| Stackexchange |
Jemaine Clement on his new TV show 'Wellington Paranormal', What We Do In The Shadows spin-off. | 0non-cybersec
| Reddit |
I went backpacking today and had my transitional glasses in the mesh part of my backpack. | 0non-cybersec
| Reddit |
Does the sizeof operator work differently in C and C++?. <p>I have written a small printf statement that behaves differently in C and C++:</p>
<pre><code> int i;
printf ("%d %d %d %d %d \n", sizeof(i), sizeof('A'), sizeof(sizeof('A')), sizeof(float), sizeof(3.14));
</code></pre>
<p>The output for the above program in c using gcc compiler is 4 4 8 4 8</p>
<p>The output for the above program in c++ using g++ compiler is 4 1 8 4 8</p>
<p>I expected 4 1 4 4 8 in c. But the result is not so. </p>
<p>The third parameter in the printf sizeof(sizeof('A')) is giving 8</p>
<p>Can anyone give me the reasoning for this?</p>
| 0non-cybersec
| Stackexchange |
Solution to differential equation of function of two variables. <p>Very simple question about differential equations, but I couldn't find anything online. </p>
<p>Let $f(x,z)$ be a function of two variables that satisfies:</p>
<p>$af+bf_x+cf_z+df_{xx}+ef_{zz}+gf_{xz}=q(x,z)$</p>
<p>where $q(x,z)$ is some known function, $a,b,c,d,e,g$ are constants. </p>
<p>How can I solve this, i.e., find an expression for $f(x,z)$ (given boundary conditions)? </p>
<p>Is there some reference where I can find general solution to differential equations of functions of two variables?</p>
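<p>Not a full answer, but the standard first step for a constant-coefficient second-order equation like this one is to classify it by its principal (second-order) part $df_{xx} + gf_{xz} + ef_{zz}$, since the type determines which solution techniques and boundary/initial conditions are appropriate:</p>

```latex
g^2 - 4de \;
\begin{cases}
< 0 & \text{elliptic (Laplace/Poisson-type)}\\
= 0 & \text{parabolic (heat-type)}\\
> 0 & \text{hyperbolic (wave-type)}
\end{cases}
```

<p>After a linear change of variables removing the mixed term (and an exponential substitution removing the first-order terms), the equation reduces to a canonical form whose solutions under given boundary conditions are treated in standard PDE references (separation of variables/Fourier methods, characteristics, Green's functions).</p>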
| 0non-cybersec
| Stackexchange |
I'm trying to remove the background from an image (using inkscape). <p>I'm trying to create infographs - how can I remove the background of a photo? If (for instance) it was a photo of Obama - how would I go about modifying the photo so it was just his head and torso?</p>
| 0non-cybersec
| Stackexchange |
Why the United States needs a new pandemic-fighting federal agency. | 0non-cybersec
| Reddit |
The judge will decide your fate. | 0non-cybersec
| Reddit |
Use resources in unit tests with Swift Package Manager. <p>I'm trying to use a resource file in unit tests and access it with <code>Bundle.path</code>, but it returns nil.</p>
<p>This call in MyProjectTests.swift returns nil:</p>
<pre><code>Bundle(for: type(of: self)).path(forResource: "TestAudio", ofType: "m4a")
</code></pre>
<p>Here is my project hierarchy. I also tried moving <code>TestAudio.m4a</code> to a <code>Resources</code> folder:</p>
<pre><code>├── Package.swift
├── Sources
│ └── MyProject
│ ├── ...
└── Tests
└── MyProjectTests
├── MyProjectTests.swift
└── TestAudio.m4a
</code></pre>
<p>Here is my package description:</p>
<pre><code>// swift-tools-version:4.0
import PackageDescription
let package = Package(
name: "MyProject",
products: [
.library(
name: "MyProject",
targets: ["MyProject"])
],
targets: [
.target(
name: "MyProject",
dependencies: []
),
.testTarget(
name: "MyProjectTests",
dependencies: ["MyProject"]
),
]
)
</code></pre>
<p>I am using Swift 4 and the Swift Package Manager Description API version 4.</p>
| 0non-cybersec
| Stackexchange |
Linear system and subspaces. <p>Let $S $ be a subspace of $R^n$ with dimension k and $m = n-k.$ Show that </p>
<p>$$\exists A \in R^{m\times n}, b\in R^m$$</p>
<p>Such that</p>
<p>$$S = \{ x \in R^n : Ax = b\}$$</p>
<p>My attempt consists of getting $m$ "free variables", i.e., choosing $m$ rows of a vector in $S$ and creating a linear system from them. But I am having difficulties organizing it.</p>
<p>Thanks!</p>
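<p>A sketch of one clean way to organize it, using the orthogonal complement instead of free variables: since $S$ is a subspace it contains $0$, which forces $b = 0$, and the rows of $A$ can be taken to be a basis of $S^{\perp}$:</p>

```latex
\dim S^{\perp} = n - k = m. \quad
\text{Let } v_1,\dots,v_m \text{ be a basis of } S^{\perp}
\text{ and } A = \begin{pmatrix} v_1^{T} \\ \vdots \\ v_m^{T} \end{pmatrix} \in \mathbb{R}^{m \times n}.
```

<p>Then $Ax = 0 \iff x \perp v_i$ for all $i \iff x \in (S^{\perp})^{\perp} = S$, so $S = \{x \in \mathbb{R}^n : Ax = 0\}$.</p>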
| 0non-cybersec
| Stackexchange |
what can i say, we all have a little GGG in us. | 0non-cybersec
| Reddit |
Google chrome/chromium browser does not work with x2go client side. Fix needed?. <p>I am using x2go to access my Ubuntu system remotely. The x2go server is running on Ubuntu 18.04 and I am using the x2go client on Windows 10.
Google chrome and chromium browsers both work fine when using the Ubuntu system, as well as Firefox. However, when I use the chrome/chromium browser using x2go, they do not work. I get the following error:</p>
<p><a href="https://i.stack.imgur.com/b0m2K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b0m2K.png" alt="Site unreachable error"></a></p>
<p>And then I am redirected to the following error:</p>
<p><a href="https://i.stack.imgur.com/iAb1U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iAb1U.png" alt="No internet - error message"></a></p>
<p>Firefox works fine. <code>sudo apt-get package-name</code> works fine (under proxy). I am using proxy (password protected) and have used </p>
<pre><code>export {http,https}_proxy=http://username:password@domain:port
</code></pre>
<p>but with no success. Also, I can not change the proxy settings of chrome from settings as it says that system proxy settings are used.</p>
<p>Both Chrome and Firefox use system proxy settings but Firefox works while Chrome doesn't. Is there any fix for this?</p>
| 0non-cybersec
| Stackexchange |
Find basis and dimension of $V,W,V\cap W,V+W$ where $V=\{p\in\mathbb{R_4}(x):p'(0) = p(1)=p(0)=p(-1)\},W=\{p\in\mathbb{R_4}(x):p(1)=0\}$. <p>Find basis and dimension of $V,W,V\cap W,V+W$ where $V=\{p\in\mathbb{R_4}(x):p'(0) =p(1)=p(0)=p(-1)\},W=\{p\in\mathbb{R_4}(x):p(1)=0\}$</p>
<p>Could someone give a hint how to get general representation of a vector in $V$ and $W$?</p>
<p>$\mathbb{R}_4(x)$ is the set of polynomials $p(x)=ax^3+bx^2+cx+d$. </p>
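<p>As a hint (just translating the constraints into linear equations, not the full computation): for $p(x) = ax^3 + bx^2 + cx + d$ the four quantities being equated are linear in the coefficients,</p>

```latex
p'(0) = c, \qquad p(0) = d, \qquad p(1) = a+b+c+d, \qquad p(-1) = -a+b-c+d,
```

<p>so $V$ is the solution set of the system $c = d$, $a+b+c = 0$, $-a+b-c = 0$, while $W$ is given by the single equation $a+b+c+d = 0$. Solving these small homogeneous systems yields the general representation of a vector in each space, hence a basis and the dimension.</p>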
| 0non-cybersec
| Stackexchange |
A website that analyzes your face and assigns you a numerical rating based on your face's symmetry. | 0non-cybersec
| Reddit |
Average size of the smallest piece for a bar broken in 3 pieces. <p>Frederick Mosteller, 'Fifty Challenging Problems in Probability', Q.43:
<blockquote>A bar is broken at random in 2 places. Find the average size of the smallest, middle, and largest pieces.</blockquote>
<p>I would like to discuss the solution offered in the book for working out the average size of the smallest segment, which says (summarized):</p>
<blockquote>
We might as well work with a bar of unit length $1$. Let $X$ and $Y$ be the positions of the 2 points dropped.
For convenience let us suppose that $X \lt Y$.
<br><br>
If we want to get the distribution of the smallest piece, then either $X$, $Y-X$ or $1-Y$ is smallest.
Let us suppose $X$ is smallest, then:
$$X \lt Y-X \text{ ie } 2X \lt Y$$
and
$$X \lt 1-Y \text{ ie } X+Y \lt 1$$
this corresponds to the triangle shown in the picture, therefore the mean of X over that area will correspond to the X-coordinate of the centroid of that triangle, which as we know from plane geometry is $1 \over 3$ of the way from the base to the opposite vertex, here the point $(\frac{1}{3},\frac {2}{3})$ , therefore $\frac{1}{3} * \frac{1}{3} = \frac{1}{9}$
<br><br>
Therefore the mean of the smallest segment is $\frac{1}{9}$
</blockquote>
<p><a href="https://i.stack.imgur.com/5BJA8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5BJA8.png" alt="enter image description here"></a></p>
<p>My question is: even restricting ourselves to the area $X < Y$, why don't we need to consider the cases where Y-X is smallest, and 1-Y is smallest?</p>
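<p>Whatever one makes of the symmetry question, the $\frac{1}{9}$ value itself is easy to sanity-check numerically; here is a quick Monte Carlo sketch (the function name and trial count are my own choices). Note the simulation averages over all three cases at once, so its agreement with $\frac{1}{9}$ is consistent with each case contributing equally by symmetry.</p>

```python
import random

def mean_smallest(n_trials=200_000, seed=42):
    """Estimate the expected length of the smallest piece of a unit bar
    broken at two independent, uniformly random points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x, y = sorted((rng.random(), rng.random()))
        total += min(x, y - x, 1.0 - y)  # the three pieces
    return total / n_trials

print(mean_smallest())  # close to 1/9 ≈ 0.1111
```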
| 0non-cybersec
| Stackexchange |
DVI KVM switch for a 30" monitor?. <p>I have two machines to which I would like to attach a Dell 30" monitor via a KVM switch. Most KVM switches don't support the maximum resolution of the monitor (2560x1600). The only solution I have been able to find is the <a href="http://www.belkin.com/flip/" rel="nofollow">Belkin Flip</a> that on a good day leaves strange artifacts in the signal, and at worst is useless.</p>
<p>Is there a better KVM switch than the Flip for large displays?</p>
| 0non-cybersec
| Stackexchange |
java.lang.IllegalArgumentException: Can only use lower 16 bits for requestCode. <p>I am writing an application where <code>Activity A</code> launches <code>Activity B</code> using </p>
<pre><code>startActivityForResult(intent, -101);
</code></pre>
<p>but when called, it responded back with following error log:</p>
<pre><code>E/AndroidRuntime( 1708): java.lang.IllegalArgumentException: Can only use lower 16 bits for requestCode
E/AndroidRuntime( 1708): at android.support.v4.app.FragmentActivity.startActivityForResult(FragmentActivity.java:837)
</code></pre>
<p>Probably it could be <strong>-101</strong> but I am not sure. Does any one have any idea on this?</p>
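<p>For context on the message (a hedged sketch of the cause: the support library's FragmentActivity reserves the upper 16 bits of the request code for its own fragment indexing, so it rejects any value with high bits set, and a negative Java int always has them set in two's complement). The check can be mimicked outside Java like this (helper name is mine):</p>

```python
def uses_only_lower_16_bits(request_code: int) -> bool:
    """Mimic the support library's check on a 32-bit Java int."""
    as_java_int = request_code & 0xFFFFFFFF  # two's-complement 32-bit view
    return (as_java_int >> 16) == 0

print(uses_only_lower_16_bits(101))   # True: a valid requestCode
print(uses_only_lower_16_bits(-101))  # False: the sign bits fill the upper half
```

<p>So the practical fix is to pass a non-negative requestCode below 0x10000 (65536).</p>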
| 0non-cybersec
| Stackexchange |
The network connection in my Windows 10 parallels VM isn't working, what is causing this?. <p>I installed Parallels VM on my Mac and created a Windows 10 VM. I have a connection problem in the Windows 10 VM but not on my Mac. </p>
<p>I get the following error: </p>
<p><code>There is no network adapter on your Mac for the "Parallels Shared #0" virtual network. The network adapter 0 will be disconnected.</code></p>
<p>What could cause this?</p>
| 0non-cybersec
| Stackexchange |
Tracking the time since last keystroke?. <p>This is my completed keylogger as of right now. I have already posted this question before but it was really hard to iterate myself. On_press and On_release are the two main functions in this. They both track one keystroke. I need to track the time it takes between keystrokes, and I am not totally sure how I would get this done. I had the thought that I could track to see the time in between the string appends. I need to be able to see the time in between keystrokes because if that is longer than a certain period of time (ten seconds), I want the string which houses the keystrokes (keys) to be cleared. Thank y'all!</p>
<pre><code>import time
from pynput.keyboard import Key, Listener

keys = []
count = 0
last_keystroke = time.time()  # time of the previous key event

def on_press(key):
    global keys, count, last_keystroke
    # clear the buffer if more than ten seconds passed since the last keystroke
    if time.time() - last_keystroke > 10:
        keys = []
    keys.append(str(key).replace("'", '').replace("Key.space", ' ').replace("Key.shift", "").lower())
    count += 1
    print(keys)

def on_release(key):
    global last_keystroke
    last_keystroke = time.time()
    if key == Key.esc:
        return False

with Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()
</code></pre>
| 0non-cybersec
| Stackexchange |
Oops.. | 0non-cybersec
| Reddit |
Zedd - Papercut (feat. Troye Sivan)[Electro/Progressive House](2015). | 0non-cybersec
| Reddit |
To the female Redditor who called us creepy.. | 0non-cybersec
| Reddit |
How to automatically grant permissions to application created databases in SQL Server 2008 R2?. <p>So I have an application that automatically creates a new database after the database it is currently in reaches a fixed number of rows. I need to be able to automatically assign a user within the installation the db_datareader role for the automatically created database. I'm kind of at a loss here, any help is greatly appreciated.</p>
| 0non-cybersec
| Stackexchange |
Differences between a server-level firewall and AWS Security Groups?. <p>I was wondering if anyone could give some background on differentiating between a server firewall and AWS Security Groups?</p>
| 0non-cybersec
| Stackexchange |
3 monitors on a GT 430?. <p>Will a card like the <a href="http://www.guru3d.com/article/geforce-gt-430-review/" rel="nofollow">GT 430</a> which has 1x dvi, 1x vga, and 1x hdmi ports allow me to connect 3 monitors to it giving me a large desktop spanning 3 screens?</p>
<p>Assuming I have 3 vga monitors and get the following adapters for the graphics card:</p>
<pre><code>dvi to vga
hdmi to vga
</code></pre>
| 0non-cybersec
| Stackexchange |
Embedding knitr tex files on a bigger latex file. <p>There are already some questions about how to use knitr's LaTeX output within a new knitr document, using RStudio and include or input and declaring some chunks as child.</p>
<p>But what if I want to use a simple knitr output (such as an R xtable) from within an external LaTeX program, such as TeXStudio to build a larger project?</p>
<p>For example, I could create a mytable.rnw file and generate this simple table</p>
<pre><code>\documentclass{article}
\begin{document}
<<r table2, results='asis', message=T, echo=F>>=
library(xtable)
print(
xtable(
head(iris),
caption = 'Iris data'
),
comment = FALSE,
type = 'latex'
)
@
\end{document}
</code></pre>
<p><a href="https://i.stack.imgur.com/eFzF0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eFzF0.png" alt="enter image description here"></a></p>
<p>And now I want to grab the generated mytable.tex file and use it in a bigger document from TexStudio, in the same folder.</p>
<pre><code>\documentclass{article}
\begin{document}
First table:
\include{mytable}
Second table:
\include{mytable}
\end{document}
</code></pre>
<p>(Maybe it is better to use input)<br>
When I try to compile it I get many errors:</p>
<pre><code>Can be used only in preamble. \documentclass
Can be used only in preamble. \documentclass{article}\usepackage
Can be used only in preamble. ...ss{article}\usepackage[]{graphicx}\usepackage
Undefined control sequence. \definecolor
Can be used only in preamble. \usepackage
Undefined control sequence. \definecolor
Undefined control sequence. \definecolor
Undefined control sequence. \definecolor
Undefined control sequence. \definecolor
Can be used only in preamble. \usepackage
Can be used only in preamble. ...eExists{upquote.sty}{\usepackage{upquote}}{}
Can be used only in preamble. \begin{document}
</code></pre>
<p>I guess the problem arises because knitr output includes a lot of preamble information that can't be included within the document body: not only usepackage directives but also a lot of fine-tuning information about colors, tables...</p>
<p>What's the proper way to do it?
How do I force RStudio's knitr to output only the proper information?<br>
Or how do I force my main LaTeX document (from TeXStudio) to work around the problem?</p>
<p>If we have to do it many times, is there any easy way to include that preamble in the main TeX document without going through it manually one directive at a time?
Or even worse, some generated outputs may contain incompatible preambles.</p>
| 0non-cybersec
| Stackexchange |
Set PAC (Proxy Auto-Config) file via bash?. <p>All the info I have found online deals with the gui network manager. How do I set this value via terminal?</p>
| 0non-cybersec
| Stackexchange |
KOAN Sound album vinyls are already pressed!. | 0non-cybersec
| Reddit |
How do I disable a HDD/Pen drive LED blinking?. <p>I want a hardware configuration on Ubuntu to disable pen drive LED blinking.</p>
| 0non-cybersec
| Stackexchange |
How do I make SEO friendly URLs?. <p>My url is: <code>website.com/profile/?id=24</code></p>
<p>I want it to be: <code>website.com/profile/kevinlee</code></p>
<p>The <code>?id=24</code> will be replaced with the username of the <code>id=24</code> in the database</p>
<p>How do I convert it to that?</p>
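<p>The usual pattern is a server rewrite rule plus a slug-to-id lookup in the application. A framework-agnostic sketch of the lookup side in Python (the USERS table and function name are hypothetical stand-ins for your database layer):</p>

```python
import re

# hypothetical stand-in for the users table: id -> username
USERS = {24: "kevinlee"}
SLUG_TO_ID = {name: uid for uid, name in USERS.items()}

def resolve(path):
    """Map a pretty URL like /profile/kevinlee to the id the old ?id= URL carried."""
    m = re.fullmatch(r"/profile/(?P<slug>[A-Za-z0-9_-]+)", path)
    if m is None:
        return None
    return SLUG_TO_ID.get(m.group("slug"))

print(resolve("/profile/kevinlee"))  # 24
```

<p>On the server side the same idea is usually an Apache/nginx rewrite mapping <code>/profile/&lt;slug&gt;</code> to the old query-string URL; usernames then need to be unique and URL-safe.</p>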
| 0non-cybersec
| Stackexchange |
RelativeSource and Popup. <p>The problem is that <code>RelativeSource</code> does not work in the following case. I use silverlight 5.</p>
<pre><code>//From MainPage.xaml
<Grid x:Name="LayoutRoot" Background="White" Height="100" Width="200">
<Popup IsOpen="True">
<TextBlock Text="{Binding Path=DataContext, RelativeSource={RelativeSource AncestorType=Grid}}" />
</Popup>
</Grid>
//From MainPage.xaml.cs
public MainPage()
{
InitializeComponent();
DataContext = "ololo";
}
</code></pre>
<p>If I set a breakpoint on the binding, I'll get Error:</p>
<blockquote>
<p>System.Exception: BindingExpression_CannotFindAncestor.</p>
</blockquote>
<p>If I use <code>ElementName=LayoutRoot</code> instead of <code>RelativeSource</code>, everything will be OK.</p>
<p>Why does the relative source binding not work?</p>
| 0non-cybersec
| Stackexchange |
Drake - Sandra’s Rose. | 0non-cybersec
| Reddit |
CRM SDK OrganizationServiceContext null navigation properties on entity. <p>Trying to migrate an existing solution away from the deprecated Microsoft.Xrm.Client namespace to just use the generated service context from <code>CrmSvcUtil</code> using CrmSDK 9.0.0.5.</p>
<p>Previously we were using <code>Microsoft.Xrm.Client.CodeGeneration.CodeCustomization</code> to get a lazily loaded context.</p>
<p>I have two copies of the same solution and have been working through some of the API changes.</p>
<p>I have enabled Proxy Types</p>
<pre><code>client.OrganizationServiceProxy.EnableProxyTypes();
</code></pre>
<p>To my understanding, this switches it to behave in a lazy-loading manner. However, none of the navigation properties are loading as expected.</p>
<p>The few blog posts that I've found around this shift to <code>CrmServiceClient</code> etc suggest that even without lazy loading I should be able to load the property manually with a call to <code>Entity.LoadProperty()</code> which will either load the property or refresh the data. However, after doing that the navigation property is still null (specifically I'm trying to use a Contact navigation property). When I look through the <code>RelatedEntities</code> collection it is also empty.</p>
<p>I know that the entity has a related contact item as if I use a context generated with <code>Microsoft.Xrm.Client.CodeGeneration.CodeCustomization</code> it returns it and I can also see it in CRM itself using an advanced search.</p>
<pre><code>var connectionUri = ConfigurationManager.ConnectionStrings["Xrm"].ConnectionString;
var client = new CrmServiceClient(connectionUri);
client.OrganizationServiceProxy.EnableProxyTypes();
var context = new XrmServiceContext(client);
var result = context.new_applicationSet.FirstOrDefault(x => x.new_applicantid.Id == CurrentUserId);
//result.Contact is null
context.LoadProperty(result, "Contact");
//result.Contact is still null
//result.RelatedEntities is empty
</code></pre>
| 0non-cybersec
| Stackexchange |
Find the limit of the sequence involving integral. <blockquote>
<p>Let $f:[0,1] \rightarrow [0,1]$ be an increasing function and</p>
<p>$a_n=\int_{0}^{1} \frac {1+(f(x))^n}{1+(f(x))^{n+1}} \, dx \tag 1$</p>
<p>Prove $a_n$ is convergent and find the limit.</p>
</blockquote>
<hr>
<p>It's easy to prove that $a_n \ge 1$ and that $a_n$ is decreasing, therefore it is convergent. By taking $f$ identically zero, we get $a_n=1, \forall n$, hence the limit is $1$. If the limit does not depend on $f$, then I have to prove the limit is $1$ for all such functions, but I can't figure out how to do it. </p>
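<p>One standard route is dominated convergence (a sketch, not necessarily the intended elementary solution): since $0 \le f \le 1$, the integrand is squeezed between constants, and its pointwise limit is $1$.</p>

```latex
0 \le f \le 1 \;\Rightarrow\; f^{n+1} \le f^{n}
\;\Rightarrow\; 1 \le \frac{1+f(x)^{n}}{1+f(x)^{n+1}} \le 2 .
% Pointwise: if f(x) < 1 then f(x)^n \to 0, so the ratio \to 1;
% if f(x) = 1 the ratio is identically 1.
% Monotone functions are measurable, so dominated convergence
% (dominating function: the constant 2) gives
\lim_{n\to\infty} a_n = \int_0^1 1 \, dx = 1 .
```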
| 0non-cybersec
| Stackexchange |
Spinning class calorie calculation. I've been going to spinning class recently and the bike has a fancy computer and power meter. At the end of class I like to look at the "calories" number and be proud of the big number on the screen.
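For reference, the arithmetic the console does is simple physics — mechanical work in kJ is watts × seconds ÷ 1000. A rough sketch of the conversion (the example wattage and the 22% gross-efficiency figure are assumptions, not data from the bike):

```python
def ride_energy(avg_watts: float, minutes: float, efficiency: float = 0.22):
    """Return (mechanical work in kJ, rough estimate of kcal burned).

    efficiency is gross metabolic efficiency on a bike, typically ~20-25%
    (an assumed figure -- the console does not know yours).
    """
    work_kj = avg_watts * minutes * 60 / 1000   # 1 W = 1 J/s
    work_kcal = work_kj / 4.184                 # mechanical work expressed in kcal
    burned_kcal = work_kcal / efficiency        # metabolic cost estimate
    return work_kj, burned_kcal

# e.g. 45 minutes at an average of 150 W -> 405 kJ of mechanical work
work, burned = ride_energy(150, 45)
```

A convenient coincidence: 1/4.184 ≈ 0.24 nearly cancels the ~20-25% efficiency, so the kJ of work is numerically close to the kcal burned — which is why a power-meter-based "calories" number is usually in the right ballpark, unlike heart-rate-only estimates.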
So my question is simply, how accurate is that number? Theoretically it's a matter of simple physics, the bike is measuring my power output in watts and the time I'm working out so it can convert to calories pretty easily. Secondarily how does this compare with the number of calories I'm actually burning? | 0non-cybersec
| Reddit |
Ubuntu has been replaced by UEFI ST200.... : Partition 1 in boot sequence. <p>I recently upgraded my Ubuntu 18.04 LTS to 20.04 on my Dell laptop, which dual boots with Windows 10. After rebooting for the first time it led me to the GRUB menu, and when I selected Ubuntu the screen blacked out. Then I tried using the Dell OS Recovery Assistant to fix any issues. I don't know whether it was because of this or some other issue, but now the boot sequence shows UEFI ST200.... : Partition 1 and Windows instead of Windows and Ubuntu. I can't even see the GRUB menu. The Ubuntu option has been replaced by this UEFI option in the boot sequence. Please help.</p>
| 0non-cybersec
| Stackexchange |
Hamsters up my rectum song. | 0non-cybersec
| Reddit |
json formatting with moshi. <p>Does anyone know a way to get Moshi to produce multi-line JSON with indentation (for human consumption, in the context of a config.json)?
So from:</p>
<pre><code>{"max_additional_random_time_between_checks":180,"min_time_between_checks":60}
</code></pre>
<p>to something like this:</p>
<pre><code>{
"max_additional_random_time_between_checks":180,
"min_time_between_checks":60
}
</code></pre>
<p>I know other JSON-writer implementations can do so, but I would like to stick to Moshi here for consistency.</p>
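For what it's worth, Moshi can do this itself: <code>JsonAdapter.indent(String)</code> returns an adapter that pretty-prints (available in recent Moshi versions — verify against yours), and <code>JsonWriter.setIndent</code> does the same at the writer level. A sketch (<code>Config</code> is a hypothetical value class for the two fields shown above):

```java
Moshi moshi = new Moshi.Builder().build();
// indent("  ") returns a new adapter that writes two-space-indented JSON
JsonAdapter<Config> adapter = moshi.adapter(Config.class).indent("  ");
String pretty = adapter.toJson(new Config(180, 60));
```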
| 0non-cybersec
| Stackexchange |
SBT Scala cross versions, with aggregation and dependencies. <p>I am struggling with how <code>crossScalaVersions</code> works with subprojects.</p>
<p>I have a project that compiles with 2.10 (foo) and a project that compiles with 2.11 (bar). They share a cross compiled project (common).</p>
<p>How can I compile projects foo and bar?</p>
<hr>
<p><strong>build.sbt</strong></p>
<pre><code>lazy val root = (project in file(".")).aggregate(foo, bar).settings(
crossScalaVersions := Seq("2.10.4", "2.11.4")
)
lazy val foo = (project in file("foo")).dependsOn(common).settings(
crossScalaVersions := Seq("2.10.4"),
scalaVersion := "2.10.4"
)
lazy val bar = (project in file("bar")).dependsOn(common).settings(
crossScalaVersions := Seq("2.11.4"),
scalaVersion := "2.11.4"
)
lazy val common = (project in file("common")).settings(
crossScalaVersions := Seq("2.10.4", "2.11.4")
)
</code></pre>
<p><strong>project/build.properties</strong></p>
<pre><code>sbt.version=0.13.7
</code></pre>
<p><strong>foo/src/main/scala/Foo.scala</strong></p>
<pre><code>object Foo {
<xml>{new C}</xml>
}
</code></pre>
<p><strong>bar/src/main/scala/Bar.scala</strong></p>
<pre><code>case class Bar(a: C, b: C, c: C, d: C, e: C, f: C, g: C,
h: C, i: C, j: C, k: C, l: C, m: C, n: C, o: C, p: C,
q: C, r: C, s: C, t: C, u: C, v: C, w: C, x: C, y: C,
z: C)
</code></pre>
<p><strong>common/src/main/scala/Common.scala</strong></p>
<pre><code>class C {}
</code></pre>
<hr>
<p><strong>Attempt 1</strong></p>
<pre><code>$ sbt compile
[info] Resolving jline#jline;2.12 ...
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: common#common_2.11;0.1-SNAPSHOT: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] Note: Unresolved dependencies path:
[warn] common:common_2.11:0.1-SNAPSHOT
[warn] +- bar:bar_2.11:0.1-SNAPSHOT
sbt.ResolveException: unresolved dependency: common#common_2.11;0.1-SNAPSHOT: not found
</code></pre>
<p><strong>Attempt 2</strong></p>
<pre><code>$ sbt +compile
[error] /home/paul/test/bar/src/main/scala/Bar.scala:1: Implementation restriction: case classes cannot have more than 22 parameters.
[error] case class Bar(a: C, b: C, c: C, d: C, e: C, f: C, g: C,
[error] ^
[error] one error found
[error] (bar/compile:compile) Compilation failed
</code></pre>
<p><strong>Attempt 3</strong></p>
<pre><code>$ sbt foo/compile bar/compile
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: common#common_2.11;0.1-SNAPSHOT: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] Note: Unresolved dependencies path:
[warn] common:common_2.11:0.1-SNAPSHOT
[warn] +- bar:bar_2.11:0.1-SNAPSHOT
sbt.ResolveException: unresolved dependency: common#common_2.11;0.1-SNAPSHOT: not found
</code></pre>
<p><strong>Attempt 4</strong></p>
<pre><code>$ sbt +foo/compile +bar/compile
[error] /home/paul/test3/foo/src/main/scala/Foo.scala:2: To compile XML syntax, the scala.xml package must be on the classpath.
[error] Please see http://docs.scala-lang.org/overviews/core/scala-2.11.html#scala-xml.
[error] <xml>{new C}</xml>
[error] ^
[error] one error found
[error] (foo/compile:compile) Compilation failed
</code></pre>
<p><strong>Attempt 5</strong></p>
<p>I even tried defining <code>common_2_10</code> and <code>common_2_11</code> projects with that same base directory but different scala versions. I recall reading that targets are namespaced by Scala version, but SBT says there is a conflict.</p>
<pre><code>$ sbt
[error] Overlapping output directories:/home/paul/test3/common/target:
[error] ProjectRef(file:/home/paul/test3/,common_2_10)
[error] ProjectRef(file:/home/paul/test3/,common_2_11)
</code></pre>
<hr>
<p>The only thing I've gotten to work is manually specifying versions:</p>
<pre><code>$ sbt ++2.10.4 foo/compile ++2.11.4 bar/compile
</code></pre>
<p>But this is a lot of commands, it can never use parallelism, and it obviates the whole point of (1) project aggregation and (2) cross building.</p>
<p>Am I missing something fundamental about the intent of <code>crossScalaVersions</code>? Or is there a way to have it play well with the rest of SBT, and for me to compile my heterogeneous projects?</p>
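For the record, this is a known sbt 0.13 limitation: <code>+</code> applies the root's <code>crossScalaVersions</code> to every aggregated project rather than consulting each subproject's own list. The <code>sbt-doge</code> plugin was written for exactly this, and sbt 1.x builds the behavior in. A hedged sketch of the sbt 1.x convention (verify against your sbt version):

```scala
// build.sbt — sketch for sbt 1.x, where `+compile` respects each
// subproject's own crossScalaVersions during aggregation
lazy val root = (project in file("."))
  .aggregate(foo, bar)
  .settings(
    crossScalaVersions := Nil   // root itself builds for no Scala version
  )
```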
| 0non-cybersec
| Stackexchange |
I just need everything to be better.. I foster for a rescue, and last week I pulled a ~12 week old shepherd mix puppy from the pound, who two days later tested positive for parvovirus. She has excellent vet care, but it's still touch and go at the moment.
Firstly, I need her to be better. She's not bad enough to require full time hospitalization yet, but if she doesn't improve she will be soon. We caught it super early on Thursday, before she was even a little dehydrated, and while Friday was rough, she started perking up Saturday, and was almost back to normal yesterday. We thought the worst was over. Until last night when she started puking and couldn't stop. We're treating her as aggressively as possible. I'm with her around the clock, she's on iv fluids, meds, supplements. You name it, she's on it. But her tiny little body is tired, I don't know how much fight she has left in her. I just need her to be better.
Second, I need the shelter she came from to be better. This puppy is one of now 4 we have pulled from there to test positive for parvo this week. They just don't give a fuck. When we called them after the second one tested positive, they just offered to put them down if we didn't want to try and save them. We're a rescue, of course we'll try to save them! They don't care. We found out that they switch dogs musical chairs style into different kennels while they clean. Might as well just throw them all in together then! In a place where dogs from all manner of backgrounds and medical history are housed together, there are much better ways to prevent the spread of disease.
One of the dogs who tested positive sat in there for four fucking days with a leg so badly broken that it will require surgery. They were making him wait out his hold before letting us get him. They had him in a wire bottomed cage, so they didn't have to do anything but pull a tray out to clean, with his broken leg just dangling there. No help at all.
Lastly, I need people to be better. I need people to be better to their dogs. Wherever this shepherd puppy came from, you can guarantee she didn't get any vaccinations because someone didn't want to spend the money. Then they took her to the sure death of the pound because they had changed their minds about her. The dog with the broken leg? Owner fucking surrendered. His owner didn't want to pay his vet bills, so he took him to the pound to die. Another dog we pulled this week has two tick diseases, heart worms, and is still lactating from giving birth. Where are the puppies? We don't know, the owner didn't bring them with her to the pound to die.
The sheer amount of dogs that are waiting there hoping someone will save them makes me cry. Old dogs who've been with their family since puppyhood. Puppies that haven't even gotten a chance to live yet. Injured and sick dogs whose people have thrown them away like a broken toy. They all sit and wait for the person that they love more than anything on this earth to come save them, but they don't, and they die cold and alone with no understanding of why.
I'm exhausted and I'm angry. I need everything to be better.
In case anyone is wanting to see her, this is my little fighter [Stevie Nicks](http://imgur.com/vzxHXNQ). I name all my fosters after rock stars. | 0non-cybersec
| Reddit |
Turn Off Object Caching in Entity Framework CTP5. <p>I am having trouble figuring out something with the Entity Framework Code First stuff in CTP 5. It is doing caching of objects and I don't want it to. For example, I load a page (working with an ASP.NET MVC site) which loads an object. I then go change the database. I re-load the page and the changes are not reflected. If I kill the site and rerun it then it obviously re-fetches. How do I, either generally for a type, or even for a particular query, tell it to always go get a new copy. I think it might have something to do with MergeOption but I'm having trouble finding examples that work with CTP 5. Thanks.</p>
| 0non-cybersec
| Stackexchange |
How to fix unbalanced core muscles?. I've been keeping to a healthy diet and hitting the gym quite frequently for the past 6 months, and recently I've noticed that my abdominal muscles have become unbalanced, with what looks like a "7 pack". Any tips on how to fix this, or should I just work one side more than the other? (Although I believe it's more of a fat loss issue.) | 0non-cybersec
| Reddit |
[FANART] Platelets mini supportive comic by me. | 0non-cybersec
| Reddit |
How to get the keys to keep their bounciness?. <p>I recently got a new MacBook Pro and my keyboard was great until it wasn't. Some of the keys have lost their bounce and now they stand out. I think it's the rubber suction piece that has become weakened from use over time.</p>
<p>How can I get the bounciness back in the keys? Does it mean that I have to replace the suction cup piece on the key?</p>
| 0non-cybersec
| Stackexchange |
HTTP request for XML file. <p>I'm trying to use Flurry Analytics for my program on Android and I'm having trouble getting the xml file itself from the server.</p>
<p>I'm getting close: under the System.out tag in LogCat I can see about half of the response for some reason, and it says "XML Passing Exception = java.net.MalformedURLException: Protocol not found: ?xml version = 1.0 encoding="UTF-8"" etc... until about half way through my XML. Not sure what I'm doing wrong; I'm sending an HTTP GET with a header requesting application/xml, and it's not working properly. Any help is appreciated!</p>
<pre><code>try {
//HttpResponse response = client.execute(post);
//HttpEntity r_entity = response.getEntity();
//String xmlString = EntityUtils.toString(r_entity);
HttpClient client = new DefaultHttpClient();
String URL = "http://api.flurry.com/eventMetrics/Event?apiAccessCode=????&apiKey=??????&startDate=2011-2-28&endDate=2011-3-1&eventName=Tip%20Calculated";
HttpGet get = new HttpGet(URL);
get.addHeader("Accept", "application/xml");
get.addHeader("Content-Type", "application/xml");
HttpResponse responsePost = client.execute(get);
HttpEntity resEntity = responsePost.getEntity();
if (resEntity != null)
{
System.out.println("Not null!");
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
String responseXml = EntityUtils.toString(responsePost.getEntity());
Document doc = db.parse(responseXml);
doc.getDocumentElement().normalize();
NodeList nodeList = doc.getElementsByTagName("eventMetrics");
for (int i = 0; i < nodeList.getLength(); i++)
{
Node node = nodeList.item(i);
Element fstElmnt = (Element) node;
NodeList nameList = fstElmnt.getElementsByTagName("day");
Element dayElement = (Element) nameList.item(0);
nameList = dayElement.getChildNodes();
countString = dayElement.getAttribute("totalCount");
System.out.println(countString);
count = Integer.parseInt(countString);
System.out.println(count);
count += count;
}
}
} catch (Exception e) {
System.out.println("XML Passing Exception = " + e);
}
</code></pre>
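The <code>MalformedURLException: Protocol not found: ?xml version...</code> is the giveaway: <code>DocumentBuilder.parse(String)</code> treats its argument as a URI, not as XML text, so it tries to open the response body as a URL. Wrap the string in an <code>InputSource</code> instead. A self-contained sketch with a canned payload (the real call needs valid Flurry keys, so the HTTP part is omitted):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class ParseFromString {
    public static void main(String[] args) throws Exception {
        // canned stand-in for EntityUtils.toString(responsePost.getEntity())
        String responseXml = "<eventMetrics><day totalCount=\"7\"/></eventMetrics>";

        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        // parse(String) would treat this as a URI; InputSource reads the text itself
        Document doc = db.parse(new InputSource(new StringReader(responseXml)));
        doc.getDocumentElement().normalize();

        Element day = (Element) doc.getElementsByTagName("day").item(0);
        System.out.println(day.getAttribute("totalCount"));
    }
}
```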
| 0non-cybersec
| Stackexchange |
Is Pontryagin dual of an abelian torsion group strongly complete?. <p>Let <span class="math-container">$A$</span> be an ablian torsion group. Pontryagin dual of <span class="math-container">$A$</span> is Hom<span class="math-container">$(A,\mathbb{Q}/\mathbb{Z})$</span>. Is it strongly complete, i.e. every subgroup of finite index is open? </p>
| 0non-cybersec
| Stackexchange |
Enable full-screen mode for applications on Lion. <p>There are some applications which I cannot yet run in full-screen mode. I particularly need the OS X Terminal app and possibly TextEdit. However, after the upgrade they remained the same, and do not have the full-screen button in the top right corner. How can I enable Lion-style full-screen mode for these applications?</p>
| 0non-cybersec
| Stackexchange |
Apple Keyboard Modded with Wooden Keys. | 0non-cybersec
| Reddit |
Can't access site over https using letsencrypt cert. <p>So we have <a href="http://m.site.perkelle.com/" rel="nofollow noreferrer">this</a> site (which is going to be replaced with a forum soon, hence the need for SSL); however, when you <a href="https://m.site.perkelle.com/" rel="nofollow noreferrer">go to it using https</a> it times out. I am using apache2 on Debian and have generated my SSL cert with getssl and Let's Encrypt. Here is my m.conf in my sites-enabled folder:</p>
<pre><code>LoadModule ssl_module /usr/lib/apache2/modules/mod_ssl.so
Listen 443
<VirtualHost *:443>
ServerName m.site.perkelle.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/test/
ErrorLog ${APACHE_LOG_DIR}/mc.site.perkelle.com.error.log
CustomLog ${APACHE_LOG_DIR}/mc.site.perkelle.com.access.log combined
LogLevel warn
SSLEngine on
SSLCertificateFile /etc/ssl/mc.site.perkelle.com/mc.site.perkelle.com.crt
SSLCertificateKeyFile /etc/ssl/mc.site.perkelle.com/mc.site.perkelle.com.key
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/test/>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>
</VirtualHost>
</code></pre>
<p>Thanks</p>
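A timeout (as opposed to a certificate warning) usually means nothing is answering on port 443 at all — typically a firewall/security-group rule or Apache not actually listening — rather than anything wrong with the vhost above. A hedged checklist (standard Debian tooling; adapt to your host):

```shell
# 1. Config valid and Apache actually (re)loaded?
sudo apachectl configtest && sudo systemctl reload apache2

# 2. Is anything listening on 443?
sudo ss -tlnp | grep ':443'

# 3. Reachable locally but not remotely? Then it's a firewall issue.
curl -vk https://localhost/
sudo ufw allow 443/tcp   # if ufw is active; cloud hosts also need 443
                         # opened in their security group / control panel
```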
| 0non-cybersec
| Stackexchange |
Blowing up a barn with dynamite - via "what things did you do in the past that you can't get away with today". | 0non-cybersec
| Reddit |
What is best practice to use coroutines with fragments?. <p><strong>Description</strong> <br> I have a TabLayout with multiple fragments. I want to save fragment data into the Room DB on fragment change/swipe and display the data to the user when they come back to the fragment.</p>
<p><strong>Currently Using</strong> <br> Currently, I am using a coroutine with <code>GlobalScope.launch</code> to save the data, and it is working fine.</p>
<p><strong>Questions</strong> <br>
1. What is the best practice for using a coroutine with fragments to save data to the DB on fragment change? <br>
2. Is it good practice to use <code>GlobalScope.launch</code> on fragment change? <br>
3. If <code>GlobalScope.launch</code> is not good to use, what can we use instead?</p>
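A common answer (a sketch using AndroidX <code>viewModelScope</code> from <code>lifecycle-viewmodel-ktx</code>; <code>TabViewModel</code>, <code>TabDao</code>, and <code>TabData</code> are hypothetical names): move the write into a <code>ViewModel</code>, whose scope survives the swipe and configuration changes, instead of <code>GlobalScope</code>, which is unstructured and never cancelled.

```kotlin
class TabViewModel(private val dao: TabDao) : ViewModel() {

    // Called from the fragment's onPause() / page-change callback.
    fun persist(data: TabData) {
        // viewModelScope outlives the fragment view and is cancelled only
        // when the ViewModel is cleared -- a better fit than GlobalScope.
        viewModelScope.launch(Dispatchers.IO) {
            dao.save(data)
        }
    }
}
```

If the save absolutely must complete even when the ViewModel is cleared mid-write, the usual advice is to inject an application-level <code>CoroutineScope</code> rather than fall back to <code>GlobalScope</code>.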
| 0non-cybersec
| Stackexchange |
Another simplification problem with algebraic indices. <p>Could someone please help me with some steps for how to solve this question?</p>
<p>$ \frac {3^{n} + 3^{n+2}} {3^{n-1} - 3^{n-2}}$</p>
<p>The answer is 45 apparently. I need to simplify to get to this answer.</p>
<p>If someone could give me a hint how to start off I would love to give it a try. Thank you!</p>
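<p>A starting hint: factor the smallest power of 3 out of the numerator and denominator separately, and the powers of $3^n$ cancel:</p>

```latex
\frac{3^{n}+3^{n+2}}{3^{n-1}-3^{n-2}}
= \frac{3^{n}\,(1+3^{2})}{3^{n-2}\,(3-1)}
= \frac{10 \cdot 3^{n}}{2 \cdot 3^{n-2}}
= 5 \cdot 3^{2}
= 45 .
```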
| 0non-cybersec
| Stackexchange |
How do I make my boyfriend smile again?. F (me, 21) and M (21) have been together 3.5 years. Up until two weeks ago, everything was great. Then I entered PMS she-devil mode. Violent mood swings, fits of crying for no apparent reason, constant anxiety and depression that often has me thinking suicidal thoughts. Unfortunately my periods aren't regular so PMS often lasts 2-3 weeks for me rather than a few days, and disappears once I get my actual period. I am kind of underweight and fucked up about food so that probably doesn't help. Anyway, our relationship is awesome for another 5 weeks and then repeat. I know it sounds very dramatic, but it's called PMDD and it sucks.
So I know I have not been fun to be around. I've apologised and tried to keep my depression in, but about a week ago he said "he needed space" and "wanted time alone" because I was always around when he wasn't working and didn't have time to himself. It's the holidays so I don't have uni to distract me. I was hurt and upset because when I'm depressed I just want to sleep in his bed every night but I understood and we didn't see each other for 3 days. I thought it was good for him to see his friends and have fun but if he was just at home playing PS3 then I really wanted company. So after 3 days I caved and went over to his house but then the following days I exploded over something stupid and/or cried.
I am so alone and tired and sick of feeling sad for the past few weeks. I feel like he is mean to suddenly want space just because I am no longer the perfect, put-together person I normally try my hardest to be. I've tried telling him that I need him and I just want to held but it seems like he just finds that pathetic and wants me even less.
I wish I actually made him happy but it's so hard when I can't even be happy myself. I feel like a piece of furniture he doesn't even look at. | 0non-cybersec
| Reddit |
Saw this at a JB Hi Fi. | 0non-cybersec
| Reddit |
Cassandra syndrome. I'll never forget her. She was found wandering aimlessly on the street in her white garment before collapsing, and eventually through a long chain of institutes that didn't know what to do with her, sent to my clinic.
She didn't say much, nor did she want to speak at all. I decided to take a slow approach and allow her to come to her senses. I decided to do what I did with all my patients that didn't want to talk. I gave her a warm bed and treated her like a friend who was simply on a visit in my house. I went to the mall with her, bought her new clothes, and took her out for dinner. It was highly inappropriate perhaps, and definitely a breach of the standard of care, but it was the only method I saw to achieve what those before me couldn't: have her speak.
She had the weirdest story I had ever heard in my career as a psychiatrist. She must have been a paranoid schizophrenic. It was insane, and I shouldn't have gone along with it, but I couldn't help myself.
Her story didn't make sense. She claimed to have escaped from an underground building, where she had been kept her entire life. Kept in a sterile hospital-like room, she last saw her mother when she was a child, never knowing what happened to her.
Numerous questions went through my head, looking for the one question that she couldn't answer. "So, how did you escape then?" I asked her. "I put a piece of paper in the ridge of locker where they kept all the keys in. When they were gone, I quickly opened the locker, grabbed a bunch of keys and headed out. I walked through a narrow corridor, found a large door, opened it and found myself in an underground metro tunnel, that I followed until I found a metro station and followed the people there. I then asked a truck driver to take me with him to the next city he stopped at."
"Alright, but why did they keep you there?" I asked. "Experiments." She answered. "They explained to me that I have special gifts that allow me to predict things that others can't see."
She must have been mixing fact with fiction, distorting her experiences in the mental care system, seeking to come up with an explanation for the things that didn't make sense to her. Perhaps she was a homeless woman, who suddenly became absorbed into the mental healthcare system, and didn't know what was happening so rapidly, being transferred from clinic to clinic.
"So why do you have these special gifts?" I asked her.
She was silent for a moment, before responding: "It's part of my heritage. My grandparents are Tibetan and Scandinavian. My parents are both of mixed race. They explained to me that I carried both the natural abilities found in people of Scandinavian and Tibetan ancestry. In both these people, the abilities manifest themselves in a minority, who do not seem out of the ordinary. They're the odd ones, and they tend to feel attracted to religion and the new age. In Tibet, they would become monks. In Scandinavia, you're most likely to find them doing Tarot card readings or following some neopagan movement. If you happen to inherit the abilities of both people, they theorized the effects would amplify each other."
Her story seemed ridiculous. And yet her appearance was mesmerizing, and her beauty exceptional. Piercing dark blue eyes in a person who otherwise seemed hispanic. Could it be she was telling the truth about her ancestry?
"So what do you know about the rest of your family then?" I asked her. "Not much. I had three older brothers according to my mother, but I have never met them. She claims they were removed from her after taking tests. She never saw them afterwards."
"About these predictions, how does this happen?" I asked her. "I don't know. They tell me to lay down and relax, and to listen to their instructions. That's when I forgot everything, and the next thing I remember I wake up again." She answered. "Hypnotism." I thought to myself. It was worth a shot.
I told her to lay down, relax, and listen to my instructions. Over the week I tried this multiple times, but nothing out of the ordinary happened. Until I decided to take another approach.
"Visualize a large wooden door. When you open it, you enter a small room that looks like an elevator. It can be any room you like, decorated in any way you want. Visualize a room where you feel comfortable. And tell me when you are ready to go further." After her permission to continue I told her to visualize a calender hanging on the wall. "Tell me, what year does it state?" I asked. "2038..." She answered.
I told her to leave the room when she feels ready to do so, and describe what she sees. She had clear visions, in which she could describe in great detail the streets she walked in, the clothes of the people, the scents she smelled and the vehicles she saw. The world she described seemed rich and pleasant, though the people were stressful and always in a hurry.
I tried this method multiple times. And always, she would describe different worlds, in various years of the 21st century. Sometimes the world she entered would look like a peaceful medieval village, but on further inspection, there would be solar panels on the roofs, and computers and technical gadgets in people's houses. Sometimes the world would have giant skyscrapers, where billions of people would be housed. Once she described entering a dance club, where loud and harsh music was played. The people wore bright clothing, large shoes and strange makeup, and swallowed various pills. I laughed to myself and told myself I would hopefully be dead before that trend would become popular.
And sometimes, her visions disturbed me. Sometimes she would describe a world that wasn't peaceful at all. A world with horrifically deformed children, and people who wore wigs as a result of being bald. People had gas masks, and food was grown from fungi in special water tanks, because the land was apparently barren. Impending rain was announced through speakers, and people would hurry inside before the rain would come. People were paid large amounts of money by government and private charities to have children, because most people didn't want to.
This vision only happened once, and I carried on with the hypnotism, thinking it must have been the result of a bad mood, or her menstrual cycle, or something along those lines.
My colleagues were insisting that I was too obsessed with this patient, with one colleague arguing that I should take another patient from him: a young PhD student suffering from depression. He didn't know what to do with him, and wanted me to treat the patient instead.
I didn't want to have another patient. "A depressed PhD student? Why should I care? Show me a PhD student who *isn't* depressed." I told him, cynically laughing at my own joke. My colleague had only recently started working here, and seemed insecure about his own abilities to deal with patients by himself. I didn't think he needed help with this man, but I was wrong.
Last week, we found him hanging in his room. He didn't seem like a suicide risk, and so we were shocked. I didn't understand why he would kill himself. I felt guilt towards my colleague for not helping him. The patient really didn't seem so difficult to me.
The patient had a computer in his room in our clinic, and I looked at his work. Apparently, he was a theoretical physicist. He was engaged in fierce debates with his professor about an experimental new type of nuclear reactor that his professor believed could solve our energy crisis. He argued that the professor underestimated the potential fallout as a result of an accident by three orders of magnitude. Through long and difficult calculations in his emails he attempted to demonstrate that an accident could lead to the release of a large mass of radioactive material that would eat itself through the concrete and into the ground, releasing increasingly large amounts of radioactive material.
The last email he received from his professor said that he would not have to show up on Monday, and would need to find someone else to guide him in his PhD trajectory. The email was sent 2 hours before our personnel found him hanging.
I'm a psychiatrist, not a physicist, and I don't know much about radiation. What I do know is that I've never managed to treat my own patient. After the PhD student had killed himself, I tried to bring my patient under hypnosis a few more times, but every time I had to stop. She would describe the same things she had described earlier. The people would always look different, and the cities would never be the same, but she always described the same things. Parents carrying deformed children. Bald people wearing wigs. Gas masks. If I asked her to go further, she would start crying uncontrollably, and I would release her from the hypnosis.
She was eventually taken away from me by a superior, who grew suspicious of my obsession. I have no idea where she is now. All I know is that this is the only time I have failed in my 42 year long career. | 0non-cybersec
| Reddit |
How to compute gcd of two polynomials efficiently. <p>I have two polynomials <span class="math-container">$A=x^4+x^2+1$</span>
And <span class="math-container">$B=x^4-x^2-2x-1$</span></p>
<p>I need to compute the gcd of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, but when I do it the regular Euclidean way I get fractions and it gets confusing. Are you somehow able to use a Sylvester matrix to find the gcd, or am I probably doing something wrong?</p>
<p>I don’t know how to format properly yet so apologies </p>
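<p><em>Not part of the original question</em> — a minimal sketch of the polynomial Euclidean algorithm over the rationals, using Python's <code>Fraction</code> so the fractional coefficients stay exact (the helper names are my own). For these particular <span class="math-container">$A$</span> and <span class="math-container">$B$</span> it yields the monic gcd <span class="math-container">$x^2+x+1$</span>:</p>

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of a divided by b; coefficients listed highest degree first."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        if a[0] == 0:          # leading coefficient vanished: drop it
            a.pop(0)
            continue
        q = a[0] / b[0]        # cancel the leading term of a
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)
    while a and a[0] == 0:     # strip any remaining leading zeros
        a.pop(0)
    return a

def poly_gcd(a, b):
    """Monic gcd via the Euclidean algorithm."""
    while b:
        a, b = b, poly_rem(a, b)
    return [c / a[0] for c in a]   # normalize to a monic polynomial

A = [1, 0, 1, 0, 1]      # x^4 + x^2 + 1
B = [1, 0, -1, -2, -1]   # x^4 - x^2 - 2x - 1
print(poly_gcd(A, B))    # the monic gcd: x^2 + x + 1
```

<p>Working over <code>Fraction</code> rather than <code>float</code> is the key point: the intermediate fractions are unavoidable in the Euclidean algorithm, but exact arithmetic keeps them from becoming confusing or numerically wrong.</p>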
| 0non-cybersec
| Stackexchange |
[Season 4 Episode 8 Spoilers] I have never seen something so disturbing on tv..
| Reddit |
Mercy to the resc... Well, it's still a rescue. | 0non-cybersec
| Reddit |
Cannot find entry file index.android.js in any of the roots React Native Navigation. <p>I'm new using React Native Navigation and I'm having a problem during the installation process for Android.</p>
<p>So I installed the package and followed the instructions until step 5.
Once there I added the code to MainApplication.java, but after doing that, when I execute <code>react-native run-android</code> I get the following error.</p>
<p><a href="https://i.stack.imgur.com/joJCG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/joJCG.png" alt="enter image description here"></a></p>
<p>I read in other posts that the problem could be solved by executing
<code>npm start -- --reset-cache</code> but that doesn't work for me.</p>
<p>I'm using React Native Navigation v1.1.298.</p>
<p>Any help will be appreciated.</p>
<p>Thank you.</p>
| 0non-cybersec
| Stackexchange |
In Brazil this girl called the opposite team's player "macaco", which is like Brazil's version of the n-word, translating to "monkey". In a stroke of bad luck, she got caught on camera and lost her job.
| Reddit |
How to use spot instance with amazon elastic beanstalk?. <p>I have one infra that use amazon elastic beanstalk to deploy my application.
I need to scale my app adding some spot instances that EB do not support.</p>
<p>So I created a second Auto Scaling group from a launch configuration with Spot instances.
This Auto Scaling group uses the same load balancer created by Beanstalk.</p>
<p>To up instances with the last version of my app, I copy the user data from the original launch configuration (created with beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This work fine, but:</p>
<ol>
<li><p>how do I update the Spot instances launched by the second Auto Scaling group when Beanstalk deploys a new version of the app to the instances it manages?</p>
</li>
<li><p>is there another way, just as easy and elegant, to use Spot instances and still enjoy the benefits of Beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk has supported Spot instances since 2019; see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
| 0non-cybersec
| Stackexchange |
Whats your teams favourite performance? This is just one of mine, Manchester United 7-1 Roma. | 0non-cybersec
| Reddit |
Run "mvn clean install" in Eclipse. <p>Title says it all.</p>
<p>I want to run the console command <code>mvn clean install</code> on my project in question <em>directly</em> in Eclipse, not from the command line.</p>
<p>It would just be more convenient for me to do this, as I already have the project open in Eclipse. It would save me time if I would not have to navigate to the folder in Windows Explorer.</p>
<p>Not a big deal if I can't do this... but can I? And if so, how?</p>
<p>It would be great if I could just right click my project, then click "mvn clean install" from the context menu.</p>
| 0non-cybersec
| Stackexchange |
Arrow keys don't work in a remote ssh shell on Juniper Junos switch through tmux - how to fix?. <p>On a Linux system, when I connect to a Junos switch via SSH I am unable to use the arrow keys for navigation. Instead I have to use e.g. <kbd>Ctrl</kbd> + <kbd>P</kbd> in place of the up arrow key.</p>
<p>With a plain SSH connection (without tmux) the arrow keys work as expected. When SSH-ing into a Cisco switch or a Linux system the problem doesn't occur either.</p>
<p>This is a very annoying behaviour. Why are these keys not working and how can I fix it?</p>
| 0non-cybersec
| Stackexchange |
Physical interface as switch port and VLAN interface as layer 3. <p>In most of large ISP networks, physical interfaces are used as switch ports (layer 2 port) and VLAN interfaces are used as layer 3.</p>
<p>Could you please explain what is the main purpose to use like this?</p>
| 0non-cybersec
| Stackexchange |
How do I Moq the ApplicationDbContext in .NET Core. <p>I'm trying out .NET Core for the first time and seeing how Moq can be used in unit testing. Out of the box, the controllers are created where the ApplicationDbContext are parameters to the constructor like this:</p>
<pre><code>public class MoviesController : Controller
{
private readonly ApplicationDbContext _context;
public MoviesController(ApplicationDbContext context)
{
_context = context;
}
</code></pre>
<p>Here is the unit test that I started with when testing the controller:</p>
<pre><code>[TestClass]
public class MvcMoviesControllerTests
{
[TestMethod]
public async Task MoviesControllerIndex()
{
var mockContext = new Mock<ApplicationDbContext>();
var controller = new MoviesController(mockContext.Object);
// Act
var result = await controller.Index();
// Assert
Assert.IsInstanceOfType(result, typeof(ViewResult));
}
</code></pre>
<p>But then I realized ApplicationDbContext is a concrete class AND it does not have a parameterless constructor so the test won't work. It gives me error: Could not find parameterless constructor.</p>
<p>Perhaps this may be a question more aimed at Moq rather than it being related to .NET Core, but I'm also new to Moq so I'm not sure how to proceed. Here is how the ApplicationDbContext code was generated when I created the project:</p>
<pre><code>public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
: base(options)
{
}
protected override void OnModelCreating(ModelBuilder builder)
{
base.OnModelCreating(builder);
// Customize the ASP.NET Identity model and override the defaults if needed.
// For example, you can rename the ASP.NET Identity table names and more.
// Add your customizations after calling base.OnModelCreating(builder);
}
public DbSet<Movie> Movie { get; set; }
}
</code></pre>
<p>What do I need to change so that my unit test would succeed?</p>
<p><strong>UPDATE:</strong></p>
<p>I discovered from <a href="https://msdn.microsoft.com/en-us/magazine/mt703433.aspx" rel="noreferrer">https://msdn.microsoft.com/en-us/magazine/mt703433.aspx</a> that you can configure EF Core to use an in-memory database for unit testing. So I changed my unit test to look like this:</p>
<pre><code> [TestMethod]
public async Task MoviesControllerIndex()
{
var optionsBuilder = new DbContextOptionsBuilder<ApplicationDbContext>();
optionsBuilder.UseInMemoryDatabase();
var _dbContext = new ApplicationDbContext(optionsBuilder.Options);
var controller = new MoviesController(_dbContext);
// Act
var result = await controller.Index();
// Assert
Assert.IsInstanceOfType(result, typeof(ViewResult));
}
</code></pre>
<p>This test now succeeds. But is this the proper way of doing this? Obviously, I completely eliminated mocking the ApplicationDbContext with Moq! Or is there another solution to this problem using Moq.</p>
| 0non-cybersec
| Stackexchange |
Warning: Unnecessary HSTS header over HTTP. <p>I always want to use <strong>https://</strong> and <strong>non-www</strong> URLs. So I used the following code in my .htaccess file, but I am getting a warning from <a href="https://hstspreload.org" rel="noreferrer">https://hstspreload.org</a>.</p>
<pre><code>RewriteCond %{HTTPS} off
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
RewriteCond %{HTTP_HOST} !^www\.
RewriteRule .* https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
<ifModule mod_headers.c>
Header always set Strict-Transport-Security "max-age=31536000;
includeSubDomains; preload"
</ifModule>
</code></pre>
<p>The warning message is given below:</p>
<p><strong>Warning:</strong> Unnecessary HSTS header over HTTP<br/>
The HTTP page at <a href="http://mysiteurl.com" rel="noreferrer">http://mysiteurl.com</a> sends an HSTS header. This has no effect over HTTP, and should be removed.</p>
<p>Please help me to <strong>get rid of the above warning</strong>. I also tried the following code, but it does not work <a href="https://stackoverflow.com/questions/38235475/how-to-disable-hsts-header-with-http">#ref. topic</a> </p>
<pre><code> Header always set Strict-Transport-Security "max-age=31536000;
includeSubDomains; preload" env=HTTPS
</code></pre>
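<p><em>Not an authoritative answer, just a sketch:</em> assuming Apache 2.4+ (where the <code>&lt;If&gt;</code> directive is available), one commonly suggested approach is to send the header only on TLS requests, so plain-HTTP responses never carry it:</p>

```apache
# Sketch, assuming Apache 2.4+ with mod_headers enabled;
# the HTTPS / non-www redirect rules stay as before.
<IfModule mod_headers.c>
    <If "%{HTTPS} == 'on'">
        Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    </If>
</IfModule>
```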
| 0non-cybersec
| Stackexchange |
Richard Dawkins sarcastically puts the opposition to same-sex marriage into perspective. | 0non-cybersec
| Reddit |
Pit bull, he reminds me of a Black Panther.. | 0non-cybersec
| Reddit |
Giant balloon pop. | 0non-cybersec
| Reddit |
Inducing metric of a vector space. <p>What does it mean, in the context of normed vector spaces, that the norm <em>induces</em> the metric? Furthermore, why can't a normed vector space simply be considered a metric space then?</p>
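<p><em>(Added for reference, not part of the original question.)</em> Concretely, the metric induced by a norm $\|\cdot\|$ on a vector space $V$ is</p>

```latex
d(x, y) = \|x - y\|, \qquad x, y \in V,
```

<p>and the norm axioms are exactly what guarantee that this $d$ satisfies the metric axioms; e.g. the triangle inequality for $d$ follows from $\|x - z\| \le \|x - y\| + \|y - z\|$.</p>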
| 0non-cybersec
| Stackexchange |
The distribution of a complex signal. <p>If I have a complex signal
$$ y = h e^{j\phi} + n $$
where $ h \sim \mathcal C \mathcal N (0, \sigma_h^2) $
and $ n \sim \mathcal C \mathcal N (0, \sigma_n^2) $. </p>
<p>With $ h = |h|e^{j\theta} = |h|\cos \theta + j|h|\sin \theta $ and $ n = n_r + j n_i $,
I can rewrite $y$ as
$$ y = |h| e^{j(\phi+\theta)} + n $$
and thus the real part of $y$ should be
$$ y_r = |h| \cos(\phi + \theta) + n_r $$ </p>
<p>Now my question is, how do I find the distribution (pdf) $ p(y_r; \theta)$ given this information?<br>
Should it be $ \mathcal N (0, \sigma_h^2 + \sigma_n^2) $
or $ \mathcal N (|h| \cos(\phi+\theta), \sigma_n^2)$ or anything else?</p>
<p>Note: $h$ and $n$ are independent.</p>
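<p><em>Not part of the original question</em> — a quick Monte Carlo sketch (the parameter values and helper names below are my own choices) illustrating that both candidate answers correspond to different conditionings. Under the usual convention that $\mathcal{CN}(0,\sigma^2)$ is circularly symmetric with total variance $\sigma^2$, the real part is $\mathcal N(0,\sigma^2/2)$; so conditioned on a fixed realization of $h$, $y_r \sim \mathcal N(|h|\cos(\phi+\theta), \sigma_n^2/2)$, while marginalized over $h$, $y_r \sim \mathcal N(0, (\sigma_h^2+\sigma_n^2)/2)$:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
sigma_h, sigma_n, phi = 1.5, 0.7, 0.3   # arbitrary illustration values

def cn(sigma, size):
    """Circularly-symmetric CN(0, sigma^2): real/imag parts ~ N(0, sigma^2/2)."""
    s = sigma / np.sqrt(2)
    return rng.normal(0, s, size) + 1j * rng.normal(0, s, size)

h = cn(sigma_h, N)
n = cn(sigma_n, N)

# Marginal over h: zero mean, variance (sigma_h^2 + sigma_n^2) / 2
y_r = (h * np.exp(1j * phi) + n).real
print(y_r.mean(), y_r.var())   # close to 0 and (1.5^2 + 0.7^2)/2 = 1.37

# Conditional on one fixed realization h0:
# mean |h0| cos(phi + theta0), variance sigma_n^2 / 2
h0 = h[0]
y0_r = (h0 * np.exp(1j * phi) + n).real
print(y0_r.mean(), abs(h0) * np.cos(phi + np.angle(h0)))  # close to each other
print(y0_r.var())              # close to 0.7^2 / 2 = 0.245
```

<p>In other words, which pdf is "the" answer depends on whether $h$ is treated as a known parameter or averaged out.</p>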
| 0non-cybersec
| Stackexchange |