[STATEMENT]
lemma extendsE:
assumes "extends R r"
obtains "r \<subseteq> R" "asym_factor r \<subseteq> asym_factor R"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<lbrakk>r \<subseteq> R; asym_factor r \<subseteq> asym_factor R\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
extends R r
goal (1 subgoal):
1. (\<lbrakk>r \<subseteq> R; asym_factor r \<subseteq> asym_factor R\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
unfolding extends_def
[PROOF STATE]
proof (prove)
using this:
r \<subseteq> R \<and> asym_factor r \<subseteq> asym_factor R
goal (1 subgoal):
1. (\<lbrakk>r \<subseteq> R; asym_factor r \<subseteq> asym_factor R\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by blast
(* Title: HOL/Auth/n_germanish_lemma_inv__3_on_rules.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The n_germanish Protocol Case Study*}
theory n_germanish_lemma_inv__3_on_rules imports n_germanish_lemma_on_inv__3
begin
section{*All lemmas on causal relation between inv__3*}
lemma lemma_inv__3_on_rules:
assumes b1: "r \<in> rules N" and b2: "(\<exists> p__Inv0 p__Inv2. p__Inv0\<le>N\<and>p__Inv2\<le>N\<and>p__Inv0~=p__Inv2\<and>f=inv__3 p__Inv0 p__Inv2)"
shows "invHoldForRule s f r (invariants N)"
proof -
have c1: "(\<exists> i. i\<le>N\<and>r=n_t1 i)\<or>
(\<exists> i. i\<le>N\<and>r=n_t2 i)\<or>
(\<exists> i. i\<le>N\<and>r=n_t3 i)\<or>
(\<exists> i. i\<le>N\<and>r=n_t4 i)\<or>
(\<exists> i. i\<le>N\<and>r=n_t5 i)\<or>
(\<exists> i. i\<le>N\<and>r=n_t6 N i)"
apply (cut_tac b1, auto) done
moreover {
assume d1: "(\<exists> i. i\<le>N\<and>r=n_t1 i)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_t1Vsinv__3) done
}
moreover {
assume d1: "(\<exists> i. i\<le>N\<and>r=n_t2 i)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_t2Vsinv__3) done
}
moreover {
assume d1: "(\<exists> i. i\<le>N\<and>r=n_t3 i)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_t3Vsinv__3) done
}
moreover {
assume d1: "(\<exists> i. i\<le>N\<and>r=n_t4 i)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_t4Vsinv__3) done
}
moreover {
assume d1: "(\<exists> i. i\<le>N\<and>r=n_t5 i)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_t5Vsinv__3) done
}
moreover {
assume d1: "(\<exists> i. i\<le>N\<and>r=n_t6 N i)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_t6Vsinv__3) done
}
ultimately show "invHoldForRule s f r (invariants N)"
by satx
qed
end
The female Seba's Short-tailed Bats give birth about twice a year. The newborn babies weigh only about five grams and are completely dependent on their mothers. Like all mammals, they only drink their mother’s milk at first. When about three weeks old, the young bats start trying to fly by themselves. After eight weeks they are weaned and have to hunt for their own food. They have to eat a lot to acquire their final weight of 18-19 grams, which they reach when they are ten weeks old.
Giving birth and raising Pim was too much for his mother, so she died when he was only a few weeks old. On March 12, 2011, the keepers found little Pim lying on the floor of the bat enclosure. He was so starved and weak that he was trembling all over and couldn’t hang from the ceiling by himself. Banana pulp was needed!
Patiently, the little nursling was fed and cared for. With each day, Pim grew stronger and livelier. A few weeks later the day had come when the little youngster could be put into the enclosure with the other bats. At first, the change from banana pulp to whole fruit and a lot of flying around was very hard on Pim; quite a few times the keeper saw him just hanging there, taking a rest. Meanwhile, though, he has settled in and gets along fine with his fellows. Still, he remembers his keeper well. Sometimes he lands on her hand just like he used to when this was the place where he got his daily banana pulp.
Like all mammals, baby bats depend on their mothers at first. They get fed, cleaned and protected. When the mothers hunt for food, the little ones come along, holding tight to their mother’s belly.
\section{Slot}\label{s:slot}
module CTL.Modalities.AF where
open import FStream.Core
open import Library
-- Certainly sometime : s₀ ⊧ φ ⇔ ∀ s₀ R s₁ R ... ∃ i . sᵢ ⊧ φ
-- TODO Unclear whether this needs sizes
data AF' {ℓ₁ ℓ₂} {C : Container ℓ₁}
(props : FStream' C (Set ℓ₂)) : Set (ℓ₁ ⊔ ℓ₂) where
alreadyA : head props → AF' props
notYetA : A (fmap AF' (inF (tail props))) → AF' props
open AF' public
AF : ∀ {ℓ₁ ℓ₂} {C : Container ℓ₁}
→ FStream C (Set ℓ₂) → Set (ℓ₁ ⊔ ℓ₂)
AF props = APred AF' (inF props)
mutual
AFₛ' : ∀ {i ℓ₁ ℓ₂} {C : Container ℓ₁}
→ FStream' C (Set ℓ₂) → FStream' {i} C (Set (ℓ₁ ⊔ ℓ₂))
head (AFₛ' props) = AF' props
tail (AFₛ' props) = AFₛ (tail props)
AFₛ : ∀ {i ℓ₁ ℓ₂} {C : Container ℓ₁}
→ FStream C (Set ℓ₂) → FStream {i} C (Set (ℓ₁ ⊔ ℓ₂))
inF (AFₛ props) = fmap AFₛ' (inF props)
Formal statement is: lemma sum_Suc_reindex: fixes f :: "nat \<Rightarrow> 'a::ab_group_add" shows "sum f {0..n} = f 0 - f (Suc n) + sum (\<lambda>i. f (Suc i)) {0..n}" Informal statement is: For any function $f$ from the natural numbers to an abelian group, we have $\sum_{i=0}^n f(i) = f(0) - f(n+1) + \sum_{i=0}^n f(i+1)$.
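For instance, with $n = 1$ the right-hand side telescopes as expected: $f(0) - f(2) + (f(1) + f(2)) = f(0) + f(1) = \sum_{i=0}^{1} f(i)$.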
\documentclass[12pt]{article}
\usepackage{float}
\usepackage{wrapfig}
\input{def}
\graphicspath{{figs/}}
\begin{document}
\title{Causal inference for process understanding in Earth sciences}
\author{Adam Massmann\thanks{Corresponding author:
[email protected]}, Pierre Gentine, Jakob Runge}
\maketitle
\begin{abstract}
There is growing interest in the study of causal methods in the
Earth sciences. However, most applications have focused on causal
discovery, i.e. inferring the causal relationships and causal
structure from data. This paper instead examines causality through
the lens of causal {\it inference} and how expert-defined causal
graphs, a fundamental tool from causal theory, can be used to clarify
assumptions, identify tractable problems, and aid interpretation of
results and their causality in Earth science research. We apply
causal theory to generic graphs of the Earth system to identify
where causal inference may be most tractable and useful to address
problems in Earth science, and avoid potentially incorrect
conclusions. Specifically, causal inference may be useful when: (1)
the effect of interest is only causally affected by the observed
portion of the state space; or: (2) the cause of interest can be
assumed to be independent of the evolution of the system’s state;
or: (3) the state space of the system is reconstructable from lagged
observations of the system. However, we also highlight through
examples that causal graphs can be used to explicitly define and
communicate assumptions and hypotheses, and help to structure
analyses, even if causal inference is ultimately challenging given
the data availability, limitations and uncertainties.
\end{abstract}
\paragraph{Note:} We will update this manuscript as our understanding
of causality's role in Earth science research evolves. Comments,
feedback, and edits are enthusiastically encouraged, and we will add
acknowledgments and/or coauthors as we receive community
contributions. To edit the manuscript directly (recommended) you can
fork the project's repository and submit a pull request at
\url{https://github.com/massma/causality-earth-science}, or you can
alternatively email us comments, questions, and suggestions and we
will try to incorporate them into the manuscript.
\section{Introduction}
There is growing interest in the study of causal inference methods in
the Earth sciences \citep[e.g.,][]{salvucci2002, ebert-uphoff2012,
kretschmer2016,Green_2017,barnes-2019,
samarasinghe2020,runge-causal-timeseries,runge2019inferring,goodwell-causality-2020}. However,
most of this work focuses on causal discovery, or the inference (using
data) of causal structure: i.e., the ``links'' and directions between
variables. In some cases, causal discovery can be used to estimate the
structure of a causal graph and the relationships between variables
when the graph is not known a priori. However, in many if not most
Earth system applications, the causal graph is already known based on
physical insights. For instance, the impact of El Ni\~{n}o on western
American rainfall is known to be causal, and the graph does not need to
be discovered (even though using causal discovery for this problem is
a useful sanity check of the method's ability).
This paper looks at causality through a different, but complementary,
lens to causal discovery and examines how assumed causal graphs \citep{pearl1995causal}, a
fundamental tool from causal theory, can be used to clarify assumptions,
identify tractable problems, and aid interpretation of results in
Earth science research. Our goal is to distill
\citep[e.g.,][]{olah2017} the basics of the graphical approach to
causality in a way that is relatable for Earth scientists,
\textbf{hopefully motivating more widespread use and adoption of
causal graphs to organize analyses and communicate
assumptions}. These tools are relevant now more than ever, as the
abundance of new data and novel analysis methods have inevitably led
to more opaque results and barriers to the communication of
assumptions.
Beyond their usefulness as communication tools, if certain conditions
are met, causal graphs can be used to estimate, from data, the
generalized functional form of relationships between Earth science
variables \citep{pearl2009causality}. Ultimately, deriving generalized
functional relationships is a primary goal of science. While we know
the functional relationships between some variables a priori, there
are many relationships we do not know \citep[e.g., ecosystem scale
water and carbon fluxes;][]{massmann2019, zhou2019arid,
zhou2019feedback, grossiord2020}, or that we do know but are
computationally intractable to calculate \citep[e.g., clouds and
microphysics at the global scale:][]{randall2003, gentine2018,
zadra2018, gagne2020emulation}. In these types of applications,
causal graphs give us a path toward new scientific knowledge and
hypothesis testing: generalized functional relationships that were
inaccessible with traditional tools.
The main contribution of this paper is to demonstrate how causal
graphs, a fundamental tool of causal inference discussed in Section
\ref{sec:what-caus-caus}, can be used to communicate assumptions,
organize analyses and hypotheses, and ultimately improve scientific
understanding and reproducibility. We want to emphasize that almost
any study could benefit from inclusion of a causal graph in terms of
communication and clarification of hypotheses, even if in the end the
results cannot be interpreted causally. Causal graphs also encourage
us to think deeply in the initial stages of analysis about hypotheses
and how the system is structured, and can identify infeasible studies
early in the research process before time is spent on analysis,
acquiring data, building/running models, etc. These points require
some background and discussion, so the paper is divided into the
following sections:
\begin{itemize}
\item Section \ref{sec:what-caus-caus}: Introduces and discusses
causal graphs within the general philosophy of causality and its
application in Earth science.
\item Section \ref{sec:causal-graphs-pearls}: Using a simple relatable
example we explain the problem of confounders and how causal graphs
can be used to isolate the functional mapping between interventions
on some variable(s) to their effect on other variable(s).
\item Section \ref{sec:causal-graphs-as}: We draw on a real example
that benefits from inclusion of a causal graph, in terms of
communicating assumptions, and organizing and justifying analyses.
\item Section \ref{sec:necess-cond-caus}: We turn to more generic
examples of graphs that are generally consistent with a wide variety
of systems in Earth science, to highlight some of the difficulties
we confront when using causal inference in Earth science and how we
may be able to overcome these challenges.
\end{itemize}
Throughout the discussion key terms will be emphasized in italics.
\section{The graphical
perspective to causality and its usefulness in the Earth
sciences}\label{sec:what-caus-caus}
\begin{wrapfigure}{L}{0.35\textwidth}
\includegraphics[]{cloud-aerosol.pdf}
\caption{A toy graph example to demonstrate basic causal theory,
involving cloud (C), aerosol (A), and surface solar radiation
(S).}
\label{fig:toy}
\end{wrapfigure}
While there are many different definitions and interpretations of
``causality'', for this manuscript we view causality through the lens
of causal graphs, as introduced in \citet{pearl1995causal} and
discussed more extensively in \citet{pearl2009causality}. We take this
perspective because we believe causal graphs are useful in Earth
science, rather than because of any particular philosophical argument
for causal graphs as the ``true'' representation of causality.
Causal graphs are Directed Acyclic Graphs (\emph{DAG}s) that encode
our assumptions about the causal dependencies of a system. To make a
causal graph, a domain expert simply draws directed edges
(i.e. arrows) from variables that are causes to effects. In other
words, to make a causal graph you draw a diagram summarizing the
assumed causal links and directions between variables (e.g., Figure
\ref{fig:toy}). Causal graphs are useful tools because they can be
drawn by domain experts with no required knowledge of maths or
probability, but they also represent formal mathematical
objects. Specifically, underlying each causal graph is a set of
equations called a \emph{structural causal model}: each node
corresponds to a generating function for that variable, and the inputs
to that function are the node's parents. Parents are the other nodes
in the graph that point to a node of interest (e.g., in the most
simple graph $X \to Y$; $X$ is a parent of $Y$). So in reality,
drawing arrows from ``causes'' to ``effects'' is synonymous with
drawing arrows from function inputs to generating functions.
In this way, drawing a causal graph is another way to visualize and
reason about a complicated system of equations, which is a very useful
tool for the Earth scientist: we deal with complicated interacting
systems of equations and welcome tools that help us understand and
reason about their collective behavior. In some cases we may know a
priori (from physics) the equations for a given function in a causal
graph. However, in practice we often either do not know all of the
functions a priori \citep[e.g., plant stomata response to
VPD;][]{massmann2019, zhou2019arid, zhou2019feedback, grossiord2020},
or some functions are computationally intractable to compute
\citep[e.g., turbulence, moist convection, and cloud microphysics in
large scale models;][]{zadra2018,gentine2018}. In these scenarios the
benefits of causal graphs are fully realized: based on the causal
graph we can calculate from data, using the \textit{do-}calculus
\citep{pearl-1994-do-calculus}, the response of target variables
(i.e. effects) to \textit{interventions} on any other variables in the
graph (i.e. causes). When combined with statistical modeling (i.e.,
regression), one can estimate the functional relationship between
interventions on causes and effects
\citep{pearl2009,shalizi2013,shi2019adapting,mao2020generative}.
By viewing causal graphs through this pragmatic lens of calculating
the functional form of relationships that we do not know a priori, we
simultaneously identify causal graphs' value for Earth scientists
while also side stepping philosophical arguments about the meaning of
causality. Causal graphs are pragmatic because in the Earth sciences
we often need to estimate how the system responds to
\emph{interventions} (prescribed changes to variables of interest, or
``causes''). For example, sub-grid physical parameterizations in Earth
system models (e.g., turbulence) require estimates of the time
tendencies' response to \emph{interventions} on the large scale state
and environment. We may also wish to calculate the outcome of experiments: for
example, how changing land cover from forest to grasslands affects (the
statistics of) surface temperature. \textit{Do-}calculus is a method
to calculate this response to interventions without relying on
approximate numerical models or real world experimentation, which can
be infeasible or unethical \citep[as is the case for geoengineering;
e.g., unilateral decisions to seed the oceans with iron, or spray
aerosols in the atmosphere,][]{hamilton2013no}. While we want to
maintain this emphasis on causality as a method for calculating the
generalized response to intervention (possibly using regression to
calculate the functional form of that response), for consistency with
the causal literature we will call the response variables ``effects'',
and the intervened-upon variables ``causes''.
For some, it may not be clear how the functional response to
\emph{interventions} is different from naive regression between
observed variables. We will demonstrate in Section
\ref{sec:causal-graphs-pearls} how uninformed regression is just a
functional mapping of associations between variables, and how this
differs from the response to interventions, i.e. a causal
mechanism. This is the problem that \textit{do-}calculus solves: it
identifies which data are needed and how we can use those data to
calculate the response to \emph{interventions}, rather than just
associations that may be attributable to other processes
entirely. Because we know the response to the intervention is
attributable to the intervention and not other processes, we have
greater confidence in the generalizability of the response to
interventions. This generalizability of the response to interventions
makes \textit{do-}calculus especially relevant for scientists and
engineers. Working through an example will clarify some of these
claims.
\section{A Toy Example: the problem of confounding and the necessity
of \textit{do-}calculus for calculating interventions}
\label{sec:causal-graphs-pearls}
% intro causal graphs}
To demonstrate the problem of \textit{confounding} and the necessity
of causal graphs/\textit{do-}calculus, we use a simple toy
example involving clouds, aerosols, and surface solar
radiation/sunlight. As shown in Figure \ref{fig:toy}, the causal
graph consists of:
\begin{enumerate}
\item An edge from aerosols to clouds because aerosols serve as cloud
condensation nuclei or ice nucleating particles, which affect the probability of water vapor conversion to cloud (condensates).
\item An edge from aerosols to surface solar radiation, because
aerosols can reflect sunlight back to space and reduce sunlight at
the surface.
\item An edge from clouds to sunlight, because clouds also reflect
sunlight back to space and can reduce sunlight at the surface.
\end{enumerate}
Causal graphs encode our assumptions about how the system behaves, and
the nodes and edges that are \textit{missing} from the graph often
represent strong assumptions about the lack of functional
dependence. For example, in the cloud-aerosol-sunlight example, clouds
also affect aerosols; e.g., by increasing the likelihood that aerosol
will be scavenged from the atmosphere during precipitation
\citep[e.g.,][]{radke-scavenge-1980, jurado2008,
blanco-alegre2018}. By not including an edge from cloud to aerosol,
we are assuming that the effect of clouds on aerosols is negligible,
and we are also preventing the graph from containing any cycles (a
path from a variable to itself), which is a requirement of the
theory: graphs must be acyclic. This acyclic requirement may raise
concerns for the reader; many problems in Earth science contain
feedbacks that introduce cycles. However, any feedback can be
represented as an acyclic graph by explicitly resolving the time
evolution of the feedback in the graph (Section
\ref{sec:necess-cond-caus} contains examples of such graphs).
Since this example is intended as a pedagogical introduction to
causal theory, we will continue with the graph as drawn
in Figure \ref{fig:toy} (we refer the reader to \cite{gryspeerdt-2019}
for a more realistic treatment of aerosols and clouds).
Even though mathematical reasoning is not required to construct a
causal graph, the resulting graph encodes specific causal meaning
based on qualitative physical understanding of the system. Implicitly,
the graph corresponds to a set of underlying functions, called a
structural causal model, for each variable:
\begin{align}
\label{eq:2}
aerosol &\leftarrow f_{aerosol} (U_{aerosol}) \\
cloud &\leftarrow f_{cloud} (aerosol, U_{cloud})\\
sunlight &\leftarrow f_{sunlight} (aerosol, cloud, U_{sunlight})
\end{align}
where $U$ are random variables due to all the factors not represented
explicitly in the causal graph, and $f$ are deterministic functions
that generate each variable in the graph from their parents and
corresponding $U$.
The presence of the random variables $U$ introduces a third meaning to
the causal graph: they induce a factorization of the joint
distribution between variables into conditional and marginal factors:
\begin{equation}
P(A, C, S) = P(S \, | \,C, A) \, P(C \, | \, A) \, P(A),
\end{equation}
where $A$ represents aerosol, $C$ represents cloud, $S$ represents
surface solar radiation, and $P(C \, | \, A)$ denotes the conditional
probability of $C$ given $A$ (Appendix \ref{prob-theory} describes the notation used in
this paper and a brief introduction to probability theory for
unfamiliar readers). The inclusion of randomness in causal graphs is a
key tool: by positing a causal graph, we are not stating that the
variables in the graph are the only processes in the system nor that the relationships are deterministic. Instead,
we are stating that all other processes not included in the graph
induce variations in the graph's variables that are \emph{independent} of
each other (e.g., all $U_{\cdot}$ in Equation (\ref{eq:2}) are
independent). For example, sources of aerosol variability not
considered in Figure \ref{fig:toy} include anthropogenic aerosol
emission, the biosphere, fires, volcanoes,
etc. \citep[e.g.,][]{Boucher2015}. For cloud, this includes synoptic
forcing or atmospheric humidity,
etc. \citep[e.g.,][]{wallace2006atmospheric}. For radiation, this
includes variability of top of atmosphere radiation,
etc. \citep[e.g.,][]{hartmann2015global}. Figure \ref{fig:toy} states
that all these external, or \textit{exogenous}, sources of variability
are independent of each other \citep[in very technical terms, this means the
graph is ``\textit{Markovian},''][]{pearl2009causality}.
We can now apply causal inference theory
\citep[e.g.,][]{pearl1995causal,tian2002general,shpitser2006} to the
assumptions encoded in our causal graph to identify which
distributions must be estimated from data in order to calculate the
response of effect(s) (e.g. of sunlight) to an experimental
intervention on the cause(s) (e.g. presence or absence of a
cloud). The goal of causal inference is to derive the response to the
intervention in terms of only observed distributions. This process of
identifying the necessary observed distributions is formally termed
\emph{causal identification} \citep[][Ch. 3]{pearl2009causality}. If
a causal effect is not identifiable (\emph{un}-identifiable), for
example if calculating a causal effect requires distributions of
variables that we do not observe, then we cannot use causal inference
to calculate a causal effect, even with an infinite sample of
data.
A necessary condition for \emph{unidentifiability} is the presence of
an unblocked \emph{backdoor path} from cause to effect
\citep[][Ch. 3]{pearl2009causality}. Backdoor paths are paths from the
cause to the effect that begin with an arrow pointing into the cause,
i.e., that pass through the cause's parents. We can block these paths by
selectively observing variables such that no information passes
through them \citep{geiger-d-sep}. If observations are not available
for the variables required to block the path, the path will be
\emph{unblocked}. However, if we can observe variables along the
backdoor paths such that all backdoor paths are blocked, then we have
satisfied the \emph{back-door criterion} \citep{pearl2009} and we can
calculate unbiased causal effects from data.
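To make the notion of a backdoor path concrete, backdoor paths can be
enumerated mechanically from the graph structure. The following
minimal Python sketch (an illustration of ours using the networkx
library; it naively enumerates paths and does not account for blocking
by colliders) lists the backdoor paths in the toy graph of Figure
\ref{fig:toy}:
\begin{verbatim}
import networkx as nx

# The toy cloud-aerosol-sunlight graph from the text:
# aerosol (A) -> cloud (C), A -> sunlight (S), C -> S.
g = nx.DiGraph([("A", "C"), ("A", "S"), ("C", "S")])

def backdoor_paths(g, cause, effect):
    # Enumerate simple paths from cause to effect whose first edge
    # points *into* the cause (the defining feature of a backdoor path).
    for path in nx.all_simple_paths(g.to_undirected(), cause, effect):
        if g.has_edge(path[1], path[0]):
            yield path

print(list(backdoor_paths(g, "C", "S")))  # [['C', 'A', 'S']]
\end{verbatim}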
Understanding backdoor paths and the backdoor criterion is helped by
an example. Returning to our toy example (Figure \ref{fig:toy}), we
attempt to calculate the causal effect of clouds on sunlight. In other
words, we want to isolate the variability of sunlight due to the
causal link from cloud to sunlight (Figure \ref{fig:toy}). However,
aerosols affect both cloud and sunlight (i.e., there is a backdoor
path from cloud to aerosol to sunlight), so if we naively calculate a
"causal" effect using correlations between sunlight and cloud, we obtain
a biased estimate. To demonstrate this, consider simulated cloud,
aerosol, and sunlight data from a set of underlying equations
consistent with Figure \ref{fig:toy} and Equation (\ref{eq:2}):
\begin{align}
\begin{split}
\text{aerosol} &= U_{aerosol}; \quad U_{aerosol} \sim \text{uniform}(0, 1] \\
\text{cloud} &= \text{Cloudy if } U_{cloud} + \text{aerosol} > 1; \quad
U_{cloud} \sim \text{uniform}(0, 1] \\
\text{sunlight} &=
\begin{cases}
\text{Cloudy} &: 0.6 \cdot \text{downwelling clear sky radiation} \\
\text{Clear} &: \text{downwelling clear sky radiation}
\end{cases}
\label{eq:1}
\end{split}
\end{align}
where:
\begin{equation*}
\text{downwelling clear sky radiation} = U_{sunlight} \cdot (1 - \text{aerosol});
\quad U_{sunlight} \sim \text{Normal(340 W m$^{-2}$, 30 W m$^{-2}$)}
\end{equation*}
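For concreteness, these synthetic data can be simulated in a few lines
of Python. The following is a minimal sketch of ours (not the code
used to produce the figures below; the seed and sample size are
arbitrary, so exact values will vary slightly):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulate the synthetic cloud-aerosol-sunlight model above.
aerosol = rng.uniform(0.0, 1.0, n)                        # U_aerosol
cloudy = rng.uniform(0.0, 1.0, n) + aerosol > 1.0         # U_cloud + aerosol > 1
clear_sky = rng.normal(340.0, 30.0, n) * (1.0 - aerosol)  # clear sky radiation
sunlight = np.where(cloudy, 0.6 * clear_sky, clear_sky)

# Naive "effect": difference in mean sunlight between cloudy and clear bins.
naive = sunlight[cloudy].mean() - sunlight[~cloudy].mean()
print(f"naive difference: {naive:.1f} W m^-2")  # about -158, far from the true -68
\end{verbatim}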
\begin{figure} \centering \includegraphics[]{naiveCloudSunlight.pdf}
\caption{A naive approach to estimating the ``effect'' of clouds on
sunlight: bin observations by cloudy and clear days, and compare the
values of sunlight. This approach yields an average difference of
157.74 W m$^{-2}$ between cloudy and clear days, a large
overestimation (in magnitude) of the true causal effect of clouds on
sunlight ($-68.0$ W m$^{-2}$) in these synthetic data.}
\label{fig:naive-cloud-sunlight}
\end{figure}
Now, consider not knowing the underlying generative processes, but
instead just passively observing cloud and sunlight. If one were
interested in calculating the effect of cloud on sunlight, and aerosol
data were not available or one were not aware that aerosol could have
an impact on clouds and sunlight, one direct but incorrect approach
would be to bin the data by cloudy and clear conditions and compare
the amount of sunlight between cloudy and clear observations (Figure
\ref{fig:naive-cloud-sunlight}). This approach would suggest that
clouds reduce sunlight by, on average, 158 W m$^{-2}$; this is a
strong overestimation of the true average effect of clouds (-68 W
m$^{-2}$), derived from Equation (\ref{eq:1}). This overestimation is
due to aerosol-induced co-variability between cloud and sunlight that
is unrelated to the causal link from cloud to sunlight. However if
aerosols were constant (e.g. observed or not varying), any
co-variability between cloud and sunlight would be attributable to the
causal edge between cloud and sunlight (Figure \ref{fig:toy}). In
other words, conditional on aerosol, all co-variability between cloud
and sunlight is only due to the causal effect of cloud on sunlight.
We can mathematically encode the requirement that we must condition
on aerosol to isolate the causal effect of cloud on sunlight; doing
so satisfies the backdoor criterion with \textit{adjustment} on
aerosol and yields:
\begin{equation} P(S | do(C = c)) = \int_{a} P(S \, | \, C = c, A=a)
\, P(A=a) \; da,
\label{eq:3}
\end{equation} where the \textit{do}-expression ($P(S \, | \, do(C\, = \,c))$) represents the probability of sunlight
if we did an experiment where we intervened and set cloud to a value
of our choosing (in this case $c$, which could be ``True'' for the
presence of a cloud, or ``False'' for no cloud). In the case that observations of aerosols are not available, our causal effect is not identifiable and
we cannot generally use causal inference without further assumptions,
no matter how large the sample size of our data is.
Causal graphs are therefore powerful analysis tools: after encoding
our domain knowledge in a causal graph, we can analyze the available
observations to determine whether a causal calculation is possible,
\textit{without needing to collect, download, or manipulate any
data}. For more complicated graphs, causal identification can be
automated \citep[][ and \url{http://www.dagitty.net/},
\url{https://causalfusion.net}]{tian2002general,shpitser2006,huang2006identifiability,Bareinboim7345,
textor2017}. We later use this theory to assess assumptions that
lead to tractable causal analyses for generic Earth science scenarios
(Section \ref{sec:necess-cond-caus}).
Once we have established that a causal effect is identifiable from
data, we must estimate the required observational distributions
(Equation (\ref{eq:3})) from data. Often it may be more
computationally tractable to calculate an average causal effect,
rather than the full causal distribution $P(S | do(C=c))$, which might be difficult to estimate. Returning
to our toy example (Figure \ref{fig:toy}), the average effect is
defined as:
\begin{equation} \mathbb{E}(S | do(C = c)) = \int_{s} s \, P(S = s |
do(C=c)) \, ds,
\label{eq:4}
\end{equation}
where $\mathbb{E}$ is the expected value. Substituting Equation
(\ref{eq:3}) into Equation (\ref{eq:4}), and rearranging gives:
\begin{equation} \mathbb{E}(S | do(C = c)) = \int_{a} P(A=a) \;
\mathbb{E}(S \, | \, C=c, A=a) \, d a,
\label{eq:5}
\end{equation}
where $\mathbb{E}(S \, | \, C=c, A=a)$ is just a regression of
sunlight on cloud and aerosol. Estimating the marginal $P(A)$ is
difficult, but if we assume that our observations are independent and
identically distributed (IID) and we have a large enough sample, we
can use the law of large numbers to approximate Equation (\ref{eq:5})
as \citep{shalizi2013}:
\begin{equation} \mathbb{E}(S | do(C = c)) \approx \frac{1}{n}
\sum_{i=1}^n \mathbb{E}(S \, | \, C=c, A=a_i).
\label{eq:6}
\end{equation}
Data or prior knowledge can inform the estimate of
$\mathbb{E}(S | C=c, A=a_i)$, but whatever regression method is used,
it should be checked to ensure it is representative of the data and
there is a sufficient signal-to-noise ratio to robustly estimate the
regression. It is important to note how this estimate is different
from the naive association of $\mathbb{E}(S | C=c)$; in
$\mathbb{E}(S | C=c, A=a_i)$ we are controlling for the impact of
aerosol on sunlight by conditioning on $A$ and including aerosol in
the regression. As we will see, the \textit{do}-calculus
approach yields an estimate of the effect of cloud on sunlight that
is very different from the naive association, and close to the true
effect.
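Continuing the simulation sketch above, Equation (\ref{eq:6}) can be
evaluated with linear regressions of sunlight on aerosol fit
separately under cloudy and clear conditions (again only a sketch;
any regression method appropriate to the data could be substituted):
\begin{verbatim}
# Continues the simulation sketch above (uses aerosol, cloudy, sunlight).
def fit_line(x, y):
    # Least-squares fit of y ~ b0 + b1 * x.
    X = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# E(S | C=c, A=a): a linear model in aerosol, fit per cloud condition.
b_cloudy = fit_line(aerosol[cloudy], sunlight[cloudy])
b_clear = fit_line(aerosol[~cloudy], sunlight[~cloudy])

def e_do(coef):
    # Backdoor adjustment: average the conditional expectation over
    # *all* observed aerosol values, not only those within the bin.
    return (coef[0] + coef[1] * aerosol).mean()

print(f"adjusted effect: {e_do(b_cloudy) - e_do(b_clear):.1f} W m^-2")
# close to the true -68, unlike the naive difference of about -158
\end{verbatim}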
\begin{figure} \centering \includegraphics[]{aerosolSunlight.pdf}
\caption{A linear relationship between aerosol and sunlight,
conditional on cloud. If we use this regression to calculate the
average causal effect of cloud on sunlight, as in Equation
(\ref{eq:6}), our result is very close to the true causal effect
of -68.0 W m$^{-2}$.}
\label{fig:linear}
\end{figure}
In our simple example, a linear model conditional on cloud is a
suitable choice for the regression function
$\mathbb{E}(S | C=c, A=a_i)$ (Figure \ref{fig:linear}). However, many
problems in Earth science require non-linear approximation methods
like neural networks and/or advanced machine learning methods; for
examples of such machine learning methods, we refer readers to
\citet{bishop2006pattern}.
The causal effect of clouds on sunlight as calculated using Equation
(\ref{eq:6}) (e.g.
$\mathbb{E}(S | do(C = \text{cloudy})) - \mathbb{E}(S | do(C =
\text{clear}))$) is -67.69 W m$^{-2}$, which closely matches the true
causal effect from Equation (\ref{eq:1}) of -68 W m$^{-2}$. This
example demonstrates how causal inference and theory can be used to
calculate unbiased average effects using regression, subject to the
assumptions clearly encoded in the causal graph. Further, causal
inference can be used to justify and communicate assumptions in any
observational analyses employing regression. In the best case, the
causal effect is identifiable from the available observations, and the
regression analysis can be framed as an average causal effect. In the
worst case that identification is not possible from the available
observations, one may present the regression as observed associations
between variables. However, presentation of a causal graph still aids
the reader: the reader can see from the causal graph what the
confounders and unobserved sources of covariability are between the
predictors and the output. In all cases, the presentation of a causal
graph makes explicit the assumptions about the causal dependencies of
the system. Wherever possible, we recommend including causal graphs
with any observation-based analyses.
To summarize the main points of this introduction to causal graphical
models and \textit{do-}calculus:
\begin{itemize}
\item Graphical causal models encode our assumptions about causal
dependencies in a system (edges are drawn \emph{from} causes
\emph{to} effects). ``Causal dependencies'' really just refer to
functional dependencies between inputs (causes) and outputs
(effects), which are useful in the Earth sciences to reason about
graphically.
\item In order to calculate an unbiased causal effect from data, we
must isolate the covariability between cause and effect that is due
to the directed causal path from cause to effect. The presence of
non-causal dependencies between the cause and effect can be deduced
from the causal graph: the presence of an \textit{unblocked backdoor
path} from the cause to the effect leads to non-causal
dependencies (and co-variation).
\item The \emph{backdoor criterion} identifies the variables that we
must condition on in order to block all backdoor paths, remove
non-causal dependence between the cause and effect, and calculate an
unbiased causal effect from data.
\item The \emph{average} causal effect can be reliably approximated
with a regression (Equation (\ref{eq:6})) derived from the backdoor
criterion. In this scenario, causal theory and graphs identify the
variables that should be included in the regression in order to
calculate an unbiased causal effect (however researchers should
still ensure their choice of regression model is appropriate for the
data). While not demonstrated by our example, causal theory and
graphs also identify the variables that \textit{should not} be
included in the regression \citep{pearl2009}: we can also bias a
causal effect by including too many variables in a regression.
\item The do-calculus and identification theory provide a flexible
tool to determine whether an effect is identifiable and, if so,
which distributions should be estimated from data, while making no
assumptions about the forms of the underlying functions and
distributions. However, parametric assumptions can be applied to
make the calculation of those distributions from data more
computationally tractable.
\end{itemize}
Here we focused on the \emph{backdoor criterion} to block backdoor
paths. An unblocked backdoor path from the cause to the effect is a
necessary condition for unidentifiability. However, an unblocked
backdoor path from the cause to the effect is not a sufficient
condition for unidentifiability: there are other identification
strategies like the front door criterion \citep[see Section 3.5.2
in][]{pearl2009causality} and instrumental variables \citep[see
Chapter 8 in][]{pearl2009causality} that do not rely on observing
variables along the backdoor path, and can be used in some cases where
observations are not available to satisfy the backdoor criterion (also
see \citet{tian2002general} for more discussion on sufficient
conditions for unidentifiability). These are examples of how the
\textit{do-}calculus admits any strategy that frames the response to
interventions in terms of observed distributions. We focus on the
backdoor criterion because it is the most fundamental and direct
method for adjusting for confounding, the most intuitive for an
introduction to causality, and is the most relevant for the generic
temporal systems present in the Earth sciences (Figure
\ref{fig:generic} in Section \ref{sec:necess-cond-caus}). However,
causal identification through other methods like instrumental
variables and the front door criterion can also be automated; we refer
the reader to \citet{pearl2009causality} for further discussion and
software tools like \url{http://www.dagitty.net/} and
\url{http://www.causalfusion.net} for interactive exploration.
\section{Beyond toys: causal graphs as communicators, organizers, and
time-savers}\label{sec:causal-graphs-as}
In Section \ref{sec:causal-graphs-pearls} we used a toy example to
demonstrate a causal analysis starting with drawing a graph and ending
with the successful calculation of the average response of sunlight
(the effect) to an intervention on cloud (the cause). However, often
we may not be able to ultimately estimate the causal effects from the
available data. In many cases there are serious challenges due to
unobserved confounding in generic Earth science problems (Section
\ref{sec:necess-cond-caus}), and for other cases we may lack enough
samples, or samples could be too systematically biased, to estimate
the necessary distributions (or regressions; e.g., Equation
(\ref{eq:6})) with sufficient certainty. However, we want to
emphasize that even if calculating a causal effect is in some
cases impossible, drawing a causal graph at the beginning of an
analysis still offers tremendous benefits in terms of organization and
scientific communication. Investing time to reason about the
functional structure of the problem at the outset can save scientists
time in the long term by forcing us to clarify our thinking early,
exposing potential challenges, and identifying intractable approaches.
Additionally, once the causal graph is drawn, we can use it as a
communication tool and include it in presentations, papers, and
discussions of our results. Making our assumptions about dependencies
in the system explicit greatly improves the interpretability and
reproducibility of our results. Perhaps our analysis and graph meet
the standards for a causal interpretation, but even if they do not,
the causal graph helps the rest of the community assess the sources of
confounding in the graph that were not controlled for, and understand
if their conceptualization of the graph structure matches the authors'
hypotheses. Often in research there are more assumptions being made
than are communicated, and even when they are communicated, the
assumptions do not always get discussed in a precise way. Including a
causal graph allows the assumptions to be clearly known, and discussed
in a precise and rigorous way \citep[e.g.,][]{hannart-da}.
To support the idea that many analyses would benefit from a causal
graph, we will detail how a past project benefited from
one. This example also moves beyond the toy example of Section
\ref{sec:causal-graphs-pearls}, and demonstrates causal graphs'
applicability to real problems in Earth science.
\subsection{Causal graphs' utility in a real example}
In \citet{massmann2017}, the lead author of this manuscript
participated in a field campaign designed to study the impact of
microphysical rain regime (specifically the presence of ice from aloft
falling into orographic clouds) on orographic enhancement of
precipitation. This field campaign and analysis benefits from a causal
graph and is a real-world example argument for the more common use of
causal graphs as research tools. Our retrospective causal graph of
orographic enhancement in the Nahuelbuta mountains under steady
conditions clearly communicates our assumptions about the system
(Figure \ref{fig:ccope}).
\begin{figure} \includegraphics[]{ccope.pdf}
\caption{A graph representing steady conditions for orographic
enhancement during the Chilean Coastal Orographic Precipitation
Experiment \citep[CCOPE,][]{massmann2017}. The field campaign
attempted to quantify the effect of ``rain regime'' on
``orographic enhancement.'' Observed quantities are represented
by solid nodes, while unobserved quantities are represented by
dashed nodes. Variables that must be observed to block all
backdoor paths are shaded. The effect of rain regime on orographic
enhancement is identifiable through adjustment on these shaded variables.}
\label{fig:ccope}
\end{figure}
Many of the variables in the graph, such as ``synoptic forcing'',
``wind'', and ``orographic flow'', are quite general
quantities. Keeping quantities general can lead to more intuitive and
interpretable graphs by limiting the number of details and nodes that
one must consider. However, if logic needs clarifying, the graph can
become more explicit (e.g., differentiating wind into speed,
direction, and spatial distribution, both horizontally and
vertically). We also include unobserved variables explicitly in the
graph, so we can reason about processes' impact on the system even if
they are not observed. As graphs become more complicated, one can
leverage interactive visualization software, or use static graph
abstractions like plates \citep{bishop2006pattern}, which are
particularly well suited to representing repeated structure common to
spatiotemporal systems.
While the exact details of the graph and the assumptions it encodes
are interesting (e.g., ``wind'', ``stability'', and ``atmospheric
moisture'' all refer to upwind conditions, and we assume that these
upwind conditions are the relevant ``boundary condition'' for the
downwind orographic clouds and precipitation), the noteworthy feature
of the graph is that the field campaign's effect of interest, rain
regime on orographic enhancement, is identifiable from the field
campaign's observations. This is subject to the assumptions encoded in
the graph, but those assumptions are explicitly represented and
communicated by the graph. The causal graph helps interpret the field
campaign's results, and in some sense proves that the design of the
field campaign is sound.
Therefore, for field campaigns, causal graphs are particularly useful at the
planning and proposal stage. Such a causal graph could be included in
any field campaign proposal, improving communication about the system
and also rigorously justifying the campaign’s observations as
necessary for calculating the desired effect(s). Even before the
proposal, one could start with a causal graph, and then analyze it to
determine which observations are needed to meet the campaign’s
goals. Building on this idea, one could attach costs associated with
observing each variable in the graph, and automatically determine the
set of observations that minimizes cost while still allowing us to
calculate our effect(s) of interest.
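As a sketch of how such an automated search might look (a brute-force
illustration of ours with hypothetical node names and observation
costs; real campaigns would have many candidate nodes, and the
d-separation routine is named \texttt{is\_d\_separator} in recent
versions of networkx):
\begin{verbatim}
from itertools import combinations
import networkx as nx

# Hypothetical example: the toy cloud-aerosol-sunlight graph, with an
# invented cost of observing each candidate adjustment variable.
g = nx.DiGraph([("A", "C"), ("A", "S"), ("C", "S")])
cost = {"A": 1.0}
candidates = ["A"]

def cheapest_backdoor_set(g, cause, effect):
    # Backdoor criterion as d-separation: remove the cause's outgoing
    # edges, then find the cheapest set of non-descendants of the cause
    # that d-separates cause and effect in the pruned graph.
    g_bd = g.copy()
    g_bd.remove_edges_from(list(g.out_edges(cause)))
    forbidden = nx.descendants(g, cause)
    best = None
    for k in range(len(candidates) + 1):
        for z in combinations(candidates, k):
            if set(z) & forbidden:
                continue
            if nx.d_separated(g_bd, {cause}, {effect}, set(z)):
                c = sum(cost[v] for v in z)
                if best is None or c < best[0]:
                    best = (c, set(z))
    return best

print(cheapest_backdoor_set(g, "C", "S"))  # (1.0, {'A'})
\end{verbatim}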
While this is just one example, it demonstrates that causal graphs are
useful beyond toy problems in the Earth system. Additionally, as we
will see in Section \ref{sec:necess-cond-caus}, we can draw quite
general graphs that are representative of many problems in Earth
science. We hope the reader considers drawing a causal graph as a
first step in their next project; they help structure, organize, and
clarify our analysis and its assumptions.
\section{Overcoming unobserved confounding and partial observation of Earth system state}
\label{sec:necess-cond-caus}
So far we have focused on toy (Section
\ref{sec:causal-graphs-pearls}) and specific (Section
\ref{sec:causal-graphs-as}) examples. We now turn our attention to
more general and generic problems in Earth science systems, the common
challenges we may encounter when attempting causal inference, and how
we can overcome these challenges.
Earth science systems and their components are (typically) dynamical
systems evolving through time according to an underlying system state
\citep{lorenz-1963,lorenz1996predictability,majda-state}. This offers
both opportunities and challenges for causal inference. When
constructing causal graphs we may benefit from the temporal ordering
of events \citep{runge2019inferring}: we know that future events can
have no causal effect on the past. We can also use the time dimension
to explicitly resolve feedbacks in the system, and transform cyclic
graphs with feedbacks into directed acyclic graphs (DAGs) required for
causal inference. While handling feedbacks and avoiding cyclic graphs
is a challenge of causal inference in Earth science, resolving the
time dimension is a generic path to overcome this challenge when
observations of sufficient resolution are available.
However, confounding due to incomplete observation of the system's
state variables also introduces challenges; challenges that are not
unique to this paper's causal lens: incomplete observation of the
system precludes the use of many ``causal discovery'' algorithms as
well (see \citet{runge2019inferring} for a detailed review). Causal
identification and tractable causal inference in Earth science
requires assumptions about the unobserved portions of the state space
that introduce this confounding, and how the unobserved portions of
the state space affect observed variables. Without such assumptions
the unobserved portions of the state space will introduce confounding
for any causal effect of interest (Figure \ref{fig:generic}). For
example, we generally do not observe the state space at every time
(e.g. $S(t-1/2)$, Figure \ref{fig:generic}), and at any given time, we
do not observe the state space at all locations and for all state
variables (e.g. $S(t)$ and $S(t-1)$ in Figure \ref{fig:generic}). In
other words, despite our impressive and growing array of satellite,
remote sensing, and in situ observation systems, we are still very far
from observing every relevant state variable at every location in time
and space. So, if we are interested in the causal effect of any state
variable at time $t$ on some variable at time $t+1$ (e.g., $E$ in
Figure \ref{fig:generic}), then the causal effect will be confounded
by the unobserved portions of the state space, and calculating a
causal effect will be impossible (un-identifiable) without additional
assumptions.
However, there are assumptions we can make that may be reasonable for
many generic applications, which remove this problem of unobserved
confounding due to partial observation of the state space. We
elaborate upon these assumptions in the following sections, and for
each assumption, we draw a graph, briefly discuss the scenario and
assumptions, present the identification formula and how average
effects can be estimated (for example, using regression to estimate
$\mathbb{E}(\cdot )$), and include some strategies for testing the
assumption(s) with data.
\begin{figure}
\centering
\input{figs/generic-graph.tex}
\caption{A generic graph of the Earth system state
sequence. Unobserved nodes are outlined by dashed lines. We only
observe the state space at certain times (e.g., no observations at
$S(t-1/2)$), and at times with observations, we only partially
observe the full state ($S(t)$, $S(t-1)$). In the scenario that we
are interested in calculating the causal effect of any portion of
the state space at time $t$ on some effect ($E$) at time $t+1$,
the causal effect will be confounded by the unobserved portions of
the state space, and calculating the causal effect is impossible
(un-identifiable) without additional assumptions.}
\label{fig:generic}
\end{figure}
\newpage
\paragraph{Assumption: we observe all state variables that impact our effect(s) of interest}
\begin{figure}[H]
\centering
\includegraphics[]{observe-everything.pdf}
\caption{A generic graph for the assumption that we observe all
state variables that impact our effect(s) of interest. $C(t)$ is
the cause of interest, $E(t+1)$ is the future effect of interest,
and $O(t)$ are all observations not including $C(t)$. The full
state $S(t-1)$ is not observed (dashed node). Observing $O(t)$
(grey shading) blocks the backdoor path from $C(t)$ to $E(t+1)$.}
\label{fig:observe-everything}
\end{figure}
\begin{itemize}
\item \textbf{Discussion:} While we may not observe the entire state
of the system, sometimes it is reasonable to assume that we observe
the portion of the state space that affects our specific effect of
interest ($E(t+1)$ in Figure \ref{fig:observe-everything}). In this
case, we can calculate causal effects by blocking backdoor paths
with the observed portion of the state space.
\item \textbf{Identification formula:}
\begin{equation*}
P(E(t+1) \, | \, do(C(t) = c)) = \int_{o} P(E(t+1) \, | \, C(t) = c,
O(t) = o) \, P(O(t)=o) \; d o,
\end{equation*}
\item \textbf{Average causal effect:}
\begin{equation*}
\mathbb{E}(E(t+1) \, | \, do(C(t) = c)) \approx \frac{1}{n}
\sum_{i=1}^n \mathbb{E}(E(t+1) \, | \, C(t)=c, O(t)=o_i),
\end{equation*}
where $C(t)$ is the cause of interest, $E(t+1)$ is the effect of
interest, and $O(t)$ are all observed variables not including $C(t)$
(a minimal code sketch of this estimate follows the list).
\item \textbf{Check:} There is no way to check this assumption with
data. Therefore, this assumption requires strong physical
justification well supported by the literature. Care is also
required to ensure that there are no interactions between $C(t)$ and
$O(t)$; e.g., the observations at a given time are truly
``simultaneous'' and cannot causally affect each other.
\end{itemize}
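A minimal sketch of this estimate, assuming arrays \texttt{c} for
$C(t)$, \texttt{O} for $O(t)$, and \texttt{e} for $E(t+1)$; the linear
regression is a placeholder whose adequacy must be checked against the
data:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

def average_effect(c, O, e, c_value):
    # Fit E(E(t+1) | C(t), O(t)) with a regression (linear only for
    # illustration), then average its predictions over the observed
    # o_i with the cause pinned to c_value.
    model = LinearRegression().fit(np.column_stack([c, O]), e)
    X_do = np.column_stack([np.full(len(c), c_value), O])
    return model.predict(X_do).mean()

# Average causal effect of switching the cause from 0 to 1:
# effect = average_effect(c, O, e, 1.0) - average_effect(c, O, e, 0.0)
\end{verbatim}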
\newpage
\paragraph{Assumption: We can reconstruct the state at any given time
using lagged observations of the system}
\begin{figure}[H]
\centering
\includegraphics[]{reconstruction.pdf}
\caption{A generic graph for the assumption that we can reconstruct
the state at any given time using lagged observations of the
system \citep{takens1981detecting}. $C(t)$ is the cause of
interest, $E(t+1)$ is the future effect of interest, $O(t)$ are
all observations not including $C(t)$, and $S'(t-1)$ is the
reconstructed state. Observing $S'(t-1)$ (grey shading) blocks all
backdoor paths from $C(t)$ to $E(t+1)$.}
\label{fig:reconstruction}
\end{figure}
\begin{itemize}
\item \textbf{Discussion:} While we may not observe the entire state
space, in some cases we may be able to reconstruct the state at any
given time using lagged observations of the system \citep[see
Takens' theorem,][]{takens1981detecting}. In this case, we can use
the reconstructed state to block backdoor paths and examine the
effect of any observed variable ($C(t)$) on future variables
($E(t+1)$ in Figure \ref{fig:reconstruction}). Note that
additionally controlling for $O(t)$ can also make the effect
estimate more reliable. More generally, there may be more than one
suitable set of adjustment co-variates and current research is
targeted at finding optimal ones that yield the lowest estimation
error \citep[e.g.,][]{witte2020efficient}.
\item \textbf{Identification formula:}
\begin{equation*}
P(E(t+1) \, | \, do(C(t) = c)) = \int_{s} P(E(t+1) \, | \, C(t) = c,
S'(t-1) = s) \, P(S'(t-1)=s) \; d s.
\end{equation*}
\item \textbf{Average causal effect:}
\begin{equation*}
\mathbb{E}(E(t+1) \, | \, do(C(t) = c)) \approx \frac{1}{n}
\sum_{i=1}^n \mathbb{E}(E(t+1) \, | \, C(t)=c, S'(t-1)=s_i),
\end{equation*}
where $C(t)$ is the cause of interest, $E(t+1)$ is the effect of
interest, and $S'(t-1)$ is the state reconstructed from lagged
observations (see the sketch following this list).
\item \textbf{Check:} One check on the success of the state space
reconstruction is to test whether the observed variables are
conditionally independent given the reconstructed state
variable. This approach bears similarity to the deconfounder
approach introduced by \cite{yixin-2019}, which argues that causal
effects can be calculated for many problems when we can infer a
latent variable that renders the (multiple) causes conditionally
independent given the latent variable.
\end{itemize}
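A minimal sketch of the reconstruction step for a single scalar
observable (the embedding dimension and lag are free parameters that
must be chosen for the system at hand):
\begin{verbatim}
import numpy as np

def delay_embed(x, dim, tau):
    # Stack lagged copies of a scalar series x into reconstructed state
    # vectors (Takens-style): row t is (x[t], x[t+tau], ...,
    # x[t+(dim-1)*tau]).  Output shape: (len(x) - (dim-1)*tau, dim).
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# The rows of delay_embed(obs, dim, tau) then play the role of S'(t-1)
# in the adjustment, exactly as O(t) did in the previous sketch.
\end{verbatim}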
\paragraph{Assumption: the cause of interest is independent of the
system's state evolution}
\begin{figure} \includegraphics[]{forcing-graph.pdf}
\centering
\caption{A generic graph asserting an assumption that there are
forcings external to the evolution of the state-space.}
\label{fig:forcing}
\end{figure}
\begin{itemize}
\item \textbf{Discussion:} In some cases, we may assume that
the cause of interest is independent of the system's state
evolution (Figure \ref{fig:forcing}). While not generally true,
in some cases a variable may behave independently of the system's
state while still causally affecting that state. For example, some
human behavior may be approximated as independent of the climate
state (e.g., city planning and land use decisions).
\item \textbf{Identification formula:}
\begin{equation*}
P(E(t+1) \, | \, do(C = c)) = P(E(t+1) \, | \, C = c)
\end{equation*}
\item \textbf{Average causal effect:}
\begin{equation*}
\mathbb{E}(E(t+1) \, | \, do(C = c)) = \mathbb{E}(E(t+1) \, | \, C=c)
\end{equation*}
\item \textbf{Check:} To check this assumption, we can test whether
the cause/forcing is independent of the past state; if it is, we have
stronger confidence that the assumption holds (a minimal sketch of
such a check follows this list).
\end{itemize}
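One simple version of this check (a sketch with placeholder data; it
only detects linear dependence, so passing it does not prove
independence):
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

# Placeholder data: substitute the candidate forcing time series and a
# summary statistic of the lagged system state.
rng = np.random.default_rng(0)
forcing, past_state = rng.normal(size=500), rng.normal(size=500)

# A small p-value is evidence *against* the independence assumption; a
# large one is merely consistent with it (nonlinear dependence and
# dependence on unobserved state variables would go undetected).
r, p = pearsonr(forcing, past_state)
print(f"correlation r = {r:.2f}, p = {p:.3f}")
\end{verbatim}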
While this list of assumptions is certainly not exhaustive, it
presents some approaches that may apply to many scenarios in Earth
science. Also, while the provided checks might identify when
assumptions break down, there is no general way to ``validate''
assumptions using data. Physical and science-based justification and
reasoning are ultimately always required.
\section{Conclusions}
In summary, causal graphs and causal inference are powerful tools to
reason about problems in the Earth system, whether in models or observations. Specifically, this review
aimed to show that:
\begin{itemize}
\item Causal graphs concisely and clearly encode physical assumptions about
causal/functional dependencies between processes. Including a causal
graph benefits any observational or modeling analysis, including
those that use regression.
\item Whether a causal effect can be calculated from data is
determined by the causal graph. Thus the tractability of a causal
analysis, or the strength of assumptions necessary to make the
analysis tractable, is determined and assessed before collecting,
generating, or manipulating data (which can cost a tremendous amount
in terms of researchers' time or computational resources). We
recommend early causal analysis to determine tractability during a
project's conception, before resources are spent obtaining or
analyzing data.
\item Calculated causal effects measure the response of target
variables (i.e. effects) to \textit{interventions} on other
variables in the system (i.e. causes). With statistical modeling
(i.e., regression) one can estimate the functional relationship
between interventions on causes and effects. These functional
relationships \textit{generalize} because they map
\textit{interventions} onto the response: unlike observed
associations (e.g., naive regression), we know the response is
attributable to the function's input, and not to some other process in
the system (e.g., an unobserved common cause). This causal approach opens
up a new path to calculate generalized functional relationships when
we either do not know the functional form a priori, or it is too
computationally intractable to calculate from models.
\item Because the Earth system and its constituents are dynamical
systems evolving through time, we can construct broadly applicable,
generic Earth system causal graphs. We can use these graphs to
calculate generalized functional relationships between processes of
interest, which would not be possible with associations, correlations
or simple regressions. However, causal inference in the Earth
sciences also presents challenges as we only partially observe the
state space of the system.
\item These challenges can be alleviated by applying causal theory to
generic causal graphs of the Earth system and identifying the
assumptions that allow for causal inference from data (Section
\ref{sec:necess-cond-caus}).
\end{itemize}
Here we focus on the fundamentals of calculating causal effects from
data. However, causal inference is a thriving, active area of
research, and there are many other causal inference techniques and
abstractions that could benefit the Earth system research
community. For example, there are techniques for representing
variables observed under selection bias in the causal graph and
analyzing whether a causal effect can be calculated (i.e. identified)
given that selection bias
\citep[e.g.,][]{bareinboim2014recovering,correa2018generalized}. Selection
bias, defined as a preferential sampling of data according to some
underlying mechanism, is very relevant in Earth sciences. For example,
satellite observations are almost always collected under selection
bias (e.g. clouds obscure surface data, satellites sample at certain
local times of the day which is connected to top of atmosphere solar
forcing, or the sensors themselves could have a bias). Additionally,
transportability
\citep[e.g.,][]{bareinboim2012transportability,Bareinboim7345,lee2019general}
identifies whether one can calculate a causal effect in a passively
observed system called the ``target domain'', by merging experiments
from other systems, called ``source domains'', that may differ from
the target domain. A potential application for transportability in
Earth sciences would be to merge numerical model experiments (e.g.,
Earth system models) and formally transport their results to the real
world. In this case, numerical models are the source domains that
differ from the target domain (``real world'') due to approximations
and different resolutions.
However, because these developments in causal theory are relatively
new, applied domains have yet to establish these recent theoretical
developments' utility for applied analysis. While we encourage applied
scientists to explore how these developments may apply to their
domains, we recognize that many scientists prefer tools with
established utility. To that end, we believe that the use of causal
graphs to organize and structure analyses is mature and directly
applicable to many projects, and can serve as a gateway to applying
these more recent causal developments. We hope that drawing and
including causal graphs in Earth science research becomes more common
in our field, and that this manuscript provides some of the necessary
foundation for readers to attempt using causal graphs in their future
research.
\paragraph{Acknowledgments} The authors want to thank Elias
Bareinboim, Beth Tellman, James Doss-Gollin, David Farnham, and Masa
Haraguchi for thoughtful feedback and comments that greatly improved
an earlier version of this manuscript.
\bibliography{references.bib}
\appendix
\section{Basic probability and syntax}
\label{prob-theory}
In this paper we use capital letters to represent random variables
(e.g., ``$X$''). For example, $P(X)$ is the marginal probability distribution
of a random variable $X$. $P(X)$ is a function of one variable that
outputs a probability (or density, in the case of continuous
variables) given a specific value for $X$. We represent specific
values that a random variable can take with lowercase letters (e.g.,
$x$ in the case of $X$). $P(X)$ is shorthand; a more descriptive but
less concise way to write $P(X)$ is $P(X=x)$ which represents the fact
that $P(X)$ is a function of a specific value of $X$, represented by
$x$. We use both notations, and $P(X)$ has the same meaning as
$P(X=x)$.
For the unfamiliar reader, there are a few basic rules and definitions
in probability that provide relatively complete foundations for
building deeper understanding of probability. These are the
\textbf{sum rule}:
\begin{equation} P(X=x) = \sum_Y P(X=x,\, Y=y)
\label{eq:sum}
\end{equation}
and the \textbf{product rule}:
\begin{equation} P(X=x, \, Y=y) = P(X = x \, | \, Y=y ) P(Y=y) = P(Y =
y \, | \, X=x ) P(X=x)
\label{eq:product}
\end{equation}
The \textit{joint probability distribution} ($P(X=x,Y=y)$) is the
probability that the random variable $X$ equals some value $x$
\emph{and} the random variable $Y$ equals $y$. The joint distribution
is a function of two variables, $x$ and $y$ which are values in the
domains of the random variables $X$ and $Y$ respectively. The
\textit{conditional probability distribution} ($P(X = x \, | \, Y=y
)$) is also a function of two variables $x$ and $y$, but it is the
probability of observing $X$ equal to $x$, given that we have observed
$Y$ equal to $y$. In other words, if we filter our domain to only
values where $Y=y$, then $P(X = x \, | \, Y=y )$ is the probability of
observing $X=x$ in this sub-domain where $Y=y$. The \textit{marginal
probability distribution} ($P(Y=y)$) is just the probability that $Y$
equals some value $y$, and is a function of only $y$. We can calculate
the marginal probability from the joint distribution by summing over
all possible values of the other random variables in the joint
(the ``sum rule'' - Equation (\ref{eq:sum})). Additionally, the joint
distribution can factorize into a product of conditional and marginal
distributions (``the product rule'' - Equation
(\ref{eq:product})). These two simple rules can be used to build much
of the theory and applications of probability theory (e.g., Bayes'
theorem $P(Y|X) =\frac{P(X|Y) P(Y)}{P(X)}$). While Equation
(\ref{eq:sum}) deals with probability distributions of discrete random
variables, there is also a sum rule analog for continuous random
variables and probability density functions (the syntax of the product
rule is the same):
\begin{equation*} P(X=x) = \int_Y P(X=x,\, Y=y) \, dy
\end{equation*}
where $\int_{Y}$ represents an integral over the domain of $Y$ (e.g.,
$\int_{-\infty}^{\infty}$ if $Y$ is a Gaussian random variable).
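As a concrete illustration of these rules (a toy example, not data
used elsewhere in this paper), consider two binary random variables
with joint distribution $P(X=1, Y=1) = 0.2$, $P(X=1, Y=0) = 0.3$,
$P(X=0, Y=1) = 0.1$, and $P(X=0, Y=0) = 0.4$. The sum rule gives the
marginal $P(X=1) = 0.2 + 0.3 = 0.5$, and the product rule gives the
conditional $P(Y=1 \, | \, X=1) = P(X=1,\, Y=1) / P(X=1) = 0.2 / 0.5
= 0.4$.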
\end{document}
|
A subset $S$ of a topological space $T$ is open in $T$ if and only if $S$ is a subset of $T$ and for every $x \in S$, there exists an open ball around $x$ that is contained in $S$. |
\chapter{pybern}
\label{ch:pybern}
\emph{pybern}\index{pybern} is the core part of the analysis platform. It is
actually a Python module (aka a software library) that contains definitions and
functions used throughout the programs used for the processing. Its development
is performed in the relevant \href{https://github.com/DSOlab/autobern}{github repository}.
Currently (\today), the servers running the software use (by default) Python 2.7,
so we must always make sure the module is compatible with this version. However,
the module should also run without problems with Python 3.x.
\section{Installation}
\label{sec:pybern-installation}
\emph{pybern} needs to be installed on your system for everything to work. It also
needs to be re-installed every time any of its files changes. To install the
module, you need to:
\begin{itemize}
\item go to the module's top directory, i.e. where the \verb|setup.py| file is
\item run (with root privileges) the command \verb|$>sudo python setup.py install|
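\item (optional) verify the installation by importing the module from the
interpreter, e.g. \verb|$>python -c 'import pybern'|; this is a minimal
check that assumes nothing beyond the module's import name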
\item all done!
\end{itemize} |
{-# OPTIONS --without-K --safe #-}
module Data.Empty.Base where
open import Level
data ⊥ : Type where
infix 4.5 ¬_
¬_ : Type a → Type a
¬ A = A → ⊥
⊥-elim : ⊥ → A
⊥-elim ()
|
-- From AT: This approach could work, but seems too complicated for now.
-- I have opted to use the comparison lemma instead.
-- See the folder `condensed/extr/`.
/-
import topology.category.Profinite
import for_mathlib.Profinite.disjoint_union
open category_theory
namespace Profinite
universe u
/- The following three lemmas are used below to help speed up some proofs. -/
@[simp] lemma pullback.fst_apply {A B C : Profinite.{u}} (f : A ⟶ C) (g : B ⟶ C)
(a : A) (b : B) (h : f a = g b) : Profinite.pullback.fst f g ⟨(a,b),h⟩ = a := rfl
@[simp] lemma pullback.snd_apply {A B C : Profinite.{u}} (f : A ⟶ C) (g : B ⟶ C)
(a : A) (b : B) (h : f a = g b) : Profinite.pullback.snd f g ⟨(a,b),h⟩ = b := rfl
@[simp] lemma pullback.lift_apply {A B C D : Profinite.{u}} (f : A ⟶ C) (g : B ⟶ C)
(a : D ⟶ A) (b : D ⟶ B) (w : a ≫ f = b ≫ g) (d : D) :
Profinite.pullback.lift f g a b w d = ⟨(a d, b d), by { change (a ≫ f) d = _, rw w, refl }⟩ :=
rfl
structure prepresentation (B : Profinite.{u}) :=
(G : Profinite.{u})
(π : G ⟶ B)
(hπ : function.surjective π)
(R : Profinite.{u})
(r : R ⟶ Profinite.pullback π π)
(hr : function.surjective r)
def prepresentation.fst {B : Profinite.{u}} (P : B.prepresentation) :
P.R ⟶ P.G :=
P.r ≫ Profinite.pullback.fst _ _
def prepresentation.snd {B : Profinite.{u}} (P : B.prepresentation) :
P.R ⟶ P.G :=
P.r ≫ Profinite.pullback.snd _ _
lemma prepresentation.fst_surjective {B : Profinite.{u}} (P : B.prepresentation) :
function.surjective P.fst :=
begin
apply function.surjective.comp _ P.hr,
intros x,
exact ⟨⟨⟨x,x⟩,rfl⟩,rfl⟩,
end
lemma prepresentation.snd_surjective {B : Profinite.{u}} (P : B.prepresentation) :
function.surjective P.snd :=
begin
apply function.surjective.comp _ P.hr,
intros x,
exact ⟨⟨⟨x,x⟩,rfl⟩,rfl⟩,
end
def prepresentation.base {B : Profinite.{u}} (P : B.prepresentation) :
P.R ⟶ B :=
P.snd ≫ P.π
@[simp, reassoc, elementwise]
lemma prepresentation.base_fst {B : Profinite.{u}} (P : B.prepresentation) :
P.fst ≫ P.π = P.base :=
by { dsimp [prepresentation.fst, prepresentation.snd, prepresentation.base],
simp [Profinite.pullback.condition] }
@[simp, reassoc, elementwise]
lemma prepresentation.base_snd {B : Profinite.{u}} (P : B.prepresentation) :
P.snd ≫ P.π = P.base := rfl
lemma prepresentation.base_surjective {B : Profinite.{u}} (P : B.prepresentation) :
function.surjective P.base := function.surjective.comp P.hπ P.snd_surjective
structure prepresentation.hom_over {B₁ B₂ : Profinite.{u}}
(X₁ : B₁.prepresentation) (X₂ : B₂.prepresentation) (f : B₁ ⟶ B₂) :=
(g : X₁.G ⟶ X₂.G)
(hg : g ≫ X₂.π = X₁.π ≫ f)
(r : X₁.R ⟶ X₂.R)
(fst : r ≫ X₂.fst = X₁.fst ≫ g)
(snd : r ≫ X₂.snd = X₁.snd ≫ g)
attribute [simp, reassoc, elementwise]
prepresentation.hom_over.hg
prepresentation.hom_over.fst
prepresentation.hom_over.snd
local attribute [simp, elementwise]
Profinite.pullback.condition
Profinite.pullback.condition_assoc
def prepresentation.hom_over.comp {B₁ B₂ B₃ : Profinite.{u}}
{X₁ : B₁.prepresentation} {X₂ : B₂.prepresentation} {X₃ : B₃.prepresentation}
{f₁ : B₁ ⟶ B₂} {f₂ : B₂ ⟶ B₃}
(e₁ : X₁.hom_over X₂ f₁) (e₂ : X₂.hom_over X₃ f₂) :
X₁.hom_over X₃ (f₁ ≫ f₂) :=
{ g := e₁.g ≫ e₂.g,
hg := by simp,
r := e₁.r ≫ e₂.r,
fst := by simp,
snd := by simp }
def prepresentation.pullback_G {X B : Profinite} (f : X ⟶ B) (hf : function.surjective f)
(P : B.prepresentation) : X.prepresentation :=
{ G := Profinite.pullback f P.π,
π := Profinite.pullback.fst _ _,
hπ := begin
intros x,
obtain ⟨y,hy⟩ := P.hπ (f x),
exact ⟨⟨⟨x,y⟩,hy.symm⟩,rfl⟩,
end,
R := Profinite.pullback f P.base,
r := Profinite.pullback.lift _ _
(Profinite.pullback.lift _ _
(Profinite.pullback.fst _ _)
(Profinite.pullback.snd _ _ ≫ P.fst) $ by simp)
(Profinite.pullback.lift _ _
(Profinite.pullback.fst _ _)
(Profinite.pullback.snd _ _ ≫ P.snd) $ by simp) $ by simp,
hr := begin
rintros ⟨⟨⟨⟨a,b₁⟩,h₁⟩,⟨⟨a₂,b₂⟩,h₂⟩⟩,(rfl : a = a₂)⟩,
dsimp at h₁ h₂,
let c : Profinite.pullback P.π P.π := ⟨⟨b₁,b₂⟩,_⟩,
swap, { dsimp, rw [← h₁, ← h₂] },
obtain ⟨d,hd⟩ := P.hr c,
refine ⟨⟨⟨a,d⟩,_⟩,_⟩,
{ dsimp only [prepresentation.base, prepresentation.snd],
dsimp,
rwa hd },
{ dsimp [prepresentation.fst, prepresentation.snd],
congr,
{ simp [hd] },
{ simp [hd] } },
end } .
def presentation.pullback_R {X B : Profinite} (f : X ⟶ B) (hf : function.surjective f)
(P : B.prepresentation) : (Profinite.pullback f f).prepresentation :=
{ G := Profinite.pullback (Profinite.pullback.snd f f ≫ f) P.π,
π := Profinite.pullback.lift _ _
(Profinite.pullback.fst _ _ ≫ Profinite.pullback.fst _ _)
(Profinite.pullback.fst _ _ ≫ Profinite.pullback.snd _ _) $ by simp,
hπ := begin
rintros ⟨⟨a,b⟩,h⟩,
dsimp at h,
obtain ⟨c,hc⟩ := P.hπ (f b),
refine ⟨⟨⟨⟨⟨a,b⟩,h⟩,c⟩,hc.symm⟩,_⟩,
refl,
end,
R := Profinite.pullback (Profinite.pullback.snd f f ≫ f) P.base,
r := Profinite.pullback.lift _ _
(Profinite.pullback.lift _ _
(Profinite.pullback.fst _ _)
(Profinite.pullback.snd _ _ ≫ P.fst) $ by simp)
(Profinite.pullback.lift _ _
(Profinite.pullback.fst _ _)
(Profinite.pullback.snd _ _ ≫ P.snd) $ by simp) $
by { apply Profinite.pullback.hom_ext; simp },
hr := begin
rintros ⟨⟨⟨⟨⟨⟨a₁,a₁'⟩,h₁'⟩,b₁⟩,h₁⟩,⟨⟨⟨⟨a₂,a₂'⟩,h₂'⟩,b₂⟩,h₂⟩⟩,h⟩,
change f a₁' = (P.π) b₁ at h₁,
change f a₂' = (P.π) b₂ at h₂,
change f a₁ = f a₁' at h₁',
change f a₂ = f a₂' at h₂',
change _ = _ at h,
have hfst := h,
have hsnd := h,
apply_fun (λ a, a.1.1) at hfst,
apply_fun (λ a, a.1.2) at hsnd,
change a₁ = a₂ at hfst,
change a₁' = a₂' at hsnd,
let e : Profinite.pullback f f := ⟨⟨a₁,a₁'⟩, h₁'⟩,
let w₀ : Profinite.pullback P.π P.π := ⟨⟨b₁,b₂⟩,_⟩,
swap, { change _ = _, rw [← h₁, ← h₂, hsnd] },
obtain ⟨w₁,hw₁⟩ := P.hr w₀,
let w : Profinite.pullback (pullback.snd f f ≫ f) P.base := ⟨⟨e,w₁⟩,_⟩,
swap, { change _ = _, change f _ = P.π (pullback.snd _ _ _),
erw hw₁,
dsimp only [pullback.fst_apply, e, pullback.snd_apply], rw hsnd, exact h₂ },
use w,
dsimp only [w, e, pullback.fst_apply, pullback.snd_apply, pullback.lift_apply],
congr,
{ dsimp [prepresentation.fst],
rw hw₁, refl },
{ congr' 2 },
{ dsimp [prepresentation.snd],
rw hw₁, refl },
end } .
def prepresentation.pullback_π {X B : Profinite}
(f : X ⟶ B) (hf : function.surjective f)
(P : B.prepresentation) : (P.pullback_G f hf).hom_over P f :=
{ g := Profinite.pullback.snd _ _,
hg := begin
dsimp [prepresentation.pullback_G],
simp,
end,
r := Profinite.pullback.snd _ _,
fst := begin
dsimp [prepresentation.pullback_G, prepresentation.fst],
simp,
end,
snd := begin
dsimp [prepresentation.pullback_G, prepresentation.snd],
simp,
end }
lemma prepresentation.pullback_π_g_surjective {X B : Profinite}
(f : X ⟶ B) (hf : function.surjective f)
(P : B.prepresentation) :
function.surjective (P.pullback_π f hf).g :=
begin
intros x,
obtain ⟨y,hy⟩ := hf (P.π x),
exact ⟨⟨⟨y,x⟩,hy⟩,rfl⟩,
end
lemma prepresentation.pullback_π_r_surjective {X B : Profinite}
(f : X ⟶ B) (hf : function.surjective f)
(P : B.prepresentation) :
function.surjective (P.pullback_π f hf).r :=
begin
intros x,
obtain ⟨y,hy⟩ := hf (P.base x),
exact ⟨⟨⟨y,x⟩,hy⟩,rfl⟩
end
end Profinite
-/
|
Several people, telling amazing stories to those who would listen, passed themselves off as the "sole survivor" in the years following the slide. The most common such tale is that of an infant girl said to have been the only survivor of the slide. Her real name unknown, the girl was called "Frankie Slide". Several stories were told of her miraculous escape: she was found in a bale of hay, lying on rocks, under the collapsed roof of her house or in the arms of her dead mother. The legend was based primarily on the story of Marion Leitch, who was thrown from her home into a pile of hay when the slide enveloped her home. Her sisters also survived; they were found unharmed under a collapsed ceiling joist. Her parents and four brothers died. Influencing the story was the survival of two-year-old Gladys Ennis, who was found outside her home in the mud. The last survivor of the slide, she died in 1995. In total, 23 people in the path of the slide survived, in addition to the 17 miners who escaped from the tunnels under Turtle Mountain. A ballad by Ed McCurdy featuring the story of Frankie Slide was popular in parts of Canada in the 1950s. The slide has formed the basis of other songs, including "How the Mountain Came Down" by Stompin' Tom Connors, and more recently, "Frank, AB" by The Rural Alberta Advantage. The Frank Slide has been the subject of several books, both historical and fictional.
|
(* Title: HOL/Auth/n_german_lemma_on_inv__46.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The n_german Protocol Case Study*}
theory n_german_lemma_on_inv__46 imports n_german_base
begin
section{*All lemmas on causal relation between inv__46 and some rule r*}
lemma n_RecvReqVsinv__46:
assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvReq N i)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvReq N i" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4" apply fastforce done
have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(i=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Ident ''CurCmd'')) (Const Empty)) (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv3) ''Cmd'')) (Const Inv))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Ident ''CurCmd'')) (Const Empty)) (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv3) ''Cmd'')) (Const Inv))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Ident ''CurCmd'')) (Const Empty)) (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv3) ''Cmd'')) (Const Inv))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_SendInvEVsinv__46:
assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendInvE i)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain i where a1:"i\<le>N\<and>r=n_SendInvE i" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4" apply fastforce done
have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(i=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_SendInvSVsinv__46:
assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendInvS i)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain i where a1:"i\<le>N\<and>r=n_SendInvS i" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4" apply fastforce done
have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(i=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Ident ''CurCmd'')) (Const ReqS)) (eqn (IVar (Field (Para (Ident ''Chan3'') p__Inv4) ''Cmd'')) (Const InvAck))) (eqn (IVar (Para (Ident ''InvSet'') p__Inv3)) (Const true))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_SendInvAckVsinv__46:
assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendInvAck i)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain i where a1:"i\<le>N\<and>r=n_SendInvAck i" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4" apply fastforce done
have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(i=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv3) ''Cmd'')) (Const Inv)) (eqn (IVar (Ident ''CurCmd'')) (Const ReqS))) (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv4) ''Cmd'')) (Const Inv))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_RecvInvAckVsinv__46:
assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvInvAck i)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvInvAck i" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4" apply fastforce done
have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(i=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i=p__Inv3)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_SendGntSVsinv__46:
assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendGntS i)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain i where a1:"i\<le>N\<and>r=n_SendGntS i" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4" apply fastforce done
have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(i=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_SendGntEVsinv__46:
assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendGntE N i)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain i where a1:"i\<le>N\<and>r=n_SendGntE N i" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4" apply fastforce done
have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(i=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_RecvGntSVsinv__46:
assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvGntS i)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvGntS i" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4" apply fastforce done
have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(i=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_RecvGntEVsinv__46:
assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvGntE i)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvGntE i" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4" apply fastforce done
have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(i=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_StoreVsinv__46:
assumes a1: "\<exists> i d. i\<le>N\<and>d\<le>N\<and>r=n_Store i d" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_SendReqESVsinv__46:
assumes a1: "\<exists> i. i\<le>N\<and>r=n_SendReqES i" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_SendReqSVsinv__46:
assumes a1: "\<exists> j. j\<le>N\<and>r=n_SendReqS j" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_SendReqEIVsinv__46:
assumes a1: "\<exists> i. i\<le>N\<and>r=n_SendReqEI i" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__46 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
end
|
#pragma once
#include <boost/multi_index/tag.hpp>
namespace mira{
namespace multi_index{
namespace detail{
using boost::multi_index::detail::tag_marker;
template<typename T>
using is_tag = boost::multi_index::detail::is_tag< T >;
} /* namespace multi_index::detail */
template< typename T >
using tag = boost::multi_index::tag< T >;
} /* namespace multi_index */
} /* namespace mira */
|
<a href="https://colab.research.google.com/github/BachiLi/A-Tour-of-Computer-Animation/blob/main/A_Tour_of_Computer_Animation_2_Lagrangian_Mechanics_and_Pendulums.ipynb" target="_parent"></a>
Last time we briefly discussed Newtonian mechanics, where the position of each object is governed by the forces applied on them ($F=ma$). This is a very general model and covers all phenomena in classical mechanics. However, in some scenarios, Newtonian mechanics can be a bit cumbersome for deriving the equations of motion. For example, what if our object is a pendulum affected by gravity (see the figure below)? The gravity force pulls the object downwards, but the pendulum wire applies another force to make the object go along the tangent direction. Deriving the pendulum wire force is a little bit annoying. Can we do better?
The trick is to reformulate Newtonian mechanics. Instead of considering the local impact of forces to an object, we consider the whole *trajectory* of the motion. Consider the following scenario: given an initial position $x_0$ at time $t_0$, and an end position at $x_1$ at time $t_1$. There are infinitely many trajectories an object can take to go from $x_0$ to $x_1$. In reality, can we predict which trajectory the object will take?
The principle for selecting the actual trajectory is called the [stationary action principle](https://en.wikipedia.org/wiki/Stationary_Action_Principle). Feynman explained this concept much better than me in his [lecture](https://www.feynmanlectures.caltech.edu/II_19.html), but I'll still try to explain it here.
Before I start, note that the stationary action principle is asking a different problem compared to the Newtonian mechanics we introduced last time: in Newtonian mechanics, we are given an initial position and an initial velocity, and are asked to solve an [*initial value problem*](https://en.wikipedia.org/wiki/Initial_value_problem). Here we are asked to solve a [*boundary value problem*](https://en.wikipedia.org/wiki/Boundary_value_problem): given the endpoints (without the velocity), can we solve the intermediate motions? However, this point of view makes the stationary action principle a historically controversial law, philosophically: does an object somehow know where it is going in advance, then it decides the path in between?
It turns out, mathematically, as I will show later, these two laws lead to the exact same motions in classical mechanics. So we will just use the stationary action principle as a convenient mathematical tool for deriving motions. We will leave the philosophical debate as an exercise to the readers. : )
The stationary action principle thinks in terms of energies instead of forces. The idea is that in a physical system, there is a magical quantity called energy, a sum of multiple components, that is kept constant. In classical mechanics, the energy is the sum of two components: kinetic energy $K$ and potential energy $U$. The kinetic energy represents how fast the object is moving, and the potential energy represents how much *potential* the object has to become even faster. In classical mechanics, all motions are trade-offs between the kinetic energy and potential energy: when an object becomes faster, it is converting potential energy into kinetic energy, and vice versa. The kinetic energy $K$ is defined as:
$$K = \frac{1}{2} m \dot{x}^2$$
where $m$ is the mass of the object (so far we've assumed the mass to be 1). The potential energy $U$ is defined as the integration of force $F$:
$$\nabla_{x} U = -F$$
For example, in the case of a stone falling to the earth, the potential energy is $U = 9.8 m x$ where $x$ is the height (recall that $F = -9.8 m$).
A magical conjecture is that the sum of kinetic energy and potential energy is a constant over time: $K + U = \text{const}$, if the force $F$ is [*conservative*](https://en.wikipedia.org/wiki/Conservative_force). A conservative force means that the force is only a function of the position, and does not depend on time or velocity.
We can verify this in the stone falling case: we know the motion is $x = -\frac{9.8}{2} t^2 + \dot{x}(0)t + x(0)$. So $K + U = \frac{1}{2} m (-9.8t + \dot{x}(0))^2 + 9.8 m (-\frac{9.8}{2} t^2 + \dot{x}(0)t + x(0)) = \frac{1}{2} m \dot{x}(0)^2 + 9.8 m x(0)$ (notice that all the $t$ and $t^2$ terms cancel out).
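We can also let a computer algebra system confirm that the $t$ terms cancel. Below is a small sketch using sympy (which we will lean on much more heavily later in this chapter); the symbol names are only illustrative:
```python
import sympy

# Plug the falling-stone trajectory into K + U and watch the t terms cancel.
t, m, v0, x0 = sympy.symbols('t m v0 x0')
x = -sympy.Rational(49, 10) * t**2 + v0 * t + x0  # x(t) = -4.9 t^2 + v0 t + x0
K = sympy.Rational(1, 2) * m * x.diff(t)**2       # kinetic energy
U = sympy.Rational(98, 10) * m * x                # potential energy, U = 9.8 m x
print(sympy.expand(K + U))  # m*v0**2/2 + 49*m*x0/5: no t dependence left
```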
This is actually pretty easy to prove for the general case using $F=ma$: we can look at the time derivative of $K + U$:
$$\frac{d}{dt}(K+U) = m\dot{x}\ddot{x} + \frac{dU}{dx} \dot{x} = \dot{x} \left( m \ddot{x} - F \right) = 0$$
(note that we use the conservative force assumption in the first equation: $\frac{dU}{dt} = \frac{dU}{dx}\dot{x}$ only if $U$ does not directly depend on time)
Instead of the summation $K+U$, the stationary action principle looks at the difference of them $L=K-U$. The difference is called the *Lagrangian* (probably because [Lagrange](https://en.wikipedia.org/wiki/Joseph-Louis_Lagrange) was the first person who tried to do this. By the way, the sum $H=K+U$ is called the Hamiltonian and is crucial to the [*Hamiltonian mechanics*](https://en.wikipedia.org/wiki/Hamiltonian_mechanics) -- yet another formulation of classical mechanics. We may or may not get to Hamiltonian mechanics in the future).
We define an *action* $S$ which is an integral over the Lagrangian $L$ over time:
$$S(x) = \int_{t=t_0}^{t_1} L(x, \dot{x}, t) \, dt = \int_{t=t_0}^{t_1} (K - U) \, dt$$
Note that the domain of $S$ is the set of all possible paths $x$. This makes $S$ a *functional*: it is a meta function that takes a function and outputs a real number.
What does the action $S$ mean intuitively? At each point of time, we are measuring the amount of movement $K$ plus the amount of integrated force (recall $\nabla_{x} U = -F$) applied on the object. This is why this is called an action: it is measuring how much the object is moving, plus how much force is applied.
The stationary action principle claims that the motions in the nature will be the local extrema of the action $S(x)$ (most of the time it will be the global minimum -- nature is lazy and will minimize actions).
To find the extrema of the action $S$, we need to first define the derivative of $S$ -- remember that $S$ is a *functional* -- what does it mean to differentiate a function whose input is a function? For the rigorous math the readers should consult [calculus of variation](https://en.wikipedia.org/wiki/Calculus_of_variations). We can roughly define the derivative $\delta S$ as the changes of $S$ when we apply a perturbation $\delta_x$ of $x$:
$$\delta S(x) = \frac{S(x + \delta_x) - S(x)}{|\delta_x|}$$
So the stationary action principle can be mathematically stated as $\delta S(x) = 0$: we want to find trajectories $x$ that satisfy this equation. Let's verify this using our stone falling to earth example again. We have $L(x, \dot{x}, t) = \frac{1}{2}m\dot{x}^2 - 9.8 m x$. We want to figure out the motion between time $t_0 = 0, t_1 = 1$, and we know $x(0)$ and $x(1)$. To solve for the motion, let's *guess* that it's going to be a quadratic function over time: $x(t) = At^2 + Bt + C$. Our goal is to solve for the parameters $A, B, $ and $C$. Our boundary constraints tell us $x(0) = C$ and $x(1) = A + B + x(0)$. The action integral is
$$S(A, B) = \int_{t=0}^{1} \left( \frac{1}{2} m (2At+B)^2 - 9.8m(At^2+Bt+x(0)) \right) dt$$
With a little bit of algebra we can actually solve the action integral in closed form:
$$S(A, B) = m \left( \frac{2}{3}A^2 + AB + \frac{1}{2}B^2 - 9.8 \left( \frac{A}{3} + \frac{B}{2} + x(0) \right) \right)$$
To solve for the extrema of $S(A, B)$ under our constraint $A+B = x(1) - x(0)$, we use a trick called the [Lagrange multiplier](https://en.wikipedia.org/wiki/Lagrange_multiplier) (yes it's the same Lagrange): we solve for the extrema of $G(A, B, \lambda) = S(A, B) + \lambda (A+B+x(0)-x(1))$, where $\lambda$ is a free variable. Setting the derivative of $G$ to zero we get three equations with three unknowns ($A, B, \lambda$). Solving the system we get $A = -\frac{9.8}{2}$ and $B=x(1)-x(0)+\frac{9.8}{2}$.
Compare the motion equation $x(t) = -\frac{9.8}{2}t^2 + (x(1)-x(0)+\frac{9.8}{2})t + x(0)$ to the equation from our last chapter $x(t) = -\frac{9.8}{2}t^2 + \dot{x}(0)t + x(0)$: we know that if we set our initial velocity $\dot{x}(0)$ to $x(1)-x(0)+\frac{9.8}{2}$, we will arrive at $x(1)$ at time $1$.
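Before moving on, here is a small sympy sketch that redoes the Lagrange-multiplier algebra above (the variable names are only illustrative):
```python
import sympy

A, B, lam, t, m, x0, x1 = sympy.symbols('A B lam t m x0 x1')

x = A * t**2 + B * t + x0                   # guessed quadratic trajectory
L = sympy.Rational(1, 2) * m * x.diff(t)**2 - sympy.Rational(98, 10) * m * x
S = sympy.integrate(L, (t, 0, 1))           # action over t in [0, 1]

# Lagrange multiplier for the endpoint constraint x(1) = x1.
G = S + lam * (A + B + x0 - x1)
sol = sympy.solve([G.diff(A), G.diff(B), G.diff(lam)], [A, B, lam], dict=True)[0]
print(sol[A], sol[B])  # A = -49/10 and B = x1 - x0 + 49/10, matching the text
```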
**Exercise**: we can state a different version of stationary action principle assuming we only know $x(0)$ and $\dot{x}(0)$, instead of $x(0)$ and $x(1)$. Does this different version predict the same motion?
In practice, we don't always want to assume a parametric form of our motion trajectory. What can we do in that case?
It turns out that the least action principle is a very strong constraint: even though we are imposing a constraint on an integral over some time, it ends up constraining each point in time (intuitively, since $x$ is an infinite-dimensional function, taking its derivative and setting it to zero gives us infinitely many constraints).
The constraint is called the [Euler-Lagrange equation](https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation) (Euler proposed a first version of this, and Lagrange refined it to the current form), and can be written as follows:
$$\frac{\partial L}{\partial x} = \frac{d}{dt} \frac{\partial L}{\partial \dot{x}}$$
For example, if $L(x, \dot{x}, t) = \frac{1}{2}m\dot{x}^2 - 9.8 m x$, then the corresponding Euler-Langrange equation is
$$
\begin{split}
\frac{\partial{L}}{\partial x} &= -9.8m \\
\frac{\partial L}{\partial \dot{x}} &= m\dot{x} \\
\frac{d}{dt} \frac{\partial L}{\partial \dot{x}} &= m\ddot{x} \\
\frac{\partial L}{\partial x} &= \frac{d}{dt} \frac{\partial L}{\partial \dot{x}} \rightarrow \ddot{x} = -9.8
\end{split}
$$
We recover the equation of motion from Newtonian mechanics we had from our last chapter! In fact, the Euler-Lagrange equation is usually just a different way of writing $F=ma$. If we look at it closely, since the kinetic energy is always $\frac{1}{2}m\dot{x}^2$, $\frac{\partial L}{\partial x} = -\frac{\partial U}{\partial x} = F$, and if the potential energy $U$ does not depend on the velocity $\dot{x}$, $\frac{d}{dt} \frac{\partial L}{\partial \dot{x}} = m \ddot{x}$.
The derivation of the Euler-Lagrange equation from the stationary action principle can be found online: I recommend the proof from [Preetum Nakkiran](https://preetum.nakkiran.org/lagrange.html) which is geometric and very intuitive. [Wikipedia](https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation#Statement) also has the standard proof using integration by parts. I also like the derivation from [Jason Gross](https://web.mit.edu/~jgross/Public/least_action/Principle%20of%20Least%20Action%20with%20Derivation.html).
The Euler-Lagrange equation connects the stationary action principle and Newtonian mechanics. Remember we set up the problem formulation as a *boundary value problem*. However, Newtonian mechanics solves an *initial value problem*. Interestingly, since the stationary action principle puts a constraint on every single time instant, **no matter where the particle is going eventually, it needs to follow the stationary action principle and the Euler-Lagrange equation at all times**.
So, what have we gained from all this crazy math? Let's go back to the example at the beginning, where we have a pendulum. The trick is to use something called the *generalized coordinates*. Instead of specifying the Cartesian position $x$ of the pendulum in 2D, we use a coordinate $q$ based on the angle of the pendulum. We have the relation $x(q) = (r \sin(q), -r \cos(q))$ where $r$ is the length of the wire (with $y$ measured upwards from the pivot, matching the visualization code below).
Recall that, earlier, we parametrized the trajectory of a stone falling to earth using a quadratic $At^2+Bt+C$, and the stationary action principle helped us solve for the quadratic parameters. We are doing the same thing here: we "parametrize" the trajectory $x$ using the angle $q$, and solve for $q$ instead. Our Lagrangian is now:
$$L(q, \dot{q}, t) = \frac{1}{2} m \dot{x}(q)^2 - 9.8 m (r + y(q)) = \frac{1}{2} m r^2 \dot{q}^2 - 9.8m r (1 - \cos(q))$$ where $y(q) = -r\cos(q)$ is the height of the mass relative to the pivot, so $r + y(q) = r(1 - \cos(q))$ is its height above the lowest point.
(since $x$ is a vector, $\dot{x}(q) = (r \cos(q) \dot{q}, \; r \sin(q) \dot{q})$, and $\dot{x}(q)^2 = \dot{x}(q) \cdot \dot{x}(q) = r^2 \dot{q}^2$)
The cool thing is that the Euler-Lagrange equation still holds:
$$
\begin{split}
\frac{\partial L}{\partial q} &= \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} \\
\frac{\partial L}{\partial q} &= - 9.8mr \sin(q) \\
\frac{\partial L}{\partial \dot{q}} &= m r^2 \dot{q} \\
\frac{d}{dt} \frac{\partial L}{\partial \dot{q}} &= mr^2 \ddot{q} \\
- 9.8mr \sin(q) &= mr^2 \ddot{q} \\
&\downarrow \\
\ddot{q} &= \frac{-9.8}{r} \sin(q) \\
\end{split}
$$
We get a simple second-order ODE! In fact you can derive the same thing using Newtonian mechanics (see [here](https://www.acs.psu.edu/drussell/Demos/Pendulum/Pendula.html)), however it requires you to factor out the forces and constraints -- something that demands a bit of cleverness. Lagrangian mechanics allows us to do things in a very mechanical, automatic way: it's all basic algebra.
Interestingly, despite its simplicity, this ODE does not have a known closed form. Let's try to visualize it using forward Euler method (finally some code!!):
```python
import math
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
fps = 20
q0 = math.pi / 4
v0 = 0.0
r = 20.0
ts = 0.01
def pendulum_forward_euler(t, q, v):
    ct = 0
    while True:
        nq = q + ts * v
        nv = v - ts * (9.8 / r) * math.sin(q)
        q = nq
        v = nv
        ct += ts
        if ct >= t:
            break
    return ct, q, v
def visualize(ode_solver):
    fig = plt.figure(figsize=(8, 4))
    ax = plt.axes(xlim=(-25, 25), ylim=(-25, 0))
    line, = ax.plot([], [], '-', lw=2)
    point, = ax.plot([], [], 'g.', ms=20)
    t = 0
    q = q0
    v = v0
    def animate(i):
        nonlocal t, q, v
        dt, q, v = ode_solver(1.0/fps, q, v)
        x = r * math.sin(q)
        y = -r * math.cos(q)
        t += dt
        line.set_data([0, x], [0, y])
        point.set_data([x], [y])
        return line, point  # return both artists so blitting redraws the wire too
    plt.close()
    return animation.FuncAnimation(fig, animate, frames=400, interval=fps, blit=True)
anim = visualize(pendulum_forward_euler)
HTML(anim.to_html5_video())
```
**Exercise**: can you modify the pendulum to handle the collision with the ceiling?
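Before leaving the single pendulum, it is also worth checking numerically how well forward Euler respects the conservation of $K + U$. The sketch below (the constants mirror the cell above) plots the total energy per unit mass over time; the exact dynamics would keep it flat, but forward Euler slowly injects energy, a discretization issue we will return to in the next chapter.
```python
import math
import matplotlib.pyplot as plt

# Constants mirror the pendulum cell above.
q, v = math.pi / 4, 0.0   # initial angle and angular velocity
r, ts = 20.0, 0.01        # wire length and time step

# Total energy per unit mass: K + U = 0.5*(r*qdot)^2 + 9.8*r*(1 - cos(q)).
energies = []
for step in range(20000):
    energies.append(0.5 * (r * v) ** 2 + 9.8 * r * (1.0 - math.cos(q)))
    q, v = q + ts * v, v - ts * (9.8 / r) * math.sin(q)

plt.plot([ts * i for i in range(len(energies))], energies)
plt.xlabel('time')
plt.ylabel('energy per unit mass')
plt.show()
```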
We'll close this chapter by deriving the Lagrangian mechanics for a double pendulum. The derivation is pretty messy, so we will use sympy to help us deriving the equations.
The first mass on the pendulum is parametrized by $x_0 = (r_0 \sin(q_0), -r_0\cos(q_0))$, the second mass is parametrized by $x_1 = x_0 + (r_1 \sin(q_1), -r_1 \cos(q_1))$.
Our Lagrangian is:
$$L(q,\dot{q},t) = \frac{1}{2} m_0 r_0^2 \dot{q}_0^2 + \frac{1}{2} m_1 \left( r_0^2 \dot{q}_0^2 + 2 r_0 r_1 \dot{q}_0 \dot{q}_1 \cos(q_0 - q_1) + r_1^2 \dot{q}_1^2 \right) - g \left(m_0 r_0 (- \cos(q_0)) + m_1 \left( r_0 (- \cos(q_0)) + r_1 (- \cos(q_1)) \right) \right)$$
(we finally replace the 9.8 magic constant with g)
Let's define it in sympy:
```python
import sympy
sympy.init_printing() # printing latex
t, m0, r0, m1, r1, g = sympy.symbols('t, m0, r0, m1, r1, g')
q0 = sympy.Function('q0')(t)
q1 = sympy.Function('q1')(t)
v0 = q0.diff(t) # \dot{q}_0
v1 = q1.diff(t) # \dot{q}_1
half = sympy.Rational(1, 2)
# Lagrangian
K = half * m0 * r0 * r0 * v0 * v0 + \
half * m1 * (r0 * r0 * v0 * v0 + 2 * r0 * r1 * v0 * v1 * sympy.cos(q0 - q1) + r1 * r1 * v1 * v1)
U = g * (m0 * r0 * (- sympy.cos(q0)) + m1 * (r0 * (- sympy.cos(q0)) + r1 * (- sympy.cos(q1))))
L = sympy.simplify(K - U)
L
```
Next we use sympy to derive the Euler-Lagrange equation (sympy has a function [euler_equations](https://docs.sympy.org/latest/modules/calculus/index.html) for this purpose, but we'll do it manually just for this time):
```python
dLdq0 = L.diff(q0).simplify()
dLdq1 = L.diff(q1).simplify()
dLdv0 = L.diff(v0).simplify()
dLdv1 = L.diff(v1).simplify()
dLdv0dt = dLdv0.diff(t).simplify()
dLdv1dt = dLdv1.diff(t).simplify()
eq0 = sympy.simplify(dLdq0 - dLdv0dt) # this should be 0
eq1 = sympy.simplify(dLdq1 - dLdv1dt) # this should be 0
```
```python
eq0
```
sympy's equation solver didn't do what I wanted. So we'll have to manually arrange the equations.
```python
# simplify eq0 manually
eq0 = -eq0 / r0
eq0.args
```
```python
eq0 = sympy.simplify(eq0.args[0] + eq0.args[1]) + sympy.simplify(eq0.args[2] + eq0.args[3]) + eq0.args[4] + eq0.args[5]
eq0.args
```
```python
eq1
```
```python
# simplify eq1 manually
eq1 = -eq1 / (m1 * r1)
eq1.args
```
```python
# eliminate a1
a0 = (eq0 - eq1 * m1 * sympy.cos(q0 - q1)).expand()
a0.args
```
```python
# manually simplify a0
a0 = sympy.simplify(a0.args[0] + a0.args[1]) + sympy.simplify(a0.args[2] + a0.args[3] + a0.args[6]) + a0.args[4] + a0.args[5] + a0.args[7]
a0.args
```
```python
# r0 (m0 - m1 cos^2(q0 - q1) + m1) = r0 (m0 + m1 sin^2(q0 - q1))
s = sympy.sin(q0 - q1)
# minus sign for moving d^2 dq0/dt^2 to the other side of the equation
a0solve = -(a0.args[0] + sum(a0.args[2:])) / (r0 * (m0 + m1 * s * s))
a0solve
```
```python
# eliminate a0
a1 = (eq0 * sympy.cos(q0 - q1) - eq1 * (m0 + m1)).expand()
a1.args
```
```python
# manually simplify a1
a1 = sympy.simplify(a1.args[0] + a1.args[1]) + sympy.simplify(a1.args[2] + a1.args[3] + a1.args[8]) + sympy.simplify(a1.args[4] + a1.args[5]) + sympy.simplify(a1.args[6] + a1.args[7]) + a1.args[9]
a1.args
```
```python
# r1 (-m0 + m1 cos^2(q0 - q1) - m1) = -r1 (m0 + m1 sin^2(q0 - q1))
# a minus sign for the denominator, another minus sign for moving d^2 dq1/dt^2 to the other side of the equation
a1solve = sum(a1.args[1:]) / (r1 * (m0 + m1 * s * s))
a1solve
```
```python
sympy.Eq(q0.diff(t).diff(t), a0solve)
```
```python
sympy.Eq(q1.diff(t).diff(t), a1solve)
```
Above is our velocity update formula. Let's try to implement this!
```python
import math
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
fps = 20
g = 9.8
q0_init = math.pi
v0_init = 0.0
q1_init = -math.pi / 4
v1_init = 0.0
r0 = 20.0
r1 = 20.0
m0 = 1
m1 = 1
ts = 0.001
def double_pendulum_forward_euler(t, q0, v0, q1, v1):
    ct = 0
    while True:
        nq0 = q0 + ts * v0
        nq1 = q1 + ts * v1
        s0 = math.sin(q0)
        s1 = math.sin(q1)
        sd = math.sin(q0 - q1)
        cd = math.cos(q0 - q1)
        nv0 = v0 + ts * (g * m1 * s1 * cd - g * (m0 + m1) * s0 - m1 * r0 * sd * cd * v0 * v0 - m1 * r1 * sd * v1 * v1) / (r0 * (m0 + m1 * sd * sd))
        nv1 = v1 + ts * (g * (m0 + m1) * s0 * cd - g * (m0 + m1) * s1 + m1 * r1 * sd * cd * v1 * v1 + r0 * (m0 + m1) * sd * v0 * v0) / (r1 * (m0 + m1 * sd * sd))
        q0 = nq0
        q1 = nq1
        v0 = nv0
        v1 = nv1
        ct += ts
        if ct >= t:
            break
    return ct, q0, v0, q1, v1
def visualize(ode_solver):
    fig = plt.figure(figsize=(8, 8))
    ax = plt.axes(xlim=(-55, 55), ylim=(-55, 55))
    line, = ax.plot([], [], '-', lw=2)
    point, = ax.plot([], [], 'g.', ms=20)
    t = 0
    q0 = q0_init
    v0 = v0_init
    q1 = q1_init
    v1 = v1_init
    def animate(i):
        nonlocal t, q0, v0, q1, v1
        dt, q0, v0, q1, v1 = ode_solver(1.0/fps, q0, v0, q1, v1)
        x0 = r0 * math.sin(q0)
        y0 = -r0 * math.cos(q0)
        x1 = x0 + r1 * math.sin(q1)
        y1 = y0 - r1 * math.cos(q1)
        t += dt
        line.set_data([0, x0, x1], [0, y0, y1])
        point.set_data([x0, x1], [y0, y1])
        return line, point  # return both artists so blitting redraws the wires too
    plt.close()
    return animation.FuncAnimation(fig, animate, frames=800, interval=fps, blit=True)
anim = visualize(double_pendulum_forward_euler)
HTML(anim.to_html5_video())
```
**Exercise**: can you implement the collision between the masses and the wires?
This actually already allows us to do some fun character animations! Imagine the double pendulum above being an arm of a character. You can set up an articulated character and derive its dynamics yourself using the Lagrangian mechanics.
In summary, we have learned a very powerful way, called Lagrangian mechanics, to derive equations of motion. The usual process to set up a computer animation is to specify our animation in generalized coordinates $q$ and their mapping to the Cartesian coordinates $x$. We then derive our Lagrangian $L$: the kinetic energy $K$ is always $\frac{1}{2} m v^2$, and the potential energy $U$ depends on the interactions you want to have between the objects and user inputs. We then solve the stationary action principle or the Euler-Lagrange equation to compute the animation at a given time. This process is very mechanical, mostly involving just doing some algebra, and importantly, it enables us to design physically-based animation with great flexibility!
In the next chapter, we will analyze the discretization we've been doing to the differential equation: Are there better ways to do it than the forward Euler method? (yes) What are the different desirable criteria for a discretization?
|
• Teaching Openings in Taiwan: The Department of Education in every city or county will post the jobs on their website, and graduates and teachers can find the information through these websites.
• Industrial Technology Education Association of Taiwan, R.O.C.
This department was originally founded in February 1953, focused on the preparation of secondary school Industrial Arts teachers as well as conducting research related to Industrial Arts Education. In August 1994 its name was changed to the Department of Industrial Technology Education, aimed at preparing secondary school technology teachers as well as professionals for human resource development in business and industry. In August 2009, the name was formally changed to the Department of Technology Application and Human Resource Development, fully devoted to preparing professionals for human resource development in technology and industries. At present, the department has undergraduate, master's, and PhD programs. |
{-# OPTIONS --rewriting #-}
open import Agda.Primitive
open import Agda.Builtin.Equality
open import Agda.Builtin.Equality.Rewrite
variable
  ℓ ℓ' : Level
  A : Set ℓ
  B : Set ℓ'
  x y z : A
sym : x ≡ y → y ≡ x
sym refl = refl
_∙_ : x ≡ y → y ≡ z → x ≡ z
refl ∙ refl = refl
leftId : (p : x ≡ y) → refl ∙ p ≡ p
leftId refl = refl
rightId : (p : x ≡ y) → p ∙ refl ≡ p
rightId refl = refl
transport : (P : A → Set ℓ') → x ≡ y → P x → P y
transport _ refl px = px
ap : (f : A → B) → x ≡ y → f x ≡ f y
ap _ refl = refl
apd : (P : A → Set ℓ') → (f : (a : A) → P a) → (p : x ≡ y) → transport P p (f x) ≡ f y
apd _ _ refl = refl
postulate
  Size : Set
  base : Size
  next : Size → Size
  inf : Size
  lim : next inf ≡ inf
postulate
  elim : (P : Size → Set ℓ) →
         (pb : P base) →
         (pn : (s : Size) → P s → P (next s)) →
         (pi : P inf) →
         -- (elim P pb pn pi pl (next inf)) = (elim P pb pn pi pl inf)
         (pl : transport P lim (pn inf pi) ≡ pi) →
         (s : Size) → P s
  elim-base : (P : Size → Set ℓ) →
              (pb : P base) →
              (pn : (s : Size) → P s → P (next s)) →
              (pi : P inf) →
              (pl : transport P lim (pn inf pi) ≡ pi) →
              elim P pb pn pi pl base ≡ pb
  elim-next : (P : Size → Set ℓ) →
              (pb : P base) →
              (pn : (s : Size) → P s → P (next s)) →
              (pi : P inf) →
              (pl : transport P lim (pn inf pi) ≡ pi) →
              (s : Size) →
              elim P pb pn pi pl (next s) ≡ pn s (elim P pb pn pi pl s)
  elim-inf : (P : Size → Set ℓ) →
             (pb : P base) →
             (pn : (s : Size) → P s → P (next s)) →
             (pi : P inf) →
             (pl : transport P lim (pn inf pi) ≡ pi) →
             elim P pb pn pi pl inf ≡ pi
{-# REWRITE elim-base elim-next elim-inf #-}
postulate
  elim-lim : (P : Size → Set ℓ) →
             (pb : P base) →
             (pn : (s : Size) → P s → P (next s)) →
             (pi : P inf) →
             (pl : transport P lim (pn inf pi) ≡ pi) →
             apd P (elim P pb pn pi pl) lim ≡ pl
{-# REWRITE elim-lim #-}
-- postulate lim-refl : transport (λ s → s ≡ inf) lim lim ≡ refl
absurd : ∀ {A s} → base ≡ next s → A
absurd {A} p =
  let discr : Size → Set
      discr = elim (λ _ → Set) Size (λ _ _ → A) A (apd (λ _ → Set) (λ _ → A) lim)
  in transport discr p base
inj : ∀ {s t} → next s ≡ next t → s ≡ t
inj {s} {t} p =
  let pred : Size → Size
      pred = elim (λ _ → Size) base (λ s _ → s) inf (apd (λ _ → Size) (λ s → inf) lim)
  in ap pred p
data Nat : Size → Set where
  zero : (s : Size) → Nat (next s)
  succ : (s : Size) → Nat s → Nat (next s)
prev : (P : Size → Set ℓ') → P (next inf) → P inf
prev P pni = transport P lim pni
double' : ∀ s → Nat s → Nat inf
double' _ (zero _) = prev Nat (zero inf)
double' _ (succ s n) = prev Nat (succ inf (prev Nat (succ inf (double' s n))))
-- pn inf pi ≡ pi
-- zero: zero = ?pi
-- next: succ _ (succ _ ?pi) = ?pi
double : ∀ s → Nat s → Nat inf
double = elim (λ s → Nat s → Nat inf) {! !} {! !} {! !} {! !}
|
%********************************************************************
% Appendix
%*******************************************************
% If problems with the headers: get headings in appendix etc. right
%\markboth{\spacedlowsmallcaps{Appendix}}{\spacedlowsmallcaps{Appendix}}
\begin{comment}
\chapter{Appendix}
\section{Scheme Ratings}
\subsection{Context-Aware Authentication}
\todo[inline]{Revisit this section}
\citet{bardram2003context} puts forward a prototype and authentication protocol for secure and usable authentication for physicians in hospitals. The system comprises a personal smart-card that can be inserted into the hospital computers to access them, and a context-aware subsystem that at minimum is location-aware. If practitioners try to access a computer using their keycard, and their location matches the workstation's, they are authenticated without further interaction. If the locations differ, they are asked to type their password.
When a new keycard is initialized it generates a public/private keypair and sends the public key to the central server. The keycard uses a one-way authentication protocol and the user's password is known only to the keycard.
We grant the system \textit{Quasi-Memorywise-Effortless} as the user is still required to remember the keycard password.
It is \textit{Scalable-for-Users} as the card could easily submit the same public-key to many verifiers.
It is not \textit{Nothing-to-Carry}, although, in the hospital setup where it is applied, the staff is required to carry their identity card, and it could qualify for \textit{Quasi-Nothing-to-Carry} in some scenarios.
It is \textit{Easy-to-Learn}, \textit{Efficient-to-Use} and \textit{Infrequent-Errors} (assuming that the context-aware service works most of the time).
It is not \textit{Easy-Recovery-from-Loss} as a new card needs to be issued, and a new public-private key-pair needs to be created and submitted to verifiers.
As it is a prototype, Deployability is less interesting; however, we grant it \textit{Accessible} and \textit{Non-Proprietary}.
The system is not \textit{Negligible-Cost-per-User} as the setup is very infrastructure-heavy.
The system is not built to access web services and is therefore neither \textit{Browser-Compatible} nor \textit{Server-Compatible}.
However, it could easily be used for web services by transmitting the user's public key to every verifier, or even generating a new key-pair for every verifier.
It would however still not be compatible.
On the security aspects we deem it to be \textit{Quasi-Resilient-to-Physical-Observation} as the user only rarely types the password.
However, if the keycard is stolen and the password is known, the adversary has full access, and we therefore grant it \textit{Quasi-Resilient-to-Theft}.
It is not \textit{Resilient-to-Phishing} as man-in-the-middle attacks are possible.
It is not \textit{Resilient-to-Throttled-Guessing} nor \textit{Resilient-to-Unthrottled-Guessing}, however, the adversary would have to steal the keycard to start guessing.
Whether it is \textit{Unlinkable} depends on whether it uses separate key-pairs or not. We assume the latter and grant it \textit{Unlinkable}.
It is \textit{Quasi-Continuous} as there is no continuous authentication per se, but the user is logged out as soon as the keycard is removed from the bay.
\subsection{Wearable Authentication}
\todo[inline]{Revisit this section}
\citet{ojala2008wearable} presents a prototype for transparent and continuous authentication with work stations. The system is comprised of three components: a ZigBee-enabled wearable wrist device that monitors the wearer's vitals, a ZigBee receiver and the workstation. When the user puts on the watch it starts to monitor his vitals. The user can now use a fingerprint reader to authenticate with the system. The user remains authenticated for as long as he is wearing the watch. If he takes off the watch, or his vitals stop, then he will be logged out after 10 seconds. While the user is authenticated he can approach any work station (that has a receiver) and without further interaction start using the machine. As soon as he leaves the machine he is logged out.
We grant the system \textit{Memorywise-Effortless}, \textit{Scalable-for-Users}, \textit{Easy-to-Learn}, \textit{Efficient-to-Use} and \textit{Infrequent-Errors}. We deem it \textit{Quasi-Nothing-to-Carry}, as a watch is something that most users always carry, just like a smartphone. It is not \textit{Easy-Recovery-from-Loss}, as losing the watch means having to get a new one, which must then be authorized.
As the system, much like the two previous ones, is a prototype not built for web services, we grant it the same deployability scores.
On the security side it is \textit{Resilient-to-Physical-Observation}, \textit{Resilient-to-Targeted-Impersonation}, \textit{Resilient-to-Throttled-Guessing}, \textit{Resilient-to-Un\-throttled-Guessing}, \textit{Resilient-to-Theft} and \textit{Continuous}. It is \textit{Quasi-Requiring-Explicit-Consent} as the user only gives explicit consent once when using the fingerprint reader.
Other security aspects are not known due to the simplicity of the prototype and are therefore left out of consideration, although we deem them feasible to include.
\end{comment}
\chapter{Android Cryptography Implementation}
\lstinputlisting[language=Java, basicstyle=\scriptsize\ttfamily, numberstyle=\scriptsize\ttfamily, label=lst:distEl]{code/DistributedElgamal.java}
\chapter{Server Cryptography Implementation}
\lstinputlisting[language=Java, basicstyle=\scriptsize\ttfamily, numberstyle=\scriptsize\ttfamily, label=lst:chalService]{code/ChallengeService.java}
\chapter{Server JWT Implementation}
\lstinputlisting[language=Java, basicstyle=\scriptsize\ttfamily, numberstyle=\scriptsize\ttfamily, label=lst:tokenService]{code/TokenService.java}
\chapter{Tamarin Model}\label{ch:tamarin}
\lstinputlisting[language=spthy, basicstyle=\scriptsize\ttfamily, numberstyle=\scriptsize\ttfamily]{code/cta.spthy}
\chapter{Strong Internal Observation Attack}\label{ch:attack}
\begin{figure}[h]
\centering
\begin{wide}
\includegraphics[width=\linewidth]{gfx/attack}
\end{wide}
\caption[Tamarin trace of SIO attack]{The Tamarin trace of the Strong Internal Observation Attack}
\label{fig:my_label}
\end{figure} |
Parameter object : Type.
Definition PN : Type := object.
Definition VP : Type := object -> Prop.
Definition NP : Type := VP -> Prop . (* NP := (PN -> Prop) -> Prop *)
Definition UsePN : PN -> NP := fun pn vp => vp pn.
Definition Cl : Type := Prop .
Definition PredVP : NP -> VP -> Cl := fun np vp => np vp.
Definition V : Type := object -> Prop.
Definition UseV : V -> VP := fun v => v.
Definition S : Type := Prop .
Definition Pol : Type := Prop -> Prop .
Definition Pos : Pol := fun p => p.
Definition Neg : Pol := fun p => not p.
Definition UseCl : Pol -> Cl -> S :=
fun pol c => pol c.
Parameter AP : Type.
Parameter A : Type.
Parameter CN : Type.
Parameter Det : Type.
Parameter N : Type.
Parameter V2 : Type.
Parameter AdA : Type.
Parameter Conj : Type.
Parameter ComplV2 : V2 -> NP -> VP.
Parameter DetCN : Det -> CN -> NP.
Parameter ModCN : AP -> CN -> CN.
Parameter CompAP : AP -> VP.
Parameter AdAP : AdA -> AP -> AP.
Parameter ConjS : Conj -> S -> S -> S.
Parameter ConjNP : Conj -> NP -> NP -> NP.
Parameter UseN : N -> CN.
Parameter UseA : A -> AP.
Parameter some_Det : Det.
Parameter every_Det : Det.
Parameter we_NP : NP.
Parameter you_NP : NP.
Parameter very_AdA : AdA.
Parameter and_Conj : Conj.
Parameter or_Conj : Conj.
Parameter man_N : N.
Parameter woman_N : N .
Parameter house_N : N.
Parameter tree_N : N .
Parameter big_A : A .
Parameter small_A : A .
Parameter green_A : A .
Parameter walk_V : V .
Parameter arrive_V : V .
Parameter love_V2 : V2 .
Parameter please_V2 : V2 .
Parameter john_PN : PN .
Parameter mary_PN : PN.
Definition everyoneNP : NP := fun vp => forall x, vp x.
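(* A small sanity check of our own (not from the source file): with the
   definitions above, the positive clause 'John walks' computes to the
   proposition [walk_V john_PN]. *)
Theorem john_walks : UseCl Pos (PredVP (UsePN john_PN) (UseV walk_V)) -> walk_V john_PN. cbv. intros H. exact H. Qed.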
Theorem everyone: everyoneNP walk_V-> walk_V john_PN. cbv. intros. apply H. Qed. |
module CombinatoryLogic.Syntax where
open import Data.String using (String; _++_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)
-- Kapitel 1, Abschnitt C, §4 (Symbolische Festsetzungen), Def. 1
infixl 6 _∙_
data Combinator : Set where
-- Kapitel 1, Abschnitt C, §3 (Die formalen Grundbegriffe)
B C W K Q Π P ∧ : Combinator
-- Kapitel 1, Abschnitt C, §2, c (Anwendung)
_∙_ : (X Y : Combinator) → Combinator
-- NOTE: for the (intensional) equality defined by Kapitel 1, Abschnitt C, §4
-- (Symbolische Festsetzungen), Festsetzung 2, we use the propositional equality
-- _≡_.
toString : Combinator → String
toString (X ∙ Y@(_ ∙ _)) = toString X ++ "(" ++ toString Y ++ ")"
toString (X ∙ Y) = toString X ++ toString Y
toString B = "B"
toString C = "C"
toString W = "W"
toString K = "K"
toString Q = "Q"
toString Π = "Π"
toString P = "P"
toString ∧ = "∧"
_ : toString (K ∙ Q ∙ (W ∙ C)) ≡ "KQ(WC)"
_ = refl
_ : toString (K ∙ ∧ ∙ (Q ∙ Π) ∙ (W ∙ C)) ≡ "K∧(QΠ)(WC)"
_ = refl
_ : toString (K ∙ ∧ ∙ ((Q ∙ Π) ∙ (W ∙ C))) ≡ "K∧(QΠ(WC))"
_ = refl
_ : toString (K ∙ K ∙ K) ≡ "KKK"
_ = refl
infix 5 _==_
-- Kapitel 1, Abschnitt C, §4 (Symbolische Festsetzungen), Def. 2
_==_ : (X Y : Combinator) → Combinator
X == Y = Q ∙ X ∙ Y
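-- A small sanity check of our own (not in the source file): with the
-- conventions above, K == W unfolds to Q ∙ K ∙ W and prints without parentheses.
_ : toString (K == W) ≡ "QKW"
_ = refl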
-- Kapitel 1, Abschnitt C, §4 (Symbolische Festsetzungen), Def. 3
I : Combinator
I = W ∙ K
|
@testset "RepeatPartitionIterator" begin
@testset "RepeatPartitionIterator basic" begin
bitr = RepeatPartitionIterator(1:20, 5)
for (itr, exp) in zip(bitr, [1:5, 6:10, 11:15, 16:20])
@test collect(itr) == exp
@test collect(itr) == exp
@test collect(itr) == exp
end
end
@testset "RepeatPartitionIterator partition" begin
bitr = RepeatPartitionIterator(Iterators.partition(1:20, 5), 2)
for (itr, exp) in zip(bitr, [[1:5, 6:10], [11:15, 16:20]])
@test collect(itr) == exp
@test collect(itr) == exp
@test collect(itr) == exp
end
end
@testset "RepeatPartitionIterator repeated partition" begin
import IterTools: ncycle
bitr = RepeatPartitionIterator(ncycle(Iterators.partition(1:20, 5), 3), 2)
cnt = 0;
for (itr, exp) in zip(bitr, [[1:5, 6:10], [11:15, 16:20],[1:5, 6:10], [11:15, 16:20],[1:5, 6:10], [11:15, 16:20]])
@test collect(itr) == exp
@test collect(itr) == exp
@test collect(itr) == exp
cnt += 1
end
end
end
@testset "SeedIterator" begin
rng = MersenneTwister(123)
testitr = SeedIterator(Iterators.map(x -> x * rand(rng, Int), ones(10)); rng=rng, seed=12)
@test collect(testitr) == collect(testitr)
rng = MersenneTwister(1234)
nesteditr = SeedIterator(Iterators.map(x -> x * rand(rng, Int), testitr); rng=rng, seed=1)
@test collect(nesteditr) == collect(nesteditr)
end
@testset "GpuIterator" begin
dorg = 1:100;
itr = GpuIterator(zip([view(dorg,1:10)], [dorg]))
d1,d2 = first(itr) |> cpu
@test !isa(d1, SubArray)
@test d1 == dorg[1:10]
@test d2 == dorg
end
@testset "BatchIterator" begin
@testset "Single array" begin
itr = BatchIterator(collect(reshape(1:2*3*4*5,2,3,4,5)), 2)
for (i, batch) in enumerate(itr)
@test size(batch) == (2,3,4,i==3 ? 1 : 2)
end
@test "biter: $itr" == "biter: BatchIterator(size=(2, 3, 4, 5), batchsize=2, shuffle=false)"
end
@testset "Tuple data shuffle=$shuffle" for shuffle in (true, false)
itr = BatchIterator((collect([1:10 21:30]'), 110:10:200), 3; shuffle)
for (i, (x, y)) in enumerate(itr)
expsize = i == 4 ? 1 : 3
@test size(x) == (2, expsize)
@test size(y) == (expsize,)
end
end
@testset "BatchIterator singleton" begin
itr = BatchIterator(Singleton([1,3,5,7,9,11]), 2)
for (i, b) in enumerate(itr)
@test b == [1,3] .+ 4(i-1)
end
end
@testset "BatchIterator shuffle basic" begin
@test reduce(vcat, BatchIterator(1:20, 3; shuffle=true)) |> sort == 1:20
itr = BatchIterator(ones(2,3,4), 4; shuffle=MersenneTwister(2))
@test "siter: $itr" == "siter: BatchIterator(size=(2, 3, 4), batchsize=4, shuffle=true)"
end
@testset "BatchIterator shuffle ndims $(length(dims))" for dims in ((5), (3,4), (2,3,4), (2,3,4,5), (2,3,4,5,6), (2,3,4,5,6,7))
sitr = BatchIterator(collect(reshape(1:prod(dims),dims...)), 2;shuffle=MersenneTwister(12))
bitr = BatchIterator(collect(reshape(1:prod(dims),dims...)), 2)
sall, nall = Set{Int}(), Set{Int}()
for (sb, nb) in zip(sitr, bitr)
@test sb != nb
@test size(sb) == size(nb)
push!(sall, sb...)
push!(nall, nb...)
end
@test sall == nall
end
end
@testset "RepeatPartitionIterator and ShuffleIterator" begin
import IterTools: ncycle
@testset "Single epoch small" begin
ritr = RepeatPartitionIterator(BatchIterator(1:20, 3; shuffle=MersenneTwister(123)), 4)
for itr in ritr
@test collect(itr) == collect(itr)
end
end
@testset "Multi epoch small" begin
sitr = BatchIterator(1:20, 3;shuffle=MersenneTwister(123))
citr = ncycle(sitr, 2)
ritr = RepeatPartitionIterator(SeedIterator(citr; rng=sitr.rng), 4)
for itr in ritr
@test collect(itr) == collect(itr)
end
end
@testset "Multi epoch big" begin
sitr = BatchIterator(1:20, 3;shuffle= MersenneTwister(123))
citr = ncycle(sitr, 4)
ritr = RepeatPartitionIterator(SeedIterator(citr; rng=sitr.rng), 10)
for (i, itr) in enumerate(ritr)
@test collect(itr) == collect(itr)
end
end
end
@testset "StatefulGenerationIter" begin
import NaiveGAflux: itergeneration, StatefulGenerationIter
ritr = RepeatPartitionIterator(BatchIterator(1:20, 3), 4)
sitr = RepeatPartitionIterator(BatchIterator(1:20, 3), 4) |> StatefulGenerationIter
for (i, itr) in enumerate(ritr)
@test collect(itr) == collect(itergeneration(sitr, i))
end
end
@testset "TimedIterator" begin
@testset "No stopping accumulate = $acc" for (acc, exp) in (
(true, 7),
(false, 0)
)
timeoutcnt = 0
titer = TimedIterator(;timelimit=0.1, patience=2, timeoutaction = () -> timeoutcnt += 1, accumulate_timeouts=acc, base=1:10)
@test collect(titer) == 1:10
@test timeoutcnt === 0 # Or else we'll have flakey tests...
for i in titer
if iseven(i)
sleep(0.11) # Does not matter here if overloaded CI VM takes longer than this to get back to us
end
end
# When accumulating timeouts: after 1,2,3,4 our patience is up, call timeoutaction for 4,5,6,7,8,9,10
# When not accumulating: We never reach patience level
@test timeoutcnt == exp
end
@testset "Stop iteration at timeout" begin
# also test that we really timeout when not accumulating here
titer = TimedIterator(;timelimit=0.1, patience=4, timeoutaction = () -> TimedIteratorStop, accumulate_timeouts=false, base=1:10)
last = 0
for i in titer
last = i
if i > 2
sleep(0.11) # Does not matter here if overloaded CI VM takes longer than this to get back to us
end
end
@test last === 6 # Sleep after 2, then 4 patience
end
end
|
lemma connected_linear_image: fixes f :: "'a::euclidean_space \<Rightarrow> 'b::real_normed_vector" assumes "linear f" and "connected s" shows "connected (f ` s)" |
The complex conjugate of a negative number is the negative of the complex conjugate of the number. |
Address(Orchard Park Drive) runs north-south in the northwest corner of core campus. Starting at the north end, it passes between student housing complexes. It then crosses Orchard Road and passes the domes and greenhouses. Next are the student farm and the Colleges at La Rue.
|
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal33.
Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.
Lemma conj2synthconj5 : forall (lv0 : natural) (lv1 : natural), (@eq natural (lv0) (plus lv0 lv1)).
Admitted.
QuickChick conj2synthconj5.
|
= Plain maskray =
|
[STATEMENT]
lemma crsp_step_in:
assumes layout: "ly = layout_of ap"
and compile: "tp = tm_of ap"
and crsp: "crsp ly (as, lm) (s, l, r) ires"
and fetch: "abc_fetch as ap = Some ins"
shows "\<exists> stp>0. crsp ly (abc_step_l (as, lm) (Some ins))
(steps (s, l, r) (ci ly (start_of ly as) ins, start_of ly as - 1) stp) ires"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>stp>0. crsp ly (abc_step_l (as, lm) (Some ins)) (steps (s, l, r) (ci ly (start_of ly as) ins, start_of ly as - 1) stp) ires
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
ly = layout_of ap
tp = tm_of ap
crsp ly (as, lm) (s, l, r) ires
abc_fetch as ap = Some ins
goal (1 subgoal):
1. \<exists>stp>0. crsp ly (abc_step_l (as, lm) (Some ins)) (steps (s, l, r) (ci ly (start_of ly as) ins, start_of ly as - 1) stp) ires
[PROOF STEP]
apply(cases ins, simp_all)
[PROOF STATE]
proof (prove)
goal (3 subgoals):
1. \<And>x1. \<lbrakk>crsp (layout_of ap) (as, lm) (s, l, r) ires; ins = Inc x1; ly = layout_of ap; tp = tm_of ap; abc_fetch as ap = Some (Inc x1)\<rbrakk> \<Longrightarrow> \<exists>stp>0. crsp (layout_of ap) (abc_step_l (as, lm) (Some (Inc x1))) (steps (s, l, r) (ci (layout_of ap) (start_of (layout_of ap) as) (Inc x1), start_of (layout_of ap) as - Suc 0) stp) ires
2. \<And>x21 x22. \<lbrakk>crsp (layout_of ap) (as, lm) (s, l, r) ires; ins = Dec x21 x22; ly = layout_of ap; tp = tm_of ap; abc_fetch as ap = Some (Dec x21 x22)\<rbrakk> \<Longrightarrow> \<exists>stp>0. crsp (layout_of ap) (abc_step_l (as, lm) (Some (Dec x21 x22))) (steps (s, l, r) (ci (layout_of ap) (start_of (layout_of ap) as) (Dec x21 x22), start_of (layout_of ap) as - Suc 0) stp) ires
3. \<And>x3. \<lbrakk>crsp (layout_of ap) (as, lm) (s, l, r) ires; ins = Goto x3; ly = layout_of ap; tp = tm_of ap; abc_fetch as ap = Some (Goto x3)\<rbrakk> \<Longrightarrow> \<exists>stp>0. crsp (layout_of ap) (abc_step_l (as, lm) (Some (Goto x3))) (steps (s, l, r) (ci (layout_of ap) (start_of (layout_of ap) as) (Goto x3), start_of (layout_of ap) as - Suc 0) stp) ires
[PROOF STEP]
apply(rule crsp_step_inc, simp_all)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. \<And>x21 x22. \<lbrakk>crsp (layout_of ap) (as, lm) (s, l, r) ires; ins = Dec x21 x22; ly = layout_of ap; tp = tm_of ap; abc_fetch as ap = Some (Dec x21 x22)\<rbrakk> \<Longrightarrow> \<exists>stp>0. crsp (layout_of ap) (abc_step_l (as, lm) (Some (Dec x21 x22))) (steps (s, l, r) (ci (layout_of ap) (start_of (layout_of ap) as) (Dec x21 x22), start_of (layout_of ap) as - Suc 0) stp) ires
2. \<And>x3. \<lbrakk>crsp (layout_of ap) (as, lm) (s, l, r) ires; ins = Goto x3; ly = layout_of ap; tp = tm_of ap; abc_fetch as ap = Some (Goto x3)\<rbrakk> \<Longrightarrow> \<exists>stp>0. crsp (layout_of ap) (abc_step_l (as, lm) (Some (Goto x3))) (steps (s, l, r) (ci (layout_of ap) (start_of (layout_of ap) as) (Goto x3), start_of (layout_of ap) as - Suc 0) stp) ires
[PROOF STEP]
apply(rule crsp_step_dec, simp_all)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>x3. \<lbrakk>crsp (layout_of ap) (as, lm) (s, l, r) ires; ins = Goto x3; ly = layout_of ap; tp = tm_of ap; abc_fetch as ap = Some (Goto x3)\<rbrakk> \<Longrightarrow> \<exists>stp>0. crsp (layout_of ap) (abc_step_l (as, lm) (Some (Goto x3))) (steps (s, l, r) (ci (layout_of ap) (start_of (layout_of ap) as) (Goto x3), start_of (layout_of ap) as - Suc 0) stp) ires
[PROOF STEP]
apply(rule_tac crsp_step_goto, simp_all)
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done |
```python
from gio import GIO
import numpy as np
##Data from Chan et al. (2018)'s Example 1##
#Notice that we have numpy arrays here.
A = np.array([[2,5],[2,-3],[2,1],[-2,-1]])
b = np.array([[10],[-6],[4],[-10]])
x0 = np.array([[2.5],[3]])
##Creating the GIO object and Generating all of the x^0-ep* for all GIO Models##
#.GIO_all_measures() calls all of the GIO methods
gio_testing = GIO(A,b,x0)
gio_testing.GIO_all_measures()
print("This is x^0-ep* for p=1: ",gio_testing.x0_epsilon_p[0])
print("This is x^0-ep* for p=2: ",gio_testing.x0_epsilon_p[1])
print("This is x^0-ep* for p='inf': ",gio_testing.x0_epsilon_p[2])
print("This is x^0-ep* for absolute duality: ",gio_testing.x0_epsilon_a)
print("This is x^0-ep* for relative duality: ",gio_testing.x0_epsilon_r)
```
This is x^0-ep* for p=1: [[ 2.5 ]
[ 3.66666667]]
This is x^0-ep* for p=2: [[ 2.19230769]
[ 3.46153846]]
This is x^0-ep* for p='inf': [[ 2.1]
[ 3.4]]
This is x^0-ep* for absolute duality: [array([[ 2.1],
[ 3.4]])]
This is x^0-ep* for relative duality: [array([[ 3.16666667],
[ 3.66666667]])]
```python
##Because we are storing the calculated attributes in lists, we see that,
##for the last two print statements in the previous cell, a list was returned
##with an array inside of it. We can solve this problem by adding an index element [0]
print("This is x^0-ep* for absolute duality: ",gio_testing.x0_epsilon_a[0])
print("This is x^0-ep* for relative duality: ",gio_testing.x0_epsilon_r[0])
```
This is x^0-ep* for absolute duality: [[ 2.1]
[ 3.4]]
This is x^0-ep* for relative duality: [[ 3.16666667]
[ 3.66666667]]
```python
##The GIO class also handles cases where you might minimally project onto multiple
##hyperplanes. The class will put the multiple istar indices into the .istar_multi
##attribute but will choose the first istar index as the one with which to continue
##calculations. The class will also output a message to let the user know that
##this has occurred.
A_1 = np.array([[-1,-1],[1,-1],[1,1],[-1,1]])
b_1 = np.array([[-1],[-1],[-1],[-1]])
x0_1 = np.array([[0],[0.7]])
```
The corresponding feasible region set up for the above $A$ and $b$ is as follows:
\begin{equation}
-x_1 - x_2 \geq -1
\end{equation}
\begin{equation}
x_1 - x_2 \geq -1
\end{equation}
\begin{equation}
x_1 + x_2 \geq -1
\end{equation}
\begin{equation}
-x_1 + x_2 \geq -1
\end{equation}
We can then visualize the feasible region (with the approximate placement of $x^0$ marked). The numbering corresponds to the order of the constraints above (shifted to Python's 0-indexing).
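Since the original image is not reproduced here, a minimal matplotlib sketch (ours, not part of the `gio` package) can regenerate it; the four constraints above are equivalent to the diamond $|x_1| + |x_2| \leq 1$:
```python
##Hypothetical plotting sketch (not part of the gio package)
import matplotlib.pyplot as plt
verts = [(1, 0), (0, 1), (-1, 0), (0, -1), (1, 0)]  #corners of the diamond
xs, ys = zip(*verts)
plt.plot(xs, ys, 'b-')
plt.plot(0, 0.7, 'ro', label='$x^0 = (0, 0.7)$')  #approximate placement of x^0
plt.gca().set_aspect('equal')
plt.legend()
plt.show()
```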
```python
##We will create another GIO object to find the optimal epsilon* under the p=1 norm.
##Due to the position of x0_1 and the properties of the p=1 norm, we know that this
##will result in a projection onto multiple hyperplanes
#We will need to bring in everything again because we had a break in the code
#cells (with the markup cell above)
from gio import GIO
import numpy as np
A_1 = np.array([[-1,-1],[1,-1],[1,1],[-1,1]])
b_1 = np.array([[-1],[-1],[-1],[-1]])
x0_1 = np.array([[0],[0.7]])
gio_multi_project_testing = GIO(A_1,b_1,x0_1)
gio_multi_project_testing.GIO_p(1,'F')
```
Under the inf dual norm, x^0 has been projected onto multiple hyperplanes. For now, we will choose the first i index and will put the rest of the indices in the istar_multi attribute.
```python
##As we can see, a message is printed to the user when x^0 is projected onto multiple
##hyperplanes.
##We can see the multiple istar if we print the .istar_multi attribute
print(gio_multi_project_testing.istar_multi)
```
[array([0, 1], dtype=int64)]
```python
##To obtain the raw indices, index into the list
print("Raw Indices: ", gio_multi_project_testing.istar_multi[0])
##To obtain only one of the indices, do a double index
print("Single index: ",gio_multi_project_testing.istar_multi[0][0])
```
Raw Indices: [0 1]
Single index: 0
```python
##We see though that only one epsilon* is calculated and thus only one x^0-epsilon*
##is also calculated
print("This is the epsilon* for p=1: ",gio_multi_project_testing.epsilon_p)
print("This is the (x^0 - epsilon*): ",gio_multi_project_testing.x0_epsilon_p)
```
This is the epsilon* for p=1: [array([[-0.3],
[-0. ]])]
This is the (x^0 - epsilon*): [array([[ 0.3],
[ 0.7]])]
```python
###Calculating rho values###
from gio import GIO
import numpy as np
##Chan et al. (2018)'s Example 1##
##We have changed x0 to demonstrate the rho_p_approx <= rho_p relationship
A = np.array([[2,5],[2,-3],[2,1],[-2,-1]])
b = np.array([[10],[-6],[4],[-10]])
x0 = np.array([[4],[1]])
GIO_rho_test = GIO(A,b,x0)
GIO_rho_test.calculate_rho_p(2) #notice that we did not
#have to specify if_append='F' because
#we set a default value in the code
GIO_rho_test.calculate_rho_p_approx(2)
print("This is rho_p exact:",GIO_rho_test.rho_p)
print("This is rho_p - rho_p_approx:",(GIO_rho_test.rho_p[0] - GIO_rho_test.rho_p_approx[0]))
GIO_rho_test.calculate_rho_a()
print("This is rho_a:",GIO_rho_test.rho_a)
GIO_rho_test.calculate_rho_r()
print("This is rho_r:",GIO_rho_test.rho_r)
```
This is rho_p exact: [0.73886235873548278]
This is rho_p - rho_p_approx: 0.0232038316283
This is rho_a: [0.7119341563786008]
This is rho_r: [0.8851674641148326]
```python
###Testing some values for Validating rho_r#####
from gio import GIO
import numpy as np
##Chan et al. (2018)'s Example 1##
A = np.array([[2,5],[2,-3],[2,1],[-2,-1]])
b = np.array([[10],[-6],[4],[-10]])
x0_1 = np.array([[3],[2]])
x0_2 = np.array([[4],[1]])
x0_3 = np.array([[2],[2]])
validate_rho_r = GIO(A,b,x0_1)
validate_rho_r.calculate_rho_r()
print("This is rho_r for x0_1:",validate_rho_r.rho_r)
validate_rho_r = GIO(A,b,x0_2)
validate_rho_r.calculate_rho_r()
print("This is rho_r for x0_2:",validate_rho_r.rho_r)
validate_rho_r = GIO(A,b,x0_3)
validate_rho_r.calculate_rho_r()
print("This is rho_r for x0_3:",validate_rho_r.rho_r)
```
This is rho_r for x0_1: [0.71428571428571441]
This is rho_r for x0_2: [0.8851674641148326]
Under the b dual norm, x^0 has been projected onto multiple hyperplanes. For now, we will choose the first i index and will put the rest of the indices in the istar_multi attribute.
This is rho_r for x0_3: [0.18644067796610142]
```python
###Structural Considerations for GIO Models###
from gio import GIO
import numpy as np
import pyomo.environ as pyo
A = np.array([[2,5],[2,-3],[2,1],[-2,-1]])
b = np.array([[10],[-6],[4],[-10]])
x0 = np.array([[2.5],[3]])
structural_ep_gio_test = GIO(A,b,x0) #instantiating the GIO object
###Step 1: Create the Base Model###
#The attribute where the model is stored is called GIO_struc_ep, which is
#the Pyomo model.
structural_ep_gio_test.GIO_structural_epsilon_setup()
### Step 2: If desired, Add constraints on Epsilon ###
### to GIO_struc_ep ###
### The variable for epsilon is called ep ###
### The index set for variables is varindex ###
def ep_constraint(model): #should provide the details of the index sets and the numvar parameters
return model.ep[1] <= model.ep[2] #specifically did not ID the epsilon as nonnegative in gio.py
structural_ep_gio_test.GIO_struc_ep.constraint_ep = pyo.Constraint(rule=ep_constraint)
def neg_ep(model,i):
return model.ep[i] <= 0
structural_ep_gio_test.GIO_struc_ep.neg_ep = pyo.Constraint(\
structural_ep_gio_test.GIO_struc_ep.varindex,rule=neg_ep)
### Step 3: Solve the Model ###
#You can specify p as p=1,2,infty
#The solver will generate some output letting you know about infeasibility
#but we also produce output too, letting you know which constraint number
#in terms of Python indexing that, when forced to be an equality,
#results in infeasibility
structural_ep_gio_test.GIO_structural_epsilon_solve(2)
print("*************************************************")
print("The c vector found by this workflow is:",structural_ep_gio_test.c_p[0])
###########################################################################################
#We can also generate the same results as the close form solution
structural_ep_no_added_constraints_test = GIO(A,b,x0)
structural_ep_no_added_constraints_test.GIO_structural_epsilon_setup()
structural_ep_no_added_constraints_test.GIO_structural_epsilon_solve(2)
print("The c vector found by this workflow (not adding epsilon constraints) is:",\
structural_ep_no_added_constraints_test.c_p[0])
```
WARNING: Loading a SolverResults object with a warning status into
model=unknown;
message from solver=Ipopt 3.11.1\x3a Converged to a locally infeasible
point. Problem may be infeasible.
We have infeasibility for constraint= 0 . Putting FLAG in the container_for_obj_vals vector
WARNING: Loading a SolverResults object with a warning status into
model=unknown;
message from solver=Ipopt 3.11.1\x3a Converged to a locally infeasible
point. Problem may be infeasible.
We have infeasibility for constraint= 1 . Putting FLAG in the container_for_obj_vals vector
WARNING: Loading a SolverResults object with a warning status into
model=unknown;
message from solver=Ipopt 3.11.1\x3a Converged to a locally infeasible
point. Problem may be infeasible.
We have infeasibility for constraint= 2 . Putting FLAG in the container_for_obj_vals vector
*************************************************
The c vector found by this workflow is: [[-0.66666667]
[-0.33333333]]
The c vector found by this workflow (not adding epsilon constraints) is: [[ 0.4]
[-0.6]]
```python
```
|
module Inigo.Async.Fetch
import Data.Buffer
import Extra.Buffer
import Inigo.Async.Base
import Inigo.Async.Promise
import Inigo.Async.Util
%foreign (promisifyPrim "(url)=>new Promise((resolve,reject)=>(url.startsWith('https')?require('https'):require('http')).get(url,(r)=>{let d='';r.on('data',(c)=>{d+=c;});r.on('end',()=>{resolve(d)});}).on('error',(e)=>{reject(e);}))")
fetch__prim : String -> promise String
%foreign (promisifyPrim "(url)=>new Promise((resolve,reject)=>(url.startsWith('https')?require('https'):require('http')).get(url,(r)=>{let cs=[];r.on('data',(c)=>{cs.push(Buffer.from(c))});r.on('end',()=>{resolve(Buffer.concat(cs))});}).on('error',(e)=>{reject(e);}))")
fetchBuf__prim : String -> promise Buffer
%foreign (promisifyPrim ("(url,method,data,headers)=>new Promise((resolve,reject)=>{"++ toObject ++";let u=new URL(url);let port=u.port!==''?u.port:u.protocol==='https:'?443:80;let opts={hostname:u.hostname,port,path:u.pathname,method,headers:toObject(headers)};let req=(u.protocol==='https:'?require('https'):require('http')).request(opts,(r)=>{let cs=[];r.on('data',(c)=>{cs.push(Buffer.from(c))});r.on('end',()=>{resolve(__prim_js2idris_array([Buffer.from(r.statusCode.toString(), 'utf-8'), Buffer.concat(cs)]))});}).on('error',(e)=>reject(e));req.on('error',(e)=>reject(e));req.write(data);req.end();})"))
request__prim : String -> String -> Buffer -> List (String, String) -> promise (List Buffer)
export
fetch : String -> Promise String
fetch url =
promisify (fetch__prim url)
export
fetchBuf : String -> Promise Buffer
fetchBuf url =
promisify (fetchBuf__prim url)
convertStatusCode : List Buffer -> Promise (Int, Buffer)
convertStatusCode lb =
case lb of
statusBuf :: buf :: [] =>
do
status <- liftIO $ readAll statusBuf
pure (cast status, buf)
_ =>
reject "Invalid response"
export
requestBuf : String -> String -> Buffer -> List (String, String) -> Promise (Int, Buffer)
requestBuf url method body headers =
do
res <- promisify (request__prim url method body headers)
convertStatusCode res
export
request : String -> String -> String -> List (String, String) -> Promise (Int, Buffer)
request url method body headers =
do
buf <- liftIO $ fromString body
res <- promisify (request__prim url method buf headers)
convertStatusCode res
|
[STATEMENT]
lemma finite_field_factorization_int:
assumes sq: "poly_mod.square_free_m p f"
and result: "finite_field_factorization_int p f = (c,fs)"
shows "poly_mod.unique_factorization_m p f (c, mset fs)
\<and> c \<in> {0 ..< p}
\<and> (\<forall> fi \<in> set fs. set (coeffs fi) \<subseteq> {0 ..< p})"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. unique_factorization_m f (c, mset fs) \<and> c \<in> {0..<p} \<and> (\<forall>fi\<in>set fs. set (coeffs fi) \<subseteq> {0..<p})
[PROOF STEP]
using finite_field_factorization_main_integer[OF _ sq, of c fs]
finite_field_factorization_main_uint32[OF _ _ sq, of c fs]
finite_field_factorization_main_uint64[OF _ _ sq, of c fs]
result[unfolded finite_field_factorization_int_def]
[PROOF STATE]
proof (prove)
using this:
finite_field_factorization_main p (finite_field_ops_integer (integer_of_int p)) f = (c, fs) \<Longrightarrow> unique_factorization_m f (c, mset fs) \<and> c \<in> {0..<p} \<and> (\<forall>fi\<in>set fs. set (coeffs fi) \<subseteq> {0..<p})
\<lbrakk>p \<le> 65535; finite_field_factorization_main p (finite_field_ops32 (uint32_of_int p)) f = (c, fs)\<rbrakk> \<Longrightarrow> unique_factorization_m f (c, mset fs) \<and> c \<in> {0..<p} \<and> (\<forall>fi\<in>set fs. set (coeffs fi) \<subseteq> {0..<p})
\<lbrakk>p \<le> 4294967295; finite_field_factorization_main p (finite_field_ops64 (uint64_of_int p)) f = (c, fs)\<rbrakk> \<Longrightarrow> unique_factorization_m f (c, mset fs) \<and> c \<in> {0..<p} \<and> (\<forall>fi\<in>set fs. set (coeffs fi) \<subseteq> {0..<p})
(if p \<le> 65535 then finite_field_factorization_main p (finite_field_ops32 (uint32_of_int p)) else if p \<le> 4294967295 then finite_field_factorization_main p (finite_field_ops64 (uint64_of_int p)) else finite_field_factorization_main p (finite_field_ops_integer (integer_of_int p))) f = (c, fs)
goal (1 subgoal):
1. unique_factorization_m f (c, mset fs) \<and> c \<in> {0..<p} \<and> (\<forall>fi\<in>set fs. set (coeffs fi) \<subseteq> {0..<p})
[PROOF STEP]
by (auto split: if_splits) |
SUBROUTINE G0DIVS(IAXIS,DIVLEN)
C
C ------------------------------------------------
C ROUTINE NO. ( 231) VERSION (A8.8) 01:MAR:93
C ------------------------------------------------
C
C THIS FINDS THE POSITIONS OF THE INTERVALS ON THE GIVEN AXIS
C (FOR LIN. SCALING) WHICH LIE WITHIN THE CURRENT WINDOW AREA,
C AND ALSO SETS THE MOST SUITABLE AXIS ANNOTATION FORMAT.
C
C
C <IAXIS> GIVES THE REQUIRED AXIS BY ITS MODULUS:
C = 1, THE X-AXIS IS TAKEN, OR
C = 2, THE Y-AXIS IS TAKEN.
C <DIVLEN> GIVES THE REQUIRED INTERVAL LENGTH.
C
C
C THE FOLLOWING ARGUMENTS ARE SUPPLIED THROUGH COMMON:
C
C <X1WND0> THE COORDINATES
C <X2WND0> OF
C <Y1WND0> THE
C <Y2WND0> WINDOW RECTANGLE.
C
C <AXPOSX> THE POSITION OF THE X-AXIS ALONG Y.
C <AXPOSY> THE POSITION OF THE Y-AXIS ALONG X.
C <NOTATA> IF ZERO, NO ANNOTATION IS REQUIRED,
C IF NON-ZERO, ANNOT. FORMAT MUST BE CALCULATED.
C
C
C THE FOLLOWING ARGUMENTS ARE RETURNED THROUGH COMMON:
C (ONLY THE ARGS. RELEVANT TO THE GIVEN AXIS ARE CHANGED):
C
C <KTYPEX> IS THE X-AXIS TYPE (= 1 FOR LIN. AXIS).
C <KTYPEY> IS THE Y-AXIS TYPE (= 1 FOR LIN. AXIS).
C <DIVLX> IS THE X-AXIS SUB-INTERVAL LENGTH.
C <DIVLY> IS THE Y-AXIS SUB-INTERVAL LENGTH.
C <NSKIPX> IS THE NO. OF SUB-INTERVALS PER MAJOR INTERVAL IN X.
C <NSKIPY> IS THE NO. OF SUB-INTERVALS PER MAJOR INTERVAL IN Y.
C <NTIKLX> IS THE MARKING START-POINT FOR THE X-AXIS.
C <NTIKLY> IS THE MARKING START-POINT FOR THE Y-AXIS.
C <NTIKHX> IS THE MARKING END- POINT FOR THE X-AXIS.
C <NTIKHY> IS THE MARKING END- POINT FOR THE Y-AXIS.
C <NDECSX> IS THE X-AXIS ANNOTATION BASIS-EXPONENT.
C <NDECSY> IS THE Y-AXIS ANNOTATION BASIS-EXPONENT.
C <NCHRSX> IS THE NO. OF CHARS. IN X-AXIS ANNOTATION.
C <NCHRSY> IS THE NO. OF CHARS. IN Y-AXIS ANNOTATION.
C <NAFTPX> IS THE NO. OF CHARS. AFTER THE DEC. PT. IN X.
C <NAFTPY> IS THE NO. OF CHARS. AFTER THE DEC. PT. IN Y.
C <KANNX> GIVES THE X-AXIS ANNOTATION TYPE, AND
C <KANNY> GIVES THE Y-AXIS ANNOTATION TYPE, AS FOLLOWS:
C = 1, IT IS INTEGER
C = 2, IT IS REAL,
C = 3, IT IS INTEGER WITH MULT. FACTOR.
C = 4, IT IS REAL WITH MULT. FACTOR.
C <KAXIS> IS SET BY <IAXIS> FOR SUBSEQUENT USE.
C
C
LOGICAL DONE
C
COMMON /T0AARG/ KAXIS
COMMON /T0ADIX/ DIVLX,NTIKLX,NTIKHX
COMMON /T0ADIY/ DIVLY,NTIKLY,NTIKHY
COMMON /T0ANOX/ KANNX,NCHRSX,NAFTPX
COMMON /T0ANOY/ KANNY,NCHRSY,NAFTPY
COMMON /T0ASKX/ NSKIPX,NDECSX
COMMON /T0ASKY/ NSKIPY,NDECSY
COMMON /T0ATIC/ MTICKS
COMMON /T0ATYP/ KTYPEX,KTYPEY
COMMON /T0NOTA/ NOTATA
COMMON /T0WNDO/ X1WND0,X2WND0,Y1WND0,Y2WND0
C
DATA LIMSIG /5/
C
C
C THE AXIS TYPE IS SET AND THE END POINTS ARE FOUND.
C
KAXIS= IAXIS
IF (IABS(KAXIS).EQ.1) THEN
ENDMIN= AMIN1(X1WND0,X2WND0)
ENDMAX= AMAX1(X1WND0,X2WND0)
ELSE
ENDMIN= AMIN1(Y1WND0,Y2WND0)
ENDMAX= AMAX1(Y1WND0,Y2WND0)
ENDIF
C
C THE NO. OF STEPS IN THE WINDOW AT THE GIVEN INTERVAL SIZE
C IS FOUND, AND IF THIS IS > 1000, THE INTERVAL LENGTH IS
C INCREASED BY A SUITABLE FACTOR OF 10 TO GIVE < 999 STEPS.
C A LIMIT OF 100 SUB-INTERVALS IS THEN SET, AND THE ACTUAL
C SUB-INTERVAL LENGTH TO BE USED IS HENCE CALCULATED. IF
C THE GIVEN STEP SIZE IS ZERO, A START INTERVAL-LENGTH OF
C A SUITABLE POWER OF 10 IS CHOSEN TO GIVE APPROX. 15 SUB-
C DIVISIONS, AND THE SUB-INTERVAL LIMIT IS SET ALSO AT 15.
C
DIVSIZ= ABS(DIVLEN)
IF (DIVSIZ.GT.0.0) THEN
STEPS= (ENDMAX-ENDMIN)/DIVSIZ
IF (STEPS.GE.1.0E3) THEN
EXP= ALOG10(STEPS)
IF (EXP.LT.0.0) EXP= EXP-1.0
C
DIVSIZ= DIVSIZ*(10.0**(INT(EXP)-2))
ENDIF
C
LIMIT= 100
ELSE
DIVSIZ= (ENDMAX-ENDMIN)/MTICKS
EXP= ALOG10(DIVSIZ)
IF (EXP.LT.0.0) EXP= EXP-1.0
C
DIVSIZ= 10.0**INT(EXP)
LIMIT= MTICKS
ENDIF
C
C THIS SECTION CALCULATES THE INITIAL EDGE POINTS,
C ROUNDING-OFF ALWAYS IN THE CORRECT DIRECTION.
C
ROUND= 0.999
IF (ENDMIN.LT.0.0) ROUND= -0.001
C
NLO= (ENDMIN/DIVSIZ)+ROUND
ROUND= -0.999
IF (ENDMAX.GT.0.0) ROUND= 0.001
C
NHI= (ENDMAX/DIVSIZ)+ROUND
IDIV= 1
IDEC= 1
DONE= .FALSE.
C
C THIS PART INCREASES THE INTERVAL SIZE BY FACTORS OF
C 2, 5, 10, ETC. UNTIL THE GIVEN LIMIT IS SATISFIED.
C
1 IFACT= IDIV*IDEC
NLONOW= NLO
IF (NLO.GT.0) NLONOW= NLONOW+IFACT-1
C
NLONOW= NLONOW/IFACT
NHINOW= NHI
IF (NHI.LT.0) NHINOW= NHINOW-IFACT+1
C
NHINOW= NHINOW/IFACT
IF ((NHINOW-NLONOW).LE.LIMIT) THEN
C
C WHEN THE SUB-INTERVAL LIMIT HAS BEEN SATISFIED, THE
C NEW VALUES ARE STORED, AND THE PROCESS IS REPEATED
C WITH A LIMIT VALUE OF 15 TO FIND THE MAJOR INTERVALS.
C
IF (DONE) THEN
GO TO 2
ELSE
DONE= .TRUE.
NLO= NLONOW
NHI= NHINOW
DIVSIZ= DIVSIZ*IFACT
IDIV= 1
IDEC= 1
LIMIT= MTICKS
ENDIF
ELSE IF (IDIV.EQ.1) THEN
IDIV=2
ELSE IF (IDIV.EQ.2) THEN
IDIV=5
ELSE
IDIV= 1
IDEC= IDEC*10
ENDIF
C
GO TO 1
C
C ONCE THE VALUES HAVE BEEN FOUND, THEY ARE PLACED
C INTO THE APPROPRIATE VARIABLES IN THE COMMON BLOCK.
C
2 IF (IABS(KAXIS).EQ.1) THEN
KTYPEX= 1
DIVLX= DIVSIZ
NTIKLX= NLO
NTIKHX= NHI
NSKIPX= IFACT
ELSE
KTYPEY= 1
DIVLY= DIVSIZ
NTIKLY= NLO
NTIKHY= NHI
NSKIPY= IFACT
ENDIF
C
C IF <NOTATA> IS NON-ZERO AND THERE ARE TICK MARKS
C TO ANNOTATE, THE NUMBER FORMAT IS CALCULATED BELOW:
C
IF (NOTATA.EQ.0) RETURN
NCHARS= 1
KANNOT= 1
IF (NHI-NLO.GE.0) THEN
C
C FIRST THE MAJOR INTERVAL SIZE IS CALCULATED, THEN
C THE MAXIMUM AND MINIMUM POSITION VALUES ARE FOUND.
C
STPSIZ= DIVSIZ*IFACT
TIKMAX= ABS((NHI/IFACT)*STPSIZ)
TIKMIN= ABS((NLO/IFACT)*STPSIZ)
IF (TIKMAX.LT.TIKMIN) THEN
TIKMAX= TIKMIN
TIKMIN= ABS((NHI/IFACT)*STPSIZ)
ENDIF
IF (TIKMIN.LT.STPSIZ) TIKMIN= STPSIZ
IF (NLO.GT.0.AND.NHI.LT.0) TIKMIN= STPSIZ
IF (NLO.LT.0.AND.NHI.GT.0) TIKMIN= STPSIZ
C
C THE EXPONENT AND NO. OF SIG. DIGITS ARE FOUND FOR
C BOTH THE LARGEST VALUE AND THE INTERVAL, AND
C THESE ARE COMBINED TO GIVE THE NUMBER CONSTANTS.
C IF NUMSIG > LIMSIG, THE NUMBER IS TRUNCATED;
C IF NUMEXP >= LIMSIG, OVERFLOW HAS OCCURRED;
C IF NAFTDP > LIMSIG, UNDERFLOW HAS OCCURRED;
C IF NAFTDP <= 0, THE FORMAT IS INTEGER.
C
CALL G0SIZS(TIKMAX,LIMSIG,MAXEXP,MAXSIG)
INTEXP= MAXEXP
INTSIG= 0
IF (STPSIZ.LE.TIKMAX) CALL G0SIZS(STPSIZ,LIMSIG,INTEXP,INTSIG)
C
NAFTDP= INTSIG-INTEXP
NUMEXP= MAXEXP
NUMSIG= MAXSIG
IF (NAFTDP.GT.MAXSIG-MAXEXP) NUMSIG= NUMEXP+NAFTDP
IF (NUMSIG.GT.LIMSIG) NUMSIG= LIMSIG
C
NAFTDP= NUMSIG-NUMEXP-1
IF (NAFTDP.LT.0) NAFTDP= 0
C
NCHARS= NUMSIG+2
IF ((NUMEXP.GE.LIMSIG).OR.
& (NAFTDP.GT.LIMSIG)) THEN
C
C THE FOLLOWING SECTIONS SET ANNOTATION TYPES
C INTEGER AND REAL RESP., WITH A SCALING FACTOR.
C THIS FACTOR IS GIVEN BY THE EXPONENT OF THE
C MINIMUM (NON-ZERO) ABSOLUTE POSITION VALUE.
C
CALL G0SIZS(TIKMIN,LIMSIG,MINEXP,MINSIG)
NDECS= MINEXP
NAFTDP= MINEXP-NUMEXP+NUMSIG-1
IF (NAFTDP.GT.0) THEN
KANNOT= 4
ELSE
NCHARS= MAXEXP-MINEXP+2
NAFTDP= 0
KANNOT= 3
ENDIF
ELSE
NDECS= 0
C
C THE FOLLOWING SECTIONS SET ANNOTATION TYPES
C INTEGER AND REAL RESP., WITHOUT SCALING FACTOR.
C
IF (NAFTDP.LE.0) THEN
NCHARS= MAXEXP+2
NAFTDP= 0
KANNOT= 1
ELSE
IF (NUMEXP.LT.0) NCHARS= NAFTDP+3
C
KANNOT= 2
ENDIF
ENDIF
ENDIF
C
C THE APPROPRIATE VALUES IN THE COMMON BLOCK
C ARE THEN UPDATED, AND THE SUBROUTINE ENDS.
C
IF (IABS(KAXIS).EQ.1) THEN
NCHRSX= NCHARS
NAFTPX= NAFTDP
KANNX= KANNOT
NDECSX= NDECS
ELSE
NCHRSY= NCHARS
NAFTPY= NAFTDP
KANNY= KANNOT
NDECSY= NDECS
ENDIF
C
RETURN
END
|
function [ X ] = X_Solver_first(D,rho)
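% Hedged annotation (ours, not the original author's): this routine appears
% to compute a proximal step for a log penalty on the singular values, i.e.
%   X = argmin_X  sum_i log(1 + sigma_i(X)) + rho*||X - D||_F^2,
% by solving, for each singular value of D, the scalar quadratic below and
% keeping the root only when it beats the objective value at zero.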
[U,S,V] = svd(D);
S0 = diag(S);
r = length(S0);
P = [ones(r,1), 1-S0, 1/2/rho-S0];
rt = zeros(r,1);
for t = 1:r
p = P(t,:);
Delta = p(2)^2-4*p(1)*p(3);
if Delta <= 0
rt(t) = 0;
else
rts = roots(p);
rts = sort(rts);
if rts(1)*rts(2)<=0
rt(t) = rts(2);
elseif rts(2)<0
rt(t) = 0;
else
funval = log(1+rts(2))+rho*(rts(2)-S0(t)).^2;
if funval > log(1+0)+rho*(0-S0(t)).^2
rt(t) = 0;
end
end
end
end
SSS = diag(rt);
[m,n] = size(D);
sig = zeros(m,n);
sig(1:min(m,n),1:min(m,n)) = SSS;
X = U*sig*V';
end |
A complex-valued function is continuous if and only if its real and imaginary parts are continuous. |
/* stable/stable_fit.c
*
* Functions employed by different methods of estimation implemented
* in Libstable.
*
* Copyright (C) 2013. Javier Royuela del Val
* Federico Simmross Wattenberg
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 3 of the License.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; If not, see <http://www.gnu.org/licenses/>.
*
*
* Javier Royuela del Val.
* E.T.S.I. Telecomunicación
* Universidad de Valladolid
* Paseo de Belén 15, 47002 Valladolid, Spain.
* [email protected]
*/
#include "stable.h"
#include "mcculloch.h"
#include <gsl/gsl_complex.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_sf_erf.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_multimin.h>
#include <gsl/gsl_fft_real.h>
void stable_fft(double *data, const unsigned int length, double * y)
{
//int i;
memcpy ( (void *)y, (const void *) data, length*sizeof(double));
gsl_fft_real_radix2_transform (y, 1, length);
return;
}
double stable_loglikelihood(StableDist *dist, double *data, const unsigned int length)
{
double *pdf=NULL;
double l=0.0;
int i;
pdf=(double*)malloc(sizeof(double)*length);
stable_pdf(dist,data,length,pdf,NULL);
for(i=0;i<length;i++)
{
if (pdf[i]>0.0) l+=log(pdf[i]);
}
free(pdf);
return l;
}
double stable_loglike_p(stable_like_params *params)
{
double *pdf;
double l=0.0;
int i;
pdf=(double*)malloc(sizeof(double)*(params->length));
stable_pdf(params->dist,params->data,params->length,pdf,NULL);
for(i=0;i<params->length;i++)
{
if (pdf[i]>0.0) {l+=log(pdf[i]);}
}
free(pdf);
return l;
}
double stable_minusloglikelihood(const gsl_vector * theta, void * p)
{
/* Cost function to minimize, with the estimation of sigma and mu given by McCulloch at each iteration*/
double alpha=1, beta=0, sigma=1.0, mu=0.0;
double minusloglike=0;
stable_like_params * params = (stable_like_params *) p;
alpha = gsl_vector_get(theta,0);
beta = gsl_vector_get(theta,1);
/* Update sigma and mu with McCulloch. It needs nu_c nu_z*/
czab(alpha, beta, params->nu_c, params->nu_z, &sigma, &mu);
/* Check that the parameters are valid */
if(stable_setparams(params->dist, alpha, beta, sigma, mu, 0) < 0)
{
return GSL_NAN;
}
else minusloglike = -stable_loglike_p(params);
if (isinf(minusloglike) || isnan(minusloglike)) minusloglike=GSL_NAN;
return minusloglike;
}
int compare (const void * a, const void * b)
{
/* qsort compare function */
return ((*(double *)b < *(double *)a) - (*(double *)a < *(double *)b));
}
inline void get_original(const gsl_vector *s,double *a,double *b,double *c,double *m)
{
*a = M_2_PI*atan(gsl_vector_get(s,0))+1.0;
*b = M_2_PI*atan(gsl_vector_get(s,1));
*c = exp(gsl_vector_get(s,2));
*m = gsl_vector_get(s,3);
}
inline void set_expanded(gsl_vector *s,const double a,const double b,const double c,const double m)
{
gsl_vector_set(s,0,tan(M_PI_2*(a-1.0)));
gsl_vector_set(s,1,tan(M_PI_2*b));
gsl_vector_set(s,2,log(c));
gsl_vector_set(s,3,m);
}
double stable_minusloglikelihood_whole(const gsl_vector * theta, void * p)
{
/* Whole cost function to minimize in a 4D parameter space */
double alpha=1, beta=0, sigma=1.0, mu=0.0;
double minusloglike=0;
stable_like_params * params = (stable_like_params *) p;
get_original(theta,&alpha,&beta,&sigma,&mu);
/* Check that the parameters are valid */
if(stable_setparams(params->dist, alpha, beta, sigma, mu, 0) < 0)
{
perror("setparams error");
return GSL_NAN;
}
else minusloglike = -stable_loglike_p(params);
if (isinf(minusloglike) || isnan(minusloglike)) minusloglike=GSL_NAN;
return minusloglike;
}
void stable_fit_init(StableDist *dist, const double * data, const unsigned int length, double *pnu_c,double *pnu_z)
{
/* McCulloch estimation */
double *sorted=NULL;
double alpha0, beta0, sigma0, mu0;
/* We need to sort the data to get percentiles */
sorted = (double*)malloc(length*sizeof(double));
memcpy ( (void *)sorted, (const void *) data, length*sizeof(double));
qsort ( sorted, length, sizeof(double), compare);
/* Estimate the parameters. */
stab((const double *) sorted,length,0,&alpha0,&beta0,&sigma0,&mu0);
/* Set parameters in the distribution */
if(stable_setparams(dist,alpha0,beta0,sigma0,mu0, 0)<0)
{
perror("INITIAL ESTIMATED PARAMETER ARE NOT VALID");
return;
}
/* Get pnu_c and pnu_z needed for mle2d estimation */
cztab(sorted, length, pnu_c, pnu_z);
free(sorted);
return;
}
int stable_fit_iter(StableDist *dist, const double * data, const unsigned int length,const double nu_c,const double nu_z)
{
const gsl_multimin_fminimizer_type *T;
gsl_multimin_fminimizer *s;
gsl_multimin_function likelihood_func;
gsl_vector *theta, *ss;
unsigned int iter = 0;
int status=0;
double size=0;
double a=1,b=0.0,c=1,m=0.0;
stable_like_params par;
par.dist=dist;
par.data=(double *)data;
par.length=length;
par.nu_c=nu_c;
par.nu_z=nu_z;
/* Start: dist must already be initialized with McCulloch's alpha and beta */
theta=gsl_vector_alloc(2);
gsl_vector_set (theta, 0, dist->alpha);
gsl_vector_set (theta, 1, dist->beta);
#ifdef DEBUG
printf("%lf, %lf\n",gsl_vector_get (theta, 0),gsl_vector_get (theta, 1));
#endif
/* Initial step sizes */
ss = gsl_vector_alloc (2);
gsl_vector_set_all (ss, 0.01);
/* Function to minimize */
likelihood_func.n = 2; // Dimension 2 (alpha and beta)
likelihood_func.f = &stable_minusloglikelihood;
likelihood_func.params = (void *) (&par); // Function parameters
/* Create the minimizer */
T = gsl_multimin_fminimizer_nmsimplex2rand;
s = gsl_multimin_fminimizer_alloc (T, 2); /* Dimension 2 */
/* Set the function, initial guess and initial step sizes */
gsl_multimin_fminimizer_set (s, &likelihood_func, theta, ss);
#ifdef DEBUG
printf("5\n");
#endif
/* Iterate */
do
{
iter++;
status = gsl_multimin_fminimizer_iterate(s);
// if (status!=GSL_SUCCESS) {
// printf("Minimizer warning: %s\n",gsl_strerror(status));
// fflush(stdout);
// }
size = gsl_multimin_fminimizer_size (s);
status = gsl_multimin_test_size (size, 0.02);
/*
if (status == GSL_SUCCESS)
{
printf (" converged to minimum at\n");
}
printf ("%5d %1.5f %1.5f %1.5f %1.5f f() = %1.8e size = %.5f\n",
(int)iter,
gsl_vector_get (s->x, 0),
gsl_vector_get (s->x, 1),
p->dist->sigma,
p->dist->mu_1,
s->fval, size);
//}
*/
} while (status == GSL_CONTINUE && iter < 200);
// if (status!=GSL_SUCCESS)
// {
// printf("Minimizer warning: %s\n",gsl_strerror(status));
// fflush(stdout);
// }
/* Retrieve the alpha and beta estimates */
gsl_vector_free(theta);
/*
theta = gsl_multimin_fminimizer_x (s);
a = gsl_vector_get (theta, 0);
b = gsl_vector_get (theta, 1);
*/
a = gsl_vector_get (s->x, 0);
b = gsl_vector_get (s->x, 1);
/* Then estimate sigma and mu for those alpha and beta */
czab(a, b, nu_c, nu_z, &c, &m);
//printf("%5d %10.3e %10.3e %10.3e %10.3e\n",(int)iter,a,b,c,m);
// Store the estimated point in the distribution, checking that it is valid
if (stable_setparams(dist,a,b,c,m,0)<0)
{
perror("FINAL ESTIMATED PARAMETER ARE NOT VALID\n");
}
gsl_vector_free(ss);
gsl_multimin_fminimizer_free (s);
return status;
}
int stable_fit(StableDist *dist, const double *data, const unsigned int length)
{
double nu_c=0.0,nu_z=0.0;
int status = 0;
stable_fit_init(dist,data,length,&nu_c,&nu_z);
status=stable_fit_iter(dist,data,length,nu_c,nu_z);
return status;
}
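/* A hedged usage sketch (ours, not from the library docs): fit a stable
 * distribution to data using the McCulloch initialization + MLE refinement
 * implemented above. The stable_create()/stable_free() constructor and
 * destructor names are assumed to exist in stable.h. */
#ifdef STABLE_FIT_USAGE_EXAMPLE
static int example_fit(const double *data, unsigned int length)
{
  /* Initial guess: alpha=1.5, beta=0, sigma=1, mu=0, 0-parametrization. */
  StableDist *dist = stable_create(1.5, 0.0, 1.0, 0.0, 0);
  int status = stable_fit(dist, data, length);
  printf("alpha=%f beta=%f sigma=%f mu=%f\n",
         dist->alpha, dist->beta, dist->sigma, dist->mu_1);
  stable_free(dist);
  return status;
}
#endif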
int stable_fit_iter_whole(StableDist *dist, const double * data, const unsigned int length)
{
const gsl_multimin_fminimizer_type *T;
gsl_multimin_fminimizer *s;
gsl_multimin_function likelihood_func;
gsl_vector *theta, *ss;
unsigned int iter = 0;
int status=0;
double size=0;
double a=1,b=0.0,c=1,m=0.0;
stable_like_params par;
par.dist=dist;
par.data=(double *)data;
par.length=length;
par.nu_c=0;
par.nu_z=0;
/* Initial params (with McCulloch) */
theta=gsl_vector_alloc(4);
set_expanded(theta,dist->alpha,dist->beta,dist->sigma,dist->mu_1);
#ifdef DEBUG
printf("%lf, %lf, %lf, %lf\n",gsl_vector_get (theta, 0),gsl_vector_get (theta, 1),gsl_vector_get (theta, 2),gsl_vector_get (theta, 3));
#endif
/* Initial steps */
ss = gsl_vector_alloc (4);
gsl_vector_set_all (ss, 0.01);
/* Cost function to minimize */
likelihood_func.n = 4; // 4 Dimensions (alpha, beta, sigma, mu_0)
likelihood_func.f = &stable_minusloglikelihood_whole;
likelihood_func.params = (void *) (&par); // Cost function arguments
/* Minimizer creation */
T = gsl_multimin_fminimizer_nmsimplex2rand;
s = gsl_multimin_fminimizer_alloc (T, 4); /* 4 dimensions */
/* Set cost function, initial guess and initial steps */
gsl_multimin_fminimizer_set (s, &likelihood_func, theta, ss);
#ifdef DEBUG
printf("5\n");
#endif
/* Start iterations */
do
{
iter++;
status = gsl_multimin_fminimizer_iterate(s);
if (status!=GSL_SUCCESS) {
perror("Minimizer warning:\n");
}
size = gsl_multimin_fminimizer_size (s);
status = gsl_multimin_test_size (size, 0.002);
/*
printf(" %03d\t size = %f a_ = %f b_ = %f c_ = %f m_ = %f f_ = %f \n",iter,size,gsl_vector_get (s->x, 0),gsl_vector_get (s->x, 1),
gsl_vector_get (s->x, 2),gsl_vector_get (s->x, 3), gsl_multimin_fminimizer_minimum(s));
*/
} while (status == GSL_CONTINUE && iter < 200);
if (status!=GSL_SUCCESS)
{
perror("Minimizer warning");
}
/* Get last estimation */
gsl_vector_free(theta);
theta = gsl_multimin_fminimizer_x (s);
get_original(theta,&a,&b,&c,&m);
/* Set estimated parameters to the distribution and check if their have valid values*/
if (stable_setparams(dist,a,b,c,m,0)<0)
{
perror("FINAL ESTIMATED PARAMETER ARE NOT VALID\n");
}
gsl_vector_free(ss);
gsl_multimin_fminimizer_free (s);
return status;
}
int stable_fit_whole(StableDist *dist, const double *data, const unsigned int length)
{
// double nu_c=0.0,nu_z=0.0;
int status=0;
// stable_fit_init(dist,data,length,&nu_c,&nu_z);
// printf("McCulloch %d sampless: %f %f %f %f\n",length,dist->alpha,dist->beta,dist->sigma,dist->mu_1);
status = stable_fit_iter_whole(dist,data,length);
return status;
}
double * load_rand_data(char * filename, int N)
{
FILE * f_data;
double * data;
int i;
if ((f_data = fopen(filename,"rt")) == NULL)
{
perror("Error when opening file with random data");
}
data=(double*)malloc(N*sizeof(double));
for(i=0;i<N;i++)
{
if (EOF==fscanf(f_data,"%le\n",data+i))
{
perror("Error when reading data");
}
}
return data;
}
int stable_fit_mle(StableDist *dist, const double *data, const unsigned int length) {
return stable_fit_whole(dist,data,length);
}
int stable_fit_mle2d(StableDist *dist, const double *data, const unsigned int length) {
return stable_fit(dist,data,length);
}
|
module model_mnph_artificial_viscosity
#include <messenger.h>
use mod_kinds, only: rk
use mod_constants, only: HALF, THREE, TWO, ONE, ZERO
use mod_io, only: h_field_dimension
use mod_fluid
use type_model, only: model_t
use type_chidg_worker, only: chidg_worker_t
use DNAD_D
use mod_interpolate, only: interpolate_from_vertices
use ieee_arithmetic
implicit none
!> Int. J. Numer. Meth. Fluids 2016; 82:398–416
!! Dilation-based shock capturing for high-order methods
!! Presmoothed h
!! Model Fields:
!! - Smoothed Artificial Viscosity
!!
!! @author Eric M. Wolf
!! @date 07/11/2018
!!
!---------------------------------------------------------------------------------------
type, extends(model_t) :: mnph_artificial_viscosity_t
real(rk) :: av_constant = 1.5_rk
logical :: elem_avg = .false.
contains
procedure :: init
procedure :: compute
end type mnph_artificial_viscosity_t
!***************************************************************************************
contains
!>
!!
!!
!! @author Eric M. Wolf
!! @date 07/11/2018
!!
!--------------------------------------------------------------------------------
subroutine init(self)
class(mnph_artificial_viscosity_t), intent(inout) :: self
integer :: unit, msg
logical :: file_exists, use_lift, elem_avg
namelist /av_options/ elem_avg
call self%set_name('MNPH Artificial Viscosity')
call self%set_dependency('f(Grad(Q))')
call self%add_model_field('Artificial Viscosity')
call self%add_model_field('Artificial Viscosity - 1')
call self%add_model_field('Artificial Viscosity - 2')
call self%add_model_field('Artificial Viscosity - 3')
!!
!! Check if input from 'models.nml' is available.
!! 1: if available, read and set self%mu
!! 2: if not available, do nothing and mu retains default value
!!
!inquire(file='models.nml', exist=file_exists)
!if (file_exists) then
! open(newunit=unit,form='formatted',file='models.nml')
! read(unit,nml=mnph_artificial_viscosity_unsmoothed_ani,iostat=msg)
! if (msg == 0) self%av_constant = av_constant
! close(unit)
!end if
inquire(file='artificial_viscosity.nml', exist=file_exists)
if (file_exists) then
open(newunit=unit,form='formatted',file='artificial_viscosity.nml')
read(unit,nml=av_options,iostat=msg)
if (msg == 0) self%elem_avg = elem_avg
close(unit)
end if
end subroutine init
!***************************************************************************************
!>
!!
!!
!! @author Eric M. Wolf
!! @date 07/11/2018
!!
!--------------------------------------------------------------------------------
subroutine compute(self,worker)
class(mnph_artificial_viscosity_t), intent(in) :: self
type(chidg_worker_t), intent(inout) :: worker
type(AD_D), dimension(:), allocatable :: &
density, vel1, vel2, vel3, T, c, wave_speed, sensor, av, av1, av2, av3, avtemp, temp_av
real(rk), dimension(:) :: h(3)
real(rk) :: hmin
real(rk) :: Pr_star = 0.9_rk
integer(ik) :: p, ii, nvertex, inode, ivertex, idom, ielem, idom_g, inode_g
real(rk), allocatable :: eval_node1(:), eval_node2(:), eval_node3(:), nodes(:,:), h_field(:,:), h_scalar(:)
real(rk) :: eval_node(3), center(3), radius(3), vert_vals_hmin(8)
real(rk), allocatable, dimension(:) :: weights, jinv
idom = worker%element_info%idomain_l
ielem = worker%element_info%ielement_l
idom_g = worker%element_info%idomain_g
h_field = worker%h_smooth()
select case (trim(h_field_dimension))
case('2D','2d')
h_scalar = (h_field(:,1) + h_field(:,2))/TWO
case('3D','3d')
h_scalar = (h_field(:,1) + h_field(:,2) + h_field(:,3))/THREE
case default
call chidg_signal(FATAL,'mnph_artificial_viscosity: invalid input for h_field_dimension (2D,3D).')
end select
!h_scalar = (h_field(:,1)*h_field(:,2)*h_field(:,3))**(ONE/THREE)
!h_scalar = ONE
p = worker%solution_order('interior')
if (p == 0) p = 1
h_scalar = h_scalar/real(p, rk)
h_field = h_field/real(p,rk)
sensor = worker%get_field('MNPH Shock Sensor', 'value')
density = worker%get_field('Density','value')
vel1 = worker%get_field('Momentum-1','value')
vel2 = worker%get_field('Momentum-2','value')
vel3 = worker%get_field('Momentum-3','value')
vel1 = vel1/(density)
vel2 = vel2/(density)
vel3 = vel3/(density)
c = worker%get_field('Pressure', 'value')
wave_speed = c
c = (gam*wave_speed/(density))
c = c*sin_ramp(c, ZERO, ONE)
wave_speed = sqrt(vel1**TWO+vel2**TWO+vel3**TWO+c)
av = 1.0_rk*(1.5_rk*h_scalar)*wave_speed*sensor
!avtemp = av
!av = sin_ramp2(avtemp,0.01_rk*1.5_rk*h_scalar*wave_speed, 1.5_rk*h_scalar*wave_speed)
!if (self%elem_avg) then
! if (worker%interpolation_source == 'element') then
! weights = worker%quadrature_weights('element')
! jinv = worker%inverse_jacobian('element')
! temp_av = av
! temp_av = sum(weights*jinv*av)/sum(weights*jinv)
! av = temp_av
! else
! weights = worker%quadrature_weights('face')
! jinv = worker%inverse_jacobian('face')
! temp_av = av
! temp_av = sum(weights*jinv*av)/sum(weights*jinv)
! av = temp_av
! end if
!end if
if (any(ieee_is_nan(av(:)%x_ad_))) print *, 'unsmoothed av is nan'
if (any(ieee_is_nan(av(:)%x_ad_))) print *, worker%interpolation_source
av1 = 1.5_rk*(h_field(:,1))*wave_speed*sensor
av2 = 1.5_rk*(h_field(:,2))*wave_speed*sensor
av3 = 1.5_rk*(h_field(:,3))*wave_speed*sensor
!av1 = 1.0_rk*(1.5_rk*h_field(:,1))*wave_speed*sensor
!av2 = 1.0_rk*(1.5_rk*h_field(:,2))*wave_speed*sensor
!av3 = 1.0_rk*(1.5_rk*h_field(:,3))*wave_speed*sensor
!print *, 'unsmoothed av: ', av(1)%x_ad_
!! Average to improve robustness
!if (worker%interpolation_source == 'element') then
! weights = worker%quadrature_weights('element')
! jinv = worker%inverse_jacobian('element')
! av2 = av
! av = sum(weights*jinv*av2)/sum(weights*jinv)
!else
! weights = worker%quadrature_weights('face')
! jinv = worker%inverse_jacobian('face')
! av2 = av
! av = sum(weights*jinv*av2)/sum(weights*jinv)
!end if
!
! Contribute laminar viscosity
!
call worker%store_model_field('Artificial Viscosity', 'value', av)
call worker%store_model_field('Artificial Viscosity - 1', 'value', av1)
call worker%store_model_field('Artificial Viscosity - 2', 'value', av2)
call worker%store_model_field('Artificial Viscosity - 3', 'value', av3)
end subroutine compute
!***************************************************************************************
end module model_mnph_artificial_viscosity
|
lemma square_norm_one: fixes x :: "'a::real_normed_div_algebra" assumes "x\<^sup>2 = 1" shows "norm x = 1" |
module Vehicle.Data.Tensor where
open import Level using (Level)
open import Data.Empty.Polymorphic using (⊥)
open import Data.Nat.Base using (ℕ; zero; suc)
open import Data.List.Base using (List; []; _∷_)
open import Data.Vec.Functional using (Vector)
private
variable
a : Level
A : Set a
n : ℕ
Tensor : Set a → List ℕ → Set a
Tensor A [] = ⊥
Tensor A (n ∷ []) = Vector A n
Tensor A (m ∷ n ∷ ns) = Vector (Tensor A (n ∷ ns)) m |
[STATEMENT]
lemma project_preserves_I:
"G \<in> preserves (v o f) ==> project h C G \<in> preserves v"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. G \<in> preserves (v \<circ> f) \<Longrightarrow> project h C G \<in> preserves v
[PROOF STEP]
by (auto simp add: preserves_def project_stable_I extend_set_eq_Collect) |
[STATEMENT]
lemma absolutely_integrable_spike_set:
fixes f :: "'a::euclidean_space \<Rightarrow> 'b::euclidean_space"
assumes f: "f absolutely_integrable_on S" and neg: "negligible {x \<in> S - T. f x \<noteq> 0}" "negligible {x \<in> T - S. f x \<noteq> 0}"
shows "f absolutely_integrable_on T"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. f absolutely_integrable_on T
[PROOF STEP]
using absolutely_integrable_spike_set_eq f neg
[PROOF STATE]
proof (prove)
using this:
\<lbrakk>negligible {x \<in> ?S - ?T. ?f x \<noteq> (0::?'b)}; negligible {x \<in> ?T - ?S. ?f x \<noteq> (0::?'b)}\<rbrakk> \<Longrightarrow> (?f absolutely_integrable_on ?S) = (?f absolutely_integrable_on ?T)
f absolutely_integrable_on S
negligible {x \<in> S - T. f x \<noteq> (0::'b)}
negligible {x \<in> T - S. f x \<noteq> (0::'b)}
goal (1 subgoal):
1. f absolutely_integrable_on T
[PROOF STEP]
by blast |
using Random;
export contour_block_SS
"""
contour_block_SS([eltype,] nep [,mintegrator];[tol,][logger,][σ,][radius,][linsolvercreator,][N,][neigs,][k,][K])
This is an implementation of the block_SS contour integral method which is
based on the computation of higher order moments.
The contour is an ellipse centered at `σ` with radii given in `radius`, or if only one `radius` is given, the contour is a circle. The numerical quadrature method is specified in `mintegrator`,
which is a type inheriting from `MatrixIntegrator`, by default
`MatrixTrapezoidal`. For a parallel implementation of the
integrator use `MatrixTrapezoidalParallel`.
The integer `k`
specifies the size of the probe subspace. `N` corresponds to the
number of quadrature points. The integer `K` specifies the number of moments.
Ellipses are the only supported contours. The
`linsolvercreator` must create a linsolver that can handle (rectangular) matrices
as right-hand sides, not only vectors. We integrate in complex arithmetic so
`eltype` must be complex type.
# Example
```julia-repl
julia> nep=SPMF_NEP([[0 1 ; 1 1.0], [1 0 ; 0 0]], [s->one(s),s->exp(1im*s^2)]);
julia> λ,V=contour_block_SS(nep,radius=3,neigs=6)
julia> @show λ
6-element Array{Complex{Float64},1}:
4.496403249731884e-15 + 2.506628274630998im
-2.506628274631 - 2.8727020762175925e-15im
3.219972424519104e-16 - 2.5066282746310034im
2.5066282746310096 - 1.1438072192922029e-15im
-2.3814273710772784e-7 - 7.748469160458366e-8im
2.381427350935646e-7 + 7.748467479992284e-8im
```
# References
* Asakura, Sakurai, Tadano, Ikegami, Kimura, A numerical method for nonlinear eigenvalue problems using contour integrals, JSIAM Letters, 2009 Volume 1 Pages 52-55
* Van Beeumen, Meerbergen, Michiels. Connections between contour integration and rational Krylov methods for eigenvalue problems, 2016, TW673, https://lirias.kuleuven.be/retrieve/415487/
"""
contour_block_SS(nep::NEP;params...)=contour_block_SS(ComplexF64,nep;params...);
contour_block_SS(nep::NEP,MIntegrator;params...)=contour_block_SS(ComplexF64,nep,MIntegrator;params...);
function contour_block_SS(
::Type{T},
nep::NEP,
::Type{MIntegrator}=MatrixTrapezoidal;
tol::Real=sqrt(eps(real(T))), # Note tol is quite high for this method
σ::Number=zero(complex(T)),
logger=0,
linsolvercreator=BackslashLinSolverCreator(),
neigs=Inf, # Number of wanted eigvals (currently unused)
k::Integer=3, # Columns in matrix to integrate
radius::Union{Real,Tuple,Array}=1, # integration radius
N::Integer=1000, # Nof quadrature nodes
K::Integer=3, # Nof moments
errmeasure::ErrmeasureType = DefaultErrmeasure(nep),
sanity_check=true,
Shat_mode=:native, # native or JSIAM-mode
rank_drop_tol=tol # Used in sanity checking
)where{T<:Number, MIntegrator<:MatrixIntegrator}
@parse_logger_param!(logger)
n = size(nep,1);
# Notation: L in JSIAM-paper corresponds to k in Beyn's paper.
# Input params the same as contourbeyn, but
# the code is like JSIAM-paper
L=k
Random.seed!(10); # Reproducibility (not really)
U = rand(T,n,L);
V = rand(T,n,L);
function local_linsolve(λ::TT,V::Matrix{TT}) where {TT<:Number}
local M0inv::LinSolver = create_linsolver(linsolvercreator, nep, λ+σ);
# This requires that lin_solve can handle rectangular
# matrices as the RHS
return lin_solve(M0inv,V);
end
# The step-references refer to the JSIAM-paper
push_info!(logger,"Computing integrals")
local Shat
push_info!(logger,"Forming Mhat and Shat")
Shat = zeros(T,n,L,2*K)
Mhat = zeros(T,L,L,2*K)
radius1 = length(radius)==1 ? (radius,radius) : radius
if (Shat_mode==:JSIAM)
# This is the way the JSIAM-paper proposes to compute Shat
if (length(radius)>1)
error("JSIAM Shat_mode does not support ellipses");
end
# Quadrature points: Only circle supported
w = exp.(2im*pi*(0.5 .+ (0:(N-1)))/N);
omega = radius*w;
push_info!(logger,"Forming all linear systems F(s)^{-1}V:",
continues=true)
# Step 2: Precompute all the linear systems
FinvV =zeros(T,n,L,N);
for k = 1:N
FinvV[:,:,k]=local_linsolve(omega[k],V);
end
push_info!(logger,"");
# Step 3-4: Compute all the integrals and store in Shat
for k=0:(2*K-1)
for j=0:N-1
d=((omega[j+1])/radius)^(k+1)
Shat[:,:,k+1] += d*FinvV[:,:,j+1]/N;
end
end
elseif (Shat_mode==:native)
# This deviates from the JSIAM-paper description, since
# we do not precompute linear systems, but instead
# compute linear system in combination with the quadrature.
# It handles the scaling differently.
# This version is also more extendable.
g(t) = complex(radius1[1]*cos(t),radius1[2]*sin(t)) # ellipse
gp(t) = complex(-radius1[1]*sin(t),radius1[2]*cos(t)) # derivative
Tv(λ) = local_linsolve(T(λ),V)
f(t) = Tv(g(t))*gp(t)/(2im*pi)
gv=Vector{Function}(undef,2*K)
for k=0:(2*K-1)
gv[k+1]= s -> g(s)^k;
end
# Call the integrator
Shat=integrate_interval(MIntegrator, ComplexF64,
f,gv,0,2*pi,N,logger )
else
error("Unknown Shat_mode: $Shat_mode")
end
for k=0:(2*K-1)
Mhat[:,:,k+1]=U'*Shat[:,:,k+1] # Step 4: Mhat=U'*Shat
end
# Construct H-matrices:
push_info!(logger,"Computing Hhat and Hhat^{<}")
m=K*L;
Hhat=zeros(T,m,m) # Hhat
Hhat2=zeros(T,m,m) # Hhat^{<}
for i=1:K
for j=1:K
Hhat[(i-1)*L .+ (1:L), (j-1)*L .+ (1:L)] = Mhat[:,:,i+j-2+1];
Hhat2[(i-1)*L .+ (1:L), (j-1)*L .+ (1:L)] = Mhat[:,:,i+j-1+1];
end
end
# Extraction more similar to Algorithm 1
# in https://arxiv.org/pdf/1510.02572.pdf
push_info!(logger,"Computing SVD prepare for eigenvalue extraction ",continues=true)
F=svd(Hhat)
UU=F.U;
SS=F.S;
VV=F.V
# rank_drop_tol = δ in reference
pp = count( SS/SS[1] .> rank_drop_tol);
mprime=pp; # To make closer to notation in paper
push_info!(logger," mprime=$mprime");
# Pick relevant eigvecs
UU_H1 = UU[:,1:mprime]
VV_H1 = VV[:,1:mprime]
# Step 7: Project the moment matrices:
Hhat_mprime = UU_H1'*Hhat*VV_H1;
Hhat2_mprime = UU_H1'*Hhat2*VV_H1;
# Step 8:
(xi,X)=eigen(Hhat2_mprime,Hhat_mprime)
# Step 10: Extract eigpair
# Compute S-matrix (by reshaping parts of Shat-tensor)
S=zeros(T,n,L*K)
for j=0:(K-1)
S[:,j*L .+ (1:L)]=Shat[:,:,j+1]
end
V=S*VV_H1*X;
# Reverse the shift
factor = Shat_mode == :JSIAM ? radius : 1
λ=σ .+ factor*xi
return λ,V
end
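# A hedged usage sketch (kept as a comment so the file stays inert when loaded;
# it assumes the same package context as the docstring example above, and the
# parameter values are illustrative, not recommendations):
#
#   nep = SPMF_NEP([[0 1; 1 1.0], [1 0; 0 0]], [s->one(s), s->exp(1im*s^2)])
#   λ, V = contour_block_SS(nep, MatrixTrapezoidalParallel; radius=3, N=500, k=3, K=3)
#
# Passing the integrator type positionally selects the parallel trapezoidal
# rule mentioned in the docstring.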
|
! test_balanced_tree.f90 --
! Test program for balanced trees, as well as a demonstration of how
! the include file can be used
!
! TODO:
! - delete_key
!
module my_data
type, public :: point2d
real :: x, y
end type point2d
type, public :: point3d
real :: x, y, z
end type point3d
end module my_data
!
! Define a module for trees that hold 2D points
!
module balanced_trees_2d_points
use my_data, tree_data => point2d
implicit none
public :: print_tree
include "balanced_tree.f90"
! Extra:
! Debug routine to test the implementation
!
recursive subroutine print_tree( tree, indent )
class(balanced_tree), intent(inout) :: tree
character(len=*), intent(in) :: indent
write(*,'(2a,i0,'' - '',2f10.2)') indent, 'Tree: ', tree%key, tree%data
write(*,'(2a,l1)') indent, ' Left: ', associated(tree%left)
write(*,'(2a,l1)') indent, ' Right: ', associated(tree%right)
if ( associated(tree%left) ) then
call print_tree(tree%left, indent // ' ')
endif
if ( associated(tree%right) ) then
call print_tree(tree%right, indent // ' ')
endif
end subroutine print_tree
end module balanced_trees_2d_points
!
! Define a module for trees that hold 3D points
! No need for a print routine
!
module balanced_trees_3d_points
use my_data, tree_data => point3d
implicit none
include "balanced_tree.f90"
end module balanced_trees_3d_points
!
! Define an overall module so that we have convenient
! names
!
module balanced_trees
use balanced_trees_2d_points, btree_2d => balanced_tree, point2d => tree_data
use balanced_trees_3d_points, btree_3d => balanced_tree, point3d => tree_data
end module balanced_trees
! Test program
!
program test_balanced_trees
use balanced_trees
implicit none
integer :: key
type(point2d) :: p2d
type(point3d) :: p3d ! Not actually used, just an illustration
type(btree_2d) :: tree
type(btree_3d) :: tree3d ! Ditto
logical :: success
key = 30
p2d%x = 1
p2d%y = 30
call tree%add_data(key, p2d)
key = 10
p2d%y = 10
call tree%add_data(key, p2d)
key = 20
p2d%y = 20
call tree%add_data(key, p2d)
key = 40
p2d%y = 40
call tree%add_data(key, p2d)
key = 35
p2d%y = 35
call tree%add_data(key, p2d)
key = 20
p2d%y = 40
call tree%add_data(key, p2d)
call print_tree(tree, '' )
write(*,*) 'Key 30? ', tree%has_key(30)
write(*,*) 'Key 31? ', tree%has_key(31)
key = 35
p2d%y = 40 ! Should be changed
call tree%get_data( key, p2d, success )
write(*,*) 'Key 35: ', success, ' - ', p2d%y
key = 31
p2d%y = 33 ! Should be NOT changed
call tree%get_data( key, p2d, success )
write(*,*) 'Key 31: ', success, ' - ', p2d%y
!
! Now print the contents
!
call tree%traverse( print_p2d )
call tree%destroy
contains
subroutine print_p2d( key, p2d )
integer, intent(in) :: key
type(point2d), intent(in) :: p2d
write(*,*) 'Key: ', key
write(*,*) ' Data X: ', p2d%x
write(*,*) ' Data Y: ', p2d%y
end subroutine print_p2d
end program test_balanced_trees
|
= Voyage : Inspired by Jules Verne =
|
import numpy as np
from videoreader import VideoReader
from napari_plugin_engine import napari_hook_implementation
import cv2
class VideoReaderNP(VideoReader):
"""VideoReader posing as numpy array."""
def __init__(self, filename: str, remove_leading_singleton: bool = True):
"""
Args:
filename (str): filename of the video
remove_leading_singleton (bool, optional): Remove leading singleton dimension when returning single frames. Defaults to True.
"""
super().__init__(filename)
self.remove_leading_singleton = remove_leading_singleton
def __getitem__(self, index):
# numpy-like slice indexing into arbitrary dims of the video
# ugly/hacky but works
frames = None
if isinstance(index, int): # single frame
ret, frames = self.read(index)
if frames.shape[2]==3:
frames = cv2.cvtColor(frames, cv2.COLOR_BGR2RGB)
elif isinstance(index, slice): # slice of frames
frames = np.stack([self[ii] for ii in range(*index.indices(len(self)))])
elif isinstance(index, range): # range of frames
frames = np.stack([self[ii] for ii in index])
elif isinstance(index, tuple): # unpack tuple of indices
if isinstance(index[0], slice):
indices = range(*index[0].indices(len(self)))
elif isinstance(index[0], (np.integer, int)):
indices = int(index[0])
else:
indices = None
if indices is not None:
frames = self[indices]
# index into pixels and channels
for cnt, idx in enumerate(index[1:]):
if isinstance(idx, slice):
ix = range(*idx.indices(self.shape[cnt+1]))
elif isinstance(idx, int):
ix = range(idx, idx + 1)
else:
continue
if frames.ndim==4: # ugly indexing from the back (-1,-2 etc)
cnt = cnt+1
frames = np.take(frames, ix, axis=cnt)
if self.remove_leading_singleton and frames is not None:
if frames.shape[0] == 1:
frames = frames[0]
return frames
@property
def dtype(self):
return np.uint8
@property
def shape(self):
return (self.number_of_frames, *self.frame_shape)
@property
def ndim(self):
return len(self.shape)  # numpy convention: ndim equals len(shape)
@property
def size(self):
return np.prod(self.shape)
def min(self):
return 0
def max(self):
return 255
def video_file_reader(path):
array = VideoReaderNP(path, remove_leading_singleton=True)
return [(array, {'name': path}, 'image')]
@napari_hook_implementation
def napari_get_reader(path):
# remember, path can be a list, so we check its type first...
if isinstance(path, str) and any([path.endswith(ext) for ext in [".mp4", ".mov", ".avi"]]):
# If we recognize the format, we return the actual reader function
return video_file_reader
# otherwise we return None.
return None
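# A minimal usage sketch (hedged: "movie.mp4" is a hypothetical local file and
# the `videoreader` package is assumed to be installed); illustrative only.
if __name__ == "__main__":
    vr = VideoReaderNP("movie.mp4")   # video posing as a numpy-like array
    print(vr.shape, vr.dtype)         # (n_frames, *frame_shape), dtype uint8
    frame = vr[0]                     # a single frame, converted to RGB
    clip = vr[10:20]                  # a stack of ten frames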
|
lemma chain: "a \<in> s \<Longrightarrow> b \<in> s \<Longrightarrow> a \<le> b \<or> b \<le> a" |
{- Byzantine Fault Tolerant Consensus Verification in Agda, version 0.9.
Copyright (c) 2021, Oracle and/or its affiliates.
Licensed under the Universal Permissive License v 1.0 as shown at https://opensource.oracle.com/licenses/upl
-}
open import Util.Prelude
module LibraBFT.Impl.OBM.ECP-LBFT-OBM-Diff.ECP-LBFT-OBM-Diff-0 where
enabled : Bool
enabled = true
|
% INTERNAL FUNCTION: multikronecker
%
% ::
%
% C=kronall(A1,A2,...,An)
%
% Args:
%
% - **Ai** [matrix]: input for the kronecker product
%
% Returns:
% :
%
% - **C** [matrix]: kron(A1,kron(A2,kron(A3,...)))
%
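% Example:
%    :
%
%    A hedged sketch; the matrices below are illustrative placeholders.
%
%       A1 = eye(2); A2 = rand(3,2); A3 = rand(2,4);
%       C  = kronall(A1, A2, A3);   % same as kron(A1, kron(A2, A3))
%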
% See also:
% tensorperm
%
% |
-- Proofs of the distributivity of the product over sums
-- ======================================================
import data.nat.basic
open nat
open list
variables {α : Type*} {β : Type*}
variable (x : α)
variables (xs : list α)
variable (n : ℕ)
variable (ns : list ℕ)
-- ----------------------------------------------------
-- Note. We will use the function aplica and its
-- properties, studied previously.
-- ----------------------------------------------------
def aplica : (α → β) → list α → list β
| f [] := []
| f (x :: xs) := (f x) :: aplica f xs
@[simp]
lemma aplica_nil
(f : α → β)
: aplica f [] = [] :=
rfl
@[simp]
lemma aplica_cons
(f : α → β)
: aplica f (x :: xs) = (f x) :: aplica f xs :=
rfl
-- ----------------------------------------------------
-- Exercise 1. Define the function
--    suma : list ℕ → ℕ
-- such that (suma xs) is the sum of the elements of
-- xs. For example,
--    suma [3,2,5] = 10
-- ----------------------------------------------------
def suma : list ℕ → ℕ
| [] := 0
| (n :: ns) := n + suma ns
-- #eval suma [3,2,5]
-- ----------------------------------------------------
-- Exercise 2. Prove the following lemmas
-- + suma_nil :
-- suma ([] : list ℕ) = 0 :=
-- + suma_cons :
-- suma (n :: ns) = n + suma ns :=
-- ----------------------------------------------------
@[simp]
lemma suma_nil :
suma ([] : list ℕ) = 0 :=
rfl
@[simp]
lemma suma_cons :
suma (n :: ns) = n + suma ns :=
rfl
-- ----------------------------------------------------
-- Exercise 3. (p. 45) Prove that
-- suma (aplica (λ x, 2*x) ns) = 2 * (suma ns)
-- ----------------------------------------------------
-- 1st proof
example :
suma (aplica (λ x, 2*x) ns) = 2 * (suma ns) :=
begin
induction ns with m ms HI,
{ rw aplica_nil,
rw suma_nil,
rw nat.mul_zero, },
{ rw aplica_cons,
rw suma_cons,
rw HI,
rw suma_cons,
rw mul_add, },
end
-- 2nd proof
example :
suma (aplica (λ x, 2*x) ns) = 2 * (suma ns) :=
begin
induction ns with m ms HI,
{ calc suma (aplica (λ (x : ℕ), 2 * x) [])
= suma [] : by rw aplica_nil
... = 0 : by rw suma_nil
... = 2 * 0 : by rw nat.mul_zero
... = 2 * suma [] : by rw suma_nil, },
{ calc suma (aplica (λ x, 2 * x) (m :: ms))
= suma (2 * m :: aplica (λ x, 2 * x) ms) : by rw aplica_cons
... = 2 * m + suma (aplica (λ x, 2 * x) ms) : by rw suma_cons
... = 2 * m + 2 * suma ms : by rw HI
... = 2 * (m + suma ms) : by rw mul_add
... = 2 * suma (m :: ms) : by rw suma_cons, },
end
-- 3rd proof
example :
suma (aplica (λ x, 2*x) ns) = 2 * (suma ns) :=
begin
induction ns with m ms HI,
{ simp, },
{ simp [HI, mul_add], },
end
-- 4th proof
example :
suma (aplica (λ x, 2*x) ns) = 2 * (suma ns) :=
by induction ns ; simp [*, mul_add]
-- 5th proof
lemma suma_aplica :
∀ ns, suma (aplica (λ x, 2*x) ns) = 2 * (suma ns)
| [] := by simp
| (m :: ms) := by simp [suma_aplica ms, mul_add]
-- Comments on the functions sum and map:
-- + They are equivalent to the functions suma and aplica.
-- + To use them you have to import the library
--   data.list.basic and open the namespace
--   list, writing at the beginning of the file
--      import data.list.basic
--      open list
-- + They can be evaluated. For example,
-- #eval sum [3,2,5]
-- #eval map (λx, 2*x) [3,2,5]
-- #eval map ((*) 2) [3,2,5]
-- #eval map ((+) 2) [3,2,5]
|
With the Rugby World Cup (“RWC”) underway, the eyes of the world are on our sport. One of the most topical debates across all contact sports is the issue of concussion management.
All stakeholders will be looking forward to an RWC in which players, medical personnel and team support staff play their part in managing this important aspect of player welfare. Rugby is making significant progress in the detection and management of concussion, and it will be important that other professional rugby competitions adopt the standards set for RWC 2015. Rugby should not see the management of concussion as a risk at RWC, but as an opportunity to demonstrate how far the game has come in recent times.
The International Rugby Players Association (“IRPA”) is constantly thinking about what more can be done to further improve the game’s ability to manage the issues associated with concussion.
Effective from 1st August 2015, after a successful global trial period at elite level, World Rugby has formally introduced a temporary substitution rule. This applies to any player who is removed from the field of play to undertake a Head Injury Assessment (HIA) when it is unclear if that player has a suspected concussion.
Having the ability to remove a player from the action and from the cauldron of a packed stadium, to undertake a thorough clinical assessment and video review of an incident, in a private controlled environment, makes good practical sense. Previously players in this situation would often go untreated, or be checked over briefly on field using questionable methods.
World Rugby and other key stakeholders are leading a real culture shift amongst professional players, coaches and medical personnel in relation to peer pressure to play. Recent statistics back this up. Research has underpinned the HIA process throughout. Prior to the temporary substitution trial, 56 per cent of players with a confirmed concussion remained on the field following their injury. Now that figure is less than 12 per cent (British Journal of Sports Medicine, 2014) and the hope is that following continued refinement of the HIA process we will see further improvement in 2015 and beyond.
A key point lost on many, but one that IRPA believes is fundamental to a conservative management approach, is that a player is now removed once the attending medics confirm a diagnosis of suspected concussion. This takes the pressure off trying to make a definitive diagnosis of concussion during a high-pressure game. That does mean some players who have not suffered a concussion will be removed, but the conservative approach is the better one.
Thankfully the message is getting through and, along with World Rugby’s #RecogniseAndRemove campaign, the game’s stakeholders are realising that it is not just the player’s responsibility, but also the obligation of coaches, referees and team support staff, to step in if they suspect a concussion has occurred.
Whilst IRPA sees this as a positive step, it is one that needs to be continually developed and filtered down to the community game, where awareness and education on the subject are not yet at the same level.
Medical specialist in concussion monitoring individual cases.
Additionally, IRPA is very supportive of, and engaged in, the work being undertaken on the laws of the game to ensure that the risks of on-field incidents that result in serious injury, including concussive events, are minimised.
There is much conjecture on whether rugby, and concussions sustained playing the game, lead to long-term cognitive health issues. From a player’s perspective, IRPA does not want that debate to be a distraction from making sure the right thing is done by the players now. IRPA believes that the game should be doing all it can to ensure the welfare and health of the player is an ongoing priority.
So what of the future, and in particular the debate over using new technologies to take human error out of the equation? One such advancement that may justify further research and refinement is the use of sensors to provide team doctors with real-time data. It could provide insight into accelerations and forces that they may not have seen first-hand, particularly where the player has either not recognised a head knock, chosen not to seek medical assistance, or suffered signs and symptoms post game. The sensor does not diagnose a concussion but rather provides accurate, previously unavailable live information which, combined with testing (HIA) and baseline data, can aid in a clinical diagnosis. In the same way team support staff use other data like heart rates and GPS data, the sensor can provide data to assist medical personnel in making informed decisions regarding concussion management.
From a player welfare perspective, IRPA believes the pressure put on players to return to play to preserve their contracts and the rehabilitative support provided to players recovering from a concussion are areas that should be looked at further.
Ensuring a player’s contractual terms and conditions are supportive during times of serious injury is fundamental. It allows the player the opportunity to fully recover without the added pressure or threat of having his or her salary or match payments stopped, or worse, the contract terminated. In this respect, physical injuries are more obvious and certain, whereas concussion is by its very nature obscure and invisible. Currently there are no set guidelines around minimum standards for players’ contractual terms and conditions. IRPA feels that the time is right to address this, not just because of the issues around concussion, but to maintain the integrity and reputation of the game.
For those players who endure concussion complications, either as part of their return-to-play process or following a decision to stop playing, there seems little available to aid recovery beyond rest and time, which for an active professional athlete can be a very uncertain and sometimes depressing experience. IRPA would support more research into identifying rehabilitation techniques and programs to help in the resilience-building and recovery process, and to ensure they are readily understood and available throughout the game.
World leading player welfare standards are a priority in building the legacy of our sport. It is fundamental to the integrity of the game and to those who participate in it. IRPA is looking forward to continuing to work with World Rugby to bring about the culture change and to enhance the education and regulatory framework that ensures rugby leads the way on concussion management globally.
The Rugby Union Players' Association (RUPA) will continue to take a proactive and collaborative approach to concussion prevention and management in Australia sport, as part of its involvement with the Australian Athletes' Alliance (AAA).
|
[STATEMENT]
lemma EQI: "a=b \<Longrightarrow> EQ a b"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. a = b \<Longrightarrow> EQ a b
[PROOF STEP]
by (simp add: EQ_def) |
If two loops are homotopic, then their continuous images are homotopic. |
theory RA_seL4
imports Main
begin
datatype PC = P0 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 | P13 | P14 | Idle
datatype PC_SETUP = L1 | L2 | L3 | L4
type_synonym Time = nat
type_synonym Process = nat
type_synonym MRange = "nat \<times> nat"
type_synonym Mac = nat
datatype Cap =
CSCap Process | VSCap Process | TCBCap Process
| TCap | KeyCap | EPCap | GrantCap | IQCap | NetCap | SendCap | ReceiveCap | BadgeCap Process
datatype Object = P Process | INITTIME | KEY | IRQ | NONCE | MEM
datatype Predicate = Read | Write | Grant | Control | AsyncSend | Receive
consts Policy :: "(Process \<times> Predicate \<times> Object) set"
consts P\<^sub>A :: Process
consts P\<^sub>1 :: Process
consts P\<^sub>N :: Process
definition "P_distinct \<equiv> P\<^sub>A \<noteq> P\<^sub>1 \<and> P\<^sub>A \<noteq> P\<^sub>N \<and> P\<^sub>N \<noteq> P\<^sub>1"
datatype Message =
Req Time Mac MRange Process
| Res Time Mac Mac
| Empty
fun isReq :: "Message \<Rightarrow> bool" where
"isReq (Req _ _ _ _) = True"
| "isReq _ = False"
fun isRes :: "Message \<Rightarrow> bool" where
"isRes (Res _ _ _) = True"
| "isRes _ = False"
fun getMTime :: "Message \<Rightarrow> Time " where
"getMTime (Req t _ _ _) = t"
| "getMTime (Res t _ _) = t"
| "getMTime _ = 0 "
fun getMProcess :: "Message \<Rightarrow> Process option" where
"getMProcess (Req _ _ _ p) = Some p"
| "getMProcess (Res _ _ _) = None"
| "getMProcess _ = None "
fun getMRange :: "Message \<Rightarrow> MRange option" where
"getMRange (Req _ _ r _) = Some r"
| "getMRange (Res _ _ _) = None"
| "getMRange _ = None "
fun getFMac :: "Message \<Rightarrow> Mac " where
"getFMac (Req _ fm _ _) = fm"
| "getFMac (Res _ fm _) = fm"
| "getFMac _ = 0 "
fun getSMac :: "Message \<Rightarrow> Mac option" where
"getSMac (Req _ _ _ _) = None"
| "getSMac (Res _ _ sm) = Some sm"
| "getSMac _ = None "
record Message_req =
time_req :: Time (*Treq, Tres, ...*)
mac_req :: nat
mrang_req :: MRange (*a'b'*)
proc_req :: Process (*Pa, P1, ...*)
record Message_resp =
time_resp :: Time (*Treq, Tres, ...*)
(*proc_resp :: Process Pa, P1, ...*)
mac1_resp :: nat
mac2_resp :: nat
record CSpace_rec =
Caps :: "Cap set"
EP :: Message
record TCB_rec =
VSpace :: nat
CSpace :: CSpace_rec
record Proc_rec =
TCB :: TCB_rec
Mem :: MRange
Priority :: nat
Time :: nat
Parent :: nat
record State =
Procs :: "Process set"
ProcRec :: "Process \<Rightarrow> Proc_rec"
TInit :: nat
Key :: nat
Nonce :: nat
irq :: bool
pc :: PC
policy :: "(Process \<times> Predicate \<times> Object) set"
(*Local*)
rval :: "Message"
l_time :: nat
l_mac1 :: nat
l_mac2 :: nat
ep :: Message
definition "P_attest_prop s \<equiv>
P\<^sub>A\<in>Procs s
\<and> Priority ((ProcRec s) P\<^sub>A) = 0
\<and> Time ((ProcRec s) P\<^sub>A) = TInit s
\<and> Parent ((ProcRec s) P\<^sub>A) = 0
\<and> Caps (CSpace (TCB ((ProcRec s) P\<^sub>A))) =
{VSCap P\<^sub>A, CSCap P\<^sub>A, TCBCap P\<^sub>A, TCap, KeyCap , EPCap, GrantCap, IQCap, SendCap,BadgeCap P\<^sub>A, BadgeCap P\<^sub>N}"
definition "Spawned_prop p s \<equiv>
p \<noteq> P\<^sub>A \<and> p \<in> Procs s
\<and> Priority ((ProcRec s) p) > 0
\<and> Time ((ProcRec s) p) > TInit s
\<and> Caps (CSpace (TCB ((ProcRec s) p))) = {VSCap p, CSCap p, TCBCap p}"
definition "Net_prop p s \<equiv>
p \<noteq> P\<^sub>A \<and> p \<in> Procs s
\<and> Priority ((ProcRec s) p) > 0
\<and> Time ((ProcRec s) p) > TInit s
\<and> Caps (CSpace (TCB ((ProcRec s) p))) = {VSCap p, CSCap p, TCBCap p, NetCap, ReceiveCap, BadgeCap P\<^sub>N, BadgeCap P\<^sub>A}"
record SetupState =
ss_pc :: PC_SETUP
definition "CreateProcess tcb mem pri t par\<equiv>
\<lparr>
TCB = tcb,
Mem = mem,
Priority = pri,
Time = t,
Parent = par
\<rparr>"
definition "CreateProcessTCB vs cs\<equiv>
\<lparr>
VSpace = vs,
CSpace = cs
\<rparr>"
definition "CreateProcessCSpace caps \<equiv>
\<lparr>
Caps = caps,
EP = Empty
\<rparr>"
definition "MAC data::nat \<equiv> SOME n. 1 \<le> n \<and> n \<le> data * data * data"
definition "getTime now \<equiv> SOME t . t > now"
definition update_pc :: "PC \<Rightarrow> State \<Rightarrow> State" ("`pc := _" [200])
where
"update_pc v \<equiv> \<lambda> s. s \<lparr>pc := v\<rparr>"
definition update_local_ep :: "Message \<Rightarrow> State \<Rightarrow> State" ("`ep := _" [200])
where
"update_local_ep v \<equiv> \<lambda> s. s \<lparr>ep := v\<rparr>"
definition update_irq :: "bool \<Rightarrow> State \<Rightarrow> State" ("`irq := _" [200])
where
"update_irq v \<equiv> \<lambda> s. s \<lparr>irq := v\<rparr>"
definition update_rval :: "Message \<Rightarrow> State \<Rightarrow> State" ("`rval := _" [200])
where
"update_rval v \<equiv> \<lambda> s. s \<lparr>rval := v\<rparr>"
(*definition update_badge :: "bool \<Rightarrow> Proc_rec \<Rightarrow> Proc_rec" ("`badge := _" [100])
where
"update_badge v \<equiv> \<lambda> s. s \<lparr>Badge := v\<rparr>"*)
definition update_time :: "Time \<Rightarrow> State \<Rightarrow> State" ("`time := _" [100])
where
"update_time v \<equiv> \<lambda> s. s \<lparr>l_time := v\<rparr>"
definition update_mac1 :: "nat \<Rightarrow> State \<Rightarrow> State" ("`mac1 := _" [200])
where
"update_mac1 v \<equiv> \<lambda> s. s \<lparr>l_mac1 := v\<rparr>"
definition update_mac2 :: "nat \<Rightarrow> State \<Rightarrow> State" ("`mac2 := _" [200])
where
"update_mac2 v \<equiv> \<lambda> s. s \<lparr>l_mac2 := v\<rparr>"
definition update_nonce :: "nat \<Rightarrow> State \<Rightarrow> State" ("`nonce := _" [200])
where
"update_nonce v \<equiv> \<lambda> s. s \<lparr>Nonce := v\<rparr>"
definition Setup :: "SetupState \<Rightarrow> State \<Rightarrow> SetupState \<Rightarrow> State \<Rightarrow> bool" where
"Setup ss s ss' s' \<equiv>
(case (ss_pc ss) of
L1 \<Rightarrow> let csp = CreateProcessCSpace {VSCap P\<^sub>A, CSCap P\<^sub>A, TCBCap P\<^sub>A, TCap, KeyCap , EPCap, GrantCap, IQCap, SendCap, BadgeCap P\<^sub>A, BadgeCap P\<^sub>N};
tcb = CreateProcessTCB 0 csp;
pr = CreateProcess tcb (0,1) 0 (TInit s) 0 in
s' = s\<lparr>Procs := Procs s \<union> {P\<^sub>A}, ProcRec := (ProcRec s)(P\<^sub>A := pr), policy := policy s \<union> {(P\<^sub>A, Read, KEY),(P\<^sub>A, Read, INITTIME), (P\<^sub>A, Control,P P\<^sub>A),(P\<^sub>A, Control, IRQ), (P\<^sub>A, Control, NONCE), (P\<^sub>A, Control, MEM)}\<rparr>
\<and> (ss_pc ss' = L2)
| L2 \<Rightarrow> let csp = CreateProcessCSpace {VSCap P\<^sub>1, CSCap P\<^sub>1, TCBCap P\<^sub>1};
tcb = CreateProcessTCB 0 csp;
pr = CreateProcess tcb (1,2) 1 (TInit s) 1 in
s' = s\<lparr>Procs := Procs s \<union> {P\<^sub>1}, ProcRec := (ProcRec s)(P\<^sub>1 := pr),policy := policy s \<union> {(P\<^sub>1, Control,P P\<^sub>1) , (P\<^sub>A, Control,P P\<^sub>1)}\<rparr>
\<and> (ss_pc ss' = L3)
| L3 \<Rightarrow> let csp = CreateProcessCSpace {VSCap P\<^sub>N, CSCap P\<^sub>N, TCBCap P\<^sub>N};
tcb = CreateProcessTCB 0 csp;
pr = CreateProcess tcb (2,3) 1 (TInit s) 0 in
s' = s\<lparr>Procs := Procs s \<union> {P\<^sub>N}, ProcRec := (ProcRec s)(P\<^sub>N := pr), policy := policy s \<union> {(P\<^sub>N, Control,P P\<^sub>N) , (P\<^sub>A, Control,P P\<^sub>N)}\<rparr>
\<and> (ss_pc ss' = L4)
| L4 \<Rightarrow> False
)
"
definition "getEP s p \<equiv> (EP (CSpace (TCB (ProcRec s p))))"
lemmas simps [simp] = update_nonce_def update_mac2_def update_mac1_def update_time_def (*update_badge_def*)
update_rval_def update_irq_def update_pc_def MAC_def update_local_ep_def Let_def
definition Prover :: "State \<Rightarrow> State \<Rightarrow> bool"
where
"Prover s s' \<equiv>
(case (pc s) of
P0 \<Rightarrow> if BadgeCap P\<^sub>A \<in> (Caps (CSpace (TCB ((ProcRec s) P\<^sub>N)))) then s' = (`pc := P1) s else s' = s
| P1 \<Rightarrow> s' = (`pc := P2 \<circ> `irq := False) s
| P2 \<Rightarrow> let ep = (EP(CSpace (TCB ((ProcRec s) P\<^sub>N)))) in if isReq ep then
s' = (`pc := P3 \<circ> `ep := ep \<circ> (\<lambda> s . s \<lparr>ProcRec := (ProcRec s)(P\<^sub>A := (ProcRec s P\<^sub>A)\<lparr>
TCB := (TCB (ProcRec s P\<^sub>A))\<lparr>CSpace := ((CSpace (TCB (ProcRec s P\<^sub>A)))\<lparr>EP := ep\<rparr>)\<rparr>\<rparr>)\<rparr>)) s else s' = s
| P3 \<Rightarrow> (if(TInit s > (getMTime (ep s)))
then (s'= (`pc := Idle \<circ> `rval := Empty) s )
else (s' = (`pc := P4) s))
| P4 \<Rightarrow> (if ((getFMac (ep s)) \<noteq> (MAC ((getMTime (ep s)) * fst (Mem ((ProcRec s) P\<^sub>A)) * snd (Mem ((ProcRec s) P\<^sub>A)) * (P\<^sub>A))))
then (s' = (`pc := Idle \<circ> `rval := Empty) s)
else (s' = (`pc := P5) s))
| P5 \<Rightarrow> s' = (`pc := P6 \<circ> `nonce := (TInit s + (getMTime (ep s)))) s
| P6 \<Rightarrow> s' = (`pc := P7 \<circ> `time := (getTime ((getMTime (ep s))))) s
| P7 \<Rightarrow> s' = (`pc := P8 \<circ> `mac1 := (MAC (fst (Mem ((ProcRec s) P\<^sub>A)) * snd (Mem ((ProcRec s) P\<^sub>A))))) s
| P8 \<Rightarrow> s' = (`pc := P9 \<circ> `mac2 := (MAC ((l_time s) * (P\<^sub>A) * (l_mac1 s)))) s
| P9 \<Rightarrow> s' = (`pc := P10 \<circ> `rval := ((Res (l_time s) (l_mac1 s) (l_mac2 s)))) s
| P10 \<Rightarrow> s' = (`pc := P11 \<circ> (\<lambda> s . s \<lparr>ProcRec := (ProcRec s)(P\<^sub>A := (ProcRec s P\<^sub>A)
\<lparr>TCB := (TCB (ProcRec s P\<^sub>A))\<lparr>CSpace := ((CSpace (TCB (ProcRec s P\<^sub>A)))\<lparr>EP := rval s\<rparr>)\<rparr>\<rparr>)\<rparr>)) s
| P11 \<Rightarrow> s' = (`pc := Idle \<circ> `irq := True) s
| Idle \<Rightarrow> False
| _ \<Rightarrow> False)"
definition Network :: "State \<Rightarrow> Message \<Rightarrow> State \<Rightarrow> bool"
where
"Network s m s' \<equiv>
case (pc s) of
P0 \<Rightarrow> s' = (`pc := P1 \<circ> `ep := m \<circ> (\<lambda> s . s \<lparr>ProcRec := (ProcRec s)(P\<^sub>N := (ProcRec s P\<^sub>N)\<lparr>
TCB := (TCB (ProcRec s P\<^sub>N))\<lparr>CSpace := ((CSpace (TCB (ProcRec s P\<^sub>N)))\<lparr>EP := m\<rparr>)\<rparr>\<rparr>)\<rparr>)) s
| P1 \<Rightarrow> if BadgeCap P\<^sub>N \<in> (Caps (CSpace (TCB ((ProcRec s) P\<^sub>A)))) then s' = (`pc := P2) s else s' = s
| P2 \<Rightarrow> let ep = (EP(CSpace (TCB ((ProcRec s) P\<^sub>A)))) in
s' = (`pc := P3 \<circ> (\<lambda> s . s \<lparr>ProcRec := (ProcRec s)(P\<^sub>N := (ProcRec s P\<^sub>N)\<lparr>
TCB := (TCB (ProcRec s P\<^sub>N))\<lparr>CSpace := ((CSpace (TCB (ProcRec s P\<^sub>N)))\<lparr>EP := ep\<rparr>)\<rparr>\<rparr>)\<rparr>)) s
| _ \<Rightarrow> False"
(* -----------------------------------------------------P\<^sub>A Related Lemmas-------------------------------------------------*)
lemma Setup_P\<^sub>A: "ss_pc ss = L1 \<Longrightarrow> Setup ss s ss' s' \<Longrightarrow> P_attest_prop s'"
apply(simp add: Setup_def P_attest_prop_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
done
lemma Setup_P\<^sub>1: "P_distinct \<Longrightarrow> ss_pc ss = L2 \<Longrightarrow> P_attest_prop s \<Longrightarrow> Setup ss s ss' s' \<Longrightarrow> P_attest_prop s'"
by(simp add:P_distinct_def Setup_def P_attest_prop_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
lemma Setup_P\<^sub>N: "P_distinct \<Longrightarrow> ss_pc ss = L3 \<Longrightarrow> P_attest_prop s \<Longrightarrow> Setup ss s ss' s' \<Longrightarrow> P_attest_prop s'"
by(simp add:P_distinct_def Setup_def P_attest_prop_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
lemma Network_P\<^sub>A: "P_attest_prop s \<Longrightarrow> Network s m s' \<Longrightarrow> P_attest_prop s'"
apply(simp add: Setup_def Prover_def Network_def P_distinct_def Setup_P\<^sub>A P_attest_prop_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(cases "pc s", simp_all)
apply clarsimp
apply clarsimp
done
lemma Prover_P\<^sub>A: "P_attest_prop s \<Longrightarrow> Prover s s' \<Longrightarrow> P_attest_prop s'"
apply(simp add: Setup_def Prover_def P_distinct_def Setup_P\<^sub>A P_attest_prop_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(cases " pc s", simp_all)
apply (case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply(case_tac "isReq (EP (CSpace (TCB (ProcRec s P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) * fst (Mem (ProcRec s P\<^sub>A)) * snd (Mem (ProcRec s P\<^sub>A)) * P\<^sub>A *
(getMTime (ep s) * fst (Mem (ProcRec s P\<^sub>A)) * snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime (ep s) * fst (Mem (ProcRec s P\<^sub>A)) * snd (Mem (ProcRec s P\<^sub>A)) * P\<^sub>A))", simp_all)
done
(* -----------------------------------------------------P\<^sub>N Related Lemmas-------------------------------------------------*)
lemma Network_P\<^sub>N: "P_distinct \<Longrightarrow> pc s = P3 \<Longrightarrow> Net_prop P\<^sub>N s \<Longrightarrow> Prover s s' \<Longrightarrow> Net_prop P\<^sub>N s'"
apply(simp add:Prover_def P_distinct_def Setup_def P_attest_prop_def Net_prop_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
done
lemma Prover_P\<^sub>N: "P_distinct \<Longrightarrow> pc s = P4 \<Longrightarrow> Net_prop P\<^sub>N s \<Longrightarrow> Network s m s' \<Longrightarrow> Net_prop P\<^sub>N s'"
by(simp add:Network_def Prover_def P_distinct_def Setup_def P_attest_prop_def Net_prop_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
(* *********************************** Confidentiality Through AC ***************************************** *)
definition "KeyConf s \<equiv> \<forall>p. (p, Read, KEY)\<in>policy s \<longrightarrow> p = P\<^sub>A"
definition "MemConf s \<equiv> \<forall>p. (p, Read, MEM)\<in>policy s \<longrightarrow> p = P\<^sub>A"
definition "IrqAC s \<equiv> \<forall>p. (p, Control, IRQ)\<in>policy s \<longrightarrow> p = P\<^sub>A"
definition "TimeConf s \<equiv> \<forall>p. (p, Read, INITTIME)\<in>policy s \<longrightarrow> p = P\<^sub>A"
definition "NONCE_AC s \<equiv> \<forall>p. (p, Control, NONCE)\<in>policy s \<longrightarrow> p = P\<^sub>A"
definition "Super s \<equiv> \<forall>p. p\<in> Procs s \<longrightarrow> (P\<^sub>A, Control, P p)\<in>policy s"
definition "no_grant s \<equiv> \<forall> p p' .p \<in> Procs s \<and> p'\<in>Procs s \<and> p \<noteq> P\<^sub>A \<and> p'\<noteq>P\<^sub>A
\<and> (p, Control, P p')\<in>policy s \<longrightarrow> p = p'"
definition "Evolution s ss \<equiv> (ss_pc ss = L1 \<longrightarrow> Procs s = {}) \<and>( ss_pc ss = L2 \<longrightarrow> Procs s = {P\<^sub>A})
\<and> ( ss_pc ss = L3 \<longrightarrow> Procs s = {P\<^sub>A , P\<^sub>1}) \<and>
( ss_pc ss = L4 \<longrightarrow> Procs s = {P\<^sub>A , P\<^sub>1 , P\<^sub>N})"
definition "Evolution_Policy s ss \<equiv> (ss_pc ss = L1 \<longrightarrow> policy s = {}) \<and>(ss_pc ss = L2 \<longrightarrow> policy s = {(P\<^sub>A, Read, KEY),(P\<^sub>A, Read, INITTIME), (P\<^sub>A, Control,P P\<^sub>A),(P\<^sub>A, Control, IRQ), (P\<^sub>A, Control, NONCE), (P\<^sub>A, Control, MEM)})
\<and> (ss_pc ss = L3 \<longrightarrow> policy s = {(P\<^sub>1, Control,P P\<^sub>1) , (P\<^sub>A, Control,P P\<^sub>1)}) \<and>
(ss_pc ss = L4 \<longrightarrow> policy s = {(P\<^sub>N, Control,P P\<^sub>N) , (P\<^sub>A, Control,P P\<^sub>N)})"
definition "Reflect s \<equiv> \<forall>p. p\<in> Procs s \<longrightarrow> (p, Control,P p)\<in>policy s"
lemma Setup_no_Control : "Evolution_Policy s ss \<Longrightarrow> Reflect s \<Longrightarrow> Evolution s ss \<Longrightarrow> P_distinct \<Longrightarrow> no_grant s \<Longrightarrow> Setup ss s ss' s' \<Longrightarrow> no_grant s'"
apply(simp add: Setup_def del:simps)
apply (cases "ss_pc ss", simp_all)
apply safe
apply(simp add: Reflect_def Evolution_def Evolution_Policy_def no_grant_def)
apply(simp add: Reflect_def Evolution_def Evolution_Policy_def no_grant_def)
apply blast
by (simp add: Evolution_Policy_def Evolution_def P_distinct_def Reflect_def)
lemma Prover_no_Control:
assumes "Prover s s'"
and "no_grant s"
shows "no_grant s'"
using assms
apply(simp add: Setup_def Prover_def Evolution_def Evolution_Policy_def P_distinct_def Reflect_def P_attest_prop_def CreateProcess_def CSpace_def
CreateProcessTCB_def no_grant_def)
apply(cases "pc s", simp_all)
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply(case_tac " isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst
(Mem (ProcRec s P\<^sub>A)) *
snd
(Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime
(ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime
(ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A))", simp_all)
done
lemma Setup_Key_conf : "P_distinct \<Longrightarrow> KeyConf s \<Longrightarrow> Setup ss s ss' s' \<Longrightarrow> KeyConf s'"
apply(simp add: KeyConf_def Setup_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(cases "ss_pc ss", simp_all)
done
lemma Prover_Key_conf : "KeyConf s \<Longrightarrow> Prover s s' \<Longrightarrow> KeyConf s'"
apply(simp add: KeyConf_def Setup_def Prover_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def P_distinct_def)
apply(cases "pc s", simp_all)
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s
", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst
(Mem (ProcRec s P\<^sub>A)) *
snd
(Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime
(ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime
(ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A))", simp_all)
done
lemma Setup_Time_conf : "TimeConf s \<Longrightarrow> Setup ss s ss' s' \<Longrightarrow> TimeConf s'"
apply(simp add: TimeConf_def Setup_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(cases "ss_pc ss", simp_all)
done
lemma Prover_Time_conf : "TimeConf s \<Longrightarrow> Prover s s' \<Longrightarrow> TimeConf s'"
apply(simp add: TimeConf_def Prover_def Setup_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(cases "pc s", simp_all)
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst
(Mem (ProcRec s P\<^sub>A)) *
snd
(Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime
(ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime
(ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A))", simp_all)
done
lemma Superiority_Setup : "P_distinct \<Longrightarrow> Super s \<Longrightarrow> Setup ss s ss' s' \<Longrightarrow> Super s'"
apply(simp add: Super_def Setup_def CreateProcessCSpace_def P_distinct_def
CreateProcessTCB_def CreateProcess_def)
apply(intro allI impI)
apply(cases "ss_pc ss", simp_all)
apply auto[1]
apply blast
by auto
lemma Superiority_Prover : "P_distinct \<Longrightarrow> Super s \<Longrightarrow> Prover s s' \<Longrightarrow> Super s'"
apply(simp add: Super_def Setup_def Prover_def CreateProcessCSpace_def P_distinct_def
CreateProcessTCB_def CreateProcess_def Network_def)
apply(cases "pc s", simp_all)
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply(case_tac "\<forall>p. p \<in> Procs s \<longrightarrow>
(P\<^sub>A, Control, P p)
\<in> policy s ", simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A))", simp_all)
done
lemma Setup_Mem_conf : "MemConf s \<Longrightarrow> Setup ss s ss' s' \<Longrightarrow> MemConf s'"
apply(simp add: MemConf_def Setup_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(cases "ss_pc ss", simp_all)
done
lemma Prover_Mem_conf : "MemConf s \<Longrightarrow> Prover s s' \<Longrightarrow> MemConf s'"
apply(simp add: MemConf_def Setup_def Prover_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(case_tac "\<forall>p. (p, Read, MEM) \<in> policy s \<longrightarrow>
p = P\<^sub>A",simp_all)
apply(cases "pc s", simp_all)
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A))", simp_all)
done
lemma Prover_IRQ_AC : "IrqAC s \<Longrightarrow> Prover s s' \<Longrightarrow> IrqAC s'"
apply(simp add: Setup_def Prover_def CreateProcessCSpace_def P_distinct_def
CreateProcessTCB_def CreateProcess_def Network_def IrqAC_def)
apply(cases "pc s", simp_all)
apply(case_tac "\<forall>p. (p, Control, IRQ)
\<in> policy s \<longrightarrow>
p = P\<^sub>A", simp_all)
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A))", simp_all)
done
lemma Prover_Nonce_AC : "NONCE_AC s \<Longrightarrow> Prover s s' \<Longrightarrow> NONCE_AC s'"
apply(simp add: NONCE_AC_def Setup_def Prover_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(case_tac "\<forall>p. (p, Control, NONCE)
\<in> policy s \<longrightarrow>
p = P\<^sub>A ", simp_all)
apply(cases "pc s", simp_all)
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))",simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s",simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime (ep s) *
fst (Mem
(ProcRec s P\<^sub>A)) *
snd (Mem
(ProcRec s P\<^sub>A)) *
P\<^sub>A))",simp_all)
done
(* *********************************** Information Flow ***************************************** *)
definition "inv_rval s \<equiv> pc s \<in> {P10, P11} \<longrightarrow> isRes (rval s) "
definition "inv_req s \<equiv> pc s = P3 \<longrightarrow> isReq (ep s)"
lemma P\<^sub>A_to_P\<^sub>N:
assumes "Prover s s'"
and "inv_rval s"
shows "inv_rval s'"
using assms
apply(simp add: Prover_def inv_rval_def)
apply(cases "pc s", simp_all)
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A))", simp_all)
done
lemma P\<^sub>N_to_P\<^sub>A:
assumes "Prover s s'"
and "inv_req s"
shows "inv_req s'"
using assms
apply(simp add: P_distinct_def Setup_def Prover_def inv_req_def inv_rval_def CreateProcessCSpace_def
CreateProcessTCB_def CreateProcess_def)
apply(cases "pc s", simp_all)
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst
(Mem (ProcRec s P\<^sub>A)) *
snd
(Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime
(ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime
(ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A))", simp_all)
done
lemma Info_Flow_1: "inv_rval s \<Longrightarrow> pc s = P12 \<Longrightarrow> Badge ((ProcRec s) P\<^sub>A) \<Longrightarrow> Prover s s' \<Longrightarrow>
isRes (EP(CSpace (TCB ((ProcRec s') P\<^sub>A)))) "
apply(simp add: Prover_def inv_rval_def)
done
(* *********************************** Integrity ***************************************** *)
definition "Key_AC \<equiv> \<forall>p. (p, Read, KEY)\<in>Policy \<longrightarrow> p = P\<^sub>A"
definition "Mem_AC \<equiv> \<forall>p. (p, Control, MEM)\<in>Policy \<longrightarrow> p = P\<^sub>A"
lemma Res_Integrity: "pc s\<in> {P2, P3, P4, P5} \<Longrightarrow> Badge ((ProcRec s) P\<^sub>A) \<Longrightarrow> isReq m \<Longrightarrow> Network s m s' \<Longrightarrow> pc s' = P5 \<Longrightarrow>
(EP(CSpace (TCB ((ProcRec s') P\<^sub>A)))) = (EP(CSpace (TCB ((ProcRec s') P\<^sub>N)))) "
apply(simp add: Network_def)
apply(case_tac "pc s = P1", simp_all)
apply(case_tac "pc s = P2", simp_all)
by auto
lemma EP_Integrity_Prover: "pc s = P2 \<Longrightarrow> BadgeCap P\<^sub>A \<in> (Caps (CSpace (TCB ((ProcRec s) P\<^sub>N)))) \<Longrightarrow> Prover s s' \<Longrightarrow> pc s' = P4 \<Longrightarrow>
(EP(CSpace (TCB ((ProcRec s') P\<^sub>A)))) = (EP(CSpace (TCB ((ProcRec s') P\<^sub>N)))) "
apply(simp add: Prover_def)
apply (case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
done
lemma TInit_Integrity:
assumes "Prover s s'"
and "P_attest_prop s"
shows "TInit s' = TInit s"
using assms
apply(simp add: Prover_def Key_AC_def P_attest_prop_def)
apply(case_tac "pc s")
apply simp_all
apply(case_tac "BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))")
apply simp_all
apply(case_tac "getMTime (ep s) < TInit s")
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s",simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A))")
apply simp_all
done
lemma Key_Integrity:
assumes "Prover s s'"
and "P_attest_prop s"
shows "Key s' = Key s"
using assms
apply(simp add: Prover_def Key_AC_def P_attest_prop_def)
apply(cases "pc s", simp_all)
apply(case_tac " BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))")
apply simp_all
apply(case_tac " getMTime (ep s) < TInit s")
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A))")
apply simp_all
done
lemma Mem_Integrity_Prover:
assumes "Prover s s'"
and "P_attest_prop s"
shows "Mem ((ProcRec s') P\<^sub>A) = Mem ((ProcRec s) P\<^sub>A)"
using assms
apply(simp add: Prover_def Key_AC_def P_attest_prop_def)
apply(cases "pc s", simp_all)
apply(case_tac " BadgeCap P\<^sub>A \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>N)))", simp_all)
apply (case_tac "isReq
(EP (CSpace
(TCB (ProcRec s
P\<^sub>N))))", simp_all)
apply(case_tac "getMTime (ep s) < TInit s", simp_all)
apply(case_tac "getFMac (ep s) \<noteq>
(SOME n.
Suc 0 \<le> n \<and>
n \<le> getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A *
(getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A) *
(getMTime (ep s) *
fst (Mem (ProcRec s P\<^sub>A)) *
snd (Mem (ProcRec s P\<^sub>A)) *
P\<^sub>A))", simp_all)
done
lemma Mem_Integrity_Network:
assumes "Network s m s'"
shows "Mem ((ProcRec s') P\<^sub>A) = Mem ((ProcRec s) P\<^sub>A)"
using assms
apply(simp add: Setup_def P_attest_prop_def Prover_def Network_def Mem_AC_def)
apply(case_tac "pc s", simp_all)
apply(case_tac "BadgeCap P\<^sub>N \<in> Caps (CSpace (TCB (ProcRec s P\<^sub>A)))", simp_all)
done
end
|
State Before: α : Type u_1
β : Type ?u.36188
γ : Type ?u.36191
δ : Type ?u.36194
ι : Type ?u.36197
R : Type ?u.36200
R' : Type ?u.36203
m : MeasurableSpace α
μ μ₁ μ₂ : Measure α
s s₁ s₂ t t₁ t₂ : Set α
hs : s₁ ⊆ s₂
hsμ : ↑↑μ s₂ ≤ ↑↑μ s₁
ht : t₁ ⊆ t₂
htμ : ↑↑μ t₂ ≤ ↑↑μ t₁
⊢ ↑↑μ (s₁ ∪ t₁) = ↑↑μ (s₂ ∪ t₂) State After: α : Type u_1
β : Type ?u.36188
γ : Type ?u.36191
δ : Type ?u.36194
ι : Type ?u.36197
R : Type ?u.36200
R' : Type ?u.36203
m : MeasurableSpace α
μ μ₁ μ₂ : Measure α
s s₁ s₂ t t₁ t₂ : Set α
hs : s₁ ⊆ s₂
hsμ : ↑↑μ s₂ ≤ ↑↑μ s₁
ht : t₁ ⊆ t₂
htμ : ↑↑μ t₂ ≤ ↑↑μ t₁
⊢ ↑↑μ (⋃ (b : Bool), bif b then s₁ else t₁) = ↑↑μ (⋃ (b : Bool), bif b then s₂ else t₂) Tactic: rw [union_eq_iUnion, union_eq_iUnion] State Before: α : Type u_1
β : Type ?u.36188
γ : Type ?u.36191
δ : Type ?u.36194
ι : Type ?u.36197
R : Type ?u.36200
R' : Type ?u.36203
m : MeasurableSpace α
μ μ₁ μ₂ : Measure α
s s₁ s₂ t t₁ t₂ : Set α
hs : s₁ ⊆ s₂
hsμ : ↑↑μ s₂ ≤ ↑↑μ s₁
ht : t₁ ⊆ t₂
htμ : ↑↑μ t₂ ≤ ↑↑μ t₁
⊢ ↑↑μ (⋃ (b : Bool), bif b then s₁ else t₁) = ↑↑μ (⋃ (b : Bool), bif b then s₂ else t₂) State After: no goals Tactic: exact measure_iUnion_congr_of_subset (Bool.forall_bool.2 ⟨ht, hs⟩) (Bool.forall_bool.2 ⟨htμ, hsμ⟩) |
callable <- function(arity, f, formatter = NULL) {
if (is.null(formatter)) {
force(f)
formatter <- function() paste(format(f), collapse = "\n")
}
x <- list(arity = arity, f = f, formatter = formatter)
structure(x, class = c("lox_callable", class(x)))
}
lox_call <- function(callee, arguments, env) UseMethod("lox_call")
lox_call.lox_callable <- function(callee, arguments, env) {
do.call(callee$f, arguments, envir = env)
}
arity <- function(callee) UseMethod("arity")
arity.lox_callable <- function(callee) {
callee$arity
}
#' @export
format.lox_callable <- function(x, ...) {
x$formatter()
}
|
[GOAL]
C : Type u_1
D : Type u_2
E : Type u_3
inst✝⁶ : Category.{u_6, u_1} C
inst✝⁵ : Category.{u_5, u_2} D
inst✝⁴ : Category.{?u.20851, u_3} E
F : C ⥤ D
G : D ⥤ E
A : Type u_4
inst✝³ : AddMonoid A
inst✝² : HasShift C A
inst✝¹ : HasShift D A
inst✝ : HasShift E A
a b : A
e₁ : shiftFunctor C a ⋙ F ≅ F ⋙ shiftFunctor D a
e₂ : shiftFunctor C b ⋙ F ≅ F ⋙ shiftFunctor D b
X : C
⊢ NatTrans.app (isoAdd e₁ e₂).hom X =
F.map (NatTrans.app (shiftFunctorAdd C a b).hom X) ≫
NatTrans.app e₂.hom ((shiftFunctor C a).obj X) ≫
(shiftFunctor D b).map (NatTrans.app e₁.hom X) ≫ NatTrans.app (shiftFunctorAdd D a b).inv (F.obj X)
[PROOFSTEP]
simp only [isoAdd, isoAdd'_hom_app, shiftFunctorAdd'_eq_shiftFunctorAdd]
[GOAL]
C : Type u_1
D : Type u_2
E : Type u_3
inst✝⁶ : Category.{u_6, u_1} C
inst✝⁵ : Category.{u_5, u_2} D
inst✝⁴ : Category.{?u.24197, u_3} E
F : C ⥤ D
G : D ⥤ E
A : Type u_4
inst✝³ : AddMonoid A
inst✝² : HasShift C A
inst✝¹ : HasShift D A
inst✝ : HasShift E A
a b : A
e₁ : shiftFunctor C a ⋙ F ≅ F ⋙ shiftFunctor D a
e₂ : shiftFunctor C b ⋙ F ≅ F ⋙ shiftFunctor D b
X : C
⊢ NatTrans.app (isoAdd e₁ e₂).inv X =
NatTrans.app (shiftFunctorAdd D a b).hom (F.obj X) ≫
(shiftFunctor D b).map (NatTrans.app e₁.inv X) ≫
NatTrans.app e₂.inv ((shiftFunctor C a).obj X) ≫ F.map (NatTrans.app (shiftFunctorAdd C a b).inv X)
[PROOFSTEP]
simp only [isoAdd, isoAdd'_inv_app, shiftFunctorAdd'_eq_shiftFunctorAdd]
[GOAL]
C : Type u_1
D : Type u_2
E : Type u_3
inst✝⁷ : Category.{u_6, u_1} C
inst✝⁶ : Category.{u_5, u_2} D
inst✝⁵ : Category.{?u.38582, u_3} E
F : C ⥤ D
G : D ⥤ E
A : Type u_4
inst✝⁴ : AddMonoid A
inst✝³ : HasShift C A
inst✝² : HasShift D A
inst✝¹ : HasShift E A
inst✝ : CommShift F A
a b c : A
h : a + b = c
⊢ commShiftIso F c = CommShift.isoAdd' h (commShiftIso F a) (commShiftIso F b)
[PROOFSTEP]
subst h
[GOAL]
C : Type u_1
D : Type u_2
E : Type u_3
inst✝⁷ : Category.{u_6, u_1} C
inst✝⁶ : Category.{u_5, u_2} D
inst✝⁵ : Category.{?u.38582, u_3} E
F : C ⥤ D
G : D ⥤ E
A : Type u_4
inst✝⁴ : AddMonoid A
inst✝³ : HasShift C A
inst✝² : HasShift D A
inst✝¹ : HasShift E A
inst✝ : CommShift F A
a b : A
⊢ commShiftIso F (a + b) = CommShift.isoAdd' (_ : a + b = a + b) (commShiftIso F a) (commShiftIso F b)
[PROOFSTEP]
simp only [commShiftIso_add, CommShift.isoAdd]
|
module Main
import Data.Vect
data DataStore : Type where
MkData : (size : Nat) ->
(items : Vect size String) ->
DataStore
size : DataStore -> Nat
size (MkData size items) = size
items : (store : DataStore) -> Vect (size store) String
items (MkData size items) = items
addToStore : DataStore -> String -> DataStore
addToStore (MkData size items) newItem = MkData _ (addToData items)
where
addToData : Vect old String -> Vect (S old) String
addToData [] = [newItem]
addToData (item :: items2) = item :: addToData items2
data Command = Add String
| Get Integer
| Search String
| Size
| Quit
parseCommand : String -> String -> Maybe Command
parseCommand "add" str = Just (Add str)
parseCommand "get" val = case all isDigit (unpack val) of
False => Nothing
True => Just (Get (cast val))
parseCommand "search" pat = Just (Search pat)
parseCommand "size" "" = Just Size
parseCommand "quit" "" = Just Quit
parseCommand _ _ = Nothing
parse : (input : String) -> Maybe Command
parse input = case span (/= ' ') input of
(cmd, args) => parseCommand cmd (ltrim args)
getEntry : (pos : Integer) -> (store : DataStore) -> Maybe (String, DataStore)
getEntry pos store = let store_items = items store in
case integerToFin pos (size store) of
Nothing => Just ("Out of range\n", store)
Just id => Just (index id store_items ++ "\n", store)
findAllMatches : String -> Integer -> Vect n String -> List (Integer, String)
findAllMatches pat idx [] = []
findAllMatches pat idx (x :: xs) = if isInfixOf pat x then (idx, x) :: findAllMatches pat (idx + 1) xs
else findAllMatches pat (idx + 1) xs
search : String -> DataStore -> List (Integer, String)
search pat (MkData size items) = findAllMatches pat 0 items
processInput : DataStore -> String -> Maybe (String, DataStore)
processInput store inp
= case parse inp of
Nothing => Just ("Invalid command\n", store)
Just (Add item) => Just ("ID " ++ show (size store) ++ "\n", addToStore store item)
Just (Get pos) => getEntry pos store
Just (Search pat) => Just ("Found: " ++ show (search pat store) ++ "\n", store)
Just Size => Just ("Size of store: " ++ show (size store) ++ "\n", store)
Just Quit => Nothing
main : IO ()
main = replWith (MkData _ []) "Command: " processInput
|
The Rupa Lake Cooperative in Nepal. Credit: Bioversity International / B. Saugat. Sourced through Flickr.
Responding effectively to the impacts of climate change at national policy and local planning level requires robust and comprehensive information and a strong knowledge base. The potential for this information to provide crucial knowledge in the design and implementation of climate-resilient policies, plans and programmes is enormous.
In 2010, the Government of Nepal established the Nepal Climate Change Knowledge Management Center (NCCKMC) to facilitate the generation, management and dissemination of climate-related knowledge. The Alternative Energy Promotion Centre (AEPC) has established a climate carbon unit to manage knowledge of climate change adaptation and mitigation. The Ministry of Forest and Soil Conservation (MoFSC) has established Reduced Emission from Forest Degradation and Deforestation (REDD) Implementation Centre to manage knowledge related to mitigation and REDD. Similarly, the Ministry of Agriculture and Development (MoAD) has established Agriculture Information Management System (AIMS) to consolidate climate information and develop these into practical agro advisories.
Issues have, however, arisen within the climate change knowledge management community about the effective functioning of the knowledge management centres (see "Barriers" below).
This study set out to assess the supply side of knowledge management, with a specific focus on the way knowledge and information on climate change is made available, processed, packaged and made accessible to those involved in local government planning at sub-national (district) and national levels.
An overview of the methods used, key barriers and recommendations is provided below. See the full text for much more detail.
The study used a mix of methods to generate information.
It carried out a desk review to analyse the policy, programmes and projects on managing knowledge of climate change in Nepal. The desk review also analysed the institutional strategy and action plans, particularly of development partners, international and national non-government organisations (I/NGOs) and research institutions related to demand for, and generation and sharing of, knowledge of climate change.
Fieldwork was carried out in Rukum and Dang Districts in the mid-western region of Nepal. The aim of the fieldwork was to assess the demand for and supply of climate change knowledge at district and community level. The study interviewed 64 persons at district and national level to obtain an overview of, and suggestions on, climate change knowledge management at these levels. A Google-based online survey tool was used to map the perceptions of stakeholders, particularly to assess the effectiveness of the NCCKMC.
Finally, two district- and one national-level stakeholder workshops were organised to draw stakeholders’ feedback on ways to strengthen knowledge management work on climate change in Nepal.
The study found that the existing knowledge management system has not been able to effectively guide decision-making at policy, planning and implementation levels, due to its inability to process and package knowledge according to the demand and requirements of the various institutions in the country.
NCCKMC should capitalise on the favourable policy environment for climate change to improve climate change-related knowledge management practices. Nepal has significantly progressed in crafting national policies and a framework on climate change. Several national-level institutional mechanisms and financial flow systems provide mechanisms for partly strengthening the implementation of the climate change agenda in Nepal. Policy is, therefore, not a barrier to operationalising a climate change knowledge management system in Nepal. The country, however, lacks a clear vision, strategy and institutional commitment to knowledge generation and management. This is the reason the available knowledge has contributed little to operationalising a knowledge management policy and practices on climate change.
There is genuine and strong need for an overarching national strategy and roadmap on knowledge management to streamline ongoing initiatives and responses to climate change. There are opportunities for climate change knowledge management in Nepal. This and other research support the assertion that knowledge management could be streamlined and harmonised if government has clear national and local-level strategies on knowledge, and strong leadership and commitment. Policy, institutions and human resources are not an issue in climate change knowledge management in Nepal. Knowledge management centres have already been established in Nepal. What is needed are joint proactive actions and a harmonised approach to climate change knowledge management among different agencies and stakeholders which can feed into policy making, planning and budgeting.
Climate change knowledge management should be mainstreamed within the policy and institutional mandate and work. There is a need for state ownership and dedicated institutions to drive climate change knowledge management forward. A multi-stakeholder mechanism should be created for information and knowledge sharing and for learning at different levels, to allow innovations, ideas and learning to expand and flourish. It is also necessary to mainstream knowledge management within government and non-government institutions to create a sustainable mechanism for learning and sharing.
Research and academic institutions should take the lead role in knowledge generation. Unless existing research institutions take the lead in knowledge generation, there will continue to be a mismatch in demand for and supply of climate change knowledge. Since many grey areas and knowledge gaps have been identified, the government and development agencies should invest in research and knowledge management.
This study was undertaken as part of the Climate Proofing Growth and Development (CPGD) project, funded by UK aid from the Department for International Development (DFID). The programme has been branded as Action on Climate Today (ACT). The programme aims to improve resilience by directly incorporating climate change considerations into policy, planning and investment environments within each country. Practical Action is an implementing partner of the ACT programme in Nepal. |
State Before: ι : Type u_1
I : Box ι
inst✝ : Countable ι
⊢ MeasurableSet ↑I State After: ι : Type u_1
I : Box ι
inst✝ : Countable ι
⊢ MeasurableSet (pi univ fun i => Ioc (lower I i) (upper I i)) Tactic: rw [coe_eq_pi] State Before: ι : Type u_1
I : Box ι
inst✝ : Countable ι
⊢ MeasurableSet (pi univ fun i => Ioc (lower I i) (upper I i)) State After: no goals Tactic: exact MeasurableSet.univ_pi fun i => measurableSet_Ioc |
#!/usr/bin/env Rscript
#
# This file is part of the `OmnipathR` R package
#
# Copyright
# 2018-2021
# Saez Lab, Uniklinik RWTH Aachen, Heidelberg University
#
# File author(s): Alberto Valdeolivas
# Dénes Türei ([email protected])
# Attila Gábor
#
# Distributed under the MIT (Expat) License.
# See accompanying file `LICENSE` or find a copy at
# https://directory.fsf.org/wiki/License:Expat
#
# Website: https://saezlab.github.io/omnipathr
# Git repo: https://github.com/saezlab/OmnipathR
#
#
# Bio Model Analyzer export: converts path to a BMA motif
# Author: Ben Hall
#
#' Ends a function where something has gone wrong, printing information about the error
#' @param reason a string with information about why the error occurred
#' @noRd
wrongInput <- function(reason){
cat(reason)
return(NULL)
}
#' Returns a formatted string describing a BMA interaction between variables
#' @param a unique id, variable ids describing the source and target, and the edge type
#' @noRd
bmaRelationship <- function(id,from,to,type){
rel <- sprintf('{"Id":%d,"FromVariable":%d,"ToVariable":%d,"Type":"%s"}', id, from, to, type)
return(rel)
}
#' Returns a formatted string describing the model parameters of a BMA variable
#' @param a unique id, human readable name (e.g. JAG1), granularity (number of levels) and the update formula
#' @noRd
bmaVariableModel <- function(id,name,granularity,formula=""){
var <- sprintf('{"Name":"%s","Id":%d,"RangeFrom":0,"RangeTo":%d,"Formula":"%s"}', name, id, granularity, formula)
return(var)
}
#' Returns a formatted string describing the layout parameters of a BMA variable
#' @param a unique id, human readable name (e.g. JAG1), x/y canvas coordinates and an optional description
#' @noRd
bmaVariableLayout <- function(id,name,x,y,description="") {
var <- sprintf('{"Id":%d,"Name":"%s","Type":"Constant","ContainerId":0,"PositionX":%f,"PositionY":%f,"CellX":0,"CellY":0,"Angle":0,"Description":"%s"}',id,name,x,y,description)
return(var)
}
#' Returns a string containing the target function of a variable
#'
#' Returns either the empty string (interpreted as the default target function), or granularity minus the activity of the upstream inhibitor
#' @param a bool stating whether the interaction is an inhibition, the granularity of
#' variables (number of levels), and the source of the interaction
#' @noRd
bmaFormula <- function(inhibitor,granularity,upstream){
f <- ifelse(inhibitor,sprintf("%d-var(%s)",granularity,upstream),"")
return(f)
}
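# e.g. bmaFormula(TRUE, 2, "JAG1") evaluates to "2-var(JAG1)", while any
# non-inhibiting edge yields "" and BMA falls back to its default target function.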
#' Returns a string describing the evidence behind an interaction
#'
#' Contains all interaction types with a simple descriptor and PMIDs
#' @param takes an edge from omnipath "e", and optionally the name of the upstream variable ("incoming")
#' @noRd
bmaDescription <- function(e,incoming=""){
sign <- ifelse(e$is_stimulation == 1,
ifelse(e$is_inhibition == 1,"Mixed","Activator"),
ifelse(e$is_inhibition == 1,"Inhibitor","Unknown"))
refs <- paste(unlist(e$references), sep = '', collapse = ',')
incoming <- ifelse(incoming == "","",paste("",incoming,"",sep=" "))
return(sprintf("%s%s-PMID:%s.",incoming,sign,refs))
}
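# e.g. for a stimulating edge e with references 123 and 456,
# bmaDescription(e, "JAG1") yields " JAG1 Activator-PMID:123,456."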
#' Prints a BMA motif to the screen from a sequence of edges, which can be copy/pasted into the BMA canvas
#'
#' Intended to parallel print_path_es
#' @param takes a sequence of edges, a graph, and a granularity
#' @export
bmaMotif_es <- function(edgeSeq,G,granularity=2){
if(length(edgeSeq) == 0) {
        return(wrongInput("\nempty path\n"))
}
#Process-
## Create list of variables
## Create layout of variables
## Create list of links
## Print format as follows (x is a string, xN is an integer, xF is a float)
### {"Model": {"Name": "Omnipath motif",
### "Variables": [{"Name":"x","Id":xN,"RangeFrom"=0,"RangeTo"=granularity},...]
### "Relationships": [{"Id":xN,"FromVariable":xN,"ToVariable":xN,"Type":"Activator"},...]
### }
### "Layout": {"Variables": [{"Id":xN,"Name":"x","Type":"Constant","ContainerId":0,"PositionX":xF,"PositionY":xF,"CellX":0,"CellY":0,"Angle":0,"Description":""},...]
### "Containers":[]
### }
### }
#Code for identifying sign
#signs <- ifelse(edgeSeq$is_stimulation == 1,
# ifelse(edgeSeq$is_inhibition == 1,"(+/-)","( + )"),
# ifelse(edgeSeq$is_inhibition == 1,"( - )","( ? )"))
#interaction <- paste0(" == ", signs," == >")
#relationships <- ifelse(edgeSeq$is_stimulation == 1,
# ifelse(edgeSeq$is_inhibition == 1,"Activator","Activator"),
# ifelse(edgeSeq$is_inhibition == 1,"Inhibitor",return(wrongInput("\nUnsigned input graph\n")))
sources <- tail_of(G, edgeSeq)$name
variableNames <- c(sources,head_of(G, edgeSeq)$name[length(edgeSeq)])
varNum <- length(variableNames)
positions <- vector('list',varNum)
variables <- vector('list',varNum)
relationships <- vector('list',varNum-1)
formula = ""
description = ""
x=125
y=140
for (i in seq_along(variableNames))
{
v <- bmaVariableModel(i,variableNames[i],granularity,formula)
p <- bmaVariableLayout(i,variableNames[i],x,y,description)
if (i < varNum){
#Simplified sign- if inhibition, inhibitor, else (activator/mixed/unknown) activation
r <- bmaRelationship(i+varNum,i,i+1,ifelse(edgeSeq[i]$is_inhibition == 1,"Inhibitor","Activator"))
relationships[[i]] <- r
formula <- bmaFormula((edgeSeq[i]$is_inhibition == 1),granularity,variableNames[i])
description <- bmaDescription(edgeSeq[i])
}
positions[[i]] <- p
variables[[i]] <- v
x <- x + 86.6025404
ymod <- ifelse(i %% 2 == 0, 50, -50)
y <- y + ymod
}
result <- sprintf('{"Model": {"Name": "Omnipath motif","Variables":[%s],"Relationships":[%s]},"Layout":{"Variables":[%s],"Containers":[]}}\n',paste(variables, sep = '', collapse = ','),paste(relationships, sep = '', collapse = ','),paste(positions, sep = '', collapse = ','))
cat(result)
}
#' Prints a BMA motif to the screen from a sequence of nodes, which can be copy/pasted into the BMA canvas
#'
#' Intended to parallel print_path_vs
#' @param takes a sequence of nodes and a graph
#' @export
bmaMotif_vs <- function(nodeSeq,G){
if(length(nodeSeq) == 0){
print("empty path")
return(invisible(NULL))
}
nodeSeq_names <- unique_nodeSeq(nodeSeq)
for(i in seq(nodeSeq_names)){
print(paste0("pathway ", i, ": ",
paste(nodeSeq_names[[i]],collapse = " -> ")))
edgeSet <- c()
for(j in 2:length(nodeSeq_names[[i]])){
edgeSet <- c(edgeSet, E(G)[nodeSeq_names[[i]][[j-1]] %->%
nodeSeq_names[[i]][[j]]])
}
bmaMotif_es(E(G)[edgeSet],G)
}
} |
Require Import Coq.Lists.List.
Import ListNotations.
Require Import Tactics.
Require Import Axioms.
Require Import Sigma.
Require Import Equality.
Require Import Sequence.
Require Import Relation.
Require Import Ordinal.
Require Import Syntax.
Require Import SimpSub.
Require Import Dynamic.
Require Import Ofe.
Require Import Uniform.
Require Import Intensional.
Require Import Candidate.
Require Import System.
Require Import Semantics.
Require Import SemanticsKnot.
Require Import Judgement.
Require Import Hygiene.
Require Import ProperClosed.
Require Import ProperFun.
Require Import Shut.
Require Import SemanticsProperty.
Require Import SemanticsEqtype.
Require Import SemanticsSubtype.
Require Import Equivalence.
Require Import ContextHygiene.
Require Import Truncate.
Require Import ProperDownward.
Require Import Subsumption.
Local Ltac prove_hygiene :=
repeat (apply hygiene_auto; cbn; repeat2 split; auto);
eauto using hygiene_weaken, clo_min, hygiene_shift', hygiene_subst1, subst_closub;
try (apply hygiene_var; cbn; auto; done).
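(* [prove_hygiene] discharges the closedness/hygiene side conditions that
   recur throughout the lemmas below, chaining the standard hygiene lemmas
   with [auto]. *)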
Lemma sound_subtype_formation :
forall G a a' b b',
pseq G (deqtype a a')
-> pseq G (deqtype b b')
-> pseq G (deqtype (subtype a b) (subtype a' b')).
Proof.
intros G a b c d.
revert G.
refine (seq_pseq 0 2 [] _ [] _ _ _); cbn.
intros G Hseqab Hseqcd.
rewrite -> seq_eqtype in Hseqab, Hseqcd |- *.
intros i s s' Hs.
so (Hseqab _#3 Hs) as (A & Hal & Har & Hbl & Hbr).
so (Hseqcd _#3 Hs) as (C & Hcl & Hcr & Hdl & Hdr).
exists (iusubtype stop i A C).
simpsub.
do2 3 split;
apply interp_eval_refl;
apply interp_subtype; auto.
Qed.
Lemma sound_subtype_formation_univ :
forall G lv a a' b b',
pseq G (deq a a' (univ lv))
-> pseq G (deq b b' (univ lv))
-> pseq G (deq (subtype a b) (subtype a' b') (univ lv)).
Proof.
intros G lv a b c d.
revert G.
refine (seq_pseq 0 2 [] _ [] _ _ _); cbn.
intros G Hseqab Hseqcd.
rewrite -> seq_univ in Hseqab, Hseqcd |- *.
intros i s s' Hs.
so (Hseqab _#3 Hs) as (pg & A & Hlvl & Hlvr & Hal & Har & Hbl & Hbr).
so (Hseqcd _#3 Hs) as (pg' & C & Hlvl' & _ & Hcl & Hcr & Hdl & Hdr).
so (pginterp_fun _#3 Hlvl Hlvl'); subst pg'.
exists pg, (iusubtype stop i A C).
simpsub.
do2 5 split; auto;
apply interp_eval_refl;
apply interp_subtype; auto.
Qed.
Lemma sound_subtype_intro :
forall G a b,
pseq G (deqtype a a)
-> pseq G (deqtype b b)
-> pseq (hyp_tm a :: G) (deq (var 0) (var 0) (subst sh1 b))
-> pseq G (deq triv triv (subtype a b)).
Proof.
intros G a b.
revert G.
refine (seq_pseq 0 3 [] _ [] _ [_] _ _ _); cbn.
intros G Hseqa Hseqb Hseqincl.
rewrite -> seq_eqtype in Hseqa, Hseqb.
rewrite -> seq_deq in Hseqincl |- *.
intros i s s' Hs.
so (Hseqa _#3 Hs) as (A & Hal & Har & _).
so (Hseqb _#3 Hs) as (B & Hbl & Hbr & _).
exists (iusubtype stop i A B).
simpsub.
do2 2 split.
{
apply interp_eval_refl.
apply interp_subtype; auto.
}
{
apply interp_eval_refl.
apply interp_subtype; auto.
}
cut (rel (den (iusubtype stop i A B)) i triv triv).
{
intro H; auto.
}
cbn.
do2 5 split; auto using star_refl; try prove_hygiene.
intros j m p Hj Hmp.
assert (pwctx j (dot m s) (dot p s') (hyp_tm a :: G)) as Hs'.
{
apply pwctx_cons_tm_seq; eauto using pwctx_downward.
{
apply (seqhyp_tm _#5 (iutruncate (S j) A)).
{
eapply basic_downward; eauto.
}
{
eapply basic_downward; eauto.
}
{
split; auto.
}
}
{
intros k u u' Hu.
so (Hseqa _#3 Hu) as (R & Hl & Hr & _).
eauto.
}
}
so (Hseqincl _#3 Hs') as (R & Hbl' & _ & Hrel & _).
simpsubin Hbl'.
so (basic_fun _#7 (basic_downward _#7 Hj Hbl) Hbl'); subst R.
simpsubin Hrel.
destruct Hrel; auto.
Qed.
Lemma sound_subtype_elim :
forall G a b m n,
pseq G (deq triv triv (subtype a b))
-> pseq G (deq m n a)
-> pseq G (deq m n b).
Proof.
intros G a b m n.
revert G.
refine (seq_pseq 0 2 [] _ [] _ _ _); cbn.
intros G Hseqsub Hseqmn.
rewrite -> seq_deq in Hseqsub, Hseqmn |- *.
intros i s s' Hs.
so (Hseqsub _#3 Hs) as (R & Hsubl & Hsubr & Hinh & _).
simpsubin Hsubl.
simpsubin Hsubr.
invert (basic_value_inv _#6 value_subtype Hsubl).
intros A B Hal Hbl Heq1.
invert (basic_value_inv _#6 value_subtype Hsubr).
intros A' B' Har Hbr Heq2.
so (eqtrans Heq1 (eqsymm Heq2)) as Heq.
clear Heq2.
subst R.
so (iusubtype_inj _#6 Heq) as (<- & <-); clear Heq.
cbn in Hinh.
decompose Hinh.
intros Hsub _ _ _ _ _.
unfold subtype_property in Hsub.
assert (forall j p q,
rel (den A) j p q
-> rel (den B) j p q) as Hsub'.
{
intros j p q Hpq.
so (basic_member_index _#9 Hal Hpq) as Hj.
apply Hsub; auto.
}
so (Hseqmn _#3 Hs) as (A' & Hal' & _ & Hm & Hn & Hmn).
so (basic_fun _#7 Hal Hal'); subst A'; clear Hal'.
exists B.
do2 4 split; auto.
Qed.
Lemma sound_subtype_eta :
forall G a b p,
pseq G (deq p p (subtype a b))
-> pseq G (deq p triv (subtype a b)).
Proof.
intros G a b p.
revert G.
refine (seq_pseq 1 [] p 1 [] _ _ _); cbn.
intros G Hclp Hseq.
rewrite -> seq_deq in Hseq |- *.
intros i s s' Hs.
so (pwctx_impl_closub _#4 Hs) as (Hcls & Hcls').
so (Hseq _#3 Hs) as (R & Hequall & Hequalr & Htrue & _).
exists R.
do2 4 split; auto.
{
simpsub.
simpsubin Hequall.
invert (basic_value_inv _#6 value_subtype Hequall).
intros Q Q' Hl Hr <-.
do2 5 split; auto using star_refl; try prove_hygiene.
destruct Htrue as (H & _).
exact H.
}
{
simpsub.
simpsubin Hequall.
invert (basic_value_inv _#6 value_subtype Hequall).
intros Q Q' Hl Hr <-.
destruct Htrue as (H & _ & _ & _ & Hsteps & _).
do2 5 split; auto using star_refl; try prove_hygiene.
}
Qed.
Lemma sound_subtype_eta_hyp :
forall G1 G2 a a' m n b,
pseq (substctx (dot triv id) G2 ++ G1) (deq m n (subst (under (length G2) (dot triv id)) b))
-> pseq (G2 ++ hyp_tm (subtype a a') :: G1) (deq (subst (under (length G2) sh1) m) (subst (under (length G2) sh1) n) b).
Proof.
intros G1 G2 a a' m n b Hseq.
eapply sound_property_eta_hyp; eauto.
intros s pg z i R H.
simpsubin H.
invert (basic_value_inv _#6 value_subtype H).
intros A A' _ _ <-.
do 3 eexists.
reflexivity.
Qed.
Lemma sound_subtype_convert_hyp :
forall G1 G2 a b J,
pseq (cons (hyp_tm (eqtype a a)) G1) (dsubtype (subst sh1 a) (subst sh1 b))
-> pseq (cons (hyp_tm (eqtype a a)) G1) (dsubtype (subst sh1 b) (subst sh1 a))
-> pseq (G2 ++ hyp_tm b :: G1) J
-> pseq (G2 ++ hyp_tm a :: G1) J.
Proof.
intros G1 G2 a b J.
revert G1.
refine (seq_pseq_hyp 0 3 [] [_] _ [] [_] _ _ [_] _ _ [_] _ _); cbn.
intros G1 Hsubab Hsubba Hseq HclJ.
so (conj Hsubab Hsubba) as Hexteq.
simpsubin Hexteq.
rewrite -> seq_exteqtype in Hexteq.
clear Hsubab Hsubba.
replace J with (substj (under (length G2) id) J) in Hseq by (simpsub; reflexivity).
replace G2 with (substctx id G2) in Hseq by (simpsub; reflexivity).
refine (subsume_seq _ _ (under (length G2) id) _ _ HclJ _ Hseq).
rewrite -> length_substctx.
apply subsume_under.
do2 2 split.
{
intros j.
split.
{
intro Hj.
cbn.
simpsub.
apply hygiene_var; auto.
}
{
intro Hj.
cbn.
simpsubin Hj.
invert Hj.
auto.
}
}
{
intros j.
split.
{
intro Hj.
cbn.
simpsub.
apply hygiene_var; auto.
}
{
intro Hj.
cbn.
simpsubin Hj.
invert Hj.
auto.
}
}
intros i ss ss' Hss.
invertc Hss.
intros m n s s' Hs Hmn Hleft Hright <- <-.
simpsubin Hmn.
invertc Hmn.
intros A Hal Har Hmn.
assert (forall j u,
j <= i
-> seqctx j s u G1
-> pwctx j (dot triv s) (dot triv u) (cons (hyp_tm (eqtype a a)) G1)) as Hsl.
{
intros j u Hj Hu.
apply pwctx_cons_tm; auto.
{
eapply (seqctx_pwctx_left _ _ s'); auto.
eapply pwctx_downward; eauto.
}
{
simpsub.
exploit (Hleft j false u) as Hh; auto using smaller_le.
cbn in Hh.
invertc Hh.
intros A' Har' Hau.
so (basic_fun _#7 (basic_downward _#7 Hj Har) Har'); subst A'; clear Har'.
apply (seqhyp_tm _#5 (iueqtype stop j (iutruncate (S j) A) (iutruncate (S j) A))).
{
apply interp_eval_refl.
apply interp_eqtype; eauto using basic_downward.
}
{
apply interp_eval_refl.
apply interp_eqtype; eauto using basic_downward.
}
{
cbn.
do2 4 split; auto using star_refl; try prove_hygiene.
reflexivity.
}
}
{
intros k v Hk Hsv.
assert (k <= i) as Hki by omega.
exploit (Hleft k false u) as H; eauto using smaller_le.
{
cbn.
eapply (seqctx_downward j); eauto.
}
cbn in H.
invertc H.
intros A' Har' Hau.
so (basic_fun _#7 (basic_downward _#7 Hki Har) Har'); subst A'; clear Har'.
exploit (Hleft k false v) as H; eauto using smaller_le.
{
cbn.
apply pwctx_impl_seqctx; eauto.
}
cbn in H.
invertc H.
intros A' Har' Hav.
so (basic_fun _#7 (basic_downward _#7 Hki Har) Har'); subst A'; clear Har'.
apply (relhyp_tm _#4 (iueqtype stop k (iutruncate (S k) A) (iutruncate (S k) A))).
{
simpsub.
apply interp_eval_refl.
apply interp_eqtype; auto.
}
{
simpsub.
apply interp_eval_refl.
apply interp_eqtype; auto.
}
}
{
intros k v Hk Hvu.
assert (k <= i) as Hki by omega.
exploit (Hright k false v) as H; eauto using smaller_le.
{
cbn.
exact (seqctx_zigzag _#6 (pwctx_impl_seqctx _#4 Hvu) (seqctx_downward _#5 Hk Hu) (seqctx_downward _#5 Hki (pwctx_impl_seqctx _#4 Hs))).
}
cbn in H.
invertc H.
intros A' Hal' Hau.
so (basic_fun _#7 (basic_downward _#7 Hki Hal) Hal'); subst A'.
apply (relhyp_tm _#4 (iueqtype stop k (iutruncate (S k) A) (iutruncate (S k) A))).
{
simpsub.
apply interp_eval_refl.
apply interp_eqtype; auto.
}
{
simpsub.
apply interp_eval_refl.
apply interp_eqtype; auto.
}
}
}
assert (forall j u,
j <= i
-> seqctx j u s' G1
-> pwctx j (dot triv u) (dot triv s') (cons (hyp_tm (eqtype a a)) G1)) as Hsr.
{
intros j u Hj Hu.
apply pwctx_cons_tm; auto.
{
eapply (seqctx_pwctx_right _ _ s'); auto.
eapply pwctx_downward; eauto.
}
{
simpsub.
exploit (Hright j false u) as Hh; auto using smaller_le.
cbn in Hh.
invertc Hh.
intros A' Hal' Hau.
so (basic_fun _#7 (basic_downward _#7 Hj Hal) Hal'); subst A'; clear Hal'.
apply (seqhyp_tm _#5 (iueqtype stop j (iutruncate (S j) A) (iutruncate (S j) A))).
{
apply interp_eval_refl.
apply interp_eqtype; eauto using basic_downward.
}
{
apply interp_eval_refl.
apply interp_eqtype; eauto using basic_downward.
}
{
cbn.
do2 4 split; auto using star_refl; try prove_hygiene.
reflexivity.
}
}
{
intros k v Hk Huv.
assert (k <= i) as Hki by omega.
exploit (Hleft k false v) as H; eauto using smaller_le.
{
cbn.
exact (seqctx_zigzag _#6 (seqctx_downward _#5 Hki (pwctx_impl_seqctx _#4 Hs)) (seqctx_downward _#5 Hk Hu) (pwctx_impl_seqctx _#4 Huv)).
}
cbn in H.
invertc H.
intros A' Har' Hau.
so (basic_fun _#7 (basic_downward _#7 Hki Har) Har'); subst A'.
apply (relhyp_tm _#4 (iueqtype stop k (iutruncate (S k) A) (iutruncate (S k) A))).
{
simpsub.
apply interp_eval_refl.
apply interp_eqtype; auto.
}
{
simpsub.
apply interp_eval_refl.
apply interp_eqtype; auto.
}
}
{
intros k v Hk Hsv.
assert (k <= i) as Hki by omega.
exploit (Hright k false u) as H; eauto using smaller_le.
{
cbn.
eapply (seqctx_downward j); eauto.
}
cbn in H.
invertc H.
intros A' Hal' Hau.
so (basic_fun _#7 (basic_downward _#7 Hki Hal) Hal'); subst A'; clear Hal'.
exploit (Hright k false v) as H; eauto using smaller_le.
{
cbn.
apply pwctx_impl_seqctx; eauto.
}
cbn in H.
invertc H.
intros A' Hal' Hav.
so (basic_fun _#7 (basic_downward _#7 Hki Hal) Hal'); subst A'; clear Hal'.
apply (relhyp_tm _#4 (iueqtype stop k (iutruncate (S k) A) (iutruncate (S k) A))).
{
simpsub.
apply interp_eval_refl.
apply interp_eqtype; auto.
}
{
simpsub.
apply interp_eval_refl.
apply interp_eqtype; auto.
}
}
}
so (Hsl i s' (le_refl _) (pwctx_impl_seqctx _#4 Hs)) as Hs'.
so (Hexteq _#3 Hs') as (A' & B & Hal' & _ & Hbl & Hbr & Heq).
simpsubin Hal'.
simpsubin Hbl.
simpsubin Hbr.
so (basic_fun _#7 Hal Hal'); subst A'.
clear Hal'.
do2 4 split.
{
simpsub.
apply pwctx_cons_tm; eauto.
{
eapply seqhyp_tm; eauto.
rewrite <- Heq; auto.
}
{
intros j u Hj Hsu.
exploit (Hleft j false u) as H.
{
apply smaller_le; auto.
}
{
cbn.
apply pwctx_impl_seqctx; auto.
}
{
cbn in H.
invertc H.
intros A' Har' Hau.
so (basic_fun _#7 (basic_downward _#7 Hj Har) Har'); subst A'.
apply (relhyp_tm _#4 (iutruncate (S j) B)); auto.
{
eapply basic_downward; eauto.
}
{
so (Hexteq _#3 (Hsl _ _ Hj (pwctx_impl_seqctx _#4 Hsu))) as (_ & B' & _ & _ & Hbl' & Hbu & _).
simpsubin Hbl'.
simpsubin Hbu.
so (basic_fun _#7 (basic_downward _#7 Hj Hbl) Hbl'); subst B'; clear Hbl'.
auto.
}
}
}
{
intros j u Hj Hus.
exploit (Hright j false u) as H.
{
apply smaller_le; auto.
}
{
cbn.
apply pwctx_impl_seqctx; auto.
}
{
cbn in H.
invertc H.
intros A' Hal' Hau.
so (basic_fun _#7 (basic_downward _#7 Hj Hal) Hal'); subst A'.
apply (relhyp_tm _#4 (iutruncate (S j) B)); auto.
{
eapply basic_downward; eauto.
}
{
so (Hexteq _#3 (Hsr _ _ Hj (pwctx_impl_seqctx _#4 Hus))) as (_ & B' & _ & _ & Hbu & Hbr' & _).
simpsubin Hbr'.
simpsubin Hbu.
so (basic_fun _#7 (basic_downward _#7 Hj Hbr) Hbr'); subst B'; clear Hbr'.
auto.
}
}
}
}
{
simpsub.
apply equivsub_refl.
}
{
simpsub.
apply equivsub_refl.
}
{
intros j d uu Hsmall Huu.
simpsubin Huu.
simpsub.
rewrite -> qpromote_cons in Huu |- *.
rewrite -> qpromote_hyp_tm in Huu |- *.
invertc Huu.
intros p u Hu Hmp <-.
so (smaller_impl_le _#3 Hsmall) as Hj.
apply seqctx_cons; auto.
simpsub.
simpsubin Hmp.
invertc Hmp.
intros B' Hbl' Hbu Hmp.
so (basic_fun _#7 (basic_downward _#7 Hj Hbl) Hbl'); subst B'.
apply (seqhyp_tm _#5 (iutruncate (S j) A)).
{
eapply basic_downward; eauto.
}
{
so (seqctx_pwctx_demote_left _#7 Hsmall Hs Hu) as Hu'.
so (Hexteq _#3 (Hsl j u Hj (pwctx_impl_seqctx _#4 Hu'))) as (A' & _ & Hal' & Hau & _).
simpsubin Hal'.
simpsubin Hau.
so (basic_fun _#7 (basic_downward _#7 Hj Hal) Hal'); subst A'.
exact Hau.
}
{
rewrite -> den_iutruncate in Hmp |- *.
destruct Hmp as (_ & Hmp).
split; auto.
rewrite -> Heq; auto.
}
}
{
intros j d uu Hsmall Huu.
simpsubin Huu.
simpsub.
rewrite -> qpromote_cons in Huu |- *.
rewrite -> qpromote_hyp_tm in Huu |- *.
invertc Huu.
intros p u Hu Hmp <-.
so (smaller_impl_le _#3 Hsmall) as Hj.
apply seqctx_cons; auto.
simpsub.
simpsubin Hmp.
invertc Hmp.
intros B' Hbu Hbr' Hmp.
so (basic_fun _#7 (basic_downward _#7 Hj Hbr) Hbr'); subst B'.
apply (seqhyp_tm _#5 (iutruncate (S j) A)).
{
so (seqctx_pwctx_demote_right _#7 Hsmall Hs Hu) as Hu'.
so (Hexteq _#3 (Hsr j u Hj (pwctx_impl_seqctx _#4 Hu'))) as (A' & _ & Hau & Har' & _).
simpsubin Har'.
simpsubin Hau.
so (basic_fun _#7 (basic_downward _#7 Hj Har) Har'); subst A'.
exact Hau.
}
{
eapply basic_downward; eauto.
}
{
rewrite -> den_iutruncate in Hmp |- *.
destruct Hmp as (_ & Hmp).
split; auto.
rewrite -> Heq; auto.
}
}
Qed.
Lemma sound_subtype_formation_invert1 :
forall G a a' b b',
pseq G (deqtype (subtype a b) (subtype a' b'))
-> pseq G (deqtype a a').
Proof.
intros G a a' b b'.
revert G.
refine (seq_pseq 0 1 [] _ _ _); cbn.
intros G Hseq.
rewrite -> seq_eqtype in Hseq |- *.
intros i s s' Hs.
so (Hseq _#3 Hs) as (R & Hl & Hr & Hl' & Hr').
simpsubin Hl.
simpsubin Hl'.
simpsubin Hr.
simpsubin Hr'.
invert (basic_value_inv _#6 value_subtype Hl).
intros A B Hal Hbl Heql.
invert (basic_value_inv _#6 value_subtype Hr).
intros A' B' Har Hbr Heqr.
so (iusubtype_inj _#6 (eqtrans Heql (eqsymm Heqr))) as (<- & <-).
invert (basic_value_inv _#6 value_subtype Hl').
intros A' B' Hal' Hbl' Heql'.
so (iusubtype_inj _#6 (eqtrans Heql (eqsymm Heql'))) as (<- & <-).
invert (basic_value_inv _#6 value_subtype Hr').
intros A' B' Har' Hbr' Heqr'.
so (iusubtype_inj _#6 (eqtrans Heql (eqsymm Heqr'))) as (<- & <-).
exists A.
auto.
Qed.
Lemma sound_subtype_formation_invert2 :
forall G a a' b b',
pseq G (deqtype (subtype a b) (subtype a' b'))
-> pseq G (deqtype b b').
Proof.
intros G a a' b b'.
revert G.
refine (seq_pseq 0 1 [] _ _ _); cbn.
intros G Hseq.
rewrite -> seq_eqtype in Hseq |- *.
intros i s s' Hs.
so (Hseq _#3 Hs) as (R & Hl & Hr & Hl' & Hr').
simpsubin Hl.
simpsubin Hl'.
simpsubin Hr.
simpsubin Hr'.
invert (basic_value_inv _#6 value_subtype Hl).
intros A B Hal Hbl Heql.
invert (basic_value_inv _#6 value_subtype Hr).
intros A' B' Har Hbr Heqr.
so (iusubtype_inj _#6 (eqtrans Heql (eqsymm Heqr))) as (<- & <-).
invert (basic_value_inv _#6 value_subtype Hl').
intros A' B' Hal' Hbl' Heql'.
so (iusubtype_inj _#6 (eqtrans Heql (eqsymm Heql'))) as (<- & <-).
invert (basic_value_inv _#6 value_subtype Hr').
intros A' B' Har' Hbr' Heqr'.
so (iusubtype_inj _#6 (eqtrans Heql (eqsymm Heqr'))) as (<- & <-).
exists B.
auto.
Qed.
|
-- Copyright 2022-2023 VMware, Inc.
-- SPDX-License-Identifier: BSD-2-Clause
import tactic.omega.main
import tactic.linarith
import tactic.split_ifs
/-!
# Streams
Definition of streams and some basic properties. We don't use mathlib streams
because we hardly need any definitions from it.
A stream over a type `a` is a `ℕ → a`.
Defines agree_upto n s s', usually written with the notation s ==n== s', which
says that s and s' agree on all indices in 0..n (inclusive).
-/
universes u v.
/-- A stream is an infinite sequence of elements from `a`.
The indices usually use the metavariable `t`, meant to represent (a discrete
notion of) time.
-/
def stream (a: Type u) : Type u := ℕ → a.
variable {a : Type u}.
/-- s₁ ==n== s₂ says that streams s₁ and s₂ are equal up to (and including) time
`n`. -/
def agree_upto (n: ℕ) (s₁ s₂: stream a) := ∀ t ≤ n, s₁ t = s₂ t.
notation s ` ==` n `== ` s':35 := agree_upto n s s'.
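-- For example, `s ==0== s'` only constrains the streams at time 0 (see
-- `agree_upto_0` below), and equality of streams is equivalent to agreement
-- at every `n` (see `agree_everywhere_eq`).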
instance stream_po [partial_order a] : partial_order (stream a) :=
by { unfold stream, apply_instance }.
@[ext]
lemma stream_le_ext [partial_order a] (s1 s2: stream a) :
s1 ≤ s2 = (∀ t, s1 t ≤ s2 t) := rfl.
instance stream_zero [has_zero a] : has_zero (stream a) := ⟨λ (_: ℕ), 0⟩.
@[refl]
lemma agree_refl (n: ℕ) : ∀ (s: stream a), s ==n== s :=
begin
unfold agree_upto,
intros s i _,
refl,
end
@[symm]
lemma agree_symm (n: ℕ) : ∀ (s1 s2: stream a), s1 ==n== s2 → s2 ==n== s1 :=
begin
unfold agree_upto,
intros s1 s2 h12 i hle,
rw [h12]; assumption,
end
@[trans]
lemma agree_trans {n: ℕ} : ∀ (s1 s2 s3: stream a), s1 ==n== s2 → s2 ==n== s3 → s1 ==n== s3 :=
begin
unfold agree_upto,
intros s1 s2 s3 h12 h23 i hle,
rw [h12, h23]; assumption,
end
-- TODO: these don't seem to do anything (don't help with rewriting)
instance agree_upto_refl (n: ℕ) : is_refl (stream a) (agree_upto n) := ⟨agree_refl n⟩.
instance agree_upto_symm (n: ℕ) : is_symm (stream a) (agree_upto n) := ⟨agree_symm n⟩.
instance agree_upto_trans (n: ℕ) : is_trans (stream a) (agree_upto n) := ⟨agree_trans⟩.
instance agree_upto_preorder (n: ℕ) : is_preorder (stream a) (agree_upto n) := ⟨⟩.
instance agree_upto_equiv (n: ℕ) : is_equiv (stream a) (agree_upto n) := ⟨⟩.
theorem agree_everywhere_eq (s s': stream a) :
s = s' ↔ (∀ n, s ==n== s') :=
begin
split,
{ intros h n,
rw h, },
{ intros h,
funext n,
apply (h n), omega,
}
end
lemma agree_upto_weaken {s s': stream a} (n n': ℕ) :
s ==n== s' →
n' ≤ n →
s ==n'== s' :=
begin
intros heq hle,
intros i hle_i,
apply heq, omega,
end
lemma agree_upto_weaken1 {s s': stream a} (n: ℕ) :
s ==n.succ== s' →
s ==n== s' :=
begin
intros heq,
apply (agree_upto_weaken n.succ), assumption, omega,
end
lemma agree_upto_0 (s s': stream a) :
s ==0== s' ↔ s 0 = s' 0 :=
begin
unfold agree_upto,
split,
{ intros hagree,
apply (hagree 0),
omega, },
{ intros h0 t hle,
have h: (t = 0) := by omega,
cc, }
end
lemma agree_upto_extend (n: nat) (s s': stream a) :
s ==n== s' → s n.succ = s' n.succ → s ==n.succ== s' :=
begin
intros hagree heq,
intros i hle,
have h: (i ≤ n ∨ i = n.succ) := by omega,
cases h,
{ apply hagree, assumption, },
{ subst i, assumption, }
end
-- We don't use this theory because everything is based on [agree_upto], but
-- formalize a little bit from the paper.
namespace cutting.
variables [has_zero a].
/-- Construct a stream that matches `s` up to time `t` and is 0 afterward. -/
def cut (s: stream a) (t: ℕ) : stream a :=
λ i, if (i < t) then s i else 0.
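-- For instance, `cut s 3` agrees with `s` at times 0, 1 and 2 and is `0`
-- from time 3 onwards; in particular `cut s 0 = 0` (`cut_at_0` below).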
lemma cut_at_0 (s: stream a) : cut s 0 = 0 :=
begin
ext n,
unfold cut, rw if_neg, simp,
omega,
end
lemma cut_0 : cut (0 : stream a) = 0 :=
begin
ext n,
unfold cut, split_ifs; refl,
end
theorem cut_cut (s: stream a) (t1 t2: ℕ) :
cut (cut s t1) t2 = cut s (min t1 t2) :=
begin
funext i, simp [cut],
split_ifs; try { simp },
tauto,
{ exfalso, linarith, },
{ exfalso, linarith, },
end
theorem cut_comm (s: stream a) (t1 t2: ℕ) :
cut (cut s t1) t2 = cut (cut s t2) t1 :=
begin
rw [cut_cut, cut_cut],
rw min_comm,
end
theorem cut_idem (s: stream a) (t: ℕ) :
cut (cut s t) t = cut s t :=
begin
rw cut_cut, simp,
end
/-- Relate [agree_upto] to equality on [cut]. -/
theorem agree_upto_cut (s1 s2: stream a) (n: ℕ) :
s1 ==n== s2 ↔ cut s1 n.succ = cut s2 n.succ :=
begin
split,
{ intros heq,
funext t, simp [cut],
split_ifs; try { refl },
apply heq, omega, },
{ intros heq,
intros t hle, simp [cut] at heq,
have h := congr_fun heq t, simp at h,
split_ifs at *,
{ assumption, },
{ exfalso, apply h_1, omega, },
},
end
lemma cut_agree_succ (s1 s2: stream a) (t: ℕ) :
cut s1 t = cut s2 t →
s1 t = s2 t →
cut s1 t.succ = cut s2 t.succ :=
begin
cases t,
{ intros _hcut heq,
ext n,
unfold cut, split_ifs, swap, refl,
have heq : n = 0 := by omega,
subst n, assumption,
},
repeat { rw<- agree_upto_cut },
apply agree_upto_extend,
end
theorem agree_with_cut (s: stream a) (n: ℕ) :
s ==n== cut s n.succ :=
begin
rw [agree_upto_cut, cut_idem],
end
end cutting.
-- #lint only doc_blame simp_nf
|
$ifndef _REP_REFINE_
$define _REP_REFINE_
$endif |
(*
Copyright (C) 2020 Susi Lehtola
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*)
(* type: mgga_exc *)
(* prefix:
mgga_k_rda_params *params;
assert(p->params != NULL);
params = (mgga_k_rda_params * ) (p->params);
*)
rda_s := x -> X2S*x:
rda_p := u -> X2S^2*u:
(* Equation (61) *)
rda_k4 := (s, p, b) -> sqrt(s^4 + b*p^2):
(* Equation (63) *)
rda_k2 := (s, p, b) -> s^2 + b*p:
(* Equation (71); first term is von Weizsäcker according to equation (13) *)
rda_f0 := (s, p) ->
5/3*s^2 + params_a_A0
+ params_a_A1 * (rda_k4(s,p,params_a_a) / (1 + params_a_beta1*rda_k4(s,p,params_a_a)))^2
+ params_a_A2 * (rda_k4(s,p,params_a_b) / (1 + params_a_beta2*rda_k4(s,p,params_a_b)))^4
+ params_a_A3 * (rda_k2(s,p,params_a_c) / (1 + params_a_beta3*rda_k2(s,p,params_a_c))):
(* Complete functional *)
rda_f := (xs, us) -> rda_f0(rda_s(xs), rda_p(us)):
f := (rs, z, xt, xs0, xs1, u0, u1, t0, t1) ->
mgga_kinetic(rda_f, rs, z, xs0, xs1, u0, u1):
|
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
⊢ ordConnectedComponent s a ∈ 𝓝 a ↔ s ∈ 𝓝 a
[PROOFSTEP]
refine' ⟨fun h => mem_of_superset h ordConnectedComponent_subset, fun h => _⟩
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
h : s ∈ 𝓝 a
⊢ ordConnectedComponent s a ∈ 𝓝 a
[PROOFSTEP]
rcases exists_Icc_mem_subset_of_mem_nhds h with ⟨b, c, ha, ha', hs⟩
[GOAL]
case intro.intro.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
h : s ∈ 𝓝 a
b c : X
ha : a ∈ Icc b c
ha' : Icc b c ∈ 𝓝 a
hs : Icc b c ⊆ s
⊢ ordConnectedComponent s a ∈ 𝓝 a
[PROOFSTEP]
exact mem_of_superset ha' (subset_ordConnectedComponent ha hs)
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
have hmem : tᶜ ∈ 𝓝[≥] a := by
refine' mem_nhdsWithin_of_mem_nhds _
rw [← mem_interior_iff_mem_nhds, interior_compl]
exact disjoint_left.1 hd ha
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
⊢ tᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
refine' mem_nhdsWithin_of_mem_nhds _
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
⊢ tᶜ ∈ 𝓝 a
[PROOFSTEP]
rw [← mem_interior_iff_mem_nhds, interior_compl]
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
⊢ a ∈ (closure t)ᶜ
[PROOFSTEP]
exact disjoint_left.1 hd ha
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
rcases exists_Icc_mem_subset_of_mem_nhdsWithin_Ici hmem with ⟨b, hab, hmem', hsub⟩
[GOAL]
case intro.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
by_cases H : Disjoint (Icc a b) (ordConnectedSection <| ordSeparatingSet s t)
[GOAL]
case pos
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
H : Disjoint (Icc a b) (ordConnectedSection (ordSeparatingSet s t))
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
exact mem_of_superset hmem' (disjoint_left.1 H)
[GOAL]
case neg
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
H : ¬Disjoint (Icc a b) (ordConnectedSection (ordSeparatingSet s t))
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
simp only [Set.disjoint_left, not_forall, Classical.not_not] at H
[GOAL]
case neg
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
H : ∃ x h, x ∈ ordConnectedSection (ordSeparatingSet s t)
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
rcases H with ⟨c, ⟨hac, hcb⟩, hc⟩
[GOAL]
case neg.intro.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hac : a ≤ c
hcb : c ≤ b
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
have hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a := subset_ordConnectedComponent (left_mem_Icc.2 hab) hsub
[GOAL]
case neg.intro.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hac : a ≤ c
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
have hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t)) :=
disjoint_left_ordSeparatingSet.mono_right ordConnectedSection_subset
[GOAL]
case neg.intro.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hac : a ≤ c
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
replace hac : a < c := hac.lt_of_ne <| Ne.symm <| ne_of_mem_of_not_mem hc <| disjoint_left.1 hd ha
[GOAL]
case neg.intro.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
refine' mem_of_superset (Ico_mem_nhdsWithin_Ici (left_mem_Ico.2 hac)) fun x hx hx' => _
[GOAL]
case neg.intro.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
⊢ False
[PROOFSTEP]
refine' hx.2.ne (eq_of_mem_ordConnectedSection_of_uIcc_subset hx' hc _)
[GOAL]
case neg.intro.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
⊢ [[x, c]] ⊆ ordSeparatingSet s t
[PROOFSTEP]
refine' subset_inter (subset_iUnion₂_of_subset a ha _) _
[GOAL]
case neg.intro.intro.intro.refine'_1
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
⊢ [[x, c]] ⊆ ordConnectedComponent tᶜ a
[PROOFSTEP]
exact OrdConnected.uIcc_subset inferInstance (hsub' ⟨hx.1, hx.2.le.trans hcb⟩) (hsub' ⟨hac.le, hcb⟩)
[GOAL]
case neg.intro.intro.intro.refine'_2
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
⊢ [[x, c]] ⊆ ⋃ (x : X) (_ : x ∈ t), ordConnectedComponent sᶜ x
[PROOFSTEP]
rcases mem_iUnion₂.1 (ordConnectedSection_subset hx').2 with ⟨y, hyt, hxy⟩
[GOAL]
case neg.intro.intro.intro.refine'_2.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
y : X
hyt : y ∈ t
hxy : x ∈ ordConnectedComponent sᶜ y
⊢ [[x, c]] ⊆ ⋃ (x : X) (_ : x ∈ t), ordConnectedComponent sᶜ x
[PROOFSTEP]
refine' subset_iUnion₂_of_subset y hyt (OrdConnected.uIcc_subset inferInstance hxy _)
[GOAL]
case neg.intro.intro.intro.refine'_2.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
y : X
hyt : y ∈ t
hxy : x ∈ ordConnectedComponent sᶜ y
⊢ c ∈ ordConnectedComponent sᶜ y
[PROOFSTEP]
refine' subset_ordConnectedComponent left_mem_uIcc hxy _
[GOAL]
case neg.intro.intro.intro.refine'_2.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
y : X
hyt : y ∈ t
hxy : x ∈ ordConnectedComponent sᶜ y
⊢ c ∈ [[y, x]]
[PROOFSTEP]
suffices c < y by
rw [uIcc_of_ge (hx.2.trans this).le]
exact ⟨hx.2.le, this.le⟩
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
y : X
hyt : y ∈ t
hxy : x ∈ ordConnectedComponent sᶜ y
this : c < y
⊢ c ∈ [[y, x]]
[PROOFSTEP]
rw [uIcc_of_ge (hx.2.trans this).le]
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
y : X
hyt : y ∈ t
hxy : x ∈ ordConnectedComponent sᶜ y
this : c < y
⊢ c ∈ Icc x y
[PROOFSTEP]
exact ⟨hx.2.le, this.le⟩
[GOAL]
case neg.intro.intro.intro.refine'_2.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
y : X
hyt : y ∈ t
hxy : x ∈ ordConnectedComponent sᶜ y
⊢ c < y
[PROOFSTEP]
refine' lt_of_not_le fun hyc => _
[GOAL]
case neg.intro.intro.intro.refine'_2.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
y : X
hyt : y ∈ t
hxy : x ∈ ordConnectedComponent sᶜ y
hyc : y ≤ c
⊢ False
[PROOFSTEP]
have hya : y < a := not_le.1 fun hay => hsub ⟨hay, hyc.trans hcb⟩ hyt
[GOAL]
case neg.intro.intro.intro.refine'_2.intro.intro
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b✝ c✝ : X
s t : Set X
hd✝ : Disjoint s (closure t)
ha : a ∈ s
hmem : tᶜ ∈ 𝓝[Ici a] a
b : X
hab : a ≤ b
hmem' : Icc a b ∈ 𝓝[Ici a] a
hsub : Icc a b ⊆ tᶜ
c : X
hc : c ∈ ordConnectedSection (ordSeparatingSet s t)
hcb : c ≤ b
hsub' : Icc a b ⊆ ordConnectedComponent tᶜ a
hd : Disjoint s (ordConnectedSection (ordSeparatingSet s t))
hac : a < c
x : X
hx : x ∈ Ico a c
hx' : x ∈ ordConnectedSection (ordSeparatingSet s t)
y : X
hyt : y ∈ t
hxy : x ∈ ordConnectedComponent sᶜ y
hyc : y ≤ c
hya : y < a
⊢ False
[PROOFSTEP]
exact hxy (Icc_subset_uIcc ⟨hya.le, hx.1⟩) ha
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Iic a] a
[PROOFSTEP]
have hd' : Disjoint (ofDual ⁻¹' s) (closure <| ofDual ⁻¹' t) := hd
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
hd' : Disjoint (↑ofDual ⁻¹' s) (closure (↑ofDual ⁻¹' t))
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Iic a] a
[PROOFSTEP]
have ha' : toDual a ∈ ofDual ⁻¹' s := ha
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
hd' : Disjoint (↑ofDual ⁻¹' s) (closure (↑ofDual ⁻¹' t))
ha' : ↑toDual a ∈ ↑ofDual ⁻¹' s
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Iic a] a
[PROOFSTEP]
simpa only [dual_ordSeparatingSet, dual_ordConnectedSection] using
compl_section_ordSeparatingSet_mem_nhdsWithin_Ici hd' ha'
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝 a
[PROOFSTEP]
rw [← nhds_left_sup_nhds_right, mem_sup]
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
ha : a ∈ s
⊢ (ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Iic a] a ∧
(ordConnectedSection (ordSeparatingSet s t))ᶜ ∈ 𝓝[Ici a] a
[PROOFSTEP]
exact ⟨compl_section_ordSeparatingSet_mem_nhdsWithin_Iic hd ha, compl_section_ordSeparatingSet_mem_nhdsWithin_Ici hd ha⟩
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
x : X
hx : x ∈ s
⊢ tᶜ ∈ 𝓝 x
[PROOFSTEP]
rw [← mem_interior_iff_mem_nhds, interior_compl]
[GOAL]
X : Type u_1
inst✝² : LinearOrder X
inst✝¹ : TopologicalSpace X
inst✝ : OrderTopology X
a b c : X
s t : Set X
hd : Disjoint s (closure t)
x : X
hx : x ∈ s
⊢ x ∈ (closure t)ᶜ
[PROOFSTEP]
exact disjoint_left.1 hd hx
|
A function $f$ is continuous on a set $S$ if and only if for every $x \in S$ and every $\epsilon > 0$, there exists a $\delta > 0$ such that for every $x' \in S$, if $|x' - x| < \delta$, then $|f(x') - f(x)| < \epsilon$. |
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
#include <Python.h>
#include <numpy/arrayobject.h>
#include <lapacke.h>
/* DGESVD prototype */
extern void LAPACK_dgesvd( char* jobu, char* jobvt, int* m, int* n, double* a,
int* lda, double* s, double* u, int* ldu, double* vt, int* ldvt,
double* work, int* lwork, int* info );
double** An(double *pi, double *x, int labs, int dims);
double** A(double **PI, double **X, int labs, int dims, int n_pool);
double** Fisher(double *pi, double *x, double sigma, int labs, int dims);
static char estVar_docstring[] =
    "Estimate the variance-reduction score from the A and Fisher matrices.";
static PyObject *varRedu_estVar(PyObject *self, PyObject *args);
static PyMethodDef module_methods[] = {
{"estVar", varRedu_estVar, METH_VARARGS, estVar_docstring},
{NULL, NULL, 0, NULL}
};
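/*
 * From Python the module is intended to be used roughly as
 *
 *     score = _variance_reduction.estVar(sigma, PI, X, ePI, eX)
 *
 * where PI is an (n_pool, labs) array of predicted label probabilities,
 * X is the (n_pool, dims) pool of feature vectors, and ePI/eX hold the
 * probabilities and features of the candidate point. The variable names
 * are illustrative; only the argument order and types are fixed by
 * varRedu_estVar below.
 */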
static struct PyModuleDef moduledef = {
PyModuleDef_HEAD_INIT,
"_variance_reduction", /* m_name */
"This module provides calculate A and Fisher matrix using C.", /* m_doc */
-1, /* m_size */
module_methods, /* m_methods */
NULL, /* m_reload */
NULL, /* m_traverse */
NULL, /* m_clear */
NULL, /* m_free */
};
PyMODINIT_FUNC PyInit__variance_reduction(void){
PyObject *m = PyModule_Create(&moduledef);
if(m==NULL){
return NULL;
}
/* Load 'numpy' */
import_array();
return m;
}
double* matrix_mul(double* a, double* b, int m1, int n1, int m2, int n2){
    if(n1 != m2){
        return NULL;
    }
    double *ret = (double*) malloc(m1 * n2 * sizeof(double));
for(int i=0; i<m1; i++)
for(int j=0; j<n2; j++){
double temp = 0.0;
for(int p=0; p<n1; p++)
temp += a[i*n1 + p] * b[p*n2 + j];
ret[i*n2 + j] = temp;
}
return ret;
}
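/*
 * pinv replaces the (labs*dims) x (labs*dims) matrix X in place by its
 * Moore-Penrose pseudo-inverse, computed from the SVD X = U * S * V^T as
 * pinv(X) = V * S^+ * U^T, where S^+ inverts every singular value above
 * 1e-30 and zeroes the rest.
 */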
void pinv(double** X, int labs, int dims){
int m = labs*dims, n = labs*dims, lda = labs*dims, ldu = labs*dims,
ldvt = labs*dims, lwork, info;
double wkopt;
double *work;
double *s = (double*) malloc(labs*dims * sizeof(double));
double *u = (double*) malloc(labs*dims * labs*dims * sizeof(double));
double *vt = (double*) malloc(labs*dims * labs*dims * sizeof(double));
double *a = (double*) malloc(labs*dims * labs*dims * sizeof(double));
for(int i=0; i<m; i++)
for(int j=0; j<n; j++)
a[i*labs*dims + j] = X[i][j];
/* Query and allocate the optimal workspace */
lwork = -1;
LAPACK_dgesvd("All", "All", &m, &n, a, &lda, s, u, &ldu, vt, &ldvt, &wkopt, &lwork,
&info);
lwork = (int)wkopt;
work = (double*)malloc( lwork*sizeof(double) );
/* Compute SVD */
LAPACK_dgesvd("All", "All", &m, &n, a, &lda, s, u, &ldu, vt, &ldvt, work, &lwork,
&info);
/* Check for convergence */
if(info > 0) {
printf("The algorithm computing SVD failed to converge. %d\n", info);
}
if(info < 0) {
printf("Has illegal value. %d\n", info);
}
int numSigular = 0;
double *si = (double*) malloc(labs*dims * labs*dims * sizeof(double));
memset(si, 0, labs*dims * labs*dims * sizeof(double));
for(int i=0; i<ldu; i++){
if(s[i] > 1e-30){
si[i*ldu + i] = 1.0 / s[i];
numSigular += 1;
}else{
            si[i*ldu + i] = 0.0;
}
}
/* calculating transpose */
double *ret = matrix_mul(vt, si, labs*dims, numSigular, numSigular, numSigular);
double *ret_pinv = matrix_mul(ret, u, labs*dims, numSigular, numSigular, labs*dims);
for(int i=0; i<m; i++)
for(int j=0; j<n; j++)
X[i][j] = ret_pinv[i*n + j];
free(ret);
free(ret_pinv);
free(work);
free(a);
free(s);
free(vt);
free(u);
free(si);
return;
}
static PyObject *varRedu_estVar(PyObject *self, PyObject *args)
{
    int dims, n_pool, labs;
    double sigma;  /* the "d" format in PyArg_ParseTuple requires a double */
    PyObject *PI_obj, *X_obj, *ePI_obj, *eX_obj;
    if (!PyArg_ParseTuple(args, "dOOOO", &sigma, &PI_obj, &X_obj, &ePI_obj, &eX_obj))
return NULL;
PyArrayObject *PI_array = (PyArrayObject*)PyArray_FROM_OTF(PI_obj, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
PyArrayObject *X_array = (PyArrayObject*)PyArray_FROM_OTF(X_obj, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
PyArrayObject *ePI_array = (PyArrayObject*)PyArray_FROM_OTF(ePI_obj, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
PyArrayObject *eX_array = (PyArrayObject*)PyArray_FROM_OTF(eX_obj, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
if (PI_array == NULL || X_array == NULL || ePI_array == NULL || eX_array == NULL) {
Py_XDECREF(PI_array);
Py_XDECREF(X_array);
Py_XDECREF(ePI_array);
Py_XDECREF(eX_array);
return NULL;
}
labs = (int)PyArray_DIM(PI_array, 1);
n_pool = (int)PyArray_DIM(X_array, 0);
dims = (int)PyArray_DIM(X_array, 1);
double **PI = (double**) malloc(n_pool * sizeof(double*));
double **X = (double**) malloc(n_pool * sizeof(double*));
for(int i=0; i<n_pool; i++){
PI[i] = (double*) malloc(labs * sizeof(double));
X[i] = (double*) malloc(dims * sizeof(double));
}
for(int i=0; i<n_pool; i++){
for(int j=0; j<labs; j++){
PI[i][j] = *(double*)PyArray_GETPTR2(PI_array, i, j);
}
for(int j=0; j<dims; j++){
X[i][j] = *(double*)PyArray_GETPTR2(X_array, i, j);
}
}
double *ePI = (double*) PyArray_DATA(ePI_array);
double *eX = (double*) PyArray_DATA(eX_array);
double **retF = Fisher(ePI, eX, sigma, labs, dims);
double **retA = A(PI, X, labs, dims, n_pool);
pinv(retF, labs, dims);
/* calculates the trace of the multiply of pinv(F) and A */
double score = 0.0;
for(int i=0; i<dims*labs; i++){
for(int k=0; k<dims*labs; k++){
score += retA[i][k] * retF[k][i];
}
}
Py_DECREF(PI_array);
Py_DECREF(X_array);
Py_DECREF(ePI_array);
Py_DECREF(eX_array);
PyObject* ret = Py_BuildValue("d", score);
for(int i=0; i<n_pool; i++){
free(PI[i]);
free(X[i]);
}
free(PI);
free(X);
for(int i=0; i<labs*dims; i++){
free(retF[i]);
free(retA[i]);
}
free(retF);
free(retA);
return ret;
}
double** An(double *pi, double *x, int labs, int dims){
double **g = (double**) malloc(labs*dims * sizeof(double*));
for(int i=0; i<labs*dims; i++){
g[i] = (double*) malloc(labs * sizeof(double));
memset(g[i], 0, labs * sizeof(double));
}
for(int p=0; p<labs; p++)
for(int i=0; i<dims; i++){
for(int c=0; c<labs; c++)
if(p == c) g[p*dims + i][c] = pi[p] * (1.0-pi[p]) * x[i];
else g[p*dims + i][c] = -1.0 * pi[p] * pi[c] * x[i];
}
double **an = (double**) malloc(labs*dims * sizeof(double*));
for(int i=0; i<labs*dims; i++){
an[i] = (double*) malloc(labs*dims * sizeof(double));
memset(an[i], 0, labs*dims * sizeof(double));
}
for(int p=0; p<labs; p++)
for(int i=0; i<dims; i++)
for(int q=0; q<labs; q++)
for(int j=0; j<dims; j++){
/* inner product */
double tmp = 0.0;
for(int k=0; k<labs; k++){
tmp += g[p*dims + i][k] * g[q*dims + j][k];
}
an[p*dims + i][q*dims + j] = tmp;
}
for(int i=0; i<labs*dims; i++)
free(g[i]);
free(g);
return an;
}
double** A(double **PI, double **X, int labs, int dims, int n_pool){
double **ret = (double**) malloc(labs*dims * sizeof(double*));
for(int i=0; i<labs*dims; i++){
ret[i] = (double*) malloc(labs*dims * sizeof(double));
memset(ret[i], 0, labs*dims * sizeof(double));
}
for(int n=0; n<n_pool; n++){
double **an = An(PI[n], X[n], labs, dims);
for(int p=0; p<labs; p++)
for(int i=0; i<dims; i++)
for(int q=0; q<labs; q++)
for(int j=0; j<dims; j++)
ret[p*dims + i][q*dims + j] += an[p*dims + i][q*dims + j];
for(int i=0; i<labs*dims; i++)
free(an[i]);
free(an);
}
return ret;
}
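/* Fisher() below fills a (labs*dims) x (labs*dims) matrix in (label, dim)
blocks. As implemented, entries within a label block are
x[i]*x[j]*pi[p]*(1.0-pi[p]), with an extra 1/sigma^2 added on the
diagonal, and entries across label blocks are x[i]*x[j]*pi[p]*pi[q]. */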
double** Fisher(double *pi, double *x, double sigma, int labs, int dims){
double **ret = (double**) malloc(labs*dims * sizeof(double*));
for(int i=0; i<labs*dims; i++){
ret[i] = (double*) malloc(labs*dims * sizeof(double));
memset(ret[i], 0, labs*dims * sizeof(double));
}
for(int p=0; p<labs; p++)
for(int i=0; i<dims; i++)
for(int q=0; q<labs; q++)
for(int j=0; j<dims; j++){
if(p == q && i == j)
ret[p*dims + i][q*dims + j] = x[i]*x[i]*pi[p]*(1.0-pi[p]) + 1.0/(sigma*sigma);
else if(p == q && i != j)
ret[p*dims + i][q*dims + j] = x[i]*x[j]*pi[p]*(1.0-pi[p]);
else
ret[p*dims + i][q*dims + j] = x[i]*x[j]*pi[p]*pi[q];
}
return ret;
}
|
Formal statement is: lemma homeomorphic_simply_connected: "\<lbrakk>S homeomorphic T; simply_connected S\<rbrakk> \<Longrightarrow> simply_connected T" Informal statement is: If two spaces are homeomorphic and one of them is simply connected, then the other is also simply connected. |
lemma pderiv_pcompose: "pderiv (pcompose p q) = pcompose (pderiv p) q * pderiv q" |
(*
File: Nearest_Neighbors.thy
Author: Martin Rau, TU München
*)
section \<open>Nearest Neighbor Search on the \<open>k\<close>-d Tree\<close>
theory Nearest_Neighbors
imports
KD_Tree
"Eval_Base.Eval_Base"
begin
text \<open>
Verifying nearest neighbor search on the k-d tree. Given a \<open>k\<close>-d tree and a point \<open>p\<close>,
which might not be in the tree, find the points \<open>ps\<close> that are closest to \<open>p\<close> using the
Euclidean metric.
\<close>
subsection \<open>Auxiliary Lemmas about \<open>sorted_wrt\<close>\<close>
lemma
assumes "sorted_wrt f xs"
shows sorted_wrt_take: "sorted_wrt f (take n xs)"
and sorted_wrt_drop: "sorted_wrt f (drop n xs)"
proof -
have "sorted_wrt f (take n xs @ drop n xs)"
using assms by simp
thus "sorted_wrt f (take n xs)" "sorted_wrt f (drop n xs)"
using sorted_wrt_append by blast+
qed
definition sorted_wrt_dist :: "('k::finite) point \<Rightarrow> 'k point list \<Rightarrow> bool" where
"sorted_wrt_dist p \<equiv> sorted_wrt (\<lambda>p\<^sub>0 p\<^sub>1. dist p\<^sub>0 p \<le> dist p\<^sub>1 p)"
lemma sorted_wrt_dist_insort_key:
"sorted_wrt_dist p ps \<Longrightarrow> sorted_wrt_dist p (insort_key (\<lambda>q. dist q p) q ps)"
apply2 (induction ps) by (auto simp: sorted_wrt_dist_def set_insort_key)
lemma sorted_wrt_dist_take_drop:
assumes "sorted_wrt_dist p ps"
shows "\<forall>p\<^sub>0 \<in> set (take n ps). \<forall>p\<^sub>1 \<in> set (drop n ps). dist p\<^sub>0 p \<le> dist p\<^sub>1 p"
using assms sorted_wrt_append[of _ "take n ps" "drop n ps"] by (simp add: sorted_wrt_dist_def)
lemma sorted_wrt_dist_last_take_mono:
assumes "sorted_wrt_dist p ps" "n \<le> length ps" "0 < n"
shows "dist (last (take n ps)) p \<le> dist (last ps) p"
using assms unfolding sorted_wrt_dist_def apply2 (induction ps arbitrary: n) by (auto simp add: take_Cons')
lemma sorted_wrt_dist_last_insort_key_eq:
assumes "sorted_wrt_dist p ps" "insort_key (\<lambda>q. dist q p) q ps \<noteq> ps @ [q]"
shows "last (insort_key (\<lambda>q. dist q p) q ps) = last ps"
using assms unfolding sorted_wrt_dist_def apply2 (induction ps) by (auto)
lemma sorted_wrt_dist_last:
assumes "sorted_wrt_dist p ps"
shows "\<forall>q \<in> set ps. dist q p \<le> dist (last ps) p"
proof (cases "ps = []")
case True
thus ?thesis by simp
next
case False
then obtain ps' p' where [simp]:"ps = ps' @ [p']"
using rev_exhaust by blast
hence "sorted_wrt_dist p (ps' @ [p'])"
using assms by blast
thus ?thesis
unfolding sorted_wrt_dist_def using sorted_wrt_append[of _ ps' "[p']"] by simp
qed
subsection \<open>Neighbors Sorted wrt. Distance\<close>
definition upd_nbors :: "nat \<Rightarrow> ('k::finite) point \<Rightarrow> 'k point \<Rightarrow> 'k point list \<Rightarrow> 'k point list" where
"upd_nbors n p q ps = take n (insort_key (\<lambda>q. dist q p) q ps)"
lemma sorted_wrt_dist_nbors:
assumes "sorted_wrt_dist p ps"
shows "sorted_wrt_dist p (upd_nbors n p q ps)"
proof -
have "sorted_wrt_dist p (insort_key (\<lambda>q. dist q p) q ps)"
using assms sorted_wrt_dist_insort_key by blast
thus ?thesis
by (simp add: sorted_wrt_dist_def sorted_wrt_take upd_nbors_def)
qed
lemma sorted_wrt_dist_nbors_diff:
assumes "sorted_wrt_dist p ps"
shows "\<forall>r \<in> set ps \<union> {q} - set (upd_nbors n p q ps). \<forall>s \<in> set (upd_nbors n p q ps). dist s p \<le> dist r p"
proof -
let ?ps' = "insort_key (\<lambda>q. dist q p) q ps"
have "set ps \<union> { q } = set ?ps'"
by (simp add: set_insort_key)
moreover have "set ?ps' = set (take n ?ps') \<union> set (drop n ?ps')"
using append_take_drop_id set_append by metis
ultimately have "set ps \<union> { q } - set (take n ?ps') \<subseteq> set (drop n ?ps')"
by blast
moreover have "sorted_wrt_dist p ?ps'"
using assms sorted_wrt_dist_insort_key by blast
ultimately show ?thesis
unfolding upd_nbors_def using sorted_wrt_dist_take_drop by blast
qed
lemma sorted_wrt_dist_last_upd_nbors_mono:
assumes "sorted_wrt_dist p ps" "n \<le> length ps" "0 < n"
shows "dist (last (upd_nbors n p q ps)) p \<le> dist (last ps) p"
proof (cases "insort_key (\<lambda>q. dist q p) q ps = ps @ [q]")
case True
thus ?thesis
unfolding upd_nbors_def using assms sorted_wrt_dist_last_take_mono by auto
next
case False
hence "last (insort_key (\<lambda>q. dist q p) q ps) = last ps"
using sorted_wrt_dist_last_insort_key_eq assms by blast
moreover have "dist (last (upd_nbors n p q ps)) p \<le> dist (last (insort_key (\<lambda>q. dist q p) q ps)) p"
unfolding upd_nbors_def using assms sorted_wrt_dist_last_take_mono[of p "insort_key (\<lambda>q. dist q p) q ps"]
by (simp add: sorted_wrt_dist_insort_key)
ultimately show ?thesis
by simp
qed
subsection \<open>The Recursive Nearest Neighbor Algorithm\<close>
fun nearest_nbors :: "nat \<Rightarrow> ('k::finite) point list \<Rightarrow> 'k point \<Rightarrow> 'k kdt \<Rightarrow> 'k point list" where
"nearest_nbors n ps p (Leaf q) = upd_nbors n p q ps"
| "nearest_nbors n ps p (Node k v l r) = (
if p$k \<le> v then
let candidates = nearest_nbors n ps p l in
if length candidates = n \<and> dist p (last candidates) \<le> dist v (p$k) then
candidates
else
nearest_nbors n candidates p r
else
let candidates = nearest_nbors n ps p r in
if length candidates = n \<and> dist p (last candidates) \<le> dist v (p$k) then
candidates
else
nearest_nbors n candidates p l
)"
subsection \<open>Auxiliary Lemmas\<close>
lemma cutoff_r:
assumes "invar (Node k v l r)"
assumes "p$k \<le> v" "dist p c \<le> dist (p$k) v"
shows "\<forall>q \<in> set_kdt r. dist p c \<le> dist p q"
proof standard
fix q
assume *: "q \<in> set_kdt r"
have "dist p c \<le> dist (p$k) v"
using assms(3) by blast
also have "... \<le> dist (p$k) v + dist v (q$k)"
by simp
also have "... = dist (p$k) (q$k)"
using * assms(1,2) dist_real_def by auto
also have "... \<le> dist p q"
using dist_vec_nth_le by blast
finally show "dist p c \<le> dist p q" .
qed
lemma cutoff_l:
assumes "invar (Node k v l r)"
assumes "v \<le> p$k" "dist p c \<le> dist v (p$k)"
shows "\<forall>q \<in> set_kdt l. dist p c \<le> dist p q"
proof standard
fix q
assume *: "q \<in> set_kdt l"
have "dist p c \<le> dist v (p$k)"
using assms(3) by blast
also have "... \<le> dist v (p$k) + dist (q$k) v"
by simp
also have "... = dist (p$k) (q$k)"
using * assms(1,2) dist_real_def by auto
also have "... \<le> dist p q"
using dist_vec_nth_le by blast
finally show "dist p c \<le> dist p q" .
qed
subsection \<open>The Main Theorems\<close>
lemma set_nns:
"set (nearest_nbors n ps p kdt) \<subseteq> set_kdt kdt \<union> set ps"
apply2 (induction kdt arbitrary: ps)
apply (auto simp: Let_def upd_nbors_def set_insort_key)
using in_set_takeD set_insort_key by fastforce
lemma length_nns:
"length (nearest_nbors n ps p kdt) = min n (size_kdt kdt + length ps)"
apply2 (induction kdt arbitrary: ps) by (auto simp: Let_def upd_nbors_def)
lemma length_nns_gt_0:
"0 < n \<Longrightarrow> 0 < length (nearest_nbors n ps p kdt)"
apply2 (induction kdt arbitrary: ps) by (auto simp: Let_def upd_nbors_def)
lemma length_nns_n:
assumes "(set_kdt kdt \<union> set ps) - set (nearest_nbors n ps p kdt) \<noteq> {}"
shows "length (nearest_nbors n ps p kdt) = n"
using assms
proof2 (induction kdt arbitrary: ps)
case (Node k v l r)
let ?nnsl = "nearest_nbors n ps p l"
let ?nnsr = "nearest_nbors n ps p r"
consider (A) "p$k \<le> v \<and> length ?nnsl = n \<and> dist p (last ?nnsl) \<le> dist v (p$k)"
| (B) "p$k \<le> v \<and> \<not>(length ?nnsl = n \<and> dist p (last ?nnsl) \<le> dist v (p$k))"
| (C) "v < p$k \<and> length ?nnsr = n \<and> dist p (last ?nnsr) \<le> dist v (p$k)"
| (D) "v < p$k \<and> \<not>(length ?nnsr = n \<and> dist p (last ?nnsr) \<le> dist v (p$k))"
by argo
thus ?case
proof cases
case B
let ?nns = "nearest_nbors n ?nnsl p r"
have "length ?nnsl \<noteq> n \<longrightarrow> (set_kdt l \<union> set ps - set (nearest_nbors n ps p l) = {})"
using Node.IH(1) by blast
hence "length ?nnsl \<noteq> n \<longrightarrow> (set_kdt r \<union> set ?nnsl - set ?nns \<noteq> {})"
using B Node.prems by auto
moreover have "length ?nnsl = n \<longrightarrow> ?thesis"
using B by (auto simp: length_nns)
ultimately show ?thesis
using B Node.IH(2) by force
next
case D
let ?nns = "nearest_nbors n ?nnsr p l"
have "length ?nnsr \<noteq> n \<longrightarrow> (set_kdt r \<union> set ps - set (nearest_nbors n ps p r) = {})"
using Node.IH(2) by blast
hence "length ?nnsr \<noteq> n \<longrightarrow> (set_kdt l \<union> set ?nnsr - set ?nns \<noteq> {})"
using D Node.prems by auto
moreover have "length ?nnsr = n \<longrightarrow> ?thesis"
using D by (auto simp: length_nns)
ultimately show ?thesis
using D Node.IH(1) by force
qed auto
qed (auto simp: upd_nbors_def min_def set_insort_key)
lemma sorted_nns:
"sorted_wrt_dist p ps \<Longrightarrow> sorted_wrt_dist p (nearest_nbors n ps p kdt)"
using sorted_wrt_dist_nbors apply2 (induction kdt arbitrary: ps) by (auto simp: Let_def)
lemma distinct_nns:
assumes "invar kdt" "distinct ps" "set ps \<inter> set_kdt kdt = {}"
shows "distinct (nearest_nbors n ps p kdt)"
using assms
proof2 (induction kdt arbitrary: ps)
case (Node k v l r)
let ?nnsl = "nearest_nbors n ps p l"
let ?nnsr = "nearest_nbors n ps p r"
have "set ps \<inter> set_kdt l = {}" "set ps \<inter> set_kdt r = {}"
using Node.prems(3) by auto
hence DCLR: "distinct ?nnsl" "distinct ?nnsr"
using Node invar_l invar_r by blast+
have "set ?nnsl \<inter> set_kdt r = {}" "set ?nnsr \<inter> set_kdt l = {}"
using Node.prems(1,3) set_nns by fastforce+
hence "distinct (nearest_nbors n ?nnsl p r)" "distinct (nearest_nbors n ?nnsr p l)"
using Node.IH(1,2) Node.prems(1,2) DCLR invar_l invar_r by blast+
thus ?case
using DCLR by (auto simp add: Let_def)
qed (auto simp: upd_nbors_def distinct_insort)
theorem dist_nns:
assumes "invar kdt" "sorted_wrt_dist p ps" "set ps \<inter> set_kdt kdt = {}" "distinct ps" "0 < n"
shows "\<forall>q \<in> set_kdt kdt \<union> set ps - set (nearest_nbors n ps p kdt). dist (last (nearest_nbors n ps p kdt)) p \<le> dist q p"
using assms
proof2 (induction kdt arbitrary: ps)
case (Node k v l r)
let ?nnsl = "nearest_nbors n ps p l"
let ?nnsr = "nearest_nbors n ps p r"
have IHL: "\<forall>q \<in> set_kdt l \<union> set ps - set ?nnsl. dist (last ?nnsl) p \<le> dist q p"
using Node.IH(1) Node.prems invar_l invar_set by auto
have IHR: "\<forall>q \<in> set_kdt r \<union> set ps - set ?nnsr. dist (last ?nnsr) p \<le> dist q p"
using Node.IH(2) Node.prems invar_r invar_set by auto
have SORTED_L: "sorted_wrt_dist p ?nnsl"
using sorted_nns Node.prems(2) by blast
have SORTED_R: "sorted_wrt_dist p ?nnsr"
using sorted_nns Node.prems(2) by blast
have DISTINCT_L: "distinct ?nnsl"
using Node.prems distinct_nns invar_set invar_l by fastforce
have DISTINCT_R: "distinct ?nnsr"
using Node.prems distinct_nns invar_set invar_r
by (metis inf_bot_right inf_sup_absorb inf_sup_aci(3) sup.commute)
consider (A) "p$k \<le> v \<and> length ?nnsl = n \<and> dist p (last ?nnsl) \<le> dist v (p$k)"
| (B) "p$k \<le> v \<and> \<not>(length ?nnsl = n \<and> dist p (last ?nnsl) \<le> dist v (p$k))"
| (C) "v < p$k \<and> length ?nnsr = n \<and> dist p (last ?nnsr) \<le> dist v (p$k)"
| (D) "v < p$k \<and> \<not>(length ?nnsr = n \<and> dist p (last ?nnsr) \<le> dist v (p$k))"
by argo
thus ?case
proof cases
case A
hence "\<forall>q \<in> set_kdt r. dist (last ?nnsl) p \<le> dist q p"
using Node.prems(1,2) cutoff_r by (metis dist_commute)
thus ?thesis
using IHL A by auto
next
case B
let ?nns = "nearest_nbors n ?nnsl p r"
have "set ?nnsl \<subseteq> set_kdt l \<union> set ps" "set ps \<inter> set_kdt r = {}"
using set_nns Node.prems(1,3) by (simp add: set_nns disjoint_iff_not_equal)+
hence "set ?nnsl \<inter> set_kdt r = {}"
using Node.prems(1) by fastforce
hence IHLR: "\<forall>q \<in> set_kdt r \<union> set ?nnsl - set ?nns. dist (last ?nns) p \<le> dist q p"
using Node.IH(2)[OF _ SORTED_L _ DISTINCT_L Node.prems(5)] Node.prems(1) invar_r by blast
have "\<forall>q \<in> set ps - set ?nnsl. dist (last ?nns) p \<le> dist q p"
proof standard
fix q
assume *: "q \<in> set ps - set ?nnsl"
hence "length ?nnsl = n"
using length_nns_n by blast
hence LAST: "dist (last ?nns) p \<le> dist (last ?nnsl) p"
using last_nns_mono SORTED_L invar_r Node.prems(1,2,5) by (metis order_refl)
have "dist (last ?nnsl) p \<le> dist q p"
using IHL * by blast
thus "dist (last ?nns) p \<le> dist q p"
using LAST by argo
qed
hence R: "\<forall>q \<in> set_kdt r \<union> set ps - set ?nns. dist (last ?nns) p \<le> dist q p"
using IHLR by auto
have "\<forall>q \<in> set_kdt l - set ?nnsl. dist (last ?nns) p \<le> dist q p"
proof standard
fix q
assume *: "q \<in> set_kdt l - set ?nnsl"
hence "length ?nnsl = n"
using length_nns_n by blast
hence LAST: "dist (last ?nns) p \<le> dist (last ?nnsl) p"
using last_nns_mono SORTED_L invar_r Node.prems(1,2,5) by (metis order_refl)
have "dist (last ?nnsl) p \<le> dist q p"
using IHL * by blast
thus "dist (last ?nns) p \<le> dist q p"
using LAST by argo
qed
hence L: "\<forall>q \<in> set_kdt l - set ?nns. dist (last ?nns) p \<le> dist q p"
using IHLR by blast
show ?thesis
using B R L by auto
next
case C
hence "\<forall>q \<in> set_kdt l. dist (last ?nnsr) p \<le> dist q p"
using Node.prems(1,2) cutoff_l by (metis dist_commute less_imp_le)
thus ?thesis
using IHR C by auto
next
case D
let ?nns = "nearest_nbors n ?nnsr p l"
have "set ?nnsr \<subseteq> set_kdt r \<union> set ps" "set ps \<inter> set_kdt l = {}"
using set_nns Node.prems(1,3) by (simp add: set_nns disjoint_iff_not_equal)+
hence "set ?nnsr \<inter> set_kdt l = {}"
using Node.prems(1) by fastforce
hence IHRL: "\<forall>q \<in> set_kdt l \<union> set ?nnsr - set ?nns. dist (last ?nns) p \<le> dist q p"
using Node.IH(1)[OF _ SORTED_R _ DISTINCT_R Node.prems(5)] Node.prems(1) invar_l by blast
have "\<forall>q \<in> set ps - set ?nnsr. dist (last ?nns) p \<le> dist q p"
proof standard
fix q
assume *: "q \<in> set ps - set ?nnsr"
hence "length ?nnsr = n"
using length_nns_n by blast
hence LAST: "dist (last ?nns) p \<le> dist (last ?nnsr) p"
using last_nns_mono SORTED_R invar_l Node.prems(1,2,5) by (metis order_refl)
have "dist (last ?nnsr) p \<le> dist q p"
using IHR * by blast
thus "dist (last ?nns) p \<le> dist q p"
using LAST by argo
qed
hence R: "\<forall>q \<in> set_kdt l \<union> set ps - set ?nns. dist (last ?nns) p \<le> dist q p"
using IHRL by auto
have "\<forall>q \<in> set_kdt r - set ?nnsr. dist (last ?nns) p \<le> dist q p"
proof standard
fix q
assume *: "q \<in> set_kdt r - set ?nnsr"
hence "length ?nnsr = n"
using length_nns_n by blast
hence LAST: "dist (last ?nns) p \<le> dist (last ?nnsr) p"
using last_nns_mono SORTED_R invar_l Node.prems(1,2,5) by (metis order_refl)
have "dist (last ?nnsr) p \<le> dist q p"
using IHR * by blast
thus "dist (last ?nns) p \<le> dist q p"
using LAST by argo
qed
hence L: "\<forall>q \<in> set_kdt r - set ?nns. dist (last ?nns) p \<le> dist q p"
using IHRL by blast
show ?thesis
using D R L by auto
qed
qed (auto simp: sorted_wrt_dist_nbors_diff upd_nbors_def)
subsection \<open>Nearest Neighbors Definition and Theorems\<close>
definition nearest_neighbors :: "nat \<Rightarrow> ('k::finite) point \<Rightarrow> 'k kdt \<Rightarrow> 'k point list" where
"nearest_neighbors n p kdt = nearest_nbors n [] p kdt"
theorem length_nearest_neighbors:
"length (nearest_neighbors n p kdt) = min n (size_kdt kdt)"
by (simp add: length_nns nearest_neighbors_def)
theorem sorted_wrt_dist_nearest_neighbors:
"sorted_wrt_dist p (nearest_neighbors n p kdt)"
using sorted_nns unfolding nearest_neighbors_def sorted_wrt_dist_def by force
theorem distinct_nearest_neighbors:
assumes "invar kdt"
shows "distinct (nearest_neighbors n p kdt)"
using assms by (simp add: distinct_nns nearest_neighbors_def)
theorem dist_nearest_neighbors:
assumes "invar kdt" "nns = nearest_neighbors n p kdt"
shows "\<forall>q \<in> (set_kdt kdt - set nns). \<forall>r \<in> set nns. dist r p \<le> dist q p"
proof (cases "0 < n")
case True
have "\<forall>q \<in> set_kdt kdt - set nns. dist (last nns) p \<le> dist q p"
using nearest_neighbors_def dist_nns[OF assms(1), of p "[]", OF _ _ _ True] assms(2)
by (simp add: nearest_neighbors_def sorted_wrt_dist_def)
hence "\<forall>q \<in> set_kdt kdt - set nns. \<forall>n \<in> set nns. dist n p \<le> dist q p"
using assms(2) sorted_wrt_dist_nearest_neighbors[of p n kdt] sorted_wrt_dist_last[of p nns] by force
thus ?thesis
using nearest_neighbors_def by blast
next
case False
hence "length nns = 0"
using assms(2) unfolding nearest_neighbors_def by (auto simp: length_nns)
thus ?thesis
by simp
qed
end
|
"upsetting, traumatic and left with an..."
Antenatal care at the Peartree clinic was very good. The sonographer was nice, as were the midwives.
I have written directly to the Maternity Unit and have since received an apology for the care my newborn baby and I received. What concerns me is the posting from a lady who had her baby towards the end of 2010 (response from the Hospital Dec 2010). My baby was born in January 2010 - I wrote to the hospital to complain about our treatment, including massive miscommunication regarding discharge, i.e. midwives said I could go home, but I wasn't told that I couldn't until 9pm! It seems that the lady who posted her problems suffered exactly the same issue, more than six months after the issue had been raised with the hospital.
We're sorry that you had this experience when you gave birth at the QEII recently. We are pleased, however, that you complained, because it is very important that such issues are addressed for the benefit of all women using our maternity service. To help with this, we will make sure that our midwifery team gets to read your comments.
(* *********************************************************************)
(* *)
(* The Compcert verified compiler *)
(* *)
(* Xavier Leroy, INRIA Paris-Rocquencourt *)
(* *)
(* Copyright Institut National de Recherche en Informatique et en *)
(* Automatique. All rights reserved. This file is distributed *)
(* under the terms of the GNU General Public License as published by *)
(* the Free Software Foundation, either version 2 of the License, or *)
(* (at your option) any later version. This file is also distributed *)
(* under the terms of the INRIA Non-Commercial License Agreement. *)
(* *)
(* *********************************************************************)
(** Multi-way branches (``switch'' statements) and their compilation
to comparison trees. *)
Require Import EqNat.
Require Import FMaps.
Require FMapAVL.
Require Import Coqlib.
Require Import Integers.
Require Import Ordered.
Module IntMap := FMapAVL.Make(OrderedInt).
Module IntMapF := FMapFacts.Facts(IntMap).
(** A multi-way branch is composed of a list of (key, action) pairs,
plus a default action. *)
Definition table : Type := list (int * nat).
Fixpoint switch_target (n: int) (dfl: nat) (cases: table)
{struct cases} : nat :=
match cases with
| nil => dfl
| (key, action) :: rem =>
if Int.eq n key then action else switch_target n dfl rem
end.
(** Multi-way branches are translated to comparison trees.
Each node of the tree performs either
- an equality against one of the keys;
- or a "less than" test against one of the keys;
- or a computed branch (jump table) against a range of key values. *)
Inductive comptree : Type :=
| CTaction: nat -> comptree
| CTifeq: int -> nat -> comptree -> comptree
| CTiflt: int -> comptree -> comptree -> comptree
| CTjumptable: int -> int -> list nat -> comptree -> comptree.
Fixpoint comptree_match (n: int) (t: comptree) {struct t}: option nat :=
match t with
| CTaction act => Some act
| CTifeq key act t' =>
if Int.eq n key then Some act else comptree_match n t'
| CTiflt key t1 t2 =>
if Int.ltu n key then comptree_match n t1 else comptree_match n t2
| CTjumptable ofs sz tbl t' =>
if Int.ltu (Int.sub n ofs) sz
then list_nth_z tbl (Int.unsigned (Int.sub n ofs))
else comptree_match n t'
end.
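(** A quick sanity check: a one-node tree returns its stored action for
any key. *)
Compute comptree_match Int.zero (CTaction 3%nat). (* = Some 3%nat *)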
(** The translation from a table to a comparison tree is performed
by untrusted Caml code (function [compile_switch] in
file [RTLgenaux.ml]). In Coq, we validate a posteriori the
result of this function. In other terms, we now develop
and prove correct Coq functions that take a table and a comparison
tree, and check that their semantics are equivalent. *)
Fixpoint split_lt (pivot: int) (cases: table)
{struct cases} : table * table :=
match cases with
| nil => (nil, nil)
| (key, act) :: rem =>
let (l, r) := split_lt pivot rem in
if Int.ltu key pivot
then ((key, act) :: l, r)
else (l, (key, act) :: r)
end.
Fixpoint split_eq (pivot: int) (cases: table)
{struct cases} : option nat * table :=
match cases with
| nil => (None, nil)
| (key, act) :: rem =>
let (same, others) := split_eq pivot rem in
if Int.eq key pivot
then (Some act, others)
else (same, (key, act) :: others)
end.
Fixpoint split_between (ofs sz: int) (cases: table)
{struct cases} : IntMap.t nat * table :=
match cases with
| nil => (IntMap.empty nat, nil)
| (key, act) :: rem =>
let (inside, outside) := split_between ofs sz rem in
if Int.ltu (Int.sub key ofs) sz
then (IntMap.add key act inside, outside)
else (inside, (key, act) :: outside)
end.
Definition refine_low_bound (v lo: Z) :=
if zeq v lo then lo + 1 else lo.
Definition refine_high_bound (v hi: Z) :=
if zeq v hi then hi - 1 else hi.
Fixpoint validate_jumptable (cases: IntMap.t nat) (default: nat)
(tbl: list nat) (n: int) {struct tbl} : bool :=
match tbl with
| nil => true
| act :: rem =>
beq_nat act (match IntMap.find n cases with Some a => a | None => default end)
&& validate_jumptable cases default rem (Int.add n Int.one)
end.
Fixpoint validate (default: nat) (cases: table) (t: comptree)
(lo hi: Z) {struct t} : bool :=
match t with
| CTaction act =>
match cases with
| nil =>
beq_nat act default
| (key1, act1) :: _ =>
zeq (Int.unsigned key1) lo && zeq lo hi && beq_nat act act1
end
| CTifeq pivot act t' =>
match split_eq pivot cases with
| (None, _) =>
false
| (Some act', others) =>
beq_nat act act'
&& validate default others t'
(refine_low_bound (Int.unsigned pivot) lo)
(refine_high_bound (Int.unsigned pivot) hi)
end
| CTiflt pivot t1 t2 =>
match split_lt pivot cases with
| (lcases, rcases) =>
validate default lcases t1 lo (Int.unsigned pivot - 1)
&& validate default rcases t2 (Int.unsigned pivot) hi
end
| CTjumptable ofs sz tbl t' =>
let tbl_len := list_length_z tbl in
match split_between ofs sz cases with
| (inside, outside) =>
zle (Int.unsigned sz) tbl_len
&& zle tbl_len Int.max_signed
&& validate_jumptable inside default tbl ofs
&& validate default outside t' lo hi
end
end.
Definition validate_switch (default: nat) (cases: table) (t: comptree) :=
validate default cases t 0 Int.max_unsigned.
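(** A degenerate sanity check: an empty table is correctly implemented by
a tree that always returns the default action. *)
Compute validate_switch 0%nat nil (CTaction 0%nat). (* = true *)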
(** Correctness proof for validation. *)
Lemma split_eq_prop:
forall v default n cases optact cases',
split_eq n cases = (optact, cases') ->
switch_target v default cases =
(if Int.eq v n
then match optact with Some act => act | None => default end
else switch_target v default cases').
Proof.
induction cases; simpl; intros until cases'.
intros. inversion H; subst. simpl.
destruct (Int.eq v n); auto.
destruct a as [key act].
case_eq (split_eq n cases). intros same other SEQ.
rewrite (IHcases _ _ SEQ).
predSpec Int.eq Int.eq_spec key n; intro EQ; inversion EQ; simpl.
subst n. destruct (Int.eq v key). auto. auto.
predSpec Int.eq Int.eq_spec v key.
subst v. predSpec Int.eq Int.eq_spec key n. congruence. auto.
auto.
Qed.
Lemma split_lt_prop:
forall v default n cases lcases rcases,
split_lt n cases = (lcases, rcases) ->
switch_target v default cases =
(if Int.ltu v n
then switch_target v default lcases
else switch_target v default rcases).
Proof.
induction cases; intros until rcases; simpl.
intro. inversion H; subst. simpl.
destruct (Int.ltu v n); auto.
destruct a as [key act].
case_eq (split_lt n cases). intros lc rc SEQ.
rewrite (IHcases _ _ SEQ).
case_eq (Int.ltu key n); intros; inv H0; simpl.
predSpec Int.eq Int.eq_spec v key.
subst v. rewrite H. auto.
auto.
predSpec Int.eq Int.eq_spec v key.
subst v. rewrite H. auto.
auto.
Qed.
Lemma split_between_prop:
forall v default ofs sz cases inside outside,
split_between ofs sz cases = (inside, outside) ->
switch_target v default cases =
(if Int.ltu (Int.sub v ofs) sz
then match IntMap.find v inside with Some a => a | None => default end
else switch_target v default outside).
Proof.
induction cases; intros until outside; simpl.
intros. inv H. simpl. destruct (Int.ltu (Int.sub v ofs) sz); auto.
destruct a as [key act]. case_eq (split_between ofs sz cases). intros ins outs SEQ.
rewrite (IHcases _ _ SEQ).
case_eq (Int.ltu (Int.sub key ofs) sz); intros; inv H0; simpl.
rewrite IntMapF.add_o.
predSpec Int.eq Int.eq_spec v key.
subst v. rewrite H. rewrite dec_eq_true. auto.
rewrite dec_eq_false; auto.
case_eq (Int.ltu (Int.sub v ofs) sz); intros; auto.
predSpec Int.eq Int.eq_spec v key.
subst v. congruence.
auto.
Qed.
Lemma validate_jumptable_correct_rec:
forall cases default tbl base v,
validate_jumptable cases default tbl base = true ->
0 <= Int.unsigned v < list_length_z tbl ->
list_nth_z tbl (Int.unsigned v) =
Some(match IntMap.find (Int.add base v) cases with Some a => a | None => default end).
Proof.
induction tbl; intros until v; simpl.
unfold list_length_z; simpl. intros. omegaContradiction.
rewrite list_length_z_cons. intros. destruct (andb_prop _ _ H). clear H.
generalize (beq_nat_eq _ _ (sym_equal H1)). clear H1. intro. subst a.
destruct (zeq (Int.unsigned v) 0).
unfold Int.add. rewrite e. rewrite Zplus_0_r. rewrite Int.repr_unsigned. auto.
assert (Int.unsigned (Int.sub v Int.one) = Int.unsigned v - 1).
unfold Int.sub. change (Int.unsigned Int.one) with 1.
apply Int.unsigned_repr. split. omega.
generalize (Int.unsigned_range_2 v). omega.
replace (Int.add base v) with (Int.add (Int.add base Int.one) (Int.sub v Int.one)).
rewrite <- IHtbl. rewrite H. auto. auto. rewrite H. omega.
rewrite Int.sub_add_opp. rewrite Int.add_permut. rewrite Int.add_assoc.
replace (Int.add Int.one (Int.neg Int.one)) with Int.zero.
rewrite Int.add_zero. apply Int.add_commut.
apply Int.mkint_eq. reflexivity.
Qed.
Lemma validate_jumptable_correct:
forall cases default tbl ofs v sz,
validate_jumptable cases default tbl ofs = true ->
Int.ltu (Int.sub v ofs) sz = true ->
Int.unsigned sz <= list_length_z tbl ->
list_nth_z tbl (Int.unsigned (Int.sub v ofs)) =
Some(match IntMap.find v cases with Some a => a | None => default end).
Proof.
intros.
exploit Int.ltu_inv; eauto. intros.
rewrite (validate_jumptable_correct_rec cases default tbl ofs).
rewrite Int.sub_add_opp. rewrite Int.add_permut. rewrite <- Int.sub_add_opp.
rewrite Int.sub_idem. rewrite Int.add_zero. auto.
auto.
omega.
Qed.
Lemma validate_correct_rec:
forall default v t cases lo hi,
validate default cases t lo hi = true ->
lo <= Int.unsigned v <= hi ->
comptree_match v t = Some (switch_target v default cases).
Proof.
Opaque Int.sub.
induction t; simpl; intros until hi.
(* base case *)
destruct cases as [ | [key1 act1] cases1]; intros.
replace n with default. reflexivity.
symmetry. apply beq_nat_eq. auto.
destruct (andb_prop _ _ H). destruct (andb_prop _ _ H1). clear H H1.
assert (Int.unsigned key1 = lo). eapply proj_sumbool_true; eauto.
assert (lo = hi). eapply proj_sumbool_true; eauto.
assert (Int.unsigned v = Int.unsigned key1). omega.
replace n with act1.
simpl. unfold Int.eq. rewrite H5. rewrite zeq_true. auto.
symmetry. apply beq_nat_eq. auto.
(* eq node *)
case_eq (split_eq i cases). intros optact cases' EQ.
destruct optact as [ act | ]. 2: congruence.
intros. destruct (andb_prop _ _ H). clear H.
rewrite (split_eq_prop v default _ _ _ _ EQ).
predSpec Int.eq Int.eq_spec v i.
f_equal. apply beq_nat_eq; auto.
eapply IHt. eauto.
assert (Int.unsigned v <> Int.unsigned i).
rewrite <- (Int.repr_unsigned v) in H.
rewrite <- (Int.repr_unsigned i) in H.
congruence.
split.
unfold refine_low_bound. destruct (zeq (Int.unsigned i) lo); omega.
unfold refine_high_bound. destruct (zeq (Int.unsigned i) hi); omega.
(* lt node *)
case_eq (split_lt i cases). intros lcases rcases EQ V RANGE.
destruct (andb_prop _ _ V). clear V.
rewrite (split_lt_prop v default _ _ _ _ EQ).
unfold Int.ltu. destruct (zlt (Int.unsigned v) (Int.unsigned i)).
eapply IHt1. eauto. omega.
eapply IHt2. eauto. omega.
(* jumptable node *)
case_eq (split_between i i0 cases). intros ins outs EQ V RANGE.
destruct (andb_prop _ _ V). clear V.
destruct (andb_prop _ _ H). clear H.
destruct (andb_prop _ _ H1). clear H1.
rewrite (split_between_prop v _ _ _ _ _ _ EQ).
case_eq (Int.ltu (Int.sub v i) i0); intros.
eapply validate_jumptable_correct; eauto.
eapply proj_sumbool_true; eauto.
eapply IHt; eauto.
Qed.
Definition table_tree_agree
(default: nat) (cases: table) (t: comptree) : Prop :=
forall v, comptree_match v t = Some(switch_target v default cases).
Theorem validate_switch_correct:
forall default t cases,
validate_switch default cases t = true ->
table_tree_agree default cases t.
Proof.
unfold validate_switch, table_tree_agree; intros.
eapply validate_correct_rec; eauto.
apply Int.unsigned_range_2.
Qed.
|
#redirect Davis Timebank
|
State Before: α : Type u
inst✝² : Group α
inst✝¹ : LE α
inst✝ : CovariantClass α α (fun x x_1 => x * x_1) fun x x_1 => x ≤ x_1
a b c d : α
⊢ b⁻¹ * a ≤ c ↔ a ≤ b * c State After: no goals Tactic: rw [← mul_le_mul_iff_left b, mul_inv_cancel_left] |
@testset "Manopt Cyclic Proximal Point" begin
using Dates
n = 100
N = Power(Circle(),(n,))
f = PowPoint(artificialS1Signal(n))
F = x -> costL2TV(N,f,0.5,x)
proxes = [ (λ,x) -> proxDistance(N,λ,f,x), (λ,x) -> proxTV(N,0.5*λ,x) ]
fR,rec = cyclicProximalPoint(N,F,proxes, f;
λ = i -> π/(2*i),
stoppingCriterion = stopWhenAll( stopAfter(Second(10)), stopAfterIteration(5000) ),
debug = [DebugIterate()," ",DebugCost()," ",DebugProximalParameter(),"\n",10000],
record = [RecordProximalParameter(), RecordIterate(f), RecordCost()]
)
@test F(f) > F(fR)
#
o = CyclicProximalPointOptions(f, stopAfterIteration(1), i -> π/(2*i))
p = ProximalProblem(N,F,proxes,[1,2])
@test_throws ErrorException getProximalMap(p,1.,f,3)
end |
module PotentialDB
using Reexport
@reexport using PotentialCalculation
export CASnumber, PotentialRegistry,
addpotential!, defaultregistry, listpotentials, loadpotential, saveregistry
include("potentialregistry.jl")
using .potentialregistry
end # module
|
(** Just a union-find *)
Require Import Arith.Peano_dec.
Require Import String.
Require Import List.
Require Import Cybele.Cybele.
Require Import Cybele.DataStructures.
Set Implicit Arguments.
Set Transparent Obligations.
Import Monad ListNotations.
(** Normal union-find with an array of integers *)
Module UnionFind.
Definition Sig: Sig.t := Sig.Make nil ((Array.internal_t nat: Type) :: nil).
Definition Array := Array.t Sig nat.
Definition M := M Sig.
(** The representative of [i] *)
Definition Find (array: Array) (i: nat): M nat :=
fix_ (fun f i =>
let! j := Array.read array i in
match eq_nat_dec i j with
| left _ => ret i
| right _ => f j
end) i.
(** Merge the equivalent classes of [i] and [j] *)
Definition Unify (array: Array) (i j: nat): M unit :=
let! i' := Find array i in
let! j' := Find array j in
Array.write array i' j'.
(** Do the union-find with a list of equalities and return
the list of representatives *)
Definition UnionFind (n: nat) (unions: list (nat * nat))
: M (list nat) :=
let! array := tmp_ref Sig 0 (seq 0 n) in
do! List.iter (fun (ij: nat * nat) =>
let (i, j) := ij in
Unify array i j)
unions in
Array.to_list array.
Definition Eval (n: nat) (l: list (nat * nat)) (nb_steps: nat) :=
UnionFind n l (State.of_prophecy (Prophecy.of_nat _ nb_steps)).
Compute Eval 10 nil 0.
Compute Eval 10 [(0, 1); (0, 0); (2, 9); (1, 4); (4, 1)] 2.
End UnionFind.
|
Name: Alex Loyd
This is Alex with several of his friends.
Biography: Alex Loyd is a happy-go-lucky fellow who is grateful for his friends and his lifestyle. He is currently the drummer for a high school Music Scene band which has yet to be named. The band consists of his friends Eddie, Jackson, Max, Jack, and himself. His favorite hobbies include playing copious amounts of video games, watching TV, listening to music, eating, and hanging out with friends.
Alex can often be seen hanging out with at least 2 friends, earphones draped around his neck and a bottle of Coke in hand. He is a sharp boy with a good sense of humor. He is generally happy and very outgoing.
Everyone loves Alex.
|
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal31.
Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.
Derive Show for lst.
Derive Arbitrary for lst.
Instance Dec_Eq_lst : Dec_Eq lst.
Proof. dec_eq. Qed.
Lemma lfind_hyp_test : (@eq lst (qreva (qreva (Nil) Nil) Nil) (Nil)).
Admitted.
QuickChick lfind_hyp_test.
|
module Sixel.Helpers
import Data.Buffer
import Sixel.Library
import Sixel.Symbols
import Sixel.Allocator
%foreign (sixel "sixel_helper_get_additional_message")
sixel_helper_get_additional_message : String
%foreign (sixel "sixel_helper_set_additional_message")
sixel_helper_set_additional_message : String -> ()
%foreign (sixel "sixel_helper_format_error")
sixel_helper_format_error : Int -> String
export %inline
getAdditionalMessage : String
getAdditionalMessage = sixel_helper_get_additional_message
export %inline
setAdditionalMessage : String -> IO ()
setAdditionalMessage = pure . sixel_helper_set_additional_message
export %inline
format : Status -> String
format (MkStatus s) = sixel_helper_format_error s
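-- A minimal usage sketch (assuming the sixel C library is linked in, and
-- with a hypothetical status code `err` obtained from some other call):
--
-- do setAdditionalMessage "while decoding frame 3"
--    putStrLn (format (MkStatus err))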
|
import algebra.homology.exact
import category_theory.abelian.opposite
import category_theory.abelian.exact
import category_theory.limits.constructions.epi_mono
import category_theory.abelian.pseudoelements
noncomputable theory
open category_theory
open category_theory.limits
universes w v u
namespace list
variables {α : Type*} (a : α) (L : list α) (m n : ℕ)
/-- Returns the sublist of `L` starting at index `m` of length `n`
(or shorter, if `L` is too short). -/
def extract := (L.drop m).take n
@[simp] lemma extract_nil : [].extract m n = ([] : list α) :=
by { cases n, refl, cases m, refl, refl }
@[simp] lemma extract_zero_right : L.extract m 0 = [] := rfl
@[simp] lemma extract_cons_succ_left : (a :: L).extract m.succ n = L.extract m n := rfl
end list
example : [0,1,2,3,4,5,6,7,8,9].extract 4 3 = [4,5,6] := rfl
namespace category_theory
variables (𝒞 : Type u) [category.{v} 𝒞]
variables [has_zero_morphisms 𝒞] [has_images 𝒞] [has_kernels 𝒞]
variables {𝒜 : Type u} [category.{v} 𝒜] [abelian 𝒜]
namespace exact -- move this
variables {A B C : 𝒜} (f : A ⟶ B) (g : B ⟶ C)
def kernel_op_iso : (kernel f.op).unop ≅ cokernel f :=
{ hom := (kernel.lift _ (cokernel.π f).op begin
simp [← op_comp, limits.cokernel.condition],
end).unop ≫ eq_to_hom (opposite.unop_op (cokernel f)),
inv := cokernel.desc _ (eq_to_hom (opposite.unop_op B).symm ≫ (kernel.ι f.op).unop) begin
dsimp,
rw [category.id_comp, ← f.unop_op, ← unop_comp, f.unop_op, kernel.condition],
refl,
end,
hom_inv_id' := begin
dsimp,
simp,
rw [← unop_id, ← (cokernel.desc f (kernel.ι f.op).unop _).unop_op, ← unop_comp],
congr' 1,
apply limits.equalizer.hom_ext,
dsimp,
simp [← op_comp],
end,
inv_hom_id' := begin
apply limits.coequalizer.hom_ext,
dsimp,
simp [← unop_comp],
end }
def cokernel_op_iso : (cokernel f.op).unop ≅ kernel f :=
{ hom := kernel.lift _ ((cokernel.π f.op).unop ≫ eq_to_hom (opposite.unop_op _)) begin
simp only [eq_to_hom_refl, category.comp_id],
rw [← f.unop_op, ← unop_comp, f.op.op_unop, cokernel.condition],
refl,
end,
inv := eq_to_hom (opposite.unop_op _).symm ≫ (cokernel.desc _ (kernel.ι f).op (by simp [← op_comp])).unop,
hom_inv_id' := begin
simp only [category.id_comp, eq_to_hom_refl, category.comp_id, ← unop_id, ← unop_comp],
rw [← (kernel.lift f (cokernel.π f.op).unop _).unop_op, ← unop_comp],
congr' 1,
apply limits.coequalizer.hom_ext,
dsimp,
simp [← op_comp],
end,
inv_hom_id' := begin
apply limits.equalizer.hom_ext,
dsimp,
simp [← unop_comp]
end } .
@[simp]
lemma kernel.ι_op : (kernel.ι f.op).unop =
eq_to_hom (opposite.unop_op _) ≫ cokernel.π f ≫ (kernel_op_iso f).inv :=
begin
dsimp [kernel_op_iso],
simp,
end
@[simp]
lemma cokernel.π_op : (cokernel.π f.op).unop =
(cokernel_op_iso f).hom ≫ kernel.ι f ≫ eq_to_hom (opposite.unop_op _).symm :=
begin
dsimp [cokernel_op_iso],
simp,
end
variables {f g}
lemma op (h : exact f g) : exact g.op f.op :=
begin
rw abelian.exact_iff,
refine ⟨_, _⟩,
{ simp only [← op_comp, h.w, op_zero], },
apply_fun quiver.hom.unop,
swap, { exact quiver.hom.unop_inj },
simp only [h, unop_comp, cokernel.π_op, eq_to_hom_refl, kernel.ι_op, category.id_comp,
category.assoc, kernel_comp_cokernel_assoc, zero_comp, comp_zero, unop_zero],
end
variables (f g)
def kernel_unop_iso {C B : 𝒜ᵒᵖ} (f : C ⟶ B) : opposite.op (kernel f.unop) ≅ cokernel f :=
{ hom := (kernel.lift _ (cokernel.π f).unop (by simp [← unop_comp])).op ≫
eq_to_hom (opposite.op_unop (cokernel f)),
inv := cokernel.desc _ (eq_to_hom (opposite.op_unop _).symm ≫ (kernel.ι f.unop).op) begin
dsimp,
rw [← f.op_unop, category.id_comp, ← op_comp, f.op_unop, kernel.condition],
refl,
end,
hom_inv_id' := begin
dsimp,
simp,
rw [← (cokernel.desc f (kernel.ι f.unop).op _).op_unop, ← op_comp, ← op_id],
congr' 1,
apply limits.equalizer.hom_ext,
dsimp,
simp [← unop_comp],
end,
inv_hom_id' := begin
apply limits.coequalizer.hom_ext,
dsimp,
simp [← op_comp],
end }
def cokernel_unop_iso {C B : 𝒜ᵒᵖ} (f : C ⟶ B) : opposite.op (cokernel f.unop) ≅ kernel f :=
{ hom := kernel.lift _ ((cokernel.π f.unop).op ≫ eq_to_hom (opposite.op_unop _)) begin
dsimp,
rw [← f.op_unop, category.comp_id, ← op_comp, f.op_unop, cokernel.condition],
refl,
end,
inv := eq_to_hom (opposite.op_unop _).symm ≫
(cokernel.desc _ (kernel.ι f).unop (by simp [← unop_comp])).op,
hom_inv_id' := begin
dsimp,
rw category.id_comp,
rw [← (kernel.lift f ((cokernel.π f.unop).op ≫ 𝟙 C) _).op_unop, ← op_comp, ← op_id],
congr' 1,
apply limits.coequalizer.hom_ext,
dsimp,
simp [← unop_comp],
end,
inv_hom_id' := begin
apply limits.equalizer.hom_ext,
dsimp,
simp [← op_comp]
end }
@[simp]
lemma cokernel.π_unop {C B : 𝒜ᵒᵖ} (f : C ⟶ B) : (cokernel.π f.unop).op =
(cokernel_unop_iso f).hom ≫ kernel.ι f ≫ eq_to_hom (opposite.op_unop _).symm :=
begin
dsimp [cokernel_unop_iso],
simp,
end
@[simp]
lemma kernel.ι_unop {C B : 𝒜ᵒᵖ} (f : C ⟶ B) : (kernel.ι f.unop).op =
eq_to_hom (opposite.op_unop _) ≫ cokernel.π f ≫ (kernel_unop_iso f).inv :=
begin
dsimp [kernel_unop_iso],
simp,
end
lemma unop {C B A : 𝒜ᵒᵖ} {g : C ⟶ B} {f : B ⟶ A} (h : exact g f) : exact f.unop g.unop :=
begin
rw abelian.exact_iff,
refine ⟨by simp only [← unop_comp, h.w, unop_zero], _⟩,
apply_fun quiver.hom.op,
swap, { exact quiver.hom.op_inj },
simp [h],
end
end exact
/-- A sequence `[f, g, ...]` of morphisms is exact if the pair `(f,g)` is exact,
and the sequence `[g, ...]` is exact.
Recall that the pair `(f,g)` is exact if `f ≫ g = 0`
and the natural map from the image of `f` to the kernel of `g` is an epimorphism
(equivalently, in abelian categories: isomorphism). -/
inductive exact_seq : list (arrow 𝒞) → Prop
| nil : exact_seq []
| single : ∀ f, exact_seq [f]
| cons : ∀ {A B C : 𝒞} (f : A ⟶ B) (g : B ⟶ C) (hfg : exact f g) (L) (hgL : exact_seq (g :: L)),
exact_seq (f :: g :: L)
variable {𝒞}
lemma exact_iff_exact_seq {A B C : 𝒞} (f : A ⟶ B) (g : B ⟶ C) :
exact f g ↔ exact_seq 𝒞 [f, g] :=
begin
split,
{ intro h, exact exact_seq.cons f g h _ (exact_seq.single _), },
{ rintro (_ | _ | ⟨A, B, C, f, g, hfg, _, _ | _ | _⟩), exact hfg, }
end
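-- For example, `exact_seq 𝒞 [f, g, h]` amounts to `exact f g` together
-- with `exact g h`.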
namespace exact_seq
lemma extract : ∀ {L : list (arrow 𝒞)} (h : exact_seq 𝒞 L) (m n : ℕ),
exact_seq 𝒞 (L.extract m n)
| L (nil) m n := by { rw list.extract_nil, exact nil }
| L (single f) m 0 := nil
| L (single f) 0 (n+1) := by { cases n; exact single f }
| L (single f) (m+1) (n+1) := by { cases m; exact nil }
| _ (cons f g hfg L hL) (m+1) n := extract hL m n
| _ (cons f g hfg L hL) 0 0 := nil
| _ (cons f g hfg L hL) 0 1 := single f
| _ (cons f g hfg L hL) 0 (n+2) := cons f g hfg (L.take n) (extract hL 0 (n+1))
inductive arrow_congr : Π (L L' : list (arrow 𝒞)), Prop
| nil : arrow_congr [] []
| cons : ∀ {A B : 𝒞} {f f' : A ⟶ B} {L L' : list (arrow 𝒞)} (h : f = f') (H : arrow_congr L L'),
arrow_congr (f :: L) (f' :: L')
lemma congr : ∀ {L L' : list (arrow 𝒞)}, exact_seq 𝒞 L → arrow_congr L L' → exact_seq 𝒞 L'
| _ _ h arrow_congr.nil := exact_seq.nil
| _ _ h (arrow_congr.cons h₁ arrow_congr.nil) := exact_seq.single _
| _ _ h (arrow_congr.cons h₁ ((arrow_congr.cons h₂ H))) :=
begin
substs h₁ h₂,
rcases h with _ | _ | ⟨A, B, C, f, g, hfg, _, hL⟩,
refine exact_seq.cons _ _ hfg _ (congr hL (arrow_congr.cons rfl H)),
end
lemma append : ∀ {L₁ L₂ L₃ : list (arrow 𝒞)}
(h₁₂ : exact_seq 𝒞 (L₁ ++ L₂)) (h₂₃ : exact_seq 𝒞 (L₂ ++ L₃)) (h₂ : L₂ ≠ []),
exact_seq 𝒞 (L₁ ++ L₂ ++ L₃)
| L₁ [] L₃ h₁₂ h₂₃ h := (h rfl).elim
| [] L₂ L₃ h₁₂ h₂₃ h := by rwa list.nil_append
| (_::[]) (_::L₂) L₃ (cons f g hfg L hL) h₂₃ h := cons f g hfg _ h₂₃
| (_::_::L₁) L₂ L₃ (cons f g hfg L hL) h₂₃ h :=
suffices exact_seq 𝒞 ([f] ++ ([g] ++ L₁ ++ L₂) ++ L₃), { simpa only [list.append_assoc] },
cons _ _ hfg _ $
suffices exact_seq 𝒞 ((g :: L₁) ++ L₂ ++ L₃), { simpa only [list.append_assoc] },
append (by simpa only using hL) h₂₃ h
end exact_seq
namespace arrow
open _root_.opposite
variables {C : Type*} [category C]
@[simps]
def op (f : arrow C) : arrow Cᵒᵖ :=
{ left := op f.right,
right := op f.left,
hom := f.hom.op }
@[simps]
def unop (f : arrow Cᵒᵖ) : arrow C :=
{ left := unop f.right,
right := unop f.left,
hom := f.hom.unop }
@[simp] lemma op_unop (f : arrow C) : f.op.unop = f := by { cases f, dsimp [op, unop], refl }
@[simp] lemma unop_op (f : arrow Cᵒᵖ) : f.unop.op = f := by { cases f, dsimp [op, unop], refl }
@[simp] lemma op_comp_unop : (op ∘ unop : arrow Cᵒᵖ → arrow Cᵒᵖ) = id := by { ext, exact unop_op _ }
@[simp] lemma unop_comp_op : (unop ∘ op : arrow C → arrow C ) = id := by { ext, exact op_unop _ }
end arrow
namespace exact_seq
lemma op : ∀ {L : list (arrow 𝒜)}, exact_seq 𝒜 L → exact_seq 𝒜ᵒᵖ (L.reverse.map arrow.op)
| _ nil := nil
| _ (single f) := single f.op
| _ (cons f g hfg L hL) :=
begin
have := op hL,
simp only [list.reverse_cons, list.map_append] at this ⊢,
refine this.append _ (list.cons_ne_nil _ _),
exact cons _ _ hfg.op _ (single _),
end
lemma unop : ∀ {L : list (arrow 𝒜ᵒᵖ)}, exact_seq 𝒜ᵒᵖ L → exact_seq 𝒜 (L.reverse.map arrow.unop)
| _ nil := nil
| _ (single f) := single f.unop
| _ (cons f g hfg L hL) :=
begin
have := unop hL,
simp only [list.reverse_cons, list.map_append] at this ⊢,
refine this.append _ (list.cons_ne_nil _ _),
exact cons _ _ hfg.unop _ (single _),
end
lemma of_op {L : list (arrow 𝒜)} (h : exact_seq 𝒜ᵒᵖ (L.reverse.map arrow.op)) : exact_seq 𝒜 L :=
by simpa only [list.map_reverse, list.reverse_reverse, list.map_map,
arrow.unop_comp_op, list.map_id] using h.unop
lemma of_unop {L : list (arrow 𝒜ᵒᵖ)} (h : exact_seq 𝒜 (L.reverse.map arrow.unop)) :
exact_seq 𝒜ᵒᵖ L :=
by simpa only [list.map_reverse, list.reverse_reverse, list.map_map,
arrow.op_comp_unop, list.map_id] using h.op
end exact_seq
end category_theory
|
[GOAL]
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
L' : Language
inst✝¹ : Structure L' M
h : Definable A L s
φ : L →ᴸ L'
inst✝ : LHom.IsExpansionOn φ M
⊢ Definable A L' s
[PROOFSTEP]
obtain ⟨ψ, rfl⟩ := h
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
L' : Language
inst✝¹ : Structure L' M
φ : L →ᴸ L'
inst✝ : LHom.IsExpansionOn φ M
ψ : Formula (L[[↑A]]) α
⊢ Definable A L' (setOf (Formula.Realize ψ))
[PROOFSTEP]
refine' ⟨(φ.addConstants A).onFormula ψ, _⟩
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
L' : Language
inst✝¹ : Structure L' M
φ : L →ᴸ L'
inst✝ : LHom.IsExpansionOn φ M
ψ : Formula (L[[↑A]]) α
⊢ setOf (Formula.Realize ψ) = setOf (Formula.Realize (LHom.onFormula (LHom.addConstants (↑A) φ) ψ))
[PROOFSTEP]
ext x
[GOAL]
case intro.h
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
L' : Language
inst✝¹ : Structure L' M
φ : L →ᴸ L'
inst✝ : LHom.IsExpansionOn φ M
ψ : Formula (L[[↑A]]) α
x : α → M
⊢ x ∈ setOf (Formula.Realize ψ) ↔ x ∈ setOf (Formula.Realize (LHom.onFormula (LHom.addConstants (↑A) φ) ψ))
[PROOFSTEP]
simp only [mem_setOf_eq, LHom.realize_onFormula]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
⊢ Definable ∅ L s ↔ ∃ φ, s = setOf (Formula.Realize φ)
[PROOFSTEP]
rw [Definable, Equiv.exists_congr_left (LEquiv.addEmptyConstants L (∅ : Set M)).onFormula]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
⊢ (∃ φ, s = setOf (Formula.Realize φ)) ↔
∃ b, s = setOf (Formula.Realize (↑(LEquiv.onFormula (LEquiv.addEmptyConstants L ↑∅)).symm b))
[PROOFSTEP]
simp [-constantsOn]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
hAs : Definable A L s
hAB : A ⊆ B
⊢ Definable B L s
[PROOFSTEP]
rw [definable_iff_empty_definable_with_params] at *
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
hAs : Definable ∅ (L[[↑A]]) s
hAB : A ⊆ B
⊢ Definable ∅ (L[[↑B]]) s
[PROOFSTEP]
exact hAs.map_expansion (L.lhomWithConstantsMap (Set.inclusion hAB))
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
⊢ ∅ = setOf (Formula.Realize ⊥)
[PROOFSTEP]
ext
[GOAL]
case h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
x✝ : α → M
⊢ x✝ ∈ ∅ ↔ x✝ ∈ setOf (Formula.Realize ⊥)
[PROOFSTEP]
simp
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
⊢ univ = setOf (Formula.Realize ⊤)
[PROOFSTEP]
ext
[GOAL]
case h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
x✝ : α → M
⊢ x✝ ∈ univ ↔ x✝ ∈ setOf (Formula.Realize ⊤)
[PROOFSTEP]
simp
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s f g : Set (α → M)
hf : Definable A L f
hg : Definable A L g
⊢ Definable A L (f ∩ g)
[PROOFSTEP]
rcases hf with ⟨φ, rfl⟩
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s g : Set (α → M)
hg : Definable A L g
φ : Formula (L[[↑A]]) α
⊢ Definable A L (setOf (Formula.Realize φ) ∩ g)
[PROOFSTEP]
rcases hg with ⟨θ, rfl⟩
[GOAL]
case intro.intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
φ θ : Formula (L[[↑A]]) α
⊢ Definable A L (setOf (Formula.Realize φ) ∩ setOf (Formula.Realize θ))
[PROOFSTEP]
refine' ⟨φ ⊓ θ, _⟩
[GOAL]
case intro.intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
φ θ : Formula (L[[↑A]]) α
⊢ setOf (Formula.Realize φ) ∩ setOf (Formula.Realize θ) = setOf (Formula.Realize (φ ⊓ θ))
[PROOFSTEP]
ext
[GOAL]
case intro.intro.h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
φ θ : Formula (L[[↑A]]) α
x✝ : α → M
⊢ x✝ ∈ setOf (Formula.Realize φ) ∩ setOf (Formula.Realize θ) ↔ x✝ ∈ setOf (Formula.Realize (φ ⊓ θ))
[PROOFSTEP]
simp
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s f g : Set (α → M)
hf : Definable A L f
hg : Definable A L g
⊢ Definable A L (f ∪ g)
[PROOFSTEP]
rcases hf with ⟨φ, hφ⟩
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s f g : Set (α → M)
hg : Definable A L g
φ : Formula (L[[↑A]]) α
hφ : f = setOf (Formula.Realize φ)
⊢ Definable A L (f ∪ g)
[PROOFSTEP]
rcases hg with ⟨θ, hθ⟩
[GOAL]
case intro.intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s f g : Set (α → M)
φ : Formula (L[[↑A]]) α
hφ : f = setOf (Formula.Realize φ)
θ : Formula (L[[↑A]]) α
hθ : g = setOf (Formula.Realize θ)
⊢ Definable A L (f ∪ g)
[PROOFSTEP]
refine' ⟨φ ⊔ θ, _⟩
[GOAL]
case intro.intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s f g : Set (α → M)
φ : Formula (L[[↑A]]) α
hφ : f = setOf (Formula.Realize φ)
θ : Formula (L[[↑A]]) α
hθ : g = setOf (Formula.Realize θ)
⊢ f ∪ g = setOf (Formula.Realize (φ ⊔ θ))
[PROOFSTEP]
ext
[GOAL]
case intro.intro.h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s f g : Set (α → M)
φ : Formula (L[[↑A]]) α
hφ : f = setOf (Formula.Realize φ)
θ : Formula (L[[↑A]]) α
hθ : g = setOf (Formula.Realize θ)
x✝ : α → M
⊢ x✝ ∈ f ∪ g ↔ x✝ ∈ setOf (Formula.Realize (φ ⊔ θ))
[PROOFSTEP]
rw [hφ, hθ, mem_setOf_eq, Formula.realize_sup, mem_union, mem_setOf_eq, mem_setOf_eq]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s : Finset ι
⊢ Definable A L (Finset.inf s f)
[PROOFSTEP]
classical
refine' Finset.induction definable_univ (fun i s _ h => _) s
rw [Finset.inf_insert]
exact (hf i).inter h
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s : Finset ι
⊢ Definable A L (Finset.inf s f)
[PROOFSTEP]
refine' Finset.induction definable_univ (fun i s _ h => _) s
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝¹ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s✝ : Finset ι
i : ι
s : Finset ι
x✝ : ¬i ∈ s
h : Definable A L (Finset.inf s f)
⊢ Definable A L (Finset.inf (insert i s) f)
[PROOFSTEP]
rw [Finset.inf_insert]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝¹ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s✝ : Finset ι
i : ι
s : Finset ι
x✝ : ¬i ∈ s
h : Definable A L (Finset.inf s f)
⊢ Definable A L (f i ⊓ Finset.inf s f)
[PROOFSTEP]
exact (hf i).inter h
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s : Finset ι
⊢ Definable A L (Finset.sup s f)
[PROOFSTEP]
classical
refine' Finset.induction definable_empty (fun i s _ h => _) s
rw [Finset.sup_insert]
exact (hf i).union h
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s : Finset ι
⊢ Definable A L (Finset.sup s f)
[PROOFSTEP]
refine' Finset.induction definable_empty (fun i s _ h => _) s
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝¹ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s✝ : Finset ι
i : ι
s : Finset ι
x✝ : ¬i ∈ s
h : Definable A L (Finset.sup s f)
⊢ Definable A L (Finset.sup (insert i s) f)
[PROOFSTEP]
rw [Finset.sup_insert]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝¹ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s✝ : Finset ι
i : ι
s : Finset ι
x✝ : ¬i ∈ s
h : Definable A L (Finset.sup s f)
⊢ Definable A L (f i ⊔ Finset.sup s f)
[PROOFSTEP]
exact (hf i).union h
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s : Finset ι
⊢ Definable A L (⋂ (i : ι) (_ : i ∈ s), f i)
[PROOFSTEP]
rw [← Finset.inf_set_eq_iInter]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s : Finset ι
⊢ Definable A L (Finset.inf s fun i => f i)
[PROOFSTEP]
exact definable_finset_inf hf s
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s : Finset ι
⊢ Definable A L (⋃ (i : ι) (_ : i ∈ s), f i)
[PROOFSTEP]
rw [← Finset.sup_set_eq_biUnion]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
ι : Type u_2
f : ι → Set (α → M)
hf : ∀ (i : ι), Definable A L (f i)
s : Finset ι
⊢ Definable A L (Finset.sup s fun i => f i)
[PROOFSTEP]
exact definable_finset_sup hf s
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ s : Set (α → M)
hf : Definable A L s
⊢ Definable A L sᶜ
[PROOFSTEP]
rcases hf with ⟨φ, hφ⟩
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ s : Set (α → M)
φ : Formula (L[[↑A]]) α
hφ : s = setOf (Formula.Realize φ)
⊢ Definable A L sᶜ
[PROOFSTEP]
refine' ⟨φ.not, _⟩
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ s : Set (α → M)
φ : Formula (L[[↑A]]) α
hφ : s = setOf (Formula.Realize φ)
⊢ sᶜ = setOf (Formula.Realize (Formula.not φ))
[PROOFSTEP]
ext v
[GOAL]
case intro.h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ s : Set (α → M)
φ : Formula (L[[↑A]]) α
hφ : s = setOf (Formula.Realize φ)
v : α → M
⊢ v ∈ sᶜ ↔ v ∈ setOf (Formula.Realize (Formula.not φ))
[PROOFSTEP]
rw [hφ, compl_setOf, mem_setOf, mem_setOf, Formula.realize_not]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
f : α → β
s : Set (α → M)
h : Definable A L s
⊢ Definable A L ((fun g => g ∘ f) ⁻¹' s)
[PROOFSTEP]
obtain ⟨φ, rfl⟩ := h
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
f : α → β
φ : Formula (L[[↑A]]) α
⊢ Definable A L ((fun g => g ∘ f) ⁻¹' setOf (Formula.Realize φ))
[PROOFSTEP]
refine' ⟨φ.relabel f, _⟩
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
f : α → β
φ : Formula (L[[↑A]]) α
⊢ (fun g => g ∘ f) ⁻¹' setOf (Formula.Realize φ) = setOf (Formula.Realize (Formula.relabel f φ))
[PROOFSTEP]
ext
[GOAL]
case intro.h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
f : α → β
φ : Formula (L[[↑A]]) α
x✝ : β → M
⊢ x✝ ∈ (fun g => g ∘ f) ⁻¹' setOf (Formula.Realize φ) ↔ x✝ ∈ setOf (Formula.Realize (Formula.relabel f φ))
[PROOFSTEP]
simp only [Set.preimage_setOf_eq, mem_setOf_eq, Formula.realize_relabel]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ≃ β
⊢ Definable A L ((fun g => g ∘ ↑f) '' s)
[PROOFSTEP]
refine' (congr rfl _).mp (h.preimage_comp f.symm)
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ≃ β
⊢ (fun g => g ∘ ↑f.symm) ⁻¹' s = (fun g => g ∘ ↑f) '' s
[PROOFSTEP]
rw [image_eq_preimage_of_inverse]
[GOAL]
case h₁
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ≃ β
⊢ Function.LeftInverse (fun g => g ∘ ↑f.symm) fun g => g ∘ ↑f
[PROOFSTEP]
intro i
[GOAL]
case h₁
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ≃ β
i : β → M
⊢ (fun g => g ∘ ↑f.symm) ((fun g => g ∘ ↑f) i) = i
[PROOFSTEP]
ext b
[GOAL]
case h₁.h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ≃ β
i : β → M
b : β
⊢ (fun g => g ∘ ↑f.symm) ((fun g => g ∘ ↑f) i) b = i b
[PROOFSTEP]
simp only [Function.comp_apply, Equiv.apply_symm_apply]
[GOAL]
case h₂
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ≃ β
⊢ Function.RightInverse (fun g => g ∘ ↑f.symm) fun g => g ∘ ↑f
[PROOFSTEP]
intro i
[GOAL]
case h₂
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ≃ β
i : α → M
⊢ (fun g => g ∘ ↑f) ((fun g => g ∘ ↑f.symm) i) = i
[PROOFSTEP]
ext a
[GOAL]
case h₂.h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ≃ β
i : α → M
a : α
⊢ (fun g => g ∘ ↑f) ((fun g => g ∘ ↑f.symm) i) a = i a
[PROOFSTEP]
simp
[GOAL]
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
m : ℕ
s : Set (α ⊕ Fin m → M)
h : Definable A L s
⊢ Definable A L ((fun g => g ∘ Sum.inl) '' s)
[PROOFSTEP]
obtain ⟨φ, rfl⟩ := h
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
m : ℕ
φ : Formula (L[[↑A]]) (α ⊕ Fin m)
⊢ Definable A L ((fun g => g ∘ Sum.inl) '' setOf (Formula.Realize φ))
[PROOFSTEP]
refine' ⟨(BoundedFormula.relabel id φ).exs, _⟩
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
m : ℕ
φ : Formula (L[[↑A]]) (α ⊕ Fin m)
⊢ (fun g => g ∘ Sum.inl) '' setOf (Formula.Realize φ) =
setOf (Formula.Realize (BoundedFormula.exs (BoundedFormula.relabel id φ)))
[PROOFSTEP]
ext x
[GOAL]
case intro.h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
m : ℕ
φ : Formula (L[[↑A]]) (α ⊕ Fin m)
x : α → M
⊢ x ∈ (fun g => g ∘ Sum.inl) '' setOf (Formula.Realize φ) ↔
x ∈ setOf (Formula.Realize (BoundedFormula.exs (BoundedFormula.relabel id φ)))
[PROOFSTEP]
simp only [Set.mem_image, mem_setOf_eq, BoundedFormula.realize_exs, BoundedFormula.realize_relabel,
Function.comp.right_id, Fin.castAdd_zero, Fin.castIso_refl]
[GOAL]
case intro.h
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
m : ℕ
φ : Formula (L[[↑A]]) (α ⊕ Fin m)
x : α → M
⊢ (∃ x_1, Formula.Realize φ x_1 ∧ x_1 ∘ Sum.inl = x) ↔
∃ xs, BoundedFormula.Realize φ (Sum.elim x (xs ∘ Fin.cast (_ : m = m))) (xs ∘ Fin.natAdd m)
[PROOFSTEP]
constructor
[GOAL]
case intro.h.mp
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
m : ℕ
φ : Formula (L[[↑A]]) (α ⊕ Fin m)
x : α → M
⊢ (∃ x_1, Formula.Realize φ x_1 ∧ x_1 ∘ Sum.inl = x) →
∃ xs, BoundedFormula.Realize φ (Sum.elim x (xs ∘ Fin.cast (_ : m = m))) (xs ∘ Fin.natAdd m)
[PROOFSTEP]
rintro ⟨y, hy, rfl⟩
[GOAL]
case intro.h.mp.intro.intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
m : ℕ
φ : Formula (L[[↑A]]) (α ⊕ Fin m)
y : α ⊕ Fin m → M
hy : Formula.Realize φ y
⊢ ∃ xs, BoundedFormula.Realize φ (Sum.elim (y ∘ Sum.inl) (xs ∘ Fin.cast (_ : m = m))) (xs ∘ Fin.natAdd m)
[PROOFSTEP]
exact ⟨y ∘ Sum.inr, (congr (congr rfl (Sum.elim_comp_inl_inr y).symm) (funext finZeroElim)).mp hy⟩
[GOAL]
case intro.h.mpr
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
m : ℕ
φ : Formula (L[[↑A]]) (α ⊕ Fin m)
x : α → M
⊢ (∃ xs, BoundedFormula.Realize φ (Sum.elim x (xs ∘ Fin.cast (_ : m = m))) (xs ∘ Fin.natAdd m)) →
∃ x_1, Formula.Realize φ x_1 ∧ x_1 ∘ Sum.inl = x
[PROOFSTEP]
rintro ⟨y, hy⟩
[GOAL]
case intro.h.mpr.intro
M : Type w
A : Set M
L : Language
inst✝ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s : Set (α → M)
m : ℕ
φ : Formula (L[[↑A]]) (α ⊕ Fin m)
x : α → M
y : Fin (m + 0) → M
hy : BoundedFormula.Realize φ (Sum.elim x (y ∘ Fin.cast (_ : m = m))) (y ∘ Fin.natAdd m)
⊢ ∃ x_1, Formula.Realize φ x_1 ∧ x_1 ∘ Sum.inl = x
[PROOFSTEP]
exact ⟨Sum.elim x y, (congr rfl (funext finZeroElim)).mp hy, Sum.elim_comp_inl _ _⟩
[GOAL]
M : Type w
A : Set M
L : Language
inst✝¹ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ↪ β
inst✝ : Finite β
⊢ Definable A L ((fun g => g ∘ ↑f) '' s)
[PROOFSTEP]
classical
cases nonempty_fintype β
refine'
(congr rfl (ext fun x => _)).mp
(((h.image_comp_equiv (Equiv.Set.sumCompl (range f))).image_comp_equiv
(Equiv.sumCongr (Equiv.ofInjective f f.injective)
(Fintype.equivFin (↥(range f)ᶜ)).symm)).image_comp_sum_inl_fin
_)
simp only [mem_preimage, mem_image, exists_exists_and_eq_and]
refine' exists_congr fun y => and_congr_right fun _ => Eq.congr_left (funext fun a => _)
simp
[GOAL]
M : Type w
A : Set M
L : Language
inst✝¹ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ↪ β
inst✝ : Finite β
⊢ Definable A L ((fun g => g ∘ ↑f) '' s)
[PROOFSTEP]
cases nonempty_fintype β
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝¹ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ↪ β
inst✝ : Finite β
val✝ : Fintype β
⊢ Definable A L ((fun g => g ∘ ↑f) '' s)
[PROOFSTEP]
refine'
(congr rfl (ext fun x => _)).mp
(((h.image_comp_equiv (Equiv.Set.sumCompl (range f))).image_comp_equiv
(Equiv.sumCongr (Equiv.ofInjective f f.injective)
(Fintype.equivFin (↥(range f)ᶜ)).symm)).image_comp_sum_inl_fin
_)
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝¹ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ↪ β
inst✝ : Finite β
val✝ : Fintype β
x : α → M
⊢ x ∈
(fun g => g ∘ Sum.inl) ''
((fun g =>
g ∘
↑(Equiv.sumCongr (Equiv.ofInjective ↑f (_ : Function.Injective ↑f))
(Fintype.equivFin ↑(range ↑f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range ↑f))) '' s)) ↔
x ∈ (fun g => g ∘ ↑f) '' s
[PROOFSTEP]
simp only [mem_preimage, mem_image, exists_exists_and_eq_and]
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝¹ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ↪ β
inst✝ : Finite β
val✝ : Fintype β
x : α → M
⊢ (∃ a,
a ∈ s ∧
((a ∘ ↑(Equiv.Set.sumCompl (range ↑f))) ∘
↑(Equiv.sumCongr (Equiv.ofInjective ↑f (_ : Function.Injective ↑f))
(Fintype.equivFin ↑(range ↑f)ᶜ).symm)) ∘
Sum.inl =
x) ↔
∃ x_1, x_1 ∈ s ∧ x_1 ∘ ↑f = x
[PROOFSTEP]
refine' exists_congr fun y => and_congr_right fun _ => Eq.congr_left (funext fun a => _)
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝¹ : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α ↪ β
inst✝ : Finite β
val✝ : Fintype β
x : α → M
y : β → M
x✝ : y ∈ s
a : α
⊢ (((y ∘ ↑(Equiv.Set.sumCompl (range ↑f))) ∘
↑(Equiv.sumCongr (Equiv.ofInjective ↑f (_ : Function.Injective ↑f)) (Fintype.equivFin ↑(range ↑f)ᶜ).symm)) ∘
Sum.inl)
a =
(y ∘ ↑f) a
[PROOFSTEP]
simp
[GOAL]
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
⊢ Definable A L ((fun g => g ∘ f) '' s)
[PROOFSTEP]
classical
cases nonempty_fintype α
cases nonempty_fintype β
have h :=
(((h.image_comp_equiv (Equiv.Set.sumCompl (range f))).image_comp_equiv
(Equiv.sumCongr (_root_.Equiv.refl _) (Fintype.equivFin _).symm)).image_comp_sum_inl_fin
_).preimage_comp
(rangeSplitting f)
have h' : A.Definable L {x : α → M | ∀ a, x a = x (rangeSplitting f (rangeFactorization f a))} :=
by
have h' : ∀ a, A.Definable L {x : α → M | x a = x (rangeSplitting f (rangeFactorization f a))} :=
by
refine' fun a => ⟨(var a).equal (var (rangeSplitting f (rangeFactorization f a))), ext _⟩
simp
refine' (congr rfl (ext _)).mp (definable_finset_biInter h' Finset.univ)
simp
refine' (congr rfl (ext fun x => _)).mp (h.inter h')
simp only [Equiv.coe_trans, mem_inter_iff, mem_preimage, mem_image, exists_exists_and_eq_and, mem_setOf_eq]
constructor
· rintro ⟨⟨y, ys, hy⟩, hx⟩
refine' ⟨y, ys, _⟩
ext a
rw [hx a, ← Function.comp_apply (f := x), ← hy]
simp
· rintro ⟨y, ys, rfl⟩
refine' ⟨⟨y, ys, _⟩, fun a => _⟩
· ext
simp [Set.apply_rangeSplitting f]
· rw [Function.comp_apply, Function.comp_apply, apply_rangeSplitting f, rangeFactorization_coe]
[GOAL]
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
⊢ Definable A L ((fun g => g ∘ f) '' s)
[PROOFSTEP]
cases nonempty_fintype α
[GOAL]
case intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝ : Fintype α
⊢ Definable A L ((fun g => g ∘ f) '' s)
[PROOFSTEP]
cases nonempty_fintype β
[GOAL]
case intro.intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
⊢ Definable A L ((fun g => g ∘ f) '' s)
[PROOFSTEP]
have h :=
(((h.image_comp_equiv (Equiv.Set.sumCompl (range f))).image_comp_equiv
(Equiv.sumCongr (_root_.Equiv.refl _) (Fintype.equivFin _).symm)).image_comp_sum_inl_fin
_).preimage_comp
(rangeSplitting f)
[GOAL]
case intro.intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
⊢ Definable A L ((fun g => g ∘ f) '' s)
[PROOFSTEP]
have h' : A.Definable L {x : α → M | ∀ a, x a = x (rangeSplitting f (rangeFactorization f a))} :=
by
have h' : ∀ a, A.Definable L {x : α → M | x a = x (rangeSplitting f (rangeFactorization f a))} :=
by
refine' fun a => ⟨(var a).equal (var (rangeSplitting f (rangeFactorization f a))), ext _⟩
simp
refine' (congr rfl (ext _)).mp (definable_finset_biInter h' Finset.univ)
simp
[GOAL]
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
⊢ Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
[PROOFSTEP]
have h' : ∀ a, A.Definable L {x : α → M | x a = x (rangeSplitting f (rangeFactorization f a))} :=
by
refine' fun a => ⟨(var a).equal (var (rangeSplitting f (rangeFactorization f a))), ext _⟩
simp
[GOAL]
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
⊢ ∀ (a : α), Definable A L {x | x a = x (rangeSplitting f (rangeFactorization f a))}
[PROOFSTEP]
refine' fun a => ⟨(var a).equal (var (rangeSplitting f (rangeFactorization f a))), ext _⟩
[GOAL]
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
a : α
⊢ ∀ (x : α → M),
x ∈ {x | x a = x (rangeSplitting f (rangeFactorization f a))} ↔
x ∈ setOf (Formula.Realize (Term.equal (var a) (var (rangeSplitting f (rangeFactorization f a)))))
[PROOFSTEP]
simp
[GOAL]
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : ∀ (a : α), Definable A L {x | x a = x (rangeSplitting f (rangeFactorization f a))}
⊢ Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
[PROOFSTEP]
refine' (congr rfl (ext _)).mp (definable_finset_biInter h' Finset.univ)
[GOAL]
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : ∀ (a : α), Definable A L {x | x a = x (rangeSplitting f (rangeFactorization f a))}
⊢ ∀ (x : α → M),
x ∈ ⋂ (i : α) (_ : i ∈ Finset.univ), {x | x i = x (rangeSplitting f (rangeFactorization f i))} ↔
x ∈ {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
[PROOFSTEP]
simp
[GOAL]
case intro.intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
⊢ Definable A L ((fun g => g ∘ f) '' s)
[PROOFSTEP]
refine' (congr rfl (ext fun x => _)).mp (h.inter h')
[GOAL]
case intro.intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
x : α → M
⊢ x ∈
(fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))) ∩
{x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))} ↔
x ∈ (fun g => g ∘ f) '' s
[PROOFSTEP]
simp only [Equiv.coe_trans, mem_inter_iff, mem_preimage, mem_image, exists_exists_and_eq_and, mem_setOf_eq]
[GOAL]
case intro.intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
x : α → M
⊢ ((∃ a,
a ∈ s ∧
((a ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl =
x ∘ rangeSplitting f) ∧
∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))) ↔
∃ x_1, x_1 ∈ s ∧ x_1 ∘ f = x
[PROOFSTEP]
constructor
[GOAL]
case intro.intro.mp
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
x : α → M
⊢ ((∃ a,
a ∈ s ∧
((a ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl =
x ∘ rangeSplitting f) ∧
∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))) →
∃ x_1, x_1 ∈ s ∧ x_1 ∘ f = x
[PROOFSTEP]
rintro ⟨⟨y, ys, hy⟩, hx⟩
[GOAL]
case intro.intro.mp.intro.intro.intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
x : α → M
hx : ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))
y : β → M
ys : y ∈ s
hy :
((y ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl =
x ∘ rangeSplitting f
⊢ ∃ x_1, x_1 ∈ s ∧ x_1 ∘ f = x
[PROOFSTEP]
refine' ⟨y, ys, _⟩
[GOAL]
case intro.intro.mp.intro.intro.intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
x : α → M
hx : ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))
y : β → M
ys : y ∈ s
hy :
((y ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl =
x ∘ rangeSplitting f
⊢ y ∘ f = x
[PROOFSTEP]
ext a
[GOAL]
case intro.intro.mp.intro.intro.intro.h
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
x : α → M
hx : ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))
y : β → M
ys : y ∈ s
hy :
((y ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl =
x ∘ rangeSplitting f
a : α
⊢ (y ∘ f) a = x a
[PROOFSTEP]
rw [hx a, ← Function.comp_apply (f := x), ← hy]
[GOAL]
case intro.intro.mp.intro.intro.intro.h
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
x : α → M
hx : ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))
y : β → M
ys : y ∈ s
hy :
((y ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl =
x ∘ rangeSplitting f
a : α
⊢ (y ∘ f) a =
(((y ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl)
(rangeFactorization f a)
[PROOFSTEP]
simp
[GOAL]
case intro.intro.mpr
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
x : α → M
⊢ (∃ x_1, x_1 ∈ s ∧ x_1 ∘ f = x) →
(∃ a,
a ∈ s ∧
((a ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl =
x ∘ rangeSplitting f) ∧
∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))
[PROOFSTEP]
rintro ⟨y, ys, rfl⟩
[GOAL]
case intro.intro.mpr.intro.intro
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
y : β → M
ys : y ∈ s
⊢ (∃ a,
a ∈ s ∧
((a ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl =
(y ∘ f) ∘ rangeSplitting f) ∧
∀ (a : α), (y ∘ f) a = (y ∘ f) (rangeSplitting f (rangeFactorization f a))
[PROOFSTEP]
refine' ⟨⟨y, ys, _⟩, fun a => _⟩
[GOAL]
case intro.intro.mpr.intro.intro.refine'_1
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
y : β → M
ys : y ∈ s
⊢ ((y ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl =
(y ∘ f) ∘ rangeSplitting f
[PROOFSTEP]
ext
[GOAL]
case intro.intro.mpr.intro.intro.refine'_1.h
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
y : β → M
ys : y ∈ s
x✝ : ↑(range f)
⊢ (((y ∘ ↑(Equiv.Set.sumCompl (range f))) ∘
↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ∘
Sum.inl)
x✝ =
((y ∘ f) ∘ rangeSplitting f) x✝
[PROOFSTEP]
simp [Set.apply_rangeSplitting f]
[GOAL]
case intro.intro.mpr.intro.intro.refine'_2
M : Type w
A : Set M
L : Language
inst✝² : Structure L M
α : Type u₁
β : Type u_1
B : Set M
s✝ : Set (α → M)
s : Set (β → M)
h✝ : Definable A L s
f : α → β
inst✝¹ : Finite α
inst✝ : Finite β
val✝¹ : Fintype α
val✝ : Fintype β
h :
Definable A L
((fun g => g ∘ rangeSplitting f) ⁻¹'
((fun g => g ∘ Sum.inl) ''
((fun g => g ∘ ↑(Equiv.sumCongr (Equiv.refl ↑(range f)) (Fintype.equivFin ↑(range f)ᶜ).symm)) ''
((fun g => g ∘ ↑(Equiv.Set.sumCompl (range f))) '' s))))
h' : Definable A L {x | ∀ (a : α), x a = x (rangeSplitting f (rangeFactorization f a))}
y : β → M
ys : y ∈ s
a : α
⊢ (y ∘ f) a = (y ∘ f) (rangeSplitting f (rangeFactorization f a))
[PROOFSTEP]
rw [Function.comp_apply, Function.comp_apply, apply_rangeSplitting f, rangeFactorization_coe]
|
[STATEMENT]
lemma inverse_diff_inverse:
fixes a b :: "'a::division_ring"
assumes "a \<noteq> 0" and "b \<noteq> 0"
shows "inverse a - inverse b = - (inverse a * (a - b) * inverse b)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. inverse a - inverse b = - (inverse a * (a - b) * inverse b)
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
a \<noteq> (0::'a)
b \<noteq> (0::'a)
goal (1 subgoal):
1. inverse a - inverse b = - (inverse a * (a - b) * inverse b)
[PROOF STEP]
by (simp add: algebra_simps) |
corollary\<^marker>\<open>tag unimportant\<close> contour_integral_uniform_limit_circlepath:
  assumes "\<forall>\<^sub>F n::'a in F. (f n) contour_integrable_on (circlepath z r)"
      and "uniform_limit (sphere z r) f l F"
      and "\<not> trivial_limit F" "0 < r"
    shows "l contour_integrable_on (circlepath z r)"
          "((\<lambda>n. contour_integral (circlepath z r) (f n)) \<longlongrightarrow> contour_integral (circlepath z r) l) F" |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Import modules
import pytest
import numpy as np
# Import from package
from pyswarms.single import GlobalBestPSO
from pyswarms.utils.functions.single_obj import sphere_func
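
# NOTE: the `options`, `gbest_reset`, and `gbest_history` arguments used by the
# tests below are assumed to be pytest fixtures defined in this suite's conftest.py.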
@pytest.mark.parametrize(
    "options",
    [{"c2": 0.7, "w": 0.5}, {"c1": 0.5, "w": 0.5}, {"c1": 0.5, "c2": 0.7}],
)
def test_keyword_exception(options):
    """Tests if exceptions are thrown when keywords are missing"""
    with pytest.raises(KeyError):
        GlobalBestPSO(5, 2, options)


@pytest.mark.parametrize(
    "bounds",
    [
        tuple(np.array([-5, -5])),
        (np.array([-5, -5, -5]), np.array([5, 5])),
        (np.array([-5, -5, -5]), np.array([5, 5, 5])),
    ],
)
def test_bounds_size_exception(bounds, options):
    """Tests if exceptions are raised when bound sizes are wrong"""
    with pytest.raises(IndexError):
        GlobalBestPSO(5, 2, options=options, bounds=bounds)


@pytest.mark.parametrize(
    "bounds",
    [
        (np.array([5, 5]), np.array([-5, -5])),
        (np.array([5, -5]), np.array([-5, 5])),
    ],
)
def test_bounds_maxmin_exception(bounds, options):
    """Tests if the max bounds is less than min bounds and vice-versa"""
    with pytest.raises(ValueError):
        GlobalBestPSO(5, 2, options=options, bounds=bounds)


@pytest.mark.parametrize(
    "bounds",
    [
        [np.array([-5, -5]), np.array([5, 5])],
        np.array([np.array([-5, -5]), np.array([5, 5])]),
    ],
)
def test_bound_type_exception(bounds, options):
    """Tests if exception is raised when bound type is not a tuple"""
    with pytest.raises(TypeError):
        GlobalBestPSO(5, 2, options=options, bounds=bounds)


@pytest.mark.parametrize("velocity_clamp", [(1, 1, 1), (2, 3, 1)])
def test_vclamp_shape_exception(velocity_clamp, options):
    """Tests if exception is raised when velocity_clamp's size is not equal
    to 2"""
    with pytest.raises(IndexError):
        GlobalBestPSO(5, 2, velocity_clamp=velocity_clamp, options=options)


@pytest.mark.parametrize("velocity_clamp", [(3, 2), (10, 8)])
def test_vclamp_maxmin_exception(velocity_clamp, options):
    """Tests if the max velocity_clamp is less than min velocity_clamp and
    vice-versa"""
    with pytest.raises(ValueError):
        GlobalBestPSO(5, 2, velocity_clamp=velocity_clamp, options=options)


@pytest.mark.parametrize("err, center", [(IndexError, [1.5, 3.2, 2.5])])
def test_center_exception(err, center, options):
    """Tests if exception is thrown when center is not a list or is of a different shape"""
    with pytest.raises(err):
        GlobalBestPSO(5, 2, center=center, options=options)


def test_reset_default_values(gbest_reset):
    """Tests if best cost and best pos are set properly when the reset()
    method is called"""
    assert gbest_reset.swarm.best_cost == np.inf
    assert set(gbest_reset.swarm.best_pos) == set(np.array([]))


@pytest.mark.parametrize(
    "history, expected_shape",
    [
        ("cost_history", (1000,)),
        ("mean_pbest_history", (1000,)),
        ("mean_neighbor_history", (1000,)),
        ("pos_history", (1000, 10, 2)),
        ("velocity_history", (1000, 10, 2)),
    ],
)
def test_training_history_shape(gbest_history, history, expected_shape):
    """Test if training histories are of expected shape"""
    pso = vars(gbest_history)
    assert np.array(pso[history]).shape == expected_shape


def test_ftol_effect(options):
    """Test if setting the ftol stops the optimization process early, as expected"""
    pso = GlobalBestPSO(10, 2, options=options, ftol=1e-1)
    pso.optimize(sphere_func, 2000, verbose=0)
    assert np.array(pso.cost_history).shape != (2000,)
|
{-# LANGUAGE DeriveDataTypeable, DeriveGeneric #-}
-- |
-- Module : Statistics.Distribution.Binomial
-- Copyright : (c) 2009 Bryan O'Sullivan
-- License : BSD3
--
-- Maintainer : [email protected]
-- Stability : experimental
-- Portability : portable
--
-- The binomial distribution. This is the discrete probability
-- distribution of the number of successes in a sequence of /n/
-- independent yes\/no experiments, each of which yields success with
-- probability /p/.
module Statistics.Distribution.Binomial
    (
      BinomialDistribution
    -- * Constructors
    , binomial
    -- * Accessors
    , bdTrials
    , bdProbability
    ) where
import Data.Aeson (FromJSON, ToJSON)
import Data.Binary (Binary, put, get)
import Data.Data (Data, Typeable)
import GHC.Generics (Generic)
import qualified Statistics.Distribution as D
import qualified Statistics.Distribution.Poisson.Internal as I
import Numeric.SpecFunctions (choose,incompleteBeta)
import Numeric.MathFunctions.Constants (m_epsilon)
import Control.Applicative ((<$>), (<*>))
-- | The binomial distribution.
data BinomialDistribution = BD {
      bdTrials      :: {-# UNPACK #-} !Int
      -- ^ Number of trials.
    , bdProbability :: {-# UNPACK #-} !Double
      -- ^ Probability.
    } deriving (Eq, Read, Show, Typeable, Data, Generic)

instance FromJSON BinomialDistribution
instance ToJSON BinomialDistribution

instance Binary BinomialDistribution where
    put (BD x y) = put x >> put y
    get = BD <$> get <*> get

instance D.Distribution BinomialDistribution where
    cumulative = cumulative

instance D.DiscreteDistr BinomialDistribution where
    probability = probability

instance D.Mean BinomialDistribution where
    mean = mean

instance D.Variance BinomialDistribution where
    variance = variance

instance D.MaybeMean BinomialDistribution where
    maybeMean = Just . D.mean

instance D.MaybeVariance BinomialDistribution where
    maybeStdDev   = Just . D.stdDev
    maybeVariance = Just . D.variance

instance D.Entropy BinomialDistribution where
  entropy (BD n p)
    | n == 0 = 0
    | n <= 100 = directEntropy (BD n p)
    | otherwise = I.poissonEntropy (fromIntegral n * p)

instance D.MaybeEntropy BinomialDistribution where
  maybeEntropy = Just . D.entropy

-- This could be slow for big n
probability :: BinomialDistribution -> Int -> Double
probability (BD n p) k
  | k < 0 || k > n = 0
  | n == 0         = 1
  | otherwise      = choose n k * p^k * (1-p)^(n-k)

-- Summation from different sides required to reduce roundoff errors
cumulative :: BinomialDistribution -> Double -> Double
cumulative (BD n p) x
  | isNaN x      = error "Statistics.Distribution.Binomial.cumulative: NaN input"
  | isInfinite x = if x > 0 then 1 else 0
  | k <  0       = 0
  | k >= n       = 1
  | otherwise    = incompleteBeta (fromIntegral (n-k)) (fromIntegral (k+1)) (1 - p)
  where
    k = floor x

mean :: BinomialDistribution -> Double
mean (BD n p) = fromIntegral n * p

variance :: BinomialDistribution -> Double
variance (BD n p) = fromIntegral n * p * (1 - p)

directEntropy :: BinomialDistribution -> Double
directEntropy d@(BD n _) =
  negate . sum $
  takeWhile (< negate m_epsilon) $
  dropWhile (not . (< negate m_epsilon)) $
  [ let x = probability d k in x * log x | k <- [0..n]]
-- | Construct a binomial distribution. The number of trials must be
-- non-negative and the probability must be in the [0,1] range.
binomial :: Int                 -- ^ Number of trials.
         -> Double              -- ^ Probability.
         -> BinomialDistribution
binomial n p
  | n < 0 =
      error $ msg ++ "number of trials must be non-negative. Got " ++ show n
  | p < 0 || p > 1 =
      error $ msg ++ "probability must be in [0,1] range. Got " ++ show p
  | otherwise = BD n p
  where msg = "Statistics.Distribution.Binomial.binomial: "
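
-- A hedged usage sketch (not part of the original module): in GHCi, with
-- Statistics.Distribution imported qualified as D (matching the imports above),
-- one would expect roughly:
--
-- >>> let d = binomial 10 0.5
-- >>> D.probability d 5        -- P(X = 5) for ten fair coin flips, ~0.246
-- >>> D.cumulative d 5         -- P(X <= 5), ~0.623
-- >>> (D.mean d, D.variance d) -- (5.0, 2.5)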
|
# clear the environment ========================================================
rm(list=ls())
gc()
setwd("P:/torabif/workspace/CCU0002-03")
# load options, packages and functions =========================================
source("01_load.r")
# create data ==================================================================
# Run '02', '03' and '04' scripts in Eclipse
# prepare data for analysis ====================================================
source("03a_save_data.r")
source("04a_sample_selection_summary.r")
# report =======================================================================
# for gitlab
render(
input = "README.Rmd",
output_format = md_document(),
quiet = TRUE
)
# for local viewing
render(
input = "README.rmd",
output_file = "README.html",
quiet = TRUE
)
|
-- Andreas, 2012-09-15
-- Positive effects of making Agda recognize constant functions.
-- Arguments to constant functions are ignored in definitional equality.

{-# OPTIONS --copatterns #-}

module NonvariantPolarity where

open import Common.Equality

data ⊥ : Set where

record ⊤ : Set where
  constructor trivial

data Bool : Set where
  true false : Bool

True : Bool → Set
True true  = ⊤
True false = ⊥

module IgnoreArg where

  -- A function ignoring its first argument
  knot : Bool → Bool → Bool
  knot x true  = false
  knot x false = true

  test : (y : Bool) → knot true y ≡ knot false y
  test y = refl

module UnusedModulePar where

  -- An unused module parameter
  module M (x : Bool) where
    not : Bool → Bool
    not true  = false
    not false = true

  open M true
  open M false renaming (not to not′)

  test : (y : Bool) → not y ≡ not′ y
  test y = refl

module CoinductiveUnit where

  record Unit : Set where
    coinductive
    constructor delay
    field force : Unit
  open Unit

  -- The identity on Unit does not match on its argument, so it is constant.
  id : Unit → Unit
  force (id x) = id (force x)

  idConst : (x y : Unit) → id x ≡ id y
  idConst x y = refl

  -- That does not imply x ≡ y (needs bisimulation).
|
theory Lexer
imports Spec
begin
section {* The Lexer Functions by Sulzmann and Lu *}
fun
mkeps :: "rexp \<Rightarrow> val"
where
"mkeps(ONE) = Void"
| "mkeps(SEQ r1 r2) = Seq (mkeps r1) (mkeps r2)"
| "mkeps(ALT r1 r2) = (if nullable(r1) then Left (mkeps r1) else Right (mkeps r2))"
| "mkeps(STAR r) = Stars []"
fun injval :: "rexp \<Rightarrow> char \<Rightarrow> val \<Rightarrow> val"
where
"injval (CR d) c Void = Char d"
| "injval (ALT r1 r2) c (Left v1) = Left(injval r1 c v1)"
| "injval (ALT r1 r2) c (Right v2) = Right(injval r2 c v2)"
| "injval (SEQ r1 r2) c (Seq v1 v2) = Seq (injval r1 c v1) v2"
| "injval (SEQ r1 r2) c (Left (Seq v1 v2)) = Seq (injval r1 c v1) v2"
| "injval (SEQ r1 r2) c (Right v2) = Seq (mkeps r1) (injval r2 c v2)"
| "injval (STAR r) c (Seq v (Stars vs)) = Stars ((injval r c v) # vs)"
fun
lexer :: "rexp \<Rightarrow> string \<Rightarrow> val option"
where
"lexer r [] = (if nullable r then Some(mkeps r) else None)"
| "lexer r (c#s) = (case (lexer (der c r) s) of
None \<Rightarrow> None
| Some(v) \<Rightarrow> Some(injval r c v))"
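
(* A hedged sanity check (the concrete char syntax depends on Spec): for
   r = SEQ (CR a) (STAR (CR b)) and input [a, b, b], lexer r should return
   Some (Seq (Char a) (Stars [Char b, Char b])), the POSIX value of r. *)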
section {* Mkeps, Injval Properties *}
lemma mkeps_nullable:
assumes "nullable(r)"
shows "\<Turnstile> mkeps r : r"
using assms
by (induct rule: nullable.induct)
(auto intro: Prf.intros)
lemma mkeps_flat:
assumes "nullable(r)"
shows "flat (mkeps r) = []"
using assms
by (induct rule: nullable.induct) (auto)
lemma Prf_injval_flat:
assumes "\<Turnstile> v : der c r"
shows "flat (injval r c v) = c # (flat v)"
using assms
apply(induct c r arbitrary: v rule: der.induct)
apply(auto elim!: Prf_elims intro: mkeps_flat split: if_splits)
done
lemma Prf_injval:
assumes "\<Turnstile> v : der c r"
shows "\<Turnstile> (injval r c v) : r"
using assms
apply(induct r arbitrary: c v rule: rexp.induct)
apply(auto intro!: Prf.intros mkeps_nullable elim!: Prf_elims split: if_splits)
apply(simp add: Prf_injval_flat)
done
text {*
Mkeps and injval produce, or preserve, Posix values.
*}
lemma Posix_mkeps:
assumes "nullable r"
shows "[] \<in> r \<rightarrow> mkeps r"
using assms
apply(induct r rule: nullable.induct)
apply(auto intro: Posix.intros simp add: nullable_correctness Sequ_def)
apply(subst append.simps(1)[symmetric])
apply(rule Posix.intros)
apply(auto)
done
lemma Posix_injval:
assumes "s \<in> (der c r) \<rightarrow> v"
shows "(c # s) \<in> r \<rightarrow> (injval r c v)"
using assms
proof(induct r arbitrary: s v rule: rexp.induct)
case ZERO
have "s \<in> der c ZERO \<rightarrow> v" by fact
then have "s \<in> ZERO \<rightarrow> v" by simp
then have "False" by cases
then show "(c # s) \<in> ZERO \<rightarrow> (injval ZERO c v)" by simp
next
case ONE
have "s \<in> der c ONE \<rightarrow> v" by fact
then have "s \<in> ZERO \<rightarrow> v" by simp
then have "False" by cases
then show "(c # s) \<in> ONE \<rightarrow> (injval ONE c v)" by simp
next
case (CR d)
consider (eq) "c = d" | (ineq) "c \<noteq> d" by blast
then show "(c # s) \<in> (CR d) \<rightarrow> (injval (CR d) c v)"
proof (cases)
case eq
have "s \<in> der c (CR d) \<rightarrow> v" by fact
then have "s \<in> ONE \<rightarrow> v" using eq by simp
then have eqs: "s = [] \<and> v = Void" by cases simp
show "(c # s) \<in> CR d \<rightarrow> injval (CR d) c v" using eq eqs
by (auto intro: Posix.intros)
next
case ineq
have "s \<in> der c (CR d) \<rightarrow> v" by fact
then have "s \<in> ZERO \<rightarrow> v" using ineq by simp
then have "False" by cases
then show "(c # s) \<in> CR d \<rightarrow> injval (CR d) c v" by simp
qed
next
case (ALT r1 r2)
have IH1: "\<And>s v. s \<in> der c r1 \<rightarrow> v \<Longrightarrow> (c # s) \<in> r1 \<rightarrow> injval r1 c v" by fact
have IH2: "\<And>s v. s \<in> der c r2 \<rightarrow> v \<Longrightarrow> (c # s) \<in> r2 \<rightarrow> injval r2 c v" by fact
have "s \<in> der c (ALT r1 r2) \<rightarrow> v" by fact
then have "s \<in> ALT (der c r1) (der c r2) \<rightarrow> v" by simp
then consider (left) v' where "v = Left v'" "s \<in> der c r1 \<rightarrow> v'"
| (right) v' where "v = Right v'" "s \<notin> L (der c r1)" "s \<in> der c r2 \<rightarrow> v'"
by cases auto
then show "(c # s) \<in> ALT r1 r2 \<rightarrow> injval (ALT r1 r2) c v"
proof (cases)
case left
have "s \<in> der c r1 \<rightarrow> v'" by fact
then have "(c # s) \<in> r1 \<rightarrow> injval r1 c v'" using IH1 by simp
then have "(c # s) \<in> ALT r1 r2 \<rightarrow> injval (ALT r1 r2) c (Left v')" by (auto intro: Posix.intros)
then show "(c # s) \<in> ALT r1 r2 \<rightarrow> injval (ALT r1 r2) c v" using left by simp
next
case right
have "s \<notin> L (der c r1)" by fact
then have "c # s \<notin> L r1" by (simp add: der_correctness Der_def)
moreover
have "s \<in> der c r2 \<rightarrow> v'" by fact
then have "(c # s) \<in> r2 \<rightarrow> injval r2 c v'" using IH2 by simp
ultimately have "(c # s) \<in> ALT r1 r2 \<rightarrow> injval (ALT r1 r2) c (Right v')"
by (auto intro: Posix.intros)
then show "(c # s) \<in> ALT r1 r2 \<rightarrow> injval (ALT r1 r2) c v" using right by simp
qed
next
case (SEQ r1 r2)
have IH1: "\<And>s v. s \<in> der c r1 \<rightarrow> v \<Longrightarrow> (c # s) \<in> r1 \<rightarrow> injval r1 c v" by fact
have IH2: "\<And>s v. s \<in> der c r2 \<rightarrow> v \<Longrightarrow> (c # s) \<in> r2 \<rightarrow> injval r2 c v" by fact
have "s \<in> der c (SEQ r1 r2) \<rightarrow> v" by fact
then consider
(left_nullable) v1 v2 s1 s2 where
"v = Left (Seq v1 v2)" "s = s1 @ s2"
"s1 \<in> der c r1 \<rightarrow> v1" "s2 \<in> r2 \<rightarrow> v2" "nullable r1"
"\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = s2 \<and> s1 @ s\<^sub>3 \<in> L (der c r1) \<and> s\<^sub>4 \<in> L r2)"
| (right_nullable) v1 s1 s2 where
"v = Right v1" "s = s1 @ s2"
"s \<in> der c r2 \<rightarrow> v1" "nullable r1" "s1 @ s2 \<notin> L (SEQ (der c r1) r2)"
| (not_nullable) v1 v2 s1 s2 where
"v = Seq v1 v2" "s = s1 @ s2"
"s1 \<in> der c r1 \<rightarrow> v1" "s2 \<in> r2 \<rightarrow> v2" "\<not>nullable r1"
"\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = s2 \<and> s1 @ s\<^sub>3 \<in> L (der c r1) \<and> s\<^sub>4 \<in> L r2)"
by (force split: if_splits elim!: Posix_elims simp add: Sequ_def der_correctness Der_def)
then show "(c # s) \<in> SEQ r1 r2 \<rightarrow> injval (SEQ r1 r2) c v"
proof (cases)
case left_nullable
have "s1 \<in> der c r1 \<rightarrow> v1" by fact
then have "(c # s1) \<in> r1 \<rightarrow> injval r1 c v1" using IH1 by simp
moreover
have "\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = s2 \<and> s1 @ s\<^sub>3 \<in> L (der c r1) \<and> s\<^sub>4 \<in> L r2)" by fact
then have "\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = s2 \<and> (c # s1) @ s\<^sub>3 \<in> L r1 \<and> s\<^sub>4 \<in> L r2)" by (simp add: der_correctness Der_def)
ultimately have "((c # s1) @ s2) \<in> SEQ r1 r2 \<rightarrow> Seq (injval r1 c v1) v2" using left_nullable by (rule_tac Posix.intros)
then show "(c # s) \<in> SEQ r1 r2 \<rightarrow> injval (SEQ r1 r2) c v" using left_nullable by simp
next
case right_nullable
have "nullable r1" by fact
then have "[] \<in> r1 \<rightarrow> (mkeps r1)" by (rule Posix_mkeps)
moreover
have "s \<in> der c r2 \<rightarrow> v1" by fact
then have "(c # s) \<in> r2 \<rightarrow> (injval r2 c v1)" using IH2 by simp
moreover
have "s1 @ s2 \<notin> L (SEQ (der c r1) r2)" by fact
then have "\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = c # s \<and> [] @ s\<^sub>3 \<in> L r1 \<and> s\<^sub>4 \<in> L r2)" using right_nullable
by(auto simp add: der_correctness Der_def append_eq_Cons_conv Sequ_def)
ultimately have "([] @ (c # s)) \<in> SEQ r1 r2 \<rightarrow> Seq (mkeps r1) (injval r2 c v1)"
by(rule Posix.intros)
then show "(c # s) \<in> SEQ r1 r2 \<rightarrow> injval (SEQ r1 r2) c v" using right_nullable by simp
next
case not_nullable
have "s1 \<in> der c r1 \<rightarrow> v1" by fact
then have "(c # s1) \<in> r1 \<rightarrow> injval r1 c v1" using IH1 by simp
moreover
have "\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = s2 \<and> s1 @ s\<^sub>3 \<in> L (der c r1) \<and> s\<^sub>4 \<in> L r2)" by fact
then have "\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = s2 \<and> (c # s1) @ s\<^sub>3 \<in> L r1 \<and> s\<^sub>4 \<in> L r2)" by (simp add: der_correctness Der_def)
ultimately have "((c # s1) @ s2) \<in> SEQ r1 r2 \<rightarrow> Seq (injval r1 c v1) v2" using not_nullable
by (rule_tac Posix.intros) (simp_all)
then show "(c # s) \<in> SEQ r1 r2 \<rightarrow> injval (SEQ r1 r2) c v" using not_nullable by simp
qed
next
case (STAR r)
have IH: "\<And>s v. s \<in> der c r \<rightarrow> v \<Longrightarrow> (c # s) \<in> r \<rightarrow> injval r c v" by fact
have "s \<in> der c (STAR r) \<rightarrow> v" by fact
then consider
(cons) v1 vs s1 s2 where
"v = Seq v1 (Stars vs)" "s = s1 @ s2"
"s1 \<in> der c r \<rightarrow> v1" "s2 \<in> (STAR r) \<rightarrow> (Stars vs)"
"\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = s2 \<and> s1 @ s\<^sub>3 \<in> L (der c r) \<and> s\<^sub>4 \<in> L (STAR r))"
apply(auto elim!: Posix_elims(1-5) simp add: der_correctness Der_def intro: Posix.intros)
apply(rotate_tac 3)
apply(erule_tac Posix_elims(6))
apply (simp add: Posix.intros(6))
using Posix.intros(7) by blast
then show "(c # s) \<in> STAR r \<rightarrow> injval (STAR r) c v"
proof (cases)
case cons
have "s1 \<in> der c r \<rightarrow> v1" by fact
then have "(c # s1) \<in> r \<rightarrow> injval r c v1" using IH by simp
moreover
have "s2 \<in> STAR r \<rightarrow> Stars vs" by fact
moreover
have "(c # s1) \<in> r \<rightarrow> injval r c v1" by fact
then have "flat (injval r c v1) = (c # s1)" by (rule Posix1)
then have "flat (injval r c v1) \<noteq> []" by simp
moreover
have "\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = s2 \<and> s1 @ s\<^sub>3 \<in> L (der c r) \<and> s\<^sub>4 \<in> L (STAR r))" by fact
then have "\<not> (\<exists>s\<^sub>3 s\<^sub>4. s\<^sub>3 \<noteq> [] \<and> s\<^sub>3 @ s\<^sub>4 = s2 \<and> (c # s1) @ s\<^sub>3 \<in> L r \<and> s\<^sub>4 \<in> L (STAR r))"
by (simp add: der_correctness Der_def)
ultimately
have "((c # s1) @ s2) \<in> STAR r \<rightarrow> Stars (injval r c v1 # vs)" by (rule Posix.intros)
then show "(c # s) \<in> STAR r \<rightarrow> injval (STAR r) c v" using cons by(simp)
qed
qed
section {* Lexer Correctness *}
lemma lexer_correct_None:
shows "s \<notin> L r \<longleftrightarrow> lexer r s = None"
apply(induct s arbitrary: r)
apply(simp)
apply(simp add: nullable_correctness)
apply(simp)
apply(drule_tac x="der a r" in meta_spec)
apply(auto)
apply(auto simp add: der_correctness Der_def)
done
lemma lexer_correct_Some:
shows "s \<in> L r \<longleftrightarrow> (\<exists>v. lexer r s = Some(v) \<and> s \<in> r \<rightarrow> v)"
apply(induct s arbitrary : r)
apply(simp only: lexer.simps)
apply(simp)
apply(simp add: nullable_correctness Posix_mkeps)
apply(drule_tac x="der a r" in meta_spec)
apply(simp (no_asm_use) add: der_correctness Der_def del: lexer.simps)
apply(simp del: lexer.simps)
apply(simp only: lexer.simps)
apply(case_tac "lexer (der a r) s = None")
apply(auto)[1]
apply(simp)
apply(erule exE)
apply(simp)
apply(rule iffI)
apply(simp add: Posix_injval)
apply(simp add: Posix1(1))
done
lemma lexer_correctness:
shows "(lexer r s = Some v) \<longleftrightarrow> s \<in> r \<rightarrow> v"
and "(lexer r s = None) \<longleftrightarrow> \<not>(\<exists>v. s \<in> r \<rightarrow> v)"
using Posix1(1) Posix_determ lexer_correct_None lexer_correct_Some apply fastforce
using Posix1(1) lexer_correct_None lexer_correct_Some by blast
fun flex :: "rexp \<Rightarrow> (val \<Rightarrow> val) => string \<Rightarrow> (val \<Rightarrow> val)"
where
"flex r f [] = f"
| "flex r f (c#s) = flex (der c r) (\<lambda>v. f (injval r c v)) s"
lemma flex_fun_apply:
shows "g (flex r f s v) = flex r (g o f) s v"
apply(induct s arbitrary: g f r v)
apply(simp_all add: comp_def)
by meson
lemma flex_append:
shows "flex r f (s1 @ s2) = flex (ders s1 r) (flex r f s1) s2"
apply(induct s1 arbitrary: s2 r f)
apply(simp_all)
done
lemma lexer_flex:
shows "lexer r s = (if nullable (ders s r)
then Some(flex r id s (mkeps (ders s r))) else None)"
apply(induct s arbitrary: r)
apply(simp_all add: flex_fun_apply)
done
unused_thms
end |
Formal statement is: lemma Bseq_ignore_initial_segment: "Bseq X \<Longrightarrow> Bseq (\<lambda>n. X (n + k))"
Informal statement is: If $X$ is a bounded sequence, then so is the sequence $X$ shifted by $k$ positions. |
State Before: F : Type ?u.75344
α : Type u_1
β : Type ?u.75350
inst✝¹ : LinearOrderedSemiring α
inst✝ : FloorSemiring α
a✝ : α
n : ℕ
a : α
⊢ Nat.cast ⁻¹' Iio a = Iio ⌈a⌉₊ State After: case h
F : Type ?u.75344
α : Type u_1
β : Type ?u.75350
inst✝¹ : LinearOrderedSemiring α
inst✝ : FloorSemiring α
a✝ : α
n : ℕ
a : α
x✝ : ℕ
⊢ x✝ ∈ Nat.cast ⁻¹' Iio a ↔ x✝ ∈ Iio ⌈a⌉₊ Tactic: ext State Before: case h
F : Type ?u.75344
α : Type u_1
β : Type ?u.75350
inst✝¹ : LinearOrderedSemiring α
inst✝ : FloorSemiring α
a✝ : α
n : ℕ
a : α
x✝ : ℕ
⊢ x✝ ∈ Nat.cast ⁻¹' Iio a ↔ x✝ ∈ Iio ⌈a⌉₊ State After: no goals Tactic: simp [lt_ceil] |
set.seed(1234)
# bootMer() comes from lme4, r.squaredGLMM() from MuMIn, and the data-wrangling
# verbs below from tibble/dplyr, so load them up front
library(lme4)
library(MuMIn)
library(tibble)
library(dplyr)

R2_bootstrap <- function(model, n.sim = 10000) {
  # bootstrap n.sim times to calc pseudo r-squared values:
  lmer_bs <- bootMer(model, FUN = function(x) r.squaredGLMM(x),
                     nsim = n.sim)
  # calculate median and 0.025, 0.975 percentiles for R2c and R2m:
  lmer_bs_R2 <- as.data.frame(lmer_bs$t) %>%
    rownames_to_column(var = "type_R2") %>%
    rename(R2 = V1) %>%
    mutate(type_R2 = substr(type_R2, 1, 3)) %>%
    group_by(type_R2) %>%
    mutate(median = median(R2),
           ll = quantile(R2, 0.025),
           ul = quantile(R2, 0.975))
  # return bootstrap data:
  return(lmer_bs_R2)
}
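
# Hedged usage sketch (hypothetical model; `dat` and the formula are placeholders):
# fit <- lmer(y ~ x + (1 | group), data = dat)
# r2_boot <- R2_bootstrap(fit, n.sim = 1000)
# distinct(r2_boot, type_R2, median, ll, ul)  # point estimates and 95% intervals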
|
function dl=lpcaa2dl(aa)
%LPCAA2DL LPC: Convert area coefficients to dct of log area DL=(AA)
% note: we do not correct for sinc distortion; perhaps we should multiply by
% k=1:p-1;s=[sqrt(0.5)/p 2*sin(pi*k/(2*p))./(pi*k)];
% Copyright (C) Mike Brookes 1998
% Version: $Id: lpcaa2dl.m,v 1.4 2007/05/04 07:01:38 dmb Exp $
%
% VOICEBOX is a MATLAB toolbox for speech processing.
% Home page: http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This program is free software; you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation; either version 2 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You can obtain a copy of the GNU General Public License from
% http://www.gnu.org/copyleft/gpl.html or by writing to
% Free Software Foundation, Inc.,675 Mass Ave, Cambridge, MA 02139, USA.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[nf,p2]=size(aa);
dl=rdct(log(aa(:,2:p2-1)./aa(:,p2*ones(1,p2-2))).').';
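
% Hedged usage sketch (made-up numbers): aa=[1 1.2 0.8 1.5 1]; dl=lpcaa2dl(aa);
% per the line above, columns 2:end-1 of AA are normalized by the last column
% before the log and DCT, so DL here has 3 elements for the single frame.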
|
function one_phase_solve(m::JuMP.Model)
nlp_raw = MathOptNLPModel(m);
return one_phase_solve(nlp_raw)
end
function one_phase_solve(m::JuMP.Model, pars::Class_parameters)
nlp_raw = MathOptNLPModel(m);
return one_phase_solve(nlp_raw, pars::Class_parameters)
end
function one_phase_solve(nlp_raw::NLPModels.AbstractNLPModel)
pars = Class_parameters()
return one_phase_solve(nlp_raw, pars)
end
function one_phase_solve(nlp_raw::NLPModels.AbstractNLPModel, pars::Class_parameters)
nlp = Class_CUTEst(nlp_raw);
if ncon(nlp) == 0
throw(ErrorException("Unconstrained minimization problems are unsupported"))
end
timer = class_advanced_timer()
start_advanced_timer(timer)
start_advanced_timer(timer, "INIT")
if pars.init.init_style == :gertz
intial_it = gertz_init(nlp, pars, timer); # Gertz, Michael, Jorge Nocedal, and A. Sartenar. "A starting point strategy for nonlinear interior methods." Applied mathematics letters 17.8 (2004): 945-952.
elseif pars.init.init_style == :mehrotra
intial_it = mehrotra_init(nlp, pars, timer);
elseif pars.init.init_style == :LP
intial_it = LP_init(nlp, pars, timer);
else
error("Init strategy does not exist")
end
pause_advanced_timer(timer, "INIT")
pause_advanced_timer(timer)
if pars.output_level >= 4
print_timer_stats(timer)
end
start_advanced_timer(timer)
@assert(is_feasible(intial_it, pars.ls.comp_feas))
iter, status, hist, t, err = one_phase_IPM(intial_it, pars, timer);
pause_advanced_timer(timer)
if pars.output_level >= 3
print_timer_stats(timer)
end
return iter, status, hist, t, err, timer
end
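
# Hedged usage sketch (hypothetical JuMP model; assumes JuMP and this package are loaded):
#   m = JuMP.Model()
#   JuMP.@variable(m, x >= 0.0)
#   JuMP.@constraint(m, 2.0 * x >= 1.0)
#   JuMP.@objective(m, Min, (x - 3.0)^2)
#   iter, status, hist, t, err, timer = one_phase_solve(m)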
function switching_condition(iter::Class_iterate, last_step_was_superlinear::Bool, pars::Class_parameters)
# should we take an aggressive step or not?
is_feas = is_feasible(iter, pars.ls.comp_feas_agg)
dual_avg = scaled_dual_feas(iter, pars)
if pars.primal_bounds_dual_feas
dual_progress = dual_avg < pars.aggressive_dual_threshold * LinearAlgebra.norm(get_primal_res(iter), Inf)
else
dual_progress = dual_avg < pars.aggressive_dual_threshold * get_mu(iter)
end
delta_small = get_delta(iter) < sqrt(get_mu(iter)) * max(0.1, LinearAlgebra.norm(get_y(iter),Inf))
lag_grad = LinearAlgebra.norm(eval_grad_lag(iter,get_mu(iter)),1) < sum(iter.point.s .* iter.point.y) + LinearAlgebra.norm(get_grad(iter) + iter.point.mu * eval_grad_r(iter),1) # + LinearAlgebra.norm(get_primal_res(iter), Inf) + 1.0 #+ sqrt(LinearAlgebra.norm(get_y(iter),Inf))
be_aggressive = is_feas && dual_progress && lag_grad
be_aggressive |= last_step_was_superlinear && dual_progress && lag_grad
return be_aggressive
end
function one_phase_IPM(iter::Class_iterate, pars::Class_parameters, timer::class_advanced_timer)
#####################################################################
# THE MAIN ALGORITHM
# input:
# iter = starting point
# pars = parameters for running the algorithm
# timer = code to time the algorithm
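# output:
#     iter = final iterate, status = termination status,
#     progress = per-iteration history (alg_history2 records),
#     t = iteration count, plus an error flag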
#####################################################################
# initialize code
t = 0;
rpt = 0.0
progress = Array{alg_history2,1}();
filter = Array{Class_filter,1}();
update!(iter, timer, pars) # is this necessary ????
kkt_solver = pick_KKT_solver(pars);
initialize!(kkt_solver, iter)
if pars.output_level >= 1
head_progress()
end
display = pars.output_level >= 1
record_progress_first_it!(progress, iter, kkt_solver, pars, display)
if pars.output_level >= 4
println("")
end
init_step_size = 1.0
status = :success
agg_next_step = false;
dir_size_agg = Inf
dir_size_stable = 0.0
ls_info = false
start_time = time()
last_step_was_superlinear = false
scale_update = false
# check termination criterion at the starting point
start_advanced_timer(timer, "misc/terminate")
status = terminate(iter, pars)
pause_advanced_timer(timer, "misc/terminate")
if status != false
if pars.output_level >= 1
println("Terminated with ", status)
end
return iter, status, progress, t, false
end
if time() - start_time > pars.term.max_time
if pars.output_level >= 1
println("Terminated due to timeout")
end
return iter, :MAX_TIME, progress, t, false
end
# run the algorithm
for t = 1:pars.term.max_it
@assert(is_feasible(iter, pars.ls.comp_feas))
for i = 1:pars.max_it_corrections
if pars.output_level >= 4
println("================================== ITERATION $t, MINOR ITERATION $i ======================================")
end
if pars.output_level >= 5
println("Strict comp = ", maximum(min.(iter.point.s/LinearAlgebra.norm(iter.point.s,Inf),iter.point.y/LinearAlgebra.norm(iter.point.y,Inf))))
end
tot_num_fac = 0; inertia_num_fac = 0;
start_advanced_timer(timer, "misc/checks")
be_aggressive = switching_condition(iter, last_step_was_superlinear, pars) # should we take an aggressive step or not?
last_step_was_superlinear = false
pause_advanced_timer(timer, "misc/checks")
if i == 1
##########################################################################################
##### first step of iteration, we need to compute lag hessian and do a factorization #####
##########################################################################################
update_H!(iter, timer, pars)
@assert(is_updated(iter.cache))
# form matrix that we are going to factorize
form_system!(kkt_solver, iter, timer)
start_advanced_timer(timer, "ipopt_strategy")
# Figure out what delta gives a factorization with acceptable inertia
fact_succeed, inertia_num_fac, new_delta = ipopt_strategy!(iter, kkt_solver, pars, timer)
tot_num_fac = inertia_num_fac
old_delta = get_delta(iter)
set_delta(iter, new_delta)
pause_advanced_timer(timer, "ipopt_strategy")
if fact_succeed != :success
return iter, :MAX_DELTA, progress, t, false
end
start_advanced_timer(timer, "STEP")
start_advanced_timer(timer, "STEP/first")
if pars.output_level >= 5
println(pd("**"), pd("status"), pd("delta"), pd("a_P"), pd("dx"), pd("dy"), pd("ds"))
end
for k = 1:100
#status, new_iter, ls_info = take_step!(iter, reduct_factors, kkt_solver, ls_mode, filter, pars, actual_min_step_size, timer)
step_status, new_iter, ls_info, reduct_factors = take_step2!(be_aggressive, iter, kkt_solver, filter, pars, timer)
if pars.output_level >= 6
println("")
println(pd("**"), pd(step_status), rd(get_delta(iter)), rd(ls_info.step_size_P), rd(LinearAlgebra.norm(kkt_solver.dir.x,Inf)), rd(LinearAlgebra.norm(kkt_solver.dir.y,Inf)), rd(LinearAlgebra.norm(kkt_solver.dir.s,Inf)))
end
if step_status == :success
break
elseif i < 100 && get_delta(iter) < pars.delta.max
if pars.test.response_to_failure == :lag_delta_inc
set_delta(iter, max(LinearAlgebra.norm(eval_grad_lag(iter,iter.point.mu),Inf) / LinearAlgebra.norm(kkt_solver.dir.x,Inf),get_delta(iter) * pars.delta.inc, max(pars.delta.start, old_delta * pars.delta.dec)))
elseif pars.test.response_to_failure == :default
set_delta(iter, max(get_delta(iter) * pars.delta.inc, max(pars.delta.start, old_delta * pars.delta.dec)))
else
error("pars.test.response_to_failure parameter incorrectly set")
end
inertia = factor!(kkt_solver, get_delta(iter), timer)
tot_num_fac += 1
elseif LinearAlgebra.norm(comp(iter),Inf) > 1e-14
warn("Error ... large delta causing issues")
iter.point.y = iter.point.mu ./ iter.point.s
step_status = :success
break
else
pause_advanced_timer(timer, "STEP/first")
pause_advanced_timer(timer, "STEP")
println("Terminated due to max delta while attempting to take step")
println("delta=$(get_delta(iter)), be_aggressive=$be_aggressive, status=$step_status")
println("dx = $(LinearAlgebra.norm(kkt_solver.dir.x,2)), dy = $(LinearAlgebra.norm(kkt_solver.dir.y,2)), ds = $(LinearAlgebra.norm(kkt_solver.dir.s,2))")
@show reduct_factors #, ls_mode
@show ls_info
return iter, :MAX_DELTA, progress, t, false
end
end
pause_advanced_timer(timer, "STEP/first")
pause_advanced_timer(timer, "STEP")
else
#######################################
### corrections, reuse factorization ##
#######################################
start_advanced_timer(timer, "STEP")
start_advanced_timer(timer, "STEP/correction")
step_status, new_iter, ls_info, reduct_factors = take_step2!(be_aggressive, iter, kkt_solver, filter, pars, timer)
if pars.superlinear_theory_mode && be_aggressive
if get_mu(new_iter) < get_mu(iter) * 0.1
last_step_was_superlinear = true
end
end
pause_advanced_timer(timer, "STEP/correction")
pause_advanced_timer(timer, "STEP")
end
if step_status == :success
iter = new_iter
if be_aggressive
dir_size_agg = LinearAlgebra.norm(kkt_solver.dir.x, 2)
end
end
add!(filter, iter, pars) # update filter
start_advanced_timer(timer, "misc/terminate")
status = terminate(iter, pars) # check termination criterion
pause_advanced_timer(timer, "misc/terminate")
start_advanced_timer(timer, "misc/record_progress")
# output to the console
output_level = pars.output_level
display = output_level >= 4 || (output_level >= 3 && i == 1) || (output_level == 2 && t % 10 == 1 && i == 1) || (status != false && output_level >= 1)
record_progress!(progress, t, be_aggressive ? "agg" : "stb", iter, kkt_solver, ls_info, reduct_factors, inertia_num_fac, tot_num_fac, pars, display)
if pars.output_level >= 4
println("")
end
pause_advanced_timer(timer, "misc/record_progress")
@assert(is_updated_correction(iter.cache))
check_for_nan(iter.point)
# if termination criterion is satisfied stop the algorithm
if status != false
if pars.output_level >= 1
println("Terminated with ", status)
end
return iter, status, progress, t, false
end
if time() - start_time > pars.term.max_time
if pars.output_level >= 1
println("Terminated due to timeout")
end
return iter, :MAX_TIME, progress, t, false
end
if step_status != :success
break
end
end
end
if pars.output_level >= 1
println("Terminated due to max iterations reached")
end
return iter, :MAX_IT, progress, pars.term.max_it, false
end
|
Require Import List.
Import ListNotations.
(*Require Import Coq.Lists.ListSet.*)
Require Import Extraction.
Require Import Unify.
Require Import MiniKanrenSyntax.
Require Import Stream.
Require Import DenotationalSem.
Require Import OperationalSem.
Require Import OpSemSoundness.
Require Import OpSemCompleteness.
Module ObviousConstraintStore <: ConstraintStoreSig.
Definition constraint_store (s : subst) : Set := list (term * term).
Definition init_cs : constraint_store empty_subst := [].
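(* This store is "obvious" in that it performs no propagation: add_constraint
   simply conses the disequality (t1, t2) onto the list and upd_cs leaves the
   store unchanged under substitution composition, so all checking is deferred
   to the denotational side conditions proved below. *)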
Inductive add_constraint_ind_def : forall (s : subst), constraint_store s -> term -> term -> option (constraint_store s) -> Set :=
| acC : forall s cs t1 t2, add_constraint_ind_def s cs t1 t2 (Some ((t1, t2) :: cs)).
Definition add_constraint := add_constraint_ind_def.
Lemma add_constraint_exists :
forall (s : subst) (cs : constraint_store s) (t1 t2 : term),
{r : option (constraint_store s) & add_constraint s cs t1 t2 r}.
Proof. intros. eexists. econstructor. Qed.
Lemma add_constraint_unique :
forall (s : subst) (cs : constraint_store s) (t1 t2 : term) (r r' : option (constraint_store s)),
add_constraint s cs t1 t2 r -> add_constraint s cs t1 t2 r' -> r = r'.
Proof. intros. good_inversion H. good_inversion H0. simpl_existT_cs_same. reflexivity. Qed.
Inductive upd_cs_ind_def : forall (s : subst), constraint_store s -> forall (d : subst), option (constraint_store (compose s d)) -> Set :=
| uC : forall s cs d, upd_cs_ind_def s cs d (Some cs).
Definition upd_cs := upd_cs_ind_def.
Lemma upd_cs_exists :
forall (s : subst) (cs : constraint_store s) (d : subst),
{r : option (constraint_store (compose s d)) & upd_cs s cs d r}.
Proof. intros. eexists. econstructor. Qed.
Lemma upd_cs_unique :
forall (s : subst) (cs : constraint_store s) (d : subst) (r r' : option (constraint_store (compose s d))),
upd_cs s cs d r -> upd_cs s cs d r' -> r = r'.
Proof. intros. good_inversion H. good_inversion H0. simpl_existT_cs_same. reflexivity. Qed.
Definition in_denotational_sem_cs (s : subst) (cs : constraint_store s) (f : repr_fun) :=
forall (t1 t2 : term), In (t1, t2) cs -> ~ gt_eq (apply_repr_fun f t1) (apply_repr_fun f t2).
Notation "[| s , cs , f |]" := (in_denotational_sem_cs s cs f) (at level 0).
Lemma init_condition : forall f, [| empty_subst , init_cs , f |].
Proof. unfold in_denotational_sem_cs. contradiction. Qed.
Lemma add_constraint_fail_condition :
forall (s : subst) (cs : constraint_store s) (t1 t2 : term),
add_constraint s cs t1 t2 None ->
forall f, ~ ([| s , cs , f |] /\ [ s , f ] /\ [| Disunify t1 t2 , f |]).
Proof. intros. inversion H. Qed.
Lemma add_constraint_success_condition :
forall (s : subst) (cs cs' : constraint_store s) (t1 t2 : term),
add_constraint s cs t1 t2 (Some cs') ->
forall f, [| s , cs' , f |] /\ [ s , f ] <->
[| s , cs , f |] /\ [ s , f ] /\ [| Disunify t1 t2 , f |].
Proof.
unfold in_denotational_sem_cs. intros.
good_inversion H. simpl_existT_cs_same. good_inversion H4. split.
{ intros [DSCS DSS]. split; try split; intros; auto.
{ apply DSCS. right. auto. }
{ constructor. apply DSCS. left. auto. } }
{ intros [DSCS [DSS DSG]]. good_inversion DSG. split; auto. intros.
destruct H; auto. good_inversion H; auto. }
Qed.
Lemma upd_cs_fail_condition :
forall (s : subst) (cs : constraint_store s) (d : subst),
upd_cs s cs d None -> forall f, ~ ([| s , cs , f |] /\ [ compose s d , f ]).
Proof. intros. inversion H. Qed.
Lemma upd_cs_success_condition :
forall (s : subst) (cs : constraint_store s) (d : subst) (cs' : constraint_store (compose s d)),
upd_cs s cs d (Some cs') ->
forall f, [| compose s d , cs' , f |] /\ [ compose s d , f ] <->
[| s , cs , f |] /\ [ compose s d , f ].
Proof.
unfold in_denotational_sem_cs. intros. good_inversion H. simpl_existT_cs_same.
good_inversion H3. reflexivity.
Qed.
End ObviousConstraintStore.
Module OperationalSemObviousCS := OperationalSemAbstr ObviousConstraintStore.
Module OperationalSemObviousCSSoundness := OperationalSemSoundnessAbstr ObviousConstraintStore.
Module OperationalSemObviousCSCompleteness := OperationalSemCompletenessAbstr ObviousConstraintStore.
Import OperationalSemObviousCS.
Extraction Language Haskell.
Extraction "extracted/obvious_diseq_interpreter.hs" op_sem_exists.
|
Formal statement is: lemma homotopy_eqv_sing: fixes S :: "'a::real_normed_vector set" and a :: "'b::real_normed_vector" shows "S homotopy_eqv {a} \<longleftrightarrow> S \<noteq> {} \<and> contractible S" Informal statement is: A set $S$ is homotopy equivalent to a single point if and only if $S$ is nonempty and contractible. |
State Before: R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
⊢ IsRegular p State After: case left
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
⊢ IsLeftRegular p
case right
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
⊢ IsRightRegular p Tactic: constructor State Before: case left
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
⊢ IsLeftRegular p State After: case left
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
q r : R[X]
h : (fun x => p * x) q = (fun x => p * x) r
⊢ q = r Tactic: intro q r h State Before: case left
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
q r : R[X]
h : (fun x => p * x) q = (fun x => p * x) r
⊢ q = r State After: case left
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
q r : R[X]
h : p * q = p * r
⊢ q = r Tactic: dsimp only at h State Before: case left
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
q r : R[X]
h : p * q = p * r
⊢ q = r State After: no goals Tactic: rw [← sub_eq_zero, ← hp.mul_right_eq_zero_iff, mul_sub, h, sub_self] State Before: case right
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
⊢ IsRightRegular p State After: case right
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
q r : R[X]
h : (fun x => x * p) q = (fun x => x * p) r
⊢ q = r Tactic: intro q r h State Before: case right
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
q r : R[X]
h : (fun x => x * p) q = (fun x => x * p) r
⊢ q = r State After: case right
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
q r : R[X]
h : q * p = r * p
⊢ q = r Tactic: simp only at h State Before: case right
R✝ : Type u
S : Type v
a b : R✝
m n : ℕ
ι : Type y
inst✝¹ : Semiring R✝
p✝ : R✝[X]
R : Type u_1
inst✝ : Ring R
p : R[X]
hp : Monic p
q r : R[X]
h : q * p = r * p
⊢ q = r State After: no goals Tactic: rw [← sub_eq_zero, ← hp.mul_left_eq_zero_iff, sub_mul, h, sub_self] |
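-- A minimal Lean 4 (Mathlib) sketch assembling the state/tactic pairs above
-- into a single proof; the theorem name and the explicit `Polynomial` binders
-- are assumptions, since the trace records only the goals and tactics.
theorem monic_isRegular {R : Type*} [Ring R] {p : Polynomial R} (hp : p.Monic) :
    IsRegular p := by
  constructor
  · -- left regularity: multiplication by p on the left is injective
    intro q r h
    dsimp only at h   -- h : p * q = p * r
    rw [← sub_eq_zero, ← hp.mul_right_eq_zero_iff, mul_sub, h, sub_self]
  · -- right regularity: multiplication by p on the right is injective
    intro q r h
    simp only at h    -- h : q * p = r * p
    rw [← sub_eq_zero, ← hp.mul_left_eq_zero_iff, sub_mul, h, sub_self]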
function [xmin,... % Solution, returned as a real vector or real array.
fmin,... % Objective function value at the solution, returned as a real number.
stopflag,... % Reason cmaes stopped, returned as an integer.
out] = ... % Information about the optimization process, returned as a structure
my_cmaes(infitfun,... % Function to minimize, specified as a function handle or function name
xstart,... % Initial point, specified as a real vector or real array
inopts,... % Optimization options, specified as a structure
varargin) % Additional arguments to FITFUN
% This is a modified version of the CMA-ES optimisation algorithm of
% Nikolaus Hansen, GNU General Public License (C) 2001-2008
% The original version is available from http://cma.gforge.inria.fr/
% Differences to the original:
% 1. inputs and outputs are consistent with MATLAB's proprietary optimisers
% (e.g. fminsearch)
% - this requires that old argument INSIGMA is included in OPTS
% 2. this version now accepts a function handle as FUN instead of only a
% character string
% 3. new options:
% - inopts.RestartIf: when opts.Restarts>0 it sets a condition whether to
% restart or not
% - inopts.flgEvalParallel: this option existed before, but its behaviour
% changed, if true or 1 it now triggers parallel evaluation with a
% parfor loop
% Copyright (C) 2021 L. Trotter
% This file is part of the Modular Assessment of Rainfall-Runoff Models
% Toolbox (MARRMoT).
% MARRMoT is a free software (GNU GPL v3) and distributed WITHOUT ANY
% WARRANTY. See <https://www.gnu.org/licenses/> for details.
% ORIGINAL COPYRIGHT NOTICE IS AT THE END OF THE FILE as in the original
% cmaes.m file.
% cmaes.m, Version 3.61.beta, last change: April, 2012
% CMAES implements an Evolution Strategy with Covariance Matrix
% Adaptation (CMA-ES) for nonlinear function minimization. For
% introductory comments and copyright (GPL) see end of file (type
% 'type cmaes'). cmaes.m runs with MATLAB (Windows, Linux) and,
% without data logging and plotting, it should run under Octave
% (Linux, package octave-forge is needed).
%
% OPTS = CMAES returns default options.
% OPTS = CMAES('defaults') returns default options quietly.
% OPTS = CMAES('displayoptions') displays options.
% OPTS = CMAES('defaults', OPTS) supplements options OPTS with default
% options.
%
% XMIN = CMAES(FITFUN, XSTART[, OPTS]) locates an approximate minimum
% XMIN of function FITFUN starting from column vector XSTART with the initial
% coordinate wise search standard deviation OPTS.INSIGMA.
%
% Input arguments:
%
% FITFUN is a string function name like 'frosen' or a function handle.
% FUN takes as argument a column vector of size of XSTART and returns a
% scalar. An easy way to implement a hard non-linear constraint is to
% return NaN. Then, this function evaluation is not counted and a newly
% sampled point is tried immediately.
%
% XSTART is a column vector, or a matrix, or a string. If XSTART is a matrix,
% mean(XSTART, 2) is taken as initial point. If XSTART is a string like
% '2*rand(10,1)-1', the string is evaluated first.
%
% INOPTS (an optional argument) is a struct holding additional input
% options. Valid field names and a short documentation can be
% discovered by looking at the default options (type 'cmaes'
% without arguments, see above). Empty or missing fields in OPTS
% invoke the default value, i.e. OPTS needs not to have all valid
% field names. Capitalization does not matter and unambiguous
% abbreviations can be used for the field names. If a string is
% given where a numerical value is needed, the string is evaluated
% by eval, where 'N' expands to the problem dimension
% (==size(XSTART,1)) and 'popsize' to the population size.
%
% [XMIN, FMIN, STOPFLAG, OUT] = CMAES(FITFUN, XSTART)
% returns the best (minimal) point XMIN (found in the last
% generation); function value FMIN of XMIN; a STOPFLAG value as a cell array,
% where possible entries are 'fitness', 'tolx', 'tolupx', 'tolfun',
% 'maxfunevals', 'maxiter', 'stoptoresume', 'manual',
% 'warnconditioncov', 'warnnoeffectcoord', 'warnnoeffectaxis',
% 'warnequalfunvals', 'warnequalfunvalhist', 'bug' (use
% e.g. any(strcmp(STOPFLAG, 'tolx')) or findstr(strcat(STOPFLAG,
% 'tolx')) for further processing); and a record struct OUT with some
% more output, where the struct SOLUTIONS.BESTEVER contains the overall
% best evaluated point X with function value F evaluated at evaluation
% count EVALS; i.e., the overall best solution is available as
% OUT.SOLUTIONS.BESTEVER. Moreover a history of solutions and
% parameters is written to files according to the Log-options.
%
% A regular manual stop can be achieved via the file signals.par. The
% program is terminated if the first two non-white sequences in any
% line of this file are 'stop' and the value of the LogFilenamePrefix
% option (by default 'outcmaes'). Also a run can be skipped.
% Given, for example, 'skip outcmaes run 2', skips the second run
% if option Restarts is at least 2, and another run will be started.
%
% To run the code completely silently set Disp, Save, and Log options
% to 0. With OPTS.LogModulo > 0 (1 by default) the most important
% data are written to ASCII files permitting to investigate the
% results (e.g. plot with function plotcmaesdat) even while CMAES is
% still running (which can be quite useful on expensive objective
% functions). When OPTS.SaveVariables==1 (default) everything is saved
% in file OPTS.SaveFilename (default 'variablescmaes.mat') allowing to
% resume the search afterwards by using the resume option.
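%
% For example (a sketch), a completely silent run:
%   opts.DispFinal = 'off'; opts.DispModulo = 0;
%   opts.SaveVariables = 'off'; opts.LogModulo = 0;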
%
% To find the best ever evaluated point load the variables typing
% "es=load('variablescmaes')" and investigate the variable
% es.out.solutions.bestever.
%
% OPTS.INSIGMA is a scalar, or a column vector of size(X0,1), or a string
% that can be evaluated into one of these. INSIGMA determines the
% initial coordinate wise standard deviations for the search.
% Setting INSIGMA to one third of the initial search region is
% appropriate; e.g., for an initial point in [0, 6]^10, set
% opts.insigma = 2 and call my_cmaes('myfun', 3*rand(10,1), opts). If
% OPTS.INSIGMA is missing and
% size(X0,2) > 1, INSIGMA is set to sqrt(var(X0')'). That is, X0 is
% used as a sample for estimating initial mean and variance of the
% search distribution. If inopts.LBounds and inopts.UBounds are both set
% legally, INSIGMA will default to 0.3*(inopts.UBounds - inopts.LBounds)
%
% In case of a noisy objective function (uncertainties) set
% OPTS.Noise.on = 1. This option interferes presumably with some
% termination criteria, because the step-size sigma will presumably
% not converge to zero anymore. If MY_CMAES was provided with a
% fourth argument (P1 in the example below, which is passed to the
% objective function FUN), this argument is multiplied with the
% factor given in option Noise.alphaevals, each time the detected
% noise exceeds a threshold. This argument can be used within
% FUN, for example, as averaging number to reduce the noise level.
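%
% A sketch (noisyobj is a hypothetical noisy objective):
%   opts.Noise.on = 1;
%   [x, f] = my_cmaes(@(x, n) mean(arrayfun(@(k) noisyobj(x), 1:ceil(n))), ...
%                     x0, opts, 5);  % 5 is the initial averaging count P1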
%
% OPTS.DiagonalOnly > 1 defines the number of initial iterations,
% where the covariance matrix remains diagonal and the algorithm has
% internally linear time complexity. OPTS.DiagonalOnly = 1 means
% keeping the covariance matrix always diagonal and this setting
% also exhibits linear space complexity. This can be particularly
% useful for dimension > 100. The default is OPTS.DiagonalOnly = 0.
%
% OPTS.CMA.active = 1 turns on "active CMA" with a negative update
% of the covariance matrix and checks for positive definiteness.
% OPTS.CMA.active = 2 does not check for pos. def. and is numerically
% faster. Active CMA usually speeds up the adaptation and might
% become a default in near future.
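%
% For example (sketch): opts.DiagonalOnly = 1 keeps C diagonal throughout,
% while opts.CMA.active = 1 enables the checked negative update.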
%
% The primary strategy parameter to play with is OPTS.PopSize, which
% can be increased from its default value. Increasing the population
% size (by default linked to increasing parent number OPTS.ParentNumber)
% improves global search properties in exchange for speed. Speed
% decreases, as a rule, at most linearly with increasing population
% size. It is advisable to begin with the default small population
% size. The options Restarts and IncPopSize can be used for an
% automated multistart where the population size is increased by the
% factor IncPopSize (two by default) before each restart. X0 (given as
% string) is reevaluated for each restart. Stopping options
% StopFunEvals, StopIter, MaxFunEvals, and Fitness terminate the
% program, all others including MaxIter invoke another restart, where
% the iteration counter is reset to zero.
%
% See also FMINSEARCH, FMINUNC, FMINBND.
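%
% Usage example (a sketch; myobj is a hypothetical scalar objective):
%
%   opts = my_cmaes('defaults');
%   opts.insigma = 0.3*ones(10,1);   % initial coordinate-wise step sizes
%   opts.LBounds = zeros(10,1);      % finite bounds trigger internal rescaling
%   opts.UBounds = ones(10,1);
%   opts.Restarts = 2;               % allow up to two restarts
%   [xmin, fmin, stopflag, out] = my_cmaes(@myobj, 0.5*ones(10,1), opts);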
cmaVersion = '3.61.beta';
% ----------- Set Defaults for Input Parameters and Options -------------
% These defaults may be edited for convenience
% Options defaults: Stopping criteria % (value of stop flag)
defopts.insigma = '[] % when left empty, defaults to std(xstart,0,2) if all(size(xstart) > 1), or to 0.3*(UBounds-LBounds) if both bounds are finite';
defopts.StopFitness = '-Inf % stop if f(xmin) < stopfitness, minimization';
defopts.MaxFunEvals = 'Inf % maximal number of fevals';
defopts.MaxIter = '1e3*(N+5)^2/sqrt(popsize) % maximal number of iterations';
defopts.StopFunEvals = 'Inf % stop after resp. evaluation, possibly resume later';
defopts.StopIter = 'Inf % stop after resp. iteration, possibly resume later';
defopts.TolX = '1e-11*max(insigma) % stop if x-change smaller TolX';
defopts.TolUpX = '1e3*max(insigma) % stop if x-changes larger TolUpX';
defopts.TolFun = '1e-12 % stop if fun-changes smaller TolFun';
defopts.TolHistFun = '1e-13 % stop if back fun-changes smaller TolHistFun';
defopts.StopOnStagnation = 'on % stop when fitness stagnates for a long time';
% TODO: stagnation has four parameters for the period: min = 120, const = 30N/lam, rel = 0.2, max = 2e5
% defopts.StopOnStagnation = '[120 30*N/popsize 0.2 2e5] % [min const rel_iter max] measuring period';
defopts.StopOnWarnings = 'yes % ''no''==''off''==0, ''on''==''yes''==1 ';
defopts.StopOnEqualFunctionValues = '2 + N/3 % number of iterations';
% Options defaults: Other
defopts.DiffMaxChange = 'Inf % maximal variable change(s), can be Nx1-vector';
defopts.DiffMinChange = '0 % minimal variable change(s), can be Nx1-vector';
defopts.WarnOnEqualFunctionValues = ...
'yes % ''no''==''off''==0, ''on''==''yes''==1 ';
defopts.LBounds = '-Inf % lower bounds, scalar or Nx1-vector';
defopts.UBounds = 'Inf % upper bounds, scalar or Nx1-vector';
defopts.EvalParallel = 'no % Can we use parallel evaluation with a parallel pool of workers?';
defopts.EvalInitialX = 'yes % evaluation of initial solution';
defopts.Restarts = '0 % number of restarts ';
defopts.IncPopSize = '2 % multiplier for population size before each restart';
defopts.RestartIf = 'true % condition for restart: true restarts as many times as in opts.Restarts';
defopts.PopSize = '(4 + floor(3*log(N))) % population size, lambda';
defopts.ParentNumber = 'floor(popsize/2) % AKA mu, popsize equals lambda';
defopts.RecombinationWeights = 'superlinear decrease % or linear, or equal';
defopts.DiagonalOnly = '0*(1+100*N/sqrt(popsize))+(N>=1000) % C is diagonal for given iterations, 1==always';
defopts.Noise.on = '0 % uncertainty handling is off by default';
defopts.Noise.reevals = '1*ceil(0.05*lambda) % nb. of re-evaluated for uncertainty measurement';
defopts.Noise.theta = '0.5 % threshold to invoke uncertainty treatment'; % smaller: more likely to diverge
defopts.Noise.cum = '0.3 % cumulation constant for uncertainty';
defopts.Noise.cutoff = '2*lambda/3 % rank change cutoff for summation';
defopts.Noise.alphasigma = '1+2/(N+10) % factor for increasing sigma'; % smaller: slower adaptation
defopts.Noise.epsilon = '1e-7 % additional relative perturbation before reevaluation';
defopts.Noise.minmaxevals = '[1 inf] % min and max value of 2nd arg to fitfun, start value is 4th arg to my_cmaes';
defopts.Noise.alphaevals = '1+2/(N+10) % factor for increasing 2nd arg to fitfun';
defopts.Noise.callback = '[] % callback function when uncertainty threshold is exceeded';
% defopts.TPA = 0;
defopts.CMA.cs = '(mueff+2)/(N+mueff+3) % cumulation constant for step-size';
%qqq defopts.CMA.cs = (mueff^0.5)/(N^0.5+mueff^0.5) % the short time horizon version
defopts.CMA.damps = '1 + 2*max(0,sqrt((mueff-1)/(N+1))-1) + cs % damping for step-size';
% defopts.CMA.ccum = '4/(N+4) % cumulation constant for covariance matrix';
defopts.CMA.ccum = '(4 + mueff/N) / (N+4 + 2*mueff/N) % cumulation constant for pc';
defopts.CMA.ccov1 = '2 / ((N+1.3)^2+mueff) % learning rate for rank-one update';
defopts.CMA.ccovmu = '2 * (mueff-2+1/mueff) / ((N+2)^2+mueff) % learning rate for rank-mu update';
defopts.CMA.on = 'yes';
defopts.CMA.active = '0 % active CMA 1: neg. updates with pos. def. check, 2: neg. updates';
flg_future_setting = 0; % testing for possible future variant(s)
if flg_future_setting
disp('in the future')
% damps setting from Brockhoff et al 2010
% this damps diverges with popsize 400:
% defopts.CMA.damps = '2*mueff/lambda + 0.3 + cs % damping for step-size';
% cmaeshtml('benchmarkszero', ones(20,1)*2, 5, o, 15);
% how about:
% defopts.CMA.damps = '2*mueff/lambda + 0.3 + 2*max(0,sqrt((mueff-1)/(N+1))-1) + cs % damping for step-size';
defopts.CMA.damps = '0.5 + 0.5*min(1, (0.27*lambda/mueff-1)^2) + 2*max(0,sqrt((mueff-1)/(N+1))-1) + cs % damping for step-size';
if 11 < 3
defopts.CMA.damps = '0.5 + 0.5*min(1,(lam_mirr/(0.159*lambda)-1)^2) + 2*max(0,sqrt((mueff-1)/(N+1))-1) + cs % damping for step-size';
defopts.mirrored_offspring = 'floor(0.5 + 0.159 * lambda)';
% TODO: this should also depend on diagonal option!?
defopts.CMA.active = 'floor(int8(lam_mirr>0)) % active CMA 1: neg. updates with pos. def. check, 2: neg. updates';
end
% ccum adjusted for large mueff, better on schefelmult?
% TODO: this should also depend on diagonal option!?
defopts.CMA.ccum = '(4 + mueff/N) / (N+4 + 2*mueff/N) % cumulation constant for pc';
defopts.CMA.active = '1 % active CMA 1: neg. updates with pos. def. check, 2: neg. updates';
end
defopts.Resume = 'no % resume former run from SaveFile';
defopts.Science = 'on % off==do some additional (minor) problem capturing, NOT IN USE';
defopts.ReadSignals = 'on % from file signals.par for termination, yet a stub';
defopts.Seed = 'sum(100*clock) % evaluated if it is a string';
defopts.DispFinal = 'on % display messages like initial and final message';
defopts.DispModulo = '100 % [0:Inf], disp messages after every i-th iteration';
defopts.SaveVariables = 'on % [on|final|off][-v6] save variables to .mat file';
defopts.SaveFilename = 'variablescmaes.mat % save all variables, see SaveVariables';
defopts.LogModulo = '1 % [0:Inf] if >1 record data less frequently after gen=100';
defopts.LogTime = '25 % [0:100] max. percentage of time for recording data';
defopts.LogFilenamePrefix = 'outcmaes % files for output data';
defopts.LogPlot = 'off % plot while running using output data files';
%qqqkkk
%defopts.varopt1 = ''; % 'for temporary and hacking purposes';
%defopts.varopt2 = ''; % 'for temporary and hacking purposes';
defopts.UserData = 'for saving data/comments associated with the run';
defopts.UserDat2 = ''; % for saving data/comments associated with the run
% ---------------------- Handling Input Parameters ----------------------
if nargin < 1 || isequal(infitfun, 'defaults') % pass default options
if nargin < 1
disp('Default options returned (type "help cmaes" for help).');
end
xmin = defopts;
if nargin > 1 % supplement second argument with default options
xmin = getoptions(xstart, defopts);
end
return;
end
if isequal(infitfun, 'displayoptions')
names = fieldnames(defopts);
for name = names'
disp([name{:} repmat(' ', 1, 20-length(name{:})) ': ''' defopts.(name{:}) '''']);
end
return;
end
input.fitfun = infitfun; % record used input
if isempty(infitfun)
% fitfun = definput.fitfun;
% warning(['Objective function not determined, ''' fitfun ''' used']);
error(['Objective function not determined']);
end
% EDITS so that fitfun is a handle rather than a string
% if ~ischar(fitfun)
% error('first argument FUN must be a string');
% end
if ischar(infitfun)
fitfun_char = infitfun;
original_fitfun = @(varargin) feval(fitfun_char, varargin{:});
else
original_fitfun = infitfun;
end
if nargin < 2
xstart = [];
end
input.xstart = xstart;
if isempty(xstart)
% xstart = definput.xstart; % objective variables initial point
% warning('Initial search point, and problem dimension, not determined');
error('Initial search point, and problem dimension, not determined');
end
% EDITS to add insigma to inopts
% if nargin < 3
% insigma = [];
% end
% if isa(insigma, 'struct')
% error(['Third argument SIGMA must be (or eval to) a scalar '...
% 'or a column vector of size(X0,1)']);
% end
% Compose options opts
if nargin < 3 || isempty(inopts) % no input options available
inopts = defopts;
else
inopts = getoptions(inopts, defopts);
end
% EDITS to add insigma to inopts
original_insigma = myeval(inopts.insigma);
input.sigma = original_insigma;
if isempty(original_insigma)
if all(size(myeval(xstart)) > 1)
original_insigma = std(xstart, 0, 2);
if any(original_insigma == 0)
error(['Initial search volume is zero, choose SIGMA or X0 appropriate']);
end
else
% will be captured later
% error(['Initial step sizes (SIGMA) not determined']);
end
end
i = strfind(inopts.SaveFilename, '%'); % remove everything after comment
if ~isempty(i)
inopts.SaveFilename = inopts.SaveFilename(1:i(1)-1);
end
inopts.SaveFilename = deblank(inopts.SaveFilename); % remove trailing white spaces
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
counteval = 0; countevalNaN = 0;
irun = 0;
while irun <= myeval(inopts.Restarts) && (irun == 0 || myeval(inopts.RestartIf)) % for-loop does not work with resume
irun = irun + 1;
% ------------------------ Initialization -------------------------------
% Handle resuming of old run
flgresume = myevalbool(inopts.Resume);
xmean = myeval(xstart);
if all(size(xmean) > 1)
xmean = mean(xmean, 2); % in case if xstart is a population
elseif size(xmean, 2) > 1
xmean = xmean';
end
if ~flgresume % not resuming a former run
% Assign settings from input parameters and options for myeval...
N = size(xmean, 1); numberofvariables = N;
lambda0 = floor(myeval(inopts.PopSize) * myeval(inopts.IncPopSize)^(irun-1));
% lambda0 = floor(myeval(opts.PopSize) * 3^floor((irun-1)/2));
popsize = lambda0;
lambda = lambda0;
original_insigma = myeval(original_insigma);
if all(size(original_insigma) == [N 2])
original_insigma = 0.5 * (original_insigma(:,2) - original_insigma(:,1));
end
insigma = original_insigma;
else % flgresume is true, do resume former run
tmp = whos('-file', inopts.SaveFilename);
for i = 1:length(tmp)
if strcmp(tmp(i).name, 'localopts')
error('Saved variables include variable "localopts", please remove');
end
end
local.opts = inopts; % keep stopping and display options
local.varargin = varargin;
load(inopts.SaveFilename);
varargin = local.varargin;
flgresume = 1;
% Overwrite old stopping and display options
inopts.StopFitness = local.opts.StopFitness;
%%opts.MaxFunEvals = local.opts.MaxFunEvals;
%%opts.MaxIter = local.opts.MaxIter;
inopts.StopFunEvals = local.opts.StopFunEvals;
inopts.StopIter = local.opts.StopIter;
inopts.TolX = local.opts.TolX;
inopts.TolUpX = local.opts.TolUpX;
inopts.TolFun = local.opts.TolFun;
inopts.TolHistFun = local.opts.TolHistFun;
inopts.StopOnStagnation = local.opts.StopOnStagnation;
inopts.StopOnWarnings = local.opts.StopOnWarnings;
inopts.ReadSignals = local.opts.ReadSignals;
inopts.DispFinal = local.opts.DispFinal;
inopts.LogPlot = local.opts.LogPlot;
inopts.DispModulo = local.opts.DispModulo;
inopts.SaveVariables = local.opts.SaveVariables;
inopts.LogModulo = local.opts.LogModulo;
inopts.LogTime = local.opts.LogTime;
inopts.EvalParallel = local.opts.EvalParallel;
% Get any option that doesn't exist in the old option from the new one,
% this is handy if you make any changes that adds new options between two
% restarts
inopts = getoptions(inopts, local.opts);
clear local; % otherwise local would be overwritten during load
end
%--------------------------------------------------------------
% Evaluate options
stopFitness = myeval(inopts.StopFitness);
stopMaxFunEvals = myeval(inopts.MaxFunEvals);
stopMaxIter = myeval(inopts.MaxIter);
stopFunEvals = myeval(inopts.StopFunEvals);
stopIter = myeval(inopts.StopIter);
if flgresume
stopIter = stopIter + countiter;
end
stopTolX = myeval(inopts.TolX);
stopTolUpX = myeval(inopts.TolUpX);
stopTolFun = myeval(inopts.TolFun);
stopTolHistFun = myeval(inopts.TolHistFun);
stopOnStagnation = myevalbool(inopts.StopOnStagnation);
stopOnWarnings = myevalbool(inopts.StopOnWarnings);
flgreadsignals = myevalbool(inopts.ReadSignals);
flgWarnOnEqualFunctionValues = myevalbool(inopts.WarnOnEqualFunctionValues);
flgEvalParallel = myevalbool(inopts.EvalParallel);
if flgEvalParallel
poolobj = gcp('nocreate');
if isempty(poolobj); poolobj = parpool; end
end
stopOnEqualFunctionValues = myeval(inopts.StopOnEqualFunctionValues);
arrEqualFunvals = zeros(1, 10+N);
flgDiagonalOnly = myeval(inopts.DiagonalOnly);
flgActiveCMA = myeval(inopts.CMA.active);
noiseHandling = myevalbool(inopts.Noise.on);
noiseMinMaxEvals = myeval(inopts.Noise.minmaxevals);
noiseAlphaEvals = myeval(inopts.Noise.alphaevals);
noiseCallback = myeval(inopts.Noise.callback);
flgdisplay = myevalbool(inopts.DispFinal);
flgplotting = myevalbool(inopts.LogPlot);
verbosemodulo = myeval(inopts.DispModulo);
flgscience = myevalbool(inopts.Science);
flgsaving = [];
strsaving = [];
if strfind(inopts.SaveVariables, '-v6')
i = strfind(inopts.SaveVariables, '%');
if isempty(i) || i == 0 || strfind(inopts.SaveVariables, '-v6') < i
strsaving = '-v6';
flgsaving = 1;
flgsavingfinal = 1;
end
end
if strncmp('final', inopts.SaveVariables, 5)
flgsaving = 0;
flgsavingfinal = 1;
end
if isempty(flgsaving)
flgsaving = myevalbool(inopts.SaveVariables);
flgsavingfinal = flgsaving;
end
savemodulo = myeval(inopts.LogModulo);
savetime = myeval(inopts.LogTime);
i = strfind(inopts.LogFilenamePrefix, ' '); % remove everything after white space
if ~isempty(i)
inopts.LogFilenamePrefix = inopts.LogFilenamePrefix(1:i(1)-1);
end
% TODO here silent option? set disp, save and log options to 0
%--------------------------------------------------------------
if (isfinite(stopFunEvals) || isfinite(stopIter)) && ~flgsaving
warning('To resume later the saving option needs to be set');
end
% Do more checking and initialization
if flgresume % resume is on
time.t0 = clock;
if flgdisplay
disp([' resumed from ' inopts.SaveFilename ]);
end
if counteval >= stopMaxFunEvals
error(['MaxFunEvals exceeded, use StopFunEvals as stopping ' ...
'criterion before resume']);
end
if countiter >= stopMaxIter
error(['MaxIter exceeded, use StopIter as stopping criterion ' ...
'before resume']);
end
else % flgresume
xmean = mean(myeval(xstart), 2); % evaluate xstart again, because of irun
maxdx = myeval(inopts.DiffMaxChange); % maximal sensible variable change
mindx = myeval(inopts.DiffMinChange); % minimal sensible variable change
% can both also be defined as Nx1 vectors
% rescale
original_lbounds = myeval(inopts.LBounds);
original_ubounds = myeval(inopts.UBounds);
if length(original_lbounds) == 1
original_lbounds = repmat(original_lbounds, N, 1);
end
if length(original_ubounds) == 1
original_ubounds = repmat(original_ubounds, N, 1);
end
% We'll use fixed bounds [0;10] and rescale the parameters linearly
% within these bounds, if original bounds are given - otherwise no
% scaling happens
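% (scale_linear, defined elsewhere in the toolbox, is assumed here to be the
% affine map x -> lb_new + (x-lb_old).*(ub_new-lb_new)./(ub_old-lb_old))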
if all(original_lbounds > -Inf) && all(original_ubounds < Inf)
lbounds = zeros(N, 1);
ubounds = repmat(10, N, 1);
xmean = scale_linear(xmean, original_lbounds, original_ubounds, lbounds, ubounds);
fitfun = @(x, varargin) original_fitfun(scale_linear(x, lbounds, ubounds, original_lbounds, original_ubounds), varargin{:});
if ~isempty(original_insigma)
insigma = scale_linear_range(original_insigma, original_ubounds - original_lbounds, ubounds - lbounds);
end
else
lbounds = original_lbounds;
ubounds = original_ubounds;
insigma = original_insigma;
fitfun = original_fitfun;
end
if isempty(insigma) % last chance to set insigma
if all(lbounds > -Inf) && all(ubounds < Inf)
if any(lbounds>=ubounds)
error('upper bound must be greater than lower bound');
end
insigma = 0.3*(ubounds-lbounds);
stopTolX = myeval(inopts.TolX); % reevaluate these
stopTolUpX = myeval(inopts.TolUpX);
else
error(['Initial step sizes (SIGMA) not determined']);
end
end
% Check all vector sizes
if size(xmean, 2) > 1 || size(xmean,1) ~= N
error(['initial search point should be a column vector of size ' ...
num2str(N)]);
elseif ~(all(size(insigma) == [1 1]) || all(size(insigma) == [N 1]))
error(['input parameter SIGMA should be (or eval to) a scalar '...
'or a column vector of size ' num2str(N)] );
elseif size(stopTolX, 2) > 1 || ~ismember(size(stopTolX, 1), [1 N])
error(['option TolX should be (or eval to) a scalar '...
'or a column vector of size ' num2str(N)] );
elseif size(stopTolUpX, 2) > 1 || ~ismember(size(stopTolUpX, 1), [1 N])
error(['option TolUpX should be (or eval to) a scalar '...
'or a column vector of size ' num2str(N)] );
elseif size(maxdx, 2) > 1 || ~ismember(size(maxdx, 1), [1 N])
error(['option DiffMaxChange should be (or eval to) a scalar '...
'or a column vector of size ' num2str(N)] );
elseif size(mindx, 2) > 1 || ~ismember(size(mindx, 1), [1 N])
error(['option DiffMinChange should be (or eval to) a scalar '...
'or a column vector of size ' num2str(N)] );
elseif size(lbounds, 2) > 1 || ~ismember(size(lbounds, 1), [1 N])
error(['option lbounds should be (or eval to) a scalar '...
'or a column vector of size ' num2str(N)] );
elseif size(ubounds, 2) > 1 || ~ismember(size(ubounds, 1), [1 N])
error(['option ubounds should be (or eval to) a scalar '...
'or a column vector of size ' num2str(N)] );
end
% Initialize dynamic internal state parameters
if any(insigma <= 0)
error(['Initial search volume (SIGMA) must be greater than zero']);
end
if max(insigma)/min(insigma) > 1e6
error(['Initial search volume (SIGMA) badly conditioned']);
end
sigma = max(insigma); % overall standard deviation
pc = zeros(N,1); ps = zeros(N,1); % evolution paths for C and sigma
if length(insigma) == 1
insigma = insigma * ones(N,1) ;
end
diagD = insigma/max(insigma); % diagonal matrix D defines the scaling
diagC = diagD.^2;
if flgDiagonalOnly ~= 1 % use at some point full covariance matrix
B = eye(N,N); % B defines the coordinate system
BD = B.*repmat(diagD',N,1); % B*D for speed up only
C = diag(diagC); % covariance matrix == BD*(BD)'
end
if flgDiagonalOnly
B = 1;
end
fitness.hist=NaN*ones(1,10+ceil(3*10*N/lambda)); % history of fitness values
fitness.histsel=NaN*ones(1,10+ceil(3*10*N/lambda)); % history of fitness values
fitness.histbest=[]; % history of fitness values
fitness.histmedian=[]; % history of fitness values
% Initialize boundary handling
bnd.isactive = any(lbounds > -Inf) || any(ubounds < Inf);
if bnd.isactive
if any(lbounds>ubounds)
error('lower bound found to be greater than upper bound');
end
[xmean ti] = xintobounds(xmean, lbounds, ubounds); % just in case
if any(ti)
warning('Initial point was out of bounds, corrected');
end
bnd.weights = zeros(N,1); % weights for bound penalty
% scaling is better in axis-parallel case, worse in rotated
bnd.flgscale = 0; % scaling will be omitted if zero
if bnd.flgscale ~= 0
bnd.scale = diagC/mean(diagC);
else
bnd.scale = ones(N,1);
end
idx = (lbounds > -Inf) | (ubounds < Inf);
if length(idx) == 1
idx = idx * ones(N,1);
end
bnd.isbounded = zeros(N,1);
bnd.isbounded(find(idx)) = 1;
maxdx = min(maxdx, (ubounds - lbounds)/2);
if any(sigma*sqrt(diagC) > maxdx)
fac = min(maxdx ./ sqrt(diagC))/sigma;
sigma = min(maxdx ./ sqrt(diagC));
warning(['Initial SIGMA multiplied by the factor ' num2str(fac) ...
', because it was larger than half' ...
' of one of the boundary intervals']);
end
idx = (lbounds > -Inf) & (ubounds < Inf);
dd = diagC;
if any(5*sigma*sqrt(dd(idx)) < ubounds(idx) - lbounds(idx))
warning(['Initial SIGMA is, in at least one coordinate, ' ...
'much smaller than the '...
'given boundary intervals. For reasonable ' ...
'global search performance SIGMA should be ' ...
'between 0.2 and 0.5 of the bounded interval in ' ...
'each coordinate. If all coordinates have ' ...
'lower and upper bounds SIGMA can be empty']);
end
bnd.dfithist = 1; % delta fit for setting weights
bnd.aridxpoints = []; % remember complete outside points
bnd.arfitness = []; % and their fitness
bnd.validfitval = 0;
bnd.iniphase = 1;
end
% ooo initial feval, for output only
if irun == 1
out.algorithm = 'Evolution Strategy with Covariance Matrix Adaptation (CMA-ES)';
out.solutions.bestever.x = scale_linear(xmean, lbounds, ubounds, original_lbounds, original_ubounds);
out.solutions.bestever.f = Inf; % for simpler comparison below
out.solutions.bestever.evals = counteval;
bestever = out.solutions.bestever;
end
if myevalbool(inopts.EvalInitialX)
% EDITS to make fitfun a handle rather a string
% fitness.hist(1)=feval(fitfun, xmean, varargin{:});
fitness.hist(1)=fitfun(xmean, varargin{:});
fitness.histsel(1)=fitness.hist(1);
counteval = counteval + 1;
if fitness.hist(1) < out.solutions.bestever.f
out.solutions.bestever.x = scale_linear(xmean, lbounds, ubounds, original_lbounds, original_ubounds);
out.solutions.bestever.f = fitness.hist(1);
out.solutions.bestever.evals = counteval;
bestever = out.solutions.bestever;
end
else
fitness.hist(1)=NaN;
fitness.histsel(1)=NaN;
end
% initialize random number generator
if ischar(inopts.Seed)
randn('state', eval(inopts.Seed)); % random number generator state
else
randn('state', inopts.Seed);
end
%qqq
% load(opts.SaveFilename, 'startseed');
% randn('state', startseed);
% disp(['SEED RELOADED FROM ' opts.SaveFilename]);
startseed = randn('state'); % for retrieving in saved variables
% Initialize further constants
chiN=N^0.5*(1-1/(4*N)+1/(21*N^2)); % expectation of
% ||N(0,I)|| == norm(randn(N,1))
countiter = 0;
% Initialize records and output
if irun == 1
time.t0 = clock;
% TODO: keep also median solution?
out.evals = counteval; % should be first entry
out.stopflag = {};
outiter = 0;
% Write headers to output data files
filenameprefix = inopts.LogFilenamePrefix;
if savemodulo && savetime
filenames = {};
filenames(end+1) = {'axlen'};
filenames(end+1) = {'fit'};
filenames(end+1) = {'stddev'};
filenames(end+1) = {'xmean'};
filenames(end+1) = {'xrecentbest'};
str = [' (startseed=' num2str(startseed(2)) ...
', ' num2str(clock, '%d/%02d/%d %d:%d:%2.2f') ')'];
for namecell = filenames(:)'
name = namecell{:};
[fid, err] = fopen(['./' filenameprefix name '.dat'], 'w');
if fid < 1 % err ~= 0
warning(['could not open ' filenameprefix name '.dat']);
filenames(find(strcmp(filenames,name))) = [];
else
% fprintf(fid, '%s\n', ...
% ['<CMAES-OUTPUT version="' cmaVersion '">']);
% fprintf(fid, [' <NAME>' name '</NAME>\n']);
% fprintf(fid, [' <DATE>' date() '</DATE>\n']);
% fprintf(fid, ' <PARAMETERS>\n');
% fprintf(fid, [' dimension=' num2str(N) '\n']);
% fprintf(fid, ' </PARAMETERS>\n');
% different cases for DATA columns annotations here
% fprintf(fid, ' <DATA');
if strcmp(name, 'axlen')
fprintf(fid, ['%% columns="iteration, evaluation, sigma, ' ...
'max axis length, min axis length, ' ...
'all principal axes lengths (sorted square roots ' ...
'of eigenvalues of C)"' str]);
elseif strcmp(name, 'fit')
fprintf(fid, ['%% columns="iteration, evaluation, sigma, axis ratio, bestever,' ...
' best, median, worst fitness function value,' ...
' further objective values of best"' str]);
elseif strcmp(name, 'stddev')
fprintf(fid, ['%% columns=["iteration, evaluation, sigma, void, void, ' ...
'stds==sigma*sqrt(diag(C))"' str]);
elseif strcmp(name, 'xmean')
fprintf(fid, ['%% columns="iteration, evaluation, void, ' ...
'void, void, xmean"' str]);
elseif strcmp(name, 'xrecentbest')
fprintf(fid, ['%% columns="iteration, evaluation, fitness, ' ...
'void, void, xrecentbest"' str]);
end
fprintf(fid, '\n'); % DATA
if strcmp(name, 'xmean')
fprintf(fid, '%ld %ld 0 0 0 ', 0, counteval);
% fprintf(fid, '%ld %ld 0 0 %e ', countiter, counteval, fmean);
%qqq fprintf(fid, msprintf('%e ', genophenotransform(out.genopheno, xmean)) + '\n');
fprintf(fid, '%e ', scale_linear(xmean, lbounds, ubounds, original_lbounds, original_ubounds));
fprintf(fid, '\n');
end
fclose(fid);
clear fid; % preventing
end
end % for files
end % savemodulo
end % irun == 1
end % else flgresume
% -------------------- Generation Loop --------------------------------
stopflag = {};
while isempty(stopflag)
% set internal parameters
if countiter == 0 || lambda ~= lambda_last
if countiter > 0 && floor(log10(lambda)) ~= floor(log10(lambda_last)) ...
&& flgdisplay
disp([' lambda = ' num2str(lambda)]);
lambda_hist(:,end+1) = [countiter+1; lambda];
else
lambda_hist = [countiter+1; lambda];
end
lambda_last = lambda;
% Strategy internal parameter setting: Selection
mu = myeval(inopts.ParentNumber); % number of parents/points for recombination
if strncmp(lower(inopts.RecombinationWeights), 'equal', 3)
weights = ones(mu,1); % (mu_I,lambda)-CMA-ES
elseif strncmp(lower(inopts.RecombinationWeights), 'linear', 3)
weights = mu+0.5-(1:mu)';
elseif strncmp(lower(inopts.RecombinationWeights), 'superlinear', 3)
% use (lambda+1)/2 as reference if mu < lambda/2
weights = log(max(mu, lambda/2) + 1/2)-log(1:mu)'; % muXone array for weighted recombination
else
error(['Recombination weights to be "' inopts.RecombinationWeights ...
'" is not implemented']);
end
mueff=sum(weights)^2/sum(weights.^2); % variance-effective size of mu
weights = weights/sum(weights); % normalize recombination weights array
if mueff == lambda
error(['Combination of values for PopSize, ParentNumber and ' ...
' and RecombinationWeights is not reasonable']);
end
% Strategy internal parameter setting: Adaptation
cc = myeval(inopts.CMA.ccum); % time constant for cumulation for covariance matrix
cs = myeval(inopts.CMA.cs);
% old way TODO: remove this at some point
% mucov = mueff; % size of mu used for calculating learning rate ccov
% ccov = (1/mucov) * 2/(N+1.41)^2 ... % learning rate for covariance matrix
% + (1-1/mucov) * min(1,(2*mucov-1)/((N+2)^2+mucov));
% new way
if myevalbool(inopts.CMA.on)
ccov1 = myeval(inopts.CMA.ccov1);
ccovmu = min(1-ccov1, myeval(inopts.CMA.ccovmu));
else
ccov1 = 0;
ccovmu = 0;
end
% flgDiagonalOnly = -lambda*4*1/ccov; % for ccov==1 it is not needed
% 0 : C will never be diagonal anymore
% 1 : C will always be diagonal
% >1: C is diagonal for first iterations, set to 0 afterwards
if flgDiagonalOnly < 1
flgDiagonalOnly = 0;
end
if flgDiagonalOnly
ccov1_sep = min(1, ccov1 * (N+1.5) / 3);
ccovmu_sep = min(1-ccov1_sep, ccovmu * (N+1.5) / 3);
elseif N > 98 && flgdisplay && countiter == 0
disp('consider option DiagonalOnly for high-dimensional problems');
end
% ||ps|| is close to sqrt(mueff/N) for mueff large on linear fitness
%damps = ... % damping for step size control, usually close to one
% (1 + 2*max(0,sqrt((mueff-1)/(N+1))-1)) ... % limit sigma increase
% * max(0.3, ... % reduce damps, if max. iteration number is small
% 1 - N/min(stopMaxIter,stopMaxFunEvals/lambda)) + cs;
damps = myeval(inopts.CMA.damps);
if noiseHandling
noiseReevals = min(myeval(inopts.Noise.reevals), lambda);
noiseAlpha = myeval(inopts.Noise.alphasigma);
noiseEpsilon = myeval(inopts.Noise.epsilon);
noiseTheta = myeval(inopts.Noise.theta);
noisecum = myeval(inopts.Noise.cum);
noiseCutOff = myeval(inopts.Noise.cutoff); % arguably of minor relevance
else
noiseReevals = 0; % more convenient in later coding
end
%qqq hacking of a different parameter setting, e.g. for ccov or damps,
% can be done here, but is not necessary anymore, see opts.CMA.
% ccov1 = 0.0*ccov1; disp(['CAVE: ccov1=' num2str(ccov1)]);
% ccovmu = 0.0*ccovmu; disp(['CAVE: ccovmu=' num2str(ccovmu)]);
% damps = inf*damps; disp(['CAVE: damps=' num2str(damps)]);
% cc = 1; disp(['CAVE: cc=' num2str(cc)]);
end
% Display initial message
if countiter == 0 && flgdisplay
if mu == 1
strw = '100';
elseif mu < 8
strw = [sprintf('%.0f', 100*weights(1)) ...
sprintf(' %.0f', 100*weights(2:end)')];
else
strw = [sprintf('%.2g ', 100*weights(1:2)') ...
sprintf('%.2g', 100*weights(3)') '...' ...
sprintf(' %.2g', 100*weights(end-1:end)') ']%, '];
end
if irun > 1
strrun = [', run ' num2str(irun)];
else
strrun = '';
end
disp([' n=' num2str(N) ': (' num2str(mu) ',' ...
num2str(lambda) ')-CMA-ES(w=[' ...
strw ']%, ' ...
'mu_eff=' num2str(mueff,'%.1f') ...
') on function ' ...
(func2str(fitfun)) strrun]);
% EDITS to make fitfun a handle rather than a string
%(fitfun) strrun]);
if flgDiagonalOnly == 1
disp(' C is diagonal');
elseif flgDiagonalOnly
disp([' C is diagonal for ' num2str(floor(flgDiagonalOnly)) ' iterations']);
end
end
flush;
countiter = countiter + 1;
% Generate and evaluate lambda offspring
fitness.raw = NaN(1, lambda + noiseReevals);
fitness_to_calc=find(isnan(fitness.raw));
arz = NaN(N,numel(fitness.raw));
arx = arz; arxvalid = arx;
tries = 0;
while numel(fitness_to_calc)>0
arz(:,fitness_to_calc) = randn(N, numel(fitness_to_calc));
for k=fitness_to_calc
% calculate samples
if k <= lambda % regular samples (not the re-evaluation-samples)
if flgDiagonalOnly
arx(:,k) = xmean + sigma * diagD .* arz(:,k); % Eq. (1)
else
arx(:,k) = xmean + sigma * (BD * arz(:,k)); % Eq. (1)
end
else % re-evaluation solution with index > lambda
if flgDiagonalOnly
arx(:,k) = arx(:,k-lambda) + (noiseEpsilon * sigma) * diagD .* arz(:,k);
else
arx(:,k) = arx(:,k-lambda) + (noiseEpsilon * sigma) * (BD * arz(:,k));
end
end
% Handle bounds
if ~bnd.isactive
arxvalid(:,k) = arx(:,k);
else
arxvalid(:,k) = xintobounds(arx(:,k), lbounds, ubounds);
end
end
% non-parallel evaluation
if ~flgEvalParallel
for k=fitness_to_calc
fitness.raw(k) = fitfun(arxvalid(:,k), varargin{:});
end
else % parallel implementation here
all_fitness = fitness.raw;
parfor k=fitness_to_calc
all_fitness(k) = fitfun(arxvalid(:,k), varargin{:});
end
fitness.raw = all_fitness;
end
counteval = counteval + numel(fitness_to_calc);
% these are the ones that still need to be calculated, because they
% gave NaN
fitness_to_calc=find(isnan(fitness.raw));
countevalNaN = countevalNaN + numel(fitness_to_calc);
tries = tries + 1;
if mod(tries,100)==0
warning([num2str(tries) ...
' NaN objective function values at evaluation ' ...
num2str(counteval)]);
end
end
fitness.sel = fitness.raw;
% ----- handle boundaries -----
if 1 < 3 && bnd.isactive
% Get delta fitness values
val = myprctile(fitness.raw, [25 75]);
% more precise would be exp(mean(log(diagC)))
val = (val(2) - val(1)) / N / mean(diagC) / sigma^2;
%val = (myprctile(fitness.raw, 75) - myprctile(fitness.raw, 25)) ...
% / N / mean(diagC) / sigma^2;
% Catch non-sensible values
if ~isfinite(val)
warning('Non-finite fitness range');
val = max(bnd.dfithist);
elseif val == 0 % happens if all points are out of bounds
val = min(bnd.dfithist(bnd.dfithist>0)); % seems not to make sense, given all solutions are out of bounds
elseif bnd.validfitval == 0 % flag that first sensible val was found
bnd.dfithist = [];
bnd.validfitval = 1;
end
% Store delta fitness values
if length(bnd.dfithist) < 20+(3*N)/lambda
bnd.dfithist = [bnd.dfithist val];
else
bnd.dfithist = [bnd.dfithist(2:end) val];
end
[tx ti] = xintobounds(xmean, lbounds, ubounds);
% Set initial weights
if bnd.iniphase
if any(ti)
bnd.weights(find(bnd.isbounded)) = 2.0002 * median(bnd.dfithist);
if bnd.flgscale == 0 % scale only initial weights then
dd = diagC;
idx = find(bnd.isbounded);
dd = dd(idx) / mean(dd); % remove mean scaling
bnd.weights(idx) = bnd.weights(idx) ./ dd;
end
if bnd.validfitval && countiter > 2
bnd.iniphase = 0;
end
end
end
% Increase weights
if 1 < 3 && any(ti) % any coordinate of xmean out of bounds
% judge distance of xmean to boundary
tx = xmean - tx;
idx = (ti ~= 0 & abs(tx) > 3*max(1,sqrt(N)/mueff) ...
* sigma*sqrt(diagC)) ;
% only increase if xmean is moving away
idx = idx & (sign(tx) == sign(xmean - xold));
if ~isempty(idx) % increase
% the factor became 1.2 instead of 1.1, because
% changed from max to min in version 3.52
bnd.weights(idx) = 1.2^(min(1, mueff/10/N)) * bnd.weights(idx);
end
end
% Calculate scaling biased to unity, product is one
if bnd.flgscale ~= 0
bnd.scale = exp(0.9*(log(diagC)-mean(log(diagC))));
end
% Assigned penalized fitness
bnd.arpenalty = (bnd.weights ./ bnd.scale)' * (arxvalid - arx).^2;
fitness.sel = fitness.raw + bnd.arpenalty;
end % handle boundaries
% ----- end handle boundaries -----
% compute noise measurement and reduce fitness arrays to size lambda
if noiseHandling
[noiseS] = local_noisemeasurement(fitness.sel(1:lambda), ...
fitness.sel(lambda+(1:noiseReevals)), ...
noiseReevals, noiseTheta, noiseCutOff);
if countiter == 1 % TODO: improve this very rude way of initialization
noiseSS = 0;
noiseN = 0; % counter for mean
end
noiseSS = noiseSS + noisecum * (noiseS - noiseSS);
% noise-handling could be done here, but the original sigma is still needed
% disp([noiseS noiseSS noisecum])
fitness.rawar12 = fitness.raw; % just documentary
fitness.selar12 = fitness.sel; % just documentary
% qqq refine fitness based on both values
if 11 < 3 % TODO: in case of outliers this mean is counterproductive
% median out of three would be ok
fitness.raw(1:noiseReevals) = ... % not so raw anymore
(fitness.raw(1:noiseReevals) + fitness.raw(lambda+(1:noiseReevals))) / 2;
fitness.sel(1:noiseReevals) = ...
(fitness.sel(1:noiseReevals) + fitness.sel(lambda+(1:noiseReevals))) / 2;
end
fitness.raw = fitness.raw(1:lambda);
fitness.sel = fitness.sel(1:lambda);
end
% Sort by fitness
[fitness.raw, fitness.idx] = sort(fitness.raw);
[fitness.sel, fitness.idxsel] = sort(fitness.sel); % minimization
fitness.hist(2:end) = fitness.hist(1:end-1); % record short history of
fitness.hist(1) = fitness.raw(1); % best fitness values
if length(fitness.histbest) < 120+ceil(30*N/lambda) || ...
(mod(countiter, 5) == 0 && length(fitness.histbest) < 2e4) % 20 percent of 1e5 gen.
fitness.histbest = [fitness.raw(1) fitness.histbest]; % best fitness values
fitness.histmedian = [median(fitness.raw) fitness.histmedian]; % median fitness values
else
fitness.histbest(2:end) = fitness.histbest(1:end-1);
fitness.histmedian(2:end) = fitness.histmedian(1:end-1);
fitness.histbest(1) = fitness.raw(1); % best fitness values
fitness.histmedian(1) = median(fitness.raw); % median fitness values
end
fitness.histsel(2:end) = fitness.histsel(1:end-1); % record short history of
fitness.histsel(1) = fitness.sel(1); % best sel fitness values
% Calculate new xmean, this is selection and recombination
xold = xmean; % for speed up of Eq. (2) and (3)
cmean = 1; % 1/min(max((lambda-1*N)/2, 1), N); % == 1/kappa
xmean = (1-cmean) * xold + cmean * arx(:,fitness.idxsel(1:mu))*weights;
zmean = arz(:,fitness.idxsel(1:mu))*weights;%==D^-1*B'*(xmean-xold)/sigma
if mu == 1
fmean = fitness.sel(1);
else
fmean = NaN; % [] does not work in the latter assignment
% fmean = feval(fitfun, xintobounds(xmean, lbounds, ubounds), varargin{:});
% counteval = counteval + 1;
end
% Cumulation: update evolution paths
ps = (1-cs)*ps + sqrt(cs*(2-cs)*mueff) * (B*zmean); % Eq. (4)
hsig = norm(ps)/sqrt(1-(1-cs)^(2*countiter))/chiN < 1.4 + 2/(N+1);
if flg_future_setting
hsig = sum(ps.^2) / (1-(1-cs)^(2*countiter)) / N < 2 + 4/(N+1); % just simplified
end
% hsig = norm(ps)/sqrt(1-(1-cs)^(2*countiter))/chiN < 1.4 + 2/(N+1);
% hsig = norm(ps)/sqrt(1-(1-cs)^(2*countiter))/chiN < 1.5 + 1/(N-0.5);
% hsig = norm(ps) < 1.5 * sqrt(N);
% hsig = 1;
pc = (1-cc)*pc ...
+ hsig*(sqrt(cc*(2-cc)*mueff)/sigma/cmean) * (xmean-xold); % Eq. (2)
if hsig == 0
% disp([num2str(countiter) ' ' num2str(counteval) ' pc update stalled']);
end
% Adapt covariance matrix
neg.ccov = 0; % TODO: move parameter setting upwards at some point
if ccov1 + ccovmu > 0 % Eq. (3)
if flgDiagonalOnly % internal linear(?) complexity
diagC = (1-ccov1_sep-ccovmu_sep+(1-hsig)*ccov1_sep*cc*(2-cc)) * diagC ... % regard old matrix
+ ccov1_sep * pc.^2 ... % plus rank one update
+ ccovmu_sep ... % plus rank mu update
* (diagC .* (arz(:,fitness.idxsel(1:mu)).^2 * weights));
% * (repmat(diagC,1,mu) .* arz(:,fitness.idxsel(1:mu)).^2 * weights);
diagD = sqrt(diagC); % replaces eig(C)
else
arpos = (arx(:,fitness.idxsel(1:mu))-repmat(xold,1,mu)) / sigma;
% "active" CMA update: negative update, in case controlling pos. definiteness
if flgActiveCMA > 0
% set parameters
neg.mu = mu;
neg.mueff = mueff;
if flgActiveCMA > 10 % flat weights with mu=lambda/2
neg.mu = floor(lambda/2);
neg.mueff = neg.mu;
end
% neg.mu = ceil(min([N, lambda/4, mueff])); neg.mueff = mu; % i.e. neg.mu <= N
% Parameter study: in 3-D lambda=50,100, 10-D lambda=200,400, 30-D lambda=1000,2000 a
% three times larger neg.ccov does not work.
% increasing all ccov rates three times does work (probably because of the factor (1-ccovmu))
% in 30-D to looks fine
neg.ccov = (1 - ccovmu) * 0.25 * neg.mueff / ((N+2)^1.5 + 2*neg.mueff);
neg.minresidualvariance = 0.66; % keep at least 0.66 in all directions, small popsize are most critical
neg.alphaold = 0.5; % where to make up for the variance loss, 0.5 means no idea what to do
% 1 is slightly more robust and gives a better "guarantee" for pos. def.,
% but does it make sense from the learning perspective for large ccovmu?
neg.ccovfinal = neg.ccov;
% prepare vectors, compute negative updating matrix Cneg and checking matrix Ccheck
arzneg = arz(:,fitness.idxsel(lambda:-1:lambda - neg.mu + 1));
% i-th longest becomes i-th shortest
% TODO: this is not in compliance with the paper Hansen & Ros 2010,
% where simply arnorms = arnorms(end:-1:1) is used?
[arnorms idxnorms] = sort(sqrt(sum(arzneg.^2, 1)));
[ignore idxnorms] = sort(idxnorms); % inverse index
arnormfacs = arnorms(end:-1:1) ./ arnorms;
% arnormfacs = arnorms(randperm(neg.mu)) ./ arnorms;
arnorms = arnorms(end:-1:1); % for the record
if flgActiveCMA < 20
arzneg = arzneg .* repmat(arnormfacs(idxnorms), N, 1); % E x*x' is N
% arzneg = sqrt(N) * arzneg ./ repmat(sqrt(sum(arzneg.^2, 1)), N, 1); % E x*x' is N
end
if flgActiveCMA < 10 && neg.mu == mu % weighted sum
if mod(flgActiveCMA, 10) == 1 % TODO: prevent this with a less tight but more efficient check (see below)
Ccheck = arzneg * diag(weights) * arzneg'; % in order to check the largest EV
end
artmp = BD * arzneg;
Cneg = artmp * diag(weights) * artmp';
else % simple sum
if mod(flgActiveCMA, 10) == 1
Ccheck = (1/neg.mu) * arzneg*arzneg'; % in order to check largest EV
end
artmp = BD * arzneg;
Cneg = (1/neg.mu) * artmp*artmp';
end
% check pos.def. and set learning rate neg.ccov accordingly,
% this check makes the original choice of neg.ccov extremely failsafe
% still assuming C == BD*BD', which is only approximately correct
if mod(flgActiveCMA, 10) == 1 && 1 - neg.ccov * arnorms(idxnorms).^2 * weights < neg.minresidualvariance
% TODO: the simple and cheap way would be to set
% fac = 1 - ccovmu - ccov1 OR 1 - mueff/lambda and
% neg.ccov = fac*(1 - neg.minresidualvariance) / (arnorms(idxnorms).^2 * weights)
% this is the more sophisticated way:
% maxeigenval = eigs(arzneg * arzneg', 1, 'lm', eigsopts); % not faster
maxeigenval = max(eig(Ccheck)); % norm() is much slower, because norm() == max(svd())
%disp([countiter log10([neg.ccov, maxeigenval, arnorms(idxnorms).^2 * weights, max(arnorms)^2]), ...
% neg.ccov * arnorms(idxnorms).^2 * weights])
% pause
% remove less than ??34*(1-cmu)%?? of variance in any direction
% 1-ccovmu is the variance left from the old C
neg.ccovfinal = min(neg.ccov, (1-ccovmu)*(1-neg.minresidualvariance)/maxeigenval);
% -ccov1 removed to avoid error message??
if neg.ccovfinal < neg.ccov
disp(['active CMA at iteration ' num2str(countiter) ...
': max EV ==', num2str([maxeigenval, neg.ccov, neg.ccovfinal])]);
end
end
% xmean = xold; % the distribution does not degenerate!?
% update C
C = (1-ccov1-ccovmu+neg.alphaold*neg.ccovfinal+(1-hsig)*ccov1*cc*(2-cc)) * C ... % regard old matrix
+ ccov1 * pc*pc' ... % plus rank one update
+ (ccovmu + (1-neg.alphaold)*neg.ccovfinal) ... % plus rank mu update
* arpos * (repmat(weights,1,N) .* arpos') ...
- neg.ccovfinal * Cneg; % minus rank mu update
else % no active (negative) update
C = (1-ccov1-ccovmu+(1-hsig)*ccov1*cc*(2-cc)) * C ... % regard old matrix
+ ccov1 * pc*pc' ... % plus rank one update
+ ccovmu ... % plus rank mu update
* arpos * (repmat(weights,1,N) .* arpos');
% is now O(mu*N^2 + mu*N), was O(mu*N^2 + mu^2*N) when using diag(weights)
% for mu=30*N it is now 10 times faster, overall 3 times faster
end
diagC = diag(C);
end
end
% the following is deprecated and will be removed in the future
% better setting for cc makes this hack obsolete
if 11 < 2 && ~flgscience
% remove momentum in ps, if ps is large and fitness is getting worse.
% this should rarely happen.
% this might very well be counterproductive in dynamic environments
if sum(ps.^2)/N > 1.5 + 10*(2/N)^.5 && ...
fitness.histsel(1) > max(fitness.histsel(2:3))
ps = ps * sqrt(N*(1+max(0,log(sum(ps.^2)/N))) / sum(ps.^2));
if flgdisplay
disp(['Momentum in ps removed at [niter neval]=' ...
num2str([countiter counteval]) ']']);
end
end
end
% Adapt sigma
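% Cumulative step-size adaptation (CSA): sigma is increased whenever
% the evolution path ps is longer than its expected length under
% random selection (chiN = E||N(0,I)||, resp. N for the squared
% length) and decreased otherwise; damps damps the change.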
if flg_future_setting % according to a suggestion from Dirk Arnold (2000)
% exp(1) is still not reasonably small enough, maybe 2/3?
sigma = sigma * exp(min(1, (sum(ps.^2)/N - 1)/2 * cs/damps)); % Eq. (5)
else
% exp(1) is still not reasonably small enough
sigma = sigma * exp(min(1, (sqrt(sum(ps.^2))/chiN - 1) * cs/damps)); % Eq. (5)
end
% disp([countiter norm(ps)/chiN]);
if 11 < 3 % testing with optimal step-size
if countiter == 1
disp('*********** sigma set to const * ||x|| ******************');
end
sigma = 0.04 * mueff * sqrt(sum(xmean.^2)) / N; % 20D,lam=1000:25e3
sigma = 0.3 * mueff * sqrt(sum(xmean.^2)) / N; % 20D,lam=(40,1000):17e3
% 75e3 with def (1.5)
% 35e3 with damps=0.25
end
if 11 < 3
if countiter == 1
disp('*********** xmean set to const ******************');
end
xmean = ones(N,1);
end
% Update B and D from C
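% The O(N^3) eigendecomposition is done lazily, only about every
% 1/((ccov1+ccovmu+neg.ccov)*N*10) iterations, so that its amortized
% per-iteration cost stays of the same order as the O(N^2) covariance
% matrix update.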
if ~flgDiagonalOnly && (ccov1+ccovmu+neg.ccov) > 0 && mod(countiter, 1/(ccov1+ccovmu+neg.ccov)/N/10) < 1
C=triu(C)+triu(C,1)'; % enforce symmetry to prevent complex numbers
[B,tmp] = eig(C); % eigen decomposition, B==normalized eigenvectors
% effort: approx. 15*N matrix-vector multiplications
diagD = diag(tmp);
if any(~isfinite(diagD))
clear idx; % prevents error under octave
save(['tmp' inopts.SaveFilename]);
error(['function eig returned non-finite eigenvalues, cond(C)=' ...
num2str(cond(C)) ]);
end
if any(any(~isfinite(B)))
clear idx; % prevents error under octave
save(['tmp' inopts.SaveFilename]);
error(['function eig returned non-finite eigenvectors, cond(C)=' ...
num2str(cond(C)) ]);
end
% limit condition of C to 1e14 + 1
if min(diagD) <= 0
if stopOnWarnings
stopflag(end+1) = {'warnconditioncov'};
else
warning(['Iteration ' num2str(countiter) ...
': Eigenvalue smaller than or equal to zero']);
diagD(diagD<0) = 0;
tmp = max(diagD)/1e14;
C = C + tmp*eye(N,N); diagD = diagD + tmp*ones(N,1);
end
end
if max(diagD) > 1e14*min(diagD)
if stopOnWarnings
stopflag(end+1) = {'warnconditioncov'};
else
warning(['Iteration ' num2str(countiter) ': condition of C ' ...
'at upper limit' ]);
tmp = max(diagD)/1e14 - min(diagD);
C = C + tmp*eye(N,N); diagD = diagD + tmp*ones(N,1);
end
end
diagC = diag(C);
diagD = sqrt(diagD); % D contains standard deviations now
% diagD = diagD / prod(diagD)^(1/N); C = C / prod(diagD)^(2/N);
BD = B.*repmat(diagD',N,1); % O(n^2)
end % if mod
% Align/rescale order of magnitude of scales of sigma and C for nicer output
% TODO: interference with sigmafacup: replace 1e10 with 2*sigmafacup
% not a very usual case
if 1 < 2 && sigma > 1e10*max(diagD) && sigma > 8e14 * max(insigma)
fac = sigma; % / max(diagD);
sigma = sigma/fac;
pc = fac * pc;
diagD = fac * diagD;
if ~flgDiagonalOnly
C = fac^2 * C; % disp(fac);
BD = B .* repmat(diagD',N,1); % O(n^2), but repmat might be inefficient todo?
end
diagC = fac^2 * diagC;
end
if flgDiagonalOnly > 1 && countiter > flgDiagonalOnly
% full covariance matrix from now on
flgDiagonalOnly = 0;
B = eye(N,N);
BD = diag(diagD);
C = diag(diagC); % is better, because correlations are spurious anyway
end
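% Uncertainty (noise) handling: noiseSS > 0 indicates that the measured
% rank changes between first and second evaluations exceeded the
% acceptance threshold (cf. local_noisemeasurement below); the
% treatment then increases the number of reevaluations in varargin{1}
% and/or sigma.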
if noiseHandling
if countiter == 1 % assign firstvarargin for noise treatment e.g. as #reevaluations
if ~isempty(varargin) && length(varargin{1}) == 1 && isnumeric(varargin{1})
if irun == 1
firstvarargin = varargin{1};
else
varargin{1} = firstvarargin; % reset varargin{1}
end
else
firstvarargin = 0;
end
end
if noiseSS < 0 && noiseMinMaxEvals(2) > noiseMinMaxEvals(1) && firstvarargin
varargin{1} = max(noiseMinMaxEvals(1), varargin{1} / noiseAlphaEvals^(1/4)); % still experimental
elseif noiseSS > 0
if ~isempty(noiseCallback) % to be removed?
res = feval(noiseCallback); % should also work without output argument!?
if ~isempty(res) && res > 1 % TODO: decide for interface of callback
% also a dynamic popsize could be done here
sigma = sigma * noiseAlpha;
end
else
if noiseMinMaxEvals(2) > noiseMinMaxEvals(1) && firstvarargin
varargin{1} = min(noiseMinMaxEvals(2), varargin{1} * noiseAlphaEvals);
end
sigma = sigma * noiseAlpha;
% lambda = ceil(0.1 * sqrt(lambda) + lambda);
% TODO: find smallest increase of lambda with log-linear
% convergence in iterations
end
% qqq experimental: take a mean to estimate the true optimum
noiseN = noiseN + 1;
if noiseN == 1
noiseX = xmean;
else
noiseX = noiseX + (3/noiseN) * (xmean - noiseX);
end
end
end
% ----- numerical error management -----
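% The following blocks guard against numerical degeneration: they cap
% the coordinate-wise standard deviations sigma*sqrt(diagC) from above
% and below, and enlarge sigma (or C) whenever adding 0.1-0.2 standard
% deviations to xmean has no numerical effect or the fitness is flat.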
% Adjust maximal coordinate axis deviations
if any(sigma*sqrt(diagC) > maxdx)
sigma = min(maxdx ./ sqrt(diagC));
%warning(['Iteration ' num2str(countiter) ': coordinate axis std ' ...
% 'deviation at upper limit of ' num2str(maxdx)]);
% stopflag(end+1) = {'maxcoorddev'};
end
% Adjust minimal coordinate axis deviations
if any(sigma*sqrt(diagC) < mindx)
sigma = max(mindx ./ sqrt(diagC)) * exp(0.05+cs/damps);
%warning(['Iteration ' num2str(countiter) ': coordinate axis std ' ...
% 'deviation at lower limit of ' num2str(mindx)]);
% stopflag(end+1) = {'mincoorddev'};
end
% Adjust too low coordinate axis deviations
if any(xmean == xmean + 0.2*sigma*sqrt(diagC))
if stopOnWarnings
stopflag(end+1) = {'warnnoeffectcoord'};
else
warning(['Iteration ' num2str(countiter) ': coordinate axis std ' ...
'deviation too low' ]);
if flgDiagonalOnly
diagC = diagC + (ccov1_sep+ccovmu_sep) * (diagC .* ...
(xmean == xmean + 0.2*sigma*sqrt(diagC)));
else
C = C + (ccov1+ccovmu) * diag(diagC .* ...
(xmean == xmean + 0.2*sigma*sqrt(diagC)));
end
sigma = sigma * exp(0.05+cs/damps);
end
end
% Adjust step size in case of (numerical) precision problem
if flgDiagonalOnly
tmp = 0.1*sigma*diagD;
else
tmp = 0.1*sigma*BD(:,1+floor(mod(countiter,N)));
end
if all(xmean == xmean + tmp)
i = 1+floor(mod(countiter,N));
if stopOnWarnings
stopflag(end+1) = {'warnnoeffectaxis'};
else
warning(['Iteration ' num2str(countiter) ...
': main axis standard deviation ' ...
num2str(sigma*diagD(i)) ' has no effect' ]);
sigma = sigma * exp(0.2+cs/damps);
end
end
% Adjust step size in case of equal function values (flat fitness)
% isequalfuncvalues = 0;
if fitness.sel(1) == fitness.sel(1+ceil(0.1+lambda/4))
% isequalfuncvalues = 1;
if stopOnEqualFunctionValues
arrEqualFunvals = [countiter arrEqualFunvals(1:end-1)];
% stop if this happens in more than 33%
if arrEqualFunvals(end) > countiter - 3 * length(arrEqualFunvals)
stopflag(end+1) = {'equalfunvals'};
end
else
if flgWarnOnEqualFunctionValues
warning(['Iteration ' num2str(countiter) ...
': equal function values f=' num2str(fitness.sel(1)) ...
' at maximal main axis sigma ' ...
num2str(sigma*max(diagD))]);
end
sigma = sigma * exp(0.2+cs/damps);
end
end
% Adjust step size in case of equal function values
if countiter > 2 && myrange([fitness.hist fitness.sel(1)]) == 0
if stopOnWarnings
stopflag(end+1) = {'warnequalfunvalhist'};
else
warning(['Iteration ' num2str(countiter) ...
': equal function values in history at maximal main ' ...
'axis sigma ' num2str(sigma*max(diagD))]);
sigma = sigma * exp(0.2+cs/damps);
end
end
% ----- end numerical error management -----
% Keep overall best solution
out.evals = counteval;
out.solutions.evals = counteval;
out.solutions.mean.x = scale_linear(xmean, lbounds, ubounds, original_lbounds, original_ubounds);
out.solutions.mean.f = fmean;
out.solutions.mean.evals = counteval;
out.solutions.recentbest.x = scale_linear(arxvalid(:, fitness.idx(1)), lbounds, ubounds, original_lbounds, original_ubounds);
out.solutions.recentbest.f = fitness.raw(1);
out.solutions.recentbest.evals = counteval + fitness.idx(1) - lambda;
out.solutions.recentworst.x = scale_linear(arxvalid(:, fitness.idx(end)), lbounds, ubounds, original_lbounds, original_ubounds);
out.solutions.recentworst.f = fitness.raw(end);
out.solutions.recentworst.evals = counteval + fitness.idx(end) - lambda;
if fitness.hist(1) < out.solutions.bestever.f
out.solutions.bestever.x = scale_linear(arxvalid(:, fitness.idx(1)), lbounds, ubounds, original_lbounds, original_ubounds);
out.solutions.bestever.f = fitness.hist(1);
out.solutions.bestever.evals = counteval + fitness.idx(1) - lambda;
bestever = out.solutions.bestever;
end
% Set stop flag
if fitness.raw(1) <= stopFitness, stopflag(end+1) = {'fitness'}; end
if counteval >= stopMaxFunEvals, stopflag(end+1) = {'maxfunevals'}; end
if countiter >= stopMaxIter, stopflag(end+1) = {'maxiter'}; end
if all(sigma*(max(abs(pc), sqrt(diagC))) < stopTolX)
stopflag(end+1) = {'tolx'};
end
if any(sigma*sqrt(diagC) > stopTolUpX)
stopflag(end+1) = {'tolupx'};
end
if sigma*max(diagD) == 0 % should never happen
stopflag(end+1) = {'bug'};
end
if countiter > 2 && myrange([fitness.sel fitness.hist]) <= stopTolFun
stopflag(end+1) = {'tolfun'};
end
if countiter >= length(fitness.hist) && myrange(fitness.hist) <= stopTolHistFun
stopflag(end+1) = {'tolhistfun'};
end
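% Stagnation test: index 1 of fitness.histbest/histmedian holds the most
% recent entry; stop when the median of the newest third of the history
% is not better than the median of the oldest third.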
l = floor(length(fitness.histbest)/3);
if 1 < 2 && stopOnStagnation && ... % sometimes leads to early stopping on ftablet, fcigtab
countiter > N * (5+100/lambda) && ...
length(fitness.histbest) > 100 && ...
median(fitness.histmedian(1:l)) >= median(fitness.histmedian(end-l:end)) && ...
median(fitness.histbest(1:l)) >= median(fitness.histbest(end-l:end))
stopflag(end+1) = {'stagnation'};
end
if counteval >= stopFunEvals || countiter >= stopIter
stopflag(end+1) = {'stoptoresume'};
if length(stopflag) == 1 && flgsaving == 0
error('To resume later, the saving option needs to be set');
end
end
% read stopping message from file signals.par
if flgreadsignals
fid = fopen('./signals.par', 'rt'); % can be performance critical
while fid > 0
strline = fgetl(fid); %fgets(fid, 300);
if strline < 0 % fgets and fgetl return -1 at end of file
break;
end
% 'stop filename' sets stopflag to manual
str = sscanf(strline, ' %s %s', 2);
if strcmp(str, ['stop' inopts.LogFilenamePrefix])
stopflag(end+1) = {'manual'};
break;
end
% 'skip filename run 3' skips a run, but not the last
str = sscanf(strline, ' %s %s %s', 3);
if strcmp(str, ['skip' inopts.LogFilenamePrefix 'run'])
i = strfind(strline, 'run');
if irun == sscanf(strline(i+3:end), ' %d ', 1) && irun <= myeval(inopts.Restarts)
stopflag(end+1) = {'skipped'};
end
end
end % while, break
if fid > 0
fclose(fid);
clear fid; % prevents strange error under octave
end
end
out.stopflag = stopflag;
% ----- output generation -----
if verbosemodulo > 0 && isfinite(verbosemodulo)
if countiter == 1 || mod(countiter, 10*verbosemodulo) < 1
disp(['Iterat, #Fevals: Function Value (median,worst) ' ...
'|Axis Ratio|' ...
'idx:Min SD idx:Max SD']);
end
if mod(countiter, verbosemodulo) < 1 ...
|| (verbosemodulo > 0 && isfinite(verbosemodulo) && ...
(countiter < 3 || ~isempty(stopflag)))
[minstd minstdidx] = min(sigma*sqrt(diagC));
[maxstd maxstdidx] = max(sigma*sqrt(diagC));
% format display nicely
disp([repmat(' ',1,4-floor(log10(countiter))) ...
num2str(countiter) ' , ' ...
repmat(' ',1,5-floor(log10(counteval))) ...
num2str(counteval) ' : ' ...
num2str(fitness.hist(1), '%.13e') ...
' +(' num2str(median(fitness.raw)-fitness.hist(1), '%.0e ') ...
',' num2str(max(fitness.raw)-fitness.hist(1), '%.0e ') ...
') | ' ...
num2str(max(diagD)/min(diagD), '%4.2e') ' | ' ...
repmat(' ',1,1-floor(log10(minstdidx))) num2str(minstdidx) ':' ...
num2str(minstd, ' %.1e') ' ' ...
repmat(' ',1,1-floor(log10(maxstdidx))) num2str(maxstdidx) ':' ...
num2str(maxstd, ' %.1e')]);
elseif countiter == 3
disp("Don't worry! Your calibration is still running.");
disp(['The next output will be displayed at the ',...
num2str(verbosemodulo), 'th iteration or at the end of the calibration.']);
disp("You can change this setting using the 'DispModulo' option passed to 'my_cmaes'")
end
end
% measure time for recording data
if countiter < 3
time.c = 0.05;
time.nonoutput = 0;
time.recording = 0;
time.saving = 0.15; % first saving after 3 seconds of 100 iterations
time.plotting = 0;
elseif countiter > 300
% set backward horizon, must be long enough to cover infrequent plotting etc
% time.c = min(1, time.nonoutput/3 + 1e-9);
time.c = max(1e-5, 0.1/sqrt(countiter)); % mean over all or 1e-5
end
% get average time per iteration
time.t1 = clock;
time.act = max(0,etime(time.t1, time.t0));
time.nonoutput = (1-time.c) * time.nonoutput ...
+ time.c * time.act;
time.recording = (1-time.c) * time.recording; % per iteration
time.saving = (1-time.c) * time.saving;
time.plotting = (1-time.c) * time.plotting;
% record output data, concerning time issues
if savemodulo && savetime && (countiter < 1e2 || ~isempty(stopflag) || ...
countiter >= outiter + savemodulo)
outiter = countiter;
% Save output data to files
for namecell = filenames(:)'
name = namecell{:};
[fid, err] = fopen(['./' filenameprefix name '.dat'], 'a');
if fid < 1 % err ~= 0
warning(['could not open ' filenameprefix name '.dat']);
else
if strcmp(name, 'axlen')
fprintf(fid, '%d %d %e %e %e ', countiter, counteval, sigma, ...
max(diagD), min(diagD));
fprintf(fid, '%e ', sort(diagD));
fprintf(fid, '\n');
elseif strcmp(name, 'disp') % TODO
elseif strcmp(name, 'fit')
fprintf(fid, '%ld %ld %e %e %25.18e %25.18e %25.18e %25.18e', ...
countiter, counteval, sigma, max(diagD)/min(diagD), ...
out.solutions.bestever.f, ...
fitness.raw(1), median(fitness.raw), fitness.raw(end));
if ~isempty(varargin) && length(varargin{1}) == 1 && isnumeric(varargin{1}) && varargin{1} ~= 0
fprintf(fid, ' %f', varargin{1});
end
fprintf(fid, '\n');
elseif strcmp(name, 'stddev')
fprintf(fid, '%ld %ld %e 0 0 ', countiter, counteval, sigma);
fprintf(fid, '%e ', sigma*sqrt(diagC));
fprintf(fid, '\n');
elseif strcmp(name, 'xmean')
if isnan(fmean)
fprintf(fid, '%ld %ld 0 0 0 ', countiter, counteval);
else
fprintf(fid, '%ld %ld 0 0 %e ', countiter, counteval, fmean);
end
fprintf(fid, '%e ', scale_linear(xmean, lbounds, ubounds, original_lbounds, original_ubounds));
fprintf(fid, '\n');
elseif strcmp(name, 'xrecentbest')
% TODO: fitness is inconsistent with x-value
fprintf(fid, '%ld %ld %25.18e 0 0 ', countiter, counteval, fitness.raw(1));
fprintf(fid, '%e ', scale_linear(arx(:,fitness.idx(1)), lbounds, ubounds, original_lbounds, original_ubounds));
fprintf(fid, '\n');
end
fclose(fid);
end
end
% get average time for recording data
time.t2 = clock;
time.recording = time.recording + time.c * max(0,etime(time.t2, time.t1));
% plot
if flgplotting && countiter > 1
if countiter == 2
iterplotted = 0;
end
if ~isempty(stopflag) || ...
((time.nonoutput+time.recording) * (countiter - iterplotted) > 1 && ...
time.plotting < 0.05 * (time.nonoutput+time.recording))
local_plotcmaesdat(324, filenameprefix);
iterplotted = countiter;
% outplot(out); % outplot defined below
if time.plotting == 0 % disregard opening of the window
time.plotting = time.nonoutput+time.recording;
else
time.plotting = time.plotting + time.c * max(0,etime(clock, time.t2));
end
end
end
if countiter > 100 + 20 && savemodulo && ...
time.recording * countiter > 0.1 && ... % absolute time larger 0.1 second
time.recording > savetime * (time.nonoutput+time.recording) / 100
savemodulo = floor(1.02 * savemodulo) + 1;
% disp(['++savemodulo == ' num2str(savemodulo) ' at ' num2str(countiter)]); %qqq
end
end % if output
% save everything
time.t3 = clock;
if ~isempty(stopflag) || time.saving < 0.05 * time.nonoutput || countiter == 100
xmin = scale_linear(arxvalid(:, fitness.idx(1)), lbounds, ubounds, original_lbounds, original_ubounds);
fmin = fitness.raw(1);
if flgsaving && countiter > 2
clear idx; % prevents error under octave
% -v6 : non-compressed non-unicode for version 6 and earlier
if ~isempty(strsaving) && ~isoctave
save('-mat', strsaving, inopts.SaveFilename); % for inspection and possible restart
else
save('-mat', inopts.SaveFilename); % for inspection and possible restart
end
time.saving = time.saving + time.c * max(0,etime(clock, time.t3));
end
end
time.t0 = clock;
% ----- end output generation -----
end % while, end generation loop
% -------------------- Final Procedures -------------------------------
if flgEvalParallel
delete(poolobj);
end
% Evaluate xmean and return best point as xmin
fmin = fitness.raw(1);
xmin = scale_linear(arxvalid(:, fitness.idx(1)), lbounds, ubounds, original_lbounds, original_ubounds); % Return best point of last generation.
if length(stopflag) > sum(strcmp(stopflag, 'stoptoresume')) % final stopping
out.solutions.mean.f = ...
fitfun(xintobounds(xmean, lbounds, ubounds), varargin{:});
% EDITS to make fitfun a handle rather than a string
%feval(fitfun, xintobounds(xmean, lbounds, ubounds), varargin{:});
counteval = counteval + 1;
out.solutions.mean.evals = counteval;
if out.solutions.mean.f < out.solutions.bestever.f
out.solutions.bestever = out.solutions.mean; % Return xmean as bestever point
out.solutions.bestever.x = scale_linear(xintobounds(xmean, lbounds, ubounds), lbounds, ubounds, original_lbounds, original_ubounds);
bestever = out.solutions.bestever;
end
fmin = bestever.f;
xmin = bestever.x;
end
% Save everything and display final message
if flgsavingfinal
clear idx; % prevents error under octave
if ~isempty(strsaving) && ~isoctave
save('-mat', strsaving, inopts.SaveFilename); % for inspection and possible restart
else
save('-mat', inopts.SaveFilename); % for inspection and possible restart
end
message = [' (saved to ' inopts.SaveFilename ')'];
else
message = [];
end
if flgdisplay
disp(['#Fevals: f(returned x) | bestever.f | stopflag' ...
message]);
strstop = stopflag(:); % strcat(stopflag(:), '.');
disp([repmat(' ',1,6-floor(log10(counteval))) ...
num2str(counteval, '%6.0f') ': ' num2str(fmin, '%.11e') ' | ' ...
num2str(out.solutions.bestever.f, '%.11e') ' | ' ...
strstop{1:end}]);
if N < 102
disp(['mean solution:' sprintf(' %+.1e', scale_linear(xmean, lbounds, ubounds, original_lbounds, original_ubounds))]);
disp(['std deviation:' sprintf(' %.1e', sigma*sqrt(diagC))]);
disp(sprintf('use plotcmaesdat.m for plotting the output at any time (option LogModulo must not be zero)'));
end
if exist('sfile', 'var')
disp(['Results saved in ' sfile]);
end
end
out.arstopflags{irun} = stopflag;
if any(strcmp(stopflag, 'fitness')) ...
|| any(strcmp(stopflag, 'maxfunevals')) ...
|| any(strcmp(stopflag, 'stoptoresume')) ...
|| any(strcmp(stopflag, 'manual'))
break;
end
end % while irun <= Restarts
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function xscaled = scale_linear(x, old_lbs, old_ubs, new_lbs, new_ubs)
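% Affine map of x from [old_lbs, old_ubs] onto [new_lbs, new_ubs].
% Example with hypothetical values: scale_linear(0.5, 0, 1, -10, 10)
% returns 0, the midpoint of the new interval.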
if all(old_lbs == new_lbs) && all(old_ubs == new_ubs)
xscaled = x;
else
xscaled = new_lbs + (new_ubs - new_lbs) .* (x - old_lbs) ./ (old_ubs - old_lbs);
end
function xscaled = scale_linear_range(x, old_range, new_range)
if all(old_range == new_range)
xscaled = x;
else
xscaled = new_range .* x ./ (old_range);
end
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function [x, idx] = xintobounds(x, lbounds, ubounds)
%
% x can be a column vector or a matrix consisting of column vectors
%
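% Example with hypothetical values: xintobounds([-2; 0.5; 3], -1, 1)
% returns [-1; 0.5; 1] with idx = [-1; 0; 1], marking components that
% were clipped at the lower (-1) or upper (+1) bound.
%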
if ~isempty(lbounds)
if length(lbounds) == 1
idx = x < lbounds;
x(idx) = lbounds;
else
arbounds = repmat(lbounds, 1, size(x,2));
idx = x < arbounds;
x(idx) = arbounds(idx);
end
else
idx = 0;
end
if ~isempty(ubounds)
if length(ubounds) == 1
idx2 = x > ubounds;
x(idx2) = ubounds;
else
arbounds = repmat(ubounds, 1, size(x,2));
idx2 = x > arbounds;
x(idx2) = arbounds(idx2);
end
else
idx2 = 0;
end
idx = idx2-idx;
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function opts=getoptions(inopts, defopts)
% OPTS = GETOPTIONS(INOPTS, DEFOPTS) handles an arbitrary number of
% optional arguments to a function. The given arguments are collected
% in the struct INOPTS. GETOPTIONS matches INOPTS with a default
% options struct DEFOPTS and returns the merged OPTS. Empty or missing
% fields in INOPTS invoke the default value. Fieldnames in INOPTS can
% be abbreviated.
%
% The returned struct OPTS is first assigned to DEFOPTS. Then any
% field value in OPTS is replaced by the respective field value of
% INOPTS if (1) the field unambiguously (case-insensitive) matches
% with the fieldname in INOPTS (cut down to the length of the INOPTS
% fieldname) and (2) the field is not empty.
%
% Example:
% In the source-code of the function that needs optional
% arguments, the last argument is the struct of optional
% arguments:
%
% function results = myfunction(mandatory_arg, inopts)
% % Define four default options
% defopts.PopulationSize = 200;
% defopts.ParentNumber = 50;
% defopts.MaxIterations = 1e6;
% defopts.MaxSigma = 1;
%
% % merge default options with input options
% opts = getoptions(inopts, defopts);
%
% % That's it! From now on the values in opts can be used
% for i = 1:opts.PopulationSize
% % do whatever
% if sigma > opts.MaxSigma
% % do whatever
% end
% end
%
% For calling the function myfunction with default options:
% myfunction(argument1, []);
% For calling the function myfunction with modified options:
% opt.pop = 100; % redefine PopulationSize option
% opt.PAR = 10; % redefine ParentNumber option
% opt.maxiter = 2; % opt.max=2 is ambiguous and would result in an error
% myfunction(argument1, opt);
%
% 04/07/19: Entries can be structs themselves, leading to a recursive
% call to getoptions.
%
if nargin < 2 || isempty(defopts) % no default options available
opts=inopts;
return;
elseif isempty(inopts) % empty inopts invokes default options
opts = defopts;
return;
elseif ~isstruct(defopts) % handle a single option value
if isempty(inopts)
opts = defopts;
elseif ~isstruct(inopts)
opts = inopts;
else
error('Input options are a struct, while default options are not');
end
return;
elseif ~isstruct(inopts) % no valid input options
error('The options need to be a struct or empty');
end
opts = defopts; % start from defopts
% if necessary overwrite opts fields by inopts values
defnames = fieldnames(defopts);
idxmatched = []; % indices of defopts that already matched
for name = fieldnames(inopts)'
name = name{1}; % name of i-th inopts-field
if isoctave
for i = 1:size(defnames, 1)
idx(i) = strncmpi(defnames(i), name, length(name));
end
else
idx = strncmpi(defnames, name, length(name));
end
if sum(idx) > 1
error(['option "' name '" is not an unambigous abbreviation. ' ...
'Use opts=RMFIELD(opts, ''' name, ...
''') to remove the field from the struct.']);
end
if sum(idx) == 1
defname = defnames{find(idx)};
if ismember(find(idx), idxmatched)
error(['input options match more than once with "' ...
defname '". ' ...
'Use opts=RMFIELD(opts, ''' name, ...
''') to remove the field from the struct.']);
end
idxmatched = [idxmatched find(idx)];
val = getfield(inopts, name);
% next line can replace previous line from MATLAB version 6.5.0 on and in octave
% val = inopts.(name);
if isstruct(val) % valid syntax only from version 6.5.0
opts = setfield(opts, defname, ...
getoptions(val, getfield(defopts, defname)));
elseif isstruct(getfield(defopts, defname))
% next three lines can replace previous three lines from MATLAB
% version 6.5.0 on
% opts.(defname) = ...
% getoptions(val, defopts.(defname));
% elseif isstruct(defopts.(defname))
warning(['option "' name '" disregarded (must be struct)']);
elseif ~isempty(val) % empty value: do nothing, i.e. stick to default
opts = setfield(opts, defnames{find(idx)}, val);
% next line can replace previous line from MATLAB version 6.5.0 on
% opts.(defname) = inopts.(name);
end
else
warning(['option "' name '" disregarded (unknown field name)']);
end
end
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function res=myeval(s)
if ischar(s)
res = evalin('caller', s);
else
res = s;
end
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function res=myevalbool(s)
if ~ischar(s) % s may not and cannot be empty
res = s;
else % evaluate string s
if strncmpi(s, 'yes', 3) || strncmpi(s, 'on', 2) ...
|| strncmpi(s, 'true', 4) || strncmp(s, '1 ', 2)
res = 1;
elseif strncmpi(s, 'no', 2) || strncmpi(s, 'off', 3) ...
|| strncmpi(s, 'false', 5) || strncmp(s, '0 ', 2)
res = 0;
else
try res = evalin('caller', s); catch
error(['String value "' s '" cannot be evaluated']);
end
try res ~= 0; catch
error(['String value "' s '" cannot be evaluated reasonably']);
end
end
end
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function res = isoctave
% any hack to find out whether we are running octave
s = version;
res = 0;
if exist('fflush', 'builtin') && eval(s(1)) < 7
res = 1;
end
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function flush
if isoctave
feval('fflush', stdout);
end
% ---------------------------------------------------------------
% ---------------------------------------------------------------
% ----- replacements for statistic toolbox functions ------------
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function res=myrange(x)
res = max(x) - min(x);
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function res = myprctile(inar, perc, idx)
%
% Computes the percentiles in vector perc from vector inar
% returns vector with length(res)==length(perc)
% idx: optional index-array indicating sorted order
%
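% Example with hypothetical values: myprctile([1 2 3 4], 50) returns
% 2.5, interpolating linearly between the two middle order statistics.
%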
N = length(inar);
flgtranspose = 0;
% sizes
if size(perc,1) > 1
perc = perc';
flgtranspose = 1;
if size(perc,1) > 1
error('perc must not be a matrix');
end
end
if size(inar, 1) > 1 && size(inar,2) > 1
error('data inar must not be a matrix');
end
% sort inar
if nargin < 3 || isempty(idx)
[sar idx] = sort(inar);
else
sar = inar(idx);
end
res = [];
for p = perc
if p <= 100*(0.5/N)
res(end+1) = sar(1);
elseif p >= 100*((N-0.5)/N)
res(end+1) = sar(N);
else
% find largest index smaller than required percentile
availablepercentiles = 100*((1:N)-0.5)/N;
i = max(find(p > availablepercentiles));
% interpolate linearly
res(end+1) = sar(i) ...
+ (sar(i+1)-sar(i))*(p - availablepercentiles(i)) ...
/ (availablepercentiles(i+1) - availablepercentiles(i));
end
end
if flgtranspose
res = res';
end
% ---------------------------------------------------------------
% ---------------------------------------------------------------
% ---------------------------------------------------------------
% ---------------------------------------------------------------
function [s ranks rankDelta] = local_noisemeasurement(arf1, arf2, lamreev, theta, cutlimit)
% function [s ranks rankDelta] = noisemeasurement(arf1, arf2, lamreev, theta)
%
% Input:
% arf1, arf2 : two arrays of function values. arf1 is of size 1xlambda,
% arf2 may be of size 1xlamreev or 1xlambda. The first lamreev values
% in arf2 are (re-)evaluations of the respective solutions, i.e.
% arf1(1) and arf2(1) are two evaluations of "the first" solution.
% lamreev: number of reevaluated individuals in arf2
% theta : parameter theta for the rank change limit, between 0 and 1,
% typically between 0.2 and 0.7.
% cutlimit (optional): output s is computed as a mean of rankchange minus
% threshold, where rankchange is <=2*(lambda-1). cutlimit limits
% abs(rankchange minus threshold) in this calculation to cutlimit.
% cutlimit=1 evaluates basically the sign only. cutlimit=2 could be
% the rank change with one solution (both evaluations of it).
%
% Output:
% s : noise measurement, s>0 means the noise measure is above the
% acceptance threshold
% ranks : 2xlambda array, corresponding to [arf1; arf2], of ranks
% of arf1 and arf2 in the set [arf1 arf2], values are in [1:2*lambda]
% rankDelta: 1xlambda array of rank movements of arf2 compared to
% arf1. rankDelta(i) agrees with the number of values from
% the set [arf1 arf2] that lie between arf1(i) and arf2(i).
%
% Note: equal function values might lead to somewhat spurious results.
% For this case a revision is advisable.
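% Small hypothetical example: arf1 = [1 4 2], arf2 = [3 5 6] yields
% rankDelta = [-1 0 -3]; e.g. |rankDelta(3)| == 3 because the three
% values 3, 4 and 5 lie between arf1(3)=2 and arf2(3)=6.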
%%% verify input argument sizes
if size(arf1,1) ~= 1
error('arf1 must be a 1xlambda array');
elseif size(arf2,1) ~= 1
error('arf2 must be a 1xsomething array');
elseif size(arf1,2) < size(arf2,2) % not really necessary, but safer
error('arf2 must not be smaller than arf1 in length');
end
lam = size(arf1, 2);
if size(arf1,2) ~= size(arf2,2)
arf2(end+1:lam) = arf1((size(arf2,2)+1):lam);
end
if nargin < 5
cutlimit = inf;
end
%%% capture unusual values
if any(diff(arf1) == 0)
% this will presumably be interpreted as a rank change, because
% sort(ones(...)) returns 1,2,3,...
warning([num2str(sum(diff(arf1)==0)) ' equal function values']);
end
%%% compute rank changes into rankDelta
% compute ranks
[ignore, idx] = sort([arf1 arf2]);
[ignore, ranks] = sort(idx);
ranks = reshape(ranks, lam, 2)';
rankDelta = ranks(1,:) - ranks(2,:) - sign(ranks(1,:) - ranks(2,:));
%%% compute rank change limits using both ranks(1,...) and ranks(2,...)
for i = 1:lamreev
sumlim(i) = ...
0.5 * (...
myprctile(abs((1:2*lam-1) - (ranks(1,i) - (ranks(1,i)>ranks(2,i)))), ...
theta*50) ...
+ myprctile(abs((1:2*lam-1) - (ranks(2,i) - (ranks(2,i)>ranks(1,i)))), ...
theta*50));
end
%%% compute measurement
%s = abs(rankDelta(1:lamreev)) - sumlim; % lives roughly in 0..2*lambda
% max: 1 rankchange in 2*lambda is always fine
s = abs(rankDelta(1:lamreev)) - max(1, sumlim); % lives roughly in 0..2*lambda
% cut-off limit
idx = abs(s) > cutlimit;
s(idx) = sign(s(idx)) * cutlimit;
s = mean(s);
% ---------------------------------------------------------------
% ---------------------------------------------------------------
% ---------------------------------------------------------------
% ---------------------------------------------------------------
% just a "local" copy of plotcmaesdat.m, with manual_mode set to zero
function local_plotcmaesdat(figNb, filenameprefix, filenameextension, objectvarname)
% PLOTCMAESDAT;
% PLOTCMAES(FIGURENUMBER_iBEGIN_iEND, FILENAMEPREFIX, FILENAMEEXTENSION, OBJECTVARNAME);
% plots output from CMA-ES, e.g. cmaes.m, Java class CMAEvolutionStrategy...
% mod(figNb,100)==1 plots versus iterations.
%
% PLOTCMAES([101 300]) plots versus iteration, from iteration 300.
% PLOTCMAES([100 150 800]) plots versus function evaluations, between iteration 150 and 800.
%
% Upper left subplot: blue/red: function value of the best solution in the
% recent population, cyan: same function value minus best
% ever seen function value, green: sigma, red: ratio between
% longest and shortest principal axis length which is equivalent
% to sqrt(cond(C)).
% Upper right plot: time evolution of the distribution mean (default) or
% the recent best solution vector.
% Lower left: principal axes lengths of the distribution ellipsoid,
% equivalent with the sqrt(eig(C)) square root eigenvalues of C.
% Lower right: magenta: minimal and maximal "true" standard deviation
% (with sigma included) in the coordinates, other colors: sqrt(diag(C))
% of all diagonal elements of C, if C is diagonal they equal to the
% lower left.
%
% Files [FILENAMEPREFIX name FILENAMEEXTENSION] are used, where
% name = axlen, OBJECTVARNAME (xmean|xrecentbest), fit, or stddev.
%
manual_mode = 0;
if nargin < 1 || isempty(figNb)
figNb = 325;
end
if nargin < 2 || isempty(filenameprefix)
filenameprefix = 'outcmaes';
end
if nargin < 3 || isempty(filenameextension)
filenameextension = '.dat';
end
if nargin < 4 || isempty(objectvarname)
% objectvarname = 'xmean';
objectvarname = 'xrecentbest';
end
% load data
d.x = load([filenameprefix objectvarname filenameextension]);
% d.x = load([filenameprefix 'xmean' filenameextension]);
% d.x = load([filenameprefix 'xrecentbest' filenameextension]);
d.f = load([filenameprefix 'fit' filenameextension]);
d.std = load([filenameprefix 'stddev' filenameextension]);
d.D = load([filenameprefix 'axlen' filenameextension]);
% interpret entries in figNb for cutting out some data
if length(figNb) > 1
iend = inf;
istart = figNb(2);
if length(figNb) > 2
iend = figNb(3);
end
figNb = figNb(1);
d.x = d.x(d.x(:,1) >= istart & d.x(:,1) <= iend, :);
d.f = d.f(d.f(:,1) >= istart & d.f(:,1) <= iend, :);
d.std = d.std(d.std(:,1) >= istart & d.std(:,1) <= iend, :);
d.D = d.D(d.D(:,1) >= istart & d.D(:,1) <= iend, :);
end
% decide for x-axis
iabscissa = 2; % 1== versus iterations, 2==versus fevals
if mod(figNb,100) == 1
iabscissa = 1; % a short hack
end
if iabscissa == 1
xlab ='iterations';
elseif iabscissa == 2
xlab = 'function evaluations';
end
if size(d.x, 2) < 1000
minxend = 1.03*d.x(end, iabscissa);
else
minxend = 0;
end
% set up figure window
if manual_mode
figure(figNb); % just create and raise the figure window
else
if 1 < 3 && evalin('caller', 'iterplotted') == 0 && evalin('caller', 'irun') == 1
figure(figNb); % comment this out if raising the figure at the beginning is not desired
elseif ismember(figNb, findobj('Type', 'figure'))
set(0, 'CurrentFigure', figNb); % prevents raise of existing figure window
else
figure(figNb);
end
end
% plot fitness etc
foffset = 1e-99;
dfit = d.f(:,6)-min(d.f(:,6));
[ignore idxbest] = min(dfit);
dfit(dfit<1e-98) = NaN;
subplot(2,2,1); hold off;
dd = abs(d.f(:,7:8)) + foffset;
dd(d.f(:,7:8)==0) = NaN;
semilogy(d.f(:,iabscissa), dd, '-k'); hold on;
% additional fitness data, for example constraints values
if size(d.f,2) > 8
dd = abs(d.f(:,9:end)) + 10*foffset; % a hack
% dd(d.f(:,9:end)==0) = NaN;
semilogy(d.f(:,iabscissa), dd, '-m'); hold on;
if size(d.f,2) > 12
semilogy(d.f(:,iabscissa),abs(d.f(:,[7 8 11 13]))+foffset,'-k'); hold on;
end
end
idx = find(d.f(:,6)>1e-98); % positive values
if ~isempty(idx) % otherwise non-log plot gets hold
semilogy(d.f(idx,iabscissa), d.f(idx,6)+foffset, '.b'); hold on;
end
idx = find(d.f(:,6) < -1e-98); % negative values
if ~isempty(idx)
semilogy(d.f(idx, iabscissa), abs(d.f(idx,6))+foffset,'.r'); hold on;
end
semilogy(d.f(:,iabscissa),abs(d.f(:,6))+foffset,'-b'); hold on;
semilogy(d.f(:,iabscissa),dfit,'-c'); hold on;
semilogy(d.f(:,iabscissa),(d.f(:,4)),'-r'); hold on; % AR
semilogy(d.std(:,iabscissa), [max(d.std(:,6:end)')' ...
min(d.std(:,6:end)')'], '-m'); % max,min std
maxval = max(d.std(end,6:end));
minval = min(d.std(end,6:end));
text(d.std(end,iabscissa), maxval, sprintf('%.0e', maxval));
text(d.std(end,iabscissa), minval, sprintf('%.0e', minval));
semilogy(d.std(:,iabscissa),(d.std(:,3)),'-g'); % sigma
% plot best f
semilogy(d.f(idxbest,iabscissa),min(dfit),'*c'); hold on;
semilogy(d.f(idxbest,iabscissa),abs(d.f(idxbest,6))+foffset,'*r'); hold on;
ax = axis;
ax(2) = max(minxend, ax(2));
axis(ax);
yannote = 10^(log10(ax(3)) + 0.05*(log10(ax(4))-log10(ax(3))));
text(ax(1), yannote, ...
[ 'f=' num2str(d.f(end,6), '%.15g') ]);
title('blue:abs(f), cyan:f-min(f), green:sigma, red:axis ratio');
grid on;
subplot(2,2,2); hold off;
plot(d.x(:,iabscissa), d.x(:,6:end),'-'); hold on;
ax = axis;
ax(2) = max(minxend, ax(2));
axis(ax);
% add some annotation lines
[ignore idx] = sort(d.x(end,6:end));
% choose no more than 25 indices
idxs = round(linspace(1, size(d.x,2)-5, min(size(d.x,2)-5, 25)));
yy = repmat(NaN, 2, size(d.x,2)-5);
yy(1,:) = d.x(end, 6:end);
yy(2,idx(idxs)) = linspace(ax(3), ax(4), length(idxs));
plot([d.x(end,iabscissa) ax(2)], yy, '-');
plot(repmat(d.x(end,iabscissa),2), [ax(3) ax(4)], 'k-');
for i = idx(idxs)
text(ax(2), yy(2,i), ...
['x(' num2str(i) ')=' num2str(yy(1,i))]);
end
lam = 'NA';
if size(d.x, 1) > 1 && d.x(end, 1) > d.x(end-1, 1)
lam = num2str((d.x(end, 2) - d.x(end-1, 2)) / (d.x(end, 1) - d.x(end-1, 1)));
end
title(['Object Variables (' num2str(size(d.x, 2)-5) ...
'-D, popsize~' lam ')']);grid on;
subplot(2,2,3); hold off; semilogy(d.D(:,iabscissa), d.D(:,6:end), '-');
ax = axis;
ax(2) = max(minxend, ax(2));
axis(ax);
title('Principal Axes Lengths');grid on;
xlabel(xlab);
subplot(2,2,4); hold off;
% semilogy(d.std(:,iabscissa), d.std(:,6:end), 'k-'); hold on;
% remove sigma from stds
d.std(:,6:end) = d.std(:,6:end) ./ (d.std(:,3) * ones(1,size(d.std,2)-5));
semilogy(d.std(:,iabscissa), d.std(:,6:end), '-'); hold on;
if 11 < 3 % max and min std
semilogy(d.std(:,iabscissa), [d.std(:,3).*max(d.std(:,6:end)')' ...
d.std(:,3).*min(d.std(:,6:end)')'], '-m', 'linewidth', 2);
maxval = max(d.std(end,6:end));
minval = min(d.std(end,6:end));
text(d.std(end,iabscissa), d.std(end,3)*maxval, sprintf('max=%.0e', maxval));
text(d.std(end,iabscissa), d.std(end,3)*minval, sprintf('min=%.0e', minval));
end
ax = axis;
ax(2) = max(minxend, ax(2));
axis(ax);
% add some annotation lines
[ignore idx] = sort(d.std(end,6:end));
% choose no more than 25 indices
idxs = round(linspace(1, size(d.x,2)-5, min(size(d.x,2)-5, 25)));
yy = repmat(NaN, 2, size(d.std,2)-5);
yy(1,:) = d.std(end, 6:end);
yy(2,idx(idxs)) = logspace(log10(ax(3)), log10(ax(4)), length(idxs));
semilogy([d.std(end,iabscissa) ax(2)], yy, '-');
semilogy(repmat(d.std(end,iabscissa),2), [ax(3) ax(4)], 'k-');
for i = idx(idxs)
text(ax(2), yy(2,i), [' ' num2str(i)]);
end
title('Standard Deviations in Coordinates divided by sigma');grid on;
xlabel(xlab);
if figNb ~= 324
% zoom on; % does not work in Octave
end
drawnow;
% ---------------------------------------------------------------
% --------------- TEST OBJECTIVE FUNCTIONS ----------------------
% ---------------------------------------------------------------
%%% Unimodal functions
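% Most of these test functions accept a column vector or a matrix of
% column vectors (one candidate solution per column) and return one
% function value per column; a few (e.g. frandsphere) handle a single
% vector only.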
function f=fjens1(x)
%
% use population size about 2*N
%
f = sum((x>0) .* x.^1, 1);
if any(any(x<0))
idx = sum(x < 0, 1) > 0;
f(idx) = 1e3;
% f = f + 1e3 * sum(x<0, 1);
% f = f + 10 * sum((x<0) .* x.^2, 1);
f(idx) = f(idx) + 1e-3*abs(randn(1,sum(idx)));
% f(idx) = NaN;
end
function f=fsphere(x)
f = sum(x.^2,1);
function f=fmax(x)
f = max(abs(x), [], 1);
function f=fssphere(x)
f=sqrt(sum(x.^2, 1));
% lb = -0.512; ub = 512;
% xfeas = x;
% xfeas(x<lb) = lb;
% xfeas(x>ub) = ub;
% f=sum(xfeas.^2, 1);
% f = f + 1e-9 * sum((xfeas-x).^2);
function f=fspherenoise(x, Nevals)
if nargin < 2 || isempty(Nevals)
Nevals = 1;
end
[N,popsi] = size(x);
% x = x .* (1 + 0.3e-0 * randn(N, popsi)/(2*N)); % actuator noise
fsum = 10.^(0*(0:N-1)/(N-1)) * x.^2;
% f = 0*rand(1,1) ...
% + fsum ...
% + fsum .* (2*randn(1,popsi) ./ randn(1,popsi).^0 / (2*N)) ...
% + 1*fsum.^0.9 .* 2*randn(1,popsi) / (2*N); %
% f = fsum .* exp(0.1*randn(1,popsi));
f = fsum .* (1 + (10/(N+10)/sqrt(Nevals))*randn(1,popsi));
% f = fsum .* (1 + (0.1/N)*randn(1,popsi)./randn(1,popsi).^1);
idx = rand(1,popsi) < 0.0;
if sum(idx) > 0
f(idx) = f(idx) + 1e3*exp(randn(1,sum(idx)));
end
function f=fmixranks(x)
N = size(x,1);
f=(10.^(0*(0:(N-1))/(N-1))*x.^2).^0.5;
if size(x, 2) > 1 % compute ranks, if it is a population
[ignore, idx] = sort(f);
[ignore, ranks] = sort(idx);
k = 9; % number of solutions randomly permuted, lambda/2-1
% works still quite well (two time slower)
for i = k+1:k-0:size(x,2)
idx(i-k+(1:k)) = idx(i-k+randperm(k));
end
%disp([ranks' f'])
[ignore, ranks] = sort(idx);
%disp([ranks' f'])
%pause
f = ranks+1e-9*randn(1,1);
end
function f = fsphereoneax(x)
% f = x(1)^2;
f = mean(x)^2;
function f=frandsphere(x)
N = size(x,1);
idx = ceil(N*rand(7,1));
f=sum(x(idx).^2);
function f=fspherelb0(x, M) % lbound at zero for 1:M needed
if nargin < 2 M = 0; end
N = size(x,1);
% M active bounds, f_i = 1 for x = 0
f = -M + sum((x(1:M) + 1).^2);
f = f + sum(x(M+1:N).^2);
function f=fspherehull(x)
% Patton, Dexter, Goodman, Punch
% in -500..500
% spherical ridge through zeros(N,1)
% worst case start point seems x = 2*100*sqrt(N)
% and small step size
N = size(x,1);
f = norm(x) + (norm(x-100*sqrt(N)) - 100*N)^2;
function f=fellilb0(x, idxM, scal) % lbound at zero for 1:M needed
N = size(x,1);
if nargin < 3 || isempty(scal)
scal = 100;
end
scale=scal.^((0:N-1)/(N-1));
if nargin < 2 || isempty(idxM)
idxM = 1:N;
end
%scale(N) = 1e0;
% M active bounds
xopt = 0.1;
x(idxM) = x(idxM) + xopt;
f = scale.^2*x.^2;
f = f - sum((xopt*scale(idxM)).^2);
% f = exp(f) - 1;
% f = log10(f+1e-19) + 19;
f = f + 1e-19;
function f=fcornersphere(x)
w = ones(size(x,1));
w(1) = 2.5; w(2)=2.5;
idx = x < 0;
f = sum(x(idx).^2);
idx = x > 0;
f = f + 2^2*sum(w(idx).*x(idx).^2);
function f=fsectorsphere(x, scal)
%
% This is deceptive for cumulative sigma control CSA in large dimension:
% The strategy (initially) diverges for N=50 and popsize = 150. (Even
% for cs==1 this can be observed for larger settings of N and
% popsize.) The reason is obvious from the function topology.
% Divergence can be avoided by setting boundaries or adding a
% penalty for large ||x||. Then, convergence can be observed again.
% Conclusion: for popsize>N cumulative sigma control is not completely
% reasonable, but I do not know better alternatives. In particular:
% TPA takes longer to converge than CSA when the latter still works.
%
if nargin < 2 || isempty (scal)
scal = 1e3;
end
f=sum(x.^2,1);
idx = x<0;
f = f + (scal^2 - 1) * sum((idx.*x).^2,1);
if 11 < 3
idxpen = find(f>1e9);
if ~isempty(idxpen)
f(idxpen) = f(idxpen) + 1e8*sum(x(:,idxpen).^2,1);
end
end
function f=fstepsphere(x, scal)
if nargin < 2 || isempty (scal)
scal = 1e0;
end
N = size(x,1);
% f=1e-11+sum(scal.^((0:N-1)/(N-1))*floor(x+0.5).^2); % superseded by the next line
f=1e-11+sum(floor(scal.^((0:N-1)/(N-1))'.*x+0.5).^2);
% f=1e-11+sum(floor(x+0.5).^2);
function f=fstep(x)
% in -5.12..5.12 (bounded)
N = size(x,1);
f=1e-11+6*N+sum(floor(x));
function f=flnorm(x, scal, e)
if nargin < 2 || isempty(scal)
scal = 1;
end
if nargin < 3 || isempty(e)
e = 1;
end
if e==inf
f = max(abs(x));
else
N = size(x,1);
scale = scal.^((0:N-1)/(N-1))';
f=sum(abs(scale.*x).^e);
end
function f=fneumaier3(x)
% in -n^2..n^2
% x^*-i = i(n+1-i)
N = size(x,1);
% f = N*(N+4)*(N-1)/6 + sum((x-1).^2) - sum(x(1:N-1).*x(2:N));
f = sum((x-1).^2) - sum(x(1:N-1).*x(2:N));
function f = fmaxmindist(y)
% y in [-1,1], y(1:2) is first point on a plane, y(3:4) second etc
% points best
% 5 1.4142
% 8 1.03527618
% 10 0.842535997
% 20 0.5997
pop = size(y,2);
N = size(y,1)/2;
f = [];
for ipop = 1:pop
if any(abs(y(:,ipop)) > 1)
f(ipop) = NaN;
else
x = reshape(y(:,ipop), [2, N]);
f(ipop) = inf;
for i = 1:N
f(ipop) = min(f(ipop), min(sqrt(sum((x(:,[1:i-1 i+1:N]) - repmat(x(:,i), 1, N-1)).^2, 1))));
end
end
end
f = -f;
function f=fchangingsphere(x)
N = size(x,1);
global scale_G; global count_G; if isempty(count_G) count_G=-1; end
count_G = count_G+1;
if mod(count_G,10) == 0
scale_G = 10.^(2*rand(1,N));
end
%disp(scale(1));
f = scale_G*x.^2;
function f= flogsphere(x)
f = 1-exp(-sum(x.^2));
function f= fexpsphere(x)
f = exp(sum(x.^2)) - 1;
function f=fbaluja(x)
% in [-0.16 0.16]
y = x(1);
for i = 2:length(x)
y(i) = x(i) + y(i-1);
end
f = 1e5 - 1/(1e-5 + sum(abs(y)));
function f=fschwefel(x)
f = 0;
for i = 1:size(x,1),
f = f+sum(x(1:i))^2;
end
function f=fcigar(x, ar)
if nargin < 2 || isempty(ar)
ar = 1e3;
end
f = x(1,:).^2 + ar^2*sum(x(2:end,:).^2,1);
function f=fcigtab(x)
f = x(1,:).^2 + 1e8*x(end,:).^2 + 1e4*sum(x(2:(end-1),:).^2, 1);
function f=ftablet(x)
f = 1e6*x(1,:).^2 + sum(x(2:end,:).^2, 1);
function f=felli(x, lgscal, expon, expon2)
% lgscal: log10(axis ratio)
% expon: x_i^expon, sphere==2
N = size(x,1); if N < 2 error('dimension must be greater than one'); end
% x = x - repmat(-0.5+(1:N)',1,size(x,2)); % optimum in 1:N
if nargin < 2 || isempty(lgscal), lgscal = 3; end
if nargin < 3 || isempty(expon), expon = 2; end
if nargin < 4 || isempty(expon2), expon2 = 1; end
f=((10^(lgscal*expon)).^((0:N-1)/(N-1)) * abs(x).^expon).^(1/expon2);
% if rand(1,1) > 0.015
% f = NaN;
% end
% f = f + randn(size(f));
function f=fellitest(x)
beta = 0.9;
N = size(x,1);
f = (1e6.^((0:(N-1))/(N-1))).^beta * (x.^2).^beta;
function f=fellii(x, scal)
N = size(x,1); if N < 2 error('dimension must be greater than one'); end
if nargin < 2
scal = 1;
end
f= (scal*(1:N)).^2 * (x).^2;
function f=fellirot(x)
N = size(x,1);
global ORTHOGONALCOORSYSTEM_G
if isempty(ORTHOGONALCOORSYSTEM_G) ...
|| length(ORTHOGONALCOORSYSTEM_G) < N ...
|| isempty(ORTHOGONALCOORSYSTEM_G{N})
coordinatesystem(N);
end
f = felli(ORTHOGONALCOORSYSTEM_G{N}*x);
function f=frot(x, fun, varargin)
N = size(x,1);
global ORTHOGONALCOORSYSTEM_G
if isempty(ORTHOGONALCOORSYSTEM_G) ...
|| length(ORTHOGONALCOORSYSTEM_G) < N ...
|| isempty(ORTHOGONALCOORSYSTEM_G{N})
coordinatesystem(N);
end
f = feval(fun, ORTHOGONALCOORSYSTEM_G{N}*x, varargin{:});
function coordinatesystem(N)
if nargin < 1 || isempty(N)
arN = 2:30;
else
arN = N;
end
global ORTHOGONALCOORSYSTEM_G
ORTHOGONALCOORSYSTEM_G{1} = 1;
for N = arN
ar = randn(N,N);
for i = 1:N
for j = 1:i-1
ar(:,i) = ar(:,i) - ar(:,i)'*ar(:,j) * ar(:,j);
end
ar(:,i) = ar(:,i) / norm(ar(:,i));
end
ORTHOGONALCOORSYSTEM_G{N} = ar;
end
function f=fplane(x)
f=x(1);
function f=ftwoaxes(x)
f = sum(x(1:floor(end/2),:).^2, 1) + 1e6*sum(x(floor(1+end/2):end,:).^2, 1);
function f=fparabR(x)
f = -x(1,:) + 100*sum(x(2:end,:).^2,1);
function f=fsharpR(x)
f = abs(-x(1, :)).^2 + 100 * sqrt(sum(x(2:end,:).^2, 1));
function f=frosen(x)
if size(x,1) < 2 error('dimension must be greater than one'); end
N = size(x,1);
popsi = size(x,2);
f = 1e2*sum((x(1:end-1,:).^2 - x(2:end,:)).^2,1) + sum((x(1:end-1,:)-1).^2,1);
% f = f + f^0.9 .* (2*randn(1,popsi) ./ randn(1,popsi).^0 / (2*N));
function f=frosenlin(x)
if size(x,1) < 2 error('dimension must be greater than one'); end
x_org = x;
x(x>30) = 30;
x(x<-30) = -30;
f = 1e2*sum(-(x(1:end-1,:).^2 - x(2:end,:)),1) + ...
sum((x(1:end-1,:)-1).^2,1);
f = f + sum((x-x_org).^2,1);
% f(any(abs(x)>30,1)) = NaN;
function f=frosenrot(x)
N = size(x,1);
global ORTHOGONALCOORSYSTEM_G
if isempty(ORTHOGONALCOORSYSTEM_G) ...
|| length(ORTHOGONALCOORSYSTEM_G) < N ...
|| isempty(ORTHOGONALCOORSYSTEM_G{N})
coordinatesystem(N);
end
f = frosen(ORTHOGONALCOORSYSTEM_G{N}*x);
function f=frosenmodif(x)
f = 74 + 100*(x(2)-x(1)^2)^2 + (1-x(1))^2 ...
- 400*exp(-sum((x+1).^2)/2/0.05);
function f=fschwefelrosen1(x)
% in [-10 10]
f=sum((x.^2-x(1)).^2 + (x-1).^2);
function f=fschwefelrosen2(x)
% in [-10 10]
f=sum((x(2:end).^2-x(1)).^2 + (x(2:end)-1).^2);
function f=fdiffpow(x)
[N popsi] = size(x); if N < 2 error('dimension must be greater than one'); end
f = sum(abs(x).^repmat(2+10*(0:N-1)'/(N-1), 1, popsi), 1);
f = sqrt(f);
function f=fabsprod(x)
f = sum(abs(x),1) + prod(abs(x),1);
function f=ffloor(x)
f = sum(floor(x+0.5).^2,1);
function f=fmaxx(x)
f = max(abs(x), [], 1);
%%% Multimodal functions
function f=fbirastrigin(x)
% todo: the volume needs to be a constant
N = size(x,1);
idx = (sum(x, 1) < 0.5*N); % global optimum
f = zeros(1,size(x,2));
f(idx) = 10*(N-sum(cos(2*pi*x(:,idx)),1)) + sum(x(:,idx).^2,1);
idx = ~idx;
f(idx) = 0.1 + 10*(N-sum(cos(2*pi*(x(:,idx)-2)),1)) + sum((x(:,idx)-2).^2,1);
function f=fackley(x)
% -32.768..32.768
% Adding a penalty outside the interval is recommended,
% because for large step sizes, fackley behaves like frand
%
N = size(x,1);
f = 20-20*exp(-0.2*sqrt(sum(x.^2)/N));
f = f + (exp(1) - exp(sum(cos(2*pi*x))/N));
% add penalty outside the search interval
f = f + sum((x(x>32.768)-32.768).^2) + sum((x(x<-32.768)+32.768).^2);
function f = fbohachevsky(x)
% -15..15
f = sum(x(1:end-1).^2 + 2 * x(2:end).^2 - 0.3 * cos(3*pi*x(1:end-1)) ...
- 0.4 * cos(4*pi*x(2:end)) + 0.7);
function f=fconcentric(x)
% in +-600
s = sum(x.^2);
f = s^0.25 * (sin(50*s^0.1)^2 + 1);
function f=fgriewank(x)
% in [-600 600]
[N, P] = size(x);
% f = 1 - prod(cos(x'./sqrt(1:N))) + sum(x.^2)/4e3; % superseded by the vectorized version below
scale = repmat(sqrt(1:N)', 1, P);
f = 1 - prod(cos(x./scale), 1) + sum(x.^2, 1)/4e3;
% f = f + 1e4*sum(x(abs(x)>5).^2);
% if sum(x(abs(x)>5).^2) > 0
% f = 1e4 * sum(x(abs(x)>5).^2) + 1e8 * sum(x(x>5)).^2;
% end
function f=fgriewrosen(x)
% F13 or F8F2
[N, P] = size(x);
scale = repmat(sqrt(1:N)', 1, P);
y = [x(2:end,:); x(1,:)];
x = 100 * (x.^2 - y) + (x - 1).^2; % Rosenbrock part
% f = 1 - prod(cos(x./scale), 1) + sum(x.^2, 1)/4e3; % superseded by the next line
f = sum(1 - cos(x) + x.^2/4e3, 1);
function f=fspallpseudorastrigin(x, scal, skewfac, skewstart, amplitude)
% by default multi-modal about between -30 and 30
if nargin < 5 || isempty(amplitude)
amplitude = 40;
end
if nargin < 4 || isempty(skewstart)
skewstart = 0;
end
if nargin < 3 || isempty(skewfac)
skewfac = 1;
end
if nargin < 2 || isempty(scal)
scal = 1;
end
N = size(x,1);
scale = 1;
if N > 1
scale=scal.^((0:N-1)'/(N-1));
end
% simple version:
% f = amplitude*(N - sum(cos(2*pi*(scale.*x)))) + sum((scale.*x).^2);
% skew version:
y = repmat(scale, 1, size(x,2)) .* x;
idx = find(x > skewstart);
if ~isempty(idx)
y(idx) = skewfac*y(idx);
end
f = amplitude * (0*N-prod(cos((2*pi)^0*y),1)) + 0.05 * sum(y.^2,1) ...
+ randn(1,1);
function f=frastrigin(x, scal, skewfac, skewstart, amplitude)
% by default multi-modal about between -30 and 30
if nargin < 5 || isempty(amplitude)
amplitude = 10;
end
if nargin < 4 || isempty(skewstart)
skewstart = 0;
end
if nargin < 3 || isempty(skewfac)
skewfac = 1;
end
if nargin < 2 || isempty(scal)
scal = 1;
end
N = size(x,1);
scale = 1;
if N > 1
scale=scal.^((0:N-1)'/(N-1));
end
% simple version:
% f = amplitude*(N - sum(cos(2*pi*(scale.*x)))) + sum((scale.*x).^2);
% skew version:
y = repmat(scale, 1, size(x,2)) .* x;
idx = find(x > skewstart);
% idx = intersect(idx, 2:2:10);
if ~isempty(idx)
y(idx) = skewfac*y(idx);
end
f = amplitude * (N-sum(cos(2*pi*y),1)) + sum(y.^2,1);
function f=frastriginmax(x)
N = size(x,1);
f = (N/20)*807.06580387678 - (10 * (N-sum(cos(2*pi*x),1)) + sum(x.^2,1));
f(any(abs(x) > 5.12)) = 1e2*N;
function f = fschaffer(x)
% -100..100
N = size(x,1);
s = x(1:N-1,:).^2 + x(2:N,:).^2;
f = sum(s.^0.25 .* (sin(50*s.^0.1).^2+1), 1);
function f=fschwefelmult(x)
% -500..500
%
N = size(x,1);
% f = - sum(x.*sin(sqrt(abs(x))), 1); % superseded by the next line
f = 418.9829*N - 1.27275661e-5*N - sum(x.*sin(sqrt(abs(x))), 1);
% penalty term
f = f + 1e4*sum((abs(x)>500) .* (abs(x)-500).^2, 1);
function f=ftwomax(x)
% Boundaries at +/-5
N = size(x,1);
f = -abs(sum(x)) + 5*N;
function f=ftwomaxtwo(x)
% Boundaries at +/-10
N = size(x,1);
f = abs(sum(x));
if f > 30
f = f - 30;
end
f = -f;
function f=frand(x)
f=1./(1-rand(1, size(x,2))) - 1;
% CHANGES
% 12/04/28: (3.61) stopIter is relative to countiter after resume (thanks to Tom Holden)
% 12/04/28: (3.61) some syncing from 3.32.integer branch (cmean introduced, ...)
% 12/02/19: "future" setting of ccum, correcting for large mueff, is default now
% 11/11/15: bug-fix: max value for ccovmu_sep setting corrected
% 10/11/11: (3.52.beta) boundary handling: replace max with min in change
% rate formula. Active CMA: check of pos.def. improved.
% Plotting: value of lambda appears in the title.
% 10/04/03: (3.51.beta) active CMA cleaned up. Equal fitness detection
% looks into history now.
% 10/03/08: (3.50.beta) "active CMA" revised and bug-fix of ambiguous
% option Noise.alpha -> Noise.alphasigma.
% 09/10/12: (3.40.beta) a slightly modified version of "active CMA",
% that is a negative covariance matrix update, use option
% CMA.active. In 10;30;90-D the gain on ftablet is a factor
% of 1.6;2.5;4.4 (the scaling improves by sqrt(N)). On
% Rosenbrock the gain is about 25%. On sharp ridge the
% behavior is improved. Cigar is unchanged.
% 09/08/10: local plotcmaesdat remains in background
% 09/08/10: bug-fix in time management for data writing, logtime was not
% considered properly (usually not at all).
% 09/07/05: V3.24: stagnation termination added
% 08/09/27: V3.23: momentum alignment is commented out and deprecated
% 08/09/25: V3.22: re-alignment of sigma and C was buggy
% 08/07/15: V3.20, CMA-parameters are options now. ccov and mucov were replaced
% by ccov1 \approx ccov/mucov and ccovmu \approx (1-1/mucov)*ccov
% 08/06/30: file name xrecent was changed to xrecentbest (compatible with other
% versions)
% 08/06/29: time stamp added to output files
% 08/06/28: bug fixed with resume option, commentary did not work
% 08/06/28: V3.10, uncertainty (noise) handling added (re-implemented), according
% to reference "A Method for Handling Uncertainty..." from below.
% 08/06/28: bug fix: file xrecent was empty
% 08/06/01: diagonalonly clean up. >1 means some iterations.
% 08/05/05: output is written to file, preventing an increasing data
% array and easing long runs.
% 08/03/27: DiagonalOnly<0 learns for -DiagonalOnly iterations only the
% diagonal with a larger learning rate.
% 08/03 (2.60): option DiagonalOnly>=1 invokes a time- and space-linear
% variant with only diagonal elements of the covariance matrix
% updating. This can be useful for large dimensions, say > 100.
% 08/02: diag(weights) * ... replaced with repmat(weights,1,N) .* ...
% in C update, implies O(mu*N^2) instead of O(mu^2*N + mu*N^2).
% 07/09: tolhistfun as termination criterion added, "<" changed to
% "<=" also for TolFun to allow for stopping on zero difference.
% Name tolfunhist clashes with option tolfun.
% 07/07: hsig threshold made slightly smaller for large dimension,
% useful for lambda < lambda_default.
% 07/06: boundary handling: scaling in the boundary handling
% is omitted now, see bnd.flgscale. This seems not to
% have a big impact. Using the scaling is worse on rotated
% functions, but better on separable ones.
% 07/05: boundary handling: weight i is not incremented anymore
% if xmean(i) moves towards the feasible space. Increment
% factor changed to 1.2 instead of 1.1.
% 07/05: boundary handling code simplified not changing the algorithm
% 07/04: bug removed for saving in octave
% 06/11/10: more testing of outcome of eig, fixed max(D) to max(diag(D))
% 06/10/21: conclusive final bestever assignment in the end
% 06/10/21: restart and incpopsize option implemented for restarts
% with increasing population size, version 2.50.
% 06/09/16: output argument bestever inserted again for convenience and
% backward compatibility
% 06/08: output argument out and struct out reorganized.
% 06/01: Possible parallel evaluation included as option EvalParallel
% 05/11: Compatibility to octave implemented, package octave-forge
% is needed.
% 05/09: Raise of figure and waiting for first plots improved
% 05/01: Function coordinatesystem cleaned up.
% 05/01: Function prctile, which requires the statistics toolbox,
% replaced by myprctile.
% 05/01: Option warnonequalfunctionvalues included.
% 04/12: Decrease of sigma removed. Problems on fsectorsphere can
% be addressed better by adding search space boundaries.
% 04/12: Boundary handling simplified.
% 04/12: Bug fixed when stopping criteria tolx or tolupx are vectors.
% 04/11: Three input parameters are obligatory now.
% 04/11: Bug in boundary handling removed: Boundary weights can decrease now.
% 04/11: Normalization for boundary weights scale changed.
% 04/11: VerboseModulo option bug removed. Documentation improved.
% 04/11: Condition for increasing boundary weights changed.
% 04/10: Decrease of sigma when fitness is getting consistently
% worse. Addresses the problems appearing on fsectorsphere for
% large population size.
% 04/10: VerboseModulo option included.
% 04/10: Bug for condition for increasing boundary weights removed.
% 04/07: tolx depends on initial sigma to achieve scale invariance
% for this stopping criterion.
% 04/06: Objective function value NaN is not counted as function
% evaluation and invokes resampling of the search point.
% 04/06: Error handling for eigenvalue being zero (never happens
% with default parameter setting)
% 04/05: damps further tuned for large mueff
% o Details for stall of pc-adaptation added (variable hsig
% introduced).
% 04/05: Bug in boundary handling removed: A large initial SIGMA was
%        not corrected until *after* the first iteration, which could
%        lead to a complete failure.
% 04/05: Call of function range (works with stats toolbox only)
% changed to myrange.
% 04/04: Parameter cs depends on mueff now and damps \propto sqrt(mueff)
% instead of \propto mueff.
% o Initial stall to adapt C (flginiphase) is removed and
% adaptation of pc is stalled for large norm(ps) instead.
% o Returned default options include documentation.
% o Resume part reorganized.
% 04/03: Stopflag becomes cell-array.
% ---------------------------------------------------------------
% CMA-ES: Evolution Strategy with Covariance Matrix Adaptation for
% nonlinear function minimization. To be used under the terms of the
% GNU General Public License (http://www.gnu.org/copyleft/gpl.html).
% Author (copyright): Nikolaus Hansen, 2001-2008.
% e-mail: nikolaus.hansen AT inria.fr
% URL: http://www.bionik.tu-berlin.de/user/niko
% References: See below.
% ---------------------------------------------------------------
%
% GENERAL PURPOSE: The CMA-ES (Evolution Strategy with Covariance
% Matrix Adaptation) is a robust search method which should be
% applied if derivative-based methods, e.g. quasi-Newton BFGS or
% conjugate gradient, (presumably) fail due to a rugged search
% landscape (e.g. noise, local optima, outliers, etc.). On smooth
% landscapes CMA-ES is roughly ten times slower than BFGS. For up to
% N=10 variables even the simplex direct search method (Nelder & Mead)
% is often faster, but far less robust than CMA-ES. To see the
% advantage of the CMA, it will usually take at least 30*N and up to
% 300*N function evaluations, where N is the search problem dimension.
% On considerably hard problems the complete search (a single run) is
% expected to take at least 30*N^2 and up to 300*N^2 function
% evaluations.
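%
% For example, in N = 10 dimensions this budget heuristic amounts to
% roughly 3e3 (= 30*10^2) and up to 3e4 (= 300*10^2) function
% evaluations for a single run on a considerably hard problem.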
%
% SOME MORE COMMENTS:
% The adaptation of the covariance matrix (e.g. by the CMA) is
% equivalent to a general linear transformation of the problem
% coding. Nevertheless, any problem-specific knowledge about the best
% linear transformation should be exploited before starting the
% search. That is, an appropriate a priori transformation should be
% applied to the problem. This also makes the identity matrix the
% best choice as the initial covariance matrix.
%
% The strategy parameter lambda (population size, opts.PopSize) is the
% preferred strategy parameter to play with. If results with the
% default strategy are not satisfactory, increase the population
% size. (Note that the crucial parameter mu (opts.ParentNumber) is
% increased proportionally to lambda). This will improve the
% strategy's capability of handling noise and local minima. We
% recommend successively increasing lambda by a factor of about three,
% starting with initial values between 5 and 20, as sketched below.
% Occasionally, population sizes even beyond 1000+100*N can be sensible.
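%
% A hedged restart sketch: the option names Restarts and IncPopSize are
% assumptions based on the version 2.50 change-log entry above; verify
% them against your version's default options.
%
%   opts.Restarts = 3;      % allow up to three restarts
%   opts.IncPopSize = 2;    % double lambda at each restart
%   [xmin, fmin] = cmaes('ftwomaxtwo', zeros(10,1), 3, opts);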
%
%
% ---------------------------------------------------------------
%%% REFERENCES
%
% The equation numbers refer to
% Hansen, N. and S. Kern (2004). Evaluating the CMA Evolution
% Strategy on Multimodal Test Functions. Eighth International
% Conference on Parallel Problem Solving from Nature PPSN VIII,
% Proceedings, pp. 282-291, Berlin: Springer.
% (http://www.bionik.tu-berlin.de/user/niko/ppsn2004hansenkern.pdf)
%
% Further references:
% Hansen, N. and A. Ostermeier (2001). Completely Derandomized
% Self-Adaptation in Evolution Strategies. Evolutionary Computation,
% 9(2), pp. 159-195.
% (http://www.bionik.tu-berlin.de/user/niko/cmaartic.pdf).
%
% Hansen, N., S.D. Mueller and P. Koumoutsakos (2003). Reducing the
% Time Complexity of the Derandomized Evolution Strategy with
% Covariance Matrix Adaptation (CMA-ES). Evolutionary Computation,
% 11(1). (http://mitpress.mit.edu/journals/pdf/evco_11_1_1_0.pdf).
%
% Ros, R. and N. Hansen (2008). A Simple Modification in CMA-ES
% Achieving Linear Time and Space Complexity. To appear in Tenth
% International Conference on Parallel Problem Solving from Nature
% PPSN X, Proceedings, Berlin: Springer.
%
% Hansen, N., A.S.P. Niederberger, L. Guzzella and P. Koumoutsakos
% (2009?). A Method for Handling Uncertainty in Evolutionary
% Optimization with an Application to Feedback Control of
% Combustion. To appear in IEEE Transactions on Evolutionary
% Computation.
|
[STATEMENT]
lemma map_equal:
"dom m = dom m' \<Longrightarrow> (\<And>x. x \<in> dom m \<Longrightarrow> m x = m' x) \<Longrightarrow> m = m'"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>dom m = dom m'; \<And>x. x \<in> dom m \<Longrightarrow> m x = m' x\<rbrakk> \<Longrightarrow> m = m'
[PROOF STEP]
by fastforce |
Formal statement is: lemma poly_eq_iff: "p = q \<longleftrightarrow> (\<forall>n. coeff p n = coeff q n)" Informal statement is: Two polynomials are equal if and only if they have the same coefficients. |
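A hedged Lean 4 analogue of this statement; the Mathlib lemma name `Polynomial.ext_iff` is an assumption about the current library, not taken from the source above.
import Mathlib

-- Two polynomials are equal iff all their coefficients agree.
example {R : Type*} [Semiring R] (p q : Polynomial R) :
    p = q ↔ ∀ n, p.coeff n = q.coeff n :=
  Polynomial.ext_iff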
using Unitful, UnitfulAstro
using PhysicalConstants.CODATA2018: h, c_0, k_B
export blackbody
"""
    blackbody(wave::Vector{<:Quantity}, T::Quantity)
    blackbody(wave::Vector{<:Real}, T::Real)
Create a blackbody spectrum using Planck's law. The curve follows the mathematical form
``B_\\lambda(T) = \\frac{2hc^2}{\\lambda^5}\\frac{1}{e^{hc/\\lambda k_B T} - 1}``
If `wave` and `T` are not `Unitful.Quantity`, they are assumed to be in angstrom and Kelvin, and the returned flux will be in units `W m^-2 Å^-1`.
The physical constants are calculated using [PhysicalConstants.jl](https://github.com/juliaphysics/physicalconstants.jl), specifically the CODATA2018 measurement set.
# References
[Planck's Law](https://en.wikipedia.org/wiki/Planck%27s_law)
# Examples
```jldoctest
julia> using Spectra, Unitful, UnitfulAstro
julia> wave = range(1, 3, length=100)u"μm"
(1.0:0.020202020202020204:3.0) μm
julia> bb = blackbody(wave, 2000u"K")
UnitfulSpectrum (100,)
λ (μm) f (W μm^-1 m^-2)
T: 2000 K
name: Blackbody
julia> blackbody(ustrip.(u"angstrom", wave), 6000)
Spectrum (100,)
T: 6000
name: Blackbody
julia> bb.wave[argmax(bb)]
1.4444444444444444 μm
julia> 2898u"μm*K" / bb.T # See if it matches up with Wien's law
1.449 μm
```
"""
function blackbody(wave::AbstractVector{<:Quantity}, T::Quantity)
    # power per unit area per unit wavelength, matching the input wavelength unit
    out_unit = u"W/m^2" / unit(eltype(wave))
    flux = _blackbody(wave, T) .|> out_unit
    return spectrum(wave, flux, name = "Blackbody", T = T)
end
function blackbody(wave::AbstractVector{<:Real}, T::Real)
    # attach the assumed units (angstrom, Kelvin), evaluate, then strip back to plain numbers
    flux = ustrip.(u"W/m^2/angstrom", _blackbody(wave * u"angstrom", T * u"K"))
    return spectrum(wave, flux, name = "Blackbody", T = T)
end
# broadcast the single-temperature Planck function over the wavelength grid
_blackbody(wave::AbstractVector{<:Quantity}, T::Quantity) = blackbody(T).(wave)
"""
    blackbody(T::Quantity)
Returns a function for calculating blackbody curves.
"""
# Planck's law, curried over temperature
blackbody(T::Quantity) = w -> 2h * c_0^2 / w^5 / (exp(h * c_0 / (w * k_B * T)) - 1)
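
# A quick sketch of the curried form returned by `blackbody(T)`; the
# temperature and the conversion target below are illustrative
# assumptions, not part of the documented API:
#
#   B = blackbody(5772u"K")                  # a function of wavelength
#   uconvert(u"W/m^2/nm", B(500.0u"nm"))     # spectral flux density at 500 nm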
|