lemma smallo_imp_eventually_sgn: fixes f g :: "real \<Rightarrow> real" assumes "g \<in> o(f)" shows "eventually (\<lambda>x. sgn (f x + g x) = sgn (f x)) at_top"
%% writeMtrFile_tetGen
% Below is a demonstration of the features of the |writeMtrFile_tetGen| function
%%
clear; close all; clc;
%% Syntax
% |writeMtrFile_tetGen(inputStruct,bOpt);|
%% Description
% UNDOCUMENTED
%% Examples
%
%%
%
% <<gibbVerySmall.gif>>
%
% _*GIBBON*_
% <www.gibboncode.org>
%
% _Kevin Mattheus Moerman_, <[email protected]>
%%
% _*GIBBON footer text*_
%
% License: <https://github.com/gibbonCode/GIBBON/blob/master/LICENSE>
%
% GIBBON: The Geometry and Image-based Bioengineering add-On. A toolbox for
% image segmentation, image-based modeling, meshing, and finite element
% analysis.
%
% Copyright (C) 2006-2022 Kevin Mattheus Moerman and the GIBBON contributors
%
% This program is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program. If not, see <http://www.gnu.org/licenses/>.
/****************************************************************************
* *
* Author : [email protected] *
* ~~~~~~~~ *
* License : see COPYING file for details. *
* ~~~~~~~~~ *
****************************************************************************/
#pragma once
#include <etl/cstring.h>
// #include <fmt/core.h>
// #include <fmt/format.h>
#include <gsl/gsl>
namespace logging {
/// Line size including tick number and task name.
static constexpr size_t MAX_LINE_SIZE = 256;
/// Has to be called first to initialize the logging subsystem.
void init ();
/**
* Add a log. Do not call from an ISR.
*/
bool log (gsl::czstring<> str);
// template <typename Format, typename... Args> bool log2 (Format &&format, Args &&... args)
// {
// etl::string<MAX_LINE_SIZE> buf;
// fmt::format_to (std::back_inserter (buf), std::forward<Format> (format), std::forward<Args> (args)...);
// log (buf.c_str ());
// return true;
// }
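// Example usage (a sketch; assumes the firmware's tick/task infrastructure
// is already running):
//
//   logging::init ();
//   logging::log ("system initialized");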
} // namespace logging
\documentclass[letterpaper,final,12pt,reqno]{amsart}
\usepackage[total={6.3in,9.2in},top=1.1in,left=1.1in]{geometry}
\usepackage{times,bm,bbm,empheq,verbatim,fancyvrb,graphicx,xspace}
\usepackage[dvipsnames]{xcolor}
\usepackage[kw]{pseudo}
\pseudoset{left-margin=15mm,topsep=5mm,idfont=\texttt}
\usepackage{tikz}
\usetikzlibrary{decorations.pathreplacing}
% hyperref should be the last package we load
\usepackage[pdftex,
colorlinks=true,
plainpages=false, % only if colorlinks=true
linkcolor=blue, % ...
citecolor=Red, % ...
urlcolor=black % ...
]{hyperref}
\DefineVerbatimEnvironment{cline}{Verbatim}{fontsize=\small,xleftmargin=5mm}
\renewcommand{\baselinestretch}{1.05}
\newtheorem{lemma}{Lemma}
\newcommand{\Matlab}{\textsc{Matlab}\xspace}
\newcommand{\eps}{\epsilon}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\grad}{\nabla}
\newcommand{\Div}{\nabla\cdot}
\newcommand{\trace}{\operatorname{tr}}
\newcommand{\hbn}{\hat{\mathbf{n}}}
\newcommand{\bb}{\mathbf{b}}
\newcommand{\be}{\mathbf{e}}
\newcommand{\bbf}{\mathbf{f}}
\newcommand{\bg}{\mathbf{g}}
\newcommand{\bn}{\mathbf{n}}
\newcommand{\br}{\mathbf{r}}
\newcommand{\bu}{\mathbf{u}}
\newcommand{\bv}{\mathbf{v}}
\newcommand{\bw}{\mathbf{w}}
\newcommand{\bx}{\mathbf{x}}
\newcommand{\bV}{\mathbf{V}}
\newcommand{\bX}{\mathbf{X}}
\newcommand{\bxi}{\bm{\xi}}
\newcommand{\bzero}{\bm{0}}
\newcommand{\rhoi}{\rho_{\text{i}}}
\newcommand{\ip}[2]{\left<#1,#2\right>}
\newcommand{\Rpr}{R_{\text{pr}}}
\newcommand{\Rin}{R_{\text{in}}}
\newcommand{\Rfw}{R_{\text{fw}}}
\begin{document}
\title[The FAS multigrid scheme]{The full approximation storage multigrid scheme: \\ A 1D finite element example}
\author{Ed Bueler}
\begin{abstract} This note describes the full approximation storage (FAS) multigrid scheme for an easy one-dimensional nonlinear boundary value problem. The problem is discretized by a simple finite element (FE) scheme. We apply both FAS V-cycles and F-cycles, with a nonlinear Gauss-Seidel smoother, to solve the resulting finite-dimensional problem. The mathematics of the FAS restriction and prolongation operators, in the FE case, are explained. A self-contained Python program implements the scheme. Optimal performance, i.e.~work proportional to the number of unknowns, is demonstrated for both kinds of cycles, including convergence nearly to discretization error in a single F-cycle. \end{abstract}
\thanks{Version 2. This note is expository, and submission for publication is not foreseen. Thanks to Matt Knepley for thoughtful comments.}
\maketitle
\tableofcontents
\thispagestyle{empty}
\bigskip
\section{Introduction} \label{sec:intro}
We consider the full approximation storage (FAS) scheme, originally described by Brandt \cite{Brandt1977}, for an easy nonlinear elliptic equation. Like other multigrid schemes it exhibits optimal solver complexity \cite{Bueler2021} when correctly applied, as we demonstrate at the end. Helpful write-ups of FAS can be found in well-known textbooks \cite{BrandtLivne2011,Briggsetal2000,Trottenbergetal2001}, but we describe the scheme from a finite element point of view, compatible with the multigrid approaches used for obstacle problems \cite{GraeserKornhuber2009} for example, and we provide an easy-to-digest Python implementation.
Our problem is an ordinary differential equation (ODE) boundary value problem, the nonlinear Liouville-Bratu equation \cite{Bratu1914,Liouville1853}:
\begin{equation}
-u'' - \lambda\, e^u = g, \qquad u(0) = u(1) = 0. \label{liouvillebratu}
\end{equation}
In this problem $\lambda$ is a real constant, $g(x)$ is given, and we seek $u(x)$. This equation arises in the theory of combustion \cite{FrankKameneckij1955} and the stability of stars.
Our goal is to solve \eqref{liouvillebratu} in optimal $O(m)$ time on a mesh of $m$ elements. A Python implementation of FAS, \texttt{fas1.py} in directory \texttt{fas/py/},\footnote{Clone the Git repository\, \href{https://github.com/bueler/mg-glaciers}{\texttt{github.com/bueler/mg-glaciers}}\, and look in the \texttt{fas/py/} directory.} accomplishes such optimal-time solutions both by V-cycle and F-cycle strategies (section \ref{sec:cycles}). (This note serves as its documentation.) While optimal-time solutions of 1D problems are not unusual, FAS and other multigrid strategies for many nonlinear 2D and 3D partial differential equations (PDEs) are also optimal. This makes them the highest-performing class of solver algorithms for such problems.
By default the program \texttt{fas1.py} solves \eqref{liouvillebratu} with $g=0$. A runtime option \texttt{-mms}, the ``method of manufactured solutions'' \cite{Bueler2021}, facilitates testing by specifying a problem with known exact solution and nonzero source term. In detail, the solution is $u(x)=\sin(3\pi x)$, and by differentiation $g(x)=9\pi^2 \sin(3\pi x) - \lambda e^{\sin(3\pi x)}$.
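For concreteness, here is a minimal NumPy sketch of the manufactured solution and its source term (function names are illustrative, not necessarily those used in \texttt{fas1.py}):
\begin{cline}
import numpy as np

def u_exact(x):                  # manufactured solution u(x) = sin(3 pi x)
    return np.sin(3.0*np.pi*x)

def g_source(x, lam):            # g = -u'' - lam e^u for the u above
    return 9.0*np.pi**2*np.sin(3.0*np.pi*x) \
           - lam*np.exp(np.sin(3.0*np.pi*x))
\end{cline}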
\section{The finite element method} \label{sec:femethod}
To solve the problem using the finite element (FE) method \cite{Braess2007,Bueler2021,Elmanetal2014}, we rewrite \eqref{liouvillebratu} in weak form. Let $F$ be the nonlinear operator
\begin{equation}
F(u)[v] = \int_0^1 u'(x) v'(x) - \lambda e^{u(x)} v(x)\, dx, \label{operator}
\end{equation}
acting on $u$ and $v$ from the space of functions $\mathcal{H}=H_0^1[0,1]$, a Sobolev space \cite{Evans2010}. (These functions have zero boundary values and one square-integrable derivative.) Note $F(u)[v]$ is linear in $v$ but not in $u$. We also define a linear functional from the right-hand function $g$ in \eqref{liouvillebratu}:
\begin{equation}
\ell[v] = \ip{g}{v} = \int_0^1 g(x) v(x) dx. \label{rhsfunctional}
\end{equation}
Both $F(u)[\cdot]$ and $\ell[\cdot]$ are (continuous) linear functionals, acting on functions $v$ in $\mathcal{H}$, thus they are in the dual space $\mathcal{H}'$. One derives the weak form
\begin{equation}
F(u)[v] = \ell[v] \qquad \text{for all $v$ in $\mathcal{H}$} \label{weakform}
\end{equation}
by multiplying equation \eqref{liouvillebratu} by a test function $v$ and integrating by parts.
From now on we address problem \eqref{weakform}, despite its abstract form. In an FE context a clear separation is desirable between functions, like the solution $u(x)$, and the equations themselves, which are, essentially, functionals. As in linear algebra, where one indexes the equations by row indices, \eqref{weakform} states the ``$v$th equation''; the equations are indexed by the test functions. The FE method will reduce the problem to a finite number of unknowns by writing $u(x)$ in a basis of a finite-dimensional subspace of $\mathcal{H}$. One gets finitely-many equations by using test functions $v$ from the same basis.
We apply the simplest possible mesh setup, namely an equally-spaced mesh on $[0,1]$ of $m$ elements (subintervals) of lengths $h=1/m$. The interior nodes (points) are $x_p=ph$ for $p=1,\dots,m-1$. This mesh supports a finite-dimensional vector subspace of $\mathcal{H}$:
\begin{equation}
\mathcal{S}^h = \left\{v(x)\,\big|\,v \text{ is continuous, linear on each subinterval, and } v(0)=v(1)=0\right\}. \label{fespace}
\end{equation}
This space has a basis of ``hat'' functions $\{\psi_p(x)\}$, one for each interior node (Figure \ref{fig:onehat}). Such a hat function $\psi_p$ is defined by two properties: $\psi_p$ is in $\mathcal{S}^h$ and $\psi_p(x_q)=\delta_{pq}$ for all $q$. Note that the $L^2$ norm of $\psi_p$ depends on the mesh resolution $h$, and that $\ip{\psi_p}{\psi_q}\ne 0$ for only three indices $q=p-1,p,p+1$. The basis of hat functions, while well-conditioned, is not orthonormal.
\begin{figure}
\includegraphics[width=0.6\textwidth]{figs/onehat.pdf}
\caption{A piecewise-linear hat function $\psi_p(x)$ lives at each interior node $x_p$.}
\label{fig:onehat}
\end{figure}
The numerical solution $u^h$ has the expansion
\begin{equation}
u^h(x) = \sum_{p=1}^{m-1} u[p] \psi_p(x) \label{fesolution}
\end{equation}
with coefficients $u[p]$ equal to the point values $u^h(x_p)$. That is, because the hat functions form a ``nodal basis'' \cite{Elmanetal2014}, $u^h$ may be represented as a vector $\bu$ in $\RR^{m-1}$ either by its coefficients in the basis $\{\psi_p\}$ or its point values:
\begin{equation}
\bu =\{u[p]\} = \{u^h(x_p)\}. \label{fevector}
\end{equation}
The FE approximation $F^h$ of the nonlinear operator $F$ in \eqref{operator} acts on functions in $\mathcal{S}^h$. Its values $F^h(w^h)[\psi_p]$ are easily computed if the transcendental integral is approximated, for example by using the trapezoid rule, as follows. Noting that the support of $\psi_p(x)$ is $[x_{p-1},x_{p+1}]$, and that the derivative of $\psi_p$ is $\pm 1/h$, we have:
\begin{align}
F(w^h)[\psi_p] &= \int_0^1 (w^h)'(x) \psi_p'(x) - \lambda e^{w^h(x)} \psi_p(x)\, dx \label{feoperator} \\
&= \int_{x_{p-1}}^{x_{p+1}} (w^h)'(x) (\pm 1/h)\,dx - \lambda \int_{x_{p-1}}^{x_{p+1}} e^{w^h(x)} \psi_p(x)\, dx \notag \\
&\approx h \left(\frac{w[p]-w[p-1]}{h^2} - \frac{w[p+1]-w[p]}{h^2}\right) - h \lambda e^{w[p]} \notag \\
&= \frac{1}{h}\left(2w[p]-w[p-1]-w[p+1]\right) - h \lambda e^{w[p]} \notag \\
&= F^h(w^h)[\psi_p] \notag
\end{align}
Note that $F^h$ is a rescaled version of a well-known $O(h^2)$ finite difference expression. Function \texttt{FF()} in \texttt{fas1.py} computes this formula.
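In NumPy this formula is a short vectorized computation over the interior nodes. The following sketch assumes \texttt{w} holds all nodal values, including the zero boundary values \texttt{w[0]} and \texttt{w[m]}; the actual \texttt{FF()} may be organized differently:
\begin{cline}
def FF(w, h, lam):
    # values F^h(w^h)[psi_p] at the nodes; zero at the boundary nodes
    F = np.zeros(len(w))
    F[1:-1] = (2.0*w[1:-1] - w[:-2] - w[2:])/h - h*lam*np.exp(w[1:-1])
    return F
\end{cline}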
Now consider the right-hand-side functional $\ell[v]$ in \eqref{weakform}, which we will approximate by $\ell^h[v]$ acting on $\mathcal{S}^h$. We again apply the trapezoid rule to compute the integral $\ip{g}{\psi_p}$, and we get the simple formula
\begin{equation}
\ell^h[\psi_p] = h\, g(x_p). \label{ferhs}
\end{equation}
The linear functional $\ell^h$ and the function $g$ are different objects, though they only differ by a factor of the mesh size $h$.
The finite element weak form can now be stated:
\begin{equation}
F^h(u^h)[v] = \ell^h[v] \qquad \text{for all } v \text{ in } \mathcal{S}^h. \label{feweakform}
\end{equation}
To solve \eqref{feweakform} we seek an iterate $w^h$ so that the \emph{residual}
\begin{equation}
r^h(w^h)[v] = \ell^h[v] - F^h(w^h)[v] \label{feresidual}
\end{equation}
is small for all $v$ in $\mathcal{S}^h$. Again $r^h(w^h)$ is a linear functional acting on functions in $\mathcal{S}^h$, so it suffices to apply it to a basis of test functions $v=\psi_p$:
\begin{equation}
r^h(w^h)[\psi_p] = \ell^h[\psi_p] - \frac{1}{h}\left(2w[p]-w[p-1]-w[p+1]\right) + h \lambda e^{w[p]}. \label{feresidualdetail}
\end{equation}
Solving the finite-dimensional nonlinear system, i.e.~the FE approximation of \eqref{weakform}, is equivalent to finding $w^h$ in $\mathcal{S}^h$ so that $r^h(w^h)[\psi_p]=0$ for $p=1,\dots,m-1$.
A function in \texttt{fas1.py} computes \eqref{feresidualdetail} for any source functional $\ell^h$. On the original mesh, soon to be called the ``fine mesh'', we will use formula \eqref{ferhs}. However, the FAS algorithm (sections \ref{sec:fastwolevel} and \ref{sec:cycles}) is a systematic way to introduce a new source functional on each coarser mesh.
The function $u^h(x)$ in $\mathcal{S}^h$, equivalently $\bu$ in $\RR^{m-1}$ given by \eqref{fevector}, exactly solves a finite-dimensional nonlinear system \eqref{feweakform}. In practice, however, at each stage we only possess an iterate $w^h(x)$, for which the ``algebraic error'' is
\begin{equation}
e^h = w^h - u^h. \label{feerror}
\end{equation}
On the other hand, $u^h$ is not the continuum solution either; the ``discretization error'' $u^h-u$, where $u$ is the exact solution of the continuum problem \eqref{weakform}, is also nonzero in general. The theory of an FE method will show that discretization error goes to zero as $h\to 0$, at a particular rate determined by the FE space and the smoothness of the continuum problem \cite{Elmanetal2014}, but such a theory assumes we have exactly-solved the finite-dimensional system, i.e.~that we possess $u^h$ itself. The full ``numerical error'' is the difference $w^h-u$, and we have
\begin{equation}
\|w^h-u\| \le \|w^h-u^h\|+\|u^h-u\|.
\end{equation}
The numerical error, which we want to control, is bounded by the algebraic error plus the discretization error.
In the \texttt{-mms} case of \texttt{fas1.py}, where the exact solution $u$ of the continuum problem is known, the numerical error norm $\|w^h-u\|$ is computable. Normally we cannot access $u$ or $u^h$ directly, and only the residual norm $\|r^h(w^h)\|$ is computable, but the numerical error is controlled to within a matrix condition number by the residual norm.
\section{The nonlinear Gauss-Seidel iteration} \label{sec:ngs}
Next we describe an iteration which will, if carried far enough, solve the finite-dimensional nonlinear system \eqref{feweakform} to desired accuracy. This is the nonlinear Gauss-Seidel (NGS) iteration \cite{Briggsetal2000}, also called Gauss-Seidel-Newton \cite{BrandtLivne2011}. It updates the iterate $w^h$ by changing each point value at $x_p$ to make the residual at that point zero. That is, NGS solves the problem
\begin{equation}
\phi(c) = r^h(w^h + c \psi_p)[\psi_p] = 0 \label{ngspointproblem}
\end{equation}
for a scalar $c$. Once $c$ is found we update the point value (coefficient):
\begin{equation}
w^h \leftarrow w^h + c \psi_p, \label{ngspointupdate}
\end{equation}
equivalently $w[p] \leftarrow w[p] + c$.
As in the linear Gauss-Seidel iteration \cite{Greenbaum1997}, $w[p]$ is updated in a certain nodal ordering, using current values $w[q]$ when evaluating the residual in \eqref{ngspointproblem}. However, as the residual is made zero at one point it is no longer zero at the previous points. Gauss-Seidel-type methods are called ``successive'' \cite{GraeserKornhuber2009} or ``multiplicative'' \cite{Bueler2021} corrections. ``Additive'' corrections, of which the Jacobi iteration \cite{Greenbaum1997} is the best known, are also possible, but they are somewhat less efficient. Note our program only runs in serial, so the parallelizability of the Jacobi iteration cannot be exploited.
Solving the scalar problem $\phi(c)=0$ cannot be done exactly when considering a transcendental problem like \eqref{liouvillebratu}. Instead we will use a fixed number of Newton iterations \cite[Chapter 4]{Bueler2021} to generate a (scalar) sequence $\{c_k\}$ converging to $c$. Starting from $c_0=0$ we compute
\begin{equation}
\phi'(c_k)\, s_k = -\phi(c_k), \qquad c_{k+1} = c_k + s_k, \label{ngsnewton}
\end{equation}
for $k=0,1,\dots$. From \eqref{feresidualdetail} we have
\begin{align*}
\phi(c) &= \ell^h[\psi_p] - \frac{1}{h} \left(2(w[p]+c) - w[p-1] - w[p+1]\right) + h \lambda e^{w[p]+c}, \\
\phi'(c) &= -\frac{2}{h} + h \lambda e^{w[p]+c}.
\end{align*}
The vast majority of the work of our FAS algorithms will be in evaluating these expressions.
The NGS method ``sweeps'' through the mesh, zeroing $\phi(c)$ at successive nodes $x_p$, in increasing $p$ order, as in the following pseudocode which modifies $w^h$ in-place:
\begin{pseudo*}
\pr{ngssweep}(w^h,\ell^h,\id{niters}=2)\text{:} \\+
$r^h(w^h)[v] := \ell^h[v] - F^h(w^h)[v]$ \\
for $p=1,\dots,m-1$ \\+
$\phi(c) := r^h(w^h + c \psi_p)[\psi_p]$ \\
$c=0$ \\
for $k=1,\dots,$\id{niters} \\+
$c \gets c - \phi(c) / \phi'(c)$ \\-
$w[p] \gets w[p] + c$
\end{pseudo*}
For FAS algorithms (next section) we also define \textsc{ngssweep-back} with ``\textbf{for} $p=m-1,\dots,1$''. Function \texttt{ngssweep()} in \texttt{fas1.py} computes either order, and the \texttt{niters} default is two.
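A Python sketch of the forward sweep, under the same array conventions as before and using the formulas for $\phi(c)$ and $\phi'(c)$ above (compare \texttt{ngssweep()} in \texttt{fas1.py}):
\begin{cline}
def ngssweep(w, ell, h, lam, niters=2, forward=True):
    # in-place NGS: approximately zero the residual at each interior node
    rng = range(1, len(w)-1) if forward else range(len(w)-2, 0, -1)
    for p in rng:
        c = 0.0
        for _ in range(niters):   # Newton iterations on phi(c) = 0
            phi = ell[p] - (2.0*(w[p]+c) - w[p-1] - w[p+1])/h \
                  + h*lam*np.exp(w[p]+c)
            dphi = -2.0/h + h*lam*np.exp(w[p]+c)
            c -= phi/dphi
        w[p] += c
\end{cline}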
For a linear differential equation the Gauss-Seidel iteration is known to converge subject to matrix assumptions which correspond to ellipticity of the original problem \cite[for example]{Greenbaum1997}. We expect that for weak nonlinearities, e.g.~small $\lambda$ in \eqref{liouvillebratu}, our method will therefore converge as a solution method for \eqref{feweakform}, and we will demonstrate that this occurs in practice (section \ref{sec:convergence}). However, one observes in practice that, after substantial progress in the first few sweeps during which the residual becomes very smooth, soon NGS stagnates. Following Brandt \cite{Brandt1977,BrandtLivne2011}, who asserts that such a stalling scheme must be ``wrong'', we adopt the multigrid approach next.
\section{The FAS equation for two levels} \label{sec:fastwolevel}
The fundamental goal of any multigrid scheme is to do a minimal amount of work (smoothing) on a given mesh and then to switch to an inexpensive coarser mesh to do the rest of the work. By transferring (restricting) a version of the problem to the coarser mesh one can nearly solve for the error. The coarse-mesh approximation of the error is then added-back (prolonged) to correct the solution on the finer mesh. Being a multigrid scheme, full approximation storage (FAS) \cite{Brandt1977,Briggsetal2000} must therefore include the following elements:
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item a hierarchy of meshes, with restriction and prolongation operators between levels,
\item a ``smoother'' for each level, and
\item a meaningful way to transfer the problem to a coarser mesh.
\end{enumerate}
Regarding (i), we describe only two levels at first, but a full mesh hierarchy is used in section \ref{sec:cycles}. Here our coarse mesh has spacing $2h$ and $m/2$ elements (subintervals); all quantities on the coarse mesh have superscript ``$2h$''. The program \texttt{fas1.py} only refines by factors of two, but the ideas generalize for other refinement factors.
For (ii), a small fixed number of NGS sweeps is our smoother on the fine mesh. Each sweep, given by algorithm \textsc{ngssweep} above, is an $O(m)$ operation with a small constant. (The constant is determined by the number of Newton iterations and the expense of evaluating nonlinearities at each point, e.g.~$\lambda e^u$ in \eqref{liouvillebratu}.) A few NGS sweeps produce the result that the fine-mesh residual $r^h(w^h)$ and algebraic error $e^h = w^h - u^h$ become smooth, but they do not necessarily become small. Using more sweeps of NGS would eventually make the error small, and solve problem \eqref{feweakform}, but inefficiently in the sense that many sweeps would be needed, generally giving an $O(m^q)$ method for $q\gg 1$. However, NGS sweeps on a coarser mesh will see the coarse-mesh interpolant of the fine-mesh residual as less smooth, so the coarser-mesh NGS can quickly eliminate a large fraction of the error. Descending to yet coarser meshes, in a V-cycle as described in section \ref{sec:cycles}, leads to a coarsest mesh on which the error can be eliminated entirely by applying NGS at only a few interior points. (In the default settings for \texttt{fas1.py}, the coarsest mesh has two subintervals and one interior point.)
For item (iii), what is the coarse-mesh version of the problem? To derive this equation, namely to explain Brandt's FAS equation \cite{Brandt1977}, we start from the FE weak form \eqref{feweakform}. The fine-mesh solution $u^h$ is generally unknown. For an iterate $w^h$ we subtract $F^h(w^h)[v]$ from both sides to get the residual \eqref{feresidual} on the right:
\begin{equation}
F^h(u^h)[v] - F^h(w^h)[v] = r^h(w^h)[v]. \label{fasproto}
\end{equation}
It is not yet the FAS equation, but three key observations apply to equation \eqref{fasproto}:
\begin{itemize}
\item Both $w^h$ and $r^h(w^h)$ are known and/or computable.
\item If NGS sweeps have been applied to $w^h$ then $e^h=w^h-u^h$ and $r^h(w^h)$ are smooth.
\item If $F^h$ were linear in $w^h$ then we could rewrite the equation in terms of the error:
$$\qquad\qquad\qquad\qquad F^h(e^h)[v] = -r^h(w^h)[v] \qquad\qquad (\text{\emph{if $F^h$ is linear}}).$$
(One could even write the error equation using a matrix, i.e.~$A\be=-\br$.)
\end{itemize}
Based on these observations, Brandt proposed a new nonlinear equation on the coarse mesh. It is derived from \eqref{fasproto} by replacing terms using restriction operators on the computable quantities and by re-discretizing the nonlinear operator to get $F^{2h}$ acting on $\mathcal{S}^{2h}$. Because the problem is nonlinear we must store a coarse-mesh approximation to the solution, namely $u^{2h}$ in $\mathcal{S}^{2h}$, not just the error. Denoting the restriction operators by $R'$ and $R$, which are addressed in the next section, the following is the FAS equation:
\begin{equation}
F^{2h}(u^{2h})[v] - F^{2h}(R w^h)[v] = R' (r^h(w^h))[v], \label{faspreequation}
\end{equation}
for all $v$ in $\mathcal{S}^{2h}$. We can simplify the appearance by trivial rearrangement,
\begin{equation}
F^{2h}(u^{2h})[v] = \ell^{2h}[v], \label{fasequation}
\end{equation}
where
\begin{equation}
\ell^{2h}[v] = R' (r^h(w^h))[v] + F^{2h}(R w^h)[v]. \label{fasell}
\end{equation}
The key idea behind the FAS equation \eqref{fasequation}, which has the same form as the fine-mesh weak form \eqref{feweakform}, is that the smoothness of the error and residual have allowed us to accurately transfer the problem to the coarser mesh. Note that if $w^h=u^h$, that is, if $w^h$ is the exact solution to the fine-mesh problem \eqref{feweakform}, then $r^h(w^h)=0$ so $\ell^{2h}$ simplifies to $F^{2h}(R w^h)[v]$, and the solution of \eqref{fasequation} would be $u^{2h} = R w^h$ by well-posedness.
Next, in stating the two-level FAS method we will suppose \eqref{fasequation} is solved exactly, so $u^{2h}$ and the coarse-mesh error $u^{2h}-Rw^h$ are known. We will update the iterate on the finer mesh by adding a fine-mesh version of the error:
\begin{equation}
w^h \gets w^h + P(u^{2h} - R w^h) \label{fasupdate}
\end{equation}
Here $P$ is a prolongation operator (next section); it extends a function in $\mathcal{S}^{2h}$ to a function in $\mathcal{S}^h$. Supposing that the smoother and the restriction/prolongation operators $R',R,P$ are all determined, formulas \eqref{fasequation}, \eqref{fasell}, and \eqref{fasupdate} define the following in-place algorithm in which $F^h$ and $F^{2h}$ denote discretizations of $F$ on the two meshes:
\label{fastwolevel}
\begin{pseudo*}
\pr{fas-twolevel}(w^h,\ell^h,\id{down}=1,\id{up}=1)\text{:} \\+
for $j=1,\dots,$\id{down} \\+
\pr{ngssweep}(w^h,\ell^h) \\-
$\ell^{2h}[v] := R' (\ell^h-F^h(w^h))[v] + F^{2h}(R w^h)[v]$ \\
$w^{2h} = \pr{copy}(R w^h)$ \\
\pr{coarsesolve}(w^{2h},\ell^{2h}) \\
$w^h \gets w^h + P(w^{2h} - R w^h)$ \\
for $j=1,\dots,$\id{up} \\+
\pr{ngssweep-back}(w^h,\ell^h)
\end{pseudo*}
We allow smoothing before and after the coarse-mesh correction. Specifically, \texttt{down} forward NGS sweeps modify $w^h$ before the coarse-mesh correction and \texttt{up} backward sweeps after.
While it is common in linear multigrid \cite{Briggsetal2000,Bueler2021,Trottenbergetal2001} to apply a direct solver like LU decomposition as the coarse-mesh solver, our problem is nonlinear so no finite-time direct solver is available. Instead we do enough NGS sweeps to solve the coarse-mesh problem accurately:
\begin{pseudo*}
\pr{coarsesolve}(w,\ell,\id{coarse}=1)\text{:} \\+
for $j=1,\dots,$\id{coarse} \\+
\pr{ngssweep}(w,\ell)
\end{pseudo*}
In order to implement FAS we must define the action of operators $R'$, $R$, and $P$ in \eqref{fasell} and \eqref{fasupdate}, which is done next. In section \ref{sec:cycles} we will define an FAS V-cycle by replacing \textsc{coarsesolve} with the recursive application of the FAS solver itself.
\section{Restriction and prolongation operators} \label{sec:restrictionprolongation}
To explain the two different restriction operators $R'$ and $R$ in \eqref{fasequation}, plus the prolongation $P$ in \eqref{fasupdate}, first note that functions $w^h$ in $\mathcal{S}^h$ are distinct objects from linear functionals like the residual $r^h(w^h)$. Denoting such linear functionals by $(\mathcal{S}^h)'$, the three operators are already distinguished by their domain and range spaces:
\begin{align}
R' &: (\mathcal{S}^h)' \to (\mathcal{S}^{2h})', \label{rpoperators} \\
R &: \mathcal{S}^h \to \mathcal{S}^{2h}, \notag \\
P &: \mathcal{S}^{2h} \to \mathcal{S}^h. \notag
\end{align}
On the other hand, both functions in $\mathcal{S}^h$ and linear functionals in $(\mathcal{S}^h)'$ are representable by vectors in $\RR^{m-1}$. One stores a function $w^h$ via coefficients $w[p]$ with respect to an expansion in the hat function basis $\{\psi_p\}$, as in \eqref{fesolution} for example, while one stores a functional $\ell^h$ by its values $\ell^h[\psi_p]$. Though it makes sense to represent $w^h$ as a column vector and $\ell^h$ as a row vector \cite{TrefethenBau1997}, in Python one may use ``flat'' one-dimensional NumPy arrays for both purposes. For our problem an iterate $w^h$ has zero boundary values, and likewise $\ell^h$ acts on $v$ with zero boundary values, thus only interior-point hat functions are needed in these representations.
But how do $R'$, $R$, and $P$ actually operate in the finite element (FE) case? The key calculation relates the coarse-mesh hat functions $\psi_q^{2h}(x)$ to the fine mesh hats $\psi_p^h(x)$ (Figure \ref{fig:hatcombination}):
\begin{equation}
\psi_q^{2h}(x) = \frac{1}{2} \psi_{2q-1}^h(x) + \psi_{2q}^h(x) + \frac{1}{2} \psi_{2q+1}^h(x), \label{hatrelation}
\end{equation}
for $q=1,2,\dots,M-1$, where $M=m/2$ is the number of coarse-mesh elements; here we assume $m$ is even.
\begin{figure}
\includegraphics[width=0.6\textwidth]{figs/hatcombination.pdf}
\caption{Formula \eqref{hatrelation} writes a coarse-mesh hat $\psi_q^{2h}(x)$ (solid) as a linear combination of fine-mesh hats $\psi_p^h(x)$ (dotted) for $p=2q-1,2q,2q+1$.}
\label{fig:hatcombination}
\end{figure}
First consider the prolongation $P$. Because a piecewise-linear function on the coarse mesh is also a piecewise-linear function on the fine mesh, $P$ is defined as the injection of $\mathcal{S}^{2h}$ into $\mathcal{S}^h$, without changing the function. Suppose $w^{2h}(x)$ is in $\mathcal{S}^{2h}$, so $w^{2h}(x) = \sum_{q=1}^{M-1} w[q] \psi_q^{2h}(x)$. Then we use \eqref{hatrelation} to compute $P w^{2h}$ in terms of fine-mesh hat functions:
\begin{align}
(P w^{2h})(x) &= \sum_{q=1}^{M-1} w[q] \left(\frac{1}{2} \psi_{2q-1}^h(x) + \psi_{2q}^h(x) + \frac{1}{2} \psi_{2q+1}^h(x)\right) \label{pformula} \\
&= \frac{1}{2} w[1] \psi_1^h(x) + w[1] \psi_2^h(x) + \left(\frac{1}{2} w[1] + \frac{1}{2} w[2]\right) \psi_3^h(x) + w[2] \psi_4^h(x) \notag \\
&\qquad + \left(\frac{1}{2} w[2] + \frac{1}{2} w[3]\right) \psi_5^h(x) + \dots + w[M\!-\!1] \psi_{m-2}^h(x) \notag \\
&\qquad + \frac{1}{2} w[M\!-\!1] \psi_{m-1}^h(x) \notag
\end{align}
As a matrix, $P:\RR^{M-1} \to \RR^{m-1}$ acts on vectors; it has $M-1$ columns and $m-1$ rows:
\begin{equation}
P = \begin{bmatrix}
1/2 & & & \\
1 & & & \\
1/2 & 1/2 & & \\
& 1 & & \\
& 1/2 & 1/2 & \\
& & & \ddots
\end{bmatrix} \label{pmatrix}
\end{equation}
The columns of $P$ are linearly-independent and the column sums equal two by \eqref{hatrelation}. The row sums equal one except for the first and last rows.
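Acting on nodal values, $P$ is a short vectorized sketch (arrays again include the zero boundary values; names are illustrative):
\begin{cline}
def prolong(wc):
    # P: coarse nodal values (M+1 entries) to fine nodal values (2M+1)
    wf = np.zeros(2*(len(wc)-1) + 1)
    wf[0::2] = wc                        # nodes shared with the coarse mesh
    wf[1::2] = 0.5*(wc[:-1] + wc[1:])    # new nodes: average the neighbors
    return wf
\end{cline}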
Next, the restriction $R'$ acts on fine-mesh linear functionals $\ell:\mathcal{S}^h \to \RR$. It is called ``canonical restriction'' \cite{GraeserKornhuber2009} because its output, the functional $R'\ell:\mathcal{S}^{2h}\to \RR$, acts on coarse-mesh functions the same way as $\ell$ itself acts on those functions, so defining $R'$ involves no choices. We may state this using $P$:
\begin{equation}
(R'\ell)[v] = \ell[Pv], \label{rprimedefinition}
\end{equation}
for $v$ in $\mathcal{S}^{2h}$. As noted earlier, $\ell$ is represented by a vector in $\RR^{m-1}$ of the values $\ell[\psi_p^h]$, so one computes the values of $R'\ell$ using \eqref{hatrelation}:
\begin{align}
(R'\ell)[\psi_q^{2h}] &= \ell[\psi_q^{2h}] = \ell\left[\frac{1}{2} \psi_{2q-1}^h + \psi_{2q}^h + \frac{1}{2} \psi_{2q+1}^h\right] \label{rprimeformula} \\
&= \frac{1}{2} \ell[\psi_{2q-1}^h] + \ell[\psi_{2q}^h] + \frac{1}{2} \ell[\psi_{2q+1}^h]. \notag
\end{align}
As a matrix $R'$ is the matrix transpose of $P$, with $M-1$ rows and $m-1$ columns:
\begin{equation}
R' = \begin{bmatrix}
1/2 & 1 & 1/2 & & & \\
& & 1/2 & 1 & 1/2 & \\
& & & & 1/2 & \\
& & & & & \ddots
\end{bmatrix} \label{rprimematrix}
\end{equation}
Finally we consider the restriction $R:\mathcal{S}^h\to\mathcal{S}^{2h}$ acting on functions, a more interesting map because it loses information. (By contrast, $P$ and $R'$ essentially preserve the input object, without loss, via reinterpretation on the output mesh.) Consider a fine-mesh function $w^h = \sum_{p=1}^{m-1} w[p] \psi_p^{h}$. The result $R w^h$ is linear across those fine-mesh nodes which are not in the coarse mesh, and so the values at those in-between nodes are not recoverable.
There are three well-known versions of the restriction $R$:
\begin{itemize}
\item $\Rpr$ is defined as projection, by the property
\begin{equation}
\ip{\Rpr w^h}{v} = \ip{w^h}{v} \label{rprdefinition}
\end{equation}
for all $v\in \mathcal{S}^{2h}$. Computing the entries of $\Rpr$ requires solving a linear system. To show this system we define the invertible, sparse, symmetric mass matrices \cite{Elmanetal2014}, namely $Q_{jk}^{h} = \ip{\psi_j^{h}}{\psi_k^{h}}$ for the fine mesh and $Q_{jk}^{2h} = \ip{\psi_j^{2h}}{\psi_k^{2h}}$ for the coarse. Then one solves a matrix equation for $\Rpr$:
\begin{equation}
Q^{2h} \Rpr = R' Q^{h}, \label{rprequation}
\end{equation}
or equivalently $\Rpr = (Q^{2h})^{-1} R' Q^{h}$. Equation \eqref{rprequation} is justified by using $v=\psi_s^{2h}$ in definition \eqref{rprdefinition}, and then applying \eqref{hatrelation}, as follows. Write $z = \Rpr w^h = \sum_{q=1}^{M-1} z[q] \psi_q^{2h}$ and expand both sides:
\begin{align*}
\ip{z}{\psi_s^{2h}} &= \ip{w^h}{\psi_s^{2h}} \\
\sum_{q=1}^{M-1} z[q] \ip{\psi_q^{2h}}{\psi_s^{2h}} &= \sum_{p=1}^{m-1} w[p] \ip{\psi_p^{h}}{\frac{1}{2} \psi_{2s-1}^{h} + \psi_{2s}^{h} + \frac{1}{2} \psi_{2s+1}^{h}} \\
\sum_{q=1}^{M-1} Q_{sq}^{2h} z[q] &= \sum_{p=1}^{m-1} \left(\frac{1}{2} Q_{2s-1,p}^h + Q_{2s,p}^h + \frac{1}{2} Q_{2s+1,p}^h\right) w[p] \\
(Q^{2h} \Rpr w^h)[s] &= (R' Q^h w^h)[s]
\end{align*}
(Note $w^h$ in $\mathcal{S}^h$ and index $s$ are arbitrary.) In 1D the mass matrices $Q^{2h},Q^h$ are tridiagonal, thus each column of $\Rpr$ can be found by solving equation \eqref{rprequation} using an $O(M)$ algorithm \cite{TrefethenBau1997}, implying $O(M^2)$ work. While this is possible, and the result could even be found by hand in this case, the alternatives below are easier to implement.
\item $\Rin$ is defined as pointwise injection. Supposing $w^h = \sum_{p=1}^{m-1} w[p] \psi_p^{h}$,
\begin{equation}
\Rin w^h = \sum_{q=1}^{M-1} w[2q] \psi_q^{2h}, \label{rindefinition}
\end{equation}
so $(\Rin w^h)(x_q) = w^h(x_q) = w[2q]$ for each point $x_q$. In other words, to compute $\Rin w^h$ we simply drop the nodal values at those fine-mesh nodes which are not in the coarse mesh. As a matrix this is
\begin{equation}
\Rin = \begin{bmatrix}
0 & 1 & & & & &\\
& & 0 & 1 & & & \\
& & & & 0 & 1 & \\
& & & & & & \ddots
\end{bmatrix}. \label{rinmatrix}
\end{equation}
This restriction is very simple but it may lose track of the magnitude of $w^h$, or badly mis-represent it, \emph{if} the input is not smooth. For example, sampling a sawtooth function at the coarse-mesh nodes would capture only the peaks or only the troughs.
\item $\Rfw$, the ``full-weighting'' restriction \cite{Briggsetal2000}, averages nodal values onto the coarse mesh:
\begin{equation}
\Rfw w^h = \sum_{q=1}^{M-1} \left(\frac{1}{4} w[2q-1] + \frac{1}{2} w[2q] + \frac{1}{4} w[2q+1]\right) \psi_q^{2h}. \label{rfwdefinition}
\end{equation}
This computes each coarse-mesh nodal value of $z=\Rfw w^h$ as a weighted average of the value of $w^h$ at the three closest fine-mesh nodes. The matrix is thus a multiple of the canonical restriction matrix in \eqref{rprimematrix}:
\begin{equation}
\Rfw = \begin{bmatrix}
1/4 & 1/2 & 1/4 & & & \\
& & 1/4 & 1/2 & 1/4 & \\
& & & & 1/4 & \\
& & & & & \ddots
\end{bmatrix} = \frac{1}{2} R'. \label{rfwmatrix}
\end{equation}
\end{itemize}
\medskip
Which restriction do we choose? Because of their simplicity, we will implement and compare $\Rfw$ and $\Rin$ in \texttt{fas1.py}.
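Both choices are one-liners on nodal values; the following sketch implements \eqref{rindefinition} and \eqref{rfwdefinition} under the array conventions used above:
\begin{cline}
def restrict_inj(wf):
    # R_in: drop the values at the fine-only nodes
    return wf[0::2].copy()

def restrict_fw(wf):
    # R_fw: weighted average of the three nearest fine-mesh values
    wc = wf[0::2].copy()
    wc[1:-1] = 0.25*wf[1:-2:2] + 0.5*wf[2:-1:2] + 0.25*wf[3::2]
    return wc
\end{cline}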
\section{Cycles} \label{sec:cycles}
The main principles of the FAS scheme are already contained in the \textsc{fas-twolevel} algorithm in section \ref{sec:fastwolevel}, from which it is a small step to solve the coarse-mesh problem by the same scheme, creating a so-called ``V-cycle''. To define this precisely we need an indexed hierarchy of mesh levels. Start with a coarsest mesh with $m_0$ elements of length $h_0=1/m_0$. (By default in \texttt{fas1.py} we have $m_0=2$.) For $k=1,\dots,K$ we refine by factors of two so that the $k$th mesh has $m_k=2^k m_0$ elements of length $h_k=h_0/2^k$. The final $K$th mesh is now called the ``fine mesh''. Instead of the superscripts $h$ and $2h$ used in section \ref{sec:fastwolevel}, now we use a ``$k$'' superscript to indicate the mesh on which a quantity lives.
On this hierarchy an FAS V-cycle is the following in-place recursive algorithm:
\begin{pseudo*}
\pr{fas-vcycle}(k,w^k,\ell^k,\id{down}=1,\id{up}=1)\text{:} \\+
if $k=0$ \\+
\pr{coarsesolve}(w^0,\ell^0) \\-
else \\+
for $j=1,\dots,$\id{down} \\+
\pr{ngssweep}(w^k,\ell^k) \\-
$w^{k-1} = \pr{copy}(R w^k)$ \\
$\ell^{k-1}[v] := R' (\ell^k-F^k(w^k))[v] + F^{k-1}(R w^k)[v]$ \\
\pr{fas-vcycle}(k-1,w^{k-1},\ell^{k-1}) \\
$w^k \gets w^k + P(w^{k-1} - R w^k)$ \\
for $j=1,\dots,$\id{up} \\+
\pr{ngssweep-back}(w^k,\ell^k) \\-
\end{pseudo*}
Observe that the meaning of ``$\ell^k$'' depends on the mesh level. On the fine level it is $\ell^K[v] = \ip{g}{v}$, as in \eqref{ferhs}, but on coarser levels it is determined by the nontrivial FAS formula \eqref{fasell}. Also note that \textsc{fas-vcycle} does in-place modification of the coarse-mesh iterate $w^{k-1}$. A V-cycle with $K=3$ is shown in Figure \ref{fig:cycles}.
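Assembled from the sketches in the preceding sections, the V-cycle is a short recursion. The following illustration uses full-weighting restriction and tracks the mesh size explicitly; \texttt{cycles.py} in the repository organizes this differently:
\begin{cline}
def coarsesolve(w, ell, h, lam, coarse=1):
    for _ in range(coarse):
        ngssweep(w, ell, h, lam)

def fas_vcycle(k, w, ell, h, lam, down=1, up=1):
    # in-place FAS V-cycle on mesh level k (mesh size h)
    if k == 0:
        coarsesolve(w, ell, h, lam)
        return
    for _ in range(down):
        ngssweep(w, ell, h, lam)
    Rw = restrict_fw(w)
    ellc = restrict_canonical(ell - FF(w, h, lam)) + FF(Rw, 2.0*h, lam)
    wc = Rw.copy()
    fas_vcycle(k-1, wc, ellc, 2.0*h, lam, down, up)
    w += prolong(wc - Rw)
    for _ in range(up):
        ngssweep(w, ell, h, lam, forward=False)
\end{cline}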
\begin{figure}
\input{tikz/cycles.tex}
\caption{An FAS V-cycle (left) and F-cycle (right) on a mesh hierarchy with four levels ($K=3$). Solid dots are \texttt{down} sweeps of NGS, open circles are \texttt{up} sweeps, and squares are \textsc{coarsesolve}. Thick grey edges show $\hat P$.}
\label{fig:cycles}
\end{figure}
V-cycles can be iterated to solve problem \eqref{feweakform} to desired accuracy. We put this in a pseudocode for clarity:
\begin{pseudo*}
\pr{fas-solver}(w^K,\id{rtol}=10^{-4},\id{cyclemax}=100)\text{:} \\+
$\ell^K[v] = \ip{g}{v}$ \\
$r_0 = \|\ell^K - F^K(w^K)\|$ \\
for $s=1,\dots,\id{cyclemax}$ \\+
\pr{fas-vcycle}(K,w^K,\ell^K) \\
if $\|\ell^K-F^K(w^K)\| < \id{rtol}\,r_0$ \\+
break \\--
return $w^K$
\end{pseudo*}
Our Python code \texttt{fas1.py} implements \pr{fas-vcycle} and \pr{fas-solver}, and options \texttt{-rtol}, \texttt{-cyclemax} override the defaults for the latter. As is easily seen by experimentation, and as we will demonstrate in the next two sections, 7 to 12 V-cycles, using the default settings in \textsc{fas-vcycle} including \texttt{down} $=1$ and \texttt{up} $=1$ smoother applications, make a very effective solver on any mesh.
But we can add a different multilevel idea to get a new kind of cycle. It is based on the observation that an iterative equation solver, linear or nonlinear, often depends critically on the quality of its initial iterate. Indeed, choosing initial iterate $w^K=0$ and calling \textsc{fas-solver} may not yield a convergent method. However, one finds in practice that coarse meshes are more forgiving with respect to the initial iterate than are finer meshes. Now the new idea is to start on the coarsest mesh in the hierarchy, where a blind guess like $w^0=0$ is most likely to succeed, and then work upward through the levels. At each mesh level one computes an initial iterate by prolongation of a nearly-converged iterate on the previous level, and then one does a V-cycle. At the finest mesh level we may do repeated V-cycles.
The resulting algorithm is called an FAS multigrid ``F-cycle'' because the pattern in Figure \ref{fig:cycles} (right) looks vaguely like an ``F'' on its back; it is the following algorithm:
\begin{pseudo*}
\pr{fas-fcycle}(K,\id{down}=1,\id{up}=1)\text{:} \\+
$w^0 = 0$ \\
$\ell^0[v] = \ip{g}{v}$ \\
\pr{coarsesolve}(w^0,\ell^0) \\
for $k=1,\dots,K$ \\+
$w^k = \hat P w^{k-1}$ \\
$\ell^k[v] = \ip{g}{v}$ \\
\pr{fas-vcycle}(k,w^k,\ell^k) \\-
return $w^K$
\end{pseudo*}
Note that parameters \id{down} and \id{up} are passed into the V-cycle.
This algorithm is also called a ``full multigrid'' (FMG) cycle \cite{BrandtLivne2011,Briggsetal2000}, but the meaning of ``full'' is fundamentally different in FAS versus FMG terminology. One may run \pr{fas-fcycle} to generate the initial iterate for \pr{fas-solver}, but as we will see in section \ref{sec:performance}, the result of one F-cycle is already a very good solution.
It is important to avoid introducing high frequencies as one generates the first iterate on the finer mesh. Thus a coarse-mesh solution is prolonged onto the next level by a possibly-different operator:
\begin{equation}
w^k = \hat P w^{k-1} \label{enhancedprolongation}
\end{equation}
It is common for a better interpolation scheme to be used for $\hat P$ than for $P$ \cite{Trottenbergetal2001}. Our choice for $\hat P$ first applies $P$ to generate a fine-mesh function, then sweeps once through the \emph{new} fine-mesh nodes, applying NGS there without altering values at the nodes already present in the coarse mesh. This $\hat P$ is half of a smoother, and it is counted as such; see section \ref{sec:performance}.
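A sketch of this enhanced prolongation, reusing the pointwise NGS step from section \ref{sec:ngs} at only the odd-indexed (new) nodes:
\begin{cline}
def prolong_enhanced(wc, ell, h, lam, niters=2):
    # P-hat: interpolate with P, then NGS only at the new fine-mesh nodes
    w = prolong(wc)
    for p in range(1, len(w)-1, 2):
        c = 0.0
        for _ in range(niters):
            phi = ell[p] - (2.0*(w[p]+c) - w[p-1] - w[p+1])/h \
                  + h*lam*np.exp(w[p]+c)
            c -= phi/(-2.0/h + h*lam*np.exp(w[p]+c))
        w[p] += c
    return w
\end{cline}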
\section{Convergence} \label{sec:convergence}
The Python program \texttt{fas1.py} accompanying this note applies \pr{fas-solver} by default, with zero initial iterate, to solve equation \eqref{liouvillebratu}. The program depends only on the widely-available NumPy library \cite{Harrisetal2020}. It imports local modules \texttt{meshlevel.py}, \texttt{problems.py}, and \texttt{cycles.py} from the same directory.
To get started, clone the Git repository and run the program:
\begin{cline}
$ git clone https://github.com/bueler/mg-glaciers.git
$ cd mg-glaciers/fas/py/
$ ./fas1.py
m=8 mesh, 6 V(1,1) cycles (19.50 WU): |u|_2=0.102443
\end{cline}
%$
Various allowed options to \texttt{fas1.py} are shown by usage help:\footnote{Also, a small suite of software (regression) tests of \texttt{fas1.py} is run with \,\texttt{make test}.}
\begin{cline}
$ ./fas1.py -h
\end{cline}
%$
For example, choosing a mesh with $m=2^{K+1}=16$ elements and a problem with known exact solution (section \ref{sec:intro}), yields Figure \ref{fig:show}:
\begin{cline}
$ ./fas1.py -K 3 -mms -show
m=16 mesh, 6 V(1,1) cycles (21.75 WU): ... |u-u_ex|_2=2.1315e-02
\end{cline}
%$
The V-cycles in this run, exactly as shown in Figure \ref{fig:cycles}, are reported as ``\texttt{V(1,1)}'' because the defaults correspond to \texttt{down} $=1$ and \texttt{up} $=1$ NGS sweeps on each level. Note that runs with option \texttt{-mms} report the final numerical error $\|w^h-u\|_2$.
\begin{figure}
\includegraphics[width=0.8\textwidth]{figs/show.pdf}
\caption{Results from a \texttt{-mms} run of \texttt{fas1.py} on $m=16$ elements.}
\label{fig:show}
\end{figure}
By using the \texttt{-mms} case we can demonstrate convergence of our implemented FE method, and thereby verify \texttt{fas1.py}. The numerical errors from runs with 12 V-cycles, i.e.~with options \texttt{-rtol 0 -cyclemax 12}, and $K=3,4,\dots,14$, corresponding to $16\le m \le 32768$ elements, are shown in Figure \ref{fig:converge}. Because our problem is so simple, with a very smooth solution, convergence occurs at exactly the expected rate $O(h^2)$ \cite{Elmanetal2014}.
However, if instead of a small, fixed number of V-cycles we try a large number of NGS sweeps, e.g.~we apply the algorithm below with \texttt{-rtol 0 -cyclemax 10000} and zero initial iterate, then the results are disappointing.
\begin{pseudo*}
\pr{ngsonly}(w^K,\id{rtol}=10^{-4},\id{cyclemax}=100)\text{:} \\+
$\ell^K[v] = \ip{g}{v}$ \\
$r_0 = \|\ell^K - F^K(w^K)\|$ \\
for $s=1,\dots,\id{cyclemax}$ \\+
\pr{ngssweep}(w^K,\ell^K) \\
if $\|\ell^K-F^K(w^K)\| < \id{rtol}\,r_0$ \\+
break \\--
return $w^K$
\end{pseudo*}
As shown in Figure \ref{fig:converge}, such runs generate convergence to discretization error only on meshes with $m=16,32,64,128$. For slightly finer meshes ($m=256,512$) the same number of sweeps is no longer sufficient, and continuing to yet finer meshes using the same number of sweeps would make essentially no progress (not shown). The reason for this behavior is that almost all of the algebraic error (section \ref{sec:femethod}) is in low-frequency modes which the NGS sweeps are barely able to reduce. This is the situation which multigrid schemes are designed to address \cite{BrandtLivne2011,Briggsetal2000}: by moving the problem between meshes the same smoother will efficiently reduce all frequencies present in the error. Both the smoother and the coarse-level solver components of our FAS algorithms consist entirely of NGS sweeps, but by adding a multilevel mesh infrastructure we have arranged that the sweeps are always making progress.
\begin{figure}
\includegraphics[width=0.7\textwidth]{figs/converge.pdf}
\caption{For a fixed number of V-cycles the numerical error $\|u-u_{\text{ex}}\|_2$ converges to zero at the expected rate $O(h^2)$. Even $10^4$ NGS sweeps fail to converge at higher resolutions.}
\label{fig:converge}
\end{figure}
\section{Performance} \label{sec:performance}
Having verified our method, our first performance test compares three solver algorithms:
\begin{itemize}
\item \textsc{fas-fcycle}, defined in section \ref{sec:cycles}.
\item \textsc{fas-solver}, which does V-cycles, defined in section \ref{sec:cycles}.
\item \textsc{ngsonly}, defined in section \ref{sec:convergence}.
\end{itemize}
The two FAS algorithms actually represent many different algorithms according to the different options. While making no attempt to systematically explore the parameter space, we observe that 7 to 12 V(1,1) cycles suffice to approach discretization error in the \texttt{-mms} problem. For F-cycles we must choose how many V-cycles to take once the finest level is reached, and 2 or 3 certainly suffice. Experimentation in minimizing the work units (below), while maintaining convergence, yields a choice of three V(1,0) cycles.
The three chosen algorithms become the following specific \texttt{fas1.py} options on meshes with $m=2^{K+1}$ elements for $K=3,4,\dots,17,18$:
\medskip
\begin{tabular}{ll}
\textsf{F-cycle$+$3$\times$V(1,0)} \,: &\texttt{-mms -fcycle -rtol 0 -cyclemax 4 -up 0 -K }$K$ \\
\textsf{12 V(1,1) cycles} \,: &\texttt{-mms -rtol 0 -cyclemax 12 -K }$K$ \\
\textsf{NGS sweeps} \,: &\texttt{-mms -rtol 0 -cyclemax $Z$ -ngsonly -K }$K$
\end{tabular}
\medskip
In order to achieve convergence for NGS sweeps alone, we must choose rapidly increasing $Z$ as $K$ increases. For the comparison below we simply double $Z$ until the reported numerical error is within a factor of two of discretization error (as reported by the FAS algorithms), but at $K=7$ the time is 100 seconds and we stop testing.
The results for run time on the author's laptop are in Figure \ref{fig:optimal}. For all the coarser meshes, e.g.~$m=16,\dots,256$, the FAS algorithms run in about 0.3 seconds. This is the minimum time to start and run any Python program on this machine, so the actual computational time is not really detected. For $m \ge 10^3$ both FAS algorithms enter into a regime where the run time is greater than one second, and then it becomes proportional to $m$. That is, their solver complexity is $O(m^1)$. These are \emph{optimal} solvers \cite[Chapter 7]{Bueler2021}.
By contrast, the \pr{ngsonly} algorithm is far from optimal, and not capable of solving on fine meshes. Fitting the three finest-mesh completed cases suggests its time is $O(m^{3.5})$.
\begin{figure}
\includegraphics[width=0.7\textwidth]{figs/optimal.pdf}
\caption{Run time to reach discretization error is optimal $O(m)$ for both V-cycles and F-cycles. Run time explodes for NGS sweeps.}
\label{fig:optimal}
\end{figure}
A standard way to compare multigrid-type solver algorithms uses the concept of a \emph{work unit} (WU). One WU is the number of operations needed to do one smoother sweep on the finest mesh, which takes $O(m)$ arithmetic (floating point) operations. For WUs in a 1D multilevel scheme, note that a smoother sweep on the second-finest mesh is $\frac{1}{2}$WU, and so on downward in the hierarchy, so the total WU for a multigrid algorithm is a finite geometric sum \cite{Briggsetal2000} which depends on the number of levels $K$. For simplicity we do not count the arithmetic work in restriction and prolongation, other than in the enhanced prolongation $\hat P$ in \eqref{enhancedprolongation}, which uses $\frac{1}{2}$WU when passing to the finest mesh. Also we ignore non-arithmetic work entirely, for example vector copies.
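For example, one V(1,1) cycle smooths twice on each of the $K+1$ levels, and a sweep on level $k$ costs $2^{k-K}$\,WU, so
\begin{equation*}
\text{WU}\big(\text{V(1,1)}\big) = 2 \sum_{k=0}^{K} 2^{k-K} = 2\left(2 - 2^{-K}\right) < 4.
\end{equation*}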
Consider the $K\to\infty$ limit of WU calculations for the three algorithms above:
\begin{align*}
\text{WU}\big(\text{\textsf{F-cycle$+Z\times$V(1,0)}}\big) &\approx 3+2Z \\
\text{WU}\big(\text{\textsf{$Z$ V(1,1) cycles}}\big) &\approx 4Z \\
\text{WU}\big(\text{\textsf{$Z$ NGS sweeps}}\big) &= Z
\end{align*}
(Note that counting work units for NGS sweeps is trivial.) To confirm this we have added WU counting to \texttt{fas1.py}. On a $K=10$ mesh with $m=2^{11}=2048$ elements, for example, we observe that \textsf{F-cycle$+$3$\times$V(1,0)} requires a measured 8.98 WU while \textsf{12 V(1,1) cycles} uses 47.96 WU.
In fact a single F-cycle, without any additional V-cycles, nearly reaches discretization error. Consider three single-F-cycle schemes. The first is ``F(1,1)'', which uses the default settings \id{down}=1 and \id{up}=1. The other two are ``F(1,0)'', using \id{up}=0, and ``F(1,0)+$\Rin$'', which changes from the default full-weighting restriction ($\Rfw$) to injection ($\Rin$). These three solvers use 9, 5, and 5 WU, respectively, in the $K\to\infty$ limit of many levels.
Figure \ref{fig:tme} shows that on $K=7,\dots,18$ meshes, with up to $m=2^{19} \approx 5 \times 10^5$ elements, the measured numerical error is within a factor of two of discretization error. Note that the F(1,0) cycles actually generate smaller errors, and there is no significant difference between the two restriction methods. On the finest mesh it seems the discretization error itself, of order $10^{-11}$, was corrupted by rounding errors, and so all the measured numerical errors are closer together. Noting that some multigrid authors \cite[for example]{BrownSmithAhmadia2013} say a solver exhibits ``textbook multigrid efficiency'' if fewer than 10 WU are needed to achieve discretization error, we conclude that our F-cycles exhibit textbook multigrid efficiency.
\begin{figure}
\includegraphics[width=0.7\textwidth]{figs/tme.pdf}
\caption{Computed numerical error, relative to discretization error, from three versions of a single F-cycle.}
\label{fig:tme}
\end{figure}
\section{Extensions} \label{sec:extensions}
Our program \texttt{fas1.py} is deliberately basic in many senses. Here are three possible extensions which the reader might want to implement:
\renewcommand{\labelenumi}{\textbf{\Roman{enumi}.}}
\begin{enumerate}
\item The default value of the parameter $\lambda$ in \eqref{liouvillebratu} is \texttt{-lam 1.0}, but one can check that the $g=0$ problem becomes unstable at a critical value $\lambda_c \approx 3.5$. Interestingly, the solution changes very little as $\lambda \nearrow \lambda_c$; things are boring until failure occurs. (The most-common numerical symptom is overflow of $e^u$.) Equation \eqref{liouvillebratu} is a very simple model for combustion of a chemical mixture, and this instability corresponds to a chemical explosion \cite{FrankKameneckij1955}. However, finding $\lambda_c$ precisely is not easy because \texttt{fas1.py} always initializes at the distant location $w^0=0$. The behavior of FAS F-cycles is especially nontrivial near the critical $\lambda$ because the critical value is different on coarse grids. (And apparently sometimes smaller!) A better strategy for solutions near the critical value, and for parameter studies generally, is ``continuation''. For example, one might use a saved fine-mesh solution as the initial value in a run with a slightly-different $\lambda$ value. The new run would then only need a few V-cycles.
\item Equation \eqref{liouvillebratu} is a ``semilinear'' ODE because its nonlinearity occurs in the zeroth-derivative term \cite{Evans2010}. One might instead solve a ``quasilinear'' equation where the nonlinearity is in the coefficient to the highest-order derivative. For example, one might try a $p$-Laplacian \cite{Evans2010} extension to the Liouville-Bratu equation:
\begin{equation}
-\left(|u'|^{p-2} u'\right)' - \lambda e^u = g. \label{pbratu}
\end{equation}
This equation is the same as \eqref{liouvillebratu} when $p=2$, but for other values of $p$ in $(1,\infty)$ the solution is less well-behaved because the coefficient of $u''$ can degenerate or explode. However, a literature exists at least for the corresponding Poisson problem with $\lambda=0$ \cite{BarrettLiu1993,Bueler2021}. A basic technique is to regularize the leading coefficient with a numerical parameter $\eps>0$: replace $|u'|^{p-2}$ with $\left(|u'|^2+\eps\right)^{(p-2)/2}$. With such a change, continuation (item \textbf{II}) will be both important and more complicated.
\item The most significant extension of \texttt{fas1.py} would be to ``merely'' change from 1D to 2D or 3D. That is, to change from solving ODEs to solving elliptic PDEs like $-\grad^2 u - \lambda e^u=g$, where $\grad^2$ is the Laplacian operator. However, doing this in the style of \texttt{fas1.py}, using only NumPy vectors for infrastructure, is not recommended. Instead, it would be wise to apply an FE library like Firedrake \cite{Rathgeberetal2016} or Fenics \cite{Loggetal2012}, on top of an advanced solver library like PETSc \cite{Balayetal2021,Bueler2021}. Such libraries involve a substantial learning curve, and their support for FAS multigrid methods is incomplete, but they allow experimentation with higher-order FE spaces and many other benefits.
\end{enumerate}
\section{Conclusion} \label{sec:conclusion}
Regarding the performance of the solvers in the last few sections, we summarize as follows:
\begin{quotation}
\emph{On any mesh of $m$ elements, problem \eqref{feweakform} can be solved nearly to the discretization error of our piecewise-linear FE method by using a single FAS F-cycle, or a few FAS V-cycles, and the work of these methods is $O(m)$ with a small constant; they are optimal. The faster F-cycle gives textbook multigrid efficiency. These facts hold for all $m$ up until rounding errors overwhelm the discretization at around $m=10^6$. By contrast, single-level NGS requires rapidly-increasing numbers of sweeps because the work scales as $O(m^q)$ for $q\gg 1$. For example, more than $10^3$ sweeps are required if $m>10^2$, and if $m>10^3$ then discretization error cannot be achieved by single-level NGS sweeps in reasonable time.}
\end{quotation}
\small
\bigskip
\bibliography{fas}
\bibliographystyle{siam}
\end{document}
#ConventionalApp: Develop conventional applications in Julia
#-------------------------------------------------------------------------------
module ConventionalApp
using Pkg
#using LibGit2
include("base.jl")
include("filegen.jl")
#include("install.jl")
export Project, setup_env, activeproject, @include_startup
end #ConventionalApp
(*In 8.5pl1 (088b316):*)
Variables P Q : nat -> Prop.
Variable f : nat -> nat.
Goal forall (x:nat), (forall y, P y -> forall z, Q z -> y=f z -> False) -> False. intros.
ecase H with (3:=eq_refl).
(*
produces "Anomaly: Evar ?X10 was not declared. Please report." *)
Abort.
(*
Another variation:
*)
Goal forall (x:nat), (forall y, y=x -> False) -> False.
intros.
unshelve ecase H with (1:=eq_refl).
Qed.
(*
produces 2 unsolved goals;
2 subgoals
x : nat
H : forall y : nat, y = x -> False
______________________________________(1/2)
Type
______________________________________(2/2)
?A
apparently from eq_refl, even though it must have had its type inferred to @eq nat y x. *)
/-
Copyright (c) 2020 Bhavik Mehta. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Bhavik Mehta, E. W. Ayers
-/
import Mathlib.PrePort
import Mathlib.Lean3Lib.init.default
import Mathlib.category_theory.over
import Mathlib.category_theory.limits.shapes.finite_limits
import Mathlib.category_theory.yoneda
import Mathlib.order.complete_lattice
import Mathlib.data.set.lattice
import Mathlib.PostPort
universes v u l
namespace Mathlib
/-!
# Theory of sieves
- For an object `X` of a category `C`, a `sieve X` is a set of morphisms to `X`
which is closed under left-composition.
- The complete lattice structure on sieves is given, as well as the Galois insertion
given by downward-closing.
- A `sieve X` (functorially) induces a presheaf on `C` together with a monomorphism to
the yoneda embedding of `X`.
## Tags
sieve, pullback
-/
namespace category_theory
/-- A set of arrows all with codomain `X`. -/
def presieve {C : Type u} [category C] (X : C) :=
{Y : C} → set (Y ⟶ X)
namespace presieve
protected instance inhabited {C : Type u} [category C] {X : C} : Inhabited (presieve X) :=
{ default := ⊤ }
/--
Given a set of arrows `S` all with codomain `X`, and a set of arrows with codomain `Y` for each
`f : Y ⟶ X` in `S`, produce a set of arrows with codomain `X`:
`{ g ≫ f | (f : Y ⟶ X) ∈ S, (g : Z ⟶ Y) ∈ R f }`.
-/
def bind {C : Type u} [category C] {X : C} (S : presieve X) (R : {Y : C} → {f : Y ⟶ X} → S f → presieve Y) : presieve X :=
fun (Z : C) (h : Z ⟶ X) => ∃ (Y : C), ∃ (g : Z ⟶ Y), ∃ (f : Y ⟶ X), ∃ (H : S f), R H g ∧ g ≫ f = h
@[simp] theorem bind_comp {C : Type u} [category C] {X : C} {Y : C} {Z : C} (f : Y ⟶ X) {S : presieve X} {R : {Y : C} → {f : Y ⟶ X} → S f → presieve Y} {g : Z ⟶ Y} (h₁ : S f) (h₂ : R h₁ g) : bind S R (g ≫ f) :=
Exists.intro Y (Exists.intro g (Exists.intro f (Exists.intro h₁ { left := h₂, right := rfl })))
/-- The singleton presieve. -/
-- Note we can't make this into `has_singleton` because of the out-param.
structure singleton {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) : presieve X
where
@[simp] theorem singleton_eq_iff_domain {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) (g : Y ⟶ X) : singleton f g ↔ f = g := sorry
theorem singleton_self {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) : singleton f f :=
singleton.mk
end presieve
/--
For an object `X` of a category `C`, a `sieve X` is a set of morphisms to `X` which is closed under
left-composition.
-/
structure sieve {C : Type u} [category C] (X : C)
where
arrows : presieve X
downward_closed' : ∀ {Y Z : C} {f : Y ⟶ X}, arrows f → ∀ (g : Z ⟶ Y), arrows (g ≫ f)
namespace sieve
protected instance has_coe_to_fun {C : Type u} [category C] {X : C} : has_coe_to_fun (sieve X) :=
has_coe_to_fun.mk (fun (x : sieve X) => presieve X) arrows
@[simp] theorem downward_closed {C : Type u} [category C] {X : C} {Y : C} {Z : C} (S : sieve X) {f : Y ⟶ X} (hf : coe_fn S Y f) (g : Z ⟶ Y) : coe_fn S Z (g ≫ f) :=
downward_closed' S hf g
theorem arrows_ext {C : Type u} [category C] {X : C} {R : sieve X} {S : sieve X} : arrows R = arrows S → R = S := sorry
protected theorem ext {C : Type u} [category C] {X : C} {R : sieve X} {S : sieve X} (h : ∀ {Y : C} (f : Y ⟶ X), coe_fn R Y f ↔ coe_fn S Y f) : R = S :=
arrows_ext (funext fun (x : C) => funext fun (f : x ⟶ X) => propext (h f))
protected theorem ext_iff {C : Type u} [category C] {X : C} {R : sieve X} {S : sieve X} : R = S ↔ ∀ {Y : C} (f : Y ⟶ X), coe_fn R Y f ↔ coe_fn S Y f :=
{ mp := fun (h : R = S) (Y : C) (f : Y ⟶ X) => h ▸ iff.rfl, mpr := sieve.ext }
/-- The supremum of a collection of sieves: the union of them all. -/
protected def Sup {C : Type u} [category C] {X : C} (𝒮 : set (sieve X)) : sieve X :=
mk (fun (Y : C) => set_of fun (f : Y ⟶ X) => ∃ (S : sieve X), ∃ (H : S ∈ 𝒮), arrows S f) sorry
/-- The infimum of a collection of sieves: the intersection of them all. -/
protected def Inf {C : Type u} [category C] {X : C} (𝒮 : set (sieve X)) : sieve X :=
mk (fun (Y : C) => set_of fun (f : Y ⟶ X) => ∀ (S : sieve X), S ∈ 𝒮 → arrows S f) sorry
/-- The union of two sieves is a sieve. -/
protected def union {C : Type u} [category C] {X : C} (S : sieve X) (R : sieve X) : sieve X :=
mk (fun (Y : C) (f : Y ⟶ X) => coe_fn S Y f ∨ coe_fn R Y f) sorry
/-- The intersection of two sieves is a sieve. -/
protected def inter {C : Type u} [category C] {X : C} (S : sieve X) (R : sieve X) : sieve X :=
mk (fun (Y : C) (f : Y ⟶ X) => coe_fn S Y f ∧ coe_fn R Y f) sorry
/--
Sieves on an object `X` form a complete lattice.
We generate this directly rather than using the galois insertion for nicer definitional properties.
-/
protected instance complete_lattice {C : Type u} [category C] {X : C} : complete_lattice (sieve X) :=
complete_lattice.mk sieve.union (fun (S R : sieve X) => ∀ {Y : C} (f : Y ⟶ X), coe_fn S Y f → coe_fn R Y f)
(bounded_lattice.lt._default fun (S R : sieve X) => ∀ {Y : C} (f : Y ⟶ X), coe_fn S Y f → coe_fn R Y f) sorry sorry
sorry sorry sorry sorry sieve.inter sorry sorry sorry (mk (fun (_x : C) => set.univ) sorry) sorry
(mk (fun (_x : C) => ∅) sorry) sorry sieve.Sup sieve.Inf sorry sorry sorry sorry
/-- The maximal sieve always exists. -/
protected instance sieve_inhabited {C : Type u} [category C] {X : C} : Inhabited (sieve X) :=
{ default := ⊤ }
@[simp] theorem Inf_apply {C : Type u} [category C] {X : C} {Ss : set (sieve X)} {Y : C} (f : Y ⟶ X) : coe_fn (Inf Ss) Y f ↔ ∀ (S : sieve X), S ∈ Ss → coe_fn S Y f :=
iff.rfl
@[simp] theorem Sup_apply {C : Type u} [category C] {X : C} {Ss : set (sieve X)} {Y : C} (f : Y ⟶ X) : coe_fn (Sup Ss) Y f ↔ ∃ (S : sieve X), ∃ (H : S ∈ Ss), coe_fn S Y f :=
iff.rfl
@[simp] theorem inter_apply {C : Type u} [category C] {X : C} {R : sieve X} {S : sieve X} {Y : C} (f : Y ⟶ X) : coe_fn (R ⊓ S) Y f ↔ coe_fn R Y f ∧ coe_fn S Y f :=
iff.rfl
@[simp] theorem union_apply {C : Type u} [category C] {X : C} {R : sieve X} {S : sieve X} {Y : C} (f : Y ⟶ X) : coe_fn (R ⊔ S) Y f ↔ coe_fn R Y f ∨ coe_fn S Y f :=
iff.rfl
@[simp] theorem top_apply {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) : coe_fn ⊤ Y f :=
trivial
/-- Generate the smallest sieve containing the given set of arrows. -/
@[simp] theorem generate_apply {C : Type u} [category C] {X : C} (R : presieve X) (Z : C) (f : Z ⟶ X) : coe_fn (generate R) Z f = ∃ (Y : C), ∃ (h : Z ⟶ Y), ∃ (g : Y ⟶ X), R g ∧ h ≫ g = f :=
Eq.refl (coe_fn (generate R) Z f)
/--
Given a presieve on `X`, and a sieve on each domain of an arrow in the presieve, we can bind to
produce a sieve on `X`.
-/
@[simp] theorem bind_apply {C : Type u} [category C] {X : C} (S : presieve X) (R : {Y : C} → {f : Y ⟶ X} → S f → sieve Y) : ⇑(bind S R) = presieve.bind S fun (Y : C) (f : Y ⟶ X) (h : S f) => ⇑(R h) :=
Eq.refl ⇑(bind S R)
theorem sets_iff_generate {C : Type u} [category C] {X : C} (R : presieve X) (S : sieve X) : generate R ≤ S ↔ R ≤ ⇑S := sorry
/-- Show that there is a galois insertion (generate, set_over). -/
def gi_generate {C : Type u} [category C] {X : C} : galois_insertion generate arrows :=
galois_insertion.mk (fun (𝒢 : presieve X) (_x : arrows (generate 𝒢) ≤ 𝒢) => generate 𝒢) sets_iff_generate sorry sorry
theorem le_generate {C : Type u} [category C] {X : C} (R : presieve X) : R ≤ ⇑(generate R) :=
galois_connection.le_u_l (galois_insertion.gc gi_generate) R
/-- If the identity arrow is in a sieve, the sieve is maximal. -/
theorem id_mem_iff_eq_top {C : Type u} [category C] {X : C} {S : sieve X} : coe_fn S X 𝟙 ↔ S = ⊤ := sorry
/-- If an arrow set contains a split epi, it generates the maximal sieve. -/
theorem generate_of_contains_split_epi {C : Type u} [category C] {X : C} {Y : C} {R : presieve X} (f : Y ⟶ X) [split_epi f] (hf : R f) : generate R = ⊤ := sorry
@[simp] theorem generate_of_singleton_split_epi {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) [split_epi f] : generate (presieve.singleton f) = ⊤ :=
generate_of_contains_split_epi f (presieve.singleton_self f)
@[simp] theorem generate_top {C : Type u} [category C] {X : C} : generate ⊤ = ⊤ :=
generate_of_contains_split_epi 𝟙 True.intro
/-- Given a morphism `h : Y ⟶ X`, send a sieve S on X to a sieve on Y
as the inverse image of S with `_ ≫ h`.
That is, `sieve.pullback S h := (≫ h) '⁻¹ S`. -/
def pullback {C : Type u} [category C] {X : C} {Y : C} (h : Y ⟶ X) (S : sieve X) : sieve Y :=
mk (fun (Y_1 : C) (sl : Y_1 ⟶ Y) => coe_fn S Y_1 (sl ≫ h)) sorry
@[simp] theorem pullback_id {C : Type u} [category C] {X : C} {S : sieve X} : pullback 𝟙 S = S := sorry
@[simp] theorem pullback_top {C : Type u} [category C] {X : C} {Y : C} {f : Y ⟶ X} : pullback f ⊤ = ⊤ :=
top_unique fun (_x : C) (g : _x ⟶ Y) => id
theorem pullback_comp {C : Type u} [category C] {X : C} {Y : C} {Z : C} {f : Y ⟶ X} {g : Z ⟶ Y} (S : sieve X) : pullback (g ≫ f) S = pullback g (pullback f S) := sorry
@[simp] theorem pullback_inter {C : Type u} [category C] {X : C} {Y : C} {f : Y ⟶ X} (S : sieve X) (R : sieve X) : pullback f (S ⊓ R) = pullback f S ⊓ pullback f R := sorry
theorem pullback_eq_top_iff_mem {C : Type u} [category C] {X : C} {Y : C} {S : sieve X} (f : Y ⟶ X) : coe_fn S Y f ↔ pullback f S = ⊤ := sorry
theorem pullback_eq_top_of_mem {C : Type u} [category C] {X : C} {Y : C} (S : sieve X) {f : Y ⟶ X} : coe_fn S Y f → pullback f S = ⊤ :=
iff.mp (pullback_eq_top_iff_mem f)
/--
Push a sieve `R` on `Y` forward along an arrow `f : Y ⟶ X`: `gf : Z ⟶ X` is in the sieve if `gf`
factors through some `g : Z ⟶ Y` which is in `R`.
-/
@[simp] theorem pushforward_apply {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) (R : sieve Y) (Z : C) (gf : Z ⟶ X) : coe_fn (pushforward f R) Z gf = ∃ (g : Z ⟶ Y), g ≫ f = gf ∧ coe_fn R Z g :=
Eq.refl (coe_fn (pushforward f R) Z gf)
theorem pushforward_apply_comp {C : Type u} [category C] {X : C} {Y : C} {R : sieve Y} {Z : C} {g : Z ⟶ Y} (hg : coe_fn R Z g) (f : Y ⟶ X) : coe_fn (pushforward f R) Z (g ≫ f) :=
Exists.intro g { left := rfl, right := hg }
theorem pushforward_comp {C : Type u} [category C] {X : C} {Y : C} {Z : C} {f : Y ⟶ X} {g : Z ⟶ Y} (R : sieve Z) : pushforward (g ≫ f) R = pushforward f (pushforward g R) := sorry
theorem galois_connection {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) : galois_connection (pushforward f) (pullback f) := sorry
theorem pullback_monotone {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) : monotone (pullback f) :=
galois_connection.monotone_u (galois_connection f)
theorem pushforward_monotone {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) : monotone (pushforward f) :=
galois_connection.monotone_l (galois_connection f)
theorem le_pushforward_pullback {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) (R : sieve Y) : R ≤ pullback f (pushforward f R) :=
galois_connection.le_u_l (galois_connection f) R
theorem pullback_pushforward_le {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) (R : sieve X) : pushforward f (pullback f R) ≤ R :=
galois_connection.l_u_le (galois_connection f) R
theorem pushforward_union {C : Type u} [category C] {X : C} {Y : C} {f : Y ⟶ X} (S : sieve Y) (R : sieve Y) : pushforward f (S ⊔ R) = pushforward f S ⊔ pushforward f R :=
galois_connection.l_sup (galois_connection f)
theorem pushforward_le_bind_of_mem {C : Type u} [category C] {X : C} {Y : C} (S : presieve X) (R : {Y : C} → {f : Y ⟶ X} → S f → sieve Y) (f : Y ⟶ X) (h : S f) : pushforward f (R h) ≤ bind S R := sorry
theorem le_pullback_bind {C : Type u} [category C] {X : C} {Y : C} (S : presieve X) (R : {Y : C} → {f : Y ⟶ X} → S f → sieve Y) (f : Y ⟶ X) (h : S f) : R h ≤ pullback f (bind S R) :=
eq.mpr
(id (Eq._oldrec (Eq.refl (R h ≤ pullback f (bind S R))) (Eq.symm (propext (galois_connection f (R h) (bind S R))))))
(pushforward_le_bind_of_mem (fun {Y : C} (f : Y ⟶ X) => S f) R f h)
/-- If `f` is a monomorphism, the pushforward-pullback adjunction on sieves is coreflective. -/
def galois_coinsertion_of_mono {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) [mono f] : galois_coinsertion (pushforward f) (pullback f) :=
galois_connection.to_galois_coinsertion (galois_connection f) sorry
/-- If `f` is a split epi, the pushforward-pullback adjunction on sieves is reflective. -/
def galois_insertion_of_split_epi {C : Type u} [category C] {X : C} {Y : C} (f : Y ⟶ X) [split_epi f] : galois_insertion (pushforward f) (pullback f) :=
galois_connection.to_galois_insertion (galois_connection f) sorry
/-- A sieve induces a presheaf. -/
@[simp] theorem functor_obj {C : Type u} [category C] {X : C} (S : sieve X) (Y : Cᵒᵖ) : functor.obj (functor S) Y = Subtype fun (g : opposite.unop Y ⟶ X) => coe_fn S (opposite.unop Y) g :=
Eq.refl (functor.obj (functor S) Y)
/--
If a sieve S is contained in a sieve T, then we have a morphism of presheaves on their induced
presheaves.
-/
def nat_trans_of_le {C : Type u} [category C] {X : C} {S : sieve X} {T : sieve X} (h : S ≤ T) : functor S ⟶ functor T :=
nat_trans.mk fun (Y : Cᵒᵖ) (f : functor.obj (functor S) Y) => { val := subtype.val f, property := sorry }
/-- The natural inclusion from the functor induced by a sieve to the yoneda embedding. -/
@[simp] theorem functor_inclusion_app {C : Type u} [category C] {X : C} (S : sieve X) (Y : Cᵒᵖ) (f : functor.obj (functor S) Y) : nat_trans.app (functor_inclusion S) Y f = subtype.val f :=
Eq.refl (nat_trans.app (functor_inclusion S) Y f)
theorem nat_trans_of_le_comm {C : Type u} [category C] {X : C} {S : sieve X} {T : sieve X} (h : S ≤ T) : nat_trans_of_le h ≫ functor_inclusion T = functor_inclusion S :=
rfl
/-- The presheaf induced by a sieve is a subobject of the yoneda embedding. -/
protected instance functor_inclusion_is_mono {C : Type u} [category C] {X : C} {S : sieve X} : mono (functor_inclusion S) :=
mono.mk
fun (Z : Cᵒᵖ ⥤ Type v) (f g : Z ⟶ functor S) (h : f ≫ functor_inclusion S = g ≫ functor_inclusion S) =>
nat_trans.ext f g
(funext fun (Y : Cᵒᵖ) => funext fun (y : functor.obj Z Y) => subtype.ext (congr_fun (nat_trans.congr_app h Y) y))
/--
A natural transformation to a representable functor induces a sieve. This is the left inverse of
`functor_inclusion`, shown in `sieve_of_subfunctor_functor_inclusion`.
-/
-- TODO: Show that when `f` is mono, this is right inverse to `functor_inclusion` up to isomorphism.
@[simp] theorem sieve_of_subfunctor_apply {C : Type u} [category C] {X : C} {R : Cᵒᵖ ⥤ Type v} (f : R ⟶ functor.obj yoneda X) (Y : C) (g : Y ⟶ X) : coe_fn (sieve_of_subfunctor f) Y g = ∃ (t : functor.obj R (opposite.op Y)), nat_trans.app f (opposite.op Y) t = g :=
Eq.refl (coe_fn (sieve_of_subfunctor f) Y g)
theorem sieve_of_subfunctor_functor_inclusion {C : Type u} [category C] {X : C} {S : sieve X} : sieve_of_subfunctor (functor_inclusion S) = S := sorry
protected instance functor_inclusion_top_is_iso {C : Type u} [category C] {X : C} : is_iso (functor_inclusion ⊤) :=
is_iso.mk
(nat_trans.mk fun (Y : Cᵒᵖ) (a : functor.obj (functor.obj yoneda X) Y) => { val := a, property := True.intro })
|
module squareRoot

square_root_approx : (number : Double) -> (approx : Double) -> Stream Double
square_root_approx number approx =
  let next = (approx + (number / approx)) / 2 in
  next :: square_root_approx number next

-- Walk at most `max` approximations, returning the first whose square is
-- within `bound` of `number` (or the last one examined).
square_root_bound : (max : Nat) -> (number : Double) -> (bound : Double) -> (approxs : Stream Double) -> Double
square_root_bound Z number bound (value :: vs) = value
square_root_bound (S k) number bound (v :: vs) =
  case abs (v * v - number) < bound of
    False => square_root_bound k number bound vs
    True => v

square_root : (number : Double) -> Double
square_root number = square_root_bound 100 number 0.0000000001 (square_root_approx number number)
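-- Example: Newton's method converges in a handful of steps, so at the REPL
-- one would expect something like the following (the printed digits are
-- illustrative, not captured from a real session):
--
--   λ> square_root 2.0
--   1.4142135623730951 : Double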
|
A function $f$ is continuous on a set $S$ if and only if for every $x \in S$ and every $\epsilon > 0$, there exists a $\delta > 0$ such that for all $x' \in S$, if $|x' - x| < \delta$, then $|f(x') - f(x)| < \epsilon$.
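Symbolically, this is the pointwise $\epsilon$-$\delta$ condition, transcribed directly from the sentence above:
$$\forall x \in S,\ \forall \epsilon > 0,\ \exists \delta > 0,\ \forall x' \in S:\quad |x' - x| < \delta \implies |f(x') - f(x)| < \epsilon.$$
|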
from numpy import cos
from numba import njit
from numpy_pulse import numpy_pulse

@njit(fastmath=True)
def numpy_thetaneurons(t, x, e, KdivN, a):
    # Right-hand side of a theta-neuron network:
    # dx/dt = (1 - cos x) + (1 + cos x) * (e + a * (K/N) * I_sync),
    # where I_sync sums the synaptic pulse current over all neurons.
    I_sync = numpy_pulse(x).sum()
    return (1 - cos(x)) + (1 + cos(x)) * (e + a * KdivN * I_sync)
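# A minimal usage sketch (not from the original source): forward-Euler
# integration of the network using the right-hand side defined above.
# Step size, duration, and initial conditions are illustrative assumptions.
import numpy as np

def euler_simulate(x0, e, KdivN, a, dt=1e-3, n_steps=10_000):
    """Integrate the theta-neuron phases forward from x0; returns final phases."""
    x = x0.astype(np.float64)
    for step in range(n_steps):
        x = x + dt * numpy_thetaneurons(step * dt, x, e, KdivN, a)
    return x

# e.g. euler_simulate(np.random.uniform(-np.pi, np.pi, 100), e=0.5, KdivN=0.03, a=1.0)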
|
Celebrate Recovery® is a Christ-centered, 12-step recovery program for anyone struggling with hurt, pain, or addiction of any kind. Celebrate Recovery is a safe place to find community and freedom from the issues that are controlling our lives. It is based on the actual words of Jesus rather than psychological theory. It was designed as a program to help those who are struggling by showing them the loving power of Jesus Christ through a recovery process. Twenty-five years ago, Saddleback Church launched Celebrate Recovery with 43 people; it is now in over 29,000 churches worldwide! |
[STATEMENT]
lemma Crypt_imp_invKey_keysFor: "Crypt K X \<in> H \<Longrightarrow> invKey K \<in> keysFor H"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Crypt K X \<in> H \<Longrightarrow> invKey K \<in> keysFor H
[PROOF STEP]
by (unfold keysFor_def, blast) |
Require Import Basic.
Require Import EqDec.
Require Import Minus.
Require Import Incremental.
Require Import ListEx.
Require Import Rosette.Unquantified.
Require Import Native.
Require Import Bool.
Require Import Nat.
Open Scope nat.
(* A very simple test case for nats to demonstrate
   the work saved by incremental search *)
(* purposely dumb, slow, bad primality test *)
Definition isPrime (n: nat) : bool :=
let aux :=
fix hasFactor n t :=
match t with
| 0 | 1 => false
| S t' =>
if (n / t) * t =? n then true
else hasFactor n t'
end
in
match n with
| 0%nat => false (* undefined *)
| 1%nat => false (* undefined *)
| S n' => negb (aux n n')
end.
Compute (fold_left (fun b n => andb b (isPrime n))
[2; 3; 5; 7; 11; 13; 17; 19] true).
Compute (fold_left (fun b n => andb b (negb (isPrime n)))
[4; 6; 8; 9; 10; 12; 14; 15; 16; 18; 20] true).
Definition evensAfterTwo (n : nat) : list nat :=
let aux := (fix buildList n acc :=
match n with
| 0 => acc
| S n' =>
match acc with
| [] => buildList n' [4]
| h :: _ => buildList n' (2+h :: acc)
end
end) in
rev (aux n []).
Compute evensAfterTwo 100.
Definition evenSpace `{Basic} (n : nat) : Space nat :=
let aux := (fix spaceOf l acc :=
match l with
| [] => acc
| h :: t => spaceOf t (union (single h) acc)
end) in
let evens := evensAfterTwo n in
aux evens empty.
Compute (@evenSpace listSpace 5).
Existing Instance rosetteSearch.
Definition listToOption {A} (l:list A) : option A :=
match l with
| [] => None
| a::_ => Some a
end.
Definition testSpace (n : nat) : Space nat :=
if isPrime n then single n else empty.
Definition primeSearch (n : nat) : Result nat :=
search (bind (evenSpace n) testSpace).
Extraction Language Scheme.
Extraction "primes-naive" primeSearch.
Definition primeIncSearch `{eqDec nat} (n : nat) : Result nat :=
let evens := evenSpace n in
let extra := union evens (single 3) in
incSearch extra evens testSpace.
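(* What the demo is set up to show (read off the definitions above, not
   verified output): every element of [evenSpace n] is an even number
   greater than 2 and hence composite, so the naive [primeSearch] tests the
   whole space and finds nothing, while [primeIncSearch] starts from the
   already-explored even space and only needs to test the single extra
   point [3], which is prime. *)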
Definition testIncSearch := @primeIncSearch _.
Extraction Language Scheme.
Extraction "primes-incremental" testIncSearch.
|
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal33.
Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.
Lemma conj10eqsynthconj2 : forall (lv0 : natural), (@eq natural (Succ lv0) (plus (Succ lv0) Zero)).
Admitted.
QuickChick conj10eqsynthconj2.
|
(* Copyright 2021 (C) Mihails Milehins *)
section\<open>Natural transformation of a semifunctor\<close>
theory CZH_SMC_NTSMCF
imports
CZH_SMC_Semifunctor
CZH_DG_TDGHM
begin
subsection\<open>Background\<close>
named_theorems ntsmcf_cs_simps
named_theorems ntsmcf_cs_intros
lemmas [smc_cs_simps] = dg_shared_cs_simps
lemmas [smc_cs_intros] = dg_shared_cs_intros
subsubsection\<open>Slicing\<close>
definition ntsmcf_tdghm :: "V \<Rightarrow> V"
where "ntsmcf_tdghm \<NN> =
[
\<NN>\<lparr>NTMap\<rparr>,
smcf_dghm (\<NN>\<lparr>NTDom\<rparr>),
smcf_dghm (\<NN>\<lparr>NTCod\<rparr>),
smc_dg (\<NN>\<lparr>NTDGDom\<rparr>),
smc_dg (\<NN>\<lparr>NTDGCod\<rparr>)
]\<^sub>\<circ>"
text\<open>Components.\<close>
lemma ntsmcf_tdghm_components:
shows [slicing_simps]: "ntsmcf_tdghm \<NN>\<lparr>NTMap\<rparr> = \<NN>\<lparr>NTMap\<rparr>"
and [slicing_commute]: "ntsmcf_tdghm \<NN>\<lparr>NTDom\<rparr> = smcf_dghm (\<NN>\<lparr>NTDom\<rparr>)"
and [slicing_commute]: "ntsmcf_tdghm \<NN>\<lparr>NTCod\<rparr> = smcf_dghm (\<NN>\<lparr>NTCod\<rparr>)"
and [slicing_commute]: "ntsmcf_tdghm \<NN>\<lparr>NTDGDom\<rparr> = smc_dg (\<NN>\<lparr>NTDGDom\<rparr>)"
and [slicing_commute]: "ntsmcf_tdghm \<NN>\<lparr>NTDGCod\<rparr> = smc_dg (\<NN>\<lparr>NTDGCod\<rparr>)"
unfolding ntsmcf_tdghm_def nt_field_simps by (auto simp: nat_omega_simps)
subsection\<open>Definition and elementary properties\<close>
text\<open>
A natural transformation of semifunctors, as presented in this work,
is a generalization of the concept of a natural transformation, as presented in
Chapter I-4 in \<^cite>\<open>"mac_lane_categories_2010"\<close>, to semicategories and
semifunctors.
\<close>
locale is_ntsmcf =
\<Z> \<alpha> +
vfsequence \<NN> +
NTDom: is_semifunctor \<alpha> \<AA> \<BB> \<FF> +
NTCod: is_semifunctor \<alpha> \<AA> \<BB> \<GG>
for \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> +
assumes ntsmcf_length[smc_cs_simps]: "vcard \<NN> = 5\<^sub>\<nat>"
and ntsmcf_is_tdghm[slicing_intros]: "ntsmcf_tdghm \<NN> :
smcf_dghm \<FF> \<mapsto>\<^sub>D\<^sub>G\<^sub>H\<^sub>M smcf_dghm \<GG> : smc_dg \<AA> \<mapsto>\<mapsto>\<^sub>D\<^sub>G\<^bsub>\<alpha>\<^esub> smc_dg \<BB>"
and ntsmcf_NTDom[smc_cs_simps]: "\<NN>\<lparr>NTDom\<rparr> = \<FF>"
and ntsmcf_NTCod[smc_cs_simps]: "\<NN>\<lparr>NTCod\<rparr> = \<GG>"
and ntsmcf_NTDGDom[smc_cs_simps]: "\<NN>\<lparr>NTDGDom\<rparr> = \<AA>"
and ntsmcf_NTDGCod[smc_cs_simps]: "\<NN>\<lparr>NTDGCod\<rparr> = \<BB>"
and ntsmcf_Comp_commute[smc_cs_intros]: "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b \<Longrightarrow>
\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> = \<GG>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
syntax "_is_ntsmcf" :: "V \<Rightarrow> V \<Rightarrow> V \<Rightarrow> V \<Rightarrow> V \<Rightarrow> V \<Rightarrow> bool"
(\<open>(_ :/ _ \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F _ :/ _ \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<index> _)\<close> [51, 51, 51, 51, 51] 51)
translations "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>" \<rightleftharpoons>
"CONST is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN>"
abbreviation all_ntsmcfs :: "V \<Rightarrow> V"
where "all_ntsmcfs \<alpha> \<equiv> set {\<NN>. \<exists>\<FF> \<GG> \<AA> \<BB>. \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>}"
abbreviation ntsmcfs :: "V \<Rightarrow> V \<Rightarrow> V \<Rightarrow> V"
where "ntsmcfs \<alpha> \<AA> \<BB> \<equiv> set {\<NN>. \<exists>\<FF> \<GG>. \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>}"
abbreviation these_ntsmcfs :: "V \<Rightarrow> V \<Rightarrow> V \<Rightarrow> V \<Rightarrow> V \<Rightarrow> V"
where "these_ntsmcfs \<alpha> \<AA> \<BB> \<FF> \<GG> \<equiv> set {\<NN>. \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>}"
lemmas [smc_cs_simps] =
is_ntsmcf.ntsmcf_length
is_ntsmcf.ntsmcf_NTDom
is_ntsmcf.ntsmcf_NTCod
is_ntsmcf.ntsmcf_NTDGDom
is_ntsmcf.ntsmcf_NTDGCod
is_ntsmcf.ntsmcf_Comp_commute
lemmas [smc_cs_intros] = is_ntsmcf.ntsmcf_Comp_commute
lemma (in is_ntsmcf) ntsmcf_is_tdghm':
assumes "\<FF>' = smcf_dghm \<FF>"
and "\<GG>' = smcf_dghm \<GG>"
and "\<AA>' = smc_dg \<AA>"
and "\<BB>' = smc_dg \<BB>"
shows "ntsmcf_tdghm \<NN> : \<FF>' \<mapsto>\<^sub>D\<^sub>G\<^sub>H\<^sub>M \<GG>' : \<AA>' \<mapsto>\<mapsto>\<^sub>D\<^sub>G\<^bsub>\<alpha>\<^esub> \<BB>'"
unfolding assms(1-4) by (rule ntsmcf_is_tdghm)
lemmas [slicing_intros] = is_ntsmcf.ntsmcf_is_tdghm'
text\<open>Rules.\<close>
lemma (in is_ntsmcf) is_ntsmcf_axioms'[smc_cs_intros]:
assumes "\<alpha>' = \<alpha>" and "\<AA>' = \<AA>" and "\<BB>' = \<BB>" and "\<FF>' = \<FF>" and "\<GG>' = \<GG>"
shows "\<NN> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<AA>' \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>'\<^esub> \<BB>'"
unfolding assms by (rule is_ntsmcf_axioms)
mk_ide rf is_ntsmcf_def[unfolded is_ntsmcf_axioms_def]
|intro is_ntsmcfI|
|dest is_ntsmcfD[dest]|
|elim is_ntsmcfE[elim]|
lemmas [smc_cs_intros] =
is_ntsmcfD(3,4)
lemma is_ntsmcfI':
assumes "\<Z> \<alpha>"
and "vfsequence \<NN>"
and "\<FF> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "vcard \<NN> = 5\<^sub>\<nat>"
and "\<NN>\<lparr>NTDom\<rparr> = \<FF>"
and "\<NN>\<lparr>NTCod\<rparr> = \<GG>"
and "\<NN>\<lparr>NTDGDom\<rparr> = \<AA>"
and "\<NN>\<lparr>NTDGCod\<rparr> = \<BB>"
and "vsv (\<NN>\<lparr>NTMap\<rparr>)"
and "\<D>\<^sub>\<circ> (\<NN>\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
and "\<And>a. a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr> \<Longrightarrow> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr> : \<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr> \<mapsto>\<^bsub>\<BB>\<^esub> \<GG>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>"
and "\<And>a b f. f : a \<mapsto>\<^bsub>\<AA>\<^esub> b \<Longrightarrow>
\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> = \<GG>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
shows "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
by (intro is_ntsmcfI is_tdghmI, unfold ntsmcf_tdghm_components slicing_simps)
(
simp_all add:
assms nat_omega_simps
ntsmcf_tdghm_def
is_semifunctorD(6)[OF assms(3)]
is_semifunctorD(6)[OF assms(4)]
)
lemma is_ntsmcfD':
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<Z> \<alpha>"
and "vfsequence \<NN>"
and "\<FF> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "vcard \<NN> = 5\<^sub>\<nat>"
and "\<NN>\<lparr>NTDom\<rparr> = \<FF>"
and "\<NN>\<lparr>NTCod\<rparr> = \<GG>"
and "\<NN>\<lparr>NTDGDom\<rparr> = \<AA>"
and "\<NN>\<lparr>NTDGCod\<rparr> = \<BB>"
and "vsv (\<NN>\<lparr>NTMap\<rparr>)"
and "\<D>\<^sub>\<circ> (\<NN>\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
and "\<And>a. a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr> \<Longrightarrow> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr> : \<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr> \<mapsto>\<^bsub>\<BB>\<^esub> \<GG>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>"
and "\<And>a b f. f : a \<mapsto>\<^bsub>\<AA>\<^esub> b \<Longrightarrow>
\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> = \<GG>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by
(
simp_all add:
is_ntsmcfD(2-11)[OF assms]
is_tdghmD[OF is_ntsmcfD(6)[OF assms], unfolded slicing_simps]
)
lemma is_ntsmcfE':
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
obtains "\<Z> \<alpha>"
and "vfsequence \<NN>"
and "\<FF> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "vcard \<NN> = 5\<^sub>\<nat>"
and "\<NN>\<lparr>NTDom\<rparr> = \<FF>"
and "\<NN>\<lparr>NTCod\<rparr> = \<GG>"
and "\<NN>\<lparr>NTDGDom\<rparr> = \<AA>"
and "\<NN>\<lparr>NTDGCod\<rparr> = \<BB>"
and "vsv (\<NN>\<lparr>NTMap\<rparr>)"
and "\<D>\<^sub>\<circ> (\<NN>\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
and "\<And>a. a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr> \<Longrightarrow> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr> : \<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr> \<mapsto>\<^bsub>\<BB>\<^esub> \<GG>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>"
and "\<And>a b f. f : a \<mapsto>\<^bsub>\<AA>\<^esub> b \<Longrightarrow>
\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> = \<GG>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
using assms by (simp add: is_ntsmcfD')
text\<open>Slicing.\<close>
context is_ntsmcf
begin
interpretation tdghm: is_tdghm
\<alpha> \<open>smc_dg \<AA>\<close> \<open>smc_dg \<BB>\<close> \<open>smcf_dghm \<FF>\<close> \<open>smcf_dghm \<GG>\<close> \<open>ntsmcf_tdghm \<NN>\<close>
by (rule ntsmcf_is_tdghm)
lemmas_with [unfolded slicing_simps]:
ntsmcf_NTMap_vsv = tdghm.tdghm_NTMap_vsv
and ntsmcf_NTMap_vdomain[smc_cs_simps] = tdghm.tdghm_NTMap_vdomain
and ntsmcf_NTMap_is_arr = tdghm.tdghm_NTMap_is_arr
and ntsmcf_NTMap_is_arr'[smc_cs_intros] = tdghm.tdghm_NTMap_is_arr'
sublocale NTMap: vsv \<open>\<NN>\<lparr>NTMap\<rparr>\<close>
rewrites "\<D>\<^sub>\<circ> (\<NN>\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
by (rule ntsmcf_NTMap_vsv) (simp add: smc_cs_simps)
lemmas_with [unfolded slicing_simps]:
ntsmcf_NTMap_app_in_Arr[smc_cs_intros] = tdghm.tdghm_NTMap_app_in_Arr
and ntsmcf_NTMap_vrange_vifunion = tdghm.tdghm_NTMap_vrange_vifunion
and ntsmcf_NTMap_vrange = tdghm.tdghm_NTMap_vrange
and ntsmcf_NTMap_vsubset_Vset = tdghm.tdghm_NTMap_vsubset_Vset
and ntsmcf_NTMap_in_Vset = tdghm.tdghm_NTMap_in_Vset
and ntsmcf_is_tdghm_if_ge_Limit = tdghm.tdghm_is_tdghm_if_ge_Limit
end
lemmas [smc_cs_intros] = is_ntsmcf.ntsmcf_NTMap_is_arr'
lemma (in is_ntsmcf) ntsmcf_Comp_commute':
assumes "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b" and "g : c \<mapsto>\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>"
shows
"\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> (\<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> g) =
(\<GG>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>) \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> g"
using assms
by
(
cs_concl cs_shallow
cs_simp: ntsmcf_Comp_commute semicategory.smc_Comp_assoc[symmetric]
cs_intro: smc_cs_intros
)
lemma (in is_ntsmcf) ntsmcf_Comp_commute'':
assumes "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b" and "g : c \<mapsto>\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>"
shows
"\<GG>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> (\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> g) =
(\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr>) \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> g"
using assms
by
(
cs_concl
cs_simp: ntsmcf_Comp_commute semicategory.smc_Comp_assoc[symmetric]
cs_intro: smc_cs_intros
)
text\<open>Elementary properties.\<close>
lemma ntsmcf_eqI:
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN>' : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<AA>' \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>'"
and "\<NN>\<lparr>NTMap\<rparr> = \<NN>'\<lparr>NTMap\<rparr>"
and "\<FF> = \<FF>'"
and "\<GG> = \<GG>'"
and "\<AA> = \<AA>'"
and "\<BB> = \<BB>'"
shows "\<NN> = \<NN>'"
proof-
interpret L: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(1))
interpret R: is_ntsmcf \<alpha> \<AA>' \<BB>' \<FF>' \<GG>' \<NN>' by (rule assms(2))
show ?thesis
proof(rule vsv_eqI)
have dom: "\<D>\<^sub>\<circ> \<NN> = 5\<^sub>\<nat>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps V_cs_simps)
show "\<D>\<^sub>\<circ> \<NN> = \<D>\<^sub>\<circ> \<NN>'"
by (cs_concl cs_shallow cs_simp: smc_cs_simps V_cs_simps)
from assms(4-7) have sup:
"\<NN>\<lparr>NTDom\<rparr> = \<NN>'\<lparr>NTDom\<rparr>" "\<NN>\<lparr>NTCod\<rparr> = \<NN>'\<lparr>NTCod\<rparr>"
"\<NN>\<lparr>NTDGDom\<rparr> = \<NN>'\<lparr>NTDGDom\<rparr>" "\<NN>\<lparr>NTDGCod\<rparr> = \<NN>'\<lparr>NTDGCod\<rparr>"
by (simp_all add: smc_cs_simps)
show "a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> \<NN> \<Longrightarrow> \<NN>\<lparr>a\<rparr> = \<NN>'\<lparr>a\<rparr>" for a
by (unfold dom, elim_in_numeral, insert assms(3) sup)
(auto simp: nt_field_simps)
qed auto
qed
lemma ntsmcf_tdghm_eqI:
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN>' : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<AA>' \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>'"
and "\<FF> = \<FF>'"
and "\<GG> = \<GG>'"
and "\<AA> = \<AA>'"
and "\<BB> = \<BB>'"
and "ntsmcf_tdghm \<NN> = ntsmcf_tdghm \<NN>'"
shows "\<NN> = \<NN>'"
proof(rule ntsmcf_eqI[of \<alpha>])
from assms(7) have "ntsmcf_tdghm \<NN>\<lparr>NTMap\<rparr> = ntsmcf_tdghm \<NN>'\<lparr>NTMap\<rparr>" by simp
then show "\<NN>\<lparr>NTMap\<rparr> = \<NN>'\<lparr>NTMap\<rparr>" unfolding slicing_simps by simp_all
from assms(3-6) show "\<FF> = \<FF>'" "\<GG> = \<GG>'" "\<AA> = \<AA>'" "\<BB> = \<BB>'" by simp_all
qed (simp_all add: assms(1,2))
lemma (in is_ntsmcf) ntsmcf_def:
"\<NN> = [\<NN>\<lparr>NTMap\<rparr>, \<NN>\<lparr>NTDom\<rparr>, \<NN>\<lparr>NTCod\<rparr>, \<NN>\<lparr>NTDGDom\<rparr>, \<NN>\<lparr>NTDGCod\<rparr>]\<^sub>\<circ>"
proof(rule vsv_eqI)
have dom_lhs: "\<D>\<^sub>\<circ> \<NN> = 5\<^sub>\<nat>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps V_cs_simps)
have dom_rhs:
"\<D>\<^sub>\<circ> [\<NN>\<lparr>NTMap\<rparr>, \<NN>\<lparr>NTDGDom\<rparr>, \<NN>\<lparr>NTDGCod\<rparr>, \<NN>\<lparr>NTDom\<rparr>, \<NN>\<lparr>NTCod\<rparr>]\<^sub>\<circ> = 5\<^sub>\<nat>"
by (simp add: nat_omega_simps)
then show "\<D>\<^sub>\<circ> \<NN> = \<D>\<^sub>\<circ> [\<NN>\<lparr>NTMap\<rparr>, \<NN>\<lparr>NTDom\<rparr>, \<NN>\<lparr>NTCod\<rparr>, \<NN>\<lparr>NTDGDom\<rparr>, \<NN>\<lparr>NTDGCod\<rparr>]\<^sub>\<circ>"
unfolding dom_lhs dom_rhs by (simp add: nat_omega_simps)
show "a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> \<NN> \<Longrightarrow>
\<NN>\<lparr>a\<rparr> = [\<NN>\<lparr>NTMap\<rparr>, \<NN>\<lparr>NTDom\<rparr>, \<NN>\<lparr>NTCod\<rparr>, \<NN>\<lparr>NTDGDom\<rparr>, \<NN>\<lparr>NTDGCod\<rparr>]\<^sub>\<circ>\<lparr>a\<rparr>"
for a
by (unfold dom_lhs, elim_in_numeral, unfold nt_field_simps)
(simp_all add: nat_omega_simps)
qed (auto simp: vsv_axioms)
text\<open>Size.\<close>
lemma (in is_ntsmcf) ntsmcf_in_Vset:
assumes "\<Z> \<beta>" and "\<alpha> \<in>\<^sub>\<circ> \<beta>"
shows "\<NN> \<in>\<^sub>\<circ> Vset \<beta>"
proof-
interpret \<beta>: \<Z> \<beta> by (rule assms(1))
note [smc_cs_intros] =
ntsmcf_NTMap_in_Vset
NTDom.smcf_in_Vset
NTCod.smcf_in_Vset
NTDom.HomDom.smc_in_Vset
NTDom.HomCod.smc_in_Vset
from assms(2) show ?thesis
by (subst ntsmcf_def)
(
cs_concl cs_shallow
cs_simp: smc_cs_simps cs_intro: smc_cs_intros V_cs_intros
)
qed
lemma (in is_ntsmcf) ntsmcf_is_ntsmcf_if_ge_Limit:
assumes "\<Z> \<beta>" and "\<alpha> \<in>\<^sub>\<circ> \<beta>"
shows "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<beta>\<^esub> \<BB>"
proof(intro is_ntsmcfI )
show "ntsmcf_tdghm \<NN> :
smcf_dghm \<FF> \<mapsto>\<^sub>D\<^sub>G\<^sub>H\<^sub>M smcf_dghm \<GG> : smc_dg \<AA> \<mapsto>\<mapsto>\<^sub>D\<^sub>G\<^bsub>\<beta>\<^esub> smc_dg \<BB>"
by (rule is_tdghm.tdghm_is_tdghm_if_ge_Limit[OF ntsmcf_is_tdghm assms])
show "\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> = \<GG>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
if "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b" for f a b
by
(
use that in
\<open>cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros\<close>
)+
qed
(
cs_concl cs_shallow
cs_simp: smc_cs_simps
cs_intro:
smc_cs_intros
V_cs_intros
assms
NTDom.smcf_is_semifunctor_if_ge_Limit
NTCod.smcf_is_semifunctor_if_ge_Limit
)+
lemma small_all_ntsmcfs[simp]:
"small {\<NN>. \<exists>\<FF> \<GG> \<AA> \<BB>. \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>}"
proof(cases \<open>\<Z> \<alpha>\<close>)
case True
from is_ntsmcf.ntsmcf_in_Vset show ?thesis
by (intro down[of _ \<open>Vset (\<alpha> + \<omega>)\<close>])
(auto simp: True \<Z>.\<Z>_Limit_\<alpha>\<omega> \<Z>.\<Z>_\<omega>_\<alpha>\<omega> \<Z>.intro \<Z>.\<Z>_\<alpha>_\<alpha>\<omega>)
next
case False
then have "{\<NN>. \<exists>\<FF> \<GG> \<AA> \<BB>. \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>} = {}" by auto
then show ?thesis by simp
qed
lemma small_ntsmcfs[simp]: "small {\<NN>. \<exists>\<FF> \<GG>. \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>}"
by (rule down[of _ \<open>set {\<NN>. \<exists>\<FF> \<GG> \<AA> \<BB>. \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>}\<close>])
auto
lemma small_these_ntcfs[simp]: "small {\<NN>. \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>}"
by (rule down[of _ \<open>set {\<NN>. \<exists>\<FF> \<GG> \<AA> \<BB>. \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>}\<close>])
auto
text\<open>Further elementary results.\<close>
lemma these_ntsmcfs_iff(*not simp*):
"\<NN> \<in>\<^sub>\<circ> these_ntsmcfs \<alpha> \<AA> \<BB> \<FF> \<GG> \<longleftrightarrow> \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
by auto
subsection\<open>Opposite natural transformation of semifunctors\<close>
subsubsection\<open>Definition and elementary properties\<close>
text\<open>See section 1.5 in \<^cite>\<open>"bodo_categories_1970"\<close>.\<close>
definition op_ntsmcf :: "V \<Rightarrow> V"
where "op_ntsmcf \<NN> =
[
\<NN>\<lparr>NTMap\<rparr>,
op_smcf (\<NN>\<lparr>NTCod\<rparr>),
op_smcf (\<NN>\<lparr>NTDom\<rparr>),
op_smc (\<NN>\<lparr>NTDGDom\<rparr>),
op_smc (\<NN>\<lparr>NTDGCod\<rparr>)
]\<^sub>\<circ>"
text\<open>Components.\<close>
lemma op_ntsmcf_components[smc_op_simps]:
shows "op_ntsmcf \<NN>\<lparr>NTMap\<rparr> = \<NN>\<lparr>NTMap\<rparr>"
and "op_ntsmcf \<NN>\<lparr>NTDom\<rparr> = op_smcf (\<NN>\<lparr>NTCod\<rparr>)"
and "op_ntsmcf \<NN>\<lparr>NTCod\<rparr> = op_smcf (\<NN>\<lparr>NTDom\<rparr>)"
and "op_ntsmcf \<NN>\<lparr>NTDGDom\<rparr> = op_smc (\<NN>\<lparr>NTDGDom\<rparr>)"
and "op_ntsmcf \<NN>\<lparr>NTDGCod\<rparr> = op_smc (\<NN>\<lparr>NTDGCod\<rparr>)"
unfolding op_ntsmcf_def nt_field_simps by (auto simp: nat_omega_simps)
text\<open>Slicing.\<close>
lemma op_tdghm_ntsmcf_tdghm[slicing_commute]:
"op_tdghm (ntsmcf_tdghm \<NN>) = ntsmcf_tdghm (op_ntsmcf \<NN>)"
proof(rule vsv_eqI)
have dom_lhs: "\<D>\<^sub>\<circ> (op_tdghm (ntsmcf_tdghm \<NN>)) = 5\<^sub>\<nat>"
unfolding op_tdghm_def by (auto simp: nat_omega_simps)
have dom_rhs: "\<D>\<^sub>\<circ> (ntsmcf_tdghm (op_ntsmcf \<NN>)) = 5\<^sub>\<nat>"
unfolding ntsmcf_tdghm_def by (auto simp: nat_omega_simps)
show "\<D>\<^sub>\<circ> (op_tdghm (ntsmcf_tdghm \<NN>)) = \<D>\<^sub>\<circ> (ntsmcf_tdghm (op_ntsmcf \<NN>))"
unfolding dom_lhs dom_rhs by simp
show "a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> (op_tdghm (ntsmcf_tdghm \<NN>)) \<Longrightarrow>
op_tdghm (ntsmcf_tdghm \<NN>)\<lparr>a\<rparr> = ntsmcf_tdghm (op_ntsmcf \<NN>)\<lparr>a\<rparr>"
for a
by
(
unfold dom_lhs,
elim_in_numeral,
unfold ntsmcf_tdghm_def op_ntsmcf_def op_tdghm_def nt_field_simps
)
(auto simp: nat_omega_simps slicing_commute[symmetric])
qed (auto simp: ntsmcf_tdghm_def op_tdghm_def)
subsubsection\<open>Further properties\<close>
lemma (in is_ntsmcf) is_ntsmcf_op:
"op_ntsmcf \<NN> : op_smcf \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<FF> : op_smc \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> op_smc \<BB>"
proof(rule is_ntsmcfI, unfold smc_op_simps)
show "vfsequence (op_ntsmcf \<NN>)" by (simp add: op_ntsmcf_def)
show "vcard (op_ntsmcf \<NN>) = 5\<^sub>\<nat>" by (simp add: op_ntsmcf_def nat_omega_simps)
fix f a b assume "f : b \<mapsto>\<^bsub>\<AA>\<^esub> a"
with is_ntsmcf_axioms show
"\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>op_smc \<BB>\<^esub> \<GG>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> =
\<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>op_smc \<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by (cs_concl cs_simp: smc_cs_simps smc_op_simps cs_intro: smc_cs_intros)
qed
(
insert is_ntsmcf_axioms,
(
cs_concl cs_shallow
cs_simp: smc_cs_simps slicing_commute[symmetric]
cs_intro: smc_cs_intros smc_op_intros dg_op_intros slicing_intros
)+
)
lemma (in is_ntsmcf) is_ntsmcf_op'[smc_op_intros]:
assumes "\<GG>' = op_smcf \<GG>"
and "\<FF>' = op_smcf \<FF>"
and "\<AA>' = op_smc \<AA>"
and "\<BB>' = op_smc \<BB>"
shows "op_ntsmcf \<NN> : \<GG>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>' : \<AA>' \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>'"
unfolding assms by (rule is_ntsmcf_op)
lemmas [smc_op_intros] = is_ntsmcf.is_ntsmcf_op'
lemma (in is_ntsmcf) ntsmcf_op_ntsmcf_op_ntsmcf[smc_op_simps]:
"op_ntsmcf (op_ntsmcf \<NN>) = \<NN>"
proof(rule ntsmcf_eqI[of \<alpha> \<AA> \<BB> \<FF> \<GG> _ \<AA> \<BB> \<FF> \<GG>], unfold smc_op_simps)
interpret op:
is_ntsmcf \<alpha> \<open>op_smc \<AA>\<close> \<open>op_smc \<BB>\<close> \<open>op_smcf \<GG>\<close> \<open>op_smcf \<FF>\<close> \<open>op_ntsmcf \<NN>\<close>
by (rule is_ntsmcf_op)
from op.is_ntsmcf_op show
"op_ntsmcf (op_ntsmcf \<NN>) : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
by (simp add: smc_op_simps)
qed (auto simp: smc_cs_intros)
lemmas ntsmcf_op_ntsmcf_op_ntsmcf[smc_op_simps] =
is_ntsmcf.ntsmcf_op_ntsmcf_op_ntsmcf
lemma eq_op_ntsmcf_iff:
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN>' : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<AA>' \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>'"
shows "op_ntsmcf \<NN> = op_ntsmcf \<NN>' \<longleftrightarrow> \<NN> = \<NN>'"
proof
interpret L: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(1))
interpret R: is_ntsmcf \<alpha> \<AA>' \<BB>' \<FF>' \<GG>' \<NN>' by (rule assms(2))
assume prems: "op_ntsmcf \<NN> = op_ntsmcf \<NN>'"
show "\<NN> = \<NN>'"
proof(rule ntsmcf_eqI[OF assms])
from prems L.ntsmcf_op_ntsmcf_op_ntsmcf R.ntsmcf_op_ntsmcf_op_ntsmcf show
"\<NN>\<lparr>NTMap\<rparr> = \<NN>'\<lparr>NTMap\<rparr>"
by metis+
from prems L.ntsmcf_op_ntsmcf_op_ntsmcf R.ntsmcf_op_ntsmcf_op_ntsmcf
have "\<NN>\<lparr>NTDom\<rparr> = \<NN>'\<lparr>NTDom\<rparr>"
and "\<NN>\<lparr>NTCod\<rparr> = \<NN>'\<lparr>NTCod\<rparr>"
and "\<NN>\<lparr>NTDGDom\<rparr> = \<NN>'\<lparr>NTDGDom\<rparr>"
and "\<NN>\<lparr>NTDGCod\<rparr> = \<NN>'\<lparr>NTDGCod\<rparr>"
by metis+
then show "\<FF> = \<FF>'" "\<GG> = \<GG>'" "\<AA> = \<AA>'" "\<BB> = \<BB>'" by (auto simp: smc_cs_simps)
qed
qed auto
subsection\<open>Vertical composition of natural transformations\<close>
subsubsection\<open>Definition and elementary properties\<close>
text\<open>See Chapter II-4 in \<^cite>\<open>"mac_lane_categories_2010"\<close>.\<close>
definition ntsmcf_vcomp :: "V \<Rightarrow> V \<Rightarrow> V" (infixl \<open>\<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<close> 55)
where "ntsmcf_vcomp \<MM> \<NN> =
[
(\<lambda>a\<in>\<^sub>\<circ>\<NN>\<lparr>NTDGDom\<rparr>\<lparr>Obj\<rparr>. (\<MM>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>) \<circ>\<^sub>A\<^bsub>\<NN>\<lparr>NTDGCod\<rparr>\<^esub> (\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>)),
\<NN>\<lparr>NTDom\<rparr>,
\<MM>\<lparr>NTCod\<rparr>,
\<NN>\<lparr>NTDGDom\<rparr>,
\<MM>\<lparr>NTDGCod\<rparr>
]\<^sub>\<circ>"
text\<open>Components.\<close>
lemma ntsmcf_vcomp_components:
shows
"(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr> =
(\<lambda>a\<in>\<^sub>\<circ>\<NN>\<lparr>NTDGDom\<rparr>\<lparr>Obj\<rparr>. (\<MM>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>) \<circ>\<^sub>A\<^bsub>\<NN>\<lparr>NTDGCod\<rparr>\<^esub> (\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>))"
and [dg_shared_cs_simps, smc_cs_simps]: "(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTDom\<rparr> = \<NN>\<lparr>NTDom\<rparr>"
and [dg_shared_cs_simps, smc_cs_simps]: "(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTCod\<rparr> = \<MM>\<lparr>NTCod\<rparr>"
and [dg_shared_cs_simps, smc_cs_simps]:
"(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTDGDom\<rparr> = \<NN>\<lparr>NTDGDom\<rparr>"
and [dg_shared_cs_simps, smc_cs_simps]:
"(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTDGCod\<rparr> = \<MM>\<lparr>NTDGCod\<rparr>"
unfolding nt_field_simps ntsmcf_vcomp_def by (simp_all add: nat_omega_simps)
subsubsection\<open>Natural transformation map\<close>
lemma ntsmcf_vcomp_NTMap_vsv[dg_shared_cs_intros, smc_cs_intros]:
"vsv ((\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>)"
unfolding ntsmcf_vcomp_components by simp
lemma ntsmcf_vcomp_NTMap_vdomain[smc_cs_simps]:
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<D>\<^sub>\<circ> ((\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
proof-
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> using assms by auto
show ?thesis unfolding ntsmcf_vcomp_components by (simp add: smc_cs_simps)
qed
lemma ntsmcf_vcomp_NTMap_app[smc_cs_simps]:
assumes "\<MM> : \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>"
shows "(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr> = \<MM>\<lparr>NTMap\<rparr>\<lparr>a\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
proof-
interpret \<MM>: is_ntsmcf \<alpha> \<AA> \<BB> \<GG> \<HH> \<MM> using assms by auto
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> using assms by auto
from assms show ?thesis
unfolding ntsmcf_vcomp_components by (simp add: smc_cs_simps)
qed
lemma ntsmcf_vcomp_NTMap_vrange:
assumes "\<MM> : \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<R>\<^sub>\<circ> ((\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>) \<subseteq>\<^sub>\<circ> \<BB>\<lparr>Arr\<rparr>"
unfolding ntsmcf_vcomp_components
proof(rule vrange_VLambda_vsubset)
fix x assume prems: "x \<in>\<^sub>\<circ> \<NN>\<lparr>NTDGDom\<rparr>\<lparr>Obj\<rparr>"
from prems assms show "\<MM>\<lparr>NTMap\<rparr>\<lparr>x\<rparr> \<circ>\<^sub>A\<^bsub>\<NN>\<lparr>NTDGCod\<rparr>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>x\<rparr> \<in>\<^sub>\<circ> \<BB>\<lparr>Arr\<rparr>"
by (cs_prems cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
(cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
qed
subsubsection\<open>Further properties\<close>
lemma ntsmcf_vcomp_composable_commute[smc_cs_simps]:
\<comment>\<open>See Chapter II-4 in \cite{mac_lane_categories_2010}.\<close>
assumes "\<MM> : \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b"
shows
"(\<MM>\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr>) \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> =
\<HH>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> (\<MM>\<lparr>NTMap\<rparr>\<lparr>a\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>)"
(is \<open>(?MC \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> ?NC) \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> ?R = ?T \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> (?MD \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> ?ND)\<close>)
proof-
interpret \<MM>: is_ntsmcf \<alpha> \<AA> \<BB> \<GG> \<HH> \<MM> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
from assms show ?thesis
by (intro \<MM>.NTDom.HomCod.smc_pattern_rectangle_left)
(cs_concl cs_intro: smc_cs_intros cs_simp: \<NN>.ntsmcf_Comp_commute)
qed
lemma ntsmcf_vcomp_is_ntsmcf[smc_cs_intros]:
\<comment>\<open>See Chapter II-4 in \cite{mac_lane_categories_2010}.\<close>
assumes "\<MM> : \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
proof-
interpret \<MM>: is_ntsmcf \<alpha> \<AA> \<BB> \<GG> \<HH> \<MM> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
show ?thesis
proof(intro is_ntsmcfI')
show "vfsequence (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)" by (simp add: ntsmcf_vcomp_def)
show "vcard (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) = 5\<^sub>\<nat>"
by (auto simp: nat_omega_simps ntsmcf_vcomp_def)
show "vsv ((\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>)"
unfolding ntsmcf_vcomp_components by simp
from assms show "(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr> : \<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr> \<mapsto>\<^bsub>\<BB>\<^esub> \<HH>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>"
if "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>" for a
by
(
use that in
\<open>cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros\<close>
)
fix f a b assume "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b"
with assms show
"(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> \<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> =
\<HH>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<BB>\<^esub> (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by
(
cs_concl
cs_simp: smc_cs_simps is_ntsmcf.ntsmcf_Comp_commute'
cs_intro: smc_cs_intros
)
qed (use assms in \<open>auto simp: smc_cs_simps ntsmcf_vcomp_NTMap_vrange\<close>)
qed
lemma ntsmcf_vcomp_assoc[smc_cs_simps]:
\<comment>\<open>See Chapter II-4 in \cite{mac_lane_categories_2010}.\<close>
assumes "\<LL> : \<HH> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<MM> : \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "(\<LL> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM>) \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> = \<LL> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)"
proof-
interpret \<LL>: is_ntsmcf \<alpha> \<AA> \<BB> \<HH> \<KK> \<LL> by (rule assms(1))
interpret \<MM>: is_ntsmcf \<alpha> \<AA> \<BB> \<GG> \<HH> \<MM> by (rule assms(2))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(3))
show ?thesis
proof(rule ntsmcf_eqI[of \<alpha>])
show "((\<LL> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM>) \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr> = (\<LL> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>"
proof(rule vsv_eqI)
fix a assume "a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> ((\<LL> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>)"
then have "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>"
unfolding ntsmcf_vcomp_components by (simp add: smc_cs_simps)
with assms show
"((\<LL> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM>) \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr> =
(\<LL> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
qed (simp_all add: ntsmcf_vcomp_components)
qed (auto intro: smc_cs_intros)
qed
subsubsection\<open>
Opposite of the vertical composition of natural transformations
of semifunctors
\<close>
lemma op_ntsmcf_ntsmcf_vcomp[smc_op_simps]:
assumes "\<MM> : \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "op_ntsmcf (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) = op_ntsmcf \<NN> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_ntsmcf \<MM>"
proof-
interpret \<MM>: is_ntsmcf \<alpha> \<AA> \<BB> \<GG> \<HH> \<MM> using assms(1) by auto
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> using assms(2) by auto
show ?thesis
proof(rule ntsmcf_eqI[of \<alpha>]; (intro symmetric)?)
show "op_ntsmcf (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr> =
(op_ntsmcf \<NN> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_ntsmcf \<MM>)\<lparr>NTMap\<rparr>"
proof(rule vsv_eqI)
fix a assume "a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> (op_ntsmcf (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>)"
then have a: "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>"
unfolding smc_op_simps ntsmcf_vcomp_NTMap_vdomain[OF assms(2)] by simp
with
\<MM>.NTDom.HomCod.op_smc_Comp
\<MM>.ntsmcf_NTMap_is_arr[OF a]
\<NN>.ntsmcf_NTMap_is_arr[OF a]
show "op_ntsmcf (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr> =
(op_ntsmcf \<NN> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_ntsmcf \<MM>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
unfolding smc_op_simps ntsmcf_vcomp_components
by (simp add: smc_cs_simps)
qed (simp_all add: smc_op_simps smc_cs_simps ntsmcf_vcomp_components(1))
qed (auto intro: smc_cs_intros smc_op_intros)
qed
subsection\<open>Horizontal composition of natural transformations\<close>
subsubsection\<open>Definition and elementary properties\<close>
text\<open>See Chapter II-5 in \<^cite>\<open>"mac_lane_categories_2010"\<close>.\<close>
definition ntsmcf_hcomp :: "V \<Rightarrow> V \<Rightarrow> V" (infixl \<open>\<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<close> 55)
where "ntsmcf_hcomp \<MM> \<NN> =
[
(
\<lambda>a\<in>\<^sub>\<circ>\<NN>\<lparr>NTDGDom\<rparr>\<lparr>Obj\<rparr>.
(
\<MM>\<lparr>NTCod\<rparr>\<lparr>ArrMap\<rparr>\<lparr>\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>\<rparr> \<circ>\<^sub>A\<^bsub>\<MM>\<lparr>NTDGCod\<rparr>\<^esub>
\<MM>\<lparr>NTMap\<rparr>\<lparr>\<NN>\<lparr>NTDom\<rparr>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>\<rparr>
)
),
(\<MM>\<lparr>NTDom\<rparr> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>\<lparr>NTDom\<rparr>),
(\<MM>\<lparr>NTCod\<rparr> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>\<lparr>NTCod\<rparr>),
(\<NN>\<lparr>NTDGDom\<rparr>),
(\<MM>\<lparr>NTDGCod\<rparr>)
]\<^sub>\<circ>"
text\<open>Components.\<close>
lemma ntsmcf_hcomp_components:
shows
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr> =
(
\<lambda>a\<in>\<^sub>\<circ>\<NN>\<lparr>NTDGDom\<rparr>\<lparr>Obj\<rparr>.
(
\<MM>\<lparr>NTCod\<rparr>\<lparr>ArrMap\<rparr>\<lparr>\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>\<rparr> \<circ>\<^sub>A\<^bsub>\<MM>\<lparr>NTDGCod\<rparr>\<^esub>
\<MM>\<lparr>NTMap\<rparr>\<lparr>\<NN>\<lparr>NTDom\<rparr>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>\<rparr>
)
)"
and [dg_shared_cs_simps, smc_cs_simps]:
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTDom\<rparr> = \<MM>\<lparr>NTDom\<rparr> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>\<lparr>NTDom\<rparr>"
and [dg_shared_cs_simps, smc_cs_simps]:
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTCod\<rparr> = \<MM>\<lparr>NTCod\<rparr> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>\<lparr>NTCod\<rparr>"
and [dg_shared_cs_simps, smc_cs_simps]:
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTDGDom\<rparr> = \<NN>\<lparr>NTDGDom\<rparr>"
and [dg_shared_cs_simps, smc_cs_simps]:
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTDGCod\<rparr> = \<MM>\<lparr>NTDGCod\<rparr>"
unfolding nt_field_simps ntsmcf_hcomp_def by (auto simp: nat_omega_simps)
subsubsection\<open>Natural transformation map\<close>
lemma ntsmcf_hcomp_NTMap_vsv[smc_cs_intros]: "vsv ((\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>)"
unfolding ntsmcf_hcomp_components by auto
lemma ntsmcf_hcomp_NTMap_vdomain[smc_cs_simps]:
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<D>\<^sub>\<circ> ((\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
proof-
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(1))
show ?thesis unfolding ntsmcf_hcomp_components by (simp add: smc_cs_simps)
qed
lemma ntsmcf_hcomp_NTMap_app[smc_cs_simps]:
assumes "\<MM> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>"
shows "(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr> =
\<GG>'\<lparr>ArrMap\<rparr>\<lparr>\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> \<MM>\<lparr>NTMap\<rparr>\<lparr>\<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>\<rparr>"
proof-
interpret \<MM>: is_ntsmcf \<alpha> \<BB> \<CC> \<FF>' \<GG>' \<MM> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
from assms(3) show ?thesis
unfolding ntsmcf_hcomp_components by (simp add: smc_cs_simps)
qed
lemma ntsmcf_hcomp_NTMap_vrange:
assumes "\<MM> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<R>\<^sub>\<circ> ((\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>) \<subseteq>\<^sub>\<circ> \<CC>\<lparr>Arr\<rparr>"
proof
interpret \<MM>: is_ntsmcf \<alpha> \<BB> \<CC> \<FF>' \<GG>' \<MM> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
fix f assume "f \<in>\<^sub>\<circ> \<R>\<^sub>\<circ> ((\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>)"
with ntsmcf_hcomp_NTMap_vdomain obtain a
where a: "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>" and f_def: "f = (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
unfolding ntsmcf_hcomp_components by (force simp: smc_cs_simps)
have \<FF>a: "\<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr> \<in>\<^sub>\<circ> \<BB>\<lparr>Obj\<rparr>"
by (simp add: \<NN>.NTDom.smcf_ObjMap_app_in_HomCod_Obj a)
from \<NN>.ntsmcf_NTMap_is_arr[OF a] have "\<GG>'\<lparr>ArrMap\<rparr>\<lparr>\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>\<rparr> :
\<GG>'\<lparr>ObjMap\<rparr>\<lparr>\<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>\<rparr> \<mapsto>\<^bsub>\<CC>\<^esub> \<GG>'\<lparr>ObjMap\<rparr>\<lparr>\<GG>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>\<rparr>"
by (force intro: smc_cs_intros)
then have "\<GG>'\<lparr>ArrMap\<rparr>\<lparr>\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> \<MM>\<lparr>NTMap\<rparr>\<lparr>\<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>\<rparr> \<in>\<^sub>\<circ> \<CC>\<lparr>Arr\<rparr>"
by
(
meson
\<MM>.ntsmcf_NTMap_is_arr[OF \<FF>a]
\<MM>.NTDom.HomCod.smc_is_arrE
\<MM>.NTDom.HomCod.smc_Comp_is_arr
)
with a show "f \<in>\<^sub>\<circ> \<CC>\<lparr>Arr\<rparr>"
unfolding f_def ntsmcf_hcomp_components by (simp add: smc_cs_simps)
qed
subsubsection\<open>Further properties\<close>
lemma ntsmcf_hcomp_composable_commute:
\<comment>\<open>See Chapter II-5 in \<^cite>\<open>"mac_lane_categories_2010"\<close>.\<close>
assumes "\<MM> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b"
shows
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (\<FF>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>)\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> =
(\<GG>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>)\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
proof-
interpret \<MM>: is_ntsmcf \<alpha> \<BB> \<CC> \<FF>' \<GG>' \<MM> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
from assms(3) have [simp]: "b \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>" and a: "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>" by auto
from \<MM>.is_ntsmcf_axioms \<NN>.is_ntsmcf_axioms have \<MM>\<NN>b:
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>b\<rparr> =
(\<GG>'\<lparr>ArrMap\<rparr>\<lparr>\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr>\<rparr>) \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (\<MM>\<lparr>NTMap\<rparr>\<lparr>\<FF>\<lparr>ObjMap\<rparr>\<lparr>b\<rparr>\<rparr>)"
by (auto simp: smc_cs_simps)
let ?\<GG>'\<FF>f = \<open>\<GG>'\<lparr>ArrMap\<rparr>\<lparr>\<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr>\<rparr>\<close>
from a \<MM>.is_ntsmcf_axioms \<NN>.is_ntsmcf_axioms have \<MM>\<NN>a:
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr> =
\<GG>'\<lparr>ArrMap\<rparr>\<lparr>\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> \<MM>\<lparr>NTMap\<rparr>\<lparr>\<FF>\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>\<rparr>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps)+
note \<MM>.NTCod.smcf_ArrMap_Comp[smc_cs_simps del]
from assms show ?thesis
unfolding \<MM>\<NN>b \<MM>\<NN>a
by (intro \<MM>.NTDom.HomCod.smc_pattern_rectangle_left)
(
cs_concl
cs_simp: smc_cs_simps is_semifunctor.smcf_ArrMap_Comp[symmetric]
cs_intro: smc_cs_intros
)+
qed
lemma ntsmcf_hcomp_is_ntsmcf:
\<comment>\<open>See Chapter II-5 in \<^cite>\<open>"mac_lane_categories_2010"\<close>.\<close>
assumes "\<MM> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> : \<FF>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
proof-
interpret \<MM>: is_ntsmcf \<alpha> \<BB> \<CC> \<FF>' \<GG>' \<MM> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
show ?thesis
proof(intro is_ntsmcfI', unfold ntsmcf_hcomp_components(3,4))
show "vfsequence (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)" unfolding ntsmcf_hcomp_def by auto
show "vcard (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) = 5\<^sub>\<nat>"
unfolding ntsmcf_hcomp_def by (simp add: nat_omega_simps)
from assms show "(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr> :
(\<FF>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>)\<lparr>ObjMap\<rparr>\<lparr>a\<rparr> \<mapsto>\<^bsub>\<CC>\<^esub> (\<GG>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>)\<lparr>ObjMap\<rparr>\<lparr>a\<rparr>"
if "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>" for a
by
(
use that in
\<open>cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros\<close>
)
fix f a b assume "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b"
with ntsmcf_hcomp_composable_commute[OF assms]
show "(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (\<FF>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>)\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> =
(\<GG>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>)\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by auto
qed (auto simp: ntsmcf_hcomp_components(1) smc_cs_simps intro: smc_cs_intros)
qed
lemma ntsmcf_hcomp_is_ntsmcf'[smc_cs_intros]:
assumes "\<MM> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<SS> = \<FF>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>"
and "\<SS>' = \<GG>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>"
shows "\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> : \<SS> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<SS>' : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
using assms(1,2) unfolding assms(3,4) by (rule ntsmcf_hcomp_is_ntsmcf)
lemma ntsmcf_hcomp_assoc[smc_cs_simps]:
\<comment>\<open>See Chapter II-5 in \<^cite>\<open>"mac_lane_categories_2010"\<close>.\<close>
assumes "\<LL> : \<FF>'' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>'' : \<CC> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>"
and "\<MM> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "(\<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM>) \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> = \<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)"
proof-
interpret \<LL>: is_ntsmcf \<alpha> \<CC> \<DD> \<FF>'' \<GG>'' \<LL> by (rule assms(1))
interpret \<MM>: is_ntsmcf \<alpha> \<BB> \<CC> \<FF>' \<GG>' \<MM> by (rule assms(2))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(3))
interpret \<LL>\<MM>: is_ntsmcf \<alpha> \<BB> \<DD> \<open>\<FF>'' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>'\<close> \<open>\<GG>'' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>'\<close> \<open>\<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM>\<close>
by (auto intro: smc_cs_intros)
interpret \<MM>\<NN>: is_ntsmcf \<alpha> \<AA> \<CC> \<open>\<FF>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>\<close> \<open>\<GG>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>\<close> \<open>\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>\<close>
by (auto intro: smc_cs_intros)
note smcf_axioms =
\<LL>.NTDom.is_semifunctor_axioms
\<LL>.NTCod.is_semifunctor_axioms
\<MM>.NTDom.is_semifunctor_axioms
\<MM>.NTCod.is_semifunctor_axioms
\<NN>.NTDom.is_semifunctor_axioms
\<NN>.NTCod.is_semifunctor_axioms
show ?thesis
proof(rule ntsmcf_eqI)
from assms show
"\<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> :
(\<FF>'' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>') \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<GG>'' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>') \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> :
\<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>"
by (auto intro: smc_cs_intros)
from \<LL>\<MM>.is_ntsmcf_axioms \<NN>.is_ntsmcf_axioms have dom_lhs:
"\<D>\<^sub>\<circ> ((\<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
by (simp add: smc_cs_simps)
from \<MM>\<NN>.is_ntsmcf_axioms \<LL>.is_ntsmcf_axioms have dom_rhs:
"\<D>\<^sub>\<circ> ((\<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
by (simp add: smc_cs_simps)
show "(\<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr> = (\<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>"
proof(rule vsv_eqI, unfold dom_lhs dom_rhs)
fix a assume "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>"
with assms show
"(\<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr> =
(\<LL> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
qed (simp_all add: ntsmcf_hcomp_components)
qed
(
insert smcf_axioms,
auto simp: smcf_comp_assoc intro!: smc_cs_intros
)
qed
subsubsection\<open>Opposite of the horizontal composition of
natural transformations of semifunctors\<close>
lemma op_ntsmcf_ntsmcf_hcomp[smc_op_simps]:
assumes "\<MM> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "op_ntsmcf (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) = op_ntsmcf \<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_ntsmcf \<NN>"
proof-
interpret \<MM>: is_ntsmcf \<alpha> \<BB> \<CC> \<FF>' \<GG>' \<MM> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
have op_\<MM>: "op_ntsmcf \<MM> :
op_smcf \<GG>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<FF>' : op_smc \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> op_smc \<CC>"
and op_\<NN>: "op_ntsmcf \<NN> :
op_smcf \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<FF> : op_smc \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> op_smc \<BB>"
by
(
cs_concl cs_shallow
cs_simp: smc_op_simps cs_intro: smc_cs_intros smc_op_intros
)
show ?thesis
proof(rule sym, rule ntsmcf_eqI, unfold smc_op_simps slicing_simps)
show
"op_ntsmcf \<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_ntsmcf \<NN> :
op_smcf \<GG>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<FF>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<FF> :
op_smc \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> op_smc \<CC>"
by
(
cs_concl cs_shallow
cs_simp: smc_op_simps cs_intro: smc_cs_intros smc_op_intros
)
show "op_ntsmcf (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) :
op_smcf \<GG>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<FF>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<FF> :
op_smc \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> op_smc \<CC>"
by
(
cs_concl cs_shallow
cs_simp: smc_op_simps cs_intro: smc_cs_intros smc_op_intros
)
show "(op_ntsmcf \<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_ntsmcf \<NN>)\<lparr>NTMap\<rparr> = (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>"
proof
(
rule vsv_eqI,
unfold
ntsmcf_hcomp_NTMap_vdomain[OF assms(2)]
ntsmcf_hcomp_NTMap_vdomain[OF op_\<NN>]
smc_op_simps
)
fix a assume "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>"
with assms show
"(op_ntsmcf \<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_ntsmcf \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr> = (\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by
(
cs_concl cs_shallow
cs_simp: smc_cs_simps smc_op_simps
cs_intro: smc_cs_intros smc_op_intros
)
qed (auto simp: ntsmcf_hcomp_components)
qed simp_all
qed
subsection\<open>Interchange law\<close>
lemma ntsmcf_comp_interchange_law:
\<comment>\<open>See Chapter II-5 in \<^cite>\<open>"mac_lane_categories_2010"\<close>.\<close>
assumes "\<MM> : \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<MM>' : \<GG>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>' : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN>' : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
shows
"((\<MM>' \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>') \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)) =
(\<MM>' \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM>) \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<NN>' \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)"
proof-
interpret \<MM>: is_ntsmcf \<alpha> \<AA> \<BB> \<GG> \<HH> \<MM> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
interpret \<MM>': is_ntsmcf \<alpha> \<BB> \<CC> \<GG>' \<HH>' \<MM>' by (rule assms(3))
interpret \<NN>': is_ntsmcf \<alpha> \<BB> \<CC> \<FF>' \<GG>' \<NN>' by (rule assms(4))
interpret \<NN>'\<NN>:
is_ntsmcf \<alpha> \<AA> \<CC> \<open>\<FF>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>\<close> \<open>\<GG>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>\<close> \<open>\<NN>' \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>\<close>
by (auto intro: smc_cs_intros)
interpret \<MM>\<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<HH> \<open>\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>\<close>
by (auto intro: smc_cs_intros)
show ?thesis
proof(rule ntsmcf_eqI[of \<alpha>])
show
"(\<MM>' \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>' \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr> =
(\<MM>' \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<NN>' \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>"
proof
(
rule vsv_eqI,
unfold
ntsmcf_vcomp_NTMap_vdomain[OF \<NN>'\<NN>.is_ntsmcf_axioms]
ntsmcf_hcomp_NTMap_vdomain[OF \<MM>\<NN>.is_ntsmcf_axioms]
)
fix a assume "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>"
with assms show
"(\<MM>' \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>' \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>\<lparr>a\<rparr> =
((\<MM>' \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM>) \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<NN>' \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by
(
cs_concl
cs_simp: smc_cs_simps is_ntsmcf.ntsmcf_Comp_commute'
cs_intro: smc_cs_intros
)
qed (auto intro: smc_cs_intros)
qed (auto intro: smc_cs_intros)
qed
subsection\<open>
Composition of a natural transformation of semifunctors and a semifunctor
\<close>
subsubsection\<open>Definition and elementary properties\<close>
abbreviation (input) ntsmcf_smcf_comp :: "V \<Rightarrow> V \<Rightarrow> V" (infixl "\<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F" 55)
where "ntsmcf_smcf_comp \<equiv> tdghm_dghm_comp"
text\<open>Slicing.\<close>
lemma ntsmcf_tdghm_ntsmcf_smcf_comp[slicing_commute]:
"ntsmcf_tdghm \<NN> \<circ>\<^sub>T\<^sub>D\<^sub>G\<^sub>H\<^sub>M\<^sub>-\<^sub>D\<^sub>G\<^sub>H\<^sub>M smcf_dghm \<HH> = ntsmcf_tdghm (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>)"
unfolding
tdghm_dghm_comp_def
dghm_comp_def
ntsmcf_tdghm_def
smcf_dghm_def
smc_dg_def
dg_field_simps
dghm_field_simps
nt_field_simps
by (simp add: nat_omega_simps) (*slow*)
subsubsection\<open>Natural transformation map\<close>
mk_VLambda (in is_semifunctor)
tdghm_dghm_comp_components(1)[where \<HH>=\<FF>, unfolded smcf_HomDom]
|vdomain ntsmcf_smcf_comp_NTMap_vdomain[smc_cs_simps]|
|app ntsmcf_smcf_comp_NTMap_app[smc_cs_simps]|
lemmas [smc_cs_simps] =
is_semifunctor.ntsmcf_smcf_comp_NTMap_vdomain
is_semifunctor.ntsmcf_smcf_comp_NTMap_app
lemma ntsmcf_smcf_comp_NTMap_vrange:
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>" and "\<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<R>\<^sub>\<circ> ((\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>)\<lparr>NTMap\<rparr>) \<subseteq>\<^sub>\<circ> \<CC>\<lparr>Arr\<rparr>"
proof-
interpret \<NN>: is_ntsmcf \<alpha> \<BB> \<CC> \<FF> \<GG> \<NN> by (rule assms(1))
interpret \<HH>: is_semifunctor \<alpha> \<AA> \<BB> \<HH> by (rule assms(2))
show ?thesis
unfolding tdghm_dghm_comp_components
by (auto simp: smc_cs_simps intro: smc_cs_intros)
qed
subsubsection\<open>
Opposite of the composition of a natural transformation of
semifunctors and a semifunctor
\<close>
lemma op_ntsmcf_ntsmcf_smcf_comp[smc_op_simps]:
"op_ntsmcf (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>) = op_ntsmcf \<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_smcf \<HH>"
unfolding
tdghm_dghm_comp_def
dghm_comp_def
op_ntsmcf_def
op_smcf_def
op_smc_def
dg_field_simps
dghm_field_simps
nt_field_simps
by (simp add: nat_omega_simps) (*slow*)
subsubsection\<open>
Composition of a natural transformation of semifunctors and a
semifunctor is a natural transformation of semifunctors
\<close>
lemma ntsmcf_smcf_comp_is_ntsmcf[intro]:
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>" and "\<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
proof-
interpret \<NN>: is_ntsmcf \<alpha> \<BB> \<CC> \<FF> \<GG> \<NN> by (rule assms(1))
interpret \<HH>: is_semifunctor \<alpha> \<AA> \<BB> \<HH> by (rule assms(2))
show ?thesis
proof(rule is_ntsmcfI)
show "vfsequence (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>)"
unfolding tdghm_dghm_comp_def by (simp add: nat_omega_simps)
from assms show "\<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
by (cs_concl cs_intro: smc_cs_intros)
from assms show "\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
by (cs_concl cs_intro: smc_cs_intros)
show "vcard (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>) = 5\<^sub>\<nat>"
unfolding tdghm_dghm_comp_def by (simp add: nat_omega_simps)
from assms show
"ntsmcf_tdghm (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>) :
smcf_dghm (\<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>) \<mapsto>\<^sub>D\<^sub>G\<^sub>H\<^sub>M smcf_dghm (\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>) :
smc_dg \<AA> \<mapsto>\<mapsto>\<^sub>D\<^sub>G\<^bsub>\<alpha>\<^esub> smc_dg \<CC>"
by
(
cs_concl
cs_simp: slicing_commute[symmetric]
cs_intro: slicing_intros dg_cs_intros
)
show
"(\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>)\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (\<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>)\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> =
(\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>)\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
if "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b" for a b f
using that by (cs_concl cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
qed (auto simp: smc_cs_simps)
qed
lemma ntsmcf_smcf_comp_is_semifunctor'[smc_cs_intros]:
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<FF>' = \<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>"
and "\<GG>' = \<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>"
shows "\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
using assms(1,2) unfolding assms(3,4) ..
subsubsection\<open>Further properties\<close>
lemma ntsmcf_smcf_comp_ntsmcf_smcf_comp_assoc:
assumes "\<NN> : \<HH> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>' : \<CC> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>"
and "\<GG> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<FF> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "(\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>) \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> = \<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>)"
proof-
interpret \<NN>: is_ntsmcf \<alpha> \<CC> \<DD> \<HH> \<HH>' \<NN> by (rule assms(1))
interpret \<GG>: is_semifunctor \<alpha> \<BB> \<CC> \<GG> by (rule assms(2))
interpret \<FF>: is_semifunctor \<alpha> \<AA> \<BB> \<FF> by (rule assms(3))
show ?thesis
proof(rule ntsmcf_tdghm_eqI)
from assms show
"(\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>) \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> :
\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> :
\<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
show "\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>) :
\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>' \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> :
\<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>"
by (cs_concl cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
from assms show
"ntsmcf_tdghm ((\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>) \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>) =
ntsmcf_tdghm (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>))"
by
(
cs_concl
cs_simp: slicing_commute[symmetric]
cs_intro: slicing_intros tdghm_dghm_comp_tdghm_dghm_comp_assoc
)
qed simp_all
qed
lemma (in is_ntsmcf) ntsmcf_ntsmcf_smcf_comp_smcf_id[smc_cs_simps]:
"\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F smcf_id \<AA> = \<NN>"
proof(rule ntsmcf_tdghm_eqI)
show "\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F smcf_id \<AA> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
by (cs_concl cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
show "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
show "ntsmcf_tdghm (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F smcf_id \<AA>) = ntsmcf_tdghm \<NN>"
by
(
cs_concl cs_shallow
cs_simp: slicing_simps slicing_commute[symmetric]
cs_intro: smc_cs_intros slicing_intros dg_cs_simps
)
qed simp_all
lemmas [smc_cs_simps] = is_ntsmcf.ntsmcf_ntsmcf_smcf_comp_smcf_id
lemma ntsmcf_vcomp_ntsmcf_smcf_comp[smc_cs_simps]:
assumes "\<KK> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<MM> : \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
shows
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>) \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>) =
(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>"
proof(rule ntsmcf_eqI)
from assms show "(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> :
\<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
by (cs_concl cs_shallow cs_intro: smc_cs_intros)
from assms show "\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>) :
\<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
by (cs_concl cs_shallow cs_intro: smc_cs_intros)
from assms have dom_lhs:
"\<D>\<^sub>\<circ> ((\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>))\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
from assms have dom_rhs: "\<D>\<^sub>\<circ> ((\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>)\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
show
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>))\<lparr>NTMap\<rparr> =
(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>)\<lparr>NTMap\<rparr>"
proof(rule vsv_eqI, unfold dom_lhs dom_rhs)
fix a assume "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>"
with assms show
"(\<MM> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>))\<lparr>NTMap\<rparr>\<lparr>a\<rparr> =
(\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
qed (cs_concl cs_shallow cs_intro: smc_cs_intros)+
qed simp_all
subsection\<open>
Composition of a semifunctor and a natural transformation of semifunctors
\<close>
subsubsection\<open>Definition and elementary properties\<close>
abbreviation (input) smcf_ntsmcf_comp :: "V \<Rightarrow> V \<Rightarrow> V" (infixl "\<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F" 55)
where "smcf_ntsmcf_comp \<equiv> dghm_tdghm_comp"
text\<open>Slicing.\<close>
lemma ntsmcf_tdghm_smcf_ntsmcf_comp[slicing_commute]:
"smcf_dghm \<HH> \<circ>\<^sub>D\<^sub>G\<^sub>H\<^sub>M\<^sub>-\<^sub>T\<^sub>D\<^sub>G\<^sub>H\<^sub>M ntsmcf_tdghm \<NN> = ntsmcf_tdghm (\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)"
unfolding
dghm_tdghm_comp_def
dghm_comp_def
ntsmcf_tdghm_def
smcf_dghm_def
smc_dg_def
dg_field_simps
dghm_field_simps
nt_field_simps
by (simp add: nat_omega_simps) (*slow*)
subsubsection\<open>Natural transformation map\<close>
mk_VLambda (in is_ntsmcf)
dghm_tdghm_comp_components(1)[where \<NN>=\<NN>, unfolded ntsmcf_NTDGDom]
|vdomain smcf_ntsmcf_comp_NTMap_vdomain[smc_cs_simps]|
|app smcf_ntsmcf_comp_NTMap_app[smc_cs_simps]|
lemmas [smc_cs_simps] =
is_ntsmcf.smcf_ntsmcf_comp_NTMap_vdomain
is_ntsmcf.smcf_ntsmcf_comp_NTMap_app
lemma smcf_ntsmcf_comp_NTMap_vrange:
assumes "\<HH> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>" and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<R>\<^sub>\<circ> ((\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>) \<subseteq>\<^sub>\<circ> \<CC>\<lparr>Arr\<rparr>"
proof-
interpret \<HH>: is_semifunctor \<alpha> \<BB> \<CC> \<HH> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
show ?thesis
unfolding dghm_tdghm_comp_components
by (auto simp: smc_cs_simps intro: smc_cs_intros)
qed
subsubsection\<open>
Opposite of the composition of a semifunctor
and a natural transformation of semifunctors
\<close>
lemma op_ntsmcf_smcf_ntsmcf_comp[smc_op_simps]:
"op_ntsmcf (\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) = op_smcf \<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F op_ntsmcf \<NN>"
unfolding
dghm_tdghm_comp_def
dghm_comp_def
op_ntsmcf_def
op_smcf_def
op_smc_def
dg_field_simps
dghm_field_simps
nt_field_simps
by (simp add: nat_omega_simps) (*slow*)
subsubsection\<open>
Composition of a semifunctor and a natural transformation of
semifunctors is a natural transformation of semifunctors
\<close>
lemma smcf_ntsmcf_comp_is_ntsmcf[intro]:
assumes "\<HH> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>" and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> : \<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
proof-
interpret \<HH>: is_semifunctor \<alpha> \<BB> \<CC> \<HH> by (rule assms(1))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(2))
show ?thesis
proof(rule is_ntsmcfI)
show "vfsequence (\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)" unfolding dghm_tdghm_comp_def by simp
from assms show "\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
by (cs_concl cs_intro: smc_cs_intros)
from assms show "\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
by (cs_concl cs_intro: smc_cs_intros)
show "vcard (\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) = 5\<^sub>\<nat>"
unfolding dghm_tdghm_comp_def by (simp add: nat_omega_simps)
from assms show "ntsmcf_tdghm (\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) :
smcf_dghm (\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>) \<mapsto>\<^sub>D\<^sub>G\<^sub>H\<^sub>M smcf_dghm (\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>) :
smc_dg \<AA> \<mapsto>\<mapsto>\<^sub>D\<^sub>G\<^bsub>\<alpha>\<^esub> smc_dg \<CC>"
by
(
cs_concl
cs_simp: slicing_commute[symmetric]
cs_intro: dg_cs_intros slicing_intros
)
have [smc_cs_simps]:
"\<HH>\<lparr>ArrMap\<rparr>\<lparr>\<NN>\<lparr>NTMap\<rparr>\<lparr>b\<rparr>\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> \<HH>\<lparr>ArrMap\<rparr>\<lparr>\<FF>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr>\<rparr> =
\<HH>\<lparr>ArrMap\<rparr>\<lparr>\<GG>\<lparr>ArrMap\<rparr>\<lparr>f\<rparr>\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> \<HH>\<lparr>ArrMap\<rparr>\<lparr>\<NN>\<lparr>NTMap\<rparr>\<lparr>a\<rparr>\<rparr>"
if "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b" for a b f
using assms that
by
(
cs_concl
cs_simp:
is_ntsmcf.ntsmcf_Comp_commute
is_semifunctor.smcf_ArrMap_Comp[symmetric]
cs_intro: smc_cs_intros
)
from assms show
"(\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>b\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>)\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> =
(\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>)\<lparr>ArrMap\<rparr>\<lparr>f\<rparr> \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
if "f : a \<mapsto>\<^bsub>\<AA>\<^esub> b" for a b f
using assms that
by (cs_concl cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
qed (auto simp: smc_cs_simps)
qed
lemma smcf_ntsmcf_comp_is_semifunctor'[smc_cs_intros]:
assumes "\<HH> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<FF>' = \<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>"
and "\<GG>' = \<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>"
shows "\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> : \<FF>' \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG>' : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
using assms(1,2) unfolding assms(3,4) ..
subsubsection\<open>Further properties\<close>
lemma smcf_comp_smcf_ntsmcf_comp_assoc:
assumes "\<NN> : \<HH> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>' : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<FF> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<GG> : \<CC> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>"
shows "(\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>) \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> = \<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)"
proof(rule ntsmcf_tdghm_eqI)
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<HH> \<HH>' \<NN> by (rule assms(1))
interpret \<FF>: is_semifunctor \<alpha> \<BB> \<CC> \<FF> by (rule assms(2))
interpret \<GG>: is_semifunctor \<alpha> \<CC> \<DD> \<GG> by (rule assms(3))
from assms show "(\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF>) \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> :
\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>' : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>"
by (cs_concl cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
from assms show "\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) :
\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH>' : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
from assms show
"ntsmcf_tdghm (\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) =
ntsmcf_tdghm (\<GG> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<FF> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))"
by
(
cs_concl
cs_simp: slicing_commute[symmetric]
cs_intro: slicing_intros dghm_comp_dghm_tdghm_comp_assoc
)
qed simp_all
lemma (in is_ntsmcf) ntsmcf_smcf_ntsmcf_comp_smcf_id[smc_cs_simps]:
"smcf_id \<BB> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> = \<NN>"
proof(rule ntsmcf_tdghm_eqI)
show "smcf_id \<BB> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
by (cs_concl cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
show "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
by (cs_concl cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
show "ntsmcf_tdghm (dghm_id \<BB> \<circ>\<^sub>D\<^sub>G\<^sub>H\<^sub>M\<^sub>-\<^sub>T\<^sub>D\<^sub>G\<^sub>H\<^sub>M \<NN>) = ntsmcf_tdghm \<NN>"
by
(
cs_concl cs_shallow
cs_simp: slicing_simps slicing_commute[symmetric]
cs_intro: smc_cs_intros slicing_intros dg_cs_simps
)
qed simp_all
lemmas [smc_cs_simps] = is_ntsmcf.ntsmcf_smcf_ntsmcf_comp_smcf_id
lemma smcf_ntsmcf_comp_ntsmcf_smcf_comp_assoc:
assumes "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<HH> : \<CC> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>"
and "\<KK> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows "(\<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK> = \<HH> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<NN> \<circ>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<KK>)"
proof-
interpret \<NN>: is_ntsmcf \<alpha> \<BB> \<CC> \<FF> \<GG> \<NN> by (rule assms(1))
interpret \<HH>: is_semifunctor \<alpha> \<CC> \<DD> \<HH> by (rule assms(2))
interpret \<KK>: is_semifunctor \<alpha> \<AA> \<BB> \<KK> by (rule assms(3))
show ?thesis
by (rule ntsmcf_tdghm_eqI)
(
use assms in
\<open>
cs_concl
cs_simp: smc_cs_simps slicing_commute[symmetric]
cs_intro:
smc_cs_intros
slicing_intros
dghm_tdghm_comp_tdghm_dghm_comp_assoc
\<close>
)+
qed
lemma smcf_ntsmcf_comp_ntsmcf_vcomp:
assumes "\<KK> : \<BB> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<CC>"
and "\<MM> : \<GG> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<HH> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
and "\<NN> : \<FF> \<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<GG> : \<AA> \<mapsto>\<mapsto>\<^sub>S\<^sub>M\<^sub>C\<^bsub>\<alpha>\<^esub> \<BB>"
shows
"\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) =
(\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM>) \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)"
proof-
interpret \<KK>: is_semifunctor \<alpha> \<BB> \<CC> \<KK> by (rule assms(1))
interpret \<MM>: is_ntsmcf \<alpha> \<AA> \<BB> \<GG> \<HH> \<MM> by (rule assms(2))
interpret \<NN>: is_ntsmcf \<alpha> \<AA> \<BB> \<FF> \<GG> \<NN> by (rule assms(3))
show
"\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>) = \<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>)"
proof(rule ntsmcf_eqI)
have dom_lhs: "\<D>\<^sub>\<circ> ((\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
unfolding dghm_tdghm_comp_components smc_cs_simps by simp
have dom_rhs:
"\<D>\<^sub>\<circ> ((\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>) = \<AA>\<lparr>Obj\<rparr>"
unfolding ntsmcf_vcomp_components smc_cs_simps by simp
show
"(\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr> =
(\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>"
proof(rule vsv_eqI, unfold dom_lhs dom_rhs smc_cs_simps)
fix a assume "a \<in>\<^sub>\<circ> \<AA>\<lparr>Obj\<rparr>"
then show
"(\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>\<lparr>a\<rparr> =
(\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<MM> \<bullet>\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F (\<KK> \<circ>\<^sub>S\<^sub>M\<^sub>C\<^sub>F\<^sub>-\<^sub>N\<^sub>T\<^sub>S\<^sub>M\<^sub>C\<^sub>F \<NN>))\<lparr>NTMap\<rparr>\<lparr>a\<rparr>"
by (cs_concl cs_shallow cs_simp: smc_cs_simps cs_intro: smc_cs_intros)
qed (cs_concl cs_shallow cs_intro: smc_cs_intros)+
qed (cs_concl cs_shallow cs_intro: smc_cs_intros)+
qed
text\<open>\newpage\<close>
end |
-- Andreas, 2011-04-14
-- {-# OPTIONS -v tc.cover:20 -v tc.lhs.unify:20 #-}
module Issue291a where
open import Imports.Coinduction
data _≡_ {A : Set}(a : A) : A -> Set where
refl : a ≡ a
data RUnit : Set where
runit : ∞ RUnit -> RUnit
j : (u : ∞ RUnit) -> ♭ u ≡ runit u -> Set
j u ()
-- needs to fail (reports a Bad split!)
|
State Before: α : Type u_1
β : Type ?u.23422
E : Type u_2
F : Type u_3
G : Type ?u.23431
E' : Type ?u.23434
F' : Type ?u.23437
G' : Type ?u.23440
E'' : Type ?u.23443
F'' : Type ?u.23446
G'' : Type ?u.23449
R : Type ?u.23452
R' : Type ?u.23455
𝕜 : Type ?u.23458
𝕜' : Type ?u.23461
inst✝¹² : Norm E
inst✝¹¹ : Norm F
inst✝¹⁰ : Norm G
inst✝⁹ : SeminormedAddCommGroup E'
inst✝⁸ : SeminormedAddCommGroup F'
inst✝⁷ : SeminormedAddCommGroup G'
inst✝⁶ : NormedAddCommGroup E''
inst✝⁵ : NormedAddCommGroup F''
inst✝⁴ : NormedAddCommGroup G''
inst✝³ : SeminormedRing R
inst✝² : SeminormedRing R'
inst✝¹ : NormedField 𝕜
inst✝ : NormedField 𝕜'
c c' c₁ c₂ : ℝ
f : α → E
g : α → F
k : α → G
f' : α → E'
g' : α → F'
k' : α → G'
f'' : α → E''
g'' : α → F''
l l' : Filter α
h : (fun x => ‖f x‖) =ᶠ[l] fun x => ‖g x‖
⊢ ∀ᶠ (x : α) in l, ‖f x‖ ≤ 1 * ‖g x‖
State After: no goals
Tactic: simpa only [one_mul] using h.le
State Before: α : Type u_1
β : Type ?u.23422
E : Type u_2
F : Type u_3
G : Type ?u.23431
E' : Type ?u.23434
F' : Type ?u.23437
G' : Type ?u.23440
E'' : Type ?u.23443
F'' : Type ?u.23446
G'' : Type ?u.23449
R : Type ?u.23452
R' : Type ?u.23455
𝕜 : Type ?u.23458
𝕜' : Type ?u.23461
inst✝¹² : Norm E
inst✝¹¹ : Norm F
inst✝¹⁰ : Norm G
inst✝⁹ : SeminormedAddCommGroup E'
inst✝⁸ : SeminormedAddCommGroup F'
inst✝⁷ : SeminormedAddCommGroup G'
inst✝⁶ : NormedAddCommGroup E''
inst✝⁵ : NormedAddCommGroup F''
inst✝⁴ : NormedAddCommGroup G''
inst✝³ : SeminormedRing R
inst✝² : SeminormedRing R'
inst✝¹ : NormedField 𝕜
inst✝ : NormedField 𝕜'
c c' c₁ c₂ : ℝ
f : α → E
g : α → F
k : α → G
f' : α → E'
g' : α → F'
k' : α → G'
f'' : α → E''
g'' : α → F''
l l' : Filter α
h : (fun x => ‖f x‖) =ᶠ[l] fun x => ‖g x‖
⊢ ∀ᶠ (x : α) in l, ‖g x‖ ≤ 1 * ‖f x‖
State After: no goals
Tactic: simpa only [one_mul] using h.symm.le |
```python
from IPython.display import HTML
from IPython.display import display
display(HTML("<style>.container { width:70% !important; }</style>"))
```
```python
%matplotlib inline
import numpy as np, scipy, scipy.stats as stats, pandas as pd, matplotlib.pyplot as plt, seaborn as sns
import statsmodels, statsmodels.api as sm
import sympy, sympy.stats
import pymc3 as pm
import daft
import xarray, numba, arviz as az
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
# pd.set_option('display.float_format', lambda x: '%.2f' % x)
np.set_printoptions(edgeitems=10)
np.set_printoptions(linewidth=1000)
np.set_printoptions(suppress=True)
np.core.arrayprint._line_width = 180
SEED = 42
np.random.seed(SEED)
sns.set()
import warnings
warnings.filterwarnings("ignore")
```
This blog post is part of the [Series: Monte Carlo Methods](https://weisser-zwerg.dev/posts/series-monte-carlo-methods/).
You can find this blog post on [weisser-zwerg.dev](https://weisser-zwerg.dev/posts/monte-carlo-markov-chain-monte-carlo/) or on [github](https://github.com/cs224/blog-series-monte-carlo-methods) as either [html](https://rawcdn.githack.com/cs224/blog-series-monte-carlo-methods/main/0020-markov-chain-monte-carlo.html) or via [nbviewer](https://nbviewer.jupyter.org/github/cs224/blog-series-monte-carlo-methods/blob/main/0020-markov-chain-monte-carlo.ipynb?flush_cache=true).
# Markov chain Monte Carlo (MCMC)
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Markov-chain-Monte-Carlo" data-toc-modified-id="Markov-chain-Monte-Carlo-1"><span class="toc-item-num">1 </span>Markov chain Monte Carlo</a></span></li><li><span><a href="#Impl" data-toc-modified-id="Impl-2"><span class="toc-item-num">2 </span>Impl</a></span></li><li><span><a href="#NumPyro" data-toc-modified-id="NumPyro-3"><span class="toc-item-num">3 </span>NumPyro</a></span></li><li><span><a href="#Resources" data-toc-modified-id="Resources-4"><span class="toc-item-num">4 </span>Resources</a></span></li></ul></div>
## Markov chain Monte Carlo
I will not explain the details of why Markov chain Monte Carlo (MCMC) works, because other people have already done a very good job at it; have a look at the [Resources](#Resources) section below. I personally found the following two books very helpful, because they explain the mechanics of MCMC in a discrete set-up, which at least for me helps in building an intuition.
* [Machine Learning: A Bayesian and Optimization Perspective](https://www.amazon.com/-/de/dp/0128188030) by [Sergios Theodoridis](https://sergiostheodoridis.wordpress.com/)
* Chapter 14.7 Markov Chain Monte Carlo Methods
* [Probabilistic Graphical Models: Principles and Techniques](https://www.amazon.com/Probabilistic-Graphical-Models-Principles-Computation/dp/0262013193) by Daphne Koller and Nir Friedman
But all the other mentioned references are very helpful, too.
A Markov chain starts with an initial distribution of starting states $\pi_0(X)$. If you start from a single concrete state, this is a Dirac delta $\pi_0(X) = \delta(X=x_0)$. In addition you have a so-called transition operator $\mathcal{T}(x\to x')$, which is simply a conditional probability in reverse notation, $\mathcal{T}(x\to x')\equiv p(x'\,|\,x)$. This transition operator is also called a *kernel*, $\mathcal{K}(x\to x')\equiv \mathcal{T}(x\to x')\equiv p(x'\,|\,x)$. If you then apply the transition operator / kernel several times in sequence and marginalize over all earlier state variables, you get a sequence of distributions
$
\begin{array}{rcl}
\pi_1(x')&=&\int\pi_0(x)\cdot p(x'\,|\,x)dx=\int\pi_0(x)\cdot \mathcal{T}(x\to x')dx=\int\pi_0(x)\cdot \mathcal{K}(x\to x')dx=\int p(x, x')dx
\end{array}
$
$\pi_0(X)\to \pi_1(X)\to \dots \to \pi_N(X)$.
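
To see the mechanics in the discrete set-up that the two books above use, here is a tiny illustrative sketch of my own (the transition matrix entries are made up): the transition operator becomes a row-stochastic matrix and one step of the chain is just a vector-matrix product.

```python
# Hypothetical two-state chain; T[i, j] = p(x' = j | x = i), each row sums to 1.
T = np.array([[0.90, 0.10],
              [0.05, 0.95]])

pi = np.array([1.0, 0.0])  # pi_0 as a Dirac delta: start deterministically in state 0
for n in range(100):
    pi = pi @ T            # pi_{n+1}(x') = sum_x pi_n(x) * T(x -> x')
print(pi)                  # approaches the stationary distribution [1/3, 2/3]
```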
Just as a recap: the following vocabulary is typically used in conjunction with the conditions that have to hold to make MCMC work:
* [ergodic](https://en.wikipedia.org/wiki/Markov_chain#Ergodicity)
* [homogeneous](https://www.robots.ox.ac.uk/~fwood/teaching/C19_hilary_2015_2016/mcmc.pdf)
* [stationary](https://www.robots.ox.ac.uk/~fwood/teaching/C19_hilary_2015_2016/mcmc.pdf) / [invariant](https://www.robots.ox.ac.uk/~fwood/teaching/C19_hilary_2015_2016/mcmc.pdf)
* [detailed-balance](https://www.robots.ox.ac.uk/~fwood/teaching/C19_hilary_2015_2016/mcmc.pdf)
* [irreducible](https://www.robots.ox.ac.uk/~fwood/teaching/C19_hilary_2015_2016/mcmc.pdf)
* [aperiodic](https://www.robots.ox.ac.uk/~fwood/teaching/C19_hilary_2015_2016/mcmc.pdf)
* [regular](https://www.amazon.com/Probabilistic-Graphical-Models-Principles-Computation/dp/0262013193)
But in the end the goal is to construct a Markov chain that has the target distribution as its stationary (aka invariant) / equilibrium distribution and will converge to this equilibrium distribution in the long run no matter from where you start.
Stationarity is defined by:
$
\begin{array}{rcl}
\pi_S(X=x')&=&\displaystyle\sum_{x\in Val(X)}\pi_S(X=x)\mathcal{T}(x\to x')\\
&=&\displaystyle\int\pi_S(x)\mathcal{T}(x\to x')dx
\end{array}
$
The first line is for the discrete case and the second one is for the continuous case. The subscript $S$ in $\pi_S$ is for *stationary*.
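To make the stationarity condition concrete, here is a minimal sketch (my addition, not from the references above): for a toy two-state chain, repeatedly applying the transition operator drives any starting distribution to the stationary one.

```python
# Added sketch: power iteration on a toy 2-state transition operator.
import numpy as np

T = np.array([[0.9, 0.1],   # rows sum to 1: T[x, x'] = p(x' | x)
              [0.5, 0.5]])
pi = np.array([1.0, 0.0])   # Dirac delta on state 0
for _ in range(50):
    pi = pi @ T             # pi_{k+1}(x') = sum_x pi_k(x) * T(x -> x')
print(pi)                   # ~[0.8333, 0.1667]: satisfies pi = pi @ T
```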
So in the construction process of a transition operator for a target distribution you start by restricting yourself to transition operators that satisfy detailed balance w.r.t. that particular target distribution, because then the target distribution is guaranteed to be the stationary / invariant distribution.
The following equation is the detailed-balance equation:
$
\begin{array}{rcl}
\displaystyle\pi_S(X=x)\mathcal{T}(x\to x')&=&\displaystyle\pi_S(X=x')\mathcal{T}(x'\to x)\\
\end{array}
$
A Markov chain that respects detailed-balance is said to be reversible. Reversibility implies that $\pi_S$ is a stationary distribution of $\mathcal{T}$.
In addition we use *homogeneity*. A Markov chain is *homogeneous* if the transition operator is the same (constant) for every transition we make: $\mathcal{T}_1=\mathcal{T}_2=...=\mathcal{T}_N=\mathcal{T}$. It can be shown that a *homogeneous* Markov chain that possesses a *stationary* distribution (guaranteed via *detailed-balance*) will converge to this single *equilibrium* distribution from any starting point, subject only to weak restrictions on the stationary distribution and the transition probabilities ([Neal 1993](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.8183)).
In the discrete case, *regularity* (see [Probabilistic Graphical Models: Principles and Techniques](https://www.amazon.com/Probabilistic-Graphical-Models-Principles-Computation/dp/0262013193)) plus detailed balance guarantees convergence to the stationary distribution.
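Before moving on, a small numerical illustration of detailed balance (my addition): for a discrete target and a symmetric proposal, the Metropolis transition matrix satisfies the detailed-balance equation, and therefore leaves the target stationary.

```python
# Added sketch: verify detailed balance for a discrete Metropolis kernel.
import numpy as np

pi = np.array([0.2, 0.5, 0.3])            # target distribution on 3 states
Q = np.full((3, 3), 1/3)                  # symmetric (uniform) proposal
T = np.zeros((3, 3))
for x in range(3):
    for y in range(3):
        if x != y:
            T[x, y] = Q[x, y] * min(1.0, pi[y] / pi[x])  # Metropolis acceptance
    T[x, x] = 1.0 - T[x].sum()            # rejected moves stay at x
flow = pi[:, None] * T                    # flow[x, y] = pi(x) * T(x -> y)
assert np.allclose(flow, flow.T)          # detailed balance
assert np.allclose(pi @ T, pi)            # hence pi is stationary
```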
My goal for this blog post is to look deeper into how to combine elements of algorithms in the sense of **fine-grained composable abstractions**
([FCA](https://web.archive.org/web/20130117175652/http://blog.getprismatic.com/blog/2012/4/5/software-engineering-at-prismatic.html)), which does not get a lot of attention in the other resources.
A good starting point for that is to look at the detailed balance equation.
## Impl
```python
# Flip coin 9 times and get 6 heads
samples = np.array([0,0,0,1,1,1,1,1,1])
def fn(a, b):
return lambda p: stats.bernoulli(p).pmf(samples).prod() * stats.beta(a,b).pdf(p)
# convert from omega, kappa parametrization of the beta distribution to the alpha, beta parametrization
def ok2ab(omega, kappa):
return omega*(kappa-2)+1,(1-omega)*(kappa-2)+1
```
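A quick usage check of the two helpers above (my addition): `fn(a, b)` returns the unnormalized posterior under a Beta(a, b) prior, and `ok2ab` converts a mode/concentration pair into the usual alpha/beta parameters.

```python
# Added sanity check of the helpers above.
unnorm_posterior = fn(1, 1)    # Beta(1,1) prior x Bernoulli likelihood
print(unnorm_posterior(0.5))   # unnormalized posterior density at p = 0.5
print(ok2ab(0.5, 10))          # mode 0.5, concentration 10 -> (5.0, 5.0)
```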
```python
@numba.jit(nopython=True)
def bernoulli(p, samples):
    # log-likelihood of i.i.d. Bernoulli(p) observations in `samples`
    r = np.zeros_like(samples, dtype=numba.float64)
for i in range(len(r)):
if samples[i] < 0.5: # == 0
r[i] = np.log(1-p)
else: # == 1
r[i] = np.log(p)
return np.sum(r)
```
```python
bernoulli(0.001, samples)
```
```python
# verify that our implementation delivers the same results as the scipy implementation
stats.bernoulli(0.001).logpmf(samples).sum()
```
```python
@numba.jit(nopython=True)
def logpdf(p):
return bernoulli(p,samples)
@numba.jit(nopython=True)
def mcmc(q_rvs, unif_rvs, trace, logpdf):
    # note: N is read from the global scope and must equal len(q_rvs)
    p = 0.5
for i in range(N):
rv = q_rvs[i]
unifrand = unif_rvs[i]
p_new = p + rv
        log_hastings_ratio = logpdf(p_new) - logpdf(p) # valid only because rv comes from a symmetric proposal; otherwise the Hastings correction q(x|x')/q(x'|x) would be missing
if log_hastings_ratio >= 0.0 or unifrand < np.exp(log_hastings_ratio):
p = p_new
trace[i] = p
return trace
```
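For completeness, here is a hedged sketch (my addition, not part of the implementation above) of what the full Metropolis-Hastings acceptance would look like with an asymmetric proposal; the extra log-ratio term is exactly what the symmetric proposal lets us drop.

```python
# Added sketch: one Metropolis-Hastings step with an asymmetric proposal.
# log_q(a, b) is assumed to return the log proposal density of a given b.
def mh_step(p, logpdf, propose, log_q, unifrand):
    p_new = propose(p)
    log_alpha = (logpdf(p_new) - logpdf(p)
                 + log_q(p, p_new) - log_q(p_new, p))  # Hastings correction
    if log_alpha >= 0.0 or unifrand < np.exp(log_alpha):
        return p_new
    return p
```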
```python
N = 10000
q_rvs = stats.norm(0,0.3).rvs(size=N, random_state=np.random.RandomState(42))
unif_rvs = stats.uniform.rvs(size=N, random_state=np.random.RandomState(42))
trace = np.zeros(N)
mcmc(q_rvs, unif_rvs, trace, logpdf)
trace
```
```python
datadict = {'p': trace}
dataset = az.convert_to_inference_data(datadict)
dataset
```
```python
az.summary(dataset)
```
```python
az.plot_trace(dataset)
```
```python
def kdeplot(lds, parameter_name=None, x_min = None, x_max = None, ax=None, kernel='gau'):
if parameter_name is None and isinstance(lds, pd.Series):
parameter_name = lds.name
kde = sm.nonparametric.KDEUnivariate(lds)
kde.fit(kernel=kernel, fft=False, gridsize=2**10)
if x_min is None:
x_min = kde.support.min()
else:
x_min = min(kde.support.min(), x_min)
if x_max is None:
x_max = kde.support.max()
else:
x_max = max(kde.support.max(), x_max)
x = np.linspace(x_min, x_max,100)
y = kde.evaluate(x)
if ax is None:
plt.figure(figsize=(6, 3), dpi=80, facecolor='w', edgecolor='k')
ax = plt.subplot(1, 1, 1)
ax.plot(x, y, lw=2)
ax.set_xlabel(parameter_name)
ax.set_ylabel('Density')
```
```python
plt.figure(figsize=(15, 8), dpi=80, facecolor='w', edgecolor='k')
ax = plt.subplot(1, 1, 1)
kdeplot(trace, x_min=0.0, x_max=1.0, ax=ax)
x = np.linspace(0.0,1.0,100)
y = stats.beta(1+6,1+3).pdf(x)
ax.plot(x,y)
```
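Note that the overlay is the exact conjugate result: a Beta(1, 1) prior updated with 6 heads out of 9 flips gives a Beta(1+6, 1+3) posterior. A tiny consistency check against the trace (my addition):

```python
# Added check: analytic posterior mean of Beta(7, 4) vs. the MCMC estimate.
print(7 / (7 + 4))   # ~0.6364
print(trace.mean())  # should agree up to Monte Carlo error
```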
```python
with pm.Model() as model:
p = pm.Beta('p', 1.0, 1.0)
yl = pm.Bernoulli('yl', p, observed=samples)
prior = pm.sample_prior_predictive()
posterior = pm.sample(return_inferencedata=True)
posterior_pred = pm.sample_posterior_predictive(posterior)
```
```python
pm.summary(posterior)
```
```python
ldf = pd.DataFrame(datadict)
ldf['w'] = 1.0/len(ldf)
ldf = ldf.sort_values('p').set_index('p')
ldf['c'] = ldf['w'].cumsum()
ldf1 = ldf
ldf1
```
```python
ldf = posterior['posterior']['p'].loc[dict(chain=0)].to_dataframe()
ldf = ldf.drop('chain', axis=1)
ldf['w'] = 1.0/len(ldf)
ldf = ldf.sort_values('p').set_index('p')
ldf['c'] = ldf['w'].cumsum()
ldf2 = ldf
ldf2
```
```python
plt.figure(figsize=(15, 8), dpi=80, facecolor='w', edgecolor='k')
ax = plt.subplot(1, 1, 1)
x = np.linspace(0.0,1.0,100)
y = stats.beta(1+6,1+3).cdf(x)
ax.plot(x,y, label='exact')
ldf1.loc[0.0:1.0, 'c'].plot(ax=ax, label='self-made MCMC')
ldf2.loc[0.0:1.0, 'c'].plot(ax=ax, label='PyMC3')
ax.legend()
```
## NumPyro
```python
# pip install numpyro[cpu]
import numpyro, numpyro.distributions, numpyro.infer
import jax
numpyro.set_platform('cpu')
numpyro.set_host_device_count(4)
```
```python
def coin_flip_example(y=None):
p = numpyro.sample('p', numpyro.distributions.Beta(1,1))
numpyro.sample('obs', numpyro.distributions.Bernoulli(p), obs=y)
```
```python
nuts_kernel = numpyro.infer.NUTS(coin_flip_example)
sample_kwargs = dict(
sampler=nuts_kernel,
num_warmup=1000,
num_samples=1000,
num_chains=4,
chain_method="parallel"
)
mcmc = numpyro.infer.MCMC(**sample_kwargs)
rng_key = jax.random.PRNGKey(0)
mcmc.run(rng_key, y=samples, extra_fields=('potential_energy',))
```
```python
mcmc.print_summary()
```
```python
pyro_trace = az.from_numpyro(mcmc)
pyro_trace
```
```python
az.summary(pyro_trace)
```
```python
az.plot_trace(pyro_trace)
```
## Resources
Text books:
* [Handbook of Markov Chain Monte Carlo](https://www.amazon.com/-/de/dp-B008GXJVF8/dp/B008GXJVF8/) by Steve Brooks, Andrew Gelman, Galin Jones, Xiao-Li Meng
* [Machine Learning: A Bayesian and Optimization Perspective](https://www.amazon.com/-/de/dp/0128188030) by [Sergios Theodoridis](https://sergiostheodoridis.wordpress.com/)
* Chapter 14.7 Markov Chain Monte Carlo Methods
* [Probabilistic Graphical Models: Principles and Techniques](https://www.amazon.com/Probabilistic-Graphical-Models-Principles-Computation/dp/0262013193) by Daphne Koller and Nir Friedman
  * Chapter 12.3 Markov Chain Monte Carlo Methods
* [Bayesian Reasoning and Machine Learning](https://www.amazon.com/Bayesian-Reasoning-Machine-Learning-Barber/dp/0521518148) by David Barber
* Chapter 27.4 Markov chain Monte Carlo (MCMC)
* [Pattern Recognition and Machine Learning](https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/1493938436) by Christopher M. Bishop
* Chapter 11.2 Markov Chain Monte Carlo
* [Machine Learning: A Probabilistic Perspective](https://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020/)
* Chapter 24 Markov chain Monte Carlo (MCMC) inference
* [Doing Bayesian Data Analysis](https://www.amazon.com/-/de/dp/0124058884/): A Tutorial with R, JAGS, and Stan by [John Kruschke](http://doingbayesiandataanalysis.blogspot.com/)
* Chapter 7 Markov Chain Monte Carlo
* [Statistical Rethinking](https://www.amazon.com/-/de/dp/036713991X): A Bayesian Course with Examples in R and STAN by [Richard McElreath](https://elevanth.org/blog/)
* Chapter 9 Markov Chain Monte Carlo
Tutorial:
* [A Tutorial on Markov Chain Monte-Carlo and Bayesian Modeling](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3759243) by Martin B. Haugh
Blog posts:
* [Series of posts on implementing Hamiltonian Monte Carlo](https://discourse.pymc.io/t/series-of-posts-on-implementing-hamiltonian-monte-carlo/3138) by Colin Carroll
* [Hamiltonian Monte Carlo from scratch](https://colindcarroll.com/2019/04/11/hamiltonian-monte-carlo-from-scratch/)
* [Step Size Adaptation in Hamiltonian Monte Carlo](https://colindcarroll.com/2019/04/21/step-size-adaptation-in-hamiltonian-monte-carlo/)
* [Choice of Symplectic Integrator in Hamiltonian Monte Carlo](https://colindcarroll.com/2019/04/28/choice-of-symplectic-integrator-in-hamiltonian-monte-carlo/)
* [Pragmatic Probabilistic Programming](https://colcarroll.github.io/hmc_tuning_talk/)
* [A tour of probabilistic programming language APIs](https://colcarroll.github.io/ppl-api/)
* [minimc](https://github.com/ColCarroll/minimc) ~15 line Hamiltonian Monte Carlo implementation
* [Hamiltonian Monte Carlo in PyMC3](https://colcarroll.github.io/hamiltonian_monte_carlo_talk/bayes_talk.html)
* [Markov Chains: Why Walk When You Can Flow?](https://elevanth.org/blog/2017/11/28/build-a-better-markov-chain/) by Richard McElreath
* [Bayesian Inference Algorithms: MCMC and VI](https://towardsdatascience.com/bayesian-inference-algorithms-mcmc-and-vi-a8dad51ad5f5) by Wicaksono Wijono
Papers:
* [Mixed Hamiltonian Monte Carlo for Mixed Discrete and Continuous Variables](https://arxiv.org/abs/1909.04852) by Guangyao Zhou
* [Guangyao (Stannis) Zhou](https://stanniszhou.github.io/)
* [mixed_hmc](https://github.com/StannisZhou/mixed_hmc) on GitHub
* [MixedHMC](http://num.pyro.ai/en/latest/mcmc.html#numpyro.infer.mixed_hmc.MixedHMC) for NumPyro
* [YouTube](https://www.youtube.com/watch?v=ag44SuB0wB8)
```python
```
|
-- Occurs when there are several ways to parse a left-hand side.
module AmbiguousParseForLHS where
data X : Set where
if_then_else_ : X -> X -> X -> X
if_then_ : X -> X -> X
x : X
bad : X -> X
bad (if x then if x then x else x) = x
bad _ = if x then x
|
Formal statement is: lemma tendsto_divide_zero: fixes c :: "'a::real_normed_field" shows "(f \<longlongrightarrow> 0) F \<Longrightarrow> ((\<lambda>x. f x / c) \<longlongrightarrow> 0) F" Informal statement is: If $f$ tends to $0$ in $F$, then $f/c$ tends to $0$ in $F$. |
[STATEMENT]
lemma joinable_components_eq:
"connected t \<and> t \<subseteq> s \<and> c1 \<in> components s \<and> c2 \<in> components s \<and> c1 \<inter> t \<noteq> {} \<and> c2 \<inter> t \<noteq> {} \<Longrightarrow> c1 = c2"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. connected t \<and> t \<subseteq> s \<and> c1 \<in> components s \<and> c2 \<in> components s \<and> c1 \<inter> t \<noteq> {} \<and> c2 \<inter> t \<noteq> {} \<Longrightarrow> c1 = c2
[PROOF STEP]
by (metis (full_types) components_iff joinable_connected_component_eq) |
[STATEMENT]
lemma r_code_const1_aux_prim: "prim_recfn 3 r_code_const1_aux"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. prim_recfn 3 r_code_const1_aux
[PROOF STEP]
by (simp_all add: r_code_const1_aux_def) |
using Net
using Test
@testset "Net.jl" begin
# Write your tests here.
end
|
require_relative '../rest_provider.rb'
Puppet::Type.type(:xldeploy_user).provide :rest, :parent => Puppet::Provider::XLDeployRestProvider do
confine :feature => :restclient
has_feature :restclient
def create
props = {'@admin' => false, 'username' => resource[:id], 'password' => resource[:password] }
xml = XmlSimple.xml_out(
props,
{
'RootName' => 'user',
'AttrPrefix' => true,
}
)
rest_post "security/user/#{resource[:id]}", xml
end
def destroy
rest_delete "security/user/#{resource[:id]}"
end
def exists?
response = rest_get "security/user/#{resource[:id]}"
if response =~ /Not found/
return false
else
return true
end
end
end
|
# Copyright (c) 2018-2021, Carnegie Mellon University
# See LICENSE for details
#
# Typ ::
#
# Value ::
# t = <type>
# v = <.>
#
# Types:
# TReal
# TComplex
# TInt
# TArray(<type>, <size>)
# TVect(<type>, <vlen>)
#
# Value(<type>, <.>)
# V(<.>) infers type automatically
#
Declare(Value, IsExp, IsValue, TArray, TVect, BitVector, T_UInt);
_evInt := v -> Cond(IsInt(v), v, IsList(v), List(v, i->_evInt(i)), v.ev());
#----------------------------------------------------------------------------------------------
# Typ : data types
#----------------------------------------------------------------------------------------------
Class(TypOps, rec(
Print := x-> When(IsBound(x.print), x.print(), Print(x.__name__)),
\= := RewritableObjectOps.\=,
\< := RewritableObjectOps.\<
));
Class(TypOpsNoPrint, ClassOps, rec(
\= := RewritableObjectOps.\=,
\< := RewritableObjectOps.\<
));
Declare(RangeT);
Class(RangeTOps, PrintOps, rec(
\= := (v1,v2) -> When( ObjId(v1)<>RangeT or ObjId(v2)<>RangeT, false,
v1.max=v2.max and v1.min=v2.min and v1.eps=v2.eps),
\< := (v1,v2) -> Error("Operation '<' is undefined for RangeT."),
\+ := (v1,v2) -> When( ObjId(v1)<>RangeT or ObjId(v2)<>RangeT, Error("'+' is defined for RangeT only"),
RangeT(Min2(v1.min, v2.min), Max2(v1.max, v2.max), Max2(v1.eps, v2.eps))),
\* := (v1,v2) -> When( ObjId(v1)<>RangeT or ObjId(v2)<>RangeT, Error("'*' is defined for RangeT only"),
RangeT(Max2(v1.min, v2.min), Min2(v1.max, v2.max), Max2(v1.eps, v2.eps))),
));
#F RangeT(<min>, <max>, <eps>): data type range
#F <min> smallest value, <max> largest value, <eps> unit roundoff
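#F
#F Example (added for illustration): RangeT(0, 255, 1) + RangeT(-1, 1, 1) gives
#F RangeT(-1, 255, 1), i.e. '+' widens to the hull of the two ranges, while '*'
#F intersects them (see RangeTOps above).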
Class(RangeT, rec(
__call__ := (self, min, max, eps) >>
WithBases(self, rec( min := min, max := max, eps := eps, operations := RangeTOps)),
print := self >> Print(self.__name__, "(", self.min, ", ", self.max, ", ", self.eps, ")"),
));
Class(Typ, rec(
operations := TypOps,
isType := true,
isSigned := self >> true,
doHashValues := false,
check := v -> v,
#normalize := (self, v) >> Value(self,v),
eval := self >> self,
vbase := rec(),
value := meth(self, v)
local ev;
if IsExp(v) then
ev := v.eval();
if IsSymbolic(ev) and not IsValue(ev) then
v.t := self;
return v;
fi;
fi;
if IsValue(v) then return Value.new(self, self.check(v.v));
else return Value.new(self,self.check(v));
fi;
end,
realType := self >> self,
product := (v1, v2) -> v1 * v2,
sum := (v1, v2) -> v1 + v2,
base_t := self >> self, # composite types should return base data type (without recursion).
saturate := abstract(), # (self, v) >> ...
# range should return RangeT
range := abstract(), # (self) >> ...
));
Class(CompositeTyp, Typ, rec(operations := TypOpsNoPrint));
Class(AtomicTyp, Typ, rec(
doHashValues := true,
isAtomic := true,
rChildren := self >> [],
from_rChildren := (self, rch) >> Checked(rch=[], self),
free := self >> Union(List(self.rChildren(), FreeVars)),
vtype := (self,v) >> TVect(self, v),
csize := self >> sizeof(self)
));
IsType := x -> IsRec(x) and IsBound(x.isType) and x.isType;
Class(TFunc, RewritableObject, Typ, rec(
check := v -> v, #Checked(IsFunction(v), v),
product := (v1, v2) -> Error("Can not multiply functions"),
sum := (v1, v2) -> Error("Can not add functions"),
zero := (v1, v2) -> Error("TFunc.zero() is not supported"),
one := (v1, v2) -> Error("TFunc.one() is not supported"),
free := self >> Union(List(self.params, FreeVars)),
updateParams := self >> Checked(ForAll(self.params, e->IsType(e) or IsValue(e) or IsInt(e) or IsSymbolic(e)), true),
csize := self >> sizeof(self)
));
IsFuncT := x -> IsType(x) and ObjId(x)=TFunc;
Class(ValueOps, PrintOps, rec(
\= := (v1,v2) -> Cond(
not IsValue(v2), v1.v=v2,
not IsValue(v1), v1=v2.v,
IsBound(v1.t.vequals), v1.t.vequals(v1.v, v2.v),
IsBound(v2.t.vequals), v2.t.vequals(v1.v, v2.v),
v1.v = v2.v),
\< := (v1,v2) -> Cond(
not IsValue(v2), When(IsRec(v2), ObjId(v1) < ObjId(v2), v1.v < v2),
not IsValue(v1), When(IsRec(v1), ObjId(v1) < ObjId(v2), v1 < v2.v),
v1.v < v2.v)
));
#----------------------------------------------------------------------------------------------
# Value : values or constants
# NB: All values are automatically hashed in GlobalConstantHash
# This can reduce memory footprint, since lots of values are repetitive,
# like 1s and 0s, and also float values that are too close to each other will
# hash to same value (by virtue of ValueOps.\=), which will prevent the compiler
# from putting them in separate registers and thus degrading performance.
#
# This has negligible effect on accuracy, as long as <type>.vequals is valid.
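#
# For example (an added illustration): V(1.0) and V(1.0 + 1e-16) hash to the
# same constant, because TReal.vequals compares doubles only up to the cutoff
# of 1e-15.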
#
#----------------------------------------------------------------------------------------------
Class(Value, rec(
isValue := true,
__call__ := (self, t, v) >> t.value(v),
new := (self,t,v) >> # HashedValue(GlobalConstantHash, <-- this hashes the Value upon construction, disabled now
# due to slowness with large data() blocks, which aren't hashed, unless this option is used
Cond(t.vbase=rec(),
WithBases(self, rec(t:=t, v:=v, operations := ValueOps)),
WithBases(self, Inherit(t.vbase, rec(t:=t, v:=v, operations:=ValueOps)))
),
#),
ev := self >> self.v,
eval := self >> self,
free := self >> Set([]),
from_rChildren := (self, rch) >> self,
# print := self >> Print(self.__name__, "(", self.t, ",", self.v, ")"),
# print := self >> Print(self.v),
print := self >> Cond(IsString(self.v), Print("V(\"",self.v, "\")"), Print("V(", self.v, ")")),
dims := self >> Cond(
IsArrayT(self.t), self.t.dims(),
Error("<self>.dims() is only valid when self.t is a TArray"))
));
IsValue := x -> IsRec(x) and IsBound(x.isValue) and x.isValue;
#----------------------------------------------------------------------------------------------
#----------------------------------------------------------------------------------------------
Declare(TComplex);
Class(TUnknown, AtomicTyp, rec( one := self >> 1, zero := self >> 0));
Class(TVoid, AtomicTyp);
Class(TDummy, AtomicTyp); # used in autolib for Lambda parameters that are ignored
Class(TReal, AtomicTyp, rec(
cutoff := 1e-15,
hash := (val, size) -> 1 + (DoubleRep64(Double(val)) mod size), #(IntDouble(1.0*val*size) mod size),
check := (self,v) >> Cond(
IsExp(v), ReComplex(Complex(code.EvalScalar(v))),
IsInt(v), Double(v),
IsRat(v), v,
IsDouble(v), When(AbsFloat(v) < self.cutoff, 0.0, v),
IsCyc(v), ReComplex(Complex(v)),
IsComplex(v), ReComplex(v),
Error("<v> must be a double or an expression")),
vequals := (self, v1,v2) >> When(
(IsDouble(v1) or IsInt(v1) or IsRat(v1)) and (IsDouble(v2) or IsInt(v2) or IsRat(v2)),
AbsFloat(Double(v1)-Double(v2)) < self.cutoff,
false),
zero := self >> self.value(0.0),
one := self >> self.value(1.0),
strId := self >> "f",
complexType := self >> TComplex,
));
TDouble:=TReal;
#
# format: | sign | integer bits | frac bits |
# make sure we have space at least for the sign bit
#
_fpdouble := (val,b,fb) -> let(res := IntDouble(val * 2.0^fb),
Cond(
val = 1 and (fb = b-1), # we can represent 1 as 0.999999, if we use frac bits only
2^fb - 1,
        val = -1 and (fb = b-1), # we can represent -1 as -0.999999, if we use frac bits only
-(2^fb - 1),
Log2Int(res)+2 > b, Error("Overflow, value=", val, ", signed width=",
Log2Int(res)+2, ", max width=", b),
res));
# format: | integer bits | frac bits |
#
_ufpdouble := (val,b,fb) -> let(res := IntDouble(val * 2.0^fb),
When(Log2Int(res)+1 > b, Error("Overflow, value=", val, ", unsigned width=",
Log2Int(res)+1, ", max width=", b),
res));
#F TFixedPt(<bits>, <fracbits>) -- fixed point data type
#F
#F <bits> -- total # of bits (including sign bit)
#F <fracbits> -- number of fractional bits
#F
#F Number of integer bits is assumed to be bits-1-fracbits (1 = sign bit)
#F
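#F Example (added for illustration): TFixedPt(16, 8).check(1.5) yields 384,
#F since 1.5 * 2^8 = 384 fits into 16 bits including the sign bit.
#F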
Class(TFixedPt, TReal, rec(
operations := TypOpsNoPrint, # NOTE: do not inherit from TReal, and then this line won't be needed
__call__ := (self, bits, fracbits) >> WithBases(self, rec(
bits := bits,
fracbits := fracbits,
operations := TypOps)),
rChildren := self >> [self.bits, self.fracbits],
rSetChild := rSetChildFields("bits", "fracbits"),
from_rChildren := (self, rch) >> ApplyFunc(ObjId(self), rch),
print := self >> Print(self.__name__, "(", self.bits, ", ", self.fracbits, ")"),
check := (self, v) >> _fpdouble(v, self.bits, self.fracbits)
));
# TUFixedPt(<bits>, <fracbits>) -- unsigned fixed-point data type
#
Class(TUFixedPt, TFixedPt);
Class(TComplex, AtomicTyp, rec(
hash := (val, size) -> let(
cplx := Complex(val),
#h := IntDouble(size * (ReComplex(cplx)+ImComplex(cplx))),
h := DoubleRep64(ReComplex(cplx)) + DoubleRep64(ImComplex(cplx)),
1 + (h mod size)),
check := v -> Cond(
BagType(v) in [T_CYC, T_RAT, T_INT], v, # exact representation
IsDouble(v), When(AbsFloat(v) < TReal.cutoff, 0, v),
IsComplex(v), Complex(TReal.check(ReComplex(v)), TReal.check(ImComplex(v))),
IsExp(v), Complex(v.ev())),
realType := self >> TReal,
complexType := self >> self,
zero := self >> self.value(0.0),
one := self >> self.value(1.0),
));
Class(TBool, AtomicTyp, rec(
hash := (val, size) -> 1 + (InternalHash(val) mod size),
check := v -> Cond(IsBool(v), v, Error("<v> must be a boolean")),
one := self >> self.value(true),
zero := self >> self.value(false),
));
Class(TInt_Base, AtomicTyp, rec(
hash := (val, size) -> 1 + (10047871*val mod size),
bits := 32,
check := v -> Cond(IsExp(v), Int(v.ev()),
IsInt(v), v,
IsDouble(v) and IsInt(IntDouble(v)), IntDouble(v),
Error("<v> must be an integer or an expression")),
one := self >> self.value(1),
zero := self >> self.value(0),
complexType := self >> TComplex,
));
Class(TInt, TInt_Base, rec(strId := self >> "i"));
Class(TUInt, TInt_Base, rec(isSigned := False, strId := self >> "ui"));
Class(TULongLong, TInt_Base);
IsChar := (x)->When(BagType(x)=T_CHAR, true, false);
Class(TChar, TInt_Base, rec(
hash := (val, size) -> When(IsChar(val), 1 + (InternalHash(val) mod size), TInt_Base.hash(val, size)),
bits := 8,
check := v -> Cond(IsExp(v), Int(v.ev()),
IsInt(v), v,
IsChar(v), v,
Error("<v> must be an integer or an expression")),
));
Class(TUChar, TInt_Base, rec(
bits := 8,
isSigned := self >> false,
check := v -> Cond(IsExp(v), Int(v.ev()),
IsInt(v), v,
Error("<v> must be an integer or an expression")),
));
Class(TString, AtomicTyp, rec(
doHashValues := true,
hash := (val, size) -> 1 + (InternalHash(val) mod size),
check := v -> Cond(IsString(v), v, Error("<v> must be a string")),
one := self >> Error("TString.one() is not allowed"),
zero := self >> Error("TString.zero() is not allowed"),
));
Class(TList, CompositeTyp, rec(
isListT := true,
hash := (val, size) -> Error("Not implemented"),
__call__ := (self, t) >>
WithBases(self, rec(
t := Checked(IsType(t), t),
operations := PrintOps)),
print := self >> Print(self.__name__, "(", self.t, ")"),
check := v -> Cond(IsList(v), v, Error("<v> must be a list")),
one := self >> Error("TList.one() is not allowed"),
zero := self >> Error("TList.zero() is not allowed"),
rChildren := self >> [self.t],
rSetChild := rSetChildFields("t"),
from_rChildren := (self, rch) >> ApplyFunc(ObjId(self), rch),
));
Class(TSym, CompositeTyp, rec(
hash := (val, size) -> Error("Not implemented"),
check := v -> v,
__call__ := (self, id) >>
WithBases(self, rec(
id := Checked(IsString(id), id),
operations := TypOps)),
rChildren := self >> [self.id],
rSetChild := rSetChildFields("id"),
from_rChildren := (self, rch) >> ApplyFunc(ObjId(self), rch),
print := self >> Print(self.__name__, "(\"", self.id, "\")"),
csize := self >> sizeof(self)
));
#F TArrayBase -- base class for array-like element collection types
#F
#F Subclasses: TPtr, TArray, TVect, BitVector
#F
#F Default constructor:
#F
#F __call__(<element-type>, <size>) - array type of <size> elements of <element-type>
#F
Class(TArrayBase, CompositeTyp, rec(
__call__ := (self, t, size) >>
WithBases(self, rec(
t := Checked(IsType(t), t),
size := Checked(IsPosInt0Sym(size), size),
operations := TypOps)),
hash := (self, val, size) >> (Sum(val, x -> x.t.hash(x.v, size)) mod size) + 1,
rChildren := self >> [self.t, self.size],
rSetChild := rSetChildFields("t", "size"),
from_rChildren := (self, rch) >> ApplyFunc(ObjId(self), rch),
isSigned := self >> self.t.isSigned(),
print := self >> Print(self.__name__, "(", self.t, ", ", self.size, ")"),
check := (self, v) >> Checked(IsList(v), Length(v) = self.size,
ForAll(v, el -> el.t = self.t), v),
# these fields go into values
vbase := rec(
free := self >> Union(List(self.v, e -> e.free())),
rChildren := self >> self.v,
rSetChild := meth(self, n, newC) self.v[n] := newC; end
),
zero := self >> self.value(Replicate(_unwrap(self.size), self.t.zero())),
one := self >> self.value(Replicate(_unwrap(self.size), self.t.one())),
value := (self, v) >> let(vv := When(IsValue(v), v.v, v),
Cond(IsExp(vv), vv,
Checked(IsList(vv),
Value.new(self, List(vv, e->self.t.value(e)))))),
# array type can have free variables in .size field
free := self >> Union(FreeVars(self.size), FreeVars(self.t)),
csize := self >> self.t.csize() * self.size,
realType := self >> ObjId(self)(self.t.realType(), self.size),
base_t := self >> self.t,
range := self >> self.t.range(),
));
Declare(TPtr, TArray);
#F TArray(<element-type>, <size>) - array type of <size> elements of <element-type>
#F
Class(TArray, TArrayBase, rec(
isArrayT := true,
vtype := (self, v) >> TArray(self.t.vtype(v), self.size/v),
toPtrType := self >> TPtr(self.t),
doHashValues := true,
dims := self >> Cond(
ObjId(self.t)=TArray, [self.size] :: self.t.dims(),
[self.size])
));
# [ptrAligned, ptrUnaligned] are TPtr.alignment values
ptrUnaligned := [1,0];
ptrAligned4 := [4,0];
ptrAligned8 := [8,0];
ptrAligned16 := [16,0];
ptrAligned := ptrAligned16;
# NOTE: this is a hack, esp because 16 byte boundary is hardcoded in ptrAligned
# It should be in SpiralDefaults somehow
TArray.alignment := ptrAligned;
TArray.qualifiers := [];
TypeDomain := (dom, els) ->
Cond(Same(dom, Rationals) or Same(dom, Scalars) or Same(dom, Doubles), TReal,
Same(dom, Complexes), TComplex,
Same(dom, Cyclotomics), When(ForAll(els, x->Im(x)=0), TReal, TComplex),
Same(dom, Integers), TInt,
Error("Unrecognized domain <dom>"));
# IsArrayT(<t>) - checks whether <t> is an array type object
IsArrayT := x -> IsType(x) and IsBound(x.isArrayT) and x.isArrayT;
# IsListT(<t>) - checks whether <t> is a list type object
IsListT := x -> IsType(x) and IsBound(x.isListT) and x.isListT;
# IsVecT(<t>) - checks whether <t> is an vector type object
IsVecT := x -> IsType(x) and IsBound(x.isVecT) and x.isVecT;
# IsPtrT(<t>) - checks whether <t> is a pointer type object
IsPtrT := x-> IsType(x) and IsBound(x.isPtrT) and x.isPtrT;
# IsUnalignedPtrT(<t>) - checks whether <t> is a unaligned pointer type object,
# unaligned means aligned with smaller granularity than child type (t.t)
# size.
IsUnalignedPtrT := x -> IsPtrT(x) and x.alignment<>ptrAligned;
# obsolete, use IsArrayT
IsArray := IsArrayT;
Class(TPtr, TArrayBase, rec(
isPtrT := true,
__call__ := arg >> let(self := arg[1],
t := arg[2],
qualifiers := When(IsBound(arg[3]), arg[3], []),
alignment := When(IsBound(arg[4]), arg[4], ptrAligned16),
WithBases(self, rec(
t := Checked(IsType(t), t),
size := 0,
qualifiers := qualifiers,
_restrict := false,
alignment := alignment,
operations := TypOps)).normalizeAlignment()),
# value := (self, v) >> Error("Values of TPtr type are not allowed"),
value := Typ.value,
check := (self, v) >> Cond(
IsList(v), Checked(
Length(v) = self.size,
ForAll(v, el -> el.t = self.t),
v
),
IsInt(v), v,
Error("TPtr needs to either point to an array or some value")
),
# this looks crazy, but sometimes this happens (in LRB backend actually) : X - X
# where X is a pointer. Internally this can become X + (-X), and then becomes 0
isSigned := self >> true,
rChildren := self >> [self.t, self.qualifiers, self.alignment],
rSetChild := rSetChildFields("t", "qualifiers", "alignment"),
zero := self >> TInt.zero(),
one := self >> TInt.one(),
print := self >> Print(self.__name__, "(", self.t,
When(self.qualifiers<>[], Print(", ", self.qualifiers)), ")",
When(self._restrict, ".restrict()", ""),
".aligned(", self.alignment, ")"
),
restrict := (self) >> CopyFields(self, rec(_restrict := true)),
unRestricted := (self) >> CopyFields(self, rec(_restrict := false)),
csize := self >> sizeof(self),
realType := self >> Cond(self._restrict,
ObjId(self)(self.t.realType(), self.qualifiers).restrict(),
ObjId(self)(self.t.realType(), self.qualifiers)
),
aligned := (self, a) >> CopyFields(self, rec( alignment := Checked(IsList(a) and Length(a)=2, a) )).normalizeAlignment(),
unaligned := (self) >> CopyFields(self, rec( alignment := [1,0] )),
normalizeAlignment := meth(self)
if IsValue(self.alignment[2]) then self.alignment[2] := self.alignment[2].v;
elif IsSymbolic(self.alignment[2]) then self.alignment := ptrUnaligned; # NOTE: Conservative assumption
fi;
Constraint(IsInt(self.alignment[2]));
self.alignment[2] := self.alignment[2] mod self.alignment[1];
return self;
end,
withAlignment := (self, value) >> CopyFields(self, rec(
alignment := When(IsPtrT(value), value.alignment, value))),
# things get a little strange here because we allow pointers
# to be set to some int based offset
#
vbase := rec(
free := self >> Cond(
IsList(self.v), Union(List(self.v, e -> e.free())),
IsInt(self.v), [],
Error("hmm.")
),
rChildren := self >> Cond(
IsList(self.v), self.v,
IsInt(self.v), [],
Error("hmm.")
),
rSetChild := meth(arg)
local _self;
_self := arg[1];
if Length(arg) = 3 then
_self.v[arg[2]] := arg[3];
elif Length(arg) = 2 then
_self.v := arg[2];
else
Error("choke");
fi;
end,
),
));
# -- TVect ----------------------------------------------------------------------
Class(TVect, TArrayBase, rec(
isVecT := true,
doHashValues := true,
__call__ := (self, t, size) >> Cond(t=T_UInt(1), BitVector(size), Inherited(t, size)),
product := (v1, v2) -> Checked(IsList(v1) or IsList(v2), let(
vv1 := When(not IsList(v1), Replicate(Length(v2), v1), v1),
vv2 := When(not IsList(v2), Replicate(Length(v1), v2), v2),
l := Length(vv1),
Checked(l = Length(vv2),
List([1..l], i -> vv1[i]*vv2[i])))),
sum := (v1, v2) -> Checked(IsList(v1) or IsList(v2), let(
vv1 := When(not IsList(v1), Replicate(Length(v2), v1), v1),
vv2 := When(not IsList(v2), Replicate(Length(v1), v2), v2),
l := Length(vv1),
Checked(l = Length(vv2),
List([1..l], i -> vv1[i]+vv2[i])))),
value := (self, v) >> Cond( IsValue(v) and self=v.t, v, let(
vv := When(IsValue(v), v.v, v),
Cond(IsExp(vv), vv,
IsList(vv), Value.new(self, List(vv, e -> self.t.value(e))),
<#else#>
Value.new(self, List(Replicate(self.size, vv), e -> self.t.value(e)))))),
saturate := (self, v) >> let( vv := _unwrap(v), Cond(not IsList(vv) or Length(vv)<>self.size, v,
Value.new(self, List(vv, e -> self.t.saturate(e)))) ),
toUnsigned := self >> TVect(self.t.toUnsigned(), self.size),
toSigned := self >> TVect(self.t.toSigned(), self.size),
double := self >> TVect(self.t.double(), self.size/2),
));
IsTVectDouble := x -> IsVecT(x) and x.t = TReal;
TVectDouble := vlen -> TVect(TReal, vlen);
#Class(T_Type, Typ, rec(
# __call__ := (self, bits) >>
# WithBases(self, rec(
# bits := Checked(IsPosInt(bits), bits),
# operations := TypOps)),
#
# hash := (self, val, size) >> 1 + (10047871*val mod size),
#
# rChildren := self >> [],
# rSetChild := self >> Error("This function should not be called"),
# print := self >> Print(self.__name__, "(", self.bits, ")"),
# free := self >> Union(List(self.rChildren(), FreeVars)),
# vtype := (self,v) >> TVect(self, v)
#));
Class(T_Type, RewritableObject, rec(
isType := true,
isSigned := self >> true,
realType := self >> self,
doHashValues := false,
check := v -> v,
vbase := rec(),
value := meth(self, v)
local ev;
if IsExp(v) then
ev := v.eval();
if IsSymbolic(ev) and not IsValue(ev) then
v.t := self;
return v;
fi;
fi;
if IsValue(v) then
return Value.new(self, self.check(v.v));
else
return Value.new(self, self.check(v));
fi;
end,
eval := self >> self,
product := (v1, v2) -> v1 * v2,
sum := (v1, v2) -> v1 + v2,
zero := self >> self.value(0),
one := self >> self.value(1),
csize := self >> sizeof(self),
base_t := self >> self, # composite types should return base type (without recursion).
saturate := abstract(), # (self, v) >> ...
range := abstract(), # (self) >> ...
));
Declare(T_Int, T_UInt, T_Complex);
Class(T_Ord, T_Type, rec(
hash := (val, size) -> 1 + (10047871*val mod size),
saturate := (self, v) >> let( b := self.range(),
Cond( IsExp(v), v, self.value(Max2(b.min, Min2(b.max, _unwrap(v)))))),
));
Class(T_Int, T_Ord, rec(
check := (self, v) >> let(
i := Cond(IsDouble(v), IntDouble(v),
IsRat(v), Int(v),
Error("Can't convert <v> to an integer")),
b := self.params[1],
((i + 2^(b-1)) mod 2^b) - 2^(b-1)),
strId := self >> "i"::StringInt(self.params[1]),
range := self >> RangeT(-2^(self.params[1]-1), 2^(self.params[1]-1)-1, 1),
isSigned := True,
toUnsigned := self >> T_UInt(self.params[1]),
toSigned := self >> self,
double := self >> T_Int(2*self.params[1]),
));
Class(T_UInt, T_Ord, rec(
check := (self, v) >> let(
i := Cond(IsDouble(v), IntDouble(v),
IsRat(v), Int(v),
Error("Can't convert <v> to an integer")),
b := self.params[1],
i mod 2^b),
strId := self >> "ui"::StringInt(self.params[1]),
range := self >> RangeT(0, 2^self.params[1]-1, 1),
isSigned := False,
toUnsigned := self >> self,
toSigned := self >> T_Int(self.params[1]),
double := self >> T_UInt(2*self.params[1]),
));
Class(BitVector, TArrayBase, rec(
isVecT := true,
__call__ := (self, size) >> Inherited(T_UInt(1), size),
print := self >> Print(self.__name__, "(", self.size, ")"),
vbase := rec(
print := self >> let(n:=Length(self.v), Print("h'", HexStringInt(Sum([1..n], i->self.v[i] * 2^(n-i))))),
),
isSigned := self >> false,
rChildren := self >> [self.size],
rSetChild := rSetChildFields("size"),
one := self >> self.value(Replicate(self.size, 1)),
zero := self >> self.value(Replicate(self.size, 0)),
hash := (self, val, size) >> let(n:=Length(val),
1 + (Sum([1..n], i -> val[i] * 2^(n-i)) mod size)),
product := TVect.product,
sum := TVect.sum,
_uint1 := T_UInt(1),
value := (self, v) >> When( IsValue(v) and v.t = self, v,
let(vv := When(IsValue(v), v.v, v),
Cond(IsExp(vv), vv,
Checked(IsList(vv),
Value.new(self, List(vv, e->self._uint1.check(e))))))),
));
Class(T_Real, T_Type, rec(
#correct cutoffs are floor(log10(2^(mantissa bits + 1)))
cutoff := self>>Cond(
self.params[1] = 128, 1e-34,
self.params[1] = 80, 1e-19,
self.params[1] = 64, 1e-15,
self.params[1] = 32, 1e-7,
Error("cutoff not supported")
),
hash := TReal.hash,
check := (self,v) >> let( r := Cond(
IsExp(v), ReComplex(Complex(code.EvalScalar(v))),
IsInt(v), Double(v),
IsRat(v), v,
IsDouble(v), v,
IsCyc(v), ReComplex(Complex(v)),
IsComplex(v), ReComplex(v),
# else
Error("<v> must be a double or an expression")),
When(AbsFloat(r) < self.cutoff(), 0.0, r)),
vequals := (self, v1,v2) >> When(
(IsDouble(v1) or IsInt(v1)) and (IsDouble(v2) or IsInt(v2)),
AbsFloat(Double(v1)-Double(v2)) < self.cutoff(),
false),
zero := self >> self.value(0.0),
one := self >> self.value(1.0),
isSigned := (self) >> true,
strId := self >> "f"::StringInt(self.params[1]),
range := self >> Cond(
self.params[1] = 128, RangeT(
-1.7976931348623157e+308 - 10e291, #INF
1.7976931348623157e+308 + 10e291, #INF
1e-34
),
self.params[1] = 80, RangeT(
-1.7976931348623157e+308 - 10e291, #INF
1.7976931348623157e+308 + 10e291, #INF
1e-19
),
self.params[1] = 64, RangeT(
-1.7976931348623157e+308,
1.7976931348623157e+308,
1.1102230246251565e-016
),
self.params[1] = 32, RangeT(
-3.4028234e+038,
3.4028234e+038,
5.96046448e-008
)),
complexType := self >> T_Complex(self),
));
Class(T_Complex, T_Type, rec(
hash := TComplex.hash,
realType := self >> self.params[1],
complexType := self >> self,
isSigned := self >> self.params[1].isSigned(),
strId := self >> "c"::self.params[1].strId(),
check := (self, v) >> let(
realt := self.params[1],
cpx := Complex(v),
Complex(realt.check(ReComplex(cpx)),
realt.check(ImComplex(cpx))))
));
# # complex type is made up of TWO T_Real, T_Uint, or T_Int types.
# Class(T_Complex, TArrayBase, rec(
# isComplex := true,
# __call__ := (arg) >> let(
# self := arg[1],
# t := arg[2],
# Checked(
# ObjId(t) in [T_Real, T_UInt, T_Int],
# WithBases(self, rec(
# t := t,
# qualifiers := When(Length(arg) > 2, arg[3], []),
# operations := TypOps,
# size := 0
# ))
# )
# ),
# rChildren := self >> [self.t, self.qualifiers],
# rSetChild := rSetChildFields("t", "qualifiers"),
# print := self >> Print(self.__name__, "(", self.t,
# When(self.qualifiers <> [],
# Print(", ", self.qualifiers)
# ),
# ")"
# )
# ));
_IsVar := (e) -> code.IsVar(e);
#F T_Struct: structure type.
#F
#F T_Struct("structname", [<var1>, <var2>, ... , <varN>])
#F
Class(T_Struct, T_Type, rec(
updateParams := meth(self)
Constraint(IsString(self.params[1]));
Constraint(IsList(self.params[2]));
Constraint(ForAll(self.params[2], e -> _IsVar(e)));
end,
getName := self >> self.params[1],
getVars := self >> self.params[2]
));
IsIntT := (t) -> IsType(t) and t in [TChar, TInt] or ObjId(t) = T_Int;
IsUIntT := (t) -> IsType(t) and t in [TUChar, TUInt] or ObjId(t) = T_UInt;
IsOrdT := (t) -> IsIntT(t) or IsUIntT(t);
IsFixedPtT := (t) -> IsType(t) and ObjId(t)=TFixedPt;
IsRealT := (t) -> IsType(t) and t=TReal or ObjId(t)=T_Real;
IsComplexT := (t) -> IsType(t) and t=TComplex or ObjId(t)=T_Complex;
IsOddInt := n -> When(IsValue(n), n.v mod 2 = 1, IsInt(n) and n mod 2 = 1);
IsEvenInt := n -> When(IsValue(n), n.v mod 2 =0, IsInt(n) and n mod 2 = 0);
|
lemma LIMSEQ_realpow_zero: fixes x :: real assumes "0 \<le> x" "x < 1" shows "(\<lambda>n. x ^ n) \<longlonglongrightarrow> 0" |
Formal statement is: lemma le_degree: "coeff p n \<noteq> 0 \<Longrightarrow> n \<le> degree p" Informal statement is: If the coefficient of $x^n$ in a polynomial $p$ is nonzero, then $n$ is less than or equal to the degree of $p$. |
/- Review: Sets as defined by membership predicates
Review: We specify a set of objects of some
type, α, as a predicate, p : α → Prop. Recall
that p takes an argument, a, of type α, and
reduces to a proposition, p_a, about a. If a
"satisfies" p, then it has the property that
p specifies, and otherwise not.
For example, if α is ℕ, and even : ℕ → Prop
:= λ n, n % 2 = 0, then even n will be true
(have a proof) for any even actual parameter,
n, otherwise false. The predicate represents
the set precisely and completely.
-/
-- Logical notation, logical thinking: isEven is a predicate
def isEven : ℕ → Prop := fun n, n % 2 = 0 -- λ expression
-- Set notation, set thinking: evens is a set (a collection of objects)
def evens := { n | isEven n } -- evens is the set of n that satisfy isEven
-- Logical notation, predicate applications yielding propositions
example : evens 0 := rfl
example : evens 2 := rfl
example : evens 4 := rfl -- note: an odd argument such as 1 would be rejected, since 1 % 2 = 0 is false
-- Set member notation for the same predicate applications
example : 0 ∈ evens := rfl
example : 2 ∈ evens := rfl
example : 4 ∈ evens := rfl
/-
So these expressions all mean the same thing:
3 ∈ evens -- set membership notation
evens 3 -- predicate application notation
3 % 2 = 0 -- predicate application reduced to a proposition
-/
/-
Takeaway: To formally specify a set, all you
have to do is specify the membership predicate.
Then you can use that predicate to express your
sets using set builder notation. Step 1: specify
the predicate. Step 2: use it within set builder
expressions. Here's a by now familiar example.
-/
/-
Exercise: use set membership notation to write the next
few test cases: for argument values of 3, 4, and 5. It's
not challenging, but take the opportunity to think through
how these tests work. And marvel at what our proof
assistant does: it is guaranteed to find, and does find,
mismatches between proofs and the propositions they are
claimed to prove (for the odd arguments 3 and 5, Lean will
rightly reject the proof `rfl`).
-/
/-
Exercise: formally specify the set of odd natural numbers,
breaking up your specification into the two parts discussed
above. That is, define isOdd, the predicate, and then odds,
the set, separately. Then state and prove the propositions,
2 ∉ odds, and 3 ∈ odds. (Use example.)
-/ |
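
/- One possible solution sketch for the exercise above (an addition, hedged:
it is not the only way to write it, and the negative case relies on n % 2
reducing definitionally, as in the examples earlier). -/
def isOdd : ℕ → Prop := fun n, n % 2 = 1
def odds := { n | isOdd n }

example : 3 ∈ odds := rfl
example : 2 ∉ odds := fun h, nat.no_confusion h -- h : (0 = 1) is absurd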
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
import scipy.stats as st
import sympy  # keep sympy unaliased: "sp" is reserved for scipy below
import datetime
import scipy as sp
import scipy.optimize  # make sp.optimize available for the solver calls below
sns.set_style("whitegrid")
```
```python
df = pd.read_csv(
"/Users/sean/Desktop/Github/Parametric-Portfolio-Policy/price.csv"
).set_index("Date")
df_price = pd.pivot_table(df, values="adjusted", index=df.index, columns=df["symbol"])
mktcap = pd.pivot_table(df, values="mktcap", index=df.index, columns=df["symbol"])
have_null = df_price.columns[df_price.isna().any()]  # columns with any missing price
df_price = df_price.drop(columns=have_null)
ret = np.log(df_price).diff()
ret = ret.iloc[1::, :]
have_null = mktcap.columns[mktcap.isna().any()]  # columns with any missing market cap
mktcap = mktcap.drop(columns=have_null)
mktcap = mktcap.iloc[1::, :]
m12 = ret.rolling(12).sum()
m12 = m12.iloc[11::, :]
ret = ret.iloc[12::, :]
mktcap = mktcap.iloc[11::, :]
```
```python
ret_rowmean = ret.mean(axis=1)
mktcap_rowmean = mktcap.mean(axis=1)
m12_rowmean = m12.mean(axis=1)
ret_std = ret.std(axis=1)
mkt_std = mktcap.std(axis=1)
m12_std = m12.std(axis=1)
d = {
"Mean(ret)": ret_rowmean,
"Mean(mktcap)": mktcap_rowmean,
"Mean(m12)": m12_rowmean,
"Sd(ret)": ret_std,
"Sd(mktcap)": mkt_std,
"Sd(m12_std)": m12_std,
}
DF = pd.DataFrame(d)
print(DF.iloc[1::, :])
fig = plt.figure(figsize=(20, 10))
DF = DF.reset_index()
for i in range(1, 7):
plt.subplot(2, 3, i)
DF.iloc[:, i].plot()
plt.show()
fig = plt.figure(figsize=(20, 10))
plt.subplot(2, 2, 1)
plt.plot(ret.iloc[:, 0:10])
plt.ylabel("monthly returns")
plt.subplot(2, 2, 2)
plt.plot(m12.iloc[:, 0:10])
plt.ylabel("12 month returns")
plt.subplot(2, 2, 3)
plt.plot(mktcap.iloc[:, 0:10])
plt.ylabel("market captalization")
plt.show()
```
```python
def Scale(y, c=True, sc=True):
"""
If ‘scale’ is
‘TRUE’ then scaling is done by dividing the (centered) columns of
‘x’ by their standard deviations if ‘center’ is ‘TRUE’, and the
root mean square otherwise. If ‘scale’ is ‘FALSE’, no scaling is
done.
The root-mean-square for a (possibly centered) column is defined
as sqrt(sum(x^2)/(n-1)), where x is a vector of the non-missing
values and n is the number of non-missing values. In the case
‘center = TRUE’, this is the same as the standard deviation, but
in general it is not.
"""
x = y.copy()
if c:
x -= x.mean()
if sc and c:
x /= x.std()
elif sc:
x /= np.sqrt(x.pow(2).sum().div(x.count() - 1))
return x
scaled_mktcap = pd.DataFrame(Scale(mktcap.T))
scaled_m12 = pd.DataFrame(Scale(m12.T))
scaled_mktcap = scaled_mktcap.T
scaled_mktcap = scaled_mktcap.iloc[0:109, :]
scaled_m12 = scaled_m12.T
scaled_m12 = scaled_m12.iloc[0:109, :]
def PPS(x, wb, nt, ret, m12, mktcap, rr):
    # parametric portfolio policy: tilt the benchmark weight wb by the
    # characteristic exposures, w_i = wb + nt * (x[0]*m12_i + x[1]*mktcap_i)
    wi = wb + nt * (x[0] * m12 + x[1] * mktcap)
    wret = (wi * ret).sum(axis=1)             # portfolio return per period
    ut = ((1 + wret) ** (1 - rr)) / (1 - rr)  # CRRA utility with risk aversion rr
    u = -(ut.mean())                          # negated: scipy minimizes
    return u
Scaled_m12 = scaled_m12.reset_index()
Scaled_m12 = Scaled_m12.drop("Date", axis=1)
Scaled_mktcap = scaled_mktcap.reset_index()
Scaled_mktcap = Scaled_mktcap.drop("Date", axis=1)
Ret = ret.reset_index()
Ret = Ret.drop("Date", axis=1)
nt = wb = 1 / np.shape(ret)[1]
rr = 5
res_save = []
weights = []
x0 = np.array([0, 0])
for i in range(0, 60):
opt = sp.optimize.minimize(
PPS,
x0,
method="BFGS",
args=(
wb,
nt,
Ret.iloc[0 : 48 + i, :],
Scaled_m12.iloc[0 : 48 + i, :],
Scaled_mktcap.iloc[0 : 48 + i, :],
rr,
),
)
print("The {} window".format(i + 1))
print("The value:", opt["x"])
res_save.append(opt["x"])
w = wb + nt * (
opt["x"][0] * Scaled_m12.iloc[i + 48, :]
+ opt["x"][1] * Scaled_mktcap.iloc[i + 48, :]
)
print(w)
weights.append(w)
index = ret.index[49:110]
char_df = pd.DataFrame(res_save, index=index, columns=["m12", "mktcap"])
fig = plt.figure(figsize=(12, 8))
plt.subplot(2, 1, 1)
char_df["m12"].plot()
plt.title("m12")
plt.ylabel("Parameter Value")
plt.subplot(2, 1, 2)
char_df["mktcap"].plot()
plt.title("mktcap")
plt.ylabel("Parameter Value")
plt.show()
```
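A quick, hedged sanity check of the `Scale` helper defined above (my addition, not part of the original analysis): with centering and scaling enabled it should coincide with plain column-wise standardization.

```python
# Added sanity check: Scale(x) with defaults equals (x - mean) / std, column-wise.
check = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [2.0, 4.0, 8.0]})
manual = (check - check.mean()) / check.std()
assert np.allclose(Scale(check), manual)
```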
```python
weights = pd.DataFrame(weights)
ret_fit = (weights * Ret.tail(60)).sum(axis=1)
ret_EW = (nt * Ret.tail(60)).sum(axis=1)
acc_fit = ret_fit.cumsum()
acc_fitvalue = acc_fit.values
acc_fitvalue = acc_fitvalue[1::]
acc_EW = ret_EW.cumsum()
acc_EWvalue = acc_EW.values
acc_ret = ret.tail(60).cumsum()
top100 = (np.argsort(-acc_ret.tail(1))).iloc[0, :].values[0:100]
acc_top100 = acc_ret.iloc[:, top100].mean(axis=1)
acc_top100value = acc_top100.values
acc_df = pd.DataFrame(index=index)
acc_df["opt"] = acc_fitvalue
acc_df["EW"] = acc_EWvalue
acc_df["top100"] = acc_top100value
acc_df.plot(figsize=(12, 8))
plt.ylabel("Cumulative Return")
plt.show()
```
```python
weights.index = index
lv = (weights[weights < 0]).sum(axis=1).reindex(index)
lv.plot(figsize=(12,8))
plt.ylabel('Leverage')
plt.gca().invert_yaxis()
```
```python
def PPS1(x, wb, nt, ret, m12, mktcap, rr):
    wi = wb + nt * (x[0] * m12 + x[1] * mktcap)
    wret = (wi * ret).sum(axis=1)
    ut = ((1 + wret) ** (1 - rr)) / (1 - rr)
    u = -(ut.mean())
    Lv = abs((wi[wi < 0]).sum(axis=1))  # leverage: total short weight per period
    pen = abs((Lv[Lv > 2] * Lv).sum())  # penalize periods whose leverage exceeds 2
    return (u + pen)
### Re-run the optimization, now with the leverage-penalized objective
nt = wb = 1/np.shape(ret)[1]
rr = 5
guess = np.array([0,0])
res_save_pen = []
weights_pen = []
for i in range(0,60):
opt = sp.optimize.minimize(PPS1, guess, method='BFGS', args=(wb, nt, Ret.iloc[0:48+i,:],
Scaled_m12.iloc[0:48+i,:], Scaled_mktcap.iloc[0:48+i,:],
rr))
print('The {} window'.format(i+1))
print('The value:', opt['x'])
res_save_pen.append(opt['x'])
w = wb + nt*(opt['x'][0]*Scaled_m12.iloc[i + 48,:] + opt['x'][1]*Scaled_mktcap.iloc[i + 48,:])
print(w)
weights_pen.append(w)
```
    The 1 window
    The value: [-0.9543712 -7.51789321]
    symbol
    A       0.009796
    AAL     0.006203
    AAP     0.013160
    ABC     0.009796
    ABT    -0.014175
    ...
    XRAY    0.009570
    XRX     0.012427
    YUM     0.007635
    ZBH     0.005968
    ZION    0.009151
    Name: 48, Length: 453, dtype: float64

    [... analogous output for windows 2 through 59 omitted: each window
    prints the fitted [m12, mktcap] coefficients followed by the resulting
    453 portfolio weights ...]

    The 60 window
    The value: [ 0.54386191 -7.07371611]
    symbol
    A       0.008645
    AAL     0.006943
    AAP     0.007084
    ABC     0.008990
    ABT    -0.009673
    ...
    XRAY    0.008761
    XRX     0.011584
    YUM     0.006559
    ZBH     0.006777
    ZION    0.010154
    Name: 107, Length: 453, dtype: float64
```python
# collect the per-window weight vectors into one DataFrame
weights_pen = pd.DataFrame(weights_pen)

# daily returns over the 60-day out-of-sample window:
# penalised optimiser vs. the 'EW' benchmark built from the weights nt
ret_fit_pen = (weights_pen*Ret.tail(60)).sum(axis=1)
ret_EW_pen = (nt*Ret.tail(60)).sum(axis=1)

# cumulative returns of both strategies
acc_fit_pen = ret_fit_pen.cumsum()
acc_fitpenvalue = acc_fit_pen.values
acc_EW_pen = ret_EW_pen.cumsum()
acc_EWpenvalue = acc_EW_pen.values

# assemble all strategies for comparison; 'opt' and 'top100' reuse the
# unpenalised results computed earlier
acc_df_pen = pd.DataFrame()
acc_df_pen['opt'] = acc_fitvalue
acc_fitpenvalue = acc_fitpenvalue[1:]  # drop the first value to match the earlier series
acc_df_pen['opt_pen'] = acc_fitpenvalue
acc_df_pen['EW'] = acc_EWpenvalue
acc_df_pen['top100'] = acc_top100value
acc_df_pen.index = index
acc_df_pen.plot(figsize=(12,8))
plt.ylabel('Cumulative Return')
```
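A single risk-adjusted number per strategy complements the cumulative-return plot. Below is a minimal sketch: it assumes the per-period series above (`ret_fit_pen`, `ret_EW_pen`) are daily simple returns, a zero risk-free rate, and 252 trading days per year.
```python
import numpy as np

def sharpe(returns, periods_per_year=252):
    """Annualised Sharpe ratio of a per-period return series (rf ~ 0)."""
    r = np.asarray(returns, dtype=float)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

for name, r in [('opt_pen', ret_fit_pen), ('EW', ret_EW_pen)]:
    print(f"{name}: Sharpe = {sharpe(r):.2f}")
```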
|
/-
Copyright (c) 2021 Jannis Limperg. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Jannis Limperg
-/
-- TODO clean up this test case
import Lean
open Lean
open Lean.Meta
open Lean.Elab.Tactic
inductive Even : Nat → Prop
| zero : Even Nat.zero
| plus_two {n} : Even n → Even (Nat.succ (Nat.succ n))
inductive Odd : Nat → Prop
| one : Odd (Nat.succ Nat.zero)
| plus_two {n} : Odd n → Odd (Nat.succ (Nat.succ n))
inductive EvenOrOdd : Nat → Prop
| even {n} : Even n → EvenOrOdd n
| odd {n} : Odd n → EvenOrOdd n
attribute [aesop 50%] EvenOrOdd.even EvenOrOdd.odd
attribute [aesop safe] Even.zero Even.plus_two
attribute [aesop 100%] Odd.one Odd.plus_two
def EvenOrOdd' (n : Nat) : Prop := EvenOrOdd n
@[aesop norm (builder tactic)]
def testNormTactic : TacticM Unit := do
evalTactic (← `(tactic|try simp only [EvenOrOdd']))
set_option pp.all false
set_option trace.Aesop.RuleSet false
set_option trace.Aesop.Steps false
example : EvenOrOdd' 3 := by aesop
-- In this example, the goal is solved already during normalisation.
example : 0 = 0 := by aesop
|
State Before: α : Type u_1
β : Type u_2
γ : Type ?u.5233
δ : Type ?u.5236
a : α
b : β
x : α
⊢ (a, b).fst = x ↔ (a, b) = (x, (a, b).snd) State After: no goals Tactic: simp |
Formal statement is: lemma degree_sum_le: "finite S \<Longrightarrow> (\<And>p. p \<in> S \<Longrightarrow> degree (f p) \<le> n) \<Longrightarrow> degree (sum f S) \<le> n" Informal statement is: If $f$ is a function from a finite set $S$ to polynomials of degree at most $n$, then the sum of the polynomials $f(s)$ for $s \in S$ has degree at most $n$. |
{-# OPTIONS --cubical --safe #-}
module Cubical.Relation.Everything where
open import Cubical.Relation.Nullary public
open import Cubical.Relation.Nullary.DecidableEq public
open import Cubical.Relation.Binary public
|
Then Jill came in, and she did grin,
|
\name{widthDetails.legend_body}
\alias{widthDetails.legend_body}
\title{
Grob width for legend_body
}
\description{
Grob width for legend_body
}
\usage{
\method{widthDetails}{legend_body}(x)
}
\arguments{
\item{x}{A legend_body object.}
}
\examples{
# There is no example
NULL
}
|
/****************************************************************************
** Copyright (c) 2021, Fougue Ltd. <http://www.fougue.pro>
** All rights reserved.
** See license at https://github.com/fougue/mayo/blob/master/LICENSE.txt
****************************************************************************/
#pragma once
#include <gsl/span>
namespace Mayo {
template<typename T> using Span = gsl::span<T>;
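// Usage sketch (illustrative; printAll is hypothetical, not part of Mayo):
//   void printAll(Span<const int> values);  // non-owning, read-only view
//   std::vector<int> v{1, 2, 3};
//   printAll(v);  // gsl::span converts implicitly from contiguous containers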
} // namespace Mayo
|
/-
Copyright (c) 2019 Simon Hudon. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Simon Hudon
-/
import category_theory.category.basic
/-!
# Tools to reformulate category-theoretic axioms in a more associativity-friendly way
## The `reassoc` attribute
The `reassoc` attribute can be applied to a lemma
```lean
@[reassoc]
lemma some_lemma : foo ≫ bar = baz := ...
```
and produce
```lean
lemma some_lemma_assoc {Y : C} (f : X ⟶ Y) : foo ≫ bar ≫ f = baz ≫ f := ...
```
The name of the produced lemma can be specified with `@[reassoc other_lemma_name]`. If
`simp` is added first, the generated lemma will also have the `simp` attribute.
## The `reassoc_axiom` command
When declaring a class of categories, the axioms can be reformulated to be more amenable
to manipulation in right associated expressions:
```lean
class some_class (C : Type) [category C] :=
(foo : Π X : C, X ⟶ X)
(bar : ∀ {X Y : C} (f : X ⟶ Y), foo X ≫ f = f ≫ foo Y)
reassoc_axiom some_class.bar
```
Here too, the `reassoc` attribute can be used instead. It works well when combined with
`simp`:
```lean
attribute [simp, reassoc] some_class.bar
```
-/
namespace tactic
open category_theory
/-- From an expression `f ≫ g`, extract the expression representing the category instance. -/
meta def get_cat_inst : expr → tactic expr
| `(@category_struct.comp _ %%struct_inst _ _ _ _ _) := pure struct_inst
| _ := failed
/-- (internals for `@[reassoc]`)
Given a lemma of the form `∀ ..., f ≫ g = h`, proves a new lemma of the form
`h : ∀ ... {W} (k), f ≫ (g ≫ k) = h ≫ k`, and returns the type and proof of this lemma.
-/
meta def prove_reassoc (h : expr) : tactic (expr × expr) :=
do
(vs,t) ← infer_type h >>= open_pis,
(lhs,rhs) ← match_eq t,
struct_inst ← get_cat_inst lhs <|> get_cat_inst rhs <|> fail "no composition found in statement",
`(@quiver.hom _ %%hom_inst %%X %%Y) ← infer_type lhs,
C ← infer_type X,
X' ← mk_local' `X' binder_info.implicit C,
ft ← to_expr ``(@quiver.hom _ %%hom_inst %%Y %%X'),
f' ← mk_local_def `f' ft,
t' ← to_expr ``(@category_struct.comp _ %%struct_inst _ _ _%%lhs %%f' =
@category_struct.comp _ %%struct_inst _ _ _ %%rhs %%f'),
let c' := h.mk_app vs,
(_,pr) ← solve_aux t' (rewrite_target c'; reflexivity),
pr ← instantiate_mvars pr,
let s := simp_lemmas.mk,
s ← s.add_simp ``category.assoc,
s ← s.add_simp ``category.id_comp,
s ← s.add_simp ``category.comp_id,
(t'', pr', _) ← simplify s [] t',
pr' ← mk_eq_mp pr' pr,
t'' ← pis (vs ++ [X',f']) t'',
pr' ← lambdas (vs ++ [X',f']) pr',
pure (t'',pr')
/-- (implementation for `@[reassoc]`)
Given a declaration named `n` of the form `∀ ..., f ≫ g = h`, proves a new lemma named `n'`
of the form `∀ ... {W} (k), f ≫ (g ≫ k) = h ≫ k`.
-/
meta def reassoc_axiom (n : name) (n' : name := n.append_suffix "_assoc") : tactic unit :=
do d ← get_decl n,
let ls := d.univ_params.map level.param,
let c := @expr.const tt n ls,
(t'',pr') ← prove_reassoc c,
add_decl $ declaration.thm n' d.univ_params t'' (pure pr'),
copy_attribute `simp n n'
setup_tactic_parser
/--
The `reassoc` attribute can be applied to a lemma
```lean
@[reassoc]
lemma some_lemma : foo ≫ bar = baz := ...
```
to produce
```lean
lemma some_lemma_assoc {Y : C} (f : X ⟶ Y) : foo ≫ bar ≫ f = baz ≫ f := ...
```
The name of the produced lemma can be specified with `@[reassoc other_lemma_name]`. If
`simp` is added first, the generated lemma will also have the `simp` attribute.
-/
@[user_attribute]
meta def reassoc_attr : user_attribute unit (option name) :=
{ name := `reassoc,
descr := "create a companion lemma for associativity-aware rewriting",
parser := optional ident,
after_set := some (λ n _ _,
do some n' ← reassoc_attr.get_param n | reassoc_axiom n (n.append_suffix "_assoc"),
reassoc_axiom n $ n.get_prefix ++ n' ) }
add_tactic_doc
{ name := "reassoc",
category := doc_category.attr,
decl_names := [`tactic.reassoc_attr],
tags := ["category theory"] }
/--
When declaring a class of categories, the axioms can be reformulated to be more amenable
to manipulation in right associated expressions:
```lean
class some_class (C : Type) [category C] :=
(foo : Π X : C, X ⟶ X)
(bar : ∀ {X Y : C} (f : X ⟶ Y), foo X ≫ f = f ≫ foo Y)
reassoc_axiom some_class.bar
```
The above will produce:
```lean
lemma some_class.bar_assoc {Z : C} (g : Y ⟶ Z) :
foo X ≫ f ≫ g = f ≫ foo Y ≫ g := ...
```
Here too, the `reassoc` attribute can be used instead. It works well when combined with
`simp`:
```lean
attribute [simp, reassoc] some_class.bar
```
-/
@[user_command]
meta def reassoc_cmd (_ : parse $ tk "reassoc_axiom") : lean.parser unit :=
do n ← ident,
of_tactic $
do n ← resolve_constant n,
reassoc_axiom n
add_tactic_doc
{ name := "reassoc_axiom",
category := doc_category.cmd,
decl_names := [`tactic.reassoc_cmd],
tags := ["category theory"] }
namespace interactive
/-- `reassoc h`, for assumption `h : x ≫ y = z`, creates a new assumption
`h : ∀ {W} (f : Z ⟶ W), x ≫ y ≫ f = z ≫ f`.
`reassoc! h`, does the same but deletes the initial `h` assumption.
(You can also add the attribute `@[reassoc]` to lemmas to generate new declarations generalized
in this way.)
-/
meta def reassoc (del : parse (tk "!")?) (ns : parse ident*) : tactic unit :=
do ns.mmap' (λ n,
do h ← get_local n,
(t,pr) ← prove_reassoc h,
assertv n t pr,
when del.is_some (tactic.clear h) )
end interactive
def calculated_Prop {α} (β : Prop) (hh : α) := β
meta def derive_reassoc_proof : tactic unit :=
do `(calculated_Prop %%v %%h) ← target,
(t,pr) ← prove_reassoc h,
unify v t,
exact pr
end tactic
/-- With `h : x ≫ y ≫ z = x` (with universal quantifiers tolerated),
`reassoc_of h : ∀ {X'} (f : W ⟶ X'), x ≫ y ≫ z ≫ f = x ≫ f`.
The type and proof of `reassoc_of h` are generated by `tactic.derive_reassoc_proof`,
which makes `reassoc_of` meta-programming adjacent. It is not called as a tactic but as
an expression. The goal is to avoid creating assumptions that are dismissed after one use:
```lean
example (X Y Z W : C) (x : X ⟶ Y) (y : Y ⟶ Z) (z z' : Z ⟶ W) (w : X ⟶ Z)
(h : x ≫ y = w)
(h' : y ≫ z = y ≫ z') :
x ≫ y ≫ z = w ≫ z' :=
begin
rw [h',reassoc_of h],
end
```
-/
theorem category_theory.reassoc_of {α} (hh : α) {β}
(x : tactic.calculated_Prop β hh . tactic.derive_reassoc_proof) : β := x
/--
`reassoc_of h` takes local assumption `h` and add a ` ≫ f` term on the right of
both sides of the equality. Instead of creating a new assumption from the result, `reassoc_of h`
stands for the proof of that reassociated statement. This keeps complicated assumptions that are
used only once or twice from polluting the local context.
In the following, assumption `h` is needed in a reassociated form. Instead of proving it as a new
goal and adding it as an assumption, we use `reassoc_of h` as a rewrite rule which works just as
well.
```lean
example (X Y Z W : C) (x : X ⟶ Y) (y : Y ⟶ Z) (z z' : Z ⟶ W) (w : X ⟶ Z)
(h : x ≫ y = w)
(h' : y ≫ z = y ≫ z') :
x ≫ y ≫ z = w ≫ z' :=
begin
-- reassoc_of h : ∀ {X' : C} (f : W ⟶ X'), x ≫ y ≫ f = w ≫ f
rw [h',reassoc_of h],
end
```
Although `reassoc_of` is not a tactic or a meta program, its type is generated
through meta-programming to make it usable inside normal expressions.
-/
add_tactic_doc
{ name := "category_theory.reassoc_of",
category := doc_category.tactic,
decl_names := [`category_theory.reassoc_of],
tags := ["category theory"] }
|
Require Import Crypto.Arithmetic.PrimeFieldTheorems.
Require Import Crypto.Specific.solinas32_2e256m189_12limbs.Synthesis.
(* TODO : change this to field once field isomorphism happens *)
Definition carry :
{ carry : feBW_loose -> feBW_tight
| forall a, phiBW_tight (carry a) = (phiBW_loose a) }.
Proof.
Set Ltac Profiling.
Time synthesize_carry ().
Show Ltac Profile.
Time Defined.
Print Assumptions carry.
|
function v = compute_rbf(x,d,xi, name)
% COMPUTE_RBF Radial basis function (RBF) interpolation.
%   v = compute_rbf(x,d,xi,name) fits an RBF interpolant to the data
%   values d sampled at locations x and evaluates it at the points xi.
%   The kernel is selected by name: 'abs', 'gauss', 'poly3', 'sqrt'
%   or 'thinplate'.

% select the radial kernel f
switch name
    case 'abs'
        f = @(x)abs(x);
    case 'gauss'
        sigma = .03;
        f = @(x)exp(-x.^2/(2*sigma^2));
    case 'poly3'
        q = 3;
        f = @(x)abs(x).^q;
    case 'sqrt'
        q = .5;
        f = @(x)abs(x).^q;
    case 'thinplate'
        f = @(x)(x.^2).*log(abs(x)+.001);
end
n = length(xi);
m = length(x);
% pairwise interpolation matrix D(i,j) = f(x(i)-x(j))
[Y,X] = meshgrid(x,x);
D = f(X-Y);
% solve for the interpolation weights (pseudo-inverse for stability)
a = pinv(D)*d;
% evaluate the weighted kernel sum at the target points xi
v = f( repmat(xi, [1 m]) - repmat(x', [n 1]) ) .* repmat( a', [n 1] );
v = sum(v, 2);
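% Example usage (illustrative; assumes x, xi are column vectors):
%   x  = linspace(0,1,20)';        % sample locations
%   d  = sin(2*pi*x);              % data values at the samples
%   xi = linspace(0,1,200)';       % evaluation grid
%   v  = compute_rbf(x, d, xi, 'gauss');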
|
module View where
open import Data.Unit
open import Data.Char
open import Data.Maybe
open import Data.Container.Indexed
open import Signature
open import Model
open import IO.Primitive
------------------------------------------------------------------------
postulate
view : Model → IO ⊤
data VtyEvent : Set where
key : Char → VtyEvent
enter : VtyEvent
vtyToCmd : VtyEvent → (m : Mode) → Maybe (Command Hårss m)
vtyToCmd (key 'j') cmd = just (moveP down)
vtyToCmd (key 'k') cmd = just (moveP up)
vtyToCmd (key 'a') cmd = just (promptP addFeed)
vtyToCmd (key '\t') cmd = just (promptP search)
vtyToCmd (key 'R') cmd = just fetchP
vtyToCmd _ cmd = nothing
vtyToCmd enter (input p) = just doneP
vtyToCmd (key '\t') (input search) = just searchNextP
vtyToCmd (key c) (input p) = just (putCharP c)
|
function A = makehatch_plus(hatch,n,m)
%MAKEHATCH_PLUS Predefined hatch patterns
%
% Modification of MAKEHATCH to allow for selection of matrix size. Useful whe using
% APPLYHATCH_PLUS with higher resolution output.
%
% input (optional) N size of hatch matrix (default = 6)
% input (optional) M width of lines and dots in hatching (default = 1)
%
% MAKEHATCH_PLUS(HATCH,N,M) returns a matrix with the hatch pattern for HATCH
% according to the following table:
% HATCH pattern
% ------- ---------
% / right-slanted lines
% \ left-slanted lines
% | vertical lines
% - horizontal lines
% + crossing vertical and horizontal lines
% x criss-crossing lines
% . square dots
% c circular dots
% w Just a blank white pattern
% k Just a totally black pattern
%
% See also: APPLYHATCH, APPLYHATCH_PLUS, APPLYHATCH_PLUSCOLOR, MAKEHATCH
% By Ben Hinkle, [email protected]
% This code is in the public domain.
% Modified Brian FG Katz 8-aout-03
% Modified David M Kaplan 19-fevrier-08
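%
% Example (illustrative):
%   A = makehatch_plus('x',12,2); % 12x12 criss-cross pattern, 2-pixel lines
%   imagesc(A), colormap(gray), axis equal tight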
if ~exist('n','var'), n = 6; end
if ~exist('m','var'), m = 1; end
n=round(n);
switch (hatch)
case '\'
[B,C] = meshgrid( 0:n-1 );
B = B-C;
clear C
A = abs(B) <= m/2;
A = A | abs(B-n) <= m/2;
A = A | abs(B+n) <= m/2;
case '/'
A = fliplr(makehatch_plus('\',n,m));
case '|'
A=zeros(n);
A(:,1:m) = 1;
case '-'
A = makehatch_plus('|',n,m);
A = A';
case '+'
A = makehatch_plus('|',n,m);
A = A | A';
case 'x'
A = makehatch_plus('\',n,m);
A = A | fliplr(A);
case '.'
A=zeros(n);
A(1:2*m,1:2*m)=1;
case 'c'
[B,C] = meshgrid( 0:n-1 );
A = sqrt(B.^2+C.^2) <= m;
A = A | fliplr(A) | flipud(A) | flipud(fliplr(A));
case 'w'
A = zeros(n);
case 'k'
A = ones(n);
otherwise
error(['Undefined hatch pattern "' hatch '".']);
end |
/-
Copyright (c) 2021 Mario Carneiro. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro
-/
import tactic.hint
/-!
# Intuitionistic tautology (`itauto`) decision procedure
The `itauto` tactic will prove any intuitionistic tautology. It implements the well known
`G4ip` algorithm:
[Dyckhoff, *Contraction-free sequent calculi for intuitionistic logic*][dyckhoff_1992].
All built in propositional connectives are supported: `true`, `false`, `and`, `or`, `implies`,
`not`, `iff`, `xor`, as well as `eq` and `ne` on propositions. Anything else, including definitions
and predicate logical connectives (`forall` and `exists`), are not supported, and will have to be
simplified or instantiated before calling this tactic.
The resulting proofs will never use any axioms except possibly `propext`, and `propext` is only
used if the input formula contains an equality of propositions `p = q`.
## Implementation notes
The core logic of the prover is in three functions:
* `prove : context → prop → state_t ℕ option proof`: The main entry point.
Gets a context and a goal, and returns a `proof` object or fails, using `state_t ℕ` for the name
generator.
* `search : context → prop → state_t ℕ option proof`: Same meaning as `prove`, called during the
search phase (see below).
* `context.add : prop → proof → context → except (prop → proof) context`: Adds a proposition with
its proof into the context, but it also does some simplifications on the spot while doing so.
It will either return the new context, or if it happens to notice a proof of false, it will
return a function to compute a proof of any proposition in the original context.
The intuitionistic logic rules are separated into three groups:
* level 1: No splitting, validity preserving: apply whenever you can.
Left rules in `context.add`, right rules in `prove`.
* `context.add`:
* simplify `Γ, ⊤ ⊢ B` to `Γ ⊢ B`
* `Γ, ⊥ ⊢ B` is true
* simplify `Γ, A ∧ B ⊢ C` to `Γ, A, B ⊢ C`
* simplify `Γ, ⊥ → A ⊢ B` to `Γ ⊢ B`
* simplify `Γ, ⊤ → A ⊢ B` to `Γ, A ⊢ B`
* simplify `Γ, A ∧ B → C ⊢ D` to `Γ, A → B → C ⊢ D`
* simplify `Γ, A ∨ B → C ⊢ D` to `Γ, A → C, B → C ⊢ D`
* `prove`:
* `Γ ⊢ ⊤` is true
* simplify `Γ ⊢ A → B` to `Γ, A ⊢ B`
* `search`:
* `Γ, P ⊢ P` is true
* simplify `Γ, P, P → A ⊢ B` to `Γ, P, A ⊢ B`
* level 2: Splitting rules, validity preserving: apply after level 1 rules. Done in `prove`
* simplify `Γ ⊢ A ∧ B` to `Γ ⊢ A` and `Γ ⊢ B`
* simplify `Γ, A ∨ B ⊢ C` to `Γ, A ⊢ C` and `Γ, B ⊢ C`
* level 3: Splitting rules, not validity preserving: apply only if nothing else applies.
Done in `search`
* `Γ ⊢ A ∨ B` follows from `Γ ⊢ A`
* `Γ ⊢ A ∨ B` follows from `Γ ⊢ B`
* `Γ, (A₁ → A₂) → C ⊢ B` follows from `Γ, A₂ → C, A₁ ⊢ A₂` and `Γ, C ⊢ B`
This covers the core algorithm, which only handles `true`, `false`, `and`, `or`, and `implies`.
For `iff` and `eq`, we treat them essentially the same as `(p → q) ∧ (q → p)`, although we use
a different `prop` representation because we have to remember to apply different theorems during
replay. For definitions like `not` and `xor`, we just eagerly unfold them. (This could potentially
cause a blowup issue for `xor`, but it isn't used very often anyway. We could add it to the `prop`
grammar if it matters.)
## Tags
propositional logic, intuitionistic logic, decision procedure
-/
namespace tactic
namespace itauto
/-- Different propositional constructors that are variants of "and" for the purposes of the
theorem prover. -/
@[derive [has_reflect, decidable_eq]]
inductive and_kind | and | iff | eq
instance : inhabited and_kind := ⟨and_kind.and⟩
/-- A reified inductive type for propositional logic. -/
@[derive [has_reflect, decidable_eq]]
inductive prop : Type
| var : ℕ → prop -- propositional atoms P_i
| true : prop -- ⊤
| false : prop -- ⊥
| and' : and_kind → prop → prop → prop -- p ∧ q, p ↔ q, p = q
| or : prop → prop → prop -- p ∨ q
| imp : prop → prop → prop -- p → q
/-- Constructor for `p ∧ q`. -/
@[pattern] def prop.and : prop → prop → prop := prop.and' and_kind.and
/-- Constructor for `p ↔ q`. -/
@[pattern] def prop.iff : prop → prop → prop := prop.and' and_kind.iff
/-- Constructor for `p = q`. -/
@[pattern] def prop.eq : prop → prop → prop := prop.and' and_kind.eq
/-- Constructor for `¬ p`. -/
@[pattern] def prop.not (a : prop) : prop := a.imp prop.false
/-- Constructor for `xor p q`. -/
@[pattern] def prop.xor (a b : prop) : prop := (a.and b.not).or (b.and a.not)
instance : inhabited prop := ⟨prop.true⟩
/-- Given the contents of an `and` variant, return the two conjuncts. -/
def and_kind.sides : and_kind → prop → prop → prop × prop
| and_kind.and A B := (A, B)
| _ A B := (A.imp B, B.imp A)
/-- Debugging printer for propositions. -/
meta def prop.to_format : prop → format
| (prop.var i) := format!"v{i}"
| prop.true := format!"⊤"
| prop.false := format!"⊥"
| (prop.and p q) := format!"({p.to_format} ∧ {q.to_format})"
| (prop.iff p q) := format!"({p.to_format} ↔ {q.to_format})"
| (prop.eq p q) := format!"({p.to_format} = {q.to_format})"
| (prop.or p q) := format!"({p.to_format} ∨ {q.to_format})"
| (prop.imp p q) := format!"({p.to_format} → {q.to_format})"
meta instance : has_to_format prop := ⟨prop.to_format⟩
section
open ordering
/-- A comparator for `and_kind`. (There should really be a derive handler for this.) -/
def and_kind.cmp (p q : and_kind) : ordering :=
by { cases p; cases q, exacts [eq, lt, lt, gt, eq, lt, gt, gt, eq] }
/-- A comparator for propositions. (There should really be a derive handler for this.) -/
def prop.cmp (p q : prop) : ordering :=
begin
induction p with _ ap _ _ p₁ p₂ _ _ p₁ p₂ _ _ p₁ p₂ _ _ p₁ p₂ generalizing q; cases q,
case var var { exact cmp p q },
case true true { exact eq },
case false false { exact eq },
case and' and' : aq q₁ q₂ { exact (ap.cmp aq).or_else ((p₁ q₁).or_else (p₂ q₂)) },
case or or : q₁ q₂ { exact (p₁ q₁).or_else (p₂ q₂) },
case imp imp : q₁ q₂ { exact (p₁ q₁).or_else (p₂ q₂) },
exacts [lt, lt, lt, lt, lt,
gt, lt, lt, lt, lt,
gt, gt, lt, lt, lt,
gt, gt, gt, lt, lt,
gt, gt, gt, gt, lt,
gt, gt, gt, gt, gt]
end
instance : has_lt prop := ⟨λ p q, p.cmp q = lt⟩
instance : decidable_rel (@has_lt.lt prop _) := λ _ _, ordering.decidable_eq _ _
end
/-- A reified inductive proof type for intuitionistic propositional logic. -/
@[derive has_reflect]
inductive proof
-- ⊢ A, causes failure during reconstruction
| «sorry» : proof
-- (n: A) ⊢ A
| hyp (n : name) : proof
-- ⊢ ⊤
| triv : proof
-- (p: ⊥) ⊢ A
| exfalso' (p : proof) : proof
-- (p: (x: A) ⊢ B) ⊢ A → B
| intro (x : name) (p : proof) : proof
-- ak = and: (p: A ∧ B) ⊢ A
-- ak = iff: (p: A ↔ B) ⊢ A → B
-- ak = eq: (p: A = B) ⊢ A → B
| and_left (ak : and_kind) (p : proof) : proof
-- ak = and: (p: A ∧ B) ⊢ B
-- ak = iff: (p: A ↔ B) ⊢ B → A
-- ak = eq: (p: A = B) ⊢ B → A
| and_right (ak : and_kind) (p : proof) : proof
-- ak = and: (p₁: A) (p₂: B) ⊢ A ∧ B
-- ak = iff: (p₁: A → B) (p₂: B → A) ⊢ A ↔ B
-- ak = eq: (p₁: A → B) (p₂: B → A) ⊢ A = B
| and_intro (ak : and_kind) (p₁ p₂ : proof) : proof
-- ak = and: (p: A ∧ B → C) ⊢ A → B → C
-- ak = iff: (p: (A ↔ B) → C) ⊢ (A → B) → (B → A) → C
-- ak = eq: (p: (A = B) → C) ⊢ (A → B) → (B → A) → C
| curry (ak : and_kind) (p : proof) : proof
-- This is a partial application of curry.
-- ak = and: (p: A ∧ B → C) (q : A) ⊢ B → C
-- ak = iff: (p: (A ↔ B) → C) (q: A → B) ⊢ (B → A) → C
-- ak = eq: (p: (A = B) → C) (q: A → B) ⊢ (B → A) → C
| curry₂ (ak : and_kind) (p q : proof) : proof
-- (p: A → B) (q: A) ⊢ B
| app' : proof → proof → proof
-- (p: A ∨ B → C) ⊢ A → C
| or_imp_left (p : proof) : proof
-- (p: A ∨ B → C) ⊢ B → C
| or_imp_right (p : proof) : proof
-- (p: A) ⊢ A ∨ B
| or_inl (p : proof) : proof
-- (p: B) ⊢ A ∨ B
| or_inr (p : proof) : proof
-- (p₁: A ∨ B) (p₂: (x: A) ⊢ C) (p₃: (x: B) ⊢ C) ⊢ C
| or_elim' (p₁ : proof) (x : name) (p₂ p₃ : proof) : proof
-- (p₁: decidable A) (p₂: (x: A) ⊢ C) (p₃: (x: ¬ A) ⊢ C) ⊢ C
| decidable_elim (classical : bool) (p₁ x : name) (p₂ p₃ : proof) : proof
-- classical = ff: (p: decidable A) ⊢ A ∨ ¬A
-- classical = tt: (p: Prop) ⊢ p ∨ ¬p
| em (classical : bool) (p : name) : proof
-- The variable x here names the variable that will be used in the elaborated proof
-- (p: ((x:A) → B) → C) ⊢ B → C
| imp_imp_simp (x : name) (p : proof) : proof
instance : inhabited proof := ⟨proof.triv⟩
/-- Debugging printer for proof objects. -/
meta def proof.to_format : proof → format
| proof.sorry := "sorry"
| (proof.hyp i) := to_fmt i
| proof.triv := "triv"
| (proof.exfalso' p) := format!"(exfalso {p.to_format})"
| (proof.intro x p) := format!"(λ {x}, {p.to_format})"
| (proof.and_left _ p) := format!"{p.to_format} .1"
| (proof.and_right _ p) := format!"{p.to_format} .2"
| (proof.and_intro _ p q) := format!"⟨{p.to_format}, {q.to_format}⟩"
| (proof.curry _ p) := format!"(curry {p.to_format})"
| (proof.curry₂ _ p q) := format!"(curry {p.to_format} {q.to_format})"
| (proof.app' p q) := format!"({p.to_format} {q.to_format})"
| (proof.or_imp_left p) := format!"(or_imp_left {p.to_format})"
| (proof.or_imp_right p) := format!"(or_imp_right {p.to_format})"
| (proof.or_inl p) := format!"(or.inl {p.to_format})"
| (proof.or_inr p) := format!"(or.inr {p.to_format})"
| (proof.or_elim' p x q r) :=
format!"({p.to_format}.elim (λ {x}, {q.to_format}) (λ {x}, {r.to_format})"
| (proof.em ff p) := format!"(decidable.em {p})"
| (proof.em tt p) := format!"(classical.em {p})"
| (proof.decidable_elim _ p x q r) :=
format!"({p}.elim (λ {x}, {q.to_format}) (λ {x}, {r.to_format})"
| (proof.imp_imp_simp _ p) := format!"(imp_imp_simp {p.to_format})"
meta instance : has_to_format proof := ⟨proof.to_format⟩
/-- A variant on `proof.exfalso'` that performs opportunistic simplification. -/
meta def proof.exfalso : prop → proof → proof
| prop.false p := p
| A p := proof.exfalso' p
/-- A variant on `proof.or_elim` that performs opportunistic simplification. -/
meta def proof.or_elim : proof → name → proof → proof → proof
| (proof.em cl p) x q r := proof.decidable_elim cl p x q r
| p x q r := proof.or_elim' p x q r
/-- A variant on `proof.app'` that performs opportunistic simplification.
(This doesn't do full normalization because we don't want the proof size to blow up.) -/
meta def proof.app : proof → proof → proof
| (proof.curry ak p) q := proof.curry₂ ak p q
| (proof.curry₂ ak p q) r := p.app (q.and_intro ak r)
| (proof.or_imp_left p) q := p.app q.or_inl
| (proof.or_imp_right p) q := p.app q.or_inr
| (proof.imp_imp_simp x p) q := p.app (proof.intro x q)
| p q := p.app' q
-- Note(Mario): the typechecker is disabled because it requires proofs to carry around additional
-- props. These can be retrieved from the git history if you want to re-enable this.
/-
/-- A typechecker for the `proof` type. This is not used by the tactic but can be used for
debugging. -/
meta def proof.check : name_map prop → proof → option prop
| Γ (proof.hyp i) := Γ.find i
| Γ proof.triv := some prop.true
| Γ (proof.exfalso' A p) := guard (p.check Γ = some prop.false) $> A
| Γ (proof.intro x A p) := do B ← p.check (Γ.insert x A), pure (prop.imp A B)
| Γ (proof.and_left ak p) := do
prop.and' ak' A B ← p.check Γ | none,
guard (ak = ak') $> (ak.sides A B).1
| Γ (proof.and_right ak p) := do
prop.and' ak' A B ← p.check Γ | none,
guard (ak = ak') $> (ak.sides A B).2
| Γ (proof.and_intro and_kind.and p q) := do
A ← p.check Γ, B ← q.check Γ,
pure (A.and B)
| Γ (proof.and_intro ak p q) := do
prop.imp A B ← p.check Γ | none,
C ← q.check Γ, guard (C = prop.imp B A) $> (A.and' ak B)
| Γ (proof.curry ak p) := do
prop.imp (prop.and' ak' A B) C ← p.check Γ | none,
let (A', B') := ak.sides A B,
guard (ak = ak') $> (A'.imp $ B'.imp C)
| Γ (proof.curry₂ ak p q) := do
prop.imp (prop.and' ak' A B) C ← p.check Γ | none,
A₂ ← q.check Γ,
let (A', B') := ak.sides A B,
guard (ak = ak' ∧ A₂ = A') $> (B'.imp C)
| Γ (proof.app' p q) := do prop.imp A B ← p.check Γ | none, A' ← q.check Γ, guard (A = A') $> B
| Γ (proof.or_imp_left B p) := do
prop.imp (prop.or A B') C ← p.check Γ | none,
guard (B = B') $> (A.imp C)
| Γ (proof.or_imp_right A p) := do
prop.imp (prop.or A' B) C ← p.check Γ | none,
guard (A = A') $> (B.imp C)
| Γ (proof.or_inl B p) := do A ← p.check Γ | none, pure (A.or B)
| Γ (proof.or_inr A p) := do B ← p.check Γ | none, pure (A.or B)
| Γ (proof.or_elim p x q r) := do
prop.or A B ← p.check Γ | none,
C ← q.check (Γ.insert x A),
C' ← r.check (Γ.insert x B),
guard (C = C') $> C
| Γ (proof.imp_imp_simp x A p) := do
prop.imp (prop.imp A' B) C ← p.check Γ | none,
guard (A = A') $> (B.imp C)
-/
/-- Get a new name in the pattern `h0, h1, h2, ...` -/
@[inline] meta def fresh_name : ℕ → name × ℕ :=
λ n, (mk_simple_name ("h" ++ to_string n), n+1)
/-- The context during proof search is a map from propositions to proof values. -/
meta def context := native.rb_map prop proof
/-- Debug printer for the context. -/
meta def context.to_format (Γ : context) : format :=
Γ.fold "" $ λ P p f, P.to_format /- ++ " := " ++ p.to_format -/ ++ ",\n" ++ f
meta instance : has_to_format context := ⟨context.to_format⟩
/-- Insert a proposition and its proof into the context, as in `have : A := p`. This will eagerly
apply all level 1 rules on the spot, which are rules that don't split the goal and are validity
preserving: specifically, we drop `⊤` and `A → ⊤` hypotheses, close the goal if we find a `⊥`
hypothesis, split all conjunctions, and also simplify `⊥ → A` (drop), `⊤ → A` (simplify to `A`),
`A ∧ B → C` (curry to `A → B → C`) and `A ∨ B → C` (rewrite to `(A → C) ∧ (B → C)` and split). -/
meta def context.add : prop → proof → context → except (prop → proof) context
| prop.true p Γ := pure Γ
| prop.false p Γ := except.error (λ A, proof.exfalso A p)
| (prop.and' ak A B) p Γ := do
let (A, B) := ak.sides A B,
Γ ← Γ.add A (p.and_left ak),
Γ.add B (p.and_right ak)
| (prop.imp prop.false A) p Γ := pure Γ
| (prop.imp prop.true A) p Γ := Γ.add A (p.app proof.triv)
| (prop.imp (prop.and' ak A B) C) p Γ :=
let (A, B) := ak.sides A B in
Γ.add (prop.imp A (B.imp C)) (p.curry ak)
| (prop.imp (prop.or A B) C) p Γ := do
Γ ← Γ.add (A.imp C) p.or_imp_left,
Γ.add (B.imp C) p.or_imp_right
| (prop.imp A prop.true) p Γ := pure Γ
| A p Γ := pure (Γ.insert A p)
/-- Add `A` to the context `Γ` with proof `p`. This version of `context.add` takes a continuation
and a target proposition `B`, so that in the case that `⊥` is found we can skip the continuation
and just prove `B` outright. -/
@[inline] meta def context.with_add (Γ : context) (A : prop) (p : proof)
(B : prop) (f : context → prop → ℕ → bool × proof × ℕ) (n : ℕ) : bool × proof × ℕ :=
match Γ.add A p with
| except.ok Γ_A := f Γ_A B n
| except.error p := (tt, p B, n)
end
/-- Map a function over the proof (regardless of whether the proof is successful or not). -/
def map_proof (f : proof → proof) : bool × proof × ℕ → bool × proof × ℕ
| (b, p, n) := (b, f p, n)
/-- Convert a value-with-success to an optional value. -/
def is_ok {α} : bool × α → option α
| (ff, p) := none
| (tt, p) := some p
/-- Skip the continuation and return a failed proof if the boolean is false. -/
def when_ok : bool → (ℕ → bool × proof × ℕ) → ℕ → bool × proof × ℕ
| ff f n := (ff, proof.sorry, n)
| tt f n := f n
/-- The search phase, which deals with the level 3 rules, which are rules that are not validity
preserving and so require proof search. One obvious one is the or-introduction rule: we prove
`A ∨ B` by proving `A` or `B`, and we might have to try one and backtrack.
There are two rules dealing with implication in this category: `p, p → C ⊢ B` where `p` is an
atom (which is safe if we can find it but often requires the right search to expose the `p`
assumption), and `(A₁ → A₂) → C ⊢ B`. We decompose the double implication into two subgoals: one to
prove `A₁ → A₂`, which can be written `A₂ → C, A₁ ⊢ A₂` (where we used `A₁` to simplify
`(A₁ → A₂) → C`), and one to use the consequent, `C ⊢ B`. The search here is that there are
potentially many implications to split like this, and we have to try all of them if we want to be
complete. -/
meta def search (prove : context → prop → ℕ → bool × proof × ℕ) :
context → prop → ℕ → bool × proof × ℕ
| Γ B n := match Γ.find B with
| some p := (tt, p, n)
| none :=
let search₁ := Γ.fold none $ λ A p r, match r with
| some r := some r
| none := match A with
| prop.imp A' C := match Γ.find A' with
| some q := is_ok $ context.with_add (Γ.erase A) C (p.app q) B prove n
| none := match A' with
| prop.imp A₁ A₂ := do
let Γ : context := Γ.erase A,
let (a, n) := fresh_name n,
(p₁, n) ← is_ok $ Γ.with_add A₁ (proof.hyp a) A₂ (λ Γ_A₁ A₂,
Γ_A₁.with_add (prop.imp A₂ C) (proof.imp_imp_simp a p) A₂ prove) n,
is_ok $ Γ.with_add C (p.app (proof.intro a p₁)) B prove n
| _ := none
end
end
| _ := none
end
end in
match search₁ with
| some r := (tt, r)
| none := match B with
| prop.or B₁ B₂ := match map_proof proof.or_inl (prove Γ B₁ n) with
| (ff, _) := map_proof proof.or_inr (prove Γ B₂ n)
| r := r
end
| _ := (ff, proof.sorry, n)
end
end
end
/-- The main prover. This receives a context of proven or assumed lemmas and a target proposition,
and returns a proof or `none` (with state for the fresh variable generator).
The intuitionistic logic rules are separated into three groups:
* level 1: No splitting, validity preserving: apply whenever you can.
Left rules in `context.add`, right rules in `prove`
* level 2: Splitting rules, validity preserving: apply after level 1 rules. Done in `prove`
* level 3: Splitting rules, not validity preserving: apply only if nothing else applies.
Done in `search`
The level 1 rules on the right of the turnstile are `Γ ⊢ ⊤` and `Γ ⊢ A → B`, these are easy to
handle. The rule `Γ ⊢ A ∧ B` is a level 2 rule, also handled here. If none of these apply, we try
the level 2 rule `A ∨ B ⊢ C` by searching the context and splitting all ors we find. Finally, if
we don't make any more progress, we go to the search phase.
-/
meta def prove : context → prop → ℕ → bool × proof × ℕ
| Γ prop.true n := (tt, proof.triv, n)
| Γ (prop.imp A B) n :=
let (a, n) := fresh_name n in
map_proof (proof.intro a) $ Γ.with_add A (proof.hyp a) B prove n
| Γ (prop.and' ak A B) n :=
let (A, B) := ak.sides A B in
let (b, p, n) := prove Γ A n in
map_proof (p.and_intro ak) $ when_ok b (prove Γ B) n
| Γ B n := Γ.fold (λ b Γ, cond b prove (search prove) Γ B) (λ A p IH b Γ n,
match A with
| prop.or A₁ A₂ :=
let Γ : context := Γ.erase A in
let (a, n) := fresh_name n in
let (b, p₁, n) := Γ.with_add A₁ (proof.hyp a) B (λ Γ _, IH tt Γ) n in
map_proof (proof.or_elim p a p₁) $
when_ok b (Γ.with_add A₂ (proof.hyp a) B (λ Γ _, IH tt Γ)) n
| _ := IH b Γ n
end) ff Γ n
/-- Reifies an atomic or otherwise unrecognized proposition. If it is defeq to a proposition we
have already allocated, we reuse it, otherwise we name it with a new index. -/
meta def reify_atom (atoms : ref (buffer expr)) (e : expr) : tactic prop := do
vec ← read_ref atoms,
o ← try_core $ vec.iterate failure (λ i e' r,
r <|> (is_def_eq e e' >> pure i.1)),
match o with
| none := write_ref atoms (vec.push_back e) $> prop.var vec.size
| some i := pure $ prop.var i
end
/-- Reify an `expr` into a `prop`, allocating anything non-propositional as an atom in the
`atoms` list. -/
meta def reify (atoms : ref (buffer expr)) : expr → tactic prop
| `(true) := pure prop.true
| `(false) := pure prop.false
| `(¬ %%a) := prop.not <$> reify a
| `(%%a ∧ %%b) := prop.and <$> reify a <*> reify b
| `(%%a ∨ %%b) := prop.or <$> reify a <*> reify b
| `(%%a ↔ %%b) := prop.iff <$> reify a <*> reify b
| `(xor %%a %%b) := prop.xor <$> reify a <*> reify b
| `(@eq Prop %%a %%b) := prop.eq <$> reify a <*> reify b
| `(@ne Prop %%a %%b) := prop.not <$> (prop.eq <$> reify a <*> reify b)
| `(implies %%a %%b) := prop.imp <$> reify a <*> reify b
| e@`(%%a → %%b) :=
if b.has_var then reify_atom atoms e else prop.imp <$> reify a <*> reify b
| e := reify_atom atoms e
/-- Once we have a proof object, we have to apply it to the goal. (Some of these cases are a bit
annoying because `applyc` gets the arguments wrong sometimes so we have to use `to_expr` instead.)
-/
meta def apply_proof : name_map expr → proof → tactic unit
| Γ proof.sorry := fail "itauto failed"
| Γ (proof.hyp n) := do e ← Γ.find n, exact e
| Γ proof.triv := triv
| Γ (proof.exfalso' p) := do
t ← mk_mvar, to_expr ``(false.elim %%t) tt ff >>= exact,
gs ← get_goals, set_goals (t::gs), apply_proof Γ p
| Γ (proof.intro x p) := do e ← intro_core x, apply_proof (Γ.insert x e) p
| Γ (proof.and_left and_kind.and p) := do
t ← mk_mvar, to_expr ``(and.left %%t) tt ff >>= exact,
gs ← get_goals, set_goals (t::gs), apply_proof Γ p
| Γ (proof.and_left and_kind.iff p) := do
t ← mk_mvar, to_expr ``(iff.mp %%t) tt ff >>= exact,
gs ← get_goals, set_goals (t::gs), apply_proof Γ p
| Γ (proof.and_left and_kind.eq p) := do
t ← mk_mvar, to_expr ``(cast %%t) tt ff >>= exact,
gs ← get_goals, set_goals (t::gs), apply_proof Γ p
| Γ (proof.and_right and_kind.and p) := do
t ← mk_mvar, to_expr ``(and.right %%t) tt ff >>= exact,
gs ← get_goals, set_goals (t::gs), apply_proof Γ p
| Γ (proof.and_right and_kind.iff p) := do
t ← mk_mvar, to_expr ``(iff.mpr %%t) tt ff >>= exact,
gs ← get_goals, set_goals (t::gs), apply_proof Γ p
| Γ (proof.and_right and_kind.eq p) := do
t ← mk_mvar, to_expr ``(cast (eq.symm %%t)) tt ff >>= exact,
gs ← get_goals, set_goals (t::gs), apply_proof Γ p
| Γ (proof.and_intro and_kind.and p q) := do
t₁ ← mk_mvar, t₂ ← mk_mvar, to_expr ``(and.intro %%t₁ %%t₂) tt ff >>= exact,
gs ← get_goals, set_goals (t₁::t₂::gs), apply_proof Γ p >> apply_proof Γ q
| Γ (proof.and_intro and_kind.iff p q) := do
t₁ ← mk_mvar, t₂ ← mk_mvar, to_expr ``(iff.intro %%t₁ %%t₂) tt ff >>= exact,
gs ← get_goals, set_goals (t₁::t₂::gs), apply_proof Γ p >> apply_proof Γ q
| Γ (proof.and_intro and_kind.eq p q) := do
t₁ ← mk_mvar, t₂ ← mk_mvar, to_expr ``(propext (iff.intro %%t₁ %%t₂)) tt ff >>= exact,
gs ← get_goals, set_goals (t₁::t₂::gs), apply_proof Γ p >> apply_proof Γ q
| Γ (proof.curry ak p) := do
e ← intro_core `_, let n := e.local_uniq_name,
apply_proof (Γ.insert n e) (proof.curry₂ ak p (proof.hyp n))
| Γ (proof.curry₂ ak p q) := do
e ← intro_core `_, let n := e.local_uniq_name,
apply_proof (Γ.insert n e) (p.app (q.and_intro ak (proof.hyp n)))
| Γ (proof.app' p q) := do
A ← mk_meta_var (expr.sort level.zero),
B ← mk_meta_var (expr.sort level.zero),
g₁ ← mk_meta_var `((%%A : Prop) → (%%B : Prop)),
g₂ ← mk_meta_var A,
g :: gs ← get_goals,
unify (g₁ g₂) g,
set_goals (g₁::g₂::gs) >> apply_proof Γ p >> apply_proof Γ q
| Γ (proof.or_imp_left p) := do
e ← intro_core `_, let n := e.local_uniq_name,
apply_proof (Γ.insert n e) (p.app (proof.hyp n).or_inl)
| Γ (proof.or_imp_right p) := do
e ← intro_core `_, let n := e.local_uniq_name,
apply_proof (Γ.insert n e) (p.app (proof.hyp n).or_inr)
| Γ (proof.or_inl p) := do
t ← mk_mvar, to_expr ``(or.inl %%t) tt ff >>= exact,
gs ← get_goals, set_goals (t::gs), apply_proof Γ p
| Γ (proof.or_inr p) := do
t ← mk_mvar, to_expr ``(or.inr %%t) tt ff >>= exact,
gs ← get_goals, set_goals (t::gs), apply_proof Γ p
| Γ (proof.or_elim' p x p₁ p₂) := do
t₁ ← mk_mvar, t₂ ← mk_mvar, t₃ ← mk_mvar, to_expr ``(or.elim %%t₁ %%t₂ %%t₃) tt ff >>= exact,
gs ← get_goals, set_goals (t₁::t₂::t₃::gs), apply_proof Γ p,
e ← intro_core x, apply_proof (Γ.insert x e) p₁,
e ← intro_core x, apply_proof (Γ.insert x e) p₂
| Γ (proof.em ff n) := do
e ← Γ.find n,
to_expr ``(@decidable.em _ %%e) >>= exact
| Γ (proof.em tt n) := do
e ← Γ.find n,
to_expr ``(@classical.em %%e) >>= exact
| Γ (proof.decidable_elim ff n x p₁ p₂) := do
e ← Γ.find n,
t₁ ← mk_mvar, t₂ ← mk_mvar, to_expr ``(@dite _ _ %%e %%t₁ %%t₂) tt ff >>= exact,
gs ← get_goals, set_goals (t₁::t₂::gs),
e ← intro_core x, apply_proof (Γ.insert x e) p₁,
e ← intro_core x, apply_proof (Γ.insert x e) p₂
| Γ (proof.decidable_elim tt n x p₁ p₂) := do
e ← Γ.find n,
e ← to_expr ``(@classical.dec %%e),
t₁ ← mk_mvar, t₂ ← mk_mvar, to_expr ``(@dite _ _ %%e %%t₁ %%t₂) tt ff >>= exact,
gs ← get_goals, set_goals (t₁::t₂::gs),
e ← intro_core x, apply_proof (Γ.insert x e) p₁,
e ← intro_core x, apply_proof (Γ.insert x e) p₂
| Γ (proof.imp_imp_simp x p) := do
e ← intro_core `_, let n := e.local_uniq_name,
apply_proof (Γ.insert n e) (p.app (proof.intro x (proof.hyp n)))
end itauto
open itauto
/-- A decision procedure for intuitionistic propositional logic.
* `use_dec` will add `a ∨ ¬ a` to the context for every decidable atomic proposition `a`.
* `use_classical` will allow `a ∨ ¬ a` to be added even if the proposition is not decidable,
using classical logic.
* `extra_dec` will add `a ∨ ¬ a` to the context for specified (not necessarily atomic)
propositions `a`.
-/
meta def itauto (use_dec use_classical : bool) (extra_dec : list expr) : tactic unit :=
using_new_ref mk_buffer $ λ atoms,
using_new_ref mk_name_map $ λ hs, do
t ← target,
t ← mcond (is_prop t) (reify atoms t) (tactic.exfalso $> prop.false),
hyps ← local_context,
(Γ, decs) ← hyps.mfoldl
(λ (Γ : except (prop → proof) context × native.rb_map prop (bool × expr)) h, do
e ← infer_type h,
mcond (is_prop e)
(do A ← reify atoms e,
let n := h.local_uniq_name,
read_ref hs >>= λ Γ, write_ref hs (Γ.insert n h),
pure (Γ.1 >>= λ Γ', Γ'.add A (proof.hyp n), Γ.2))
(match e with
| `(decidable %%p) :=
if use_dec then do
A ← reify atoms p,
let n := h.local_uniq_name,
pure (Γ.1, Γ.2.insert A (ff, h))
else pure Γ
| _ := pure Γ
end))
(except.ok native.mk_rb_map, native.mk_rb_map),
let add_dec (force : bool) (decs : native.rb_map prop (bool × expr)) (e : expr) := (do
A ← reify atoms e,
dec_e ← mk_app ``decidable [e],
res ← try_core (mk_instance dec_e),
if res.is_none ∧ ¬ use_classical then
if force then do
m ← mk_meta_var dec_e,
set_goals [m] >> apply_instance >> failure
else pure decs
else
pure (native.rb_map.insert decs A (res.elim (tt, e) (prod.mk ff)))),
decs ← extra_dec.mfoldl (add_dec tt) decs,
decs ← if use_dec then do
let decided := match Γ with
| except.ok Γ := Γ.fold native.mk_rb_set $ λ p _ m, match p with
| prop.var i := m.insert i
| prop.not (prop.var i) := m.insert i
| _ := m
end
| except.error _ := native.mk_rb_set
end,
read_ref atoms >>= λ ats, ats.2.iterate (pure decs) $ λ i e r,
if decided.contains i.1 then r else r >>= λ decs, add_dec ff decs e
else pure decs,
Γ ← decs.fold (pure Γ) (λ A ⟨cl, pf⟩ r, r >>= λ Γ, do
n ← mk_fresh_name,
read_ref hs >>= λ Γ, write_ref hs (Γ.insert n pf),
pure (Γ >>= λ Γ', Γ'.add (A.or A.not) (proof.em cl n))),
let p := match Γ with
| except.ok Γ := (prove Γ t 0).2.1
| except.error p := p t
end,
hs ← read_ref hs, apply_proof hs p
namespace interactive
setup_tactic_parser
/-- A decision procedure for intuitionistic propositional logic. Unlike `finish` and `tauto!` this
tactic never uses the law of excluded middle (without the `!` option), and the proof search is
tailored for this use case. (`itauto!` will work as a classical SAT solver, but the algorithm is
not very good in this situation.)
```lean
example (p : Prop) : ¬ (p ↔ ¬ p) := by itauto
```
`itauto [a, b]` will additionally attempt case analysis on `a` and `b` assuming that it can derive
`decidable a` and `decidable b`. `itauto *` will case on all decidable propositions that it can
find among the atomic propositions, and `itauto! *` will case on all propositional atoms.
*Warning:* This can blow up the proof search, so it should be used sparingly.
-/
meta def itauto (classical : parse (tk "!")?)
: parse (some <$> pexpr_list <|> tk "*" *> pure none)? → tactic unit
| none := tactic.itauto false classical.is_some []
| (some none) := tactic.itauto true classical.is_some []
| (some (some ls)) := ls.mmap i_to_expr >>= tactic.itauto false classical.is_some
add_hint_tactic "itauto"
add_tactic_doc
{ name := "itauto",
category := doc_category.tactic,
decl_names := [`tactic.interactive.itauto],
tags := ["logic", "propositional logic", "intuitionistic logic", "decision procedure"] }
end interactive
end tactic
|
[GOAL]
S : Type u_1
R : Type u_2
inst✝² : AddMonoidWithOne R
inst✝¹ : SetLike S R
s : S
inst✝ : AddSubmonoidWithOneClass S R
n : ℕ
⊢ ↑n ∈ s
[PROOFSTEP]
induction n
[GOAL]
case zero
S : Type u_1
R : Type u_2
inst✝² : AddMonoidWithOne R
inst✝¹ : SetLike S R
s : S
inst✝ : AddSubmonoidWithOneClass S R
⊢ ↑Nat.zero ∈ s
[PROOFSTEP]
simp [zero_mem, add_mem, one_mem, *]
[GOAL]
case succ
S : Type u_1
R : Type u_2
inst✝² : AddMonoidWithOne R
inst✝¹ : SetLike S R
s : S
inst✝ : AddSubmonoidWithOneClass S R
n✝ : ℕ
n_ih✝ : ↑n✝ ∈ s
⊢ ↑(Nat.succ n✝) ∈ s
[PROOFSTEP]
simp [zero_mem, add_mem, one_mem, *]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝¹ : NonAssocSemiring R
M : Submonoid R
inst✝ : SetLike S R
hSR : SubsemiringClass S R
s : S
n : ℕ
⊢ ↑n ∈ s
[PROOFSTEP]
rw [← nsmul_one]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝¹ : NonAssocSemiring R
M : Submonoid R
inst✝ : SetLike S R
hSR : SubsemiringClass S R
s : S
n : ℕ
⊢ n • 1 ∈ s
[PROOFSTEP]
exact nsmul_mem (one_mem _) _
[GOAL]
R✝ : Type u
S : Type v
T : Type w
inst✝⁴ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝³ : SetLike S R✝
hSR : SubsemiringClass S R✝
s : S
R : Type u_1
inst✝² : Semiring R
inst✝¹ : SetLike S R
inst✝ : SubsemiringClass S R
x : { x // x ∈ s }
n : ℕ
⊢ ↑(x ^ n) = ↑x ^ n
[PROOFSTEP]
induction' n with n ih
[GOAL]
case zero
R✝ : Type u
S : Type v
T : Type w
inst✝⁴ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝³ : SetLike S R✝
hSR : SubsemiringClass S R✝
s : S
R : Type u_1
inst✝² : Semiring R
inst✝¹ : SetLike S R
inst✝ : SubsemiringClass S R
x : { x // x ∈ s }
⊢ ↑(x ^ Nat.zero) = ↑x ^ Nat.zero
[PROOFSTEP]
simp
[GOAL]
case succ
R✝ : Type u
S : Type v
T : Type w
inst✝⁴ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝³ : SetLike S R✝
hSR : SubsemiringClass S R✝
s : S
R : Type u_1
inst✝² : Semiring R
inst✝¹ : SetLike S R
inst✝ : SubsemiringClass S R
x : { x // x ∈ s }
n : ℕ
ih : ↑(x ^ n) = ↑x ^ n
⊢ ↑(x ^ Nat.succ n) = ↑x ^ Nat.succ n
[PROOFSTEP]
simp [pow_succ, ih]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
p q : Subsemiring R
h : (fun s => s.carrier) p = (fun s => s.carrier) q
⊢ p = q
[PROOFSTEP]
cases p
[GOAL]
case mk
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
q : Subsemiring R
toSubmonoid✝ : Submonoid R
add_mem'✝ : ∀ {a b : R}, a ∈ toSubmonoid✝.carrier → b ∈ toSubmonoid✝.carrier → a + b ∈ toSubmonoid✝.carrier
zero_mem'✝ : 0 ∈ toSubmonoid✝.carrier
h :
(fun s => s.carrier) { toSubmonoid := toSubmonoid✝, add_mem' := add_mem'✝, zero_mem' := zero_mem'✝ } =
(fun s => s.carrier) q
⊢ { toSubmonoid := toSubmonoid✝, add_mem' := add_mem'✝, zero_mem' := zero_mem'✝ } = q
[PROOFSTEP]
cases q
[GOAL]
case mk.mk
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
toSubmonoid✝¹ : Submonoid R
add_mem'✝¹ : ∀ {a b : R}, a ∈ toSubmonoid✝¹.carrier → b ∈ toSubmonoid✝¹.carrier → a + b ∈ toSubmonoid✝¹.carrier
zero_mem'✝¹ : 0 ∈ toSubmonoid✝¹.carrier
toSubmonoid✝ : Submonoid R
add_mem'✝ : ∀ {a b : R}, a ∈ toSubmonoid✝.carrier → b ∈ toSubmonoid✝.carrier → a + b ∈ toSubmonoid✝.carrier
zero_mem'✝ : 0 ∈ toSubmonoid✝.carrier
h :
(fun s => s.carrier) { toSubmonoid := toSubmonoid✝¹, add_mem' := add_mem'✝¹, zero_mem' := zero_mem'✝¹ } =
(fun s => s.carrier) { toSubmonoid := toSubmonoid✝, add_mem' := add_mem'✝, zero_mem' := zero_mem'✝ }
⊢ { toSubmonoid := toSubmonoid✝¹, add_mem' := add_mem'✝¹, zero_mem' := zero_mem'✝¹ } =
{ toSubmonoid := toSubmonoid✝, add_mem' := add_mem'✝, zero_mem' := zero_mem'✝ }
[PROOFSTEP]
congr
[GOAL]
case mk.mk.e_toSubmonoid
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
toSubmonoid✝¹ : Submonoid R
add_mem'✝¹ : ∀ {a b : R}, a ∈ toSubmonoid✝¹.carrier → b ∈ toSubmonoid✝¹.carrier → a + b ∈ toSubmonoid✝¹.carrier
zero_mem'✝¹ : 0 ∈ toSubmonoid✝¹.carrier
toSubmonoid✝ : Submonoid R
add_mem'✝ : ∀ {a b : R}, a ∈ toSubmonoid✝.carrier → b ∈ toSubmonoid✝.carrier → a + b ∈ toSubmonoid✝.carrier
zero_mem'✝ : 0 ∈ toSubmonoid✝.carrier
h :
(fun s => s.carrier) { toSubmonoid := toSubmonoid✝¹, add_mem' := add_mem'✝¹, zero_mem' := zero_mem'✝¹ } =
(fun s => s.carrier) { toSubmonoid := toSubmonoid✝, add_mem' := add_mem'✝, zero_mem' := zero_mem'✝ }
⊢ toSubmonoid✝¹ = toSubmonoid✝
[PROOFSTEP]
exact SetLike.coe_injective' h
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
sm : Submonoid R
hm : ↑sm = s
sa : AddSubmonoid R
ha : ↑sa = s
x y : R
⊢ x ∈ s → y ∈ s → x * y ∈ s
[PROOFSTEP]
simpa only [← hm] using sm.mul_mem
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
sm : Submonoid R
hm : ↑sm = s
sa : AddSubmonoid R
ha : ↑sa = s
⊢ 1 ∈ { carrier := s, mul_mem' := (_ : ∀ {x y : R}, x ∈ s → y ∈ s → x * y ∈ s) }.carrier
[PROOFSTEP]
exact hm ▸ sm.one_mem
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
sm : Submonoid R
hm : ↑sm = s
sa : AddSubmonoid R
ha : ↑sa = s
x y : R
⊢ x ∈
{ toSubsemigroup := { carrier := s, mul_mem' := (_ : ∀ {x y : R}, x ∈ s → y ∈ s → x * y ∈ s) },
one_mem' :=
(_ :
1 ∈
{ carrier := s,
mul_mem' := (_ : ∀ {x y : R}, x ∈ s → y ∈ s → x * y ∈ s) }.carrier) }.toSubsemigroup.carrier →
y ∈
{ toSubsemigroup := { carrier := s, mul_mem' := (_ : ∀ {x y : R}, x ∈ s → y ∈ s → x * y ∈ s) },
one_mem' :=
(_ :
1 ∈
{ carrier := s,
mul_mem' := (_ : ∀ {x y : R}, x ∈ s → y ∈ s → x * y ∈ s) }.carrier) }.toSubsemigroup.carrier →
x + y ∈
{ toSubsemigroup := { carrier := s, mul_mem' := (_ : ∀ {x y : R}, x ∈ s → y ∈ s → x * y ∈ s) },
one_mem' :=
(_ :
1 ∈
{ carrier := s,
mul_mem' := (_ : ∀ {x y : R}, x ∈ s → y ∈ s → x * y ∈ s) }.carrier) }.toSubsemigroup.carrier
[PROOFSTEP]
simpa only [← ha] using sa.add_mem
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
sm : Submonoid R
hm : ↑sm = s
sa : AddSubmonoid R
ha : ↑sa = s
⊢ 0 ∈
{ toSubsemigroup := { carrier := s, mul_mem' := (_ : ∀ {x y : R}, x ∈ s → y ∈ s → x * y ∈ s) },
one_mem' :=
(_ :
1 ∈
{ carrier := s,
mul_mem' := (_ : ∀ {x y : R}, x ∈ s → y ∈ s → x * y ∈ s) }.carrier) }.toSubsemigroup.carrier
[PROOFSTEP]
exact ha ▸ sa.zero_mem
[GOAL]
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
s✝ : Subsemiring R✝
R : Type u_1
inst✝ : Semiring R
s : Subsemiring R
x : { x // x ∈ s }
n : ℕ
⊢ ↑(x ^ n) = ↑x ^ n
[PROOFSTEP]
induction' n with n ih
[GOAL]
case zero
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
s✝ : Subsemiring R✝
R : Type u_1
inst✝ : Semiring R
s : Subsemiring R
x : { x // x ∈ s }
⊢ ↑(x ^ Nat.zero) = ↑x ^ Nat.zero
[PROOFSTEP]
simp
[GOAL]
case succ
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
s✝ : Subsemiring R✝
R : Type u_1
inst✝ : Semiring R
s : Subsemiring R
x : { x // x ∈ s }
n : ℕ
ih : ↑(x ^ n) = ↑x ^ n
⊢ ↑(x ^ Nat.succ n) = ↑x ^ Nat.succ n
[PROOFSTEP]
simp [pow_succ, ih]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s✝ : Subsemiring R
f : R →+* S
s : Subsemiring R
y : S
⊢ y ∈ map f s ↔ ∃ x, x ∈ s ∧ ↑f x = y
[PROOFSTEP]
convert Set.mem_image_iff_bex (f := f) (s := s.carrier) (y := y) using 1
[GOAL]
case h.e'_2.a
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s✝ : Subsemiring R
f : R →+* S
s : Subsemiring R
y : S
⊢ (∃ x, x ∈ s ∧ ↑f x = y) ↔ ∃ x x_1, ↑f x = y
[PROOFSTEP]
simp
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
g : S →+* T
f✝ f : R →+* S
⊢ rangeS f = Subsemiring.map f ⊤
[PROOFSTEP]
ext
[GOAL]
case h
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
g : S →+* T
f✝ f : R →+* S
x✝ : S
⊢ x✝ ∈ rangeS f ↔ x✝ ∈ Subsemiring.map f ⊤
[PROOFSTEP]
simp
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
g : S →+* T
f : R →+* S
⊢ Subsemiring.map g (rangeS f) = rangeS (comp g f)
[PROOFSTEP]
simpa only [rangeS_eq_map] using (⊤ : Subsemiring R).map_map g f
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set (Subsemiring R)
⊢ ↑(⨅ (t : Subsemiring R) (_ : t ∈ s), t.toSubmonoid) = ⋂ (t : Subsemiring R) (_ : t ∈ s), ↑t
[PROOFSTEP]
simp
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set (Subsemiring R)
⊢ ↑(⨅ (t : Subsemiring R) (_ : t ∈ s), toAddSubmonoid t) = ⋂ (t : Subsemiring R) (_ : t ∈ s), ↑t
[PROOFSTEP]
simp
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
f : R ≃+* S
K : Subsemiring R
x : S
⊢ x ∈ map (↑f) K ↔ ↑(RingEquiv.symm f) x ∈ K
[PROOFSTEP]
convert @Set.mem_image_equiv _ _ (↑K) f.toEquiv x using 1
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
⊢ subsemiringClosure M = Subsemiring.closure ↑M
[PROOFSTEP]
ext
[GOAL]
case h
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
x✝ : R
⊢ x✝ ∈ subsemiringClosure M ↔ x✝ ∈ Subsemiring.closure ↑M
[PROOFSTEP]
refine' ⟨fun hx => _, fun hx => (Subsemiring.mem_closure.mp hx) M.subsemiringClosure fun s sM => _⟩
[GOAL]
case h.refine'_1
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
x✝ : R
hx : x✝ ∈ subsemiringClosure M
⊢ x✝ ∈ Subsemiring.closure ↑M
[PROOFSTEP]
rintro - ⟨H1, rfl⟩
[GOAL]
case h.refine'_2
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
x✝ : R
hx : x✝ ∈ Subsemiring.closure ↑M
s : R
sM : s ∈ ↑M
⊢ s ∈ ↑(subsemiringClosure M)
[PROOFSTEP]
rintro - ⟨H1, rfl⟩
[GOAL]
case h.refine'_1.intro
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
x✝ : R
hx : x✝ ∈ subsemiringClosure M
H1 : Subsemiring R
⊢ x✝ ∈ (fun t => ⋂ (_ : t ∈ {S | ↑M ⊆ ↑S}), ↑t) H1
[PROOFSTEP]
rintro - ⟨H2, rfl⟩
[GOAL]
case h.refine'_2.intro
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
x✝ : R
hx : x✝ ∈ Subsemiring.closure ↑M
s : R
sM : s ∈ ↑M
H1 : AddSubmonoid R
⊢ s ∈ (fun t => ⋂ (_ : t ∈ {S | ↑M ⊆ ↑S}), ↑t) H1
[PROOFSTEP]
rintro - ⟨H2, rfl⟩
[GOAL]
case h.refine'_1.intro.intro
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
x✝ : R
hx : x✝ ∈ subsemiringClosure M
H1 : Subsemiring R
H2 : H1 ∈ {S | ↑M ⊆ ↑S}
⊢ x✝ ∈ (fun h => ↑H1) H2
[PROOFSTEP]
exact AddSubmonoid.mem_closure.mp hx H1.toAddSubmonoid H2
[GOAL]
case h.refine'_2.intro.intro
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
x✝ : R
hx : x✝ ∈ Subsemiring.closure ↑M
s : R
sM : s ∈ ↑M
H1 : AddSubmonoid R
H2 : H1 ∈ {S | ↑M ⊆ ↑S}
⊢ s ∈ (fun h => ↑H1) H2
[PROOFSTEP]
exact H2 sM
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
⊢ ↑(closure s) = ↑(AddSubmonoid.closure ↑(Submonoid.closure s))
[PROOFSTEP]
simp [← Submonoid.subsemiringClosure_toAddSubmonoid, Submonoid.subsemiringClosure_eq_closure]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
⊢ closure ↑(AddSubmonoid.closure s) = closure s
[PROOFSTEP]
ext x
[GOAL]
case h
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
x : R
⊢ x ∈ closure ↑(AddSubmonoid.closure s) ↔ x ∈ closure s
[PROOFSTEP]
refine' ⟨fun hx => _, fun hx => closure_mono AddSubmonoid.subset_closure hx⟩
[GOAL]
case h
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
x : R
hx : x ∈ closure ↑(AddSubmonoid.closure s)
⊢ x ∈ closure s
[PROOFSTEP]
rintro - ⟨H, rfl⟩
[GOAL]
case h.intro
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
x : R
hx : x ∈ closure ↑(AddSubmonoid.closure s)
H : Subsemiring R
⊢ x ∈ (fun t => ⋂ (_ : t ∈ {S | s ⊆ ↑S}), ↑t) H
[PROOFSTEP]
rintro - ⟨J, rfl⟩
[GOAL]
case h.intro.intro
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
x : R
hx : x ∈ closure ↑(AddSubmonoid.closure s)
H : Subsemiring R
J : H ∈ {S | s ⊆ ↑S}
⊢ x ∈ (fun h => ↑H) J
[PROOFSTEP]
refine' (AddSubmonoid.mem_closure.mp (mem_closure_iff.mp hx)) H.toAddSubmonoid fun y hy => _
[GOAL]
case h.intro.intro
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
x : R
hx : x ∈ closure ↑(AddSubmonoid.closure s)
H : Subsemiring R
J : H ∈ {S | s ⊆ ↑S}
y : R
hy : y ∈ ↑(Submonoid.closure ↑(AddSubmonoid.closure s))
⊢ y ∈ ↑(toAddSubmonoid H)
[PROOFSTEP]
refine' (Submonoid.mem_closure.mp hy) H.toSubmonoid fun z hz => _
[GOAL]
case h.intro.intro
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
x : R
hx : x ∈ closure ↑(AddSubmonoid.closure s)
H : Subsemiring R
J : H ∈ {S | s ⊆ ↑S}
y : R
hy : y ∈ ↑(Submonoid.closure ↑(AddSubmonoid.closure s))
z : R
hz : z ∈ ↑(AddSubmonoid.closure s)
⊢ z ∈ ↑H.toSubmonoid
[PROOFSTEP]
exact (AddSubmonoid.mem_closure.mp hz) H.toAddSubmonoid fun w hw => J hw
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
p : (x : R) → x ∈ closure s → Prop
Hs : ∀ (x : R) (h : x ∈ s), p x (_ : x ∈ ↑(closure s))
H0 : p 0 (_ : 0 ∈ closure s)
H1 : p 1 (_ : 1 ∈ closure s)
Hadd : ∀ (x : R) (hx : x ∈ closure s) (y : R) (hy : y ∈ closure s), p x hx → p y hy → p (x + y) (_ : x + y ∈ closure s)
Hmul : ∀ (x : R) (hx : x ∈ closure s) (y : R) (hy : y ∈ closure s), p x hx → p y hy → p (x * y) (_ : x * y ∈ closure s)
a : R
ha : a ∈ closure s
⊢ p a ha
[PROOFSTEP]
refine' Exists.elim _ fun (ha : a ∈ closure s) (hc : p a ha) => hc
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
p : (x : R) → x ∈ closure s → Prop
Hs : ∀ (x : R) (h : x ∈ s), p x (_ : x ∈ ↑(closure s))
H0 : p 0 (_ : 0 ∈ closure s)
H1 : p 1 (_ : 1 ∈ closure s)
Hadd : ∀ (x : R) (hx : x ∈ closure s) (y : R) (hy : y ∈ closure s), p x hx → p y hy → p (x + y) (_ : x + y ∈ closure s)
Hmul : ∀ (x : R) (hx : x ∈ closure s) (y : R) (hy : y ∈ closure s), p x hx → p y hy → p (x * y) (_ : x * y ∈ closure s)
a : R
ha : a ∈ closure s
⊢ ∃ x, p a x
[PROOFSTEP]
refine' closure_induction ha (fun m hm => ⟨subset_closure hm, Hs m hm⟩) ⟨zero_mem _, H0⟩ ⟨one_mem _, H1⟩ ?_ ?_
[GOAL]
case refine'_1
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
p : (x : R) → x ∈ closure s → Prop
Hs : ∀ (x : R) (h : x ∈ s), p x (_ : x ∈ ↑(closure s))
H0 : p 0 (_ : 0 ∈ closure s)
H1 : p 1 (_ : 1 ∈ closure s)
Hadd : ∀ (x : R) (hx : x ∈ closure s) (y : R) (hy : y ∈ closure s), p x hx → p y hy → p (x + y) (_ : x + y ∈ closure s)
Hmul : ∀ (x : R) (hx : x ∈ closure s) (y : R) (hy : y ∈ closure s), p x hx → p y hy → p (x * y) (_ : x * y ∈ closure s)
a : R
ha : a ∈ closure s
⊢ ∀ (x y : R), (∃ x_1, p x x_1) → (∃ x, p y x) → ∃ x_1, p (x + y) x_1
[PROOFSTEP]
exact (fun x y hx hy => hx.elim fun hx' hx => hy.elim fun hy' hy => ⟨add_mem hx' hy', Hadd _ _ _ _ hx hy⟩)
[GOAL]
case refine'_2
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Set R
p : (x : R) → x ∈ closure s → Prop
Hs : ∀ (x : R) (h : x ∈ s), p x (_ : x ∈ ↑(closure s))
H0 : p 0 (_ : 0 ∈ closure s)
H1 : p 1 (_ : 1 ∈ closure s)
Hadd : ∀ (x : R) (hx : x ∈ closure s) (y : R) (hy : y ∈ closure s), p x hx → p y hy → p (x + y) (_ : x + y ∈ closure s)
Hmul : ∀ (x : R) (hx : x ∈ closure s) (y : R) (hy : y ∈ closure s), p x hx → p y hy → p (x * y) (_ : x * y ∈ closure s)
a : R
ha : a ∈ closure s
⊢ ∀ (x y : R), (∃ x_1, p x x_1) → (∃ x, p y x) → ∃ x_1, p (x * y) x_1
[PROOFSTEP]
exact (fun x y hx hy => hx.elim fun hx' hx => hy.elim fun hy' hy => ⟨mul_mem hx' hy', Hmul _ _ _ _ hx hy⟩)
[GOAL]
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R : Type u_1
inst✝ : Semiring R
s : Set R
x : R
⊢ x ∈ closure s ↔ ∃ L, (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ List.sum (List.map List.prod L) = x
[PROOFSTEP]
constructor
[GOAL]
case mp
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R : Type u_1
inst✝ : Semiring R
s : Set R
x : R
⊢ x ∈ closure s → ∃ L, (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ List.sum (List.map List.prod L) = x
[PROOFSTEP]
intro hx
[GOAL]
case mp
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R : Type u_1
inst✝ : Semiring R
s : Set R
x : R
hx : x ∈ closure s
⊢ ∃ L, (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ List.sum (List.map List.prod L) = x
[PROOFSTEP]
let p : R → Prop := fun x =>
∃ (L : List (List R)), (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ (List.map List.prod L).sum = x
[GOAL]
case mp
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R : Type u_1
inst✝ : Semiring R
s : Set R
x : R
hx : x ∈ closure s
p : R → Prop := fun x => ∃ L, (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ List.sum (List.map List.prod L) = x
⊢ ∃ L, (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ List.sum (List.map List.prod L) = x
[PROOFSTEP]
exact
AddSubmonoid.closure_induction (p := p) (mem_closure_iff.1 hx)
(fun x hx =>
suffices ∃ t : List R, (∀ y ∈ t, y ∈ s) ∧ t.prod = x from
let ⟨t, ht1, ht2⟩ := this
⟨[t], List.forall_mem_singleton.2 ht1, by rw [List.map_singleton, List.sum_singleton, ht2]⟩
Submonoid.closure_induction hx (fun x hx => ⟨[x], List.forall_mem_singleton.2 hx, one_mul x⟩)
⟨[], List.forall_mem_nil _, rfl⟩ fun x y ⟨t, ht1, ht2⟩ ⟨u, hu1, hu2⟩ =>
⟨t ++ u, List.forall_mem_append.2 ⟨ht1, hu1⟩, by rw [List.prod_append, ht2, hu2]⟩)
⟨[], List.forall_mem_nil _, rfl⟩ fun x y ⟨L, HL1, HL2⟩ ⟨M, HM1, HM2⟩ =>
⟨L ++ M, List.forall_mem_append.2 ⟨HL1, HM1⟩, by rw [List.map_append, List.sum_append, HL2, HM2]⟩
[GOAL]
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R : Type u_1
inst✝ : Semiring R
s : Set R
x✝³ : R
hx✝ : x✝³ ∈ closure s
p : R → Prop := fun x => ∃ L, (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ List.sum (List.map List.prod L) = x
x✝² : R
hx : x✝² ∈ ↑(Submonoid.closure s)
x y : R
x✝¹ : ∃ t, (∀ (y : R), y ∈ t → y ∈ s) ∧ List.prod t = x
x✝ : ∃ t, (∀ (y : R), y ∈ t → y ∈ s) ∧ List.prod t = y
t : List R
ht1 : ∀ (y : R), y ∈ t → y ∈ s
ht2 : List.prod t = x
u : List R
hu1 : ∀ (y : R), y ∈ u → y ∈ s
hu2 : List.prod u = y
⊢ List.prod (t ++ u) = x * y
[PROOFSTEP]
rw [List.prod_append, ht2, hu2]
[GOAL]
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R : Type u_1
inst✝ : Semiring R
s : Set R
x✝ : R
hx✝ : x✝ ∈ closure s
p : R → Prop := fun x => ∃ L, (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ List.sum (List.map List.prod L) = x
x : R
hx : x ∈ ↑(Submonoid.closure s)
this : ∃ t, (∀ (y : R), y ∈ t → y ∈ s) ∧ List.prod t = x
t : List R
ht1 : ∀ (y : R), y ∈ t → y ∈ s
ht2 : List.prod t = x
⊢ List.sum (List.map List.prod [t]) = x
[PROOFSTEP]
rw [List.map_singleton, List.sum_singleton, ht2]
[GOAL]
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M✝ : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R : Type u_1
inst✝ : Semiring R
s : Set R
x✝² : R
hx : x✝² ∈ closure s
p : R → Prop := fun x => ∃ L, (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ List.sum (List.map List.prod L) = x
x y : R
x✝¹ : p x
x✝ : p y
L : List (List R)
HL1 : ∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s
HL2 : List.sum (List.map List.prod L) = x
M : List (List R)
HM1 : ∀ (t : List R), t ∈ M → ∀ (y : R), y ∈ t → y ∈ s
HM2 : List.sum (List.map List.prod M) = y
⊢ List.sum (List.map List.prod (L ++ M)) = x + y
[PROOFSTEP]
rw [List.map_append, List.sum_append, HL2, HM2]
[GOAL]
case mpr
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R : Type u_1
inst✝ : Semiring R
s : Set R
x : R
⊢ (∃ L, (∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s) ∧ List.sum (List.map List.prod L) = x) → x ∈ closure s
[PROOFSTEP]
rintro ⟨L, HL1, HL2⟩
[GOAL]
case mpr.intro.intro
R✝ : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R✝
M : Submonoid R✝
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R : Type u_1
inst✝ : Semiring R
s : Set R
x : R
L : List (List R)
HL1 : ∀ (t : List R), t ∈ L → ∀ (y : R), y ∈ t → y ∈ s
HL2 : List.sum (List.map List.prod L) = x
⊢ x ∈ closure s
[PROOFSTEP]
exact
HL2 ▸
list_sum_mem fun r hr =>
let ⟨t, ht1, ht2⟩ := List.mem_map.1 hr
ht2 ▸ list_prod_mem _ fun y hy => subset_closure <| HL1 t ht1 y hy
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Subsemiring R
x : R × S
⊢ x ∈ prod s ⊤ ↔ x ∈ comap (RingHom.fst R S) s
[PROOFSTEP]
simp [mem_prod, MonoidHom.coe_fst]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s : Subsemiring S
x : R × S
⊢ x ∈ prod ⊤ s ↔ x ∈ comap (RingHom.snd R S) s
[PROOFSTEP]
simp [mem_prod, MonoidHom.coe_snd]
[GOAL]
R : Type u
S✝ : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S✝
inst✝ : NonAssocSemiring T
ι : Sort u_1
hι : Nonempty ι
S : ι → Subsemiring R
hS : Directed (fun x x_1 => x ≤ x_1) S
x : R
⊢ x ∈ ⨆ (i : ι), S i ↔ ∃ i, x ∈ S i
[PROOFSTEP]
refine' ⟨_, fun ⟨i, hi⟩ => (SetLike.le_def.1 <| le_iSup S i) hi⟩
[GOAL]
R : Type u
S✝ : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S✝
inst✝ : NonAssocSemiring T
ι : Sort u_1
hι : Nonempty ι
S : ι → Subsemiring R
hS : Directed (fun x x_1 => x ≤ x_1) S
x : R
⊢ x ∈ ⨆ (i : ι), S i → ∃ i, x ∈ S i
[PROOFSTEP]
let U : Subsemiring R :=
Subsemiring.mk' (⋃ i, (S i : Set R)) (⨆ i, (S i).toSubmonoid)
(Submonoid.coe_iSup_of_directed <| hS.mono_comp _ fun _ _ => id) (⨆ i, (S i).toAddSubmonoid)
(AddSubmonoid.coe_iSup_of_directed <| hS.mono_comp _ fun _ _ => id)
-- Porting note: gave the hypothesis an explicit name because `@this` doesn't work
[GOAL]
R : Type u
S✝ : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S✝
inst✝ : NonAssocSemiring T
ι : Sort u_1
hι : Nonempty ι
S : ι → Subsemiring R
hS : Directed (fun x x_1 => x ≤ x_1) S
x : R
U : Subsemiring R :=
Subsemiring.mk' (⋃ (i : ι), ↑(S i)) (⨆ (i : ι), (S i).toSubmonoid)
(_ : ↑(⨆ (i : ι), (S i).toSubmonoid) = ⋃ (i : ι), ↑(S i).toSubmonoid) (⨆ (i : ι), toAddSubmonoid (S i))
(_ : ↑(⨆ (i : ι), toAddSubmonoid (S i)) = ⋃ (i : ι), ↑(toAddSubmonoid (S i)))
⊢ x ∈ ⨆ (i : ι), S i → ∃ i, x ∈ S i
[PROOFSTEP]
suffices h : ⨆ i, S i ≤ U by simpa using @h x
[GOAL]
R : Type u
S✝ : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S✝
inst✝ : NonAssocSemiring T
ι : Sort u_1
hι : Nonempty ι
S : ι → Subsemiring R
hS : Directed (fun x x_1 => x ≤ x_1) S
x : R
U : Subsemiring R :=
Subsemiring.mk' (⋃ (i : ι), ↑(S i)) (⨆ (i : ι), (S i).toSubmonoid)
(_ : ↑(⨆ (i : ι), (S i).toSubmonoid) = ⋃ (i : ι), ↑(S i).toSubmonoid) (⨆ (i : ι), toAddSubmonoid (S i))
(_ : ↑(⨆ (i : ι), toAddSubmonoid (S i)) = ⋃ (i : ι), ↑(toAddSubmonoid (S i)))
h : ⨆ (i : ι), S i ≤ U
⊢ x ∈ ⨆ (i : ι), S i → ∃ i, x ∈ S i
[PROOFSTEP]
simpa using @h x
[GOAL]
R : Type u
S✝ : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S✝
inst✝ : NonAssocSemiring T
ι : Sort u_1
hι : Nonempty ι
S : ι → Subsemiring R
hS : Directed (fun x x_1 => x ≤ x_1) S
x : R
U : Subsemiring R :=
Subsemiring.mk' (⋃ (i : ι), ↑(S i)) (⨆ (i : ι), (S i).toSubmonoid)
(_ : ↑(⨆ (i : ι), (S i).toSubmonoid) = ⋃ (i : ι), ↑(S i).toSubmonoid) (⨆ (i : ι), toAddSubmonoid (S i))
(_ : ↑(⨆ (i : ι), toAddSubmonoid (S i)) = ⋃ (i : ι), ↑(toAddSubmonoid (S i)))
⊢ ⨆ (i : ι), S i ≤ U
[PROOFSTEP]
exact iSup_le fun i x hx => Set.mem_iUnion.2 ⟨i, hx⟩
[GOAL]
R : Type u
S✝ : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S✝
inst✝ : NonAssocSemiring T
ι : Sort u_1
hι : Nonempty ι
S : ι → Subsemiring R
hS : Directed (fun x x_1 => x ≤ x_1) S
x : R
⊢ x ∈ ↑(⨆ (i : ι), S i) ↔ x ∈ ⋃ (i : ι), ↑(S i)
[PROOFSTEP]
simp [mem_iSup_of_directed hS]
[GOAL]
R : Type u
S✝ : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S✝
inst✝ : NonAssocSemiring T
S : Set (Subsemiring R)
Sne : Set.Nonempty S
hS : DirectedOn (fun x x_1 => x ≤ x_1) S
x : R
⊢ x ∈ sSup S ↔ ∃ s, s ∈ S ∧ x ∈ s
[PROOFSTEP]
haveI : Nonempty S := Sne.to_subtype
[GOAL]
R : Type u
S✝ : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S✝
inst✝ : NonAssocSemiring T
S : Set (Subsemiring R)
Sne : Set.Nonempty S
hS : DirectedOn (fun x x_1 => x ≤ x_1) S
x : R
this : Nonempty ↑S
⊢ x ∈ sSup S ↔ ∃ s, s ∈ S ∧ x ∈ s
[PROOFSTEP]
simp only [sSup_eq_iSup', mem_iSup_of_directed hS.directed_val, SetCoe.exists, Subtype.coe_mk, exists_prop]
[GOAL]
R : Type u
S✝ : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S✝
inst✝ : NonAssocSemiring T
S : Set (Subsemiring R)
Sne : Set.Nonempty S
hS : DirectedOn (fun x x_1 => x ≤ x_1) S
x : R
⊢ x ∈ ↑(sSup S) ↔ x ∈ ⋃ (s : Subsemiring R) (_ : s ∈ S), ↑s
[PROOFSTEP]
simp [mem_sSup_of_directedOn Sne hS]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝⁷ : NonAssocSemiring R
M : Submonoid R
inst✝⁶ : NonAssocSemiring S
inst✝⁵ inst✝⁴ : NonAssocSemiring T
s : Subsemiring R
σR : Type u_1
σS : Type u_2
inst✝³ : SetLike σR R
inst✝² : SetLike σS S
inst✝¹ : SubsemiringClass σR R
inst✝ : SubsemiringClass σS S
f : R →+* S
⊢ ↑(rangeS f) = ↑⊤ ↔ Set.range ↑f = Set.univ
[PROOFSTEP]
rw [coe_rangeS, coe_top]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝² : NonAssocSemiring R
M : Submonoid R
inst✝¹ : NonAssocSemiring S
inst✝ : NonAssocSemiring T
s t : Subsemiring R
g : S → R
f : R →+* S
h : Function.LeftInverse g ↑f
src✝ : R →+* { x // x ∈ RingHom.rangeS f } := RingHom.rangeSRestrict f
x : { x // x ∈ RingHom.rangeS f }
x' : R
hx' : ↑f x' = ↑x
⊢ ↑f (g ↑x) = ↑x
[PROOFSTEP]
rw [← hx', h x']
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x y : { x // x ∈ closure s }
⊢ x * y = y * x
[PROOFSTEP]
ext
[GOAL]
case a
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x y : { x // x ∈ closure s }
⊢ ↑(x * y) = ↑(y * x)
[PROOFSTEP]
simp only [Subsemiring.coe_mul]
[GOAL]
case a
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x y : { x // x ∈ closure s }
⊢ ↑x * ↑y = ↑y * ↑x
[PROOFSTEP]
refine'
closure_induction₂ x.prop y.prop hcomm (fun x => by simp only [zero_mul, mul_zero])
(fun x => by simp only [zero_mul, mul_zero]) (fun x => by simp only [one_mul, mul_one])
(fun x => by simp only [one_mul, mul_one]) (fun x y z h₁ h₂ => by simp only [add_mul, mul_add, h₁, h₂])
(fun x y z h₁ h₂ => by simp only [add_mul, mul_add, h₁, h₂])
(fun x y z h₁ h₂ => by rw [mul_assoc, h₂, ← mul_assoc, h₁, mul_assoc]) fun x y z h₁ h₂ => by
rw [← mul_assoc, h₁, mul_assoc, h₂, ← mul_assoc]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x✝ y : { x // x ∈ closure s }
x : R'
⊢ 0 * x = x * 0
[PROOFSTEP]
simp only [zero_mul, mul_zero]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x✝ y : { x // x ∈ closure s }
x : R'
⊢ x * 0 = 0 * x
[PROOFSTEP]
simp only [zero_mul, mul_zero]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x✝ y : { x // x ∈ closure s }
x : R'
⊢ 1 * x = x * 1
[PROOFSTEP]
simp only [one_mul, mul_one]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x✝ y : { x // x ∈ closure s }
x : R'
⊢ x * 1 = 1 * x
[PROOFSTEP]
simp only [one_mul, mul_one]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x✝ y✝ : { x // x ∈ closure s }
x y z : R'
h₁ : x * z = z * x
h₂ : y * z = z * y
⊢ (x + y) * z = z * (x + y)
[PROOFSTEP]
simp only [add_mul, mul_add, h₁, h₂]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x✝ y✝ : { x // x ∈ closure s }
x y z : R'
h₁ : x * y = y * x
h₂ : x * z = z * x
⊢ x * (y + z) = (y + z) * x
[PROOFSTEP]
simp only [add_mul, mul_add, h₁, h₂]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x✝ y✝ : { x // x ∈ closure s }
x y z : R'
h₁ : x * z = z * x
h₂ : y * z = z * y
⊢ x * y * z = z * (x * y)
[PROOFSTEP]
rw [mul_assoc, h₂, ← mul_assoc, h₁, mul_assoc]
[GOAL]
R : Type u
S : Type v
T : Type w
inst✝³ : NonAssocSemiring R
M : Submonoid R
inst✝² : NonAssocSemiring S
inst✝¹ : NonAssocSemiring T
R' : Type u_1
α : Type u_2
β : Type u_3
inst✝ : Semiring R'
s : Set R'
hcomm : ∀ (a : R'), a ∈ s → ∀ (b : R'), b ∈ s → a * b = b * a
src✝ : Semiring { x // x ∈ closure s } := toSemiring (closure s)
x✝ y✝ : { x // x ∈ closure s }
x y z : R'
h₁ : x * y = y * x
h₂ : x * z = z * x
⊢ x * (y * z) = y * z * x
[PROOFSTEP]
rw [← mul_assoc, h₁, mul_assoc, h₂, ← mul_assoc]
|
State Before: α : Type u_1
β : Type ?u.6289
γ : Type ?u.6292
p : Pmf α
a : α
⊢ ↑p a = 0 ↔ ¬a ∈ support p
State After: no goals
Tactic: rw [mem_support_iff, Classical.not_not]
import joblib
import json
import numpy as np
import agents.utils as utils
import agents.basicTransmission as basicTransmission
from agents.line_folower import Drive
from agents.double_model_agent import DoubleModelAgent
from settings import settings
def extract_state(data_sample, state_keys, data_is_array=False):
if data_is_array:
return [extract_state(x, state_keys) for x in data_sample]
state_vector = []
for key in state_keys:
if key == 'angle':
state_vector += [
np.sin(np.deg2rad(data_sample[key])),
np.cos(np.deg2rad(data_sample[key]))
]
elif isinstance(data_sample[key], list):
state_vector += data_sample[key]
else:
state_vector.append(data_sample[key])
return state_vector
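# A minimal usage sketch of extract_state (hypothetical values; 'angle' is
# assumed to be given in degrees, matching the np.deg2rad call above):
#
#   sample = {'angle': 90.0, 'speedX': 120.0, 'track': [1.0, 2.0]}
#   extract_state(sample, ['angle', 'speedX', 'track'])
#   # -> [sin(pi/2), cos(pi/2), 120.0, 1.0, 2.0] ~= [1.0, 0.0, 120.0, 1.0, 2.0]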
class TripleModelAgent(DoubleModelAgent):
def load(self, path, **kwargs):
super().load(path)
self.steer_classifier = joblib.load(f'{path}/steer_classifier')
self.sl_agent = Drive(**kwargs)
def drive(self, state, **kwargs):
response = utils.get_default_response()
response['gear'] = self.transmission.get_new_gear(state)
state_vector = self.scaler.transform(
[extract_state(state, self.state_keys)]
)
steer_direction = self.steer_classifier.predict(
state_vector
)[0]
steer_magnitude = self.steer_regressor.predict(
state_vector
)[0]
# if steer_direction == 0 and steer_magnitude > 0.05:
# print(
# 'steer_direction', steer_direction, 'steer_magnitude', steer_magnitude
# )
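        # Combine the two steering models: the classifier predicts the intended
        # direction (sign) and the regressor the magnitude. If the two disagree
        # on sign the steer is zeroed out; if the classifier predicts "straight"
        # (direction == 0) the regressor output is damped by a factor of 3.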
if steer_direction < 0 and steer_magnitude > 0:
steer_val = 0
elif steer_direction > 0 and steer_magnitude < 0:
steer_val = 0
elif steer_direction == 0:
steer_val = steer_magnitude / 3
else:
steer_val = steer_magnitude
state_vector = np.hstack((state_vector, [[steer_val]]))
speed_action_index = self.speed_classifier.predict(
state_vector
)[0]
response = {
**response,
'steer': steer_val,
**self.speed_actions_labels[speed_action_index],
}
# self.apply_speed_limit(state, response)
# self.manage_start_accel(state, response)
return response
isStart = True
def manage_start_accel(self, state, response):
if self.isStart:
if state['distFromStart'] > 1 and state['speedX'] < 50:
response['accel'] = 1
response['brake'] = 0
else:
self.isStart = False
def apply_speed_limit(self, state, response, goalSpeed=135):
resp = self.sl_agent.drive(state)
for key in ['accel', 'brake']:
response[key] = resp[key]
# if state['speedX'] > goalSpeed:
# if state['speedX'] > 5. + goalSpeed:
# response['brake'] = 1
# response['accel'] = 0
# else:
# response['accel'] = 1
|
Formal statement is: lemma sum_delta'': fixes s::"'a::real_vector set" assumes "finite s" shows "(\<Sum>x\<in>s. (if y = x then f x else 0) *\<^sub>R x) = (if y\<in>s then (f y) *\<^sub>R y else 0)" Informal statement is: If $s$ is a finite set, then $\sum_{x \in s} (\text{if } y = x \text{ then } f(x) \text{ else } 0) \cdot x = (\text{if } y \in s \text{ then } f(y) \cdot y \text{ else } 0)$. |
function i = dyad(j)
% dyad -- Index entire j-th dyad of 1-d wavelet xform
% Usage
% ix = dyad(j);
% Inputs
% j integer
% Outputs
% ix list of all indices of wavelet coeffts at j-th level
%
i = (2^(j)+1):(2^(j+1)) ;
%
% Copyright (c) 1993. David L. Donoho
%
%
% Part of Wavelab Version 850
% Built Tue Jan 3 13:20:40 EST 2006
% This is Copyrighted Material
% For Copying permissions see COPYING.m
% Comments? e-mail [email protected]
|
Thank you to all who attended my launch on Facebook and made it a special day for me. Anyone wanting to order the book can do so for as little as 2.99 for the ebook or 9.99 for the black-and-white print edition, though I do have to recommend the 24.99 full color print book.
! { dg-do compile }
! This test is run with result-checking and -fbounds-check as
! nested_array_constructor_2.f90
! PR fortran/35846
! This used to ICE because the charlength of the trim-expression was
! NULL.
! Contributed by Tobias Burnus <[email protected]>
implicit none
character(len=2) :: c(3)
c = 'a'
c = (/ (/ trim(c(1)), 'a' /)//'c', 'cd' /)
print *, c
end
|
If $s$ is a countably compact set, then there exists a finite subcover of any countable open cover of $s$. |
import os
import datetime
from math import floor,exp
import numpy as np
from sklearn.metrics import precision_recall_curve  # needed by feval_aucpr_lgb below
# from .utils import feval_top_hit_lgb, feval_aucpr_lgb
# def binary_loss(preds, train_data):
# labels = train_data.get_label()
# preds = 1. / (1. + np.exp(-preds))
# grad = preds - labels
# hess = preds * (1. - preds)
# return grad, hess
def feval_aucpr_lgb(y_pred, dtrain):
y_true = dtrain.get_label()
precisions, recalls ,thrs = precision_recall_curve(y_true, y_pred)
mean_precisions = 0.5*(precisions[:-1]+precisions[1:])
intervals = recalls[:-1] - recalls[1:]
auc_pr = np.dot(mean_precisions, intervals)
return 'aucpr', auc_pr, True
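# Note: the sum above is a trapezoidal estimate of the area under the
# precision-recall curve -- adjacent precisions are averaged and weighted by
# the corresponding decrease in recall.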
alpha=0.7
def lgb_weight_binary_loss(preds, train_data): # lightgbm ->log(1+exp(-yaF))
labels = train_data.get_label()
preds = 1. / (1. + np.exp(-preds))
grad = (alpha*labels+(1-alpha)*(1-labels))*preds - alpha*labels
hess = (alpha*labels+(1-alpha)*(1-labels))*preds*(1-preds)
return grad, hess
gamma = 2
def focal_loss(preds, train_data): # output will be LLR, not probability
yt = train_data.get_label() # series
yt = yt.values
yp = 1./(1. + np.exp(-preds))
g1 = yp*(1-yp)
g2 = yt + np.power(-1,yt)*yp
g3 = yp + yt -1
g4 = 1-yt-np.power(-1,yt)*yp
g5 = yt +np.power(-1,yt)*yp
grad = gamma*g3*np.power(g2, gamma)*np.log(g4)+np.power(-1,yt)*np.power(g5, gamma+1)
# grad = gamma*(yp + yt-1)*(yt+(-1)^(yt)*yp)^gamma*np.log(1-yt-(-1)^yt*yp) +(-1)^yt*(yt+(-1)^yt*yp)^(gamma+1)
hess = g1*(gamma*((np.power(g2, gamma)+ gamma*np.power(-1,yt)*g3*np.power(g2,gamma-1))*np.log(g4)-(np.power(-1,yt)*g3*np.power(g2,gamma))/g4)+(gamma+1)*np.power(g5,gamma))
return grad, hess
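# A minimal training sketch (not part of the original pipeline; it assumes the
# pre-4.0 LightGBM API in which lgb.train accepts fobj/feval directly, and a
# hypothetical lgb.Dataset named dtrain):
#
#   import lightgbm as lgb
#   booster = lgb.train({'objective': 'binary', 'verbose': -1}, dtrain,
#                       num_boost_round=100, fobj=focal_loss,
#                       feval=feval_aucpr_lgb)
#
# With a custom fobj the raw predictions are margins (log-odds), so apply a
# sigmoid before thresholding -- exactly as focal_loss does internally.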
def load_lgb_config(root_path, data_path, docs_path, model_path, tmp_path, log_path, version,model_id,
bussiness_type,imp_file):
print('load_lgb_config'.center(50,'-'))
# root_path = r'/home/yuanyuqing163/hb_model'
feature_config={'iv_threshold': 0.00001, # 0.002
'lgb_importance_revise': False, # False->use exist importance to train model
'lgb_importance_type': 'gain', # gain, split
'imp_threshold': 13.5e-4, # 15(hnb,125),18(hnb2,126),21(hnb2,104)
'category_threshold': 50,
'null_percent':0.98,
'drop_less':False, # True, keep all attrs with importance above threhold
'corr_threshold':0.98, #0.95(142_xgbimp) 0.92(13-159)
'model_type':'lgb',
}
lgb_config={
"learning_rate": 0.022647037734232003,
"max_depth": 7,
"num_leaves": 49,
"min_data_in_leaf": 39,
"bagging_fraction": 0.7863746352956377,
"bagging_freq": 1,
"feature_fraction": 0.827604087681333,
"min_gain_to_split": 1.8335341943609902,
"min_data_in_bin": 22,
"lambda_l2": 2.2635889734158456,
"lambda_l1": 0.2791470419628548,
"seed": 42,
"num_threads": 8,
"num_boost_round": 800,
"early_stopping_round": 40,
"min_sum_hessian_in_leaf": 0.001,
"max_cat_threshold": 16, # 32, limit the max threshold points in categorical features
'fobj': None,
"feval": None,
# "learning_rates": lambda x: 0.002 if x<5 else 0.03*exp(-floor(x/200)),#[0,n_iters)
"learning_rates": None,
"objective": "binary",
"is_unbalance": False, # or scale_pos_weight = 1.0
'zero_as_missing':False,
"metric": ["binary_logloss","auc"],
"metric_freq": 5,
"boosting": "gbdt",
"verbose": 0, #<0(fatal) 0(waring) 1(info) >1(debug)
'boost_from_average':True # default True
}
##=========================file ===========================
data_path = os.path.join(root_path, data_path,'trans')
conf_path = os.path.join(root_path,docs_path)
tmp_path = os.path.join(root_path,tmp_path,bussiness_type)
# model_path = os.path.join(root_path,model_path,'lgb',bussiness_type+ datetime.datetime.now().strftime('_%m%d'))
model_path = os.path.join(root_path,model_path,version, 'lgb',model_id+'_'+bussiness_type+ '_model')
log_path = os.path.join(root_path,log_path)
if not os.path.exists(data_path): os.makedirs(data_path)
if not os.path.exists(tmp_path): os.makedirs(tmp_path)
if not os.path.exists(model_path):os.makedirs(model_path)
if not os.path.exists(log_path):os.makedirs(log_path)
input_file=dict(
# iv_file = os.path.join(conf_path,'iv_bj_lx.csv'),
iv_file = None,
lgb_imp_file = os.path.join(model_path,'lgb_importance_total.csv'),
lgb_imp_tmp_file = os.path.join(model_path,'lgb_importance_cur.csv'),
imp_file = imp_file,
cv_train_file = os.path.join(data_path,'train_cv_lgb_'+model_id+'_'+bussiness_type),
cv_val_file = os.path.join(data_path, 'val_cv_lgb_'+model_id+'_'+bussiness_type),
category_feature_path = os.path.join(model_path, 'lgb_category_feat'),
)
output_file = dict(model_path = os.path.join(model_path,'lgb.model'),
# val_attr_path = os.path.join(model_path,'lgb_val_attr'),
null_path = os.path.join(tmp_path,'drop_null.csv'),
category_path = os.path.join(tmp_path,'drop_category.csv'),
encoder_path = os.path.join(model_path,'lgb_encoder'),
low_iv_path = os.path.join(tmp_path,'drop_low_iv.csv'),
corr_path = os.path.join(tmp_path,'drop_corr.csv'),
select_feature_path = os.path.join(model_path,'lgb_selected_feature.csv'),
log_file = os.path.join(log_path, 'lgb_log'+ datetime.datetime.now().strftime('_%m%d')))
return feature_config, lgb_config, input_file, output_file
|
# Packages
using Printf
using LinearAlgebra
using Statistics
using Random
# outter functions
include("../utils/byrow.jl")
include("../utils/dare.jl")
include("../utils/dlyap.jl")
include("../utils/simula01.jl")
# inner functions
include("ACQR_simula.jl")
include("ACQR_kfilter.jl")
include("ACQR_kfilter_s.jl")
include("ACQR_ksmoother.jl")
include("ACQR_ksmoother_s.jl")
include("ACQR_em_starting.jl")
include("ACQR_em.jl")
include("ACQR_em_s.jl")
|
1886 – 87 , 1894 – 95 , 1896 – 97 , 1904 – 05 , 1912 – 13 , 1919 – 20 , 1956 – 57
|
[STATEMENT]
lemma "0x01 = 1"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (1::'a) = (1::'a)
[PROOF STEP]
oops |
(**************************************************
* Author: Ana Nora Evans ([email protected])
**************************************************)
Require Import List.
Require Import Coq.Structures.Equalities.
Require Import Coq.Strings.String.
Require Import Coq.NArith.BinNat.
Require Import Coq.PArith.BinPos.
Require Import Coq.FSets.FMapAVL.
Require Import Coq.FSets.FMapFacts.
Require Import Coq.Structures.OrdersEx.
Require Import Coq.Structures.OrdersAlt.
Set Implicit Arguments.
Set Contextual Implicit.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Require Import extructures.ord.
Module N_as_OT := Backport_OT N_as_OT.
Module BinNatMap := FMapAVL.Make(N_as_OT).
Module BinNatMapExtra := WProperties_fun N_as_OT BinNatMap.
Module BinNatMapFacts := BinNatMapExtra.F.
Module backPositive_as_OT := Backport_OT Positive_as_OT.
Module PMap := FMapAVL.Make backPositive_as_OT.
Module PMapExtra := WProperties_fun Positive_as_OT PMap.
Module PMapFacts := PMapExtra.F.
Module backNat_as_OT := Backport_OT Nat_as_OT.
Module NatMap := FMapAVL.Make backNat_as_OT.
Module NatMapExtra := WProperties_fun backNat_as_OT NatMap.
Module NatMapFacts := NatMapExtra.F.
Module ListUtil.
(* TODO: I would like to avoid passing eqb *)
(* AAA: This can be fixed by using fmap in CoqUtils *)
Fixpoint get_by_key {K V:Type} (eqb : K->K->bool) (k : K)
(l : list (K*V)) : option V :=
match l with
| nil => None
| (k1,v1)::ls =>
if (eqb k k1) then (Some v1)
else (get_by_key eqb k ls)
end.
End ListUtil.
Require Import Ascii.
Module Log.
Open Scope char_scope.
Definition natToDigit (n : nat) : Ascii.ascii :=
match n with
| 0 => "0"
| 1 => "1"
| 2 => "2"
| 3 => "3"
| 4 => "4"
| 5 => "5"
| 6 => "6"
| 7 => "7"
| 8 => "8"
| _ => "9"
end.
Close Scope char_scope.
Definition log_nat (n : nat) : string :=
let fix log_nat_aux (time n : nat) (acc : string) : string :=
let acc' := String (natToDigit (Nat.modulo n 10)) acc in
match time with
| 0 => acc'
| S time' =>
match Nat.div n 10 with
| 0 => acc'
| n' => log_nat_aux time' n' acc'
end
end in
log_nat_aux n n EmptyString.
Definition log_N (n : N) : string :=
log_nat (N.to_nat n).
Definition log_pos (n : positive) : string :=
log_nat (Pos.to_nat n).
End Log.
|
# -*- encoding: UTF8 -*-
"""
Generate XYZ data with corresponding energies from some chosen PES:
- Lennard Jones
- Stillinger Weber etc.
"""
from symmetry_transform import symmetryTransform, symmetryTransformBehler
from timeit import default_timer as timer # Best timer indep. of system
from math import pi,sqrt,exp,cos,isnan,sin
from file_management import loadFromFile, readXYZ_Files
from warnings import filterwarnings
import tensorflow as tf
import numpy as np
import glob
# import time
import sys
import os
def PES_Lennard_Jones(xyz_i):
"""
Simple LJ pair potential
"""
eps = 1. # 1.0318 * 10^(-2) eV
    sig = 1. # 3.405 * 10^(-10) meter, i.e. 3.405 Angstrom (argon)
r = np.linalg.norm(xyz_i, axis=1)
rc = 1.6*sig
LJ0 = abs(4*eps*((sig/rc)**12 - (sig/rc)**6)) # Potential goes to zero at cut
LJ = 4*eps*((sig/r)**12 - (sig/r)**6) * (r < rc) + LJ0
U = np.sum( LJ )
return U
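# For reference, the shifted pair potential implemented above is (in the
# code's own reduced units eps, sig and cutoff rc = 1.6*sig):
#
#   U(r) = 4*eps*((sig/r)**12 - (sig/r)**6) + |U_LJ(rc)|    for r < rc
#
# so the energy is continuous and vanishes exactly at the cutoff.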
def PES_Stillinger_Weber(xyz_i):
"""
INPUT
- xyz_i: Matrix with columnds containing cartesian coordinates,
relative to the current atom i, i.e.:
[[x1 y1 z1]
[x2 y2 z2]
[x3 y3 z3]
[x4 y4 z4]]
"""
xyz = xyz_i
r = np.linalg.norm(xyz, axis=1)
N = len(r) # Number of neighbours for atom i, which we are currently inspecting
# A lot of definitions first
A = 7.049556277
B = 0.6022245584
p = 4.
q = 0.
a = 1.8
l = 21. # lambda
g = 1.2 # gamma
cos_tc = -1.0/3.0 # 109.47 deg
eps = 2.1683 # [eV]
sig = 2.0951 # [Å]
rc = (r < a*sig) # Bool array. "False" cast to 0 and "True" to 1
filterwarnings("ignore", category=RuntimeWarning) # U2 below can give NaN
U2 = A*eps*(B*(sig/r)**p-(sig/r)**q) * np.exp(sig/(r-a*sig)) * rc
filterwarnings("always", category=RuntimeWarning) # Turn warnings back on
def U2_serial(r_vec, r_cut): # Slow, i.e. only use if U2 gives NaN
U2_E = 0
for r,rc in zip(r_vec,r_cut):
if rc:
U2_E += A*eps*(B*(sig/r)**p-(sig/r)**q) * np.exp(sig/(r-a*sig))
else:
pass # Add 0
return U2_E
def U3(rij, rik, cos_theta):
if (rij < a*sig) and (rik < a*sig):
exp_factor = exp(g*sig/(rij-a*sig)) * exp(g*sig/(rik-a*sig))
angle_factor = l*eps*(cos_theta - cos_tc)**2
return exp_factor * angle_factor
else:
return 0.0
# Sum up two body terms
U2_sum = np.sum(U2)
    if isnan(U2_sum): # vectorized U2 can produce NaN for atoms at/beyond the cutoff
        U2_sum = U2_serial(r, rc) # recompute serially, skipping those atoms
# Need a double sum to find three body terms
U3_sum = 0.0
for j in range(N): # j != i
v_rij = xyz[j] # Only depend on j
rij = r[j]
for k in range(j+1,N): # i < j < k
v_rik = xyz[k]
rik = r[k]
cos_theta_jik = np.dot(v_rij, v_rik) / (rij*rik)
U3_sum += U3(rij, rik, cos_theta_jik)
U_total = U2_sum/2.0 + U3_sum # The U2-sum is per-2-body-bond, thus we grant half to each atom
return U_total
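# The terms assembled above follow the standard Stillinger-Weber form:
#
#   U2(r)         = A*eps*(B*(sig/r)**p - (sig/r)**q) * exp(sig/(r - a*sig)),  r < a*sig
#   U3(rij, rik)  = l*eps*(cos(theta_jik) - cos_tc)**2
#                   * exp(g*sig/(rij - a*sig)) * exp(g*sig/(rik - a*sig))
#
# with cos_tc = -1/3 (the tetrahedral angle). Each two-body bond is shared
# between its two atoms, hence the U2_sum/2.0 in the per-atom total above.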
def potentialEnergyGenerator(xyz_N, PES):
if len(xyz_N.shape) == 2: # This is just a single neighbor list
return PES(xyz_N)
else:
size = xyz_N.shape[2]
Ep = np.zeros(size)
for i in range(size):
xyz_i = xyz_N[:,:,i]
Ep[i] = PES(xyz_i)
"""
# Plot distribution of potential energy (per particle)
import matplotlib.pyplot as plt
plt.hist(Ep,bins=50)
plt.show()
"""
return Ep
def potentialEnergyGeneratorSingleNeigList(xyz_i, PES):
return PES(xyz_i)
def createXYZ_biased(r_min, r_max, size, neighbours=7, histogramPlot=False, verbose=False):
"""
# Input: Size of train and test size + number of neighbours
# Output: xyz-neighbours-matrix of size 'size'
Generates random numbers with x,y,z that can be [-r_max,r_max] with r in [r_min, r_max]
"""
if verbose:
print "Creating XYZ-neighbor-data for:\n - Neighbors: %d \n - Samples : %d" %(neighbours,size)
print "-------------------------------"
xyz_N = np.zeros((neighbours,3,size))
xyz = np.zeros((size,3))
for i in range(neighbours): # Fill cube slice for each neighbor (quicker than "size")
r2 = np.random.uniform(r_min, r_max, size)**2
xyz[:,0] = np.random.uniform(0, r2, size)
xyz[:,1] = np.random.uniform(0, r2-xyz[:,0], size)
xyz[:,2] = r2 - xyz[:,0] - xyz[:,1]
for row in range(size):
np.random.shuffle(xyz[row,:]) # This shuffles in-place (so no copying)
xyz_N[i,0,:] = np.sqrt(xyz[:,0]) * np.random.choice([-1,1],size) # 50-50 if position is plus or minus
xyz_N[i,1,:] = np.sqrt(xyz[:,1]) * np.random.choice([-1,1],size)
xyz_N[i,2,:] = np.sqrt(xyz[:,2]) * np.random.choice([-1,1],size)
if histogramPlot:
        import matplotlib.pyplot as plt
        plt.subplot(3,1,1); plt.hist(xyz_N[:,0,:].ravel(), bins=70)
        plt.subplot(3,1,2); plt.hist(xyz_N[:,1,:].ravel(), bins=70)
        plt.subplot(3,1,3); plt.hist(xyz_N[:,2,:].ravel(), bins=70)
        plt.show()
return xyz_N
def createXYZ_uni_MC(r_min, r_max, size, neighbours=7, histogramPlot=False, verbose=False):
"""
# Input: Size of train and test size + number of neighbours
# Output: xyz-neighbours-matrix of size 'size'
Generates uniform random numbers inside a sphere with Monte Carlo
    sampling: each component x,y,z lies in [-r_max, r_max], with r in [r_min, r_max]
"""
if verbose:
print "Creating XYZ-neighbor-data for:\n - Neighbors: %d \n - Samples : %d" %(neighbours,size)
print "-------------------------------"
tot_atoms = size * neighbours
xyz = np.random.uniform(-r_max,r_max,[tot_atoms,3])
r = np.linalg.norm(xyz, axis=1) # Compute all R at once
accepted_indices = []
bad_indices = []
for i in range(len(r)):
if r[i] < r_max and r[i] > r_min: # Accepted range of values
accepted_indices.append(i)
else:
bad_indices.append(i)
# Do the missing number of atoms serially:
for j in bad_indices:
while True:
x,y,z = np.random.uniform(-r_max,r_max,3)
r = sqrt(x**2 + y**2 + z**2)
if r < r_max and r > r_min: # Accepted range of values
xyz[j,:] = [x,y,z] # Replace bad apple
break # Skip to next
# Fill output cube
xyz_N = np.zeros((neighbours,3,size))
for i in range(size):
xyz_N[:,:,i] = xyz[i*neighbours:(i+1)*neighbours,:]
if histogramPlot:
        import matplotlib.pyplot as plt
        plt.subplot(3,1,1); plt.hist(xyz_N[:,0,:].ravel(), bins=70)
        plt.subplot(3,1,2); plt.hist(xyz_N[:,1,:].ravel(), bins=70)
        plt.subplot(3,1,3); plt.hist(xyz_N[:,2,:].ravel(), bins=70)
        plt.show()
return xyz_N
def createXYZ_uni2(r_min, r_max, size, neighbours=7, histogramPlot=False, verbose=False):
"""
# Input: Size of train and test size + number of neighbours
# Output: xyz-neighbours-matrix of size 'size'
Generates uniform random numbers inside a sphere directly
    sampling: each component x,y,z lies in [-r_max, r_max], with r in [r_min, r_max]
"""
if verbose:
print "Creating XYZ-neighbor-data for:\n - Neighbors: %d \n - Samples : %d" %(neighbours,size)
print "-------------------------------"
tot_atoms = size * neighbours
u = np.random.uniform(0,1,tot_atoms)
v = np.random.uniform(0,1,tot_atoms)
w = np.random.uniform(0,1,tot_atoms)
R_usq3 = (r_max-r_min) * u**(1./3.) + r_min
w2pi = 2*np.pi*w
v_term = np.sqrt(1-(1-2*v)**2)
xyz = np.zeros((tot_atoms,3))
xyz[:,0] = R_usq3 * v_term * np.cos(w2pi)
xyz[:,1] = R_usq3 * v_term * np.sin(w2pi)
xyz[:,2] = R_usq3 * (1-2*v)
# Fill output cube
xyz_N = np.zeros((neighbours,3,size))
for i in range(size):
xyz_N[:,:,i] = xyz[i*neighbours:(i+1)*neighbours,:]
if histogramPlot:
        import matplotlib.pyplot as plt
        plt.subplot(3,1,1); plt.hist(xyz_N[:,0,:].ravel(), bins=70)
        plt.subplot(3,1,2); plt.hist(xyz_N[:,1,:].ravel(), bins=70)
        plt.subplot(3,1,3); plt.hist(xyz_N[:,2,:].ravel(), bins=70)
        plt.show()
return xyz_N
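# The sampling above is the standard inverse-CDF recipe for a uniform point
# in a ball (for r_min = 0):
#
#   r = r_max * u**(1/3),   cos(theta) = 1 - 2*v,   phi = 2*pi*w
#   x = r*sin(theta)*cos(phi),  y = r*sin(theta)*sin(phi),  z = r*cos(theta)
#
# The code shifts the radius by r_min to enforce a minimum separation; note
# that this shift is a modelling choice and does not give exactly uniform
# density over the spherical shell.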
def createTrainData(size, neighbours, PES, verbose=False):
if PES == PES_Stillinger_Weber:
sigma = 2.0951
r_low = 0.85 * sigma
r_high = 1.8 * sigma - 1E-8 # SW has a divide by zero at exactly cutoff
xyz_N = createXYZ_uni2(r_low, r_high, size, neighbours, verbose=verbose)
Ep = potentialEnergyGenerator(xyz_N, PES)
Ep = Ep.reshape([size,1])
G_funcs, nmbr_G = generate_symmfunc_input_Si_Behler()
nn_input = np.zeros((size, nmbr_G))
for i in range(size):
xyz_i = xyz_N[:,:,i]
            nn_input[i,:] = symmetryTransformBehler(G_funcs, xyz_i)
if verbose:
sys.stdout.write('\r' + ' '*80) # White out line
percent = round(float(i+1)/size*100., 2)
sys.stdout.write('\rTransforming xyz with symmetry functions. %.2f %% complete' %(percent))
sys.stdout.flush()
if verbose:
print " "
else:
print "To be implemented! For now, use PES = PES_Stillinger_Weber. Exiting..."
sys.exit(0)
return nn_input, Ep
def checkAndMaybeLoadPrevTrainData(filename, no_load=False):
origFilename = filename
listOfTrainData = glob.glob("SW_train_*.txt")
if filename in listOfTrainData: # Filename already exist
i = 0
while True:
i += 1
filename = origFilename[:-4] + "_v%d" %i + ".txt"
if filename not in listOfTrainData:
print "New filename:", filename
break # Continue changing name until we find one available
if not listOfTrainData: # No previous files
return False, None, filename
elif not no_load:
nmbrFiles = len(listOfTrainData)
yn = raw_input("Found %d file(s). Load them into this file? (y/N) " %nmbrFiles)
if yn in ["y","Y","yes","Yes","YES"]: # Standard = enter = NO
loadedData = []
for file_i in listOfTrainData:
all_data = loadFromFile(0, file_i, shuffle_rows=False)
loadedData.append(all_data.return_all_data())
yn = raw_input("Delete files loaded? (Y/n) ")
if yn in ["y","Y","yes","Yes","YES",""]: # Standard = enter = YES
for file_i in listOfTrainData:
os.remove(file_i)
filename = origFilename # Since we delete it here
# Smash all data into a single file
if len(loadedData) > 1:
all_data = np.concatenate(loadedData, axis=0)
else:
all_data = loadedData[0]
return True, all_data, filename
return False, None, filename
else:
return False, None, filename
def createTrainDataDump(size, neighbours, PES, filename, only_concatenate=False, verbose=False, no_load=False):
# Check if file exist and in case, ask if it should be loaded
filesLoadedBool, prev_data, filename = checkAndMaybeLoadPrevTrainData(filename, no_load)
if only_concatenate:
if verbose:
sys.stdout.write('\n\r' + ' '*80) # White out line
sys.stdout.write('\rSaving all training data to file.')
sys.stdout.flush()
np.random.shuffle(prev_data) # Shuffle the rows of the data i.e. the symmetry vectors
np.savetxt(filename, prev_data, delimiter=',')
if verbose:
sys.stdout.write('\r' + ' '*80) # White out line
sys.stdout.write('\rSaving all training data to file. Done!\n')
sys.stdout.flush()
else:
if PES == PES_Stillinger_Weber: # i.e. if not 'only_concatenate'
sigma = 2.0951 # 1.0
r_low = 0.85 * sigma # Shortest possible dist between atom i and neighbor
r_high = 1.8 * sigma - 1E-10 # SW has a divide by zero at exactly cutoff
xyz_N_train = createXYZ_uni2(r_low, r_high, size, neighbours, verbose=verbose)
if verbose:
sys.stdout.write('\r' + ' '*80) # White out line
sys.stdout.write('\rComputing potential energy.')
sys.stdout.flush()
Ep = potentialEnergyGenerator(xyz_N_train, PES)
if verbose:
sys.stdout.write('\r' + ' '*80) # White out line
sys.stdout.write('\rComputing potential energy. Done!\n')
sys.stdout.flush()
G_funcs, nmbr_G = generate_symmfunc_input_Si_Behler()
xTrain = np.zeros((size, nmbr_G))
for i in range(size):
xyz_i = xyz_N_train[:,:,i]
# xTrain[i,:] = symmetryTransform(G_funcs, xyz_i)
xTrain[i,:] = symmetryTransformBehler(G_funcs, xyz_i)
if verbose and (i+1)%10 == 0:
sys.stdout.write('\r' + ' '*80) # White out line
percent = round(float(i+1)/size*100., 2)
sys.stdout.write('\rTransforming xyz with symmetry functions. %.2f %% complete' %(percent))
sys.stdout.flush()
elif PES == PES_Lennard_Jones:
sigma = 1.0
r_low = 0.9 * sigma
r_high = 1.6 * sigma
xyz_N_train = createXYZ_uni2(r_low, r_high, size, neighbours, verbose=verbose)
if verbose:
sys.stdout.write('\r' + ' '*80) # White out line
sys.stdout.write('\rComputing potential energy.')
sys.stdout.flush()
Ep = potentialEnergyGenerator(xyz_N_train, PES)
if verbose:
sys.stdout.write('\r' + ' '*80) # White out line
sys.stdout.write('\rComputing potential energy. Done!\n')
sys.stdout.flush()
G_funcs, nmbr_G = generate_symmfunc_input_LJ(sigma)
xTrain = np.zeros((size, nmbr_G))
for i in range(size):
xyz_i = xyz_N_train[:,:,i]
xTrain[i,:] = symmetryTransform(G_funcs, xyz_i)
if verbose and (i+1)%10 == 0:
sys.stdout.write('\r' + ' '*80) # White out line
percent = round(float(i+1)/size*100., 2)
sys.stdout.write('\rTransforming xyz with symmetry functions. %.2f %% complete' %(percent))
sys.stdout.flush()
else:
print "To be implemented! For now, use PES = PES_Stillinger_Weber. Exiting..."
sys.exit(0)
if verbose:
sys.stdout.write('\n\r' + ' '*80) # White out line
sys.stdout.write('\rSaving all training data to file.')
sys.stdout.flush()
dump_data = np.zeros((size, nmbr_G + 1))
dump_data[:,0] = Ep
dump_data[:,1:] = xTrain
if filesLoadedBool:
dump_data = np.concatenate((dump_data, prev_data), axis=0) # Add loaded files
np.random.shuffle(dump_data) # Shuffle the rows of the data i.e. the symmetry vectors
np.savetxt(filename, dump_data, delimiter=',')
if verbose:
sys.stdout.write('\r' + ' '*80) # White out line
sys.stdout.write('\rSaving all training data to file. Done!\n')
sys.stdout.flush()
def createDataDumpBehlerSi():
PES = PES_Stillinger_Weber
size = 200000
neighbours = 10
sigma = 2.0951 # 1.0
r_low = 0.85 * sigma
r_high = 1.8 * sigma - 1E-8 # SW has a divide by zero at exactly cutoff
xyz_N_train = createXYZ_uni2(r_low, r_high, size, neighbours, verbose=True)
Ep = potentialEnergyGenerator(xyz_N_train, PES)
params, nmbr_G = generate_symmfunc_input_Si_Behler()
xTrain = np.zeros((size, nmbr_G))
for i in range(size):
xyz_i = xyz_N_train[:,:,i]
xTrain[i,:] = symmetryTransformBehler(params, xyz_i)
dump_data = np.zeros((size, nmbr_G + 1))
dump_data[:,0] = Ep
dump_data[:,1:] = xTrain
np.random.shuffle(dump_data) # Shuffle the rows of the data i.e. the symmetry vectors
np.savetxt("SW_Behler_200000_n10.txt", dump_data, delimiter=',')
def generate_symmfunc_input_Si_Behler():
# Rescale to a lower cutoff, i.e. 3.77118
scale_to_SW_cut = False # NOTICE ME SENPAI
if scale_to_SW_cut:
print "!!NB!! USING SCALED BEHLER SYMMETRY FUNCTIONS"
SW_cut = 3.77118
scale_fac = (SW_cut / 6.0) * 0.99999999999999 # Make damn sure floats stay below cut
# Behlers Si-values
paramsForSymm = []
with open("Important_data/behler_Si_symm_funcs.txt", "r") as open_file:
row = -1
for line in open_file:
row += 1
if row == 0:
continue
line = line.replace(",", " ")
linesplit = line.split()
if row == 1:
tot_nmbr_symm = int(linesplit[0])
continue
if linesplit[0] == "G2":
"""
'G2', 2.0, 6.0, 0.0 # eta, cut, Rs
"""
G2_params = np.array(linesplit[1:4], dtype=float)
if scale_to_SW_cut:
G2_params[1:] *= scale_fac # cut AND Rs
paramsForSymm.append([2] + list(G2_params))
elif linesplit[0] == "G4":
"""
'G4', 0.01 , 6.0, 1, 1 # eta, cut, zeta, lambda
"""
G4_params = np.array(linesplit[1:5], dtype=float)
if scale_to_SW_cut:
G4_params[1] *= scale_fac # ONLY cut
paramsForSymm.append([4] + list(G4_params))
elif linesplit[0] == "G5":
"""
'G5', 0.01 , 6.0, 1, 1 # eta, cut, zeta, lambda
"""
G5_params = np.array(linesplit[1:5], dtype=float)
if scale_to_SW_cut:
                    G5_params[1] *= scale_fac # ONLY cut
paramsForSymm.append([5] + list(G5_params))
else:
print linesplit[0], "not understood. Should be 'G2', 'G4' or 'G5'..."
assert tot_nmbr_symm == len(paramsForSymm)
return paramsForSymm, tot_nmbr_symm
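# Each entry of paramsForSymm is a flat, type-tagged parameter list --
# [2, eta, cut, Rs] for a G2 and [4|5, eta, cut, zeta, lambda] for a G4/G5 --
# mirroring the lines of behler_Si_symm_funcs.txt quoted in the strings above.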
def generate_symmfunc_input_Si_v1():
sigma = 2.0951
G_funcs = [0,0,0,0,0] # Start out with NO symm.funcs.
G_vars = [1,3,2,4,5] # Number of variables symm.func. take as input
G_args_list = ["rc[i][j]",
"rc[i][j], rs[i][j], eta[i][j]",
"rc[i][j], kappa[i][j]",
"rc[i][j], eta[i][j], zeta[i][j], lambda_c[i][j]",
"rc[i][j], eta[i][j], zeta[i][j], lambda_c[i][j]"]
# Make use of symmetry function G2 and G5: (indicate how many)
    which_symm_funcs = [2, 4] # G5 instead of G4, because SW doesn't care about Rjk
wsf = which_symm_funcs
how_many_funcs = [10, 120]
hmf = how_many_funcs
# This is where the pain begins -_-
# Note: [3] * 4 evaluates to [3,3,3,3]
rc = [[1.8 * sigma]*10, [1.8 * sigma]*120]
rs = [[0.85 * sigma]*10, None]
eta = [[0.0, 0.3, 0.65, 1.25, 2.5, 5.0, 10.0, 20.0, 40.0, 90.0], \
[0.0]*12 + [0.3]*12 + [0.65]*12 + [1.25]*12 + [2.5]*12 + [5.]*12 + [10.]*12 + [20.]*12 + [40.]*12 + [90.]*12]
zeta = [[None], [1,1,2,2,4,4,8,8,16,16,32,32]*10]
lambda_c = [[None],[-1,1]*60]
i = 0 # Will be first G-func
for G,n in zip(wsf, hmf):
G_funcs[G-1] = [n, np.zeros((n, G_vars[G-1]))]
for j in range(n):
symm_args = eval("np.array([%s])" %(G_args_list[G-1]))
G_funcs[G-1][1][j] = symm_args
i += 1
tot_Gs = np.sum(np.array(hmf))
return G_funcs, tot_Gs
def generate_symmfunc_input_LJ(sigma=1.0):
"""
Domain:
a = 0.9 --> b = 1.6 # times sigma
"""
G_funcs = [0,0,0,0,0] # Start out with NO symm.funcs.
G_vars = [1,3,2,4,5] # Number of variables symm.func. take as input
G_args_list = ["rc[i][j]",
"rc[i][j], rs[i][j], eta[i][j]",
"rc[i][j], kappa[i][j]",
"rc[i][j], eta[i][j], zeta[i][j], lambda_c[i][j]",
"rc[i][j], eta[i][j], zeta[i][j], lambda_c[i][j]"]
# Make use of symmetry function G2 and G5: (indicate how many)
    which_symm_funcs = [2] # G5 instead of G4, because SW doesn't care about Rjk
wsf = which_symm_funcs
how_many_funcs = [10]
hmf = how_many_funcs
# This is where the pain begins -_-
# Note: [3] * 4 evaluates to [3,3,3,3]
rc = [[1.6*sigma]*10, None]
rs = [[0.9*sigma]*10, None]
eta = [[0.0, 1.0, 2.5, 5.0, 10.0, 20.0, 40.0, 90.0, 200.0, 500.0], [None]]
i = 0 # Will be first G-func
for G,n in zip(wsf, hmf):
G_funcs[G-1] = [n, np.zeros((n, G_vars[G-1]))]
for j in range(n):
symm_args = eval("np.array([%s])" %(G_args_list[G-1]))
G_funcs[G-1][1][j] = symm_args
i += 1
tot_Gs = np.sum(np.array(hmf))
return G_funcs, tot_Gs
def testLammpsData(filename):
Ep = []
Ep2 = []
with open(filename, 'r') as lammps_file:
"""
File looks like this
x1 y1 z1 r1^2 x2 y2 z2 r2^2 ... xN yN zN rN^2 Ep
"""
for i,row in enumerate(lammps_file):
if i < 2000:
continue # Skip first 2000
xyzr_i = np.array(row.split(), dtype=float)
            n_elem = (len(xyzr_i) - 1)/4 # drop Ep, then four values (x,y,z,r^2) per neighbour
Ep.append(xyzr_i[-1])
xyzr_i = xyzr_i[:-1].reshape(n_elem,4)
xyz_i = xyzr_i[:,:-1]
            Ep2.append(potentialEnergyGenerator(xyz_i, PES=PES_Stillinger_Weber))
    # Compare the energies dumped by LAMMPS with the recomputed SW energies
    diff = np.max(np.abs(np.array(Ep) - np.array(Ep2)))
    print "Max |Ep(lammps) - Ep(recomputed)|:", diff
def NeighListDataToSymmToFile(open_filename, save_filename, size):
Ep = []
if size == "all":
with open(open_filename, 'r') as lammps_file:
size = 0
for line in lammps_file:
size +=1
save_filename += "%d.txt" %size
if size == 0:
print "Length of file is zero! Fix and run again!\nExiting!"
sys.exit(0)
with open(open_filename, 'r') as lammps_file:
"""
File looks like this
x1 y1 z1 r1^2 x2 y2 z2 r2^2 ... xN yN zN rN^2 Ep
"""
# G_funcs, nmbr_G = generate_symmfunc_input_Si_v1() # Bad, don't use :)
G_funcs, nmbr_G = generate_symmfunc_input_Si_Behler() # Read from file, cleaner
xTrain = np.zeros((size, nmbr_G))
for i,row in enumerate(lammps_file):
if i >= size:
continue # Skip to the next row
xyzr_i = np.array(row.split(), dtype=float)
            n_elem = (len(xyzr_i) - 1)/4 # drop Ep, then four values (x,y,z,r^2) per neighbour
# Ep.append(xyzr_i[-1]) # This is broken by lammps somehow... compute my own below
xyzr_i = xyzr_i[:-1].reshape(n_elem,4)
xyz_i = xyzr_i[:,:-1]
Ep.append(potentialEnergyGenerator(xyz_i, PES=PES_Stillinger_Weber))
xTrain[i,:] = symmetryTransformBehler(G_funcs, xyz_i)
if (i+1)%10 == 0:
sys.stdout.write('\r' + ' '*80) # White out line
percent = round(float(i+1)/size*100., 2)
sys.stdout.write('\rTransforming xyz with symmetry functions. %.2f %% complete' %(percent))
sys.stdout.flush()
print "\nNmbr of lines in file", i+1, ", length Ep:", len(Ep), ", size --> file:", size
dump_data = np.zeros((size, nmbr_G + 1))
dump_data[:,0] = Ep
dump_data[:,1:] = xTrain
np.random.shuffle(dump_data) # Shuffle the rows of the data i.e. the symmetry vectors
np.savetxt(save_filename, dump_data, delimiter=',')
print "Saved symmetry vector training data to file:"
print '"%s"\n' %save_filename
def rotateXYZ(xyz, xr, yr, zr, angle="radians"):
"""
Rotates around all cartesian axes
xyz: [x,y,z]
"""
if angle == "degrees":
xr = xr/360.*2*pi
yr = yr/360.*2*pi
zr = zr/360.*2*pi
angle = "radians"
if angle == "radians":
Rx = np.array([[cos(xr), -sin(xr), 0],
[sin(xr), cos(xr), 0],
[0 , 0, 1]])
Ry = np.array([[cos(yr) , 0, sin(yr)],
[0 , 1, 0],
[-sin(yr), 0, cos(yr)]])
Rz = np.array([[1, 0, 0],
[0, cos(zr), -sin(zr)],
[0, sin(zr), cos(zr)]])
R = np.dot(np.dot(Rx, Ry), Rz) # Dot for 2d-arrays does matrix multiplication
return np.dot(xyz, R)
else:
print "Angle must be given in 'radians' or 'degrees'. Exiting."
sys.exit(0)
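# Note: Rx, Ry and Rz above are each orthogonal, so their product R satisfies
# R.dot(R.T) == identity; np.dot(xyz, R) therefore preserves norms and pairwise
# angles, which is why Ep and the symmetry vectors should be invariant under
# rotateXYZ (verified empirically in testAngularInvarianceEpAndSymmFuncs below).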
def testAngularInvarianceEpAndSymmFuncs():
sigma = 2.0951 # 1.0
r_low = 0.85 * sigma
r_high = 1.8 * sigma - 1E-8 # SW has a divide by zero at exactly cutoff
size = 1
neighbours = 8
PES = PES_Stillinger_Weber
xyz_N = createXYZ_uni2(r_low, r_high, size, neighbours, verbose=False) # size = 10, neigh = 5
Ep0 = potentialEnergyGenerator(xyz_N, PES)
G_funcs, nmbr_G = generate_symmfunc_input_Si_Behler()
symm_func_vec0 = np.zeros((size, nmbr_G))
symm_func_vec1 = np.zeros((size, nmbr_G))
for i in range(size):
xyz_nl = xyz_N[:,:,i] # single nl, (neighbor list)
        symm_func_vec0[i,:] = symmetryTransformBehler(G_funcs, xyz_nl) # construct the symmetry vector pre-rotation
for rotation in range(50): # Do x,y,z rotation a total of 50 times to
xr, yr, zr = np.random.uniform(0,2*np.pi,3) # Rotate all neighbor atoms with same angles (generated randomly)
for j in range(xyz_nl.shape[0]): # single x,y,z vector
xyz_nl[j] = rotateXYZ(xyz_nl[j], xr, yr, zr) # rotate all atoms in neighbor list with same angles
xyz_N[:,:,i] = xyz_nl
        symm_func_vec1[i,:] = symmetryTransformBehler(G_funcs, xyz_nl) # construct the symmetry vector post-rotation
Ep1 = potentialEnergyGenerator(xyz_N, PES)
mae_ep0 = np.mean(np.abs(Ep0))
mae_ep1 = np.mean(np.abs(Ep1))
EpDiff = abs(mae_ep1 - mae_ep0)
mae_g0 = np.mean(np.abs(symm_func_vec0))
mae_g1 = np.mean(np.abs(symm_func_vec1))
GDiff = abs(mae_g1 - mae_g0)
print "MAE Ep:", mae_ep0, ", MAE Ep after rotation:", mae_ep1, ", diff:", EpDiff
print "MAE SymmVec:", mae_g0, ", after rotation:", mae_g1, ", diff:", GDiff
if __name__ == '__main__':
"""
Suggestion: Only have ONE option set to True at the time.
(not an absolute rule!)
"""
# Based on random structures, fixed/variable number of neighbours
dumpToFile = False
dumpMultiple = False
# Structures from SW run in lammps
xyz_to_neigh_lists = True
dumpXYZ_file = True # From own algo: "readXYZ_Files"
# Unit tests
testAngSymm = False
testLammps = False
testClass = False
if xyz_to_neigh_lists or dumpXYZ_file:
n_atoms = int(raw_input("Number of atoms? "))
if xyz_to_neigh_lists:
"""
Takes XYZ files from LAMMPS dump and makes neighbor lists
"""
other_info = "" # i.e. "no_3_body"
cutoff = 3.77118 # Stillinger-Weber
        samples_per_dt = 10 # Integer value or "all" (don't use "all" for very small systems!)
test_boundary = True # Just use atoms wherever they are
file_path = "Important_data/Test_nn/enfil_sw_%sp%s.xyz" %(n_atoms,other_info)
save_file = "Important_data/neigh_list_from_xyz_%sp%s.txt" %(n_atoms,other_info)
readXYZ_Files(file_path, save_file, samples_per_dt, cutoff, test_boundary)
if dumpXYZ_file:
"""
Takes neighbour lists and makes symmetry data / training data
"""
other_info = "" # i.e. "no_3_body"
size = "all" # should be <= rows in file!!!!
open_filename = "Important_data/neigh_list_from_xyz_%sp%s.txt" %(n_atoms,other_info)
save_filename = "SW_train_xyz_%s.txt" %str(size)
if size == "all":
save_filename = "SW_train_xyz_%sp%s_" %(n_atoms,other_info)
NeighListDataToSymmToFile(open_filename, save_filename, size)
if testLammps:
filename = "Important_data/neighbours.txt"
testLammpsData(filename)
if dumpToFile:
if True:
# This is SW
size = 50000
neighbours = 2
# filename = "stillinger-weber-symmetry-data.txt"
filename = "SW_train_rs_%s_n%s.txt" %(str(size), str(neighbours))
print "When run directly (like now), this file dumps training data to file:"
print '"%s"' %filename
print "-------------------------------"
print "Neighbors", neighbours
print "-------------------------------"
PES = PES_Stillinger_Weber
t0 = timer()
createTrainDataDump(size, neighbours, PES, filename, \
only_concatenate=False, verbose=True)
t1 = timer() - t0
print "\nComputation took: %.2f seconds" %t1
else:
# This is LJ
size = 10000
neighbours = 8
# filename = "stillinger-weber-symmetry-data.txt"
filename = "LJ_train_rs_%s_n%s.txt" %(str(size), str(neighbours))
print "When run directly (like now), this file dumps training data to file:"
print '"%s"' %filename
print "-------------------------------"
print "Neighbors", neighbours
print "-------------------------------"
PES = PES_Lennard_Jones
t0 = timer()
createTrainDataDump(size, neighbours, PES, filename, \
only_concatenate=False, verbose=True)
t1 = timer() - t0
print "\nComputation took: %.2f seconds" %t1
if dumpMultiple:
size = 100 # PER single value in neigh_list
# The list below matches the distribution of neighbours in SW run at "standard settings":
# neigh_list = [4]*2 + [5]*6 + [6]*13 + [7]*14 + [8]*9 + [9]*3 + [10] # len: 48
# The list below seeks to add more system states that are unlikely to be sampled from an actual simulation
neigh_list = [4]*2 + [5]*6 + [6]*2 + [7]*3 + [8]*9 + [9]*3 + [10] # len: 26
# neigh_list = [1,2,2,2]
t0_tot = timer()
PES = PES_Stillinger_Weber
for ID,neighbours in enumerate(neigh_list):
filename = "SW_train_%s_n%s_%s.txt" %(str(size), str(neighbours), str(ID))
print "When run directly (like now), this file dumps training data to file:"
print '"%s"' %filename
print "-------------------------------"
print "Neighbors", neighbours
print "-------------------------------"
t0 = timer()
createTrainDataDump(size, neighbours, PES, filename, \
only_concatenate=False, verbose=True, \
no_load=True)
t1 = timer() - t0
print "\nComputation of %d neighbours took: %.2f seconds" %(neighbours,t1)
t1 = timer() - t0_tot
# Load all files into one master file
createTrainDataDump(0, 0, PES, "SW_train_manyneigh_%d.txt" %(size*len(neigh_list)), \
only_concatenate=True, verbose=True, no_load=False)
if t1 > 1000:
t1 /= 60.
print "\nTotal computation took: %.2f minutes" %t1
else:
print "\nTotal computation took: %.2f seconds" %t1
if testClass:
testSize = 100 # Remove these from training set
filename = "test-class-symmetry-data.txt"
all_data = loadFromFile(testSize, filename)
xTrain, yTrain = all_data(1)
print xTrain[:,0:5], "\n", yTrain
xTrain, yTrain = all_data(1)
print xTrain[:,0:5], "\n", yTrain # Make sure this is different from above print out
if testAngSymm:
testAngularInvarianceEpAndSymmFuncs()
|
(** * Inflationarity *)
(** The dual notion of an inflationarity or a progressive map
is a deflationarity or a regressive map,
which is why we do not define it separately. *)
From DEZ Require Export
Init.
(** ** Inflationary Unary Operation *)
(** ** Progressive Map *)
Class IsInflUnOp (A : Type) (X : A -> A -> Prop)
(f : A -> A) : Prop :=
infl_un_op (x : A) : X x (f x).
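(** For example, the successor function on the natural numbers
    is inflationary with respect to the standard order,
    since [x <= S x] holds for every [x]. *)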
(** ** Inflationary Left Action *)
Class IsInflActL (A B : Type) (X : B -> B -> Prop)
(al : A -> B -> B) : Prop :=
infl_act_l (x : A) (a : B) : X a (al x a).
Section Context.
Context (A B : Type) (X : B -> B -> Prop)
(al : A -> B -> B).
(** Inflationarity of a left action is a special case
of the inflationarity of its partial application. *)
#[export] Instance infl_act_l_is_infl_un_op
`{!IsInflActL X al} (x : A) : IsInflUnOp X (al x).
Proof. intros a. apply infl_act_l. Qed.
#[local] Instance infl_un_op_is_infl_act_l
`{!forall x : A, IsInflUnOp X (al x)} : IsInflActL X al.
Proof. intros x a. apply infl_un_op. Qed.
End Context.
(** ** Inflationary Right Action *)
Class IsInflActR (A B : Type) (X : B -> B -> Prop)
(ar : B -> A -> B) : Prop :=
infl_act_r (a : B) (x : A) : X a (ar a x).
Section Context.
Context (A B : Type) (X : B -> B -> Prop)
(ar : B -> A -> B).
(** Inflationarity of a right action is a special case
    of the inflationarity of its flipped partial application. *)
#[export] Instance infl_act_r_is_infl_un_op_flip
`{!IsInflActR X ar} (x : A) : IsInflUnOp X (flip ar x).
Proof. intros a. unfold flip. apply infl_act_r. Qed.
#[local] Instance infl_un_op_flip_is_infl_act_r
`{!forall x : A, IsInflUnOp X (flip ar x)} : IsInflActR X ar.
Proof.
intros x a.
change (ar x a) with (flip ar a x).
apply infl_un_op.
Qed.
End Context.
Section Context.
Context (A B : Type) (X : B -> B -> Prop)
(al : A -> B -> B).
(** Inflationarity of a left action is a special case
of the inflationarity of its flipped version. *)
#[local] Instance infl_act_l_is_infl_act_r_flip
`{!IsInflActL X al} : IsInflActR X (flip al).
Proof. intros a x. unfold flip in *. eauto. Qed.
#[local] Instance infl_act_r_flip_is_infl_act_l
`{!IsInflActR X (flip al)} : IsInflActL X al.
Proof. intros x. unfold flip in *. eauto. Qed.
End Context.
(** ** Left-Inflationary Binary Operation *)
Class IsInflL (A : Type) (X : A -> A -> Prop)
(k : A -> A -> A) : Prop :=
infl_l (x y : A) : X y (k x y).
Section Context.
Context (A : Type) (X : A -> A -> Prop)
(k : A -> A -> A).
(** Left-inflationarity of a binary operation is a special case
of the inflationarity of its partial application. *)
#[export] Instance infl_l_is_infl_un_op
`{!IsInflL X k} (x : A) : IsInflUnOp X (k x).
Proof. intros y. apply infl_l. Qed.
#[local] Instance infl_un_op_is_infl_l
`{!forall x : A, IsInflUnOp X (k x)} : IsInflL X k.
Proof. intros x y. apply infl_un_op. Qed.
End Context.
(** ** Right-Inflationary Binary Operation *)
Class IsInflR (A : Type) (X : A -> A -> Prop)
(k : A -> A -> A) : Prop :=
infl_r (x y : A) : X x (k x y).
Section Context.
Context (A : Type) (X : A -> A -> Prop)
(k : A -> A -> A).
(** Right-inflationarity of a binary operation is a special case
of the inflationarity of its flipped partial application. *)
#[export] Instance infl_r_is_infl_un_op_flip
`{!IsInflR X k} (y : A) : IsInflUnOp X (flip k y).
Proof. intros x. unfold flip. apply infl_r. Qed.
#[local] Instance infl_un_op_flip_is_infl_r
`{!forall y : A, IsInflUnOp X (flip k y)} : IsInflR X k.
Proof.
intros x y.
change (k x y) with (flip k y x).
apply infl_un_op.
Qed.
End Context.
Section Context.
Context (A : Type) (X : A -> A -> Prop)
(k : A -> A -> A).
(** Left-inflationarity of a binary operation is a special case
of the right-inflationarity of its flipped version. *)
#[local] Instance infl_l_is_infl_r_flip
`{!IsInflL X k} : IsInflR X (flip k).
Proof. intros y x. unfold flip in *. eauto. Qed.
#[local] Instance infl_r_flip_is_infl_l
  `{!IsInflR X (flip k)} : IsInflL X k.
Proof. intros x y. unfold flip in *. eauto. Qed.
End Context.
(** ** Inflationary Binary Operation *)
Class IsInfl (A : Type) (X : A -> A -> Prop)
(k : A -> A -> A) : Prop := {
infl_is_infl_l :> IsInflL X k;
infl_is_infl_r :> IsInflR X k;
}.
|
MODULE safe_open_mod
!
! Module for performing a "safe" open of a file for
! a Fortran read/write operation. Makes sure the requested file
! unit number is not in use, and increments it until an unused
! unit is found
!
CONTAINS
SUBROUTINE safe_open(iunit, istat, filename, filestat, &
& fileform, record_in, access_in, delim_in)
!
! Module for performing a "safe" open of a file for
! a Fortran read/write operation. Makes sure the requested file
! unit number is not in use, and increments it until an unused
! unit is found
!
! Note that:
! 1) the actual i/o unit number used is returned in the first argument.
! 2) the status variable from the OPEN command is returned as the second
! argument.
! Here are some examples of usage:
!
! To open an existing namelist input file:
! CALL safe_open(iou,istat,nli_file_name,'old','formatted')
!
! To create a file, in order to write to it:
! CALL safe_open(iou,istat,my_output_file_name,'replace','formatted')
!
! To create an output file, with 'NONE' as delimiter for characters for
! list-directed output and Namelist output
! CALL safe_open(iou,istat,my_output_file_name,'replace',
! & 'formatted',delim_in='none')
! JDH 08-30-2004.
! Based on Steve Hirshman's original safe_open routine
! Rearranged comments, continuation lines, some statement ordering.
! Should be NO change in functionality.
!
! JDH 2010-06-09
! Added coding for DELIM specification
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
INTEGER, INTENT(inout) :: iunit
INTEGER, INTENT(out) :: istat
CHARACTER(LEN=*), INTENT(in) :: filename, filestat, fileform
INTEGER, INTENT(in), OPTIONAL :: record_in
CHARACTER(LEN=*), INTENT(in), OPTIONAL :: access_in
CHARACTER(LEN=*), INTENT(in), OPTIONAL :: delim_in
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
CHARACTER(LEN=*), PARAMETER :: cdelim = "apostrophe", &
cform="formatted", cunform="unformatted", &
cscratch="scratch", cseq="sequential"
CHARACTER(LEN=10) :: acc_type
CHARACTER(LEN=10) :: delim_type
LOGICAL :: lopen, lexist, linvalid
!-----------------------------------------------
! Start of Executable Code
!-----------------------------------------------
!-----------------------------------------------
!
! Check that unit is not already opened
! Increment iunit until find one that is not in use
!
linvalid = .true.
IF (iunit < 0) THEN
WRITE (6, *) 'In safe_open, requested unit was uninitialized: IUNIT=', iunit
iunit = 10
END IF
DO WHILE (linvalid)
INQUIRE(iunit, exist=lexist, opened=lopen, iostat=istat)
linvalid = (istat.ne.0 .or. .not.lexist) .or. lopen
IF (.not.linvalid) EXIT
iunit = iunit + 1
END DO
! JDH 08-24-2004 This next IF(Present) clause seems to be duplicated below.
! I think one of the two should be eliminated, for clarity.
IF (PRESENT(access_in)) THEN
acc_type = TRIM(access_in)
ELSE
acc_type = cseq
END IF
! Why not call this variable lscratch?
lexist = (filestat(1:1).eq.'s') .or. (filestat(1:1).eq.'S') !Scratch file
! JDH 08-24-2004 Below is nearly exact duplicate of IF(Present) clause
! from above
IF (PRESENT(access_in)) THEN
acc_type = TRIM(access_in)
ELSE
acc_type = 'SEQUENTIAL'
END IF
! JDH 2010-06-09. Coding for DELIM
IF (PRESENT(delim_in)) THEN
SELECT CASE (delim_in(1:1))
CASE ('n', 'N')
delim_type = 'none'
CASE ('q', 'Q')
delim_type = 'quote'
CASE DEFAULT
delim_type = cdelim
END SELECT
ELSE
delim_type = cdelim
ENDIF
! Here are the actual OPEN commands. Eight different cases.
SELECT CASE (fileform(1:1))
CASE ('u', 'U')
IF (PRESENT(record_in)) THEN
IF (lexist) THEN ! unformatted, record length specified, scratch
OPEN(unit=iunit, form=cunform, status=cscratch, &
& recl=record_in, access=acc_type, iostat=istat)
ELSE ! unformatted, record length specified, non-scratch
OPEN(unit=iunit, file=TRIM(filename), form=cunform, &
& status=TRIM(filestat), recl=record_in, &
& access=acc_type, iostat=istat)
END IF
ELSE
IF (lexist) THEN ! unformatted, record length unspecified, scratch
OPEN(unit=iunit, form=cunform, status=cscratch, &
& access=acc_type, iostat=istat)
ELSE ! unformatted, record length unspecified, non-scratch
OPEN(unit=iunit, file=TRIM(filename), form=cunform, &
& status=TRIM(filestat), access=acc_type,iostat=istat)
END IF
END IF
CASE DEFAULT
IF (PRESENT(record_in)) THEN
IF (lexist) THEN ! formatted, record length specified, scratch
OPEN(unit=iunit, form=cform, status=cscratch, &
delim=TRIM(delim_type), recl=record_in, &
access=acc_type, iostat=istat)
ELSE ! formatted, record length specified, non-scratch
OPEN(unit=iunit, file=TRIM(filename), form=cform, &
status=TRIM(filestat), delim=TRIM(delim_type), &
recl=record_in, access=acc_type, iostat=istat)
END IF
ELSE
IF (lexist) THEN ! formatted, record length unspecified, scratch
OPEN(unit=iunit, form=cform, status=cscratch, &
delim=TRIM(delim_type), access=acc_type, &
iostat=istat)
ELSE ! formatted, record length unspecified, non-scratch
OPEN(unit=iunit, file=TRIM(filename), form=cform, &
status=TRIM(filestat), delim=TRIM(delim_type), &
access=acc_type, iostat=istat)
END IF
END IF
END SELECT
END SUBROUTINE safe_open
END MODULE safe_open_mod
|
subsection \<open>Instantiation for IO automata\<close>
(*<*)
theory BD_Security_IO
imports Abstract_BD_Security BD_Security_TS IO_Automaton Filtermap
begin
(*>*)
no_notation relcomp (infixr "O" 75)
abbreviation never :: "('a \<Rightarrow> bool) \<Rightarrow> 'a list \<Rightarrow> bool" where "never U \<equiv> list_all (\<lambda> a. \<not> U a)"
locale BD_Security_IO = IO_Automaton istate step
for istate :: 'state and step :: "'state \<Rightarrow> 'act \<Rightarrow> 'out \<times> 'state"
+
fixes (* value filtering and production: *)
\<phi> :: "('state,'act,'out) trans \<Rightarrow> bool" and f :: "('state,'act,'out) trans \<Rightarrow> 'value"
and (* observation filtering and production: *)
\<gamma> :: "('state,'act,'out) trans \<Rightarrow> bool" and g :: "('state,'act,'out) trans \<Rightarrow> 'obs"
and (* declassification trigger: *)
T :: "('state,'act,'out) trans \<Rightarrow> bool"
and (* declassification bound: *)
B :: "'value list \<Rightarrow> 'value list \<Rightarrow> bool"
begin
sublocale BD_Security_TS where validTrans = validTrans and srcOf = srcOf and tgtOf = tgtOf .
lemma reachNT_step_induct[consumes 1, case_names Istate Step]:
assumes "reachNT s"
and "P istate"
and "\<And>s a ou s'. reachNT s \<Longrightarrow> step s a = (ou, s') \<Longrightarrow> \<not>T (Trans s a ou s') \<Longrightarrow> P s \<Longrightarrow> P s'"
shows "P s"
using assms
by (induction rule: reachNT.induct) (auto elim: validTrans.elims)
lemma reachNT_PairI:
assumes "reachNT s" and "step s a = (ou, s')" and "\<not> T (Trans s a ou s')"
shows "reachNT s'"
using assms reachNT.simps[of s']
by auto
lemma reachNT_state_cases[cases set, consumes 1, case_names init step]:
assumes "reachNT s"
obtains "s = istate"
| sh a ou where "reach sh" "step sh a = (ou,s)" "\<not>T (Trans sh a ou s)"
using assms
unfolding reachNT.simps[of s]
by (fastforce intro: reachNT_reach elim: validTrans.elims)
(* This is assumed to be an invariant only modulo non T *)
definition invarNT where
"invarNT Inv \<equiv> \<forall> s a ou s'. reachNT s \<and> Inv s \<and> \<not> T (Trans s a ou s') \<and> step s a = (ou,s') \<longrightarrow> Inv s'"
lemma invarNT_disj:
assumes "invarNT Inv1" and "invarNT Inv2"
shows "invarNT (\<lambda> s. Inv1 s \<or> Inv2 s)"
using assms unfolding invarNT_def by blast
lemma invarNT_conj:
assumes "invarNT Inv1" and "invarNT Inv2"
shows "invarNT (\<lambda> s. Inv1 s \<and> Inv2 s)"
using assms unfolding invarNT_def by blast
lemma holdsIstate_invarNT:
assumes h: "holdsIstate Inv" and i: "invarNT Inv" and a: "reachNT s"
shows "Inv s"
using a using h i unfolding holdsIstate_def invarNT_def
by (induction rule: reachNT_step_induct) auto
end (* context BD_Security_IO *)
(*<*)
end
(*>*)
|
[GOAL]
𝕜 : Type u_1
𝕜' : Type u_2
E : Type u_3
inst✝⁴ : NormedField 𝕜
inst✝³ : NormedField 𝕜'
inst✝² : SeminormedAddCommGroup E
inst✝¹ : NormedSpace 𝕜 E
inst✝ : NormedSpace 𝕜' E
r : ℝ
c : ↑(closedBall 0 1)
x : ↑(ball 0 r)
⊢ ‖↑c • ↑x‖ < r
[PROOFSTEP]
simpa only [norm_smul, one_mul] using
mul_lt_mul' (mem_closedBall_zero_iff.1 c.2) (mem_ball_zero_iff.1 x.2) (norm_nonneg _) one_pos
[GOAL]
𝕜 : Type u_1
𝕜' : Type u_2
E : Type u_3
inst✝⁴ : NormedField 𝕜
inst✝³ : NormedField 𝕜'
inst✝² : SeminormedAddCommGroup E
inst✝¹ : NormedSpace 𝕜 E
inst✝ : NormedSpace 𝕜' E
r : ℝ
c : ↑(closedBall 0 1)
x : ↑(closedBall 0 r)
⊢ ‖↑c • ↑x‖ ≤ r
[PROOFSTEP]
simpa only [norm_smul, one_mul] using
mul_le_mul (mem_closedBall_zero_iff.1 c.2) (mem_closedBall_zero_iff.1 x.2) (norm_nonneg _) zero_le_one
[GOAL]
𝕜 : Type u_1
𝕜' : Type u_2
E : Type u_3
inst✝⁴ : NormedField 𝕜
inst✝³ : NormedField 𝕜'
inst✝² : SeminormedAddCommGroup E
inst✝¹ : NormedSpace 𝕜 E
inst✝ : NormedSpace 𝕜' E
r : ℝ
c : ↑(sphere 0 1)
x : ↑(sphere 0 r)
⊢ ‖↑c • ↑x‖ = r
[PROOFSTEP]
rw [norm_smul, mem_sphere_zero_iff_norm.1 c.coe_prop, mem_sphere_zero_iff_norm.1 x.coe_prop, one_mul]
[GOAL]
𝕜 : Type u_1
𝕜' : Type u_2
E : Type u_3
inst✝⁵ : NormedField 𝕜
inst✝⁴ : NormedField 𝕜'
inst✝³ : SeminormedAddCommGroup E
inst✝² : NormedSpace 𝕜 E
inst✝¹ : NormedSpace 𝕜' E
r✝ : ℝ
inst✝ : CharZero 𝕜
r : ℝ
hr : r ≠ 0
x : ↑(sphere 0 r)
h : x = -x
⊢ ↑x = -↑x
[PROOFSTEP]
conv_lhs => rw [h]
[GOAL]
𝕜 : Type u_1
𝕜' : Type u_2
E : Type u_3
inst✝⁵ : NormedField 𝕜
inst✝⁴ : NormedField 𝕜'
inst✝³ : SeminormedAddCommGroup E
inst✝² : NormedSpace 𝕜 E
inst✝¹ : NormedSpace 𝕜' E
r✝ : ℝ
inst✝ : CharZero 𝕜
r : ℝ
hr : r ≠ 0
x : ↑(sphere 0 r)
h : x = -x
| ↑x
[PROOFSTEP]
rw [h]
|
Formal statement is: lemmas open_real_lessThan = open_lessThan[where 'a=real] Informal statement is: The set of real numbers less than a given real number is open. |
Description For some unknown reason, if you set an ambient_generic to have a dynamic preset, it starts to loop, even if you have the "Is not looped" flag checked.
Some presets, like the "machine" types, will only loop the sound three times and then stop. It does not stop if you uncheck the flag.
The rest of the presets will loop the sound regardless of whether the loop flag is checked. Some of those presets will go out of control, making all kinds of noises, until the sound just stops.
Steps To Reproduce Put an ambient_generic in your map.
Select a dynamic preset from the list.
Additional Information Checked both on 1.35 and SVN. |
using Documenter, Stheno
DocMeta.setdocmeta!(
Stheno,
:DocTestSetup,
:(using Stheno, Random, LinearAlgebra);
recursive=true,
)
makedocs(
modules = [Stheno],
format = Documenter.HTML(),
sitename = "Stheno.jl",
pages = [
"Home" => "index.md",
"Getting Started" => "getting_started.md",
"GP API" => "gp_api.md",
"CompositeGP API" => "composite_gp_api.md",
"Input Types" => "input_types.md",
"Kernel Design" => "kernel_design.md",
"Internals" => "internals.md",
],
)
deploydocs(repo="github.com/willtebbutt/Stheno.jl.git")
|
Content under Creative Commons BY 4.0 license and code under MIT license. © Juan Gómez and Nicolás Guarín-Zapata 2020. This material is part of the course Computational Modeling (Modelación Computacional) in the Civil Engineering program at Universidad EAFIT.
# Numerical Integration
## Introduction
We briefly discuss the definition of a quadrature. We then concentrate on Gaussian quadratures, which, owing to their efficiency and ease of systematization, are widely used in engineering and physics. For these we cover the general development and the implementation in Python. The details of Gaussian quadrature and its implementation are discussed through an example.
**After completing this notebook you should be able to:**
* Identify a quadrature as a formula for evaluating integrals numerically.
* Identify the relationship between the function to be integrated and the type of scheme required for its evaluation.
* Evaluate integrals numerically using Gaussian quadratures.
## Quadratures
A quadrature is a formula for the numerical evaluation of integrals of the general form:
$$I=\int\limits_{V(\vec{x})} f(\vec{x}) \mathrm{d}V(\vec{x}) \approx\sum_{i=1}^{N} f(\vec{x}_i)w_i\, .$$
Note that this expression corresponds to evaluating the function $f(x)$ at $N$ points with coordinates $x_i$, multiplied by $N$ factors $w_i$. The factors are called **weights** or weighting factors, since they weight the contribution of each term $f(x_i)$ to $I$, and they have an interpretation similar to that of the differential $\mathrm{d}V$. Indeed, it is the weights that carry the appropriate units into the (approximate) integral.
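In code, a quadrature is just a weighted sum; a minimal generic sketch (the points and weights are placeholders to be supplied by a specific rule):
```python
import numpy as np

def quadrature(f, points, weights):
    """Approximate an integral as the weighted sum sum_i w_i * f(x_i)."""
    points = np.asarray(points)
    weights = np.asarray(weights)
    return np.dot(weights, f(points))
```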
### Example: the trapezoid rule
A quadrature we are all familiar with is the trapezoid rule, given by:
$$I=\int\limits_a^b f(x) \mathrm{d}x \approx \frac{h}{2}[f(a) + f(b)]\, ,$$
where $h = b - a$. In this expression we can recognize the weighting factors $w_1 = h/2$ and $w_2 = h/2$ and the evaluation points $x_1 = a$ and $x_2 = b$.
For example, consider the following integral:
$$I = \int\limits_{-1}^{+1} (x^3 + 4x^2 - 10) \mathrm{d}x \approx 1.0\cdot f(-1) + 1.0\cdot f(+1) = -12\, .$$
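This estimate is easy to check numerically; a quick sketch:
```python
f = lambda x: x**3 + 4*x**2 - 10
a, b = -1.0, 1.0
h = b - a
print(h/2*(f(a) + f(b)))  # -12.0, the trapezoid estimate
```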
### Gaussian quadratures
Some of the most powerful quadratures found in practice are the so-called [Gaussian](https://en.wikipedia.org/wiki/Gaussian_quadrature) quadratures. In these, the weighting factors $w_i$ and the evaluation points $x_i$ are selected so as to obtain the best approximation (minimum error) in the most effective way (minimum number of evaluation points). Since they are formulated through a fitting process over the $2 N$ parameters corresponding to the $N$ weights and the $N$ evaluation points, they integrate exactly polynomial functions of order at most $2 N - 1$.
The main disadvantage of Gaussian quadratures is that their evaluation points are specified in terms of coordinates in the fixed range between $x=-1$ and $x=+1$, which makes a preliminary transformation, or change of variable, necessary.
To avoid confusion in the notation, let us denote the space in which the coordinates of the Gaussian quadratures are given by the letter $r$, so that the change of variables reads:
$$I = \int\limits_{x=a}^{x=b} f(x) \mathrm{d}x \equiv \int\limits_{r=-1}^{r=+1}F(r) \mathrm{d}r\, .$$
Note that the change of variables involves:
* Relating $x$ and $r$, which we can write in general as $x = x(r)$ and $r = r(x)$.
* Expressing $f(x)$ in terms of the new variable according to $F(r) = f[x(r)]$.
* Expressing $\mathrm{d}x$ in terms of $\mathrm{d}r$.
### 2-point quadrature
Consider the case of a 2-point quadrature, that is, $N = 2$. In this case the weighting factors and evaluation points are given in the following table:
| $r$ | $w$ |
|---------------------|-------|
| $\frac{-\sqrt3}{3}$ | $1.0$ |
| $\frac{+\sqrt3}{3}$ | $1.0$ |
To perform the change of variables, let us assume that the relation between the independent variables $x$ and $r$ is linear, so that:
$$x(r) = \frac{1}{2}(a + b) + \frac{r}{2}(b - a) \equiv \frac{1}{2}(a + b) + \frac{h}{2}r\, ,$$
and therefore:
$$\mathrm{d}x=\frac{h}{2}\mathrm{d}r\, .$$
This produces the following equivalence between the integrals in the two spaces:
$$I = \int\limits_{x=a}^{x=b} f(x) \mathrm{d}x \equiv \int\limits_{r=-1}^{r=+1} f[ x(r)]\frac{h}{2} \mathrm{d}r\, .$$
Now, the integral formulated in the $r$ space is easily evaluated using the coordinates and weights from the table.
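A minimal sketch of this evaluation for the example integral used above (the exact value is $-52/3 \approx -17.33$, and the 2-point rule reproduces it because the integrand is a cubic):
```python
import numpy as np

f = lambda x: x**3 + 4*x**2 - 10
r = np.array([-np.sqrt(3)/3, np.sqrt(3)/3])  # Gauss points
w = np.array([1.0, 1.0])                     # Gauss weights
a, b = -1.0, 1.0
h = b - a
x = (a + b)/2 + h*r/2                        # change of variable
print(np.sum(w*f(x)*h/2))                    # -17.333...
```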
<div class="alert alert-warning">
Look up the weighting factors and integration points for a 4-point Gaussian quadrature.
</div>
## Python solution
The code blocks presented below implement the 2-point Gaussian quadrature to compute the integral:
$$
I=\int_{x = -1}^{x = +1}(x^3+4x^2-10)\,\mathrm{d}x
$$
<div class="alert alert-warning">
Add comments to each of the code blocks presented below.
</div>
```python
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
```
<div class="alert alert-warning">
In the space enclosed in quotes in each of the following subroutines, indicate the meaning of each of the parameters and its data type.
</div>
```python
def gpoints2():
"""Cuadratura de Gauss de 2 puntos"""
xw = np.zeros([2])
xp = np.zeros([2])
xw[:] = 1.0
xp[0] = -0.577350269189626
xp[1] = 0.577350269189626
return xw, xp
```
```python
def transform(a, b, r):
"""
"""
h = b-a
xr = (a + b)/2.0 + h*r/2.0
return xr, h
```
```python
def myfun(x):
"""
"""
fx = x**3 + 4*x**2 - 10
return fx
```
<div class="alert alert-warning">
Add comments to the integration code.
</div>
```python
ngpts = 2
a = -1.0
b = +1.0
integral = 0.0
xw, xp = gpoints2()
for i in range(0, ngpts):
xr, h = transform(a, b, xp[i])
fx = myfun(xr)
integral = integral + fx*h/2.0*xw[i]
print(integral)
```
-17.333333333333332
<div class="alert alert-warning">
**Questions:**
1. Modify the code above to compute the integral with a 3-point quadrature.
2. Repeat the computation of the integral above if the integration limits are now $a = 0$ and $b = 2$.
3. Using the Gaussian quadrature, compute the following integral:
$$I=\int\limits_{x=3.0}^{x=6.0} \mathrm{d}x$$
4. What would the generalization of the Gaussian quadrature to a quadrilateral look like?
</div>
## Glossary
**Quadrature:** A numerical integration formula composed of a set of evaluation points and weighting factors.
**Integration point:** An evaluation point of the function to be integrated in a numerical quadrature.
**Gauss point:** An integration point in a Gaussian quadrature.
**Weighting factor:** A constant that weights the contribution of the function to the integral when the function is evaluated at a given integration point.
## In-class activity
The figure shows the problem of a wedge with internal half-angle $\phi=\frac\pi4$ and side $\ell = 10.0$, subjected to tractions of magnitude $S = 1.0$ on its inclined surfaces.
<center>
(Figure: wedge of side $\ell$ with tractions $S$ applied on the inclined faces.)
</center>
Considering that the strain-displacement and stress-strain relations are given by:
\begin{align}
\varepsilon_{xx} &= \frac{\partial u}{\partial x}\, ,\\
\varepsilon_{yy} &= \frac{\partial v}{\partial y}\, ,\\
\varepsilon_{xy} &= \frac{1}{2}\left(\frac{\partial u}{\partial y}
+ \frac{\partial v}{\partial x}\right)\, ,\\
\sigma_{xx} &= \frac E{1 + \nu}\varepsilon_{xx} + \frac{\nu E}{(1+\nu)(1-2\nu)}(\varepsilon_{xx} + \varepsilon_{yy})\, ,\\
\sigma_{yy} &= \frac E{1+\nu}\varepsilon_{yy} + \frac{\nu E}{(1+\nu)(1-2\nu)}(\varepsilon_{xx} + \varepsilon_{yy})\, ,\\
\sigma_{xy} &= \frac{E}{2(1 + \nu)} \varepsilon_{xy}\, ,
\end{align}
you are asked to:
1. Compute the strain energy of the system, given by:
$$I = \frac{1}{2}\int\limits_S (\sigma_{xx}\varepsilon_{xx} + \sigma_{yy}\varepsilon_{yy}
+ 2\sigma_{xy}\varepsilon_{xy})\mathrm{d}S\, ,$$
assuming that the displacements at the left and right points are given by
$$\vec{u}_\text{izq} = -2.0 \hat{\imath}\, ,$$
and
$$\vec{u}_\text{der} = +2.0\hat{\imath}\, ,$$
while those at the top and bottom points correspond to
$$\vec{u}_\text{sup} = -2.0 \hat{\jmath}\, ,$$
and
$$\vec{u}_\text{inf}=+2.0\hat{\jmath}\, .$$
2. Verify that your result is correct by comparing it with the analytical solution of the problem.
## Notebook format
The following cell changes the format of the notebook.
```python
from IPython.core.display import HTML
def css_styling():
styles = open('./nb_style.css', 'r').read()
return HTML(styles)
css_styling()
```
<link href='http://fonts.googleapis.com/css?family=Fenix' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400' rel='stylesheet' type='text/css'>
<style>
/*
Template for Notebooks for Modelación computacional.
Based on Lorena Barba template available at:
https://github.com/barbagroup/AeroPython/blob/master/styles/custom.css
*/
/* Fonts */
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
/* Text */
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: 'Alegreya Sans', sans-serif;
}
h2 {
font-family: 'Fenix', serif;
}
h3{
font-family: 'Fenix', serif;
margin-top:12px;
margin-bottom: 3px;
}
h4{
font-family: 'Fenix', serif;
}
h5 {
font-family: 'Alegreya Sans', sans-serif;
}
div.text_cell_render{
font-family: 'Alegreya Sans',Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 135%;
font-size: 120%;
width:600px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro";
font-size: 90%;
}
/* .prompt{
display: None;
}*/
.text_cell_render h1 {
font-weight: 200;
font-size: 50pt;
line-height: 100%;
color:#CD2305;
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h5 {
font-weight: 300;
font-size: 16pt;
color: #CD2305;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
|
module Derive.Util.TyConInfo
import Language.Reflection.Elab
import Language.Reflection.Utils
import Derive.Kit
||| A representation of a type constructor in which all argument names
||| are made unique and they are separated into params and then
||| indices.
record TyConInfo where
constructor MkTyConInfo
||| Invariant: the names of the args have been made unique
args : List TyConArg
||| The type constructor, applied to its arguments
result : Raw
%name TyConInfo info
getParams : TyConInfo -> List (TTName, Raw)
getParams info = mapMaybe param (args info)
where param : TyConArg -> Maybe (TTName, Raw)
param (TyConParameter a) = Just (argName a, argTy a)
param _ = Nothing
getIndices : TyConInfo -> List (TTName, Raw)
getIndices info = mapMaybe index (args info)
where index : TyConArg -> Maybe (TTName, Raw)
index (TyConIndex a) = Just (argName a, argTy a)
index _ = Nothing
||| Rename a bound variable across a telescope
renameIn : (from, to : TTName) -> List (TTName, Raw) -> List (TTName, Raw)
renameIn from to [] = []
renameIn from to ((n, ty)::tele) =
(n, alphaRaw (rename from to) ty) ::
if from == n
then tele
else renameIn from to tele
||| Rename a free variable in type constructor info (used to implement
||| alpha-conversion for unique binder names, hence the odd name)
alphaTyConInfo : (TTName -> Maybe TTName) -> TyConInfo -> TyConInfo
alphaTyConInfo ren (MkTyConInfo [] res) = MkTyConInfo [] (alphaRaw ren res)
alphaTyConInfo ren (MkTyConInfo (tcArg::tcArgs) res) =
let MkTyConInfo tcArgs' res' = alphaTyConInfo (restrict ren (tyConArgName tcArg)) (MkTyConInfo tcArgs res)
tcArg' = updateTyConArgTy (alphaRaw ren) tcArg
in MkTyConInfo (tcArg'::tcArgs') res'
getTyConInfo' : List TyConArg -> Raw -> (TTName -> Maybe TTName) -> Elab TyConInfo
getTyConInfo' [] res _ = pure (MkTyConInfo [] res)
getTyConInfo' (tcArg::tcArgs) res ren =
do let n = tyConArgName tcArg
n' <- nameFrom n
let ren' = extend ren n n'
-- n' is globally unique so we don't worry about scope
next <- getTyConInfo' tcArgs (RApp res (Var n')) ren'
pure $ alphaTyConInfo ren' $
record {args = setTyConArgName tcArg n' :: args next} next
getTyConInfo : List TyConArg -> Raw -> Elab TyConInfo
getTyConInfo args res = getTyConInfo' args res (const Nothing)
||| Convert the parameter names in a constructor to their global unique versions
processCtorArgs : TyConInfo -> (TTName, List CtorArg, Raw) -> Elab (TTName, List CtorArg, Raw)
processCtorArgs info (cn, cargs, resTy) =
do (args', ty') <- convert' (const Nothing) cargs resTy info
pure (cn, args', ty')
where
||| Find the name that was assigned to a given parameter
||| by comparing positions in the TyConInfo
findParam : TTName -> List Raw -> List TyConArg -> Elab TTName
findParam paramN (Var n :: args) (TyConParameter a :: tcArgs) =
if n == paramN
then pure (argName a)
else findParam paramN args tcArgs
findParam paramN (_ :: args) (_ :: tcArgs) =
findParam paramN args tcArgs
findParam paramN _ _ = fail [ TextPart "Parameter"
, NamePart paramN
, TextPart "not found."
]
convert' : (TTName -> Maybe TTName) ->
List CtorArg -> Raw ->
TyConInfo -> Elab (List CtorArg, Raw)
convert' subst [] ty info = pure ([], alphaRaw subst ty)
convert' subst (CtorField a :: args) ty info =
do n' <- nameFrom (argName a)
let a' = record {
argName = n'
, argTy = alphaRaw subst (argTy a)
} a
let subst' = extend subst (argName a) n'
(args', ty') <- convert' subst' args ty info
pure (CtorField a' :: args', ty')
convert' subst (CtorParameter a :: ctorArgs) ty info =
do n' <- findParam (argName a) (snd (unApply ty)) (args info)
let a' = record {
argName = n'
, argTy = alphaRaw subst (argTy a)
} a
let subst' = extend subst (argName a) n'
(args', ty') <- convert' subst' ctorArgs ty info
pure (CtorParameter a' :: args', ty')
|
(* Title: HOL/HOLCF/IOA/Abstraction.thy
Author: Olaf Müller
*)
section \<open>Abstraction Theory -- tailored for I/O automata\<close>
theory Abstraction
imports LiveIOA
begin
default_sort type
definition cex_abs :: "('s1 \<Rightarrow> 's2) \<Rightarrow> ('a, 's1) execution \<Rightarrow> ('a, 's2) execution"
where "cex_abs f ex = (f (fst ex), Map (\<lambda>(a, t). (a, f t)) \<cdot> (snd ex))"
text \<open>equals cex_abs on Sequences -- after ex2seq application\<close>
definition cex_absSeq ::
"('s1 \<Rightarrow> 's2) \<Rightarrow> ('a option, 's1) transition Seq \<Rightarrow> ('a option, 's2)transition Seq"
where "cex_absSeq f = (\<lambda>s. Map (\<lambda>(s, a, t). (f s, a, f t)) \<cdot> s)"
definition is_abstraction :: "('s1 \<Rightarrow> 's2) \<Rightarrow> ('a, 's1) ioa \<Rightarrow> ('a, 's2) ioa \<Rightarrow> bool"
where "is_abstraction f C A \<longleftrightarrow>
((\<forall>s \<in> starts_of C. f s \<in> starts_of A) \<and>
(\<forall>s t a. reachable C s \<and> s \<midarrow>a\<midarrow>C\<rightarrow> t \<longrightarrow> f s \<midarrow>a\<midarrow>A\<rightarrow> f t))"
definition weakeningIOA :: "('a, 's2) ioa \<Rightarrow> ('a, 's1) ioa \<Rightarrow> ('s1 \<Rightarrow> 's2) \<Rightarrow> bool"
where "weakeningIOA A C h \<longleftrightarrow> (\<forall>ex. ex \<in> executions C \<longrightarrow> cex_abs h ex \<in> executions A)"
definition temp_strengthening :: "('a, 's2) ioa_temp \<Rightarrow> ('a, 's1) ioa_temp \<Rightarrow> ('s1 \<Rightarrow> 's2) \<Rightarrow> bool"
where "temp_strengthening Q P h \<longleftrightarrow> (\<forall>ex. (cex_abs h ex \<TTurnstile> Q) \<longrightarrow> (ex \<TTurnstile> P))"
definition temp_weakening :: "('a, 's2) ioa_temp \<Rightarrow> ('a, 's1) ioa_temp \<Rightarrow> ('s1 \<Rightarrow> 's2) \<Rightarrow> bool"
where "temp_weakening Q P h \<longleftrightarrow> temp_strengthening (\<^bold>\<not> Q) (\<^bold>\<not> P) h"
definition state_strengthening :: "('a, 's2) step_pred \<Rightarrow> ('a, 's1) step_pred \<Rightarrow> ('s1 \<Rightarrow> 's2) \<Rightarrow> bool"
where "state_strengthening Q P h \<longleftrightarrow> (\<forall>s t a. Q (h s, a, h t) \<longrightarrow> P (s, a, t))"
definition state_weakening :: "('a, 's2) step_pred \<Rightarrow> ('a, 's1) step_pred \<Rightarrow> ('s1 \<Rightarrow> 's2) \<Rightarrow> bool"
where "state_weakening Q P h \<longleftrightarrow> state_strengthening (\<^bold>\<not> Q) (\<^bold>\<not> P) h"
definition is_live_abstraction :: "('s1 \<Rightarrow> 's2) \<Rightarrow> ('a, 's1) live_ioa \<Rightarrow> ('a, 's2) live_ioa \<Rightarrow> bool"
where "is_live_abstraction h CL AM \<longleftrightarrow>
is_abstraction h (fst CL) (fst AM) \<and> temp_weakening (snd AM) (snd CL) h"
(* thm about ex2seq which is not provable by induction as ex2seq is not continuous *)
axiomatization where
ex2seq_abs_cex: "ex2seq (cex_abs h ex) = cex_absSeq h (ex2seq ex)"
(* analog to the proved thm strength_Box - proof skipped as trivial *)
axiomatization where
weak_Box: "temp_weakening P Q h \<Longrightarrow> temp_weakening (\<box>P) (\<box>Q) h"
(* analog to the proved thm strength_Next - proof skipped as trivial *)
axiomatization where
weak_Next: "temp_weakening P Q h \<Longrightarrow> temp_weakening (\<circle>P) (\<circle>Q) h"
subsection \<open>\<open>cex_abs\<close>\<close>
lemma cex_abs_UU [simp]: "cex_abs f (s, UU) = (f s, UU)"
by (simp add: cex_abs_def)
lemma cex_abs_nil [simp]: "cex_abs f (s,nil) = (f s, nil)"
by (simp add: cex_abs_def)
lemma cex_abs_cons [simp]:
"cex_abs f (s, (a, t) \<leadsto> ex) = (f s, (a, f t) \<leadsto> (snd (cex_abs f (t, ex))))"
by (simp add: cex_abs_def)
subsection \<open>Lemmas\<close>
lemma temp_weakening_def2: "temp_weakening Q P h \<longleftrightarrow> (\<forall>ex. (ex \<TTurnstile> P) \<longrightarrow> (cex_abs h ex \<TTurnstile> Q))"
apply (simp add: temp_weakening_def temp_strengthening_def NOT_def temp_sat_def satisfies_def)
apply auto
done
lemma state_weakening_def2: "state_weakening Q P h \<longleftrightarrow> (\<forall>s t a. P (s, a, t) \<longrightarrow> Q (h s, a, h t))"
apply (simp add: state_weakening_def state_strengthening_def NOT_def)
apply auto
done
subsection \<open>Abstraction Rules for Properties\<close>
lemma exec_frag_abstraction [rule_format]:
"is_abstraction h C A \<Longrightarrow>
\<forall>s. reachable C s \<and> is_exec_frag C (s, xs) \<longrightarrow> is_exec_frag A (cex_abs h (s, xs))"
apply (simp add: cex_abs_def)
apply (pair_induct xs simp: is_exec_frag_def)
txt \<open>main case\<close>
apply (auto dest: reachable.reachable_n simp add: is_abstraction_def)
done
lemma abs_is_weakening: "is_abstraction h C A \<Longrightarrow> weakeningIOA A C h"
apply (simp add: weakeningIOA_def)
apply auto
apply (simp add: executions_def)
txt \<open>start state\<close>
apply (rule conjI)
apply (simp add: is_abstraction_def cex_abs_def)
txt \<open>is-execution-fragment\<close>
apply (erule exec_frag_abstraction)
apply (simp add: reachable.reachable_0)
done
lemma AbsRuleT1:
"is_abstraction h C A \<Longrightarrow> validIOA A Q \<Longrightarrow> temp_strengthening Q P h \<Longrightarrow> validIOA C P"
apply (drule abs_is_weakening)
apply (simp add: weakeningIOA_def validIOA_def temp_strengthening_def)
apply (auto simp add: split_paired_all)
done
lemma AbsRuleT2:
"is_live_abstraction h (C, L) (A, M) \<Longrightarrow> validLIOA (A, M) Q \<Longrightarrow> temp_strengthening Q P h
\<Longrightarrow> validLIOA (C, L) P"
apply (unfold is_live_abstraction_def)
apply auto
apply (drule abs_is_weakening)
apply (simp add: weakeningIOA_def temp_weakening_def2 validLIOA_def validIOA_def temp_strengthening_def)
apply (auto simp add: split_paired_all)
done
lemma AbsRuleTImprove:
"is_live_abstraction h (C, L) (A, M) \<Longrightarrow> validLIOA (A,M) (H1 \<^bold>\<longrightarrow> Q) \<Longrightarrow>
temp_strengthening Q P h \<Longrightarrow> temp_weakening H1 H2 h \<Longrightarrow> validLIOA (C, L) H2 \<Longrightarrow>
validLIOA (C, L) P"
apply (unfold is_live_abstraction_def)
apply auto
apply (drule abs_is_weakening)
apply (simp add: weakeningIOA_def temp_weakening_def2 validLIOA_def validIOA_def temp_strengthening_def)
apply (auto simp add: split_paired_all)
done
subsection \<open>Correctness of safe abstraction\<close>
lemma abstraction_is_ref_map: "is_abstraction h C A \<Longrightarrow> is_ref_map h C A"
apply (auto simp: is_abstraction_def is_ref_map_def)
apply (rule_tac x = "(a,h t) \<leadsto>nil" in exI)
apply (simp add: move_def)
done
lemma abs_safety: "inp C = inp A \<Longrightarrow> out C = out A \<Longrightarrow> is_abstraction h C A \<Longrightarrow> C =<| A"
apply (simp add: ioa_implements_def)
apply (rule trace_inclusion)
apply (simp (no_asm) add: externals_def)
apply (auto)[1]
apply (erule abstraction_is_ref_map)
done
subsection \<open>Correctness of live abstraction\<close>
text \<open>
Reduces to \<open>Filter (Map fst x) = Filter (Map fst (Map (\<lambda>(a, t). (a, x)) x)\<close>,
  that is, to a special Map lemma.
\<close>
lemma traces_coincide_abs:
"ext C = ext A \<Longrightarrow> mk_trace C \<cdot> xs = mk_trace A \<cdot> (snd (cex_abs f (s, xs)))"
apply (unfold cex_abs_def mk_trace_def filter_act_def)
apply simp
apply (pair_induct xs)
done
text \<open>
Does not work with \<open>abstraction_is_ref_map\<close> as proof of \<open>abs_safety\<close>, because
\<open>is_live_abstraction\<close> includes \<open>temp_strengthening\<close> which is necessarily based
on \<open>cex_abs\<close> and not on \<open>corresp_ex\<close>. Thus, the proof is redone in a more specific
way for \<open>cex_abs\<close>.
\<close>
lemma abs_liveness:
"inp C = inp A \<Longrightarrow> out C = out A \<Longrightarrow> is_live_abstraction h (C, M) (A, L) \<Longrightarrow>
live_implements (C, M) (A, L)"
apply (simp add: is_live_abstraction_def live_implements_def livetraces_def liveexecutions_def)
apply auto
apply (rule_tac x = "cex_abs h ex" in exI)
apply auto
text \<open>Traces coincide\<close>
apply (pair ex)
apply (rule traces_coincide_abs)
apply (simp (no_asm) add: externals_def)
apply (auto)[1]
text \<open>\<open>cex_abs\<close> is execution\<close>
apply (pair ex)
apply (simp add: executions_def)
text \<open>start state\<close>
apply (rule conjI)
apply (simp add: is_abstraction_def cex_abs_def)
text \<open>\<open>is_execution_fragment\<close>\<close>
apply (erule exec_frag_abstraction)
apply (simp add: reachable.reachable_0)
text \<open>Liveness\<close>
apply (simp add: temp_weakening_def2)
apply (pair ex)
done
subsection \<open>Abstraction Rules for Automata\<close>
lemma AbsRuleA1:
"inp C = inp A \<Longrightarrow> out C = out A \<Longrightarrow> inp Q = inp P \<Longrightarrow> out Q = out P \<Longrightarrow>
is_abstraction h1 C A \<Longrightarrow> A =<| Q \<Longrightarrow> is_abstraction h2 Q P \<Longrightarrow> C =<| P"
apply (drule abs_safety)
apply assumption+
apply (drule abs_safety)
apply assumption+
apply (erule implements_trans)
apply (erule implements_trans)
apply assumption
done
lemma AbsRuleA2:
"inp C = inp A \<Longrightarrow> out C = out A \<Longrightarrow> inp Q = inp P \<Longrightarrow> out Q = out P \<Longrightarrow>
is_live_abstraction h1 (C, LC) (A, LA) \<Longrightarrow> live_implements (A, LA) (Q, LQ) \<Longrightarrow>
is_live_abstraction h2 (Q, LQ) (P, LP) \<Longrightarrow> live_implements (C, LC) (P, LP)"
apply (drule abs_liveness)
apply assumption+
apply (drule abs_liveness)
apply assumption+
apply (erule live_implements_trans)
apply (erule live_implements_trans)
apply assumption
done
declare split_paired_All [simp del]
subsection \<open>Localizing Temporal Strengthenings and Weakenings\<close>
lemma strength_AND:
"temp_strengthening P1 Q1 h \<Longrightarrow> temp_strengthening P2 Q2 h \<Longrightarrow>
temp_strengthening (P1 \<^bold>\<and> P2) (Q1 \<^bold>\<and> Q2) h"
by (auto simp: temp_strengthening_def)
lemma strength_OR:
"temp_strengthening P1 Q1 h \<Longrightarrow> temp_strengthening P2 Q2 h \<Longrightarrow>
temp_strengthening (P1 \<^bold>\<or> P2) (Q1 \<^bold>\<or> Q2) h"
by (auto simp: temp_strengthening_def)
lemma strength_NOT: "temp_weakening P Q h \<Longrightarrow> temp_strengthening (\<^bold>\<not> P) (\<^bold>\<not> Q) h"
by (auto simp: temp_weakening_def2 temp_strengthening_def)
lemma strength_IMPLIES:
"temp_weakening P1 Q1 h \<Longrightarrow> temp_strengthening P2 Q2 h \<Longrightarrow>
temp_strengthening (P1 \<^bold>\<longrightarrow> P2) (Q1 \<^bold>\<longrightarrow> Q2) h"
by (simp add: temp_weakening_def2 temp_strengthening_def)
lemma weak_AND:
"temp_weakening P1 Q1 h \<Longrightarrow> temp_weakening P2 Q2 h \<Longrightarrow>
temp_weakening (P1 \<^bold>\<and> P2) (Q1 \<^bold>\<and> Q2) h"
by (simp add: temp_weakening_def2)
lemma weak_OR:
"temp_weakening P1 Q1 h \<Longrightarrow> temp_weakening P2 Q2 h \<Longrightarrow>
temp_weakening (P1 \<^bold>\<or> P2) (Q1 \<^bold>\<or> Q2) h"
by (simp add: temp_weakening_def2)
lemma weak_NOT:
"temp_strengthening P Q h \<Longrightarrow> temp_weakening (\<^bold>\<not> P) (\<^bold>\<not> Q) h"
by (auto simp add: temp_weakening_def2 temp_strengthening_def)
lemma weak_IMPLIES:
"temp_strengthening P1 Q1 h \<Longrightarrow> temp_weakening P2 Q2 h \<Longrightarrow>
temp_weakening (P1 \<^bold>\<longrightarrow> P2) (Q1 \<^bold>\<longrightarrow> Q2) h"
by (simp add: temp_weakening_def2 temp_strengthening_def)
subsubsection \<open>Box\<close>
(* FIXME: should be same as nil_is_Conc2 when all nils are turned to right side! *)
lemma UU_is_Conc: "(UU = x @@ y) = (((x::'a Seq)= UU) | (x = nil \<and> y = UU))"
by (Seq_case_simp x)
lemma ex2seqConc [rule_format]:
"Finite s1 \<longrightarrow> (\<forall>ex. (s \<noteq> nil \<and> s \<noteq> UU \<and> ex2seq ex = s1 @@ s) \<longrightarrow> (\<exists>ex'. s = ex2seq ex'))"
apply (rule impI)
apply Seq_Finite_induct
apply blast
text \<open>main case\<close>
apply clarify
apply (pair ex)
apply (Seq_case_simp x2)
text \<open>\<open>UU\<close> case\<close>
apply (simp add: nil_is_Conc)
text \<open>\<open>nil\<close> case\<close>
apply (simp add: nil_is_Conc)
text \<open>cons case\<close>
apply (pair aa)
done
(* important property of ex2seq: can be shifted, as it is defined "pointwise" *)
lemma ex2seq_tsuffix: "tsuffix s (ex2seq ex) \<Longrightarrow> \<exists>ex'. s = (ex2seq ex')"
apply (unfold tsuffix_def suffix_def)
apply auto
apply (drule ex2seqConc)
apply auto
done
(* important property of cex_absSeq: as it is a one-to-one correspondence,
properties carry over *)
lemma cex_absSeq_tsuffix: "tsuffix s t \<Longrightarrow> tsuffix (cex_absSeq h s) (cex_absSeq h t)"
apply (unfold tsuffix_def suffix_def cex_absSeq_def)
apply auto
apply (simp add: Mapnil)
apply (simp add: MapUU)
apply (rule_tac x = "Map (% (s,a,t) . (h s,a, h t))\<cdot>s1" in exI)
apply (simp add: Map2Finite MapConc)
done
lemma strength_Box: "temp_strengthening P Q h \<Longrightarrow> temp_strengthening (\<box>P) (\<box>Q) h"
apply (unfold temp_strengthening_def state_strengthening_def temp_sat_def satisfies_def Box_def)
apply clarify
apply (frule ex2seq_tsuffix)
apply clarify
apply (drule_tac h = "h" in cex_absSeq_tsuffix)
apply (simp add: ex2seq_abs_cex)
done
subsubsection \<open>Init\<close>
lemma strength_Init:
"state_strengthening P Q h \<Longrightarrow> temp_strengthening (Init P) (Init Q) h"
apply (unfold temp_strengthening_def state_strengthening_def
temp_sat_def satisfies_def Init_def unlift_def)
apply auto
apply (pair ex)
apply (Seq_case_simp x2)
apply (pair a)
done
subsubsection \<open>Next\<close>
lemma TL_ex2seq_UU: "TL \<cdot> (ex2seq (cex_abs h ex)) = UU \<longleftrightarrow> TL \<cdot> (ex2seq ex) = UU"
apply (pair ex)
apply (Seq_case_simp x2)
apply (pair a)
apply (Seq_case_simp s)
apply (pair a)
done
lemma TL_ex2seq_nil: "TL \<cdot> (ex2seq (cex_abs h ex)) = nil \<longleftrightarrow> TL \<cdot> (ex2seq ex) = nil"
apply (pair ex)
apply (Seq_case_simp x2)
apply (pair a)
apply (Seq_case_simp s)
apply (pair a)
done
(* important property of cex_absSeq: as it is a one-to-one correspondence,
properties carry over *)
lemma cex_absSeq_TL: "cex_absSeq h (TL \<cdot> s) = TL \<cdot> (cex_absSeq h s)"
by (simp add: MapTL cex_absSeq_def)
(* important property of ex2seq: can be shifted, as it is defined "pointwise" *)
lemma TLex2seq: "snd ex \<noteq> UU \<Longrightarrow> snd ex \<noteq> nil \<Longrightarrow> \<exists>ex'. TL\<cdot>(ex2seq ex) = ex2seq ex'"
apply (pair ex)
apply (Seq_case_simp x2)
apply (pair a)
apply auto
done
lemma ex2seqnilTL: "TL \<cdot> (ex2seq ex) \<noteq> nil \<longleftrightarrow> snd ex \<noteq> nil \<and> snd ex \<noteq> UU"
apply (pair ex)
apply (Seq_case_simp x2)
apply (pair a)
apply (Seq_case_simp s)
apply (pair a)
done
lemma strength_Next: "temp_strengthening P Q h \<Longrightarrow> temp_strengthening (\<circle>P) (\<circle>Q) h"
apply (unfold temp_strengthening_def state_strengthening_def temp_sat_def satisfies_def Next_def)
apply simp
apply auto
apply (simp add: TL_ex2seq_nil TL_ex2seq_UU)
apply (simp add: TL_ex2seq_nil TL_ex2seq_UU)
apply (simp add: TL_ex2seq_nil TL_ex2seq_UU)
apply (simp add: TL_ex2seq_nil TL_ex2seq_UU)
text \<open>cons case\<close>
apply (simp add: TL_ex2seq_nil TL_ex2seq_UU ex2seq_abs_cex cex_absSeq_TL [symmetric] ex2seqnilTL)
apply (erule conjE)
apply (drule TLex2seq)
apply assumption
apply auto
done
text \<open>Localizing Temporal Weakenings - 2\<close>
lemma weak_Init: "state_weakening P Q h \<Longrightarrow> temp_weakening (Init P) (Init Q) h"
apply (simp add: temp_weakening_def2 state_weakening_def2
temp_sat_def satisfies_def Init_def unlift_def)
apply auto
apply (pair ex)
apply (Seq_case_simp x2)
apply (pair a)
done
text \<open>Localizing Temporal Strengthenings - 3\<close>
lemma strength_Diamond: "temp_strengthening P Q h \<Longrightarrow> temp_strengthening (\<diamond>P) (\<diamond>Q) h"
apply (unfold Diamond_def)
apply (rule strength_NOT)
apply (rule weak_Box)
apply (erule weak_NOT)
done
lemma strength_Leadsto:
"temp_weakening P1 P2 h \<Longrightarrow> temp_strengthening Q1 Q2 h \<Longrightarrow>
temp_strengthening (P1 \<leadsto> Q1) (P2 \<leadsto> Q2) h"
apply (unfold Leadsto_def)
apply (rule strength_Box)
apply (erule strength_IMPLIES)
apply (erule strength_Diamond)
done
text \<open>Localizing Temporal Weakenings - 3\<close>
lemma weak_Diamond: "temp_weakening P Q h \<Longrightarrow> temp_weakening (\<diamond>P) (\<diamond>Q) h"
apply (unfold Diamond_def)
apply (rule weak_NOT)
apply (rule strength_Box)
apply (erule strength_NOT)
done
lemma weak_Leadsto:
"temp_strengthening P1 P2 h \<Longrightarrow> temp_weakening Q1 Q2 h \<Longrightarrow>
temp_weakening (P1 \<leadsto> Q1) (P2 \<leadsto> Q2) h"
apply (unfold Leadsto_def)
apply (rule weak_Box)
apply (erule weak_IMPLIES)
apply (erule weak_Diamond)
done
lemma weak_WF:
"(\<And>s. Enabled A acts (h s) \<Longrightarrow> Enabled C acts s)
\<Longrightarrow> temp_weakening (WF A acts) (WF C acts) h"
apply (unfold WF_def)
apply (rule weak_IMPLIES)
apply (rule strength_Diamond)
apply (rule strength_Box)
apply (rule strength_Init)
apply (rule_tac [2] weak_Box)
apply (rule_tac [2] weak_Diamond)
apply (rule_tac [2] weak_Init)
apply (auto simp add: state_weakening_def state_strengthening_def
xt2_def plift_def option_lift_def NOT_def)
done
lemma weak_SF:
"(\<And>s. Enabled A acts (h s) \<Longrightarrow> Enabled C acts s)
\<Longrightarrow> temp_weakening (SF A acts) (SF C acts) h"
apply (unfold SF_def)
apply (rule weak_IMPLIES)
apply (rule strength_Box)
apply (rule strength_Diamond)
apply (rule strength_Init)
apply (rule_tac [2] weak_Box)
apply (rule_tac [2] weak_Diamond)
apply (rule_tac [2] weak_Init)
apply (auto simp add: state_weakening_def state_strengthening_def
xt2_def plift_def option_lift_def NOT_def)
done
lemmas weak_strength_lemmas =
weak_OR weak_AND weak_NOT weak_IMPLIES weak_Box weak_Next weak_Init
weak_Diamond weak_Leadsto strength_OR strength_AND strength_NOT
strength_IMPLIES strength_Box strength_Next strength_Init
strength_Diamond strength_Leadsto weak_WF weak_SF
ML \<open>
fun abstraction_tac ctxt =
SELECT_GOAL (auto_tac
(ctxt addSIs @{thms weak_strength_lemmas}
addsimps [@{thm state_strengthening_def}, @{thm state_weakening_def}]))
\<close>
method_setup abstraction = \<open>Scan.succeed (SIMPLE_METHOD' o abstraction_tac)\<close>
end
|
-- --------------------------------------------------------------- [ BTree.idr ]
-- Module : BTree.idr
-- Copyright : (c) 2015,2016 See CONTRIBUTORS.md
-- License : see LICENSE
-- --------------------------------------------------------------------- [ EOH ]
||| Implementation of a Binary Tree using an AVL Binary Search Tree.
|||
||| The underlying AVL Tree is a Key-Value Tree, this library just
||| wraps this up as a simple Binary tree for values i.e. keys.
module Data.AVL.BTree
import Data.AVL
%access export
-- ------------------------------------------------------------- [ Definitions ]
||| A Binary Search Tree.
|||
||| @ty The type of the elements in the tree.
data BTree : (ty : Type) -> Type
where
MkTree : {a : Type } -> AVLTree n a Unit -> BTree a
-- --------------------------------------------------------------------- [ API ]
||| Return an empty BTree.
empty : Ord a => BTree a
empty = MkTree (Element Empty AVLEmpty)
||| Insert an element into the Tree.
insert : (Ord a) => a -> BTree a -> BTree a
insert a (MkTree t) = MkTree (snd $ AVL.API.insert a () t)
||| Does the tree contain the given element?
contains : (Ord a) => a -> BTree a -> Bool
contains a (MkTree t) = isJust (AVL.API.lookup a t)
||| How many nodes are in the tree?
size : BTree a -> Nat
size (MkTree t) = AVL.API.size t
||| Construct an ordered list containing the elements of the tree.
toList : BTree a -> List a
toList (MkTree t) = map fst $ AVL.API.toList t
||| Construct a tree from a list of elements.
fromList : (Ord a) => List a -> BTree a
fromList xs = (foldl (\t,k => BTree.insert k t) empty xs)
-- --------------------------------------------------------------- [ Instances ]
Foldable BTree where
foldr f i (MkTree t) = foldr (\x,_,p => f x p) i t
Eq a => Eq (BTree a) where
(==) (MkTree (Element t _)) (MkTree (Element t' _)) = t == t'
Show a => Show (BTree a) where
show s = "{ " ++ (unwords . intersperse "," . map show . BTree.toList $ s) ++ " }"
namespace Quantifier
data All : (predicate : type -> Type) -> (set : BTree type) -> Type where
Satisfies : (prf : AllKeys p tree) -> All p (MkTree tree)
namespace Predicate
data Elem : (value : type) -> (tree : BTree type) -> Type where
IsElem : (prf : HasKey value tree)
-> Elem value (MkTree tree)
private
elemNotInTree : (prfIsNotElem : HasKey value tree -> Void) -> Elem value (MkTree tree) -> Void
elemNotInTree prfIsNotElem (IsElem prf) = prfIsNotElem prf
isElem : DecEq type
=> (value : type)
-> (tree : BTree type)
-> Dec (Elem value tree)
isElem value (MkTree tree) with (isKey value tree)
isElem value (MkTree tree) | (Yes prf) = Yes (IsElem prf)
isElem value (MkTree tree) | (No prfIsNotElem) = No (elemNotInTree prfIsNotElem)
-- --------------------------------------------------------------------- [ EOF ]
|
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import scipy
from scipy.integrate import odeint
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.optimize import fsolve
from scipy.optimize import least_squares
from scipy.interpolate import interp1d
```
Go slow, write the plan down, and check energy
```python
mpl.rc('figure',dpi=250)
mpl.rc('text',usetex=True)
```
```python
def add_labels(xlabel, ylabel, title):
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.title(title)
plt.legend()
```
```python
def generate_samples(func, bounds, N):
"""
    Assignment 5: function for sampling from a custom 1d distribution
    via rejection sampling.
    Input: func - any non-negative function on [a, b]
           bounds - tuple (a, b)
    Output: 1d array of accepted samples (length <= N)
"""
a, b = bounds
    x = np.linspace(a, b)    # coarse grid used to estimate the maximum of func
    f_max = max(func(x))     # envelope height for the rejection step
x_rand = np.random.uniform(a, b, size=N)
y_rand = np.random.uniform(0, f_max, size=N)
samples = x_rand[y_rand < func(x_rand)]
return samples
```
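A quick usage sketch, with a hypothetical triangular target density:
```python
# Sample from f(x) = x on [0, 1]; rejection keeps only accepted draws,
# so fewer than N samples come back.
samples = generate_samples(lambda x: x, (0, 1), 100000)
plt.hist(samples, bins=50, density=True)
```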
```python
def spherical_to_xyz(r, theta, phi):
"Generate a vector in xyz coordinates from r, theta, phi"
return np.array([r*np.sin(theta)*np.cos(phi), r*np.sin(theta)*np.sin(phi), r*np.cos(theta)])
```
Solves a problem of the form
\begin{equation}
\frac{dy}{dt} = f(y, t)
\end{equation}
```python
def main(y, t):
"Main thing to integrate"
position = y[0:3]
velocity = y[3:6]
dydt = np.empty(6)
dydt[0:3] = velocity
dydt[3:6] = np.array([0, 0, -9.81]) #force goes here...
# Don't forget to divide by m
return dydt
m = 1
theta = np.pi/3
t = np.linspace(0, 1)
y0 = np.array([0, 0, 0, 5*np.cos(theta), 0, 5*np.sin(theta)])
```
```python
y = odeint(main, y0, t)
plt.plot(t, y[:, 2])
```
```python
def df(f, x):
    h = 0.0001
    return (f(x + h) - f(x))/h   # forward difference
# Whenever possible, prefer an analytic derivative
# (e.g. np.polyder for polynomials)
```
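A sanity check of the forward difference (sketch):
```python
print(df(np.sin, 0.0))  # ~1.0, since cos(0) = 1
print(df(np.exp, 0.0))  # ~1.0, since exp'(0) = 1
```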
```python
# If the number of counts is n, then the uncertainty is sqrt(n) (Poisson distribution)
# The error of the mean is always stdev/sqrt(n) by the CLT
```
## Curve Fitting Syntax
To use `curve_fit`, pass in a model function whose first argument is the independent variable, followed by the fit parameters.
```python
import numpy as np
from scipy.optimize import curve_fit
x = np.array([1, 2, 3, 9])
y = np.array([1, 4, 1, 3])
def fit_func(x, a, b):
return a*x + b
popt = curve_fit(fit_func, x, y)
popt[0]
plt.scatter(x, y)
x_cont = np.linspace(1, 9)
plt.plot(x_cont, fit_func(x_cont, *popt[0]))
```
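`curve_fit` actually returns a `(popt, pcov)` pair; the diagonal of the covariance matrix gives the parameter variances:
```python
popt, pcov = curve_fit(fit_func, x, y)
perr = np.sqrt(np.diag(pcov))  # one-sigma uncertainties on a and b
print(popt, perr)
```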
## Minimization Syntax
To use `minimize`, you must pass in a loss function. Note that we already plug in the data via the `args` keyword.
```python
import numpy as np
from scipy.optimize import minimize
x = np.array([1, 2, 3, 9])
y = np.array([1, 4, 1, 3])
def loss_func(params, x, y):
a, b = params
fit_func = lambda x: a*x + b
return np.sum((fit_func(x) - y)**2)
```
```python
params0 = np.array([1, 1])
popt_minimization = minimize(loss_func, params0, args=(x, y))
```
```python
plt.scatter(x, y)
plt.plot(x_cont, fit_func(x_cont, *popt_minimization.x))
```
## Least Squares Syntax
```python
import numpy as np
from scipy.optimize import least_squares
def res_func(params, x, y):
a, b = params
fit_func = lambda x: a*x + b
return (fit_func(x) - y)
params0 = np.array([1, 1])
popt_minimization = least_squares(res_func, params0, args=(x, y))
popt_minimization
plt.scatter(x, y)
plt.plot(x_cont, fit_func(x_cont, *popt_minimization.x))
```
## Polyfit Syntax
```python
import numpy as np
x = np.array([1, 2, 3, 9])
y = np.array([1, 4, 1, 3])
plt.scatter(x, y)
x_cont = np.linspace(1, 9)
popt = np.polyfit(x, y, 1)
popt
plt.plot(x_cont, np.poly1d(popt)(x_cont))
# Poly fit is the easiest to use
```
## Testing
```python
a = np.arange(4, 10)
wh = np.where(a > 7)
a[wh]
```
array([8, 9])
```python
a[a > 7]
```
array([8, 9])
```python
b = np.arange(10, 30).reshape(-1, 4)
b
```
array([[10, 11, 12, 13],
[14, 15, 16, 17],
[18, 19, 20, 21],
[22, 23, 24, 25],
[26, 27, 28, 29]])
```python
wh = np.where(b > 15)
b[wh[0], wh[1]]
```
array([16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
```python
(b > 15)
```
array([[False, False, False, False],
[False, False, True, True],
[ True, True, True, True],
[ True, True, True, True],
[ True, True, True, True]])
```python
(b < 17) & (b > 15)
# & is an element-wise ufunc; plain `and` would try to collapse
# the whole array into a single bool and fail
```
array([[False, False, False, False],
[False, False, True, False],
[False, False, False, False],
[False, False, False, False],
[False, False, False, False]])
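Using Python's `and` instead of `&` fails, because a whole array has no single truth value:
```python
try:
    (b < 17) and (b > 15)
except ValueError as e:
    print(e)   # "The truth value of an array ... is ambiguous"
```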
```python
xdata = np.linspace(1, 10, 10)
ydata = np.exp(xdata)
x_cont = np.linspace(1, 10)
plt.plot(x_cont, interp1d(xdata, ydata, kind='linear')(x_cont))
plt.plot(x_cont, interp1d(xdata, ydata, kind='cubic')(x_cont))
# sometimes need fill_value='extrapolate' if minimize is complaining about out-of-range x
```
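To evaluate outside the data range, pass `fill_value='extrapolate'`:
```python
f = interp1d(xdata, ydata, kind='cubic', fill_value='extrapolate', bounds_error=False)
f(11.0)   # outside [1, 10]: extrapolated instead of raising an error
```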
|
(*
Copyright (c) 2017, ETH Zurich
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*)
(*<*)
theory Equivalence
imports Resolution
begin
(*>*)
subsection {* View Equivalence *}
text_raw {* \label{sec:vieweq} *}
text {* A view is a function that encodes the result of all address resolutions beginning at a
given node. *}
type_synonym view = "addr \<Rightarrow> name set"
definition view_from :: "nodeid \<Rightarrow> net \<Rightarrow> view"
where "view_from node net = (\<lambda>addr. resolve net (node, addr))"
text {* A remapping is a renaming of nodes, leaving addresses intact. *}
type_synonym remap = "nodeid \<Rightarrow> nodeid"
definition rename :: "remap \<Rightarrow> name \<Rightarrow> name"
where "rename m n = (m (fst n), snd n)"
(*<*)
lemma rename_id:
"rename id ` S = S"
unfolding rename_def id_def by(auto)
(*>*)
primrec rename_list :: "(nodeid \<Rightarrow> nodeid) \<Rightarrow> nodeid list \<Rightarrow> (nodeid \<Rightarrow> nodeid)"
where "rename_list f [] = id" |
"rename_list f (nd#nds) = (\<lambda>x. if x = nd then f nd else x) o (rename_list f nds)"
text {* Two nets are view-equivalent for some node, if the two views have the same domain, and
give the same result for all addresses, modulo a renaming of accepting nodes. *}
definition view_eq :: "nodeid \<Rightarrow> (remap \<times> net) \<Rightarrow> (remap \<times> net) \<Rightarrow> bool"
where "view_eq nd x y \<longleftrightarrow>
(\<forall>a. resolve_dom (snd x,(nd,a)) = resolve_dom (snd y,(nd,a))) \<and>
(\<forall>a. resolve_dom (snd x,(nd,a)) \<longrightarrow>
rename (fst x) ` view_from nd (snd x) a =
rename (fst y) ` view_from nd (snd y) a)"
(*<*)
definition view_le :: "nodeid \<Rightarrow> (remap \<times> net) \<Rightarrow> (remap \<times> net) \<Rightarrow> bool"
where "view_le nd x y \<longleftrightarrow>
(\<forall>a. resolve_dom (snd x,(nd,a)) \<longrightarrow> resolve_dom (snd y,(nd,a))) \<and>
(\<forall>a. resolve_dom (snd x,(nd,a)) \<longrightarrow>
rename (fst x) ` view_from nd (snd x) a =
rename (fst y) ` view_from nd (snd y) a)"
(*>*)
definition view_eq_on :: "nodeid set \<Rightarrow> (remap \<times> net) \<Rightarrow> (remap \<times> net) \<Rightarrow> bool"
where "view_eq_on S x y \<longleftrightarrow> (\<forall>nd\<in>S. view_eq nd x y)"
(*<*)
definition view_le_on :: "nodeid set \<Rightarrow> (remap \<times> net) \<Rightarrow> (remap \<times> net) \<Rightarrow> bool"
where "view_le_on S x y \<longleftrightarrow> (\<forall>nd\<in>S. view_le nd x y)"
(*>*)
text {* Two nodes are equivalent (for a given net) if they have the same view. *}
definition node_eq :: "net \<Rightarrow> nodeid \<Rightarrow> nodeid \<Rightarrow> bool"
where "node_eq net nd nd' \<longleftrightarrow>
(\<forall>a. resolve_dom (net,(nd,a)) = resolve_dom (net,(nd',a))) \<and>
(\<forall>a. resolve_dom (net,(nd,a)) \<longrightarrow> view_from nd net a = view_from nd' net a)"
(*<*)
lemma view_eq_empty:
"view_eq_on {} x y"
by(simp add:view_eq_on_def)
lemma view_eq_onI:
"(\<And>nd. nd \<in> S \<Longrightarrow> view_eq nd x y) \<Longrightarrow> view_eq_on S x y"
by(simp add:view_eq_on_def)
lemma view_le_onI:
"(\<And>nd. nd \<in> S \<Longrightarrow> view_le nd x y) \<Longrightarrow> view_le_on S x y"
by(simp add:view_le_on_def)
lemma view_eqI:
"(\<And>a. resolve_dom (snd x, (nd,a)) = resolve_dom (snd y, (nd,a))) \<Longrightarrow>
(\<And>a. resolve_dom (snd x, (nd,a)) \<Longrightarrow>
rename (fst x) ` view_from nd (snd x) a =
rename (fst y) ` view_from nd (snd y) a) \<Longrightarrow> view_eq nd x y"
by(simp add:view_eq_def)
lemma view_leI:
"(\<And>a. resolve_dom (snd x, (nd,a)) \<Longrightarrow> resolve_dom (snd y, (nd,a))) \<Longrightarrow>
(\<And>a. resolve_dom (snd x, (nd,a)) \<Longrightarrow>
rename (fst x) ` view_from nd (snd x) a =
rename (fst y) ` view_from nd (snd y) a) \<Longrightarrow> view_le nd x y"
by(simp add:view_le_def)
lemma view_eq_domD:
"view_eq nd x y \<Longrightarrow> resolve_dom (snd x, nd, a) = resolve_dom (snd y, nd, a)"
by(simp only:view_eq_def)
lemma view_le_domD:
"view_le nd x y \<Longrightarrow> resolve_dom (snd x, nd, a) \<Longrightarrow> resolve_dom (snd y, nd, a)"
by(simp only:view_le_def)
lemma view_eq_viewD:
"view_eq nd x y \<Longrightarrow> resolve_dom (snd x, nd, a) \<Longrightarrow>
rename (fst x) ` view_from nd (snd x) a = rename (fst y) ` view_from nd (snd y) a"
by(simp only:view_eq_def)
lemma view_le_viewD:
"view_le nd x y \<Longrightarrow> resolve_dom (snd x, nd, a) \<Longrightarrow>
rename (fst x) ` view_from nd (snd x) a = rename (fst y) ` view_from nd (snd y) a"
by(simp only:view_le_def)
(*>*)
text {* Both view-equivalence and node-equivalence are proper equivalence relations. *}
lemma equivp_view_eq:
"\<And>nd. equivp (view_eq nd)"
(*<*)
by(intro equivpI sympI transpI reflpI, simp_all add:view_eq_def)
(*>*)
lemma equivp_view_eq_on:
fixes S :: "nodeid set"
shows "equivp (view_eq_on S)"
(*<*)
unfolding view_eq_on_def using equivp_view_eq
by(blast intro:equivpI reflpI sympI transpI
dest:equivp_reflp equivp_symp equivp_transp)
lemma equivp_node_eq:
"\<And>net. equivp (node_eq net)"
by(intro equivpI sympI transpI reflpI, simp_all add:node_eq_def)
lemma transp_view_le:
"\<And>nd. transp (view_le nd)"
by(intro transpI, simp add:view_le_def)
lemma transp_view_le_on:
"\<And>S. transp (view_le_on S)"
unfolding view_le_on_def
using transp_view_le by(blast dest:transpD intro:transpI)
lemma view_eq_view_le:
"view_eq nd x y \<Longrightarrow> view_le nd x y"
by(simp add:view_eq_def view_le_def)
lemma view_eq_on_view_le_on:
"view_eq_on S x y \<Longrightarrow> view_le_on S x y"
by(simp add:view_eq_on_def view_le_on_def view_eq_view_le)
lemma view_eq_antisym:
"view_le nd x y \<Longrightarrow> view_le nd y x \<Longrightarrow> view_eq nd x y"
unfolding view_le_def view_eq_def by(blast)
lemma view_eq_on_antisym:
"view_le_on S x y \<Longrightarrow> view_le_on S y x \<Longrightarrow> view_eq_on S x y"
unfolding view_le_on_def view_eq_on_def by(blast intro:view_eq_antisym)
lemmas view_eq_symp = equivp_symp[OF equivp_view_eq]
lemmas view_eq_on_symp = equivp_symp[OF equivp_view_eq_on]
text {* Declare rules for transitivity reasoning. *}
lemmas [trans] = equivp_transp[OF equivp_view_eq_on] equivp_transp[OF equivp_view_eq]
transpD[OF transp_view_le_on] transpD[OF transp_view_le]
transpD[OF transp_view_le_on, OF view_eq_on_view_le_on]
transpD[OF transp_view_le, OF view_eq_view_le]
transpD[OF transp_view_le_on, OF _ view_eq_on_view_le_on]
transpD[OF transp_view_le, OF _ view_eq_view_le]
(*>*)
text {* Both equivalence relations preserve resolution. *}
lemma node_eq_resolve:
fixes nd nd' :: nodeid and net :: net and a :: addr
shows "node_eq net nd nd' \<Longrightarrow> resolve_dom (net,nd,a) \<Longrightarrow> resolve net (nd,a) = resolve net (nd',a)"
(*<*)
by(simp only:node_eq_def view_from_def)
(*>*)
lemma view_eq_resolve:
fixes nd :: nodeid and x y :: "remap \<times> net" and a :: addr
shows "view_eq nd x y \<Longrightarrow> resolve_dom (snd x,nd,a) \<Longrightarrow>
rename (fst x) ` resolve (snd x) (nd,a) = rename (fst y) ` resolve (snd y) (nd,a)"
(*<*)
by(simp only:view_eq_def view_from_def)
lemma view_le_resolve:
shows "view_le nd x y \<Longrightarrow> resolve_dom (snd x,nd,a) \<Longrightarrow>
rename (fst x) ` resolve (snd x) (nd,a) = rename (fst y) ` resolve (snd y) (nd,a)"
by(simp only:view_le_def view_from_def)
(*>*)
text {* View-equivalence is preserved by any further node renaming. *}
(*<*)
lemma view_le_comp:
assumes le_fg: "view_le nd (f,net) (g,net')"
shows "view_le nd (h o f,net) (h o g,net')"
proof(intro view_leI, simp_all)
fix a
assume dom: "resolve_dom (net, nd, a)"
with le_fg show dom': "resolve_dom (net', nd, a)"
by(auto dest:view_le_domD)
show "rename (h \<circ> f) ` view_from nd net a = rename (h \<circ> g) ` view_from nd net' a"
(is "?A (h o f) = ?B (h o g)")
proof(intro set_eqI iffI)
fix x
assume "x \<in> ?A (h o f)"
then obtain y where yin: "y \<in> view_from nd net a" and xeq: "x = rename (h o f) y"
by(blast)
let ?x' = "rename f y"
from yin have "?x' \<in> ?A f" by(auto)
with dom have "?x' \<in> ?B g" by(simp add: view_le_viewD[OF le_fg, simplified])
then obtain y' where y'in: "y' \<in> view_from nd net' a" and x'eq: "?x' = rename g y'"
by(blast)
from xeq x'eq have "x = rename (h o g) y'" by(simp add:rename_def)
with y'in show "x \<in> ?B (h o g)" by(blast)
next
fix x
assume "x \<in> ?B (h o g)"
then obtain y where yin: "y \<in> view_from nd net' a" and xeq: "x = rename (h o g) y"
by(blast)
let ?x' = "rename g y"
from yin have "?x' \<in> ?B g" by(auto)
with dom have "?x' \<in> ?A f" by(simp add: view_le_viewD[OF le_fg, simplified])
then obtain y' where y'in: "y' \<in> view_from nd net a" and x'eq: "?x' = rename f y'"
by(blast)
from xeq x'eq have "x = rename (h o f) y'" by(simp add:rename_def)
with y'in show "x \<in> ?A (h o f)" by(blast)
qed
qed
(*>*)
lemma view_eq_comp:
"view_eq nd (f,net) (g,net') \<Longrightarrow> view_eq nd (h o f,net) (h o g,net')"
(*<*)
by(blast intro:view_eq_antisym view_le_comp
dest:view_eq_view_le view_eq_view_le[OF view_eq_symp])
lemma view_le_on_comp:
"view_le_on S (f,net) (g,net') \<Longrightarrow> view_le_on S (h o f,net) (h o g,net')"
by(simp add:view_le_comp view_le_on_def)
lemma view_eq_on_comp:
"view_eq_on S (f,net) (g,net') \<Longrightarrow> view_eq_on S (h o f,net) (h o g,net')"
by(simp add:view_eq_comp view_eq_on_def)
(*>*)
text {* For transformations that add nodes, we need to know that the new node has no descendants
or ancestors. *}
definition fresh_node :: "net \<Rightarrow> nodeid \<Rightarrow> bool"
where "fresh_node net nd \<longleftrightarrow>
(\<forall>a. translate (net nd) a = {}) \<and>
(\<forall>x y. (x,y) \<in> decodes_to net \<longrightarrow> fst x \<noteq> nd) \<and>
accept (net nd) = {}"
(*<*)
lemma fresh_node_translateD: "\<And>net nd a. fresh_node net nd \<Longrightarrow> translate (net nd) a = {}"
by(auto simp:fresh_node_def)
lemma fresh_node_reachableD: "\<And>net nd x y. fresh_node net nd \<Longrightarrow> (x,y) \<in> decodes_to net \<Longrightarrow> fst x \<noteq> nd"
unfolding fresh_node_def by(blast)
lemma fresh_node_acceptD: "\<And>net nd. fresh_node net nd \<Longrightarrow> accept (net nd) = {}"
by(simp add:fresh_node_def)
end
(*>*) |
import numpy as np
from .data_list import ImageList
import torch.utils.data as util_data
from torchvision import transforms
from PIL import Image, ImageOps
class ResizeImage():
    """Resize a PIL image to a fixed square (size, size) or (h, w) target."""
    def __init__(self, size):
        if isinstance(size, int):
            self.size = (int(size), int(size))
        else:
            self.size = size
    def __call__(self, img):
        th, tw = self.size
        return img.resize((th, tw))
class PlaceCrop(object):
    """Crop a (size x size) window whose top-left corner is (start_x, start_y)."""
    def __init__(self, size, start_x, start_y):
        if isinstance(size, int):
            self.size = (int(size), int(size))
        else:
            self.size = size
        self.start_x = start_x
        self.start_y = start_y
    def __call__(self, img):
        th, tw = self.size
        return img.crop((self.start_x, self.start_y, self.start_x + tw, self.start_y + th))
def load_images(images_file_path, batch_size, resize_size=256, is_train=True, crop_size=224, is_cen=False, num_workers=4):
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225])
if not is_train:
        start_center = (resize_size - crop_size - 1) / 2  # note: float under Python 3 (code predates it); use // for an integer offset
transformer = transforms.Compose([
ResizeImage(resize_size),
PlaceCrop(crop_size, start_center, start_center),
transforms.ToTensor(),
normalize])
images = ImageList(open(images_file_path).readlines(), transform=transformer)
images_loader = util_data.DataLoader(images, batch_size=batch_size, shuffle=False, num_workers=num_workers, drop_last=True)
else:
        if is_cen:
            transformer = transforms.Compose([ResizeImage(resize_size),
                transforms.Scale(resize_size),  # deprecated alias of transforms.Resize in newer torchvision
                transforms.RandomHorizontalFlip(),
                transforms.CenterCrop(crop_size),
                transforms.ToTensor(),
                normalize])
else:
transformer = transforms.Compose([ResizeImage(resize_size),
transforms.RandomResizedCrop(crop_size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize])
images = ImageList(open(images_file_path).readlines(), transform=transformer)
images_loader = util_data.DataLoader(images, batch_size=batch_size, shuffle=True, num_workers=num_workers, drop_last=True)
return images_loader
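# A minimal usage sketch (the list-file path and sizes below are hypothetical;
# per ImageList, each line of the file is expected to hold an image path and a label):
#
#   train_loader = load_images("data/train_list.txt", batch_size=32)
#   for images, labels in train_loader:
#       print(images.shape)  # e.g. torch.Size([32, 3, 224, 224])
#       break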
|
-- see: http://docs.idris-lang.org/en/latest/tutorial/interp.html
-- module Interp
import Data.Vect
import Data.Fin
data Ty = TyInt | TyBool | TyFun Ty Ty
interpTy : Ty -> Type
interpTy TyInt = Int
interpTy TyBool = Bool
interpTy (TyFun A T) = interpTy A -> interpTy T
using (G:Vect n Ty) -- the context
-- `HasType i G T` is a proof that variable `i` in context `G` has type `T`
data HasType : (i : Fin n) -> Vect n Ty -> Ty -> Type where
-- proof that the most recently defined variable is well-typed:
    Stop : HasType FZ (t :: G) t -- de Bruijn index 0
Pop : HasType k G t -> HasType (FS k) (u :: G) t -- Pop Stop is index 1, ...
data Expr : Vect n Ty -> Ty -> Type where
Var : HasType i G t
-> Expr G t
Val : (x : Int)
-> Expr G TyInt
Lam : Expr (a :: G) t
-> Expr G (TyFun a t)
App : Expr G (TyFun a t)
-> Expr G a
-> Expr G t
Op : (interpTy a -> interpTy b -> interpTy c) ->
Expr G a -> Expr G b
-> Expr G c
If : Expr G TyBool -> Lazy (Expr G a) -> Lazy (Expr G a)
-> Expr G a
data Env : Vect n Ty -> Type where
Nil : Env Nil
(::) : interpTy a -> Env G -> Env (a :: G)
-- given a proof that a variable is defined in the context,
-- we can produce a value from the environment:
lookup : HasType i G t -> Env G -> interpTy t
lookup Stop (x :: xs) = x
lookup (Pop k) (x :: xs) = lookup k xs
interp : Env G -> Expr G t -> interpTy t
interp env (Var i) = lookup i env
interp env (Val x) = x
interp env (Lam sc) = \x => interp (x :: env) sc
interp env (App f s) = (interp env f) (interp env s)
interp env (Op op x y) = op (interp env x) (interp env y)
interp env (If cnd thn els)
= if interp env cnd
then interp env thn
else interp env els
succ : Expr G (TyFun TyInt TyInt)
succ = Lam (Op (+) (Var Stop) (Val 1))
add : Expr G (TyFun TyInt (TyFun TyInt TyInt)) -- \x.\y.y + x
add = Lam (Lam (Op (+) (Var Stop) (Var (Pop Stop))))
-- `interp [] (App (App add (Val 2)) (Val 2))` gives `4 : Int`
iszero : Expr G (TyFun TyInt TyBool)
iszero = Lam (Op (==) (Var Stop) (Val 0))
-- `interp [] (App fact (Val 4))` gives `24 : Int`
fact : Expr G (TyFun TyInt TyInt)
fact = Lam (If (App iszero (Var Stop))
(Val 1)
(Op (*) (Var Stop)
(App fact (Op (-) (Var Stop) (Val 1)))))
main : IO ()
main = do
putStr "Enter a number: "
n <- getLine
printLn (interp [] fact (cast n))
|
Require Import Blech.Defaults.
Require Import Coq.Setoids.Setoid.
Require Import Coq.Classes.SetoidClass.
Require Import Blech.Bishop.
Require Import Blech.Category.
Require Import Blech.Functor.
Require Import Blech.Category.Over.
Require Import Blech.Category.Bsh.
Require Blech.Reflect.
Import CategoryNotations.
Import FunctorNotations.
Import BishopNotations.
Import OverNotations.
Open Scope category_scope.
Open Scope object_scope.
Open Scope bishop_scope.
#[local]
Obligation Tactic := Reflect.category_simpl.
#[local]
Definition pullback {A B C: Bsh} (f: Bsh A C) (g: Bsh B C) :=
{ '(x, y) | proj1_sig f x == proj1_sig g y }.
#[program]
Instance pullback_Setoid {A B C: Bsh} (f: Bsh A C) (g: Bsh B C): Setoid (pullback f g) := {
equiv x y := fst (proj1_sig x) == fst (proj1_sig y) ∧ snd (proj1_sig x) == snd (proj1_sig y) ;
}.
Next Obligation.
Proof.
exists.
- intros [[? ?] p].
cbn in *.
split.
all: reflexivity.
- intros [[? ?] p] [[? ?] q] [r l].
cbn in *.
rewrite r, l.
split.
all: reflexivity.
- intros [[? ?] ?] [[? ?] ?] [[? ?] ?] [p q] [r s].
cbn in *.
rewrite p, q, r, s.
split.
all: reflexivity.
Qed.
#[program]
Definition Basechange {A B: Bsh} (f: Bsh A B): Functor (Bsh/B) (Bsh/A) := {|
op (u: Bsh/B) := lub (pullback f (π u) /~ pullback_Setoid f (π u)), λ x, fst (proj1_sig x) ;
map '(lub _, _) '(lub _, _) f '(x, y) := (x, f y) ;
|}.
Next Obligation.
Proof.
intros ? ? [p q].
auto.
Qed.
Next Obligation.
Proof.
rewrite (H _).
cbn in *.
destruct pat.
cbn in *.
destruct x0.
cbn in *.
inversion Heq_anonymous.
subst.
apply e.
Qed.
Next Obligation.
Proof.
intros ? ? [p q].
cbn.
rewrite p, q.
split.
all: reflexivity.
Qed.
Next Obligation.
Proof.
exists.
all: cbn.
- intros.
split.
all: reflexivity.
- intros.
split.
all: reflexivity.
- intros ? ? [? ?] [? ?] r [[? ?] ?].
cbn in *.
split.
1: reflexivity.
apply r.
Qed.
|
function tf = isTestCaseSubclass(name)
%isTestCaseSubclass True for name of a TestCase subclass
% tf = isTestCaseSubclass(name) returns true if the string name is the name of
% a TestCase subclass on the MATLAB path.
% Steven L. Eddins
% Copyright 2008-2009 The MathWorks, Inc.
tf = false;
class_meta = meta.class.fromName(name);
if isempty(class_meta)
% Not the name of a class
return;
end
if strcmp(class_meta.Name, 'TestCase')
tf = true;
else
tf = isMetaTestCaseSubclass(class_meta);
end
function tf = isMetaTestCaseSubclass(class_meta)
tf = false;
if strcmp(class_meta.Name, 'TestCase')
tf = true;
else
% Invoke function recursively on parent classes.
super_classes = class_meta.SuperClasses;
for k = 1:numel(super_classes)
if isMetaTestCaseSubclass(super_classes{k})
tf = true;
break;
end
end
end
|
-- ------------------------------------------------------------------ [ IR.idr ]
-- Module : IR.idr
-- Copyright : (c) Jan de Muijnck-Hughes
-- License : see LICENSE
-- --------------------------------------------------------------------- [ EOH ]
||| The intermediate representation for GRL-Derived Languages.
module GRL.IR
import GRL.Common
%access export
-- ---------------------------------------------------------- [ AST Definition ]
||| An IR to aid in converting DSL language constructs into Goal Graph
||| objects.
public export
data GExpr : GTy -> Type where
Elem : GElemTy -> String
-> Maybe SValue -> GExpr ELEM
ILink : GIntentTy -> CValue
-> GExpr ELEM -> GExpr ELEM -> GExpr INTENT
SLink : GStructTy -> GExpr ELEM
-> List (GExpr ELEM) -> GExpr STRUCT
getTitle : GExpr ELEM -> String
getTitle (Elem ty t s) = t
-- -------------------------------------------------------------------- [ Show ]
private
shGExprE : GExpr ELEM -> String
shGExprE (Elem ty t ms) = with List unwords ["[Element", show ty, show t, show ms,"]"]
private
shGExprI : GExpr INTENT -> String
shGExprI (ILink ty cty a b) = with List unwords ["[Intent", show ty, show cty, shGExprE a, shGExprE b, "]"]
private
shGExprS : GExpr STRUCT -> String
shGExprS (SLink ty x ys) = with List unwords ["[Structure", show ty, shGExprE x, show' ys, "]"]
where
show' : List (GExpr ELEM) -> String
show' ys = "[" ++ (unwords $ intersperse "," (map shGExprE ys)) ++ "]"
private
shGExpr : {ty : GTy} -> GExpr ty -> String
shGExpr {ty=ELEM} x = shGExprE x
shGExpr {ty=INTENT} x = shGExprI x
shGExpr {ty=STRUCT} x = shGExprS x
Show (GExpr ty) where
show x = shGExpr x
-- ------------------------------------------------------------- [ Eq Instance ]
private
eqGExprE : GExpr ELEM -> GExpr ELEM -> Bool
eqGExprE (Elem xty x sx) (Elem yty y sy) = xty == yty && x == y && sx == sy
private
eqGExprI : GExpr INTENT -> GExpr INTENT -> Bool
eqGExprI (ILink xty xc xa xb) (ILink yty yc ya yb) = xty == yty && xc == yc && eqGExprE xa ya && eqGExprE xb yb
private
eqGExprS : GExpr STRUCT -> GExpr STRUCT -> Bool
eqGExprS (SLink xty xa (xbs)) (SLink yty ya (ybs)) =
xty == yty && eqGExprE xa ya && eqGExprList xbs ybs
where
eqGExprList : List (GExpr ELEM) -> List (GExpr ELEM) -> Bool
eqGExprList Nil Nil = True
eqGExprList (x::xs) (y::ys) =
if eqGExprE x y
then (eqGExprList (assert_smaller (x::xs) xs) (assert_smaller (y::ys) ys))
else False
eqGExprList _ _ = False
export
eqGExpr : GExpr a -> GExpr b -> Bool
eqGExpr {a=ELEM} {b=ELEM} x y = eqGExprE x y
eqGExpr {a=INTENT} {b=INTENT} x y = eqGExprI x y
eqGExpr {a=STRUCT} {b=STRUCT} x y = eqGExprS x y
eqGExpr _ _ = False
Eq (GExpr ty) where
(==) x y = eqGExpr x y
||| The GRL Class for allowing DSL designers to instruct the Goal
||| Graph builder how to convert expressions in the DSL to the IR.
|||
||| @a The DSL which is indexed by `GTy` type.
public export
interface GRL (a : GTy -> Type) where
||| Instruct the interpreter to construct a goal node.
mkElem : a ELEM -> GExpr ELEM
  ||| Instruct the interpreter to construct an intent link.
mkIntent : a INTENT -> GExpr INTENT
  ||| Instruct the interpreter to construct a structural link.
mkStruct : a STRUCT -> GExpr STRUCT
-- --------------------------------------------------------------------- [ EOF ]
|
/-
Copyright (c) 2023 Kevin Buzzard. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author : Kevin Buzzard
-/
import tactic
import analysis.normed_space.lp_space -- theory of ℓᵖ spaces
/-
# ℓᵖ spaces
The set-up : `I` is an index type, `E` is a family of `normed_add_comm_group`s
(so if `i : I` then `E i` is a type and if `v : E i` then `‖v‖` makes sense and
is a real number).
Then given `p : ℝ≥0∞` (i.e. an element `p` of `[0,∞]`) there is a theory
of ℓᵖ spaces, which is the subspace of `∏ᵢ, E i` consisting of the
sections `vᵢ` such that `∑ᵢ ‖vᵢ‖ᵖ < ∞`. For `p=∞` this means "the ‖vᵢ‖ are
bounded".
-/
open_locale ennreal -- to get notation ℝ≥0∞
variables (I : Type) (E : I → Type) [∀ i, normed_add_comm_group (E i)] (p : ℝ≥0∞)
-- Here's how to say that an element of the product of the Eᵢ is in the ℓᵖ space
example (v : Π i, E i) : Prop := mem_ℓp v p
-- Technical note: 0^0=1 and x^0=0 for x>0, so ℓ⁰ is the functions with finite support.
variable (v : Π i, E i)
example : mem_ℓp v 0 ↔ set.finite {i | v i ≠ 0} :=
begin
exact mem_ℓp_zero_iff,
end
example : mem_ℓp v ∞ ↔ bdd_above (set.range (λ i, ‖v i‖)) :=
begin
exact mem_ℓp_infty_iff,
end
-- The function ennreal.to_real sends x<∞ to x and ∞ to 0.
-- So `0 < p.to_real` is a way of saying `0 < p < ∞`.
example (hp : 0 < p.to_real) :
mem_ℓp v p ↔ summable (λ i, ‖v i‖ ^ p.to_real) :=
begin
exact mem_ℓp_gen_iff hp
end
-- It's a theorem in the library that if p ≤ q then mem_ℓp v p → mem_ℓp v q
example (q : ℝ≥0∞) (hpq : p ≤ q) : mem_ℓp v p → mem_ℓp v q :=
begin
intro h,
exact h.of_exponent_ge hpq,
end
-- The space of all `v` satisfying `mem_ℓp v p` is
-- called lp E p
example : Type := lp E p
-- It has a norm:
noncomputable example (v : lp E p) : ℝ := ‖v‖
-- It's a `normed_add_comm_group` if `1 ≤ p`
noncomputable example [fact (1 ≤ p)] : normed_add_comm_group (lp E p) :=
begin
apply_instance, -- Typeclass inference can't see 1 ≤ p unless we
-- use the `fact` typeclass (a way of putting arbitrary facts)
-- into the system
end
-- `real.is_conjugate_exponent p q` means that `p,q>1` are reals and `1/p+1/q=1`
example (p q : ℝ) (hp : 1 < p) (hq : 1 < q) (hpq : 1 / p + 1 / q = 1) :
p.is_conjugate_exponent q :=
{ one_lt := hp,
  inv_add_inv_conj := hpq } -- note that `hq` is not needed, as it follows
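-- For instance, 2 is conjugate to itself (a quick sketch built from the
-- structure fields shown above; `one_lt_two` and `norm_num` come from mathlib):
example : (2 : ℝ).is_conjugate_exponent 2 :=
{ one_lt := one_lt_two,
  inv_add_inv_conj := by norm_num }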
-- We have a version of Hölder's inequality.
#check @lp.tsum_mul_le_mul_norm
example (q : ℝ≥0∞) (hpq : p.to_real.is_conjugate_exponent q.to_real) (f : lp E p)
(g : lp E q) : ∑' (i : I), ‖f i‖ * ‖g i‖ ≤ ‖f‖ * ‖g‖ :=
begin
have := lp.tsum_mul_le_mul_norm hpq f g,
exact this.2,
end
-- This would be a useless theorem if `∑' (i : I), ‖f i‖ * ‖g i‖` diverged,
-- because in Lean if a sum diverges then by definition the `∑'` of it is 0.
-- So we also need this:
example (q : ℝ≥0∞) (hpq : p.to_real.is_conjugate_exponent q.to_real) (f : lp E p)
(g : lp E q) : summable (λ i, ‖f i‖ * ‖g i‖) :=
begin
have := lp.tsum_mul_le_mul_norm hpq f g,
exact this.1,
end
|
module ParsePSSE
# Advance `pos` past the current data section; in PSSE files a section
# is terminated by a line beginning with '0' or 'Q'.
function parse_skipsection(lines::Array{String}, pos::Int)
while pos <= length(lines)
line = strip(lines[pos])
if line[1] in [ '0', 'Q' ]
break
end
pos += 1
end
return pos
end
function parse_data(lines::Array{String}, pos::Int, info::Array;
delim=",")
start = last = pos
last = parse_skipsection(lines, start)
ndata = last - start
data = Array{Any}(undef, ndata, length(info))
for i=1:ndata
# Get rid of the single quotes.
line = replace(lines[start+i-1], "'" => "")
fields = split(line, delim)
for j=1:length(info)
if info[j][3] == String
data[i,j] = strip(fields[info[j][2]])
else
data[i,j] = parse(info[j][3], fields[info[j][2]])
end
end
end
pos = last + 1
return pos, data
end
include("parse_raw.jl")
include("parse_rop.jl")
include("parse_inl.jl")
include("parse_con.jl")
end
|
State Before: α : Type u_1
β : Type u_3
γ : Type u_2
l✝ m : Language α
a b x : List α
g : β → γ
f : α → β
l : Language α
⊢ ↑(map g) (↑(map f) l) = ↑(map (g ∘ f)) l State After: no goals Tactic: simp [map, image_image] |
\documentclass[12pt]{article}
\usepackage{tikz}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{tabularx}
\usepackage{multicol}
\usepackage{algpseudocode}
\usepackage{algorithm}
% Geometry
\usepackage{geometry}
\geometry{letterpaper, left=15mm, top=20mm, right=15mm, bottom=20mm}
% Fancy Header
\usepackage{fancyhdr}
\renewcommand{\footrulewidth}{0.4pt}
\pagestyle{fancy}
\fancyhf{}
\chead{CSC 420 - Artificial Intelligence}
\lfoot{CALU Spring 2022}
\rfoot{RDK}
% Add vertical spacing to tables
\renewcommand{\arraystretch}{1.4}
% Macros
\newcommand{\definition}[1]{\underline{\textbf{#1}}}
\newenvironment{rcases}
{\left.\begin{aligned}}
{\end{aligned}\right\rbrace}
% Begin Document
\begin{document}
\section*{Notes, Chapter 2}
\subsection*{Agents and Environments}
\begin{itemize}
\item An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
\item Agents include humans, robots, vehicles, etc
\item The agent function maps from percept histories to actions $ f: P^* \to A$
\item The agent program runs on the physical architecture to produce $f$ \\ $agent = architecture + program$
\end{itemize}
The goal of AI then is to link the percepts of the environment to actions that it can take.
\subsection*{Rationality}
\begin{itemize}
\item A \definition{rational agent} does the right thing, but what does it mean to do the right thing?
\item A \definition{performance measure} to evaluate the behavior of the agent in an environment
\begin{itemize}
\item One point per square cleaned up in time T?
\item One point per clear square per time step, minus one per move?
\end{itemize}
    \item A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date.
\item What is rational at any given time depends on four things:
\begin{enumerate}
\item The performance measure that defines the criterion of success
\item The agent's prior knowledge of the environment
\item The actions that the agent can perform
\item The agent's percept sequence to date
\end{enumerate}
\end{itemize}
\subsubsection*{Definition of a Rational Agent}
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance
measure given the evidence provided by the percept sequence, and whatever built-in knowledge the agent has.
\subsection*{The Nature of Environments / Task Environments (PEAS)}
To design a rational agent, we must specify the task environment:
\begin{itemize}
\item Performance measure
\item Environment
\item Actuators
\item Sensors
\end{itemize}
\subsubsection*{PEAS - Example - Automated Taxi}
\begin{itemize}
\item Performance Measure: Profit, Safety, Destination (minimal path), Comfort
\item Environment: US streets/freeways, traffic, weather
\item Actuators: Steering, Accelerator, Brake, Horn, Speakers/Display, etc
\item Sensors: Cameras, Accelerometers, Engine Sensors, GPS, etc
\end{itemize}
\subsection*{Properties of Task Environments}
\begin{itemize}
\item Fully Observable vs Partially Observable
\item Single-agent vs Multi-agent
\begin{itemize}
\item Competitive vs Cooperative environment
\end{itemize}
\item Deterministic vs Nondeterministic
\begin{itemize}
\item In deterministic environments, the next state of the environment is completely determined by the current state and the action executed by the agent
\end{itemize}
\item Episodic vs Sequential
\begin{itemize}
        \item In an episodic environment, the agent's experience is divided into atomic episodes.
In each episode, the agent receives a percept and then performs a single action.
The next episode does not depend on the action taken in the previous ones.
\item In a sequential environment, the current decisions could affect the future decisions.
\end{itemize}
    \item Static vs Dynamic: a dynamic environment can change while the agent is deliberating
\item Discrete vs Continuous: Able to process a snapshot vs ongoing inputs
\end{itemize}
\begin{tabular}{l | l l}
Task Environment & Self-Driving Taxi & Crossword \\ \hline
Observable & Partially & Fully \\
Agents & Multi & Single \\
Deterministic & No & Yes \\
Episodic/Sequential & Sequential & Sequential \\
Static/Dynamic & Dynamic & Static \\
Continuous/Discrete & Continuous & Discrete
\end{tabular}
\pagebreak
\subsection*{Agent Types}
Any type of agent can have a learning model added to it.
\subsubsection*{Simple Reflex Agents}
This agent type selects actions based on the current percept, ignoring the rest of the percept history.
\subsubsection*{Model-based Reflex Agents}
This agent maintains some sort of internal state that depends on the percept history.
This handles partial observability by letting the agent keep track of the part of the world it cannot see now.
\subsubsection*{Goal-based Agents}
This type keeps track of the world state as well as a set of goals it is trying to achieve and chooses an action that will (eventually) lead to the achievement of its goals.
\subsubsection*{Utility-based Agents}
This type can be used when there are conflicting goals. It uses a model of the world, along with a utility function that measures its performance among the states of the world.
Then it chooses the action that maximizes the expected utility.
Goal-based agents and utility-based agents are more flexible than simple reflex and model-based reflex agents.
Why?
\subsubsection*{Learning Agents}
\begin{itemize}
\item The \textbf{learning} element is responsible for making improvements
\item The \textbf{performance} element is responsible for selecting external actions
\item The \textbf{critic} provides feedback on how the agent is doing
\item The \textbf{problem generator} is responsible for suggesting actions that will lead to new experiences
\end{itemize}
\end{document} |
The Denton County Homeless Coalition (DCHC) is presenting community homeless data and the results from the 2018 Point-In-Time (PIT) Count. Scheduled events are on April 13, 2018 in Denton and April 21, 2018 in Lewisville.
The Denton County Homeless Coalition invites you to come interact with our homeless data, see the 2018 Point-In-Time Count Survey Results, and learn about current efforts to bust barriers to housing. The event includes Volunteer Appreciation activities for PIT Count survey volunteers. Each event is open to the public and is designed to display the reporting in a gallery style presentation. Attendees are able to come and go as they please. Volunteers will be on hand to answer questions. |
State Before: α✝ : Type ?u.215
β : Type ?u.218
γ : Type ?u.221
α : Type u_1
⊢ Nat.card α = if h : Finite α then Fintype.card α else 0 State After: case inl
α✝ : Type ?u.215
β : Type ?u.218
γ : Type ?u.221
α : Type u_1
h✝ : Finite α
⊢ Nat.card α = if h : Finite α then Fintype.card α else 0
case inr
α✝ : Type ?u.215
β : Type ?u.218
γ : Type ?u.221
α : Type u_1
h✝ : Infinite α
⊢ Nat.card α = if h : Finite α then Fintype.card α else 0 Tactic: cases finite_or_infinite α State Before: case inl
α✝ : Type ?u.215
β : Type ?u.218
γ : Type ?u.221
α : Type u_1
h✝ : Finite α
⊢ Nat.card α = if h : Finite α then Fintype.card α else 0 State After: case inl
α✝ : Type ?u.215
β : Type ?u.218
γ : Type ?u.221
α : Type u_1
h✝ : Finite α
this : Fintype α := Fintype.ofFinite α
⊢ Nat.card α = if h : Finite α then Fintype.card α else 0 Tactic: letI := Fintype.ofFinite α State Before: case inl
α✝ : Type ?u.215
β : Type ?u.218
γ : Type ?u.221
α : Type u_1
h✝ : Finite α
this : Fintype α := Fintype.ofFinite α
⊢ Nat.card α = if h : Finite α then Fintype.card α else 0 State After: no goals Tactic: simp only [*, Nat.card_eq_fintype_card, dif_pos] State Before: case inr
α✝ : Type ?u.215
β : Type ?u.218
γ : Type ?u.221
α : Type u_1
h✝ : Infinite α
⊢ Nat.card α = if h : Finite α then Fintype.card α else 0 State After: no goals Tactic: simp only [*, card_eq_zero_of_infinite, not_finite_iff_infinite.mpr, dite_false] |
/-
Copyright (c) 2019 Scott Morrison. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Scott Morrison, Bhavik Mehta, Adam Topaz
-/
import category_theory.functor.category
import category_theory.functor.fully_faithful
import category_theory.functor.reflects_isomorphisms
namespace category_theory
open category
universes v₁ u₁ -- morphism levels before object levels. See note [category_theory universes].
variables (C : Type u₁) [category.{v₁} C]
/--
The data of a monad on C consists of an endofunctor T together with natural transformations
η : 𝟭 C ⟶ T and μ : T ⋙ T ⟶ T satisfying three equations:
- T μ_X ≫ μ_X = μ_(TX) ≫ μ_X (associativity)
- η_(TX) ≫ μ_X = 1_X (left unit)
- Tη_X ≫ μ_X = 1_X (right unit)
-/
structure monad extends C ⥤ C :=
(η' [] : 𝟭 _ ⟶ to_functor)
(μ' [] : to_functor ⋙ to_functor ⟶ to_functor)
(assoc' : ∀ X, to_functor.map (nat_trans.app μ' X) ≫ μ'.app _ = μ'.app _ ≫ μ'.app _ . obviously)
(left_unit' : ∀ X : C, η'.app (to_functor.obj X) ≫ μ'.app _ = 𝟙 _ . obviously)
(right_unit' : ∀ X : C, to_functor.map (η'.app X) ≫ μ'.app _ = 𝟙 _ . obviously)
/--
The data of a comonad on C consists of an endofunctor G together with natural transformations
ε : G ⟶ 𝟭 C and δ : G ⟶ G ⋙ G satisfying three equations:
- δ_X ≫ G δ_X = δ_X ≫ δ_(GX) (coassociativity)
- δ_X ≫ ε_(GX) = 1_X (left counit)
- δ_X ≫ G ε_X = 1_X (right counit)
-/
structure comonad extends C ⥤ C :=
(ε' [] : to_functor ⟶ 𝟭 _)
(δ' [] : to_functor ⟶ to_functor ⋙ to_functor)
(coassoc' : ∀ X, nat_trans.app δ' _ ≫ to_functor.map (δ'.app X) = δ'.app _ ≫ δ'.app _ . obviously)
(left_counit' : ∀ X : C, δ'.app X ≫ ε'.app (to_functor.obj X) = 𝟙 _ . obviously)
(right_counit' : ∀ X : C, δ'.app X ≫ to_functor.map (ε'.app X) = 𝟙 _ . obviously)
variables {C} (T : monad C) (G : comonad C)
instance coe_monad : has_coe (monad C) (C ⥤ C) := ⟨λ T, T.to_functor⟩
instance coe_comonad : has_coe (comonad C) (C ⥤ C) := ⟨λ G, G.to_functor⟩
@[simp] lemma monad_to_functor_eq_coe : T.to_functor = T := rfl
@[simp] lemma comonad_to_functor_eq_coe : G.to_functor = G := rfl
/-- The unit for the monad `T`. -/
def monad.η : 𝟭 _ ⟶ (T : C ⥤ C) := T.η'
/-- The multiplication for the monad `T`. -/
def monad.μ : (T : C ⥤ C) ⋙ (T : C ⥤ C) ⟶ T := T.μ'
/-- The counit for the comonad `G`. -/
def comonad.ε : (G : C ⥤ C) ⟶ 𝟭 _ := G.ε'
/-- The comultiplication for the comonad `G`. -/
def comonad.δ : (G : C ⥤ C) ⟶ (G : C ⥤ C) ⋙ G := G.δ'
/-- A custom simps projection for the functor part of a monad, as a coercion. -/
def monad.simps.coe := (T : C ⥤ C)
/-- A custom simps projection for the unit of a monad, in simp normal form. -/
def monad.simps.η : 𝟭 _ ⟶ (T : C ⥤ C) := T.η
/-- A custom simps projection for the multiplication of a monad, in simp normal form. -/
def monad.simps.μ : (T : C ⥤ C) ⋙ (T : C ⥤ C) ⟶ (T : C ⥤ C) := T.μ
/-- A custom simps projection for the functor part of a comonad, as a coercion. -/
def comonad.simps.coe := (G : C ⥤ C)
/-- A custom simps projection for the counit of a comonad, in simp normal form. -/
def comonad.simps.ε : (G : C ⥤ C) ⟶ 𝟭 _ := G.ε
/-- A custom simps projection for the comultiplication of a comonad, in simp normal form. -/
def comonad.simps.δ : (G : C ⥤ C) ⟶ (G : C ⥤ C) ⋙ (G : C ⥤ C) := G.δ
initialize_simps_projections category_theory.monad (to_functor → coe, η' → η, μ' → μ)
initialize_simps_projections category_theory.comonad (to_functor → coe, ε' → ε, δ' → δ)
@[reassoc]
lemma monad.assoc (T : monad C) (X : C) :
(T : C ⥤ C).map (T.μ.app X) ≫ T.μ.app _ = T.μ.app _ ≫ T.μ.app _ :=
T.assoc' X
@[simp, reassoc] lemma monad.left_unit (T : monad C) (X : C) :
T.η.app ((T : C ⥤ C).obj X) ≫ T.μ.app X = 𝟙 ((T : C ⥤ C).obj X) :=
T.left_unit' X
@[simp, reassoc] lemma monad.right_unit (T : monad C) (X : C) :
(T : C ⥤ C).map (T.η.app X) ≫ T.μ.app X = 𝟙 ((T : C ⥤ C).obj X) :=
T.right_unit' X
@[reassoc]
lemma comonad.coassoc (G : comonad C) (X : C) :
G.δ.app _ ≫ (G : C ⥤ C).map (G.δ.app X) = G.δ.app _ ≫ G.δ.app _ :=
G.coassoc' X
@[simp, reassoc] lemma comonad.left_counit (G : comonad C) (X : C) :
G.δ.app X ≫ G.ε.app ((G : C ⥤ C).obj X) = 𝟙 ((G : C ⥤ C).obj X) :=
G.left_counit' X
@[simp, reassoc] lemma comonad.right_counit (G : comonad C) (X : C) :
G.δ.app X ≫ (G : C ⥤ C).map (G.ε.app X) = 𝟙 ((G : C ⥤ C).obj X) :=
G.right_counit' X
/-- A morphism of monads is a natural transformation compatible with η and μ. -/
@[ext]
structure monad_hom (T₁ T₂ : monad C) extends nat_trans (T₁ : C ⥤ C) T₂ :=
(app_η' : ∀ X, T₁.η.app X ≫ app X = T₂.η.app X . obviously)
(app_μ' : ∀ X, T₁.μ.app X ≫ app X = ((T₁ : C ⥤ C).map (app X) ≫ app _) ≫ T₂.μ.app X . obviously)
/-- A morphism of comonads is a natural transformation compatible with ε and δ. -/
@[ext]
structure comonad_hom (M N : comonad C) extends nat_trans (M : C ⥤ C) N :=
(app_ε' : ∀ X, app X ≫ N.ε.app X = M.ε.app X . obviously)
(app_δ' : ∀ X, app X ≫ N.δ.app X = M.δ.app X ≫ app _ ≫ (N : C ⥤ C).map (app X) . obviously)
restate_axiom monad_hom.app_η'
restate_axiom monad_hom.app_μ'
attribute [simp, reassoc] monad_hom.app_η monad_hom.app_μ
restate_axiom comonad_hom.app_ε'
restate_axiom comonad_hom.app_δ'
attribute [simp, reassoc] comonad_hom.app_ε comonad_hom.app_δ
instance : category (monad C) :=
{ hom := monad_hom,
id := λ M, { to_nat_trans := 𝟙 (M : C ⥤ C) },
comp := λ _ _ _ f g,
{ to_nat_trans := { app := λ X, f.app X ≫ g.app X } } }
instance : category (comonad C) :=
{ hom := comonad_hom,
id := λ M, { to_nat_trans := 𝟙 (M : C ⥤ C) },
comp := λ M N L f g,
{ to_nat_trans := { app := λ X, f.app X ≫ g.app X } } }
instance {T : monad C} : inhabited (monad_hom T T) := ⟨𝟙 T⟩
@[simp] lemma monad_hom.id_to_nat_trans (T : monad C) :
  (𝟙 T : T ⟶ T).to_nat_trans = 𝟙 (T : C ⥤ C) :=
rfl
instance {G : comonad C} : inhabited (comonad_hom G G) := ⟨𝟙 G⟩
@[simp] lemma comonad_hom.id_to_nat_trans (T : comonad C) :
(𝟙 T : T ⟶ T).to_nat_trans = 𝟙 (T : C ⥤ C) :=
rfl
@[simp] lemma comp_to_nat_trans {T₁ T₂ T₃ : comonad C} (f : T₁ ⟶ T₂) (g : T₂ ⟶ T₃) :
(f ≫ g).to_nat_trans =
((f.to_nat_trans : _ ⟶ (T₂ : C ⥤ C)) ≫ g.to_nat_trans : (T₁ : C ⥤ C) ⟶ T₃) :=
rfl
/-- Construct a monad isomorphism from a natural isomorphism of functors where the forward
direction is a monad morphism. -/
@[simps]
def monad_iso.mk {M N : monad C} (f : (M : C ⥤ C) ≅ N) (f_η f_μ) :
M ≅ N :=
{ hom := { to_nat_trans := f.hom, app_η' := f_η, app_μ' := f_μ },
inv :=
{ to_nat_trans := f.inv,
app_η' := λ X, by simp [←f_η],
app_μ' := λ X,
begin
rw ←nat_iso.cancel_nat_iso_hom_right f,
simp only [nat_trans.naturality, iso.inv_hom_id_app, assoc, comp_id, f_μ,
nat_trans.naturality_assoc, iso.inv_hom_id_app_assoc, ←functor.map_comp_assoc],
simp,
end } }
/-- Construct a comonad isomorphism from a natural isomorphism of functors where the forward
direction is a comonad morphism. -/
@[simps]
def comonad_iso.mk {M N : comonad C} (f : (M : C ⥤ C) ≅ N) (f_ε f_δ) :
M ≅ N :=
{ hom := { to_nat_trans := f.hom, app_ε' := f_ε, app_δ' := f_δ },
inv :=
{ to_nat_trans := f.inv,
app_ε' := λ X, by simp [←f_ε],
app_δ' := λ X,
begin
rw ←nat_iso.cancel_nat_iso_hom_left f,
simp only [reassoc_of (f_δ X), iso.hom_inv_id_app_assoc, nat_trans.naturality_assoc],
rw [←functor.map_comp, iso.hom_inv_id_app, functor.map_id],
apply (comp_id _).symm
end } }
variable (C)
/--
The forgetful functor from the category of monads to the category of endofunctors.
-/
@[simps]
def monad_to_functor : monad C ⥤ (C ⥤ C) :=
{ obj := λ T, T,
map := λ M N f, f.to_nat_trans }
instance : faithful (monad_to_functor C) := {}.
@[simp]
lemma monad_to_functor_map_iso_monad_iso_mk {M N : monad C} (f : (M : C ⥤ C) ≅ N) (f_η f_μ) :
(monad_to_functor _).map_iso (monad_iso.mk f f_η f_μ) = f :=
by { ext, refl }
instance : reflects_isomorphisms (monad_to_functor C) :=
{ reflects := λ M N f i,
begin
resetI,
convert is_iso.of_iso (monad_iso.mk (as_iso ((monad_to_functor C).map f)) f.app_η f.app_μ),
ext; refl,
end }
/--
The forgetful functor from the category of comonads to the category of endofunctors.
-/
@[simps]
def comonad_to_functor : comonad C ⥤ (C ⥤ C) :=
{ obj := λ G, G,
map := λ M N f, f.to_nat_trans }
instance : faithful (comonad_to_functor C) := {}.
@[simp]
lemma comonad_to_functor_map_iso_comonad_iso_mk {M N : comonad C} (f : (M : C ⥤ C) ≅ N) (f_ε f_δ) :
(comonad_to_functor _).map_iso (comonad_iso.mk f f_ε f_δ) = f :=
by { ext, refl }
instance : reflects_isomorphisms (comonad_to_functor C) :=
{ reflects := λ M N f i,
begin
resetI,
convert is_iso.of_iso (comonad_iso.mk (as_iso ((comonad_to_functor C).map f)) f.app_ε f.app_δ),
ext; refl,
end }
variable {C}
/--
An isomorphism of monads gives a natural isomorphism of the underlying functors.
-/
@[simps {rhs_md := semireducible}]
def monad_iso.to_nat_iso {M N : monad C} (h : M ≅ N) : (M : C ⥤ C) ≅ N :=
(monad_to_functor C).map_iso h
/--
An isomorphism of comonads gives a natural isomorphism of the underlying functors.
-/
@[simps {rhs_md := semireducible}]
def comonad_iso.to_nat_iso {M N : comonad C} (h : M ≅ N) : (M : C ⥤ C) ≅ N :=
(comonad_to_functor C).map_iso h
variable (C)
namespace monad
/-- The identity monad. -/
@[simps]
def id : monad C :=
{ to_functor := 𝟭 C,
η' := 𝟙 (𝟭 C),
μ' := 𝟙 (𝟭 C) }
instance : inhabited (monad C) := ⟨monad.id C⟩
end monad
namespace comonad
/-- The identity comonad. -/
@[simps]
def id : comonad C :=
{ to_functor := 𝟭 _,
ε' := 𝟙 (𝟭 C),
δ' := 𝟙 (𝟭 C) }
instance : inhabited (comonad C) := ⟨comonad.id C⟩
end comonad
end category_theory
|
/-
Copyright (c) 2018 Simon Hudon. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro, Johannes Hölzl, Simon Hudon, Kenny Lau
-/
import data.multiset.bind
import control.traversable.lemmas
import control.traversable.instances
/-!
# Functoriality of `multiset`.
-/
universes u
namespace multiset
open list
instance : functor multiset :=
{ map := @map }
@[simp] lemma fmap_def {α' β'} {s : multiset α'} (f : α' → β') : f <$> s = s.map f := rfl
instance : is_lawful_functor multiset :=
by refine { .. }; intros; simp
open is_lawful_traversable is_comm_applicative
variables {F : Type u → Type u} [applicative F] [is_comm_applicative F]
variables {α' β' : Type u} (f : α' → F β')
def traverse : multiset α' → F (multiset β') :=
quotient.lift (functor.map coe ∘ traversable.traverse f)
begin
introv p, unfold function.comp,
induction p,
case perm.nil { refl },
case perm.cons
{ have : multiset.cons <$> f p_x <*> (coe <$> traverse f p_l₁) =
multiset.cons <$> f p_x <*> (coe <$> traverse f p_l₂),
{ rw [p_ih] },
simpa with functor_norm },
case perm.swap
{ have : (λa b (l:list β'), (↑(a :: b :: l) : multiset β')) <$> f p_y <*> f p_x =
(λa b l, ↑(a :: b :: l)) <$> f p_x <*> f p_y,
{ rw [is_comm_applicative.commutative_map],
congr, funext a b l, simpa [flip] using perm.swap b a l },
simp [(∘), this] with functor_norm },
case perm.trans { simp [*] }
end
instance : monad multiset :=
{ pure := λ α x, {x},
bind := @bind,
.. multiset.functor }
@[simp] lemma pure_def {α} : (pure : α → multiset α) = singleton := rfl
@[simp] lemma bind_def {α β} : (>>=) = @bind α β := rfl
instance : is_lawful_monad multiset :=
{ bind_pure_comp_eq_map := λ α β f s, multiset.induction_on s rfl $ λ a s ih, by simp,
pure_bind := λ α β x f, by simp [pure],
bind_assoc := @bind_assoc }
open functor
open traversable is_lawful_traversable
@[simp]
lemma lift_coe {α β : Type*} (x : list α) (f : list α → β)
(h : ∀ a b : list α, a ≈ b → f a = f b) :
quotient.lift f h (x : multiset α) = f x :=
quotient.lift_mk _ _ _
@[simp]
lemma map_comp_coe {α β} (h : α → β) :
functor.map h ∘ coe = (coe ∘ functor.map h : list α → multiset β) :=
by funext; simp [functor.map]
lemma id_traverse {α : Type*} (x : multiset α) :
traverse id.mk x = x :=
quotient.induction_on x begin intro, simp [traverse], refl end
lemma comp_traverse {G H : Type* → Type*}
[applicative G] [applicative H]
[is_comm_applicative G] [is_comm_applicative H]
{α β γ : Type*}
(g : α → G β) (h : β → H γ) (x : multiset α) :
traverse (comp.mk ∘ functor.map h ∘ g) x =
comp.mk (functor.map (traverse h) (traverse g x)) :=
quotient.induction_on x
(by intro;
simp [traverse,comp_traverse] with functor_norm;
simp [(<$>),(∘)] with functor_norm)
lemma map_traverse {G : Type* → Type*}
[applicative G] [is_comm_applicative G]
{α β γ : Type*}
(g : α → G β) (h : β → γ)
(x : multiset α) :
functor.map (functor.map h) (traverse g x) =
traverse (functor.map h ∘ g) x :=
quotient.induction_on x
(by intro; simp [traverse] with functor_norm;
rw [is_lawful_functor.comp_map, map_traverse])
lemma traverse_map {G : Type* → Type*}
[applicative G] [is_comm_applicative G]
{α β γ : Type*}
(g : α → β) (h : β → G γ)
(x : multiset α) :
traverse h (map g x) =
traverse (h ∘ g) x :=
quotient.induction_on x
(by intro; simp [traverse];
rw [← traversable.traverse_map h g];
[ refl, apply_instance ])
lemma naturality {G H : Type* → Type*}
[applicative G] [applicative H]
[is_comm_applicative G] [is_comm_applicative H]
(eta : applicative_transformation G H)
{α β : Type*} (f : α → G β) (x : multiset α) :
eta (traverse f x) = traverse (@eta _ ∘ f) x :=
quotient.induction_on x
(by intro; simp [traverse,is_lawful_traversable.naturality] with functor_norm)
end multiset
|
The plain maskray inhabits the continental shelf of northern Australia from the Wellesley Islands in Queensland to the Bonaparte Archipelago in Western Australia, including the Gulf of Carpentaria and the Timor and Arafura Seas. There are unsubstantiated reports that its range extends to southern Papua New Guinea. It is the least common of the several maskray species native to the region. This species is a bottom-dweller that prefers habitats with fine sediment. It has been recorded from between 12 and 62 m (39 and 203 ft) deep, and tends to be found farther away from shore than other maskrays in its range.
|
#define BOOST_TEST_DYN_LINK
#include <canard/net/ofp/v13/queue_property/min_rate.hpp>
#include <boost/test/unit_test.hpp>
#include <cstdint>
#include <vector>
#include <canard/net/ofp/v13/io/openflow.hpp>
namespace of = canard::net::ofp;
namespace v13 = of::v13;
namespace queue_props = v13::queue_properties;
namespace protocol = v13::protocol;
namespace {
auto operator ""_bin(char const* const str, std::size_t const size)
-> std::vector<std::uint8_t>
{
return std::vector<std::uint8_t>(str, str + size);
}
struct min_rate_fixture
{
queue_props::min_rate sut{0x1234};
std::vector<std::uint8_t> binary
= "\x00\x01\x00\x10\x00\x00\x00\x00""\x12\x34\x00\x00\x00\x00\x00\x00"
""_bin
;
};
}
BOOST_AUTO_TEST_SUITE(queue_property_test)
BOOST_AUTO_TEST_SUITE(min_rate_test)
BOOST_AUTO_TEST_CASE(construct_test)
{
auto const rate = std::uint16_t{34};
auto const sut = queue_props::min_rate{rate};
BOOST_TEST(sut.property() == protocol::OFPQT_MIN_RATE);
BOOST_TEST(sut.length() == sizeof(protocol::ofp_queue_prop_min_rate));
BOOST_TEST(sut.rate() == rate);
}
BOOST_AUTO_TEST_SUITE(equality)
BOOST_AUTO_TEST_CASE(true_if_same_object)
{
auto const sut = queue_props::min_rate{1};
BOOST_TEST((sut == sut));
}
BOOST_AUTO_TEST_CASE(true_if_rate_is_equal)
{
BOOST_TEST((queue_props::min_rate{2} == queue_props::min_rate{2}));
}
BOOST_AUTO_TEST_CASE(is_true_if_both_rate_are_not_configurable)
{
BOOST_TEST(
(queue_props::min_rate{protocol::OFPQ_MIN_RATE_UNCFG}
== queue_props::min_rate{protocol::OFPQ_MIN_RATE_UNCFG}));
}
BOOST_AUTO_TEST_CASE(false_if_rate_is_not_equal)
{
BOOST_TEST((queue_props::min_rate{3} != queue_props::min_rate{4}));
}
BOOST_AUTO_TEST_CASE(is_false_if_rate_is_not_equal_and_both_are_over_1000)
{
BOOST_TEST(
(queue_props::min_rate{1001} != queue_props::min_rate{1002}));
}
BOOST_AUTO_TEST_CASE(is_false_if_both_rates_are_over_1000_but_one_is_uncfg)
{
BOOST_TEST(
(queue_props::min_rate{1001}
!= queue_props::min_rate{protocol::OFPQ_MIN_RATE_UNCFG}));
}
BOOST_AUTO_TEST_CASE(is_false_if_lhs_rate_is_over_1000)
{
BOOST_TEST(
(queue_props::min_rate{1000} != queue_props::min_rate{1001}));
}
BOOST_AUTO_TEST_CASE(is_false_if_rhs_rate_is_over_1000)
{
BOOST_TEST(
(queue_props::min_rate{1001} != queue_props::min_rate{1000}));
}
BOOST_AUTO_TEST_CASE(false_if_pad_is_not_equal)
{
auto const binary
= "\x00\x01\x00\x10\x00\x00\x00\x00""\x02\x34\x00\x00\x00\x00\x00\x01"
""_bin;
auto it = binary.begin();
auto const nonzero_pad
= queue_props::min_rate::decode(it, binary.end());
BOOST_TEST((queue_props::min_rate{0x234} != nonzero_pad));
}
BOOST_AUTO_TEST_SUITE_END() // equality
BOOST_AUTO_TEST_SUITE(function_equivalent)
BOOST_AUTO_TEST_CASE(true_if_same_object)
{
auto const sut = queue_props::min_rate{1};
BOOST_TEST(equivalent(sut, sut));
}
BOOST_AUTO_TEST_CASE(true_if_rate_is_equal)
{
BOOST_TEST(
equivalent(queue_props::min_rate{2}, queue_props::min_rate{2}));
}
BOOST_AUTO_TEST_CASE(is_true_if_both_rate_are_not_configurable)
{
BOOST_TEST(
equivalent(
queue_props::min_rate{protocol::OFPQ_MIN_RATE_UNCFG}
, queue_props::min_rate{protocol::OFPQ_MIN_RATE_UNCFG}));
}
BOOST_AUTO_TEST_CASE(false_if_rate_is_not_equal)
{
BOOST_TEST(
!equivalent(queue_props::min_rate{3}, queue_props::min_rate{4}));
}
BOOST_AUTO_TEST_CASE(is_true_if_rate_is_not_equal_but_both_are_over_1000)
{
BOOST_TEST(
equivalent(
queue_props::min_rate{1001}, queue_props::min_rate{1002}));
}
BOOST_AUTO_TEST_CASE(is_false_if_both_rates_are_over_1000_but_one_is_uncfg)
{
BOOST_TEST(
!equivalent(
queue_props::min_rate{1001}
, queue_props::min_rate{protocol::OFPQ_MIN_RATE_UNCFG}));
}
BOOST_AUTO_TEST_CASE(is_false_if_lhs_rate_is_over_1000)
{
BOOST_TEST(
!equivalent(
queue_props::min_rate{1000}, queue_props::min_rate{1001}));
}
BOOST_AUTO_TEST_CASE(is_false_if_rhs_rate_is_over_1000)
{
BOOST_TEST(
!equivalent(
queue_props::min_rate{1001}, queue_props::min_rate{1000}));
}
BOOST_AUTO_TEST_CASE(true_if_pad_is_not_equal)
{
auto const binary
= "\x00\x01\x00\x10\x00\x00\x00\x00""\x02\x34\x00\x00\x00\x00\x00\x01"
""_bin;
auto it = binary.begin();
auto const nonzero_pad
= queue_props::min_rate::decode(it, binary.end());
BOOST_TEST(equivalent(queue_props::min_rate{0x234}, nonzero_pad));
}
BOOST_AUTO_TEST_SUITE_END() // function_equivalent
BOOST_FIXTURE_TEST_CASE(encode_test, min_rate_fixture)
{
auto buffer = std::vector<std::uint8_t>{};
sut.encode(buffer);
BOOST_TEST(buffer.size() == sut.length());
BOOST_TEST(buffer == binary, boost::test_tools::per_element{});
}
BOOST_FIXTURE_TEST_CASE(decode_test, min_rate_fixture)
{
auto it = binary.begin();
auto const it_end = binary.end();
auto const min_rate = queue_props::min_rate::decode(it, it_end);
BOOST_TEST((it == it_end));
BOOST_TEST((min_rate == sut));
}
BOOST_AUTO_TEST_SUITE_END() // min_rate_test
BOOST_AUTO_TEST_SUITE_END() // queue_property_test
|
module Data.Nat.Literal where
open import Data.Nat using (ℕ; suc; zero)
open import Data.Fin using (Fin; suc; zero)
open import Data.Unit using (⊤)
open import Data.Empty using (⊥)
open import Agda.Builtin.FromNat using (Number; fromNat) public
_≤_ : ℕ → ℕ → Set
zero ≤ n = ⊤
suc m ≤ zero = ⊥
suc m ≤ suc n = m ≤ n
instance
ℕ-num : Number ℕ
ℕ-num .Number.Constraint _ = ⊤
ℕ-num .Number.fromNat n = n
instance
Fin-num : {n : ℕ} → Number (Fin (suc n))
Fin-num {n} .Number.Constraint m = m ≤ n
Fin-num {n} .Number.fromNat m ⦃ p ⦄ = from m n p where
from : (m n : ℕ) → m ≤ n → Fin (suc n)
from zero _ _ = zero
from (suc _) zero ()
from (suc m) (suc n) p = suc (from m n p)
|
#include <bunsan/application/application.hpp>
#include <bunsan/log/trivial.hpp>
#include <bunsan/runtime/demangle.hpp>
#include <boost/assert.hpp>
#include <iostream>
namespace bunsan::application {
application::application(const int argc, const char *const argv[])
: m_argc(argc),
m_argv(argv),
m_parser("Options"),
m_global_registry(global_registry::lock()) {
BOOST_ASSERT(m_argv);
BOOST_ASSERT(!m_argv[m_argc]);
}
boost::optional<std::string> application::executable() const {
if (m_argv[0]) return std::string(m_argv[0]);
return boost::none;
}
application::~application() {}
int application::exec() {
try {
initialize_argument_parser(m_parser);
const variables_map variables = m_parser.parse_command_line(m_argc, m_argv);
if (variables.count("help")) {
print_help();
return exit_success;
} else if (variables.count("version")) {
print_version();
return exit_success;
}
return main(variables);
} catch (boost::program_options::error &e) {
return argument_parser_error(e);
} catch (std::exception &e) {
BUNSAN_LOG_ERROR << "Error of type [" << runtime::type_name(e) << "]\n"
<< e.what();
return exit_failure;
}
}
int application::argument_parser_error(std::exception &e) {
BUNSAN_LOG_ERROR << "Argument parser error: " << e.what();
print_help();
return argument_parser_failure;
}
void application::print_help() {
const auto exe = executable();
if (m_name.empty())
std::cerr << "Usage\n";
else
std::cerr << m_name << " usage\n";
print_synopsis();
std::cerr << m_parser.help() << std::flush;
}
void application::print_synopsis() {}
void application::print_version() {}
void application::initialize_argument_parser(argument_parser &parser) {
parser.add_options()("help,h", "Print help")("version", "Print version");
}
} // namespace bunsan::application
|
(* -*- mode: coq; mode: visual-line -*- *)
(** * Theorems about Non-dependent function types *)
Require Import Basics.Overture Basics.PathGroupoids Basics.Decidable
Basics.Equivalences.
Require Import Types.Forall.
Local Open Scope path_scope.
Generalizable Variables A B C D f g n.
Definition arrow@{u u0} (A : Type@{u}) (B : Type@{u0}) := A -> B.
#[export] Instance IsReflexive_arrow : Reflexive arrow :=
fun _ => idmap.
#[export] Instance IsTransitive_arrow : Transitive arrow :=
fun _ _ _ f g => compose g f.
Section AssumeFunext.
Context `{Funext}.
(** ** Paths *)
(** As for dependent functions, paths [p : f = g] in a function type [A -> B] are equivalent to functions taking values in path types, [H : forall x:A, f x = g x], or concisely [H : f == g]. These are all given in the [Overture], but we can give them separate names for clarity in the non-dependent case. *)
Definition path_arrow {A B : Type} (f g : A -> B)
: (f == g) -> (f = g)
:= path_forall f g.
(** There are a number of combinations of dependent and non-dependent for [apD10_path_forall]; we list all of the combinations as helpful lemmas for rewriting. *)
Definition ap10_path_arrow {A B : Type} (f g : A -> B) (h : f == g)
: ap10 (path_arrow f g h) == h
:= apD10_path_forall f g h.
Definition apD10_path_arrow {A B : Type} (f g : A -> B) (h : f == g)
: apD10 (path_arrow f g h) == h
:= apD10_path_forall f g h.
Definition ap10_path_forall {A B : Type} (f g : A -> B) (h : f == g)
: ap10 (path_forall f g h) == h
:= apD10_path_forall f g h.
Definition eta_path_arrow {A B : Type} (f g : A -> B) (p : f = g)
: path_arrow f g (ap10 p) = p
:= eta_path_forall f g p.
Definition path_arrow_1 {A B : Type} (f : A -> B)
: (path_arrow f f (fun x => 1)) = 1
:= eta_path_arrow f f 1.
Definition equiv_ap10 {A B : Type} f g
: (f = g) <~> (f == g)
:= Build_Equiv _ _ (@ap10 A B f g) _.
Global Instance isequiv_path_arrow {A B : Type} (f g : A -> B)
: IsEquiv (path_arrow f g) | 0
:= isequiv_path_forall f g.
Definition equiv_path_arrow {A B : Type} (f g : A -> B)
: (f == g) <~> (f = g)
:= equiv_path_forall f g.
(** Function extensionality for two-variable functions *)
Definition equiv_path_arrow2 {X Y Z: Type} (f g : X -> Y -> Z)
: (forall x y, f x y = g x y) <~> f = g.
Proof.
refine (equiv_path_arrow _ _ oE _).
apply equiv_functor_forall_id; intro x.
apply equiv_path_arrow.
Defined.
Definition ap100_path_arrow2 {X Y Z : Type} {f g : X -> Y -> Z}
(h : forall x y, f x y = g x y) (x : X) (y : Y)
: ap100 (equiv_path_arrow2 f g h) x y = h x y.
Proof.
unfold ap100.
refine (ap (fun p => ap10 p y) _ @ _).
1: apply apD10_path_arrow.
cbn.
apply apD10_path_arrow.
Defined.
(** ** Path algebra *)
Definition path_arrow_pp {A B : Type} (f g h : A -> B)
(p : f == g) (q : g == h)
: path_arrow f h (fun x => p x @ q x) = path_arrow f g p @ path_arrow g h q
:= path_forall_pp f g h p q.
(** ** Transport *)
(** Transporting in non-dependent function types is somewhat simpler than in dependent ones. *)
Definition transport_arrow {A : Type} {B C : A -> Type}
{x1 x2 : A} (p : x1 = x2) (f : B x1 -> C x1) (y : B x2)
: (transport (fun x => B x -> C x) p f) y = p # (f (p^ # y)).
Proof.
destruct p; simpl; auto.
Defined.
Definition transport_arrow_toconst {A : Type} {B : A -> Type} {C : Type}
{x1 x2 : A} (p : x1 = x2) (f : B x1 -> C) (y : B x2)
: (transport (fun x => B x -> C) p f) y = f (p^ # y).
Proof.
destruct p; simpl; auto.
Defined.
Definition transport_arrow_fromconst {A B : Type} {C : A -> Type}
{x1 x2 : A} (p : x1 = x2) (f : B -> C x1) (y : B)
: (transport (fun x => B -> C x) p f) y = p # (f y).
Proof.
destruct p; simpl; auto.
Defined.
(** And some naturality and coherence for these laws. *)
Definition ap_transport_arrow_toconst {A : Type} {B : A -> Type} {C : Type}
{x1 x2 : A} (p : x1 = x2) (f : B x1 -> C) {y1 y2 : B x2} (q : y1 = y2)
: ap (transport (fun x => B x -> C) p f) q
@ transport_arrow_toconst p f y2
= transport_arrow_toconst p f y1
@ ap (fun y => f (p^ # y)) q.
Proof.
destruct p, q; reflexivity.
Defined.
(** ** Dependent paths *)
(** Usually, a dependent path over [p:x1=x2] in [P:A->Type] between [y1:P x1] and [y2:P x2] is a path [transport P p y1 = y2] in [P x2]. However, when [P] is a function space, these dependent paths have a more convenient description: rather than transporting the argument of [y1] forwards and backwards, we transport only forwards but on both sides of the equation, yielding a "naturality square". *)
Definition dpath_arrow
{A:Type} (B C : A -> Type) {x1 x2:A} (p:x1=x2)
(f : B x1 -> C x1) (g : B x2 -> C x2)
: (forall (y1:B x1), transport C p (f y1) = g (transport B p y1))
<~>
(transport (fun x => B x -> C x) p f = g).
Proof.
destruct p.
apply equiv_path_arrow.
Defined.
Definition ap10_dpath_arrow
{A:Type} (B C : A -> Type) {x1 x2:A} (p:x1=x2)
(f : B x1 -> C x1) (g : B x2 -> C x2)
(h : forall (y1:B x1), transport C p (f y1) = g (transport B p y1))
(u : B x1)
: ap10 (dpath_arrow B C p f g h) (p # u)
= transport_arrow p f (p # u)
@ ap (fun x => p # (f x)) (transport_Vp B p u)
@ h u.
Proof.
destruct p; simpl; unfold ap10.
exact (apD10_path_forall f g h u @ (concat_1p _)^).
Defined.
(** ** Maps on paths *)
(** The action of maps given by application. *)
Definition ap_apply_l {A B : Type} {x y : A -> B} (p : x = y) (z : A) :
ap (fun f => f z) p = ap10 p z
:= 1.
Definition ap_apply_Fl {A B C : Type} {x y : A} (p : x = y) (M : A -> B -> C) (z : B) :
ap (fun a => (M a) z) p = ap10 (ap M p) z
:= match p with 1 => 1 end.
Definition ap_apply_Fr {A B C : Type} {x y : A} (p : x = y) (z : B -> C) (N : A -> B) :
ap (fun a => z (N a)) p = ap01 z (ap N p)
:= (ap_compose N _ _).
Definition ap_apply_FlFr {A B C : Type} {x y : A} (p : x = y) (M : A -> B -> C) (N : A -> B) :
ap (fun a => (M a) (N a)) p = ap11 (ap M p) (ap N p)
:= match p with 1 => 1 end.
(** The action of maps given by lambda. *)
Definition ap_lambda {A B C : Type} {x y : A} (p : x = y) (M : A -> B -> C) :
ap (fun a b => M a b) p =
path_arrow _ _ (fun b => ap (fun a => M a b) p).
Proof.
destruct p;
symmetry;
simpl; apply path_arrow_1.
Defined.
(** ** Functorial action *)
Definition functor_arrow `(f : B -> A) `(g : C -> D)
: (A -> C) -> (B -> D)
:= @functor_forall A (fun _ => C) B (fun _ => D) f (fun _ => g).
Definition not_contrapositive `(f : B -> A)
: not A -> not B
:= functor_arrow f idmap.
Definition ap_functor_arrow `(f : B -> A) `(g : C -> D)
(h h' : A -> C) (p : h == h')
: ap (functor_arrow f g) (path_arrow _ _ p)
= path_arrow _ _ (fun b => ap g (p (f b)))
:= @ap_functor_forall _ A (fun _ => C) B (fun _ => D)
f (fun _ => g) h h' p.
(** ** Truncatedness: functions into an n-type is an n-type *)
Global Instance contr_arrow {A B : Type} `{Contr B}
: Contr (A -> B) | 100
:= contr_forall.
Global Instance istrunc_arrow {A B : Type} `{IsTrunc n B}
: IsTrunc n (A -> B) | 100
:= istrunc_forall.
(** ** Equivalences *)
Global Instance isequiv_functor_arrow `{IsEquiv B A f} `{IsEquiv C D g}
: IsEquiv (functor_arrow f g) | 1000
:= @isequiv_functor_forall _ A (fun _ => C) B (fun _ => D)
_ _ _ _.
Definition equiv_functor_arrow `{IsEquiv B A f} `{IsEquiv C D g}
: (A -> C) <~> (B -> D)
:= @equiv_functor_forall _ A (fun _ => C) B (fun _ => D)
f _ (fun _ => g) _.
Definition equiv_functor_arrow' `(f : B <~> A) `(g : C <~> D)
: (A -> C) <~> (B -> D)
:= @equiv_functor_forall' _ A (fun _ => C) B (fun _ => D)
f (fun _ => g).
(* We could do something like this notation, but it's not clear that it would be that useful, and might be confusing. *)
(* Notation "f -> g" := (equiv_functor_arrow' f g) : equiv_scope. *)
(** What remains is really identical to that in [Forall]. *)
End AssumeFunext.
(** ** Decidability *)
(** This doesn't require funext *)
Global Instance decidable_arrow {A B : Type}
`{Decidable A} `{Decidable B}
: Decidable (A -> B).
Proof.
destruct (dec B) as [x2|y2].
- exact (inl (fun _ => x2)).
- destruct (dec A) as [x1|y1].
+ apply inr; intros f.
exact (y2 (f x1)).
+ apply inl; intros x1.
elim (y1 x1).
Defined.
|
module Exercise_5_2_11
import Ch05.LambdaCalculusWithArith
%default total
-- The implementation below is the simple (and naive) one
||| Term representing a function that, given a list of Church numerals (encoded as in ex. 5.2.8),
||| will return their sum
listsum_naive : Term
listsum_naive = lambda 0
(\l => l . plus . church_zero)
-- The implementation below uses the fix-point combinator
||| Term representing a function that, given a list of Church numerals (encoded as in ex. 5.2.8),
||| will return their sum
listsum : Term
listsum = fix . g where
g : Term
g = lambda 0
(\f => (lambda 1
(\l => IfThenElse (realbool . (isnil . l))
church_zero
(plus . (head . l) . (f . (tail . l))))))
|
/-
Copyright (c) 2020 Scott Morrison. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Scott Morrison
-/
import Mathlib.PrePort
import Mathlib.Lean3Lib.init.default
import Mathlib.category_theory.limits.preserves.basic
import Mathlib.category_theory.limits.lattice
import Mathlib.PostPort
universes v l
namespace Mathlib
/-!
# The category of "pairwise intersections".
Given `ι : Type v`, we build the diagram category `pairwise ι`
with objects `single i` and `pair i j`, for `i j : ι`,
whose only non-identity morphisms are
`left : pair i j ⟶ single i` and `right : pair i j ⟶ single j`.
We use this later in describing (one formulation of) the sheaf condition.
Given any function `U : ι → α`, where `α` is some complete lattice (e.g. `(opens X)ᵒᵖ`),
we produce a functor `pairwise ι ⥤ α` in the obvious way,
and show that `supr U` provides a colimit cocone over this functor.
-/
namespace category_theory
/--
An inductive type representing either a single term of a type `ι`, or a pair of terms.
We use this as the objects of a category to describe the sheaf condition.
-/
inductive pairwise (ι : Type v) where
| single : ι → pairwise ι
| pair : ι → ι → pairwise ι
namespace pairwise
protected instance pairwise_inhabited {ι : Type v} [Inhabited ι] : Inhabited (pairwise ι) :=
{ default := single Inhabited.default }
/--
Morphisms in the category `pairwise ι`. The only non-identity morphisms are
`left i j : single i ⟶ pair i j` and `right i j : single j ⟶ pair i j`.
-/
inductive hom {ι : Type v} : pairwise ι → pairwise ι → Type v where
| id_single : (i : ι) → hom (single i) (single i)
| id_pair : (i j : ι) → hom (pair i j) (pair i j)
| left : (i j : ι) → hom (pair i j) (single i)
| right : (i j : ι) → hom (pair i j) (single j)
protected instance hom_inhabited {ι : Type v} [Inhabited ι] :
Inhabited (hom (single Inhabited.default) (single Inhabited.default)) :=
{ default := hom.id_single Inhabited.default }
/--
The identity morphism in `pairwise ι`.
-/
def id {ι : Type v} (o : pairwise ι) : hom o o := sorry
/-- Composition of morphisms in `pairwise ι`. -/
def comp {ι : Type v} {o₁ : pairwise ι} {o₂ : pairwise ι} {o₃ : pairwise ι} (f : hom o₁ o₂)
(g : hom o₂ o₃) : hom o₁ o₃ :=
sorry
protected instance category_theory.category {ι : Type v} : category (pairwise ι) := category.mk
/-- Auxiliary definition for `diagram`. -/
@[simp] def diagram_obj {ι : Type v} {α : Type v} (U : ι → α) [semilattice_inf α] :
pairwise ι → α :=
sorry
/-- Auxiliary definition for `diagram`. -/
@[simp] def diagram_map {ι : Type v} {α : Type v} (U : ι → α) [semilattice_inf α] {o₁ : pairwise ι}
{o₂ : pairwise ι} (f : o₁ ⟶ o₂) : diagram_obj U o₁ ⟶ diagram_obj U o₂ :=
sorry
/--
Given a function `U : ι → α` for `[semilattice_inf α]`, we obtain a functor `pairwise ι ⥤ α`,
sending `single i` to `U i` and `pair i j` to `U i ⊓ U j`,
and the morphisms to the obvious inequalities.
-/
def diagram {ι : Type v} {α : Type v} (U : ι → α) [semilattice_inf α] : pairwise ι ⥤ α :=
functor.mk (diagram_obj U) fun (X Y : pairwise ι) (f : X ⟶ Y) => diagram_map U f
-- `complete_lattice` is not really needed, as we only ever use `inf`,
-- but the appropriate structure has not been defined.
/-- Auxiliary definition for `cocone`. -/
def cocone_ι_app {ι : Type v} {α : Type v} (U : ι → α) [complete_lattice α] (o : pairwise ι) :
diagram_obj U o ⟶ supr U :=
sorry
/--
Given a function `U : ι → α` for `[complete_lattice α]`,
`supr U` provides a cocone over `diagram U`.
-/
@[simp] theorem cocone_X {ι : Type v} {α : Type v} (U : ι → α) [complete_lattice α] :
limits.cocone.X (cocone U) = supr U :=
Eq.refl (limits.cocone.X (cocone U))
/--
Given a function `U : ι → α` for `[complete_lattice α]`,
`infi U` provides a limit cone over `diagram U`.
-/
def cocone_is_colimit {ι : Type v} {α : Type v} (U : ι → α) [complete_lattice α] :
limits.is_colimit (cocone U) :=
limits.is_colimit.mk fun (s : limits.cocone (diagram U)) => hom_of_le sorry
end Mathlib |
(* [10], Fig. 1: performance evaluation for the COMPSAC'17 paper *)
Require Export parser.
Require Export refine.
(** Security of the single-component Transaction Service *)
Definition TS: IA := parse
"IA [s1,s2,s3,s4,s5,s6,s7] (s1) [acceptT, endT, newT, startM, endM] [startT, logM] []
[s1->(acceptT)s1, s1->(newT)s2, s2->(startT)s3, s3->(endT)s1, s1->(startM)s4,
s4->(endM)s1, s4->(newT)s5, s5->(startT)s6, s6->(endT)s7, s7->(logM)s4]".
(*SIR-GNNI: true*)
Definition TS_hid: IA :=
IAutomaton.hiding TS (ASet.GenActs [&"startM",&"endM",&"logM"]).
Definition TS_res: IA :=
IAutomaton.hiding
( IAutomaton.restriction TS (ASet.GenActs [&"startM",&"endM"]) )
(ASet.GenActs [&"logM"]).
Eval compute in :> TS_hid.
Eval compute in :> TS_res.
Eval compute in SIRGNNI_refinement TS_res TS_hid.
(*SME-NI: true *)
Definition TS_rep: IA :=
IAutomaton.hiding
( IAutomaton.replacement TS (&"tau") (ASet.GenActs [&"startM",&"endM"]) )
(ASet.GenActs [&"logM"]).
Eval compute in :> TS_rep.
Eval compute in SMENI_refinement TS_rep TS_hid.
(** Security of the Supervisor *)
Definition SV: IA := parse
"IA [u1,u2,u3,u4,u5] (u1) [mOn, logM, logF] [startM, endM] []
[u1->(mOn)u2, u1->(logF)u1, u2->(startM)u3, u3->(logM)u3, u3->(logF)u4,
u4->(logM)u5, u5->(endM)u1]".
(*SIR-GNNI: true *)
Definition SV_hid: IA :=
IAutomaton.hiding SV (ASet.GenActs [&"mOn",&"logM",&"startM",&"endM"]).
Definition SV_res: IA :=
IAutomaton.hiding
( IAutomaton.restriction SV (ASet.GenActs [&"mOn",&"logM"]) )
(ASet.GenActs [&"endM",&"startM"]).
Eval compute in :> SV_hid.
Eval compute in :> SV_res.
Eval compute in SIRGNNI_refinement SV_res SV_hid.
(*SME-NI: true *)
Definition SV_rep: IA :=
IAutomaton.hiding
( IAutomaton.replacement SV (&"tau") (ASet.GenActs [&"mOn", &"logM"]) )
(ASet.GenActs [&"startM",&"endM"]).
Eval compute in :> SV_rep.
Eval compute in SMENI_refinement SV_rep SV_hid.
(** Security of the composed system *)
Definition TPU: IA := parse
"IA [t1,t2,t3,t4] (t1) [startT] [nOk,ok,logF,endT] []
[t1->(startT)t2, t2->(nOk)t4, t4->(logF)t3, t2->(ok)t3,t3->(endT)t1]".
Definition compIA: IA := IAutomaton.composition TS (IAutomaton.composition TPU SV).
Eval compute in :> compIA.
(*SIR-GNNI: true *)
Definition compIA_hid: IA :=
IAutomaton.hiding compIA (ASet.GenActs [&"mOn"]).
Definition compIA_res: IA :=
IAutomaton.restriction compIA (ASet.GenActs [&"mOn"]).
Eval compute in SIRGNNI_refinement compIA_res compIA_hid.
|
[STATEMENT]
lemma check_v_g_weakening:
fixes e::e and \<Gamma>'::\<Gamma>
assumes "\<Theta>; \<B> ; \<Gamma> \<turnstile> v \<Leftarrow> \<tau>" and "toSet \<Gamma> \<subseteq> toSet \<Gamma>'" and "\<Theta> ; \<B> \<turnstile>\<^sub>w\<^sub>f \<Gamma>'"
shows "\<Theta>; \<B> ; \<Gamma>' \<turnstile> v \<Leftarrow> \<tau>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<Theta> ; \<B> ; \<Gamma>' \<turnstile> v \<Leftarrow> \<tau>
[PROOF STEP]
using subtype_weakening infer_v_g_weakening check_v_elims check_v_subtypeI assms
[PROOF STATE]
proof (prove)
using this:
\<lbrakk>?\<Theta> ; ?\<B> ; ?\<Gamma> \<turnstile> ?\<tau>1.0 \<lesssim> ?\<tau>2.0; toSet ?\<Gamma> \<subseteq> toSet ?\<Gamma>'; ?\<Theta> ; ?\<B> \<turnstile>\<^sub>w\<^sub>f ?\<Gamma>' \<rbrakk> \<Longrightarrow> ?\<Theta> ; ?\<B> ; ?\<Gamma>' \<turnstile> ?\<tau>1.0 \<lesssim> ?\<tau>2.0
\<lbrakk> ?\<Theta> ; ?\<B> ; ?\<Gamma> \<turnstile> ?v \<Rightarrow> ?\<tau>; toSet ?\<Gamma> \<subseteq> toSet ?\<Gamma>'; ?\<Theta> ; ?\<B> \<turnstile>\<^sub>w\<^sub>f ?\<Gamma>' \<rbrakk> \<Longrightarrow> ?\<Theta> ; ?\<B> ; ?\<Gamma>' \<turnstile> ?v \<Rightarrow> ?\<tau>
\<lbrakk>?\<Theta> ; ?\<B> ; ?\<Gamma> \<turnstile> ?v \<Leftarrow> ?\<tau>; \<And>\<tau>1. \<lbrakk>?\<Theta> ; ?\<B> ; ?\<Gamma> \<turnstile> \<tau>1 \<lesssim> ?\<tau>; ?\<Theta> ; ?\<B> ; ?\<Gamma> \<turnstile> ?v \<Rightarrow> \<tau>1\<rbrakk> \<Longrightarrow> ?P\<rbrakk> \<Longrightarrow> ?P
\<lbrakk>?\<Theta> ; ?\<B> ; ?\<Gamma> \<turnstile> ?\<tau>1.0 \<lesssim> ?\<tau>2.0; ?\<Theta> ; ?\<B> ; ?\<Gamma> \<turnstile> ?v \<Rightarrow> ?\<tau>1.0\<rbrakk> \<Longrightarrow> ?\<Theta> ; ?\<B> ; ?\<Gamma> \<turnstile> ?v \<Leftarrow> ?\<tau>2.0
\<Theta> ; \<B> ; \<Gamma> \<turnstile> v \<Leftarrow> \<tau>
toSet \<Gamma> \<subseteq> toSet \<Gamma>'
\<Theta> ; \<B> \<turnstile>\<^sub>w\<^sub>f \<Gamma>'
goal (1 subgoal):
1. \<Theta> ; \<B> ; \<Gamma>' \<turnstile> v \<Leftarrow> \<tau>
[PROOF STEP]
by metis |
{-
This second-order signature was created from the following second-order syntax description:
syntax CommMonoid | CM
type
* : 0-ary
term
unit : * | ε
add : * * -> * | _⊕_ l20
theory
(εU⊕ᴸ) a |> add (unit, a) = a
(εU⊕ᴿ) a |> add (a, unit) = a
(⊕A) a b c |> add (add(a, b), c) = add (a, add(b, c))
(⊕C) a b |> add(a, b) = add(b, a)
-}
module CommMonoid.Signature where
open import SOAS.Context
open import SOAS.Common
open import SOAS.Syntax.Signature *T public
open import SOAS.Syntax.Build *T public
-- Operator symbols
data CMₒ : Set where
unitₒ addₒ : CMₒ
-- Term signature
CM:Sig : Signature CMₒ
CM:Sig = sig λ
{ unitₒ → ⟼₀ *
; addₒ → (⊢₀ *) , (⊢₀ *) ⟼₂ *
}
open Signature CM:Sig public
|
(* Property from Case-Analysis for Rippling and Inductive Proof,
Moa Johansson, Lucas Dixon and Alan Bundy, ITP 2010.
This Isabelle theory is produced using the TIP tool offered at the following website:
https://github.com/tip-org/tools
This file was originally provided as part of TIP benchmark at the following website:
https://github.com/tip-org/benchmarks
Yutaka Nagashima at CIIRC, CTU changed the TIP output theory file slightly
to make it compatible with Isabelle2017.*)
theory TIP_prop_32
imports "../../Test_Base"
begin
datatype Nat = Z | S "Nat"
fun min :: "Nat => Nat => Nat" where
"min (Z) y = Z"
| "min (S z) (Z) = Z"
| "min (S z) (S y1) = S (min z y1)"
theorem property0 :
"((min a b) = (min b a))"
oops
end
|
// Copyright (C) 2018 Thejaka Amila Kanewala, Marcin Zalewski, Andrew Lumsdaine.
// Boost Software License - Version 1.0 - August 17th, 2003
// Permission is hereby granted, free of charge, to any person or organization
// obtaining a copy of the software and accompanying documentation covered by
// this license (the "Software") to use, reproduce, display, distribute,
// execute, and transmit the Software, and to prepare derivative works of the
// Software, and to permit third-parties to whom the Software is furnished to
// do so, all subject to the following:
// The copyright notices in the Software and this entire statement, including
// the above license grant, this restriction and the following disclaimer,
// must be included in all copies of the Software, in whole or in part, and
// all derivative works of the Software, unless such copies or derivative
// works are solely in the form of machine-executable object code generated by
// a source language processor.
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
// SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
// FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
// ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
// Authors: Thejaka Kanewala
// Andrew Lumsdaine
#ifndef BOOST_GRAPH_KLA_SSSP_THREAD_HPP
#define BOOST_GRAPH_KLA_SSSP_THREAD_HPP
#ifndef BOOST_GRAPH_USE_MPI
#error "Parallel BGL files should not be included unless <boost/graph/use_mpi.hpp> has been included"
#endif
#include <am++/counter_coalesced_message_type.hpp>
#include <am++/detail/thread_support.hpp>
#include <boost/parallel/append_buffer.hpp>
#include <boost/graph/graph_traits.hpp>
#include <boost/property_map/property_map.hpp>
#include <boost/graph/iteration_macros.hpp>
#include <boost/graph/parallel/algorithm.hpp> // for all_reduce
#include <boost/graph/parallel/thread_support.hpp> // for compare_and_swap
#include <algorithm> // for std::min, std::max
#include <boost/format.hpp>
#include <iostream>
#include <atomic>
#include "boost/tuple/tuple.hpp"
//#include <tuple>
namespace boost { namespace graph { namespace distributed {
template<typename Graph,
typename DistanceMap,
typename EdgeWeightMap,
typename WorkStats,
typename PriorityQueueGenerator = thread_priority_queue_gen,
typename MessageGenerator =
amplusplus::simple_generator<amplusplus::counter_coalesced_message_type_gen> >
class kla_shortest_paths_thread {
  typedef kla_shortest_paths_thread<Graph, DistanceMap, EdgeWeightMap, WorkStats,
                                    PriorityQueueGenerator, MessageGenerator>
          self_type;
typedef typename boost::property_map<Graph, vertex_owner_t>::const_type OwnerMap;
typedef typename graph_traits<Graph>::vertex_descriptor Vertex;
typedef typename graph_traits<Graph>::degree_size_type Degree;
typedef typename property_traits<EdgeWeightMap>::value_type Dist;
typedef std::pair<Vertex, std::pair<Dist, size_t> > vertex_distance_data;
struct default_comparer {
bool operator()(const vertex_distance_data& vd1, const vertex_distance_data& vd2) const {
return vd1.second.first > vd2.second.first;
}
};
typedef typename PriorityQueueGenerator::template queue<vertex_distance_data,
default_comparer>::type Bucket;
typedef typename std::vector<Bucket*>::size_type BucketIndex;
struct vertex_distance_handler;
struct minimum_pair_first
{
template<typename T>
const T& operator()(const T& x, const T& y) const { return x.first < y.first ? x : y; }
template<typename F>
struct result {
typedef typename boost::function_traits<F>::arg1_type type;
};
};
typedef typename MessageGenerator::template call_result<vertex_distance_data, vertex_distance_handler, owner_from_pair<OwnerMap, vertex_distance_data>, amplusplus::idempotent_combination_t<minimum_pair_first > >::type
RelaxMessage;
public:
kla_shortest_paths_thread(Graph& g,
DistanceMap distance,
EdgeWeightMap weight,
amplusplus::transport &t,
size_t k_level,
int offs,
WorkStats& stats,
int freq,
MessageGenerator message_gen =
MessageGenerator(amplusplus::counter_coalesced_message_type_gen(1 << 12)))
: dummy_first_member_for_init_order((amplusplus::register_mpi_datatype<vertex_distance_data>(), 0)),
g(g), transport(t),
distance(distance),
weight(weight),
owner(get(vertex_owner, g)),
level_sync(false),
current_level(0),
buckets_processed(0),
current_bucket(0),
k_level(k_level),
core_offset(offs),
work_stats(stats),
relax_msg(message_gen, transport, owner_from_pair<OwnerMap, vertex_distance_data>(owner),
amplusplus::idempotent_combination(minimum_pair_first())),
flushFrequency(freq)
{
initialize();
}
#ifdef AMPLUSPLUS_PRINT_HIT_RATES
typedef std::pair<unsigned long long, unsigned long long> cache_stats;
cache_stats
get_cache_stats() {
cache_stats stats(0, 0);
for(size_t i = 0; i < relax_msg.counters.hits.size(); ++i) {
stats.first += relax_msg.counters.hits[i];
stats.second += relax_msg.counters.tests[i];
}
return stats;
}
#endif // AMPLUSPLUS_PRINT_HIT_RATES
void set_source(Vertex s) { source = s; } // for threaded execution
void set_level_sync() { level_sync = true; } // force level-synchronized exploration
void operator() (int tid) { run(source, tid); }
void run(Vertex s, int tid = 0);
void initialize_threaded_buckets(int tid);
void allocate_pqs();
BucketIndex get_num_levels() { return buckets_processed; }
time_type get_start_time() { return start_time; }
protected:
void initialize();
// Relax the edge (u, v), creating a new best path of distance x.
void relax(const vertex_distance_data& data);
void find_next_bucket();
const int dummy_first_member_for_init_order; // Unused
const Graph& g;
amplusplus::transport& transport;
DistanceMap distance;
EdgeWeightMap weight;
OwnerMap owner;
Vertex source;
bool level_sync;
size_t k_level;
int core_offset;
WorkStats& work_stats;
size_t k_current_level;
  // Bucket data structure. The ith bucket contains all local vertices pending
  // for the ith processing phase.
std::vector<shared_ptr<Bucket> > buckets;
// Bucket to hold vertices deleted at each level
shared_ptr<Bucket> deleted_vertices;
BucketIndex current_level; // How many buckets have we processed?
BucketIndex buckets_processed; // Stats tracking
BucketIndex num_buckets;
// Shared thread state to make sure we're all on the same page
BucketIndex current_bucket;
shared_ptr<amplusplus::detail::barrier> t_bar;
RelaxMessage relax_msg;
time_type start_time;
int flushFrequency;
};
#define KLA_SHORTEST_PATHS_THREAD_PARMS                                   \
    typename Graph, typename DistanceMap, typename EdgeWeightMap, typename WorkStats, typename PriorityQueueGenerator, typename MessageGenerator
#define KLA_SHORTEST_PATHS_THREAD_TYPE                                   \
    kla_shortest_paths_thread<Graph, DistanceMap, EdgeWeightMap, WorkStats, PriorityQueueGenerator, MessageGenerator>
template<KLA_SHORTEST_PATHS_THREAD_PARMS>
void
KLA_SHORTEST_PATHS_THREAD_TYPE::initialize()
{
int nthreads = transport.get_nthreads();
relax_msg.set_handler(vertex_distance_handler(*this));
// Setup distance map
distance.set_consistency_model(0);
set_property_map_role(vertex_distance, distance);
// Set the currently processing k-level
k_current_level = k_level;
// Initialize buckets data structure
//
// Extra bucket is so we don't try to insert into the bucket we're processing
// when the index wraps
num_buckets = 2;
// Declare bucket data structure and index variable
buckets.resize(num_buckets);
for (BucketIndex i = 0 ; i < buckets.size() ; ++i) {
shared_ptr<Bucket> p(new Bucket(nthreads));
buckets[i].swap(p);
}
// Initialize distance labels
BGL_FORALL_VERTICES_T(v, g, Graph) {
put(distance, v, (std::numeric_limits<Dist>::max)());
}
}
template<KLA_SHORTEST_PATHS_THREAD_PARMS>
void
KLA_SHORTEST_PATHS_THREAD_TYPE::run(Vertex s, int tid)
{
int doFlushCounter = 0;
//int debugwait = 1;
//if(transport.rank()==0)
//if(tid==0)
//while (debugwait) ;
int count_epoch = 0 ;
AMPLUSPLUS_WITH_THREAD_ID(tid) {
int nthreads = transport.get_nthreads();
if (tid == 0)
t_bar.reset(new amplusplus::detail::barrier(nthreads));
// This barrier acts as a temporary barrier until we can be sure t_bar is initialized
{ amplusplus::scoped_epoch epoch(transport); }
t_bar->wait();
// if two processes are running on the same node, core_offset
// is important to achieve thread affinity
if (pin(tid+core_offset) != 0) {
std::cerr << "[ERROR] Unable to pin current thread to "
<< "core : " << tid << std::endl;
assert(false);
}
// wait till all threads are pinned
t_bar->wait();
{ amplusplus::scoped_epoch epoch(transport); }
validate_thread_core_relation();
start_time = get_time();
// Push the source onto the bucket
{
amplusplus::scoped_epoch epoch(transport);
if (get(owner, s) == transport.rank() && tid == 0) {
relax(vertex_distance_data(s,std::make_pair(0,0)));
}
}
t_bar->wait();
while(current_bucket != (std::numeric_limits<BucketIndex>::max)()) {
unsigned long all_process_bucket_empty = 1;
unsigned long p_bucket_full = 0;
count_epoch = 0;
// process current bucket
while(all_process_bucket_empty != 0) {
// wait till all threads reach here
// before checking whether queue is empty we need to make sure
// none of the threads are pushing elements to the queue.
// Therefore make sure all threads reach here before checking whether queue is
// empty
t_bar->wait();
if (!buckets[current_bucket]->empty(tid))
p_bucket_full = 1;
else
p_bucket_full = 0;
{
amplusplus::scoped_epoch_value epoch(transport,
p_bucket_full,
all_process_bucket_empty);
vertex_distance_data vd;
while(buckets[current_bucket]->pop(vd, tid)) {
assert(all_process_bucket_empty != 0);
Vertex v = vd.first;
Dist dist = vd.second.first;
size_t v_level = vd.second.second;
Dist dv = get(distance, v);
//if we already have a better distance in the distance map
if (dv < dist) {
continue;
}
//Otherwise we got a better distance. Relax edges
BGL_FORALL_OUTEDGES_T(v, e, g, Graph) {
Vertex u = target(e, g);
Dist we = get(weight, e);
vertex_distance_data new_vd(u, std::make_pair(dist + we, v_level+1));
relax_msg.send(new_vd);
}
doFlushCounter++;
if(doFlushCounter == flushFrequency) {
doFlushCounter = 0;
transport.get_scheduler().run_one();
}
}
} // end of epoch
count_epoch += 1;
} // end of while(all_process_bucket_empty != 0)
assert(buckets[current_bucket]->empty(tid));
// If all processes are done with the current bucket:
// 1. clear the current bucket
// 2. increase current level
// 3. find the next bucket to work on
buckets[current_bucket]->clear(tid);
t_bar->wait();
#ifdef PRINT_DEBUG
std::cerr << tid << "@" << transport.rank() << ": Current bucket " << current_bucket
<< " size " << buckets[current_bucket]->size() << std::endl;
#endif
assert(buckets[current_bucket]->size(tid) == 0);
// find next bucket with work and update current_level
BucketIndex old_bucket = current_bucket;
t_bar->wait();
if (tid == 0) {
find_next_bucket();
if (current_bucket != (std::numeric_limits<BucketIndex>::max)()) {
current_level += current_bucket - old_bucket;
++buckets_processed;
}
// update k_current_level
k_current_level += k_level;
}
t_bar->wait();
} // end of current_bucket != (std::numeric_limits<BucketIndex>::max)()
}
}
template<KLA_SHORTEST_PATHS_THREAD_PARMS>
void
KLA_SHORTEST_PATHS_THREAD_TYPE::relax(const vertex_distance_data& data)
{
#ifdef PBGL2_PRINT_WORK_STATS
work_stats.increment_edges(amplusplus::detail::get_thread_id());
#endif
Vertex v = data.first;
Dist d = data.second.first;
size_t k_vertex = data.second.second;
using boost::parallel::val_compare_and_swap;
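  // Lock-free relaxation: keep retrying the compare-and-swap until either this
  // thread installs the smaller distance d, or some other thread has already
  // installed a distance <= d (at which point the loop condition fails).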
Dist old_dist = get(distance, v), last_old_dist;
while (d < old_dist) {
last_old_dist = old_dist;
old_dist = val_compare_and_swap(&distance[v], old_dist, d);
if (last_old_dist == old_dist) {
#ifdef PBGL2_PRINT_WORK_STATS
int tid = amplusplus::detail::get_thread_id();
if(old_dist < std::numeric_limits<Dist>::max()) {
work_stats.increment_invalidated(tid);
} else {
work_stats.increment_useful(tid);
}
#endif
// We got a better distance. If current
// k-level is less than or equal to vertex k-level
// straight a way relax it
if (k_vertex <= k_current_level) {
	assert((k_current_level - k_level) < k_vertex && k_vertex <= k_current_level);
BGL_FORALL_OUTEDGES_T(v, e, g, Graph) {
Vertex u = target(e, g);
Dist we = get(weight, e);
vertex_distance_data new_vd(u, std::make_pair(d + we, k_vertex+1));
relax_msg.send(new_vd);
}
} else { // otherwise we put vertex to next bucket
int tid = amplusplus::detail::get_thread_id();
buckets[(current_bucket+1)%2]->put(data, tid);
}
	return; // successful relaxation; don't fall through to the rejected-work counter
}
}
#ifdef PBGL2_PRINT_WORK_STATS
work_stats.increment_rejected(amplusplus::detail::get_thread_id());
#endif
return;
}
template<KLA_SHORTEST_PATHS_THREAD_PARMS>
void
KLA_SHORTEST_PATHS_THREAD_TYPE::find_next_bucket()
{
using boost::parallel::all_reduce;
using boost::parallel::minimum;
BucketIndex old_bucket = current_bucket;
BucketIndex max_bucket = (std::numeric_limits<BucketIndex>::max)();
current_bucket = (current_bucket + 1) % buckets.size();
while (current_bucket != old_bucket && buckets[current_bucket]->empty())
current_bucket = (current_bucket + 1) % buckets.size();
if (current_bucket == old_bucket)
current_bucket = max_bucket;
// If we wrapped, project index past end of buckets to use min()
if (current_bucket < old_bucket) current_bucket += buckets.size();
all_reduce<BucketIndex, minimum<BucketIndex> > r(transport, minimum<BucketIndex>());
current_bucket = r(current_bucket);
// Map index back into range of buckets
if (current_bucket != max_bucket)
current_bucket %= buckets.size();
}
template<KLA_SHORTEST_PATHS_THREAD_PARMS>
struct KLA_SHORTEST_PATHS_THREAD_TYPE::
vertex_distance_handler {
vertex_distance_handler() : self(NULL) {}
vertex_distance_handler(kla_shortest_paths_thread& self) : self(&self) {}
void operator() (const vertex_distance_data& data) const {
self->relax(data);
}
protected:
kla_shortest_paths_thread* self;
};
} } } // end namespace boost::graph::distributed
#endif // BOOST_GRAPH_KLA_SHORTEST_PATHS_THREAD_HPP
|
[STATEMENT]
lemma a_3:
"a(x) * a(y) * d(x \<squnion> y) = bot"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. a x * a y * d (x \<squnion> y) = bot
[PROOF STEP]
by (metis a_complement a_dist_sup d_def) |
using MPI
function runmpi(tests, file)
MPI.Initialized() && !MPI.Finalized() &&
error("runmpi does not work if MPI has been "*
"Initialized but not Finalizd")
# The code below was modified from the MPI.jl file runtests.jl
#
# Code coverage command line options; must correspond to src/julia.h
# and src/ui/repl.c
JL_LOG_NONE = 0
JL_LOG_USER = 1
JL_LOG_ALL = 2
coverage_opts = Dict{Int, String}(JL_LOG_NONE => "none",
JL_LOG_USER => "user",
JL_LOG_ALL => "all")
coverage_opt = coverage_opts[Base.JLOptions().code_coverage]
testdir = dirname(file)
if haskey(ENV, "SLURM_JOB_ID")
oversubscribe = `--oversubscribe`
else
oversubscribe = ``
end
for (n, f) in tests
cmd = `mpiexec $oversubscribe -n $n $(Base.julia_cmd()) --startup-file=no --project=$(Base.active_project()) --code-coverage=$coverage_opt $(joinpath(testdir, f))`
@info "Running MPI test..." n f cmd
# Running this way prevents:
# Balance Law Solver | No tests
# since external tests are not returned as passed/fail
@time @test (run(cmd); true)
end
end
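
# A minimal usage sketch (hypothetical test file names; `tests` is any iterable
# of (nprocs, filename) pairs, with filenames resolved relative to `file`):
#
#   tests = [(1, "test_serial.jl"), (4, "test_parallel.jl")]
#   runmpi(tests, @__FILE__)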
|
function MRSIStruct = setSpectralWidth(MRSIStruct, value)
MRSIStruct.spectralWidth = value;
end
|
module ROC
using Infinity
using Missings # to use Missings.nonmissingtype
using RecipesBase # for creating a Plots.jl recipe
export ROCData,
roc,
AUC,
PPV,
cutoffs
include("rocdata.jl")
include("roc_main.jl")
include("rocplot.jl")
end # module
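
# A hypothetical usage sketch (argument order assumed; the actual signature of
# `roc` is defined in rocdata.jl / roc_main.jl):
#
#   data = roc(scores, labels)  # classifier scores and ground-truth labels
#   AUC(data)                   # area under the ROC curve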
|
SUBROUTINE STRBS1 (IOPT)
C
C PHASE ONE FOR STRESS RECOVERY
C
C IOPT = 0 (BASIC BENDING TRIANGLE)
C IOPT = 1 (SUB-CALCULATIONS FOR SQDPL1)
C IOPT = 2 (SUB-CALCULATIONS FOR STRPL1)
C
C CALLS FROM THIS ROUTINE ARE MADE TO
C
C MAT - MATERIAL DATA ROUTINE
C TRANSS - SINGLE PRECISION TRANSFORMATION SUPPLIER
C INVERS - SINGLE PRECISION INVERSE ROUTINE
C GMMATS - SINGLE PRECISION MATRIX MULTIPLY AND TRANSPOSE
C MESAGE - ERROR MESSAGE WRITER
C
LOGICAL STRAIN
INTEGER SUBSCA,SUBSCB
REAL KS,J2X2,ST(3)
DIMENSION D(9),G2X2(4),J2X2(4),S(258),ECPT(25),G(9),HIC(18),
1 HIB(18),TITE(18),T(9),KS(30),HINV(36)
C
C COMMON /SDR2X3/ ESTWDS(32),ESTAWD(32),NGPS(32),STRSWD(32),
C 1 FORCWD(32)
C THE ABOVE COMMON BLOCK IS NOT USED IN THIS ROUTINE DIRECTLY.
C
C ESTWDS(I) = THE NUMBER OF WORDS IN THE EST INPUT BLOCK FOR
C THE I-TH ELEMENT TYPE.
C ESTAWD(I) = THE NUMBER OF WORDS COMPUTED IN PHASE-I FOR THE
C I-TH ELEMENT TYPE, AND INSERTED INTO THE ESTA ARRAY
C OF THE LABELED BLOCK SDR2X5
C NGPS (I) = THE NUMBER OF GRID POINTS ASSOCIATED WITH THE I-TH
C ELEMENT TYPE.
C STRSWD(I) = THE NUMBER OF WORDS COMPUTED IN PHASE-II FOR
C THE I-TH ELEMENT TYPE, AND INSERTED INTO THE ESTA
C ARRAY OF THE LABELED BLOCK SDR2X5.
C FORCWD(I) = THE NUMBER OF WORDS COMPUTED IN PHASE-II FOR
C THE I-TH ELEMENT TYPE, AND INSERTED INTO THE FORCES
C ARRAY OF THE LABELED BLOCK SDR2X5.
C
COMMON /BLANK / IDUMMY(10), STRAIN
COMMON /CONDAS/ CONSTS(5)
COMMON /MATIN / MATID,INFLAG,ELTEMP,STRESS,SINTH,COSTH
COMMON /MATOUT/ G11,G12,G13,G22,G23,G33,RHO,ALPHA1,ALPHA2,ALP12,
1 T SUB 0,G SUB E,SIGTEN,SIGCOM,SIGSHE,G2X211,
2 G2X212,G2X222
COMMON /SDR2X6/ A(225),XSUBB,XSUBC,YSUBC,E(18),TEMP,XBAR,AREA,
1 XCSQ,YBAR2,YCSQ,YBAR,XBSQ,PX2,XCYC,PY2,PXY2,XBAR3,
2 YBAR3,DETERM,PROD9(9),TEMP9(9),NSIZED,DUMDUM(4),
3 NPIVOT,THETA ,NSUBC,ISING,SUBSCA,SUBSCB,NERROR,
4 NBEGIN,NTYPED,XC,YC,YC2,YC3,ISUB,XC3,DUM55(1)
COMMON /SDR2X5/ NECPT(1),NGRID(3),ANGLE,MATID1,EYE,MATID2,T2,FMU,
1 Z11,Z22,DUMMY1,X1,Y1,Z1,DUMMY2,X2,Y2,Z2,DUMMY3,
2 X3,Y3,Z3,DUMB(76),PH1OUT(100),FORVEC(25)
EQUIVALENCE (CONSTS(4),DEGRA),(D(1),G(1),A(79)),
1 (ECPT(1),NECPT(1)),(KS(1),PH1OUT(1)),
2 (G2X2(1),A(88)),(S(1),A(55)),(TITE(1),A(127)),
3 (J2X2(1),A(92)),(T(1),A(118)),(HIB(1),A(109)),
4 (HIC(1),A(127)),(HINV(1),A(73)),(ST(1),PH1OUT(99))
C
C ECPT LIST FOR BASIC BENDING TRIANGLE NAME IN
C THIS
C ECPT ROUTINE TYPE
C -------- ----------------------------------- -------- -------
C ECPT( 1) = ELEMENT ID NECPT(1) INTEGER
C ECPT( 2) = GRID POINT A NGRID(1) INTEGER
C ECPT( 3) = GRID POINT B NGRID(2) INTEGER
C ECPT( 4) = GRID POINT C NGRID(3) INTEGER
C ECPT( 5) = THETA = ANGLE OF MATERIAL ANGLE REAL
C ECPT( 6) = MATERIAL ID 1 MATID1 INTEGER
C ECPT( 7) = I = MOMENT OF INERTIA EYE REAL
C ECPT( 8) = MATERIAL ID 2 MATID2 INTEGER
C ECPT( 9) = T2 T2 REAL
C ECPT(10) = NON-STRUCTURAL-MASS FMU REAL
C ECPT(11) = Z1 Z11 REAL
C ECPT(12) = Z2 Z22 REAL
C ECPT(13) = COORD. SYSTEM ID 1 NECPT(13) INTEGER
C ECPT(14) = X1 X1 REAL
C ECPT(15) = Y1 Y1 REAL
C ECPT(16) = Z1 Z1 REAL
C ECPT(17) = COORD. SYSTEM ID 2 NECPT(17) INTEGER
C ECPT(18) = X2 X2 REAL
C ECPT(19) = Y2 Y2 REAL
C ECPT(20) = Z2 Z2 REAL
C ECPT(21) = COORD. SYSTEM ID 3 NECPT(21) INTEGER
C ECPT(22) = X3 X3 REAL
C ECPT(23) = Y3 Y3 REAL
C ECPT(24) = Z3 Z3 REAL
C ECPT(25) = ELEMENT TEMPERATURE ELTEMP REAL
C
IF (IOPT .GT. 0) GO TO 30
ELTEMP = ECPT(25)
C
C SET UP I, J, K VECTORS STORING AS FOLLOWS AND ALSO CALCULATE
C X-SUB-B, X-SUB-C, AND Y-SUB-C.
C
C E(11), E(14), E(17) WILL BE THE I-VECTOR.
C E(12), E(15), E(18) WILL BE THE J-VECTOR.
C E( 1), E( 4), E( 7) WILL BE THE K-VECTOR.
C
C     FIND I-VECTOR = RSUBB - RSUBA  (NON-NORMALIZED)
C
E(11) = X2 - X1
E(14) = Y2 - Y1
E(17) = Z2 - Z1
C
C FIND LENGTH = X-SUB-B COOR. IN ELEMENT SYSTEM
C
XSUBB = SQRT(E(11)**2 + E(14)**2 + E(17)**2)
IF (XSUBB .GT. 1.0E-06) GO TO 10
CALL MESAGE (-30,37,ECPT(1))
C
C NORMALIZE I-VECTOR WITH X-SUB-B
C
10 E(11) = E(11)/XSUBB
E(14) = E(14)/XSUBB
E(17) = E(17)/XSUBB
C
C TAKE RSUBC - RSUBA AND STORE TEMPORARILY IN E(2), E(5), E(8)
C
E(2) = X3 - X1
E(5) = Y3 - Y1
E(8) = Z3 - Z1
C
C X-SUB-C = I . (RSUBC - RSUBA), THUS
C
XSUBC = E(11)*E(2) + E(14)*E(5) + E(17)*E(8)
C
C CROSSING I-VECTOR TO (RSUBC - RSUBA) GIVES THE K-VECTOR
C (NON-NORMALIZED)
C
E(1) = E(14)*E( 8) - E( 5)*E(17)
E(4) = E( 2)*E(17) - E(11)*E( 8)
E(7) = E(11)*E( 5) - E( 2)*E(14)
C
C FIND LENGTH = Y-SUB-C COOR. IN ELEMENT SYSTEM
C
YSUBC = SQRT(E(1)**2 + E(4)**2 + E(7)**2)
IF (YSUBC .GT. 1.0E-06) GO TO 20
CALL MESAGE (-30,37,ECPT(1))
C
C NORMALIZE K-VECTOR WITH Y-SUB-C
C
20 E(1) = E(1)/YSUBC
E(4) = E(4)/YSUBC
E(7) = E(7)/YSUBC
C
C NOW HAVING I AND K VECTORS GET -- J = K CROSS I
C
E(12) = E( 4)*E(17) - E(14)*E( 7)
E(15) = E(11)*E( 7) - E( 1)*E(17)
E(18) = E( 1)*E(14) - E(11)*E( 4)
C
C NORMALIZE J-VECTOR FOR COMPUTER EXACTNESS JUST TO MAKE SURE
C
TEMP = SQRT(E(12)**2 + E(15)**2 + E(18)**2)
E(12) = E(12)/TEMP
E(15) = E(15)/TEMP
E(18) = E(18)/TEMP
E( 2) = 0.0
E( 3) = 0.0
E( 5) = 0.0
E( 6) = 0.0
E( 8) = 0.0
E( 9) = 0.0
E(10) = 0.0
E(13) = 0.0
E(16) = 0.0
C
C CONVERT ANGLE FROM DEGREES TO RADIANS STORING IN THETA.
C
THETA = ANGLE*DEGRA
SINTH = SIN(THETA)
COSTH = COS(THETA)
IF (ABS(SINTH) .LT. 1.0E-06) SINTH = 0.0
C
C SETTING UP G MATRIX
C
30 MATID = MATID1
INFLAG = 2
CALL MAT (ECPT(1))
C
C FILL G-MATRIX WITH OUTPUT FROM MAT ROUTINE
C
G(1) = G11
G(2) = G12
G(3) = G13
G(4) = G12
G(5) = G22
G(6) = G23
G(7) = G13
G(8) = G23
G(9) = G33
C
C COMPUTATION OF D = I.G-MATRIX (EYE IS INPUT FROM THE ECPT)
C
DO 50 I = 1,9
50 D(I) = G(I)*EYE
IF (IOPT .EQ. 0) CALL GMMATS (D,3,3,0, ALPHA1,3,1,0, ST(1))
C
XBAR =(XSUBB + XSUBC)/3.0
YBAR = YSUBC/3.0
IF (IOPT .GT. 0) GO TO 60
XC = XBAR
YC = YBAR
C
C FORMING K 5X6 AND STORING TEMPORARILY IN PH1OUT OUTPUT SPACE.
C S (EQUIVALENCED)
C
60 XC3 = 3.0*XC
YC3 = 3.0*YC
YC2 = 2.0*YC
IF (STRAIN) GO TO 63
KS( 1) = D(1)
KS( 2) = D(3)
KS( 3) = D(2)
KS( 4) = D(1)*XC3
KS( 5) = D(2)*XC + D(3)*YC2
KS( 6) = D(2)*YC3
KS( 7) = D(2)
KS( 8) = D(6)
KS( 9) = D(5)
KS(10) = D(2)*XC3
KS(11) = D(5)*XC + D(6)*YC2
KS(12) = D(5)*YC3
KS(13) = D(3)
KS(14) = D(9)
KS(15) = D(6)
KS(16) = D(3)*XC3
KS(17) = D(6)*XC + D(9)*YC2
KS(18) = D(6)*YC3
C
C ROWS 4 AND 5
C
KS(19) = 0.0
KS(20) = 0.0
KS(21) = 0.0
KS(22) =-D(1)*6.0
KS(23) =-D(2)*2.0 - D(9)*4.0
KS(24) =-D(6)*6.0
KS(25) = 0.0
KS(26) = 0.0
KS(27) = 0.0
KS(28) =-D(3)*6.0
KS(29) =-D(6)*6.0
KS(30) =-D(5)*6.0
GO TO 67
63 CONTINUE
DO 65 I = 1,30
KS(I) = 0.0
65 CONTINUE
KS( 1) = 1.0
KS( 4) = XC3
KS( 9) = 1.0
KS(11) = XC
KS(12) = YC3
KS(14) = 0.5
KS(17) = YC
67 CONTINUE
C
C MULTIPLY FIRST 3 ROWS BY 2.0
C
DO 70 I = 1,18
70 KS(I) = KS(I)*2.0
C
XCSQ = XSUBC**2
YCSQ = YSUBC**2
XBSQ = XSUBB**2
XCYC = XSUBC*YSUBC
C
C     FILL (HBAR) MATRIX STORING AT A(37) THRU A(72)
C
DO 90 I = 37,72
90 A(I) = 0.0
C
A(37) = XBSQ
A(40) = XBSQ*XSUBB
A(44) = XSUBB
A(49) =-2.0*XSUBB
A(52) =-3.0*XBSQ
A(55) = XCSQ
A(56) = XCYC
A(57) = YCSQ
A(58) = XCSQ*XSUBC
A(59) = YCSQ*XSUBC
A(60) = YCSQ*YSUBC
A(62) = XSUBC
A(63) = YSUBC*2.
A(65) = XCYC*2.0
A(66) = YCSQ*3.0
A(67) =-2.0*XSUBC
A(68) =-YSUBC
A(70) =-3.0*XCSQ
A(71) =-YCSQ
C
IF (T2 .EQ. 0.0) GO TO 110
C
C     ALL OF THE FOLLOWING OPERATIONS THROUGH STATEMENT LABEL 110
C ARE NECESSARY IF T2 IS NON-ZERO.
C
C GET THE G2X2 MATRIX
C
MATID = MATID2
INFLAG = 3
CALL MAT (ECPT(1))
IF (G2X211.EQ.0.0 .AND. G2X212.EQ.0.0 .AND. G2X222.EQ.0.0)
1 GO TO 110
G2X2(1) = G2X211*T2
G2X2(2) = G2X212*T2
G2X2(3) = G2X212*T2
G2X2(4) = G2X222*T2
C
DETERM = G2X2(1)*G2X2(4) - G2X2(3)*G2X2(2)
J2X2(1) = G2X2(4)/DETERM
J2X2(2) =-G2X2(2)/DETERM
J2X2(3) =-G2X2(3)/DETERM
J2X2(4) = G2X2(1)/DETERM
C
C (H ) IS PARTITIONED INTO A LEFT AND RIGHT PORTION AND ONLY THE
C YQ RIGHT PORTION IS COMPUTED AND USED AS A (2X3). THE LEFT
C 2X3 PORTION IS NULL. THE RIGHT PORTION WILL BE STORED AT
C A(73)...A(78) UNTIL NOT NEEDED ANY FURTHER.
C
TEMP = 2.0*D(2) + 4.0*D(9)
A(73) = -6.0*(J2X2(1)*D(1) + J2X2(2)*D(3))
A(74) = -J2X2(1)*TEMP + 6.0*J2X2(2)*D(6)
A(75) = -6.0*(J2X2(1)*D(6) + J2X2(2)*D(5))
A(76) = -6.0*(J2X2(2)*D(1) + J2X2(4)*D(3))
A(77) = -J2X2(2)*TEMP + 6.0*J2X2(4)*D(6)
A(78) = -6.0*(J2X2(2)*D(6) + J2X2(4)*D(5))
C
C THE ABOVE 6 ELEMENTS NOW REPRESENT THE (H ) MATRIX (2X3)
C YQ
C
C ADD TO 6 OF THE (HBAR) ELEMENTS THE RESULT OF(H )(H )
C UY YQ
C THE PRODUCT IS FORMED DIRECTLY IN THE ADDITION PROCESS BELOW.
C NO (H ) MATRIX IS ACTUALLY COMPUTED DIRECTLY.
C UY
C
C THE FOLLOWING IS THEN PER STEPS 6 AND 7 PAGE -16- MS-17.
C
DO 100 I = 1,3
A(I+39) = A(I+39) + XSUBB*A(I+72)
100 A(I+57) = A(I+57) + XSUBC*A(I+72) + YSUBC*A(I+75)
C
C THIS ENDS ADDED COMPUTATION FOR CASE OF T2 NOT ZERO
C
110 CONTINUE
C
C AT THIS POINT INVERT (H) WHICH IS STORED AT A(37) THRU A(72)
C STORE INVERSE BACK IN A(37) THRU A(72)
C NO NEED TO COMPUTE DETERMINANT SINCE IT IS NOT USED SUBSEQUENTLY.
C
ISING = -1
CALL INVERS (6,A(37),6,A(73),0,DETERM,ISING,A(79))
C
C CHECK TO SEE IF H WAS SINGULAR
C
IF (ISING .NE. 2) GO TO 120
C
C
C ISING = 2 IMPLIES SINGULAR MATRIX THUS ERROR CONDITION.
C
CALL MESAGE (-30,38,ECPT(1))
C
C SAVE H-INVERSE IF TRI-PLATE IS CALLING
C
120 DO 130 I = 1,36
130 HINV(I) = A(I+36)
C
C FILL S-MATRIX, EQUIVALENCED TO A(55). (6X3)
C
S( 1) = 1.0
S( 2) = 0.0
S( 3) =-XSUBB
S( 4) = 0.0
S( 5) = 1.0
S( 6) = 0.0
S( 7) = 0.0
S( 8) = 0.0
S( 9) = 1.0
S(10) = 1.0
S(11) = YSUBC
S(12) =-XSUBC
S(13) = 0.0
S(14) = 1.0
S(15) = 0.0
S(16) = 0.0
S(17) = 0.0
S(18) = 1.0
C
C COMPUTE S , S , AND S NO TRANSFORMATIONS
C A B C
C
C -1
C S = - K H S , S = K H , S = K H
C A S B S IB C S IC
C
C S COMPUTATION.
C A
C
CALL GMMATS (HINV(1),6,6,0, S(1),6,3,0, A(16))
C
C DIVIDE H-INVERSE INTO A LEFT 6X3 AND RIGHT 6X3 PARTITION.
C
I = 0
J =-6
150 J = J + 6
K = 0
160 K = K + 1
I = I + 1
ISUB = J + K
HIB(I) = HINV(ISUB )
HIC(I) = HINV(ISUB + 3)
IF (K .LT. 3 ) GO TO 160
IF (J .LT. 30) GO TO 150
C
CALL GMMATS (KS(1),5,6,0, A(16),6,3,0, A(1))
C
C MULTIPLY S SUB A BY (-1)
C
DO 170 I = 1,15
170 A(I) = -A(I)
C
C S COMPUTATION
C B
C
CALL GMMATS (KS,5,6,0, HIB,6,3,0, A(16))
C
C S COMPUTATION
C C
C
CALL GMMATS (KS,5,6,0, HIC,6,3,0, A(31))
C
C RETURN IF TRI OR QUAD PLATE ROUTINE IS CALLING.
C
IF (IOPT .GT. 0) RETURN
C T
C TRANSFORM S , S , S WITH E T , I = A,B,C
C A B C I
C T T
C COMPUTING TRANSPOSE OF E T = T E
C I I
DO 200 I = 1,3
C
C POINTER TO S MATRIX = 15*I - 14
C I
C POINTER TO OUTPUT POSITION = 30*I - 21
C
C CHECK TO SEE IF T IS NEEDED.
C
IF (NECPT(4*I+9)) 180,190,180
180 CALL TRANSS (NECPT(4*I+9),T)
CALL GMMATS (T,3,3,1, E( 1),3,3,0, TITE( 1))
CALL GMMATS (T,3,3,1, E(10),3,3,0, TITE(10))
CALL GMMATS (A(15*I-14),5,3,0, TITE,6,3,1, PH1OUT(30*I-21))
GO TO 200
190 CALL GMMATS (A(15*I-14),5,3,0, E,6,3,1, PH1OUT(30*I-21))
200 CONTINUE
PH1OUT(1) = ECPT(1)
PH1OUT(2) = ECPT(2)
PH1OUT(3) = ECPT(3)
PH1OUT(4) = ECPT(4)
C
C PH1OUT(5) IS A DUMMY
C
PH1OUT(6) = ECPT( 7)
PH1OUT(7) = ECPT(11)
PH1OUT(8) = ECPT(12)
C
C PHASE I BASIC BENDING TRIANGLE SDR2 COMPLETE
C
RETURN
END
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Jul 20 13:08:20 2019
@author: liuhongbing
"""
import numpy as np
import tensorflow as tf
from data_pregross import getPoetryList,DataSet,load_model
class lstm_poerty_train():
def __init__(self):
self.batch_size = 64
self.rnn_size = 100
self.embedding_size = 300
self.num_layers = 2
self.model = 'lstm'
self.word_size = 6109
    # Define the RNN model
def neural_network(self, xs):
if self.model == 'rnn':
cell_fun = tf.contrib.rnn.BasicRNNCell
elif self.model == 'gru':
cell_fun = tf.contrib.rnn.GRUCell
elif self.model == 'lstm':
cell_fun = tf.contrib.rnn.BasicLSTMCell
cell1 = cell_fun(self.rnn_size, state_is_tuple=True)
cell2 = cell_fun(self.rnn_size, state_is_tuple=True)
cell = tf.contrib.rnn.MultiRNNCell([cell1, cell2], state_is_tuple=True)
initial_state = cell.zero_state(self.batch_size, tf.float32)
with tf.variable_scope('rnnlm'):
softmax_w = tf.get_variable("softmax_w", [self.rnn_size, self.word_size])
softmax_b = tf.get_variable("softmax_b", [self.word_size])
with tf.device("/cpu:0"):
embedding = tf.get_variable("embedding", [self.word_size, self.embedding_size])
inputs = tf.nn.embedding_lookup(embedding, xs)
outputs, last_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state, scope='rnnlm')
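        # outputs has shape [batch_size, time_steps, rnn_size]; flatten batch and
        # time so one [rnn_size, word_size] projection yields per-step logits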
output = tf.reshape(outputs,[-1, self.rnn_size])
logits = tf.matmul(output, softmax_w) + softmax_b
probs = tf.nn.softmax(logits)
return logits, last_state, probs, cell, initial_state
    # Training
def train_neural_network(self, xs,ys):
logits, last_state, _, _, _ = self.neural_network(xs)
targets = tf.reshape(ys, [-1])
loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
[logits], [targets], [tf.ones_like(targets, dtype=tf.float32)], self.word_size)
cost = tf.reduce_mean(loss)
global_step = tf.train.get_or_create_global_step()
        learning_rate = tf.train.exponential_decay(
            learning_rate=0.002, global_step=global_step, decay_steps=1000, decay_rate=0.97, staircase=True)
        optimizer = tf.train.AdamOptimizer(learning_rate)
        train_op = optimizer.minimize(loss, global_step=global_step)
        tf.summary.scalar('lr', learning_rate)
        # summarize the scalar mean loss; `loss` itself is per-example, not a scalar
        tf.summary.scalar("cost", cost)
        tf.summary.scalar("global_step", global_step)
summaries = tf.summary.merge_all()
return loss, cost, train_op,global_step,summaries,last_state
if __name__=='__main__':
tf.reset_default_graph()
root = "/Users/liuhongbing/Documents/tensorflow/data/poetry"
batch_size = 64
    poetry_vector, word_num_map, _ = getPoetryList(root)
trainds = DataSet(len(poetry_vector),poetry_vector,word_num_map)
n_chunk = len(poetry_vector) // batch_size -1
input_data = tf.placeholder(tf.int32, [batch_size, None])
output_targets = tf.placeholder(tf.int32, [batch_size, None])
lpt = lstm_poerty_train()
loss, cost, train_op,global_step,summaries,last_state = lpt.train_neural_network(input_data,output_targets)
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
saver = tf.train.Saver(tf.all_variables())
last_epoch = load_model(sess, saver, root+'/model/')
print(last_epoch)
for epoch in range(last_epoch + 1,100):
print(f"iter {epoch}........................")
all_loss = 0.0
for batche in range(n_chunk):
x,y = trainds.next_batch(batch_size)
train_loss, _ , _ = sess.run([cost, last_state, train_op], feed_dict={input_data: x, output_targets: y})
all_loss = all_loss + train_loss
if batche % 50 == 1:
#print(epoch, batche, 0.01,train_loss)
print(epoch, '\t', batche, '\t', 0.002 * (0.97 ** epoch),'\t',train_loss)
if batche % 300 == 1:
saver.save(sess, root+'/model/poetry.module', global_step=epoch)
print (epoch,' Loss: ', all_loss * 1.0 / n_chunk)
|