% !TeX root = article.tex
\section{Description of plasticity in the framework of physics engines}
In this section, key concepts related to the introduced model are explained. The main differences between
traditional structural analysis and physics-engine-based approaches are reviewed and discussed.
The velocity-based formulation of constraint-based rigid body simulation
is commonly used by physics-based game
developers and film production teams.
%\citet[p.~45]{erleben.thesis}
\cite{erleben.thesis}
provides the reasoning and theoretical details behind the popularity of the
velocity-based formulation in constraint-based rigid body simulation over the acceleration-based one.
The main reason is that collision handling can be done without additional procedures.
Work presented by
\cite{erleben.thesis} provides the basis for the velocity-based formulation discussed in this work.
%\citet[p.~45-50]{erleben.thesis}.
% pdf page 64
In the following section, these formulations will be clarified by a simple example using \cbullet\ implementation.
Impulse $\vec{J}$
in the time interval $\Delta t $ can be written as:
\begin{equation} \label{eq:impulseIntegral}
\vec{J} = \int_{0}^{\Delta t} \vec{f}_{true}(t) dt,
\end{equation}
where $\vec{f}_{true}(t)$ is the force.
Using Newton's second law of motion $\vec{F}=m\vec{a}$,
the velocity $\vec{v}^{\,\Delta t}$ can be solved as:
\begin{equation} \label{eq:impulseIntegraWithNewton}
\int_{0}^{\Delta t} m \frac{d\vec{v}}{dt}dt= \int_{0}^{\Delta t} \vec{f}_{true}(t) dt
\end{equation}
\begin{equation} \label{eq:impulse}
m(\vec{v}^{\, \Delta t} - \vec{v}^{\, 0})=\vec{J},
\end{equation}
where superscripts denote time, i.e. ${\vec{v}}^{\Delta t}=\vec{v}(\Delta t)$.
The next position can be found
by integrating the velocity.
Updates after each step can be summarized for locations and
for velocities respectively as follows:
\begin{equation} \label{eq:eomL} % pdf page 69
\vec{s}^{\, t+\Delta t} = \vec{s}^{\, t}+\Delta t S \vec{u}^{\, t+\Delta t}
\end{equation}
\begin{equation} \label{eq:eomV}
\vec{u}^{\, t+\Delta t} = \vec{u}^{\, t}+\Delta t M^{-1}(C N \vec{f}^{\ t+\Delta t} + \vec{f}_{ext}) .
\end{equation}
The symbols used in Equations \ref{eq:eomL} and \ref{eq:eomV}
are summarized in Table \ref{tab:eom}.
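As a sketch, Equations \ref{eq:eomL} and \ref{eq:eomV} amount to a semi-implicit Euler step: velocities are updated first, and the new velocities are then used to advance positions. The following minimal Python illustration covers a single body under an external force only; the constraint term $C N \vec{f}$ is omitted and all names are illustrative:

```python
# Semi-implicit (symplectic) Euler step as used by velocity-based engines:
# the velocity is updated first, then the position update uses the NEW velocity.
# Single body, external force only; the constraint term C N f is omitted.
def step(pos, vel, mass, f_ext, dt):
    vel = vel + dt * (f_ext / mass)   # velocity update without constraint forces
    pos = pos + dt * vel              # position update uses the updated velocity
    return pos, vel

pos, vel = 0.0, 0.0
for _ in range(100):                  # simulate 1 s with dt = 0.01 s
    pos, vel = step(pos, vel, mass=2.0, f_ext=4.0, dt=0.01)
print(vel)  # constant acceleration a = 2 m/s^2 for 1 s gives v close to 2.0
```

Using the new velocity in the position update is what makes the scheme stable enough for game timesteps; using the old velocity would give explicit Euler.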
Figure \ref{fig:eom-contact} describes the collision of two
bodies, a rectangular body $B_1$ and a triangular body $B_2$,
where $\vec{r}_i$ is position of body $i$,
$\vec{p}_k $ is position of contact point $k$,
$\vec{r}_{ik} $ is a vector between the center of gravity of body $i$ and contact point $k$,
and
$\vec{n}_{k}$ is the contact normal for contact point $k$.
It is a common convention that the contact normal points
from the body with the smaller index to the body with the larger index, \cite{erleben.thesis}.
In the case of point or edge contacts, averaging the normals of the neighboring polygons can be used, \cite{Hahn:1998}.
\begin{figure}[tb!]
\centering
\begin{tikzpicture}
\coordinate (O1) at(2,1);
\coordinate (O2) at(2,4);
\coordinate (C) at(2,2);
\draw (0,0) -- (4,0) -- (4,2) -- (0,2) --(0,0);
\draw (2,2) -- (3.5,5) -- (0.5,5) -- (2,2) ;
\node at (3.5,0.4) {$B_1$};
\filldraw (O1) circle (0.5mm) node[anchor=north] {$\vec{r}_1$};
\node at (2.8,4.5) {$B_2$};
\filldraw (O2) circle (0.5mm) node[anchor=south] {$\vec{r}_2$};
\filldraw (C) circle (0.5mm) node[anchor=north west] {$\vec{p}_1$};
\draw[-{Stealth[length=3mm]}] (O1) -- (C) node[anchor=north east] {$\vec{r}_{11}$};
\draw[-{Stealth[length=3mm]}] (O2) -- (C) node[anchor=south east] {$\vec{r}_{21}$};
\draw[-latex,thick] (C) -- ++(0,1.4) node[anchor=west] {$\vec{n}_{1}$};
\node[anchor=west] at (4.5,4.5) {
$\vec{r}_i$ = position of body $i$
};
\node[anchor=west] at (4.5,4) {$\vec{p}_k $ = position of contact point $k$};
\node[anchor=west] at (4.5,3.5) {$\vec{r}_{ik} $ = $\vec{p}_k - \vec{r}_i $};
\node[anchor=west] at (4.5,3) {$\vec{n}_{k} $ = normal for contact point $k$};
\end{tikzpicture}
\caption{Illustration of nomenclature for equations of motion for collision.}
\label{fig:eom-contact}
\end{figure}
% pdf page 33, notation in typical ODEs
\begin{table}
\tbl{Nomenclature for equations of motion}{
\begin{tabular}{|l| l|}
\hline
{\bf Symbol} & {\bf Description} \\ \hline
$\vec{r}_i$ & position of center of mass for body $i$ \\ \hline
$\vec{q}_i$ & orientation for body $i$ as quaternion $\lbrack s_i, x_i, y_i, z_i \rbrack ^T $ \\
\hline
$\vec{p}_k$ & contact or joint point $k$ \\ \hline
$\vec{r}_{ik}$ & $\vec{p}_k - \vec{r}_i$ \\ \hline
$\vec{s}$ & $\lbrack \vec{r}_1, \vec{q}_1,...,\vec{r}_n, \vec{q}_n \rbrack ^T $\\ \hline
$Q_i$ & \begin{tabular}{@{}c}
rotation of quaternion $\vec{q}_i$
as matrix \\ where
$\dot{\vec{q}}_i = \frac{1}{2}\vec{\omega}_i \vec{q}_i=Q_i \vec{\omega}_i$
\end{tabular}
$
\frac{1}{2} \left[ \begin{array}{ccc}
-x_i & -y_i & -z_i \\
s_i & z_i & -y_i \\
-z_i & s_i & x_i \\
y_i & -x_i & s_i
\end{array} \right]
$
\\ \hline
$S$ &
\begin{tabular}{@{}c}
generalized transformation matrix \\
$ S \in \mathbb{R}^{7n \times 6n}$
\end{tabular}
$ \left[ \begin{array}{ccccc}
1 & & & & 0 \\
& Q_1 \\
& & \ddots \\
& & & 1 \\
0 & & & & Q_n
\end{array} \right]
$
\\ \hline
$\vec{v}_i$ & linear velocity of center of mass for body $i$ \\ \hline
$\vec{\omega}_i$ & angular velocity of center of mass for body $i$ \\ \hline
$\vec{u}$ & $\lbrack \vec{v}_1, \vec{\omega}_1,...,\vec{v}_n, \vec{\omega}_n \rbrack ^T $\\ \hline
$M$ &
\begin{tabular}{@{}c}
generalized mass matrix \\
$ M \in \mathbb{R}^{6n \times 6n}$
\end{tabular}
$
\left[ \begin{array}{ccccc}
m_1 1 & & & & 0 \\
& I_1 \\
& & \ddots \\
& & & m_n 1 \\
0 & & & & I_n
\end{array} \right]
$
\\ \hline
$I_i$ & inertia tensor for body $i$ \\ \hline
$C$ & contact condition matrix $ C \in \mathbb{R}^{6n \times 3K}$ \\ \hline
$N$ & contact normal matrix $ N \in \mathbb{R}^{3K \times K}$ \\ \hline
\end {tabular}}
\label{tab:eom}
\end{table}
Friction in contacts and joint constraints can be handled in a unified way by refactoring
Equation \ref{eq:eomV} as,
\cite{erleben.thesis}
%\citet[p.~66-67]{erleben.thesis}
\begin{equation} \label{eq:eomV2}
\vec{u}^{\, t+\Delta t} = \vec{u}^{\, t}+\Delta t M^{-1}(
J_{contact}^T \vec{\lambda}_{contact}
+ J_{joint}^T \vec{\lambda}_{joint}
+ \vec{f}_{ext}),
\end{equation}
where the Jacobian terms $J_{joint}^T$ for joints are
derived by taking time derivatives of the kinematic constraints.
The symbols used in Equation \ref{eq:eomV2} are summarized in Table
\ref{tab:eom-g} and Figure \ref{fig:eom-joint},
where $\vec{r}_{anc}^{\,i}$ defines the point at which the
joint constraint is applied relative to body $i$.
\begin{figure}
\centering
\begin{tikzpicture}
\coordinate (O1) at(1,1);
\coordinate (O2) at(1,3);
\coordinate (C) at(1,2);
\draw (0,0) -- (2,0) -- (2,2) -- (0,2) --(0,0);
\draw (0,2) -- (2,2) -- (2,4) -- (0,4) --(0,2) ;
\node at (0.5,0.4) {$B_1$};
\filldraw (O1) circle (0.5mm) node[anchor=north] {$\vec{r}_1$};
\node at (0.5,3.5) {$B_2$};
\filldraw (O2) circle (0.5mm) node[anchor=south] {$\vec{r}_2$};
\draw[-{Stealth[length=3mm]}] (O1) -- (C) node[anchor=north east] {$\vec{r}_{anc}^{\,1}$};
\draw[-{Stealth[length=3mm]}] (O2) -- (C) node[anchor=south east] {$\vec{r}_{anc}^{\,2}$};
\node[anchor=west] at (4.5,3.5) {
$\vec{r}_i$ = position of body $i$
};
\node[anchor=west] at (4.5,3) {$\vec{r}_{anc}^{\,i} $ = body frame vector $i$};
\end{tikzpicture}
\caption{Illustration of nomenclature for equations of motion for joint.}
\label{fig:eom-joint}
\end{figure}
% pdf page 33, notation in typical ODEs
\begin{table}
\tbl{Additional terms for generalized equations of motion}{
\begin{tabular}{|l| l|}
\hline
{\bf Symbol} & {\bf Description} \\ \hline
$J_{contact}$ & Jacobian matrix for contacts \\ \hline
$\lambda_{contact}$ & vector of Lagrange multipliers for contacts \\ \hline
$J_{joint}$ & Jacobian matrix for joints \\ \hline
$\lambda_{joint}$ & vector of Lagrange multipliers for joints \\ \hline
\end {tabular}}
\label{tab:eom-g}
\end {table}
Constraint processing in \cbullet\ is based on ODE, \cite{ode}.
Joints are also discussed in detail in
\cite{erleben.thesis}.
%\citet[p.~60-90]{erleben.thesis}.
Equations \ref{eq:constraintEquation}, \ref{eq:lambdaLow} and
\ref{eq:lambdaHigh}
are created for each constraint.
The terms in Equation \ref{eq:constraintEquation}
can be derived from the positions and orientations of the connected bodies;
e.g., for a ball joint the formulation is based on both joint points having the same position.
For contacts, the formulation is easier when done using velocities, \cite{ode.joints}.
\begin{equation} \label{eq:constraintEquation}
J_1 \vec{v}_1 + \Omega_1 \vec{\omega}_1 +
J_2 \vec{v}_2 + \Omega_2 \vec{\omega}_2 = \vec{c} + C \vec{\lambda}
\end{equation}
\begin{equation} \label{eq:lambdaLow}
\vec{\lambda} \geq \vec{l}
\end{equation}
\begin{equation} \label{eq:lambdaHigh}
\vec{\lambda} \leq \vec{h}
\end{equation}
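Equations \ref{eq:constraintEquation}--\ref{eq:lambdaHigh} are typically solved with a projected iterative method that clamps each multiplier to its bounds. The following Python sketch (illustrative only, not the \cbullet\ implementation) shows a projected Gauss-Seidel iteration on a small system:

```python
# Projected Gauss-Seidel: solve A*lam = b subject to lo <= lam <= hi.
# The clamping step is how solvers enforce the limits l and h on lambda.
def projected_gauss_seidel(A, b, lo, hi, iters=50):
    n = len(b)
    lam = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # residual excluding the i-th unknown, then solve and clamp
            r = b[i] - sum(A[i][j] * lam[j] for j in range(n) if j != i)
            lam[i] = max(lo[i], min(hi[i], r / A[i][i]))
    return lam

A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 7.0]
# the unconstrained solution is about [1.82, 1.73]; cap the first multiplier at 1.0
lam = projected_gauss_seidel(A, b, lo=[0.0, 0.0], hi=[1.0, 10.0])
print(lam)  # converges to [1.0, 2.0]
```

With the first multiplier saturated at its upper limit, the second one adjusts to satisfy its own row, which is exactly the behavior that later models plastic capacity limits.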
In the following section, these equations will be explained by a simple example.
The main parameters and corresponding fields in \cbullet\
are given in Table \ref{tab:constraintParameters}.
\begin {table}
\tbl {Constraint parameters}{
\begin{tabular}{|c| l| l|}
\hline
{\bf Parameter} & {\bf Description} & {\bf btConstraintInfo2 pointer}\\ \hline
$J_1, \Omega_1$ & Jacobian & m\_J1linearAxis, m\_J1angularAxis \\
$J_2, \Omega_2$ & & m\_J2linearAxis, m\_J2angularAxis \\ \hline
$\vec{v}$ & linear velocity & \\ \hline
$\vec{\omega}$ & angular velocity & \\ \hline
$\vec{c}$ & right side vector & m\_constraintError \\ \hline
$C$ & constraint force mixing & cfm \\ \hline
$\vec{\lambda}$ & constraint force & \\ \hline
$\vec{l}$ & low limit for constraint force & m\_lowerLimit \\ \hline
$\vec{h}$ & high limit for constraint force & m\_upperLimit \\ \hline
\end {tabular}}
\label{tab:constraintParameters}
\end {table}
In structural analysis, a formulation and an associated numerical solution procedure are selected
based on the needed features.
Often, the finite element method is used.
In most cases, a static solution assuming a linear strain-displacement relation
and displacement-based boundary conditions is used.
\cite{bathe-1975} provides a description for handling of various nonlinearities.
In large displacement analysis, the formulation may be an updated (Eulerian) formulation or
a Lagrangian formulation where the initial configuration is used.
Further enhancements are material nonlinearity and dynamic analysis.
A physics engine provides dynamic analysis with large reference translations and rotations
while assuming bodies to be undeformable.
Material plasticity can be accounted for in simulations by using a suitable coefficient of restitution.
This provides a reasonable means to simulate loss of energy in collisions.
In this work, the simulation of breaking of bodies made of ductile material is made more realistic
by splitting the rigid body
into multiple bodies that are connected by energy-absorbing joints.
A typical engineering stress-strain curve of ductile steel is shown in Figure \ref{fig:sscurve}.
\begin{figure}
\centering
\begin{tikzpicture}
\coordinate (Y) at (1,4);
\draw[->] (0,0) -- (6,0) node[right] {\large{$\epsilon$}};
\draw[->] (0,0) -- (0,5) node[above] {\large{$\sigma$}};
\draw(0,0) -- (Y) -- (2,4) .. controls (5,5) .. (6,4);
\draw[dashed](0,4) -- (Y);
\node at (-0.2,4) [align=right] {$f_y$};
\draw(0.25,1) -- (0.5,1) -- (0.5,2);
\node at (0.75,1.5) {$E$};
\node at (0.8,2.5) [anchor=west] {$\sigma = E \epsilon$ if $\sigma \le f_y$};
\end{tikzpicture}
\caption{Engineering stress-strain curve of ductile steel (not to scale).}
\label{fig:sscurve}
\end{figure}
In Figure \ref{fig:sscurve}, $\sigma$ is stress, $E$ is Young's modulus and $f_y$ is yield stress.
Engineering stress and strain mean that original dimensions are used in stress calculation,
\cite{dowling}.
%\citet[p.~108]{dowling}.
The stress-strain curve is not drawn to scale: the elastic strain, typically
0.001 to 0.005, would not be visible, as the fracture strain can be 100 times larger.
In this work, an elastic-fully plastic material model is used in most scenarios.
Having an elastic part allows elastic displacements for slender structures.
Elastic material behavior is ignored in the approach introduced in this work if
the deformation is related to a higher frequency
than integration stability would allow.
It should be noted that the geometry
of the bodies is not updated during the analysis, and thus engineering stress-strain properties are used.
In this work, strain hardening is taken into account by assuming that plastic volume in bending
expands,
\cite{dowling}.
%\citet[p.~672]{dowling}.
The material that starts to yield first is hardened, and as a result the yielding zone moves.
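The elastic-fully plastic model used here can be written as a single clamped stress function. A brief Python sketch follows; the material constants are illustrative values for steel, not parameters taken from this work:

```python
# Elastic-fully plastic material: stress grows linearly with strain up to
# the yield stress fy and stays at fy thereafter (symmetric in compression).
def stress(strain, E=210e9, fy=355e6):
    return max(-fy, min(fy, E * strain))

print(stress(0.001))  # elastic branch: 210e9 * 0.001 = 2.1e8 Pa
print(stress(0.01))   # plastic branch: capped at fy = 3.55e8 Pa
```

Strain hardening, when used, would replace the flat plastic branch by a slowly rising one.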
The difference between the elastic and plastic section modulus is depicted in Figure \ref{fig:wp}.
\begin{figure}[htb!]
\centering
\begin{tikzpicture}
\coordinate (S) at (2.5,5);
\draw (0,5) -- (4,5) ;
\draw (0,0) -- (4,0) ;
\draw (2,0) -- (2,5) ;
\draw (1.5,0) -- (S);
\node[above] at (S) [align=center] {\large{$\sigma<f_y$}};
\node[anchor=west] at (3,3) {
\begin{tabular}{l}
Under elastic load\\
stress increases\\
linearly from zero\\
at neutral axis to\\
maximum value at \\
surface of body
\end{tabular}
};
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}
\coordinate (S) at (3,5);
\draw (0,5) -- (4,5) ;
\draw (0,0) -- (4,0) ;
\draw (2,0) -- (2,5) ;
\draw (1,0) -- (1,2.5) -- (3,2.5) -- (S);
\node[above] at (S) [align=center] {\large{$\sigma=f_y$}};
\node[anchor=west] at (3,3) {
\begin{tabular}{l}
Under fully plastic load\\
stress is at yield\\
level over full\\
cross section
\end{tabular}
};
\end{tikzpicture}
\caption{Axial stress distribution over a cross section for bending under elastic and fully plastic loads.}
\label{fig:wp}
\end{figure}
As shown in Figure \ref{fig:wp}, if the stress is below the yield limit $f_y$, stress and strain are linear within the material.
If the cross section is fully plastic, the stress is assumed to be at yield level over the whole cross section;
consequently, the plastic section modulus is higher than the elastic section modulus.
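For a rectangular cross section the difference in Figure \ref{fig:wp} can be quantified directly: the elastic section modulus is $bh^2/6$ and the plastic one $bh^2/4$, a shape factor of 1.5. A small Python check using these standard textbook formulas (not code from this work):

```python
# Elastic vs. plastic section modulus of a rectangle b x h in bending:
# W_el = b*h^2/6 (extreme fiber reaches yield first),
# W_pl = b*h^2/4 (whole section at yield), so M_pl = 1.5 * M_el.
def section_moduli(b, h):
    return b * h**2 / 6, b * h**2 / 4

W_el, W_pl = section_moduli(0.01, 0.01)
print(W_pl / W_el)  # shape factor of a rectangular section: 1.5
```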
In this work, plasticity is handled by defining maximum forces
in Equations \ref{eq:lambdaLow} and
\ref{eq:lambdaHigh} using plastic capacities, which are defined below.
The maximum force acting in the direction of $\vec{r}_{anc}^{\,i} $
is the product of area and yield stress as follows:
\begin{equation} \label{eq:fN}
N_{max}= \int_A f_y \, dA.
\end{equation}
The maximum forces acting perpendicular to $\vec{r}_{anc}^{\,i} $
are the product of area and shear yield stress $\tau_y$ as follows:
\begin{equation} \label{eq:fQ}
Q_{max}= \int_A \tau_y \, dA.
\end{equation}
The maximum moments acting around the axes perpendicular to $\vec{r}_{anc}^{\,i} $
are integrals of the perpendicular distance
and the yield stress $f_y$, as given for the moment around the $x$-axis
and the moment around the $z$-axis, respectively:
\begin{equation} \label{eq:Mx}
M_{max}^x= \int_A z f_y \, dA,
\end{equation}
\begin{equation} \label{eq:Mz}
M_{max}^z= \int_A x f_y \, dA.
\end{equation}
The maximum moment around $\vec{r}_{anc}^{\,i} $
is an integral of the distance $d$ from the joint point
and the shear yield stress $\tau_y$ as:
\begin{equation} \label{eq:My}
M_{max}^y= \int_A d \tau_y \, dA.
\end{equation}
Maximum forces and moments for a
rectangular section with width $b$ and height $h$ using constant yield stress
are given in Table \ref{tab:maxForces}.
The yield shear stress is assumed to be $ 0.5\, f_y$ using the Tresca yield criterion.
If the von Mises yield criterion is used, 0.5 is replaced by 0.58 ($1/\sqrt{3}$), \cite{dowling}.
% p. 262, p. 268
These are not exact values in a multiaxial stress state but they
should be acceptable in most gaming scenarios.
\begin {table}
\tbl {Maximum forces and moments for
rectangular section with width $b$ and height $h$ using constant yield stress $f_y$}{
\begin{tabular}{| c| c|}
\hline
{\bf Direction} & {\bf Maximum value} \\ \hline
maximum shear force & $0.5\, b\, h f_y$ \\ \hline
maximum normal force & $b\, h\, f_y$ \\ \hline
maximum bending moment in direction of $h$& $0.25\, b\, h^2 \, f_y$ \\ \hline
maximum bending moment in direction of $b$ & $0.25\, b^2\, h\, f_y$ \\ \hline
maximum torque & $ \approx 0.19\, b\, h\, \frac{b\, + h}{2} f_y$ \\ \hline
\end{tabular}}
\label{tab:maxForces}
\end {table}
For torque, a closed-form solution exists only for
circular cross sections.
The given approximation is
best suited for cases where $b$ and $h$ are similar.
A better approximation for any given $b$ and $h$ can be obtained
by integrating the distance from the center of the joint over the cross section and
multiplying it by the yield shear stress, e.g. using Octave, \cite{octave}.
An example of calculation of the maximum moment around $\vec{r}_{anc}^{\,i} $
is shown in Figure \ref{fig:octave-mp}.
\begin{figure}
\centering
\lstset{language=octave}
\begin{lstlisting}
b=0.01; h=0.01; fy=200e6;
wpy=fy/2*dblquad(@(x,z)...
sqrt(x.*x+z.*z),-b/2,b/2,-h/2,h/2)
38.2
\end{lstlisting}
\caption{Calculation of maximum moment around $\vec{r}_{anc}^{\,i} $ using Octave.}
\label{fig:octave-mp}
\end{figure}
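The Octave result above can be cross-checked in plain Python with a midpoint Riemann sum; the values of $b$, $h$ and $f_y$ mirror the Octave snippet, while the grid size is an arbitrary choice:

```python
# Midpoint-rule cross-check of the torque capacity integral
# M_y = (fy / 2) * integral over the b x h section of sqrt(x^2 + z^2).
def torque_capacity(b, h, fy, n=400):
    dx, dz = b / n, h / n
    total = 0.0
    for i in range(n):
        x = -b / 2 + (i + 0.5) * dx
        for j in range(n):
            z = -h / 2 + (j + 0.5) * dz
            total += (x * x + z * z) ** 0.5 * dx * dz
    return fy / 2 * total

print(torque_capacity(0.01, 0.01, 200e6))  # ~38.26, agreeing with Octave's 38.2
```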
The basic idea introduced in this study can be tested with any framework that has motors and hinge constraints.
This can be done by setting the target velocity of the motor to zero and limiting
the maximum motor impulse to the plastic moment multiplied by the timestep.
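A framework-independent sketch of this motor trick in Python (names are illustrative; a real engine would apply the clamping inside its constraint solver):

```python
# A zero-target-velocity motor tries to stop relative rotation at the hinge,
# but its impulse is capped at plastic_moment * dt, which models yielding.
def plastic_hinge_impulse(rel_omega, inertia, plastic_moment, dt):
    needed = -inertia * rel_omega     # impulse that would stop rotation now
    cap = plastic_moment * dt         # plastic capacity over one timestep
    return max(-cap, min(cap, needed))

# a small disturbance is fully stopped; a large one saturates (plastic flow)
print(plastic_hinge_impulse(0.1, 1.0, plastic_moment=38.0, dt=0.01))   # -0.1
print(plastic_hinge_impulse(50.0, 1.0, plastic_moment=38.0, dt=0.01))  # saturates at -plastic_moment*dt
```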
Further enhancements were created and tested by forking the \cbullet\ source code
and adding new constraints, \cite{pbullet}.
Instructions for using the Windows executable and the source code are available, \cite{bp}.
|
InstallMethod(NilpotentChv,
"NilpotentChv element in simple Lie type",
[IsChevalleyAdj,IsList],
function(sys,coeffs)
local object,
B,admat,l;
if Filtered(coeffs,i->not i[2] in Integers)=[] then
coeffs:=List(coeffs,i->[i[1],One(ring(sys))*i[2]]);
elif Filtered(coeffs,i->not i[2] in ring(sys))<>[] then
Error("NilpotentChv: The coefficients are not in the indicated ring.");
fi;
object:=Objectify(NewType(NewFamily("NilpotentChvFamily"),
IsAttributeStoringRep and
IsNilpotentChv and
IsAdditiveElementWithInverse and
IsAdditiveElementWithZero),
rec());
SetchevalleyAdj(object,sys);
# Simplified form of the element (negative root vectors are also allowed "2*Len..")
coeffs:=Filtered(List([1..2*Length(positiveRoots(sys))],
i->[i,One(ring(sys))*Sum(List(Filtered(coeffs,
j->j[1]=i),
k->k[2]))]),
l->l[2]<>Zero(ring(sys)));
Setcoefficients(object,coeffs);
B:=Basis(lieAlgebra(sys));
if coeffs<>[] then SetLieAlgebraElement(object,Sum(coeffs,i->i[2]*B[i[1]]));
else SetLieAlgebraElement(object,Zero(lieAlgebra(sys))); fi;
SetLieAlgebraCoeffs(object,Coefficients(B,LieAlgebraElement(object)));
# l:=Length(PRoots(R));
# admat:=AdjointMatrix(B,LieAlgebraElement(e));
# SetPositiveAde(object,List([1..l],i->admat[i]{[1..l]}));
SetName(object,Concatenation("<nilpotent element for ",
type(sys),String(rank(sys)),
" in characteristic ",String(Characteristic(sys)),">"));
return object;
end);
InstallMethod(Ascend,
"If possible embed the parameters in a polynomial ring with coefficients ring the base ring",
[IsNilpotentChv,IsPolynomialRing],
function(e,inel)
local sys,
check;
sys:=chevalleyAdj(e);
if not IsPolynomialRing(inel) then
Error("Ascend NilpotentChv: The new ring is not a polynomial ring to embed in!\n");
elif CoefficientsRing(inel)<>ring(sys) then
Error("Ascend NilpotentChv: CoefficientsRing and ring(sys) do not coincide!\n");
fi;
return NilpotentChv(ChevalleyAdj(sys,inel),List(coefficients(e),i->[i[1],One(inel)*i[2]]));
end);
InstallMethod(Ascend,
"If possible embed the parameters in a polynomial ring with coefficients ring the base ring",
[IsNilpotentChv,IsChevalleyAdj],
function(e,sys)
local syse,
check;
syse:=chevalleyAdj(e);
if not IsPolynomialRing(ring(sys)) then
Error("The new ring is not a polynomial ring to embed in.");
elif CoefficientsRing(ring(sys))<>ring(syse) then
Error("CoefficientsRing and ring(syse) do not coincide.");
fi;
return NilpotentChv(sys,List(coefficients(e),i->[i[1],One(ring(sys))*i[2]]));
end);
InstallMethod(\*,
"multiplication by scalar for nilpotent elements s*e",
[IsRingElement,IsNilpotentChv],
function(s,e)
local sys;
sys:=chevalleyAdj(e);
if not s in ring(sys) then
Error("Scalar*NilpotentChv: Scalar not in ring of nilpotent!\n");
fi;
return NilpotentChv(sys,List(coefficients(e),i->[i[1],s*i[2]]));
end);
InstallMethod(\+,
"addition for nilpotent elements e1+e2",
[IsNilpotentChv,IsNilpotentChv],
function(e1,e2)
local sys1,sys2;
sys1:=chevalleyAdj(e1);
sys2:=chevalleyAdj(e2);
if type(sys1)<>type(sys2) or
rank(sys1)<>rank(sys2) or
ring(sys1)<>ring(sys2) then
Error("NilpotentChv+NilpotentChv: Not in the same family!\n");
fi;
return NilpotentChv(sys1,Concatenation(coefficients(e1),coefficients(e2)));
end);
InstallMethod(AdditiveInverseMutable,
"additive inverse for nilpotent elements -e1",
[IsNilpotentChv],
function(e)
local sys;
sys:=chevalleyAdj(e);
return NilpotentChv(sys,List(coefficients(e),i->[i[1],-i[2]]));
end);
InstallMethod(ZeroMutable,
"Zero vector of the Lie algebra",
[IsNilpotentChv],
function(e)
local sys;
return NilpotentChv(chevalleyAdj(e),[]);
end);
|
State Before: C : Type u₁
inst✝¹ : Category C
D : Type u₂
inst✝ : Category D
F G : C ⥤ D
X Y : C
e : X ≅ Y
hX : F.obj X = G.obj X
hY : F.obj Y = G.obj Y
⊢ F.obj X = G.obj X State After: no goals Tactic: rw [hX] State Before: C : Type u₁
inst✝¹ : Category C
D : Type u₂
inst✝ : Category D
F G : C ⥤ D
X Y : C
e : X ≅ Y
hX : F.obj X = G.obj X
hY : F.obj Y = G.obj Y
⊢ G.obj Y = F.obj Y State After: no goals Tactic: rw [hY] State Before: C : Type u₁
inst✝¹ : Category C
D : Type u₂
inst✝ : Category D
F G : C ⥤ D
X Y : C
e : X ≅ Y
hX : F.obj X = G.obj X
hY : F.obj Y = G.obj Y
h₂ : F.map e.hom = eqToHom (_ : F.obj X = G.obj X) ≫ G.map e.hom ≫ eqToHom (_ : G.obj Y = F.obj Y)
⊢ F.obj Y = G.obj Y State After: no goals Tactic: rw [hY] State Before: C : Type u₁
inst✝¹ : Category C
D : Type u₂
inst✝ : Category D
F G : C ⥤ D
X Y : C
e : X ≅ Y
hX : F.obj X = G.obj X
hY : F.obj Y = G.obj Y
h₂ : F.map e.hom = eqToHom (_ : F.obj X = G.obj X) ≫ G.map e.hom ≫ eqToHom (_ : G.obj Y = F.obj Y)
⊢ G.obj X = F.obj X State After: no goals Tactic: rw [hX] State Before: C : Type u₁
inst✝¹ : Category C
D : Type u₂
inst✝ : Category D
F G : C ⥤ D
X Y : C
e : X ≅ Y
hX : F.obj X = G.obj X
hY : F.obj Y = G.obj Y
h₂ : F.map e.hom = eqToHom (_ : F.obj X = G.obj X) ≫ G.map e.hom ≫ eqToHom (_ : G.obj Y = F.obj Y)
⊢ F.map e.inv = eqToHom (_ : F.obj Y = G.obj Y) ≫ G.map e.inv ≫ eqToHom (_ : G.obj X = F.obj X) State After: no goals Tactic: simp only [← IsIso.Iso.inv_hom e, Functor.map_inv, h₂, IsIso.inv_comp, inv_eqToHom,
Category.assoc] |
lemma synthetic_div_unique_lemma: "smult c p = pCons a p \<Longrightarrow> p = 0" |
Formal statement is: lemma of_real_in_Ints_iff [simp]: "of_real x \<in> \<int> \<longleftrightarrow> x \<in> \<int>" Informal statement is: A real number $x$ is an integer if and only if its image under the coercion `of_real` is an integer. |
Long Island has some of the finest beaches in the world.
What can compare for beauty with Main Beach in East Hampton? Malibu? Cannes? Miami? Those beaches are all thin, puny strips of sand. Incredibly overrated. They could all fit on any Long Island beach at the same time and there would be plenty of shoreline to spare.
I never get tired of looking at our beautiful beach in East Hampton. And when I stare and look at it long enough, I always think back to what used to be. When you grew up in, as they say, "limited circumstances," a trip to the beach was the best celebration of the line "The best things in life are free."
I grew up on West 7th Street in Brooklyn, one subway stop (five minutes) from Coney Island. The Sea Beach train (now called the N) ran behind our tiny house when I was a kid. Every time the train rolled into the station my entire house would shake. At night I would lie in my bed and listen to the train come in. It was a friendly, familiar sound to a little kid: It meant people were coming home.
In the summer my Mom would take me and my little brother to Coney Island just about every day. For me, it was like traveling to Oz.
When you walked off the subway, some incredible smells fought with each other to get into your nose. The first was the smell of raw clams being squirted with lemon. And then there was the smell of ice-cold beer foaming up and out of the glass in the clam bar that was in the promenade of the subway terminal.
As you walked across the street, you smelled the sweetness of cotton candy and two seconds later you smelled the garlic and spices of those sizzling Nathan's hot dogs that made your mouth water. By the time you got to the boardwalk, you were starving and reaching in your bag of homemade sandwiches to sneak a bite.
You could get to the beach by walking onto the boardwalk or under it. (A few years later the Drifters would tell the world about the wonders that could be found "Under the Boardwalk.") Walking under was the faster way to get onto the beach and that's the way you always went. You braved the cold clammy sand that hadn't felt the sunlight in years. You gingerly stepped over (while still managing to sneak a peek at) the teenage couples who were passionately "making out" on the blankets in the dim semi-privacy that could only be found under the boardwalk. They pretended they were invisible. I was very young but I knew enough to go along with the pretense.
The walk on the beach was a joke. There seemed to be millions of people on the beach; consequently, there was no beach. We stepped on one beach blanket after another. Finally, my mom staked out a claim and we parked our blanket, touching four other blankets, and rushed to the water.
To be honest, the water in Coney Island was just slightly cleaner than the Ganges in India. The recent BP oil spill disaster triggered a sense memory that I hadn't thought about in so many years. For years after World War II, the water in Coney Island was filled with oil chunks that blackened our feet. I remember my father telling me that it probably came from one of the ships that had been blown up nearby during the war. I remember wondering if it was one of ours or one of theirs.
But now the smell in the air was suntan oil, and as a kid I remember staying in the water for hours to fight the waves. Invariably, my mother would call me in because "your lips are turning blue." She never came into the water. She just joined all the other mothers who were standing on the shore on "blue lip patrol." When the time came to go home, I always begged for another half-hour. They always gave it to me.
On the one-stop ride home, I would rush to the front of the first car and stand on my toes at the open window. The cool air hitting my sunburned face would feel wonderful.
My favorite line in any movie is from Atlantic City. A very old Burt Lancaster is trying to impress a very young Susan Sarandon. They're looking at the Atlantic Ocean. She says, "It's very beautiful." He says, "Yes." Then he looks at her and says, "But this is nothing. You should have seen it in the old days."
My children have the same ocean I had in the old days. But they don't have the noise of subway trains every night. They've never been to Coney Island. The beach they walk on in East Hampton is spotlessly clean and well maintained. There's always plenty of room. Thanks to the diligence of the village fathers, they will never smell food on the way to Main Beach.
My children seem to have everything, but in some ways I feel sorry for them. With all they have, they will never have the richness of these memories.
If you wish to comment on "Jerry's Ink" please send your message to [email protected] or visit indyeastend.com and scroll to the bottom of the column. |
import topology.basic
import topology.instances.basic
import topology.properties
import misc.set
namespace topology
universe u
open set
theorem finite_union_of_compact_sets_compact {X : Type u} [topology X]
: ∀ {l : list (set X)}, is_cover (list_to_set l) → (∀ {S}, S ∈ l → compact_set S) → compact_space X := sorry
end topology |
{-# OPTIONS --safe #-}
module Cubical.HITs.Wedge.Base where
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Pointed
open import Cubical.HITs.Pushout.Base
open import Cubical.Data.Sigma
open import Cubical.Data.Unit
_⋁_ : ∀ {ℓ ℓ'} → Pointed ℓ → Pointed ℓ' → Type (ℓ-max ℓ ℓ')
_⋁_ (A , ptA) (B , ptB) = Pushout {A = Unit} {B = A} {C = B} (λ _ → ptA) (λ _ → ptB)
-- Pointed versions
_⋁∙ₗ_ : ∀ {ℓ ℓ'} → Pointed ℓ → Pointed ℓ' → Pointed (ℓ-max ℓ ℓ')
A ⋁∙ₗ B = (A ⋁ B) , (inl (snd A))
_⋁∙ᵣ_ : ∀ {ℓ ℓ'} → Pointed ℓ → Pointed ℓ' → Pointed (ℓ-max ℓ ℓ')
A ⋁∙ᵣ B = (A ⋁ B) , (inr (snd B))
-- Wedge sums of functions
_∨→_ : ∀ {ℓ ℓ' ℓ''} {A : Pointed ℓ} {B : Pointed ℓ'} {C : Pointed ℓ''}
→ (f : A →∙ C) (g : B →∙ C)
→ A ⋁ B → fst C
(f ∨→ g) (inl x) = fst f x
(f ∨→ g) (inr x) = fst g x
(f ∨→ g) (push a i₁) = (snd f ∙ sym (snd g)) i₁
-- Pointed version
∨→∙ : ∀ {ℓ ℓ' ℓ''} {A : Pointed ℓ} {B : Pointed ℓ'} {C : Pointed ℓ''}
→ (f : A →∙ C) (g : B →∙ C) → ((A ⋁∙ₗ B) →∙ C)
fst (∨→∙ {A = A} f g) = f ∨→ g
snd (∨→∙ {A = A} f g) = snd f
-- Wedge sum of Units is contractible
isContr-Unit⋁Unit : isContr ((Unit , tt) ⋁ (Unit , tt))
fst isContr-Unit⋁Unit = inl tt
snd isContr-Unit⋁Unit (inl tt) = refl
snd isContr-Unit⋁Unit (inr tt) = push tt
snd isContr-Unit⋁Unit (push tt i) j = push tt (i ∧ j)
⋁↪ : ∀ {ℓ ℓ'} {A : Pointed ℓ} {B : Pointed ℓ'}
→ A ⋁ B → typ A × typ B
⋁↪ {B = B} (inl x) = x , pt B
⋁↪ {A = A} (inr x) = pt A , x
⋁↪ {A = A} {B = B} (push a i) = pt A , pt B
|
module Lvl.Decidable where
open import Data.Boolean.Stmt
open import Logic.Propositional
open import Logic.Propositional.Theorems
import Lvl
open import Type.Properties.Decidable
open import Type.Properties.Decidable.Proofs
open import Type
private variable ℓ ℓ₁ : Lvl.Level
-- Changing classical propositions' universe levels by using their boolean representation.
module _ (P : Type{ℓ}) ⦃ dec : Decidable(0)(P) ⦄ where
Convert : Type{ℓ₁}
Convert = Lvl.Up(IsTrue(decide(0)(P)))
-- LvlConvert is satisfied whenever its proposition is.
Convert-correctness : P ↔ Convert{ℓ₁}
Convert-correctness = [↔]-transitivity decider-true ([↔]-intro Lvl.Up.obj Lvl.up)
|
State Before: d x y : ℤ
⊢ sqrtd * { re := x, im := y } = { re := d * y, im := x } State After: no goals Tactic: simp [ext] |
{- Byzantine Fault Tolerant Consensus Verification in Agda, version 0.9.
Copyright (c) 2021, Oracle and/or its affiliates.
Licensed under the Universal Permissive License v 1.0 as shown at https://opensource.oracle.com/licenses/upl
-}
open import LibraBFT.ImplShared.Consensus.Types
open import Optics.All
open import Util.Encode
open import Util.PKCS as PKCS hiding (sign)
open import Util.Prelude
module LibraBFT.Impl.Types.ValidatorSigner where
sign : {C : Set} ⦃ enc : Encoder C ⦄ → ValidatorSigner → C → Signature
sign (ValidatorSigner∙new _ sk) c = PKCS.sign-encodable c sk
postulate -- TODO-1: publicKey_USE_ONLY_AT_INIT
publicKey_USE_ONLY_AT_INIT : ValidatorSigner → PK
obmGetValidatorSigner : AuthorName → List ValidatorSigner → Either ErrLog ValidatorSigner
obmGetValidatorSigner name vss =
case List-filter go vss of λ where
(vs ∷ []) → pure vs
_ → Left fakeErr -- [ "ValidatorSigner", "obmGetValidatorSigner"
-- , name , "not found in"
-- , show (fmap (^.vsAuthor.aAuthorName) vss) ]
where
go : (vs : ValidatorSigner) → Dec (vs ^∙ vsAuthor ≡ name)
go (ValidatorSigner∙new _vsAuthor _) = _vsAuthor ≟ name
|
{-# OPTIONS --without-K #-}
module function.isomorphism.univalence where
open import equality.core
open import function.core
open import function.isomorphism.core
open import hott.equivalence.alternative
open import hott.univalence
≅⇒≡ : ∀ {i}{X Y : Set i}
→ X ≅ Y → X ≡ Y
≅⇒≡ = ≈⇒≡ ∘ ≅⇒≈
|
//****************************************************************************
// (c) 2008, 2009 by the openOR Team
//****************************************************************************
// The contents of this file are available under the GPL v2.0 license
// or under the openOR comercial license. see
// /Doc/openOR_free_license.txt or
// /Doc/openOR_comercial_license.txt
// for Details.
//****************************************************************************
//! OPENOR_INTERFACE_FILE(openOR_core)
//****************************************************************************
/**
* @file
* @author Christian Winne
* @ingroup openOR_core
*/
#ifndef openOR_core_Math_vectorconcept_hpp
#define openOR_core_Math_vectorconcept_hpp
#include <boost/mpl/assert.hpp>
#include <boost/mpl/less_equal.hpp>
#include <boost/concept/usage.hpp>
#include <boost/concept_check.hpp>
#include <openOR/Utility/conceptcheck.hpp>
#include <openOR/Math/traits.hpp>
namespace openOR {
namespace Math {
template <int I, typename P>
typename if_const < P,
typename VectorTraits<typename boost::remove_const<P>::type>::Access::ConstAccessType,
typename VectorTraits<typename boost::remove_const<P>::type>::Access::AccessType >
::type get(P& p) {
return VectorTraits<typename boost::remove_const<P>::type>::Access::template get<I>(p);
}
namespace Impl {
template < class Vec,
int Begin = 0,
int End = VectorTraits<Vec>::Dimension::value >
class VectorCompileTimeIterator {
public:
template<template <class, int> class Op>
inline void apply() const {
BOOST_MPL_ASSERT((boost::mpl::less_equal<boost::mpl::int_<Begin>, boost::mpl::int_<End> >));
Apply0<Begin, End, Op>()();
}
template<template <class, class, int> class Op, class Vec2>
inline void apply(Vec& vec, const Vec2& vec2) const {
BOOST_MPL_ASSERT((boost::mpl::less_equal<boost::mpl::int_<Begin>, boost::mpl::int_<End> >));
Apply2<Begin, End, Op, Vec2>()(vec, vec2);
}
template<template <class, class, class, int> class Op, class Vec2, class Vec3>
inline void apply(Vec& vec, const Vec2& vec2, const Vec3& vec3) const {
BOOST_MPL_ASSERT((boost::mpl::less_equal<boost::mpl::int_<Begin>, boost::mpl::int_<End> >));
Apply3<Begin, End, Op, Vec2, Vec3>()(vec, vec2, vec3);
}
template<template <class, int> class Op, typename Result, template<class> class Reducer>
inline Result reduce(const Vec& vec) const {
BOOST_MPL_ASSERT((boost::mpl::less<boost::mpl::int_<Begin>, boost::mpl::int_<End> >));
return Reduce1 < Begin, End - 1, Op, Result, Reducer > ()(vec);
}
template<template <class, class, int> class Op, class Vec2, typename Result, template<class> class Reducer>
inline Result reduce(const Vec& vec, const Vec2& vec2) const {
BOOST_MPL_ASSERT((boost::mpl::less<boost::mpl::int_<Begin>, boost::mpl::int_<End> >));
return Reduce2 < Begin, End - 1, Op, Vec2, Result, Reducer > ()(vec, vec2);
}
private:
template<int I, int Count, template <class, int> class Op>
struct Apply0 {
inline void operator()() const {
Op<Vec, I>()();
Apply0 < I + 1, Count, Op > ()();
}
};
template<int Count, template <class, int> class Op>
struct Apply0<Count, Count, Op> {
inline void operator()() const {}
};
template<int I, int Count, template <class, class, int> class Op, class Vec2>
struct Apply2 {
inline void operator()(Vec& vec, const Vec2& vec2) const {
Op<Vec, Vec2, I>()(vec, vec2);
Apply2 < I + 1, Count, Op, Vec2 > ()(vec, vec2);
}
};
template<int Count, template <class, class, int> class Op, class Vec2>
struct Apply2<Count, Count, Op, Vec2> {
inline void operator()(Vec& vec, const Vec2& vec2) const {}
};
template<int I, int Count, template <class, class, class, int> class Op, class Vec2, class Vec3>
struct Apply3 {
inline void operator()(Vec& vec, const Vec2& vec2, const Vec3& vec3) const {
Op<Vec, Vec2, Vec3, I>()(vec, vec2, vec3);
Apply3 < I + 1, Count, Op, Vec2, Vec3> ()(vec, vec2, vec3);
}
};
template<int Count, template <class, class, class, int> class Op, class Vec2, class Vec3>
struct Apply3<Count, Count, Op, Vec2, Vec3> {
inline void operator()(Vec&, const Vec2&, const Vec3&) const {
}
};
template<int I, int Count, template <class, int> class Op, typename Result, template<class> class Reducer>
struct Reduce1 {
inline Result operator()(const Vec& vec) {
Result r1 = Op<Vec, I>()(vec);
Result r2 = Reduce1 < I + 1, Count, Op, Result, Reducer > ()(vec);
return Reducer<Result>()(r1, r2);
}
};
template<int Count, template <class, int> class Op, typename Result, template<class> class Reducer>
struct Reduce1<Count, Count, Op, Result, Reducer> {
inline Result operator()(const Vec& vec) {
return Op<Vec, Count>()(vec);
}
};
template<int I, int Count, template <class, class, int> class Op, class Vec2, typename Result, template<class> class Reducer>
struct Reduce2 {
inline Result operator()(const Vec& vec, const Vec2& vec2) {
return Reducer<Result>()(Op<Vec, Vec2, I>()(vec, vec2), Reduce2 < I + 1, Count, Op, Vec2, Result, Reducer > ()(vec, vec2));
}
};
template<int Count, template <class, class, int> class Op, class Vec2, typename Result, template<class> class Reducer>
struct Reduce2<Count, Count, Op, Vec2, Result, Reducer> {
inline Result operator()(const Vec& vec, const Vec2& vec2) {
return Op<Vec, Vec2, Count>()(vec, vec2);
}
};
};
}
namespace Concept {
template <typename Type>
class ConstVector {
private:
BOOST_MPL_ASSERT((typename VectorTraits<Type>::IsVector));
// FIX THIS!!! TODO?
//BOOST_MPL_ASSERT((boost::mpl::greater_equal<
// typename VectorTraits<Type>::Dimension,
// boost::mpl::int_<0> >
// ));
typedef typename VectorTraits<Type>::ValueType ValueType;
enum { DIMENSION = VectorTraits<Type>::Dimension::value };
template <class V, int I>
struct AccessCheck {
void operator()() const {
const V* p = 0;
typename VectorTraits<V>::ValueType val(get<I>(*p));
(void)sizeof(val); // To avoid "unused variable" warnings
}
};
public :
/// BCCL macro to check the ConstVector concept
BOOST_CONCEPT_USAGE(ConstVector) {
Impl::VectorCompileTimeIterator<Type>().template apply<AccessCheck>();
// TODO: FIX THIS!!!
// Type* p = NULL;
//ValueType norm(Math::norm(*p));
//ValueType dotv(Math::dot(*p, *p));
}
};
template<class Type>
class Vector : public boost::DefaultConstructible<Type>, public boost::CopyConstructible<Type>, public boost::Assignable<Type> {
private:
BOOST_MPL_ASSERT((typename VectorTraits<Type>::IsVector)); // A valid vector trait has to be defined for "Type"
BOOST_CONCEPT_ASSERT((Concept::ConstVector<Type>)); // "Type" has to fulfill the ConstVector concept
//BOOST_MPL_ASSERT((boost::mpl::greater<typename VectorTraits<Type>::Dimension, boost::mpl::int_<0> >)); // "Type" must have a valid dimension
typedef typename VectorTraits<Type>::ValueType ValueType;
enum { DIMENSION = VectorTraits<Type>::Dimension::value };
template <class V, int I>
struct AccessCheck {
void operator()() const {
V* p = NULL;
get<I>(*p) = typename VectorTraits<V>::ValueType();
}
};
public:
/// BCCL macro to check the Vector concept
BOOST_CONCEPT_USAGE(Vector) {
Impl::VectorCompileTimeIterator<Type>().template apply<AccessCheck>();
Type t;
ValueType v;
t *= v;
t = v * t;
t = t * v;
t = t + t;
t = t - t;
t += t;
t -= t;
}
};
}
}
}
#endif
|
After three months recuperating from her injuries, Pattycake was returned to her mother on June 15, 1973. The entire incident was documented by artist Susan Green in her book Gentle Gorilla: The Story of Patty Cake (1978).
|
#define _GNU_SOURCE
#include <stdio.h>
#include <cblas.h>
#include <lapacke.h>
int main(int argc, char **argv)
{
(void)argc; (void)argv; /* unused */
int N = 3;
double X[] = { 1, 2, 3 };
int INCX = 1;
/* Euclidean norm of X: sqrt(1 + 4 + 9) ~= 3.7417 */
double res = cblas_dnrm2(N, X, INCX);
printf("%f\n", res);
return 0;
}
|
lemma bounded_linear_sub: "bounded_linear f \<Longrightarrow> bounded_linear g \<Longrightarrow> bounded_linear (\<lambda>x. f x - g x)" |
import numpy as np
import scipy.stats as sts
from preprocessing.timeseries import points_of_crossing
def maxdrawdown_by_trend(close, trend):
cross = points_of_crossing(close, trend)
drawdowns_yields = []
drawdowns_indices = []
drawdowns_time = []
for i in range(len(cross['all'])-1):
start = cross['all'][i]
end = cross['all'][i+1]
if start - end == 0:
if close[start] > close[end]:
drawdowns_indices.append(end)
drawdowns_yields.append(np.log(close[end] / close[start]))
drawdowns_time.append(1)
continue
dd = np.min(np.log(close[start : end + 1] / close[start]))
dd_ind = np.argmin(np.log(close[start : end + 1] / close[start]))
if dd < 0:
drawdowns_yields.append(dd)
drawdowns_indices.append(cross['all'][i] + dd_ind)
drawdowns_time.append(drawdowns_indices[-1] - start)
res = {
'drawdowns_yields': np.array(drawdowns_yields),
'drawdowns_time': np.array(drawdowns_time),
'drawdowns_indices': np.array(drawdowns_indices)
}
return res
def maxdrawdown(series: np.ndarray) -> tuple:
"""Calculate maximum drawdowns and their time of input series
Arguments:
series - np.ndarray of prices
----------------
EXAMPLE:
>>> y = sts.norm.rvs(size=100).cumsum()
>>> dd = maxdrawdown(y)
>>> dd[0] # This is calculated maxdrawdown
Return tuple;
0 - value of maxdrawdown;
1 - dict:
drawdowns - all drawdowns;
drawdowns_time - time of each drawdown;
drawdowns_yield - all drawdowns as pct changes
2 - list of event start indices
----------------
P.S. If the series ends during a drawdown,
the algorithm will notify about it.
"""
assert isinstance(series, np.ndarray), 'Incorrect type of series for maxdrawdown. It must be np.ndarray'
drawdowns = []
drawdowns_time = []
drawdowns_begin = []
drawdowns_yield = []
current_dd = None
start_time_dd = None
possible_dd = None
for i in range(1, len(series)):
if current_dd is None:
if series[i] < series[i - 1]:
current_dd = series[i - 1]
possible_dd = series[i]
start_time_dd = i - 1
drawdowns_begin.append(start_time_dd)
elif series[i] < current_dd:
if series[i] < possible_dd:
possible_dd = series[i]
elif series[i] > current_dd:
drawdowns.append(possible_dd - current_dd)
drawdowns_yield.append(possible_dd / current_dd)
drawdowns_time.append(i - start_time_dd - 1)
current_dd = None
start_time_dd = None
possible_dd = None
max_drawdown = np.min(drawdowns)
if current_dd is not None:
max_drawdown = possible_dd / current_dd
print(f'Drawdown is not over yet! Current max drawdown is {max_drawdown}')
to_ret = (
max_drawdown,
dict(
drawdowns=np.array(drawdowns),
drawdowns_yield=np.array(drawdowns_yield),
drawdowns_time=np.array(drawdowns_time)
),
drawdowns_begin
)
return to_ret
def get_drawdown_time(close, trend):
trend_gap = close - trend
gap_time = []
t = 0
for i in trend_gap:
if not t:
if i < 0:
t += 1
elif i <= 0:
t += 1
elif i > 0:
gap_time.append(t)
t = 0
return gap_time
def kupiec_test():
"""Kupiec test for evaluation of VaR estimations
"""
pass
class AltmanZscore:
pass
if __name__ == '__main__':
pass |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Structured General Purpose Assignment
% LaTeX Template
%
% This template has been downloaded from:
% http://www.latextemplates.com
%
% Original author:
% Ted Pavlic (http://www.tedpavlic.com)
%
% Note:
% The \lipsum[#] commands throughout this template generate dummy text
% to fill the template out. These commands should all be removed when
% writing assignment content.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass{article}
\usepackage{fancyhdr} % Required for custom headers
\usepackage{lastpage} % Required to determine the last page for the footer
\usepackage{extramarks} % Required for headers and footers
\usepackage{graphicx} % Required to insert images
\usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template
\usepackage{amsmath}
\usepackage{todonotes}
% Margins
\topmargin=-0.45in
\evensidemargin=0in
\oddsidemargin=0in
\textwidth=6.5in
\textheight=9.0in
\headsep=0.25in
\linespread{1.1} % Line spacing
% Set up the header and footer
\pagestyle{fancy}
\lhead{\hmwkAuthorName} % Top left header
\chead{\hmwkClass\ : \hmwkTitle} % Top center header
\rhead{\firstxmark} % Top right header
\lfoot{\lastxmark} % Bottom left footer
\cfoot{} % Bottom center footer
\rfoot{Page\ \thepage\ of\ \pageref{LastPage}} % Bottom right footer
\renewcommand\headrulewidth{0.4pt} % Size of the header rule
\renewcommand\footrulewidth{0.4pt} % Size of the footer rule
\setlength\parindent{0pt} % Removes all indentation from paragraphs
%----------------------------------------------------------------------------------------
% DOCUMENT STRUCTURE COMMANDS
% Skip this unless you know what you're doing
%----------------------------------------------------------------------------------------
% Header and footer for when a page split occurs within a problem environment
\newcommand{\enterProblemHeader}[1]{
\nobreak\extramarks{#1}{#1 continued on next page\ldots}\nobreak
\nobreak\extramarks{#1 (continued)}{#1 continued on next page\ldots}\nobreak
}
% Header and footer for when a page split occurs between problem environments
\newcommand{\exitProblemHeader}[1]{
\nobreak\extramarks{#1 (continued)}{#1 continued on next page\ldots}\nobreak
\nobreak\extramarks{#1}{}\nobreak
}
\setcounter{secnumdepth}{0} % Removes default section numbers
\newcounter{homeworkProblemCounter} % Creates a counter to keep track of the number of problems
\newcommand{\homeworkProblemName}{}
\newenvironment{homeworkProblem}[1][Problem \arabic{homeworkProblemCounter}]{ % Makes a new environment called homeworkProblem which takes 1 argument (custom name) but the default is "Problem #"
\stepcounter{homeworkProblemCounter} % Increase counter for number of problems
\renewcommand{\homeworkProblemName}{#1} % Assign \homeworkProblemName the name of the problem
\section{\homeworkProblemName} % Make a section in the document with the custom problem count
\enterProblemHeader{\homeworkProblemName} % Header and footer within the environment
}{
\exitProblemHeader{\homeworkProblemName} % Header and footer after the environment
}
\newcommand{\problemAnswer}[1]{ % Defines the problem answer command with the content as the only argument
\noindent\framebox[\columnwidth][c]{\begin{minipage}{0.98\columnwidth}#1\end{minipage}} % Makes the box around the problem answer and puts the content inside
}
\newcommand{\homeworkSectionName}{}
\newenvironment{homeworkSection}[1]{ % New environment for sections within homework problems, takes 1 argument - the name of the section
\renewcommand{\homeworkSectionName}{#1} % Assign \homeworkSectionName to the name of the section from the environment argument
\subsection{\homeworkSectionName} % Make a subsection with the custom name of the subsection
\enterProblemHeader{\homeworkProblemName\ [\homeworkSectionName]} % Header and footer within the environment
}{
\enterProblemHeader{\homeworkProblemName} % Header and footer after the environment
}
%----------------------------------------------------------------------------------------
% NAME AND CLASS SECTION
%----------------------------------------------------------------------------------------
\newcommand{\hmwkTitle}{Assignment\ \# 2} % Assignment title
\newcommand{\hmwkDueDate}{Monday,\ October\ 5,\ 2015} % Due date
\newcommand{\hmwkClass}{CSCI-567} % Course/class
\newcommand{\hmwkClassTime}{} % Class/lecture time
\newcommand{\hmwkAuthorName}{Saket Choudhary} % Your name
\newcommand{\hmwkAuthorID}{2170058637} % Teacher/lecturer
%----------------------------------------------------------------------------------------
% TITLE PAGE
%----------------------------------------------------------------------------------------
\title{
\vspace{2in}
\textmd{\textbf{\hmwkClass:\ \hmwkTitle}}\\
\normalsize\vspace{0.1in}\small{Due\ on\ \hmwkDueDate}\\
\vspace{0.1in}\large{\textit{\hmwkClassTime}}
\vspace{3in}
}
\author{\textbf{\hmwkAuthorName} \\
\textbf{\hmwkAuthorID}
}
\date{} % Insert date here if you want it to appear below your name
%----------------------------------------------------------------------------------------
\begin{document}
\maketitle
%----------------------------------------------------------------------------------------
% TABLE OF CONTENTS
%----------------------------------------------------------------------------------------
%\setcounter{tocdepth}{1} % Uncomment this line if you don't want subsections listed in the ToC
\newpage
\tableofcontents
\newpage
\listoftodos
%----------------------------------------------------------------------------------------
% PROBLEM 2
%----------------------------------------------------------------------------------------
\begin{homeworkProblem}[Problem 1] % Custom section title
\begin{homeworkSection}{\homeworkProblemName: ~(a)}
\problemAnswer{ % Answer
%Linear regression assumes uncertainty in the measurement of dependent variable($Y$). $Y$ being a dependent variable is assumed to be the 'true' value that can be measured with ultimate precision. Any model that relates the independent variable to dependent will assume some kind of modeling error which in case of linear regression is often taken as Gaussian random variable . Additively the 'noise' and the independent variable predict the dependent.
%\todo{Fix this answeer. it is verbose}
Linear regression assumes that the regressors are observed without error, and that the uncertainty lies in the dependent variable $Y$. The analogy is easiest to see when $Y$ is a 'response' caused by some independent variable $X$: although $X$ is measured exactly, the measurement of the dependent variable $Y$ must account for errors.
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(b)}
\problemAnswer{ % Answer
In order to make linear regression robust to outliers, a n\"{a}ive solution is to choose the absolute deviation ($L1$ norm) over the squared error ($L2$ norm) as the criterion for the loss function. The reason this works in most cases (especially when the outliers come from a non-normal distribution) is that the squared error blows up large errors: the $L2$ norm gives more weight to large residuals ($|y-w^Tx|^2 \gg |y-w^Tx|$, and we are trying to minimise this error), while the $L1$ norm gives equal weight to all residuals.
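A small numeric illustration of this point (hypothetical numbers): a single outlier with residual $r = 10$ contributes
\begin{align*}
L_2:\ |y - w^Tx|^2 = 10^2 = 100, \qquad L_1:\ |y - w^Tx| = 10,
\end{align*}
so the squared-error criterion penalizes the outlier ten times more heavily than absolute deviation does, and pulls the fit correspondingly harder toward it.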
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(c)}
\problemAnswer{ % Answer
A quick way to realise this is to consider the "scale" of any two independent variables. Say one of the independent variables is 'time'. Rescaling time from hours to minutes will also rescale its coefficient (approximately by a factor of 60), but the importance remains the same!
Another example is to consider a model with two independent variables that affect the dependent variable in a similar manner (i.e. are equally important regressors). However,
if they are on different scales, say $X_1$ on $[1,100]$ and $X_2$ on $[0,1]$, the coefficient of $X_1$ will be much smaller than that of $X_2$
in a linear regression setting.
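The rescaling effect can be made explicit with a one-line sketch: if a regressor is rescaled as $x_1' = 60\,x_1$ (e.g. hours to minutes), the fitted values are unchanged because
\begin{align*}
y = \beta_1 x_1 + \cdots = \frac{\beta_1}{60}\,(60\,x_1) + \cdots = \beta_1' x_1' + \cdots,
\end{align*}
so the coefficient shrinks by exactly the rescaling factor while the model's predictions, and hence the variable's importance, stay the same.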
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(d)}
\problemAnswer{ % Answer
If the independent variables are a perfect linear combination of one another, the matrix $XX^T$ will be non-invertible.
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(e)}
\problemAnswer{ % Answer
A simple solution would be to use $k-1$ bits instead of $k$ bits for k categories. For example, using the following setup for a 3 category setup:
\begin{align*}
Red &= 0\ 0\\
Green &= 1\ 0\\
Blue &= 0\ 1
\end{align*}
An alternate solution exploits the property that the features remain equidistant from the origin (though they are no longer equidistant from each other).
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(f)}
\problemAnswer{ % Answer
If the independent variables are highly correlated, the coefficients might still be entirely different, as in the example in Part (c) above.
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(g)}
\problemAnswer{ % Answer
Using a posterior probability cutoff of 0.5 in linear regression is not the same as a 0.5 cutoff in logistic regression. A 0.5 threshold on the logistic output guarantees that all points lying to one side of it belong to one class. For linear regression this is not true, because the predicted value of $y$ is an interpolated or extrapolated quantity that is not constrained to $[0,1]$.
In any case, logistic regression is the better choice, since its output is constrained to the range $[0,1]$ and can be treated directly as a probability, unlike the less intuitive output of linear regression.
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(h)}
\problemAnswer{ % Answer
When the number of variables exceeds the number of samples, the system is underdetermined. It can still be solved by obtaining the pseudo-inverse of $X$, which is always defined.
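Concretely (a sketch, assuming $X \in \mathbb{R}^{n\times p}$ with $n < p$ and full row rank), the minimum-norm solution via the pseudo-inverse is
\begin{align*}
\hat{w} = X^{+}y = X^T(XX^T)^{-1}y,
\end{align*}
which satisfies $X\hat{w} = y$ exactly; among all interpolating solutions it is the one with the smallest $\|w\|_2$.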
}
\end{homeworkSection}
\end{homeworkProblem}
\begin{homeworkProblem}[Problem 2]
\begin{homeworkSection}{\homeworkProblemName: ~(a)}
\problemAnswer{
Class 1: $\vec{x} = (x_1, x_2, \cdots x_{2D})$ where each $x_i \sim N(0, \sigma^2)$
Class 2: $\vec{x} = (x_1, x_2, \cdots x_D, x_{D+1}+\delta, \cdots x_{2D})$
From first principles, the discriminant curve is given by:
$$
P(y=1|x) \geq P(y=0|x)
$$
And hence we have:
\begin{align*}
\log(P(y=1|x)) &\geq \log(P(y=0|x))\\
\log(P(x|y=1)p(y=1)) &\geq \log(P(x|y=0)p(y=0))\\
\tag{1}\log(P(x|y=1))+\log(P(y=1)) &\geq \log(P(x|y=0)) + \log(p(y=0))\\
\end{align*}
Now, since $x$ is $2D$-dimensional and assuming independence of all attributes:
\begin{align*}
P(x|y=1) &= \prod_{i=1}^{2D} p(x_i|y=1) \\
\log(P(x|y=1)) &= \sum_{i=1}^{2D} \log(p(x_i|y=1))\\
\log(P(x|y=1)) &= -{D}\log(2\pi \sigma^2) -\sum_{i=1}^{2D}\frac{x_i^2}{2\sigma^2}\tag{2}
\end{align*}
Similarly for class $0$: $x_i \sim N(0, \sigma^2)\ \forall i \in \{1..D\}$ and
$x_i \sim N(\delta, \sigma^2)\ \forall i \in \{D+1..2D\}$. Notice that the latter is a shifted normal.
\begin{align*}
P(x|y=0) &= \prod_{i=1}^{2D} p(x_i|y=0) \\
\log(P(x|y=0)) &= \sum_{i=1}^{D} \log(p(x_i|y=0)) + \sum_{i=D+1}^{2D} \log(p(x_i|y=0)) \\
\log(P(x|y=0)) &= -{D}\log(2\pi \sigma^2) -\sum_{i=1}^{D}\frac{x_i^2}{2\sigma^2} - \sum_{i=D+1}^{2D}\frac{(x_i-\delta)^2}{2\sigma^2}\tag{3}
\end{align*}
}
\problemAnswer{
Plugging $(2),(3)$ in $(1)$ we get:
\begin{align*}
-D\log(2\pi \sigma^2) -\sum_{i=1}^{2D}\frac{x_i^2}{2\sigma^2} + \log(p(y=1)) &\geq -D\log(2\pi \sigma^2) -\sum_{i=1}^{D}\frac{x_i^2}{2\sigma^2} - \sum_{i=D+1}^{2D}\frac{(x_i-\delta)^2}{2\sigma^2} + \log(p(y=0))\\
\log(p(y=1)) - \sum_{i=D+1}^{2D}\frac{x_i^2}{2\sigma^2} &\geq - \sum_{i=D+1}^{2D}\frac{x_i^2-2\delta x_i + \delta^2}{2\sigma^2} + \log(p(y=0))\\
\log(p(y=1))-\log(p(y=0)) &\geq \frac{\delta}{\sigma^2} \sum_{i=D+1}^{2D}x_i - \frac{D\delta^2}{2\sigma^2}\\
\log(p(y=1))-\log(p(y=0)) &\geq \frac{\delta}{\sigma^2} \sum_{i=1}^{D}x_i - \frac{D\delta^2}{2\sigma^2}\\
-\frac{\delta}{\sigma^2} \sum_{i=1}^{D}x_i + \frac{D\delta^2}{2\sigma^2} &+ \log(p(y=1))-\log(p(y=0)) \geq 0
\end{align*}
}
%\clearpage
\problemAnswer{
Where the change of indices in the penultimate step is permitted since the $x_i$ are i.i.d. (after taking care of the shifted mean).
Now consider the general form solution of LDA and GDA:
\begin{align*}
\sum_i b_i x_i + c &\geq 0\tag{LDA}\\
\sum_i a_ix_i^2 + \sum b_i x_i + c &\geq 0\tag{GDA}
\end{align*}
Where $x_i$ represents the $i^{th}$ independent variable; each $x_i$ is a feature/attribute, so the form is not limited to the two-dimensional special case.
In this case, owing to the homoscedasticity assumption(the variance of the two class conditions being equal to $\sigma^2$) LDA and GDA return the same solution.
\underline{Solution form for $LDA$:}
\begin{align*}
-\frac{\delta}{\sigma^2} \sum_{i=1}^{D}x_i + \frac{D\delta^2}{2\sigma^2} &+ \log(p(y=1))-\log(p(y=0)) \geq 0
\end{align*}
Assuming equal priors, $p(y=1) = p(y=0)$,
\begin{align*}
-\frac{\delta}{\sigma^2} \sum_{i=1}^{D}x_i + \frac{D\delta^2}{2\sigma^2} \geq 0 \\
-\sum_{i=1}^{D}x_i + \frac{D\delta}{2} \geq 0
\end{align*}
and hence for LDA $b_i = 0\ \forall i \in \{1..D\}$ and $b_i = -\frac{\delta}{\sigma^2}\ \forall i \in \{D+1..2D\}$ (which simplifies to $-1$ for the case with equal priors), and $c = \frac{D\delta^2}{2\sigma^2}$ (which simplifies to $\frac{D\delta}{2}$ for the case with equal priors).
\textbf{In either case it does depend on $\delta$}
\underline{Solution for $GDA$}:
\begin{align*}
-\frac{\delta}{\sigma^2} \sum_{i=1}^{D}x_i + \frac{D\delta^2}{2\sigma^2} &+ \log(p(y=1))-\log(p(y=0)) \geq 0
\end{align*}
and hence $a_i = 0\ \forall i$, $b_i = 0\ \forall i \in \{1..D\}$ and $b_i = -\frac{\delta}{\sigma^2}\ \forall i \in \{D+1..2D\}$ (which simplifies to $-1$ for the case with equal priors).
\textbf{In either case it does depend on $\delta$}
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(b1)}
\problemAnswer{
$P(X|Y=c_1) \sim N(\mu_1, \Sigma)$ and $p(X|Y=c_2) \sim N(\mu_2, \Sigma)$
where $\mu_1, \mu_2 \in R^D, \Sigma \in R^{D\times D}$
\begin{align*}
P(Y=1|X) &= \frac{P(X|Y=1)P(Y=1)}{P(X)}\\
&= \frac{P(X|Y=1)P(Y=1)}{P(X|Y=1)P(Y=1)+P(X|Y=2)P(Y=2)}\\
&= \frac{1}{1+\frac{P(X|Y=2)P(Y=2)}{P(X|Y=1)P(Y=1)}}\\
&= \frac{1}{1+\exp(\log(\frac{P(X|Y=2)P(Y=2)}{P(X|Y=1)P(Y=1)}))}\\
&= \frac{1}{1+\exp(\log({P(X|Y=2)P(Y=2)})-\log({P(X|Y=1)P(Y=1)}))}\\
&= \frac{1}{1+\exp(-(\log(\frac{P(Y=1)}{P(Y=2)}))+\log(P(X|Y=2))-\log(P(X|Y=1)))}\tag{4}\\
\end{align*}
\begin{align*}
\log(P(X|Y=1)) &= -\frac{1}{2} \ln(|\Sigma|) - \frac{D}{2}\ln(2\pi) -\frac{1}{2}(x-\mu_1)^T\Sigma^{-1}(x-\mu_1)\\
\log(P(X|Y=2)) &= -\frac{1}{2} \ln(|\Sigma|) - \frac{D}{2}\ln(2\pi) -\frac{1}{2}(x-\mu_2)^T\Sigma^{-1}(x-\mu_2)\\
\log(P(X|Y=2))-\log(P(X|Y=1)) &= \frac{1}{2}(x-\mu_1)^T\Sigma^{-1}(x-\mu_1) - \frac{1}{2}(x-\mu_2)^T\Sigma^{-1}(x-\mu_2)\\
\log(P(X|Y=2))-\log(P(X|Y=1)) &= (\mu_2^T-\mu_1^T)\Sigma^{-1}x + \frac{1}{2}\mu_1^T \Sigma^{-1} \mu_1 - \frac{1}{2}\mu_2^T \Sigma^{-1} \mu_2\tag{5}
\end{align*}
Plugging $(5)$ into $(4)$:
\begin{align*}
P(Y=1|X) &= \frac{1}{1+\exp(-(\log(\frac{P(Y=1)}{P(Y=2)}) - \frac{1}{2}\mu_1^T \Sigma^{-1} \mu_1 + \frac{1}{2}\mu_2^T \Sigma^{-1} \mu_2 + (\mu_1^T-\mu_2^T)\Sigma^{-1}x ))}\\
P(Y=1|X) &= \frac{1}{1+\exp(-(C+\theta^T x ))}
\end{align*}
Where $$\boxed{C = \log\left(\frac{P(Y=1)}{P(Y=2)}\right) - \frac{1}{2}\mu_1^T \Sigma^{-1} \mu_1 + \frac{1}{2}\mu_2^T \Sigma^{-1} \mu_2}$$
$$\boxed{\theta = \Sigma^{-1}(\mu_1-\mu_2)}$$ since $(\Sigma^{-1})^T = \Sigma^{-1}$.
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(b2)}
\problemAnswer{
Given $p(y|x)$ is logistic
$P(Y=1|X) = \frac{1}{1+\exp(-(C+\theta^T x ))}$
Consider the simplification using first principles as in part(b1):
\begin{align*}
P(Y=1|X)&= \frac{1}{1+\exp(-(\log(\frac{P(Y=1)}{P(Y=2)}))+\log(P(X|Y=2))-\log(P(X|Y=1)))}
\end{align*}
Now, consider the distribution $P(X=x|Y=1) = e^{-\lambda_1} \frac{\lambda_1^x}{x!}$ and
$P(X=x|Y=2) = e^{-\lambda_2} \frac{\lambda_2^x}{x!}$
$\log(P(X|Y=2))-\log(P(X|Y=1)) = \lambda_1-\lambda_2 + x\log\frac{\lambda_2}{\lambda_1}$
and hence,
\begin{align*}
P(Y=1|X) &= \frac{1}{1+\exp(-\log(\frac{P(Y=1)}{P(Y=2)})+\lambda_1-\lambda_2 + x\log\frac{\lambda_2}{\lambda_1})}
\end{align*}
implying it is possible to arrive at a logistic regression expression even from a Poisson distribution, and hence $p(x|y)$ need not be Gaussian.
}
\end{homeworkSection}
\end{homeworkProblem}
\begin{homeworkProblem}{3}
\problemAnswer{
\begin{align*}
L(w_{i+1}, \lambda) &= ||w_{i+1}-w_i||_2^2+ \lambda(w_{i+1}^Tx_i)y_i\\
&= (w_{i+1}-w_i)^T(w_{i+1}-w_i) + \lambda(w_{i+1}^Tx_i)y_i\\
\Delta_{w_{i+1}} L &= 2w_{i+1}-2w_i + \lambda x_iy_i = 0 \tag{3.1} \\
\Delta_{\lambda} L &= w_{i+1}^Tx_iy_i = 0 \tag{3.2} \\
\end{align*}
Thus, from $3.1$ and $3.2$,
\begin{align*}
w_{i+1} &= w_i - \frac{1}{2}\lambda x_iy_i\\
&\text {where} \\
w_{i+1}^Tx_iy_i = 0
\end{align*}
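The multiplier can be eliminated explicitly (a sketch, assuming labels $y_i \in \{-1, +1\}$ so that $y_i^2 = 1$): substituting the update $w_{i+1} = w_i - \frac{1}{2}\lambda x_iy_i$ from $(3.1)$ into the constraint $(3.2)$ gives
\begin{align*}
\left(w_i - \tfrac{1}{2}\lambda x_i y_i\right)^T x_i y_i = 0
\quad\implies\quad
\lambda = \frac{2\,w_i^T x_i y_i}{\|x_i\|_2^2},
\end{align*}
so that $w_{i+1} = w_i - \frac{w_i^T x_i}{\|x_i\|_2^2}\,x_i$, i.e. the projection of $w_i$ onto the hyperplane $\{w : w^T x_i = 0\}$.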
}
\end{homeworkProblem}
\begin{homeworkProblem}{4}
\begin{homeworkSection}{\homeworkProblemName: ~(a)}
\begin{tabular}{|c|c|}
\hline Variable & $\# Missing$ \\
\hline pclass & 0 \\
\hline survival & 0 \\
\hline name & 0 \\
\hline sex & 0 \\
\hline age & 263 \\
\hline sibsp & 0 \\
\hline parch & 0 \\
\hline ticket & 0 \\
\hline fare & 1 \\
\hline cabin & 1014 \\
\hline embarked & 2 \\
\hline boat & 823 \\
\hline body & 1188 \\
\hline home & 564 \\
\hline
\end{tabular}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(b)}
\begin{figure}
\includegraphics{data/problem4b}
\caption{4a. Monotonic relationship}
\end{figure}
From the graph above we see that $pclass$ and $age$ might not be very informative, given that they do not show any monotonic relationship, whereas the remaining variables do.
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(c)}
\problemAnswer{
\begin{tabular}{|c|c|}
\hline Variable & Information \\
\hline home & 0.999467 \\
\hline name & 0.963746 \\
\hline sex & 0.963746 \\
\hline ticket & 0.963746 \\
\hline embarked & 0.963129 \\
\hline cabin & 0.940286 \\
\hline fare & 0.493161 \\
\hline boat & 0.095333 \\
\hline pclass & 0.066290 \\
\hline parch & 0.034475 \\
\hline age & 0.026518 \\
\hline sibsp & 0.012308 \\
\hline body & 0.00 \\
\hline
\end{tabular}
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(d)}
\problemAnswer{
MM: Multiple Models\\
SM: Substituted Models\\
IM: Individual Model\\
\begin{tabular}{|c|c|c|c|c|}
\hline & With age column(IM) & Without Age column(IM) & With NaN substituted(SM) & MM \\
\hline Training Accuracy & 0.776758 & 0.801223 & 0.793578 & 0.629969\\
\hline Testing Accuracy & 0.769466 & 0.781679 & 0.775573 & 0.781679\\
\hline
\end{tabular}
Thus, the individual model (IM) with the age column completely removed seems to have worked better than the MM. Though the MM performs worse than the SM in training, its performance is on par with the SM on the test dataset. Overall, the substituted model (SM) seems to have worked best, considering both the training and test datasets.
This indicates that the 'age' feature is not very informative, as is also evident from part (c) above, where 'age' ranks low in the information table.
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(e)}
\problemAnswer{
Total number of columns: $602$.
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(f)}
\begin{figure}
\includegraphics{data/accuracy}
\caption{4f. Training/testing accuracy v/s iteration}
\end{figure}
\problemAnswer{
The method of forward selection seems to have worked well. The training accuracy increased by increasing the number of features iteratively. This also lead to an increase in test accuracy though only marginally.
As evident, the training accuracy plot seems to flatten (and hence saturate) near 0.85 for around 10 features. So $10$ features can be assumed to be an optimal choice for number of features.
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(g)}
Alpha: 1.000000e-03 Iterations to Converge: 4862 Accuracy: 7.022901e-01\\
Alpha: 2.000000e-03 Iterations to Converge: 6114 Accuracy: 7.423664e-01\\
Alpha: 3.000000e-03 Iterations to Converge: 5807 Accuracy: 7.843511e-01\\
Alpha: 4.000000e-03 Iterations to Converge: 5731 Accuracy: 8.015267e-01\\
Alpha: 5.000000e-03 Iterations to Converge: 5393 Accuracy: 8.091603e-01\\
Alpha: 6.000000e-03 Iterations to Converge: 5192 Accuracy: 8.187023e-01\\
Alpha: 7.000000e-03 Iterations to Converge: 4914 Accuracy: 8.339695e-01\\
Alpha: 8.000000e-03 Iterations to Converge: 4730 Accuracy: 8.473282e-01\\
Alpha: 9.000000e-03 Iterations to Converge: 4624 Accuracy: 8.454198e-01\\
Alpha: 1.000000e-02 Iterations to Converge: 4513 Accuracy: 8.454198e-01\\
Alpha: 1.100000e-02 Iterations to Converge: 4371 Accuracy: 8.492366e-01\\
Alpha: 1.200000e-02 Iterations to Converge: 4218 Accuracy: 8.549618e-01\\
Alpha: 1.300000e-02 Iterations to Converge: 4102 Accuracy: 8.625954e-01\\
Alpha: 1.400000e-02 Iterations to Converge: 4036 Accuracy: 8.645038e-01\\
Alpha: 1.500000e-02 Iterations to Converge: 4024 Accuracy: 8.664122e-01\\
Alpha: 1.600000e-02 Iterations to Converge: 4078 Accuracy: 8.721374e-01\\
Alpha: 1.700000e-02 Iterations to Converge: 4198 Accuracy: 8.740458e-01\\
Alpha: 1.800000e-02 Iterations to Converge: 4366 Accuracy: 8.759542e-01\\
Alpha: 1.900000e-02 Iterations to Converge: 4564 Accuracy: 8.797710e-01\\
Alpha: 2.000000e-02 Iterations to Converge: 4760 Accuracy: 8.835878e-01\\
Alpha: 2.100000e-02 Iterations to Converge: 4897 Accuracy: 8.835878e-01\\
Alpha: 2.200000e-02 Iterations to Converge: 4953 Accuracy: 8.874046e-01\\
Alpha: 2.300000e-02 Iterations to Converge: 4948 Accuracy: 8.874046e-01\\
Alpha: 2.400000e-02 Iterations to Converge: 4913 Accuracy: 8.912214e-01\\
Alpha: 2.500000e-02 Iterations to Converge: 4864 Accuracy: 8.912214e-01\\
Alpha: 2.600000e-02 Iterations to Converge: 4811 Accuracy: 8.931298e-01\\
Alpha: 2.700000e-02 Iterations to Converge: 4759 Accuracy: 8.912214e-01\\
Alpha: 2.800000e-02 Iterations to Converge: 4710 Accuracy: 8.931298e-01\\
Alpha: 5.000000e-02 Iterations to Converge: 4292 Accuracy: 9.217557e-01\\
Alpha: 5.100000e-02 Iterations to Converge: 4270 Accuracy: 9.236641e-01\\
Alpha: 5.200000e-02 Iterations to Converge: 4248 Accuracy: 9.236641e-01\\
Alpha: 5.300000e-02 Iterations to Converge: 4226 Accuracy: 9.236641e-01\\
Alpha: 5.400000e-02 Iterations to Converge: 4204 Accuracy: 9.217557e-01\\
Alpha: 5.500000e-02 Iterations to Converge: 4183 Accuracy: 9.217557e-01\\
Alpha: 5.600000e-02 Iterations to Converge: 4162 Accuracy: 9.217557e-01\\
Alpha: 5.700000e-02 Iterations to Converge: 4142 Accuracy: 9.217557e-01\\
Alpha: 1.300000e-01 Iterations to Converge: 3801 Accuracy: 9.408397e-01\\
Alpha: 1.310000e-01 Iterations to Converge: 3805 Accuracy: 9.408397e-01\\
Alpha: 1.320000e-01 Iterations to Converge: 3808 Accuracy: 9.408397e-01\\
Alpha: 1.330000e-01 Iterations to Converge: 3811 Accuracy: 9.408397e-01\\
Alpha: 1.340000e-01 Iterations to Converge: 3814 Accuracy: 9.408397e-01\\
Alpha: 1.350000e-01 Iterations to Converge: 3818 Accuracy: 9.408397e-01\\
Alpha: 1.360000e-01 Iterations to Converge: 3821 Accuracy: 9.408397e-01\\
Alpha: 1.370000e-01 Iterations to Converge: 3825 Accuracy: 9.408397e-01\\
Alpha: 1.380000e-01 Iterations to Converge: 3828 \textbf{ Accuracy: 9.408397e-01}\\
\problemAnswer{ Thus, with the slope parameter around 0.1, it takes approximately 4000 iterations to converge and gives an accuracy of 0.94. (Very low values of the slope parameter also seem to get stuck in a local minimum.) The method does seem to converge in a stable way.
\textbf{ glmfit accuracy: 0.984733}
}
\end{homeworkSection}
\begin{homeworkSection}{\homeworkProblemName: ~(h)}
\problemAnswer{
Number of iterations: 25
Accuracy: 0.583969
\textbf{ glmfit accuracy: 0.984733}\\
My Newton's method implementation appears to have a bug somewhere.
}
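For debugging reference, a minimal working Newton's-method fit for logistic regression can be sketched as follows (illustrative code on synthetic data, not the assignment's implementation; all names are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logreg_newton(X, y, tol=1e-8, max_iter=50):
    """Fit logistic regression by Newton's method; X should include a ones column."""
    w = np.zeros(X.shape[1])
    for it in range(1, max_iter + 1):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y)                        # gradient of negative log-likelihood
        H = X.T @ (X * (p * (1.0 - p))[:, None])    # Hessian: X^T diag(p(1-p)) X
        step = np.linalg.solve(H, grad)
        w -= step
        if np.linalg.norm(step) < tol:
            break
    return w, it
```

Because Newton's method uses curvature information, it should converge in far fewer iterations than gradient descent (typically under ten on a well-conditioned problem), which is a useful sanity check against the 25-iteration, 0.58-accuracy result above.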
\end{homeworkSection}
\end{homeworkProblem}
\end{document}
!---------------------------------------------------------------------------
!
!> \file CheckStagnation.f90
!> \brief Check if the system is stagnant.
!> \author M.H.A. Piro
!> \date July 4, 2012
!> \sa CheckPhaseAssemblage.f90
!
!
! Revisions:
! ==========
!
! Date Programmer Description of change
! ---- ---------- ---------------------
! 07/04/2012 M.H.A. Piro Original code (happy Independence Day)
!
!
! Purpose:
! ========
!
!> \details The purpose of this subroutine is to check whether the system
!! has become stagnant by determining the number of solution phases that
!! have molar quantities that changed by a specified value. For example,
!! this subroutine will return the number of solution phases that have
!! changed by more than 5%.
!
!
! Pertinent variables:
! ====================
!
!> \param[in] dMolesPhaseChange A double real scalar representing the
!! relative change of the number of moles
!! of a solution phase (e.g., = 0.05).
!> \param[out] nPhasesCheck An integer scalar representing the
!! number of solution phases that have
!! molar quantities that have changed more
!! than the specified amount.
!> \param[out] dMaxChange A double real scalar representing the
!! maximum relative change of dMolesPhase.
!
!---------------------------------------------------------------------------
subroutine CheckStagnation(dMolesPhaseChange,dMaxChange,nPhasesCheck)

    USE ModuleThermo
    USE ModuleGEMSolver

    implicit none

    integer :: nPhasesCheck, j, k
    real(8) :: dMolesPhaseChange, dTemp, dMaxChange

    ! Initialize variables:
    nPhasesCheck = 0
    dMaxChange   = 0D0

    ! Check to make sure that dMolesPhaseChange is a reasonable value:
    if (dMolesPhaseChange > 0D0) then

        ! Loop through all solution phases:
        do j = 1, nSolnPhases

            ! Determine relative solution phase index:
            k = nElements - j + 1

            ! Compute the relative change of the number of moles of this solution phase:
            dTemp = DABS(dMolesPhase(k) - dMolesPhaseLast(k)) / dMolesPhaseLast(k)

            ! Count the number of phases that have significant changes to their molar quantities:
            if (dTemp >= dMolesPhaseChange) nPhasesCheck = nPhasesCheck + 1

            ! Compute maximum fractional change:
            dMaxChange = DMAX1(dTemp, dMaxChange)

        end do

    end if

    return

end subroutine CheckStagnation
It is no secret that the husband and I have built a fairly extensive network at home. It started way back when I studied towards the NT4 MCSE, and, over the years, as new products were released, we added those products to our network to further our learning.
Yesterday, we wiped our domain controller, and started fresh on 2008. Using the Add Roles wizard got a little confusing, so I reverted to the more familiar dcpromo, which made a lot more sense, and didn’t feel much different from 2003. Of course, the AD roles are now extended and sparkly new, so you have to pay attention during the wizard. DO NOT just click next, next, finish.
Of course, our Hyper-V machine is also running 2008, but I had precious little to do with that install – the husband did it in the dead of night one night when he couldn’t sleep.
I had a couple of issues initially with the DNS setup. No reverse lookup zone was created, and there were a couple of other things I needed to tweak as well. I am a little concerned, because the self-tests continuously fail, so I am still not convinced that the DNS install is 100% super-duper, but, for now, the network is working, so I am not going to play too much right now (i.e., I will fix this later).
We have also been doing a SQL consolidation, and I am going to attempt to rewrite our intranet in ASP.net with a SQL2008 back-end. I have been threatening for years to do this, and I suppose that time has come.
One of the reasons we decided to start over was because we had been installing a variety of services into the old domain that made a bit of a mess to the schema, especially because we didn’t clean up correctly – reinstalled machines without removing the applications correctly, that kind of thing. One of the big culprits here was LCS.
Granted, we make these mistakes because it is a home environment, so it is not critical to achieve 9 sigma, but we have also learnt some good lessons that we may actually one day apply in corporate environments.
And while it is not important at home to have 100% uptime, we do strive to stay up as much as possible, especially because we do actually make use of some of these services to keep our home running, such as scheduling all family outings via Exchange and keeping track of our budget and shopping lists via our intranet web. And our internet connection needs to be up as much as possible, because our daughter needs it for homework (and, admittedly, I am an addict).
import cv2
import numpy as np


def getDet(old_frame, curr_frame, p0, win_s=10, numLevel=2):
    """Track points p0 from old_frame to curr_frame using pyramidal Lucas-Kanade optical flow."""
    lk_params = dict(
        winSize=(win_s, win_s),
        maxLevel=numLevel,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03),
    )

    # Optical flow operates on single-channel images:
    old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
    frame_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # calcOpticalFlowPyrLK expects an (N, 1, 2) float32 array of points:
    p = p0.reshape((-1, 1, 2)).astype(np.float32)
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p, None, **lk_params)

    return p1.reshape((1, -1, 2))
\documentclass[12pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{graphicx, float, amsmath, physics, amssymb, mathtools, siunitx}
\usepackage{hyperref, subcaption}
\hypersetup{
colorlinks=true,
linkcolor=black,
filecolor=black,
urlcolor=blue,
citecolor=black,
}
\renewcommand{\baselinestretch}{1.2}
\graphicspath{{./Plots/}}
\begin{document}
\begin{flushright}
Tim Skaras
Advisor: Professor Mustafa Amin
\end{flushright}
\begin{center}
{\LARGE Faraday Wave Experiment}
\end{center}
\section{Introduction}
Bose-Einstein Condensation (BEC) is a state of matter in boson gases that results when atoms, cooled very close to absolute zero, abruptly accumulate in their ground state \cite{schroeder1999introduction}. This state of matter has been experimentally realized using various atomic vapors \cite{pethick2008bose}. In particular, a laboratory here at Rice led by Professor Randall Hulet has studied Bose-Einstein Condensation in lithium. Professor Hulet's lab is able to tune the interaction strength between atoms by exploiting a shallow zero-crossing of a Feshbach resonance, and, due to nonlinearity, this can result in rich dynamics such as the creation of matter-wave soliton trains \cite{nguyen2017formation}. Because it is difficult to study nonlinear systems analytically, computational tools have been important for understanding the behavior of these systems.
\subsection{Project Overview}
More recently, Professor Hulet's lab has performed experiments on the emergence of Faraday waves in a Bose-Einstein condensate. In these experiments, the scattering length is modulated periodically for a fixed period of time and then held constant. They observe that, after the modulation stops, perturbations of particular wavenumbers grow rapidly, resulting in the emergence of Faraday waves.
Using a mean field theory, it can be shown that the dynamics of a Bose-Einstein condensate are governed by the time-dependent Gross-Pitaevskii Equation \cite{pethick2008bose}
\begin{equation}
i \hbar \frac{\partial \psi(\mathbf{r}, t)}{\partial t}=-\frac{\hbar^{2}}{2 m} \nabla^{2} \psi(\mathbf{r}, t)+V(\mathbf{r}) \psi(\mathbf{r}, t)+ g|\psi(\mathbf{r}, t)|^{2} \psi(\mathbf{r}, t), \quad \quad g \equiv 4 \pi \hbar^{2} a_{s} / m
\label{DimGPE3D}
\end{equation}
\nolinebreak
where $\psi$ is a field describing the number density of atoms in real space, hence the total particle number $N$ satisfies
\begin{equation}
\int_{\mathbb{R}^{3}} d \mathbf{r}|\psi(\mathbf{r})|^{2} = N
\end{equation}
The parameter $g$ corresponds to the interaction strength between atoms ($g>0$ corresponding to repulsive interactions and $g < 0$ corresponding to attractive interactions), and is defined in terms of the scattering length $a_s$ and the atomic mass $m$.
In this project, I numerically solve this nonlinear, partial differential equation assuming a cigar-shaped trapping potential in three spatial dimensions to computationally confirm recent experimental results from Professor Hulet's lab by showing that the BEC's perturbations in Fourier space grow in the expected way.
\section{Methods}
\subsection{Dimensionless GPE}
\label{sec:DimGPE}
Though the form given for the Gross-Pitaevskii equation in (\ref{DimGPE3D}) is physically correct, the units in which the equation has been expressed are not appropriate for the time and length scale of our problem. To remedy this, we will first scale the field
\begin{equation}
\frac{\psi}{\sqrt{N}} \rightarrow \psi
\end{equation}
which now gives the PDE
\begin{equation}
i \hbar \frac{\partial \psi(\mathbf{r}, t)}{\partial t}=-\frac{\hbar^{2}}{2 m} \nabla^{2} \psi(\mathbf{r}, t)+V(\mathbf{r}) \psi(\mathbf{r}, t)+ gN|\psi(\mathbf{r}, t)|^{2} \psi(\mathbf{r}, t), \quad \quad g \equiv 4 \pi \hbar^{2} a_{s} / m
\label{GPEUnity}
\end{equation}
with a normalization condition that is set to unity
\begin{equation}
\int_{\mathbb{R}^{3}} d \mathbf{r}|\psi(\mathbf{r})|^{2} = 1
\end{equation}
To make the GPE dimensionless, we stipulate that the potential is harmonic
\begin{equation}
V(\textbf{r}) = \frac{m}{2}\left(\omega_{x}^{2} x^{2}+\omega_{y}^{2} y^{2}+\omega_{z}^{2} z^{2}\right)
\end{equation}
and then choose a convenient characteristic length $a_0$ and scale our coordinates as follows
\begin{equation}
\omega_{x} t \rightarrow t, \quad \mathbf{r} / a_{0} \rightarrow \mathbf{r}, \quad a_{0}^{3 / 2} \psi \rightarrow \psi \quad \text { where } \quad a_{0} \equiv \sqrt{\frac{\hbar }{ m \omega_{x}}}
\label{scaling}
\end{equation}
Plugging (\ref{scaling}) into (\ref{GPEUnity}) and simplifying we find the GPE in dimensionless form can be written as
\begin{equation}
i \frac{\partial \psi(\mathbf{r}, t)}{\partial t}= -\frac{1}{2 } \nabla^{2} \psi(\mathbf{r}, t)+V(\mathbf{r}) \psi(\mathbf{r}, t)+ gN|\psi(\mathbf{r}, t)|^{2} \psi(\mathbf{r}, t), \quad \quad g \equiv 4 \pi a_{s}
\label{GPE3D}
\end{equation}
where the potential has been scaled so that one of the trapping frequencies is unity
\begin{equation*}
V(\textbf{r}) = \frac{1}{2}\left( x^{2}+\gamma_{y}^{2} y^{2}+\gamma_{z}^{2} z^{2}\right), \quad \gamma_y \equiv \frac{\omega_y}{\omega_x}, \quad \gamma_z \equiv \frac{\omega_z}{\omega_x}
\end{equation*}
For the Faraday Wave experiment that we will be considering, the trapping potential is cylindrically symmetric with $\omega_x = \omega_y$. We thus define $\omega_\rho$ as the trapping frequency along the x-axis and y-axis (henceforth called the radial direction), thus $\omega_\rho = \omega_x = \omega_y$. So the trapping potential is really
\begin{equation}
V(\textbf{r}) = \frac{1}{2}\left( \rho^{2}+\gamma_{z}^{2} z^{2}\right), \quad \rho^2 = x^2 + y^2, \quad \gamma_z \equiv \frac{\omega_z}{\omega_\rho}.
\end{equation}
\subsection{Numerical Methods}
To solve this nonlinear PDE, I use a time-splitting spectral (TSSP) method while assuming periodic boundary conditions. The general idea behind applying this method is that some operators on the RHS of (\ref{GPE3D}) are easy to apply in momentum space (e.g., the second order derivative) and some are easy to apply in position space (e.g., the potential term). To exploit this fact, this method involves shifting the wave function solution between its real space and momentum space representation via fourier transformation and applying each operator component in the space it is most easily applied.
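The core of one TSSP step can be sketched in one spatial dimension as follows (a minimal NumPy illustration, not the production CuPy code; the grid and parameter names are mine):

```python
import numpy as np

def tssp_step(psi, dt, k2, V, gN):
    """One second-order (Strang) split step of the dimensionless GPE."""
    # Half step of the potential + nonlinear part, exact in position space:
    psi = psi * np.exp(-0.5j * dt * (V + gN * np.abs(psi) ** 2))
    # Full step of the kinetic part, exact in Fourier space:
    psi = np.fft.ifft(np.exp(-0.5j * dt * k2) * np.fft.fft(psi))
    # Second half step of the potential + nonlinear part:
    return psi * np.exp(-0.5j * dt * (V + gN * np.abs(psi) ** 2))
```

Every substep is a unit-modulus phase multiplication (in position or Fourier space), which is why the scheme conserves the particle number to machine precision.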
Before I can start solving the nonlinear PDE, however, I must find the correct initial condition. These experiments in Hulet's lab are performed on BEC that is very close to its ground state. The ground state for the GPE is not known analytically in this trapping potential, so I will use finite difference methods to implement the imaginary-time propagation method \cite{chiofalo2000ground, muruganandam2009fortran}, which will allow me to numerically calculate the ground state.
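The idea behind imaginary-time propagation can be sketched as follows (shown here with a spectral step for brevity, whereas the text uses finite differences; all names are illustrative). Substituting $t \to -i\tau$ turns the phase factors into real decay factors that damp excited states faster than the ground state, so repeated stepping with renormalization converges to the ground state:

```python
import numpy as np

def imag_time_step(psi, dtau, k2, V, gN, dx):
    """One imaginary-time step (t -> -i*tau) followed by renormalization."""
    psi = psi * np.exp(-0.5 * dtau * (V + gN * np.abs(psi) ** 2))
    psi = np.fft.ifft(np.exp(-0.5 * dtau * k2) * np.fft.fft(psi)).real
    psi = psi * np.exp(-0.5 * dtau * (V + gN * np.abs(psi) ** 2))
    # Imaginary-time evolution is not unitary, so restore the norm each step:
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
```

For the non-interacting 1D harmonic trap this converges to the known Gaussian ground state, which makes a convenient correctness check.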
The TSSP method is described in detail in \cite{bao2003numerical} for a one dimensional version of the GPE, and the method can be straightforwardly generalized to three spatial dimensions without affecting the method's validity. The advantages of this method are multifold: the TSSP method is unconditionally stable, time reversible, time-transverse invariant, conserves particle number, and is second order accurate in space and time \cite{bao2003numerical}. One noteworthy drawback is that this method is not symplectic, i.e., it does not conserve energy.
Time reversible means that if the solution $\psi_{n+1}$ at $t_{n+1}$ is obtained by applying the TSSP method to the solution $\psi_n$ at $t_n$, then the past state can be re-obtained from $\psi_{n+1}$ by using TSSP with time step $-\Delta t = -(t_{n+1} - t_n)$. Time-transverse invariance or gauge invariance refers to the property of the GPE that if $V \rightarrow V + \alpha$ where $\alpha \in \mathbb{R}$, then the solution $\psi \rightarrow \psi e^{i\alpha t}$, which implies that number density of atoms $|\psi|^2$ is unchanged under this transformation. This method respects this property by producing the corresponding changes in $\psi$ when $V$ is transformed in this way \cite{antoine2013computational}.
\subsection{Implementation and Validation}
I have implemented this spectral method in Python\footnote{My \href{https://github.com/TimSkaras/GPE-SpectralMethod}{github} repository}. I used the CuPy package\footnote{\href{https://cupy.chainer.org}{CuPy website}} to implement this method because this package provides a NumPy-like environment that performs computations on a GPU. This allowed my code to achieve considerable speedup (almost an order of magnitude for certain problems) over a serial implementation. I will now consider some of the test cases that I have used to ensure I have correctly implemented the time-splitting spectral method and the imaginary-time propagation method.
\subsubsection{Homogeneous Solution}
It can be trivially verified that
\begin{equation}
\psi(t) = A e^{-i g|A|^2 t}, \quad A \in \mathbb{C}
\end{equation}
is a solution to (\ref{GPE3D}). It should be noted that the method will accurately solve a constant initial condition ($\psi(\textbf{x},0) = A$) only when the time step is small compared to the period of oscillation, which is determined by the angular frequency $\omega = g |A|^2$. In the file \href{https://github.com/TimSkaras/GPE-SpectralMethod/blob/master/Tests/homogeneous_test.py}{homogeneous\_test.py}, I solve a homogeneous initial condition with $A = \frac{1}{2}$ and $g = 10$, thus giving $T = \frac{2\pi}{\omega} = 2.51$. Our time step is $\Delta t \approx 0.0179$. For more details on parameters, see the file linked in this document.
I solve the problem with this initial condition until $t = 2.5$. When the code is run using a GPU, the maximum error at any one gridpoint is on the order of $10^{-14}$ and the normalization loss ratio (the change in the normalization divided by the norm at $t=0$) is also on the order of $10^{-14}$. Running the code in serial gives a max error and norm loss ratio of the same order of magnitude.
\subsubsection{Plane Wave}
It is also possible to show that the Gross-Pitaevskii Equation admits a plane wave solution with the form
\begin{equation}
\psi(\textbf{r},t) = Ae^{i(\textbf{k} \dotproduct \textbf{r} - \omega t)}, \quad A \in \mathbb{C}
\end{equation}
provided that $\omega$ and $\textbf{k}$ satisfy the dispersion relation
\begin{equation}
\omega(\textbf{k}) = \frac{1}{2}|\textbf{k}|^2 + g \abs{A}^2
\end{equation}
In the file \href{https://github.com/TimSkaras/GPE-SpectralMethod/blob/master/Tests/planewave_test.py}{planewave\_test.py}, I use this analytic solution to again test the accuracy of my solver. I use the same values for $A$ and $g$, but with $\textbf{k} = (\frac{2\pi}{L_x}, 2\frac{2\pi}{L_y},3\frac{2\pi}{L_z})$ the period is now $T = \frac{2\pi}{\omega} = 1.97$. Solving this plane wave initial condition until $t = 2.5$ gives a max error on the order of $10^{-13}$ and a norm loss ratio on the order of $10^{-14}$. Changing from GPU to serial has no significant effect on the error and loss of normalization.
Because this initial condition is spatially varying, it gives us a little more information on how well the solver is working. The spatial derivative in (\ref{GPE3D}) is handled entirely separately from the potential and nonlinear terms. The TSSP method splits (\ref{GPE3D}) into two equations that can each be solved exactly and then combines the splitting steps using Strang splitting to give a second-order accurate solution. The equation with the Laplacian is solved using Fourier transforms, and in the code this is handled with the FFT. This test and the previous one are useful for exercising these different components and isolating which part of the solver has a problem, if any.
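In fact, for a single Fourier mode the kinetic and nonlinear phase factors commute (since $|\psi|^2$ is spatially constant), so the split-step update reproduces the plane-wave solution essentially to machine precision. This can be checked directly with a small one-dimensional sketch (parameters are illustrative):

```python
import numpy as np

L, N, g, dt = 8.0, 64, 10.0, 0.01
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
A, k0 = 0.5, 3 * (2 * np.pi / L)           # plane-wave amplitude and wavenumber
omega = 0.5 * k0 ** 2 + g * abs(A) ** 2    # dispersion relation
psi = A * np.exp(1j * k0 * x)

def step(psi):
    # Strang splitting with V = 0: nonlinear half step, kinetic step, half step
    psi = psi * np.exp(-0.5j * dt * g * np.abs(psi) ** 2)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))
    return psi * np.exp(-0.5j * dt * g * np.abs(psi) ** 2)

for n in range(100):
    psi = step(psi)
exact = A * np.exp(1j * (k0 * x - omega * 100 * dt))
err = np.max(np.abs(psi - exact))
```

Here the splitting error vanishes because the two substeps commute on a single mode, so `err` is limited only by floating-point roundoff.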
\subsubsection{Thomas-Fermi Ground State}
The previous two test cases are meant to ensure that the code is correctly solving equation (\ref{GPE3D}), but it is also necessary to test that I have correctly implemented the imaginary-time propagation method. Though we have no exact ground state to which we can compare our numerical result, it is possible to write down an approximate ground state for certain limiting cases of the GPE \cite{bao2003numerical, bao2003ground}.
As explained in the references just cited, a stationary solution $\phi(\textbf{x})$ of the GPE can be written as
\begin{equation}
\psi(\textbf{r},t) = e^{-i\mu t}\phi(\textbf{r})
\label{stationary}
\end{equation}
where $\mu$ is the chemical potential. Plugging (\ref{stationary}) into (\ref{GPE3D}) gives us the equation $\phi(\textbf{x})$ must satisfy to be a stationary solution
\begin{equation}
\mu \phi(\mathbf{r})=-\frac{1}{2} \nabla^{2} \phi(\mathbf{r})+V(\mathbf{r}) \phi(\mathbf{r})+ g|\phi(\mathbf{r})|^{2} \phi(\mathbf{r})
\label{StationaryGPE}
\end{equation}
with the normalization condition
\begin{equation}
\int_{\mathbb{R}^{3}}|\phi(\mathbf{r})|^{2} d \mathbf{r}=1.
\label{NormCondition}
\end{equation}
The ground state is the stationary solution satisfying (\ref{StationaryGPE}) and (\ref{NormCondition}) while also minimizing the energy functional
\begin{equation}
E[\phi] := \int_{\mathbb{R}^{3}} \left[ \frac1{2} |\nabla \phi|^{2} + V(\textbf{r})|\phi|^{2} + \frac{g}{2}|\phi|^{4} \right] d \mathbf{r}
\label{Energy}
\end{equation}
This ground state solution is unique if we also require that $\phi$ be real. In the strongly-interacting regime where $g N \gg a_{0}$ and $g > 0$, the ground state can be approximated as
\begin{equation}
\mu_{g}=\frac{1}{2}\left(\frac{15 gN \gamma_y\gamma_{z}}{4 \pi}\right)^{2 / 5}, \phi_{g}(\mathbf{x})=\left\{\begin{array}{ll}{\sqrt{\frac{\mu_{g}-V(\mathbf{x}) }{gN}},} & {V(\mathbf{x})<\mu_{g}} \\ {0,} & {\text { otherwise }}\end{array}\right.
\label{StrongGS}
\end{equation}
This approximate ground state is known as the Thomas-Fermi ground state. It can be derived from the Thomas-Fermi approximation, in which the kinetic term in (\ref{GPE3D}) is dropped from the equation.
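As a quick consistency check, the chemical potential $\mu_g$ in (\ref{StrongGS}) is chosen precisely so that the Thomas-Fermi profile carries unit norm; this can be confirmed numerically (an isotropic trap, $\gamma_y = \gamma_z = 1$, with illustrative grid parameters):

```python
import numpy as np

gN = 2000.0                                      # interaction strength
mu = 0.5 * (15.0 * gN / (4.0 * np.pi)) ** 0.4    # Thomas-Fermi chemical potential
L, n = 12.0, 64                                  # box size and points per dimension
grid = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
V = 0.5 * (X ** 2 + Y ** 2 + Z ** 2)
phi_sq = np.where(V < mu, (mu - V) / gN, 0.0)    # |phi_g|^2 from the TF formula
norm = phi_sq.sum() * (L / n) ** 3               # Riemann-sum approximation of the integral
```

The quadrature error comes mainly from the sharp Thomas-Fermi boundary, which is the same low-regularity feature that makes the energy of $\phi_g$ ill-behaved, as discussed below.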
In the file \href{https://github.com/TimSkaras/GPE-SpectralMethod/blob/master/Tests/TFgs_test.py}{TFgs\_test.py}, I have used my implementation of the imaginary-time propagation method to find the ground state in a trap with $\omega_x = \omega_y = \omega_z = 1$ and $gN = 2000$. On a grid of points $64 \times 64 \times 64$ points, I have compared my numerically calculated ground state with the analytic approximation given in (\ref{StrongGS}), and I have provided a plot of each in figure \ref{fig:TFgs}.
\begin{figure}[h]
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[scale=0.36]{Plots/TFgsNumerical}
\caption{}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[scale=0.35]{Plots/TFgsAnalytic}
\caption{}
\end{subfigure}
\caption{These are plots of $|\psi|^2$ for the numerically calculated solution (a) and the analytic approximation (b). The x-dimension has been integrated out to make plotting possible. The two are very similar except at the boundary, where the analytic approximation becomes zero.}
\label{fig:TFgs}
\end{figure}
To compare how close the numerical solution is to the analytic approximation, I used equation (\ref{Energy}) to calculate the ground state energy for each. The numerical solution has an energy of 8.28 and the analytic approximation has an energy of 8.37. The maximum error between the two wavefunctions was 0.010.
These figures are not as promising as the tests for the GPE solver were, but there may be good reason for this. The energy of the Thomas-Fermi ground state is actually infinite, namely
\begin{equation}
E[\phi_g] = +\infty
\end{equation}
due to the low regularity of $\phi_g$ at the free boundary where $V(x) = \mu_g$. To remedy this issue, an interface layer has to be constructed at this boundary to improve the accuracy of the approximation \cite{bao2003ground}.
\section{Results}
\subsection{Experimental Setup}
In the experiment, the lithium BEC is confined in a cylindrically symmetric harmonic trap. Let $\omega_\rho$ be the trapping frequency in the radial direction and $\omega_z$ the trapping frequency in the axial direction (i.e., along the z-axis). The experimental parameters are
\begin{equation}
\left(\frac{\omega_{\rho}}{2 \pi}, \frac{\omega_{z}}{2 \pi}\right)=(476 \,\mathrm{Hz},7 \,\mathrm{Hz}) , \quad a_{0}=1.74 \mu \mathrm{m}, \quad N \approx 7 \times 10^{5}, \quad a_s \approx 4r_{\text{Bohr}}, \quad t_{mod} = 5~\textrm{ms}
\end{equation}
Because $\omega_\rho \gg \omega_z$, the BEC in its ground state is highly elongated along the axial direction. In the experiment, the scattering length is modulated periodically such that
\begin{equation}
g(t) =
\begin{cases}
g_0(1 + \epsilon \sin(\omega t)) & \text{if $0\leq t \leq t_{mod}$} \\
g_0 & \text{else}
\end{cases}
\end{equation}
where $g_0$ is the value of $g = 4\pi a_s$ before the experiment begins, $t_{mod}$ is the length of time over which the scattering length is modulated, and $\epsilon = 0.2$. The modulation frequency of the scattering length $\omega$ varies depending on the experiment being performed.
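In a simulation, this piecewise drive is convenient to encode as a small helper (a sketch; the function and argument names are mine):

```python
import numpy as np

def g_of_t(t, g0, eps, omega, t_mod):
    """Scattering-length drive: modulated for 0 <= t <= t_mod, constant g0 otherwise."""
    t = np.asarray(t, dtype=float)
    driven = g0 * (1.0 + eps * np.sin(omega * t))
    return np.where((t >= 0.0) & (t <= t_mod), driven, g0)
```

This accepts either a scalar time or an array of times, so the same function works inside the time-stepping loop and for plotting the drive.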
When the scattering length is modulated in this way, the BEC expands and contracts within the trapping potential, and continues to do so after the scattering length is kept fixed. This expansion and contraction sets up a persistent modulation of the width which in turn drives the growth of periodic spatial perturbations along the axial direction of the BEC. The rate of growth for these spatial perturbations is dependent on the perturbation's wavenumber. There is one mode in particular that will grow faster than all other modes which we call $k_{max}$, and the timescale on which this dominant growing mode will emerge is dependent on the scattering length modulation frequency $\omega$ \cite{mustafa}.
In order to reconstruct the experiment in numerical simulation, these parameters must be converted to dimensionless form as described in section \ref{sec:DimGPE}. Doing so we obtain
\begin{equation}
\left(\omega_\rho, \omega_z\right)=(1 , \textstyle\frac{7}{476}), \quad g \approx 0.001528, \quad gN \approx 1070, \quad t_{mod} = 4.76\pi \approx 14.95.
\end{equation}
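These dimensionless values follow from a short unit conversion (a sketch using the standard value of the Bohr radius; variable names are mine):

```python
import numpy as np

r_bohr = 5.29177e-11             # Bohr radius [m]
a0 = 1.74e-6                     # characteristic trap length [m]
a_s = 4.0 * r_bohr               # scattering length [m]
N_atoms = 7e5
omega_rho = 2.0 * np.pi * 476.0  # radial trap frequency [rad/s]
omega_z = 2.0 * np.pi * 7.0      # axial trap frequency [rad/s]

g = 4.0 * np.pi * a_s / a0       # dimensionless interaction strength
gN = g * N_atoms
gamma_z = omega_z / omega_rho    # = 7/476
t_mod = omega_rho * 5e-3         # 5 ms in units of 1/omega_rho
```

Running this reproduces the quoted values $g \approx 0.001528$, $gN \approx 1070$, and $t_{mod} \approx 14.95$.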
We have computationally simulated a condensate with these conditions where $\omega = 2\omega_\rho$, and an animation of its time evolution can be downloaded \href{https://github.com/TimSkaras/GPE-SpectralMethod/blob/master/Animations/CodnensateAnimation.mp4}{here}. With this simulation, we can test how well energy is conserved by the numerical method. Unfortunately, the time-splitting spectral method is not guaranteed to conserve energy, so checking that the energy is conserved can be an indicator that the time step is not too large. Figure \ref{fig:energy} plots how the energy of the condensate changes with time. As we expect, the energy increases before $t_{mod}$ as a result of the changing scattering length. After $t_{mod}$ the scattering length is held constant and the energy remains relatively constant as well.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{energyPlot}
\caption{Energy of condensate over time}
\label{fig:energy}
\end{figure}
\subsection{Computational Results}
\label{subsec:CompResults}
Now, we analyze computational results that can be directly compared with experiment. Due to limitations on instrument precision in the laboratory, certain properties of the BEC cannot be measured. The results that can be compared with experimental measurement include: (1) how the time it takes for the dominant mode to reach its maximum amplitude varies with the scattering length modulation frequency $\omega$ and (2) how the wavenumber of the dominant mode varies with $\omega$.
First, we find the ground state for the trapping potential and experimental parameters described above and then evolve the system forward in time. After a long enough time has elapsed, the small spatial perturbations along the axial direction will grow until a dominant mode emerges. We use a Fourier transform to analyze the growth and amplitude of each mode of the spatial perturbations. One of the modes grows in amplitude until reaching a maximum; afterwards, this dominant mode decays in amplitude until it grows in size again. We denote by $t_{max}$ the time it takes for this dominant mode to reach its maximum. In figure \ref{fig:tmaxNat}, I have plotted how $t_{max}$ varies with the modulation frequency. We see that $t_{max}$ increases as $\omega$ moves away from the resonant frequency, which is $2\omega_\rho$, or twice the radial trapping frequency \cite{mustafa}. See figure \ref{fig:tmaxExp} to view this plot in the units used in experiment rather than in dimensionless form.
\begin{figure}[H]
\centering
\includegraphics[scale=0.9]{tmaxNatUnits}
\caption{This plot compares $t_{max}$ with the modulation frequency $\omega$. The maximum emerges the soonest when the modulation frequency is at its resonant frequency, which is twice the radial trapping frequency.}
\label{fig:tmaxNat}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.9]{tmaxExpUnits}
\caption{This plot compares $t_{max}$ with the modulation frequency $\omega$, shown in the units used in experiment. The maximum emerges the soonest when the modulation frequency is at its resonant frequency, which is twice the radial trapping frequency.}
\label{fig:tmaxExp}
\end{figure}
Next, we look at how the wavenumber of the dominant mode varies with $\omega$. Theoretical analysis predicts that the wavenumber of the dominant mode, $k_{dom} = 1.1$ in dimensionless units ($0.632$ \si{\micro\meter^{-1}}), should be independent of $\omega$ \cite{mustafa}. Our computational results are close to this prediction, with the wavenumber of the dominant mode remaining near the analytic value. In figure \ref{fig:kmaxNat}, we have plotted how the wavenumber of the dominant mode varies with the modulation frequency.
\begin{figure}[H]
\centering
\includegraphics[scale=0.9]{kmaxNatUnits}
\caption{This plot compares $k_{max}$ with the modulation frequency $\omega$.}
\label{fig:kmaxNat}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.9]{kmaxExpUnits}
\caption{This plot compares $k_{max}$ with the modulation frequency $\omega$, shown in the units used in experiment.}
\label{fig:kmaxExp}
\end{figure}
%There are two ways that we can define the ``dominant mode": the mode that reaches the maximum amplitude first and the mode that has the largest amplitude as the spatial perturbations leave the domain of validity. The analytic predictions for predicting what mode has the fastest growth rate \cite{mustafa} assume that the amplitude of each mode is small. This simplifies the analysis because it also means that the each of the spatial perturbation modes will grow independently of each other and not be coupled. When the amplitude of one or more mores has grown sufficiently large, this assumption no longer holds and the analytic predictions are cannot reliably predict behavior. The ``domain of validity" is the time frame in which this assumption of uncoupled modes still holds
\subsection{Island Formation}
One notable phenomenon in the animations that we have produced from these simulations is that modulating the scattering length produces an ``island", or isolated accumulation of atoms, to the left and right of the central cloud. For simulations with the parameters given in subsection \ref{subsec:CompResults}, the islands tend to form around $t = 285$, as illustrated in figure \ref{fig:islands1}. The full 1D animation of these dynamics can be viewed \href{https://github.com/TimSkaras/GPE-SpectralMethod/blob/master/Animations/CodnensateAnimation.mp4}{here}. Additionally, we have created a 2D animation of the same simulation, now displayed as a heat map, which can be viewed \href{https://github.com/TimSkaras/GPE-SpectralMethod/blob/master/Animations/heatMapIslands.mp4}{here}. I speculate that the island formation is the result of the periodic expansion and contraction of the atomic cloud and the growth of spatial patterns on the BEC surface that results from this expansion and contraction, rather than being directly caused by the periodic modulation of the scattering length \textit{per se}.
At the very least, it can be shown that this expansion and contraction can also be produced by modulating the potential for a fixed period of time, which ultimately produces similar dynamics by causing the atomic cloud to expand and contract. The same island formation can be seen in this 1D \href{}{animation} and in this 2D heat map \href{}{animation}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{IslandFormation1D}
\caption{This plot shows the islands around the time when they first form after the scattering length has been modulated at the resonant frequency. This plot shows the number density of atoms along the z-axis when $t = 285$.}
\label{fig:islands1}
\end{figure}
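The two definitions of the ``dominant mode'' above can be checked numerically by tracking the amplitude of each spatial mode over time. A minimal sketch of this diagnostic (assuming a 1D density profile on a uniform grid; the function and variable names are illustrative and not taken from the simulation code):

```python
import numpy as np

def mode_amplitudes(density, background):
    """Return |FFT| amplitudes of the density perturbation about a background.

    The mode whose amplitude peaks first, or the mode with the largest
    amplitude at the end of the domain of validity, can then be read off
    directly from these arrays at successive time steps.
    """
    perturbation = density - background
    return np.abs(np.fft.rfft(perturbation))

# Toy example: a small-amplitude mode-3 perturbation on a flat background.
z = np.linspace(0.0, 1.0, 256, endpoint=False)
background = np.ones_like(z)
density = background + 0.01 * np.cos(2 * np.pi * 3 * z)

amps = mode_amplitudes(density, background)
dominant = int(np.argmax(amps[1:])) + 1  # skip the k=0 (mean) component
# dominant == 3 for this toy profile
```

In the small-amplitude regime the modes evolve independently, so comparing these amplitude arrays across snapshots distinguishes the fastest-growing mode from the largest-amplitude mode once growth saturates.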
\bibliographystyle{abbrv}
\bibliography{FaradayExperimentBib}
\end{document}
(*
* Copyright 2020, Data61, CSIRO (ABN 41 687 119 230)
*
* SPDX-License-Identifier: BSD-2-Clause
*)
(*
* Accessing nested structs.
* Testcase for bug VER-321.
*)
theory nested_struct imports "AutoCorres.AutoCorres" begin
external_file "nested_struct.c"
install_C_file "nested_struct.c"
(* Nested struct translation currently only works for packed_type types. *)
instance s1_C :: array_outer_packed by intro_classes auto
instance s3_C :: array_outer_packed by intro_classes auto
autocorres "nested_struct.c"
context nested_struct begin
thm f'_def test'_def
lemma "\<lbrace> \<lambda>s. is_valid_point1_C s p1 \<and> is_valid_point2_C s p2 \<rbrace>
test' p1 p2
\<lbrace> \<lambda>_ s. num_C.n_C (point1_C.x_C (heap_point1_C s p1)) =
index (point2_C.n_C (heap_point2_C s p2)) 0 \<rbrace>!"
unfolding test'_def f'_def
apply wp
apply (clarsimp simp: fun_upd_apply)
done
thm g'_def
lemma "\<lbrace> \<lambda>h. is_valid_s4_C h s \<and>
s1_C.x_C (index (s2_C.x_C (s3_C.x_C (index (s4_C.x_C (heap_s4_C h s)) 0))) 0) = v \<rbrace>
g' s
\<lbrace> \<lambda>_ h. index (s4_C.x_C (heap_s4_C h s)) 0 = index (s4_C.x_C (heap_s4_C h s)) 1 \<and>
s3_C.x_C (index (s4_C.x_C (heap_s4_C h s)) 0) = s3_C.y_C (index (s4_C.x_C (heap_s4_C h s)) 0) \<and>
index (s2_C.x_C (s3_C.x_C (index (s4_C.x_C (heap_s4_C h s)) 0))) 0 =
index (s2_C.x_C (s3_C.x_C (index (s4_C.x_C (heap_s4_C h s)) 0))) 1 \<and>
s1_C.x_C (index (s2_C.x_C (s3_C.x_C (index (s4_C.x_C (heap_s4_C h s)) 0))) 0) =
s1_C.y_C (index (s2_C.x_C (s3_C.x_C (index (s4_C.x_C (heap_s4_C h s)) 0))) 0) \<and>
s1_C.x_C (index (s2_C.x_C (s3_C.x_C (index (s4_C.x_C (heap_s4_C h s)) 0))) 0) = v
\<rbrace>!"
unfolding g'_def
apply wp
apply (clarsimp simp: fun_upd_apply)
done
end
end
The picture book is a form of illustrated literature popularized in the twentieth century. Although the illustrations can use a range of media from oil painting to collage to quilting, they are most commonly watercolor or pencil drawings.
Picture books are for young children, and while some may have very basic language, most are written with vocabulary a child can understand but not necessarily read. For this reason, picture books tend to have two functions in the lives of children: they are first read to young children by adults, and then children read them themselves once they begin to learn to read.
The precursors of the modern picture book were illustrated books of rhymes and short stories produced by English illustrators Randolph Caldecott, Walter Crane, and Kate Greenaway in the late nineteenth century. These had a larger proportion of pictures to words than earlier books, and many of their pictures were in color.
The first book with a more modern format was Beatrix Potter's "The Tale of Peter Rabbit", originally published in 1902.
The Caldecott Medal, named for illustrator Randolph Caldecott, is given each year by the American Library Association to the illustrator of the best American picture book of that year.
# -*- coding: utf-8 -*-
"""Running ML algorithms on infraslow MEG PSD and DCCS."""
import os
import utils
import ml_tools
import numpy as np
import pandas as pd
from copy import deepcopy
# Global variables
glasser_rois = utils.ProjectData.glasser_rois
_, meg_sessions = utils.ProjectData.meg_metadata
card_sort_task_data = utils.load_behavior(behavior='CardSort_Unadj')
def _infraslow_psd_model(
alg, kernel, permute=False, seed=None, output_dir=None):
"""Predict DCCS using infraslow PSD."""
if not output_dir:
output_dir = os.path.abspath(os.path.dirname(__file__))
infraslow_psd = utils.load_infraslow_psd()
feature_selection_grid = {
'C': (.01, 1, 10, 100),
"gamma": np.logspace(-2, 2, 5)
}
regression_grid = None
if alg == 'SVM':
regression_grid = {
'C': (.01, 1, 10, 100, 1000),
"gamma": (1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1),
# 'degree': (2, 3, 4, 5)
}
ML_pipe = ml_tools.ML_pipeline(
predictors=infraslow_psd,
targets=card_sort_task_data,
feature_selection_gridsearch=feature_selection_grid,
model_gridsearch=regression_grid,
feature_names=glasser_rois,
session_names=meg_sessions,
random_state=seed,
debug=True)
ML_pipe.run_predictions(model=alg, model_kernel=kernel)
if not permute:
ml_tools.save_outputs(ML_pipe, output_dir)
else:
ML_pipe.debug = False
perm_dict = ml_tools.perm_tests(ML_pipe, n_iters=permute)
utils.save_xls(
perm_dict, os.path.join(output_dir, 'permutation_tests.xlsx'))
def _alpha_psd_model(
alg, kernel, permute=False, seed=None, output_dir=None):
"""Predict DCCS using alpha PSD."""
if not output_dir:
output_dir = os.path.abspath(os.path.dirname(__file__))
alpha_psd = utils.load_alpha_psd()
feature_selection_grid = {
'C': (.01, 1, 10, 100),
"gamma": np.logspace(-2, 2, 5)
}
regression_grid = None
if alg == 'SVM':
regression_grid = {
'C': (.01, 1, 10, 100, 1000),
"gamma": (1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1),
# 'degree': (2, 3, 4, 5)
}
ML_pipe = ml_tools.ML_pipeline(
predictors=alpha_psd,
targets=card_sort_task_data,
feature_selection_gridsearch=feature_selection_grid,
model_gridsearch=regression_grid,
feature_names=glasser_rois,
session_names=meg_sessions,
random_state=seed,
debug=True)
ML_pipe.run_predictions(model=alg, model_kernel=kernel)
if not permute:
ml_tools.save_outputs(ML_pipe, output_dir)
else:
ML_pipe.debug = False
perm_dict = ml_tools.perm_tests(ML_pipe, n_iters=permute)
utils.save_xls(
perm_dict, os.path.join(output_dir, 'permutation_tests.xlsx'))
def _infraslow_pac_model(
alg, kernel, permute=False, seed=None, output_dir=None, rois=None):
"""Predict DCCS using infraslow PAC."""
if not output_dir:
output_dir = os.path.abspath(os.path.dirname(__file__))
if not rois:
rois = glasser_rois
infraslow_pac = utils.load_phase_amp_coupling(rois=rois)
latent_vars = ml_tools.plsc(infraslow_pac, card_sort_task_data)
feature_selection_grid = {
'C': (.01, 1, 10, 100),
"gamma": np.logspace(-2, 2, 5)
}
regression_grid = None
if alg == 'SVM':
regression_grid = {
'C': (.01, 1, 10, 100, 1000),
"gamma": (1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1),
# 'degree': (2, 3, 4, 5)
}
ML_pipe = ml_tools.ML_pipeline(
# predictors=infraslow_pac,
predictors=latent_vars,
targets=card_sort_task_data,
run_PLSC=False,
feature_selection_gridsearch=feature_selection_grid,
model_gridsearch=regression_grid,
feature_names=glasser_rois,
session_names=meg_sessions,
random_state=seed,
debug=True)
ML_pipe.run_predictions(model=alg, model_kernel=kernel)
if not permute:
ml_tools.save_outputs(ML_pipe, output_dir)
else:
ML_pipe.debug = False
perm_dict = ml_tools.perm_tests(ML_pipe, n_iters=permute)
utils.save_xls(
perm_dict, os.path.join(output_dir, 'permutation_tests.xlsx'))
def _infraslow_ppc_model(
alg, kernel, permute=False, seed=None, output_dir=None, rois=None):
"""Predict DCCS using infraslow PPC."""
if not output_dir:
output_dir = os.path.abspath(os.path.dirname(__file__))
if not rois:
rois = glasser_rois
infraslow_ppc = utils.load_phase_phase_coupling(rois=rois)
session_data = ml_tools._stack_session_data(infraslow_ppc, return_df=True)
to_drop = []
for col in list(session_data):
values = session_data[col].values
if all(v == 0 for v in values) or all(v == 1 for v in values):
to_drop.append(col)
cleaned_data = session_data.drop(columns=to_drop)
latent_vars = ml_tools.plsc(cleaned_data, card_sort_task_data)
feature_selection_grid = {
'C': (.01, 1, 10, 100),
"gamma": np.logspace(-2, 2, 5)
}
regression_grid = None
if alg == 'SVM':
regression_grid = {
'C': (.01, 1, 10, 100, 1000),
"gamma": (1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1),
# 'degree': (2, 3, 4, 5)
}
ML_pipe = ml_tools.ML_pipeline(
# predictors=infraslow_ppc,
predictors=latent_vars,
targets=card_sort_task_data,
feature_selection_gridsearch=feature_selection_grid,
model_gridsearch=regression_grid,
feature_names=glasser_rois,
session_names=meg_sessions,
random_state=seed,
debug=True)
ML_pipe.run_predictions(model=alg, model_kernel=kernel)
if not permute:
ml_tools.save_outputs(ML_pipe, output_dir)
else:
ML_pipe.debug = False
perm_dict = ml_tools.perm_tests(ML_pipe, n_iters=permute)
utils.save_xls(
perm_dict, os.path.join(output_dir, 'permutation_tests.xlsx'))
def _alpha_ppc_model(
alg, kernel, permute=False, seed=None, output_dir=None, rois=None):
"""Predict DCCS using infraslow PPC."""
if not output_dir:
output_dir = os.path.abspath(os.path.dirname(__file__))
if not rois:
rois = glasser_rois
infraslow_ppc = utils.load_phase_phase_coupling(band='Alpha', rois=rois)
session_data = ml_tools._stack_session_data(infraslow_ppc, return_df=True)
to_drop = []
for col in list(session_data):
values = session_data[col].values
if all(v == 0 for v in values) or all(v == 1 for v in values):
to_drop.append(col)
cleaned_data = session_data.drop(columns=to_drop)
latent_vars = ml_tools.plsc(cleaned_data, card_sort_task_data)
feature_selection_grid = {
'C': (.01, 1, 10, 100),
"gamma": np.logspace(-2, 2, 5)
}
regression_grid = None
if alg == 'SVM':
regression_grid = {
'C': (.01, 1, 10, 100, 1000),
"gamma": (1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1),
# 'degree': (2, 3, 4, 5)
}
ML_pipe = ml_tools.ML_pipeline(
# predictors=infraslow_ppc,
predictors=latent_vars,
targets=card_sort_task_data,
feature_selection_gridsearch=feature_selection_grid,
model=alg,
model_kernel=kernel,
model_gridsearch=regression_grid,
feature_names=glasser_rois,
session_names=meg_sessions,
random_state=seed,
debug=True)
if not permute:
ML_pipe.run_predictions()
ml_tools.save_outputs(ML_pipe, output_dir)
else:
ML_pipe.debug = False
perm_dict = ml_tools.perm_tests(ML_pipe, n_iters=permute)
print(perm_dict)
utils.save_xls(
perm_dict, os.path.join(output_dir, 'permutation_tests.xlsx'))
def _dACC_eff_conn_model(
alg, kernel, permute=False, seed=None, output_dir=None, rois=None):
"""Predict DCCS using alpha effective connectivity with dACC."""
if not output_dir:
output_dir = os.path.abspath(os.path.dirname(__file__))
if not rois:
rois = glasser_rois
data_dir = utils.ProjectData.data_dir
eff_conn_file = os.path.join(data_dir, 'dACC_effective_connectivity.xlsx')
dACC_eff_conn = utils.load_xls(eff_conn_file)
session_data = ml_tools._stack_session_data(dACC_eff_conn, return_df=True)
connections = list(session_data)
latent_vars = ml_tools.plsc(session_data, card_sort_task_data)
feature_selection_grid = {
'C': (.01, 1, 10, 100),
"gamma": np.logspace(-2, 2, 5)
}
regression_grid = None
if alg == 'SVM':
regression_grid = {
'C': (.01, 1, 10, 100, 1000),
"gamma": (1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1),
}
ML_pipe = ml_tools.ML_pipeline(
predictors=session_data,
# predictors=latent_vars,
targets=card_sort_task_data,
feature_selection_gridsearch=feature_selection_grid,
model=alg,
model_kernel=kernel,
model_gridsearch=regression_grid,
feature_names=connections,
session_names=meg_sessions,
random_state=seed,
debug=True)
if not permute:
ML_pipe.run_predictions()
ml_tools.save_outputs(ML_pipe, output_dir)
else:
ML_pipe.debug = False
perm_dict = ml_tools.perm_tests(ML_pipe, n_iters=permute)
print(perm_dict)
utils.save_xls(
perm_dict, os.path.join(output_dir, 'permutation_tests.xlsx'))
def try_algorithms_on_psd():
seed = 13 # For reproducibility
print('Running ML with PSD: %s' % utils.ctime())
ml_algorithms = ['ExtraTrees', 'SVM']
kernels = ['linear', 'rbf'] # only applies to SVM
for m in ml_algorithms:
if m == 'ExtraTrees':
output_dir = "./results/infraslow_PSD_%s" % m
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_infraslow_psd_model(
alg=m, kernel=None, seed=seed, output_dir=output_dir)
output_dir = "./results/alpha_PSD_%s" % m
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_alpha_psd_model(
alg=m, kernel=None, seed=seed, output_dir=output_dir)
elif m == 'SVM':
for k in kernels:
output_dir = "./results/infraslow_PSD_%s_%s" % (m, k)
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_infraslow_psd_model(
alg=m, kernel=k, seed=seed, output_dir=output_dir)
output_dir = "./results/alpha_PSD_%s_%s" % (m, k)
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_alpha_psd_model(
alg=m, kernel=k, seed=seed, output_dir=output_dir)
def try_algorithms_on_pac(rois=None):
seed = 13
print('Running ML with PAC: %s' % utils.ctime())
ml_algorithms = ['ExtraTrees', 'SVM']
kernels = ['linear', 'rbf'] # only applies to SVM
for m in ml_algorithms:
if m == 'ExtraTrees':
output_dir = "./results/infraslow_PAC_%s" % m
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_infraslow_pac_model(
alg=m, kernel=None, seed=seed, output_dir=output_dir)
elif m == 'SVM':
for k in kernels:
output_dir = "./results/infraslow_PAC_%s_%s" % (m, k)
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_infraslow_pac_model(
alg=m, kernel=k, seed=seed, output_dir=output_dir)
def try_algorithms_on_ppc(rois=None):
seed = 13
print('Running ML with PPC: %s' % utils.ctime())
ml_algorithms = ['ExtraTrees', 'SVM']
kernels = ['linear', 'rbf'] # only applies to SVM
for m in ml_algorithms:
if m == 'ExtraTrees':
# output_dir = "./results/infraslow_PPC_%s" % m
# if not os.path.isdir(output_dir):
# os.mkdir(output_dir)
# _infraslow_ppc_model(
# alg=m, kernel=None,
# seed=seed,
# output_dir=output_dir,
# rois=rois)
output_dir = "./results/alpha_PPC_%s" % m
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_alpha_ppc_model(
alg=m, kernel=None,
seed=seed,
output_dir=output_dir,
rois=rois)
elif m == 'SVM':
for k in kernels:
# output_dir = "./results/infraslow_PPC_%s_%s" % (m, k)
# if not os.path.isdir(output_dir):
# os.mkdir(output_dir)
# _infraslow_ppc_model(
# alg=m, kernel=k,
# seed=seed,
# output_dir=output_dir,
# rois=rois)
output_dir = "./results/alpha_PPC_%s_%s" % (m, k)
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_alpha_ppc_model(
alg=m, kernel=k,
seed=seed,
output_dir=output_dir,
rois=rois)
def try_algorithms_on_eff_conn():
seed = 13 # For reproducibility
print('Running ML with effective connectivity: %s' % utils.ctime())
ml_algorithms = ['ExtraTrees', 'SVM']
kernels = ['linear', 'rbf'] # only applies to SVM
for m in ml_algorithms:
if m == 'ExtraTrees':
output_dir = "./results/alpha_dACC_eff_conn_%s" % m
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_dACC_eff_conn_model(
alg=m, kernel=None, seed=seed, output_dir=output_dir)
elif m == 'SVM':
for k in kernels:
output_dir = "./results/alpha_dACC_eff_conn_%s_%s" % (m, k)
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
_dACC_eff_conn_model(
alg=m, kernel=k, seed=seed, output_dir=output_dir)
if __name__ == "__main__":
# try_algorithms_on_psd()
# infraslow_compare_dict = ml_tools.compare_algorithms(band='infraslow')
# utils.save_xls(
# infraslow_compare_dict,
# './results/infraslow_PSD_model_comparison.xlsx')
# alpha_compare_dict = ml_tools.compare_algorithms(band='alpha')
# utils.save_xls(
# alpha_compare_dict,
# './results/alpha_PSD_model_comparison.xlsx')
# psd_rois, _ = ml_tools.pick_algorithm(infraslow_compare_dict)
#
# try_algorithms_on_pac(rois=psd_rois)
# compare_dict = ml_tools.compare_algorithms(model='PAC')
# utils.save_xls(
# compare_dict, './results/infraslow_PAC_model_comparison.xlsx')
from misc import get_psd_rois
psd_rois, _ = get_psd_rois()
# try_algorithms_on_ppc(rois=psd_rois)
# compare_dict = ml_tools.compare_algorithms(band='infraslow', model='PPC')
# utils.save_xls(
# compare_dict, './results/infraslow_PPC_model_comparison.xlsx')
# compare_dict = ml_tools.compare_algorithms(band='alpha', model='PPC')
# utils.save_xls(
# compare_dict, './results/alpha_PPC_model_comparison.xlsx')
# _, algorithm, dir = ml_tools.pick_algorithm(
# compare_dict, band='alpha', model='PPC', return_directory=True)
# str_check = algorithm.split(' ')
# if len(str_check) > 1:
# alg, kernel = str_check[0], str_check[1]
# else:
# alg, kernel = algorithm, None
# _alpha_ppc_model(
# alg=alg,
# kernel=kernel,
# permute=1000,
# seed=13,
# output_dir=dir,
# rois=psd_rois)
# try_algorithms_on_eff_conn()
compare_dict = ml_tools.compare_algorithms(
band='alpha', model='dACC_eff_conn')
utils.save_xls(
compare_dict, './results/alpha_dACC_eff_conn_model_comparison.xlsx')
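The loop that drops all-zero or all-one columns appears verbatim in both PPC models above; a hedged refactor sketch follows (the helper name `_drop_constant_columns` is hypothetical and not part of the existing module):

```python
import pandas as pd

def _drop_constant_columns(df):
    """Drop columns whose values are all 0 or all 1.

    Sketch of a shared helper for the duplicated to_drop loops in
    _infraslow_ppc_model and _alpha_ppc_model; behavior matches the
    original loops.
    """
    to_drop = [col for col in df.columns
               if (df[col] == 0).all() or (df[col] == 1).all()]
    return df.drop(columns=to_drop)

# Example: constant 0/1 columns are removed, varying columns are kept.
demo = pd.DataFrame({'a': [0, 0], 'b': [1, 1], 'c': [0.2, 0.5]})
cleaned = _drop_constant_columns(demo)
# list(cleaned) == ['c']
```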
lemma is_interval_minus_translation[simp]: shows "is_interval ((-) x ` X) = is_interval X"
context("Dotplot")
set.seed(111)
dat <- data.frame(x = LETTERS[1:2], y = rnorm(30), g = LETTERS[3:5])
test_that("Dodging works", {
p <- ggplot(dat, aes(x = x, y = y, fill = g)) +
geom_dotplot(binwidth=.2, binaxis="y", position="dodge", stackdir="center")
bp <- ggplot_build(p)
df <- bp$data[[1]]
# Number of levels in the dodged variable
ndodge <- 3
# The amount of space allocated within each dodge group
dwidth <- .9 / ndodge
# This should be the x position for each before dodging
xbase <- ceiling(df$group / ndodge)
# This is the offset from dodging
xoffset <- (df$group-1) %% ndodge - (ndodge-1) / 2
xoffset <- xoffset * dwidth
# Check actual x locations equal predicted x locations
expect_true(all(abs(df$x - (xbase + xoffset)) < 1e-6))
# Check that xmin and xmax are in the right place
expect_true(all(abs(df$xmax - df$x - dwidth/2) < 1e-6))
expect_true(all(abs(df$x - df$xmin - dwidth/2) < 1e-6))
})
test_that("Binning works", {
bp <- ggplot_build(ggplot(dat, aes(x=y)) + geom_dotplot(binwidth=.4, method="histodot"))
x <- bp$data[[1]]$x
# Need ugly hack to make sure mod function doesn't give values like -3.99999
# due to floating point error
expect_true(all(abs((x - min(x) + 1e-7) %% .4) < 1e-6))
bp <- ggplot_build(ggplot(dat, aes(x=y)) + geom_dotplot(binwidth=.4, method="dotdensity"))
x <- bp$data[[1]]$x
# This one doesn't ensure that dotdensity works, but it does check that it's not
# doing fixed bin sizes
expect_false(all(abs((x - min(x) + 1e-7) %% .4) < 1e-6))
})
test_that("NA's result in warning from stat_bindot", {
set.seed(122)
dat <- data.frame(x=rnorm(20))
dat$x[c(2,10)] <- NA
# Need to assign it to a var here so that it doesn't automatically print
expect_that(bp <- ggplot_build(ggplot(dat, aes(x)) + geom_dotplot(binwidth=.2)),
gives_warning("Removed 2 rows.*stat_bindot"))
})
module map-++-distribute where
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong)
open Eq.≡-Reasoning
open import lists using (List; []; _∷_; _++_; map)
-- Proof that map distributes over list concatenation (++)
map-++-distribute : {A B : Set} → (f : A → B) → (xs ys : List A)
→ map f (xs ++ ys) ≡ map f xs ++ map f ys
map-++-distribute f [] ys =
begin
map f ([] ++ ys)
≡⟨⟩
map f ys
≡⟨⟩
map f [] ++ map f ys
∎
map-++-distribute f (x ∷ xs) ys =
begin
map f ((x ∷ xs) ++ ys)
≡⟨⟩
f x ∷ map f (xs ++ ys)
≡⟨ cong (f x ∷_) (map-++-distribute f xs ys) ⟩
f x ∷ map f xs ++ map f ys
≡⟨⟩
map f (x ∷ xs) ++ map f ys
∎
[STATEMENT]
lemma [fcomp_norm_simps]: "CONSTRAINT (IS_PURE P) A \<Longrightarrow> P (the_pure A)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. CONSTRAINT (IS_PURE P) A \<Longrightarrow> P (the_pure A)
[PROOF STEP]
by (auto simp: IS_PURE_def)
[STATEMENT]
theorem "tm.tfr\<^sub>s\<^sub>e\<^sub>t (set M\<^sub>T\<^sub>L\<^sub>S)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. tm.tfr\<^sub>s\<^sub>e\<^sub>t (set M\<^sub>T\<^sub>L\<^sub>S)
[PROOF STEP]
by (rule tm.tfr\<^sub>s\<^sub>e\<^sub>t_if_comp_tfr\<^sub>s\<^sub>e\<^sub>t') eval
[GOAL]
J : Type w
X✝ Y✝ Z✝ : WidePullbackShape J
f : X✝ ⟶ Y✝
g : Y✝ ⟶ Z✝
⊢ X✝ ⟶ Z✝
[PROOFSTEP]
cases f
[GOAL]
case id
J : Type w
X✝ Z✝ : WidePullbackShape J
g : X✝ ⟶ Z✝
⊢ X✝ ⟶ Z✝
case term J : Type w Z✝ : WidePullbackShape J j✝ : J g : none ⟶ Z✝ ⊢ some j✝ ⟶ Z✝
[PROOFSTEP]
exact g
[GOAL]
case term
J : Type w
Z✝ : WidePullbackShape J
j✝ : J
g : none ⟶ Z✝
⊢ some j✝ ⟶ Z✝
[PROOFSTEP]
cases g
[GOAL]
case term.id
J : Type w
j✝ : J
⊢ some j✝ ⟶ none
[PROOFSTEP]
apply Hom.term _
[GOAL]
J : Type w
x✝¹ x✝ : WidePullbackShape J
⊢ Subsingleton (x✝¹ ⟶ x✝)
[PROOFSTEP]
constructor
[GOAL]
case allEq
J : Type w
x✝¹ x✝ : WidePullbackShape J
⊢ ∀ (a b : x✝¹ ⟶ x✝), a = b
[PROOFSTEP]
intro a b
[GOAL]
case allEq
J : Type w
x✝¹ x✝ : WidePullbackShape J
a b : x✝¹ ⟶ x✝
⊢ a = b
[PROOFSTEP]
casesm*WidePullbackShape _, (_ : WidePullbackShape _) ⟶ (_ : WidePullbackShape _)
[GOAL]
case allEq.none.none.id.id
J : Type w
⊢ Hom.id none = Hom.id none
case allEq.some.none.term.term
J : Type w
val✝ : J
⊢ Hom.term val✝ = Hom.term val✝
case allEq.some.some.id.id J : Type w val✝ : J ⊢ Hom.id (some val✝) = Hom.id (some val✝)
[PROOFSTEP]
rfl
[GOAL]
case allEq.some.none.term.term
J : Type w
val✝ : J
⊢ Hom.term val✝ = Hom.term val✝
case allEq.some.some.id.id J : Type w val✝ : J ⊢ Hom.id (some val✝) = Hom.id (some val✝)
[PROOFSTEP]
rfl
[GOAL]
case allEq.some.some.id.id
J : Type w
val✝ : J
⊢ Hom.id (some val✝) = Hom.id (some val✝)
[PROOFSTEP]
rfl
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → objs j ⟶ B
X✝ Y✝ : WidePullbackShape J
f : X✝ ⟶ Y✝
⊢ (fun j => Option.casesOn j B objs) X✝ ⟶ (fun j => Option.casesOn j B objs) Y✝
[PROOFSTEP]
cases' f with _ j
[GOAL]
case id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → objs j ⟶ B
X✝ : WidePullbackShape J
⊢ (fun j => Option.casesOn j B objs) X✝ ⟶ (fun j => Option.casesOn j B objs) X✝
[PROOFSTEP]
apply 𝟙 _
[GOAL]
case term
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → objs j ⟶ B
j : J
⊢ (fun j => Option.casesOn j B objs) (some j) ⟶ (fun j => Option.casesOn j B objs) none
[PROOFSTEP]
exact arrows j
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
j : WidePullbackShape J
⊢ F.obj j = (wideCospan (F.obj none) (fun j => F.obj (some j)) fun j => F.map (Hom.term j)).obj j
[PROOFSTEP]
aesop_cat
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f✝ : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f✝
j j' : WidePullbackShape J
f : j ⟶ j'
⊢ ((Functor.const (WidePullbackShape J)).obj X).map f ≫
(fun j =>
match j with
| none => f✝
| some j => π j)
j' =
(fun j =>
match j with
| none => f✝
| some j => π j)
j ≫
F.map f
[PROOFSTEP]
cases j
[GOAL]
case none
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f✝ : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f✝
j' : WidePullbackShape J
f : none ⟶ j'
⊢ ((Functor.const (WidePullbackShape J)).obj X).map f ≫
(fun j =>
match j with
| none => f✝
| some j => π j)
j' =
(fun j =>
match j with
| none => f✝
| some j => π j)
none ≫
F.map f
[PROOFSTEP]
cases j'
[GOAL]
case some
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f✝ : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f✝
j' : WidePullbackShape J
val✝ : J
f : some val✝ ⟶ j'
⊢ ((Functor.const (WidePullbackShape J)).obj X).map f ≫
(fun j =>
match j with
| none => f✝
| some j => π j)
j' =
(fun j =>
match j with
| none => f✝
| some j => π j)
(some val✝) ≫
F.map f
[PROOFSTEP]
cases j'
[GOAL]
case none.none
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f✝ : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f✝
f : none ⟶ none
⊢ ((Functor.const (WidePullbackShape J)).obj X).map f ≫
(fun j =>
match j with
| none => f✝
| some j => π j)
none =
(fun j =>
match j with
| none => f✝
| some j => π j)
none ≫
F.map f
[PROOFSTEP]
cases f
[GOAL]
case none.some
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f✝ : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f✝
val✝ : J
f : none ⟶ some val✝
⊢ ((Functor.const (WidePullbackShape J)).obj X).map f ≫
(fun j =>
match j with
| none => f✝
| some j => π j)
(some val✝) =
(fun j =>
match j with
| none => f✝
| some j => π j)
none ≫
F.map f
[PROOFSTEP]
cases f
[GOAL]
case some.none
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f✝ : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f✝
val✝ : J
f : some val✝ ⟶ none
⊢ ((Functor.const (WidePullbackShape J)).obj X).map f ≫
(fun j =>
match j with
| none => f✝
| some j => π j)
none =
(fun j =>
match j with
| none => f✝
| some j => π j)
(some val✝) ≫
F.map f
[PROOFSTEP]
cases f
[GOAL]
case some.some
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f✝ : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f✝
val✝¹ val✝ : J
f : some val✝¹ ⟶ some val✝
⊢ ((Functor.const (WidePullbackShape J)).obj X).map f ≫
(fun j =>
match j with
| none => f✝
| some j => π j)
(some val✝) =
(fun j =>
match j with
| none => f✝
| some j => π j)
(some val✝¹) ≫
F.map f
[PROOFSTEP]
cases f
[GOAL]
case none.none.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f
⊢ ((Functor.const (WidePullbackShape J)).obj X).map (Hom.id none) ≫
(fun j =>
match j with
| none => f
| some j => π j)
none =
(fun j =>
match j with
| none => f
| some j => π j)
none ≫
F.map (Hom.id none)
[PROOFSTEP]
refine id _
[GOAL]
case some.none.term
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f
val✝ : J
⊢ ((Functor.const (WidePullbackShape J)).obj X).map (Hom.term val✝) ≫
(fun j =>
match j with
| none => f
| some j => π j)
none =
(fun j =>
match j with
| none => f
| some j => π j)
(some val✝) ≫
F.map (Hom.term val✝)
[PROOFSTEP]
refine id _
[GOAL]
case some.some.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f
val✝ : J
⊢ ((Functor.const (WidePullbackShape J)).obj X).map (Hom.id (some val✝)) ≫
(fun j =>
match j with
| none => f
| some j => π j)
(some val✝) =
(fun j =>
match j with
| none => f
| some j => π j)
(some val✝) ≫
F.map (Hom.id (some val✝))
[PROOFSTEP]
refine id _
[GOAL]
case none.none.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f
⊢ ((Functor.const (WidePullbackShape J)).obj X).map (Hom.id none) ≫
(fun j =>
match j with
| none => f
| some j => π j)
none =
(fun j =>
match j with
| none => f
| some j => π j)
none ≫
F.map (Hom.id none)
[PROOFSTEP]
dsimp
[GOAL]
case some.none.term
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f
val✝ : J
⊢ ((Functor.const (WidePullbackShape J)).obj X).map (Hom.term val✝) ≫
(fun j =>
match j with
| none => f
| some j => π j)
none =
(fun j =>
match j with
| none => f
| some j => π j)
(some val✝) ≫
F.map (Hom.term val✝)
[PROOFSTEP]
dsimp
[GOAL]
case some.some.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f
val✝ : J
⊢ ((Functor.const (WidePullbackShape J)).obj X).map (Hom.id (some val✝)) ≫
(fun j =>
match j with
| none => f
| some j => π j)
(some val✝) =
(fun j =>
match j with
| none => f
| some j => π j)
(some val✝) ≫
F.map (Hom.id (some val✝))
[PROOFSTEP]
dsimp
[GOAL]
case none.none.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f
⊢ 𝟙 X ≫ f = f ≫ F.map (𝟙 none)
[PROOFSTEP]
simp [w]
[GOAL]
case some.none.term
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f
val✝ : J
⊢ 𝟙 X ≫ f = π val✝ ≫ F.map (Hom.term val✝)
[PROOFSTEP]
simp [w]
[GOAL]
case some.some.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePullbackShape J ⥤ C
X : C
f : X ⟶ F.obj none
π : (j : J) → X ⟶ F.obj (some j)
w : ∀ (j : J), π j ≫ F.map (Hom.term j) = f
val✝ : J
⊢ 𝟙 X ≫ π val✝ = π val✝ ≫ F.map (𝟙 (some val✝))
[PROOFSTEP]
simp [w]
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
J' : Type w'
h : J ≃ J'
j : WidePullbackShape J
⊢ (𝟭 (WidePullbackShape J)).obj j ≅
((wideCospan none (fun j => some (↑h j)) fun j => Hom.term (↑h j)) ⋙
wideCospan none (fun j => some (Equiv.invFun h j)) fun j => Hom.term (Equiv.invFun h j)).obj
j
[PROOFSTEP]
aesop_cat_nonterminal
[GOAL]
case none
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
⊢ none ≅ none
case some J : Type w C : Type u inst : Category.{v, u} C J' : Type w' h : J ≃ J' val✝ : J ⊢ some val✝ ≅ some val✝
[PROOFSTEP]
repeat rfl
[GOAL]
case none
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
⊢ none ≅ none
case some J : Type w C : Type u inst : Category.{v, u} C J' : Type w' h : J ≃ J' val✝ : J ⊢ some val✝ ≅ some val✝
[PROOFSTEP]
rfl
[GOAL]
case some
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
val✝ : J
⊢ some val✝ ≅ some val✝
[PROOFSTEP]
rfl
[GOAL]
[PROOFSTEP]
rfl
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
J' : Type w'
h : J ≃ J'
X✝ Y✝ : WidePullbackShape J
f : X✝ ⟶ Y✝
⊢ (𝟭 (WidePullbackShape J)).map f ≫
((fun j =>
id
(Option.casesOn j (id (Iso.refl none)) fun val =>
Eq.mpr
(_ :
(some val ≅
Option.rec none (fun val => some (↑h.symm val))
(Option.rec none (fun val => some (↑h val)) (some val))) =
(some val ≅ some val))
(Iso.refl (some val))))
Y✝).hom =
((fun j =>
id
(Option.casesOn j (id (Iso.refl none)) fun val =>
Eq.mpr
(_ :
(some val ≅
Option.rec none (fun val => some (↑h.symm val))
(Option.rec none (fun val => some (↑h val)) (some val))) =
(some val ≅ some val))
(Iso.refl (some val))))
X✝).hom ≫
((wideCospan none (fun j => some (↑h j)) fun j => Hom.term (↑h j)) ⋙
wideCospan none (fun j => some (Equiv.invFun h j)) fun j => Hom.term (Equiv.invFun h j)).map
f
[PROOFSTEP]
simp only [eq_iff_true_of_subsingleton]
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
J' : Type w'
h : J ≃ J'
j : WidePullbackShape J'
⊢ ((wideCospan none (fun j => some (Equiv.invFun h j)) fun j => Hom.term (Equiv.invFun h j)) ⋙
wideCospan none (fun j => some (↑h j)) fun j => Hom.term (↑h j)).obj
j ≅
(𝟭 (WidePullbackShape J')).obj j
[PROOFSTEP]
aesop_cat_nonterminal
[GOAL]
case none
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
⊢ none ≅ none
case some J : Type w C : Type u inst : Category.{v, u} C J' : Type w' h : J ≃ J' val✝ : J' ⊢ some val✝ ≅ some val✝
[PROOFSTEP]
repeat rfl
[GOAL]
case none
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
⊢ none ≅ none
case some J : Type w C : Type u inst : Category.{v, u} C J' : Type w' h : J ≃ J' val✝ : J' ⊢ some val✝ ≅ some val✝
[PROOFSTEP]
rfl
[GOAL]
case some
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
val✝ : J'
⊢ some val✝ ≅ some val✝
[PROOFSTEP]
rfl
[GOAL]
[PROOFSTEP]
rfl
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
J' : Type w'
h : J ≃ J'
X✝ Y✝ : WidePullbackShape J'
f : X✝ ⟶ Y✝
⊢ ((wideCospan none (fun j => some (Equiv.invFun h j)) fun j => Hom.term (Equiv.invFun h j)) ⋙
wideCospan none (fun j => some (↑h j)) fun j => Hom.term (↑h j)).map
f ≫
((fun j =>
id
(Option.casesOn j (id (Iso.refl none)) fun val =>
Eq.mpr
(_ :
(Option.rec none (fun val => some (↑h val))
(Option.rec none (fun val => some (↑h.symm val)) (some val)) ≅
some val) =
(some val ≅ some val))
(Iso.refl (some val))))
Y✝).hom =
((fun j =>
id
(Option.casesOn j (id (Iso.refl none)) fun val =>
Eq.mpr
(_ :
(Option.rec none (fun val => some (↑h val))
(Option.rec none (fun val => some (↑h.symm val)) (some val)) ≅
some val) =
(some val ≅ some val))
(Iso.refl (some val))))
X✝).hom ≫
(𝟭 (WidePullbackShape J')).map f
[PROOFSTEP]
simp only [eq_iff_true_of_subsingleton]
[GOAL]
J : Type w
X✝ Y✝ Z✝ : WidePushoutShape J
f : X✝ ⟶ Y✝
g : Y✝ ⟶ Z✝
⊢ X✝ ⟶ Z✝
[PROOFSTEP]
cases f
[GOAL]
case id
J : Type w
X✝ Z✝ : WidePushoutShape J
g : X✝ ⟶ Z✝
⊢ X✝ ⟶ Z✝
case init J : Type w Z✝ : WidePushoutShape J j✝ : J g : some j✝ ⟶ Z✝ ⊢ none ⟶ Z✝
[PROOFSTEP]
exact g
[GOAL]
case init
J : Type w
Z✝ : WidePushoutShape J
j✝ : J
g : some j✝ ⟶ Z✝
⊢ none ⟶ Z✝
[PROOFSTEP]
cases g
[GOAL]
case init.id
J : Type w
j✝ : J
⊢ none ⟶ some j✝
[PROOFSTEP]
apply Hom.init _
[GOAL]
J : Type w
x✝¹ x✝ : WidePushoutShape J
⊢ Subsingleton (x✝¹ ⟶ x✝)
[PROOFSTEP]
constructor
[GOAL]
case allEq
J : Type w
x✝¹ x✝ : WidePushoutShape J
⊢ ∀ (a b : x✝¹ ⟶ x✝), a = b
[PROOFSTEP]
intro a b
[GOAL]
case allEq
J : Type w
x✝¹ x✝ : WidePushoutShape J
a b : x✝¹ ⟶ x✝
⊢ a = b
[PROOFSTEP]
casesm* WidePushoutShape _, (_ : WidePushoutShape _) ⟶ (_ : WidePushoutShape _)
[GOAL]
case allEq.none.none.id.id
J : Type w
⊢ Hom.id none = Hom.id none
case allEq.none.some.init.init
J : Type w
val✝ : J
⊢ Hom.init val✝ = Hom.init val✝
case allEq.some.some.id.id J : Type w val✝ : J ⊢ Hom.id (some val✝) = Hom.id (some val✝)
[PROOFSTEP]
repeat rfl
[GOAL]
case allEq.none.none.id.id
J : Type w
⊢ Hom.id none = Hom.id none
case allEq.none.some.init.init
J : Type w
val✝ : J
⊢ Hom.init val✝ = Hom.init val✝
case allEq.some.some.id.id J : Type w val✝ : J ⊢ Hom.id (some val✝) = Hom.id (some val✝)
[PROOFSTEP]
rfl
[GOAL]
case allEq.none.some.init.init
J : Type w
val✝ : J
⊢ Hom.init val✝ = Hom.init val✝
case allEq.some.some.id.id J : Type w val✝ : J ⊢ Hom.id (some val✝) = Hom.id (some val✝)
[PROOFSTEP]
rfl
[GOAL]
case allEq.some.some.id.id
J : Type w
val✝ : J
⊢ Hom.id (some val✝) = Hom.id (some val✝)
[PROOFSTEP]
rfl
[GOAL]
[PROOFSTEP]
rfl
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → B ⟶ objs j
X✝ Y✝ : WidePushoutShape J
f : X✝ ⟶ Y✝
⊢ (fun j => Option.casesOn j B objs) X✝ ⟶ (fun j => Option.casesOn j B objs) Y✝
[PROOFSTEP]
cases' f with _ j
[GOAL]
case id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → B ⟶ objs j
X✝ : WidePushoutShape J
⊢ (fun j => Option.casesOn j B objs) X✝ ⟶ (fun j => Option.casesOn j B objs) X✝
[PROOFSTEP]
apply 𝟙 _
[GOAL]
case init
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → B ⟶ objs j
j : J
⊢ (fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) (some j)
[PROOFSTEP]
exact arrows j
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → B ⟶ objs j
X✝ Y✝ Z✝ : WidePushoutShape J
f : X✝ ⟶ Y✝
g : Y✝ ⟶ Z✝
⊢ { obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a → Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
(f ≫ g) =
{ obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a →
Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
f ≫
{ obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a →
Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
g
[PROOFSTEP]
cases f
[GOAL]
case id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → B ⟶ objs j
X✝ Z✝ : WidePushoutShape J
g : X✝ ⟶ Z✝
⊢ { obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a → Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
(Hom.id X✝ ≫ g) =
{ obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a →
Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
(Hom.id X✝) ≫
{ obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a →
Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
g
[PROOFSTEP]
simp only [Eq.ndrec, hom_id, eq_rec_constant, Category.id_comp]
[GOAL]
case id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → B ⟶ objs j
X✝ Z✝ : WidePushoutShape J
g : X✝ ⟶ Z✝
⊢ Hom.rec (motive := fun a a_1 t =>
X✝ = a →
Z✝ = a_1 → HEq (𝟙 X✝ ≫ g) t → (Option.rec B (fun val => objs val) X✝ ⟶ Option.rec B (fun val => objs val) Z✝))
(fun X h =>
Eq.rec (motive := fun x x_1 =>
Z✝ = x →
HEq (𝟙 X✝ ≫ g) (𝟙 x) → (Option.rec B (fun val => objs val) X✝ ⟶ Option.rec B (fun val => objs val) Z✝))
(fun h =>
Eq.rec (motive := fun x x_1 =>
(f : X✝ ⟶ x) →
HEq f (𝟙 X✝) → (Option.rec B (fun val => objs val) X✝ ⟶ Option.rec B (fun val => objs val) x))
(fun f h => 𝟙 (Option.rec B (fun val => objs val) X✝)) (_ : X✝ = Z✝) (𝟙 X✝ ≫ g))
h)
(fun j h =>
Eq.rec (motive := fun x x_1 =>
(f : x ⟶ Z✝) →
Z✝ = some j →
HEq f (Hom.init j) → (Option.rec B (fun val => objs val) x ⟶ Option.rec B (fun val => objs val) Z✝))
(fun f h =>
Eq.rec (motive := fun x x_1 =>
(f : none ⟶ x) → HEq f (Hom.init j) → (B ⟶ Option.rec B (fun val => objs val) x)) (fun f h => arrows j)
(_ : some j = Z✝) f)
(_ : none = X✝) (𝟙 X✝ ≫ g))
(𝟙 X✝ ≫ g) (_ : X✝ = X✝) (_ : Z✝ = Z✝) (_ : HEq (Hom.id X✝ ≫ g) (Hom.id X✝ ≫ g)) =
Hom.rec (motive := fun a a_1 t =>
X✝ = a → Z✝ = a_1 → HEq g t → (Option.rec B (fun val => objs val) X✝ ⟶ Option.rec B (fun val => objs val) Z✝))
(fun X h =>
Eq.rec (motive := fun x x_1 =>
Z✝ = x → HEq g (𝟙 x) → (Option.rec B (fun val => objs val) X✝ ⟶ Option.rec B (fun val => objs val) Z✝))
(fun h =>
Eq.rec (motive := fun x x_1 =>
(f : X✝ ⟶ x) →
HEq f (𝟙 X✝) → (Option.rec B (fun val => objs val) X✝ ⟶ Option.rec B (fun val => objs val) x))
(fun f h => 𝟙 (Option.rec B (fun val => objs val) X✝)) (_ : X✝ = Z✝) g)
h)
(fun j h =>
Eq.rec (motive := fun x x_1 =>
(f : x ⟶ Z✝) →
Z✝ = some j →
HEq f (Hom.init j) → (Option.rec B (fun val => objs val) x ⟶ Option.rec B (fun val => objs val) Z✝))
(fun f h =>
Eq.rec (motive := fun x x_1 =>
(f : none ⟶ x) → HEq f (Hom.init j) → (B ⟶ Option.rec B (fun val => objs val) x)) (fun f h => arrows j)
(_ : some j = Z✝) f)
(_ : none = X✝) g)
g (_ : X✝ = X✝) (_ : Z✝ = Z✝) (_ : HEq g g)
[PROOFSTEP]
congr
[GOAL]
case init
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → B ⟶ objs j
Z✝ : WidePushoutShape J
j✝ : J
g : some j✝ ⟶ Z✝
⊢ { obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a → Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
(Hom.init j✝ ≫ g) =
{ obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a →
Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
(Hom.init j✝) ≫
{ obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a →
Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
g
[PROOFSTEP]
cases g
[GOAL]
case init.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → B ⟶ objs j
j✝ : J
⊢ { obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a → Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
(Hom.init j✝ ≫ Hom.id (some j✝)) =
{ obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a →
Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
(Hom.init j✝) ≫
{ obj := fun j => Option.casesOn j B objs,
map := fun {X Y} f =>
Hom.casesOn (motive := fun a a_1 t =>
X = a →
Y = a_1 → HEq f t → ((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
f
(fun X_1 h =>
Eq.ndrec (motive := fun X_2 =>
Y = X_2 →
HEq f (Hom.id X_2) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun h =>
Eq.ndrec (motive := fun {Y} =>
(f : X ⟶ Y) →
HEq f (Hom.id X) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.id X = f) ▸ 𝟙 ((fun j => Option.casesOn j B objs) X)) (_ : X = Y) f)
h)
(fun j h =>
Eq.ndrec (motive := fun {X} =>
(f : X ⟶ Y) →
Y = some j →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) X ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h =>
Eq.ndrec (motive := fun {Y} =>
(f : none ⟶ Y) →
HEq f (Hom.init j) →
((fun j => Option.casesOn j B objs) none ⟶ (fun j => Option.casesOn j B objs) Y))
(fun f h => (_ : Hom.init j = f) ▸ arrows j) (_ : some j = Y) f)
(_ : none = X) f)
(_ : X = X) (_ : Y = Y) (_ : HEq f f) }.map
(Hom.id (some j✝))
[PROOFSTEP]
simp only [Eq.ndrec, hom_id, eq_rec_constant, Category.comp_id]
[GOAL]
case init.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
B : C
objs : J → C
arrows : (j : J) → B ⟶ objs j
j✝ : J
⊢ Hom.rec (motive := fun a a_1 t => none = a → some j✝ = a_1 → HEq (Hom.init j✝ ≫ 𝟙 (some j✝)) t → (B ⟶ objs j✝))
(fun X h =>
Eq.rec (motive := fun x x_1 => some j✝ = x → HEq (Hom.init j✝ ≫ 𝟙 (some j✝)) (𝟙 x) → (B ⟶ objs j✝))
(fun h =>
Eq.rec (motive := fun x x_1 => (f : none ⟶ x) → HEq f (𝟙 none) → (B ⟶ Option.rec B (fun val => objs val) x))
(fun f h => 𝟙 B) (_ : none = some j✝) (Hom.init j✝ ≫ 𝟙 (some j✝)))
h)
(fun j h h =>
Eq.rec (motive := fun x x_1 => (f : none ⟶ x) → HEq f (Hom.init j) → (B ⟶ Option.rec B (fun val => objs val) x))
(fun f h => arrows j) (_ : some j = some j✝) (Hom.init j✝ ≫ 𝟙 (some j✝)))
(Hom.init j✝ ≫ 𝟙 (some j✝)) (_ : none = none) (_ : some j✝ = some j✝)
(_ : HEq (Hom.init j✝ ≫ Hom.id (some j✝)) (Hom.init j✝ ≫ Hom.id (some j✝))) =
arrows j✝
[PROOFSTEP]
congr
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
j : WidePushoutShape J
⊢ F.obj j = (wideSpan (F.obj none) (fun j => F.obj (some j)) fun j => F.map (Hom.init j)).obj j
[PROOFSTEP]
cases j
[GOAL]
case none
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
⊢ F.obj none = (wideSpan (F.obj none) (fun j => F.obj (some j)) fun j => F.map (Hom.init j)).obj none
case some
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
val✝ : J
⊢ F.obj (some val✝) = (wideSpan (F.obj none) (fun j => F.obj (some j)) fun j => F.map (Hom.init j)).obj (some val✝)
[PROOFSTEP]
repeat rfl
[GOAL]
case none
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
⊢ F.obj none = (wideSpan (F.obj none) (fun j => F.obj (some j)) fun j => F.map (Hom.init j)).obj none
case some
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
val✝ : J
⊢ F.obj (some val✝) = (wideSpan (F.obj none) (fun j => F.obj (some j)) fun j => F.map (Hom.init j)).obj (some val✝)
[PROOFSTEP]
rfl
[GOAL]
case some
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
val✝ : J
⊢ F.obj (some val✝) = (wideSpan (F.obj none) (fun j => F.obj (some j)) fun j => F.map (Hom.init j)).obj (some val✝)
[PROOFSTEP]
rfl
[GOAL]
[PROOFSTEP]
rfl
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f✝ : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f✝
j j' : WidePushoutShape J
f : j ⟶ j'
⊢ F.map f ≫
(fun j =>
match j with
| none => f✝
| some j => ι j)
j' =
(fun j =>
match j with
| none => f✝
| some j => ι j)
j ≫
((Functor.const (WidePushoutShape J)).obj X).map f
[PROOFSTEP]
cases j
[GOAL]
case none
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f✝ : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f✝
j' : WidePushoutShape J
f : none ⟶ j'
⊢ F.map f ≫
(fun j =>
match j with
| none => f✝
| some j => ι j)
j' =
(fun j =>
match j with
| none => f✝
| some j => ι j)
none ≫
((Functor.const (WidePushoutShape J)).obj X).map f
[PROOFSTEP]
cases j'
[GOAL]
case some
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f✝ : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f✝
j' : WidePushoutShape J
val✝ : J
f : some val✝ ⟶ j'
⊢ F.map f ≫
(fun j =>
match j with
| none => f✝
| some j => ι j)
j' =
(fun j =>
match j with
| none => f✝
| some j => ι j)
(some val✝) ≫
((Functor.const (WidePushoutShape J)).obj X).map f
[PROOFSTEP]
cases j'
[GOAL]
case none.none
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f✝ : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f✝
f : none ⟶ none
⊢ F.map f ≫
(fun j =>
match j with
| none => f✝
| some j => ι j)
none =
(fun j =>
match j with
| none => f✝
| some j => ι j)
none ≫
((Functor.const (WidePushoutShape J)).obj X).map f
[PROOFSTEP]
cases f
[GOAL]
case none.some
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f✝ : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f✝
val✝ : J
f : none ⟶ some val✝
⊢ F.map f ≫
(fun j =>
match j with
| none => f✝
| some j => ι j)
(some val✝) =
(fun j =>
match j with
| none => f✝
| some j => ι j)
none ≫
((Functor.const (WidePushoutShape J)).obj X).map f
[PROOFSTEP]
cases f
[GOAL]
case some.none
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f✝ : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f✝
val✝ : J
f : some val✝ ⟶ none
⊢ F.map f ≫
(fun j =>
match j with
| none => f✝
| some j => ι j)
none =
(fun j =>
match j with
| none => f✝
| some j => ι j)
(some val✝) ≫
((Functor.const (WidePushoutShape J)).obj X).map f
[PROOFSTEP]
cases f
[GOAL]
case some.some
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f✝ : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f✝
val✝¹ val✝ : J
f : some val✝¹ ⟶ some val✝
⊢ F.map f ≫
(fun j =>
match j with
| none => f✝
| some j => ι j)
(some val✝) =
(fun j =>
match j with
| none => f✝
| some j => ι j)
(some val✝¹) ≫
((Functor.const (WidePushoutShape J)).obj X).map f
[PROOFSTEP]
cases f
[GOAL]
case none.none.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f
⊢ F.map (Hom.id none) ≫
(fun j =>
match j with
| none => f
| some j => ι j)
none =
(fun j =>
match j with
| none => f
| some j => ι j)
none ≫
((Functor.const (WidePushoutShape J)).obj X).map (Hom.id none)
[PROOFSTEP]
refine id _
[GOAL]
case none.some.init
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f
val✝ : J
⊢ F.map (Hom.init val✝) ≫
(fun j =>
match j with
| none => f
| some j => ι j)
(some val✝) =
(fun j =>
match j with
| none => f
| some j => ι j)
none ≫
((Functor.const (WidePushoutShape J)).obj X).map (Hom.init val✝)
[PROOFSTEP]
refine id _
[GOAL]
case some.some.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f
val✝ : J
⊢ F.map (Hom.id (some val✝)) ≫
(fun j =>
match j with
| none => f
| some j => ι j)
(some val✝) =
(fun j =>
match j with
| none => f
| some j => ι j)
(some val✝) ≫
((Functor.const (WidePushoutShape J)).obj X).map (Hom.id (some val✝))
[PROOFSTEP]
refine id _
[GOAL]
case none.none.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f
⊢ F.map (Hom.id none) ≫
(fun j =>
match j with
| none => f
| some j => ι j)
none =
(fun j =>
match j with
| none => f
| some j => ι j)
none ≫
((Functor.const (WidePushoutShape J)).obj X).map (Hom.id none)
[PROOFSTEP]
dsimp
[GOAL]
case none.some.init
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f
val✝ : J
⊢ F.map (Hom.init val✝) ≫
(fun j =>
match j with
| none => f
| some j => ι j)
(some val✝) =
(fun j =>
match j with
| none => f
| some j => ι j)
none ≫
((Functor.const (WidePushoutShape J)).obj X).map (Hom.init val✝)
[PROOFSTEP]
dsimp
[GOAL]
case some.some.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f
val✝ : J
⊢ F.map (Hom.id (some val✝)) ≫
(fun j =>
match j with
| none => f
| some j => ι j)
(some val✝) =
(fun j =>
match j with
| none => f
| some j => ι j)
(some val✝) ≫
((Functor.const (WidePushoutShape J)).obj X).map (Hom.id (some val✝))
[PROOFSTEP]
dsimp
[GOAL]
case none.none.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f
⊢ F.map (𝟙 none) ≫ f = f ≫ 𝟙 X
[PROOFSTEP]
simp [w]
[GOAL]
case none.some.init
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f
val✝ : J
⊢ F.map (Hom.init val✝) ≫ ι val✝ = f ≫ 𝟙 X
[PROOFSTEP]
simp [w]
[GOAL]
case some.some.id
J : Type w
C : Type u
inst✝ : Category.{v, u} C
F : WidePushoutShape J ⥤ C
X : C
f : F.obj none ⟶ X
ι : (j : J) → F.obj (some j) ⟶ X
w : ∀ (j : J), F.map (Hom.init j) ≫ ι j = f
val✝ : J
⊢ F.map (𝟙 (some val✝)) ≫ ι val✝ = ι val✝ ≫ 𝟙 X
[PROOFSTEP]
simp [w]
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
J' : Type w'
h : J ≃ J'
j : WidePushoutShape J
⊢ (𝟭 (WidePushoutShape J)).obj j ≅
((wideSpan none (fun j => some (↑h j)) fun j => Hom.init (↑h j)) ⋙
wideSpan none (fun j => some (Equiv.invFun h j)) fun j => Hom.init (Equiv.invFun h j)).obj
j
[PROOFSTEP]
aesop_cat_nonterminal
[GOAL]
case none
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
⊢ none ≅ none
case some J : Type w C : Type u inst : Category.{v, u} C J' : Type w' h : J ≃ J' val✝ : J ⊢ some val✝ ≅ some val✝
[PROOFSTEP]
repeat rfl
[GOAL]
case none
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
⊢ none ≅ none
case some J : Type w C : Type u inst : Category.{v, u} C J' : Type w' h : J ≃ J' val✝ : J ⊢ some val✝ ≅ some val✝
[PROOFSTEP]
rfl
[GOAL]
case some
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
val✝ : J
⊢ some val✝ ≅ some val✝
[PROOFSTEP]
rfl
[GOAL]
[PROOFSTEP]
rfl
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
J' : Type w'
h : J ≃ J'
X✝ Y✝ : WidePushoutShape J
f : X✝ ⟶ Y✝
⊢ (𝟭 (WidePushoutShape J)).map f ≫
((fun j =>
id
(Option.casesOn j (id (Iso.refl none)) fun val =>
Eq.mpr
(_ :
(some val ≅
Option.rec none (fun val => some (↑h.symm val))
(Option.rec none (fun val => some (↑h val)) (some val))) =
(some val ≅ some val))
(Iso.refl (some val))))
Y✝).hom =
((fun j =>
id
(Option.casesOn j (id (Iso.refl none)) fun val =>
Eq.mpr
(_ :
(some val ≅
Option.rec none (fun val => some (↑h.symm val))
(Option.rec none (fun val => some (↑h val)) (some val))) =
(some val ≅ some val))
(Iso.refl (some val))))
X✝).hom ≫
((wideSpan none (fun j => some (↑h j)) fun j => Hom.init (↑h j)) ⋙
wideSpan none (fun j => some (Equiv.invFun h j)) fun j => Hom.init (Equiv.invFun h j)).map
f
[PROOFSTEP]
simp only [eq_iff_true_of_subsingleton]
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
J' : Type w'
h : J ≃ J'
j : WidePushoutShape J'
⊢ ((wideSpan none (fun j => some (Equiv.invFun h j)) fun j => Hom.init (Equiv.invFun h j)) ⋙
wideSpan none (fun j => some (↑h j)) fun j => Hom.init (↑h j)).obj
j ≅
(𝟭 (WidePushoutShape J')).obj j
[PROOFSTEP]
aesop_cat_nonterminal
[GOAL]
case none
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
⊢ none ≅ none
case some J : Type w C : Type u inst : Category.{v, u} C J' : Type w' h : J ≃ J' val✝ : J' ⊢ some val✝ ≅ some val✝
[PROOFSTEP]
repeat rfl
[GOAL]
case none
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
⊢ none ≅ none
case some J : Type w C : Type u inst : Category.{v, u} C J' : Type w' h : J ≃ J' val✝ : J' ⊢ some val✝ ≅ some val✝
[PROOFSTEP]
rfl
[GOAL]
case some
J : Type w
C : Type u
inst : Category.{v, u} C
J' : Type w'
h : J ≃ J'
val✝ : J'
⊢ some val✝ ≅ some val✝
[PROOFSTEP]
rfl
[GOAL]
[PROOFSTEP]
rfl
[GOAL]
J : Type w
C : Type u
inst✝ : Category.{v, u} C
J' : Type w'
h : J ≃ J'
X✝ Y✝ : WidePushoutShape J'
f : X✝ ⟶ Y✝
⊢ ((wideSpan none (fun j => some (Equiv.invFun h j)) fun j => Hom.init (Equiv.invFun h j)) ⋙
wideSpan none (fun j => some (↑h j)) fun j => Hom.init (↑h j)).map
f ≫
((fun j =>
id
(Option.casesOn j (id (Iso.refl none)) fun val =>
Eq.mpr
(_ :
(Option.rec none (fun val => some (↑h val))
(Option.rec none (fun val => some (↑h.symm val)) (some val)) ≅
some val) =
(some val ≅ some val))
(Iso.refl (some val))))
Y✝).hom =
((fun j =>
id
(Option.casesOn j (id (Iso.refl none)) fun val =>
Eq.mpr
(_ :
(Option.rec none (fun val => some (↑h val))
(Option.rec none (fun val => some (↑h.symm val)) (some val)) ≅
some val) =
(some val ≅ some val))
(Iso.refl (some val))))
X✝).hom ≫
(𝟭 (WidePushoutShape J')).map f
[PROOFSTEP]
simp only [eq_iff_true_of_subsingleton]
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
j : J
⊢ π arrows j ≫ arrows j = base arrows
[PROOFSTEP]
apply limit.w (WidePullbackShape.wideCospan _ _ _) (WidePullbackShape.Hom.term j)
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
j : J
⊢ lift f fs w ≫ π arrows j = fs j
[PROOFSTEP]
simp only [limit.lift_π, WidePullbackShape.mkCone_pt, WidePullbackShape.mkCone_π_app]
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
⊢ lift f fs w ≫ base arrows = f
[PROOFSTEP]
simp only [limit.lift_π, WidePullbackShape.mkCone_pt, WidePullbackShape.mkCone_π_app]
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
⊢ (∀ (j : J), g ≫ π arrows j = fs j) → g ≫ base arrows = f → g = lift f fs w
[PROOFSTEP]
intro h1 h2
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
h1 : ∀ (j : J), g ≫ π arrows j = fs j
h2 : g ≫ base arrows = f
⊢ g = lift f fs w
[PROOFSTEP]
apply (limit.isLimit (WidePullbackShape.wideCospan B objs arrows)).uniq (WidePullbackShape.mkCone f fs <| w)
[GOAL]
case x
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
h1 : ∀ (j : J), g ≫ π arrows j = fs j
h2 : g ≫ base arrows = f
⊢ ∀ (j : WidePullbackShape J),
g ≫ NatTrans.app (limit.cone (WidePullbackShape.wideCospan B objs arrows)).π j =
NatTrans.app (WidePullbackShape.mkCone f fs w).π j
[PROOFSTEP]
rintro (_ | _)
[GOAL]
case x.none
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
h1 : ∀ (j : J), g ≫ π arrows j = fs j
h2 : g ≫ base arrows = f
⊢ g ≫ NatTrans.app (limit.cone (WidePullbackShape.wideCospan B objs arrows)).π none =
NatTrans.app (WidePullbackShape.mkCone f fs w).π none
[PROOFSTEP]
apply h2
[GOAL]
case x.some
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
h1 : ∀ (j : J), g ≫ π arrows j = fs j
h2 : g ≫ base arrows = f
val✝ : J
⊢ g ≫ NatTrans.app (limit.cone (WidePullbackShape.wideCospan B objs arrows)).π (some val✝) =
NatTrans.app (WidePullbackShape.mkCone f fs w).π (some val✝)
[PROOFSTEP]
apply h1
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type ?u.192408
inst✝¹ : Category.{v₂, ?u.192408} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
⊢ ∀ (j : J), (fun j => g ≫ π arrows j) j ≫ arrows j = g ≫ base arrows
[PROOFSTEP]
aesop_cat
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
⊢ g = lift (g ≫ base arrows) (fun j => g ≫ π arrows j) (_ : ∀ (j : J), (g ≫ π arrows j) ≫ arrows j = g ≫ base arrows)
[PROOFSTEP]
apply eq_lift_of_comp_eq
[GOAL]
case a
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
⊢ ∀ (j : J), g ≫ π arrows j = g ≫ π arrows j
case a
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
⊢ g ≫ base arrows = g ≫ base arrows
[PROOFSTEP]
aesop_cat
[GOAL]
case a
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g : X ⟶ widePullback B (fun j => objs j) arrows
⊢ g ≫ base arrows = g ≫ base arrows
[PROOFSTEP]
rfl
-- Porting note: quite a few missing refl's in aesop_cat now
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g1 g2 : X ⟶ widePullback B (fun j => objs j) arrows
⊢ (∀ (j : J), g1 ≫ π arrows j = g2 ≫ π arrows j) → g1 ≫ base arrows = g2 ≫ base arrows → g1 = g2
[PROOFSTEP]
intro h1 h2
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g1 g2 : X ⟶ widePullback B (fun j => objs j) arrows
h1 : ∀ (j : J), g1 ≫ π arrows j = g2 ≫ π arrows j
h2 : g1 ≫ base arrows = g2 ≫ base arrows
⊢ g1 = g2
[PROOFSTEP]
apply limit.hom_ext
[GOAL]
case w
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g1 g2 : X ⟶ widePullback B (fun j => objs j) arrows
h1 : ∀ (j : J), g1 ≫ π arrows j = g2 ≫ π arrows j
h2 : g1 ≫ base arrows = g2 ≫ base arrows
⊢ ∀ (j : WidePullbackShape J),
g1 ≫ limit.π (WidePullbackShape.wideCospan B (fun j => objs j) arrows) j =
g2 ≫ limit.π (WidePullbackShape.wideCospan B (fun j => objs j) arrows) j
[PROOFSTEP]
rintro (_ | _)
[GOAL]
case w.none
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g1 g2 : X ⟶ widePullback B (fun j => objs j) arrows
h1 : ∀ (j : J), g1 ≫ π arrows j = g2 ≫ π arrows j
h2 : g1 ≫ base arrows = g2 ≫ base arrows
⊢ g1 ≫ limit.π (WidePullbackShape.wideCospan B (fun j => objs j) arrows) none =
g2 ≫ limit.π (WidePullbackShape.wideCospan B (fun j => objs j) arrows) none
[PROOFSTEP]
apply h2
[GOAL]
case w.some
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → objs j ⟶ B
inst✝ : HasWidePullback B objs arrows
X : D
f : X ⟶ B
fs : (j : J) → X ⟶ objs j
w : ∀ (j : J), fs j ≫ arrows j = f
g1 g2 : X ⟶ widePullback B (fun j => objs j) arrows
h1 : ∀ (j : J), g1 ≫ π arrows j = g2 ≫ π arrows j
h2 : g1 ≫ base arrows = g2 ≫ base arrows
val✝ : J
⊢ g1 ≫ limit.π (WidePullbackShape.wideCospan B (fun j => objs j) arrows) (some val✝) =
g2 ≫ limit.π (WidePullbackShape.wideCospan B (fun j => objs j) arrows) (some val✝)
[PROOFSTEP]
apply h1
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
j : J
⊢ arrows j ≫ ι arrows j = head arrows
[PROOFSTEP]
apply colimit.w (WidePushoutShape.wideSpan _ _ _) (WidePushoutShape.Hom.init j)
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
j : J
⊢ ι arrows j ≫ desc f fs w = fs j
[PROOFSTEP]
simp only [colimit.ι_desc, WidePushoutShape.mkCocone_pt, WidePushoutShape.mkCocone_ι_app]
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
⊢ head arrows ≫ desc f fs w = f
[PROOFSTEP]
simp only [colimit.ι_desc, WidePushoutShape.mkCocone_pt, WidePushoutShape.mkCocone_ι_app]
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
⊢ (∀ (j : J), ι arrows j ≫ g = fs j) → head arrows ≫ g = f → g = desc f fs w
[PROOFSTEP]
intro h1 h2
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
h1 : ∀ (j : J), ι arrows j ≫ g = fs j
h2 : head arrows ≫ g = f
⊢ g = desc f fs w
[PROOFSTEP]
apply (colimit.isColimit (WidePushoutShape.wideSpan B objs arrows)).uniq (WidePushoutShape.mkCocone f fs <| w)
[GOAL]
case x
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
h1 : ∀ (j : J), ι arrows j ≫ g = fs j
h2 : head arrows ≫ g = f
⊢ ∀ (j : WidePushoutShape J),
NatTrans.app (colimit.cocone (WidePushoutShape.wideSpan B objs arrows)).ι j ≫ g =
NatTrans.app (WidePushoutShape.mkCocone f fs w).ι j
[PROOFSTEP]
rintro (_ | _)
[GOAL]
case x.none
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
h1 : ∀ (j : J), ι arrows j ≫ g = fs j
h2 : head arrows ≫ g = f
⊢ NatTrans.app (colimit.cocone (WidePushoutShape.wideSpan B objs arrows)).ι none ≫ g =
NatTrans.app (WidePushoutShape.mkCocone f fs w).ι none
[PROOFSTEP]
apply h2
[GOAL]
case x.some
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
h1 : ∀ (j : J), ι arrows j ≫ g = fs j
h2 : head arrows ≫ g = f
val✝ : J
⊢ NatTrans.app (colimit.cocone (WidePushoutShape.wideSpan B objs arrows)).ι (some val✝) ≫ g =
NatTrans.app (WidePushoutShape.mkCocone f fs w).ι (some val✝)
[PROOFSTEP]
apply h1
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type ?u.203710
inst✝¹ : Category.{v₂, ?u.203710} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
j : J
⊢ arrows j ≫ (fun j => ι arrows j ≫ g) j = head arrows ≫ g
[PROOFSTEP]
rw [← Category.assoc]
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type ?u.203710
inst✝¹ : Category.{v₂, ?u.203710} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
j : J
⊢ (arrows j ≫ ι arrows j) ≫ g = head arrows ≫ g
[PROOFSTEP]
simp
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
⊢ g =
desc (head arrows ≫ g) (fun j => ι arrows j ≫ g)
(_ : ∀ (j : J), arrows j ≫ (fun j => ι arrows j ≫ g) j = head arrows ≫ g)
[PROOFSTEP]
apply eq_desc_of_comp_eq
[GOAL]
case a
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
⊢ ∀ (j : J), ι arrows j ≫ g = ι arrows j ≫ g
case a
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
⊢ head arrows ≫ g = head arrows ≫ g
[PROOFSTEP]
aesop_cat
[GOAL]
case a
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g : widePushout B (fun j => objs j) arrows ⟶ X
⊢ head arrows ≫ g = head arrows ≫ g
[PROOFSTEP]
rfl
-- Porting note: another missing rfl
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g1 g2 : widePushout B (fun j => objs j) arrows ⟶ X
⊢ (∀ (j : J), ι arrows j ≫ g1 = ι arrows j ≫ g2) → head arrows ≫ g1 = head arrows ≫ g2 → g1 = g2
[PROOFSTEP]
intro h1 h2
[GOAL]
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g1 g2 : widePushout B (fun j => objs j) arrows ⟶ X
h1 : ∀ (j : J), ι arrows j ≫ g1 = ι arrows j ≫ g2
h2 : head arrows ≫ g1 = head arrows ≫ g2
⊢ g1 = g2
[PROOFSTEP]
apply colimit.hom_ext
[GOAL]
case w
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g1 g2 : widePushout B (fun j => objs j) arrows ⟶ X
h1 : ∀ (j : J), ι arrows j ≫ g1 = ι arrows j ≫ g2
h2 : head arrows ≫ g1 = head arrows ≫ g2
⊢ ∀ (j : WidePushoutShape J),
colimit.ι (WidePushoutShape.wideSpan B (fun j => objs j) arrows) j ≫ g1 =
colimit.ι (WidePushoutShape.wideSpan B (fun j => objs j) arrows) j ≫ g2
[PROOFSTEP]
rintro (_ | _)
[GOAL]
case w.none
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g1 g2 : widePushout B (fun j => objs j) arrows ⟶ X
h1 : ∀ (j : J), ι arrows j ≫ g1 = ι arrows j ≫ g2
h2 : head arrows ≫ g1 = head arrows ≫ g2
⊢ colimit.ι (WidePushoutShape.wideSpan B (fun j => objs j) arrows) none ≫ g1 =
colimit.ι (WidePushoutShape.wideSpan B (fun j => objs j) arrows) none ≫ g2
[PROOFSTEP]
apply h2
[GOAL]
case w.some
J : Type w
C : Type u
inst✝² : Category.{v, u} C
D : Type u_1
inst✝¹ : Category.{v₂, u_1} D
B : D
objs : J → D
arrows : (j : J) → B ⟶ objs j
inst✝ : HasWidePushout B objs arrows
X : D
f : B ⟶ X
fs : (j : J) → objs j ⟶ X
w : ∀ (j : J), arrows j ≫ fs j = f
g1 g2 : widePushout B (fun j => objs j) arrows ⟶ X
h1 : ∀ (j : J), ι arrows j ≫ g1 = ι arrows j ≫ g2
h2 : head arrows ≫ g1 = head arrows ≫ g2
val✝ : J
⊢ colimit.ι (WidePushoutShape.wideSpan B (fun j => objs j) arrows) (some val✝) ≫ g1 =
colimit.ι (WidePushoutShape.wideSpan B (fun j => objs j) arrows) (some val✝) ≫ g2
[PROOFSTEP]
apply h1
[STATEMENT]
lemma \<G>_subset: \<open>N1 \<subseteq> N2 \<Longrightarrow> \<G>_Fset N1 \<subseteq> \<G>_Fset N2\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. N1 \<subseteq> N2 \<Longrightarrow> \<G>_Fset N1 \<subseteq> \<G>_Fset N2
[PROOF STEP]
by auto
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! !!
!! This file is part of SciFT project !!
!! Copyright (c) 2010-2014 Nestor F. Aguirre ([email protected]) !!
!! !!
!! Redistribution and use in source and binary forms, with or without !!
!! modification, are permitted provided that the following conditions are met: !!
!! !!
!! 1. Redistributions of source code must retain the above copyright notice, this !!
!! list of conditions and the following disclaimer. !!
!! 2. Redistributions in binary form must reproduce the above copyright notice, !!
!! this list of conditions and the following disclaimer in the documentation !!
!! and/or other materials provided with the distribution. !!
!! 3. Neither the name of the copyright holders nor the names of its contributors !!
!! may be used to endorse or promote products derived from this software !!
!! without specific prior written permission. !!
!! !!
!! The copyright holders provide no reassurances that the source code provided !!
!! does not infringe any patent, copyright, or any other intellectual property !!
!! rights of third parties. The copyright holders disclaim any liability to any !!
!! recipient for claims brought against recipient by any third party for !!
!! infringement of that parties intellectual property rights. !!
!! !!
!! THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND !!
!! ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED !!
!! WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE !!
!! DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR !!
!! ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES !!
!! (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; !!
!! LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND !!
!! ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT !!
!! (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS !!
!! SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. !!
!! !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! @todo Implement selection of the precision for toReal
module String_
implicit none
private
character(1), public, parameter :: ENDL = achar(10)
! @todo It has to be 100, because in some cases len(FString_NULL) changes when called from different functions, and when it changes, it changes to 100
character(100), public, parameter :: FString_NULL = repeat(""C,100)
! http://publib.boulder.ibm.com/infocenter/lnxpcomp/v9v111/index.jsp?topic=/com.ibm.xlf111.linux.doc/xlflr/generic_interface_blocks.htm
! module m
! type point
! character(20) label
! integer x, y
! contains
! procedure :: writepoint
! generic :: write(formatted) => writepoint
! end type
!
! type :: line
! type(point) :: p1,p2
! contains
! procedure :: writeline
! generic :: write(formatted) => writeline
! end type
! contains
! subroutine writepoint(dtv, unit, iotype, vlist, iostat, iomsg)
! class(point), intent(in) :: dtv
! integer, intent(in) :: unit
! character(*), intent(in) :: iotype
! integer, intent(in) :: vlist(:)
! integer, intent(out) :: iostat
! character(*), intent(inout) :: iomsg
!
! write(unit, *, iostat=iostat, iomsg=iomsg) &
! trim(dtv%label), ': (', dtv%x, ', ', dtv%y, ')'
! end subroutine
!
! subroutine writeline(dtv, unit, iotype, vlist, iostat, iomsg)
! class(line), intent(in) :: dtv
! integer, intent(in) :: unit
! character(*), intent(in) :: iotype
! integer, intent(in) :: vlist(:)
! integer, intent(out) :: iostat
! character(*), intent(inout) :: iomsg
!
! real length, delta_x, delta_y
! delta_x = dtv%p2%x - dtv%p1%x
! delta_y = dtv%p2%y - dtv%p1%y
! length = sqrt(delta_x**2 + delta_y**2)
!
! write(unit, *, iostat=iostat, iomsg=iomsg) &
! 'Distance from ', dtv%p1, ' to ', dtv%p2, ' is ', length
! end subroutine
! end module
! ! READ (FORMATTED)
! subroutine my_read_routine_formatted(dtv, unit, iotype, v_list, iostat, iomsg)
! integer, intent(in) :: unit ! unit number
! ! the derived-type value/variable
! dtv_type_spec , intent(inout) :: dtv
! ! the edit descriptor string
! character(len=*), intent(in) :: iotype
! integer, intent(in) :: v_list(:)
! integer, intent(out) :: iostat
! character (len=*), intent(inout) :: iomsg
! end
!
! ! READ (UNFORMATTED)
! subroutine my_read_routine_unformatted(dtv, unit, iostat, iomsg)
! integer, intent(in) :: unit
! ! the derived-type value/variable
! dtv_type_spec , intent(inout) :: dtv
! integer, intent(out) :: iostat
! character (len=*), intent(inout) :: iomsg
! end
!
! ! WRITE (FORMATTED)
! subroutine my_write_routine_formatted(dtv, unit, iotype, v_list, iostat, iomsg)
! integer, intent(in) :: unit
! ! the derived-type value/variable
! dtv_type_spec , intent(in) :: dtv
! ! the edit descriptor string
! character (len=*), intent(in) :: iotype
! integer, intent(in) :: v_list(:)
! integer, intent(out) :: iostat
! character (len=*), intent(inout) :: iomsg
! end
!
! ! WRITE (UNFORMATTED)
! subroutine my_write_routine_unformatted(dtv, unit, iostat, iomsg)
! integer, intent(in) :: unit
! ! the derived-type value/variable
! dtv_type_spec , intent(in) :: dtv
! integer, intent(out) :: iostat
! character (len=*), intent(inout) :: iomsg
! end
public :: &
FString_split, &
FString_toLogical, &
FString_toInteger, &
FString_toReal, &
FString_toComplex, &
FString_toString, &
FString_toIntegerArray, &
FString_toRealArray, &
FString_fromString, &
FString_fromInteger, &
FString_fromReal, &
FString_fromLogical, &
FString_fromIntegerArray, &
FString_fromRealArray, &
FString_removeTabs, &
FString_replace, &
FString_replaceByRealArr, &
FString_count, &
FString_hashKey, &
FString_isInteger, &
FString_isNumeric, &
FString_toUpper, &
FString_toLower, &
FString_isNull, &
FString_removeFileExtension, &
String_split, &
String_toLogical, &
String_toInteger, &
String_toReal, &
String_toComplex, &
! String_fromInteger, &
! String_fromReal, &
String_test
type, public :: String
character(:), allocatable :: fstr
contains
generic :: init => initDefault, fromFString
generic :: assignment(=) => copy, fromFString
generic :: operator(+) => add, addFStr
generic :: operator(<) => lt
generic :: operator(==) => eq, eqFStr
generic :: operator(/=) => neq, neqFStr
generic :: operator(//) => add, addFStr
! generic :: write(formatted) => writeString
procedure :: initDefault
procedure :: fromFString
! procedure :: fromInteger
! procedure :: fromReal
procedure :: copy
final :: destroy
procedure :: str
procedure :: show
! procedure :: writeString
procedure :: isEmpty
procedure :: length
procedure :: at
procedure :: add
procedure :: addFStr
procedure :: lt
procedure :: eq
procedure :: eqFStr
procedure :: neq
procedure :: neqFStr
procedure :: split
procedure :: toLogical
procedure :: toInteger
procedure :: toReal
procedure :: toComplex
procedure :: removeTabs
procedure :: hashKey
procedure :: isInteger
procedure :: isNumeric
procedure :: toUpper
procedure :: toLower
procedure :: replace
procedure :: replaceByRealArr
procedure :: removeFileExtension
end type String
contains
!>
!! @brief Constructor
!!
subroutine initDefault( this )
class(String) :: this
if( allocated(this.fstr) ) deallocate(this.fstr)
this.fstr = ""
end subroutine initDefault
!>
!! @brief Constructor, from fortran string
!!
subroutine fromFString( this, fstr )
class(String), intent(out) :: this
character(*), intent(in) :: fstr
if( allocated(this.fstr) ) deallocate(this.fstr)
! this.fstr = adjustl(trim(fstr))
this.fstr = fstr
end subroutine fromFString
!>
!! @brief Copy constructor
!!
subroutine copy( this, other )
class(String), intent(inout) :: this
class(String), intent(in) :: other
if( allocated(this.fstr) ) deallocate(this.fstr)
this.fstr = other.fstr
end subroutine copy
!>
!! @brief Destructor
!!
subroutine destroy( this )
type(String) :: this
if( allocated(this.fstr) ) deallocate(this.fstr)
end subroutine destroy
!>
!! @brief Convert to string
!!
function str( this ) result( output )
class(String) :: this
character(len=200) :: output
integer :: fmt
character(len=200) :: strBuffer
output = ""
output = trim(output)//"<String:"
output = trim(output)//this.fstr
! output = trim(output)//"min="
! fmt = int(log10(this.min+1.0))+1
! write(strBuffer, "(f<fmt+7>.6)") this.min
! output = trim(output)//trim(strBuffer)
!
! output = trim(output)//",size="
! fmt = int(log10(float(this.size+1)))+1
! write(strBuffer, "(i<fmt>)") this.size
! output = trim(output)//trim(strBuffer)
output = trim(output)//">"
end function str
!>
!! @brief Show
!!
subroutine show( this, unit )
class(String) :: this
integer, optional, intent(in) :: unit
integer :: effunit
if( present(unit) ) then
effunit = unit
else
effunit = 6
end if
write(effunit,"(a)") trim(this.str())
end subroutine show
!>
!! @brief
!!
! subroutine writeString( this, unit, iotype, vlist, iostat, iomsg )
! class(String), intent(in) :: this
! integer, intent(in) :: unit
! character(*), intent(in) :: iotype
! integer, intent(in) :: vlist(:)
! integer, intent(out) :: iostat
! character(*), intent(inout) :: iomsg
!
! write(unit, *, iostat=iostat, iomsg=iomsg) trim(this.str())
! end subroutine writeString
!>
!! @brief
!!
function isEmpty( this ) result( output )
class(String), intent(in) :: this
logical :: output
output = ( len_trim(adjustl(this.fstr)) == 0 )
end function isEmpty
!>
!! @brief
!!
function length( this ) result( output )
class(String), intent(in) :: this
integer :: output
output = len_trim(this.fstr)
end function length
!>
!! @brief
!!
function at( this, pos ) result( output )
class(String), intent(in) :: this
integer, intent(in) :: pos
character :: output
character(:), allocatable :: buffer
buffer = trim(this.fstr)
output = buffer(pos:pos)
end function at
!>
!! @brief
!!
function add( this, other ) result( output )
class(String), intent(in) :: this
class(String), intent(in) :: other
type(String) :: output
output.fstr = this.fstr//other.fstr
end function add
!>
!! @brief
!!
function addFStr( this, other ) result( output )
class(String), intent(in) :: this
character(*), intent(in) :: other
type(String) :: output
output.fstr = this.fstr//other
end function addFStr
!>
!! @brief
!!
function lt( this, other ) result( output )
class(String), intent(in) :: this
class(String), intent(in) :: other
logical :: output
output = ( this.hashKey() < other.hashKey() )
end function lt
!>
!! @brief
!!
function eq( this, other ) result( output )
class(String), intent(in) :: this
class(String), intent(in) :: other
logical :: output
output = ( this.hashKey() == other.hashKey() )
end function eq
!>
!! @brief
!!
function eqFStr( this, other ) result( output )
class(String), intent(in) :: this
character(*), intent(in) :: other
logical :: output
output = ( this.hashKey() == FString_hashKey(other) )
end function eqFStr
!>
!! @brief
!!
function neq( this, other ) result( output )
class(String), intent(in) :: this
class(String), intent(in) :: other
logical :: output
output = ( this.hashKey() /= other.hashKey() )
end function neq
!>
!! @brief
!!
function neqFStr( this, other ) result( output )
class(String), intent(in) :: this
character(*), intent(in) :: other
logical :: output
output = ( this.hashKey() /= FString_hashKey(other) )
end function neqFStr
!>
!! @brief
!!
subroutine split( this, tokens, delimiters )
class(String), intent(in) :: this
character(*), allocatable, intent(out) :: tokens(:)
character(*), intent(in) :: delimiters
call FString_split( this.fstr, tokens, delimiters )
end subroutine split
!>
!! @brief
!!
subroutine FString_split( str, tokens, delimiters )
character(*), intent(in) :: str
character(*), allocatable, intent(out) :: tokens(:)
character(*), intent(in) :: delimiters
integer, allocatable :: posTokenBegin(:), posTokenEnd(:)
logical :: isDelimiter, isDelimiterPrev
integer :: ntokens
integer :: i, j
if( allocated(tokens) ) then
deallocate(tokens)
end if
if( len(trim(str)) == 0 ) then
allocate( tokens(1) )
tokens(1) = str
return
end if
! In the worst case, every character is a delimiter
allocate( posTokenBegin(len(str)) )
allocate( posTokenEnd(len(str)) )
posTokenBegin = 0
posTokenEnd = 0
!! scan
ntokens = 1
isDelimiterPrev = .true.
do i=1,len(str)
! write(*,"(3A)",advance="no") str(i:i)
isDelimiter = .false.
do j=1,len(delimiters)
if( str(i:i) == delimiters(j:j) ) then
isDelimiter = .true.
exit
end if
end do
if( isDelimiter .and. .not. isDelimiterPrev ) then
posTokenEnd( ntokens ) = i-1
ntokens = ntokens + 1
! write(*,*) " E"
else if( .not. isDelimiter .and. isDelimiterPrev ) then
posTokenBegin( ntokens ) = i
! write(*,*) " B"
! else
! write(*,*) ""
end if
isDelimiterPrev = isDelimiter
end do
if( posTokenEnd(ntokens) == 0 .and. .not. isDelimiter ) then
posTokenEnd( ntokens ) = len( str )
else
ntokens = ntokens - 1
end if
! write(*,"(A,<len(str)>I3)") "ntokens = ", ntokens
! write(*,"(A,<len(str)>I3)") " begin = ", posTokenBegin
! write(*,"(A,<len(str)>I3)") " end = ", posTokenEnd
allocate( tokens(ntokens) )
do i=1,ntokens
tokens(i) = str( posTokenBegin(i):posTokenEnd(i) )
end do
deallocate( posTokenBegin )
deallocate( posTokenEnd )
end subroutine FString_split
!>
!! @brief
!!
function toLogical( this ) result( output )
class(String), intent(in) :: this
logical :: output
read( this.fstr, * ) output
end function toLogical
!>
!! @brief
!!
function toInteger( this ) result( output )
class(String), intent(in) :: this
integer :: output
read( this.fstr, * ) output
end function toInteger
!>
!! @brief
!!
function toReal( this ) result( output )
class(String), intent(in) :: this
real(8) :: output
read( this.fstr, * ) output
end function toReal
!>
!! @brief
!!
function toComplex( this ) result( output )
class(String), intent(in) :: this
complex(8) :: output
read( this.fstr, * ) output
end function toComplex
!>
!! @brief Replace every occurrence of the character \tab in this string
!! for four blank spaces
!!
subroutine removeTabs( this )
class(String) :: this
this = FString_removeTabs( this.fstr )
end subroutine removeTabs
!>
!! @brief
!!
function FString_toLogical( str ) result( output )
character(*), intent(in) :: str
logical :: output
read( str, * ) output
end function FString_toLogical
!>
!! @brief
!!
function FString_toInteger( str ) result( output )
character(*), intent(in) :: str
integer :: output
read( str, * ) output
end function FString_toInteger
!>
!! @brief
!!
function FString_toReal( str ) result( output )
character(*), intent(in) :: str
real(8) :: output
read( str, * ) output
end function FString_toReal
!>
!! @brief
!!
function FString_toComplex( str ) result( output )
character(*), intent(in) :: str
complex(8) :: output
read( str, * ) output
end function FString_toComplex
!>
!! @brief
!!
function FString_toString( str ) result( output )
character(*), intent(in) :: str
type(String) :: output
call output.fromFString( str )
end function FString_toString
!>
!! @brief
!!
subroutine FString_toIntegerArray( str, output )
character(*), intent(in) :: str
integer, allocatable :: output(:)
character(1000), allocatable :: tokens(:)
character(1000) :: strBuffer
integer :: i
call FString_split( trim(adjustl(str)), tokens, "()[]" )
strBuffer = tokens(1)
call FString_split( strBuffer, tokens, "," )
if( allocated(output) ) deallocate( output )
allocate( output(size(tokens)) )
do i=1,size(tokens)
read( tokens(i), * ) output(i)
end do
deallocate(tokens)
end subroutine FString_toIntegerArray
!>
!! @brief
!!
subroutine FString_toRealArray( str, output )
character(*), intent(in) :: str
real(8), allocatable :: output(:)
character(100000), allocatable :: tokens(:)
character(100000) :: strBuffer
integer :: i
call FString_split( trim(adjustl(str)), tokens, "()[]" )
strBuffer = tokens(1)
if( len_trim(strBuffer) == 0 ) then
deallocate(tokens)
! deallocate(strBuffer)
return
end if
call FString_split( strBuffer, tokens, "," )
if( allocated(output) ) deallocate( output )
allocate( output(size(tokens)) )
do i=1,size(tokens)
read( tokens(i), * ) output(i)
end do
deallocate(tokens)
! deallocate(strBuffer)
end subroutine FString_toRealArray
!>
!! @brief
!!
function FString_fromString( str ) result( output )
type(String), intent(in) :: str
character(:), allocatable :: output
output = str.fstr
end function FString_fromString
!>
!! @brief
!!
function FString_fromInteger( val, format ) result( output )
integer, intent(in) :: val
character(*), optional, intent(in) :: format
character(1000) :: output
character(1000) :: strBuffer
if( present(format) ) then
write( strBuffer, format ) val
output = strBuffer
else
write( strBuffer, * ) val
output = trim(adjustl(strBuffer))
end if
end function FString_fromInteger
!>
!! @brief
!!
function FString_fromReal( val, format ) result( output )
real(8), intent(in) :: val
character(*), optional, intent(in) :: format
character(:), allocatable :: output
character(1000) :: strBuffer
if( present(format) ) then
write( strBuffer, format ) val
output = strBuffer
else
write( strBuffer, * ) val
output = trim(adjustl(strBuffer))
end if
end function FString_fromReal
!>
!! @brief
!!
function FString_fromLogical( val, format ) result( output )
logical, intent(in) :: val
character(*), optional, intent(in) :: format
character(:), allocatable :: output
character(1000) :: strBuffer
if( present(format) ) then
write( strBuffer, format ) val
output = strBuffer
else
write( strBuffer, * ) val
output = trim(adjustl(strBuffer))
end if
end function FString_fromLogical
!>
!! @brief
!!
function FString_fromIntegerArray( val, format ) result( output )
integer, intent(in) :: val(:)
character(*), optional, intent(in) :: format
character(1000) :: output
character(1000) :: strBuffer
if( present(format) ) then
write( strBuffer, format ) val
else
write( strBuffer, * ) val
end if
output = "( "//trim(adjustl(strBuffer))//" )"
end function FString_fromIntegerArray
!>
!! @brief
!!
function FString_fromRealArray( val, format ) result( output )
real(8), intent(in) :: val(:)
character(*), optional, intent(in) :: format
character(:), allocatable :: output
character(1000) :: strBuffer
if( present(format) ) then
write( strBuffer, format ) val
else
write( strBuffer, * ) val
end if
output = "( "//trim(adjustl(strBuffer))//" )"
end function FString_fromRealArray
!>
!! @brief Replace every occurrence of the tab character in this string
!! with four blank spaces
!! @todo Offer an option to select the tab size,
!! e.g. subroutine FString_removeTabs( str, tabSize )
!! @todo Returning an allocatable result here may be dangerous
!!
function FString_removeTabs( str ) result( output )
character(*), intent(in) :: str
character(:), allocatable :: output
output = FString_replace( str, achar(9), " " )
end function FString_removeTabs
!>
!! @brief
!!
function FString_replace( str, before, after, wholeWords, wordSeparators ) result( output )
character(*), intent(in) :: str
character(*), intent(in) :: before
character(*), intent(in) :: after
logical, optional, intent(in) :: wholeWords
character(*), optional, intent(in) :: wordSeparators
character(:), allocatable :: output
integer :: i
integer :: nMatches
integer, allocatable :: matchPos(:)
nMatches = FString_count( str, before, matchPos, wholeWords, wordSeparators )
if( nMatches == 0 ) then
output = str
return
end if
output = ""
do i=1,nMatches+1
if( i==1 ) then
output = str(1:matchPos(i)-1)//after
else if ( i==size(matchPos)+1 ) then
output = output//str(min(matchPos(i-1)+len(before),len(str)+1):len(str))
else
output = output//str(matchPos(i-1)+len(before):matchPos(i)-1)//after
end if
end do
deallocate( matchPos )
end function FString_replace
!>
!! @brief
!!
function FString_replaceByRealArr( str, varName, varValue, wholeWords, wordSeparators ) result( output )
character(*), intent(inout) :: str
character(*), intent(in) :: varName(:)
real(8), intent(in) :: varValue(:)
logical, optional, intent(in) :: wholeWords
character(*), optional, intent(in) :: wordSeparators
character(:), allocatable :: output
integer :: i
if( size(varName) /= size(varValue) ) then
write(6,*) "### ERROR ### FString_replaceByRealArr. size(varName) /= size(varValue)"
stop
end if
output = str
do i=1,size(varName)
output = FString_replace( output, trim(adjustl(varName(i))), FString_fromReal(varValue(i)), wholeWords=.true. )
end do
end function FString_replaceByRealArr
!>
!! @brief
!!
function FString_count( str, ref, matchPos, wholeWords, wordSeparators ) result( nMatches )
character(*), intent(in) :: str
character(*), intent(in) :: ref
integer, allocatable, optional, intent(out) :: matchPos(:)
logical, optional, intent(in) :: wholeWords
character(*), optional, intent(in) :: wordSeparators
integer :: nMatches
logical :: effWholeWords
character(:), allocatable :: effWordSeparators
integer :: pos
integer, allocatable :: tmpMatchPos(:)
integer, allocatable :: tmpMatchWholeWord(:) ! 0 or 1
character(:), allocatable :: strBuffer
integer :: i, n
effWholeWords = .false.
if( present(wholeWords) ) effWholeWords = wholeWords
! wordSeparators": "./\\()\"'-:,.;<>~!@#$%^&*|+=[]{}`~?\\.",
effWordSeparators = ":;@-.,/_~?&=%+#*()[]{} "//achar(9)
if( present(wordSeparators) ) effWordSeparators = wordSeparators
strBuffer = str
! In the worst case every character is a match for ref
allocate( tmpMatchPos(len(str)) )
allocate( tmpMatchWholeWord(len(str)) )
tmpMatchPos = 0
tmpMatchWholeWord = 0
n = 1
do while( .true. )
pos = index( strBuffer, ref )
if( pos == 0 ) exit
if( ( &
( pos == 1 .or. index( effWordSeparators, strBuffer(max(pos-1,1):pos-1) ) /= 0 ) .and. &
( pos+len(ref) > len(strBuffer) .or. &
index( effWordSeparators, strBuffer(min(pos+len(ref),len(strBuffer)):min(pos+len(ref),len(strBuffer))) ) /= 0 ) &
) ) then
tmpMatchWholeWord(n) = 1
end if
if( n == 1 ) then
tmpMatchPos(n) = pos
else
tmpMatchPos(n) = pos + tmpMatchPos(n-1)
end if
n = n + 1
strBuffer = strBuffer(pos+1:)
end do
nMatches = n-1
if( present(matchPos) ) then
if( effWholeWords ) then
if( allocated(matchPos) ) deallocate( matchPos )
allocate( matchPos(sum(tmpMatchWholeWord)) )
i = 1
do n=1,nMatches
if( tmpMatchWholeWord(n) == 1 ) then
matchPos(i) = tmpMatchPos(n)
i = i+1
end if
end do
nMatches = sum(tmpMatchWholeWord)
else
if( allocated(matchPos) ) deallocate( matchPos )
allocate( matchPos(nMatches) )
matchPos = tmpMatchPos(1:nMatches)
end if
end if
deallocate( tmpMatchPos, tmpMatchWholeWord )
end function FString_count
!>
!! @brief
!!
! function hashKey( this, debug ) result( output )
function hashKey( this ) result( output )
class(String), intent(in) :: this
! logical, optional, intent(in) :: debug
integer :: output
! output = FString_hashKey( this.fstr, debug )
output = FString_hashKey( this.fstr )
end function hashKey
!>
!! @brief
!! @todo Apparently if this=,R,S:0,5 it is reported as numeric
!!
function isInteger( this ) result( output )
class(String), intent(in) :: this
logical :: output
output = FString_isInteger( this.fstr )
end function isInteger
!>
!! @brief
!! @todo Apparently if this=,R,S:0,5 it is reported as numeric
!!
function isNumeric( this ) result( output )
class(String), intent(in) :: this
logical :: output
output = FString_isNumeric( this.fstr )
end function isNumeric
!>
!! @brief Returns an uppercase copy of the string.
!!
function toUpper( this ) result( output )
class(String), intent(in) :: this
type(String) :: output
output = FString_toUpper( this.fstr )
end function toUpper
!>
!! @brief Returns a lowercase copy of the string.
!!
function toLower( this ) result( output )
class(String), intent(in) :: this
type(String) :: output
output = FString_toLower( this.fstr )
end function toLower
!>
!! @brief Replaces every occurrence of the string before with the string after
!!
subroutine replace( this, before, after, wholeWords, wordSeparators )
class(String) :: this
character(*), intent(in) :: before
character(*), intent(in) :: after
logical, optional, intent(in) :: wholeWords
character(*), optional, intent(in) :: wordSeparators
this = FString_replace( this.fstr, before, after, wholeWords, wordSeparators )
end subroutine replace
!>
!! @brief
!!
subroutine replaceByRealArr( this, varName, varValue, wholeWords, wordSeparators )
class(String) :: this
character(*), allocatable, intent(in) :: varName(:)
real(8), allocatable, intent(in) :: varValue(:)
logical, optional, intent(in) :: wholeWords
character(*), optional, intent(in) :: wordSeparators
this = FString_replaceByRealArr( this.fstr, varName, varValue, wholeWords, wordSeparators )
end subroutine replaceByRealArr
!>
!! @brief
!!
function removeFileExtension( this, extension ) result( output )
class(String), intent(in) :: this
type(String), optional :: extension
type(String) :: output
if( present(extension) ) then
output = FString_removeFileExtension( this.fstr, extension=extension.fstr )
else
output = FString_removeFileExtension( this.fstr )
end if
end function removeFileExtension
!>
!! @brief
!! Taken from the java.lang.String hash function, it use 32 bits
!!
! function FString_hashKey( str, debug ) result( output )
function FString_hashKey( str ) result( output )
character(*), intent(in) :: str
! logical, optional, intent(in) :: debug
integer :: output
integer :: i
! logical :: effDebug
!
! effDebug = .false.
! if( present(debug) ) effDebug = debug
output = 0
do i=1,len(str)
! if( effDebug ) then
! write(0,*) str(i:i), "-->", ichar(str(i:i)), " : ", ichar(str(i:i))*31**(len(str)-i)
! end if
output = output + ichar(str(i:i))*31**(len(str)-i)
end do
! if( effDebug ) then
! write(0,*) "key --> ", output
! end if
end function FString_hashKey
!>
!! @brief
!! @todo Apparently if this=,R,S:0,5 it is reported as numeric
!!
function FString_isInteger( str ) result( output )
character(*), intent(in) :: str
logical :: output
integer :: intValue
integer :: e
read( str, *, iostat=e ) intValue
output = ( e == 0 )
end function FString_isInteger
!>
!! @brief
!! @todo Apparently if this=,R,S:0,5 it is reported as numeric
!!
function FString_isNumeric( str ) result( output )
character(*), intent(in) :: str
logical :: output
real(8) :: realType
integer :: e
output = .false.
read( str, *, iostat=e ) realType
output = ( e == 0 )
end function FString_isNumeric
!>
!! @brief Returns an uppercase copy of the string.
!! @todo The conversion uses a fixed character table; converting
!! according to a designated LOCALE should also be supported
!! @see FString_toLower()
!!
function FString_toUpper( str ) result( output )
character(*), intent(in) :: str
character(len(str)) :: output
! Translation from awk implementation
! echo aábcdeéfghiíjklmnoópqrstuúvwxyz | awk '{ print toupper($0) }'
character(*), parameter :: lc = "¡!¿? aábcdeéfghiíjklmnoópqrstuúvwxyz"
character(*), parameter :: uc = "¡!¿? AÁBCDEÉFGHIÍJKLMNOÓPQRSTUÚVWXYZ"
integer :: j, k
output = str
do j=1,len_trim(str)
k = index( lc, str(j:j) )
if (k > 0) output(j:j) = uc(k:k)
end do
end function FString_toUpper
!>
!! @brief Returns a lowercase copy of the string.
!!
function FString_toLower( str ) result( output )
character(*), intent(in) :: str
character(len(str)) :: output
character(*), parameter :: lc = "¡!¿? aábcdeéfghiíjklmnoópqrstuúvwxyz"
character(*), parameter :: uc = "¡!¿? AÁBCDEÉFGHIÍJKLMNOÓPQRSTUÚVWXYZ"
integer :: j, k
output = str
do j=1,len_trim(str)
k = index( uc, str(j:j) )
if (k > 0) output(j:j) = lc(k:k)
end do
end function FString_toLower
!>
!! @brief Returns true if the string is equal to FString_NULL.
!!
function FString_isNull( str ) result( output )
character(*), intent(in) :: str
logical :: output
output = ( trim(str) == trim(FString_NULL) )
end function FString_isNull
!>
!! @brief
!!
function FString_removeFileExtension( str, extension ) result( output )
character(*), intent(in) :: str
character(:), optional, allocatable :: extension
character(:), allocatable :: output
integer :: idPos
idPos = scan( str, ".", back=.true. )
output = str(1:idPos-1)
if( present(extension) ) extension = str(idPos:len(str))
end function FString_removeFileExtension
!>
!! @brief
!!
function String_toLogical( str ) result( output )
type(String), intent(in) :: str
logical :: output
read( str.fstr, * ) output
end function String_toLogical
!>
!! @brief
!!
function String_toInteger( str ) result( output )
type(String), intent(in) :: str
integer :: output
read( str.fstr, * ) output
end function String_toInteger
!>
!! @brief
!!
function String_toReal( str ) result( output )
type(String), intent(in) :: str
real(8) :: output
read( str.fstr, * ) output
end function String_toReal
!>
!! @brief
!!
function String_toComplex( str ) result( output )
type(String), intent(in) :: str
complex(8) :: output
read( str.fstr, * ) output
end function String_toComplex
!>
!! @brief
!!
subroutine String_split( str, tokens, delimiters )
class(String), intent(in) :: str
character(*), allocatable, intent(out) :: tokens(:)
character(*), intent(in) :: delimiters
call FString_split( str.fstr, tokens, delimiters )
end subroutine String_split
!>
!! @brief Test method
!!
subroutine String_test()
type(String) :: str1
type(String) :: str2
type(String) :: str3
integer :: int1
real(8) :: real1
complex(8) :: complex1
character(100), allocatable :: fstrArray(:)
integer, allocatable :: intArray(:)
real(8), allocatable :: realArray(:)
character(:), allocatable :: fstr1
character(100), allocatable :: tokens(:)
! character(100) :: fstr
character(:), allocatable :: fstr
integer :: i
write(*,*)
write(*,*) "Testing constructors"
write(*,*) "===================="
call str1.init( "Hello my friends" )
str2 = str1
call str2.show()
str2 = "Hello my friends from assignment operator"
call str2.show()
fstr = "Hello my friends from fortran string"
str2 = fstr
call str2.show()
write(*,*)
write(*,*) "Testing operators"
write(*,*) "================="
call str1.show()
call str2.show()
str3 = str1+str2
call str3.show()
str1 = "My friends"
str1 = str1+" it works"
call str1.show()
write(*,*)
write(*,*) "Testing split"
write(*,*) "============="
write(*,*) "Original ==> ", str1.fstr
call str1.split( tokens, " " )
write(*,*) "Split( ) ==> "
do i=1,size(tokens)
write(*,*) i, " ", trim(tokens(i))
end do
write(*,*)
fstr1 = "Hello :my friends: from;?fortran?string"
write(*,*) "Original ==> ", fstr1
call FString_split( fstr1, tokens, ":;?" )
write(*,*) "Split(:;?) ==> "
do i=1,size(tokens)
write(*,*) i, " ", trim(tokens(i))
end do
write(*,*)
fstr1 = "Hello :my friends: from;?fortran?string"
write(*,*) "Original ==> ", fstr1
call FString_split( fstr1, tokens, "-" )
write(*,*) "Split(-) ==> "
do i=1,size(tokens)
write(*,*) i, " ", trim(tokens(i))
end do
write(*,*)
fstr1 = ""
write(*,*) "Original ==> ", fstr1
call FString_split( fstr1, tokens, "-" )
write(*,*) "Split(-) ==> "
do i=1,size(tokens)
write(*,*) i, " ", trim(tokens(i))
end do
write(*,*)
fstr1 = "------"
write(*,*) "Original ==> ", fstr1
call FString_split( fstr1, tokens, "-" )
write(*,*) "Split(-) ==> "
do i=1,size(tokens)
write(*,*) i, " ", trim(tokens(i))
end do
deallocate( tokens )
write(*,*)
write(*,*) "Testing conversion to integer, real and complex"
write(*,*) "==============================================="
str1 = "AAABBB"
call str1.show()
write(*,*) "isNumeric => ", str1.isNumeric()
str1 = "12345"
call str1.show()
write(*,*) "isNumeric => ", str1.isNumeric()
write(*,*) "integer => ", str1.toInteger()
write(*,*) " real => ", str1.toReal()
write(*,*) "complex => ", str1.toComplex()
str1 = "0.12345"
call str1.show()
write(*,*) "isNumeric => ", str1.isNumeric()
write(*,*) "integer => ", str1.toInteger()
write(*,*) " real => ", str1.toReal()
write(*,*) "complex => ", str1.toComplex()
str1 = "-3.52345"
call str1.show()
write(*,*) "isNumeric => ", str1.isNumeric()
write(*,*) "integer => ", str1.toInteger()
write(*,*) " real => ", str1.toReal()
write(*,*) "complex => ", str1.toComplex()
str1 = "(-3.52345,1.7538)"
call str1.show()
write(*,*) "isNumeric => ", str1.isNumeric()
write(*,*) "integer => ", str1.toInteger()
write(*,*) " real => ", str1.toReal()
write(*,*) "complex => ", str1.toComplex()
str1 = " ( -3.52345, 2.345, 6.345 )"
call str1.show()
write(*,*) "isNumeric => ", str1.isNumeric()
call FString_toIntegerArray( str1.fstr, intArray )
write(*,*) "integer => ", intArray
str1 = " ( -3.52345, 2.345, 6.345 )"
call str1.show()
write(*,*) "isNumeric => ", str1.isNumeric()
call FString_toRealArray( str1.fstr, realArray )
write(*,*) " real => ", realArray
if( allocated(intArray) ) deallocate( intArray )
if( allocated(realArray) ) deallocate( realArray )
write(*,*)
write(*,*) "Testing conversion from integer and real"
write(*,*) "======================================"
int1 = 12345
write(*,*) "integer => ", trim(FString_fromInteger( int1 ))
write(*,*) "integer => ", trim(FString_fromInteger( int1, "(I10)" ))
real1 = -3.52345_8
write(*,*) " real => ", trim(FString_fromReal( real1 ))
write(*,*) " real => ", trim(FString_fromReal( real1, "(F10.3)" ))
write(*,*)
write(*,*) "Testing count and replace characters"
write(*,*) "===================================="
! fstr = "maHola"//char(9)//"ma,amigos"//char(9)//char(9)//"del almama"
fstr = "maHola ma,amigos del almama"
write(*,*) "initial --"//trim(fstr)//"-- len=", len_trim(fstr)
write(*,*) "found ", FString_count( fstr, char(9) ), "occurrences of char(9)"
write(*,*) "found ", FString_count( fstr, "am" ), "occurrences of 'am'"
write(*,*) "---"//FString_removeTabs( fstr )//"---"
! call FString_replace( fstr, achar(9), " " )
! write(*,*) "removeTabs --"//trim(fstr)//"--"
write(*,*) "---"//FString_replace( fstr, char(9), "XXX" )//"---"
! call FString_replace( fstr, "a", "uu" )
write(*,*) "initial ---"//trim(fstr)//"---"
write(*,*) "replace 'ma'->'xx' ---"//FString_replace( fstr, "ma", "xx" )//"---"
write(*,*) "replace hw 'ma'->'xx' ---"//FString_replace( fstr, "ma", "xx", wholeWords=.true. )//"---"
write(*,*)
write(*,*) "Testing replace variables by reals"
write(*,*) "=================================="
fstr = "a**2*sin(2*pi/4.0/a**2)+exp(-b*x**2)"
write(*,"(A,A)") "original => ", fstr
write(*,"(A,2A10)") " vars => ", ["a","b"]
write(*,"(A,2F10.5)") " vars => ", [3.12345_8,0.09876_8]
write(*,"(A,A)") " final => ", trim(FString_replaceByRealArr( fstr, ["a","b"], [3.12345_8,0.09876_8] ))
write(*,*)
fstr = "a**2*sin(2*pi/4.0/a**2)+exp(-b*x**2)"
write(*,"(A,A)") "original => ", fstr
allocate(fstrArray(3))
fstrArray = [ "a", "b", "pi" ]
write(*,"(A,3A10)") " vars => ", fstrArray
allocate(realArray(3))
realArray = [ 3.12345_8, 0.09876_8, 3.141592_8 ]
write(*,"(A,3F10.5)") " vars => ", realArray
write(*,"(A,A)") " final => ", trim(FString_replaceByRealArr( fstr, fstrArray, realArray ))
deallocate(fstrArray)
deallocate(realArray)
write(*,*)
write(*,*) "Testing FString_NULL"
write(*,*) "===================="
str1 = FString_NULL
write(*,*) "str ==> ", str1.fstr
write(*,*) "( str1 /= FString_NULL ) ==> ", ( str1 /= FString_NULL )
write(*,*) "( str1 == FString_NULL ) ==> ", ( str1 == FString_NULL )
write(*,*)
write(*,*) "Testing Remove extension"
write(*,*) "========================"
str1 = "Hola.234-kjsdf.dat"
write(*,*) "str ==> ", str1.fstr
str2 = str1.removeFileExtension( extension=str3 )
write(*,*) "str.removeFileExtension() ==> ", str2.fstr
write(*,*) " extension ==> ", str3.fstr
write(*,*) "FString_removeFileExtension( str ) ==> ", FString_removeFileExtension( str1.fstr )
end subroutine String_test
end module String_
|
## Running simulation with spin-transfer torque (STT)
[Weiwei Wang and Hans Fangohr for Aurelien Manchon's group, May 2014]
The equation implemented in finmag with STT is [1,2],
\begin{equation}
\frac{\partial \vec{m}}{\partial t} = - \gamma \vec{m} \times \vec{H} + \alpha \vec{m} \times \frac{\partial \vec{m}}{\partial t} + u (\vec{j}_s \cdot \nabla) \vec{m} - \beta u [\vec{m}\times (\vec{j}_s \cdot \nabla)\vec{m}]
\end{equation}
where $\vec{j}_s$ is the current density and $u$ is a material parameter; by default,
$$u=u_{ZL}=\frac{u_0}{1+\beta^2}$$
The `sim.set_zhangli` method accepts an option `using_u0`; if `using_u0 = True` then $u=u_0$, with
$$u_0=\frac{g \mu_B P}{2 |e| M_s}=\frac{g \mu_B P a^3}{2 |e| \mu_s}$$
where $\mu_B=|e|\hbar/(2m)$ is the Bohr magneton, $P$ is the polarization rate and $e$ is the electron charge.
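As a quick numerical orientation, $u_0$ and the default $u_{ZL}$ can be evaluated directly from the formula above (a sketch; the Landé factor g = 2 and the CODATA constants are assumed here, while P, beta and Ms are the values used later in this notebook):
```
# Evaluate u_0 = g*mu_B*P/(2*|e|*Ms) and the default u_ZL = u_0/(1+beta^2)
mu_B = 9.274009994e-24   # Bohr magneton (J/T), CODATA value
e = 1.602176634e-19      # elementary charge (C)
g = 2.0                  # Lande g-factor (assumed)
P = 0.5                  # polarization rate (value used below)
beta = 0.01              # non-adiabatic parameter (value used below)
Ms = 5.8e5               # saturation magnetisation (A/m, value used below)

u_0 = g * mu_B * P / (2.0 * e * Ms)   # units of m^3/C
u_ZL = u_0 / (1.0 + beta**2)
print(u_0, u_ZL)
```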
The implemented Landau-Lifshitz-Gilbert equation with Slonczewski spin-transfer torque is [3],
\begin{equation}
\frac{\partial \vec{m}}{\partial t} = - \gamma \vec{m} \times \vec{H} + \alpha \vec{m} \times \frac{\partial \vec{m}}{\partial t}
+ \gamma \beta \epsilon (\vec{m} \times \vec{m}_p \times \vec{m})
\end{equation}
where
\begin{align*}
\beta&=\left|\frac{\hbar}{\mu_0 e}\right|\frac{J}{tM_\mathrm{s}}\,\,\, \mathrm{and}\\
\epsilon&=\frac{P\Lambda^2}{(\Lambda^2+1)+(\Lambda^2-1)(\vec{m}\cdot\vec{m}_p)}
\end{align*}
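These two quantities can likewise be checked numerically (a sketch; J, t, Ms and P match the `set_stt` call used later in this notebook, while $\Lambda = 2$ and the perpendicular $\vec{m}$, $\vec{m}_p$ are hypothetical illustration values):
```
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant (J s)
mu0 = 4e-7 * np.pi       # vacuum permeability (T m/A)
e = 1.602176634e-19      # elementary charge (C)
J = 1e10                 # current density (A/m^2), as in set_stt below
t = 0.4e-9               # free-layer thickness (m), as in set_stt below
Ms = 5.8e5               # saturation magnetisation (A/m)
P = 0.5                  # polarisation
Lam = 2.0                # hypothetical Lambda value
m = np.array([0.0, 0.0, 1.0])    # illustration: m perpendicular to m_p
m_p = np.array([0.0, 1.0, 0.0])

beta = abs(hbar / (mu0 * e)) * J / (t * Ms)
eps = P * Lam**2 / ((Lam**2 + 1) + (Lam**2 - 1) * m.dot(m_p))
print(beta, eps)
```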
[1] S. Zhang and Z. Li, Roles of nonequilibrium conduction electrons on the magnetization dynamics of ferromagnets, Phys. Rev. Lett. 93, 127204 (2004).
[2] A. Thiaville, Y. Nakatani, J. Miltat and Y. Suzuki, Micromagnetic understanding of current-driven domain wall motion in patterned nanowires, Europhys. Lett. 69, 990 (2005).
[3] J. Xiao, A. Zangwill, and M. D. Stiles, “Boltzmann test of Slonczewski’s theory of spin-transfer torque,” Phys. Rev. B, 70, 172405 (2004).
Import the related Python modules and create a 2D mesh:
```
# show matplotlib plots inside this notebook
%matplotlib inline
import os
import matplotlib.pyplot as plt
import dolfin as df
import numpy as np
from finmag import Simulation as Sim
from finmag.energies import Exchange, DMI, UniaxialAnisotropy
from finmag.util.dmi_helper import find_skyrmion_center_2d
from finmag.util.helpers import set_logging_level
import finmag
#set_logging_level("INFO") # use this to reduce DEBUG output if desired
mesh = df.RectangleMesh(0, 0, 200, 40, 200, 40)
```
We define a function to generate a skyrmion in the track:
```
def m_init_one(pos):
x,y = pos
if y > 35:
return (1,0,0)
elif y < 5:
return (-1,0,0)
x0 = 50
y0 = 20
if (x-x0)**2+(y-y0)**2<10**2:
return (0,0,-1)
else:
return (0,0,1)
```
Similarly, we define a function for the current density profile; here the current density is uniform:
```
def init_J_x(pos):
return (1e12,0,0)
```
Create a function that plots the scalar field of one magnetisation component:
```
def plot_2d_comp(sim, comp='z', title=None):
"""expects a simulation object sim and a component to plot. Component can be
'x' or 'y' or 'z'
Not optimised for speed.
"""
finmag.logger.info("plot_2d_comp: at t = {:g}".format(sim.t))
comps = {'x': 0, 'y': 1, 'z': 2}
assert comp in comps, "unknown component {}, we know: {}".format(comp, comps.keys())
m = sim.get_field_as_dolfin_function('m')
# get mesh coordinates for plotting
coords = mesh.coordinates()
mym = []
for coord in coords:
mym.append(m(coord))
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import numpy as np
x = [ r[0] for r in coords]
y = [ r[1] for r in coords]
# extract i-ith component of magnetisation
mi = [ m[comps[comp]] for m in mym]
# Create the Triangulation; no triangles so Delaunay triangulation created.
triang = tri.Triangulation(x, y)
# tripcolor plot.
plt.figure()
plt.gca().set_aspect('equal')
plt.tripcolor(triang, mi, shading='flat', cmap=plt.cm.rainbow)
plt.colorbar()
if title:
plt.title(title)
else:
plt.title('Plot of {} component of m at t={:.3f}ns'.format(comp, sim.t * 1e9))
```
We create a function to relax the system. Inside the function we create a simulation instance including the exchange, anisotropy and DM interactions; notice that we use D = 3 mJ m$^{-2}$:
```
def relax():
# we use 1d PBC in the x direction; pbc=None would disable the PBC
sim = Sim(mesh, Ms=5.8e5, unit_length=1e-9, pbc='1d')
sim.set_m(m_init_one)
plot_2d_comp(sim, comp='z', title='initial magnetisation (z-comp)')
sim.add(UniaxialAnisotropy(K1=6e5, axis=[0, 0, 1]))
sim.add(Exchange(A=1.5e-11))
sim.add(DMI(D=3e-3))
sim.alpha = 0.5
sim.relax(stopping_dmdt=0.05)
plot_2d_comp(sim, comp='z', title='relaxed magnetisation (z-comp)')
#after the relaxation, we save the magnetisation for next stage
np.save('m00.npy',sim.m)
return sim
```
```
sim = relax()
```
## Moving the skyrmion with the Zhang-Li spin-transfer torque
```
def move_skyrmion():
sim = Sim(mesh, Ms=5.8e5, unit_length=1e-9, pbc='1d')
sim.set_m(np.load('m00.npy'))
sim.add(UniaxialAnisotropy(K1=6e5, axis=[0, 0, 1]))
sim.add(Exchange(A=1.5e-11))
sim.add(DMI(D=3e-3))
sim.alpha = 0.3
# We use the Zhang-Li spin-transfer torque with polarisation=0.5 and beta=0.01
sim.set_zhangli(init_J_x, P=0.5, beta=0.01, using_u0=False)
# every 0.1ns save vtk data
sim.schedule('save_vtk', every=1e-10, filename='vtks/m.pvd', overwrite=True)
# every 0.1ns save raw data
sim.schedule('save_m', every=1e-10, filename='npys/m', overwrite=True)
# every 0.1ns create plot for notebook
sim.schedule(plot_2d_comp, every=1e-10)
# now do the calculation (events scheduled above will be done automatically)
sim.run_until(0.5e-9)
```
```
move_skyrmion()
```
### We now use the Slonczewski spin-transfer torque to move the skyrmion
```
def move_skyrmion_S():
sim = Sim(mesh, Ms=5.8e5, unit_length=1e-9, pbc='1d')
sim.set_m(np.load('m00.npy'))
sim.add(UniaxialAnisotropy(K1=6e5, axis=[0, 0, 1]))
sim.add(Exchange(A=1.5e-11))
sim.add(DMI(D=3e-3))
sim.alpha = 0.3
# We use the Slonczewski spin-transfer torque with polarisation=0.5, thickness=0.4e-9 and direction=(0,1,0)
sim.set_stt(current_density=1e10, polarisation=0.5, thickness=0.4e-9, direction=(0,1,0))
sim.schedule('save_vtk', every=1e-10, filename='vtks/m2.pvd', overwrite=True)
sim.schedule('save_m', every=1e-10, filename='npys/m2', overwrite=True)
# every 0.1ns create plot for notebook
sim.schedule(plot_2d_comp, every=1e-10)
sim.run_until(0.5e-9)
return sim
```
```
move_skyrmion_S()
```
Finally, we create a function to plot the skyrmion position as a function of time:
```
def plot_skx_pos_t():
sim = Sim(mesh, Ms=5.8e5, unit_length=1e-9, pbc='1d')
ts = []
xs = []
ys = []
for i in range(6):
sim.set_m(np.load('npys/m2_%06d.npy'%i))
x,y = find_skyrmion_center_2d(sim.llg._m)
ts.append(1e-10*i)
xs.append(x)
ys.append(y)
fig=plt.figure()
plt.plot(ts,xs,'.-',label='x')
plt.plot(ts,ys,'.-',label='y')
#plt.xlim([0,6])
#plt.ylim([-0.3,0.2])
plt.xlabel('time (s)')
plt.ylabel('pos (nm)')
plt.legend()
```
Plot the data:
```
plot_skx_pos_t()
```
```
finmag.__version__
```
'4648:ca9fb31f80d2421f5b51bf89660836dc9fdc7125'
|
##################################################################################################################
## This is the implementation of an impedance controller between any two frames of the robot.
## This code works with just pybullet and does not depend on dynamic graph.
## Primarily designed for designing and debugging controllers
#################################################################################################################
## Author: Avadesh Meduri
## Date: 20/09/2019
#################################################################################################################
import numpy as np
import pinocchio as pin
from pinocchio.utils import zero,eye
class ImpedanceController(object):
def __init__(self, name, pin_robot, frame_root_name, frame_end_name, start_column):
'''
Input :
name : Name of the impedance controller (Ex. Front left leg impedance)
pinocchio_robot : pinocchio wrapper instance.
frame_root_name : The root frame name where the spring starts(Ex. Hip)
frame_end_name : the second frame name where the spring ends(Ex. end effector)
start_column : the column from where 3 columns from the jacobian are selected
'''
self.name = name
self.pin_robot = pin_robot
self.frame_root_name = frame_root_name
self.frame_end_name = frame_end_name
self.frame_root_idx = self.pin_robot.model.getFrameId(self.frame_root_name)
self.frame_end_idx = self.pin_robot.model.getFrameId(self.frame_end_name)
self.start_column = start_column
def compute_forward_kinematics(self,q):
'''
Computes forward kinematics of all the frames and stores in data
'''
pin.framesForwardKinematics(self.pin_robot.model, self.pin_robot.data, q)
def compute_distance_between_frames(self,q):
'''
Computes the distance between the two frames or computes the location
of frame_end with respect to frame_root
'''
return self.pin_robot.data.oMf[self.frame_end_idx].translation - self.pin_robot.data.oMf[self.frame_root_idx].translation
def compute_relative_velocity_between_frames(self,q,dq):
'''
computes the velocity of the end_frame with respect to a frame
whose origin aligns with the root frame but is oriented as the world frame
'''
# TODO: define relative vel with respect to frame oriented as the base frame but located at root frame
## will be a problem in case of a back flip with current implementation.
frame_config_root = pin.SE3(self.pin_robot.data.oMf[self.frame_root_idx].rotation, np.zeros((3,1)))
frame_config_end = pin.SE3(self.pin_robot.data.oMf[self.frame_end_idx].rotation, np.zeros((3,1)))
vel_root_in_world_frame = frame_config_root.action.dot(pin.computeFrameJacobian(self.pin_robot.model, self.pin_robot.data, q, self.frame_root_idx)).dot(dq)[0:3]
vel_end_in_world_frame = frame_config_end.action.dot(pin.computeFrameJacobian(self.pin_robot.model, self.pin_robot.data, q, self.frame_end_idx)).dot(dq)[0:3]
return np.subtract(vel_end_in_world_frame, vel_root_in_world_frame).T
def compute_jacobian(self,q):
'''
computes the jacobian in the world frame
Math : J = R(World,Foot) * J_(Foot frame)
Selection of the required portion of the jacobian is also done here
'''
self.compute_forward_kinematics(q)
jac = pin.computeFrameJacobian(self.pin_robot.model, self.pin_robot.data, q, self.frame_end_idx)
jac = self.pin_robot.data.oMf[self.frame_end_idx].rotation.dot(jac[0:3])
return jac
def compute_impedance_torques(self, q, dq, kp, kd, x_des, xd_des, f):
'''
Computes the desired joint torques tau = -Jt * (F + kp(x-x_des) + kd(xd-xd_des))
Inputs:
q = joint angles
dq = joint velocities
Kp = proportional gain
Kd = derivative gain
x_des = desired [x,y,z] at time t (in the root joint frame)
xd_des = desired velocity of end effector at time t (in the root joint frame)
'''
assert (np.shape(x_des) == (3,))
assert (np.shape(xd_des) == (3,))
assert (np.shape(f) == (3,))
assert (np.shape(kp) == (3,))
assert (np.shape(kd) == (3,))
#### Reshaping values to desired shapes
x_des = np.array(x_des)
xd_des = np.array(xd_des)
f = np.array(f)
kp = np.array([[kp[0],0,0],
[0,kp[1],0],
[0,0,kp[2]]])
kd = np.array([[kd[0],0,0],
[0,kd[1],0],
[0,0,kd[2]]])
#######################################
self.compute_forward_kinematics(q)
x = self.compute_distance_between_frames(q)
xd = self.compute_relative_velocity_between_frames(q,dq)
jac = self.compute_jacobian(q)[:, self.start_column:self.start_column+3]
# Store force for learning project.
self.F_ = f + np.matmul(kp, (x - x_des)) + np.matmul(kd, (xd - xd_des).T).T
tau = -jac.T.dot(self.F_.T)
return tau
def compute_impedance_torques_world(self, q, dq, kp, kd, x_des, xd_des, f):
"""Computes the leg impedance using world coordinate x_des and xd_des.
Args:
q: pinocchio generalized coordinates of robot
dq: pinocchio generalized velocity of robot
kp: (list size 3) P gains for position error.
kd: (list size 3) D gains for velocity error.
x_des: (list size 3) desired endeffector position.
xd_des: (list size 3) desired endeffector velocity.
f: (list size 3) feedforward force to apply at endeffector.
"""
assert (np.shape(x_des) == (3,))
assert (np.shape(xd_des) == (3,))
assert (np.shape(f) == (3,))
assert (np.shape(kp) == (3,))
assert (np.shape(kd) == (3,))
#### Reshaping values to desired shapes
x_des = np.array(x_des)
xd_des = np.array(xd_des)
f = np.array(f)
kp = np.array(kp)
kd = np.array(kd)
#######################################
self.compute_forward_kinematics(q)
jac = self.compute_jacobian(q)
x = self.pin_robot.data.oMf[self.frame_end_idx].translation
xd = jac.dot(dq)
jac = jac[:, self.start_column:self.start_column+3]
# Store force for learning project.
self.F_ = f + kp*(x - x_des) + kd*(xd - xd_des)
tau = -jac.T.dot(self.F_)
return tau
class ImpedanceControllerSolo8(ImpedanceController):
def compute_impedance_torques(self, q, dq, kp, kd, x_des, xd_des, f):
'''
Computes the desired joint torques tau = -Jt * (F + kp(x-x_des) + kd(xd-xd_des))
Inputs:
q = joint angles
dq = joint velocities
kp = proportional gain
kd = derivative gain
x_des = desired [x,y,z] at time t (in the root joint frame)
xd_des = desired velocity of end effector at time t (in the root joint frame)
f = feedforward force to apply at the end effector
'''
assert (np.shape(x_des) == (3,))
assert (np.shape(xd_des) == (3,))
assert (np.shape(f) == (3,))
assert (np.shape(kp) == (3,))
assert (np.shape(kd) == (3,))
#### Reshaping values to desired shapes
x_des = np.array(x_des)
xd_des = np.array(xd_des)
f = np.array(f)
kp = np.array(kp)
kd = np.array(kd)
#######################################
self.compute_forward_kinematics(q)
x = self.compute_distance_between_frames(q)
xd = self.compute_relative_velocity_between_frames(q,dq)
jac = self.compute_jacobian(q)
# Store force for learning project.
self.F_ = f + kp*(x - x_des) + kd*(xd - xd_des)
tau = -jac.T.dot(self.F_)
return tau
def compute_impedance_torques_world(self, q, dq, kp, kd, x_des, xd_des, f):
"""Computes the leg impedance using world coordiante x_des and xd_des.
Args:
q: pinocchio generalized coordiantes of robot
dq: pinocchio generalized velocity of robot
kp: (list size 3) P gains for position error.
kd: (list size 3) D gains for velocity error.
x_des: (list size 3) desired endeffector position size 3.
xd_des: (list size 3) desired endeffector velocity size 3.
f: (list size 3) feedforward force to apply at endeffector.
"""
assert (np.shape(x_des) == (3,))
assert (np.shape(xd_des) == (3,))
assert (np.shape(f) == (3,))
assert (np.shape(kp) == (3,))
assert (np.shape(kd) == (3,))
#### Reshaping values to desired shapes
x_des = np.array(x_des)
xd_des = np.array(xd_des)
f = np.array(f)
kp = np.array(kp)
kd = np.array(kd)
#######################################
self.compute_forward_kinematics(q)
jac = self.compute_jacobian(q)
x = self.pin_robot.data.oMf[self.frame_end_idx].translation
xd = jac.dot(dq)
# Store force for learning project.
self.F_ = f + kp*(x - x_des) + kd*(xd - xd_des)
tau = -jac.T.dot(self.F_)
return tau
|
If $n$ is a positive integer, then $0$ is an $n$th power. |
import PeriodicTable
# Alias to avoid similarity of elements and Element in DFTK module namespace
periodic_table = PeriodicTable.elements
# Data structure for chemical element and the potential model via which
# they interact with electrons. A compensating charge background is
# always assumed. It is assumed that each implementing struct
# defines at least the functions `local_potential_fourier` and `local_potential_real`.
# Very likely `charge_nuclear` and `charge_ionic` need to be defined as well.
abstract type Element end
"""Return the total nuclear charge of an atom type"""
charge_nuclear(::Element) = 0
# This is a fallback implementation that should be altered as needed.
"""Return the total ionic charge of an atom type (nuclear charge - core electrons)"""
charge_ionic(el::Element) = charge_nuclear(el)
"""Return the number of valence electrons"""
n_elec_valence(el::Element) = charge_ionic(el)
"""Return the number of core electrons"""
n_elec_core(el::Element) = charge_nuclear(el) - charge_ionic(el)
"""Radial local potential, in Fourier space: V(q) = int_{R^3} V(x) e^{-iqx} dx."""
function local_potential_fourier(el::Element, q::AbstractVector)
local_potential_fourier(el, norm(q))
end
"""Radial local potential, in real space."""
function local_potential_real(el::Element, r::AbstractVector)
local_potential_real(el, norm(r))
end
struct ElementCoulomb <: Element
Z::Int # Nuclear charge
symbol # Element symbol
end
charge_ionic(el::ElementCoulomb) = el.Z
charge_nuclear(el::ElementCoulomb) = el.Z
"""
Element interacting with electrons via a bare Coulomb potential
(for all-electron calculations)
`key` may be an element symbol (like `:Si`), an atomic number (e.g. `14`)
or an element name (e.g. `"silicon"`)
"""
ElementCoulomb(key) = ElementCoulomb(periodic_table[key].number, Symbol(periodic_table[key].symbol))
function local_potential_fourier(el::ElementCoulomb, q::T) where {T <: Real}
q == 0 && return zero(T) # Compensating charge background
# General atom => Use default Coulomb potential
# We use int_{R^3} -Z/r e^{-i q⋅x} = 4π / |q|^2
return -4T(π) * el.Z / q^2
end
local_potential_real(el::ElementCoulomb, r::Real) = -el.Z / r
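The closed-form transform used here, ∫ (−Z/r) e^{−iq·x} dx = −4πZ/|q|², can be sanity-checked numerically by adding a small Yukawa screening e^{−εr} so the radial integral converges. A rough sketch (Z, q, ε and the grid are arbitrary test choices, not taken from DFTK):

```python
import numpy as np

Z, q, eps = 14.0, 2.0, 1e-2   # arbitrary test values
# Radial form of the 3D Fourier transform, with screening exp(-eps*r):
#   V(q) = int_0^inf 4*pi*r^2 * (-Z/r) * sin(q*r)/(q*r) * exp(-eps*r) dr
r = np.linspace(1e-6, 1500.0, 300_001)
integrand = 4 * np.pi * (-Z) * np.sin(q * r) / q * np.exp(-eps * r)
# Trapezoidal rule, written out so it works on any NumPy version.
V_screened = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(r))
V_exact = -4 * np.pi * Z / q**2   # the eps -> 0 limit used in the Julia code
```

Analytically the screened integral equals −4πZ/(q² + ε²), so the two values agree to a relative error of about ε²/q².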
struct ElementPsp <: Element
Z::Int # Nuclear charge
symbol # Element symbol
psp # Pseudopotential data structure
end
"""
Element interacting with electrons via a pseudopotential model.
`key` may be an element symbol (like `:Si`), an atomic number (e.g. `14`)
or an element name (e.g. `"silicon"`)
"""
function ElementPsp(key; psp)
ElementPsp(periodic_table[key].number, Symbol(periodic_table[key].symbol), psp)
end
charge_ionic(el::ElementPsp) = el.psp.Zion
charge_nuclear(el::ElementPsp) = el.Z
function local_potential_fourier(el::ElementPsp, q::T) where {T <: Real}
q == 0 && return zero(T) # Compensating charge background
eval_psp_local_fourier(el.psp, q)
end
function local_potential_real(el::ElementPsp, r::Real)
# Use local part of pseudopotential defined in Element object
return eval_psp_local_real(el.psp, r)
end
struct ElementCohenBergstresser <: Element
Z::Int # Nuclear charge
symbol # Element symbol
V_sym # Map |G|^2 (in units of (2π / lattice_constant)^2) to form factors
lattice_constant # Lattice constant (in Bohr) assumed for the diamond structure
end
charge_ionic(el::ElementCohenBergstresser) = 2
charge_nuclear(el::ElementCohenBergstresser) = el.Z
"""
Element where the interaction with electrons is modelled
as in [CohenBergstresser1966](https://doi.org/10.1103/PhysRev.141.789).
Only the homonuclear lattices of the diamond structure
are implemented (i.e. Si, Ge, Sn).
`key` may be an element symbol (like `:Si`), an atomic number (e.g. `14`)
or an element name (e.g. `"silicon"`)
"""
function ElementCohenBergstresser(key; lattice_constant=nothing)
# Form factors from Cohen-Bergstresser paper Table 2, converted to Bohr
# Lattice constants from Table 1, converted to Bohr
data = Dict(:Si => (form_factors=Dict( 3 => -0.21u"Ry",
8 => 0.04u"Ry",
11 => 0.08u"Ry"),
lattice_constant=5.43u"Å"),
:Ge => (form_factors=Dict( 3 => -0.23u"Ry",
8 => 0.01u"Ry",
11 => 0.06u"Ry"),
lattice_constant=5.66u"Å"),
:Sn => (form_factors=Dict( 3 => -0.20u"Ry",
8 => 0.00u"Ry",
11 => 0.04u"Ry"),
lattice_constant=6.49u"Å"),
)
symbol = Symbol(periodic_table[key].symbol)
if !(symbol in keys(data))
error("Cohen-Bergstresser potential not implemented for element $symbol.")
end
isnothing(lattice_constant) && (lattice_constant = data[symbol].lattice_constant)
lattice_constant = austrip(lattice_constant)
# Unit-cell volume of the primitive lattice (used in DFTK):
unit_cell_volume = det(lattice_constant / 2 .* [[0 1 1]; [1 0 1]; [1 1 0]])
# The form factors in the Cohen-Bergstresser paper Table 2 are
# with respect to normalized planewaves (i.e. not plain Fourier coefficients)
# and are already symmetrized into a sin-cos basis (see derivation p. 141)
# => Scale by Ω / 2 to get them into the DFTK convention
V_sym = Dict(key => austrip(value) * unit_cell_volume / 2
for (key, value) in pairs(data[symbol].form_factors))
ElementCohenBergstresser(periodic_table[key].number, symbol, V_sym, lattice_constant)
end
function local_potential_fourier(el::ElementCohenBergstresser, q::T) where {T <: Real}
q == 0 && return zero(T) # Compensating charge background
# Get |q|^2 in units of (2π / lattice_constant)^2
qsq_pi = Int(round(q^2 / (2π / el.lattice_constant)^2, digits=2))
T(get(el.V_sym, qsq_pi, 0.0))
end
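The determinant computed for `unit_cell_volume` above is the fcc primitive-cell volume, which for lattice constant a evaluates to a³/4; a quick numerical check (a = 1 is an arbitrary illustrative choice):

```python
import numpy as np

a = 1.0
# fcc primitive lattice vectors as rows, scaled by a/2 as in the Julia code.
lattice = a / 2 * np.array([[0, 1, 1],
                            [1, 0, 1],
                            [1, 1, 0]])
volume = np.linalg.det(lattice)   # expected: a**3 / 4
```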
|
import time
import ast
import matplotlib.pyplot as plt
import numpy as np
with open('C:/Git/dyn_fol_times2', 'r') as f:
data = f.read().split('\n')
times = []
for line in data:
try:
if line == '':
continue
line = '{' + line.replace('one_it', '"one_it"').replace('mpc_id', '"mpc_id"') + '}'
# print(line)
times.append(ast.literal_eval(line))
except (ValueError, SyntaxError):
# Skip malformed lines instead of silently swallowing every error.
pass
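The numpy and matplotlib imports above are never used; a self-contained sketch of what a next step could look like, using made-up sample lines in the same key/value format the loop expects (the field values are hypothetical):

```python
import ast
import numpy as np

# Hypothetical sample lines, already quoted the way the loop rewrites them.
sample = ['"one_it": 0.012, "mpc_id": 0',
          '"one_it": 0.015, "mpc_id": 1']
times = [ast.literal_eval('{' + line + '}') for line in sample]

# Collect one iteration time per record and report the mean in milliseconds.
one_it = np.array([t['one_it'] for t in times])
mean_ms = 1000 * one_it.mean()
```

From here the `times` list built by the script could be plotted with matplotlib or summarized the same way.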
|
/**
* Copyright (c) 2017-present, Facebook, Inc. and its affiliates.
* All rights reserved.
*
* This source code is licensed under the BSD-style license found in the
* LICENSE file in the root directory of this source tree.
*/
#include "logdevice/lib/ClientSettingsImpl.h"
#include <string>
#include <boost/program_options.hpp>
#include "logdevice/common/debug.h"
#include "logdevice/include/Err.h"
#include "logdevice/lib/ClientPluginHelper.h"
namespace facebook { namespace logdevice {
//
// ClientSettings Implementation
//
ClientSettingsImpl::ClientSettingsImpl()
// By default be paranoid and don't crash the process on failed assert.
: settings_(
{{"abort-on-failed-check", folly::kIsDebug ? "true" : "false"}}) {
settings_updater_ = std::make_shared<SettingsUpdater>();
settings_updater_->registerSettings(settings_);
plugin_registry_ =
std::make_shared<PluginRegistry>(getClientPluginProviders());
plugin_registry_->addOptions(settings_updater_.get());
}
int ClientSettingsImpl::set(const char* name, const char* value) {
ld_info("ClientSettingsImpl::set(\"%s\", \"%s\")", name, value);
try {
settings_updater_->setFromClient(name, value);
} catch (const boost::program_options::error& ex) {
using namespace boost::program_options;
err = dynamic_cast<const unknown_option*>(&ex)
? E::UNKNOWN_SETTING
: dynamic_cast<const validation_error*>(&ex) ? E::INVALID_SETTING_VALUE
: E::INVALID_PARAM;
return -1;
}
return 0;
}
}} // namespace facebook::logdevice
|
! Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
! See https://llvm.org/LICENSE.txt for license information.
! SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
! intrinsic declarations in host and internal routine
program pp
integer(kind=4), intrinsic :: len_trim
call ss
contains
integer function kk
integer(kind=4) :: len_trim
len_trim = 3
kk = len_trim
end
subroutine ss
integer(kind=4), intrinsic :: len_trim
character*4 :: s = 'FAIL'
if (len_trim(s) - kk() .eq. 1) then
print*, 'PASS'
else
print*, s
endif
end
end
|
/-
Copyright (c) 2020 Adam Topaz. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Adam Topaz
-/
import algebra.free_algebra
import algebra.ring_quot
import algebra.triv_sq_zero_ext
import algebra.algebra.operations
import linear_algebra.multilinear.basic
/-!
# Tensor Algebras
Given a commutative semiring `R`, and an `R`-module `M`, we construct the tensor algebra of `M`.
This is the free `R`-algebra generated (`R`-linearly) by the module `M`.
## Notation
1. `tensor_algebra R M` is the tensor algebra itself. It is endowed with an R-algebra structure.
2. `tensor_algebra.ι R` is the canonical R-linear map `M → tensor_algebra R M`.
3. Given a linear map `f : M → A` to an R-algebra `A`, `lift R f` is the lift of `f` to an
`R`-algebra morphism `tensor_algebra R M → A`.
## Theorems
1. `ι_comp_lift` states that the composition `(lift R f) ∘ (ι R)` is identical to `f`.
2. `lift_unique` states that whenever an R-algebra morphism `g : tensor_algebra R M → A` is
given whose composition with `ι R` is `f`, then one has `g = lift R f`.
3. `hom_ext` is a variant of `lift_unique` in the form of an extensionality theorem.
4. `lift_comp_ι` is a combination of `ι_comp_lift` and `lift_unique`. It states that the lift
of the composition of an algebra morphism with `ι` is the algebra morphism itself.
## Implementation details
As noted above, the tensor algebra of `M` is constructed as the free `R`-algebra generated by `M`,
modulo the additional relations making the inclusion of `M` into an `R`-linear map.
-/
variables (R : Type*) [comm_semiring R]
variables (M : Type*) [add_comm_monoid M] [module R M]
namespace tensor_algebra
/--
An inductively defined relation on `pre R M` used to force the initial algebra structure on
the associated quotient.
-/
inductive rel : free_algebra R M → free_algebra R M → Prop
-- force `ι` to be linear
| add {a b : M} :
rel (free_algebra.ι R (a+b)) (free_algebra.ι R a + free_algebra.ι R b)
| smul {r : R} {a : M} :
rel (free_algebra.ι R (r • a)) (algebra_map R (free_algebra R M) r * free_algebra.ι R a)
end tensor_algebra
/--
The tensor algebra of the module `M` over the commutative semiring `R`.
-/
@[derive [inhabited, semiring, algebra R]]
def tensor_algebra := ring_quot (tensor_algebra.rel R M)
namespace tensor_algebra
instance {S : Type*} [comm_ring S] [module S M] : ring (tensor_algebra S M) :=
ring_quot.ring (rel S M)
variables {M}
/--
The canonical linear map `M →ₗ[R] tensor_algebra R M`.
-/
@[irreducible] def ι : M →ₗ[R] (tensor_algebra R M) :=
{ to_fun := λ m, (ring_quot.mk_alg_hom R _ (free_algebra.ι R m)),
map_add' := λ x y, by { rw [←alg_hom.map_add], exact ring_quot.mk_alg_hom_rel R rel.add, },
map_smul' := λ r x, by { rw [←alg_hom.map_smul], exact ring_quot.mk_alg_hom_rel R rel.smul, } }
lemma ring_quot_mk_alg_hom_free_algebra_ι_eq_ι (m : M) :
ring_quot.mk_alg_hom R (rel R M) (free_algebra.ι R m) = ι R m :=
by { rw [ι], refl }
/--
Given a linear map `f : M → A` where `A` is an `R`-algebra, `lift R f` is the unique lift
of `f` to a morphism of `R`-algebras `tensor_algebra R M → A`.
-/
@[irreducible, simps symm_apply]
def lift {A : Type*} [semiring A] [algebra R A] : (M →ₗ[R] A) ≃ (tensor_algebra R M →ₐ[R] A) :=
{ to_fun := ring_quot.lift_alg_hom R ∘ λ f,
⟨free_algebra.lift R ⇑f, λ x y (h : rel R M x y), by induction h;
simp only [algebra.smul_def, free_algebra.lift_ι_apply, linear_map.map_smulₛₗ,
ring_hom.id_apply, map_mul, alg_hom.commutes, map_add]⟩,
inv_fun := λ F, F.to_linear_map.comp (ι R),
left_inv := λ f, begin
rw [ι],
ext1 x,
exact (ring_quot.lift_alg_hom_mk_alg_hom_apply _ _ _ _).trans (free_algebra.lift_ι_apply f x),
end,
right_inv := λ F, ring_quot.ring_quot_ext' _ _ _ $ free_algebra.hom_ext $ funext $ λ x, begin
rw [ι],
exact (ring_quot.lift_alg_hom_mk_alg_hom_apply _ _ _ _).trans (free_algebra.lift_ι_apply _ _)
end }
variables {R}
@[simp]
theorem ι_comp_lift {A : Type*} [semiring A] [algebra R A] (f : M →ₗ[R] A) :
(lift R f).to_linear_map.comp (ι R) = f :=
by { convert (lift R).symm_apply_apply f, simp only [lift, equiv.coe_fn_symm_mk] }
@[simp]
theorem lift_ι_apply {A : Type*} [semiring A] [algebra R A] (f : M →ₗ[R] A) (x) :
lift R f (ι R x) = f x :=
by { conv_rhs { rw ← ι_comp_lift f}, refl }
@[simp]
theorem lift_unique {A : Type*} [semiring A] [algebra R A] (f : M →ₗ[R] A)
(g : tensor_algebra R M →ₐ[R] A) : g.to_linear_map.comp (ι R) = f ↔ g = lift R f :=
by { rw ← (lift R).symm_apply_eq, simp only [lift, equiv.coe_fn_symm_mk] }
-- Marking `tensor_algebra` irreducible makes `ring` instances inaccessible on quotients.
-- https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/algebra.2Esemiring_to_ring.20breaks.20semimodule.20typeclass.20lookup/near/212580241
-- For now, we avoid this by not marking it irreducible.
@[simp]
theorem lift_comp_ι {A : Type*} [semiring A] [algebra R A] (g : tensor_algebra R M →ₐ[R] A) :
lift R (g.to_linear_map.comp (ι R)) = g :=
by { rw ←lift_symm_apply, exact (lift R).apply_symm_apply g }
/-- See note [partially-applied ext lemmas]. -/
@[ext]
theorem hom_ext {A : Type*} [semiring A] [algebra R A] {f g : tensor_algebra R M →ₐ[R] A}
(w : f.to_linear_map.comp (ι R) = g.to_linear_map.comp (ι R)) : f = g :=
begin
rw [←lift_symm_apply, ←lift_symm_apply] at w,
exact (lift R).symm.injective w,
end
/-- If `C` holds for the `algebra_map` of `r : R` into `tensor_algebra R M`, the `ι` of `x : M`,
and is preserved under addition and multiplication, then it holds for all of `tensor_algebra R M`.
-/
-- This proof closely follows `free_algebra.induction`
@[elab_as_eliminator]
lemma induction {C : tensor_algebra R M → Prop}
(h_grade0 : ∀ r, C (algebra_map R (tensor_algebra R M) r))
(h_grade1 : ∀ x, C (ι R x))
(h_mul : ∀ a b, C a → C b → C (a * b))
(h_add : ∀ a b, C a → C b → C (a + b))
(a : tensor_algebra R M) :
C a :=
begin
-- the arguments are enough to construct a subalgebra, and a mapping into it from M
let s : subalgebra R (tensor_algebra R M) :=
{ carrier := C,
mul_mem' := h_mul,
add_mem' := h_add,
algebra_map_mem' := h_grade0, },
let of : M →ₗ[R] s := (ι R).cod_restrict s.to_submodule h_grade1,
-- the mapping through the subalgebra is the identity
have of_id : alg_hom.id R (tensor_algebra R M) = s.val.comp (lift R of),
{ ext,
simp [of], },
-- finding a proof is finding an element of the subalgebra
convert subtype.prop (lift R of a),
exact alg_hom.congr_fun of_id a,
end
/-- The left-inverse of `algebra_map`. -/
def algebra_map_inv : tensor_algebra R M →ₐ[R] R :=
lift R (0 : M →ₗ[R] R)
variables (M)
lemma algebra_map_left_inverse :
function.left_inverse algebra_map_inv (algebra_map R $ tensor_algebra R M) :=
λ x, by simp [algebra_map_inv]
@[simp] lemma algebra_map_inj (x y : R) :
algebra_map R (tensor_algebra R M) x = algebra_map R (tensor_algebra R M) y ↔ x = y :=
(algebra_map_left_inverse M).injective.eq_iff
@[simp] lemma algebra_map_eq_zero_iff (x : R) : algebra_map R (tensor_algebra R M) x = 0 ↔ x = 0 :=
map_eq_zero_iff (algebra_map _ _) (algebra_map_left_inverse _).injective
@[simp] lemma algebra_map_eq_one_iff (x : R) : algebra_map R (tensor_algebra R M) x = 1 ↔ x = 1 :=
map_eq_one_iff (algebra_map _ _) (algebra_map_left_inverse _).injective
variables {M}
/-- The canonical map from `tensor_algebra R M` into `triv_sq_zero_ext R M` that sends
`tensor_algebra.ι` to `triv_sq_zero_ext.inr`. -/
def to_triv_sq_zero_ext [module Rᵐᵒᵖ M] [is_central_scalar R M] :
tensor_algebra R M →ₐ[R] triv_sq_zero_ext R M :=
lift R (triv_sq_zero_ext.inr_hom R M)
@[simp] lemma to_triv_sq_zero_ext_ι (x : M) [module Rᵐᵒᵖ M] [is_central_scalar R M] :
to_triv_sq_zero_ext (ι R x) = triv_sq_zero_ext.inr x :=
lift_ι_apply _ _
/-- The left-inverse of `ι`.
As an implementation detail, we implement this using `triv_sq_zero_ext` which has a suitable
algebra structure. -/
def ι_inv : tensor_algebra R M →ₗ[R] M :=
begin
letI : module Rᵐᵒᵖ M := module.comp_hom _ ((ring_hom.id R).from_opposite mul_comm),
haveI : is_central_scalar R M := ⟨λ r m, rfl⟩,
exact (triv_sq_zero_ext.snd_hom R M).comp to_triv_sq_zero_ext.to_linear_map
end
lemma ι_left_inverse : function.left_inverse ι_inv (ι R : M → tensor_algebra R M) :=
λ x, by simp [ι_inv]
variables (R)
@[simp] lemma ι_inj (x y : M) : ι R x = ι R y ↔ x = y :=
ι_left_inverse.injective.eq_iff
@[simp] lemma ι_eq_zero_iff (x : M) : ι R x = 0 ↔ x = 0 :=
by rw [←ι_inj R x 0, linear_map.map_zero]
variables {R}
@[simp] lemma ι_eq_algebra_map_iff (x : M) (r : R) : ι R x = algebra_map R _ r ↔ x = 0 ∧ r = 0 :=
begin
refine ⟨λ h, _, _⟩,
{ letI : module Rᵐᵒᵖ M := module.comp_hom _ ((ring_hom.id R).from_opposite mul_comm),
haveI : is_central_scalar R M := ⟨λ r m, rfl⟩,
have hf0 : to_triv_sq_zero_ext (ι R x) = (0, x), from lift_ι_apply _ _,
rw [h, alg_hom.commutes] at hf0,
have : r = 0 ∧ 0 = x := prod.ext_iff.1 hf0,
exact this.symm.imp_left eq.symm, },
{ rintro ⟨rfl, rfl⟩,
rw [linear_map.map_zero, ring_hom.map_zero] }
end
@[simp] lemma ι_ne_one [nontrivial R] (x : M) : ι R x ≠ 1 :=
begin
rw [←(algebra_map R (tensor_algebra R M)).map_one, ne.def, ι_eq_algebra_map_iff],
exact one_ne_zero ∘ and.right,
end
/-- The generators of the tensor algebra are disjoint from its scalars. -/
variables (R M)
/-- Construct a product of `n` elements of the module within the tensor algebra.
See also `pi_tensor_product.tprod`. -/
def tprod (n : ℕ) : multilinear_map R (λ i : fin n, M) (tensor_algebra R M) :=
(multilinear_map.mk_pi_algebra_fin R n (tensor_algebra R M)).comp_linear_map $ λ _, ι R
@[simp] lemma tprod_apply {n : ℕ} (x : fin n → M) :
tprod R M n x = (list.of_fn (λ i, ι R (x i))).prod := rfl
variables {R M}
end tensor_algebra
namespace free_algebra
variables {R M}
/-- The canonical image of the `free_algebra` in the `tensor_algebra`, which maps
`free_algebra.ι R x` to `tensor_algebra.ι R x`. -/
def to_tensor : free_algebra R M →ₐ[R] tensor_algebra R M :=
free_algebra.lift R (tensor_algebra.ι R)
@[simp] lemma to_tensor_ι (m : M) : (free_algebra.ι R m).to_tensor = tensor_algebra.ι R m :=
by simp [to_tensor]
end free_algebra
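A small usage sketch of the universal property established above (Lean 3 style; the import path is a guess and may differ between mathlib versions):

```lean
import linear_algebra.tensor_algebra.basic  -- hypothetical import path

variables {R : Type*} [comm_semiring R]
variables {M : Type*} [add_comm_monoid M] [module R M]
variables {A : Type*} [semiring A] [algebra R A]

open tensor_algebra

-- `lift` undoes `ι`: the lifted algebra morphism agrees with the
-- original linear map on the image of the canonical inclusion.
example (f : M →ₗ[R] A) (x : M) : lift R f (ι R x) = f x :=
lift_ι_apply f x
```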
|
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
N : Submodule R M
⊢ IsNoetherian R { x // x ∈ N } ↔ ∀ (s : Submodule R M), s ≤ N → Submodule.FG s
[PROOFSTEP]
refine
⟨fun ⟨hn⟩ => fun s hs =>
have : s ≤ LinearMap.range N.subtype := N.range_subtype.symm ▸ hs
Submodule.map_comap_eq_self this ▸ (hn _).map _,
fun h => ⟨fun s => ?_⟩⟩
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
N : Submodule R M
h : ∀ (s : Submodule R M), s ≤ N → Submodule.FG s
s : Submodule R { x // x ∈ N }
⊢ Submodule.FG s
[PROOFSTEP]
have f := (Submodule.equivMapOfInjective N.subtype Subtype.val_injective s).symm
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
N : Submodule R M
h : ∀ (s : Submodule R M), s ≤ N → Submodule.FG s
s : Submodule R { x // x ∈ N }
f : { x // x ∈ Submodule.map (Submodule.subtype N) s } ≃ₗ[R] { x // x ∈ s }
⊢ Submodule.FG s
[PROOFSTEP]
have h₁ := h (s.map N.subtype) (Submodule.map_subtype_le N s)
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
N : Submodule R M
h : ∀ (s : Submodule R M), s ≤ N → Submodule.FG s
s : Submodule R { x // x ∈ N }
f : { x // x ∈ Submodule.map (Submodule.subtype N) s } ≃ₗ[R] { x // x ∈ s }
h₁ : Submodule.FG (Submodule.map (Submodule.subtype N) s)
⊢ Submodule.FG s
[PROOFSTEP]
have h₂ : (⊤ : Submodule R (s.map N.subtype)).map f = ⊤ := by simp
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
N : Submodule R M
h : ∀ (s : Submodule R M), s ≤ N → Submodule.FG s
s : Submodule R { x // x ∈ N }
f : { x // x ∈ Submodule.map (Submodule.subtype N) s } ≃ₗ[R] { x // x ∈ s }
h₁ : Submodule.FG (Submodule.map (Submodule.subtype N) s)
⊢ Submodule.map f ⊤ = ⊤
[PROOFSTEP]
simp
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
N : Submodule R M
h : ∀ (s : Submodule R M), s ≤ N → Submodule.FG s
s : Submodule R { x // x ∈ N }
f : { x // x ∈ Submodule.map (Submodule.subtype N) s } ≃ₗ[R] { x // x ∈ s }
h₁ : Submodule.FG (Submodule.map (Submodule.subtype N) s)
h₂ : Submodule.map f ⊤ = ⊤
⊢ Submodule.FG s
[PROOFSTEP]
have h₃ := ((Submodule.fg_top _).2 h₁).map (↑f : _ →ₗ[R] s)
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
N : Submodule R M
h : ∀ (s : Submodule R M), s ≤ N → Submodule.FG s
s : Submodule R { x // x ∈ N }
f : { x // x ∈ Submodule.map (Submodule.subtype N) s } ≃ₗ[R] { x // x ∈ s }
h₁ : Submodule.FG (Submodule.map (Submodule.subtype N) s)
h₂ : Submodule.map f ⊤ = ⊤
h₃ : Submodule.FG (Submodule.map ↑f ⊤)
⊢ Submodule.FG s
[PROOFSTEP]
exact (Submodule.fg_top _).1 (h₂ ▸ h₃)
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
⊢ IsNoetherian R { x // x ∈ ⊤ } ↔ IsNoetherian R M
[PROOFSTEP]
constructor
[GOAL]
case mp
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
⊢ IsNoetherian R { x // x ∈ ⊤ } → IsNoetherian R M
[PROOFSTEP]
intro h
[GOAL]
case mpr
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
⊢ IsNoetherian R M → IsNoetherian R { x // x ∈ ⊤ }
[PROOFSTEP]
intro h
[GOAL]
case mp
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
h : IsNoetherian R { x // x ∈ ⊤ }
⊢ IsNoetherian R M
[PROOFSTEP]
exact isNoetherian_of_linearEquiv (LinearEquiv.ofTop (⊤ : Submodule R M) rfl)
[GOAL]
case mpr
R : Type u_1
M : Type u_2
P : Type u_3
inst✝⁴ : Semiring R
inst✝³ : AddCommMonoid M
inst✝² : AddCommMonoid P
inst✝¹ : Module R M
inst✝ : Module R P
h : IsNoetherian R M
⊢ IsNoetherian R { x // x ∈ ⊤ }
[PROOFSTEP]
exact isNoetherian_of_linearEquiv (LinearEquiv.ofTop (⊤ : Submodule R M) rfl).symm
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
⊢ IsNoetherian R ((i : ι) → M i)
[PROOFSTEP]
cases nonempty_fintype ι
[GOAL]
case intro
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
⊢ IsNoetherian R ((i : ι) → M i)
[PROOFSTEP]
haveI := Classical.decEq ι
[GOAL]
case intro
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
⊢ IsNoetherian R ((i : ι) → M i)
[PROOFSTEP]
suffices on_finset : ∀ s : Finset ι, IsNoetherian R (∀ i : s, M i)
[GOAL]
case intro
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
on_finset : ∀ (s : Finset ι), IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
⊢ IsNoetherian R ((i : ι) → M i)
[PROOFSTEP]
let coe_e := Equiv.subtypeUnivEquiv <| @Finset.mem_univ ι _
[GOAL]
case intro
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
on_finset : ∀ (s : Finset ι), IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
coe_e : { x // x ∈ Finset.univ } ≃ ι := Equiv.subtypeUnivEquiv (_ : ∀ (x : ι), x ∈ Finset.univ)
⊢ IsNoetherian R ((i : ι) → M i)
[PROOFSTEP]
letI : IsNoetherian R (∀ i : Finset.univ, M (coe_e i)) := on_finset Finset.univ
[GOAL]
case intro
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this✝ : DecidableEq ι
on_finset : ∀ (s : Finset ι), IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
coe_e : { x // x ∈ Finset.univ } ≃ ι := Equiv.subtypeUnivEquiv (_ : ∀ (x : ι), x ∈ Finset.univ)
this : IsNoetherian R ((i : { x // x ∈ Finset.univ }) → M (↑coe_e i)) := on_finset Finset.univ
⊢ IsNoetherian R ((i : ι) → M i)
[PROOFSTEP]
exact isNoetherian_of_linearEquiv (LinearEquiv.piCongrLeft R M coe_e)
[GOAL]
case on_finset
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
⊢ ∀ (s : Finset ι), IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
[PROOFSTEP]
intro s
[GOAL]
case on_finset
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
⊢ IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
[PROOFSTEP]
induction' s using Finset.induction with a s has ih
[GOAL]
case on_finset.empty
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
⊢ IsNoetherian R ((i : { x // x ∈ ∅ }) → M ↑i)
[PROOFSTEP]
exact
⟨fun s => by
have : s = ⊥ := by simp only [eq_iff_true_of_subsingleton]
rw [this]
apply Submodule.fg_bot⟩
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Submodule R ((i : { x // x ∈ ∅ }) → M ↑i)
⊢ Submodule.FG s
[PROOFSTEP]
have : s = ⊥ := by simp only [eq_iff_true_of_subsingleton]
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Submodule R ((i : { x // x ∈ ∅ }) → M ↑i)
⊢ s = ⊥
[PROOFSTEP]
simp only [eq_iff_true_of_subsingleton]
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this✝ : DecidableEq ι
s : Submodule R ((i : { x // x ∈ ∅ }) → M ↑i)
this : s = ⊥
⊢ Submodule.FG s
[PROOFSTEP]
rw [this]
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this✝ : DecidableEq ι
s : Submodule R ((i : { x // x ∈ ∅ }) → M ↑i)
this : s = ⊥
⊢ Submodule.FG ⊥
[PROOFSTEP]
apply Submodule.fg_bot
[GOAL]
case on_finset.insert
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
⊢ IsNoetherian R ((i : { x // x ∈ insert a s }) → M ↑i)
[PROOFSTEP]
refine
@isNoetherian_of_linearEquiv R (M a × ((i : s) → M i)) _ _ _ _ _ _ ?_ <| @isNoetherian_prod R (M a) _ _ _ _ _ _ _ ih
[GOAL]
case on_finset.insert
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
⊢ (M a × ((i : { x // x ∈ s }) → M ↑i)) ≃ₗ[R] (i : { x // x ∈ insert a s }) → M ↑i
[PROOFSTEP]
refine
{ toFun := fun f i =>
(Finset.mem_insert.1 i.2).by_cases (fun h : i.1 = a => show M i.1 from Eq.recOn h.symm f.1)
(fun h : i.1 ∈ s => show M i.1 from f.2 ⟨i.1, h⟩),
invFun := fun f => (f ⟨a, Finset.mem_insert_self _ _⟩, fun i => f ⟨i.1, Finset.mem_insert_of_mem i.2⟩),
      map_add' := ?_, map_smul' := ?_,
      left_inv := ?_, right_inv := ?_ }
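-- Annotation (not part of the original tactic script): the forward map sends a pair
-- `(x, f)` to the function on `insert a s` returning `x` at `a` (via `Eq.recOn` to
-- rewrite the type) and `f ⟨i, h⟩` on `s`; the inverse evaluates at `a` and restricts
-- to `s`. The four `?_` goals are additivity, `R`-linearity, and the two inverse laws.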
[GOAL]
case on_finset.insert.refine_1
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
⊢ ∀ (x y : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(x + y) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
x +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
y
[PROOFSTEP]
intro f g
[GOAL]
case on_finset.insert.refine_1
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f g : M a × ((i : { x // x ∈ s }) → M ↑i)
⊢ (fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g
[PROOFSTEP]
ext i
[GOAL]
case on_finset.insert.refine_1.h
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f g : M a × ((i : { x // x ∈ s }) → M ↑i)
i : { x // x ∈ insert a s }
⊢ (fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) i =
((fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g)
i
[PROOFSTEP]
unfold Or.by_cases
[GOAL]
case on_finset.insert.refine_1.h
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f g : M a × ((i : { x // x ∈ s }) → M ↑i)
i : { x // x ∈ insert a s }
⊢ (fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) i =
((fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g)
i
[PROOFSTEP]
cases' i with i hi
[GOAL]
case on_finset.insert.refine_1.h.mk
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f g : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
⊢ (fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) { val := i, property := hi } =
((fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g)
{ val := i, property := hi }
[PROOFSTEP]
rcases Finset.mem_insert.1 hi with (rfl | h)
[GOAL]
case on_finset.insert.refine_1.h.mk.inl
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
i : ι
has : ¬i ∈ s
f g : M i × ((i : { x // x ∈ s }) → M ↑i)
hi : i ∈ insert i s
⊢ (fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
(f + g) { val := i, property := hi } =
((fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
f +
(fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
g)
{ val := i, property := hi }
[PROOFSTEP]
change _ = _ + _
[GOAL]
case on_finset.insert.refine_1.h.mk.inl
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
i : ι
has : ¬i ∈ s
f g : M i × ((i : { x // x ∈ s }) → M ↑i)
hi : i ∈ insert i s
⊢ (fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
(f + g) { val := i, property := hi } =
(fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
f { val := i, property := hi } +
(fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
g { val := i, property := hi }
[PROOFSTEP]
simp only [dif_pos]
[GOAL]
case on_finset.insert.refine_1.h.mk.inl
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
i : ι
has : ¬i ∈ s
f g : M i × ((i : { x // x ∈ s }) → M ↑i)
hi : i ∈ insert i s
⊢ (f + g).fst = f.fst + g.fst
[PROOFSTEP]
rfl
[GOAL]
case on_finset.insert.refine_1.h.mk.inr
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f g : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
h : i ∈ s
⊢ (fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) { val := i, property := hi } =
((fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g)
{ val := i, property := hi }
[PROOFSTEP]
change _ = _ + _
[GOAL]
case on_finset.insert.refine_1.h.mk.inr
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f g : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
h : i ∈ s
⊢ (fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) { val := i, property := hi } =
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f { val := i, property := hi } +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g { val := i, property := hi }
[PROOFSTEP]
have : ¬i = a := by
rintro rfl
exact has h
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f g : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
h : i ∈ s
⊢ ¬i = a
[PROOFSTEP]
rintro rfl
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
i : ι
h : i ∈ s
has : ¬i ∈ s
f g : M i × ((i : { x // x ∈ s }) → M ↑i)
hi : i ∈ insert i s
⊢ False
[PROOFSTEP]
exact has h
[GOAL]
case on_finset.insert.refine_1.h.mk.inr
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this✝ : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f g : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
h : i ∈ s
this : ¬i = a
⊢ (fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) { val := i, property := hi } =
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f { val := i, property := hi } +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g { val := i, property := hi }
[PROOFSTEP]
simp only [dif_neg this, dif_pos h]
[GOAL]
case on_finset.insert.refine_1.h.mk.inr
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this✝ : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f g : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
h : i ∈ s
this : ¬i = a
⊢ Prod.snd (f + g) { val := i, property := (_ : i ∈ s) } =
Prod.snd f { val := i, property := (_ : i ∈ s) } + Prod.snd g { val := i, property := (_ : i ∈ s) }
[PROOFSTEP]
rfl
[GOAL]
case on_finset.insert.refine_2
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
⊢ ∀ (r : R) (x : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(r • x) =
↑(RingHom.id R) r •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
x
[PROOFSTEP]
intro c f
[GOAL]
case on_finset.insert.refine_2
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
f : M a × ((i : { x // x ∈ s }) → M ↑i)
⊢ AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f
[PROOFSTEP]
ext i
[GOAL]
case on_finset.insert.refine_2.h
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : { x // x ∈ insert a s }
⊢ AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) i =
(↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f)
i
[PROOFSTEP]
unfold Or.by_cases
[GOAL]
case on_finset.insert.refine_2.h
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : { x // x ∈ insert a s }
⊢ AddHom.toFun
{
toFun := fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s),
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) =
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g) }
(c • f) i =
(↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s),
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) =
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g) }
f)
i
[PROOFSTEP]
cases' i with i hi
[GOAL]
case on_finset.insert.refine_2.h.mk
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
⊢ AddHom.toFun
{
toFun := fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s),
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) =
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g) }
(c • f) { val := i, property := hi } =
(↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s),
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) =
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g) }
f)
{ val := i, property := hi }
[PROOFSTEP]
rcases Finset.mem_insert.1 hi with (rfl | h)
[GOAL]
case on_finset.insert.refine_2.h.mk.inl
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
i : ι
has : ¬i ∈ s
f : M i × ((i : { x // x ∈ s }) → M ↑i)
hi : i ∈ insert i s
⊢ AddHom.toFun
{
toFun := fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s),
map_add' :=
(_ :
∀ (f g : M i × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
(f + g) =
(fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
f +
(fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
g) }
(c • f) { val := i, property := hi } =
(↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s),
map_add' :=
(_ :
∀ (f g : M i × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
(f + g) =
(fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
f +
(fun f i_1 =>
if hp : ↑i_1 = i then
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(_ : ↑i_1 ∈ s))
g) }
f)
{ val := i, property := hi }
[PROOFSTEP]
dsimp
[GOAL]
case on_finset.insert.refine_2.h.mk.inl
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
i : ι
has : ¬i ∈ s
f : M i × ((i : { x // x ∈ s }) → M ↑i)
hi : i ∈ insert i s
⊢ (if hp : i = i then c • f.fst else c • Prod.snd f { val := i, property := (_ : i ∈ s) }) =
c • if hp : i = i then f.fst else Prod.snd f { val := i, property := (_ : i ∈ s) }
[PROOFSTEP]
simp only [dif_pos]
[GOAL]
case on_finset.insert.refine_2.h.mk.inr
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
h : i ∈ s
⊢ AddHom.toFun
{
toFun := fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s),
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) =
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g) }
(c • f) { val := i, property := hi } =
(↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s),
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
(f + g) =
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
f +
(fun f i =>
if hp : ↑i = a then
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
hp
else
(fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(_ : ↑i ∈ s))
g) }
f)
{ val := i, property := hi }
[PROOFSTEP]
dsimp
[GOAL]
case on_finset.insert.refine_2.h.mk.inr
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
h : i ∈ s
⊢ (if hp : i = a then (_ : a = i) ▸ (c • f.fst) else c • Prod.snd f { val := i, property := (_ : i ∈ s) }) =
c • if hp : i = a then (_ : a = i) ▸ f.fst else Prod.snd f { val := i, property := (_ : i ∈ s) }
[PROOFSTEP]
have : ¬i = a := by
rintro rfl
exact has h
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
h : i ∈ s
⊢ ¬i = a
[PROOFSTEP]
rintro rfl
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
i : ι
h : i ∈ s
has : ¬i ∈ s
f : M i × ((i : { x // x ∈ s }) → M ↑i)
hi : i ∈ insert i s
⊢ False
[PROOFSTEP]
exact has h
[GOAL]
case on_finset.insert.refine_2.h.mk.inr
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this✝ : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
c : R
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
hi : i ∈ insert a s
h : i ∈ s
this : ¬i = a
⊢ (if hp : i = a then (_ : a = i) ▸ (c • f.fst) else c • Prod.snd f { val := i, property := (_ : i ∈ s) }) =
c • if hp : i = a then (_ : a = i) ▸ f.fst else Prod.snd f { val := i, property := (_ : i ∈ s) }
[PROOFSTEP]
simp only [dif_neg this, dif_pos h]
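
The case analysis above (splitting on whether the index `i` equals the newly inserted element `a`) discharges the scalar-multiplication compatibility of the equivalence built in this trace. For orientation, the overall statement these steps are proving corresponds to mathlib's `isNoetherian_pi`; a hedged sketch of its signature (names as in mathlib4, reconstructed from the hypotheses visible in the goal states, not a full proof):

```lean
-- Sketch of the statement the surrounding tactic states are proving:
-- a finite product of Noetherian R-modules is Noetherian.
-- The instance arguments below mirror the inst✝ hypotheses in the goals.
theorem isNoetherian_pi' {R ι : Type*} {M : ι → Type*} [Ring R]
    [∀ i, AddCommGroup (M i)] [∀ i, Module R (M i)] [Finite ι]
    [∀ i, IsNoetherian R (M i)] : IsNoetherian R (∀ i, M i) :=
  -- In mathlib this is `isNoetherian_pi`; the proof proceeds by
  -- Finset induction, splitting `∀ i ∈ insert a s, M i` as
  -- `M a × (∀ i ∈ s, M i)` via the equivalence constructed above.
  isNoetherian_pi
```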
[GOAL]
case on_finset.insert.refine_3
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
⊢ Function.LeftInverse
(fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i => f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom.toFun
[PROOFSTEP]
intro f
[GOAL]
case on_finset.insert.refine_3
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : M a × ((i : { x // x ∈ s }) → M ↑i)
⊢ (fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i => f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
(AddHom.toFun
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom
f) =
f
[PROOFSTEP]
apply Prod.ext
[GOAL]
case on_finset.insert.refine_3.h₁
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : M a × ((i : { x // x ∈ s }) → M ↑i)
⊢ ((fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i =>
f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
(AddHom.toFun
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom
f)).fst =
f.fst
[PROOFSTEP]
simp only [Or.by_cases, dif_pos]
[GOAL]
case on_finset.insert.refine_3.h₂
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : M a × ((i : { x // x ∈ s }) → M ↑i)
⊢ ((fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i =>
f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
(AddHom.toFun
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom
f)).snd =
f.snd
[PROOFSTEP]
ext ⟨i, his⟩
[GOAL]
case on_finset.insert.refine_3.h₂.h.mk
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
his : i ∈ s
⊢ Prod.snd
((fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i =>
f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
(AddHom.toFun
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom
f))
{ val := i, property := his } =
Prod.snd f { val := i, property := his }
[PROOFSTEP]
have : ¬i = a := by
rintro rfl
exact has his
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
his : i ∈ s
⊢ ¬i = a
[PROOFSTEP]
rintro rfl
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
i : ι
his : i ∈ s
has : ¬i ∈ s
f : M i × ((i : { x // x ∈ s }) → M ↑i)
⊢ False
[PROOFSTEP]
exact has his
[GOAL]
case on_finset.insert.refine_3.h₂.h.mk
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this✝ : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : M a × ((i : { x // x ∈ s }) → M ↑i)
i : ι
his : i ∈ s
this : ¬i = a
⊢ Prod.snd
((fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i =>
f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
(AddHom.toFun
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom
f))
{ val := i, property := his } =
Prod.snd f { val := i, property := his }
[PROOFSTEP]
simp only [Or.by_cases, this, not_false_iff, dif_neg]
[GOAL]
case on_finset.insert.refine_4
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
⊢ Function.RightInverse
(fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i => f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom.toFun
[PROOFSTEP]
intro f
[GOAL]
case on_finset.insert.refine_4
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : (i : { x // x ∈ insert a s }) → M ↑i
⊢ AddHom.toFun
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom
((fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i =>
f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
f) =
f
[PROOFSTEP]
ext ⟨i, hi⟩
[GOAL]
case on_finset.insert.refine_4.h.mk
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : (i : { x // x ∈ insert a s }) → M ↑i
i : ι
hi : i ∈ insert a s
⊢ AddHom.toFun
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom
((fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i =>
f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
f)
{ val := i, property := hi } =
f { val := i, property := hi }
[PROOFSTEP]
rcases Finset.mem_insert.1 hi with (rfl | h)
[GOAL]
case on_finset.insert.refine_4.h.mk.inl
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
i : ι
has : ¬i ∈ s
f : (i_1 : { x // x ∈ insert i s }) → M ↑i_1
hi : i ∈ insert i s
⊢ AddHom.toFun
{
toAddHom :=
{
toFun := fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this,
map_add' :=
(_ :
∀ (f g : M i × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(f + g) =
(fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
f +
(fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M i × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this,
map_add' :=
(_ :
∀ (f g : M i × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(f + g) =
(fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
f +
(fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this,
map_add' :=
(_ :
∀ (f g : M i × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
(f + g) =
(fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
f +
(fun f i_1 =>
Or.by_cases (_ : ↑i_1 = i ∨ ↑i_1 ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : i = ↑i_1) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i_1, property := h };
this)
g) }
f) }.toAddHom
((fun f =>
(f { val := i, property := (_ : i ∈ insert i s) }, fun i_1 =>
f { val := ↑i_1, property := (_ : ↑i_1 ∈ insert i s) }))
f)
{ val := i, property := hi } =
f { val := i, property := hi }
[PROOFSTEP]
simp only [Or.by_cases, dif_pos]
[GOAL]
case on_finset.insert.refine_4.h.mk.inr
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : (i : { x // x ∈ insert a s }) → M ↑i
i : ι
hi : i ∈ insert a s
h : i ∈ s
⊢ AddHom.toFun
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom
((fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i =>
f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
f)
{ val := i, property := hi } =
f { val := i, property := hi }
[PROOFSTEP]
have : ¬i = a := by
rintro rfl
exact has h
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : (i : { x // x ∈ insert a s }) → M ↑i
i : ι
hi : i ∈ insert a s
h : i ∈ s
⊢ ¬i = a
[PROOFSTEP]
rintro rfl
[GOAL]
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this : DecidableEq ι
s : Finset ι
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
i : ι
h : i ∈ s
has : ¬i ∈ s
f : (i_1 : { x // x ∈ insert i s }) → M ↑i_1
hi : i ∈ insert i s
⊢ False
[PROOFSTEP]
exact has h
[GOAL]
case on_finset.insert.refine_4.h.mk.inr
R✝ : Type u_1
M✝ : Type u_2
P : Type u_3
inst✝⁹ : Ring R✝
inst✝⁸ : AddCommGroup M✝
inst✝⁷ : AddCommGroup P
inst✝⁶ : Module R✝ M✝
inst✝⁵ : Module R✝ P
R : Type u_4
ι : Type u_5
M : ι → Type u_6
inst✝⁴ : Ring R
inst✝³ : (i : ι) → AddCommGroup (M i)
inst✝² : (i : ι) → Module R (M i)
inst✝¹ : Finite ι
inst✝ : ∀ (i : ι), IsNoetherian R (M i)
val✝ : Fintype ι
this✝ : DecidableEq ι
a : ι
s : Finset ι
has : ¬a ∈ s
ih : IsNoetherian R ((i : { x // x ∈ s }) → M ↑i)
f : (i : { x // x ∈ insert a s }) → M ↑i
i : ι
hi : i ∈ insert a s
h : i ∈ s
this : ¬i = a
⊢ AddHom.toFun
{
toAddHom :=
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) },
map_smul' :=
(_ :
∀ (c : R) (f : M a × ((i : { x // x ∈ s }) → M ↑i)),
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this,
map_add' :=
(_ :
∀ (f g : M a × ((i : { x // x ∈ s }) → M ↑i)),
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
(f + g) =
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
f +
(fun f i =>
Or.by_cases (_ : ↑i = a ∨ ↑i ∈ s)
(fun h =>
let_fun this := Eq.recOn (_ : a = ↑i) f.fst;
this)
fun h =>
let_fun this := Prod.snd f { val := ↑i, property := h };
this)
g) }
f) }.toAddHom
((fun f =>
(f { val := a, property := (_ : a ∈ insert a s) }, fun i =>
f { val := ↑i, property := (_ : ↑i ∈ insert a s) }))
f)
{ val := i, property := hi } =
f { val := i, property := hi }
[PROOFSTEP]
simp only [Or.by_cases, dif_neg this, dif_pos h]
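The case split above closes the induction showing that a finite product of Noetherian modules is Noetherian. A minimal sketch of invoking the resulting instance (the instance name `isNoetherian_pi` is an assumption about Mathlib's naming):

```lean
import Mathlib.RingTheory.Noetherian

-- Sketch: the finite-product instance established by the induction above.
-- `isNoetherian_pi` is the assumed Mathlib instance; typeclass search
-- (`inferInstance`) is expected to find it under these hypotheses.
example {R ι : Type*} {M : ι → Type*} [Ring R]
    [∀ i, AddCommGroup (M i)] [∀ i, Module R (M i)]
    [Finite ι] [∀ i, IsNoetherian R (M i)] :
    IsNoetherian R (∀ i, M i) :=
  inferInstance
```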
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
⊢ IsNoetherian R M ↔ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
have :=
(CompleteLattice.wellFounded_characterisations <| Submodule R M).out 0
3
-- Porting note: inlining this makes rw complain about it being a metavariable
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
this : (WellFounded fun x x_1 => x > x_1) ↔ ∀ (k : Submodule R M), CompleteLattice.IsCompactElement k
⊢ IsNoetherian R M ↔ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
rw [this]
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
this : (WellFounded fun x x_1 => x > x_1) ↔ ∀ (k : Submodule R M), CompleteLattice.IsCompactElement k
⊢ IsNoetherian R M ↔ ∀ (k : Submodule R M), CompleteLattice.IsCompactElement k
[PROOFSTEP]
exact ⟨fun ⟨h⟩ => fun k => (fg_iff_compact k).mp (h k), fun h => ⟨fun k => (fg_iff_compact k).mpr (h k)⟩⟩
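With the `fg_iff_compact` equivalence closed above, `IsNoetherian R M` is available as a typeclass whose defining field gives finite generation of every submodule. A hedged usage sketch (assuming the field name `IsNoetherian.noetherian`):

```lean
import Mathlib.RingTheory.Noetherian

-- Sketch: once `IsNoetherian R M` holds, every submodule is finitely
-- generated via the structure field (assumed name: `IsNoetherian.noetherian`).
example {R M : Type*} [Semiring R] [AddCommMonoid M] [Module R M]
    [IsNoetherian R M] (N : Submodule R M) : N.FG :=
  IsNoetherian.noetherian N
```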
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
⊢ IsNoetherian R M ↔ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
let α := { N : Submodule R M // N.FG }
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
⊢ IsNoetherian R M ↔ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
constructor
[GOAL]
case mp
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
⊢ IsNoetherian R M → WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
intro H
[GOAL]
case mp
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : IsNoetherian R M
⊢ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
let f : α ↪o Submodule R M := OrderEmbedding.subtype _
[GOAL]
case mp
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : IsNoetherian R M
f : α ↪o Submodule R M := OrderEmbedding.subtype fun N => FG N
⊢ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
exact OrderEmbedding.wellFounded f.dual (isNoetherian_iff_wellFounded.mp H)
[GOAL]
case mpr
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
⊢ (WellFounded fun x x_1 => x > x_1) → IsNoetherian R M
[PROOFSTEP]
intro H
[GOAL]
case mpr
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
⊢ IsNoetherian R M
[PROOFSTEP]
constructor
[GOAL]
case mpr.noetherian
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
⊢ ∀ (s : Submodule R M), FG s
[PROOFSTEP]
intro N
[GOAL]
case mpr.noetherian
R : Type u_1
M : Type u_2
P : Type u_3
N✝ : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N✝
inst✝² : Module R N✝
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
N : Submodule R M
⊢ FG N
[PROOFSTEP]
obtain ⟨⟨N₀, h₁⟩, e : N₀ ≤ N, h₂⟩ := WellFounded.has_min H {N' : α | N'.1 ≤ N} ⟨⟨⊥, Submodule.fg_bot⟩, @bot_le _ _ _ N⟩
[GOAL]
case mpr.noetherian.intro.mk.intro
R : Type u_1
M : Type u_2
P : Type u_3
N✝ : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N✝
inst✝² : Module R N✝
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
N N₀ : Submodule R M
h₁ : FG N₀
e : N₀ ≤ N
h₂ : ∀ (x : { N // FG N }), x ∈ {N' | ↑N' ≤ N} → ¬x > { val := N₀, property := h₁ }
⊢ FG N
[PROOFSTEP]
convert h₁
[GOAL]
case h.e'_6
R : Type u_1
M : Type u_2
P : Type u_3
N✝ : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N✝
inst✝² : Module R N✝
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
N N₀ : Submodule R M
h₁ : FG N₀
e : N₀ ≤ N
h₂ : ∀ (x : { N // FG N }), x ∈ {N' | ↑N' ≤ N} → ¬x > { val := N₀, property := h₁ }
⊢ N = N₀
[PROOFSTEP]
refine' (e.antisymm _).symm
[GOAL]
case h.e'_6
R : Type u_1
M : Type u_2
P : Type u_3
N✝ : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N✝
inst✝² : Module R N✝
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
N N₀ : Submodule R M
h₁ : FG N₀
e : N₀ ≤ N
h₂ : ∀ (x : { N // FG N }), x ∈ {N' | ↑N' ≤ N} → ¬x > { val := N₀, property := h₁ }
⊢ N ≤ N₀
[PROOFSTEP]
by_contra h₃
[GOAL]
case h.e'_6
R : Type u_1
M : Type u_2
P : Type u_3
N✝ : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N✝
inst✝² : Module R N✝
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
N N₀ : Submodule R M
h₁ : FG N₀
e : N₀ ≤ N
h₂ : ∀ (x : { N // FG N }), x ∈ {N' | ↑N' ≤ N} → ¬x > { val := N₀, property := h₁ }
h₃ : ¬N ≤ N₀
⊢ False
[PROOFSTEP]
obtain ⟨x, hx₁ : x ∈ N, hx₂ : x ∉ N₀⟩ := Set.not_subset.mp h₃
[GOAL]
case h.e'_6.intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N✝ : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N✝
inst✝² : Module R N✝
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
N N₀ : Submodule R M
h₁ : FG N₀
e : N₀ ≤ N
h₂ : ∀ (x : { N // FG N }), x ∈ {N' | ↑N' ≤ N} → ¬x > { val := N₀, property := h₁ }
h₃ : ¬N ≤ N₀
x : M
hx₁ : x ∈ N
hx₂ : ¬x ∈ N₀
⊢ False
[PROOFSTEP]
apply hx₂
[GOAL]
case h.e'_6.intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N✝ : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N✝
inst✝² : Module R N✝
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
N N₀ : Submodule R M
h₁ : FG N₀
e : N₀ ≤ N
h₂ : ∀ (x : { N // FG N }), x ∈ {N' | ↑N' ≤ N} → ¬x > { val := N₀, property := h₁ }
h₃ : ¬N ≤ N₀
x : M
hx₁ : x ∈ N
hx₂ : ¬x ∈ N₀
⊢ x ∈ N₀
[PROOFSTEP]
rw [eq_of_le_of_not_lt (le_sup_right : N₀ ≤ _)
(h₂ ⟨_, Submodule.FG.sup ⟨{ x }, by rw [Finset.coe_singleton]⟩ h₁⟩ <|
sup_le ((Submodule.span_singleton_le_iff_mem _ _).mpr hx₁) e)]
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N✝ : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N✝
inst✝² : Module R N✝
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
N N₀ : Submodule R M
h₁ : FG N₀
e : N₀ ≤ N
h₂ : ∀ (x : { N // FG N }), x ∈ {N' | ↑N' ≤ N} → ¬x > { val := N₀, property := h₁ }
h₃ : ¬N ≤ N₀
x : M
hx₁ : x ∈ N
hx₂ : ¬x ∈ N₀
⊢ span R ↑{x} = span R {x}
[PROOFSTEP]
rw [Finset.coe_singleton]
[GOAL]
case h.e'_6.intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N✝ : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N✝
inst✝² : Module R N✝
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
α : Type u_2 := { N // FG N }
H : WellFounded fun x x_1 => x > x_1
N N₀ : Submodule R M
h₁ : FG N₀
e : N₀ ≤ N
h₂ : ∀ (x : { N // FG N }), x ∈ {N' | ↑N' ≤ N} → ¬x > { val := N₀, property := h₁ }
h₃ : ¬N ≤ N₀
x : M
hx₁ : x ∈ N
hx₂ : ¬x ∈ N₀
⊢ x ∈ span R {x} ⊔ N₀
[PROOFSTEP]
exact (le_sup_left : (R ∙ x) ≤ _) (Submodule.mem_span_singleton_self _)
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
⊢ (∀ (a : Set (Submodule R M)), Set.Nonempty a → ∃ M', M' ∈ a ∧ ∀ (I : Submodule R M), I ∈ a → ¬M' < I) ↔
IsNoetherian R M
[PROOFSTEP]
rw [isNoetherian_iff_wellFounded, WellFounded.wellFounded_iff_has_min]
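The one-line proof above characterises Noetherian modules by the existence of maximal elements in nonempty families of submodules. Restated as a standalone sketch using the same two rewrites as the trace:

```lean
import Mathlib.RingTheory.Noetherian

-- Sketch of the maximal-element characterisation closed in the step above.
example {R M : Type*} [Semiring R] [AddCommMonoid M] [Module R M] :
    (∀ a : Set (Submodule R M), a.Nonempty →
        ∃ M' ∈ a, ∀ I ∈ a, ¬M' < I) ↔ IsNoetherian R M := by
  rw [isNoetherian_iff_wellFounded, WellFounded.wellFounded_iff_has_min]
```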
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Semiring R
inst✝⁵ : AddCommMonoid M
inst✝⁴ : Module R M
inst✝³ : AddCommMonoid N
inst✝² : Module R N
inst✝¹ : AddCommMonoid P
inst✝ : Module R P
⊢ (∀ (f : ℕ →o Submodule R M), ∃ n, ∀ (m : ℕ), n ≤ m → ↑f n = ↑f m) ↔ IsNoetherian R M
[PROOFSTEP]
rw [isNoetherian_iff_wellFounded, WellFounded.monotone_chain_condition]
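Similarly, the ascending chain condition formulation just proved can be restated as a self-contained sketch, mirroring the trace's rewrites exactly:

```lean
import Mathlib.RingTheory.Noetherian

-- Sketch: every monotone ℕ-indexed chain of submodules stabilises
-- iff the module is Noetherian, via the same rewrites as the trace.
example {R M : Type*} [Semiring R] [AddCommMonoid M] [Module R M] :
    (∀ f : ℕ →o Submodule R M, ∃ n, ∀ m, n ≤ m → f n = f m) ↔
      IsNoetherian R M := by
  rw [isNoetherian_iff_wellFounded, WellFounded.monotone_chain_condition]
```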
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
⊢ Set.Finite s
[PROOFSTEP]
refine'
by_contradiction fun hf => (RelEmbedding.wellFounded_iff_no_descending_seq.1 (wellFounded_submodule_gt R M)).elim' _
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
⊢ (fun x x_1 => x > x_1) ↪r fun x x_1 => x > x_1
[PROOFSTEP]
have f : ℕ ↪ s := Set.Infinite.natEmbedding s hf
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
⊢ (fun x x_1 => x > x_1) ↪r fun x x_1 => x > x_1
[PROOFSTEP]
have : ∀ n, (↑) ∘ f '' {m | m ≤ n} ⊆ s := by
rintro n x ⟨y, _, rfl⟩
exact (f y).2
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
⊢ ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
[PROOFSTEP]
rintro n x ⟨y, _, rfl⟩
[GOAL]
case intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
n y : ℕ
left✝ : y ∈ {m | m ≤ n}
⊢ (Subtype.val ∘ ↑f) y ∈ s
[PROOFSTEP]
exact (f y).2
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
⊢ (fun x x_1 => x > x_1) ↪r fun x x_1 => x > x_1
[PROOFSTEP]
let coe' : s → M := (↑)
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
⊢ (fun x x_1 => x > x_1) ↪r fun x x_1 => x > x_1
[PROOFSTEP]
have : ∀ a b : ℕ, a ≤ b ↔ span R (coe' ∘ f '' {m | m ≤ a}) ≤ span R ((↑) ∘ f '' {m | m ≤ b}) :=
by
intro a b
rw [span_le_span_iff hs (this a) (this b), Set.image_subset_image_iff (Subtype.coe_injective.comp f.injective),
Set.subset_def]
exact ⟨fun hab x (hxa : x ≤ a) => le_trans hxa hab, fun hx => hx a (le_refl a)⟩
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
⊢ ∀ (a b : ℕ), a ≤ b ↔ span R (coe' ∘ ↑f '' {m | m ≤ a}) ≤ span R (Subtype.val ∘ ↑f '' {m | m ≤ b})
[PROOFSTEP]
intro a b
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
a b : ℕ
⊢ a ≤ b ↔ span R (coe' ∘ ↑f '' {m | m ≤ a}) ≤ span R (Subtype.val ∘ ↑f '' {m | m ≤ b})
[PROOFSTEP]
rw [span_le_span_iff hs (this a) (this b), Set.image_subset_image_iff (Subtype.coe_injective.comp f.injective),
Set.subset_def]
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
a b : ℕ
⊢ a ≤ b ↔ ∀ (x : ℕ), x ∈ {m | m ≤ a} → x ∈ {m | m ≤ b}
[PROOFSTEP]
exact ⟨fun hab x (hxa : x ≤ a) => le_trans hxa hab, fun hx => hx a (le_refl a)⟩
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this✝ : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
this : ∀ (a b : ℕ), a ≤ b ↔ span R (coe' ∘ ↑f '' {m | m ≤ a}) ≤ span R (Subtype.val ∘ ↑f '' {m | m ≤ b})
⊢ (fun x x_1 => x > x_1) ↪r fun x x_1 => x > x_1
[PROOFSTEP]
exact
⟨⟨fun n => span R (coe' ∘ f '' {m | m ≤ n}), fun x y =>
by
rw [le_antisymm_iff, (this x y).symm, (this y x).symm, ← le_antisymm_iff, imp_self]
trivial⟩,
by dsimp [GT.gt]; simp only [lt_iff_le_not_le, (this _ _).symm]; tauto⟩
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this✝ : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
this : ∀ (a b : ℕ), a ≤ b ↔ span R (coe' ∘ ↑f '' {m | m ≤ a}) ≤ span R (Subtype.val ∘ ↑f '' {m | m ≤ b})
x y : ℕ
⊢ (fun n => span R (coe' ∘ ↑f '' {m | m ≤ n})) x = (fun n => span R (coe' ∘ ↑f '' {m | m ≤ n})) y → x = y
[PROOFSTEP]
rw [le_antisymm_iff, (this x y).symm, (this y x).symm, ← le_antisymm_iff, imp_self]
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this✝ : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
this : ∀ (a b : ℕ), a ≤ b ↔ span R (coe' ∘ ↑f '' {m | m ≤ a}) ≤ span R (Subtype.val ∘ ↑f '' {m | m ≤ b})
x y : ℕ
⊢ True
[PROOFSTEP]
trivial
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this✝ : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
this : ∀ (a b : ℕ), a ≤ b ↔ span R (coe' ∘ ↑f '' {m | m ≤ a}) ≤ span R (Subtype.val ∘ ↑f '' {m | m ≤ b})
⊢ ∀ {a b : ℕ},
↑{ toFun := fun n => span R (coe' ∘ ↑f '' {m | m ≤ n}),
inj' :=
(_ :
∀ (x y : ℕ),
(fun n => span R (coe' ∘ ↑f '' {m | m ≤ n})) x = (fun n => span R (coe' ∘ ↑f '' {m | m ≤ n})) y →
x = y) }
a >
↑{ toFun := fun n => span R (coe' ∘ ↑f '' {m | m ≤ n}),
inj' :=
(_ :
∀ (x y : ℕ),
(fun n => span R (coe' ∘ ↑f '' {m | m ≤ n})) x = (fun n => span R (coe' ∘ ↑f '' {m | m ≤ n})) y →
x = y) }
b ↔
a > b
[PROOFSTEP]
dsimp [GT.gt]
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this✝ : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
this : ∀ (a b : ℕ), a ≤ b ↔ span R (coe' ∘ ↑f '' {m | m ≤ a}) ≤ span R (Subtype.val ∘ ↑f '' {m | m ≤ b})
⊢ ∀ {a b : ℕ}, span R (Subtype.val ∘ ↑f '' {m | m ≤ b}) < span R (Subtype.val ∘ ↑f '' {m | m ≤ a}) ↔ b < a
[PROOFSTEP]
simp only [lt_iff_le_not_le, (this _ _).symm]
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : Nontrivial R
inst✝ : IsNoetherian R M
s : Set M
hs : LinearIndependent R Subtype.val
hf : ¬Set.Finite s
f : ℕ ↪ ↑s
this✝ : ∀ (n : ℕ), Subtype.val ∘ ↑f '' {m | m ≤ n} ⊆ s
coe' : ↑s → M := Subtype.val
this : ∀ (a b : ℕ), a ≤ b ↔ span R (coe' ∘ ↑f '' {m | m ≤ a}) ≤ span R (Subtype.val ∘ ↑f '' {m | m ≤ b})
⊢ ∀ {a b : ℕ}, True
[PROOFSTEP]
tauto
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : IsNoetherian R M
inst✝ : IsNoetherian R P
f : M →ₗ[R] N
g : N →ₗ[R] P
hf : Injective ↑f
hg : Surjective ↑g
h : LinearMap.range f = LinearMap.ker g
⊢ ∀ (a : Submodule R N), map f (comap f a) = a ⊓ LinearMap.range f
[PROOFSTEP]
simp [Submodule.map_comap_eq, inf_comm]
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁸ : Ring R
inst✝⁷ : AddCommGroup M
inst✝⁶ : Module R M
inst✝⁵ : AddCommGroup N
inst✝⁴ : Module R N
inst✝³ : AddCommGroup P
inst✝² : Module R P
inst✝¹ : IsNoetherian R M
inst✝ : IsNoetherian R P
f : M →ₗ[R] N
g : N →ₗ[R] P
hf : Injective ↑f
hg : Surjective ↑g
h : LinearMap.range f = LinearMap.ker g
⊢ ∀ (a : Submodule R N), comap g (map g a) = a ⊔ LinearMap.range f
[PROOFSTEP]
simp [Submodule.comap_map_eq, h]
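The two `simp` goals above are side conditions of an exactness argument (`f` injective, `g` surjective, `range f = ker g`). Reassembled as one standalone statement with the same proofs — a sketch from the goal states, not a named mathlib lemma:

```lean
-- Side conditions from the two goals above, with the same simp proofs.
example {R M N P : Type*} [Ring R] [AddCommGroup M] [Module R M]
    [AddCommGroup N] [Module R N] [AddCommGroup P] [Module R P]
    (f : M →ₗ[R] N) (g : N →ₗ[R] P)
    (h : LinearMap.range f = LinearMap.ker g) (a : Submodule R N) :
    Submodule.map f (Submodule.comap f a) = a ⊓ LinearMap.range f ∧
    Submodule.comap g (Submodule.map g a) = a ⊔ LinearMap.range f :=
  ⟨by simp [Submodule.map_comap_eq, inf_comm], by simp [Submodule.comap_map_eq, h]⟩
```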
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
⊢ ∃ n, n ≠ 0 ∧ LinearMap.ker (f ^ n) ⊓ LinearMap.range (f ^ n) = ⊥
[PROOFSTEP]
obtain ⟨n, w⟩ := monotone_stabilizes_iff_noetherian.mpr I (f.iterateKer.comp ⟨fun n => n + 1, fun n m w => by linarith⟩)
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n m : ℕ
w : n ≤ m
⊢ (fun n => n + 1) n ≤ (fun n => n + 1) m
[PROOFSTEP]
linarith
[GOAL]
case intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w :
∀ (m : ℕ),
n ≤ m →
↑(OrderHom.comp (LinearMap.iterateKer f)
{ toFun := fun n => n + 1,
monotone' := (_ : ∀ (n m : ℕ), n ≤ m → (fun n => n + 1) n ≤ (fun n => n + 1) m) })
n =
↑(OrderHom.comp (LinearMap.iterateKer f)
{ toFun := fun n => n + 1,
monotone' := (_ : ∀ (n m : ℕ), n ≤ m → (fun n => n + 1) n ≤ (fun n => n + 1) m) })
m
⊢ ∃ n, n ≠ 0 ∧ LinearMap.ker (f ^ n) ⊓ LinearMap.range (f ^ n) = ⊥
[PROOFSTEP]
specialize w (2 * n + 1) (by linarith only)
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w :
∀ (m : ℕ),
n ≤ m →
↑(OrderHom.comp (LinearMap.iterateKer f)
{ toFun := fun n => n + 1,
monotone' := (_ : ∀ (n m : ℕ), n ≤ m → (fun n => n + 1) n ≤ (fun n => n + 1) m) })
n =
↑(OrderHom.comp (LinearMap.iterateKer f)
{ toFun := fun n => n + 1,
monotone' := (_ : ∀ (n m : ℕ), n ≤ m → (fun n => n + 1) n ≤ (fun n => n + 1) m) })
m
⊢ n ≤ 2 * n + 1
[PROOFSTEP]
linarith only
[GOAL]
case intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w :
↑(OrderHom.comp (LinearMap.iterateKer f)
{ toFun := fun n => n + 1, monotone' := (_ : ∀ (n m : ℕ), n ≤ m → (fun n => n + 1) n ≤ (fun n => n + 1) m) })
n =
↑(OrderHom.comp (LinearMap.iterateKer f)
{ toFun := fun n => n + 1, monotone' := (_ : ∀ (n m : ℕ), n ≤ m → (fun n => n + 1) n ≤ (fun n => n + 1) m) })
(2 * n + 1)
⊢ ∃ n, n ≠ 0 ∧ LinearMap.ker (f ^ n) ⊓ LinearMap.range (f ^ n) = ⊥
[PROOFSTEP]
dsimp at w
[GOAL]
case intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w : LinearMap.ker (f ^ (n + 1)) = LinearMap.ker (f ^ (2 * n + 1 + 1))
⊢ ∃ n, n ≠ 0 ∧ LinearMap.ker (f ^ n) ⊓ LinearMap.range (f ^ n) = ⊥
[PROOFSTEP]
refine' ⟨n + 1, Nat.succ_ne_zero _, _⟩
[GOAL]
case intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w : LinearMap.ker (f ^ (n + 1)) = LinearMap.ker (f ^ (2 * n + 1 + 1))
⊢ LinearMap.ker (f ^ (n + 1)) ⊓ LinearMap.range (f ^ (n + 1)) = ⊥
[PROOFSTEP]
rw [eq_bot_iff]
[GOAL]
case intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w : LinearMap.ker (f ^ (n + 1)) = LinearMap.ker (f ^ (2 * n + 1 + 1))
⊢ LinearMap.ker (f ^ (n + 1)) ⊓ LinearMap.range (f ^ (n + 1)) ≤ ⊥
[PROOFSTEP]
rintro - ⟨h, ⟨y, rfl⟩⟩
[GOAL]
case intro.intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w : LinearMap.ker (f ^ (n + 1)) = LinearMap.ker (f ^ (2 * n + 1 + 1))
y : M
h : ↑(f ^ (n + 1)) y ∈ ↑(LinearMap.ker (f ^ (n + 1)))
⊢ ↑(f ^ (n + 1)) y ∈ ⊥
[PROOFSTEP]
rw [mem_bot, ← LinearMap.mem_ker, w]
[GOAL]
case intro.intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w : LinearMap.ker (f ^ (n + 1)) = LinearMap.ker (f ^ (2 * n + 1 + 1))
y : M
h : ↑(f ^ (n + 1)) y ∈ ↑(LinearMap.ker (f ^ (n + 1)))
⊢ y ∈ LinearMap.ker (f ^ (2 * n + 1 + 1))
[PROOFSTEP]
erw [LinearMap.mem_ker] at h ⊢
[GOAL]
case intro.intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w : LinearMap.ker (f ^ (n + 1)) = LinearMap.ker (f ^ (2 * n + 1 + 1))
y : M
h : ↑(f ^ (n + 1)) (↑(f ^ (n + 1)) y) = 0
⊢ ↑(f ^ (2 * n + 1 + 1)) y = 0
[PROOFSTEP]
change (f ^ (n + 1) * f ^ (n + 1)) y = 0 at h
[GOAL]
case intro.intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w : LinearMap.ker (f ^ (n + 1)) = LinearMap.ker (f ^ (2 * n + 1 + 1))
y : M
h : ↑(f ^ (n + 1) * f ^ (n + 1)) y = 0
⊢ ↑(f ^ (2 * n + 1 + 1)) y = 0
[PROOFSTEP]
rw [← pow_add] at h
[GOAL]
case intro.intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w : LinearMap.ker (f ^ (n + 1)) = LinearMap.ker (f ^ (2 * n + 1 + 1))
y : M
h : ↑(f ^ (n + 1 + (n + 1))) y = 0
⊢ ↑(f ^ (2 * n + 1 + 1)) y = 0
[PROOFSTEP]
convert h using 3
[GOAL]
case h.e'_2.h.e'_5.h.e'_6
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : M →ₗ[R] M
n : ℕ
w : LinearMap.ker (f ^ (n + 1)) = LinearMap.ker (f ^ (2 * n + 1 + 1))
y : M
h : ↑(f ^ (n + 1 + (n + 1))) y = 0
⊢ 2 * n + 1 + 1 = n + 1 + (n + 1)
[PROOFSTEP]
ring
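The proof completed above is a Fitting-style lemma for Noetherian modules. Assembled from the goal states into a single statement — the lemma name is the one this trace itself applies later, in the surjective-implies-injective proof:

```lean
-- Sketch: for an endomorphism f of a Noetherian module, some positive
-- power f ^ n satisfies ker (f ^ n) ⊓ range (f ^ n) = ⊥.
-- (Name taken from its later application in this trace.)
example {R M : Type*} [Ring R] [AddCommGroup M] [Module R M]
    [IsNoetherian R M] (f : M →ₗ[R] M) :
    ∃ n : ℕ, n ≠ 0 ∧ LinearMap.ker (f ^ n) ⊓ LinearMap.range (f ^ n) = ⊥ :=
  IsNoetherian.exists_endomorphism_iterate_ker_inf_range_eq_bot f
```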
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M →ₗ[R] M
s : Surjective ↑f
⊢ Injective ↑f
[PROOFSTEP]
obtain ⟨n, ne, w⟩ := IsNoetherian.exists_endomorphism_iterate_ker_inf_range_eq_bot f
[GOAL]
case intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M →ₗ[R] M
s : Surjective ↑f
n : ℕ
ne : n ≠ 0
w : LinearMap.ker (f ^ n) ⊓ LinearMap.range (f ^ n) = ⊥
⊢ Injective ↑f
[PROOFSTEP]
rw [LinearMap.range_eq_top.mpr (LinearMap.iterate_surjective s n), inf_top_eq, LinearMap.ker_eq_bot] at w
[GOAL]
case intro.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M →ₗ[R] M
s : Surjective ↑f
n : ℕ
ne : n ≠ 0
w : Injective ↑(f ^ n)
⊢ Injective ↑f
[PROOFSTEP]
exact LinearMap.injective_of_iterate_injective ne w
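The tactic steps above, reassembled into one script: a surjective endomorphism of a Noetherian module is injective. Every step is taken verbatim from the `[PROOFSTEP]` lines of this proof:

```lean
-- Surjective endomorphisms of a Noetherian module are injective:
-- pick n with ker (f ^ n) ⊓ range (f ^ n) = ⊥; surjectivity makes the
-- range ⊤, so ker (f ^ n) = ⊥, and injectivity of f follows.
example {R M : Type*} [Ring R] [AddCommGroup M] [Module R M]
    [IsNoetherian R M] (f : M →ₗ[R] M) (s : Function.Surjective f) :
    Function.Injective f := by
  obtain ⟨n, ne, w⟩ := IsNoetherian.exists_endomorphism_iterate_ker_inf_range_eq_bot f
  rw [LinearMap.range_eq_top.mpr (LinearMap.iterate_surjective s n), inf_top_eq,
    LinearMap.ker_eq_bot] at w
  exact LinearMap.injective_of_iterate_injective ne w
```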
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : ℕ → Submodule R M
h : ∀ (n : ℕ), Disjoint (↑(partialSups f) n) (f (n + 1))
⊢ ∃ n, ∀ (m : ℕ), n ≤ m → f m = ⊥
[PROOFSTEP]
suffices t : ∃ n : ℕ, ∀ m, n ≤ m → f (m + 1) = ⊥
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : ℕ → Submodule R M
h : ∀ (n : ℕ), Disjoint (↑(partialSups f) n) (f (n + 1))
t : ∃ n, ∀ (m : ℕ), n ≤ m → f (m + 1) = ⊥
⊢ ∃ n, ∀ (m : ℕ), n ≤ m → f m = ⊥
[PROOFSTEP]
obtain ⟨n, w⟩ := t
[GOAL]
case intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : ℕ → Submodule R M
h : ∀ (n : ℕ), Disjoint (↑(partialSups f) n) (f (n + 1))
n : ℕ
w : ∀ (m : ℕ), n ≤ m → f (m + 1) = ⊥
⊢ ∃ n, ∀ (m : ℕ), n ≤ m → f m = ⊥
[PROOFSTEP]
use n + 1
[GOAL]
case h
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : ℕ → Submodule R M
h : ∀ (n : ℕ), Disjoint (↑(partialSups f) n) (f (n + 1))
n : ℕ
w : ∀ (m : ℕ), n ≤ m → f (m + 1) = ⊥
⊢ ∀ (m : ℕ), n + 1 ≤ m → f m = ⊥
[PROOFSTEP]
rintro (_ | m) p
[GOAL]
case h.zero
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : ℕ → Submodule R M
h : ∀ (n : ℕ), Disjoint (↑(partialSups f) n) (f (n + 1))
n : ℕ
w : ∀ (m : ℕ), n ≤ m → f (m + 1) = ⊥
p : n + 1 ≤ Nat.zero
⊢ f Nat.zero = ⊥
[PROOFSTEP]
cases p
[GOAL]
case h.succ
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : ℕ → Submodule R M
h : ∀ (n : ℕ), Disjoint (↑(partialSups f) n) (f (n + 1))
n : ℕ
w : ∀ (m : ℕ), n ≤ m → f (m + 1) = ⊥
m : ℕ
p : n + 1 ≤ Nat.succ m
⊢ f (Nat.succ m) = ⊥
[PROOFSTEP]
apply w
[GOAL]
case h.succ.a
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : ℕ → Submodule R M
h : ∀ (n : ℕ), Disjoint (↑(partialSups f) n) (f (n + 1))
n : ℕ
w : ∀ (m : ℕ), n ≤ m → f (m + 1) = ⊥
m : ℕ
p : n + 1 ≤ Nat.succ m
⊢ n ≤ m
[PROOFSTEP]
exact Nat.succ_le_succ_iff.mp p
[GOAL]
case t
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : ℕ → Submodule R M
h : ∀ (n : ℕ), Disjoint (↑(partialSups f) n) (f (n + 1))
⊢ ∃ n, ∀ (m : ℕ), n ≤ m → f (m + 1) = ⊥
[PROOFSTEP]
obtain ⟨n, w⟩ := monotone_stabilizes_iff_noetherian.mpr I (partialSups f)
[GOAL]
case t.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁶ : Ring R
inst✝⁵ : AddCommGroup M
inst✝⁴ : Module R M
inst✝³ : AddCommGroup N
inst✝² : Module R N
inst✝¹ : AddCommGroup P
inst✝ : Module R P
I : IsNoetherian R M
f : ℕ → Submodule R M
h : ∀ (n : ℕ), Disjoint (↑(partialSups f) n) (f (n + 1))
n : ℕ
w : ∀ (m : ℕ), n ≤ m → ↑(partialSups f) n = ↑(partialSups f) m
⊢ ∃ n, ∀ (m : ℕ), n ≤ m → f (m + 1) = ⊥
[PROOFSTEP]
exact ⟨n, fun m p => (h m).eq_bot_of_ge <| sup_eq_left.1 <| (w (m + 1) <| le_add_right p).symm.trans <| w m p⟩
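The lemma just proved, restated as one declaration — the name is the one this trace applies later, in the `M × N ↪ M` argument; the hypothesis `I` is written as an instance since the later application passes only `f` and `h`:

```lean
-- Sketch: if each f (n + 1) is disjoint from the partial sups of f up
-- to n, Noetherianity forces f to be eventually ⊥.
example {R M : Type*} [Ring R] [AddCommGroup M] [Module R M]
    [IsNoetherian R M] (f : ℕ → Submodule R M)
    (h : ∀ n, Disjoint (partialSups f n) (f (n + 1))) :
    ∃ n : ℕ, ∀ m, n ≤ m → f m = ⊥ :=
  IsNoetherian.disjoint_partialSups_eventually_bot f h
```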
[GOAL]
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M × N →ₗ[R] M
i : Injective ↑f
⊢ N ≃ₗ[R] PUnit
[PROOFSTEP]
apply Nonempty.some
[GOAL]
case h
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M × N →ₗ[R] M
i : Injective ↑f
⊢ Nonempty (N ≃ₗ[R] PUnit)
[PROOFSTEP]
obtain ⟨n, w⟩ := IsNoetherian.disjoint_partialSups_eventually_bot (f.tailing i) (f.tailings_disjoint_tailing i)
[GOAL]
case h.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M × N →ₗ[R] M
i : Injective ↑f
n : ℕ
w : ∀ (m : ℕ), n ≤ m → LinearMap.tailing f i m = ⊥
⊢ Nonempty (N ≃ₗ[R] PUnit)
[PROOFSTEP]
specialize w n (le_refl n)
[GOAL]
case h.intro
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M × N →ₗ[R] M
i : Injective ↑f
n : ℕ
w : LinearMap.tailing f i n = ⊥
⊢ Nonempty (N ≃ₗ[R] PUnit)
[PROOFSTEP]
apply Nonempty.intro
[GOAL]
case h.intro.val
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M × N →ₗ[R] M
i : Injective ↑f
n : ℕ
w : LinearMap.tailing f i n = ⊥
⊢ N ≃ₗ[R] PUnit
[PROOFSTEP]
refine (LinearMap.tailingLinearEquiv f i n).symm ≪≫ₗ ?_
[GOAL]
case h.intro.val
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M × N →ₗ[R] M
i : Injective ↑f
n : ℕ
w : LinearMap.tailing f i n = ⊥
⊢ { x // x ∈ LinearMap.tailing f i n } ≃ₗ[R] PUnit
[PROOFSTEP]
rw [w]
[GOAL]
case h.intro.val
R : Type u_1
M : Type u_2
P : Type u_3
N : Type w
inst✝⁷ : Ring R
inst✝⁶ : AddCommGroup M
inst✝⁵ : Module R M
inst✝⁴ : AddCommGroup N
inst✝³ : Module R N
inst✝² : AddCommGroup P
inst✝¹ : Module R P
inst✝ : IsNoetherian R M
f : M × N →ₗ[R] M
i : Injective ↑f
n : ℕ
w : LinearMap.tailing f i n = ⊥
⊢ { x // x ∈ ⊥ } ≃ₗ[R] PUnit
[PROOFSTEP]
apply Submodule.botEquivPUnit
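The full construction above, reassembled into one tactic script. The statement is the one at the head of this proof (the unused `P` is omitted); every tactic is copied from the `[PROOFSTEP]` lines:

```lean
-- If M × N embeds R-linearly into a Noetherian module M, then N is
-- trivial: the tailing submodules are eventually ⊥, and N is linearly
-- equivalent to the n-th tailing.
example {R M N : Type*} [Ring R] [AddCommGroup M] [Module R M]
    [AddCommGroup N] [Module R N] [IsNoetherian R M]
    (f : M × N →ₗ[R] M) (i : Function.Injective f) : N ≃ₗ[R] PUnit := by
  apply Nonempty.some
  obtain ⟨n, w⟩ := IsNoetherian.disjoint_partialSups_eventually_bot (f.tailing i)
    (f.tailings_disjoint_tailing i)
  specialize w n (le_refl n)
  apply Nonempty.intro
  refine (LinearMap.tailingLinearEquiv f i n).symm ≪≫ₗ ?_
  rw [w]
  apply Submodule.botEquivPUnit
```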
[GOAL]
R : Type ?u.290620
M : Type ?u.290623
inst✝³ : Finite M
inst✝² : Semiring R
inst✝¹ : AddCommMonoid M
inst✝ : Module R M
s : Submodule R M
⊢ span R ↑(Finite.toFinset (_ : Set.Finite ↑s)) = s
[PROOFSTEP]
rw [Set.Finite.coe_toFinset, Submodule.span_eq]
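The one-line goal above, restated with the anonymous finiteness proof given a name — a sketch, since the trace elides how `Set.Finite ↑s` is obtained (here from `Finite M`):

```lean
-- span of the Finset underlying a finite submodule recovers the submodule:
-- coe_toFinset collapses the Finset coercion, then span_eq closes it.
example {R M : Type*} [Finite M] [Semiring R] [AddCommMonoid M] [Module R M]
    (s : Submodule R M) (hfin : Set.Finite (s : Set M)) :
    Submodule.span R (hfin.toFinset : Set M) = s := by
  rw [Set.Finite.coe_toFinset, Submodule.span_eq]
```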
[GOAL]
R : Type u_1
M : Type u_2
inst✝² : Semiring R
inst✝¹ : AddCommMonoid M
inst✝ : Module R M
N : Submodule R M
h : IsNoetherian R M
⊢ IsNoetherian R { x // x ∈ N }
[PROOFSTEP]
rw [isNoetherian_iff_wellFounded] at h ⊢
[GOAL]
R : Type u_1
M : Type u_2
inst✝² : Semiring R
inst✝¹ : AddCommMonoid M
inst✝ : Module R M
N : Submodule R M
h : WellFounded fun x x_1 => x > x_1
⊢ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
exact OrderEmbedding.wellFounded (Submodule.MapSubtype.orderEmbedding N).dual h
[GOAL]
R : Type ?u.294062
inst✝² : Ring R
M : Type ?u.294068
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
h : IsNoetherian R M
⊢ IsNoetherian R (M ⧸ N)
[PROOFSTEP]
rw [isNoetherian_iff_wellFounded] at h ⊢
[GOAL]
R : Type ?u.294062
inst✝² : Ring R
M : Type ?u.294068
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
h : WellFounded fun x x_1 => x > x_1
⊢ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
exact OrderEmbedding.wellFounded (Submodule.comapMkQOrderEmbedding N).dual h
[GOAL]
R : Type u_1
S : Type u_2
M : Type u_3
inst✝⁶ : Semiring R
inst✝⁵ : Semiring S
inst✝⁴ : AddCommMonoid M
inst✝³ : SMul R S
inst✝² : Module S M
inst✝¹ : Module R M
inst✝ : IsScalarTower R S M
h : IsNoetherian R M
⊢ IsNoetherian S M
[PROOFSTEP]
rw [isNoetherian_iff_wellFounded] at h ⊢
[GOAL]
R : Type u_1
S : Type u_2
M : Type u_3
inst✝⁶ : Semiring R
inst✝⁵ : Semiring S
inst✝⁴ : AddCommMonoid M
inst✝³ : SMul R S
inst✝² : Module S M
inst✝¹ : Module R M
inst✝ : IsScalarTower R S M
h : WellFounded fun x x_1 => x > x_1
⊢ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
refine' (Submodule.restrictScalarsEmbedding R S M).dual.wellFounded h
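The three preceding proofs (submodule, quotient, scalar restriction) share one pattern: rewrite `IsNoetherian` to well-foundedness of `>` on submodules, then transfer along the dual of an order embedding. The quotient case, reassembled verbatim from its `[PROOFSTEP]` lines:

```lean
-- Transfer pattern used in the three proofs above, shown for quotients:
-- comapMkQOrderEmbedding embeds submodules of M ⧸ N into submodules of M,
-- and its dual pulls back well-foundedness of >.
example {R M : Type*} [Ring R] [AddCommGroup M] [Module R M]
    (N : Submodule R M) (h : IsNoetherian R M) : IsNoetherian R (M ⧸ N) := by
  rw [isNoetherian_iff_wellFounded] at h ⊢
  exact OrderEmbedding.wellFounded (Submodule.comapMkQOrderEmbedding N).dual h
```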
[GOAL]
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
⊢ IsNoetherian R { x // x ∈ N }
[PROOFSTEP]
let ⟨s, hs⟩ := hN
[GOAL]
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
⊢ IsNoetherian R { x // x ∈ N }
[PROOFSTEP]
haveI := Classical.decEq M
[GOAL]
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this : DecidableEq M
⊢ IsNoetherian R { x // x ∈ N }
[PROOFSTEP]
haveI := Classical.decEq R
[GOAL]
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝ : DecidableEq M
this : DecidableEq R
⊢ IsNoetherian R { x // x ∈ N }
[PROOFSTEP]
have : ∀ x ∈ s, x ∈ N := fun x hx => hs ▸ Submodule.subset_span hx
[GOAL]
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
⊢ IsNoetherian R { x // x ∈ N }
[PROOFSTEP]
refine @isNoetherian_of_surjective R ((↑s : Set M) → R) N _ _ _ (Pi.module _ _ _) _ ?_ ?_ isNoetherian_pi
[GOAL]
case refine_1
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
⊢ (↑↑s → R) →ₗ[R] { x // x ∈ N }
[PROOFSTEP]
fapply LinearMap.mk
[GOAL]
case refine_1.toAddHom
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
⊢ AddHom (↑↑s → R) { x // x ∈ N }
[PROOFSTEP]
fapply AddHom.mk
[GOAL]
case refine_1.toAddHom.toFun
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
⊢ (↑↑s → R) → { x // x ∈ N }
[PROOFSTEP]
exact fun f => ⟨∑ i in s.attach, f i • i.1, N.sum_mem fun c _ => N.smul_mem _ <| this _ c.2⟩
[GOAL]
case refine_1.toAddHom.map_add'
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
⊢ ∀ (x y : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (x + y) i • ↑i, property := (_ : ∑ i in Finset.attach s, (x + y) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, x i • ↑i, property := (_ : ∑ i in Finset.attach s, x i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, y i • ↑i, property := (_ : ∑ i in Finset.attach s, y i • ↑i ∈ N) }
[PROOFSTEP]
intro f g
[GOAL]
case refine_1.toAddHom.map_add'
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
f g : ↑↑s → R
⊢ { val := ∑ i in Finset.attach s, (f + g) i • ↑i, property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i, property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }
[PROOFSTEP]
apply Subtype.eq
[GOAL]
case refine_1.toAddHom.map_add'.a
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
f g : ↑↑s → R
⊢ ↑{ val := ∑ i in Finset.attach s, (f + g) i • ↑i, property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
↑({ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i, property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) })
[PROOFSTEP]
change (∑ i in s.attach, (f i + g i) • _) = _
[GOAL]
case refine_1.toAddHom.map_add'.a
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
f g : ↑↑s → R
⊢ ∑ i in Finset.attach s, (f i + g i) • ↑i =
↑({ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i, property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) })
[PROOFSTEP]
simp only [add_smul, Finset.sum_add_distrib]
[GOAL]
case refine_1.toAddHom.map_add'.a
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
f g : ↑↑s → R
⊢ ∑ x in Finset.attach s, f x • ↑x + ∑ x in Finset.attach s, g x • ↑x =
↑({ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i, property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) })
[PROOFSTEP]
rfl
[GOAL]
case refine_1.map_smul'
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
⊢ ∀ (r : R) (x : ↑↑s → R),
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(r • x) =
↑(RingHom.id R) r •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
x
[PROOFSTEP]
intro c f
[GOAL]
case refine_1.map_smul'
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
c : R
f : ↑↑s → R
⊢ AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i, property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f
[PROOFSTEP]
apply Subtype.eq
[GOAL]
case refine_1.map_smul'.a
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
c : R
f : ↑↑s → R
⊢ ↑(AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(c • f)) =
↑(↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f)
[PROOFSTEP]
change (∑ i in s.attach, (c • f i) • _) = _
[GOAL]
case refine_1.map_smul'.a
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
c : R
f : ↑↑s → R
⊢ ∑ i in Finset.attach s, (c • f i) • ↑i =
↑(↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f)
[PROOFSTEP]
simp only [smul_eq_mul, mul_smul]
[GOAL]
case refine_1.map_smul'.a
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
c : R
f : ↑↑s → R
⊢ ∑ x in Finset.attach s, c • f x • ↑x =
↑(↑(RingHom.id R) c •
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) })
[PROOFSTEP]
exact Finset.smul_sum.symm
[GOAL]
case refine_2
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
⊢ LinearMap.range
{
toAddHom :=
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) },
map_smul' :=
(_ :
∀ (c : R) (f : ↑↑s → R),
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f) } =
⊤
[PROOFSTEP]
rw [LinearMap.range_eq_top]
[GOAL]
case refine_2
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
⊢ Surjective
↑{
toAddHom :=
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) },
map_smul' :=
(_ :
∀ (c : R) (f : ↑↑s → R),
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f) }
[PROOFSTEP]
rintro ⟨n, hn⟩
[GOAL]
case refine_2.mk
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
n : M
hn : n ∈ N
⊢ ∃ a,
↑{
toAddHom :=
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) },
map_smul' :=
(_ :
∀ (c : R) (f : ↑↑s → R),
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f) }
a =
{ val := n, property := hn }
[PROOFSTEP]
change n ∈ N at hn
[GOAL]
case refine_2.mk
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
n : M
hn : n ∈ N
⊢ ∃ a,
↑{
toAddHom :=
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) },
map_smul' :=
(_ :
∀ (c : R) (f : ↑↑s → R),
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f) }
a =
{ val := n, property := hn }
[PROOFSTEP]
rw [← hs, ← Set.image_id (s : Set M), Finsupp.mem_span_image_iff_total] at hn
[GOAL]
case refine_2.mk
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
n : M
hn✝ : n ∈ N
hn : ∃ l, l ∈ Finsupp.supported R R ↑s ∧ ↑(Finsupp.total M M R id) l = n
⊢ ∃ a,
↑{
toAddHom :=
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) },
map_smul' :=
(_ :
∀ (c : R) (f : ↑↑s → R),
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f) }
a =
{ val := n, property := hn✝ }
[PROOFSTEP]
rcases hn with ⟨l, hl1, hl2⟩
[GOAL]
case refine_2.mk.intro.intro
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
n : M
hn : n ∈ N
l : M →₀ R
hl1 : l ∈ Finsupp.supported R R ↑s
hl2 : ↑(Finsupp.total M M R id) l = n
⊢ ∃ a,
↑{
toAddHom :=
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) },
map_smul' :=
(_ :
∀ (c : R) (f : ↑↑s → R),
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f) }
a =
{ val := n, property := hn }
[PROOFSTEP]
refine' ⟨fun x => l x, Subtype.ext _⟩
[GOAL]
case refine_2.mk.intro.intro
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
n : M
hn : n ∈ N
l : M →₀ R
hl1 : l ∈ Finsupp.supported R R ↑s
hl2 : ↑(Finsupp.total M M R id) l = n
⊢ ↑(↑{
toAddHom :=
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i, property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) },
map_smul' :=
(_ :
∀ (c : R) (f : ↑↑s → R),
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
(c • f) =
↑(RingHom.id R) c •
AddHom.toFun
{
toFun := fun f =>
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) },
map_add' :=
(_ :
∀ (f g : ↑↑s → R),
{ val := ∑ i in Finset.attach s, (f + g) i • ↑i,
property := (_ : ∑ i in Finset.attach s, (f + g) i • ↑i ∈ N) } =
{ val := ∑ i in Finset.attach s, f i • ↑i,
property := (_ : ∑ i in Finset.attach s, f i • ↑i ∈ N) } +
{ val := ∑ i in Finset.attach s, g i • ↑i,
property := (_ : ∑ i in Finset.attach s, g i • ↑i ∈ N) }) }
f) }
fun x => ↑l ↑x) =
↑{ val := n, property := hn }
[PROOFSTEP]
change (∑ i in s.attach, l i • (i : M)) = n
[GOAL]
case refine_2.mk.intro.intro
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
n : M
hn : n ∈ N
l : M →₀ R
hl1 : l ∈ Finsupp.supported R R ↑s
hl2 : ↑(Finsupp.total M M R id) l = n
⊢ ∑ i in Finset.attach s, ↑l ↑i • ↑i = n
[PROOFSTEP]
rw [@Finset.sum_attach M M s _ fun i => l i • i, ← hl2, Finsupp.total_apply, Finsupp.sum, eq_comm]
[GOAL]
case refine_2.mk.intro.intro
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
n : M
hn : n ∈ N
l : M →₀ R
hl1 : l ∈ Finsupp.supported R R ↑s
hl2 : ↑(Finsupp.total M M R id) l = n
⊢ ∑ a in l.support, ↑l a • id a = ∑ x in s, ↑l x • x
[PROOFSTEP]
refine' Finset.sum_subset hl1 fun x _ hx => _
[GOAL]
case refine_2.mk.intro.intro
R : Type u_1
M : Type u_2
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
N : Submodule R M
I : IsNoetherianRing R
hN : FG N
s : Finset M
hs : span R ↑s = N
this✝¹ : DecidableEq M
this✝ : DecidableEq R
this : ∀ (x : M), x ∈ s → x ∈ N
n : M
hn : n ∈ N
l : M →₀ R
hl1 : l ∈ Finsupp.supported R R ↑s
hl2 : ↑(Finsupp.total M M R id) l = n
x : M
x✝ : x ∈ s
hx : ¬x ∈ l.support
⊢ ↑l x • id x = 0
[PROOFSTEP]
rw [Finsupp.not_mem_support_iff.1 hx, zero_smul]
[GOAL]
R : Type u_1
inst✝¹ : Ring R
S : Type u_2
inst✝ : Ring S
f : R →+* S
hf : Surjective ↑f
H : IsNoetherianRing R
⊢ IsNoetherianRing S
[PROOFSTEP]
rw [isNoetherianRing_iff, isNoetherian_iff_wellFounded] at H ⊢
[GOAL]
R : Type u_1
inst✝¹ : Ring R
S : Type u_2
inst✝ : Ring S
f : R →+* S
hf : Surjective ↑f
H : WellFounded fun x x_1 => x > x_1
⊢ WellFounded fun x x_1 => x > x_1
[PROOFSTEP]
exact OrderEmbedding.wellFounded (Ideal.orderEmbeddingOfSurjective f hf).dual H
[GOAL]
R : Type u_1
inst✝¹ : CommRing R
inst✝ : IsNoetherianRing R
⊢ IsNilpotent (nilradical R)
[PROOFSTEP]
obtain ⟨n, hn⟩ := Ideal.exists_radical_pow_le_of_fg (⊥ : Ideal R) (IsNoetherian.noetherian _)
[GOAL]
case intro
R : Type u_1
inst✝¹ : CommRing R
inst✝ : IsNoetherianRing R
n : ℕ
hn : Ideal.radical ⊥ ^ n ≤ ⊥
⊢ IsNilpotent (nilradical R)
[PROOFSTEP]
exact ⟨n, eq_bot_iff.mpr hn⟩
|
[GOAL]
α : Type u_1
M : Type u_2
inst✝ : AddCommMonoid M
f : α → α
g : α → M
m n : ℕ
x : α
⊢ birkhoffSum f g (m + n) x = birkhoffSum f g m x + birkhoffSum f g n (f^[m] x)
[PROOFSTEP]
simp_rw [birkhoffSum, sum_range_add, add_comm m, iterate_add_apply]
[GOAL]
α : Type u_1
M : Type u_2
inst✝ : AddCommMonoid M
f : α → α
x : α
h : IsFixedPt f x
g : α → M
n : ℕ
⊢ birkhoffSum f g n x = n • g x
[PROOFSTEP]
simp [birkhoffSum, (h.iterate _).eq]
[GOAL]
α : Type u_1
G : Type u_2
inst✝ : AddCommGroup G
f : α → α
g : α → G
n : ℕ
x : α
⊢ birkhoffSum f g n (f x) - birkhoffSum f g n x = g (f^[n] x) - g x
[PROOFSTEP]
rw [← sub_eq_iff_eq_add.2 (birkhoffSum_succ f g n x), ← sub_eq_iff_eq_add.2 (birkhoffSum_succ' f g n x), ← sub_add, ←
sub_add, sub_add_comm]
|
(* Title: Abstract Substitutions
Author: Dmitriy Traytel <traytel at inf.ethz.ch>, 2014
Author: Jasmin Blanchette <j.c.blanchette at vu.nl>, 2014, 2017
Author: Anders Schlichtkrull <andschl at dtu.dk>, 2016, 2017
Author: Martin Desharnais <desharnais at mpi-inf.mpg.de>, 2022
Maintainer: Anders Schlichtkrull <andschl at dtu.dk>
*)
section \<open>Abstract Substitutions\<close>
theory Abstract_Substitution
imports Clausal_Logic Map2
begin
text \<open>
Atoms and substitutions are abstracted away behind some locales, to avoid having a direct dependency
on the IsaFoR library.
Conventions: \<open>'s\<close> substitutions, \<open>'a\<close> atoms.
\<close>
subsection \<open>Library\<close>
lemma f_Suc_decr_eventually_const:
fixes f :: "nat \<Rightarrow> nat"
assumes leq: "\<forall>i. f (Suc i) \<le> f i"
shows "\<exists>l. \<forall>l' \<ge> l. f l' = f (Suc l')"
proof (rule ccontr)
assume a: "\<nexists>l. \<forall>l' \<ge> l. f l' = f (Suc l')"
have "\<forall>i. \<exists>i'. i' > i \<and> f i' < f i"
proof
fix i
from a have "\<exists>l' \<ge> i. f l' \<noteq> f (Suc l')"
by auto
then obtain l' where
l'_p: "l' \<ge> i \<and> f l' \<noteq> f (Suc l')"
by metis
then have "f l' > f (Suc l')"
using leq le_eq_less_or_eq by auto
moreover have "f i \<ge> f l'"
using leq l'_p by (induction l' arbitrary: i) (blast intro: lift_Suc_antimono_le)+
ultimately show "\<exists>i' > i. f i' < f i"
using l'_p less_le_trans by blast
qed
then obtain g_sm :: "nat \<Rightarrow> nat" where
g_sm_p: "\<forall>i. g_sm i > i \<and> f (g_sm i) < f i"
by metis
define c :: "nat \<Rightarrow> nat" where
"\<And>n. c n = (g_sm ^^ n) 0"
have "f (c i) > f (c (Suc i))" for i
by (induction i) (auto simp: c_def g_sm_p)
then have "\<forall>i. (f \<circ> c) i > (f \<circ> c) (Suc i)"
by auto
then have "\<exists>fc :: nat \<Rightarrow> nat. \<forall>i. fc i > fc (Suc i)"
by metis
then show False
using wf_less_than by (simp add: wf_iff_no_infinite_down_chain)
qed
subsection \<open>Substitution Operators\<close>
locale substitution_ops =
fixes
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: 's and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's"
begin
abbreviation subst_atm_abbrev :: "'a \<Rightarrow> 's \<Rightarrow> 'a" (infixl "\<cdot>a" 67) where
"subst_atm_abbrev \<equiv> subst_atm"
abbreviation comp_subst_abbrev :: "'s \<Rightarrow> 's \<Rightarrow> 's" (infixl "\<odot>" 67) where
"comp_subst_abbrev \<equiv> comp_subst"
definition comp_substs :: "'s list \<Rightarrow> 's list \<Rightarrow> 's list" (infixl "\<odot>s" 67) where
"\<sigma>s \<odot>s \<tau>s = map2 comp_subst \<sigma>s \<tau>s"
definition subst_atms :: "'a set \<Rightarrow> 's \<Rightarrow> 'a set" (infixl "\<cdot>as" 67) where
"AA \<cdot>as \<sigma> = (\<lambda>A. A \<cdot>a \<sigma>) ` AA"
definition subst_atmss :: "'a set set \<Rightarrow> 's \<Rightarrow> 'a set set" (infixl "\<cdot>ass" 67) where
"AAA \<cdot>ass \<sigma> = (\<lambda>AA. AA \<cdot>as \<sigma>) ` AAA"
definition subst_atm_list :: "'a list \<Rightarrow> 's \<Rightarrow> 'a list" (infixl "\<cdot>al" 67) where
"As \<cdot>al \<sigma> = map (\<lambda>A. A \<cdot>a \<sigma>) As"
definition subst_atm_mset :: "'a multiset \<Rightarrow> 's \<Rightarrow> 'a multiset" (infixl "\<cdot>am" 67) where
"AA \<cdot>am \<sigma> = image_mset (\<lambda>A. A \<cdot>a \<sigma>) AA"
definition
subst_atm_mset_list :: "'a multiset list \<Rightarrow> 's \<Rightarrow> 'a multiset list" (infixl "\<cdot>aml" 67)
where
"AAA \<cdot>aml \<sigma> = map (\<lambda>AA. AA \<cdot>am \<sigma>) AAA"
definition
subst_atm_mset_lists :: "'a multiset list \<Rightarrow> 's list \<Rightarrow> 'a multiset list" (infixl "\<cdot>\<cdot>aml" 67)
where
"AAs \<cdot>\<cdot>aml \<sigma>s = map2 (\<cdot>am) AAs \<sigma>s"
definition subst_lit :: "'a literal \<Rightarrow> 's \<Rightarrow> 'a literal" (infixl "\<cdot>l" 67) where
"L \<cdot>l \<sigma> = map_literal (\<lambda>A. A \<cdot>a \<sigma>) L"
lemma atm_of_subst_lit[simp]: "atm_of (L \<cdot>l \<sigma>) = atm_of L \<cdot>a \<sigma>"
unfolding subst_lit_def by (cases L) simp+
definition subst_cls :: "'a clause \<Rightarrow> 's \<Rightarrow> 'a clause" (infixl "\<cdot>" 67) where
"AA \<cdot> \<sigma> = image_mset (\<lambda>A. A \<cdot>l \<sigma>) AA"
definition subst_clss :: "'a clause set \<Rightarrow> 's \<Rightarrow> 'a clause set" (infixl "\<cdot>cs" 67) where
"AA \<cdot>cs \<sigma> = (\<lambda>A. A \<cdot> \<sigma>) ` AA"
definition subst_cls_list :: "'a clause list \<Rightarrow> 's \<Rightarrow> 'a clause list" (infixl "\<cdot>cl" 67) where
"Cs \<cdot>cl \<sigma> = map (\<lambda>A. A \<cdot> \<sigma>) Cs"
definition subst_cls_lists :: "'a clause list \<Rightarrow> 's list \<Rightarrow> 'a clause list" (infixl "\<cdot>\<cdot>cl" 67) where
"Cs \<cdot>\<cdot>cl \<sigma>s = map2 (\<cdot>) Cs \<sigma>s"
definition subst_cls_mset :: "'a clause multiset \<Rightarrow> 's \<Rightarrow> 'a clause multiset" (infixl "\<cdot>cm" 67) where
"CC \<cdot>cm \<sigma> = image_mset (\<lambda>A. A \<cdot> \<sigma>) CC"
lemma subst_cls_add_mset[simp]: "add_mset L C \<cdot> \<sigma> = add_mset (L \<cdot>l \<sigma>) (C \<cdot> \<sigma>)"
unfolding subst_cls_def by simp
lemma subst_cls_mset_add_mset[simp]: "add_mset C CC \<cdot>cm \<sigma> = add_mset (C \<cdot> \<sigma>) (CC \<cdot>cm \<sigma>)"
unfolding subst_cls_mset_def by simp
definition generalizes_atm :: "'a \<Rightarrow> 'a \<Rightarrow> bool" where
"generalizes_atm A B \<longleftrightarrow> (\<exists>\<sigma>. A \<cdot>a \<sigma> = B)"
definition strictly_generalizes_atm :: "'a \<Rightarrow> 'a \<Rightarrow> bool" where
"strictly_generalizes_atm A B \<longleftrightarrow> generalizes_atm A B \<and> \<not> generalizes_atm B A"
definition generalizes_lit :: "'a literal \<Rightarrow> 'a literal \<Rightarrow> bool" where
"generalizes_lit L M \<longleftrightarrow> (\<exists>\<sigma>. L \<cdot>l \<sigma> = M)"
definition strictly_generalizes_lit :: "'a literal \<Rightarrow> 'a literal \<Rightarrow> bool" where
"strictly_generalizes_lit L M \<longleftrightarrow> generalizes_lit L M \<and> \<not> generalizes_lit M L"
definition generalizes :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
"generalizes C D \<longleftrightarrow> (\<exists>\<sigma>. C \<cdot> \<sigma> = D)"
definition strictly_generalizes :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
"strictly_generalizes C D \<longleftrightarrow> generalizes C D \<and> \<not> generalizes D C"
definition subsumes :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
"subsumes C D \<longleftrightarrow> (\<exists>\<sigma>. C \<cdot> \<sigma> \<subseteq># D)"
definition strictly_subsumes :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
"strictly_subsumes C D \<longleftrightarrow> subsumes C D \<and> \<not> subsumes D C"
definition variants :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
"variants C D \<longleftrightarrow> generalizes C D \<and> generalizes D C"
definition is_renaming :: "'s \<Rightarrow> bool" where
"is_renaming \<sigma> \<longleftrightarrow> (\<exists>\<tau>. \<sigma> \<odot> \<tau> = id_subst)"
definition is_renaming_list :: "'s list \<Rightarrow> bool" where
"is_renaming_list \<sigma>s \<longleftrightarrow> (\<forall>\<sigma> \<in> set \<sigma>s. is_renaming \<sigma>)"
definition inv_renaming :: "'s \<Rightarrow> 's" where
"inv_renaming \<sigma> = (SOME \<tau>. \<sigma> \<odot> \<tau> = id_subst)"
definition is_ground_atm :: "'a \<Rightarrow> bool" where
"is_ground_atm A \<longleftrightarrow> (\<forall>\<sigma>. A = A \<cdot>a \<sigma>)"
definition is_ground_atms :: "'a set \<Rightarrow> bool" where
"is_ground_atms AA = (\<forall>A \<in> AA. is_ground_atm A)"
definition is_ground_atm_list :: "'a list \<Rightarrow> bool" where
"is_ground_atm_list As \<longleftrightarrow> (\<forall>A \<in> set As. is_ground_atm A)"
definition is_ground_atm_mset :: "'a multiset \<Rightarrow> bool" where
"is_ground_atm_mset AA \<longleftrightarrow> (\<forall>A. A \<in># AA \<longrightarrow> is_ground_atm A)"
definition is_ground_lit :: "'a literal \<Rightarrow> bool" where
"is_ground_lit L \<longleftrightarrow> is_ground_atm (atm_of L)"
definition is_ground_cls :: "'a clause \<Rightarrow> bool" where
"is_ground_cls C \<longleftrightarrow> (\<forall>L. L \<in># C \<longrightarrow> is_ground_lit L)"
definition is_ground_clss :: "'a clause set \<Rightarrow> bool" where
"is_ground_clss CC \<longleftrightarrow> (\<forall>C \<in> CC. is_ground_cls C)"
definition is_ground_cls_list :: "'a clause list \<Rightarrow> bool" where
"is_ground_cls_list CC \<longleftrightarrow> (\<forall>C \<in> set CC. is_ground_cls C)"
definition is_ground_subst :: "'s \<Rightarrow> bool" where
"is_ground_subst \<sigma> \<longleftrightarrow> (\<forall>A. is_ground_atm (A \<cdot>a \<sigma>))"
definition is_ground_subst_list :: "'s list \<Rightarrow> bool" where
"is_ground_subst_list \<sigma>s \<longleftrightarrow> (\<forall>\<sigma> \<in> set \<sigma>s. is_ground_subst \<sigma>)"
definition grounding_of_cls :: "'a clause \<Rightarrow> 'a clause set" where
"grounding_of_cls C = {C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>}"
definition grounding_of_clss :: "'a clause set \<Rightarrow> 'a clause set" where
"grounding_of_clss CC = (\<Union>C \<in> CC. grounding_of_cls C)"
definition is_unifier :: "'s \<Rightarrow> 'a set \<Rightarrow> bool" where
"is_unifier \<sigma> AA \<longleftrightarrow> card (AA \<cdot>as \<sigma>) \<le> 1"
definition is_unifiers :: "'s \<Rightarrow> 'a set set \<Rightarrow> bool" where
"is_unifiers \<sigma> AAA \<longleftrightarrow> (\<forall>AA \<in> AAA. is_unifier \<sigma> AA)"
definition is_mgu :: "'s \<Rightarrow> 'a set set \<Rightarrow> bool" where
"is_mgu \<sigma> AAA \<longleftrightarrow> is_unifiers \<sigma> AAA \<and> (\<forall>\<tau>. is_unifiers \<tau> AAA \<longrightarrow> (\<exists>\<gamma>. \<tau> = \<sigma> \<odot> \<gamma>))"
definition is_imgu :: "'s \<Rightarrow> 'a set set \<Rightarrow> bool" where
"is_imgu \<sigma> AAA \<longleftrightarrow> is_unifiers \<sigma> AAA \<and> (\<forall>\<tau>. is_unifiers \<tau> AAA \<longrightarrow> \<tau> = \<sigma> \<odot> \<tau>)"
definition var_disjoint :: "'a clause list \<Rightarrow> bool" where
"var_disjoint Cs \<longleftrightarrow>
(\<forall>\<sigma>s. length \<sigma>s = length Cs \<longrightarrow> (\<exists>\<tau>. \<forall>i < length Cs. \<forall>S. S \<subseteq># Cs ! i \<longrightarrow> S \<cdot> \<sigma>s ! i = S \<cdot> \<tau>))"
end
subsection \<open>Substitution Lemmas\<close>
locale substitution = substitution_ops subst_atm id_subst comp_subst
for
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: 's and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" +
assumes
subst_atm_id_subst[simp]: "A \<cdot>a id_subst = A" and
subst_atm_comp_subst[simp]: "A \<cdot>a (\<sigma> \<odot> \<tau>) = (A \<cdot>a \<sigma>) \<cdot>a \<tau>" and
subst_ext: "(\<And>A. A \<cdot>a \<sigma> = A \<cdot>a \<tau>) \<Longrightarrow> \<sigma> = \<tau>" and
    make_ground_subst: "is_ground_cls (C \<cdot> \<sigma>) \<Longrightarrow> \<exists>\<tau>. is_ground_subst \<tau> \<and> C \<cdot> \<tau> = C \<cdot> \<sigma>" and
wf_strictly_generalizes_atm: "wfP strictly_generalizes_atm"
begin
lemma subst_ext_iff: "\<sigma> = \<tau> \<longleftrightarrow> (\<forall>A. A \<cdot>a \<sigma> = A \<cdot>a \<tau>)"
by (blast intro: subst_ext)
subsubsection \<open>Identity Substitution\<close>
lemma id_subst_comp_subst[simp]: "id_subst \<odot> \<sigma> = \<sigma>"
by (rule subst_ext) simp
lemma comp_subst_id_subst[simp]: "\<sigma> \<odot> id_subst = \<sigma>"
by (rule subst_ext) simp
lemma id_subst_comp_substs[simp]: "replicate (length \<sigma>s) id_subst \<odot>s \<sigma>s = \<sigma>s"
using comp_substs_def by (induction \<sigma>s) auto
lemma comp_substs_id_subst[simp]: "\<sigma>s \<odot>s replicate (length \<sigma>s) id_subst = \<sigma>s"
using comp_substs_def by (induction \<sigma>s) auto
lemma subst_atms_id_subst[simp]: "AA \<cdot>as id_subst = AA"
unfolding subst_atms_def by simp
lemma subst_atmss_id_subst[simp]: "AAA \<cdot>ass id_subst = AAA"
unfolding subst_atmss_def by simp
lemma subst_atm_list_id_subst[simp]: "As \<cdot>al id_subst = As"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_id_subst[simp]: "AA \<cdot>am id_subst = AA"
unfolding subst_atm_mset_def by simp
lemma subst_atm_mset_list_id_subst[simp]: "AAs \<cdot>aml id_subst = AAs"
unfolding subst_atm_mset_list_def by simp
lemma subst_cls_id_subst[simp]: "C \<cdot> id_subst = C"
unfolding subst_cls_def by simp
lemma subst_clss_id_subst[simp]: "CC \<cdot>cs id_subst = CC"
unfolding subst_clss_def by simp
lemma subst_cls_list_id_subst[simp]: "Cs \<cdot>cl id_subst = Cs"
unfolding subst_cls_list_def by simp
lemma subst_cls_lists_id_subst[simp]: "Cs \<cdot>\<cdot>cl replicate (length Cs) id_subst = Cs"
unfolding subst_cls_lists_def by (induct Cs) auto
lemma subst_cls_mset_id_subst[simp]: "CC \<cdot>cm id_subst = CC"
unfolding subst_cls_mset_def by simp
subsubsection \<open>Associativity of Composition\<close>
lemma comp_subst_assoc[simp]: "\<sigma> \<odot> (\<tau> \<odot> \<gamma>) = \<sigma> \<odot> \<tau> \<odot> \<gamma>"
by (rule subst_ext) simp
subsubsection \<open>Compatibility of Substitution and Composition\<close>
lemma subst_atms_comp_subst[simp]: "AA \<cdot>as (\<tau> \<odot> \<sigma>) = AA \<cdot>as \<tau> \<cdot>as \<sigma>"
unfolding subst_atms_def by auto
lemma subst_atmss_comp_subst[simp]: "AAA \<cdot>ass (\<tau> \<odot> \<sigma>) = AAA \<cdot>ass \<tau> \<cdot>ass \<sigma>"
unfolding subst_atmss_def by auto
lemma subst_atm_list_comp_subst[simp]: "As \<cdot>al (\<tau> \<odot> \<sigma>) = As \<cdot>al \<tau> \<cdot>al \<sigma>"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_comp_subst[simp]: "AA \<cdot>am (\<tau> \<odot> \<sigma>) = AA \<cdot>am \<tau> \<cdot>am \<sigma>"
unfolding subst_atm_mset_def by auto
lemma subst_atm_mset_list_comp_subst[simp]: "AAs \<cdot>aml (\<tau> \<odot> \<sigma>) = (AAs \<cdot>aml \<tau>) \<cdot>aml \<sigma>"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_comp_substs[simp]: "AAs \<cdot>\<cdot>aml (\<tau>s \<odot>s \<sigma>s) = AAs \<cdot>\<cdot>aml \<tau>s \<cdot>\<cdot>aml \<sigma>s"
unfolding subst_atm_mset_lists_def comp_substs_def map_zip_map map_zip_map2 map_zip_assoc
by (simp add: split_def)
lemma subst_lit_comp_subst[simp]: "L \<cdot>l (\<tau> \<odot> \<sigma>) = L \<cdot>l \<tau> \<cdot>l \<sigma>"
unfolding subst_lit_def by (auto simp: literal.map_comp o_def)
lemma subst_cls_comp_subst[simp]: "C \<cdot> (\<tau> \<odot> \<sigma>) = C \<cdot> \<tau> \<cdot> \<sigma>"
unfolding subst_cls_def by auto
lemma subst_clss_comp_subst[simp]: "CC \<cdot>cs (\<tau> \<odot> \<sigma>) = CC \<cdot>cs \<tau> \<cdot>cs \<sigma>"
unfolding subst_clss_def by auto
lemma subst_cls_list_comp_subst[simp]: "Cs \<cdot>cl (\<tau> \<odot> \<sigma>) = Cs \<cdot>cl \<tau> \<cdot>cl \<sigma>"
unfolding subst_cls_list_def by auto
lemma subst_cls_lists_comp_substs[simp]: "Cs \<cdot>\<cdot>cl (\<tau>s \<odot>s \<sigma>s) = Cs \<cdot>\<cdot>cl \<tau>s \<cdot>\<cdot>cl \<sigma>s"
unfolding subst_cls_lists_def comp_substs_def map_zip_map map_zip_map2 map_zip_assoc
by (simp add: split_def)
subsubsection \<open>``Commutativity'' of Membership and Substitution\<close>
lemma Melem_subst_atm_mset[simp]: "A \<in># AA \<cdot>am \<sigma> \<longleftrightarrow> (\<exists>B. B \<in># AA \<and> A = B \<cdot>a \<sigma>)"
unfolding subst_atm_mset_def by auto
lemma Melem_subst_cls[simp]: "L \<in># C \<cdot> \<sigma> \<longleftrightarrow> (\<exists>M. M \<in># C \<and> L = M \<cdot>l \<sigma>)"
unfolding subst_cls_def by auto
lemma Melem_subst_cls_mset[simp]: "AA \<in># CC \<cdot>cm \<sigma> \<longleftrightarrow> (\<exists>BB. BB \<in># CC \<and> AA = BB \<cdot> \<sigma>)"
unfolding subst_cls_mset_def by auto
subsubsection \<open>Signs and Substitutions\<close>
lemma subst_lit_is_neg[simp]: "is_neg (L \<cdot>l \<sigma>) = is_neg L"
unfolding subst_lit_def by auto
lemma subst_minus[simp]: "(- L) \<cdot>l \<mu> = - (L \<cdot>l \<mu>)"
by (simp add: literal.map_sel subst_lit_def uminus_literal_def)
subsubsection \<open>Substitution on Literal(s)\<close>
lemma eql_neg_lit_eql_atm[simp]: "(Neg A' \<cdot>l \<eta>) = Neg A \<longleftrightarrow> A' \<cdot>a \<eta> = A"
by (simp add: subst_lit_def)
lemma eql_pos_lit_eql_atm[simp]: "(Pos A' \<cdot>l \<eta>) = Pos A \<longleftrightarrow> A' \<cdot>a \<eta> = A"
by (simp add: subst_lit_def)
lemma subst_cls_negs[simp]: "(negs AA) \<cdot> \<sigma> = negs (AA \<cdot>am \<sigma>)"
unfolding subst_cls_def subst_lit_def subst_atm_mset_def by auto
lemma subst_cls_poss[simp]: "(poss AA) \<cdot> \<sigma> = poss (AA \<cdot>am \<sigma>)"
unfolding subst_cls_def subst_lit_def subst_atm_mset_def by auto
lemma atms_of_subst_atms: "atms_of C \<cdot>as \<sigma> = atms_of (C \<cdot> \<sigma>)"
proof -
have "atms_of (C \<cdot> \<sigma>) = set_mset (image_mset atm_of (image_mset (map_literal (\<lambda>A. A \<cdot>a \<sigma>)) C))"
unfolding subst_cls_def subst_atms_def subst_lit_def atms_of_def by auto
also have "... = set_mset (image_mset (\<lambda>A. A \<cdot>a \<sigma>) (image_mset atm_of C))"
by simp (meson literal.map_sel)
finally show "atms_of C \<cdot>as \<sigma> = atms_of (C \<cdot> \<sigma>)"
unfolding subst_atms_def atms_of_def by auto
qed
lemma in_image_Neg_is_neg[simp]: "L \<cdot>l \<sigma> \<in> Neg ` AA \<Longrightarrow> is_neg L"
by (metis bex_imageD literal.disc(2) literal.map_disc_iff subst_lit_def)
lemma subst_lit_in_negs_subst_is_neg: "L \<cdot>l \<sigma> \<in># (negs AA) \<cdot> \<tau> \<Longrightarrow> is_neg L"
by simp

subsubsection \<open>Substitution on Empty\<close>

lemma subst_atms_empty[simp]: "{} \<cdot>as \<sigma> = {}"
unfolding subst_atms_def by auto
lemma subst_atmss_empty[simp]: "{} \<cdot>ass \<sigma> = {}"
unfolding subst_atmss_def by auto
lemma comp_substs_empty_iff[simp]: "\<sigma>s \<odot>s \<eta>s = [] \<longleftrightarrow> \<sigma>s = [] \<or> \<eta>s = []"
using comp_substs_def map2_empty_iff by auto
lemma subst_atm_list_empty[simp]: "[] \<cdot>al \<sigma> = []"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_empty[simp]: "{#} \<cdot>am \<sigma> = {#}"
unfolding subst_atm_mset_def by auto
lemma subst_atm_mset_list_empty[simp]: "[] \<cdot>aml \<sigma> = []"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_empty[simp]: "[] \<cdot>\<cdot>aml \<sigma>s = []"
unfolding subst_atm_mset_lists_def by auto
lemma subst_cls_empty[simp]: "{#} \<cdot> \<sigma> = {#}"
unfolding subst_cls_def by auto
lemma subst_clss_empty[simp]: "{} \<cdot>cs \<sigma> = {}"
unfolding subst_clss_def by auto
lemma subst_cls_list_empty[simp]: "[] \<cdot>cl \<sigma> = []"
unfolding subst_cls_list_def by auto
lemma subst_cls_lists_empty[simp]: "[] \<cdot>\<cdot>cl \<sigma>s = []"
unfolding subst_cls_lists_def by auto
lemma subst_atms_empty_iff[simp]: "AA \<cdot>as \<eta> = {} \<longleftrightarrow> AA = {}"
unfolding subst_atms_def by auto
lemma subst_atmss_empty_iff[simp]: "AAA \<cdot>ass \<eta> = {} \<longleftrightarrow> AAA = {}"
unfolding subst_atmss_def by auto
lemma subst_atm_list_empty_iff[simp]: "As \<cdot>al \<eta> = [] \<longleftrightarrow> As = []"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_empty_iff[simp]: "AA \<cdot>am \<eta> = {#} \<longleftrightarrow> AA = {#}"
unfolding subst_atm_mset_def by auto
lemma subst_atm_mset_list_empty_iff[simp]: "AAs \<cdot>aml \<eta> = [] \<longleftrightarrow> AAs = []"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_empty_iff[simp]: "AAs \<cdot>\<cdot>aml \<eta>s = [] \<longleftrightarrow> (AAs = [] \<or> \<eta>s = [])"
using map2_empty_iff subst_atm_mset_lists_def by auto
lemma subst_cls_empty_iff[simp]: "C \<cdot> \<eta> = {#} \<longleftrightarrow> C = {#}"
unfolding subst_cls_def by auto
lemma subst_clss_empty_iff[simp]: "CC \<cdot>cs \<eta> = {} \<longleftrightarrow> CC = {}"
unfolding subst_clss_def by auto
lemma subst_cls_list_empty_iff[simp]: "Cs \<cdot>cl \<eta> = [] \<longleftrightarrow> Cs = []"
unfolding subst_cls_list_def by auto
lemma subst_cls_lists_empty_iff[simp]: "Cs \<cdot>\<cdot>cl \<eta>s = [] \<longleftrightarrow> Cs = [] \<or> \<eta>s = []"
using map2_empty_iff subst_cls_lists_def by auto
lemma subst_cls_mset_empty_iff[simp]: "CC \<cdot>cm \<eta> = {#} \<longleftrightarrow> CC = {#}"
unfolding subst_cls_mset_def by auto

subsubsection \<open>Substitution on a Union\<close>

lemma subst_atms_union[simp]: "(AA \<union> BB) \<cdot>as \<sigma> = AA \<cdot>as \<sigma> \<union> BB \<cdot>as \<sigma>"
unfolding subst_atms_def by auto
lemma subst_atmss_union[simp]: "(AAA \<union> BBB) \<cdot>ass \<sigma> = AAA \<cdot>ass \<sigma> \<union> BBB \<cdot>ass \<sigma>"
unfolding subst_atmss_def by auto
lemma subst_atm_list_append[simp]: "(As @ Bs) \<cdot>al \<sigma> = As \<cdot>al \<sigma> @ Bs \<cdot>al \<sigma>"
unfolding subst_atm_list_def by auto
lemma subst_cls_union[simp]: "(C + D) \<cdot> \<sigma> = C \<cdot> \<sigma> + D \<cdot> \<sigma>"
unfolding subst_cls_def by auto
lemma subst_clss_union[simp]: "(CC \<union> DD) \<cdot>cs \<sigma> = CC \<cdot>cs \<sigma> \<union> DD \<cdot>cs \<sigma>"
unfolding subst_clss_def by auto
lemma subst_cls_list_append[simp]: "(Cs @ Ds) \<cdot>cl \<sigma> = Cs \<cdot>cl \<sigma> @ Ds \<cdot>cl \<sigma>"
unfolding subst_cls_list_def by auto
lemma subst_cls_lists_append[simp]:
"length Cs = length \<sigma>s \<Longrightarrow> length Cs' = length \<sigma>s' \<Longrightarrow>
(Cs @ Cs') \<cdot>\<cdot>cl (\<sigma>s @ \<sigma>s') = Cs \<cdot>\<cdot>cl \<sigma>s @ Cs' \<cdot>\<cdot>cl \<sigma>s'"
unfolding subst_cls_lists_def by auto
lemma subst_cls_mset_union[simp]: "(CC + DD) \<cdot>cm \<sigma> = CC \<cdot>cm \<sigma> + DD \<cdot>cm \<sigma>"
unfolding subst_cls_mset_def by auto

subsubsection \<open>Substitution on a Singleton\<close>

lemma subst_atms_single[simp]: "{A} \<cdot>as \<sigma> = {A \<cdot>a \<sigma>}"
unfolding subst_atms_def by auto
lemma subst_atmss_single[simp]: "{AA} \<cdot>ass \<sigma> = {AA \<cdot>as \<sigma>}"
unfolding subst_atmss_def by auto
lemma subst_atm_list_single[simp]: "[A] \<cdot>al \<sigma> = [A \<cdot>a \<sigma>]"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_single[simp]: "{#A#} \<cdot>am \<sigma> = {#A \<cdot>a \<sigma>#}"
unfolding subst_atm_mset_def by auto
lemma subst_atm_mset_list[simp]: "[AA] \<cdot>aml \<sigma> = [AA \<cdot>am \<sigma>]"
unfolding subst_atm_mset_list_def by auto
lemma subst_cls_single[simp]: "{#L#} \<cdot> \<sigma> = {#L \<cdot>l \<sigma>#}"
by simp
lemma subst_clss_single[simp]: "{C} \<cdot>cs \<sigma> = {C \<cdot> \<sigma>}"
unfolding subst_clss_def by auto
lemma subst_cls_list_single[simp]: "[C] \<cdot>cl \<sigma> = [C \<cdot> \<sigma>]"
unfolding subst_cls_list_def by auto
lemma subst_cls_lists_single[simp]: "[C] \<cdot>\<cdot>cl [\<sigma>] = [C \<cdot> \<sigma>]"
unfolding subst_cls_lists_def by auto

subsubsection \<open>Substitution on @{term Cons}\<close>

lemma subst_atm_list_Cons[simp]: "(A # As) \<cdot>al \<sigma> = A \<cdot>a \<sigma> # As \<cdot>al \<sigma>"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_list_Cons[simp]: "(A # As) \<cdot>aml \<sigma> = A \<cdot>am \<sigma> # As \<cdot>aml \<sigma>"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_Cons[simp]: "(C # Cs) \<cdot>\<cdot>aml (\<sigma> # \<sigma>s) = C \<cdot>am \<sigma> # Cs \<cdot>\<cdot>aml \<sigma>s"
unfolding subst_atm_mset_lists_def by auto
lemma subst_cls_list_Cons[simp]: "(C # Cs) \<cdot>cl \<sigma> = C \<cdot> \<sigma> # Cs \<cdot>cl \<sigma>"
unfolding subst_cls_list_def by auto
lemma subst_cls_lists_Cons[simp]: "(C # Cs) \<cdot>\<cdot>cl (\<sigma> # \<sigma>s) = C \<cdot> \<sigma> # Cs \<cdot>\<cdot>cl \<sigma>s"
unfolding subst_cls_lists_def by auto

subsubsection \<open>Substitution on @{term tl}\<close>

lemma subst_atm_list_tl[simp]: "tl (As \<cdot>al \<sigma>) = tl As \<cdot>al \<sigma>"
by (cases As) auto
lemma subst_atm_mset_list_tl[simp]: "tl (AAs \<cdot>aml \<sigma>) = tl AAs \<cdot>aml \<sigma>"
by (cases AAs) auto
lemma subst_cls_list_tl[simp]: "tl (Cs \<cdot>cl \<sigma>) = tl Cs \<cdot>cl \<sigma>"
by (cases Cs) auto
lemma subst_cls_lists_tl[simp]: "length Cs = length \<sigma>s \<Longrightarrow> tl (Cs \<cdot>\<cdot>cl \<sigma>s) = tl Cs \<cdot>\<cdot>cl tl \<sigma>s"
by (cases Cs; cases \<sigma>s) auto

subsubsection \<open>Substitution on @{term nth}\<close>

lemma comp_substs_nth[simp]:
"length \<tau>s = length \<sigma>s \<Longrightarrow> i < length \<tau>s \<Longrightarrow> (\<tau>s \<odot>s \<sigma>s) ! i = (\<tau>s ! i) \<odot> (\<sigma>s ! i)"
by (simp add: comp_substs_def)
lemma subst_atm_list_nth[simp]: "i < length As \<Longrightarrow> (As \<cdot>al \<tau>) ! i = As ! i \<cdot>a \<tau>"
unfolding subst_atm_list_def using less_Suc_eq_0_disj nth_map by force
lemma subst_atm_mset_list_nth[simp]: "i < length AAs \<Longrightarrow> (AAs \<cdot>aml \<eta>) ! i = (AAs ! i) \<cdot>am \<eta>"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_nth[simp]:
"length AAs = length \<sigma>s \<Longrightarrow> i < length AAs \<Longrightarrow> (AAs \<cdot>\<cdot>aml \<sigma>s) ! i = (AAs ! i) \<cdot>am (\<sigma>s ! i)"
unfolding subst_atm_mset_lists_def by auto
lemma subst_cls_list_nth[simp]: "i < length Cs \<Longrightarrow> (Cs \<cdot>cl \<tau>) ! i = (Cs ! i) \<cdot> \<tau>"
unfolding subst_cls_list_def using less_Suc_eq_0_disj nth_map by (induction Cs) auto
lemma subst_cls_lists_nth[simp]:
"length Cs = length \<sigma>s \<Longrightarrow> i < length Cs \<Longrightarrow> (Cs \<cdot>\<cdot>cl \<sigma>s) ! i = (Cs ! i) \<cdot> (\<sigma>s ! i)"
unfolding subst_cls_lists_def by auto

subsubsection \<open>Substitution on Various Other Functions\<close>

lemma subst_clss_image[simp]: "image f X \<cdot>cs \<sigma> = {f x \<cdot> \<sigma> | x. x \<in> X}"
unfolding subst_clss_def by auto
lemma subst_cls_mset_image_mset[simp]: "image_mset f X \<cdot>cm \<sigma> = {# f x \<cdot> \<sigma>. x \<in># X #}"
unfolding subst_cls_mset_def by auto
lemma mset_subst_atm_list_subst_atm_mset[simp]: "mset (As \<cdot>al \<sigma>) = mset (As) \<cdot>am \<sigma>"
unfolding subst_atm_list_def subst_atm_mset_def by auto
lemma mset_subst_cls_list_subst_cls_mset: "mset (Cs \<cdot>cl \<sigma>) = (mset Cs) \<cdot>cm \<sigma>"
unfolding subst_cls_mset_def subst_cls_list_def by auto
lemma sum_list_subst_cls_list_subst_cls[simp]: "sum_list (Cs \<cdot>cl \<eta>) = sum_list Cs \<cdot> \<eta>"
unfolding subst_cls_list_def by (induction Cs) auto
lemma set_mset_subst_cls_mset_subst_clss: "set_mset (CC \<cdot>cm \<mu>) = (set_mset CC) \<cdot>cs \<mu>"
by (simp add: subst_cls_mset_def subst_clss_def)
lemma Neg_Melem_subst_atm_subst_cls[simp]: "Neg A \<in># C \<Longrightarrow> Neg (A \<cdot>a \<sigma>) \<in># C \<cdot> \<sigma>"
by (metis Melem_subst_cls eql_neg_lit_eql_atm)
lemma Pos_Melem_subst_atm_subst_cls[simp]: "Pos A \<in># C \<Longrightarrow> Pos (A \<cdot>a \<sigma>) \<in># C \<cdot> \<sigma>"
by (metis Melem_subst_cls eql_pos_lit_eql_atm)
lemma in_atms_of_subst[simp]: "B \<in> atms_of C \<Longrightarrow> B \<cdot>a \<sigma> \<in> atms_of (C \<cdot> \<sigma>)"
by (metis atms_of_subst_atms image_iff subst_atms_def)

subsubsection \<open>Renamings\<close>

lemma is_renaming_id_subst[simp]: "is_renaming id_subst"
unfolding is_renaming_def by simp
lemma is_renamingD: "is_renaming \<sigma> \<Longrightarrow> (\<forall>A1 A2. A1 \<cdot>a \<sigma> = A2 \<cdot>a \<sigma> \<longleftrightarrow> A1 = A2)"
by (metis is_renaming_def subst_atm_comp_subst subst_atm_id_subst)
lemma inv_renaming_cancel_r[simp]: "is_renaming r \<Longrightarrow> r \<odot> inv_renaming r = id_subst"
unfolding inv_renaming_def is_renaming_def by (metis (mono_tags) someI_ex)
lemma inv_renaming_cancel_r_list[simp]:
"is_renaming_list rs \<Longrightarrow> rs \<odot>s map inv_renaming rs = replicate (length rs) id_subst"
unfolding is_renaming_list_def by (induction rs) (auto simp add: comp_substs_def)
lemma Nil_comp_substs[simp]: "[] \<odot>s s = []"
unfolding comp_substs_def by auto
lemma comp_substs_Nil[simp]: "s \<odot>s [] = []"
unfolding comp_substs_def by auto
lemma is_renaming_idempotent_id_subst: "is_renaming r \<Longrightarrow> r \<odot> r = r \<Longrightarrow> r = id_subst"
by (metis comp_subst_assoc comp_subst_id_subst inv_renaming_cancel_r)
lemma is_renaming_left_id_subst_right_id_subst:
"is_renaming r \<Longrightarrow> s \<odot> r = id_subst \<Longrightarrow> r \<odot> s = id_subst"
by (metis comp_subst_assoc comp_subst_id_subst is_renaming_def)
lemma is_renaming_closure: "is_renaming r1 \<Longrightarrow> is_renaming r2 \<Longrightarrow> is_renaming (r1 \<odot> r2)"
unfolding is_renaming_def by (metis comp_subst_assoc comp_subst_id_subst)
lemma is_renaming_inv_renaming_cancel_atm[simp]: "is_renaming \<rho> \<Longrightarrow> A \<cdot>a \<rho> \<cdot>a inv_renaming \<rho> = A"
by (metis inv_renaming_cancel_r subst_atm_comp_subst subst_atm_id_subst)
lemma is_renaming_inv_renaming_cancel_atms[simp]: "is_renaming \<rho> \<Longrightarrow> AA \<cdot>as \<rho> \<cdot>as inv_renaming \<rho> = AA"
by (metis inv_renaming_cancel_r subst_atms_comp_subst subst_atms_id_subst)
lemma is_renaming_inv_renaming_cancel_atmss[simp]: "is_renaming \<rho> \<Longrightarrow> AAA \<cdot>ass \<rho> \<cdot>ass inv_renaming \<rho> = AAA"
by (metis inv_renaming_cancel_r subst_atmss_comp_subst subst_atmss_id_subst)
lemma is_renaming_inv_renaming_cancel_atm_list[simp]: "is_renaming \<rho> \<Longrightarrow> As \<cdot>al \<rho> \<cdot>al inv_renaming \<rho> = As"
by (metis inv_renaming_cancel_r subst_atm_list_comp_subst subst_atm_list_id_subst)
lemma is_renaming_inv_renaming_cancel_atm_mset[simp]: "is_renaming \<rho> \<Longrightarrow> AA \<cdot>am \<rho> \<cdot>am inv_renaming \<rho> = AA"
by (metis inv_renaming_cancel_r subst_atm_mset_comp_subst subst_atm_mset_id_subst)
lemma is_renaming_inv_renaming_cancel_atm_mset_list[simp]: "is_renaming \<rho> \<Longrightarrow> (AAs \<cdot>aml \<rho>) \<cdot>aml inv_renaming \<rho> = AAs"
by (metis inv_renaming_cancel_r subst_atm_mset_list_comp_subst subst_atm_mset_list_id_subst)
lemma is_renaming_list_inv_renaming_cancel_atm_mset_lists[simp]:
"length AAs = length \<rho>s \<Longrightarrow> is_renaming_list \<rho>s \<Longrightarrow> AAs \<cdot>\<cdot>aml \<rho>s \<cdot>\<cdot>aml map inv_renaming \<rho>s = AAs"
by (metis inv_renaming_cancel_r_list subst_atm_mset_lists_comp_substs
subst_atm_mset_lists_id_subst)
lemma is_renaming_inv_renaming_cancel_lit[simp]: "is_renaming \<rho> \<Longrightarrow> (L \<cdot>l \<rho>) \<cdot>l inv_renaming \<rho> = L"
by (metis inv_renaming_cancel_r subst_lit_comp_subst subst_lit_id_subst)
lemma is_renaming_inv_renaming_cancel_clss[simp]:
"is_renaming \<rho> \<Longrightarrow> CC \<cdot>cs \<rho> \<cdot>cs inv_renaming \<rho> = CC"
by (metis inv_renaming_cancel_r subst_clss_id_subst subst_clsscomp_subst)
lemma is_renaming_inv_renaming_cancel_cls_list[simp]:
"is_renaming \<rho> \<Longrightarrow> Cs \<cdot>cl \<rho> \<cdot>cl inv_renaming \<rho> = Cs"
by (metis inv_renaming_cancel_r subst_cls_list_comp_subst subst_cls_list_id_subst)
lemma is_renaming_list_inv_renaming_cancel_cls_list[simp]:
"length Cs = length \<rho>s \<Longrightarrow> is_renaming_list \<rho>s \<Longrightarrow> Cs \<cdot>\<cdot>cl \<rho>s \<cdot>\<cdot>cl map inv_renaming \<rho>s = Cs"
by (metis inv_renaming_cancel_r_list subst_cls_lists_comp_substs subst_cls_lists_id_subst)
lemma is_renaming_inv_renaming_cancel_cls_mset[simp]:
"is_renaming \<rho> \<Longrightarrow> CC \<cdot>cm \<rho> \<cdot>cm inv_renaming \<rho> = CC"
by (metis inv_renaming_cancel_r subst_cls_mset_comp_subst subst_cls_mset_id_subst)

subsubsection \<open>Monotonicity\<close>

lemma subst_cls_mono: "set_mset C \<subseteq> set_mset D \<Longrightarrow> set_mset (C \<cdot> \<sigma>) \<subseteq> set_mset (D \<cdot> \<sigma>)"
by force
lemma subst_subset_mono: "D \<subset># C \<Longrightarrow> D \<cdot> \<sigma> \<subset># C \<cdot> \<sigma>"
unfolding subst_cls_def by (simp add: image_mset_subset_mono)

subsubsection \<open>Size after Substitution\<close>

lemma size_subst[simp]: "size (D \<cdot> \<sigma>) = size D"
unfolding subst_cls_def by auto
lemma subst_atm_list_length[simp]: "length (As \<cdot>al \<sigma>) = length As"
unfolding subst_atm_list_def by auto
lemma length_subst_atm_mset_list[simp]: "length (AAs \<cdot>aml \<eta>) = length AAs"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_length[simp]: "length (AAs \<cdot>\<cdot>aml \<sigma>s) = min (length AAs) (length \<sigma>s)"
unfolding subst_atm_mset_lists_def by auto
lemma subst_cls_list_length[simp]: "length (Cs \<cdot>cl \<sigma>) = length Cs"
unfolding subst_cls_list_def by auto
lemma comp_substs_length[simp]: "length (\<tau>s \<odot>s \<sigma>s) = min (length \<tau>s) (length \<sigma>s)"
unfolding comp_substs_def by auto
lemma subst_cls_lists_length[simp]: "length (Cs \<cdot>\<cdot>cl \<sigma>s) = min (length Cs) (length \<sigma>s)"
unfolding subst_cls_lists_def by auto

subsubsection \<open>Variable Disjointness\<close>

lemma var_disjoint_clauses:
assumes "var_disjoint Cs"
shows "\<forall>\<sigma>s. length \<sigma>s = length Cs \<longrightarrow> (\<exists>\<tau>. Cs \<cdot>\<cdot>cl \<sigma>s = Cs \<cdot>cl \<tau>)"
proof clarify
fix \<sigma>s :: "'s list"
assume a: "length \<sigma>s = length Cs"
then obtain \<tau> where "\<forall>i < length Cs. \<forall>S. S \<subseteq># Cs ! i \<longrightarrow> S \<cdot> \<sigma>s ! i = S \<cdot> \<tau>"
using assms unfolding var_disjoint_def by blast
then have "\<forall>i < length Cs. (Cs ! i) \<cdot> \<sigma>s ! i = (Cs ! i) \<cdot> \<tau>"
by auto
then have "Cs \<cdot>\<cdot>cl \<sigma>s = Cs \<cdot>cl \<tau>"
using a by (auto intro: nth_equalityI)
then show "\<exists>\<tau>. Cs \<cdot>\<cdot>cl \<sigma>s = Cs \<cdot>cl \<tau>"
by auto
qed

subsubsection \<open>Ground Expressions and Substitutions\<close>

lemma ex_ground_subst: "\<exists>\<sigma>. is_ground_subst \<sigma>"
using make_ground_subst[of "{#}"]
by (simp add: is_ground_cls_def)
lemma is_ground_cls_list_Cons[simp]:
"is_ground_cls_list (C # Cs) = (is_ground_cls C \<and> is_ground_cls_list Cs)"
unfolding is_ground_cls_list_def by auto

paragraph \<open>Ground union\<close>

lemma is_ground_atms_union[simp]: "is_ground_atms (AA \<union> BB) \<longleftrightarrow> is_ground_atms AA \<and> is_ground_atms BB"
unfolding is_ground_atms_def by auto
lemma is_ground_atm_mset_union[simp]:
"is_ground_atm_mset (AA + BB) \<longleftrightarrow> is_ground_atm_mset AA \<and> is_ground_atm_mset BB"
unfolding is_ground_atm_mset_def by auto
lemma is_ground_cls_union[simp]: "is_ground_cls (C + D) \<longleftrightarrow> is_ground_cls C \<and> is_ground_cls D"
unfolding is_ground_cls_def by auto
lemma is_ground_clss_union[simp]:
"is_ground_clss (CC \<union> DD) \<longleftrightarrow> is_ground_clss CC \<and> is_ground_clss DD"
unfolding is_ground_clss_def by auto
lemma is_ground_cls_list_is_ground_cls_sum_list[simp]:
"is_ground_cls_list Cs \<Longrightarrow> is_ground_cls (sum_list Cs)"
by (meson in_mset_sum_list2 is_ground_cls_def is_ground_cls_list_def)

paragraph \<open>Grounding simplifications\<close>

lemma grounding_of_clss_empty[simp]: "grounding_of_clss {} = {}"
by (simp add: grounding_of_clss_def)
lemma grounding_of_clss_singleton[simp]: "grounding_of_clss {C} = grounding_of_cls C"
by (simp add: grounding_of_clss_def)
lemma grounding_of_clss_insert:
"grounding_of_clss (insert C N) = grounding_of_cls C \<union> grounding_of_clss N"
by (simp add: grounding_of_clss_def)
lemma grounding_of_clss_union:
"grounding_of_clss (A \<union> B) = grounding_of_clss A \<union> grounding_of_clss B"
by (simp add: grounding_of_clss_def)

paragraph \<open>Grounding monotonicity\<close>

lemma is_ground_cls_mono: "C \<subseteq># D \<Longrightarrow> is_ground_cls D \<Longrightarrow> is_ground_cls C"
unfolding is_ground_cls_def by (metis set_mset_mono subsetD)
lemma is_ground_clss_mono: "CC \<subseteq> DD \<Longrightarrow> is_ground_clss DD \<Longrightarrow> is_ground_clss CC"
unfolding is_ground_clss_def by blast
lemma grounding_of_clss_mono: "CC \<subseteq> DD \<Longrightarrow> grounding_of_clss CC \<subseteq> grounding_of_clss DD"
using grounding_of_clss_def by auto
lemma sum_list_subseteq_mset_is_ground_cls_list[simp]:
"sum_list Cs \<subseteq># sum_list Ds \<Longrightarrow> is_ground_cls_list Ds \<Longrightarrow> is_ground_cls_list Cs"
by (meson in_mset_sum_list is_ground_cls_def is_ground_cls_list_is_ground_cls_sum_list
is_ground_cls_mono is_ground_cls_list_def)

paragraph \<open>Substituting on ground expression preserves ground\<close>

lemma is_ground_comp_subst[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_subst (\<tau> \<odot> \<sigma>)"
unfolding is_ground_subst_def is_ground_atm_def by auto
lemma ground_subst_ground_atm[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_atm (A \<cdot>a \<sigma>)"
by (simp add: is_ground_subst_def)
lemma ground_subst_ground_lit[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_lit (L \<cdot>l \<sigma>)"
unfolding is_ground_lit_def subst_lit_def by (cases L) auto
lemma ground_subst_ground_clss[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_clss (CC \<cdot>cs \<sigma>)"
unfolding is_ground_clss_def subst_clss_def by auto
lemma ground_subst_ground_cls_list[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_cls_list (Cs \<cdot>cl \<sigma>)"
unfolding is_ground_cls_list_def subst_cls_list_def by auto
lemma ground_subst_ground_cls_lists[simp]:
"\<forall>\<sigma> \<in> set \<sigma>s. is_ground_subst \<sigma> \<Longrightarrow> is_ground_cls_list (Cs \<cdot>\<cdot>cl \<sigma>s)"
unfolding is_ground_cls_list_def subst_cls_lists_def by (auto simp: set_zip)
lemma subst_cls_eq_grounding_of_cls_subset_eq:
assumes "D \<cdot> \<sigma> = C"
shows "grounding_of_cls C \<subseteq> grounding_of_cls D"
proof
fix C\<sigma>'
assume "C\<sigma>' \<in> grounding_of_cls C"
then obtain \<sigma>' where
C\<sigma>': "C \<cdot> \<sigma>' = C\<sigma>'" "is_ground_subst \<sigma>'"
unfolding grounding_of_cls_def by auto
then have "C \<cdot> \<sigma>' = D \<cdot> \<sigma> \<cdot> \<sigma>' \<and> is_ground_subst (\<sigma> \<odot> \<sigma>')"
using assms by auto
then show "C\<sigma>' \<in> grounding_of_cls D"
unfolding grounding_of_cls_def using C\<sigma>'(1) by force
qed

paragraph \<open>Substituting on ground expression has no effect\<close>

lemma is_ground_subst_atm[simp]: "is_ground_atm A \<Longrightarrow> A \<cdot>a \<sigma> = A"
unfolding is_ground_atm_def by simp
lemma is_ground_subst_atms[simp]: "is_ground_atms AA \<Longrightarrow> AA \<cdot>as \<sigma> = AA"
unfolding is_ground_atms_def subst_atms_def image_def by auto
lemma is_ground_subst_atm_mset[simp]: "is_ground_atm_mset AA \<Longrightarrow> AA \<cdot>am \<sigma> = AA"
unfolding is_ground_atm_mset_def subst_atm_mset_def by auto
lemma is_ground_subst_atm_list[simp]: "is_ground_atm_list As \<Longrightarrow> As \<cdot>al \<sigma> = As"
unfolding is_ground_atm_list_def subst_atm_list_def by (auto intro: nth_equalityI)
lemma is_ground_subst_atm_list_member[simp]:
"is_ground_atm_list As \<Longrightarrow> i < length As \<Longrightarrow> As ! i \<cdot>a \<sigma> = As ! i"
unfolding is_ground_atm_list_def by auto
lemma is_ground_subst_lit[simp]: "is_ground_lit L \<Longrightarrow> L \<cdot>l \<sigma> = L"
unfolding is_ground_lit_def subst_lit_def by (cases L) simp_all
lemma is_ground_subst_cls[simp]: "is_ground_cls C \<Longrightarrow> C \<cdot> \<sigma> = C"
unfolding is_ground_cls_def subst_cls_def by simp
lemma is_ground_subst_clss[simp]: "is_ground_clss CC \<Longrightarrow> CC \<cdot>cs \<sigma> = CC"
unfolding is_ground_clss_def subst_clss_def image_def by auto
lemma is_ground_subst_cls_lists[simp]:
assumes "length P = length Cs" and "is_ground_cls_list Cs"
shows "Cs \<cdot>\<cdot>cl P = Cs"
using assms by (metis is_ground_cls_list_def is_ground_subst_cls min.idem nth_equalityI nth_mem
subst_cls_lists_nth subst_cls_lists_length)
lemma is_ground_subst_lit_iff: "is_ground_lit L \<longleftrightarrow> (\<forall>\<sigma>. L = L \<cdot>l \<sigma>)"
using is_ground_atm_def is_ground_lit_def subst_lit_def by (cases L) auto
lemma is_ground_subst_cls_iff: "is_ground_cls C \<longleftrightarrow> (\<forall>\<sigma>. C = C \<cdot> \<sigma>)"
by (metis ex_ground_subst ground_subst_ground_cls is_ground_subst_cls)

paragraph \<open>Grounding of substitutions\<close>

lemma grounding_of_subst_cls_subset: "grounding_of_cls (C \<cdot> \<mu>) \<subseteq> grounding_of_cls C"
proof (rule subsetI)
fix D
assume "D \<in> grounding_of_cls (C \<cdot> \<mu>)"
then obtain \<gamma> where D_def: "D = C \<cdot> \<mu> \<cdot> \<gamma>" and gr_\<gamma>: "is_ground_subst \<gamma>"
unfolding grounding_of_cls_def mem_Collect_eq by auto
show "D \<in> grounding_of_cls C"
unfolding grounding_of_cls_def mem_Collect_eq D_def
using is_ground_comp_subst[OF gr_\<gamma>, of \<mu>]
by force
qed
lemma grounding_of_subst_clss_subset: "grounding_of_clss (CC \<cdot>cs \<mu>) \<subseteq> grounding_of_clss CC"
using grounding_of_subst_cls_subset
by (auto simp: grounding_of_clss_def subst_clss_def)
lemma grounding_of_subst_cls_renaming_ident[simp]:
assumes "is_renaming \<rho>"
shows "grounding_of_cls (C \<cdot> \<rho>) = grounding_of_cls C"
by (metis (no_types, lifting) assms subset_antisym subst_cls_comp_subst
subst_cls_eq_grounding_of_cls_subset_eq subst_cls_id_subst is_renaming_def)
lemma grounding_of_subst_clss_renaming_ident[simp]:
assumes "is_renaming \<rho>"
shows "grounding_of_clss (CC \<cdot>cs \<rho>) = grounding_of_clss CC"
by (metis assms dual_order.eq_iff grounding_of_subst_clss_subset
is_renaming_inv_renaming_cancel_clss)

paragraph \<open>Members of ground expressions are ground\<close>

lemma is_ground_cls_as_atms: "is_ground_cls C \<longleftrightarrow> (\<forall>A \<in> atms_of C. is_ground_atm A)"
by (auto simp: atms_of_def is_ground_cls_def is_ground_lit_def)
lemma is_ground_cls_imp_is_ground_lit: "L \<in># C \<Longrightarrow> is_ground_cls C \<Longrightarrow> is_ground_lit L"
by (simp add: is_ground_cls_def)
lemma is_ground_cls_imp_is_ground_atm: "A \<in> atms_of C \<Longrightarrow> is_ground_cls C \<Longrightarrow> is_ground_atm A"
by (simp add: is_ground_cls_as_atms)
lemma is_ground_cls_is_ground_atms_atms_of[simp]: "is_ground_cls C \<Longrightarrow> is_ground_atms (atms_of C)"
by (simp add: is_ground_cls_imp_is_ground_atm is_ground_atms_def)
lemma grounding_ground: "C \<in> grounding_of_clss M \<Longrightarrow> is_ground_cls C"
unfolding grounding_of_clss_def grounding_of_cls_def by auto
lemma is_ground_cls_if_in_grounding_of_cls: "C' \<in> grounding_of_cls C \<Longrightarrow> is_ground_cls C'"
using grounding_ground grounding_of_clss_singleton by blast
lemma in_subset_eq_grounding_of_clss_is_ground_cls[simp]:
"C \<in> CC \<Longrightarrow> CC \<subseteq> grounding_of_clss DD \<Longrightarrow> is_ground_cls C"
unfolding grounding_of_clss_def grounding_of_cls_def by auto
lemma is_ground_cls_empty[simp]: "is_ground_cls {#}"
unfolding is_ground_cls_def by simp
lemma is_ground_cls_add_mset[simp]:
"is_ground_cls (add_mset L C) \<longleftrightarrow> is_ground_lit L \<and> is_ground_cls C"
by (auto simp: is_ground_cls_def)
lemma grounding_of_cls_ground: "is_ground_cls C \<Longrightarrow> grounding_of_cls C = {C}"
unfolding grounding_of_cls_def by (simp add: ex_ground_subst)
lemma grounding_of_cls_empty[simp]: "grounding_of_cls {#} = {{#}}"
by (simp add: grounding_of_cls_ground)
lemma union_grounding_of_cls_ground: "is_ground_clss (\<Union> (grounding_of_cls ` N))"
by (simp add: grounding_ground grounding_of_clss_def is_ground_clss_def)
lemma is_ground_clss_grounding_of_clss[simp]: "is_ground_clss (grounding_of_clss N)"
using grounding_of_clss_def union_grounding_of_cls_ground by metis

paragraph \<open>Grounding idempotence\<close>

lemma grounding_of_grounding_of_cls: "E \<in> grounding_of_cls D \<Longrightarrow> D \<in> grounding_of_cls C \<Longrightarrow> E = D"
using grounding_of_cls_def by auto
lemma image_grounding_of_cls_grounding_of_cls:
"grounding_of_cls ` grounding_of_cls C = (\<lambda>x. {x}) ` grounding_of_cls C"
proof (rule image_cong)
show "\<And>x. x \<in> grounding_of_cls C \<Longrightarrow> grounding_of_cls x = {x}"
using grounding_of_cls_ground is_ground_cls_if_in_grounding_of_cls by blast
qed simp
lemma grounding_of_clss_grounding_of_clss[simp]:
"grounding_of_clss (grounding_of_clss N) = grounding_of_clss N"
unfolding grounding_of_clss_def UN_UN_flatten
unfolding image_grounding_of_cls_grounding_of_cls
by simp

subsubsection \<open>Subsumption\<close>

lemma strictly_subsumes_empty_left[simp]: "strictly_subsumes {#} C \<longleftrightarrow> C \<noteq> {#}"
unfolding strictly_subsumes_def subsumes_def subst_cls_def by simp

subsubsection \<open>Unifiers\<close>

lemma card_le_one_alt: "finite X \<Longrightarrow> card X \<le> 1 \<longleftrightarrow> X = {} \<or> (\<exists>x. X = {x})"
by (induct rule: finite_induct) auto
lemma is_unifier_subst_atm_eqI:
assumes "finite AA"
shows "is_unifier \<sigma> AA \<Longrightarrow> A \<in> AA \<Longrightarrow> B \<in> AA \<Longrightarrow> A \<cdot>a \<sigma> = B \<cdot>a \<sigma>"
unfolding is_unifier_def subst_atms_def card_le_one_alt[OF finite_imageI[OF assms]]
by (metis equals0D imageI insert_iff)
lemma is_unifier_alt:
assumes "finite AA"
shows "is_unifier \<sigma> AA \<longleftrightarrow> (\<forall>A \<in> AA. \<forall>B \<in> AA. A \<cdot>a \<sigma> = B \<cdot>a \<sigma>)"
unfolding is_unifier_def subst_atms_def card_le_one_alt[OF finite_imageI[OF assms(1)]]
by (rule iffI, metis empty_iff insert_iff insert_image, blast)
lemma is_unifiers_subst_atm_eqI:
assumes "finite AA" "is_unifiers \<sigma> AAA" "AA \<in> AAA" "A \<in> AA" "B \<in> AA"
shows "A \<cdot>a \<sigma> = B \<cdot>a \<sigma>"
by (metis assms is_unifiers_def is_unifier_subst_atm_eqI)
theorem is_unifiers_comp:
"is_unifiers \<sigma> (set_mset ` set (map2 add_mset As Bs) \<cdot>ass \<eta>) \<longleftrightarrow>
is_unifiers (\<eta> \<odot> \<sigma>) (set_mset ` set (map2 add_mset As Bs))"
unfolding is_unifiers_def is_unifier_def subst_atmss_def by auto

subsubsection \<open>Most General Unifier\<close>

lemma is_mgu_is_unifiers: "is_mgu \<sigma> AAA \<Longrightarrow> is_unifiers \<sigma> AAA"
using is_mgu_def by blast
lemma is_mgu_is_most_general: "is_mgu \<sigma> AAA \<Longrightarrow> is_unifiers \<tau> AAA \<Longrightarrow> \<exists>\<gamma>. \<tau> = \<sigma> \<odot> \<gamma>"
using is_mgu_def by blast
lemma is_unifiers_is_unifier: "is_unifiers \<sigma> AAA \<Longrightarrow> AA \<in> AAA \<Longrightarrow> is_unifier \<sigma> AA"
using is_unifiers_def by simp
lemma is_imgu_is_mgu[intro]: "is_imgu \<sigma> AAA \<Longrightarrow> is_mgu \<sigma> AAA"
by (auto simp: is_imgu_def is_mgu_def)
lemma is_imgu_comp_idempotent[simp]: "is_imgu \<sigma> AAA \<Longrightarrow> \<sigma> \<odot> \<sigma> = \<sigma>"
by (simp add: is_imgu_def)
lemma is_imgu_subst_atm_idempotent[simp]: "is_imgu \<sigma> AAA \<Longrightarrow> A \<cdot>a \<sigma> \<cdot>a \<sigma> = A \<cdot>a \<sigma>"
using is_imgu_comp_idempotent[of \<sigma>] subst_atm_comp_subst[of A \<sigma> \<sigma>] by simp
lemma is_imgu_subst_atms_idempotent[simp]: "is_imgu \<sigma> AAA \<Longrightarrow> AA \<cdot>as \<sigma> \<cdot>as \<sigma> = AA \<cdot>as \<sigma>"
using is_imgu_comp_idempotent[of \<sigma>] subst_atms_comp_subst[of AA \<sigma> \<sigma>] by simp
lemma is_imgu_subst_lit_idemptotent[simp]: "is_imgu \<sigma> AAA \<Longrightarrow> L \<cdot>l \<sigma> \<cdot>l \<sigma> = L \<cdot>l \<sigma>"
using is_imgu_comp_idempotent[of \<sigma>] subst_lit_comp_subst[of L \<sigma> \<sigma>] by simp
lemma is_imgu_subst_cls_idemptotent[simp]: "is_imgu \<sigma> AAA \<Longrightarrow> C \<cdot> \<sigma> \<cdot> \<sigma> = C \<cdot> \<sigma>"
using is_imgu_comp_idempotent[of \<sigma>] subst_cls_comp_subst[of C \<sigma> \<sigma>] by simp
lemma is_imgu_subst_clss_idemptotent[simp]: "is_imgu \<sigma> AAA \<Longrightarrow> CC \<cdot>cs \<sigma> \<cdot>cs \<sigma> = CC \<cdot>cs \<sigma>"
using is_imgu_comp_idempotent[of \<sigma>] subst_clsscomp_subst[of CC \<sigma> \<sigma>] by simp

subsubsection \<open>Generalization and Subsumption\<close>

lemma variants_sym: "variants D D' \<longleftrightarrow> variants D' D"
unfolding variants_def by auto
lemma variants_iff_subsumes: "variants C D \<longleftrightarrow> subsumes C D \<and> subsumes D C"
proof
assume "variants C D"
then show "subsumes C D \<and> subsumes D C"
unfolding variants_def generalizes_def subsumes_def
by (metis subset_mset.refl)
next
assume sub: "subsumes C D \<and> subsumes D C"
then have "size C = size D"
unfolding subsumes_def by (metis antisym size_mset_mono size_subst)
then show "variants C D"
using sub unfolding subsumes_def variants_def generalizes_def
by (metis leD mset_subset_size size_mset_mono size_subst
subset_mset.not_eq_order_implies_strict)
qed
lemma generalizes_lit_refl[simp]: "generalizes_lit L L"
unfolding generalizes_lit_def by (rule exI[of _ id_subst]) simp
lemma generalizes_lit_trans:
"generalizes_lit L1 L2 \<Longrightarrow> generalizes_lit L2 L3 \<Longrightarrow> generalizes_lit L1 L3"
unfolding generalizes_lit_def using subst_lit_comp_subst by blast
lemma generalizes_refl[simp]: "generalizes C C"
unfolding generalizes_def by (rule exI[of _ id_subst]) simp
lemma generalizes_trans: "generalizes C D \<Longrightarrow> generalizes D E \<Longrightarrow> generalizes C E"
unfolding generalizes_def using subst_cls_comp_subst by blast
lemma subsumes_refl: "subsumes C C"
unfolding subsumes_def by (rule exI[of _ id_subst]) auto
lemma strictly_generalizes_irrefl: "\<not> strictly_generalizes C C"
unfolding strictly_generalizes_def by blast
lemma strictly_generalizes_antisym: "strictly_generalizes C D \<Longrightarrow> \<not> strictly_generalizes D C"
unfolding strictly_generalizes_def by blast
lemma strictly_generalizes_trans:
"strictly_generalizes C D \<Longrightarrow> strictly_generalizes D E \<Longrightarrow> strictly_generalizes C E"
unfolding strictly_generalizes_def using generalizes_trans by blast
lemma strictly_subsumes_irrefl: "\<not> strictly_subsumes C C"
unfolding strictly_subsumes_def by blast
lemma strictly_subsumes_antisym: "strictly_subsumes C D \<Longrightarrow> \<not> strictly_subsumes D C"
unfolding strictly_subsumes_def by blast
lemma strictly_subsumes_trans:
"strictly_subsumes C D \<Longrightarrow> strictly_subsumes D E \<Longrightarrow> strictly_subsumes C E"
unfolding strictly_subsumes_def using subsumes_trans by blast
lemma subset_strictly_subsumes: "C \<subset># D \<Longrightarrow> strictly_subsumes C D"
using strict_subset_subst_strictly_subsumes[of C id_subst] by auto
lemma strictly_generalizes_neq: "strictly_generalizes D' D \<Longrightarrow> D' \<noteq> D \<cdot> \<sigma>"
unfolding strictly_generalizes_def generalizes_def by blast
lemma strictly_subsumes_neq: "strictly_subsumes D' D \<Longrightarrow> D' \<noteq> D \<cdot> \<sigma>"
unfolding strictly_subsumes_def subsumes_def by blast
lemma variants_imp_exists_substitution: "variants D D' \<Longrightarrow> \<exists>\<sigma>. D \<cdot> \<sigma> = D'"
unfolding variants_iff_subsumes subsumes_def
by (meson strictly_subsumes_def subset_mset_def strict_subset_subst_strictly_subsumes subsumes_def)
lemma strictly_subsumes_variants:
assumes "strictly_subsumes E D" and "variants D D'"
shows "strictly_subsumes E D'"
proof -
from assms obtain \<sigma> \<sigma>' where
\<sigma>_\<sigma>'_p: "D \<cdot> \<sigma> = D' \<and> D' \<cdot> \<sigma>' = D"
using variants_imp_exists_substitution variants_sym by metis
from assms obtain \<sigma>'' where
"E \<cdot> \<sigma>'' \<subseteq># D"
unfolding strictly_subsumes_def subsumes_def by auto
then have "E \<cdot> \<sigma>'' \<cdot> \<sigma> \<subseteq># D \<cdot> \<sigma>"
using subst_cls_mono_mset by blast
then have "E \<cdot> (\<sigma>'' \<odot> \<sigma>) \<subseteq># D'"
using \<sigma>_\<sigma>'_p by auto
moreover from assms have n: "\<nexists>\<sigma>. D \<cdot> \<sigma> \<subseteq># E"
unfolding strictly_subsumes_def subsumes_def by auto
have "\<nexists>\<sigma>. D' \<cdot> \<sigma> \<subseteq># E"
proof
assume "\<exists>\<sigma>'''. D' \<cdot> \<sigma>''' \<subseteq># E"
then obtain \<sigma>''' where
"D' \<cdot> \<sigma>''' \<subseteq># E"
by auto
then have "D \<cdot> (\<sigma> \<odot> \<sigma>''') \<subseteq># E"
using \<sigma>_\<sigma>'_p by auto
then show False
using n by metis
qed
ultimately show ?thesis
unfolding strictly_subsumes_def subsumes_def by metis
qed
lemma neg_strictly_subsumes_variants:
assumes "\<not> strictly_subsumes E D" and "variants D D'"
shows "\<not> strictly_subsumes E D'"
using assms strictly_subsumes_variants variants_sym by auto
end
locale substitution_renamings = substitution subst_atm id_subst comp_subst
for
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: 's and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" +
fixes
renamings_apart :: "'a clause list \<Rightarrow> 's list" and
atm_of_atms :: "'a list \<Rightarrow> 'a"
assumes
renamings_apart_length: "length (renamings_apart Cs) = length Cs" and
renamings_apart_renaming: "\<rho> \<in> set (renamings_apart Cs) \<Longrightarrow> is_renaming \<rho>" and
renamings_apart_var_disjoint: "var_disjoint (Cs \<cdot>\<cdot>cl (renamings_apart Cs))" and
atm_of_atms_subst:
"\<And>As Bs. atm_of_atms As \<cdot>a \<sigma> = atm_of_atms Bs \<longleftrightarrow> map (\<lambda>A. A \<cdot>a \<sigma>) As = Bs"
begin
subsubsection \<open>Generalization and Subsumption\<close>
lemma wf_strictly_generalizes: "wfP strictly_generalizes"
proof -
{
assume "\<exists>C_at. \<forall>i. strictly_generalizes (C_at (Suc i)) (C_at i)"
then obtain C_at :: "nat \<Rightarrow> 'a clause" where
sg_C: "\<And>i. strictly_generalizes (C_at (Suc i)) (C_at i)"
by blast
define n :: nat where
"n = size (C_at 0)"
have sz_C: "size (C_at i) = n" for i
proof (induct i)
case (Suc i)
then show ?case
using sg_C[of i] unfolding strictly_generalizes_def generalizes_def subst_cls_def
by (metis size_image_mset)
qed (simp add: n_def)
obtain \<sigma>_at :: "nat \<Rightarrow> 's" where
C_\<sigma>: "\<And>i. image_mset (\<lambda>L. L \<cdot>l \<sigma>_at i) (C_at (Suc i)) = C_at i"
using sg_C[unfolded strictly_generalizes_def generalizes_def subst_cls_def] by metis
define Ls_at :: "nat \<Rightarrow> 'a literal list" where
"Ls_at = rec_nat (SOME Ls. mset Ls = C_at 0)
(\<lambda>i Lsi. SOME Ls. mset Ls = C_at (Suc i) \<and> map (\<lambda>L. L \<cdot>l \<sigma>_at i) Ls = Lsi)"
have
Ls_at_0: "Ls_at 0 = (SOME Ls. mset Ls = C_at 0)" and
Ls_at_Suc: "\<And>i. Ls_at (Suc i) =
(SOME Ls. mset Ls = C_at (Suc i) \<and> map (\<lambda>L. L \<cdot>l \<sigma>_at i) Ls = Ls_at i)"
unfolding Ls_at_def by simp+
have mset_Lt_at_0: "mset (Ls_at 0) = C_at 0"
unfolding Ls_at_0 by (rule someI_ex) (metis list_of_mset_exi)
have "mset (Ls_at (Suc i)) = C_at (Suc i) \<and> map (\<lambda>L. L \<cdot>l \<sigma>_at i) (Ls_at (Suc i)) = Ls_at i"
for i
proof (induct i)
case 0
then show ?case
by (simp add: Ls_at_Suc, rule someI_ex,
metis C_\<sigma> image_mset_of_subset_list mset_Lt_at_0)
next
case Suc
then show ?case
by (subst (1 2) Ls_at_Suc) (rule someI_ex, metis C_\<sigma> image_mset_of_subset_list)
qed
note mset_Ls = this[THEN conjunct1] and Ls_\<sigma> = this[THEN conjunct2]
have len_Ls: "\<And>i. length (Ls_at i) = n"
by (metis mset_Ls mset_Lt_at_0 not0_implies_Suc size_mset sz_C)
have is_pos_Ls: "\<And>i j. j < n \<Longrightarrow> is_pos (Ls_at (Suc i) ! j) \<longleftrightarrow> is_pos (Ls_at i ! j)"
using Ls_\<sigma> len_Ls by (metis literal.map_disc_iff nth_map subst_lit_def)
have Ls_\<tau>_strict_lit: "\<And>i \<tau>. map (\<lambda>L. L \<cdot>l \<tau>) (Ls_at i) \<noteq> Ls_at (Suc i)"
by (metis C_\<sigma> mset_Ls Ls_\<sigma> mset_map sg_C generalizes_def strictly_generalizes_def
subst_cls_def)
have Ls_\<tau>_strict_tm:
"map ((\<lambda>t. t \<cdot>a \<tau>) \<circ> atm_of) (Ls_at i) \<noteq> map atm_of (Ls_at (Suc i))" for i \<tau>
proof -
obtain j :: nat where
j_lt: "j < n" and
j_\<tau>: "Ls_at i ! j \<cdot>l \<tau> \<noteq> Ls_at (Suc i) ! j"
using Ls_\<tau>_strict_lit[of \<tau> i] len_Ls
by (metis (no_types, lifting) length_map list_eq_iff_nth_eq nth_map)
have "atm_of (Ls_at i ! j) \<cdot>a \<tau> \<noteq> atm_of (Ls_at (Suc i) ! j)"
using j_\<tau> is_pos_Ls[OF j_lt]
by (metis (mono_tags) literal.expand literal.map_disc_iff literal.map_sel subst_lit_def)
then show ?thesis
using j_lt len_Ls by (metis nth_map o_apply)
qed
define tm_at :: "nat \<Rightarrow> 'a" where
"\<And>i. tm_at i = atm_of_atms (map atm_of (Ls_at i))"
have "\<And>i. generalizes_atm (tm_at (Suc i)) (tm_at i)"
unfolding tm_at_def generalizes_atm_def atm_of_atms_subst
using Ls_\<sigma>[THEN arg_cong, of "map atm_of"] by (auto simp: comp_def)
moreover have "\<And>i. \<not> generalizes_atm (tm_at i) (tm_at (Suc i))"
unfolding tm_at_def generalizes_atm_def atm_of_atms_subst by (simp add: Ls_\<tau>_strict_tm)
ultimately have "\<And>i. strictly_generalizes_atm (tm_at (Suc i)) (tm_at i)"
unfolding strictly_generalizes_atm_def by blast
then have False
using wf_strictly_generalizes_atm[unfolded wfP_def wf_iff_no_infinite_down_chain] by blast
}
then show "wfP (strictly_generalizes :: 'a clause \<Rightarrow> _ \<Rightarrow> _)"
unfolding wfP_def by (blast intro: wf_iff_no_infinite_down_chain[THEN iffD2])
qed
lemma strictly_subsumes_has_minimum:
assumes "CC \<noteq> {}"
shows "\<exists>C \<in> CC. \<forall>D \<in> CC. \<not> strictly_subsumes D C"
proof (rule ccontr)
assume "\<not> (\<exists>C \<in> CC. \<forall>D \<in> CC. \<not> strictly_subsumes D C)"
then have "\<forall>C \<in> CC. \<exists>D \<in> CC. strictly_subsumes D C"
by blast
then obtain f where
f_p: "\<forall>C \<in> CC. f C \<in> CC \<and> strictly_subsumes (f C) C"
by metis
from assms obtain C where
C_p: "C \<in> CC"
by auto
define c :: "nat \<Rightarrow> 'a clause" where
"\<And>n. c n = (f ^^ n) C"
have incc: "c i \<in> CC" for i
by (induction i) (auto simp: c_def f_p C_p)
have ps: "\<forall>i. strictly_subsumes (c (Suc i)) (c i)"
using incc f_p unfolding c_def by auto
have "\<forall>i. size (c i) \<ge> size (c (Suc i))"
using ps unfolding strictly_subsumes_def subsumes_def by (metis size_mset_mono size_subst)
then have lte: "\<forall>i. (size \<circ> c) i \<ge> (size \<circ> c) (Suc i)"
unfolding comp_def .
then have "\<exists>l. \<forall>l' \<ge> l. size (c l') = size (c (Suc l'))"
using f_Suc_decr_eventually_const comp_def by auto
then obtain l where
l_p: "\<forall>l' \<ge> l. size (c l') = size (c (Suc l'))"
by metis
then have "\<forall>l' \<ge> l. strictly_generalizes (c (Suc l')) (c l')"
using ps unfolding strictly_generalizes_def generalizes_def
by (metis size_subst less_irrefl strictly_subsumes_def mset_subset_size subset_mset_def
subsumes_def strictly_subsumes_neq)
then have "\<forall>i. strictly_generalizes (c (Suc i + l)) (c (i + l))"
unfolding strictly_generalizes_def generalizes_def by auto
then have "\<exists>f. \<forall>i. strictly_generalizes (f (Suc i)) (f i)"
by (rule exI[of _ "\<lambda>x. c (x + l)"])
then show False
using wf_strictly_generalizes
wf_iff_no_infinite_down_chain[of "{(x, y). strictly_generalizes x y}"]
unfolding wfP_def by auto
qed
lemma wf_strictly_subsumes: "wfP strictly_subsumes"
using strictly_subsumes_has_minimum by (metis equals0D wfP_eq_minimal)
end
subsection \<open>Most General Unifiers\<close>
locale mgu = substitution_renamings subst_atm id_subst comp_subst renamings_apart atm_of_atms
for
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: 's and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" and
renamings_apart :: "'a literal multiset list \<Rightarrow> 's list" and
atm_of_atms :: "'a list \<Rightarrow> 'a"+
fixes
mgu :: "'a set set \<Rightarrow> 's option"
assumes
mgu_sound: "finite AAA \<Longrightarrow> (\<forall>AA \<in> AAA. finite AA) \<Longrightarrow> mgu AAA = Some \<sigma> \<Longrightarrow> is_mgu \<sigma> AAA" and
mgu_complete:
"finite AAA \<Longrightarrow> (\<forall>AA \<in> AAA. finite AA) \<Longrightarrow> is_unifiers \<sigma> AAA \<Longrightarrow> \<exists>\<tau>. mgu AAA = Some \<tau>"
begin
lemmas is_unifiers_mgu = mgu_sound[unfolded is_mgu_def, THEN conjunct1]
lemmas is_mgu_most_general = mgu_sound[unfolded is_mgu_def, THEN conjunct2]
lemma mgu_unifier:
assumes
aslen: "length As = n" and
aaslen: "length AAs = n" and
mgu: "Some \<sigma> = mgu (set_mset ` set (map2 add_mset As AAs))" and
i_lt: "i < n" and
a_in: "A \<in># AAs ! i"
shows "A \<cdot>a \<sigma> = As ! i \<cdot>a \<sigma>"
proof -
from mgu have "is_mgu \<sigma> (set_mset ` set (map2 add_mset As AAs))"
using mgu_sound by auto
then have "is_unifiers \<sigma> (set_mset ` set (map2 add_mset As AAs))"
using is_mgu_is_unifiers by auto
then have "is_unifier \<sigma> (set_mset (add_mset (As ! i) (AAs ! i)))"
using i_lt aslen aaslen unfolding is_unifiers_def is_unifier_def
by simp (metis length_zip min.idem nth_mem nth_zip prod.case set_mset_add_mset_insert)
then show ?thesis
using aslen aaslen a_in is_unifier_subst_atm_eqI
by (metis finite_set_mset insertCI set_mset_add_mset_insert)
qed
end
subsection \<open>Idempotent Most General Unifiers\<close>
locale imgu = mgu subst_atm id_subst comp_subst renamings_apart atm_of_atms mgu
for
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: 's and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" and
renamings_apart :: "'a literal multiset list \<Rightarrow> 's list" and
atm_of_atms :: "'a list \<Rightarrow> 'a" and
mgu :: "'a set set \<Rightarrow> 's option" +
assumes
mgu_is_imgu: "finite AAA \<Longrightarrow> (\<forall>AA \<in> AAA. finite AA) \<Longrightarrow> mgu AAA = Some \<sigma> \<Longrightarrow> is_imgu \<sigma> AAA"
end
|
#include "vr/util/parse.h"
#include <boost/algorithm/string/classification.hpp>
#include <boost/algorithm/string/split.hpp>
//----------------------------------------------------------------------------
namespace vr
{
namespace util
{
//............................................................................
string_vector
split (std::string const & s, string_literal_t const separators, bool keep_empty_tokens)
{
string_vector r { };
boost::split (r, s, boost::is_any_of (separators), (keep_empty_tokens ? boost::token_compress_off : boost::token_compress_on));
return r;
}
} // end of 'util'
} // end of namespace
//----------------------------------------------------------------------------
|
//
// Copyright Silvin Lubecki 2010
//
// Distributed under the Boost Software License, Version 1.0. (See
// accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
//
#include "depend_pch.h"
#include "StronglyConnectedComponents.h"
#include "Filter_ABC.h"
#include "StronglyConnectedComponentsVisitor_ABC.h"
#include "ComponentVisitor_ABC.h"
#include <boost/foreach.hpp>
#include <map>
using namespace depend;
// -----------------------------------------------------------------------------
// Name: StronglyConnectedComponents constructor
// Created: SLI 2010-08-23
// -----------------------------------------------------------------------------
StronglyConnectedComponents::StronglyConnectedComponents( const Visitable< DependencyVisitor_ABC >& metric, const Filter_ABC& filter )
: filter_( filter )
{
metric.Apply( *this );
}
// -----------------------------------------------------------------------------
// Name: StronglyConnectedComponents destructor
// Created: SLI 2010-08-23
// -----------------------------------------------------------------------------
StronglyConnectedComponents::~StronglyConnectedComponents()
{
// NOTHING
}
namespace
{
class Component : public Visitable< ComponentVisitor_ABC >
{
public:
Component( const std::vector< std::string >& units )
: units_( units )
{}
virtual void Apply( ComponentVisitor_ABC& visitor ) const
{
BOOST_FOREACH( const std::string& unit, units_ )
visitor.NotifyUnit( unit );
}
private:
const std::vector< std::string >& units_;
};
template< typename T, typename U >
void Visit( T& components, const U& labels, const Filter_ABC& filter, StronglyConnectedComponentsVisitor_ABC& visitor )
{
typedef std::vector< std::string > T_Dependencies;
typedef std::map< typename T::key_type, T_Dependencies > T_Components;
T_Components sorted_components;
BOOST_FOREACH( const typename T::value_type& component, components )
{
typename U::const_iterator it = labels.find( component.first );
if( it != labels.end() && filter.Check( it->second ) ) // guard against an unlabelled vertex
sorted_components[ component.second ].push_back( it->second );
}
BOOST_FOREACH( const typename T_Components::value_type& component, sorted_components )
if( component.second.size() > 1 )
visitor.NotifyComponent( Component( component.second ) );
}
}
// -----------------------------------------------------------------------------
// Name: StronglyConnectedComponents::Apply
// Created: SLI 2011-04-06
// -----------------------------------------------------------------------------
void StronglyConnectedComponents::Apply( StronglyConnectedComponentsVisitor_ABC& visitor ) const
{
typedef std::map< T_Graph::vertex_descriptor, T_Graph::vertices_size_type > T_Map;
typedef boost::associative_property_map< T_Map > T_PropertyMap;
T_Map mymap;
T_PropertyMap pmap( mymap );
boost::strong_components( graph_.graph(), pmap );
::Visit( mymap, labels_, filter_, visitor );
}
// -----------------------------------------------------------------------------
// Name: StronglyConnectedComponents::NotifyInternalDependency
// Created: SLI 2010-08-23
// -----------------------------------------------------------------------------
void StronglyConnectedComponents::NotifyInternalDependency( const std::string& fromModule, const std::string& toModule, const std::string& /*context*/ )
{
labels_[ boost::add_vertex( fromModule, graph_ ) ] = fromModule;
labels_[ boost::add_vertex( toModule, graph_ ) ] = toModule;
boost::add_edge_by_label( fromModule, toModule, graph_ );
}
// -----------------------------------------------------------------------------
// Name: StronglyConnectedComponents::NotifyExternalDependency
// Created: SLI 2010-08-23
// -----------------------------------------------------------------------------
void StronglyConnectedComponents::NotifyExternalDependency( const std::string& /*fromModule*/, const std::string& /*toModule*/, const std::string& /*context*/ )
{
// NOTHING
}
|
[GOAL]
x : ℝ
h₁ : x ≠ -1
h₂ : x ≠ 1
⊢ HasStrictDerivAt arcsin (1 / sqrt (1 - x ^ 2)) x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
cases' h₁.lt_or_lt with h₁ h₁
[GOAL]
case inl
x : ℝ
h₁✝ : x ≠ -1
h₂ : x ≠ 1
h₁ : x < -1
⊢ HasStrictDerivAt arcsin (1 / sqrt (1 - x ^ 2)) x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
have : 1 - x ^ 2 < 0 := by nlinarith [h₁]
[GOAL]
x : ℝ
h₁✝ : x ≠ -1
h₂ : x ≠ 1
h₁ : x < -1
⊢ 1 - x ^ 2 < 0
[PROOFSTEP]
nlinarith [h₁]
[GOAL]
case inl
x : ℝ
h₁✝ : x ≠ -1
h₂ : x ≠ 1
h₁ : x < -1
this : 1 - x ^ 2 < 0
⊢ HasStrictDerivAt arcsin (1 / sqrt (1 - x ^ 2)) x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
rw [sqrt_eq_zero'.2 this.le, div_zero]
[GOAL]
case inl
x : ℝ
h₁✝ : x ≠ -1
h₂ : x ≠ 1
h₁ : x < -1
this : 1 - x ^ 2 < 0
⊢ HasStrictDerivAt arcsin 0 x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
have : arcsin =ᶠ[𝓝 x] fun _ => -(π / 2) := (gt_mem_nhds h₁).mono fun y hy => arcsin_of_le_neg_one hy.le
[GOAL]
case inl
x : ℝ
h₁✝ : x ≠ -1
h₂ : x ≠ 1
h₁ : x < -1
this✝ : 1 - x ^ 2 < 0
this : arcsin =ᶠ[𝓝 x] fun x => -(π / 2)
⊢ HasStrictDerivAt arcsin 0 x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
exact ⟨(hasStrictDerivAt_const _ _).congr_of_eventuallyEq this.symm, contDiffAt_const.congr_of_eventuallyEq this⟩
[GOAL]
case inr
x : ℝ
h₁✝ : x ≠ -1
h₂ : x ≠ 1
h₁ : -1 < x
⊢ HasStrictDerivAt arcsin (1 / sqrt (1 - x ^ 2)) x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
cases' h₂.lt_or_lt with h₂ h₂
[GOAL]
case inr.inl
x : ℝ
h₁✝ : x ≠ -1
h₂✝ : x ≠ 1
h₁ : -1 < x
h₂ : x < 1
⊢ HasStrictDerivAt arcsin (1 / sqrt (1 - x ^ 2)) x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
have : 0 < sqrt (1 - x ^ 2) := sqrt_pos.2 (by nlinarith [h₁, h₂])
[GOAL]
x : ℝ
h₁✝ : x ≠ -1
h₂✝ : x ≠ 1
h₁ : -1 < x
h₂ : x < 1
⊢ 0 < 1 - x ^ 2
[PROOFSTEP]
nlinarith [h₁, h₂]
[GOAL]
case inr.inl
x : ℝ
h₁✝ : x ≠ -1
h₂✝ : x ≠ 1
h₁ : -1 < x
h₂ : x < 1
this : 0 < sqrt (1 - x ^ 2)
⊢ HasStrictDerivAt arcsin (1 / sqrt (1 - x ^ 2)) x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
simp only [← cos_arcsin, one_div] at this ⊢
[GOAL]
case inr.inl
x : ℝ
h₁✝ : x ≠ -1
h₂✝ : x ≠ 1
h₁ : -1 < x
h₂ : x < 1
this : 0 < cos (arcsin x)
⊢ HasStrictDerivAt arcsin (cos (arcsin x))⁻¹ x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
exact
⟨sinLocalHomeomorph.hasStrictDerivAt_symm ⟨h₁, h₂⟩ this.ne' (hasStrictDerivAt_sin _),
sinLocalHomeomorph.contDiffAt_symm_deriv this.ne' ⟨h₁, h₂⟩ (hasDerivAt_sin _) contDiff_sin.contDiffAt⟩
[GOAL]
case inr.inr
x : ℝ
h₁✝ : x ≠ -1
h₂✝ : x ≠ 1
h₁ : -1 < x
h₂ : 1 < x
⊢ HasStrictDerivAt arcsin (1 / sqrt (1 - x ^ 2)) x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
have : 1 - x ^ 2 < 0 := by nlinarith [h₂]
[GOAL]
x : ℝ
h₁✝ : x ≠ -1
h₂✝ : x ≠ 1
h₁ : -1 < x
h₂ : 1 < x
⊢ 1 - x ^ 2 < 0
[PROOFSTEP]
nlinarith [h₂]
[GOAL]
case inr.inr
x : ℝ
h₁✝ : x ≠ -1
h₂✝ : x ≠ 1
h₁ : -1 < x
h₂ : 1 < x
this : 1 - x ^ 2 < 0
⊢ HasStrictDerivAt arcsin (1 / sqrt (1 - x ^ 2)) x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
rw [sqrt_eq_zero'.2 this.le, div_zero]
[GOAL]
case inr.inr
x : ℝ
h₁✝ : x ≠ -1
h₂✝ : x ≠ 1
h₁ : -1 < x
h₂ : 1 < x
this : 1 - x ^ 2 < 0
⊢ HasStrictDerivAt arcsin 0 x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
have : arcsin =ᶠ[𝓝 x] fun _ => π / 2 := (lt_mem_nhds h₂).mono fun y hy => arcsin_of_one_le hy.le
[GOAL]
case inr.inr
x : ℝ
h₁✝ : x ≠ -1
h₂✝ : x ≠ 1
h₁ : -1 < x
h₂ : 1 < x
this✝ : 1 - x ^ 2 < 0
this : arcsin =ᶠ[𝓝 x] fun x => π / 2
⊢ HasStrictDerivAt arcsin 0 x ∧ ContDiffAt ℝ ⊤ arcsin x
[PROOFSTEP]
exact ⟨(hasStrictDerivAt_const _ _).congr_of_eventuallyEq this.symm, contDiffAt_const.congr_of_eventuallyEq this⟩
[GOAL]
x : ℝ
h : x ≠ -1
⊢ HasDerivWithinAt arcsin (1 / sqrt (1 - x ^ 2)) (Ici x) x
[PROOFSTEP]
rcases eq_or_ne x 1 with (rfl | h')
[GOAL]
case inl
h : 1 ≠ -1
⊢ HasDerivWithinAt arcsin (1 / sqrt (1 - 1 ^ 2)) (Ici 1) 1
[PROOFSTEP]
convert (hasDerivWithinAt_const (1 : ℝ) _ (π / 2)).congr _ _
[GOAL]
case h.e'_7
h : 1 ≠ -1
⊢ 1 / sqrt (1 - 1 ^ 2) = 0
[PROOFSTEP]
simp (config := { contextual := true }) [arcsin_of_one_le]
[GOAL]
case inl.convert_3
h : 1 ≠ -1
⊢ ∀ (x : ℝ), x ∈ Ici 1 → arcsin x = π / 2
[PROOFSTEP]
simp (config := { contextual := true }) [arcsin_of_one_le]
[GOAL]
case inl.convert_4
h : 1 ≠ -1
⊢ arcsin 1 = π / 2
[PROOFSTEP]
simp (config := { contextual := true }) [arcsin_of_one_le]
[GOAL]
case inr
x : ℝ
h : x ≠ -1
h' : x ≠ 1
⊢ HasDerivWithinAt arcsin (1 / sqrt (1 - x ^ 2)) (Ici x) x
[PROOFSTEP]
exact (hasDerivAt_arcsin h h').hasDerivWithinAt
[GOAL]
x : ℝ
h : x ≠ 1
⊢ HasDerivWithinAt arcsin (1 / sqrt (1 - x ^ 2)) (Iic x) x
[PROOFSTEP]
rcases em (x = -1) with (rfl | h')
[GOAL]
case inl
h : -1 ≠ 1
⊢ HasDerivWithinAt arcsin (1 / sqrt (1 - (-1) ^ 2)) (Iic (-1)) (-1)
[PROOFSTEP]
convert (hasDerivWithinAt_const (-1 : ℝ) _ (-(π / 2))).congr _ _
[GOAL]
case h.e'_7
h : -1 ≠ 1
⊢ 1 / sqrt (1 - (-1) ^ 2) = 0
[PROOFSTEP]
simp (config := { contextual := true }) [arcsin_of_le_neg_one]
[GOAL]
case inl.convert_3
h : -1 ≠ 1
⊢ ∀ (x : ℝ), x ∈ Iic (-1) → arcsin x = -(π / 2)
[PROOFSTEP]
simp (config := { contextual := true }) [arcsin_of_le_neg_one]
[GOAL]
case inl.convert_4
h : -1 ≠ 1
⊢ arcsin (-1) = -(π / 2)
[PROOFSTEP]
simp (config := { contextual := true }) [arcsin_of_le_neg_one]
[GOAL]
case inr
x : ℝ
h : x ≠ 1
h' : ¬x = -1
⊢ HasDerivWithinAt arcsin (1 / sqrt (1 - x ^ 2)) (Iic x) x
[PROOFSTEP]
exact (hasDerivAt_arcsin h' h).hasDerivWithinAt
[GOAL]
x : ℝ
⊢ DifferentiableWithinAt ℝ arcsin (Ici x) x ↔ x ≠ -1
[PROOFSTEP]
refine' ⟨_, fun h => (hasDerivWithinAt_arcsin_Ici h).differentiableWithinAt⟩
[GOAL]
x : ℝ
⊢ DifferentiableWithinAt ℝ arcsin (Ici x) x → x ≠ -1
[PROOFSTEP]
rintro h rfl
[GOAL]
h : DifferentiableWithinAt ℝ arcsin (Ici (-1)) (-1)
⊢ False
[PROOFSTEP]
have : sin ∘ arcsin =ᶠ[𝓝[≥] (-1 : ℝ)] id := by
filter_upwards [Icc_mem_nhdsWithin_Ici ⟨le_rfl, neg_lt_self (zero_lt_one' ℝ)⟩] with x using sin_arcsin'
[GOAL]
h : DifferentiableWithinAt ℝ arcsin (Ici (-1)) (-1)
⊢ sin ∘ arcsin =ᶠ[𝓝[Ici (-1)] (-1)] id
[PROOFSTEP]
filter_upwards [Icc_mem_nhdsWithin_Ici ⟨le_rfl, neg_lt_self (zero_lt_one' ℝ)⟩] with x using sin_arcsin'
[GOAL]
h : DifferentiableWithinAt ℝ arcsin (Ici (-1)) (-1)
this : sin ∘ arcsin =ᶠ[𝓝[Ici (-1)] (-1)] id
⊢ False
[PROOFSTEP]
have := h.hasDerivWithinAt.sin.congr_of_eventuallyEq this.symm (by simp)
[GOAL]
h : DifferentiableWithinAt ℝ arcsin (Ici (-1)) (-1)
this : sin ∘ arcsin =ᶠ[𝓝[Ici (-1)] (-1)] id
⊢ id (-1) = sin (arcsin (-1))
[PROOFSTEP]
simp
[GOAL]
h : DifferentiableWithinAt ℝ arcsin (Ici (-1)) (-1)
this✝ : sin ∘ arcsin =ᶠ[𝓝[Ici (-1)] (-1)] id
this : HasDerivWithinAt id (cos (arcsin (-1)) * derivWithin arcsin (Ici (-1)) (-1)) (Ici (-1)) (-1)
⊢ False
[PROOFSTEP]
simpa using (uniqueDiffOn_Ici _ _ left_mem_Ici).eq_deriv _ this (hasDerivWithinAt_id _ _)
[GOAL]
x : ℝ
⊢ DifferentiableWithinAt ℝ arcsin (Iic x) x ↔ x ≠ 1
[PROOFSTEP]
refine' ⟨fun h => _, fun h => (hasDerivWithinAt_arcsin_Iic h).differentiableWithinAt⟩
[GOAL]
x : ℝ
h : DifferentiableWithinAt ℝ arcsin (Iic x) x
⊢ x ≠ 1
[PROOFSTEP]
rw [← neg_neg x, ← image_neg_Ici] at h
[GOAL]
x : ℝ
h : DifferentiableWithinAt ℝ arcsin (Neg.neg '' Ici (-x)) (- -x)
⊢ x ≠ 1
[PROOFSTEP]
have := (h.comp (-x) differentiableWithinAt_id.neg (mapsTo_image _ _)).neg
[GOAL]
x : ℝ
h : DifferentiableWithinAt ℝ arcsin (Neg.neg '' Ici (-x)) (- -x)
this : DifferentiableWithinAt ℝ (fun y => -(arcsin ∘ Neg.neg) y) (Ici (-x)) (-x)
⊢ x ≠ 1
[PROOFSTEP]
simpa [(· ∘ ·), differentiableWithinAt_arcsin_Ici] using this
[GOAL]
⊢ deriv arcsin = fun x => 1 / sqrt (1 - x ^ 2)
[PROOFSTEP]
funext x
[GOAL]
case h
x : ℝ
⊢ deriv arcsin x = 1 / sqrt (1 - x ^ 2)
[PROOFSTEP]
by_cases h : x ≠ -1 ∧ x ≠ 1
[GOAL]
case pos
x : ℝ
h : x ≠ -1 ∧ x ≠ 1
⊢ deriv arcsin x = 1 / sqrt (1 - x ^ 2)
[PROOFSTEP]
exact (hasDerivAt_arcsin h.1 h.2).deriv
[GOAL]
case neg
x : ℝ
h : ¬(x ≠ -1 ∧ x ≠ 1)
⊢ deriv arcsin x = 1 / sqrt (1 - x ^ 2)
[PROOFSTEP]
rw [deriv_zero_of_not_differentiableAt (mt differentiableAt_arcsin.1 h)]
[GOAL]
case neg
x : ℝ
h : ¬(x ≠ -1 ∧ x ≠ 1)
⊢ 0 = 1 / sqrt (1 - x ^ 2)
[PROOFSTEP]
simp only [not_and_or, Ne.def, Classical.not_not] at h
[GOAL]
case neg
x : ℝ
h : x = -1 ∨ x = 1
⊢ 0 = 1 / sqrt (1 - x ^ 2)
[PROOFSTEP]
rcases h with (rfl | rfl)
[GOAL]
case neg.inl
⊢ 0 = 1 / sqrt (1 - (-1) ^ 2)
[PROOFSTEP]
simp
[GOAL]
case neg.inr
⊢ 0 = 1 / sqrt (1 - 1 ^ 2)
[PROOFSTEP]
simp
[GOAL]
x : ℝ
⊢ -deriv (fun y => arcsin y) x = -(1 / sqrt (1 - x ^ 2))
[PROOFSTEP]
simp only [deriv_arcsin]
[GOAL]
x : ℝ
n : ℕ∞
⊢ ContDiffAt ℝ n arccos x ↔ n = 0 ∨ x ≠ -1 ∧ x ≠ 1
[PROOFSTEP]
refine' Iff.trans ⟨fun h => _, fun h => _⟩ contDiffAt_arcsin_iff
[GOAL]
case refine'_1
x : ℝ
n : ℕ∞
h : ContDiffAt ℝ n arccos x
⊢ ContDiffAt ℝ n arcsin x
[PROOFSTEP]
simpa [arccos] using (@contDiffAt_const _ _ _ _ _ _ _ _ _ _ (π / 2)).sub h
[GOAL]
case refine'_2
x : ℝ
n : ℕ∞
h : ContDiffAt ℝ n arcsin x
⊢ ContDiffAt ℝ n arccos x
[PROOFSTEP]
simpa [arccos] using (@contDiffAt_const _ _ _ _ _ _ _ _ _ _ (π / 2)).sub h
|
\chapter{Conclusions}
\label{conclusion}
This project involved an array of different tools and data.
Despite many difficulties and mistakes during the planning of the project, all of the aims of the project as stated in the proposal have been met.
Despite lacking conclusive results, this work represents a step towards a system for unifying diverse interaction data sources.
%However, the results obtained are not conclusive or useful.
%FIN: try and spin the positives a bit harder - what can be done to make them useful, 'this work represents a step towards that'
%project ran out time towards the end
\section{Deliverables}
%what was achieved (as a reminder)
During the course of the project a large number of data sources were evaluated and their contents extracted into a machine learning workflow.
These formed large feature vector files which could be used for classification.
Three different classifiers were then trained on this data and compared.
Of these, the best was used to provide predictions to weight edges.
A Bayesian method was used to combine this with other data sources and prior knowledge to generate the edge weights.
These edge weights were then used to create a weighted \ac{PPI} graph which was then compared to its unweighted counterpart.
%problems with results
%FIN: could maybe put in what features you identified as being important as a possible source of future work
%\section{}
\section{Future work}
%repeat Bayesian approach in a more principled way
%perhaps using probabilistic programming to enforce priors
As a basis for future work, this work illustrates many difficulties in working with varied publicly available data sources.
%FIN: with a mixture of manual and automated curation
However, it also provides insight into the correct method for weighting interaction edges.
These edges should be weighted directly as an estimate of interaction strength.
The premise of this project was that the posterior probability of an interaction existing would correlate well with its strength, since strong interactions are observed more often.
%FIN: what is a strong vs weak interaction in this context? because you can have tight 'strong' or weak physicochemical interactions between proteins (dissociation constants etc)
However, if a full probabilistic model were to be designed, the latent variable (which was interaction existence in our model) could instead be a continuous variable on the unit interval, modelled with a Beta distribution.
%FIN: expand?
The problem then becomes one of estimating interaction strength; this is difficult because strength is rarely observed directly, which makes it hard to assemble the training set required to fit such a probabilistic model.
Using the array of biological databases available it would be possible to link different observations based on strong biological prior knowledge.
%describe this method in more detail?
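As a hypothetical sketch only (the symbols below are illustrative and do not appear elsewhere in this work), such a model might take the form
\[
  s_{ij} \sim \mathrm{Beta}(\alpha, \beta), \qquad
  o^{(k)}_{ij} \mid s_{ij} \sim \mathrm{Bernoulli}(s_{ij}),
\]
where $s_{ij}$ is the latent strength of the interaction between proteins $i$ and $j$, and $o^{(k)}_{ij}$ indicates whether database $k$ reports that interaction.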
Alternatively, this project suggests a possible method for instead defining which interactions make up a \ac{PPI} network in the first place.
Using the method described in section \ref{bayes} and including the interaction databases described in chapter \ref{background} it would be possible to create a realistic estimate of the existence of arbitrary protein interactions.
This would require research to determine realistic true positive rate (TPR) and true negative rate (TNR) values for each database included.
%an example?
%the role of probabilistic programming?
%reference guido's probabilistic programming language?
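The combination of per-database TPR/TNR values described above amounts to a naive Bayes update of the prior odds. The following sketch illustrates the arithmetic under the assumption of conditionally independent databases; the rates, prior, and function name are invented for illustration, not values or code from this project.

```python
def posterior_interaction(prior, observations):
    """Naive Bayes combination of independent binary evidence.

    `observations` is a list of (reported, tpr, tnr) tuples, where
    `reported` is True if a database lists the interaction and
    `tpr`/`tnr` are that database's assumed true/negative rates.
    """
    odds = prior / (1.0 - prior)
    for reported, tpr, tnr in observations:
        if reported:
            # P(reported | interacting) / P(reported | not interacting)
            odds *= tpr / (1.0 - tnr)
        else:
            # P(not reported | interacting) / P(not reported | not interacting)
            odds *= (1.0 - tpr) / tnr
    return odds / (1.0 + odds)

# Two invented databases: the first reports the edge, the second does not.
weight = posterior_interaction(
    prior=0.01,
    observations=[(True, 0.8, 0.95), (False, 0.6, 0.9)],
)
```

A single supporting database with a high TPR raises the posterior well above the prior, while an absent report lowers it; the resulting value can serve directly as an edge weight.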
\section*{Conclusion}
%conclusion of the conclusion
Despite the inconclusive results of this project, the resources produced and the insight gained into better solutions made it worthwhile.
In the future, building on the results of this project, it will be possible to create a \ac{PPI} network that accurately summarises our knowledge of interactions and interaction strength using all of the available data.
|
function DoScanSeriesUpd(Simuh,ScanFlag)
global VCtl
try
[pathstr,name,ext]=fileparts(Simuh.SeqXMLFile);
catch me
guidata(Simuh.SimuPanel_figure,Simuh);
return;
end
if ~isfield(Simuh,'ScanSeries')
Simuh.ScanSeriesInd=1;
Simuh.ScanSeries(1,1:3)={[num2str(Simuh.ScanSeriesInd) ':'],'Dx',name};
else
switch ScanFlag
case 0 % load PSD
Simuh.ScanSeries(Simuh.ScanSeriesInd,1:3)={[num2str(Simuh.ScanSeriesInd) ':'],'Dx',name};
case 1 % scan setting
Simuh.ScanSeries(Simuh.ScanSeriesInd,1:2)={[num2str(Simuh.ScanSeriesInd) ':'],'Dx'};
case 2 % scanning
Simuh.ScanSeries(Simuh.ScanSeriesInd,1:2)={[num2str(Simuh.ScanSeriesInd) ':'],'...'};
VCtl.SeriesName = Simuh.ScanSeries{Simuh.ScanSeriesInd,3};
case 3 % scan complete
Simuh.ScanSeries(Simuh.ScanSeriesInd,1:2)={[num2str(Simuh.ScanSeriesInd) ':'],'V'};
Simuh.ScanSeriesInd=Simuh.ScanSeriesInd+1;
case 4 % scan fail
Simuh.ScanSeries(Simuh.ScanSeriesInd,1:2)={[num2str(Simuh.ScanSeriesInd) ':'],'X'};
Simuh.ScanSeriesInd=Simuh.ScanSeriesInd+1;
case 5 % add to batch
Simuh.ScanSeries(Simuh.ScanSeriesInd,1:2)={[num2str(Simuh.ScanSeriesInd) ':'],'B'};
Simuh.ScanSeriesInd=Simuh.ScanSeriesInd+1;
end
end
set(Simuh.ScanSeries_uitable,'Data',Simuh.ScanSeries);
set(Simuh.ScanSeries_uitable,'Enable','on');
guidata(Simuh.SimuPanel_figure,Simuh);
end |
(**************************************************************************)
(* * *)
(* _ * The Coccinelle Library / Evelyne Contejean *)
(* <o> * CNRS-LRI-Universite Paris Sud *)
(* -/@|@\- * A3PAT Project *)
(* -@ | @- * *)
(* -\@|@/- * This file is distributed under the terms of the *)
(* -v- * CeCILL-C licence *)
(* * *)
(**************************************************************************)
Set Implicit Arguments.
From Coq Require Import Relations Setoid List Multiset Arith Recdef Morphisms.
From CoLoR Require Import more_list list_permut ordered_set.
Ltac dummy a b a_eq_b :=
assert (Dummy : a = b); [exact a_eq_b | clear a_eq_b; rename Dummy into a_eq_b].
Module Type Sort.
Declare Module Import DOS : ordered_set.S.
Declare Module Import LPermut :
list_permut.S with Definition EDS.A := DOS.A
with Definition EDS.eq_A := @eq DOS.A.
(** *** Decomposition of the list wrt a pivot. *)
Function partition (pivot : A) (l : list A) {struct l} : (list A) * (list A) :=
match l with
| nil => (nil, nil)
| a :: l' =>
match partition pivot l' with (l1,l2) =>
if o_bool a pivot then (a :: l1, l2) else (l1, a :: l2)
end
end.
Parameter quicksort : list A -> list A.
Function remove_list (la l : list A) {struct l} : option (list A) :=
match la with
| nil => Some l
| a :: la' =>
match l with
| nil => None
| b :: l' =>
if eq_bool a b
then remove_list la' l'
else
match remove_list la l' with
| None => None
| Some rmv => Some (b :: rmv)
end
end
end.
(** *** Definition of a sorted list. *)
Inductive is_sorted : list A -> Prop :=
| is_sorted_nil : is_sorted nil
| is_sorted_cons1 : forall t, is_sorted (t :: nil)
| is_sorted_cons2 :
forall t1 t2 l, o t1 t2 -> is_sorted (t2 :: l) -> is_sorted (t1 :: t2 :: l).
#[global] Hint Constructors is_sorted : core.
Parameter in_remove_list :
forall l la, is_sorted la -> is_sorted l ->
match remove_list la l with
| None => forall rmv, ~ (permut (la ++ rmv) l)
| Some rmv => permut (la ++ rmv) l
end.
Parameter quicksort_equation : forall l : list A,
quicksort l =
match l with
| nil => nil (A:=A)
| h :: tl =>
let (l1, l2) := partition h tl in
quicksort l1 ++ h :: quicksort l2
end.
Parameter quick_permut : forall l, permut l (quicksort l).
Parameter quick_permut_bis :
forall l1 l2, permut l1 l2 -> permut (quicksort l1) l2.
Parameter length_quicksort : forall l, length (quicksort l) = length l.
Parameter in_quick_in : forall a l, In a l <-> In a (quicksort l).
Parameter sorted_tl_sorted : forall e l, is_sorted (e :: l) -> is_sorted l.
Parameter quick_sorted : forall l, is_sorted (quicksort l).
Parameter sort_is_unique :
forall l1 l2, is_sorted l1 -> is_sorted l2 -> permut l1 l2 -> l1 = l2.
End Sort.
(** ** Definition of quicksort over lists. *)
Module Make (DOS1 : ordered_set.S) <: Sort with Module DOS := DOS1.
Module Import DOS := DOS1.
Module EDS := decidable_set.Convert(DOS1).
Module Import LPermut <:
list_permut.S with Definition EDS.A := DOS.A
with Definition EDS.eq_A := @eq DOS.A :=
list_permut.Make EDS.
Global Instance mem_morph : Proper (eq ==> permut ==> iff) (mem EDS.eq_A).
Proof. intros a b ab c d cd. subst. apply mem_permut_mem; trivial. Qed.
Global Instance add_A_morph : Proper (eq ==> permut ==> permut) (List.cons (A:=A)).
Proof. intros a b e l1 l2 P. rewrite <- permut_cons; trivial. Qed.
Global Instance length_morph : Proper (permut ==> eq) (length (A:=A)).
Proof. intros a b ab. eapply permut_length. apply ab. Qed.
(** *** Decomposition of the list wrt a pivot. *)
Function partition (pivot : A) (l : list A) {struct l} : (list A) * (list A) :=
match l with
| nil => (nil, nil)
| a :: l' =>
match partition pivot l' with (l1,l2) =>
if o_bool a pivot then (a :: l1, l2) else (l1, a :: l2)
end
end.
Function quicksort (l : list A) { measure (@length A) l} : list A :=
match l with
| nil => nil
| h :: tl =>
match partition h tl with (l1, l2) =>
quicksort l1 ++ h :: quicksort l2
end
end.
Proof.
(* 1 Second recursive call *)
intros e_l e l e_l_eq_e_l; subst e_l;
functional induction (partition e l)
as [|H1 e' l' H2 IH l1 l2 rec_res e'_lt_e|H1 e' l' H2 IH l1 l2 rec_res H3];
intros ll1 ll2 Hpart; injection Hpart; intros H H'; subst.
(* 1.1 l = nil *)
simpl; auto with arith.
(* 1.2 l = _ :: _ *)
apply lt_trans with (length (e :: l')); [apply (IH _ _ rec_res) | auto with arith].
simpl; apply lt_n_S; apply (IH _ _ rec_res).
(* 2 First recursive call *)
intros e_l e l e_l_eq_e_l; subst e_l;
functional induction (partition e l)
as [|H1 e' l' H2 IH l1 l2 rec_res e'_lt_e|H1 e' l' H2 IH l1 l2 rec_res H3];
intros ll1 ll2 Hpart; injection Hpart; intros H H'; subst.
(* 2.1 l = nil *)
simpl; auto with arith.
(* 2.2 l = _ :: _ *)
simpl; apply lt_n_S; apply (IH _ _ rec_res).
apply lt_trans with (length (e :: l')); [apply (IH _ _ rec_res) | auto with arith].
Defined.
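For readers less familiar with Coq's `Function` command, `partition` and `quicksort` above are ordinary head-as-pivot quicksort: `partition` splits the tail around the pivot using the order `o_bool`, and `quicksort` recurses on both halves. A minimal Python sketch of the same recursion (an illustration only, not part of the Coq development; `<=` stands in for the abstract order `o`):

```python
def partition(pivot, l):
    """Split l into (elements <= pivot, the rest), mirroring the Coq Function."""
    if not l:
        return [], []
    a, rest = l[0], l[1:]
    l1, l2 = partition(pivot, rest)
    return ([a] + l1, l2) if a <= pivot else (l1, [a] + l2)

def quicksort(l):
    """Head-as-pivot quicksort: sort both partitions, put the pivot between them."""
    if not l:
        return []
    h, tl = l[0], l[1:]
    l1, l2 = partition(h, tl)
    return quicksort(l1) + [h] + quicksort(l2)
```

The termination argument proved below corresponds to the observation that both `l1` and `l2` are strictly shorter than the input list.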
(** *** The result of quicksort is a permutation of the original list. *)
Lemma partition_list_permut1 :
forall e l, let (l1,l2) := partition e l in permut (l1 ++ l2) l.
Proof.
intros e l; functional induction (partition e l)
as [|H1 e' l' H2 IH l1 l2 rec_res e'_lt_e|H1 e' l' H2 IH l1 l2 rec_res H3].
(* l = nil *)
auto.
(* l = a :: l' /\ o_A a e *)
simpl app; rewrite rec_res in IH; rewrite IH; auto.
(* l = a::l' /\ ~o_A a e *)
simpl app; rewrite rec_res in IH;
apply permut_sym; rewrite <- permut_cons_inside; auto.
reflexivity.
Qed.
Theorem quick_permut : forall l, permut l (quicksort l).
Proof.
intros l; functional induction (quicksort l)
as [ | l a' l' l_eq_a'_l' l1 l2 H S1 S2]; auto.
(* 2.1 l = nil *)
rewrite <- permut_cons_inside.
assert (P := partition_list_permut1 a' l'); rewrite H in P.
rewrite <- P; rewrite <- S1; rewrite <- S2; auto.
reflexivity.
Qed.
Theorem quick_permut_bis :
forall l1 l2, permut l1 l2 -> permut (quicksort l1) l2.
Proof.
intros l1 l2 P; apply permut_trans with l1; trivial;
apply permut_sym; apply quick_permut.
Qed.
(** *** Definition of a sorted list. *)
Inductive is_sorted : list A -> Prop :=
| is_sorted_nil : is_sorted nil
| is_sorted_cons1 : forall t, is_sorted (t :: nil)
| is_sorted_cons2 :
forall t1 t2 l, o t1 t2 -> is_sorted (t2 :: l) -> is_sorted (t1 :: t2 :: l).
#[global] Hint Constructors is_sorted : core.
(** *** The result of quicksort is a sorted list. *)
Lemma sorted_tl_sorted :
forall e l, is_sorted (e :: l) -> is_sorted l.
Proof.
intros e l S; inversion S; auto.
Qed.
Lemma quick_sorted_aux1 :
forall l1 l2 e, is_sorted l1 -> is_sorted l2 ->
(forall a, mem EDS.eq_A a l1 -> o a e) ->
(forall a, mem EDS.eq_A a l2 -> o e a) ->
is_sorted (l1 ++ e :: l2).
Proof.
induction l1 as [ | e1 l1 ]; intros l2 e S1 S2 L1 L2.
simpl; destruct l2 as [ | e2 l2]; intros;
[apply is_sorted_cons1 | apply is_sorted_cons2 ]; trivial;
apply (L2 e2); simpl; left; reflexivity.
destruct l1 as [ | e1' l1] ; simpl; intros; apply is_sorted_cons2.
apply L1; left; reflexivity.
simpl in IHl1; apply IHl1; trivial; contradiction.
inversion S1; trivial.
rewrite app_comm_cons; apply IHl1; trivial.
inversion S1; trivial.
intros; apply L1; trivial; right; trivial.
Qed.
Lemma quick_sorted_aux3 :
forall l e, let (l1,_) := partition e l in forall a, mem EDS.eq_A a l1 -> o a e.
Proof.
intros l e;
functional induction (partition e l) as
[|l a' l' l_eq_a'_l' IH l1' l2' H a'_lt_e
|l a' l' l_eq_a'_l' IH l1' l2' H _H].
contradiction.
intros a [a_eq_a' | a_in_l1']; [idtac | rewrite H in IH; apply IH; trivial].
generalize (o_bool_ok a' e); dummy a a' a_eq_a'; subst; rewrite a'_lt_e; trivial.
intros; rewrite H in IH; apply IH; trivial.
Qed.
Lemma quick_sorted_aux4 :
forall l e, let (_,l2) := partition e l in forall a, mem EDS.eq_A a l2 -> o e a.
Proof.
intros l e;
functional induction (partition e l)
as [|l a' l' l_eq_a'_l' IH l1' l2' H a'_lt_e
|l a' l' l_eq_a'_l' IH l1' l2' H a'_ge_e].
contradiction.
intros; rewrite H in IH; apply IH; trivial.
intros a [a_eq_a' | a_in_l1']; [subst | rewrite H in IH; apply IH]; trivial.
dummy a a' a_eq_a'; subst; trivial.
generalize (o_bool_ok a' e); rewrite a'_ge_e; intro a'_ge_e'.
destruct (o_total e a'); intros.
assumption.
absurd (DOS.o a' e); assumption.
Qed.
Theorem quick_sorted : forall l, is_sorted (quicksort l).
Proof.
intros l;
functional induction (quicksort l)
as [ | l a' l' l_eq_a'_l' l1 l2 H S1 S2].
auto.
apply quick_sorted_aux1; trivial.
intros a a_in_ql1;
assert (H1 := quick_sorted_aux3 l' a'); rewrite H in H1; apply H1.
rewrite (mem_permut_mem a (quick_permut l1)); trivial.
intros a a_in_ql2;
assert (H2 := quick_sorted_aux4 l' a'); rewrite H in H2;
apply H2; rewrite (mem_permut_mem a (quick_permut l2)); trivial.
Qed.
(** *** There is a unique sorted list equivalent up to a permutation to a given list.*)
Lemma sorted_cons_min :
forall l e, is_sorted (e :: l) -> (forall e', mem EDS.eq_A e' (e :: l) -> o e e').
Proof.
simpl; intros l e S e' [e'_eq_e | e'_in_l].
dummy e' e e'_eq_e; subst; trivial.
generalize o_proof; intro O; inversion O as [R _ _]; apply R.
generalize e S e' e'_in_l; clear e S e' e'_in_l; induction l as [ | a l ].
intros e _ e' e'_in_nil; elim e'_in_nil.
simpl; intros e S e' [e'_eq_a | e'_in_l].
dummy e' a e'_eq_a; subst e'; inversion S; trivial.
generalize o_proof; intro O; inversion O as [ _ T _ ]; apply T with a.
inversion S; trivial.
apply IHl; trivial; inversion S; trivial.
Qed.
Theorem sort_is_unique :
forall l1 l2, is_sorted l1 -> is_sorted l2 -> permut l1 l2 -> l1 = l2.
Proof.
induction l1 as [ | e1 l1 ]; intros l2 S1 S2 P.
inversion P; trivial.
inversion P as [ | a b l k1 k2 a_eq_b P' H1 H']; dummy e1 b a_eq_b; subst.
destruct k1 as [ | a1 k1].
rewrite (IHl1 k2); trivial.
inversion S1; trivial.
inversion S2; trivial.
assert (e1_eq_a1 : b = a1).
assert (O:= o_proof); inversion O as [ _ _ AS ]; apply AS; clear O AS.
apply (sorted_cons_min S1); rewrite P; left; reflexivity.
apply (sorted_cons_min S2); rewrite <- P; left; reflexivity.
subst a1; rewrite (IHl1 (k1 ++ b :: k2)); trivial.
apply sorted_tl_sorted with b; trivial.
apply sorted_tl_sorted with b; trivial.
simpl in P; rewrite <- permut_cons in P; trivial.
reflexivity.
Qed.
(** Some useful properties of the result of quicksort. *)
Lemma length_quicksort : forall l, length (quicksort l) = length l.
Proof.
intro l; apply permut_length with EDS.eq_A;
apply permut_sym; apply quick_permut.
Qed.
Lemma in_quick_in : forall a l, In a l <-> In a (quicksort l).
Proof.
intros a l; apply in_permut_in.
apply quick_permut.
Qed.
Lemma in_quicksort_cons : forall a l, In a (quicksort (a :: l)).
Proof.
intros a l; rewrite <- in_quick_in; left; trivial.
Qed.
(** What happens when a sorted list is removed from another one.*)
Function remove_list (la l : list A) {struct l} : option (list A) :=
match la with
| nil => Some l
| a :: la' =>
match l with
| nil => None
| b :: l' =>
if eq_bool a b
then remove_list la' l'
else
match remove_list la l' with
| None => None
| Some rmv => Some (b :: rmv)
end
end
end.
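`remove_list la l` attempts to remove the elements of `la` from `l`, scanning `l` left to right; it returns `Some` leftover list, or `None` when some element of `la` cannot be found. A small Python sketch of the same case analysis (illustrative only; the `in_remove_list` lemma below additionally assumes both lists are sorted):

```python
def remove_list(la, l):
    """Remove the elements of la from l, or return None if impossible."""
    if not la:
        return list(l)          # nothing left to remove
    if not l:
        return None             # la nonempty but l exhausted
    a, b = la[0], l[0]
    if a == b:
        return remove_list(la[1:], l[1:])   # matched: drop both heads
    rest = remove_list(la, l[1:])           # keep b, search further
    return None if rest is None else [b] + rest
```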
Lemma in_remove_list :
forall l la, is_sorted la -> is_sorted l ->
match remove_list la l with
| None => forall rmv, ~ (permut (la ++ rmv) l)
| Some rmv => permut (la ++ rmv) l
end.
Proof.
fix in_remove_list 1.
intro l; case l; clear l.
(* l = [] *)
intro la; simpl; case la; clear la.
intros _ _; apply Pnil.
intros a la Sla _ rmv P; assert (H := permut_length P); discriminate.
(* l = _ :: _ *)
intros e l la; simpl; case la; clear la.
intros _ _; reflexivity.
intros a la Sala Sel ; assert (Sl := sorted_tl_sorted Sel).
assert (Sla := sorted_tl_sorted Sala).
generalize (eq_bool_ok a e); case (eq_bool a e).
intros a_eq_e; generalize (in_remove_list l la Sla Sl); case (remove_list la l).
intros rmv P; simpl; rewrite <- permut_cons; assumption.
intros H rmv P; apply (H rmv); simpl in P; rewrite <- permut_cons in P; assumption.
intros a_diff_e; generalize (in_remove_list l (a :: la) Sala Sl); case (remove_list (a :: la) l).
intros rmv P; symmetry; rewrite <- permut_cons_inside.
symmetry; assumption.
reflexivity.
intros H rmv P.
assert (e_in_larm : mem EDS.eq_A e ((a :: la) ++ rmv)).
rewrite (mem_permut_mem e P); left; reflexivity.
simpl in e_in_larm; case e_in_larm; clear e_in_larm.
intro e_eq_a; apply a_diff_e; symmetry; assumption.
rewrite <- mem_or_app; intro e'_in_larm; case e'_in_larm; clear e'_in_larm.
intro e_in_la; apply a_diff_e;
assert (O := o_proof); inversion_clear O as [ _ _ AS]; apply AS.
apply sorted_cons_min with la; [apply Sala | right; trivial].
apply sorted_cons_min with l; [apply Sel | rewrite <- P; left; reflexivity].
intro e_in_rmv; case (mem_split_set _ _ eq_bool_ok _ _ e_in_rmv);
intros e' M; case M; clear M.
intros l1 M; case M; clear M.
intros l2 M; case M; clear M.
intros e_eq_e' M; case M; clear M.
intros M _; subst rmv.
apply (H (l1 ++ l2)).
rewrite ass_app in P; simpl in P.
assert (P' := permut_add_inside ((a :: la) ++ l1) l2 nil l (sym_eq e_eq_e')); simpl in P'.
rewrite <- P' in P.
rewrite ass_app; simpl; assumption.
Qed.
End Make.
(*
(** *** The lists resulting from the partition are not longer than the original one. *)
Lemma length_fst_partition :
forall n, forall pivot, forall l,
length l <= n -> length (fst (partition pivot l)) < S n.
Proof.
induction n; intros pivot l; destruct l as [ | e l ]; simpl; auto with arith.
intro H; generalize (le_Sn_O (length l) H); intros; contradiction.
intro L'; assert (L : length l <= n).
apply le_S_n; trivial.
generalize (IHn pivot l L); destruct (partition pivot l);
destruct (o_elt_dec e pivot); simpl; auto with arith.
Qed.
Lemma length_snd_partition :
forall n, forall pivot, forall l,
length l <= n -> length (snd (partition pivot l)) < S n.
Proof.
induction n; intros pivot l; destruct l as [ | e l ]; simpl; auto with arith.
intro H; generalize (le_Sn_O (length l) H); intros; contradiction.
intro L'; assert (L : length l <= n).
apply le_S_n; trivial.
generalize (IHn pivot l L); destruct (partition pivot l);
destruct (o_elt_dec e pivot); simpl; auto with arith.
Qed.
(** *** Definition of a step of quicksort. *)
Definition F_quick :
forall l : list elt, (forall y : list elt , o_length y l -> list elt) -> list elt.
Proof.
destruct l as [ | e l ].
intros; exact nil.
intro qrec;
set (pl := partition e l);
exact
((qrec (fst pl) (length_fst_partition e l (le_n (length l)))) ++
e :: (qrec (snd pl) (length_snd_partition e l (le_n (length l))))).
Defined.
(** *** Definition of quicksort. *)
Definition quicksort : forall (l : list elt), (list elt) :=
Fix (@well_founded_length elt) (fun l => list elt) F_quick.
(** *** A more practical equivalent definition of quicksort. *)
Lemma quicksort_unfold : forall l : list elt,
quicksort l = @F_quick l (fun y _ => quicksort y).
Proof.
intros; unfold quicksort;
apply Fix_eq with (P:= (fun _:list elt => list elt)).
unfold F_quick; destruct x; simpl; auto.
intros f g H; do 2 rewrite H; trivial.
Save.
(** *** Sorting the empty list yields the empty list. *)
Lemma quick_nil : quicksort nil = nil.
Proof.
rewrite quicksort_unfold; unfold F_quick; simpl; trivial.
Qed.
(** *** The usual equality on quicksort. *)
Lemma quick_cons_aux1 :
forall e l, quicksort (e :: l) =
let pl := partition e l in
(quicksort (fst pl)) ++ e :: (quicksort (snd pl)).
Proof.
intros e l; rewrite quicksort_unfold; unfold F_quick; simpl; trivial.
Qed.
Theorem quick_cons :
forall e l, let (l1,l2) := partition e l in
quicksort (e :: l) = (quicksort l1) ++ e :: (quicksort l2).
Proof.
intros; generalize (quick_cons_aux1 e l); destruct (partition e l).
trivial.
Save.
Extract Constant eq_elt_dec => eq.
Extract Constant o_elt_dec => le.
Extract Constant elt => int.
Extraction split_list.
Extraction partition.
Extraction NoInline partition.
Extraction quicksort.
*)
|
'''
Created on November 2019.
An image generator that returns the input of a neural network each time it is called.
This input consists of a batch of images and their corresponding labels.
@author: Soroosh Tayebi Arasteh <[email protected]>
https://github.com/tayebiarasteh/
'''
import os.path
import json
import numpy as np
import matplotlib.pyplot as plt
import math
from skimage.transform import resize
class ImageGenerator:
def __init__(self, file_path, json_path, batch_size, image_size, rotation=False, mirroring=False, shuffle=False):
'''
:type image_size: tuple
'''
self.class_dict = {0: 'airplane', 1: 'automobile', 2: 'bird', 3: 'cat', 4: 'deer', 5: 'dog', 6: 'frog',
7: 'horse', 8: 'ship', 9: 'truck'}
self.path = (file_path, json_path)
self.batch_size = batch_size
self.image_size = image_size
self.shuffle = shuffle
self.mirroring = mirroring
self.rotation = rotation
self.counter = 0 # shows the number of times next() has been called for each object of the class.
# if self.counter != 0, next() has already been called at least once on this object.
def next(self):
'''This function creates a batch of images and corresponding labels and returns it.'''
with open(self.path[1]) as data_file:
label_file = json.load(data_file)
all_images_indices = np.arange(len(label_file)) # indices of all the images in the dataset in a numpy array.
images = [] # a batch (list) of images
labels = [] # the corresponding labels
if self.shuffle:
np.random.shuffle(all_images_indices)
'''If the last batch is smaller than the others,
complete that batch by reusing images from the beginning of the training dataset:'''
if (self.counter+1)*self.batch_size > len(label_file):
offset = (self.counter+1)*self.batch_size - len(label_file)
chosen_batch = all_images_indices[
self.counter * self.batch_size :len(label_file)]
chosen_batch = np.append(chosen_batch, all_images_indices[0:offset])
self.counter = -1 # incremented to 0 at the end of next(), which resets the counter.
else:
chosen_batch = all_images_indices[self.counter*self.batch_size:(self.counter+1)*self.batch_size]
for i in chosen_batch:
images.append(np.load(os.path.join(self.path[0], str(i) + '.npy')))
labels.append(label_file[str(i)])
# Resizing
for i, image in enumerate(images):
images[i] = resize(image, self.image_size)
# Augmentation
for i, image in enumerate(images):
images[i] = self.augment(image)
# converting list to np array
labels = np.asarray(labels)
images = np.asarray(images)
self.counter += 1
output = (images, labels)
return output
def augment(self,img):
'''This function takes a single image as an input and performs a random transformation
(mirroring and/or rotation) on it and outputs the transformed image'''
# mirroring (randomly)
if self.mirroring:
i = np.random.randint(0, 2, 1) # randomness
if i[0] == 1: # 0: no | 1: yes
img = np.fliplr(img)
# rotation (randomly)
if self.rotation:
i = np.random.randint(0,4,1)
i = i[0]
img = np.rot90(img, i)
return img
def class_name(self, int_label):
'''This function returns the class name for a specific input'''
return self.class_dict[int_label]
def show(self):
'''In order to verify that the generator creates batches as required, this function calls next() to get a
batch of images and labels and visualizes it.'''
images, labels = self.next()
for i, image in enumerate(images):
if self.batch_size > 3:
n_rows = math.ceil(self.batch_size/3) # number of rows to plot for subplot
else:
n_rows = 1
plt.subplot(n_rows, 3, i+1)
plt.title(self.class_name(labels[i]))
toPlot = plt.imshow(image)
# hiding the axes text
toPlot.axes.get_xaxis().set_visible(False)
toPlot.axes.get_yaxis().set_visible(False)
plt.show()
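The wrap-around logic in `next` (reusing images from the start of the dataset when the final batch would be short, then resetting the counter) can be isolated and checked on its own. A small sketch of just that index arithmetic, with `counter`, `batch_size`, and `dataset_size` as stand-in parameters:

```python
import numpy as np

def batch_indices(counter, batch_size, dataset_size):
    """Indices for batch number `counter`, wrapping around as in ImageGenerator.next.
    Returns (indices, next_counter)."""
    indices = np.arange(dataset_size)
    if (counter + 1) * batch_size > dataset_size:
        offset = (counter + 1) * batch_size - dataset_size
        chosen = indices[counter * batch_size : dataset_size]
        chosen = np.append(chosen, indices[0:offset])  # pad from the start
        return chosen, 0  # counter resets after an incomplete batch
    return indices[counter * batch_size : (counter + 1) * batch_size], counter + 1
```

With 10 images and a batch size of 4, the third batch is `[8, 9, 0, 1]` and the counter resets to 0, matching the `self.counter = -1` trick above.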
|
lemma measure_UNION_AE: assumes I: "finite I" shows "(\<And>i. i \<in> I \<Longrightarrow> F i \<in> fmeasurable M) \<Longrightarrow> pairwise (\<lambda>i j. AE x in M. x \<notin> F i \<or> x \<notin> F j) I \<Longrightarrow> measure M (\<Union>i\<in>I. F i) = (\<Sum>i\<in>I. measure M (F i))" |
lemma bigtheta_powr: fixes f :: "'a \<Rightarrow> real" shows "f \<in> \<Theta>[F](g) \<Longrightarrow> (\<lambda>x. \<bar>f x\<bar> powr p) \<in> \<Theta>[F](\<lambda>x. \<bar>g x\<bar> powr p)" |
{-# OPTIONS_GHC -Wno-incomplete-patterns #-}
module VecSpec where
import Data.Complex
import Test.QuickCheck
import Test.QuickCheck.Poly
import Category
import Functor
import Unboxed
import Vec
type N = 10
prop_Vec_Functor_id :: FnProp (Vec3 N A)
prop_Vec_Functor_id = checkFnEqual law_Functor_id
prop_Vec_Functor_comp :: Fun B C -> Fun A B -> FnProp (Vec3 N A)
prop_Vec_Functor_comp f g = checkFnEqual (law_Functor_comp f g)
type UA = Int
type UB = Double
type UC = Complex Double
prop_UVec3_Functor_id :: FnProp (UVec3 N UA)
prop_UVec3_Functor_id = checkFnEqual law_Functor_id
prop_UVec3_Functor_comp ::
(UB -#> UC) -> (UA -#> UB) -> FnProp (UVec3 N UA)
prop_UVec3_Functor_comp f g = checkFnEqual (law_Functor_comp f g)
|
[STATEMENT]
lemma wprepare_erase_Bk[simp]: "wprepare_erase m lm (b, Oc # list)
\<Longrightarrow> wprepare_erase m lm (b, Bk # list)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. wprepare_erase m lm (b, Oc # list) \<Longrightarrow> wprepare_erase m lm (b, Bk # list)
[PROOF STEP]
apply(simp only:wprepare_invs, auto simp: )
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done |
-- |
-- Module : Cartesian.Plane.Core
-- Description :
-- Copyright : (c) Jonatan H Sundqvist, year
-- License : MIT
-- Maintainer : Jonatan H Sundqvist
-- Stability : experimental|stable
-- Portability : POSIX (not sure)
--
-- Created date year
-- TODO | - Which constraints are appropriate (Num is probably too generic, should be Real, maybe RealFrac)
-- - Strictness, performance
-- - Rename (?)
-- SPEC | -
-- -
--------------------------------------------------------------------------------------------------------------------------------------------
-- API
--------------------------------------------------------------------------------------------------------------------------------------------
module Cartesian.Plane.Core where
--------------------------------------------------------------------------------------------------------------------------------------------
-- We'll need these
--------------------------------------------------------------------------------------------------------------------------------------------
-- import Data.List (sort, minimumBy)
-- import Data.Ord (comparing)
-- import Data.Complex hiding (magnitude)
-- import Control.Monad (when)
import Control.Applicative
import Control.Lens ((^.))
import Cartesian.Internal.Types
import Cartesian.Internal.Lenses (begin, end)
import Cartesian.Internal.Core
--------------------------------------------------------------------------------------------------------------------------------------------
-- Functions
--------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------
-- | Determines if a point lies within a polygon using the odd-even method.
--
-- TODO: Use epsilon (?)
-- TODO: How to treat points that lie on an edge
-- inside :: Num n => Polygon n -> Vector2D n -> Bool
-- inside polygon (Vector2D px py) = error "Cartesian.Plane.inside is still a work in progress"
-- where
-- edges = polygon ++ [head polygon] -- Close the loop
-- between (Line (Vector ax ay) (Vector bx by)) = _
-- |
-- instance Convertible (Vector2D f, Vector3D f) where
-- _
-- TODO: Use type families for this stuff (?)
-- |
-- to3D :: Num f => Vector2D f -> Vector3D f
-- to3D (Vector2D x' y') = Vector3D x' y' 0
-- -- |
-- from3D :: Num f => Vector3D f -> Vector2D f
-- from3D (Vector3D x' y' _) = Vector2D x' y'
-- -- | Perform some unary operation on a 2D vector as a 3D vector, converting the result back to 2D by discarding the z component.
-- -- TODO: Rename (?)
-- -- TODO: Loosen Num restriction (eg. to anything with a 'zero' value) (?)
-- in3D :: (Num f, Num f') => (Vector3D f -> Vector3D f') -> Vector2D f -> Vector2D f'
-- in3D f = from3D . f . to3D
-- | Same as in3D, but for binary operations.
-- _ :: _
-- _ f = from3D . (dotmap f) . to3D
-- Vector math -----------------------------------------------------------------------------------------------------------------------------
-- | Angle (in radians) between the positive X-axis and the vector
-- argument :: (Floating a, Eq a) => Vector a -> a
-- argument (Vector 0 0) = 0
-- argument (Vector x y) = atan $ y/x
--
--
-- arg :: (Floating a, Eq a) => Vector a -> a
-- arg = argument
--
--
-- -- | Vector -> (magnitude, argument)
-- polar :: (Floating a, Eq a) => Vector a -> (a, a)
-- polar v@(Vector x y) = (magnitude v, argument v)
-- Linear functions ------------------------------------------------------------------------------------------------------------------------
-- | Yields the intersection point of two finite lines. The lines are defined inclusively by
-- their endpoints. The result is wrapped in a Maybe value to account for non-intersecting
-- lines.
-- TODO: Refactor
-- TODO: Move equation solving to separate function (two linear functions)
-- TODO: Invariants, check corner cases
-- TODO: Deal with vertical lines
-- TODO: Factor out infinite-line logic
-- TODO: Decide how to deal with identical lines
-- TODO: Factor out domain logic (eg. write restrict or domain function)
-- TODO: Return Either instead of Maybe (eg. Left "parallel") (?)
-- TODO: Visual debugging functions
-- TODO: Math notes, MathJax or LaTex
-- TODO: Intersect for curves (functions) and single points (?)
-- TODO: Polymorphic, typeclass (lines, shapes, ranges, etc.) (?)
-- TODO: Intersect Rectangles
intersect :: RealFloat f => Line (Vector2D f) -> Line (Vector2D f) -> Maybe (Vector2D f)
intersect f' g' = do
p <- mp
indomain f' p
indomain g' p
where
-- indomain :: RealFloat f => Line (Vector2D f) -> Vector2D f -> Maybe (Vector2D f)
indomain h' = restrict (h'^.begin) (h'^.end) -- TODO: Rename
-- mp :: Maybe (Vector2D f)
mp = case (linear f', linear g') of
(Just f, Nothing) -> let x' = g'^.begin.x in Just . Vector2D x' $ plotpoint f x'
(Nothing, Just g) -> let x' = f'^.begin.x in Just . Vector2D x' $ plotpoint g x'
(Just f, Just g) -> linearIntersect f g
_ -> Nothing
-- | Gives the linear function overlapping the given segment, or Nothing if there is no such function
linear :: RealFloat f => Line (Vector2D f) -> Maybe (Linear f)
linear line = Linear <$> intercept line <*> slope line
-- | Applies a linear function to the given value
-- TODO: Rename (?)
plotpoint :: RealFloat f => Linear f -> f -> f
plotpoint f x' = slopeOf f*x' + interceptOf f
-- | Finds the intersection (if any) of two linear functions
-- TODO: Use Epsilon (?)
-- TODO: Rename (eg. 'solve') (?)
linearIntersect :: RealFloat f => Linear f -> Linear f -> Maybe (Vector2D f)
linearIntersect f g
| slopeOf f == slopeOf g = Nothing
| otherwise = let x' = (β-b)/(a-α) in Just $ Vector2D x' (a*x' + b)
where
(a, α) = (slopeOf f, slopeOf g)
(b, β) = (interceptOf f, interceptOf g)
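`linearIntersect` solves `a*x + b = α*x + β` for `x` and evaluates either line at the solution, returning `Nothing` when the slopes coincide. The same arithmetic in a short Python sketch (names are ad hoc, not from this module):

```python
def linear_intersect(a, b, alpha, beta):
    """Intersection of y = a*x + b and y = alpha*x + beta, or None if parallel."""
    if a == alpha:
        return None  # equal slopes: parallel (or identical) lines
    x = (beta - b) / (a - alpha)
    return (x, a * x + b)
```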
-- |
slope :: RealFloat f => Line (Vector2D f) -> Maybe f
slope (Line fr to)
| dx == 0 = Nothing
| otherwise = Just $ dy/dx
where
(Vector2D dx dy) = liftA2 (-) to fr
-- |
intercept :: RealFloat f => Line (Vector2D f) -> Maybe f
intercept line = do
slope' <- slope line
return $ y' - slope'*x'
where
(Vector2D x' y') = line^.begin
--------------------------------------------------------------------------------------------------------------------------------------------
-- | Ensures that a given point lies within the domain and codomain
-- TODO: Make polymorphic
-- TODO: Let this function work on scalars, write another function for domain and codomain (?)
-- restrict domain codomain p = _
-- restrict :: (Applicative v, Traversable v, Num n, Ord n) => v n -> v n -> v n -> Maybe (v n)
-- restrict a b p@(Vector2D x' y')
-- | indomain && incodomain = Just p
-- | otherwise = Nothing
-- where
-- (Vector2D lowx lowy) = dotwise min a b
-- (Vector2D highx highy) = dotwise max a b
-- indomain = between lowx highx x'
-- incodomain = between lowy highy y'
|
Require Import Lists.List.
Require Import Sorting.Permutation.
Require Import Fin.
Require Import Vector.
Require Import Logic.Eqdep_dec.
Require Import Arith.PeanoNat.
Require Import Arith.Peano_dec.
Class ConstructorSpec (Ctor: Set) :=
{ contraVariantArity : Ctor -> nat;
coVariantArity : Ctor -> nat;
epsilon: Ctor -> Prop }.
Class ConstructorRelation (Ctor: Set) `{ConstructorSpec Ctor}
(R : forall C D,
(Fin.t (contraVariantArity C) -> Fin.t (contraVariantArity D)) ->
(Fin.t (coVariantArity D) -> Fin.t (coVariantArity C)) -> Prop) :=
{ reflexivity: forall C f g,
(forall k, f k = k) ->
(forall k, g k = k) ->
R C C f g;
transitivity:
forall C D E f g f' g' f'f gg',
(forall k, f'f k = f' (f k)) ->
(forall k, gg' k = g (g' k)) ->
R C D f g -> R D E f' g' -> R C E f'f gg';
contraMap_unique:
forall C D f f' g g',
R C D f g -> R C D f' g' -> forall x, g x = g' x;
epsilon_monotonicity:
forall C D f g, R C D f g -> epsilon C -> epsilon D }.
Module Type ConstructorSpecification.
Parameter Ctor: Set.
Parameter CtorSpec: ConstructorSpec Ctor.
Parameter R: forall C D,
(Fin.t (contraVariantArity C) -> Fin.t (contraVariantArity D)) ->
(Fin.t (coVariantArity D) -> Fin.t (coVariantArity C)) -> Prop.
Parameter CtorRel: ConstructorRelation Ctor R.
Parameter Ctor_eq_dec: forall (C D: Ctor), { C = D } + { C <> D }.
End ConstructorSpecification.
Module Type SubtypeSystem(Import Spec: ConstructorSpecification).
Unset Elimination Schemes.
Inductive IntersectionType: Set :=
| Constr : forall C,
t (IntersectionType) (contraVariantArity C) ->
t (IntersectionType) (coVariantArity C) ->
IntersectionType
| Inter: IntersectionType -> IntersectionType -> IntersectionType.
(* Boilerplate for induction/recursion on nested vectors *)
Fixpoint IntersectionType_rect
(P: IntersectionType -> Type)
(constr_case:
forall C
(contraArgs: t (IntersectionType) (contraVariantArity C))
(coArgs: t (IntersectionType) (coVariantArity C)),
(forall k, P (nth contraArgs k)) ->
(forall k, P (nth coArgs k)) ->
P (Constr C contraArgs coArgs))
(inter_case: forall sigma tau, P sigma -> P tau -> P (Inter sigma tau))
(sigma: IntersectionType) {struct sigma}: P (sigma) :=
match sigma with
| Constr C contraArgs coArgs =>
constr_case C contraArgs coArgs
((fix contra_prf n (xs: t IntersectionType n) {struct xs}: forall k, P (nth xs k) :=
match xs as xs' in t _ n' return (forall (k : Fin.t n'), P (nth xs' k)) with
| nil _ => fun k => Fin.case0 (fun k => P (nth (nil _) k)) k
| cons _ x _ xs =>
fun k =>
Fin.caseS' k (fun k => P (nth (cons _ x _ xs) k))
(IntersectionType_rect P constr_case inter_case x)
(contra_prf _ xs)
end) _ contraArgs)
((fix co_prf n (xs: t IntersectionType n) {struct xs}: forall k, P (nth xs k) :=
match xs as xs' in t _ n' return (forall (k : Fin.t n'), P (nth xs' k)) with
| nil _ => fun k => Fin.case0 (fun k => P (nth (nil _) k)) k
| cons _ x _ xs =>
fun k =>
Fin.caseS' k (fun k => P (nth (cons _ x _ xs) k))
(IntersectionType_rect P constr_case inter_case x)
(co_prf _ xs)
end) _ coArgs)
| Inter sigma tau =>
inter_case sigma tau
(IntersectionType_rect P constr_case inter_case sigma)
(IntersectionType_rect P constr_case inter_case tau)
end.
Definition IntersectionType_ind
(P : IntersectionType -> Prop)
(constr_case:
forall C
(contraArgs: t (IntersectionType) (contraVariantArity C))
(coArgs: t (IntersectionType) (coVariantArity C)),
(forall k, P (nth contraArgs k)) ->
(forall k, P (nth coArgs k)) ->
P (Constr C contraArgs coArgs))
(inter_case: forall sigma tau, P sigma -> P tau -> P (Inter sigma tau))
(sigma: IntersectionType): P (sigma) :=
IntersectionType_rect P constr_case inter_case sigma.
Definition IntersectionType_rec
(P : IntersectionType -> Set)
(constr_case:
forall C
(contraArgs: t (IntersectionType) (contraVariantArity C))
(coArgs: t (IntersectionType) (coVariantArity C)),
(forall k, P (nth contraArgs k)) ->
(forall k, P (nth coArgs k)) ->
P (Constr C contraArgs coArgs))
(inter_case: forall sigma tau, P sigma -> P tau -> P (Inter sigma tau))
(sigma: IntersectionType): P (sigma) :=
IntersectionType_rect P constr_case inter_case sigma.
(* Helper lemmas *)
Lemma Vector_tl_ineq:
forall {T} (x : T) {n} xs ys, xs <> ys -> cons T x n xs <> cons T x n ys.
Proof.
intros T x n xs ys ineq.
intro devil.
injection devil as devil.
contradict ineq.
apply inj_pair2_eq_dec.
- apply Nat.eq_dec.
- exact devil.
Qed.
Definition Vector_eq_dec:
forall {T} {n}
(t_eq_dec: forall (x y: T), {x = y} + {x <> y}) (xs ys: t T n), {xs = ys} + {xs <> ys}.
Proof.
intros T n t_eq_dec xs.
induction xs as [ | x n xs IH ]; intros ys.
- apply (fun P prf => case0 P prf ys).
left; reflexivity.
- apply (caseS' ys).
clear ys; intros y ys.
destruct (t_eq_dec x y) as [ x_eq | x_ineq ].
+ rewrite x_eq.
destruct (IH ys) as [ xs_eq | ].
* rewrite xs_eq.
left; reflexivity.
* right; apply Vector_tl_ineq; assumption.
+ right; intro devil; inversion devil.
contradiction x_ineq.
Defined.
(* Decidable syntactic equality on types (helpful e.g. for pattern matching) *)
Fixpoint IntersectionType_eq_dec (sigma: IntersectionType): forall tau, { sigma = tau } + { sigma <> tau }.
Proof.
intro tau.
destruct sigma as [ C contraArgs coArgs | sigma1 sigma2 ];
destruct tau as [ D contraArgs' coArgs' | tau1 tau2 ];
try solve [ right; intro devil; inversion devil ].
- destruct (Ctor_eq_dec C D) as [ prf | disprf ];
[ | right; intro devil; inversion devil; contradiction ].
revert contraArgs' coArgs'.
rewrite <- prf.
clear D prf.
intros contraArgs' coArgs'.
destruct (Vector_eq_dec IntersectionType_eq_dec contraArgs contraArgs') as [ eq | ineq ];
[ | right; intro devil; inversion devil as [ contraArgs_eq ] ].
+ destruct (Vector_eq_dec IntersectionType_eq_dec coArgs coArgs') as [ eq' | ineq' ];
[ rewrite eq; rewrite eq'; left; reflexivity
| right; intro devil; inversion devil as [ [ contraArgs_eq coArgs_eq ] ] ].
* apply ineq'.
revert coArgs_eq.
clear ...
intro coArgs_eq.
assert (coArgs_eq': existT (fun n => t IntersectionType n) (coVariantArity C) coArgs =
existT (fun n => t IntersectionType n) (coVariantArity C) coArgs').
{ remember (existT (fun n => t IntersectionType n) (coVariantArity C) coArgs) as lhs eqn:lhs_eq.
dependent rewrite coArgs_eq in lhs_eq.
rewrite lhs_eq.
reflexivity. }
clear coArgs_eq.
revert coArgs coArgs' coArgs_eq'.
generalize (coVariantArity C).
intros n coArgs coArgs' coArgs_eq.
induction coArgs.
{ apply (fun r => case0 _ r coArgs').
reflexivity. }
{ revert coArgs_eq.
apply (caseS' coArgs'); clear coArgs'; intros arg' coArgs' coArgs_eq.
inversion coArgs_eq as [ [ arg'_eq coArgs_eq' ] ].
apply f_equal.
auto. }
+ apply ineq.
assert (contraArgs_eq': existT (fun n => t IntersectionType n) (contraVariantArity C) contraArgs =
existT (fun n => t IntersectionType n) (contraVariantArity C) contraArgs').
{ remember (existT (fun n => t IntersectionType n) (contraVariantArity C) contraArgs) as lhs eqn:lhs_eq.
dependent rewrite contraArgs_eq in lhs_eq.
rewrite lhs_eq.
reflexivity. }
clear ineq devil contraArgs_eq.
revert contraArgs contraArgs' contraArgs_eq'.
clear ...
generalize (contraVariantArity C).
intros n contraArgs contraArgs' contraArgs_eq.
induction contraArgs.
* apply (fun r => case0 _ r contraArgs').
reflexivity.
* revert contraArgs_eq.
apply (caseS' contraArgs'); clear contraArgs'; intros arg' contraArgs' contraArgs_eq.
inversion contraArgs_eq as [ [ arg'_eq contraArgs_eq' ] ].
apply f_equal.
auto.
- destruct (IntersectionType_eq_dec sigma1 tau1) as [ prf1 | disprf1 ];
[ rewrite prf1 | right; intro devil; inversion devil; contradiction ];
destruct (IntersectionType_eq_dec sigma2 tau2) as [ prf2 | disprf2 ];
[ rewrite prf2 | right; intro devil; inversion devil; contradiction ];
left; reflexivity.
Defined.
Set Elimination Schemes.
(* Helper stuff to unclutter definitions *)
Import ListNotations.
Import EqNotations.
Definition fixContraArity
{contraArity: nat}
{k: nat} {ctors: t Ctor (S k)}
(arityOk: forall i, contraArity = contraVariantArity (nth ctors i))
{i: Fin.t k}
(f: Fin.t contraArity -> Fin.t contraArity):
Fin.t (contraVariantArity (nth ctors (FS i))) -> Fin.t (contraVariantArity (nth ctors F1)) :=
rew [fun n => Fin.t n -> _ ] arityOk (FS i) in
rew [fun n => _ -> Fin.t n] arityOk F1 in f.
Definition fixCoArity
{coArity: nat}
{k: nat} {ctors: t Ctor (S k)}
(arityOk: forall i, coArity = coVariantArity (nth ctors i))
{i: Fin.t k}
(f: Fin.t coArity -> Fin.t coArity):
Fin.t (coVariantArity (nth ctors F1)) -> Fin.t (coVariantArity (nth ctors (FS i))) :=
rew [fun n => Fin.t n -> _ ] arityOk F1 in
rew [fun n => _ -> Fin.t n] arityOk (FS i) in f.
Fixpoint positions (n : nat): t (Fin.t n) n :=
match n with
| O => nil _
| S n => cons _ F1 _ (map FS (positions n))
end.
Lemma positions_spec: forall n k, nth (positions n) k = k.
Proof.
intro n.
induction n as [ | n IHn ]; intro k.
- inversion k.
- remember (S n) as n' eqn:n'_eq.
destruct k.
+ reflexivity.
+ simpl.
injection n'_eq as n_eq.
revert k.
rewrite n_eq.
intro k.
rewrite (nth_map _ _ _ _ (eq_refl k)).
rewrite (IHn k).
reflexivity.
Qed.
Definition makeConstructor
{k : nat} {coArity contraArity: nat}
(ctors: t Ctor k)
(contraArgs: t (t IntersectionType contraArity) k)
(coArgs: t (t IntersectionType coArity) k)
(contraArityOk: forall i, contraArity = contraVariantArity (nth ctors i))
(coArityOk: forall i, coArity = coVariantArity (nth ctors i))
(i: Fin.t k): IntersectionType :=
Constr (nth ctors i)
(rew (contraArityOk i) in nth contraArgs i)
(rew (coArityOk i) in nth coArgs i).
(* The subtype system *)
Inductive LEQ: list IntersectionType -> IntersectionType -> Prop :=
| CtorLeft: forall C contraArgs coArgs rho Gamma Gamma',
Permutation Gamma' ((Constr C contraArgs coArgs) :: Gamma) ->
LEQ Gamma rho ->
LEQ Gamma' rho
| InterRight: forall Gamma sigma tau, LEQ Gamma sigma -> LEQ Gamma tau -> LEQ Gamma (Inter sigma tau)
| InterLeft: forall Gamma sigma tau rho Gamma' Gamma'',
Permutation Gamma' (sigma :: tau :: Gamma) ->
Permutation Gamma'' (Inter sigma tau :: Gamma) ->
LEQ Gamma' rho ->
LEQ Gamma'' rho
| CtorRight:
forall k contraArity coArity
(ctors: t Ctor (S k))
(contraArgs: t (t IntersectionType contraArity) (S k))
(coArgs: t (t IntersectionType coArity) (S k))
(fs: t (Fin.t contraArity -> Fin.t contraArity) k)
(gs: t (Fin.t coArity -> Fin.t coArity) k)
(contraArityOk: forall i, contraArity = contraVariantArity (nth ctors i))
(coArityOk: forall i, coArity = coVariantArity (nth ctors i))
(Gamma: list IntersectionType),
(k = 0 -> epsilon (hd ctors)) ->
(forall (i: Fin.t k),
R (nth ctors (FS i)) (nth ctors F1)
(fixContraArity contraArityOk (nth fs i))
(fixCoArity coArityOk (nth gs i))) ->
(forall i j,
LEQ (nth (nth contraArgs F1) j :: []) (nth (nth contraArgs (FS i)) ((nth fs i) j))) ->
(forall i, exists Gamma,
Permutation (to_list (map (fun j => nth (nth coArgs (FS j)) ((nth gs j) i)) (positions k))) Gamma /\
LEQ Gamma (nth (nth coArgs F1) i)) ->
Permutation
Gamma
(to_list (map (fun i => makeConstructor ctors contraArgs coArgs contraArityOk coArityOk (FS i))
(positions k))) ->
LEQ Gamma (makeConstructor ctors contraArgs coArgs contraArityOk coArityOk F1).
(* Derived rules *)
Lemma sigmaLeft:
forall Gamma Gamma' sigma rho,
Permutation Gamma' (sigma :: Gamma) ->
LEQ Gamma rho -> LEQ Gamma' rho.
Proof.
intros Gamma Gamma' sigma.
revert Gamma Gamma'.
induction sigma as [ C contraArgs coArgs IHcontra IHco | sigma1 sigma2 IHsigma1 IHsigma2 ];
intros Gamma Gamma' rho permPrf rhoPrf.
- apply (CtorLeft C contraArgs coArgs rho Gamma); assumption.
- apply (InterLeft Gamma sigma1 sigma2 rho (sigma1::sigma2::Gamma) Gamma').
+ apply Permutation_refl.
+ assumption.
+ eapply IHsigma1; [ apply Permutation_refl | ].
eapply IHsigma2; [ apply Permutation_refl | ].
assumption.
Qed.
Lemma sigmaRefl: forall sigma, LEQ [sigma] sigma.
Proof.
intro sigma.
induction sigma as [ C contraArgs coArgs IHcontra IHco | sigma1 sigma2 IHsigma1 IHsigma2].
- set (ctors := cons _ C _ (cons _ C _ (nil _))).
assert (contraArityOk : forall i, contraVariantArity C = contraVariantArity (nth ctors i)).
{ intro i.
apply (Fin.caseS' i); [ reflexivity | clear i; intro i ].
apply (Fin.caseS' i); [ reflexivity | clear i; intro i ].
inversion i. }
assert (coArityOk : forall i, coVariantArity C = coVariantArity (nth ctors i)).
{ intro i.
apply (Fin.caseS' i); [ reflexivity | clear i; intro i ].
apply (Fin.caseS' i); [ reflexivity | clear i; intro i ].
inversion i. }
generalize (CtorRight 1 (contraVariantArity C) (coVariantArity C)
(cons _ C _ (cons _ C _ (nil _)))
(cons _ contraArgs _ (cons _ contraArgs _ (nil _)))
(cons _ coArgs _ (cons _ coArgs _ (nil _)))
(cons _ (fun x => x) _ (nil _))
(cons _ (fun x => x) _ (nil _))
contraArityOk coArityOk
[Constr C contraArgs coArgs]
(fun k_eq =>
False_ind _ (match k_eq in _ = n return match n with | 1 => True | _ => False end with
| eq_refl => I
end))).
unfold makeConstructor.
rewrite (UIP_dec (Nat.eq_dec) (contraArityOk F1) eq_refl).
rewrite (UIP_dec (Nat.eq_dec) (coArityOk F1) eq_refl).
simpl.
intro result; apply result; clear result.
+ intro i.
apply (Fin.caseS' i); [ clear i | intro devil; inversion devil ].
simpl.
apply reflexivity.
* intro k.
unfold fixContraArity.
simpl.
match goal with
| [|- (rew [fun n => _] ?p1 in _) k = _ ] =>
set (eq1 := p1); simpl in eq1;
rewrite (UIP_dec (Nat.eq_dec) eq1 eq_refl);
simpl
end.
match goal with
| [|- (rew [fun n => _] ?p1 in _) k = _ ] =>
set (eq2 := p1); simpl in eq2;
rewrite (UIP_dec (Nat.eq_dec) eq2 eq_refl);
simpl
end.
reflexivity.
* intro k.
unfold fixCoArity.
simpl.
match goal with
| [|- (rew [fun n => _] ?p1 in _) k = _ ] =>
set (eq1 := p1); simpl in eq1;
rewrite (UIP_dec (Nat.eq_dec) eq1 eq_refl);
simpl
end.
match goal with
| [|- (rew [fun n => _] ?p1 in _) k = _ ] =>
set (eq2 := p1); simpl in eq2;
rewrite (UIP_dec (Nat.eq_dec) eq2 eq_refl);
simpl
end.
reflexivity.
+ intro i.
apply (Fin.caseS' i); [ clear i | intro devil; inversion devil ].
simpl.
exact IHcontra.
+ intro i.
eexists; split; [ apply Permutation_refl | ].
apply IHco.
+ rewrite (UIP_dec (Nat.eq_dec) (contraArityOk (FS F1)) eq_refl).
rewrite (UIP_dec (Nat.eq_dec) (coArityOk (FS F1)) eq_refl).
reflexivity.
- apply (InterLeft List.nil sigma1 sigma2 _ _ _ (Permutation_refl _) (Permutation_refl _)).
apply InterRight.
+ apply (sigmaLeft _ (sigma1 :: sigma2 :: []) sigma2 sigma1 (perm_swap _ _ _) IHsigma1).
+ apply (sigmaLeft _ (sigma1 :: sigma2 :: []) sigma1 sigma2 (Permutation_refl _) IHsigma2).
Qed.
End SubtypeSystem.
|
-- Andreas, 2018-10-23, issue #3309 reported by G. Brunerie
--
-- Check that we can use irrelevant record fields in copattern matching.
--
-- (A refactoring broke the correct relevances of pattern variables
-- after matching on an irrelevant projection pattern.)
record Σ (A : Set) (B : A → Set) : Set where
constructor _,_
field
fst : A
.snd : B fst
open Σ
pair : {A : Set} {B : A → Set} (a : A) .(b : B a) → Σ A B
pair a b .fst = a
pair a b .snd = b
f : {A : Set} {B : A → Set} (a : A) .(b : B a) → Σ A B
fst (f a b) = a
snd (f a b) = b
-- Should work.
|
! { dg-do run }
program rabbithole
implicit none
character(len=:), allocatable :: text_block(:)
integer i, ii
character(len=10) :: cten='abcdefghij'
character(len=20) :: ctwenty='abcdefghijabcdefghij'
ii = -6
text_block = [character(len=ii) :: cten, ctwenty]
if (any(len_trim(text_block) /= 0)) call abort
end program rabbithole
|
------------------------------------------------------------------------
-- The Agda standard library
--
-- Comonads
------------------------------------------------------------------------
-- Note that currently the comonad laws are not included here.
{-# OPTIONS --without-K --safe #-}
module Category.Comonad where
open import Level
open import Function
private
variable
a b c f : Level
A : Set a
B : Set b
C : Set c
record RawComonad (W : Set f → Set f) : Set (suc f) where
infixl 1 _=>>_ _=>=_
infixr 1 _<<=_ _=<=_
field
extract : W A → A
extend : (W A → B) → (W A → W B)
duplicate : W A → W (W A)
duplicate = extend id
liftW : (A → B) → W A → W B
liftW f = extend (f ∘′ extract)
_=>>_ : W A → (W A → B) → W B
_=>>_ = flip extend
_=>=_ : (W A → B) → (W B → C) → W A → C
f =>= g = g ∘′ extend f
_<<=_ : (W A → B) → W A → W B
_<<=_ = extend
_=<=_ : (W B → C) → (W A → B) → W A → C
_=<=_ = flip _=>=_
|
[STATEMENT]
lemma termination_no_match:
"i < length ss \<Longrightarrow> ss ! i = C nm \<bullet>\<bullet> ts
\<Longrightarrow> sum_list (map size_tm ts) < sum_list (map size_tm ss)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>i < length ss; ss ! i = C nm \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> sum_list (map NBE.size_tm ts) < sum_list (map NBE.size_tm ss)
[PROOF STEP]
apply(subgoal_tac "C nm \<bullet>\<bullet> ts : set ss")
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. \<lbrakk>i < length ss; ss ! i = C nm \<bullet>\<bullet> ts; C nm \<bullet>\<bullet> ts \<in> set ss\<rbrakk> \<Longrightarrow> sum_list (map NBE.size_tm ts) < sum_list (map NBE.size_tm ss)
2. \<lbrakk>i < length ss; ss ! i = C nm \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> C nm \<bullet>\<bullet> ts \<in> set ss
[PROOF STEP]
apply(drule sum_list_map_remove1[of _ _ size_tm])
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. \<lbrakk>i < length ss; ss ! i = C nm \<bullet>\<bullet> ts; sum_list (map NBE.size_tm ss) = NBE.size_tm (C nm \<bullet>\<bullet> ts) + sum_list (map NBE.size_tm (remove1 (C nm \<bullet>\<bullet> ts) ss))\<rbrakk> \<Longrightarrow> sum_list (map NBE.size_tm ts) < sum_list (map NBE.size_tm ss)
2. \<lbrakk>i < length ss; ss ! i = C nm \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> C nm \<bullet>\<bullet> ts \<in> set ss
[PROOF STEP]
apply(simp add:size_tm_foldl_At size_list_conv_sum_list)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>i < length ss; ss ! i = C nm \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> C nm \<bullet>\<bullet> ts \<in> set ss
[PROOF STEP]
apply (metis in_set_conv_nth)
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done |
class foo (α : Type) : Type := (f : α)
def foo.f' {α : Type} [c : foo α] : α := foo.f
#print foo.f -- def foo.f : {α : Type} → [self : foo α] → α
#print foo.f' -- def foo.f' : {α : Type} → [c : foo α] → α
variable {α : Type} [c : foo α]
#check c.f -- ok
#check c.f' -- ok
structure bar : Prop := (f : ∀ {m : Nat}, m = 0)
def bar.f' : bar → ∀ {m : Nat}, m = 0 := bar.f
#print bar.f -- def bar.f : bar → ∀ {m : ℕ}, m = 0
#print bar.f' -- def bar.f' : bar → ∀ {m : ℕ}, m = 0
variable (h : bar) (m : Nat)
#check (h.f : ∀ {m : Nat}, m = 0) -- ok
#check (h.f : m = 0) -- ok
#check h.f (m := m) -- ok
#check h.f (m := 0) -- ok
#check (h.f' : m = 0) -- ok
theorem ex1 (n) : (h.f : n = 0) = h.f (m := n) :=
rfl
|
namespace Lean
syntax "foo " binderIdent : term
example : Syntax → MacroM Syntax
| `(foo _) => `(_)
| `(foo $x:ident) => `($x:ident)
| _ => `(_)
|
{-# LANGUAGE BangPatterns #-}
--{-# XRankNTypes #-}
module RprGD2 (OptimisationType (Rapid , FromData ,WithDepth)
, mmlRprGdEvl,mml2DsGD -- ,mmlRprTest
, mml2DsGen, RprMix2.aBar,mapP -- , stdTest
--,mmlRprTest
) where
import Numeric.LinearAlgebra
import Numeric.GSL.Statistics (stddev,lag1auto)
import qualified Data.ByteString.Lazy.Char8 as L (writeFile, pack, unpack, intercalate,appendFile, append,concat)
import System.Random
import Control.Concurrent
import Control.DeepSeq
import Foreign.Storable
import Control.Parallel (par, pseq)
import qualified Control.Parallel.Strategies as St-- (parMap)
import System.IO.Unsafe
import Data.Maybe (isJust,fromMaybe,maybe,listToMaybe,fromJust)
import Data.Time
import Data.Char (isDigit)
--import Text.Printf
import Data.Ord (comparing)
import Control.Monad (liftM,liftM2)
import Data.List (transpose, foldr1, sort, foldl' ,nub, nubBy,sortBy, minimumBy, partition)
import qualified Data.Map as M (Map,empty,insertWith' ,mapWithKey,filterWithKey,toList, fromList)
--import IO
--------------------------------------------------------------------------------------------
import HMDIFPrelude (bsReadChr,bs2Int')
import RprMix2 -- (vLength)
import LINreg2
import ResultTypes
import ProcessFile (writeToFile,log2File,writeOrAppend,myTime)
import LMisc hiding (force)
import ListStats (nCr,normal,binomial ,myTake,-- corr,
mean, adjustedMatrix,cdf, meanBy, stDivBy, -- stDiv,
adjustedMatrixN, adjustedMatrixPN,stNorm,mean,stDivM)
---------------------------------------------------------------------------------------------
---- optimisation type
data OptimisationType = Rapid | FromData | WithDepth Int deriving (Show, Eq)-- | WithNum Int
---------------------------------------------------------------------------------------------
---- Paralell definitions
concatPMap :: (a -> [b]) -> [a] -> [b]
concatPMap f = concat . St.parMap (St.parList St.rseq) f
-- parallel map and parallel concatMap
--mapP = St.parMap (St.dot St.rseq St.rpar)
mapP' :: (a -> b) -> [a] -> [b]
mapP' f = foldl' (\bs a -> let fa = (f a) `St.using` St.rseq in
fa : bs ) []
--
mapP :: (a -> b) -> [a] -> [b]
mapP f = St.parMap St.rseq f
concatPMap' f = foldl'Rnf (\xs x -> xs ++ [St.using (f x) (St.dot St.rseq St.rpar) ] ) []
--- strict fold vector
-- foldVector :: Storable a => (a -> b -> b) -> b -> Vector a -> b
foldVector' :: (NFData b, Storable a) => (b -> a -> b) -> b -> Vector a -> b
foldVector' f a vs = foldVector (\b g x -> g (seqit $ f x b)) id vs a
where seqit a = rnf a `pseq` a
--------------------------------------------------------------------------------------------------
myMinimum :: (NFData a, Ord a) => [a] -> a
myMinimum (x:xs) = foldl'Rnf min x xs
myMinimum _ = error "error: myMinimum empty list"
--
--mmlRprGD :: Vector Double -> Double -> Double -> Matrix Double -> Mmodel -> (Mmodel, Double)
{-# NOINLINE mmlRprGD #-}
mmlRprGD g ys n cts nSLP dss (ml, model) = mmlRprGD_aux g ys n cts nSLP dss model
mmlRprGD_aux :: (RandomGen g) => g
-> Vector Double
-> Double
-> [Int]
-> Maybe Bool
-> Matrix Double
-> Mmodel
-> IO OModel
mmlRprGD_aux g ys n cts nSLP dss model = do
return $ Omod (nAbaMod model taU) oldMsgLn
where
------- compute values from the input model, eg the weighs-----------
nAbaMod (MkModel a b c x _ ) k = (MkModel a b c x k)
----
fL = (+ 1) . fromIntegral . length
aA' = fL cts
-------------------------------------------------------
siG = sigD model
siGE = sigE model
taU = tau model
---------------------------------------------------------
xXa = invS nSLP (dss <> (aBar model)) ---
abar = aBar model
aA = fromIntegral . (\ a -> if a == 2 then 1 else a `div` 2) $ dim abar
errFun a b = (r_n model a b)
rN = zipVectorWith errFun xXa ys -- (r_n model)
lnR = dim rN -- get the magnitude of rN
wData :: Matrix Double
wData = diagRect 0 rN lnR lnR
wError :: Matrix Double
wError = diagRect 0 (mapVector (1 -) rN) lnR lnR
---
xtWx = ((trans dss) <> wData) <> dss
------------------------------------------------------------
---
nAlpha = foldVector (+) 0 rN
nAlpA1 = nAlpha - aA - 1 --
--
ysSubXa = zipVectorWith (-) ys xXa -- the data minus the predictions
ySubWySub ww = ysSubXa <.> (ww <> ysSubXa) -- y
--
dehAbarSq = (abar <.> abar)
aBarSqTau1 = dehAbarSq / (2 * taU^2)
----------------------------------------------------------------------------------------
------------------- calculate the message length of the input model --------------------
----------------------------------------------------------------------------------------
!oldMsgLn = l1 `par` l2 `par` l3 `par` l4 `pseq` (l1 + l2 + l3 + l4)
----------------------------------log 0.01 *------------------------------------------------------
restrPr = ((aA + 1)* log 2)/ 2
restrPrior = maybe restrPr (\_ -> restrPr) nSLP
l1 = (aA + 2.0) * logg taU + aBarSqTau1 - restrPrior + 0.5 * ((n-2) * log (2 * pi))
----------------------------------- log 2 * (aA + 1)/2 ---------------------------------
l2 = nAlpA1 * logg siG + (1/(2 * siG^2)) * (ySubWySub wData)
l3 = (n - nAlpha) * logg siGE + (1/(2 * siGE^2)) * (ySubWySub wError) + 0.5 * logg (2 * nAlpha)
mlLogs = 0.5 * log (det xtWx) + 0.5 * logg (2*(n - nAlpha)) + 0.5 * log (pi * (aA + 3))
l4 = mlLogs - 1.319
----------------------------------------------------------------------------------------------------
dummyMod = Omod (MkModel 0 0 0 (fromList [1]) 0) 100000000
vunStdNorm' vs = vunStdNorm (vmean vs) (vstDiv vs) vs
{----------------------------------------------------------------------------------------------------
mmlRprGdEvl: the mmlRPR evaluation function. Takes:
n - the range in years of roughness data (which corresponds to the number of data points
when dealing with simulation data)
ms - the number of chains that are joined together - the maintenance strategy.
Note that the mml2DS uses the mmlRPR on adjacent sections joined together. Hence this
parameter says how many sections are to be joined for calls to mmlRprGdEvl. When this
function is called outside of the mml2DS, this value is always 1 (i.e. we do not make any
prior assumptions about maintenance on the chain)
ys - a list of the data values for this chain
pp - the interval in years between maintenance interventions
gen - a random generator to initialize the process
nSlopes - determines whether to exclude negative slopes: if true, negative slopes are excluded
otherwise they are not
rprOnly - toggles the application of the MMLRPR function only. The likelihood of maintenance
is not applied to each intervention
----------------------------------------------------------------------------------------------------}
mmlRprGdEvl :: (RandomGen g) => Int ->
Int -> -- error search depth limit
[Vector Double] -> -- Either (Vector Double) [Double] -> -- the data in standardized form
Double ->
g ->
Maybe Bool -> -- negative slopes (temporarily used for turning on and off the mixture model)
Int ->
IO (Maybe ( (OModel,(Vector Double,[Int]) ) , [Double])) --ms
mmlRprGdEvl n dpt ys pp gen nSlopes mtd = -- mN std
mmlRprAuxGD n dpt m p (filter ((== mLen) . dim) ys) pp gen nSlopes mtd True -- ms -- mN std
where
p = 1 / pp
--- don't need this calculation; we can filter out irrelevant values
m = n `div` mtd
mLen = maybe 0 (\_ -> maximum $ map dim ys) (listToMaybe ys)
---------------------------------------------------------------------------------------------------
unStd :: (OModel,(Vector Double,[Int])) -> (OModel,(Vector Double,[Int]))
unStd (Omod (MkModel c b e ks d) ml,(vs,ys)) = (Omod (MkModel c b e (vunStdNorm' ks) d) ml,(vunStdNorm' vs,ys))
unSd ms =
case ms of
Nothing -> Nothing
Just (a,b) -> Just (unStd a, b)
------------
mmlRprAuxGD n _ m p ysv pp gen nSlopes mtd toOpt = do
return minMod
where
minMod = liftM applyMixture $
maybe Nothing (\_ -> Just $ minimumBy (comparing (fst . fst)) rprs1) (listToMaybe rprs1)
--
result rprss = (snd (minimumBy (comparing (fst . snd)) rprss), map fst rprss)
f = fromIntegral
(g1,_) = split gen
abr' :: Matrix Double -> [(Double, (Vector Double, (Double, Vector Double))) ]
abr' = (: []) . minimumBy (comparing fst)
. zipWith (\f a -> f a) (map (\ys -> mmlLinearRegLP ys 1 1) setLen)
. replicate len -- (length ysv)
where
len = length ysv
setLen = replicate len $ join ysv
----------------------------------------------------------------------------
initM b s = initModel gen s b
initLz (ml, (prd, (s,ab))) = (ml, (prd , initM ab s))
--
mod :: Matrix Double -> [(Double, (Vector Double, Mmodel))]
mod mxx = map initLz $ abr' mxx
----------------------------------------------------------------------------
nN = f . sum $ map dim ysv -- n
nNs = map ((\n -> [1 .. n]) . f . dim) ysv
fLn = (+ 1) . f . length
msgLm (Omod _ l) = l
appMML g1 nM ct mmX (ml,(prd , inMd)) = unsafePerformIO $ mmlRprGD g1 (join ysv) nM ct nSlopes mmX (ml, inMd)
iTr (mX, cts) = (map (appMML gen nN cts mX) (mod mX), (mX, cts))
cMML = mapP (getPredictions nSlopes) . mapSnd . iTr
nps cmm = realignedNpieces cmm mtd
--------------------------------------------------------------------------------------------
----------- applying the mixture model to the discovered intervention, only ----------------
rprs1 = [ (kx, cm1) | cm1 <- [0 .. m], kx <- ( concatPMap (mapSnd . iTr) . (nps cm1)) nNs ] `St.using` (St.parList St.rseq)
applyMixture ( (Omod md mln , (mx , cts)), cm) =
let (!nMd , !mxL) = unsafePerformIO $ nextModelR' gen cts nSlopes mx (join ysv) md
ncma = n
nncma = ncma `nCr` cm
ncr = (binomial n cm p) / f nncma -- else 1
lncr = log ncr
newOMod = Omod nMd ((lncr + mln) + mxL)
prds = getPredictions nSlopes (newOMod, (mx, cts))
in (prds , [lncr] )
-------------- end applying the Mixture Model at the end ----------------------------------
getPredictions nSlp (mm, (xConfig, cts)) = (mm , (invS nSlp prds , cts))
where
prds = xConfig <> (aBar (oMod mm))
--- retain the domain of the function
retDom :: (a -> [b]) -> a -> [(a , b)]
retDom f a = map (\x -> (a, x)) ( f a)
---
nAbaMod (MkModel a b c _ x) k = (MkModel a b c k x)
{----------------------------------------------------------------------------------------------------
The mml2DsGD function applies the MMLRPR algorithm to a list of data
points (i.e. the readings or chainages) after
sectioning them off. Usage: mml2DsGD years distance gen [[Double]]
--}
--- returning results
mml2DsGen :: (RandomGen g) => g -> TdsParms ->
OptimisationType -> -- the optimisation type (Rapid , FromData ,WithDepth)
SecOrSchm ->
IO SecOrSchm
mml2DsGen g tprms optType ssm =
case ssm of
-- analysis results for sections
Left sect -> do
let (a, b , c ,d) = (sCondition sect, sScannerCode sect, sScannerSorrg sect, sCWay sect)
let mkSect = Section a b c d (sChains sect)
liftM (Left . mkSect . omd2Results) $ mml2DsGD g tprms optType [xs | xs <- sChains sect, length (cData xs) > 1]
-- results for schemes
Right schm -> do
nScmRecs <- mapM (mkSchemeRecRes g tprms optType) (schmRecs schm)
return $ Right $ Scheme (schemeName schm) (schmCondition schm) nScmRecs
where
mkSchemeRecRes :: (RandomGen g) => g ->
TdsParms ->
OptimisationType ->
SchemeRec ->
IO SchemeRec
mkSchemeRecRes g tp opT src = liftM (mkSchmRec . omd2Results) $ mml2DsGD g tp opT chn
where
mkSchmRec = SchemeRec scd srd srnm srg stc sec chn
--
scd = scmScannerCode src
srd = roadClassCode src
srnm = roadName src
srg = scmScannerSorrg src
--- carriage way
stc = startChain src
sec = endChain src
chn = scmChains src
--}
-- returning a OMod, etc
mml2DsGD :: (RandomGen g) => g ->
TdsParms -> -- the parameters for the mmlTds
OptimisationType -> -- the optimisation type (Rapid , FromData ,WithDepth)
[Chain] -> --- a list of all the data for each chain in a section or scheme
IO ([(OModel, ([Double], [Int]))], [Int])
mml2DsGD gen tdPrs optType chxss = do
---------------------------------------------
scrollLog ("Identifying candidate groups for section with length: " ++ show len) -- (max 3 (len `div` 15))
!mks1 <- (joinPairsV dist optType 2 len 25 xss)
------
let total = length mks1
let msgP1 = "Fitting progression rate and identifying outliers for section length: " ++ show len ++". "
let msgP2 = show total ++": candidate groups found."
onlycuts <- liftM (maybe 0 (fromJust . bs2Int') . listToMaybe . head) $ bsReadChr ',' "./mmlTdsFiles/onlycuts.txt"
-- print ("negslopes is : " ++ show nSlopes)
if len > 0 then
if null mks1 then do
let noGrupMsg = "NO Groups for section length: " ++ show len
logLog ("-------------------"++noGrupMsg)
return dummy
else if onlycuts == 1 then do
logLog ("only first cuts printed")
return dummy
else do
--
scrollLog (msgP1 ++ msgP2)
let mks2 = [(a,b,c) | ((a,b),c) <- zip (zip mks1 (rGens gen)) [1 ..]]
let applyRpr = mapP (mml2DsGD_aux1 total minTrend scrollLog)
let appMin = liftM (minimumBy compML'') . sequence . applyRpr -- ($!)
appMin mks2
else do
logLog ("Empty section list sent to MML2DS: " ++ show len ++". limit: "++ show lenLim)
return dummy
where
---- projections --
xss = map cData chxss
(!dpt, !pp , !dist) = (searchDept tdPrs, rngInYrs tdPrs, rngInDist tdPrs)
(nSlopes,lim,minAnl) = (negSlopes tdPrs, maxChn2Aln tdPrs, minChn2Aln tdPrs)
minTrend = minPts2Trd tdPrs
loggers = logFun tdPrs
scrollLog = maybe (\_ -> return ()) (\_ -> loggers !! 0) (listToMaybe loggers)
logLog = maybe (\_ -> return ()) (\_ -> loggers !! 1) (listToMaybe loggers)
maxChain = (foldl' (\n xs -> max (length xs) n) 0 xss)
lenLim = lim * maxChain
len = length xss
minLen = minAnl * maxChain
-------------------------------- end of projections ------------------
dummy = ([(dummyMod , ([0] ,[0]))] ,[])
pm = 1 / dist
compML'' = comparing (sumBy (oModMl . fst). fst)
f = fromIntegral
{--------------------------- logging the output from the message lengths --------------------
printCutsInfo :: OptimisationType -> ([(OModel, ([Double], [Int]))], [Int] ) -> IO ()
printCutsInfo opTp (ks,cs) = do
let cutsMsg = "\n cuts: "++ printElm cs
let mlgMsg = "; \t message length: " ++ (show $ sumBy (oModMl . fst) ks)
let name = case opTp of
Rapid -> "fromData" -- "priorOnly"
FromData -> "fromData"
WithDepth _ -> "fixedDepth"
let file = "./SimulationFiles/"++name++"groups.txt"
-- let header = "\n-----------\t cuts and message lengths ------------\n"
let output = L.pack (cutsMsg ++ mlgMsg)
writeOrAppend file output
---------------------------------------------------------------------------------------}
mml2DsGD_aux1 :: RandomGen g => Int ->
Int ->
(String -> IO()) ->
( ([([Vector Double], Int)],[Int]), g, Int) ->
IO ([(OModel, ([Double], [Int]))], [Int])
mml2DsGD_aux1 total minToTrend write ((pgroups,pgcuts),gen,num) = do
write ("Calculating mmlrpr on group: "++ show num ++ " of " ++ show total)
let max1 = sumBy snd pgroups -- sum pgcut
----
let calc = [ (result,pgcuts) |
let !rprsCalcs = unsafePerformIO . sequence $ mapP (calcRpR max1 gen minToTrend) pgroups -- do the mmlrpr on each group
, all isJust rprsCalcs -- . filter isJust
, let result = (map fromJust . filter isJust) rprsCalcs]
if null calc then do
write ("Invalid mmlrpr group for group "++ show num)
return dummy
else do
retMinComp calc
where
--
retMinComp = return . minimumBy compML''
--
calcRpR :: RandomGen g => Int -> g -> Int ->
([Vector Double], Int) -> IO (Maybe (OModel,([Double],[Int]))) -- , [Double]) -
calcRpR ns g mtd (vys, ms) = do
!rprs <- mmlRprGdEvl ms dpt vys pp g nSlopes mtd
case rprs of
Just ( (Omod r1 r2, prs),nn) -> do
let smm = ncr' + r2
let str = ("\nncr' is: "++ show ncr' ++" mLen is: "++ show r2 ++". Their sum is: " ++ show smm)
-- L.appendFile "tempLog1.txt" (L.pack str)
return $ Just (Omod r1 smm , (toList (fst prs), snd prs) ) --
Nothing -> return $ Nothing
where
ncr' = (binomial ns ms pm) / f (ns `nCr` ms)
-----------------------------------------------------------------------------------------------------}
------------
--------------------------------------------------------------------------------------------------
findGrpsFromData :: Double -> -- the distance of intervention points
Int -> -- length of the input list
[[Double]] -> -- ^ the data to align
IO [([([Vector Double], Int)], [Int])]
findGrpsFromData dist inLen =
return . findCandidateSet . uncurry calculateFixedGroups1 . findInterventions_aux
where
findCandidateSet xs
| length xs > 100 = maintainBy' avgRelCorr (Just 45) xs
| otherwise = (take 10 . sortBy (comparing avgRelCorr )) xs
--
avgRelCorr = (meanBy (relativeCorr . fst)) . fst
-- find possible interventions from the data
findInterventions_aux :: [[Double]] -> ([[Double]] , [[Int]])
findInterventions_aux xys = (xys , (nub . findGrps1 0.7) xys)
where
findGrps1 tolerance xs = [ ys | let stDiv = stddev . fromList
, ys <- findGrps tolerance 38 mmlCorr xs
] `St.using` (St.parList St.rseq)
-- calculate fixed groups
calculateFixedGroups1 :: [[Double]] -> [[Int]] -> [([([Vector Double], Int)], [Int])]
calculateFixedGroups1 xys = map (unzip . foldr mkLst [] . filter (not . null . fst) . mkIntervals vs )
where
vs = (map fromList xys)
mkLst (ks,k) ps = ((ks, sumBy dim ks) , k) : ps
-- relative correlation between adjacent chains
relativeCorr :: [Vector Double] -> Double
relativeCorr xs
| length xs < 2 = 4 -- bias against groups with only one chain
| otherwise = mcD xs
where
corrDist ps qs = abs (nCorrV ps qs - 1)
mcD = meanBy (exp . uncurry corrDist) . zipp
----------------------------------------------------------------------------------------------------
--- an infinite list of random generators
rGens :: RandomGen g => g -> [g]
rGens g = g : (g1 : rGens g2)
where (g1, g2) = split g
------------------------------------------------------------------------------------------------------------------------
joinPairsV :: Double -> -- the distance of maintenance interventions
OptimisationType -> -- the optimisation type (Rapid , FromData ,WithDepth)
Int -> -- ^ the minimum number of chains to align
--Int -> -- ^ maximum
Int -> -- ^ length of the input list
Int -> -- ^ the maximum number of chains to align
[[Double]] -> -- ^ the data to align
IO [([([Vector Double], Int)], [Int])]
--[([(Vector Double, Int, Double )],[Int])]
-- (joinPairsV pm optType minLen lenLim 3 len lim) xss
joinPairsV dist optType malg xysLen lim xys -- minL maxL
| null xys = return []
| xysLen < 4 = return joinFew -- ' pm --xys
--- check the length of the combined list
| lenData < 10 = return joinFew
| otherwise =
case optType of
Rapid -> findGrpsFromData dist xysLen xys
FromData -> findGrpsFromData dist xysLen xys
WithDepth d -> calculateFixedGroups vxys [(repeat d)]
where
lenData = sumBy length xys
--
vxys = map fromList xys
------------------
vs = join vxys
ms = dim vs
joinFew = [([([vs],dim vs)],[])]
---------------------------------------------------------------------------------
-- given a list of possible cuts and the list of chains, we generate a list
-- of possible maintenance groups defined by the cuts
calculateFixedGroups :: [Vector Double] -> [[Int]] -> IO [([([Vector Double], Int)], [Int])]
calculateFixedGroups vs = return . map (unzip . foldr mkLst [] . filter (not . null . fst) . mkIntervals vs)
where
mkLst (ks,k) ps = ((ks, sumBy dim ks) , k) : ps
-- return [ (mkIntervals vs cs , cs) | cs <- ks]
|
Amber Associates Recruitment Solutions is a traditional recruitment agency & business. The co-Directors have over 45 years' combined experience recruiting the right staff for clients of all sizes in Media, IT, Construction, Medical, HealthTech, Finance and the Public Sector, from the BBC to Sky, JLL, HFEA, AWIN and UKGOV.
We recruit for roles ranging from office support staff, including GDPR-savvy administrators, PAs and Office Managers, to Data Protection Officers, IT Managers, Data Science Executives, Analysts, Security and Compliance Managers, Project Managers and Regulatory Officers.
We spend time with our clients, assessing and listening to their needs, and conduct face-to-face interviews with all our candidates. We pride ourselves on ‘getting it right’, saving you time and saving you money. Use our expertise, call us and experience the difference.
Amber Associates was originally founded in 2001 by Joanne James. After working for the leading High Street agencies, Joanne sensed that many clients and candidates had become disillusioned with the recruitment industry and in particular recruitment agency reliance on sheer weight of numbers to fill a single vacancy.
We are a UK limited company, we are GDPR compliant and we adhere to the REC Guides and Codes of Conduct. We are fully insured and registered with the Crown Commercial Service.
Contact Mark James through his Urbano Profile for more details on how we can help. |
module Common.MAlonzo where
open import Common.Prelude
open import Common.Coinduction
postulate
putStrLn : ∞ String → IO Unit
{-# COMPILED putStrLn putStrLn #-}
main = putStrLn (♯ "This is a dummy main routine.")
mainPrint : String → _
mainPrint s = putStrLn (♯ s)
postulate
natToString : Nat → String
{-# COMPILED natToString show #-}
mainPrintNat : Nat → _
mainPrintNat n = putStrLn (♯ (natToString n))
|
rebol [
title: "Proxy-server-test"
]
attempt [
proxy: make object! [
port-spec: make port! [
scheme: 'tcp
port-id: 80
proxy: make system/schemes/default/proxy []
]
if any [system/schemes/default/proxy/host system/schemes/http/proxy/host] [
proxy-spec: make port! [scheme: 'http]
port-spec/proxy: make proxy-spec/proxy copy []
]
responses: make object! [
filtered: {HTTP/1.0 200 OK
Content-Type: text/html
Content-Length: 29
<html>Link filtered.</html>}
]
comment {
link-handler: func[port /local bytes target response-line data][
;print ["link" mold port]
print "$$$$$$$$$$$$$$$$$$$$$LINKJ$$$$$$$$$$$$$$$$$$$$$$$$$$$"
either none? port/state/inBuffer [
port/state/inBuffer: make string! 2 + port/state/num: 10000
port/user-data: make object! [
http-get-header: make string! 1000
partner: none ;data source connection
]
][
clear port/state/inBuffer
]
bytes: read-io port port/state/inBuffer port/state/num
either bytes <= 0 [
networking/remove-port port
;if port? port/user-data/partner [
; networking/remove-port port/user-data/partner
;]
][
either none? port/user-data/partner [
append port/user-data/http-get-header port/state/inBuffer
data: port/user-data/http-get-header
print ["GET header:" mold data]
if parse/all data [
thru "GET " copy target to " " to end
][
print ["TARGET:" target]
replace/all target "!" "%21" ; hexify '!' character
either error? err: catch [
port-spec/host: port-spec/path: port-spec/target: none
tgt: net-utils/URL-Parser/parse-url port-spec target
] [
print "Error in port-spec parse"
probe disarm err
;print 'error if debug-request [print "DEATH!!!"]]
][
print [tab "Parsed target:" port-spec/host port-spec/path port-spec/target]
]
either error? err: try [
all [system/schemes/http/proxy/type <> 'generic
system/schemes/default/proxy/type <> 'generic
tmp: find data "http://"
remove/part tmp find find/tail tmp "//" "/"]
Root-Protocol/open-proto port-spec
print [tab "Opened port to:" port-spec/target]
port/user-data/partner: port-spec/sub-port
] [
insert port "HTTP/1.0 400 Bad Request^/^/"
networking/remove-port port
print "Death!"
probe disarm err
] [
;if no-keep-alive [
; if tmp: find data "Proxy-Connection" [remove/part tmp find/tail tmp newline]
;]
; send the request
if not empty? data [
write-io port/user-data/partner data length? data
print "Sending request"
probe data clear data
]
; add the pair of connections to the link list
port/user-data/partner/user-data: port
networking/add-port port/user-data/partner :partner-handler
]
]
][
probe to-string port/state/inBuffer
]
;probe port/user-data/http-get-header
;parse to-string port/state/inBuffer
;net-utils/net-log ["low level read of " bytes "bytes"]
;print responses/filtered
;insert port proxy/responses/filtered
;networking/remove-port port
;insert port "ok^@"
]
]
}
partner-handler: func[port /local bytes subport][
;if port? port [
;print "=====partner===="
either none? port/state/inBuffer [
port/state/inBuffer: make binary! 2 + port/state/num: 10000
][
clear port/state/inBuffer
]
;? port
bytes: read-io port port/state/inBuffer port/state/num
either bytes <= 0 [
;print "Closing by proxy partner"
;networking/remove-port port/user-data
close port/user-data
networking/remove-port port
][
;probe port/state/inBuffer
write-io port/user-data port/state/inBuffer bytes
net-utils/net-log ["proxy partner low level read of " bytes "bytes"]
]
;]
]
size: 10000
data: make string! 10000
server-handler: func[port /local bytes subport partner][
if port? subport: first port [
print ["PROXY: new connection" subport/host subport/port-id]
read-io subport data size
print ["GET header:" mold data]
if parse/all data [
thru "GET " copy target to " " to end
][
print ["TARGET:" target]
replace/all target "!" "%21" ; hexify '!' character
either error? err: catch [
port-spec/host: port-spec/path: port-spec/target: none
tgt: net-utils/URL-Parser/parse-url port-spec target
port-spec/scheme: 'tcp
] [
print "Error in port-spec parse"
probe disarm err
;print 'error if debug-request [print "DEATH!!!"]]
][
;print [tab "Parsed target:" port-spec/host port-spec/path port-spec/target]
]
either error? err: try [
all [system/schemes/http/proxy/type <> 'generic
system/schemes/default/proxy/type <> 'generic
tmp: find data "http://"
remove/part tmp find find/tail tmp "//" "/"]
Root-Protocol/open-proto port-spec
; print [tab "Opened port to:" port-spec/target]
;replace target "http://" "tcp://"
;probe head target
partner: port-spec/sub-port
] [
insert subport "HTTP/1.0 400 Bad Request^/^/"
close subport clear data
print "Death!"
probe disarm err
] [
;if no-keep-alive [
; if tmp: find data "Proxy-Connection" [remove/part tmp find/tail tmp newline]
;]
; send the request
if tmp: find data "Proxy-Connection" [remove/part tmp find/tail tmp newline]
if not empty? data [
partner/user-data: subport
networking/add-port partner :partner-handler
write-io partner data length? data
;print "Sending request"
;probe data
clear data
]
; add the pair of connections to the link list
]
]
;networking/add-port subport get in proxy 'link-handler
]
]
server-port: open/direct/no-wait tcp://:9005
networking/add-port server-port :server-handler
]
]
comment {
set-net [none none none "192.168.0.1" 9005 'generic]
length? x: read/binary http://192.168.0.1/imgz/divka.gif
print read http://192.168.0.1/
} |
(* Title: ZF/AC/AC17_AC1.thy
Author: Krzysztof Grabczewski
The equivalence of AC0, AC1 and AC17
Also, the proofs needed to show that each of AC2, AC3, ..., AC6 is equivalent
to AC0 and AC1.
*)
theory AC17_AC1
imports HH
begin
(** AC0 is equivalent to AC1.
AC0 comes from Suppes, AC1 from Rubin & Rubin **)
lemma AC0_AC1_lemma: "[| f:(\<Pi> X \<in> A. X); D \<subseteq> A |] ==> \<exists>g. g:(\<Pi> X \<in> D. X)"
by (fast intro!: lam_type apply_type)
lemma AC0_AC1: "AC0 ==> AC1"
apply (unfold AC0_def AC1_def)
apply (blast intro: AC0_AC1_lemma)
done
lemma AC1_AC0: "AC1 ==> AC0"
by (unfold AC0_def AC1_def, blast)
(**** The proof of AC1 ==> AC17 ****)
lemma AC1_AC17_lemma: "f \<in> (\<Pi> X \<in> Pow(A) - {0}. X) ==> f \<in> (Pow(A) - {0} -> A)"
apply (rule Pi_type, assumption)
apply (drule apply_type, assumption, fast)
done
lemma AC1_AC17: "AC1 ==> AC17"
apply (unfold AC1_def AC17_def)
apply (rule allI)
apply (rule ballI)
apply (erule_tac x = "Pow (A) -{0}" in allE)
apply (erule impE, fast)
apply (erule exE)
apply (rule bexI)
apply (erule_tac [2] AC1_AC17_lemma)
apply (rule apply_type, assumption)
apply (fast dest!: AC1_AC17_lemma elim!: apply_type)
done
(**** The proof of AC17 ==> AC1 ****)
(* *********************************************************************** *)
(* more properties of HH *)
(* *********************************************************************** *)
lemma UN_eq_imp_well_ord:
"[| x - (\<Union>j \<in> LEAST i. HH(\<lambda>X \<in> Pow(x)-{0}. {f`X}, x, i) = {x}.
HH(\<lambda>X \<in> Pow(x)-{0}. {f`X}, x, j)) = 0;
f \<in> Pow(x)-{0} -> x |]
==> \<exists>r. well_ord(x,r)"
apply (rule exI)
apply (erule well_ord_rvimage
[OF bij_Least_HH_x [THEN bij_converse_bij, THEN bij_is_inj]
Ord_Least [THEN well_ord_Memrel]], assumption)
done
(* *********************************************************************** *)
(* theorems closer to the proof *)
(* *********************************************************************** *)
lemma not_AC1_imp_ex:
"~AC1 ==> \<exists>A. \<forall>f \<in> Pow(A)-{0} -> A. \<exists>u \<in> Pow(A)-{0}. f`u \<notin> u"
apply (unfold AC1_def)
apply (erule swap)
apply (rule allI)
apply (erule swap)
apply (rule_tac x = "\<Union>(A)" in exI)
apply (blast intro: lam_type)
done
lemma AC17_AC1_aux1:
"[| \<forall>f \<in> Pow(x) - {0} -> x. \<exists>u \<in> Pow(x) - {0}. f`u\<notin>u;
\<exists>f \<in> Pow(x)-{0}->x.
x - (\<Union>a \<in> (LEAST i. HH(\<lambda>X \<in> Pow(x)-{0}. {f`X},x,i)={x}).
HH(\<lambda>X \<in> Pow(x)-{0}. {f`X},x,a)) = 0 |]
==> P"
apply (erule bexE)
apply (erule UN_eq_imp_well_ord [THEN exE], assumption)
apply (erule ex_choice_fun_Pow [THEN exE])
apply (erule ballE)
apply (fast intro: apply_type del: DiffE)
apply (erule notE)
apply (rule Pi_type, assumption)
apply (blast dest: apply_type)
done
lemma AC17_AC1_aux2:
"~ (\<exists>f \<in> Pow(x)-{0}->x. x - F(f) = 0)
==> (\<lambda>f \<in> Pow(x)-{0}->x . x - F(f))
\<in> (Pow(x) -{0} -> x) -> Pow(x) - {0}"
by (fast intro!: lam_type dest!: Diff_eq_0_iff [THEN iffD1])
lemma AC17_AC1_aux3:
"[| f`Z \<in> Z; Z \<in> Pow(x)-{0} |]
==> (\<lambda>X \<in> Pow(x)-{0}. {f`X})`Z \<in> Pow(Z)-{0}"
by auto
lemma AC17_AC1_aux4:
"\<exists>f \<in> F. f`((\<lambda>f \<in> F. Q(f))`f) \<in> (\<lambda>f \<in> F. Q(f))`f
==> \<exists>f \<in> F. f`Q(f) \<in> Q(f)"
by simp
lemma AC17_AC1: "AC17 ==> AC1"
apply (unfold AC17_def)
apply (rule classical)
apply (erule not_AC1_imp_ex [THEN exE])
apply (case_tac
"\<exists>f \<in> Pow(x)-{0} -> x.
x - (\<Union>a \<in> (LEAST i. HH (\<lambda>X \<in> Pow (x) -{0}. {f`X},x,i) ={x}) . HH (\<lambda>X \<in> Pow (x) -{0}. {f`X},x,a)) = 0")
apply (erule AC17_AC1_aux1, assumption)
apply (drule AC17_AC1_aux2)
apply (erule allE)
apply (drule bspec, assumption)
apply (drule AC17_AC1_aux4)
apply (erule bexE)
apply (drule apply_type, assumption)
apply (simp add: HH_Least_eq_x del: Diff_iff )
apply (drule AC17_AC1_aux3, assumption)
apply (fast dest!: subst_elem [OF _ HH_Least_eq_x [symmetric]]
f_subset_imp_HH_subset elim!: mem_irrefl)
done
(* **********************************************************************
AC1 ==> AC2 ==> AC1
AC1 ==> AC4 ==> AC3 ==> AC1
AC4 ==> AC5 ==> AC4
AC1 \<longleftrightarrow> AC6
************************************************************************* *)
(* ********************************************************************** *)
(* AC1 ==> AC2 *)
(* ********************************************************************** *)
lemma AC1_AC2_aux1:
"[| f:(\<Pi> X \<in> A. X); B \<in> A; 0\<notin>A |] ==> {f`B} \<subseteq> B \<inter> {f`C. C \<in> A}"
by (fast elim!: apply_type)
lemma AC1_AC2_aux2:
"[| pairwise_disjoint(A); B \<in> A; C \<in> A; D \<in> B; D \<in> C |] ==> f`B = f`C"
by (unfold pairwise_disjoint_def, fast)
lemma AC1_AC2: "AC1 ==> AC2"
apply (unfold AC1_def AC2_def)
apply (rule allI)
apply (rule impI)
apply (elim asm_rl conjE allE exE impE, assumption)
apply (intro exI ballI equalityI)
prefer 2 apply (rule AC1_AC2_aux1, assumption+)
apply (fast elim!: AC1_AC2_aux2 elim: apply_type)
done
(* ********************************************************************** *)
(* AC2 ==> AC1 *)
(* ********************************************************************** *)
lemma AC2_AC1_aux1: "0\<notin>A ==> 0 \<notin> {B*{B}. B \<in> A}"
by (fast dest!: sym [THEN Sigma_empty_iff [THEN iffD1]])
lemma AC2_AC1_aux2: "[| X*{X} \<inter> C = {y}; X \<in> A |]
==> (THE y. X*{X} \<inter> C = {y}): X*A"
apply (rule subst_elem [of y])
apply (blast elim!: equalityE)
apply (auto simp add: singleton_eq_iff)
done
lemma AC2_AC1_aux3:
"\<forall>D \<in> {E*{E}. E \<in> A}. \<exists>y. D \<inter> C = {y}
==> (\<lambda>x \<in> A. fst(THE z. (x*{x} \<inter> C = {z}))) \<in> (\<Pi> X \<in> A. X)"
apply (rule lam_type)
apply (drule bspec, blast)
apply (blast intro: AC2_AC1_aux2 fst_type)
done
lemma AC2_AC1: "AC2 ==> AC1"
apply (unfold AC1_def AC2_def pairwise_disjoint_def)
apply (intro allI impI)
apply (elim allE impE)
prefer 2 apply (fast elim!: AC2_AC1_aux3)
apply (blast intro!: AC2_AC1_aux1)
done
(* ********************************************************************** *)
(* AC1 ==> AC4 *)
(* ********************************************************************** *)
lemma empty_notin_images: "0 \<notin> {R``{x}. x \<in> domain(R)}"
by blast
lemma AC1_AC4: "AC1 ==> AC4"
apply (unfold AC1_def AC4_def)
apply (intro allI impI)
apply (drule spec, drule mp [OF _ empty_notin_images])
apply (best intro!: lam_type elim!: apply_type)
done
(* ********************************************************************** *)
(* AC4 ==> AC3 *)
(* ********************************************************************** *)
lemma AC4_AC3_aux1: "f \<in> A->B ==> (\<Union>z \<in> A. {z}*f`z) \<subseteq> A*\<Union>(B)"
by (fast dest!: apply_type)
lemma AC4_AC3_aux2: "domain(\<Union>z \<in> A. {z}*f(z)) = {a \<in> A. f(a)\<noteq>0}"
by blast
lemma AC4_AC3_aux3: "x \<in> A ==> (\<Union>z \<in> A. {z}*f(z))``{x} = f(x)"
by fast
lemma AC4_AC3: "AC4 ==> AC3"
apply (unfold AC3_def AC4_def)
apply (intro allI ballI)
apply (elim allE impE)
apply (erule AC4_AC3_aux1)
apply (simp add: AC4_AC3_aux2 AC4_AC3_aux3 cong add: Pi_cong)
done
(* ********************************************************************** *)
(* AC3 ==> AC1 *)
(* ********************************************************************** *)
lemma AC3_AC1_lemma:
"b\<notin>A ==> (\<Pi> x \<in> {a \<in> A. id(A)`a\<noteq>b}. id(A)`x) = (\<Pi> x \<in> A. x)"
apply (simp add: id_def cong add: Pi_cong)
apply (rule_tac b = A in subst_context, fast)
done
lemma AC3_AC1: "AC3 ==> AC1"
apply (unfold AC1_def AC3_def)
apply (fast intro!: id_type elim: AC3_AC1_lemma [THEN subst])
done
(* ********************************************************************** *)
(* AC4 ==> AC5 *)
(* ********************************************************************** *)
lemma AC4_AC5: "AC4 ==> AC5"
apply (unfold range_def AC4_def AC5_def)
apply (intro allI ballI)
apply (elim allE impE)
apply (erule fun_is_rel [THEN converse_type])
apply (erule exE)
apply (rename_tac g)
apply (rule_tac x=g in bexI)
apply (blast dest: apply_equality range_type)
apply (blast intro: Pi_type dest: apply_type fun_is_rel)
done
(* ********************************************************************** *)
(* AC5 ==> AC4, Rubin & Rubin, p. 11 *)
(* ********************************************************************** *)
lemma AC5_AC4_aux1: "R \<subseteq> A*B ==> (\<lambda>x \<in> R. fst(x)) \<in> R -> A"
by (fast intro!: lam_type fst_type)
lemma AC5_AC4_aux2: "R \<subseteq> A*B ==> range(\<lambda>x \<in> R. fst(x)) = domain(R)"
by (unfold lam_def, force)
lemma AC5_AC4_aux3: "[| \<exists>f \<in> A->C. P(f,domain(f)); A=B |] ==> \<exists>f \<in> B->C. P(f,B)"
apply (erule bexE)
apply (frule domain_of_fun, fast)
done
lemma AC5_AC4_aux4: "[| R \<subseteq> A*B; g \<in> C->R; \<forall>x \<in> C. (\<lambda>z \<in> R. fst(z))` (g`x) = x |]
==> (\<lambda>x \<in> C. snd(g`x)): (\<Pi> x \<in> C. R``{x})"
apply (rule lam_type)
apply (force dest: apply_type)
done
lemma AC5_AC4: "AC5 ==> AC4"
apply (unfold AC4_def AC5_def, clarify)
apply (elim allE ballE)
apply (drule AC5_AC4_aux3 [OF _ AC5_AC4_aux2], assumption)
apply (fast elim!: AC5_AC4_aux4)
apply (blast intro: AC5_AC4_aux1)
done
(* ********************************************************************** *)
(* AC1 \<longleftrightarrow> AC6 *)
(* ********************************************************************** *)
lemma AC1_iff_AC6: "AC1 \<longleftrightarrow> AC6"
by (unfold AC1_def AC6_def, blast)
end
|
import h5py
import numpy as np
def hdf5_reader(data_path,key=None):
'''
Hdf5 file reader, return numpy array.
'''
hdf5_file = h5py.File(data_path,'r')
image = np.asarray(hdf5_file[key],dtype=np.float32)
hdf5_file.close()
return image
# NHWC
class DataGenerator(object):
def __init__(self,path_list,num_classes=2,channels=1,input_shape=(256,256),shuffle=True):
self.path_list = path_list
self.num_classes = num_classes
self.channels = channels
self.shape = input_shape
self.shuffle = shuffle
self.index = -1
def _load_data(self):
image,label = self._next_data()
image = np.expand_dims(image,axis=-1)
one_hot_label = np.zeros(self.shape + (self.num_classes,),dtype=np.float32)
for z in range(1, self.num_classes):
temp = (label==z).astype(np.float32)
one_hot_label[...,z] = temp
one_hot_label[...,0] = np.amax(one_hot_label[...,1:],axis=-1) == 0
return image,one_hot_label
def _cycle_path_list(self):
self.index += 1
if self.index >= len(self.path_list):
self.index = 0
if self.shuffle:
np.random.shuffle(self.path_list)
def _next_data(self):
self._cycle_path_list()
image = hdf5_reader(self.path_list[self.index],'image')
label = hdf5_reader(self.path_list[self.index],'label')
return image,label
def __call__(self,batch_size):
images = np.empty((batch_size,) + self.shape + (self.channels,), dtype=np.float32)
labels = np.empty((batch_size,) + self.shape + (self.num_classes,),dtype=np.float32)
for i in range(batch_size):
images[i],labels[i] = self._load_data()
return images,labels
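The one-hot encoding step inside `_load_data` can be sketched in isolation. The tiny 2x2 `label` mask below is hypothetical, made up purely for illustration; the loop mirrors the generator's logic: channel `z` flags pixels of class `z`, and channel 0 flags background pixels belonging to no foreground class.

```python
import numpy as np

# Hypothetical 2x2 segmentation mask with classes 0 (background), 1 and 2.
num_classes = 3
label = np.array([[0, 1],
                  [2, 1]])

# Same one-hot scheme as DataGenerator._load_data:
one_hot = np.zeros(label.shape + (num_classes,), dtype=np.float32)
for z in range(1, num_classes):
    one_hot[..., z] = (label == z).astype(np.float32)
# Channel 0 is 1 exactly where no foreground channel fired.
one_hot[..., 0] = np.amax(one_hot[..., 1:], axis=-1) == 0

assert one_hot[0, 0].tolist() == [1.0, 0.0, 0.0]  # background pixel
assert one_hot[1, 0].tolist() == [0.0, 0.0, 1.0]  # class-2 pixel
```

Exactly one channel is set per pixel, so summing over the last axis yields 1 everywhere, which is what a softmax-style segmentation loss expects.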
|
lemma mono_SucI2: "\<forall>n. X (Suc n) \<le> X n \<Longrightarrow> monoseq X" |
/-
Copyright (c) 2020 Riccardo Brasca. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Riccardo Brasca
! This file was ported from Lean 3 source module number_theory.primes_congruent_one
! leanprover-community/mathlib commit 55d224c38461be1e8e4363247dd110137c24a4ff
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Data.Nat.PrimeFin
import Mathbin.RingTheory.Polynomial.Cyclotomic.Eval
/-!
# Primes congruent to one
We prove that, for any positive `k : ℕ`, there are infinitely many primes `p` such that
`p ≡ 1 [MOD k]`.
-/
namespace Nat
open Polynomial Nat Filter
open Nat
/-- For any positive `k : ℕ` there exists an arbitrarily large prime `p` such that
`p ≡ 1 [MOD k]`. -/
theorem exists_prime_gt_modEq_one {k : ℕ} (n : ℕ) (hk0 : k ≠ 0) :
∃ p : ℕ, Nat.Prime p ∧ n < p ∧ p ≡ 1 [MOD k] :=
by
rcases(one_le_iff_ne_zero.2 hk0).eq_or_lt with (rfl | hk1)
· rcases exists_infinite_primes (n + 1) with ⟨p, hnp, hp⟩
exact ⟨p, hp, hnp, modeq_one⟩
let b := k * n !
have hgt : 1 < (eval (↑b) (cyclotomic k ℤ)).natAbs :=
by
rcases le_iff_exists_add'.1 hk1.le with ⟨k, rfl⟩
have hb : 2 ≤ b := le_mul_of_le_of_one_le hk1 n.factorial_pos
calc
1 ≤ b - 1 := le_tsub_of_add_le_left hb
_ < (eval (b : ℤ) (cyclotomic (k + 1) ℤ)).natAbs :=
sub_one_lt_nat_abs_cyclotomic_eval hk1 (succ_le_iff.1 hb).ne'
let p := min_fac (eval (↑b) (cyclotomic k ℤ)).natAbs
haveI hprime : Fact p.prime := ⟨min_fac_prime (ne_of_lt hgt).symm⟩
have hroot : is_root (cyclotomic k (ZMod p)) (cast_ring_hom (ZMod p) b) :=
by
rw [is_root.def, ← map_cyclotomic_int k (ZMod p), eval_map, coe_cast_ring_hom, ← Int.cast_ofNat,
← Int.coe_castRingHom, eval₂_hom, Int.coe_castRingHom, ZMod.int_cast_zmod_eq_zero_iff_dvd _ _]
apply Int.dvd_natAbs.1
exact_mod_cast min_fac_dvd (eval (↑b) (cyclotomic k ℤ)).natAbs
have hpb : ¬p ∣ b :=
hprime.1.coprime_iff_not_dvd.1 (coprime_of_root_cyclotomic hk0.bot_lt hroot).symm
refine' ⟨p, hprime.1, not_le.1 fun habs => _, _⟩
· exact hpb (dvd_mul_of_dvd_right (dvd_factorial (min_fac_pos _) habs) _)
· have hdiv : orderOf (b : ZMod p) ∣ p - 1 :=
ZMod.orderOf_dvd_card_sub_one (mt (CharP.cast_eq_zero_iff _ _ _).1 hpb)
haveI : NeZero (k : ZMod p) :=
NeZero.of_not_dvd (ZMod p) fun hpk => hpb (dvd_mul_of_dvd_left hpk _)
have : k = orderOf (b : ZMod p) := (is_root_cyclotomic_iff.mp hroot).eq_orderOf
rw [← this] at hdiv
exact ((modeq_iff_dvd' hprime.1.Pos).2 hdiv).symm
#align nat.exists_prime_gt_modeq_one Nat.exists_prime_gt_modEq_one
theorem frequently_atTop_modEq_one {k : ℕ} (hk0 : k ≠ 0) :
∃ᶠ p in atTop, Nat.Prime p ∧ p ≡ 1 [MOD k] :=
by
refine' frequently_at_top.2 fun n => _
obtain ⟨p, hp⟩ := exists_prime_gt_modeq_one n hk0
exact ⟨p, ⟨hp.2.1.le, hp.1, hp.2.2⟩⟩
#align nat.frequently_at_top_modeq_one Nat.frequently_atTop_modEq_one
/-- For any positive `k : ℕ` there are infinitely many primes `p` such that `p ≡ 1 [MOD k]`. -/
theorem infinite_setOf_prime_modEq_one {k : ℕ} (hk0 : k ≠ 0) :
Set.Infinite { p : ℕ | Nat.Prime p ∧ p ≡ 1 [MOD k] } :=
frequently_atTop_iff_infinite.1 (frequently_atTop_modEq_one hk0)
#align nat.infinite_set_of_prime_modeq_one Nat.infinite_setOf_prime_modEq_one
end Nat
|
\documentclass[12pt]{article}
% essential math based packages
\usepackage{amsfonts} % commonly used fonts and symbols in mathematics
\usepackage{amsmath} % additional set of math tools on top of LaTeX
\usepackage{amssymb} % common math symbols
\usepackage{amsthm} % theorem based setup following AMS standards
\usepackage[T1]{fontenc}
\usepackage{multicol}
\usepackage{hyperref}
\hypersetup{
colorlinks=true,
linkcolor=blue,
filecolor=magenta,
urlcolor=cyan,
}
\begin{document}
% remove indentations for entire document
\setlength\parindent{0pt}
\section{Math environments}
% Introduction to math environments
%------------------------------------%
% short cut symbols
\subsection*{Inline elements}
\begin{enumerate}
\item using \verb!$...$!: $a+b=c$
\item using \verb!\[...\]!: \(a/b=c\)
\item using environment \verb!\begin{math}...\end{math}!):
\begin{math}
a - b = c
\end{math}
\end{enumerate}
Takeaway: Although there are said to be issues or incompatibilities with using \verb!$...$!, most examples and working code use this shorthand for inline math. If for whatever reason you do find issues, then use \verb!\(...\)!.
\subsection*{Blocked elements}
\begin{enumerate}
\item using \verb!$$...$$!: $$ \frac{a}{b}=c $$
\item using \verb!\[...\]!: \[ \int^a_b = c\]
\item using environment \verb!\begin{displaymath}...\end{displaymath}!:
\begin{displaymath}
\dfrac{\partial a}{\partial b} = c
\end{displaymath}
\end{enumerate}
Takeaway: Much like for inline math, \verb!$$...$$! is commonly used. However, if you do find issues (which rarely happens), then use \verb!\[...\]!.
\subsection*{Alignments \& numberings}
\begin{enumerate}
\item Numbered equations
\begin{equation}
\text{KE} = 1/2mv^2
\end{equation}
\item Unnumbered equations (the \verb!*! trick, similar to section numbering)
\begin{equation*}
\text{PE} = \int_{\text{ref}}^{x} F \operatorname{d}\!\overrightarrow{x}
\end{equation*}
\item Numbered equations (not aligned)
\begin{gather}
\exp^{ix} = \cos{x} + i\sin(x) \\
\exp^{i\pi} + 1 = 0
\end{gather}
\item Numbered and aligned
\begin{align}
\nabla \cdot \vec{D} &= \rho_v \\
\nabla \cdot \vec{B} &= 0 \\
\nabla \times \vec{E} &= - \frac{\partial B}{\partial t} \\
\nabla \times \vec{B} &= \mu_{0}\vec{J} +
\mu_{0}\epsilon_{0}\frac{\partial E}{\partial t}
\end{align}
\item Controlling numbering and alignment
\begin{align}
&\nabla \cdot \vec{D} = \rho_v \nonumber \\
&\nabla \cdot \vec{B} = 0 \\
&\nabla \times \vec{E} = - \frac{\partial B}{\partial t} \\
&\nabla \times \vec{B} = \mu_{0}\vec{J} +
\mu_{0}\epsilon_{0}\frac{\partial E}{\partial t} \nonumber
\end{align}
\end{enumerate}
Takeaway: All environments can use the \verb!*! trick to suppress numbering, or \verb!\nonumber! can do this per line. Although not shown in this demonstration, if equations get too long, or multiple equations should share one equation number (such as an \textit{if/else} statement), use the \verb!\begin{split}! or \verb!\begin{multline}! environments.
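For reference, a minimal sketch of the \verb!split! environment, which gives one shared equation number to a multi-line derivation:
\begin{equation}
\begin{split}
(a+b)^2 &= a^2 + 2ab + b^2 \\
        &= a^2 + b^2 + 2ab
\end{split}
\end{equation}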
%------------------------------------%
\section{Symbols}
\subsection*{Greek symbols}
Note that Greek symbols that can be represented by English letters, such as \verb!\Alpha! and \verb!\Chi!, do not exist, as their symbols $A$ and $X$ would be indistinguishable from the letters \verb!$A$! and \verb!$X$!. However, some packages override this behavior, so please check which math packages you import.
$$\alpha, A, \beta, B, \gamma, \Gamma, \delta, \Delta ...\, \mu, \nu $$
\subsection*{Equation symbols}
You have control over all types of symbols relevant to mathematical, and even graphical, representation. For an extensive list, please look \href{https://en.wikibooks.org/wiki/LaTeX/Mathematics#List_of_mathematical_symbols}{here}.
\subsection*{Formatting mathematical symbols}
Some equations need more than a simple definition or symbol. Some symbols can be compounded to make more complex statements. For example
$$
\overrightarrow{\sum_{i=\iiint}^{j={\widehat{AAA}}}}
$$
A more comprehensive discussion on this topic, and how to customize the look can be found \href{https://en.wikibooks.org/wiki/LaTeX/Mathematics#Formatting_mathematics_symbols}{here}.
\section{Spacing}
Horizontal spacing is dictated by the document class font size (e.g. 11pt, 12pt, etc.) and is measured in \textit{em}, which is roughly the horizontal width of a capital M. To create 1\,em of width, use \verb!\quad!. See: \\
\indent A{\quad}B\\
\indent AMB (... a little bit more than M)\\
Knowing this, there are many commands such as \verb!\,! and \verb!\:! that create fractions of a \verb!\quad!. The variety of commands for horizontal spacing in normal and math mode can be found \href{https://en.wikibooks.org/wiki/LaTeX/Mathematics#Controlling_horizontal_spacing}{here}. There is also a discussion on which \href{https://tex.stackexchange.com/questions/41476/lengths-and-when-to-use-them}{spacing is appropriate}.
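As a quick visual comparison of these commands (a minimal illustration): \\
\indent \verb!a\,b!: $a\,b$ \qquad \verb!a\:b!: $a\:b$ \qquad \verb!a\;b!: $a\;b$ \qquad \verb!a\quad b!: $a\quad b$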
\end{document} |
theory VTcomp
imports Exc_Nres_Monad
begin
section \<open>Library\<close>
text \<open>
This theory contains a collection of auxiliary material that was used as a library for the contest.
\<close>
lemma monadic_WHILEIT_unfold:
"monadic_WHILEIT I b f s = do {
ASSERT (I s); bb\<leftarrow>b s; if bb then do { s \<leftarrow> f s; monadic_WHILEIT I b f s } else RETURN s
}"
unfolding monadic_WHILEIT_def
apply (subst RECT_unfold)
apply refine_mono
by simp
no_notation Ref.lookup ("!_" 61)
no_notation Ref.update ("_ := _" 62)
subsection \<open>Specialized Rules for Foreach-Loops\<close>
lemma nfoldli_upt_rule:
assumes INTV: "lb\<le>ub"
assumes I0: "I lb \<sigma>0"
assumes IS: "\<And>i \<sigma>. \<lbrakk> lb\<le>i; i<ub; I i \<sigma>; c \<sigma> \<rbrakk> \<Longrightarrow> f i \<sigma> \<le> SPEC (I (i+1))"
assumes FNC: "\<And>i \<sigma>. \<lbrakk> lb\<le>i; i\<le>ub; I i \<sigma>; \<not>c \<sigma> \<rbrakk> \<Longrightarrow> P \<sigma>"
assumes FC: "\<And>\<sigma>. \<lbrakk> I ub \<sigma>; c \<sigma> \<rbrakk> \<Longrightarrow> P \<sigma>"
shows "nfoldli [lb..<ub] c f \<sigma>0 \<le> SPEC P"
apply (rule nfoldli_rule[where I="\<lambda>l _ \<sigma>. I (lb+length l) \<sigma>"])
apply simp_all
apply (simp add: I0)
subgoal using IS
by (metis Suc_eq_plus1 add_diff_cancel_left' eq_diff_iff le_add1 length_upt upt_eq_lel_conv)
subgoal for l1 l2 \<sigma>
apply (rule FNC[where i="lb + length l1"])
apply (auto simp: INTV)
using INTV upt_eq_append_conv by auto
apply (rule FC) using INTV
by auto
definition [enres_unfolds]: "efor (lb::int) ub f \<sigma> \<equiv> doE {
EASSERT (lb\<le>ub);
(_,\<sigma>) \<leftarrow> EWHILET (\<lambda>(i,\<sigma>). i<ub) (\<lambda>(i,\<sigma>). doE { \<sigma> \<leftarrow> f i \<sigma>; ERETURN (i+1,\<sigma>) }) (lb,\<sigma>);
ERETURN \<sigma>
}"
lemma efor_rule:
assumes INTV: "lb\<le>ub"
assumes I0: "I lb \<sigma>0"
assumes IS: "\<And>i \<sigma>. \<lbrakk> lb\<le>i; i<ub; I i \<sigma> \<rbrakk> \<Longrightarrow> f i \<sigma> \<le> ESPEC E (I (i+1))"
assumes FC: "\<And>\<sigma>. \<lbrakk> I ub \<sigma> \<rbrakk> \<Longrightarrow> P \<sigma>"
shows "efor lb ub f \<sigma>0 \<le> ESPEC E P"
unfolding efor_def
supply EWHILET_rule[where R="measure (\<lambda>(i,_). nat (ub-i))" and I="\<lambda>(i,\<sigma>). lb\<le>i \<and> i\<le>ub \<and> I i \<sigma>", refine_vcg]
apply refine_vcg
apply auto
using assms apply auto
done
subsection \<open>Improved Do-Notation for the \<open>nres\<close>-Monad\<close>
abbreviation (do_notation) bind_doN where "bind_doN \<equiv> Refine_Basic.bind"
notation (output) bind_doN (infixl "\<bind>" 54)
notation (ASCII output) bind_doN (infixl ">>=" 54)
nonterminal doN_binds and doN_bind
syntax
"_doN_block" :: "doN_binds \<Rightarrow> 'a" ("doN {//(2 _)//}" [12] 62)
"_doN_bind" :: "[pttrn, 'a] \<Rightarrow> doN_bind" ("(2_ \<leftarrow>/ _)" 13)
"_doN_let" :: "[pttrn, 'a] \<Rightarrow> doN_bind" ("(2let _ =/ _)" [1000, 13] 13)
"_doN_then" :: "'a \<Rightarrow> doN_bind" ("_" [14] 13)
"_doN_final" :: "'a \<Rightarrow> doN_binds" ("_")
"_doN_cons" :: "[doN_bind, doN_binds] \<Rightarrow> doN_binds" ("_;//_" [13, 12] 12)
"_thenM" :: "['a, 'b] \<Rightarrow> 'c" (infixl "\<then>" 54)
syntax (ASCII)
"_doN_bind" :: "[pttrn, 'a] \<Rightarrow> doN_bind" ("(2_ <-/ _)" 13)
"_thenM" :: "['a, 'b] \<Rightarrow> 'c" (infixl ">>" 54)
translations
"_doN_block (_doN_cons (_doN_then t) (_doN_final e))"
\<rightleftharpoons> "CONST bind_doN t (\<lambda>_. e)"
"_doN_block (_doN_cons (_doN_bind p t) (_doN_final e))"
\<rightleftharpoons> "CONST bind_doN t (\<lambda>p. e)"
"_doN_block (_doN_cons (_doN_let p t) bs)"
\<rightleftharpoons> "let p = t in _doN_block bs"
"_doN_block (_doN_cons b (_doN_cons c cs))"
\<rightleftharpoons> "_doN_block (_doN_cons b (_doN_final (_doN_block (_doN_cons c cs))))"
"_doN_cons (_doN_let p t) (_doN_final s)"
\<rightleftharpoons> "_doN_final (let p = t in s)"
"_doN_block (_doN_final e)" \<rightharpoonup> "e"
"(m \<then> n)" \<rightharpoonup> "(m \<bind> (\<lambda>_. n))"
subsection \<open>Array Blit exposed to Sepref\<close>
definition "op_list_blit src si dst di len \<equiv>
(take di dst @ take len (drop si src) @ drop (di+len) dst)"
context
notes op_list_blit_def[simp]
begin
sepref_decl_op (no_def) list_blit :
"op_list_blit"
:: "[\<lambda>((((src,si),dst),di),len). si+len \<le> length src \<and> di+len \<le> length dst]\<^sub>f
((((\<langle>A\<rangle>list_rel \<times>\<^sub>r nat_rel) \<times>\<^sub>r \<langle>A\<rangle>list_rel) \<times>\<^sub>r nat_rel) \<times>\<^sub>r nat_rel) \<rightarrow> \<langle>A\<rangle>list_rel" .
end
lemma blit_len[simp]: "si + len \<le> length src \<and> di + len \<le> length dst
\<Longrightarrow> length (op_list_blit src si dst di len) = length dst"
by (auto simp: op_list_blit_def)
context
notes [fcomp_norm_unfold] = array_assn_def[symmetric]
begin
lemma array_blit_hnr_aux:
"(uncurry4 (\<lambda>src si dst di len. do { blit src si dst di len; return dst }),
uncurry4 mop_list_blit)
\<in> is_array\<^sup>k*\<^sub>anat_assn\<^sup>k*\<^sub>ais_array\<^sup>d*\<^sub>anat_assn\<^sup>k*\<^sub>anat_assn\<^sup>k \<rightarrow>\<^sub>a is_array"
apply sepref_to_hoare
apply (clarsimp simp: refine_pw_simps)
apply (sep_auto simp: is_array_def op_list_blit_def)
done
sepref_decl_impl (ismop) array_blit: array_blit_hnr_aux .
end
end
|
If $f$ is $O(g)$ and $\Omega(g)$, then $f$ is $\Theta(g)$. |
import Lvl
open import Structure.Category
open import Structure.Setoid
open import Type
-- TODO: Deprecate this file and use Relator.Equals.Category instead
module Structure.Category.Morphism.IdTransport where
import Functional.Dependent as Fn
import Function.Equals
open Function.Equals.Dependent
open import Logic
open import Logic.Propositional
open import Logic.Predicate
open import Relator.Equals using ([≡]-intro) renaming (_≡_ to _≡ₑ_)
open import Relator.Equals.Proofs
import Structure.Categorical.Names as Names
open import Structure.Category.Functor
open import Structure.Categorical.Properties
open import Structure.Function
open import Structure.Relator.Equivalence
open import Structure.Relator.Properties
open import Syntax.Transitivity
module _
{ℓₒ ℓₘ ℓₑ : Lvl.Level}
(cat : CategoryObject{ℓₒ}{ℓₘ}{ℓₑ})
where
open CategoryObject(cat)
open Category(category) using (_∘_ ; id ; identityₗ ; identityᵣ)
open Category.ArrowNotation(category)
open Morphism.OperModule ⦃ morphism-equiv ⦄ (\{x} → _∘_ {x})
open Morphism.IdModule ⦃ morphism-equiv ⦄ (\{x} → _∘_ {x})(id)
private variable a b c : Object
-- Essentially the identity morphism masquerading as a morphism between two arbitrary but identical objects.
transport : (a ≡ₑ b) → (a ⟶ b)
transport = sub₂(_≡ₑ_)(_⟶_) ⦃ [≡]-sub-of-reflexive ⦃ intro id ⦄ ⦄
transport-function : Function ⦃ [≡]-equiv ⦄ ⦃ morphism-equiv ⦄ (transport {a = a}{b = b})
Function.congruence transport-function xy = sub₂(_≡ₑ_)(_≡_) ⦃ [≡]-sub-of-reflexive ⦃ Equiv.reflexivity(morphism-equiv) ⦄ ⦄ ([≡]-with(transport) xy)
transport-of-reflexivity : (transport(reflexivity(_≡ₑ_)) ≡ id{a})
transport-of-reflexivity = reflexivity(_≡_) ⦃ Equiv.reflexivity morphism-equiv ⦄
-- transport-of-symmetry : ∀{ab : (a ≡ₑ b)}{ba : (b ≡ₑ a)} → (transitivity(_≡ₑ_) ab ba ≡ reflexivity(_≡ₑ_)) → (transport(symmetry(_≡ₑ_) ab) ≡ transport ba)
transport-of-transitivity : ∀{ab : (a ≡ₑ b)}{bc : (b ≡ₑ c)} → (transport(transitivity(_≡ₑ_) ab bc) ≡ transport(bc) ∘ transport(ab))
transport-of-transitivity {ab = [≡]-intro} {bc = [≡]-intro} = symmetry(_≡_) ⦃ Equiv.symmetry morphism-equiv ⦄ (Morphism.identityₗ(_∘_)(id))
[∘]-on-transport-inverseₗ : ∀{ab : (a ≡ₑ b)} → ((transport (symmetry(_≡ₑ_) ab)) ∘ (transport ab) ≡ id)
[∘]-on-transport-inverseₗ {ab = [≡]-intro} = Morphism.identityₗ(_∘_)(id)
instance
transport-inverseₗ : ∀{ab : (a ≡ₑ b)} → Inverseₗ(transport ab) (transport(symmetry(_≡ₑ_) ab))
transport-inverseₗ {ab = ab} = Morphism.intro ([∘]-on-transport-inverseₗ {ab = ab})
[∘]-on-transport-inverseᵣ : ∀{ab : (a ≡ₑ b)} → ((transport ab) ∘ (transport (symmetry(_≡ₑ_) ab)) ≡ id)
[∘]-on-transport-inverseᵣ {ab = [≡]-intro} = Morphism.identityᵣ(_∘_)(id)
instance
transport-inverseᵣ : ∀{ab : (a ≡ₑ b)} → Inverseᵣ(transport ab) (transport(symmetry(_≡ₑ_) ab))
transport-inverseᵣ {ab = ab} = Morphism.intro ([∘]-on-transport-inverseᵣ {ab = ab})
instance
transport-isomorphism : ∀{ab : (a ≡ₑ b)} → Isomorphism(transport ab)
transport-isomorphism {ab = ab} = [∃]-intro (transport(symmetry(_≡ₑ_) ab)) ⦃ [∧]-intro (transport-inverseₗ {ab = ab}) (transport-inverseᵣ {ab = ab}) ⦄
transport-congruence-symmetry-involution : ∀{ab : (a ≡ₑ b)} → ((transport Fn.∘ symmetry(_≡ₑ_) Fn.∘ symmetry(_≡ₑ_)) ab ≡ transport ab)
transport-congruence-symmetry-involution {ab = [≡]-intro} = reflexivity(_≡_) ⦃ Equiv.reflexivity morphism-equiv ⦄
module _
{ℓₒₗ ℓₘₗ ℓₑₗ ℓₒᵣ ℓₘᵣ ℓₑᵣ : Lvl.Level}
{catₗ : CategoryObject{ℓₒₗ}{ℓₘₗ}{ℓₑₗ}}
{catᵣ : CategoryObject{ℓₒᵣ}{ℓₘᵣ}{ℓₑᵣ}}
where
open CategoryObject
open Category using (_∘_ ; id ; identityₗ ; identityᵣ)
open Category.ArrowNotation
private open module Equivᵣ {x}{y} = Equivalence (Equiv-equivalence ⦃ morphism-equiv(catᵣ){x}{y} ⦄) using ()
transport-of-congruenced-functor : (([∃]-intro F ⦃ intro map ⦄) : catₗ →ᶠᵘⁿᶜᵗᵒʳ catᵣ) → ∀{a b : Object(catₗ)}{ab : (a ≡ₑ b)} → (transport(catᵣ)(congruence₁ F ab) ≡ map(transport(catₗ)(ab)))
transport-of-congruenced-functor ([∃]-intro F functor@⦃ intro map ⦄) {ab = [≡]-intro} =
transport catᵣ (congruence₁ F [≡]-intro) 🝖[ _≡_ ]-[]
transport catᵣ [≡]-intro 🝖[ _≡_ ]-[]
id(category(catᵣ)) 🝖[ _≡_ ]-[ Functor.id-preserving functor ]-sym
map(id(category(catₗ))) 🝖[ _≡_ ]-[]
map(transport catₗ [≡]-intro) 🝖-end
-- transport-of-congruenced-bifunctor : ∀{ab : (a ≡ₑ b)}{[∃]-intro F : Bifunctor} → (F(transport(ab)(cd)) ≡ transport(congruence₂ F ab cd))
|
State Before: α : Type u
f g : Filter α
s✝ t : Set α
β : Type v
s : β → Set α
is : Set β
hf : Set.Finite is
⊢ (⋂ (i : β) (_ : i ∈ ∅), s i) ∈ f ↔ ∀ (i : β), i ∈ ∅ → s i ∈ f State After: no goals Tactic: simp State Before: α : Type u
f g : Filter α
s✝¹ t : Set α
β : Type v
s : β → Set α
is : Set β
hf : Set.Finite is
a✝ : β
s✝ : Set β
x✝¹ : ¬a✝ ∈ s✝
x✝ : Set.Finite s✝
hs : (⋂ (i : β) (_ : i ∈ s✝), s i) ∈ f ↔ ∀ (i : β), i ∈ s✝ → s i ∈ f
⊢ (⋂ (i : β) (_ : i ∈ insert a✝ s✝), s i) ∈ f ↔ ∀ (i : β), i ∈ insert a✝ s✝ → s i ∈ f State After: no goals Tactic: simp [hs] |
(* Title: HOL/BNF_Fixpoint_Base.thy
Author: Lorenz Panny, TU Muenchen
Author: Dmitriy Traytel, TU Muenchen
Author: Jasmin Blanchette, TU Muenchen
Author: Martin Desharnais, TU Muenchen
Copyright 2012, 2013, 2014
Shared fixpoint operations on bounded natural functors.
*)
section {* Shared Fixpoint Operations on Bounded Natural Functors *}
theory BNF_Fixpoint_Base
imports BNF_Composition Basic_BNFs
begin
lemma False_imp_eq_True: "(False \<Longrightarrow> Q) \<equiv> Trueprop True"
by default simp_all
lemma conj_imp_eq_imp_imp: "(P \<and> Q \<Longrightarrow> PROP R) \<equiv> (P \<Longrightarrow> Q \<Longrightarrow> PROP R)"
by default simp_all
lemma mp_conj: "(P \<longrightarrow> Q) \<and> R \<Longrightarrow> P \<Longrightarrow> R \<and> Q"
by auto
lemma predicate2D_conj: "P \<le> Q \<and> R \<Longrightarrow> P x y \<Longrightarrow> R \<and> Q x y"
by auto
lemma eq_sym_Unity_conv: "(x = (() = ())) = x"
by blast
lemma case_unit_Unity: "(case u of () \<Rightarrow> f) = f"
by (cases u) (hypsubst, rule unit.case)
lemma case_prod_Pair_iden: "(case p of (x, y) \<Rightarrow> (x, y)) = p"
by simp
lemma unit_all_impI: "(P () \<Longrightarrow> Q ()) \<Longrightarrow> \<forall>x. P x \<longrightarrow> Q x"
by simp
lemma pointfree_idE: "f \<circ> g = id \<Longrightarrow> f (g x) = x"
unfolding comp_def fun_eq_iff by simp
lemma o_bij:
assumes gf: "g \<circ> f = id" and fg: "f \<circ> g = id"
shows "bij f"
unfolding bij_def inj_on_def surj_def proof safe
fix a1 a2 assume "f a1 = f a2"
hence "g ( f a1) = g (f a2)" by simp
thus "a1 = a2" using gf unfolding fun_eq_iff by simp
next
fix b
have "b = f (g b)"
using fg unfolding fun_eq_iff by simp
thus "EX a. b = f a" by blast
qed
lemma ssubst_mem: "\<lbrakk>t = s; s \<in> X\<rbrakk> \<Longrightarrow> t \<in> X"
by simp
lemma case_sum_step:
"case_sum (case_sum f' g') g (Inl p) = case_sum f' g' p"
"case_sum f (case_sum f' g') (Inr p) = case_sum f' g' p"
by auto
lemma obj_one_pointE: "\<forall>x. s = x \<longrightarrow> P \<Longrightarrow> P"
by blast
lemma type_copy_obj_one_point_absE:
assumes "type_definition Rep Abs UNIV" "\<forall>x. s = Abs x \<longrightarrow> P" shows P
using type_definition.Rep_inverse[OF assms(1)]
by (intro mp[OF spec[OF assms(2), of "Rep s"]]) simp
lemma obj_sumE_f:
assumes "\<forall>x. s = f (Inl x) \<longrightarrow> P" "\<forall>x. s = f (Inr x) \<longrightarrow> P"
shows "\<forall>x. s = f x \<longrightarrow> P"
proof
fix x from assms show "s = f x \<longrightarrow> P" by (cases x) auto
qed
lemma case_sum_if:
"case_sum f g (if p then Inl x else Inr y) = (if p then f x else g y)"
by simp
lemma prod_set_simps:
"fsts (x, y) = {x}"
"snds (x, y) = {y}"
unfolding prod_set_defs by simp+
lemma sum_set_simps:
"setl (Inl x) = {x}"
"setl (Inr x) = {}"
"setr (Inl x) = {}"
"setr (Inr x) = {x}"
unfolding sum_set_defs by simp+
lemma Inl_Inr_False: "(Inl x = Inr y) = False"
by simp
lemma Inr_Inl_False: "(Inr x = Inl y) = False"
by simp
lemma spec2: "\<forall>x y. P x y \<Longrightarrow> P x y"
by blast
lemma rewriteR_comp_comp: "\<lbrakk>g \<circ> h = r\<rbrakk> \<Longrightarrow> f \<circ> g \<circ> h = f \<circ> r"
unfolding comp_def fun_eq_iff by auto
lemma rewriteR_comp_comp2: "\<lbrakk>g \<circ> h = r1 \<circ> r2; f \<circ> r1 = l\<rbrakk> \<Longrightarrow> f \<circ> g \<circ> h = l \<circ> r2"
unfolding comp_def fun_eq_iff by auto
lemma rewriteL_comp_comp: "\<lbrakk>f \<circ> g = l\<rbrakk> \<Longrightarrow> f \<circ> (g \<circ> h) = l \<circ> h"
unfolding comp_def fun_eq_iff by auto
lemma rewriteL_comp_comp2: "\<lbrakk>f \<circ> g = l1 \<circ> l2; l2 \<circ> h = r\<rbrakk> \<Longrightarrow> f \<circ> (g \<circ> h) = l1 \<circ> r"
unfolding comp_def fun_eq_iff by auto
lemma convol_o: "\<langle>f, g\<rangle> \<circ> h = \<langle>f \<circ> h, g \<circ> h\<rangle>"
unfolding convol_def by auto
lemma map_prod_o_convol: "map_prod h1 h2 \<circ> \<langle>f, g\<rangle> = \<langle>h1 \<circ> f, h2 \<circ> g\<rangle>"
unfolding convol_def by auto
lemma map_prod_o_convol_id: "(map_prod f id \<circ> \<langle>id, g\<rangle>) x = \<langle>id \<circ> f, g\<rangle> x"
unfolding map_prod_o_convol id_comp comp_id ..
lemma o_case_sum: "h \<circ> case_sum f g = case_sum (h \<circ> f) (h \<circ> g)"
unfolding comp_def by (auto split: sum.splits)
lemma case_sum_o_map_sum: "case_sum f g \<circ> map_sum h1 h2 = case_sum (f \<circ> h1) (g \<circ> h2)"
unfolding comp_def by (auto split: sum.splits)
lemma case_sum_o_map_sum_id: "(case_sum id g \<circ> map_sum f id) x = case_sum (f \<circ> id) g x"
unfolding case_sum_o_map_sum id_comp comp_id ..
lemma rel_fun_def_butlast:
"rel_fun R (rel_fun S T) f g = (\<forall>x y. R x y \<longrightarrow> (rel_fun S T) (f x) (g y))"
unfolding rel_fun_def ..
lemma subst_eq_imp: "(\<forall>a b. a = b \<longrightarrow> P a b) \<equiv> (\<forall>a. P a a)"
by auto
lemma eq_subset: "op = \<le> (\<lambda>a b. P a b \<or> a = b)"
by auto
lemma eq_le_Grp_id_iff: "(op = \<le> Grp (Collect R) id) = (All R)"
unfolding Grp_def id_apply by blast
lemma Grp_id_mono_subst: "(\<And>x y. Grp P id x y \<Longrightarrow> Grp Q id (f x) (f y)) \<equiv>
(\<And>x. x \<in> P \<Longrightarrow> f x \<in> Q)"
unfolding Grp_def by rule auto
lemma vimage2p_mono: "vimage2p f g R x y \<Longrightarrow> R \<le> S \<Longrightarrow> vimage2p f g S x y"
unfolding vimage2p_def by blast
lemma vimage2p_refl: "(\<And>x. R x x) \<Longrightarrow> vimage2p f f R x x"
unfolding vimage2p_def by auto
lemma
assumes "type_definition Rep Abs UNIV"
shows type_copy_Rep_o_Abs: "Rep \<circ> Abs = id" and type_copy_Abs_o_Rep: "Abs \<circ> Rep = id"
unfolding fun_eq_iff comp_apply id_apply
type_definition.Abs_inverse[OF assms UNIV_I] type_definition.Rep_inverse[OF assms] by simp_all
lemma type_copy_map_comp0_undo:
assumes "type_definition Rep Abs UNIV"
"type_definition Rep' Abs' UNIV"
"type_definition Rep'' Abs'' UNIV"
shows "Abs' \<circ> M \<circ> Rep'' = (Abs' \<circ> M1 \<circ> Rep) \<circ> (Abs \<circ> M2 \<circ> Rep'') \<Longrightarrow> M1 \<circ> M2 = M"
by (rule sym) (auto simp: fun_eq_iff type_definition.Abs_inject[OF assms(2) UNIV_I UNIV_I]
type_definition.Abs_inverse[OF assms(1) UNIV_I]
type_definition.Abs_inverse[OF assms(3) UNIV_I] dest: spec[of _ "Abs'' x" for x])
lemma vimage2p_id: "vimage2p id id R = R"
unfolding vimage2p_def by auto
lemma vimage2p_comp: "vimage2p (f1 \<circ> f2) (g1 \<circ> g2) = vimage2p f2 g2 \<circ> vimage2p f1 g1"
unfolding fun_eq_iff vimage2p_def o_apply by simp
lemma vimage2p_rel_fun: "rel_fun (vimage2p f g R) R f g"
unfolding rel_fun_def vimage2p_def by auto
lemma fun_cong_unused_0: "f = (\<lambda>x. g) \<Longrightarrow> f (\<lambda>x. 0) = g"
by (erule arg_cong)
lemma inj_on_convol_ident: "inj_on (\<lambda>x. (x, f x)) X"
unfolding inj_on_def by simp
lemma map_sum_if_distrib_then:
"\<And>f g e x y. map_sum f g (if e then Inl x else y) = (if e then Inl (f x) else map_sum f g y)"
"\<And>f g e x y. map_sum f g (if e then Inr x else y) = (if e then Inr (g x) else map_sum f g y)"
by simp_all
lemma map_sum_if_distrib_else:
"\<And>f g e x y. map_sum f g (if e then x else Inl y) = (if e then map_sum f g x else Inl (f y))"
"\<And>f g e x y. map_sum f g (if e then x else Inr y) = (if e then map_sum f g x else Inr (g y))"
by simp_all
lemma case_prod_app: "case_prod f x y = case_prod (\<lambda>l r. f l r y) x"
by (case_tac x) simp
lemma case_sum_map_sum: "case_sum l r (map_sum f g x) = case_sum (l \<circ> f) (r \<circ> g) x"
by (case_tac x) simp+
lemma case_sum_transfer:
"rel_fun (rel_fun R T) (rel_fun (rel_fun S T) (rel_fun (rel_sum R S) T)) case_sum case_sum"
unfolding rel_fun_def by (auto split: sum.splits)
lemma case_prod_map_prod: "case_prod h (map_prod f g x) = case_prod (\<lambda>l r. h (f l) (g r)) x"
by (case_tac x) simp+
lemma case_prod_o_map_prod: "case_prod f \<circ> map_prod g1 g2 = case_prod (\<lambda>l r. f (g1 l) (g2 r))"
unfolding comp_def by auto
lemma case_prod_transfer:
"(rel_fun (rel_fun A (rel_fun B C)) (rel_fun (rel_prod A B) C)) case_prod case_prod"
unfolding rel_fun_def by simp
lemma eq_ifI: "(P \<longrightarrow> t = u1) \<Longrightarrow> (\<not> P \<longrightarrow> t = u2) \<Longrightarrow> t = (if P then u1 else u2)"
by simp
lemma comp_transfer:
"rel_fun (rel_fun B C) (rel_fun (rel_fun A B) (rel_fun A C)) (op \<circ>) (op \<circ>)"
unfolding rel_fun_def by simp
lemma If_transfer: "rel_fun (op =) (rel_fun A (rel_fun A A)) If If"
unfolding rel_fun_def by simp
lemma Abs_transfer:
assumes type_copy1: "type_definition Rep1 Abs1 UNIV"
assumes type_copy2: "type_definition Rep2 Abs2 UNIV"
shows "rel_fun R (vimage2p Rep1 Rep2 R) Abs1 Abs2"
unfolding vimage2p_def rel_fun_def
type_definition.Abs_inverse[OF type_copy1 UNIV_I]
type_definition.Abs_inverse[OF type_copy2 UNIV_I] by simp
lemma Inl_transfer:
"rel_fun S (rel_sum S T) Inl Inl"
by auto
lemma Inr_transfer:
"rel_fun T (rel_sum S T) Inr Inr"
by auto
lemma Pair_transfer: "rel_fun A (rel_fun B (rel_prod A B)) Pair Pair"
unfolding rel_fun_def by simp
ML_file "Tools/BNF/bnf_fp_util.ML"
ML_file "Tools/BNF/bnf_fp_def_sugar_tactics.ML"
ML_file "Tools/BNF/bnf_fp_def_sugar.ML"
ML_file "Tools/BNF/bnf_fp_n2m_tactics.ML"
ML_file "Tools/BNF/bnf_fp_n2m.ML"
ML_file "Tools/BNF/bnf_fp_n2m_sugar.ML"
end
|
[STATEMENT]
lemma poly_pos_between_leq_less:
"(\<forall>x. a \<le> x \<and> x < b \<longrightarrow> poly p x > 0) \<longleftrightarrow>
((a \<ge> b \<or> (p \<noteq> 0 \<and> poly p a > 0 \<and> count_roots_between p a b =
(if a < b \<and> poly p b = 0 then 1 else 0))))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<forall>x. a \<le> x \<and> x < b \<longrightarrow> 0 < poly p x) = (b \<le> a \<or> p \<noteq> 0 \<and> 0 < poly p a \<and> count_roots_between p a b = (if a < b \<and> poly p b = 0 then 1 else 0))
[PROOF STEP]
by (simp only: poly_pos_between_leq_less Let_def
poly_no_roots_leq_less, force) |
-- Theorems/Exercises from "Logical Investigations, with the Nuprl Proof Assistant"
-- by Robert L. Constable and Anne Trostle
-- http://www.nuprl.org/MathLibrary/LogicalInvestigations/
import logic
-- 2. The Minimal Implicational Calculus
theorem thm1 {A B : Prop} : A → B → A :=
assume Ha Hb, Ha
theorem thm2 {A B C : Prop} : (A → B) → (A → B → C) → (A → C) :=
assume Hab Habc Ha,
Habc Ha (Hab Ha)
theorem thm3 {A B C : Prop} : (A → B) → (B → C) → (A → C) :=
assume Hab Hbc Ha,
Hbc (Hab Ha)
-- 3. False Propositions and Negation
theorem thm4 {P Q : Prop} : ¬P → P → Q :=
assume Hnp Hp,
absurd Hp Hnp
theorem thm5 {P : Prop} : P → ¬¬P :=
assume (Hp : P) (HnP : ¬P),
absurd Hp HnP
theorem thm6 {P Q : Prop} : (P → Q) → (¬Q → ¬P) :=
assume (Hpq : P → Q) (Hnq : ¬Q) (Hp : P),
have Hq : Q, from Hpq Hp,
show false, from absurd Hq Hnq
theorem thm7 {P Q : Prop} : (P → ¬P) → (P → Q) :=
assume Hpnp Hp,
absurd Hp (Hpnp Hp)
theorem thm8 {P Q : Prop} : ¬(P → Q) → (P → ¬Q) :=
assume (Hn : ¬(P → Q)) (Hp : P) (Hq : Q),
-- Remark: we don't even need the hypothesis Hp
have H : P → Q, from assume H', Hq,
absurd H Hn
-- 4. Conjunction and Disjunction
theorem thm9 {P : Prop} : (P ∨ ¬P) → (¬¬P → P) :=
assume (em : P ∨ ¬P) (Hnn : ¬¬P),
or.elim em
(assume Hp, Hp)
(assume Hn, absurd Hn Hnn)
theorem thm10 {P : Prop} : ¬¬(P ∨ ¬P) :=
assume Hnem : ¬(P ∨ ¬P),
have Hnp : ¬P, from
assume Hp : P,
have Hem : P ∨ ¬P, from or.inl Hp,
absurd Hem Hnem,
have Hem : P ∨ ¬P, from or.inr Hnp,
absurd Hem Hnem
theorem thm11 {P Q : Prop} : ¬P ∨ ¬Q → ¬(P ∧ Q) :=
assume (H : ¬P ∨ ¬Q) (Hn : P ∧ Q),
or.elim H
(assume Hnp : ¬P, absurd (and.elim_left Hn) Hnp)
(assume Hnq : ¬Q, absurd (and.elim_right Hn) Hnq)
theorem thm12 {P Q : Prop} : ¬(P ∨ Q) → ¬P ∧ ¬Q :=
assume H : ¬(P ∨ Q),
have Hnp : ¬P, from assume Hp : P, absurd (or.inl Hp) H,
have Hnq : ¬Q, from assume Hq : Q, absurd (or.inr Hq) H,
and.intro Hnp Hnq
theorem thm13 {P Q : Prop} : ¬P ∧ ¬Q → ¬(P ∨ Q) :=
assume (H : ¬P ∧ ¬Q) (Hn : P ∨ Q),
or.elim Hn
(assume Hp : P, absurd Hp (and.elim_left H))
(assume Hq : Q, absurd Hq (and.elim_right H))
theorem thm14 {P Q : Prop} : ¬P ∨ Q → P → Q :=
assume (Hor : ¬P ∨ Q) (Hp : P),
or.elim Hor
(assume Hnp : ¬P, absurd Hp Hnp)
(assume Hq : Q, Hq)
theorem thm15 {P Q : Prop} : (P → Q) → ¬¬(¬P ∨ Q) :=
assume (Hpq : P → Q) (Hn : ¬(¬P ∨ Q)),
have H1 : ¬¬P ∧ ¬Q, from thm12 Hn,
have Hnp : ¬P, from mt Hpq (and.elim_right H1),
absurd Hnp (and.elim_left H1)
theorem thm16 {P Q : Prop} : (P → Q) ∧ ((P ∨ ¬P) ∨ (Q ∨ ¬Q)) → ¬P ∨ Q :=
assume H : (P → Q) ∧ ((P ∨ ¬P) ∨ (Q ∨ ¬Q)),
have Hpq : P → Q, from and.elim_left H,
or.elim (and.elim_right H)
(assume Hem1 : P ∨ ¬P, or.elim Hem1
(assume Hp : P, or.inr (Hpq Hp))
(assume Hnp : ¬P, or.inl Hnp))
(assume Hem2 : Q ∨ ¬Q, or.elim Hem2
(assume Hq : Q, or.inr Hq)
(assume Hnq : ¬Q, or.inl (mt Hpq Hnq)))
-- 5. First-Order Logic: All and Exists
section
variables {T : Type} {C : Prop} {P : T → Prop}
theorem thm17a : (C → ∀x, P x) → (∀x, C → P x) :=
assume H : C → ∀x, P x,
take x : T, assume Hc : C,
H Hc x
theorem thm17b : (∀x, C → P x) → (C → ∀x, P x) :=
assume (H : ∀x, C → P x) (Hc : C),
take x : T,
H x Hc
theorem thm18a : ((∃x, P x) → C) → (∀x, P x → C) :=
assume H : (∃x, P x) → C,
take x, assume Hp : P x,
have Hex : ∃x, P x, from exists.intro x Hp,
H Hex
theorem thm18b : (∀x, P x → C) → (∃x, P x) → C :=
assume (H1 : ∀x, P x → C) (H2 : ∃x, P x),
obtain (w : T) (Hw : P w), from H2,
H1 w Hw
theorem thm19a : (C ∨ ¬C) → (∃x : T, true) → (C → (∃x, P x)) → (∃x, C → P x) :=
assume (Hem : C ∨ ¬C) (Hin : ∃x : T, true) (H1 : C → ∃x, P x),
or.elim Hem
(assume Hc : C,
obtain (w : T) (Hw : P w), from H1 Hc,
have Hr : C → P w, from assume Hc, Hw,
exists.intro w Hr)
(assume Hnc : ¬C,
obtain (w : T) (Hw : true), from Hin,
have Hr : C → P w, from assume Hc, absurd Hc Hnc,
exists.intro w Hr)
theorem thm19b : (∃x, C → P x) → C → (∃x, P x) :=
assume (H : ∃x, C → P x) (Hc : C),
obtain (w : T) (Hw : C → P w), from H,
exists.intro w (Hw Hc)
theorem thm20a : (C ∨ ¬C) → (∃x : T, true) → ((¬∀x, P x) → ∃x, ¬P x) → ((∀x, P x) → C) → (∃x, P x → C) :=
assume Hem Hin Hnf H,
or.elim Hem
(assume Hc : C,
obtain (w : T) (Hw : true), from Hin,
exists.intro w (assume H : P w, Hc))
(assume Hnc : ¬C,
have H1 : ¬(∀x, P x), from mt H Hnc,
have H2 : ∃x, ¬P x, from Hnf H1,
obtain (w : T) (Hw : ¬P w), from H2,
exists.intro w (assume H : P w, absurd H Hw))
theorem thm20b : (∃x, P x → C) → (∀ x, P x) → C :=
assume Hex Hall,
obtain (w : T) (Hw : P w → C), from Hex,
Hw (Hall w)
theorem thm21a : (∃x : T, true) → ((∃x, P x) ∨ C) → (∃x, P x ∨ C) :=
assume Hin H,
or.elim H
(assume Hex : ∃x, P x,
obtain (w : T) (Hw : P w), from Hex,
exists.intro w (or.inl Hw))
(assume Hc : C,
obtain (w : T) (Hw : true), from Hin,
exists.intro w (or.inr Hc))
theorem thm21b : (∃x, P x ∨ C) → ((∃x, P x) ∨ C) :=
assume H,
obtain (w : T) (Hw : P w ∨ C), from H,
or.elim Hw
(assume H : P w, or.inl (exists.intro w H))
(assume Hc : C, or.inr Hc)
theorem thm22a : (∀x, P x) ∨ C → ∀x, P x ∨ C :=
assume H, take x,
or.elim H
(assume Hl, or.inl (Hl x))
(assume Hr, or.inr Hr)
theorem thm22b : (C ∨ ¬C) → (∀x, P x ∨ C) → ((∀x, P x) ∨ C) :=
assume Hem H1,
or.elim Hem
(assume Hc : C, or.inr Hc)
(assume Hnc : ¬C,
have Hx : ∀x, P x, from
take x,
have H1 : P x ∨ C, from H1 x,
or_resolve_left H1 Hnc,
or.inl Hx)
theorem thm23a : (∃x, P x) ∧ C → (∃x, P x ∧ C) :=
assume H,
have Hex : ∃x, P x, from and.elim_left H,
have Hc : C, from and.elim_right H,
obtain (w : T) (Hw : P w), from Hex,
exists.intro w (and.intro Hw Hc)
theorem thm23b : (∃x, P x ∧ C) → (∃x, P x) ∧ C :=
assume H,
obtain (w : T) (Hw : P w ∧ C), from H,
have Hex : ∃x, P x, from exists.intro w (and.elim_left Hw),
and.intro Hex (and.elim_right Hw)
theorem thm24a : (∀x, P x) ∧ C → (∀x, P x ∧ C) :=
assume H, take x,
and.intro (and.elim_left H x) (and.elim_right H)
theorem thm24b : (∃x : T, true) → (∀x, P x ∧ C) → (∀x, P x) ∧ C :=
assume Hin H,
obtain (w : T) (Hw : true), from Hin,
have Hc : C, from and.elim_right (H w),
have Hx : ∀x, P x, from take x, and.elim_left (H x),
and.intro Hx Hc
end -- of section
|
lemma abs_prod: "abs (prod f A :: 'a :: linordered_idom) = prod (\<lambda>x. abs (f x)) A" |
[STATEMENT]
lemma DirProds_one_iso: "(\<lambda>x. x G) \<in> iso (DirProds f {G}) (f G)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<lambda>x. x G) \<in> Group.iso (DirProds f {G}) (f G)
[PROOF STEP]
proof (intro isoI homI)
[PROOF STATE]
proof (state)
goal (3 subgoals):
1. \<And>x. x \<in> carrier (DirProds f {G}) \<Longrightarrow> x G \<in> carrier (f G)
2. \<And>x y. \<lbrakk>x \<in> carrier (DirProds f {G}); y \<in> carrier (DirProds f {G})\<rbrakk> \<Longrightarrow> (x \<otimes>\<^bsub>DirProds f {G}\<^esub> y) G = x G \<otimes>\<^bsub>f G\<^esub> y G
3. bij_betw (\<lambda>x. x G) (carrier (DirProds f {G})) (carrier (f G))
[PROOF STEP]
show "bij_betw (\<lambda>x. x G) (carrier (DirProds f {G})) (carrier (f G))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. bij_betw (\<lambda>x. x G) (carrier (DirProds f {G})) (carrier (f G))
[PROOF STEP]
proof (unfold bij_betw_def, rule)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. inj_on (\<lambda>x. x G) (carrier (DirProds f {G}))
2. (\<lambda>x. x G) ` carrier (DirProds f {G}) = carrier (f G)
[PROOF STEP]
show "inj_on (\<lambda>x. x G) (carrier (DirProds f {G}))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. inj_on (\<lambda>x. x G) (carrier (DirProds f {G}))
[PROOF STEP]
by (intro inj_onI, unfold DirProds_def PiE_def Pi_def extensional_def, fastforce)
[PROOF STATE]
proof (state)
this:
inj_on (\<lambda>x. x G) (carrier (DirProds f {G}))
goal (1 subgoal):
1. (\<lambda>x. x G) ` carrier (DirProds f {G}) = carrier (f G)
[PROOF STEP]
show "(\<lambda>x. x G) ` carrier (DirProds f {G}) = carrier (f G)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<lambda>x. x G) ` carrier (DirProds f {G}) = carrier (f G)
[PROOF STEP]
proof(intro equalityI subsetI)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. \<And>x. x \<in> (\<lambda>x. x G) ` carrier (DirProds f {G}) \<Longrightarrow> x \<in> carrier (f G)
2. \<And>x. x \<in> carrier (f G) \<Longrightarrow> x \<in> (\<lambda>x. x G) ` carrier (DirProds f {G})
[PROOF STEP]
show "x \<in> carrier (f G)" if "x \<in> (\<lambda>x. x G) ` carrier (DirProds f {G})" for x
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x \<in> carrier (f G)
[PROOF STEP]
using that
[PROOF STATE]
proof (prove)
using this:
x \<in> (\<lambda>x. x G) ` carrier (DirProds f {G})
goal (1 subgoal):
1. x \<in> carrier (f G)
[PROOF STEP]
unfolding DirProds_def
[PROOF STATE]
proof (prove)
using this:
x \<in> (\<lambda>x. x G) ` carrier \<lparr>carrier = Pi\<^sub>E {G} (carrier \<circ> f), monoid.mult = \<lambda>x y. \<lambda>i\<in>{G}. x i \<otimes>\<^bsub>f i\<^esub> y i, one = \<lambda>i\<in>{G}. \<one>\<^bsub>f i\<^esub>\<rparr>
goal (1 subgoal):
1. x \<in> carrier (f G)
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
?x \<in> (\<lambda>x. x G) ` carrier (DirProds f {G}) \<Longrightarrow> ?x \<in> carrier (f G)
goal (1 subgoal):
1. \<And>x. x \<in> carrier (f G) \<Longrightarrow> x \<in> (\<lambda>x. x G) ` carrier (DirProds f {G})
[PROOF STEP]
show "x \<in> (\<lambda>x. x G) ` carrier (DirProds f {G})" if xc: "x \<in> carrier (f G)" for x
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x \<in> (\<lambda>x. x G) ` carrier (DirProds f {G})
[PROOF STEP]
proof
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. x = ?x G
2. ?x \<in> carrier (DirProds f {G})
[PROOF STEP]
show "(\<lambda>k\<in>{G}. x) \<in> carrier (DirProds f {G})"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<lambda>k\<in>{G}. x) \<in> carrier (DirProds f {G})
[PROOF STEP]
unfolding DirProds_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<lambda>k\<in>{G}. x) \<in> carrier \<lparr>carrier = Pi\<^sub>E {G} (carrier \<circ> f), monoid.mult = \<lambda>x y. \<lambda>i\<in>{G}. x i \<otimes>\<^bsub>f i\<^esub> y i, one = \<lambda>i\<in>{G}. \<one>\<^bsub>f i\<^esub>\<rparr>
[PROOF STEP]
using xc
[PROOF STATE]
proof (prove)
using this:
x \<in> carrier (f G)
goal (1 subgoal):
1. (\<lambda>k\<in>{G}. x) \<in> carrier \<lparr>carrier = Pi\<^sub>E {G} (carrier \<circ> f), monoid.mult = \<lambda>x y. \<lambda>i\<in>{G}. x i \<otimes>\<^bsub>f i\<^esub> y i, one = \<lambda>i\<in>{G}. \<one>\<^bsub>f i\<^esub>\<rparr>
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
(\<lambda>k\<in>{G}. x) \<in> carrier (DirProds f {G})
goal (1 subgoal):
1. x = (\<lambda>k\<in>{G}. x) G
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
(\<lambda>k\<in>{G}. x) \<in> carrier (DirProds f {G})
goal (1 subgoal):
1. x = (\<lambda>k\<in>{G}. x) G
[PROOF STEP]
show "x = (\<lambda>k\<in>{G}. x) G"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x = (\<lambda>k\<in>{G}. x) G
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
x = (\<lambda>k\<in>{G}. x) G
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
?x \<in> carrier (f G) \<Longrightarrow> ?x \<in> (\<lambda>x. x G) ` carrier (DirProds f {G})
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
(\<lambda>x. x G) ` carrier (DirProds f {G}) = carrier (f G)
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
bij_betw (\<lambda>x. x G) (carrier (DirProds f {G})) (carrier (f G))
goal (2 subgoals):
1. \<And>x. x \<in> carrier (DirProds f {G}) \<Longrightarrow> x G \<in> carrier (f G)
2. \<And>x y. \<lbrakk>x \<in> carrier (DirProds f {G}); y \<in> carrier (DirProds f {G})\<rbrakk> \<Longrightarrow> (x \<otimes>\<^bsub>DirProds f {G}\<^esub> y) G = x G \<otimes>\<^bsub>f G\<^esub> y G
[PROOF STEP]
qed (unfold DirProds_def PiE_def Pi_def extensional_def, auto) |
#ifndef NODE_ATTR_DOC_HPP_
#define NODE_ATTR_DOC_HPP_
/////////1/////////2/////////3/////////4/////////5/////////6/////////7/////////8
// Name :
// Author : Avi
// Revision : $Revision: #8 $
//
// Copyright 2009-2020 ECMWF.
// This software is licensed under the terms of the Apache Licence version 2.0
// which can be obtained at http://www.apache.org/licenses/LICENSE-2.0.
// In applying this licence, ECMWF does not waive the privileges and immunities
// granted to it by virtue of its status as an intergovernmental organisation
// nor does it submit to any jurisdiction.
//
// Description :
/////////1/////////2/////////3/////////4/////////5/////////6/////////7/////////8
#include <boost/core/noncopyable.hpp>
// ===========================================================================
// IMPORTANT: These appear as python doc strings.
// Additionally they are auto documented using sphinx-poco
// Hence the doc strings use reStructuredText markup
// ===========================================================================
class NodeAttrDoc : private boost::noncopyable {
public:
static const char* variable_doc();
static const char* zombie_doc();
static const char* zombie_type_doc();
static const char* zombie_user_action_type_doc();
static const char* child_cmd_type_doc();
static const char* label_doc();
static const char* limit_doc();
static const char* inlimit_doc();
static const char* event_doc();
static const char* meter_doc();
static const char* queue_doc();
static const char* date_doc();
static const char* day_doc();
static const char* days_enum_doc();
static const char* time_doc();
static const char* today_doc();
static const char* late_doc();
static const char* autocancel_doc();
static const char* autoarchive_doc();
static const char* autorestore_doc();
static const char* repeat_doc();
static const char* repeat_date_doc();
static const char* repeat_date_list_doc();
static const char* repeat_integer_doc();
static const char* repeat_enumerated_doc();
static const char* repeat_string_doc();
static const char* repeat_day_doc();
static const char* cron_doc();
static const char* clock_doc();
private:
NodeAttrDoc()= default;
};
#endif
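As the header's comment notes, these functions return reStructuredText doc strings that are exposed through the Python bindings and auto-documented with sphinx-poco. A minimal sketch of how one such function could be implemented in the corresponding `.cpp` file (the function name and doc text below are illustrative stand-ins, not the actual ECMWF implementation):

```cpp
#include <cassert>
#include <cstring>

// Illustrative stand-in for NodeAttrDoc::label_doc(); the real doc string
// lives in NodeAttrDoc.cpp. Note the reStructuredText markup in the text,
// which sphinx-poco later renders as documentation.
static const char* label_doc_sketch() {
    return "Label\n"
           "-----\n\n"
           "A :term:`label` allows a task to report a variable-length string\n"
           "back to the server, typically to show progress.\n";
}
```

The doc strings are plain C strings, so the Python binding layer can attach them to the exported classes without any further processing.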
|
These two healthcare REITs are trading for dirt-cheap valuations despite high dividends and a solid history of growth.
Investing in real estate investment trusts, or REITs, is one of the best ways to enjoy high dividends and the potential for capital growth. On a valuation basis, REITs specializing in healthcare properties are trading cheaply right now, and two seem to be a particularly good bargain: HCP Inc. (NYSE:HCP) and Medical Properties Trust (NYSE:MPW).
This type of real estate should be an excellent long-term investment for three main reasons: demographics, increased healthcare spending, and market opportunity.
Demographics indicate a growing demand for healthcare properties over the coming decades. Simply put, the population is getting older -- fast. The 65-and-up population in the U.S. is expected to nearly double by 2050 as baby boomers age and live longer. Older individuals require more healthcare, therefore the number of healthcare facilities will grow to meet the demand.
Furthermore, healthcare costs are rising at a faster rate than other expenditures, as you can see in the chart below. Given that commercial properties derive most of their value from their ability to generate rental income, healthcare properties should appreciate faster than other property types as long as this trend continues.
Finally, the healthcare real estate market is about $1 trillion in size, and no REIT has more than a 3% market share. The industry is highly fragmented, meaning there are plenty of opportunities for new investments from existing properties, in addition to the opportunities that will come from future growth of the industry.
HCP is one of the "big three" healthcare REITs, and it owns 1,179 properties in a variety of categories -- mainly senior housing, post-acute care, life science, and medical office buildings. Essentially, the business model is to acquire attractive properties and team up with some of the best operating partners in the business, such as Brookdale Senior Living.
The company pays a notable 7.1% dividend yield and has an even more impressive record of dividend growth. In fact, HCP has increased its dividend for 31 consecutive years and is a member of the S&P 500 Dividend Aristocrats.
HCP's biggest recent news item is the planned spinoff of its HCR ManorCare assets, which include virtually all of the post-acute/skilled-nursing properties in the company's portfolio. You can have an in-depth look at the spinoff, but the general goal is to allow HCP to focus on its private-pay senior housing, life science, and medical office properties, thereby improving portfolio quality and giving the company more financial flexibility to pursue future growth opportunities. The spun-off assets, meanwhile, will be placed in a newly created REIT that will strive to maximize their value.
According to HCP, once this happens, the company can employ several new strategies with these properties, including some that are not possible or practical while the assets are still a part of HCP.
Data source: HCP company presentation.
Medical Properties Trust focuses on hospital properties, which, according to the company, produce better initial yields than other types of healthcare real estate. In fact, the company is the fourth-largest owner of for-profit hospital beds in the country.
Data source: Medical Properties Trust.
The company has 204 properties located in 29 states and four foreign countries, and the long-term plan calls for even further international diversification. This way, if one market faces headwinds (say, the U.S.), it won't represent virtually all of Medical Properties' assets.
The company does have a relatively high debt load for a REIT: debt represents 51.6% of Medical Properties' assets, so there's added risk to consider. However, 98% of the portfolio's leases have annual rent increases built in, and the company's payout ratio is less than two-thirds of funds from operations (FFO) -- lower than that of most peers.
In short, there's no reason to believe Medical Properties Trust will have any debt-related issues going forward, with a growing stream of income that's already more than enough to cover the dividend.
Note: Share prices and guidance are current as of 5/23/16. Normalized or adjusted FFO guidance is used when available.
No stock with double-digit growth potential is without risk, and these two are certainly no exception. In fact, a higher level of perceived risk is responsible for the low valuations. Healthcare spending could slow, operating partners could face greater financial difficulties, or there could be a shortage of attractive acquisition opportunities in the target property types. Any one of these factors could cause these stocks to take a dive.
However, I think the growth potential and the solid track record of delivering profits in a variety of economic climates more than make up for the risks. Either of these healthcare REITs would make a solid addition to a well-diversified dividend growth portfolio. |
Formal statement is: proposition lipschitz_on_closed_Union: assumes "\<And>i. i \<in> I \<Longrightarrow> lipschitz_on M (U i) f" "\<And>i. i \<in> I \<Longrightarrow> closed (U i)" "finite I" "M \<ge> 0" "{u..(v::real)} \<subseteq> (\<Union>i\<in>I. U i)" shows "lipschitz_on M {u..v} f" Informal statement is: If $f$ is $M$-Lipschitz on each of the finitely many closed sets $U_i$, $M \geq 0$, and $[u, v] \subseteq \bigcup_{i \in I} U_i$, then $f$ is $M$-Lipschitz on $[u, v]$. |
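Rendered in conventional mathematical notation, the hypotheses and conclusion of the proposition above read:

```latex
\[
\Bigl(\forall i \in I:\ f \text{ is } M\text{-Lipschitz on } U_i
      \ \text{and}\ U_i \text{ is closed}\Bigr),\quad
I \text{ finite},\quad M \ge 0,\quad
[u,v] \subseteq \bigcup_{i\in I} U_i
\;\Longrightarrow\;
f \text{ is } M\text{-Lipschitz on } [u,v].
\]
```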
/-
Copyright (c) 2022 Mario Carneiro. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro
-/
import Std.Data.RBMap.WF
/-!
# Path operations; `modify` and `alter`
This develops the necessary theorems to construct the `modify` and `alter` functions on `RBSet`
using path operations for in-place modification of an `RBTree`.
-/
namespace Std
namespace RBNode
open RBColor
attribute [simp] Path.fill
/-! ## path balance -/
/-- Asserts that property `p` holds on the root of the tree, if any. -/
def OnRoot (p : α → Prop) : RBNode α → Prop
| nil => True
| node _ _ x _ => p x
/--
Auxiliary definition for `zoom_ins`: set the root of the tree to `v`, creating a node if necessary.
-/
def setRoot (v : α) : RBNode α → RBNode α
| nil => node red nil v nil
| node c a _ b => node c a v b
/--
Auxiliary definition for `zoom_del`: delete the root of the tree, if any, appending its subtrees.
-/
def delRoot : RBNode α → RBNode α
| nil => nil
| node _ a _ b => a.append b
namespace Path
/-- Same as `fill` but taking its arguments in a pair for easier composition with `zoom`. -/
@[inline] def fill' : RBNode α × Path α → RBNode α := fun (t, path) => path.fill t
theorem zoom_fill' (cut : α → Ordering) (t : RBNode α) (path : Path α) :
fill' (zoom cut t path) = path.fill t := by
induction t generalizing path with
| nil => rfl
| node _ _ _ _ iha ihb => unfold zoom; split <;> [apply iha, apply ihb, rfl]
theorem zoom_fill (H : zoom cut t path = (t', path')) : path.fill t = path'.fill t' :=
(H ▸ zoom_fill' cut t path).symm
theorem insertNew_eq_insert (h : zoom (cmp v) t = (nil, path)) :
path.insertNew v = (t.insert cmp v).setBlack :=
insert_setBlack .. ▸ (zoom_ins h).symm
theorem zoom_del {t : RBNode α} :
t.zoom cut path = (t', path') →
path.del (t.del cut) (match t with | node c .. => c | _ => red) =
path'.del t'.delRoot (match t' with | node c .. => c | _ => red) := by
unfold RBNode.del; split <;> simp [zoom]
· intro | rfl, rfl => rfl
· next c a y b =>
split
· have IH := @zoom_del (t := a)
match a with
| nil => intro | rfl => rfl
| node black .. | node red .. => apply IH
· have IH := @zoom_del (t := b)
match b with
| nil => intro | rfl => rfl
| node black .. | node red .. => apply IH
· intro | rfl => rfl
variable (c₀ : RBColor) (n₀ : Nat) in
/--
The balance invariant for a path. `path.Balanced c₀ n₀ c n` means that `path` is a red-black tree
with balance invariant `c₀, n₀`, but it has a "hole" where a tree with balance invariant `c, n`
has been removed. The defining property is `Balanced.fill`: if `path.Balanced c₀ n₀ c n` and you
fill the hole with a tree satisfying `t.Balanced c n`, then `(path.fill t).Balanced c₀ n₀`.
-/
protected inductive Balanced : Path α → RBColor → Nat → Prop where
/-- The root of the tree is `c₀, n₀`-balanced by assumption. -/
| protected root : Path.root.Balanced c₀ n₀
/-- Descend into the left subtree of a red node. -/
| redL : Balanced y black n → parent.Balanced red n →
(Path.left red parent v y).Balanced black n
/-- Descend into the right subtree of a red node. -/
| redR : Balanced x black n → parent.Balanced red n →
(Path.right red x v parent).Balanced black n
/-- Descend into the left subtree of a black node. -/
| blackL : Balanced y c₂ n → parent.Balanced black (n + 1) →
(Path.left black parent v y).Balanced c₁ n
/-- Descend into the right subtree of a black node. -/
| blackR : Balanced x c₁ n → parent.Balanced black (n + 1) →
(Path.right black x v parent).Balanced c₂ n
/--
The defining property of a balanced path: If `path` is a `c₀,n₀` tree with a `c,n` hole,
then filling the hole with a `c,n` tree yields a `c₀,n₀` tree.
-/
protected theorem Balanced.fill {path : Path α} {t} :
path.Balanced c₀ n₀ c n → t.Balanced c n → (path.fill t).Balanced c₀ n₀
| .root, h => h
| .redL hb H, ha | .redR ha H, hb => H.fill (.red ha hb)
| .blackL hb H, ha | .blackR ha H, hb => H.fill (.black ha hb)
protected theorem _root_.Std.RBNode.Balanced.zoom : t.Balanced c n → path.Balanced c₀ n₀ c n →
zoom cut t path = (t', path') → ∃ c n, t'.Balanced c n ∧ path'.Balanced c₀ n₀ c n
| .nil, hp => fun e => by cases e; exact ⟨_, _, .nil, hp⟩
| .red ha hb, hp => by
unfold zoom; split
· exact ha.zoom (.redL hb hp)
· exact hb.zoom (.redR ha hp)
· intro e; cases e; exact ⟨_, _, .red ha hb, hp⟩
| .black ha hb, hp => by
unfold zoom; split
· exact ha.zoom (.blackL hb hp)
· exact hb.zoom (.blackR ha hp)
· intro e; cases e; exact ⟨_, _, .black ha hb, hp⟩
theorem ins_eq_fill {path : Path α} {t : RBNode α} :
path.Balanced c₀ n₀ c n → t.Balanced c n → path.ins t = (path.fill t).setBlack
| .root, h => rfl
| .redL hb H, ha | .redR ha H, hb => by unfold ins; exact ins_eq_fill H (.red ha hb)
| .blackL hb H, ha => by rw [ins, fill, ← ins_eq_fill H (.black ha hb), balance1_eq ha]
| .blackR ha H, hb => by rw [ins, fill, ← ins_eq_fill H (.black ha hb), balance2_eq hb]
protected theorem Balanced.ins {path : Path α}
(hp : path.Balanced c₀ n₀ c n) (ht : t.RedRed (c = red) n) :
∃ n, (path.ins t).Balanced black n := by
induction hp generalizing t with
| root => exact ht.setBlack
| redL hr hp ih => match ht with
| .balanced .nil => exact ih (.balanced (.red .nil hr))
| .balanced (.red ha hb) => exact ih (.redred rfl (.red ha hb) hr)
| .balanced (.black ha hb) => exact ih (.balanced (.red (.black ha hb) hr))
| redR hl hp ih => match ht with
| .balanced .nil => exact ih (.balanced (.red hl .nil))
| .balanced (.red ha hb) => exact ih (.redred rfl hl (.red ha hb))
| .balanced (.black ha hb) => exact ih (.balanced (.red hl (.black ha hb)))
| blackL hr hp ih => exact have ⟨c, h⟩ := ht.balance1 hr; ih (.balanced h)
| blackR hl hp ih => exact have ⟨c, h⟩ := ht.balance2 hl; ih (.balanced h)
protected theorem Balanced.insertNew {path : Path α} (H : path.Balanced c n black 0) :
∃ n, (path.insertNew v).Balanced black n := H.ins (.balanced (.red .nil .nil))
protected theorem Balanced.insert {path : Path α} (hp : path.Balanced c₀ n₀ c n) :
t.Balanced c n → ∃ c n, (path.insert t v).Balanced c n
| .nil => ⟨_, hp.insertNew⟩
| .red ha hb => ⟨_, _, hp.fill (.red ha hb)⟩
| .black ha hb => ⟨_, _, hp.fill (.black ha hb)⟩
theorem zoom_insert {path : Path α} {t : RBNode α} (ht : t.Balanced c n)
(H : zoom (cmp v) t = (t', path)) :
(path.insert t' v).setBlack = (t.insert cmp v).setBlack := by
have ⟨_, _, ht', hp'⟩ := ht.zoom .root H
cases ht' with simp [insert]
| nil => simp [insertNew_eq_insert H, setBlack_idem]
| red hl hr => rw [← ins_eq_fill hp' (.red hl hr), insert_setBlack]; exact (zoom_ins H).symm
| black hl hr => rw [← ins_eq_fill hp' (.black hl hr), insert_setBlack]; exact (zoom_ins H).symm
protected theorem Balanced.del {path : Path α}
(hp : path.Balanced c₀ n₀ c n) (ht : t.DelProp c' n) (hc : c = black → c' ≠ red) :
∃ n, (path.del t c').Balanced black n := by
induction hp generalizing t c' with
| root => match c', ht with
| red, ⟨_, h⟩ | black, ⟨_, _, h⟩ => exact h.setBlack
| @redL _ n _ _ hb hp ih => match c', n, ht with
| red, _, _ => cases hc rfl rfl
| black, _, ⟨_, rfl, ha⟩ => exact ih ((hb.balLeft ha).of_false (fun.)) (fun.)
| @redR _ n _ _ ha hp ih => match c', n, ht with
| red, _, _ => cases hc rfl rfl
| black, _, ⟨_, rfl, hb⟩ => exact ih ((ha.balRight hb).of_false (fun.)) (fun.)
| @blackL _ _ n _ _ _ hb hp ih => match c', n, ht with
| red, _, ⟨_, ha⟩ => exact ih ⟨_, rfl, .redred ⟨⟩ ha hb⟩ (fun.)
| black, _, ⟨_, rfl, ha⟩ => exact ih ⟨_, rfl, (hb.balLeft ha).imp fun _ => ⟨⟩⟩ (fun.)
| @blackR _ _ n _ _ _ ha hp ih => match c', n, ht with
| red, _, ⟨_, hb⟩ => exact ih ⟨_, rfl, .redred ⟨⟩ ha hb⟩ (fun.)
| black, _, ⟨_, rfl, hb⟩ => exact ih ⟨_, rfl, (ha.balRight hb).imp fun _ => ⟨⟩⟩ (fun.)
/-- Asserts that `p` holds on all elements to the left of the hole. -/
def AllL (p : α → Prop) : Path α → Prop
| .root => True
| .left _ parent _ _ => parent.AllL p
| .right _ a x parent => a.All p ∧ p x ∧ parent.AllL p
/-- Asserts that `p` holds on all elements to the right of the hole. -/
def AllR (p : α → Prop) : Path α → Prop
| .root => True
| .left _ parent x b => parent.AllR p ∧ p x ∧ b.All p
| .right _ _ _ parent => parent.AllR p
/--
The property of a path returned by `t.zoom cut`. Each of the parents visited along the path have
the appropriate ordering relation to the cut.
-/
def Zoomed (cut : α → Ordering) : Path α → Prop
| .root => True
| .left _ parent x _ => cut x = .lt ∧ parent.Zoomed cut
| .right _ _ x parent => cut x = .gt ∧ parent.Zoomed cut
theorem zoom_zoomed₁ (e : zoom cut t path = (t', path')) : t'.OnRoot (cut · = .eq) :=
match t, e with
| nil, rfl => trivial
| node .., e => by
revert e; unfold zoom; split
· exact zoom_zoomed₁
· exact zoom_zoomed₁
· next H => intro e; cases e; exact H
theorem zoom_zoomed₂ (e : zoom cut t path = (t', path'))
(hp : path.Zoomed cut) : path'.Zoomed cut :=
match t, e with
| nil, rfl => hp
| node .., e => by
revert e; unfold zoom; split
· next h => exact fun e => zoom_zoomed₂ e ⟨h, hp⟩
· next h => exact fun e => zoom_zoomed₂ e ⟨h, hp⟩
· intro e; cases e; exact hp
/--
`path.RootOrdered cmp v` is true if `v` would be able to fit into the hole
without violating the ordering invariant.
-/
def RootOrdered (cmp : α → α → Ordering) : Path α → α → Prop
| .root, _ => True
| .left _ parent x _, v => cmpLT cmp v x ∧ parent.RootOrdered cmp v
| .right _ _ x parent, v => cmpLT cmp x v ∧ parent.RootOrdered cmp v
theorem _root_.Std.RBNode.cmpEq.RootOrdered_congr {cmp : α → α → Ordering} (h : cmpEq cmp a b) :
∀ {t : Path α}, t.RootOrdered cmp a ↔ t.RootOrdered cmp b
| .root => .rfl
| .left .. => and_congr h.lt_congr_left h.RootOrdered_congr
| .right .. => and_congr h.lt_congr_right h.RootOrdered_congr
theorem Zoomed.toRootOrdered {cmp} :
∀ {path : Path α}, path.Zoomed (cmp v) → path.RootOrdered cmp v
| .root, h => h
| .left .., ⟨h, hp⟩ => ⟨⟨h⟩, hp.toRootOrdered⟩
| .right .., ⟨h, hp⟩ => ⟨⟨OrientedCmp.cmp_eq_gt.1 h⟩, hp.toRootOrdered⟩
/-- The ordering invariant for a `Path`. -/
def Ordered (cmp : α → α → Ordering) : Path α → Prop
| .root => True
| .left _ parent x b => parent.Ordered cmp ∧
b.All (cmpLT cmp x ·) ∧ parent.RootOrdered cmp x ∧
b.All (parent.RootOrdered cmp) ∧ b.Ordered cmp
| .right _ a x parent => parent.Ordered cmp ∧
a.All (cmpLT cmp · x) ∧ parent.RootOrdered cmp x ∧
a.All (parent.RootOrdered cmp) ∧ a.Ordered cmp
protected theorem Ordered.fill : ∀ {path : Path α} {t},
(path.fill t).Ordered cmp ↔ path.Ordered cmp ∧ t.Ordered cmp ∧ t.All (path.RootOrdered cmp)
| .root, _ => ⟨fun H => ⟨⟨⟩, H, .trivial ⟨⟩⟩, (·.2.1)⟩
| .left .., _ => by
simp [Ordered.fill, RBNode.Ordered, Ordered, RootOrdered, All_and]
exact ⟨
fun ⟨hp, ⟨ax, xb, ha, hb⟩, ⟨xp, ap, bp⟩⟩ => ⟨⟨hp, xb, xp, bp, hb⟩, ha, ⟨ax, ap⟩⟩,
fun ⟨⟨hp, xb, xp, bp, hb⟩, ha, ⟨ax, ap⟩⟩ => ⟨hp, ⟨ax, xb, ha, hb⟩, ⟨xp, ap, bp⟩⟩⟩
| .right .., _ => by
simp [Ordered.fill, RBNode.Ordered, Ordered, RootOrdered, All_and]
exact ⟨
fun ⟨hp, ⟨ax, xb, ha, hb⟩, ⟨xp, ap, bp⟩⟩ => ⟨⟨hp, ax, xp, ap, ha⟩, hb, ⟨xb, bp⟩⟩,
fun ⟨⟨hp, ax, xp, ap, ha⟩, hb, ⟨xb, bp⟩⟩ => ⟨hp, ⟨ax, xb, ha, hb⟩, ⟨xp, ap, bp⟩⟩⟩
theorem _root_.Std.RBNode.Ordered.zoom' {t : RBNode α} {path : Path α}
(ht : t.Ordered cmp) (hp : path.Ordered cmp) (tp : t.All (path.RootOrdered cmp))
(pz : path.Zoomed cut) (eq : t.zoom cut path = (t', path')) :
t'.Ordered cmp ∧ path'.Ordered cmp ∧ t'.All (path'.RootOrdered cmp) ∧ path'.Zoomed cut :=
have ⟨hp', ht', tp'⟩ := Ordered.fill.1 <| zoom_fill eq ▸ Ordered.fill.2 ⟨hp, ht, tp⟩
⟨ht', hp', tp', zoom_zoomed₂ eq pz⟩
theorem _root_.Std.RBNode.Ordered.zoom {t : RBNode α}
(ht : t.Ordered cmp) (eq : t.zoom cut = (t', path')) :
t'.Ordered cmp ∧ path'.Ordered cmp ∧ t'.All (path'.RootOrdered cmp) ∧ path'.Zoomed cut :=
ht.zoom' (path := .root) ⟨⟩ (.trivial ⟨⟩) ⟨⟩ eq
theorem Ordered.ins : ∀ {path : Path α} {t : RBNode α},
t.Ordered cmp → path.Ordered cmp → t.All (path.RootOrdered cmp) → (path.ins t).Ordered cmp
| .root, t, ht, _, _ => Ordered.setBlack.2 ht
| .left red parent x b, a, ha, ⟨hp, xb, xp, bp, hb⟩, H => by
unfold ins; have ⟨ax, ap⟩ := All_and.1 H; exact hp.ins ⟨ax, xb, ha, hb⟩ ⟨xp, ap, bp⟩
| .right red a x parent, b, hb, ⟨hp, ax, xp, ap, ha⟩, H => by
unfold ins; have ⟨xb, bp⟩ := All_and.1 H; exact hp.ins ⟨ax, xb, ha, hb⟩ ⟨xp, ap, bp⟩
| .left black parent x b, a, ha, ⟨hp, xb, xp, bp, hb⟩, H => by
unfold ins; have ⟨ax, ap⟩ := All_and.1 H
exact hp.ins (ha.balance1 ax xb hb) (balance1_All.2 ⟨xp, ap, bp⟩)
| .right black a x parent, b, hb, ⟨hp, ax, xp, ap, ha⟩, H => by
unfold ins; have ⟨xb, bp⟩ := All_and.1 H
exact hp.ins (ha.balance2 ax xb hb) (balance2_All.2 ⟨xp, ap, bp⟩)
theorem Ordered.insertNew {path : Path α} (hp : path.Ordered cmp) (vp : path.RootOrdered cmp v) :
(path.insertNew v).Ordered cmp :=
hp.ins ⟨⟨⟩, ⟨⟩, ⟨⟩, ⟨⟩⟩ ⟨vp, ⟨⟩, ⟨⟩⟩
theorem Ordered.insert : ∀ {path : Path α} {t : RBNode α},
path.Ordered cmp → t.Ordered cmp → t.All (path.RootOrdered cmp) → path.RootOrdered cmp v →
t.OnRoot (cmpEq cmp v) → (path.insert t v).Ordered cmp
| _, nil, hp, _, _, vp, _ => hp.insertNew vp
| _, node .., hp, ⟨ax, xb, ha, hb⟩, ⟨_, ap, bp⟩, vp, xv => Ordered.fill.2
⟨hp, ⟨ax.imp xv.lt_congr_right.2, xb.imp xv.lt_congr_left.2, ha, hb⟩, vp, ap, bp⟩
theorem Ordered.del : ∀ {path : Path α} {t : RBNode α} {c},
t.Ordered cmp → path.Ordered cmp → t.All (path.RootOrdered cmp) → (path.del t c).Ordered cmp
| .root, t, _, ht, _, _ => Ordered.setBlack.2 ht
| .left _ parent x b, a, red, ha, ⟨hp, xb, xp, bp, hb⟩, H => by
unfold del; have ⟨ax, ap⟩ := All_and.1 H; exact hp.del ⟨ax, xb, ha, hb⟩ ⟨xp, ap, bp⟩
| .right _ a x parent, b, red, hb, ⟨hp, ax, xp, ap, ha⟩, H => by
unfold del; have ⟨xb, bp⟩ := All_and.1 H; exact hp.del ⟨ax, xb, ha, hb⟩ ⟨xp, ap, bp⟩
| .left _ parent x b, a, black, ha, ⟨hp, xb, xp, bp, hb⟩, H => by
unfold del; have ⟨ax, ap⟩ := All_and.1 H
exact hp.del (ha.balLeft ax xb hb) (ap.balLeft xp bp)
| .right _ a x parent, b, black, hb, ⟨hp, ax, xp, ap, ha⟩, H => by
unfold del; have ⟨xb, bp⟩ := All_and.1 H
exact hp.del (ha.balRight ax xb hb) (ap.balRight xp bp)
theorem Ordered.erase : ∀ {path : Path α} {t : RBNode α},
path.Ordered cmp → t.Ordered cmp → t.All (path.RootOrdered cmp) → (path.erase t).Ordered cmp
| _, nil, hp, ht, tp => Ordered.fill.2 ⟨hp, ht, tp⟩
| _, node .., hp, ⟨ax, xb, ha, hb⟩, ⟨_, ap, bp⟩ => hp.del (ha.append ax xb hb) (ap.append bp)
end Path
/-! ## alter -/
/-- The `alter` function preserves the ordering invariants. -/
protected theorem Ordered.alter {t : RBNode α}
(H : ∀ {x t' p}, t.zoom cut = (t', p) → f t'.root? = some x →
p.RootOrdered cmp x ∧ t'.OnRoot (cmpEq cmp x))
(h : t.Ordered cmp) : (alter cut f t).Ordered cmp := by
simp [alter]; split
· next path eq =>
have ⟨_, hp, _, _⟩ := h.zoom eq; split
· exact h
· next hf => exact hp.insertNew (H eq hf).1
· next path eq =>
have ⟨⟨ax, xb, ha, hb⟩, hp, ⟨_, ap, bp⟩, _⟩ := h.zoom eq; split
· exact hp.del (ha.append ax xb hb) (ap.append bp)
· next hf =>
have ⟨yp, xy⟩ := H eq hf
apply Path.Ordered.fill.2
exact ⟨hp, ⟨ax.imp xy.lt_congr_right.2, xb.imp xy.lt_congr_left.2, ha, hb⟩, yp, ap, bp⟩
/-- The `alter` function preserves the balance invariants. -/
protected theorem Balanced.alter {t : RBNode α}
(h : t.Balanced c n) : ∃ c n, (t.alter cut f).Balanced c n := by
simp [alter]; split
· next path eq =>
split
· exact ⟨_, _, h⟩
· have ⟨_, _, .nil, h⟩ := h.zoom .root eq
exact ⟨_, h.insertNew⟩
· next path eq =>
have ⟨_, _, h, hp⟩ := h.zoom .root eq
split
· match h with
| .red ha hb => exact ⟨_, hp.del ((ha.append hb).of_false (· rfl rfl)) (fun.)⟩
| .black ha hb => exact ⟨_, hp.del ⟨_, rfl, (ha.append hb).imp fun _ => ⟨⟩⟩ (fun.)⟩
· match h with
| .red ha hb => exact ⟨_, _, hp.fill (.red ha hb)⟩
| .black ha hb => exact ⟨_, _, hp.fill (.black ha hb)⟩
theorem modify_eq_alter (t : RBNode α) : t.modify cut f = t.alter cut (.map f) := by
simp [modify, alter]; split <;> simp [Option.map]
/-- The `modify` function preserves the ordering invariants. -/
protected theorem Ordered.modify {t : RBNode α}
(H : (t.zoom cut).1.OnRoot fun x => cmpEq cmp (f x) x)
(h : t.Ordered cmp) : (modify cut f t).Ordered cmp :=
modify_eq_alter _ ▸ h.alter @fun
| _, .node .., _, eq, rfl => by
rw [eq] at H; exact ⟨H.RootOrdered_congr.2 (h.zoom eq).2.2.1.1, H⟩
/-- The `modify` function preserves the balance invariants. -/
protected theorem Balanced.modify {t : RBNode α}
(h : t.Balanced c n) : ∃ c n, (t.modify cut f).Balanced c n := modify_eq_alter _ ▸ h.alter
theorem WF.alter {t : RBNode α}
(H : ∀ {x t' p}, t.zoom cut = (t', p) → f t'.root? = some x →
p.RootOrdered cmp x ∧ t'.OnRoot (cmpEq cmp x))
(h : WF cmp t) : WF cmp (alter cut f t) :=
let ⟨h₁, _, _, h₂⟩ := h.out; WF_iff.2 ⟨h₁.alter H, h₂.alter⟩
theorem WF.modify {t : RBNode α}
(H : (t.zoom cut).1.OnRoot fun x => cmpEq cmp (f x) x)
(h : WF cmp t) : WF cmp (t.modify cut f) :=
let ⟨h₁, _, _, h₂⟩ := h.out; WF_iff.2 ⟨h₁.modify H, h₂.modify⟩
theorem find?_eq_zoom : ∀ {t : RBNode α} (p := .root), t.find? cut = (t.zoom cut p).1.root?
| .nil, _ => rfl
| .node .., _ => by unfold find? zoom; split <;> [apply find?_eq_zoom, apply find?_eq_zoom, rfl]
end RBNode
namespace RBSet
open RBNode
/--
A sufficient condition for `ModifyWF` is that the new element compares equal to the original.
-/
theorem ModifyWF.of_eq {t : RBSet α cmp}
(H : ∀ {x}, RBNode.find? cut t.val = some x → cmpEq cmp (f x) x) : ModifyWF t cut f := by
refine ⟨.modify ?_ t.2⟩
revert H; rw [find?_eq_zoom]
(cases (t.1.zoom cut).1 <;> intro H) <;> [trivial, exact H rfl]
end RBSet
namespace RBMap
/--
`O(log n)`. In-place replace the value corresponding to key `k`.
This takes the element out of the tree while `f` runs,
so it uses the element linearly if `t` is unshared.
-/
def modify (t : RBMap α β cmp) (k : α) (f : β → β) : RBMap α β cmp :=
@RBSet.modifyP _ _ t (cmp k ·.1) (fun (a, b) => (a, f b))
(.of_eq fun _ => ⟨OrientedCmp.cmp_refl (cmp := byKey Prod.fst cmp)⟩)
/-- Auxiliary definition for `alter`. -/
def alter.adapt (k : α) (f : Option β → Option β) : Option (α × β) → Option (α × β)
| none =>
match f none with
| none => none
| some v => some (k, v)
| some (k', v') =>
match f (some v') with
| none => none
| some v => some (k', v)
/--
`O(log n)`. `alterP cut f t` simultaneously handles inserting, erasing and replacing an element
using a function `f : Option α → Option α`. It is passed the result of `t.findP? cut`
and can either return `none` to remove the element or `some a` to replace/insert
the element with `a` (which must have the same ordering properties as the original element).
The element is used linearly if `t` is unshared.
The `AlterWF` assumption is required because `f` may change
the ordering properties of the element, which would break the invariants.
-/
@[specialize] def alter
(t : RBMap α β cmp) (k : α) (f : Option β → Option β) : RBMap α β cmp := by
refine @RBSet.alterP _ _ t (cmp k ·.1) (alter.adapt k f) ⟨.alter (@fun _ t' p eq => ?_) t.2⟩
cases t' <;> simp [alter.adapt, RBNode.root?] <;> split <;> intro h <;> cases h
· exact ⟨(t.2.out.1.zoom eq).2.2.2.toRootOrdered, ⟨⟩⟩
· refine ⟨(?a).RootOrdered_congr.2 (t.2.out.1.zoom eq).2.2.1.1, ?a⟩
exact ⟨OrientedCmp.cmp_refl (cmp := byKey Prod.fst cmp)⟩
end RBMap
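As a quick illustration of the `modify` and `alter` operations whose invariants this file verifies, here is a hedged usage sketch; the `empty`, `insert`, and `toList` calls are assumptions about the surrounding `Std.RBMap` API rather than definitions from this file:

```lean
open Std in
def counts : RBMap String Nat compare :=
  RBMap.empty |>.insert "a" 1 |>.insert "b" 2

-- `modify` replaces the value at a key in place (a no-op if the key is absent):
#eval (counts.modify "a" (· + 10)).toList

-- `alter` can insert, replace, or erase depending on what the function returns:
#eval (counts.alter "c" (fun _ => some 0)).toList  -- inserts a ("c", 0) entry
#eval (counts.alter "b" (fun _ => none)).toList    -- erases the "b" entry
```

Both operations go through the `zoom`/`Path` machinery above, so when the tree is unshared the node is updated in place rather than rebuilt.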
|
# Set the custom plot theme
function set_my_glmakie_theme()
GLMakie.activate!()
set_theme!(
#Font
fontsize = 15,
font = "Helvetica",
#Axis control
Axis = (
xgridcolor = "#eee4da",
ygridcolor = "#eee4da",
backgroundcolor = :white,
xgridstyle=:dash,
ygridstyle=:dash,
xgridwidth=1,
ygridwidth=1,
xtickalign=1,
ytickalign=1,
leftspinevisible = true,
bottomspinevisible = true,
rightspinevisible = true,
topspinevisible = true,
xgridvisible = true,
ygridvisible = true,
spinewidth = 1,
xtickwidth = 1,
ytickwidth = 1,
)
)
end |
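The theme function above can be exercised with a short sketch. This is a hedged example assuming GLMakie is installed; `Figure`, `Axis`, `lines!`, and `save` are standard Makie calls, not part of the snippet above:

```julia
using GLMakie

set_my_glmakie_theme()  # activates the GL backend and applies the theme

xs = 0:0.01:2π
fig = Figure()
ax = Axis(fig[1, 1], xlabel = "x", ylabel = "sin(x)")
lines!(ax, xs, sin.(xs))      # drawn on the dashed, themed grid
save("themed_plot.png", fig)  # rendered with the themed fonts and spines
```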
State Before: n : ℕ
⊢ cos (↑n * (2 * π) + π) = -1 State After: no goals Tactic: simpa only [cos_zero] using (cos_periodic.nat_mul n).add_antiperiod_eq cos_antiperiodic |
[STATEMENT]
lemma ball_trivial [simp]: "ball x 0 = {}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ball x 0 = {}
[PROOF STEP]
by (simp add: ball_def) |
library(ggplot2)
library(spatstat)
library(maptools)
library(plyr)     # for aaply(), used in the frame loop below
library(reshape2) # for melt(), used on the density image below (older setups: library(reshape))
source('plot_utils.r')
# set frames_dir to a place where you will store 2K+ pngs
frames_dir = "/Users/mike/Data/frames/"
# the file 'afg.data' is generated by load_data.R
load("afg.data") # afg.data
afg = afg.data
afg$data$Type = factor(afg$data$Type)
# let's only look at one type of event
l = "Explosive Hazard"
afg$data = afg$data[afg$data$Type==l,]
# we need to set this as the polygon files we've got have some overlaps
spatstat.options(checkpolygons = FALSE)
# build the window that creates the outline of Afghanistan
win = as(afg$outline, "owin")
# we need to "fortify" the polygons to plot them with ggplot
fortified.roads = fortify.SpatialLinesDataFrame(afg$ringroad)
fortified.outline = fortify.SpatialPolygons(afg$outline)
fortified.admin = fortify.SpatialPolygons(afg$admin)
# extract the settlement data
sett = data.frame(
x=afg$sett$LON,
y=afg$sett$LAT,
name=as.character(afg$sett$NAME),
hjust=afg$sett$hjust,
vjust=afg$sett$vjust,
stringsAsFactors=FALSE
)
# extract time in a usable format
t = unclass(as.POSIXct.Date(afg$data$DateOccurred))
# build some useful time amounts
unix_start = as.Date("1970-01-01")
day_duration = 60 * 60 * 24 # seconds
now = t[1] # seconds since unix_start
num_days = round((t[length(t)] - now) / day_duration) # how many actual days we have
one_month = day_duration * 31 # this is our time window
days = seq(t[1],t[length(t)],day_duration) # this is the time stamp of each day
# upper and lower limits
cmax = 10
cmin = 0
do_plot <- function(now){
day = (now - t[1])/day_duration
# figure out which points we want to smooth over for this day
time.flags = (t > (now - one_month)) & (t < now)
#today.flags = (t > now-(2*day_duration)) & (t < now+(2*day_duration))
# increment time
now = now+day_duration
# extract the relevant points
locns = data.frame(
long = afg$data$Longitude[time.flags],
lat = afg$data$Latitude[time.flags],
label = afg$data$Type[time.flags]
)
#locns.today = data.frame(
# long = afg$data$Longitude[today.flags],
# lat = afg$data$Latitude[today.flags],
# label = afg$data$Type[today.flags]
#)
# build a ppp object for the density estimation
afg.points = as.ppp(ppp(
locns$long[!is.na(locns$long)],
locns$lat[!is.na(locns$lat)],
window = win
))
# calculate kernel smoothing
d = density(afg.points,0.2)
img = t(as.matrix(d)) # note transposition
# build grid points
df = expand.grid(x=d$xcol, y=d$yrow)
# pull out the image values
df$z = melt(img)$value
# threshold
df$z[df$z > cmax] = cmax
df$z[df$z < cmin] = cmin
# build up plot
p = ggplot(df,aes(x=x,y=y))
# the tiling creates the heatmap
p = p + geom_tile(aes(fill=z))
# control the colour
p = p + scale_fill_gradient(
"Intensity",
low="white",
high="cornflowerblue",
limits=c(cmin, cmax),
legend=FALSE # the legend is a bit hard to interpret
)
# this adds outlines, roads etc and is controlled in plot_utils.r
p <- add_afghanistan(
p,
fortified.outline,
fortified.admin,
fortified.roads,
sett
)
# add points, coloured by type
#p = p + geom_point(
# data=locns.today,
# aes(x=long,y=lat,colour=label)
#)
# add axis labels
p = p + ylab("Latitude") + xlab("Longitude")
# add title
p = p + opts(title=paste("Intensity of",l,"Events"))
# add the month and year
now.posix = as.POSIXct(now,origin=unix_start)
df.date = data.frame(x=70,y=30,t=format(now.posix,"%B %Y"))
p = p + geom_text(data = df.date, aes(x=x, y=y, label=t), hjust=0,legend=FALSE)
# save the plot
ggsave(
filename=paste(frames_dir,'afghanistan_',day,'.png',sep=""),
plot=p,
width = 14,
height= 8,
dpi=72
)
}
cat('generating frames')
aaply(days,1,do_plot,.progress='text')
# run this command to join the files (replacing the path as necessary)
# ffmpeg -f image2 -r 20 -i ~/Data/frames/afghanistan_%d.png -b 600k afghanistan.mp4 |
From stdpp Require Export namespaces.
From iris.proofmode Require Import tactics.
From iris.algebra Require Import gmap.
From Perennial.base_logic.lib Require Export fancy_updates crash_token.
From Perennial.base_logic.lib Require Import wsat.
From iris.prelude Require Import options.
Import uPred.
Definition ncfupd_def `{!invGS Σ, !crashGS Σ} (E1 E2 : coPset) (P: iProp Σ) : iProp Σ :=
∀ q, NC q -∗ |={E1, E2}=> P ∗ NC q.
Definition ncfupd_aux `{!invGS Σ, !crashGS Σ} : seal (ncfupd_def). Proof. by eexists. Qed.
Definition ncfupd `{!invGS Σ, !crashGS Σ} := ncfupd_aux.(unseal).
Definition ncfupd_eq `{!invGS Σ, !crashGS Σ} : ncfupd = ncfupd_def := ncfupd_aux.(seal_eq).
Arguments ncfupd {Σ I C} : rename.
Notation "|NC={ E1 }=> Q" := (ncfupd E1 E1 Q)
(at level 99, E1 at level 50, Q at level 200,
format "'[ ' |NC={ E1 }=> '/' Q ']'") : bi_scope.
Notation "|@NC={ C , E1 }=> Q" := (ncfupd (C:=C) E1 E1 Q)
(at level 99, C, E1 at level 50, Q at level 200, only parsing) : bi_scope.
Notation "|NC={ E1 , E2 }=> P" := (ncfupd E1 E2 P)
(at level 99, E1, E2 at level 50, P at level 200,
format "'[ ' |NC={ E1 , E2 }=> '/' P ']'") : bi_scope.
Notation "|@NC={ C , E1 , E2 }=> P" := (ncfupd (C:=C) E1 E2 P)
(at level 99, C, E1, E2 at level 50, P at level 200, only parsing) : bi_scope.
Notation "|NC={ Eo } [ Ei ]▷=> Q" := (∀ q, NC q -∗ |={Eo,Ei}=> ▷ |={Ei,Eo}=> Q ∗ NC q)%I
(at level 99, Eo, Ei at level 50, Q at level 200,
format "'[ ' |NC={ Eo } [ Ei ]▷=> '/' Q ']'") : bi_scope.
Notation "|NC={ E1 } [ E2 ]▷=>^ n Q" := (Nat.iter n (λ P, |NC={E1}[E2]▷=> P) Q)%I
(at level 99, E1, E2 at level 50, n at level 9, Q at level 200,
format "'[ ' |NC={ E1 } [ E2 ]▷=>^ n '/' Q ']'").
Section ncfupd.
Context `{!invGS Σ, !crashGS Σ}.
Implicit Types P: iProp Σ.
Implicit Types E : coPset.
Global Instance ncfupd_ne E1 E2 : NonExpansive (ncfupd E1 E2).
Proof. rewrite ncfupd_eq. solve_proper. Qed.
Lemma ncfupd_intro_mask E1 E2 P : E2 ⊆ E1 → P ⊢ |NC={E1,E2}=> |NC={E2,E1}=> P.
Proof.
rewrite ncfupd_eq /ncfupd_def.
iIntros (?) "HP". iIntros (q) "HNC".
iMod fupd_mask_subseteq as "Hclo"; try eassumption. iModIntro.
iFrame "HNC". iIntros.
iMod "Hclo". by iFrame.
Qed.
Lemma except_0_ncfupd E1 E2 P : ◇ (|NC={E1,E2}=> P) ⊢ |NC={E1,E2}=> P.
Proof. rewrite ncfupd_eq. iIntros "H". iIntros (q) "HNC". iMod "H". by iApply "H". Qed.
Lemma ncfupd_mono E1 E2 P Q : (P ⊢ Q) → (|NC={E1,E2}=> P) ⊢ |NC={E1,E2}=> Q.
Proof.
rewrite ncfupd_eq.
iIntros (HPQ) "H". iIntros (q) "HNC". iMod ("H" with "[$]") as "(?&$)".
iModIntro. by iApply HPQ.
Qed.
Lemma fupd_ncfupd E1 E2 P : (|={E1,E2}=> P) ⊢ |NC={E1,E2}=> P.
Proof.
rewrite ?ncfupd_eq. iIntros "H". iIntros (q) "HNC". iMod "H" as "$". eauto.
Qed.
Lemma ncfupd_trans E1 E2 E3 P : (|NC={E1,E2}=> |NC={E2,E3}=> P) ⊢ |NC={E1,E3}=> P.
Proof.
rewrite ?ncfupd_eq. iIntros "H". iIntros (q) "HNC".
iMod ("H" with "[$]") as "(H&HNC)".
iMod ("H" with "[$]") as "($&$)".
eauto.
Qed.
Lemma ncfupd_mask_frame_r' E1 E2 Ef P:
E1 ## Ef → (|NC={E1,E2}=> ⌜E2 ## Ef⌝ → P) ⊢ |NC={E1 ∪ Ef,E2 ∪ Ef}=> P.
Proof.
rewrite ?ncfupd_eq. iIntros (?) "H". iIntros (q) "HNC".
iSpecialize ("H" with "[$]"). iApply (fupd_mask_frame_r'); eauto.
iMod "H" as "(H&?)". iIntros "!> ?". iFrame. by iApply "H".
Qed.
Lemma ncfupd_frame_r E1 E2 P R:
(|NC={E1,E2}=> P) ∗ R ⊢ |NC={E1,E2}=> P ∗ R.
Proof.
rewrite ncfupd_eq.
iIntros "(H&$)". iIntros (q) "HNC". by iMod ("H" with "[$]").
Qed.
Global Instance ncfupd_proper E1 E2 :
Proper ((≡) ==> (≡)) (ncfupd E1 E2) := ne_proper _.
Global Instance ncfupd_mono' E1 E2 : Proper ((⊢) ==> (⊢)) (ncfupd E1 E2).
Proof. intros P Q; apply ncfupd_mono. Qed.
Global Instance ncfupd_flip_mono' E1 E2 :
Proper (flip (⊢) ==> flip (⊢)) (ncfupd E1 E2).
Proof. intros P Q; apply ncfupd_mono. Qed.
Lemma bupd_ncfupd E P:
(|==> P) ⊢ |NC={E}=> P.
Proof.
rewrite ncfupd_eq.
iIntros "H". iIntros (q) "HNC".
iMod "H". by iFrame.
Qed.
Lemma ncfupd_intro E P : P ⊢ |NC={E}=> P.
Proof. by rewrite {1}(ncfupd_intro_mask E E P) // ncfupd_trans. Qed.
(** [iMod (ncfupd_mask_subseteq E)] is the recommended way to change your
current mask to [E]. *)
Lemma ncfupd_mask_subseteq {E1} E2 : E2 ⊆ E1 → ⊢@{iPropI Σ} |NC={E1,E2}=> |NC={E2,E1}=> emp.
Proof. exact: ncfupd_intro_mask. Qed.
Lemma ncfupd_except_0 E1 E2 P : (|NC={E1,E2}=> ◇ P) ⊢ |NC={E1,E2}=> P.
Proof. by rewrite {1}(ncfupd_intro E2 P) except_0_ncfupd ncfupd_trans. Qed.
Global Instance from_assumption_ncfupd E p P Q :
FromAssumption p P (|==> Q) → KnownRFromAssumption p P (|NC={E}=> Q).
Proof. rewrite /KnownRFromAssumption /FromAssumption=>->. apply bupd_ncfupd. Qed.
Global Instance from_pure_ncfupd a E P φ :
FromPure a P φ → FromPure a (|NC={E}=> P) φ.
Proof. rewrite /FromPure. intros <-. apply ncfupd_intro. Qed.
Lemma ncfupd_frame_l E1 E2 R Q : (R ∗ |NC={E1,E2}=> Q) ⊢ |NC={E1,E2}=> R ∗ Q.
Proof. rewrite !(comm _ R); apply ncfupd_frame_r. Qed.
Lemma ncfupd_wand_l E1 E2 P Q : (P -∗ Q) ∗ (|NC={E1,E2}=> P) ⊢ |NC={E1,E2}=> Q.
Proof. by rewrite ncfupd_frame_l wand_elim_l. Qed.
Lemma ncfupd_wand_r E1 E2 P Q : (|NC={E1,E2}=> P) ∗ (P -∗ Q) ⊢ |NC={E1,E2}=> Q.
Proof. by rewrite ncfupd_frame_r wand_elim_r. Qed.
Lemma ncfupd_mask_weaken E1 E2 P : E2 ⊆ E1 → P ⊢ |NC={E1,E2}=> P.
Proof.
intros ?. rewrite -{1}(right_id emp%I bi_sep P%I).
rewrite (ncfupd_intro_mask E1 E2 emp%I) //.
by rewrite ncfupd_frame_l sep_elim_l.
Qed.
Lemma ncfupd_mask_frame_r E1 E2 Ef P :
E1 ## Ef → (|NC={E1,E2}=> P) ⊢ |NC={E1 ∪ Ef,E2 ∪ Ef}=> P.
Proof.
intros ?. rewrite -ncfupd_mask_frame_r' //. f_equiv.
apply impl_intro_l, and_elim_r.
Qed.
Lemma ncfupd_mask_mono E1 E2 P : E1 ⊆ E2 → (|NC={E1}=> P) ⊢ |NC={E2}=> P.
Proof.
intros (Ef&->&?)%subseteq_disjoint_union_L. by apply ncfupd_mask_frame_r.
Qed.
Lemma ncfupd_sep E P Q : (|NC={E}=> P) ∗ (|NC={E}=> Q) ⊢ |NC={E}=> P ∗ Q.
Proof. by rewrite ncfupd_frame_r ncfupd_frame_l ncfupd_trans. Qed.
Lemma ncfupd_mask_frame E E' E1 E2 P :
E1 ⊆ E →
(|NC={E1,E2}=> |NC={E2 ∪ (E ∖ E1),E'}=> P) -∗ (|NC={E,E'}=> P).
Proof.
intros ?. rewrite (ncfupd_mask_frame_r _ _ (E ∖ E1)); last set_solver.
rewrite ncfupd_trans.
by replace (E1 ∪ E ∖ E1) with E by (by apply union_difference_L).
Qed.
Global Instance into_wand_ncfupd E p q R P Q :
IntoWand false false R P Q →
IntoWand p q (|NC={E}=> R) (|NC={E}=> P) (|NC={E}=> Q).
Proof.
rewrite /IntoWand /= => HR. rewrite !intuitionistically_if_elim HR.
apply wand_intro_l. by rewrite ncfupd_sep wand_elim_r.
Qed.
Global Instance into_wand_ncfupd_persistent E1 E2 p q R P Q :
IntoWand false q R P Q → IntoWand p q (|NC={E1,E2}=> R) P (|NC={E1,E2}=> Q).
Proof.
rewrite /IntoWand /= => HR. rewrite intuitionistically_if_elim HR.
apply wand_intro_l. by rewrite ncfupd_frame_l wand_elim_r.
Qed.
Global Instance into_wand_ncfupd_args E1 E2 p q R P Q :
IntoWand p false R P Q → IntoWand' p q R (|NC={E1,E2}=> P) (|NC={E1,E2}=> Q).
Proof.
rewrite /IntoWand' /IntoWand /= => ->.
apply wand_intro_l. by rewrite intuitionistically_if_elim ncfupd_wand_r.
Qed.
Global Instance from_sep_ncfupd E P Q1 Q2 :
FromSep P Q1 Q2 → FromSep (|NC={E}=> P) (|NC={E}=> Q1) (|NC={E}=> Q2).
Proof. rewrite /FromSep =><-. apply ncfupd_sep. Qed.
Global Instance from_or_ncfupd E1 E2 P Q1 Q2 :
FromOr P Q1 Q2 → FromOr (|NC={E1,E2}=> P) (|NC={E1,E2}=> Q1) (|NC={E1,E2}=> Q2).
Proof.
rewrite /FromOr=><-. apply or_elim; apply ncfupd_mono;
[apply bi.or_intro_l|apply bi.or_intro_r].
Qed.
Global Instance from_exist_ncfupd {A} E1 E2 P (Φ : A → iProp Σ) :
FromExist P Φ → FromExist (|NC={E1,E2}=> P) (λ a, |NC={E1,E2}=> Φ a)%I.
Proof.
rewrite /FromExist=><-. apply exist_elim=> a. by rewrite -(exist_intro a).
Qed.
Lemma ncfupd_elim E1 E2 E3 P Q :
(Q -∗ (|NC={E2,E3}=> P)) → (|NC={E1,E2}=> Q) -∗ (|NC={E1,E3}=> P).
Proof. intros ->. rewrite ncfupd_trans //. Qed.
Global Instance except_0_ncfupd' E1 E2 P :
IsExcept0 (|NC={E1,E2}=> P).
Proof. by rewrite /IsExcept0 except_0_ncfupd. Qed.
Global Instance from_modal_ncfupd E P :
FromModal True modality_id (|NC={E}=> P) (|NC={E}=> P) P.
Proof. by rewrite /FromModal /= -ncfupd_intro. Qed.
Global Instance from_modal_ncfupd_wrong_mask E1 E2 P :
FromModal
(pm_error "Only non-mask-changing update modalities can be introduced directly.
Use [iApply ncfupd_mask_intro] to introduce mask-changing update modalities")
modality_id (|NC={E1,E2}=> P) (|NC={E1,E2}=> P) P | 100.
Proof. by intros []. Qed.
Global Instance elim_modal_bupd_ncfupd p E1 E2 P Q :
ElimModal True p false (|==> P) P (|NC={E1,E2}=> Q) (|NC={E1,E2}=> Q) | 10.
Proof.
by rewrite /ElimModal intuitionistically_if_elim
(bupd_ncfupd E1) ncfupd_frame_r wand_elim_r ncfupd_trans.
Qed.
Global Instance elim_modal_ncfupd_ncfupd p E1 E2 E3 P Q :
ElimModal True p false (|NC={E1,E2}=> P) P (|NC={E1,E3}=> Q) (|NC={E2,E3}=> Q).
Proof.
by rewrite /ElimModal intuitionistically_if_elim
ncfupd_frame_r wand_elim_r ncfupd_trans.
Qed.
Global Instance elim_modal_fupd_ncfupd p E1 E2 E3 P Q :
ElimModal True p false (|={E1,E2}=> P) P (|NC={E1,E3}=> Q) (|NC={E2,E3}=> Q).
Proof.
rewrite /ElimModal => ?. rewrite (fupd_ncfupd _ _) intuitionistically_if_elim
ncfupd_frame_r wand_elim_r ncfupd_trans //=.
Qed.
Global Instance add_modal_ncfupd E1 E2 P Q :
AddModal (|NC={E1}=> P) P (|NC={E1,E2}=> Q).
Proof. by rewrite /AddModal ncfupd_frame_r wand_elim_r ncfupd_trans. Qed.
Global Instance elim_acc_ncfupd {X} E1 E2 E α β mγ Q :
ElimAcc (X:=X) True (ncfupd E1 E2) (ncfupd E2 E1) α β mγ
(|NC={E1,E}=> Q)
(λ x, |NC={E2}=> β x ∗ (mγ x -∗? |NC={E1,E}=> Q))%I.
Proof.
rewrite /ElimAcc.
iIntros (_) "Hinner >Hacc". iDestruct "Hacc" as (x) "[Hα Hclose]".
iMod ("Hinner" with "Hα") as "[Hβ Hfin]".
iMod ("Hclose" with "Hβ") as "Hγ". by iApply "Hfin".
Qed.
Global Instance frame_ncfupd p E1 E2 R P Q :
Frame p R P Q → Frame p R (|NC={E1,E2}=> P) (|NC={E1,E2}=> Q).
Proof. rewrite /Frame=><-. by rewrite ncfupd_frame_l. Qed.
Lemma ncfupd_mask_mono' E1 E2 E1' E2' P : E1 ⊆ E2 → E2' ⊆ E1' → (|NC={E1, E1'}=> P) ⊢ |NC={E2,E2'}=> P.
Proof.
iIntros (??) "H".
iMod (ncfupd_mask_subseteq E1) as "Hclo"; auto.
iMod "H".
iApply (ncfupd_mask_weaken); auto.
Qed.
(** Introduction lemma for a mask-changing ncfupd.
This lemma is intended to be [iApply]ed. *)
Lemma ncfupd_mask_intro E1 E2 P :
E2 ⊆ E1 →
((|NC={E2,E1}=> emp) -∗ P) -∗ |NC={E1,E2}=> P.
Proof.
iIntros (?) "HP". iMod ncfupd_mask_subseteq as "Hclose"; last iModIntro; first done.
by iApply "HP".
Qed.
Lemma step_ncfupd_mask_mono Eo1 Eo2 Ei1 Ei2 P :
Ei2 ⊆ Ei1 → Eo1 ⊆ Eo2 →
(|NC={Eo1,Ei1}=> ▷ |NC={Ei1, Eo1}=> P) ⊢ (|NC={Eo2,Ei2}=> ▷ |NC={Ei2, Eo2}=> P).
Proof.
intros ??. rewrite -(emp_sep (|NC={Eo1,Ei1}=> ▷ _))%I.
rewrite (ncfupd_intro_mask Eo2 Eo1 emp%I) //.
rewrite ncfupd_frame_r -(ncfupd_trans Eo2 Eo1 Ei2). f_equiv.
rewrite ncfupd_frame_l -(ncfupd_trans Eo1 Ei1 Ei2). f_equiv.
rewrite (ncfupd_intro_mask Ei1 Ei2 (|NC={_,_}=> emp)%I) //.
rewrite ncfupd_frame_r. f_equiv.
rewrite [X in (X ∗ _)%I]later_intro -later_sep. f_equiv.
rewrite ncfupd_frame_r -(ncfupd_trans Ei2 Ei1 Eo2). f_equiv.
rewrite ncfupd_frame_l -(ncfupd_trans Ei1 Eo1 Eo2). f_equiv.
by rewrite ncfupd_frame_r left_id.
Qed.
Lemma step_ncfupd_intro Ei Eo P : Ei ⊆ Eo → ▷ P -∗ |NC={Eo,Ei}=> ▷ |NC={Ei,Eo}=> P.
Proof. intros. by rewrite -(step_ncfupd_mask_mono Ei _ Ei _) // -!ncfupd_intro. Qed.
Lemma step_ncfupd_wand Eo Ei P Q : (|NC={Eo}[Ei]▷=> P) -∗ (P -∗ Q) -∗ |NC={Eo}[Ei]▷=> Q.
Proof.
apply wand_intro_l.
rewrite (later_intro (P -∗ Q)%I).
iIntros "(HPQ&H)" (q) "HNC".
iMod ("H" with "[$]") as "H". iModIntro. iNext.
iMod "H" as "(HP&$)". iModIntro. by iApply "HPQ".
Qed.
Lemma step_ncfupd_ncfupd Eo Ei P : (|NC={Eo}[Ei]▷=> P) ⊣⊢ (|NC={Eo}[Ei]▷=> |NC={Eo}=> P).
Proof.
apply (anti_symm (⊢)).
- iIntros "H" (q) "HNC". iMod ("H" with "[$]") as "H".
iModIntro. iNext. iMod "H" as "(H&$)". iModIntro.
eauto.
- iIntros "H" (q) "HNC". iMod ("H" with "[$]") as "H".
iModIntro. iNext. iMod "H" as "(H&HNC)".
rewrite ncfupd_eq /ncfupd_def.
by iMod ("H" with "[$]") as "($&$)".
Qed.
Lemma step_ncfupdN_mono Eo Ei n P Q :
(P ⊢ Q) → (|NC={Eo}[Ei]▷=>^n P) ⊢ (|NC={Eo}[Ei]▷=>^n Q).
Proof.
intros HPQ. induction n as [|n IH]=> //=.
iIntros "H". iApply (step_ncfupd_wand with "H"); eauto.
iApply IH.
Qed.
Lemma step_ncfupdN_wand Eo Ei n P Q :
(|NC={Eo}[Ei]▷=>^n P) -∗ (P -∗ Q) -∗ (|NC={Eo}[Ei]▷=>^n Q).
Proof.
apply wand_intro_l. induction n as [|n IH]=> /=.
{ by rewrite wand_elim_l. }
iIntros "(HPQ&H)".
iApply (step_ncfupd_wand with "H"); eauto.
iIntros. iApply IH. by iFrame.
Qed.
Lemma step_ncfupdN_S_ncfupd n E P:
(|NC={E}[∅]▷=>^(S n) P) ⊣⊢ (|NC={E}[∅]▷=>^(S n) |NC={E}=> P).
Proof.
apply (anti_symm (⊢)); rewrite !Nat.iter_succ_r; apply step_ncfupdN_mono;
rewrite -step_ncfupd_ncfupd //.
Qed.
(** If the goal is a fancy update, this lemma can be used to make a later appear
in front of it in exchange for a later credit.
This is typically used as [iApply (lc_ncfupd_add_later with "Hcredit")],
where ["Hcredit"] is a credit available in the context. *)
Lemma lc_ncfupd_add_later E1 E2 P :
£1 -∗ (▷ |NC={E1, E2}=> P) -∗ |NC={E1, E2}=> P.
Proof.
iIntros "Hf Hupd".
iMod (lc_fupd_elim_later with "Hf Hupd"). done.
Qed.
End ncfupd.
Lemma nc_fupd_soundness `{!invGpreS Σ, !crashGpreS Σ} n E1 E2 (φ : Prop) :
(∀ `{Hinv: !invGS Σ} `{Hcrash: !crashGS Σ}, £ n ⊢ ∀ q, NC q -∗ |={E1,E2}=> ⌜φ⌝) → φ.
Proof.
iIntros (Hfupd). eapply (fupd_soundness n).
iIntros (?) "Hlc".
iMod NC_alloc as (Hc) "HNC".
iApply (Hfupd with "Hlc"). done.
Qed.
Lemma ncfupd_soundness `{!invGpreS Σ, !crashGpreS Σ} n E1 E2 (φ : Prop) :
(∀ `{Hinv: !invGS Σ} `{Hcrash: !crashGS Σ}, £ n ⊢ |NC={E1,E2}=> ⌜φ⌝) → φ.
Proof.
iIntros (Hfupd). eapply (fupd_soundness n).
iIntros (?) "Hlc".
iMod NC_alloc as (Hc) "HNC".
iPoseProof (Hfupd with "Hlc") as "Hfupd".
rewrite ncfupd_eq /ncfupd_def.
iMod ("Hfupd" with "HNC") as "[? _]". done.
Qed.
Global Hint Extern 1 (environments.envs_entails _ (|NC={_}=> _)) => iModIntro : core.
|
theory Fold_Spmf
imports
More_CC
begin
primrec (transfer)
foldl_spmf :: "('b \<Rightarrow> 'a \<Rightarrow> 'b spmf) \<Rightarrow> 'b spmf \<Rightarrow> 'a list \<Rightarrow> 'b spmf"
where
foldl_spmf_Nil: "foldl_spmf f p [] = p"
| foldl_spmf_Cons: "foldl_spmf f p (x # xs) = foldl_spmf f (bind_spmf p (\<lambda>a. f a x)) xs"
lemma foldl_spmf_return_pmf_None [simp]:
"foldl_spmf f (return_pmf None) xs = return_pmf None"
by(induction xs) simp_all
lemma foldl_spmf_bind_spmf: "foldl_spmf f (bind_spmf p g) xs = bind_spmf p (\<lambda>a. foldl_spmf f (g a) xs)"
by(induction xs arbitrary: g) simp_all
lemma bind_foldl_spmf_return:
"bind_spmf p (\<lambda>x. foldl_spmf f (return_spmf x) xs) = foldl_spmf f p xs"
by(simp add: foldl_spmf_bind_spmf[symmetric])
lemma foldl_spmf_map [simp]: "foldl_spmf f p (map g xs) = foldl_spmf (map_fun id (map_fun g id) f) p xs"
by(induction xs arbitrary: p) simp_all
lemma foldl_spmf_identity [simp]: "foldl_spmf (\<lambda>s x. return_spmf s) p xs = p"
by(induction xs arbitrary: p) simp_all
lemma foldl_spmf_conv_foldl:
"foldl_spmf (\<lambda>s x. return_spmf (f s x)) p xs = map_spmf (\<lambda>s. foldl f s xs) p"
by(induction xs arbitrary: p)(simp_all add: map_spmf_conv_bind_spmf[symmetric] spmf.map_comp o_def)
lemma foldl_spmf_Cons':
"foldl_spmf f (return_spmf a) (x # xs) = bind_spmf (f a x) (\<lambda>a'. foldl_spmf f (return_spmf a') xs)"
by(simp add: bind_foldl_spmf_return)
lemma foldl_spmf_append: "foldl_spmf f p (xs @ ys) = foldl_spmf f (foldl_spmf f p xs) ys"
by(induction xs arbitrary: p) simp_all
lemma
foldl_spmf_helper:
assumes "\<And>x. h (f x) = x"
assumes "\<And>x. f (h x) = x"
shows "foldl_spmf (\<lambda>a e. map_spmf h (g (f a) e)) acc es =
map_spmf h (foldl_spmf g (map_spmf f acc) es)"
using assms proof (induction es arbitrary: acc)
case (Cons a es)
then show ?case
by (simp add: spmf.map_comp map_bind_spmf bind_map_spmf o_def)
qed (simp add: map_spmf_conv_bind_spmf)
lemma
foldl_spmf_helper2:
assumes "\<And>x y. p (f x y) = x"
assumes "\<And>x y. q (f x y) = y"
assumes "\<And>x. f (p x) (q x) = x"
shows "foldl_spmf (\<lambda>a e. map_spmf (f (p a)) (g (q a) e)) acc es =
bind_spmf acc (\<lambda>acc'. map_spmf (f (p acc')) (foldl_spmf g (return_spmf (q acc')) es))"
proof (induction es arbitrary: acc)
note [simp] = spmf.map_comp map_bind_spmf bind_map_spmf o_def
case (Cons e es)
then show ?case
apply (simp add: map_spmf_conv_bind_spmf assms)
apply (subst bind_spmf_assoc[symmetric])
by (simp add: bind_foldl_spmf_return)
qed (simp add: assms(3))
lemma foldl_pair_constl: "foldl (\<lambda>s e. map_prod (\<lambda>_. c) (\<lambda>r. f r e) s) (c, sr) l =
Pair c (foldl (\<lambda>s e. f s e) sr l)"
by (induction l arbitrary: sr) (auto simp add: map_prod_def split_def)
lemma foldl_spmf_pair_left:
"foldl_spmf (\<lambda>(l, r) e. map_spmf (\<lambda>l'. (l', r)) (f l e)) (return_spmf (l, r)) es =
map_spmf (\<lambda>l'. (l', r)) (foldl_spmf f (return_spmf l) es)"
apply (induction es arbitrary: l)
apply simp_all
apply (subst (2) map_spmf_conv_bind_spmf)
apply (subst foldl_spmf_bind_spmf)
apply (subst (2) bind_foldl_spmf_return[symmetric])
by (simp add: map_spmf_conv_bind_spmf)
lemma foldl_spmf_pair_left2:
"foldl_spmf (\<lambda>(l, _) e. map_spmf (\<lambda>l'. (l', c')) (f l e)) (return_spmf (l, c)) es =
map_spmf (\<lambda>l'. (l', if es = [] then c else c')) (foldl_spmf f (return_spmf l) es)"
apply (induction es arbitrary: l c c')
apply simp_all
apply (subst (2) map_spmf_conv_bind_spmf)
apply (subst foldl_spmf_bind_spmf)
apply (subst (2) bind_foldl_spmf_return[symmetric])
by (simp add: map_spmf_conv_bind_spmf)
lemma foldl_pair_constr: "foldl (\<lambda>s e. map_prod (\<lambda>l. f l e) (\<lambda>_. c) s) (sl, c) l =
Pair (foldl (\<lambda>s e. f s e) sl l) c"
by (induction l arbitrary: sl) (auto simp add: map_prod_def split_def)
lemma foldl_spmf_pair_right:
"foldl_spmf (\<lambda>(l, r) e. map_spmf (\<lambda>r'. (l, r')) (f r e)) (return_spmf (l, r)) es =
map_spmf (\<lambda>r'. (l, r')) (foldl_spmf f (return_spmf r) es)"
apply (induction es arbitrary: r)
apply simp_all
apply (subst (2) map_spmf_conv_bind_spmf)
apply (subst foldl_spmf_bind_spmf)
apply (subst (2) bind_foldl_spmf_return[symmetric])
by (simp add: map_spmf_conv_bind_spmf)
lemma foldl_spmf_pair_right2:
"foldl_spmf (\<lambda>(_, r) e. map_spmf (\<lambda>r'. (c', r')) (f r e)) (return_spmf (c, r)) es =
map_spmf (\<lambda>r'. (if es = [] then c else c', r')) (foldl_spmf f (return_spmf r) es)"
apply (induction es arbitrary: r c c')
apply simp_all
apply (subst (2) map_spmf_conv_bind_spmf)
apply (subst foldl_spmf_bind_spmf)
apply (subst (2) bind_foldl_spmf_return[symmetric])
by (auto simp add: map_spmf_conv_bind_spmf split_def)
lemma foldl_spmf_pair_right3:
"foldl_spmf (\<lambda>(l, r) e. map_spmf (Pair (g e)) (f r e)) (return_spmf (l, r)) es =
map_spmf (Pair (if es = [] then l else g (last es))) (foldl_spmf f (return_spmf r) es)"
apply (induction es arbitrary: r l)
apply simp_all
apply (subst (2) map_spmf_conv_bind_spmf)
apply (subst foldl_spmf_bind_spmf)
apply (subst (2) bind_foldl_spmf_return[symmetric])
by (clarsimp simp add: split_def map_bind_spmf o_def)
lemma foldl_pullout: "bind_spmf f (\<lambda>x. bind_spmf (foldl_spmf g init (events x)) (\<lambda>y. h x y)) =
bind_spmf (bind_spmf f (\<lambda>x. foldl_spmf (\<lambda>(l, r) e. map_spmf (Pair l) (g r e)) (map_spmf (Pair x) init) (events x)))
(\<lambda>(x, y). h x y)" for f g h init events
apply (simp add: foldl_spmf_helper2[where f=Pair and p=fst and q=snd, simplified] split_def)
apply (clarsimp simp add: map_spmf_conv_bind_spmf)
by (subst bind_spmf_assoc[symmetric]) (auto simp add: bind_foldl_spmf_return)
lemma bind_foldl_spmf_pair_append: "
bind_spmf
(foldl_spmf (\<lambda>(x, y) e. map_spmf (apfst ((@) x)) (f y e)) (return_spmf (a @ c, b)) es)
(\<lambda>(x, y). g x y) =
bind_spmf
(foldl_spmf (\<lambda>(x, y) e. map_spmf (apfst ((@) x)) (f y e)) (return_spmf (c, b)) es)
(\<lambda>(x, y). g (a @ x) y)"
apply (induction es arbitrary: c b)
apply (simp_all add: split_def map_spmf_conv_bind_spmf apfst_def map_prod_def)
apply (subst (1 2) foldl_spmf_bind_spmf)
by simp
lemma foldl_spmf_chain: "
(foldl_spmf (\<lambda>(oevents, s_event) event. map_spmf (map_prod ((@) oevents) id) (fff s_event event)) (return_spmf ([], s_event)) ievents)
\<bind> (\<lambda>(oevents, s_event'). foldl_spmf ggg (return_spmf s_core) oevents
\<bind> (\<lambda>s_core'. return_spmf (f s_core' s_event'))) =
foldl_spmf (\<lambda>(s_event, s_core) event. fff s_event event \<bind> (\<lambda>(oevents, s_event').
map_spmf (Pair s_event') (foldl_spmf ggg (return_spmf s_core) oevents))) (return_spmf (s_event, s_core)) ievents
\<bind> (\<lambda>(s_event', s_core'). return_spmf (f s_core' s_event'))"
proof (induction ievents arbitrary: s_event s_core)
case Nil
show ?case by simp
next
case (Cons e es)
show ?case
apply (subst (1 2) foldl_spmf_Cons')
apply (simp add: split_def)
apply (subst map_spmf_conv_bind_spmf)
apply simp
apply (rule bind_spmf_cong[OF refl])
apply (subst (2) map_spmf_conv_bind_spmf)
apply simp
apply (subst Cons.IH[symmetric, simplified split_def])
apply (subst bind_commute_spmf)
apply (subst (2) map_spmf_conv_bind_spmf[symmetric])
apply (subst map_bind_spmf[symmetric, simplified o_def])
apply (subst (1) foldl_spmf_bind_spmf[symmetric])
apply (subst (3) map_spmf_conv_bind_spmf)
apply (simp add: foldl_spmf_append[symmetric] map_prod_def split_def)
subgoal for x
apply (cases x)
subgoal for a b
apply (simp add: split_def)
apply (subst bind_foldl_spmf_pair_append[where c="[]" and a=a and b=b and es=es, simplified apfst_def map_prod_def append_Nil2 split_def id_def])
by simp
done
done
qed
\<comment> \<open>pauses\<close>
primrec pauses :: "'a list \<Rightarrow> (unit, 'a, 'b) gpv" where
"pauses [] = Done ()"
| "pauses (x # xs) = Pause x (\<lambda>_. pauses xs)"
lemma WT_gpv_pauses [WT_intro]:
"\<I> \<turnstile>g pauses xs \<surd>" if "set xs \<subseteq> outs_\<I> \<I>"
using that by(induction xs) auto
lemma exec_gpv_pauses:
"exec_gpv callee (pauses xs) s =
map_spmf (Pair ()) (foldl_spmf (map_fun id (map_fun id (map_spmf snd)) callee) (return_spmf s) xs)"
by(induction xs arbitrary: s)(simp_all add: split_def foldl_spmf_Cons' map_bind_spmf bind_map_spmf o_def del: foldl_spmf_Cons)
end
|
#ifndef __GSL_PERMUTE_MATRIX_H__
#define __GSL_PERMUTE_MATRIX_H__
#if !defined( GSL_FUN )
# if !defined( GSL_DLL )
# define GSL_FUN extern
# elif defined( BUILD_GSL_DLL )
# define GSL_FUN extern __declspec(dllexport)
# else
# define GSL_FUN extern __declspec(dllimport)
# endif
#endif
#include <gsl/gsl_permute_matrix_complex_long_double.h>
#include <gsl/gsl_permute_matrix_complex_double.h>
#include <gsl/gsl_permute_matrix_complex_float.h>
#include <gsl/gsl_permute_matrix_long_double.h>
#include <gsl/gsl_permute_matrix_double.h>
#include <gsl/gsl_permute_matrix_float.h>
#include <gsl/gsl_permute_matrix_ulong.h>
#include <gsl/gsl_permute_matrix_long.h>
#include <gsl/gsl_permute_matrix_uint.h>
#include <gsl/gsl_permute_matrix_int.h>
#include <gsl/gsl_permute_matrix_ushort.h>
#include <gsl/gsl_permute_matrix_short.h>
#include <gsl/gsl_permute_matrix_uchar.h>
#include <gsl/gsl_permute_matrix_char.h>
#endif /* __GSL_PERMUTE_MATRIX_H__ */
|
function [beta,Mu_c,Sigma_y_tmp] = GMC(Priors, Mu, Sigma, x, in, out)
%GMC Gaussian Mixture Conditional. Returns the conditional P(out|in=x) of
% the Gaussian Mixture Model P(out,in) at the point in=x.
%
%
%
% Inputs -----------------------------------------------------------------
% o Priors: 1 x K array representing the prior probabilities of the K GMM
% components.
% o Mu: D x K array representing the centers of the K GMM components.
% o Sigma: D x D x K array representing the covariance matrices of the
% K GMM components.
% o x: P x N array representing N datapoints of P dimensions.
% o in: 1 x P array representing the dimensions to consider as
% inputs.
% o out: 1 x Q array representing the dimensions to consider as
% outputs (D=P+Q)
%
% Output------------------------------------------------------------------------
%
% o beta : N x K
%
% o Mu_c : N x Q x K
%
% o Sigma_y_tmp : Q x Q x K
%
%
%
nbData = size(x,2);
nbVar = size(Mu,1);
nbStates = size(Sigma,3);
%% Compute the influence of each GMM component, given input x
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for i=1:nbStates
% if i == 1
% Mu(in,i)
% Sigma(in,in,i)
% inv(Sigma(in,in,i))
% det(Sigma(in,in,i))
% gaussPDF(x, Mu(in,i), Sigma(in,in,i))
% end
Pxi(:,i) = Priors(i).*gaussPDF(x, Mu(in,i), Sigma(in,in,i));
end
beta = Pxi./repmat(sum(Pxi,2)+realmin,1,nbStates);
%% Compute expected means y, given input x
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for j=1:nbStates
% (N x D x K)
% (D x N) (D x D) (D x N)
Mu_c(:,:,j) = (repmat(Mu(out,j),1,nbData) + Sigma(out,in,j)*inv(Sigma(in,in,j)) * (x-repmat(Mu(in,j),1,nbData)))';
% Mu_c(:,:,j) = (repmat(Mu(out,j),1,nbData))';
end
%% Compute Marginal covariance matrices Sigma_y
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for j=1:nbStates
Sigma_y_tmp(:,:,j) = Sigma(out,out,j) - (Sigma(out,in,j)*inv(Sigma(in,in,j))*Sigma(in,out,j));
Sigma_y_tmp(:,:,j) = 0.5 * (Sigma_y_tmp(:,:,j) + Sigma_y_tmp(:,:,j)');
end
end
|
#-------------------------------------------------------------------------------
# KINSHIP matrix calculation using GenABEL package
#
# Here we convert PLINK tped and tfam files to GenABEL data and we calculate
# kinship matrix using "ibs" function. We need tped and tfam files
#
# Note: tped should have subject alleles coded by letters, EX: A,C,G,T (see
# page 233 of GenABEL tutorial)
#
# Author : Karim Oualkacha
# Adapted by : Pablo Cinoglani
#
# 2012
#-------------------------------------------------------------------------------
library(GenABEL)
library(nFactors)
library(CompQuadForm)
library(MASS) # provides write.matrix(), used in createTpedFastlmm
#-------------------------------------------------------------------------------
# Create kinship file
#-------------------------------------------------------------------------------
createKinship <- function( tfam , tpedFile ) {
# Convert TFAM to GenABEL phenotype
# Save as genable phenotype file
# It contains: subject IDs, SEX and the quantitative phenotype.
# This file should have headers for each column: "id sex pheno"
cat('Writing GenABEL phenotype file\n');
sexOri <- (2 - tfam$sex); # Note: Sex has to be coded as 0=female and 1=male, instead of 1=male, 2=female)
sex <- pmin(1, pmax(0,sexOri) ); # Make sure it is always {0,1}
if( sum(sex!=sexOri) ) { cat("\n\nWARNING: Sex had to be adjusted to {0,1}. This will probably create some problems\n\n"); }
genPhen <- data.frame( id=tfam$individualId, sex=sex, pheno=tfam$phenotype );
write.table( genPhen, file=genabelPhenFile, col.names=TRUE, row.names=FALSE, quote=FALSE);
#---
# Converts PLINK tped file to GenABEL genotype
#---
cat('Converting TPED to GenABEL file:', genabelGenFile,'\n');
convert.snp.tped(tped = tpedFile, tfam = tfamFile, out = genabelGenFile, strand = "+")
#---
# Calculate and save kinship matrix
#---
# Load GenABEL data
cat('Loading GenABEL file: ', genabelGenFile,'\n');
data.GenABEL <- load.gwaa.data(phe = genabelPhenFile, gen = genabelGenFile, force = T)
# Next command calculates the kinship matrix using all autosomal markers that we have in the tped file
cat('Calculating Kinship matrix (IBS)\n');
kinshipMatrix = ibs(data.GenABEL[, autosomal(data.GenABEL)], weight = "freq")
cat('Calculating diagReplace on Kinship matrix\n');
kinshipMatrix = diagReplace(kinshipMatrix, upper=TRUE)
kinshipMatrix;
}
#-------------------------------------------------------------------------------
# Create 'pheno.txt' file (used by FaST-LMM)
#-------------------------------------------------------------------------------
createPheno <- function(phenoFile, tfam){
# Create 'pheno' file
pheno.test = data.frame(tfam$familyId, tfam$individualId, tfam$phenotype)
colnames(pheno.test) = c( 'familyId', 'individualId', 'phenotype');
cat('Writing phenotype to file: ', phenoFile , '\n' );
write.table(pheno.test, file = phenoFile, sep = " ", quote = FALSE, row.names = FALSE, col.names = FALSE)
}
#-------------------------------------------------------------------------------
# Create 'sim' file for FaST-LMM
#
# From FastLMM's handbook:
# Instead of SNP data from which genetic similarities are computed, the user may provide
# the genetic similarities directly using the –sim <filename> option. The file containing
# the genetic similarities should be tab delimited and have both row and column labels for
# the IDs (family ID and individual ID separated by a space). The value in the top-left
# corner of the file should be var.
#
# Arguments:
#
# tfam : This is the TFAM matrix (PLINK's TFAM format file)
#
# kinshipMatrix : The kinship matrix. If the kinship matrix was calculated
# with GenABEL, it should first be converted using this
# command:
# kin1 = diagReplace(kin1, upper=TRUE)
#
#-------------------------------------------------------------------------------
createSim <- function(simFile, tfam, kinshipMatrix ) {
cat('Creating similarity matrix\n' );
# Create 'similarity' file for FaST-LMM (simFile).
kin.FaST = 2 * kinshipMatrix
n.col = paste(tfam[,1] , tfam[,2]) # Create labels 'family_ID individual_ID'
colnames(kin.FaST) <- n.col
rownames(kin.FaST) <- n.col
# Write similarity file
cat('Writing similarity matrix to file: ', simFile , '\n' );
write.table(c('var\t'), file = simFile, quote = FALSE, sep = "\t", row.names = FALSE, col.names = FALSE, eol = ""); # This is just to create the required 'var'
write.table(kin.FaST, file = simFile, quote = FALSE, sep = "\t", row.names = TRUE , col.names = TRUE , append=TRUE); # Now we dump the data
}
#-------------------------------------------------------------------------------
# Create a TPED file for FastLmm
# NOTE: Since FaST-LMM doesn't actually use this file, we can use any random data
#-------------------------------------------------------------------------------
createTpedFastlmm <- function(numIndividuals, tpedFile){
cat('Creating TPED file (for FaST-LMM)\n' );
# File already exists: Don't even bother.
if( file.exists(tpedFile) ) {
if( debug) { cat('File ', tpedFile ,' already exists. Nothing done.\n'); }
return(NULL); # NB: a bare 'return;' in R does not exit the function
}
geno.FaST = floor( 2 * runif(2 * numIndividuals) ) + 1; # A random vector of '1' and '2' (representing 'A' and 'C'). Note: Any other combination should also do the trick.
geno.test = c(1, "snp0", 0, 0, 1, geno.FaST)
geno.test = rbind(geno.test)
if( debug ) { cat('Writing tped matrix to file: ', tpedFile , '\tsize: ', dim(geno.test), '\n' ); }
write.matrix(geno.test, file = tpedFile, sep="")
}
#-------------------------------------------------------------------------------
# Fatal error
#-------------------------------------------------------------------------------
fatalError <- function(errStr) {
errStr <- paste('FATAL ERROR:', errStr, "\n");
stop( errStr );
}
#-------------------------------------------------------------------------------
# Invoke FastLmm and get VC
#
# Under the NULL we call FaST-LMM to estimate the VCs
#
# Note: In order to invoke Fast-LMM we must create some temporal files
# After invoking the program, we have to read and parse the result files
#-------------------------------------------------------------------------------
invokeFastlmm <- function(tfam, simFile, phenoFile, tfamFile) {
cat('Invoking FaST-LMM\n' );
# Directory for eigenvalue output (Fast-LMM)
eigenDir <- tmpFile("ASKAT_FaSTLMM");
# TMP File names to be used
genoName <- tmpFile("geno_test", sep="_"); # FaSTLMM version 2.03 doesn't allow any '/' in this name (bug)
genoTfamFileName <- paste(genoName,".tfam", sep="");
genoTpedFileName <- paste(genoName,".tped", sep="");
genoOutFileName <- paste(genoName,".out.txt", sep="");
fastlmmOutFileName <- tmpFile("OUTFaST-LMM.txt");
#---
# Create TMP files used by the program
#---
# Create TPED file
numIndividuals = dim(tfam)[1]
createTpedFastlmm(numIndividuals, genoTpedFileName)
# Copy TFAM file
if( ! file.exists(genoTfamFileName) ) {
file.copy(tfamFile, genoTfamFileName);
}
#---
# Invoke Fast-LMM
#---
# Create Fast-LMM command line and execute it
fastlmmcCmd <- paste( path.FastLmm # Full path to fastlmm binary
, "-tfile", genoName # basename for PLINK's transposed .tfam and .tped files
, "-sim" , simFile # file containing the genetic similarity matrix
, "-eigenOut", eigenDir # save the spectral decomposition object to the directoryname
, "-pheno", phenoFile # name of phenotype file
, "-out", genoOutFileName # name of output file
, "-mpheno 1" # index for phenotype in -pheno file to process, starting at 1 for the first phenotype column
);
if( debug ) { cat('Execute system command: ', fastlmmcCmd , '\n' ); }
retCode <- system(fastlmmcCmd)
if( retCode != 0 ) fatalError( paste("Cannot execute command\n\t", fastlmmcCmd) );
#---
# Read Fast-LMM results (from TMP files)
#---
# Read results from 'genoOutFileName'
if( debug ) { cat('Reading FaST-LMM table from: ', genoOutFileName , '\n' ); }
res.fastlmm = read.table(genoOutFileName, header=TRUE)
nullGeneticVar = res.fastlmm$NullGeneticVar
nullResidualVar = res.fastlmm$NullResidualVar
# Read 'S' matrix
p = dim(tfam)[1]
sFileName <- paste(eigenDir,"S.bin", sep="/")
if( debug ) { cat('Reading read.SVD.bin S-matrix from: ', sFileName , '\n' ); }
read.SVD.bin = file(sFileName, "rb")
S = readBin(read.SVD.bin, "numeric", n=p, endian="little")
close(read.SVD.bin)
S = diag(sort(S, decreasing = T))
# Read 'U' matrix
uFileName <- paste(eigenDir,"U.bin", sep="/")
if( debug ) { cat('Reading upper read.SVD.bin U-matrix from: ', uFileName , '\n' ); }
read.SVD.bin = file(uFileName, "rb")
U = readBin(read.SVD.bin, "numeric", n=p*p, endian="little")
close(read.SVD.bin)
U = matrix(U,p,p, byrow=F)
U = U[ ,ncol(U):1]
# Remove TMP files and dirs
# WARNING: A function should NOT have side effects (e.g. deleting a file created somewhere else)
if( !debug) unlink( c(fastlmmOutFileName, genoTfamFileName, genoTpedFileName, genoOutFileName, fastlmmOutFileName ) );
if( !debug) unlink( eigenDir, recursive = TRUE)
# Create results list
return( list(nullGeneticVar = nullGeneticVar, nullResidualVar = nullResidualVar, S = S, U = U) );
}
#-------------------------------------------------------------------------------
# Create a name for a temporal file
#-------------------------------------------------------------------------------
tmpFile <- function(name, sep="/") { return( paste(tmpDir, name, sep=sep) ); }
#-------------------------------------------------------------------------------
# Main
#-------------------------------------------------------------------------------
debug <- FALSE
exitAfterError <- !debug # Exit after any error, (unless we are in debug mode)
if( exitAfterError ) {
# Make sure we get an error code after a 'stop' (and we quit the process)
options( error= quote({ traceback(); q("no",status=1,FALSE) }) );
} else {
# Make sure we get a stack trace
options( error= traceback );
}
#---
# Parse command line arguments
# Debug:
# cmdLineArgs <- c( 'geno_cov.tped', 'geno_cov.tfam', 'geno_cov.genable.gen', 'geno_cov.genable.phen', 'geno_cov.kinship.RData', 'geno_cov.sim', 'geno_cov.pheno.txt' );
#---
if( !exists('cmdLineArgs') ) { cmdLineArgs <- commandArgs(trailingOnly = TRUE); }
tpedFile <- cmdLineArgs[1];
tfamFile <- cmdLineArgs[2];
genabelGenFile <- cmdLineArgs[3];
genabelPhenFile <- cmdLineArgs[4];
kinshipFile <- cmdLineArgs[5];
simFile <- cmdLineArgs[6];
phenoFile <- cmdLineArgs[7];
path.FastLmm <- cmdLineArgs[8];
cat("Kinship arguments:\n");
cat("\tInput TPED file :", tpedFile, "\n");
cat("\tInput TFAM file :", tfamFile, "\n");
cat("\tOutput GenABEL 'genotype' file :", genabelGenFile, "\n");
cat("\tOutput GenABEL 'phenotype' file :", genabelPhenFile, "\n");
cat("\tOutput Kinship matrix file :", kinshipFile, "\n");
cat("\tOutput SIM matrix file :", simFile, "\n");
cat("\tOutput pheno file :", phenoFile, "\n");
cat("\tFast-LMM path :", path.FastLmm , "\n" );
#---
# TMP dir (form tpedFile)
#---
len <- nchar(tpedFile);
extLen <- nchar( ".tped" );
if( len <= 5 ) { fatalError(paste("TPED file", tpedFile, "does not have '.tped' extension\n")); }
ext <- tolower( substr( tpedFile, len - extLen + 1 , len ) );
if( ext != ".tped" ) { fatalError(paste("TPED file", tpedFile, "does not have '.tped' extension\n")); }
tmpDir <- substr( tpedFile, 0 , len - extLen );
dir.create(tmpDir, showWarnings = FALSE);
#---
# Read TFAM
#---
cat('Reading TFAM file\n');
tfam <- read.csv(tfamFile, sep="", header=FALSE, col.names=c('familyId','individualId', 'paternalId', 'maternalId', 'sex', 'phenotype') );
#---
# Create (or load) kinship file
#---
if( file.exists(kinshipFile) ) {
cat("Kinship file '", kinshipFile ,"' exists. Loading.\n");
load( kinshipFile )
} else {
kinshipMatrix <- createKinship( tfam, tpedFile );
}
#---
# Create SIM file (for FaST-LMM)
#---
createSim(simFile, tfam, kinshipMatrix );
#---
# Create pheno.txt file (for FaST-LMM)
#---
createPheno(phenoFile, tfam);
#---
# Invoke FaST-LMM
#---
fastlmm = invokeFastlmm(tfam, simFile, phenoFile, tfamFile );
#---
# Save kinship as RData file
#---
cat('Saving results to file', kinshipFile, '\n');
save( kinshipMatrix, fastlmm, file=kinshipFile );
if( !debug) unlink( tmpDir, recursive = TRUE)
|
theory SumTail
imports Main
begin
text \<open>
\begin{description}
\item[\bf (a)] Define a primitive recursive function @{term ListSum} that
computes the sum of all elements of a list of natural numbers.
Prove the following equations. Note that @{term "[0..n]"} and @{term
"replicate n a"} are already defined in the theory {\tt List.thy}.
\end{description}
\<close>
primrec ListSum :: "nat list \<Rightarrow> nat"
where
"ListSum [] = 0" |
"ListSum (x#xs) = x + ListSum xs"
lemma ListSum_append[simp]:"ListSum (xs @ ys) = ListSum xs + ListSum ys"
apply (induction xs) by auto
theorem "2 * ListSum [0..<n+1] = n * (n + 1)"
apply (induction n) by auto
theorem "ListSum (replicate n a) = n * a"
apply (induction n) by auto
text \<open>
\begin{description}
\item[\bf (b)] Define an equivalent function @{term ListSumT} using a
tail-recursive function @{term ListSumTAux}. Prove that @{term ListSum}
and @{term ListSumT} are in fact equivalent.
\end{description}
\<close>
primrec ListSumTAux :: "nat list \<Rightarrow> nat \<Rightarrow> nat"
where
"ListSumTAux [] n = n" |
"ListSumTAux (x#xs) n = ListSumTAux xs (x + n)"
lemma ListSumTAux_sum_accum[simp]:"\<forall> a b. ListSumTAux xs (a + b) = a + ListSumTAux xs b"
apply (induction xs) by auto
lemma ListSumTAux_append[simp]:"ListSumTAux (xs @ ys) 0 = ListSumTAux xs 0 + ListSumTAux ys 0"
apply (induction xs)
apply auto
by (metis ListSumTAux_sum_accum Nat.add_0_right)
definition ListSumT :: "nat list \<Rightarrow> nat"
where
"ListSumT xs = ListSumTAux xs 0"
lemma ListSumT_append[simp]:"ListSumT (xs @ ys) = ListSumT xs + ListSumT ys"
apply (induction xs)
apply (auto simp add: ListSumT_def)
by (metis ListSumTAux_append ListSumTAux_sum_accum add.commute add.left_neutral)
theorem "ListSum xs = ListSumT xs"
apply (induction xs)
apply (auto simp add: ListSumT_def)
by (metis ListSumTAux_sum_accum Nat.add_0_right)
end
|
(*<*)
theory ModeCondition
imports Main FaultModellingTypes Boolean_Expression_Checkers
begin
(*>*)
(*<*)
datatype_new ('a, 'b, 'c) Condition =
Operations
(Tautology:"'a \<Rightarrow> bool")
(Sat: "'a \<Rightarrow> bool")
(Equiv: "'a \<Rightarrow> 'a \<Rightarrow> bool")
(Absorb: "'a \<Rightarrow> 'a \<Rightarrow> bool")
(Eval: "('b \<Rightarrow> 'c) \<Rightarrow>'a \<Rightarrow> bool")
(Top: "'a")
(Bot: "'a")
(Both: "'a binop")
(Any: "'a binop")
(Not: "'a \<Rightarrow> 'a")
notation (output) Tautology ("\<TT>\<index> _" 70)
notation (output) Sat ("\<SS>\<index> _" 70)
notation (output) Equiv (infixr "\<EE>\<index>" 70)
notation (output) Absorb (infixr "\<AA>\<index>" 70)
notation (output) Eval ("_\<Colon>\<lbrakk>_\<rbrakk>\<index>" 70)
notation (output) Top ("\<top>\<index>")
notation (output) Bot ("\<bottom>\<index>")
notation (output) Both (infixr "\<and>\<index>" 50)
notation (output) Any (infixr "\<or>\<index>" 50)
notation (output) Not ("\<not>\<index>_" 50)
datatype_new 'a BoolEx =
MCConst bool
| MCVar 'a
| MCNot "'a BoolEx"
| MCAnd "'a BoolEx" "'a BoolEx"
type_synonym 'a SeqEx = "'a list set"
notation (output) MCConst ("_\<^sub>B" 60)
notation (output) MCVar ("_\<^sub>B" 60)
notation (output) MCNot ("\<not>_" 64)
notation (output) MCAnd (infix "\<and>" 65)
abbreviation MCOr :: "'vb BoolEx \<Rightarrow> 'vb BoolEx \<Rightarrow> 'vb BoolEx"
where "MCOr b1 b2 \<equiv> MCNot (MCAnd (MCNot b1) (MCNot b2))"
notation (output) MCOr (infix "\<or>" 70)
abbreviation MCNand :: "'vb BoolEx \<Rightarrow> 'vb BoolEx \<Rightarrow> 'vb BoolEx"
where "MCNand b1 b2 \<equiv> (MCNot (MCAnd b1 b2))"
notation (output) MCNand (infix "\<^sup>\<not>\<and>" 73)
abbreviation MCXor :: "'vb BoolEx \<Rightarrow> 'vb BoolEx \<Rightarrow> 'vb BoolEx"
where "MCXor b1 b2 \<equiv> (MCAnd (MCNand b1 b2) (MCOr b1 b2))"
notation (output) MCXor (infix "\<otimes>" 70)
type_synonym 'vb MCEnv = "'vb \<Rightarrow> bool"
primrec BoolEx_to_bool_expr :: "'vb BoolEx \<Rightarrow> 'vb bool_expr"
where
"BoolEx_to_bool_expr (MCConst c) = Const_bool_expr c" |
"BoolEx_to_bool_expr (MCVar v) = Atom_bool_expr v" |
"BoolEx_to_bool_expr (MCNot b) = Neg_bool_expr (BoolEx_to_bool_expr b)" |
"BoolEx_to_bool_expr (MCAnd b1 b2) =
And_bool_expr (BoolEx_to_bool_expr b1) (BoolEx_to_bool_expr b2)"
(*>*)
primrec BoolEx_eval :: "'vb MCEnv \<Rightarrow> 'vb BoolEx \<Rightarrow> bool"
where
"BoolEx_eval _ (MCConst c) = c" |
"BoolEx_eval s (MCVar v) = (s v)" |
"BoolEx_eval s (MCNot b) = (\<not> BoolEx_eval s b)" |
"BoolEx_eval s (MCAnd b1 b2) =
((BoolEx_eval s b1) \<and> (BoolEx_eval s b2))"
definition absorb_rule :: "bool \<Rightarrow> bool \<Rightarrow> bool"
where
"absorb_rule a b \<equiv> (a \<longrightarrow> b) \<or> (b \<longrightarrow> a)"
definition absorb_test :: "'a bool_expr \<Rightarrow> 'a bool_expr \<Rightarrow> bool"
where
"absorb_test a b \<equiv> taut_test (Or_bool_expr (Imp_bool_expr a b) (Imp_bool_expr b a))"
definition BoolEx_Absorb :: "'vb BoolEx \<Rightarrow> 'vb BoolEx \<Rightarrow> bool"
where
"BoolEx_Absorb c1 c2 \<equiv>
(
absorb_test (BoolEx_to_bool_expr c1) (BoolEx_to_bool_expr c2)
)"
definition BoolCondition :: "('vb BoolEx, 'vb, bool) Condition" ("\<^sub>B")
where
"BoolCondition \<equiv>
Operations
(taut_test \<circ> BoolEx_to_bool_expr)
(sat_test \<circ> BoolEx_to_bool_expr)
(\<lambda> c1 c2. equiv_test (BoolEx_to_bool_expr c1) (BoolEx_to_bool_expr c2))
(BoolEx_Absorb)
(BoolEx_eval)
(MCConst True)
(MCConst False)
(MCAnd)
(MCOr)
(MCNot)"
definition SeqCondition:: "('a SeqEx, nat, 'a SeqEx) Condition"
where
"SeqCondition \<equiv> Operations
(\<lambda> x . x = {})
(\<lambda> x . x = {})
(\<lambda> x y . x = y)
(\<lambda> x y . x = y)
(\<lambda> x s . True)
(UNIV)
({})
(\<lambda> x y . x \<inter> y)
(\<lambda> x y . x \<union> y)
(\<lambda> x . UNIV - x)
"
declare BoolCondition_def [simp]
definition Tautology_Eval_s :: "('a, 'b, 'c) Condition \<Rightarrow> ('b \<Rightarrow> 'c) \<Rightarrow> 'a \<Rightarrow> bool"
where "Tautology_Eval_s C s a \<equiv> (Tautology C a = Eval C s a)"
definition ValuePreservation :: "('a, 'b, 'c) Condition \<Rightarrow> bool"
where
"ValuePreservation C \<equiv> (\<forall> a b.
(Tautology C a = (\<forall> s. Eval C s a)) \<and>
(Sat C a = (\<exists> s. Eval C s a)) \<and>
(Absorb C a b = (\<forall> s. absorb_rule (Eval C s a) (Eval C s b))) \<and>
(Equiv C a b = (\<forall> s. Eval C s a = Eval C s b))
)"
definition ValidLattice :: "('a, 'b, 'c) Condition \<Rightarrow> bool"
where
"ValidLattice C \<equiv>
(Tautology C (Top C)) \<and>
(\<not> (Sat C (Bot C))
)"
definition ValidOps :: "('a, 'b, 'c) Condition \<Rightarrow> bool"
where
"ValidOps cond \<equiv> (\<forall> a b.
(Equiv cond (Any cond a (Bot cond)) a) \<and>
(Equiv cond (Any cond a (Top cond)) (Top cond)) \<and>
(Equiv cond (Both cond a (Bot cond)) (Bot cond)) \<and>
(Equiv cond (Both cond a (Top cond)) a) \<and>
(Equiv cond (Any cond a b) (Any cond b a)) \<and>
(Equiv cond (Both cond a b) (Both cond b a)) \<and>
(Equiv cond (Bot cond) (Not cond (Top cond))) \<and>
(Equiv cond (Top cond) (Not cond (Bot cond)))
)"
definition ValidCondition :: "('a, 'b, 'c) Condition \<Rightarrow> bool"
where
"ValidCondition C \<equiv> ValuePreservation C \<and> ValidLattice C \<and> ValidOps C"
lemma "\<lbrakk> ValuePreservation C \<rbrakk> \<Longrightarrow>
Tautology C a \<Longrightarrow> ((\<forall> s. \<not> Eval C s a)) \<Longrightarrow> False"
apply (simp add: ValuePreservation_def)
done
(*
definition TautologyProperty :: "('a, 'b, 'c) Condition \<Rightarrow> bool"
where
"TautologyProperty cond \<equiv> \<forall> a. Tautology cond a \<longrightarrow>
(\<exists> b c. Equiv cond b (Not cond c) \<and> (Equiv cond a (Any cond b c)))"
lemma "\<lbrakk> C = BoolCondition; ValidOps C; ValidLattice C; ValuePreservation C \<rbrakk> \<Longrightarrow>
TautologyProperty C"
apply (auto)
apply (auto simp add: ValidOps_def TautologyProperty_def ValidLattice_def ValuePreservation_def)
apply (auto simp add: BoolEx_eval_def BoolEx_Absorb_def)
apply (auto simp add: taut_test equiv_test sat_test BoolEx_to_bool_expr_def)
done
*)
lemma ValuePreservation_BoolEx:
"val_bool_expr (BoolEx_to_bool_expr b) s = BoolEx_eval s b"
apply (induction b)
apply (auto)
done
lemma ValuePreservation_BoolCondition: "ValuePreservation BoolCondition"
apply (auto simp add: ValuePreservation_def)
apply (auto simp add: taut_test ValuePreservation_BoolEx )
apply (auto simp add: sat_test ValuePreservation_BoolEx)
apply (auto simp add: absorb_rule_def BoolEx_Absorb_def absorb_test_def taut_test
ValuePreservation_BoolEx)
apply (auto simp add: equiv_test ValuePreservation_BoolEx)
done
lemma ValidOps_BoolCondition: "ValidOps BoolCondition"
apply (auto simp add: ValidOps_def equiv_test)
done
lemma ValidLattice_BoolCondition: "ValidLattice BoolCondition"
apply (auto simp add: ValidLattice_def taut_test sat_test)
done
theorem ValidCondition_BoolCondition: "ValidCondition BoolCondition"
apply (auto simp add: ValidCondition_def)
(*apply (auto simp add: ValuePreservation_BoolCondition)*)
apply (auto simp add: ValuePreservation_def)
apply (auto simp add: taut_test ValuePreservation_BoolEx)
apply (auto simp add: sat_test ValuePreservation_BoolEx)
apply (auto simp add: absorb_rule_def absorb_test_def BoolEx_Absorb_def taut_test
ValuePreservation_BoolEx)
apply (auto simp add: equiv_test ValuePreservation_BoolEx)
apply (auto simp add: ValidOps_def equiv_test)
apply (auto simp add: ValidLattice_def taut_test sat_test)
done
end
|
// Copyright (c) 2006 by BBNT Solutions LLC
// All Rights Reserved.
#include "Generic/common/leak_detection.h" // This must be the first #include
#include "Generic/common/SessionLogger.h"
#include "Generic/patterns/UnionPattern.h"
#include "Generic/patterns/PatternReturn.h"
#include "Generic/patterns/ShortcutPattern.h"
#include "Generic/patterns/features/PatternFeatureSet.h"
#include "Generic/patterns/features/GenericPFeature.h"
#include <boost/lexical_cast.hpp>
#include "Generic/common/Sexp.h"
// Private symbols
namespace {
Symbol membersSym = Symbol(L"members");
Symbol greedySym = Symbol(L"GREEDY");
}
UnionPattern::UnionPattern(Sexp *sexp, const Symbol::HashSet &entityLabels,
const PatternWordSetMap& wordSets):
_is_greedy(false)
{
int nkids = sexp->getNumChildren();
if (nkids < 2)
throwError(sexp, "too few children in UnionPattern");
initializeFromSexp(sexp, entityLabels, wordSets);
if (_patternList.size() == 0)
throwError(sexp, "no member patterns specified in UnionPattern");
}
bool UnionPattern::initializeFromAtom(Sexp *childSexp, const Symbol::HashSet &entityLabels, const PatternWordSetMap& wordSets) {
Symbol atom = childSexp->getValue();
if (LanguageVariantSwitchingPattern::initializeFromAtom(childSexp, entityLabels, wordSets)) {
return true;
} else if (atom == greedySym) {
_is_greedy = true; return true;
} else {
logFailureToInitializeFromAtom(childSexp);
return false;
}
}
bool UnionPattern::initializeFromSubexpression(Sexp *childSexp, const Symbol::HashSet &entityLabels,
const PatternWordSetMap& wordSets)
{
Symbol constraintType = childSexp->getFirstChild()->getValue();
if (LanguageVariantSwitchingPattern::initializeFromSubexpression(childSexp, entityLabels, wordSets)) {
return true;
} else if (constraintType == membersSym) {
int n_patterns = childSexp->getNumChildren() - 1;
for (int j = 0; j < n_patterns; j++)
_patternList.push_back(parseSexp(childSexp->getNthChild(j+1), entityLabels, wordSets));
return true;
} else {
logFailureToInitializeFromChildSexp(childSexp);
return false;
}
}
PatternFeatureSet_ptr UnionPattern::matchesSentence(PatternMatcher_ptr patternMatcher, SentenceTheory *sTheory, UTF8OutputStream *debug) {
if (_languageVariant && !patternMatcher->getActiveLanguageVariant()->matchesConstraint(*_languageVariant)) {
return matchesAlignedSentence(patternMatcher, sTheory, _languageVariant);
}
std::vector<float> scores;
bool matchedOne = false;
PatternFeatureSet_ptr allPatterns = boost::make_shared<PatternFeatureSet>(); // features from all patterns.
for (size_t i = 0; i < _patternList.size(); i++) {
SentenceMatchingPattern_ptr pattern = _patternList[i]->castTo<SentenceMatchingPattern>();
PatternFeatureSet_ptr sfs = pattern->matchesSentence(patternMatcher, sTheory);
if (sfs){
matchedOne = true;
allPatterns->addFeatures(sfs);
scores.push_back(sfs->getScore());
SessionLogger::dbg("BRANDY") << getDebugID() << ": UNION member " << i << " of " << _patternList.size() << " has score " << sfs->getScore() << "\n";
if (_is_greedy) break;
}
}
if( !matchedOne ) {
return PatternFeatureSet_ptr();
}
addID(allPatterns);
allPatterns->addFeature(boost::make_shared<GenericPFeature>(shared_from_this(), -1, patternMatcher->getActiveLanguageVariant()));
allPatterns->setScore(_scoringFunction(scores, _score));
return allPatterns;
}
std::vector<PatternFeatureSet_ptr> UnionPattern::multiMatchesSentence(PatternMatcher_ptr patternMatcher, SentenceTheory *sTheory, UTF8OutputStream *debug) {
if (_languageVariant && !patternMatcher->getActiveLanguageVariant()->matchesConstraint(*_languageVariant)) {
return multiMatchesAlignedSentence(patternMatcher, sTheory, _languageVariant);
}
std::vector<PatternFeatureSet_ptr> return_vector;
if (_is_greedy || _force_single_match_sentence) {
PatternFeatureSet_ptr sfs = matchesSentence(patternMatcher, sTheory, debug);
if (sfs)
return_vector.push_back(sfs);
return return_vector;
} else {
for (size_t i = 0; i < _patternList.size(); i++) {
SentenceMatchingPattern_ptr pattern = _patternList[i]->castTo<SentenceMatchingPattern>();
std::vector<PatternFeatureSet_ptr> subpatternVector = pattern->multiMatchesSentence(patternMatcher, sTheory);
for (size_t j = 0; j < subpatternVector.size(); j++)
return_vector.push_back(subpatternVector[j]);
}
return return_vector;
}
}
Pattern_ptr UnionPattern::replaceShortcuts(const SymbolToPatternMap &refPatterns) {
for (size_t i = 0; i < _patternList.size(); ++i) {
replaceShortcut<Pattern>(_patternList[i], refPatterns);
}
return shared_from_this();
}
void UnionPattern::dump(std::ostream &out, int indent) const {
for (int i = 0; i < indent; i++) out << " ";
out << "UnionPattern: ";
if (!getID().is_null()) out << getID();
out << std::endl;
for (size_t i = 0; i < _patternList.size(); i++) {
_patternList[i]->dump(out, indent+2);
}
}
/**
 * Redefinition of the parent class's virtual method that collects
 * pointers to the PatternReturn objects for this pattern and all of
 * its member patterns.
 *
 * @param output The PatternReturnVecSeq to which the returns are appended.
 *
 * @author [email protected]
 * @date 2010.10.20
 **/
void UnionPattern::getReturns(PatternReturnVecSeq & output) const {
Pattern::getReturns(output);
for (size_t n = 0; n < _patternList.size(); ++n) {
if (_patternList[n])
_patternList[n]->getReturns(output);
}
}
/**
* Retrieves first valid ID found.
*
* @return First valid ID.
*
* @author [email protected]
* @date 2011.08.08
**/
Symbol UnionPattern::getFirstValidID() const {
return getFirstValidIDForCompositePattern(_patternList);
}
|
\documentclass[Chemistry.tex]{subfiles}
\begin{document}
\chapter{Atoms, Molecules and Stoichiometry}
An \sldef{atom} is the smallest particle an element can be divided into without losing its identity.
\sldef{Isotopes} are atoms of the same element with the same number of protons in the nucleus but different numbers of neutrons.
A \sldef{molecule} is made up of a group of atoms held together by covalent bonds.
The \sldef{relative atomic mass} (\(A_r\)) of an element is the ratio of the average mass of one atom of that element to one-twelfth the mass of an atom of carbon-12.
The \sldef{relative isotopic mass} of an isotope of an element is the ratio of the mass of one atom of that isotope to one-twelfth the mass of an atom of carbon-12.
The \sldef{relative molecular mass} (\(M_r\)) of a molecule is the ratio of the average mass of one molecule to one-twelfth the mass of an atom of carbon-12.
The \sldef{relative formula mass} (\(M_r\)) of an ionic compound is the ratio of the average mass of one formula unit of that compound to one-twelfth the mass of an atom of carbon-12.
One \sldef{mole} is the amount of substance (\(n\)) that contains the same number of particles as there are atoms in exactly \SI{12.0}{\gram} of pure carbon-12. The number of particles (\(N\)) in one mole of any substance is a constant known as \sldef{Avogadro's constant} (\(L\)), which is approximately equal to \SI{6.02E23}{\per\mole}.
The \sldef{molar mass} (\(M\)) of a substance refers to the mass of one mole of that substance; it has units \si{\gram\per\mole}.
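As a worked illustration (the numbers are invented for the example), the amount of substance in \SI{4.0}{\gram} of a compound with molar mass \(M = \SI{40.0}{\gram\per\mole}\) is \(n = m/M = \SI{4.0}{\gram} / \SI{40.0}{\gram\per\mole} = \SI{0.10}{\mole}\), which contains \(N = nL \approx \num{6.02e22}\) particles.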
\section{Stoichiometry Involving Gases}
The \sldef{molar volume} of a gas is the volume occupied by one mole of the gas at a specified temperature and pressure. The molar volume at room temperature and pressure (\SI{298}{\kelvin} and \SI{1}{\atmosphere}) is \SI{24.0}{\cubic\deci\metre\per\mole}, while the molar volume at standard temperature and pressure (\SI{273}{\kelvin} and \SI{1}{\atmosphere}) is \SI{22.4}{\cubic\deci\metre\per\mole}.
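For example (illustrative numbers), \SI{48.0}{\cubic\deci\metre} of a gas at room temperature and pressure contains \(n = V/V_m = 48.0/24.0 = \SI{2.0}{\mole}\).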
\section{Stoichiometric Calculations}
\sldef{Theoretical yield} is the maximum amount of product that can be obtained from a given amount of reactants. Likewise, \sldef{actual yield} is the amount of product actually obtained from a reaction. \sldef{Percentage yield} is the ratio of actual yield to theoretical yield, expressed as a percentage.
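For instance (invented numbers), if a reaction could theoretically give \SI{5.0}{\gram} of product but only \SI{4.0}{\gram} is collected, the percentage yield is \((4.0/5.0) \times 100\% = 80\%\).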
In a reaction, the \sldef{limiting reagent} is the reactant that is completely used up at the end of the reaction. The amount of product formed is determined by the amount of the limiting reagent(s) at the start.
The \sldef{concentration} of a solution is the amount of solute, in grams or moles, per unit volume of solution. A \sldef{standard solution} is a solution with a known concentration.
\sldef{Dilution} is the process of adding more solvent to a known volume of solution to lower the concentration of the solution, with the number of moles of solute remaining the same.
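Because the amount of solute is conserved, \(c_1 V_1 = c_2 V_2\) for a dilution. For example (illustrative numbers), diluting \SI{25.0}{\cubic\centi\metre} of a \SI{0.400}{\mole\per\cubic\deci\metre} solution to \SI{100.0}{\cubic\centi\metre} gives \(c_2 = 0.400 \times 25.0/100.0 = \SI{0.100}{\mole\per\cubic\deci\metre}\).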
The \sldef{empirical formula} of a compound is the formula showing the simplest whole-number ratio of the number of atoms of each element in the compound.
The \sldef{molecular formula} is the exact formula showing the actual number of atoms of each element present in a compound.
A \sldef{part per million} (\si{\ppm}) is a fraction out of a million, analogous to a percentage, which is a fraction out of a hundred.
\section{Volumetric Analysis}
\sldef{Titration} is a process involving the gradual addition of one solution to a fixed volume of another solution until stoichiometric amounts of the two solutions have reacted.
When direct titration is not possible, back titration is used. Direct titration is not possible when \begin{slinenumor}
\item one of the reactants is an insoluble solid
\item there is no suitable indicator for the titration
\item the sample may contain impurities, which may interfere with direct titration.\end{slinenumor}
Back titration involves a known excess of one reagent reacting with an unknown amount of another reagent, followed by a direct titration to find out the amount of excess reagent.
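Schematically, if \(n_{\text{added}}\) moles of the excess reagent are added and the back titration shows that \(n_{\text{unreacted}}\) moles remain, the sample must have consumed \(n_{\text{added}} - n_{\text{unreacted}}\) moles.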
Table \ref{tb:1.indicators} details common indicators.
The \sldef{volume strength} of \slch{H2O2} is the volume of \slch{O2} (at s.t.p.) that can be evolved by the decomposition of one volume of \slch{H2O2}. It is a ratio of volume of \slch{O2} per volume of \slch{H2O2}.
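For instance, \SI{1}{\cubic\centi\metre} of ``20-volume'' \slch{H2O2} liberates \SI{20}{\cubic\centi\metre} of \slch{O2} at s.t.p.\ on complete decomposition.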
\begin{table*}\centering\begin{tabular}[c]{lllll}
\toprule
\textbf{Indicator} & \textbf{pH range} & \textbf{Acid} & \textbf{Endpoint} & \textbf{Alkali}\tabularnewline
\midrule
Methyl orange & \numrange{3}{5} & Red & Orange & Yellow\tabularnewline
(screened) & \numrange{3}{5} & Purple & Grey & Green\tabularnewline
Phenolphthalein & \numrange{8}{10} & Colourless & Pale pink & Red\tabularnewline
Bromothymol blue & \numrange{6}{7.6} & Yellow & Bluish-green & Blue\tabularnewline
\bottomrule
\end{tabular}\caption{Common indicators}\label{tb:1.indicators}\end{table*}
\end{document}
|
{-|
Module : Univariate
Description : Univariate polynomials (notably cyclotomics)
Copyright : (c) Matthew Amy, 2020
Maintainer : [email protected]
Stability : experimental
Portability : portable
-}
module Feynman.Algebra.Polynomial.Univariate(
Univariate,
Cyclotomic,
var,
constant,
(*|),
primitiveRoot,
evaluate
)
where
import Data.List
import Data.Map(Map)
import qualified Data.Map as Map
import Data.Complex
import Data.Maybe(maybe)
import qualified Feynman.Util.Unicode as Unicode
import Feynman.Algebra.Base
import Feynman.Algebra.Polynomial
{-------------------------------
Univariate polynomials
-------------------------------}
-- | Univariate polynomials over the ring 'r'
data Univariate r = Univariate { getCoeffs :: !(Map Integer r) } deriving (Eq, Ord)
instance (Eq r, Num r, Show r) => Show (Univariate r) where
show = showWithName "x"
instance Degree (Univariate r) where
degree = maybe 0 (fromIntegral . fst) . Map.lookupMax . getCoeffs
-- | Print a univariate polynomial with a particular variable name
showWithName :: (Eq r, Num r, Show r) => String -> Univariate r -> String
showWithName x p
| p == 0 = "0"
| otherwise = intercalate " + " $ map showTerm (Map.assocs $ getCoeffs p)
where showTerm (expt, a)
| expt == 0 = show a
| a == 1 = showExpt expt
| a == -1 = "-" ++ showExpt expt
| otherwise = show a ++ showExpt expt
showExpt expt
| expt == 1 = x
| otherwise = Unicode.sup x expt
instance (Eq r, Num r) => Num (Univariate r) where
(+) = add
(*) = mult
negate = Univariate . Map.map negate . getCoeffs
abs = id
signum = id
fromInteger 0 = Univariate Map.empty
fromInteger i = Univariate $ Map.singleton 0 (fromInteger i)
-- | Normalize a univariate polynomial
normalize :: (Eq r, Num r) => Univariate r -> Univariate r
normalize = Univariate . Map.filter (/=0) . getCoeffs
-- | The unique univariate variable
var :: Num r => Univariate r
var = Univariate $ Map.singleton 1 1
-- | Constant polynomial
constant :: (Eq r, Num r) => r -> Univariate r
constant a
| a == 0 = Univariate $ Map.empty
| otherwise = Univariate $ Map.singleton 0 a
-- | Multiply by a scalar
(*|) :: (Eq r, Num r) => r -> Univariate r -> Univariate r
(*|) 0 = \_p -> zero
(*|) a = Univariate . Map.map (a*) . getCoeffs
-- | Add two univariate polynomials
add :: (Eq r, Num r) => Univariate r -> Univariate r -> Univariate r
add p = normalize . Univariate . Map.unionWith (+) (getCoeffs p) . getCoeffs
-- | Multiply two univariate polynomials
mult :: (Eq r, Num r) => Univariate r -> Univariate r -> Univariate r
mult p = normalize . Map.foldrWithKey (\expt a -> add (mulTerm expt a)) 0 . getCoeffs
where mulTerm expt a = Univariate . Map.mapKeysMonotonic (+ expt) . Map.map (* a) $ getCoeffs p
{-------------------------------
Cyclotomics
-------------------------------}
-- | Cyclotomic polynomials over the ring 'r'
data Cyclotomic r = Cyc { getOrder :: !Integer, getPoly :: !(Univariate r) } deriving (Eq, Ord)
instance (Eq r, Num r, Show r) => Show (Cyclotomic r) where
show p = showWithName (Unicode.sub Unicode.zeta (getOrder p)) $ getPoly p
instance Degree (Cyclotomic r) where
degree = degree . getPoly
instance (Eq r, Num r) => Num (Cyclotomic r) where
p + q = reduceOrder $ Cyc m (p' + q')
where (Cyc m p', Cyc _ q') = unifyOrder p q
p * q = reduceOrder $ Cyc m (p' * q')
where (Cyc m p', Cyc _ q') = unifyOrder p q
negate (Cyc m p) = Cyc m $ negate p
abs p = p
signum p = p
fromInteger i = Cyc 1 (fromInteger i)
-- | Unify the order of two cyclotomics
unifyOrder :: Cyclotomic r -> Cyclotomic r -> (Cyclotomic r, Cyclotomic r)
unifyOrder (Cyc n p) (Cyc m q)
| n == m = (Cyc n p, Cyc m q)
| otherwise = (Cyc r p', Cyc r q')
where r = lcm n m
p' = Univariate . Map.mapKeysMonotonic ((r `div` n) *) $ getCoeffs p
q' = Univariate . Map.mapKeysMonotonic ((r `div` m) *) $ getCoeffs q
-- | Rewrite the cyclotomic in lowest order
reduceOrder :: (Eq r, Num r) => Cyclotomic r -> Cyclotomic r
reduceOrder (Cyc m c) = Cyc m' c'
where m' = m `div` d
c' = Univariate . Map.mapKeysMonotonic (`div` d) $ getCoeffs c
d = foldr gcd m . Map.keys $ getCoeffs c
-- | Construct the cyclotomic polynomial \(\zeta_i\)
primitiveRoot :: Num r => Integer -> Cyclotomic r
primitiveRoot i = Cyc i var
-- | Convert to a complex number
evaluate :: (Real r, RealFloat f) => Cyclotomic r -> Complex f
evaluate (Cyc m p) = Map.foldrWithKey f (0.0 :+ 0.0) $ getCoeffs p
where f root coeff = (mkPolar (realToFrac coeff) (expnt root) +)
expnt root = 2.0*pi*(fromInteger root)/(fromInteger m)
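For intuition, `evaluate` places each coefficient at a power of the primitive m-th root of unity and sums the results. The same computation as a Python sketch (the function name is ours, for illustration only):

```python
import cmath

def evaluate_cyc(m, coeffs):
    # coeffs maps exponent k to coefficient a_k, representing
    # sum_k a_k * zeta_m^k with zeta_m = exp(2*pi*i/m),
    # just as the Map Integer r inside Cyclotomic does.
    return sum(a * cmath.exp(2j * cmath.pi * k / m)
               for k, a in coeffs.items())

# zeta_4 = i, so 1 + zeta_4 evaluates to 1 + i:
z = evaluate_cyc(4, {0: 1, 1: 1})
assert abs(z - (1 + 1j)) < 1e-9
```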
|