There has been organized Irish Dancing in Davis since about 1990. It's useful to split discussion of it into two categories, because of the different levels of financial and other commitment needed.
Ceili or social dance
An Irish Culture Club started in the early 1990s at UC Davis as a way of getting access to campus resources. Classes in social dance were offered through the UCD Experimental College. Folks from the club and these classes met at places such as Mansion Cellars, Dempsey's (both Departed Businesses), on the patio at Sudwerk, and in a private home. Many musicians generously came and played for the dancers, which is a little different from a usual Irish music session (in that the tempos and tune choices have to match what the dancers request, rather than following the choices of the musicians).
A class in Celtic Social Dancing is offered at the Davis Art Center on Tuesday nights (it was Monday nights from January 2007 to August 2008, and was Irish Ceili (Folk) Dance). Instructors http://www.siamsa.net/bios Shirleigh Brannon and Users/DougWalter teach basic moves and figures of traditional Irish ceili and Scottish Country dances. We take students from easy dances such as Siege of Carrick to more challenging eight hands and choreographed team dances (when appropriate to their tolerance for enjoyment). The claim is that no partner or previous experience is needed, just soft shoes and love of the dance. Check the printed http://www.davisartcenter.org/classes.html course catalog for more information.
Step and team dance
This is a solo form of dance. It is usually thought of as more balletic and rigorous (and when ceili dances are performed for recitals or competition they're often referred to as team dances). Different moves and routines are used for light shoe or hard shoe dances; the former is somewhat similar to Scottish Highland dancing, while the latter employs battering steps and clicks that were one basis for American tap dancing (according to research by Jere Curry, among others).
Instruction in Irish step dance is available in Davis through the Raven Valley Irish Dance school, offering classes at Davis Art Center on Thursday nights. Davis dancers have also pursued instruction through the http://www.mcbrideirishdancers.com McBride School of Irish Dance and the http://www.kennellyschool.com/index.html Kennelly School of Irish Dance. Both have classes in the Bay Area, the former coming as close to Davis as Vallejo, while the latter has a Sacramento class.
A class in Teen/Adult Beginning Step Dance is offered at the Davis Art Center on Tuesday nights (it was Monday nights from January 2007 to August 2008). Instructor http://www.siamsa.net/bios Shirleigh Brannon teaches the beginnings of step dancing in a lower style but at a faster pace suitable for adults and teens. The printed http://www.davisartcenter.org/classes.html course catalog has more information.
20080703 13:50:14 Is there an actual ceili anywhere in Davis anymore (i.e. not just lessons, but a ceili that anyone can attend)? Is Shirleigh's ceili just for lessons, or can anyone come?
I don't think there is a ceili anywhere in Davis. The Art Center wants the facility used by students, so the class is not wide open. Users/DougWalter
There is an excellent ceili in Alameda on Monday nights, which you can find info on here: http://www.ilsitane.com/ceili.htm
There is also one at the Starry Plough in Berkeley at the same time, although this one is much smaller than the one in Alameda (I'd currently recommend the Alameda one). Users/IDoNotExist
20080815 16:13:14 I must say that the ceili in Alameda is excellent (if a little bit far away). Live musicians. Friendly people. Hours of dancing. Many people from (or formerly from) Davis. And the traditional ceili dances are mixed in with new dances and unusual (for a ceili) music. Definitely fun! Users/IDoNotExist
|
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE DeriveDataTypeable, DeriveGeneric #-}
-- |
-- Module : Statistics.Distribution.Gamma
-- Copyright : (c) 2009, 2011 Bryan O'Sullivan
-- License : BSD3
--
-- Maintainer : [email protected]
-- Stability : experimental
-- Portability : portable
--
-- The gamma distribution. This is a continuous probability
-- distribution with two parameters, /k/ and ϑ. If /k/ is
-- integral, the distribution represents the sum of /k/ independent
-- exponentially distributed random variables, each of which has a
-- mean of ϑ.
module Statistics.Distribution.Gamma
(
GammaDistribution
-- * Constructors
, gammaDistr
, gammaDistrE
, improperGammaDistr
, improperGammaDistrE
-- * Accessors
, gdShape
, gdScale
) where
import Control.Applicative
import Data.Aeson (FromJSON(..), ToJSON, Value(..), (.:))
import Data.Binary (Binary(..))
import Data.Data (Data, Typeable)
import GHC.Generics (Generic)
import Numeric.MathFunctions.Constants (m_pos_inf, m_NaN, m_neg_inf)
import Numeric.SpecFunctions (incompleteGamma, invIncompleteGamma, logGamma, digamma)
import qualified System.Random.MWC.Distributions as MWC
import Statistics.Distribution.Poisson.Internal as Poisson
import qualified Statistics.Distribution as D
import Statistics.Internal
-- | The gamma distribution.
data GammaDistribution = GD {
gdShape :: {-# UNPACK #-} !Double -- ^ Shape parameter, /k/.
, gdScale :: {-# UNPACK #-} !Double -- ^ Scale parameter, ϑ.
} deriving (Eq, Typeable, Data, Generic)
instance Show GammaDistribution where
showsPrec i (GD k theta) = defaultShow2 "improperGammaDistr" k theta i
instance Read GammaDistribution where
readPrec = defaultReadPrecM2 "improperGammaDistr" improperGammaDistrE
instance ToJSON GammaDistribution
instance FromJSON GammaDistribution where
parseJSON (Object v) = do
k <- v .: "gdShape"
theta <- v .: "gdScale"
maybe (fail $ errMsgI k theta) return $ improperGammaDistrE k theta
parseJSON _ = empty
instance Binary GammaDistribution where
put (GD x y) = put x >> put y
get = do
k <- get
theta <- get
maybe (fail $ errMsgI k theta) return $ improperGammaDistrE k theta
-- | Create gamma distribution. Both shape and scale parameters must
-- be positive.
gammaDistr :: Double -- ^ Shape parameter, /k/.
-> Double -- ^ Scale parameter, ϑ.
-> GammaDistribution
gammaDistr k theta
= maybe (error $ errMsg k theta) id $ gammaDistrE k theta
errMsg :: Double -> Double -> String
errMsg k theta
= "Statistics.Distribution.Gamma.gammaDistr: "
++ "k=" ++ show k
++ " theta=" ++ show theta
++ " but must be positive"
-- | Create gamma distribution. Both shape and scale parameters must
-- be positive.
gammaDistrE :: Double -- ^ Shape parameter, /k/.
-> Double -- ^ Scale parameter, ϑ.
-> Maybe GammaDistribution
gammaDistrE k theta
| k > 0 && theta > 0 = Just (GD k theta)
| otherwise = Nothing
-- | Create gamma distribution. Both shape and scale parameters must
-- be non-negative.
improperGammaDistr :: Double -- ^ Shape parameter, /k/.
-> Double -- ^ Scale parameter, ϑ.
-> GammaDistribution
improperGammaDistr k theta
= maybe (error $ errMsgI k theta) id $ improperGammaDistrE k theta
errMsgI :: Double -> Double -> String
errMsgI k theta
= "Statistics.Distribution.Gamma.improperGammaDistr: "
++ "k=" ++ show k
++ " theta=" ++ show theta
++ " but must be non-negative"
-- | Create gamma distribution. Both shape and scale parameters must
-- be non-negative.
improperGammaDistrE :: Double -- ^ Shape parameter, /k/.
-> Double -- ^ Scale parameter, ϑ.
-> Maybe GammaDistribution
improperGammaDistrE k theta
| k >= 0 && theta >= 0 = Just (GD k theta)
| otherwise = Nothing
instance D.Distribution GammaDistribution where
cumulative = cumulative
instance D.ContDistr GammaDistribution where
density = density
logDensity (GD k theta) x
| x <= 0 = m_neg_inf
| otherwise = log x * (k - 1) - (x / theta) - logGamma k - log theta * k
quantile = quantile
instance D.Variance GammaDistribution where
variance (GD a l) = a * l * l
instance D.Mean GammaDistribution where
mean (GD a l) = a * l
instance D.MaybeMean GammaDistribution where
maybeMean = Just . D.mean
instance D.MaybeVariance GammaDistribution where
maybeStdDev = Just . D.stdDev
maybeVariance = Just . D.variance
instance D.MaybeEntropy GammaDistribution where
maybeEntropy (GD a l)
| a > 0 && l > 0 =
Just $
a
+ log l
+ logGamma a
+ (1-a) * digamma a
| otherwise = Nothing
instance D.ContGen GammaDistribution where
genContVar (GD a l) = MWC.gamma a l
density :: GammaDistribution -> Double -> Double
density (GD a l) x
| a < 0 || l <= 0 = m_NaN
| x <= 0 = 0
| a == 0 = if x == 0 then m_pos_inf else 0
| x == 0 = if a < 1 then m_pos_inf else if a > 1 then 0 else 1/l
| a < 1 = Poisson.probability (x/l) a * a / x
| otherwise = Poisson.probability (x/l) (a-1) / l
cumulative :: GammaDistribution -> Double -> Double
cumulative (GD k l) x
| x <= 0 = 0
| otherwise = incompleteGamma k (x/l)
quantile :: GammaDistribution -> Double -> Double
quantile (GD k l) p
| p == 0 = 0
| p == 1 = 1/0
| p > 0 && p < 1 = l * invIncompleteGamma k p
| otherwise =
error $ "Statistics.Distribution.Gamma.quantile: p must be in [0,1] range. Got: "++show p
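For readers without a Haskell toolchain, the density and moment formulas used above can be checked with a small standalone sketch (plain Python, standard library only; the function names here are illustrative and not part of this package):

```python
import math

def gamma_log_density(k: float, theta: float, x: float) -> float:
    # Mirrors the logDensity instance above:
    # (k-1)*log x - x/theta - logGamma k - k*log theta
    if x <= 0:
        return float("-inf")
    return (k - 1) * math.log(x) - x / theta - math.lgamma(k) - k * math.log(theta)

def gamma_mean(k: float, theta: float) -> float:
    return k * theta            # mean = k * theta

def gamma_variance(k: float, theta: float) -> float:
    return k * theta * theta    # variance = k * theta^2
```

With k = 1 the distribution reduces to the exponential with mean ϑ, which gives a quick sanity check of the density formula.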
|
%------------------------------------------------------------------------------
% package includes
%------------------------------------------------------------------------------
% font encoding is set up for pdflatex, for other environments see
% http://tex.stackexchange.com/questions/44694/fontenc-vs-inputenc
\usepackage[T1]{fontenc} % 8-bit fonts, improves handling of hyphenations
\usepackage{lmodern}
\usepackage[utf8]{inputenc}
\usepackage{slashed}
% provides `old' commands for table of contents. Eases the ability to switch
% between book and scrbook
\usepackage{scrhack}
\newcommand{\nn}{\nonumber}
\newcommand{\taul}{\tau_\text{lep}}
\newcommand{\tauh}{\tau_\text{h}}
\newcommand{\pt}{p_\text{T}}
\newcommand{\ptmiss}{\slashed{E}_T}
\newcommand{\mreco}{m_{\text{reco}}}
\usepackage[english]{babel}
\usepackage[style=numeric, sorting=none, backend=biber]{biblatex}
\addbibresource{bib/topic1.bib}
% ------------------- layout, default -------------------
% adjust the style of float captions, separated from text to improve readability
\usepackage[labelfont=bf, labelsep=colon, format=hang, textfont=singlespacing]{caption}
% With format = hang your caption will look like this:
% Figure 1: Lorem ipsum dolor sit amet,
% consectetuer adipiscing elit.
% Ut purus elit, vestibulum
% If you instead want
% Figure 1: Lorem ipsum dolor sit amet,
% consectetuer adipiscing elit. Ut purus
% elit, vestibulum
% change to format=plain
\usepackage{chngcntr} % continuous numbering of figures/tables over chapters
\counterwithout{equation}{chapter}
\counterwithout{figure}{chapter}
\counterwithout{table}{chapter}
% Uncomment the following line if you switch from scrbook to book
% and comment the setkomafont line
\usepackage{titlesec} % remove "Chapter" from the chapter title
\titleformat{\chapter}[hang]{\bfseries\huge}{\thechapter}{2pc}{\huge}
\titlespacing*{\chapter}{0pt}{10pt}{10pt}
%\setkomafont{chapter}{\normalfont\bfseries\huge}
\makeatletter
\renewcommand\chapter{\par%
\thispagestyle{plain}%
\global\@topnum\z@
\@afterindentfalse
\secdef\@chapter\@schapter}
\makeatother
\usepackage{setspace} % Line spacing
\onehalfspacing
% \doublespacing % uncomment for double spacing, e.g. for annotations in correction
% ------------------- functional, default-------------------
\usepackage[dvipsnames]{xcolor} % more colors
\usepackage{array} % custom format per column in table - needed on the title page
\usepackage{graphicx} % include graphics
\usepackage{subcaption} % divide figure, e.g. 1(a), 1(b)...
\usepackage{amsmath} % |
\usepackage{amsthm} % | math, bmatrix etc
\usepackage{amsfonts} % |
\usepackage{calc} % calculate within LaTeX
\usepackage[unicode=true,bookmarks=true,bookmarksnumbered=true,
bookmarksopen=true,bookmarksopenlevel=1,breaklinks=false,
pdfborder={0 0 0},backref=false,colorlinks=false]{hyperref}
\usepackage{etoolbox} % if-else commands
%==========================================
% You might not need the following packages, I only included them as they
% are needed for the example floats
% ------------------- functional, custom -------------------
\usepackage{algorithm,algpseudocode}
\usepackage{bm} % bold greek variables (boldmath)
\usepackage{tikz}
\usetikzlibrary{positioning} % use: above left of, etc
% required for the ToDo list
\usepackage{ifthen}
% Improves general appearance of the text
\usepackage[protrusion=true,expansion=true, kerning]{microtype}
\usepackage{enumitem}
% nicer font for pdf rendering
%\usepackage{lmodern}
% For nicer looking tables
\usepackage{booktabs}
% usually you don't need this, just for demonstration of a longer caption
\usepackage{lipsum}
%------------------------------------------------------------------------------
% (re)new commands / settings
%------------------------------------------------------------------------------
% ----------------- referencing ----------------
\newcommand{\secref}[1]{Section~\ref{#1}}
\newcommand{\chapref}[1]{Chapter~\ref{#1}}
\renewcommand{\eqref}[1]{Eq.(\ref{#1})}
\newcommand{\figref}[1]{Figure~\ref{#1}}
\newcommand{\tabref}[1]{Table~\ref{#1}}
% ------------------- colors -------------------
\definecolor{darkgreen}{rgb}{0.0, 0.5, 0.0}
% Colors of the Albert Ludwigs University as in
% https://www.zuv.uni-freiburg.de/service/cd/cd-manual/farbwelt
\definecolor{UniBlue}{RGB}{0, 74, 153}
\definecolor{UniRed}{RGB}{193, 0, 42}
\definecolor{UniGrey}{RGB}{154, 155, 156}
% ------------------- layout -------------------
% prevents floating objects from being placed ahead of their section
\let\mySection\section\renewcommand{\section}{\suppressfloats[t]\mySection}
\let\mySubSection\subsection\renewcommand{\subsection}{\suppressfloats[t]\mySubSection}
% ------------------- math formatting commands -------------------
% define vectors to be bold instead of using an arrow
%\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\mat}[1]{\mathbf{#1}}
% tag equation with name
\newcommand{\eqname}[1]{\tag*{#1}}
% ------------------- pdf settings -------------------
% ADAPT THIS
\hypersetup{pdftitle={\thetitle},
pdfauthor={\theauthor},
pdfsubject={Undergraduate thesis at the Albert Ludwig University of Freiburg},
pdfkeywords={deep learning, awesome algorithm, undergraduate thesis},
pdfpagelayout=OneColumn, pdfnewwindow=true, pdfstartview=XYZ, plainpages=false}
%==========================================
% You might not need the following commands, I only included them as they
% are needed for the example floats
% ------------------- Tikz styles -------------------
\tikzset{>=latex} % arrow style
% ------------------- algorithm ---------------------
% Command to align comments in algorithm
\newcommand{\alignedComment}[1]{\Comment{\parbox[t]{.35\linewidth}{#1}}}
% define a foreach command in algorithms
\algnewcommand\algorithmicforeach{\textbf{foreach}}
\algdef{S}[FOR]{ForEach}[1]{\algorithmicforeach\ #1\ \algorithmicdo}
% stretch the baseline slightly (1.2; note \onehalfspacing is already set above via setspace)
\renewcommand{\baselinestretch}{1.2}
% set distance between items in a list, for more details see the
% enumitem package: https://www.ctan.org/pkg/enumitem
\setlist{itemsep=.5em}
% use ra in your tables to increase the space between rows
% 1.3 should be fine
\newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}}
% ToDo counters
\usepackage{ifthen} % for the \whiledo loop (already loaded above)
\newcounter{todos}
\setcounter{todos}{0}
\newcounter{extends}
\setcounter{extends}{0}
\newcounter{drafts}
\setcounter{drafts}{0}
% ------------------- marker commands -------------------
% ToDo command
\newcommand{\todo}[1]{\textbf{\textcolor{red}{(TODO: #1)}}\refstepcounter{todos}\label{todo \thetodos}}
\newcommand{\extend}[1]{\textbf{\textcolor{darkgreen}{(EXTEND: #1)}}\refstepcounter{extends}\label{extend \theextends}}
% Lighter color to note down quick drafts
\newcommand{\draft}[1]{\textbf{\textcolor{NavyBlue}{(DRAFT: #1)}}\refstepcounter{drafts}\label{draft \thedrafts}}
% microtype with lmodern, see https://tex.stackexchange.com/questions/75305/microtype-warning-with-lmodern-package-and-koma-script
%\DeclareMicrotypeAlias{lmss}{cmr} |
theory T86
imports Main
begin
lemma "(
(\<forall> x::nat. \<forall> y::nat. meet(x, y) = meet(y, x)) &
(\<forall> x::nat. \<forall> y::nat. join(x, y) = join(y, x)) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. meet(x, meet(y, z)) = meet(meet(x, y), z)) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. join(x, join(y, z)) = join(join(x, y), z)) &
(\<forall> x::nat. \<forall> y::nat. meet(x, join(x, y)) = x) &
(\<forall> x::nat. \<forall> y::nat. join(x, meet(x, y)) = x) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. mult(x, join(y, z)) = join(mult(x, y), mult(x, z))) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. mult(join(x, y), z) = join(mult(x, z), mult(y, z))) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. meet(x, over(join(mult(x, y), z), y)) = x) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. meet(y, undr(x, join(mult(x, y), z))) = y) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. join(mult(over(x, y), y), x) = x) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. join(mult(y, undr(y, x)), x) = x) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. over(join(x, y), z) = join(over(x, z), over(y, z))) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. over(x, meet(y, z)) = join(over(x, y), over(x, z))) &
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. undr(meet(x, y), z) = join(undr(x, z), undr(y, z))) &
(\<forall> x::nat. \<forall> y::nat. invo(join(x, y)) = meet(invo(x), invo(y))) &
(\<forall> x::nat. \<forall> y::nat. invo(meet(x, y)) = join(invo(x), invo(y))) &
(\<forall> x::nat. invo(invo(x)) = x)
) \<longrightarrow>
(\<forall> x::nat. \<forall> y::nat. \<forall> z::nat. undr(x, join(y, z)) = join(undr(x, y), undr(x, z)))
"
nitpick[card nat=4,timeout=86400]
oops
end |
Formal statement is: proposition Cauchy_integral_formula_convex: assumes S: "convex S" and K: "finite K" and contf: "continuous_on S f" and fcd: "(\<And>x. x \<in> interior S - K \<Longrightarrow> f field_differentiable at x)" and z: "z \<in> interior S" and vpg: "valid_path \<gamma>" and pasz: "path_image \<gamma> \<subseteq> S - {z}" and loop: "pathfinish \<gamma> = pathstart \<gamma>" shows "((\<lambda>w. f w / (w - z)) has_contour_integral (2*pi * \<i> * winding_number \<gamma> z * f z)) \<gamma>" Informal statement is: If $f$ is a continuous function on a convex set $S$ and $f$ is holomorphic on the interior of $S$ except for a finite number of points, then the Cauchy integral formula holds for $f$ on $S$. |
module TaggedProb
import Semiring
import Data.Morphisms
import HetVect
import Marginalization
-- %default total
%access public export
mutual
data Prob : (unit : Type) -> (current : Type) -> Type where
Dist : ((a -> s) -> s) -> Prob s a
Fmap : (Vect xs -> b) -> ListProb s xs -> Prob s b
Join : Prob s (Prob s a) -> Prob s a
data ListProb : (unit : Type) -> (xs : List Type) -> Type where
Nil : ListProb s []
(::) : Prob s x -> ListProb s xs -> ListProb s (x::xs)
getProb : Prob s a -> (a -> s) -> s
getProb (Dist g) f = g f
getProb (Join x) f = getProb x (\a => getProb a f)
getProb (Fmap g ps) f = go g ps f where
go : (Vect xs -> b) -> ListProb s xs -> (b -> s) -> s
go f [] g = g (f [])
go f (p :: ps) g = getProb p (\x => go (f . (x::)) ps g)
map : (a -> b) -> Prob s a -> Prob s b
map f (Dist g) = Dist (\h => g (h . f))
map f (Fmap g x) = Fmap (f . g) x
map f (Join x) = Join (map (\p => map f p) x)
(<$>) : (a -> b) -> Prob s a -> Prob s b
(<$>) = map
pure : a -> Prob s a
pure x = Fmap (\([]) => x) Nil
return : a -> Prob s a
return = pure
(<*>) : Prob s (a -> b) -> Prob s a -> Prob s b
(<*>) (Dist f) xs = Dist (\k => f (\c => getProb xs (k . c)))
(<*>) (Fmap f ps) p = Fmap (\(v::vs) => f vs v) (p::ps)
(<*>) (Join x) xs = Join (map (<*> xs) x)
join : Prob s (Prob s a) -> Prob s a
join = Join
(>>=) : Prob s a -> (a -> Prob s b) -> Prob s b
(>>=) p f = Join (map f p)
dice : Prob Double Int
dice = Dist (\f => Semiring.sum ( map (\n => f n / 6.0) [1 .. 6]) )
probOf : (Eq a, Semiring s) => a -> Prob s a -> s
probOf x p = getProb p (\y => if x == y then one else zer)
dices : Prob Double Int
dices = do
x <- dice
y <- dice
return (x+y)
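The `Dist` constructor above encodes a distribution as its expectation functional, `(a -> s) -> s`. The same idea can be sketched outside Idris in a few lines (illustrative Python using exact rationals via `Fraction`; the names are mine, not from this module):

```python
from fractions import Fraction

# A distribution over `a` is represented, as in Dist, by its
# expectation functional: (a -> s) -> s.
def dice(f):
    # fair six-sided die: sum f(n) * 1/6
    return sum(f(n) * Fraction(1, 6) for n in range(1, 7))

def pure(x):
    return lambda f: f(x)

def bind(p, k):
    # (>>=): run p, feed each outcome to k, flatten
    return lambda f: p(lambda x: k(x)(f))

def prob_of(p, x):
    # probOf: weight matching outcomes with one, others with zer
    return p(lambda y: Fraction(1) if y == x else Fraction(0))

# dices: sum of two independent dice
dices = bind(dice, lambda x: bind(dice, lambda y: pure(x + y)))
```

This continuation representation makes `bind` just function composition under the integral, which is exactly what `getProb (Join x) f` computes.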
|
#!/usr/bin/env Rscript
# NOTE: fOptions uses non-standard definition of cost of carry: b = interestrate - carryrate, i.e. the adjusted drift κ in the geometric Brownian motion:
# dS_t = (κ - σ^2/2) S_t dt + σ S_t dW_t
library(fOptions)
df = Reduce(function(x,y) merge(x,y,all=TRUE),
list(data.frame(startprice=c(80,100,120)),
data.frame(strikeprice=c(80,100,120)),
data.frame(days=c(50,120)),
data.frame(interestrate=c(0.05,0.07)),
data.frame(carryrate=c(0.0,0.025)),
data.frame(sigma=c(0.05,0.1,0.5))
))
bs = df
for (type in c("c","p")) {
bs[type] = with(bs,
GBSOption(TypeFlag = type, S = startprice, X = strikeprice, Time = days/365,
r = interestrate, b = interestrate - carryrate, sigma = sigma)@price)
}
write.csv(bs,"bs.csv",row.names=FALSE)
crr = merge(df,data.frame(nsteps=c(20,100)),all=TRUE)
for (otype in c("ce","pe","ca","pa")) {
crr[otype] = with(crr,
sapply(mapply(CRRBinomialTreeOption,TypeFlag = otype, S = startprice, X = strikeprice, Time = days/365,
r = interestrate, b = interestrate - carryrate, sigma = sigma, n=nsteps),
function(x) x@price))
}
write.csv(crr,"crr.csv",row.names=FALSE)
tian = merge(df,data.frame(nsteps=c(20,100)),all=TRUE)
for (otype in c("ce","pe","ca","pa")) {
tian[otype] = with(tian,
sapply(mapply(TIANBinomialTreeOption,TypeFlag = otype, S = startprice, X = strikeprice, Time = days/365,
r = interestrate, b = interestrate - carryrate, sigma = sigma, n=nsteps),
function(x) x@price))
}
write.csv(tian,"tian.csv",row.names=FALSE)
jr = merge(df,data.frame(nsteps=c(20,100)),all=TRUE)
for (otype in c("ce","pe","ca","pa")) {
jr[otype] = with(jr,
sapply(mapply(JRBinomialTreeOption,TypeFlag = otype, S = startprice, X = strikeprice, Time = days/365,
r = interestrate, b = interestrate - carryrate, sigma = sigma, n=nsteps),
function(x) x@price))
}
write.csv(jr,"jr.csv",row.names=FALSE)
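The generalized Black-Scholes formula that `GBSOption` evaluates, with the cost-of-carry convention noted at the top (b = interestrate - carryrate), can be cross-checked with a small standalone sketch (Python, standard library only; `gbs_price` is an illustrative name, not an fOptions API):

```python
import math

def norm_cdf(z: float) -> float:
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gbs_price(flag, S, X, T, r, b, sigma):
    """Generalized Black-Scholes with cost of carry b.
    b = r gives plain Black-Scholes; b = r - q prices under carry rate q."""
    d1 = (math.log(S / X) + (b + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if flag == "c":
        return (S * math.exp((b - r) * T) * norm_cdf(d1)
                - X * math.exp(-r * T) * norm_cdf(d2))
    return (X * math.exp(-r * T) * norm_cdf(-d2)
            - S * math.exp((b - r) * T) * norm_cdf(-d1))
```

Put-call parity, c - p = S e^{(b-r)T} - X e^{-rT}, holds identically in this formula and is a cheap sanity check on any implementation.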
|
||| Projection based on Plain Merge.
|||
||| The POPL '08 Paper /Multi-Party Asynchronous Session Types/.
||| When projecting a =Choice= in which the projected role is not involved,
||| projection occurs when all choices are equal.
module Sessions.Projection.Plain
import Decidable.Equality
import Data.List
import Data.List.Elem
import Data.List1
import Sessions.Meta
import Sessions.Global
import Sessions.Global.Involved
import Sessions.Local
import Sessions.Local.Same
import Sessions.Local.Same.All
import public Sessions.Projection.Error
import public Sessions.Projection
%default total
public export
data Merge : (this : List1 (typeL, typeM, Local typeR typeL typeM rs g))
-> (that : Local typeR typeL typeM rs g)
-> Type
where
PMerge : (same : All ((cl,cm,c):::cs))
-> Merge ((cl,cm,c):::cs) c
public export
merge : DecEq typeR typeL typeM
=> (this : List1 (typeL, typeM, Local typeR typeL typeM rs g))
-> Maybe (that ** Merge this that)
merge ((l,(m,c)) ::: tail) with (allSame ((l,(m,c)) ::: tail))
merge ((l,(m,c)) ::: tail) | (Yes prf)
= Just (c ** PMerge prf)
merge ((l,(m,c)) ::: tail) | (No contra)
= Nothing
public export
project : DecEq typeR typeL typeM
=> {rs : List Ty}
-> {g : Ty}
-> (role : typeR)
-> (termG : Global typeR typeL typeM rs g)
-> Maybe (termL ** Project Merge role termG termL)
project = Projection.Term.project merge
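The `Merge` relation above succeeds only when every branch projects to the same local type. Stripped of the dependent types, the rule is just an all-equal check over the non-empty list of branches (illustrative Python, not part of this development):

```python
def plain_merge(branches):
    """Plain merge from the POPL '08 projection: a choice the projected
    role is not involved in projects only if all branches agree.
    Returns the common local type, or None when merging fails."""
    first, *rest = branches          # branches is non-empty, like List1
    if all(b == first for b in rest):
        return first
    return None
```

More permissive merge operators (e.g. full merging of branch labels) relax exactly this all-equal condition.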
-- [ EOF ]
|
Describe Users/chuckgirl here.
5-year member of the American Chuckwagon Association (ACWA). I'll even be participating in this year's HE PAID YOUR FEES cookoff in Hartford, SD in July with my awesome team, Chuckgurlz, the first all-female, queer chuckwagoner team.
I have a BA in English and an MS in Mechanical Engineering, and am taking some time off from a PhD in transportation. My work is on alternative fuels and preindustrial mass transportation.
I just moved back to Davis, after taking some time off from my PhD in Texas.
20110405 22:35:01 A belated Welcome to the Wiki! Thanks for the additions to the UC Davis English Department page. Users/TomGarberson
20110427 09:30:04 If you're looking for Murphy's, I had a pint at Sophia's the other day. But I'm not really a bar frequenter, so I'm not sure if any other places around have it on tap. Users/MeggoWaffle
|
#redirect Davis Computer
|
/* stable/stable_koutrouvelis.h
*
* Koutrouvelis method for parameter estimation of alpha-stable
* distributions. Based on [1] and MATLAB code available in [2].
*
* [1] Koutrouvelis, I. A. An Iterative Procedure for the Estimation
* of the Parameters of Stable Laws Communications in Statistics
* - Simulation and Computation, 1981, 10, 17-28
* [2] Mark S. Veillete. Alpha-Stable Distributions in MATLAB
* http://math.bu.edu/people/mveillet/research.html
*
* Copyright (C) 2013. Javier Royuela del Val
* Federico Simmross Wattenberg
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 3 of the License.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; If not, see <http://www.gnu.org/licenses/>.
*
*
* Javier Royuela del Val.
* E.T.S.I. Telecomunicación
* Universidad de Valladolid
* Paseo de Belén 15, 47002 Valladolid, Spain.
* [email protected]
*/
#include <stdio.h>
#include <math.h>
#include <float.h>
#include <gsl/gsl_complex.h>
#include <gsl/gsl_complex_math.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_fit.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_multifit.h>
#include "stable.h"
inline double sign(double x) {
return ((.0 < x) - (x < .0));
}
double var(const double * data, const int N) {
double acum = .0, acum2 = .0;
double v;
int i;
for(i=0;i<N;i++) {
acum += data[i];
acum2 += data[i]*data[i];
}
v = (1.0/(N-1.0)) * (acum2 - (1.0/N)*acum*acum);
return v;
}
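`var` uses the one-pass identity s² = (Σx² − (Σx)²/N) / (N−1). That algebra can be cross-checked in isolation (illustrative Python, not part of this C source); note the one-pass form can lose precision when the mean is large relative to the spread, where a two-pass computation is safer:

```python
import statistics

def one_pass_var(data):
    # Same algebra as var() above: (1/(N-1)) * (sum x^2 - (sum x)^2 / N)
    n = len(data)
    s = sum(data)
    s2 = sum(x * x for x in data)
    return (1.0 / (n - 1.0)) * (s2 - (1.0 / n) * s * s)
```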
int chooseK(double,int);
int chooseL(double,int);
void setcovYY(const double * t, int K, int N, double alpha, double beta, double gam, double **covYY);
void setcovZZ(const double * t, int K, int N, double alpha, double beta, double gam, double **covZZ);
double ecfRoot(const double * data, const int N);
void stable_samplecharfunc(const double* x, const unsigned int Nx,
const double* t, const unsigned int Nt, gsl_complex *z);
gsl_complex stable_samplecharfunc_point(const double* x,
const unsigned int N, double t);
int stable_fit_koutrouvelis(StableDist * dist, const double * data, const unsigned int N) {
int maxiter = 10;
double xTol = 0.01;
double * s = NULL;
gsl_complex * phi = NULL;
gsl_complex * phi2 = NULL;
int i;
double alpha = dist->alpha;
double beta = dist->beta;
double sigma = dist->sigma;
double mu1 = dist->mu_1;
double alphaold = alpha;
double mu1old = mu1;
double diff = .0;
double diffbest = DBL_MAX;
double diffmax = .0;
double alphabest = alpha;
double betabest = beta;
double sigmabest = sigma;
double mu1best = mu1;
double * t = NULL;
double * w = NULL;
double * y = NULL;
double * p = NULL;
double * t2 = NULL;
double * w2 = NULL;
double * y2 = NULL;
double * p2 = NULL;
gsl_matrix * X;
gsl_matrix * covmat;
gsl_vector * cvout;
gsl_vector * ydat;
gsl_vector * weights;
int stat;
double sumsq = .0;
double c0 = .0, c1 = .0;
double cov00 = .0, cov01 = .0, cov11 = .0;
double sigmanew = 0;
double **covYY;
double **covZZ;
int iter = 0;
int K,L;
int row;
double t_0=0;
double step_u=0;
double estshift;
gsl_multifit_linear_workspace * linws;
if (sigma == 0) {
sigma = sqrt(var(data,N));
stable_setparams(dist,alpha,beta,sigma,mu1,1);
}
s = (double *) malloc(N*sizeof(double));
for (i=0;i<N;i++) {
s[i] = (data[i]-mu1)/sigma;
}
covmat = gsl_matrix_alloc(2,2);
cvout = gsl_vector_alloc(2);
for (iter = 0; iter<maxiter; iter ++) {
if (iter <= 1) {
K = chooseK(alpha,N);
t = (double *) realloc(t,K*sizeof(double));
p = (double *) realloc(p,K*sizeof(double));
w = (double *) realloc(w,K*sizeof(double));
y = (double *) realloc(y,K*sizeof(double));
phi = (gsl_complex*) realloc(phi,K*sizeof(gsl_complex));
for(i=0;i<K;i++) {
t[i]=((double)i+1.0)*M_PI/25.0;
w[i]=log(fabs(t[i]));
}
}
if (iter==1) {
covYY = (double **)malloc(K*sizeof(double *));
covYY[0] = (double*)malloc(K*K*sizeof(double));
for (row=1;row<K;row++) {
covYY[row] = covYY[0] + row*K;
}
}
stable_samplecharfunc(s,N,t,K,phi);
for(i=0;i<K;i++) {
y[i] = log(-2.0*gsl_complex_logabs(phi[i]));
}
if (iter==0) { //Use ordinary least squares regression
// printf("ordls: ");fflush(stdout);
stat = gsl_fit_linear (w, 1, y, 1, K, &c0, &c1, &cov00, &cov01, &cov11, &sumsq);
alpha = c1;
sigmanew = pow(exp(c0)/2.0,1.0/alpha);
sigma = sigma*sigmanew;
// printf("stat = %d, alpha=%f, sigma=%f\n",stat,alpha,sigma);
}
else { //Use weighted least squares regression
// printf("wls: ");fflush(stdout);
setcovYY(t, K, N, alpha, beta, 1.0, covYY);
for (i=0;i<K;i++) {
p[i] = 1.0/covYY[i][i];
}
stat = gsl_fit_wlinear(w, 1, p, 1, y, 1, K, &c0, &c1, &cov00, &cov01, &cov11, &sumsq);
alpha = c1;
sigmanew = pow(exp(c0)/2.0,1.0/alpha);
sigma = sigma*sigmanew;
// printf("stat = %d, alpha=%f, sigma=%f\n",stat,alpha,sigma);
}
/*************** rescale data ******************/
// printf("rescale\n");fflush(stdout);
for (i=0;i<N;i++) {
s[i] = s[i]/sigmanew;
}
if (alpha<0) alpha = 0;
if (alpha>2) alpha = 2;
if (beta<-1) beta = -1;
if (beta> 1) beta = 1;
if (sigma<0) sigma = 0;
/********** refine beta and mu **************/
// printf("refbetamu\n");fflush(stdout);
if (iter <= 1) {
L = chooseL(alpha,N);
// printf("alpha = %f, N = %d, L = %d\n",alpha,N,L);fflush(stdout);
t_0 = ecfRoot(s,N);
step_u = M_PI/50;
if (t_0/L<step_u) step_u = t_0/L;
t2 = (double *) realloc(t2,L*sizeof(double));
p2 = (double *) realloc(p2,L*sizeof(double));
w2 = (double *) realloc(w2,L*sizeof(double));
y2 = (double *) realloc(y2,L*sizeof(double));
phi2 = (gsl_complex*)realloc(phi2,L*sizeof(gsl_complex));
for(i=0;i<L;i++) {
t2[i]=((double)i+1.0)*step_u;
w2[i]=sign(t2[i])*pow(fabs(t2[i]),alpha);
}
}
if (iter==0){
// printf("multimin alloc L = %d\n",L);fflush(stdout);
linws = gsl_multifit_linear_alloc (L,2);
}
else if (iter ==1) {
//Reserve covariance matrix mem
covZZ = (double **)malloc(L*sizeof(double *));
covZZ[0] = (double*)malloc(L*L*sizeof(double));
for (row=0;row<L;row++) {
covZZ[row] = covZZ[0] + row*L;
}
gsl_multifit_linear_free(linws);
linws = gsl_multifit_linear_alloc (L,2);
}
// printf("Second charfunc iter %d\n",iter);fflush(stdout);
stable_samplecharfunc(s,N,t2,L,phi2);
// printf("done\n");
for(i=0;i<L;i++) {
y2[i] = gsl_complex_arg(phi2[i]);
}
X = gsl_matrix_alloc(L,2);
ydat = gsl_vector_alloc(L);
for(row=0;row<L;row++) {
gsl_matrix_set(X,row,0,t2[row]);
gsl_matrix_set(X,row,1,w2[row]);
gsl_vector_set(ydat,row,y2[row]);
}
if (iter==0) {
// printf("multi ordls: ");
stat = gsl_multifit_linear(X,ydat,cvout,covmat,&sumsq,linws);
// printf("stat = %d, beta=%f\n",stat,beta);
}
else {
// printf("multi wls: ");
setcovZZ(t2, L, N, alpha, beta, 1.0, covZZ);
weights = gsl_vector_alloc(L);
for (i=0;i<L;i++) {
p2[i] = 1.0/covZZ[i][i];
gsl_vector_set(weights,i,p2[i]);
}
stat = gsl_multifit_wlinear(X,weights,ydat,cvout,covmat,&sumsq,linws);
// printf("stat = %d, beta=%f\n",stat,beta);
}
if (alpha>1.98 || alpha < 0.05 || fabs(alpha-1.0)<0.05 || isnan(alpha)) {
beta = 0.0;
} else {
beta = gsl_vector_get(cvout,1)/tan(alpha*M_PI*0.5);
}
estshift = gsl_vector_get(cvout,0);
mu1 = mu1 + sigma * estshift;
// printf("rem shift\n");
/*** remove estimated shift ***/
for (i=0;i<N;i++) {
s[i] = s[i] - estshift;
}
if (isnan(alpha) || isnan(beta) || isnan(sigma) || isnan(mu1))
{
iter++;
break;
}
// printf("check conv");
/** check for convergence or blow up**/
diff = pow(alpha - alphaold,2) + pow(mu1 - mu1old,2);
if (iter <= 2 && diff > diffmax) {
diffmax = diff;
}
// else if (diff > 2.0*diffmax) {
// #ifdef DEBUG
//   printf("blow up on iter %d\n",iter);
// #endif
//   break;
// }
// printf("check best\n");
if (fabs(diff) < diffbest) {
alphabest = alpha;
betabest = beta;
sigmabest = sigma;
mu1best = mu1;
diffbest = diff;
if (diff < xTol) {
gsl_matrix_free(X);
gsl_vector_free(ydat);
if (iter>0) gsl_vector_free(weights);
iter++;
break;
}
}
alphaold = alpha;
mu1old = mu1;
gsl_matrix_free(X);
gsl_vector_free(ydat);
if (iter>0) {
gsl_vector_free(weights);
}
}
if (maxiter > 0 && iter >= 0) {
alpha = alphabest;
beta = betabest;
sigma = sigmabest;
mu1 = mu1best;
}
if (alpha<0) alpha = 0;
if (alpha>2) alpha = 2;
if (beta<-1) beta = -1;
if (beta> 1) beta = 1;
if (sigma<0) sigma = 0;
stable_setparams(dist,alpha,beta,sigma,mu1,1);
free(t);
free(p);
free(w);
free(y);
free(phi);
free(t2);
free(p2);
free(w2);
free(y2);
free(phi2);
gsl_matrix_free(covmat);
gsl_vector_free(cvout);
free(s);
if (iter>1) {
free(covYY[0]);
free(covYY);
free(covZZ[0]);
free(covZZ);
}
gsl_multifit_linear_free(linws);
//printf(" iter %d diff %f a %f b %f s %f m %f\n",iter,diff,alpha,beta,sigma,dist->mu_0);
return 0;
} // end stable_fit_koutrouvelis
int chooseK(double alpha, int N) {
double a[]={1.9,1.5,1.3,1.1,.9,.7,.5,.3};
int n[] = {200, 800, 1600};
double Kmat[8][3]={{ 9, 9, 9},
{11,11,11},
{22,16,14},
{24,18,15},
{28,22,18},
{30,24,20},
{86,68,56},
{134,124,118}};
int i,j;
double xi;
double xj;
double Kp;
if (alpha < 0.3) alpha = 0.3;
if (alpha > 1.9) alpha = 1.9;
if (N<200) N=200;
if (N>1600) N=1600;
i=1;
j=1;
while (i<7 && a[i]>=alpha) {
++i;
}
while (j<2 && n[j]<=N) {
++j;
}
xi = 1.0 - (alpha-a[i])/(a[i-1]-a[i]);
xj = 1.0 - (double)(n[j]-N)/(double)(n[j]-n[j-1]);
// printf("i,j = %d, %d; xi,xj = %f, %f\n",i,j,xi,xj);
/* bilinear interpolation */
Kp = xj * (xi * Kmat[i][ j ] + (1.0-xi) * Kmat[i-1][ j ] ) +
(1.0-xj) * (xi * Kmat[i][j-1] + (1.0-xi) * Kmat[i-1][j-1] );
Kp = floor(Kp+.5);
return (int)Kp;
}
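The `/* bilinear interpolation */` step shared by chooseK and chooseL is worth isolating. A minimal, self-contained sketch follows; the helper name `bilerp` is illustrative only and not part of this file's API:

```c
/* Bilinear interpolation over one grid cell, mirroring the Kmat/Lmat
 * expressions in chooseK/chooseL: xi blends rows i-1 and i, xj blends
 * columns j-1 and j. v00 is the value at (i-1, j-1), v11 at (i, j). */
double bilerp(double v00, double v01, double v10, double v11,
              double xi, double xj)
{
    return xj * (xi * v11 + (1.0 - xi) * v01) +
           (1.0 - xj) * (xi * v10 + (1.0 - xi) * v00);
}
```

With xi = xj = 0 this reduces to the (i-1, j-1) corner and with xi = xj = 1 to the (i, j) corner, matching the limiting cases of the table lookup above.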
int chooseL(double alpha, int N) {
double a[]={1.9,1.5,1.1,.9,.7,.5,.3};
int n[] = {200, 800, 1600};
double Lmat[7][3]={{ 9,10,11},
{12,14,15},
{16,18,17},
{14,14,14},
{24,16,16},
{40,38,36},
{70,68,66}};
int i,j;
double xi, xj;
double Lp;
if (alpha < 0.3) alpha = 0.3;
if (alpha > 1.9) alpha = 1.9;
if (N<200) N=200;
if (N>1600) N=1600;
i=1;
j=1;
while (i<6 && a[i]>=alpha) {
++i;
}
while (j<2 && n[j]<=N) {
++j;
}
xi = 1.0 - (alpha-a[i])/(a[i-1]-a[i]);
xj = 1.0 - (double)(n[j]-N)/(double)(n[j]-n[j-1]);
// printf("i,j = %d, %d; xi,xj = %f, %f\n",i,j,xi,xj);
/* bilinear interpolation */
Lp = xj * (xi * Lmat[i][ j ] + (1.0-xi) * Lmat[i-1][ j ] ) +
(1.0-xj) * (xi * Lmat[i][j-1] + (1.0-xi) * Lmat[i-1][j-1] );
Lp = floor(Lp + .5);
return (int)Lp;
}
void setcovYY(const double * t, int K, int N, double alpha, double beta, double gam, double **covYY) {
double w = tan(alpha*M_PI_2);
double calpha = pow(gam,alpha);
int j = 0, k = 0;
double * talpha = (double *)malloc(K*sizeof(double));
for (j=0;j<K;j++) {
talpha[j]=pow(fabs(t[j]),alpha);
}
double tj,tja,tk,tka;
double tjmtka,tjptka,stj,stk;
double A,B,D,E;//F,G,H;
for (j=0;j<K;j++) {
tj = t[j];
tja = talpha[j];
stj = sign(tj);
for (k=0;k<K;k++) {
tk = t[k];
tka = talpha[k];
stk = sign(tk);
tjmtka = pow(fabs(tj-tk),alpha);
tjptka = pow(fabs(tj+tk),alpha);
A = calpha * (tja + tka - tjmtka);
B = calpha * beta * (-tja * stj * w + tka * stk * w + tjmtka * sign(tj - tk) * w );
D = calpha * (tja + tka - tjptka);
E = calpha * beta * ( tja * stj * w + tka * stk * w - tjptka * sign(tj + tk) * w ); /* tka*stk, not tja*stk: term is symmetric in (j,k), matching B above */
// F = calpha * (tja + tka);
// G = -calpha * tjmtka;
// H = -calpha * tjptka;
// printf("[j][k] = [%d][%d]",j,k);fflush(stdout);
covYY[j][k] = (exp(A)*cos(B)+exp(D)*cos(E)-2.0) / ( 2.0 * N * pow(gam,2.0*alpha)*pow(fabs(tj*tk),alpha) );
// covZZ[j][k] = exp(F)*( exp(G)*cos(B)-exp(H)*cos(E) ) /(2.0*N);
}
}
free(talpha);
return;
}
void setcovZZ(const double * t, int K, int N, double alpha, double beta, double gam, double **covZZ) {
double w = tan(alpha*M_PI_2);
double calpha = pow(gam,alpha);
int j = 0, k = 0;
double * talpha = (double *)malloc(K*sizeof(double));
for (j=0;j<K;j++) {
talpha[j]=pow(fabs(t[j]),alpha);
}
double tj,tja,tk,tka;
double tjmtka,tjptka,stj,stk;
double B,E,F,G,H;
for (j=0;j<K;j++) {
tj = t[j];
tja = talpha[j];
stj = sign(tj);
for (k=0;k<K;k++) {
tk = t[k];
tka = talpha[k];
stk = sign(tk);
tjmtka = pow(fabs(tj-tk),alpha);
tjptka = pow(fabs(tj+tk),alpha);
// A = calpha * (tja + tka - tjmtka);
B = calpha * beta * (-tja * stj * w + tka * stk * w + tjmtka * sign(tj - tk) * w );
// D = calpha * (tja + tka - tjptka);
E = calpha * beta * ( tja * stj * w + tka * stk * w - tjptka * sign(tj + tk) * w ); /* tka*stk, not tja*stk: term is symmetric in (j,k), matching B above */
F = calpha * (tja + tka);
G = -calpha * tjmtka;
H = -calpha * tjptka;
// covYY[j][k] = (exp(A)*cos(B)+exp(D)*cos(E)-2.0) / ( 2.0 * N * pow(gam,2.0*alpha)*pow(fabs(tj*tk),alpha) );
covZZ[j][k] = exp(F)*( exp(G)*cos(B)-exp(H)*cos(E) ) /(2.0*N);
}
}
free(talpha);
return;
}
double ecfRoot(const double * data, int N) {
double m=0;
int i;
for(i=0;i<N;i++) {
m += fabs(data[i]);
}
m = m/N;
double t = 0;
gsl_complex val = stable_samplecharfunc_point(data, N, t);
int iter=0;
while (iter<10000 && fabs(GSL_REAL(val))>1e-3) {
t += GSL_REAL(val)/m;
val = stable_samplecharfunc_point(data, N, t);
iter++; /* without this increment the 10000-iteration cap was never enforced */
}
return t;
}
gsl_complex stable_samplecharfunc_point(const double* x,
const unsigned int N, double t)
{
unsigned int i;
double zr=0.0,zi=0.0;
gsl_complex z;
for(i=0;i<N;i++)
{
zr+=cos(t*x[i]);
zi+=sin(t*x[i]);
}
GSL_SET_COMPLEX(&z,zr/N,zi/N);
return z;
}
void stable_samplecharfunc(const double* x, const unsigned int Nx,
const double* t, const unsigned int Nt, gsl_complex *z)
{
unsigned int ix,it;
double zr=0.0,zi=0.0,w;
for(it=0;it<Nt;it++)
{
w = t[it];
zr = .0;
zi = .0;
for(ix=0;ix<Nx;ix++)
{
zr+=cos(w*x[ix]);
zi+=sin(w*x[ix]);
}
// if (isnan(zr*zi) || isinf(zr*zi)) {
// printf("Bad cfuncpoint: z[%d] = z(%f) = %f +i%f",it,w,zr,zi);
// }
GSL_SET_COMPLEX(&z[it],zr/Nx,zi/Nx);
}
return;
}
|
State Before: ι : Type u
γ : Type w
β : ι → Type v
β₁ : ι → Type v₁
β₂ : ι → Type v₂
dec : DecidableEq ι
κ : Type u_1
inst✝¹ : DecidableEq κ
inst✝ : (i : ι) → Zero (β i)
h : κ → ι
hh : Function.Injective h
k : κ
x : β (h k)
⊢ comapDomain h hh (single (h k) x) = single k x State After: case h
ι : Type u
γ : Type w
β : ι → Type v
β₁ : ι → Type v₁
β₂ : ι → Type v₂
dec : DecidableEq ι
κ : Type u_1
inst✝¹ : DecidableEq κ
inst✝ : (i : ι) → Zero (β i)
h : κ → ι
hh : Function.Injective h
k : κ
x : β (h k)
i : κ
⊢ ↑(comapDomain h hh (single (h k) x)) i = ↑(single k x) i Tactic: ext i State Before: case h
ι : Type u
γ : Type w
β : ι → Type v
β₁ : ι → Type v₁
β₂ : ι → Type v₂
dec : DecidableEq ι
κ : Type u_1
inst✝¹ : DecidableEq κ
inst✝ : (i : ι) → Zero (β i)
h : κ → ι
hh : Function.Injective h
k : κ
x : β (h k)
i : κ
⊢ ↑(comapDomain h hh (single (h k) x)) i = ↑(single k x) i State After: case h
ι : Type u
γ : Type w
β : ι → Type v
β₁ : ι → Type v₁
β₂ : ι → Type v₂
dec : DecidableEq ι
κ : Type u_1
inst✝¹ : DecidableEq κ
inst✝ : (i : ι) → Zero (β i)
h : κ → ι
hh : Function.Injective h
k : κ
x : β (h k)
i : κ
⊢ ↑(single (h k) x) (h i) = ↑(single k x) i Tactic: rw [comapDomain_apply] State Before: case h
ι : Type u
γ : Type w
β : ι → Type v
β₁ : ι → Type v₁
β₂ : ι → Type v₂
dec : DecidableEq ι
κ : Type u_1
inst✝¹ : DecidableEq κ
inst✝ : (i : ι) → Zero (β i)
h : κ → ι
hh : Function.Injective h
k : κ
x : β (h k)
i : κ
⊢ ↑(single (h k) x) (h i) = ↑(single k x) i State After: case h.inl
ι : Type u
γ : Type w
β : ι → Type v
β₁ : ι → Type v₁
β₂ : ι → Type v₂
dec : DecidableEq ι
κ : Type u_1
inst✝¹ : DecidableEq κ
inst✝ : (i : ι) → Zero (β i)
h : κ → ι
hh : Function.Injective h
i : κ
x : β (h i)
⊢ ↑(single (h i) x) (h i) = ↑(single i x) i
case h.inr
ι : Type u
γ : Type w
β : ι → Type v
β₁ : ι → Type v₁
β₂ : ι → Type v₂
dec : DecidableEq ι
κ : Type u_1
inst✝¹ : DecidableEq κ
inst✝ : (i : ι) → Zero (β i)
h : κ → ι
hh : Function.Injective h
k : κ
x : β (h k)
i : κ
hik : i ≠ k
⊢ ↑(single (h k) x) (h i) = ↑(single k x) i Tactic: obtain rfl | hik := Decidable.eq_or_ne i k State Before: case h.inl
ι : Type u
γ : Type w
β : ι → Type v
β₁ : ι → Type v₁
β₂ : ι → Type v₂
dec : DecidableEq ι
κ : Type u_1
inst✝¹ : DecidableEq κ
inst✝ : (i : ι) → Zero (β i)
h : κ → ι
hh : Function.Injective h
i : κ
x : β (h i)
⊢ ↑(single (h i) x) (h i) = ↑(single i x) i State After: no goals Tactic: rw [single_eq_same, single_eq_same] State Before: case h.inr
ι : Type u
γ : Type w
β : ι → Type v
β₁ : ι → Type v₁
β₂ : ι → Type v₂
dec : DecidableEq ι
κ : Type u_1
inst✝¹ : DecidableEq κ
inst✝ : (i : ι) → Zero (β i)
h : κ → ι
hh : Function.Injective h
k : κ
x : β (h k)
i : κ
hik : i ≠ k
⊢ ↑(single (h k) x) (h i) = ↑(single k x) i State After: no goals Tactic: rw [single_eq_of_ne hik.symm, single_eq_of_ne (hh.ne hik.symm)] |
State Before: α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x✝ : α
l l' : List α
h : l ~r l'
hn : Nodup l
x : α
hx : x ∈ l
⊢ next l x hx = next l' x (_ : x ∈ l') State After: case intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l l' : List α
h : l ~r l'
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
⊢ next l (nthLe l k hk) hx = next l' (nthLe l k hk) (_ : nthLe l k hk ∈ l') Tactic: obtain ⟨k, hk, rfl⟩ := nthLe_of_mem hx State Before: case intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l l' : List α
h : l ~r l'
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
⊢ next l (nthLe l k hk) hx = next l' (nthLe l k hk) (_ : nthLe l k hk ∈ l') State After: case intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l : List α
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
n : ℕ
h : l ~r rotate l n
⊢ next l (nthLe l k hk) hx = next (rotate l n) (nthLe l k hk) (_ : nthLe l k hk ∈ rotate l n) Tactic: obtain ⟨n, rfl⟩ := id h State Before: case intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l : List α
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
n : ℕ
h : l ~r rotate l n
⊢ next l (nthLe l k hk) hx = next (rotate l n) (nthLe l k hk) (_ : nthLe l k hk ∈ rotate l n) State After: case intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l : List α
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
n : ℕ
h : l ~r rotate l n
⊢ nthLe l ((k + 1) % length l) (_ : (k + 1) % length l < length l) =
next (rotate l n) (nthLe l k hk) (_ : nthLe l k hk ∈ rotate l n) Tactic: rw [next_nthLe _ hn] State Before: case intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l : List α
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
n : ℕ
h : l ~r rotate l n
⊢ nthLe l ((k + 1) % length l) (_ : (k + 1) % length l < length l) =
next (rotate l n) (nthLe l k hk) (_ : nthLe l k hk ∈ rotate l n) State After: case intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l : List α
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
n : ℕ
h : l ~r rotate l n
⊢ nthLe l ((k + 1) % length l) (_ : (k + 1) % length l < length l) =
next (rotate l n)
(nthLe (rotate l n) ((length l - n % length l + k) % length l)
(_ : (length l - n % length l + k) % length l < length (rotate l n)))
(_ :
nthLe (rotate l n) ((length l - n % length l + k) % length l)
(_ : (length l - n % length l + k) % length l < length (rotate l n)) ∈
rotate l n) Tactic: simp_rw [← nthLe_rotate' _ n k] State Before: case intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l : List α
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
n : ℕ
h : l ~r rotate l n
⊢ nthLe l ((k + 1) % length l) (_ : (k + 1) % length l < length l) =
next (rotate l n)
(nthLe (rotate l n) ((length l - n % length l + k) % length l)
(_ : (length l - n % length l + k) % length l < length (rotate l n)))
(_ :
nthLe (rotate l n) ((length l - n % length l + k) % length l)
(_ : (length l - n % length l + k) % length l < length (rotate l n)) ∈
rotate l n) State After: case intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l : List α
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
n : ℕ
h : l ~r rotate l n
⊢ nthLe (rotate l n) ((length l - n % length l + (k + 1) % length l) % length l)
(_ : (length l - n % length l + (k + 1) % length l) % length l < length (rotate l n)) =
nthLe (rotate l n) (((length l - n % length l + k) % length l + 1) % length (rotate l n))
(_ : ((length l - n % length l + k) % length l + 1) % length (rotate l n) < length (rotate l n)) Tactic: rw [next_nthLe _ (h.nodup_iff.mp hn), ← nthLe_rotate' _ n] State Before: case intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
l✝ : List α
x : α
l : List α
hn : Nodup l
k : ℕ
hk : k < length l
hx : nthLe l k hk ∈ l
n : ℕ
h : l ~r rotate l n
⊢ nthLe (rotate l n) ((length l - n % length l + (k + 1) % length l) % length l)
(_ : (length l - n % length l + (k + 1) % length l) % length l < length (rotate l n)) =
nthLe (rotate l n) (((length l - n % length l + k) % length l + 1) % length (rotate l n))
(_ : ((length l - n % length l + k) % length l + 1) % length (rotate l n) < length (rotate l n)) State After: no goals Tactic: simp [add_assoc] |
[STATEMENT]
lemma mono_let: "(\<And>x. le (f x) (f' x)) \<Longrightarrow> le (Let x f) (Let x f')"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<And>x. le (f x) (f' x)) \<Longrightarrow> le (Let x f) (Let x f')
[PROOF STEP]
by auto |
% Profiler extension template.
%
% Profiler extension for [algorithm]
%
% This file is part of the Kernel Adaptive Filtering Toolbox for Matlab.
% https://github.com/steven2358/kafbox/
classdef kafbox_template_profiler < kafbox_template
properties (GetAccess = 'public', SetAccess = 'private')
elapsed = 0; % elapsed time
end
methods
function kaf = kafbox_template_profiler(parameters) % constructor
if nargin<1, parameters = struct(); end
kaf = kaf@kafbox_template(parameters);
end
function flops = lastflops(kaf) % flops for last iteration
% numbers of operations
m1 = 1;
m2 = 1;
m3 = 1;
m4 = 1;
floptions = struct(...
'sum', m1, ...
'mult', m2, ...
'div', m3, ...
sprintf('%s_kernel',kaf.kerneltype), [m4,1,size(kaf.dict,2)]);
flops = kflops(floptions);
end
%% flops breakdown
% [space for remarks on number of operations used above]
%%
function train_profiled(kaf,x,y)
t1 = tic;
kaf.train(x,y);
t2 = toc(t1);
kaf.elapsed = kaf.elapsed + t2;
end
function bytes = lastbytes(kaf) % bytes used in last iteration
m = size(kaf.dict,1);
m2 = 1;
bytes = 8*(m2 + m*size(kaf.dict,2)); % 8 bytes per double-precision value
% [list variables]
end
end
end
|
theory flash4Rev imports flashPub
begin
section{*Main definitions*}
lemma NI_FAckVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_FAck ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma NI_InvVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Inv iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_InvAck_1VsInv4:
(*Rule2VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iRule2 \<le> N" and a3:"iInv1 \<le> N" and a4:"iInv2 \<le> N" and a5:"iInv1~=iInv2 " and a6:"iRule1~=iRule2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_InvAck_1 iRule1 iRule2 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3 a4 a5 a6,auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma NI_InvAck_1_HomeVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_InvAck_1_Home iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
by (cut_tac a1 a2 a3 a4 , auto)
lemma NI_InvAck_2VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_InvAck_2 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
by (cut_tac a1 a2 a3 a4 , auto)
lemma NI_Local_GetX_GetXVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_GetX iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_Nak1VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_Nak1 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_Nak2VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_Nak2 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_Nak3VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_Nak3 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX1VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX1 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX2VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX2 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX3VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX3 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX4VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX4 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX5VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX5 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX6VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX6 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX7VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX7 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX8VsInv4:
(*Rule2VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iRule2 \<le> N" and a3:"iInv1 \<le> N" and a4:"iInv2 \<le> N" and a5:"iInv1~=iInv2 " and a6:"iRule1~=iRule2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX8 N iRule1 iRule2 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1\<and>iRule2=iInv2) \<or>(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>(iRule1=iInv2\<and>iRule2=iInv1) \<or>(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 a5 a6 , auto)
moreover
{assume b1:"(iRule1=iInv1\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>iRule2=iInv1)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 a5 a6 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 a5 a6 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P1 s \<or> ?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
(* Each lemma below discharges one (rule, inv4) proof obligation and follows
   the same generated pattern: case-split on how the rule's indices relate to
   the invariant indices iInv1, iInv2, then close every case by one of the
   three disjuncts of invHoldForRule', informally: ?P1 (the invariant still
   holds after the rule), ?P2 (the rule does not assign the invariant's
   variables, discharged via forallVars1), or ?P3 (an auxiliary witness
   formula supplied through rule_tac/exI). *)
lemma NI_Local_GetX_PutX8_homeVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX8_home N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX9VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX9 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX10VsInv4:
(*Rule2VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iRule2 \<le> N" and a3:"iInv1 \<le> N" and a4:"iInv2 \<le> N" and a5:"iInv1~=iInv2 " and a6:"iRule1~=iRule2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX10 N iRule1 iRule2 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1\<and>iRule2=iInv2) \<or>(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>(iRule1=iInv2\<and>iRule2=iInv1) \<or>(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 a5 a6 , auto)
moreover
{assume b1:"(iRule1=iInv1\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>iRule2=iInv1)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 a5 a6 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 a5 a6 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P1 s \<or> ?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX10_homeVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX10_home N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_Dirty'') ) ( Const false )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_GetX_PutX11VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_GetX_PutX11 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ( eqn ( IVar ( Global ''Dir_local'') ) ( Const true )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_Get_GetVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_Get_Get iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
by (cut_tac a1 a2 a3 a4 , auto)
lemma NI_Local_Get_Nak1VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_Get_Nak1 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_Get_Nak2VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_Get_Nak2 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_Get_Nak3VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_Get_Nak3 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_Get_Put1VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_Get_Put1 N iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_Get_Put2VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_Get_Put2 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_Get_Put3VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_Get_Put3 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Local_PutVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_Put ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma NI_Local_PutXAcksDoneVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Local_PutXAcksDone ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma NI_NakVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Nak iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Nak_ClearVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Nak_Clear ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma NI_Nak_HomeVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Nak_Home ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma NI_Remote_GetX_NakVsInv4:
(*Rule2VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iRule2 \<le> N" and a3:"iInv1 \<le> N" and a4:"iInv2 \<le> N" and a5:"iInv1~=iInv2 " and a6:"iRule1~=iRule2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_GetX_Nak iRule1 iRule2 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1\<and>iRule2=iInv2) \<or>(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>(iRule1=iInv2\<and>iRule2=iInv1) \<or>(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 a5 a6 , auto)
moreover
{assume b1:"(iRule1=iInv1\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>iRule2=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P1 s \<or> ?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Remote_GetX_Nak_HomeVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_GetX_Nak_Home iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
by (cut_tac a1 a2 a3 a4 , auto)
lemma NI_Remote_GetX_PutXVsInv4:
(*Rule2VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iRule2 \<le> N" and a3:"iInv1 \<le> N" and a4:"iInv2 \<le> N" and a5:"iInv1~=iInv2 " and a6:"iRule1~=iRule2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_GetX_PutX iRule1 iRule2 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1\<and>iRule2=iInv2) \<or>(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>(iRule1=iInv2\<and>iRule2=iInv1) \<or>(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 a5 a6 , auto)
moreover
{assume b1:"(iRule1=iInv1\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>iRule2=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 a5 a6 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''CacheState'' iRule2) ) ( Const CACHE_E )) ( eqn ( IVar ( Para ''CacheState'' iInv1) ) ( Const CACHE_E )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P1 s \<or> ?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Remote_GetX_PutX_HomeVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_GetX_PutX_Home iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Remote_Get_Nak1VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_Get_Nak1 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
by (cut_tac a1 a2 a3 a4 , auto)
lemma NI_Remote_Get_Nak2VsInv4:
(*Rule2VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iRule2 \<le> N" and a3:"iInv1 \<le> N" and a4:"iInv2 \<le> N" and a5:"iInv1~=iInv2 " and a6:"iRule1~=iRule2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_Get_Nak2 iRule1 iRule2 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1\<and>iRule2=iInv2) \<or>(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>(iRule1=iInv2\<and>iRule2=iInv1) \<or>(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 a5 a6 , auto)
moreover
{assume b1:"(iRule1=iInv1\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>iRule2=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P1 s \<or> ?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Remote_Get_Put1VsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_Get_Put1 iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Remote_Get_Put2VsInv4:
(*Rule2VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iRule2 \<le> N" and a3:"iInv1 \<le> N" and a4:"iInv2 \<le> N" and a5:"iInv1~=iInv2 " and a6:"iRule1~=iRule2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_Get_Put2 iRule1 iRule2 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1\<and>iRule2=iInv2) \<or>(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>(iRule1=iInv2\<and>iRule2=iInv1) \<or>(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 a5 a6 , auto)
moreover
{assume b1:"(iRule1=iInv1\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv1\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>iRule2=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>iRule2=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 )\<and>(iRule2~=iInv1 \<and>iRule2~=iInv2 ))"
have "?P1 s \<or> ?P2 s"
apply(cut_tac a1 a2 a3 a4 a5 a6 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Remote_PutVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_Put iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
(* further Boolean case split on InvMarked[iInv1]; named distinctly so it
   does not shadow the outer allCases fact *)
have invMarkedCases:"formEval ( eqn ( IVar ( Para ''InvMarked'' iInv1) ) ( Const true )) s \<or>formEval (neg ( eqn ( IVar ( Para ''InvMarked'' iInv1) ) ( Const true )) ) s "
by auto
moreover
{assume c1:"formEval ( eqn ( IVar ( Para ''InvMarked'' iInv1) ) ( Const true )) s"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 c1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume c1:"formEval (neg ( eqn ( IVar ( Para ''InvMarked'' iInv1) ) ( Const true )) ) s"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 c1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately have "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_Remote_PutXVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Remote_PutX iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P3 s"
apply( cut_tac a1 a2 a3 a4 b1 , simp)
apply(rule_tac x=" (neg ( andForm ( eqn ( IVar ( Para ''UniMsg_Cmd'' iInv2) ) ( Const UNI_PutX )) ( eqn ( IVar ( Para ''UniMsg_Cmd'' iInv1) ) ( Const UNI_PutX )) ) ) " in exI,auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma NI_ReplaceVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Replace iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
by (cut_tac a1 a2 a3 a4 , auto)
lemma NI_ReplaceHomeVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_ReplaceHome ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma NI_ReplaceHomeShrVldVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_ReplaceHomeShrVld ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma NI_ReplaceShrVldVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_ReplaceShrVld iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
by (cut_tac a1 a2 a3 a4 , auto)
lemma NI_ShWbVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_ShWb N ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma NI_WbVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (NI_Wb ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_GetX_GetX1VsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_GetX_GetX1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_GetX_GetX2VsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_GetX_GetX2 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_GetX_PutX1VsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_GetX_PutX1 N ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_GetX_PutX2VsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_GetX_PutX2 N ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_GetX_PutX3VsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_GetX_PutX3 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_GetX_PutX4VsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_GetX_PutX4 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_Get_GetVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_Get_Get ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_Get_PutVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_Get_Put ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_PutXVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_PutX ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Local_ReplaceVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Local_Replace ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
lemma PI_Remote_GetVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Remote_Get iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma PI_Remote_GetXVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Remote_GetX iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma PI_Remote_PutXVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Remote_PutX iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have allCases:"(iRule1=iInv1) \<or>(iRule1=iInv2) \<or>((iRule1~=iInv1 \<and>iRule1~=iInv2 )) "
by( cut_tac a1 a2 a3 a4 , auto)
moreover
{assume b1:"(iRule1=iInv1)"
have "?P1 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"(iRule1=iInv2)"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
moreover
{assume b1:"((iRule1~=iInv1 \<and>iRule1~=iInv2 ))"
have "?P2 s"
apply(cut_tac a1 a2 a3 a4 b1 , auto intro!:forallVars1 simp add :invHoldForRule2'_def varsOfVar_def)
done
then have "?P1 s\<or> ?P2 s \<or> ?P3 s"
by blast
}
ultimately show "?P1 s\<or> ?P2 s\<or> ?P3 s"
by metis
qed
lemma PI_Remote_ReplaceVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (PI_Remote_Replace iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
by (cut_tac a1 a2 a3 a4 , auto)
lemma StoreVsInv4:
(*Rule1VsPInv2*)
assumes a1:"iRule1 \<le> N" and a2:"iInv1 \<le> N" and a3:"iInv2 \<le> N" and a4:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (Store iRule1 ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
by (cut_tac a1 a2 a3 a4 , auto)
lemma StoreHomeVsInv4:
(*Rule0VsPInv2*)
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 "
shows "invHoldForRule' s (inv4 iInv1 iInv2 ) (StoreHome ) (invariants N)" (is " ?P1 s\<or>?P2 s\<or>?P3 s")
proof -
have "?P2 s"
by (cut_tac a1 a2 a3, auto )
then show "?P1 s\<or>?P2 s\<or>?P3 s"
by auto
qed
end
|
data Vect : Nat -> Type -> Type where
Nil : Vect Z a
(::) : a -> Vect k a -> Vect (S k) a
(++) : Vect n a -> Vect m a -> Vect (n + m) a
(++) Nil ys = ys
(++) (x :: xs) ys = x :: xs ++ ys
data Fin : Nat -> Type where
FZ : Fin (S k)
FS : Fin k -> Fin (S k)
index : Fin n -> Vect n a -> a
index FZ (x :: xs) = x
index (FS k) (x :: xs) = index k xs
filter : (a -> Bool) -> Vect n a -> (p ** Vect p a)
filter p Nil = (_ ** [])
filter p (x :: xs)
= case filter p xs of
(_ ** xs') => if p x then (_ ** x :: xs')
else (_ ** xs')
cons : t -> (x : Nat ** Vect x t) -> (x : Nat ** Vect x t)
cons val xs
= record { fst = S (fst xs),
snd = (val :: snd xs) } xs
cons' : t -> (x : Nat ** Vect x t) -> (x : Nat ** Vect x t)
cons' val
= record { fst $= S,
snd $= (val ::) }
Show a => Show (Vect n a) where
show xs = "[" ++ show' xs ++ "]" where
show' : forall n . Vect n a -> String
show' Nil = ""
show' (x :: Nil) = show x
show' (x :: xs) = show x ++ ", " ++ show' xs
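A small usage sketch of the definitions above (the names `testVec`, `third`, and `evens` are illustrative and not part of the original file):

```idris
-- Hypothetical usage of Vect, index and filter defined above.
testVec : Vect 3 Int
testVec = 1 :: 2 :: 3 :: Nil

-- index is total: FS (FS FZ) : Fin 3 is statically known to be in range.
third : Int
third = index (FS (FS FZ)) testVec

-- filter returns a dependent pair, since the result length
-- is only known at runtime.
evens : (p ** Vect p Int)
evens = filter (\x => mod x 2 == 0) testVec
```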
|
In 1997 Ronald Phillips published its first annual catalogue, attracting buyers from outside London as well as overseas. Since that first publication, the annual catalogue, which includes some of the best and rarest pieces on the market, has proven a great success, not only as a selling tool but also as a reference book, finding a welcome home in many furniture collectors' libraries. Hard-bound, extensively researched, and beautifully illustrated, it represents a colourful cross-section of what is available at Ronald Phillips. |
# Copyright (c) 2018-2021, Carnegie Mellon University
# See LICENSE for details
same_params := (pat1, targ, pat2) -> pat1.target(targ).cond(e->e.params = pat2.val.params);
one_by_one := x -> x.range() = 1 and x.domain() = 1;
# does not work with .cond (!!!) NOTE
Enum(100, x->@(x), @z, @n, @m, @k, @l, @j, @g, @phi, @r, @s, @ss, @b, @bb);
Enum(200, x->@(x), @M, @N, @R, @S, @W, @F, @G, @D);
IsIndexMapping := x -> IsBound(x._perm) and x._perm;
# L^mn_n
ltensor_flip := function(match, mn, n)
local m, i, pos, dom, ldom, lran, ll, rr, res;
dom := Product(match, x->x.domain());
ldom := 1; lran := 1; pos := 1;
while lran<>n do
lran:=lran*range(match[pos]);
ldom:=ldom*domain(match[pos]);
pos:=pos+1;
od;
ll := match{[1..pos-1]};
rr := match{[pos..Length(match)]};
res := fTensor(Concatenation(rr, ll));
if ldom=1 or ldom=dom then
return res;
else
return fCompose(res, L(dom, ldom));
fi;
end;
# for given diagDirsum looking for child which relative domain start <= offset and relative domain end >= (offset + domain)
# returns [child index, offset relative to child]
_findDirsumChild := function( dirsum, offset, domain)
local i, ch, d;
ch := dirsum.children();
d := 0;
for i in [1..Length(ch)] do
if offset >= d then
d := d + ch[i].domain();
if offset + domain <= d then
return [i, offset - d + ch[i].domain()];
fi;
else
return [0, 0];
fi;
od;
return [0, 0];
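The search above can be mirrored in a short Python sketch (operating on a plain list of child domain sizes; the function name is ours, not part of the Spiral code base):

```python
def find_dirsum_child(domains, offset, size):
    """Locate the direct-sum child whose span contains [offset, offset + size).

    domains is the list of child domain sizes, laid out consecutively.
    Returns (1-based child index, offset relative to that child),
    or (0, 0) if the requested range straddles a child boundary.
    """
    d = 0  # running end of the span covered so far
    for i, dom in enumerate(domains, start=1):
        if offset < d:
            # The range starts inside an earlier child but was not fully
            # contained in it, so it spans a boundary: no single child fits.
            return (0, 0)
        d += dom
        if offset + size <= d:
            return (i, offset - (d - dom))
    return (0, 0)
```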
end;
# *******************************************************************
Class(RulesFuncSimp, RuleSet);
RewriteRules(RulesFuncSimp, rec(
# ===================================================================
# Operator flattening and identity
# ===================================================================
FlattenTensor := ARule(fTensor, [@(1, fTensor) ], e -> @(1).val.children()),
FlattenCompose := ARule(fCompose, [@(1, fCompose)], e -> @(1).val.children()),
ComposeId1 := ARule(fCompose, [@(1), fId ], e -> [@(1).val]),
ComposeId2 := ARule(fCompose, [fId, @(1)], e -> [@(1).val]),
fBase_1x1 := Rule(@(1, fBase, one_by_one), e->fId(1)),
fId_drop_Value := Rule([@(1, fId), @(2, Value)], e->fId(@(2).val.v)),
Const_fbase := ARule(fCompose, [@(1), @(2, fBase, e -> IsInt(e.params[2]))],
e -> [ When(IsInt(@(1).val.range()),
fBase(@(1).val.range(), @(1).val.at(@(2).val.params[2])),
fConst(@(1).val.range(), 1, @(1).val.at(@(2).val.params[2]))) ]),
Compose1x1_1 := ARule(fCompose, [@(1).cond(one_by_one), @(2)], e -> [@(2).val]),
Compose1x1_2 := ARule(fCompose, [@(2), @(1).cond(one_by_one) ], e -> [@(2).val]),
TensorId1 := ARule(fTensor, [@(1), [@(2,[fId,J]), 1]], e -> [@(1).val]),
TensorId2 := ARule(fTensor, [[@(2,[fId,J]), 1], @(1)], e -> [@(1).val]),
TensorId1V := ARule(fTensor, [@(1), [@(2,[fId,J]), @(3, Value, e->e.v=1)]], e -> [@(1).val]),
TensorId2V := ARule(fTensor, [[@(2,[fId,J]), @(3, Value, e->e.v=1)], @(1)], e -> [@(1).val]),
diagTensorId1 := ARule(diagTensor, [@(1), [fConst, @(2), 1, 1]], e -> [@(1).val]),
diagTensorIdV1 := ARule(diagTensor, [@(1), [fConst, @(2), 1, _1]], e -> [@(1).val]),
diagTensorId2 := ARule(diagTensor, [[fConst, @(2), 1, 1], @(1)], e -> [@(1).val]),
diagTensorIdV2 := ARule(diagTensor, [[fConst, @(2), 1, _1], @(1)], e -> [@(1).val]),
diagTensorZero1 := ARule(diagTensor, [@(1), [fConst, @(2), 1, 0]], e -> [fConst(@(1).val.domain(), 0)]),
diagTensorZeroV1 := ARule(diagTensor, [@(1), [fConst, @(2), 1, _0]], e -> [fConst(@(1).val.domain(), 0)]),
diagTensorZero2 := ARule(diagTensor, [[fConst, @(2), 1, 0], @(1)], e -> [fConst(@(1).val.domain(), 0)]),
diagTensorZeroV2 := ARule(diagTensor, [[fConst, @(2), 1, _0], @(1)], e -> [fConst(@(1).val.domain(), 0)]),
DropL := Rule(@(1, [L,OS], e -> e.params[2] = 1 or e.params[2] = e.params[1]), e -> fId(@1.val.params[1])),
DropTr := Rule(@(1, Tr, e -> e.params[1] = 1 or e.params[2] = 1), e -> fId(@1.val.domain())),
IP_toI := Rule([IP, @(2), fId], e -> fId(@(2).val)),
J_1 := Rule([J, 1], e->fId(1)),
OS_2 := Rule([OS, 2, @], e->fId(2)),
OS_fAdd := ARule(fCompose, [ [OS, @(1), @(2).cond(e -> e in [@(1).val-1, -1])],
[fAdd, @(3), @(4).cond(e -> e = @(3).val - 1), 1] ],
e -> [ fAdd(@(3).val, @(4).val, 1), J(@(4).val) ]),
Lfold := ARule(fCompose, [@(1,L), @(2,L),e->let(
prod := @(1).val.params[2] * @(2).val.params[2],
a:=@(1).val.params[1],
When(prod < a, prod in FactorsInt(a), (prod / a) in FactorsInt(a)))],
e -> let(
prod := @(1).val.params[2] * @(2).val.params[2], a:=@(1).val.params[1],
[ L(a, When(prod < a, prod, prod / a)) ])),
Hfold := ARule(fCompose, [@(1,H), @(2,H)],
e-> [H(@(1).val.params[1],
@(2).val.params[2],
@(1).val.params[3]+@(2).val.params[3]*@(1).val.params[4],
@(1).val.params[4]*@(2).val.params[4])]),
# ===================================================================
# Diagonal Functions
# ===================================================================
ComposePrecompute := Rule([fCompose, ..., [fPrecompute, @(1)], ...],
e -> fPrecompute(ApplyFunc(fCompose, List(e.children(),
x->When(ObjId(x)=fPrecompute, x.child(1), x))))),
PrecomputePrecompute := Rule([fPrecompute, [fPrecompute, @(1)]], e->fPrecompute(@(1).val)),
# drop diagDirsum when H refers to only one its child
drop_diagDirsum_H := ARule( fCompose, [ @(1, diagDirsum), @(2, H, x -> x.params[4] = 1 and _findDirsumChild(@(1).val, x.params[3], x.params[2])[1]>0)],
e -> let( f := _findDirsumChild(@(1).val, @(2).val.params[3], @(2).val.params[2]),
obj := @(1).val.children()[f[1]],
[ obj, H(obj.domain(), @(2).val.params[2], f[2], 1) ] )),
RCData_fCompose_TReal := Rule([RCData, [@(1,fCompose), @(2).cond(e->e.range = TReal), ...]],
e -> fCompose(RCData(@(2).val), diagTensor(fCompose(Drop(@(1).val.children(),1)), fId(2)))),
RCData_fCompose_TInt := Rule([RCData, [@(1,fCompose), @(2).cond(e->e.range = TInt), ...]],
e -> fCompose(RCData(@(2).val), fTensor(fCompose(Drop(@(1).val.children(),1)), fId(2)))),
# RCData_fCompose := Rule([RCData, [@(1,fCompose), @(2), ...]],
# e -> fCompose(RCData(@(2).val), fTensor(fCompose(Drop(@(1).val.children(),1)), fId(2)))),
RCData_Precompute := Rule([RCData, [fPrecompute, @(1)]], e->fPrecompute(RCData(@(1).val))),
RCData_CRData := Rule([RCData, [CRData, @(1)]], e -> @(1).val),
CRData_RCData := Rule([CRData, [RCData, @(1)]], e -> @(1).val),
FConj_Precompute := Rule([@(0,[FConj,FRConj]), [fPrecompute, @(1)]], e->fPrecompute(ObjId(@(0).val)(@(1).val))),
ComposeConstX := ARule(fCompose, [@(1,fConst), @(2)],
e -> [fConst(@(1).val.params[1], @(2).val.domain(), @(1).val.params[3])]),
fAdd0 := Rule([fAdd, @(1), @(2).cond(e->e=@(1).val), 0], e->fId(@(1).val)),
fAddPair := ARule(fCompose, [@(1, fAdd), @(2, fAdd)],
e -> [ fAdd(@(1).val.range(), @(2).val.domain(),
@(1).val.params[3] + @(2).val.params[3]) ]),
fTensor_fAdd_H := ARule(fTensor, [ [@(1, fAdd),@, _1, @], @(2, H)],
e -> [ H(@(1).val.params[1] * @(2).val.params[1],
@(2).val.params[2],
@(2).val.params[1] * @(1).val.params[3] + @(2).val.params[3],
@(2).val.params[4]) ]),
# ===================================================================
# fTensor
# ===================================================================
# fId(m) (X) fId(n) -> fId(m*n)
TensorIdId := ARule(fTensor, [@(1,fId), @(2,fId)],
e -> [ fId(@(1).val.params[1] * @(2).val.params[1]) ]),
TensorDropId1 := ARule(fTensor, [@(1, fId, x->x.params[1]=1)],
e -> []),
# L(mn,m) o (fBase(m,j) (X) f) -> f (X) fBase(j,m)
LTensorFlip := ARule(fCompose,
[ @(1,L), [ @(3,fTensor), @(2).cond(e->range(e) = @(1).val.params[2] and domain(e)=1), ...] ],
e -> [ fTensor(Copy(Drop(@(3).val.children(), 1)), Copy(@(2).val)) ] ),
# L(mn,n) o f (X) fBase(j,m) -> (fBase(m,j) (X) f)
LTensorFlip1 := ARule(fCompose,
[ @(1,L), [ @(3,fTensor), ...,
@(2, fBase, e->range(e) = @(1).val.params[1]/@(1).val.params[2]) ]],
e -> [fTensor(Copy(Last(@(3).val.children())), Copy(DropLast(@(3).val.children(), 1)))] ),
L_H := ARule(fCompose, [ @(1, L), [ @(2, fTensor), @(3, fBase), @(4, fId, e->IsInt(@(1).val.params[1]/(@(1).val.params[2] * e.params[1])))] ],
e-> [ H(@(1).val.params[1], @(4).val.params[1],
@(4).val.params[1] * @(1).val.params[2] * imod(@(3).val.params[2], @(1).val.params[1] / (@(4).val.params[1] * @(1).val.params[2])) +
idiv(@(3).val.params[2], @(1).val.params[1] / (@(4).val.params[1] * @(1).val.params[2])),
@(1).val.params[2]) ]),
Refl0_u_H0 := ARule(fCompose, [@(1, Refl0_u), @(2, H, e -> e.params[3]=0 and e.params[4]=1)], e -> [ let(
k := @(1).val.params[1],
H(@(1).val.range(), @(2).val.domain(), 0, k)) ]),
Refl0_u_Hrest := ARule(fCompose, [
@(1, Refl0_u),
@(2, H, e -> e.params[3]=@(1).val.params[2] and e.params[4]=1),
[fTensor, @(3,fBase), @(4, fId, e->e.params[1]=@(1).val.params[2])]],
e -> [ let(k := @(1).val.params[1],
BH(@(1).val.range(), 2*@(1).val.range(), @(4).val.domain(), 1+@(3).val.params[2], 2*k)) ]),
Refl0_u_Hrest_1it := ARule(fCompose, [
@(1, Refl0_u),
@(2, H, e -> e.params[3]=@(1).val.params[2] and e.params[4]=1 and e.params[3]=e.params[2]) ],
e -> [ let(k := @(1).val.params[1],
BH(@(1).val.range(), 2*@(1).val.range(), @(2).val.domain(), 1, 2*k)) ]),
Refl1_H := ARule(fCompose, [@(1, Refl1), [fTensor, @(2,fBase), @(3, fId, e->e.params[1]=@(1).val.params[2])]],
e -> let(
k := @(1).val.params[1],
[ BH(@(1).val.range(), 2*@(1).val.range() - 1, @(3).val.domain(), @(2).val.params[2], 2*k) ])),
fTensor_X_fId_H := ARule(fCompose, [[@(1, fTensor), ..., [fId, @(2)]], # NOTE: assumes d|base
[@(3, H), @(4).cond(e->_divides(@(2).val,e)), @(5).cond(e->_divides(@(2).val,e)),
@(6).cond(e->_divides(@(2).val,e)), _1]], e ->
let(p := @(3).val.params, d := @(2).val, base := p[3],
[ @(1).val, fTensor(H(p[1]/d, p[2]/d, base/d, 1), fId(d)) ])),
TrTensor := ARule(fCompose, [@(1, Tr), [ fTensor, @(2), @(3).cond(e->e.range()=@(1).val.params[1]) ]],
e -> [ fCompose(fTensor(@(3).val, @(2).val), Tr(@(3).val.domain(), @(2).val.domain())) ]
),
LTensorGeneral := ARule(
fCompose,
[@(1,[L,Tr]),
@(2,fTensor,
e -> let(
p := @(1).val.params,
a := When(ObjId(@(1).val)=Tr, p[1], p[1]/p[2]),
b := p[2],
ForAll(e.children(), x->AnySyms(x.domain(), x.range()) or x.domain()>1 and x.range()>1) and # workaround old fId(1) bug
full_merge_tensor_chains(@(3), [b, a], e.children(), (x,y)->y,
Product, fTensor,
fId, x->x, x->x, range) <> false))],
e -> [ ltensor_flip(@(3).val, @(1).val.params[1], @(1).val.params[2]) ]
),
# fTensor o fTensor -> fTensor( ..o.., ..o.. , ...)
TensorMerge := ARule(fCompose,
[ @(1,fTensor), @(2,fTensor,e->full_merge_tensor_chains(
@(3), @(1).val.children(), e.children(),
fCompose, fTensor, fTensor, x->x, x->x, domain, range) <> false) ],
e -> [ fTensor(@(3).val) ]
),
TensorMerge_fIdInt := ARule(fCompose,
[ @(1,fTensor,e->let(l1:=Last(e.children()), ObjId(l1)=fId and IsInt(l1.domain()))),
@(2,fTensor, e->let(l1:=Last(@(1).val.children()), l2 := Last(e.children()), IsInt(l2.domain()) and ObjId(l2)=fId and Gcd(l1.domain(), l2.domain())>1))],
e -> let(l1 := Last(@(1).val.children()).domain(), l2 := Last(@(2).val.children()).domain(), gcd := Gcd(l1, l2),
[ fTensor(fCompose(fTensor(DropLast(@(1).val.children(), 1), fId(l1/gcd)), fTensor(DropLast(@(2).val.children(), 1), fId(l2/gcd))), fId(gcd)) ])
),
# diagTensor o fTensor -> diagTensor( ..o.., ..o.. , ...)
diagTensorMerge := ARule(fCompose,
[ @(1,diagTensor), @(2,fTensor,e->full_merge_tensor_chains(
@(3), @(1).val.children(), e.children(),
fCompose, diagTensor, fTensor, x->x, x->fConst(range(x), 1), domain, range) <> false) ],
e -> [ diagTensor(@(3).val) ]
),
# fTensor(..., Y, ...) o X -> fTensor(..., Y o X, ...)
TensorComposeMergeRight := ARule(fCompose,
[ @(1,fTensor), @(2).cond(e -> compat_tensor_chains(
@(1).val.children(), [e], domain, range)) ],
e -> [ fTensor(merge_tensor_chains(
@(1).val.children(), [@(2).val], fCompose, x->x, x->x, domain, range)) ]
),
# X o fTensor(..., Y, ...) -> fTensor(..., X o Y, ...)
TensorComposeMergeLeft := ARule(fCompose,
[ @(1), @(2,fTensor,e-> IsInt(@(1).val.range()) and compat_tensor_chains( # NOTE: BAD HACK!!! Cannot do that with diag functions...
[@(1).val], e.children(), domain, range)) ],
e -> [ fTensor(merge_tensor_chains(
[@(1).val], @(2).val.children(), fCompose, x->x, x->x, domain, range)) ]
),
# [fBase,fId] X (fB o fAny) -> fB2 o (fI X fAny)
TensorB := ARule(fTensor, [
@(1, [fBase, fId], e -> Is2Power(e.range())),
@(2, fCompose, e -> ObjId(e.child(1)) = fB)
],
e -> [fCompose(
let(i := @(1).val,
b := @(2).val.child(1),
fB(i.range() * b.range(), List(b.params[3], e -> Log2Int(i.range()) + e))
),
fTensor(
@(1).val,
@(2).val.child(2)
)
)]
),
# ===================================================================
# Other
# ===================================================================
# primitive constant folding for functions
Compose_FList_const := ARule(fCompose, [@(1, FList), @(2).cond(x->x.free()=[])],
e -> let(
lst := @(1).val, idx := List(@(2).val.tolist(), EvalScalar),
[ FList(lst.t, lst.list{1+idx}) ])),
# fCond o f (note: fCond o fCond should only be handled by Compose_fCond_lft)
Compose_fCond_rt := ARule(fCompose, [[fCond, @(1), @(2), @(3)], @(4).cond(e->ObjId(e)<>fCond)],
e -> [ fCond(fCompose(@(1).val, @(4).val), fCompose(@(2).val, @(4).val), fCompose(@(3).val, @(4).val)) ]),
Compose_fCond_lft := ARule(fCompose, [@(4), [fCond, @(1), @(2), @(3)]],
e -> [ fCond(@(1).val, fCompose(@(4).val, @(2).val), fCompose(@(4).val, @(3).val)) ]),
fCond_fold0 := Rule([fCond, [fConst, @, @, 0], @(2), @(3)], e -> @(3).val),
fCond_foldV0 := Rule([fCond, [fConst, @, @, _0], @(2), @(3)], e -> @(3).val),
fCond_fold1 := Rule([fCond, [fConst, @, @, 1], @(2), @(3)], e -> @(2).val),
fCond_foldV1 := Rule([fCond, [fConst, @, @, _1], @(2), @(3)], e -> @(2).val),
# B2 o fTensor
fCompose_B2_fTensor := ARule(fCompose, [@(1,B2), @(2,fTensor)],
e -> [fB2(@(1).val.getES(), @(2).val.children())]
),
# L(abc,b) o L(abc,c) -> L(abc,bc)
fCompose_Tr := ARule( fCompose, [ [@(1, [Tr, L]), @, @(2)], [@(3, [Tr, L]), @, @(4).cond( x -> let( abc := @(1).val.range(), @(3).val.domain() = abc and (abc mod (@(2).val*x) = 0)))]],
e -> let( abc := @(1).val.range(), bc := @(2).val*@(4).val,
When( ObjId(@(1).val) = L, [L(abc, bc)], [Tr(abc/bc, bc)] ))),
fDirsum_fIds := ARule( fDirsum, [@(1, fId), @(2, fId)], e -> [fId(@(1).val.range()+@(2).val.range())]),
));
|
/-
Copyright (c) 2021 Yury Kudryashov. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Yury Kudryashov
-/
import analysis.box_integral.partition.additive
import measure_theory.measure.lebesgue
/-!
# Box-additive functions defined by measures
In this file we prove a few simple facts about rectangular boxes, partitions, and measures:
- given a box `I : box ι`, its coercion to `set (ι → ℝ)` and `I.Icc` are measurable sets;
- if `μ` is a locally finite measure, then `(I : set (ι → ℝ))` and `I.Icc` have finite measure;
- if `μ` is a locally finite measure, then `λ J, (μ J).to_real` is a box additive function.
For the last statement, we both prove it as a proposition and define a bundled
`box_integral.box_additive` function.
### Tags
rectangular box, measure
-/
open set
noncomputable theory
open_locale ennreal big_operators classical box_integral
variables {ι : Type*}
namespace box_integral
open measure_theory
namespace box
variables (I : box ι)
lemma measurable_set_coe [fintype ι] (I : box ι) : measurable_set (I : set (ι → ℝ)) :=
begin
rw [coe_eq_pi],
haveI := fintype.encodable ι,
exact measurable_set.univ_pi (λ i, measurable_set_Ioc)
end
lemma measurable_set_Icc [fintype ι] (I : box ι) : measurable_set I.Icc := measurable_set_Icc
lemma measure_Icc_lt_top (μ : measure (ι → ℝ)) [is_locally_finite_measure μ] : μ I.Icc < ∞ :=
show μ (Icc I.lower I.upper) < ∞, from I.is_compact_Icc.measure_lt_top
lemma measure_coe_lt_top (μ : measure (ι → ℝ)) [is_locally_finite_measure μ] : μ I < ∞ :=
(measure_mono $ coe_subset_Icc).trans_lt (I.measure_Icc_lt_top μ)
end box
lemma prepartition.measure_Union_to_real [fintype ι] {I : box ι} (π : prepartition I)
(μ : measure (ι → ℝ)) [is_locally_finite_measure μ] :
(μ π.Union).to_real = ∑ J in π.boxes, (μ J).to_real :=
begin
erw [← ennreal.to_real_sum, π.Union_def, measure_bUnion_finset π.pairwise_disjoint],
exacts [λ J hJ, J.measurable_set_coe, λ J hJ, (J.measure_coe_lt_top μ).ne]
end
end box_integral
open box_integral box_integral.box
variables [fintype ι]
namespace measure_theory
namespace measure
/-- If `μ` is a locally finite measure on `ℝⁿ`, then `λ J, (μ J).to_real` is a box-additive
function. -/
@[simps] def to_box_additive (μ : measure (ι → ℝ)) [is_locally_finite_measure μ] :
ι →ᵇᵃ[⊤] ℝ :=
{ to_fun := λ J, (μ J).to_real,
sum_partition_boxes' := λ J hJ π hπ, by rw [← π.measure_Union_to_real, hπ.Union_eq] }
end measure
end measure_theory
namespace box_integral
open measure_theory
namespace box
@[simp] lemma volume_apply (I : box ι) :
(volume : measure (ι → ℝ)).to_box_additive I = ∏ i, (I.upper i - I.lower i) :=
by rw [measure.to_box_additive_apply, coe_eq_pi, real.volume_pi_Ioc_to_real I.lower_le_upper]
lemma volume_face_mul {n} (i : fin (n + 1)) (I : box (fin (n + 1))) :
(∏ j, ((I.face i).upper j - (I.face i).lower j)) * (I.upper i - I.lower i) =
∏ j, (I.upper j - I.lower j) :=
by simp only [face_lower, face_upper, (∘), fin.prod_univ_succ_above _ i, mul_comm]
end box
namespace box_additive_map
/-- Box-additive map sending each box `I` to the continuous linear endomorphism
`x ↦ (volume I).to_real • x`. -/
protected def volume {E : Type*} [normed_group E] [normed_space ℝ E] :
ι →ᵇᵃ (E →L[ℝ] E) :=
(volume : measure (ι → ℝ)).to_box_additive.to_smul
lemma volume_apply {E : Type*} [normed_group E] [normed_space ℝ E] (I : box ι) (x : E) :
box_additive_map.volume I x = (∏ j, (I.upper j - I.lower j)) • x :=
congr_arg2 (•) I.volume_apply rfl
end box_additive_map
end box_integral
|
module stencil_defect_module
use bl_constants_module
use bl_types
use multifab_module
use cc_stencil_module
use stencil_types_module
implicit none
private
public :: compute_defect, stencil_apply
contains
! Computes dd = ff - ss * uu
subroutine compute_defect(ss, dd, ff, uu, mm, stencil_type, lcross, &
uniform_dh, bottom_solver, diagonalize, filled)
use bl_prof_module
type(multifab), intent(in) :: ff, ss
type(multifab), intent(inout) :: dd, uu
type(imultifab), intent(in) :: mm
integer, intent(in) :: stencil_type
logical, intent(in) :: lcross
logical, intent(in), optional :: uniform_dh, bottom_solver, diagonalize, filled
type(bl_prof_timer), save :: bpt
call bl_proffortfuncstart("compute_defect")
call build(bpt, "compute_defect")
call stencil_apply(ss, dd, uu, mm, stencil_type, lcross, &
uniform_dh, bottom_solver, diagonalize, filled)
call sub_sub(dd, ff)
call rescale(dd, -one)
call destroy(bpt)
call bl_proffortfuncstop("compute_defect")
end subroutine compute_defect
!
! Computes rr = aa * uu
!
subroutine stencil_apply(aa, rr, uu, mm, stencil_type, lcross, &
uniform_dh, bottom_solver, diagonalize, filled)
use bl_prof_module
use cc_stencil_apply_module, only : stencil_apply_1d, stencil_apply_2d, stencil_apply_3d, &
stencil_apply_ibc_2d, stencil_apply_ibc_3d
use nodal_stencil_apply_module, only: stencil_apply_1d_nodal, &
stencil_apply_2d_nodal, &
stencil_apply_3d_nodal
use stencil_util_module, only : is_ibc_stencil
type(multifab), intent(in) :: aa
type(multifab), intent(inout) :: rr
type(multifab), intent(inout) :: uu
type(imultifab), intent(in) :: mm
integer, intent(in) :: stencil_type
logical, intent(in) :: lcross
logical, intent(in),optional :: uniform_dh, bottom_solver, diagonalize, filled
real(kind=dp_t), pointer :: rp(:,:,:,:), up(:,:,:,:), ap(:,:,:,:)
integer , pointer :: mp(:,:,:,:)
integer :: i, n, lo(get_dim(rr)), hi(get_dim(rr)), dm
logical :: nodal_flag, luniform_dh, lbottom_solver, ldiagonalize, lfilled
type(bl_prof_timer), save :: bpt
call build(bpt, "its_stencil_apply")
luniform_dh = .false. ; if ( present(uniform_dh) ) luniform_dh = uniform_dh
lbottom_solver = .false. ; if ( present(bottom_solver) ) lbottom_solver = bottom_solver
ldiagonalize = .false. ; if ( present(diagonalize) ) ldiagonalize = diagonalize
lfilled = .false. ; if ( present(filled) ) lfilled = filled
if (ldiagonalize .and. .not. nodal_q(rr)) then
call bl_error("Don't set diagonalize flag = true for cell-centered in stencil_apply")
end if
call bl_assert(ncomp(uu).ge.ncomp(rr), 'uu must have at least as many components as rr')
if (.not.lfilled) call fill_boundary(uu, 1, ncomp(rr), cross = lcross)
dm = get_dim(rr)
nodal_flag = nodal_q(uu)
do i = 1, nfabs(rr)
rp => dataptr(rr, i)
up => dataptr(uu, i)
ap => dataptr(aa, i)
lo = lwb(get_box(uu,i))
hi = upb(get_box(uu,i))
if (is_ibc_stencil(aa,i)) then
do n = 1, ncomp(rr)
select case(dm)
case (2)
call stencil_apply_ibc_2d(ap(:,1,1,1), rp(:,:,1,n), nghost(rr), &
up(:,:,1,n), nghost(uu), lo, hi)
case (3)
call stencil_apply_ibc_3d(ap(:,1,1,1), rp(:,:,:,n), nghost(rr), &
up(:,:,:,n), nghost(uu), lo, hi)
end select
end do
else
mp => dataptr(mm, i)
do n = 1, ncomp(rr)
select case(dm)
case (1)
if ( .not. nodal_flag) then
call stencil_apply_1d(ap(:,:,1,1), rp(:,1,1,n), nghost(rr), up(:,1,1,n), nghost(uu), &
mp(:,1,1,1), lo, hi)
else
call stencil_apply_1d_nodal(ap(1,:,1,1), rp(:,1,1,n), up(:,1,1,n), &
mp(:,1,1,1), nghost(uu), nghost(rr), ldiagonalize)
end if
case (2)
if ( .not. nodal_flag) then
call stencil_apply_2d(ap(:,:,:,1), rp(:,:,1,n), nghost(rr), up(:,:,1,n), nghost(uu), &
mp(:,:,1,1), lo, hi)
else
call stencil_apply_2d_nodal(ap(1,:,:,1), rp(:,:,1,n), up(:,:,1,n), &
mp(:,:,1,1), nghost(uu), nghost(rr), stencil_type, ldiagonalize)
end if
case (3)
if ( .not. nodal_flag) then
call stencil_apply_3d(ap(:,:,:,:), rp(:,:,:,n), nghost(rr), up(:,:,:,n), nghost(uu), &
mp(:,:,:,1), bottom_solver=lbottom_solver)
else
call stencil_apply_3d_nodal(ap(1,:,:,:), rp(:,:,:,n), up(:,:,:,n), &
mp(:,:,:,1), nghost(uu), nghost(rr), stencil_type, luniform_dh, lbottom_solver, ldiagonalize)
end if
end select
end do
end if
end do
call destroy(bpt)
end subroutine stencil_apply
end module stencil_defect_module
|
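For orientation (not part of the original sources): the Fortran dispatcher above hands each box off to a dimension-specific kernel such as stencil_apply_1d. A minimal, hedged Python sketch of what such a 1-D cell-centered kernel computes — with hypothetical names and a simplified coefficient layout — is:

```python
def stencil_apply_1d(ss, uu, ng):
    """Hedged sketch of a 3-point cell-centered stencil application.

    ss -- one (center, left, right) coefficient triple per interior cell
          (a simplified, hypothetical layout; the real code packs
          coefficients per dimension)
    uu -- solution values including ng ghost cells on each side
    ng -- number of ghost cells
    Returns rr with rr[i] = c*u_i + l*u_{i-1} + r*u_{i+1} per interior cell.
    """
    rr = []
    for i in range(len(ss)):
        j = ng + i  # position of interior cell i inside uu
        c, l, r = ss[i]
        rr.append(c * uu[j] + l * uu[j - 1] + r * uu[j + 1])
    return rr
```

With the discrete Laplacian coefficients (-2, 1, 1) this returns zero on linear data, as expected of a second-difference stencil.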
[STATEMENT]
lemma token_run_initial_state:
"token_run x x = q\<^sub>0"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. token_run x x = q\<^sub>0
[PROOF STEP]
by simp |
State Before: α : Type u_1
a : Array α
x : α
i : Nat
h : i < size a
⊢ (push a x)[i]? = some a[i] State After: no goals Tactic: rw [getElem?_pos, get_push_lt] |
theory Rewriting1
imports LocalLexingSpecification Pattern1 DerivationTrees1 DerivationTrees_Ambiguity
Containers.Containers
begin
text "Grammar Rewriting implementing the rewriting based on patterns as in Pattern1"
text "Can this be defined using the transitive closure?"
(*alternative*)
(*describes pattern [A, \<alpha>, \<circ>X\<delta>, alternative_rule]*)
type_synonym ('a, 'b) pattern = "('a, 'b) symbol \<times> ('a, 'b) symbol list \<times> ('a, 'b) symbol list \<times> ('a, 'b) symbol list"
type_synonym ('a, 'c) rule = "('a, 'c) symbol \<times> ('a, 'c) symbol list"
lemma csorted_set_to_list:
assumes "ID ccompare = Some (c::('a::ccompare) comparator)"
"finite s"
shows"set (csorted_list_of_set (s::('a::ccompare set))) = s \<and> distinct (csorted_list_of_set (s::('a::ccompare set)))"
proof -
have 1:"class.linorder (le_of_comp c) (lt_of_comp c)" using ID_ccompare' [OF assms(1)] comparator.linorder by blast
show ?thesis using linorder.set_sorted_list_of_set [OF 1 assms(2)] assms(2)
linorder.distinct_sorted_list_of_set [OF 1] unfolding csorted_list_of_set_def assms(1) by auto
qed
type_synonym ('a, 'c) assoc_group = "('a, 'c) rule set \<times> bias"
type_synonym ('a, 'c) disambig = "(('a, 'c) rule \<times> ('a, 'c) rule) set"
(*datatype ('a, 'c ) grammar = Grammar (nonterms: "('a, 'c) symbol set") (terms: "('a, 'c) symbol set")(rules:"('a, 'c) rule set") (start: "('a, 'c) symbol")
*)
type_synonym ('a, 'c ) grammar = "('a, 'c) symbol set \<times> ('a, 'c) symbol set \<times>
('a, 'c) rule set \<times>('a, 'c) symbol"(*\<NN> \<TT> \<RR> \<SS>*)
type_synonym ('a,'c) recursrel = "(('a, 'c) symbol \<times> ('a, 'c) symbol) set"
fun fresh::"((('a, 'c) symbol) \<Rightarrow> nat option) \<Rightarrow> ('a, 'c) symbol \<Rightarrow> ((('a, 'c) symbol) \<Rightarrow> nat option)" where
"fresh m s = (if m s = None then m(s \<mapsto> 1) else m(s \<mapsto> (the (m s)) + 1))"
fun convert::"nat \<Rightarrow> ('a, 'c) symbol \<Rightarrow> ('a \<times> nat, 'c) symbol" where
"convert a = (case_sum (\<lambda> t. Inl (t, a)) Inr) "
fun convert_back::"('a \<times> nat, 'c) symbol \<Rightarrow> ('a, 'c) symbol" where
"convert_back s= (case_sum (\<lambda> t . Inl (fst t)) Inr) s"
fun convert_back_sentence::"('a \<times> nat, 'c) symbol list\<Rightarrow> ('a, 'c) symbol list" where
"convert_back_sentence l= map convert_back l"
fun basic_convert::"('a, 'c) symbol \<Rightarrow> ('a \<times> nat, 'c ) symbol" where
"basic_convert a = convert 0 a"
fun stage0::"('a, 'c) grammar \<Rightarrow> ('a \<times> nat, 'c) grammar " where
"stage0 (\<NN>, \<TT>, R, s) = (((convert 0) ` \<NN>), ((convert 0) ` \<TT>), ((\<lambda>(n, l). ((convert 0 n), map (convert 0 ) l))` R) ,(convert 0 s))"
context CFG
begin
definition new_nonterm::"('a \<times> nat) set" where
"new_nonterm = ((\<lambda> a. (a, 0)) ` \<A>)"
term case_sum
definition new_term::"'b set" where
"new_term = \<B>"
definition rules::"('a \<times> nat, 'b) rule set" where
"rules = ((\<lambda>(n, l). ((convert 0 n), map (convert 0 ) l))` \<RR>)"
definition start::"'a \<times> nat" where
"start = (\<S>, 0)"
lemma start_valid:"start \<in> new_nonterm"
by (auto simp add: startsymbol_dom start_def new_nonterm_def)
lemma h1:"(a, b) \<in> local.rules \<Longrightarrow> a \<in> Inl ` (\<lambda>a. (a, 0)) ` \<A>"
proof -
assume "(a, b) \<in> rules"
then obtain n l where n:"(n, l) \<in> \<RR> \<and> (a = (convert 0 n)) \<and> b = map (convert 0) l" using rules_def by auto
then have "n \<in> Inl ` \<A>" using validRules by blast
with n have "a \<in> (convert 0) ` Inl ` \<A>" by blast
then show ?thesis by auto
qed
lemma h2:"(a, b) \<in> local.rules \<Longrightarrow> s \<in> set b \<Longrightarrow> s \<notin> Inr ` new_term \<Longrightarrow> s \<in> Inl ` (\<lambda>a. (a, 0)) ` \<A> "
proof -
assume assms:"(a, b) \<in> rules" "s \<in> set b" "s \<notin> Inr ` new_term"
then obtain n l where n:"(n, l) \<in> \<RR> \<and> (a = (convert 0 n)) \<and> b = map (convert 0) l" using rules_def by auto
then have "\<forall> s \<in> set l . s \<in> (Inl ` \<A>) \<union> (Inr ` \<B>)" using validRules by fast
with n have "\<forall> s \<in> set b . s \<in> ((convert 0) `(Inl ` \<A>) \<union> (Inr ` \<B>))" by force
then have "\<forall> s \<in> set b . s \<in> ((convert 0) `(Inl ` \<A>)) \<union> (Inr ` \<B>)" by blast
then have "\<forall> s \<in> set b . s \<in> (Inl `(\<lambda> a. (a, 0)) ` \<A>) \<union> (Inr ` new_term)" using new_term_def by auto
with assms show ?thesis by auto
qed
lemma finite_rules:"finite rules"
by (auto simp add: finite_grammar rules_def)
interpretation grammar2: CFG new_nonterm new_term rules start
apply(unfold_locales)
apply(auto simp add: startsymbol_dom start_def new_nonterm_def)
apply(auto simp add: h1 h2 finite_rules)
apply (auto simp add: new_term_def finite_terminals)
done
lemma terminal_equality:"grammar2.is_terminal x \<Longrightarrow> is_terminal (convert_back x)"
apply(auto simp add: grammar2.is_terminal_def is_terminal_def new_term_def)
by (smt (verit, ccfv_threshold) case_sum_inject disjE_realizer grammar2.is_terminal_def new_term_def surjective_sum)
lemma terminal_equality':"is_terminal x \<Longrightarrow> grammar2.is_terminal (convert 0 x)"
apply(auto simp add: grammar2.is_terminal_def is_terminal_def new_term_def)
by (smt (verit, ccfv_threshold) case_sum_inject disjE_realizer grammar2.is_terminal_def new_term_def surjective_sum)
lemma nonterminal_equality:"grammar2.is_nonterminal x \<Longrightarrow> is_nonterminal (convert_back x)"
proof -
assume "grammar2.is_nonterminal x"
then have "case_sum (\<lambda> s. s \<in> new_nonterm) (\<lambda> t. False) x" by (simp add:grammar2.is_nonterminal_def)
then obtain s where 1:"x = Inl s \<and> s \<in> ((\<lambda> a. (a, 0)) ` \<A>)" using new_nonterm_def
by (metis old.sum.simps(5) old.sum.simps(6) sumE)
then obtain s' where valid:"s = (s', 0) \<and> s' \<in> \<A>" by blast
with 1 have "convert_back x = Inl s'" by simp
with valid show ?thesis using is_nonterminal_def by simp
qed
(*list_all helpers*)
lemma list_all_map:"list_all f (map g b) \<Longrightarrow> list_all (f \<circ> g) b"
by (simp add: list.pred_map)
lemma list_all_implies:"list_all f b \<Longrightarrow> \<forall> x . (f x\<longrightarrow> g x) \<Longrightarrow> list_all g b"
using list_all_pos_neg_ex by blast
lemma list_all2_implies:"list_all2 f b b' \<Longrightarrow> \<forall> x y. (f x y\<longrightarrow> g x y) \<Longrightarrow> list_all2 g b b'"
using list_all2_mono by blast
lemma list_all2_implies_list_all:"list_all2 (\<lambda> x y. f x) b b' \<Longrightarrow> list_all f b"
proof (induction b arbitrary: b')
case Nil
then show ?case by simp
next
case (Cons a b)
then obtain x xs where"b' = x#xs" by (meson list_all2_Cons1)
with Cons(2) have "(\<lambda>x y. f x) a x \<and>list_all2 (\<lambda>x y. f x) b xs" by blast
with Cons(1) have "f a \<and> list_all f b" by blast
then show ?case by simp
qed
lemma nonterminal_equality':"is_nonterminal x \<Longrightarrow> grammar2.is_nonterminal (convert 0 x)"
proof -
assume "is_nonterminal x"
then have "case_sum (\<lambda> s. s \<in> \<A>) (\<lambda> t. False) x" by (simp add: is_nonterminal_def)
then obtain s where 1:"x = Inl s \<and> s \<in> \<A>"
by (metis old.sum.simps(5) old.sum.simps(6) sumE)
then obtain s'::"('a \<times> nat)" where valid:"s' = (\<lambda> a. (a, 0)) s \<and> s' \<in> new_nonterm" using new_nonterm_def by blast
with 1 have "convert 0 x = Inl s'" by simp
with valid grammar2.is_nonterminal_def show ?thesis by fastforce
qed
lemma backconversion:"N = convert_back (convert n N)"
proof (cases N)
case (Inl a)
then show ?thesis by simp
next
case (Inr b)
then show ?thesis by simp
qed
lemma backconversion_sen:"N = convert_back_sentence (map (convert n) N)"
proof (induction N)
case Nil
then show ?case by auto
next
case (Cons a N)
have "convert_back_sentence (map (convert n) (a # N)) = (convert_back (convert n a))#N" using Cons by simp
then show ?case using backconversion by auto
qed
lemma rule_equality:"(N, \<alpha>) \<in> rules \<Longrightarrow> (convert_back N, convert_back_sentence \<alpha>) \<in> \<RR>"
proof -
assume "(N, \<alpha>) \<in> rules"
then obtain N' \<alpha>' where alt:"N = convert 0 N' \<and> \<alpha> = map (convert 0) \<alpha>' \<and> (N', \<alpha>') \<in> \<RR>" using rules_def by auto
then have "N' = convert_back N \<and> \<alpha>' = convert_back_sentence \<alpha>" using backconversion backconversion_sen by auto
with alt show ?thesis by auto
qed
lemma rule_equality':"\<forall> (N, \<alpha>) \<in> \<RR> . (convert 0 N, map (convert 0) \<alpha> ) \<in> rules"
using rules_def by auto
fun convertback_rule::"('a \<times> nat, 'b) rule \<Rightarrow> ('a, 'b) rule" where
"convertback_rule (N, a) = (convert_back N, convert_back_sentence a)"
lemma convertback_rule_split:"fst (convertback_rule s) = (convert_back (fst s))
\<and> snd (convertback_rule s) = (convert_back_sentence (snd s))"
apply(cases s) by simp
lemma start_conversion:"convert 0 \<SS> = grammar2.\<SS>"
using grammar2.\<SS>_def \<SS>_def local.start_def by auto
lemma start_conversion':"\<SS> = convert_back grammar2.\<SS>"
using grammar2.\<SS>_def \<SS>_def local.start_def by auto
lemma equality:"grammar2.Tree_wf T \<Longrightarrow> (s, a) = DeriveFromTree T \<Longrightarrow> \<exists> T'. Tree_wf T'
\<and> (convert_back s, convert_back_sentence a) = DeriveFromTree T'"
proof (induction T arbitrary: s a)
case (Leaf x)
then have 1:"grammar2.is_terminal x" by simp
from Leaf(2) have s:"s = x \<and> a = [x]" by simp
then have a:"convert_back_sentence a = [convert_back x]" by simp
from 1 have terminal:"is_terminal (convert_back x)" using terminal_equality by blast
obtain leaf where def:"leaf = (Leaf (convert_back x))" by blast
with terminal have wf:"Tree_wf leaf" by simp
from def a s have "(convert_back s, convert_back_sentence a )= DeriveFromTree leaf" by simp
with wf show ?case by blast
next
case (Inner x)
then have 1:"grammar2.is_nonterminal x" by simp
from Inner(2) have s:"s = x \<and> a = [x]" by simp
then have a:"convert_back_sentence a = [convert_back x]" by simp
from 1 have terminal:"is_nonterminal (convert_back x)" using nonterminal_equality by blast
obtain leaf where def:"leaf = (Inner (convert_back x))" by blast
with terminal have wf:"Tree_wf leaf" by simp
from def a s have "(convert_back s, convert_back_sentence a )= DeriveFromTree leaf" by simp
with wf show ?case by blast
next
case (DT r b)
then have validrule:"r \<in> rules" by auto (*need implication that *)
obtain N \<alpha> where na:"N = fst r \<and> \<alpha> = snd r" by blast
with validrule have valid_rule:"(convert_back N, convert_back_sentence \<alpha>) \<in> \<RR>" using rule_equality by auto
from na have \<alpha>_def:"(map (\<lambda> t . fst (DeriveFromTree t)) b) = \<alpha>"
using DT(2) grammar2.TreeSym_equal_DeriveFrom_root'' by fastforce
from DT.prems(1) have "snd r = concat (map grammar2.TreeSym b)" by simp (*hard to convert directly unless we define a specific conversion function*)
(*just needs the lemma that the derived root equals TreeSym*)
then have "snd r = concat (map (\<lambda> t . [fst (DeriveFromTree t)]) b)" using grammar2.TreeSym_implies_DeriveFrom_root by presburger
with na have "convert_back_sentence \<alpha> = concat (map (\<lambda> t . [convert_back (fst (DeriveFromTree t))]) b)" by auto
(*how to apply IH on all subtrees in list?*)
from DT(2) have "\<forall> i \<in> set b . grammar2.Tree_wf i" by (auto simp add: list_all_def)
with DT(1) have "\<forall> x \<in> set b .(\<exists>T'. Tree_wf T' \<and> convertback_rule (DeriveFromTree x)
= DeriveFromTree T') " by (metis convertback_rule.cases convertback_rule.simps)
then have "list_all (\<lambda> T . \<exists> T'. Tree_wf T' \<and> convertback_rule (DeriveFromTree T)
= DeriveFromTree T') b" by (simp add:list_all_def)
then obtain b'' where b'':"list_all2 (\<lambda> T' T . Tree_wf T' \<and> convertback_rule (DeriveFromTree T)
= DeriveFromTree T') b'' b" using implied_existence' by fast
then have "list_all2 (\<lambda> T' T . Tree_wf T') b'' b" using list_all2_implies by fast
then have wf_subtrees:"list_all Tree_wf b''" using list_all2_implies_list_all by auto
from b'' have "list_all2 (\<lambda> T' T . convertback_rule (DeriveFromTree T)
= DeriveFromTree T') b'' b" using list_all2_implies by fast
then have "list_all2 (\<lambda> T' T .
DeriveFromTree T' = (\<lambda> t. convertback_rule (DeriveFromTree t)) T) b'' b" using list_all2_mono by fastforce
with list_all2_map2 have 2:"map (\<lambda> t. convertback_rule (DeriveFromTree t)) b = map DeriveFromTree b''" by force
then have "map fst (map (\<lambda> t. convertback_rule (DeriveFromTree t)) b) = map fst (map DeriveFromTree b'')" by simp
then have "map (\<lambda> t. fst (convertback_rule (DeriveFromTree t))) b = map (\<lambda> t. fst (DeriveFromTree t)) b''"
using map_map [where ?f="fst" and ?g="(\<lambda> t. convertback_rule (DeriveFromTree t))" and ?xs="b"]
map_map [where ?f="fst" and ?g="DeriveFromTree" and ?xs="b''"] sorry (*the function compositions do not line up directly*)
then have "map (\<lambda> t . convert_back (fst (DeriveFromTree t))) b = map (\<lambda> t. fst (DeriveFromTree t)) b''"
using map_map convertback_rule_split by auto
then have "map convert_back (map (\<lambda> t . fst (DeriveFromTree t)) b) = map (\<lambda> t. fst (DeriveFromTree t)) b''"
using map_map [where ?f="convert_back" and ?g="\<lambda> t. fst (DeriveFromTree t)"] sorry (*again composition*)
then have valid_subtrees':"convert_back_sentence \<alpha> = concat (map TreeSym b'')" using TreeSym_equal_DeriveFrom_root''
\<alpha>_def by auto
from DT.prems(2) have a_def:"a = (concat( map (\<lambda>subtree . (snd (DeriveFromTree subtree))) b))" by simp
from 2 have "map snd (map (\<lambda> t. convertback_rule (DeriveFromTree t)) b) = map snd (map DeriveFromTree b'')" by simp
then have "map (\<lambda> t. snd (convertback_rule (DeriveFromTree t))) b = map (\<lambda> t. snd (DeriveFromTree t)) b''"
using map_map [where ?f="fst"] sorry (*should just be this rule application*)
then have subderiv_eq:"map (\<lambda> t . convert_back_sentence (snd (DeriveFromTree t))) b = map (\<lambda> t. snd (DeriveFromTree t)) b''"
using map_map convertback_rule_split by auto
then have "map convert_back_sentence (map (\<lambda> t . snd (DeriveFromTree t)) b) = map (\<lambda> t. snd (DeriveFromTree t)) b''"
using map_map [where ?f="convert_back" and ?g="\<lambda> t. fst (DeriveFromTree t)"] sorry
(*using concat_map or something*)
from map_concat [where ?f="convert_back" and ?xs="(map (\<lambda> t . snd (DeriveFromTree t)) b)"]
have "convert_back_sentence a = (concat( map (convert_back_sentence )
(map (\<lambda>subtree . (snd (DeriveFromTree subtree))) b)))"
by (metis convert_back_sentence.simps map_eq_conv a_def)
then have deriv:"convert_back_sentence a = (concat( map (\<lambda>subtree . (convert_back_sentence (snd (DeriveFromTree subtree)))) b))"
using map_map [where ?f="convert_back_sentence" and ?g="(\<lambda>subtree . (snd (DeriveFromTree subtree)))" and ?xs="b"] sorry
(*using \<open>map (\<lambda>t. convert_back_sentence (snd (DeriveFromTree t))) b = map (\<lambda>t. snd (DeriveFromTree t)) b''\<close>
\<open>map convert_back_sentence (map (\<lambda>t. snd (DeriveFromTree t)) b) = map (\<lambda>t. snd (DeriveFromTree t)) b''\<close> by presburger
*)
then have derive':"convert_back_sentence a = (concat( map (\<lambda>subtree . (snd (DeriveFromTree subtree))) b''))"
using subderiv_eq by simp
obtain T' where tree:"T' = (DT (convert_back N, convert_back_sentence \<alpha>) b'')" by blast
then have wf:"Tree_wf T'" using valid_rule valid_subtrees' wf_subtrees by auto
from na have "N = s" using DT.prems(2) by simp
with tree derive' have "DeriveFromTree T' = (convert_back s, convert_back_sentence a)" by simp
with wf show ?case by auto
qed
fun convert_tree::"('a \<times> nat, 'b) derivtree \<Rightarrow> ('a, 'b) derivtree" where
"convert_tree (Leaf s) = Leaf (convert_back s)"|
"convert_tree (Inner s) = Inner (convert_back s)"|
"convert_tree (DT r b) = DT (convertback_rule r) (map convert_tree b)"
lemma equality':"Tree_wf T \<Longrightarrow> \<exists> T' . grammar2.Tree_wf T'
\<and> T = (convert_tree T') \<and> (fst (DeriveFromTree T')) = convert 0 (fst (DeriveFromTree T))
\<and> (snd (DeriveFromTree T')) = map (convert 0) (snd (DeriveFromTree T)) "
proof (induction T)
case (Leaf x)
then have 1:"is_terminal x" by simp
from 1 have "grammar2.is_terminal (convert 0 x)" using terminal_equality' by blast
then have 2:"grammar2.Tree_wf (Leaf (convert 0 x))" by simp
have 3:"convert_back (convert 0 x) = x" by (metis backconversion)
then show ?case using 2 by fastforce
next
case (Inner x)
then have 1:"is_nonterminal x" by simp
from 1 have "grammar2.is_nonterminal (convert 0 x)" using nonterminal_equality' by blast
then have 2:"grammar2.Tree_wf (Inner (convert 0 x))" by simp
have 3:"convert_back (convert 0 x) = x" by (metis backconversion)
then show ?case using 2 by fastforce
next
case (DT r b)
then have r:"r \<in> \<RR> \<and> snd r = (map (\<lambda> t. (fst (DeriveFromTree t))) b)"
by (simp add :TreeSym_equal_DeriveFrom_root'')
obtain N \<alpha> where na:"(N, \<alpha>) = r" apply(cases r) by blast
with r have valid_rule:"(convert 0 N, map (convert 0) \<alpha>) \<in> rules" using rule_equality' by blast
have convertback:"convertback_rule (convert 0 N, map (convert 0) \<alpha>) = r" using na
by (metis backconversion backconversion_sen convertback_rule.simps)
(*subtrees*)
from DT(2) have "list_all Tree_wf b" by simp
then have "list_all ( \<lambda> t. \<exists>T'. grammar2.Tree_wf T' \<and> t = convert_tree T' \<and>
fst (DeriveFromTree T') = convert 0 (fst (DeriveFromTree t))
\<and> snd (DeriveFromTree T') = map (convert 0) (snd (DeriveFromTree t))) b"
using DT(1) list.pred_mono_strong by force
then obtain b' where b':"list_all2 ( \<lambda>T' t. grammar2.Tree_wf T' \<and> t = convert_tree T' \<and>
fst (DeriveFromTree T') = convert 0 (fst (DeriveFromTree t))
\<and> snd (DeriveFromTree T') = map (convert 0) (snd (DeriveFromTree t)))b' b" using implied_existence' by fast
then have 1:"list_all grammar2.Tree_wf b'" and 2:"list_all2 ( \<lambda>T' t. convert_tree T' = t)b' b"
and 3:"list_all2 ( \<lambda>T' t. fst (DeriveFromTree T') = convert 0 (fst (DeriveFromTree t)) ) b' b"
and sndderiv:"list_all2 ( \<lambda>T' t. snd (DeriveFromTree T') = map (convert 0) (snd (DeriveFromTree t)) ) b' b"
apply (metis (no_types, lifting) list_all2_implies_list_all list_all2_implies)
using b' list_all2_implies apply fast using b' list_all2_implies apply fast using list_all2_implies b' by fast
from 2 list_all2_map have 4:"map convert_tree b' = b" by blast
from 3 have "map (\<lambda> t. fst (DeriveFromTree t)) b' = map (\<lambda> t. convert 0 (fst (DeriveFromTree t))) b"
by (metis list_all2_map2)
then have "concat (map grammar2.TreeSym b') = map (convert 0) (map (\<lambda> t. (fst (DeriveFromTree t))) b)"
using map_map grammar2.TreeSym_equal_DeriveFrom_root'' by simp
then have subtree_deriv:"concat (map grammar2.TreeSym b') = map (convert 0) \<alpha>" using na r by auto
from sndderiv have "map (\<lambda> t. snd (DeriveFromTree t)) b' = map (\<lambda> t. map (convert 0) (snd (DeriveFromTree t))) b"
by (metis list_all2_map2)
then have "(map (\<lambda> t. snd (DeriveFromTree t)) b') = map (\<lambda> t. map (convert 0) (snd (DeriveFromTree t))) b" by blast
with map_map have "(map (\<lambda> t. snd (DeriveFromTree t)) b') = map (map (convert 0)) (map (\<lambda> t. (snd (DeriveFromTree t))) b)"
by auto
then have "concat (map (\<lambda> t. snd (DeriveFromTree t)) b') = concat (map
(map (convert 0)) (map (\<lambda> t. (snd (DeriveFromTree t))) b))" by simp
with map_concat have final:"concat (map (\<lambda> t. snd (DeriveFromTree t)) b') = map (convert 0)
(concat (map (\<lambda> t. snd (DeriveFromTree t)) b))" by metis
obtain T where T:"T = DT (convert 0 N, map (convert 0) \<alpha>) b'" by blast
then have wf:"grammar2.Tree_wf T" using valid_rule subtree_deriv 1 by auto
from convertback 4 T have conv:"convert_tree T = (DT r b)" by simp
from T na have root:"fst (DeriveFromTree T) = convert 0 (fst (DeriveFromTree (DT r b)))" by auto
from T na final have deriv:"snd (DeriveFromTree T) = map (convert 0) (snd (DeriveFromTree (DT r b)))" by auto
then show ?case using wf conv root by auto
qed
lemma terminal_equality2_help:"grammar2.is_terminal x \<Longrightarrow> (is_terminal \<circ> convert_back) x"
using terminal_equality by simp
lemma grammar2_is_word_implies_is_word:"grammar2.is_word xa \<Longrightarrow> is_word (convert_back_sentence xa)"
apply(simp add: grammar2.is_word_def is_word_def)
using terminal_equality2_help list_all_map [where ?f="is_terminal" and ?g="convert_back" and ?b="xa"]
list_all_implies list.pred_map by blast
lemma grammar2_derivations:
assumes "grammar2.is_derivation xa"
shows "is_derivation (convert_back_sentence xa)"
proof -
from assms obtain T where" DeriveFromTree T = (grammar2.\<SS>, xa) \<and> grammar2.Tree_wf T"
by (auto simp add: grammar2.is_derivation_def dest!: grammar2.derives_implies_Derivation grammar2.DerivationSymbol_implies_DerivTree
[OF _ grammar2.\<SS>_symbol])
then obtain T' where "Tree_wf T' \<and> DeriveFromTree T' = (\<SS>,
convert_back_sentence xa)" by (metis equality start_conversion')
then show ?thesis using DerivationTree_implies_Derivation
Derivation_implies_derives is_derivation_def by fastforce
qed
lemma grammar2_is_word_derivations:"grammar2.is_word xa \<Longrightarrow> grammar2.is_derivation xa \<Longrightarrow>
is_derivation (convert_back_sentence xa) \<and> is_word (convert_back_sentence xa)"
using grammar2_is_word_implies_is_word grammar2_derivations by auto
lemma terminal_equality2_help':"is_terminal x \<Longrightarrow> (grammar2.is_terminal \<circ> (convert 0)) x"
using terminal_equality' by simp
lemma is_word_implies_grammar2_is_word:"is_word xa \<Longrightarrow> grammar2.is_word (map (convert 0) xa)"
apply(simp add: grammar2.is_word_def is_word_def del: convert.simps)
using terminal_equality2_help' list_all_map [where ?f="grammar2.is_terminal" and ?g="convert 0" and ?b="xa"]
list_all_implies list.pred_map by blast
lemma word_derivat:
assumes "DeriveFromTree tree = (\<SS>, xa) " "Tree_wf tree"
(*would use completeness lemma*)
shows "grammar2.is_derivation (map (convert 0) xa)"
proof -
from assms obtain T' where T':"grammar2.Tree_wf T'
\<and> tree = (convert_tree T') \<and> (fst (DeriveFromTree T')) = convert 0 \<SS> \<and>
(snd (DeriveFromTree T')) = map (convert 0) xa" using equality' by force
then have "DeriveFromTree T'= (grammar2.\<SS>, map (convert 0) xa)"
by (metis prod.collapse start_conversion)
then show ?thesis using T' grammar2.DerivationTree_implies_Derivation
grammar2.Derivation_implies_derives grammar2.is_derivation_def by fastforce
qed
lemma grammar2_is_word_derivations':"is_word xa \<Longrightarrow> DeriveFromTree tree = (\<SS>, xa) \<Longrightarrow>Tree_wf tree \<Longrightarrow>
grammar2.is_word (map (convert 0)xa ) \<and> grammar2.is_derivation (map (convert 0) xa)"
using word_derivat is_word_implies_grammar2_is_word by blast
lemma is_word_projr:"is_word xa \<Longrightarrow> map projr xa = map projr (map (convert 0) xa)"
apply(auto simp add: is_word_def is_terminal_def)
by (metis old.sum.simps(5) old.sum.simps(6) projr_def sum.exhaust_sel)
lemma is_word_projr':"grammar2.is_word xa \<Longrightarrow> map projr xa = map projr (convert_back_sentence xa)"
apply(auto simp add: grammar2.is_word_def grammar2.is_terminal_def)
by (metis old.sum.simps(5) old.sum.simps(6) projr_def sum.exhaust_sel)
theorem grammar2_eq:"grammar2.\<L>_t = \<L>_t"
apply(auto simp add: \<L>_t_def grammar2.\<L>_t_def \<L>_def grammar2.\<L>_def is_derivation_def
dest!: derives_implies_Derivation DerivationSymbol_implies_DerivTree [OF _ \<SS>_symbol])
using grammar2_is_word_derivations is_word_projr' is_derivation_def apply fast
apply(auto simp add: grammar2.is_derivation_def
dest!: grammar2.derives_implies_Derivation grammar2.DerivationSymbol_implies_DerivTree )
using grammar2_is_word_derivations' is_word_projr grammar2.is_derivation_def by blast
end
(*main invariant of rewriting:*)
end |
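The theory above renames nonterminals by tagging them with a nat (convert) and strips the tag again (convert_back); the backconversion lemmas state that stripping after tagging is the identity. As a hedged illustration (not part of the original sources), the round-trip can be sketched in Python with Inl/Inr modelled as tagged pairs:

```python
def convert(a, sym):
    """Tag a nonterminal (Inl) with index a; leave terminals (Inr) unchanged."""
    tag, v = sym
    return ('Inl', (v, a)) if tag == 'Inl' else sym

def convert_back(sym):
    """Strip the index from a tagged nonterminal; terminals pass through."""
    tag, v = sym
    return ('Inl', v[0]) if tag == 'Inl' else sym

def convert_back_sentence(sent):
    # mirrors convert_back_sentence: convert_back mapped over a symbol list
    return [convert_back(s) for s in sent]
```

Mapping convert 0 over a sentence and then convert_back_sentence recovers the original list, which is the content of the backconversion_sen lemma.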
State Before: k : Type u_3
V : Type u_4
P : Type u_2
inst✝³ : Ring k
inst✝² : AddCommGroup V
inst✝¹ : Module k V
S : AffineSpace V P
ι : Type u_1
s : Finset ι
ι₂ : Type ?u.365709
s₂ : Finset ι₂
w : ι → k
p : ι → P
pred : ι → Prop
inst✝ : DecidablePred pred
⊢ (↑(affineCombination k (Finset.subtype pred s) fun i => p ↑i) fun i => w ↑i) =
↑(affineCombination k (filter pred s) p) w
State After: no goals
Tactic: rw [affineCombination_apply, affineCombination_apply, weightedVSubOfPoint_subtype_eq_filter] |
{-# OPTIONS --cubical --safe --postfix-projections #-}
module Cardinality.Finite.Structure where
open import Prelude
open import Data.Fin
open import Data.Nat
open import Data.Nat.Properties
private
variable
n m : ℕ
liftˡ : ∀ n m → Fin m → Fin (n + m)
liftˡ zero m x = x
liftˡ (suc n) m x = fs (liftˡ n m x)
liftʳ : ∀ n m → Fin n → Fin (n + m)
liftʳ (suc n) m f0 = f0
liftʳ (suc n) m (fs x) = fs (liftʳ n m x)
mapl : (A → B) → A ⊎ C → B ⊎ C
mapl f (inl x) = inl (f x)
mapl f (inr x) = inr x
fin-sum-to : ∀ n m → Fin n ⊎ Fin m → Fin (n + m)
fin-sum-to n m = either (liftʳ n m) (liftˡ n m)
fin-sum-from : ∀ n m → Fin (n + m) → Fin n ⊎ Fin m
fin-sum-from zero m x = inr x
fin-sum-from (suc n) m f0 = inl f0
fin-sum-from (suc n) m (fs x) = mapl fs (fin-sum-from n m x)
mapl-distrib : ∀ {a b c d} {A : Type a} {B : Type b} {C : Type c} {D : Type d} (xs : A ⊎ B) (h : A → C) (f : C → D) (g : B → D) → either′ f g (mapl h xs) ≡ either′ (f ∘′ h) g xs
mapl-distrib (inl x) h f g = refl
mapl-distrib (inr x) h f g = refl
either-distrib : ∀ {d} {D : Type d} (f : A → C) (g : B → C) (h : C → D) (xs : A ⊎ B) → either′ (h ∘ f) (h ∘ g) xs ≡ h (either′ f g xs)
either-distrib f g h (inl x) = refl
either-distrib f g h (inr x) = refl
open import Path.Reasoning
fin-sum-to-from : ∀ n m x → fin-sum-to n m (fin-sum-from n m x) ≡ x
fin-sum-to-from zero m x = refl
fin-sum-to-from (suc n) m f0 = refl
fin-sum-to-from (suc n) m (fs x) =
fin-sum-to (suc n) m (mapl fs (fin-sum-from n m x)) ≡⟨ mapl-distrib (fin-sum-from n m x) fs (liftʳ (suc n) m) (liftˡ (suc n) m) ⟩
either (liftʳ (suc n) m ∘ fs) (liftˡ (suc n) m) (fin-sum-from n m x) ≡⟨⟩
either (fs ∘ liftʳ n m) (fs ∘ liftˡ n m) (fin-sum-from n m x) ≡⟨ either-distrib (liftʳ n m) (liftˡ n m) fs (fin-sum-from n m x) ⟩
fs (either (liftʳ n m) (liftˡ n m) (fin-sum-from n m x)) ≡⟨ cong fs (fin-sum-to-from n m x) ⟩
fs x ∎
-- fin-sum-from-to : ∀ n m x → fin-sum-from n m (fin-sum-to n m x) ≡ x
-- fin-sum-from-to n m (inl x) = {!!}
-- fin-sum-from-to n m (inr x) = {!!}
-- fin-sum : ∀ n m → Fin n ⊎ Fin m ⇔ Fin (n + m)
-- fin-sum n m .fun = fin-sum-to n m
-- fin-sum n m .inv = fin-sum-from n m
-- fin-sum n m .rightInv = fin-sum-to-from n m
-- fin-sum n m .leftInv = fin-sum-from-to n m
|
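The Agda development above builds the embeddings liftʳ and liftˡ and (partially commented out) the isomorphism Fin n ⊎ Fin m ⇔ Fin (n + m). As a hedged illustration with names of my own choosing, the same maps and their round-trip can be checked concretely in Python:

```python
def fin_sum_to(n, m, s):
    """either (liftʳ n m) (liftˡ n m): tagged value -> index in Fin (n + m)."""
    tag, x = s
    if tag == 'inl':          # liftʳ embeds Fin n unchanged
        assert 0 <= x < n
        return x
    assert 0 <= x < m         # liftˡ shifts Fin m past the first n indices
    return n + x

def fin_sum_from(n, m, x):
    """Split an index in Fin (n + m) back into Fin n ⊎ Fin m."""
    return ('inl', x) if x < n else ('inr', x - n)
```

Exhaustively checking both round-trips for small n and m mirrors the fin-sum-to-from lemma above (and the commented-out fin-sum-from-to direction).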
[STATEMENT]
lemma OclNotEmpty_invalid[simp,code_unfold]:"(invalid->notEmpty\<^sub>S\<^sub>e\<^sub>t()) = invalid"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. invalid->notEmpty\<^sub>S\<^sub>e\<^sub>t() = invalid
[PROOF STEP]
by(simp add: OclNotEmpty_def) |
theory OpSem
imports Main HOL.Rat (*SetADT*)
begin
type_synonym TS = rat (* Timestamp *)
type_synonym T = nat (* Thread ID *)
type_synonym L = nat (* Location *)
type_synonym V = nat
definition "null = (0 :: nat)"
(* bool: true = release/acquire, false = relaxed *)
datatype action =
Read bool L V
| Write bool L V
| Update L V V
fun avar :: "action \<Rightarrow> L" where
"avar (Read _ x v) = x"
| "avar (Write _ x v) = x"
| "avar (Update x e v) = x"
fun wr_val :: "action \<Rightarrow> V option" where
"wr_val (Write _ _ v) = Some v"
| "wr_val (Update _ _ v) = Some v"
| "wr_val _ = None"
fun rd_val :: "action \<Rightarrow> V option" where
"rd_val (Read _ _ v) = Some v"
| "rd_val (Update _ v _) = Some v"
| "rd_val _ = None"
fun isRA :: "action \<Rightarrow> bool" where
"isRA (Read b _ _) = b"
| "isRA (Write b _ _) = b"
| "isRA (Update _ _ _) = True"
fun isWr :: "action \<Rightarrow> bool" where
"isWr (Read _ _ _) = False"
| "isWr (Write _ _ _) = True"
| "isWr (Update _ _ _) = False"
fun isRd :: "action \<Rightarrow> bool" where
"isRd (Read _ _ _) = True"
| "isRd (Write _ _ _) = False"
| "isRd (Update _ _ _) = False"
fun isUp :: "action \<Rightarrow> bool" where
"isUp (Read _ _ _) = False"
| "isUp (Write _ _ _) = False"
| "isUp (Update _ _ _) = True"
abbreviation "reads a \<equiv> a \<in> (dom rd_val)"
abbreviation "writes a \<equiv> a \<in> (dom wr_val)"
type_synonym View = "L \<Rightarrow> T"
type_synonym event = "TS \<times> T \<times> action"
definition time_stamp :: "event \<Rightarrow> TS" where "time_stamp e \<equiv> fst e"
definition tid :: "event \<Rightarrow> T" where "tid e \<equiv> fst (snd e)"
definition act :: "event \<Rightarrow> action" where "act e \<equiv> snd (snd e)"
definition var :: "L \<times> TS \<Rightarrow> L" where "var = fst"
definition tst :: "L \<times> TS \<Rightarrow> TS" where "tst = snd"
record write_record =
val :: V
is_releasing :: bool
record surrey_state =
writes :: "(L \<times> TS) set"
thrView :: "T \<Rightarrow> L \<Rightarrow> (L \<times> TS)"
modView :: "(L \<times> TS) \<Rightarrow> L \<Rightarrow> (L \<times> TS)"
mods :: "(L \<times> TS) \<Rightarrow> write_record"
covered :: "(L \<times> TS) set"
definition "value \<sigma> w \<equiv> val (mods \<sigma> w)"
definition "releasing \<sigma> w \<equiv> is_releasing (mods \<sigma> w)"
definition "writes_on \<sigma> x = {w . var w = x \<and> w \<in> writes \<sigma>}"
definition "visible_writes \<sigma> t x \<equiv> {w \<in> writes_on \<sigma> x . tst(thrView \<sigma> t x) \<le> tst w}"
lemma writes_on_var [simp]: "w \<in> writes_on \<sigma> x \<Longrightarrow> var w = x"
by (simp add: writes_on_def)
lemma visible_var [simp]: "w \<in> visible_writes \<sigma> t x \<Longrightarrow> var w = x"
by (auto simp add: visible_writes_def)
lemma visible_writes_in_writes: "visible_writes \<sigma> t x \<subseteq> writes \<sigma>"
using visible_writes_def writes_on_def by fastforce
definition "valid_fresh_ts \<sigma> w ts' \<equiv> tst w < ts' \<and> (\<forall> w' \<in> writes_on \<sigma> (var w). tst w < tst w' \<longrightarrow> ts' < tst w')"
definition "ts_oride m m' x \<equiv> if tst (m x) \<le> tst (m' x) then m' x else m x"
definition rev_app :: "'s \<Rightarrow> ('s \<Rightarrow> 't) \<Rightarrow> 't" (infixl ";;" 150)
where
"rev_app s f \<equiv> f s"
definition
"update_thrView t nv \<sigma> \<equiv> \<sigma> \<lparr> thrView := (thrView \<sigma>)(t := nv)\<rparr>"
definition
"update_modView w nv \<sigma> \<equiv> \<sigma> \<lparr> modView := (modView \<sigma>)(w := nv) \<rparr>"
definition
"update_mods w nwr \<sigma> \<equiv> \<sigma> \<lparr> mods := (mods \<sigma>)(w := nwr)\<rparr> \<lparr>writes := writes \<sigma> \<union> {w}\<rparr>"
definition
"add_cv w \<sigma> \<equiv> \<sigma> \<lparr> covered := covered \<sigma> \<union> {w}\<rparr>"
definition "syncing \<sigma> w b \<equiv> releasing \<sigma> w \<and> b"
lemma [simp]: "syncing \<sigma> w False = False"
by (simp add: syncing_def)
lemma [simp]: "syncing \<sigma> w True = releasing \<sigma> w"
by (simp add: syncing_def)
definition
"read_trans t b w \<sigma> \<equiv>
let new_thr_idx = (thrView \<sigma> t)(var w := w) in
let new_thr_idx' =
if syncing \<sigma> w b then ts_oride new_thr_idx (modView \<sigma> w) else new_thr_idx in
\<sigma> ;; update_thrView t new_thr_idx'"
lemma [simp]: "\<not> syncing \<sigma> w b \<Longrightarrow> thrView (read_trans t b w \<sigma>) t = (thrView \<sigma> t)(var w := w)"
by (simp add: read_trans_def rev_app_def update_thrView_def)
lemma syncing_thrView_read_trans [simp]: "syncing \<sigma> w b \<Longrightarrow>
thrView (read_trans t b w \<sigma>) t = ts_oride ((thrView \<sigma> t)(var w := w)) (modView \<sigma> w)"
by (simp add: read_trans_def rev_app_def update_thrView_def)
lemma [simp]: "t' \<noteq> t \<Longrightarrow> thrView (read_trans t b w \<sigma>) t' = (thrView \<sigma> t')"
apply (simp add: read_trans_def rev_app_def update_thrView_def)
by (metis fun_upd_other surrey_state.ext_inject surrey_state.surjective surrey_state.update_convs(2))
lemma [simp]: "var w \<noteq> x \<Longrightarrow> b = False \<Longrightarrow> thrView (read_trans t b w \<sigma>) t x = thrView \<sigma> t x"
by (simp add: read_trans_def rev_app_def update_thrView_def Let_def ts_oride_def)
lemma [simp]: "modView (read_trans t b w \<sigma>) = modView \<sigma>"
by(simp add: read_trans_def Let_def rev_app_def update_thrView_def)
lemma [simp]: "mods (read_trans t b w \<sigma>) = mods \<sigma>"
by(simp add: read_trans_def Let_def rev_app_def update_thrView_def)
lemma [simp]: "covered (read_trans t b w \<sigma>) = covered \<sigma>"
by(simp add: read_trans_def Let_def rev_app_def update_thrView_def)
lemma [simp]: "writes (read_trans t b w \<sigma>) = writes \<sigma>"
by(simp add: read_trans_def Let_def rev_app_def update_thrView_def)
lemma [simp]: "writes_on (read_trans t b w \<sigma>) x = writes_on \<sigma> x"
apply(unfold writes_on_def)
by(simp add: read_trans_def Let_def rev_app_def update_thrView_def)
lemma [simp]: "value (read_trans t False w \<sigma>) x = value \<sigma> x"
apply(unfold value_def)
by(simp add: read_trans_def Let_def rev_app_def update_thrView_def)
definition
"write_trans t b w v \<sigma> ts' \<equiv>
\<sigma> ;; update_thrView t ((thrView \<sigma> t)(var w := (var w, ts')))
;; update_modView (var w, ts') ((thrView \<sigma> t)(var w := (var w, ts')))
;; update_mods (var w, ts') \<lparr> val = v, is_releasing = b\<rparr>"
lemma [simp]: "thrView (write_trans t b w v \<sigma> ts') t = (thrView \<sigma> t)(var w := (var w, ts'))"
by (simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "t' \<noteq> t \<Longrightarrow> thrView (write_trans t b w v \<sigma> ts') t' = (thrView \<sigma> t')"
by (simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "var w' = var w \<Longrightarrow> tst w' = ts' \<Longrightarrow> modView (write_trans t b w v \<sigma> ts') w' = (thrView \<sigma> t)(var w := (var w, ts'))"
apply (simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
by (metis prod.collapse tst_def var_def)
lemma [simp]: "var w' \<noteq> var w \<Longrightarrow> modView (write_trans t b w v \<sigma> ts') w' = modView \<sigma> w'"
by (auto simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "var w' \<noteq> var w \<Longrightarrow> modView (write_trans t b w v \<sigma> ts') w' y = modView \<sigma> w' y"
by (auto simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "tst w' \<noteq> ts' \<Longrightarrow> modView (write_trans t b w v \<sigma> ts') w' = modView \<sigma> w'"
by (auto simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "var w' = var w \<Longrightarrow> tst w' = ts' \<Longrightarrow> mods (write_trans t b w v \<sigma> ts') w' = \<lparr> val = v, is_releasing = b\<rparr>"
apply (simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
by (metis prod.collapse tst_def var_def)
lemma [simp]: "var w' \<noteq> var w \<Longrightarrow> mods (write_trans t b w v \<sigma> ts') w' = mods \<sigma> w'"
by (auto simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "tst w' \<noteq> ts' \<Longrightarrow> mods (write_trans t b w v \<sigma> ts') w' = mods \<sigma> w'"
by (auto simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "covered (write_trans t b w v \<sigma> ts') = covered \<sigma>"
by (simp add: write_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "writes (write_trans t b w v \<sigma> ts') = writes \<sigma> \<union> {(var w, ts')}"
by(simp add: Let_def rev_app_def
write_trans_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "x = var w \<Longrightarrow> writes_on (write_trans t b w v \<sigma> ts') x = writes_on \<sigma> x \<union> {(var w, ts')}"
apply(unfold writes_on_def)
apply(simp add: read_trans_def Let_def rev_app_def update_thrView_def)
using Collect_cong by auto
lemma [simp]: "y \<noteq> var w \<Longrightarrow> writes_on (write_trans t b w v \<sigma> ts') y = writes_on \<sigma> y"
apply(unfold writes_on_def)
by(auto simp add: read_trans_def Let_def rev_app_def update_thrView_def)
lemma [simp]: "w \<in> writes_on \<sigma> y \<Longrightarrow> w \<in> writes_on (write_trans t b w' v \<sigma> ts') y"
apply(unfold writes_on_def)
by simp
definition "update_trans t w v' \<sigma> ts' \<equiv>
let new_thr_idx = (thrView \<sigma> t)(var w := (var w, ts')) in
let new_thr_idx' =
if releasing \<sigma> w
then
ts_oride new_thr_idx (modView \<sigma> w)
else
new_thr_idx
in
\<sigma> ;; update_thrView t new_thr_idx'
;; update_modView (var w, ts') new_thr_idx'
;; update_mods (var w, ts') \<lparr> val = v', is_releasing = True\<rparr>
;; add_cv w"
definition "update_trans_a t w v' \<sigma> ts' \<equiv>
let new_thr_idx = (thrView \<sigma> t)(var w := (var w, ts')) in
let new_thr_idx' =
if releasing \<sigma> w
then
ts_oride new_thr_idx (modView \<sigma> w)
else
new_thr_idx
in
\<sigma> ;; update_thrView t new_thr_idx'
;; update_modView (var w, ts') new_thr_idx'
;; update_mods (var w, ts') \<lparr> val = v', is_releasing = False\<rparr>
;; add_cv w"
definition "update_trans_r t w v' \<sigma> ts' \<equiv>
let new_thr_idx = (thrView \<sigma> t)(var w := (var w, ts')) in
\<sigma> ;; update_thrView t new_thr_idx
;; update_modView (var w, ts') new_thr_idx
;; update_mods (var w, ts') \<lparr> val = v', is_releasing = True\<rparr>
;; add_cv w"
definition "CAS t w cv nv' \<sigma> ts' \<equiv>
if value \<sigma> w = cv
then
(update_trans t w nv' \<sigma> ts', True)
else
(read_trans t False w \<sigma>, False)"
definition cas_step :: "T \<Rightarrow> L \<Rightarrow> V \<Rightarrow> V \<Rightarrow> surrey_state \<Rightarrow> surrey_state \<Rightarrow> bool"
where
"cas_step t l cv nv \<sigma> \<sigma>'\<equiv>
\<exists> w ts'. w \<in> visible_writes \<sigma> t l \<and>
w \<notin> covered \<sigma> \<and>
valid_fresh_ts \<sigma> w ts' \<and>
\<sigma>' = fst(CAS t w cv nv \<sigma> ts')"
lemma [simp]: "\<not> releasing \<sigma> w \<Longrightarrow>
thrView (update_trans t w v' \<sigma> ts') t = (thrView \<sigma> t)(var w := (var w, ts'))"
by (simp add: Let_def rev_app_def update_modView_def update_mods_def update_thrView_def update_trans_def add_cv_def)
lemma [simp]: "\<not> releasing \<sigma> w \<Longrightarrow>
thrView (update_trans_a t w v' \<sigma> ts') t = (thrView \<sigma> t)(var w := (var w, ts'))"
by (simp add: Let_def rev_app_def update_modView_def update_mods_def update_thrView_def update_trans_a_def add_cv_def)
lemma [simp]: " releasing \<sigma> w \<Longrightarrow>
thrView (update_trans t w v' \<sigma> ts') t = ts_oride ((thrView \<sigma> t)(var w := (var w, ts'))) (modView \<sigma> w)"
by (auto simp add: Let_def update_trans_def add_cv_def rev_app_def update_modView_def update_mods_def update_thrView_def)
lemma [simp]: " releasing \<sigma> w \<Longrightarrow>
thrView (update_trans_a t w v' \<sigma> ts') t = ts_oride ((thrView \<sigma> t)(var w := (var w, ts'))) (modView \<sigma> w)"
by (auto simp add: Let_def update_trans_a_def add_cv_def rev_app_def update_modView_def update_mods_def update_thrView_def)
lemma [simp]: "thrView (update_trans_r t w v' \<sigma> ts') t = (thrView \<sigma> t)(var w := (var w, ts'))"
by (simp add: Let_def rev_app_def update_modView_def update_mods_def update_thrView_def update_trans_r_def add_cv_def)
lemma [simp]: "t' \<noteq> t \<Longrightarrow> thrView (update_trans t w v' \<sigma> ts') t' = (thrView \<sigma> t')"
by (simp add: Let_def update_trans_def add_cv_def rev_app_def update_modView_def update_mods_def update_thrView_def)
lemma [simp]: "var w' = var w \<Longrightarrow> tst w' = ts' \<Longrightarrow>
\<not> releasing \<sigma> w \<Longrightarrow> modView (update_trans t w v' \<sigma> ts') w' = (thrView \<sigma> t)(var w := (var w, ts'))"
apply (simp add: Let_def update_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
by (metis prod.collapse tst_def var_def)
lemma [simp]: "var w' = var w \<Longrightarrow> tst w' = ts' \<Longrightarrow>
releasing \<sigma> w \<Longrightarrow> modView (update_trans t w v' \<sigma> ts') w' = ts_oride ((thrView \<sigma> t)(var w := (var w, ts'))) (modView \<sigma> w)"
apply (simp add: Let_def update_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
by (metis prod.collapse tst_def var_def)
lemma [simp]: "var w' \<noteq> var w \<Longrightarrow> modView (update_trans t w v' \<sigma> ts') w' = modView \<sigma> w'"
by (auto simp add: Let_def fun_upd_idem_iff fun_upd_twist rev_app_def update_modView_def update_mods_def update_thrView_def update_trans_def add_cv_def)
lemma [simp]: "tst w' \<noteq> ts' \<Longrightarrow> modView (update_trans t w v' \<sigma> ts') w' = modView \<sigma> w'"
by (auto simp add: Let_def update_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "var w' \<noteq> var w \<Longrightarrow> mods (update_trans t w v' \<sigma> ts') w' = mods \<sigma> w'"
by (auto simp add: Let_def update_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "tst w' \<noteq> ts' \<Longrightarrow> mods (update_trans t w v' \<sigma> ts') w' = mods \<sigma> w'"
by (auto simp add: Let_def update_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "var w' = var w \<Longrightarrow> tst w' = ts' \<Longrightarrow>
mods (update_trans t w v' \<sigma> ts') w' = \<lparr> val = v', is_releasing = True\<rparr>"
apply (simp add: Let_def update_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
by (metis prod.collapse tst_def var_def)
lemma [simp]: "covered (update_trans t w v' \<sigma> ts') = covered \<sigma> \<union> {w}"
by (simp add: Let_def update_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "writes (update_trans t w v' \<sigma> ts') = writes \<sigma> \<union> {(var w, ts')}"
by (auto simp add: Let_def update_trans_def rev_app_def add_cv_def update_thrView_def
update_modView_def update_mods_def)
lemma [simp]: "x = var w \<Longrightarrow> writes_on (update_trans t w v' \<sigma> ts') x = writes_on \<sigma> x \<union> {(x, ts')}"
apply(unfold writes_on_def)
apply(simp add: read_trans_def Let_def rev_app_def update_thrView_def)
using Collect_cong by auto
lemma [simp]: "y \<noteq> var w \<Longrightarrow> writes_on (update_trans t w v' \<sigma> ts') y = writes_on \<sigma> y"
apply(unfold writes_on_def)
by(auto simp add: read_trans_def Let_def rev_app_def update_thrView_def)
definition step :: "T \<Rightarrow> action \<Rightarrow> surrey_state \<Rightarrow> surrey_state \<Rightarrow> bool"
where
"step t a \<sigma> \<sigma>'\<equiv>
\<exists> w. w \<in> visible_writes \<sigma> t (avar a) \<and>
(case a of
Read b x v \<Rightarrow>
v = value \<sigma> w \<and>
\<sigma>' = read_trans t b w \<sigma>
| Write b x v \<Rightarrow> \<exists> ts'.
w \<notin> covered \<sigma> \<and>
valid_fresh_ts \<sigma> w ts' \<and>
\<sigma>' = write_trans t b w v \<sigma> ts'
| Update x v v' \<Rightarrow> \<exists> ts'.
v = value \<sigma> w \<and>
w \<notin> covered \<sigma> \<and>
valid_fresh_ts \<sigma> w ts' \<and>
\<sigma>' = update_trans t w v' \<sigma> ts')
"
lemma step_cases:
"step t a \<sigma> \<sigma>'
\<Longrightarrow>
\<lbrakk>\<And> w b x v. \<sigma>' = read_trans t b w \<sigma> \<and> a = Read b x v \<and> w \<in> visible_writes \<sigma> t (avar a) \<and>
v = value \<sigma> w \<Longrightarrow> P \<sigma> (read_trans t b w \<sigma>)\<rbrakk>
\<Longrightarrow>
\<lbrakk>\<And> w b x v ts'. \<sigma>' = write_trans t b w v \<sigma> ts' \<and> a = Write b x v \<and> w \<in> visible_writes \<sigma> t (avar a) \<and>
w \<notin> covered \<sigma> \<and>
valid_fresh_ts \<sigma> w ts'
\<Longrightarrow> P \<sigma> (write_trans t b w v \<sigma> ts') \<rbrakk>
\<Longrightarrow>
\<lbrakk>\<And> w x v v' ts'. \<sigma>' = update_trans t w v' \<sigma> ts' \<and> a = Update x v v' \<and>
w \<in> visible_writes \<sigma> t (avar a) \<and>
v = value \<sigma> w \<and>
w \<notin> covered \<sigma> \<and>
valid_fresh_ts \<sigma> w ts'
\<Longrightarrow> P \<sigma> (update_trans t w v' \<sigma> ts')\<rbrakk>
\<Longrightarrow> P \<sigma> \<sigma>'"
apply(simp add: step_def) apply(case_tac a) by auto
definition "WrX x v \<equiv> Write False x v"
definition "WrR x v \<equiv> Write True x v"
definition "RdX x v \<equiv> Read False x v"
definition "RdA x v \<equiv> Read True x v"
abbreviation WrX_state_abbr:: " surrey_state \<Rightarrow> L \<Rightarrow> V \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("_ [_ := _]\<^sub>_ _" [100,100,100,100,100])
where "\<sigma> [x := v]\<^sub>t \<sigma>' \<equiv> step t (WrX x v) \<sigma> \<sigma>'"
abbreviation WrR_state_abbr:: " surrey_state \<Rightarrow> L \<Rightarrow> V \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("_ [_ :=\<^sup>R _]\<^sub>_ _" [100,100,100,100,100])
where "\<sigma> [x :=\<^sup>R v]\<^sub>t \<sigma>' \<equiv> step t (WrR x v) \<sigma> \<sigma>'"
abbreviation RdX_state_abbr:: " surrey_state \<Rightarrow> V \<Rightarrow> L \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("_ [_ \<leftarrow> _]\<^sub>_ _" [100,100,100,100,100])
where "\<sigma> [r \<leftarrow> x]\<^sub>t \<sigma>' \<equiv> step t (RdX x r) \<sigma> \<sigma>'"
abbreviation RdA_state_abbr:: " surrey_state \<Rightarrow> V \<Rightarrow> L \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("_ [_ \<leftarrow>\<^sup>A _]\<^sub>_ _" [100,100,100,100,100])
where "\<sigma> [r \<leftarrow>\<^sup>A x]\<^sub>t \<sigma>' \<equiv> step t (RdA x r) \<sigma> \<sigma>'"
abbreviation Up_state_abbr:: " surrey_state \<Rightarrow> L \<Rightarrow> V \<Rightarrow> V \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("_ RMW[_, _, _]\<^sub>_ _" [100,100,100,100,100,100])
where "\<sigma> RMW[x, u, v]\<^sub>t \<sigma>' \<equiv> step t (Update x u v) \<sigma> \<sigma>'"
abbreviation Up_A_state_abbr:: " surrey_state \<Rightarrow> L \<Rightarrow> V \<Rightarrow> V \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("_ RMW\<^sup>A[_, _, _]\<^sub>_ _" [100,100,100,100,100,100])
where "\<sigma> RMW\<^sup>A[x, u, v]\<^sub>t \<sigma>' \<equiv> step t (Update x u v) \<sigma> \<sigma>'"
abbreviation Up_R_state_abbr:: " surrey_state \<Rightarrow> L \<Rightarrow> V \<Rightarrow> V \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("_ RMW\<^sup>R[_, _, _]\<^sub>_ _" [100,100,100,100,100,100])
where "\<sigma> RMW\<^sup>R[x, u, v]\<^sub>t \<sigma>' \<equiv> step t (Update x u v) \<sigma> \<sigma>'"
abbreviation Swap_state_abbr:: " surrey_state \<Rightarrow> L \<Rightarrow> V \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("_ SWAP[_, _]\<^sub>_ _" [100,100,100,100,100])
where "\<sigma> SWAP[x, v]\<^sub>t \<sigma>' \<equiv> \<exists>u . step t (Update x u v) \<sigma> \<sigma>'"
definition "initial_state \<sigma> I \<equiv>
\<exists> F . writes \<sigma> = {(x, F x) | x. True} \<and>
(\<forall> t x. thrView \<sigma> t x = (x, F x)) \<and>
(\<forall> w x. modView \<sigma> w x = (x, F x)) \<and>
(\<forall> w. mods \<sigma> w = \<lparr> val = I (var w), is_releasing = False \<rparr>) \<and>
covered \<sigma> = {}"
definition
"wfs \<sigma> \<equiv>
(\<forall> t x. thrView \<sigma> t x \<in> writes_on \<sigma> x) \<and>
(\<forall> w x. modView \<sigma> w x \<in> writes_on \<sigma> x) \<and>
(\<forall> x. finite(writes_on \<sigma> x)) \<and>
(\<forall> w. w \<in> writes \<sigma> \<longrightarrow> modView \<sigma> w (var w) = w) \<and>
covered \<sigma> \<subseteq> writes \<sigma>"
definition "lastWr \<sigma> x \<equiv> (x, Max (tst`(writes_on \<sigma> x)))"
definition "p_obs \<sigma> t x u \<equiv> \<exists> w. w \<in> visible_writes \<sigma> t x \<and> u = value \<sigma> w"
definition "d_obs \<sigma> view x u \<equiv> view x = lastWr \<sigma> x \<and> value \<sigma> (lastWr \<sigma> x) = u"
definition "d_obs_t \<sigma> t x u \<equiv> d_obs \<sigma> (thrView \<sigma> t) x u"
definition "c_obs \<sigma> x u t y v \<equiv>
\<forall> w \<in> visible_writes \<sigma> t x. value \<sigma> w = u \<longrightarrow>
d_obs \<sigma> (modView \<sigma> w) y v \<and>
releasing \<sigma> w"
abbreviation p_obs_abbr:: "nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> surrey_state \<Rightarrow> bool" ("[_ \<approx>\<^sub>_ _] _" [100, 100, 100, 100])
where "[x \<approx>\<^sub>t u] \<sigma> \<equiv> p_obs \<sigma> t x u"
abbreviation d_obs_abbr:: "nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> surrey_state \<Rightarrow> bool" ("[_ =\<^sub>_ _] _" [100, 100, 100, 100])
where "[x =\<^sub>t u] \<sigma> \<equiv> d_obs_t \<sigma> t x u"
abbreviation c_obs_abbr:: "nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> surrey_state \<Rightarrow> bool" ("[_ = _]\<^sub>_\<lparr>_ = _ \<rparr> _" [100, 100, 100, 100, 100, 100])
where "[x = u]\<^sub>t\<lparr>y = v\<rparr> \<sigma> \<equiv> c_obs \<sigma> x u t y v"
definition "covered_v \<sigma> x v \<equiv> \<forall> w . w \<in> writes_on \<sigma> x \<and> w \<notin> covered \<sigma> \<longrightarrow> w = lastWr \<sigma> x \<and> value \<sigma> w = v"
abbreviation covered_v_abbr:: "L \<Rightarrow> V \<Rightarrow> surrey_state \<Rightarrow> bool" ("cvd[_, _] _" [100, 100,100])
where "cvd[x, u] \<sigma> \<equiv> covered_v \<sigma> x u"
definition "mo w w'\<equiv> var(w) = var(w') \<and> tst(w) < tst(w')"
definition "enc \<sigma> view x u \<equiv> \<exists> w . w \<in> writes_on \<sigma> x \<and> tst(w) \<le> tst(view x) \<and> value \<sigma> w = u"
definition "enc_t \<sigma> t x u \<equiv> enc \<sigma> (thrView \<sigma> t) x u"
definition "p_vorder \<sigma> u x v \<equiv> \<exists> w w'. w \<in> writes_on \<sigma> x \<and> w' \<in> writes_on \<sigma> x \<and>
value \<sigma> w = u \<and> value \<sigma> w' = v \<and>
mo w w' "
definition "d_vorder \<sigma> u x v \<equiv> (\<forall> w w'. w \<in> writes_on \<sigma> x \<and> w' \<in> writes_on \<sigma> x \<and>
value \<sigma> w = u \<and> value \<sigma> w' = v \<longrightarrow>
mo w w') \<and> p_vorder \<sigma> u x v"
definition "init_val \<sigma> x v \<equiv>
\<exists> w . w \<in> writes_on \<sigma> x \<and>
(\<forall>w'\<in> writes_on \<sigma> x . w \<noteq> w' \<longrightarrow> mo w w') \<and>
value \<sigma> w = v"
definition "amo \<sigma> x u \<equiv> \<not> p_vorder \<sigma> u x u"
definition "no_val \<sigma> x i u \<equiv> init_val \<sigma> x i \<and> \<not> p_vorder \<sigma> i x u"
definition "last_val \<sigma> x u i \<equiv>
init_val \<sigma> x i \<and> p_vorder \<sigma> i x u
\<and> (\<forall> w. w \<noteq> u \<longrightarrow> \<not> p_vorder \<sigma> u x w)"
abbreviation p_vorder_abbr:: "V \<Rightarrow> L \<Rightarrow> V \<Rightarrow> surrey_state \<Rightarrow> bool" ("[_ \<leadsto>\<^sub>_ _] _" [100,100,100,100])
where "[u \<leadsto>\<^sub>x v] \<sigma> \<equiv> p_vorder \<sigma> u x v"
abbreviation d_vorder_abbr:: "V \<Rightarrow> L \<Rightarrow> V \<Rightarrow> surrey_state \<Rightarrow> bool" ("[_ \<hookrightarrow>\<^sub>_ _] _" [100,100,100,100])
where "[u \<hookrightarrow>\<^sub>x v] \<sigma> \<equiv> d_vorder \<sigma> u x v"
abbreviation amo_abbr:: "L \<Rightarrow> V \<Rightarrow> surrey_state \<Rightarrow> bool" ("[\<one>\<^sub>_ _] _" [100,100,100])
where "[\<one>\<^sub>x u] \<sigma> \<equiv> amo \<sigma> x u"
abbreviation no_abbr:: "L \<Rightarrow> V \<Rightarrow> V \<Rightarrow> surrey_state \<Rightarrow> bool" ("[\<zero>\<^sub>_ _]\<^sub>_ _" [100,100,100,100])
where "[\<zero>\<^sub>x u]\<^sub>i \<sigma> \<equiv> no_val \<sigma> x i u"
abbreviation enc_abbr:: "L \<Rightarrow> V \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("[en _ _]\<^sub>_ _" [100,100,100,100])
where "[en x u]\<^sub>t \<sigma> \<equiv> enc_t \<sigma> t x u"
abbreviation last_abbr:: "L \<Rightarrow> V \<Rightarrow> V \<Rightarrow> T \<Rightarrow> surrey_state \<Rightarrow> bool" ("[last _ _ _]\<^sub>_ _" [100, 100,100,100,100])
where "[last x i u]\<^sub>t \<sigma> \<equiv> last_val \<sigma> x u i"
abbreviation init_abbr:: "L \<Rightarrow> V \<Rightarrow> surrey_state \<Rightarrow> bool" ("[init _ _] _" [100, 100,100])
where "[init x v] \<sigma> \<equiv> init_val \<sigma> x v"
lemma initially_write_unique: "initial_state \<sigma> I \<Longrightarrow> w \<in> writes_on \<sigma> x \<Longrightarrow> w' \<in> writes_on \<sigma> x \<Longrightarrow> w = w'"
apply(unfold initial_state_def writes_on_def) by auto
lemma initial_wfs: assumes "initial_state \<sigma> I" shows "wfs \<sigma>"
apply(simp add: initial_state_def wfs_def)
apply(rule conjI)
using assms writes_on_def
apply (smt CollectI fst_conv initial_state_def var_def)
apply(rule conjI)
using assms writes_on_def initial_state_def apply simp
apply (smt CollectI fst_conv initial_state_def var_def writes_on_def)
apply rule using initially_write_unique[OF assms(1)]
apply (smt CollectI Collect_cong finite.emptyI finite.insertI insert_compr not_finite_existsD singletonD writes_on_def)
apply(rule conjI)
apply (smt CollectD Pair_inject assms initial_state_def)
using assms initial_state_def by fastforce
lemma [simp]: "wfs \<sigma> \<Longrightarrow> writes_on \<sigma> x \<noteq> {}"
apply(simp add: wfs_def)
by (metis empty_iff)
lemma [simp]: "wfs \<sigma> \<Longrightarrow> finite(writes_on \<sigma> x)"
by(simp add: wfs_def)
lemma [simp]: "wfs \<sigma> \<Longrightarrow> thrView \<sigma> t x \<in> writes_on \<sigma> x"
by(simp add: wfs_def)
lemma [simp]: "wfs \<sigma> \<Longrightarrow> modView \<sigma> w x \<in> writes_on \<sigma> x"
using wfs_def by blast
lemma [simp]: "wfs \<sigma> \<Longrightarrow> modView (read_trans t b w \<sigma>) w x \<in> writes_on (read_trans t b w \<sigma>) x"
by auto
lemma [simp]: "wfs \<sigma> \<Longrightarrow> writes_on \<sigma> x = writes_on (read_trans t b w \<sigma>) x"
by auto
lemma last_write_max: "wfs \<sigma> \<Longrightarrow> w \<in> writes_on \<sigma> x \<Longrightarrow> tst w \<le> tst (lastWr \<sigma> x)"
by(simp add: lastWr_def)
end
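The synchronising read and update transitions above merge views with `ts_oride`, which keeps, per location, whichever entry carries the newer timestamp (ties go to the second view). A minimal Python sketch, illustration only, modelling a view as a dict from locations to `(location, timestamp)` writes:

```python
# A view maps each location to a timestamped write (location, timestamp);
# ts_oride keeps, per location, whichever entry has the newer timestamp,
# with ties going to the second view, as in the definition above.
def ts_oride(m, m_prime):
    return {x: m_prime[x] if m[x][1] <= m_prime[x][1] else m[x] for x in m}

thr_view = {"x": ("x", 1), "y": ("y", 5)}
mod_view = {"x": ("x", 3), "y": ("y", 2)}
print(ts_oride(thr_view, mod_view))  # → {'x': ('x', 3), 'y': ('y', 5)}
```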
"""Collection of simple proxy types with intersection methods."""
import tensorflow as tf
import numpy as np
class AABB:
    """An axis-aligned bounding box [x_0,x_1]x[y_0,y_1]x[z_0,z_1] with b_0=[x_0,y_0,z_0], b_1=[x_1,y_1,z_1]."""

    def __init__(self, b_0: list, b_1: list):
        self.b_0 = tf.constant(b_0, dtype=tf.float32)
        self.b_1 = tf.constant(b_1, dtype=tf.float32)

    def __call__(self, rays_o: tf.Tensor, rays_d: tf.Tensor) -> tf.Tensor:
        """Intersect rays with the proxy; assumes the ray origins lie outside the AABB."""
        # Get ray/plane intersection distances along each axis
        inv_rays_d = 1. / rays_d
        t_0 = (self.b_0 - rays_o) * inv_rays_d
        t_1 = (self.b_1 - rays_o) * inv_rays_d
        # Reorder such that the nearer intersection is in t_0
        t_0_tmp = t_0
        t_0 = tf.where(t_0 < t_1, t_0, t_1)
        t_1 = tf.where(t_0_tmp > t_1, t_0_tmp, t_1)
        # Get box intersection from plane intersections
        t_0 = tf.reduce_max(t_0, axis=1)
        t_1 = tf.reduce_min(t_1, axis=1)
        # Set both t_0, t_1 to infinity if there is no intersection
        t_0_tmp = t_0
        t_0 = tf.where(t_0 < t_1, t_0, np.inf)
        t_1 = tf.where(t_0_tmp < t_1, t_1, np.inf)
        return tf.stack([t_0, t_1], -1)
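The same slab method can be sketched without TensorFlow. This NumPy stand-in (`aabb_intersect` is an illustrative name, not part of the module) mirrors the tensor version above and shows the expected entry/exit distances:

```python
import numpy as np

def aabb_intersect(b_0, b_1, rays_o, rays_d):
    """Slab-method ray/AABB intersection mirroring AABB.__call__ above.

    b_0, b_1: (3,) box corners; rays_o, rays_d: (N, 3) origins/directions.
    Returns (N, 2) entry/exit distances; both are inf when a ray misses.
    """
    with np.errstate(divide="ignore"):
        inv_d = 1.0 / rays_d                 # per-axis inverse direction
    t_0 = (b_0 - rays_o) * inv_d             # distances to the b_0 planes
    t_1 = (b_1 - rays_o) * inv_d             # distances to the b_1 planes
    t_lo = np.minimum(t_0, t_1)              # nearer plane hit per axis
    t_hi = np.maximum(t_0, t_1)              # farther plane hit per axis
    t_near = t_lo.max(axis=1)                # last slab entry
    t_far = t_hi.min(axis=1)                 # first slab exit
    hit = t_near < t_far
    return np.stack([np.where(hit, t_near, np.inf),
                     np.where(hit, t_far, np.inf)], -1)

# a ray from the origin along +z enters the box at t=2 and leaves at t=4
print(aabb_intersect(np.array([-1., -1., 2.]), np.array([1., 1., 4.]),
                     np.array([[0., 0., 0.]]), np.array([[0., 0., 1.]])))
# → [[2. 4.]]
```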
module Data.Functor.Coyoneda
%access public export
data Coyoneda : (f : Type -> Type) -> (a : Type) -> Type where
Coyo : {b : Type} -> (b -> a) -> f b -> Coyoneda f a
Functor (Coyoneda f) where
map f (Coyo h c) = Coyo (f . h) c
liftCoyoneda : f a -> Coyoneda f a
liftCoyoneda f = Coyo id f
lowerCoyoneda : Functor f => Coyoneda f a -> f a
lowerCoyoneda (Coyo f c) = map f c
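For intuition: Coyoneda pairs one accumulated function with an untouched container, so successive `map`s compose in O(1), and `lowerCoyoneda` performs the single real traversal at the end. A Python analogue assuming lists as the functor (illustration only):

```python
# Coyoneda defers fmap: repeated maps compose instead of traversing,
# and fuse into one traversal when lowered.
class Coyoneda:
    def __init__(self, fn, fa):
        self.fn = fn              # the accumulated b -> a
        self.fa = fa              # the untouched container of b's

    def map(self, g):             # compose, don't traverse
        return Coyoneda(lambda x: g(self.fn(x)), self.fa)

    def lower(self):              # lowerCoyoneda: one real map at the end
        return [self.fn(x) for x in self.fa]

def lift(fa):                     # liftCoyoneda: start from the identity
    return Coyoneda(lambda x: x, fa)

print(lift([1, 2, 3]).map(lambda x: x + 1).map(lambda x: x * 2).lower())
# → [4, 6, 8]
```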
{-# OPTIONS --without-K --safe #-}
open import Categories.Category
module Categories.Diagram.Coequalizer {o ℓ e} (C : Category o ℓ e) where
open Category C
open HomReasoning
open import Level
private
variable
A B : Obj
h i : A ⇒ B
record Coequalizer (f g : A ⇒ B) : Set (o ⊔ ℓ ⊔ e) where
field
{obj} : Obj
arr : B ⇒ obj
equality : arr ∘ f ≈ arr ∘ g
coequalize : h ∘ f ≈ h ∘ g → obj ⇒ cod h
universal : ∀ {eq : h ∘ f ≈ h ∘ g} → h ≈ coequalize eq ∘ arr
unique : ∀ {eq : h ∘ f ≈ h ∘ g} → h ≈ i ∘ arr → i ≈ coequalize eq
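In Set, the coequalizer of f, g : A ⇒ B is the quotient of B by the smallest equivalence relation identifying f(a) with g(a); the `coequalize`, `universal` and `unique` fields then say any arrow coequalizing f and g factors uniquely through it. A small Python sketch via union-find (illustration only, not part of the library):

```python
def coequalizer(A, B, f, g):
    """Coequalizer of f, g : A -> B in Set: quotient B by f(a) ~ g(a).

    Returns (obj, arr): the set of class representatives and the
    coequalizing arrow B -> obj as a dict.
    """
    parent = {b: b for b in B}

    def find(x):                       # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a in A:                        # impose f(a) ~ g(a)
        ra, rb = find(f(a)), find(g(a))
        if ra != rb:
            parent[ra] = rb
    arr = {b: find(b) for b in B}      # arr . f == arr . g by construction
    return set(arr.values()), arr

# identifying n with n+1 for n in {0,1} collapses {0,1,2} into one class
obj, arr = coequalizer([0, 1], [0, 1, 2, 3], lambda n: n, lambda n: n + 1)
print(len(obj))  # → 2
```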
module Test.Blockchain
import Test.Asserts
import Data.Hash
import Blockchain
%access export
||| Simple tests use the Nat type
Hashable Nat where
saltedHash64 n = saltedHash64 (toIntegerNat n)
||| Simplified test blocks containing integers that can only be placed consecutively
||| on the blockchain `n :: (n - 1) :: ... :: 1 :: 0 :: Genesis`
record TestNatPayload where
constructor MkTestNatPayload
payload : Nat
Show TestNatPayload where
show = show . payload
Cast TestNatPayload Nat where
cast = payload
||| assumes i can only be linked to (i - 1)
hashPrevNat : Nat -> BlockHash
hashPrevNat Z = GenesisHash
hashPrevNat (S i) = hash . show $ i
||| ignoring serialization
BlockData TestNatPayload where
serialize = show
prevHash = hashPrevNat . payload
||| Note this is very tightly typed, requiring consecutive numbers, so code like this
||| ```idris example
||| natChain1 : HBlockchain [TestNatPayload, TestNatPayload, ()] (blockHash (MkTestNatPayload 1))
||| natChain1 = exampleMiner (MkTestNatPayload 1) (exampleMiner (MkTestNatPayload 1) (Single Genesis))
||| ```
||| will not compile
natChain : HBlockchain [TestNatPayload, TestNatPayload] (blockHash (MkTestNatPayload 1))
natChain = exampleMiner (MkTestNatPayload 1) (exampleMiner (MkTestNatPayload 0) (Single Block.Genesis))
testNatHashes : IO ()
testNatHashes =
assertEq ((take 2) . computeHashes $ natChain) (map (hash . show) $ [1,0])
testNatPayloads : IO ()
testNatPayloads =
assertEq (the (List Nat) (asList $ natChain)) [1,0]
||| Again note this is very tightly typed, for example this will not compile
||| ```idris example
||| natSimpleChain = (MkTestNatPayload 1) :: (MkTestNatPayload 1) :: SimpleBlockchain.Genesis
||| ```
natSimpleChain : SimpleBlockchain TestNatPayload (blockHash (MkTestNatPayload 1))
natSimpleChain = (MkTestNatPayload 1) :: (MkTestNatPayload 0) :: SimpleBlockchain.Genesis
extractSimpleHash : SimpleBlockchain a hash -> BlockHash
extractSimpleHash {hash} block = hash
testNatSimpleHash : IO ()
testNatSimpleHash =
assertEq (extractSimpleHash natSimpleChain) (hash . show $ 1)
||| Simplified test msg block
record MsgPayload where
constructor MkMsgPayload
prevMsgHash : BlockHash
msg : String
Show MsgPayload where
show block = (show . prevMsgHash $ block) ++ ":" ++ (msg block)
BlockData MsgPayload where
serialize = show
prevHash = prevMsgHash
||| Attempting to compile the following never returns with Idris 1.2:
||| ```idris example
||| msgChain : SimpleBlockchain MsgPayload (strHash "16FC397CF62F64D3:Hello")
||| msgChain = (MkMsgPayload GenesisHash "Hello") :: SimpleBlockchain.Genesis
||| ```
||| I do not expect much use for hardcoded values of the hash type variable.
||| The following works:
msgSimpleChain1 : (h ** SimpleBlockchain MsgPayload h)
msgSimpleChain1 = (_ ** (MkMsgPayload GenesisHash "Hello") :: SimpleBlockchain.Genesis)
testMsgSimpleChain1Hash : IO ()
testMsgSimpleChain1Hash =
case msgSimpleChain1 of
(h ** _) => assertEq h (strHash $ (show GenesisHash) ++ ":Hello")
mixedChain : (h ** HBlockchain [MsgPayload, TestNatPayload] h)
mixedChain = (_ ** exampleMiner (MkMsgPayload (strHash "0") "Hello") (exampleMiner (MkTestNatPayload 0) (Single Block.Genesis)))
testMixedChainHash : IO ()
testMixedChainHash =
case mixedChain of
(h ** _) => assertEq h (strHash (show (strHash "0") ++ ":Hello"))
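The linking rule these Idris types enforce statically (each block's `prevHash` must equal the hash of the block below it, bottoming out at the genesis hash) can be checked dynamically in a few lines of Python; `h`, `prev_hash` and `valid_chain` are illustrative stand-ins, not part of the test suite:

```python
import hashlib

GENESIS = "genesis"

def h(s):                          # stand-in for the Idris `hash . show`
    return hashlib.sha256(s.encode()).hexdigest()

def prev_hash(n):                  # analogue of hashPrevNat
    return GENESIS if n == 0 else h(str(n - 1))

def valid_chain(payloads):
    """Check each block links to the one below it; newest block first,
    mirroring `n :: (n - 1) :: ... :: 0 :: Genesis`."""
    for upper, lower in zip(payloads, payloads[1:]):
        if prev_hash(upper) != h(str(lower)):
            return False
    return payloads == [] or prev_hash(payloads[-1]) == GENESIS

print(valid_chain([2, 1, 0]))  # → True
print(valid_chain([2, 0]))     # → False
```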
theory FL_RTMax
imports RT FL.Finite_Linear FL.Finite_Linear_Tick_Param
begin
section \<open>Mapping from FL to RTMax\<close>
fun acc2maxref :: "'e acceptance \<Rightarrow> 'e rtrefusal" where
"acc2maxref \<bullet> = \<bullet>\<^sub>\<R>\<^sub>\<T>" |
"acc2maxref [X]\<^sub>\<F>\<^sub>\<L> = [{e. e \<notin> X}]\<^sub>\<R>\<^sub>\<T>"
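Concretely, `acc2maxref` complements an acceptance set into the maximal refusal `{e. e ∉ X}`, sending the null acceptance (bullet) to the null refusal. A Python sketch over an assumed finite alphabet (illustration only; `ALPHABET` and `BULLET` are hypothetical names):

```python
ALPHABET = {"a", "b", "c"}   # assumed finite event alphabet
BULLET = None                # stands for the null acceptance/refusal

def acc2maxref(acc):
    """Complement an acceptance set into the maximal refusal {e. e not in X}."""
    return BULLET if acc is BULLET else ALPHABET - acc

print(sorted(acc2maxref({"a"})))  # → ['b', 'c']
print(acc2maxref(BULLET))         # → None
```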
fun fl2rtm_trace :: "'e fltrace \<Rightarrow> 'e rttrace" where
"fl2rtm_trace \<langle>X\<rangle>\<^sub>\<F>\<^sub>\<L> = \<langle>acc2maxref(X)\<rangle>\<^sub>\<R>\<^sub>\<T>" |
"fl2rtm_trace (A #\<^sub>\<F>\<^sub>\<L> \<rho>) = ((acc2maxref (acceptance A)) #\<^sub>\<R>\<^sub>\<T> (event A) #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace \<rho>))"
(* need to enforce RT4 here, due to the differences in tick handling between our FL and RT *)
definition fl2rtm :: "'e rtevent fltraces \<Rightarrow> 'e rtevent rtprocess" where
"fl2rtm P = {\<rho>. \<exists>\<sigma>\<in>P. \<rho> = fl2rtm_trace \<sigma> }
\<union> {\<rho>. \<exists>\<sigma>\<in>P. \<exists>\<rho>' y. fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<and> \<rho> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> }"
lemma fl2rtm_rtWF:
"\<forall>x\<in>(fl2rtm P). rtWF x"
unfolding fl2rtm_def
proof (safe, simp_all)
fix \<sigma> :: "'a rtevent fltrace"
show "\<And>P. \<sigma> \<in> P \<Longrightarrow> rtWF (fl2rtm_trace \<sigma>)"
apply (induct \<sigma>, simp_all)
by (smt acc2maxref.simps(1) acc2maxref.simps(2) amember.elims(2) event_in_acceptance in_rtrefusal.elims(2) in_rtrefusal.simps(1) mem_Collect_eq rtrefusal.inject)
next
fix \<sigma> :: "'a rtevent fltrace" and \<rho>' y
show "\<And> P \<rho>'. \<sigma> \<in> P \<Longrightarrow> fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>
\<Longrightarrow> rtWF (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
proof (induct \<sigma>, auto, case_tac \<rho>', auto)
fix x1a \<sigma> \<rho>'
assume ind_hyp: "\<And>P \<rho>'. \<sigma> \<in> P \<Longrightarrow> fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
rtWF (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
show "((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace \<sigma>)) = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
rtWF (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
apply (induct \<rho>', auto)
apply (metis (no_types, lifting) acc2maxref.elims acc2maxref.simps(1) amember.simps(2) event_in_acceptance in_rtrefusal.elims(2) in_rtrefusal.simps(1) mem_Collect_eq rtrefusal.inject)
using ind_hyp by blast
qed
qed

(* fl2rtm satisfies MRT0: for an FL0- and FL1-healthy process, the singleton
   trace with the null refusal is in the image. *)
lemma fl2rtm_MRT0:
assumes "FL0 P" "FL1 P"
shows "MRT0 (fl2rtm P)"
proof -
have "\<langle>\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> \<in> P"
using FL0_FL1_bullet_in assms by blast
then have "\<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> (fl2rtm P)"
unfolding fl2rtm_def by (safe, simp_all, rule_tac x="\<langle>\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L>" in bexI, simp_all)
then show "MRT0 (fl2rtm P)"
unfolding MRT0_def by auto
qed

(* fl2rtm_trace is an order embedding: the fltrace prefix order coincides
   with the maximal-refusal order on the translated traces. *)
lemma fl2rtm_trace_monotonic:
"(\<rho>' \<le> \<rho>) = (fl2rtm_trace \<rho>' \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> fl2rtm_trace \<rho>)"
apply (induct \<rho>' \<rho> rule:less_eq_fltrace.induct, auto)
apply (metis (full_types) acc2maxref.simps(1) acc2maxref.simps(2) leq_rttrace_max.simps(1) leq_rttrace_max.simps(2) less_eq_acceptance.elims(2))
apply (case_tac x, auto, case_tac y, auto)
apply (case_tac x, auto, case_tac y, auto, case_tac a, auto, presburger+)
apply (case_tac x, auto, case_tac y, auto, case_tac a, auto)
apply (case_tac x, auto, case_tac y, auto)
apply (metis acc2maxref.elims acceptance_event acceptance_set aevent_less_eq_iff_components amember.simps(1) leq_rttrace_max.simps(8))
apply (metis acceptance_set aevent_less_eq_iff_components amember.simps(1))
apply (metis acc2maxref.elims acceptance_event leq_rttrace_max.simps(6) leq_rttrace_max.simps(7) less_eq_aevent_def)
apply (case_tac x, auto, case_tac y, auto, case_tac a, auto, case_tac aa, simp_all)
apply (metis dual_order.refl equalityI mem_Collect_eq subsetI)
apply (metis acc2maxref.simps(2) amember.elims(2) leq_rttrace_max.simps(9))
apply (metis acc2maxref.elims acceptance_event acceptance_set leq_rttrace_max.simps(6) leq_rttrace_max.simps(7) less_eq_acceptance.simps(1) less_eq_aevent_def)
apply (metis acc2maxref.elims leq_rttrace_max.simps(6) leq_rttrace_max.simps(7) leq_rttrace_max.simps(8) leq_rttrace_max.simps(9))
by (metis acc2maxref.elims leq_rttrace_max.simps(6) leq_rttrace_max.simps(7) leq_rttrace_max.simps(8) leq_rttrace_max.simps(9))

(* Any rttrace below the translation of an fltrace is itself the translation
   of some smaller fltrace. *)
lemma leq_rttrace_max_fl2rtm_trace_exists:
"\<And>\<rho>'. \<rho>' \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> fl2rtm_trace \<sigma> \<Longrightarrow> \<exists>\<sigma>'. \<rho>' = fl2rtm_trace \<sigma>' \<and> \<sigma>' \<le> \<sigma>"
proof (induct \<sigma>, auto)
fix x and \<rho>' :: "'a rttrace"
show "\<rho>' \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<langle>acc2maxref x\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow> \<exists>\<sigma>'. \<rho>' = fl2rtm_trace \<sigma>' \<and> \<sigma>' \<le> \<langle>x\<rangle>\<^sub>\<F>\<^sub>\<L>"
apply (induct \<rho>', auto, cases x, auto)
apply (metis acc2maxref.simps(1) fl2rtm_trace.simps(1) leq_rttrace_max.simps(3) order_refl rtrefusal.exhaust)
by (metis (full_types) acc2maxref.simps(1) acc2maxref.simps(2) fl2rtm_trace.simps(1) fl2rtm_trace_monotonic leq_rttrace_max.simps(2) rtrefusal.exhaust)
next
fix x1a and \<sigma> :: "'a fltrace" and \<rho>' :: "'a rttrace"
assume ind_hyp: "\<And>\<rho>'. \<rho>' \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> fl2rtm_trace \<sigma> \<Longrightarrow> \<exists>\<sigma>'. \<rho>' = fl2rtm_trace \<sigma>' \<and> \<sigma>' \<le> \<sigma>"
show "\<rho>' \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace \<sigma>)) \<Longrightarrow> \<exists>\<sigma>'. \<rho>' = fl2rtm_trace \<sigma>' \<and> \<sigma>' \<le> x1a #\<^sub>\<F>\<^sub>\<L> \<sigma>"
apply (induct \<rho>', cases x1a, auto, case_tac a, auto)
apply (metis (full_types) acc2maxref.simps(2) acceptance_set amember.simps(2) fl2rtm_trace.simps(1) fl2rtm_trace.simps(2) fl2rtm_trace_monotonic ind_hyp leq_rttrace_max.simps(1) leq_rttrace_max.simps(4) rtrefusal.exhaust)
apply (metis fl2rtm_trace_monotonic ind_hyp leq_rttrace_max.simps(1) leq_rttrace_max.simps(5) rtrefusal.exhaust)
apply (cases x1a, auto, case_tac a, auto)
apply (smt acc2maxref.simps(1) acc2maxref.simps(2) acceptance_event acceptance_set amember.simps(2) bullet_event_acceptance fl2rtm_trace.simps(2) ind_hyp leq_rttrace_max.simps(7) leq_rttrace_max.simps(8) less_eq_fltrace.simps(3) order_refl rtrefusal.exhaust)
by (metis acc2maxref.simps(1) acceptance_event acceptance_set bullet_event_acceptance fl2rtm_trace.simps(2) ind_hyp leq_rttrace_max.simps(6) leq_rttrace_max.simps(9) less_eq_fltrace.simps(3) rtrefusal.exhaust)
qed

(* fl2rtm satisfies RTM1 (downward closure under the maximal-refusal order)
   whenever P is FL1-healthy. *)
lemma fl2rtm_RTM1:
assumes "FL1 P"
shows "RTM1 (fl2rtm P)"
unfolding RTM1_def fl2rtm_def
proof auto
fix \<rho> \<sigma>'
show "\<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> fl2rtm_trace \<sigma>' \<Longrightarrow> \<sigma>' \<in> P \<Longrightarrow> \<exists>\<sigma>\<in>P. \<rho> = fl2rtm_trace \<sigma>"
by (meson FL1_def assms leq_rttrace_max_fl2rtm_trace_exists)
next
fix \<rho> :: "'a rtevent rttrace" and \<sigma>' \<rho>' y
assume \<rho>_leq: "\<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>"
assume fl2rtm_trace_\<sigma>'_eq: "fl2rtm_trace \<sigma>' = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>"
assume \<sigma>'_in_P: "\<sigma>' \<in> P"
have "\<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>
\<or> (\<exists> \<rho>'' y'. \<rho> = \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<and> \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>)"
using \<rho>_leq apply auto
proof (induct \<rho> \<rho>' rule:leq_rttrace_rttrace_init_max.induct, auto)
fix x
show "\<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<Longrightarrow> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>)"
by (metis leq_rttrace_max.simps(1) leq_rttrace_max.simps(5) rtrefusal.exhaust)
next
fix v va vb
show "(v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb) \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<Longrightarrow>
\<forall>\<rho>''. (v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb) = \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<longrightarrow>
\<not> \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<Longrightarrow>
(v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb) \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>)"
by (metis leq_rttrace_max.simps(1) leq_rttrace_max.simps(10) leq_rttrace_max.simps(2)
leq_rttrace_max.simps(6) leq_rttrace_max.simps(9) rtrefusal.exhaust rttrace.exhaust rttrace_with_refusal.simps(2))
next
fix vc v va vb
show "\<langle>vc\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> (vb @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> (TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>))) \<Longrightarrow>
\<langle>vc\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> (vb @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> (TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>)))"
by (metis leq_rttrace_max.simps(1) leq_rttrace_max.simps(4) leq_rttrace_max.simps(5) rtrefusal.exhaust)
next
fix v va vb vc vd ve
assume ind_hyp: "vb \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
\<forall>\<rho>''. vb = \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<longrightarrow>
\<not> \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
vb \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>"
assume case_assm1: "(v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb) \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (vc #\<^sub>\<R>\<^sub>\<T> vd #\<^sub>\<R>\<^sub>\<T> (ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>))"
assume case_assm2: "\<forall>\<rho>''. (v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb) = \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<longrightarrow>
\<not> (\<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (vc #\<^sub>\<R>\<^sub>\<T> vd #\<^sub>\<R>\<^sub>\<T> (ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>))"
have 1: "vb \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>"
by (metis case_assm1 leq_rttrace_max.simps(6) leq_rttrace_max.simps(7) leq_rttrace_max.simps(8) leq_rttrace_max.simps(9) rtrefusal.exhaust)
have 2: "\<forall>\<rho>''. vb = \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<longrightarrow>
\<not> \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>"
using case_assm2 apply (auto, erule_tac x="RTEventInit v va \<rho>''" in allE, auto)
by (metis case_assm1 leq_rttrace_max.simps(6) leq_rttrace_max.simps(7) leq_rttrace_max.simps(8) leq_rttrace_max.simps(9) rtrefusal.exhaust)
have "vb \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>"
using "1" "2" ind_hyp by blast
then show "(v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb) \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (vc #\<^sub>\<R>\<^sub>\<T> vd #\<^sub>\<R>\<^sub>\<T> (ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>))"
by (metis case_assm1 leq_rttrace_max.simps(6) leq_rttrace_max.simps(7) leq_rttrace_max.simps(8) leq_rttrace_max.simps(9) rtrefusal.exhaust)
qed
then show "\<forall>\<sigma>\<in>P. \<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
\<rho> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow> \<exists>\<sigma>\<in>P. \<rho> = fl2rtm_trace \<sigma>"
proof auto
assume "\<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>"
then show "\<forall>\<sigma>\<in>P. \<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
\<rho> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow> \<exists>\<sigma>\<in>P. \<rho> = fl2rtm_trace \<sigma>"
by (metis FL1_def \<sigma>'_in_P assms fl2rtm_trace_\<sigma>'_eq leq_rttrace_max_fl2rtm_trace_exists)
next
fix \<rho>''
assume case_assm1: "\<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>"
assume case_assm2: "\<rho> = \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>"
have "\<exists>\<sigma>''\<in>P. \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>''"
by (metis FL1_def \<sigma>'_in_P assms case_assm1 fl2rtm_trace_\<sigma>'_eq leq_rttrace_max_fl2rtm_trace_exists)
then show "\<forall>\<sigma>\<in>P. \<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
\<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>
\<Longrightarrow> \<exists>\<sigma>\<in>P. \<rho>'' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
by (metis case_assm2)
qed
qed

(* Translating a trace that ends in an acceptance appends the corresponding
   maximal refusal to the translation of its prefix. *)
lemma fl2rtm_trace_concat2_acceptance:
"last \<beta> = A \<Longrightarrow> fl2rtm_trace (\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>B\<rangle>\<^sub>\<F>\<^sub>\<L>) = (rttrace2init (fl2rtm_trace \<beta>)) @\<^sub>\<R>\<^sub>\<T> \<langle>acc2maxref (A+B)\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail"
by (induct \<beta>, auto)

(* The acceptance component of a well-formed aevent pair is its first component. *)
lemma "A \<noteq> \<bullet> \<longrightarrow> e \<in>\<^sub>\<F>\<^sub>\<L> A \<Longrightarrow> acceptance (A,e)\<^sub>\<F>\<^sub>\<L> = A"
proof -
assume assm: "A \<noteq> \<bullet> \<longrightarrow> e \<in>\<^sub>\<F>\<^sub>\<L> A"
have "acceptance (A,e)\<^sub>\<F>\<^sub>\<L> = fst (Rep_aevent (A,e)\<^sub>\<F>\<^sub>\<L>)"
by (simp add: acceptance.rep_eq)
also have "... = A"
by (subst Abs_aevent_inverse, auto simp add: assm)
then show ?thesis
using calculation by auto
qed

(* Translating a trace extended by an accepted event appends that event,
   after the refusal corresponding to the acceptance. *)
lemma fl2rtm_trace_concat2_event:
"last \<beta> = \<bullet> \<Longrightarrow> e \<in>\<^sub>\<F>\<^sub>\<L> A \<Longrightarrow> fl2rtm_trace (\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,e)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L>) = (rttrace2init (fl2rtm_trace \<beta>)) @\<^sub>\<R>\<^sub>\<T> \<langle>acc2maxref (A)\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> e ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>"
by (induct \<beta>, auto)

(* fl2rtm satisfies RTM2 whenever P is FL2-healthy: any event outside a
   recorded refusal can be performed after it. *)
lemma fl2rtm_RTM2:
assumes FL2_P: "FL2 P"
shows "RTM2 (fl2rtm P)"
unfolding RTM2_def fl2rtm_def
proof auto
fix \<rho> X e \<sigma>
assume assms: "e \<notin> X" "\<sigma> \<in> P" "\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail = fl2rtm_trace \<sigma>"
then obtain \<beta> A where \<sigma>_def: "\<sigma> = \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> \<and> last \<beta> = \<bullet>"
by (metis last_rev3_is_bullet rev3_rev3_const2_last)
then have 1: "fl2rtm_trace \<sigma> = (rttrace2init (fl2rtm_trace \<beta>)) @\<^sub>\<R>\<^sub>\<T> \<langle>acc2maxref (A)\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail"
by (simp add: fl2rtm_trace_concat2_acceptance)
then have 2: "\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail = (rttrace2init (fl2rtm_trace \<beta>)) @\<^sub>\<R>\<^sub>\<T> \<langle>acc2maxref (A)\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail"
using assms(3) by auto
then have 3: "[X]\<^sub>\<R>\<^sub>\<T> = acc2maxref (A)"
using rttrace_with_refusal_eq2 by blast
then have 4: "\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,e)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> \<in> P"
using FL2_P assms \<sigma>_def unfolding FL2_def apply auto
apply (erule_tac x="\<beta>" in allE, erule_tac x="A" in allE, erule_tac x="e" in allE, auto)
by (metis CollectI acc2maxref.elims amember.simps(2) rtrefusal.distinct(1) rtrefusal.inject)
then show "\<exists>\<sigma>\<in>P. \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> e ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
apply (rule_tac x="\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,e)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L>" in bexI, auto)
by (metis 2 3 CollectI \<sigma>_def acc2maxref.elims amember.simps(2) assms(1)
fl2rtm_trace_concat2_event rtrefusal.distinct(1) rtrefusal.inject rttrace_with_refusal_eq1)
next
fix \<rho> X e \<sigma> \<rho>' y
assume assms: "e \<notin> X" "\<sigma> \<in> P" "fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>"
"\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>"
have "X = UNIV"
using assms(4) apply (auto, induct \<rho> \<rho>' rule:leq_rttrace_init.induct, auto)
apply (metis (full_types) rttrace.distinct(1) rttrace_with_refusal.elims rttrace_with_refusal.simps(2))
by (metis UNIV_I rtrefusal.inject rttrace_with_refusal.simps(3) rttrace_with_refusal_eq2)
then have "False"
using assms(1) by auto
then show "\<exists>\<sigma>\<in>P. \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> e ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
by auto
qed

(* In the translation of a tickWF fltrace, the initial segment before the
   final refusal-event pair contains no tick. *)
lemma no_tick_lemma:
proof (induct \<sigma>, simp_all)
fix xa P \<rho>
show "TickRT \<notin>\<^sub>\<F>\<^sub>\<L> xa \<Longrightarrow> \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> e ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = \<langle>acc2maxref xa\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow> no_tick \<rho>"
using rttrace_with_refusal.elims by blast
next
fix x1a :: "'a rtevent aevent" and \<sigma> :: "'a rtevent fltrace" and \<rho> :: "'a rtevent rttrace_init" and P
assume ind_hyp: "\<And>P \<rho>. \<sigma> \<in> P \<Longrightarrow> tickWF TickRT \<sigma> \<Longrightarrow> \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> e ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma> \<Longrightarrow> no_tick \<rho>"
assume assm1: "TickRT \<notin>\<^sub>\<F>\<^sub>\<L> acceptance x1a \<and> (if event x1a = TickRT then \<sigma> = \<langle>\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> else tickWF TickRT \<sigma>)"
assume assm2: "(\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> e ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) = ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace \<sigma>))"
assume assm3: "x1a #\<^sub>\<F>\<^sub>\<L> \<sigma> \<in> P"
show "no_tick \<rho>"
using assm2
proof (cases \<rho>, simp_all)
fix \<rho>'
assume \<rho>_def: "\<rho> = RTEventInit (acc2maxref (acceptance x1a)) (event x1a) \<rho>'"
have x1a_no_tick: "event x1a \<noteq> TickRT"
using assm1 assm2 \<rho>_def by (auto, cases \<rho>', auto)
then have "no_tick \<rho>'"
using \<rho>_def assm1 assm2 assm3 ind_hyp by auto
then show "no_tick (RTEventInit (acc2maxref (acceptance x1a)) (event x1a) \<rho>')"
using no_tick.elims(3) x1a_no_tick by blast
qed
qed

(* fl2rtm satisfies RT3 whenever P is FLTick0- and FL2-healthy. *)
lemma fl2rtm_RT3:
assumes "FLTick0 TickRT P" "FL2 P"
shows "RT3 (fl2rtm P)"
using assms no_tick_lemma unfolding FLTick0_def RT3_def fl2rtm_def
by (auto, blast, metis rttrace_with_refusal_eq3)

(* A translated trace ending in TickRT decomposes the source fltrace into a
   prefix and a final aevent whose event is TickRT. *)
lemma fltrace_TickRT_end_exists:
"\<And> \<rho>. \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma> \<Longrightarrow>
\<exists> A B \<sigma>'. \<sigma> = \<sigma>' &\<^sub>\<F>\<^sub>\<L> \<langle>(A,TickRT)\<^sub>\<F>\<^sub>\<L>,B\<rangle>\<^sub>\<F>\<^sub>\<L> \<and> fl2rtm_trace \<sigma>' = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail
\<and> (TickRT \<in>\<^sub>\<F>\<^sub>\<L> A \<or> A = \<bullet>) \<and> acc2maxref A = x \<and> acc2maxref B = y"
proof (induct \<sigma>, auto)
fix xa \<rho>
show "\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = \<langle>acc2maxref xa\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
\<exists>A B \<sigma>'. \<langle>xa\<rangle>\<^sub>\<F>\<^sub>\<L> = \<sigma>' &\<^sub>\<F>\<^sub>\<L> \<langle>(A,TickRT)\<^sub>\<F>\<^sub>\<L>,B\<rangle>\<^sub>\<F>\<^sub>\<L> \<and>
fl2rtm_trace \<sigma>' = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<and> (TickRT \<in>\<^sub>\<F>\<^sub>\<L> A \<or> A = \<bullet>) \<and> acc2maxref A = x \<and> acc2maxref B = y"
using rttrace_with_refusal.elims by blast
next
fix x1a \<sigma> \<rho>
assume ind_hyp: "\<And>\<rho>. \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma> \<Longrightarrow>
\<exists>A B \<sigma>'. \<sigma> = \<sigma>' &\<^sub>\<F>\<^sub>\<L> \<langle>(A,TickRT)\<^sub>\<F>\<^sub>\<L>,B\<rangle>\<^sub>\<F>\<^sub>\<L> \<and> fl2rtm_trace \<sigma>' = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<and>
(TickRT \<in>\<^sub>\<F>\<^sub>\<L> A \<or> A = \<bullet>) \<and> acc2maxref A = x \<and> acc2maxref B = y"
assume assm: "(\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) = ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace \<sigma>))"
then show "\<exists>A B \<sigma>'. x1a #\<^sub>\<F>\<^sub>\<L> \<sigma> = \<sigma>' &\<^sub>\<F>\<^sub>\<L> \<langle>(A,TickRT)\<^sub>\<F>\<^sub>\<L>,B\<rangle>\<^sub>\<F>\<^sub>\<L> \<and>
fl2rtm_trace \<sigma>' = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<and> (TickRT \<in>\<^sub>\<F>\<^sub>\<L> A \<or> A = \<bullet>) \<and> acc2maxref A = x \<and> acc2maxref B = y"
apply -
proof (induct \<rho>, auto)
assume case_assms: "x = acc2maxref (acceptance x1a)" "TickRT = event x1a" "\<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
then show "\<exists>A B \<sigma>''. x1a #\<^sub>\<F>\<^sub>\<L> \<sigma> = \<sigma>'' &\<^sub>\<F>\<^sub>\<L> \<langle>(A,TickRT)\<^sub>\<F>\<^sub>\<L>,B\<rangle>\<^sub>\<F>\<^sub>\<L> \<and>fl2rtm_trace \<sigma>'' = \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>
\<and> (TickRT \<in>\<^sub>\<F>\<^sub>\<L> A \<or> A = \<bullet>) \<and> acc2maxref A = acc2maxref (acceptance x1a) \<and> acc2maxref B = y"
apply (cases x1a, clarsimp, rule_tac x="a" in exI, simp)
by (metis acc2maxref.simps(1) bullet_left_zero2 fl2rtm_trace.elims fl2rtm_trace.simps(1) rttrace.distinct(1) rttrace.inject(1))
next
fix \<rho>
assume "\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
then obtain A B \<sigma>' where "\<sigma> = \<sigma>' &\<^sub>\<F>\<^sub>\<L> \<langle>(A,TickRT)\<^sub>\<F>\<^sub>\<L>,B\<rangle>\<^sub>\<F>\<^sub>\<L> \<and>
(TickRT \<in>\<^sub>\<F>\<^sub>\<L> A \<or> A = \<bullet>) \<and> fl2rtm_trace \<sigma>' = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<and> acc2maxref A = x \<and> acc2maxref B = y"
using ind_hyp by blast
then show "\<exists>A B \<sigma>''. (x1a #\<^sub>\<F>\<^sub>\<L> \<sigma>) = \<sigma>'' &\<^sub>\<F>\<^sub>\<L> \<langle>(A,TickRT)\<^sub>\<F>\<^sub>\<L>,B\<rangle>\<^sub>\<F>\<^sub>\<L> \<and>
fl2rtm_trace \<sigma>'' = ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> (\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail)) \<and>
(TickRT \<in>\<^sub>\<F>\<^sub>\<L> A \<or> A = \<bullet>) \<and> acc2maxref A = x \<and> acc2maxref B = y"
by (metis fl2rtm_trace.simps(2) fltrace_concat2.simps(2))
qed
qed

(* In a tickWF trace, the refusal immediately preceding TickRT is the null refusal. *)
lemma fltrace_TickRT_end_butlast_bullet:
"\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma> \<Longrightarrow> tickWF TickRT \<sigma> \<Longrightarrow> x = \<bullet>\<^sub>\<R>\<^sub>\<T>"
proof -
assume \<sigma>_assm: "\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
assume \<sigma>_tickWF: "tickWF TickRT \<sigma>"
obtain \<sigma>' A B where \<sigma>'_assms:
"\<sigma> = \<sigma>' &\<^sub>\<F>\<^sub>\<L> \<langle>(A,TickRT)\<^sub>\<F>\<^sub>\<L>,B\<rangle>\<^sub>\<F>\<^sub>\<L> \<and> fl2rtm_trace \<sigma>' = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<and> acc2maxref A = x \<and> acc2maxref B = y \<and> (TickRT \<in>\<^sub>\<F>\<^sub>\<L> A \<or> A = \<bullet>)"
using \<sigma>_assm fltrace_TickRT_end_exists by blast
have "\<And>\<rho>. fl2rtm_trace \<sigma>' = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<Longrightarrow> Finite_Linear_Model.last \<sigma>' = \<bullet>"
proof (induct \<sigma>', auto)
fix x :: "'a rtevent acceptance" and \<rho>
show "\<langle>acc2maxref x\<rangle>\<^sub>\<R>\<^sub>\<T> = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<Longrightarrow> x = \<bullet>"
by (induct \<rho>, auto, metis acc2maxref.simps(2) acceptance.exhaust rtrefusal.distinct(1))
next
fix x1a :: "'a rtevent aevent" and \<sigma>' :: "'a rtevent fltrace" and \<rho>
assume ind_hyp: "\<And>\<rho>. fl2rtm_trace \<sigma>' = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<Longrightarrow> Finite_Linear_Model.last \<sigma>' = \<bullet>"
assume "((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace \<sigma>')) = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail"
then obtain \<rho>' where "fl2rtm_trace \<sigma>' = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail"
apply - by (induct \<rho>, auto)
then show "Finite_Linear_Model.last \<sigma>' = \<bullet>"
using ind_hyp by blast
qed
then have last_\<sigma>'_bullet: "Finite_Linear_Model.last \<sigma>' = \<bullet>"
by (simp add: \<sigma>'_assms)
have "tickWF TickRT (\<sigma>' &\<^sub>\<F>\<^sub>\<L> \<langle>(A,TickRT)\<^sub>\<F>\<^sub>\<L>,B\<rangle>\<^sub>\<F>\<^sub>\<L>) \<Longrightarrow> TickRT \<notin>\<^sub>\<F>\<^sub>\<L> A"
using last_\<sigma>'_bullet apply - by (induct \<sigma>', auto, metis amember.simps(1) tickWF.simps(1))
then have TickRT_notin_A: "TickRT \<notin>\<^sub>\<F>\<^sub>\<L> A"
using \<sigma>'_assms \<sigma>_tickWF by blast
then have "A = \<bullet>"
using \<sigma>'_assms by (cases A, auto)
then show "x = \<bullet>\<^sub>\<R>\<^sub>\<T>"
using \<sigma>'_assms acc2maxref.simps(1) by blast
qed
text \<open>RT4 healthiness is preserved: in any translated trace ending with TickRT, the refusal
  before the tick is the null refusal, and the refusal after the tick may be strengthened
  to the maximal refusal.\<close>

lemma fl2rtm_RT4:
assumes "FLTick0 TickRT P"
shows "RT4 (fl2rtm P)"
using assms unfolding FLTick0_def RT4_def fl2rtm_def
proof (safe, simp_all)
fix \<rho> :: "'a rtevent rttrace_init" and x y \<sigma>
assume \<sigma>_assm: "\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
assume \<sigma>_in_P: "\<sigma> \<in> P"
assume FLTick0_P: "\<forall>x. x \<in> P \<longrightarrow> tickWF TickRT x"
have \<sigma>_tickWF: "tickWF TickRT \<sigma>"
using FLTick0_P \<sigma>_in_P by blast
then show "x = \<bullet>\<^sub>\<R>\<^sub>\<T>"
using \<sigma>_assm fltrace_TickRT_end_butlast_bullet by blast
next
fix \<rho> :: "'a rtevent rttrace_init" and x y \<sigma>
assume \<sigma>_assm: "\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
assume \<sigma>_in_P: "\<sigma> \<in> P"
assume FLTick0_P: "\<forall>x. x \<in> P \<longrightarrow> tickWF TickRT x"
have \<sigma>_tickWF: "tickWF TickRT \<sigma>"
using FLTick0_P \<sigma>_in_P by blast
have x_bullet: "x = \<bullet>\<^sub>\<R>\<^sub>\<T>"
using \<sigma>_assm \<sigma>_tickWF fltrace_TickRT_end_butlast_bullet by blast
show "\<forall>\<sigma>\<in>P. \<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
\<exists>\<sigma>\<in>P. \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
using \<sigma>_assm x_bullet \<sigma>_in_P
by (erule_tac x=\<sigma> in ballE, erule_tac x=\<rho> in allE, safe, erule_tac x=y in allE, simp_all)
next
fix \<rho> :: "'a rtevent rttrace_init" and x y \<sigma> \<rho>' ya
show "\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow> x = \<bullet>\<^sub>\<R>\<^sub>\<T>"
by (induct \<rho> \<rho>' rule:leq_rttrace_init.induct, auto, case_tac \<rho>, auto, case_tac x23, auto, case_tac \<rho>, auto)
next
fix \<rho> \<rho>' :: "'a rtevent rttrace_init" and x y \<sigma> ya
assume \<sigma>_assm: "fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>ya\<rangle>\<^sub>\<R>\<^sub>\<T>"
assume \<rho>_\<rho>'_assm: "\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>"
assume \<sigma>_in_P: "\<sigma> \<in> P"
assume FLTick0_P: "\<forall>x. x \<in> P \<longrightarrow> tickWF TickRT x"
have \<sigma>_tickWF: "tickWF TickRT \<sigma>"
using FLTick0_P \<sigma>_in_P by blast
have "\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>ya\<rangle>\<^sub>\<R>\<^sub>\<T> = \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>ya\<rangle>\<^sub>\<R>\<^sub>\<T>"
using \<rho>_\<rho>'_assm apply - apply (induct \<rho> \<rho>' rule:leq_rttrace_init.induct, auto)
apply (metis rttrace_with_refusal.simps(2) rttrace_with_refusal_eq3)
using rttrace_with_refusal.elims by blast
then show "\<forall>\<sigma>\<in>P. \<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
\<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
\<exists>\<sigma>\<in>P. \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
using \<sigma>_assm \<sigma>_in_P
by (erule_tac x=\<sigma> in ballE, erule_tac x=\<rho> in allE, safe, erule_tac x=ya in allE, simp_all)
qed
text \<open>fl2rtm maps every FL-healthy process (FL0, FL1, FL2 and FLTick0) to an RTM-healthy one.\<close>

lemma fl2rtm_RTM:
assumes "FL0 P" "FL1 P" "FL2 P" "FLTick0 TickRT P"
shows "RTM (fl2rtm P)"
unfolding RTM_def
by (simp add: assms fl2rtm_MRT0 fl2rtm_RT3 fl2rtm_RT4 fl2rtm_RTM1 fl2rtm_RTM2 fl2rtm_rtWF)
text \<open>fl2rtm is monotonic: FL refinement is preserved as RT refinement.\<close>

lemma fl2rtm_mono: "P \<sqsubseteq>\<^sub>\<F>\<^sub>\<L> Q \<Longrightarrow> fl2rtm P \<sqsubseteq>\<^sub>\<R>\<^sub>\<T> fl2rtm Q"
unfolding refinesRT_def fl2rtm_def by (safe, simp_all, blast+)
section \<open>Mapping from RTMax to FL\<close>
text \<open>Map a maximal refusal back to an acceptance: the null refusal becomes the null
  acceptance, and a refusal set becomes the acceptance of its complement.\<close>

fun maxref2acc :: "'e rtrefusal \<Rightarrow> 'e acceptance" where
"maxref2acc \<bullet>\<^sub>\<R>\<^sub>\<T> = \<bullet>" |
"maxref2acc [X]\<^sub>\<R>\<^sub>\<T> = [{e. e \<notin> X}]\<^sub>\<F>\<^sub>\<L>"
text \<open>Translate an RTMax trace to an fltrace. A TickRT event terminates the translation:
  anything following the tick is discarded.\<close>

fun rtm2fl_trace :: "'e rtevent rttrace \<Rightarrow> 'e rtevent fltrace" where
"rtm2fl_trace \<langle>X\<rangle>\<^sub>\<R>\<^sub>\<T> = \<langle>maxref2acc X\<rangle>\<^sub>\<F>\<^sub>\<L>" |
"rtm2fl_trace (X #\<^sub>\<R>\<^sub>\<T> EventRT e #\<^sub>\<R>\<^sub>\<T> \<rho>) = ((maxref2acc X, EventRT e)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> rtm2fl_trace \<rho>)" |
"rtm2fl_trace (X #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<rho>) = ((maxref2acc X, TickRT)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> \<langle>\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L>)"
text \<open>Lift the trace translation pointwise to processes.\<close>

definition rtm2fl :: "'e rtevent rtprocess \<Rightarrow> 'e rtevent fltraces" where
"rtm2fl P = {rtm2fl_trace x |x. x \<in> P}"
text \<open>MRT0 (the empty refusal trace is in P) yields non-emptiness of the translated process.\<close>

lemma rtm2fl_FL0:
assumes "MRT0 P"
shows "FL0 (rtm2fl P)"
using assms unfolding MRT0_def FL0_def rtm2fl_def by auto
text \<open>RTM1 (closure under the maximal-refusal prefix order), together with wellformedness,
  yields prefix closure (FL1) of the translated process.\<close>

lemma rtm2fl_FL1:
assumes "RTM1 P" "\<forall>x\<in>P. rtWF x"
shows "FL1 (rtm2fl P)"
using assms unfolding RTM1_def FL1_def rtm2fl_def
proof auto
fix x :: "'e rtevent rttrace"
show "\<And>P s. \<forall>\<rho>. (\<exists>\<sigma>. \<sigma> \<in> P \<and> \<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<sigma>) \<longrightarrow> \<rho> \<in> P \<Longrightarrow> \<forall>x\<in>P. rtWF x \<Longrightarrow> s \<le> rtm2fl_trace x \<Longrightarrow> x \<in> P \<Longrightarrow> \<exists>x. s = rtm2fl_trace x \<and> x \<in> P"
proof (induct x, auto)
fix P :: "'e rtevent rtprocess" and x s
show "\<forall>\<rho>. (\<exists>\<sigma>. \<sigma> \<in> P \<and> \<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<sigma>) \<longrightarrow> \<rho> \<in> P \<Longrightarrow> s \<le> \<langle>maxref2acc x\<rangle>\<^sub>\<F>\<^sub>\<L> \<Longrightarrow> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P \<Longrightarrow> \<exists>x. s = rtm2fl_trace x \<and> x \<in> P"
by (metis fltrace.exhaust leq_rttrace_max.simps(1) less_eq_acceptance.elims(2) less_eq_fltrace.simps(1) less_eq_fltrace.simps(4) maxref2acc.simps(1) rtm2fl_trace.simps(1))
next
fix P :: "'e rtevent rtprocess" and x :: "'e rtevent rttrace" and x1a x2 s
assume ind_hyp: "\<And>P s. \<forall>\<rho>. (\<exists>\<sigma>. \<sigma> \<in> P \<and> \<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<sigma>) \<longrightarrow> \<rho> \<in> P \<Longrightarrow> \<forall>x\<in>P. rtWF x \<Longrightarrow> s \<le> rtm2fl_trace x \<Longrightarrow> x \<in> P
\<Longrightarrow> \<exists>x. s = rtm2fl_trace x \<and> x \<in> P"
assume case_assms: "\<forall>\<rho>. (\<exists>\<sigma>. \<sigma> \<in> P \<and> \<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<sigma>) \<longrightarrow> \<rho> \<in> P" "\<forall>x\<in>P. rtWF x"
"s \<le> rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x)" "(x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P"
have 1: "\<forall>\<rho>. (\<exists>\<sigma>. \<sigma> \<in> {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P} \<and> \<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<sigma>) \<longrightarrow> \<rho> \<in> {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
by (metis (mono_tags) RTM1_def RTM1_init_event case_assms(1))
have x1a_x2_assms1: "\<not> x2 \<in>\<^sub>\<R>\<^sub>\<T> x1a"
using case_assms(2) case_assms(4) by auto
then have x1a_x2_assms2: "x2 \<in>\<^sub>\<F>\<^sub>\<L> maxref2acc x1a \<or> maxref2acc x1a = \<bullet>"
by (metis (no_types, lifting) acceptance.inject amember.elims(3) in_rtrefusal.elims(3) maxref2acc.simps(1) maxref2acc.simps(2) mem_Collect_eq)
have 2: "\<forall>t\<in>{t. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> t) \<in> P}. rtWF t"
using case_assms(2) by auto
have "(\<exists>s'. s = (maxref2acc x1a, x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> s') \<or> (\<exists>s'. s = (\<bullet>, x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> s') \<or> s = \<langle>maxref2acc x1a\<rangle>\<^sub>\<F>\<^sub>\<L> \<or> s = \<langle>\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L>"
using case_assms(3) apply -
proof (induct s, auto)
fix xa
assume "\<langle>xa\<rangle>\<^sub>\<F>\<^sub>\<L> \<le> rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x)"
then have "\<langle>xa\<rangle>\<^sub>\<F>\<^sub>\<L> \<le> (maxref2acc x1a, x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> rtm2fl_trace x"
by (cases x2, auto)
then have "xa \<le> maxref2acc x1a"
by (simp add: x1a_x2_assms2)
then show "xa \<noteq> \<bullet> \<Longrightarrow> xa = maxref2acc x1a"
using less_eq_acceptance.elims(2) by fastforce
next
fix x1aa s
assume ind_hyp: "s \<le> rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<Longrightarrow>
(\<exists>s'. s = (maxref2acc x1a,x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> s') \<or> (\<exists>s'. s = (\<bullet>,x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> s') \<or> s = \<langle>maxref2acc x1a\<rangle>\<^sub>\<F>\<^sub>\<L> \<or> s = \<langle>\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L>"
show "x1aa #\<^sub>\<F>\<^sub>\<L> s \<le> rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<Longrightarrow> x1aa \<noteq> (\<bullet>,x2)\<^sub>\<F>\<^sub>\<L> \<Longrightarrow> x1aa = (maxref2acc x1a,x2)\<^sub>\<F>\<^sub>\<L>"
using aevent_less_eq_iff_components x1a_x2_assms2 by (cases x2, auto, fastforce+)
qed
then show "\<exists>x. s = rtm2fl_trace x \<and> x \<in> P"
proof auto
fix s'
assume case_assm2: "s = (maxref2acc x1a,x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> s'"
have 3: "s' \<le> rtm2fl_trace x"
using case_assms(3) case_assm2 apply auto apply (cases x2, auto, cases x, auto)
using prefixFL_induct2 by fastforce+
have 4: "x \<in> {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using case_assms(4) by blast
have "\<exists>x. s' = rtm2fl_trace x \<and> x \<in> {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using 1 2 3 4 ind_hyp by blast
then show "\<exists>x. (maxref2acc x1a,x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> s' = rtm2fl_trace x \<and> x \<in> P"
apply auto apply (rule_tac x="x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x" in exI, auto, cases x2, auto)
by (metis FL_concat_equiv antisym_conv case_assm2 case_assms(3) fltrace.inject(2) rtm2fl_trace.simps(3) x_le_x_concat2)
next
fix s'
assume case_assm2: "s = (\<bullet>,x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> s'"
have 1: "\<forall>\<rho>. (\<exists>\<sigma>. \<sigma> \<in> {x. (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P} \<and> \<rho> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> \<sigma>) \<longrightarrow> \<rho> \<in> {x. (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
by (metis (mono_tags) RTM1_def RTM1_init_event case_assms(1))
have 2: "\<forall>t\<in>{t. (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> t) \<in> P}. rtWF t"
using case_assms(2) by auto
have 3: "s' \<le> rtm2fl_trace x"
using case_assms(3) case_assm2 apply auto apply (cases x2, auto, cases x, auto)
by (simp add: fltrace_trans, metis fltrace_trans prefixFL_induct21)
have "(\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x)"
by (cases x1a, auto simp add: leq_rttrace_max_refl)
then have 4: "x \<in> {x. (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using case_assms(1) case_assms(4) by blast
have "\<exists>x. s' = rtm2fl_trace x \<and> x \<in> {x. (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using 1 2 3 4 ind_hyp[where P="{x. (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}", where s=s'] by force
then show "\<exists>x. (\<bullet>,x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> s' = rtm2fl_trace x \<and> x \<in> P"
apply (auto, rule_tac x="\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x" in exI, auto, cases x2, auto)
by (metis FL_concat_equiv case_assm2 case_assms(3) dual_order.antisym less_eq_fltrace.simps(3) rtm2fl_trace.simps(3) x_le_x_concat2)
next
show "s = \<langle>maxref2acc x1a\<rangle>\<^sub>\<F>\<^sub>\<L> \<Longrightarrow> \<exists>x. \<langle>maxref2acc x1a\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace x \<and> x \<in> P"
by (metis case_assms(1) case_assms(4) leq_rttrace_max.simps(1) leq_rttrace_max.simps(4) rtm2fl_trace.simps(1) rtrefusal.exhaust)
next
show "s = \<langle>\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> \<Longrightarrow> \<exists>x. \<langle>\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace x \<and> x \<in> P"
by (metis case_assms(1) case_assms(4) leq_rttrace_max.simps(1) maxref2acc.simps(1) rtm2fl_trace.simps(1))
qed
qed
qed
text \<open>Invert the trace translation on a cons: a wellformed RTMax trace whose image is a
  \<open>#\<^sub>\<F>\<^sub>\<L>\<close>-prefixed fltrace starts with the corresponding maximal refusal and event.\<close>

lemma rtm2fl_trace_aevent_prefix:
"rtWF x \<Longrightarrow> a #\<^sub>\<F>\<^sub>\<L> \<rho> = rtm2fl_trace x \<Longrightarrow>
\<exists> x'. x = ((acc2maxref (acceptance a)) #\<^sub>\<R>\<^sub>\<T> (event a) #\<^sub>\<R>\<^sub>\<T> x') \<and> (rtm2fl_trace x' = \<rho> \<or> event a = TickRT)"
proof (induct x, auto)
fix x1a x2 x
show "a #\<^sub>\<F>\<^sub>\<L> \<rho> = rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<Longrightarrow> \<not> x2 \<in>\<^sub>\<R>\<^sub>\<T> x1a \<Longrightarrow> x1a = acc2maxref (acceptance a)"
by (cases x2, auto, (cases x1a, auto)+)
next
fix x1a x2 x
show "a #\<^sub>\<F>\<^sub>\<L> \<rho> = rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<Longrightarrow> \<not> x2 \<in>\<^sub>\<R>\<^sub>\<T> x1a \<Longrightarrow> rtWF x \<Longrightarrow> x2 = event a"
by (cases x2, auto, (cases x1a, auto)+)
next
fix x1a x2 x
show "a #\<^sub>\<F>\<^sub>\<L> \<rho> = rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<Longrightarrow> \<not> x2 \<in>\<^sub>\<R>\<^sub>\<T> x1a \<Longrightarrow> event a \<noteq> TickRT \<Longrightarrow> rtm2fl_trace x = \<rho>"
by (cases x2, auto, cases x1a, auto)
qed
lemma rtmfl_trace_acceptance: "\<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace x \<Longrightarrow> x = \<langle>acc2maxref A\<rangle>\<^sub>\<R>\<^sub>\<T>"
by (cases x, auto, case_tac x1, auto, case_tac x22, auto)
text \<open>An fltrace ending in a non-null acceptance arises only from an RTMax trace ending in
  the corresponding maximal refusal.\<close>

lemma rtm2fl_trace_fltrace_concat2_acceptance:
"\<And>x. rtWF x \<Longrightarrow> \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>[A]\<^sub>\<F>\<^sub>\<L>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace x \<Longrightarrow>
\<exists> x' X. x = x' @\<^sub>\<R>\<^sub>\<T> \<langle>X\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<and> X = acc2maxref (last \<beta> + [A]\<^sub>\<F>\<^sub>\<L>)"
proof (induct \<beta>, auto)
fix x xa
show "rtWF xa \<Longrightarrow> \<langle>x + [A]\<^sub>\<F>\<^sub>\<L>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace xa \<Longrightarrow> \<exists>x'. xa = x' @\<^sub>\<R>\<^sub>\<T> \<langle>acc2maxref (x + [A]\<^sub>\<F>\<^sub>\<L>)\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail"
by (metis rtmfl_trace_acceptance rttrace_with_refusal.simps(3))
next
fix x1a \<beta> x
assume ind_hyp: "\<And>x. rtWF x \<Longrightarrow> \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>[A]\<^sub>\<F>\<^sub>\<L>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace x \<Longrightarrow>
\<exists>x'. x = x' @\<^sub>\<R>\<^sub>\<T> \<langle>acc2maxref (Finite_Linear_Model.last \<beta> + [A]\<^sub>\<F>\<^sub>\<L>)\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail"
assume case_assms: "rtWF x" "x1a #\<^sub>\<F>\<^sub>\<L> (\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>[A]\<^sub>\<F>\<^sub>\<L>\<rangle>\<^sub>\<F>\<^sub>\<L>) = rtm2fl_trace x"
obtain x' where x_def: "x = ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> (event x1a) #\<^sub>\<R>\<^sub>\<T> x')"
using case_assms rtm2fl_trace_aevent_prefix by blast
then have "rtm2fl_trace x' = \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>[A]\<^sub>\<F>\<^sub>\<L>\<rangle>\<^sub>\<F>\<^sub>\<L> \<and> rtWF x'"
using case_assms by (cases "event x1a", auto, cases \<beta>, auto, case_tac x1, auto)
then have "\<exists>x''. x' = x'' @\<^sub>\<R>\<^sub>\<T> \<langle>acc2maxref (Finite_Linear_Model.last \<beta> + [A]\<^sub>\<F>\<^sub>\<L>)\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail"
by (simp add: ind_hyp)
then show "\<exists>x'. x = x' @\<^sub>\<R>\<^sub>\<T> \<langle>acc2maxref (Finite_Linear_Model.last \<beta> + [A]\<^sub>\<F>\<^sub>\<L>)\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail"
by (metis rttrace_with_refusal.simps(1) x_def)
qed
text \<open>RTM2 (extensibility of a maximal refusal by any non-refused event), together with
  wellformedness, yields FL2 for the translated process.\<close>

lemma rtm2fl_FL2:
assumes "\<forall>x\<in>P. rtWF x" "RTM2 P"
shows "FL2 (rtm2fl P)"
using assms unfolding RTM1_def RTM2_def FL2_def rtm2fl_def
proof auto
fix \<beta> A a x
assume P_wf: "\<forall>x\<in>P. rtWF x"
assume RTM2_P: "\<forall>\<rho> X e. \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<in> P \<and> e \<notin> X \<longrightarrow> \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> e ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P"
assume x_in_P: "x \<in> P"
assume rtm2fl_trace_x_is_\<beta>_A: "\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace x"
assume a_in_A: "a \<in>\<^sub>\<F>\<^sub>\<L> A"
obtain A' where A_not_bullet: "A = [A']\<^sub>\<F>\<^sub>\<L>"
by (meson a_in_A amember.elims(2))
obtain x' X where x'_X_def: "x = x' @\<^sub>\<R>\<^sub>\<T> \<langle>X\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail \<and> X = acc2maxref (last \<beta> + [A']\<^sub>\<F>\<^sub>\<L>)"
using A_not_bullet P_wf rtm2fl_trace_fltrace_concat2_acceptance rtm2fl_trace_x_is_\<beta>_A x_in_P by blast
show "\<exists>x. \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace x \<and> x \<in> P"
proof (cases "last \<beta>")
assume last_\<beta>_bullet: "last \<beta> = \<bullet>"
then have X_def: "X = [{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>"
by (simp add: x'_X_def)
then have "x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P"
using A_not_bullet RTM2_P a_in_A x'_X_def x_in_P by auto
then show "\<exists>x. \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace x \<and> x \<in> P"
proof (rule_tac x="x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>" in exI, auto)
have "\<And> x'. rtWF (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail) \<Longrightarrow> last \<beta> = \<bullet> \<Longrightarrow> \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail) \<Longrightarrow>
\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
proof (induct \<beta>, auto)
fix x'
show "\<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail) \<Longrightarrow>
\<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
by (case_tac x', auto, cases a, auto, case_tac x22, auto)
next
fix x1a \<beta> x'
assume ind_hyp: "\<And>x'. rtWF (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail) \<Longrightarrow>
\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail) \<Longrightarrow>
\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
assume case_assm1: "x1a #\<^sub>\<F>\<^sub>\<L> (\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L>) = rtm2fl_trace (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail)"
assume case_assm2: "rtWF (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail)"
have "\<exists> x''. x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail = ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> x'') \<and>
(rtm2fl_trace x'' = \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> \<or> event x1a = TickRT)"
using case_assm1 case_assm2 rtm2fl_trace_aevent_prefix by blast
then obtain x'' where x''_assms: "x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail = ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> x'') \<and>
rtm2fl_trace x'' = \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> \<and> event x1a \<noteq> TickRT"
by (metis A_not_bullet Finite_Linear_Model.last.simps(2) case_assm1 last_cons_acceptance_not_bullet last_fltrace_acceptance rtm2fl_trace.simps(3))
then obtain x''' where x'''_assms: "rtWF (x''' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail)
\<and> \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace (x''' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail)
\<and> x'' = x''' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> RTEmptyTail "
using case_assm2 apply (auto) apply (cases x1a, auto)
apply (smt A_not_bullet rtm2fl_trace_fltrace_concat2_acceptance rttrace_with_refusal.simps(1) rttrace_with_refusal_eq2)
by (cases x', auto, fastforce)
then have 1: "\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace (x''' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
by (simp add: ind_hyp)
then show "x1a #\<^sub>\<F>\<^sub>\<L> (\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L>) = rtm2fl_trace (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
(is "?lhs = ?rhs")
proof -
have "?rhs = rtm2fl_trace ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> (x''' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>))"
by (metis rttrace_with_refusal.simps(1) rttrace_with_refusal_eq1 x'''_assms x''_assms)
also have "... = x1a #\<^sub>\<F>\<^sub>\<L> rtm2fl_trace (x''' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
by (metis case_assm1 fltrace.inject(2) rtevent.exhaust rtm2fl_trace.simps(2) x''_assms)
also have "... = ?lhs"
using "1" by auto
then show ?thesis
using calculation by auto
qed
qed
then show "\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace (x' @\<^sub>\<R>\<^sub>\<T> \<langle>[{e. e \<notin> A'}]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> a ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
using P_wf X_def last_\<beta>_bullet rtm2fl_trace_x_is_\<beta>_A x'_X_def x_in_P by blast
qed
next
fix x2
assume last_beta_not_bullet: "last \<beta> = [x2]\<^sub>\<F>\<^sub>\<L>"
have 1: "\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>A\<rangle>\<^sub>\<F>\<^sub>\<L> = \<beta>"
using last_beta_not_bullet by (induct \<beta>, auto simp add: A_not_bullet)
have 2: "\<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = \<beta>"
using last_beta_not_bullet by (induct \<beta>, auto simp add: A_not_bullet)
show "\<exists>x. \<beta> &\<^sub>\<F>\<^sub>\<L> \<langle>(A,a)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace x \<and> x \<in> P"
using 1 2 rtm2fl_trace_x_is_\<beta>_A x_in_P by auto
qed
qed
text \<open>The RTMax healthiness conditions ensure that every translated trace is tick-wellformed.\<close>

lemma rtm2fl_FLTick0:
assumes "\<forall>x\<in>P. rtWF x" "RTM1 P" "RTM2 P" "RT3 P" "RT4 P"
shows "FLTick0 TickRT (rtm2fl P)"
using assms unfolding FLTick0_def rtm2fl_def
proof auto
fix xa :: "'a rtevent rttrace"
show "\<And> P. \<forall>x\<in>P. rtWF x \<Longrightarrow> RTM1 P \<Longrightarrow> RTM2 P \<Longrightarrow> RT3 P \<Longrightarrow> RT4 P \<Longrightarrow> xa \<in> P \<Longrightarrow> tickWF TickRT (rtm2fl_trace xa)"
proof (induct xa, auto)
fix P :: "'a rtevent rtprocess" and x
assume RTM2_P: "RTM2 P" and RT3_P: "RT3 P" and RT4_P: "RT4 P"
show "\<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P \<Longrightarrow> TickRT \<in>\<^sub>\<F>\<^sub>\<L> maxref2acc x \<Longrightarrow> False"
proof (cases x, auto)
fix x2
assume case_assms: "\<langle>[x2]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P" "TickRT \<notin> x2"
then have "RTEmptyInit @\<^sub>\<R>\<^sub>\<T> \<langle>[x2]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P"
using RTM2_P unfolding RTM2_def by (metis rttrace_with_refusal.simps(3))
then show "False"
using RT4_P unfolding RT4_def by blast
qed
next
fix P :: "'a rtevent rtprocess"
fix x1a :: "'a rtevent rtrefusal" and x2 :: "'a rtevent" and xa :: "'a rtevent rttrace"
assume ind_hyp: "\<And>P. \<forall>x\<in>P. rtWF x \<Longrightarrow> RTM1 P \<Longrightarrow> RTM2 P \<Longrightarrow> RT3 P \<Longrightarrow> RT4 P \<Longrightarrow> xa \<in> P \<Longrightarrow> tickWF TickRT (rtm2fl_trace xa)"
assume P_wf: "\<forall>x\<in>P. rtWF x" and RTM1_P: "RTM1 P" and RTM2_P: "RTM2 P" and RT3_P: "RT3 P" and RT4_P: "RT4 P"
assume case_assm: "(x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> xa) \<in> P"
have 0: "RTM1 {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
by (simp add: RTM1_P RTM1_init_event)
have 1: "RTM2 {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using RTM2_P unfolding RTM2_def by (auto, metis rttrace_with_refusal.simps(1))
have 2: "RT3 {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using RT3_P unfolding RT3_def
by (auto, metis no_tick.simps(2) no_tick.simps(3) rtevent.exhaust rttrace_with_refusal.simps(1))
have 3: "RT4 {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using RT4_P unfolding RT4_def by (metis mem_Collect_eq rttrace_with_refusal.simps(1))
have 4: "\<forall>x\<in>{x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}. rtWF x"
using P_wf by auto
have xa_wf: "tickWF TickRT (rtm2fl_trace xa)"
using ind_hyp[where P="{x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"] 0 1 2 3 4 case_assm by force
then show "tickWF TickRT (rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> xa))"
proof (cases x2, auto)
assume x2_tick: "x2 = TickRT"
then have "\<exists> X. xa = \<langle>X\<rangle>\<^sub>\<R>\<^sub>\<T>"
using case_assm RT3_P xa_wf unfolding RT3_def
proof (auto, induct xa, auto)
fix x1aa x2a and xaa :: "'a rtevent rttrace"
assume in_P: "(x1a #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> (x1aa #\<^sub>\<R>\<^sub>\<T> x2a #\<^sub>\<R>\<^sub>\<T> xaa)) \<in> P"
assume RT3_P: "\<forall>\<rho>. (\<exists>x y e. \<rho> @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> e ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P) \<longrightarrow> no_tick \<rho>"
obtain \<rho> x y e where "(x1a #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> (x1aa #\<^sub>\<R>\<^sub>\<T> x2a #\<^sub>\<R>\<^sub>\<T> xaa)) = (RTEventInit x1a TickRT \<rho>) @\<^sub>\<R>\<^sub>\<T> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> (e ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>)"
apply (induct xaa, auto, metis rttrace_with_refusal.simps(2))
by (smt rttrace.inject(2) rttrace_init.exhaust rttrace_with_refusal.simps(1) rttrace_with_refusal.simps(2))
then have "no_tick (RTEventInit x1a TickRT \<rho>)"
by (metis RT3_P in_P no_tick.simps(3) x2_tick)
then show "False"
by auto
qed
then have "x1a = \<bullet>\<^sub>\<R>\<^sub>\<T>"
using RT4_P case_assm unfolding RT4_def apply auto
by (metis rttrace_with_refusal.simps(2) x2_tick)
then show "TickRT \<in>\<^sub>\<F>\<^sub>\<L> acceptance (maxref2acc x1a,TickRT)\<^sub>\<F>\<^sub>\<L> \<Longrightarrow> False"
by auto
next
fix x2a
assume x2_event: "x2 = EventRT x2a"
then have "\<not> EventRT x2a \<in>\<^sub>\<R>\<^sub>\<T> x1a"
using P_wf case_assm by auto
then show "event (maxref2acc x1a,EventRT x2a)\<^sub>\<F>\<^sub>\<L> = TickRT \<Longrightarrow> False"
using P_wf case_assm rtm2fl_trace_aevent_prefix x2_event by force
next
fix x2a
assume x2_event: "x2 = EventRT x2a"
then have "\<not> EventRT x2a \<in>\<^sub>\<R>\<^sub>\<T> x1a"
using P_wf case_assm by auto
then show "event (maxref2acc x1a,EventRT x2a)\<^sub>\<F>\<^sub>\<L> = TickRT \<Longrightarrow> rtm2fl_trace xa = \<langle>\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L>"
using P_wf case_assm rtm2fl_trace_aevent_prefix x2_event by force
next
fix x2a
assume x2_event: "x2 = EventRT x2a"
then have event_notin_refusal: "\<not> EventRT x2a \<in>\<^sub>\<R>\<^sub>\<T> x1a"
using P_wf case_assm by auto
assume "TickRT \<in>\<^sub>\<F>\<^sub>\<L> acceptance (maxref2acc x1a,EventRT x2a)\<^sub>\<F>\<^sub>\<L>"
then have "TickRT \<in>\<^sub>\<F>\<^sub>\<L> maxref2acc x1a"
using event_notin_refusal by (cases x1a, auto)
then have x1a_not_bullet: "\<exists> X. x1a = [X]\<^sub>\<R>\<^sub>\<T> \<and> TickRT \<notin> X"
by (cases x1a, auto)
have "\<langle>x1a\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P"
using RTM1_P RTM1_def case_assm x1a_not_bullet by force
then have "(x1a #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<in> P"
using RTM2_P unfolding RTM2_def
by (metis rttrace_with_refusal.simps(2) rttrace_with_refusal.simps(3) x1a_not_bullet)
then have "x1a = \<bullet>\<^sub>\<R>\<^sub>\<T>"
using RT4_P unfolding RT4_def by (metis (full_types) rttrace_with_refusal.simps(2))
then show "False"
using x1a_not_bullet by blast
qed
qed
qed
definition FLTick :: "'a \<Rightarrow> 'a fltraces \<Rightarrow> bool" where
"FLTick tick P = (FL0 P \<and> FL1 P \<and> FL2 P \<and> FLTick0 tick P)"
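text \<open>FLTick simply bundles the FL healthiness conditions FL0, FL1 and FL2
  with tick-well-formedness (FLTick0) for a given tick event, so that
  healthiness of rtm2fl P can be stated as a single property.\<close>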
lemma rtm2fl_FLTick:
assumes "RTM P"
shows "FLTick TickRT (rtm2fl P)"
using assms unfolding RTM_def FLTick_def apply auto
using rtm2fl_FL0 rtm2fl_FL1 rtm2fl_FL2 rtm2fl_FLTick0 by blast+
lemma rtm2fl_mono: "P \<sqsubseteq>\<^sub>\<R>\<^sub>\<T> Q \<Longrightarrow> rtm2fl P \<sqsubseteq>\<^sub>\<F>\<^sub>\<L> rtm2fl Q"
unfolding refinesRT_def rtm2fl_def by auto
section \<open>Galois connection between FL and RTMax\<close>
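text \<open>This section establishes that the two translations are mutually
  inverse on healthy processes: rtm2fl undoes fl2rtm on any FLTick0-healthy
  set of FL traces, and fl2rtm undoes rtm2fl on any well-formed process
  satisfying RTM1, RTM2, RT3 and RT4. Together with monotonicity of both
  maps this gives the Galois connection between the FL and RTMax models.\<close>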
lemma rtm2fl_fl2rtm_inverse:
assumes "FLTick0 TickRT P"
shows "rtm2fl (fl2rtm P) = P"
using assms unfolding rtm2fl_def fl2rtm_def
proof (safe, simp_all)
fix \<sigma> :: "'a rtevent fltrace"
show "\<And>P. \<sigma> \<in> P \<Longrightarrow> FLTick0 TickRT P \<Longrightarrow> rtm2fl_trace (fl2rtm_trace \<sigma>) \<in> P"
proof (induct \<sigma>, simp_all)
fix x and P :: "'a rtevent fltraces"
show "\<langle>x\<rangle>\<^sub>\<F>\<^sub>\<L> \<in> P \<Longrightarrow> \<langle>maxref2acc (acc2maxref x)\<rangle>\<^sub>\<F>\<^sub>\<L> \<in> P"
by (cases x, auto)
next
fix x1a \<sigma> and P :: "'a rtevent fltraces"
assume x1a_\<sigma>_in_P: "x1a #\<^sub>\<F>\<^sub>\<L> \<sigma> \<in> P"
assume FLTick0_P: "FLTick0 TickRT P"
assume ind_hyp: "\<And>P. \<sigma> \<in> P \<Longrightarrow> FLTick0 TickRT P \<Longrightarrow> rtm2fl_trace (fl2rtm_trace \<sigma>) \<in> P"
have "FLTick0 TickRT {\<sigma>. x1a #\<^sub>\<F>\<^sub>\<L> \<sigma> \<in> P}"
unfolding FLTick0_def
proof auto
fix x
have "tickWF TickRT (x1a #\<^sub>\<F>\<^sub>\<L> x) \<Longrightarrow> tickWF TickRT x"
by (induct x, auto, (cases x1a, auto, case_tac b, auto, case_tac b, auto)+)
then show "x1a #\<^sub>\<F>\<^sub>\<L> x \<in> P \<Longrightarrow> tickWF TickRT x"
by (meson FLTick0_P FLTick0_def)
qed
then have 1: "x1a #\<^sub>\<F>\<^sub>\<L> rtm2fl_trace (fl2rtm_trace \<sigma>) \<in> P"
by (metis ind_hyp mem_Collect_eq x1a_\<sigma>_in_P)
have 2: "rtm2fl_trace ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace \<sigma>)) = x1a #\<^sub>\<F>\<^sub>\<L> rtm2fl_trace (fl2rtm_trace \<sigma>)"
apply (cases x1a, auto, case_tac a, auto, case_tac b, auto)
apply (metis FLTick0_P FLTick0_def acceptance_set amember.simps(2) tickWF.simps(2) x1a_\<sigma>_in_P)
by (smt FLTick0_P FLTick0_def \<open>x1a #\<^sub>\<F>\<^sub>\<L> rtm2fl_trace (fl2rtm_trace \<sigma>) \<in> P\<close> acceptance_event
maxref2acc.simps(1) rtevent.exhaust rtm2fl_trace.simps(2) rtm2fl_trace.simps(3) tickWF.simps(2))
show "rtm2fl_trace ((acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> event x1a #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace \<sigma>)) \<in> P"
by (simp add: "1" "2")
qed
next
fix \<sigma> :: "'a rtevent fltrace" and \<rho>' y
assume fl2rtm_trace_\<sigma>: "fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>"
assume \<sigma>_in_P: "\<sigma> \<in> P"
assume FLTick0_P: "FLTick0 TickRT P"
have "\<And>\<sigma>. tickWF TickRT \<sigma> \<Longrightarrow> fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
rtm2fl_trace (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) = \<sigma>"
proof (induct \<rho>', auto)
fix \<sigma>
show "tickWF TickRT \<sigma> \<Longrightarrow> fl2rtm_trace \<sigma> = (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<Longrightarrow> \<langle>(\<bullet>,TickRT)\<^sub>\<F>\<^sub>\<L>,\<bullet>\<rangle>\<^sub>\<F>\<^sub>\<L> = \<sigma>"
by (induct \<sigma>, auto, case_tac x1a, auto)
next
fix x1 x2 \<rho>' \<sigma>
assume ind_hyp: "\<And>\<sigma>. tickWF TickRT \<sigma> \<Longrightarrow>
fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
rtm2fl_trace (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) = \<sigma>"
assume case_assms: "tickWF TickRT \<sigma>" "fl2rtm_trace \<sigma> = (x1 #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>))"
obtain \<sigma>' where \<sigma>'_assms: "\<sigma> = (maxref2acc x1, x2)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<F>\<^sub>\<L> \<sigma>' \<and> fl2rtm_trace \<sigma>' = (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<and> \<not> x2 \<in>\<^sub>\<R>\<^sub>\<T> x1"
using case_assms(2) by (induct \<sigma>, auto, (case_tac x1a, auto, case_tac a, auto)+)
have "tickWF TickRT \<sigma>'"
by (metis \<sigma>'_assms amember.simps(1) case_assms(1) tickWF.simps(1) tickWF.simps(2))
then have "rtm2fl_trace (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) = \<sigma>'"
using \<sigma>'_assms ind_hyp by blast
then show "rtm2fl_trace (x1 #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)) = \<sigma>"
using \<sigma>'_assms case_assms by (auto, cases x2, auto)
qed
then show "rtm2fl_trace (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<in> P"
by (metis FLTick0_def \<sigma>_in_P assms fl2rtm_trace_\<sigma>)
next
fix x :: "'a rtevent fltrace"
show "\<And>P. x \<in> P \<Longrightarrow> FLTick0 TickRT P \<Longrightarrow>
\<exists>xa. x = rtm2fl_trace xa \<and>
((\<exists>\<sigma>\<in>P. xa = fl2rtm_trace \<sigma>) \<or>
(\<exists>\<sigma>\<in>P. \<exists>\<rho>'. (\<exists>y. fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<and>
xa = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>))"
proof (induct x)
fix x :: "'a rtevent acceptance" and P
show "\<langle>x\<rangle>\<^sub>\<F>\<^sub>\<L> \<in> P \<Longrightarrow> FLTick0 TickRT P \<Longrightarrow>
\<exists>xa. \<langle>x\<rangle>\<^sub>\<F>\<^sub>\<L> = rtm2fl_trace xa \<and>
((\<exists>\<sigma>\<in>P. xa = fl2rtm_trace \<sigma>) \<or>
(\<exists>\<sigma>\<in>P. \<exists>\<rho>'. (\<exists>y. fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<and>
xa = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>))"
apply (case_tac x, safe, metis acc2maxref.simps(1) fl2rtm_trace.simps(1) maxref2acc.simps(1) rtm2fl_trace.simps(1))
by (metis (no_types, lifting) acc2maxref.simps(2) fl2rtm_trace.simps(1) maxref2acc.simps(2) rtm2fl_trace.simps(1) rtmfl_trace_acceptance rtrefusal.inject rttrace.inject(1))
next
fix x1a :: "'a rtevent aevent" and x P
assume case_assms: "x1a #\<^sub>\<F>\<^sub>\<L> x \<in> P" "FLTick0 TickRT P"
assume ind_hyp: "\<And>P. x \<in> P \<Longrightarrow> FLTick0 TickRT P \<Longrightarrow>
\<exists>xa. x = rtm2fl_trace xa \<and>
((\<exists>\<sigma>\<in>P. xa = fl2rtm_trace \<sigma>) \<or>
(\<exists>\<sigma>\<in>P. \<exists>\<rho>'. (\<exists>y. fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<and>
xa = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>))"
have "FLTick0 TickRT {x. x1a #\<^sub>\<F>\<^sub>\<L> x \<in> P}"
unfolding FLTick0_def
proof auto
fix x
have "tickWF TickRT (x1a #\<^sub>\<F>\<^sub>\<L> x) \<Longrightarrow> tickWF TickRT x"
by (induct x, auto, (cases x1a, auto, case_tac b, auto, case_tac b, auto)+)
then show "x1a #\<^sub>\<F>\<^sub>\<L> x \<in> P \<Longrightarrow> tickWF TickRT x"
by (meson FLTick0_def case_assms(2))
qed
then have "\<exists> xa. x = rtm2fl_trace xa \<and>
((\<exists>\<sigma>\<in>{x. x1a #\<^sub>\<F>\<^sub>\<L> x \<in> P}. xa = fl2rtm_trace \<sigma>) \<or>
(\<exists>\<sigma>\<in>{x. x1a #\<^sub>\<F>\<^sub>\<L> x \<in> P}. \<exists>\<rho>'. (\<exists>y. fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<and>
xa = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>))"
using ind_hyp[where P="{x. x1a #\<^sub>\<F>\<^sub>\<L> x \<in> P}"] case_assms(1) by fastforce
then obtain xa where xa_assms: "x = rtm2fl_trace xa \<and>
((\<exists>\<sigma>\<in>{x. x1a #\<^sub>\<F>\<^sub>\<L> x \<in> P}. xa = fl2rtm_trace \<sigma>) \<or>
(\<exists>\<sigma>\<in>{x. x1a #\<^sub>\<F>\<^sub>\<L> x \<in> P}. \<exists>\<rho>'. (\<exists>y. fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<and>
xa = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>))"
by blast
show "\<exists>xa. x1a #\<^sub>\<F>\<^sub>\<L> x = rtm2fl_trace xa \<and>
((\<exists>\<sigma>\<in>P. xa = fl2rtm_trace \<sigma>) \<or>
(\<exists>\<sigma>\<in>P. \<exists>\<rho>'. (\<exists>y. fl2rtm_trace \<sigma> = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<and>
xa = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>))"
apply (rule_tac x="(acc2maxref (acceptance x1a)) #\<^sub>\<R>\<^sub>\<T> (event x1a) #\<^sub>\<R>\<^sub>\<T> xa" in exI, cases x1a, safe, simp_all)
apply (case_tac a, auto, case_tac b, auto simp add: xa_assms)
apply (metis FLTick0_def acceptance_set amember.simps(2) case_assms(1) case_assms(2) tickWF.simps(2))
apply (case_tac a, auto, case_tac b, auto simp add: xa_assms)
apply (metis FLTick0_def acceptance_set amember.simps(2) case_assms(1) case_assms(2) tickWF.simps(2))
using xa_assms apply auto
apply (metis acc2maxref.simps(2) acceptance_event acceptance_set amember.simps(2) fl2rtm_trace.simps(2))
apply (metis acc2maxref.simps(2) acceptance_event acceptance_set amember.simps(2) fl2rtm_trace.simps(2) rttrace_with_refusal.simps(1))
apply (metis (no_types, hide_lams) FLTick0_def acceptance_event case_assms(1) case_assms(2) maxref2acc.simps(1) rtevent.exhaust rtm2fl_trace.simps(2) rtm2fl_trace.simps(3) tickWF.simps(2))
apply (metis (no_types, lifting) FLTick0_def acceptance_event case_assms(1) case_assms(2) maxref2acc.simps(1) rtevent.exhaust rtm2fl_trace.simps(2) rtm2fl_trace.simps(3) tickWF.simps(2))
apply (metis acc2maxref.simps(1) acceptance_event acceptance_set fl2rtm_trace.simps(2))
by (metis acc2maxref.simps(1) acceptance_event acceptance_set fl2rtm_trace.simps(2) rttrace_with_refusal.simps(1))
qed
qed
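text \<open>Conversely, translating a healthy maximal-refusals process into the
  FL model and back again recovers the original process; this is the other
  half of the inverse property, stated under the RTM1, RTM2, RT3 and RT4
  healthiness conditions together with well-formedness of all traces.\<close>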
lemma fl2rtm_rtm2fl_inverse:
assumes "\<forall>x\<in>P. rtWF x" "RTM1 P" "RTM2 P" "RT3 P" "RT4 P"
shows "fl2rtm (rtm2fl P) = P"
using assms unfolding rtm2fl_def fl2rtm_def
proof (safe, simp_all)
fix xa :: "'a rtevent rttrace"
have "\<And>P. RTM1 P \<Longrightarrow> RT3 P \<Longrightarrow> xa \<in> P \<Longrightarrow> rtWF xa \<Longrightarrow> fl2rtm_trace (rtm2fl_trace xa) \<in> P"
proof (induct xa, simp_all)
fix x :: "'a rtevent rtrefusal" and P
show "\<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P \<Longrightarrow> \<langle>acc2maxref (maxref2acc x)\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P"
by (case_tac x, simp_all)
next
fix x1a :: "'a rtevent rtrefusal" and x2 xa P
assume in_P: "(x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> xa) \<in> P"
assume wf_assms: "\<not> x2 \<in>\<^sub>\<R>\<^sub>\<T> x1a \<and> rtWF xa"
assume RTM1_P: "RTM1 P" and RT3_P: "RT3 P"
assume ind_hyp: "\<And>P. RTM1 P \<Longrightarrow> RT3 P \<Longrightarrow> xa \<in> P \<Longrightarrow> fl2rtm_trace (rtm2fl_trace xa) \<in> P"
have x2_TickRT_refusal: "x2 = TickRT \<Longrightarrow> \<exists> X. xa = \<langle>X\<rangle>\<^sub>\<R>\<^sub>\<T>"
using RT3_P RT3_refusal_after_TickRT in_P by blast
show "fl2rtm_trace (rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> xa)) \<in> P"
proof (cases x2, auto)
fix x1
assume case_assm: "x2 = TickRT"
have xa_is_refusal: "\<exists> X. xa = \<langle>X\<rangle>\<^sub>\<R>\<^sub>\<T>"
using RT3_P RT3_refusal_after_TickRT in_P case_assm by blast
have "(x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> \<langle>x1\<rangle>\<^sub>\<R>\<^sub>\<T>)"
by (cases x1a, auto)
then have "(x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<in> P"
using RTM1_P in_P xa_is_refusal unfolding RTM1_def
by (metis leq_rttrace_max.simps(1) leq_rttrace_max.simps(6) leq_rttrace_max.simps(8) maxref2acc.cases)
then show "((acc2maxref (acceptance (maxref2acc x1a,TickRT)\<^sub>\<F>\<^sub>\<L>)) #\<^sub>\<R>\<^sub>\<T> event (maxref2acc x1a,TickRT)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<in> P"
using wf_assms case_assm by (cases x1a, auto)
next
fix x2a
assume case_assm: "x2 = EventRT x2a"
have 1: "RTM1 {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using RTM1_P unfolding RTM1_def apply auto
by (metis in_rtrefusal.elims(3) leq_rttrace_max.simps(6) leq_rttrace_max.simps(8) wf_assms)
have 2: "RT3 {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using RT3_P unfolding RT3_def apply auto
by (metis case_assm no_tick.simps(2) rttrace_with_refusal.simps(1))
have "(x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace (rtm2fl_trace xa))) \<in> P"
using "1" "2" in_P ind_hyp by blast
then show "((acc2maxref (acceptance (maxref2acc x1a,EventRT x2a)\<^sub>\<F>\<^sub>\<L>)) #\<^sub>\<R>\<^sub>\<T> event (maxref2acc x1a,EventRT x2a)\<^sub>\<F>\<^sub>\<L> #\<^sub>\<R>\<^sub>\<T> (fl2rtm_trace (rtm2fl_trace xa))) \<in> P"
using wf_assms case_assm by (cases x1a, auto)
qed
qed
then show "xa \<in> P \<Longrightarrow> Ball P rtWF \<Longrightarrow> RTM1 P \<Longrightarrow> RT3 P \<Longrightarrow> fl2rtm_trace (rtm2fl_trace xa) \<in> P"
by auto
next
fix \<rho>' xa y
assume in_P: "xa \<in> P"
assume xa_eq: "fl2rtm_trace (rtm2fl_trace xa) = \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>"
assume P_wf: "Ball P rtWF" and RTM1_P: "RTM1 P" and RT4_P: "RT4 P"
have "rtWF xa \<Longrightarrow> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> xa"
using xa_eq apply -
proof (induct xa \<rho>' rule:leq_rttrace_rttrace_init_max.induct, auto)
fix v va vb
assume "fl2rtm_trace (rtm2fl_trace (v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb)) = (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>)"
then show "\<not> va \<in>\<^sub>\<R>\<^sub>\<T> v \<Longrightarrow> rtWF vb \<Longrightarrow> (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb)"
by (cases va, auto, (cases v, auto)+)
next
fix v va vb vc vd ve
assume ind_hyp: "fl2rtm_trace (rtm2fl_trace vb) = ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<Longrightarrow>
ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> vb"
assume case_assm: "fl2rtm_trace (rtm2fl_trace (v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb)) = (vc #\<^sub>\<R>\<^sub>\<T> vd #\<^sub>\<R>\<^sub>\<T> (ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>))"
then have "fl2rtm_trace (rtm2fl_trace vb) = ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> (TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>)"
by (cases va, auto, cases ve, auto)
then have "ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> vb"
using ind_hyp by blast
then show "\<not> va \<in>\<^sub>\<R>\<^sub>\<T> v \<Longrightarrow> rtWF vb \<Longrightarrow>
(vc #\<^sub>\<R>\<^sub>\<T> vd #\<^sub>\<R>\<^sub>\<T> (ve @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>)) \<le>\<^sub>\<R>\<^sub>\<T>\<^sub>\<M> (v #\<^sub>\<R>\<^sub>\<T> va #\<^sub>\<R>\<^sub>\<T> vb)"
using case_assm by (cases va, auto, (cases v, auto)+)
qed
then have "\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P"
using in_P P_wf RTM1_P unfolding RTM1_def by auto
then show "\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P"
using RT4_P unfolding RT4_def by auto
next
fix x :: "'a rtevent rttrace"
show "\<And>P. \<forall>x\<in>P. rtWF x \<Longrightarrow> RTM2 P \<Longrightarrow> RT3 P \<Longrightarrow> RT4 P \<Longrightarrow> x \<in> P \<Longrightarrow>
\<forall>\<sigma>. (\<forall>x. \<sigma> = rtm2fl_trace x \<longrightarrow> x \<notin> P) \<or>
(\<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
x \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<Longrightarrow>
\<exists>\<sigma>. (\<exists>x. \<sigma> = rtm2fl_trace x \<and> x \<in> P) \<and> x = fl2rtm_trace \<sigma>"
proof (induct x, auto)
fix x :: "'a rtevent rtrefusal" and P
assume case_assms: "\<forall>x\<in>P. rtWF x" "\<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> \<in> P"
show "\<exists>\<sigma>. (\<exists>x. \<sigma> = rtm2fl_trace x \<and> x \<in> P) \<and> \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T> = fl2rtm_trace \<sigma>"
using case_assms by (rule_tac x="rtm2fl_trace \<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T>" in exI, auto, rule_tac x="\<langle>x\<rangle>\<^sub>\<R>\<^sub>\<T>" in exI, auto, cases x, auto)
next
fix x1a :: "'a rtevent rtrefusal" and x2 x P
assume case_assm: "(x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P"
assume P_wf: "\<forall>x\<in>P. rtWF x" and RTM2_P: "RTM2 P" and RT3_P: "RT3 P" and RT4_P: "RT4 P"
assume ind_hyp: "\<And>P. \<forall>x\<in>P. rtWF x \<Longrightarrow> RTM2 P \<Longrightarrow> RT3 P \<Longrightarrow> RT4 P \<Longrightarrow> x \<in> P \<Longrightarrow>
\<forall>\<sigma>. (\<forall>x. \<sigma> = rtm2fl_trace x \<longrightarrow> x \<notin> P) \<or>
(\<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
x \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<Longrightarrow>
\<exists>\<sigma>. (\<exists>x. \<sigma> = rtm2fl_trace x \<and> x \<in> P) \<and> x = fl2rtm_trace \<sigma>"
have 1: "\<forall>x\<in>{x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}. rtWF x"
using P_wf by (auto)
have 2: "RTM2 {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using RTM2_P unfolding RTM2_def by (auto, metis rttrace_with_refusal.simps(1))
have 3: "RT3 {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using RT3_P unfolding RT3_def apply auto
by (metis no_tick.elims(2) rttrace_init.distinct(1) rttrace_init.inject rttrace_with_refusal.simps(1))
have 4: "RT4 {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}"
using RT4_P unfolding RT4_def by (auto, (metis rttrace_with_refusal.simps(1))+)
have "\<forall>\<sigma>. (\<forall>x. \<sigma> = rtm2fl_trace x \<longrightarrow> x \<notin> {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}) \<or>
(\<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
x \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<Longrightarrow>
\<exists>\<sigma>. (\<exists>x. \<sigma> = rtm2fl_trace x \<and> x \<in> {x. (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<in> P}) \<and> x = fl2rtm_trace \<sigma>"
using 1 2 3 4 case_assm ind_hyp by blast
then show "\<forall>\<sigma>. (\<forall>x. \<sigma> = rtm2fl_trace x \<longrightarrow> x \<notin> P) \<or>
(\<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
(x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) \<noteq> (\<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)) \<Longrightarrow>
\<exists>\<sigma>. (\<exists>x. \<sigma> = rtm2fl_trace x \<and> x \<in> P) \<and> (x1a #\<^sub>\<R>\<^sub>\<T> x2 #\<^sub>\<R>\<^sub>\<T> x) = fl2rtm_trace \<sigma>"
proof (cases x2, auto)
assume case_assms2: "x2 = TickRT"
assume "\<forall>\<sigma>. (\<forall>x. \<sigma> = rtm2fl_trace x \<longrightarrow> x \<notin> P) \<or>
(\<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
(x1a #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> x) \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
then have "\<forall>\<rho>'. (\<forall>y. (\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
(\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> x) \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>"
using case_assm case_assms2 apply (erule_tac x="rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)" in allE, auto)
by (metis RT4_P RT4_def acc2maxref.simps(1) acceptance_event acceptance_set maxref2acc.simps(1) rttrace.inject(2) rttrace_with_refusal.simps(2) rttrace_with_refusal_eq3)
then have "x \<noteq> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>"
by (metis rttrace_with_refusal.simps(2))
then have "x = \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<or> (\<exists>X e. x = \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> \<and> e \<notin> X)"
using RT3_P case_assm case_assms2 unfolding RT3_def
apply (auto, cases x, auto)
apply (erule_tac x="RTEmptyInit" in allE, auto, case_tac x1, auto)
apply (erule_tac x="RTEventInit x1a TickRT RTEmptyInit" in allE, auto)
using RT3_P RT3_refusal_after_TickRT by blast
then have "x = \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>"
proof auto
fix X and e :: "'a rtevent"
assume inner_assms: "x = \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>" "e \<notin> X"
then have "(\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> \<langle>[X]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<in> P"
by (metis RT4_P RT4_def case_assm case_assms2 rttrace_with_refusal.simps(2))
then have "(\<bullet>\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> ([X]\<^sub>\<R>\<^sub>\<T> #\<^sub>\<R>\<^sub>\<T> e #\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)) \<in> P"
using RTM2_P inner_assms unfolding RTM2_def
by (metis rttrace_with_refusal.simps(1) rttrace_with_refusal.simps(2) rttrace_with_refusal.simps(3))
then show False
using RT3_P RT3_refusal_after_TickRT by blast
qed
then show "\<exists>\<sigma>. (\<exists>x. \<sigma> = rtm2fl_trace x \<and> x \<in> P) \<and> (x1a #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> x) = fl2rtm_trace \<sigma>"
apply (rule_tac x="rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> TickRT #\<^sub>\<R>\<^sub>\<T> x)" in exI, auto)
using case_assm case_assms2 apply force
using P_wf case_assm case_assms2 rtm2fl_trace_aevent_prefix by fastforce+
next
fix x2a
assume case_assm2: "x2 = EventRT x2a"
assume "\<forall>\<sigma>. (\<forall>x. \<sigma> = rtm2fl_trace x \<longrightarrow> x \<notin> P) \<or>
(\<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
(x1a #\<^sub>\<R>\<^sub>\<T> EventRT x2a #\<^sub>\<R>\<^sub>\<T> x) \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
then have 1: "\<forall>\<sigma>. (\<forall>x. \<sigma> = rtm2fl_trace x \<longrightarrow> (x1a #\<^sub>\<R>\<^sub>\<T> EventRT x2a #\<^sub>\<R>\<^sub>\<T> x) \<notin> P) \<or>
(\<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
x \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>)"
using P_wf apply auto
apply (erule_tac x="rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> EventRT x2a #\<^sub>\<R>\<^sub>\<T> x)" in allE, auto)
apply (erule_tac x="RTEventInit x1a (EventRT x2a) \<rho>'" in allE, auto)
using rtm2fl_trace_aevent_prefix by force+
assume "\<forall>\<sigma>. (\<forall>x. \<sigma> = rtm2fl_trace x \<longrightarrow> (x1a #\<^sub>\<R>\<^sub>\<T> EventRT x2a #\<^sub>\<R>\<^sub>\<T> x) \<notin> P) \<or>
(\<forall>\<rho>'. (\<forall>y. fl2rtm_trace \<sigma> \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>y\<rangle>\<^sub>\<R>\<^sub>\<T>) \<or>
x \<noteq> \<rho>' @\<^sub>\<R>\<^sub>\<T> \<langle>\<bullet>\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T> @\<^sub>\<R>\<^sub>\<T> TickRT ##\<^sub>\<R>\<^sub>\<T> \<langle>[UNIV]\<^sub>\<R>\<^sub>\<T>\<rangle>\<^sub>\<R>\<^sub>\<T>) \<Longrightarrow>
\<exists>\<sigma>. (\<exists>x. \<sigma> = rtm2fl_trace x \<and> (x1a #\<^sub>\<R>\<^sub>\<T> EventRT x2a #\<^sub>\<R>\<^sub>\<T> x) \<in> P) \<and> x = fl2rtm_trace \<sigma>"
then have "\<exists>\<sigma>. (\<exists>x. \<sigma> = rtm2fl_trace x \<and> (x1a #\<^sub>\<R>\<^sub>\<T> EventRT x2a #\<^sub>\<R>\<^sub>\<T> x) \<in> P) \<and> x = fl2rtm_trace \<sigma>"
using 1 by auto
then show "\<exists>\<sigma>. (\<exists>x. \<sigma> = rtm2fl_trace x \<and> x \<in> P) \<and> (x1a #\<^sub>\<R>\<^sub>\<T> EventRT x2a #\<^sub>\<R>\<^sub>\<T> x) = fl2rtm_trace \<sigma>"
apply (auto, rule_tac x="rtm2fl_trace (x1a #\<^sub>\<R>\<^sub>\<T> EventRT x2a #\<^sub>\<R>\<^sub>\<T> x)" in exI, auto)
using P_wf rtm2fl_trace_aevent_prefix by force+
qed
qed
qed
lemma fl2rtm_rtm2fl_Galois:
assumes "FLTick0 TickRT A" "RTM B"
shows "fl2rtm A \<sqsubseteq>\<^sub>\<R>\<^sub>\<T> B = A \<sqsubseteq>\<^sub>\<F>\<^sub>\<L> rtm2fl B"
using assms rtm2fl_mono unfolding RTM_def by (metis fl2rtm_mono fl2rtm_rtm2fl_inverse rtm2fl_fl2rtm_inverse)
lemma rtm2fl_fl2rtm_Galois:
assumes "FLTick0 TickRT B" "RTM A"
shows "rtm2fl A \<sqsubseteq>\<^sub>\<F>\<^sub>\<L> B = (A \<sqsubseteq>\<^sub>\<R>\<^sub>\<T> fl2rtm B)"
using assms rtm2fl_mono unfolding RTM_def by (metis fl2rtm_mono fl2rtm_rtm2fl_inverse rtm2fl_fl2rtm_inverse)
end
State Before: α : Type u_1
β : Type ?u.86482
inst✝ : MeasurableSpace α
s : SignedMeasure α
j : JordanDecomposition α
h : s = toSignedMeasure j
⊢ toJordanDecomposition s = j State After: no goals Tactic: rw [h, toJordanDecomposition_toSignedMeasure]
Formal statement is: lemma in_box_complex_iff: "x \<in> box a b \<longleftrightarrow> Re x \<in> {Re a<..<Re b} \<and> Im x \<in> {Im a<..<Im b}" Informal statement is: A complex number $x$ lies in the open box $box(a, b)$ if and only if its real and imaginary parts lie in the open intervals $(\Re(a), \Re(b))$ and $(\Im(a), \Im(b))$, respectively.
{-# OPTIONS --cubical --no-import-sorts --safe #-}
module Cubical.DStructures.Structures.XModule where
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Equiv
open import Cubical.Foundations.HLevels
open import Cubical.Foundations.Isomorphism
open import Cubical.Foundations.Structure
open import Cubical.Functions.FunExtEquiv
open import Cubical.Homotopy.Base
open import Cubical.Data.Sigma
open import Cubical.Relation.Binary
open import Cubical.Algebra.Group
open import Cubical.Structures.LeftAction
open import Cubical.DStructures.Base
open import Cubical.DStructures.Meta.Properties
open import Cubical.DStructures.Structures.Constant
open import Cubical.DStructures.Structures.Type
open import Cubical.DStructures.Structures.Group
open import Cubical.DStructures.Structures.Action
private
variable
ℓ ℓ' : Level
-------------------------------------------------
-- Definitions and properties of
-- the equivariance and Peiffer conditions
-------------------------------------------------
module _ ((((G₀ , H) , _α_) , isAct) : Action ℓ ℓ') (φ : GroupHom H G₀) where
open GroupNotation₀ G₀
open GroupNotationᴴ H
private
f = GroupHom.fun φ
-- α is equivariant w.r.t φ if φ (g α h) ≡ g + (φ h) - g
isEquivariant : Type (ℓ-max ℓ ℓ')
isEquivariant = (g : ⟨ G₀ ⟩) → (h : ⟨ H ⟩) → f (g α h) ≡ (g +₀ f h) -₀ g
-- G₀ is a set, so isEquivariant is a proposition
isPropIsEquivariant : isProp isEquivariant
isPropIsEquivariant = isPropΠ2 (λ g h → set₀ (f (g α h)) ((g +₀ f h) -₀ g))
-- (α, φ) satisfies the Peiffer condition if
-- (φ h) α h' ≡ h + h' - h
isPeiffer : Type ℓ'
isPeiffer = (h h' : ⟨ H ⟩) → (f h) α h' ≡ (h +ᴴ h') -ᴴ h
-- H is a set, so isPeiffer is a proposition
isPropIsPeiffer : isProp isPeiffer
isPropIsPeiffer = isPropΠ2 (λ h h' → setᴴ ((f h) α h') ((h +ᴴ h') -ᴴ h))
module _ (ℓ ℓ' : Level) where
----------------------
-- Define the types of
-- - Actions α with a morphism φ
-- - Precrossed modules
-- - Crossed modules
-- and add URG structures to them
----------------------
ActionB = Σ[ (((G₀ , H) , _α_) , isAct) ∈ Action ℓ ℓ' ] (GroupHom H G₀)
PreXModule = Σ[ (α , φ) ∈ ActionB ] (isEquivariant α φ)
XModule = Σ[ ((α , φ) , isEqui) ∈ PreXModule ] (isPeiffer α φ)
-- displayed over 𝒮-Action: a morphism back (from H to G₀),
-- obtained by lifting the morphism over Grp² twice
𝒮ᴰ-Action\PreXModuleStr : URGStrᴰ (𝒮-Action ℓ ℓ')
(λ (((G , H) , _) , _) → GroupHom H G)
(ℓ-max ℓ ℓ')
𝒮ᴰ-Action\PreXModuleStr = VerticalLift2-𝒮ᴰ (𝒮-group ℓ ×𝒮 𝒮-group ℓ')
(𝒮ᴰ-G²\B ℓ ℓ')
(𝒮ᴰ-G²\Las ℓ ℓ')
(𝒮ᴰ-G²Las\Action ℓ ℓ')
𝒮-PreXModuleStr : URGStr ActionB (ℓ-max ℓ ℓ')
𝒮-PreXModuleStr = ∫⟨ 𝒮-Action ℓ ℓ' ⟩ 𝒮ᴰ-Action\PreXModuleStr
-- add equivariance condition
-- use that equivariance is a proposition
𝒮ᴰ-PreXModule : URGStrᴰ 𝒮-PreXModuleStr
(λ (α , φ) → isEquivariant α φ)
ℓ-zero
𝒮ᴰ-PreXModule = Subtype→Sub-𝒮ᴰ (λ (α , φ) → isEquivariant α φ , isPropIsEquivariant α φ)
𝒮-PreXModuleStr
𝒮-PreXModule : URGStr PreXModule (ℓ-max ℓ ℓ')
𝒮-PreXModule = ∫⟨ 𝒮-PreXModuleStr ⟩ 𝒮ᴰ-PreXModule
-- add the proposition isPeiffer to precrossed modules
𝒮ᴰ-XModule : URGStrᴰ 𝒮-PreXModule
(λ ((α , φ) , isEqui) → isPeiffer α φ)
ℓ-zero
𝒮ᴰ-XModule = Subtype→Sub-𝒮ᴰ (λ ((α , φ) , isEqui) → isPeiffer α φ , isPropIsPeiffer α φ)
𝒮-PreXModule
𝒮-XModule : URGStr XModule (ℓ-max ℓ ℓ')
𝒮-XModule = ∫⟨ 𝒮-PreXModule ⟩ 𝒮ᴰ-XModule
(* A proposition is (computationally) decidable *)
Definition Dec (A:Prop) : Type := {A} + {~A}.
(* A predicate is (computationally) decidable *)
Definition pDec (a:Type) (p:a -> Prop) : Type := forall (x:a), Dec (p x).
Arguments pDec {a}.
(* Two-fold decidable predicates *)
Definition pDec2 (a b:Type) (p:a -> b -> Prop) :=
forall (x:a) (y:b), Dec (p x y).
Arguments pDec2 {a} {b}.
Lemma pDec2Dec : forall (a b:Type) (p:a -> b -> Prop) (x:a),
pDec2 p -> pDec (p x).
Proof.
intros a b p x H1 y. apply H1.
Defined.
Definition DeciderOf (a:Type) (p:a -> Prop) (f:a -> bool) : Prop :=
forall (x:a), p x <-> f x = true.
Arguments DeciderOf {a}.
Definition Decidable (a:Type) (p:a -> Prop) : Prop :=
exists (f:a -> bool), DeciderOf p f.
Arguments Decidable {a}.
Lemma pDecDecidable : forall (a:Type) (p:a -> Prop),
pDec p -> Decidable p.
Proof.
intros a p q. remember (fun x =>
match (q x) with
| left _ => true
| right _ => false
end) as f eqn:E.
exists f. intros x. split; intros H1.
- rewrite E. destruct (q x) as [H2|H2].
+ reflexivity.
+ apply H2 in H1. contradiction.
- rewrite E in H1. destruct (q x) as [H2|H2].
+ assumption.
+ inversion H1.
Qed.
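The extraction behind `pDecDecidable` is just a case split on the decision procedure. Here is a minimal Python sketch of the same idea (hypothetical names, not generated from this Coq code): a decision procedure returns, for each `x`, whether `p x` holds together with some evidence, and forgetting the evidence yields a boolean decider with `p x <-> f x == True`.

```python
# Hypothetical sketch (not part of the Coq source): the computational
# content of pDecDecidable.

def decider_of(decide):
    """Collapse a decision procedure x -> (holds, evidence) into a boolean
    decider, mirroring the match on (q x) in pDecDecidable."""
    def f(x):
        holds, _evidence = decide(x)
        return holds
    return f

# Example predicate: evenness of an integer, with informal evidence strings.
def decide_even(n):
    q, r = divmod(n, 2)
    if r == 0:
        return True, f"{n} = 2*{q}"        # witnesses p n
    return False, f"{n} = 2*{q} + 1"       # witnesses ~p n

is_even = decider_of(decide_even)
```

The two branches of `decide_even` correspond to the `left`/`right` constructors of the `Dec` sum type.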
module Data.Time.Calendar
import public Data.Time.Calendar.Days
import public Data.Time.Calendar.CalendarDiffDays
import public Data.Time.Calendar.Gregorian
import public Data.Time.Calendar.WeekDate
%default total
-- ---------------------------------------------------------------------------
Definition total_map (A B:Type) :=
A -> B.
Definition t_update {A B : Type} (eq_dec:forall (x y:A),{x=y}+{x<>y}) (f:total_map A B) (x:A) (v:B) : total_map A B :=
fun y => if eq_dec x y then v else f y.
Definition t_get {A} {B} (f:total_map A B) (x:A) : B :=
f x.
Definition t_empty {A} {B} (v:B) : total_map A B :=
fun (_ : A) => v.
Lemma t_apply_empty: forall A B x v, @t_empty A B v x = v.
Proof.
intros.
unfold t_empty. reflexivity.
Qed.
Definition partial_map (A B:Type) := total_map A (option B).
Definition empty {A B: Type} : partial_map A B :=
t_empty None.
Definition update {A B: Type} (eq_dec:forall (x y:A),{x=y}+{x<>y}) (m: partial_map A B) (k: A) (v: B) :=
t_update eq_dec m k (Some v).
Definition member {A B: Type} (m : partial_map A B) (k: A) :=
match (m k) with
| Some _ => true
| None => false
end.
Lemma apply_empty : forall A B x, @empty A B x = None.
Proof.
intros. unfold empty. rewrite t_apply_empty. reflexivity.
Qed.
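The `t_update` construction is just functional override. A small Python sketch of the same total-map interface (hypothetical helper names; decidable equality is replaced by `==` instead of an explicit `eq_dec` argument):

```python
# Hypothetical Python analogue of t_empty / t_update (not extracted from Coq).
# A total map is a function; updating it returns a new function that shadows
# a single key, exactly like "fun y => if eq_dec x y then v else f y".

def t_empty(v):
    """Total map sending every key to the default value v."""
    return lambda _y: v

def t_update(f, x, v):
    """Total map equal to f everywhere except that x now maps to v."""
    return lambda y: v if y == x else f(y)

# Build the map {"a" -> 1, "b" -> 2, everything else -> 0}.
m = t_update(t_update(t_empty(0), "a", 1), "b", 2)
```

The lemma `t_apply_empty` corresponds to the fact that `t_empty(v)` returns `v` on every argument; a partial map is recovered by taking `None` as the default value.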
include("support.jl")
function remove_dups!(ll :: LinkedList)
    ll.head === nothing && return      # nothing to do for an empty list
    record :: Set = Set()
    push!(record, ll.head.item)
    prev :: Node = ll.head             # last node kept in the list
    curr = ll.head.next
    while curr !== nothing
        if curr.item in record
            prev.next = curr.next      # unlink the duplicate node
        else
            push!(record, curr.item)   # record the newly seen item
            prev = curr
        end
        curr = curr.next
    end
end
# Time : O(n)
# Space : O(n)
using Test
ll = LinkedList()
append_to_tail!(ll, 1)
append_to_tail!(ll, 2)
append_to_tail!(ll, 2)
append_to_tail!(ll, 3)
append_to_tail!(ll, 3)
append_to_tail!(ll, 3)
remove_dups!(ll)
delete_node!(ll, 1)
delete_node!(ll, 2)
delete_node!(ll, 3)
@test ll.head === nothing
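The same O(n)-time, O(n)-space technique reads naturally in Python as well; this sketch uses a hypothetical minimal `Node` class rather than the definitions in `support.jl`:

```python
# Hypothetical sketch of the dedup pass above (minimal stand-in for support.jl).
class Node:
    def __init__(self, item, nxt=None):
        self.item = item
        self.next = nxt

def remove_dups(head):
    """Remove duplicate items from a singly linked list in one pass,
    using an auxiliary set of seen items."""
    if head is None:
        return None
    seen = {head.item}
    prev, curr = head, head.next
    while curr is not None:
        if curr.item in seen:
            prev.next = curr.next      # unlink the duplicate
        else:
            seen.add(curr.item)        # record the current item, not curr.next
            prev = curr
        curr = curr.next
    return head

def to_list(head):
    """Collect the list items for inspection."""
    out = []
    while head is not None:
        out.append(head.item)
        head = head.next
    return out
```

Unlinking via `prev.next = curr.next` keeps the traversal single-pass: `prev` always points at the last node that was kept.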
(*
* Copyright 2020, Data61, CSIRO (ABN 41 687 119 230)
*
* SPDX-License-Identifier: GPL-2.0-only
*)
theory RWHelper_DP
imports ProofHelpers_DP KHeap_DP
begin
definition eq_on :: "'a set \<Rightarrow> ('a \<Rightarrow> 'b option) \<Rightarrow> ('a \<Rightarrow> 'b option) \<Rightarrow> bool"
where "eq_on m s s' \<equiv> \<forall>ptr\<in> m. s ptr = s' ptr"
lemma eq_on_subset:
"\<lbrakk>B \<subseteq> A ; eq_on A s s' \<rbrakk> \<Longrightarrow> eq_on B s s'"
by (auto simp:eq_on_def)
definition WritingOf :: "(('a \<Rightarrow>'b option) \<Rightarrow> ('a \<Rightarrow> 'b option)) \<Rightarrow> 'a set"
where "WritingOf f \<equiv> SUP s:UNIV. {ptr. (f s) ptr \<noteq> s ptr} "
definition IsReadingEstimateOf :: "'a set \<Rightarrow> (('a \<Rightarrow> 'b option) \<Rightarrow> ('a \<Rightarrow> 'b option)) \<Rightarrow> 'a set \<Rightarrow> bool"
where "IsReadingEstimateOf m f estimate \<equiv> (\<forall>s s'. (eq_on m s s') \<longrightarrow> (eq_on estimate (f s) (f s')))"
definition ReadingEstimateOf :: "(('a \<Rightarrow>'b option) \<Rightarrow> ('a \<Rightarrow> 'b option)) \<Rightarrow> ('a set) \<Rightarrow> ('a set)"
where "ReadingEstimateOf f estimate \<equiv> Inter {m. IsReadingEstimateOf m f estimate }"
abbreviation ReadingOf :: "(('a \<Rightarrow>'b option) \<Rightarrow> ('a \<Rightarrow> 'b option)) \<Rightarrow> 'a set"
where "ReadingOf f \<equiv> ReadingEstimateOf f (WritingOf f)"
lemma eq_on_trans:
"\<lbrakk>eq_on a s sa ; eq_on a sa sb\<rbrakk> \<Longrightarrow> eq_on a s sb"
by (simp add:eq_on_def)
lemma ReadingEstimateOf_inter:
"\<lbrakk>IsReadingEstimateOf a f r; IsReadingEstimateOf b f r \<rbrakk> \<Longrightarrow> IsReadingEstimateOf (a \<inter> b) f r"
apply (clarsimp simp:IsReadingEstimateOf_def)
apply (drule_tac x = s in spec)
apply (drule_tac x = "(\<lambda>ptr. if ptr\<in>(b - a) then (s' ptr) else (s ptr))" in spec)
apply (drule_tac x = "(\<lambda>ptr. if ptr\<in>(b - a) then (s' ptr) else (s ptr))" in spec)
apply (drule_tac x = s' in spec)
apply (erule impE)
apply (simp add:eq_on_def)
apply (erule impE)
apply (simp add:eq_on_def)
apply (rule eq_on_trans)
apply simp+
done
lemma ReadingEstimateOf_read_subset:
"\<lbrakk>IsReadingEstimateOf a f w; a \<subseteq> b\<rbrakk> \<Longrightarrow> IsReadingEstimateOf b f w"
by (auto simp add:IsReadingEstimateOf_def eq_on_def)
lemma ReadingEstimateOf_write_subset:
"\<lbrakk>IsReadingEstimateOf a f w; w' \<subseteq> w\<rbrakk> \<Longrightarrow> IsReadingEstimateOf a f w'"
by (auto simp add:IsReadingEstimateOf_def eq_on_def)
lemma reading_estimateD:
"\<lbrakk>IsReadingEstimateOf a f w; eq_on a s s'\<rbrakk> \<Longrightarrow> eq_on w (f s) (f s')"
by (auto simp:IsReadingEstimateOf_def)
lemma not_writingD:
"ptr \<notin> WritingOf f \<Longrightarrow> (f s) ptr = s ptr"
by (auto simp:WritingOf_def)
lemma well_ordered_estimate:
"WritingOf f \<subseteq> writing_estimate \<Longrightarrow>
ReadingOf f \<subseteq> ReadingEstimateOf f writing_estimate"
by (auto simp add:IsReadingEstimateOf_def ReadingEstimateOf_def eq_on_subset)
lemma writing_estimate_pipe:
"\<lbrakk>WritingOf f \<subseteq> Q; WritingOf g \<subseteq> Q\<rbrakk> \<Longrightarrow> WritingOf (f\<circ>g) \<subseteq> Q"
apply (subst WritingOf_def)
apply clarsimp
apply (rule ccontr)
apply (drule(1) contra_subsetD)+
apply (drule_tac s = "g xa" in not_writingD)
apply (drule_tac s = xa in not_writingD)
apply simp
done
lemma reading_writing_estimate:
"\<lbrakk>eq_on R s s'; IsReadingEstimateOf R g (WritingOf g)\<rbrakk> \<Longrightarrow> eq_on R (g s) (g s')"
apply (subst eq_on_def)
apply clarsimp
apply (case_tac "ptr \<in> WritingOf g")
apply (clarsimp simp:IsReadingEstimateOf_def)
apply (elim impE allE)
apply simp
apply (clarsimp simp:eq_on_def)
apply (clarsimp simp:WritingOf_def eq_on_def)
done
lemma reading_estimate_pipe:
assumes reg: "IsReadingEstimateOf R g M"
and ref: " IsReadingEstimateOf R f M"
and wg: "WritingOf g \<subseteq> M"
and wf: "WritingOf f \<subseteq> M"
shows "IsReadingEstimateOf R (f \<circ> g) M"
apply (clarsimp simp: IsReadingEstimateOf_def)
apply (cut_tac ReadingEstimateOf_write_subset[OF reg wg])
apply (drule(1) reading_writing_estimate[rotated])
apply (erule reading_estimateD[OF ref])
done
definition
"IsSepWritingEstimateOf f P proj m \<equiv> \<forall>ptr v.
\<lbrace>\<lambda>s. (proj s) ptr = v \<and> P s \<and> ptr \<in> (UNIV - m) \<rbrace> f \<lbrace>\<lambda>r s. (proj s) ptr = v \<rbrace>"
definition
"IsStrongSepWritingEstimateOf f P proj m g \<equiv> \<forall>state.
\<lbrace>\<lambda>s. (proj s) = state \<and> P s \<rbrace> f
\<lbrace>\<lambda>r s. (proj s) |` m = (g state) \<and> (proj s) |` (UNIV - m) = state |` (UNIV - m)\<rbrace>"
definition
"IsSepReadingEstimateOf r f P proj m \<equiv> \<forall>substate. \<exists>g.
\<lbrace>\<lambda>s. (proj s) |` r = substate \<and> P s \<rbrace> f \<lbrace>\<lambda>r s. (proj s) |` m = g substate\<rbrace>"
lemma sep_writing_estimateD:
"\<lbrakk>IsSepWritingEstimateOf f P proj m; (r, s') \<in> fst (f s);P s \<rbrakk>
\<Longrightarrow> proj s |` (UNIV - m) = proj s' |` (UNIV - m)"
apply (rule ext)
apply (clarsimp simp: restrict_map_def
IsSepWritingEstimateOf_def)
apply (drule_tac x = x in spec)
apply (drule_tac x = "proj s x" in spec)
apply (drule(1) use_valid)
apply simp+
done
lemma sep_writing_estimate_imp:
"\<lbrakk>IsSepWritingEstimateOf f P' proj m; \<And>s. P s \<Longrightarrow> P' s\<rbrakk>
\<Longrightarrow> IsSepWritingEstimateOf f P proj m"
apply (clarsimp simp:IsSepWritingEstimateOf_def)
apply (drule_tac x = ptr in spec)
apply (drule_tac x = v in spec)
apply (erule hoare_pre)
apply clarsimp
done
lemma sep_strong_writing_estimateD:
"\<lbrakk>IsStrongSepWritingEstimateOf f P proj m g; (r, s') \<in> fst (f s);P s \<rbrakk>
\<Longrightarrow> proj s' |` m = g (proj s) \<and> proj s' |` (UNIV - m) = proj s |` (UNIV - m)"
apply (simp add:
IsStrongSepWritingEstimateOf_def)
apply (drule_tac x = "proj s " in spec)
apply (drule use_valid)
apply assumption
apply simp+
done
lemma intent_reset_twice[simp]:
"intent_reset (intent_reset z) = intent_reset z"
apply (case_tac z)
apply (simp_all add:intent_reset_def)
done
lemma largest_set:
"UNIV \<subseteq> cmps \<Longrightarrow> cmps = UNIV"
by auto
definition "sep_map_predicate p P cmps \<equiv> \<lambda>s. \<exists>obj. (sep_map_general p obj cmps s \<and> P obj)"
definition "sep_heap_dom P m = (\<forall>s. P s \<longrightarrow> dom (sep_heap s) = m)"
definition "sep_irq_node_dom P m = (\<forall>s. P s \<longrightarrow> dom (sep_irq_node s) = m)"
definition "sep_map_spec P s = (\<forall>s'. P s' \<longrightarrow> s' = s)"
lemma sep_heap_domD:
"\<lbrakk>sep_heap_dom P m; P s ; p \<notin> m\<rbrakk>
\<Longrightarrow> p \<notin> dom (sep_heap s)"
by (fastforce simp:sep_heap_dom_def)
lemma sep_heap_domD':
"\<lbrakk>sep_heap_dom P m;P s\<rbrakk>
\<Longrightarrow> m = dom (sep_heap s)"
by (fastforce simp:sep_heap_dom_def)
lemma sep_irq_node_domD':
"\<lbrakk>sep_irq_node_dom P m; P s\<rbrakk>
\<Longrightarrow> m = dom (sep_irq_node s)"
by (fastforce simp: sep_irq_node_dom_def)
lemma sep_specD:
"\<lbrakk>sep_map_spec P s; P s'\<rbrakk> \<Longrightarrow> s = s'"
by (clarsimp simp: sep_map_spec_def)
lemma sep_heap_dom_sep_map_predicate:
"m = {ptr}\<times> cmps \<Longrightarrow>
sep_heap_dom (sep_map_predicate ptr P cmps) m"
apply (clarsimp simp: sep_map_general_def
object_to_sep_state_def
sep_heap_dom_def sep_map_predicate_def
split:sep_state.splits if_splits)
apply (rule set_eqI)
apply (clarsimp simp:dom_def object_project_def split:cdl_component_id.splits)
done
lemma sep_irq_node_dom_sep_map_predicate:
"sep_irq_node_dom (sep_map_predicate ptr P cmps) {}"
apply (clarsimp simp: sep_map_general_def object_to_sep_state_def
sep_irq_node_dom_def sep_map_predicate_def
split:sep_state.splits if_split_asm)
done
lemma sep_map_rewrite_spec:
"sep_map_general = (\<lambda>p obj cmps. sep_map_predicate p ((=) obj) cmps)"
"sep_map_o = (\<lambda>p obj. sep_map_predicate p ((=) obj) UNIV)"
"sep_map_f = (\<lambda>p obj. sep_map_predicate p ((=) obj) {Fields})"
"sep_map_c = (\<lambda>p cap. let (ptr,slot) = p in
sep_map_predicate ptr (\<lambda>obj. object_slots obj = [ slot \<mapsto> cap]) {Slot slot})"
by (fastforce simp: sep_map_predicate_def sep_any_def sep_map_general_def
sep_map_o_def sep_map_f_def sep_map_c_def split_def
split: sep_state.splits)+
lemma sep_map_rewrite_any:
"sep_any_map_c = (\<lambda>ptr state.
sep_map_predicate (fst ptr) (\<lambda>obj. \<exists>cap. object_slots obj = [(snd ptr) \<mapsto> cap]) {Slot (snd ptr)} state)"
by (fastforce simp: sep_map_predicate_def sep_map_general_def sep_any_def
sep_map_o_def sep_map_f_def sep_map_c_def split_def
split: sep_state.splits)
lemma sep_heap_dom_conj:
"\<lbrakk>sep_heap_dom P m;sep_heap_dom P' m'\<rbrakk> \<Longrightarrow> sep_heap_dom (P \<and>* P') (m \<union> m')"
apply (clarsimp simp: sep_heap_dom_def sep_conj_def
sep_disj_sep_state_def sep_state_disj_def)
apply (auto simp: map_disj_def plus_sep_state_def sep_state_add_def)
done
lemma sep_heap_dom_simps:
"sep_heap_dom (slot \<mapsto>c -) ({(fst slot,Slot (snd slot))})"
"sep_heap_dom (slot \<mapsto>c cap) ({(fst slot,Slot (snd slot))})"
apply (simp add:sep_map_rewrite_any sep_heap_dom_sep_map_predicate)
apply (simp add:sep_map_rewrite_spec sep_heap_dom_sep_map_predicate split_def)
done
lemma sep_irq_node_dom_simps:
"sep_irq_node_dom (slot \<mapsto>c -) {}"
"sep_irq_node_dom (slot \<mapsto>c cap) {}"
apply (simp add:sep_map_rewrite_any sep_irq_node_dom_sep_map_predicate)
apply (simp add:sep_map_rewrite_spec sep_irq_node_dom_sep_map_predicate split_def)
done
lemma sep_map_spec_conj:
"\<lbrakk>sep_map_spec P s; sep_map_spec P' s'\<rbrakk>
\<Longrightarrow> sep_map_spec (P \<and>* P')
(SepState (sep_heap s ++ sep_heap s')
(sep_irq_node s ++ sep_irq_node s'))"
by (clarsimp simp: sep_map_spec_def sep_conj_def
plus_sep_state_def sep_state_add_def)
lemma sep_spec_simps:
"sep_map_spec (slot \<mapsto>c cap)
(SepState [(fst slot,Slot (snd slot)) \<mapsto> (CDL_Cap (Some (reset_cap_asid cap)))]
Map.empty)"
apply (clarsimp simp:sep_map_spec_def sep_map_c_def sep_map_general_def)
apply (case_tac s')
apply (clarsimp simp:object_to_sep_state_def)
apply (rule ext)
apply (clarsimp simp: object_project_def object_slots_object_clean
split: if_split_asm)
done
lemma sep_conj_spec:
"\<lbrakk> < P \<and>* Q > s\<rbrakk>
\<Longrightarrow> \<exists>s'. < P \<and>* (=) s' > s"
by (auto simp:sep_state_projection_def sep_conj_def
sep_disj_sep_state_def sep_state_disj_def)
lemma sep_conj_spec_value:
"\<lbrakk> < P \<and>* (=) s' > s; sep_heap_dom P m; p \<notin> m\<rbrakk>
\<Longrightarrow> (sep_heap s') p = (sep_heap (sep_state_projection s) |` (UNIV - m)) p"
apply (clarsimp simp:sep_state_projection_def sep_conj_def
sep_disj_sep_state_def sep_state_disj_def)
apply (drule(2) sep_heap_domD)
apply (simp add: plus_sep_state_def sep_state_add_def
split: sep_state.splits)
apply (clarsimp simp: map_add_def split:option.splits)
done
lemma write_estimate_via_sep:
assumes sep_valid: "\<And>obj Q. \<lbrace>\<lambda>s. < P \<and>* Q > s \<rbrace>
f \<lbrace>\<lambda>r s. < P' \<and>* Q > s \<rbrace>"
and sep_heap_dom: "sep_heap_dom P m"
and sep_heap_dom': "sep_heap_dom P' m"
shows "IsSepWritingEstimateOf f (\<lambda>s. < P \<and>* Q> s)
(\<lambda>s. sep_heap (sep_state_projection s)) m"
apply (clarsimp simp: valid_def IsSepWritingEstimateOf_def)
apply (drule sep_conj_spec)
apply clarsimp
apply (drule use_valid[OF _ sep_valid])
apply simp
apply (drule(1) sep_conj_spec_value[OF _ sep_heap_dom])
apply (drule(1) sep_conj_spec_value[OF _ sep_heap_dom'])
apply simp
done
lemma sep_map_dom_predicate:
"\<lbrakk>sep_heap_dom P m; sep_irq_node_dom P m';
<P \<and>* P'> b\<rbrakk>
\<Longrightarrow> P (SepState (sep_heap (sep_state_projection b) |` m)
(sep_irq_node (sep_state_projection b) |` m'))"
apply (clarsimp simp: sep_state_projection_def sep_conj_def
plus_sep_state_def sep_state_add_def)
apply (drule(1) sep_heap_domD')
apply (drule(1) sep_irq_node_domD')
apply simp
apply (case_tac x,case_tac y)
apply (clarsimp simp: sep_state_projection_def sep_conj_def
sep_disj_sep_state_def sep_state_disj_def)
apply (simp add: map_add_restrict_dom_left)
done
lemma strong_write_estimate_via_sep:
assumes sep_valid: "\<And>obj Q. \<lbrace>\<lambda>s. < P \<and>* Q > s \<rbrace>
f \<lbrace>\<lambda>r s. < P' \<and>* Q > s \<rbrace>"
and sep_heap_dom: "sep_heap_dom P m"
and sep_heap_dom': "sep_heap_dom P' m"
and sep_irq_node_dom' : "sep_irq_node_dom P' m'"
and sep_spec: "sep_map_spec P' state"
shows "IsStrongSepWritingEstimateOf f (\<lambda>s. < P \<and>* Q> s)
(\<lambda>s. sep_heap (sep_state_projection s)) m (\<lambda>s. sep_heap state)"
apply (clarsimp simp: valid_def IsStrongSepWritingEstimateOf_def)
apply (drule sep_conj_spec)
apply clarsimp
apply (drule use_valid[OF _ sep_valid])
apply simp
apply (rule conjI)
apply simp
apply (drule sep_map_dom_predicate[OF sep_heap_dom' sep_irq_node_dom'])
apply (drule sep_specD[OF sep_spec])
apply (case_tac state,simp)
apply (rule ext,clarsimp simp:restrict_map_def)
apply (drule sep_conj_spec_value)
apply (rule sep_heap_dom)
apply simp
apply (drule(1) sep_conj_spec_value[OF _ sep_heap_dom'])
apply simp
done
lemma map_eqI:
"\<lbrakk>a |` m = b |` m;a |` (UNIV - m) = b |` (UNIV - m)\<rbrakk> \<Longrightarrow> a = b"
apply (rule ext)
apply (drule_tac x = x in fun_cong)+
apply (auto simp:restrict_map_def split:if_splits)
done
lemma using_writing_estimate:
assumes we: "IsSepWritingEstimateOf f P proj m"
shows "\<lbrace>\<lambda>s. P s \<and> Q ((proj s) |` (UNIV - m)) \<rbrace> f \<lbrace>\<lambda>r s. Q ((proj s) |` (UNIV - m))\<rbrace>"
apply (clarsimp simp:valid_def)
apply (erule arg_cong[where f = Q,THEN iffD1,rotated])
apply (erule sep_writing_estimateD[OF we])
apply simp
done
lemma using_strong_writing_estimate:
assumes we: "IsStrongSepWritingEstimateOf f P proj m g"
shows
"\<lbrace>\<lambda>s. P s \<and> Q ((proj s) |` (UNIV - m) ++ g (proj s)) \<rbrace> f \<lbrace>\<lambda>r s. Q (proj s)\<rbrace>"
apply (clarsimp simp:valid_def)
apply (erule arg_cong[where f = Q,THEN iffD1,rotated])
apply (rule map_eqI[where m = m])
apply (drule(1) sep_strong_writing_estimateD[OF we,THEN conjunct1,symmetric])
apply (rule ext)
apply (clarsimp simp:restrict_map_def
map_add_def split:option.splits)
apply (frule(1) sep_strong_writing_estimateD[OF we,THEN conjunct1,symmetric])
apply (drule(1) sep_strong_writing_estimateD[OF we,THEN conjunct2,symmetric])
apply (rule ext)
apply simp
done
(* Here are some examples showing that we obtain valid rules from existing separation-logic rules *)
(* 1. We will need some predicates in the future to make sure the scheduler does the right thing *)
definition "scheduable_cap cap \<equiv> case cap of
RunningCap \<Rightarrow> True | RestartCap \<Rightarrow> True | _ \<Rightarrow> False"
definition tcb_scheduable :: "cdl_tcb \<Rightarrow> bool"
where "tcb_scheduable \<equiv> \<lambda>tcb. (cdl_tcb_caps tcb) tcb_pending_op_slot
= Some RunningCap \<or> (cdl_tcb_caps tcb) tcb_pending_op_slot = Some RestartCap"
abbreviation "tcb_at_heap \<equiv> \<lambda>P ptr heap.
object_at_heap (\<lambda>obj. \<exists>tcb. obj = Tcb tcb \<and> P tcb) ptr heap"
definition all_scheduable_tcbs :: "(word32 \<Rightarrow> cdl_object option) \<Rightarrow> cdl_object_id set"
where "all_scheduable_tcbs \<equiv> \<lambda>m. {ptr. tcb_at_heap tcb_scheduable ptr m}"
definition sep_all_scheduable_tcbs :: "(32 word \<times> cdl_component_id \<Rightarrow> cdl_component option) \<Rightarrow> cdl_object_id set"
where "sep_all_scheduable_tcbs m \<equiv> {ptr. \<exists>obj cap. m (ptr,Fields) = Some (CDL_Object obj) \<and> is_tcb obj
\<and> m (ptr,Slot tcb_pending_op_slot) = Some (CDL_Cap (Some cap)) \<and> scheduable_cap cap}"
lemma is_tcb_obj_type:
"is_tcb = (\<lambda>x. object_type x = TcbType)"
by (auto simp:is_tcb_def object_type_def split:cdl_object.splits)
lemma all_scheduable_tcbs_rewrite:
"all_scheduable_tcbs (cdl_objects s) =
sep_all_scheduable_tcbs (sep_heap (sep_state_projection s))"
apply (intro set_eqI iffI)
apply (clarsimp simp:all_scheduable_tcbs_def sep_state_projection_def
sep_all_scheduable_tcbs_def object_at_heap_def object_project_def
is_tcb_obj_type)
apply (clarsimp simp:object_type_def object_slots_object_clean
tcb_scheduable_def object_slots_def scheduable_cap_def)
apply (fastforce simp:object_clean_def asid_reset_def update_slots_def
reset_cap_asid_def intent_reset_def object_slots_def
split:if_splits)
apply (clarsimp simp:all_scheduable_tcbs_def sep_state_projection_def
sep_all_scheduable_tcbs_def object_at_heap_def object_project_def
is_tcb_obj_type split:option.splits)
apply (clarsimp simp:object_type_def tcb_scheduable_def
scheduable_cap_def object_slots_def object_clean_def asid_reset_def
update_slots_def intent_reset_def reset_cap_asid_def
split:cdl_object.splits cdl_cap.splits option.splits)
done
lemma update_slots_rev:
"update_slots slots obj = obj' \<Longrightarrow>
obj = update_slots (object_slots obj) obj'"
by (clarsimp simp:update_slots_def object_slots_def
split:cdl_object.splits)
lemma all_scheduable_tcbsD:
"ptr \<in> all_scheduable_tcbs (cdl_objects s)
\<Longrightarrow> tcb_at_heap tcb_scheduable ptr (cdl_objects s)"
by (simp add:all_scheduable_tcbs_def)
lemma all_scheduable_tcbsD':
"ptr \<notin> all_scheduable_tcbs (cdl_objects s)
\<Longrightarrow> \<not> tcb_at_heap tcb_scheduable ptr (cdl_objects s)"
by (simp add:all_scheduable_tcbs_def)
lemma scheduable_cap_reset_cap_asid[simp]:
"scheduable_cap (reset_cap_asid cap) = scheduable_cap cap"
by (case_tac cap,simp_all add: reset_cap_asid_def scheduable_cap_def)
lemma set_cap_all_scheduable_tcbs:
"\<lbrace>\<lambda>s. all_scheduable_tcbs (cdl_objects s) = {cur_thread} \<and> (cap = RunningCap \<or> cap = RestartCap) \<rbrace>
set_cap (cur_thread,tcb_pending_op_slot) cap
\<lbrace>\<lambda>rv s. all_scheduable_tcbs (cdl_objects s) = {cur_thread} \<rbrace>"
apply (rule hoare_name_pre_state)
apply (cut_tac all_scheduable_tcbsD[where ptr = cur_thread])
prefer 2
apply fastforce
apply (clarsimp simp:all_scheduable_tcbs_rewrite)
apply (rule hoare_pre)
apply (rule using_strong_writing_estimate
[where proj = "(\<lambda>a. sep_heap (sep_state_projection a))"])
apply (rule strong_write_estimate_via_sep[OF set_cap_wp])
apply (rule sep_heap_dom_simps sep_irq_node_dom_simps sep_spec_simps)+
apply (rule conjI)
apply (clarsimp simp:sep_map_c_conj
Let_def sep_any_exist all_scheduable_tcbs_rewrite[symmetric]
dest!:in_singleton)
apply (clarsimp simp:object_at_heap_def tcb_scheduable_def
sep_state_projection_def object_project_def)
apply (rule conjI)
apply (clarsimp simp:object_slots_def object_clean_def
update_slots_def intent_reset_def asid_reset_def
split:option.splits)
apply fastforce+
apply (rule subst,assumption)
apply (drule in_singleton)
apply (intro set_eqI iffI)
apply (clarsimp simp: sep_all_scheduable_tcbs_def sep_state_projection_def
split: if_split_asm option.splits)
apply (fastforce simp: sep_all_scheduable_tcbs_def map_add_def
sep_state_projection_def scheduable_cap_def
split: option.splits)
done
lemma sep_inv_to_all_scheduable_tcbs:
assumes sep: "\<And>P. \<lbrace><P>\<rbrace> f \<lbrace>\<lambda>r. <P>\<rbrace>"
shows "\<lbrace>\<lambda>s. P (all_scheduable_tcbs (cdl_objects s))\<rbrace> f
\<lbrace>\<lambda>r s. P (all_scheduable_tcbs (cdl_objects s))\<rbrace>"
apply (clarsimp simp:valid_def all_scheduable_tcbs_rewrite)
apply (erule use_valid)
apply (rule hoare_strengthen_post)
apply (rule sep)
apply assumption
apply simp
done
lemma validE_to_valid:
assumes validE:"\<And>E. \<lbrace>P\<rbrace>f\<lbrace>\<lambda>r. Q\<rbrace>,\<lbrace>\<lambda>r s. E\<rbrace>"
shows "\<lbrace>P\<rbrace>f\<lbrace>\<lambda>r. Q\<rbrace>"
using validE[where E = False]
apply (clarsimp simp:validE_def valid_def)
apply (drule_tac spec)
apply (erule(1) impE)
apply (drule_tac bspec)
apply assumption
apply (auto split:sum.splits)
done
end
|
module Imperative.Lexer
import DataStructures
import Lightyear.Strings
%access export
rIf : Parser ()
rIf = token "if"
rThen : Parser ()
rThen = token "then"
rElse : Parser ()
rElse = token "else"
rWhile : Parser ()
rWhile = token "while"
rDo : Parser ()
rDo = token "do"
rSkip : Parser ()
rSkip = token "skip"
rTrue : Parser ()
rTrue = token "true"
rFalse : Parser ()
rFalse = token "false"
rPlus : Parser ()
rPlus = token "+"
rMinus : Parser ()
rMinus = token "-"
rTimes : Parser ()
rTimes = token "*"
rDivide : Parser ()
rDivide = token "/"
rAssign : Parser ()
rAssign = token ":="
rLT : Parser ()
rLT = token "<"
rGT : Parser ()
rGT = token ">"
rAnd : Parser ()
rAnd = token "and"
rOr : Parser ()
rOr = token "or"
rNot : Parser ()
rNot = token "not"
reservedNames : List String
reservedNames = [ "if"
, "then"
, "else"
, "while"
, "do"
, "skip"
, "true"
, "false"
, "not"
, "and"
, "or"
]
|
proposition Stone_Weierstrass_uniform_limit: fixes f :: "'a::euclidean_space \<Rightarrow> 'b::euclidean_space" assumes S: "compact S" and f: "continuous_on S f" obtains g where "uniform_limit S g f sequentially" "\<And>n. polynomial_function (g n)" |
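In conventional notation (a restatement added for readability, not part of the original theory file), the proposition asserts the Stone–Weierstrass theorem in uniform-limit form:

```latex
% For compact S and continuous f, a sequence of polynomial functions
% g_n converges to f uniformly on S:
\[
  S \text{ compact},\; f \in C(S,\mathbb{R}^k)
  \;\Longrightarrow\;
  \exists (g_n)_{n\in\mathbb{N}} \text{ polynomial functions}:\;
  \sup_{x \in S} \lVert g_n(x) - f(x) \rVert \xrightarrow[n\to\infty]{} 0.
\]
```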
A function $f$ from a topological space $X$ to a topological space $Y$ has a limit at a point $a \in X$ if and only if $f(a) \in Y$ and for every open set $V$ containing $f(a)$, there exists an open set $U$ containing $a$ such that $f(U) \subseteq V$. |
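Rendered in symbols (an added gloss; since $f(a) \in Y$ holds automatically, the condition amounts to the usual open-set characterization of continuity of $f$ at $a$):

```latex
\[
  \forall V \subseteq Y \text{ open with } f(a) \in V,\;
  \exists U \subseteq X \text{ open with } a \in U
  \;\text{such that}\; f(U) \subseteq V.
\]
```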
In 1910, the Franciscans turned the management of the parish over to the Diocese of Omaha. In October of that year, Edward S. Muenich became the first diocesan pastor of St. Leonard's.
|
With the increasing threat of data breaches, the next major compliance mandate in data protection, data privacy, and IT security is the EU's General Data Protection Regulation (GDPR), which has global implications. The session explores requirements, impact, areas of compliance risk, penalties, and practical compliance solutions, including a basic set of questions for self-assessment, such as: - Can you determine what your risk profile is? - Can we control where data resides? - Can we enhance data privacy, including data obfuscation? - Can we quickly and comprehensively notify in the event of a breach? - Can we continuously evaluate the effectiveness of our security? - What personal data is out there and where is it? - Can we control what personal data is accessible and who can access it? - Can we detect unauthorized access or breaches of personal data? Through active discussion and self-assessment on these topics, the audience will gain insight into high-level requirements and gauge their current compliance level.
Describe the global impact and high-level regulatory provisions and requirements of the General Data Protection Regulation (GDPR); discuss and gain insights into a basic set of questions for self-assessment of GDPR compliance, with consideration of data security and data privacy. |
(**
Derivation of a signature (in order to build the signature Θ^(n) as we like it)
*)
Require Import UniMath.Foundations.PartD.
Require Import UniMath.Foundations.Propositions.
Require Import UniMath.Foundations.Sets.
Require Import UniMath.CategoryTheory.Core.Prelude.
Require Import UniMath.CategoryTheory.FunctorCategory.
Require Import UniMath.CategoryTheory.Monads.Monads.
Require Import UniMath.CategoryTheory.Monads.LModules.
Require Import UniMath.CategoryTheory.Monads.Derivative.
Require Import UniMath.CategoryTheory.DisplayedCats.Core.
Require Import UniMath.CategoryTheory.DisplayedCats.Constructions.
Require Import UniMath.CategoryTheory.limits.coproducts.
Require Import UniMath.CategoryTheory.limits.graphs.colimits.
Require Import UniMath.CategoryTheory.limits.terminal.
Require Import Modules.Prelims.lib.
Require Import Modules.Prelims.LModulesCoproducts.
Require Import Modules.Prelims.DerivationIsFunctorial.
Require Import Modules.Signatures.Signature.
Section DAr.
Context {C : category}
(bcpC : limits.bincoproducts.BinCoproducts C)
(CT : limits.terminal.Terminal C).
Local Notation "∂" := (LModule_deriv_functor (TerminalObject CT) bcpC
_).
Local Notation Signature := (signature C).
Local Notation MOD R := (category_LModule R C).
Variable (a : signature C).
Definition signature_deriv_on_objects (R : Monad C) : LModule R C :=
∂ (a R).
Definition signature_deriv_on_morphisms (R S : Monad C)
(f : Monad_Mor R S) : LModule_Mor _ (signature_deriv_on_objects R)
(pb_LModule f (signature_deriv_on_objects S)) :=
(# ∂ (# a f)%ar ) · (inv_from_iso (pb_LModule_deriv_iso _ bcpC f _ )).
Definition signature_deriv_data : @signature_data C
:= signature_deriv_on_objects ,, signature_deriv_on_morphisms.
Lemma signature_deriv_is_signature : is_signature signature_deriv_data.
Proof.
split.
- intros R c.
cbn -[LModule_deriv_functor identity].
repeat rewrite id_right.
etrans.
{
set (f := ((#a)%ar _)).
eapply (maponpaths (fun (z : LModule_Mor _ _ _) => (# ∂ z : LModule_Mor _ _ _) c )(t1 := f)
(t2 := morphism_from_iso (pb_LModule_id_iso _ ))
).
apply LModule_Mor_equiv;[apply homset_property|].
apply nat_trans_eq;[apply homset_property|].
apply signature_id.
}
apply idpath.
- intros R S T f g.
etrans.
{
apply (cancel_postcomposition (C := MOD R)).
rewrite signature_comp.
apply functor_comp.
}
apply LModule_Mor_equiv;[apply homset_property|].
apply nat_trans_eq;[apply homset_property|].
intro x.
cbn.
repeat rewrite id_right.
apply idpath.
Qed.
Definition signature_deriv : Signature := _ ,, signature_deriv_is_signature.
Lemma signature_to_deriv_laws : is_signature_Mor a signature_deriv
(fun R => LModule_to_deriv CT bcpC R (a R)).
Proof.
intros R S f.
apply (nat_trans_eq (homset_property _)).
intro c.
cbn.
repeat rewrite id_left.
rewrite id_right.
apply pathsinv0.
apply nat_trans_ax.
Qed.
Definition signature_to_deriv : signature_Mor a signature_deriv := _ ,, signature_to_deriv_laws.
End DAr.
Fixpoint signature_deriv_n {C : category} bcp T a (n :nat) : signature C :=
match n with
0 => a
| S m => @signature_deriv C bcp T (@signature_deriv_n C bcp T a m)
end.
|
Caravans (AD&D Fantasy Roleplaying, Al-Qadim) is by Rick Swan, edited by C. Terry Dezra, and published by TSR. The Sea of Caravans (also called Bahr al-Dahil) is a sea in North Zakhara.
Its cover features a painting by Jeff Easley depicting Tasslehoff Burrfoot peering at a red dragon and Verminaard of the Dragonarmies of Ansalon.
The Windows version also included scripting options for the Aurora toolkit. I see your eye is as keen as the eagle and your mind as sharp as my jambiya, for you hold in your hand a great treasure. Treachery awaits at the hands of those you trust most! In 2nd edition, Psionicists gradually gain access to additional disciplines as they advance in level.
Salvatore’s The Cleric Quintet. It’s a beautiful Persian-style rug with a name, Ala’i the Hungry [ But beneath the beauty lies mystery, for the court of the Grand Caliph is as full of intrigue as the Grand Bazaar is full of adventures waiting to happen. But it’s hard to imagine a player getting excited about them.
Tufala bint Maneira. Neverwinter is a fantasy novel by American author R.
Rather, its caravan serves as an excuse to move the characters [across the desert]. First appearing in the Fiend Folio, they have since appeared in and been adapted to numerous campaign settings including Al-Qadim, Greyhawk, Dragonlance, Dark Sun, and the Forgotten Realms.
The expansion pack adds a new campaign and new features including new character classes, creatures, feats, and spells, and other nuances such as allowing the player to access and modify their henchman’s inventory. She soon discovers that she has a newly acquired azure colored tattoo imprinted on the inside of her sword arm extending from her wrist to her elbow. Hasar Al-Yasan, Noble Genie.
It was a commercial success, with sales above units by early. Reviewers were pleased with new features introduced in the game, like more options for party customization and an overland map, but were less impressed with the game’s storyline and technical achievements. In 2nd, 3rd and 3. Seems to me it should go on the Campaign Book “page campaign guide describing Muluk” as the promo text says on the back of the box.
Released in October, it is set in the Forgotten Realms campaign world. In the second part, the PCs are the women of the tribe, who must escape the evil flame mage’s harem and use all their wits to win free of a strange city.
Deities are included in this list only when documented in a Forgotten Realms-specific source or otherwise clearly indicated as existing in the setting. Cruise Con ’95 Sands of Fire ?
This is a list of campaign settings published for role-playing games. This is a list of Forgotten Realms deities. The MC sheets are: Khalid al-Karim 39 of Each tale is told by Scheherazade, a beautiful young woman who reveals them night after night to a murderous king. |
function pde = Stokesdata0
% STOKESDATA0 data for the Stokes equations: load f, divergence g,
%   exact velocity u, exact pressure p, and Dirichlet condition g_D.
nu = 1;
pde = struct('f',@f,'g',@g,'exactp', @exactp, ...
'exactu', @exactu,'g_D', @g_D, 'nu', nu);
function z = f(p)
x = p(:,1); y = p(:,2);
z(:,1) = -4*pi^2*(2*cos(2*pi*x)-1).*sin(2*pi*y)+x.^2;
z(:,2) = 4*pi^2*(2*cos(2*pi*y)-1).*sin(2*pi*x);
end
function z = g(p)
z = zeros(size(p,1),1);
end
function z = exactu(p)
x = p(:,1); y = p(:,2);
z(:,1) = (1-cos(2*pi*x)).*sin(2*pi*y);
z(:,2) = -(1-cos(2*pi*y)).*sin(2*pi*x);
end
function z = exactp(p)
x = p(:,1); % y = p(:,2);
z = 1/3*x.^3;
end
function z = g_D(p) % Dirichlet boundary condition
z = exactu(p);
end
end |
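Since `exactu` should be divergence-free (∂u₁/∂x + ∂u₂/∂y = 0, as the cross terms cancel analytically), a quick numerical check is easy to script. The following Python sketch (an addition, not part of the original MATLAB file) mirrors the velocity definition and verifies that the divergence vanishes via central differences:

```python
import math

def u(x, y):
    # mirrors exactu in Stokesdata0:
    # u1 = (1 - cos(2*pi*x)) * sin(2*pi*y)
    # u2 = -(1 - cos(2*pi*y)) * sin(2*pi*x)
    return ((1 - math.cos(2 * math.pi * x)) * math.sin(2 * math.pi * y),
            -(1 - math.cos(2 * math.pi * y)) * math.sin(2 * math.pi * x))

def divergence(x, y, h=1e-6):
    # central-difference approximation of du1/dx + du2/dy
    du1dx = (u(x + h, y)[0] - u(x - h, y)[0]) / (2 * h)
    du2dy = (u(x, y + h)[1] - u(x, y - h)[1]) / (2 * h)
    return du1dx + du2dy

if __name__ == "__main__":
    worst = max(abs(divergence(0.1 * i, 0.07 * j))
                for i in range(1, 10) for j in range(1, 10))
    print(worst)  # ~0, up to finite-difference round-off
```

The same pattern extends to checking the momentum equation f = -νΔu + ∇p against the `f` defined above.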
Formal statement is: lemma bounded_translation_minus: fixes S :: "'a::real_normed_vector set" shows "bounded S \<Longrightarrow> bounded ((\<lambda>x. x - a) ` S)" Informal statement is: If $S$ is a bounded set, then the set of all points obtained by translating $S$ by $a$ is also bounded. |
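The informal statement follows from the triangle inequality (an added one-line justification): if $\lVert x \rVert \le B$ for all $x \in S$, then for any translate $x - a$,

```latex
\[
  \lVert x - a \rVert \;\le\; \lVert x \rVert + \lVert a \rVert \;\le\; B + \lVert a \rVert,
\]
```

so $(\lambda x.\, x - a)\,\text{`}\,S$ is bounded by $B + \lVert a \rVert$.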
/-
Copyright (c) 2015 Leonardo de Moura. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura, Mario Carneiro
-/
import data.list.big_operators.basic
/-!
# Lists in product and sigma types
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
This file proves basic properties of `list.product` and `list.sigma`, which are list constructions
living in `prod` and `sigma` types respectively. Their definitions can be found in
[`data.list.defs`](./defs). Beware, this is not about `list.prod`, the multiplicative product.
-/
variables {α β : Type*}
namespace list
/-! ### product -/
@[simp] lemma nil_product (l : list β) : product (@nil α) l = [] := rfl
@[simp] lemma product_cons (a : α) (l₁ : list α) (l₂ : list β)
: product (a::l₁) l₂ = map (λ b, (a, b)) l₂ ++ product l₁ l₂ := rfl
@[simp] lemma product_nil : ∀ (l : list α), product l (@nil β) = []
| [] := rfl
| (a::l) := by rw [product_cons, product_nil]; refl
@[simp] lemma mem_product {l₁ : list α} {l₂ : list β} {a : α} {b : β} :
(a, b) ∈ product l₁ l₂ ↔ a ∈ l₁ ∧ b ∈ l₂ :=
by simp only [product, mem_bind, mem_map, prod.ext_iff, exists_prop,
and.left_comm, exists_and_distrib_left, exists_eq_left, exists_eq_right]
lemma length_product (l₁ : list α) (l₂ : list β) :
length (product l₁ l₂) = length l₁ * length l₂ :=
by induction l₁ with x l₁ IH; [exact (zero_mul _).symm,
simp only [length, product_cons, length_append, IH,
right_distrib, one_mul, length_map, add_comm]]
/-! ### sigma -/
variable {σ : α → Type*}
@[simp] lemma nil_sigma (l : Π a, list (σ a)) : (@nil α).sigma l = [] := rfl
@[simp] lemma sigma_cons (a : α) (l₁ : list α) (l₂ : Π a, list (σ a))
: (a::l₁).sigma l₂ = map (sigma.mk a) (l₂ a) ++ l₁.sigma l₂ := rfl
@[simp] lemma sigma_nil : ∀ (l : list α), l.sigma (λ a, @nil (σ a)) = []
| [] := rfl
| (a::l) := by rw [sigma_cons, sigma_nil]; refl
@[simp] lemma mem_sigma {l₁ : list α} {l₂ : Π a, list (σ a)} {a : α} {b : σ a} :
sigma.mk a b ∈ l₁.sigma l₂ ↔ a ∈ l₁ ∧ b ∈ l₂ a :=
by simp only [list.sigma, mem_bind, mem_map, exists_prop, exists_and_distrib_left,
and.left_comm, exists_eq_left, heq_iff_eq, exists_eq_right]
lemma length_sigma (l₁ : list α) (l₂ : Π a, list (σ a)) :
length (l₁.sigma l₂) = (l₁.map (λ a, length (l₂ a))).sum :=
by induction l₁ with x l₁ IH; [refl,
simp only [map, sigma_cons, length_append, length_map, IH, sum_cons]]
end list
|
# Copyright (c) 2018-2021, Carnegie Mellon University
# See LICENSE for details
Declare(apack, nth, var, toExpArg, ExpOps, VarOps, NthOps, ExprFuncs, AnySyms, errExp, funcExp);
Class(Symbolic, rec(
isSymbolic := true,
visitAs := "Symbolic",
setType := meth(self)
self.t := self.computeType();
return self;
end,
dims := self >> Cond(
IsArrayT(self.t), self.t.dims(),
Error("<self>.dims() is only valid when self.t is a TArray"))
# must be implemented in subclasses
#computeType := self >> ..type of self..
));
IsSymbolic := o -> IsRec(o) and IsBound(o.isSymbolic) and o.isSymbolic;
IsExpArg := o -> IsSymbolic(o) or IsValue(o);
IsLoc := x -> IsRec(x) and IsBound(x.isLoc) and x.isLoc;
IsNth := x -> IsRec(x) and IsBound(x.__bases__) and x.__bases__[1] = nth;
IsVar := x -> IsRec(x) and IsBound(x.__bases__) and x.__bases__[1] = var;
IsExp := x -> IsRec(x) and IsBound(x.isExp) and x.isExp;
toRange := rng -> Cond(
rng = [], 0,
IsRange(rng), Checked(rng[1]=0, Last(rng)+1),
IsInt(rng), rng,
IsValue(rng), rng.v,
IsSymbolic(rng), rng,
Error("<rng> must be a range, an integer, or a symbolic expression"));
listRange := rng -> Cond(
IsRange(rng), Checked(rng[1]=0, rng),
IsInt(rng), [0..rng-1],
Error("<rng> must be a range or an integer"));
# _ListElmOp: executes operation on evaluated list elements
Declare(_ListElmOp);
_ListElmOp := (a, b, op) ->
Cond( IsList(a) and IsList(b) and not IsString(a) and not IsString(b),
Checked(Length(a)=Length(b), List([1..Length(a)], i -> _ListElmOp(a[i], b[i], op))),
IsRec(a) and IsBound(a.ev),
_ListElmOp(a.ev(), b, op),
IsRec(b) and IsBound(b.ev),
_ListElmOp(a, b.ev(), op),
IsList(a) and not IsString(a),
List(a, e -> _ListElmOp(e, b, op)),
IsList(b) and not IsString(b),
List(b, e -> _ListElmOp(a, e, op)),
op(a, b) );
Class(Loc, Symbolic, rec(
isLoc := true,
isExp := true,
free := self >> Set(ConcatList(self.rChildren(), FreeVars)),
print := self >> Print(self.__name__, "(", PrintCS(self.rChildren()), ")"),
from_rChildren := (self, rch) >> ApplyFunc(ObjId(self), rch),
));
#F nth(<loc>, <idx>) -- symbolic representation of array access
#F
Class(nth, Loc, rec(
__call__ := (self, loc, idx) >> WithBases(self,
rec(operations := NthOps,
loc := toExpArg(loc),
idx := toExpArg(idx))).setType().cfold(),
can_fold := self >> self.idx _is funcExp or (IsValue(self.idx) and
(IsValue(self.loc) or (IsVar(self.loc) and IsBound(self.loc.value)) or self.loc _is apack)),
cfold := self >> When(self.can_fold(), self.eval(), self),
rChildren := self >> [self.loc, self.idx],
rSetChild := rSetChildFields("loc", "idx"),
ev := self >> let(e := self.eval(),
Cond(IsBound(e.v), e.v, e)),
eval := self >> let(loc := self.loc.eval(), idx := self.idx.eval(),
evself := CopyFields(self, rec(loc := loc, idx := idx)), # Simply return expression in case value cannot be returned (although it appears this wasn't desired originally?)
Cond(idx _is funcExp,
self.t.value(idx.args[1]),
not IsValue(idx),
evself, # self,
idx.v < 0,
errExp(self.t),
loc _is apack,
Cond(idx.v >= Length(loc.args), errExp(self.t), loc.args[idx.v+1]),
IsValue(loc),
Cond(idx.v >= Length(loc.v), errExp(self.t), V(loc.v[idx.v+1])),
IsVar(loc) and IsBound(loc.value),
Cond(idx.v >= Length(loc.value.v), errExp(self.t), V(loc.value.v[idx.v+1])),
evself)), # self)),
computeType := self >> Cond(
IsPtrT(self.loc.t) or IsArrayT(self.loc.t) or IsListT(self.loc.t), self.loc.t.t,
ObjId(self.loc.t)=TSym, TSym("Containee"), #used with C++ container objects (EnvList)
self.loc.t = TUnknown, self.loc.t,
Error("Unknown types of 1st argument <self.loc> in ", ObjId(self))
),
isExpComposite := true
));
#F deref(<loc>) -- symbolic representation of pointer dereference, equivalent to nth(<loc>, 0)
#F
Class(deref, nth, rec(
__call__ := (self, loc) >> Inherited(loc, TInt.value(0)),
rChildren := self >> [self.loc],
rSetChild := rSetChildFields("loc"),
));
#F addrof(<loc>) -- symbolic representation of address of <loc>.
#F
#F For a variable 'foo', addrof(foo) is the equivalent of &(foo) in C
#F
Class(addrof, Loc, rec(
__call__ := (self, loc) >> WithBases(self,
rec(operations := NthOps, loc := loc, idx := 0)).setType(),
computeType := self >> TPtr(self.loc.t),
rChildren := self >> [self.loc],
rSetChild := rSetChildFields("loc"),
can_fold := False,
));
#F var(<id>, <t>)
#F var(<id>, <t>, <range>)
#F var.fresh(<id>, <t>, <range>)
#F var.fresh_t(<id>, <t>)
#F
#F Create symbolic variables. variables are kept in a global hash, and thus
#F two variables with same name will refer to same physical object.
#F Namely
#F Same(var("zz", TInt), var("zz", TInt)) == true
#F Moreover,
#F spiral> v1 := var("zz", TInt);;
#F spiral> v2 := var("zz", TReal);;
#F spiral> v1.t;
#F TReal;
#F spiral> v2.t;
#F TReal;
#F
Class(var, Loc, rec(
rChildren := self >> [],
from_rChildren := (self, rch) >> self,
free := self >> Set([self]),
equals := (self,o) >> Same(self,o),
setAttr := meth(self, attr) self.(attr) := true; return self; end,
setAttrTo := meth(self, attr, val) self.(attr) := val; return self; end,
computeType := self >> self.t,
__call__ := meth(arg)
local self, id, range, t, v;
self := arg[1];
id := arg[2];
if Length(arg) >= 3 then t := arg[3]; else t := TUnknown; fi;
if Length(arg) >= 4 then range := arg[4]; else range := false; fi;
if not IsBound(self.table.(id)) then
v := WithBases(self, rec(operations := VarOps, id := id, t := t));
if range <> false then v.range := range; fi;
self.table.(id) := CantCopy(v);
v.uid := [BagAddr(v),1];
return v;
else
v := self.table.(id);
if t<>TUnknown then v.t := t; fi;
if range <> false then v.range := range; fi;
#if Length(arg) >= 3 then
#return Error("Variable '", id, "' is already defined, use var(..).xxx to update fields");
#fi;
return v;
fi;
end,
setRange := meth(self, r)
self.range := r;
return self;
end,
setValue := meth(self, v)
self.value := v;
return self;
end,
clone := self >> When(
IsBound(self.range),
var.fresh(self.id{[1]}, self.t, self.range),
var.fresh_t(self.id{[1]}, self.t)
),
printFull := self >> Print(
self.__name__, "(\"", self.id, "\", ", self.t,
When(
IsBound(self.range),
Print(", ", self.range), ""
),
")"
),
# printShort := self >> Print(self.__name__, "(\"", self.id, "\")"),
printShort := self >> Print(self.id),
print := ~.printShort,
fresh := (self,id,t,range) >> self(self._id(id), t, range),
fresh_t := (self,id,t) >> Cond(
IsInt(t) or IsScalar(t),
self(self._id(id), TInt, t),
IsType(t),
self(self._id(id), t),
Error("<t> must be a type or an integer that represents an interval")),
_id := meth(self, id)
local cnt, st;
cnt := When(IsBound(self.counter.(id)), self.counter.(id), 1);
# Intel compiler (ver 8 and 9)
# in linux uses variable i386 as a keyword.
if cnt = 385 then
self.counter.(id) := cnt+2;
else
self.counter.(id) := cnt+1;
fi;
st := Concat(id, String(cnt));
# st := Concat(id, VarNameInt(cnt));
if IsBound(self.table.(st)) then
self.counter.(id) := cnt+1000;
return self._id(id);
else return st;
fi;
end,
nth := (self, idx) >> nth(self, idx),
ev := self >> self, #When(IsBound(self.value), self.value.ev(), self),
eval := self >> self,
can_fold := False,
flush := meth(self)
self.table := WeakRef(tab());
self.counter := tab();
end,
table := WeakRef(tab()),
counter := tab(),
has_range := self >> IsInt(self.range)
));
#F ----------------------------------------------------------------------------------------------
#F Exp : expressions
#F ----------------------------------------------------------------------------------------------
Class(Exp, Symbolic, rec(
isExp := true,
isExpComposite := true,
__call__ := arg >> WithBases(arg[1],
rec(args := List(Drop(arg, 1), toExpArg), operations := ExpOps)).setType(),
print := self >> Print(self.__name__, "(", PrintCS(self.args), ")"),
ev := self >> Error("not implemented"),
eval := meth(self)
local evargs, res, type;
evargs := List(self.args, e -> e.eval());
if evargs <> [] and ForAll(evargs, IsValue) then
res := ShallowCopy(self);
res.args := evargs;
res := res.ev();
type := self.computeType();
return type.value(res);
else
res := ApplyFunc(ObjId(self), evargs);
res.t := self.t; # NOTE: why is this line here?
return res;
fi;
end,
rChildren := self >> self.args,
rSetChild := meth(self, n, newChild)
self.args[n] := newChild;
end,
from_rChildren := (self, rch) >> ApplyFunc(ObjId(self), rch),
free := self >> Set(ConcatList(self.args, FreeVars)),
can_fold := self >> not IsPtrT(self.t) and
let(rch := self.rChildren(), rch<>[] and ForAll(rch, c -> IsValue(c) or (IsVar(c) and IsBound(c.value)))),
cfold := self >> When(self.can_fold(), self.eval(), self),
setType := meth(self)
if IsBound(self.computeType) then
self.t := self.computeType();
else
self.t := UnifyTypes(List(self.args, x->x.t));
PrintErr("Warning: ", ObjId(self), " needs a computeType() method. ",
"(default type = ", self.t, ")\n");
fi;
return self;
end
));
Class(AutoFoldExp, Exp, rec(
__call__ := arg >> ApplyFunc(Inherited, Drop(arg, 1)).cfold()
));
#F AutoFoldRealExp -- scalar or vector expression with floating point result type
Class(AutoFoldRealExp, AutoFoldExp, rec(
computeType := self >> let(
t := UnifyTypesL(self.args),
Cond( IsRealT(t.base_t()), t, Cond(IsVecT(t), TVect(TReal, t.size), TReal)))
));
Class(ListableExp, Exp, rec(
__call__ := meth(arg)
local self, res;
self := arg[1];
arg := Drop(arg, 1);
if Length(arg)=2 then
if IsList(arg[1]) then return List(arg[1], e->self(e, arg[2]));
elif IsList(arg[2]) then return List(arg[2], e->self(arg[1], e));
fi;
fi;
res := WithBases(self, rec(args := List(arg, toExpArg), operations := ExpOps));
return res.cfold();
end
));
Declare(apack);
# TArray expression
Class(apack, AutoFoldExp, rec(
ev := self >> List(self.args, x->x.ev()),
computeType := self >> TArray(UnifyTypes(List(self.args, x->x.t)), Length(self.args)),
can_fold := False, # apack expected to be in nth, let nth to fold first instead of apack
fromList := (lst, func) -> ApplyFunc(apack, Map(lst, func)),
fromMat := (mat, func) -> apack.fromList(mat, r -> apack.fromList(r, func)),
));
#F cxpack(<re>, <im>) -- packs <re> <im> pair into complex number
Class( cxpack, AutoFoldExp, rec(
ev := self >> ApplyFunc(Complex, List(self.args, x->x.ev())),
computeType := self >> UnifyTypesV(self.args).complexType(),
));
#F brackets(<exp>) -- symbolic representation of brackets
Class(brackets, Exp, rec(
__call__ := arg >> Checked( Length(arg) = 2, ApplyFunc(Inherited, Drop(arg, 1))),
computeType := self >> self.args[1].t,
can_fold := self >> Inherited() and Length(self.args)=1,
));
#F fcall(<func>, <arg1>, ...) -- symbolic representation of a function call
#F <func> could be a variable or a Lambda
#F Example:
#F f := var("f", TFunc(TInt, TInt));
#F fcall(f, 1);
#F fcall(L(16,4).lambda(), 1);
#F
Class(fcall, Exp, rec(
__call__ := arg >> let(
self := arg[1],
args := List(Drop(arg, 1), toExpArg),
Cond(Length(args) < 1,
Error("fcall must have at least 1 argument: function"),
IsLambda(args[1]),
ApplyFunc(args[1].at, Drop(args, 1)),
#else
WithBases(self, rec(args := args, operations := ExpOps)).setType())),
computeType := self >> let(ft := self.args[1].t, Cond(
(ft in [TString, TUnknown]) or (ObjId(ft)=TPtr and ObjId(ft.t)=TSym), TUnknown,
ObjId(ft) = TFunc, Last(ft.params),
Error("<self.args[1].t> must be TFunc(..) or TUnknown"))),
eval := self >> ApplyFunc(ObjId(self), List(self.args, e->e.eval())),
can_fold := False,
));
Class(gapcall, Exp, rec(
__call__ := meth(arg)
local res, fname;
res := WithBases(arg[1], rec(args := List(Drop(arg, 1), toExpArg),
operations := ExpOps,
t := TUnknown));
if Length(res.args) < 1
then Error("gapcall must have at least 1 argument: function name"); fi;
if IsVar(res.args[1]) then
fname := res.args[1].id;
elif IsString(res.args[1]) then
fname := res.args[1];
else
return res;
fi;
if IsBound(ExprFuncs.(fname)) then
return ApplyFunc(ExprFuncs.(fname), Drop(res.args, 1));
else
return res;
fi;
end,
ev := self >> ApplyFunc(Eval(DelayedValueOf(self.args[1].id)),
List(Drop(self.args,1), x->x.ev())),
eval := meth(self)
local evargs;
evargs := List(Drop(self.args,1), e->e.eval());
if ForAll(evargs, IsValue) then return V(self.ev());
else return ApplyFunc(ObjId(self), Concatenation([self.args[1]], evargs));
fi;
end
));
ExprDelay := function(d)
d := FunccallsDelay(d);
d := DelaySubst(d, e->Global.Type(e) in [T_VAR, T_VARAUTO],
e -> var(NameOf(e)));
d := DelaySubst(d, e->Global.Type(e) = T_FUNCCALL,
e -> ApplyFunc(gapcall, e{[1..Length(e)]}));
return When(IsExp(d), d, V(d));
end;
toExpArg := x -> Cond(IsRec(x) or IsFunction(x), x,
IsDelay(x), ExprDelay(x),
V(x));
toAssignTarget := x -> x;
#F ----------------------------------------------------------------------------------------------
#F Expressions: Basic Arithmetic
#F
#F add(<a>, <b>, ...)
Class(add, AutoFoldExp, rec(
# __sum is overriden in descendant 'adds' (saturated addition)
__sum := (self, a, b) >> self.t.value(self.t.sum(_stripval(a), _stripval(b))),
# ev := self >> FoldL(self.args, (acc, e)->self.__sum(acc, e.ev()), self.t.zero()).ev(),
ev := self >> let(fe := FoldL(self.args, (acc, e)->self.__sum(acc, e.ev()), self.t.zero()), When(self = fe, self, fe.ev()) ),
# the intricate logic below is for computing the new alignment when dealing
# with pointer types
_ptrPlusOfs := (ptr_t, ofs) ->
TPtr(ptr_t.t, ptr_t.qualifiers, [ptr_t.alignment[1], (ptr_t.alignment[2] + ofs) mod ptr_t.alignment[1]]),
_addPtrT := function(ptr_args)
local align, el_t, t;
if Length(ptr_args)=1 then return ptr_args[1].t; fi;
align := [ Gcd(List(ptr_args, x->x.t.alignment[1])) ];
align[2] := Sum(ptr_args, x->x.t.alignment[2]) mod align[1];
el_t := UnifyTypes(List(ptr_args, x->x.t.t));
return TPtr(el_t, ConcatList(ptr_args, x->x.t.qualifiers), align);
end,
computeType := meth(self)
local len, t, ptr_args, other_args, sum;
len := Length(self.args);
if len=0 then return TInt;
elif len=1 then return self.args[1].t;
else
[ptr_args, other_args] := SplitBy(self.args, x->IsPtrT(x.t) or IsArrayT(x.t));
if Length(ptr_args)=0 then
return UnifyTypesL(self.args);
elif Length(ptr_args)=1 then
sum := Sum(other_args);
if other_args<>[] and not IsIntT(sum.t) then Error("Can't add non-integer to a pointer"); fi;
return self._ptrPlusOfs(ptr_args[1].t, sum);
elif Length(other_args)=0 then
return self._addPtrT(ptr_args);
else
return Error("Addition of more than one pointer and integers is not defined");
fi;
fi;
end,
# premultiplies all constants, removes 0s
cfold := meth(self)
local cons, sym, e, a, t, zero;
a := self.args;
# Processing size 1 first allows to skip computation of the type
# and of the zero
if Length(a)=1 then
return a[1];
# fast special case for 2 terms, i.e., add(a, b)
elif Length(a)=2 then
if IsBound(self.t.zero) then
t := self.t;
zero := self.t.zero();
return Cond((a[1]=0 or a[1]=zero) and a[2].t = t, a[2],
(a[2]=0 or a[2]=zero) and a[1].t = t, a[1],
IsValue(a[1]) and IsValue(a[2]), t.value(self.__sum(a[1].v, a[2].v)),
self);
else
return self;
fi;
# general case for add with >2 terms
else
t := self.t;
if IsBound(t.zero) then
zero := t.zero(); cons := zero; sym := [];
for e in self.args do
if IsSymbolic(e) then Add(sym, e);
else cons := self.__sum(cons, e);
fi;
od;
if sym=[] then return cons;
elif (cons=0 or cons=zero) and CopyFields(self, rec(args:=sym)).computeType() = t then self.args := sym;
else self.args := [When(IsPtrT(t), TInt.value(cons.v), cons)] :: sym;
fi;
if Length(self.args)=1 then return self.args[1]; fi;
fi;
return self;
fi;
end,
has_range := self >> ForAll(self.args, e -> Cond(IsValue(e), true, IsBound(e.has_range), e.has_range(), false) ),
range := self >> let(ranges := List(self.args, e -> Cond(IsValue(e), e, IsVar(e), V(e.range-1), e.range())), Sum(ranges))
));
#F adds(<a>, <b>, ...) saturated addition
Class(adds, add, rec(
__sum := (self, a, b) >> self.t.saturate(_stripval(a) + _stripval(b)),
));
Class(neg, AutoFoldExp, rec(
ev := self >> -self.args[1].ev(),
computeType := self >> let(t := self.args[1].t,
Cond(IsPtrT(t),
t.aligned([t.alignment[1], -t.alignment[2] mod t.alignment[1]]),
t)),
));
#F sub(<a>, <b>)
Class(sub, AutoFoldExp, rec(
# __sub is overriden in descendant 'subs' (saturated substraction)
__sub := (self, a, b) >> let(type := self.computeType(), type.value(a - b)),
ev := self >> let(eve := self.__sub(self.args[1].ev(), self.args[2].ev()), When(self = eve, self, eve.ev()) ),
computeType := meth(self)
local a, b, isptr_a, isptr_b;
[a, b] := self.args;
[isptr_a, isptr_b] := [IsPtrT(a.t) or IsArrayT(a.t), IsPtrT(b.t) or IsArrayT(b.t)];
if not isptr_a and not isptr_b then return UnifyPair(a.t, b.t);
elif isptr_a and isptr_b then
return add._addPtrT([a, neg(b)]);
elif isptr_a then
return add._ptrPlusOfs(a.t, -b);
else #isptr_b
return add._ptrPlusOfs(neg(b).t, a);
fi;
end,
cfold := self >> let(a := self.args[1], b := self.args[2], zero := self.t.zero(),
Cond((a=0 or a=zero) and b.t=self.t, neg(b),
(b=0 or b=zero) and a.t=self.t, a,
a=b, zero,
IsValue(a) and IsValue(b), self.__sub(a, b),
self)),
));
#F subs(<a>, <b>) saturated substraction
Class(subs, sub, rec(
__sub := (self, a, b) >> self.t.saturate(_stripval(a) - _stripval(b)),
));
Class(mul, AutoFoldExp, rec(
ev := self >> let(eve := FoldL(self.args, (z, x) -> self.t.product(_stripval(z), x.ev()), self.t.one()), When(self = eve, self, V(eve).ev())),
_ptrMul := function(ptr_t, mult)
local t;
t := Copy(ptr_t);
t.alignment[2] := (t.alignment[2] * mult) mod t.alignment[1];
return t;
end,
computeType := meth(self)
local len, t, ptr_t, ptr_args, other_args, prod, args;
args := self.args;
len := Length(args);
if len=0 then return TInt;
elif len=1 then return args[1].t;
# elif len=2 then
# if IsPtrT(args[1].t) then
# if not IsIntT(args[2].t) then Error("Can't multiply a pointer by a non-integer"); fi;
# return self._ptrMul(args[1].t, args[2]);
# elif IsPtrT(args[2].t) then
# if not IsIntT(args[1].t) then Error("Can't multiply a pointer by a non-integer"); fi;
# return self._ptrMul(args[2].t, args[1]);
# else
# return UnifyPair(args[1].t, args[2].t);
# fi;
else
[ptr_args, other_args] := SplitBy(args, x->IsPtrT(x.t));
if ptr_args=[] then
return UnifyTypesL(args);
elif Length(ptr_args) > 1 then Error("Can't multiply pointers");
else
prod := Product(other_args);
if other_args<>[] and not IsIntT(prod.t) then Error("Can't multiply a pointer by a non-integer"); fi;
return self._ptrMul(ptr_args[1].t, prod);
fi;
fi;
end,
    # premultiplies all constants, removes 1s, and returns 0 if any factor is 0
cfold := meth(self)
local cons, sym, e, a, one, zero, t;
t := self.t; one := t.one(); zero := t.zero();
a := self.args;
# fast special case for 2 factors, i.e., mul(a, b)
if Length(a)=2 then
return Cond((a[1]=1 or a[1]=one) and t=a[2].t, a[2],
(a[2]=1 or a[2]=one) and t=a[1].t, a[1],
a[1]=0 or a[2]=0 or a[1] = zero or a[2] = zero, zero,
IsValue(a[1]) and IsValue(a[2]), t.value(t.product(a[1].v, a[2].v)),
self);
elif Length(a)=1 then return a[1];
# general case for mul with >2 factors
else
cons := one; sym := [];
for e in self.args do
if IsSymbolic(e) then Add(sym, e);
elif e=0 or e=zero then return zero;
else cons := cons * e;
fi;
od;
if sym=[] then return cons;
elif (cons=1 or cons=one) and t=UnifyTypesL(sym) then self.args := sym;
else self.args := [cons] :: sym;
fi;
if Length(self.args)=1 then return self.args[1]; fi;
return self;
fi;
end,
has_range := self >> ForAll(self.args, e -> Cond(IsValue(e), true, IsBound(e.has_range), e.has_range(), false) ),
range := self >> let(ranges := List(self.args, e -> Cond(IsValue(e), e, IsVar(e), V(e.range-1), e.range())), Product(ranges))
));
Class(pow, AutoFoldExp, rec(
ev := self >> self.args[1].ev() ^ self.args[2].ev(),
computeType := self >> UnifyPair(self.args[1].t, self.args[2].t)
));
# max(...) is derived from min(...) by overloading _ev(<vars list>) method
Class(min, AutoFoldExp, rec(
ev := self >> self._ev(self.args).ev(),
computeType := self >> UnifyTypes(List(self.args, x->x.t)),
cfold := meth(self)
local m, vals, exps, args, i, a, op;
op := ObjId(self);
m := Set(self.args);
if Length(m)=1 then
return m[1];
else
i := 1; vals := []; exps := [];
while i<=Length(m) do
a := m[i];
if a _is Value then
Add(vals, a);
elif a _is op then
Append(m, a.args);
else
Add(exps, a);
fi;
i := i+1;
od;
args := When(vals<>[], [self._ev(vals)], []) :: exps;
if args = self.args then
return self;
else
return ApplyFunc(op, args);
fi;
fi;
end,
_ev := (self, vals) >> self.t.value(FoldL1(vals, (a,b) -> _ListElmOp(a, b, Min2))),
));
Class(max, min, rec(
_ev := (self, vals) >> self.t.value(FoldL1(vals, (a,b) -> _ListElmOp(a, b, Max2))),
));
#F average(<a>, <b>)
Class(average, AutoFoldExp, rec(
ev := self >> _ListElmOp(self.t.sum(self.args[1].ev(), self.args[2].ev()), 2, QuoInt),
computeType := self >> UnifyTypes(List(self.args, x->x.t)),
));
Class(re, AutoFoldExp, rec(
ev := self >> let(
t := InferType(self.args[1]),
v := self.args[1].ev(),
Cond(IsVecT(t), List(v, e -> ReComplex(Complex(e.ev()))),
ReComplex(Complex(v)))
),
computeType := self >> self.args[1].t.realType()
));
Class(im, AutoFoldExp, rec(
ev := self >> let(
t := InferType(self.args[1]),
v := self.args[1].ev(),
Cond(IsVecT(t), List(v, e -> ImComplex(Complex(e.ev()))),
ImComplex(Complex(v)))
),
computeType := self >> self.args[1].t.realType()
));
Class(conj, AutoFoldExp, rec(
ev := self >> let(a := self.args[1].ev(),
When(IsCyc(a), Global.Conjugate(a),
ReComplex(a)-Cplx(0,1)*ImComplex(a))),
computeType := self >> self.args[1].t
));
#F ----------------------------------------------------------------------------------------------
#F Expressions: Division
#F
#F fdiv(<a>, <b>) - divides two integers or two reals; the result is always TReal.
#F This is different from idiv and div.
#F
#F fdiv(TInt, TInt) == TReal
#F idiv(TInt, TInt) == TInt (rounding can happen)
#F ddiv(TInt, TInt) == TInt (rounding can happen, but unlike idiv add(ddiv(a,b),ddiv(c,b), ...) => ddiv(add(a,c, ...), b) allowed)
#F div(TInt, TInt) == TInt (arguments are expected to be divisible)
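#F
#F Illustrative examples: fdiv(7, 2) evaluates to 7/2 as a TReal,
#F idiv(7, 2) evaluates to 3, and div(6, 2) evaluates to 3
#F (div assumes exact divisibility of its integer arguments).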
#F
#F fdiv(a, b). Floating-point division
#F fdiv(TInt, TInt) = TReal.
Class(fdiv, AutoFoldExp, rec(
ev := self >> (self.args[1].ev() / self.args[2].ev()),
computeType := self >> TReal));
Declare(idiv);
_handle_idiv_mul := function(div_obj)
local factors, d, gcd, f, m, den, ranges;
m := div_obj.args[1];
den := div_obj.args[2];
if m.has_range() and m.range() < den then
return V(0);
fi;
factors := [];
d := den;
for f in m.args do
if IsValue(f) then
gcd := Gcd(f.ev(), d.ev());
Add(factors, f/gcd);
d := d/gcd;
else Add(factors, f);
fi;
od;
if d = 1 then return ApplyFunc(mul, factors);
else return div_obj;
fi;
end;
_handle_idiv_add := function(div_obj)
local values, addends, a, den, ranges;
a := div_obj.args[1];
den := div_obj.args[2];
if a.has_range() and a.range() < den then
return V(0);
fi;
values := Filtered(a.args, v -> IsValue(v));
if ForAny(values, v -> (v.ev() mod den.ev()) <> 0) then
return div_obj;
fi;
addends := List(a.args, v -> idiv(v, den));
if ForAll(addends, v -> ObjId(v) <> idiv) then return ApplyFunc(add, addends);
else return div_obj;
fi;
end;
#F idiv(a, b). Integer division with rounding.
#F idiv(a, b) = floor(fdiv(a, b)) for non-negative arguments
Class(idiv, AutoFoldExp, rec(
ev := self >> _ListElmOp(self.args[1], self.args[2], QuoInt),
cfold := self >> let(a := self.args[1], b := self.args[2],
Cond(a=a.t.zero(), self.t.zero(),
a=b, self.t.one(),
b=b.t.one() and a.t = self.t, a,
IsValue(a) and IsValue(b), self.t.value(self.ev()),
#Dani: Simplifying expr
ObjId(a) = mul and IsValue(b), _handle_idiv_mul(self),
ObjId(a) = add and IsValue(b), _handle_idiv_add(self),
IsVar(a) and IsValue(b) and IsInt(a.range), When((a.range-1)<b.ev(), V(0), self),
self)),
computeType := self >> let( t := UnifyTypes(List(self.args, e -> e.t)),
Checked(IsOrdT(t.base_t()), t) ),
has_range := self >> ForAll(self.args, e -> Cond(IsValue(e), true, IsBound(e.has_range), e.has_range(), false) ),
range := self >> let(a := self.args[1], a_range := Cond(IsValue(a), a.ev(), IsVar(a), a.range-1, a.range().ev()), V(QuoInt(a_range, self.args[2].ev())) )
));
#F idivmod(i, n, d) = imod( idiv(i, d), n ).
#F Assume N-dim tensor dimension where d is the stride of dimension D and n*d of dimension D+1.
#F idivmod isolates the index i_D from the linearized i = .. + i_{D+1}*n*d + i_D*d + ...
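#F
#F Example: with d=4, n=3 and linearized i = 2*(3*4) + 1*4 + 3 = 31,
#F idivmod(31, 3, 4) = (31 div 4) mod 3 = 7 mod 3 = 1, recovering i_D = 1.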
Class(idivmod, AutoFoldExp, rec(
ev := self >> idiv( self.args[1].ev(), self.args[3].ev() ) mod self.args[2].ev() ,
computeType := self >> TInt));
idiv_ceil := (a, b) -> idiv(a+b-1, b);
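# e.g., idiv_ceil(7, 2) = idiv(7+2-1, 2) = idiv(8, 2) = 4, the ceiling of 7/2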
#F ddiv(a, b). Integer division with rounding. Same as idiv but
#F add(ddiv(a,b),ddiv(c,b), ...) => ddiv(add(a,c, ...), b) allowed
Class(ddiv, idiv);
#F div(a, b). Exact integer (no rounding) or floating-point division
#F If <a> and <b> are integers, they are expected to be divisible.
#F If both are reals, then div(a, b) = fdiv(a, b)
#F
Class(div, AutoFoldExp, rec(
ev := self >> self.args[1].ev() / self.args[2].ev(),
cfold := self >> let(a := self.args[1], b := self.args[2],
Cond(a=0, self.t.zero(), # what if b==0?
a=b, self.t.one(),
b=1 and a.t = self.t, a,
IsValue(a) and IsValue(b), self.t.value(a.v / b.v),
self)),
computeType := self >> UnifyTypes(List(self.args, x->x.t))
));
# pdiv(<a>, <b>) - "param div", a division that is propagated to params
# when it is known that the params are divisible
Class(pdiv, div);
# In Spiral (unlike C) mod from negative number is a positive number: -5 mod 3 = 1
Class(imod, AutoFoldExp, rec(
ev := self >> self.args[1].ev() mod self.args[2].ev(),
cfold := self >> let(a := self.args[1], b := self.args[2],
Cond(a=0, a,
b=1, self.t.zero(),
IsValue(a) and IsValue(b), self.t.value(a.v mod b.v),
ObjId(a)=ObjId(self) and a.args[2] = b, a,
IsBound(a.has_range) and a.has_range() and IsValue(b), let(r := When(IsVar(a), V(a.range-1), a.range()), When(r < b, a, self)),
self)),
computeType := self >> When(IsPtrT(self.args[1].t), TInt, UnifyTypes([self.args[1].t, self.args[2].t]))
));
Class(floor, AutoFoldExp, rec(
ev := self >> let(f := self.args[1].ev(), Cond(
IsDouble(f), d_floor(f),
IsRat(f), spiral.approx.FloorRat(f),
        Error("Don't know how to take floor of <f>"))),
computeType := self >> TInt));
Class(ceil, AutoFoldExp, rec(
ev := self >> let(f := self.args[1].ev(), Cond(
IsDouble(f), d_ceil(f),
IsRat(f), spiral.approx.CeilingRat(f),
        Error("Don't know how to take ceiling of <f>"))),
computeType := self >> TInt));
#F ----------------------------------------------------------------------------------------------
#F Expressions: Various functions
#F
V_true := V(true);
V_false := V(false);
Class(null, Exp, rec(
computeType := self >> TPtr(TVoid)
));
# powmod(<phi>, <g>, <exp>, <N>) = phi * g^exp mod N
Class(powmod, AutoFoldExp, rec(
    ev := self >> self.args[1].ev() * PowerMod(self.args[2].ev(), self.args[3].ev(), self.args[4].ev())
mod self.args[4].ev(),
computeType := self >> TInt));
# ilogmod(<n>, <g>, <N>) -- solution <exp> in powmod(1, <g>, <exp>, <N>) = <n> [g^exp mod N = n]
Class(ilogmod, AutoFoldExp, rec(
ev := self >> LogMod(self.args[1].ev(), self.args[2].ev(), self.args[3].ev()),
computeType := self >> TInt));
#F abs(<a>) -- absolute value
Class(abs, AutoFoldExp, rec(
ev := self >> _ListElmOp(self.args[1], self.args[1], (a,b) -> SignInt(a)*b),
computeType := self >> self.args[1].t
));
#F absdiff(<a>,<b>) -- absolute difference |<a>-<b>|
Class(absdiff, AutoFoldExp, rec(
ev := self >> sub(max(self.args[1], self.args[2]), min(self.args[1], self.args[2])).ev(),
computeType := self >> UnifyPair(self.args[1].t, self.args[2].t)
));
#F absdiff2(<a>,<b>) -- absolute difference between a and b where 0<=b && (a==0 || a==2^n-1 && b<=a)
#F this operation can be implemented as a xor b for integer a and b
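#F e.g., a = 7 = 2^3-1, b = 5: absdiff2(7, 5) = |7-5| = 2, and indeed 7 xor 5 = 2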
Class(absdiff2, absdiff);
#F sign(<a>) -- returns 1 if a is positive, -1 if negative, 0 if a=0
Class(sign, AutoFoldExp, rec(
ev := self >> let(a:=self.args[1].ev(),
Cond(a>0, 1, a<0, -1, 0)),
computeType := self >> self.args[1].t
));
#F fpmul(fracbits, a, b) -- fixed point multiplication, computes (a*b) >> fracbits
Class(fpmul, AutoFoldExp, rec(
ev := self >> (self.args[2].ev() * self.args[3].ev()) / 2^self.args[1].ev(),
computeType := self >> UnifyPair(self.args[1].t, self.args[2].t)
));
#F ----------------------------------------------------------------------------------------------
#F Expressions: Boolean Arithmetic and Conditions
#F
Class(logic_and, AutoFoldExp, rec(
ev := self >> ForAll(self.args, x->x.ev()),
cfold := meth(self)
local a;
a := Filtered(self.args, x->x<>true);
if ForAny(a, x->x=false) then return V_false;
elif a=[] then return V_true;
elif Length(a)=1 then return a[1];
else self.args := a;
return self;
fi;
end,
computeType := self >> TBool
));
Class(logic_or, AutoFoldExp, rec(
ev := self >> ForAny(self.args, x->x.ev()),
cfold := meth(self)
local a;
a := Filtered(self.args, x->x<>false);
if ForAny(a, x->x=true) then return V_true;
elif a=[] then return V_false;
elif Length(a)=1 then return a[1];
else self.args := a;
return self;
fi;
end,
computeType := self >> TBool
));
Class(logic_neg, AutoFoldExp, rec(
ev := self >> When(self.args[1].ev(), false, true),
computeType := self >> TBool));
# Mixin for comparison operations.
# Subclasses should define only the _ev_op(a, b) method, which implements the comparison.
_logic_mixin := rec(
computeType := self >> let(
types := List(self.args, e->e.t),
        # There is no TPtr unification defined at the moment, but it is legal
        # to compare pointers (at least with TPtr(TVoid)), so as a workaround we
        # fall back to TBool whenever any argument is a pointer
t := Cond( ForAny(types, IsPtrT), TBool, UnifyTypes(types)),
When( IsVecT(t),
TVect(TBool, t.size),
# else
TBool)),
ev := self >> let(
a := self.args,
l := Length(a),
Checked( l>1,
ApplyFunc(logic_and, List([2..l],
i -> _ListElmOp(a[i-1], a[i], self._ev_op)
)).ev()
)
),
);
#F eq(a, b, c, ...) symbolic representation of a = b = c = ...
Class(eq, _logic_mixin, AutoFoldExp, rec( _ev_op := (a, b) -> Checked(not AnySyms(a,b), a=b )));
#F neq(a, b) symbolic representation of a<>b
Class(neq, _logic_mixin, AutoFoldExp, rec( _ev_op := (a, b) -> Checked(not AnySyms(a,b), a<>b )));
#F leq(a, b, c, ...) symbolic representation of a <= b <= c <= ...
Class(leq, _logic_mixin, AutoFoldExp, rec( _ev_op := (a, b) -> Checked(not AnySyms(a,b), a<=b )));
#F lt(a, b, c, ...) symbolic representation of a < b < c < ...
Class(lt, _logic_mixin, AutoFoldExp, rec( _ev_op := (a, b) -> Checked(not AnySyms(a,b), a<b )));
#F geq(a, b, c, ...) symbolic representation of a >= b >= c >= ...
Class(geq, _logic_mixin, AutoFoldExp, rec( _ev_op := (a, b) -> Checked(not AnySyms(a,b), a>=b )));
#F gt(a, b, c, ...) symbolic representation of a > b > c > ...
Class(gt, _logic_mixin, AutoFoldExp, rec( _ev_op := (a, b) -> Checked(not AnySyms(a,b), a>b )));
_logic_mask_mixin := rec(
computeType := self >> let(
t := UnifyTypes(List(self.args, e->e.t)),
b := t.base_t(),
nb := Cond(
ObjId(b) in [T_Real, T_Int, T_UInt], T_Int(b.params[1]),
b in [TReal, TInt, TUInt], TInt,
Error("unexpected data type")
),
Cond( IsVecT(t), TVect(nb, t.size), nb)
),
ev := self >> let( b := Inherited(), _ListElmOp( b, b, (a, b) -> Checked(not IsSymbolic(a), When(a=true, -1, 0))))
);
#F mask_gt(a, b) symbolic representation of a > b where result is an integer mask: -1 (true) or 0 (false)
Class(mask_gt, _logic_mask_mixin, gt);
#F mask_eq(a, b) symbolic representation of a = b where result is an integer mask: -1 (true) or 0 (false)
Class(mask_eq, _logic_mask_mixin, eq);
#F mask_lt(a, b) symbolic representation of a < b where result is an integer mask: -1 (true) or 0 (false)
Class(mask_lt, _logic_mask_mixin, lt);
Class(cond, Exp, rec(
eval := meth(self)
local i, cc;
i := 0;
for i in [1..QuoInt(Length(self.args),2)] do
cc := self.args[2*i-1].eval();
if not IsValue(cc) then return self; # unevaluatable cond
elif ((IsBool(cc.v) and cc.v) or (IsInt(cc.v) and cc.v<>0)) then # true clause found
return self.args[2*i].eval();
fi;
od;
if 2*i+1 > Length(self.args) then
            # in the case of nested conds, this particular cond might be unreachable,
            # and hence invalid; returning errExp() here instead of raising an error
            # would avoid crashing on such conds:
# return errExp(self.t);
return Error("Else clause missing in 'cond' object <self>");
else
return self.args[2*i+1].eval();
fi;
end,
computeType := self >> UnifyTypes(List([1..QuoInt(Length(self.args),2)], i->self.args[2*i].t)),
ev := self >> let(ev:=self.eval(), When(IsValue(ev), ev.v, ev))
));
#F _map_cond(<cexp>, <pred_func>, <exp_func>)
#F Maps cond(...) expression <cexp> by applying <pred_func> to predicates and <exp_func> to expressions.
_map_cond := (cexp, pred_func, exp_func) -> ApplyFunc(cond, List([1..Length(cexp.args)], i ->
Cond( i mod 2 = 1 and i<>Length(cexp.args), pred_func, exp_func)(cexp.args[i])));
#F maybe() -- "magic" boolean function, that satisfies logic_not(maybe()) = maybe()
#F
#F maybe() behaves as 'true' inside 'and'/'or' operators,
#F but also satisfies the uncertainty rule logic_not(maybe()) = maybe()
#F
Class(maybe, Exp, rec(
computeType := self >> TBool
));
#F ----------------------------------------------------------------------------------------------
#F Expressions: Bit manipulation
#F
Class(bin_parity, Exp, rec(
ev := self >> BinParity(self.args[1].ev()),
computeType := self >> self.args[1].t
));
Class(bin_and, Exp, rec(
ev := self >> _ListElmOp(self.args[1], self.args[2], BinAnd),
computeType := self >> UnifyPair(self.args[1].t, self.args[2].t)
));
Class(bin_or, Exp, rec(
ev := self >> _ListElmOp(self.args[1], self.args[2], BinOr),
computeType := self >> UnifyPair(self.args[1].t, self.args[2].t)
));
Class(bin_andnot, Exp, rec(
ev := self >> BinAnd(BinNot(self.args[1].ev()), self.args[2].ev()),
computeType := self >> UnifyPair(self.args[1].t, self.args[2].t)
));
Class(bin_xor, Exp, rec(
ev := self >> _ListElmOp(self.args[1], self.args[2], BinXor),
computeType := self >> UnifyPair(self.args[1].t, self.args[2].t)
));
Class(adrgen, Exp, rec(
ev := self >> (2^self.args[1].ev()-1) - (2^self.args[2].ev()-1),
computeType := self >> UnifyPair(self.args[1].t, self.args[2].t)
));
Class(concat, Exp, rec(
ev := self >> (self.args[1].ev() * (2^self.args[3].ev()) + self.args[2].ev()),
computeType := self >> UnifyPair(self.args[1].t, self.args[2].t)
));
Class(truncate, Exp, rec(
ev := self >> BinAnd(self.args[1].ev(), 2^self.args[2].ev()-1),
computeType := self >> self.args[1].t
));
Class(bin_shr, Exp, rec(
ev := self >> let(
a:=self.args[1].ev(), b:=self.args[2].ev(), bits:=When(IsBound(self.args[3]), self.args[3].ev(), false),
Cond(bits=false, When( IsList(a), ShiftList(a, -b, 0), Int(a * 2^(-b))), Int(a * 2^(-b)) mod 2^bits)),
computeType := self >> self.args[1].t
));
Class(bin_shl, Exp, rec(
ev := self >> let(
a:=self.args[1].ev(), b:=self.args[2].ev(), bits:=When(IsBound(self.args[3]), self.args[3].ev(), false),
Cond( bits=false, When( IsList(a), ShiftList(a, b, 0), Int(a * 2^b) ),
Int(a * 2^b) mod 2^bits)),
computeType := self >> self.args[1].t
));
Class(arith_shr, Exp, rec(
ev := self >> let(
a:=self.args[1].ev(), b:=self.args[2].ev(), bits:=When(IsBound(self.args[3]), self.args[3].ev(), false),
Cond(bits=false, When( IsList(a), ShiftList(a, -b, Last(a)), Int(a * 2^(-b))), Int(a * 2^(-b)) mod 2^bits)),
computeType := self >> self.args[1].t
));
Class(arith_shl, Exp, rec(
ev := self >> let(
a:=self.args[1].ev(), b:=self.args[2].ev(), bits:=When(IsBound(self.args[3]), self.args[3].ev(), false),
Cond(bits=false, When( IsList(a), ShiftList(a, b, 0), Int(a * 2^b)), Int(a * 2^b) mod 2^bits)),
computeType := self >> self.args[1].t
));
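#F rCyclicShift(<a>, <shift>, <bits>) - right cyclic rotation of the <bits>-bit value <a> by <shift>
#F e.g. (illustrative), rCyclicShift(11, 1, 4): 1011b rotated right by 1 is 1101b = 13
#F (BinAnd(11, 1)*2^3 + Int(11/2) = 8 + 5 = 13)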
Class(rCyclicShift, Exp, rec(
ev := self >> let(a := self.args[1].ev(), shift := self.args[2].ev(),
c := 2^shift, bits := self.args[3].ev(),
BinAnd(a, c-1) * 2^(bits-shift) + Int(a / c)),
computeType := self >> self.args[1].t
));
Class(bit_sel, Exp, rec(
ev := self >> let(a := self.args[1].ev(), bit := self.args[2].ev(),
bin_and(arith_shr(a, bit), 1).ev()
)
));
Class(xor, Exp, rec(
ev := self >> Xor (self.args, e -> e.ev()),
computeType := self >> UnifyTypes(List(self.args, x->x.t))
));
#F ----------------------------------------------------------------------------------------------
#F Expressions: Irrational functions
#F
Class(omega, AutoFoldExp, rec(
ev := self >> E(self.args[1].ev()) ^ self.args[2].ev(),
computeType := self >> TComplex));
Class(exp, AutoFoldRealExp, rec(
ev := self >> d_exp(self.args[1].ev())));
Class(log, AutoFoldRealExp, rec(
ev := self >> let( l := d_log(self.args[1].ev()),
Cond(Length(self.args)=2, l/d_log(self.args[2].ev()), l))));
Class(cospi, AutoFoldExp, rec(
ev := self >> CosPi(self.args[1].ev()),
computeType := self >> TReal));
Class(sinpi, AutoFoldExp, rec(
ev := self >> SinPi(self.args[1].ev()),
computeType := self >> TReal));
Class(omegapi, AutoFoldExp, rec(
ev := self >> ExpIPi(self.args[1].ev()),
computeType := self >> TComplex));
Class(sqrt, AutoFoldRealExp, rec(
ev := self >> Sqrt(self.args[1].ev())));
Class(rsqrt, AutoFoldRealExp, rec(
ev := self >> 1/Sqrt(self.args[1].ev())));
#F ----------------------------------------------------------------------------------------------
#F Expressions: Specials and Wrappers
#F
Class(PlaceholderExp, Exp, rec(
ev := self >> self.args[1].ev(),
computeType := self >> self.args[1].t
));
Class(no_mod, PlaceholderExp);
Class(small_mod, imod);
Class(accu, PlaceholderExp);
Class(depends, PlaceholderExp);
Class(depends_memory, depends);
Class(virtual_var, Exp, rec(
__call__ := (self, vars, idx) >>
let(idx2 := toExpArg(idx),
When(IsValue(idx2),
Cond(idx2.v >= Length(vars), errExp(self.t), vars[idx2.v+1]),
WithBases(self,
rec(args := [vars, idx2],
operations := ExpOps,
t := Last(vars).t)))),
computeType := self >> Last(self.args[1]).t,
ev := self >> let(
vars := self.args[1],
idx := self.args[2].eval(),
Cond(not IsValue(idx),
self,
idx.v < 0,
errExp(self.t),
IsList(vars),
Cond(idx.v >= Length(vars), errExp(self.t), vars[idx.v+1]))),
));
#F castizx(<exp>) - cast signed <exp> to a twice-larger data type with zero extension
Class(castizx, Exp, rec(
__call__ := (self, expr) >>
WithBases(self, rec(
args := [toExpArg(expr)],
operations := ExpOps,
)).setType(),
computeType := self >> self.args[1].t.double().toSigned(),
ev := self >> self.args[1].t.toUnsigned().value(self.args[1].ev()).ev(),
));
#F castuzx(<exp>) - cast unsigned <exp> to a twice-larger data type with zero extension
Class(castuzx, castizx, rec(
computeType := self >> self.args[1].t.double().toUnsigned(),
));
# cmemo(<expr>, <target>, <prefix>)
Class(cmemo, Exp, rec(
__call__ := (self, prefix, target, exp) >>
Cond(IsValue(exp), exp.v,
#IsVar(exp), exp,
WithBases(self, rec(
operations := ExpOps,
prefix := prefix,
target := target,
args := [ var.fresh_t(prefix, TInt) ],
mapping := toExpArg(exp).eval() ))),
eval := self >> self.mapping.eval()
));
ExprFuncs := rec(
T_SUM := add,
T_DIFF := sub,
T_PROD := mul,
T_QUO := div,
T_MOD := imod,
T_POW := pow,
nth := nth,
Int := floor,
QuoInt := idiv,
LogMod := ilogmod,
CosPi := cospi,
SinPi := sinpi,
Sqrt := sqrt,
ReComplex := re,
ImComplex := im,
Cond := cond
);
#F noneExp(<t>) - represents an uninitialized value of type <t>
#F
#F This handles the situation with Scat * Diag * Scat(f):
#F Scat(f) should never write explicit 0's even though Diag scales them
#F
Class(noneExp, Exp, rec(
computeType := self >> self.args[1]
));
#F errExp(<t>) - represents an invalid result of type <t>
#F
#F The reason this is used is to support the following strange rewrite:
#F 0 * nth(T, i) -> 0, when i is out of bounds
#F For example if i<0, normally the above might break, but using errExp:
#F 0 * nth(T, -1) -> 0 * errExp(TReal) -> 0
#F
Class(errExp, Exp, rec(
computeType := self >> self.args[1]
));
#F funcExp(<i>) -- used to "hack" affine transformations out of Gath/Scat
#F
#F See Doc(Gath) for an explanation on how this works.
#F <i> must be an integer expression.
#F
#F Currently, this has the following semantics
#F
#F nth(X, i) == X[i]
#F nth(X, funcExp(i)) == i
#F
#F The proper way of doing this would instead be (using homogeneous coordinates, X[len(X)] = 1)
#F nth(X, funcExp(i)) -> i * nth(X, len(X)) = i * X[len(X)] = i
#F
#F See http://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations
#F
Class(funcExp, Exp, rec(
eval := self >> funcExp(self.args[1].eval()),
computeType := self >> self.args[1].t,
can_fold := False,
));
#F ----------------------------------------------------------------------------------------------
#F GAP Operations Records
#F
Class(ExpOps, PrintOps, rec(
\+ := add,
\- := sub,
\* := (e1,e2) -> When(e1=-1, neg(e2), mul(e1,e2)),
\/ := div,
\^ := pow,
\mod := imod,
\= := (e1,e2) -> Cond(
ObjId(e1) <> ObjId(e2), false,
e1.rChildren() = e2.rChildren()),
\< := (e1,e2) -> Cond(
ObjId(e1) <> ObjId(e2), ObjId(e1) < ObjId(e2),
e1.rChildren() < e2.rChildren())
));
Class(VarOps, ExpOps, rec(
\= := (v1,v2) -> Same(v1,v2),
\< := (v1,v2) -> Cond(not (IsVar(v1) and IsVar(v2)), ObjId(v1) < ObjId(v2),
BagAddr(v1) < BagAddr(v2))
));
Class(NthOps, ExpOps, rec(
\= := (v1,v2) -> IsRec(v1) and IsRec(v2) and Same(ObjId(v1), ObjId(v2))
and v1.loc=v2.loc and v1.idx = v2.idx,
\< := (v1,v2) -> Cond(
not Same(ObjId(v1), ObjId(v2)), ObjId(v1) < ObjId(v2),
v1.loc=v2.loc, v1.idx < v2.idx,
v1.loc < v2.loc)
));
_val := x->Cond(IsValue(x) or IsSymbolic(x), x, InferType(x).value(x));
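# Overload infix arithmetic on Values: when both operands are (convertible to)
# Values, the result is folded eagerly through the unified type, e.g.
# (illustrative) V(2) + V(3) yields V(5); mixing in a symbolic operand
# builds the corresponding symbolic expression instead, e.g. V(2) + x -> add(2, x).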
ValueOps.\+ := (aa,bb) -> let(a:=_val(aa), b:=_val(bb), Cond(IsValue(a) and IsValue(b),
let(t:=UnifyPair(a.t, b.t), t.value(t.sum(a.v, b.v))), add(a, b)));
ValueOps.\- := (aa,bb) -> let(a:=_val(aa), b:=_val(bb), Cond(IsValue(a) and IsValue(b),
let(t:=UnifyPair(a.t, b.t), t.value(t.sum(a.v, -b.v))), sub(a, b)));
ValueOps.\* := (aa,bb) -> let(a:=_val(aa), b:=_val(bb), Cond(IsValue(a) and IsValue(b),
let(t:=UnifyPair(a.t, b.t), t.value(t.product(a.v, b.v))), mul(a, b)));
ValueOps.\/ := div;
ValueOps.\^ := pow;
ValueOps.\mod := imod;
#----------------------------------------------------------------------------------------------
# Command : high level instructions
#
# skip
# assign
# chain
# decl
# data
# loop
#----------------------------------------------------------------------------------------------
CmdOps := rec(Print := s -> s.print(0,3));
Class(Command, AttrMixin, rec(
isCommand := true,
print := (self,i,si) >> Print(self.__name__),
countedArithCost := (self, countrec) >> countrec.arithcost(self.countOps(countrec)),
countOps := meth(self, countrec)
local cmds, ops, i;
ops := List([1..Length(countrec.ops)], i->0);
if self.__name__ = "func" and self.id = "init" then return(ops); fi;
if IsBound(self.cmds) then cmds := self.cmds;
else if IsBound(self.cmd) then cmds := [self.cmd]; else return(0); fi;
fi;
for i in cmds do
if IsBound(i.countOps) then
ops := ops + i.countOps(countrec);
else
Error(i.__name__, "doesn't have countOps");
fi;
od;
return(ops);
end,
free := self >> Set(ConcatList(self.rChildren(), FreeVars)),
from_rChildren := (self, rch) >> ApplyFunc(ObjId(self), rch).takeA(self.a)
));
Class(ExpCommand, Command, rec(
isExpCommand := true,
__call__ := arg >> let(self := arg[1], args := Drop(arg,1),
WithBases(self, rec(
operations := CmdOps,
args := List(args, toExpArg)))),
rChildren := self >> self.args,
rSetChild := meth(self, i, newC) self.args[i] := newC; return newC; end,
print := (self,i,si) >>
Print(self.__name__, "(", PrintCS(self.args), ")")
));
IsCommand := x -> IsRec(x) and IsBound(x.isCommand) and x.isCommand;
IsExpCommand := x -> IsRec(x) and IsBound(x.isExpCommand) and x.isExpCommand;
#F throw(<arg>) - symbolic representation of exception throw
#F
Class(throw, ExpCommand);
#F call(<func>, <arg1>, <arg2>, ...) - symbolic representation of an external function call
#F
Class(call, ExpCommand);
Class(skip, Command, rec(
__call__ := self >> WithBases(self, rec(operations:=CmdOps)),
print := (self,i,si) >> Print(self.__name__, "()"),
rChildren := self >> [],
free := self >> [],
op_in := self >> Set([])
));
Class(break, skip);
Class(const, Command, rec(
__call__ := (self, value) >> WithBases(self, rec(operations:=CmdOps, val:=value)),
print := (self,i,si) >> Print(self.__name__, "(", self.val ,")"),
rChildren := self >> [],
free := self >> []
));
Class(dma_barrier, skip, rec(
));
Class(dist_barrier, skip, rec(
));
Class(noUnparse, Command, rec(
__call__ := (self, string) >> WithBases(self, rec(operations:=CmdOps, str:=string)),
print := (self,i,si) >> Print(self.__name__, "(\"", self.str, "\")"),
rChildren := self >> [],
rSetChild := (self, n, c) >> Error("no children")
));
Class(assign, Command, rec(
isAssign := true,
__call__ := (self, loc, exp) >> WithBases(self,
rec(operations := CmdOps,
loc := toAssignTarget(loc),
exp := toExpArg(exp))),
rChildren := self >> [self.loc, self.exp],
rSetChild := rSetChildFields("loc", "exp"),
unroll := self >> self,
#Must do a collect so that nested assigns work
countOps := (self, countrec) >> List([1..Length(countrec.ops)],
i->Length(Collect(self, @(1, countrec.ops[i], e->IsRec(e) and
((IsBound(e.t) and (ObjId(e.t)=TVect) or (IsBound(e.countAsVectOp) and e.countAsVectOp())))))) ),
print := (self,i,si) >> let(name := Cond(IsBound(self.isCompute) and self.isCompute,
gap.colors.DarkYellow(self.__name__),
IsBound(self.isLoad) and self.isLoad,
gap.colors.DarkRed(self.__name__),
IsBound(self.isStore) and self.isStore,
gap.colors.DarkGreen(self.__name__),
self.__name__),
Print(name, "(", self.loc, ", ", self.exp, ")"))
));
Class(regassign, assign, rec(
isAssign := true,
__call__ := (self, loc, exp) >> WithBases(self,
rec(operations := CmdOps,
loc := toAssignTarget(loc),
exp := toExpArg(exp))),
rChildren := self >> [self.loc, self.exp],
rSetChild := rSetChildFields("loc", "exp"),
unroll := self >> self,
#Must do a collect so that nested assigns work
countOps := (self, countrec) >> List([1..Length(countrec.ops)],
i->Length(Collect(self, @(1, countrec.ops[i], e->IsRec(e) and
((IsBound(e.t) and (ObjId(e.t)=TVect) or (IsBound(e.countAsVectOp) and e.countAsVectOp())))))) ),
print := (self,i,is) >> let(name := Cond(IsBound(self.isCompute) and self.isCompute,
gap.colors.DarkYellow(self.__name__),
IsBound(self.isLoad) and self.isLoad,
gap.colors.DarkRed(self.__name__),
IsBound(self.isStore) and self.isStore,
gap.colors.DarkGreen(self.__name__),
self.__name__),
Print(name, "(", self.loc, ", ", self.exp, ")"))
));
IsAssign := x -> IsRec(x) and IsBound(x.isAssign) and x.isAssign;
Class(assign_acc, assign);
# syntax like a printf, prints out something in the output code.
Class(PRINT, Command, rec(
__call__ := (arg) >> let(
self := arg[1],
fmt := arg[2],
vars := Drop(arg, 2),
Checked(
IsString(fmt),
IsList(vars),
WithBases(self, rec(
operations := CmdOps,
fmt := fmt,
vars := vars
))
)
),
rChildren := self >> Concat([self.fmt], self.vars),
rSetChild := meth(self, n, val)
if n = 1 then
self.fmt := val;
else
self.vars[n-1] := val;
fi;
end,
print := (self, i, si) >> Print(self.__name__, "(\"", self.fmt, When(Length(self.vars) <> 0, "\", ", "\""), PrintCS(self.vars), ")")
));
# Class(PRINT, Exp);
# inserts a comment into the output code
Class(comment, Command, rec(
isComment := true,
__call__ := (self, exp) >> Checked(
IsString(exp),
WithBases(self,
rec(operations := CmdOps,
exp := exp
)
)
),
rChildren := self >> [self.exp],
rSetChild := rSetChildFields("exp"),
print := (self, i, si) >> Print(self.__name__, "(\"", self.exp, "\")")
));
Class(quote, Command, rec(
isQuote := true,
__call__ := (self, cmd) >> Checked(
IsCommand(cmd),
WithBases(self, rec(
operations := CmdOps,
cmd := cmd
))
),
rChildren := self >> [self.cmd],
rSetChild := rSetChildFields("cmd"),
print := (self, i, si) >> Print(self.__name__, "(", self.cmd.print(i+si,si), ")")
));
Class(wrap, Command, rec(
rChildren := self >> [ self.cmd ],
rSetChild := rSetChildFields("cmd"),
__call__ := (self, cmd) >> WithBases(self,
rec(operations := CmdOps,
cmd := Checked(IsCommand(cmd), cmd))),
print := meth(self,i,si)
Print(self.__name__, "(\n");
Print(Blanks(i+si), self.cmd.print(i+si, si), "\n");
Print(Blanks(i), ")");
end
));
Class(multiwrap, Command, rec(
rChildren := self >> self.cmds,
rSetChild := meth(self, n, newChild) self.cmds[n] := newChild; end,
__call__ := meth(arg)
local self, cmds;
self := arg[1];
cmds := Flat(Drop(arg, 1));
return WithBases(self,
rec(operations := CmdOps,
cmds := Checked(ForAll(cmds, IsCommand), cmds)));
end,
printCmds := meth(self, i, si)
local c;
for c in Take(self.cmds, Length(self.cmds)-1) do
Print(Blanks(i));
c.print(i, si);
Print(",\n");
od;
Print(Blanks(i));
Last(self.cmds).print(i, si);
Print("\n");
end,
print := (self,i,si) >> When(Length(self.cmds)=0,
Print(self.__name__, "()"),
Print(self.__name__, "(\n", self.printCmds(i+si, si), Blanks(i), ")"))
));
IsChain := x -> IsRec(x) and IsBound(x.isChain) and x.isChain;
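# chain(...) - sequential composition of commands; flatten() splices nested
# chains and drops skips, e.g. (illustrative)
#   chain(chain(c1, c2), skip(), c3).flatten() -> chain(c1, c2, c3)
# (a nested chain marked with doNotFlatten is kept intact)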
Class(chain, multiwrap, rec(
isChain := true,
flatten := self >> let(cls := self.__bases__[1],
CopyFields(self, rec(cmds := ConcatList(self.cmds,
c -> Cond(IsChain(c) and not IsBound(c.doNotFlatten), c.cmds,
ObjId(c) = skip, [],
[c]))))),
__call__ := meth(arg)
local self, cmds;
[self, cmds] := [arg[1], Flat(Drop(arg, 1))];
return WithBases(self, rec(
operations := CmdOps,
cmds := Checked(ForAll(cmds, IsCommand), cmds)));
end
));
Class(unroll_cmd, multiwrap, rec(
flatten := self >> chain(self.cmds).flatten()
));
Class(kern, Command, rec(
__call__ := (self, bbnum, cmd) >> WithBases(self, rec(
bbnum := bbnum,
cmd := Checked(IsCommand(cmd), cmd),
operations := CmdOps
)),
rChildren := self >> [self.bbnum, self.cmd],
rSetChild := rSetChildFields("bbnum", "cmd"),
print := (self, i, si) >> Print(self.__name__, "(", self.bbnum, ", ", self.cmd.print(i+si,si), ")")
));
Class(unparseChain, multiwrap, rec(
#rChildren := self >> [],
#rSetChild := self >> Error("Not implemented"),
__call__ := meth(arg)
local self, cmds;
[self, cmds] := [arg[1], Flat(Drop(arg, 1))];
return WithBases(self, rec(
operations := CmdOps,
cmds := cmds));
end,
print := (self,i,si) >> Print(self.__name__)
));
Class(decl, Command, rec(
__call__ := meth(self, vars, cmd)
local tvars;
if IsList(vars) and Length(vars)=0 then return cmd; fi;
tvars := When(IsList(vars), Checked(ForAll(vars, IsLoc), vars),
Checked(IsLoc(vars), [vars]));
Sort(tvars, (a,b) -> a.id < b.id);
return WithBases(self,
rec(operations := CmdOps,
cmd := Checked(IsCommand(cmd), cmd),
vars := tvars));
end,
rChildren := self >> [self.vars, self.cmd],
rSetChild := rSetChildFields("vars", "cmd"),
print := (self, i, si) >> Print(self.__name__, "(", self.vars, ",\n",
Blanks(i+si),
self.cmd.print(i+si, si),
"\n", Blanks(i), ")"),
free := self >> Difference(self.cmd.free(), Set(self.vars)),
));
Class(data, Command, rec(
__call__ := (self, var, value, cmd) >> WithBases(self,
rec(operations := CmdOps,
cmd := Checked(IsCommand(cmd), cmd),
var := Checked(ObjId(var) in [code.var,code.param], var),
value := value)), #Checked(IsValue(value), value))),
rChildren := self >> [self.var, self.value, self.cmd],
rSetChild := rSetChildFields("var", "value", "cmd"),
free := self >> Difference(Union(FreeVars(self.cmd), FreeVars(self.value)), [self.var]),
print := (self, i, si) >> Print(self.__name__, "(", self.var, ", ",
self.value, ",\n", Blanks(i+si),
self.cmd.print(i+si, si),
"\n", Blanks(i), ")"),
));
#F rdepth_marker(<depth>, <cmd>) - Autolib's recursion depth marker,
#F BCRDepth turns into this marker, <depth> >= 1, one is the deepest (recursion) level.
#F
Class(rdepth_marker, Command, rec(
__call__ := (self, depth, cmd) >> WithBases(self, rec(
depth := depth,
cmd := Checked(IsCommand(cmd), cmd),
operations := CmdOps
)),
rChildren := self >> [self.depth, self.cmd],
rSetChild := rSetChildFields("depth", "cmd"),
print := (self, i, si) >> Print(self.__name__, "(", self.depth, ", ", self.cmd.print(i+si,si), ")")
));
Declare(SubstVars);
Class(asmvolatile, Command, rec(
__call__ := meth(self, asm)
return WithBases(self,
rec(operations := CmdOps,
asm := asm));
end,
rChildren := self >> [self.asm],
rSetChild := rSetChildFields("asm"),
print := (self, i, si) >> Print("asmvolatile(\n",self.asm,")\n"))
);
Class(loop_base, Command, rec(
isLoop := true,
countOps := (self, countrec) >> self.cmd.countOps(countrec) * Length(listRange(self.range)),
rChildren := self >> [self.var, self.range, self.cmd],
rSetChild := rSetChildFields("var", "range", "cmd"),
print := (self, i, si) >> Print(self.__name__, "(", self.var, ", ",
self.range, ",\n", Blanks(i+si),
self.cmd.print(i+si, si),
Print("\n", Blanks(i), ")")),
free := meth(self) local c;
c := self.cmd.free();
if IsExp(self.range) then c:=Set(Concat(c, self.range.free())); fi;
SubtractSet(c, Set([self.var]));
return c;
end
));
Declare(loopn);
FreshVars := function (code, map)
local v;
for v in Filtered(Difference(Collect(code, var), code.free()), e -> IsArrayT(e.t)) do
map.(v.id) := var.fresh_t(String(Filtered(v.id, c -> not c in "0123456789")), v.t);
od;
return SubstTopDownNR(code, @(1, var, x -> IsBound(map.(x.id))), e -> map.(e.id));
end;
Class(loop, loop_base, rec(
__call__ := meth(self, loopvar, range, cmd)
local result;
# Constraint(IsVar(loopvar)); YSV: could be a param
Constraint(IsCommand(cmd));
if IsSymbolic(range) then return loopn(loopvar, range, cmd); fi;
range := toRange(range);
if range = 1 then
return SubstBottomUp(Copy(cmd), @(1, var, e->e=loopvar), e->V(0));
elif range = 0 then
return skip();
else
loopvar.setRange(range);
range := listRange(range);
result := WithBases(self,
rec(operations := CmdOps, cmd := cmd, var := loopvar, range := range));
loopvar.isLoopIndex := true;
#loopvar.loop := result;
return result;
fi;
end,
unroll := self >>
chain( List(self.range,
index_value -> FreshVars(Copy(self.cmd),
tab((self.var.id) := V(index_value)))))
));
Class(multibuffer_loop, loop_base, rec(
__call__ := meth(self, loopvar, range, y, x, gathmem, twiddles, bufs, cmd, scatmem)
local result;
# Constraint(IsVar(loopvar)); YSV: could be a param
Constraint(IsCommand(cmd));
#if IsSymbolic(range) then return loopn(loopvar, range, cmd); fi;
range := toRange(range);
#if range = 1 then
# return SubstBottomUp(Copy(cmd), @(1, var, e->e=loopvar), e->V(0));
#elif range = 0 then
# return skip();
#else
loopvar.setRange(range);
range := listRange(range);
result := WithBases(self,
rec(operations := CmdOps,
gathmem := gathmem,
twiddles := twiddles,
bufs := bufs,
cmd := cmd,
y := y,
x := x,
scatmem := scatmem,
var := loopvar,
range := range));
loopvar.isLoopIndex := true;
#loopvar.loop := result;
return result;
#fi;
end,
rChildren := self >> [self.var, self.range, self.y, self.x, self.gathmem, self.twiddles, self.bufs, self.cmd, self.scatmem],
rSetChild := rSetChildFields("var", "range", "y", "x", "gathmem", "twiddles", "bufs", "cmd", "scatmem"),
unroll := self >>
chain( List(self.range,
index_value -> SubstVars(Copy(self.cmd),
tab((self.var.id) := V(index_value)))))
));
Class(mem_loop, multibuffer_loop);
Class(loop_sw, loop, rec(
unroll := self >>
chain( List(self.range,
index_value -> SubstVars(Copy(self.cmd),
tab((self.var.id) := V(index_value)))))
));
Class(loopn, loop_base, rec(
__call__ := meth(self, loopvar, range, cmd)
local result;
# Constraint(IsVar(loopvar)); # YSV: could be a param
Constraint(IsCommand(cmd));
range := toExpArg(range);
if IsValue(range) then return loop(loopvar, range.v, cmd);
else
loopvar.setRange(range);
result := WithBases(self,
rec(operations := CmdOps, cmd := cmd, var := loopvar, range := range));
loopvar.isLoopIndex := true;
return result;
fi;
end,
unroll := self >> let(res:=loop(self.var, self.range.ev(), self.cmd),
When(ObjId(res)=loop, res.unroll(), res)), # if the loop has a single iteration, loop() returns just the body
));
Class(doloop, loop_base, rec(
__call__ := (self, loopvar, range, cmd) >> WithBases(self,
rec(operations := CmdOps, cmd := cmd, var := loopvar, range := range))
));
# IF(<cond>, <then_cmd>, <else_cmd>) - symbolic representation of a conditional
#
Class(IF, Command, rec(
__call__ := (self, cond, then_cmd, else_cmd) >>
Cond( cond = true, Checked(IsCommand(then_cmd), then_cmd),
cond = false, Checked(IsCommand(else_cmd), else_cmd),
WithBases( self, rec(
operations := CmdOps,
then_cmd := Checked(IsCommand(then_cmd), then_cmd),
else_cmd := Checked(IsCommand(else_cmd), else_cmd),
cond := toExpArg(cond))) ),
rChildren := self >> [self.cond, self.then_cmd, self.else_cmd],
rSetChild := rSetChildFields("cond", "then_cmd", "else_cmd"),
free := self >> Union(self.cond.free(), self.then_cmd.free(), self.else_cmd.free()),
print := (self, i, si) >>
Print(self.__name__, "(", self.cond, ",\n",
Blanks(i+si), self.then_cmd.print(i+si, si), ",\n",
Blanks(i+si), self.else_cmd.print(i+si, si), "\n",
Blanks(i), ")")
));
Class(DOWHILE, Command, rec(
__call__ := (self, cond, then_cmd) >> WithBases(self,
rec(operations := CmdOps,
then_cmd := Checked(IsCommand(then_cmd), then_cmd),
cond := toExpArg(cond))),
rChildren := self >> [self.cond, self.then_cmd],
rSetChild := rSetChildFields("cond", "then_cmd"),
free := self >> Union(self.cond.free(), self.then_cmd.free()),
print := (self, i, si) >>
Print(self.__name__, "(", self.cond, ",\n",
Blanks(i+si), self.then_cmd.print(i+si, si), "\n",
Blanks(i), ")")
));
Class(WHILE, Command, rec(
__call__ := (self, cond, then_cmd) >> WithBases(self,
rec(operations := CmdOps,
then_cmd := Checked(IsCommand(then_cmd), then_cmd),
cond := toExpArg(cond))),
rChildren := self >> [self.cond, self.then_cmd],
rSetChild := rSetChildFields("cond", "then_cmd"),
free := self >> Union(self.cond.free(), self.then_cmd.free()),
print := (self, i, si) >>
Print(self.__name__, "(", self.cond, ",\n",
Blanks(i+si), self.then_cmd.print(i+si, si), "\n",
Blanks(i), ")")
));
Class(multi_if, ExpCommand, rec(
__call__ := arg >> let(
self := arg[1],
args := Cond( Length(arg)=2 and IsList(arg[2]), arg[2], Drop(arg,1)),
Cond( # Length(args)=0, skip(), #NOTE: code in autolib's _genPlan relies on this and messing with 'args' directly
Length(args)=1, toExpArg(args[1]),
WithBases(self, rec(
operations := CmdOps,
args := List(args, toExpArg))))),
print := (self, i, si) >> Print(self.__name__, "(\n",
DoForAll([1..Length(self.args)], j ->
Cond( IsOddInt(j), Print(Blanks(i+si), self.args[j]),
Print(", ", self.args[j].print(i+si+si, si), When(j<>Length(self.args), ",\n")))),
Print("\n", Blanks(i), ")"))
));
#F program(<cmd1>, <cmd2>, ...) - top level collection of commands,
#F usually each cmd is decl, func or struct
#F
Class(program, multiwrap);
Class(trycatch, multiwrap);
Class(tryfinally, multiwrap);
#F func(<ret>, <id>, <params>, <cmd>)
#F ret - return type
#F id - function name string
#F params - list of parameters (typed vars)
#F cmd - function body
#F
Class(func, Command, rec(
__call__ := (self, ret, id, params, cmd) >> WithBases(self, rec(
ret := Checked(IsType(ret), ret),
id := Checked(IsString(id), id),
params := Checked(IsList(params), params),
cmd := Checked(IsCommand(cmd), cmd),
operations := CmdOps)),
free := self >> Difference(self.cmd.free(), self.params),
rChildren := self >> [self.ret, self.id, self.params, self.cmd],
rSetChild := rSetChildFields("ret", "id", "params", "cmd"),
print := (self, i, si) >> Print(self.__name__, "(", self.ret, ", \"", self.id, "\", ", self.params, ", \n",
Blanks(i+si), self.cmd.print(i+si, si), "\n", Blanks(i), ")", self.printA()),
# we have to handle vector values differently, since we only want to count unique ones, but they may be masqueraded by vparam or as integer constants in fpmuls
countOps := meth(self, countrec)
local count, reccountrec, vals, vparams, fpmuls;
if self.id = "transform" and Last(countrec.ops) = Value then
reccountrec := Copy(countrec);
reccountrec.ops := DropLast(countrec.ops, 1);
count := self.cmd.countOps(reccountrec);
vals := Set(List(Collect(self.cmd, @@(1, Value, (e,cx)->ObjId(e.t)=TVect or
(IsBound(cx.Value) and cx.Value=[] and ObjId(e.t)=AtomicTyp and e.t.name="TReal" and IsFloat(e.v)))), i->i.v));
vparams := Set(List(Collect(self.cmd, @(1, spiral.platforms.vparam, e->IsList(e.p) and ForAll(e.p, IsString))), i->i.p));
fpmuls := Set(Filtered(List(Collect(self.cmd, fpmul), i-> i.args[2]), IsValue));
Add(count, Length(vals)+Length(vparams)+Length(fpmuls));
return count;
else
return self.cmd.countOps(countrec);
fi;
end
));
Class(func_ppe, func);
#
## define
#
# this command is used to define new types in the unparsed code.
# if you need a
#
# typedef struct { ... }
#
# this is it.
#
Class(define, Command, rec(
__call__ := (self, types) >> WithBases(self, rec(
types := types,
operations := CmdOps
)),
rChildren := self >> [self.types],
rSetChild := rSetChildFields("types"),
print := (self, i, si) >> Print(
self.__name__, "(", self.types, ")"
)
));
#
## IfDef
#
# A #if statement to control execution in code. Used for debugging -- getting counts for only
# certain kernels, etc.
#
Class(IfDef, Command, rec(
__call__ := (self, cond, cmd) >> WithBases(self, rec(
cond := Checked(IsString(cond), cond),
operations := CmdOps,
cmd := Checked(IsCommand(cmd), cmd)
)),
rChildren := self >> [self.cond, self.cmd],
rSetChild := rSetChildFields("cond", "cmd"),
print := (self, i, si) >> Print(
self.__name__, "(\"", self.cond, "\", ", self.cmd, ")"
),
));
Class(Define, Command, rec(
__call__ := (self, var, exp) >> WithBases(self, rec(
operations := CmdOps,
var := var,
exp := exp
)),
rChildren := self >> [self.exp, self.var],
rSetChild := rSetChildFields("exp", "var"),
print := (self, i, si) >> Print(
self.__name__, "(", self.var, ", ", self.exp, ")")
));
#-----------------------------------------------------------------------------
#F Ind()
#F Ind(<range>) -- <range> must be an integer or symbolic integer that will
#F imply the range of variable of [0 .. <range>-1]
#F NB: we do not set isLoopIndex attribute below, because Ind() is now
#F used in Lambdas and some other places which are not loops
#F NOTE?
#F
Ind := arg -> Cond(
Length(arg)=0, var.fresh_t("i", TInt),
Length(arg)=1,
Cond(arg[1]=TInt, var.fresh_t("ii", TInt),
var.fresh("i", TInt, toRange(arg[1]))),
Error("Usage: Ind() | Ind(<range>)")
);
IndNR := () -> var.fresh_t("i", TInt);
IntVar := pfx -> var.fresh_t(pfx, TInt);
DataInd := (type, range) -> var.fresh("k", type, toRange(range));
TempVec := type -> var.fresh_t("T", type);
Dat := type -> var.fresh_t("D", type);
Dat1d := (type,nentries) -> var.fresh_t("D", TArray(type, nentries));
Dat2d := (type,rows,cols) -> var.fresh_t("D", TArray(TArray(type, cols), rows));
Dat3d := (type,planes,rows,cols) -> var.fresh_t("D", TArray(TArray(TArray(type, cols), rows), planes));
TempVar := type -> var.fresh_t("t", type);
IsLoop := x->IsRec(x) and IsBound(x.isLoop) and x.isLoop;
IsUnrollableLoop := x->IsRec(x) and IsBound(x.isLoop) and x.isLoop and x.__name__ <> "dist_loop";
IsChain := x->IsRec(x) and IsBound(x.isChain) and x.isChain;
IsLoopIndex := v -> IsRec(v) and IsBound(v.isLoopIndex) and v.isLoopIndex;
IsParallelLoopIndex := v -> IsRec(v) and IsBound(v.isParallelLoopIndex) and v.isParallelLoopIndex;
IndPar := idx -> Ind(idx).setAttr("isParallelLoopIndex");
#F FlattenCode(<code>) . . . . . . . . . . . flattens nested chain commands
#F
FlattenCode := c -> SubstBottomUp(c, @.cond(x->IsChain(x) or ObjId(x)=unroll_cmd), e -> e.flatten());
#F FlattenCode2(<code>) . . . same as FlattenCode, but also replaces chain(c) by c
#F
FlattenCode2 := c -> SubstBottomUpRules(c, [
[@.cond(x->IsChain(x) or ObjId(x)=unroll_cmd), e -> e.flatten(), "flatten1"],
[[chain, @(1)], e->e.cmds[1], "flatten2"]
]);
#F FlattenCode0(<code>) . . same as FlattenCode, but avoids unroll_cmd
#F
FlattenCode0 := c -> SubstBottomUp(c, @.cond(x->IsChain(x)), e -> e.flatten());
#F UnrollCode(<code>) . . . . . . fully unrolls <code> without optimization
#F SubstBottomUp works faster than SubstTopDown as it unrolls innermost loops first,
#F but it doesn't work when the loop domain depends on an outer loop variable.
UnrollCode := c -> let(
buc := SubstBottomUp(c, @.cond(x -> IsLoop(x) and not IsSymbolic(x.range)), e->e.unroll()),
tdc := SubstTopDown(buc, @.cond(IsLoop), e->e.unroll()),
SubstBottomUp(tdc, virtual_var, e->e.ev())
);
#F ArithCostCode(<code>) . . . . . . . returns a list [num_adds, num_muls]
#F
ArithCostCode := c -> let(
ops := List([Collect(c, add), Collect(c, sub), Collect(c, mul)],
lst -> Sum(lst, x->Length(x.args)-1)),
[ops[1]+ops[2], ops[3]]);
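#F Example (illustrative sketch; variable names are assumed):
#F   a := TempVar(TInt);; x := TempVar(TInt);; b := TempVar(TInt);;
#F   ArithCostCode(add(mul(a, x), b));  # -> [1, 1]  (one add, one mul)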
#F SubstVars(<expr>, <bindings>)
#F
#F Evaluates several variables in <expr> to their values given in <bindings>.
#F <bindings> should be a record or a table of the form:
#F rec( var1 := value1, ...) OR
#F tab( var1 := value1, ...)
#F
SubstVars := function (expr, bindings)
return SubstLeaves(expr, @(200, var, e -> IsBound(bindings.(e.id))),
e -> bindings.(e.id));
end;
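#F Example (illustrative sketch, using the tab((v.id) := ...) binding style seen above):
#F   i := Ind();;
#F   SubstVars(add(i, V(1)), tab((i.id) := V(3)));  # -> add(V(3), V(1))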
SubstVarsEval := function (expr, bindings)
return SubstBottomUp(expr, @,
e -> Cond(IsVar(e), bindings.(e.id), IsExp(e), e.eval(), V(e)));
end;
|
[STATEMENT]
lemma lmap_lstrict_prefix:
"lstrict_prefix xs ys \<Longrightarrow> lstrict_prefix (lmap f xs) (lmap f ys)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. lstrict_prefix xs ys \<Longrightarrow> lstrict_prefix (lmap f xs) (lmap f ys)
[PROOF STEP]
by (metis llength_lmap lmap_lprefix lprefix_llength_eq_imp_eq lstrict_prefix_def) |
-- Idris2
import System
import System.Concurrency
-- Test `conditionBroadcast` wakes all threads for 1 main and N child threads
main : IO ()
main =
let n = 3 in
do cvMutex <- makeMutex
cv <- makeCondition
ts <- for [1..n] $ \_ => fork $ do mutexAcquire cvMutex
conditionWait cv cvMutex
putStrLn "Hello mother"
mutexRelease cvMutex
putStrLn "Hello children"
sleep 1
conditionBroadcast cv
ignore $ for ts $ \t => threadWait t
sleep 1
|
Andrew volunteered for the New Zealand Expeditionary Force (NZEF) in October 1915. Because only men between the ages of 19 and 45 were required to register for service with the NZEF, he falsified his age to ensure that he would be eligible for duty overseas. A member of the 12th Reinforcements, he embarked for the Western Front via Egypt on 1 May 1916. In France, he was posted to B Company, Wellington Infantry Battalion with the rank of private.
|
module utm where
open import turing
open import Data.Product
-- open import Data.Bool
open import Data.List
open import Data.Nat
open import logic
data utmStates : Set where
reads : utmStates
read0 : utmStates
read1 : utmStates
read2 : utmStates
read3 : utmStates
read4 : utmStates
read5 : utmStates
read6 : utmStates
loc0 : utmStates
loc1 : utmStates
loc2 : utmStates
loc3 : utmStates
loc4 : utmStates
loc5 : utmStates
loc6 : utmStates
fetch0 : utmStates
fetch1 : utmStates
fetch2 : utmStates
fetch3 : utmStates
fetch4 : utmStates
fetch5 : utmStates
fetch6 : utmStates
fetch7 : utmStates
print0 : utmStates
print1 : utmStates
print2 : utmStates
print3 : utmStates
print4 : utmStates
print5 : utmStates
print6 : utmStates
print7 : utmStates
mov0 : utmStates
mov1 : utmStates
mov2 : utmStates
mov3 : utmStates
mov4 : utmStates
mov5 : utmStates
mov6 : utmStates
tidy0 : utmStates
tidy1 : utmStates
halt : utmStates
data utmΣ : Set where
0 : utmΣ
1 : utmΣ
B : utmΣ
* : utmΣ
$ : utmΣ
^ : utmΣ
X : utmΣ
Y : utmΣ
Z : utmΣ
@ : utmΣ
b : utmΣ
utmδ : utmStates → utmΣ → utmStates × (Write utmΣ) × Move
utmδ reads x = read0 , wnone , mnone
utmδ read0 * = read1 , write * , left
utmδ read0 x = read0 , write x , right
utmδ read1 x = read2 , write @ , right
utmδ read2 ^ = read3 , write ^ , right
utmδ read2 x = read2 , write x , right
utmδ read3 0 = read4 , write 0 , left
utmδ read3 1 = read5 , write 1 , left
utmδ read3 b = read6 , write b , left
utmδ read4 @ = loc0 , write 0 , right
utmδ read4 x = read4 , write x , left
utmδ read5 @ = loc0 , write 1 , right
utmδ read5 x = read5 , write x , left
utmδ read6 @ = loc0 , write B , right
utmδ read6 x = read6 , write x , left
utmδ loc0 0 = loc0 , write X , left
utmδ loc0 1 = loc0 , write Y , left
utmδ loc0 B = loc0 , write Z , left
utmδ loc0 $ = loc1 , write $ , right
utmδ loc0 x = loc0 , write x , left
utmδ loc1 X = loc2 , write 0 , right
utmδ loc1 Y = loc3 , write 1 , right
utmδ loc1 Z = loc4 , write B , right
utmδ loc1 * = fetch0 , write * , right
utmδ loc1 x = loc1 , write x , right
utmδ loc2 0 = loc5 , write X , right
utmδ loc2 1 = loc6 , write Y , right
utmδ loc2 B = loc6 , write Z , right
utmδ loc2 x = loc2 , write x , right
utmδ loc3 1 = loc5 , write Y , right
utmδ loc3 0 = loc6 , write X , right
utmδ loc3 B = loc6 , write Z , right
utmδ loc3 x = loc3 , write x , right
utmδ loc4 B = loc5 , write Z , right
utmδ loc4 0 = loc6 , write X , right
utmδ loc4 1 = loc6 , write Y , right
utmδ loc4 x = loc4 , write x , right
utmδ loc5 $ = loc1 , write $ , right
utmδ loc5 x = loc5 , write x , left
utmδ loc6 $ = halt , write $ , right
utmδ loc6 * = loc0 , write * , left
utmδ loc6 x = loc6 , write x , right
utmδ fetch0 0 = fetch1 , write X , left
utmδ fetch0 1 = fetch2 , write Y , left
utmδ fetch0 B = fetch3 , write Z , left
utmδ fetch0 x = fetch0 , write x , right
utmδ fetch1 $ = fetch4 , write $ , right
utmδ fetch1 x = fetch1 , write x , left
utmδ fetch2 $ = fetch5 , write $ , right
utmδ fetch2 x = fetch2 , write x , left
utmδ fetch3 $ = fetch6 , write $ , right
utmδ fetch3 x = fetch3 , write x , left
utmδ fetch4 0 = fetch7 , write X , right
utmδ fetch4 1 = fetch7 , write X , right
utmδ fetch4 B = fetch7 , write X , right
utmδ fetch4 * = print0 , write * , left
utmδ fetch4 x = fetch4 , write x , right
utmδ fetch5 0 = fetch7 , write Y , right
utmδ fetch5 1 = fetch7 , write Y , right
utmδ fetch5 B = fetch7 , write Y , right
utmδ fetch5 * = print0 , write * , left
utmδ fetch5 x = fetch5 , write x , right
utmδ fetch6 0 = fetch7 , write Z , right
utmδ fetch6 1 = fetch7 , write Z , right
utmδ fetch6 B = fetch7 , write Z , right
utmδ fetch6 * = print0 , write * , left
utmδ fetch6 x = fetch6 , write x , right
utmδ fetch7 * = fetch0 , write * , right
utmδ fetch7 x = fetch7 , write x , right
utmδ print0 X = print1 , write X , right
utmδ print0 Y = print2 , write Y , right
utmδ print0 Z = print3 , write Z , right
utmδ print1 ^ = print4 , write ^ , right
utmδ print1 x = print1 , write x , right
utmδ print2 ^ = print5 , write ^ , right
utmδ print2 x = print2 , write x , right
utmδ print3 ^ = print6 , write ^ , right
utmδ print3 x = print3 , write x , right
utmδ print4 x = print7 , write 0 , left
utmδ print5 x = print7 , write 1 , left
utmδ print6 x = print7 , write B , left
utmδ print7 X = mov0 , write X , right
utmδ print7 Y = mov1 , write Y , right
utmδ print7 x = print7 , write x , left
utmδ mov0 ^ = mov2 , write ^ , left
utmδ mov0 x = mov0 , write x , right
utmδ mov1 ^ = mov3 , write ^ , right
utmδ mov1 x = mov1 , write x , right
utmδ mov2 0 = mov4 , write ^ , right
utmδ mov2 1 = mov5 , write ^ , right
utmδ mov2 B = mov6 , write ^ , right
utmδ mov3 0 = mov4 , write ^ , left
utmδ mov3 1 = mov5 , write ^ , left
utmδ mov3 B = mov6 , write ^ , left
utmδ mov4 ^ = tidy0 , write 0 , left
utmδ mov5 ^ = tidy0 , write 1 , left
utmδ mov6 ^ = tidy0 , write B , left
utmδ tidy0 $ = tidy1 , write $ , left
utmδ tidy0 x = tidy0 , write x , left
utmδ tidy1 X = tidy1 , write 0 , left
utmδ tidy1 Y = tidy1 , write 1 , left
utmδ tidy1 Z = tidy1 , write B , left
utmδ tidy1 $ = reads , write $ , right
utmδ tidy1 x = tidy1 , write x , left
utmδ _ x = halt , write x , mnone
U-TM : Turing utmStates utmΣ
U-TM = record {
tδ = utmδ
; tstart = read0
; tend = tend
; tnone = b
} where
tend : utmStates → Bool
tend halt = true
tend _ = false
-- Copyδ : CopyStates → ℕ → CopyStates × ( Write ℕ ) × Move
-- Copyδ s1 0 = H , wnone , mnone
-- Copyδ s1 1 = s2 , write 0 , right
-- Copyδ s2 0 = s3 , write 0 , right
-- Copyδ s2 1 = s2 , write 1 , right
-- Copyδ s3 0 = s4 , write 1 , left
-- Copyδ s3 1 = s3 , write 1 , right
-- Copyδ s4 0 = s5 , write 0 , left
-- Copyδ s4 1 = s4 , write 1 , left
-- Copyδ s5 0 = s1 , write 1 , right
-- Copyδ s5 1 = s5 , write 1 , left
-- Copyδ H _ = H , wnone , mnone
-- Copyδ _ (suc (suc _)) = H , wnone , mnone
Copyδ-encode : List utmΣ
Copyδ-encode =
0 ∷ 0 ∷ 1 ∷ 0 ∷ 1 ∷ 1 ∷ 0 ∷ 0 ∷ 0 ∷ 0 ∷ -- s1 0 = H , wnone , mnone
* ∷ 0 ∷ 0 ∷ 1 ∷ 1 ∷ 0 ∷ 1 ∷ 0 ∷ 0 ∷ 0 ∷ 1 ∷ -- s1 1 = s2 , write 0 , right
* ∷ 0 ∷ 1 ∷ 0 ∷ 0 ∷ 0 ∷ 1 ∷ 1 ∷ 0 ∷ 0 ∷ 1 ∷ -- s2 0 = s3 , write 0 , right
* ∷ 0 ∷ 1 ∷ 0 ∷ 1 ∷ 0 ∷ 1 ∷ 0 ∷ 1 ∷ 0 ∷ 1 ∷ -- s2 1 = s2 , write 1 , right
* ∷ 0 ∷ 1 ∷ 1 ∷ 0 ∷ 1 ∷ 0 ∷ 0 ∷ 1 ∷ 0 ∷ 0 ∷ -- s3 0 = s4 , write 1 , left
* ∷ 0 ∷ 1 ∷ 1 ∷ 1 ∷ 0 ∷ 1 ∷ 1 ∷ 1 ∷ 0 ∷ 1 ∷ -- s3 1 = s3 , write 1 , right
* ∷ 1 ∷ 0 ∷ 0 ∷ 0 ∷ 1 ∷ 0 ∷ 1 ∷ 0 ∷ 0 ∷ 0 ∷ -- s4 0 = s5 , write 0 , left
* ∷ 1 ∷ 0 ∷ 0 ∷ 1 ∷ 1 ∷ 0 ∷ 0 ∷ 1 ∷ 0 ∷ 0 ∷ -- s4 1 = s4 , write 1 , left
* ∷ 1 ∷ 0 ∷ 1 ∷ 0 ∷ 0 ∷ 0 ∷ 1 ∷ 1 ∷ 0 ∷ 1 ∷ -- s5 0 = s1 , write 1 , right
* ∷ 1 ∷ 0 ∷ 1 ∷ 1 ∷ 1 ∷ 0 ∷ 1 ∷ 1 ∷ 0 ∷ 0 ∷ -- s5 1 = s5 , write 1 , left
[]
input-encode : List utmΣ
input-encode = 1 ∷ 1 ∷ 0 ∷ 0 ∷ 0 ∷ []
input+Copyδ : List utmΣ
input+Copyδ = ( $ ∷ 0 ∷ 0 ∷ 0 ∷ 0 ∷ * ∷ [] ) -- start state
++ Copyδ-encode
++ ( $ ∷ ^ ∷ input-encode )
short-input : List utmΣ
short-input = $ ∷ 0 ∷ 0 ∷ 0 ∷ * ∷
0 ∷ 0 ∷ 0 ∷ 1 ∷ 0 ∷ 1 ∷ 1 ∷ * ∷
0 ∷ 0 ∷ 1 ∷ 0 ∷ 1 ∷ 1 ∷ 1 ∷ * ∷
0 ∷ 1 ∷ B ∷ 1 ∷ 0 ∷ 1 ∷ 0 ∷ * ∷
1 ∷ 0 ∷ 0 ∷ 0 ∷ 1 ∷ 1 ∷ 1 ∷ $ ∷
^ ∷ 0 ∷ 0 ∷ 1 ∷ 1 ∷ []
utm-test1 : List utmΣ → utmStates × ( List utmΣ ) × ( List utmΣ )
utm-test1 inp = Turing.taccept U-TM inp
{-# TERMINATING #-}
utm-test2 : ℕ → List utmΣ → utmStates × ( List utmΣ ) × ( List utmΣ )
utm-test2 n inp = loop n (Turing.tstart U-TM) inp []
where
loop : ℕ → utmStates → ( List utmΣ ) → ( List utmΣ ) → utmStates × ( List utmΣ ) × ( List utmΣ )
loop zero q L R = ( q , L , R )
loop (suc n) q L R with move {utmStates} {utmΣ} {0} {utmδ} q L R | q
... | nq , nL , nR | reads = loop n nq nL nR
... | nq , nL , nR | _ = loop (suc n) nq nL nR
t1 = utm-test2 20 short-input
t : (n : ℕ) → utmStates × ( List utmΣ ) × ( List utmΣ )
-- t n = utm-test2 n input+Copyδ
t n = utm-test2 n short-input
|
```python
import numpy as np
import sympy as sym
import numba
import pydae.build as db
```
```python
```
\begin{eqnarray}
\dot \delta &=& \Omega_b \left(\omega - \omega_s\right)\\
\dot \omega &=& 1/(2 H) \left(p_m - p_e - D (\omega - \omega_s) \right)\\
\end{eqnarray}
$$ \sf
\Omega_{b} \left(\sf \omega - \omega_{s}\right)
$$
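Before building the full model, the two state equations above can be sanity-checked with a plain forward-Euler integration. This is a minimal sketch with assumed constants; `p_e` is held fixed here, unlike the full model where it depends on the network solution:

```python
import math

# Illustrative constants (assumed values for the sketch, not the notebook's model)
Omega_b = 2 * math.pi * 50   # base angular speed (rad/s)
omega_s = 1.0                # synchronous speed (pu)
H, D = 3.5, 1.0              # inertia constant and damping
p_m, p_e = 0.9, 0.8          # mechanical/electrical power (pu), held constant

delta, omega = 0.0, 1.0      # initial rotor angle and speed
dt = 1e-3
for _ in range(1000):        # integrate 1 second
    ddelta = Omega_b * (omega - omega_s)
    domega = (p_m - p_e - D * (omega - omega_s)) / (2 * H)
    delta += dt * ddelta
    omega += dt * domega

print(omega)  # speed rises above 1.0 since p_m > p_e
```

With `p_m > p_e` the rotor accelerates toward the steady-state speed deviation `(p_m - p_e)/D` with time constant `2H/D`.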
## System definition
```python
params = {'S_b':500.0,
'X_d':1.81,'X1d':0.3,'T1d0':8.0, # synchronous machine d-axis parameters
'X_q':1.76,'X1q':0.65,'T1q0':1.0, # synchronous machine q-axis parameters
'R_a':0.003,'X_l': 0.05,
'H':3.5,'D':1.0,
'Omega_b':2*np.pi*50,'omega_s':1.0,
'v_0':1.0,'theta_0':0.0,
'K_a':100, 'T_r':0.1, 'v_pss':0.0,
'T_c':2.0,'T_b':10.0,'Kpgov':10.0,'Kigov':2.0,'Droop':0.05,'Kimw':0.0,'S_b_g':500.0}
u_ini_dict = {'P_t':0.8, 'Q_t':0.2} # for the initialization problem
u_run_dict = {'p_m':0.8,'v_ref':1.0} # for the running problem (inputs differ from the initialization inputs)
x_list_syn = ['delta','omega','e1q','e1d','v_c'] # synchronous machine and AVR filter states
y_ini_list = ['i_d','i_q','v_1','theta_1','p_m','v_ref','v_f'] # for the initialization problem
y_run_list = ['i_d','i_q','v_1','theta_1','P_t','Q_t', 'v_f'] # for the running problem (algebraic unknowns differ from the initialization list)
x_list_gov = ['xi_gov','xi_mw','x_gov','p_m_meas'] # governor states
sys_vars = {'params':params,
'u_list':u_run_dict,
'x_list':x_list_syn + x_list_gov,
'y_list':y_run_list}
exec(db.sym_gen_str()) # exec to generate the required symbolic varables and constants
```
```python
v_d = v_1*sin(delta - theta_1)
v_q = v_1*cos(delta - theta_1)
p_e = i_d*(v_d + R_a*i_d) + i_q*(v_q + R_a*i_q)
ddelta = Omega_b*(omega - omega_s)
domega = 1/(2*H)*(p_m - p_e - D*(omega - omega_s))
de1q = 1/T1d0*(-e1q - (X_d - X1d)*i_d + v_f)
de1d = 1/T1q0*(-e1d + (X_q - X1q)*i_q)
dv_c = (v_1 - v_c)/T_r
g_1 = v_q + R_a*i_q + X1d*i_d - e1q
g_2 = v_d + R_a*i_d - X1q*i_q - e1d
g_3 = P_t - (v_1*v_0*sin(theta_1 - theta_0))/X_l
g_4 = Q_t + (v_1*v_0*cos(theta_1 - theta_0))/X_l - v_1**2/X_l
g_5 = i_d*v_d + i_q*v_q - P_t
g_6 = i_d*v_q - i_q*v_d - Q_t
g_7 = K_a*(v_ref - v_c + v_pss) - v_f
sys = {'name':'smib_milano_ex8p1_4ord_avr',
'params_dict':params,
'f_list':[ddelta,domega,de1q,de1d,dv_c],
'g_list':[g_1,g_2,g_3,g_4,g_5,g_6,g_7],
'x_list':x_list,
'y_ini_list':y_ini_list,
'y_run_list':y_run_list,
'u_run_dict':u_run_dict,
'u_ini_dict':u_ini_dict,
'h_dict':{'p_m':p_m}}
sys = db.system(sys)
db.sys2num(sys)
```
```python
omega_droop = Droop*p_m_meas
p_gov= Kpgov*(omega_ref - omega - omega_droop + Kimw*xi_mw) + Kigov*xi_gov
p_m = x_gov + T_c_1/T_b_1*(p_gov - x_gov)
f1 = 1.0 - omega_1 - omega_droop_1 + Kimw_1*xi_mw_1
f2 = p_ref - p_m
f3 = (p_gov_1 - x_gov_1)/T_b_1
f4 = 1/0.1*(p_m_1 - p_m_meas_1)
g_grid_p = P_t_1*S_b_1/S_b - (v_1*v_0*sin(theta_1 - theta_0))/X_l_1
g_grid_q = Q_t_1*S_b_1/S_b + (v_1*v_0*cos(theta_1 - theta_0))/X_l_1 - v_1**2/X_l_1
```
```python
```
|
```
%%capture
!pip install --quiet --upgrade pip
!pip install --quiet cirq==0.7
```
# Rabi Oscillation Experiment
In this experiment, you are going to use Cirq to check that rotating a qubit by an increasing angle, and then measuring the qubit, produces Rabi oscillations. This requires you to do the following things:
1. Prepare the $|0\rangle$ state.
2. Rotate by an angle $\theta$ around the $X$ axis.
3. Measure to see if the result is a 1 or a 0.
4. Repeat steps 1-3 $k$ times.
5. Report the fraction of $\frac{\text{Number of 1's}}{k}$
found in step 3.
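In the ideal, noiseless case the fraction reported in step 5 approaches $\sin^2(\theta/2)$. A quick plain-Python check of that expectation (no Cirq needed):

```
import math

def expected_ones_fraction(theta):
    """Ideal probability of measuring |1> after an Rx(theta) rotation of |0>."""
    return math.sin(theta / 2) ** 2

for theta in [0.0, math.pi / 2, math.pi]:
    print(theta, expected_ones_fraction(theta))
# theta = pi is a full bit flip: the qubit is measured as 1 every time
```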
## 1. Getting to know Cirq
Cirq emphasizes the details of implementing quantum algorithms on near term devices.
For example, when you work on a qubit in Cirq you don't operate on an unspecified qubit that will later be mapped onto a device by a hidden step.
Instead, you are always operating on specific qubits at specific locations that you specify.
Suppose you are working with a 54 qubit Sycamore chip.
This device is included in Cirq by default.
It is called `cirq.google.Sycamore`, and you can see its layout by printing it.
```
import cirq
working_device = cirq.google.Sycamore
print(working_device)
```
(0, 5)───(0, 6)
│ │
│ │
(1, 4)───(1, 5)───(1, 6)───(1, 7)
│ │ │ │
│ │ │ │
(2, 3)───(2, 4)───(2, 5)───(2, 6)───(2, 7)───(2, 8)
│ │ │ │ │ │
│ │ │ │ │ │
(3, 2)───(3, 3)───(3, 4)───(3, 5)───(3, 6)───(3, 7)───(3, 8)───(3, 9)
│ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │
(4, 1)───(4, 2)───(4, 3)───(4, 4)───(4, 5)───(4, 6)───(4, 7)───(4, 8)───(4, 9)
│ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │
(5, 0)───(5, 1)───(5, 2)───(5, 3)───(5, 4)───(5, 5)───(5, 6)───(5, 7)───(5, 8)
│ │ │ │ │ │ │
│ │ │ │ │ │ │
(6, 1)───(6, 2)───(6, 3)───(6, 4)───(6, 5)───(6, 6)───(6, 7)
│ │ │ │ │
│ │ │ │ │
(7, 2)───(7, 3)───(7, 4)───(7, 5)───(7, 6)
│ │ │
│ │ │
(8, 3)───(8, 4)───(8, 5)
│
│
(9, 4)
For this experiment you only need one qubit and you can just pick whichever one you like.
```
my_qubit = cirq.GridQubit(5, 6)
```
Once you've chosen your qubit you can build circuits that use it.
```
from cirq.contrib.svg import SVGCircuit
# Create a circuit that rotates the qubit with Rx(pi/2) and then measures it.
my_circuit = cirq.Circuit(
# Rotate the qubit pi/2 radians around the X axis.
cirq.rx(3.141 / 2).on(my_qubit),
# Measure the qubit.
cirq.measure(my_qubit, key='out')
)
SVGCircuit(my_circuit)
```
Now you can simulate sampling from your circuit using `cirq.Simulator`.
```
sim = cirq.Simulator()
samples = sim.sample(my_circuit, repetitions=10)
samples
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>out</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>1</td>
</tr>
<tr>
<th>5</th>
<td>0</td>
</tr>
<tr>
<th>6</th>
<td>1</td>
</tr>
<tr>
<th>7</th>
<td>1</td>
</tr>
<tr>
<th>8</th>
<td>0</td>
</tr>
<tr>
<th>9</th>
<td>1</td>
</tr>
</tbody>
</table>
</div>
You can also get properties of the circuit, such as the density matrix of the circuit's output or the state vector just before the terminal measurement.
```
state_vector_before_measurement = sim.simulate(my_circuit[:-1])
sampled_state_vector_after_measurement = sim.simulate(my_circuit)
print(f'State before measurement:')
print(state_vector_before_measurement)
print(f'State after measurement:')
print(sampled_state_vector_after_measurement)
```
State before measurement:
measurements: (no measurements)
output vector: 0.707|0⟩ - 0.707j|1⟩
State after measurement:
measurements: out=1
output vector: -1j|1⟩
You can also examine the outputs from a noisy environment.
For example, an environment where 10% depolarization is applied to each qubit after each operation in the circuit:
```
noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.1))
noisy_post_measurement_state = noisy_sim.simulate(my_circuit)
noisy_pre_measurement_state = noisy_sim.simulate(my_circuit[:-1])
print('Noisy state after measurement:' + str(noisy_post_measurement_state))
print('Noisy state before measurement:' + str(noisy_pre_measurement_state))
```
Noisy state after measurement:measurements: out=0
final density matrix:
[[0.9333334 +0.j 0. +0.j]
[0. +0.j 0.06666666+0.j]]
Noisy state before measurement:measurements: (no measurements)
final density matrix:
[[0.50012845+0.j 0. +0.43333334j]
[0. -0.43333334j 0.49987155+0.j ]]
## 2. Parameterized Circuits and Sweeps
Now that you have some of the basics end to end, you can create a parameterized circuit that rotates by an angle $\theta$:
```
import sympy
theta = sympy.Symbol('theta')
parameterized_circuit = cirq.Circuit(
cirq.rx(theta).on(my_qubit),
cirq.measure(my_qubit, key='out')
)
SVGCircuit(parameterized_circuit)
```
In the above block you saw that there is a `sympy.Symbol` that you placed in the circuit. Cirq supports symbolic computation involving circuits. What this means is that when you construct `cirq.Circuit` objects you can put placeholders in many of the classical control parameters of the circuit which you can fill with values later on.
Now if you wanted to use `cirq.simulate` or `cirq.sample` with the parameterized circuit you would also need to specify a value for `theta`.
```
samples_at_theta_equals_2 = sim.sample(
parameterized_circuit,
params={theta: 2},
repetitions=10)
samples_at_theta_equals_2
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>theta</th>
<th>out</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>2</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>2</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>2</td>
<td>1</td>
</tr>
<tr>
<th>5</th>
<td>2</td>
<td>0</td>
</tr>
<tr>
<th>6</th>
<td>2</td>
<td>1</td>
</tr>
<tr>
<th>7</th>
<td>2</td>
<td>0</td>
</tr>
<tr>
<th>8</th>
<td>2</td>
<td>1</td>
</tr>
<tr>
<th>9</th>
<td>2</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
You can also specify *multiple* values of `theta`, and get samples back for each value.
```
samples_at_multiple_theta = sim.sample(
parameterized_circuit,
params=[{theta: 0.5}, {theta: 3.141}],
repetitions=10)
samples_at_multiple_theta
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>theta</th>
<th>out</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>5</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>6</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>7</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>8</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>9</th>
<td>0.500</td>
<td>0</td>
</tr>
<tr>
<th>0</th>
<td>3.141</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>3.141</td>
<td>1</td>
</tr>
<tr>
<th>2</th>
<td>3.141</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>3.141</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>3.141</td>
<td>1</td>
</tr>
<tr>
<th>5</th>
<td>3.141</td>
<td>1</td>
</tr>
<tr>
<th>6</th>
<td>3.141</td>
<td>1</td>
</tr>
<tr>
<th>7</th>
<td>3.141</td>
<td>1</td>
</tr>
<tr>
<th>8</th>
<td>3.141</td>
<td>1</td>
</tr>
<tr>
<th>9</th>
<td>3.141</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
Cirq has shorthand notation you can use to sweep `theta` over a range of values.
```
samples_at_swept_theta = sim.sample(
parameterized_circuit,
params=cirq.Linspace(theta, start=0, stop=3.14159, length=5),
repetitions=5)
samples_at_swept_theta
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>theta</th>
<th>out</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.000000</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>0.000000</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>0.000000</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>0.000000</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>0.000000</td>
<td>0</td>
</tr>
<tr>
<th>0</th>
<td>0.785397</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>0.785397</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>0.785397</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>0.785397</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>0.785397</td>
<td>0</td>
</tr>
<tr>
<th>0</th>
<td>1.570795</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>1.570795</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>1.570795</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>1.570795</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>1.570795</td>
<td>0</td>
</tr>
<tr>
<th>0</th>
<td>2.356192</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>2.356192</td>
<td>1</td>
</tr>
<tr>
<th>2</th>
<td>2.356192</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>2.356192</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>2.356192</td>
<td>0</td>
</tr>
<tr>
<th>0</th>
<td>3.141590</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>3.141590</td>
<td>1</td>
</tr>
<tr>
<th>2</th>
<td>3.141590</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>3.141590</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>3.141590</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
The value returned by `sim.sample` is a `pandas.DataFrame` object.
Pandas is a common library for working with tabular data in Python.
You can use standard pandas methods to analyze and summarize your results.
```
import pandas
big_results = sim.sample(
parameterized_circuit,
params=cirq.Linspace(theta, start=0, stop=3.14159, length=20),
repetitions=10_000)
# big_results is too big to look at. Plot cross tabulated data instead.
pandas.crosstab(big_results.theta, big_results.out).plot()
```
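The same cross-tabulation works on any DataFrame with these columns. As a miniature, hypothetical stand-in for `big_results` (the column values below are made up for illustration):

```python
import pandas as pd

# Hypothetical miniature of big_results: two theta values, two shots each.
df = pd.DataFrame({'theta': [0.0, 0.0, 3.14, 3.14],
                   'out':   [0,   0,   1,    1]})
# Estimated P(out = 1) at each theta -- the quantity the crosstab plot shows.
print(df.groupby('theta')['out'].mean())
```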
## 3. The built-in experiment
Cirq comes with a pre-written Rabi oscillation experiment `cirq.experiments.rabi_oscillations`.
This method takes a `cirq.Sampler`, which could be a simulator or a network connection to real hardware.
The method takes a few more experimental parameters, and returns a result object
that can be plotted.
```
result = cirq.experiments.rabi_oscillations(
sampler=noisy_sim,
qubit=my_qubit,
num_points=50,
repetitions=10000)
result.plot()
```
Notice that you can tell from the plot that you used the noisy simulator you defined earlier.
You can also tell that the amount of depolarization is roughly 10%.
## 4. Exercise: Find the best qubit
As you have seen, you can use Cirq to perform a Rabi oscillation experiment.
You can either make the experiment yourself out of the basic pieces made available by Cirq, or use the prebuilt experiment method.
Now you're going to put this knowledge to the test.
There is some amount of depolarizing noise on each qubit.
Your goal is to characterize every qubit from the Sycamore chip using a Rabi oscillation experiment, and find the qubit with the lowest noise according to the secret noise model.
```
import hashlib
class SecretNoiseModel(cirq.NoiseModel):
def noisy_operation(self, op):
# Hey! No peeking!
q = op.qubits[0]
v = hashlib.sha256(str(q).encode()).digest()[0] / 256
yield cirq.depolarize(v).on(q)
yield op
secret_noise_sampler = cirq.DensityMatrixSimulator(noise=SecretNoiseModel())
```
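The model draws each qubit's depolarization probability from a hash of the qubit's name, so the noise looks random but is reproducible across runs. The helper below is a stand-alone re-implementation of that idea (the function name and the sample qubit name are illustrative, not part of Cirq):

```python
import hashlib

def noise_level(qubit_name: str) -> float:
    """First byte of the SHA-256 of the name, scaled into [0, 1)."""
    return hashlib.sha256(qubit_name.encode()).digest()[0] / 256

# Deterministic: the same qubit name always maps to the same noise level.
assert noise_level('q(0, 0)') == noise_level('q(0, 0)')
print(noise_level('q(0, 0)'))
```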
```
q = cirq.google.Sycamore.qubits[3]
print('qubit', repr(q))
cirq.experiments.rabi_oscillations(
sampler=secret_noise_sampler,
qubit=q
).plot()
```
|
\documentclass[10pt]{article}
\usepackage{caption}
\usepackage{graphicx}
\graphicspath{ {./images/} }
\usepackage{fancyhdr}
%opening
\title{Preparing Moth Specimens for Microscopic Examination}
\author{Dr. Paul J. Palmer}
\pagestyle{fancy}
\makeatletter
\let\runauthor\@author
\let\runtitle\@title
\makeatother
\chead{\runtitle}
\lfoot{\runauthor}
\rfoot{\today}
\begin{document}
%\maketitle
\pagenumbering{gobble}
%\begin{abstract}
%
%\end{abstract}
%\section*{Pitfall traps for spider recording}
%\begin{center}
% \centering
% \includegraphics[width=0.3\linewidth]{images/pitfall}\hfill
% \includegraphics[width=.3\textwidth]{images/alcohol}\hfill
% \includegraphics[width=.3\textwidth]{images/complete}\hfill
% \captionof{figure}{Placing the pitfall trap: Place the pitfall cup in a carefully dug hole; Fill the collector cup with 30 mL of food grade \textit{Mono~Propylene~Glycol} and place the inside the pitfall; Cover with the rain shield leaving a vertical gap of about 10~mm. }
%
%\end{center}
These instructions explain how to prepare moth specimens for confirmation of identification (ID) by microscopic examination. These are my own preferences, but other workers will probably accept specimens prepared in this way. ID often requires examination of both external and internal features, so the whole specimen should be preserved in as near perfect a condition as possible. Dissection of the genitalia requires removal of the abdomen and separation of the reproductive organs and their arrangement on a microscope slide. This is only possible if the specimen is completely desiccated, since the presence of any moisture will result in decay and the growth of moulds, obscuring the features required for ID. Traditionally, specimens were presented set on pins in a perfectly dehydrated state ready for examination, with specimen details included on labels mounted on the same pin.
Unmounted specimens should be stored in suitable tubes treated in the following way:
\begin{description}
\item[Label] A label should be placed \textbf{inside the tube} along with a single specimen recording: Who, What, Where, When and How. Figure~\ref{InternalLabel} illustrates how labels are used and why they must be inside the specimen tube. Use a pigment pen or graphite pencil with labels in alcohol.
\item[Freezer] A minimum of 48 hours in a domestic freezer will ensure that the specimen is dead along with any mites that may be present. There is no harm in exceeding this time.
\item[Dehydration] Exposure to dry air is needed to remove all moisture. The tubes should be left uncovered or stoppered with cotton wool for several weeks or longer. If the desiccation is carried out in the freezer, then the tubes should be brought up to room temperature and left for several hours before the lids are put in place.
\item[Pickling in Alcohol] Storage in 70\% isopropanol or propylene glycol is an acceptable alternative to dehydration. Once the alcohol is added, store in a freezer for one week to complete the fixing process.
\item[Post] The tubes may be posted to: Paul Palmer, 136 Burton Road, Melton Mowbray, Leicestershire LE13 1DL. An advance note is appreciated \\ (\texttt{[email protected]}) and by arrangement, callers are always welcome.
\end{description}
% TODO: \usepackage{graphicx} required
\begin{center}
\centering
\includegraphics[width=0.3\linewidth]{images/specimens}\hfill
\includegraphics[width=.3\textwidth]{images/prepared}\hfill
\includegraphics[width=.3\textwidth]{images/slides}\hfill
\captionof{figure}{From left to right: Specimens as received for examination; Once photographed and catalogued, each specimen is placed in a standard tube along with its label; After dissection, the label is glued to the microscope slide.}
\label{InternalLabel}
\end{center}
\end{document}
|
#' @export
#' @title get.asp
#' @description Gets the aspect ratio of the current plot: inches per user-coordinate unit in y, relative to x.
#' @family plotting
#' @author unknown, \email{<unknown>@@dfo-mpo.gc.ca}
get.asp <- function() {
  pin <- par("pin")  # plot region size in inches: (width, height)
  usr <- par("usr")  # user coordinate limits: (x1, x2, y1, y2)
  # ratio of inches-per-user-unit in y to inches-per-user-unit in x
  asp <- (pin[2]/(usr[4] - usr[3]))/(pin[1]/(usr[2] - usr[1]))
  return(asp)
}
} |
lemma filterlim_tendsto_neg_mult_at_bot: fixes c :: real assumes c: "(f \<longlongrightarrow> c) F" "c < 0" and g: "filterlim g at_top F" shows "LIM x F. f x * g x :> at_bot" |
[STATEMENT]
lemma multpw_converse:
"multpw (ns\<inverse>) = (multpw ns)\<inverse>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. multpw (ns\<inverse>) = (multpw ns)\<inverse>
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. multpw (ns\<inverse>) = (multpw ns)\<inverse>
[PROOF STEP]
have "(X, Y) \<in> multpw (ns\<inverse>) \<Longrightarrow> (X, Y) \<in> (multpw ns)\<inverse>" for X Y and ns :: "'a rel"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (X, Y) \<in> multpw (ns\<inverse>) \<Longrightarrow> (X, Y) \<in> (multpw ns)\<inverse>
[PROOF STEP]
by (induct X Y rule: multpw.induct) (auto intro: multpw.intros)
[PROOF STATE]
proof (state)
this:
(?X, ?Y) \<in> multpw (?ns\<inverse>) \<Longrightarrow> (?X, ?Y) \<in> (multpw ?ns)\<inverse>
goal (1 subgoal):
1. multpw (ns\<inverse>) = (multpw ns)\<inverse>
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
(?X, ?Y) \<in> multpw (?ns\<inverse>) \<Longrightarrow> (?X, ?Y) \<in> (multpw ?ns)\<inverse>
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
(?X, ?Y) \<in> multpw (?ns\<inverse>) \<Longrightarrow> (?X, ?Y) \<in> (multpw ?ns)\<inverse>
goal (1 subgoal):
1. multpw (ns\<inverse>) = (multpw ns)\<inverse>
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
multpw (ns\<inverse>) = (multpw ns)\<inverse>
goal:
No subgoals!
[PROOF STEP]
qed |
/-
Copyright (c) 2017 Johannes Hölzl. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Johannes Hölzl, Jeremy Avigad
-/
import control.traversable.instances
import data.set.finite
import order.copy
import tactic.monotonicity
/-!
# Theory of filters on sets
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
## Main definitions
* `filter` : filters on a set;
* `at_top`, `at_bot`, `cofinite`, `principal` : specific filters;
* `map`, `comap` : operations on filters;
* `tendsto` : limit with respect to filters;
* `eventually` : `f.eventually p` means `{x | p x} ∈ f`;
* `frequently` : `f.frequently p` means `{x | ¬p x} ∉ f`;
* `filter_upwards [h₁, ..., hₙ]` : takes a list of proofs `hᵢ : sᵢ ∈ f`, and replaces a goal `s ∈ f`
with `∀ x, x ∈ s₁ → ... → x ∈ sₙ → x ∈ s`;
* `ne_bot f` : a utility class stating that `f` is a non-trivial filter.
Filters on a type `X` are sets of sets of `X` satisfying three conditions. They are mostly used to
abstract two related kinds of ideas:
* *limits*, including finite or infinite limits of sequences, finite or infinite limits of functions
at a point or at infinity, etc...
* *things happening eventually*, including things happening for large enough `n : ℕ`, or near enough
a point `x`, or for close enough pairs of points, or things happening almost everywhere in the
sense of measure theory. Dually, filters can also express the idea of *things happening often*:
for arbitrarily large `n`, or at a point in any neighborhood of a given point, etc...
In this file, we define the type `filter X` of filters on `X`, and endow it with a complete lattice
structure. This structure is lifted from the lattice structure on `set (set X)` using the Galois
insertion which maps a filter to its elements in one direction, and an arbitrary set of sets to
the smallest filter containing it in the other direction.
We also prove `filter` is a monadic functor, with a push-forward operation
`filter.map` and a pull-back operation `filter.comap` that form a Galois connection for the
order on filters.
The examples of filters appearing in the description of the two motivating ideas are:
* `(at_top : filter ℕ)` : made of sets of `ℕ` containing `{n | n ≥ N}` for some `N`
* `𝓝 x` : made of neighborhoods of `x` in a topological space (defined in topology.basic)
* `𝓤 X` : made of entourages of a uniform space (those spaces are generalizations of metric spaces
defined in topology.uniform_space.basic)
* `μ.ae` : made of sets whose complement has zero measure with respect to `μ` (defined in
`measure_theory.measure_space`)
The general notion of limit of a map with respect to filters on the source and target types
is `filter.tendsto`. It is defined in terms of the order and the push-forward operation.
The predicate "happening eventually" is `filter.eventually`, and "happening often" is
`filter.frequently`, whose definitions are immediate after `filter` is defined (but they come
rather late in this file in order to immediately relate them to the lattice structure).
For instance, anticipating on topology.basic, the statement: "if a sequence `u` converges to
some `x` and `u n` belongs to a set `M` for `n` large enough then `x` is in the closure of
`M`" is formalized as: `tendsto u at_top (𝓝 x) → (∀ᶠ n in at_top, u n ∈ M) → x ∈ closure M`,
which is a special case of `mem_closure_of_tendsto` from topology.basic.
## Notations
* `∀ᶠ x in f, p x` : `f.eventually p`;
* `∃ᶠ x in f, p x` : `f.frequently p`;
* `f =ᶠ[l] g` : `∀ᶠ x in l, f x = g x`;
* `f ≤ᶠ[l] g` : `∀ᶠ x in l, f x ≤ g x`;
* `𝓟 s` : `principal s`, localized in `filter`.
## References
* [N. Bourbaki, *General Topology*][bourbaki1966]
Important note: Bourbaki requires that a filter on `X` cannot contain all sets of `X`, which
we do *not* require. This gives `filter X` better formal properties, in particular a bottom element
`⊥` for its lattice structure, at the cost of including the assumption
`[ne_bot f]` in a number of lemmas and definitions.
-/
open function set order
universes u v w x y
open_locale classical
/-- A filter `F` on a type `α` is a collection of sets of `α` which contains the whole `α`,
is upwards-closed, and is stable under intersection. We do not forbid this collection to be
all sets of `α`. -/
structure filter (α : Type*) :=
(sets : set (set α))
(univ_sets : set.univ ∈ sets)
(sets_of_superset {x y} : x ∈ sets → x ⊆ y → y ∈ sets)
(inter_sets {x y} : x ∈ sets → y ∈ sets → x ∩ y ∈ sets)
/-- If `F` is a filter on `α`, and `U` a subset of `α` then we can write `U ∈ F` as on paper. -/
instance {α : Type*}: has_mem (set α) (filter α) := ⟨λ U F, U ∈ F.sets⟩
namespace filter
variables {α : Type u} {f g : filter α} {s t : set α}
@[simp] protected lemma mem_mk {t : set (set α)} {h₁ h₂ h₃} : s ∈ mk t h₁ h₂ h₃ ↔ s ∈ t := iff.rfl
@[simp] protected lemma mem_sets : s ∈ f.sets ↔ s ∈ f := iff.rfl
instance inhabited_mem : inhabited {s : set α // s ∈ f} := ⟨⟨univ, f.univ_sets⟩⟩
lemma filter_eq : ∀ {f g : filter α}, f.sets = g.sets → f = g
| ⟨a, _, _, _⟩ ⟨._, _, _, _⟩ rfl := rfl
lemma filter_eq_iff : f = g ↔ f.sets = g.sets :=
⟨congr_arg _, filter_eq⟩
protected lemma ext_iff : f = g ↔ ∀ s, s ∈ f ↔ s ∈ g :=
by simp only [filter_eq_iff, ext_iff, filter.mem_sets]
@[ext]
protected lemma ext : (∀ s, s ∈ f ↔ s ∈ g) → f = g :=
filter.ext_iff.2
/-- An extensionality lemma that is useful for filters with good lemmas about `sᶜ ∈ f` (e.g.,
`filter.comap`, `filter.coprod`, `filter.Coprod`, `filter.cofinite`). -/
protected lemma coext (h : ∀ s, sᶜ ∈ f ↔ sᶜ ∈ g) : f = g :=
filter.ext $ compl_surjective.forall.2 h
@[simp] lemma univ_mem : univ ∈ f :=
f.univ_sets
lemma mem_of_superset {x y : set α} (hx : x ∈ f) (hxy : x ⊆ y) : y ∈ f :=
f.sets_of_superset hx hxy
lemma inter_mem {s t : set α} (hs : s ∈ f) (ht : t ∈ f) : s ∩ t ∈ f :=
f.inter_sets hs ht
@[simp] lemma inter_mem_iff {s t : set α} : s ∩ t ∈ f ↔ s ∈ f ∧ t ∈ f :=
⟨λ h, ⟨mem_of_superset h (inter_subset_left s t),
mem_of_superset h (inter_subset_right s t)⟩, and_imp.2 inter_mem⟩
lemma diff_mem {s t : set α} (hs : s ∈ f) (ht : tᶜ ∈ f) : s \ t ∈ f :=
inter_mem hs ht
lemma univ_mem' (h : ∀ a, a ∈ s) : s ∈ f :=
mem_of_superset univ_mem (λ x _, h x)
lemma mp_mem (hs : s ∈ f) (h : {x | x ∈ s → x ∈ t} ∈ f) : t ∈ f :=
mem_of_superset (inter_mem hs h) $ λ x ⟨h₁, h₂⟩, h₂ h₁
lemma congr_sets (h : {x | x ∈ s ↔ x ∈ t} ∈ f) : s ∈ f ↔ t ∈ f :=
⟨λ hs, mp_mem hs (mem_of_superset h (λ x, iff.mp)),
λ hs, mp_mem hs (mem_of_superset h (λ x, iff.mpr))⟩
@[simp] lemma bInter_mem {β : Type v} {s : β → set α} {is : set β} (hf : is.finite) :
(⋂ i ∈ is, s i) ∈ f ↔ ∀ i ∈ is, s i ∈ f :=
finite.induction_on hf (by simp) (λ i s hi _ hs, by simp [hs])
@[simp] lemma bInter_finset_mem {β : Type v} {s : β → set α} (is : finset β) :
(⋂ i ∈ is, s i) ∈ f ↔ ∀ i ∈ is, s i ∈ f :=
bInter_mem is.finite_to_set
alias bInter_finset_mem ← _root_.finset.Inter_mem_sets
attribute [protected] finset.Inter_mem_sets
@[simp] lemma sInter_mem {s : set (set α)} (hfin : s.finite) :
⋂₀ s ∈ f ↔ ∀ U ∈ s, U ∈ f :=
by rw [sInter_eq_bInter, bInter_mem hfin]
@[simp] lemma Inter_mem {β : Type v} {s : β → set α} [finite β] :
(⋂ i, s i) ∈ f ↔ ∀ i, s i ∈ f :=
by simpa using bInter_mem finite_univ
lemma exists_mem_subset_iff : (∃ t ∈ f, t ⊆ s) ↔ s ∈ f :=
⟨λ ⟨t, ht, ts⟩, mem_of_superset ht ts, λ hs, ⟨s, hs, subset.rfl⟩⟩
lemma monotone_mem {f : filter α} : monotone (λ s, s ∈ f) :=
λ s t hst h, mem_of_superset h hst
lemma exists_mem_and_iff {P : set α → Prop} {Q : set α → Prop} (hP : antitone P) (hQ : antitone Q) :
(∃ u ∈ f, P u) ∧ (∃ u ∈ f, Q u) ↔ (∃ u ∈ f, P u ∧ Q u) :=
begin
split,
{ rintro ⟨⟨u, huf, hPu⟩, v, hvf, hQv⟩, exact ⟨u ∩ v, inter_mem huf hvf,
hP (inter_subset_left _ _) hPu, hQ (inter_subset_right _ _) hQv⟩ },
{ rintro ⟨u, huf, hPu, hQu⟩, exact ⟨⟨u, huf, hPu⟩, u, huf, hQu⟩ }
end
lemma forall_in_swap {β : Type*} {p : set α → β → Prop} :
(∀ (a ∈ f) b, p a b) ↔ ∀ b (a ∈ f), p a b :=
set.forall_in_swap
end filter
namespace tactic.interactive
open tactic
setup_tactic_parser
/--
`filter_upwards [h₁, ⋯, hₙ]` replaces a goal of the form `s ∈ f` and terms
`h₁ : t₁ ∈ f, ⋯, hₙ : tₙ ∈ f` with `∀ x, x ∈ t₁ → ⋯ → x ∈ tₙ → x ∈ s`.
The list is an optional parameter, `[]` being its default value.
`filter_upwards [h₁, ⋯, hₙ] with a₁ a₂ ⋯ aₖ` is a short form for
`{ filter_upwards [h₁, ⋯, hₙ], intros a₁ a₂ ⋯ aₖ }`.
`filter_upwards [h₁, ⋯, hₙ] using e` is a short form for
`{ filter_upwards [h1, ⋯, hn], exact e }`.
Combining both shortcuts is done by writing `filter_upwards [h₁, ⋯, hₙ] with a₁ a₂ ⋯ aₖ using e`.
Note that in this case, the `aᵢ` terms can be used in `e`.
-/
meta def filter_upwards
(s : parse types.pexpr_list?)
(wth : parse with_ident_list?)
(tgt : parse (tk "using" *> texpr)?) : tactic unit :=
do
(s.get_or_else []).reverse.mmap (λ e, eapplyc `filter.mp_mem >> eapply e),
eapplyc `filter.univ_mem',
`[dsimp only [set.mem_set_of_eq]],
let wth := wth.get_or_else [],
if ¬wth.empty then intros wth else skip,
match tgt with
| some e := exact e
| none := skip
end
add_tactic_doc
{ name := "filter_upwards",
category := doc_category.tactic,
decl_names := [`tactic.interactive.filter_upwards],
tags := ["goal management", "lemma application"] }
end tactic.interactive
namespace filter
variables {α : Type u} {β : Type v} {γ : Type w} {δ : Type*} {ι : Sort x}
section principal
/-- The principal filter of `s` is the collection of all supersets of `s`. -/
def principal (s : set α) : filter α :=
{ sets := {t | s ⊆ t},
univ_sets := subset_univ s,
sets_of_superset := λ x y hx, subset.trans hx,
inter_sets := λ x y, subset_inter }
localized "notation (name := filter.principal) `𝓟` := filter.principal" in filter
@[simp] lemma mem_principal {s t : set α} : s ∈ 𝓟 t ↔ t ⊆ s := iff.rfl
lemma mem_principal_self (s : set α) : s ∈ 𝓟 s := subset.rfl
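/- For instance (an illustrative `example` added here, not part of the original interface):
membership in a principal filter is just set inclusion. -/
example {s t : set α} (h : s ⊆ t) : t ∈ 𝓟 s := mem_principal.2 h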
end principal
open_locale filter
section join
/-- The join of a filter of filters is defined by the relation `s ∈ join f ↔ {t | s ∈ t} ∈ f`. -/
def join (f : filter (filter α)) : filter α :=
{ sets := {s | {t : filter α | s ∈ t} ∈ f},
univ_sets := by simp only [mem_set_of_eq, univ_sets, ← filter.mem_sets, set_of_true],
sets_of_superset := λ x y hx xy,
mem_of_superset hx $ λ f h, mem_of_superset h xy,
inter_sets := λ x y hx hy,
mem_of_superset (inter_mem hx hy) $ λ f ⟨h₁, h₂⟩, inter_mem h₁ h₂ }
@[simp] lemma mem_join {s : set α} {f : filter (filter α)} :
s ∈ join f ↔ {t | s ∈ t} ∈ f := iff.rfl
end join
section lattice
variables {f g : filter α} {s t : set α}
instance : partial_order (filter α) :=
{ le := λ f g, ∀ ⦃U : set α⦄, U ∈ g → U ∈ f,
le_antisymm := λ a b h₁ h₂, filter_eq $ subset.antisymm h₂ h₁,
le_refl := λ a, subset.rfl,
le_trans := λ a b c h₁ h₂, subset.trans h₂ h₁ }
theorem le_def : f ≤ g ↔ ∀ x ∈ g, x ∈ f := iff.rfl
protected lemma not_le : ¬ f ≤ g ↔ ∃ s ∈ g, s ∉ f := by simp_rw [le_def, not_forall]
/-- `generate_sets g s`: `s` is in the filter closure of `g`. -/
inductive generate_sets (g : set (set α)) : set α → Prop
| basic {s : set α} : s ∈ g → generate_sets s
| univ : generate_sets univ
| superset {s t : set α} : generate_sets s → s ⊆ t → generate_sets t
| inter {s t : set α} : generate_sets s → generate_sets t → generate_sets (s ∩ t)
/-- `generate g` is the largest filter containing the sets `g`. -/
def generate (g : set (set α)) : filter α :=
{ sets := generate_sets g,
univ_sets := generate_sets.univ,
sets_of_superset := λ x y, generate_sets.superset,
inter_sets := λ s t, generate_sets.inter }
lemma sets_iff_generate {s : set (set α)} {f : filter α} : f ≤ filter.generate s ↔ s ⊆ f.sets :=
iff.intro
(λ h u hu, h $ generate_sets.basic $ hu)
(λ h u hu, hu.rec_on h univ_mem
(λ x y _ hxy hx, mem_of_superset hx hxy)
(λ x y _ _ hx hy, inter_mem hx hy))
lemma mem_generate_iff {s : set $ set α} {U : set α} :
U ∈ generate s ↔ ∃ t ⊆ s, set.finite t ∧ ⋂₀ t ⊆ U :=
begin
split ; intro h,
{ induction h,
case basic : V V_in
{ exact ⟨{V}, singleton_subset_iff.2 V_in, finite_singleton _, (sInter_singleton _).subset⟩ },
case univ { exact ⟨∅, empty_subset _, finite_empty, subset_univ _⟩ },
case superset : V W hV' hVW hV
{ rcases hV with ⟨t, hts, ht, htV⟩,
exact ⟨t, hts, ht, htV.trans hVW⟩ },
case inter : V W hV' hW' hV hW
{ rcases ⟨hV, hW⟩ with ⟨⟨t, hts, ht, htV⟩, u, hus, hu, huW⟩,
exact ⟨t ∪ u, union_subset hts hus, ht.union hu,
(sInter_union _ _).subset.trans $ inter_subset_inter htV huW⟩ } },
{ rcases h with ⟨t, hts, tfin, h⟩,
exact mem_of_superset ((sInter_mem tfin).2 $ λ V hV, generate_sets.basic $ hts hV) h },
end
/-- `mk_of_closure s hs` constructs a filter on `α` whose elements set is exactly
`s : set (set α)`, provided one gives the assumption `hs : (generate s).sets = s`. -/
protected def mk_of_closure (s : set (set α)) (hs : (generate s).sets = s) : filter α :=
{ sets := s,
univ_sets := hs ▸ (univ_mem : univ ∈ generate s),
sets_of_superset := λ x y, hs ▸ (mem_of_superset : x ∈ generate s → x ⊆ y → y ∈ generate s),
inter_sets := λ x y, hs ▸ (inter_mem : x ∈ generate s → y ∈ generate s →
x ∩ y ∈ generate s) }
lemma mk_of_closure_sets {s : set (set α)} {hs : (generate s).sets = s} :
filter.mk_of_closure s hs = generate s :=
filter.ext $ λ u,
show u ∈ (filter.mk_of_closure s hs).sets ↔ u ∈ (generate s).sets, from hs.symm ▸ iff.rfl
/-- Galois insertion from sets of sets into filters. -/
def gi_generate (α : Type*) :
@galois_insertion (set (set α)) (filter α)ᵒᵈ _ _ filter.generate filter.sets :=
{ gc := λ s f, sets_iff_generate,
le_l_u := λ f u h, generate_sets.basic h,
choice := λ s hs, filter.mk_of_closure s (le_antisymm hs $ sets_iff_generate.1 $ le_rfl),
choice_eq := λ s hs, mk_of_closure_sets }
/-- The infimum of filters is the filter generated by intersections
of elements of the two filters. -/
instance : has_inf (filter α) := ⟨λf g : filter α,
{ sets := {s | ∃ (a ∈ f) (b ∈ g), s = a ∩ b },
univ_sets := ⟨_, univ_mem, _, univ_mem, by simp⟩,
sets_of_superset := begin
rintro x y ⟨a, ha, b, hb, rfl⟩ xy,
refine ⟨a ∪ y, mem_of_superset ha (subset_union_left a y),
b ∪ y, mem_of_superset hb (subset_union_left b y), _⟩,
rw [← inter_union_distrib_right, union_eq_self_of_subset_left xy]
end,
inter_sets := begin
rintro x y ⟨a, ha, b, hb, rfl⟩ ⟨c, hc, d, hd, rfl⟩,
refine ⟨a ∩ c, inter_mem ha hc, b ∩ d, inter_mem hb hd, _⟩,
ac_refl
end }⟩
lemma mem_inf_iff {f g : filter α} {s : set α} :
s ∈ f ⊓ g ↔ ∃ t₁ ∈ f, ∃ t₂ ∈ g, s = t₁ ∩ t₂ := iff.rfl
lemma mem_inf_of_left {f g : filter α} {s : set α} (h : s ∈ f) : s ∈ f ⊓ g :=
⟨s, h, univ, univ_mem, (inter_univ s).symm⟩
lemma mem_inf_of_right {f g : filter α} {s : set α} (h : s ∈ g) : s ∈ f ⊓ g :=
⟨univ, univ_mem, s, h, (univ_inter s).symm⟩
lemma inter_mem_inf {α : Type u} {f g : filter α} {s t : set α}
(hs : s ∈ f) (ht : t ∈ g) : s ∩ t ∈ f ⊓ g :=
⟨s, hs, t, ht, rfl⟩
lemma mem_inf_of_inter {f g : filter α} {s t u : set α} (hs : s ∈ f) (ht : t ∈ g) (h : s ∩ t ⊆ u) :
u ∈ f ⊓ g :=
mem_of_superset (inter_mem_inf hs ht) h
lemma mem_inf_iff_superset {f g : filter α} {s : set α} :
s ∈ f ⊓ g ↔ ∃ t₁ ∈ f, ∃ t₂ ∈ g, t₁ ∩ t₂ ⊆ s :=
⟨λ ⟨t₁, h₁, t₂, h₂, eq⟩, ⟨t₁, h₁, t₂, h₂, eq ▸ subset.rfl⟩,
λ ⟨t₁, h₁, t₂, h₂, sub⟩, mem_inf_of_inter h₁ h₂ sub⟩
instance : has_top (filter α) :=
⟨{ sets := {s | ∀ x, x ∈ s},
univ_sets := λ x, mem_univ x,
sets_of_superset := λ x y hx hxy a, hxy (hx a),
inter_sets := λ x y hx hy a, mem_inter (hx _) (hy _) }⟩
lemma mem_top_iff_forall {s : set α} : s ∈ (⊤ : filter α) ↔ (∀ x, x ∈ s) :=
iff.rfl
@[simp] lemma mem_top {s : set α} : s ∈ (⊤ : filter α) ↔ s = univ :=
by rw [mem_top_iff_forall, eq_univ_iff_forall]
section complete_lattice
/- We lift the complete lattice along the Galois connection `generate` / `sets`. Unfortunately,
we want to have different definitional equalities for the lattice operations. So we define them
upfront and change the lattice operations for the complete lattice instance. -/
private def original_complete_lattice : complete_lattice (filter α) :=
@order_dual.complete_lattice _ (gi_generate α).lift_complete_lattice
local attribute [instance] original_complete_lattice
instance : complete_lattice (filter α) := original_complete_lattice.copy
/- le -/ filter.partial_order.le rfl
/- top -/ (filter.has_top).1
(top_unique $ λ s hs, by simp [mem_top.1 hs])
/- bot -/ _ rfl
/- sup -/ _ rfl
/- inf -/ (filter.has_inf).1
begin
ext f g : 2,
exact le_antisymm
(le_inf (λ s, mem_inf_of_left) (λ s, mem_inf_of_right))
(begin
rintro s ⟨a, ha, b, hb, rfl⟩,
exact inter_sets _ (@inf_le_left (filter α) _ _ _ _ ha)
(@inf_le_right (filter α) _ _ _ _ hb)
end)
end
/- Sup -/ (join ∘ 𝓟) (by { ext s x, exact mem_Inter₂.symm.trans
(set.ext_iff.1 (sInter_image _ _) x).symm})
/- Inf -/ _ rfl
instance : inhabited (filter α) := ⟨⊥⟩
end complete_lattice
/-- A filter is `ne_bot` if it is not equal to `⊥`, or equivalently the empty set
does not belong to the filter. Bourbaki includes this assumption in the definition
of a filter but we prefer to have a `complete_lattice` structure on filter, so
we use a typeclass argument in lemmas instead. -/
class ne_bot (f : filter α) : Prop := (ne' : f ≠ ⊥)
lemma ne_bot_iff {f : filter α} : ne_bot f ↔ f ≠ ⊥ := ⟨λ h, h.1, λ h, ⟨h⟩⟩
lemma ne_bot.ne {f : filter α} (hf : ne_bot f) : f ≠ ⊥ := ne_bot.ne'
@[simp] lemma not_ne_bot {α : Type*} {f : filter α} : ¬ f.ne_bot ↔ f = ⊥ :=
not_iff_comm.1 ne_bot_iff.symm
lemma ne_bot.mono {f g : filter α} (hf : ne_bot f) (hg : f ≤ g) : ne_bot g :=
⟨ne_bot_of_le_ne_bot hf.1 hg⟩
lemma ne_bot_of_le {f g : filter α} [hf : ne_bot f] (hg : f ≤ g) : ne_bot g :=
hf.mono hg
@[simp] lemma sup_ne_bot {f g : filter α} : ne_bot (f ⊔ g) ↔ ne_bot f ∨ ne_bot g :=
by simp [ne_bot_iff, not_and_distrib]
lemma not_disjoint_self_iff : ¬ disjoint f f ↔ f.ne_bot := by rw [disjoint_self, ne_bot_iff]
lemma bot_sets_eq : (⊥ : filter α).sets = univ := rfl
lemma sup_sets_eq {f g : filter α} : (f ⊔ g).sets = f.sets ∩ g.sets :=
(gi_generate α).gc.u_inf
lemma Sup_sets_eq {s : set (filter α)} : (Sup s).sets = (⋂ f ∈ s, (f : filter α).sets) :=
(gi_generate α).gc.u_Inf
lemma supr_sets_eq {f : ι → filter α} : (supr f).sets = (⋂ i, (f i).sets) :=
(gi_generate α).gc.u_infi
lemma generate_empty : filter.generate ∅ = (⊤ : filter α) :=
(gi_generate α).gc.l_bot
lemma generate_univ : filter.generate univ = (⊥ : filter α) :=
mk_of_closure_sets.symm
lemma generate_union {s t : set (set α)} :
filter.generate (s ∪ t) = filter.generate s ⊓ filter.generate t :=
(gi_generate α).gc.l_sup
lemma generate_Union {s : ι → set (set α)} :
filter.generate (⋃ i, s i) = (⨅ i, filter.generate (s i)) :=
(gi_generate α).gc.l_supr
@[simp] lemma mem_bot {s : set α} : s ∈ (⊥ : filter α) :=
trivial
@[simp] lemma mem_sup {f g : filter α} {s : set α} :
s ∈ f ⊔ g ↔ s ∈ f ∧ s ∈ g :=
iff.rfl
lemma union_mem_sup {f g : filter α} {s t : set α} (hs : s ∈ f) (ht : t ∈ g) :
s ∪ t ∈ f ⊔ g :=
⟨mem_of_superset hs (subset_union_left s t), mem_of_superset ht (subset_union_right s t)⟩
@[simp] lemma mem_Sup {x : set α} {s : set (filter α)} :
x ∈ Sup s ↔ (∀ f ∈ s, x ∈ (f : filter α)) :=
iff.rfl
@[simp] lemma mem_supr {x : set α} {f : ι → filter α} :
x ∈ supr f ↔ (∀ i, x ∈ f i) :=
by simp only [← filter.mem_sets, supr_sets_eq, iff_self, mem_Inter]
@[simp] lemma supr_ne_bot {f : ι → filter α} : (⨆ i, f i).ne_bot ↔ ∃ i, (f i).ne_bot :=
by simp [ne_bot_iff]
lemma infi_eq_generate (s : ι → filter α) : infi s = generate (⋃ i, (s i).sets) :=
show generate _ = generate _, from congr_arg _ $ congr_arg Sup $ (range_comp _ _).symm
lemma mem_infi_of_mem {f : ι → filter α} (i : ι) : ∀ {s}, s ∈ f i → s ∈ ⨅ i, f i :=
show (⨅ i, f i) ≤ f i, from infi_le _ _
lemma mem_infi_of_Inter {ι} {s : ι → filter α} {U : set α} {I : set ι} (I_fin : I.finite)
{V : I → set α} (hV : ∀ i, V i ∈ s i) (hU : (⋂ i, V i) ⊆ U) : U ∈ ⨅ i, s i :=
begin
haveI := I_fin.fintype,
refine mem_of_superset (Inter_mem.2 $ λ i, _) hU,
exact mem_infi_of_mem i (hV _)
end
lemma mem_infi {ι} {s : ι → filter α} {U : set α} : (U ∈ ⨅ i, s i) ↔
∃ I : set ι, I.finite ∧ ∃ V : I → set α, (∀ i, V i ∈ s i) ∧ U = ⋂ i, V i :=
begin
split,
{ rw [infi_eq_generate, mem_generate_iff],
rintro ⟨t, tsub, tfin, tinter⟩,
rcases eq_finite_Union_of_finite_subset_Union tfin tsub with ⟨I, Ifin, σ, σfin, σsub, rfl⟩,
rw sInter_Union at tinter,
set V := λ i, U ∪ ⋂₀ σ i with hV,
have V_in : ∀ i, V i ∈ s i,
{ rintro i,
have : (⋂₀ σ i) ∈ s i,
{ rw sInter_mem (σfin _),
apply σsub },
exact mem_of_superset this (subset_union_right _ _) },
refine ⟨I, Ifin, V, V_in, _⟩,
rwa [hV, ← union_Inter, union_eq_self_of_subset_right] },
{ rintro ⟨I, Ifin, V, V_in, rfl⟩,
exact mem_infi_of_Inter Ifin V_in subset.rfl }
end
lemma mem_infi' {ι} {s : ι → filter α} {U : set α} : (U ∈ ⨅ i, s i) ↔
∃ I : set ι, I.finite ∧ ∃ V : ι → set α, (∀ i, V i ∈ s i) ∧
(∀ i ∉ I, V i = univ) ∧ (U = ⋂ i ∈ I, V i) ∧ U = ⋂ i, V i :=
begin
simp only [mem_infi, set_coe.forall', bInter_eq_Inter],
refine ⟨_, λ ⟨I, If, V, hVs, _, hVU, _⟩, ⟨I, If, λ i, V i, λ i, hVs i, hVU⟩⟩,
rintro ⟨I, If, V, hV, rfl⟩,
refine ⟨I, If, λ i, if hi : i ∈ I then V ⟨i, hi⟩ else univ, λ i, _, λ i hi, _, _⟩,
{ split_ifs, exacts [hV _, univ_mem] },
{ exact dif_neg hi },
{ simp only [Inter_dite, bInter_eq_Inter, dif_pos (subtype.coe_prop _), subtype.coe_eta,
Inter_univ, inter_univ, eq_self_iff_true, true_and] }
end
lemma exists_Inter_of_mem_infi {ι : Type*} {α : Type*} {f : ι → filter α} {s}
(hs : s ∈ ⨅ i, f i) : ∃ t : ι → set α, (∀ i, t i ∈ f i) ∧ s = ⋂ i, t i :=
let ⟨I, If, V, hVs, hV', hVU, hVU'⟩ := mem_infi'.1 hs in ⟨V, hVs, hVU'⟩
lemma mem_infi_of_finite {ι : Type*} [finite ι] {α : Type*} {f : ι → filter α} (s) :
s ∈ (⨅ i, f i) ↔ ∃ t : ι → set α, (∀ i, t i ∈ f i) ∧ s = ⋂ i, t i :=
begin
refine ⟨exists_Inter_of_mem_infi, _⟩,
rintro ⟨t, ht, rfl⟩,
exact Inter_mem.2 (λ i, mem_infi_of_mem i (ht i))
end
@[simp] lemma le_principal_iff {s : set α} {f : filter α} : f ≤ 𝓟 s ↔ s ∈ f :=
show (∀ {t}, s ⊆ t → t ∈ f) ↔ s ∈ f,
from ⟨λ h, h (subset.refl s), λ hs t ht, mem_of_superset hs ht⟩
lemma Iic_principal (s : set α) : Iic (𝓟 s) = {l | s ∈ l} :=
set.ext $ λ x, le_principal_iff
lemma principal_mono {s t : set α} : 𝓟 s ≤ 𝓟 t ↔ s ⊆ t :=
by simp only [le_principal_iff, iff_self, mem_principal]
@[mono] lemma monotone_principal : monotone (𝓟 : set α → filter α) :=
λ _ _, principal_mono.2
@[simp] lemma principal_eq_iff_eq {s t : set α} : 𝓟 s = 𝓟 t ↔ s = t :=
by simp only [le_antisymm_iff, le_principal_iff, mem_principal]; refl
@[simp] lemma join_principal_eq_Sup {s : set (filter α)} : join (𝓟 s) = Sup s := rfl
@[simp] lemma principal_univ : 𝓟 (univ : set α) = ⊤ :=
top_unique $ by simp only [le_principal_iff, mem_top, eq_self_iff_true]
@[simp] lemma principal_empty : 𝓟 (∅ : set α) = ⊥ :=
bot_unique $ λ s _, empty_subset _
lemma generate_eq_binfi (S : set (set α)) : generate S = ⨅ s ∈ S, 𝓟 s :=
eq_of_forall_le_iff $ λ f, by simp [sets_iff_generate, le_principal_iff, subset_def]
/-! ### Lattice equations -/
lemma empty_mem_iff_bot {f : filter α} : ∅ ∈ f ↔ f = ⊥ :=
⟨λ h, bot_unique $ λ s _, mem_of_superset h (empty_subset s),
λ h, h.symm ▸ mem_bot⟩
lemma nonempty_of_mem {f : filter α} [hf : ne_bot f] {s : set α} (hs : s ∈ f) :
s.nonempty :=
s.eq_empty_or_nonempty.elim (λ h, absurd hs (h.symm ▸ mt empty_mem_iff_bot.mp hf.1)) id
lemma ne_bot.nonempty_of_mem {f : filter α} (hf : ne_bot f) {s : set α} (hs : s ∈ f) :
s.nonempty :=
@nonempty_of_mem α f hf s hs
@[simp] lemma empty_not_mem (f : filter α) [ne_bot f] : ¬(∅ ∈ f) :=
λ h, (nonempty_of_mem h).ne_empty rfl
lemma nonempty_of_ne_bot (f : filter α) [ne_bot f] : nonempty α :=
nonempty_of_exists $ nonempty_of_mem (univ_mem : univ ∈ f)
lemma compl_not_mem {f : filter α} {s : set α} [ne_bot f] (h : s ∈ f) : sᶜ ∉ f :=
λ hsc, (nonempty_of_mem (inter_mem h hsc)).ne_empty $ inter_compl_self s
lemma filter_eq_bot_of_is_empty [is_empty α] (f : filter α) : f = ⊥ :=
empty_mem_iff_bot.mp $ univ_mem' is_empty_elim
protected lemma disjoint_iff {f g : filter α} :
disjoint f g ↔ ∃ (s ∈ f) (t ∈ g), disjoint s t :=
by simp only [disjoint_iff, ← empty_mem_iff_bot, mem_inf_iff,
inf_eq_inter, bot_eq_empty, @eq_comm _ ∅]
lemma disjoint_of_disjoint_of_mem {f g : filter α} {s t : set α} (h : disjoint s t)
(hs : s ∈ f) (ht : t ∈ g) : disjoint f g :=
filter.disjoint_iff.mpr ⟨s, hs, t, ht, h⟩
lemma ne_bot.not_disjoint (hf : f.ne_bot) (hs : s ∈ f) (ht : t ∈ f) :
¬ disjoint s t :=
λ h, not_disjoint_self_iff.2 hf $ filter.disjoint_iff.2 ⟨s, hs, t, ht, h⟩
lemma inf_eq_bot_iff {f g : filter α} :
f ⊓ g = ⊥ ↔ ∃ (U ∈ f) (V ∈ g), U ∩ V = ∅ :=
by simpa only [←disjoint_iff, set.disjoint_iff_inter_eq_empty] using filter.disjoint_iff
lemma _root_.pairwise.exists_mem_filter_of_disjoint {ι : Type*} [finite ι]
{l : ι → filter α} (hd : pairwise (disjoint on l)) :
∃ s : ι → set α, (∀ i, s i ∈ l i) ∧ pairwise (disjoint on s) :=
begin
simp only [pairwise, function.on_fun, filter.disjoint_iff, subtype.exists'] at hd,
choose! s t hst using hd,
refine ⟨λ i, ⋂ j, @s i j ∩ @t j i, λ i, _, λ i j hij, _⟩,
exacts [Inter_mem.2 (λ j, inter_mem (@s i j).2 (@t j i).2),
(hst hij).mono ((Inter_subset _ j).trans (inter_subset_left _ _))
((Inter_subset _ i).trans (inter_subset_right _ _))]
end
lemma _root_.set.pairwise_disjoint.exists_mem_filter {ι : Type*} {l : ι → filter α} {t : set ι}
(hd : t.pairwise_disjoint l) (ht : t.finite) :
∃ s : ι → set α, (∀ i, s i ∈ l i) ∧ t.pairwise_disjoint s :=
begin
casesI ht,
obtain ⟨s, hd⟩ : ∃ s : Π i : t, {s : set α // s ∈ l i}, pairwise (disjoint on λ i, (s i : set α)),
{ rcases (hd.subtype _ _).exists_mem_filter_of_disjoint with ⟨s, hsl, hsd⟩,
exact ⟨λ i, ⟨s i, hsl i⟩, hsd⟩ },
-- TODO: Lean fails to find `can_lift` instance and fails to use an instance supplied by `letI`
rcases @subtype.exists_pi_extension ι (λ i, {s // s ∈ l i}) _ _ s with ⟨s, rfl⟩,
exact ⟨λ i, s i, λ i, (s i).2, pairwise.set_of_subtype _ _ hd⟩
end
/-- There is exactly one filter on an empty type. -/
instance unique [is_empty α] : unique (filter α) :=
{ to_inhabited := filter.inhabited, uniq := filter_eq_bot_of_is_empty }
/-- There are only two filters on a `subsingleton`: `⊥` and `⊤`. If the type is empty, then they are
equal. -/
lemma eq_top_of_ne_bot [subsingleton α] (l : filter α) [ne_bot l] : l = ⊤ :=
begin
refine top_unique (λ s hs, _),
obtain rfl : s = univ, from subsingleton.eq_univ_of_nonempty (nonempty_of_mem hs),
exact univ_mem
end
lemma forall_mem_nonempty_iff_ne_bot {f : filter α} :
(∀ (s : set α), s ∈ f → s.nonempty) ↔ ne_bot f :=
⟨λ h, ⟨λ hf, not_nonempty_empty (h ∅ $ hf.symm ▸ mem_bot)⟩, @nonempty_of_mem _ _⟩
instance [nonempty α] : nontrivial (filter α) :=
⟨⟨⊤, ⊥, ne_bot.ne $ forall_mem_nonempty_iff_ne_bot.1 $ λ s hs,
by rwa [mem_top.1 hs, ← nonempty_iff_univ_nonempty]⟩⟩
lemma nontrivial_iff_nonempty : nontrivial (filter α) ↔ nonempty α :=
⟨λ h, by_contra $ λ h',
by { haveI := not_nonempty_iff.1 h', exact not_subsingleton (filter α) infer_instance },
@filter.nontrivial α⟩
lemma eq_Inf_of_mem_iff_exists_mem {S : set (filter α)} {l : filter α}
(h : ∀ {s}, s ∈ l ↔ ∃ f ∈ S, s ∈ f) : l = Inf S :=
le_antisymm (le_Inf $ λ f hf s hs, h.2 ⟨f, hf, hs⟩)
(λ s hs, let ⟨f, hf, hs⟩ := h.1 hs in (Inf_le hf : Inf S ≤ f) hs)
lemma eq_infi_of_mem_iff_exists_mem {f : ι → filter α} {l : filter α}
(h : ∀ {s}, s ∈ l ↔ ∃ i, s ∈ f i) :
l = infi f :=
eq_Inf_of_mem_iff_exists_mem $ λ s, h.trans exists_range_iff.symm
lemma eq_binfi_of_mem_iff_exists_mem {f : ι → filter α} {p : ι → Prop} {l : filter α}
(h : ∀ {s}, s ∈ l ↔ ∃ i (_ : p i), s ∈ f i) :
l = ⨅ i (_ : p i), f i :=
begin
rw [infi_subtype'],
apply eq_infi_of_mem_iff_exists_mem,
intro s,
exact h.trans ⟨λ ⟨i, pi, si⟩, ⟨⟨i, pi⟩, si⟩, λ ⟨⟨i, pi⟩, si⟩, ⟨i, pi, si⟩⟩
end
lemma infi_sets_eq {f : ι → filter α} (h : directed (≥) f) [ne : nonempty ι] :
(infi f).sets = (⋃ i, (f i).sets) :=
let ⟨i⟩ := ne, u := { filter .
sets := (⋃ i, (f i).sets),
univ_sets := by simp only [mem_Union]; exact ⟨i, univ_mem⟩,
sets_of_superset := by simp only [mem_Union, exists_imp_distrib];
intros x y i hx hxy; exact ⟨i, mem_of_superset hx hxy⟩,
inter_sets :=
begin
simp only [mem_Union, exists_imp_distrib],
intros x y a hx b hy,
rcases h a b with ⟨c, ha, hb⟩,
exact ⟨c, inter_mem (ha hx) (hb hy)⟩
end } in
have u = infi f, from eq_infi_of_mem_iff_exists_mem
(λ s, by simp only [filter.mem_mk, mem_Union, filter.mem_sets]),
congr_arg filter.sets this.symm
lemma mem_infi_of_directed {f : ι → filter α} (h : directed (≥) f) [nonempty ι] (s) :
s ∈ infi f ↔ ∃ i, s ∈ f i :=
by simp only [← filter.mem_sets, infi_sets_eq h, mem_Union]
lemma mem_binfi_of_directed {f : β → filter α} {s : set β}
(h : directed_on (f ⁻¹'o (≥)) s) (ne : s.nonempty) {t : set α} :
t ∈ (⨅ i ∈ s, f i) ↔ ∃ i ∈ s, t ∈ f i :=
by haveI : nonempty {x // x ∈ s} := ne.to_subtype;
erw [infi_subtype', mem_infi_of_directed h.directed_coe, subtype.exists]; refl
lemma binfi_sets_eq {f : β → filter α} {s : set β}
(h : directed_on (f ⁻¹'o (≥)) s) (ne : s.nonempty) :
(⨅ i ∈ s, f i).sets = ⋃ i ∈ s, (f i).sets :=
ext $ λ t, by simp [mem_binfi_of_directed h ne]
lemma infi_sets_eq_finite {ι : Type*} (f : ι → filter α) :
(⨅ i, f i).sets = (⋃ t : finset ι, (⨅ i ∈ t, f i).sets) :=
begin
rw [infi_eq_infi_finset, infi_sets_eq],
exact directed_of_sup (λ s₁ s₂, binfi_mono),
end
lemma infi_sets_eq_finite' (f : ι → filter α) :
(⨅ i, f i).sets = (⋃ t : finset (plift ι), (⨅ i ∈ t, f (plift.down i)).sets) :=
by { rw [← infi_sets_eq_finite, ← equiv.plift.surjective.infi_comp], refl }
lemma mem_infi_finite {ι : Type*} {f : ι → filter α} (s) :
s ∈ infi f ↔ ∃ t : finset ι, s ∈ ⨅ i ∈ t, f i :=
(set.ext_iff.1 (infi_sets_eq_finite f) s).trans mem_Union
lemma mem_infi_finite' {f : ι → filter α} (s) :
s ∈ infi f ↔ ∃ t : finset (plift ι), s ∈ ⨅ i ∈ t, f (plift.down i) :=
(set.ext_iff.1 (infi_sets_eq_finite' f) s).trans mem_Union
@[simp] lemma sup_join {f₁ f₂ : filter (filter α)} : (join f₁ ⊔ join f₂) = join (f₁ ⊔ f₂) :=
filter.ext $ λ x, by simp only [mem_sup, mem_join]
@[simp] lemma supr_join {ι : Sort w} {f : ι → filter (filter α)} :
(⨆ x, join (f x)) = join (⨆ x, f x) :=
filter.ext $ λ x, by simp only [mem_supr, mem_join]
instance : distrib_lattice (filter α) :=
{ le_sup_inf :=
begin
intros x y z s,
simp only [and_assoc, mem_inf_iff, mem_sup, exists_prop, exists_imp_distrib, and_imp],
rintro hs t₁ ht₁ t₂ ht₂ rfl,
exact ⟨t₁, x.sets_of_superset hs (inter_subset_left t₁ t₂),
ht₁,
t₂,
x.sets_of_superset hs (inter_subset_right t₁ t₂),
ht₂,
rfl⟩
end,
..filter.complete_lattice }
/- The dual version does not hold! `filter α` is not a `complete_distrib_lattice`. -/
instance : coframe (filter α) :=
{ Inf := Inf,
infi_sup_le_sup_Inf := λ f s, begin
rw [Inf_eq_infi', infi_subtype'],
rintro t ⟨h₁, h₂⟩,
rw infi_sets_eq_finite' at h₂,
simp only [mem_Union, (finset.inf_eq_infi _ _).symm] at h₂,
obtain ⟨u, hu⟩ := h₂,
suffices : (⨅ i, f ⊔ ↑i) ≤ f ⊔ u.inf (λ i, ↑i.down),
{ exact this ⟨h₁, hu⟩ },
refine finset.induction_on u (le_sup_of_le_right le_top) _,
rintro ⟨i⟩ u _ ih,
rw [finset.inf_insert, sup_inf_left],
exact le_inf (infi_le _ _) ih,
end,
..filter.complete_lattice }
lemma mem_infi_finset {s : finset α} {f : α → filter β} {t : set β} :
t ∈ (⨅ a ∈ s, f a) ↔ (∃ p : α → set β, (∀ a ∈ s, p a ∈ f a) ∧ t = ⋂ a ∈ s, p a) :=
begin
simp only [← finset.set_bInter_coe, bInter_eq_Inter, infi_subtype'],
refine ⟨λ h, _, _⟩,
{ rcases (mem_infi_of_finite _).1 h with ⟨p, hp, rfl⟩,
refine ⟨λ a, if h : a ∈ s then p ⟨a, h⟩ else univ, λ a ha, by simpa [ha] using hp ⟨a, ha⟩, _⟩,
refine Inter_congr_of_surjective id surjective_id _,
rintro ⟨a, ha⟩, simp [ha] },
{ rintro ⟨p, hpf, rfl⟩,
exact Inter_mem.2 (λ a, mem_infi_of_mem a (hpf a a.2)) }
end
/-- If `f : ι → filter α` is directed, `ι` is not empty, and `∀ i, f i ≠ ⊥`, then `infi f ≠ ⊥`.
See also `infi_ne_bot_of_directed` for a version assuming `nonempty α` instead of `nonempty ι`. -/
lemma infi_ne_bot_of_directed' {f : ι → filter α} [nonempty ι]
(hd : directed (≥) f) (hb : ∀ i, ne_bot (f i)) : ne_bot (infi f) :=
⟨begin
intro h,
have he : ∅ ∈ (infi f), from h.symm ▸ (mem_bot : ∅ ∈ (⊥ : filter α)),
obtain ⟨i, hi⟩ : ∃ i, ∅ ∈ f i,
from (mem_infi_of_directed hd ∅).1 he,
exact (hb i).ne (empty_mem_iff_bot.1 hi)
end⟩
/-- If `f : ι → filter α` is directed, `α` is not empty, and `∀ i, f i ≠ ⊥`, then `infi f ≠ ⊥`.
See also `infi_ne_bot_of_directed'` for a version assuming `nonempty ι` instead of `nonempty α`. -/
lemma infi_ne_bot_of_directed {f : ι → filter α}
[hn : nonempty α] (hd : directed (≥) f) (hb : ∀ i, ne_bot (f i)) : ne_bot (infi f) :=
begin
casesI is_empty_or_nonempty ι,
{ constructor, simp [infi_of_empty f, top_ne_bot] },
{ exact infi_ne_bot_of_directed' hd hb }
end
lemma Inf_ne_bot_of_directed' {s : set (filter α)} (hne : s.nonempty) (hd : directed_on (≥) s)
(hbot : ⊥ ∉ s) : ne_bot (Inf s) :=
(Inf_eq_infi' s).symm ▸ @infi_ne_bot_of_directed' _ _ _
hne.to_subtype hd.directed_coe (λ ⟨f, hf⟩, ⟨ne_of_mem_of_not_mem hf hbot⟩)
lemma Inf_ne_bot_of_directed [nonempty α] {s : set (filter α)} (hd : directed_on (≥) s)
(hbot : ⊥ ∉ s) : ne_bot (Inf s) :=
(Inf_eq_infi' s).symm ▸ infi_ne_bot_of_directed hd.directed_coe
(λ ⟨f, hf⟩, ⟨ne_of_mem_of_not_mem hf hbot⟩)
lemma infi_ne_bot_iff_of_directed' {f : ι → filter α} [nonempty ι] (hd : directed (≥) f) :
ne_bot (infi f) ↔ ∀ i, ne_bot (f i) :=
⟨λ H i, H.mono (infi_le _ i), infi_ne_bot_of_directed' hd⟩
lemma infi_ne_bot_iff_of_directed {f : ι → filter α} [nonempty α] (hd : directed (≥) f) :
ne_bot (infi f) ↔ (∀ i, ne_bot (f i)) :=
⟨λ H i, H.mono (infi_le _ i), infi_ne_bot_of_directed hd⟩
@[elab_as_eliminator]
lemma infi_sets_induct {f : ι → filter α} {s : set α} (hs : s ∈ infi f) {p : set α → Prop}
(uni : p univ)
(ins : ∀ {i s₁ s₂}, s₁ ∈ f i → p s₂ → p (s₁ ∩ s₂)) : p s :=
begin
rw [mem_infi_finite'] at hs,
simp only [← finset.inf_eq_infi] at hs,
rcases hs with ⟨is, his⟩,
revert s,
refine finset.induction_on is _ _,
{ intros s hs, rwa [mem_top.1 hs] },
{ rintro ⟨i⟩ js his ih s hs,
rw [finset.inf_insert, mem_inf_iff] at hs,
rcases hs with ⟨s₁, hs₁, s₂, hs₂, rfl⟩,
exact ins hs₁ (ih hs₂) }
end
/-! #### `principal` equations -/
@[simp] lemma inf_principal {s t : set α} : 𝓟 s ⊓ 𝓟 t = 𝓟 (s ∩ t) :=
le_antisymm
(by simp only [le_principal_iff, mem_inf_iff]; exact ⟨s, subset.rfl, t, subset.rfl, rfl⟩)
(by simp [le_inf_iff, inter_subset_left, inter_subset_right])
@[simp] lemma sup_principal {s t : set α} : 𝓟 s ⊔ 𝓟 t = 𝓟 (s ∪ t) :=
filter.ext $ λ u, by simp only [union_subset_iff, mem_sup, mem_principal]
@[simp] lemma supr_principal {ι : Sort w} {s : ι → set α} : (⨆ x, 𝓟 (s x)) = 𝓟 (⋃ i, s i) :=
filter.ext $ λ x, by simp only [mem_supr, mem_principal, Union_subset_iff]
@[simp] lemma principal_eq_bot_iff {s : set α} : 𝓟 s = ⊥ ↔ s = ∅ :=
empty_mem_iff_bot.symm.trans $ mem_principal.trans subset_empty_iff
@[simp] lemma principal_ne_bot_iff {s : set α} : ne_bot (𝓟 s) ↔ s.nonempty :=
ne_bot_iff.trans $ (not_congr principal_eq_bot_iff).trans nonempty_iff_ne_empty.symm
alias principal_ne_bot_iff ↔ _ _root_.set.nonempty.principal_ne_bot
lemma is_compl_principal (s : set α) : is_compl (𝓟 s) (𝓟 sᶜ) :=
is_compl.of_eq (by rw [inf_principal, inter_compl_self, principal_empty]) $
by rw [sup_principal, union_compl_self, principal_univ]
theorem mem_inf_principal' {f : filter α} {s t : set α} :
s ∈ f ⊓ 𝓟 t ↔ tᶜ ∪ s ∈ f :=
by simp only [← le_principal_iff, (is_compl_principal s).le_left_iff, disjoint_assoc, inf_principal,
← (is_compl_principal (t ∩ sᶜ)).le_right_iff, compl_inter, compl_compl]
theorem mem_inf_principal {f : filter α} {s t : set α} :
s ∈ f ⊓ 𝓟 t ↔ {x | x ∈ t → x ∈ s} ∈ f :=
by { simp only [mem_inf_principal', imp_iff_not_or], refl }
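/- A hypothetical usage sketch (not part of mathlib; assumes `univ_mem'` from earlier in
this file): any set belongs to the infimum of a filter with its own principal filter,
since the required implication `x ∈ s → x ∈ s` holds everywhere. -/
example (f : filter α) (s : set α) : s ∈ f ⊓ 𝓟 s :=
mem_inf_principal.2 (univ_mem' (λ x hx, hx))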
lemma supr_inf_principal (f : ι → filter α) (s : set α) :
(⨆ i, f i ⊓ 𝓟 s) = (⨆ i, f i) ⊓ 𝓟 s :=
by { ext, simp only [mem_supr, mem_inf_principal] }
lemma inf_principal_eq_bot {f : filter α} {s : set α} : f ⊓ 𝓟 s = ⊥ ↔ sᶜ ∈ f :=
by { rw [← empty_mem_iff_bot, mem_inf_principal], refl }
lemma mem_of_eq_bot {f : filter α} {s : set α} (h : f ⊓ 𝓟 sᶜ = ⊥) : s ∈ f :=
by rwa [inf_principal_eq_bot, compl_compl] at h
lemma diff_mem_inf_principal_compl {f : filter α} {s : set α} (hs : s ∈ f) (t : set α) :
s \ t ∈ f ⊓ 𝓟 tᶜ :=
inter_mem_inf hs $ mem_principal_self tᶜ
lemma principal_le_iff {s : set α} {f : filter α} :
𝓟 s ≤ f ↔ ∀ V ∈ f, s ⊆ V :=
begin
change (∀ V, V ∈ f → V ∈ _) ↔ _,
simp_rw mem_principal,
end
@[simp] lemma infi_principal_finset {ι : Type w} (s : finset ι) (f : ι → set α) :
(⨅ i ∈ s, 𝓟 (f i)) = 𝓟 (⋂ i ∈ s, f i) :=
begin
induction s using finset.induction_on with i s hi hs,
{ simp },
{ rw [finset.infi_insert, finset.set_bInter_insert, hs, inf_principal] },
end
@[simp] lemma infi_principal {ι : Type w} [finite ι] (f : ι → set α) :
(⨅ i, 𝓟 (f i)) = 𝓟 (⋂ i, f i) :=
by { casesI nonempty_fintype ι, simpa using infi_principal_finset finset.univ f }
lemma infi_principal_finite {ι : Type w} {s : set ι} (hs : s.finite) (f : ι → set α) :
(⨅ i ∈ s, 𝓟 (f i)) = 𝓟 (⋂ i ∈ s, f i) :=
begin
lift s to finset ι using hs,
exact_mod_cast infi_principal_finset s f
end
end lattice
@[mono] lemma join_mono {f₁ f₂ : filter (filter α)} (h : f₁ ≤ f₂) :
join f₁ ≤ join f₂ :=
λ s hs, h hs
/-! ### Eventually -/
/-- `f.eventually p` or `∀ᶠ x in f, p x` mean that `{x | p x} ∈ f`. E.g., `∀ᶠ x in at_top, p x`
means that `p` holds true for sufficiently large `x`. -/
protected def eventually (p : α → Prop) (f : filter α) : Prop := {x | p x} ∈ f
notation `∀ᶠ` binders ` in ` f `, ` r:(scoped p, filter.eventually p f) := r
lemma eventually_iff {f : filter α} {P : α → Prop} : (∀ᶠ x in f, P x) ↔ {x | P x} ∈ f :=
iff.rfl
@[simp] lemma eventually_mem_set {s : set α} {l : filter α} : (∀ᶠ x in l, x ∈ s) ↔ s ∈ l := iff.rfl
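/- A hypothetical usage sketch (not part of mathlib; assumes `mem_principal` from earlier
in this file): over a principal filter, `∀ᶠ` unfolds to a plain statement about all
elements of the set. -/
example (s : set α) (p : α → Prop) (h : ∀ x ∈ s, p x) : ∀ᶠ x in 𝓟 s, p x :=
eventually_iff.2 (mem_principal.2 h)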
protected lemma ext' {f₁ f₂ : filter α}
(h : ∀ p : α → Prop, (∀ᶠ x in f₁, p x) ↔ (∀ᶠ x in f₂, p x)) :
f₁ = f₂ :=
filter.ext h
lemma eventually.filter_mono {f₁ f₂ : filter α} (h : f₁ ≤ f₂) {p : α → Prop}
(hp : ∀ᶠ x in f₂, p x) :
∀ᶠ x in f₁, p x :=
h hp
lemma eventually_of_mem {f : filter α} {P : α → Prop} {U : set α} (hU : U ∈ f) (h : ∀ x ∈ U, P x) :
∀ᶠ x in f, P x :=
mem_of_superset hU h
protected lemma eventually.and {p q : α → Prop} {f : filter α} :
f.eventually p → f.eventually q → ∀ᶠ x in f, p x ∧ q x :=
inter_mem
@[simp]
lemma eventually_true (f : filter α) : ∀ᶠ x in f, true := univ_mem
lemma eventually_of_forall {p : α → Prop} {f : filter α} (hp : ∀ x, p x) :
∀ᶠ x in f, p x :=
univ_mem' hp
lemma forall_eventually_of_eventually_forall {f : filter α} {p : α → β → Prop}
(h : ∀ᶠ x in f, ∀ y, p x y) : ∀ y, ∀ᶠ x in f, p x y :=
by { intros y, filter_upwards [h], tauto, }
@[simp] lemma eventually_false_iff_eq_bot {f : filter α} :
(∀ᶠ x in f, false) ↔ f = ⊥ :=
empty_mem_iff_bot
@[simp] lemma eventually_const {f : filter α} [t : ne_bot f] {p : Prop} :
(∀ᶠ x in f, p) ↔ p :=
classical.by_cases (λ h : p, by simp [h]) (λ h, by simpa [h] using t.ne)
lemma eventually_iff_exists_mem {p : α → Prop} {f : filter α} :
(∀ᶠ x in f, p x) ↔ ∃ v ∈ f, ∀ y ∈ v, p y :=
exists_mem_subset_iff.symm
lemma eventually.exists_mem {p : α → Prop} {f : filter α} (hp : ∀ᶠ x in f, p x) :
∃ v ∈ f, ∀ y ∈ v, p y :=
eventually_iff_exists_mem.1 hp
lemma eventually.mp {p q : α → Prop} {f : filter α} (hp : ∀ᶠ x in f, p x)
(hq : ∀ᶠ x in f, p x → q x) :
∀ᶠ x in f, q x :=
mp_mem hp hq
lemma eventually.mono {p q : α → Prop} {f : filter α} (hp : ∀ᶠ x in f, p x)
(hq : ∀ x, p x → q x) :
∀ᶠ x in f, q x :=
hp.mp (eventually_of_forall hq)
@[simp] lemma eventually_and {p q : α → Prop} {f : filter α} :
(∀ᶠ x in f, p x ∧ q x) ↔ (∀ᶠ x in f, p x) ∧ (∀ᶠ x in f, q x) :=
inter_mem_iff
lemma eventually.congr {f : filter α} {p q : α → Prop} (h' : ∀ᶠ x in f, p x)
(h : ∀ᶠ x in f, p x ↔ q x) : ∀ᶠ x in f, q x :=
h'.mp (h.mono $ λ x hx, hx.mp)
lemma eventually_congr {f : filter α} {p q : α → Prop} (h : ∀ᶠ x in f, p x ↔ q x) :
(∀ᶠ x in f, p x) ↔ (∀ᶠ x in f, q x) :=
⟨λ hp, hp.congr h, λ hq, hq.congr $ by simpa only [iff.comm] using h⟩
@[simp] lemma eventually_all {ι : Type*} [finite ι] {l} {p : ι → α → Prop} :
(∀ᶠ x in l, ∀ i, p i x) ↔ ∀ i, ∀ᶠ x in l, p i x :=
by { casesI nonempty_fintype ι, simpa only [filter.eventually, set_of_forall] using Inter_mem }
@[simp] lemma eventually_all_finite {ι} {I : set ι} (hI : I.finite) {l} {p : ι → α → Prop} :
(∀ᶠ x in l, ∀ i ∈ I, p i x) ↔ (∀ i ∈ I, ∀ᶠ x in l, p i x) :=
by simpa only [filter.eventually, set_of_forall] using bInter_mem hI
alias eventually_all_finite ← _root_.set.finite.eventually_all
attribute [protected] set.finite.eventually_all
@[simp] lemma eventually_all_finset {ι} (I : finset ι) {l} {p : ι → α → Prop} :
(∀ᶠ x in l, ∀ i ∈ I, p i x) ↔ ∀ i ∈ I, ∀ᶠ x in l, p i x :=
I.finite_to_set.eventually_all
alias eventually_all_finset ← _root_.finset.eventually_all
attribute [protected] finset.eventually_all
@[simp] lemma eventually_or_distrib_left {f : filter α} {p : Prop} {q : α → Prop} :
(∀ᶠ x in f, p ∨ q x) ↔ (p ∨ ∀ᶠ x in f, q x) :=
classical.by_cases (λ h : p, by simp [h]) (λ h, by simp [h])
@[simp] lemma eventually_or_distrib_right {f : filter α} {p : α → Prop} {q : Prop} :
(∀ᶠ x in f, p x ∨ q) ↔ ((∀ᶠ x in f, p x) ∨ q) :=
by simp only [or_comm _ q, eventually_or_distrib_left]
@[simp] lemma eventually_imp_distrib_left {f : filter α} {p : Prop} {q : α → Prop} :
(∀ᶠ x in f, p → q x) ↔ (p → ∀ᶠ x in f, q x) :=
by simp only [imp_iff_not_or, eventually_or_distrib_left]
@[simp]
lemma eventually_bot {p : α → Prop} : ∀ᶠ x in ⊥, p x := ⟨⟩
@[simp]
lemma eventually_top {p : α → Prop} : (∀ᶠ x in ⊤, p x) ↔ (∀ x, p x) :=
iff.rfl
@[simp] lemma eventually_sup {p : α → Prop} {f g : filter α} :
(∀ᶠ x in f ⊔ g, p x) ↔ (∀ᶠ x in f, p x) ∧ (∀ᶠ x in g, p x) :=
iff.rfl
@[simp]
lemma eventually_Sup {p : α → Prop} {fs : set (filter α)} :
(∀ᶠ x in Sup fs, p x) ↔ (∀ f ∈ fs, ∀ᶠ x in f, p x) :=
iff.rfl
@[simp]
lemma eventually_supr {p : α → Prop} {fs : ι → filter α} :
(∀ᶠ x in (⨆ b, fs b), p x) ↔ (∀ b, ∀ᶠ x in fs b, p x) :=
mem_supr
@[simp]
lemma eventually_principal {a : set α} {p : α → Prop} :
(∀ᶠ x in 𝓟 a, p x) ↔ (∀ x ∈ a, p x) :=
iff.rfl
lemma eventually_inf {f g : filter α} {p : α → Prop} :
(∀ᶠ x in f ⊓ g, p x) ↔ ∃ (s ∈ f) (t ∈ g), ∀ x ∈ s ∩ t, p x :=
mem_inf_iff_superset
theorem eventually_inf_principal {f : filter α} {p : α → Prop} {s : set α} :
(∀ᶠ x in f ⊓ 𝓟 s, p x) ↔ ∀ᶠ x in f, x ∈ s → p x :=
mem_inf_principal
/-! ### Frequently -/
/-- `f.frequently p` or `∃ᶠ x in f, p x` mean that `{x | ¬p x} ∉ f`. E.g., `∃ᶠ x in at_top, p x`
means that there exist arbitrarily large `x` for which `p` holds true. -/
protected def frequently (p : α → Prop) (f : filter α) : Prop := ¬∀ᶠ x in f, ¬p x
notation `∃ᶠ` binders ` in ` f `, ` r:(scoped p, filter.frequently p f) := r
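/- A hypothetical usage sketch (not part of mathlib): along `⊤`, a single witness refutes
`∀ᶠ y in ⊤, ¬ p y`, so it establishes `∃ᶠ` directly from the definition. -/
example (p : α → Prop) (x : α) (hx : p x) : ∃ᶠ y in (⊤ : filter α), p y :=
λ h, eventually_top.1 h x hx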
lemma eventually.frequently {f : filter α} [ne_bot f] {p : α → Prop} (h : ∀ᶠ x in f, p x) :
∃ᶠ x in f, p x :=
compl_not_mem h
lemma frequently_of_forall {f : filter α} [ne_bot f] {p : α → Prop} (h : ∀ x, p x) :
∃ᶠ x in f, p x :=
eventually.frequently (eventually_of_forall h)
lemma frequently.mp {p q : α → Prop} {f : filter α} (h : ∃ᶠ x in f, p x)
(hpq : ∀ᶠ x in f, p x → q x) :
∃ᶠ x in f, q x :=
mt (λ hq, hq.mp $ hpq.mono $ λ x, mt) h
lemma frequently.filter_mono {p : α → Prop} {f g : filter α} (h : ∃ᶠ x in f, p x) (hle : f ≤ g) :
∃ᶠ x in g, p x :=
mt (λ h', h'.filter_mono hle) h
lemma frequently.mono {p q : α → Prop} {f : filter α} (h : ∃ᶠ x in f, p x)
(hpq : ∀ x, p x → q x) :
∃ᶠ x in f, q x :=
h.mp (eventually_of_forall hpq)
lemma frequently.and_eventually {p q : α → Prop} {f : filter α}
(hp : ∃ᶠ x in f, p x) (hq : ∀ᶠ x in f, q x) :
∃ᶠ x in f, p x ∧ q x :=
begin
refine mt (λ h, hq.mp $ h.mono _) hp,
exact λ x hpq hq hp, hpq ⟨hp, hq⟩
end
lemma eventually.and_frequently {p q : α → Prop} {f : filter α}
(hp : ∀ᶠ x in f, p x) (hq : ∃ᶠ x in f, q x) :
∃ᶠ x in f, p x ∧ q x :=
by simpa only [and.comm] using hq.and_eventually hp
lemma frequently.exists {p : α → Prop} {f : filter α} (hp : ∃ᶠ x in f, p x) : ∃ x, p x :=
begin
by_contradiction H,
replace H : ∀ᶠ x in f, ¬ p x, from eventually_of_forall (not_exists.1 H),
exact hp H
end
lemma eventually.exists {p : α → Prop} {f : filter α} [ne_bot f] (hp : ∀ᶠ x in f, p x) :
∃ x, p x :=
hp.frequently.exists
lemma frequently_iff_forall_eventually_exists_and {p : α → Prop} {f : filter α} :
(∃ᶠ x in f, p x) ↔ ∀ {q : α → Prop}, (∀ᶠ x in f, q x) → ∃ x, p x ∧ q x :=
⟨λ hp q hq, (hp.and_eventually hq).exists,
λ H hp, by simpa only [and_not_self, exists_false] using H hp⟩
lemma frequently_iff {f : filter α} {P : α → Prop} :
(∃ᶠ x in f, P x) ↔ ∀ {U}, U ∈ f → ∃ x ∈ U, P x :=
begin
simp only [frequently_iff_forall_eventually_exists_and, exists_prop, and_comm (P _)],
refl
end
@[simp] lemma not_eventually {p : α → Prop} {f : filter α} :
(¬ ∀ᶠ x in f, p x) ↔ (∃ᶠ x in f, ¬ p x) :=
by simp [filter.frequently]
@[simp] lemma not_frequently {p : α → Prop} {f : filter α} :
(¬ ∃ᶠ x in f, p x) ↔ (∀ᶠ x in f, ¬ p x) :=
by simp only [filter.frequently, not_not]
@[simp] lemma frequently_true_iff_ne_bot (f : filter α) : (∃ᶠ x in f, true) ↔ ne_bot f :=
by simp [filter.frequently, -not_eventually, eventually_false_iff_eq_bot, ne_bot_iff]
@[simp] lemma frequently_false (f : filter α) : ¬ ∃ᶠ x in f, false := by simp
@[simp] lemma frequently_const {f : filter α} [ne_bot f] {p : Prop} :
(∃ᶠ x in f, p) ↔ p :=
classical.by_cases (λ h : p, by simpa [h]) (λ h, by simp [h])
@[simp] lemma frequently_or_distrib {f : filter α} {p q : α → Prop} :
(∃ᶠ x in f, p x ∨ q x) ↔ (∃ᶠ x in f, p x) ∨ (∃ᶠ x in f, q x) :=
by simp only [filter.frequently, ← not_and_distrib, not_or_distrib, eventually_and]
lemma frequently_or_distrib_left {f : filter α} [ne_bot f] {p : Prop} {q : α → Prop} :
(∃ᶠ x in f, p ∨ q x) ↔ (p ∨ ∃ᶠ x in f, q x) :=
by simp
lemma frequently_or_distrib_right {f : filter α} [ne_bot f] {p : α → Prop} {q : Prop} :
(∃ᶠ x in f, p x ∨ q) ↔ (∃ᶠ x in f, p x) ∨ q :=
by simp
@[simp] lemma frequently_imp_distrib {f : filter α} {p q : α → Prop} :
(∃ᶠ x in f, p x → q x) ↔ ((∀ᶠ x in f, p x) → ∃ᶠ x in f, q x) :=
by simp [imp_iff_not_or, not_eventually, frequently_or_distrib]
lemma frequently_imp_distrib_left {f : filter α} [ne_bot f] {p : Prop} {q : α → Prop} :
(∃ᶠ x in f, p → q x) ↔ (p → ∃ᶠ x in f, q x) :=
by simp
lemma frequently_imp_distrib_right {f : filter α} [ne_bot f] {p : α → Prop} {q : Prop} :
(∃ᶠ x in f, p x → q) ↔ ((∀ᶠ x in f, p x) → q) :=
by simp
@[simp] lemma eventually_imp_distrib_right {f : filter α} {p : α → Prop} {q : Prop} :
(∀ᶠ x in f, p x → q) ↔ ((∃ᶠ x in f, p x) → q) :=
by simp only [imp_iff_not_or, eventually_or_distrib_right, not_frequently]
@[simp] lemma frequently_and_distrib_left {f : filter α} {p : Prop} {q : α → Prop} :
(∃ᶠ x in f, p ∧ q x) ↔ (p ∧ ∃ᶠ x in f, q x) :=
by simp only [filter.frequently, not_and, eventually_imp_distrib_left, not_imp]
@[simp] lemma frequently_and_distrib_right {f : filter α} {p : α → Prop} {q : Prop} :
(∃ᶠ x in f, p x ∧ q) ↔ ((∃ᶠ x in f, p x) ∧ q) :=
by simp only [and_comm _ q, frequently_and_distrib_left]
@[simp] lemma frequently_bot {p : α → Prop} : ¬ ∃ᶠ x in ⊥, p x := by simp
@[simp]
lemma frequently_top {p : α → Prop} : (∃ᶠ x in ⊤, p x) ↔ (∃ x, p x) :=
by simp [filter.frequently]
@[simp]
lemma frequently_principal {a : set α} {p : α → Prop} :
(∃ᶠ x in 𝓟 a, p x) ↔ (∃ x ∈ a, p x) :=
by simp [filter.frequently, not_forall]
lemma frequently_sup {p : α → Prop} {f g : filter α} :
(∃ᶠ x in f ⊔ g, p x) ↔ (∃ᶠ x in f, p x) ∨ (∃ᶠ x in g, p x) :=
by simp only [filter.frequently, eventually_sup, not_and_distrib]
@[simp]
lemma frequently_Sup {p : α → Prop} {fs : set (filter α)} :
(∃ᶠ x in Sup fs, p x) ↔ (∃ f ∈ fs, ∃ᶠ x in f, p x) :=
by simp [filter.frequently, -not_eventually, not_forall]
@[simp]
lemma frequently_supr {p : α → Prop} {fs : β → filter α} :
(∃ᶠ x in (⨆ b, fs b), p x) ↔ (∃ b, ∃ᶠ x in fs b, p x) :=
by simp [filter.frequently, -not_eventually, not_forall]
lemma eventually.choice {r : α → β → Prop} {l : filter α}
[l.ne_bot] (h : ∀ᶠ x in l, ∃ y, r x y) : ∃ f : α → β, ∀ᶠ x in l, r x (f x) :=
begin
classical,
use (λ x, if hx : ∃ y, r x y then classical.some hx
else classical.some (classical.some_spec h.exists)),
filter_upwards [h],
intros x hx,
rw dif_pos hx,
exact classical.some_spec hx
end
/-!
### Relation “eventually equal”
-/
/-- Two functions `f` and `g` are *eventually equal* along a filter `l` if the set of `x` such that
`f x = g x` belongs to `l`. -/
def eventually_eq (l : filter α) (f g : α → β) : Prop := ∀ᶠ x in l, f x = g x
notation f ` =ᶠ[`:50 l:50 `] `:0 g:50 := eventually_eq l f g
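/- A hypothetical usage sketch (not part of mathlib): functions that agree on a set are
eventually equal along its principal filter, via `eventually_principal` above. -/
example (s : set α) (f g : α → β) (h : ∀ x ∈ s, f x = g x) : f =ᶠ[𝓟 s] g :=
eventually_principal.2 h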
lemma eventually_eq.eventually {l : filter α} {f g : α → β} (h : f =ᶠ[l] g) :
∀ᶠ x in l, f x = g x :=
h
lemma eventually_eq.rw {l : filter α} {f g : α → β} (h : f =ᶠ[l] g) (p : α → β → Prop)
(hf : ∀ᶠ x in l, p x (f x)) :
∀ᶠ x in l, p x (g x) :=
hf.congr $ h.mono $ λ x hx, hx ▸ iff.rfl
lemma eventually_eq_set {s t : set α} {l : filter α} :
s =ᶠ[l] t ↔ ∀ᶠ x in l, x ∈ s ↔ x ∈ t :=
eventually_congr $ eventually_of_forall $ λ x, ⟨eq.to_iff, iff.to_eq⟩
alias eventually_eq_set ↔ eventually_eq.mem_iff eventually.set_eq
@[simp] lemma eventually_eq_univ {s : set α} {l : filter α} : s =ᶠ[l] univ ↔ s ∈ l :=
by simp [eventually_eq_set]
lemma eventually_eq.exists_mem {l : filter α} {f g : α → β} (h : f =ᶠ[l] g) :
∃ s ∈ l, eq_on f g s :=
h.exists_mem
lemma eventually_eq_of_mem {l : filter α} {f g : α → β} {s : set α}
(hs : s ∈ l) (h : eq_on f g s) : f =ᶠ[l] g :=
eventually_of_mem hs h
lemma eventually_eq_iff_exists_mem {l : filter α} {f g : α → β} :
(f =ᶠ[l] g) ↔ ∃ s ∈ l, eq_on f g s :=
eventually_iff_exists_mem
lemma eventually_eq.filter_mono {l l' : filter α} {f g : α → β} (h₁ : f =ᶠ[l] g) (h₂ : l' ≤ l) :
f =ᶠ[l'] g :=
h₂ h₁
@[refl] lemma eventually_eq.refl (l : filter α) (f : α → β) :
f =ᶠ[l] f :=
eventually_of_forall $ λ x, rfl
lemma eventually_eq.rfl {l : filter α} {f : α → β} : f =ᶠ[l] f := eventually_eq.refl l f
@[symm] lemma eventually_eq.symm {f g : α → β} {l : filter α} (H : f =ᶠ[l] g) :
g =ᶠ[l] f :=
H.mono $ λ _, eq.symm
@[trans] lemma eventually_eq.trans {l : filter α} {f g h : α → β}
(H₁ : f =ᶠ[l] g) (H₂ : g =ᶠ[l] h) : f =ᶠ[l] h :=
H₂.rw (λ x y, f x = y) H₁
lemma eventually_eq.prod_mk {l} {f f' : α → β} (hf : f =ᶠ[l] f') {g g' : α → γ} (hg : g =ᶠ[l] g') :
(λ x, (f x, g x)) =ᶠ[l] (λ x, (f' x, g' x)) :=
hf.mp $ hg.mono $ by { intros, simp only [*] }
lemma eventually_eq.fun_comp {f g : α → β} {l : filter α} (H : f =ᶠ[l] g) (h : β → γ) :
(h ∘ f) =ᶠ[l] (h ∘ g) :=
H.mono $ λ x hx, congr_arg h hx
lemma eventually_eq.comp₂ {δ} {f f' : α → β} {g g' : α → γ} {l} (Hf : f =ᶠ[l] f') (h : β → γ → δ)
(Hg : g =ᶠ[l] g') :
(λ x, h (f x) (g x)) =ᶠ[l] (λ x, h (f' x) (g' x)) :=
(Hf.prod_mk Hg).fun_comp (uncurry h)
@[to_additive]
lemma eventually_eq.mul [has_mul β] {f f' g g' : α → β} {l : filter α} (h : f =ᶠ[l] g)
(h' : f' =ᶠ[l] g') :
((λ x, f x * f' x) =ᶠ[l] (λ x, g x * g' x)) :=
h.comp₂ (*) h'
@[to_additive]
lemma eventually_eq.inv [has_inv β] {f g : α → β} {l : filter α} (h : f =ᶠ[l] g) :
((λ x, (f x)⁻¹) =ᶠ[l] (λ x, (g x)⁻¹)) :=
h.fun_comp has_inv.inv
@[to_additive]
lemma eventually_eq.div [has_div β] {f f' g g' : α → β} {l : filter α} (h : f =ᶠ[l] g)
(h' : f' =ᶠ[l] g') :
((λ x, f x / f' x) =ᶠ[l] (λ x, g x / g' x)) :=
h.comp₂ (/) h'
@[to_additive] lemma eventually_eq.const_smul {𝕜} [has_smul 𝕜 β] {l : filter α} {f g : α → β}
(h : f =ᶠ[l] g) (c : 𝕜) :
(λ x, c • f x) =ᶠ[l] (λ x, c • g x) :=
h.fun_comp (λ x, c • x)
@[to_additive] lemma eventually_eq.smul {𝕜} [has_smul 𝕜 β] {l : filter α} {f f' : α → 𝕜}
{g g' : α → β} (hf : f =ᶠ[l] f') (hg : g =ᶠ[l] g') :
(λ x, f x • g x) =ᶠ[l] λ x, f' x • g' x :=
hf.comp₂ (•) hg
lemma eventually_eq.sup [has_sup β] {l : filter α} {f f' g g' : α → β}
(hf : f =ᶠ[l] f') (hg : g =ᶠ[l] g') :
(λ x, f x ⊔ g x) =ᶠ[l] λ x, f' x ⊔ g' x :=
hf.comp₂ (⊔) hg
lemma eventually_eq.inf [has_inf β] {l : filter α} {f f' g g' : α → β}
(hf : f =ᶠ[l] f') (hg : g =ᶠ[l] g') :
(λ x, f x ⊓ g x) =ᶠ[l] λ x, f' x ⊓ g' x :=
hf.comp₂ (⊓) hg
lemma eventually_eq.preimage {l : filter α} {f g : α → β}
(h : f =ᶠ[l] g) (s : set β) : f ⁻¹' s =ᶠ[l] g ⁻¹' s :=
h.fun_comp s
lemma eventually_eq.inter {s t s' t' : set α} {l : filter α} (h : s =ᶠ[l] t) (h' : s' =ᶠ[l] t') :
(s ∩ s' : set α) =ᶠ[l] (t ∩ t' : set α) :=
h.comp₂ (∧) h'
lemma eventually_eq.union {s t s' t' : set α} {l : filter α} (h : s =ᶠ[l] t) (h' : s' =ᶠ[l] t') :
(s ∪ s' : set α) =ᶠ[l] (t ∪ t' : set α) :=
h.comp₂ (∨) h'
lemma eventually_eq.compl {s t : set α} {l : filter α} (h : s =ᶠ[l] t) :
(sᶜ : set α) =ᶠ[l] (tᶜ : set α) :=
h.fun_comp not
lemma eventually_eq.diff {s t s' t' : set α} {l : filter α} (h : s =ᶠ[l] t) (h' : s' =ᶠ[l] t') :
(s \ s' : set α) =ᶠ[l] (t \ t' : set α) :=
h.inter h'.compl
lemma eventually_eq_empty {s : set α} {l : filter α} :
s =ᶠ[l] (∅ : set α) ↔ ∀ᶠ x in l, x ∉ s :=
eventually_eq_set.trans $ by simp
lemma inter_eventually_eq_left {s t : set α} {l : filter α} :
(s ∩ t : set α) =ᶠ[l] s ↔ ∀ᶠ x in l, x ∈ s → x ∈ t :=
by simp only [eventually_eq_set, mem_inter_iff, and_iff_left_iff_imp]
lemma inter_eventually_eq_right {s t : set α} {l : filter α} :
(s ∩ t : set α) =ᶠ[l] t ↔ ∀ᶠ x in l, x ∈ t → x ∈ s :=
by rw [inter_comm, inter_eventually_eq_left]
@[simp] lemma eventually_eq_principal {s : set α} {f g : α → β} :
f =ᶠ[𝓟 s] g ↔ eq_on f g s :=
iff.rfl
lemma eventually_eq_inf_principal_iff {F : filter α} {s : set α} {f g : α → β} :
(f =ᶠ[F ⊓ 𝓟 s] g) ↔ ∀ᶠ x in F, x ∈ s → f x = g x :=
eventually_inf_principal
lemma eventually_eq.sub_eq [add_group β] {f g : α → β} {l : filter α} (h : f =ᶠ[l] g) :
f - g =ᶠ[l] 0 :=
by simpa using (eventually_eq.sub (eventually_eq.refl l f) h).symm
lemma eventually_eq_iff_sub [add_group β] {f g : α → β} {l : filter α} :
f =ᶠ[l] g ↔ f - g =ᶠ[l] 0 :=
⟨λ h, h.sub_eq, λ h, by simpa using h.add (eventually_eq.refl l g)⟩
section has_le
variables [has_le β] {l : filter α}
/-- A function `f` is eventually less than or equal to a function `g` at a filter `l`. -/
def eventually_le (l : filter α) (f g : α → β) : Prop := ∀ᶠ x in l, f x ≤ g x
notation f ` ≤ᶠ[`:50 l:50 `] `:0 g:50 := eventually_le l f g
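/- Illustrative use of the `≤ᶠ` notation (an added example, not part of the
original file): a pointwise bound holds eventually at every filter. -/
example {f g : α → β} (h : ∀ x, f x ≤ g x) : f ≤ᶠ[l] g :=
eventually_of_forall h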
lemma eventually_le.congr {f f' g g' : α → β} (H : f ≤ᶠ[l] g) (hf : f =ᶠ[l] f') (hg : g =ᶠ[l] g') :
f' ≤ᶠ[l] g' :=
H.mp $ hg.mp $ hf.mono $ λ x hf hg H, by rwa [hf, hg] at H
lemma eventually_le_congr {f f' g g' : α → β} (hf : f =ᶠ[l] f') (hg : g =ᶠ[l] g') :
f ≤ᶠ[l] g ↔ f' ≤ᶠ[l] g' :=
⟨λ H, H.congr hf hg, λ H, H.congr hf.symm hg.symm⟩
end has_le
section preorder
variables [preorder β] {l : filter α} {f g h : α → β}
lemma eventually_eq.le (h : f =ᶠ[l] g) : f ≤ᶠ[l] g := h.mono $ λ x, le_of_eq
@[refl] lemma eventually_le.refl (l : filter α) (f : α → β) :
f ≤ᶠ[l] f :=
eventually_eq.rfl.le
lemma eventually_le.rfl : f ≤ᶠ[l] f := eventually_le.refl l f
@[trans] lemma eventually_le.trans (H₁ : f ≤ᶠ[l] g) (H₂ : g ≤ᶠ[l] h) : f ≤ᶠ[l] h :=
H₂.mp $ H₁.mono $ λ x, le_trans
@[trans] lemma eventually_eq.trans_le (H₁ : f =ᶠ[l] g) (H₂ : g ≤ᶠ[l] h) : f ≤ᶠ[l] h :=
H₁.le.trans H₂
@[trans] lemma eventually_le.trans_eq (H₁ : f ≤ᶠ[l] g) (H₂ : g =ᶠ[l] h) : f ≤ᶠ[l] h :=
H₁.trans H₂.le
end preorder
lemma eventually_le.antisymm [partial_order β] {l : filter α} {f g : α → β}
(h₁ : f ≤ᶠ[l] g) (h₂ : g ≤ᶠ[l] f) :
f =ᶠ[l] g :=
h₂.mp $ h₁.mono $ λ x, le_antisymm
lemma eventually_le_antisymm_iff [partial_order β] {l : filter α} {f g : α → β} :
f =ᶠ[l] g ↔ f ≤ᶠ[l] g ∧ g ≤ᶠ[l] f :=
by simp only [eventually_eq, eventually_le, le_antisymm_iff, eventually_and]
lemma eventually_le.le_iff_eq [partial_order β] {l : filter α} {f g : α → β} (h : f ≤ᶠ[l] g) :
g ≤ᶠ[l] f ↔ g =ᶠ[l] f :=
⟨λ h', h'.antisymm h, eventually_eq.le⟩
lemma eventually.ne_of_lt [preorder β] {l : filter α} {f g : α → β}
(h : ∀ᶠ x in l, f x < g x) : ∀ᶠ x in l, f x ≠ g x :=
h.mono (λ x hx, hx.ne)
lemma eventually.ne_top_of_lt [partial_order β] [order_top β] {l : filter α} {f g : α → β}
(h : ∀ᶠ x in l, f x < g x) : ∀ᶠ x in l, f x ≠ ⊤ :=
h.mono (λ x hx, hx.ne_top)
lemma eventually.lt_top_of_ne [partial_order β] [order_top β] {l : filter α} {f : α → β}
(h : ∀ᶠ x in l, f x ≠ ⊤) : ∀ᶠ x in l, f x < ⊤ :=
h.mono (λ x hx, hx.lt_top)
lemma eventually.lt_top_iff_ne_top [partial_order β] [order_top β] {l : filter α} {f : α → β} :
(∀ᶠ x in l, f x < ⊤) ↔ ∀ᶠ x in l, f x ≠ ⊤ :=
⟨eventually.ne_of_lt, eventually.lt_top_of_ne⟩
@[mono] lemma eventually_le.inter {s t s' t' : set α} {l : filter α} (h : s ≤ᶠ[l] t)
(h' : s' ≤ᶠ[l] t') :
(s ∩ s' : set α) ≤ᶠ[l] (t ∩ t' : set α) :=
h'.mp $ h.mono $ λ x, and.imp
@[mono] lemma eventually_le.union {s t s' t' : set α} {l : filter α} (h : s ≤ᶠ[l] t)
(h' : s' ≤ᶠ[l] t') :
(s ∪ s' : set α) ≤ᶠ[l] (t ∪ t' : set α) :=
h'.mp $ h.mono $ λ x, or.imp
@[mono] lemma eventually_le.compl {s t : set α} {l : filter α} (h : s ≤ᶠ[l] t) :
(tᶜ : set α) ≤ᶠ[l] (sᶜ : set α) :=
h.mono $ λ x, mt
@[mono] lemma eventually_le.diff {s t s' t' : set α} {l : filter α} (h : s ≤ᶠ[l] t)
(h' : t' ≤ᶠ[l] s') :
(s \ s' : set α) ≤ᶠ[l] (t \ t' : set α) :=
h.inter h'.compl
lemma eventually_le.mul_le_mul
[mul_zero_class β] [partial_order β] [pos_mul_mono β] [mul_pos_mono β]
{l : filter α} {f₁ f₂ g₁ g₂ : α → β}
(hf : f₁ ≤ᶠ[l] f₂) (hg : g₁ ≤ᶠ[l] g₂) (hg₀ : 0 ≤ᶠ[l] g₁) (hf₀ : 0 ≤ᶠ[l] f₂) :
f₁ * g₁ ≤ᶠ[l] f₂ * g₂ :=
by filter_upwards [hf, hg, hg₀, hf₀] with x using mul_le_mul
@[to_additive eventually_le.add_le_add]
lemma eventually_le.mul_le_mul' [has_mul β] [preorder β]
[covariant_class β β (*) (≤)] [covariant_class β β (swap (*)) (≤)]
{l : filter α} {f₁ f₂ g₁ g₂ : α → β} (hf : f₁ ≤ᶠ[l] f₂) (hg : g₁ ≤ᶠ[l] g₂) :
f₁ * g₁ ≤ᶠ[l] f₂ * g₂ :=
by filter_upwards [hf, hg] with x hfx hgx using mul_le_mul' hfx hgx
lemma eventually_le.mul_nonneg [ordered_semiring β] {l : filter α} {f g : α → β}
(hf : 0 ≤ᶠ[l] f) (hg : 0 ≤ᶠ[l] g) :
0 ≤ᶠ[l] f * g :=
by filter_upwards [hf, hg] with x using mul_nonneg
lemma eventually_sub_nonneg [ordered_ring β] {l : filter α} {f g : α → β} :
0 ≤ᶠ[l] g - f ↔ f ≤ᶠ[l] g :=
eventually_congr $ eventually_of_forall $ λ x, sub_nonneg
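/- Illustrative use of `eventually_sub_nonneg` (an added example, not part of
the original file): an eventual bound yields an eventually nonnegative
difference. -/
example [ordered_ring β] {l : filter α} {f g : α → β} (h : f ≤ᶠ[l] g) :
0 ≤ᶠ[l] g - f :=
eventually_sub_nonneg.2 h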
lemma eventually_le.sup [semilattice_sup β] {l : filter α} {f₁ f₂ g₁ g₂ : α → β}
(hf : f₁ ≤ᶠ[l] f₂) (hg : g₁ ≤ᶠ[l] g₂) :
f₁ ⊔ g₁ ≤ᶠ[l] f₂ ⊔ g₂ :=
by filter_upwards [hf, hg] with x hfx hgx using sup_le_sup hfx hgx
lemma eventually_le.le_sup_of_le_left [semilattice_sup β] {l : filter α} {f g h : α → β}
(hf : h ≤ᶠ[l] f) :
h ≤ᶠ[l] f ⊔ g :=
by filter_upwards [hf] with x hfx using le_sup_of_le_left hfx
lemma eventually_le.le_sup_of_le_right [semilattice_sup β] {l : filter α} {f g h : α → β}
(hg : h ≤ᶠ[l] g) :
h ≤ᶠ[l] f ⊔ g :=
by filter_upwards [hg] with x hgx using le_sup_of_le_right hgx
lemma join_le {f : filter (filter α)} {l : filter α} (h : ∀ᶠ m in f, m ≤ l) : join f ≤ l :=
λ s hs, h.mono $ λ m hm, hm hs
/-! ### Push-forwards, pull-backs, and the monad structure -/
section map
/-- The forward map of a filter: `t ∈ map m f` if and only if `m ⁻¹' t ∈ f`;
see `filter.mem_map`. -/
def map (m : α → β) (f : filter α) : filter β :=
{ sets := preimage m ⁻¹' f.sets,
univ_sets := univ_mem,
sets_of_superset := λ s t hs st, mem_of_superset hs $ preimage_mono st,
inter_sets := λ s t hs ht, inter_mem hs ht }
@[simp] lemma map_principal {s : set α} {f : α → β} :
map f (𝓟 s) = 𝓟 (set.image f s) :=
filter.ext $ λ a, image_subset_iff.symm
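/- Illustrative computation with `map_principal` (an added example, not part of
the original file): mapping the principal filter of a singleton. -/
example (f : α → β) (a : α) : map f (𝓟 {a}) = 𝓟 {f a} :=
by rw [map_principal, image_singleton]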
variables {f : filter α} {m : α → β} {m' : β → γ} {s : set α} {t : set β}
@[simp] lemma eventually_map {P : β → Prop} :
(∀ᶠ b in map m f, P b) ↔ ∀ᶠ a in f, P (m a) :=
iff.rfl
@[simp] lemma frequently_map {P : β → Prop} :
(∃ᶠ b in map m f, P b) ↔ ∃ᶠ a in f, P (m a) :=
iff.rfl
@[simp] lemma mem_map : t ∈ map m f ↔ m ⁻¹' t ∈ f := iff.rfl
lemma mem_map' : t ∈ map m f ↔ {x | m x ∈ t} ∈ f := iff.rfl
lemma image_mem_map (hs : s ∈ f) : m '' s ∈ map m f :=
f.sets_of_superset hs $ subset_preimage_image m s
lemma image_mem_map_iff (hf : injective m) : m '' s ∈ map m f ↔ s ∈ f :=
⟨λ h, by rwa [← preimage_image_eq s hf], image_mem_map⟩
lemma range_mem_map : range m ∈ map m f :=
by { rw ←image_univ, exact image_mem_map univ_mem }
lemma mem_map_iff_exists_image : t ∈ map m f ↔ (∃ s ∈ f, m '' s ⊆ t) :=
⟨λ ht, ⟨m ⁻¹' t, ht, image_preimage_subset _ _⟩,
λ ⟨s, hs, ht⟩, mem_of_superset (image_mem_map hs) ht⟩
@[simp] lemma map_id : filter.map id f = f :=
filter_eq $ rfl
@[simp] lemma map_id' : filter.map (λ x, x) f = f := map_id
@[simp] lemma map_compose : filter.map m' ∘ filter.map m = filter.map (m' ∘ m) :=
funext $ λ _, filter_eq $ rfl
@[simp] lemma map_map : filter.map m' (filter.map m f) = filter.map (m' ∘ m) f :=
congr_fun (@@filter.map_compose m m') f
/-- If functions `m₁` and `m₂` are eventually equal at a filter `f`, then
they map this filter to the same filter. -/
lemma map_congr {m₁ m₂ : α → β} {f : filter α} (h : m₁ =ᶠ[f] m₂) :
map m₁ f = map m₂ f :=
filter.ext' $ λ p,
by { simp only [eventually_map], exact eventually_congr (h.mono $ λ x hx, hx ▸ iff.rfl) }
end map
section comap
/-- The inverse map of a filter. A set `s` belongs to `filter.comap m f` if either of the following
equivalent conditions hold.
1. There exists a set `t ∈ f` such that `m ⁻¹' t ⊆ s`. This is used as a definition.
2. The set `{y | ∀ x, m x = y → x ∈ s}` belongs to `f`, see `filter.mem_comap'`.
3. The set `(m '' sᶜ)ᶜ` belongs to `f`, see `filter.mem_comap_iff_compl` and
`filter.compl_mem_comap`. -/
def comap (m : α → β) (f : filter β) : filter α :=
{ sets := { s | ∃ t ∈ f, m ⁻¹' t ⊆ s },
univ_sets := ⟨univ, univ_mem, by simp only [subset_univ, preimage_univ]⟩,
sets_of_superset := λ a b ⟨a', ha', ma'a⟩ ab, ⟨a', ha', ma'a.trans ab⟩,
inter_sets := λ a b ⟨a', ha₁, ha₂⟩ ⟨b', hb₁, hb₂⟩,
⟨a' ∩ b', inter_mem ha₁ hb₁, inter_subset_inter ha₂ hb₂⟩ }
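/- Illustrative membership proof (an added example, not part of the original
file): the preimage of any member of `f` belongs to `comap m f`, using
condition 1 of the docstring directly. -/
example (m : α → β) (f : filter β) {t : set β} (ht : t ∈ f) :
m ⁻¹' t ∈ comap m f :=
⟨t, ht, subset.rfl⟩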
variables {f : α → β} {l : filter β} {p : α → Prop} {s : set α}
lemma mem_comap' : s ∈ comap f l ↔ {y | ∀ ⦃x⦄, f x = y → x ∈ s} ∈ l :=
⟨λ ⟨t, ht, hts⟩, mem_of_superset ht $ λ y hy x hx, hts $ mem_preimage.2 $ by rwa hx,
λ h, ⟨_, h, λ x hx, hx rfl⟩⟩
/-- RHS form is used, e.g., in the definition of `uniform_space`. -/
lemma mem_comap_prod_mk {x : α} {s : set β} {F : filter (α × β)} :
s ∈ comap (prod.mk x) F ↔ {p : α × β | p.fst = x → p.snd ∈ s} ∈ F :=
by simp_rw [mem_comap', prod.ext_iff, and_imp, @forall_swap β (_ = _), forall_eq, eq_comm]
@[simp] lemma eventually_comap : (∀ᶠ a in comap f l, p a) ↔ ∀ᶠ b in l, ∀ a, f a = b → p a :=
mem_comap'
@[simp] lemma frequently_comap : (∃ᶠ a in comap f l, p a) ↔ ∃ᶠ b in l, ∃ a, f a = b ∧ p a :=
by simp only [filter.frequently, eventually_comap, not_exists, not_and]
lemma mem_comap_iff_compl : s ∈ comap f l ↔ (f '' sᶜ)ᶜ ∈ l :=
by simp only [mem_comap', compl_def, mem_image, mem_set_of_eq, not_exists, not_and', not_not]
lemma compl_mem_comap : sᶜ ∈ comap f l ↔ (f '' s)ᶜ ∈ l :=
by rw [mem_comap_iff_compl, compl_compl]
end comap
/-- The monadic bind operation on filter is defined the usual way in terms of `map` and `join`.
Unfortunately, this `bind` does not result in the expected applicative. See `filter.seq` for the
applicative instance. -/
def bind (f : filter α) (m : α → filter β) : filter β := join (map m f)
/-- The applicative sequencing operation. This is not induced by the bind operation. -/
def seq (f : filter (α → β)) (g : filter α) : filter β :=
⟨{ s | ∃ u ∈ f, ∃ t ∈ g, (∀ m ∈ u, ∀ x ∈ t, (m : α → β) x ∈ s) },
⟨univ, univ_mem, univ, univ_mem,
by simp only [forall_prop_of_true, mem_univ, forall_true_iff]⟩,
λ s₀ s₁ ⟨u, hu, t, ht, h⟩ hst, ⟨u, hu, t, ht, λ x hx y hy, hst $ h _ hx _ hy⟩,
λ s₀ s₁ ⟨t₀, ht₀, t₁, ht₁, ht⟩ ⟨u₀, hu₀, u₁, hu₁, hu⟩,
⟨t₀ ∩ u₀, inter_mem ht₀ hu₀, t₁ ∩ u₁, inter_mem ht₁ hu₁,
λ x ⟨hx₀, hx₁⟩ y ⟨hy₀, hy₁⟩, ⟨ht _ hx₀ _ hy₀, hu _ hx₁ _ hy₁⟩⟩⟩
/-- `pure x` is the filter of all sets that contain `x`. It is equal to `𝓟 {x}` but
with this definition we have `s ∈ pure a` defeq `a ∈ s`. -/
instance : has_pure filter :=
⟨λ (α : Type u) x,
{ sets := {s | x ∈ s},
inter_sets := λ s t, and.intro,
sets_of_superset := λ s t hs hst, hst hs,
univ_sets := trivial }⟩
instance : has_bind filter := ⟨@filter.bind⟩
instance : has_seq filter := ⟨@filter.seq⟩
instance : functor filter := { map := @filter.map }
lemma pure_sets (a : α) : (pure a : filter α).sets = {s | a ∈ s} := rfl
@[simp] lemma mem_pure {a : α} {s : set α} : s ∈ (pure a : filter α) ↔ a ∈ s := iff.rfl
@[simp] lemma eventually_pure {a : α} {p : α → Prop} :
(∀ᶠ x in pure a, p x) ↔ p a :=
iff.rfl
@[simp] lemma principal_singleton (a : α) : 𝓟 {a} = pure a :=
filter.ext $ λ s, by simp only [mem_pure, mem_principal, singleton_subset_iff]
@[simp] lemma map_pure (f : α → β) (a : α) : map f (pure a) = pure (f a) :=
rfl
@[simp] lemma join_pure (f : filter α) : join (pure f) = f := filter.ext $ λ s, iff.rfl
@[simp] lemma pure_bind (a : α) (m : α → filter β) :
bind (pure a) m = m a :=
by simp only [has_bind.bind, bind, map_pure, join_pure]
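/- Illustrative monad computation (an added example, not part of the original
file): `bind` after `pure` just applies the function. -/
example (a : α) (f : α → β) :
(pure a : filter α) >>= (λ x, pure (f x)) = pure (f a) :=
pure_bind a _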
section
-- this section needs to be before applicative, otherwise the wrong instance will be chosen
/-- The monad structure on filters. -/
protected def monad : monad filter := { map := @filter.map }
local attribute [instance] filter.monad
protected lemma is_lawful_monad : is_lawful_monad filter :=
{ id_map := λ α f, filter_eq rfl,
pure_bind := λ α β, pure_bind,
bind_assoc := λ α β γ f m₁ m₂, filter_eq rfl,
bind_pure_comp_eq_map := λ α β f x, filter.ext $ λ s,
by simp only [has_bind.bind, bind, functor.map, mem_map', mem_join, mem_set_of_eq,
comp, mem_pure] }
end
instance : applicative filter := { map := @filter.map, seq := @filter.seq }
instance : alternative filter :=
{ failure := λ α, ⊥,
orelse := λ α x y, x ⊔ y }
@[simp] lemma map_def {α β} (m : α → β) (f : filter α) : m <$> f = map m f := rfl
@[simp] lemma bind_def {α β} (f : filter α) (m : α → filter β) : f >>= m = bind f m := rfl
/-! #### `map` and `comap` equations -/
section map
variables {f f₁ f₂ : filter α} {g g₁ g₂ : filter β} {m : α → β} {m' : β → γ} {s : set α} {t : set β}
@[simp] theorem mem_comap : s ∈ comap m g ↔ ∃ t ∈ g, m ⁻¹' t ⊆ s := iff.rfl
theorem preimage_mem_comap (ht : t ∈ g) : m ⁻¹' t ∈ comap m g :=
⟨t, ht, subset.rfl⟩
lemma eventually.comap {p : β → Prop} (hf : ∀ᶠ b in g, p b) (f : α → β) :
∀ᶠ a in comap f g, p (f a) :=
preimage_mem_comap hf
lemma comap_id : comap id f = f :=
le_antisymm (λ s, preimage_mem_comap) (λ s ⟨t, ht, hst⟩, mem_of_superset ht hst)
lemma comap_id' : comap (λ x, x) f = f := comap_id
lemma comap_const_of_not_mem {x : β} (ht : t ∈ g) (hx : x ∉ t) :
comap (λ y : α, x) g = ⊥ :=
empty_mem_iff_bot.1 $ mem_comap'.2 $ mem_of_superset ht $ λ x' hx' y h, hx $ h.symm ▸ hx'
lemma comap_const_of_mem {x : β} (h : ∀ t ∈ g, x ∈ t) : comap (λ y : α, x) g = ⊤ :=
top_unique $ λ s hs, univ_mem' $ λ y, h _ (mem_comap'.1 hs) rfl
lemma map_const [ne_bot f] {c : β} : f.map (λ x, c) = pure c :=
by { ext s, by_cases h : c ∈ s; simp [h] }
lemma comap_comap {m : γ → β} {n : β → α} : comap m (comap n f) = comap (n ∘ m) f :=
filter.coext $ λ s, by simp only [compl_mem_comap, image_image]
section comm
/-!
The variables in the following lemmas are used as in this diagram:
```
    φ
  α → β
θ ↓   ↓ ψ
  γ → δ
    ρ
```
-/
variables {φ : α → β} {θ : α → γ} {ψ : β → δ} {ρ : γ → δ} (H : ψ ∘ φ = ρ ∘ θ)
include H
lemma map_comm (F : filter α) : map ψ (map φ F) = map ρ (map θ F) :=
by rw [filter.map_map, H, ← filter.map_map]
lemma comap_comm (G : filter δ) : comap φ (comap ψ G) = comap θ (comap ρ G) :=
by rw [filter.comap_comap, H, ← filter.comap_comap]
end comm
lemma _root_.function.semiconj.filter_map {f : α → β} {ga : α → α} {gb : β → β}
(h : function.semiconj f ga gb) : function.semiconj (map f) (map ga) (map gb) :=
map_comm h.comp_eq
lemma _root_.function.commute.filter_map {f g : α → α} (h : function.commute f g) :
function.commute (map f) (map g) :=
h.filter_map
lemma _root_.function.semiconj.filter_comap {f : α → β} {ga : α → α} {gb : β → β}
(h : function.semiconj f ga gb) : function.semiconj (comap f) (comap gb) (comap ga) :=
comap_comm h.comp_eq.symm
lemma _root_.function.commute.filter_comap {f g : α → α} (h : function.commute f g) :
function.commute (comap f) (comap g) :=
h.filter_comap
@[simp] theorem comap_principal {t : set β} : comap m (𝓟 t) = 𝓟 (m ⁻¹' t) :=
filter.ext $ λ s,
⟨λ ⟨u, (hu : t ⊆ u), (b : preimage m u ⊆ s)⟩, (preimage_mono hu).trans b,
λ h, ⟨t, subset.refl t, h⟩⟩
@[simp] theorem comap_pure {b : β} : comap m (pure b) = 𝓟 (m ⁻¹' {b}) :=
by rw [← principal_singleton, comap_principal]
lemma map_le_iff_le_comap : map m f ≤ g ↔ f ≤ comap m g :=
⟨λ h s ⟨t, ht, hts⟩, mem_of_superset (h ht) hts, λ h s ht, h ⟨_, ht, subset.rfl⟩⟩
lemma gc_map_comap (m : α → β) : galois_connection (map m) (comap m) :=
λ f g, map_le_iff_le_comap
@[mono] lemma map_mono : monotone (map m) := (gc_map_comap m).monotone_l
@[mono] lemma comap_mono : monotone (comap m) := (gc_map_comap m).monotone_u
@[simp] lemma map_bot : map m ⊥ = ⊥ := (gc_map_comap m).l_bot
@[simp] lemma map_sup : map m (f₁ ⊔ f₂) = map m f₁ ⊔ map m f₂ := (gc_map_comap m).l_sup
@[simp] lemma map_supr {f : ι → filter α} : map m (⨆ i, f i) = (⨆ i, map m (f i)) :=
(gc_map_comap m).l_supr
@[simp] lemma map_top (f : α → β) : map f ⊤ = 𝓟 (range f) :=
by rw [← principal_univ, map_principal, image_univ]
@[simp] lemma comap_top : comap m ⊤ = ⊤ := (gc_map_comap m).u_top
@[simp] lemma comap_inf : comap m (g₁ ⊓ g₂) = comap m g₁ ⊓ comap m g₂ := (gc_map_comap m).u_inf
@[simp] lemma comap_infi {f : ι → filter β} : comap m (⨅ i, f i) = (⨅ i, comap m (f i)) :=
(gc_map_comap m).u_infi
lemma le_comap_top (f : α → β) (l : filter α) : l ≤ comap f ⊤ :=
by { rw [comap_top], exact le_top }
lemma map_comap_le : map m (comap m g) ≤ g := (gc_map_comap m).l_u_le _
lemma le_comap_map : f ≤ comap m (map m f) := (gc_map_comap m).le_u_l _
@[simp] lemma comap_bot : comap m ⊥ = ⊥ :=
bot_unique $ λ s _, ⟨∅, mem_bot, by simp only [empty_subset, preimage_empty]⟩
lemma ne_bot_of_comap (h : (comap m g).ne_bot) : g.ne_bot :=
begin
rw ne_bot_iff at *,
contrapose! h,
rw h,
exact comap_bot
end
lemma comap_inf_principal_range : comap m (g ⊓ 𝓟 (range m)) = comap m g := by simp
lemma disjoint_comap (h : disjoint g₁ g₂) : disjoint (comap m g₁) (comap m g₂) :=
by simp only [disjoint_iff, ← comap_inf, h.eq_bot, comap_bot]
lemma comap_supr {ι} {f : ι → filter β} {m : α → β} :
comap m (supr f) = (⨆ i, comap m (f i)) :=
le_antisymm
(λ s hs,
have ∀ i, ∃ t, t ∈ f i ∧ m ⁻¹' t ⊆ s,
by simpa only [mem_comap, exists_prop, mem_supr] using mem_supr.1 hs,
let ⟨t, ht⟩ := classical.axiom_of_choice this in
⟨⋃ i, t i, mem_supr.2 $ λ i, (f i).sets_of_superset (ht i).1 (subset_Union _ _),
begin
rw [preimage_Union, Union_subset_iff],
exact λ i, (ht i).2
end⟩)
(supr_le $ λ i, comap_mono $ le_supr _ _)
lemma comap_Sup {s : set (filter β)} {m : α → β} : comap m (Sup s) = (⨆ f ∈ s, comap m f) :=
by simp only [Sup_eq_supr, comap_supr, eq_self_iff_true]
lemma comap_sup : comap m (g₁ ⊔ g₂) = comap m g₁ ⊔ comap m g₂ :=
by rw [sup_eq_supr, comap_supr, supr_bool_eq, bool.cond_tt, bool.cond_ff]
lemma map_comap (f : filter β) (m : α → β) : (f.comap m).map m = f ⊓ 𝓟 (range m) :=
begin
refine le_antisymm (le_inf map_comap_le $ le_principal_iff.2 range_mem_map) _,
rintro t' ⟨t, ht, sub⟩,
refine mem_inf_principal.2 (mem_of_superset ht _),
rintro _ hxt ⟨x, rfl⟩,
exact sub hxt
end
lemma map_comap_of_mem {f : filter β} {m : α → β} (hf : range m ∈ f) : (f.comap m).map m = f :=
by rw [map_comap, inf_eq_left.2 (le_principal_iff.2 hf)]
instance can_lift (c) (p) [can_lift α β c p] :
can_lift (filter α) (filter β) (map c) (λ f, ∀ᶠ x : α in f, p x) :=
{ prf := λ f hf, ⟨comap c f, map_comap_of_mem $ hf.mono can_lift.prf⟩ }
lemma comap_le_comap_iff {f g : filter β} {m : α → β} (hf : range m ∈ f) :
comap m f ≤ comap m g ↔ f ≤ g :=
⟨λ h, map_comap_of_mem hf ▸ (map_mono h).trans map_comap_le, λ h, comap_mono h⟩
theorem map_comap_of_surjective {f : α → β} (hf : surjective f) (l : filter β) :
map f (comap f l) = l :=
map_comap_of_mem $ by simp only [hf.range_eq, univ_mem]
lemma _root_.function.surjective.filter_map_top {f : α → β} (hf : surjective f) : map f ⊤ = ⊤ :=
(congr_arg _ comap_top).symm.trans $ map_comap_of_surjective hf ⊤
lemma subtype_coe_map_comap (s : set α) (f : filter α) :
map (coe : s → α) (comap (coe : s → α) f) = f ⊓ 𝓟 s :=
by rw [map_comap, subtype.range_coe]
lemma image_mem_of_mem_comap {f : filter α} {c : β → α} (h : range c ∈ f) {W : set β}
(W_in : W ∈ comap c f) : c '' W ∈ f :=
begin
rw ← map_comap_of_mem h,
exact image_mem_map W_in
end
lemma image_coe_mem_of_mem_comap {f : filter α} {U : set α} (h : U ∈ f) {W : set U}
(W_in : W ∈ comap (coe : U → α) f) : coe '' W ∈ f :=
image_mem_of_mem_comap (by simp [h]) W_in
lemma comap_map {f : filter α} {m : α → β} (h : injective m) :
comap m (map m f) = f :=
le_antisymm
(λ s hs, mem_of_superset (preimage_mem_comap $ image_mem_map hs) $
by simp only [preimage_image_eq s h])
le_comap_map
lemma mem_comap_iff {f : filter β} {m : α → β} (inj : injective m)
(large : set.range m ∈ f) {S : set α} : S ∈ comap m f ↔ m '' S ∈ f :=
by rw [← image_mem_map_iff inj, map_comap_of_mem large]
lemma map_le_map_iff_of_inj_on {l₁ l₂ : filter α} {f : α → β} {s : set α}
(h₁ : s ∈ l₁) (h₂ : s ∈ l₂) (hinj : inj_on f s) :
map f l₁ ≤ map f l₂ ↔ l₁ ≤ l₂ :=
⟨λ h t ht, mp_mem h₁ $ mem_of_superset (h $ image_mem_map (inter_mem h₂ ht)) $
λ y ⟨x, ⟨hxs, hxt⟩, hxy⟩ hys, hinj hxs hys hxy ▸ hxt, λ h, map_mono h⟩
lemma map_le_map_iff {f g : filter α} {m : α → β} (hm : injective m) : map m f ≤ map m g ↔ f ≤ g :=
by rw [map_le_iff_le_comap, comap_map hm]
lemma map_eq_map_iff_of_inj_on {f g : filter α} {m : α → β} {s : set α}
(hsf : s ∈ f) (hsg : s ∈ g) (hm : inj_on m s) :
map m f = map m g ↔ f = g :=
by simp only [le_antisymm_iff, map_le_map_iff_of_inj_on hsf hsg hm,
map_le_map_iff_of_inj_on hsg hsf hm]
lemma map_inj {f g : filter α} {m : α → β} (hm : injective m) :
map m f = map m g ↔ f = g :=
map_eq_map_iff_of_inj_on univ_mem univ_mem (hm.inj_on _)
lemma map_injective {m : α → β} (hm : injective m) : injective (map m) :=
λ f g, (map_inj hm).1
lemma comap_ne_bot_iff {f : filter β} {m : α → β} : ne_bot (comap m f) ↔ ∀ t ∈ f, ∃ a, m a ∈ t :=
begin
simp only [← forall_mem_nonempty_iff_ne_bot, mem_comap, forall_exists_index],
exact ⟨λ h t t_in, h (m ⁻¹' t) t t_in subset.rfl, λ h s t ht hst, (h t ht).imp hst⟩,
end
lemma comap_ne_bot {f : filter β} {m : α → β} (hm : ∀ t ∈ f, ∃ a, m a ∈ t) : ne_bot (comap m f) :=
comap_ne_bot_iff.mpr hm
lemma comap_ne_bot_iff_frequently {f : filter β} {m : α → β} :
ne_bot (comap m f) ↔ ∃ᶠ y in f, y ∈ range m :=
by simp [comap_ne_bot_iff, frequently_iff, ← exists_and_distrib_left, and.comm]
lemma comap_ne_bot_iff_compl_range {f : filter β} {m : α → β} :
ne_bot (comap m f) ↔ (range m)ᶜ ∉ f :=
comap_ne_bot_iff_frequently
lemma comap_eq_bot_iff_compl_range {f : filter β} {m : α → β} :
comap m f = ⊥ ↔ (range m)ᶜ ∈ f :=
not_iff_not.mp $ ne_bot_iff.symm.trans comap_ne_bot_iff_compl_range
lemma comap_surjective_eq_bot {f : filter β} {m : α → β} (hm : surjective m) :
comap m f = ⊥ ↔ f = ⊥ :=
by rw [comap_eq_bot_iff_compl_range, hm.range_eq, compl_univ, empty_mem_iff_bot]
lemma disjoint_comap_iff (h : surjective m) : disjoint (comap m g₁) (comap m g₂) ↔ disjoint g₁ g₂ :=
by rw [disjoint_iff, disjoint_iff, ← comap_inf, comap_surjective_eq_bot h]
lemma ne_bot.comap_of_range_mem {f : filter β} {m : α → β}
(hf : ne_bot f) (hm : range m ∈ f) : ne_bot (comap m f) :=
comap_ne_bot_iff_frequently.2 $ eventually.frequently hm
@[simp] lemma comap_fst_ne_bot_iff {f : filter α} :
(f.comap (prod.fst : α × β → α)).ne_bot ↔ f.ne_bot ∧ nonempty β :=
begin
casesI is_empty_or_nonempty β,
{ rw [filter_eq_bot_of_is_empty (f.comap _), ← not_iff_not]; [simp *, apply_instance] },
{ simp [comap_ne_bot_iff_frequently, h] }
end
@[instance] lemma comap_fst_ne_bot [nonempty β] {f : filter α} [ne_bot f] :
(f.comap (prod.fst : α × β → α)).ne_bot :=
comap_fst_ne_bot_iff.2 ⟨‹_›, ‹_›⟩
@[simp] lemma comap_snd_ne_bot_iff {f : filter β} :
(f.comap (prod.snd : α × β → β)).ne_bot ↔ nonempty α ∧ f.ne_bot :=
begin
casesI is_empty_or_nonempty α with hα hα,
{ rw [filter_eq_bot_of_is_empty (f.comap _), ← not_iff_not];
[simp, apply_instance] },
{ simp [comap_ne_bot_iff_frequently, hα] }
end
@[instance] lemma comap_snd_ne_bot [nonempty α] {f : filter β} [ne_bot f] :
(f.comap (prod.snd : α × β → β)).ne_bot :=
comap_snd_ne_bot_iff.2 ⟨‹_›, ‹_›⟩
lemma comap_eval_ne_bot_iff' {ι : Type*} {α : ι → Type*} {i : ι} {f : filter (α i)} :
(comap (eval i) f).ne_bot ↔ (∀ j, nonempty (α j)) ∧ ne_bot f :=
begin
casesI is_empty_or_nonempty (Π j, α j) with H H,
{ rw [filter_eq_bot_of_is_empty (f.comap _), ← not_iff_not]; [skip, assumption],
simp [← classical.nonempty_pi] },
{ haveI : ∀ j, nonempty (α j), from classical.nonempty_pi.1 H,
simp [comap_ne_bot_iff_frequently, *] }
end
@[simp] lemma comap_eval_ne_bot_iff {ι : Type*} {α : ι → Type*} [∀ j, nonempty (α j)]
{i : ι} {f : filter (α i)} :
(comap (eval i) f).ne_bot ↔ ne_bot f :=
by simp [comap_eval_ne_bot_iff', *]
@[instance] lemma comap_eval_ne_bot {ι : Type*} {α : ι → Type*} [∀ j, nonempty (α j)]
(i : ι) (f : filter (α i)) [ne_bot f] :
(comap (eval i) f).ne_bot :=
comap_eval_ne_bot_iff.2 ‹_›
lemma comap_inf_principal_ne_bot_of_image_mem {f : filter β} {m : α → β}
(hf : ne_bot f) {s : set α} (hs : m '' s ∈ f) :
ne_bot (comap m f ⊓ 𝓟 s) :=
begin
refine ⟨compl_compl s ▸ mt mem_of_eq_bot _⟩,
rintro ⟨t, ht, hts⟩,
rcases hf.nonempty_of_mem (inter_mem hs ht) with ⟨_, ⟨x, hxs, rfl⟩, hxt⟩,
exact absurd hxs (hts hxt)
end
lemma comap_coe_ne_bot_of_le_principal {s : set γ} {l : filter γ} [h : ne_bot l] (h' : l ≤ 𝓟 s) :
ne_bot (comap (coe : s → γ) l) :=
h.comap_of_range_mem $ (@subtype.range_coe γ s).symm ▸ h' (mem_principal_self s)
lemma ne_bot.comap_of_surj {f : filter β} {m : α → β}
(hf : ne_bot f) (hm : surjective m) :
ne_bot (comap m f) :=
hf.comap_of_range_mem $ univ_mem' hm
lemma ne_bot.comap_of_image_mem {f : filter β} {m : α → β} (hf : ne_bot f)
{s : set α} (hs : m '' s ∈ f) :
ne_bot (comap m f) :=
hf.comap_of_range_mem $ mem_of_superset hs (image_subset_range _ _)
@[simp] lemma map_eq_bot_iff : map m f = ⊥ ↔ f = ⊥ :=
⟨by { rw [←empty_mem_iff_bot, ←empty_mem_iff_bot], exact id },
λ h, by simp only [h, map_bot]⟩
lemma map_ne_bot_iff (f : α → β) {F : filter α} : ne_bot (map f F) ↔ ne_bot F :=
by simp only [ne_bot_iff, ne, map_eq_bot_iff]
lemma ne_bot.map (hf : ne_bot f) (m : α → β) : ne_bot (map m f) :=
(map_ne_bot_iff m).2 hf
lemma ne_bot.of_map : ne_bot (f.map m) → ne_bot f := (map_ne_bot_iff m).1
instance map_ne_bot [hf : ne_bot f] : ne_bot (f.map m) := hf.map m
lemma sInter_comap_sets (f : α → β) (F : filter β) :
⋂₀ (comap f F).sets = ⋂ U ∈ F, f ⁻¹' U :=
begin
ext x,
suffices : (∀ (A : set α) (B : set β), B ∈ F → f ⁻¹' B ⊆ A → x ∈ A) ↔
∀ (B : set β), B ∈ F → f x ∈ B,
by simp only [mem_sInter, mem_Inter, filter.mem_sets, mem_comap, this, and_imp,
exists_prop, mem_preimage, exists_imp_distrib],
split,
{ intros h U U_in,
simpa only [subset.refl, forall_prop_of_true, mem_preimage] using h (f ⁻¹' U) U U_in },
{ intros h V U U_in f_U_V,
exact f_U_V (h U U_in) },
end
end map
-- this is a generic rule for monotone functions:
lemma map_infi_le {f : ι → filter α} {m : α → β} :
map m (infi f) ≤ (⨅ i, map m (f i)) :=
le_infi $ λ i, map_mono $ infi_le _ _
lemma map_infi_eq {f : ι → filter α} {m : α → β} (hf : directed (≥) f) [nonempty ι] :
map m (infi f) = (⨅ i, map m (f i)) :=
map_infi_le.antisymm
(λ s (hs : preimage m s ∈ infi f),
let ⟨i, hi⟩ := (mem_infi_of_directed hf _).1 hs in
have (⨅ i, map m (f i)) ≤ 𝓟 s, from
infi_le_of_le i $ by { simp only [le_principal_iff, mem_map], assumption },
filter.le_principal_iff.1 this)
lemma map_binfi_eq {ι : Type w} {f : ι → filter α} {m : α → β} {p : ι → Prop}
(h : directed_on (f ⁻¹'o (≥)) {x | p x}) (ne : ∃ i, p i) :
map m (⨅ i (h : p i), f i) = (⨅ i (h : p i), map m (f i)) :=
begin
haveI := nonempty_subtype.2 ne,
simp only [infi_subtype'],
exact map_infi_eq h.directed_coe
end
lemma map_inf_le {f g : filter α} {m : α → β} : map m (f ⊓ g) ≤ map m f ⊓ map m g :=
(@map_mono _ _ m).map_inf_le f g
lemma map_inf {f g : filter α} {m : α → β} (h : injective m) :
map m (f ⊓ g) = map m f ⊓ map m g :=
begin
refine map_inf_le.antisymm _,
rintro t ⟨s₁, hs₁, s₂, hs₂, ht : m ⁻¹' t = s₁ ∩ s₂⟩,
refine mem_inf_of_inter (image_mem_map hs₁) (image_mem_map hs₂) _,
rw [←image_inter h, image_subset_iff, ht]
end
lemma map_inf' {f g : filter α} {m : α → β} {t : set α} (htf : t ∈ f) (htg : t ∈ g)
(h : inj_on m t) : map m (f ⊓ g) = map m f ⊓ map m g :=
begin
lift f to filter t using htf, lift g to filter t using htg,
replace h : injective (m ∘ coe) := h.injective,
simp only [map_map, ← map_inf subtype.coe_injective, map_inf h],
end
lemma disjoint_map {m : α → β} (hm : injective m) {f₁ f₂ : filter α} :
disjoint (map m f₁) (map m f₂) ↔ disjoint f₁ f₂ :=
by simp only [disjoint_iff, ← map_inf hm, map_eq_bot_iff]
lemma map_equiv_symm (e : α ≃ β) (f : filter β) :
map e.symm f = comap e f :=
map_injective e.injective $ by rw [map_map, e.self_comp_symm, map_id,
map_comap_of_surjective e.surjective]
lemma map_eq_comap_of_inverse {f : filter α} {m : α → β} {n : β → α}
(h₁ : m ∘ n = id) (h₂ : n ∘ m = id) : map m f = comap n f :=
map_equiv_symm ⟨n, m, congr_fun h₁, congr_fun h₂⟩ f
lemma comap_equiv_symm (e : α ≃ β) (f : filter α) :
comap e.symm f = map e f :=
(map_eq_comap_of_inverse e.self_comp_symm e.symm_comp_self).symm
lemma map_swap_eq_comap_swap {f : filter (α × β)} : prod.swap <$> f = comap prod.swap f :=
map_eq_comap_of_inverse prod.swap_swap_eq prod.swap_swap_eq
/-- A useful lemma when dealing with uniformities. -/
lemma map_swap4_eq_comap {f : filter ((α × β) × (γ × δ))} :
map (λ p : (α × β) × (γ × δ), ((p.1.1, p.2.1), (p.1.2, p.2.2))) f =
comap (λ p : (α × γ) × (β × δ), ((p.1.1, p.2.1), (p.1.2, p.2.2))) f :=
map_eq_comap_of_inverse (funext $ λ ⟨⟨_, _⟩, ⟨_, _⟩⟩, rfl) (funext $ λ ⟨⟨_, _⟩, ⟨_, _⟩⟩, rfl)
lemma le_map {f : filter α} {m : α → β} {g : filter β} (h : ∀ s ∈ f, m '' s ∈ g) :
g ≤ f.map m :=
λ s hs, mem_of_superset (h _ hs) $ image_preimage_subset _ _
lemma le_map_iff {f : filter α} {m : α → β} {g : filter β} : g ≤ f.map m ↔ ∀ s ∈ f, m '' s ∈ g :=
⟨λ h s hs, h (image_mem_map hs), le_map⟩
protected lemma push_pull (f : α → β) (F : filter α) (G : filter β) :
map f (F ⊓ comap f G) = map f F ⊓ G :=
begin
apply le_antisymm,
{ calc map f (F ⊓ comap f G) ≤ map f F ⊓ (map f $ comap f G) : map_inf_le
... ≤ map f F ⊓ G : inf_le_inf_left (map f F) map_comap_le },
{ rintro U ⟨V, V_in, W, ⟨Z, Z_in, hZ⟩, h⟩,
apply mem_inf_of_inter (image_mem_map V_in) Z_in,
calc
f '' V ∩ Z = f '' (V ∩ f ⁻¹' Z) : by rw image_inter_preimage
... ⊆ f '' (V ∩ W) : image_subset _ (inter_subset_inter_right _ ‹_›)
... = f '' (f ⁻¹' U) : by rw h
... ⊆ U : image_preimage_subset f U }
end
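/- Illustrative special case of `push_pull` (an added example, not part of the
original file): pulling back the top filter changes nothing. -/
example (f : α → β) (F : filter α) : map f (F ⊓ comap f ⊤) = map f F :=
by rw [filter.push_pull, inf_top_eq]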
protected lemma push_pull' (f : α → β) (F : filter α) (G : filter β) :
map f (comap f G ⊓ F) = G ⊓ map f F :=
by simp only [filter.push_pull, inf_comm]
lemma principal_eq_map_coe_top (s : set α) : 𝓟 s = map (coe : s → α) ⊤ :=
by simp
lemma inf_principal_eq_bot_iff_comap {F : filter α} {s : set α} :
F ⊓ 𝓟 s = ⊥ ↔ comap (coe : s → α) F = ⊥ :=
by rw [principal_eq_map_coe_top s, ← filter.push_pull', inf_top_eq, map_eq_bot_iff]
section applicative
lemma singleton_mem_pure {a : α} : {a} ∈ (pure a : filter α) :=
mem_singleton a
lemma pure_injective : injective (pure : α → filter α) :=
λ a b hab, (filter.ext_iff.1 hab {x | a = x}).1 rfl
instance pure_ne_bot {α : Type u} {a : α} : ne_bot (pure a) :=
⟨mt empty_mem_iff_bot.2 $ not_mem_empty a⟩
@[simp] lemma le_pure_iff {f : filter α} {a : α} : f ≤ pure a ↔ {a} ∈ f :=
by rw [← principal_singleton, le_principal_iff]
lemma mem_seq_def {f : filter (α → β)} {g : filter α} {s : set β} :
s ∈ f.seq g ↔ (∃ u ∈ f, ∃ t ∈ g, ∀ x ∈ u, ∀ y ∈ t, (x : α → β) y ∈ s) :=
iff.rfl
lemma mem_seq_iff {f : filter (α → β)} {g : filter α} {s : set β} :
s ∈ f.seq g ↔ (∃ u ∈ f, ∃ t ∈ g, set.seq u t ⊆ s) :=
by simp only [mem_seq_def, seq_subset, exists_prop, iff_self]
lemma mem_map_seq_iff {f : filter α} {g : filter β} {m : α → β → γ} {s : set γ} :
s ∈ (f.map m).seq g ↔ (∃ t u, t ∈ g ∧ u ∈ f ∧ ∀ x ∈ u, ∀ y ∈ t, m x y ∈ s) :=
iff.intro
(λ ⟨t, ht, s, hs, hts⟩, ⟨s, m ⁻¹' t, hs, ht, λ a, hts _⟩)
(λ ⟨t, s, ht, hs, hts⟩, ⟨m '' s, image_mem_map hs, t, ht, λ f ⟨a, has, eq⟩, eq ▸ hts _ has⟩)
lemma seq_mem_seq {f : filter (α → β)} {g : filter α} {s : set (α → β)} {t : set α}
(hs : s ∈ f) (ht : t ∈ g) : s.seq t ∈ f.seq g :=
⟨s, hs, t, ht, λ f hf a ha, ⟨f, hf, a, ha, rfl⟩⟩
lemma le_seq {f : filter (α → β)} {g : filter α} {h : filter β}
(hh : ∀ t ∈ f, ∀ u ∈ g, set.seq t u ∈ h) : h ≤ seq f g :=
λ s ⟨t, ht, u, hu, hs⟩, mem_of_superset (hh _ ht _ hu) $
λ b ⟨m, hm, a, ha, eq⟩, eq ▸ hs _ hm _ ha
@[mono] lemma seq_mono {f₁ f₂ : filter (α → β)} {g₁ g₂ : filter α}
(hf : f₁ ≤ f₂) (hg : g₁ ≤ g₂) : f₁.seq g₁ ≤ f₂.seq g₂ :=
le_seq $ λ s hs t ht, seq_mem_seq (hf hs) (hg ht)
@[simp] lemma pure_seq_eq_map (g : α → β) (f : filter α) : seq (pure g) f = f.map g :=
begin
refine le_antisymm (le_map $ λ s hs, _) (le_seq $ λ s hs t ht, _),
{ rw ← singleton_seq, apply seq_mem_seq _ hs,
exact singleton_mem_pure },
{ refine sets_of_superset (map g f) (image_mem_map ht) _,
rintro b ⟨a, ha, rfl⟩, exact ⟨g, hs, a, ha, rfl⟩ }
end
@[simp] lemma seq_pure (f : filter (α → β)) (a : α) : seq f (pure a) = map (λ g : α → β, g a) f :=
begin
refine le_antisymm (le_map $ λ s hs, _) (le_seq $ λ s hs t ht, _),
{ rw ← seq_singleton,
exact seq_mem_seq hs singleton_mem_pure },
{ refine sets_of_superset (map (λg:α→β, g a) f) (image_mem_map hs) _,
rintro b ⟨g, hg, rfl⟩, exact ⟨g, hg, a, ht, rfl⟩ }
end
@[simp] lemma seq_assoc (x : filter α) (g : filter (α → β)) (h : filter (β → γ)) :
seq h (seq g x) = seq (seq (map (∘) h) g) x :=
begin
refine le_antisymm (le_seq $ λ s hs t ht, _) (le_seq $ λ s hs t ht, _),
{ rcases mem_seq_iff.1 hs with ⟨u, hu, v, hv, hs⟩,
rcases mem_map_iff_exists_image.1 hu with ⟨w, hw, hu⟩,
refine mem_of_superset _
(set.seq_mono ((set.seq_mono hu subset.rfl).trans hs) subset.rfl),
rw ← set.seq_seq,
exact seq_mem_seq hw (seq_mem_seq hv ht) },
{ rcases mem_seq_iff.1 ht with ⟨u, hu, v, hv, ht⟩,
refine mem_of_superset _ (set.seq_mono subset.rfl ht),
rw set.seq_seq,
exact seq_mem_seq (seq_mem_seq (image_mem_map hs) hu) hv }
end
lemma prod_map_seq_comm (f : filter α) (g : filter β) :
(map prod.mk f).seq g = seq (map (λ b a, (a, b)) g) f :=
begin
refine le_antisymm (le_seq $ λ s hs t ht, _) (le_seq $ λ s hs t ht, _),
{ rcases mem_map_iff_exists_image.1 hs with ⟨u, hu, hs⟩,
refine mem_of_superset _ (set.seq_mono hs subset.rfl),
rw ← set.prod_image_seq_comm,
exact seq_mem_seq (image_mem_map ht) hu },
{ rcases mem_map_iff_exists_image.1 hs with ⟨u, hu, hs⟩,
refine mem_of_superset _ (set.seq_mono hs subset.rfl),
rw set.prod_image_seq_comm,
exact seq_mem_seq (image_mem_map ht) hu }
end
instance : is_lawful_functor (filter : Type u → Type u) :=
{ id_map := λ α f, map_id,
comp_map := λ α β γ f g a, map_map.symm }
instance : is_lawful_applicative (filter : Type u → Type u) :=
{ pure_seq_eq_map := λ α β, pure_seq_eq_map,
map_pure := λ α β, map_pure,
seq_pure := λ α β, seq_pure,
seq_assoc := λ α β γ, seq_assoc }
instance : is_comm_applicative (filter : Type u → Type u) :=
⟨λ α β f g, prod_map_seq_comm f g⟩
lemma {l} seq_eq_filter_seq {α β : Type l} (f : filter (α → β)) (g : filter α) :
f <*> g = seq f g := rfl
end applicative
/-! #### `bind` equations -/
section bind
@[simp] lemma eventually_bind {f : filter α} {m : α → filter β} {p : β → Prop} :
(∀ᶠ y in bind f m, p y) ↔ ∀ᶠ x in f, ∀ᶠ y in m x, p y :=
iff.rfl
@[simp] lemma eventually_eq_bind {f : filter α} {m : α → filter β} {g₁ g₂ : β → γ} :
(g₁ =ᶠ[bind f m] g₂) ↔ ∀ᶠ x in f, g₁ =ᶠ[m x] g₂ :=
iff.rfl
@[simp] lemma eventually_le_bind [has_le γ] {f : filter α} {m : α → filter β} {g₁ g₂ : β → γ} :
(g₁ ≤ᶠ[bind f m] g₂) ↔ ∀ᶠ x in f, g₁ ≤ᶠ[m x] g₂ :=
iff.rfl
lemma mem_bind' {s : set β} {f : filter α} {m : α → filter β} :
s ∈ bind f m ↔ {a | s ∈ m a} ∈ f :=
iff.rfl
@[simp] lemma mem_bind {s : set β} {f : filter α} {m : α → filter β} :
s ∈ bind f m ↔ ∃ t ∈ f, ∀ x ∈ t, s ∈ m x :=
calc s ∈ bind f m ↔ {a | s ∈ m a} ∈ f : iff.rfl
... ↔ (∃ t ∈ f, t ⊆ {a | s ∈ m a}) : exists_mem_subset_iff.symm
... ↔ (∃ t ∈ f, ∀ x ∈ t, s ∈ m x) : iff.rfl
lemma bind_le {f : filter α} {g : α → filter β} {l : filter β} (h : ∀ᶠ x in f, g x ≤ l) :
f.bind g ≤ l :=
join_le $ eventually_map.2 h
@[mono] lemma bind_mono {f₁ f₂ : filter α} {g₁ g₂ : α → filter β} (hf : f₁ ≤ f₂)
(hg : g₁ ≤ᶠ[f₁] g₂) :
bind f₁ g₁ ≤ bind f₂ g₂ :=
begin
refine le_trans (λ s hs, _) (join_mono $ map_mono hf),
simp only [mem_join, mem_bind', mem_map] at hs ⊢,
filter_upwards [hg, hs] with _ hx hs using hx hs,
end
lemma bind_inf_principal {f : filter α} {g : α → filter β} {s : set β} :
f.bind (λ x, g x ⊓ 𝓟 s) = (f.bind g) ⊓ 𝓟 s :=
filter.ext $ λ s, by simp only [mem_bind, mem_inf_principal]
lemma sup_bind {f g : filter α} {h : α → filter β} :
bind (f ⊔ g) h = bind f h ⊔ bind g h :=
by simp only [bind, sup_join, map_sup, eq_self_iff_true]
lemma principal_bind {s : set α} {f : α → filter β} :
(bind (𝓟 s) f) = (⨆ x ∈ s, f x) :=
show join (map f (𝓟 s)) = (⨆ x ∈ s, f x),
by simp only [Sup_image, join_principal_eq_Sup, map_principal, eq_self_iff_true]
end bind
section list_traverse
/- This is a separate section in order to open `list`, but mostly because of universe
equality requirements in `traverse` -/
open list
lemma sequence_mono :
∀ (as bs : list (filter α)), forall₂ (≤) as bs → sequence as ≤ sequence bs
| [] [] forall₂.nil := le_rfl
| (a :: as) (b :: bs) (forall₂.cons h hs) := seq_mono (map_mono h) (sequence_mono as bs hs)
variables {α' β' γ' : Type u} {f : β' → filter α'} {s : γ' → set α'}
lemma mem_traverse :
∀ (fs : list β') (us : list γ'),
forall₂ (λ b c, s c ∈ f b) fs us → traverse s us ∈ traverse f fs
| [] [] forall₂.nil := mem_pure.2 $ mem_singleton _
| (f :: fs) (u :: us) (forall₂.cons h hs) := seq_mem_seq (image_mem_map h) (mem_traverse fs us hs)
lemma mem_traverse_iff (fs : list β') (t : set (list α')) :
t ∈ traverse f fs ↔
(∃ us : list (set α'), forall₂ (λ b (s : set α'), s ∈ f b) fs us ∧ sequence us ⊆ t) :=
begin
split,
{ induction fs generalizing t,
case nil { simp only [sequence, mem_pure, imp_self, forall₂_nil_left_iff,
exists_eq_left, set.pure_def, singleton_subset_iff, traverse_nil] },
case cons : b fs ih t
{ intro ht,
rcases mem_seq_iff.1 ht with ⟨u, hu, v, hv, ht⟩,
rcases mem_map_iff_exists_image.1 hu with ⟨w, hw, hwu⟩,
rcases ih v hv with ⟨us, hus, hu⟩,
exact ⟨w :: us, forall₂.cons hw hus, (set.seq_mono hwu hu).trans ht⟩ } },
{ rintro ⟨us, hus, hs⟩,
exact mem_of_superset (mem_traverse _ _ hus) hs }
end
end list_traverse
/-! ### Limits -/
/-- `tendsto` is the generic "limit of a function" predicate.
`tendsto f l₁ l₂` asserts that for every `l₂` neighborhood `a`,
the `f`-preimage of `a` is an `l₁` neighborhood. -/
@[pp_nodot] def tendsto (f : α → β) (l₁ : filter α) (l₂ : filter β) := l₁.map f ≤ l₂
lemma tendsto_def {f : α → β} {l₁ : filter α} {l₂ : filter β} :
tendsto f l₁ l₂ ↔ ∀ s ∈ l₂, f ⁻¹' s ∈ l₁ := iff.rfl
lemma tendsto_iff_eventually {f : α → β} {l₁ : filter α} {l₂ : filter β} :
tendsto f l₁ l₂ ↔ ∀ ⦃p : β → Prop⦄, (∀ᶠ y in l₂, p y) → ∀ᶠ x in l₁, p (f x) :=
iff.rfl
lemma tendsto.eventually {f : α → β} {l₁ : filter α} {l₂ : filter β} {p : β → Prop}
(hf : tendsto f l₁ l₂) (h : ∀ᶠ y in l₂, p y) :
∀ᶠ x in l₁, p (f x) :=
hf h
lemma tendsto.frequently {f : α → β} {l₁ : filter α} {l₂ : filter β} {p : β → Prop}
(hf : tendsto f l₁ l₂) (h : ∃ᶠ x in l₁, p (f x)) :
∃ᶠ y in l₂, p y :=
mt hf.eventually h
lemma tendsto.frequently_map {l₁ : filter α} {l₂ : filter β} {p : α → Prop} {q : β → Prop}
(f : α → β) (c : filter.tendsto f l₁ l₂) (w : ∀ x, p x → q (f x)) (h : ∃ᶠ x in l₁, p x) :
∃ᶠ y in l₂, q y :=
c.frequently (h.mono w)
@[simp] lemma tendsto_bot {f : α → β} {l : filter β} : tendsto f ⊥ l := by simp [tendsto]
@[simp] lemma tendsto_top {f : α → β} {l : filter α} : tendsto f l ⊤ := le_top
lemma le_map_of_right_inverse {mab : α → β} {mba : β → α} {f : filter α} {g : filter β}
(h₁ : mab ∘ mba =ᶠ[g] id) (h₂ : tendsto mba g f) :
g ≤ map mab f :=
by { rw [← @map_id _ g, ← map_congr h₁, ← map_map], exact map_mono h₂ }
lemma tendsto_of_is_empty [is_empty α] {f : α → β} {la : filter α} {lb : filter β} :
tendsto f la lb :=
by simp only [filter_eq_bot_of_is_empty la, tendsto_bot]
lemma eventually_eq_of_left_inv_of_right_inv {f : α → β} {g₁ g₂ : β → α} {fa : filter α}
{fb : filter β} (hleft : ∀ᶠ x in fa, g₁ (f x) = x) (hright : ∀ᶠ y in fb, f (g₂ y) = y)
(htendsto : tendsto g₂ fb fa) :
g₁ =ᶠ[fb] g₂ :=
(htendsto.eventually hleft).mp $ hright.mono $ λ y hr hl, (congr_arg g₁ hr.symm).trans hl
lemma tendsto_iff_comap {f : α → β} {l₁ : filter α} {l₂ : filter β} :
tendsto f l₁ l₂ ↔ l₁ ≤ l₂.comap f :=
map_le_iff_le_comap
alias tendsto_iff_comap ↔ tendsto.le_comap _
protected lemma tendsto.disjoint {f : α → β} {la₁ la₂ : filter α} {lb₁ lb₂ : filter β}
(h₁ : tendsto f la₁ lb₁) (hd : disjoint lb₁ lb₂) (h₂ : tendsto f la₂ lb₂) :
disjoint la₁ la₂ :=
(disjoint_comap hd).mono h₁.le_comap h₂.le_comap
lemma tendsto_congr' {f₁ f₂ : α → β} {l₁ : filter α} {l₂ : filter β} (hl : f₁ =ᶠ[l₁] f₂) :
tendsto f₁ l₁ l₂ ↔ tendsto f₂ l₁ l₂ :=
by rw [tendsto, tendsto, map_congr hl]
lemma tendsto.congr' {f₁ f₂ : α → β} {l₁ : filter α} {l₂ : filter β}
(hl : f₁ =ᶠ[l₁] f₂) (h : tendsto f₁ l₁ l₂) : tendsto f₂ l₁ l₂ :=
(tendsto_congr' hl).1 h
theorem tendsto_congr {f₁ f₂ : α → β} {l₁ : filter α} {l₂ : filter β}
(h : ∀ x, f₁ x = f₂ x) : tendsto f₁ l₁ l₂ ↔ tendsto f₂ l₁ l₂ :=
tendsto_congr' (univ_mem' h)
theorem tendsto.congr {f₁ f₂ : α → β} {l₁ : filter α} {l₂ : filter β}
(h : ∀ x, f₁ x = f₂ x) : tendsto f₁ l₁ l₂ → tendsto f₂ l₁ l₂ :=
(tendsto_congr h).1
lemma tendsto_id' {x y : filter α} : tendsto id x y ↔ x ≤ y := iff.rfl
lemma tendsto_id {x : filter α} : tendsto id x x := le_refl x
lemma tendsto.comp {f : α → β} {g : β → γ} {x : filter α} {y : filter β} {z : filter γ}
(hg : tendsto g y z) (hf : tendsto f x y) : tendsto (g ∘ f) x z :=
λ s hs, hf (hg hs)
lemma tendsto.mono_left {f : α → β} {x y : filter α} {z : filter β}
(hx : tendsto f x z) (h : y ≤ x) : tendsto f y z :=
(map_mono h).trans hx
lemma tendsto.mono_right {f : α → β} {x : filter α} {y z : filter β}
(hy : tendsto f x y) (hz : y ≤ z) : tendsto f x z :=
le_trans hy hz
lemma tendsto.ne_bot {f : α → β} {x : filter α} {y : filter β} (h : tendsto f x y) [hx : ne_bot x] :
ne_bot y :=
(hx.map _).mono h
lemma tendsto_map {f : α → β} {x : filter α} : tendsto f x (map f x) := le_refl (map f x)
lemma tendsto_map' {f : β → γ} {g : α → β} {x : filter α} {y : filter γ}
(h : tendsto (f ∘ g) x y) : tendsto f (map g x) y :=
by rwa [tendsto, map_map]
@[simp] lemma tendsto_map'_iff {f : β → γ} {g : α → β} {x : filter α} {y : filter γ} :
tendsto f (map g x) y ↔ tendsto (f ∘ g) x y :=
by { rw [tendsto, map_map], refl }
lemma tendsto_comap {f : α → β} {x : filter β} : tendsto f (comap f x) x :=
map_comap_le
@[simp] lemma tendsto_comap_iff {f : α → β} {g : β → γ} {a : filter α} {c : filter γ} :
tendsto f a (c.comap g) ↔ tendsto (g ∘ f) a c :=
⟨λ h, tendsto_comap.comp h, λ h, map_le_iff_le_comap.mp $ by rwa [map_map]⟩
lemma tendsto_comap'_iff {m : α → β} {f : filter α} {g : filter β} {i : γ → α}
(h : range i ∈ f) : tendsto (m ∘ i) (comap i f) g ↔ tendsto m f g :=
by { rw [tendsto, ← map_compose], simp only [(∘), map_comap_of_mem h, tendsto] }
lemma tendsto.of_tendsto_comp {f : α → β} {g : β → γ} {a : filter α} {b : filter β} {c : filter γ}
(hfg : tendsto (g ∘ f) a c) (hg : comap g c ≤ b) :
tendsto f a b :=
begin
rw tendsto_iff_comap at hfg ⊢,
calc a ≤ comap (g ∘ f) c : hfg
... ≤ comap f b : by simpa [comap_comap] using comap_mono hg
end
lemma comap_eq_of_inverse {f : filter α} {g : filter β} {φ : α → β} (ψ : β → α)
(eq : ψ ∘ φ = id) (hφ : tendsto φ f g) (hψ : tendsto ψ g f) : comap φ g = f :=
begin
refine ((comap_mono $ map_le_iff_le_comap.1 hψ).trans _).antisymm (map_le_iff_le_comap.1 hφ),
rw [comap_comap, eq, comap_id],
exact le_rfl
end
lemma map_eq_of_inverse {f : filter α} {g : filter β} {φ : α → β} (ψ : β → α)
(eq : φ ∘ ψ = id) (hφ : tendsto φ f g) (hψ : tendsto ψ g f) : map φ f = g :=
begin
refine le_antisymm hφ (le_trans _ (map_mono hψ)),
rw [map_map, eq, map_id],
exact le_rfl
end
lemma tendsto_inf {f : α → β} {x : filter α} {y₁ y₂ : filter β} :
tendsto f x (y₁ ⊓ y₂) ↔ tendsto f x y₁ ∧ tendsto f x y₂ :=
by simp only [tendsto, le_inf_iff, iff_self]
lemma tendsto_inf_left {f : α → β} {x₁ x₂ : filter α} {y : filter β}
(h : tendsto f x₁ y) : tendsto f (x₁ ⊓ x₂) y :=
le_trans (map_mono inf_le_left) h
lemma tendsto_inf_right {f : α → β} {x₁ x₂ : filter α} {y : filter β}
(h : tendsto f x₂ y) : tendsto f (x₁ ⊓ x₂) y :=
le_trans (map_mono inf_le_right) h
lemma tendsto.inf {f : α → β} {x₁ x₂ : filter α} {y₁ y₂ : filter β}
(h₁ : tendsto f x₁ y₁) (h₂ : tendsto f x₂ y₂) : tendsto f (x₁ ⊓ x₂) (y₁ ⊓ y₂) :=
tendsto_inf.2 ⟨tendsto_inf_left h₁, tendsto_inf_right h₂⟩
@[simp] lemma tendsto_infi {f : α → β} {x : filter α} {y : ι → filter β} :
tendsto f x (⨅ i, y i) ↔ ∀ i, tendsto f x (y i) :=
by simp only [tendsto, iff_self, le_infi_iff]
lemma tendsto_infi' {f : α → β} {x : ι → filter α} {y : filter β} (i : ι) (hi : tendsto f (x i) y) :
tendsto f (⨅ i, x i) y :=
hi.mono_left $ infi_le _ _
theorem tendsto_infi_infi {f : α → β} {x : ι → filter α} {y : ι → filter β}
(h : ∀ i, tendsto f (x i) (y i)) : tendsto f (infi x) (infi y) :=
tendsto_infi.2 $ λ i, tendsto_infi' i (h i)
@[simp] lemma tendsto_sup {f : α → β} {x₁ x₂ : filter α} {y : filter β} :
tendsto f (x₁ ⊔ x₂) y ↔ tendsto f x₁ y ∧ tendsto f x₂ y :=
by simp only [tendsto, map_sup, sup_le_iff]
lemma tendsto.sup {f : α → β} {x₁ x₂ : filter α} {y : filter β} :
tendsto f x₁ y → tendsto f x₂ y → tendsto f (x₁ ⊔ x₂) y :=
λ h₁ h₂, tendsto_sup.mpr ⟨ h₁, h₂ ⟩
@[simp] lemma tendsto_supr {f : α → β} {x : ι → filter α} {y : filter β} :
tendsto f (⨆ i, x i) y ↔ ∀ i, tendsto f (x i) y :=
by simp only [tendsto, map_supr, supr_le_iff]
theorem tendsto_supr_supr {f : α → β} {x : ι → filter α} {y : ι → filter β}
(h : ∀ i, tendsto f (x i) (y i)) : tendsto f (supr x) (supr y) :=
tendsto_supr.2 $ λ i, (h i).mono_right $ le_supr _ _
@[simp] lemma tendsto_principal {f : α → β} {l : filter α} {s : set β} :
tendsto f l (𝓟 s) ↔ ∀ᶠ a in l, f a ∈ s :=
by simp only [tendsto, le_principal_iff, mem_map', filter.eventually]
@[simp] lemma tendsto_principal_principal {f : α → β} {s : set α} {t : set β} :
tendsto f (𝓟 s) (𝓟 t) ↔ ∀ a ∈ s, f a ∈ t :=
by simp only [tendsto_principal, eventually_principal]
@[simp] lemma tendsto_pure {f : α → β} {a : filter α} {b : β} :
tendsto f a (pure b) ↔ ∀ᶠ x in a, f x = b :=
by simp only [tendsto, le_pure_iff, mem_map', mem_singleton_iff, filter.eventually]
lemma tendsto_pure_pure (f : α → β) (a : α) :
tendsto f (pure a) (pure (f a)) :=
tendsto_pure.2 rfl
lemma tendsto_const_pure {a : filter α} {b : β} : tendsto (λ x, b) a (pure b) :=
tendsto_pure.2 $ univ_mem' $ λ _, rfl
lemma pure_le_iff {a : α} {l : filter α} : pure a ≤ l ↔ ∀ s ∈ l, a ∈ s :=
iff.rfl
lemma tendsto_pure_left {f : α → β} {a : α} {l : filter β} :
tendsto f (pure a) l ↔ ∀ s ∈ l, f a ∈ s :=
iff.rfl
@[simp] lemma map_inf_principal_preimage {f : α → β} {s : set β} {l : filter α} :
map f (l ⊓ 𝓟 (f ⁻¹' s)) = map f l ⊓ 𝓟 s :=
filter.ext $ λ t, by simp only [mem_map', mem_inf_principal, mem_set_of_eq, mem_preimage]
/-- If two filters are disjoint, then a function cannot tend to both of them along a non-trivial
filter. -/
lemma tendsto.not_tendsto {f : α → β} {a : filter α} {b₁ b₂ : filter β} (hf : tendsto f a b₁)
[ne_bot a] (hb : disjoint b₁ b₂) :
¬ tendsto f a b₂ :=
λ hf', (tendsto_inf.2 ⟨hf, hf'⟩).ne_bot.ne hb.eq_bot
protected lemma tendsto.if {l₁ : filter α} {l₂ : filter β} {f g : α → β} {p : α → Prop}
[∀ x, decidable (p x)] (h₀ : tendsto f (l₁ ⊓ 𝓟 {x | p x}) l₂)
(h₁ : tendsto g (l₁ ⊓ 𝓟 { x | ¬ p x }) l₂) :
tendsto (λ x, if p x then f x else g x) l₁ l₂ :=
begin
simp only [tendsto_def, mem_inf_principal] at *,
intros s hs,
filter_upwards [h₀ s hs, h₁ s hs],
simp only [mem_preimage],
intros x hp₀ hp₁,
split_ifs,
exacts [hp₀ h, hp₁ h],
end
protected lemma tendsto.if' {α β : Type*} {l₁ : filter α} {l₂ : filter β} {f g : α → β}
{p : α → Prop} [decidable_pred p] (hf : tendsto f l₁ l₂) (hg : tendsto g l₁ l₂) :
tendsto (λ a, if p a then f a else g a) l₁ l₂ :=
begin
replace hf : tendsto f (l₁ ⊓ 𝓟 {x | p x}) l₂ := tendsto_inf_left hf,
replace hg : tendsto g (l₁ ⊓ 𝓟 {x | ¬ p x}) l₂ := tendsto_inf_left hg,
exact hf.if hg,
end
protected lemma tendsto.piecewise {l₁ : filter α} {l₂ : filter β} {f g : α → β}
{s : set α} [∀ x, decidable (x ∈ s)]
(h₀ : tendsto f (l₁ ⊓ 𝓟 s) l₂) (h₁ : tendsto g (l₁ ⊓ 𝓟 sᶜ) l₂) :
tendsto (piecewise s f g) l₁ l₂ :=
h₀.if h₁
end filter
open_locale filter
lemma set.eq_on.eventually_eq {α β} {s : set α} {f g : α → β} (h : eq_on f g s) :
f =ᶠ[𝓟 s] g :=
h
lemma set.eq_on.eventually_eq_of_mem {α β} {s : set α} {l : filter α} {f g : α → β}
(h : eq_on f g s) (hl : s ∈ l) :
f =ᶠ[l] g :=
h.eventually_eq.filter_mono $ filter.le_principal_iff.2 hl
lemma has_subset.subset.eventually_le {α} {l : filter α} {s t : set α} (h : s ⊆ t) : s ≤ᶠ[l] t :=
filter.eventually_of_forall h
lemma set.maps_to.tendsto {α β} {s : set α} {t : set β} {f : α → β} (h : maps_to f s t) :
filter.tendsto f (𝓟 s) (𝓟 t) :=
filter.tendsto_principal_principal.2 h
module TestSpawnAt
using Test
import ThreadPools: StaticPool
using ThreadPools
include("util.jl")
@testset "@tspawnat" begin
@testset "@normal operation" begin
obj = TestObj(0)
function fn!(obj)
sleep(0.1)
obj.data = Threads.threadid()
end
task = @tspawnat Threads.nthreads() fn!(obj)
@test obj.data == 0
wait(task)
@test obj.data == Threads.nthreads()
end
@testset "@out of bounds" begin
@test_throws AssertionError task = @tspawnat Threads.nthreads()+1 randn()
@test_throws AssertionError task = @tspawnat 0 randn()
end
end
end # module
The father of a six-year-old boy who recently became the youngest member of Mensa has called for better supports in school for gifted children like his son.
John Fitzgerald from Kildimo has the reading ability of a 15-year-old and has already read all the Harry Potter books.
He was able to recite the 12-times tables at the age of three and often solves sixth class maths problems in his spare time.
During a recent appearance on the Late Late Show, he stunned host Ryan Tubridy and the audience by performing a complex maths puzzle where he calculated the value of Ryan’s name by assigning a number to each letter and then adding them all up - in just seconds.
His father Barry recalls reading bedtime stories to his son when he was just two years old and John being able to point out any words he had missed.
Now in senior infants, John will skip first class and go straight into second class next September.
“He generally learns on his own and he partakes in the stuff he can partake in,” said Barry.
And, while he loves going to school in Kilcornan NS, Barry is concerned that the resources are simply not there to help children like his son realise their full potential.
“There are other kids there who are getting one-on-one with the teacher and that is the kind of thing that he needs.
“Disadvantaged kids get the resources which is fair enough, but I have checked with the principal and John is not entitled to anything,” he said.
“He is special needs but in a different way,” Barry added.
In order for John to realise his full potential, Barry believes he needs special support for a few hours a week.
“The curriculum is never going to be a problem for him, but to get the one-on-one support to help him reach his full potential is the thing,” he said.
John made headlines recently when he was revealed as the youngest of Mensa’s 1,000 Irish members, with an IQ in the top 2% of the population. A week in which he was featured in several national newspapers was capped off with his Late Late Show appearance.
“He loves going to school, he loves his friends and he would never miss it,” he said.
And despite his extraordinary intellect, John remains an ordinary little boy with a passion for soccer and especially his beloved Manchester United.
“He’s passionate about Wayne Rooney and another hero is Roy Keane. John recently did a school project on another idol, Pele,” Barry added.
State Before: α : Type u_1
E : Type ?u.79505
m m0 : MeasurableSpace α
μ : Measure α
s t : Set α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
p : ℝ≥0∞
f✝ f g : α → ℝ
hf : Integrable f
hg : Integrable g
hf_le : ∀ (s : Set α), MeasurableSet s → ↑↑μ s < ⊤ → (∫ (x : α) in s, f x ∂μ) ≤ ∫ (x : α) in s, g x ∂μ
⊢ f ≤ᵐ[μ] g State After: α : Type u_1
E : Type ?u.79505
m m0 : MeasurableSpace α
μ : Measure α
s t : Set α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
p : ℝ≥0∞
f✝ f g : α → ℝ
hf : Integrable f
hg : Integrable g
hf_le : ∀ (s : Set α), MeasurableSet s → ↑↑μ s < ⊤ → (∫ (x : α) in s, f x ∂μ) ≤ ∫ (x : α) in s, g x ∂μ
⊢ 0 ≤ᵐ[μ] g - f Tactic: rw [← eventually_sub_nonneg] State Before: α : Type u_1
E : Type ?u.79505
m m0 : MeasurableSpace α
μ : Measure α
s t : Set α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
p : ℝ≥0∞
f✝ f g : α → ℝ
hf : Integrable f
hg : Integrable g
hf_le : ∀ (s : Set α), MeasurableSet s → ↑↑μ s < ⊤ → (∫ (x : α) in s, f x ∂μ) ≤ ∫ (x : α) in s, g x ∂μ
⊢ 0 ≤ᵐ[μ] g - f State After: α : Type u_1
E : Type ?u.79505
m m0 : MeasurableSpace α
μ : Measure α
s✝ t : Set α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
p : ℝ≥0∞
f✝ f g : α → ℝ
hf : Integrable f
hg : Integrable g
hf_le : ∀ (s : Set α), MeasurableSet s → ↑↑μ s < ⊤ → (∫ (x : α) in s, f x ∂μ) ≤ ∫ (x : α) in s, g x ∂μ
s : Set α
hs : MeasurableSet s
⊢ ↑↑μ s < ⊤ → 0 ≤ ∫ (x : α) in s, (g - f) x ∂μ Tactic: refine' ae_nonneg_of_forall_set_integral_nonneg (hg.sub hf) fun s hs => _ State Before: α : Type u_1
E : Type ?u.79505
m m0 : MeasurableSpace α
μ : Measure α
s✝ t : Set α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
p : ℝ≥0∞
f✝ f g : α → ℝ
hf : Integrable f
hg : Integrable g
hf_le : ∀ (s : Set α), MeasurableSet s → ↑↑μ s < ⊤ → (∫ (x : α) in s, f x ∂μ) ≤ ∫ (x : α) in s, g x ∂μ
s : Set α
hs : MeasurableSet s
⊢ ↑↑μ s < ⊤ → 0 ≤ ∫ (x : α) in s, (g - f) x ∂μ State After: α : Type u_1
E : Type ?u.79505
m m0 : MeasurableSpace α
μ : Measure α
s✝ t : Set α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
p : ℝ≥0∞
f✝ f g : α → ℝ
hf : Integrable f
hg : Integrable g
hf_le : ∀ (s : Set α), MeasurableSet s → ↑↑μ s < ⊤ → (∫ (x : α) in s, f x ∂μ) ≤ ∫ (x : α) in s, g x ∂μ
s : Set α
hs : MeasurableSet s
⊢ ↑↑μ s < ⊤ → (∫ (a : α) in s, f a ∂μ) ≤ ∫ (a : α) in s, g a ∂μ Tactic: rw [integral_sub' hg.integrableOn hf.integrableOn, sub_nonneg] State Before: α : Type u_1
E : Type ?u.79505
m m0 : MeasurableSpace α
μ : Measure α
s✝ t : Set α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
p : ℝ≥0∞
f✝ f g : α → ℝ
hf : Integrable f
hg : Integrable g
hf_le : ∀ (s : Set α), MeasurableSet s → ↑↑μ s < ⊤ → (∫ (x : α) in s, f x ∂μ) ≤ ∫ (x : α) in s, g x ∂μ
s : Set α
hs : MeasurableSet s
⊢ ↑↑μ s < ⊤ → (∫ (a : α) in s, f a ∂μ) ≤ ∫ (a : α) in s, g a ∂μ State After: no goals Tactic: exact hf_le s hs
-- 2012-03-08 Andreas
module NoTerminationCheck3 where
data Bool : Set where
true false : Bool
f : Bool -> Bool
f true = true
{-# NO_TERMINATION_CHECK #-}
f false = false
-- error: cannot place pragma in between clauses
function FIM=observedFisherInfo(z,RInv,h,JacobMat,HessMat)
%%OBSERVEDFISHERINFO Assuming that a linear or nonlinear measurement is
% corrupted with zero-mean Gaussian noise, the observed Fisher
% information matrix (FIM) has a standard form in terms of the
% values of the measurement function and its first and second
% derivatives. This function takes those values and returns the
% observed FIM. Summing the FIMs from multiple simultaneous
% independent measurements or measurement components returns the
% observed FIM for the fused measurement. The inverse of the FIM
% is the Cramér-Rao lower bound (CRLB). If only a single
% measurement is considered, and h=z, then h, z, and HessMat can
% all be omitted. Usually, there is no benefit to including the
% terms.
%
%INPUTS: z The zDimX1 measurement, or if multiple measurements of the same
% time are to be fused, the zDimXnumMeas matrix of those measurements. This
% can be omitted if one just wants the FIM without this.
% RInv The zDimXzDim inverse of the covariance matrix associated with
% the multivariate Gaussian noise corrupting z, or if multiple
% measurements are to be fused AND RInv differs among them, then a
% zDimXzDimXnumMeas collection of all of the inverse matrices. If
% z is omitted and multiple measurements are fused, then RInv MUST
% be specified as a zDimXzDimXnumMeas matrix, not as a single
% zDimXzDim matrix.
% h The zDimX1 value of the xDimX1 state converted into the
% measurement domain. If z is omitted, then this is not needed.
% JacobMat The zDimXxDim Jacobian matrix of derivatives of the measurement
% function h taken with respect to the elements of the target
% state. This is assumed the same for all measurements fused by
% this function.
% HessMat The xDimXxDimXzDim matrix of second derivatives of the
% measurement function h with respect to the elements of the state
% x. HessMat(i,j,k) is the Hessian for the kth measurement
% component with derivatives taken with respect to elements i and
% j of the x vector. i and j can be equal. Note that all
% HessMat(:,:,k) are symmetric matrices. In 3D, the order of the
% second derivatives in each submatrix is of the form:
% [d^2/(dxdx), d^2/(dxdy), d^2/(dxdz);
% d^2/(dydx), d^2/(dydy), d^2/(dydz);
% d^2/(dzdx), d^2/(dzdy), d^2/(dzdz)];
%
%OUTPUTS: FIM The xDimXxDim observed Fisher information matrix.
%
%The FIM and CRLB appear in many statistics texts. When considering target
%tracking, one can look at Chapter 2.7.2 of [1]. Since no expectation is
%taken for the observed Fisher information matrix, only the form in terms
%of second derivatives in [1] can be used, not the form in terms of an
%outer product of first derivatives. The use of the inverse of the observed
%FIM in characterizing the accuracy of ML estimates is discussed in [2].
%
%The observed FIM is simply the negative of the matrix of second
%derivatives of the logarithm of the likelihood function. In the problem at
%hand:
%nabla_{x}(\nabla_x)'log(p(z|x))
%where here p(z|x)=1/sqrt(det(2*pi*R))*exp(-(1/2)*(z-h(x))'*inv(R)*(z-h(x)))
%The gradient of the logarithm of the likelihood function is
%nabla_{x}log(p(z|x))=-H'*inv(R)(h(x)-z)
%where H=nabla_{x} h(x)'
%The matrix of second derivatives is thus
%nabla_{x}(\nabla_x)'log(p(z|x))=-H'*inv(R)*H-C
%where the jth column of C is given by
%C(:,j)=((\partial / \partial x_j)H')*inv(R)*(h(x)-z)
%
%EXAMPLE 1:
%In this example, we consider how well the observed FIM can be used as the
%covariance matrix of a fused measurement. In this instance, with all of
%the measurements having the same accuracy, we just average the measurements
%to get a fused measurement. We then find the NEES both with and without
%using the Hessian term. It is seen that when using the Hessian term, the
% NEES is closer to 1 than when not using the Hessian term.
% numMCRuns=10000;
% numMeas=3;
% sigmaR=100;
% sigmaAz=3*(pi/180);
% sigmaEl=3*(pi/180);
% SR=diag([sigmaR;sigmaAz;sigmaEl]);
% R=SR*SR';
% RInv=inv(R);
%
% zTrue=[30e3;60*(pi/180);3*(pi/180)];
% systemType=0;
% useHalfRange=true;
% xTrue=spher2Cart(zTrue,systemType,useHalfRange);
%
% NEESWithHess=0;
% NEESWithoutHess=0;
% for k=1:numMCRuns
% zMeas=zeros(3,numMeas);
% for curMeas=1:numMeas
% zMeas(:,curMeas)=zTrue+SR*randn(3,1);
% end
% xAvg=mean(spher2Cart(zMeas,systemType,useHalfRange),2);
%
% h=Cart2Sphere(xAvg,systemType,useHalfRange);
% JacobMat=calcSpherJacob(xAvg,systemType,useHalfRange);
% HessMat=calcSpherHessian(xAvg,systemType,useHalfRange);
%
% FIMWithHess=observedFisherInfo(zMeas,RInv,h,JacobMat,HessMat);
% FIMWithoutHess=numMeas*observedFisherInfo([],RInv,[],JacobMat,HessMat);
% diff=xAvg-xTrue;
% NEESWithHess=NEESWithHess+diff'*FIMWithHess*diff;
% NEESWithoutHess=NEESWithoutHess+diff'*FIMWithoutHess*diff;
% end
% NEESWithHess=NEESWithHess/(3*numMCRuns)
% NEESWithoutHess=NEESWithoutHess/(3*numMCRuns)
%
%EXAMPLE 2:
%In this instance, we used the observed Fisher information as a covariance
%matrix of a single spherical measurement in the absence of knowing the
%truth. Thus, we just take h(z)=z and can omit the matrix of second
%derivatives. Evaluating the NEES, one sees it is consistent (near 1, maybe
%like 0.99 or 1.001). Of course, at higher angular noise levels, a debiased
%function like spher2CartTaylor can perform better.
% numMCRuns=10000;
% sigmaR=10;
% sigmaAz=0.1*(pi/180);
% sigmaEl=0.1*(pi/180);
% SR=diag([sigmaR;sigmaAz;sigmaEl]);
% R=SR*SR';
% RInv=inv(R);
%
% zTrue=[1e3;60*(pi/180);3*(pi/180)];
% systemType=0;
% useHalfRange=true;
% xTrue=spher2Cart(zTrue,systemType,useHalfRange);
%
% NEES=0;
% for k=1:numMCRuns
% zMeas=zTrue+SR*randn(3,1);
% xConv=spher2Cart(zMeas,systemType,useHalfRange);
% JacobMat=calcSpherJacob(xConv,systemType,useHalfRange);
%
% invCRLB=observedFisherInfo([],RInv,[],JacobMat);
%
% diff=xConv-xTrue;
% NEES=NEES+diff'*invCRLB*diff;
% end
% NEES=NEES/(3*numMCRuns)
%
%REFERENCES:
%[1] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with
% Applications to Tracking and Navigation: Theory, Algorithms and
% Software. New York: John Wiley and Sons, 2001.
%[2] B. Efron and D. Hinkley, "Assessing the accuracy of the maximum
% likelihood estimator: Observed versus expected Fisher information,"
% Department of Statistics, Stanford, University, Tech. Rep. 108, 8 Mar.
% 1978.
%
%December 2020 David F. Crouse, Naval Research Laboratory, Washington D.C.
%(UNCLASSIFIED) DISTRIBUTION STATEMENT A. Approved for public release.
if(~isempty(z))
numZ=size(z,2);
xDim=size(HessMat,1);
zDim=size(HessMat,3);
FIM=zeros(xDim,xDim);
C=zeros(xDim,xDim);
for curMeas=1:numZ
if(size(RInv,3)>1)
RInvCur=RInv(:,:,curMeas);
else
RInvCur=RInv;
end
FIM=FIM+JacobMat'*RInvCur*JacobMat;%Use the per-measurement inverse covariance.
RhzVal=RInvCur*(h-z(:,curMeas));
for k=1:xDim
C(:,k)=reshape(HessMat(:,k,:),[xDim,zDim])*RhzVal;
end
FIM=FIM+C;
end
else
numMeas=size(RInv,3);
xDim=size(JacobMat,2);
FIM=zeros(xDim,xDim);
for k=1:numMeas
FIM=FIM+JacobMat'*RInv(:,:,k)*JacobMat;
end
end
end
%LICENSE:
%
%The source code is in the public domain and not licensed or under
%copyright. The information and software may be used freely by the public.
%As required by 17 U.S.C. 403, third parties producing copyrighted works
%consisting predominantly of the material produced by U.S. government
%agencies must provide notice with such work(s) identifying the U.S.
%Government material incorporated and stating that such material is not
%subject to copyright protection.
%
%Derived works shall not identify themselves in a manner that implies an
%endorsement by or an affiliation with the Naval Research Laboratory.
%
%RECIPIENT BEARS ALL RISK RELATING TO QUALITY AND PERFORMANCE OF THE
%SOFTWARE AND ANY RELATED MATERIALS, AND AGREES TO INDEMNIFY THE NAVAL
%RESEARCH LABORATORY FOR ALL THIRD-PARTY CLAIMS RESULTING FROM THE ACTIONS
%OF RECIPIENT IN THE USE OF THE SOFTWARE.
# Vanilla RNNs, GRUs and the `scan` function
In this notebook, you will learn how to define the forward method for vanilla RNNs and GRUs. Additionally, you will see how to define and use the function `scan` to compute forward propagation for RNNs.
By completing this notebook, you will:
- Be able to define the forward method for vanilla RNNs and GRUs
- Be able to define the `scan` function to perform forward propagation for RNNs
- Understand how forward propagation is implemented for RNNs.
```python
import numpy as np
from numpy import random
from time import perf_counter
```
An implementation of the `sigmoid` function is provided below so you can use it in this notebook.
```python
def sigmoid(x): # Sigmoid function
return 1.0 / (1.0 + np.exp(-x))
```
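As a quick sanity check (this example is not part of the original notebook), `sigmoid` maps 0 to exactly 0.5 and saturates toward 0 and 1 for large negative and positive inputs:

```python
import numpy as np

def sigmoid(x):  # Same sigmoid as above
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))                      # 0.5
print(sigmoid(np.array([-10.0, 10.0])))  # very close to [0, 1]
```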
# Part 1: Forward method for vanilla RNNs and GRUs
In this part of the notebook, you'll see the implementation of the forward method for a vanilla RNN and you'll implement that same method for a GRU. For this exercise you'll use a set of random weights and variables with the following dimensions:
- Embedding size (`emb`) : 128
- Hidden state size (`h_dim`) : (16,1)
The weights `w_` and biases `b_` are initialized with dimensions (`h_dim`, `emb + h_dim`) and (`h_dim`, 1). We expect the hidden state `h_t` to be a column vector with size (`h_dim`,1) and the initial hidden state `h_0` is a vector of zeros.
```python
random.seed(10) # Random seed, so your results match ours
emb = 128 # Embedding size
T = 256 # Number of variables in the sequences
h_dim = 16 # Hidden state dimension
h_0 = np.zeros((h_dim, 1)) # Initial hidden state
# Random initialization of weights and biases
w1 = random.standard_normal((h_dim, emb+h_dim))
w2 = random.standard_normal((h_dim, emb+h_dim))
w3 = random.standard_normal((h_dim, emb+h_dim))
b1 = random.standard_normal((h_dim, 1))
b2 = random.standard_normal((h_dim, 1))
b3 = random.standard_normal((h_dim, 1))
X = random.standard_normal((T, emb,1))
# X needs shape (256, 128, 1), not (256, 128):
# the first input X[0] then has shape (128, 1), which can be concatenated with h_0 of shape (16, 1);
# if X had shape (256, 128), X[0] would have shape (128,) and np.concatenate would raise an error.
print(X.shape)
weights = [w1, w2, w3, b1, b2, b3]
```
(256, 128, 1)
## 1.1 Forward method for vanilla RNNs
The vanilla RNN cell is quite straightforward. Its most general structure is presented in the next figure:
As you saw in the lecture videos, the computations made in a vanilla RNN cell are equivalent to the following equations:
\begin{equation}
h^{<t>}=g(W_{h}[h^{<t-1>},x^{<t>}] + b_h)
\label{eq: htRNN}
\end{equation}
\begin{equation}
\hat{y}^{<t>}=g(W_{yh}h^{<t>} + b_y)
\label{eq: ytRNN}
\end{equation}
where $[h^{<t-1>},x^{<t>}]$ means that $h^{<t-1>}$ and $x^{<t>}$ are concatenated together. In the next cell we provide the implementation of the forward method for a vanilla RNN.
```python
def forward_V_RNN(inputs, weights):  # Forward propagation for a single vanilla RNN cell
    x, h_t = inputs

    # weights
    wh, _, _, bh, _, _ = weights

    # new hidden state
    h_t = np.dot(wh, np.concatenate([h_t, x])) + bh  # np.concatenate defaults to axis=0, stacking h_t on top of x
    h_t = sigmoid(h_t)

    return h_t, h_t
```
As you can see, we omitted the computation of $\hat{y}^{<t>}$. This was done for the sake of simplicity, so you can focus on the way that hidden states are updated here and in the GRU cell.
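For completeness, here is a minimal sketch of how $\hat{y}^{<t>}$ could be computed from a hidden state. The output size `out_dim` and the weights `w_yh`, `b_y` are hypothetical (they are not part of the notebook's weight set):

```python
import numpy as np

def sigmoid(x):  # Sigmoid function, as defined above
    return 1.0 / (1.0 + np.exp(-x))

h_dim, out_dim = 16, 10            # out_dim is a hypothetical output size
h_t = np.zeros((h_dim, 1))         # a hidden state as produced by forward_V_RNN
w_yh = np.ones((out_dim, h_dim))   # hypothetical output weights W_{yh}
b_y = np.zeros((out_dim, 1))       # hypothetical output bias b_y

# y_hat = g(W_{yh} h_t + b_y), matching the second equation above
y_hat = sigmoid(np.dot(w_yh, h_t) + b_y)
print(y_hat.shape)  # (10, 1); with h_t all zeros, every entry is sigmoid(0) = 0.5
```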
## 1.2 Forward method for GRUs
A GRU cell involves more computations than a vanilla RNN cell. You can see this visually in the following diagram:
As you saw in the lecture videos, GRUs have relevance $\Gamma_r$ and update $\Gamma_u$ gates that control how the hidden state $h^{<t>}$ is updated on every time step. With these gates, GRUs are capable of keeping relevant information in the hidden state even for long sequences. The equations needed for the forward method in GRUs are provided below:
\begin{equation}
\Gamma_r=\sigma{(W_r[h^{<t-1>}, x^{<t>}]+b_r)}
\end{equation}
\begin{equation}
\Gamma_u=\sigma{(W_u[h^{<t-1>}, x^{<t>}]+b_u)}
\end{equation}
\begin{equation}
c^{<t>}=\tanh{(W_h[\Gamma_r*h^{<t-1>},x^{<t>}]+b_h)}
\end{equation}
\begin{equation}
h^{<t>}=\Gamma_u*c^{<t>}+(1-\Gamma_u)*h^{<t-1>}
\end{equation}
In the next cell, please implement the forward method for a GRU cell by computing the update `u` and relevance `r` gates, and the candidate hidden state `c`.
```python
def forward_GRU(inputs, weights):  # Forward propagation for a single GRU cell
    x, h_t = inputs
    print(x.shape)
    print(h_t.shape)

    # weights
    wu, wr, wc, bu, br, bc = weights

    # Update gate
    ### START CODE HERE (1-2 LINES) ###
    u = np.dot(wu, np.concatenate([h_t, x])) + bu
    u = sigmoid(u)
    ### END CODE HERE ###

    # Relevance gate
    ### START CODE HERE (1-2 LINES) ###
    r = np.dot(wr, np.concatenate([h_t, x])) + br
    r = sigmoid(r)
    ### END CODE HERE ###

    # Candidate hidden state
    ### START CODE HERE (1-2 LINES) ###
    c = np.dot(wc, np.concatenate([r * h_t, x])) + bc
    c = np.tanh(c)
    ### END CODE HERE ###

    # New hidden state h_t
    h_t = u * c + (1 - u) * h_t

    return h_t, h_t
```
Run the following cell to check your implementation.
```python
forward_GRU([X[1],h_0], weights)[0]
```
(128, 1)
(16, 1)
array([[ 9.77779014e-01],
[-9.97986240e-01],
[-5.19958083e-01],
[-9.99999886e-01],
[-9.99707004e-01],
[-3.02197037e-04],
[-9.58733503e-01],
[ 2.10804828e-02],
[ 9.77365398e-05],
[ 9.99833090e-01],
[ 1.63200940e-08],
[ 8.51874303e-01],
[ 5.21399924e-02],
[ 2.15495959e-02],
[ 9.99878828e-01],
[ 9.77165472e-01]])
Expected output:
<pre>
array([[ 9.77779014e-01],
[-9.97986240e-01],
[-5.19958083e-01],
[-9.99999886e-01],
[-9.99707004e-01],
[-3.02197037e-04],
[-9.58733503e-01],
[ 2.10804828e-02],
[ 9.77365398e-05],
[ 9.99833090e-01],
[ 1.63200940e-08],
[ 8.51874303e-01],
[ 5.21399924e-02],
[ 2.15495959e-02],
[ 9.99878828e-01],
[ 9.77165472e-01]])
</pre>
# Part 2: Implementation of the `scan` function
In the lectures you saw how the `scan` function is used for forward propagation in RNNs. It takes as inputs:
- `fn` : the function to be called recurrently (i.e. `forward_GRU`)
- `elems` : the list of inputs for each time step (`X`)
- `weights` : the parameters needed to compute `fn`
- `h_0` : the initial hidden state
`scan` goes through all the elements `x` in `elems`, calls the function `fn` with arguments ([`x`, `h_t`],`weights`), stores the computed hidden state `h_t` and appends the result to a list `ys`. Complete the following cell by calling `fn` with arguments ([`x`, `h_t`],`weights`).
```python
def scan(fn, elems, weights, h_0=None):  # Forward propagation for RNNs
    h_t = h_0
    ys = []
    for x in elems:
        ### START CODE HERE (1 LINE) ###
        y, h_t = fn([x, h_t], weights)
        ### END CODE HERE ###
        ys.append(y)
    return ys, h_t
```
# Part 3: Comparison between vanilla RNNs and GRUs
You have already seen how forward propagation is computed for vanilla RNNs and GRUs. As a quick recap, you need a forward method for the recurrent cell and a function like `scan` to go through all the elements of a sequence using that forward method. You saw that GRUs perform more computations than vanilla RNNs, and you can check that they have 3 times as many parameters. In the next two cells, we compute forward propagation for a sequence with 256 time steps (`T`) for an RNN and a GRU with the same hidden state size (`h_dim` = 16).
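You can verify the 3× parameter count directly from the shapes used above (a rough tally that ignores the output layer, which both cells omit here):

```python
emb, h_dim = 128, 16

# vanilla RNN: one weight matrix of shape (h_dim, emb + h_dim) and one bias of shape (h_dim, 1)
rnn_params = h_dim * (emb + h_dim) + h_dim

# GRU: three such weight matrices and three biases (update, relevance, candidate)
gru_params = 3 * (h_dim * (emb + h_dim) + h_dim)

print(rnn_params, gru_params, gru_params // rnn_params)  # 2320 6960 3
```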
```python
# vanilla RNNs
tic = perf_counter()
ys, h_T = scan(forward_V_RNN, X, weights, h_0)
toc = perf_counter()
RNN_time=(toc-tic)*1000
print (f"It took {RNN_time:.2f}ms to run the forward method for the vanilla RNN.")
```
It took 2.67ms to run the forward method for the vanilla RNN.
```python
# GRUs
tic = perf_counter()
ys, h_T = scan(forward_GRU, X, weights, h_0)
toc = perf_counter()
GRU_time=(toc-tic)*1000
print (f"It took {GRU_time:.2f}ms to run the forward method for the GRU.")
```
(128, 1)
(16, 1)
[... the shape pair above is printed 256 times, once per time step, by the print statements in forward_GRU ...]
It took 53.07ms to run the forward method for the GRU.
As mentioned in the lectures, GRUs take more time to compute. (However, on rare occasions a vanilla RNN can take longer — can you figure out what might cause this?) This means that training and prediction take more time for a GRU than for a vanilla RNN. However, GRUs let you propagate relevant information even across long sequences, so when selecting an architecture for NLP you should assess the tradeoff between computation time and performance.
<b>Congratulations!</b> Now you know how the forward method is implemented for vanilla RNNs and GRUs, and how the `scan` function provides an abstraction for forward propagation in RNNs.
|
#include <boost/polygon/polygon.hpp>
int
main ()
{
return 0;
}
|
{-# LANGUAGE FlexibleContexts #-}
module Main where
import System.Exit
import System.Random
import System.TimeIt
import Control.Monad.State
import Numeric.LinearAlgebra
import Control.Monad.Trans.Maybe ( runMaybeT )
import Control.Monad.Writer ( runWriter )
import Criterion.Main
import Criterion.Types
import Neural.Network
import Neural.Activation ( sigmoid
, sigmoid'
)
import Neural.Training
import Data.MNIST ( loadData )
shallowShape :: [Int]
shallowShape = [784, 30, 10]
deepShape :: [Int]
deepShape = [784, 30, 30, 30, 10]
-- HYPER PARAMETERS
-- learning rate
μ :: Double
μ = 0.5
-- Mini batch size
mbs :: Int
mbs = 100
-- Amount of training epochs
epochs :: Int
epochs = 50
main :: IO ()
main = defaultMain
[ env setupEnv $ \ ~(trainEx, testEx) ->
let trainShallow = timeIt (randomIO >>= trainNetwork trainEx testEx shallowShape)
trainDeep = timeIt (randomIO >>= trainNetwork trainEx testEx deepShape)
in bgroup
"main"
[ bench "shallow" $ nfIO trainShallow
, bench "deep" $ nfIO trainDeep
]
]
trainNetwork
:: [TrainingExample Double]
-> [TestingExample Double]
-> [Int]
-> Seed
-> IO (Network Double)
trainNetwork trainEx testEx shape seed = do
let gen = mkStdGen seed
let trainFunc = train sigmoid sigmoid' μ trainEx (Just testEx) epochs mbs
let (randNet, gen') = runState (randomNetwork shape) gen
let (net, logs) = runWriter $ evalStateT (trainFunc randNet) gen'
putStrLn "Training network..."
mapM_ putStrLn logs
return net
setupEnv
:: IO ([TrainingExample Double], [TestingExample Double])
setupEnv = do
mnistData <- runMaybeT loadData
case mnistData of
Nothing -> putStrLn "Failed to load MNIST data." >> exitFailure
Just (trainEx, testEx) -> return (trainEx, testEx)
|
(*
Copyright (C) 2017 M.A.L. Marques
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*)
(* type: gga_exc *)
herman_c1 := 0.003:
herman_f := x -> 1 + herman_c1/X_FACTOR_C * x^2:
f := (rs, zeta, xt, xs0, xs1) -> gga_exchange(herman_f, rs, zeta, xs0, xs1):
|
/-
Copyright (c) 2019 Johan Commelin. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Johan Commelin, Simon Hudon, Scott Morrison
-/
import control.bifunctor
import logic.equiv.basic
/-!
# Functor and bifunctors can be applied to `equiv`s.
We define
```lean
def functor.map_equiv (f : Type u → Type v) [functor f] [is_lawful_functor f] :
α ≃ β → f α ≃ f β
```
and
```lean
def bifunctor.map_equiv (F : Type u → Type v → Type w) [bifunctor F] [is_lawful_bifunctor F] :
α ≃ β → α' ≃ β' → F α α' ≃ F β β'
```
-/
universes u v w
variables {α β : Type u}
open equiv
namespace functor
variables (f : Type u → Type v) [functor f] [is_lawful_functor f]
/-- Apply a functor to an `equiv`. -/
def map_equiv (h : α ≃ β) : f α ≃ f β :=
{ to_fun := map h,
inv_fun := map h.symm,
left_inv := λ x, by simp [map_map],
right_inv := λ x, by simp [map_map] }
@[simp]
lemma map_equiv_apply (h : α ≃ β) (x : f α) :
(map_equiv f h : f α ≃ f β) x = map h x := rfl
@[simp]
lemma map_equiv_symm_apply (h : α ≃ β) (y : f β) :
(map_equiv f h : f α ≃ f β).symm y = map h.symm y := rfl
end functor
namespace bifunctor
variables {α' β' : Type v} (F : Type u → Type v → Type w) [bifunctor F] [is_lawful_bifunctor F]
/-- Apply a bifunctor to a pair of `equiv`s. -/
def map_equiv (h : α ≃ β) (h' : α' ≃ β') : F α α' ≃ F β β' :=
{ to_fun := bimap h h',
inv_fun := bimap h.symm h'.symm,
left_inv := λ x, by simp [bimap_bimap, id_bimap],
right_inv := λ x, by simp [bimap_bimap, id_bimap] }
@[simp]
lemma map_equiv_apply (h : α ≃ β) (h' : α' ≃ β') (x : F α α') :
(map_equiv F h h' : F α α' ≃ F β β') x = bimap h h' x := rfl
@[simp]
lemma map_equiv_symm_apply (h : α ≃ β) (h' : α' ≃ β') (y : F β β') :
(map_equiv F h h' : F α α' ≃ F β β').symm y = bimap h.symm h'.symm y := rfl
@[simp]
lemma map_equiv_refl_refl : map_equiv F (equiv.refl α) (equiv.refl α') = equiv.refl (F α α') :=
begin
ext x,
simp [id_bimap]
end
end bifunctor
|
module NN.Dense
import Backprop
import Control.Optics
import Linear.Backprop
import Linear.V
import Utils.String
public export
record Dense (i : Nat) (o : Nat) (a : Type) where
constructor MkDense
weights : T [i, o] a
biases : T [o] a
export
lweights : Simple Lens (Dense i o a) (T [i, o] a)
lweights = lens weights (\s, b => { weights := b } s)
export
lbiases : Simple Lens (Dense i o a) (T [o] a)
lbiases = lens biases (\s, b => { biases := b } s)
export
{i : _} -> {o : _} -> FromDouble a => FromDouble (Dense i o a) where
fromDouble x = MkDense (fromDouble x) (fromDouble x)
export
{i : _} -> {o : _} -> CanBack a => CanBack (Dense i o a) where
zero = MkDense zero zero
one = MkDense one one
add x1 x2 = MkDense (add x1.weights x2.weights) (add x1.biases x2.biases)
export
{i : _} -> {o : _} -> Num a => Num (Dense i o a) where
x1 + x2 = MkDense (x1.weights + x2.weights) (x1.biases + x2.biases)
x1 * x2 = MkDense (x1.weights * x2.weights) (x1.biases * x2.biases)
fromInteger x = MkDense (fromInteger x) (fromInteger x)
export
{i : _} -> {o : _} -> Neg a => Neg (Dense i o a) where
x1 - x2 = MkDense (x1.weights - x2.weights) (x1.biases - x2.biases)
negate x = MkDense (negate x.weights) (negate x.biases)
export
Show a => Show (Dense i o a) where
show x = show_record "MkDense" [("weights", show x.weights), ("biases", show x.biases)]
export
apply : {i, o, n, a : _} -> CanBack a => Num a => Node s (Dense i o a) -> Node s (T [n, i] a) -> Node s (T [n, o] a)
apply layer x = x <> (layer ^. lweights) + konst (layer ^. lbiases)
|
lemma continuous_disconnected_range_constant:
  assumes S: "connected S"
    and conf: "continuous_on S f"
    and fim: "f ` S \<subseteq> t"
    and cct: "\<And>y. y \<in> t \<Longrightarrow> connected_component_set t y = {y}"
  shows "f constant_on S" |
# Local sensitivity
The goal of this notebook is to implement the calculation of local sensitivity from scratch. It is benchmarked against the results of the global sensitivity from my previous [blog post](https://github.com/gonzalo-munillag/Blog/blob/main/My_implementations/Global_sensitivity/Global_Sensitivity.ipynb).
Local sensitivity differs from global sensitivity in that it considers only the dataset to be released, and not all the possible release datasets. Furthermore, you only calculate the neighbors of the release dataset, and not of all possible datasets. And it is only with these neighbors and the release dataset that you find the maximum norm.
Global sensitivity is therefore an upper bound of local sensitivity.
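As a toy illustration (hypothetical numbers, unbounded DP where a neighbor is obtained by removing one record): for the sum query, the local sensitivity is the largest value actually present in the release dataset, while the global sensitivity is the largest value the domain allows:

```python
import numpy as np

domain_max = 100                  # hypothetical domain bound, e.g. absence_days in [0, 100]
D_release = np.array([1, 2, 3])   # the dataset actually being released

local_sens_sum = D_release.max()  # removing the largest record changes the sum the most
global_sens_sum = domain_max      # worst case over every possible dataset in the universe

print(local_sens_sum, global_sens_sum)     # 3 100
assert local_sens_sum <= global_sens_sum   # local sensitivity never exceeds global
```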
### Contributions of notebook
1. Two functions to calculate the local sensitivity of a dataset empirically.
2. Comparisons between local and global sensitivity results.
### Main questions for clarification
- If local sensitivity would imply less noise due to its smaller value, then why do we not always use local sensitivity? If for each dataset you calculated its particular local sensitivity, an adversary could also take that into consideration when plotting the noise distributions of the different possible dataset release combinations. These distributions would have a lower std, and thus, once the adversary gets a query DP result, it would be easier for him/her to discard possible release datasets (a visual representation of this process carried out by an attacker is in the [paper](https://git.gnunet.org/bibliography.git/plain/docs/Choosing-%CE%B5-2011Lee.pdf) I implemented 2 blog posts ago, depicted in Figs. 1 and 3). That is why some researchers invented smooth bounds for local sensitivities, but that is something I will cover in future blog posts.
**TIP**: The best way to understand this notebook is to open my previous [blog post](https://github.com/gonzalo-munillag/Blog/blob/main/My_implementations/Global_sensitivity/Global_Sensitivity.ipynb), go to the visualizations of scenario (a), and then to cell 25 for unbounded and cell 33 for bounded sensitivity in order to compare the raw data.
## Datasets
```python
# Visualization
%pylab inline
from IPython.display import display, Math, Latex
import matplotlib.pyplot as plt
# handling data
import csv
import json
import pandas as pd
# Math
from random import random
import scipy.stats as ss
import numpy as np
import itertools
from collections import Counter
```
Populating the interactive namespace from numpy and matplotlib
### Datasets
We have 2 datasets to test our functions:
- (a_s) A small one I can use to benchmark our functions against a published paper, ["How much is enough? Choosing epsilon for Differential Privacy"](https://git.gnunet.org/bibliography.git/plain/docs/Choosing-%CE%B5-2011Lee.pdf), which was implemented in one of my previous [blog posts](https://github.com/gonzalo-munillag/Blog/tree/main/Extant_Papers_Implementations/A_method_to_choose_epsilon).
- (a_l) A large one to test the functions further.
The use case will be records from the students in a school.
###### D_small
```python
# We define the actual dataset (conforming the universe)
D_small_universe_dict = {'name': ['Chris', 'Kelly', 'Pat', 'Terry'], 'school_year': [1, 2, 3, 4], 'absence_days': [1, 2, 3, 10]}
```
```python
D_small_universe = pd.DataFrame(D_small_universe_dict)
D_small_universe
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Pat</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>Terry</td>
<td>4</td>
<td>10</td>
</tr>
</tbody>
</table>
</div>
```python
# We define the dataset that we will release
D_small_release = D_small_universe.drop([3], axis=0)
D_small_release
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Pat</td>
<td>3</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
The adversary model adopted in the paper mentioned above is the worst-case scenario, and it is the one I adopt in this notebook: an attacker has infinite computation power, and because DP provides privacy against adversaries with arbitrary background knowledge, it is okay to assume that the adversary has full access to all the records (the adversary knows the whole universe, i.e., D_small_universe). But there is a dataset made from the universe without one individual (D_small_release), and the adversary does not know who is and who is not in it (this is the only thing the adversary does not know about the universe); the adversary only knows that D_small_release contains people with a particular quality (the students who have not been on probation). With D_small_universe, the attacker will try to reconstruct the dataset he does not know (D_small_release) by issuing queries against D_small_release without having direct access to it.
###### D_large
The larger universe dataset is used to test the functions with a Hamming distance greater than 1 and with duplicated values both in the universe and the release dataset, which does not mean the records are the same.
```python
# We define the actual dataset (conforming the universe)
D_large_universe_dict = {'name': ['Chris', 'Kelly', 'Keny', 'Sherry', 'Jerry', 'Morty', "Beth", "Summer", "Squanchy", "Rick"], \
'school_year': [1, 2, 2, 2, 5, 5, 7, 8, 9, 9], 'absence_days': [1, 2, 3, 4, 5, 6, 7, 8, 15, 20]}
```
```python
D_large_universe = pd.DataFrame(D_large_universe_dict)
D_large_universe
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Keny</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>Sherry</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>Jerry</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<th>5</th>
<td>Morty</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<th>6</th>
<td>Beth</td>
<td>7</td>
<td>7</td>
</tr>
<tr>
<th>7</th>
<td>Summer</td>
<td>8</td>
<td>8</td>
</tr>
<tr>
<th>8</th>
<td>Squanchy</td>
<td>9</td>
<td>15</td>
</tr>
<tr>
<th>9</th>
<td>Rick</td>
<td>9</td>
<td>20</td>
</tr>
</tbody>
</table>
</div>
```python
# We define the dataset that we will release
D_large_release = D_large_universe.iloc[:6,:]
D_large_release
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Keny</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>Sherry</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>Jerry</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<th>5</th>
<td>Morty</td>
<td>5</td>
<td>6</td>
</tr>
</tbody>
</table>
</div>
## Functions
### Auxiliary function
##### Here we have all the queries we could make to the numerical data
```python
# With this function, we make it easier to call the mean, median... functions
# REF: https://stackoverflow.com/questions/34794634/how-to-use-a-variable-as-function-name-in-python
# It is not clean to have the var percentile passed into each function, but it is less verbose than having a function
# for each percentile. We could, however, limit the percentiles offered to 25 and 75.
class Query_class:
    """
    A class used to represent a query. You instantiate an object that performs a particular query on an array.

    Attributes
    ----------
    fDic - (dict) maps query names to the methods the class can dispatch to
    fActive - (function) the query function this instance was created for

    Methods
    -------
    run_query - runs the query for which the instance was created
    The other methods implement the different possible queries
    """

    def __init__(self, fCase):
        # mapping: string --> variable = function name
        fDic = {'mean': self._mean,
                'median': self._median,
                'count': self._count,
                'sum': self._sum,
                'std': self._std,
                'var': self._var,
                'percentile': self._percentile}
        self.fActive = fDic[fCase]

    # Calculate the mean of an array
    def _mean(self, array, percentile):
        return np.mean(array)

    # Calculate the median of an array
    def _median(self, array, percentile):
        return np.median(array)

    # Calculate the number of elements in the array
    def _count(self, array, percentile):
        return len(array)

    # Calculate the sum of an array
    def _sum(self, array, percentile):
        return np.sum(array)

    # Calculate the std of an array
    def _std(self, array, percentile):
        return np.std(array)

    # Calculate the variance of an array
    def _var(self, array, percentile):
        return np.var(array)

    # Calculate a given percentile of an array
    def _percentile(self, array, percentile):
        return np.percentile(array, percentile)

    # It will run the given query
    def run_query(self, array, percentile=50):
        return self.fActive(array, percentile)
```
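The dispatch pattern above can be exercised like this (a standalone mini version mirroring `Query_class`, so the snippet runs on its own):

```python
import numpy as np

class QueryClass:
    """Minimal standalone mirror of Query_class: maps a query name to a function."""
    def __init__(self, f_case):
        f_dic = {'mean': np.mean, 'median': np.median, 'sum': np.sum,
                 'count': len, 'std': np.std, 'var': np.var}
        self.f_active = f_dic[f_case]

    def run_query(self, array):
        return self.f_active(array)

q = QueryClass('mean')
print(q.run_query([1, 2, 3]))  # 2.0
```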
```python
# Set of checks on the input values
def verify_sensitivity_inputs(universe_cardinality, universe_subset_cardinality, hamming_distance):
    """
    INPUT:
    universe_cardinality - (int) cardinality of the universe dataset
    universe_subset_cardinality - (int) cardinality of the universe subset (the release dataset)
    hamming_distance - (int) hamming distance between neighboring datasets

    OUTPUT:
    ValueError - (str) error message due to the value of the inputs

    Description:
    It performs multiple checks to verify the validity of the inputs for the calculation of sensitivity
    """

    # Check on universe cardinality.
    # The cardinality of the subset of the universe cannot be larger than the universe
    if universe_cardinality < universe_subset_cardinality:
        raise ValueError("Your universe dataset cannot be smaller than your release dataset.")

    # Checks on the validity of the chosen hamming_distance
    if hamming_distance >= universe_subset_cardinality:
        raise ValueError("Hamming distance chosen is larger than the cardinality of the release dataset.")

    if hamming_distance > np.abs(universe_cardinality - universe_subset_cardinality):
        raise ValueError("Hamming distance chosen is larger than the difference in cardinalities between the \
universe and the release dataset, i.e., \
there are not enough values in your universe to create such a large neighboring dataset (re-sampling records).")

    # The hamming distance cannot be 0; otherwise the neighbor dataset equals the original dataset
    if hamming_distance == 0:
        raise ValueError("Hamming distance cannot be 0.")
```
```python
# Used by unbounded_empirical_local_L1_sensitivity
def L1_norm_max(release_dataset_query_value, neighbor_datasets, query, percentile):
    """
    INPUT:
    release_dataset_query_value - (float) query value of a particular possible release dataset
    neighbor_datasets - (list) contains the possible neighbors of the specific release dataset
    query - (object) instance of class Query_class
    percentile - (int) percentile value for the percentile query

    OUTPUT:
    L1_norm_maximum - (float) maximum L1 norm calculated from the differences between the query results
    of the neighbor datasets and the specific release dataset

    Description:
    It calculates the maximum L1 norm between the query results of the neighbor datasets and the specific release dataset
    """

    neighbor_dataset_query_values = []
    for neighbor_dataset in neighbor_datasets:
        neighbor_dataset_query_value = query.run_query(neighbor_dataset, percentile)
        neighbor_dataset_query_values.append(neighbor_dataset_query_value)

    # We select the maximum and minimum values of the queries, as the intermediate values will not
    # yield a larger L1 norm (ultimately, we are interested in the maximum L1 norm)
    neighbor_dataset_query_value_min, neighbor_dataset_query_value_max = \
        min(neighbor_dataset_query_values), max(neighbor_dataset_query_values)

    # We calculate the L1 norm for these two values and pick the maximum
    L1_norm_i = np.abs(release_dataset_query_value - neighbor_dataset_query_value_min)
    L1_norm_ii = np.abs(release_dataset_query_value - neighbor_dataset_query_value_max)
    L1_norm_maximum = max(L1_norm_i, L1_norm_ii)

    return L1_norm_maximum
```
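The reasoning behind taking only the min and max neighbor values can be checked with a standalone sketch of the same logic (a simplified mirror of `L1_norm_max` that takes precomputed query values instead of datasets):

```python
import numpy as np

def l1_norm_max_sketch(release_value, neighbor_values):
    """Standalone mirror of L1_norm_max: only the min and max neighbor query
    values can produce the largest L1 distance to the release value."""
    lo, hi = min(neighbor_values), max(neighbor_values)
    return max(np.abs(release_value - lo), np.abs(release_value - hi))

# release dataset query value is 2.0; neighbors' query values are 1.5, 2.5 and 6.0
print(l1_norm_max_sketch(2.0, [1.5, 2.5, 6.0]))  # 4.0
```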
```python
def calculate_unbounded_sensitivities(universe, universe_subset, columns, hamming_distance, unbounded_sensitivities):
    """
    INPUT:
    universe - (df or dict) contains all possible values of the dataset
    universe_subset - (df) contains the subset chosen for the release dataset
    columns - (array) contains the names of the columns we would like to obtain the sensitivity from
    hamming_distance - (int) hamming distance between neighboring datasets
    unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type

    OUTPUT:
    unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type

    Description:
    It calculates the sensitivities for a set of queries given a universe and a release dataset.
    """

    # Calculate the sensitivity of different queries for unbounded DP
    query_type = 'mean'
    mean_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
    query_type = 'median'
    median_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
    query_type = 'count'
    count_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
    query_type = 'sum'
    sum_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
    query_type = 'std'
    std_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
    query_type = 'var'
    var_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
    query_type = 'percentile'
    percentile = 25
    percentile_25_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile)
    percentile = 50
    percentile_50_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile)
    percentile = 75
    percentile_75_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile)
    percentile = 90
    percentile_90_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile)

    print('Unbounded sensitivities for mean', mean_unbounded_global_sensitivities)
    print('Unbounded sensitivities for median', median_unbounded_global_sensitivities)
    print('Unbounded sensitivities for count', count_unbounded_global_sensitivities)
    print('Unbounded sensitivities for sum', sum_unbounded_global_sensitivities)
    print('Unbounded sensitivities for std', std_unbounded_global_sensitivities)
    print('Unbounded sensitivities for var', var_unbounded_global_sensitivities)
    print('Unbounded sensitivities for percentile 25', percentile_25_unbounded_global_sensitivities)
    print('Unbounded sensitivities for percentile 50', percentile_50_unbounded_global_sensitivities)
    print('Unbounded sensitivities for percentile 75', percentile_75_unbounded_global_sensitivities)
    print('Unbounded sensitivities for percentile 90', percentile_90_unbounded_global_sensitivities)

    unbounded_sensitivities = build_sensitivity_dict(unbounded_sensitivities, hamming_distance,
        mean_unbounded_global_sensitivities, median_unbounded_global_sensitivities, count_unbounded_global_sensitivities,
        sum_unbounded_global_sensitivities, std_unbounded_global_sensitivities, var_unbounded_global_sensitivities,
        percentile_25_unbounded_global_sensitivities, percentile_50_unbounded_global_sensitivities,
        percentile_75_unbounded_global_sensitivities, percentile_90_unbounded_global_sensitivities)

    return unbounded_sensitivities
```
```python
def calculate_bounded_sensitivities(universe, universe_subset, columns, hamming_distance, bounded_sensitivities):
"""
INPUT:
universe - (df or dict) contains all possible values of the dataset
universe_subset - (df) contains subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
hamming_distance - (int) hamming distance between neighboring datasets
bounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
OUTPUT
bounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
Description:
It calculates the sensitivities for a set of queries given a universe and a release dataset.
"""
# Calculate the sensitivity of different queries for the bounded DP
query_type = 'mean'
mean_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
query_type = 'median'
median_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
query_type = 'count'
count_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
query_type = 'sum'
sum_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
query_type = 'std'
std_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
query_type = 'var'
var_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance)
query_type = 'percentile'
percentile = 25
percentile_25_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile)
percentile = 50
percentile_50_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile)
percentile = 75
percentile_75_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile)
percentile = 90
percentile_90_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile)
print('Bounded sensitivities for mean', mean_bounded_global_sensitivities)
print('Bounded sensitivities for median', median_bounded_global_sensitivities)
print('Bounded sensitivities for count', count_bounded_global_sensitivities)
print('Bounded sensitivities for sum', sum_bounded_global_sensitivities)
print('Bounded sensitivities for std', std_bounded_global_sensitivities)
print('Bounded sensitivities for var', var_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 25', percentile_25_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 50', percentile_50_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 75', percentile_75_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 90', percentile_90_bounded_global_sensitivities)
bounded_sensitivities = build_sensitivity_dict(bounded_sensitivities, hamming_distance,\
mean_bounded_global_sensitivities, median_bounded_global_sensitivities, count_bounded_global_sensitivities, \
sum_bounded_global_sensitivities, std_bounded_global_sensitivities, var_bounded_global_sensitivities, \
percentile_25_bounded_global_sensitivities, percentile_50_bounded_global_sensitivities, \
percentile_75_bounded_global_sensitivities, percentile_90_bounded_global_sensitivities)
return bounded_sensitivities
```
```python
# We save the values in a dictionary
def build_sensitivity_dict(unbounded_sensitivities, hamming_distance, mean_sensitivity, median_sensitivity, count_sensitivity, _sum_sensitivity, _std_sensitivity, _var_sensitivity, percentile_25_sensitivity, percentile_50_sensitivity, percentile_75_sensitivity, percentile_90_sensitivity):
"""
INPUT
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
hamming_distance - (int) hamming distance of the neighboring datasets
mean_sensitivity - (float) sensitivity of the mean query
median_sensitivity - (float) sensitivity of the median query
count_sensitivity - (float) sensitivity of the count query
_sum_sensitivity - (float) sensitivity of the sum query
_std_sensitivity - (float) sensitivity of the std query
_var_sensitivity - (float) sensitivity of the var query
percentile_25_sensitivity - (float) sensitivity of the percentile 25 query
percentile_50_sensitivity - (float) sensitivity of the percentile 50 query
percentile_75_sensitivity - (float) sensitivity of the percentile 75 query
percentile_90_sensitivity - (float) sensitivity of the percentile 90 query
OUTPUT
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
"""
unbounded_sensitivities[hamming_distance] = {}
unbounded_sensitivities[hamming_distance]['mean'] = mean_sensitivity
unbounded_sensitivities[hamming_distance]['median'] = median_sensitivity
unbounded_sensitivities[hamming_distance]['count'] = count_sensitivity
unbounded_sensitivities[hamming_distance]['sum'] = _sum_sensitivity
unbounded_sensitivities[hamming_distance]['std'] = _std_sensitivity
unbounded_sensitivities[hamming_distance]['var'] = _var_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_25'] = percentile_25_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_50'] = percentile_50_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_75'] = percentile_75_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_90'] = percentile_90_sensitivity
return unbounded_sensitivities
```
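For reference, here is a minimal sketch of the nested structure this helper builds; the per-column values are illustrative placeholders, and the lookup order is hamming distance, then query type, then column:

```python
# Minimal sketch of the nested dict produced by build_sensitivity_dict
# (the per-column values here are illustrative placeholders)
sensitivities = {}
hamming_distance = 1
sensitivities[hamming_distance] = {
    'mean': {'school_year': 0.5, 'absence_days': 2.0},
    'median': {'school_year': 0.5, 'absence_days': 0.5},
}
# Lookup pattern: sensitivities[hamming_distance][query_type][column]
print(sensitivities[1]['median']['absence_days'])  # 0.5
```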
## Main Functions
### Unbounded Local Sensitivity
```latex
%%latex
\begin{align}
\ell_{1, \mbox{sensitivity}}: LS(x) =\max_{\substack{
{y \in \mathbb{N}^{(\mathcal{X})}} \\
\|x-y\|_{1} = h \\
||x|-|y|| = h
}} \|f(x)-f(y)\|_{1}
\end{align}
```
\begin{align}
\ell_{1, \mbox{sensitivity}}: LS(x) =\max_{\substack{
{y \in \mathbb{N}^{(\mathcal{X})}} \\
\|x-y\|_{1} = h \\
||x|-|y|| = h
}} \|f(x)-f(y)\|_{1}
\end{align}
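As a quick sanity check on this definition, here is a brute-force sketch (my own helper, not one of the notebook's functions) that enumerates both neighbor directions for the median at hamming distance h; for the release dataset {1, 2, 3} drawn from the universe {1, 2, 3, 10} it recovers the 0.5 reported in the paper:

```python
from collections import Counter
from itertools import combinations

import numpy as np

def local_sensitivity_median_unbounded(universe, release, h=1):
    """Brute-force unbounded local sensitivity of the median at hamming distance h."""
    base = np.median(list(release))
    diffs = []
    # Neighbors with h fewer records: drop h elements from the release dataset
    for smaller in combinations(release, len(release) - h):
        diffs.append(abs(base - np.median(smaller)))
    # Neighbors with h more records: add h elements from the multiset difference
    remaining = list((Counter(universe) - Counter(release)).elements())
    for extra in combinations(remaining, h):
        diffs.append(abs(base - np.median(list(release) + list(extra))))
    return max(diffs)

print(local_sensitivity_median_unbounded([1, 2, 3, 10], [1, 2, 3]))  # 0.5
```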
```python
def unbounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile=50):
"""
INPUT:
universe - (df or dict) contains all possible values of the dataset
universe_subset - (df or dict) contains the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
query_type - (str) the type of query to be executed later on
hamming_distance - (int) hamming distance between neighboring datasets
percentile - (int) percentile value for the percentile query
OUTPUT:
unbounded_global_sensitivity - (dict) the unbounded sensitivity of each requested column
Description:
It calculates the local sensitivity of each column based on knowledge of the entire universe of
the dataset and the query_type.
"""
# Check if the values for the hamming distance and universe sizes comply with the basic constraints
verify_sensitivity_inputs(universe.shape[0], universe_subset.shape[0], hamming_distance)
# We initialize the type of query for which we would like to calculate the sensitivity
query = Query_class(query_type)
# We will store the sensitivity of each column of the dataset containing universe in a dictionary
unbounded_global_sensitivity_per_colum = {}
for column in columns:
# 1) RELEASE DATASET /// it could be a dataframe or a tuple
try:
release_dataset = universe_subset[column]
except:
release_dataset = universe_subset
# 2) |NEIGHBORING DATASET| < |RELEASE DATASET| //// cardinalities
neighbor_with_less_records_datasets = itertools.combinations(release_dataset, \
universe_subset.shape[0] - hamming_distance)
neighbor_with_less_records_datasets = list(neighbor_with_less_records_datasets)
# 3) |NEIGHBORING DATASET| > |RELEASE DATASET| //// cardinalities
symmetric_difference = list((Counter(universe[column]) - Counter(release_dataset)).elements())
neighbor_possible_value_combinations = itertools.combinations(symmetric_difference, hamming_distance)
neighbor_possible_value_combinations = list(neighbor_possible_value_combinations)
neighbor_with_more_records_datasets = []
for neighbor_possible_value_combination in neighbor_possible_value_combinations:
# We create neighboring datasets by concatenating the neighbor_possible_value_combination with the release dataset
neighbor_with_more_records_dataset = np.append(release_dataset, neighbor_possible_value_combination)
neighbor_with_more_records_datasets.append(neighbor_with_more_records_dataset)
# 4) For each possible release dataset, there is a set of neighboring datasets
# We will iterate through each possible release dataset and calculate the L1 norm with
# each of its respective neighboring datasets
L1_norms = []
release_dataset_query_value = query.run_query(release_dataset, percentile)
L1_norm = L1_norm_max(release_dataset_query_value, neighbor_with_less_records_datasets, query, percentile)
L1_norms.append(L1_norm)
L1_norm = L1_norm_max(release_dataset_query_value, neighbor_with_more_records_datasets, query, percentile)
L1_norms.append(L1_norm)
# We pick the maximum out of all the maximum L1_norms calculated from each possible release dataset
unbounded_global_sensitivity_per_colum[column] = max(L1_norms)
return unbounded_global_sensitivity_per_colum
```
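One detail worth highlighting: the function computes the multiset difference between the universe and the release dataset via `Counter` subtraction, so duplicated values are preserved rather than collapsed. A tiny illustration:

```python
from collections import Counter

universe = [1, 2, 2, 3, 10]
release = [2, 3]
# Counter subtraction is a multiset difference: only one of the two 2s is removed
remaining = list((Counter(universe) - Counter(release)).elements())
print(remaining)  # [1, 2, 10]
```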
### Bounded Local Sensitivity
```latex
%%latex
\begin{align}
\ell_{1, \mbox{sensitivity}}: LS(x) =\max_{\substack{
{y \in \mathbb{N}^{(\mathcal{X})}} \\
\|x-y\|_{1} = h \\
||x|-|y|| = 0
}} \|f(x)-f(y)\|_{1}
\end{align}
```
\begin{align}
\ell_{1, \mbox{sensitivity}}: LS(x) =\max_{\substack{
{y \in \mathbb{N}^{(\mathcal{X})}} \\
\|x-y\|_{1} = h \\
||x|-|y|| = 0
}} \|f(x)-f(y)\|_{1}
\end{align}
```python
def bounded_empirical_local_L1_sensitivity(universe, universe_subset, columns, query_type, hamming_distance, percentile=50):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset - (df or dict) contains the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
query_type - (str) the type of query to be executed later on
hamming_distance - (int) hamming distance between neighboring datasets
percentile - (int) percentile value for the percentile query
OUTPUT:
bounded_global_sensitivity - (dict) the bounded sensitivity of each requested column
Description:
It calculates the local sensitivity of each column based on knowledge of the entire universe of
the dataset and the query_type.
"""
# Check if the values for the hamming distance and universe sizes comply with the basic constraints
verify_sensitivity_inputs(universe.shape[0], universe_subset.shape[0], hamming_distance)
# We initialize the type of query for which we would like to calculate the sensitivity
query = Query_class(query_type)
# We will store the sensitivity of each column of the dataset containing universe in a dictionary
bounded_global_sensitivity_per_column = {}
for column in columns:
# We calculate all the possible release datasets
# First we obtain the combinations within the release dataset. The size of these combinations is not the original size
# but the original size minus the hamming_distance /// it could be a dataframe or a tuple
try:
release_i_datasets = itertools.combinations(universe_subset[column], universe_subset.shape[0] - hamming_distance)
except:
release_i_datasets = itertools.combinations(universe_subset, universe_subset.shape[0] - hamming_distance)
release_i_datasets = list(release_i_datasets)
# It will contain sets of neighboring datasets. The L1 norm will be calculated between these sets and the maximum chosen.
# Datasets from different groups are not necessarily neighbors, thus we separate them into groups
neighbor_dataset_groups = []
for release_i_dataset in release_i_datasets:
# second we calculate the combinations of the items in the universe that are not in the release dataset
# the size of a combination is equal to the hamming distance. This will not discard duplicates
symmetric_difference = list((Counter(universe[column]) - Counter(release_i_dataset)).elements())
release_ii_datasets = itertools.combinations(symmetric_difference, hamming_distance)
release_ii_datasets = list(release_ii_datasets)
# We create neighboring datasets by concatenating i with ii
neighbor_datasets = []
for release_ii_dataset in release_ii_datasets:
neighbor = list(release_i_dataset + release_ii_dataset)
neighbor_datasets.append(neighbor)
neighbor_dataset_groups.append(neighbor_datasets)
# We calculate the L1_norm for the different combinations with the aim to find the max
# We can loop in this manner because we are obtaining the absolute values
L1_norms = []
for m in range(0, len(neighbor_dataset_groups)):
for i in range(0, len(neighbor_dataset_groups[m])-1):
for j in range(i+1, len(neighbor_dataset_groups[m])):
L1_norm = np.abs(query.run_query(neighbor_dataset_groups[m][i], percentile) - query.run_query(neighbor_dataset_groups[m][j], percentile))
L1_norms.append(L1_norm)
bounded_global_sensitivity_per_column[column] = max(L1_norms)
return bounded_global_sensitivity_per_column
```
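To see the pairwise-within-group logic in isolation, here is a stripped-down sketch (my own reconstruction, median only): neighbors that share the same kept records are compared against each other, never across groups. For the toy universe {1, 2, 3, 10} and release {1, 2, 3} the maximum pairwise difference is 1.0:

```python
from collections import Counter
from itertools import combinations

import numpy as np

def local_sensitivity_median_bounded(universe, release, h=1):
    """Brute-force bounded local sensitivity of the median: neighbors keep the
    cardinality of the release dataset and differ in h records."""
    diffs = []
    for kept in combinations(release, len(release) - h):
        # Candidates for the h swapped-in records come from the multiset difference
        remaining = list((Counter(universe) - Counter(kept)).elements())
        neighbors = [list(kept) + list(extra) for extra in combinations(remaining, h)]
        # Pairwise comparison within the group sharing the same kept records
        for i in range(len(neighbors) - 1):
            for j in range(i + 1, len(neighbors)):
                diffs.append(abs(np.median(neighbors[i]) - np.median(neighbors[j])))
    return max(diffs)

print(local_sensitivity_median_bounded([1, 2, 3, 10], [1, 2, 3]))  # 1.0
```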
## MAIN
### Unbounded sensitivity - Scenario a
Side note: I toggle the printing of the sensitivity results on and off depending on the verbosity each explanation needs.
Let us begin with the small dataset and a hamming distance of 1 (the only value allowed given the shapes of the release and universe datasets)
```python
D_small_release
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Pat</td>
<td>3</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
```python
D_small_universe
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Pat</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>Terry</td>
<td>4</td>
<td>10</td>
</tr>
</tbody>
</table>
</div>
##### In the aforementioned [paper](https://git.gnunet.org/bibliography.git/plain/docs/Choosing-%CE%B5-2011Lee.pdf), Table 4 shows the local sensitivities for different possible worlds. Let us check them one by one:
##### For {1, 2, 3}, the local sensitivity of the median should be 0.5 for absence_days:
```python
columns = ['school_year', 'absence_days']
hamming_distance = 1
unbounded_sensitivities = {}
unbounded_sensitivities = calculate_unbounded_sensitivities(D_small_universe, D_small_release, columns, hamming_distance, unbounded_sensitivities)
unbounded_sensitivities[hamming_distance]['median']
```
Unbounded sensitivities for mean {'school_year': 0.5, 'absence_days': 2.0}
Unbounded sensitivities for median {'school_year': 0.5, 'absence_days': 0.5}
Unbounded sensitivities for count {'school_year': 1, 'absence_days': 1}
Unbounded sensitivities for sum {'school_year': 4, 'absence_days': 10}
Unbounded sensitivities for std {'school_year': 0.31649658092772603, 'absence_days': 2.7190373250050115}
Unbounded sensitivities for var {'school_year': 0.5833333333333334, 'absence_days': 11.833333333333334}
Unbounded sensitivities for percentile 25 {'school_year': 0.75, 'absence_days': 0.75}
Unbounded sensitivities for percentile 50 {'school_year': 0.5, 'absence_days': 0.5}
Unbounded sensitivities for percentile 75 {'school_year': 0.75, 'absence_days': 2.25}
Unbounded sensitivities for percentile 90 {'school_year': 0.9000000000000004, 'absence_days': 5.100000000000001}
{'school_year': 0.5, 'absence_days': 0.5}
My function works as intended for the benchmark dataset: the results for the median are the same as in the paper, which does not deal with other types of queries. (We also compute percentile 50 to cross-check it against the median.)
I printed the other queries so you can have a look at them as well; however, I will stop printing all of them in the next cells.
##### For {1, 2, 10}, the local sensitivity of the median should be 4 for absence_days:
```python
D_small_release_test = D_small_release
D_small_release_test = D_small_release_test.drop(['absence_days'], axis=1)
D_small_release_test['absence_days'] = [1, 2, 10]
D_small_release_test
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Pat</td>
<td>3</td>
<td>10</td>
</tr>
</tbody>
</table>
</div>
```python
unbounded_sensitivities = {}
unbounded_sensitivities = calculate_unbounded_sensitivities(D_small_universe, D_small_release_test, columns, hamming_distance, unbounded_sensitivities)
unbounded_sensitivities[hamming_distance]['median']
```
{'school_year': 0.5, 'absence_days': 4.0}
##### For {1, 3, 10}, the local sensitivity of the median should be 3.5 for absence_days:
```python
D_small_release_test = D_small_release
D_small_release_test = D_small_release_test.drop(['absence_days'], axis=1)
D_small_release_test['absence_days'] = [1, 3, 10]
D_small_release_test
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>2</th>
<td>Pat</td>
<td>3</td>
<td>10</td>
</tr>
</tbody>
</table>
</div>
```python
unbounded_sensitivities = {}
unbounded_sensitivities = calculate_unbounded_sensitivities(D_small_universe, D_small_release_test, columns, hamming_distance, unbounded_sensitivities)
unbounded_sensitivities[hamming_distance]['median']
```
{'school_year': 0.5, 'absence_days': 3.5}
##### For {2, 3, 10}, the local sensitivity of the median should be 3.5:
```python
D_small_release_test = D_small_release
D_small_release_test = D_small_release_test.drop(['absence_days'], axis=1)
D_small_release_test['absence_days'] = [2, 3, 10]
D_small_release_test
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>2</th>
<td>Pat</td>
<td>3</td>
<td>10</td>
</tr>
</tbody>
</table>
</div>
```python
unbounded_sensitivities = {}
unbounded_sensitivities = calculate_unbounded_sensitivities(D_small_universe, D_small_release_test, columns, hamming_distance, unbounded_sensitivities)
unbounded_sensitivities[hamming_distance]['median']
```
{'school_year': 0.5, 'absence_days': 3.5}
Each of these tests has passed. The maximum of these local sensitivities is the global sensitivity, as explained in the paper; here that would be 4. If you scroll down to the MEDIAN point of my other [blog post](https://github.com/gonzalo-munillag/Blog/blob/main/Extant_Papers_Implementations/A_method_to_choose_epsilon/How_much_is_enough_Calculating_An_Optimal_Epsilon.ipynb), you will see that we also calculated 4 for the unbounded sensitivity.
```latex
%%latex
\begin{align}
\ell_{1, \mbox{sensitivity}}: \Delta f=\max_{\substack{
{x \in \mathbb{N}^{(\mathcal{X})}} \\
}} LS(x)
\end{align}
```
\begin{align}
\ell_{1, \mbox{sensitivity}}: \Delta f=\max_{\substack{
{x \in \mathbb{N}^{(\mathcal{X})}} \\
}} LS(x)
\end{align}
We could therefore loop through all the combinations of release datasets, apply our local sensitivity function, and pick the maximum L1 norm.
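That procedure can be sketched compactly for the median on the small benchmark universe {1, 2, 3, 10} (a self-contained reconstruction, not the notebook's own loop); the maximum local sensitivity it recovers is 4, matching the paper:

```python
from collections import Counter
from itertools import combinations

import numpy as np

def local_sensitivity_median_unbounded(universe, release, h=1):
    """Brute-force unbounded local sensitivity of the median at hamming distance h."""
    base = np.median(list(release))
    diffs = []
    # Neighbors with h fewer records
    for smaller in combinations(release, len(release) - h):
        diffs.append(abs(base - np.median(smaller)))
    # Neighbors with h more records, drawn from the multiset difference
    remaining = list((Counter(universe) - Counter(release)).elements())
    for extra in combinations(remaining, h):
        diffs.append(abs(base - np.median(list(release) + list(extra))))
    return max(diffs)

universe = [1, 2, 3, 10]
# Global sensitivity: maximum local sensitivity over every possible release dataset
global_sensitivity = max(local_sensitivity_median_unbounded(universe, r)
                         for r in combinations(universe, 3))
print(global_sensitivity)  # 4.0
```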
#### Let us try our function with a larger dataset, which also allows for a larger hamming distance.
```python
D_large_universe
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Keny</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>Sherry</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>Jerry</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<th>5</th>
<td>Morty</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<th>6</th>
<td>Beth</td>
<td>7</td>
<td>7</td>
</tr>
<tr>
<th>7</th>
<td>Summer</td>
<td>8</td>
<td>8</td>
</tr>
<tr>
<th>8</th>
<td>Squanchy</td>
<td>9</td>
<td>15</td>
</tr>
<tr>
<th>9</th>
<td>Rick</td>
<td>9</td>
<td>20</td>
</tr>
</tbody>
</table>
</div>
```python
D_large_release
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Keny</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>Sherry</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>Jerry</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<th>5</th>
<td>Morty</td>
<td>5</td>
<td>6</td>
</tr>
</tbody>
</table>
</div>
```python
columns = ['school_year', 'absence_days']
hamming_distances = [1, 2, 3, 4]
unbounded_sensitivities = {}
for hamming_distance in hamming_distances:
print('Hamming distance = ', hamming_distance)
unbounded_sensitivities = calculate_unbounded_sensitivities(D_large_universe, D_large_release, columns, hamming_distance, unbounded_sensitivities)
```
Hamming distance = 1
Unbounded sensitivities for mean {'school_year': 0.8809523809523809, 'absence_days': 2.3571428571428568}
Unbounded sensitivities for median {'school_year': 0.0, 'absence_days': 0.5}
Unbounded sensitivities for count {'school_year': 1, 'absence_days': 1}
Unbounded sensitivities for sum {'school_year': 9, 'absence_days': 20}
Unbounded sensitivities for std {'school_year': 1.0306508339365563, 'absence_days': 4.278553969413484}
Unbounded sensitivities for var {'school_year': 4.303287981859411, 'absence_days': 32.92006802721088}
Unbounded sensitivities for percentile 25 {'school_year': 0.0, 'absence_days': 0.75}
Unbounded sensitivities for percentile 50 {'school_year': 0.0, 'absence_days': 0.5}
Unbounded sensitivities for percentile 75 {'school_year': 2.25, 'absence_days': 0.75}
Unbounded sensitivities for percentile 90 {'school_year': 1.6000000000000014, 'absence_days': 6.100000000000005}
Hamming distance = 2
Unbounded sensitivities for mean {'school_year': 1.5416666666666665, 'absence_days': 3.5}
Unbounded sensitivities for median {'school_year': 1.5, 'absence_days': 1.0}
Unbounded sensitivities for count {'school_year': 2, 'absence_days': 2}
Unbounded sensitivities for sum {'school_year': 18, 'absence_days': 35}
Unbounded sensitivities for std {'school_year': 1.4250645133943491, 'absence_days': 4.656135903018995}
Unbounded sensitivities for var {'school_year': 6.512152777777779, 'absence_days': 37.583333333333336}
Unbounded sensitivities for percentile 25 {'school_year': 0.25, 'absence_days': 1.5}
Unbounded sensitivities for percentile 50 {'school_year': 1.5, 'absence_days': 1.0}
Unbounded sensitivities for percentile 75 {'school_year': 2.25, 'absence_days': 3.5}
Unbounded sensitivities for percentile 90 {'school_year': 4.0, 'absence_days': 11.0}
Hamming distance = 3
Unbounded sensitivities for mean {'school_year': 1.9444444444444442, 'absence_days': 3.6111111111111107}
Unbounded sensitivities for median {'school_year': 3.0, 'absence_days': 1.5}
Unbounded sensitivities for count {'school_year': 3, 'absence_days': 3}
Unbounded sensitivities for sum {'school_year': 26, 'absence_days': 43}
Unbounded sensitivities for std {'school_year': 1.5723301886761005, 'absence_days': 4.3003996877159665}
Unbounded sensitivities for var {'school_year': 6.81172839506173, 'absence_days': 33.1820987654321}
Unbounded sensitivities for percentile 25 {'school_year': 1.5, 'absence_days': 2.25}
Unbounded sensitivities for percentile 50 {'school_year': 3.0, 'absence_days': 1.5}
Unbounded sensitivities for percentile 75 {'school_year': 3.75, 'absence_days': 3.25}
Unbounded sensitivities for percentile 90 {'school_year': 4.0, 'absence_days': 10.5}
Hamming distance = 4
Unbounded sensitivities for mean {'school_year': 2.1666666666666665, 'absence_days': 3.5999999999999996}
Unbounded sensitivities for median {'school_year': 3.0, 'absence_days': 2.0}
Unbounded sensitivities for count {'school_year': 4, 'absence_days': 4}
Unbounded sensitivities for sum {'school_year': 33, 'absence_days': 50}
Unbounded sensitivities for std {'school_year': 1.5723301886761005, 'absence_days': 3.9921748723400663}
Unbounded sensitivities for var {'school_year': 6.327777777777779, 'absence_days': 29.573333333333327}
Unbounded sensitivities for percentile 25 {'school_year': 3.0, 'absence_days': 3.0}
Unbounded sensitivities for percentile 50 {'school_year': 3.0, 'absence_days': 2.0}
Unbounded sensitivities for percentile 75 {'school_year': 3.5, 'absence_days': 3.0}
Unbounded sensitivities for percentile 90 {'school_year': 4.0, 'absence_days': 9.999999999999998}
##### For the comparison of values you may check cell 24 of the notebook of my previous [blog post](https://github.com/gonzalo-munillag/Blog/blob/main/My_implementations/Global_sensitivity/Global_Sensitivity.ipynb)
### Visualization
```python
plt.figure(figsize=(15, 7))
query_types = ['mean', 'median', 'count', 'sum', 'std', 'var', 'percentile_25', 'percentile_50', 'percentile_75', 'percentile_90']
x_values = []
for key in unbounded_sensitivities.keys():
x_values.append(key)
for index_column, column in enumerate(columns):
    # Start the subplot for this column
    plt.subplot(1, len(columns), index_column + 1)
    query_type_legend_handles = []
    for query_type in query_types:
        y_values = []
        for hamming_distance in unbounded_sensitivities.keys():
            y_values.append(unbounded_sensitivities[hamming_distance][query_type][column])
        # plot the sensitivities
        legend_handle, = plt.plot(x_values, y_values, label=query_type)
        query_type_legend_handles.append(legend_handle)
    # Legends
    legend = plt.legend(handles=query_type_legend_handles, bbox_to_anchor=(0., -0.2, 1., .102),
                        ncol=5, mode="expand", borderaxespad=0.)
    plt.gca().add_artist(legend)
    # axis labels and titles
    plt.xlabel('Hamming distance')
    plt.ylabel('Local Sensitivity')
    plt.title('{}) Universe Domain {} = {}'.format(index_column + 1, column, D_large_universe[column].values))
plt.suptitle('Local sensitivities based on unbounded DP of different queries for different domains and hamming distances')
plt.show()
```
##### For the comparison of plots you may check cell 26 of the notebook of my previous [blog post](https://github.com/gonzalo-munillag/Blog/blob/main/My_implementations/Global_sensitivity/Global_Sensitivity.ipynb)
It is interesting to see that the sensitivity for the variance declines, and that at least for the sum the local sensitivity seems to equal the global sensitivity; the rest have lower values. This means that this particular dataset is the one with the maximum sensitivity for the sum query.
Looking at the raw data, you might have thought that if a dataset yields the maximum local sensitivity for a particular hamming distance and query, then it also yields the maximum for the remaining hamming distances. The next set of raw data for unbounded sensitivity corroborates that this is not true: e.g. the median is sometimes the maximum local sensitivity and sometimes not.
Let us find the global sensitivity for a hamming distance of 1 by iterating through all the possible combinations of the release dataset. Afterward, we will compare with the prior results. (I will comment out the printing of the sensitivity results to keep the output less verbose.)
```python
# We set the hamming distance and arrays to save the sensitivities
hamming_distance = 1
median_local_sensitivities = {}
unbounded_sensitivities = {}
columns = ['school_year', 'absence_days']
query_type = 'median'
for column in columns:
    # We get all the possible release datasets based on the universe
    release_datasets = itertools.combinations(D_large_universe[column], D_large_release.shape[0])
    release_datasets = list(release_datasets)
    median_local_sensitivities[column] = []
    for release_dataset in release_datasets:
        # Transform the release_dataset into the appropriate format
        release_dataset = pd.Series(release_dataset)
        median_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(D_large_universe, release_dataset, columns, query_type, hamming_distance)
        median_local_sensitivities[column].append(median_unbounded_global_sensitivities[column])
```
```python
for column in columns:
    max_sensitivity = np.max(median_local_sensitivities[column])
    print('Max local sensitivity (=global sensitivity) of {} is {}'.format(column, max_sensitivity))
```
Max local sensitivity (=global sensitivity) of school_year is 3.0
Max local sensitivity (=global sensitivity) of absence_days is 2.5
##### Find these results in cell 25 of the [blog post](https://github.com/gonzalo-munillag/Blog/blob/main/My_implementations/Global_sensitivity/Global_Sensitivity.ipynb). Look at hamming distance 1 and the median: 3 and 2.5
Here are the results for a hamming distance of 2, again matching the previous blog post: 3.5 and 6.
```python
# We set the hamming distance and arrays to save the sensitivities
hamming_distance = 2
median_local_sensitivities = {}
unbounded_sensitivities = {}
columns = ['school_year', 'absence_days']
query_type = 'median'
for column in columns:
    # We get all the possible release datasets based on the universe
    release_datasets = itertools.combinations(D_large_universe[column], D_large_release.shape[0])
    release_datasets = list(release_datasets)
    median_local_sensitivities[column] = []
    for release_dataset in release_datasets:
        # Transform the release_dataset into the appropriate format
        release_dataset = pd.Series(release_dataset)
        median_unbounded_global_sensitivities = unbounded_empirical_local_L1_sensitivity(D_large_universe, release_dataset, columns, query_type, hamming_distance)
        median_local_sensitivities[column].append(median_unbounded_global_sensitivities[column])
```
```python
for column in columns:
    max_sensitivity = np.max(median_local_sensitivities[column])
    print('Max local sensitivity (=global sensitivity) of {} is {}'.format(column, max_sensitivity))
```
Max local sensitivity (=global sensitivity) of school_year is 3.5
Max local sensitivity (=global sensitivity) of absence_days is 6.0
### Bounded local sensitivity
Let us begin with the small dataset and a hamming distance of 1 (the only one allowed given the shapes of the release and universe datasets):
```python
D_small_release
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Pat</td>
<td>3</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
```python
D_small_universe
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Pat</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>Terry</td>
<td>4</td>
<td>10</td>
</tr>
</tbody>
</table>
</div>
##### For {1, 2, 3}, the local sensitivity of the median should be 1 for absence_days (see the text above inequality 16 in point 6 of the paper):
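Before calling the full helper, the claim can be sanity-checked with a tiny stand-alone sketch. Under bounded DP, neighbors at hamming distance 1 replace exactly one record with another value from the universe; the function below is a hypothetical illustration, not the notebook's `calculate_bounded_sensitivities`.

```python
import numpy as np

def bounded_local_sensitivity_median(data, universe):
    """Bounded local sensitivity of the median at hamming distance 1:
    each neighbor replaces exactly one record with a universe value."""
    base = np.median(data)
    worst = 0.0
    for i in range(len(data)):
        for v in universe:
            neighbor = list(data)
            neighbor[i] = v  # swap one record
            worst = max(worst, abs(np.median(neighbor) - base))
    return worst

# absence_days of the small release {1, 2, 3} against the universe {1, 2, 3, 10}
print(bounded_local_sensitivity_median([1, 2, 3], universe=[1, 2, 3, 10]))  # -> 1.0
```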
```python
columns = ['school_year', 'absence_days']
hamming_distance = 1
bounded_sensitivities = {}
bounded_sensitivities = calculate_bounded_sensitivities(D_small_universe, D_small_release, columns, hamming_distance, bounded_sensitivities)
```
Bounded sensitivities for mean {'school_year': 1.0, 'absence_days': 3.0}
Bounded sensitivities for median {'school_year': 1.0, 'absence_days': 1.0}
Bounded sensitivities for count {'school_year': 0, 'absence_days': 0}
Bounded sensitivities for sum {'school_year': 3, 'absence_days': 9}
Bounded sensitivities for std {'school_year': 0.430722547996921, 'absence_days': 3.2111854102704642}
Bounded sensitivities for var {'school_year': 0.8888888888888887, 'absence_days': 15.555555555555555}
Bounded sensitivities for percentile 25 {'school_year': 1.0, 'absence_days': 1.0}
Bounded sensitivities for percentile 50 {'school_year': 1.0, 'absence_days': 1.0}
Bounded sensitivities for percentile 75 {'school_year': 1.0, 'absence_days': 4.0}
Bounded sensitivities for percentile 90 {'school_year': 0.9999999999999996, 'absence_days': 5.799999999999999}
Note that these values are equal to the ones obtained with global sensitivity. This is because the universe is only slightly larger than the release dataset (they differ in cardinality by 1), so the neighbors of all the possible release datasets are the same. Thus, in this case it does not matter whether you compute local or global sensitivity.
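To see why, count the candidate release datasets: with only one record of slack between the universe and the release, there are just a handful of possible releases. A quick check on the absence_days column (values {1, 2, 3, 10}, as in the table above):

```python
import itertools

universe = [1, 2, 3, 10]  # absence_days values of D_small_universe
# every possible 3-record release dataset drawn from the 4-record universe
releases = list(itertools.combinations(universe, 3))
print(releases)
print(len(releases))  # -> 4
```

With only four possible releases, each obtained by leaving out a single record, the maximum over releases (the global sensitivity) coincides with the local value computed for this one.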
#### Let us try our function with a larger dataset and with a larger hamming distance.
```python
D_large_universe
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Keny</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>Sherry</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>Jerry</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<th>5</th>
<td>Morty</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<th>6</th>
<td>Beth</td>
<td>7</td>
<td>7</td>
</tr>
<tr>
<th>7</th>
<td>Summer</td>
<td>8</td>
<td>8</td>
</tr>
<tr>
<th>8</th>
<td>Squanchy</td>
<td>9</td>
<td>15</td>
</tr>
<tr>
<th>9</th>
<td>Rick</td>
<td>9</td>
<td>20</td>
</tr>
</tbody>
</table>
</div>
```python
D_large_release
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>school_year</th>
<th>absence_days</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Chris</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>Kelly</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th>2</th>
<td>Keny</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>3</th>
<td>Sherry</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>Jerry</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<th>5</th>
<td>Morty</td>
<td>5</td>
<td>6</td>
</tr>
</tbody>
</table>
</div>
Find the results in cell 33 of the [blog post](https://github.com/gonzalo-munillag/Blog/blob/main/My_implementations/Global_sensitivity/Global_Sensitivity.ipynb) in order to compare them with these. You will see that some of them are equal (this particular dataset provides the maximum local sensitivity for some hamming distances and queries), while the rest are smaller, never larger.
```python
columns = ['school_year', 'absence_days']
hamming_distances = [1, 2, 3, 4]
bounded_sensitivities = {}
for hamming_distance in hamming_distances:
    print('Hamming distance = ', hamming_distance)
    bounded_sensitivities = calculate_bounded_sensitivities(D_large_universe, D_large_release, columns, hamming_distance, bounded_sensitivities)
```
Hamming distance = 1
Bounded sensitivities for mean {'school_year': 1.3333333333333335, 'absence_days': 3.166666666666667}
Bounded sensitivities for median {'school_year': 1.5, 'absence_days': 1.0}
Bounded sensitivities for count {'school_year': 0, 'absence_days': 0}
Bounded sensitivities for sum {'school_year': 8, 'absence_days': 19}
Bounded sensitivities for std {'school_year': 1.1814550849669505, 'absence_days': 4.757896452763675}
Bounded sensitivities for var {'school_year': 5.111111111111111, 'absence_days': 38.8888888888889}
Bounded sensitivities for percentile 25 {'school_year': 0.0, 'absence_days': 1.0}
Bounded sensitivities for percentile 50 {'school_year': 1.5, 'absence_days': 1.0}
Bounded sensitivities for percentile 75 {'school_year': 0.75, 'absence_days': 1.0}
Bounded sensitivities for percentile 90 {'school_year': 2.0, 'absence_days': 7.5}
Hamming distance = 2
Bounded sensitivities for mean {'school_year': 2.4999999999999996, 'absence_days': 5.333333333333334}
Bounded sensitivities for median {'school_year': 3.0, 'absence_days': 2.0}
Bounded sensitivities for count {'school_year': 0, 'absence_days': 0}
Bounded sensitivities for sum {'school_year': 15, 'absence_days': 32}
Bounded sensitivities for std {'school_year': 1.8635911660052833, 'absence_days': 5.566559153271799}
Bounded sensitivities for var {'school_year': 9.333333333333336, 'absence_days': 50.0}
Bounded sensitivities for percentile 25 {'school_year': 0.75, 'absence_days': 2.0}
Bounded sensitivities for percentile 50 {'school_year': 3.0, 'absence_days': 2.0}
Bounded sensitivities for percentile 75 {'school_year': 3.75, 'absence_days': 8.0}
Bounded sensitivities for percentile 90 {'school_year': 4.0, 'absence_days': 12.0}
Hamming distance = 3
Bounded sensitivities for mean {'school_year': 3.4999999999999996, 'absence_days': 6.166666666666666}
Bounded sensitivities for median {'school_year': 4.5, 'absence_days': 3.5}
Bounded sensitivities for count {'school_year': 0, 'absence_days': 0}
Bounded sensitivities for sum {'school_year': 21, 'absence_days': 37}
Bounded sensitivities for std {'school_year': 1.9592731613934147, 'absence_days': 5.566559153271799}
Bounded sensitivities for var {'school_year': 10.000000000000002, 'absence_days': 50.0}
Bounded sensitivities for percentile 25 {'school_year': 3.0, 'absence_days': 3.0}
Bounded sensitivities for percentile 50 {'school_year': 4.5, 'absence_days': 3.5}
Bounded sensitivities for percentile 75 {'school_year': 4.5, 'absence_days': 8.5}
Bounded sensitivities for percentile 90 {'school_year': 4.0, 'absence_days': 12.0}
Hamming distance = 4
Bounded sensitivities for mean {'school_year': 4.333333333333334, 'absence_days': 6.666666666666666}
Bounded sensitivities for median {'school_year': 5.5, 'absence_days': 4.0}
Bounded sensitivities for count {'school_year': 0, 'absence_days': 0}
Bounded sensitivities for sum {'school_year': 26, 'absence_days': 40}
Bounded sensitivities for std {'school_year': 1.9592731613934147, 'absence_days': 5.566559153271799}
Bounded sensitivities for var {'school_year': 10.000000000000002, 'absence_days': 50.0}
Bounded sensitivities for percentile 25 {'school_year': 3.5, 'absence_days': 4.0}
Bounded sensitivities for percentile 50 {'school_year': 5.5, 'absence_days': 4.0}
Bounded sensitivities for percentile 75 {'school_year': 4.5, 'absence_days': 8.5}
Bounded sensitivities for percentile 90 {'school_year': 4.0, 'absence_days': 12.0}
### Visualization
```python
plt.figure(figsize=(15, 7))
query_types = ['mean', 'median', 'count', 'sum', 'std', 'var', 'percentile_25', 'percentile_50', 'percentile_75', 'percentile_90']
x_values = []
for key in bounded_sensitivities.keys():
    x_values.append(key)
for index_column, column in enumerate(columns):
    # Start the plot
    plot_index = int(str(1) + str(len(columns)) + str(index_column+1))
    plt.subplot(plot_index)
    query_type_legend_handles = []
    for query_type in query_types:
        y_values = []
        for hamming_distance in bounded_sensitivities.keys():
            y_values.append(bounded_sensitivities[hamming_distance][query_type][column])
        # plot the sensitivities
        legend_handle, = plt.plot(x_values, y_values, label=query_type)
        query_type_legend_handles.append(legend_handle)
    # Legends
    legend = plt.legend(handles=query_type_legend_handles, bbox_to_anchor=(0., -0.2, 1., .102), \
                        ncol=5, mode="expand", borderaxespad=0.)
    legend_plot = plt.gca().add_artist(legend)
    # axis labels and titles
    plt.xlabel('Hamming distance')
    plt.ylabel('Local Sensitivity')
    plt.title('{}) Universe Domain {} = {}'.format(index_column+1, column, D_large_universe[column].values))
plt.suptitle('Local sensitivities based on bounded DP of different queries for different domains for different hamming distances')
plt.show()
```
The sum, the mean, the std, and the variance seem to be the same as in global sensitivity. You can also see that, of course, none of the values are higher than in global sensitivity.
#### Let us find the global sensitivity for a hamming distance of 1 by iterating through all the possible combinations of the release dataset. Then, we will compare this result with my previous [blog post](https://github.com/gonzalo-munillag/Blog/blob/main/My_implementations/Global_sensitivity/Global_Sensitivity.ipynb) (I will comment out the printing of the sensitivity results to keep the output less verbose). It should be 3 and 2.5, as per cell 33.
```python
# We set the hamming distance and arrays to save the sensitivities
hamming_distance = 1
median_local_sensitivities = {}
bounded_sensitivities = {}
columns = ['school_year', 'absence_days']
query_type = 'median'
for column in columns:
    # We get all the possible release datasets based on the universe
    release_datasets = itertools.combinations(D_large_universe[column], D_large_release.shape[0])
    release_datasets = list(release_datasets)
    median_local_sensitivities[column] = []
    for release_dataset in release_datasets:
        # Transform the release_dataset into the appropriate format
        release_dataset = pd.Series(release_dataset)
        median_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(D_large_universe, release_dataset, columns, query_type, hamming_distance)
        median_local_sensitivities[column].append(median_bounded_global_sensitivities[column])
```
```python
for column in columns:
    max_sensitivity = np.max(median_local_sensitivities[column])
    print('Max local sensitivity (=global sensitivity) of {} is {}'.format(column, max_sensitivity))
```
Max local sensitivity (=global sensitivity) of school_year is 3.0
Max local sensitivity (=global sensitivity) of absence_days is 2.5
#### For a hamming distance of 2, the values should be 5.5 and 4, as in cell 33 of the [blog post](https://github.com/gonzalo-munillag/Blog/blob/main/My_implementations/Global_sensitivity/Global_Sensitivity.ipynb)
```python
# We set the hamming distance and arrays to save the sensitivities
hamming_distance = 2
median_local_sensitivities = {}
bounded_sensitivities = {}
columns = ['school_year', 'absence_days']
query_type = 'median'
for column in columns:
    # We get all the possible release datasets based on the universe
    release_datasets = itertools.combinations(D_large_universe[column], D_large_release.shape[0])
    release_datasets = list(release_datasets)
    median_local_sensitivities[column] = []
    for release_dataset in release_datasets:
        # Transform the release_dataset into the appropriate format
        release_dataset = pd.Series(release_dataset)
        median_bounded_global_sensitivities = bounded_empirical_local_L1_sensitivity(D_large_universe, release_dataset, columns, query_type, hamming_distance)
        median_local_sensitivities[column].append(median_bounded_global_sensitivities[column])
```
```python
for column in columns:
    max_sensitivity = np.max(median_local_sensitivities[column])
    print('Max local sensitivity (=global sensitivity) of {} is {}'.format(column, max_sensitivity))
```
Max local sensitivity (=global sensitivity) of school_year is 5.5
Max local sensitivity (=global sensitivity) of absence_days is 4.0
|
#include <boost/mpl/aux_/comparison_op.hpp>
|
If $S$ is locally path-connected, then the connected component of $S$ containing $x$ is also locally path-connected. |
Rafael Nadal became the first man to win eight titles at the same Grand Slam tournament after beating fellow Spaniard David Ferrer 6-3, 6-2, 6-3 in the French Open final on Sunday.
If Rafael Nadal truly was going to be challenged, if his bid for an unprecedented eighth French Open championship would be slowed even a bit, this might have been the moment.
Ferrer glared at the ball as it flew past and landed in a corner, then smiled ruefully. What else was there to do? Dealing with Nadal's defence-to-offence on red clay is a thankless task. The rain-soaked 6-3, 6-2, 6-3 victory over Ferrer on Sunday was Nadal's record 59th win in 60 matches at the French Open and made him the only man with eight titles at any Grand Slam tournament.
"Winning 17 Grand Slam titles, that's miles away," Nadal said with his typical humility. "I'm not even thinking about it."
Let's be plain: No one, perhaps not even Ferrer himself, expected Nadal to lose Sunday.
Nadal had yet to make his French Open debut then, missing it that year because of a broken left foot. On May 23, 2005, Nadal played his first match at Roland Garros, beating Lars Burgsmuller 6-1, 7-6 (4), 6-1 on Court 1, known as the "bullring" because of its oval shape.
There was occasional shakiness this year. Nadal lost the first set of each of his first two matches, and was pushed to a tiebreaker to begin his third. His fourth match, a straight-set win against No. 15 Kei Nishikori, "was a major step forward," Nadal said. Still, he barely edged No. 1-ranked Novak Djokovic in a thrilling semifinal that lasted more than 4 1/2 hours and ended 9-7 in the fifth set Friday.
By any measure, that match was far more enjoyable to take in than the final, akin to dining on a filet mignon accompanied by a well-aged bottle of Bordeaux — each bite and sip rich, textured — one day, then grabbing a hot dog and can of soda from a street vendor 48 hours later.
That's when Nadal took over, winning seven games in a row and 12 of 14 to render the ultimate result pretty clear. It was as if he simply decided, "Enough is enough." His court coverage was impeccable, as usual, showing no signs of any problems from that left knee, which was supported by a band of white tape. His lefty forehand whips were really on-target, accounting for 19 of his 35 winners and repeatedly forcing errors from Ferrer.
Yes, Nadal is No. 1 at the French Open, without a doubt. When the ATP rankings are issued Monday, however, he will be No. 5, due to points he dropped while hurt. Oddly enough, Ferrer will be at No. 4.
"Yeah, it's strange, no? I lost the final against Rafael, but tomorrow I am going to be No. 4 and him No. 5," Ferrer said with a grin, then delivered his punchline: "I prefer to win here and to stay No. 5." |
lemma prime_power_mult_nat: fixes p :: nat assumes p: "prime p" and xy: "x * y = p ^ k" shows "\<exists>i j. x = p ^ i \<and> y = p^ j" |
Formal statement is: lemma is_pole_transform: assumes "is_pole f a" "eventually (\<lambda>x. f x = g x) (at a)" "a=b" shows "is_pole g b" Informal statement is: If $f$ has a pole at $a$, and $f$ and $g$ agree in a neighborhood of $a$, then $g$ has a pole at $a$. |
With this additional money, the Academy was able to organise a survey of the state of the humanities and social sciences in the United Kingdom, authoring a report that was published by Oxford University Press in 1961 as Research in the Humanities and the Social Sciences. On the basis of this report, Wheeler was able to secure a dramatic rise in funding from the British Treasury; they increased their annual grant to £25,000, and promised that this would increase to £50,000 shortly after. According to his later biographer Jacquetta Hawkes, in doing so Wheeler raised the position of the Academy to that of "the main source of official patronage for the humanities" within the United Kingdom, while Piggott stated that he set the organisation upon its "modern course".
|
#' ---
#' title: "Introduction to R"
#' author:
#' - "Milutin Pejovic, Petar Bursac"
#' date: "`r format(Sys.time(), '%d %B %Y')`"
#' output:
#' html_document:
#' keep_md: true
#' theme: "simplex"
#' highlight: tango
#' toc: true
#' toc_depth: 5
#' toc_float: true
#' fig_caption: yes
#' ---
#'
#' # The programming environment
#+ echo = FALSE, warning = FALSE, message = FALSE, fig.width = 10, fig.height = 8, fig.align='center'
knitr::include_graphics("Figures/01-RConsole.jpg")
knitr::include_graphics("Figures/02-RStudio.jpg")
#'
#'
#'
#'
#' # Basic mathematical operations #
#'
#' R can be used like any calculator, with the basic commands:
#' Addition
2+2
1+2+3+4+5
#'
#' Subtraction
10-1
5-6
#'
#' Multiplication
2*2
1*2*3
#'
#' Division
4/2
10/2/4
#'
#' Using parentheses:
10/2+2
10/(2+2)
(10/2)+2
#'
#'
#' Exponentiation and roots
2^2
3^4
1^0
sqrt(4)
#'
#'
#' > <h3>Exercise 1</h3>
#' > + Find the operator that returns the integer part of a division (e.g. 111 `operator` 100 = 1).
#' > + Find the operator that returns the decimal part of a division (e.g. 111 `operator` 100 = 0.11).
#'
#'
#'
#'
#' # Calling functions in R
#'
#' Logarithms:
log(0)
log(1)
#'
#' as well as logarithms with an arbitrary base:
log10(1)
log10(10)
log2(1)
log2(2)
logb(1,5) # Two arguments
logb(5,base=5) # Two arguments, one of them named
#'
#' The natural exponential:
exp(0)
exp(1)
#'
#' As well as many other mathematical operations:
#' Absolute value
abs(-10)
#'
#' Factorial:
factorial(10)
#'
#'
#' # Adding comments
#'
#' Adding comments is a very important part of documenting code.
#'
#'
#'
#' Any piece of text preceded by (`#`) becomes a comment and is `not` executed by R.
2+2 # This is a comment. The code before the `#` is still evaluated by R.
#'
#' Some programming languages allow so-called multi-line comments. R does not support them; in R each comment line is separate.
#' RStudio lets you `turn` several lines of code into comments at once with the single shortcut `Ctrl+Shift+C`.
#'
#'
#'
#'
#' # Variables (objects)
#'
#'
#' R lets you attach a name to any value so that it can later be used by calling that name. By naming a value (you can also name a function or a collection of values) we create a `variable`.
#' For example, instead of immediately seeing the result of `2+2`, we can store it under a name and look at it later:
a <- 2+2
#' To see the result `hiding` behind that name, we just type the name and press `Enter`:
a
#' As you can see, the assignment operator is `<-`; it binds the result of the operation on the right-hand side to the name on the left-hand side.
#' The same result is obtained with the `=` sign (but it is avoided, for code readability and because the equals sign is widely used elsewhere):
a = 2+2
a
#' The operator can also be used in the `opposite` direction, although this is not customary:
2+2 -> a
a
#'
#' It follows from all of this that a result stored under a name can be used in other operations.
#' For example, to add the number 4 to the previous result, we write:
#'
a + 4
#' The same name can be reused to store another value. In that case we lose the result previously stored under that name.
a <- 2+2
a <- 3
a
#' We can make a `copy` of a variable under a different name:
b <- a
b
#' To remove the variable `a`, call `rm` (short for `remove`):
rm(a)
#' To see all the variables we have created, call `ls`:
ls()
#' This function lists every variable created during the R session (one run of R is one session). The more variables there are, the harder it is to keep track of what you are working with, and the more of the computer's memory is used.
#' To remove all variables at once, call `rm(list = ls())`.
#' We can also call:
b <- NULL
#' This command does `not delete` the variable `b`; it just leaves it `empty`.
#' Keep in mind that `NULL` is not the same as `NA`!
#'
#' There is another way to assign names, i.e. to create variables: calling the `assign` function:
assign("x",3)
x
#' This is not common at first, but it can be useful later.
#' ## Rules for naming variables ##
#'
#' R has a few simple rules that must be followed when naming variables:
#'
#' + Variable names in R are `case sensitive`, which means that `a` is not the same as `A`.
#' + Variable names in R must start with a letter.
#' + Variable names in R may contain letters, digits, and characters such as (`.`) or (`_`).
#' + Long names are best avoided.
#'
#'
#'
#' # Help
#'
#' R provides quick and simple access to `help` for every function. The help pages are simple and well organized. Help can be called in one of the following ways:
#+ eval = FALSE
help(lm)
#' or
#+ eval = FALSE
?lm
#' For an operator, backticks must be used:
#+ eval = FALSE
?`+`
#'
#' # Introduction to data structures in R
#'
#' R lets you create variables from more complex data structures, not just single values. For example, a variable can hold a vector, an array, a matrix, or a list. The kind of data structure determines how it is displayed and which operations are possible on it.
#'
#' ### Vectors, arrays, and matrices
#' A vector and an array are stored in the same way; an array additionally carries a number of dimensions, so it can be viewed as a multidimensional vector. For example, the `c` command creates a vector and the `array` command creates an array.
v <- c(1,2,3,4,5,6,7,8,9,10,11,12)
v
a <- array(v, dim=c(3,4))
a
#'
#' > <h3>Exercise 2</h3>
#' > + Find the function that creates a `2x6` matrix from the values stored in the variable `v`.
#' > + Create a vector containing the names of the students in the classroom.
#'
#' **Vectors, arrays, and matrices have in common that they can store only one type of data (most often numeric).**
#'
#'
#' ### Lists
#'
#' A list is a data structure that can store different types of data:
#'
e <- list(student="Milutin Pejovic", `broj indeksa` = 1018, `godina upisa` = 2012)
#'
#' > <h3>Exercise 3</h3>
#' > + Create a list whose first element contains all the student names, the second element all the index numbers, and the third element all the enrollment years.
#'
#' ### Data frames
#'
#' A `dataframe` is a tabular data structure in which each column can hold a different type of data. It is in fact a list whose elements are vectors of the same length. A `dataframe` is the closest analogue of an `Excel` table.
studenti <- data.frame(ime = c("Milutin", "Petar"), Prezime = c("Pejovic", "Bursac"), `Broj indeksa` = c(1018, 1023), `Godina upisa` = c(2002, 2013))
studenti
#'
#'
#' # R packages
#'
#' ### R packages extend the capabilities of the base R installation.
#'
#'
#' R packages are software units created to solve particular problems. For example, there are packages for reading various data formats, or packages for writing the obtained results to different kinds of files.
#'
#'
#' A package is installed by calling:
#'
#+ echo = TRUE, eval = FALSE
install.packages("name-of-package")
#'
#' A package is loaded with the `library` command:
#'
#+ eval = FALSE
library()
#' A package description can be viewed by calling:
#+ eval = FALSE
packageDescription("name-of-package")
#'
#'
#' A huge number of packages are hosted on R's central repository, called [CRAN/packages](https://cran.r-project.org/web/packages/index.html).
#'
#'
#'
|
module Play.RankN
example2 : ({a : Type} -> a -> a) -> (Int, String)
example2 f = (f 2, f "hello")
example2' : ({a : Type} -> a -> a) -> Int
example2' = fst . example2
-- also compiles and works fine
-- example2' f = fst . example2 $ f
example3 : (({a : Type} -> a -> a) -> Int) -> String
example3 f = show . f $ id
|
\clearpage
\subsection{Expressions (with Function Calls, Variables, and Constants)} % (fold)
\label{sub:expressions_with_variables_}
You can \textbf{read} the values from Variables and Constants within Expressions. The value of the expression is calculated by \textbf{reading} the values from the Variables and Constants when the expression is calculated\footnote{This means that the value will be affected by the statements that occurred before the expression was calculated.}.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{./topics/storing-using-data/diagrams/Expression}
\caption{Expressions can read values from Function Calls, Variables, and Constants}
\label{fig:expressions-with-variables}
\end{figure}
\mynote{
\begin{itemize}
\item Expression is the \textbf{term} given to the code that calculates values within your Statements.
\item You can read values from Function Calls, Variables, and Constants.
\item You use the Variable or Constant's name to access its value within an Expression.
\item The \nameref{sub:function_call} runs the code in the Function, and then reads the result returned.
\item There are actually \textbf{two expressions} in Figure \ref{fig:expressions-with-variables}:
\begin{enumerate}
\item The first Expression is the value passed to the \texttt{sin} function (\texttt{$deg \times PI / 180$}). This value is calculated by reading the values from the \texttt{deg} variable and the \texttt{PI} constant. These values are then used in the Expression to determine the value that is passed to the Parameter in \texttt{sin}.
\item The second Expression is the result returned from the call to the \texttt{sin} function. This will calculate the sine of the value calculated in the first expression.
\end{enumerate}
\item The Expression reads the value of the Variable \textbf{at the time} it is executed.
\item Expressions are used to calculate values that are...
\begin{itemize}
\item Passed to Parameters within \nameref{sub:procedure_call}s.
\item Assigned to Variables within \nameref{sub:assignment_statement}s.
\end{itemize}
\end{itemize}
}
% subsection expressions_with_variables_ (end) |
(************************************************************************)
(* This library includes Coq definitions of Android inter-component communication,
intents, intent filters, and the tests required for intent delivery. Proofs
about intent delivery and intent broadcast are carried out. *)
(************************************************************************)
Require Import Arith.
Require Import Atom.
Require Import List.
(************************ Basic Definitions *********************************************)
(**Decidable equality on atoms *)
Notation "x == y" :=
(eq_atom_dec x y) (at level 67) : metatheory_scope.
Open Scope metatheory_scope.
Definition eq_atom (x y : atom) : bool :=
if (x == y) then true else false.
Notation "x =?= y" :=
(eq_atom x y) (at level 67) : metatheory_scope.
(**Finds a name in the list.*)
Fixpoint find (n: atom) (nl: list atom) : bool :=
match nl with
| nil => false
| cons h tl => if (h =?= n) then true else find n tl
end.
(**Boolean and*)
Definition andb (b1 b2: bool) : bool :=
match b1, b2 with
| true, _ => b2
| false, _ => false
end.
(**Boolean or*)
Definition orb (b1 b2: bool) : bool :=
match b1, b2 with
| false, false => false
| _, _ => true
end.
Notation "b1 & b2" := (andb b1 b2) (at level 60, right associativity).
(****************************************************************************************)
(**************************** IPC Formalization *****************************************)
(****************************************************************************************)
(*************************** URI, Intent, Filter ****************************************)
(*Content URI: optional scheme, host, port, and path:*)
Inductive uri: Type :=
| valid_uri: option atom -> option atom -> option nat -> option atom -> uri.
(**Implicit intent: action, category, data (uri, MIME type)*)
Inductive intent: Type :=
int: atom -> list atom -> uri -> atom -> intent.
(**Intent filter: contains specifications of actions, categories, URIs and MIME types.*)
Inductive filter: Type :=
filt: list atom -> list atom -> list uri -> list atom -> filter.
(*************************** Testing ACTION ********************************************)
(**Finds a match for an intent action in the filter actions.*)
Definition testaction (action : atom) (fil : filter) : bool :=
match fil with
| filt actions _ _ _ => find action actions
end.
(*************************** Testing CATEGORIES ****************************************)
(**Checks if ALL intent categories exist in the filter.*)
Fixpoint testcategory (intentcats : list atom) (filtercats : list atom ) {struct intentcats} : bool :=
match intentcats with
| nil => true
| cons x l => match (find x filtercats) with
| false => false
| true => testcategory l filtercats
end
end.
(*************************** Testing URI ************************************************)
(*Compare optional filter URI attribute with an intent URI attribute. *)
Definition cmpoattr (on1 on2: option atom) : bool :=
match on1, on2 with
| None, _ => true (*if attribute is NOT specified in filter, test is passed.*)
| Some n1, Some n2 => (n1 =?= n2)
| Some n, None => false (*if a filter URI attribute is not listed in intent, test fails*)
end.
(**Equality on optional port numbers (listed in filter and intent)*)
Definition beq_nato (on1 on2: option nat) : bool :=
match on1, on2 with
| None, _ => true (*no port is specified in filter*)
| Some n1, Some n2 => beq_nat n1 n2 (*both specify a port and equality is tested*)
| Some n, None => false (*no port is specified in the intent*)
end.
(**"When the URI in an intent is compared to a URI specification in a filter,
it's compared only to the parts of the URI included in the filter. For example:
a) If a scheme is not specified, the host is ignored.
b) If a host is not specified, the port is ignored.
c) If both the scheme and host are not specified, the path is ignored.
d) If a filter specifies only a scheme, all URIs with that scheme match the filter.
e) If a filter specifies a scheme and an authority but no path, all URIs with the same
scheme and authority pass the filter, regardless of their paths.
f) If a filter specifies a scheme, an authority, and a path, only URIs with the same
scheme, authority, and path pass the filter." *)
Definition testuri (filuri iuri: uri) : bool :=
match filuri, iuri with
| valid_uri None None None None, _ => true (*no URI in filter, test is passed.*)
| valid_uri None _ (Some port) (Some path), valid_uri _ _ porto patho =>
beq_nato (Some port) porto & cmpoattr (Some path) patho (*a*)
| valid_uri (Some scheme) None _ (Some path), valid_uri schemeo _ _ patho =>
cmpoattr (Some scheme) schemeo & cmpoattr (Some path) patho (*b*)
| valid_uri None None (Some port) _, valid_uri _ _ porto _ =>
beq_nato (Some port) porto (*c*)
| valid_uri (Some scheme) None None None, valid_uri (Some scheme') _ _ _ => scheme =?= scheme' (*d*)
| valid_uri (Some scheme) (Some host) _ None, valid_uri (Some scheme') (Some host') _ _ =>
(scheme =?= scheme') & (host =?= host') (*e*)
| valid_uri (Some scheme) (Some host) _ (Some path), valid_uri (Some scheme') (Some host') _ (Some path') =>
(scheme =?= scheme') & (host =?= host') & (path =?= path') (*f*)
| valid_uri schemeo hosto porto patho, valid_uri schemeo' hosto' porto' patho' => (*otherwise case*)
cmpoattr schemeo schemeo' & cmpoattr hosto hosto' & beq_nato porto porto' & cmpoattr patho patho'
end.
(**Example of ambiguity captured
(*If a host is not specified, the port is ignored.*)
| valid_uri (Some scheme) None _ patho, valid_uri schemeo' _ _ patho' =>
cmpoattr (Some scheme) schemeo' & cmpoattr patho patho'
Contradicts
If a filter specifies only a scheme, all URIs with that scheme match the filter.
| valid_uri (Some scheme) None None None, valid_uri (Some scheme') _ _ _ => namecmp scheme scheme'
*)
(*************************** Testing MIME Type *****************************************)
(**Finds an atom (intent type) match in the list of atoms (filter types).
NOTE: If there is no type, one should be inferred, however, this is not formalized.*)
Definition testtype (filtypes: list atom) (itype: atom): bool :=
match filtypes, itype with
| nil, _ => true
| filtypes, it => find it filtypes
end.
(*************************** Testing DATA *********************************************)
(**"The data test compares both the URI and the MIME type in the intent to a URI and MIME type
specified in the filter. The rules are as follows:
a) An intent that contains neither a URI nor a MIME type passes the test only if the
filter does not specify any URIs or MIME types.
b) An intent that contains a URI but no MIME type (neither explicit nor inferable from
the URI) passes the test only if its URI matches the filter's URI format and the filter
likewise does not specify a MIME type.
c) An intent that contains a MIME type but not a URI passes the test only if the filter
lists the same MIME type and does not specify a URI format.
d) An intent that contains both a URI and a MIME type (either explicit or inferable from
the URI) passes the MIME type part of the test only if that type matches a type listed
in the filter. It passes the URI part of the test either if its URI matches a URI in
the filter *****or if it has a content: or file: URI and the filter does not specify a
URI****. In other words, a component is presumed to support content: and file: data if
its filter lists only a MIME type."*)
Fixpoint testdata (iuri: option uri) (itype: option atom)
(filuris: list uri) (filtypes: list atom) : bool :=
match iuri, itype, filuris, filtypes with
| None, None, nil, nil => true (*a*)
| Some u, None, cons u' ul, nil => orb (testuri u' u) (testdata iuri None ul nil) (*b*)
| None, Some it, nil, ftl => testtype ftl it (*c*)
| Some u, Some it, cons u' ul, ftl =>
(orb (testuri u' u) (testdata iuri None ul nil)) & (testtype ftl it) (*d*)
| None, None, cons u' ul, nil => false
| None, None, nil, cons t' tl => false
| None, None, cons u' ul, cons t' tl => false (*a: the filter specifies URIs and MIME types*)
| _, _, _, _ => false (*otherwise cases*)
end.
(*************************** Delivering Intent *****************************************)
(**An intent is accepted only if it passes all the three tests*)
Definition accept (i: intent) (f: filter) : bool :=
match i, f with
| int a cl u t, filt fal fcl ful ftl =>
testaction a (filt fal fcl ful ftl) &
testcategory cl fcl &
testdata (Some u) (Some t) ful ftl
end.
(****************************************************************************************)
(**************************** Proof of Delivery *****************************************)
(****************************************************************************************)
(**A component accepts an intent ONLY IF it passes all the three tests.*)
Theorem accept_intent: forall a cl u t fal fcl ful ftl,
testaction a (filt fal fcl ful ftl) = true ->
testcategory cl fcl = true ->
testdata (Some u) (Some t) ful ftl = true ->
accept (int a cl u t) (filt fal fcl ful ftl) = true.
Proof.
intros ???????? TA TC TD.
unfold accept.
rewrite TA.
rewrite TC.
rewrite TD.
simpl. auto.
Qed.
(*An application is modeled as a filter.*)
Definition application := filter.
(*An Android system is a list of applications.*)
Definition system := list application.
(*An intent is broadcast if it is accepted by every application in the system.*)
Fixpoint broadcast (I: intent) (S: system) : bool :=
match S with
| nil => true
| cons A A' => (accept I A) & broadcast I A'
end.
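(* For illustration only: broadcast is just the conjunction of accept over
   the system, i.e. in Python terms

```python
def broadcast(intent, system, accept):
    # An intent is broadcast iff every application's filter accepts it.
    return all(accept(intent, app) for app in system)

assert broadcast("i", [], lambda i, a: False) is True   # vacuously true on the empty system
assert broadcast("i", ["a", "b"], lambda i, a: a == "a") is False
```

   which matches the membership property proved in broadcast_general. *)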
Lemma app_nil_left : forall l:list application, l = nil++l.
Proof.
induction l; simpl in |- *; auto.
Qed.
Lemma broadcast_append: forall l l' a I,
broadcast I (l'++(a :: l)) = accept I a & broadcast I (l'++l).
Proof.
intros.
induction l'.
(*CASE: l'=nil*)
do 2 rewrite <- app_nil_left.
auto.
(*CASE: l'=a0::l'*)
simpl in *.
rewrite IHl'.
destruct (accept I a0);simpl;auto.
destruct (accept I a);simpl;auto.
Qed.
Theorem broadcast_general: forall I A S,
broadcast I S = true ->
In A S ->
accept I A = true.
Proof.
intros ?????.
apply In_split in H0.
destruct H0.
destruct H0.
rewrite H0 in H.
rewrite -> broadcast_append in H.
unfold andb in *.
destruct (accept I A).
(*CASE true*)
auto.
(*CASE false*)
inversion H.
Qed.
(****************** End ******************)
(*
Copyright (C) 2017 M.A.L. Marques
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*)
(* type: mgga_exc *)
(* prefix:
mgga_x_mvsb_params *params;
assert(p->params != NULL);
params = (mgga_x_mvsb_params * ) (p->params);
*)
$include "mgga_x_mvs.mpl"
mvsb_beta := (t, x) -> mvs_alpha(t, x)*K_FACTOR_C/(t - K_FACTOR_C):
mvsb_f := (x, u, t) -> (1 + params_a_k0*mvs_fa(mvsb_beta(t,x)))
/ (1 + params_a_b*(X2S*x)^4)^(1/8):
f := (rs, z, xt, xs0, xs1, u0, u1, t0, t1) ->
mgga_exchange(mvsb_f, rs, z, xs0, xs1, u0, u1, t0, t1):
lemma gcd_bezout_sum_nat: fixes a::nat assumes "a * x + b * y = d" shows "gcd a b dvd d" |
[STATEMENT]
lemma word_length_lt:
"as \<in> lists A \<Longrightarrow> sum_list as = a \<Longrightarrow> \<not> reduced_word_for A a as \<Longrightarrow>
word_length A a < length as"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>as \<in> lists A; sum_list as = a; \<not> reduced_word_for A a as\<rbrakk> \<Longrightarrow> word_length A a < length as
[PROOF STEP]
using reduced_word_forI_length'
[PROOF STATE]
proof (prove)
using this:
\<lbrakk>?as \<in> lists ?A; sum_list ?as = ?a; length ?as \<le> word_length ?A ?a\<rbrakk> \<Longrightarrow> reduced_word_for ?A ?a ?as
goal (1 subgoal):
1. \<lbrakk>as \<in> lists A; sum_list as = a; \<not> reduced_word_for A a as\<rbrakk> \<Longrightarrow> word_length A a < length as
[PROOF STEP]
by fastforce |
{-# OPTIONS --safe #-}
module JVM.Printer.Jasmin where
open import Function
open import Data.Nat
open import Data.Integer
open import Data.Nat.Show as NatShow
open import Data.String as S using (String)
open import Data.List as List
open import JVM.Types
open import JVM.Builtins
sep : List (List String) → List String
sep = List.concat ∘ List.intersperse [ " " ]
Words = List String
abstract
Line = String
Lines = List String
unwords : Words → Line
unwords = S.unwords
line : String → Line
line = id
lines : List Line → Lines
lines = id
unlines : Lines → String
unlines = S.unlines
infixr 4 _<+_
_<+_ : Line → Lines → Lines
l <+ ls = l ∷ ls
infixl 6 _+>_
_+>_ : Lines → Line → Lines
ls +> l = ls ∷ʳ l
infixl 4 _<>_
_<>_ : Lines → Lines → Lines
ls <> js = ls ++ js
pars : List Lines → Lines
pars = concat
ident : Line → Line
ident = "\t" S.++_
indent : Lines → Lines
indent = List.map ident
record ClassSpec : Set where
constructor class
field
class_name : String
out : Line
out = unwords $ ".class" ∷ "public" ∷ class_name ∷ []
record SuperSpec : Set where
constructor super
field
class_name : String
out : Line
out = unwords $ ".super" ∷ class_name ∷ []
record Header : Set where
field
class_spec : ClassSpec
super_spec : SuperSpec
out : Lines
out = lines $ ClassSpec.out class_spec
∷ SuperSpec.out super_spec
∷ []
module Descriptor where
type-desc : Ty → String
type-desc boolean = "Z"
type-desc byte = "B"
type-desc short = "S"
type-desc int = "I"
type-desc long = "J"
type-desc char = "C"
type-desc (array t) = "[" S.++ type-desc t
type-desc (ref cls) = S.concat $ "L" ∷ cls ∷ ";" ∷ []
ret-desc : Ret → String
ret-desc void = "V"
ret-desc (ty t) = type-desc t
out : List Ty → Ret → String
out as r = args as S.++ ret-desc r
where
args : List Ty → String
args d = "(" S.++ S.concat (List.map type-desc d) S.++ ")"
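{- For reference (illustrative, outside this development): standard JVM
   method descriptors concatenate the argument descriptors with no
   separator, e.g. (II)I for int f(int, int). A minimal Python sketch
   over the primitive types:

```python
TYPE_DESC = {"boolean": "Z", "byte": "B", "short": "S", "int": "I",
             "long": "J", "char": "C", "void": "V"}

def method_descriptor(arg_types, ret_type):
    # Argument descriptors are concatenated directly, with no separators.
    return "(" + "".join(TYPE_DESC[t] for t in arg_types) + ")" + TYPE_DESC[ret_type]

assert method_descriptor(["int", "int"], "int") == "(II)I"
assert method_descriptor([], "void") == "()V"
assert method_descriptor(["long", "char"], "boolean") == "(JC)Z"
```
-}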
data Comparator : Set where
eq ne lt ge gt le : Comparator
icmpge icmpgt icmpeq icmpne icmplt icmple : Comparator
open import JVM.Types using (Fun)
module Comp where
out : Comparator → String
out eq = "eq"
out ne = "ne"
out lt = "lt"
out ge = "ge"
out gt = "gt"
out le = "le"
out icmpge = "_icmpge"
out icmpgt = "_icmpgt"
out icmpeq = "_icmpeq"
out icmpne = "_icmpne"
out icmplt = "_icmplt"
out icmple = "_icmple"
data Instr : Set where
nop pop dup swap ret : Instr
aconst_null : Instr
bipush sipush : ℤ → Instr
iconstm1 iconst0 iconst1 iconst2 : Instr
aaload aload iload : ℕ → Instr
aastore astore istore : ℕ → Instr
new : String → Instr
goto : String → Instr
if : Comparator → String → Instr
iadd isub imul idiv ixor : Instr
invokevirtual invokespecial invokestatic : Fun → Instr
module Funs where
out : Fun → String
out (c / m :⟨ as ⟩ r) = c S.++ "/" S.++ m S.++ Descriptor.out as r
module Instruction where
lbl : String → String
lbl x = "label_" S.++ x
showInt : ℤ → String
showInt (+ n) = NatShow.show n
showInt (-[1+ n ]) = "-" S.++ NatShow.show (ℕ.suc n)
out : Instr → Line
out nop = line "nop"
out pop = line "pop"
out dup = line "dup"
out swap = line "swap"
out ret = line "return"
out aconst_null = line "aconst_null"
out (bipush n) = unwords $ "bipush" ∷ showInt n ∷ []
out (sipush n) = unwords $ "sipush" ∷ showInt n ∷ []
out (aload n) = unwords $ "aload" ∷ NatShow.show n ∷ []
out (astore n) = unwords $ "astore" ∷ NatShow.show n ∷ []
out (iload n) = unwords $ "iload" ∷ NatShow.show n ∷ []
out (istore n) = unwords $ "istore" ∷ NatShow.show n ∷ []
out (aaload n) = unwords $ "aaload" ∷ NatShow.show n ∷ []
out (aastore n) = unwords $ "aastore" ∷ NatShow.show n ∷ []
out iconstm1 = line "iconst_m1"
out iconst0 = line "iconst_0"
out iconst1 = line "iconst_1"
out iconst2 = line "iconst_2"
out (goto l) = unwords $ "goto" ∷ lbl l ∷ []
out (if c l) = unwords $ ("if" S.++ Comp.out c) ∷ lbl l ∷ []
out iadd = line "iadd"
out isub = line "isub"
out imul = line "imul"
out idiv = line "idiv"
out ixor = line "ixor"
out (new c) = unwords $ "new" ∷ c ∷ []
out (invokespecial sf) = unwords $ "invokespecial" ∷ Funs.out sf ∷ []
out (invokestatic sf) = unwords $ "invokestatic" ∷ Funs.out sf ∷ []
out (invokevirtual sf) = unwords $ "invokevirtual" ∷ Funs.out sf ∷ []
data Stat : Set where
label : String → Stat
instr : Instr → Stat
module Statement where
out : Stat → Line
out (label x) = line $ Instruction.lbl x S.++ ":"
out (instr x) = Instruction.out x
record ClassField : Set where
constructor clsfield
field
name : String
access : List String
f_ty : Ty
out : Line
out = unwords
$ ".field"
∷ access
++ name
∷ Descriptor.type-desc f_ty
∷ []
record Method : Set where
constructor method
field
name : String
access : List String
locals : ℕ
stacksize : ℕ
m_args : List Ty
m_ret : Ret
body : List Stat
out : Lines
out = (unwords $ ".method" ∷ (S.unwords access) ∷ (name S.++ Descriptor.out m_args m_ret) ∷ [])
<+ (ident $ unwords $ ".limit locals" ∷ NatShow.show locals ∷ [])
<+ (ident $ unwords $ ".limit stack" ∷ NatShow.show stacksize ∷ [])
<+ (lines $ List.map (ident ∘ Statement.out) body)
+> (line $ ".end method")
record Jasmin : Set where
constructor jasmin
field
header : Header
fields : List ClassField
methods : List Method
out : Lines
out = Header.out header
<> lines (List.map ClassField.out fields)
<> pars (List.map Method.out methods)
module _ where
defaultInit : Method
defaultInit = method "<init>" [ "public" ] 1 1 [] void
( instr (aload 0)
∷ instr (invokespecial (Object / "<init>" :⟨ [] ⟩ void))
∷ instr ret
∷ []
)
procedure : (name : String) → ℕ → ℕ → List Stat → Jasmin
procedure name locals stack st =
jasmin
(record { class_spec = class name ; super_spec = super Object })
[]
( method "apply" ("public" ∷ "static" ∷ []) locals stack [] void (st ∷ʳ instr ret)
∷ defaultInit
∷ method "main" ("public" ∷ "static" ∷ []) 1 0 [ array (ref "java/lang/String") ] void
( instr (invokestatic (name / "apply" :⟨ [] ⟩ void))
∷ instr ret
∷ []
)
∷ []
)
!!# MODULE <<VAR_ScalarFluxes>>
MODULE VAR_ScalarFluxes
!!## PURPOSE
!! Stores scalar fluxes.
!!## MODULES
USE KND_ScalarFluxes !!((02-A-KND_ScalarFluxes.f90))
!!## DEFAULT IMPLICIT
IMPLICIT NONE
!!## DEFAULT ACCESS
PRIVATE
!!## GLOBAL VARIABLES
!!
!!### High Order Variables
!! * high order cell scalar fluxes <ScalarFluxC>
!! * high order face scalar fluxes <ScalarFluxF>
!! * high order vert scalar fluxes <ScalarFluxV>
REAL(KIND_ScalarFlux) ,POINTER :: ScalarFluxC(:,:) => NULL()
REAL(KIND_ScalarFlux) ,POINTER :: ScalarFluxF(:,:) => NULL()
REAL(KIND_ScalarFlux) ,POINTER :: ScalarFluxV(:,:) => NULL()
!
!! * last iteration's scalar fluxes <LastScalarFluxV>
REAL(KIND_ScalarFlux) ,POINTER :: LastScalarFluxV(:,:) => NULL()
REAL(KIND_ScalarFlux) ,POINTER :: LastScalarFluxF(:,:) => NULL()
REAL(KIND_ScalarFlux) ,POINTER :: LastScalarFluxC(:,:) => NULL()
!
!! * cell scalar flux cell function <ScalarFluxCellFunction>
REAL(KIND_ScalarFlux) ,POINTER :: ScalarFluxCellFunction(:,:,:) => NULL()
!! e.g. a linear function for group $g$ and cell $i$ is represented as
!! $\phi(x,y) = a + bx + cy$ where
! a=ScalarFluxFunction(1,g,i),
! b=ScalarFluxFunction(2,g,i),
! c=ScalarFluxFunction(3,g,i).
!
!! * "incoming" scalar fluxes on boundary <ScalarFluxIN>
REAL(KIND_ScalarFlux),POINTER :: ScalarFluxIN(:,:) => NULL()
!
!
!!### Low Order Variables
!!
!! * low order cell scalar fluxes <LO_ScalarFluxC>
!! * low order face scalar fluxes <LO_ScalarFluxF>
!! * low order vert scalar fluxes <LO_ScalarFluxV>
REAL(KIND_ScalarFlux) ,POINTER :: LO_ScalarFluxC(:,:) => NULL()
REAL(KIND_ScalarFlux) ,POINTER :: LO_ScalarFluxF(:,:) => NULL()
REAL(KIND_ScalarFlux) ,POINTER :: LO_ScalarFluxV(:,:) => NULL()
REAL(KIND_ScalarFlux) ,POINTER :: LastLO_ScalarFluxV(:,:) => NULL()
REAL(KIND_ScalarFlux) ,POINTER :: LastLO_ScalarFluxF(:,:) => NULL()
REAL(KIND_ScalarFlux) ,POINTER :: LastLO_ScalarFluxC(:,:) => NULL()
!!## PUBLIC ACCESS
PUBLIC :: KIND_ScalarFlux
PUBLIC :: ScalarFluxV,ScalarFluxF,ScalarFluxC
PUBLIC :: LO_ScalarFluxV,LO_ScalarFluxF,LO_ScalarFluxC
PUBLIC :: LastScalarFluxV,LastScalarFluxF,LastScalarFluxC
PUBLIC :: LastLO_ScalarFluxV,LastLO_ScalarFluxF,LastLO_ScalarFluxC
PUBLIC :: ScalarFluxCellFunction
PUBLIC :: ScalarFluxIN
END MODULE
theorem add_right_cancel_iff (t a b : mynat) : a + t = b + t ↔ a = b :=
begin
split,
exact add_right_cancel a t b,
intro h,
rwa [h],
end
```python
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, transpile, Aer, IBMQ
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *
# Loading your IBM Quantum account(s)
provider = IBMQ.load_account()
```
ibmqfactory.load_account:WARNING:2021-07-23 22:20:04,310: Credentials are already in use. The existing account in the session will be replaced.
```python
!pip install pennylane
import pennylane as qml
```
Collecting pennylane
Using cached PennyLane-0.16.0-py3-none-any.whl (514 kB)
Installing collected packages: future, semantic-version, autoray, autograd, pennylane
Successfully installed autograd-1.3 autoray-0.2.5 future-0.18.2 pennylane-0.16.0 semantic-version-2.6.0
```python
!pip install pennylane-qiskit
```
Collecting pennylane-qiskit
Using cached PennyLane_qiskit-0.16.0-py3-none-any.whl (21 kB)
Installing collected packages: pennylane-qiskit
Successfully installed pennylane-qiskit-0.16.0
```python
from pennylane.optimize.gradient_descent import *
from pennylane import numpy as np
```
```python
Optimizer = qml.QNGOptimizer(0.01, False, 0)
```
```python
num_qubits = 4  # 4 qubits
# device
#dev = qml.device('qiskit.ibmq', wires=num_qubits, backend='ibmq_qasm_simulator', provider=provider)
dev = qml.device("default.qubit", wires=num_qubits, shots=1024)

params = np.array([np.pi/2, np.pi/2, np.pi/2])

# user input for rotation gate angles (disabled)
"""
params_1 = list(map(float, input().strip().split(' ')))
params_2 = list(map(float, input().strip().split(' ')))
params_3 = list(map(float, input().strip().split(' ')))
params_4 = list(map(float, input().strip().split(' ')))
params_5 = list(map(float, input().strip().split(' ')))
params_6 = list(map(float, input().strip().split(' ')))
params_list = list(map(float, input().strip().split(' ')))
"""
"""params_1 = default_params #[phi, theta, omega]
params_2 = default_params #[phi, theta, omega]
params_3 = default_params #[phi, theta, omega]
params_4 = default_params #[phi, theta, omega]
params_5 = default_params #[phi, theta, omega]
params_6 = default_params #[phi, theta, omega]
params_list = default_params #[phi, theta, omega]"""

@qml.qnode(dev)
def vqc(params, wires=4):
    # Cluster state
    qml.Hadamard(wires=0)
    qml.Hadamard(wires=1)
    qml.Hadamard(wires=2)
    qml.Hadamard(wires=3)
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
    qml.CNOT(wires=[2, 3])
    qml.RY(np.pi, wires=0)
    qml.RY(np.pi, wires=1)
    qml.RY(np.pi, wires=2)
    qml.RY(np.pi, wires=3)
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
    qml.CNOT(wires=[2, 3])
    # QConvLayer1
    qml.U3(params[0], params[1], params[2], wires=0)
    qml.U3(params[0], params[1], params[2], wires=1)
    qml.U3(params[0], params[1], params[2], wires=2)
    qml.U3(params[0], params[1], params[2], wires=3)
    qml.CRot(params[0], params[1], params[2], wires=[0, 1])
    qml.CRot(params[0], params[1], params[2], wires=[2, 3])
    # QConvLayer2 (originally used `params_list`, which is only defined in the
    # commented-out blocks above; reuse `params` so the QNode runs)
    qml.CRot(params[0], params[1], params[2], wires=[0, 2])
    return qml.expval(qml.PauliZ(1)), qml.expval(qml.PauliZ(2)), qml.expval(qml.PauliZ(3))
```
```python
help(qml.Hamiltonian)
```
Help on class Hamiltonian in module pennylane.vqe.vqe:
class Hamiltonian(builtins.object)
| Hamiltonian(coeffs, observables, simplify=False)
|
| Lightweight class for representing Hamiltonians for Variational Quantum
| Eigensolver problems.
|
| Hamiltonians can be expressed as linear combinations of observables, e.g.,
| :math:`\sum_{k=0}^{N-1} c_k O_k`.
|
| This class keeps track of the terms (coefficients and observables) separately.
|
| Args:
| coeffs (Iterable[float]): coefficients of the Hamiltonian expression
| observables (Iterable[Observable]): observables in the Hamiltonian expression
| simplify (bool): Specifies whether the Hamiltonian is simplified upon initialization
| (like-terms are combined). The default value is `False`.
|
| .. seealso:: :class:`~.ExpvalCost`, :func:`~.molecular_hamiltonian`
|
| **Example:**
|
| A Hamiltonian can be created by simply passing the list of coefficients
| as well as the list of observables:
|
| >>> coeffs = [0.2, -0.543]
| >>> obs = [qml.PauliX(0) @ qml.PauliZ(1), qml.PauliZ(0) @ qml.Hadamard(2)]
| >>> H = qml.Hamiltonian(coeffs, obs)
| >>> print(H)
| (-0.543) [Z0 H2]
| + (0.2) [X0 Z1]
|
| The user can also provide custom observables:
|
| >>> obs_matrix = np.array([[0.5, 1.0j, 0.0, -3j],
| [-1.0j, -1.1, 0.0, -0.1],
| [0.0, 0.0, -0.9, 12.0],
| [3j, -0.1, 12.0, 0.0]])
| >>> obs = qml.Hermitian(obs_matrix, wires=[0, 1])
| >>> H = qml.Hamiltonian((0.8, ), (obs, ))
| >>> print(H)
| (0.8) [Hermitian0'1]
|
| Alternatively, the :func:`~.molecular_hamiltonian` function from the
| :doc:`/introduction/chemistry` module can be used to generate a molecular
| Hamiltonian.
|
| .. Warning::
|
| Hamiltonians can be constructed using Pythonic arithmetic operations. For example:
|
| >>> qml.PauliX(0) + 2 * qml.PauliZ(0) @ qml.PauliZ(1)
|
| is equivalent to the following Hamiltonian:
|
| >>> qml.Hamiltonian([1, 2], [qml.PauliX(0), qml.PauliZ(0) @ qml.PauliZ(1)])
|
| When Hamiltonians are defined using arithmetic operations **inside of QNodes**, constituent observables
| may be queued as operations/an error may be thrown. Thus, Hamiltonians must be defined either outside of QNodes,
| or inside of QNodes using the conventional method.
|
| Note that this issue also arises when calling the ``simplify()`` method.
|
| Methods defined here:
|
| __add__(self, H)
| The addition operation between a Hamiltonian and a Hamiltonian/Tensor/Observable.
|
| __iadd__(self, H)
| The inplace addition operation between a Hamiltonian and a Hamiltonian/Tensor/Observable.
|
| __imul__(self, a)
| The inplace scalar multiplication operation between a scalar and a Hamiltonian.
|
| __init__(self, coeffs, observables, simplify=False)
| Initialize self. See help(type(self)) for accurate signature.
|
| __isub__(self, H)
| The inplace subtraction operation between a Hamiltonian and a Hamiltonian/Tensor/Observable.
|
| __matmul__(self, H)
| The tensor product operation between a Hamiltonian and a Hamiltonian/Tensor/Observable.
|
| __mul__(self, a)
| The scalar multiplication operation between a scalar and a Hamiltonian.
|
| __repr__(self)
| Return repr(self).
|
| __rmul__ = __mul__(self, a)
|
| __str__(self)
| Return str(self).
|
| __sub__(self, H)
| The subtraction operation between a Hamiltonian and a Hamiltonian/Tensor/Observable.
|
| compare(self, H)
| Compares with another :class:`~Hamiltonian`, :class:`~.Observable`, or :class:`~.Tensor`,
| to determine if they are equivalent.
|
| Hamiltonians/observables are equivalent if they represent the same operator
| (their matrix representations are equal), and they are defined on the same wires.
|
| .. Warning::
|
| The compare method does **not** check if the matrix representation
| of a :class:`~.Hermitian` observable is equal to an equivalent
| observable expressed in terms of Pauli matrices, or as a
| linear combination of Hermitians.
| To do so would require the matrix form of Hamiltonians and Tensors
| be calculated, which would drastically increase runtime.
|
| Returns:
| (bool): True if equivalent.
|
| **Examples**
|
| >>> A = np.array([[1, 0], [0, -1]])
| >>> H = qml.Hamiltonian(
| ... [0.5, 0.5],
| ... [qml.Hermitian(A, 0) @ qml.PauliY(1), qml.PauliY(1) @ qml.Hermitian(A, 0) @ qml.Identity("a")]
| ... )
| >>> obs = qml.Hermitian(A, 0) @ qml.PauliY(1)
| >>> print(H.compare(obs))
| True
|
| >>> H1 = qml.Hamiltonian([1, 1], [qml.PauliX(0), qml.PauliZ(1)])
| >>> H2 = qml.Hamiltonian([1, 1], [qml.PauliZ(0), qml.PauliX(1)])
| >>> H1.compare(H2)
| False
|
| >>> ob1 = qml.Hamiltonian([1], [qml.PauliX(0)])
| >>> ob2 = qml.Hermitian(np.array([[0, 1], [1, 0]]), 0)
| >>> ob1.compare(ob2)
| False
|
| queue(self)
| Queues a qml.Hamiltonian instance
|
| simplify(self)
| Simplifies the Hamiltonian by combining like-terms.
|
| **Example**
|
| >>> ops = [qml.PauliY(2), qml.PauliX(0) @ qml.Identity(1), qml.PauliX(0)]
| >>> H = qml.Hamiltonian([1, 1, -2], ops)
| >>> H.simplify()
| >>> print(H)
| (-1) [X0]
| + (1) [Y2]
|
| ----------------------------------------------------------------------
| Readonly properties defined here:
|
| coeffs
| Return the coefficients defining the Hamiltonian.
|
| Returns:
 |          Iterable[float]: coefficients in the Hamiltonian expression
|
| name
|
| ops
| Return the operators defining the Hamiltonian.
|
| Returns:
 |          Iterable[Observable]: observables in the Hamiltonian expression
|
| terms
| The terms of the Hamiltonian expression :math:`\sum_{k=0}^{N-1} c_k O_k`
|
| Returns:
| (tuple, tuple): tuples of coefficients and operations, each of length N
|
| wires
| The sorted union of wires from all operators.
|
| Returns:
| (Wires): Combined wires present in all terms, sorted.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
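The help text above describes a Hamiltonian as the linear combination :math:`\sum_k c_k O_k`, whose expectation value is just the correspondingly weighted sum of the individual observables' expectation values. A minimal plain-Python sketch of that bookkeeping (hand-rolled 2x2 complex matrices as a stand-in — this is not PennyLane's implementation):

```python
# Sketch of E = sum_k c_k <psi|O_k|psi> for single-qubit observables,
# using plain lists of complex numbers (no external dependencies).

Z = [[1, 0], [0, -1]]  # Pauli-Z
X = [[0, 1], [1, 0]]   # Pauli-X

def matvec(m, v):
    """2x2 matrix times 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def expval(obs, psi):
    """<psi|obs|psi> for a normalized 2-vector psi."""
    w = matvec(obs, psi)
    return sum(a.conjugate() * b for a, b in zip(psi, w)).real

def hamiltonian_expval(coeffs, observables, psi):
    """Linear combination: sum_k c_k <O_k>."""
    return sum(c * expval(o, psi) for c, o in zip(coeffs, observables))

psi0 = [1 + 0j, 0 + 0j]  # |0>
print(hamiltonian_expval([0.5, 0.5], [Z, X], psi0))  # <Z>=1, <X>=0  ->  0.5
```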
```python
coeffs = [1, 1, 1]
# three identical Z0 terms; per the help text, simplify() would combine
# these like-terms into the single term 3 * PauliZ(0)
obs = [qml.PauliZ(0), qml.PauliZ(0), qml.PauliZ(0)]
H = qml.Hamiltonian(coeffs, obs)
```
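The `simplify()` behavior described in the help text — combining like-terms — can be sketched in plain Python, representing observables as strings (a hypothetical stand-in, not PennyLane's internal representation):

```python
# Like-term combining in the spirit of Hamiltonian.simplify():
# observables with equal representations have their coefficients summed,
# and terms that cancel to zero are dropped.

def simplify(coeffs, obs):
    combined = {}
    for c, o in zip(coeffs, obs):
        combined[o] = combined.get(o, 0) + c
    combined = {o: c for o, c in combined.items() if c != 0}
    return list(combined.values()), list(combined.keys())

# The three identical Z0 terms above collapse into one:
print(simplify([1, 1, 1], ["Z0", "Z0", "Z0"]))   # ([3], ['Z0'])
print(simplify([1, -2, 1], ["Y2", "X0", "X0"]))  # ([1, -1], ['Y2', 'X0'])
```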
```python
# Note: qml.ExpvalCost expects a bare ansatz function (operations only, no
# measurement returns); `vqc` above is already a measured QNode, which is
# likely why the step below raises an error (hence the %tb in the next cell).
cost_fn = qml.ExpvalCost(vqc, H, dev)
```
```python
%tb
theta_new = Optimizer.step(cost_fn, params)
```
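`Optimizer.step(cost_fn, params)` performs one update of the parameters against the cost. The mechanics can be illustrated with a hand-rolled, stdlib-only stand-in: vanilla gradient descent with a finite-difference gradient on a toy cost (QNGOptimizer additionally rescales the gradient by the inverse metric tensor, and PennyLane differentiates circuits analytically — both omitted here):

```python
import math

def grad(cost_fn, params, eps=1e-6):
    """Forward-difference gradient of cost_fn at params."""
    g = []
    for i in range(len(params)):
        shifted = list(params)
        shifted[i] += eps
        g.append((cost_fn(shifted) - cost_fn(params)) / eps)
    return g

def step(cost_fn, params, stepsize=0.01):
    """One gradient-descent update, in the style of Optimizer.step."""
    g = grad(cost_fn, params)
    return [p - stepsize * gi for p, gi in zip(params, g)]

# Toy cost, minimized at params = [pi, pi, pi]
toy_cost = lambda p: sum((x - math.pi) ** 2 for x in p)

params = [math.pi / 2] * 3
for _ in range(500):
    params = step(toy_cost, params, stepsize=0.1)
print([round(p, 3) for p in params])  # -> [3.142, 3.142, 3.142]
```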
import data.real.basic

variables (f : ℝ → ℝ) (a b : ℝ)

#check @lt_of_not_ge
#check @not_lt_of_gt

-- BEGIN
example (h : monotone f) (h' : f a < f b) : a < b :=
begin
  apply lt_of_not_ge,
  intro hab,
  have : f a ≥ f b, from h hab,
  apply not_lt_of_ge this,
  exact h',
end

example (h : a ≤ b) (h' : f b < f a) : ¬ monotone f :=
begin
  intro fmono,
  have : f a ≤ f b, from fmono h,
  apply not_lt_of_ge this,
  exact h',
end
-- END
lemma sets_vimage_algebra_space: "X \<in> sets (vimage_algebra X f M)"
Formal statement is: lemma open_Collect_less: fixes f g :: "'a::topological_space \<Rightarrow> 'b::linorder_topology" assumes f: "continuous_on UNIV f" and g: "continuous_on UNIV g" shows "open {x. f x < g x}" Informal statement is: If $f$ and $g$ are continuous real-valued functions, then the set $\{x \mid f(x) < g(x)\}$ is open.
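For the special case $'b = \mathbb{R}$ (where the rationals are dense), the openness claim reduces to a one-line decomposition into open sets; the general `linorder_topology` proof has to work harder when the codomain has gaps:

```latex
\{x \mid f(x) < g(x)\}
  \;=\; \bigcup_{q \in \mathbb{Q}}
        \Bigl( f^{-1}\bigl((-\infty, q)\bigr) \cap g^{-1}\bigl((q, \infty)\bigr) \Bigr)
```

Each preimage is open by continuity of $f$ and $g$, so the set is a union of finite intersections of open sets, hence open.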
[STATEMENT]
lemma valid_gc_mmsaux: "valid_mmsaux args cur aux ys \<Longrightarrow> valid_mmsaux args cur (gc_mmsaux aux) ys"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. valid_mmsaux args cur aux ys \<Longrightarrow> valid_mmsaux args cur (gc_mmsaux aux) ys
[PROOF STEP]
using valid_gc_mmsaux_unfolded
[PROOF STATE]
proof (prove)
using this:
valid_mmsaux ?args ?cur (?nt, ?gc, ?maskL, ?maskR, ?data_prev, ?data_in, ?tuple_in, ?tuple_since) ?ys \<Longrightarrow> valid_mmsaux ?args ?cur (gc_mmsaux (?nt, ?gc, ?maskL, ?maskR, ?data_prev, ?data_in, ?tuple_in, ?tuple_since)) ?ys
goal (1 subgoal):
1. valid_mmsaux args cur aux ys \<Longrightarrow> valid_mmsaux args cur (gc_mmsaux aux) ys
[PROOF STEP]
by (cases aux) fast
From Test Require Import tactic.
Section FOFProblem.
Variable Universe : Set.
Variable UniverseElement : Universe.
Variable oS_ : Universe -> Universe -> Universe -> Universe -> Prop.
Variable col_ : Universe -> Universe -> Universe -> Prop.
Variable betS_ : Universe -> Universe -> Universe -> Prop.
Variable defsameside_1 : (forall P Q A B : Universe, (exists X U V : Universe, (oS_ P Q A B -> (col_ A B U /\ (col_ A B V /\ (betS_ P U X /\ (betS_ Q V X /\ (~(col_ A B P) /\ ~(col_ A B Q))))))))).
Variable defsameside2_2 : (forall P Q A B X U V : Universe, ((col_ A B U /\ (col_ A B V /\ (betS_ P U X /\ (betS_ Q V X /\ (~(col_ A B P) /\ ~(col_ A B Q)))))) -> oS_ P Q A B)).
Variable lemma_collinearorder_3 : (forall A B C : Universe, (col_ A B C -> (col_ B A C /\ (col_ B C A /\ (col_ C A B /\ (col_ A C B /\ col_ C B A)))))).
Variable lemma_NCorder_4 : (forall A B C : Universe, (~(col_ A B C) -> (~(col_ B A C) /\ (~(col_ B C A) /\ (~(col_ C A B) /\ (~(col_ A C B) /\ ~(col_ C B A))))))).
Theorem lemma_samesidesymmetric_5 : (forall A B P Q : Universe, (oS_ P Q A B -> (oS_ Q P A B /\ (oS_ P Q B A /\ oS_ Q P B A)))).
Proof.
  time tac.
Qed.
End FOFProblem.
module Replica.RunEnv

%default total

public export
record RunEnv where
  constructor MkRunEnv
  interactive : Bool
  path : String
{-# OPTIONS --cubical --no-import-sorts --safe #-}
module Cubical.Algebra.Group.MorphismProperties where
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Isomorphism
open import Cubical.Foundations.Equiv
open import Cubical.Foundations.Equiv.HalfAdjoint
open import Cubical.Foundations.HLevels
open import Cubical.Foundations.Univalence
open import Cubical.Foundations.SIP
open import Cubical.Foundations.Function using (_∘_; id)
open import Cubical.Foundations.GroupoidLaws hiding (_⁻¹)
open import Cubical.Functions.Embedding
open import Cubical.Data.Sigma
open import Cubical.Data.Prod using (isPropProd)
open import Cubical.Algebra
open import Cubical.Algebra.Properties
open import Cubical.Algebra.Group.Morphism
open import Cubical.Structures.Axioms
open import Cubical.Structures.Auto
open import Cubical.Structures.Record
open import Cubical.Algebra.Monoid.Properties using (isPropIsMonoid)
open import Cubical.Relation.Binary.Reasoning.Equality
open Iso
private
  variable
    ℓ ℓ′ ℓ′′ : Level
    F : Group ℓ
    G : Group ℓ′
    H : Group ℓ′′

isPropIsGroupHom : ∀ (G : Group ℓ) (H : Group ℓ′) f → isProp (IsGroupHom G H f)
isPropIsGroupHom G H f (isgrouphom aHom) (isgrouphom bHom) =
  cong isgrouphom
    (isPropHomomorphic₂ (Group.is-set H) f (Group._•_ G) (Group._•_ H) aHom bHom)

isSetGroupHom : isSet (G ⟶ᴴ H)
isSetGroupHom {G = G} {H = H} = isOfHLevelRespectEquiv 2 equiv
    (isSetΣ (isSetΠ λ _ → is-set H)
            (λ f → isProp→isSet (isPropIsGroupHom G H f)))
  where
  open Group
  equiv : (Σ[ g ∈ (⟨ G ⟩ → ⟨ H ⟩) ] IsGroupHom G H g) ≃ GroupHom G H
  equiv = isoToEquiv (iso (λ (g , m) → grouphom g m)
                          (λ (grouphom g m) → g , m)
                          (λ _ → refl) λ _ → refl)

isGroupHomComp : {f : ⟨ F ⟩ → ⟨ G ⟩} {g : ⟨ G ⟩ → ⟨ H ⟩}
               → IsGroupHom F G f → IsGroupHom G H g → IsGroupHom F H (g ∘ f)
isGroupHomComp {g = g} (isgrouphom fHom) (isgrouphom gHom) =
  isgrouphom (λ _ _ → cong g (fHom _ _) ∙ gHom _ _)

private
  isGroupHomComp′ : (f : F ⟶ᴴ G) (g : G ⟶ᴴ H)
                  → IsGroupHom F H (GroupHom.fun g ∘ GroupHom.fun f)
  isGroupHomComp′ (grouphom f (isgrouphom fHom)) (grouphom g (isgrouphom gHom)) =
    isgrouphom (λ _ _ → cong g (fHom _ _) ∙ gHom _ _)

compGroupHom : (F ⟶ᴴ G) → (G ⟶ᴴ H) → (F ⟶ᴴ H)
compGroupHom f g = grouphom _ (isGroupHomComp′ f g)

compGroupEquiv : F ≃ᴴ G → G ≃ᴴ H → F ≃ᴴ H
compGroupEquiv f g = groupequiv (compEquiv f.eq g.eq) (isGroupHomComp′ f.hom g.hom)
  where
  module f = GroupEquiv f
  module g = GroupEquiv g

isGroupHomId : (G : Group ℓ) → IsGroupHom G G id
isGroupHomId G = record
  { preservesOp = λ _ _ → refl
  }

idGroupHom : (G : Group ℓ) → (G ⟶ᴴ G)
idGroupHom G = record
  { fun   = id
  ; isHom = isGroupHomId G
  }

idGroupEquiv : (G : Group ℓ) → G ≃ᴴ G
idGroupEquiv G = record
  { eq    = idEquiv ⟨ G ⟩
  ; isHom = isGroupHomId G
  }

-- Isomorphism inversion
isGroupHomInv : (eqv : G ≃ᴴ H) → IsGroupHom H G (invEq (GroupEquiv.eq eqv))
isGroupHomInv {G = G} {H = H} (groupequiv eq (isgrouphom hom)) = isgrouphom (λ x y → isInj-f (
    f (f⁻¹ (x H.• y))        ≡⟨ retEq eq _ ⟩
    x H.• y                  ≡˘⟨ cong₂ H._•_ (retEq eq x) (retEq eq y) ⟩
    f (f⁻¹ x) H.• f (f⁻¹ y)  ≡˘⟨ hom (f⁻¹ x) (f⁻¹ y) ⟩
    f (f⁻¹ x G.• f⁻¹ y)      ∎))
  where
  module G = Group G
  module H = Group H
  f   = equivFun eq
  f⁻¹ = invEq eq
  isInj-f : {x y : ⟨ G ⟩} → f x ≡ f y → x ≡ y
  isInj-f {x} {y} = invEq (_ , isEquiv→isEmbedding (eq .snd) x y)

invGroupHom : G ≃ᴴ H → (H ⟶ᴴ G)
invGroupHom eq = record { isHom = isGroupHomInv eq }

invGroupEquiv : G ≃ᴴ H → H ≃ᴴ G
invGroupEquiv eq = record
  { eq    = invEquiv (GroupEquiv.eq eq)
  ; isHom = isGroupHomInv eq
  }

groupHomEq : {f g : G ⟶ᴴ H} → (GroupHom.fun f ≡ GroupHom.fun g) → f ≡ g
groupHomEq {G = G} {H = H} {grouphom f fm} {grouphom g gm} p i =
  grouphom (p i) (p-hom i)
  where
  p-hom : PathP (λ i → IsGroupHom G H (p i)) fm gm
  p-hom = toPathP (isPropIsGroupHom G H _ _ _)

groupEquivEq : {f g : G ≃ᴴ H} → (GroupEquiv.eq f ≡ GroupEquiv.eq g) → f ≡ g
groupEquivEq {G = G} {H = H} {groupequiv f fm} {groupequiv g gm} p i =
  groupequiv (p i) (p-hom i)
  where
  p-hom : PathP (λ i → IsGroupHom G H (p i .fst)) fm gm
  p-hom = toPathP (isPropIsGroupHom G H _ _ _)
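The equational chain in `isGroupHomInv` above is the standard textbook argument that the inverse of a group isomorphism is again a homomorphism. In ordinary notation, with $f$ the forward map (injective because it is an equivalence):

```latex
f\bigl(f^{-1}(x \cdot y)\bigr)
  \;=\; x \cdot y
  \;=\; f(f^{-1}x) \cdot f(f^{-1}y)
  \;=\; f\bigl(f^{-1}x \cdot f^{-1}y\bigr)
\quad\Longrightarrow\quad
f^{-1}(x \cdot y) \;=\; f^{-1}x \cdot f^{-1}y .
```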
module GroupΣTheory {ℓ} where

  RawGroupStructure : Type ℓ → Type ℓ
  RawGroupStructure X = (X → X → X) × X × (X → X)

  RawGroupEquivStr = AutoEquivStr RawGroupStructure

  rawGroupUnivalentStr : UnivalentStr _ RawGroupEquivStr
  rawGroupUnivalentStr = autoUnivalentStr RawGroupStructure

  GroupAxioms : (G : Type ℓ) → RawGroupStructure G → Type ℓ
  GroupAxioms G (_•_ , ε , _⁻¹) = IsMonoid G _•_ ε × Inverse ε _⁻¹ _•_

  GroupStructure : Type ℓ → Type ℓ
  GroupStructure = AxiomsStructure RawGroupStructure GroupAxioms

  GroupΣ : Type (ℓ-suc ℓ)
  GroupΣ = TypeWithStr ℓ GroupStructure

  isPropGroupAxioms : (G : Type ℓ) (s : RawGroupStructure G) → isProp (GroupAxioms G s)
  isPropGroupAxioms G (_•_ , ε , _⁻¹) = isPropΣ isPropIsMonoid
    λ isMonG → isPropInverse (IsMonoid.is-set isMonG) _•_ _⁻¹ ε

  GroupEquivStr : StrEquiv GroupStructure ℓ
  GroupEquivStr = AxiomsEquivStr RawGroupEquivStr GroupAxioms

  GroupAxiomsIsoIsGroup : {G : Type ℓ} (s : RawGroupStructure G) →
    Iso (GroupAxioms G s) (IsGroup G (s .fst) (s .snd .fst) (s .snd .snd))
  fun (GroupAxiomsIsoIsGroup s) (x , y) = isgroup x y
  inv (GroupAxiomsIsoIsGroup s) (isgroup x y) = (x , y)
  rightInv (GroupAxiomsIsoIsGroup s) _ = refl
  leftInv (GroupAxiomsIsoIsGroup s) _ = refl

  GroupAxioms≡IsGroup : {G : Type ℓ} (s : RawGroupStructure G) →
    GroupAxioms G s ≡ IsGroup G (s .fst) (s .snd .fst) (s .snd .snd)
  GroupAxioms≡IsGroup s = isoToPath (GroupAxiomsIsoIsGroup s)

  Group→GroupΣ : Group ℓ → GroupΣ
  Group→GroupΣ (mkgroup G _•_ ε _⁻¹ isGroup) =
    G , (_•_ , ε , _⁻¹) , GroupAxiomsIsoIsGroup (_•_ , ε , _⁻¹) .inv isGroup

  GroupΣ→Group : GroupΣ → Group ℓ
  GroupΣ→Group (G , (_•_ , ε , _⁻¹) , isGroupG) =
    mkgroup G _•_ ε _⁻¹ (GroupAxiomsIsoIsGroup (_•_ , ε , _⁻¹) .fun isGroupG)

  GroupIsoGroupΣ : Iso (Group ℓ) GroupΣ
  GroupIsoGroupΣ =
    iso Group→GroupΣ GroupΣ→Group (λ _ → refl) (λ _ → refl)

  groupUnivalentStr : UnivalentStr GroupStructure GroupEquivStr
  groupUnivalentStr = axiomsUnivalentStr _ isPropGroupAxioms rawGroupUnivalentStr

  GroupΣPath : (G H : GroupΣ) → (G ≃[ GroupEquivStr ] H) ≃ (G ≡ H)
  GroupΣPath = SIP groupUnivalentStr

  GroupEquivΣ : (G H : Group ℓ) → Type ℓ
  GroupEquivΣ G H = Group→GroupΣ G ≃[ GroupEquivStr ] Group→GroupΣ H

  GroupIsoΣPath : {G H : Group ℓ} → Iso (GroupEquiv G H) (GroupEquivΣ G H)
  fun GroupIsoΣPath (groupequiv eq hom) =
    eq , IsGroupHom.preservesOp hom , IsGroupHom.preservesId hom , IsGroupHom.preservesInv hom
  inv GroupIsoΣPath (eq , hom , _) = groupequiv eq (isgrouphom hom)
  rightInv (GroupIsoΣPath {H = H}) _ = ΣPathTransport→PathΣ _ _ (refl ,
    ΣPathTransport→PathΣ _ _ (transportRefl _ ,
      ΣPathTransport→PathΣ _ _
        (Group.is-set H _ _ _ _ ,
         isPropΠ (λ _ → Group.is-set H _ _) _ _ )
      ))
  leftInv (GroupIsoΣPath {H = H}) _ = refl

  GroupPath : (G H : Group ℓ) → (GroupEquiv G H) ≃ (G ≡ H)
  GroupPath G H =
    GroupEquiv G H                   ≃⟨ isoToEquiv GroupIsoΣPath ⟩
    GroupEquivΣ G H                  ≃⟨ GroupΣPath _ _ ⟩
    Group→GroupΣ G ≡ Group→GroupΣ H  ≃⟨ isoToEquiv (invIso (congIso GroupIsoGroupΣ)) ⟩
    G ≡ H ■

  RawGroupΣ : Type (ℓ-suc ℓ)
  RawGroupΣ = TypeWithStr ℓ RawGroupStructure

  Group→RawGroupΣ : Group ℓ → RawGroupΣ
  Group→RawGroupΣ (mkgroup A _•_ ε _⁻¹ _) = A , _•_ , ε , _⁻¹

  InducedGroup : (G : Group ℓ) (H : RawGroupΣ) (e : G .Group.Carrier ≃ H .fst)
               → RawGroupEquivStr (Group→RawGroupΣ G) H e → Group ℓ
  InducedGroup G H e r =
    GroupΣ→Group (inducedStructure rawGroupUnivalentStr (Group→GroupΣ G) H (e , r))

  InducedGroupPath : (G : Group ℓ) (H : RawGroupΣ) (e : G .Group.Carrier ≃ H .fst)
                     (E : RawGroupEquivStr (Group→RawGroupΣ G) H e) →
                     G ≡ InducedGroup G H e E
  InducedGroupPath G H e E =
    GroupPath G (InducedGroup G H e E) .fst (groupequiv e (isgrouphom (E .fst)))
-- We now extract the important results from the above module
open GroupΣTheory public using (InducedGroup; InducedGroupPath)
isPropIsGroup : {G : Type ℓ} {_•_ : Op₂ G} {ε : G} {_⁻¹ : Op₁ G} → isProp (IsGroup G _•_ ε _⁻¹)
isPropIsGroup =
  subst isProp (GroupΣTheory.GroupAxioms≡IsGroup (_ , _ , _))
    (GroupΣTheory.isPropGroupAxioms _ (_ , _ , _))
GroupPath : (G ≃ᴴ H) ≃ (G ≡ H)
GroupPath = GroupΣTheory.GroupPath _ _
open Group
uaGroup : G ≃ᴴ H → G ≡ H
uaGroup = equivFun GroupPath
carac-uaGroup : {G H : Group ℓ} (f : G ≃ᴴ H) → cong Carrier (uaGroup f) ≡ ua (GroupEquiv.eq f)
carac-uaGroup (groupequiv f m) =
  (refl ∙∙ ua f ∙∙ refl) ≡˘⟨ rUnit (ua f) ⟩
  ua f                   ∎
Group≡ : (G H : Group ℓ) → (
    Σ[ p ∈ ⟨ G ⟩ ≡ ⟨ H ⟩ ]
    Σ[ q ∈ PathP (λ i → p i → p i → p i) (_•_ G) (_•_ H) ]
    Σ[ r ∈ PathP (λ i → p i) (ε G) (ε H) ]
    Σ[ s ∈ PathP (λ i → p i → p i) (_⁻¹ G) (_⁻¹ H) ]
    PathP (λ i → IsGroup (p i) (q i) (r i) (s i)) (isGroup G) (isGroup H))
  ≃ (G ≡ H)
Group≡ G H = isoToEquiv (iso
  (λ (p , q , r , s , t) i → mkgroup (p i) (q i) (r i) (s i) (t i))
  (λ p → cong Carrier p , cong _•_ p , cong ε p , cong _⁻¹ p , cong isGroup p)
  (λ _ → refl) (λ _ → refl))
caracGroup≡ : {G H : Group ℓ} (p q : G ≡ H) → cong Carrier p ≡ cong Carrier q → p ≡ q
caracGroup≡ {G = G} {H} p q t = cong (fst (Group≡ G H)) (Σ≡Prop (λ _ → isPropΣ
    (isOfHLevelPathP' 1 (isSetΠ2 λ _ _ → is-set H) _ _) λ _ → isPropΣ
    (isOfHLevelPathP' 1 (is-set H) _ _) λ _ → isPropΣ
    (isOfHLevelPathP' 1 (isSetΠ λ _ → is-set H) _ _) λ _ →
    isOfHLevelPathP 1 isPropIsGroup _ _)
  t)
uaGroupId : (G : Group ℓ) → uaGroup (idGroupEquiv G) ≡ refl
uaGroupId G = caracGroup≡ _ _ (carac-uaGroup (idGroupEquiv G) ∙ uaIdEquiv)
uaCompGroupEquiv : {F G H : Group ℓ} (f : GroupEquiv F G) (g : GroupEquiv G H)
                 → uaGroup (compGroupEquiv f g) ≡ uaGroup f ∙ uaGroup g
uaCompGroupEquiv f g = caracGroup≡ _ _ (
  cong Carrier (uaGroup (compGroupEquiv f g))
    ≡⟨ carac-uaGroup (compGroupEquiv f g) ⟩
  ua (eq (compGroupEquiv f g))
    ≡⟨ uaCompEquiv _ _ ⟩
  ua (eq f) ∙ ua (eq g)
    ≡˘⟨ cong (_∙ ua (eq g)) (carac-uaGroup f) ⟩
  cong Carrier (uaGroup f) ∙ ua (eq g)
    ≡˘⟨ cong (cong Carrier (uaGroup f) ∙_) (carac-uaGroup g) ⟩
  cong Carrier (uaGroup f) ∙ cong Carrier (uaGroup g)
    ≡˘⟨ cong-∙ Carrier (uaGroup f) (uaGroup g) ⟩
  cong Carrier (uaGroup f ∙ uaGroup g) ∎)
  where open GroupEquiv