%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Copyright (c) 2015, ETH Zurich.
% All rights reserved.
%
% This file is distributed under the terms in the attached LICENSE file.
% If you do not find this file, copies can be found by writing to:
% ETH Zurich D-INFK, Universitaetstr 6, CH-8092 Zurich. Attn: Systems Group.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\providecommand{\pgfsyspdfmark}[3]{}
\documentclass[a4paper,11pt,twoside]{report}
\usepackage{amsmath}
\usepackage{bftn}
\usepackage{calc}
\usepackage{verbatim}
\usepackage{xspace}
\usepackage{pifont}
\usepackage{pxfonts}
\usepackage{textcomp}
\usepackage{multirow}
\usepackage{listings}
\usepackage{todonotes}
\usepackage{hyperref}
\title{Skate in Barrelfish}
\author{Barrelfish project}
% \date{\today} % Uncomment (if needed) - date is automatic
\tnnumber{020}
\tnkey{Skate}
\lstdefinelanguage{skate}{
morekeywords={schema,typedef,fact,enum},
sensitive=true,
morecomment=[l]{//},
morecomment=[s]{/*}{*/},
morestring=[b]",
}
\presetkeys{todonotes}{inline}{}
\begin{document}
\maketitle % Uncomment for final draft
\begin{versionhistory}
\vhEntry{0.1}{16.11.2015}{MH}{Initial Version}
\vhEntry{0.2}{20.04.2017}{RA}{Renaming to Skate and expanding.}
\end{versionhistory}
% \intro{Abstract} % Insert abstract here
% \intro{Acknowledgements} % Uncomment (if needed) for acknowledgements
\tableofcontents % Uncomment (if needed) for final draft
% \listoffigures % Uncomment (if needed) for final draft
% \listoftables % Uncomment (if needed) for final draft
\cleardoublepage
\setcounter{secnumdepth}{2}
\newcommand{\fnname}[1]{\textit{\texttt{#1}}}%
\newcommand{\datatype}[1]{\textit{\texttt{#1}}}%
\newcommand{\varname}[1]{\texttt{#1}}%
\newcommand{\keywname}[1]{\textbf{\texttt{#1}}}%
\newcommand{\pathname}[1]{\texttt{#1}}%
\newcommand{\tabindent}{\hspace*{3ex}}%
\newcommand{\Skate}{\lstinline[language=skate]}
\newcommand{\ccode}{\lstinline[language=C]}
\lstset{
language=C,
basicstyle=\ttfamily \small,
keywordstyle=\bfseries,
flexiblecolumns=false,
basewidth={0.5em,0.45em},
boxpos=t,
captionpos=b
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Introduction and usage}
\label{chap:introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\emph{Skate}\footnote{Skates are cartilaginous fish belonging to the family
Rajidae in the superorder Batoidea of rays. More than 200 species have been
described, in 32 genera. The two subfamilies are Rajinae (hardnose skates) and
Arhynchobatinae (softnose skates).
Source: \href{https://en.wikipedia.org/wiki/Skate_(fish)}{Wikipedia}}
is a domain-specific language for describing the schema of
Barrelfish's System Knowledge Base (SKB)~\cite{skb}. The SKB stores all
statically or dynamically discovered facts about the system. Static facts are
already known at compile time of the SKB ramdisk, or are added through an
initialization script or program.
Examples of static facts include the device database, which associates known
drivers with devices, or the devices of a well-known SoC. Dynamic facts, on the
other hand, are added to the SKB during, and based on, hardware discovery.
Examples of dynamic facts include the number of processors or the PCI Express
devices present.
Inside the SKB, a Prolog-based constraint solver takes the added facts and
computes solutions for hardware configuration tasks such as PCI bridge
programming, NUMA information for memory allocation, or device driver lookup.
Programs can query the SKB using Prolog statements to obtain device
configuration, PCI bridge programming, interrupt routing, and routing trees
for IPC. Applications can use this information to determine hardware
characteristics such as cores, NUMA nodes, caches and memory, as well as their
affinity.
The Skate language is used to define the format of those facts. The DSL is
compiled into a set of fact definitions and functions that wrap the SKB client
functions, in particular \texttt{skb\_add\_fact()}, to ensure that added facts
have the correct format.
The intention when designing Skate is that the contents of system descriptor
tables such as ACPI, hardware information obtained by CPUID or PCI discovery
can be extracted from the respective manuals and easily specified in a Skate
file.
Skate complements the SKB by defining a \emph{schema} of the data stored in
the SKB. A schema defines facts and their structure, which is similar to Prolog
facts and their arity. A code-generation tool generates a C-API to populate the
SKB according to a specific schema instance.
The Skate compiler is written in Haskell using the Parsec parsing library. It
generates C header files from the Skate files. In addition it supports the
generation of Schema documentation.
The source code for Skate can be found in \texttt{SOURCE/tools/skate}.
\section{Use cases}
We envision the following non-exhaustive list of possible use cases for Skate:
\begin{itemize}
\item A programmer is writing PCI discovery code or a device driver. The
      program inserts various facts about the discovered devices and their
      state into the SKB. To make the inserted facts usable to other programs
      running on the system, the format of the facts has to be known. For this
      purpose we need a common way to specify the format of those facts and
      their meaning.
\item Each program ultimately needs to deal with actually inserting facts
      into the SKB or querying them. For this purpose, the fact strings need
      to be formatted accordingly; this may be done differently in different
      languages and is prone to typos. Skate is intended to remove this burden
      from the programmer by providing a language-native interface (e.g. in C
      or Rust) that ensures a safe way of inserting facts into the SKB.
\item Just knowing the format and the existence of certain facts is of
      little use. A programmer needs to understand the meaning of the facts
      and their fields; it is not enough just to list the facts with their
      fields. Skate provides a way to generate documentation about the
      specified facts. This enables programmers to reason about which facts
      should be used, in particular to select the right level of abstraction.
      This is important given that facts entered into the SKB from hardware
      discovery are intentionally as un-abstracted as possible.
\item Documenting the available inference rules that the SKB implements
to abstract facts into useful concepts for the OS.
\end{itemize}
\section{Command line options}
\label{sec:cmdline}
\begin{verbatim}
$ skate <options> INFILE.sks
\end{verbatim}
Where \textit{options} is one of:
\begin{description}
    \item[-o] the output file name
    \item[-L] generate LaTeX documentation
    \item[-H] generate a C header file
    \item[-W] generate Wiki-syntax documentation
\end{description}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Lexical Conventions}
\label{chap:lexer}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The Skate parser follows conventions similar to those adopted by modern
programming languages like C and Java; it is implemented using the Haskell
Parsec library. The following conventions are used:
\begin{description}
\item[Encoding] The file should be encoded using plain text.
\item[Whitespace:] As in C and Java, Skate considers sequences of
space, newline, tab, and carriage return characters to be
whitespace. Whitespace is generally not significant.
\item[Comments:] Skate supports C-style comments. Single line comments
start with \texttt{//} and continue until the end of the line.
Multiline comments are enclosed between \texttt{/*} and \texttt{*/};
anything in between is ignored and treated as white space.
\item[Identifiers:] Valid Skate identifiers are sequences of numbers
(0-9), letters (a-z, A-Z) and the underscore character ``\texttt{\_}''. They
must start with a letter or ``\texttt{\_}''.
\begin{align*}
identifier & \rightarrow ( letter \mid \_ ) (letter \mid digit \mid \_)^{\textrm{*}} \\
letter & \rightarrow (\textsf{A \ldots Z} \mid \textsf{a \ldots z})\\
digit & \rightarrow (\textsf{0 \ldots 9})
\end{align*}
Note that a single underscore ``\texttt{\_}'' by itself is a special,
``don't care'' or anonymous identifier which is treated differently
inside the language.
\item[Case Sensitivity:] Skate is not case sensitive; hence the identifiers
      \texttt{foo} and \texttt{Foo} are the same.
\item[Integer Literals:] A Skate integer literal is a sequence of
digits, optionally preceded by a radix specifier. As in C, decimal (base 10)
literals have no specifier and hexadecimal literals start with
\texttt{0x}. Binary literals start with \texttt{0b}.
In addition, as a special case the string \texttt{1s} can be used to
indicate an integer which is composed entirely of binary 1's.
\begin{align*}
decimal & \rightarrow (\textsf{0 \ldots 9})^{\textrm{+}}\\
hexadecimal & \rightarrow \textsf{0x}\,(\textsf{0 \ldots 9} \mid \textsf{A \ldots F} \mid \textsf{a \ldots f})^{\textrm{+}}\\
binary & \rightarrow \textsf{0b}\,(\textsf{0} \mid \textsf{1})^{\textrm{+}}
\end{align*}
\item[String Literals] String literals are enclosed in double quotes and should
not span multiple lines.
\item[Reserved words:] The following are reserved words in Skate:
\begin{verbatim}
schema, fact, flags, constants, enumeration, text, section
\end{verbatim}
\item[Special characters:] The following characters are used as operators,
separators, terminators or other special purposes in Skate:
\begin{alltt}
\{ \} [ ] ( ) + - * / ; , . =
\end{alltt}
\end{description}
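The identifier rule above can be checked mechanically. The following C sketch
(not part of the Skate compiler; the function name is illustrative) validates
a string against the identifier grammar:

```c
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>

/* Check a string against the Skate identifier grammar:
 * identifier -> (letter | _)(letter | digit | _)*  */
static bool is_valid_identifier(const char *s)
{
    if (s == NULL || *s == '\0') {
        return false;   /* empty strings are not identifiers */
    }
    /* first character: letter or underscore */
    if (!isalpha((unsigned char)*s) && *s != '_') {
        return false;
    }
    /* remaining characters: letters, digits or underscore */
    for (const char *p = s + 1; *p != '\0'; p++) {
        if (!isalnum((unsigned char)*p) && *p != '_') {
            return false;
        }
    }
    return true;
}
```

Note that, as described above, the single underscore \texttt{\_} matches this
grammar but is treated specially as the anonymous identifier by the language
itself.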
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Schema Declaration}
\label{chap:declaration}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In this chapter we define the layout of a Skate schema file, which declarations
it must contain and what other declarations it can have. Each Skate schema file
defines exactly one schema, which may refer to other schemas.
\section{Syntax Highlights}
In the following sections we use the syntax highlighting as follows:
\begin{syntax}
\synbf{bold}: Keywords
\synit{italic}: Identifiers / strings chosen by the user
\verb+verbatim+ constructs, symbols etc
\end{syntax}
\section{Conventions}
There are some conventions that should be followed when writing a schema
declaration. Following the conventions ensures consistency among different
schemas and allows generating a readable and well structured documentation.
\begin{description}
\item[Identifiers] Either camel case or underscores can be used to separate
words. Identifiers must be unique, i.e.\ their fully qualified identifier must
be unique. A fully qualified identifier is constructed as
$schema.(namespace.)^{*}name$.
\item[Descriptions] The description fields of the declarations should give a
more human-readable representation of the identifier and should avoid
abbreviations.
\item[Hierarchy/Grouping] Declarations of the same concept should be grouped
in a schema file (e.g. a single ACPI table). The declarations may be
grouped further using namespaces (e.g. IO or local interrupt controllers)
\item[Sections/Text] Additional information can be provided using text
blocks and sections. Each declaration can be wrapped in a section.
\end{description}
\todo{which conventions do we actually want}
\section{The Skate File}
A Skate file must consist of zero or more \emph{import} declarations (see
Section~\ref{sec:decl:import}) followed by a single \emph{schema} declaration
(see Section~\ref{sec:decl:schema}) which contains the actual definitions. The
Skate file typically has the extension \emph{*.sks}, referring to a Skate (or
SKB) schema.
\begin{syntax}
/* Header comments */
(\synbf{import} schema)*
/* the actual schema declaration */
\synbf{schema} theschema "" \verb+{+...\verb+}+
\end{syntax}
Note that all imports must be stated at the beginning of the file. Comments can
be inserted at any place.
%\begin{align*}
% skatefile & \rightarrow ( Import )^{\textrm{*}} (Schema)
%\end{align*}
\section{Imports}\label{sec:decl:import}
An import statement makes the definitions in a different schema file
available in the current schema definition, as described below. The
syntax of an import declaration is as follows:
\paragraph{Syntax}
\begin{syntax}
\synbf{import} \synit{schema};
\end{syntax}
\paragraph{Fields}
\begin{description}
\item[schema] is the name of the schema to import definitions from.
\end{description}
The order of the imports does not matter to Skate. At compile time, the Skate
compiler will try to resolve the imports by searching the include paths and the
path of the current schema file for an appropriate schema file. Imported files
are parsed at the same time as the main schema file. The Skate compiler will
attempt to parse all the imports of the imported files transitively. Cyclic
dependencies between schema files will not cause errors, but at present are
unlikely to result in C header files that compile successfully.
\section{Types}\label{sec:decl:types}
The Skate type system consists of a set of built-in types and a set of
implicit type definitions based on the declarations of the schema. Skate
performs some checks on the use of types.
\subsection{Built-in Types}
Skate supports the common C-like types such as integers, floats and chars, as
well as boolean values and strings (character arrays). In addition, Skate
treats the Barrelfish capability reference (\texttt{struct capref}) as a
built-in type.
\begin{syntax}
UInt8, UInt16, UInt32, UInt64, UIntPtr
Int8, Int16, Int32, Int64, IntPtr
Float, Double
Char, String
Bool
Capref
\end{syntax}
\subsection{Declaring Types}
All declarations stated in Section~\ref{sec:decl:decls} implicitly define
types and can be used within fact declarations. This can restrict the values
that are valid in a field. The syntax of the declarations enforces certain
restrictions on which types can be used in a given context.
In particular, fact declarations allow fields to be of a fact type, which
enables a notion of inheritance and common abstractions. For example, PCI
devices and USB devices may implement a specialization of a device
abstraction. Note that circular dependencies must be avoided.
Defining type aliases using a C-like typedef is currently not supported.
\section{Schema}\label{sec:decl:schema}
A schema groups all the facts of a particular topic together. For example,
a schema could be the PCI Express devices, memory regions or an ACPI table.
Each schema must have a unique name, which must match the name of the file, and
it must have at least one declaration to be considered a valid file. All checks
that are being executed by Skate are stated in Chapter~\ref{chap:astops}.
There can only be one schema declaration in a single Schema file.
\paragraph{Syntax}
\begin{syntax}
\synbf{schema} \synit{name} "\synit{description}" \verb+{+
\synit{declaration};
\ldots
\verb+}+;
\end{syntax}
\paragraph{Fields}
\begin{description}
\item[name] is an identifier for the Schema type, and will be used to
generate identifiers in the target language (typically C).
The name of the schema \emph{must} correspond to the
filename of the file, including case sensitivity: for example,
the file \texttt{cpuid.sks} will define a schema type
of name \texttt{cpuid}.
\item [description] is a string literal in double quotes, which
describes the schema type being specified, for example \texttt{"CPUID
Information Schema"}.
\item [declaration] must contain at least one of the following declarations:
\begin{itemize}
\item namespace -- Section \ref{sec:decl:namespace}
\item flags -- Section \ref{sec:decl:flags}
\item constants -- Section \ref{sec:decl:constants}
\item enumeration -- Section \ref{sec:decl:enums}
\item facts -- Section \ref{sec:decl:facts}
\item section -- Section \ref{sec:doc:section}
\item text -- Section \ref{sec:doc:text}
\end{itemize}
\end{description}
\section{Namespaces}
\label{sec:decl:namespace}
The idea of namespaces is to provide more hierarchical structure, similar to
Java packages or URIs (schema.namespace.namespace). For example, a PCI device
may have virtual and physical functions, or a processor may have multiple
cores. Namespaces can be nested within a schema to build a deeper hierarchy.
Namespaces affect the code generation.
\todo{does everything has to live in a namespace?, or is there an implicit
default namespace?}
\paragraph{Syntax}
\begin{syntax}
\synbf{namespace} \synit{name} "\synit{description}" \verb+{+
\synit{declaration};
\ldots
\verb+}+;
\end{syntax}
\paragraph{Fields}
\begin{description}
\item[name] the identifier of this namespace.
\item[description] human-readable description of this namespace.
\item[declarations] one or more declarations that are valid in a schema
definition.
\end{description}
\section{Declarations}\label{sec:decl:decls}
In this section we define the syntax of the possible fact, constant, flags
and enumeration declarations in Skate. Each of the following declarations
defines a type that can be used in fact declarations.
\subsection{Flags}
\label{sec:decl:flags}
Flags are bit fields of a fixed size (8, 16, 32, 64 bits) where each bit
position has a specific meaning e.g. the CPU is enabled or an interrupt
is edge-triggered.
In contrast to constants and enumerations, the bit positions of flags have a
particular meaning, and two flags can be combined, effectively enabling both
options, whereas the combination of enumeration values or constants may not be
defined. Bit positions that are not defined in the flag group are treated as
zero.
As an example of where flags can be used, we take the GICC CPU Interface
flags as defined in the MADT table of the ACPI specification.
\begin{tabular}{lll}
    \textbf{Flag} & \textbf{Bit} & \textbf{Description} \\
    \hline
    Enabled & 0 & If zero, this processor is unusable. \\
    Performance Interrupt Mode & 1 & 0 - Level-triggered, 1 - Edge-triggered \\
    VGIC Maintenance Interrupt Mode & 2 & 0 - Level-triggered, 1 - Edge-triggered \\
    Reserved & 3..31 & Reserved \\
    \hline
\end{tabular}
\subsubsection{Syntax}
\begin{syntax}
\synbf{flags} \synit{name} \synit{width} "\synit{description}" \verb+{+
\synit{position1} \synit{name1} "\synit{description1}" ;
\ldots
\verb+}+;
\end{syntax}
\subsubsection{Fields}
\begin{description}
\item[name] the identifier of this flag group. Must be unique for all
declarations.
\item [width] The width in bits of this flag group. Defines the maximum
number of flags supported. This is one of 8, 16, 32, 64.
\item [description] description in double quotes is a short explanation of
what the flag group represents.
\item [name1] identifier of the flag. Must be unique within the flag
group.
\item [position1] integer defining which bit position the flag sets
\item [description1] description of this particular flag.
\end{description}
\subsubsection{Type}
Flags with identifier $name$ define the following type:
\begin{syntax}
\synbf{flag} \synit{name};
\end{syntax}
\subsubsection{Example}
The example from the ACPI table can be expressed in Skate as follows:
\begin{syntax}
\synbf{flags} CPUInterfaceFlags \synit{32} "\synit{GICC CPU Interface Flags}"\
\verb+{+
0 Enabled "\synit{The CPU is enabled and can be used}" ;
1 Performance "\synit{Performance Interrupt Mode Edge-Triggered}" ;
2 VGICMaintenance "\synit{VGIC Maintenance Interrupt Mode Edge-Triggered}" ;
\verb+}+;
\end{syntax}
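In terms of the eventual C mapping (Chapter~\ref{chap:cmapping}), a flag group
of width 32 behaves like a set of single-bit masks over a 32-bit integer, and
combining two flags with bitwise OR enables both options. The sketch below
illustrates this semantics for the GICC example, using the bit positions from
the ACPI table above; the exact spelling of the generated identifiers is an
assumption for illustration:

```c
#include <stdint.h>

/* Hypothetical C mapping of the CPUInterfaceFlags group (width 32):
 * each flag occupies its declared bit position; undeclared bits are zero. */
typedef uint32_t gicc_CPUInterfaceFlags_t;

#define gicc_CPUInterfaceFlags_Enabled         ((gicc_CPUInterfaceFlags_t)(1u << 0))
#define gicc_CPUInterfaceFlags_Performance     ((gicc_CPUInterfaceFlags_t)(1u << 1))
#define gicc_CPUInterfaceFlags_VGICMaintenance ((gicc_CPUInterfaceFlags_t)(1u << 2))
```

Combining two flags, e.g.\ \texttt{Enabled | Performance}, yields a value with
both bits set, which is exactly the "effectively enabling both options"
semantics described above.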
\subsection{Constants}
\label{sec:decl:constants}
Constants provide a way to specify a set of predefined values of a particular
type. They are defined in a constant group and every constant of this group
needs to be of the same type.
Compared to flags, the combination of two constants has no meaning (e.g.
adding two version numbers). In addition, constants only define a set of known
values, but do not rule out the possibility of observing other values. An
example of this is the vendor ID of a PCI Express device, where the
constant group contains the known vendor IDs.
As an example of where constants can be used, we take the GIC version field of
the GICD entry of the ACPI MADT table.
\begin{tabular}{ll}
\textbf{Value} & \textbf{Meaning} \\
\hline
\texttt{0x00} & No GIC version is specified, fall back to hardware
discovery for GIC version \\
\texttt{0x01} & Controller is a GICv1 \\
\texttt{0x02} & Controller is a GICv2 \\
\texttt{0x03} & Controller is a GICv3 \\
\texttt{0x04} & Controller is a GICv4 \\
\texttt{0x05-0xFF} & Reserved for future use. \\
\hline
\end{tabular}
\subsubsection{Syntax}
\begin{syntax}
\synbf{constants} \synit{name} \synit{builtintype} "\synit{description}" \verb+{+
\synit{name1} = \synit{value1} "\synit{description1}" ;
\ldots
\verb+}+;
\end{syntax}
\subsubsection{Fields}
\begin{description}
\item[name] the identifier of this constants group. Must be unique for all
declarations.
\item [builtintype] the type of the constant group. Must be one of the
builtin types as defined in~\ref{sec:decl:types}
\item [description] description in double quotes is a short explanation of
what the constant group represents.
\item [name1] identifier of the constant. Must be unique within the
constant group.
\item [value1] the value of the constant. Must match the declared type.
\item [description1] description of this particular constant
\end{description}
\subsubsection{Type}
Constants with identifier $name$ define the following type:
\begin{syntax}
\synbf{const} \synit{name};
\end{syntax}
\subsubsection{Example}
The GIC version of our example can be expressed in the syntax as follows:
\begin{syntax}
\synbf{constants} GICVersion \synit{UInt8} "\synit{The GIC Version}" \verb+{+
unspecified = 0x00 "\synit{No GIC version is specified}" ;
GICv1 = 0x01 "\synit{Controller is a GICv1}" ;
GICv2 = 0x02 "\synit{Controller is a GICv2}" ;
GICv3 = 0x03 "\synit{Controller is a GICv3}" ;
GICv4 = 0x04 "\synit{Controller is a GICv4}" ;
\verb+}+;
\end{syntax}
\subsection{Enumerations}
\label{sec:decl:enums}
Enumerations model a finite set of states: effectively, constants that only
allow the specified values. However, in contrast to constants, they are not
assigned a specific value. Two enumeration values cannot be combined. As an
example, the enumeration construct can be used to express the state of a
device in the system, which can be in one of the following states:
\emph{uninitialized, operational, suspended, halted}. Obviously, the
combination of the states operational and suspended is meaningless.
\subsubsection{Syntax}
\begin{syntax}
\synbf{enumeration} \synit{name} "\synit{description}" \verb+{+
\synit{name1} "\synit{description1}";
\ldots
\verb+}+;
\end{syntax}
\subsubsection{Fields}
\begin{description}
\item[name] the identifier of this enumeration group. Must be unique for
all declarations.
\item [description] description in double quotes is a short explanation of
what the enumeration group represents.
\item [name1] identifier of the element. Must be unique within the
enumeration group.
\item [description1] description of this particular element
\end{description}
\subsubsection{Type}
Enumerations with identifier $name$ define the following type:
\begin{syntax}
\synbf{enum} \synit{name};
\end{syntax}
\subsubsection{Example}
\begin{syntax}
\synbf{enumeration} DeviceState "\synit{Possible device states}" \verb+{+
uninitialized "\synit{The device is uninitialized}";
operational "\synit{The device is operational}";
suspended "\synit{The device is suspended}";
halted "\synit{The device is halted}";
\verb+}+;
\end{syntax}
\subsection{Facts}
\label{sec:decl:facts}
The fact is the central element of Skate. It defines the actual facts about the
system that are put into the SKB. Each fact has a name and one or more fields
of a given type. Facts should be defined such that they do not require any
transformation. For example, take the entries of an ACPI table and define a
fact for each of the entry types.
\subsubsection{Syntax}
\begin{syntax}
\synbf{fact} \synit{name} "\synit{description}" \verb+{+
\synit{type1} \synit{name1} "\synit{description1}" ;
\ldots
\verb+}+;
\end{syntax}
\subsubsection{Fields}
\begin{description}
\item[name] the identifier of this fact. Must be unique for all
declarations.
\item[description] description in double quotes is a short English
explanation of what the fact defines. (e.g. Local APIC)
\item[type1] the type of the fact field. Must be one of the built-in types
             or one of the constants, flags or other facts. When using
             facts as field types, there must be no recursive nesting.
\item [name1] identifier of a fact field. Must be unique within the
Fact group.
\item [description1] description of this particular field
\end{description}
\subsubsection{Type}
Facts with identifier $name$ define the following type.
\begin{syntax}
\synbf{fact} \synit{name};
\end{syntax}
\section{Documentation}
The schema declaration may contain \emph{section} and \emph{text} blocks that
allow providing an introduction or additional information for the schema
declared in the Skate file. The two constructs are for documentation purposes
only and do not affect code generation. The section and text blocks can appear
at any place in the schema declaration. No type is defined for
documentation blocks.
\subsection{Schema}
The generated documentation will contain all the schemas declared in the
source tree. Each schema file corresponds to a chapter in the resulting
documentation, or forms a page of a Wiki, for instance.
\subsection{Text}
\label{sec:doc:text}
By adding \texttt{text} blocks, additional content can be added to the generated
documentation. This includes examples and additional information of the
declarations of the schema. The text blocks are omitted when generating code.
Note, each of the text lines must be wrapped in double quotes. Generally, a
block of text will translate to a paragraph.
\subsubsection{Syntax}
\begin{syntax}
\synbf{text} \verb+{+
"\synit{text}"
...
\verb+};+
\end{syntax}
\subsubsection{Fields}
\begin{description}
\item[text] A line of text in double quotes.
\end{description}
\subsection{Sections}
\label{sec:doc:section}
The \texttt{section} construct allows inserting section headings into the
documentation. A section logically groups declarations and text blocks
together, allowing a logical hierarchy to be expressed.
\subsubsection{Syntax}
\begin{syntax}
\synbf{section} "\synit{name}" \verb+{+
\synit{declaration};
\ldots
\verb+};+
\end{syntax}
\subsubsection{Fields}
\begin{description}
\item[name] the name will be used as the section heading
\item[declaration] declarations belonging to this section.
\end{description}
Note that nested sections will result in (sub)subheadings, i.e.\ headings of
level 2, 3, \ldots{} Namespaces will appear as sections in the documentation.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Operations and checks on the AST}
\label{chap:astops}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The following checks are executed after the parser has consumed the entire
Skate file and created the AST.
\section{Filename Check}
As already stated, the name of the Skate file (without extension) must match
the identifier of the declared schema in the Skate file. This is required for
resolving imports of other schemas.
\section{Uniqueness of declarations / fields}
Skate ensures that all declarations within a namespace are unique, no matter
which type they are; i.e.\ there cannot be a fact and a constant definition
with the same identifier. Moreover, the same check is applied to the fact
attributes as well as to flags, enumerations and constant values.
Checks are based on the qualified identifier.
\section{Type Checks}
\section{Sorting of Declarations}
\todo{This requires generating a dependency graph for the facts etc.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{C mapping for Schema Definitions}
\label{chap:cmapping}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
For each schema specification, Skate generates ....
\paragraph{Abbreviations}
In all the sections of this chapter, we use the following abbreviations, where
the actual value may be upper or lower case depending on the conventions:
\begin{description}
\item[SN] The schema name as used in the schema declaration.
\item[DN] The declaration name as used in the flags / constants /
enumeration / facts declaration
\item[FN] The field name as used in field declaration of flags / constants /
enumeration / facts
\end{description}
In general all defined functions, types and macros are prefixed with the schema
name SN.
\paragraph{Conventions}
We use the following conventions for the generated code:
\begin{itemize}
\item macro definitions and enumerations are uppercase;
\item type definitions and function names are lowercase;
\item the underscore \texttt{'\_'} is used to separate words.
\end{itemize}
\todo{just a header file (cf mackerel), or also C functions (cf. flounder)?}
\section{Using Schemas}
Developers can use the schemas by including the generated header file of a
schema. All header files are placed in the \texttt{schema} subdirectory of the
main include folder of the build tree. For example, the
schema \texttt{SN} would generate the file \texttt{SN\_schema.h}, which can
be included by a C program with:
\begin{quote}
\texttt{\#include <schema/SN\_schema.h>}
\end{quote}
\section{Preamble}
The generated header file is protected by an include guard that depends on
the schema name. For example, the schema \texttt{SN} will be guarded by the
macro definition \texttt{\_\_SCHEMADEF\_\_SN\_H\_}. The header file will
include the following header files:
\begin{enumerate}
    \item a common header \texttt{skate.h} providing the macro and
          function definitions required for correct C generation.
    \item an include for each of the imported schemas.
\end{enumerate}
\section{Constants}
For any declared constant group, Skate will generate the following:
\paragraph{Type and macro definitions}
\begin{enumerate}
\item A type definition for the declared type of the constant group. The
      type name will be \texttt{SN\_DN\_t}.
\item A set of CPP macro definitions, one for each of the declared constants.
Each macro will have the name as in \texttt{SN\_DN\_FN} and expands to the field value cast to the type of the field.
\end{enumerate}
\paragraph{Function definitions}
\begin{enumerate}
\item A function to describe the value
\begin{quote}
\texttt{SN\_DN\_describe(SN\_DN\_t);}
\end{quote}
\item An snprintf-like function to pretty-print values of type
      \texttt{SN\_DN\_t}, with prototype:
\begin{quote}
 \texttt{int SN\_DN\_print(char *s, size\_t sz, SN\_DN\_t val);}
\end{quote}
\end{enumerate}
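As a sketch of this mapping, the GICVersion constant group from
Section~\ref{sec:decl:constants} could generate something like the following;
the schema name \texttt{gic} and the exact spelling of the identifiers are
assumptions for illustration:

```c
#include <stdint.h>

/* Hypothetical output for 'constants GICVersion UInt8' in a schema 'gic'. */
typedef uint8_t gic_GICVersion_t;

/* One macro per constant, expanding to the value cast to the group type. */
#define gic_GICVersion_unspecified ((gic_GICVersion_t)0x00)
#define gic_GICVersion_GICv1       ((gic_GICVersion_t)0x01)
#define gic_GICVersion_GICv2       ((gic_GICVersion_t)0x02)

/* Describe function: maps a value to its human-readable description.
 * Constants do not rule out other values, hence the default case. */
static const char *gic_GICVersion_describe(gic_GICVersion_t v)
{
    switch (v) {
    case gic_GICVersion_unspecified: return "No GIC version is specified";
    case gic_GICVersion_GICv1:       return "Controller is a GICv1";
    case gic_GICVersion_GICv2:       return "Controller is a GICv2";
    default:                         return "unknown value";
    }
}
```

The default case reflects the semantics stated in
Section~\ref{sec:decl:constants}: a constant group lists known values without
excluding others.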
\todo{Do we need more ?}
\section{Flags}
\paragraph{Type and macro definitions}
\begin{enumerate}
\item A type definition for the declared type of the flag group. The
      type name will be \texttt{SN\_DN\_t}.
\end{enumerate}
\paragraph{Function definitions}
\begin{enumerate}
\item A function to describe the value
\begin{quote}
\texttt{SN\_DN\_describe(SN\_DN\_t);}
\end{quote}
\end{enumerate}
\todo{Do we need more ?}
\section{Enumerations}
Enumerations translate one-to-one to the C enumeration type in a
straightforward manner:
\begin{quote}
 \texttt{typedef enum \{ SN\_DN\_FN1, ... \} SN\_DN\_t; }
\end{quote}
\paragraph{Function definitions}
\begin{enumerate}
\item A function to describe the value
\begin{quote}
\texttt{SN\_DN\_describe(SN\_DN\_t);}
\end{quote}
\item A function to pretty-print the value
\begin{quote}
\texttt{SN\_DN\_print(char *b, size\_t sz, SN\_DN\_t val);}
\end{quote}
\end{enumerate}
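Using the DeviceState enumeration from Section~\ref{sec:decl:enums}, the
generated code could look like this sketch; the schema name \texttt{dev} and
the identifier spelling are assumptions for illustration:

```c
/* Hypothetical one-to-one mapping of 'enumeration DeviceState'. */
typedef enum {
    dev_DeviceState_uninitialized,
    dev_DeviceState_operational,
    dev_DeviceState_suspended,
    dev_DeviceState_halted,
} dev_DeviceState_t;

/* Describe function: returns the element's description string. */
static const char *dev_DeviceState_describe(dev_DeviceState_t s)
{
    switch (s) {
    case dev_DeviceState_uninitialized: return "The device is uninitialized";
    case dev_DeviceState_operational:   return "The device is operational";
    case dev_DeviceState_suspended:     return "The device is suspended";
    case dev_DeviceState_halted:        return "The device is halted";
    default:                            return "invalid state";
    }
}
```

Note that, unlike constants, the numeric values of the enumerators carry no
meaning; only the set of names does.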
\section{Facts}
\paragraph{Type and macro definitions}
\begin{enumerate}
\item A type definition for the declared type of the fact. The
type name will be \texttt{SN\_DN\_t}.
\end{enumerate}
\paragraph{Function definitions}
\begin{enumerate}
\item A function to describe the value
\begin{quote}
\texttt{SN\_DN\_describe(SN\_DN\_t);}
\end{quote}
\item A function to add a fact to the SKB
\item A function to retrieve all the facts of this type from the SKB
\item A function to delete the fact from the SKB
\end{enumerate}
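As an illustration, for a fact \texttt{vendor} in a schema \texttt{cpuid}
the generated SKB accessors could look as follows (the exact signatures
are an assumption for illustration purposes):
\begin{lstlisting}[language=C]
errval_t cpuid_vendor_add(cpuid_vendor_t *fact);
errval_t cpuid_vendor_retrieve(cpuid_vendor_t **facts, size_t *count);
errval_t cpuid_vendor_delete(cpuid_vendor_t *fact);
\end{lstlisting}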
\todo{Provide some way of wildcard values. e.g. list all facts with this
filter or delete all facts that match the filter.}
\section{Namespaces}
\paragraph{Function definitions}
\begin{enumerate}
\item A function to retrieve all the facts belonging to a name space
\end{enumerate}
\section{Sections and text blocks}
For the \texttt{section} and \texttt{text} blocks in the schema file, no
visible C constructs are generated; instead, these blocks are turned into
comment blocks in the generated C files.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Prolog mapping for Schema Definitions}
\label{chap:prologmapping}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Each fact added to the SKB using Skate is represented by a single Prolog
functor. The functor name in Prolog consists of the schema name and the fact name. The
fact defined in Listing \ref{lst:sample_schema} is represented by the functor
\lstinline!cpuid_vendor!~and has an arity of three.
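For illustration, such a fact could be asserted in the SKB as follows
(the argument values are hypothetical):
\begin{lstlisting}[language=Prolog]
cpuid_vendor(0, 'GenuineIntel', 6).
\end{lstlisting}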
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Generated Documentation}
\label{chap:documentation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Access Control}
\label{chap:accesscontrol}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Access control is applied on the level of schemas or namespaces.
\chapter{Integration into the Hake build system}
Skate is a tool that is integrated with Hake. Add the attribute
\lstinline!SkateSchema!~to a Hakefile to invoke Skate as shown in Listing
\ref{lst:Skate_hake}.
\begin{lstlisting}[caption={Including Skate schemata in Hake},
label={lst:Skate_hake}, language=Haskell]
[ build application {
SkateSchema = [ "cpu" ]
...
} ]
\end{lstlisting}
Adding an entry for \varname{SkateSchema} to a Hakefile generates both the
header and the implementation and adds them to the list of compiled
resources. A Skate schema is referred to by its name, and Skate will look
for a file ending in \varname{.Skate} that contains the schema definition.
The header file is placed in \pathname{include/schema} in the build tree;
the C implementation is stored in the application or library directory of
the Hakefile.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliographystyle{abbrv}
\bibliography{barrelfish}
\end{document}
|
From CoqAlgs Require Export Sorting.Sort.
From CoqAlgs Require Import Ord.
Set Implicit Arguments.
Function extractMin {A : Ord} (l : list A) : option (A * list A) :=
match l with
| [] => None
| h :: t =>
match extractMin t with
| None => Some (h, [])
| Some (m, l') =>
        if h ≤? m then Some (h, m :: l') else Some (m, h :: l')
end
end.
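(* For example, [extractMin [3; 1; 2]] evaluates to [Some (1, [3; 2])]:
   the first component is the minimum of the list and the second holds
   the remaining elements. *)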
Function ss {A : Ord} (l : list A) {measure length l} : list A :=
match extractMin l with
| None => []
| Some (m, l') => m :: ss l'
end.
Proof.
induction l as [| h t]; cbn; intros.
inv teq.
destruct (extractMin t) eqn: Heq.
destruct p0. trich; inv teq; cbn; rewrite <- Nat.succ_lt_mono; eapply IHt; eauto.
inv teq. cbn. apply le_n_S, Nat.le_0_l.
Defined.
Lemma Permutation_extractMin :
forall (A : Ord) (l l' : list A) (m : A),
extractMin l = Some (m, l') -> Permutation (m :: l') l.
Proof.
intros A l. functional induction extractMin l; cbn; inv 1.
destruct t; cbn in e0.
reflexivity.
destruct (extractMin t).
destruct p. 1-2: trich.
rewrite Permutation.perm_swap. auto.
Qed.
Lemma Permutation_ss :
forall (A : Ord) (l : list A),
Permutation (ss A l) l.
Proof.
intros. functional induction @ss A l.
destruct l; cbn in e.
reflexivity.
destruct (extractMin l); try destruct p; trich.
apply Permutation_extractMin in e. rewrite <- e, IHl0. reflexivity.
Qed.
Lemma extractMin_spec :
forall (A : Ord) (l l' : list A) (m : A),
    extractMin l = Some (m, l') -> forall x : A, In x l' -> m ≤ x.
Proof.
intros A l.
functional induction extractMin l;
inv 1; inv 1; trich.
specialize (IHo _ _ e0 _ H0). trich.
specialize (IHo _ _ e0 _ H0). trich.
Qed.
Lemma Sorted_ss :
forall (A : Ord) (l : list A),
Sorted trich_le (ss A l).
Proof.
intros. functional induction @ss A l.
destruct l; trich.
apply Sorted_cons.
intros. assert (In x l').
apply Permutation_in with (ss A l').
apply Permutation_ss.
assumption.
eapply extractMin_spec; eauto.
assumption.
Qed.
(** Da ultimate selection sort! *)
Function mins'
{A : Ord} (l : list A) : list A * list A :=
match l with
| [] => ([], [])
| h :: t =>
let
(mins, rest) := mins' t
in
match mins with
| [] => ([h], rest)
| m :: ms =>
match h <?> m with
| Lt => ([h], mins ++ rest)
| Eq => (h :: mins, rest)
| Gt => (mins, h :: rest)
end
end
end.
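(* For example, [mins' [2; 1; 3; 1]] evaluates to [([1; 1], [2; 3])]:
   the first component collects every copy of the minimum and the second
   holds the remaining elements. *)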
Lemma mins'_nil :
forall (A : Ord) (l rest : list A),
mins' l = ([], rest) -> rest = [].
Proof.
intros. functional induction mins' l; inv H.
Qed.
Lemma mins'_length :
forall (A : Ord) (l mins rest : list A),
mins' l = (mins, rest) -> length l = length mins + length rest.
Proof.
intros A l. functional induction mins' l; inv 1; cbn in *.
destruct t; cbn in e0.
inv e0.
destruct (mins' t), l.
inv e0.
destruct (c <?> c0); inv e0.
1-3: f_equal; rewrite (IHp _ _ e0), ?app_length; cbn; lia.
Qed.
Function ss_mins'
{A : Ord} (l : list A) {measure length l} : list A :=
match mins' l with
| ([], _) => []
| (mins, rest) => mins ++ ss_mins' rest
end.
Proof.
intros. functional induction mins' l; inv teq; cbn in *.
apply mins'_nil in e0. subst. cbn. apply le_n_S, Nat.le_0_l.
all: apply mins'_length in e0; cbn in e0;
rewrite e0, ?app_length; lia.
Defined.
(** Time to prove something *)
Class Select (A : Ord) : Type :=
{
select : list A -> list A * list A * list A;
select_mins :
forall l mins rest maxes : list A,
select l = (mins, rest, maxes) ->
      forall x y : A, In x mins -> In y l -> x ≤ y;
select_maxes :
forall l mins rest maxes : list A,
select l = (mins, rest, maxes) ->
      forall x y : A, In x l -> In y maxes -> x ≤ y;
select_Permutation :
forall l mins rest maxes : list A,
select l = (mins, rest, maxes) ->
Permutation l (mins ++ rest ++ maxes);
select_length_rest :
forall l mins rest maxes : list A,
select l = (mins, rest, maxes) ->
mins = [] /\ rest = [] /\ maxes = [] \/
lt (length rest) (length l);
}.
Coercion select : Select >-> Funclass.
Set Warnings "-unused-pattern-matching-variable". (* Line 166 - bug in Coq? *)
Function gss
{A : Ord} (s : Select A) (l : list A)
{measure length l} : list A :=
match select l with
| ([], [], []) => []
| (mins, rest, maxes) =>
mins ++ gss s rest ++ maxes
end.
Proof.
all: intros; subst;
apply select_length_rest in teq;
decompose [and or] teq; clear teq;
try congruence; try assumption.
Defined.
Set Warnings "unused-pattern-matching-variable".
Lemma Permutation_gss :
forall (A : Ord) (s : Select A) (l : list A),
Permutation (gss s l) l.
Proof.
intros. functional induction @gss A s l.
apply select_Permutation in e. cbn in e. symmetry. assumption.
rewrite IHl0. symmetry. apply select_Permutation. assumption.
Qed.
Lemma select_In :
forall (A : Ord) (s : Select A) (l mins rest maxes : list A) (x : A),
select l = (mins, rest, maxes) ->
In x mins \/ In x rest \/ In x maxes -> In x l.
Proof.
intros. eapply Permutation_in.
symmetry. apply select_Permutation. eassumption.
apply in_or_app. decompose [or] H0; clear H0.
left. assumption.
right. apply in_or_app. left. assumption.
right. apply in_or_app. right. assumption.
Qed.
Lemma select_mins_maxes :
forall (A : Ord) (s : Select A) (l mins rest maxes : list A),
select l = (mins, rest, maxes) ->
      forall x y : A, In x mins -> In y maxes -> x ≤ y.
Proof.
intros. eapply select_mins; try eassumption.
eapply select_In; eauto.
Qed.
Lemma select_mins_same :
forall (A : Ord) (s : Select A) (l mins rest maxes : list A),
select l = (mins, rest, maxes) ->
forall x y : A, In x mins -> In y mins -> x = y.
Proof.
intros. apply trich_le_antisym.
eapply select_mins; eauto. eapply select_In; eauto.
eapply select_mins; eauto. eapply select_In; eauto.
Qed.
Lemma select_maxes_same :
forall (A : Ord) (s : Select A) (l mins rest maxes : list A),
select l = (mins, rest, maxes) ->
forall x y : A, In x maxes -> In y maxes -> x = y.
Proof.
intros. apply trich_le_antisym.
eapply select_maxes; eauto. eapply select_In; eauto.
eapply select_maxes; eauto. eapply select_In; eauto.
Qed.
Lemma same_Sorted :
forall (A : Ord) (x : A) (l : list A),
(forall y : A, In y l -> x = y) ->
Sorted trich_le l.
Proof.
intros A x.
induction l as [| h t]; cbn; intros.
constructor.
specialize (IHt ltac:(auto)). change (Sorted trich_le ([h] ++ t)).
apply Sorted_app.
constructor.
assumption.
assert (x = h) by auto; subst. inv 1.
intro. right. apply H. auto.
inv H1.
Qed.
Lemma Sorted_select_mins :
forall (A : Ord) (s : Select A) (l mins rest maxes : list A),
select l = (mins, rest, maxes) -> Sorted trich_le mins.
Proof.
destruct mins; intros.
constructor.
apply same_Sorted with c. intros. eapply select_mins_same.
exact H.
left. reflexivity.
assumption.
Qed.
Lemma Sorted_select_maxes :
forall (A : Ord) (s : Select A) (l mins rest maxes : list A),
select l = (mins, rest, maxes) -> Sorted trich_le maxes.
Proof.
destruct maxes; intros.
constructor.
apply same_Sorted with c. intros. eapply select_maxes_same.
exact H.
left. reflexivity.
assumption.
Qed.
Lemma gss_In :
forall (A : Ord) (s : Select A) (x : A) (l : list A),
In x (gss s l) <-> In x l.
Proof.
intros. split; intros.
eapply Permutation_in.
apply Permutation_gss.
assumption.
eapply Permutation_in.
symmetry. apply Permutation_gss.
assumption.
Qed.
Lemma gSorted_ss :
forall (A : Ord) (s : Select A) (l : list A),
Sorted trich_le (gss s l).
Proof.
intros. functional induction @gss A s l; try clear y.
constructor.
apply Sorted_app.
eapply Sorted_select_mins. eassumption.
apply Sorted_app.
assumption.
eapply Sorted_select_maxes. eassumption.
intros. rewrite gss_In in H. eapply select_maxes; eauto.
eapply select_In; eauto.
intros. apply in_app_or in H0. destruct H0.
rewrite gss_In in H0. eapply select_mins; try eassumption.
eapply select_In; eauto.
eapply select_mins_maxes; eauto.
Qed.
#[refine]
#[export]
Instance Sort_gss (A : Ord) (s : Select A) : Sort trich_le :=
{
sort := gss s;
Sorted_sort := gSorted_ss s;
}.
Proof.
intros. apply Permutation_gss.
Defined.
Lemma min_dflt_spec :
forall (A : Ord) (x h : A) (t : list A),
    In x (h :: t) -> min_dflt A h t ≤ x.
Proof.
intros until t. revert x h.
induction t as [| h' t']; simpl in *.
inv 1; trich.
destruct 1 as [H1 | [H2 | H3]]; subst.
trich. specialize (IHt' x x ltac:(left; reflexivity)). trich.
trich. specialize (IHt' x x ltac:(left; reflexivity)). trich.
trich. pose (IH1 := IHt' h h ltac:(auto)). pose (IH2 := IHt' x h ltac:(auto)). trich.
Qed.
(*
Lemma min_In :
forall (A : Ord) (m : A) (l : list A),
trich_min l = Some m -> In m l.
Proof.
intros. functional induction min l; cbn; inv H.
Qed.
Lemma lengthOrder_removeFirst_min :
forall (A : Ord) (m : A) (l : list A),
min l = Some m -> lengthOrder (removeFirst m l) l.
Proof.
intros. functional induction min l; inv H; trich; red; cbn; try lia.
rewrite <- Nat.succ_lt_mono. apply IHo. assumption.
Qed.
#[refine]
#[export]
Instance Select_min (A : Ord) : Select A :=
{
select l :=
match min l with
| None => ([], [], [])
| Some m => ([m], removeFirst m l, [])
end;
}.
Proof.
all: intros; destruct (min l) eqn: Hc; inv H.
inv H0. eapply min_spec; eauto.
rewrite app_nil_r. cbn.
apply perm_Permutation, removeFirst_In_perm, min_In, Hc.
destruct l; cbn in *.
reflexivity.
destruct (min l); inv Hc.
right. apply lengthOrder_removeFirst_min. assumption.
Defined.
*) |
Jacqueline Fernandez ( born 11 August 1985 ) is a Sri Lankan actress , former model , and the winner of the 2006 Miss Universe Sri Lanka pageant . As Miss Universe Sri Lanka she represented her country at the Miss Universe 2006 pageant . She graduated with a degree in mass communication from the University of Sydney , and worked as a television reporter in Sri Lanka .
|
Free spirit! An art for lovers of the perfect gesture.
For beginners or the more experienced, ESF Chamrousse offers lessons in telemark. Technique and elegance characterise the telemark turn. With the help of an ESF Chamrousse instructor you can really master this great discipline.
The art of the perfect curve!
Make sure you are insured! We recommend the Assur' Glisse Insurance.
Telemark offers us a really different and traditional way of skiing. I really enjoyed my lesson and I am delighted with my progress. |
function [param] = tps_compute_param(PP,kernel,U,Pm,Q1,Q2,R,lambda,target)
%%=====================================================================
%% $RCSfile: tps_compute_param.m,v $
%% $Author: bjian $
%% $Date: 2008/11/24 08:59:01 $
%% $Revision: 1.1 $
%%=====================================================================
TB = U*PP;
QQ = Q2*Q2';
% Solve the regularized least-squares system for the non-rigid (TPS)
% coefficients; backslash is preferred over inv() for numerical stability.
A = (TB'*QQ*TB + lambda*kernel) \ (TB'*QQ);
tps = A*target;
% Recover the affine part from the QR factors Q1 and R.
affine = R \ (Q1'*(target - TB*tps));
param = [affine; tps];
|
1. The Zetafax SMTP Server is receiving an SMTP stream that has multiple line breaks in the FROM field. As a result, the Zetafax SMTP Server does not parse this field properly and only a partial address is forwarded onto the Zetafax Server. The truncated FROM address can be seen in the Zetafax Server Monitor as the following line: "Message from:" (note that there is no data). A working system will display the following log line: "Message from: [email protected]".
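For reference, the malformed stream corresponds to a FROM field that has been folded across several lines, for example (the address and name below are illustrative):

    From: "John
     Smith" <[email protected]>

A parser that only reads the first line of such a folded field loses the address, which matches the empty "Message from:" log line described above.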
An update is available here to address the first issue. This includes a fix to allow Zetafax SMTP Server to parse addresses with line breaks correctly.
Please refer to ZTN1418 for instructions on importing a list of SMTP users and enabling them.
This behaviour was corrected in the update and technote detailed above. |
\section{Asset Allocation with Transaction Costs}
\label{sec:asset_allocation}
\begin{frame}{Asset Allocation with Transaction Costs}
\begin{block}{Goal}
Dynamically invest the available capital in a portfolio of different assets in order to maximize the expected total return or another relevant performance measure.
\end{block}
\begin{block}{\textbf{Reward function}: portfolio log-return with transaction costs}
\vspace{-0.5cm}
\begin{equation*}
\resizebox{.9 \textwidth}{!}{
$R_{t+1} = \log \left\{ 1 + \sum^{I}_{i=0} \left[ a_t^i X_{t+1}^i - \delta_i
\left| a_t^i - \tilde{a}_t^i \right| - \delta_s {(a_t^i)}^- \right] -
\delta_f \mathbf{1}_{{a}_t \neq \tilde{{a}}_{t-1}}\right\}$
}
\end{equation*}
\end{block}
\begin{block}{\textbf{Actions}: Portfolio weights}
\begin{equation*}
\{a_t^i\}_{i=0}^I \;\;\; \text{s.t.}\;\;\; \sum^{I}_{i=0} a_t^i = 1 \;\;\;\;\; \forall t \in \{0, 1, 2, \ldots\}
\end{equation*}
\end{block}
\begin{block}{\textbf{State}: assets past returns and current allocation}
\begin{equation*}
S_t = \{X, X_t, X_{t-1}, \ldots, X_{t-P}, \tilde{a}_t\}
\end{equation*}
\end{block}
\end{frame}
|
# Decision Lens API
#
# No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
#
# OpenAPI spec version: 1.0
#
# Generated by: https://github.com/swagger-api/swagger-codegen.git
#' SourceType Class
#'
#'
#' @importFrom R6 R6Class
#' @importFrom jsonlite fromJSON toJSON
#' @export
SourceType <- R6::R6Class(
'SourceType',
public = list(
initialize = function(){
},
toJSON = function() {
SourceTypeObject <- list()
SourceTypeObject
},
fromJSON = function(SourceTypeJson) {
SourceTypeObject <- dlensFromJSON(SourceTypeJson)
},
toJSONString = function() {
sprintf(
'{
        }'
      )
},
fromJSONString = function(SourceTypeJson) {
SourceTypeObject <- dlensFromJSON(SourceTypeJson)
}
)
)
|
Formal statement is: lemma fst_im_cbox [simp]: "cbox c d \<noteq> {} \<Longrightarrow> (fst ` cbox (a,c) (b,d)) = cbox a b" Informal statement is: If the interval $[c,d]$ is non-empty, then the image of the interval $[(a,c),(b,d)]$ under the function $fst$ is the interval $[a,b]$. |
SUBROUTINE MA_LSCB ( mszrpt, marrpt, ipt, iret )
C************************************************************************
C* MA_LSCB *
C* *
C* This subroutine gets the length and starting position of each of the *
C* sections 1-5 in the WMO FM13 report. If a section is missing, then *
C* its length and starting position are set to zero. *
C* *
C* MA_LSCB ( MSZRPT, MARRPT, IPT, IRET ) *
C* *
C* Input parameters: *
C* MSZRPT INTEGER Length of report in bytes *
C* MARRPT CHAR* Report array *
C* IPT INTEGER Pointer to space before the *
C* i(R)i(x)hVV group *
C* *
C* Output parameters: *
C* LSEC1 INTEGER Length of section 1 in report *
C* ISEC1 INTEGER Pointer to start of section 1 *
C* LSEC2 INTEGER Length of section 2 in report *
C* ISEC2 INTEGER Pointer to start of section 2 *
C* LSEC3 INTEGER Length of section 3 in report *
C* ISEC3 INTEGER Pointer to start of section 3 *
C* LSEC4 INTEGER Length of section 4 in report *
C* ISEC4 INTEGER Pointer to start of section 4 *
C* LSEC5 INTEGER Length of section 5 in report *
C* ISEC5 INTEGER Pointer to start of section 5 *
C* IRET INTEGER Return code *
C* 0 = Normal return *
C* 1 = Problems *
C** *
C* Log: *
C* R. Hollern/NCEP 4/96 *
C* R. Hollern/NCEP 8/96 Added check on section lengths *
C* R. Hollern/NCEP 12/96 Modified logic to compute length of *
C* sections *
C* D. Kidwell/NCEP 4/97 Cleaned up code and documentation *
C* D. Kidwell/NCEP 10/97 Documentation *
C* R. Hollern/NCEP 1/98 Corrected decoding problem finding start*
C* of the marine section in report *
C* R. Hollern/NCEP 5/00 Corrected problem with finding the start*
C* of section 2 in CMAN report when sect 1 *
C* was less than 35 characters in length *
C* C. Caruso Magee/NCEP 6/01 Corrected problem w/ finding length of *
C* each section (was returning length that *
C* was short by one character, so wasn't *
C* decoding some data as section length *
C* would occasionally return as 0 instead *
C* of 1). Added 1 to lsec1, 2, 3, and 4. *
C************************************************************************
INCLUDE 'macmn.cmn'
C*
CHARACTER*(*) marrpt
C------------------------------------------------------------------------
iret = 0
lsec1 = 0
lsec2 = 0
lsec3 = 0
lsec4 = 0
lsec5 = 0
isec1 = 0
isec2 = 0
isec3 = 0
isec4 = 0
isec5 = 0
lrpt = mszrpt
C
C* If there is a section 555, determine its length.
C
i555 = INDEX ( marrpt (1:mszrpt), ' 555 ' )
C
IF ( i555 .gt. 0 ) THEN
isec5 = i555 + 3
lsec5 = mszrpt - isec5
lrpt = i555
END IF
C
i444 = INDEX ( marrpt (1:lrpt), ' 444 ' )
C
IF ( i444 .gt. 0 ) THEN
isec4 = i444 + 3
lsec4 = lrpt - isec4
lrpt = i444
END IF
C
i333 = INDEX ( marrpt (1:lrpt), ' 333 ' )
C
IF ( i333 .gt. 0 ) THEN
isec3 = i333 + 3
lsec3 = lrpt - isec3
lrpt = i333
END IF
C
IF ( ibrtyp .eq. 2 ) THEN
i222 = INDEX ( marrpt (1:lrpt), ' 222//' )
IF ( i222 .gt. 0 ) THEN
isec2 = i222 + 4
lsec2 = lrpt - isec2
lrpt = i222
END IF
ELSE
i222 = INDEX ( marrpt (35:lrpt), ' 222' )
IF ( i222 .gt. 0 ) THEN
i222 = i222 + 34
isec2 = i222 + 4
lsec2 = lrpt - isec2
lrpt = i222
END IF
END IF
C
C* Get start and length of Section 1.
C
isec1 = ipt
lsec1 = lrpt - ipt
C
C* If section lengths are too long, reject report.
C* This can happen if reports are not separated from
C* each other by a report separator.
C
IF ( lsec1 .gt. 75 .or. lsec2 .gt. 70 .or.
+ lsec3 .gt. 100 ) iret = 1
C*
RETURN
END
|
-----------------------------------------------------------------------------
-- |
-- Module : Data.Random.Distribution.MultivariateNormal
-- Copyright : (c) 2016 FP Complete Corporation
-- License : MIT (see LICENSE)
-- Maintainer : [email protected]
--
-- Sample from the multivariate normal distribution with a given
-- vector-valued \(\mu\) and covariance matrix \(\Sigma\). For example,
-- the chart below shows samples from the bivariate normal
-- distribution.
--
-- <<diagrams/src_Data_Random_Distribution_MultivariateNormal_diagM.svg#diagram=diagM&height=600&width=500>>
--
-- Example code to generate the chart:
--
-- > import qualified Graphics.Rendering.Chart as C
-- > import Graphics.Rendering.Chart.Backend.Diagrams
-- >
-- > import Data.Random.Distribution.MultivariateNormal
-- >
-- > import qualified Data.Random as R
-- > import Data.Random.Source.PureMT
-- > import Control.Monad.State
-- > import qualified Numeric.LinearAlgebra.HMatrix as LA
-- >
-- > nSamples :: Int
-- > nSamples = 10000
-- >
-- > sigma1, sigma2, rho :: Double
-- > sigma1 = 3.0
-- > sigma2 = 1.0
-- > rho = 0.5
-- >
-- > singleSample :: R.RVarT (State PureMT) (LA.Vector Double)
-- > singleSample = R.sample $ Normal (LA.fromList [0.0, 0.0])
-- > (LA.sym $ (2 LA.>< 2) [ sigma1, rho * sigma1 * sigma2
-- > , rho * sigma1 * sigma2, sigma2])
-- >
-- > multiSamples :: [LA.Vector Double]
-- > multiSamples = evalState (replicateM nSamples $ R.sample singleSample) (pureMT 3)
-- > pts = map (f . LA.toList) multiSamples
-- > where
-- > f [x, y] = (x, y)
-- > f _ = error "Only pairs for this chart"
-- >
-- >
-- > chartPoint pointVals n = C.toRenderable layout
-- > where
-- >
-- > fitted = C.plot_points_values .~ pointVals
-- > $ C.plot_points_style . C.point_color .~ opaque red
-- > $ C.plot_points_title .~ "Sample"
-- > $ def
-- >
-- > layout = C.layout_title .~ "Sampling Bivariate Normal (" ++ (show n) ++ " samples)"
-- > $ C.layout_y_axis . C.laxis_generate .~ C.scaledAxis def (-3,3)
-- > $ C.layout_x_axis . C.laxis_generate .~ C.scaledAxis def (-3,3)
-- >
-- > $ C.layout_plots .~ [C.toPlot fitted]
-- > $ def
-- >
-- > diagM = do
-- > denv <- defaultEnv C.vectorAlignmentFns 600 500
-- > return $ fst $ runBackend denv (C.render (chartPoint pts nSamples) (500, 500))
--
-----------------------------------------------------------------------------
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE MultiParamTypeClasses #-}
module Data.Random.Distribution.MultivariateNormal
( Normal(..)
) where
import Data.Random.Distribution
import qualified Numeric.LinearAlgebra.HMatrix as H
import Control.Monad
import qualified Data.Random as R
import Foreign.Storable ( Storable )
import Data.Maybe ( fromJust )
normalMultivariate :: H.Vector Double -> H.Herm Double -> R.RVarT m (H.Vector Double)
normalMultivariate mu bigSigma = do
z <- replicateM (H.size mu) (rvarT R.StdNormal)
return $ mu + bigA H.#> (H.fromList z)
where
(vals, bigU) = H.eigSH bigSigma
lSqrt = H.diag $ H.cmap sqrt vals
bigA = bigU H.<> lSqrt
data family Normal k :: *
data instance Normal (H.Vector Double) = Normal (H.Vector Double) (H.Herm Double)
instance Distribution Normal (H.Vector Double) where
rvar (Normal m s) = normalMultivariate m s
normalPdf :: (H.Numeric a, H.Field a, H.Indexable (H.Vector a) a, Num (H.Vector a)) =>
H.Vector a -> H.Herm a -> H.Vector a -> a
normalPdf mu sigma x = exp $ normalLogPdf mu sigma x
normalLogPdf :: (H.Numeric a, H.Field a, H.Indexable (H.Vector a) a, Num (H.Vector a)) =>
H.Vector a -> H.Herm a -> H.Vector a -> a
normalLogPdf mu bigSigma x = - H.sumElements (H.cmap log (diagonals dec))
- 0.5 * (fromIntegral (H.size mu)) * log (2 * pi)
- 0.5 * s
where
dec = fromJust $ H.mbChol bigSigma
t = fromJust $ H.linearSolve (H.tr dec) (H.asColumn $ x - mu)
u = H.cmap (\v -> v * v) t
s = H.sumElements u
diagonals :: (Storable a, H.Element t, H.Indexable (H.Vector t) a) =>
H.Matrix t -> H.Vector a
diagonals m = H.fromList (map (\i -> m H.! i H.! i) [0..n-1])
where
n = max (H.rows m) (H.cols m)
instance PDF Normal (H.Vector Double) where
pdf (Normal m s) = normalPdf m s
logPdf (Normal m s) = normalLogPdf m s
|
Being a mom is one of the hardest jobs in the world.
Listen, the postpartum body struggles hit everyone.
Remember Ta-Ta Towels? They Have A Maternity Version Now!
This mom embraced her beautiful postpartum c-section belly like a boss. |
(*<*)
theory hw08tmpl
imports Complex_Main "HOL-Library.Tree"
begin
(*>*)
text {* \NumHomework{Bounding Fibonacci}{June 8}
We start by defining the Fibonacci sequence, and an alternative
induction scheme for indexes greater 0:
*}
fun fib :: "nat \<Rightarrow> nat"
where
fib0: "fib 0 = 0"
| fib1: "fib (Suc 0) = 1"
| fib2: "fib (Suc (Suc n)) = fib (Suc n) + fib n"
lemma f_alt_induct [consumes 1, case_names 1 2 rec]:
assumes "n > 0"
and "P (Suc 0)" "P 2" "\<And>n. n > 0 \<Longrightarrow> P n \<Longrightarrow> P (Suc n) \<Longrightarrow> P (Suc (Suc n))"
shows "P n"
using assms(1)
proof (induction n rule: fib.induct)
case (3 n)
thus ?case using assms by (cases n) (auto simp: eval_nat_numeral)
qed (auto simp: \<open>P (Suc 0)\<close> \<open>P 2\<close>)
text \<open>Show that the Fibonacci numbers grow exponentially, i.e., that they are
bounded from below by \<open>1.5\<^sup>n/3\<close>.
Use the alternative induction scheme defined above.
\<close>
lemma fib_lowerbound: "n > 0 \<Longrightarrow> real (fib n) \<ge> 1.5 ^ n / 3"
proof (induction n rule: f_alt_induct)
oops
text \<open>
\NumHomework{AVL Trees}{June 8}
AVL trees are binary search trees where, for each node, the heights of
its subtrees differ by at most one. In this homework, you are to bound
the minimal number of nodes in an AVL tree of a given height.
First, define the AVL invariant on binary trees.
Note: In practice, one additionally stores the heights or height difference
in the nodes, but this is not required for this exercise.
\<close>
fun avl :: "'a tree \<Rightarrow> bool"
where
"avl _ = undefined"
text \<open>Show that an AVL tree of height \<open>h\<close> has at least \<open>fib (h+2)\<close> nodes:\<close>
lemma avl_fib_bound: "avl t \<Longrightarrow> height t = h \<Longrightarrow> fib (h+2) \<le> size1 t"
oops
text \<open>Combine your results to get an exponential lower bound on the number
of nodes in an AVL tree.\<close>
lemma avl_lowerbound:
assumes "avl t"
shows "1.5 ^ (height t + 2) / 3 \<le> real (size1 t)"
oops
(*<*)
end
(*>*)
|
lemma sigma_algebra_Pow: "sigma_algebra sp (Pow sp)" |
include '../User_Mod/Impinging_Jet_Nu.f90'
include '../User_Mod/Impinging_Jet_Profiles.f90'
!==============================================================================!
subroutine User_Mod_Save_Results(flow, turb, mult, swarm, ts)
!------------------------------------------------------------------------------!
! Calls User_Impinging_Jet_Nu and User_Impinging_Jet_Profile functions. !
!------------------------------------------------------------------------------!
implicit none
!---------------------------------[Arguments]----------------------------------!
type(Field_Type) :: flow
type(Turb_Type) :: turb
type(Multiphase_Type) :: mult
type(Swarm_Type) :: swarm
integer :: ts ! time step
!==============================================================================!
call User_Mod_Impinging_Jet_Nu (turb)
call User_Mod_Impinging_Jet_Profiles(turb)
end subroutine
|
postulate
foo : Set
bar : Set
  baz : Set → Set
baz fooo = Fooo
|
I noticed that many of the members are out of the office in the week starting October 8. Considering the due date for the PMDA inquiries is October 22, the document content needs to be fixed on 16. This is why I chose October 16 for the meeting date. Preferably, we would like to complete QC before 18 and obtain management approval on 19.
|
## constructors.jl
## (c) 2014--2020 David A. van Leeuwen
## Constructors related to the types in namedarraytypes.jl
## This code is licensed under the MIT license
## See the file LICENSE.md in this distribution
letter(i) = string(Char((64+i) % 256))
## helpers for constructing names dictionaries
defaultnames(dim::Integer) = map(string, 1:dim)
function defaultnamesdict(names::AbstractVector)
dict = OrderedDict(zip(names, 1:length(names)))
length(dict) == length(names) || error("Cannot have duplicated names for indices")
return dict
end
defaultnamesdict(dim::Integer) = defaultnamesdict(defaultnames(dim))
defaultnamesdict(dims::Tuple) = map(defaultnamesdict, dims) # ::NTuple{length(dims), OrderedDict}
defaultdimname(dim::Integer) = Symbol(letter(dim))
defaultdimnames(ndim::Integer) = ntuple(defaultdimname, ndim)
defaultdimnames(a::AbstractArray) = defaultdimnames(ndims(a))
## disambiguation (Argh...)
function NamedArray(a::AbstractArray{T,N},
names::Tuple{},
dimnames::NTuple{N, Any}) where {T,N}
NamedArray{T,N,typeof(a),Tuple{}}(a, (), ())
end
NamedArray(a::AbstractArray{T,N}, names::Tuple{}) where {T,N} = NamedArray{T,N,typeof(a),Tuple{}}(a, (), ())
NamedArray(a::AbstractArray{T,0}, ::Tuple{}, ::Tuple{}) where T = NamedArray{T,0,typeof(a),Tuple{}}(a, (), ())
## Basic constructor: array, tuple of dicts, tuple of dimnames
function NamedArray(array::AbstractArray{T,N},
names::NTuple{N,OrderedDict},
dimnames::NTuple{N}=defaultdimnames(array)) where {T,N}
## inner constructor
NamedArray{T, N, typeof(array), typeof(names)}(array, names, dimnames)
end
## constructor with array, names and dimnames (dict is created from names)
function NamedArray(array::AbstractArray{T,N},
names::NTuple{N,AbstractVector}=tuple((defaultnames(d) for d in size(array))...),
dimnames::NTuple{N, Any}=defaultdimnames(array)) where {T,N}
dicts = defaultnamesdict(names)
NamedArray{T, N, typeof(array), typeof(dicts)}(array, dicts, dimnames)
end
## Deprecated: use tuples, as above
## vectors instead of tuples, with defaults (incl. no names or dimnames at all)
function NamedArray(array::AbstractArray{T,N},
names::AbstractVector{VT},
dimnames::AbstractVector=[defaultdimname(i) for i in 1:ndims(array)]) where {T,N,VT}
length(names) == length(dimnames) == N || error("Dimension mismatch")
if VT == Union{} ## edge case, array == Array{}()
dicts = ()
elseif VT <: OrderedDict
dicts = tuple(names...)
else
dicts = defaultnamesdict(tuple(names...))::NTuple{N, OrderedDict{eltype(VT),Int}}
end
NamedArray{T, N, typeof(array), typeof(dicts)}(array, dicts, tuple(dimnames...))
end
## I can't get this working
## @Base.deprecate NamedArray(array::AbstractArray{T,N}, names::AbstractVector{<:AbstractVector}, dimnames::AbstractVector) where {T,N} NamedArray(array::AbstractArray{T,N}, names::NTuple{N}, dimnames::NTuple{N}) where {T,N}
## special case for 1-dim array to circumvent Julia tuple-comma-oddity, #86
NamedArray(array::AbstractVector{T},
names::AbstractVector{VT}=defaultnames(length(array)),
dimname=defaultdimname(1)) where {T,VT} =
NamedArray(array, (names,), (dimname,))
function NamedArray(array::AbstractArray{T,N}, names::NamedTuple) where {T, N}
length(names) == N || error("Dimension mismatch")
return NamedArray(array, values(names), keys(names))
end
## Type and dimensions
"""
`NamedArray(T::Type, dims::Int...)` creates an uninitialized array with default names
for the dimensions (`:A`, `:B`, ...) and indices (`"1"`, `"2"`, ...).
"""
NamedArray(T::DataType, dims::Int...) = NamedArray(Array{T}(undef, dims...))
NamedArray{T}(n...) where {T} = NamedArray(Array{T}(undef, n...))
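## Example usage (values are illustrative):
##
##     julia> NamedArray([1 2; 3 4], (["a", "b"], ["x", "y"]), ("rows", "cols"))
##
## constructs a 2×2 NamedArray with row names "a" and "b", column names
## "x" and "y", and dimension names "rows" and "cols".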
|
[STATEMENT]
lemma prod_list_transfer [transfer_rule]:
"(list_all2 A ===> A) prod_list prod_list"
if [transfer_rule]: "A 1 1" "(A ===> A ===> A) (*) (*)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (list_all2 A ===> A) prod_list prod_list
[PROOF STEP]
unfolding prod_list.eq_foldr [abs_def]
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (list_all2 A ===> A) (\<lambda>xs. foldr (*) xs (1::'a)) (\<lambda>xs. foldr (*) xs (1::'b))
[PROOF STEP]
by transfer_prover |
State Before: K : Type u
inst✝³ : Field K
V : Type v
inst✝² : AddCommGroup V
inst✝¹ : Module K V
inst✝ : FiniteDimensional K V
⊢ ↑(coevaluation K V) 1 =
  let bV := Basis.ofVectorSpace K V;
  ∑ i : ↑(Basis.ofVectorSpaceIndex K V), ↑bV i ⊗ₜ[K] Basis.coord bV i State After: K : Type u
inst✝³ : Field K
V : Type v
inst✝² : AddCommGroup V
inst✝¹ : Module K V
inst✝ : FiniteDimensional K V
⊢ ↑(↑(Basis.constr (Basis.singleton Unit K) K) fun x =>
      ∑ x : ↑(Basis.ofVectorSpaceIndex K V),
        ↑(Basis.ofVectorSpace K V) x ⊗ₜ[K] Basis.coord (Basis.ofVectorSpace K V) x)
    1 =
  ∑ x : ↑(Basis.ofVectorSpaceIndex K V), ↑(Basis.ofVectorSpace K V) x ⊗ₜ[K] Basis.coord (Basis.ofVectorSpace K V) x Tactic: simp only [coevaluation, id] State Before: K : Type u
inst✝³ : Field K
V : Type v
inst✝² : AddCommGroup V
inst✝¹ : Module K V
inst✝ : FiniteDimensional K V
⊢ ↑(↑(Basis.constr (Basis.singleton Unit K) K) fun x =>
      ∑ x : ↑(Basis.ofVectorSpaceIndex K V),
        ↑(Basis.ofVectorSpace K V) x ⊗ₜ[K] Basis.coord (Basis.ofVectorSpace K V) x)
    1 =
  ∑ x : ↑(Basis.ofVectorSpaceIndex K V), ↑(Basis.ofVectorSpace K V) x ⊗ₜ[K] Basis.coord (Basis.ofVectorSpace K V) x State After: K : Type u
inst✝³ : Field K
V : Type v
inst✝² : AddCommGroup V
inst✝¹ : Module K V
inst✝ : FiniteDimensional K V
⊢ ∑ i : Unit,
    ↑(Basis.equivFun (Basis.singleton Unit K)) 1 i •
      ∑ x : ↑(Basis.ofVectorSpaceIndex K V),
        ↑(Basis.ofVectorSpace K V) x ⊗ₜ[K] Basis.coord (Basis.ofVectorSpace K V) x =
  ∑ x : ↑(Basis.ofVectorSpaceIndex K V), ↑(Basis.ofVectorSpace K V) x ⊗ₜ[K] Basis.coord (Basis.ofVectorSpace K V) x Tactic: rw [(Basis.singleton Unit K).constr_apply_fintype K] State Before: K : Type u
inst✝³ : Field K
V : Type v
inst✝² : AddCommGroup V
inst✝¹ : Module K V
inst✝ : FiniteDimensional K V
⊢ ∑ i : Unit,
    ↑(Basis.equivFun (Basis.singleton Unit K)) 1 i •
      ∑ x : ↑(Basis.ofVectorSpaceIndex K V),
        ↑(Basis.ofVectorSpace K V) x ⊗ₜ[K] Basis.coord (Basis.ofVectorSpace K V) x =
  ∑ x : ↑(Basis.ofVectorSpaceIndex K V), ↑(Basis.ofVectorSpace K V) x ⊗ₜ[K] Basis.coord (Basis.ofVectorSpace K V) x State After: no goals Tactic: simp only [Fintype.univ_punit, Finset.sum_const, one_smul, Basis.singleton_repr,
  Basis.equivFun_apply, Basis.coe_ofVectorSpace, one_nsmul, Finset.card_singleton] |
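Stripped of the basis bookkeeping, the goal above is the standard coevaluation identity. In conventional notation (a sketch, writing $(v_i)$ for the chosen basis and $(v_i^{*})$ for its coordinate functionals):

```latex
% Coevaluation for a finite-dimensional K-vector space V:
% the scalar 1 is sent to the canonical element of V \otimes V^*.
\mathrm{coev}_{K,V}(1) \;=\; \sum_{i} v_i \otimes v_i^{*},
\qquad\text{where } v_i^{*}(v_j) = \delta_{ij}.
```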
theory Exercise1
imports IMP
begin
(* NOTE: This is an over-estimate of potentially
assigned variables. Specifically, it is possible that
the collection of reachable states is restricted by
execution within the sequence or while commands. This
restriction could then reduce the collection of
assignable variables of an if or while command. For
example, consider:
assigned (
IF (Less (V a) (N 3))
THEN (a ::= 3)
    ELSE (b ::= 3)
)
In isolation, the answer would rightly be {a,b}.
However, this could materially change if the state
space going into the evaluation is restricted by an
earlier command from a sequence or while command. For
example:
assigned (
    (a ::= 1) ;; (
IF (Less (V a) (N 3))
THEN (a ::= 3)
ELSE (b ::= 3)
))
In this latter case, the answer would be {a} and not
{a,b}. While the initial state is completely
unrestricted, this need not be the case for
intermediate states.
*)
fun assigned_oe :: "com \<Rightarrow> vname set" where
"assigned_oe SKIP = {}" |
"assigned_oe (v ::= a) = {v}" |
"assigned_oe (a;;b)
= (assigned_oe a) \<union> (assigned_oe b)" |
"assigned_oe (IF b THEN c1 ELSE c2)
= (assigned_oe c1) \<union> (assigned_oe c2)" |
"assigned_oe (WHILE b DO c) = (assigned_oe c)"
(* NOTE: I did not notice the split_format behavior
until I saw it used by someone else to solve this
problem (i.e. https://github.com/kolya-vasiliev/concrete-semantics/blob/ae2a4b32ec63766e6a11e043d85d082c70eeaebc/Big_Step.thy#L68-L84),
but it is briefly documented in the Isar reference
manual. *)
theorem not_assigned_oe_implies_unmodified:
"\<lbrakk> (c,s) \<Rightarrow> t; x \<notin> assigned_oe c \<rbrakk> \<Longrightarrow> s x = t x"
apply (induction rule: big_step.induct[split_format(complete)])
apply auto
done
(* I tried creating a more accurate version of assigned
   that tracked satisfiability of each guard given all
   previous guards and assignments, but this quickly
   became very complex. Moreover, precise variable
   assignment and value reachability analyses are
   undecidable in general, so static analyses settle
   for over- or under-approximations instead (with
   ongoing interest in formulations that are more
   precise in the appropriate context). I admit to not
   having studied the specific result myself, much less
   its proof, but it does seem like the wrong tree to
   be barking up in this introductory material. *)
end |
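For readers without Isabelle at hand, the over-approximating `assigned_oe` above can be mirrored in plain Python (a hypothetical sketch; the tuple encoding of IMP commands is made up for illustration):

```python
# Commands are tuples: ("SKIP",), ("ASSIGN", v, a), ("SEQ", c1, c2),
# ("IF", b, c1, c2), ("WHILE", b, c).  Guards and rhs expressions are opaque.
def assigned_oe(c):
    tag = c[0]
    if tag == "SKIP":
        return set()
    if tag == "ASSIGN":
        return {c[1]}
    if tag == "SEQ":
        return assigned_oe(c[1]) | assigned_oe(c[2])
    if tag == "IF":
        return assigned_oe(c[2]) | assigned_oe(c[3])
    if tag == "WHILE":
        return assigned_oe(c[2])
    raise ValueError(f"unknown command {tag!r}")

# The second example from the comment: a ::= 1 makes the ELSE branch dead,
# yet the syntactic over-estimate still includes b.
prog = ("SEQ", ("ASSIGN", "a", 1),
        ("IF", "a<3", ("ASSIGN", "a", 3), ("ASSIGN", "b", 3)))
print(assigned_oe(prog))
```

Like the Isabelle version, this is purely syntactic: it never inspects guards, which is exactly why it over-estimates.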
[STATEMENT]
lemma wf_state_step_preservation:
assumes "wf_state s" and "step s s'"
shows "wf_state s'"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. wf_state s'
[PROOF STEP]
using assms(2,1)
[PROOF STATE]
proof (prove)
using this:
s \<rightarrow> s'
wf_state s
goal (1 subgoal):
1. wf_state s'
[PROOF STEP]
proof (cases s s' rule: step.cases)
[PROOF STATE]
proof (state)
goal (22 subgoals):
1. \<And>F f l pc d H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPush d)\<rbrakk> \<Longrightarrow> wf_state s'
2. \<And>F f l pc n H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx1 n # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx1 n)\<rbrakk> \<Longrightarrow> wf_state s'
3. \<And>F f l pc b H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx2 b # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx2 b)\<rbrakk> \<Longrightarrow> wf_state s'
4. \<And>F f l pc H R x \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (x # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R \<Sigma> # st); next_instr (F_get F) f l pc = Some IPop\<rbrakk> \<Longrightarrow> wf_state s'
5. \<And>F f l pc n R d H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGet n); n < length R; cast_Dyn (R ! n) = Some d\<rbrakk> \<Longrightarrow> wf_state s'
6. \<And>F f l pc \<tau> n R d blob H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (blob # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = Some blob\<rbrakk> \<Longrightarrow> wf_state s'
7. \<And>F f l pc \<tau> n R d F' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F' H (box_stack f (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st)); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = None; F' = Fenv.map_entry F f generalize_fundef\<rbrakk> \<Longrightarrow> wf_state s'
8. \<And>F f l pc n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISet n); n < length R; cast_Dyn blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
9. \<And>F f l pc \<tau> n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISetUbx \<tau> n); n < length R; cast_and_box \<tau> blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
10. \<And>F f l pc x i i' H d R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (i # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (ILoad x); cast_Dyn i = Some i'; heap_get H (x, i') = Some d\<rbrakk> \<Longrightarrow> wf_state s'
A total of 22 subgoals...
[PROOF STEP]
case (step_op_inl F f l pc op ar \<Sigma> \<Sigma>' opinl x F' H R st)
[PROOF STATE]
proof (state)
this:
s = Global.state.State F H (Frame f l pc R \<Sigma> # st)
s' = Global.state.State F' H (Frame f l (Suc pc) R (OpDyn x # drop ar \<Sigma>) # st)
next_instr (F_get F) f l pc = Some (IOp op)
\<AA>\<rr>\<ii>\<tt>\<yy> op = ar
ar \<le> length \<Sigma>
ap_map_list cast_Dyn (take ar \<Sigma>) = Some \<Sigma>'
\<II>\<nn>\<ll> op \<Sigma>' = Some opinl
\<II>\<nn>\<ll>\<OO>\<pp> opinl \<Sigma>' = x
F' = Fenv.map_entry F f (\<lambda>fd. rewrite_fundef_body fd l pc (IOpInl opinl))
goal (22 subgoals):
1. \<And>F f l pc d H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPush d)\<rbrakk> \<Longrightarrow> wf_state s'
2. \<And>F f l pc n H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx1 n # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx1 n)\<rbrakk> \<Longrightarrow> wf_state s'
3. \<And>F f l pc b H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx2 b # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx2 b)\<rbrakk> \<Longrightarrow> wf_state s'
4. \<And>F f l pc H R x \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (x # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R \<Sigma> # st); next_instr (F_get F) f l pc = Some IPop\<rbrakk> \<Longrightarrow> wf_state s'
5. \<And>F f l pc n R d H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGet n); n < length R; cast_Dyn (R ! n) = Some d\<rbrakk> \<Longrightarrow> wf_state s'
6. \<And>F f l pc \<tau> n R d blob H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (blob # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = Some blob\<rbrakk> \<Longrightarrow> wf_state s'
7. \<And>F f l pc \<tau> n R d F' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F' H (box_stack f (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st)); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = None; F' = Fenv.map_entry F f generalize_fundef\<rbrakk> \<Longrightarrow> wf_state s'
8. \<And>F f l pc n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISet n); n < length R; cast_Dyn blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
9. \<And>F f l pc \<tau> n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISetUbx \<tau> n); n < length R; cast_and_box \<tau> blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
10. \<And>F f l pc x i i' H d R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (i # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (ILoad x); cast_Dyn i = Some i'; heap_get H (x, i') = Some d\<rbrakk> \<Longrightarrow> wf_state s'
A total of 22 subgoals...
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
s = Global.state.State F H (Frame f l pc R \<Sigma> # st)
s' = Global.state.State F' H (Frame f l (Suc pc) R (OpDyn x # drop ar \<Sigma>) # st)
next_instr (F_get F) f l pc = Some (IOp op)
\<AA>\<rr>\<ii>\<tt>\<yy> op = ar
ar \<le> length \<Sigma>
ap_map_list cast_Dyn (take ar \<Sigma>) = Some \<Sigma>'
\<II>\<nn>\<ll> op \<Sigma>' = Some opinl
\<II>\<nn>\<ll>\<OO>\<pp> opinl \<Sigma>' = x
F' = Fenv.map_entry F f (\<lambda>fd. rewrite_fundef_body fd l pc (IOpInl opinl))
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
s = Global.state.State F H (Frame f l pc R \<Sigma> # st)
s' = Global.state.State F' H (Frame f l (Suc pc) R (OpDyn x # drop ar \<Sigma>) # st)
next_instr (F_get F) f l pc = Some (IOp op)
\<AA>\<rr>\<ii>\<tt>\<yy> op = ar
ar \<le> length \<Sigma>
ap_map_list cast_Dyn (take ar \<Sigma>) = Some \<Sigma>'
\<II>\<nn>\<ll> op \<Sigma>' = Some opinl
\<II>\<nn>\<ll>\<OO>\<pp> opinl \<Sigma>' = x
F' = Fenv.map_entry F f (\<lambda>fd. rewrite_fundef_body fd l pc (IOpInl opinl))
goal (1 subgoal):
1. wf_state s'
[PROOF STEP]
using assms(1)
[PROOF STATE]
proof (prove)
using this:
s = Global.state.State F H (Frame f l pc R \<Sigma> # st)
s' = Global.state.State F' H (Frame f l (Suc pc) R (OpDyn x # drop ar \<Sigma>) # st)
next_instr (F_get F) f l pc = Some (IOp op)
\<AA>\<rr>\<ii>\<tt>\<yy> op = ar
ar \<le> length \<Sigma>
ap_map_list cast_Dyn (take ar \<Sigma>) = Some \<Sigma>'
\<II>\<nn>\<ll> op \<Sigma>' = Some opinl
\<II>\<nn>\<ll>\<OO>\<pp> opinl \<Sigma>' = x
F' = Fenv.map_entry F f (\<lambda>fd. rewrite_fundef_body fd l pc (IOpInl opinl))
wf_state s
goal (1 subgoal):
1. wf_state s'
[PROOF STEP]
by (auto intro!: wf_stateI intro: sp_instr_Op_OpInl_conv[symmetric]
elim!: wf_fundefs_rewrite_body dest!: wf_stateD \<II>\<nn>\<ll>_invertible)
[PROOF STATE]
proof (state)
this:
wf_state s'
goal (21 subgoals):
1. \<And>F f l pc d H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPush d)\<rbrakk> \<Longrightarrow> wf_state s'
2. \<And>F f l pc n H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx1 n # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx1 n)\<rbrakk> \<Longrightarrow> wf_state s'
3. \<And>F f l pc b H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx2 b # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx2 b)\<rbrakk> \<Longrightarrow> wf_state s'
4. \<And>F f l pc H R x \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (x # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R \<Sigma> # st); next_instr (F_get F) f l pc = Some IPop\<rbrakk> \<Longrightarrow> wf_state s'
5. \<And>F f l pc n R d H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGet n); n < length R; cast_Dyn (R ! n) = Some d\<rbrakk> \<Longrightarrow> wf_state s'
6. \<And>F f l pc \<tau> n R d blob H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (blob # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = Some blob\<rbrakk> \<Longrightarrow> wf_state s'
7. \<And>F f l pc \<tau> n R d F' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F' H (box_stack f (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st)); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = None; F' = Fenv.map_entry F f generalize_fundef\<rbrakk> \<Longrightarrow> wf_state s'
8. \<And>F f l pc n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISet n); n < length R; cast_Dyn blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
9. \<And>F f l pc \<tau> n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISetUbx \<tau> n); n < length R; cast_and_box \<tau> blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
10. \<And>F f l pc x i i' H d R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (i # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (ILoad x); cast_Dyn i = Some i'; heap_get H (x, i') = Some d\<rbrakk> \<Longrightarrow> wf_state s'
A total of 21 subgoals...
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (21 subgoals):
1. \<And>F f l pc d H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPush d)\<rbrakk> \<Longrightarrow> wf_state s'
2. \<And>F f l pc n H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx1 n # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx1 n)\<rbrakk> \<Longrightarrow> wf_state s'
3. \<And>F f l pc b H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx2 b # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx2 b)\<rbrakk> \<Longrightarrow> wf_state s'
4. \<And>F f l pc H R x \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (x # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R \<Sigma> # st); next_instr (F_get F) f l pc = Some IPop\<rbrakk> \<Longrightarrow> wf_state s'
5. \<And>F f l pc n R d H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGet n); n < length R; cast_Dyn (R ! n) = Some d\<rbrakk> \<Longrightarrow> wf_state s'
6. \<And>F f l pc \<tau> n R d blob H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (blob # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = Some blob\<rbrakk> \<Longrightarrow> wf_state s'
7. \<And>F f l pc \<tau> n R d F' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F' H (box_stack f (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st)); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = None; F' = Fenv.map_entry F f generalize_fundef\<rbrakk> \<Longrightarrow> wf_state s'
8. \<And>F f l pc n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISet n); n < length R; cast_Dyn blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
9. \<And>F f l pc \<tau> n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISetUbx \<tau> n); n < length R; cast_and_box \<tau> blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
10. \<And>F f l pc x i i' H d R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (i # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (ILoad x); cast_Dyn i = Some i'; heap_get H (x, i') = Some d\<rbrakk> \<Longrightarrow> wf_state s'
A total of 21 subgoals...
[PROOF STEP]
case (step_op_inl_miss F f l pc opinl ar \<Sigma> \<Sigma>' x F' H R st)
[PROOF STATE]
proof (state)
this:
s = Global.state.State F H (Frame f l pc R \<Sigma> # st)
s' = Global.state.State F' H (Frame f l (Suc pc) R (OpDyn x # drop ar \<Sigma>) # st)
next_instr (F_get F) f l pc = Some (IOpInl opinl)
\<AA>\<rr>\<ii>\<tt>\<yy> (\<DD>\<ee>\<II>\<nn>\<ll> opinl) = ar
ar \<le> length \<Sigma>
ap_map_list cast_Dyn (take ar \<Sigma>) = Some \<Sigma>'
\<not> \<II>\<ss>\<II>\<nn>\<ll> opinl \<Sigma>'
\<II>\<nn>\<ll>\<OO>\<pp> opinl \<Sigma>' = x
F' = Fenv.map_entry F f (\<lambda>fd. rewrite_fundef_body fd l pc (IOp (\<DD>\<ee>\<II>\<nn>\<ll> opinl)))
goal (21 subgoals):
1. \<And>F f l pc d H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPush d)\<rbrakk> \<Longrightarrow> wf_state s'
2. \<And>F f l pc n H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx1 n # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx1 n)\<rbrakk> \<Longrightarrow> wf_state s'
3. \<And>F f l pc b H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx2 b # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx2 b)\<rbrakk> \<Longrightarrow> wf_state s'
4. \<And>F f l pc H R x \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (x # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R \<Sigma> # st); next_instr (F_get F) f l pc = Some IPop\<rbrakk> \<Longrightarrow> wf_state s'
5. \<And>F f l pc n R d H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGet n); n < length R; cast_Dyn (R ! n) = Some d\<rbrakk> \<Longrightarrow> wf_state s'
6. \<And>F f l pc \<tau> n R d blob H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (blob # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = Some blob\<rbrakk> \<Longrightarrow> wf_state s'
7. \<And>F f l pc \<tau> n R d F' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F' H (box_stack f (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st)); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = None; F' = Fenv.map_entry F f generalize_fundef\<rbrakk> \<Longrightarrow> wf_state s'
8. \<And>F f l pc n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISet n); n < length R; cast_Dyn blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
9. \<And>F f l pc \<tau> n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISetUbx \<tau> n); n < length R; cast_and_box \<tau> blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
10. \<And>F f l pc x i i' H d R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (i # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (ILoad x); cast_Dyn i = Some i'; heap_get H (x, i') = Some d\<rbrakk> \<Longrightarrow> wf_state s'
A total of 21 subgoals...
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
s = Global.state.State F H (Frame f l pc R \<Sigma> # st)
s' = Global.state.State F' H (Frame f l (Suc pc) R (OpDyn x # drop ar \<Sigma>) # st)
next_instr (F_get F) f l pc = Some (IOpInl opinl)
\<AA>\<rr>\<ii>\<tt>\<yy> (\<DD>\<ee>\<II>\<nn>\<ll> opinl) = ar
ar \<le> length \<Sigma>
ap_map_list cast_Dyn (take ar \<Sigma>) = Some \<Sigma>'
\<not> \<II>\<ss>\<II>\<nn>\<ll> opinl \<Sigma>'
\<II>\<nn>\<ll>\<OO>\<pp> opinl \<Sigma>' = x
F' = Fenv.map_entry F f (\<lambda>fd. rewrite_fundef_body fd l pc (IOp (\<DD>\<ee>\<II>\<nn>\<ll> opinl)))
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
s = Global.state.State F H (Frame f l pc R \<Sigma> # st)
s' = Global.state.State F' H (Frame f l (Suc pc) R (OpDyn x # drop ar \<Sigma>) # st)
next_instr (F_get F) f l pc = Some (IOpInl opinl)
\<AA>\<rr>\<ii>\<tt>\<yy> (\<DD>\<ee>\<II>\<nn>\<ll> opinl) = ar
ar \<le> length \<Sigma>
ap_map_list cast_Dyn (take ar \<Sigma>) = Some \<Sigma>'
\<not> \<II>\<ss>\<II>\<nn>\<ll> opinl \<Sigma>'
\<II>\<nn>\<ll>\<OO>\<pp> opinl \<Sigma>' = x
F' = Fenv.map_entry F f (\<lambda>fd. rewrite_fundef_body fd l pc (IOp (\<DD>\<ee>\<II>\<nn>\<ll> opinl)))
goal (1 subgoal):
1. wf_state s'
[PROOF STEP]
using assms(1)
[PROOF STATE]
proof (prove)
using this:
s = Global.state.State F H (Frame f l pc R \<Sigma> # st)
s' = Global.state.State F' H (Frame f l (Suc pc) R (OpDyn x # drop ar \<Sigma>) # st)
next_instr (F_get F) f l pc = Some (IOpInl opinl)
\<AA>\<rr>\<ii>\<tt>\<yy> (\<DD>\<ee>\<II>\<nn>\<ll> opinl) = ar
ar \<le> length \<Sigma>
ap_map_list cast_Dyn (take ar \<Sigma>) = Some \<Sigma>'
\<not> \<II>\<ss>\<II>\<nn>\<ll> opinl \<Sigma>'
\<II>\<nn>\<ll>\<OO>\<pp> opinl \<Sigma>' = x
F' = Fenv.map_entry F f (\<lambda>fd. rewrite_fundef_body fd l pc (IOp (\<DD>\<ee>\<II>\<nn>\<ll> opinl)))
wf_state s
goal (1 subgoal):
1. wf_state s'
[PROOF STEP]
by (auto intro!: wf_stateI intro: sp_instr_Op_OpInl_conv
elim!: wf_fundefs_rewrite_body dest!: wf_stateD)
[PROOF STATE]
proof (state)
this:
wf_state s'
goal (20 subgoals):
1. \<And>F f l pc d H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPush d)\<rbrakk> \<Longrightarrow> wf_state s'
2. \<And>F f l pc n H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx1 n # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx1 n)\<rbrakk> \<Longrightarrow> wf_state s'
3. \<And>F f l pc b H R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpUbx2 b # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IPushUbx2 b)\<rbrakk> \<Longrightarrow> wf_state s'
4. \<And>F f l pc H R x \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (x # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R \<Sigma> # st); next_instr (F_get F) f l pc = Some IPop\<rbrakk> \<Longrightarrow> wf_state s'
5. \<And>F f l pc n R d H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGet n); n < length R; cast_Dyn (R ! n) = Some d\<rbrakk> \<Longrightarrow> wf_state s'
6. \<And>F f l pc \<tau> n R d blob H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F H (Frame f l (Suc pc) R (blob # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = Some blob\<rbrakk> \<Longrightarrow> wf_state s'
7. \<And>F f l pc \<tau> n R d F' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R \<Sigma> # st); s' = Global.state.State F' H (box_stack f (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st)); next_instr (F_get F) f l pc = Some (IGetUbx \<tau> n); n < length R; cast_Dyn (R ! n) = Some d; unbox \<tau> d = None; F' = Fenv.map_entry F f generalize_fundef\<rbrakk> \<Longrightarrow> wf_state s'
8. \<And>F f l pc n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISet n); n < length R; cast_Dyn blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
9. \<And>F f l pc \<tau> n R blob d R' H \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (blob # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R' \<Sigma> # st); next_instr (F_get F) f l pc = Some (ISetUbx \<tau> n); n < length R; cast_and_box \<tau> blob = Some d; R' = R[n := OpDyn d]\<rbrakk> \<Longrightarrow> wf_state s'
10. \<And>F f l pc x i i' H d R \<Sigma> st. \<lbrakk>wf_state s; s = Global.state.State F H (Frame f l pc R (i # \<Sigma>) # st); s' = Global.state.State F H (Frame f l (Suc pc) R (OpDyn d # \<Sigma>) # st); next_instr (F_get F) f l pc = Some (ILoad x); cast_Dyn i = Some i'; heap_get H (x, i') = Some d\<rbrakk> \<Longrightarrow> wf_state s'
A total of 20 subgoals...
[PROOF STEP]
qed (auto simp: box_stack_def
intro!: wf_stateI wf_fundefs_generalize
intro: sp_instr.intros
dest!: wf_stateD) |
! Initialize data in Fortran
! Calculate on the GPU via the C wrapper
module matrix
INTERFACE
subroutine matmul_wrapper(n,a,b,c) BIND (C, NAME="matmul_wrapper")
USE ISO_C_BINDING
implicit none
integer(c_int), value :: n
real(c_double) :: a(n,n), b(n,n), c(n,n)
end subroutine matmul_wrapper
END INTERFACE
end module matrix
program matMul
use ISO_C_BINDING
use matrix
implicit none
integer, parameter :: n=2**13
! integer, parameter :: n=3
integer :: i,j,k
real*8 :: a(n,n), b(n,n), c(n,n)
! real*8 :: cF(n,n)
do i=1,n
do j=1,n
a(i,j) = i*j
b(i,j) = i-j
enddo
enddo
print *, "Data initialized"
call matmul_wrapper(n,a,b,c)
print *, "End of Fortran part"
! print *, "Check in Fortran"
! do i=1,n
! do j=1,n
! cF(i,j) = 0.d0
! do k=1,n
! cF(i,j)=cF(i,j)+a(k,i)*b(j,k)
! enddo
! enddo
! enddo
! print *, "Check"
! do i=1,n
! print "(3F8.2,A,3F8.2)", c(:,i), " ", cF(i,:)
! enddo
! print *, "========="
! do i=1,n
! print "(3F8.2,A,3F8.2)", a(:,i), " ", b(:,i)
! enddo
! print *, "========="
! do i=1,n
! do j=1,n
! if(dabs(c(i,j)-cF(j,i))>0.1d0) then
! print "(2I3,4F8.2)", i, j, a(i,j), b(i,j), c(j,i), cF(j,i)
! endif
! enddo
! enddo
end program matMul
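The commented-out check multiplies with transposed index order (`a(k,i)*b(j,k)`, comparing `c(j,i)` with `cF(i,j)`) because Fortran stores arrays column-major, while a C/CUDA kernel typically indexes the same buffers row-major and therefore sees each matrix transposed. A self-contained Python sketch of that layout effect (illustrative only, independent of the actual `matmul_wrapper`):

```python
# A small asymmetric matrix, a(i,j) = 10*i + j, in 1-based (i, j).
n = 3
a = [[10 * i + j for j in range(1, n + 1)] for i in range(1, n + 1)]

# Flatten in column-major (Fortran) order, then reread the same buffer
# with row-major (C) strides, as a C kernel receiving the pointer would.
flat = [a[i][j] for j in range(n) for i in range(n)]
as_c = [[flat[i * n + j] for j in range(n)] for i in range(n)]

# The row-major view of a column-major buffer is the transpose.
assert all(as_c[i][j] == a[j][i] for i in range(n) for j in range(n))
print("row-major view of a Fortran buffer is the transpose")
```

So if the C side computes a plain row-major matmul on these buffers, the result lands in `c` transposed, which is exactly what the verification loop compensates for.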
|
/-
Copyright (c) 2020 Simon Hudon. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Simon Hudon
-/
import data.bool
import meta.rb_map
import tactic.core
/-!
# list_unused_decls
`#list_unused_decls` is a command used for theory development.
When writing a new theory one often tries
multiple variations of the same definitions: `foo`, `foo'`, `foo₂`,
`foo₃`, etc. Once the main definition or theorem has been written,
it's time to clean up and the file can contain a lot of dead code.
Mark the main declarations with `@[main_declaration]` and
`#list_unused_decls` will show the declarations in the file
that are not needed to define the main declarations.
Some of the so-called "unused" declarations may turn out to be useful
after all. The oversight can be corrected by marking those as
`@[main_declaration]`. `#list_unused_decls` will revise the list of
unused declarations. By default, the list of unused declarations will
not include any dependency of the main declarations.
The `@[main_declaration]` attribute should be removed before submitting
code to mathlib as it is merely a tool for cleaning up a module.
-/
namespace tactic
/-- Attribute `main_declaration` is used to mark declarations that are featured
in the current file. Then, the `#list_unused_decls` command can be used to
list the declarations present in the file that are not used by the main
declarations of the file. -/
@[user_attribute]
meta def main_declaration_attr : user_attribute :=
{ name := `main_declaration,
descr := "tag essential declarations to help identify unused definitions" }
/-- `update_unsed_decls_list n m` removes from the map of unneeded declarations those
referenced by the declaration named `n`, which is considered to be a
main declaration -/
private meta def update_unsed_decls_list :
name → name_map declaration → tactic (name_map declaration)
| n m :=
do d ← get_decl n,
if m.contains n then do
let m := m.erase n,
let ns := d.value.list_constant.union d.type.list_constant,
ns.mfold m update_unsed_decls_list
else pure m
/-- In the current file, list all the declarations that are not marked as `@[main_declaration]` and
that are not referenced by such declarations -/
meta def all_unused (fs : list (option string)) : tactic (name_map declaration) :=
do ds ← get_decls_from fs,
ls ← ds.keys.mfilter (succeeds ∘ user_attribute.get_param_untyped main_declaration_attr),
ds ← ls.mfoldl (flip update_unsed_decls_list) ds,
ds.mfilter $ λ n d, do
e ← get_env,
return $ !d.is_auto_or_internal e
/-- expecting a string literal (e.g. `"src/tactic/find_unused.lean"`)
-/
meta def parse_file_name (fn : pexpr) : tactic (option string) :=
some <$> (to_expr fn >>= eval_expr string) <|> fail "expecting: \"src/dir/file-name\""
setup_tactic_parser
/-- The command `#list_unused_decls` lists the declarations that
are not used by the main features of the present file. The main features
of a file are taken as the declaration tagged with
`@[main_declaration]`.
A list of files can be given to `#list_unused_decls` as follows:
```lean
#list_unused_decls ["src/tactic/core.lean","src/tactic/interactive.lean"]
```
They are given in a list that contains file names written as Lean
strings. With a list of files, the declarations from all those files
in addition to the declarations above `#list_unused_decls` in the
current file will be considered and their interdependencies will be
analyzed to see which declarations are unused by declarations marked
as `@[main_declaration]`. The files listed must be imported by the
current file. The path of the file names is expected to be relative to
the root of the project (i.e. the location of `leanpkg.toml` when it
is present).
Neither `#list_unused_decls` nor `@[main_declaration]` should appear
in a finished mathlib development. -/
@[user_command]
meta def unused_decls_cmd (_ : parse $ tk "#list_unused_decls") : lean.parser unit :=
do fs ← pexpr_list,
show tactic unit, from
do fs ← fs.mmap parse_file_name,
ds ← all_unused $ none :: fs,
ds.to_list.mmap' $ λ ⟨n,_⟩, trace!"#print {n}"
add_tactic_doc
{ name := "#list_unused_decls",
category := doc_category.cmd,
decl_names := [`tactic.unused_decls_cmd],
tags := ["debugging"] }
end tactic
|
\chapter{IPv6}
\section{Representation}
IPv6 addresses are 128 bits in length and written as a string of 32 hexadecimal digits, where each hexadecimal digit represents 4 bits. The preferred format for writing an IPv6 address is x:x:x:x:x:x:x:x, where each ``x'' is a single \emph{hextet}\footnote{a group of four hexadecimal digits, i.e.\ 16 bits}. For example,
\begin{verbatim}
2001:0DB8:0000:1111:0000:0000:0000:0200
\end{verbatim}
There are two rules to help reduce the number of digits needed to represent an IPv6 address.
\begin{itemize}
\item \textbf{Omit leading 0s:} Omit any leading 0s (zeros) in any hextet. This rule only applies to leading 0s, NOT to trailing 0s; otherwise, the address would be ambiguous. For example, the hextet \verb|0ABC| will become \verb|ABC|.
\begin{verbatim}
2001:0DB8:0000:1111:0000:0000:0000:0200
2001: DB8: 0:1111: 0: 0: 0: 200
\end{verbatim}
\item \textbf{Omit 0 segments:} Double colon (::) can replace any contiguous string of one or more hextets consisting of all 0s. The double colon (::) can only be used once within an address; otherwise, there would be more than one possible resulting address.
\begin{verbatim}
2001:0DB8:0000:1111:0000:0000:0000:0200
2001: DB8: 0:1111: 0: 0: 0: 200
2001: DB8: 0:1111::200
\end{verbatim}
\end{itemize}
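Both abbreviation rules can be checked programmatically. As a quick illustration (not part of the chapter's router examples), Python's standard \texttt{ipaddress} module applies exactly these two rules when rendering an address in compressed form:

```python
import ipaddress

# The full address from the example above.
full = "2001:0DB8:0000:1111:0000:0000:0000:0200"

# IPv6Address applies both rules: leading zeros are dropped from each
# hextet and the longest run of all-zero hextets collapses to "::".
addr = ipaddress.IPv6Address(full)
print(addr.compressed)   # 2001:db8:0:1111::200

# The compressed and full forms still denote the same 128-bit value.
assert addr == ipaddress.IPv6Address("2001:db8:0:1111::200")
```

Note that the library collapses only the longest zero run, which matches the rule that the double colon may appear only once.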
\section{Types of IPv6 Addresses}
There are three types of IPv6 addresses: Unicast, Multicast, and Anycast. Unlike IPv4, IPv6 does not have a broadcast address. However, there is an IPv6 all-nodes multicast address that essentially gives the same result.
\subsection{Unicast IPv6}
An IPv6 unicast address uniquely identifies an interface on an IPv6-enabled device. There are three types of IPv6 unicast addresses: Global unicast, Link-local, and Unique local unicast.
\paragraph{Global unicast:} A global unicast address is a globally unique and Internet-routable address. The ICANN\footnote{Internet Corporation for Assigned Names and Numbers} assigns IPv6 global unicast addresses to organizations. A global unicast address has three parts: Global routing prefix, Subnet ID, Interface ID (Figure \ref{GUA}). The \textbf{global routing prefix} (first three hextets) is the network portion of the IPv6 address that is assigned by the ISP. The \textbf{Subnet ID} (fourth hextet) is used by an organization to identify subnets within its site. The \textbf{Interface ID} (last four hextets) is the host portion of the address.
\begin{figure}[hbtp]
\caption{Global routing prefix}\label{GUA}
\centering
\includegraphics[scale=0.6]{pictures/GUA.PNG}
\end{figure}
\paragraph{Link-local:} Link-local addresses are used to communicate with other devices on the same local link\footnote{With IPv6, the term link refers to a subnet.}. Packets with a source or destination link-local address cannot be routed beyond the link from which the packet originated. Their uniqueness must only be confirmed on that link. If a link-local address is not configured manually on an interface, the device will automatically create its own. IPv6 link-local addresses are in the \textbf{FE80::/10} range\footnote{The /10 indicates that the first 10 bits are 1111 1110 10xx xxxx. The first hextet has a range of 1111 1110 1000 0000 (FE80) to 1111 1110 1011 1111 (FEBF)}.
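Membership in the FE80::/10 range is easy to verify programmatically. The following sketch uses Python's \texttt{ipaddress} module purely as an illustration of the range boundaries described above:

```python
import ipaddress

link_local = ipaddress.IPv6Network("fe80::/10")

# fe80::1 falls inside FE80::/10 (first 10 bits are 1111 1110 10) ...
assert ipaddress.IPv6Address("fe80::1") in link_local
# ... and the standard library flags it as link-local directly.
assert ipaddress.IPv6Address("fe80::1").is_link_local

# A global unicast address does not fall in the link-local range.
assert ipaddress.IPv6Address("2001:db8::1") not in link_local
```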
\paragraph{Dynamic Link-Local Addresses} A link-local address can be established dynamically or configured manually as a static link-local address. Operating systems will typically use either the EUI-64 process or a randomly generated 64-bit number to dynamically assign a link-local address. By default, Cisco routers use EUI-64 to generate the Interface ID for all link-local addresses on IPv6 interfaces. For serial interfaces, which have no MAC address of their own, the router will use the MAC address of an Ethernet interface.
\paragraph{Static Link-Local Addresses} A drawback to using the dynamically assigned link-local address is its long interface ID, which makes it challenging to identify and remember assigned addresses. Configuring the link-local address manually provides the ability to create an address that is recognizable and easier to remember.
\begin{verbatim}
R1(config)# interface g0/0
R1(config-if)# ipv6 address fe80::1 link-local
\end{verbatim}
The link-local address in the above example is used to make it easily recognizable as belonging to router R1. The same IPv6 link-local address is configured on all of R1's interfaces. FE80::1 can be configured on each link because it only has to be unique on that link. Similar to R1, router R2 would be configured with FE80::2 as the IPv6 link-local address on all of its interfaces.
\paragraph{Unique local unicast:} Unique local addresses are used for local addressing within a site or between a limited number of sites. These addresses should not be Internet-routable and should not be translated to a global IPv6 address. Unique local addresses can be used for devices that will never need or have access from another network. Unique local addresses are in the \textbf{FC00::/7} range, i.e., addresses beginning with FC00 through FDFF.
\subsection{Multicast IPv6}
IPv6 multicast addresses have the prefix \textbf{FF00::/8}. Multicast addresses can only be destination addresses and not source addresses. There are two types of IPv6 multicast addresses: Assigned multicast address and Solicited node multicast address.
\paragraph{Assigned multicast addresses} are reserved multicast addresses for predefined groups of devices. For example, \textbf{FF02::1} is the all-nodes multicast address, which has the same effect as an IPv4 broadcast address. The all-routers multicast address \textbf{FF02::2} identifies the group of all IPv6 routers\footnote{A router joins this group when the \texttt{ipv6 unicast-routing} global configuration command is executed} on the network.
\paragraph{A solicited-node multicast address} is mapped to a special Ethernet multicast address. This allows the Ethernet NIC to filter the frame by examining the destination MAC address.
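A solicited-node multicast address is formed by appending the low-order 24 bits of the unicast address to the well-known prefix FF02::1:FF00:0/104. A minimal sketch of that mapping (the helper function name is ours, chosen for the example):

```python
import ipaddress

def solicited_node(unicast: str) -> ipaddress.IPv6Address:
    """Map a unicast IPv6 address to its solicited-node multicast address."""
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    # FF02:0:0:0:0:1:FF00::/104 plus the last 24 bits of the unicast address.
    base = (0xFF02 << 112) | (0x1 << 32) | (0xFF << 24)
    return ipaddress.IPv6Address(base | low24)

print(solicited_node("2001:db8::1:200"))   # ff02::1:ff01:200
```

Note how the top byte of the last 24 bits is OR-ed into the FF00 hextet, giving FF01 here.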
\section{ICMPv6}
The informational and error messages found in ICMPv6 are very similar to the control and error messages implemented by ICMPv4. However, ICMPv6 has new features and improved functionality not found in ICMPv4. ICMPv6 includes four new types of messages: RS and RA messages\footnote{Router Solicitation (RS) message, Router Advertisement (RA) message} (communication between a router and a device) and NS and NA messages\footnote{Neighbor Solicitation (NS) message, Neighbor Advertisement (NA) message} (communication between devices). Address resolution and the DAD process use NS and NA messages, while RS and RA messages contribute to assigning IPv6 addresses to devices.\\
\begin{figure}[hbtp]
\caption{Address Resolution}\label{AddressResolution}
\centering
\includegraphics[scale=1]{pictures/AddressResolution.PNG}
\end{figure}
\paragraph{Address resolution} acts like ARP in IPv4. It is used to determine the MAC address associated with a destination IPv6 address. A device will send an NS message to the solicited-node address to ask for the MAC address. The message will include the known (targeted) IPv6 address. The device that has the targeted IPv6 address will respond with an NA message containing its Ethernet MAC address. Figure \ref{AddressResolution} shows two PCs exchanging NS and NA messages.
\begin{figure}[hbtp]
\caption{DAD process}\label{DADprocess}
\centering
\includegraphics[scale=1]{pictures/DADprocess.PNG}
\end{figure}
\paragraph{DAD process} stands for Duplicate Address Detection. To check the uniqueness of an address, the device will send an NS message with its own IPv6 address as the targeted IPv6 address, shown in Figure \ref{DADprocess}. If another device on the network has this address, it will respond with an NA message, notifying the sending device that the address is in use. If a corresponding NA message is not returned within a certain period of time, the unicast address is unique and acceptable for use.
\section{EUI-64 Process}
When the RA message indicates SLAAC or SLAAC with stateless DHCPv6, the client must generate its own Interface ID, using either the EUI-64 process or a randomly generated 64-bit number. An EUI-64 Interface ID is represented in binary and is made up of three parts (Figure \ref{EUI64}):
\begin{itemize}
\item \textbf{24-bit OUI} from the client MAC address\footnote{Ethernet MAC addresses are made up of two parts: the vendor code (OUI) assigned by the IEEE and a device identifier}, but with the 7th bit (the Universal/Local (U/L) bit) flipped. This means that if the 7th bit is a 0, it becomes a 1, and vice versa.
\item The inserted 16-bit value \textbf{FFFE} (in hexadecimal).
\item \textbf{24-bit Device Identifier} from the client MAC address.
\end{itemize}
\begin{figure}[hbtp]
\caption{EUI-64 process}\label{EUI64}
\centering
\includegraphics[scale=1]{pictures/EUI64.PNG}
\end{figure}
The advantage of EUI-64 is the Ethernet MAC address can be used to determine the Interface ID. It also allows network administrators to easily track an IPv6 address to an end device using the unique MAC address. However, this has caused privacy concerns among many users. They are concerned that their packets can be traced to the actual physical computer. Due to these concerns, a randomly generated Interface ID may be used instead.
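The three steps above can be sketched in a few lines of Python. This is an illustration only; the MAC address and the helper name are made up for the example:

```python
def eui64_interface_id(mac: str) -> str:
    """Build the 64-bit EUI-64 interface ID from a 48-bit MAC address."""
    octets = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    # Flip the 7th bit (the U/L bit) of the first octet, then insert FFFE
    # between the 24-bit OUI and the 24-bit device identifier.
    eui = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:6]
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:1b:44:11:3a:b7"))   # 021b:44ff:fe11:3ab7
```

Here the first octet 00 becomes 02 after the U/L bit is flipped, and FFFE is visible in the middle of the resulting interface ID.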
\section{IPv4 and IPv6 Coexistence}
Several techniques have been developed to accommodate a variety of transition IPv4-to-IPv6 scenarios:
\begin{itemize}
\item \textbf{Dual-stack:} A device interface is running both IPv4 and IPv6 protocols enabling it to communicate with either network.
\item \textbf{Tunneling:} The process of encapsulating an IPv6 packet inside an IPv4 packet. This allows the IPv6 packet to be transmitted over an IPv4-only network.
\item \textbf{Translation:} NAT64 allows IPv6-enabled devices to communicate with IPv4-enabled devices using a translation technique similar to NAT for IPv4. An IPv6 packet is translated to an IPv4 packet and vice versa.
\end{itemize} |
/* ===============================================================================*/
/* Version 1.0. Cullan Howlett */
/* Copyright (c) 2017 International Centre for Radio Astronomy Research, */
/* The MIT License (MIT) University of Western Australia */
/* */
/* Permission is hereby granted, free of charge, to any person obtaining a copy */
/* of this software and associated documentation files (the "Software"), to deal */
/* in the Software without restriction, including without limitation the rights */
/* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell */
/* copies of the Software, and to permit persons to whom the Software is */
/* furnished to do so, subject to the following conditions: */
/* */
/* The above copyright notice and this permission notice shall be included in */
/* all copies or substantial portions of the Software. */
/* */
/* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR */
/* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, */
/* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE */
/* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER */
/* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, */
/* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN */
/* THE SOFTWARE. */
/* ===============================================================================*/
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_spline.h>
#include <gsl/gsl_linalg.h>
#include <gsl/gsl_integration.h>
// Fisher matrix calculation for surveys with velocity and density field measurements.
// ASSUMPTIONS:
// - Uncorrelated shot-noise between the density and velocity fields
// - I use the trapezium rule to integrate over r. There will be some error due to this, but this makes the most sense as
//   we are binning the number density anyway, and it makes a substantial difference to the speed of the code.
// - The redshift dependence of the non-linear matter and velocity divergence power spectra is captured using linear interpolation.
// - The PV error scales as a fixed percentage of H0*r.
// - Flat LCDM cosmology (but not necessarily GR as gammaval can be changed).
// - The damping of the velocity and density fields due to non-linear RSD is redshift independent
// The parameters necessary for the calculation
static int nparams = 4; // The number of free parameters (we can use any of beta, fsigma8, r_g, sigma_g, sigma_u)
static int Data[4] = {0,1,3,4}; // A vector of flags for the parameters we are interested in (0=beta, 1=fsigma8, 2=r_g, 3=sigma_g, 4=sigma_u). MAKE SURE THE LENGTH OF THIS VECTOR, NPARAMS AND THE ENTRIES AGREE/MAKE SENSE, OR YOU MIGHT GET NONSENSE RESULTS!!
static int nziter = 1;            // How many bins in redshift between zmin and zmax we are considering
static double zmin = 0.0; // The minimum redshift to consider (You must have power spectra that are within this range or GSL spline will error out)
static double zmax = 0.1; // The maximum redshift to consider (You must have power spectra that are within this range or GSL spline will error out)
static double Om = 0.3121; // The matter density at z=0
static double c = 299792.458; // The speed of light in km/s
static double gammaval = 0.55; // The value of gammaval to use in the forecasts (where f(z) = Om(z)^gammaval)
static double r_g = 1.0; // The cross correlation coefficient between the velocity and density fields
static double beta0 = 0.393; // The value of beta (at z=0, we'll modify this by the redshift dependent value of bias and f as required)
static double sigma80 = 0.8150; // The value of sigma8 at z=0
static double sigma_u = 13.00; // The value of the velocity damping parameter in Mpc/h. I use the values from Jun Koda's paper
static double sigma_g = 4.24; // The value of the density damping parameter in Mpc/h. I use the values from Jun Koda's paper
static double kmax = 0.2; // The maximum k to evaluate for dd, dv and vv correlations (Typical values are 0.1 - 0.2, on smaller scales the models are likely to break down).
static double survey_area[3] = {0.0, 0.0, 1.745}; // We need to know the survey area for each survey and the overlap area between the surveys (redshift survey only first, then PV survey only, then overlap.
// For fully overlapping we would have {0, 0, size_overlap}. For redshift larger than PV, we would have {size_red-size_overlap, 0, size_overlap}). Units are pi steradians, such that full sky is 4.0, half sky is 2.0 etc.
static double error_rand = 300.0; // The observational error due to random non-linear velocities (I normally use 300km/s as in Jun Koda's paper)
static double error_dist = 0.05; // The percentage error on the distance indicator (Typically 0.05 - 0.10 for SNe IA, 0.2 or more for Tully-Fisher or Fundamental Plane)
static int verbosity = 0;         // How much output to give: 0 = only percentage errors on fsigma8, 1 = other useful info and nuisance parameters, 2 = full fisher and covariance matrices
// The number of redshifts and the redshifts themselves of the input matter and velocity divergence power spectra.
// These numbers are multiplied by 100, converted to ints and written in the form _z0p%02d which is then appended to the filename Pvel_file. See routine read_power.
static int nzin = 11;
static double zin[11] = {0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50};
char * Pvel_file = "./example_files/example_pk"; // The file containing the velocity divergence power spectrum. Don't include .dat as we'll append the redshifts on read in
// The files containing the number density of the surveys. First is the PV survey, then the redshift survey. These files MUST have the same binning and redshift range,
// so that the sum over redshift bins works (would be fine if we used splines), i.e., if one survey is shallower then that file must contain rows with n(z)=0.
// I also typically save nbar x 10^6 in the input file to make sure I don't lose precision when outputting small nbar values to files. This is corrected when the nbar file
// is read in, so see the read_nz() routine!
char * nbar_file[2] = {"./example_files/example_nbar_vel.dat",
"./example_files/example_nbar_red.dat"};
// Other global parameters and arrays
int NK, * NRED;
double pkkmin; // The minimum kmin to integrate over, based on the input power spectrum file
double pkkmax; // The maximum k in the input power spectrum. The maximum k to integrate over is the smallest of this or kmax
double * zarray;
double * rarray;
double * deltararray;
double * growtharray;
double ** nbararray;
double * karray, * deltakarray;
double ** pmmarray, ** pmtarray, ** pttarray;
gsl_spline * growth_spline, * r_spline;
gsl_interp_accel * growth_acc, * r_acc;
// Prototypes
double zeff_integrand(double mu, void * pin);
double mu_integrand(double mu, void * pin);
double ezinv(double x, void *p);
double rz(double red);
double growthfunc(double x, void *p);
double growthz(double red);
void read_nz();
void read_power();
// Calculates the Fisher matrix for a velocity survey.
int main(int argc, char **argv) {
FILE * fout;
int i, j;
// Read in the velocity divergence power spectrum output from the COPTER code (Carlson 2009)
read_power();
// Read in the number densities of the surveys
read_nz();
// Run some checks
if (!((survey_area[0] > 0.0) || (survey_area[2] > 0.0))) {
for (i=0; i<nparams; i++) {
if (Data[i] == 2) {
printf("ERROR: r_g is a free parameter, but there is no information in the density field (Fisher matrix will be singular)\n");
exit(0);
}
if (Data[i] == 3) {
printf("ERROR: sigma_g is a free parameter, but there is no information in the density field (Fisher matrix will be singular)\n");
exit(0);
}
}
}
if (!((survey_area[1] > 0.0) || (survey_area[2] > 0.0))) {
for (i=0; i<nparams; i++) {
if (Data[i] == 4) {
printf("ERROR: sigma_u is a free parameter, but there is no information in the velocity field (Fisher matrix will be singular)\n");
exit(0);
}
}
}
if ((sizeof(Data)/sizeof(*Data)) != nparams) {
printf("ERROR: Size of Data vector for parameters of interest must be equal to nparams\n");
exit(0);
}
// Calculate the Fisher matrices for all bins.
gsl_matrix * Fisher_Tot = gsl_matrix_alloc(nparams, nparams);
for (i=0; i<nparams; i++) {
for (j=0; j<nparams; j++) gsl_matrix_set(Fisher_Tot, i, j, 0.0);
}
printf("Evaluating the Fisher Matrix for %d bins between [z_min = %lf, z_max = %lf]\n", nziter, zmin, zmax);
if (verbosity == 0) printf("# zmin zmax zeff fsigma8(z_eff) percentage error(z_eff)\n");
int ziter;
for (ziter = 0; ziter<nziter; ziter++) {
double zbinwidth = (zmax-zmin)/(nziter);
double zmin_iter = ziter*zbinwidth + zmin;
double zmax_iter = (ziter+1.0)*zbinwidth + zmin;
double rzmax = gsl_spline_eval(r_spline, zmax_iter, r_acc);
double kmin = M_PI/rzmax;
if (verbosity > 0) printf("Evaluating the Fisher Matrix for [k_min = %lf, k_max = %lf] and [z_min = %lf, z_max = %lf]\n", kmin, kmax, zmin_iter, zmax_iter);
// Calculate the effective redshift (which I base on the sum of the S/N for the density and velocity fields)
int numk;
double k_sum1 = 0.0, k_sum2 = 0.0;
for (numk=0; numk<NK; numk++) {
double k = karray[numk]+0.5*deltakarray[numk];
double deltak = deltakarray[numk];
if (k < kmin) continue;
if (k > kmax) continue;
double result, error;
double params[4] = {numk, k, zmin_iter, zmax_iter};
size_t nevals = 1000;
gsl_function F;
F.function = &zeff_integrand;
F.params = &params;
gsl_integration_workspace * w = gsl_integration_workspace_alloc(1000);
gsl_integration_qags(&F, 0.0, 1.0, 0, 5e-3, nevals, w, &result, &error);
gsl_integration_workspace_free(w);
k_sum1 += k*k*deltak*result;
k_sum2 += k*k*deltak;
}
double z_eff = k_sum1/k_sum2;
if (verbosity > 0) printf("Effective redshift z_eff = %lf\n", z_eff);
double growth_eff = gsl_spline_eval(growth_spline, z_eff, growth_acc);
// Calculate the fisher matrix, integrating over k, then mu, then r (r is last as it means we are effectively integrating over effective volume).
// As the input spectra are tabulated we'll just use the trapezium rule to integrate over k
gsl_matrix * Fisher = gsl_matrix_alloc(nparams, nparams);
for (i=0; i<nparams; i++) {
for (j=i; j<nparams; j++) {
double k_sum = 0.0;
for (numk=0; numk<NK; numk++) {
double k = karray[numk]+0.5*deltakarray[numk];
double deltak = deltakarray[numk];
if (k < kmin) continue;
if (k > kmax) continue;
double result, error;
double params[6] = {numk, k, Data[i], Data[j], zmin_iter, zmax_iter};
size_t nevals = 1000;
gsl_function F;
F.function = &mu_integrand;
F.params = &params;
gsl_integration_workspace * w = gsl_integration_workspace_alloc(1000);
gsl_integration_qags(&F, 0.0, 1.0, 0, 5e-3, nevals, w, &result, &error);
gsl_integration_workspace_free(w);
k_sum += k*k*deltak*result;
}
//printf("%d, %d, %lf\n", i, j, k_sum/(4.0*M_PI));
gsl_matrix_set(Fisher, i, j, k_sum/(4.0*M_PI));
gsl_matrix_set(Fisher, j, i, k_sum/(4.0*M_PI));
}
}
for (i=0; i<nparams; i++) {
for (j=0; j<nparams; j++) {
double val = gsl_matrix_get(Fisher_Tot, i, j) + gsl_matrix_get(Fisher, i, j);
gsl_matrix_set(Fisher_Tot, i, j, val);
}
}
if (verbosity == 2) {
printf("Fisher Matrix\n======================\n");
for (i=0; i<nparams; i++) {
printf("[");
for (j=0; j<nparams; j++) printf("%15.6lf,\t", gsl_matrix_get(Fisher, i, j));
printf("],\n");
}
}
// Now invert the Fisher matrix
int s;
gsl_permutation * p;
gsl_matrix * Covariance = gsl_matrix_alloc(nparams, nparams);
p = gsl_permutation_alloc(nparams);
gsl_linalg_LU_decomp(Fisher, p, &s);
gsl_linalg_LU_invert(Fisher, p, Covariance);
gsl_permutation_free(p);
double sigma8 = sigma80 * growth_eff;
double Omz = Om*ezinv(z_eff,NULL)*ezinv(z_eff,NULL)*(1.0+z_eff)*(1.0+z_eff)*(1.0+z_eff);
double f = pow(Omz, gammaval);
double beta = f*beta0*growth_eff/pow(Om,0.55);
if (verbosity == 0) {
for (i=0; i<nparams; i++) {
if (Data[i] == 1) printf("%12.6lf %12.6lf %12.6lf %12.6lf %12.6lf\n", zmin_iter, zmax_iter, z_eff, f*sigma8, 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/(f*sigma8));
}
}
if (verbosity > 0) {
for (i=0; i<nparams; i++) {
if (Data[i] == 0) {
printf("beta = %12.6lf +/- %12.6lf\n", beta, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on beta\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/beta);
}
if (Data[i] == 1) {
printf("fsigma8 = %12.6lf +/- %12.6lf\n", f*sigma8, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on fsigma8\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/(f*sigma8));
}
if (Data[i] == 2) {
printf("r_g = %12.6lf +/- %12.6lf\n", r_g, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on r_g\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/r_g);
}
if (Data[i] == 3) {
printf("sigma_g = %12.6lf +/- %12.6lf\n", sigma_g, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on sigma_g\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/sigma_g);
}
if (Data[i] == 4) {
printf("sigma_u = %12.6lf +/- %12.6lf\n", sigma_u, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on sigma_u\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/sigma_u);
}
}
}
if (verbosity == 2) {
printf("Covariance Matrix\n======================\n");
for (i=0; i<nparams; i++) {
printf("[");
for (j=0; j<nparams; j++) printf("%15.6lf,\t", gsl_matrix_get(Covariance, i, j));
printf("],\n");
}
}
gsl_matrix_free(Fisher);
gsl_matrix_free(Covariance);
}
// Now the full Fisher matrix over all redshifts if we had more than 1 redshift bin
if (nziter > 1) {
double rzmax = gsl_spline_eval(r_spline, zmax, r_acc);
double kmin = M_PI/rzmax;
if (verbosity > 0) printf("Finally, evaluating the Fisher Matrix for [k_min = %lf, k_max = %lf] and [z_min = %lf, z_max = %lf]\n", kmin, kmax, zmin, zmax);
// Calculate the effective redshift
int numk;
double k_sum1 = 0.0, k_sum2 = 0.0;
for (numk=0; numk<NK; numk++) {
double k = karray[numk]+0.5*deltakarray[numk];
double deltak = deltakarray[numk];
if (k < kmin) continue;
if (k > kmax) continue;
double result, error;
double params[4] = {numk, k, zmin, zmax};
size_t nevals = 1000;
gsl_function F;
F.function = &zeff_integrand;
F.params = &params;
gsl_integration_workspace * w = gsl_integration_workspace_alloc(1000);
gsl_integration_qags(&F, 0.0, 1.0, 0, 5e-3, nevals, w, &result, &error);
gsl_integration_workspace_free(w);
k_sum1 += k*k*deltak*result;
k_sum2 += k*k*deltak;
}
double z_eff = k_sum1/k_sum2;
if (verbosity > 0) printf("Effective redshift z_eff = %lf\n", z_eff);
double growth_eff = gsl_spline_eval(growth_spline, z_eff, growth_acc);
if (verbosity == 2) {
printf("Fisher Matrix\n======================\n");
for (i=0; i<nparams; i++) {
printf("[");
for (j=0; j<nparams; j++) printf("%15.6lf,\t", gsl_matrix_get(Fisher_Tot, i, j));
printf("],\n");
}
}
// Now invert the Fisher matrix
int s;
gsl_permutation * p;
gsl_matrix * Covariance = gsl_matrix_alloc(nparams, nparams);
p = gsl_permutation_alloc(nparams);
gsl_linalg_LU_decomp(Fisher_Tot, p, &s);
gsl_linalg_LU_invert(Fisher_Tot, p, Covariance);
gsl_permutation_free(p);
double sigma8 = sigma80 * growth_eff;
double Omz = Om*ezinv(z_eff,NULL)*ezinv(z_eff,NULL)*(1.0+z_eff)*(1.0+z_eff)*(1.0+z_eff);
double f = pow(Omz, gammaval);
double beta = f*beta0*growth_eff/pow(Om,0.55);
if (verbosity == 0) {
printf("# Full redshift range:\n");
for (i=0; i<nparams; i++) {
if (Data[i] == 1) printf("%12.6lf %12.6lf %12.6lf %12.6lf %12.6lf\n", zmin, zmax, z_eff, f*sigma8, 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/(f*sigma8));
}
}
if (verbosity > 0) {
for (i=0; i<nparams; i++) {
if (Data[i] == 0) {
printf("beta = %12.6lf +/- %12.6lf\n", beta, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on beta\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/beta);
}
if (Data[i] == 1) {
printf("fsigma8 = %12.6lf +/- %12.6lf\n", f*sigma8, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on fsigma8\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/(f*sigma8));
}
if (Data[i] == 2) {
printf("r_g = %12.6lf +/- %12.6lf\n", r_g, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on r_g\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/r_g);
}
if (Data[i] == 3) {
printf("sigma_g = %12.6lf +/- %12.6lf\n", sigma_g, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on sigma_g\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/sigma_g);
}
if (Data[i] == 4) {
printf("sigma_u = %12.6lf +/- %12.6lf\n", sigma_u, sqrt(gsl_matrix_get(Covariance, i, i)));
printf("%4.2lf percent error on sigma_u\n", 100.0*sqrt(gsl_matrix_get(Covariance, i, i))/sigma_u);
}
}
}
if (verbosity == 2) {
printf("Covariance Matrix\n======================\n");
for (i=0; i<nparams; i++) {
printf("[");
for (j=0; j<nparams; j++) printf("%15.6lf,\t", gsl_matrix_get(Covariance, i, j));
printf("],\n");
}
}
}
gsl_matrix_free(Fisher_Tot);
gsl_spline_free(growth_spline);
gsl_interp_accel_free(growth_acc);
return 0;
}
// The integrand to calculate the effective redshift. I'm not actually sure how this is done in the case of
// density and velocity field measurements, but it seems logical to base it on the sum of the integral of the density spectra and the velocity power spectra
// weighted by their effective signal to noise. In this way the effective redshift is calculated in the same way as a redshift survey, but there is some dependence
// on the S/N in the velocity power spectrum too. In any case, the S/N of the density field measurement is always much higher (because there are
// no errors and the number density is higher) and so this dominates the effective redshift calculation.
double zeff_integrand(double mu, void * pin) {
int i, j, m, q, u, surv;
double * p = (double *)pin;
int numk = (int)p[0];
double k = p[1];
double zminval = p[2];
double zmaxval = p[3];
gsl_interp_accel * Pmm_acc, * Pmt_acc, * Ptt_acc;
gsl_spline * Pmm_spline, * Pmt_spline, * Ptt_spline;
if (nzin > 1) {
double * Pmm_array = (double *)malloc(nzin*sizeof(double));
double * Pmt_array = (double *)malloc(nzin*sizeof(double));
double * Ptt_array = (double *)malloc(nzin*sizeof(double));
for (j=0; j<nzin; j++) {
Pmm_array[j] = pmmarray[j][numk];
Pmt_array[j] = pmtarray[j][numk];
Ptt_array[j] = pttarray[j][numk];
}
Pmm_acc = gsl_interp_accel_alloc();
Pmm_spline = gsl_spline_alloc(gsl_interp_cspline, nzin);
gsl_spline_init(Pmm_spline, zin, Pmm_array, nzin);
free(Pmm_array);
Pmt_acc = gsl_interp_accel_alloc();
Pmt_spline = gsl_spline_alloc(gsl_interp_cspline, nzin);
gsl_spline_init(Pmt_spline, zin, Pmt_array, nzin);
free(Pmt_array);
Ptt_acc = gsl_interp_accel_alloc();
Ptt_spline = gsl_spline_alloc(gsl_interp_cspline, nzin);
gsl_spline_init(Ptt_spline, zin, Ptt_array, nzin);
free(Ptt_array);
}
double dendamp = sqrt(1.0/(1.0+0.5*(k*k*mu*mu*sigma_g*sigma_g))); // This is unitless
double veldamp = sin(k*sigma_u)/(k*sigma_u); // This is unitless
double dVeff = 0.0, zdVeff = 0.0;
for (i=0; i<NRED[0]; i++) {
double zval = zarray[i];
if (zval < zminval) continue;
if (zval > zmaxval) break;
double r_sum = 0.0;
double r = rarray[i];
double deltar = deltararray[i];
double dd_prefac=0.0, vv_prefac=0.0;
double P_gg=0.0, P_uu=0.0;
double sigma8 = sigma80 * growtharray[i];
// First let's calculate the relevant power spectra, interpolating them in redshift.
// The splines only exist when nzin > 1; otherwise fall back to the single tabulated redshift.
double Pmm, Pmt, Ptt;
if (nzin > 1) {
Pmm = gsl_spline_eval(Pmm_spline, zval, Pmm_acc);
Pmt = gsl_spline_eval(Pmt_spline, zval, Pmt_acc);
Ptt = gsl_spline_eval(Ptt_spline, zval, Ptt_acc);
} else {
Pmm = pmmarray[0][numk];
Pmt = pmtarray[0][numk];
Ptt = pttarray[0][numk];
}
double Omz = Om*ezinv(zval,NULL)*ezinv(zval,NULL)*(1.0+zval)*(1.0+zval)*(1.0+zval);
double f = pow(Omz, gammaval);
double beta = f*beta0*growtharray[i]/pow(Om,0.55);
vv_prefac = 1.0e2*f*mu*veldamp/k;
dd_prefac = (1.0/(beta*beta) + 2.0*r_g*mu*mu/beta + mu*mu*mu*mu)*f*f*dendamp*dendamp;
P_gg = dd_prefac*Pmm;
P_uu = vv_prefac*vv_prefac*Ptt;
// We need to do the overlapping and non-overlapping parts of the redshifts and PV surveys separately
for (surv=0; surv<3; surv++) {
double surv_sum = 0.0;
if (survey_area[surv] > 0.0) {
double error_obs, error_noise, n_g = 0.0, n_u = 0.0;
// Set the nbar for each section.
if (surv == 0) {
n_g = nbararray[1][i];
} else if (surv == 1) {
error_obs = 100.0*error_dist*r; // Percentage error * distance * H0 in km/s (factor of 100.0 comes from hubble parameter)
error_noise = error_rand*error_rand + error_obs*error_obs; // Error_noise is in km^{2}s^{-2}
n_u = nbararray[0][i]/error_noise;
} else {
error_obs = 100.0*error_dist*r; // Percentage error * distance * H0 in km/s (factor of 100.0 comes from hubble parameter)
error_noise = error_rand*error_rand + error_obs*error_obs; // Error_noise is in km^{2}s^{-2}
n_u = nbararray[0][i]/error_noise;
n_g = nbararray[1][i];
}
double value1 = n_g/(1.0 + n_g*P_gg);
double value2 = n_u/(1.0 + n_u*P_uu);
surv_sum += value1*value1 + value2*value2;
surv_sum *= survey_area[surv];
r_sum += surv_sum;
}
}
dVeff += r*r*deltar*r_sum;
zdVeff += zval*r*r*deltar*r_sum;
}
if (nzin > 1) {
gsl_spline_free(Pmm_spline);
gsl_spline_free(Pmt_spline);
gsl_spline_free(Ptt_spline);
gsl_interp_accel_free(Pmm_acc);
gsl_interp_accel_free(Pmt_acc);
gsl_interp_accel_free(Ptt_acc);
}
return zdVeff/dVeff;
}
// The integrand for the integral over mu in the Fisher matrix calculation.
// For each mu we need a 2x2 matrix of the relevant power spectrum derivatives and the inverse of the power spectrum covariance matrix.
// Because there are regions where the number density goes to zero, the covariance is difficult to invert numerically;
// instead we work with the analytic inverse directly, which lets us simply set the relevant parts to zero where the number density vanishes.
double mu_integrand(double mu, void * pin) {
int i, j, m, q, u, surv;
double * p = (double *)pin;
double result, error;
int numk = (int)p[0];
double k = p[1];
double zminval = p[4];
double zmaxval = p[5];
gsl_interp_accel * Pmm_acc, * Pmt_acc, * Ptt_acc;
gsl_spline * Pmm_spline, * Pmt_spline, * Ptt_spline;
double * Pmm_array = (double *)malloc(nzin*sizeof(double));
double * Pmt_array = (double *)malloc(nzin*sizeof(double));
double * Ptt_array = (double *)malloc(nzin*sizeof(double));
for (j=0; j<nzin; j++) {
Pmm_array[j] = pmmarray[j][numk];
Pmt_array[j] = pmtarray[j][numk];
Ptt_array[j] = pttarray[j][numk];
}
Pmm_acc = gsl_interp_accel_alloc();
Pmm_spline = gsl_spline_alloc(gsl_interp_cspline, nzin);
gsl_spline_init(Pmm_spline, zin, Pmm_array, nzin);
free(Pmm_array);
Pmt_acc = gsl_interp_accel_alloc();
Pmt_spline = gsl_spline_alloc(gsl_interp_cspline, nzin);
gsl_spline_init(Pmt_spline, zin, Pmt_array, nzin);
free(Pmt_array);
Ptt_acc = gsl_interp_accel_alloc();
Ptt_spline = gsl_spline_alloc(gsl_interp_cspline, nzin);
gsl_spline_init(Ptt_spline, zin, Ptt_array, nzin);
free(Ptt_array);
double dendamp = sqrt(1.0/(1.0+0.5*(k*k*mu*mu*sigma_g*sigma_g))); // This is unitless
double veldamp = sin(k*sigma_u)/(k*sigma_u); // This is unitless
double result_sum = 0.0;
for (i=0; i<NRED[0]; i++) {
double zval = zarray[i];
double r_sum = 0.0;
double r = rarray[i];
double deltar = deltararray[i];
if (zval < zminval) continue;
if (zval > zmaxval) break;
double dd_prefac=0.0, dv_prefac=0.0, vv_prefac=0.0;
double P_gg=0.0, P_ug=0.0, P_uu=0.0;
double sigma8 = sigma80 * growtharray[i];
// First let's calculate the relevant power spectra, interpolating them in redshift.
double Pmm, Pmt, Ptt;
Pmm = gsl_spline_eval(Pmm_spline, zval, Pmm_acc);
Pmt = gsl_spline_eval(Pmt_spline, zval, Pmt_acc);
Ptt = gsl_spline_eval(Ptt_spline, zval, Ptt_acc);
double Omz = Om*ezinv(zval,NULL)*ezinv(zval,NULL)*(1.0+zval)*(1.0+zval)*(1.0+zval);
double f = pow(Omz, gammaval);
double beta = f*beta0*growtharray[i]/pow(Om,0.55);
vv_prefac = 1.0e2*f*mu*veldamp/k;
dd_prefac = (1.0/(beta*beta) + 2.0*r_g*mu*mu/beta + mu*mu*mu*mu)*f*f*dendamp*dendamp;
dv_prefac = (r_g/beta + mu*mu)*f*dendamp;
P_gg = dd_prefac*Pmm;
P_ug = vv_prefac*dv_prefac*Pmt;
P_uu = vv_prefac*vv_prefac*Ptt;
// And now the derivatives. Need to create a matrix of derivatives for each of the two parameters of interest
gsl_matrix * dPdt1 = gsl_matrix_calloc(2, 2);
gsl_matrix * dPdt2 = gsl_matrix_calloc(2, 2);
double value;
switch((int)p[2]) {
// Differential w.r.t betaA
case 0:
value = -2.0*(1.0/beta + r_g*mu*mu)*f*f*dendamp*dendamp*Pmm/(beta*beta);
gsl_matrix_set(dPdt1, 0, 0, value);
value = -(vv_prefac*f*r_g*dendamp*Pmt)/(beta*beta);
gsl_matrix_set(dPdt1, 0, 1, value);
gsl_matrix_set(dPdt1, 1, 0, value);
break;
// Differential w.r.t fsigma8
case 1:
value = 2.0*(f/(beta*beta) + 2.0*f*r_g*mu*mu/beta + f*mu*mu*mu*mu)*dendamp*dendamp*Pmm/sigma8;
gsl_matrix_set(dPdt1, 0, 0, value);
value = 2.0*vv_prefac*(r_g/beta + mu*mu)*dendamp*Pmt/sigma8;
gsl_matrix_set(dPdt1, 0, 1, value);
gsl_matrix_set(dPdt1, 1, 0, value);
value = (2.0*P_uu)/(f*sigma8);
gsl_matrix_set(dPdt1, 1, 1, value);
break;
// Differential w.r.t r_g
case 2:
value = 2.0*(1.0/beta)*mu*mu*f*f*dendamp*dendamp*Pmm;
gsl_matrix_set(dPdt1, 0, 0, value);
value = vv_prefac*(1.0/beta)*f*dendamp*Pmt;
gsl_matrix_set(dPdt1, 0, 1, value);
gsl_matrix_set(dPdt1, 1, 0, value);
break;
// Differential w.r.t sigma_g
case 3:
value = -k*k*mu*mu*dendamp*dendamp*sigma_g*P_gg;
gsl_matrix_set(dPdt1, 0, 0, value);
value = -0.5*k*k*mu*mu*dendamp*dendamp*sigma_g*P_ug;
gsl_matrix_set(dPdt1, 0, 1, value);
gsl_matrix_set(dPdt1, 1, 0, value);
break;
// Differential w.r.t sigma_u
case 4:
value = P_ug*(k*cos(k*sigma_u)/sin(k*sigma_u) - 1.0/sigma_u);
gsl_matrix_set(dPdt1, 0, 1, value);
gsl_matrix_set(dPdt1, 1, 0, value);
value = 2.0*P_uu*(k*cos(k*sigma_u)/sin(k*sigma_u) - 1.0/sigma_u);
gsl_matrix_set(dPdt1, 1, 1, value);
break;
default:
break;
}
switch((int)p[3]) {
// Differential w.r.t betaA
case 0:
value = -2.0*(1.0/beta + r_g*mu*mu)*f*f*dendamp*dendamp*Pmm/(beta*beta);
gsl_matrix_set(dPdt2, 0, 0, value);
value = -(vv_prefac*f*r_g*dendamp*Pmt)/(beta*beta);
gsl_matrix_set(dPdt2, 0, 1, value);
gsl_matrix_set(dPdt2, 1, 0, value);
break;
// Differential w.r.t fsigma8
case 1:
value = 2.0*(f/(beta*beta) + 2.0*f*r_g*mu*mu/beta + f*mu*mu*mu*mu)*dendamp*dendamp*Pmm/sigma8;
gsl_matrix_set(dPdt2, 0, 0, value);
value = 2.0*vv_prefac*(r_g/beta + mu*mu)*dendamp*Pmt/sigma8;
gsl_matrix_set(dPdt2, 0, 1, value);
gsl_matrix_set(dPdt2, 1, 0, value);
value = (2.0*P_uu)/(f*sigma8);
gsl_matrix_set(dPdt2, 1, 1, value);
break;
// Differential w.r.t r_g
case 2:
value = 2.0*(1.0/beta)*mu*mu*f*f*dendamp*dendamp*Pmm;
gsl_matrix_set(dPdt2, 0, 0, value);
value = vv_prefac*(1.0/beta)*f*dendamp*Pmt;
gsl_matrix_set(dPdt2, 0, 1, value);
gsl_matrix_set(dPdt2, 1, 0, value);
break;
// Differential w.r.t sigma_g
case 3:
value = -k*k*mu*mu*dendamp*dendamp*sigma_g*P_gg;
gsl_matrix_set(dPdt2, 0, 0, value);
value = -0.5*k*k*mu*mu*dendamp*dendamp*sigma_g*P_ug;
gsl_matrix_set(dPdt2, 0, 1, value);
gsl_matrix_set(dPdt2, 1, 0, value);
break;
// Differential w.r.t sigma_u
case 4:
value = P_ug*(k*cos(k*sigma_u)/sin(k*sigma_u) - 1.0/sigma_u);
gsl_matrix_set(dPdt2, 0, 1, value);
gsl_matrix_set(dPdt2, 1, 0, value);
value = 2.0*P_uu*(k*cos(k*sigma_u)/sin(k*sigma_u) - 1.0/sigma_u);
gsl_matrix_set(dPdt2, 1, 1, value);
break;
default:
break;
}
// We need to do the overlapping and non-overlapping parts of the surveys separately
for (surv=0; surv<3; surv++) {
double surv_sum = 0.0;
if (survey_area[surv] > 0.0) {
double error_obs, error_noise, n_g = 0.0, n_u = 0.0;
// Set the nbar for each section.
if (surv == 0) {
n_g = nbararray[1][i];
} else if (surv == 1) {
error_obs = 100.0*error_dist*r; // Percentage error * distance * H0 in km/s (factor of 100.0 comes from hubble parameter)
error_noise = error_rand*error_rand + error_obs*error_obs; // Error_noise is in km^{2}s^{-2}
n_u = nbararray[0][i]/error_noise;
} else {
error_obs = 100.0*error_dist*r; // Percentage error * distance * H0 in km/s (factor of 100.0 comes from hubble parameter)
error_noise = error_rand*error_rand + error_obs*error_obs; // Error_noise is in km^{2}s^{-2}
n_u = nbararray[0][i]/error_noise;
n_g = nbararray[1][i];
}
//printf("%lf, %lf, %lf\n", r, n_g, 1.0e6*n_u);
if (!((n_u > 0.0) || (n_g > 0.0))) continue;
// First we need the determinant.
double det = 1.0 + n_u*n_g*(P_gg*P_uu - P_ug*P_ug) + n_u*P_uu + n_g*P_gg;
// Now the inverse matrix.
gsl_matrix * iP = gsl_matrix_calloc(2, 2);
value = n_u*n_g*P_uu + n_g;
gsl_matrix_set(iP, 0, 0, value);
value = n_g*n_u*P_gg + n_u;
gsl_matrix_set(iP, 1, 1, value);
value = - n_g*n_u*P_ug;
gsl_matrix_set(iP, 0, 1, value);
gsl_matrix_set(iP, 1, 0, value);
// Finally we need to compute the Fisher integrand by summing over the inverse and differential matrices
for (j=0; j<2; j++) {
for (m=0; m<2; m++) {
for (u=0; u<2; u++) {
for (q=0; q<2; q++) {
value = gsl_matrix_get(dPdt1, j, q)*gsl_matrix_get(iP, q, u)*gsl_matrix_get(dPdt2, u, m)*gsl_matrix_get(iP, m, j);
surv_sum += value;
}
}
}
}
surv_sum /= det*det;
surv_sum *= survey_area[surv];
r_sum += surv_sum;
gsl_matrix_free(iP);
//printf("%d, %lf, %lf, %lf, %lf\n", surv, k, mu, r, r_sum);
}
}
//printf("%lf, %lf, %lf, %lf\n", k, mu, r, r_sum);
result_sum += r*r*deltar*r_sum;
gsl_matrix_free(dPdt1);
gsl_matrix_free(dPdt2);
}
gsl_spline_free(Pmm_spline);
gsl_spline_free(Pmt_spline);
gsl_spline_free(Ptt_spline);
gsl_interp_accel_free(Pmm_acc);
gsl_interp_accel_free(Pmt_acc);
gsl_interp_accel_free(Ptt_acc);
return result_sum;
}
// Routine to read in the number density as a function of redshift. We need a file containing the left-most edge of each redshift bin and the number density in that bin.
// From this we create arrays to store the bin centre, the bin width, the comoving distance and growth factor at the bin centre, and the number density.
// The last bin width and bin centre are constructed from the last row of the input and the value of zmax at the top of the code.
// IT IS VERY IMPORTANT THAT THE NUMBER OF ROWS AND THE REDSHIFTS OF BOTH THE DENSITY AND PV NUMBER DENSITIES MATCH, AS THE INTEGRATION OVER Z IS DONE USING THE TRAPEZIUM RULE.
// ALSO NOTE THE FACTOR OF 1.0e-6 APPLIED WHERE THE VALUES ARE STORED BELOW. THIS IS BECAUSE I TYPICALLY SAVE THE VALUE OF NBAR x 10^6 IN THE INPUT FILES, SO THAT I DON'T LOSE PRECISION
// WHEN SMALL VALUES OF THE NUMBER DENSITY ARE WRITTEN TO A FILE!
void read_nz() {
FILE * fp;
char buf[500];
int i, nsamp;
NRED = (int *)calloc(2, sizeof(int));
nbararray = (double **)calloc(2, sizeof(double*));
double * zinarray;
for (nsamp = 0; nsamp < 2; nsamp++) {
if(!(fp = fopen(nbar_file[nsamp], "r"))) {
printf("\nERROR: Can't open nbar file '%s'.\n\n", nbar_file[nsamp]);
exit(0);
}
NRED[nsamp] = 0;
while(fgets(buf,500,fp)) {
if(strncmp(buf,"#",1)!=0) {
double tz, tnbar;
if(sscanf(buf, "%lf %lf\n", &tz, &tnbar) != 2) {printf("nbar read error\n"); exit(0);};
if (tz > zmax) break;
NRED[nsamp]++;
}
}
fclose(fp);
if (nsamp == 0) zinarray = (double *)calloc(NRED[nsamp], sizeof(double));
nbararray[nsamp] = (double *)calloc(NRED[nsamp], sizeof(double));
NRED[nsamp] = 0;
fp = fopen(nbar_file[nsamp], "r");
while(fgets(buf,500,fp)) {
if(strncmp(buf,"#",1)!=0) {
double tz, tnbar;
if(sscanf(buf, "%lf %lf\n", &tz, &tnbar) != 2) {printf("nbar read error\n"); exit(0);};
if (tz > zmax) break;
if (nsamp == 0) zinarray[NRED[nsamp]] = tz;
nbararray[nsamp][NRED[nsamp]] = 1.0e-6*tnbar;
NRED[nsamp]++;
}
}
fclose(fp);
}
if (NRED[1] != NRED[0]) {
printf("ERROR: The number of redshift bins for each sample must match\n");
exit(0);
}
zarray = (double *)calloc(NRED[0], sizeof(double));
rarray = (double *)calloc(NRED[0], sizeof(double));
deltararray = (double *)calloc(NRED[0], sizeof(double));
growtharray = (double *)calloc(NRED[0], sizeof(double));
for (i=0; i<NRED[0]-1; i++) {
zarray[i] = (zinarray[i+1]+zinarray[i])/2.0;
rarray[i] = rz(zarray[i]);
deltararray[i] = rz(zinarray[i+1]) - rz(zinarray[i]);
growtharray[i] = growthz(zarray[i])/growthz(0.0);
//printf("%12.6lf %12.6lf %12.6lf %12.6lf %12.6lf %12.6lf\n", zarray[i], rarray[i], deltararray[i], growtharray[i], nbararray[0][i], nbararray[1][i]);
}
zarray[NRED[0]-1] = (zmax+zinarray[NRED[0]-1])/2.0;
rarray[NRED[0]-1] = rz(zarray[NRED[0]-1]);
deltararray[NRED[0]-1] = rz(zmax) - rz(zinarray[NRED[0]-1]);
growtharray[NRED[0]-1] = growthz(zarray[NRED[0]-1])/growthz(0.0);
//printf("%12.6lf %12.6lf %12.6lf %12.6lf %12.6lf %12.6lf\n", zarray[NRED[0]-1], rarray[NRED[0]-1], deltararray[NRED[0]-1], growtharray[NRED[0]-1], nbararray[0][NRED[0]-1], nbararray[1][NRED[0]-1]);
growth_acc = gsl_interp_accel_alloc();
growth_spline = gsl_spline_alloc(gsl_interp_cspline, NRED[0]);
gsl_spline_init(growth_spline, zarray, growtharray, NRED[0]);
free(zinarray);
// Also create a simple redshift-distance spline
int nbins = 400;
double REDMIN = 0.0;
double REDMAX = 2.0;
double redbinwidth = (REDMAX-REDMIN)/(double)(nbins-1);
double RMIN = rz(REDMIN);
double RMAX = rz(REDMAX);
double * ztemp = (double *)malloc(nbins*sizeof(double));
double * rtemp = (double *)malloc(nbins*sizeof(double));
for (i=0;i<nbins;i++) {
ztemp[i] = i*redbinwidth+REDMIN;
rtemp[i] = rz(ztemp[i]);
}
r_acc = gsl_interp_accel_alloc();
r_spline = gsl_spline_alloc(gsl_interp_cspline, nbins);
gsl_spline_init(r_spline, ztemp, rtemp, nbins);
free(ztemp);
free(rtemp);
return;
}
// Routine to read in the velocity power spectrum.
void read_power() {
FILE * fp;
char buf[500];
int i, j;
pmmarray = (double**)malloc(nzin*sizeof(double*));
pmtarray = (double**)malloc(nzin*sizeof(double*));
pttarray = (double**)malloc(nzin*sizeof(double*));
for (i = 0; i<nzin; i++) {
char Pvel_file_in[500];
sprintf(Pvel_file_in, "%s_z0p%02d.dat", Pvel_file, (int)(100.0*zin[i]));
if(!(fp = fopen(Pvel_file_in, "r"))) {
printf("\nERROR: Can't open power file '%s'.\n\n", Pvel_file_in);
exit(0);
}
NK = 0;
while(fgets(buf,500,fp)) {
if(strncmp(buf,"#",1)!=0) {
double tk, pkdelta, pkdeltavel, pkvel;
if(sscanf(buf, "%lf %lf %lf %lf\n", &tk, &pkdelta, &pkdeltavel, &pkvel) != 4) {printf("Pvel read error\n"); exit(0);};
NK++;
}
}
fclose(fp);
if (i == 0) {
karray = (double *)calloc(NK, sizeof(double));
deltakarray = (double *)calloc(NK-1, sizeof(double));
}
pmmarray[i] = (double *)calloc(NK, sizeof(double));
pmtarray[i] = (double *)calloc(NK, sizeof(double));
pttarray[i] = (double *)calloc(NK, sizeof(double));
NK = 0;
fp = fopen(Pvel_file_in, "r");
while(fgets(buf,500,fp)) {
if(strncmp(buf,"#",1)!=0) {
double tk, pkdelta, pkdeltavel, pkvel;
if(sscanf(buf, "%lf %lf %lf %lf\n", &tk, &pkdelta, &pkdeltavel, &pkvel) != 4) {printf("Pvel read error\n"); exit(0);};
if (i == 0) karray[NK] = tk;
pttarray[i][NK] = pkvel;
pmmarray[i][NK] = pkdelta;
pmtarray[i][NK] = pkdeltavel;
NK++;
}
}
fclose(fp);
}
for (i=0; i<NK-1; i++) deltakarray[i] = karray[i+1]-karray[i];
pkkmin = karray[0];
pkkmax = karray[NK-1];
if (pkkmax < kmax) {
printf("ERROR: The maximum k in the input power spectra is less than k_max\n");
exit(0);
}
return;
}
// Integrand for the comoving distance
double ezinv(double x, void *p) {
return 1.0/sqrt(Om*(1.0+x)*(1.0+x)*(1.0+x)+(1.0-Om));
}
// Calculates the comoving distance from the redshift
double rz(double red) {
double result, error;
gsl_function F;
gsl_integration_workspace * w = gsl_integration_workspace_alloc(1000);
F.function = &ezinv;
gsl_integration_qags(&F, 0.0, red, 0, 1e-7, 1000, w, &result, &error);
gsl_integration_workspace_free(w);
return c*result/100.0;
}
// The integrand for the normalised growth factor
double growthfunc(double x, void *p) {
double red = 1.0/x - 1.0;
double Omz = Om*ezinv(red,NULL)*ezinv(red,NULL)/(x*x*x);
double f = pow(Omz, gammaval);
return f/x;
}
// Calculates the normalised growth factor as a function of redshift given a value of gammaval
double growthz(double red) {
double result, error;
gsl_function F;
gsl_integration_workspace * w = gsl_integration_workspace_alloc(1000);
F.function = &growthfunc;
double a = 1.0/(1.0+red);
gsl_integration_qags(&F, a, 1.0, 0, 1e-7, 1000, w, &result, &error);
gsl_integration_workspace_free(w);
return exp(-result);
}
|
function loopback_arr(; kwargs...)
return (; bigarrs = (;kwargs...))
end
fns = (; loopback_arr)
|
Arkansas State University - Newport Campus DOES use SAT or ACT scores for admitting a substantial number of students into Bachelor Degree programs.
Ashland Community & Technical College DOES use SAT or ACT scores for admitting a substantial number of students into Bachelor Degree programs.
Austin Community College DOES NOT use SAT or ACT scores for admitting a substantial number of students into Bachelor Degree programs. |
section {* Alphabetised Predicates *}
theory utp_pred
imports
utp_expr
utp_subst
utp_tactics
begin
text {* An alphabetised predicate is simply a Boolean-valued expression *}
type_synonym '\<alpha> upred = "(bool, '\<alpha>) uexpr"
translations
(type) "'\<alpha> upred" <= (type) "(bool, '\<alpha>) uexpr"
subsection {* Predicate syntax *}
text {* We want to remain as close as possible to the mathematical UTP syntax, but also
want to be conservative with HOL. For this reason we chose not to steal syntax
from HOL, but where possible use polymorphism to allow selection of the appropriate
operator (UTP vs. HOL). Thus we will first remove the standard syntax for conjunction,
disjunction, and negation, and replace these with ad hoc overloaded definitions. *}
purge_notation
conj (infixr "\<and>" 35) and
disj (infixr "\<or>" 30) and
Not ("\<not> _" [40] 40)
consts
utrue :: "'a" ("true")
ufalse :: "'a" ("false")
uconj :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" (infixr "\<and>" 35)
udisj :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" (infixr "\<or>" 30)
uimpl :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" (infixr "\<Rightarrow>" 25)
uiff :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" (infixr "\<Leftrightarrow>" 25)
unot :: "'a \<Rightarrow> 'a" ("\<not> _" [40] 40)
uex :: "('a, '\<alpha>) uvar \<Rightarrow> 'p \<Rightarrow> 'p"
uall :: "('a, '\<alpha>) uvar \<Rightarrow> 'p \<Rightarrow> 'p"
ushEx :: "['a \<Rightarrow> 'p] \<Rightarrow> 'p"
ushAll :: "['a \<Rightarrow> 'p] \<Rightarrow> 'p"
adhoc_overloading
uconj conj and
udisj disj and
unot Not
text {* We set up two versions of each of the quantifiers: @{const uex} / @{const uall} and
@{const ushEx} / @{const ushAll}. The former pair allows quantification of UTP variables,
whilst the latter allows quantification of HOL variables. Both varieties will be
needed at various points. Syntactically they are distinguished by a boldface quantifier
for the HOL versions (achieved by the "bold" escape in Isabelle). *}
nonterminal idt_list
syntax
"_idt_el" :: "idt \<Rightarrow> idt_list" ("_")
"_idt_list" :: "idt \<Rightarrow> idt_list \<Rightarrow> idt_list" ("(_,/ _)" [0, 1])
"_uex" :: "salpha \<Rightarrow> logic \<Rightarrow> logic" ("\<exists> _ \<bullet> _" [0, 10] 10)
"_uall" :: "salpha \<Rightarrow> logic \<Rightarrow> logic" ("\<forall> _ \<bullet> _" [0, 10] 10)
"_ushEx" :: "idt_list \<Rightarrow> logic \<Rightarrow> logic" ("\<^bold>\<exists> _ \<bullet> _" [0, 10] 10)
"_ushAll" :: "idt_list \<Rightarrow> logic \<Rightarrow> logic" ("\<^bold>\<forall> _ \<bullet> _" [0, 10] 10)
"_ushBEx" :: "idt \<Rightarrow> logic \<Rightarrow> logic \<Rightarrow> logic" ("\<^bold>\<exists> _ \<in> _ \<bullet> _" [0, 0, 10] 10)
"_ushBAll" :: "idt \<Rightarrow> logic \<Rightarrow> logic \<Rightarrow> logic" ("\<^bold>\<forall> _ \<in> _ \<bullet> _" [0, 0, 10] 10)
"_ushGAll" :: "idt \<Rightarrow> logic \<Rightarrow> logic \<Rightarrow> logic" ("\<^bold>\<forall> _ | _ \<bullet> _" [0, 0, 10] 10)
"_ushGtAll" :: "idt \<Rightarrow> logic \<Rightarrow> logic \<Rightarrow> logic" ("\<^bold>\<forall> _ > _ \<bullet> _" [0, 0, 10] 10)
"_ushLtAll" :: "idt \<Rightarrow> logic \<Rightarrow> logic \<Rightarrow> logic" ("\<^bold>\<forall> _ < _ \<bullet> _" [0, 0, 10] 10)
translations
"_uex x P" == "CONST uex x P"
"_uall x P" == "CONST uall x P"
"_ushEx (_idt_el x) P" == "CONST ushEx (\<lambda> x. P)"
"_ushEx (_idt_list x y) P" => "CONST ushEx (\<lambda> x. (_ushEx y P))"
"\<^bold>\<exists> x \<in> A \<bullet> P" => "\<^bold>\<exists> x \<bullet> \<guillemotleft>x\<guillemotright> \<in>\<^sub>u A \<and> P"
"_ushAll (_idt_el x) P" == "CONST ushAll (\<lambda> x. P)"
"_ushAll (_idt_list x y) P" => "CONST ushAll (\<lambda> x. (_ushAll y P))"
"\<^bold>\<forall> x \<in> A \<bullet> P" => "\<^bold>\<forall> x \<bullet> \<guillemotleft>x\<guillemotright> \<in>\<^sub>u A \<Rightarrow> P"
"\<^bold>\<forall> x | P \<bullet> Q" => "\<^bold>\<forall> x \<bullet> P \<Rightarrow> Q"
"\<^bold>\<forall> x > y \<bullet> P" => "\<^bold>\<forall> x \<bullet> \<guillemotleft>x\<guillemotright> >\<^sub>u y \<Rightarrow> P"
"\<^bold>\<forall> x < y \<bullet> P" => "\<^bold>\<forall> x \<bullet> \<guillemotleft>x\<guillemotright> <\<^sub>u y \<Rightarrow> P"
subsection {* Predicate operators *}
text {* We chose to maximally reuse definitions and laws built into HOL. For this reason,
when introducing the core operators we proceed by lifting operators from the
polymorphic algebraic hierarchy of HOL. Thus the initial definitions take
place in the context of type class instantiations. We first introduce our own
class called \emph{refine} that will add the refinement operator syntax to
the HOL partial order class. *}
class refine = order
abbreviation refineBy :: "'a::refine \<Rightarrow> 'a \<Rightarrow> bool" (infix "\<sqsubseteq>" 50) where
"P \<sqsubseteq> Q \<equiv> less_eq Q P"
text {* Since, on the whole, lattices in UTP are the opposite way up to the standard definitions
in HOL, we syntactically invert the lattice operators. This is the one exception where
we do steal HOL syntax, but I think it makes sense for UTP. *}
purge_notation inf (infixl "\<sqinter>" 70)
notation inf (infixl "\<squnion>" 70)
purge_notation sup (infixl "\<squnion>" 65)
notation sup (infixl "\<sqinter>" 65)
purge_notation Inf ("\<Sqinter>_" [900] 900)
notation Inf ("\<Squnion>_" [900] 900)
purge_notation Sup ("\<Squnion>_" [900] 900)
notation Sup ("\<Sqinter>_" [900] 900)
purge_notation bot ("\<bottom>")
notation bot ("\<top>")
purge_notation top ("\<top>")
notation top ("\<bottom>")
purge_syntax
"_INF1" :: "pttrns \<Rightarrow> 'b \<Rightarrow> 'b" ("(3\<Sqinter>_./ _)" [0, 10] 10)
"_INF" :: "pttrn \<Rightarrow> 'a set \<Rightarrow> 'b \<Rightarrow> 'b" ("(3\<Sqinter>_\<in>_./ _)" [0, 0, 10] 10)
"_SUP1" :: "pttrns \<Rightarrow> 'b \<Rightarrow> 'b" ("(3\<Squnion>_./ _)" [0, 10] 10)
"_SUP" :: "pttrn \<Rightarrow> 'a set \<Rightarrow> 'b \<Rightarrow> 'b" ("(3\<Squnion>_\<in>_./ _)" [0, 0, 10] 10)
syntax
"_INF1" :: "pttrns \<Rightarrow> 'b \<Rightarrow> 'b" ("(3\<Squnion>_./ _)" [0, 10] 10)
"_INF" :: "pttrn \<Rightarrow> 'a set \<Rightarrow> 'b \<Rightarrow> 'b" ("(3\<Squnion>_\<in>_./ _)" [0, 0, 10] 10)
"_SUP1" :: "pttrns \<Rightarrow> 'b \<Rightarrow> 'b" ("(3\<Sqinter>_./ _)" [0, 10] 10)
"_SUP" :: "pttrn \<Rightarrow> 'a set \<Rightarrow> 'b \<Rightarrow> 'b" ("(3\<Sqinter>_\<in>_./ _)" [0, 0, 10] 10)
text {* We trivially instantiate our refinement class *}
instance uexpr :: (order, type) refine ..
-- {* Configure transfer law for refinement for the fast relational tactics. *}
theorem upred_ref_iff [uexpr_transfer_laws]:
"(P \<sqsubseteq> Q) = (\<forall>b. \<lbrakk>Q\<rbrakk>\<^sub>e b \<longrightarrow> \<lbrakk>P\<rbrakk>\<^sub>e b)"
apply (transfer)
apply (clarsimp)
done
text {* Next we introduce the lattice operators, which is again done by lifting. *}
instantiation uexpr :: (lattice, type) lattice
begin
lift_definition sup_uexpr :: "('a, 'b) uexpr \<Rightarrow> ('a, 'b) uexpr \<Rightarrow> ('a, 'b) uexpr"
is "\<lambda>P Q A. sup (P A) (Q A)" .
lift_definition inf_uexpr :: "('a, 'b) uexpr \<Rightarrow> ('a, 'b) uexpr \<Rightarrow> ('a, 'b) uexpr"
is "\<lambda>P Q A. inf (P A) (Q A)" .
instance
by (intro_classes) (transfer, auto)+
end
instantiation uexpr :: (bounded_lattice, type) bounded_lattice
begin
lift_definition bot_uexpr :: "('a, 'b) uexpr" is "\<lambda> A. bot" .
lift_definition top_uexpr :: "('a, 'b) uexpr" is "\<lambda> A. top" .
instance
by (intro_classes) (transfer, auto)+
end
instance uexpr :: (distrib_lattice, type) distrib_lattice
by (intro_classes) (transfer, rule ext, auto simp add: sup_inf_distrib1)
text {* Finally we show that predicates form a Boolean algebra (under the lattice operators). *}
instance uexpr :: (boolean_algebra, type) boolean_algebra
apply (intro_classes, unfold uexpr_defs; transfer, rule ext)
apply (simp_all add: sup_inf_distrib1 diff_eq)
done
instantiation uexpr :: (complete_lattice, type) complete_lattice
begin
lift_definition Inf_uexpr :: "('a, 'b) uexpr set \<Rightarrow> ('a, 'b) uexpr"
is "\<lambda> PS A. INF P:PS. P(A)" .
lift_definition Sup_uexpr :: "('a, 'b) uexpr set \<Rightarrow> ('a, 'b) uexpr"
is "\<lambda> PS A. SUP P:PS. P(A)" .
instance
by (intro_classes)
(transfer, auto intro: INF_lower SUP_upper simp add: INF_greatest SUP_least)+
end
syntax
"_mu" :: "idt \<Rightarrow> logic \<Rightarrow> logic" ("\<mu> _ \<bullet> _" [0, 10] 10)
"_nu" :: "idt \<Rightarrow> logic \<Rightarrow> logic" ("\<nu> _ \<bullet> _" [0, 10] 10)
translations
"\<nu> X \<bullet> P" == "CONST lfp (\<lambda> X. P)"
"\<mu> X \<bullet> P" == "CONST gfp (\<lambda> X. P)"
instance uexpr :: (complete_distrib_lattice, type) complete_distrib_lattice
apply (intro_classes)
unfolding INF_def
apply (transfer, rule ext, auto)
using sup_INF apply fastforce
unfolding SUP_def
apply (transfer, rule ext, auto)
using inf_SUP apply fastforce
done
instance uexpr :: (complete_boolean_algebra, type) complete_boolean_algebra ..
text {* With the lattice operators defined, we can proceed to give definitions for the
standard predicate operators in terms of them. *}
definition "true_upred = (top :: '\<alpha> upred)"
definition "false_upred = (bot :: '\<alpha> upred)"
definition "conj_upred = (inf :: '\<alpha> upred \<Rightarrow> '\<alpha> upred \<Rightarrow> '\<alpha> upred)"
definition "disj_upred = (sup :: '\<alpha> upred \<Rightarrow> '\<alpha> upred \<Rightarrow> '\<alpha> upred)"
definition "not_upred = (uminus :: '\<alpha> upred \<Rightarrow> '\<alpha> upred)"
definition "diff_upred = (minus :: '\<alpha> upred \<Rightarrow> '\<alpha> upred \<Rightarrow> '\<alpha> upred)"
abbreviation Conj_upred :: "'\<alpha> upred set \<Rightarrow> '\<alpha> upred" ("\<And>_" [900] 900) where
"\<And> A \<equiv> \<Squnion> A"
abbreviation Disj_upred :: "'\<alpha> upred set \<Rightarrow> '\<alpha> upred" ("\<Or>_" [900] 900) where
"\<Or> A \<equiv> \<Sqinter> A"
notation
conj_upred (infixr "\<and>\<^sub>p" 35) and
disj_upred (infixr "\<or>\<^sub>p" 30)
lift_definition USUP :: "('a \<Rightarrow> '\<alpha> upred) \<Rightarrow> ('a \<Rightarrow> ('b::complete_lattice, '\<alpha>) uexpr) \<Rightarrow> ('b, '\<alpha>) uexpr"
is "\<lambda> P F b. Sup {\<lbrakk>F x\<rbrakk>\<^sub>eb | x. \<lbrakk>P x\<rbrakk>\<^sub>eb}" .
lift_definition UINF :: "('a \<Rightarrow> '\<alpha> upred) \<Rightarrow> ('a \<Rightarrow> ('b::complete_lattice, '\<alpha>) uexpr) \<Rightarrow> ('b, '\<alpha>) uexpr"
is "\<lambda> P F b. Inf {\<lbrakk>F x\<rbrakk>\<^sub>eb | x. \<lbrakk>P x\<rbrakk>\<^sub>eb}" .
declare USUP_def [upred_defs]
declare UINF_def [upred_defs]
syntax
"_USup" :: "idt \<Rightarrow> logic \<Rightarrow> logic" ("\<Sqinter> _ \<bullet> _" [0, 10] 10)
"_USup_mem" :: "idt \<Rightarrow> logic \<Rightarrow> logic \<Rightarrow> logic" ("\<Sqinter> _ \<in> _ \<bullet> _" [0, 10] 10)
"_USUP" :: "idt \<Rightarrow> logic \<Rightarrow> logic \<Rightarrow> logic" ("\<Sqinter> _ | _ \<bullet> _" [0, 0, 10] 10)
"_UInf" :: "idt \<Rightarrow> logic \<Rightarrow> logic" ("\<Squnion> _ \<bullet> _" [0, 10] 10)
"_UInf_mem" :: "idt \<Rightarrow> logic \<Rightarrow> logic \<Rightarrow> logic" ("\<Squnion> _ \<in> _ \<bullet> _" [0, 10] 10)
"_UINF" :: "idt \<Rightarrow> logic \<Rightarrow> logic \<Rightarrow> logic" ("\<Squnion> _ | _ \<bullet> _" [0, 10] 10)
translations
"\<Sqinter> x | P \<bullet> F" => "CONST USUP (\<lambda> x. P) (\<lambda> x. F)"
"\<Sqinter> x \<bullet> F" == "\<Sqinter> x | true \<bullet> F"
"\<Sqinter> x \<in> A \<bullet> F" => "\<Sqinter> x | \<guillemotleft>x\<guillemotright> \<in>\<^sub>u \<guillemotleft>A\<guillemotright> \<bullet> F"
"\<Sqinter> x | P \<bullet> F" <= "CONST USUP (\<lambda> x. P) (\<lambda> y. F)"
"\<Sqinter> x | P \<bullet> F(x)" <= "CONST USUP (\<lambda> x. P) F"
"\<Squnion> x | P \<bullet> F" => "CONST UINF (\<lambda> x. P) (\<lambda> x. F)"
"\<Squnion> x \<bullet> F" == "\<Squnion> x | true \<bullet> F"
"\<Squnion> x \<in> A \<bullet> F" => "\<Squnion> x | \<guillemotleft>x\<guillemotright> \<in>\<^sub>u \<guillemotleft>A\<guillemotright> \<bullet> F"
"\<Squnion> x | P \<bullet> F" <= "CONST UINF (\<lambda> x. P) (\<lambda> y. F)"
"\<Squnion> x | P \<bullet> F(x)" <= "CONST UINF (\<lambda> x. P) F"
text {* We also define the other predicate operators *}
lift_definition impl::"'\<alpha> upred \<Rightarrow> '\<alpha> upred \<Rightarrow> '\<alpha> upred" is
"\<lambda> P Q A. P A \<longrightarrow> Q A" .
lift_definition iff_upred ::"'\<alpha> upred \<Rightarrow> '\<alpha> upred \<Rightarrow> '\<alpha> upred" is
"\<lambda> P Q A. P A \<longleftrightarrow> Q A" .
lift_definition ex :: "('a, '\<alpha>) uvar \<Rightarrow> '\<alpha> upred \<Rightarrow> '\<alpha> upred" is
"\<lambda> x P b. (\<exists> v. P(put\<^bsub>x\<^esub> b v))" .
lift_definition shEx ::"['\<beta> \<Rightarrow>'\<alpha> upred] \<Rightarrow> '\<alpha> upred" is
"\<lambda> P A. \<exists> x. (P x) A" .
lift_definition all :: "('a, '\<alpha>) uvar \<Rightarrow> '\<alpha> upred \<Rightarrow> '\<alpha> upred" is
"\<lambda> x P b. (\<forall> v. P(put\<^bsub>x\<^esub> b v))" .
lift_definition shAll ::"['\<beta> \<Rightarrow>'\<alpha> upred] \<Rightarrow> '\<alpha> upred" is
"\<lambda> P A. \<forall> x. (P x) A" .
text {* We have to add a u subscript to the closure operator as I don't want to override the syntax
for HOL lists (we'll be using them later). *}
lift_definition closure::"'\<alpha> upred \<Rightarrow> '\<alpha> upred" ("[_]\<^sub>u") is
"\<lambda> P A. \<forall>A'. P A'" .
lift_definition taut :: "'\<alpha> upred \<Rightarrow> bool" ("`_`")
is "\<lambda> P. \<forall> A. P A" .
-- {* Configuration for UTP tactics (see @{theory utp_tactics}). *}
update_uexpr_rep_eq_thms -- {* Reread @{text rep_eq} theorems. *}
declare utp_pred.taut.rep_eq [upred_defs]
adhoc_overloading
utrue "true_upred" and
ufalse "false_upred" and
unot "not_upred" and
uconj "conj_upred" and
udisj "disj_upred" and
uimpl impl and
uiff iff_upred and
uex ex and
uall all and
ushEx shEx and
ushAll shAll
syntax
"_uneq" :: "logic \<Rightarrow> logic \<Rightarrow> logic" (infixl "\<noteq>\<^sub>u" 50)
"_unmem" :: "('a, '\<alpha>) uexpr \<Rightarrow> ('a set, '\<alpha>) uexpr \<Rightarrow> (bool, '\<alpha>) uexpr" (infix "\<notin>\<^sub>u" 50)
translations
"x \<noteq>\<^sub>u y" == "CONST unot (x =\<^sub>u y)"
"x \<notin>\<^sub>u A" == "CONST unot (CONST bop (op \<in>) x A)"
declare true_upred_def [upred_defs]
declare false_upred_def [upred_defs]
declare conj_upred_def [upred_defs]
declare disj_upred_def [upred_defs]
declare not_upred_def [upred_defs]
declare diff_upred_def [upred_defs]
declare subst_upd_uvar_def [upred_defs]
declare unrest_usubst_def [upred_defs]
declare uexpr_defs [upred_defs]
lemma true_alt_def: "true = \<guillemotleft>True\<guillemotright>"
by (pred_auto)
declare true_alt_def[THEN sym,lit_simps]
declare false_alt_def[THEN sym,lit_simps]
abbreviation cond ::
"('a,'\<alpha>) uexpr \<Rightarrow> '\<alpha> upred \<Rightarrow> ('a,'\<alpha>) uexpr \<Rightarrow> ('a,'\<alpha>) uexpr"
("(3_ \<triangleleft> _ \<triangleright>/ _)" [52,0,53] 52)
where "P \<triangleleft> b \<triangleright> Q \<equiv> trop If b P Q"
subsection {* Unrestriction Laws *}
lemma unrest_true [unrest]: "x \<sharp> true"
by (pred_auto)
lemma unrest_false [unrest]: "x \<sharp> false"
by (pred_auto)
lemma unrest_conj [unrest]: "\<lbrakk> x \<sharp> (P :: '\<alpha> upred); x \<sharp> Q \<rbrakk> \<Longrightarrow> x \<sharp> P \<and> Q"
by (pred_auto)
lemma unrest_disj [unrest]: "\<lbrakk> x \<sharp> (P :: '\<alpha> upred); x \<sharp> Q \<rbrakk> \<Longrightarrow> x \<sharp> P \<or> Q"
by (pred_auto)
lemma unrest_USUP [unrest]:
"\<lbrakk> (\<And> i. x \<sharp> P(i)); (\<And> i. x \<sharp> Q(i)) \<rbrakk> \<Longrightarrow> x \<sharp> (\<Sqinter> i | P(i) \<bullet> Q(i))"
by (pred_auto)
lemma unrest_UINF [unrest]:
"\<lbrakk> (\<And> i. x \<sharp> P(i)); (\<And> i. x \<sharp> Q(i)) \<rbrakk> \<Longrightarrow> x \<sharp> (\<Squnion> i | P(i) \<bullet> Q(i))"
by (pred_auto)
lemma unrest_impl [unrest]: "\<lbrakk> x \<sharp> P; x \<sharp> Q \<rbrakk> \<Longrightarrow> x \<sharp> P \<Rightarrow> Q"
by (pred_auto)
lemma unrest_iff [unrest]: "\<lbrakk> x \<sharp> P; x \<sharp> Q \<rbrakk> \<Longrightarrow> x \<sharp> P \<Leftrightarrow> Q"
by (pred_auto)
lemma unrest_not [unrest]: "x \<sharp> (P :: '\<alpha> upred) \<Longrightarrow> x \<sharp> (\<not> P)"
by (pred_auto)
text {* The sublens proviso can be thought of as membership below. *}
lemma unrest_ex_in [unrest]:
"\<lbrakk> mwb_lens y; x \<subseteq>\<^sub>L y \<rbrakk> \<Longrightarrow> x \<sharp> (\<exists> y \<bullet> P)"
by (pred_auto)
declare sublens_refl [simp]
declare lens_plus_ub [simp]
declare lens_plus_right_sublens [simp]
declare comp_wb_lens [simp]
declare comp_mwb_lens [simp]
declare plus_mwb_lens [simp]
lemma unrest_ex_diff [unrest]:
assumes "x \<bowtie> y" "y \<sharp> P"
shows "y \<sharp> (\<exists> x \<bullet> P)"
using assms
apply (pred_auto)
using lens_indep_comm apply fastforce+
done
lemma unrest_all_in [unrest]:
"\<lbrakk> mwb_lens y; x \<subseteq>\<^sub>L y \<rbrakk> \<Longrightarrow> x \<sharp> (\<forall> y \<bullet> P)"
by (pred_auto)
lemma unrest_all_diff [unrest]:
assumes "x \<bowtie> y" "y \<sharp> P"
shows "y \<sharp> (\<forall> x \<bullet> P)"
using assms
by (pred_simp, simp_all add: lens_indep_comm)
lemma unrest_shEx [unrest]:
assumes "\<And> y. x \<sharp> P(y)"
shows "x \<sharp> (\<^bold>\<exists> y \<bullet> P(y))"
using assms by (pred_auto)
lemma unrest_shAll [unrest]:
assumes "\<And> y. x \<sharp> P(y)"
shows "x \<sharp> (\<^bold>\<forall> y \<bullet> P(y))"
using assms by (pred_auto)
lemma unrest_closure [unrest]:
"x \<sharp> [P]\<^sub>u"
by (pred_auto)
subsection {* Substitution Laws *}
text {* Substitution is monotone *}
lemma subst_mono: "P \<sqsubseteq> Q \<Longrightarrow> (\<sigma> \<dagger> P) \<sqsubseteq> (\<sigma> \<dagger> Q)"
by (pred_auto)
lemma subst_true [usubst]: "\<sigma> \<dagger> true = true"
by (pred_auto)
lemma subst_not [usubst]: "\<sigma> \<dagger> (\<not> P) = (\<not> \<sigma> \<dagger> P)"
by (pred_auto)
lemma subst_impl [usubst]: "\<sigma> \<dagger> (P \<Rightarrow> Q) = (\<sigma> \<dagger> P \<Rightarrow> \<sigma> \<dagger> Q)"
by (pred_auto)
lemma subst_iff [usubst]: "\<sigma> \<dagger> (P \<Leftrightarrow> Q) = (\<sigma> \<dagger> P \<Leftrightarrow> \<sigma> \<dagger> Q)"
by (pred_auto)
lemma subst_disj [usubst]: "\<sigma> \<dagger> (P \<or> Q) = (\<sigma> \<dagger> P \<or> \<sigma> \<dagger> Q)"
by (pred_auto)
lemma subst_conj [usubst]: "\<sigma> \<dagger> (P \<and> Q) = (\<sigma> \<dagger> P \<and> \<sigma> \<dagger> Q)"
by (pred_auto)
lemma subst_sup [usubst]: "\<sigma> \<dagger> (P \<sqinter> Q) = (\<sigma> \<dagger> P \<sqinter> \<sigma> \<dagger> Q)"
by (pred_auto)
lemma subst_inf [usubst]: "\<sigma> \<dagger> (P \<squnion> Q) = (\<sigma> \<dagger> P \<squnion> \<sigma> \<dagger> Q)"
by (pred_auto)
lemma subst_USUP [usubst]: "\<sigma> \<dagger> (\<Sqinter> i | P(i) \<bullet> Q(i)) = (\<Sqinter> i | (\<sigma> \<dagger> P(i)) \<bullet> (\<sigma> \<dagger> Q(i)))"
by (pred_auto)
lemma subst_UINF [usubst]: "\<sigma> \<dagger> (\<Squnion> i | P(i) \<bullet> Q(i)) = (\<Squnion> i | (\<sigma> \<dagger> P(i)) \<bullet> (\<sigma> \<dagger> Q(i)))"
by (pred_auto)
lemma subst_closure [usubst]: "\<sigma> \<dagger> [P]\<^sub>u = [P]\<^sub>u"
by (pred_auto)
lemma subst_shEx [usubst]: "\<sigma> \<dagger> (\<^bold>\<exists> x \<bullet> P(x)) = (\<^bold>\<exists> x \<bullet> \<sigma> \<dagger> P(x))"
by (pred_auto)
lemma subst_shAll [usubst]: "\<sigma> \<dagger> (\<^bold>\<forall> x \<bullet> P(x)) = (\<^bold>\<forall> x \<bullet> \<sigma> \<dagger> P(x))"
by (pred_auto)
text {* TODO: Generalise the quantifier substitution laws to n-ary substitutions *}
lemma subst_ex_same [usubst]:
assumes "mwb_lens x"
shows "(\<exists> x \<bullet> P)\<lbrakk>v/x\<rbrakk> = (\<exists> x \<bullet> P)"
by (simp add: assms id_subst subst_unrest unrest_ex_in)
lemma subst_ex_indep [usubst]:
assumes "x \<bowtie> y" "y \<sharp> v"
shows "(\<exists> y \<bullet> P)\<lbrakk>v/x\<rbrakk> = (\<exists> y \<bullet> P\<lbrakk>v/x\<rbrakk>)"
using assms
apply (pred_auto)
using lens_indep_comm apply fastforce+
done
lemma subst_all_same [usubst]:
assumes "mwb_lens x"
shows "(\<forall> x \<bullet> P)\<lbrakk>v/x\<rbrakk> = (\<forall> x \<bullet> P)"
by (simp add: assms id_subst subst_unrest unrest_all_in)
lemma subst_all_indep [usubst]:
assumes "x \<bowtie> y" "y \<sharp> v"
shows "(\<forall> y \<bullet> P)\<lbrakk>v/x\<rbrakk> = (\<forall> y \<bullet> P\<lbrakk>v/x\<rbrakk>)"
using assms
by (pred_simp, simp_all add: lens_indep_comm)
subsection {* Predicate Laws *}
text {* Showing that predicates form a Boolean Algebra (under the predicate operators) gives us
many useful laws. *}
interpretation boolean_algebra diff_upred not_upred conj_upred "op \<le>" "op <"
disj_upred false_upred true_upred
by (unfold_locales; pred_auto)
lemma taut_true [simp]: "`true`"
by (pred_auto)
lemma refBy_order: "P \<sqsubseteq> Q = `Q \<Rightarrow> P`"
by (pred_auto)
lemma conj_idem [simp]: "((P::'\<alpha> upred) \<and> P) = P"
by (pred_auto)
lemma disj_idem [simp]: "((P::'\<alpha> upred) \<or> P) = P"
by (pred_auto)
lemma conj_comm: "((P::'\<alpha> upred) \<and> Q) = (Q \<and> P)"
by (pred_auto)
lemma disj_comm: "((P::'\<alpha> upred) \<or> Q) = (Q \<or> P)"
by (pred_auto)
lemma conj_subst: "P = R \<Longrightarrow> ((P::'\<alpha> upred) \<and> Q) = (R \<and> Q)"
by (pred_auto)
lemma conj_assoc:"(((P::'\<alpha> upred) \<and> Q) \<and> S) = (P \<and> (Q \<and> S))"
by (pred_auto)
lemma disj_assoc:"(((P::'\<alpha> upred) \<or> Q) \<or> S) = (P \<or> (Q \<or> S))"
by (pred_auto)
lemma conj_disj_abs:"((P::'\<alpha> upred) \<and> (P \<or> Q)) = P"
by (pred_auto)
lemma disj_conj_abs:"((P::'\<alpha> upred) \<or> (P \<and> Q)) = P"
by (pred_auto)
lemma conj_disj_distr:"((P::'\<alpha> upred) \<and> (Q \<or> R)) = ((P \<and> Q) \<or> (P \<and> R))"
by (pred_auto)
lemma disj_conj_distr:"((P::'\<alpha> upred) \<or> (Q \<and> R)) = ((P \<or> Q) \<and> (P \<or> R))"
by (pred_auto)
lemma true_disj_zero [simp]:
"(P \<or> true) = true" "(true \<or> P) = true"
by (pred_auto)+
lemma true_conj_zero [simp]:
"(P \<and> false) = false" "(false \<and> P) = false"
by (pred_auto)+
lemma imp_vacuous [simp]: "(false \<Rightarrow> u) = true"
by (pred_auto)
lemma imp_true [simp]: "(p \<Rightarrow> true) = true"
by (pred_auto)
lemma true_imp [simp]: "(true \<Rightarrow> p) = p"
by (pred_auto)
lemma p_and_not_p [simp]: "(P \<and> \<not> P) = false"
by (pred_auto)
lemma p_or_not_p [simp]: "(P \<or> \<not> P) = true"
by (pred_auto)
lemma p_imp_p [simp]: "(P \<Rightarrow> P) = true"
by (pred_auto)
lemma p_iff_p [simp]: "(P \<Leftrightarrow> P) = true"
by (pred_auto)
lemma p_imp_false [simp]: "(P \<Rightarrow> false) = (\<not> P)"
by (pred_auto)
lemma not_conj_deMorgans [simp]: "(\<not> ((P::'\<alpha> upred) \<and> Q)) = ((\<not> P) \<or> (\<not> Q))"
by (pred_auto)
lemma not_disj_deMorgans [simp]: "(\<not> ((P::'\<alpha> upred) \<or> Q)) = ((\<not> P) \<and> (\<not> Q))"
by (pred_auto)
lemma conj_disj_not_abs [simp]: "((P::'\<alpha> upred) \<and> ((\<not>P) \<or> Q)) = (P \<and> Q)"
by (pred_auto)
lemma subsumption1:
"`P \<Rightarrow> Q` \<Longrightarrow> (P \<or> Q) = Q"
by (pred_auto)
lemma subsumption2:
"`Q \<Rightarrow> P` \<Longrightarrow> (P \<or> Q) = P"
by (pred_auto)
lemma neg_conj_cancel1: "(\<not> P \<and> (P \<or> Q)) = (\<not> P \<and> Q :: '\<alpha> upred)"
by (pred_auto)
lemma neg_conj_cancel2: "(\<not> Q \<and> (P \<or> Q)) = (\<not> Q \<and> P :: '\<alpha> upred)"
by (pred_auto)
lemma double_negation [simp]: "(\<not> \<not> (P::'\<alpha> upred)) = P"
by (pred_auto)
lemma true_not_false [simp]: "true \<noteq> false" "false \<noteq> true"
by (pred_auto)+
lemma closure_conj_distr: "([P]\<^sub>u \<and> [Q]\<^sub>u) = [P \<and> Q]\<^sub>u"
by (pred_auto)
lemma closure_imp_distr: "`[P \<Rightarrow> Q]\<^sub>u \<Rightarrow> [P]\<^sub>u \<Rightarrow> [Q]\<^sub>u`"
by (pred_auto)
lemma uinf_or:
fixes P Q :: "'\<alpha> upred"
shows "(P \<sqinter> Q) = (P \<or> Q)"
by (pred_auto)
lemma usup_and:
fixes P Q :: "'\<alpha> upred"
shows "(P \<squnion> Q) = (P \<and> Q)"
by (pred_auto)
lemma USUP_cong_eq:
"\<lbrakk> \<And> x. P\<^sub>1(x) = P\<^sub>2(x); \<And> x. `P\<^sub>1(x) \<Rightarrow> Q\<^sub>1(x) =\<^sub>u Q\<^sub>2(x)` \<rbrakk> \<Longrightarrow>
(\<Sqinter> x | P\<^sub>1(x) \<bullet> Q\<^sub>1(x)) = (\<Sqinter> x | P\<^sub>2(x) \<bullet> Q\<^sub>2(x))"
by (unfold USUP_def, pred_simp, metis)
lemma USUP_as_Sup: "(\<Sqinter> P \<in> \<P> \<bullet> P) = \<Sqinter> \<P>"
apply (simp add: upred_defs bop.rep_eq lit.rep_eq Sup_uexpr_def)
apply (pred_simp)
apply (simp add: setcompr_eq_image)
done
lemma USUP_as_Sup_collect: "(\<Sqinter>P\<in>A \<bullet> f(P)) = (\<Sqinter>P\<in>A. f(P))"
apply (simp add: upred_defs bop.rep_eq lit.rep_eq Sup_uexpr_def )
apply (pred_simp)
apply (simp add: Setcompr_eq_image )
unfolding SUP_def apply transfer apply auto
done
lemma USUP_as_Sup_image: "(\<Sqinter> P | \<guillemotleft>P\<guillemotright> \<in>\<^sub>u \<guillemotleft>A\<guillemotright> \<bullet> f(P)) = \<Sqinter> (f ` A)"
apply (simp add: upred_defs bop.rep_eq lit.rep_eq Sup_uexpr_def)
apply (pred_simp)
apply (simp add: Setcompr_eq_image )
done
lemma UINF_as_Inf: "(\<Squnion> P \<in> \<P> \<bullet> P) = \<Squnion> \<P>"
apply (simp add: upred_defs bop.rep_eq lit.rep_eq Inf_uexpr_def)
apply (pred_simp)
apply (simp add: Setcompr_eq_image )
done
lemma UINF_as_Inf_collect: "(\<Squnion>P\<in>A \<bullet> f(P)) = (\<Squnion>P\<in>A. f(P))"
apply (simp add: upred_defs bop.rep_eq lit.rep_eq Sup_uexpr_def)
apply (pred_simp)
apply (simp add: Setcompr_eq_image)
unfolding INF_def apply transfer apply auto
done
lemma UINF_as_Inf_image: "(\<Squnion> P \<in> \<P> \<bullet> f(P)) = \<Squnion> (f ` \<P>)"
apply (simp add: upred_defs bop.rep_eq lit.rep_eq Inf_uexpr_def)
apply (pred_simp)
apply (simp add: Setcompr_eq_image)
done
lemma USUP_image_eq [simp]: "USUP (\<lambda>i. \<guillemotleft>i\<guillemotright> \<in>\<^sub>u \<guillemotleft>f ` A\<guillemotright>) g = (\<Sqinter> i\<in>A \<bullet> g(f(i)))"
by (pred_simp, rule_tac cong[of Sup Sup], auto)
lemma UINF_image_eq [simp]: "UINF (\<lambda>i. \<guillemotleft>i\<guillemotright> \<in>\<^sub>u \<guillemotleft>f ` A\<guillemotright>) g = (\<Squnion> i\<in>A \<bullet> g(f(i)))"
by (pred_simp, rule_tac cong[of Inf Inf], auto)
lemma not_USUP: "(\<not> (\<Sqinter> i\<in>A\<bullet> P(i))) = (\<Squnion> i\<in>A\<bullet> \<not> P(i))"
by (pred_auto)
lemma not_UINF: "(\<not> (\<Squnion> i\<in>A\<bullet> P(i))) = (\<Sqinter> i\<in>A\<bullet> \<not> P(i))"
by (pred_auto)
lemma USUP_empty [simp]: "(\<Sqinter> i \<in> {} \<bullet> P(i)) = false"
by (pred_auto)
lemma USUP_insert [simp]: "(\<Sqinter> i\<in>insert x xs \<bullet> P(i)) = (P(x) \<sqinter> (\<Sqinter> i\<in>xs \<bullet> P(i)))"
apply (pred_simp)
apply (subst Sup_insert[THEN sym])
apply (rule_tac cong[of Sup Sup])
apply (auto)
done
lemma UINF_empty [simp]: "(\<Squnion> i \<in> {} \<bullet> P(i)) = true"
by (pred_auto)
lemma UINF_insert [simp]: "(\<Squnion> i\<in>insert x xs \<bullet> P(i)) = (P(x) \<squnion> (\<Squnion> i\<in>xs \<bullet> P(i)))"
apply (pred_simp)
apply (subst Inf_insert[THEN sym])
apply (rule_tac cong[of Inf Inf])
apply (auto)
done
lemma conj_USUP_dist:
"(P \<and> (\<Sqinter> Q\<in>S \<bullet> F(Q))) = (\<Sqinter> Q\<in>S \<bullet> P \<and> F(Q))"
by (simp add: upred_defs bop.rep_eq lit.rep_eq, pred_auto)
lemma disj_USUP_dist:
"S \<noteq> {} \<Longrightarrow> (P \<or> (\<Sqinter> Q\<in>S \<bullet> F(Q))) = (\<Sqinter> Q\<in>S \<bullet> P \<or> F(Q))"
by (simp add: upred_defs bop.rep_eq lit.rep_eq, pred_auto)
lemma conj_UINF_dist:
"S \<noteq> {} \<Longrightarrow> (P \<and> (\<Squnion> Q\<in>S \<bullet> F(Q))) = (\<Squnion> Q\<in>S \<bullet> P \<and> F(Q))"
by (subst uexpr_eq_iff, auto simp add: conj_upred_def UINF.rep_eq inf_uexpr.rep_eq bop.rep_eq lit.rep_eq)
lemma UINF_conj_UINF: "((\<Squnion> P \<in> A \<bullet> F(P)) \<and> (\<Squnion> P \<in> A \<bullet> G(P))) = (\<Squnion> P \<in> A \<bullet> F(P) \<and> G(P))"
by (simp add: upred_defs bop.rep_eq lit.rep_eq, pred_auto)
lemma UINF_cong:
assumes "\<And> P. P \<in> A \<Longrightarrow> F(P) = G(P)"
shows "(\<Sqinter> P\<in>A \<bullet> F(P)) = (\<Sqinter> P\<in>A \<bullet> G(P))"
by (simp add: USUP_as_Sup_collect assms)
lemma USUP_cong:
assumes "\<And> P. P \<in> A \<Longrightarrow> F(P) = G(P)"
shows "(\<Squnion> P\<in>A \<bullet> F(P)) = (\<Squnion> P\<in>A \<bullet> G(P))"
by (simp add: UINF_as_Inf_collect assms)
lemma UINF_subset_mono: "A \<subseteq> B \<Longrightarrow> (\<Sqinter> P\<in>B \<bullet> F(P)) \<sqsubseteq> (\<Sqinter> P\<in>A \<bullet> F(P))"
by (simp add: SUP_subset_mono USUP_as_Sup_collect)
lemma USUP_subset_mono: "A \<subseteq> B \<Longrightarrow> (\<Squnion> P\<in>A \<bullet> F(P)) \<sqsubseteq> (\<Squnion> P\<in>B \<bullet> F(P))"
by (simp add: INF_superset_mono UINF_as_Inf_collect)
lemma mu_id: "(\<mu> X \<bullet> X) = true"
by (simp add: antisym gfp_upperbound)
lemma mu_const: "(\<mu> X \<bullet> P) = P"
by (simp add: gfp_unfold mono_def)
lemma nu_id: "(\<nu> X \<bullet> X) = false"
by (simp add: lfp_lowerbound utp_pred.bot.extremum_uniqueI)
lemma nu_const: "(\<nu> X \<bullet> P) = P"
by (simp add: lfp_const)
lemma true_iff [simp]: "(P \<Leftrightarrow> true) = P"
by (pred_auto)
lemma impl_alt_def: "(P \<Rightarrow> Q) = (\<not> P \<or> Q)"
by (pred_auto)
lemma eq_upred_refl [simp]: "(x =\<^sub>u x) = true"
by (pred_auto)
lemma eq_upred_sym: "(x =\<^sub>u y) = (y =\<^sub>u x)"
by (pred_auto)
lemma eq_cong_left:
assumes "vwb_lens x" "$x \<sharp> Q" "$x\<acute> \<sharp> Q" "$x \<sharp> R" "$x\<acute> \<sharp> R"
shows "(($x\<acute> =\<^sub>u $x \<and> Q) = ($x\<acute> =\<^sub>u $x \<and> R)) \<longleftrightarrow> (Q = R)"
using assms
by (pred_simp, (meson mwb_lens_def vwb_lens_mwb weak_lens_def)+)
lemma conj_eq_in_var_subst:
fixes x :: "('a, '\<alpha>) uvar"
assumes "vwb_lens x"
shows "(P \<and> $x =\<^sub>u v) = (P\<lbrakk>v/$x\<rbrakk> \<and> $x =\<^sub>u v)"
using assms
by (pred_simp, (metis vwb_lens_wb wb_lens.get_put)+)
lemma conj_eq_out_var_subst:
fixes x :: "('a, '\<alpha>) uvar"
assumes "vwb_lens x"
shows "(P \<and> $x\<acute> =\<^sub>u v) = (P\<lbrakk>v/$x\<acute>\<rbrakk> \<and> $x\<acute> =\<^sub>u v)"
using assms
by (pred_simp, (metis vwb_lens_wb wb_lens.get_put)+)
lemma conj_pos_var_subst:
assumes "vwb_lens x"
shows "($x \<and> Q) = ($x \<and> Q\<lbrakk>true/$x\<rbrakk>)"
using assms
by (pred_auto, metis (full_types) vwb_lens_wb wb_lens.get_put, metis (full_types) vwb_lens_wb wb_lens.get_put)
lemma conj_neg_var_subst:
assumes "vwb_lens x"
shows "(\<not> $x \<and> Q) = (\<not> $x \<and> Q\<lbrakk>false/$x\<rbrakk>)"
using assms
by (pred_auto, metis (full_types) vwb_lens_wb wb_lens.get_put, metis (full_types) vwb_lens_wb wb_lens.get_put)
lemma le_pred_refl [simp]:
fixes x :: "('a::preorder, '\<alpha>) uexpr"
shows "(x \<le>\<^sub>u x) = true"
by (pred_auto)
lemma shEx_unbound [simp]: "(\<^bold>\<exists> x \<bullet> P) = P"
by (pred_auto)
lemma shEx_bool [simp]: "shEx P = (P True \<or> P False)"
by (pred_simp, metis (full_types))
lemma shEx_commute: "(\<^bold>\<exists> x \<bullet> \<^bold>\<exists> y \<bullet> P x y) = (\<^bold>\<exists> y \<bullet> \<^bold>\<exists> x \<bullet> P x y)"
by (pred_auto)
lemma shEx_cong: "\<lbrakk> \<And> x. P x = Q x \<rbrakk> \<Longrightarrow> shEx P = shEx Q"
by (pred_auto)
lemma shAll_unbound [simp]: "(\<^bold>\<forall> x \<bullet> P) = P"
by (pred_auto)
lemma shAll_bool [simp]: "shAll P = (P True \<and> P False)"
by (pred_simp, metis (full_types))
lemma shAll_cong: "\<lbrakk> \<And> x. P x = Q x \<rbrakk> \<Longrightarrow> shAll P = shAll Q"
by (pred_auto)
lemma upred_eq_true [simp]: "(p =\<^sub>u true) = p"
by (pred_auto)
lemma upred_eq_false [simp]: "(p =\<^sub>u false) = (\<not> p)"
by (pred_auto)
lemma conj_var_subst:
assumes "vwb_lens x"
shows "(P \<and> var x =\<^sub>u v) = (P\<lbrakk>v/x\<rbrakk> \<and> var x =\<^sub>u v)"
using assms
by (pred_simp, (metis (full_types) vwb_lens_def wb_lens.get_put)+)
lemma uvar_assign_exists:
"vwb_lens x \<Longrightarrow> \<exists> v. b = put\<^bsub>x\<^esub> b v"
by (rule_tac x="get\<^bsub>x\<^esub> b" in exI, simp)
lemma uvar_obtain_assign:
assumes "vwb_lens x"
obtains v where "b = put\<^bsub>x\<^esub> b v"
using assms
by (drule_tac uvar_assign_exists[of _ b], auto)
lemma eq_split_subst:
assumes "vwb_lens x"
shows "(P = Q) \<longleftrightarrow> (\<forall> v. P\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<rbrakk> = Q\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<rbrakk>)"
using assms
by (pred_simp, metis uvar_assign_exists)
lemma eq_split_substI:
assumes "vwb_lens x" "\<And> v. P\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<rbrakk> = Q\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<rbrakk>"
shows "P = Q"
using assms(1) assms(2) eq_split_subst by blast
lemma taut_split_subst:
assumes "vwb_lens x"
shows "`P` \<longleftrightarrow> (\<forall> v. `P\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<rbrakk>`)"
using assms
by (pred_simp, metis uvar_assign_exists)
lemma eq_split:
assumes "`P \<Rightarrow> Q`" "`Q \<Rightarrow> P`"
shows "P = Q"
using assms
by (pred_auto)
lemma subst_bool_split:
assumes "vwb_lens x"
shows "`P` = `(P\<lbrakk>false/x\<rbrakk> \<and> P\<lbrakk>true/x\<rbrakk>)`"
proof -
from assms have "`P` = (\<forall> v. `P\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<rbrakk>`)"
by (subst taut_split_subst[of x], auto)
also have "... = (`P\<lbrakk>\<guillemotleft>True\<guillemotright>/x\<rbrakk>` \<and> `P\<lbrakk>\<guillemotleft>False\<guillemotright>/x\<rbrakk>`)"
by (metis (mono_tags, lifting))
also have "... = `(P\<lbrakk>false/x\<rbrakk> \<and> P\<lbrakk>true/x\<rbrakk>)`"
by (pred_auto)
finally show ?thesis .
qed
lemma taut_iff_eq:
"`P \<Leftrightarrow> Q` \<longleftrightarrow> (P = Q)"
by (pred_auto)
lemma subst_eq_replace:
fixes x :: "('a, '\<alpha>) uvar"
shows "(p\<lbrakk>u/x\<rbrakk> \<and> u =\<^sub>u v) = (p\<lbrakk>v/x\<rbrakk> \<and> u =\<^sub>u v)"
by (pred_auto)
lemma exists_twice: "mwb_lens x \<Longrightarrow> (\<exists> x \<bullet> \<exists> x \<bullet> P) = (\<exists> x \<bullet> P)"
by (pred_auto)
lemma all_twice: "mwb_lens x \<Longrightarrow> (\<forall> x \<bullet> \<forall> x \<bullet> P) = (\<forall> x \<bullet> P)"
by (pred_auto)
lemma exists_sub: "\<lbrakk> mwb_lens y; x \<subseteq>\<^sub>L y \<rbrakk> \<Longrightarrow> (\<exists> x \<bullet> \<exists> y \<bullet> P) = (\<exists> y \<bullet> P)"
by (pred_auto)
lemma all_sub: "\<lbrakk> mwb_lens y; x \<subseteq>\<^sub>L y \<rbrakk> \<Longrightarrow> (\<forall> x \<bullet> \<forall> y \<bullet> P) = (\<forall> y \<bullet> P)"
by (pred_auto)
lemma ex_commute:
assumes "x \<bowtie> y"
shows "(\<exists> x \<bullet> \<exists> y \<bullet> P) = (\<exists> y \<bullet> \<exists> x \<bullet> P)"
using assms
apply (pred_auto)
using lens_indep_comm apply fastforce+
done
lemma all_commute:
assumes "x \<bowtie> y"
shows "(\<forall> x \<bullet> \<forall> y \<bullet> P) = (\<forall> y \<bullet> \<forall> x \<bullet> P)"
using assms
apply (pred_auto)
using lens_indep_comm apply fastforce+
done
lemma ex_equiv:
assumes "x \<approx>\<^sub>L y"
shows "(\<exists> x \<bullet> P) = (\<exists> y \<bullet> P)"
using assms
by (pred_simp, metis (no_types, lifting) lens.select_convs(2))
lemma all_equiv:
assumes "x \<approx>\<^sub>L y"
shows "(\<forall> x \<bullet> P) = (\<forall> y \<bullet> P)"
using assms
by (pred_simp, metis (no_types, lifting) lens.select_convs(2))
lemma ex_zero:
"(\<exists> &\<emptyset> \<bullet> P) = P"
by (pred_auto)
lemma all_zero:
"(\<forall> &\<emptyset> \<bullet> P) = P"
by (pred_auto)
lemma ex_plus:
"(\<exists> y;x \<bullet> P) = (\<exists> x \<bullet> \<exists> y \<bullet> P)"
by (pred_auto)
lemma all_plus:
"(\<forall> y;x \<bullet> P) = (\<forall> x \<bullet> \<forall> y \<bullet> P)"
by (pred_auto)
lemma closure_all:
"[P]\<^sub>u = (\<forall> &\<Sigma> \<bullet> P)"
by (pred_auto)
lemma unrest_as_exists:
"vwb_lens x \<Longrightarrow> (x \<sharp> P) \<longleftrightarrow> ((\<exists> x \<bullet> P) = P)"
by (pred_simp, metis vwb_lens.put_eq)
lemma ex_mono: "P \<sqsubseteq> Q \<Longrightarrow> (\<exists> x \<bullet> P) \<sqsubseteq> (\<exists> x \<bullet> Q)"
by (pred_auto)
lemma ex_weakens: "wb_lens x \<Longrightarrow> (\<exists> x \<bullet> P) \<sqsubseteq> P"
by (pred_simp, metis wb_lens.get_put)
lemma all_mono: "P \<sqsubseteq> Q \<Longrightarrow> (\<forall> x \<bullet> P) \<sqsubseteq> (\<forall> x \<bullet> Q)"
by (pred_auto)
lemma all_strengthens: "wb_lens x \<Longrightarrow> P \<sqsubseteq> (\<forall> x \<bullet> P)"
by (pred_simp, metis wb_lens.get_put)
lemma ex_unrest: "x \<sharp> P \<Longrightarrow> (\<exists> x \<bullet> P) = P"
by (pred_auto)
lemma all_unrest: "x \<sharp> P \<Longrightarrow> (\<forall> x \<bullet> P) = P"
by (pred_auto)
lemma not_ex_not: "\<not> (\<exists> x \<bullet> \<not> P) = (\<forall> x \<bullet> P)"
by (pred_auto)
subsection {* Conditional Laws *}
lemma cond_def:
"(P \<triangleleft> b \<triangleright> Q) = ((b \<and> P) \<or> ((\<not> b) \<and> Q))"
by (pred_auto)
lemma cond_idem:"(P \<triangleleft> b \<triangleright> P) = P" by (pred_auto)
lemma cond_symm:"(P \<triangleleft> b \<triangleright> Q) = (Q \<triangleleft> \<not> b \<triangleright> P)" by (pred_auto)
lemma cond_assoc: "((P \<triangleleft> b \<triangleright> Q) \<triangleleft> c \<triangleright> R) = (P \<triangleleft> b \<and> c \<triangleright> (Q \<triangleleft> c \<triangleright> R))" by (pred_auto)
lemma cond_unit_T [simp]:"(P \<triangleleft> true \<triangleright> Q) = P" by (pred_auto)
lemma cond_unit_F [simp]:"(P \<triangleleft> false \<triangleright> Q) = Q" by (pred_auto)
lemma cond_and_T_integrate:
"((P \<and> b) \<or> (Q \<triangleleft> b \<triangleright> R)) = ((P \<or> Q) \<triangleleft> b \<triangleright> R)"
by (pred_auto)
lemma cond_L6: "(P \<triangleleft> b \<triangleright> (Q \<triangleleft> b \<triangleright> R)) = (P \<triangleleft> b \<triangleright> R)" by (pred_auto)
lemma cond_L7: "(P \<triangleleft> b \<triangleright> (P \<triangleleft> c \<triangleright> Q)) = (P \<triangleleft> b \<or> c \<triangleright> Q)" by (pred_auto)
lemma cond_and_distr: "((P \<and> Q) \<triangleleft> b \<triangleright> (R \<and> S)) = ((P \<triangleleft> b \<triangleright> R) \<and> (Q \<triangleleft> b \<triangleright> S))" by (pred_auto)
lemma cond_imp_distr:
"((P \<Rightarrow> Q) \<triangleleft> b \<triangleright> (R \<Rightarrow> S)) = ((P \<triangleleft> b \<triangleright> R) \<Rightarrow> (Q \<triangleleft> b \<triangleright> S))" by (pred_auto)
lemma cond_eq_distr:
"((P \<Leftrightarrow> Q) \<triangleleft> b \<triangleright> (R \<Leftrightarrow> S)) = ((P \<triangleleft> b \<triangleright> R) \<Leftrightarrow> (Q \<triangleleft> b \<triangleright> S))" by (pred_auto)
lemma cond_conj_distr:"(P \<and> (Q \<triangleleft> b \<triangleright> S)) = ((P \<and> Q) \<triangleleft> b \<triangleright> (P \<and> S))" by (pred_auto)
lemma cond_disj_distr:"(P \<or> (Q \<triangleleft> b \<triangleright> S)) = ((P \<or> Q) \<triangleleft> b \<triangleright> (P \<or> S))" by (pred_auto)
lemma cond_neg: "\<not> (P \<triangleleft> b \<triangleright> Q) = ((\<not> P) \<triangleleft> b \<triangleright> (\<not> Q))" by (pred_auto)
lemma cond_conj: "P \<triangleleft> b \<and> c \<triangleright> Q = (P \<triangleleft> c \<triangleright> Q) \<triangleleft> b \<triangleright> Q"
by (pred_auto)
lemma cond_USUP_dist: "(\<Squnion> P\<in>S \<bullet> F(P)) \<triangleleft> b \<triangleright> (\<Squnion> P\<in>S \<bullet> G(P)) = (\<Squnion> P\<in>S \<bullet> F(P) \<triangleleft> b \<triangleright> G(P))"
by (pred_auto)
lemma cond_UINF_dist: "(\<Sqinter> P\<in>S \<bullet> F(P)) \<triangleleft> b \<triangleright> (\<Sqinter> P\<in>S \<bullet> G(P)) = (\<Sqinter> P\<in>S \<bullet> F(P) \<triangleleft> b \<triangleright> G(P))"
by (pred_auto)
lemma cond_var_subst_left:
assumes "vwb_lens x"
shows "(P\<lbrakk>true/x\<rbrakk> \<triangleleft> var x \<triangleright> Q) = (P \<triangleleft> var x \<triangleright> Q)"
using assms by (pred_auto, metis (full_types) vwb_lens_wb wb_lens.get_put)
lemma cond_var_subst_right:
assumes "vwb_lens x"
shows "(P \<triangleleft> var x \<triangleright> Q\<lbrakk>false/x\<rbrakk>) = (P \<triangleleft> var x \<triangleright> Q)"
using assms by (pred_auto, metis (full_types) vwb_lens.put_eq)
lemma cond_var_split:
"vwb_lens x \<Longrightarrow> (P\<lbrakk>true/x\<rbrakk> \<triangleleft> var x \<triangleright> P\<lbrakk>false/x\<rbrakk>) = P"
by (rel_simp, (metis (full_types) vwb_lens.put_eq)+)
subsection {* Cylindric Algebra *}
lemma C1: "(\<exists> x \<bullet> false) = false"
by (pred_auto)
lemma C2: "wb_lens x \<Longrightarrow> `P \<Rightarrow> (\<exists> x \<bullet> P)`"
by (pred_simp, metis wb_lens.get_put)
lemma C3: "mwb_lens x \<Longrightarrow> (\<exists> x \<bullet> (P \<and> (\<exists> x \<bullet> Q))) = ((\<exists> x \<bullet> P) \<and> (\<exists> x \<bullet> Q))"
by (pred_auto)
lemma C4a: "x \<approx>\<^sub>L y \<Longrightarrow> (\<exists> x \<bullet> \<exists> y \<bullet> P) = (\<exists> y \<bullet> \<exists> x \<bullet> P)"
by (pred_simp, metis (no_types, lifting) lens.select_convs(2))+
lemma C4b: "x \<bowtie> y \<Longrightarrow> (\<exists> x \<bullet> \<exists> y \<bullet> P) = (\<exists> y \<bullet> \<exists> x \<bullet> P)"
using ex_commute by blast
lemma C5:
fixes x :: "('a, '\<alpha>) uvar"
shows "(&x =\<^sub>u &x) = true"
by (pred_auto)
lemma C6:
assumes "wb_lens x" "x \<bowtie> y" "x \<bowtie> z"
shows "(&y =\<^sub>u &z) = (\<exists> x \<bullet> &y =\<^sub>u &x \<and> &x =\<^sub>u &z)"
using assms
by (pred_simp, (metis lens_indep_def)+)
lemma C7:
assumes "weak_lens x" "x \<bowtie> y"
shows "((\<exists> x \<bullet> &x =\<^sub>u &y \<and> P) \<and> (\<exists> x \<bullet> &x =\<^sub>u &y \<and> \<not> P)) = false"
using assms
by (pred_simp, simp add: lens_indep_sym)
subsection {* Quantifier Lifting *}
named_theorems uquant_lift
lemma shEx_lift_conj_1 [uquant_lift]:
"((\<^bold>\<exists> x \<bullet> P(x)) \<and> Q) = (\<^bold>\<exists> x \<bullet> P(x) \<and> Q)"
by (pred_auto)
lemma shEx_lift_conj_2 [uquant_lift]:
"(P \<and> (\<^bold>\<exists> x \<bullet> Q(x))) = (\<^bold>\<exists> x \<bullet> P \<and> Q(x))"
by (pred_auto)
end |
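The Boolean-algebra and conditional laws proved above can be spot-checked in a lightweight semantic model. The sketch below (Python, not Isabelle; all names are illustrative, and the finite two-variable state space is an assumption for brute-force checking) treats predicates as functions from a state to `bool`, mirroring the lifted definitions:

```python
from itertools import product

# Predicates as functions from a state (a dict of booleans) to bool,
# mirroring the lifted definitions of conj_upred, disj_upred, not_upred.
def conj(p, q): return lambda s: p(s) and q(s)
def disj(p, q): return lambda s: p(s) or q(s)
def neg(p):     return lambda s: not p(s)

def cond(p, b, q):  # P <| b |> Q
    return lambda s: p(s) if b(s) else q(s)

# Brute-force state space over two boolean variables.
STATES = [dict(zip("xy", bits)) for bits in product([False, True], repeat=2)]

def equal(p, q):  # extensional equality over the finite state space
    return all(p(s) == q(s) for s in STATES)

x = lambda s: s["x"]
y = lambda s: s["y"]

# conj_disj_distr: P /\ (Q \/ R) = (P /\ Q) \/ (P /\ R)
assert equal(conj(x, disj(y, neg(x))), disj(conj(x, y), conj(x, neg(x))))
# not_conj_deMorgans: ~(P /\ Q) = (~P \/ ~Q)
assert equal(neg(conj(x, y)), disj(neg(x), neg(y)))
# cond_def: P <| b |> Q = (b /\ P) \/ (~b /\ Q)
assert equal(cond(x, y, neg(x)), disj(conj(y, x), conj(neg(y), neg(x))))
print("laws hold on the sample space")
```

This is only a sanity model over a finite alphabet; the theory above proves the laws for arbitrary state spaces via `pred_auto`.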
Is it even possible to make homemade Mac-n-Cheese even better? We think so! Our second featured recipe this month is a great way to enjoy our new Marinated Artichoke Hearts while adding a delicious Mediterranean twist to classic comfort food! It makes a perfect side dish for your Easter Sunday or any night of the week! Not to mention, it's O.M.G. kind of good!
In a medium saucepan over medium-high heat, add butter and flour, and stir for 2-3 minutes while the mixture bubbles. Slowly whisk in milk until fully incorporated. Whisk and cook the mixture for about 7 minutes, until it thickens and bubbles.
Turn heat off; stir in Garlic Spread and 1 c. of each cheese. Add salt and pepper to taste.
Pour over cooked macaroni; stir in spinach, artichokes, and remaining shredded cheese.
Place into a baking dish and sprinkle panko topping on top. Place under broiler for a few minutes until breadcrumbs are golden brown. |
inductive Wrapper where
| wrap: Wrapper
def Wrapper.extend: Wrapper → (Unit × Unit)
| .wrap => ((), ())
mutual
inductive Op where
| mk: String → Block → Op
inductive Assign where
| mk : String → Op → Assign
inductive Block where
| mk: Assign β Block
| empty: Block
end
mutual
def runOp: Op → Wrapper
| .mk _ r => let r' := runBlock r; .wrap
def runAssign: Assign → Wrapper
| .mk _ op => runOp op
def runBlock: Block → Wrapper
| .mk a => runAssign a
| .empty => .wrap
end
private def b: Assign := .mk "r" (.mk "APrettyLongString" .empty)
theorem bug: (runAssign b).extend.snd = (runAssign b).extend.snd := by
--unfold b -- extremely slow
sorry
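The `mutual` blocks above pair mutually inductive types with mutually recursive functions over them. A hypothetical Python analogue (illustrative names only, not the Lean code) shows the same shape — three types referring to each other, walked by three functions that call each other:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical analogue of the mutual Lean inductives:
# Op holds a name and a nested Block; Assign holds a name and an Op;
# Block is either an Assign or empty (None plays the role of Block.empty).
@dataclass
class Op:
    name: str
    body: "Block"

@dataclass
class Assign:
    name: str
    op: Op

@dataclass
class Block:
    assign: Optional[Assign]

# Mutually recursive interpreters, mirroring runOp/runAssign/runBlock;
# here they count the nesting depth of Ops instead of returning a Wrapper.
def run_op(op: Op) -> int:
    return 1 + run_block(op.body)

def run_assign(a: Assign) -> int:
    return run_op(a.op)

def run_block(b: Block) -> int:
    return 0 if b.assign is None else run_assign(b.assign)

prog = Assign("r", Op("APrettyLongString", Block(None)))
print(run_assign(prog))  # one Op in the tree: prints 1
```

In Lean the `mutual ... end` grouping is what lets the termination checker see the three functions as one recursive definition; Python has no such check, so the analogy is structural only.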
|
! :::
! ::: ------------------------------------------------------------------
! :::
!===========================================================================
! This is called from within threaded loops in advance_gas_tile so *no* OMP here ...
!===========================================================================
subroutine add_grav_source(uin,uin_l1,uin_l2,uin_l3,uin_h1,uin_h2,uin_h3, &
uout,uout_l1,uout_l2,uout_l3,uout_h1,uout_h2,uout_h3, &
grav, gv_l1, gv_l2, gv_l3, gv_h1, gv_h2, gv_h3, &
lo,hi,dx,dy,dz,dt,a_old,a_new,e_added,ke_added)
use amrex_fort_module, only : rt => amrex_real
use eos_module
use meth_params_module, only : NVAR, URHO, UMX, UMY, UMZ, &
UEDEN, grav_source_type
implicit none
integer lo(3), hi(3)
integer uin_l1,uin_l2,uin_l3,uin_h1,uin_h2,uin_h3
integer uout_l1, uout_l2, uout_l3, uout_h1, uout_h2, uout_h3
integer gv_l1, gv_l2, gv_l3, gv_h1, gv_h2, gv_h3
real(rt) uin( uin_l1: uin_h1, uin_l2: uin_h2, uin_l3: uin_h3,NVAR)
real(rt) uout(uout_l1:uout_h1,uout_l2:uout_h2,uout_l3:uout_h3,NVAR)
real(rt) grav( gv_l1: gv_h1, gv_l2: gv_h2, gv_l3: gv_h3,3)
real(rt) dx, dy, dz, dt
real(rt) a_old, a_new
real(rt) e_added,ke_added
real(rt) :: a_half, a_oldsq, a_newsq, a_newsq_inv
real(rt) :: rho
real(rt) :: SrU, SrV, SrW, SrE
real(rt) :: rhoInv, dt_a_new
real(rt) :: old_rhoeint, new_rhoeint, old_ke, new_ke
integer :: i, j, k
a_half = 0.5d0 * (a_old + a_new)
a_oldsq = a_old * a_old
a_newsq = a_new * a_new
a_newsq_inv = 1.d0 / a_newsq
dt_a_new = dt / a_new
! Gravitational source options for how to add the work to (rho E):
! grav_source_type =
! 1: Original version ("does work")
! 3: Puts all gravitational work into KE, not (rho e)
! Add gravitational source terms
do k = lo(3),hi(3)
do j = lo(2),hi(2)
do i = lo(1),hi(1)
! **** Start Diagnostics ****
old_ke = 0.5d0 * (uout(i,j,k,UMX)**2 + uout(i,j,k,UMY)**2 + uout(i,j,k,UMZ)**2) / &
uout(i,j,k,URHO)
old_rhoeint = uout(i,j,k,UEDEN) - old_ke
! **** End Diagnostics ****
rho = uin(i,j,k,URHO)
rhoInv = 1.0d0 / rho
SrU = rho * grav(i,j,k,1)
SrV = rho * grav(i,j,k,2)
SrW = rho * grav(i,j,k,3)
! We use a_new here because we think of d/dt(a rho u) = ... + (rho g)
uout(i,j,k,UMX) = uout(i,j,k,UMX) + SrU * dt_a_new
uout(i,j,k,UMY) = uout(i,j,k,UMY) + SrV * dt_a_new
uout(i,j,k,UMZ) = uout(i,j,k,UMZ) + SrW * dt_a_new
if (grav_source_type .eq. 1) then
! This does work (in 1-d)
! Src = rho u dot g, evaluated with all quantities at t^n
SrE = uin(i,j,k,UMX) * grav(i,j,k,1) + &
uin(i,j,k,UMY) * grav(i,j,k,2) + &
uin(i,j,k,UMZ) * grav(i,j,k,3)
uout(i,j,k,UEDEN) = (a_newsq*uout(i,j,k,UEDEN) + SrE * (dt*a_half)) * a_newsq_inv
else if (grav_source_type .eq. 3) then
new_ke = 0.5d0 * (uout(i,j,k,UMX)**2 + uout(i,j,k,UMY)**2 + uout(i,j,k,UMZ)**2) / &
uout(i,j,k,URHO)
uout(i,j,k,UEDEN) = old_rhoeint + new_ke
else
call bl_error("Error:: Nyx_advection_3d.f90 :: bogus grav_source_type")
end if
! **** Start Diagnostics ****
! This is the new (rho e) as stored in (rho E) after the gravitational work is added
new_ke = 0.5d0 * (uout(i,j,k,UMX)**2 + uout(i,j,k,UMY)**2 + uout(i,j,k,UMZ)**2) / &
uout(i,j,k,URHO)
new_rhoeint = uout(i,j,k,UEDEN) - new_ke
e_added = e_added + (new_rhoeint - old_rhoeint)
ke_added = ke_added + (new_ke - old_ke )
! **** End Diagnostics ****
enddo
enddo
enddo
! print *,' EADDED ',lo(1),lo(2),lo(3), e_added
! print *,'KEADDED ',lo(1),lo(2),lo(3),ke_added
end subroutine add_grav_source
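For orientation, the two energy-update branches above can be written as a scalar Python sketch. The function name, argument layout, and the test values are illustrative assumptions of this sketch, not part of the Nyx source; the formulas mirror the Fortran locals one-to-one.

```python
def update_energy(uin_mom, uout_mom, rho, rho_E, old_rhoeint, grav,
                  dt, a_old, a_new, grav_source_type):
    """Scalar model of the (rho E) update in add_grav_source.

    uin_mom / uout_mom : momenta (UMX, UMY, UMZ) at t^n / after the update
    rho_E              : current (rho E) in uout
    old_rhoeint        : (rho e) diagnosed before the update
    """
    a_half = 0.5 * (a_old + a_new)
    a_newsq = a_new * a_new
    if grav_source_type == 1:
        # "does work": SrE = rho u . g, all quantities at t^n,
        # with the cosmological a-scaling of the energy equation
        SrE = sum(m * g for m, g in zip(uin_mom, grav))
        return (a_newsq * rho_E + SrE * (dt * a_half)) / a_newsq
    elif grav_source_type == 3:
        # all gravitational work goes into KE; (rho e) is held fixed
        new_ke = 0.5 * sum(m * m for m in uout_mom) / rho
        return old_rhoeint + new_ke
    raise ValueError("bogus grav_source_type")
```

With `a_old = a_new = 1` (no cosmological expansion) the type-1 branch reduces to the familiar `rho_E + (rho u . g) * dt`.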
State Before: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
⊢ multiplicity p (choose (p ^ n) k) + multiplicity p k ≤ ↑n State After: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
hdisj :
  Disjoint (filter (fun i => p ^ i ≤ k % p ^ i + (p ^ n - k) % p ^ i) (Ico 1 (succ n)))
    (filter (fun i => p ^ i ∣ k) (Ico 1 (succ n)))
⊢ multiplicity p (choose (p ^ n) k) + multiplicity p k ≤ ↑n Tactic: have hdisj :
    Disjoint ((Ico 1 n.succ).filter fun i => p ^ i ≤ k % p ^ i + (p ^ n - k) % p ^ i)
      ((Ico 1 n.succ).filter fun i => p ^ i ∣ k) := by
  simp (config := { contextual := true }) [disjoint_right, *, dvd_iff_mod_eq_zero,
    Nat.mod_lt _ (pow_pos hp.pos _)] State Before: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
hdisj :
  Disjoint (filter (fun i => p ^ i ≤ k % p ^ i + (p ^ n - k) % p ^ i) (Ico 1 (succ n)))
    (filter (fun i => p ^ i ∣ k) (Ico 1 (succ n)))
⊢ multiplicity p (choose (p ^ n) k) + multiplicity p k ≤ ↑n State After: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
hdisj :
  Disjoint (filter (fun i => p ^ i ≤ k % p ^ i + (p ^ n - k) % p ^ i) (Ico 1 (succ n)))
    (filter (fun i => p ^ i ∣ k) (Ico 1 (succ n)))
⊢ card (filter (fun x => p ^ x ≤ k % p ^ x + (p ^ n - k) % p ^ x ∨ p ^ x ∣ k) (Ico 1 (succ n))) ≤ n Tactic: rw [multiplicity_choose hp hkn (lt_succ_self _),
  multiplicity_eq_card_pow_dvd (ne_of_gt hp.one_lt) hk0.bot_lt
    (lt_succ_of_le (log_mono_right hkn)),
  ← Nat.cast_add, PartENat.coe_le_coe, log_pow hp.one_lt, ← card_disjoint_union hdisj,
  filter_union_right] State Before: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
hdisj :
  Disjoint (filter (fun i => p ^ i ≤ k % p ^ i + (p ^ n - k) % p ^ i) (Ico 1 (succ n)))
    (filter (fun i => p ^ i ∣ k) (Ico 1 (succ n)))
⊢ card (filter (fun x => p ^ x ≤ k % p ^ x + (p ^ n - k) % p ^ x ∨ p ^ x ∣ k) (Ico 1 (succ n))) ≤ n State After: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
hdisj :
  Disjoint (filter (fun i => p ^ i ≤ k % p ^ i + (p ^ n - k) % p ^ i) (Ico 1 (succ n)))
    (filter (fun i => p ^ i ∣ k) (Ico 1 (succ n)))
filter_le_Ico :
  card (filter (fun x => p ^ x ≤ k % p ^ x + (p ^ n - k) % p ^ x ∨ p ^ x ∣ k) (Ico 1 (succ n))) ≤ card (Ico 1 (succ n))
⊢ card (filter (fun x => p ^ x ≤ k % p ^ x + (p ^ n - k) % p ^ x ∨ p ^ x ∣ k) (Ico 1 (succ n))) ≤ n Tactic: have filter_le_Ico := (Ico 1 n.succ).card_filter_le
  fun x => p ^ x ≤ k % p ^ x + (p ^ n - k) % p ^ x ∨ p ^ x ∣ k State Before: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
hdisj :
  Disjoint (filter (fun i => p ^ i ≤ k % p ^ i + (p ^ n - k) % p ^ i) (Ico 1 (succ n)))
    (filter (fun i => p ^ i ∣ k) (Ico 1 (succ n)))
filter_le_Ico :
  card (filter (fun x => p ^ x ≤ k % p ^ x + (p ^ n - k) % p ^ x ∨ p ^ x ∣ k) (Ico 1 (succ n))) ≤ card (Ico 1 (succ n))
⊢ card (filter (fun x => p ^ x ≤ k % p ^ x + (p ^ n - k) % p ^ x ∨ p ^ x ∣ k) (Ico 1 (succ n))) ≤ n State After: no goals Tactic: rwa [card_Ico 1 n.succ] at filter_le_Ico State Before: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
⊢ Disjoint (filter (fun i => p ^ i ≤ k % p ^ i + (p ^ n - k) % p ^ i) (Ico 1 (succ n)))
    (filter (fun i => p ^ i ∣ k) (Ico 1 (succ n))) State After: no goals Tactic: simp (config := { contextual := true }) [disjoint_right, *, dvd_iff_mod_eq_zero,
  Nat.mod_lt _ (pow_pos hp.pos _)] State Before: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
⊢ ↑n ≤ multiplicity p (choose (p ^ n) k) + multiplicity p k State After: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
⊢ multiplicity p (p ^ n) ≤ multiplicity p (choose (p ^ n) k) + multiplicity p k Tactic: rw [← hp.multiplicity_pow_self] State Before: p n k : ℕ
hp : Prime p
hkn : k ≤ p ^ n
hk0 : k ≠ 0
⊢ multiplicity p (p ^ n) ≤ multiplicity p (choose (p ^ n) k) + multiplicity p k State After: no goals Tactic: exact multiplicity_le_multiplicity_choose_add hp _ _
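Taken together, the two directions traced above pin down the valuation exactly; in conventional notation the formalized statement reads:

```latex
\nu_p\!\binom{p^{n}}{k} + \nu_p(k) \;=\; n
\qquad (p \text{ prime},\ 0 < k \le p^{n}),
```

i.e. the $p$-adic valuation of the binomial coefficient is exactly $n - \nu_p(k)$.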
lemma (in ring_of_sets) range_disjointed_sets: assumes A: "range A \<subseteq> M" shows "range (disjointed A) \<subseteq> M"
(*<*)
theory SPRViewNonDet
imports
SPRView
KBPsAuto
begin
(*>*)
subsection\<open>Perfect Recall in Non-deterministic Broadcast Environments\<close>
text_raw\<open>
\begin{figure}[ht]
\begin{isabellebody}%
\<close>
record ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState =
es :: "'es"
ps :: "'a \<Rightarrow> 'ps"
pubActs :: "'ePubAct \<times> ('a \<Rightarrow> 'pPubAct)"
locale FiniteBroadcastEnvironment =
Environment jkbp envInit envAction envTrans envVal envObs
for jkbp :: "('a :: finite, 'p, ('pPubAct :: finite \<times> 'ps :: finite)) JKBP"
and envInit
:: "('a, 'ePubAct :: finite, 'es :: finite, 'pPubAct, 'ps) BEState list"
and envAction :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState
\<Rightarrow> ('ePubAct \<times> 'ePrivAct) list"
and envTrans :: "('ePubAct \<times> 'ePrivAct)
\<Rightarrow> ('a \<Rightarrow> ('pPubAct \<times> 'ps))
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState"
and envVal :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState \<Rightarrow> 'p \<Rightarrow> bool"
and envObs :: "'a \<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState
\<Rightarrow> ('cobs \<times> 'ps \<times> ('ePubAct \<times> ('a \<Rightarrow> 'pPubAct)))"
+ fixes envObsC :: "'es \<Rightarrow> 'cobs"
and envActionES :: "'es \<Rightarrow> ('ePubAct \<times> ('a \<Rightarrow> 'pPubAct))
\<Rightarrow> ('ePubAct \<times> 'ePrivAct) list"
and envTransES :: "('ePubAct \<times> 'ePrivAct) \<Rightarrow> ('a \<Rightarrow> 'pPubAct)
\<Rightarrow> 'es \<Rightarrow> 'es"
defines envObs_def: "envObs a \<equiv> (\<lambda>s. (envObsC (es s), ps s a, pubActs s))"
and envAction_def: "envAction s \<equiv> envActionES (es s) (pubActs s)"
and envTrans_def:
"envTrans eact aact s \<equiv> \<lparr> es = envTransES eact (fst \<circ> aact) (es s)
, ps = snd \<circ> aact
, pubActs = (fst eact, fst \<circ> aact) \<rparr>"
text_raw\<open>
\end{isabellebody}%
\caption{Finite broadcast environments with non-deterministic KBPs.}
\label{fig:kbps-theory-broadcast-envs}
\end{figure}
\<close>
(*<*)
instance BEState_ext :: (finite, finite, finite, finite, finite, finite) finite
proof
let ?U = "UNIV :: ('a, 'b, 'c, 'd, 'e, 'f) BEState_ext set"
{ fix x :: "('a, 'b, 'c, 'd, 'e, 'f) BEState_scheme"
have "\<exists>a b c d. x = BEState_ext a b c d"
by (cases x) simp
} then have U:
"?U = (\<lambda>(((a, b), c), d). BEState_ext a b c d) ` (((UNIV \<times> UNIV) \<times> UNIV) \<times> UNIV)"
by (auto simp add: image_def)
show "finite ?U" by (simp add: U)
qed
(*>*)
text\<open>
\label{sec:kbps-theory-spr-non-deterministic-protocols}
For completeness we reproduce the results of \citet{Ron:1996}
regarding non-deterministic KBPs in broadcast environments.
The determinism requirement is replaced by the constraint that actions
be split into public and private components, where the private part
influences the agents' private states, and the public part is
broadcast and recorded in the system state. Moreover the protocol of
the environment is only a function of the environment state, and not
the agents' private states. Once again an agent's view consists of the
common observation and their private state. The situation is described
by the locale in Figure~\ref{fig:kbps-theory-broadcast-envs}. Note
that as we do not intend to generate code for this case, we adopt more
transparent but less efficient representations.
Our goal in the following is to instantiate the @{term
"SimIncrEnvironment"} locale with respect to the assumptions made in
the @{term "FiniteBroadcastEnvironment"} locale. We begin by defining
similar simulation machinery to the previous section.
\<close>
context FiniteBroadcastEnvironment
begin
text\<open>
As for the deterministic variant, we abstract traces using the common
observation. Note that this now includes the public part of the
agents' actions.
\<close>
definition
tObsC :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Trace
\<Rightarrow> ('cobs \<times> 'ePubAct \<times> ('a \<Rightarrow> 'pPubAct)) Trace"
where
"tObsC \<equiv> tMap (\<lambda>s. (envObsC (es s), pubActs s))"
(*<*)
lemma spr_jview_tObsC:
assumes "spr_jview a t = spr_jview a t'"
shows "tObsC t = tObsC t'"
using SPR.sync[rule_format, OF assms] assms
by (induct rule: trace_induct2) (auto simp: envObs_def tObsC_def)
lemma tObsC_tLength:
"tObsC t = tObsC t' \<Longrightarrow> tLength t = tLength t'"
unfolding tObsC_def by (rule tMap_eq_imp_tLength_eq)
lemma tObsC_tStep_eq_inv:
"tObsC t' = tObsC (t \<leadsto> s) \<Longrightarrow> \<exists>t'' s'. t' = t'' \<leadsto> s'"
unfolding tObsC_def by auto
lemma tObsC_prefix_closed[dest]:
"tObsC (t \<leadsto> s) = tObsC (t' \<leadsto> s') \<Longrightarrow> tObsC t = tObsC t'"
unfolding tObsC_def by simp
lemma tObsC_tLast[iff]:
"tLast (tObsC t) = (envObsC (es (tLast t)), pubActs (tLast t))"
unfolding tObsC_def by simp
lemma tObsC_tStep:
"tObsC (t \<leadsto> s) = tObsC t \<leadsto> (envObsC (es s), pubActs s)"
unfolding tObsC_def by simp
lemma tObsC_initial[iff]:
"tFirst (tObsC t) = (envObsC (es (tFirst t)), pubActs (tFirst t))"
"tObsC (tInit s) = tInit (envObsC (es s), pubActs s)"
"tObsC t = tInit cobs \<longleftrightarrow> (\<exists>s. t = tInit s \<and> envObsC (es s) = fst cobs \<and> pubActs s = snd cobs)"
unfolding tObsC_def by auto
lemma spr_tObsC_trc_aux:
assumes "(t, t') \<in> (\<Union>a. relations SPR.MC a)\<^sup>*"
shows "tObsC t = tObsC t'"
using assms
apply (induct)
apply simp
apply clarsimp
apply (rule_tac a=x in spr_jview_tObsC)
apply simp
done
(*>*)
text\<open>
Similarly we introduce common and agent-specific abstraction functions:
\<close>
definition
tObsC_abs :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Trace
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Relation"
where
"tObsC_abs t \<equiv> { (tFirst t', tLast t')
|t'. t' \<in> SPR.jkbpC \<and> tObsC t' = tObsC t }"
definition
agent_abs :: "'a \<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Trace
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Relation"
where
"agent_abs a t \<equiv> { (tFirst t', tLast t')
|t'. t' \<in> SPR.jkbpC \<and> spr_jview a t' = spr_jview a t }"
(*<*)
lemma tObsC_abs_jview_eq[dest, intro]:
"spr_jview a t' = spr_jview a t
\<Longrightarrow> tObsC_abs t = tObsC_abs t'"
unfolding tObsC_abs_def by (fastforce dest: spr_jview_tObsC)
lemma tObsC_absI[intro]:
"\<lbrakk> t' \<in> SPR.jkbpC; tObsC t' = tObsC t; u = tFirst t'; v = tLast t' \<rbrakk>
\<Longrightarrow> (u, v) \<in> tObsC_abs t"
unfolding tObsC_abs_def by blast
lemma tObsC_abs_conv:
"(u, v) \<in> tObsC_abs t
\<longleftrightarrow> (\<exists>t'. t' \<in> SPR.jkbpC \<and> tObsC t' = tObsC t \<and> u = tFirst t' \<and> v = tLast t')"
unfolding tObsC_abs_def by blast
lemma agent_absI[elim]:
"\<lbrakk> t' \<in> SPR.jkbpC; spr_jview a t' = spr_jview a t; u = tFirst t'; v = tLast t' \<rbrakk>
\<Longrightarrow> (u, v) \<in> agent_abs a t"
unfolding agent_abs_def by blast
lemma agent_abs_tLastD[simp]:
"(u, v) \<in> agent_abs a t \<Longrightarrow> envObs a v = envObs a (tLast t)"
unfolding agent_abs_def by auto
lemma agent_abs_inv[dest]:
"(u, v) \<in> agent_abs a t
\<Longrightarrow> \<exists>t'. t' \<in> SPR.jkbpC \<and> spr_jview a t' = spr_jview a t
\<and> u = tFirst t' \<and> v = tLast t'"
unfolding agent_abs_def by blast
(*>*)
end (* context FiniteBroadcastEnvironment *)
text\<open>
The simulation is identical to that in the previous section:
\<close>
record ('a, 'ePubAct, 'es, 'pPubAct, 'ps) SPRstate =
sprFst :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState"
sprLst :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState"
sprCRel :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Relation"
context FiniteBroadcastEnvironment
begin
definition
spr_sim :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Trace
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) SPRstate"
where
"spr_sim \<equiv> \<lambda>t. \<lparr> sprFst = tFirst t, sprLst = tLast t, sprCRel = tObsC_abs t \<rparr>"
(*<*)
lemma spr_sim_tFirst_tLast:
"\<lbrakk> spr_sim t = s; t \<in> SPR.jkbpC \<rbrakk> \<Longrightarrow> (sprFst s, sprLst s) \<in> sprCRel s"
unfolding spr_sim_def by auto
(*>*)
text\<open>
The Kripke structure over simulated traces is also the same:
\<close>
definition
spr_simRels :: "'a \<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) SPRstate Relation"
where
"spr_simRels \<equiv> \<lambda>a. { (s, s') |s s'.
envObs a (sprFst s) = envObs a (sprFst s')
\<and> envObs a (sprLst s) = envObs a (sprLst s')
\<and> sprCRel s = sprCRel s' }"
definition
spr_simVal :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) SPRstate \<Rightarrow> 'p \<Rightarrow> bool"
where
"spr_simVal \<equiv> envVal \<circ> sprLst"
abbreviation
"spr_simMC \<equiv> mkKripke (spr_sim ` SPR.jkbpC) spr_simRels spr_simVal"
(*<*)
lemma spr_simVal_def2[iff]:
"spr_simVal (spr_sim t) = envVal (tLast t)"
unfolding spr_sim_def spr_simVal_def by simp
(*>*)
text\<open>
As usual, showing that @{term "spr_sim"} is in fact a simulation is
routine for all properties except for reverse simulation. For that we
use proof techniques similar to those of
\citet{DBLP:journals/tocl/LomuscioMR00}: the goal is to show that,
given @{term "t \<in> jkbpC"}, we can construct a trace @{term "t' \<in>
jkbpC"} indistinguishable from @{term "t"} by agent @{term "a"}, based
on the public actions, the common observation and @{term "a"}'s
private and initial states.
To do this we define a splicing operation:
\<close>
definition
sSplice :: "'a
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState"
where
"sSplice a s s' \<equiv> s\<lparr> ps := (ps s)(a := ps s' a) \<rparr>"
(*<*)
lemma sSplice_es[simp]:
"es (sSplice a s s') = es s"
unfolding sSplice_def by simp
lemma sSplice_pubActs[simp]:
"pubActs (sSplice a s s') = pubActs s"
unfolding sSplice_def by simp
lemma sSplice_envObs[simp]:
assumes init: "envObs a s = envObs a s'"
shows "sSplice a s s' = s"
proof -
from init have "ps s a = ps s' a"
by (auto simp: envObs_def)
thus ?thesis
unfolding sSplice_def by (simp add: fun_upd_idem_iff)
qed
lemma sSplice_envObs_a:
assumes "envObsC (es s) = envObsC (es s')"
assumes "pubActs s = pubActs s'"
shows "envObs a (sSplice a s s') = envObs a s'"
using assms
unfolding sSplice_def envObs_def by simp
lemma sSplice_envObs_not_a:
assumes "a' \<noteq> a"
shows "envObs a' (sSplice a s s') = envObs a' s"
using assms
unfolding sSplice_def envObs_def by simp
(*>*)
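Operationally, @{term "sSplice"} is a single pointwise function update on the private-state field. As an executable sketch (the Python record encoding and the sample states are assumptions of this illustration, not part of the theory):

```python
def s_splice(a, s, s2):
    """Model of sSplice a s s2: s with agent a's private state taken from s2."""
    ps = dict(s["ps"])        # copy so the original record is untouched
    ps[a] = s2["ps"][a]
    return {**s, "ps": ps}

# Two states agreeing on the common part but not on private states.
s  = {"es": "e0", "ps": {"alice": 0, "bob": 1}, "pubActs": ("go", {})}
s2 = {"es": "e0", "ps": {"alice": 7, "bob": 9}, "pubActs": ("go", {})}
spliced = s_splice("alice", s, s2)
```

Only alice's private component changes; the environment state, bob's private state, and the public actions are those of the first argument, mirroring the lemmas @{text "sSplice_es"}, @{text "sSplice_pubActs"} and @{text "sSplice_envObs_not_a"} above.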
text\<open>
The effect of @{term "sSplice a s s'"} is to update @{term "s"} with
@{term "a"}'s private state in @{term "s'"}. The key properties are
that provided the common observation on @{term "s"} and @{term "s'"}
are the same, then agent @{term "a"}'s observation on @{term "sSplice
a s s'"} is the same as
at @{term "s'"}, while everyone else's is the
same as at @{term "s"}.
We hoist this operation pointwise to traces:
\<close>
abbreviation
tSplice :: "('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Trace
\<Rightarrow> 'a
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Trace
\<Rightarrow> ('a, 'ePubAct, 'es, 'pPubAct, 'ps) BEState Trace"
("_ \<^bsub>\<^esub>\<bowtie>\<^bsub>_\<^esub> _" [55, 1000, 56] 55)
where
"t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t' \<equiv> tZip (sSplice a) t t'"
(*<*)
declare sSplice_envObs_a[simp] sSplice_envObs_not_a[simp]
lemma tSplice_tObsC:
assumes tObsC: "tObsC t = tObsC t'"
shows "tObsC (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t') = tObsC t"
using tObsC_tLength[OF tObsC] tObsC
by (induct rule: trace_induct2) (simp_all add: tObsC_tStep)
lemma tSplice_spr_jview_a:
assumes tObsC: "tObsC t = tObsC t'"
shows "spr_jview a (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t') = spr_jview a t'"
using tObsC_tLength[OF tObsC] tObsC
by (induct rule: trace_induct2) (simp_all add: tObsC_tStep spr_jview_def)
lemma tSplice_spr_jview_not_a:
assumes tObsC: "tObsC t = tObsC t'"
assumes aa': "a \<noteq> a'"
shows "spr_jview a' (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t') = spr_jview a' t"
using tObsC_tLength[OF tObsC] tObsC aa'
by (induct rule: trace_induct2) (simp_all add: tObsC_tStep spr_jview_def)
lemma tSplice_es:
assumes tLen: "tLength t = tLength t'"
shows "es (tLast (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t')) = es (tLast t)"
using tLen by (induct rule: trace_induct2) simp_all
lemma tSplice_pubActs:
assumes tLen: "tLength t = tLength t'"
shows "pubActs (tLast (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t')) = pubActs (tLast t)"
using tLen by (induct rule: trace_induct2) simp_all
lemma tSplice_tFirst[simp]:
assumes tLen: "tLength t = tLength t'"
assumes init: "envObs a (tFirst t) = envObs a (tFirst t')"
shows "tFirst (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t') = tFirst t"
using tLen init by (induct rule: trace_induct2) simp_all
lemma tSplice_tLast[simp]:
assumes tLen: "tLength t = tLength t'"
assumes last: "envObs a (tLast t) = envObs a (tLast t')"
shows "tLast (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t') = tLast t"
using tLen last
unfolding envObs_def
apply (induct rule: trace_induct2)
apply (auto iff: sSplice_def fun_upd_idem_iff)
done
(*>*)
text\<open>
The key properties are that after splicing, if @{term "t"} and @{term
"t'"} have the same common observation, then so does @{term "t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub>
t'"}, and for all agents @{term "a' \<noteq> a"}, the view @{term "a'"} has
of @{term "t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t'"} is the same as it has of @{term "t"}, while for
@{term "a"} it is the same as @{term "t'"}.
We can conclude that provided the two traces are initially
indistinguishable to @{term "a"}, and not commonly distinguishable,
then @{term "t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t'"} is a canonical trace:
\<close>
lemma tSplice_jkbpC:
assumes tt': "{t, t'} \<subseteq> SPR.jkbpC"
assumes init: "envObs a (tFirst t) = envObs a (tFirst t')"
assumes tObsC: "tObsC t = tObsC t'"
shows "t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t' \<in> SPR.jkbpC"
(*<*)
using tObsC_tLength[OF tObsC] tt' init tObsC
proof(induct rule: trace_induct2)
case (tInit s s') thus ?case by simp
next
case (tStep s s' t t')
hence tt': "t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t' \<in> SPR.jkbpC"
and tLen: "tLength t' = tLength t"
and tObsC: "tObsC (t \<leadsto> s) = tObsC (t' \<leadsto> s')"
by auto
hence tt'n: "t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t' \<in> SPR.jkbpCn (tLength t)"
by auto
from tStep
have ts: "t \<leadsto> s \<in> SPR.jkbpCn (Suc (tLength t))"
and t's': "t' \<leadsto> s' \<in> SPR.jkbpCn (Suc (tLength t'))"
apply -
apply ((rule SPR.jkbpC_tLength_inv, simp_all)[1])+
done
from ts obtain eact aact
where eact: "eact \<in> set (envAction (tLast t))"
and aact: "\<forall>a. aact a \<in> set (jAction (SPR.mkM (SPR.jkbpCn (tLength t))) t a)"
and trans: "envTrans eact aact (tLast t) = s"
apply (auto iff: Let_def)
done
from t's' obtain eact' aact'
where eact': "eact' \<in> set (envAction (tLast t'))"
and aact': "\<forall>a. aact' a \<in> set (jAction (SPR.mkM (SPR.jkbpCn (tLength t'))) t' a)"
and trans': "envTrans eact' aact' (tLast t') = s'"
apply (auto iff: Let_def)
done
define aact'' where "aact'' = aact (a := aact' a)"
from tObsC trans trans'
have aact''_fst: "fst \<circ> aact'' = fst \<circ> aact"
unfolding envTrans_def aact''_def
apply -
apply (rule ext)
apply (auto iff: tObsC_tStep)
apply (erule o_eq_elim)
apply simp
done
from tObsC trans trans'
have aact''_snd: "snd \<circ> aact'' = (snd \<circ> aact)(a := ps s' a)"
unfolding envTrans_def aact''_def
apply -
apply (rule ext)
apply auto
done
have "envTrans eact aact'' (tLast (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t'))
= sSplice a (envTrans eact aact (tLast t)) s'"
apply (simp only: envTrans_def sSplice_def)
using tSplice_es[OF tLen[symmetric]] aact''_fst aact''_snd
apply clarsimp
done
moreover
{ fix a'
have "aact'' a' \<in> set (jAction (SPR.mkM (SPR.jkbpCn (tLength t))) (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t') a')"
proof(cases "a' = a")
case False
with tStep have "jAction (SPR.mkM (SPR.jkbpCn (tLength t))) (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t') a'
= jAction (SPR.mkM (SPR.jkbpCn (tLength t))) t a'"
apply -
apply (rule S5n_jAction_eq)
apply simp
unfolding SPR.mkM_def
using tSplice_spr_jview_not_a tt'
apply auto
done
with False aact show ?thesis
unfolding aact''_def by simp
next
case True
with tStep have "jAction (SPR.mkM (SPR.jkbpCn (tLength t))) (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t') a
= jAction (SPR.mkM (SPR.jkbpCn (tLength t))) t' a"
apply -
apply (rule S5n_jAction_eq)
apply simp
unfolding SPR.mkM_def
using tSplice_spr_jview_a tt'
apply auto
done
with True aact' tLen show ?thesis
unfolding aact''_def by simp
qed }
moreover
from tStep have "envAction (tLast (t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t')) = envAction (tLast t)"
using tSplice_envAction by blast
moreover note eact trans tt'n
ultimately have "(t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> t') \<leadsto> sSplice a s s' \<in> SPR.jkbpCn (Suc (tLength t))"
apply (simp add: Let_def del: split_paired_Ex)
apply (rule exI[where x="eact"])
apply (rule exI[where x="aact''"])
apply simp
done
thus ?case
apply (simp only: tZip.simps)
apply blast
done
qed
lemma spr_sim_r:
"sim_r SPR.MC spr_simMC spr_sim"
proof(rule sim_rI)
fix a p q'
assume pT: "p \<in> worlds SPR.MC"
and fpq': "(spr_sim p, q') \<in> relations spr_simMC a"
from fpq' obtain uq fq vq
where q': "q' = \<lparr> sprFst = uq, sprLst = vq, sprCRel = tObsC_abs p \<rparr>"
and uq: "envObs a (tFirst p) = envObs a uq"
and vq: "envObs a (tLast p) = envObs a vq"
unfolding mkKripke_def spr_sim_def spr_simRels_def
by fastforce
from fpq' have "q' \<in> worlds spr_simMC" by simp
with q' have "(uq, vq) \<in> tObsC_abs p"
using spr_sim_tFirst_tLast[where s=q']
apply auto
done
then obtain t
where tT: "t \<in> SPR.jkbpC"
and tp: "tObsC t = tObsC p"
and tuq: "tFirst t = uq"
and tvq: "tLast t = vq"
by (auto iff: tObsC_abs_conv)
define q where "q = t \<^bsub>\<^esub>\<bowtie>\<^bsub>a\<^esub> p"
from tp tuq uq
have "spr_jview a p = spr_jview a q"
unfolding q_def by (simp add: tSplice_spr_jview_a)
with pT tT tp tuq uq
have pt: "(p, q) \<in> relations SPR.MC a"
unfolding SPR.mkM_def q_def by (simp add: tSplice_jkbpC)
from q' uq vq tp tuq tvq
have ftq': "spr_sim q = q'"
unfolding spr_sim_def q_def
using tSplice_tObsC[where a=a and t=t and t'=p]
apply clarsimp
apply (intro conjI)
apply (auto dest: tObsC_tLength)[2]
unfolding tObsC_abs_def (* FIXME abstract *)
apply simp
done
from pt ftq'
show "\<exists>q. (p, q) \<in> relations SPR.MC a \<and> spr_sim q = q'"
by blast
qed
(*>*)
text\<open>
The proof is by induction over @{term "t"} and @{term "t'"}, and
depends crucially on the public actions being recorded in the state
and commonly observed. Showing the reverse simulation property is then
straightforward.
\<close>
lemma spr_sim: "sim SPR.MC spr_simMC spr_sim"
(*<*)
proof
show "sim_range SPR.MC spr_simMC spr_sim"
by (rule sim_rangeI) (simp_all add: spr_sim_def)
next
show "sim_val SPR.MC spr_simMC spr_sim"
by (rule sim_valI) simp
next
show "sim_f SPR.MC spr_simMC spr_sim"
unfolding spr_simRels_def spr_sim_def mkKripke_def SPR.mkM_def
by (rule sim_fI, auto simp del: split_paired_Ex)
next
show "sim_r SPR.MC spr_simMC spr_sim"
by (rule spr_sim_r)
qed
(*>*)
end (* context FiniteBroadcastEnvironment *)
sublocale FiniteBroadcastEnvironment
< SPR: SimIncrEnvironment jkbp envInit envAction envTrans envVal
spr_jview envObs spr_jviewInit spr_jviewIncr
spr_sim spr_simRels spr_simVal
(*<*)
by standard (simp add: spr_sim)
(*>*)
text\<open>
The algorithmic representations and machinery of the deterministic
JKBP case suffice for this one too, and so we omit the details.
\FloatBarrier
\<close>
(*<*)
end
(*>*)
-- Some theorems about operations on non-deterministic values
module nondet-thms where
open import bool
open import bool-thms
open import nat
open import eq
open import nat-thms
open import functions
open import nondet
----------------------------------------------------------------------
-- Theorems about values contained in non-deterministic values:
-- A proof that x is a value of the non-deterministic tree y:
-- either it is equal to a deterministic value (ndrefl)
-- or it is somewhere in the tree.
-- If it is in the tree then we need to construct both branches of the tree,
-- and a proof that x is in one of the branches
-- A consequence of this is that any proof that x ∈ y contains the path
-- to x in the tree.
--
-- Example:
-- hInCoin : H ∈ coin
-- hInCoin = left (Val H) (Val T) ndrefl
--
-- Since H is on the left side of coin, we use the left constructor
-- The branches of the tree are (Val H) and (Val T),
-- and since H is identically equal to H this completes the proof.
data _∈_ {A : Set} (x : A) : (y : ND A) → Set where
  ndrefl : x ∈ (Val x)
  left   : (l : ND A) → (r : ND A) → x ∈ l → x ∈ (l ?? r)
  right  : (l : ND A) → (r : ND A) → x ∈ r → x ∈ (l ?? r)
-- A basic inductive lemma that shows that ∈ is closed under function
-- application. That is, if x ∈ nx, then f x ∈ mapND f nx
--
-- Example:
-- ndCons : ... → xs ∈ nxs → (x :: xs) ∈ mapND (_::_ x) nxs
∈-apply : {A B : Set} → (f : A → B) → (x : A) → (nx : ND A)
        → x ∈ nx → (f x) ∈ (mapND f nx)
∈-apply f x (Val .x) ndrefl = ndrefl
∈-apply f x (l ?? r) (left .l .r k) =
  left (mapND f l) (mapND f r) (∈-apply f x l k)
∈-apply f x (l ?? r) (right .l .r k) =
  right (mapND f l) (mapND f r) (∈-apply f x r k)
----------------------------------------------------------------------
-- Theorems about 'mapND':
-- Combine two mapND applications into one:
mapND-mapND : ∀ {A B C : Set} → (f : B → C) (g : A → B) (xs : ND A)
            → mapND f (mapND g xs) ≡ mapND (f ∘ g) xs
mapND-mapND f g (Val x) = refl
mapND-mapND f g (t1 ?? t2)
  rewrite mapND-mapND f g t1 | mapND-mapND f g t2 = refl
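Outside Agda, the fusion law can be sanity-checked on a small executable model of these choice trees. The Python encoding below (`Val` for `Val`, `Choice` for `_??_`) is an illustrative assumption of this sketch, not part of the development:

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Val:
    x: object            # a deterministic leaf

@dataclass
class Choice:
    l: "ND"              # left branch of l ?? r
    r: "ND"              # right branch

ND = Union[Val, Choice]

def map_nd(f: Callable, t: ND) -> ND:
    """Structural map over a choice tree (mapND)."""
    if isinstance(t, Val):
        return Val(f(t.x))
    return Choice(map_nd(f, t.l), map_nd(f, t.r))

def values(t: ND):
    """All deterministic values of a tree, left to right."""
    if isinstance(t, Val):
        return [t.x]
    return values(t.l) + values(t.r)

# mapND-mapND: mapping g and then f equals mapping the composition f . g
coin = Choice(Val(0), Val(1))
lhs = map_nd(lambda b: b + 10, map_nd(lambda b: b * 2, coin))
rhs = map_nd(lambda b: b * 2 + 10, coin)
```

Here `lhs` and `rhs` are structurally equal trees, which is exactly what the `rewrite`-based proof establishes for every tree at once.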
----------------------------------------------------------------------
-- Theorems about 'always':
-- Extend validity of a function with a deterministic argument to validity of
-- the corresponding non-deterministic function:
always-mapND : ∀ {A : Set} → (p : A → 𝔹) (xs : ND A)
             → ((y : A) → p y ≡ tt)
             → always (mapND p xs) ≡ tt
always-mapND _ (Val y) prf = prf y
always-mapND p (t1 ?? t2) prf
  rewrite always-mapND p t1 prf
        | always-mapND p t2 prf = refl
-- Extend validity of a function with a deterministic argument to validity of
-- the corresponding non-deterministic function:
always-with-nd-arg : ∀ {A : Set} → (p : A → ND 𝔹) (xs : ND A)
                   → ((y : A) → always (p y) ≡ tt)
                   → always (with-nd-arg p xs) ≡ tt
always-with-nd-arg _ (Val y) prf = prf y
always-with-nd-arg p (t1 ?? t2) prf
  rewrite always-with-nd-arg p t1 prf
        | always-with-nd-arg p t2 prf = refl
-- Extend validity of a deterministic function to validity of
-- corresponding function with non-deterministic result:
always-toND : ∀ {A : Set} → (p : A → 𝔹) (x : A)
            → (p x) ≡ tt → always (toND p x) ≡ tt
always-toND _ _ p = p
-- Extend validity of a deterministic function to validity of
-- corresponding non-deterministic function:
always-det-to-nd : ∀ {A : Set} → (p : A → 𝔹)
                 → ((y : A) → (p y) ≡ tt)
                 → (xs : ND A) → always (det-to-nd p xs) ≡ tt
always-det-to-nd p u xs =
  always-with-nd-arg (toND p) xs (λ x → always-toND p x (u x))
----------------------------------------------------------------------
-- Theorems about 'satisfy':
-- A theorem like filter-map in functional programming:
satisfy-mapND : ∀ {A B : Set} → (f : A → B) (xs : ND A) (p : B → 𝔹)
              → (mapND f xs) satisfy p ≡ xs satisfy (p ∘ f)
satisfy-mapND _ (Val x) _ = refl
satisfy-mapND f (t1 ?? t2) p
  rewrite satisfy-mapND f t1 p
        | satisfy-mapND f t2 p = refl
-- Extend validity of function with deterministic argument to validity of
-- non-deterministic function:
satisfy-with-nd-arg : ∀ {A B : Set} → (p : B → 𝔹) (f : A → ND B) (xs : ND A)
                    → ((y : A) → (f y) satisfy p ≡ tt)
                    → (with-nd-arg f xs) satisfy p ≡ tt
satisfy-with-nd-arg _ _ (Val y) prf = prf y
satisfy-with-nd-arg p f (t1 ?? t2) prf
  rewrite satisfy-with-nd-arg p f t1 prf
        | satisfy-with-nd-arg p f t2 prf = refl
----------------------------------------------------------------------
-- Theorems about 'every':
mapNDval : ∀ (f : ℕ → ℕ) (v : ℕ) (x : ND ℕ) →
           every _=ℕ_ v x ≡ tt → every _=ℕ_ (f v) (mapND f x) ≡ tt
mapNDval f v (Val x) u rewrite =ℕ-to-≡ {v} {x} u | =ℕ-refl (f x) = refl
mapNDval f v (t1 ?? t2) u
  rewrite mapNDval f v t1 (&&-fst u)
        | mapNDval f v t2 (&&-snd {every _=ℕ_ v t1} {every _=ℕ_ v t2} u) = refl
----------------------------------------------------------------------
-- This theorem allows us to weaken a predicate that is always satisfied:
weak-always-predicate : ∀ {A : Set} → (p p1 : A → 𝔹) (xs : ND A)
                      → xs satisfy p ≡ tt
                      → xs satisfy (λ x → p1 x || p x) ≡ tt
weak-always-predicate p p1 (Val x) u rewrite u | ||-tt (p1 x) = refl
weak-always-predicate p p1 (t1 ?? t2) u
rewrite weak-always-predicate p p1 t1 (&&-fst u)
| weak-always-predicate p p1 t2 (&&-snd {t1 satisfy p} {t2 satisfy p} u)
= refl
----------------------------------------------------------------------
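In conventional terms, the tactic trace below formalizes the classical fact that a continuous bijection out of a compact space into a Hausdorff space is an isomorphism:

```latex
f\colon X \to Y \ \text{continuous bijective},\quad X \ \text{compact},\quad Y \ \text{Hausdorff}
\;\Longrightarrow\; f \ \text{is a homeomorphism},
```

since every closed $C \subseteq X$ is compact, so $f(C)$ is compact and hence closed in $Y$; thus $f$ is a closed map and $f^{-1}$ is continuous.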
[GOAL]
Xβ : Type u_1
instβΒ² : TopologicalSpace Xβ
instβΒΉ : CompactSpace Xβ
instβ : T2Space Xβ
X Y : CompHaus
f : X βΆ Y
bij : Function.Bijective βf
β’ IsIso f
[PROOFSTEP]
let E := Equiv.ofBijective _ bij
[GOAL]
Xβ : Type u_1
instβΒ² : TopologicalSpace Xβ
instβΒΉ : CompactSpace Xβ
instβ : T2Space Xβ
X Y : CompHaus
f : X βΆ Y
bij : Function.Bijective βf
E : (forget CompHaus).obj X β (forget CompHaus).obj Y := Equiv.ofBijective (βf) bij
β’ IsIso f
[PROOFSTEP]
have hE : Continuous E.symm := by
rw [continuous_iff_isClosed]
intro S hS
rw [β E.image_eq_preimage]
exact isClosedMap f S hS
[GOAL]
Xβ : Type u_1
instβΒ² : TopologicalSpace Xβ
instβΒΉ : CompactSpace Xβ
instβ : T2Space Xβ
X Y : CompHaus
f : X βΆ Y
bij : Function.Bijective βf
E : (forget CompHaus).obj X β (forget CompHaus).obj Y := Equiv.ofBijective (βf) bij
β’ Continuous βE.symm
[PROOFSTEP]
rw [continuous_iff_isClosed]
[GOAL]
Xβ : Type u_1
instβΒ² : TopologicalSpace Xβ
instβΒΉ : CompactSpace Xβ
instβ : T2Space Xβ
X Y : CompHaus
f : X βΆ Y
bij : Function.Bijective βf
E : (forget CompHaus).obj X β (forget CompHaus).obj Y := Equiv.ofBijective (βf) bij
β’ β (s : Set ((forget CompHaus).obj X)), IsClosed s β IsClosed (βE.symm β»ΒΉ' s)
[PROOFSTEP]
intro S hS
[GOAL]
Xβ : Type u_1
instβΒ² : TopologicalSpace Xβ
instβΒΉ : CompactSpace Xβ
instβ : T2Space Xβ
X Y : CompHaus
f : X βΆ Y
bij : Function.Bijective βf
E : (forget CompHaus).obj X β (forget CompHaus).obj Y := Equiv.ofBijective (βf) bij
S : Set ((forget CompHaus).obj X)
hS : IsClosed S
β’ IsClosed (βE.symm β»ΒΉ' S)
[PROOFSTEP]
rw [β E.image_eq_preimage]
[GOAL]
Xβ : Type u_1
instβΒ² : TopologicalSpace Xβ
instβΒΉ : CompactSpace Xβ
instβ : T2Space Xβ
X Y : CompHaus
f : X βΆ Y
bij : Function.Bijective βf
E : (forget CompHaus).obj X β (forget CompHaus).obj Y := Equiv.ofBijective (βf) bij
S : Set ((forget CompHaus).obj X)
hS : IsClosed S
β’ IsClosed (βE '' S)
[PROOFSTEP]
exact isClosedMap f S hS
[GOAL]
Xβ : Type u_1
instβΒ² : TopologicalSpace Xβ
instβΒΉ : CompactSpace Xβ
instβ : T2Space Xβ
X Y : CompHaus
f : X βΆ Y
bij : Function.Bijective βf
E : (forget CompHaus).obj X β (forget CompHaus).obj Y := Equiv.ofBijective (βf) bij
hE : Continuous βE.symm
β’ IsIso f
[PROOFSTEP]
refine' β¨β¨β¨E.symm, hEβ©, _, _β©β©
[GOAL]
case refine'_1
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : X ⟶ Y
bij : Function.Bijective ↑f
E : (forget CompHaus).obj X ≃ (forget CompHaus).obj Y := Equiv.ofBijective (↑f) bij
hE : Continuous ↑E.symm
⊢ f ≫ ContinuousMap.mk ↑E.symm = 𝟙 X
[PROOFSTEP]
ext x
[GOAL]
case refine'_1.w
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : X ⟶ Y
bij : Function.Bijective ↑f
E : (forget CompHaus).obj X ≃ (forget CompHaus).obj Y := Equiv.ofBijective (↑f) bij
hE : Continuous ↑E.symm
x : (forget CompHaus).obj X
⊢ ↑(f ≫ ContinuousMap.mk ↑E.symm) x = ↑(𝟙 X) x
[PROOFSTEP]
apply E.symm_apply_apply
[GOAL]
case refine'_2
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : X ⟶ Y
bij : Function.Bijective ↑f
E : (forget CompHaus).obj X ≃ (forget CompHaus).obj Y := Equiv.ofBijective (↑f) bij
hE : Continuous ↑E.symm
⊢ ContinuousMap.mk ↑E.symm ≫ f = 𝟙 Y
[PROOFSTEP]
ext x
[GOAL]
case refine'_2.w
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : X ⟶ Y
bij : Function.Bijective ↑f
E : (forget CompHaus).obj X ≃ (forget CompHaus).obj Y := Equiv.ofBijective (↑f) bij
hE : Continuous ↑E.symm
x : (forget CompHaus).obj Y
⊢ ↑(ContinuousMap.mk ↑E.symm ≫ f) x = ↑(𝟙 Y) x
[PROOFSTEP]
apply E.apply_symm_apply
[GOAL]
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : ↑X.toTop ≃ₜ ↑Y.toTop
⊢ ContinuousMap.mk ↑f ≫ ContinuousMap.mk ↑(Homeomorph.symm f) = 𝟙 X
[PROOFSTEP]
ext x
[GOAL]
case w
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : ↑X.toTop ≃ₜ ↑Y.toTop
x : (forget CompHaus).obj X
⊢ ↑(ContinuousMap.mk ↑f ≫ ContinuousMap.mk ↑(Homeomorph.symm f)) x = ↑(𝟙 X) x
[PROOFSTEP]
exact f.symm_apply_apply x
[GOAL]
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : ↑X.toTop ≃ₜ ↑Y.toTop
⊢ ContinuousMap.mk ↑(Homeomorph.symm f) ≫ ContinuousMap.mk ↑f = 𝟙 Y
[PROOFSTEP]
ext x
[GOAL]
case w
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : ↑X.toTop ≃ₜ ↑Y.toTop
x : (forget CompHaus).obj Y
⊢ ↑(ContinuousMap.mk ↑(Homeomorph.symm f) ≫ ContinuousMap.mk ↑f) x = ↑(𝟙 Y) x
[PROOFSTEP]
exact f.apply_symm_apply x
[GOAL]
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : X ≅ Y
x : ↑X.toTop
⊢ ↑f.inv (↑f.hom x) = x
[PROOFSTEP]
simp
[GOAL]
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : X ≅ Y
x : ↑Y.toTop
⊢ ↑f.hom (↑f.inv x) = x
[PROOFSTEP]
simp
[GOAL]
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : X ≅ Y
⊢ isoOfHomeo (homeoOfIso f) = f
[PROOFSTEP]
ext
[GOAL]
case w.w
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : X ≅ Y
x✝ : (forget CompHaus).obj X
⊢ ↑(isoOfHomeo (homeoOfIso f)).hom x✝ = ↑f.hom x✝
[PROOFSTEP]
rfl
[GOAL]
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : ↑X.toTop ≃ₜ ↑Y.toTop
⊢ homeoOfIso (isoOfHomeo f) = f
[PROOFSTEP]
ext
[GOAL]
case H
X✝ : Type u_1
inst✝² : TopologicalSpace X✝
inst✝¹ : CompactSpace X✝
inst✝ : T2Space X✝
X Y : CompHaus
f : ↑X.toTop ≃ₜ ↑Y.toTop
x✝ : ↑X.toTop
⊢ ↑(homeoOfIso (isoOfHomeo f)) x✝ = ↑f x✝
[PROOFSTEP]
rfl
[GOAL]
⊢ ∀ {A B : CompHaus} (f : A ⟶ B) [inst : IsIso ((forget CompHaus).map f)], IsIso f
[PROOFSTEP]
intro A B f hf
[GOAL]
A B : CompHaus
f : A ⟶ B
hf : IsIso ((forget CompHaus).map f)
⊢ IsIso f
[PROOFSTEP]
exact CompHaus.isIso_of_bijective _ ((isIso_iff_bijective f).mp hf)
[GOAL]
X : TopCat
Y : CompHaus
⊢ Function.LeftInverse (fun f => ContinuousMap.mk (stoneCechExtend (_ : Continuous f.toFun))) fun f =>
ContinuousMap.mk (↑f ∘ stoneCechUnit)
[PROOFSTEP]
rintro
⟨f : StoneCech X ⟶ Y, hf : Continuous f⟩
-- Porting note: `ext` fails.
[GOAL]
case mk
X : TopCat
Y : CompHaus
f : StoneCech ↑X ⟶ ↑Y.toTop
hf : Continuous f
⊢ (fun f => ContinuousMap.mk (stoneCechExtend (_ : Continuous f.toFun)))
((fun f => ContinuousMap.mk (↑f ∘ stoneCechUnit)) (ContinuousMap.mk f)) =
ContinuousMap.mk f
[PROOFSTEP]
apply ContinuousMap.ext
[GOAL]
case mk.h
X : TopCat
Y : CompHaus
f : StoneCech ↑X ⟶ ↑Y.toTop
hf : Continuous f
⊢ ∀ (a : ↑(stoneCechObj X).toTop),
↑((fun f => ContinuousMap.mk (stoneCechExtend (_ : Continuous f.toFun)))
((fun f => ContinuousMap.mk (↑f ∘ stoneCechUnit)) (ContinuousMap.mk f)))
a =
↑(ContinuousMap.mk f) a
[PROOFSTEP]
intro (x : StoneCech X)
[GOAL]
case mk.h
X : TopCat
Y : CompHaus
f : StoneCech ↑X ⟶ ↑Y.toTop
hf : Continuous f
x : StoneCech ↑X
⊢ ↑((fun f => ContinuousMap.mk (stoneCechExtend (_ : Continuous f.toFun)))
((fun f => ContinuousMap.mk (↑f ∘ stoneCechUnit)) (ContinuousMap.mk f)))
x =
↑(ContinuousMap.mk f) x
[PROOFSTEP]
refine' congr_fun _ x
[GOAL]
case mk.h
X : TopCat
Y : CompHaus
f : StoneCech ↑X ⟶ ↑Y.toTop
hf : Continuous f
x : StoneCech ↑X
⊢ ↑((fun f => ContinuousMap.mk (stoneCechExtend (_ : Continuous f.toFun)))
((fun f => ContinuousMap.mk (↑f ∘ stoneCechUnit)) (ContinuousMap.mk f))) =
↑(ContinuousMap.mk f)
[PROOFSTEP]
apply Continuous.ext_on denseRange_stoneCechUnit (continuous_stoneCechExtend _) hf
[GOAL]
case mk.h
X : TopCat
Y : CompHaus
f : StoneCech ↑X ⟶ ↑Y.toTop
hf : Continuous f
x : StoneCech ↑X
⊢ Set.EqOn (stoneCechExtend ?m.48961) f (Set.range stoneCechUnit)
X : TopCat
Y : CompHaus
f : StoneCech ↑X ⟶ ↑Y.toTop
hf : Continuous f
x : StoneCech ↑X
⊢ Continuous ((fun f => ContinuousMap.mk (↑f ∘ stoneCechUnit)) (ContinuousMap.mk f)).toFun
[PROOFSTEP]
rintro _ ⟨y, rfl⟩
[GOAL]
case mk.h.intro
X : TopCat
Y : CompHaus
f : StoneCech ↑X ⟶ ↑Y.toTop
hf : Continuous f
x : StoneCech ↑X
y : ↑X
⊢ stoneCechExtend ?m.48961 (stoneCechUnit y) = f (stoneCechUnit y)
X : TopCat
Y : CompHaus
f : StoneCech ↑X ⟶ ↑Y.toTop
hf : Continuous f
x : StoneCech ↑X
⊢ Continuous ((fun f => ContinuousMap.mk (↑f ∘ stoneCechUnit)) (ContinuousMap.mk f)).toFun
[PROOFSTEP]
apply congr_fun (stoneCechExtend_extends (hf.comp _)) y
[GOAL]
X : TopCat
Y : CompHaus
f : StoneCech ↑X ⟶ ↑Y.toTop
hf : Continuous f
x : StoneCech ↑X
⊢ Continuous fun x => stoneCechUnit x
[PROOFSTEP]
apply continuous_stoneCechUnit
[GOAL]
X : TopCat
Y : CompHaus
⊢ Function.RightInverse (fun f => ContinuousMap.mk (stoneCechExtend (_ : Continuous f.toFun))) fun f =>
ContinuousMap.mk (↑f ∘ stoneCechUnit)
[PROOFSTEP]
rintro
⟨f : (X : Type _) ⟶ Y, hf : Continuous f⟩
-- Porting note: `ext` fails.
[GOAL]
case mk
X : TopCat
Y : CompHaus
f : ↑X ⟶ ↑Y.toTop
hf : Continuous f
⊢ (fun f => ContinuousMap.mk (↑f ∘ stoneCechUnit))
((fun f => ContinuousMap.mk (stoneCechExtend (_ : Continuous f.toFun))) (ContinuousMap.mk f)) =
ContinuousMap.mk f
[PROOFSTEP]
apply ContinuousMap.ext
[GOAL]
case mk.h
X : TopCat
Y : CompHaus
f : ↑X ⟶ ↑Y.toTop
hf : Continuous f
⊢ ∀ (a : ↑X),
↑((fun f => ContinuousMap.mk (↑f ∘ stoneCechUnit))
((fun f => ContinuousMap.mk (stoneCechExtend (_ : Continuous f.toFun))) (ContinuousMap.mk f)))
a =
↑(ContinuousMap.mk f) a
[PROOFSTEP]
intro
[GOAL]
case mk.h
X : TopCat
Y : CompHaus
f : ↑X ⟶ ↑Y.toTop
hf : Continuous f
a✝ : ↑X
⊢ ↑((fun f => ContinuousMap.mk (↑f ∘ stoneCechUnit))
((fun f => ContinuousMap.mk (stoneCechExtend (_ : Continuous f.toFun))) (ContinuousMap.mk f)))
a✝ =
↑(ContinuousMap.mk f) a✝
[PROOFSTEP]
exact congr_fun (stoneCechExtend_extends hf) _
[GOAL]
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
⊢ CompactSpace ↑(TopCat.limitCone FF).pt
[PROOFSTEP]
show CompactSpace {u : ∀ j, F.obj j | ∀ {i j : J} (f : i ⟶ j), (F.map f) (u i) = u j}
[GOAL]
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
⊢ CompactSpace ↑{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j}
[PROOFSTEP]
rw [← isCompact_iff_compactSpace]
[GOAL]
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
⊢ IsCompact {u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j}
[PROOFSTEP]
apply IsClosed.isCompact
[GOAL]
case h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
⊢ IsClosed {u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j}
[PROOFSTEP]
have :
{u : ∀ j, F.obj j | ∀ {i j : J} (f : i ⟶ j), F.map f (u i) = u j} =
⋂ (i : J) (j : J) (f : i ⟶ j), {u | F.map f (u i) = u j} :=
by
ext1
simp only [Set.mem_iInter, Set.mem_setOf_eq]
[GOAL]
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
⊢ {u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
[PROOFSTEP]
ext1
[GOAL]
case h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
x✝ : (j : J) → ↑(F.obj j).toTop
⊢ x✝ ∈ {u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} ↔
x✝ ∈ ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
[PROOFSTEP]
simp only [Set.mem_iInter, Set.mem_setOf_eq]
[GOAL]
case h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
⊢ IsClosed {u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j}
[PROOFSTEP]
rw [this]
[GOAL]
case h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
⊢ IsClosed (⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j})
[PROOFSTEP]
apply isClosed_iInter
[GOAL]
case h.h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
⊢ ∀ (i : J), IsClosed (⋂ (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j})
[PROOFSTEP]
intro i
[GOAL]
case h.h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
i : J
⊢ IsClosed (⋂ (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j})
[PROOFSTEP]
apply isClosed_iInter
[GOAL]
case h.h.h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
i : J
⊢ ∀ (i_1 : J), IsClosed (⋂ (f : i ⟶ i_1), {u | ↑(F.map f) (u i) = u i_1})
[PROOFSTEP]
intro j
[GOAL]
case h.h.h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
i j : J
⊢ IsClosed (⋂ (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j})
[PROOFSTEP]
apply isClosed_iInter
[GOAL]
case h.h.h.h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
i j : J
⊢ ∀ (i_1 : i ⟶ j), IsClosed {u | ↑(F.map i_1) (u i) = u j}
[PROOFSTEP]
intro f
[GOAL]
case h.h.h.h
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
i j : J
f : i ⟶ j
⊢ IsClosed {u | ↑(F.map f) (u i) = u j}
[PROOFSTEP]
apply isClosed_eq
[GOAL]
case h.h.h.h.hf
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
i j : J
f : i ⟶ j
⊢ Continuous fun x => ↑(F.map f) (x i)
[PROOFSTEP]
exact (ContinuousMap.continuous (F.map f)).comp (continuous_apply i)
[GOAL]
case h.h.h.h.hg
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
this :
{u | ∀ {i j : J} (f : i ⟶ j), ↑(F.map f) (u i) = u j} = ⋂ (i : J) (j : J) (f : i ⟶ j), {u | ↑(F.map f) (u i) = u j}
i j : J
f : i ⟶ j
⊢ Continuous fun x => x j
[PROOFSTEP]
exact continuous_apply j
[GOAL]
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
⊢ ∀ ⦃X Y : J⦄ (f : X ⟶ Y),
((Functor.const J).obj (mk (TopCat.limitCone FF).pt)).map f ≫ (fun j => NatTrans.app (TopCat.limitCone FF).π j) Y =
(fun j => NatTrans.app (TopCat.limitCone FF).π j) X ≫ F.map f
[PROOFSTEP]
intro _ _ f
[GOAL]
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
X✝ Y✝ : J
f : X✝ ⟶ Y✝
⊢ ((Functor.const J).obj (mk (TopCat.limitCone FF).pt)).map f ≫ (fun j => NatTrans.app (TopCat.limitCone FF).π j) Y✝ =
(fun j => NatTrans.app (TopCat.limitCone FF).π j) X✝ ≫ F.map f
[PROOFSTEP]
ext ⟨x, hx⟩
[GOAL]
case w.mk
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
X✝ Y✝ : J
f : X✝ ⟶ Y✝
x : (j : J) → ↑(FF.obj j)
hx : x ∈ {u | ∀ {i j : J} (f : i ⟶ j), ↑(FF.map f) (u i) = u j}
⊢ ↑(((Functor.const J).obj (mk (TopCat.limitCone FF).pt)).map f ≫ (fun j => NatTrans.app (TopCat.limitCone FF).π j) Y✝)
{ val := x, property := hx } =
↑((fun j => NatTrans.app (TopCat.limitCone FF).π j) X✝ ≫ F.map f) { val := x, property := hx }
[PROOFSTEP]
simp only [comp_apply, Functor.const_obj_map, id_apply]
[GOAL]
case w.mk
J : Type v
inst✝ : SmallCategory J
F : J ⥤ CompHaus
FF : J ⥤ TopCatMax := F ⋙ compHausToTop
X✝ Y✝ : J
f : X✝ ⟶ Y✝
x : (j : J) → ↑(FF.obj j)
hx : x ∈ {u | ∀ {i j : J} (f : i ⟶ j), ↑(FF.map f) (u i) = u j}
⊢ ↑(NatTrans.app (TopCat.limitCone (F ⋙ compHausToTop)).π Y✝)
(↑(𝟙 (mk (TopCat.limitCone (F ⋙ compHausToTop)).pt)) { val := x, property := hx }) =
↑(F.map f) (↑(NatTrans.app (TopCat.limitCone (F ⋙ compHausToTop)).π X✝) { val := x, property := hx })
[PROOFSTEP]
exact (hx f).symm
[GOAL]
X Y : CompHaus
f : X ⟶ Y
⊢ Epi f ↔ Function.Surjective ↑f
[PROOFSTEP]
constructor
[GOAL]
case mp
X Y : CompHaus
f : X ⟶ Y
⊢ Epi f → Function.Surjective ↑f
[PROOFSTEP]
dsimp [Function.Surjective]
[GOAL]
case mp
X Y : CompHaus
f : X ⟶ Y
⊢ Epi f → ∀ (b : (forget CompHaus).obj Y), ∃ a, ↑f a = b
[PROOFSTEP]
contrapose!
[GOAL]
case mp
X Y : CompHaus
f : X ⟶ Y
⊢ (∃ b, ∀ (a : (forget CompHaus).obj X), ↑f a ≠ b) → ¬Epi f
[PROOFSTEP]
rintro ⟨y, hy⟩ hf
[GOAL]
case mp.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
⊢ False
[PROOFSTEP]
let C := Set.range f
[GOAL]
case mp.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
⊢ False
[PROOFSTEP]
have hC : IsClosed C := (isCompact_range f.continuous).isClosed
[GOAL]
case mp.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
⊢ False
[PROOFSTEP]
let D := ({ y } : Set Y)
[GOAL]
case mp.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
⊢ False
[PROOFSTEP]
have hD : IsClosed D := isClosed_singleton
[GOAL]
case mp.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
⊢ False
[PROOFSTEP]
have hCD : Disjoint C D := by
rw [Set.disjoint_singleton_right]
rintro ⟨y', hy'⟩
exact hy y' hy'
[GOAL]
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
⊢ Disjoint C D
[PROOFSTEP]
rw [Set.disjoint_singleton_right]
[GOAL]
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
⊢ ¬y ∈ C
[PROOFSTEP]
rintro ⟨y', hy'⟩
[GOAL]
case intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
y' : (forget CompHaus).obj X
hy' : ↑f y' = y
⊢ False
[PROOFSTEP]
exact hy y' hy'
[GOAL]
case mp.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
⊢ False
[PROOFSTEP]
haveI : NormalSpace ((forget CompHaus).obj Y) := normalOfCompactT2
[GOAL]
case mp.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this : NormalSpace ((forget CompHaus).obj Y)
⊢ False
[PROOFSTEP]
obtain ⟨φ, hφ0, hφ1, hφ01⟩ := exists_continuous_zero_one_of_closed hC hD hCD
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
⊢ False
[PROOFSTEP]
haveI : CompactSpace (ULift.{u} <| Set.Icc (0 : ℝ) 1) := Homeomorph.ulift.symm.compactSpace
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this : CompactSpace (ULift ↑(Set.Icc 0 1))
⊢ False
[PROOFSTEP]
haveI : T2Space (ULift.{u} <| Set.Icc (0 : ℝ) 1) := Homeomorph.ulift.symm.t2Space
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
⊢ False
[PROOFSTEP]
let Z := of (ULift.{u} <| Set.Icc (0 : ℝ) 1)
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
⊢ False
[PROOFSTEP]
let g : Y ⟶ Z := ⟨fun y' => ⟨⟨φ y', hφ01 y'⟩⟩, continuous_uLift_up.comp (φ.continuous.subtype_mk fun y' => hφ01 y')⟩
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
⊢ False
[PROOFSTEP]
let h : Y ⟶ Z := ⟨fun _ => ⟨⟨0, Set.left_mem_Icc.mpr zero_le_one⟩⟩, continuous_const⟩
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
⊢ False
[PROOFSTEP]
have H : h = g := by
rw [← cancel_epi f]
ext x
apply ULift.ext
apply Subtype.ext
dsimp
-- Porting note: This `change` is not ideal.
-- I think lean is having issues understanding when a `ContinuousMap` should be considered
-- as a morphism.
-- TODO(?): Make morphisms in `CompHaus` (and other topological categories)
-- into a one-field-structure.
change 0 = φ (f x)
simp only [hφ0 (Set.mem_range_self x), Pi.zero_apply]
[GOAL]
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
⊢ h = g
[PROOFSTEP]
rw [← cancel_epi f]
[GOAL]
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
⊢ f ≫ h = f ≫ g
[PROOFSTEP]
ext x
[GOAL]
case w
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
x : (forget CompHaus).obj X
⊢ ↑(f ≫ h) x = ↑(f ≫ g) x
[PROOFSTEP]
apply ULift.ext
[GOAL]
case w.h
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
x : (forget CompHaus).obj X
⊢ (↑(f ≫ h) x).down = (↑(f ≫ g) x).down
[PROOFSTEP]
apply Subtype.ext
[GOAL]
case w.h.a
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
x : (forget CompHaus).obj X
⊢ ↑(↑(f ≫ h) x).down = ↑(↑(f ≫ g) x).down
[PROOFSTEP]
dsimp
-- Porting note: This `change` is not ideal.
-- I think lean is having issues understanding when a `ContinuousMap` should be considered
-- as a morphism.
-- TODO(?): Make morphisms in `CompHaus` (and other topological categories)
-- into a one-field-structure.
[GOAL]
case w.h.a
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
x : (forget CompHaus).obj X
⊢ ↑(↑(f ≫ ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }) x).down =
↑(↑(f ≫ ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }) x).down
[PROOFSTEP]
change 0 = φ (f x)
[GOAL]
case w.h.a
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
x : (forget CompHaus).obj X
⊢ 0 = ↑φ (↑f x)
[PROOFSTEP]
simp only [hφ0 (Set.mem_range_self x), Pi.zero_apply]
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
H : h = g
⊢ False
[PROOFSTEP]
apply_fun fun e => (e y).down.1 at H
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
H : ↑(↑h y).down = ↑(↑g y).down
⊢ False
[PROOFSTEP]
dsimp at H
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
H :
↑(↑(ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }) y).down =
↑(↑(ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }) y).down
⊢ False
[PROOFSTEP]
change 0 = φ y at H
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
H : 0 = ↑φ y
⊢ False
[PROOFSTEP]
simp only [hφ1 (Set.mem_singleton y), Pi.one_apply] at H
[GOAL]
case mp.intro.intro.intro.intro
X Y : CompHaus
f : X ⟶ Y
y : (forget CompHaus).obj Y
hy : ∀ (a : (forget CompHaus).obj X), ↑f a ≠ y
hf : Epi f
C : Set ((forget CompHaus).obj Y) := Set.range ↑f
hC : IsClosed C
D : Set ↑Y.toTop := {y}
hD : IsClosed D
hCD : Disjoint C D
this✝¹ : NormalSpace ((forget CompHaus).obj Y)
φ : C((forget CompHaus).obj Y, ℝ)
hφ0 : Set.EqOn (↑φ) 0 C
hφ1 : Set.EqOn (↑φ) 1 D
hφ01 : ∀ (x : (forget CompHaus).obj Y), ↑φ x ∈ Set.Icc 0 1
this✝ : CompactSpace (ULift ↑(Set.Icc 0 1))
this : T2Space (ULift ↑(Set.Icc 0 1))
Z : CompHaus := of (ULift ↑(Set.Icc 0 1))
g : Y ⟶ Z := ContinuousMap.mk fun y' => { down := { val := ↑φ y', property := (_ : ↑φ y' ∈ Set.Icc 0 1) } }
h : Y ⟶ Z := ContinuousMap.mk fun x => { down := { val := 0, property := (_ : 0 ∈ Set.Icc 0 1) } }
H : 0 = 1
⊢ False
[PROOFSTEP]
exact zero_ne_one H
[GOAL]
case mpr
X Y : CompHaus
f : X βΆ Y
β’ Function.Surjective βf β Epi f
[PROOFSTEP]
rw [β CategoryTheory.epi_iff_surjective]
[GOAL]
case mpr
X Y : CompHaus
f : X βΆ Y
β’ Epi βf β Epi f
[PROOFSTEP]
apply (forget CompHaus).epi_of_epi_map
[GOAL]
X Y : CompHaus
f : X βΆ Y
β’ Mono f β Function.Injective βf
[PROOFSTEP]
constructor
[GOAL]
case mp
X Y : CompHaus
f : X βΆ Y
β’ Mono f β Function.Injective βf
[PROOFSTEP]
intro hf xβ xβ h
[GOAL]
case mp
X Y : CompHaus
f : X βΆ Y
hf : Mono f
xβ xβ : (forget CompHaus).obj X
h : βf xβ = βf xβ
β’ xβ = xβ
[PROOFSTEP]
let gβ : of PUnit βΆ X := β¨fun _ => xβ, continuous_constβ©
[GOAL]
case mp
X Y : CompHaus
f : X βΆ Y
hf : Mono f
xβ xβ : (forget CompHaus).obj X
h : βf xβ = βf xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
β’ xβ = xβ
[PROOFSTEP]
let gβ : of PUnit βΆ X := β¨fun _ => xβ, continuous_constβ©
[GOAL]
case mp
X Y : CompHaus
f : X βΆ Y
hf : Mono f
xβ xβ : (forget CompHaus).obj X
h : βf xβ = βf xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
β’ xβ = xβ
[PROOFSTEP]
have : gβ β« f = gβ β« f := by
ext
exact h
[GOAL]
X Y : CompHaus
f : X βΆ Y
hf : Mono f
xβ xβ : (forget CompHaus).obj X
h : βf xβ = βf xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
β’ gβ β« f = gβ β« f
[PROOFSTEP]
ext
[GOAL]
case w
X Y : CompHaus
f : X βΆ Y
hf : Mono f
xβ xβ : (forget CompHaus).obj X
h : βf xβ = βf xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
xβ : (forget CompHaus).obj (of PUnit)
β’ β(gβ β« f) xβ = β(gβ β« f) xβ
[PROOFSTEP]
exact h
[GOAL]
case mp
X Y : CompHaus
f : X βΆ Y
hf : Mono f
xβ xβ : (forget CompHaus).obj X
h : βf xβ = βf xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
this : gβ β« f = gβ β« f
β’ xβ = xβ
[PROOFSTEP]
rw [cancel_mono] at this
[GOAL]
case mp
X Y : CompHaus
f : X βΆ Y
hf : Mono f
xβ xβ : (forget CompHaus).obj X
h : βf xβ = βf xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
this : gβ = gβ
β’ xβ = xβ
[PROOFSTEP]
apply_fun fun e => e PUnit.unit at this
[GOAL]
case mp
X Y : CompHaus
f : X βΆ Y
hf : Mono f
xβ xβ : (forget CompHaus).obj X
h : βf xβ = βf xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
gβ : of PUnit βΆ X := ContinuousMap.mk fun x => xβ
this : βgβ PUnit.unit = βgβ PUnit.unit
β’ xβ = xβ
[PROOFSTEP]
exact this
[GOAL]
case mpr
X Y : CompHaus
f : X βΆ Y
β’ Function.Injective βf β Mono f
[PROOFSTEP]
rw [β CategoryTheory.mono_iff_injective]
[GOAL]
case mpr
X Y : CompHaus
f : X βΆ Y
β’ Mono βf β Mono f
[PROOFSTEP]
apply (forget CompHaus).mono_of_mono_map
|
function str=disp(tt,name)
%Command window display of a TT-tensor.
%   STR=DISP(TT,NAME) prints (or, if an output is requested, returns in
%   STR) the dimensionality, TT-ranks and mode sizes of the TT-tensor TT,
%   labelled by NAME (default 'ans').
% TT-Toolbox 2.2, 2009-2012
%
%This is TT Toolbox, written by Ivan Oseledets et al.
%Institute of Numerical Mathematics, Moscow, Russia
%webpage: http://spring.inm.ras.ru/osel
%
%For all questions, bugs and suggestions please mail
%[email protected]
%---------------------------
if ~exist('name','var')
name = 'ans';
end
r=tt.r; n=tt.n;
d=tt.d;
str=[];
str=[str,sprintf('%s is a %d-dimensional TT-tensor, ranks and mode sizes: \n',name,d)];
%fprintf('r(1)=%d \n', r(1));
for i=1:d
str=[str,sprintf('r(%d)=%d \t n(%d)=%d \n',i,r(i),i,n(i))];
% fprintf(' \t n(%d)=%d \n',i,n(i));
% fprintf('r(%d)=%d \n',i+1,r(i+1));
end
str=[str,sprintf('r(%d)=%d \n',d+1,r(d+1))];
if ( nargout == 0 )
  % Use the '%s' format so that any '%' characters in str are printed
  % literally instead of being interpreted as format specifiers.
  fprintf('%s',str);
end
end
|
module Thesis.IntChanges where
open import Data.Integer.Base
open import Relation.Binary.PropositionalEquality
open import Thesis.Changes
open import Theorem.Groups-Nehemiah
private
intCh = β€
instance
intCS : ChangeStructure β€
intCS = record
{ Ch = β€
; ch_from_to_ = Ξ» dv v1 v2 β v1 + dv β‘ v2
; isCompChangeStructure = record
{ isChangeStructure = record
{ _β_ = _+_
; fromtoββ = Ξ» dv v1 v2 v2β‘v1+dv β v2β‘v1+dv
; _β_ = _-_
; β-fromto = Ξ» a b β n+[m-n]=m {a} {b}
}
; _β_ = Ξ» da1 da2 β da1 + da2
; β-fromto = iβ-fromto
}
}
where
iβ-fromto : (a1 a2 a3 : β€) (da1 da2 : intCh) β
a1 + da1 β‘ a2 β a2 + da2 β‘ a3 β a1 + (da1 + da2) β‘ a3
iβ-fromto a1 a2 a3 da1 da2 a1+da1β‘a2 a2+da2β‘a3
rewrite sym (associative-int a1 da1 da2) | a1+da1β‘a2 = a2+da2β‘a3
|
test_that(
  "algorithm() prepends \"hello \" to its input",
  expect_equal(algorithm("Heather!"), "hello Heather!")
)
|
------------------------------------------------------------------------------
-- FOTC version of the domain predicate of quicksort given by the
-- Bove-Capretta method
------------------------------------------------------------------------------
{-# OPTIONS --exact-split #-}
{-# OPTIONS --no-sized-types #-}
{-# OPTIONS --no-universe-polymorphism #-}
{-# OPTIONS --without-K #-}
module FOT.FOTC.Program.QuickSort.DomainPredicate where
open import FOTC.Base
open import FOTC.Base.List
open import FOTC.Data.Nat.Inequalities hiding ( le ; gt )
open import FOTC.Data.Nat.List.Type
open import FOTC.Data.Nat.Type
open import FOTC.Data.List
------------------------------------------------------------------------------
-- We need to define monadic inequalities.
postulate
le gt : D
le-00 : le Β· zero Β· zero β‘ false
le-0S : β d β le Β· zero Β· succβ d β‘ true
le-S0 : β d β le Β· succβ d Β· zero β‘ false
le-SS : β d e β le Β· succβ d Β· succβ e β‘ lt d e
postulate
filter : D β D β D
filter-[] : β f β filter f [] β‘ []
filter-β· : β f d ds β
filter f (d β· ds) β‘
                 (if f Β· d then d β· filter f ds else filter f ds)
postulate filter-List : β f {xs} β List xs β List (filter f xs)
postulate
qs : D β D
qs-[] : qs [] β‘ []
qs-β· : β x xs β qs (x β· xs) β‘ qs (filter (gt Β· x) xs) ++
x β· qs (filter (le Β· x) xs)
-- Domain predicate for quicksort.
data Dqs : {xs : D} β List xs β Set where
dnil : Dqs lnil
dcons : β {x xs} β (Lxs : List xs) β
Dqs (filter-List (gt Β· x) Lxs) β
Dqs (filter-List (le Β· x) Lxs) β
Dqs (lcons x Lxs)
-- Induction principle associated to the domain predicate of quicksort.
Dqs-ind : (P : {xs : D} β List xs β Set) β
P lnil β
(β {x xs} β (Lxs : List xs) β
Dqs (filter-List (gt Β· x) Lxs) β
P (filter-List (gt Β· x) Lxs) β
Dqs (filter-List (le Β· x) Lxs) β
P (filter-List (le Β· x) Lxs) β
P (lcons x Lxs)) β
(β {xs} β {Lxs : List xs} β Dqs Lxs β P Lxs)
Dqs-ind P P[] ih dnil = P[]
Dqs-ind P P[] ih (dcons Lxs hβ hβ) =
ih Lxs hβ (Dqs-ind P P[] ih hβ) hβ (Dqs-ind P P[] ih hβ)
|
The absolute value of the infnorm of a real number is equal to the infnorm of the real number. |
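A minimal sketch of the argument in LaTeX (assuming the standard fact that the infinity norm is nonnegative):

```latex
% Since \|x\|_\infty \ge 0 for every real x, taking the absolute value
% of the norm changes nothing:
\[
  \bigl|\,\|x\|_\infty\,\bigr| \;=\; \|x\|_\infty
  \quad\text{because}\quad \|x\|_\infty \ge 0
  \text{ and } |a| = a \text{ whenever } a \ge 0 .
\]
```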
plusnZ : (n : Nat) -> n + 0 = n
plusnZ 0 = Refl
plusnZ (S k) = rewrite plusnZ k in Refl
plusnSm : (n, m : Nat) -> n + (S m) = S (n + m)
plusnSm Z m = Refl
plusnSm (S k) m = rewrite plusnSm k m in Refl
plusCommutes : (n, m : Nat) -> n + m = m + n
plusCommutes Z m = sym (plusnZ m)
plusCommutes (S k) m = rewrite plusCommutes k m in sym (plusnSm m k)
wrongCommutes : (n, m : Nat) -> n + m = m + n
wrongCommutes Z m = sym (plusnZ m)
wrongCommutes (S k) m = rewrite plusCommutes m k in ?bar
wrongCommutes2 : (n, m : Nat) -> n + m = m + n
wrongCommutes2 Z m = sym (plusnZ m)
wrongCommutes2 (S k) m = rewrite m in ?bar
|
#pragma once
#include "shared.h"
#include <boost/numeric/odeint.hpp>
using namespace boost::numeric::odeint;
typedef std::vector<double> state_type;
typedef runge_kutta_cash_karp54< state_type > error_stepper_type;
class Elevator
{
private:
bool Initialize = false;
bool FixedEarthBoundary = false;
const uint32_t
Dimensions = 3;
uint32_t
ChainSize = 0,
TotalSteps = 0,
SavedSteps = 0;
double
L0 = 0.0,
GM = 0.0,
T0 = 0.0,
Tf = 0.0,
Dt = 0.0,
WPlanet = 0.0,
RSurface = 0.0,
FrictA = 0.0,
FrictB = 0.0,
FrictC = 0.0,
* AnchorAngles = nullptr,
* SprK = nullptr,
* RotK = nullptr,
* Inertia = nullptr,
* Mass = nullptr;
state_type * State = nullptr;
std::vector<double*> StoredStates;
void __RHS(state_type &y, state_type &dy, double t);
double __Distance(const state_type &y, uint32_t i, uint32_t j);
double __Modulus2(const state_type &y, uint32_t i);
double __Alpha(const state_type &y, uint32_t i);
void __EarthFixedBoundary(state_type &y, state_type &dy, const double t);
void __EarthFreeBoundary(state_type &y, state_type &dy, const double t);
void __SpaceFreeBoundary(state_type &y, state_type &dy, const double t);
void __StoreState(const state_type &y, const double t);
public:
Elevator() {}
	// Virtual: the class exposes a virtual member (InitializeFromFile), so
	// deletion through a base pointer must invoke the derived destructor.
	virtual ~Elevator() {
delete[] SprK;
delete[] RotK;
delete[] Inertia;
delete[] Mass;
delete[] AnchorAngles;
delete State;
}
virtual void InitializeFromFile(char const * filename);
void LoadSystem(char const * filename);
void SaveSystem(char const * filename);
void Integrate();
};
|
theory flash8Bra imports flash8Rev
begin
lemma onInv8:
assumes a1:"iInv1 \<le> N" and a2:"iInv2 \<le> N" and a3:"iInv1~=iInv2 " and
b1:"r \<in> rules N" and b2:"invf=inv8 iInv1 iInv2 "
shows "invHoldForRule' s invf r (invariants N)"
proof -
have c1:"ex1P N (% iRule1 . r=NI_Local_GetX_PutX1 N iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_GetX iRule1 )\<or>ex1P N (% iRule1 . r=NI_Replace iRule1 )\<or>ex0P N ( r=NI_ShWb N )\<or>ex0P N ( r=PI_Local_GetX_GetX2 )\<or>ex0P N ( r=NI_Local_PutXAcksDone )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_PutX7 N iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_Get_Nak2 iRule1 )\<or>ex0P N ( r=NI_ReplaceHomeShrVld )\<or>ex1P N (% iRule1 . r=NI_Remote_Put iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_PutX5 N iRule1 )\<or>ex0P N ( r=NI_Wb )\<or>ex1P N (% iRule1 . r=NI_Local_Get_Get iRule1 )\<or>ex0P N ( r=PI_Local_Replace )\<or>ex1P N (% iRule1 . r=NI_ReplaceShrVld iRule1 )\<or>ex2P N (% iRule1 iRule2 . r=NI_Local_GetX_PutX8 N iRule1 iRule2 )\<or>ex1P N (% iRule1 . r=NI_InvAck_2 N iRule1 )\<or>ex2P N (% iRule1 iRule2 . r=NI_Remote_Get_Nak2 iRule1 iRule2 )\<or>ex1P N (% iRule1 . r=PI_Remote_Replace iRule1 )\<or>ex0P N ( r=NI_Nak_Home )\<or>ex1P N (% iRule1 . r=NI_Local_Get_Put2 iRule1 )\<or>ex2P N (% iRule1 iRule2 . r=NI_InvAck_1 iRule1 iRule2 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_PutX11 N iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_PutX6 N iRule1 )\<or>ex2P N (% iRule1 iRule2 . r=NI_Remote_Get_Put2 iRule1 iRule2 )\<or>ex0P N ( r=PI_Local_Get_Put )\<or>ex0P N ( r=PI_Local_GetX_PutX1 N )\<or>ex1P N (% iRule1 . r=NI_InvAck_1_Home iRule1 )\<or>ex1P N (% iRule1 . r=NI_Remote_Get_Nak1 iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_Get_Nak1 iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_Nak2 iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_PutX10_home N iRule1 )\<or>ex1P N (% iRule1 . r=PI_Remote_Get iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_Nak3 iRule1 )\<or>ex2P N (% iRule1 iRule2 . r=NI_Local_GetX_PutX10 N iRule1 iRule2 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_PutX2 N iRule1 )\<or>ex1P N (% iRule1 . r=NI_Remote_Get_Put1 iRule1 )\<or>ex1P N (% iRule1 . r=NI_Remote_PutX iRule1 )\<or>ex1P N (% iRule1 . r=Store iRule1 )\<or>ex0P N ( r=NI_FAck )\<or>ex1P N (% iRule1 . 
r=NI_Local_GetX_PutX3 N iRule1 )\<or>ex0P N ( r=PI_Local_GetX_PutX3 )\<or>ex2P N (% iRule1 iRule2 . r=NI_Remote_GetX_PutX iRule1 iRule2 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_PutX8_home N iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_Get_Put1 N iRule1 )\<or>ex0P N ( r=PI_Local_GetX_GetX1 )\<or>ex0P N ( r=StoreHome )\<or>ex2P N (% iRule1 iRule2 . r=NI_Remote_GetX_Nak iRule1 iRule2 )\<or>ex1P N (% iRule1 . r=NI_Inv iRule1 )\<or>ex1P N (% iRule1 . r=PI_Remote_PutX iRule1 )\<or>ex0P N ( r=PI_Local_GetX_PutX4 )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_PutX4 N iRule1 )\<or>ex1P N (% iRule1 . r=NI_Nak iRule1 )\<or>ex0P N ( r=PI_Local_GetX_PutX2 N )\<or>ex0P N ( r=NI_Local_Put )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_Nak1 iRule1 )\<or>ex0P N ( r=NI_Nak_Clear )\<or>ex0P N ( r=PI_Local_PutX )\<or>ex1P N (% iRule1 . r=NI_Local_Get_Nak3 iRule1 )\<or>ex1P N (% iRule1 . r=NI_Remote_GetX_Nak_Home iRule1 )\<or>ex0P N ( r=PI_Local_Get_Get )\<or>ex1P N (% iRule1 . r=NI_Local_GetX_PutX9 N iRule1 )\<or>ex1P N (% iRule1 . r=PI_Remote_GetX iRule1 )\<or>ex0P N ( r=NI_ReplaceHome )\<or>ex1P N (% iRule1 . r=NI_Remote_GetX_PutX_Home iRule1 )\<or>ex1P N (% iRule1 . r=NI_Local_Get_Put3 iRule1 )"
apply(cut_tac b1)
apply auto
done moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX1 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX1 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX1 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX1VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_GetX iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_GetX iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_GetX iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_GetXVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Replace iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Replace iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Replace iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_ReplaceVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= NI_ShWb N )
"
from c1 have c2:" r= NI_ShWb N "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_ShWb N ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis NI_ShWbVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_GetX_GetX2 )
"
from c1 have c2:" r= PI_Local_GetX_GetX2 "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_GetX_GetX2 ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_GetX_GetX2VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= NI_Local_PutXAcksDone )
"
from c1 have c2:" r= NI_Local_PutXAcksDone "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_PutXAcksDone ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis NI_Local_PutXAcksDoneVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX7 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX7 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX7 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX7VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_Get_Nak2 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_Get_Nak2 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_Get_Nak2 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_Get_Nak2VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= NI_ReplaceHomeShrVld )
"
from c1 have c2:" r= NI_ReplaceHomeShrVld "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_ReplaceHomeShrVld ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis NI_ReplaceHomeShrVldVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Remote_Put iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Remote_Put iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_Put iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_PutVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX5 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX5 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX5 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX5VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= NI_Wb )
"
from c1 have c2:" r= NI_Wb "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Wb ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis NI_WbVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_Get_Get iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_Get_Get iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_Get_Get iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_Get_GetVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_Replace )
"
from c1 have c2:" r= PI_Local_Replace "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_Replace ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_ReplaceVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_ReplaceShrVld iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_ReplaceShrVld iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_ReplaceShrVld iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_ReplaceShrVldVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex2P N (% iRule1 iRule2 . r= NI_Local_GetX_PutX8 N iRule1 iRule2 )
"
from c1 obtain iRule1 iRule2 where c2:" iRule1~=iRule2 \<and> iRule1 \<le> N \<and> iRule2 \<le> N \<and> r= NI_Local_GetX_PutX8 N iRule1 iRule2 "
by (auto simp add: ex2P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX8 N iRule1 iRule2 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX8VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_InvAck_2 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_InvAck_2 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_InvAck_2 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_InvAck_2VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex2P N (% iRule1 iRule2 . r= NI_Remote_Get_Nak2 iRule1 iRule2 )
"
from c1 obtain iRule1 iRule2 where c2:" iRule1~=iRule2 \<and> iRule1 \<le> N \<and> iRule2 \<le> N \<and> r= NI_Remote_Get_Nak2 iRule1 iRule2 "
by (auto simp add: ex2P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_Get_Nak2 iRule1 iRule2 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_Get_Nak2VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= PI_Remote_Replace iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= PI_Remote_Replace iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Remote_Replace iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis PI_Remote_ReplaceVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= NI_Nak_Home )
"
from c1 have c2:" r= NI_Nak_Home "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Nak_Home ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis NI_Nak_HomeVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_Get_Put2 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_Get_Put2 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_Get_Put2 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_Get_Put2VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex2P N (% iRule1 iRule2 . r= NI_InvAck_1 iRule1 iRule2 )
"
from c1 obtain iRule1 iRule2 where c2:" iRule1~=iRule2 \<and> iRule1 \<le> N \<and> iRule2 \<le> N \<and> r= NI_InvAck_1 iRule1 iRule2 "
by (auto simp add: ex2P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_InvAck_1 iRule1 iRule2 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_InvAck_1VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX11 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX11 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX11 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX11VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX6 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX6 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX6 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX6VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex2P N (% iRule1 iRule2 . r= NI_Remote_Get_Put2 iRule1 iRule2 )
"
from c1 obtain iRule1 iRule2 where c2:" iRule1~=iRule2 \<and> iRule1 \<le> N \<and> iRule2 \<le> N \<and> r= NI_Remote_Get_Put2 iRule1 iRule2 "
by (auto simp add: ex2P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_Get_Put2 iRule1 iRule2 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_Get_Put2VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_Get_Put )
"
from c1 have c2:" r= PI_Local_Get_Put "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_Get_Put ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_Get_PutVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_GetX_PutX1 N )
"
from c1 have c2:" r= PI_Local_GetX_PutX1 N "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_GetX_PutX1 N ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_GetX_PutX1VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_InvAck_1_Home iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_InvAck_1_Home iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_InvAck_1_Home iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_InvAck_1_HomeVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Remote_Get_Nak1 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Remote_Get_Nak1 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_Get_Nak1 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_Get_Nak1VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_Get_Nak1 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_Get_Nak1 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_Get_Nak1 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_Get_Nak1VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_Nak2 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_Nak2 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_Nak2 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_Nak2VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX10_home N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX10_home N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX10_home N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX10_homeVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= PI_Remote_Get iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= PI_Remote_Get iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Remote_Get iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis PI_Remote_GetVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_Nak3 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_Nak3 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_Nak3 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_Nak3VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex2P N (% iRule1 iRule2 . r= NI_Local_GetX_PutX10 N iRule1 iRule2 )
"
from c1 obtain iRule1 iRule2 where c2:" iRule1~=iRule2 \<and> iRule1 \<le> N \<and> iRule2 \<le> N \<and> r= NI_Local_GetX_PutX10 N iRule1 iRule2 "
by (auto simp add: ex2P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX10 N iRule1 iRule2 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX10VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX2 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX2 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX2 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX2VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Remote_Get_Put1 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Remote_Get_Put1 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_Get_Put1 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_Get_Put1VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Remote_PutX iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Remote_PutX iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_PutX iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_PutXVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= Store iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= Store iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (Store iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis StoreVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= NI_FAck )
"
from c1 have c2:" r= NI_FAck "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_FAck ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis NI_FAckVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX3 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX3 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX3 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX3VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_GetX_PutX3 )
"
from c1 have c2:" r= PI_Local_GetX_PutX3 "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_GetX_PutX3 ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_GetX_PutX3VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex2P N (% iRule1 iRule2 . r= NI_Remote_GetX_PutX iRule1 iRule2 )
"
from c1 obtain iRule1 iRule2 where c2:" iRule1~=iRule2 \<and> iRule1 \<le> N \<and> iRule2 \<le> N \<and> r= NI_Remote_GetX_PutX iRule1 iRule2 "
by (auto simp add: ex2P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_GetX_PutX iRule1 iRule2 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_GetX_PutXVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX8_home N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX8_home N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX8_home N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX8_homeVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_Get_Put1 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_Get_Put1 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_Get_Put1 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_Get_Put1VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_GetX_GetX1 )
"
from c1 have c2:" r= PI_Local_GetX_GetX1 "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_GetX_GetX1 ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_GetX_GetX1VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= StoreHome )
"
from c1 have c2:" r= StoreHome "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (StoreHome ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis StoreHomeVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex2P N (% iRule1 iRule2 . r= NI_Remote_GetX_Nak iRule1 iRule2 )
"
from c1 obtain iRule1 iRule2 where c2:" iRule1~=iRule2 \<and> iRule1 \<le> N \<and> iRule2 \<le> N \<and> r= NI_Remote_GetX_Nak iRule1 iRule2 "
by (auto simp add: ex2P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_GetX_Nak iRule1 iRule2 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_GetX_NakVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Inv iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Inv iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Inv iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_InvVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= PI_Remote_PutX iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= PI_Remote_PutX iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Remote_PutX iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis PI_Remote_PutXVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_GetX_PutX4 )
"
from c1 have c2:" r= PI_Local_GetX_PutX4 "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_GetX_PutX4 ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_GetX_PutX4VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX4 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX4 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX4 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX4VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Nak iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Nak iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Nak iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_NakVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_GetX_PutX2 N )
"
from c1 have c2:" r= PI_Local_GetX_PutX2 N "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_GetX_PutX2 N ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_GetX_PutX2VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= NI_Local_Put )
"
from c1 have c2:" r= NI_Local_Put "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_Put ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis NI_Local_PutVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_Nak1 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_Nak1 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_Nak1 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_Nak1VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= NI_Nak_Clear )
"
from c1 have c2:" r= NI_Nak_Clear "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Nak_Clear ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis NI_Nak_ClearVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_PutX )
"
from c1 have c2:" r= PI_Local_PutX "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_PutX ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_PutXVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_Get_Nak3 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_Get_Nak3 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_Get_Nak3 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_Get_Nak3VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Remote_GetX_Nak_Home iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Remote_GetX_Nak_Home iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_GetX_Nak_Home iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_GetX_Nak_HomeVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= PI_Local_Get_Get )
"
from c1 have c2:" r= PI_Local_Get_Get "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Local_Get_Get ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis PI_Local_Get_GetVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_GetX_PutX9 N iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_GetX_PutX9 N iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_GetX_PutX9 N iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_GetX_PutX9VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= PI_Remote_GetX iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= PI_Remote_GetX iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (PI_Remote_GetX iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis PI_Remote_GetXVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex0P N ( r= NI_ReplaceHome )
"
from c1 have c2:" r= NI_ReplaceHome "
by (auto simp add: ex0P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_ReplaceHome ) (invariants N) "
apply(cut_tac a1 a2 a3 b2 c2 )
by (metis NI_ReplaceHomeVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Remote_GetX_PutX_Home iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Remote_GetX_PutX_Home iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Remote_GetX_PutX_Home iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Remote_GetX_PutX_HomeVsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
} moreover
{assume c1: "ex1P N (% iRule1 . r= NI_Local_Get_Put3 iRule1 )
"
from c1 obtain iRule1 where c2:" iRule1 \<le> N \<and> r= NI_Local_Get_Put3 iRule1 "
by (auto simp add: ex1P_def)
have "invHoldForRule' s (inv8 iInv1 iInv2 ) (NI_Local_Get_Put3 iRule1 ) (invariants N) "
apply(cut_tac c2 a1 a2 a3 )
by (metis NI_Local_Get_Put3VsInv8 )
then have "invHoldForRule' s invf r (invariants N) "
by(cut_tac c2 b2, metis)
}ultimately show "invHoldForRule' s invf r (invariants N) "
by blast
qed
end |
module GrinFFI
public export
int_print : Int -> IO ()
int_print i
= foreign FFI_C "idris_int_print" (Int -> IO ()) i
public export
bool_print : Bool -> IO ()
bool_print b =
foreign FFI_C "idris_int_print" (Int -> IO ()) $
case b of
False => 0
True => 1
|
using Distributed
addprocs(4)
@everywhere include("get_objective_difference_solenoid.jl")
get_JavgN(0.001)
|
//impulse: /give @p spawn_egg 1 0 {display:{Name:"Light Bridge Spawner"},EntityTag:{id:"minecraft:chicken",CustomName:"MMH_LightBridgeSpawner",Silent:1,NoGravity:1}}
//MMH_loadChunks()
///clone ~1 ~1 ~1 ~3 ~3 ~3 20 1 16
#MMH
repeat process MMH_lightBridgeSpawner {
if: /testfor @e[type=Chicken,name=MMH_LightBridgeSpawner]
then {
MMH_loadChunks()
/execute @e[type=Chicken,name=MMH_LightBridgeSpawner] ~ ~ ~ scoreboard players operation Rotation MMH_Rotation = @p MMH_Rotation
/scoreboard players test Rotation MMH_Rotation 0 0
conditional: /execute @e[type=Chicken,name=MMH_LightBridgeSpawner] ~ ~ ~ clone 20 1 16 20 3 17 ~-2 ~-1 ~-1 masked
/scoreboard players test Rotation MMH_Rotation 1 1
conditional: /execute @e[type=Chicken,name=MMH_LightBridgeSpawner] ~ ~ ~ clone 21 1 16 22 3 16 ~ ~-1 ~-2 masked
/scoreboard players test Rotation MMH_Rotation 2 2
conditional: /execute @e[type=Chicken,name=MMH_LightBridgeSpawner] ~ ~ ~ clone 22 1 17 22 3 18 ~2 ~-1 ~ masked
/scoreboard players test Rotation MMH_Rotation 3 3
conditional: /execute @e[type=Chicken,name=MMH_LightBridgeSpawner] ~ ~ ~ clone 20 1 18 21 3 18 ~-1 ~-1 ~2 masked
/tp @e[type=Chicken,name=MMH_LightBridgeSpawner] ~ -100 ~
}
}
//impulse: /summon area_effect_cloud ~2 ~1 ~ {CustomName:"ACV_LightBridge",Rotation:[-90.0f,0.0f],Duration:2147483647}
///execute @e[name=ACV_lightBridges] ~ ~ ~ setblock ~ ~ ~ redstone_block
//impulse: /summon area_effect_cloud ~2 ~-1 ~ {CustomName:"ACV_AntiBridge",Rotation:[-90.0f,0.0f],Duration:2147483647}
///execute @e[name=ACV_antiBridges] ~ ~ ~ setblock ~ ~ ~ redstone_block
//
//impulse: /summon area_effect_cloud ~ ~1 ~2 {CustomName:"ACV_LightBridge",Rotation:[0.0f,0.0f],Duration:2147483647}
///execute @e[name=ACV_lightBridges] ~ ~ ~ setblock ~ ~ ~ redstone_block
//impulse: /summon area_effect_cloud ~ ~-1 ~2 {CustomName:"ACV_AntiBridge",Rotation:[0.0f,0.0f],Duration:2147483647}
///execute @e[name=ACV_antiBridges] ~ ~ ~ setblock ~ ~ ~ redstone_block
//
//impulse: /summon area_effect_cloud ~-2 ~1 ~ {CustomName:"ACV_LightBridge",Rotation:[90.0f,0.0f],Duration:2147483647}
///execute @e[name=ACV_lightBridges] ~ ~ ~ setblock ~ ~ ~ redstone_block
//impulse: /summon area_effect_cloud ~-2 ~-1 ~ {CustomName:"ACV_AntiBridge",Rotation:[90.0f,0.0f],Duration:2147483647}
///execute @e[name=ACV_antiBridges] ~ ~ ~ setblock ~ ~ ~ redstone_block
//
//impulse: /summon area_effect_cloud ~ ~1 ~-2 {CustomName:"ACV_LightBridge",Rotation:[180.0f,0.0f],Duration:2147483647}
///execute @e[name=ACV_lightBridges] ~ ~ ~ setblock ~ ~ ~ redstone_block
//impulse: /summon area_effect_cloud ~ ~-1 ~-2 {CustomName:"ACV_AntiBridge",Rotation:[180.0f,0.0f],Duration:2147483647}
///execute @e[name=ACV_antiBridges] ~ ~ ~ setblock ~ ~ ~ redstone_block
|
[STATEMENT]
lemma parts_insert2:
"parts (insert X (insert Y H)) = parts {X} \<union> parts {Y} \<union> parts H"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. parts (insert X (insert Y H)) = parts {X} \<union> parts {Y} \<union> parts H
[PROOF STEP]
apply (simp add: Un_assoc)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. parts (insert X (insert Y H)) = parts {X} \<union> (parts {Y} \<union> parts H)
[PROOF STEP]
apply (simp add: parts_insert [symmetric])
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done |
/-
Copyright (c) 2018 Simon Hudon. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Simon Hudon, Scott Morrison
! This file was ported from Lean 3 source module tactic.solve_by_elim
! leanprover-community/mathlib commit f694c7dead66f5d4c80f446c796a5aad14707f0e
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Tactic.Core
/-!
# solve_by_elim
A depth-first search backwards reasoner.
`solve_by_elim` takes a list of lemmas, and repeatedly tries to `apply` these against
the goals, recursively acting on any generated subgoals.
It accepts a variety of configuration options described below, enabling
* backtracking across multiple goals,
* pruning the search tree, and
* invoking other tactics before or after trying to apply lemmas.
At present it has no "premise selection", and simply tries the supplied lemmas in order
at each step of the search.
-/
namespace Tactic
namespace SolveByElim
/-- `mk_assumption_set` builds a collection of lemmas for use in
the backtracking search in `solve_by_elim`.
* By default, it includes all local hypotheses, along with `rfl`, `trivial`, `congr_fun` and
`congr_arg`.
* The flag `no_dflt` removes these.
* The argument `hs` is a list of `simp_arg_type`s,
and can be used to add, or remove, lemmas or expressions from the set.
* The argument `attr : list name` adds all lemmas tagged with one of a specified list of attributes.
`mk_assumption_set` returns not a `list expr`, but a `list (tactic expr) × tactic (list expr)`.
There are two separate problems that need to be solved.
### Relevant local hypotheses
`solve_by_elim*` works with multiple goals,
and we need to use separate sets of local hypotheses for each goal.
The second component of the returned value provides these local hypotheses.
(Essentially using `local_context`, along with some filtering to remove hypotheses
that have been explicitly removed via `only` or `[-h]`.)
### Stuck metavariables
Lemmas with implicit arguments would be filled in with metavariables if we created the
`expr` objects immediately, so instead we return thunks that generate the expressions
on demand. This is the first component, with type `list (tactic expr)`.
As an example, we have `def rfl : ∀ {α : Sort u} {a : α}, a = a`, which on elaboration will become
`@rfl ?m_1 ?m_2`.
Because `solve_by_elim` works by repeated application of lemmas against subgoals,
the first time such a lemma is successfully applied,
those metavariables will be unified, and thereafter have fixed values.
This would make it impossible to apply the lemma
a second time with different values of the metavariables.
See https://github.com/leanprover-community/mathlib/issues/2269
As an optimisation, after we build the list of `tactic expr`s, we actually run them, and replace any
that do not in fact produce metavariables with a simple `return` tactic.
-/
unsafe def mk_assumption_set (no_dflt : Bool) (hs : List simp_arg_type) (attr : List Name) :
tactic (List (tactic expr) × tactic (List expr)) :=
-- We lock the tactic state so that any spurious goals generated during
-- elaboration of pre-expressions are discarded
lock_tactic_state
do
let-- `hs` are expressions specified explicitly,
-- `hex` are exceptions (specified via `solve_by_elim [-h]`) referring to local hypotheses,
-- `gex` are the other exceptions
(hs, gex, hex, all_hyps)
← decode_simp_arg_list hs
let-- Recall, per the discussion above, we produce `tactic expr` thunks rather than actual `expr`s.
-- Note that while we evaluate these thunks on two occasions below while preparing the list,
-- this is a one-time cost during `mk_assumption_set`, rather than a cost proportional to the
-- length of the search `solve_by_elim` executes.
hs := hs.map fun h => i_to_expr_for_apply h
let l ← attr.mapM fun a => attribute.get_instances a
let l := l.join
let m := l.map fun h => mk_const h
let hs ←
(-- In order to remove the expressions we need to evaluate the thunks.
hs ++
m).filterM
fun h => do
let h ← h
return <| expr.const_name h ∉ gex
let hs :=
if no_dflt then hs
else ([`rfl, `trivial, `congr_fun, `congr_arg].map fun n => mk_const n) ++ hs
let locals : tactic (List expr) :=
if ¬no_dflt ∨ all_hyps then do
let ctx ← local_context
-- Remove local exceptions specified in `hex`:
return <|
ctx fun h : expr => h ∉ hex
else return []
let hs
←-- Finally, run all of the tactics: any that return an expression without metavariables can safely
-- be replaced by a `return` tactic.
hs.mapM
fun h : tactic expr => do
let e ← h
if e then return h else return (return e)
return (hs, locals)
#align tactic.solve_by_elim.mk_assumption_set tactic.solve_by_elim.mk_assumption_set
/-- Configuration options for `solve_by_elim`.
* `accept : list expr → tactic unit` determines whether the current branch should be explored.
At each step, before the lemmas are applied,
`accept` is passed the proof terms for the original goals,
as reported by `get_goals` when `solve_by_elim` started.
These proof terms may be metavariables (if no progress has been made on that goal)
or may contain metavariables at some leaf nodes
(if the goal has been partially solved by previous `apply` steps).
If the `accept` tactic fails `solve_by_elim` aborts searching this branch and backtracks.
By default `accept := Ξ» _, skip` always succeeds.
(There is an example usage in `tests/solve_by_elim.lean`.)
* `pre_apply : tactic unit` specifies an additional tactic to run before each round of `apply`.
* `discharger : tactic unit` specifies an additional tactic to apply on subgoals
for which no lemma applies.
If that tactic succeeds, `solve_by_elim` will continue applying lemmas on resulting goals.
-/
unsafe structure basic_opt extends apply_any_opt where
accept : List expr → tactic Unit := fun _ => skip
pre_apply : tactic Unit := skip
discharger : tactic Unit := failed
max_depth : ℕ := 3
#align tactic.solve_by_elim.basic_opt tactic.solve_by_elim.basic_opt
initialize
registerTraceClass.1 `solve_by_elim
-- trace attempted lemmas
/-- A helper function for trace messages, prepending '....' depending on the current search depth.
-/
unsafe def solve_by_elim_trace (n : ℕ) (f : format) : tactic Unit :=
trace_if_enabled `solve_by_elim
((f!"[solve_by_elim {(List.replicate (n + 1) '.').asString} ") ++ f ++ "]")
#align tactic.solve_by_elim.solve_by_elim_trace tactic.solve_by_elim.solve_by_elim_trace
/-- A helper function to generate trace messages on successful applications. -/
unsafe def on_success (g : format) (n : ℕ) (e : expr) : tactic Unit := do
let pp ← pp e
solve_by_elim_trace n f! "✅ `{pp }` solves `⊢ {g}`"
#align tactic.solve_by_elim.on_success tactic.solve_by_elim.on_success
/-- A helper function to generate trace messages on unsuccessful applications. -/
unsafe def on_failure (g : format) (n : ℕ) : tactic Unit :=
solve_by_elim_trace n f! "❌ failed to solve `⊢ {g}`"
#align tactic.solve_by_elim.on_failure tactic.solve_by_elim.on_failure
/-- A helper function to generate the tactic that print trace messages.
This function exists to ensure the target is pretty printed only as necessary.
-/
unsafe def trace_hooks (n : ℕ) : tactic ((expr → tactic Unit) × tactic Unit) :=
if is_trace_enabled_for `solve_by_elim then do
let g ← target >>= pp
return (on_success g n, on_failure g n)
else return (fun _ => skip, skip)
#align tactic.solve_by_elim.trace_hooks tactic.solve_by_elim.trace_hooks
/-- The internal implementation of `solve_by_elim`, with a limiting counter.
-/
unsafe def solve_by_elim_aux (opt : basic_opt) (original_goals : List expr)
(lemmas : List (tactic expr)) (ctx : tactic (List expr)) : ℕ → tactic Unit
| n => do
-- First, check that progress so far is `accept`able.
lock_tactic_state
(original_goals instantiate_mvars >>= opt)
-- Then check if we've finished.
done >>
solve_by_elim_trace (opt - n) "success!" <|>
do
-- Otherwise, if there's more time left,
guard
(n > 0) <|>
solve_by_elim_trace opt "π aborting, hit depth limit" >> failed
-- run the `pre_apply` tactic, then
opt
let-- try either applying a lemma and recursing,
(on_success, on_failure)
← trace_hooks (opt - n)
let ctx_lemmas ← ctx
apply_any_thunk (lemmas ++ ctx_lemmas return) opt (solve_by_elim_aux (n - 1)) on_success
on_failure <|>-- or if that doesn't work, run the discharger and recurse.
opt >>
solve_by_elim_aux (n - 1)
#align tactic.solve_by_elim.solve_by_elim_aux tactic.solve_by_elim.solve_by_elim_aux
/-- Arguments for `solve_by_elim`:
* By default `solve_by_elim` operates only on the first goal,
but with `backtrack_all_goals := true`, it operates on all goals at once,
backtracking across goals as needed,
and only succeeds if it discharges all goals.
* `lemmas` specifies the list of lemmas to use in the backtracking search.
If `none`, `solve_by_elim` uses the local hypotheses,
along with `rfl`, `trivial`, `congr_arg`, and `congr_fun`.
* `lemma_thunks` provides the lemmas as a list of `tactic expr`,
which are used to regenerate the `expr` objects to avoid binding metavariables.
It should not usually be specified by the user.
(If both `lemmas` and `lemma_thunks` are specified, only `lemma_thunks` is used.)
* `ctx_thunk` is for internal use only: it returns the local hypotheses which will be used.
* `max_depth` bounds the depth of the search.
-/
unsafe structure opt extends basic_opt where
backtrack_all_goals : Bool := false
lemmas : Option (List expr) := none
lemma_thunks : Option (List (tactic expr)) := lemmas.map fun l => l.map return
ctx_thunk : tactic (List expr) := local_context
#align tactic.solve_by_elim.opt tactic.solve_by_elim.opt
/-- If no lemmas have been specified, generate the default set
(local hypotheses, along with `rfl`, `trivial`, `congr_arg`, and `congr_fun`).
-/
unsafe def opt.get_lemma_thunks (opt : opt) : tactic (List (tactic expr) × tactic (List expr)) :=
match opt.lemma_thunks with
| none => mk_assumption_set false [] []
| some lemma_thunks => return (lemma_thunks, opt.ctx_thunk)
#align tactic.solve_by_elim.opt.get_lemma_thunks tactic.solve_by_elim.opt.get_lemma_thunks
end SolveByElim
open SolveByElim
/-- `solve_by_elim` repeatedly tries `apply`ing a lemma
from the list of assumptions (passed via the `opt` argument),
recursively operating on any generated subgoals, backtracking as necessary.
`solve_by_elim` succeeds only if it discharges the goal.
(By default, `solve_by_elim` focuses on the first goal, and only attempts to solve that.
With the option `backtrack_all_goals := tt`,
it attempts to solve all goals, and only succeeds if it does so.
With `backtrack_all_goals := tt`, `solve_by_elim` will backtrack a solution it has found for
one goal if it then can't discharge other goals.)
If passed an empty list of assumptions, `solve_by_elim` builds a default set
as per the interactive tactic, using the `local_context` along with
`rfl`, `trivial`, `congr_arg`, and `congr_fun`.
To pass a particular list of assumptions, use the `lemmas` field
in the configuration argument. This expects an
`option (list expr)`. In certain situations it may be necessary to instead use the
`lemma_thunks` field, which expects a `option (list (tactic expr))`.
This allows for regenerating metavariables
for each application, which might otherwise get stuck.
See also the simpler tactic `apply_rules`, which does not perform backtracking.
-/
unsafe def solve_by_elim (opt : opt := { }) : tactic Unit := do
tactic.fail_if_no_goals
let (lemmas, ctx_lemmas) ← opt.get_lemma_thunks
(if opt then id else focus1) do
let gs β get_goals
solve_by_elim_aux opt gs lemmas ctx_lemmas opt <|>
fail
("`solve_by_elim` failed.\n" ++ "Try `solve_by_elim { max_depth := N }` for `N > " ++
toString opt ++
"`\n" ++
"or use `set_option trace.solve_by_elim true` to view the search.")
#align tactic.solve_by_elim tactic.solve_by_elim
/- ./././Mathport/Syntax/Translate/Tactic/Mathlib/Core.lean:38:34: unsupported: setup_tactic_parser -/
namespace Interactive
/- ./././Mathport/Syntax/Translate/Expr.lean:207:4: warning: unsupported notation `parser.optional -/
/-- `apply_assumption` looks for an assumption of the form `... → ∀ _, ... → head`
where `head` matches the current goal.
If this fails, `apply_assumption` will call `symmetry` and try again.
If this also fails, `apply_assumption` will call `exfalso` and try again,
so that if there is an assumption of the form `P → ¬ Q`, the new tactic state
will have two goals, `P` and `Q`.
Optional arguments:
- `lemmas`: a list of expressions to apply, instead of the local constants
- `tac`: a tactic to run on each subgoal after applying an assumption; if
this tactic fails, the corresponding assumption will be rejected and
the next one will be attempted.
-/
unsafe def apply_assumption (lemmas : parse (parser.optional pexpr_list))
(opt : apply_any_opt := { }) (tac : tactic Unit := skip) : tactic Unit := do
let lemmas ←
match lemmas with
| none => local_context
| some lemmas => lemmas.mapM to_expr
tactic.apply_any lemmas opt tac
#align tactic.interactive.apply_assumption tactic.interactive.apply_assumption
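As a hedged illustration of the tactic just defined (a hypothetical example, not part of the ported file; names `P`, `Q`, `h`, `p` are invented), the first `apply_assumption` can pick a hypothesis whose head matches the goal and the second can close the generated subgoal:

```lean
-- Sketch: `h : P → Q` matches the goal `Q`, leaving goal `P`,
-- which a second `apply_assumption` closes with `p`.
example (P Q : Prop) (h : P → Q) (p : P) : Q :=
by apply_assumption; apply_assumption
```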
add_tactic_doc
{ Name := "apply_assumption"
category := DocCategory.tactic
declNames := [`tactic.interactive.apply_assumption]
tags := ["context management", "lemma application"] }
/- ./././Mathport/Syntax/Translate/Expr.lean:207:4: warning: unsupported notation `parser.optional -/
/-- `solve_by_elim` calls `apply` on the main goal to find an assumption whose head matches
and then repeatedly calls `apply` on the generated subgoals until no subgoals remain,
performing at most `max_depth` recursive steps.
`solve_by_elim` discharges the current goal or fails.
`solve_by_elim` performs backtracking if subgoals cannot be solved.
By default, the assumptions passed to `apply` are the local context, `rfl`, `trivial`,
`congr_fun` and `congr_arg`.
The assumptions can be modified with similar syntax as for `simp`:
* `solve_by_elim [h₁, h₂, ..., hᵣ]` also applies the named lemmas.
* `solve_by_elim with attr₁ ... attrᵣ` also applies all lemmas tagged with the specified attributes.
* `solve_by_elim only [h₁, h₂, ..., hᵣ]` does not include the local context,
`rfl`, `trivial`, `congr_fun`, or `congr_arg` unless they are explicitly included.
* `solve_by_elim [-id_1, ... -id_n]` uses the default assumptions, removing the specified ones.
`solve_by_elim*` tries to solve all goals together, using backtracking if a solution for one goal
makes other goals impossible.
optional arguments passed via a configuration argument as `solve_by_elim { ... }`
- max_depth: number of attempts at discharging generated sub-goals
- discharger: a subsidiary tactic to try at each step when no lemmas apply
(e.g. `cc` may be helpful).
- pre_apply: a subsidiary tactic to run at each step before applying lemmas (e.g. `intros`).
- accept: a subsidiary tactic `list expr → tactic unit` that at each step,
before any lemmas are applied, is passed the original proof terms
as reported by `get_goals` when `solve_by_elim` started
(but which may by now have been partially solved by previous `apply` steps).
If the `accept` tactic fails,
`solve_by_elim` will abort searching the current branch and backtrack.
This may be used to filter results, either at every step of the search,
or filtering complete results
(by testing for the absence of metavariables, and then the filtering condition).
-/
unsafe def solve_by_elim (all_goals : parse <| parser.optional (tk "*")) (no_dflt : parse only_flag)
(hs : parse simp_arg_list) (attr_names : parse with_ident_list)
(opt : solve_by_elim.opt := { }) : tactic Unit := do
let (lemma_thunks, ctx_thunk) ← mk_assumption_set no_dflt hs attr_names
tactic.solve_by_elim
{ opt with
backtrack_all_goals := all_goals ∨ opt
lemma_thunks := some lemma_thunks
ctx_thunk }
#align tactic.interactive.solve_by_elim tactic.interactive.solve_by_elim
add_tactic_doc
{ Name := "solve_by_elim"
category := DocCategory.tactic
declNames := [`tactic.interactive.solve_by_elim]
tags := ["search"] }
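A minimal usage sketch of the interactive tactic documented above (a hypothetical example with invented hypothesis names, untested against this port):

```lean
-- Sketch: the default assumption set plus `eq.trans` lets the
-- backtracking search chain `h₁` and `h₂` to close the goal.
example {a b : ℕ} (h₁ : a = b) (h₂ : b = 0) : a = 0 :=
by solve_by_elim [eq.trans]
```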
end Interactive
end Tactic
|
State Before: α : Type u
β : Type v
ι : Sort w
γ : Type x
s : Set α
f : α → β
hs : Set.Finite s
⊢ Set.Finite (f '' s) State After: case intro
α : Type u
β : Type v
ι : Sort w
γ : Type x
s : Set α
f : α → β
a✝ : Fintype ↑s
⊢ Set.Finite (f '' s) Tactic: cases hs State Before: case intro
α : Type u
β : Type v
ι : Sort w
γ : Type x
s : Set α
f : α → β
a✝ : Fintype ↑s
⊢ Set.Finite (f '' s) State After: no goals Tactic: apply toFinite |
/-
Copyright (c) 2017 Johannes HΓΆlzl. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Johannes HΓΆlzl
-/
import data.finset.fold
import data.equiv.mul_add
import tactic.abel
/-!
# Big operators
In this file we define products and sums indexed by finite sets (specifically, `finset`).
## Notation
We introduce the following notation, localized in `big_operators`.
To enable the notation, use `open_locale big_operators`.
Let `s` be a `finset α`, and `f : α → β` a function.
* `∏ x in s, f x` is notation for `finset.prod s f` (assuming `β` is a `comm_monoid`)
* `∑ x in s, f x` is notation for `finset.sum s f` (assuming `β` is an `add_comm_monoid`)
* `∏ x, f x` is notation for `finset.prod finset.univ f`
(assuming `α` is a `fintype` and `β` is a `comm_monoid`)
* `∑ x, f x` is notation for `finset.sum finset.univ f`
(assuming `α` is a `fintype` and `β` is an `add_comm_monoid`)
-/
universes u v w
variables {α : Type u} {β : Type v} {γ : Type w}
namespace finset
/--
`∏ x in s, f x` is the product of `f x`
as `x` ranges over the elements of the finite set `s`.
-/
@[to_additive "`∑ x in s, f` is the sum of `f x` as `x` ranges over the elements
of the finite set `s`."]
protected def prod [comm_monoid β] (s : finset α) (f : α → β) : β := (s.1.map f).prod
@[simp, to_additive] lemma prod_mk [comm_monoid β] (s : multiset α) (hs) (f : α → β) :
(⟨s, hs⟩ : finset α).prod f = (s.map f).prod :=
rfl
end finset
/--
There is no established mathematical convention
for the operator precedence of big operators like `∏` and `∑`.
We will have to make a choice.
Online discussions, such as https://math.stackexchange.com/q/185538/30839
seem to suggest that `∏` and `∑` should have the same precedence,
and that this should be somewhere between `*` and `+`.
The latter have precedence levels `70` and `65` respectively,
and we therefore choose the level `67`.
In practice, this means that parentheses should be placed as follows:
```lean
∑ k in K, (a k + b k) = ∑ k in K, a k + ∑ k in K, b k →
∏ k in K, a k * b k = (∏ k in K, a k) * (∏ k in K, b k)
```
(Example taken from page 490 of Knuth's *Concrete Mathematics*.)
-/
library_note "operator precedence of big operators"
localized "notation `∑` binders `, ` r:(scoped:67 f, finset.sum finset.univ f) := r"
in big_operators
localized "notation `∏` binders `, ` r:(scoped:67 f, finset.prod finset.univ f) := r"
in big_operators
localized "notation `∑` binders ` in ` s `, ` r:(scoped:67 f, finset.sum s f) := r"
in big_operators
localized "notation `∏` binders ` in ` s `, ` r:(scoped:67 f, finset.prod s f) := r"
in big_operators
open_locale big_operators
namespace finset
variables {s s₁ s₂ : finset α} {a : α} {f g : α → β}
@[to_additive] lemma prod_eq_multiset_prod [comm_monoid β] (s : finset α) (f : α → β) :
∏ x in s, f x = (s.1.map f).prod := rfl
@[to_additive]
theorem prod_eq_fold [comm_monoid β] (s : finset α) (f : α → β) :
(∏ x in s, f x) = s.fold (*) 1 f :=
rfl
@[simp] lemma sum_multiset_singleton (s : finset α) :
s.sum (λ x, x ::ₘ 0) = s.val :=
by simp [sum_eq_multiset_sum]
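As a quick illustration of the localized `∑`/`∏` notation introduced above (a hypothetical, untested example; `f` and `s` are invented names):

```lean
-- Sketch: with `open_locale big_operators` in scope, `simp` can
-- rewrite under the binder of a `finset.sum`.
example (s : finset ℕ) (f : ℕ → ℕ) :
  ∑ x in s, (f x + 0) = ∑ x in s, f x :=
by simp
```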
end finset
@[to_additive]
lemma monoid_hom.map_prod [comm_monoid β] [comm_monoid γ] (g : β →* γ) (f : α → β) (s : finset α) :
g (∏ x in s, f x) = ∏ x in s, g (f x) :=
by simp only [finset.prod_eq_multiset_prod, g.map_multiset_prod, multiset.map_map]
@[to_additive]
lemma mul_equiv.map_prod [comm_monoid β] [comm_monoid γ] (g : β ≃* γ) (f : α → β) (s : finset α) :
g (∏ x in s, f x) = ∏ x in s, g (f x) :=
g.to_monoid_hom.map_prod f s
lemma ring_hom.map_list_prod [semiring β] [semiring γ] (f : β →+* γ) (l : list β) :
f l.prod = (l.map f).prod :=
f.to_monoid_hom.map_list_prod l
lemma ring_hom.map_list_sum [semiring β] [semiring γ] (f : β →+* γ) (l : list β) :
f l.sum = (l.map f).sum :=
f.to_add_monoid_hom.map_list_sum l
lemma ring_hom.map_multiset_prod [comm_semiring β] [comm_semiring γ] (f : β →+* γ)
(s : multiset β) :
f s.prod = (s.map f).prod :=
f.to_monoid_hom.map_multiset_prod s
lemma ring_hom.map_multiset_sum [semiring β] [semiring γ] (f : β →+* γ) (s : multiset β) :
f s.sum = (s.map f).sum :=
f.to_add_monoid_hom.map_multiset_sum s
lemma ring_hom.map_prod [comm_semiring β] [comm_semiring γ] (g : β →+* γ) (f : α → β)
(s : finset α) :
g (∏ x in s, f x) = ∏ x in s, g (f x) :=
g.to_monoid_hom.map_prod f s
lemma ring_hom.map_sum [semiring β] [semiring γ]
(g : β →+* γ) (f : α → β) (s : finset α) :
g (∑ x in s, f x) = ∑ x in s, g (f x) :=
g.to_add_monoid_hom.map_sum f s
@[to_additive]
lemma monoid_hom.coe_prod [mul_one_class Ξ²] [comm_monoid Ξ³] (f : Ξ± β Ξ² β* Ξ³) (s : finset Ξ±) :
β(β x in s, f x) = β x in s, f x :=
(monoid_hom.coe_fn Ξ² Ξ³).map_prod _ _
-- See also `finset.prod_apply`, with the same conclusion
-- but with the weaker hypothesis `f : Ξ± β Ξ² β Ξ³`.
@[simp, to_additive]
lemma monoid_hom.finset_prod_apply [mul_one_class Ξ²] [comm_monoid Ξ³] (f : Ξ± β Ξ² β* Ξ³)
(s : finset Ξ±) (b : Ξ²) : (β x in s, f x) b = β x in s, f x b :=
(monoid_hom.eval b).map_prod _ _
variables {s sβ sβ : finset Ξ±} {a : Ξ±} {f g : Ξ± β Ξ²}
namespace finset
section comm_monoid
variables [comm_monoid Ξ²]
@[simp, to_additive]
lemma prod_empty {α : Type u} {f : α → β} : (∏ x in (∅ : finset α), f x) = 1 := rfl
@[simp, to_additive]
lemma prod_insert [decidable_eq Ξ±] : a β s β (β x in (insert a s), f x) = f a * β x in s, f x :=
fold_insert
/--
The product of `f` over `insert a s` is the same as
the product over `s`, as long as `a` is in `s` or `f a = 1`.
-/
@[simp, to_additive "The sum of `f` over `insert a s` is the same as
the sum over `s`, as long as `a` is in `s` or `f a = 0`."]
lemma prod_insert_of_eq_one_if_not_mem [decidable_eq Ξ±] (h : a β s β f a = 1) :
β x in insert a s, f x = β x in s, f x :=
begin
by_cases hm : a β s,
{ simp_rw insert_eq_of_mem hm },
{ rw [prod_insert hm, h hm, one_mul] },
end
/--
The product of `f` over `insert a s` is the same as the product over `s`, as long as `f a = 1`.
-/
@[simp, to_additive "The sum of `f` over `insert a s` is the same as
the sum over `s`, as long as `f a = 0`."]
lemma prod_insert_one [decidable_eq Ξ±] (h : f a = 1) :
β x in insert a s, f x = β x in s, f x :=
prod_insert_of_eq_one_if_not_mem (Ξ» _, h)
@[simp, to_additive]
lemma prod_singleton : (β x in (singleton a), f x) = f a :=
eq.trans fold_singleton $ mul_one _
@[to_additive]
lemma prod_pair [decidable_eq Ξ±] {a b : Ξ±} (h : a β b) :
(β x in ({a, b} : finset Ξ±), f x) = f a * f b :=
by rw [prod_insert (not_mem_singleton.2 h), prod_singleton]
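-- Usage sketch (illustrative example, not part of the original library): `prod_pair`
-- unfolds a product over an explicit two-element set, given a proof the elements differ.
example (f : ℕ → ℕ) : (∏ x in ({1, 2} : finset ℕ), f x) = f 1 * f 2 :=
prod_pair dec_trivial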
@[simp, priority 1100] lemma prod_const_one : (β x in s, (1 : Ξ²)) = 1 :=
by simp only [finset.prod, multiset.map_const, multiset.prod_repeat, one_pow]
@[simp, priority 1100] lemma sum_const_zero {Ξ²} {s : finset Ξ±} [add_comm_monoid Ξ²] :
(β x in s, (0 : Ξ²)) = 0 :=
@prod_const_one _ (multiplicative Ξ²) _ _
attribute [to_additive] prod_const_one
@[simp, to_additive]
lemma prod_image [decidable_eq Ξ±] {s : finset Ξ³} {g : Ξ³ β Ξ±} :
(βxβs, βyβs, g x = g y β x = y) β (β x in (s.image g), f x) = β x in s, f (g x) :=
fold_image
@[simp, to_additive]
lemma prod_map (s : finset Ξ±) (e : Ξ± βͺ Ξ³) (f : Ξ³ β Ξ²) :
(β x in (s.map e), f x) = β x in s, f (e x) :=
by rw [finset.prod, finset.map_val, multiset.map_map]; refl
@[congr, to_additive]
lemma prod_congr (h : sβ = sβ) : (βxβsβ, f x = g x) β sβ.prod f = sβ.prod g :=
by rw [h]; exact fold_congr
attribute [congr] finset.sum_congr
@[to_additive]
lemma prod_union_inter [decidable_eq Ξ±] :
(β x in (sβ βͺ sβ), f x) * (β x in (sβ β© sβ), f x) = (β x in sβ, f x) * (β x in sβ, f x) :=
fold_union_inter
@[to_additive]
lemma prod_union [decidable_eq Ξ±] (h : disjoint sβ sβ) :
(β x in (sβ βͺ sβ), f x) = (β x in sβ, f x) * (β x in sβ, f x) :=
by rw [βprod_union_inter, (disjoint_iff_inter_eq_empty.mp h)]; exact (mul_one _).symm
end comm_monoid
end finset
section
open finset
variables [fintype Ξ±] [decidable_eq Ξ±] [comm_monoid Ξ²]
@[to_additive]
lemma is_compl.prod_mul_prod {s t : finset Ξ±} (h : is_compl s t) (f : Ξ± β Ξ²) :
(β i in s, f i) * (β i in t, f i) = β i, f i :=
(finset.prod_union h.disjoint).symm.trans $ by rw [β finset.sup_eq_union, h.sup_eq_top]; refl
end
namespace finset
section comm_monoid
variables [comm_monoid Ξ²]
@[to_additive]
lemma prod_mul_prod_compl [fintype Ξ±] [decidable_eq Ξ±] (s : finset Ξ±) (f : Ξ± β Ξ²) :
(β i in s, f i) * (β i in sαΆ, f i) = β i, f i :=
is_compl_compl.prod_mul_prod f
@[to_additive]
lemma prod_compl_mul_prod [fintype Ξ±] [decidable_eq Ξ±] (s : finset Ξ±) (f : Ξ± β Ξ²) :
(β i in sαΆ, f i) * (β i in s, f i) = β i, f i :=
is_compl_compl.symm.prod_mul_prod f
@[to_additive]
lemma prod_sdiff [decidable_eq Ξ±] (h : sβ β sβ) :
(β x in (sβ \ sβ), f x) * (β x in sβ, f x) = (β x in sβ, f x) :=
by rw [βprod_union sdiff_disjoint, sdiff_union_of_subset h]
@[simp, to_additive]
lemma prod_sum_elim [decidable_eq (Ξ± β Ξ³)]
(s : finset Ξ±) (t : finset Ξ³) (f : Ξ± β Ξ²) (g : Ξ³ β Ξ²) :
β x in s.map function.embedding.inl βͺ t.map function.embedding.inr, sum.elim f g x =
(β x in s, f x) * (β x in t, g x) :=
begin
rw [prod_union, prod_map, prod_map],
{ simp only [sum.elim_inl, function.embedding.inl_apply, function.embedding.inr_apply,
sum.elim_inr] },
{ simp only [disjoint_left, finset.mem_map, finset.mem_map],
rintros _ β¨i, hi, rflβ© β¨j, hj, Hβ©,
cases H }
end
@[to_additive]
lemma prod_bUnion [decidable_eq Ξ±] {s : finset Ξ³} {t : Ξ³ β finset Ξ±} :
(β x β s, β y β s, x β y β disjoint (t x) (t y)) β
(β x in (s.bUnion t), f x) = β x in s, β i in t x, f i :=
by haveI := classical.dec_eq Ξ³; exact
finset.induction_on s (Ξ» _, by simp only [bUnion_empty, prod_empty])
(assume x s hxs ih hd,
have hd' : βxβs, βyβs, x β y β disjoint (t x) (t y),
from assume _ hx _ hy, hd _ (mem_insert_of_mem hx) _ (mem_insert_of_mem hy),
have βyβs, x β y,
from assume _ hy h, by rw [βh] at hy; contradiction,
have βyβs, disjoint (t x) (t y),
from assume _ hy, hd _ (mem_insert_self _ _) _ (mem_insert_of_mem hy) (this _ hy),
have disjoint (t x) (finset.bUnion s t),
from (disjoint_bUnion_right _ _ _).mpr this,
by simp only [bUnion_insert, prod_insert hxs, prod_union this, ih hd'])
@[to_additive]
lemma prod_product {s : finset Ξ³} {t : finset Ξ±} {f : Ξ³ΓΞ± β Ξ²} :
(β x in s.product t, f x) = β x in s, β y in t, f (x, y) :=
begin
haveI := classical.dec_eq Ξ±, haveI := classical.dec_eq Ξ³,
rw [product_eq_bUnion, prod_bUnion],
{ congr, funext, exact prod_image (Ξ» _ _ _ _ H, (prod.mk.inj H).2) },
simp only [disjoint_iff_ne, mem_image],
rintros _ _ _ _ h β¨_, _β© β¨_, _, β¨_, _β©β© β¨_, _β© β¨_, _, β¨_, _β©β© _,
apply h, cc
end
/-- An uncurried version of `finset.prod_product`. -/
@[to_additive "An uncurried version of `finset.sum_product`"]
lemma prod_product' {s : finset Ξ³} {t : finset Ξ±} {f : Ξ³ β Ξ± β Ξ²} :
(β x in s.product t, f x.1 x.2) = β x in s, β y in t, f x y :=
prod_product
/-- Product over a sigma type equals the product of fiberwise products. For rewriting
in the reverse direction, use `finset.prod_sigma'`. -/
@[to_additive "Sum over a sigma type equals the sum of fiberwise sums. For rewriting
in the reverse direction, use `finset.sum_sigma'`"]
lemma prod_sigma {Ο : Ξ± β Type*}
(s : finset Ξ±) (t : Ξ a, finset (Ο a)) (f : sigma Ο β Ξ²) :
(β x in s.sigma t, f x) = β a in s, β s in (t a), f β¨a, sβ© :=
by classical;
calc (β x in s.sigma t, f x) =
β x in s.bUnion (Ξ»a, (t a).map (function.embedding.sigma_mk a)), f x : by rw sigma_eq_bUnion
... = β a in s, β x in (t a).map (function.embedding.sigma_mk a), f x :
prod_bUnion $ assume aβ ha aβ haβ h x hx,
by { simp only [inf_eq_inter, mem_inter, mem_map, function.embedding.sigma_mk_apply] at hx,
rcases hx with β¨β¨y, hy, rflβ©, β¨z, hz, hz'β©β©, cc }
... = β a in s, β s in t a, f β¨a, sβ© :
prod_congr rfl $ Ξ» _ _, prod_map _ _ _
@[to_additive]
lemma prod_sigma' {Ο : Ξ± β Type*}
(s : finset Ξ±) (t : Ξ a, finset (Ο a)) (f : Ξ a, Ο a β Ξ²) :
(β a in s, β s in (t a), f a s) = β x in s.sigma t, f x.1 x.2 :=
eq.symm $ prod_sigma s t (Ξ» x, f x.1 x.2)
@[to_additive]
lemma prod_fiberwise_of_maps_to [decidable_eq Ξ³] {s : finset Ξ±} {t : finset Ξ³} {g : Ξ± β Ξ³}
(h : β x β s, g x β t) (f : Ξ± β Ξ²) :
(β y in t, β x in s.filter (Ξ» x, g x = y), f x) = β x in s, f x :=
begin
letI := classical.dec_eq Ξ±,
rw [β bUnion_filter_eq_of_maps_to h] {occs := occurrences.pos [2]},
refine (prod_bUnion $ Ξ» x' hx y' hy hne, _).symm,
rw [disjoint_filter],
rintros x hx rfl,
exact hne
end
@[to_additive]
lemma prod_image' [decidable_eq Ξ±] {s : finset Ξ³} {g : Ξ³ β Ξ±} (h : Ξ³ β Ξ²)
(eq : βcβs, f (g c) = β x in s.filter (Ξ»c', g c' = g c), h x) :
(β x in s.image g, f x) = β x in s, h x :=
calc (β x in s.image g, f x) = β x in s.image g, β x in s.filter (Ξ» c', g c' = x), h x :
prod_congr rfl $ Ξ» x hx, let β¨c, hcs, hcβ© := mem_image.1 hx in hc βΈ (eq c hcs)
... = β x in s, h x : prod_fiberwise_of_maps_to (Ξ» x, mem_image_of_mem g) _
@[to_additive]
lemma prod_mul_distrib : β x in s, (f x * g x) = (β x in s, f x) * (β x in s, g x) :=
eq.trans (by rw one_mul; refl) fold_op_distrib
@[to_additive]
lemma prod_comm {s : finset Ξ³} {t : finset Ξ±} {f : Ξ³ β Ξ± β Ξ²} :
(β x in s, β y in t, f x y) = (β y in t, β x in s, f x y) :=
begin
classical,
apply finset.induction_on s,
{ simp only [prod_empty, prod_const_one] },
{ intros _ _ H ih,
simp only [prod_insert H, prod_mul_distrib, ih] }
end
@[to_additive]
lemma prod_hom [comm_monoid Ξ³] (s : finset Ξ±) {f : Ξ± β Ξ²} (g : Ξ² β Ξ³) [is_monoid_hom g] :
(β x in s, g (f x)) = g (β x in s, f x) :=
((monoid_hom.of g).map_prod f s).symm
@[to_additive]
lemma prod_hom_rel [comm_monoid Ξ³] {r : Ξ² β Ξ³ β Prop} {f : Ξ± β Ξ²} {g : Ξ± β Ξ³} {s : finset Ξ±}
(hβ : r 1 1) (hβ : βa b c, r b c β r (f a * b) (g a * c)) : r (β x in s, f x) (β x in s, g x) :=
by { delta finset.prod, apply multiset.prod_hom_rel; assumption }
@[to_additive]
lemma prod_subset (h : sβ β sβ) (hf : β x β sβ, x β sβ β f x = 1) :
(β x in sβ, f x) = β x in sβ, f x :=
by haveI := classical.dec_eq Ξ±; exact
have β x in sβ \ sβ, f x = β x in sβ \ sβ, 1,
from prod_congr rfl $ by simpa only [mem_sdiff, and_imp],
by rw [βprod_sdiff h]; simp only [this, prod_const_one, one_mul]
@[to_additive]
lemma prod_filter_of_ne {p : Ξ± β Prop} [decidable_pred p] (hp : β x β s, f x β 1 β p x) :
(β x in (s.filter p), f x) = (β x in s, f x) :=
prod_subset (filter_subset _ _) $ Ξ» x,
by { classical, rw [not_imp_comm, mem_filter], exact Ξ» hβ hβ, β¨hβ, hp _ hβ hββ© }
-- If we use `[decidable_eq Ξ²]` here, some rewrites fail because they find a wrong `decidable`
-- instance first; `{βx, decidable (f x β 1)}` doesn't work with `rw β prod_filter_ne_one`
@[to_additive]
lemma prod_filter_ne_one [β x, decidable (f x β 1)] :
(β x in (s.filter $ Ξ»x, f x β 1), f x) = (β x in s, f x) :=
prod_filter_of_ne $ Ξ» _ _, id
@[to_additive]
lemma prod_filter (p : Ξ± β Prop) [decidable_pred p] (f : Ξ± β Ξ²) :
(β a in s.filter p, f a) = (β a in s, if p a then f a else 1) :=
calc (β a in s.filter p, f a) = β a in s.filter p, if p a then f a else 1 :
prod_congr rfl (assume a h, by rw [if_pos (mem_filter.1 h).2])
... = β a in s, if p a then f a else 1 :
begin
refine prod_subset (filter_subset _ s) (assume x hs h, _),
rw [mem_filter, not_and] at h,
exact if_neg (h hs)
end
@[to_additive]
lemma prod_eq_single_of_mem {s : finset Ξ±} {f : Ξ± β Ξ²} (a : Ξ±) (h : a β s)
(hβ : β b β s, b β a β f b = 1) : (β x in s, f x) = f a :=
begin
haveI := classical.dec_eq Ξ±,
calc (β x in s, f x) = β x in {a}, f x :
begin
refine (prod_subset _ _).symm,
{ intros _ H, rwa mem_singleton.1 H },
{ simpa only [mem_singleton] }
end
... = f a : prod_singleton
end
@[to_additive]
lemma prod_eq_single {s : finset Ξ±} {f : Ξ± β Ξ²} (a : Ξ±)
(hβ : βbβs, b β a β f b = 1) (hβ : a β s β f a = 1) : (β x in s, f x) = f a :=
by haveI := classical.dec_eq Ξ±;
from classical.by_cases
(assume : a β s, prod_eq_single_of_mem a this hβ)
(assume : a β s,
(prod_congr rfl $ Ξ» b hb, hβ b hb $ by rintro rfl; cc).trans $
prod_const_one.trans (hβ this).symm)
@[to_additive]
lemma prod_eq_mul {s : finset Ξ±} {f : Ξ± β Ξ²} (a b : Ξ±) (hn : a β b)
(hβ : β c β s, c β a β§ c β b β f c = 1) (ha : a β s β f a = 1) (hb : b β s β f b = 1) :
(β x in s, f x) = (f a) * (f b) :=
begin
haveI := classical.dec_eq Ξ±;
by_cases hβ : a β s; by_cases hβ : b β s,
{ exact prod_eq_mul_of_mem a b hβ hβ hn hβ },
{ rw [hb hβ, mul_one],
apply prod_eq_single_of_mem a hβ,
exact Ξ» c hc hca, hβ c hc β¨hca, ne_of_mem_of_not_mem hc hββ© },
{ rw [ha hβ, one_mul],
apply prod_eq_single_of_mem b hβ,
exact Ξ» c hc hcb, hβ c hc β¨ne_of_mem_of_not_mem hc hβ, hcbβ© },
{ rw [ha hβ, hb hβ, mul_one],
exact trans
(prod_congr rfl (Ξ» c hc, hβ c hc β¨ne_of_mem_of_not_mem hc hβ, ne_of_mem_of_not_mem hc hββ©))
prod_const_one }
end
@[to_additive]
lemma prod_attach {f : Ξ± β Ξ²} : (β x in s.attach, f x) = (β x in s, f x) :=
by haveI := classical.dec_eq Ξ±; exact
calc (β x in s.attach, f x.val) = (β x in (s.attach).image subtype.val, f x) :
by rw [prod_image]; exact assume x _ y _, subtype.eq
... = _ : by rw [attach_image_val]
/-- A product over `s.subtype p` equals one over `s.filter p`. -/
@[simp, to_additive "A sum over `s.subtype p` equals one over `s.filter p`."]
lemma prod_subtype_eq_prod_filter (f : Ξ± β Ξ²) {p : Ξ± β Prop} [decidable_pred p] :
β x in s.subtype p, f x = β x in s.filter p, f x :=
begin
conv_lhs {
erw βprod_map (s.subtype p) (function.embedding.subtype _) f
},
exact prod_congr (subtype_map _) (Ξ» x hx, rfl)
end
/-- If all elements of a `finset` satisfy the predicate `p`, a product
over `s.subtype p` equals that product over `s`. -/
@[to_additive "If all elements of a `finset` satisfy the predicate `p`, a sum
over `s.subtype p` equals that sum over `s`."]
lemma prod_subtype_of_mem (f : Ξ± β Ξ²) {p : Ξ± β Prop} [decidable_pred p]
(h : β x β s, p x) : β x in s.subtype p, f x = β x in s, f x :=
by simp_rw [prod_subtype_eq_prod_filter, filter_true_of_mem h]
/-- A product of a function over a `finset` in a subtype equals a
product in the main type of a function that agrees with the first
function on that `finset`. -/
@[to_additive "A sum of a function over a `finset` in a subtype equals a
sum in the main type of a function that agrees with the first
function on that `finset`."]
lemma prod_subtype_map_embedding {p : Ξ± β Prop} {s : finset {x // p x}} {f : {x // p x} β Ξ²}
{g : Ξ± β Ξ²} (h : β x : {x // p x}, x β s β g x = f x) :
β x in s.map (function.embedding.subtype _), g x = β x in s, f x :=
begin
rw finset.prod_map,
exact finset.prod_congr rfl h
end
@[to_additive]
lemma prod_finset_coe (f : Ξ± β Ξ²) (s : finset Ξ±) :
β (i : (s : set Ξ±)), f i = β i in s, f i :=
prod_attach
@[to_additive]
lemma prod_subtype {p : Ξ± β Prop} {F : fintype (subtype p)} (s : finset Ξ±)
(h : β x, x β s β p x) (f : Ξ± β Ξ²) :
β a in s, f a = β a : subtype p, f a :=
have (β s) = p, from set.ext h, by { substI p, rw [βprod_finset_coe], congr }
@[to_additive]
lemma prod_eq_one {f : Ξ± β Ξ²} {s : finset Ξ±} (h : βxβs, f x = 1) : (β x in s, f x) = 1 :=
calc (β x in s, f x) = β x in s, 1 : finset.prod_congr rfl h
... = 1 : finset.prod_const_one
@[to_additive] lemma prod_apply_dite {s : finset Ξ±} {p : Ξ± β Prop} {hp : decidable_pred p}
(f : Ξ (x : Ξ±), p x β Ξ³) (g : Ξ (x : Ξ±), Β¬p x β Ξ³) (h : Ξ³ β Ξ²) :
(β x in s, h (if hx : p x then f x hx else g x hx)) =
(β x in (s.filter p).attach, h (f x.1 (mem_filter.mp x.2).2)) *
(β x in (s.filter (Ξ» x, Β¬ p x)).attach, h (g x.1 (mem_filter.mp x.2).2)) :=
by letI := classical.dec_eq Ξ±; exact
calc β x in s, h (if hx : p x then f x hx else g x hx)
= β x in s.filter p βͺ s.filter (Ξ» x, Β¬ p x), h (if hx : p x then f x hx else g x hx) :
by rw [filter_union_filter_neg_eq]
... = (β x in s.filter p, h (if hx : p x then f x hx else g x hx)) *
(β x in s.filter (Ξ» x, Β¬ p x), h (if hx : p x then f x hx else g x hx)) :
prod_union (by simp [disjoint_right] {contextual := tt})
... = (β x in (s.filter p).attach, h (if hx : p x.1 then f x.1 hx else g x.1 hx)) *
(β x in (s.filter (Ξ» x, Β¬ p x)).attach, h (if hx : p x.1 then f x.1 hx else g x.1 hx)) :
congr_arg2 _ prod_attach.symm prod_attach.symm
... = (β x in (s.filter p).attach, h (f x.1 (mem_filter.mp x.2).2)) *
(β x in (s.filter (Ξ» x, Β¬ p x)).attach, h (g x.1 (mem_filter.mp x.2).2)) :
congr_arg2 _
(prod_congr rfl (Ξ» x hx, congr_arg h (dif_pos (mem_filter.mp x.2).2)))
(prod_congr rfl (Ξ» x hx, congr_arg h (dif_neg (mem_filter.mp x.2).2)))
@[to_additive] lemma prod_apply_ite {s : finset Ξ±}
{p : Ξ± β Prop} {hp : decidable_pred p} (f g : Ξ± β Ξ³) (h : Ξ³ β Ξ²) :
(β x in s, h (if p x then f x else g x)) =
(β x in s.filter p, h (f x)) * (β x in s.filter (Ξ» x, Β¬ p x), h (g x)) :=
trans (prod_apply_dite _ _ _)
(congr_arg2 _ (@prod_attach _ _ _ _ (h β f)) (@prod_attach _ _ _ _ (h β g)))
@[to_additive] lemma prod_dite {s : finset Ξ±} {p : Ξ± β Prop} {hp : decidable_pred p}
(f : Ξ (x : Ξ±), p x β Ξ²) (g : Ξ (x : Ξ±), Β¬p x β Ξ²) :
(β x in s, if hx : p x then f x hx else g x hx) =
(β x in (s.filter p).attach, f x.1 (mem_filter.mp x.2).2) *
(β x in (s.filter (Ξ» x, Β¬ p x)).attach, g x.1 (mem_filter.mp x.2).2) :=
by simp [prod_apply_dite _ _ (Ξ» x, x)]
@[to_additive] lemma prod_ite {s : finset Ξ±}
{p : Ξ± β Prop} {hp : decidable_pred p} (f g : Ξ± β Ξ²) :
(β x in s, if p x then f x else g x) =
(β x in s.filter p, f x) * (β x in s.filter (Ξ» x, Β¬ p x), g x) :=
by simp [prod_apply_ite _ _ (Ξ» x, x)]
@[to_additive] lemma prod_ite_of_false {p : Ξ± β Prop} {hp : decidable_pred p} (f g : Ξ± β Ξ²)
(h : β x β s, Β¬p x) : (β x in s, if p x then f x else g x) = (β x in s, g x) :=
by { rw prod_ite, simp [filter_false_of_mem h, filter_true_of_mem h] }
@[to_additive] lemma prod_ite_of_true {p : Ξ± β Prop} {hp : decidable_pred p} (f g : Ξ± β Ξ²)
(h : β x β s, p x) : (β x in s, if p x then f x else g x) = (β x in s, f x) :=
by { simp_rw β(ite_not (p _)), apply prod_ite_of_false, simpa }
@[to_additive] lemma prod_apply_ite_of_false {p : Ξ± β Prop} {hp : decidable_pred p} (f g : Ξ± β Ξ³)
(k : Ξ³ β Ξ²) (h : β x β s, Β¬p x) :
(β x in s, k (if p x then f x else g x)) = (β x in s, k (g x)) :=
by { simp_rw apply_ite k, exact prod_ite_of_false _ _ h }
@[to_additive] lemma prod_apply_ite_of_true {p : Ξ± β Prop} {hp : decidable_pred p} (f g : Ξ± β Ξ³)
(k : Ξ³ β Ξ²) (h : β x β s, p x) :
(β x in s, k (if p x then f x else g x)) = (β x in s, k (f x)) :=
by { simp_rw apply_ite k, exact prod_ite_of_true _ _ h }
@[to_additive]
lemma prod_extend_by_one [decidable_eq Ξ±] (s : finset Ξ±) (f : Ξ± β Ξ²) :
β i in s, (if i β s then f i else 1) = β i in s, f i :=
prod_congr rfl $ Ξ» i hi, if_pos hi
@[simp, to_additive]
lemma prod_dite_eq [decidable_eq Ξ±] (s : finset Ξ±) (a : Ξ±) (b : Ξ x : Ξ±, a = x β Ξ²) :
(β x in s, (if h : a = x then b x h else 1)) = ite (a β s) (b a rfl) 1 :=
begin
split_ifs with h,
{ rw [finset.prod_eq_single a, dif_pos rfl],
{ intros, rw dif_neg, cc },
{ cc } },
{ rw finset.prod_eq_one,
intros, rw dif_neg, intro, cc }
end
@[simp, to_additive]
lemma prod_dite_eq' [decidable_eq Ξ±] (s : finset Ξ±) (a : Ξ±) (b : Ξ x : Ξ±, x = a β Ξ²) :
(β x in s, (if h : x = a then b x h else 1)) = ite (a β s) (b a rfl) 1 :=
begin
split_ifs with h,
{ rw [finset.prod_eq_single a, dif_pos rfl],
{ intros, rw dif_neg, cc },
{ cc } },
{ rw finset.prod_eq_one,
intros, rw dif_neg, intro, cc }
end
@[simp, to_additive] lemma prod_ite_eq [decidable_eq Ξ±] (s : finset Ξ±) (a : Ξ±) (b : Ξ± β Ξ²) :
(β x in s, (ite (a = x) (b x) 1)) = ite (a β s) (b a) 1 :=
prod_dite_eq s a (Ξ» x _, b x)
/--
When a product is taken over a conditional whose condition is an equality test on the index
and whose alternative is 1, then the product's value is either the term at that index or `1`.
The difference with `prod_ite_eq` is that the arguments to `eq` are swapped.
-/
@[simp, to_additive] lemma prod_ite_eq' [decidable_eq Ξ±] (s : finset Ξ±) (a : Ξ±) (b : Ξ± β Ξ²) :
(β x in s, (ite (x = a) (b x) 1)) = ite (a β s) (b a) 1 :=
prod_dite_eq' s a (Ξ» x _, b x)
@[to_additive]
lemma prod_ite_index (p : Prop) [decidable p] (s t : finset Ξ±) (f : Ξ± β Ξ²) :
(β x in if p then s else t, f x) = if p then β x in s, f x else β x in t, f x :=
apply_ite (Ξ» s, β x in s, f x) _ _ _
@[simp, to_additive]
lemma prod_dite_irrel (p : Prop) [decidable p] (s : finset Ξ±) (f : p β Ξ± β Ξ²) (g : Β¬p β Ξ± β Ξ²):
(β x in s, if h : p then f h x else g h x) = if h : p then β x in s, f h x else β x in s, g h x :=
by { split_ifs with h; refl }
@[simp] lemma sum_pi_single' {ΞΉ M : Type*} [decidable_eq ΞΉ] [add_comm_monoid M]
(i : ΞΉ) (x : M) (s : finset ΞΉ) :
β j in s, pi.single i x j = if i β s then x else 0 :=
sum_dite_eq' _ _ _
@[simp] lemma sum_pi_single {ΞΉ : Type*} {M : ΞΉ β Type*}
[decidable_eq ΞΉ] [Ξ i, add_comm_monoid (M i)] (i : ΞΉ) (f : Ξ i, M i) (s : finset ΞΉ) :
β j in s, pi.single j (f j) i = if i β s then f i else 0 :=
sum_dite_eq _ _ _
/--
Reorder a product.
The difference with `prod_bij'` is that the bijection is specified as a surjective injection,
rather than by an inverse function.
-/
@[to_additive "
Reorder a sum.
The difference with `sum_bij'` is that the bijection is specified as a surjective injection,
rather than by an inverse function.
"]
lemma prod_bij {s : finset Ξ±} {t : finset Ξ³} {f : Ξ± β Ξ²} {g : Ξ³ β Ξ²}
(i : Ξ aβs, Ξ³) (hi : βa ha, i a ha β t) (h : βa ha, f a = g (i a ha))
(i_inj : βaβ aβ haβ haβ, i aβ haβ = i aβ haβ β aβ = aβ) (i_surj : βbβt, βa ha, b = i a ha) :
(β x in s, f x) = (β x in t, g x) :=
congr_arg multiset.prod
(multiset.map_eq_map_of_bij_of_nodup f g s.2 t.2 i hi h i_inj i_surj)
/--
Reorder a product.
The difference with `prod_bij` is that the bijection is specified with an inverse, rather than
as a surjective injection.
-/
@[to_additive "
Reorder a sum.
The difference with `sum_bij` is that the bijection is specified with an inverse, rather than
as a surjective injection.
"]
lemma prod_bij' {s : finset Ξ±} {t : finset Ξ³} {f : Ξ± β Ξ²} {g : Ξ³ β Ξ²}
(i : Ξ aβs, Ξ³) (hi : βa ha, i a ha β t) (h : βa ha, f a = g (i a ha))
(j : Ξ aβt, Ξ±) (hj : βa ha, j a ha β s) (left_inv : β a ha, j (i a ha) (hi a ha) = a)
(right_inv : β a ha, i (j a ha) (hj a ha) = a) :
(β x in s, f x) = (β x in t, g x) :=
begin
refine prod_bij i hi h _ _,
{intros a1 a2 h1 h2 eq, rw [βleft_inv a1 h1, βleft_inv a2 h2], cc,},
{intros b hb, use j b hb, use hj b hb, exact (right_inv b hb).symm,},
end
@[to_additive]
lemma prod_bij_ne_one {s : finset Ξ±} {t : finset Ξ³} {f : Ξ± β Ξ²} {g : Ξ³ β Ξ²}
(i : Ξ aβs, f a β 1 β Ξ³) (hi : βa hβ hβ, i a hβ hβ β t)
(i_inj : βaβ aβ hββ hββ hββ hββ, i aβ hββ hββ = i aβ hββ hββ β aβ = aβ)
(i_surj : βbβt, g b β 1 β βa hβ hβ, b = i a hβ hβ)
(h : βa hβ hβ, f a = g (i a hβ hβ)) :
(β x in s, f x) = (β x in t, g x) :=
by classical; exact
calc (β x in s, f x) = β x in (s.filter $ Ξ»x, f x β 1), f x : prod_filter_ne_one.symm
... = β x in (t.filter $ Ξ»x, g x β 1), g x :
prod_bij (assume a ha, i a (mem_filter.mp ha).1 (mem_filter.mp ha).2)
(assume a ha, (mem_filter.mp ha).elim $ Ξ»hβ hβ, mem_filter.mpr
β¨hi a hβ hβ, Ξ» hg, hβ (hg βΈ h a hβ hβ)β©)
(assume a ha, (mem_filter.mp ha).elim $ h a)
(assume aβ aβ haβ haβ,
(mem_filter.mp haβ).elim $ Ξ» haββ haββ,
(mem_filter.mp haβ).elim $ Ξ» haββ haββ, i_inj aβ aβ _ _ _ _)
(assume b hb, (mem_filter.mp hb).elim $ Ξ»hβ hβ,
let β¨a, haβ, haβ, eqβ© := i_surj b hβ hβ in β¨a, mem_filter.mpr β¨haβ, haββ©, eqβ©)
... = (β x in t, g x) : prod_filter_ne_one
@[to_additive]
lemma nonempty_of_prod_ne_one (h : (β x in s, f x) β 1) : s.nonempty :=
s.eq_empty_or_nonempty.elim (Ξ» H, false.elim $ h $ H.symm βΈ prod_empty) id
@[to_additive]
lemma exists_ne_one_of_prod_ne_one (h : (β x in s, f x) β 1) : βaβs, f a β 1 :=
begin
classical,
rw β prod_filter_ne_one at h,
rcases nonempty_of_prod_ne_one h with β¨x, hxβ©,
exact β¨x, (mem_filter.1 hx).1, (mem_filter.1 hx).2β©
end
@[to_additive]
lemma prod_subset_one_on_sdiff [decidable_eq Ξ±] (h : sβ β sβ) (hg : β x β (sβ \ sβ), g x = 1)
(hfg : β x β sβ, f x = g x) : β i in sβ, f i = β i in sβ, g i :=
begin
rw [β prod_sdiff h, prod_eq_one hg, one_mul],
exact prod_congr rfl hfg
end
lemma sum_range_succ_comm {Ξ²} [add_comm_monoid Ξ²] (f : β β Ξ²) (n : β) :
β x in range (n + 1), f x = f n + β x in range n, f x :=
by rw [range_succ, sum_insert not_mem_range_self]
lemma sum_range_succ {Ξ²} [add_comm_monoid Ξ²] (f : β β Ξ²) (n : β) :
β x in range (n + 1), f x = β x in range n, f x + f n :=
by simp only [add_comm, sum_range_succ_comm]
@[to_additive]
lemma prod_range_succ_comm (f : β β Ξ²) (n : β) :
β x in range (n + 1), f x = f n * β x in range n, f x :=
by rw [range_succ, prod_insert not_mem_range_self]
@[to_additive]
lemma prod_range_succ (f : β β Ξ²) (n : β) :
β x in range (n + 1), f x = (β x in range n, f x) * f n :=
by simp only [mul_comm, prod_range_succ_comm]
lemma prod_range_succ' (f : β β Ξ²) :
β n : β, (β k in range (n + 1), f k) = (β k in range n, f (k+1)) * f 0
| 0 := prod_range_succ _ _
| (n + 1) := by rw [prod_range_succ _ n, mul_right_comm, β prod_range_succ', prod_range_succ]
lemma prod_range_add (f : β β Ξ²) (n m : β) :
β x in range (n + m), f x =
(β x in range n, f x) * (β x in range m, f (n + x)) :=
begin
induction m with m hm,
{ simp },
{ rw [nat.add_succ, prod_range_succ, hm, prod_range_succ, mul_assoc], },
end
@[to_additive]
lemma prod_range_zero (f : β β Ξ²) :
β k in range 0, f k = 1 :=
by rw [range_zero, prod_empty]
lemma prod_range_one (f : β β Ξ²) :
β k in range 1, f k = f 0 :=
by { rw [range_one], apply @prod_singleton β Ξ² 0 f }
lemma sum_range_one {Ξ΄ : Type*} [add_comm_monoid Ξ΄] (f : β β Ξ΄) :
β k in range 1, f k = f 0 :=
@prod_range_one (multiplicative Ξ΄) _ f
attribute [to_additive finset.sum_range_one] prod_range_one
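-- Usage sketch (illustrative example, not part of the original library): peeling one
-- term off a sum over `range 2` with `sum_range_succ`, then closing with `sum_range_one`.
example (f : ℕ → ℕ) : ∑ x in range 2, f x = f 0 + f 1 :=
by rw [sum_range_succ, sum_range_one]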
open multiset
lemma prod_multiset_map_count [decidable_eq Ξ±] (s : multiset Ξ±)
{M : Type*} [comm_monoid M] (f : Ξ± β M) :
(s.map f).prod = β m in s.to_finset, (f m) ^ (s.count m) :=
begin
apply s.induction_on, { simp only [prod_const_one, count_zero, prod_zero, pow_zero, map_zero] },
intros a s ih,
simp only [prod_cons, map_cons, to_finset_cons, ih],
by_cases has : a β s.to_finset,
{ rw [insert_eq_of_mem has, β insert_erase has, prod_insert (not_mem_erase _ _),
prod_insert (not_mem_erase _ _), β mul_assoc, count_cons_self, pow_succ],
congr' 1, refine prod_congr rfl (Ξ» x hx, _),
rw [count_cons_of_ne (ne_of_mem_erase hx)] },
rw [prod_insert has, count_cons_self, count_eq_zero_of_not_mem (mt mem_to_finset.2 has), pow_one],
congr' 1, refine prod_congr rfl (Ξ» x hx, _),
rw count_cons_of_ne,
rintro rfl, exact has hx
end
lemma sum_multiset_map_count [decidable_eq Ξ±] (s : multiset Ξ±)
{M : Type*} [add_comm_monoid M] (f : Ξ± β M) :
(s.map f).sum = β m in s.to_finset, s.count m β’ f m :=
@prod_multiset_map_count _ _ _ (multiplicative M) _ f
attribute [to_additive] prod_multiset_map_count
lemma prod_multiset_count [decidable_eq Ξ±] [comm_monoid Ξ±] (s : multiset Ξ±) :
s.prod = β m in s.to_finset, m ^ (s.count m) :=
by { convert prod_multiset_map_count s id, rw map_id }
lemma sum_multiset_count [decidable_eq Ξ±] [add_comm_monoid Ξ±] (s : multiset Ξ±) :
s.sum = β m in s.to_finset, s.count m β’ m :=
@prod_multiset_count (multiplicative Ξ±) _ _ s
attribute [to_additive] prod_multiset_count
/--
To prove a property of a product, it suffices to prove that
the property is multiplicative and holds on factors.
-/
@[to_additive "To prove a property of a sum, it suffices to prove that
the property is additive and holds on summands."]
lemma prod_induction {M : Type*} [comm_monoid M] (f : Ξ± β M) (p : M β Prop)
(p_mul : β a b, p a β p b β p (a * b)) (p_one : p 1) (p_s : β x β s, p $ f x) :
p $ β x in s, f x :=
multiset.prod_induction _ _ p_mul p_one (multiset.forall_mem_map_iff.mpr p_s)
/--
To prove a property of a product, it suffices to prove that
the property is multiplicative and holds on factors.
-/
@[to_additive "To prove a property of a sum, it suffices to prove that
the property is additive and holds on summands."]
lemma prod_induction_nonempty {M : Type*} [comm_monoid M] (f : Ξ± β M) (p : M β Prop)
(p_mul : β a b, p a β p b β p (a * b)) (hs_nonempty : s.nonempty) (p_s : β x β s, p $ f x) :
p $ β x in s, f x :=
multiset.prod_induction_nonempty p p_mul (by simp [nonempty_iff_ne_empty.mp hs_nonempty])
(multiset.forall_mem_map_iff.mpr p_s)
/--
For any product along `{0, ..., n-1}` of a commutative-monoid-valued function, we can verify that
it's equal to a different function just by checking ratios of adjacent terms.
This is a multiplicative discrete analogue of the fundamental theorem of calculus. -/
lemma prod_range_induction {M : Type*} [comm_monoid M]
(f s : β β M) (h0 : s 0 = 1) (h : β n, s (n + 1) = s n * f n) (n : β) :
β k in finset.range n, f k = s n :=
begin
induction n with k hk,
{ simp only [h0, finset.prod_range_zero] },
{ simp only [hk, finset.prod_range_succ, h, mul_comm] }
end
/--
For any sum along `{0, ..., n-1}` of a commutative-monoid-valued function,
we can verify that it's equal to a different function
just by checking differences of adjacent terms.
This is a discrete analogue
of the fundamental theorem of calculus.
-/
lemma sum_range_induction {M : Type*} [add_comm_monoid M]
(f s : β β M) (h0 : s 0 = 0) (h : β n, s (n + 1) = s n + f n) (n : β) :
β k in finset.range n, f k = s n :=
@prod_range_induction (multiplicative M) _ f s h0 h n
/-- A telescoping sum along `{0, ..., n-1}` of an additive commutative group valued function
reduces to the difference of the last and first terms.-/
lemma sum_range_sub {G : Type*} [add_comm_group G] (f : β β G) (n : β) :
β i in range n, (f (i+1) - f i) = f n - f 0 :=
by { apply sum_range_induction; abel, simp }
lemma sum_range_sub' {G : Type*} [add_comm_group G] (f : β β G) (n : β) :
β i in range n, (f i - f (i+1)) = f 0 - f n :=
by { apply sum_range_induction; abel, simp }
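-- Usage sketch (illustrative example, not part of the original library): applying the
-- telescoping lemma to collapse a sum of consecutive differences of a `ℤ`-valued function.
example (f : ℕ → ℤ) (n : ℕ) :
  ∑ i in range n, (f (i + 1) - f i) = f n - f 0 :=
sum_range_sub f n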
/-- A telescoping product along `{0, ..., n-1}` of a commutative group valued function
reduces to the ratio of the last and first factors.-/
@[to_additive]
lemma prod_range_div {M : Type*} [comm_group M] (f : β β M) (n : β) :
β i in range n, (f (i+1) * (f i)β»ΒΉ) = f n * (f 0)β»ΒΉ :=
by simpa only [β div_eq_mul_inv] using @sum_range_sub (additive M) _ f n
@[to_additive]
lemma prod_range_div' {M : Type*} [comm_group M] (f : β β M) (n : β) :
β i in range n, (f i * (f (i+1))β»ΒΉ) = (f 0) * (f n)β»ΒΉ :=
by simpa only [β div_eq_mul_inv] using @sum_range_sub' (additive M) _ f n
/--
A telescoping sum along `{0, ..., n-1}` of an `β`-valued function
reduces to the difference of the last and first terms
when the function we are summing is monotone.
-/
lemma sum_range_sub_of_monotone {f : β β β} (h : monotone f) (n : β) :
β i in range n, (f (i+1) - f i) = f n - f 0 :=
begin
refine sum_range_induction _ _ (nat.sub_self _) (Ξ» n, _) _,
have hβ : f n β€ f (n+1) := h (nat.le_succ _),
have hβ : f 0 β€ f n := h (nat.zero_le _),
rw [βnat.sub_add_comm hβ, nat.add_sub_cancel' hβ],
end
@[simp] lemma prod_const (b : Ξ²) : (β x in s, b) = b ^ s.card :=
by haveI := classical.dec_eq Ξ±; exact
finset.induction_on s (by simp) (Ξ» a s has ih,
by rw [prod_insert has, card_insert_of_not_mem has, pow_succ, ih])
lemma pow_eq_prod_const (b : Ξ²) : β n, b ^ n = β k in range n, b
| 0 := by simp
| (n+1) := by simp
lemma prod_pow (s : finset Ξ±) (n : β) (f : Ξ± β Ξ²) :
β x in s, f x ^ n = (β x in s, f x) ^ n :=
by haveI := classical.dec_eq Ξ±; exact
finset.induction_on s (by simp) (by simp [mul_pow] {contextual := tt})
-- `to_additive` fails on this lemma, so we prove it manually below
lemma prod_flip {n : β} (f : β β Ξ²) :
β r in range (n + 1), f (n - r) = β k in range (n + 1), f k :=
begin
induction n with n ih,
{ rw [prod_range_one, prod_range_one] },
{ rw [prod_range_succ', prod_range_succ _ (nat.succ n)],
simp [β ih] }
end
@[to_additive]
lemma prod_involution {s : finset α} {f : α → β} :
  ∀ (g : Π a ∈ s, α)
  (h : ∀ a ha, f a * f (g a ha) = 1)
  (g_ne : ∀ a ha, f a ≠ 1 → g a ha ≠ a)
  (g_mem : ∀ a ha, g a ha ∈ s)
  (g_inv : ∀ a ha, g (g a ha) (g_mem a ha) = a),
  (∏ x in s, f x) = 1 :=
by haveI := classical.dec_eq α;
haveI := classical.dec_eq β; exact
finset.strong_induction_on s
  (λ s ih g h g_ne g_mem g_inv,
    s.eq_empty_or_nonempty.elim (λ hs, hs.symm ▸ rfl)
      (λ ⟨x, hx⟩,
        have hmem : ∀ y ∈ (s.erase x).erase (g x hx), y ∈ s,
          from λ y hy, (mem_of_mem_erase (mem_of_mem_erase hy)),
        have g_inj : ∀ {x hx y hy}, g x hx = g y hy → x = y,
          from λ x hx y hy h, by rw [← g_inv x hx, ← g_inv y hy]; simp [h],
        have ih' : ∏ y in erase (erase s x) (g x hx), f y = (1 : β) :=
          ih ((s.erase x).erase (g x hx))
            ⟨subset.trans (erase_subset _ _) (erase_subset _ _),
              λ h, not_mem_erase (g x hx) (s.erase x) (h (g_mem x hx))⟩
            (λ y hy, g y (hmem y hy))
            (λ y hy, h y (hmem y hy))
            (λ y hy, g_ne y (hmem y hy))
            (λ y hy, mem_erase.2 ⟨λ (h : g y _ = g x hx), by simpa [g_inj h] using hy,
              mem_erase.2 ⟨λ (h : g y _ = x),
                have y = g x hx, from g_inv y (hmem y hy) ▸ by simp [h],
                by simpa [this] using hy, g_mem y (hmem y hy)⟩⟩)
            (λ y hy, g_inv y (hmem y hy)),
        if hx1 : f x = 1
        then ih' ▸ eq.symm (prod_subset hmem
          (λ y hy hy₁,
            have y = x ∨ y = g x hx, by simp [hy] at hy₁; tauto,
            this.elim (λ hy, hy.symm ▸ hx1)
              (λ hy, h x hx ▸ hy ▸ hx1.symm ▸ (one_mul _).symm)))
        else by rw [← insert_erase hx, prod_insert (not_mem_erase _ _),
          ← insert_erase (mem_erase.2 ⟨g_ne x hx hx1, g_mem x hx⟩),
          prod_insert (not_mem_erase _ _), ih', mul_one, h x hx]))
/-- The product of the composition of functions `f` and `g`, is the product
over `b ∈ s.image g` of `f b` to the power of the cardinality of the fibre of `b`. -/
lemma prod_comp [decidable_eq γ] {s : finset α} (f : γ → β) (g : α → γ) :
  ∏ a in s, f (g a) = ∏ b in s.image g, f b ^ (s.filter (λ a, g a = b)).card :=
calc ∏ a in s, f (g a)
    = ∏ x in (s.image g).sigma (λ b : γ, s.filter (λ a, g a = b)), f (g x.2) :
  prod_bij (λ a ha, ⟨g a, a⟩) (by simp; tauto) (λ _ _, rfl) (by simp) (by finish)
... = ∏ b in s.image g, ∏ a in s.filter (λ a, g a = b), f (g a) : prod_sigma _ _ _
... = ∏ b in s.image g, ∏ a in s.filter (λ a, g a = b), f b :
  prod_congr rfl (λ b hb, prod_congr rfl (by simp {contextual := tt}))
... = ∏ b in s.image g, f b ^ (s.filter (λ a, g a = b)).card :
  prod_congr rfl (λ _ _, prod_const _)
@[to_additive]
lemma prod_piecewise [decidable_eq α] (s t : finset α) (f g : α → β) :
  (∏ x in s, (t.piecewise f g) x) = (∏ x in s ∩ t, f x) * (∏ x in s \ t, g x) :=
by { rw [piecewise, prod_ite, filter_mem_eq_inter, ← sdiff_eq_filter], }
@[to_additive]
lemma prod_inter_mul_prod_diff [decidable_eq α] (s t : finset α) (f : α → β) :
  (∏ x in s ∩ t, f x) * (∏ x in s \ t, f x) = (∏ x in s, f x) :=
by { convert (s.prod_piecewise t f f).symm, simp [finset.piecewise] }
@[to_additive]
lemma prod_eq_mul_prod_diff_singleton [decidable_eq α] {s : finset α} {i : α} (h : i ∈ s)
  (f : α → β) : ∏ x in s, f x = f i * ∏ x in s \ {i}, f x :=
by { convert (s.prod_inter_mul_prod_diff {i} f).symm, simp [h] }
@[to_additive]
lemma prod_eq_prod_diff_singleton_mul [decidable_eq α] {s : finset α} {i : α} (h : i ∈ s)
  (f : α → β) : ∏ x in s, f x = (∏ x in s \ {i}, f x) * f i :=
by { rw [prod_eq_mul_prod_diff_singleton h, mul_comm] }
@[to_additive]
lemma _root_.fintype.prod_eq_mul_prod_compl [decidable_eq α] [fintype α] (a : α) (f : α → β) :
  ∏ i, f i = (f a) * ∏ i in {a}ᶜ, f i :=
prod_eq_mul_prod_diff_singleton (mem_univ a) f
@[to_additive]
lemma _root_.fintype.prod_eq_prod_compl_mul [decidable_eq α] [fintype α] (a : α) (f : α → β) :
  ∏ i, f i = (∏ i in {a}ᶜ, f i) * f a :=
prod_eq_prod_diff_singleton_mul (mem_univ a) f
/-- A product can be partitioned into a product of products, each equivalent under a setoid. -/
@[to_additive "A sum can be partitioned into a sum of sums, each equivalent under a setoid."]
lemma prod_partition (R : setoid α) [decidable_rel R.r] :
  (∏ x in s, f x) = ∏ xbar in s.image quotient.mk, ∏ y in s.filter (λ y, ⟦y⟧ = xbar), f y :=
begin
  refine (finset.prod_image' f (λ x hx, _)).symm,
  refl,
end
/-- If we can partition a product into subsets that cancel out, then the whole product cancels. -/
@[to_additive "If we can partition a sum into subsets that cancel out, then the whole sum cancels."]
lemma prod_cancels_of_partition_cancels (R : setoid α) [decidable_rel R.r]
  (h : ∀ x ∈ s, (∏ a in s.filter (λ y, y ≈ x), f a) = 1) : (∏ x in s, f x) = 1 :=
begin
  rw [prod_partition R, ← finset.prod_eq_one],
  intros xbar xbar_in_s,
  obtain ⟨x, x_in_s, xbar_eq_x⟩ := mem_image.mp xbar_in_s,
  rw [← xbar_eq_x, filter_congr (λ y _, @quotient.eq _ R y x)],
  apply h x x_in_s,
end
@[to_additive]
lemma prod_update_of_not_mem [decidable_eq α] {s : finset α} {i : α}
  (h : i ∉ s) (f : α → β) (b : β) : (∏ x in s, function.update f i b x) = (∏ x in s, f x) :=
begin
  apply prod_congr rfl (λ j hj, _),
  have : j ≠ i, by { assume eq, rw eq at hj, exact h hj },
  simp [this]
end
lemma prod_update_of_mem [decidable_eq α] {s : finset α} {i : α} (h : i ∈ s) (f : α → β) (b : β) :
  (∏ x in s, function.update f i b x) = b * (∏ x in s \ (singleton i), f x) :=
by { rw [update_eq_piecewise, prod_piecewise], simp [h] }
/-- If a product of a `finset` of size at most 1 has a given value, so
do the terms in that product. -/
lemma eq_of_card_le_one_of_prod_eq {s : finset α} (hc : s.card ≤ 1) {f : α → β} {b : β}
  (h : ∏ x in s, f x = b) : ∀ x ∈ s, f x = b :=
begin
  intros x hx,
  by_cases hc0 : s.card = 0,
  { exact false.elim (card_ne_zero_of_mem hx hc0) },
  { have h1 : s.card = 1 := le_antisymm hc (nat.one_le_of_lt (nat.pos_of_ne_zero hc0)),
    rw card_eq_one at h1,
    cases h1 with x2 hx2,
    rw [hx2, mem_singleton] at hx,
    simp_rw hx2 at h,
    rw hx,
    rw prod_singleton at h,
    exact h }
end
/-- If a sum of a `finset` of size at most 1 has a given value, so do
the terms in that sum. -/
lemma eq_of_card_le_one_of_sum_eq [add_comm_monoid γ] {s : finset α} (hc : s.card ≤ 1)
  {f : α → γ} {b : γ} (h : ∑ x in s, f x = b) : ∀ x ∈ s, f x = b :=
begin
  intros x hx,
  by_cases hc0 : s.card = 0,
  { exact false.elim (card_ne_zero_of_mem hx hc0) },
  { have h1 : s.card = 1 := le_antisymm hc (nat.one_le_of_lt (nat.pos_of_ne_zero hc0)),
    rw card_eq_one at h1,
    cases h1 with x2 hx2,
    rw [hx2, mem_singleton] at hx,
    simp_rw hx2 at h,
    rw hx,
    rw sum_singleton at h,
    exact h }
end
attribute [to_additive eq_of_card_le_one_of_sum_eq] eq_of_card_le_one_of_prod_eq
/-- If a function applied at a point is 1, a product is unchanged by
removing that point, if present, from a `finset`. -/
@[to_additive "If a function applied at a point is 0, a sum is unchanged by
removing that point, if present, from a `finset`."]
lemma prod_erase [decidable_eq α] (s : finset α) {f : α → β} {a : α} (h : f a = 1) :
  ∏ x in s.erase a, f x = ∏ x in s, f x :=
begin
  rw ← sdiff_singleton_eq_erase,
  refine prod_subset (sdiff_subset _ _) (λ x hx hnx, _),
  rw sdiff_singleton_eq_erase at hnx,
  rwa eq_of_mem_of_not_mem_erase hx hnx
end
/-- If a product is 1 and the function is 1 except possibly at one
point, it is 1 everywhere on the `finset`. -/
@[to_additive "If a sum is 0 and the function is 0 except possibly at one
point, it is 0 everywhere on the `finset`."]
lemma eq_one_of_prod_eq_one {s : finset α} {f : α → β} {a : α} (hp : ∏ x in s, f x = 1)
  (h1 : ∀ x ∈ s, x ≠ a → f x = 1) : ∀ x ∈ s, f x = 1 :=
begin
  intros x hx,
  classical,
  by_cases h : x = a,
  { rw h,
    rw h at hx,
    rw [← prod_subset (singleton_subset_iff.2 hx)
        (λ t ht ha, h1 t ht (not_mem_singleton.1 ha)),
      prod_singleton] at hp,
    exact hp },
  { exact h1 x hx h }
end
lemma prod_pow_boole [decidable_eq α] (s : finset α) (f : α → β) (a : α) :
  (∏ x in s, (f x)^(ite (a = x) 1 0)) = ite (a ∈ s) (f a) 1 :=
by simp
end comm_monoid
/-- If `f = g = h` everywhere but at `i`, where `f i = g i + h i`, then the product of `f` over `s`
  is the sum of the products of `g` and `h`. -/
lemma prod_add_prod_eq [comm_semiring β] {s : finset α} {i : α} {f g h : α → β}
  (hi : i ∈ s) (h1 : g i + h i = f i) (h2 : ∀ j ∈ s, j ≠ i → g j = f j)
  (h3 : ∀ j ∈ s, j ≠ i → h j = f j) : ∏ i in s, g i + ∏ i in s, h i = ∏ i in s, f i :=
by { classical, simp_rw [prod_eq_mul_prod_diff_singleton hi, ← h1, right_distrib],
     congr' 2; apply prod_congr rfl; simpa }
lemma sum_update_of_mem [add_comm_monoid β] [decidable_eq α] {s : finset α} {i : α}
  (h : i ∈ s) (f : α → β) (b : β) :
  (∑ x in s, function.update f i b x) = b + (∑ x in s \ (singleton i), f x) :=
by { rw [update_eq_piecewise, sum_piecewise], simp [h] }
attribute [to_additive] prod_update_of_mem
lemma sum_nsmul [add_comm_monoid β] (s : finset α) (n : ℕ) (f : α → β) :
  (∑ x in s, n • (f x)) = n • ((∑ x in s, f x)) :=
@prod_pow _ (multiplicative β) _ _ _ _
attribute [to_additive sum_nsmul] prod_pow
@[simp] lemma sum_const [add_comm_monoid β] (b : β) :
  (∑ x in s, b) = s.card • b :=
@prod_const _ (multiplicative β) _ _ _
attribute [to_additive] prod_const
lemma card_eq_sum_ones (s : finset α) : s.card = ∑ _ in s, 1 :=
by simp
lemma sum_const_nat {m : ℕ} {f : α → ℕ} (h₁ : ∀ x ∈ s, f x = m) :
  (∑ x in s, f x) = card s * m :=
begin
  rw [← nat.nsmul_eq_mul, ← sum_const],
  apply sum_congr rfl h₁
end
@[simp]
lemma sum_boole {s : finset α} {p : α → Prop} [semiring β] {hp : decidable_pred p} :
  (∑ x in s, if p x then (1 : β) else (0 : β)) = (s.filter p).card :=
by simp [sum_ite]
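-- Illustrative sketch (not part of the original file): `sum_boole` counts the filtered
-- elements, e.g. the even numbers below 6, since each `if` contributes `1` exactly there.
example : (∑ x in range 6, if x % 2 = 0 then (1 : ℕ) else 0)
  = ((range 6).filter (λ x, x % 2 = 0)).card :=
dec_trivial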
@[norm_cast]
lemma sum_nat_cast [add_comm_monoid β] [has_one β] (s : finset α) (f : α → ℕ) :
  ↑(∑ x in s, f x : ℕ) = (∑ x in s, (f x : β)) :=
(nat.cast_add_monoid_hom β).map_sum f s
@[norm_cast]
lemma sum_int_cast [add_comm_group β] [has_one β] (s : finset α) (f : α → ℤ) :
  ↑(∑ x in s, f x : ℤ) = (∑ x in s, (f x : β)) :=
(int.cast_add_hom β).map_sum f s
lemma sum_comp [add_comm_monoid β] [decidable_eq γ] {s : finset α} (f : γ → β) (g : α → γ) :
  ∑ a in s, f (g a) = ∑ b in s.image g, (s.filter (λ a, g a = b)).card • (f b) :=
@prod_comp _ (multiplicative β) _ _ _ _ _ _
attribute [to_additive "The sum of the composition of functions `f` and `g`, is the sum
over `b ∈ s.image g` of `f b` times the cardinality of the fibre of `b`"] prod_comp
lemma sum_range_succ' [add_comm_monoid β] (f : ℕ → β) :
  ∀ n : ℕ, (∑ i in range (n + 1), f i) = (∑ i in range n, f (i + 1)) + f 0 :=
@prod_range_succ' (multiplicative β) _ _
attribute [to_additive] prod_range_succ'
lemma sum_range_add {β} [add_comm_monoid β] (f : ℕ → β) (n : ℕ) (m : ℕ) :
  (∑ x in range (n + m), f x) =
  (∑ x in range n, f x) + (∑ x in range m, f (n + x)) :=
@prod_range_add (multiplicative β) _ _ _ _
attribute [to_additive] prod_range_add
lemma sum_flip [add_comm_monoid β] {n : ℕ} (f : ℕ → β) :
  (∑ i in range (n + 1), f (n - i)) = (∑ i in range (n + 1), f i) :=
@prod_flip (multiplicative β) _ _ _
attribute [to_additive] prod_flip
section opposite
open opposite
/-- Moving to the opposite additive commutative monoid commutes with summing. -/
@[simp] lemma op_sum [add_comm_monoid β] {s : finset α} (f : α → β) :
  op (∑ x in s, f x) = ∑ x in s, op (f x) :=
(op_add_equiv : β ≃+ βᵒᵖ).map_sum _ _
@[simp] lemma unop_sum [add_comm_monoid β] {s : finset α} (f : α → βᵒᵖ) :
  unop (∑ x in s, f x) = ∑ x in s, unop (f x) :=
(op_add_equiv : β ≃+ βᵒᵖ).symm.map_sum _ _
end opposite
section comm_group
variables [comm_group β]
@[simp, to_additive]
lemma prod_inv_distrib : (∏ x in s, (f x)⁻¹) = (∏ x in s, f x)⁻¹ :=
s.prod_hom has_inv.inv
end comm_group
@[simp] theorem card_sigma {σ : α → Type*} (s : finset α) (t : Π a, finset (σ a)) :
  card (s.sigma t) = ∑ a in s, card (t a) :=
multiset.card_sigma _ _
lemma card_bUnion [decidable_eq β] {s : finset α} {t : α → finset β}
  (h : ∀ x ∈ s, ∀ y ∈ s, x ≠ y → disjoint (t x) (t y)) :
  (s.bUnion t).card = ∑ u in s, card (t u) :=
calc (s.bUnion t).card = ∑ i in s.bUnion t, 1 : by simp
... = ∑ a in s, ∑ i in t a, 1 : finset.sum_bUnion h
... = ∑ u in s, card (t u) : by simp
lemma card_bUnion_le [decidable_eq β] {s : finset α} {t : α → finset β} :
  (s.bUnion t).card ≤ ∑ a in s, (t a).card :=
by haveI := classical.dec_eq α; exact
finset.induction_on s (by simp)
  (λ a s has ih,
    calc ((insert a s).bUnion t).card ≤ (t a).card + (s.bUnion t).card :
      by rw bUnion_insert; exact finset.card_union_le _ _
    ... ≤ ∑ a in insert a s, card (t a) :
      by rw sum_insert has; exact add_le_add_left ih _)
theorem card_eq_sum_card_fiberwise [decidable_eq β] {f : α → β} {s : finset α} {t : finset β}
  (H : ∀ x ∈ s, f x ∈ t) :
  s.card = ∑ a in t, (s.filter (λ x, f x = a)).card :=
by simp only [card_eq_sum_ones, sum_fiberwise_of_maps_to H]
theorem card_eq_sum_card_image [decidable_eq β] (f : α → β) (s : finset α) :
  s.card = ∑ a in s.image f, (s.filter (λ x, f x = a)).card :=
card_eq_sum_card_fiberwise (λ _, mem_image_of_mem _)
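-- Illustrative sketch (not part of the original file): counting `{0, 1, 2, 3}`
-- fibrewise under `x % 2` gives one fibre of size 2 per value in the image.
example : ({0, 1, 2, 3} : finset ℕ).card =
  ∑ a in ({0, 1, 2, 3} : finset ℕ).image (λ x, x % 2),
    (({0, 1, 2, 3} : finset ℕ).filter (λ x, x % 2 = a)).card :=
dec_trivial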
lemma gsmul_sum [add_comm_group β] {f : α → β} {s : finset α} (z : ℤ) :
  gsmul z (∑ a in s, f a) = ∑ a in s, gsmul z (f a) :=
(s.sum_hom (gsmul z)).symm
@[simp] lemma sum_sub_distrib [add_comm_group β] :
  ∑ x in s, (f x - g x) = (∑ x in s, f x) - (∑ x in s, g x) :=
by simpa only [sub_eq_add_neg] using sum_add_distrib.trans (congr_arg _ sum_neg_distrib)
section prod_eq_zero
variables [comm_monoid_with_zero β]
lemma prod_eq_zero (ha : a ∈ s) (h : f a = 0) : (∏ x in s, f x) = 0 :=
by haveI := classical.dec_eq α;
calc (∏ x in s, f x) = ∏ x in insert a (erase s a), f x : by rw insert_erase ha
... = 0 : by rw [prod_insert (not_mem_erase _ _), h, zero_mul]
lemma prod_boole {s : finset α} {p : α → Prop} [decidable_pred p] :
  ∏ i in s, ite (p i) (1 : β) (0 : β) = ite (∀ i ∈ s, p i) 1 0 :=
begin
  split_ifs,
  { apply prod_eq_one,
    intros i hi,
    rw if_pos (h i hi) },
  { push_neg at h,
    rcases h with ⟨i, hi, hq⟩,
    apply prod_eq_zero hi,
    rw [if_neg hq] },
end
variables [nontrivial β] [no_zero_divisors β]
lemma prod_eq_zero_iff : (∏ x in s, f x) = 0 ↔ (∃ a ∈ s, f a = 0) :=
begin
  classical,
  apply finset.induction_on s,
  exact ⟨not.elim one_ne_zero, λ ⟨_, H, _⟩, H.elim⟩,
  assume a s ha ih,
  rw [prod_insert ha, mul_eq_zero, bex_def, exists_mem_insert, ih, ← bex_def]
end
theorem prod_ne_zero_iff : (∏ x in s, f x) ≠ 0 ↔ (∀ a ∈ s, f a ≠ 0) :=
by { rw [ne, prod_eq_zero_iff], push_neg }
end prod_eq_zero
section comm_group_with_zero
variables [comm_group_with_zero β]
@[simp]
lemma prod_inv_distrib' : (∏ x in s, (f x)⁻¹) = (∏ x in s, f x)⁻¹ :=
begin
  classical,
  by_cases h : ∃ x ∈ s, f x = 0,
  { simpa [prod_eq_zero_iff.mpr h, prod_eq_zero_iff] using h },
  { push_neg at h,
    have h' := prod_ne_zero_iff.mpr h,
    have hf : ∀ x ∈ s, (f x)⁻¹ * f x = 1 := λ x hx, inv_mul_cancel (h x hx),
    apply mul_right_cancel' h',
    simp [h, h', ← finset.prod_mul_distrib, prod_congr rfl hf] }
end
end comm_group_with_zero
end finset
namespace fintype
open finset
/-- `fintype.prod_bijective` is a variant of `finset.prod_bij` that accepts `function.bijective`.
See `function.bijective.prod_comp` for a version without `h`. -/
@[to_additive "`fintype.sum_bijective` is a variant of `finset.sum_bij` that accepts
`function.bijective`.
See `function.bijective.sum_comp` for a version without `h`. "]
lemma prod_bijective {α β M : Type*} [fintype α] [fintype β] [comm_monoid M]
  (e : α → β) (he : function.bijective e) (f : α → M) (g : β → M) (h : ∀ x, f x = g (e x)) :
  ∏ x : α, f x = ∏ x : β, g x :=
prod_bij
  (λ x _, e x)
  (λ x _, mem_univ (e x))
  (λ x _, h x)
  (λ x x' _ _ h, he.injective h)
  (λ y _, (he.surjective y).imp $ λ a h, ⟨mem_univ _, h.symm⟩)
/-- `fintype.prod_equiv` is a specialization of `finset.prod_bij` that
automatically fills in most arguments.
See `equiv.prod_comp` for a version without `h`.
-/
@[to_additive "`fintype.sum_equiv` is a specialization of `finset.sum_bij` that
automatically fills in most arguments.
See `equiv.sum_comp` for a version without `h`.
"]
lemma prod_equiv {α β M : Type*} [fintype α] [fintype β] [comm_monoid M]
  (e : α ≃ β) (f : α → M) (g : β → M) (h : ∀ x, f x = g (e x)) :
  ∏ x : α, f x = ∏ x : β, g x :=
prod_bijective e e.bijective f g h
@[to_additive]
lemma prod_finset_coe [comm_monoid β] :
  ∏ (i : (s : set α)), f i = ∏ i in s, f i :=
(finset.prod_subtype s (λ _, iff.rfl) f).symm
end fintype
namespace list
@[to_additive] lemma prod_to_finset {M : Type*} [decidable_eq α] [comm_monoid M]
  (f : α → M) : ∀ {l : list α} (hl : l.nodup), l.to_finset.prod f = (l.map f).prod
| [] _ := by simp
| (a :: l) hl := let ⟨not_mem, hl⟩ := list.nodup_cons.mp hl in
  by simp [finset.prod_insert (mt list.mem_to_finset.mp not_mem), prod_to_finset hl]
end list
namespace multiset
variables [decidable_eq α]
@[simp] lemma to_finset_sum_count_eq (s : multiset α) :
  (∑ a in s.to_finset, s.count a) = s.card :=
multiset.induction_on s rfl
  (assume a s ih,
    calc (∑ x in to_finset (a ::ₘ s), count x (a ::ₘ s)) =
      ∑ x in to_finset (a ::ₘ s), ((if x = a then 1 else 0) + count x s) :
        finset.sum_congr rfl $ λ _ _, by split_ifs;
        [simp only [h, count_cons_self, nat.one_add], simp only [count_cons_of_ne h, zero_add]]
    ... = card (a ::ₘ s) :
    begin
      by_cases a ∈ s.to_finset,
      { have : ∑ x in s.to_finset, ite (x = a) 1 0 = ∑ x in {a}, ite (x = a) 1 0,
        { rw [finset.sum_ite_eq', if_pos h, finset.sum_singleton, if_pos rfl], },
        rw [to_finset_cons, finset.insert_eq_of_mem h, finset.sum_add_distrib, ih, this,
          finset.sum_singleton, if_pos rfl, add_comm, card_cons] },
      { have ha : a ∉ s, by rwa mem_to_finset at h,
        have : ∑ x in to_finset s, ite (x = a) 1 0 = ∑ x in to_finset s, 0, from
          finset.sum_congr rfl (λ x hx, if_neg $ by rintro rfl; cc),
        rw [to_finset_cons, finset.sum_insert h, if_pos rfl, finset.sum_add_distrib, this,
          finset.sum_const_zero, ih, count_eq_zero_of_not_mem ha, zero_add, add_comm, card_cons] }
    end)
lemma count_sum' {s : finset β} {a : α} {f : β → multiset α} :
  count a (∑ x in s, f x) = ∑ x in s, count a (f x) :=
by { dunfold finset.sum, rw count_sum }
@[simp] lemma to_finset_sum_count_nsmul_eq (s : multiset α) :
  (∑ a in s.to_finset, s.count a • (a ::ₘ 0)) = s :=
begin
  apply ext', intro b,
  rw count_sum',
  have h : count b s = count b (count b s • (b ::ₘ 0)),
  { rw [singleton_coe, count_nsmul, ← singleton_coe, count_singleton, mul_one] },
  rw h, clear h,
  apply finset.sum_eq_single b,
  { intros c h hcb, rw count_nsmul, convert mul_zero (count c s),
    apply count_eq_zero.mpr, exact finset.not_mem_singleton.mpr (ne.symm hcb) },
  { intro hb, rw [count_eq_zero_of_not_mem (mt mem_to_finset.2 hb), count_nsmul, zero_mul] }
end
theorem exists_smul_of_dvd_count (s : multiset α) {k : ℕ} (h : ∀ (a : α), k ∣ multiset.count a s) :
  ∃ (u : multiset α), s = k • u :=
begin
  use ∑ a in s.to_finset, (s.count a / k) • (a ::ₘ 0),
  have h₂ : ∑ (x : α) in s.to_finset, k • (count x s / k) • (x ::ₘ 0) =
    ∑ (x : α) in s.to_finset, count x s • (x ::ₘ 0),
  { refine congr_arg s.to_finset.sum _,
    apply funext, intro x,
    rw [← mul_nsmul, nat.mul_div_cancel' (h x)] },
  rw [← finset.sum_nsmul, h₂, to_finset_sum_count_nsmul_eq]
end
end multiset
@[simp, norm_cast] lemma nat.coe_prod {R : Type*} [comm_semiring R]
  (f : α → ℕ) (s : finset α) : (↑∏ i in s, f i : R) = ∏ i in s, f i :=
(nat.cast_ring_hom R).map_prod _ _
@[simp, norm_cast] lemma int.coe_prod {R : Type*} [comm_ring R]
  (f : α → ℤ) (s : finset α) : (↑∏ i in s, f i : R) = ∏ i in s, f i :=
(int.cast_ring_hom R).map_prod _ _
@[simp, norm_cast] lemma units.coe_prod {M : Type*} [comm_monoid M]
  (f : α → units M) (s : finset α) : (↑∏ i in s, f i : M) = ∏ i in s, f i :=
(units.coe_hom M).map_prod _ _
|
\documentclass{cv}
\usepackage[utf8]{inputenc}
\usepackage[hidelinks]{hyperref}
\usepackage{tabularx}
%\usepackage{comment}
\usepackage[most]{tcolorbox}
\newcommand{\ColorHref}[3][blue]{\href{#2}{\color{#1}{#3}}}%
\tcbset{
frame code={},
center title,
left=0pt,
right=0pt,
top=0pt,
bottom=0pt,
colback=gray!40,
colframe=white,
width=\dimexpr\textwidth\relax,
enlarge left by=0mm,
boxsep=5pt,
arc=0pt,outer arc=0pt,
}
\begin{document}
\name{John Smith}
\contact{\href{mailto:[email protected]}{[email protected]}}{+1 222 333 444}{\ColorHref{https://www.linkedin.com/in/example-linkedin-profile/}{johnsmith}}
% \par\vspace{1cm}
\section{Working Experience}
\par\vspace{.15cm}\noindent
\noindent
\begin{tcolorbox}
\textbf{Freelance Software Developer} \hfill since September 2016
\end{tcolorbox}
\noindent
\begin{itemize}
\item Developing custom web applications for clients.
\item Developing applications for the AWS cloud platform.
\item Experience with the Google Cloud Platform.
\item Etc. and more etc.
\end{itemize}
\par\vspace{.15cm}\noindent
\textbf{Technologies:} C\#, ASP .NET, .NET Core, Java, Go, NodeJS
\par\vspace{.15cm}\noindent
\noindent
\begin{tcolorbox}
\textbf{Software Engineer} at \textbf{Microsoft} \hfill November 2008 -- September 2016
\end{tcolorbox}
\noindent
Microsoft -- a company you are probably familiar with.
\noindent
\begin{itemize}
\item Part of a small team following SCRUM agile process.
\item Working on the MS Office suite.
\item Some more things ...
\end{itemize}
\par\vspace{.15cm}\noindent
\textbf{Technologies:} C\#, C++, Visual Basic
\par\vspace{.25cm}
% \noindent
\section{Education}
\uni{Master's degree}{Computer and Embedded Systems}{\href{http://www.some.school}{Faculty of Information Technology}}{\href{http://www.some.school}{Some University}}{2008 -- 2010}
\noindent\textit{Master's thesis}: {\href{http://link-to-website.com}{Name of MS thesis}}
\par\vspace{.25cm}
\uni{Bachelor's degree}{Information Technology}{\href{http://www.some.school}{Faculty of Information Technology}}{\href{http://www.some.school}{Some University}}{2005 -- 2008}
\noindent\textit{Bachelor's thesis}: {\href{http://link-to-website.com}{Name of BC thesis}}
%\par\vspace{.25cm}
\section{Publications}
\begin{tabularx}{0.95\textwidth}{p{1.5cm}X}
\hspace{.5cm} \textbf{2011} & J. Smith "Some Publication Name," IEEE Congress on Evolutionary Computation, Beijing, CN, 2011.
\end{tabularx}
\par\vspace{.5cm}
\section{Most Recent Personal Projects}
\begin{itemize}
\item \ColorHref{https://link-to-something-interesting.com}{\textbf{Name of App}} - Description of an app
\end{itemize}
\par\vspace{.5cm}
\section{Primary Skills}
\begin{itemize}
\item Programming in Windows \& Linux environments
\begin{itemize}
\item C\#, Modern C++
\item Python, Perl, Bash
\end{itemize}
\item Web development
\begin{itemize}
\item Backend - NodeJS, ASP .NET, SQLite, MongoDB
\item Frontend - JavaScript, React, JQuery
\end{itemize}
\end{itemize}
\end{document}
|
[STATEMENT]
lemma preservation_and_closures:
fixes Rel :: "('a \<times> 'a) set"
and Pred :: "'a \<Rightarrow> bool"
assumes preservation: "rel_preserves_pred Rel Pred"
shows "rel_preserves_pred (Rel\<^sup>=) Pred"
and "rel_preserves_pred (Rel\<^sup>+) Pred"
and "rel_preserves_pred (Rel\<^sup>*) Pred"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. rel_preserves_pred (Rel\<^sup>=) Pred &&& rel_preserves_pred (Rel\<^sup>+) Pred &&& rel_preserves_pred (Rel\<^sup>*) Pred
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (3 subgoals):
1. rel_preserves_pred (Rel\<^sup>=) Pred
2. rel_preserves_pred (Rel\<^sup>+) Pred
3. rel_preserves_pred (Rel\<^sup>*) Pred
[PROOF STEP]
from preservation
[PROOF STATE]
proof (chain)
picking this:
rel_preserves_pred Rel Pred
[PROOF STEP]
show A: "rel_preserves_pred (Rel\<^sup>=) Pred"
[PROOF STATE]
proof (prove)
using this:
rel_preserves_pred Rel Pred
goal (1 subgoal):
1. rel_preserves_pred (Rel\<^sup>=) Pred
[PROOF STEP]
by (auto simp add: refl)
[PROOF STATE]
proof (state)
this:
rel_preserves_pred (Rel\<^sup>=) Pred
goal (2 subgoals):
1. rel_preserves_pred (Rel\<^sup>+) Pred
2. rel_preserves_pred (Rel\<^sup>*) Pred
[PROOF STEP]
have B: "\<And>Rel. rel_preserves_pred Rel Pred \<Longrightarrow> rel_preserves_pred (Rel\<^sup>+) Pred"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>Rel. rel_preserves_pred Rel Pred \<Longrightarrow> rel_preserves_pred (Rel\<^sup>+) Pred
[PROOF STEP]
proof clarify
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>Rel a b. rel_preserves_pred Rel Pred \<and> (a, b) \<in> Rel\<^sup>+ \<and> Pred a \<Longrightarrow> Pred b
[PROOF STEP]
fix Rel a b
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>Rel a b. rel_preserves_pred Rel Pred \<and> (a, b) \<in> Rel\<^sup>+ \<and> Pred a \<Longrightarrow> Pred b
[PROOF STEP]
assume "(a, b) \<in> Rel\<^sup>+" and "rel_preserves_pred Rel Pred" and "Pred a"
[PROOF STATE]
proof (state)
this:
(a, b) \<in> Rel\<^sup>+
rel_preserves_pred Rel Pred
Pred a
goal (1 subgoal):
1. \<And>Rel a b. rel_preserves_pred Rel Pred \<and> (a, b) \<in> Rel\<^sup>+ \<and> Pred a \<Longrightarrow> Pred b
[PROOF STEP]
thus "Pred b"
[PROOF STATE]
proof (prove)
using this:
(a, b) \<in> Rel\<^sup>+
rel_preserves_pred Rel Pred
Pred a
goal (1 subgoal):
1. Pred b
[PROOF STEP]
by (induct, blast+)
[PROOF STATE]
proof (state)
this:
Pred b
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
rel_preserves_pred ?Rel Pred \<Longrightarrow> rel_preserves_pred (?Rel\<^sup>+) Pred
goal (2 subgoals):
1. rel_preserves_pred (Rel\<^sup>+) Pred
2. rel_preserves_pred (Rel\<^sup>*) Pred
[PROOF STEP]
with preservation
[PROOF STATE]
proof (chain)
picking this:
rel_preserves_pred Rel Pred
rel_preserves_pred ?Rel Pred \<Longrightarrow> rel_preserves_pred (?Rel\<^sup>+) Pred
[PROOF STEP]
show "rel_preserves_pred (Rel\<^sup>+) Pred"
[PROOF STATE]
proof (prove)
using this:
rel_preserves_pred Rel Pred
rel_preserves_pred ?Rel Pred \<Longrightarrow> rel_preserves_pred (?Rel\<^sup>+) Pred
goal (1 subgoal):
1. rel_preserves_pred (Rel\<^sup>+) Pred
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
rel_preserves_pred (Rel\<^sup>+) Pred
goal (1 subgoal):
1. rel_preserves_pred (Rel\<^sup>*) Pred
[PROOF STEP]
from preservation A B[where Rel="Rel\<^sup>="]
[PROOF STATE]
proof (chain)
picking this:
rel_preserves_pred Rel Pred
rel_preserves_pred (Rel\<^sup>=) Pred
rel_preserves_pred (Rel\<^sup>=) Pred \<Longrightarrow> rel_preserves_pred ((Rel\<^sup>=)\<^sup>+) Pred
[PROOF STEP]
show "rel_preserves_pred (Rel\<^sup>*) Pred"
[PROOF STATE]
proof (prove)
using this:
rel_preserves_pred Rel Pred
rel_preserves_pred (Rel\<^sup>=) Pred
rel_preserves_pred (Rel\<^sup>=) Pred \<Longrightarrow> rel_preserves_pred ((Rel\<^sup>=)\<^sup>+) Pred
goal (1 subgoal):
1. rel_preserves_pred (Rel\<^sup>*) Pred
[PROOF STEP]
using trancl_reflcl[of Rel]
[PROOF STATE]
proof (prove)
using this:
rel_preserves_pred Rel Pred
rel_preserves_pred (Rel\<^sup>=) Pred
rel_preserves_pred (Rel\<^sup>=) Pred \<Longrightarrow> rel_preserves_pred ((Rel\<^sup>=)\<^sup>+) Pred
(Rel\<^sup>=)\<^sup>+ = Rel\<^sup>*
goal (1 subgoal):
1. rel_preserves_pred (Rel\<^sup>*) Pred
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
rel_preserves_pred (Rel\<^sup>*) Pred
goal:
No subgoals!
[PROOF STEP]
qed |
[STATEMENT]
lemma (in prob_space) indep_vars_iff_distr_eq_PiM':
fixes I :: "'i set" and X :: "'i \<Rightarrow> 'a \<Rightarrow> 'b"
assumes "I \<noteq> {}"
assumes rv: "\<And>i. i \<in> I \<Longrightarrow> random_variable (M' i) (X i)"
shows "indep_vars M' X I \<longleftrightarrow>
distr M (\<Pi>\<^sub>M i\<in>I. M' i) (\<lambda>x. \<lambda>i\<in>I. X i x) = (\<Pi>\<^sub>M i\<in>I. distr M (M' i) (X i))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
from assms
[PROOF STATE]
proof (chain)
picking this:
I \<noteq> {}
?i12 \<in> I \<Longrightarrow> random_variable (M' ?i12) (X ?i12)
[PROOF STEP]
obtain j where j: "j \<in> I"
[PROOF STATE]
proof (prove)
using this:
I \<noteq> {}
?i12 \<in> I \<Longrightarrow> random_variable (M' ?i12) (X ?i12)
goal (1 subgoal):
1. (\<And>j. j \<in> I \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
j \<in> I
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
define N' where "N' = (\<lambda>i. if i \<in> I then M' i else M' j)"
[PROOF STATE]
proof (state)
this:
N' = (\<lambda>i. if i \<in> I then M' i else M' j)
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
define Y where "Y = (\<lambda>i. if i \<in> I then X i else X j)"
[PROOF STATE]
proof (state)
this:
Y = (\<lambda>i. if i \<in> I then X i else X j)
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
have rv: "random_variable (N' i) (Y i)" for i
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. random_variable (N' i) (Y i)
[PROOF STEP]
using j
[PROOF STATE]
proof (prove)
using this:
j \<in> I
goal (1 subgoal):
1. random_variable (N' i) (Y i)
[PROOF STEP]
by (auto simp: N'_def Y_def intro: assms)
[PROOF STATE]
proof (state)
this:
random_variable (N' ?i12) (Y ?i12)
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
have "indep_vars M' X I = indep_vars N' Y I"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. indep_vars M' X I = indep_vars N' Y I
[PROOF STEP]
by (intro indep_vars_cong) (auto simp: N'_def Y_def)
[PROOF STATE]
proof (state)
this:
indep_vars M' X I = indep_vars N' Y I
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
indep_vars M' X I = indep_vars N' Y I
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
have "\<dots> \<longleftrightarrow> distr M (\<Pi>\<^sub>M i\<in>I. N' i) (\<lambda>x. \<lambda>i\<in>I. Y i x) = (\<Pi>\<^sub>M i\<in>I. distr M (N' i) (Y i))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. indep_vars N' Y I = (distr M (Pi\<^sub>M I N') (\<lambda>x. \<lambda>i\<in>I. Y i x) = Pi\<^sub>M I (\<lambda>i. distr M (N' i) (Y i)))
[PROOF STEP]
by (intro indep_vars_iff_distr_eq_PiM rv assms)
[PROOF STATE]
proof (state)
this:
indep_vars N' Y I = (distr M (Pi\<^sub>M I N') (\<lambda>x. \<lambda>i\<in>I. Y i x) = Pi\<^sub>M I (\<lambda>i. distr M (N' i) (Y i)))
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
indep_vars N' Y I = (distr M (Pi\<^sub>M I N') (\<lambda>x. \<lambda>i\<in>I. Y i x) = Pi\<^sub>M I (\<lambda>i. distr M (N' i) (Y i)))
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
have "(\<Pi>\<^sub>M i\<in>I. N' i) = (\<Pi>\<^sub>M i\<in>I. M' i)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Pi\<^sub>M I N' = Pi\<^sub>M I M'
[PROOF STEP]
by (intro PiM_cong) (simp_all add: N'_def)
[PROOF STATE]
proof (state)
this:
Pi\<^sub>M I N' = Pi\<^sub>M I M'
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
Pi\<^sub>M I N' = Pi\<^sub>M I M'
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
have "(\<lambda>x. \<lambda>i\<in>I. Y i x) = (\<lambda>x. \<lambda>i\<in>I. X i x)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<lambda>x. \<lambda>i\<in>I. Y i x) = (\<lambda>x. \<lambda>i\<in>I. X i x)
[PROOF STEP]
by (simp_all add: Y_def fun_eq_iff)
[PROOF STATE]
proof (state)
this:
(\<lambda>x. \<lambda>i\<in>I. Y i x) = (\<lambda>x. \<lambda>i\<in>I. X i x)
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
(\<lambda>x. \<lambda>i\<in>I. Y i x) = (\<lambda>x. \<lambda>i\<in>I. X i x)
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
have "(\<Pi>\<^sub>M i\<in>I. distr M (N' i) (Y i)) = (\<Pi>\<^sub>M i\<in>I. distr M (M' i) (X i))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Pi\<^sub>M I (\<lambda>i. distr M (N' i) (Y i)) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i))
[PROOF STEP]
by (intro PiM_cong distr_cong) (simp_all add: N'_def Y_def)
[PROOF STATE]
proof (state)
this:
Pi\<^sub>M I (\<lambda>i. distr M (N' i) (Y i)) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i))
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
goal (1 subgoal):
1. indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
[PROOF STEP]
.
[PROOF STATE]
proof (state)
this:
indep_vars M' X I = (distr M (Pi\<^sub>M I M') (\<lambda>x. \<lambda>i\<in>I. X i x) = Pi\<^sub>M I (\<lambda>i. distr M (M' i) (X i)))
goal:
No subgoals!
[PROOF STEP]
qed |
||| Test if :typeat correctly shows types within !(...)
n : List a -> List a
n xs = pure !xs
f : (a -> List b) -> a -> (b -> List c) -> List c
f g a j = j !(g a)
|
! { dg-do compile }
! { dg-options "-Warray-temporaries" }
subroutine bar(a)
real, dimension(2) :: a
end
program main
integer, parameter :: n=3
integer :: i
real, dimension(n) :: a, b
a = 0.2
i = 2
a(i:i+1) = a(1:2) ! { dg-warning "Creating array temporary" }
a = cshift(a,1) ! { dg-warning "Creating array temporary" }
b = cshift(a,1)
call bar(a(1:3:2)) ! { dg-warning "Creating array temporary" }
end program main
|
foo : String
foo = """
\{show 1}
"""
|
module A
-- Check that this doesn't go into a loop when resolving Show, because
-- f itself is a candidate when elaborating the top-level f function!
public export
interface F (p : Type -> Type) where
f : Show a => p a -> a
|
Melchizedek himself overlights this R.E.T. through the I AM presence of any individual that is worked upon. The R.E.T. itself speaks for itself: the Golden Rod acts as an axis similar to that which supports the Earth, "axis" meaning holographic "rod line" about which a body rotates. The central column of any symmetrical body is the anchoring Golden Rod on the Cosmic level of that body; as the Earth is anchored by its axis, so too does an individual's Spinal Rod align cosmically with the higher grids of light.
A rotation method is used during this process in which the higher mind of the individual is anchored into the immediate Grid in alignment with the higher chakras. All major and minor meridians will experience a greater flow of "cosmic law", enabling a higher purpose and alignment of true self to serve the personality. This is designed in preparation for the Cosmic Tree of Life R.E.T. sessions (to be revealed at a later date).
The grid of the Omniverse flows through this axiational alignment towards the session's end, pushing one ever so gently once more to a new limit or paradigm experience in the now, creating an expanded future of the higher Heart Mind.
This is all I am prepared to say at this stage regarding this R.E.T. technique, though I will finish by adding that the Rod itself holds "Cosmic Encodings". |
State Before: ⊢ ∀ {a b : Bool}, (!decide (a = !b)) = true ↔ a = b State After: no goals Tactic: decide |
\documentclass{article}
\newcommand{\TUG}{TeX Users Group}
\begin{document}
\section{The \TUG}
The \TUG\ is an organization for people who are interested in \TeX\ or \LaTeX.
\end{document}
|
theory RepeatCorres
imports
RepeatUpdate
CorresHelper
CogentTypingHelper
"build/Generated_CorresSetup"
begin
context update_sem_init begin
definition crepeat
where
"crepeat nC stopC stepC a0C o0C a1C a1U o1U d0 d1 a \<equiv>
do r <- select UNIV;
(i :: 64 word) <- gets (\<lambda>_. 0);
a0 <- select UNIV;
a0 <- gets (\<lambda>_. a1U (\<lambda>_. a0C a) a0);
a0 <- gets (\<lambda>_. o1U (\<lambda>_. o0C a) a0);
i <- gets (\<lambda>_. 0);
doE a0 <-
doE x <-
doE _ <- liftE (guard (\<lambda>s. True));
whileLoopE (\<lambda>(ret, a0, i) s. i < ((nC a) :: 64 word))
(\<lambda>(ret, a0, i).
doE x <-
doE retval <- liftE (do ret' <- d0 ((stopC a) :: 32 signed word) a0;
gets (\<lambda>_. ret')
od);
_ <- doE _ <- liftE (guard (\<lambda>s. True));
condition (\<lambda>s. boolean_C retval \<noteq> 0)
(doE global_exn_var <- liftE (gets (\<lambda>_. Break));
throwError (global_exn_var, ret, a0)
odE)
(liftE (gets (\<lambda>_. ())))
odE;
liftE (do retval <- do ret' <- d1 ((stepC a) :: 32 signed word) a0;
gets (\<lambda>_. ret')
od;
a0 <- gets (\<lambda>_. a1U (\<lambda>_. retval) a0);
i <- gets (\<lambda>_. i + 1);
gets (\<lambda>_. (retval, a0, i))
od)
odE;
liftE
(case x of
(ret, a0, i) \<Rightarrow>
do _ <- guard (\<lambda>_. True);
gets (\<lambda>_. (ret, a0, i))
od)
odE)
(r, a0, i)
odE;
liftE (case x of (ret, a0, i) \<Rightarrow> gets (\<lambda>_. a0))
odE <handle2>
(\<lambda>(global_exn_var, ret, a0). doE _ <- doE _ <- liftE (guard (\<lambda>s. True));
condition (\<lambda>s. global_exn_var = Break)
(liftE (gets (\<lambda>_. ())))
(throwError ret)
odE;
liftE (gets (\<lambda>_. a0))
odE);
ret <- liftE (gets (\<lambda>_. a1C a0));
global_exn_var <- liftE (gets (\<lambda>_. Return));
throwError ret
odE <catch>
(\<lambda>ret. do _ <- gets (\<lambda>_. ());
gets (\<lambda>_. ret)
od)
od"
definition repeat_inv
where
"repeat_inv srel \<xi>' (i :: 64 word) fstop fstep \<sigma> \<tau>a \<tau>o acc obsv s cn cacc cobsv \<equiv>
val_rel obsv cobsv \<and> i \<le> cn \<and>
(\<exists>\<sigma>' y. urepeat_bod \<xi>' (unat i) (uvalfun_to_expr fstop) (uvalfun_to_expr fstep) \<sigma> \<sigma>' \<tau>a acc \<tau>o obsv y \<and>
(\<sigma>', s) \<in> srel \<and> val_rel y cacc)"
definition repeat_measure
where
"repeat_measure i n = unat n - unat i"
definition repeat_pre_step
where
"repeat_pre_step srel \<xi>' i j fstop fstep \<sigma> \<tau>a \<tau>o acc obsv s cn cacc cobsv \<equiv>
val_rel obsv cobsv \<and> i < cn \<and> i = j \<and>
(\<exists>\<sigma>' y. urepeat_bod \<xi>' (unat i) (uvalfun_to_expr fstop) (uvalfun_to_expr fstep) \<sigma> \<sigma>' \<tau>a acc \<tau>o obsv y \<and>
(\<sigma>', s) \<in> srel \<and> val_rel y cacc \<and>
(\<xi>', [URecord [(y, type_repr (bang \<tau>a)), (obsv, type_repr \<tau>o)] None]
\<turnstile> (\<sigma>', App (uvalfun_to_expr fstop) (Var 0)) \<Down>! (\<sigma>', UPrim (LBool False))))"
lemma step_wp:
assumes \<Xi>wellformed: "proc_ctx_wellformed \<Xi>'"
and \<xi>'matchesu: "\<xi>' matches-u \<Xi>'"
and determ: "determ \<xi>'"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and valrelc: "\<And>x x'. val_rel x (x' :: ('c :: cogent_C_val)) \<equiv>
\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x')"
and d1corres: "\<And>x x' \<sigma> s. val_rel x (x' :: ('c :: cogent_C_val)) \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstep) (Var 0))
(do ret <- d1 (stepC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some \<tau>f] \<sigma> s"
and a1C_a1U: "\<And>x y. a1C (a1U (\<lambda>_. y) x) = y"
and o1C_a1U: "\<And>x y. o1C (a1U y x) = o1C x"
and srel: "(\<sigma>, s) \<in> state_rel"
and acctyp: "\<Xi>', \<sigma> \<turnstile> acc :u \<tau>a \<langle>ra, wa\<rangle>"
and obsvtyp: "\<Xi>', \<sigma> \<turnstile> obsv :u \<tau>o \<langle>ro, {}\<rangle>"
and disjoint: "wa \<inter> ro = {}"
and valrelacc: "val_rel acc (a0C v')"
and valrelobsv: "val_rel obsv (o0C v')"
shows "\<And>arg j i.
\<lbrace>\<lambda>sa. repeat_pre_step state_rel \<xi>' i j fstop fstep \<sigma> \<tau>a \<tau>o acc obsv sa (nC v') (a1C arg) (o1C arg)\<rbrace>
d1 (stepC v') arg
\<lbrace>\<lambda>ret sb.
repeat_inv state_rel \<xi>' (j + 1) fstop fstep \<sigma> \<tau>a \<tau>o acc obsv sb (nC v') (a1C (a1U (\<lambda>_. ret) arg))
(o1C (a1U (\<lambda>_. ret) arg)) \<and>
repeat_measure (j+1) (nC v') < repeat_measure i (nC v')\<rbrace>!"
apply (clarsimp simp: validNF_def valid_def no_fail_def)
apply (subst all_imp_conj_distrib[symmetric]; clarsimp)
apply (clarsimp simp: repeat_pre_step_def repeat_inv_def repeat_measure_def)
apply (rename_tac s \<sigma>' y)
apply (insert d1corres)
apply (drule_tac x = "URecord [(y, type_repr \<tau>a), (obsv, type_repr \<tau>o)] None" and
y = arg in meta_spec2)
apply (drule_tac x = \<sigma>' and y = s in meta_spec2)
apply (erule meta_impE)
apply (simp add: valrelc)
apply (clarsimp simp: corres_def \<Xi>wellformed \<xi>'matchesu)
apply (erule impE)
apply (drule urepeat_bod_preservation[OF \<Xi>wellformed \<xi>'matchesu acctyp obsvtyp
disjoint[simplified Int_commute] _
fsteptype[simplified \<tau>fdef]])
apply clarsimp
apply (rename_tac r' w')
apply (rule_tac x = "r' \<union> ro" in exI)
apply (rule_tac x = w' in exI)
apply (clarsimp simp: \<tau>fdef)
apply (intro matches_ptrs_some[where r' = "{}" and w' = "{}", simplified]
matches_ptrs_empty[where \<tau>s = "[]", simplified]
u_t_struct
u_t_r_cons1[where w' = "{}", simplified]
u_t_r_cons1[where r' = "{}" and w' = "{}", simplified]
u_t_r_empty; simp?)
apply (erule uval_typing_frame(1); simp add: obsvtyp disjoint[simplified Int_commute])
apply (drule frame_noalias_uval_typing'(2)[OF _ obsvtyp disjoint[simplified Int_commute]]; blast)
  apply (simp add: matches_ptrs.matches_ptrs_empty)
apply clarsimp
apply (rename_tac a b)
apply (elim allE impE, assumption)
apply (clarsimp simp: inc_le)
apply (simp only: less_is_non_zero_p1[THEN unatSuc2] word_less_nat_alt)
apply (frule urepeat_bod_step_determ[OF _ _ _ determ]; (simp del: urepeat_bod.simps)?)
apply (intro conjI exI; assumption?; (simp del: urepeat_bod.simps add: o1C_a1U a1C_a1U)?)
done
lemma stop_wp:
assumes \<Xi>wellformed: "proc_ctx_wellformed \<Xi>'"
and \<xi>'matchesu: "\<xi>' matches-u \<Xi>'"
and determ: "determ \<xi>'"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and fstoptype: "\<Xi>', 0, [], {}, [Some (bang \<tau>f)] \<turnstile> App (uvalfun_to_expr fstop) (Var 0) : TPrim Bool"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and valrelc: "\<And>x x'. val_rel x (x' :: ('c :: cogent_C_val)) \<equiv>
\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x')"
and d0corres: "\<And>x x' \<sigma> s. val_rel x (x' :: ('c :: cogent_C_val)) \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstop) (Var 0))
(do ret <- d0 (stopC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some (bang \<tau>f)] \<sigma> s"
and srel: "(\<sigma>, s) \<in> state_rel"
and acctyp: "\<Xi>', \<sigma> \<turnstile> acc :u \<tau>a \<langle>ra, wa\<rangle>"
and obsvtyp: "\<Xi>', \<sigma> \<turnstile> obsv :u \<tau>o \<langle>ro, {}\<rangle>"
and disjoint: "wa \<inter> ro = {}"
and valrelacc: "val_rel acc (a0C v')"
and valrelobsv: "val_rel obsv (o0C v')"
shows "\<And>j arg j'.
\<lbrace>\<lambda>sa. (\<exists>\<sigma>' y n. (\<sigma>', sa) \<in> state_rel \<and> val_rel y (a1C arg) \<and> val_rel obsv (o1C arg) \<and>
j = j' \<and> j < nC v' \<and>
urepeat_bod \<xi>' n (uvalfun_to_expr fstop) (uvalfun_to_expr fstep) \<sigma> \<sigma>'
\<tau>a acc \<tau>o obsv y \<and>
((\<xi>' , [URecord [(y, type_repr (bang \<tau>a)), (obsv, type_repr \<tau>o)] None]
\<turnstile> (\<sigma>', App (uvalfun_to_expr fstop) (Var 0)) \<Down>! (\<sigma>', UPrim (LBool True)))
\<longrightarrow> n \<ge> unat j \<and> n < unat (nC v')) \<and>
((\<xi>' , [URecord [(y, type_repr (bang \<tau>a)), (obsv, type_repr \<tau>o)] None]
\<turnstile> (\<sigma>', App (uvalfun_to_expr fstop) (Var 0)) \<Down>! (\<sigma>', UPrim (LBool False)))
\<longrightarrow> n = unat j))\<rbrace>
d0 (stopC v') arg
\<lbrace>\<lambda>ret sb.
(boolean_C ret \<noteq> 0 \<longrightarrow>
(\<exists>\<sigma>' y.
urepeat_bod \<xi>' (unat (nC v')) (uvalfun_to_expr fstop) (uvalfun_to_expr fstep) \<sigma> \<sigma>' \<tau>a acc \<tau>o obsv y \<and>
(\<sigma>', sb) \<in> state_rel \<and> val_rel y (a1C arg))) \<and>
(boolean_C ret = 0 \<longrightarrow>
repeat_pre_step state_rel \<xi>' j j' fstop fstep \<sigma> \<tau>a \<tau>o acc obsv sb (nC v') (a1C arg) (o1C arg))\<rbrace>!"
apply (clarsimp simp: validNF_def valid_def no_fail_def)
apply (subst all_imp_conj_distrib[symmetric]; clarsimp)
apply (clarsimp simp: repeat_pre_step_def)
apply (rename_tac s \<sigma>' y n)
apply (insert d0corres)
apply (drule_tac x = "URecord [(y, type_repr (bang \<tau>a)), (obsv, type_repr \<tau>o)] None" and
y = arg in meta_spec2)
apply (drule_tac x = \<sigma>' and y = s in meta_spec2)
apply (erule meta_impE)
apply (simp add: valrelc)
apply (clarsimp simp: corres_def \<Xi>wellformed \<xi>'matchesu)
apply (frule urepeat_bod_preservation[OF \<Xi>wellformed \<xi>'matchesu acctyp obsvtyp
disjoint[simplified Int_commute] _
fsteptype[simplified \<tau>fdef]])
apply clarsimp
apply (rename_tac r' w')
apply (erule impE, rule_tac x = "(r' \<union> w') \<union> ro" in exI, rule_tac x = "{}" in exI)
apply (clarsimp simp: \<tau>fdef)
apply (intro matches_ptrs_some[where r' = "{}" and w' = "{}", simplified]
matches_ptrs_empty[where \<tau>s = "[]", simplified]
u_t_struct
u_t_r_cons1[where w' = "{}", simplified]
u_t_r_cons1[where r' = "{}" and w' = "{}", simplified]
u_t_r_empty; simp?)
apply (rule uval_typing_bang(1); simp)
     apply (rule uval_typing_bang(1)[where w = "{}", simplified])
apply (erule uval_typing_frame(1); simp add: obsvtyp disjoint[simplified Int_commute])
apply (rule wellformed_imp_bang_type_repr[OF uval_typing_to_wellformed(1)[OF obsvtyp]])
   apply (simp add: matches_ptrs.matches_ptrs_empty)
apply clarsimp
apply (rename_tac a b)
apply (elim allE, erule impE, assumption)
apply clarsimp
apply (frule_tac r = "(r' \<union> w') \<union> ro" and w = "{}"
in preservation(1)[where K = "[]" and \<tau>s = "[]", simplified,
OF subst_wellformed_nothing \<Xi>wellformed _
\<xi>'matchesu _ fstoptype, simplified, rotated 1])
apply (clarsimp simp: \<tau>fdef)
apply (intro matches_ptrs_some[where r' = "{}" and w' = "{}", simplified]
matches_ptrs_empty[where \<tau>s = "[]", simplified]
u_t_struct
u_t_r_cons1[where w' = "{}", simplified]
u_t_r_cons1[where r' = "{}" and w' = "{}", simplified]
u_t_r_empty; simp?)
apply (rule uval_typing_bang(1); simp)
      apply (rule uval_typing_bang(1)[where w = "{}", simplified])
apply (erule uval_typing_frame(1); simp add: obsvtyp disjoint[simplified Int_commute])
apply (rule wellformed_imp_bang_type_repr[OF uval_typing_to_wellformed(1)[OF obsvtyp]])
    apply (simp add: matches_ptrs.matches_ptrs_empty)
apply (clarsimp simp: val_rel_bool_t_C_def)
apply (erule u_t_primE; clarsimp)
apply (drule frame_empty; clarsimp)
apply (rule conjI; clarsimp)
apply (intro exI conjI; assumption?)
apply (erule (2) urepeat_bod_early_termination)
apply (intro exI conjI; assumption?)
done
lemma crepeat_corres_base:
assumes \<gamma>len: "i < length \<gamma>"
and valrel: "val_rel (\<gamma> ! i) (v' :: ('a :: cogent_C_val))"
and \<Gamma>i: "\<Gamma> ! i = Some (fst (snd (snd (snd (\<Xi>' name)))))"
      and \<Xi>name: "\<Xi>' name = (0, [], {}, \<tau>, \<tau>a)"
and \<tau>def: "\<tau> = TRecord [(''n'', TPrim (Num U64), Present),
(''stop'', TFun (bang \<tau>f) (TPrim Bool), Present),
(''step'', TFun \<tau>f \<tau>a, Present),
(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and bang\<tau>o: "bang \<tau>o = \<tau>o"
and \<xi>''name: "\<xi>'' name = urepeat \<Xi>' \<xi>' \<tau>a \<tau>o"
and \<xi>'matchesu: "\<xi>' matches-u \<Xi>'"
and determ: "determ \<xi>'"
and \<gamma>i: "\<gamma> ! i =
URecord [(UPrim (LU64 n), RPrim (Num U64)),
(fstop, RFun), (fstep, RFun),
(acc, type_repr \<tau>a), (obsv, type_repr \<tau>o)] None"
and fstoptype: "\<Xi>', 0, [], {}, [Some (bang \<tau>f)] \<turnstile> App (uvalfun_to_expr fstop) (Var 0) : TPrim Bool"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and d0corres: "\<And>x x' \<sigma> s. val_rel x (x' :: ('c :: cogent_C_val)) \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstop) (Var 0))
(do ret <- d0 (stopC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some (bang \<tau>f)] \<sigma> s"
and d1corres: "\<And>x x' \<sigma> s. val_rel x x' \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstep) (Var 0))
(do ret <- d1 (stepC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some \<tau>f] \<sigma> s"
and valrela: "\<And>x x'. val_rel x (x' :: 'a) \<equiv>
\<exists>n f g acc obsv. x = URecord [n, f, g, acc, obsv] None \<and>
val_rel (fst n) (nC x') \<and> val_rel (fst f) (stopC x') \<and>
val_rel (fst g) (stepC x') \<and> val_rel (fst acc) (a0C x') \<and>
val_rel (fst obsv) (o0C x')"
and valrelc: "\<And>x x'. val_rel x (x' :: 'c) \<equiv>
\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x')"
and a1C_a1U: "\<And>x y. a1C (a1U (\<lambda>_. y) x) = y"
and a1C_o1U: "\<And>x y. a1C (o1U y x) = a1C x"
and o1C_o1U: "\<And>x y. o1C (o1U (\<lambda>_. y) x) = y"
and o1C_a1U: "\<And>x y. o1C (a1U y x) = o1C x"
and cfundef: "cfun = crepeat nC stopC stepC a0C o0C a1C a1U o1U d0 d1"
shows
"corres state_rel (App (AFun name [] []) (Var i))
(do x <- cfun v'; gets (\<lambda>s. x) od)
\<xi>'' \<gamma> \<Xi>' \<Gamma> \<sigma> s"
proof (rule absfun_corres[OF _ \<gamma>len valrel])
show "abs_fun_rel \<Xi>' state_rel name \<xi>'' cfun \<sigma> s (\<gamma> ! i) v'"
apply (subst abs_fun_rel_def')
apply (clarsimp simp: \<Xi>name \<tau>def \<xi>''name cfundef urepeat_def bang\<tau>o \<gamma>i fsteptype[simplified \<tau>fdef])
apply (insert fstoptype; simp add: \<tau>fdef bang\<tau>o)
apply (thin_tac "_, _, _, _, _ \<turnstile> _ : _")
apply (erule u_t_recE; clarsimp)
apply (erule u_t_r_consE; simp)+
apply (erule conjE)+
apply (drule_tac t = "type_repr _" in sym)+
apply clarsimp
apply (frule tprim_no_pointers(1); clarsimp)
apply (drule tprim_no_pointers(2); clarsimp)
apply (frule tfun_no_pointers(1); clarsimp)
apply (frule tfun_no_pointers(2); clarsimp)
apply (drule uval_typing_uvalfun; simp)
apply (frule tfun_no_pointers(1); clarsimp)
apply (frule tfun_no_pointers(2); clarsimp)
apply (drule uval_typing_uvalfun; simp)
apply (erule u_t_r_emptyE; clarsimp)
apply (rename_tac ra wa ro wo)
apply (cut_tac \<Xi>' = \<Xi>' and \<sigma> = \<sigma> and v = obsv and \<tau> = \<tau>o and r = ro and w = wo
in bang_not_writable(1); simp add: bang\<tau>o)
apply (clarsimp simp: crepeat_def valrela val_rel_word val_rel_fun_tag)
apply (wp; (clarsimp split: prod.splits)?)
apply (rule_tac
I = "\<lambda>(a, b, j) s. repeat_inv state_rel \<xi>' j fstop fstep \<sigma> \<tau>a \<tau>o acc obsv s (nC v') (a1C b) (o1C b)" and
M = "\<lambda>((_,_, j), _). repeat_measure j (nC v')" in whileLoopE_add_invI)
apply (wp; clarsimp split: prod.splits)
using d1corres o1C_a1U a1C_a1U
apply (wp step_wp[OF _ \<xi>'matchesu determ \<tau>fdef fsteptype valrelc]; simp?)
apply (wp; clarsimp)
apply (wp; clarsimp)
using d0corres
apply (wp stop_wp[OF _ \<xi>'matchesu determ \<tau>fdef fstoptype fsteptype valrelc]; simp?)
apply clarsimp
apply (clarsimp simp: repeat_inv_def)
apply (intro exI conjI; assumption?)
apply (clarsimp simp: unat_mono)
apply clarsimp
apply (clarsimp simp: repeat_inv_def)
apply wp
apply (rule validNF_select_UNIV)+
apply (clarsimp simp: repeat_inv_def o1C_o1U a1C_o1U a1C_a1U)
done
next
show "\<Gamma> ! i = Some (fst (snd (snd (snd (\<Xi>' name)))))"
using \<Gamma>i by simp
qed
section "Corres rules which are easier to use"
lemma crepeat_corres:
assumes \<gamma>len: "i < length \<gamma>"
and valrel: "val_rel (\<gamma> ! i) (v' :: ('a :: cogent_C_val))"
and \<Gamma>i: "\<Gamma> ! i = Some (fst (snd (snd (snd (\<Xi>' name)))))"
and \<Xi>name: "\<Xi>' name = (0, [], {}, \<tau>, \<tau>a)"
and \<tau>def: "\<tau> = TRecord [(''n'', TPrim (Num U64), Present),
(''stop'', TFun (bang \<tau>f) (TPrim Bool), Present),
(''step'', TFun \<tau>f \<tau>a, Present),
(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and bang\<tau>o: "bang \<tau>o = \<tau>o"
and \<xi>''name: "\<xi>'' name = urepeat \<Xi>' \<xi>' \<tau>a \<tau>o"
and \<xi>'matchesu: "\<xi>' matches-u \<Xi>'"
and determ: "determ \<xi>'"
and \<gamma>i: "\<exists>n acc obsv a b. \<gamma> ! i = URecord [n, (fstop, a), (fstep, b), acc, obsv] None"
and fstoptype: "\<Xi>', 0, [], {}, [Some (bang \<tau>f)] \<turnstile> App (uvalfun_to_expr fstop) (Var 0) : TPrim Bool"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and d0corres: "\<And>x x' \<sigma> s. val_rel x (x' :: ('c :: cogent_C_val)) \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstop) (Var 0))
(do ret <- d0 (stopC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some (bang \<tau>f)] \<sigma> s"
and d1corres: "\<And>x x' \<sigma> s. val_rel x x' \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstep) (Var 0))
(do ret <- d1 (stepC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some \<tau>f] \<sigma> s"
and valrela: "\<And>x x'. val_rel x (x' :: 'a) \<equiv>
\<exists>n f g acc obsv. x = URecord [n, f, g, acc, obsv] None \<and>
val_rel (fst n) (nC x') \<and> val_rel (fst f) (stopC x') \<and>
val_rel (fst g) (stepC x') \<and> val_rel (fst acc) (a0C x') \<and>
val_rel (fst obsv) (o0C x')"
and valrelc: "\<And>x x'. val_rel x (x' :: 'c) \<equiv>
\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x')"
and a1C_a1U: "\<And>x y. a1C (a1U (\<lambda>_. y) x) = y"
and a1C_o1U: "\<And>x y. a1C (o1U y x) = a1C x"
and o1C_o1U: "\<And>x y. o1C (o1U (\<lambda>_. y) x) = y"
and o1C_a1U: "\<And>x y. o1C (a1U y x) = o1C x"
and cfundef: "cfun = crepeat nC stopC stepC a0C o0C a1C a1U o1U d0 d1"
shows
"corres state_rel (App (AFun name [] []) (Var i))
(do x <- cfun v'; gets (\<lambda>s. x) od)
\<xi>'' \<gamma> \<Xi>' \<Gamma> \<sigma> s"
apply (insert \<gamma>i valrel; clarsimp simp: valrela val_rel_word corres_def)
apply (frule matches_ptrs_length)
apply (frule_tac matches_ptrs_proj_single'[OF _ _ \<Gamma>i[simplified \<Xi>name \<tau>def]]; simp?)
apply (cut_tac \<gamma>len; linarith)
apply clarsimp
apply (erule u_t_recE; clarsimp)
apply (erule u_t_r_consE; simp)+
apply (erule u_t_r_emptyE; simp)
apply (elim conjE)
apply (drule_tac t = "type_repr _" in sym)
apply clarsimp
apply (thin_tac "_ \<inter> _ = {}")+
apply (thin_tac "_ \<subseteq> _")+
apply (thin_tac "_, _ \<turnstile> _ :u _ \<langle>_, _\<rangle>")+
apply (cut_tac state_rel = state_rel and \<sigma> = \<sigma> and s = s and \<xi>'' = \<xi>'' in
crepeat_corres_base[OF \<gamma>len valrel _ _ \<tau>def \<tau>fdef bang\<tau>o _ \<xi>'matchesu determ _ fstoptype
fsteptype _ _ valrela valrelc a1C_a1U _ _ _ cfundef]; simp?)
using \<Gamma>i apply simp
using \<Xi>name apply simp
using \<xi>''name apply simp
using d0corres apply simp
using d1corres apply simp
using a1C_o1U apply simp
using o1C_o1U apply simp
using o1C_a1U apply simp
apply (clarsimp simp: corres_def)
done
lemma crepeat_corres_rel_leq:
assumes \<gamma>len: "i < length \<gamma>"
and valrel: "val_rel (\<gamma> ! i) (v' :: ('a :: cogent_C_val))"
and \<Gamma>i: "\<Gamma> ! i = Some (fst (snd (snd (snd (\<Xi>' name)))))"
and \<Xi>name: "\<Xi>' name = (0, [], {}, \<tau>, \<tau>a)"
and \<tau>def: "\<tau> = TRecord [(''n'', TPrim (Num U64), Present),
(''stop'', TFun (bang \<tau>f) (TPrim Bool), Present),
(''step'', TFun \<tau>f \<tau>a, Present),
(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and bang\<tau>o: "bang \<tau>o = \<tau>o"
and \<xi>''name: "\<xi>'' name = urepeat \<Xi>' \<xi>' \<tau>a \<tau>o"
and leq: "rel_leq \<xi>' \<xi>''"
and determ: "determ \<xi>''"
and \<gamma>i: "\<exists>n acc obsv a b. \<gamma> ! i = URecord [n, (fstop, a), (fstep, b), acc, obsv] None"
and fstoptype: "\<Xi>', 0, [], {}, [Some (bang \<tau>f)] \<turnstile> App (uvalfun_to_expr fstop) (Var 0) : TPrim Bool"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and d0corres: "\<And>x x' \<sigma> s. val_rel x (x' :: ('c :: cogent_C_val)) \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstop) (Var 0))
(do ret <- d0 (stopC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some (bang \<tau>f)] \<sigma> s"
and d1corres: "\<And>x x' \<sigma> s. val_rel x x' \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstep) (Var 0))
(do ret <- d1 (stepC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some \<tau>f] \<sigma> s"
and valrela: "\<And>x x'. val_rel x (x' :: 'a) \<equiv>
\<exists>n f g acc obsv. x = URecord [n, f, g, acc, obsv] None \<and>
val_rel (fst n) (nC x') \<and> val_rel (fst f) (stopC x') \<and>
val_rel (fst g) (stepC x') \<and> val_rel (fst acc) (a0C x') \<and>
val_rel (fst obsv) (o0C x')"
and valrelc: "\<And>x x'. val_rel x (x' :: 'c) \<equiv>
\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x')"
and a1C_a1U: "\<And>x y. a1C (a1U (\<lambda>_. y) x) = y"
and a1C_o1U: "\<And>x y. a1C (o1U y x) = a1C x"
and o1C_o1U: "\<And>x y. o1C (o1U (\<lambda>_. y) x) = y"
and o1C_a1U: "\<And>x y. o1C (a1U y x) = o1C x"
and cfundef: "cfun = crepeat nC stopC stepC a0C o0C a1C a1U o1U d0 d1"
shows
"corres state_rel (App (AFun name [] []) (Var i))
(do x <- cfun v'; gets (\<lambda>s. x) od)
\<xi>'' \<gamma> \<Xi>' \<Gamma> \<sigma> s"
apply (clarsimp simp: corres_def)
apply (cut_tac state_rel = state_rel and \<sigma> = \<sigma> and s = s and \<xi>'' = \<xi>'' in
crepeat_corres[OF \<gamma>len valrel _ _ \<tau>def \<tau>fdef bang\<tau>o _ rel_leq_matchesuD[OF leq]
determ_rel_leqD[OF leq determ] \<gamma>i fstoptype fsteptype _ _ valrela
valrelc a1C_a1U _ _ _ cfundef]; simp?)
using \<Gamma>i apply simp
using \<Xi>name apply simp
using \<xi>''name apply simp
using d0corres apply simp
using d1corres apply simp
using a1C_o1U apply simp
using o1C_o1U apply simp
using o1C_a1U apply simp
apply (clarsimp simp: corres_def)
done
lemma crepeat_corres_bang:
assumes \<gamma>len: "i < length \<gamma>"
and valrel: "val_rel (\<gamma> ! i) (v' :: ('a :: cogent_C_val))"
and \<Gamma>i: "\<Gamma> ! i = Some (fst (snd (snd (snd (\<Xi>' name)))))"
and \<Xi>name: "\<Xi>' name = (0, [], {}, \<tau>, \<tau>a)"
and \<tau>def: "\<tau> = TRecord [(''n'', TPrim (Num U64), Present),
(''stop'', TFun \<tau>f (TPrim Bool), Present),
(''step'', TFun \<tau>f \<tau>a, Present),
(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and bang\<tau>a: "bang \<tau>a = \<tau>a"
and bang\<tau>o: "bang \<tau>o = \<tau>o"
and \<xi>''name: "\<xi>'' name = urepeat \<Xi>' \<xi>' \<tau>a \<tau>o"
and leq: "rel_leq \<xi>' \<xi>''"
and determ: "determ \<xi>''"
and \<gamma>i: "\<exists>n acc obsv a b. \<gamma> ! i = URecord [n, (fstop, a), (fstep, b), acc, obsv] None"
and fstoptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstop) (Var 0) : TPrim Bool"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and d0corres: "\<And>x x' \<sigma> s. val_rel x (x' :: ('c :: cogent_C_val)) \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstop) (Var 0))
(do ret <- d0 (stopC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some (bang \<tau>f)] \<sigma> s"
and d1corres: "\<And>x x' \<sigma> s. val_rel x x' \<Longrightarrow>
corres state_rel (App (uvalfun_to_expr fstep) (Var 0))
(do ret <- d1 (stepC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some \<tau>f] \<sigma> s"
and valrela: "\<And>x x'. val_rel x (x' :: 'a) \<equiv>
\<exists>n f g acc obsv. x = URecord [n, f, g, acc, obsv] None \<and>
val_rel (fst n) (nC x') \<and> val_rel (fst f) (stopC x') \<and>
val_rel (fst g) (stepC x') \<and> val_rel (fst acc) (a0C x') \<and>
val_rel (fst obsv) (o0C x')"
and valrelc: "\<And>x x'. val_rel x (x' :: 'c) \<equiv>
\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x')"
and a1C_a1U: "\<And>x y. a1C (a1U (\<lambda>_. y) x) = y"
and a1C_o1U: "\<And>x y. a1C (o1U y x) = a1C x"
and o1C_o1U: "\<And>x y. o1C (o1U (\<lambda>_. y) x) = y"
and o1C_a1U: "\<And>x y. o1C (a1U y x) = o1C x"
and cfundef: "cfun = crepeat nC stopC stepC a0C o0C a1C a1U o1U d0 d1"
shows
"corres state_rel (App (AFun name [] []) (Var i))
(do x <- cfun v'; gets (\<lambda>s. x) od)
\<xi>'' \<gamma> \<Xi>' \<Gamma> \<sigma> s"
apply (rule_tac state_rel = state_rel and \<sigma> = \<sigma> and s = s in
crepeat_corres_rel_leq[OF \<gamma>len valrel _ _ _ \<tau>fdef bang\<tau>o _ leq
determ \<gamma>i _ fsteptype _ _ valrela
valrelc a1C_a1U _ _ _ cfundef]; simp?)
using \<Gamma>i apply simp
using \<Xi>name \<tau>def \<tau>fdef bang\<tau>a bang\<tau>o apply simp
using \<xi>''name apply simp
using \<tau>fdef bang\<tau>a bang\<tau>o fstoptype apply simp
using d0corres apply simp
using d1corres apply simp
using a1C_o1U apply simp
using o1C_o1U apply simp
using o1C_a1U apply simp
done
lemmas crepeat_corres_bang_fun_fun = crepeat_corres_bang
[where fstop = "UFunction _ _ _" and fstep = "UFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_fun typing_mono_app_cogent_fun]
lemmas crepeat_corres_bang_fun_afun = crepeat_corres_bang
[where fstop = "UFunction _ _ _" and fstep = "UAFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_fun typing_mono_app_cogent_absfun]
lemmas crepeat_corres_bang_afun_fun = crepeat_corres_bang
[where fstop = "UAFunction _ _ _" and fstep = "UFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_absfun typing_mono_app_cogent_fun]
lemmas crepeat_corres_bang_afun_afun = crepeat_corres_bang
[where fstop = "UAFunction _ _ _" and fstep = "UAFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_absfun typing_mono_app_cogent_absfun]
lemmas crepeat_corres_fun_fun = crepeat_corres_rel_leq
[where fstop = "UFunction _ _ _" and fstep = "UFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_fun typing_mono_app_cogent_fun]
lemmas crepeat_corres_fun_afun = crepeat_corres_rel_leq
[where fstop = "UFunction _ _ _" and fstep = "UAFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_fun typing_mono_app_cogent_absfun]
lemmas crepeat_corres_afun_fun = crepeat_corres_rel_leq
[where fstop = "UAFunction _ _ _" and fstep = "UFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_absfun typing_mono_app_cogent_fun]
lemmas crepeat_corres_afun_afun = crepeat_corres_rel_leq
[where fstop = "UAFunction _ _ _" and fstep = "UAFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_absfun typing_mono_app_cogent_absfun]
section "Alternate corres rules"
lemma crepeat_corres_base_all:
assumes \<gamma>len: "i < length \<gamma>"
and valrel: "val_rel (\<gamma> ! i) (v' :: ('a :: cogent_C_val))"
and \<Gamma>i: "\<Gamma> ! i = Some (fst (snd (snd (snd (\<Xi>' name)))))"
and \<Xi>name: "\<Xi>' name = (0, [], {}, \<tau>, \<tau>a)"
and \<tau>def: "\<tau> = TRecord [(''n'', TPrim (Num U64), Present),
(''stop'', TFun (bang \<tau>f) (TPrim Bool), Present),
(''step'', TFun \<tau>f \<tau>a, Present),
(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and bang\<tau>o: "bang \<tau>o = \<tau>o"
and \<xi>''name: "\<xi>'' name = urepeat \<Xi>' \<xi>' \<tau>a \<tau>o"
and \<xi>'matchesu: "\<xi>' matches-u \<Xi>'"
and determ: "determ \<xi>'"
and \<gamma>i: "\<gamma> ! i = URecord [(UPrim (LU64 n), RPrim (Num U64)), (fstop, RFun), (fstep, RFun), (acc, type_repr \<tau>a), (obsv, type_repr \<tau>o)] None"
and fstoptype: "\<Xi>', 0, [], {}, [Some (bang \<tau>f)] \<turnstile> App (uvalfun_to_expr fstop) (Var 0) : TPrim Bool"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and valrela: "\<forall>x (x' :: ('a :: cogent_C_val)). val_rel x x' =
(\<exists>n f g acc obsv. x = URecord [n, f, g, acc, obsv] None \<and>
val_rel (fst n) (nC x') \<and> val_rel (fst f) (stopC x') \<and>
val_rel (fst g) (stepC x') \<and> val_rel (fst acc) (a0C x') \<and>
val_rel (fst obsv) (o0C x'))"
and valrelc: "\<forall>x (x' :: ('c :: cogent_C_val)). val_rel x x' =
(\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x'))"
and d0corres: "\<forall>x x' \<sigma> s. val_rel x x' \<longrightarrow>
corres state_rel (App (uvalfun_to_expr fstop) (Var 0))
(do ret <- d0 (stopC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some (bang \<tau>f)] \<sigma> s"
and d1corres: "\<forall>x x' \<sigma> s. val_rel x x' \<longrightarrow>
corres state_rel (App (uvalfun_to_expr fstep) (Var 0))
(do ret <- d1 (stepC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some \<tau>f] \<sigma> s"
and a1C_a1U: "\<forall>x y. a1C (a1U (\<lambda>_. y) x) = y"
and a1C_o1U: "\<forall>x y. a1C (o1U y x) = a1C x"
and o1C_o1U: "\<forall>x y. o1C (o1U (\<lambda>_. y) x) = y"
and o1C_a1U: "\<forall>x y. o1C (a1U y x) = o1C x"
and cfundef: "cfun = crepeat nC stopC stepC a0C o0C a1C a1U o1U d0 d1"
shows
"corres state_rel (App (AFun name [] []) (Var i))
(do x <- cfun v'; gets (\<lambda>s. x) od)
\<xi>'' \<gamma> \<Xi>' \<Gamma> \<sigma> s"
apply (rule crepeat_corres_base[where o1C = o1C,
OF \<gamma>len valrel _ _ \<tau>def \<tau>fdef bang\<tau>o _ \<xi>'matchesu determ \<gamma>i fstoptype fsteptype, rotated -1, OF cfundef];
(simp add: \<Gamma>i \<Xi>name \<xi>''name valrela valrelc a1C_a1U a1C_o1U o1C_o1U o1C_a1U d0corres[simplified] d1corres[simplified])?)
done
lemma crepeat_corres_all:
assumes \<gamma>len: "i < length \<gamma>"
and valrel: "val_rel (\<gamma> ! i) (v' :: ('a :: cogent_C_val))"
and \<Gamma>i: "\<Gamma> ! i = Some (fst (snd (snd (snd (\<Xi>' name)))))"
and \<Xi>name: "\<Xi>' name = (0, [], {}, \<tau>, \<tau>a)"
and \<tau>def: "\<tau> = TRecord [(''n'', TPrim (Num U64), Present),
(''stop'', TFun (bang \<tau>f) (TPrim Bool), Present),
(''step'', TFun \<tau>f \<tau>a, Present),
(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and bang\<tau>o: "bang \<tau>o = \<tau>o"
and \<xi>''name: "\<xi>'' name = urepeat \<Xi>' \<xi>' \<tau>a \<tau>o"
and \<xi>'matchesu: "\<xi>' matches-u \<Xi>'"
and determ: "determ \<xi>'"
and \<gamma>i: "\<exists>n acc obsv a b. \<gamma> ! i = URecord [n, (fstop, a), (fstep, b), acc, obsv] None"
and fstoptype: "\<Xi>', 0, [], {}, [Some (bang \<tau>f)] \<turnstile> App (uvalfun_to_expr fstop) (Var 0) : TPrim Bool"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and valrela: "\<forall>x (x' :: ('a :: cogent_C_val)). val_rel x x' =
(\<exists>n f g acc obsv. x = URecord [n, f, g, acc, obsv] None \<and>
val_rel (fst n) (nC x') \<and> val_rel (fst f) (stopC x') \<and>
val_rel (fst g) (stepC x') \<and> val_rel (fst acc) (a0C x') \<and>
val_rel (fst obsv) (o0C x'))"
and valrelc: "\<forall>x (x' :: ('c :: cogent_C_val)). val_rel x x' =
(\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x'))"
and d0corres: "\<forall>x x' \<sigma> s. val_rel x x' \<longrightarrow>
corres state_rel (App (uvalfun_to_expr fstop) (Var 0))
(do ret <- d0 (stopC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some (bang \<tau>f)] \<sigma> s"
and d1corres: "\<forall>x x' \<sigma> s. val_rel x x' \<longrightarrow>
corres state_rel (App (uvalfun_to_expr fstep) (Var 0))
(do ret <- d1 (stepC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some \<tau>f] \<sigma> s"
and a1C_a1U: "\<forall>x y. a1C (a1U (\<lambda>_. y) x) = y"
and a1C_o1U: "\<forall>x y. a1C (o1U y x) = a1C x"
and o1C_o1U: "\<forall>x y. o1C (o1U (\<lambda>_. y) x) = y"
and o1C_a1U: "\<forall>x y. o1C (a1U y x) = o1C x"
and cfundef: "cfun = crepeat nC stopC stepC a0C o0C a1C a1U o1U d0 d1"
shows
"corres state_rel (App (AFun name [] []) (Var i))
(do x <- cfun v'; gets (\<lambda>s. x) od)
\<xi>'' \<gamma> \<Xi>' \<Gamma> \<sigma> s"
apply (rule crepeat_corres[where o1C = o1C,
OF \<gamma>len valrel _ _ \<tau>def \<tau>fdef bang\<tau>o _ \<xi>'matchesu determ \<gamma>i fstoptype fsteptype, rotated -1, OF cfundef];
(simp add: \<Gamma>i \<Xi>name \<xi>''name valrela valrelc a1C_a1U a1C_o1U o1C_o1U o1C_a1U d0corres[simplified] d1corres[simplified])?)
done
lemma crepeat_corres_rel_leq_all:
assumes \<gamma>len: "i < length \<gamma>"
and valrel: "val_rel (\<gamma> ! i) (v' :: ('a :: cogent_C_val))"
and \<Gamma>i: "\<Gamma> ! i = Some (fst (snd (snd (snd (\<Xi>' name)))))"
and \<Xi>name: "\<Xi>' name = (0, [], {}, \<tau>, \<tau>a)"
and \<tau>def: "\<tau> = TRecord [(''n'', TPrim (Num U64), Present),
(''stop'', TFun (bang \<tau>f) (TPrim Bool), Present),
(''step'', TFun \<tau>f \<tau>a, Present),
(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and bang\<tau>o: "bang \<tau>o = \<tau>o"
and \<xi>''name: "\<xi>'' name = urepeat \<Xi>' \<xi>' \<tau>a \<tau>o"
and leq: "rel_leq \<xi>' \<xi>''"
and determ: "determ \<xi>''"
and \<gamma>i: "\<exists>n acc obsv a b. \<gamma> ! i = URecord [n, (fstop, a), (fstep, b), acc, obsv] None"
and fstoptype: "\<Xi>', 0, [], {}, [Some (bang \<tau>f)] \<turnstile> App (uvalfun_to_expr fstop) (Var 0) : TPrim Bool"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and valrela: "\<forall>x (x' :: ('a :: cogent_C_val)). val_rel x x' =
(\<exists>n f g acc obsv. x = URecord [n, f, g, acc, obsv] None \<and>
val_rel (fst n) (nC x') \<and> val_rel (fst f) (stopC x') \<and>
val_rel (fst g) (stepC x') \<and> val_rel (fst acc) (a0C x') \<and>
val_rel (fst obsv) (o0C x'))"
and valrelc: "\<forall>x (x' :: ('c :: cogent_C_val)). val_rel x x' =
(\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x'))"
and d0corres: "\<forall>x x' \<sigma> s. val_rel x x' \<longrightarrow>
corres state_rel (App (uvalfun_to_expr fstop) (Var 0))
(do ret <- d0 (stopC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some (bang \<tau>f)] \<sigma> s"
and d1corres: "\<forall>x x' \<sigma> s. val_rel x x' \<longrightarrow>
corres state_rel (App (uvalfun_to_expr fstep) (Var 0))
(do ret <- d1 (stepC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some \<tau>f] \<sigma> s"
and a1C_a1U: "\<forall>x y. a1C (a1U (\<lambda>_. y) x) = y"
and a1C_o1U: "\<forall>x y. a1C (o1U y x) = a1C x"
and o1C_o1U: "\<forall>x y. o1C (o1U (\<lambda>_. y) x) = y"
and o1C_a1U: "\<forall>x y. o1C (a1U y x) = o1C x"
and cfundef: "cfun = crepeat nC stopC stepC a0C o0C a1C a1U o1U d0 d1"
shows
"corres state_rel (App (AFun name [] []) (Var i))
(do x <- cfun v'; gets (\<lambda>s. x) od)
\<xi>'' \<gamma> \<Xi>' \<Gamma> \<sigma> s"
apply (rule crepeat_corres_rel_leq[where o1C = o1C,
OF \<gamma>len valrel _ _ \<tau>def \<tau>fdef bang\<tau>o _ leq determ \<gamma>i fstoptype fsteptype, rotated -1, OF cfundef];
(simp add: \<Gamma>i \<Xi>name \<xi>''name valrela valrelc a1C_a1U a1C_o1U o1C_o1U o1C_a1U d0corres[simplified] d1corres[simplified])?)
done
lemma crepeat_corres_bang_all:
assumes \<gamma>len: "i < length \<gamma>"
and valrel: "val_rel (\<gamma> ! i) (v' :: ('a :: cogent_C_val))"
and \<Gamma>i: "\<Gamma> ! i = Some (fst (snd (snd (snd (\<Xi>' name)))))"
and \<Xi>name: "\<Xi>' name = (0, [], {}, \<tau>, \<tau>a)"
and \<tau>def: "\<tau> = TRecord [(''n'', TPrim (Num U64), Present),
(''stop'', TFun \<tau>f (TPrim Bool), Present),
(''step'', TFun \<tau>f \<tau>a, Present),
(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and \<tau>fdef: "\<tau>f = TRecord [(''acc'', \<tau>a, Present), (''obsv'', \<tau>o, Present)] Unboxed"
and bang\<tau>a: "bang \<tau>a = \<tau>a"
and bang\<tau>o: "bang \<tau>o = \<tau>o"
and \<xi>''name: "\<xi>'' name = urepeat \<Xi>' \<xi>' \<tau>a \<tau>o"
and leq: "rel_leq \<xi>' \<xi>''"
and determ: "determ \<xi>''"
and \<gamma>i: "\<exists>n acc obsv a b. \<gamma> ! i = URecord [n, (fstop, a), (fstep, b), acc, obsv] None"
and fstoptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstop) (Var 0) : TPrim Bool"
and fsteptype: "\<Xi>', 0, [], {}, [Some \<tau>f] \<turnstile> App (uvalfun_to_expr fstep) (Var 0) : \<tau>a"
and valrela: "\<forall>x (x' :: ('a :: cogent_C_val)). val_rel x x' =
(\<exists>n f g acc obsv. x = URecord [n, f, g, acc, obsv] None \<and>
val_rel (fst n) (nC x') \<and> val_rel (fst f) (stopC x') \<and>
val_rel (fst g) (stepC x') \<and> val_rel (fst acc) (a0C x') \<and>
val_rel (fst obsv) (o0C x'))"
and valrelc: "\<forall>x (x' :: ('c :: cogent_C_val)). val_rel x x' =
(\<exists>acc obsv. x = URecord [acc, obsv] None \<and> val_rel (fst acc) (a1C x') \<and>
val_rel (fst obsv) (o1C x'))"
and d0corres: "\<forall>x x' \<sigma> s. val_rel x x' \<longrightarrow>
corres state_rel (App (uvalfun_to_expr fstop) (Var 0))
(do ret <- d0 (stopC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some (bang \<tau>f)] \<sigma> s"
and d1corres: "\<forall>x x' \<sigma> s. val_rel x x' \<longrightarrow>
corres state_rel (App (uvalfun_to_expr fstep) (Var 0))
(do ret <- d1 (stepC v') x'; gets (\<lambda>s. ret) od)
\<xi>' [x] \<Xi>' [option.Some \<tau>f] \<sigma> s"
and a1C_a1U: "\<forall>x y. a1C (a1U (\<lambda>_. y) x) = y"
and a1C_o1U: "\<forall>x y. a1C (o1U y x) = a1C x"
and o1C_o1U: "\<forall>x y. o1C (o1U (\<lambda>_. y) x) = y"
and o1C_a1U: "\<forall>x y. o1C (a1U y x) = o1C x"
and cfundef: "cfun = crepeat nC stopC stepC a0C o0C a1C a1U o1U d0 d1"
shows
"corres state_rel (App (AFun name [] []) (Var i))
(do x <- cfun v'; gets (\<lambda>s. x) od)
\<xi>'' \<gamma> \<Xi>' \<Gamma> \<sigma> s"
apply (rule crepeat_corres_bang[where o1C = o1C,
OF \<gamma>len valrel _ _ \<tau>def \<tau>fdef bang\<tau>a bang\<tau>o _ leq determ \<gamma>i fstoptype fsteptype, rotated -1, OF cfundef];
(simp add: \<Gamma>i \<Xi>name \<xi>''name valrela valrelc a1C_a1U a1C_o1U o1C_o1U o1C_a1U d0corres[simplified] d1corres[simplified])?)
done
lemmas crepeat_corres_bang_fun_funall = crepeat_corres_bang_all
[where fstop = "UFunction _ _ _" and fstep = "UFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_fun typing_mono_app_cogent_fun]
lemmas crepeat_corres_bang_fun_afun_all = crepeat_corres_bang_all
[where fstop = "UFunction _ _ _" and fstep = "UAFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_fun typing_mono_app_cogent_absfun]
lemmas crepeat_corres_bang_afun_fun_all = crepeat_corres_bang_all
[where fstop = "UAFunction _ _ _" and fstep = "UFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_absfun typing_mono_app_cogent_fun]
lemmas crepeat_corres_bang_afun_afun_all = crepeat_corres_bang_all
[where fstop = "UAFunction _ _ _" and fstep = "UAFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_absfun typing_mono_app_cogent_absfun]
lemmas crepeat_corres_fun_fun_all = crepeat_corres_rel_leq_all
[where fstop = "UFunction _ _ _" and fstep = "UFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_fun typing_mono_app_cogent_fun]
lemmas crepeat_corres_fun_afun_all = crepeat_corres_rel_leq_all
[where fstop = "UFunction _ _ _" and fstep = "UAFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_fun typing_mono_app_cogent_absfun]
lemmas crepeat_corres_afun_fun_all = crepeat_corres_rel_leq_all
[where fstop = "UAFunction _ _ _" and fstep = "UFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_absfun typing_mono_app_cogent_fun]
lemmas crepeat_corres_afun_afun_all = crepeat_corres_rel_leq_all
[where fstop = "UAFunction _ _ _" and fstep = "UAFunction _ _ _", simplified,
OF _ _ _ _ _ _ _ _ _ _ _ typing_mono_app_cogent_absfun typing_mono_app_cogent_absfun]
end (* of context *)
end
|
I was told little about her early life and knew too much of her decline. In the final months I was not even sure it was mum talking – she had been known throughout her life as Peg, sometimes Peggy, but when she moved into a nursing home all the staff referred to her as Marian – her first and, until then, unused Christian name, except the family always thought it was Marion with an 'o' and I think she did too; her birth certificate confirms the nurses were right. So the sign on her door said Marian; I don't think I ever really knew who it was on the other side.
Old photographs never lose their capacity to surprise, particularly when they appear out of the blue. I have digitised and reprinted a large number of old photos belonging to my maternal grandfather, but I had never seen this one before. In later years my mother developed a habit of creating random albums, haphazard collections with no obvious consistency of place, subject or time; this one was tucked in between my sister's wedding and her grandchildren playing in the garden. Perhaps this is no bad thing: each page has the capacity to astonish, there being no clues as to what might appear next; poor mum's mind developed a similar trend towards her end.
The car is a 1920s Ford Model A and the small girl is my mother. They are probably all Taylors, my maternal grandmother's side of the family. The man on the far left is her grandfather, William, and the bowler-hatted gent is Uncle Charlie, thought by mum to be several shillings short of a pound. The lady is her grandmother Emily Susan, but the young man in the flat cap is unknown to me, lost with my late mother's memory.
The picture was probably taken at the family home near Bransbury which was demolished in the fifties when the Andover to Sutton Scotney road was widened. |
[GOAL]
R : Type u_1
S : Type u_2
Ο : Type u_3
x : S
instβΒΉ : CommSemiring R
instβ : CommSemiring S
f : R β+* Polynomial S
g : Ο β Polynomial S
p : MvPolynomial Ο R
β’ Polynomial.eval x (evalβ f g p) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) p
[PROOFSTEP]
apply induction_on p
[GOAL]
case h_C
R : Type u_1
S : Type u_2
Ο : Type u_3
x : S
instβΒΉ : CommSemiring R
instβ : CommSemiring S
f : R β+* Polynomial S
g : Ο β Polynomial S
p : MvPolynomial Ο R
β’ β (a : R),
Polynomial.eval x (evalβ f g (βC a)) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) (βC a)
[PROOFSTEP]
simp
[GOAL]
case h_add
R : Type u_1
S : Type u_2
Ο : Type u_3
x : S
instβΒΉ : CommSemiring R
instβ : CommSemiring S
f : R β+* Polynomial S
g : Ο β Polynomial S
p : MvPolynomial Ο R
β’ β (p q : MvPolynomial Ο R),
Polynomial.eval x (evalβ f g p) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) p β
Polynomial.eval x (evalβ f g q) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) q β
Polynomial.eval x (evalβ f g (p + q)) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) (p + q)
[PROOFSTEP]
intro p q hp hq
[GOAL]
case h_add
R : Type u_1
S : Type u_2
Ο : Type u_3
x : S
instβΒΉ : CommSemiring R
instβ : CommSemiring S
f : R β+* Polynomial S
g : Ο β Polynomial S
pβ p q : MvPolynomial Ο R
hp :
Polynomial.eval x (evalβ f g p) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) p
hq :
Polynomial.eval x (evalβ f g q) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) q
β’ Polynomial.eval x (evalβ f g (p + q)) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) (p + q)
[PROOFSTEP]
simp [hp, hq]
[GOAL]
case h_X
R : Type u_1
S : Type u_2
Ο : Type u_3
x : S
instβΒΉ : CommSemiring R
instβ : CommSemiring S
f : R β+* Polynomial S
g : Ο β Polynomial S
p : MvPolynomial Ο R
β’ β (p : MvPolynomial Ο R) (n : Ο),
Polynomial.eval x (evalβ f g p) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) p β
Polynomial.eval x (evalβ f g (p * X n)) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) (p * X n)
[PROOFSTEP]
intro p n hp
[GOAL]
case h_X
R : Type u_1
S : Type u_2
Ο : Type u_3
x : S
instβΒΉ : CommSemiring R
instβ : CommSemiring S
f : R β+* Polynomial S
g : Ο β Polynomial S
pβ p : MvPolynomial Ο R
n : Ο
hp :
Polynomial.eval x (evalβ f g p) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) p
β’ Polynomial.eval x (evalβ f g (p * X n)) =
evalβ (RingHom.comp (Polynomial.evalRingHom x) f) (fun s => Polynomial.eval x (g s)) (p * X n)
[PROOFSTEP]
simp [hp]
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
β’ β(eval x) (Polynomial.eval q (β(finSuccEquiv R n) f)) = β(eval fun i => Fin.cases (β(eval x) q) x i) f
[PROOFSTEP]
simp only [finSuccEquiv_apply, coe_evalβHom, polynomial_eval_evalβ, eval_evalβ]
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
β’ evalβ (RingHom.comp (eval x) (RingHom.comp (Polynomial.evalRingHom q) (RingHom.comp Polynomial.C C)))
(fun s => β(eval x) (Polynomial.eval q (Fin.cases Polynomial.X (fun k => βPolynomial.C (X k)) s))) f =
β(eval fun i => Fin.cases (β(eval x) q) x i) f
[PROOFSTEP]
conv in RingHom.comp _ _ => {
refine @RingHom.ext _ _ _ _ _ (RingHom.id _) fun r => ?_
simp}
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
| RingHom.comp (eval x) (RingHom.comp (Polynomial.evalRingHom q) (RingHom.comp Polynomial.C C))
[PROOFSTEP]
{ refine @RingHom.ext _ _ _ _ _ (RingHom.id _) fun r => ?_
simp}
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
| RingHom.comp (eval x) (RingHom.comp (Polynomial.evalRingHom q) (RingHom.comp Polynomial.C C))
[PROOFSTEP]
{ refine @RingHom.ext _ _ _ _ _ (RingHom.id _) fun r => ?_
simp}
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
| RingHom.comp (eval x) (RingHom.comp (Polynomial.evalRingHom q) (RingHom.comp Polynomial.C C))
[PROOFSTEP]
refine @RingHom.ext _ _ _ _ _ (RingHom.id _) fun r => ?_
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
r : R
β’ β(RingHom.comp (eval x) (RingHom.comp (Polynomial.evalRingHom q) (RingHom.comp Polynomial.C C))) r = β(RingHom.id R) r
[PROOFSTEP]
simp
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
β’ evalβ (RingHom.id R)
(fun s => β(eval x) (Polynomial.eval q (Fin.cases Polynomial.X (fun k => βPolynomial.C (X k)) s))) f =
β(eval fun i => Fin.cases (β(eval x) q) x i) f
[PROOFSTEP]
simp only [evalβ_id]
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
β’ β(eval fun s => β(eval x) (Polynomial.eval q (Fin.cases Polynomial.X (fun k => βPolynomial.C (X k)) s))) f =
β(eval fun i => Fin.cases (β(eval x) q) x i) f
[PROOFSTEP]
congr
[GOAL]
case e_a.e_f
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
β’ (fun s => β(eval x) (Polynomial.eval q (Fin.cases Polynomial.X (fun k => βPolynomial.C (X k)) s))) = fun i =>
Fin.cases (β(eval x) q) x i
[PROOFSTEP]
funext i
[GOAL]
case e_a.e_f.h
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
i : Fin (n + 1)
β’ β(eval x) (Polynomial.eval q (Fin.cases Polynomial.X (fun k => βPolynomial.C (X k)) i)) = Fin.cases (β(eval x) q) x i
[PROOFSTEP]
refine Fin.cases (by simp) (by simp) i
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
i : Fin (n + 1)
β’ β(eval x) (Polynomial.eval q (Fin.cases Polynomial.X (fun k => βPolynomial.C (X k)) 0)) = Fin.cases (β(eval x) q) x 0
[PROOFSTEP]
simp
[GOAL]
R : Type u_1
n : β
x : Fin n β R
instβ : CommSemiring R
f : MvPolynomial (Fin (n + 1)) R
q : MvPolynomial (Fin n) R
i : Fin (n + 1)
β’ β (i : Fin n),
β(eval x) (Polynomial.eval q (Fin.cases Polynomial.X (fun k => βPolynomial.C (X k)) (Fin.succ i))) =
Fin.cases (β(eval x) q) x (Fin.succ i)
[PROOFSTEP]
simp
|
lemma closed_connected_component: assumes S: "closed S" shows "closed (connected_component_set S x)" |
lemmas scaleR_right_diff_distrib = scaleR_diff_right |
lemmas scaleR_zero_right = real_vector.scale_zero_right |
/-
Copyright (c) 2015 Joey Teng. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Joey Teng.
-/
import tactic -- imports all the Lean tactics
import combinatorics.simple_graph.connectivity
-- Graph Theory
universe u
namespace simple_graph
variables {V : Type u} (G : simple_graph V)
/-!
## Supplements / Refinements to Connectivity
-/
namespace walk
variables {G} [decidable_eq V]
lemma support_drop_until_in {v w : V} (p : G.walk v w) (u : V) (h : u β p.support): u β (p.drop_until u h).support :=
begin
finish,
end
lemma support_bypass_subset' {u v : V} (p : G.walk u v) : p.support β
p.bypass.support :=
begin
induction p,
{ simp!, },
{ simp! only,
split_ifs,
{
have h : p_p.bypass.support β p_p.support,
{ exact support_bypass_subset p_p, },
by_cases hcases : p_u β p_p.support,
{
intros v' h'',
cases h'',
{ finish, },
{
sorry,
},
},
{ tauto, },
},
{ rw support_cons,
apply list.cons_subset_cons,
assumption, }, },
end
end walk
/-!
## Connectivity
Utilities about `walk`, `trail` and `path`.
-/
def subwalk_nil_from_vertex : Ξ {u : V}, G.walk u u
| u := (walk.nil : G.walk u u)
theorem is_trail_nil {v : V} (p : G.walk v v): p = walk.nil β p.is_trail :=
begin
intro hp,
rw hp,
simp,
end
/-!
## Reachable
Based on `simple_graph.walk`
-/
/-- There exists a walk in `G` from `v` to `w`. --/
def reachable (v w : V) : Prop :=
β p : G.walk v w, true
theorem reachable_def {v w : V} :
G.reachable v w β β β¦p : G.walk v wβ¦, true :=
begin
refl,
end
theorem reachable_if_walk {v w : V} (p : G.walk v w):
G.reachable v w :=
begin
use p,
end
/-- `v` is always reachable from itself. --/
@[simp]
theorem reachable_self {v : V} : G.reachable v v :=
begin
use walk.nil,
end
@[simp]
theorem reachable_trans {u v w : V} :
G.reachable u v β G.reachable v w β G.reachable u w :=
begin
intros h1 h2,
cases h1 with p1 hp1,
cases h2 with p2 hp2,
have p := p1.append p2,
use p,
end
@[simp]
theorem reachable_symm {u v : V} :
G.reachable u v β G.reachable v u :=
begin
split;
intro h;
cases h with p _;
use p.reverse,
end
@[simp]
theorem reachable_adj {v w : V} : G.adj v w β G.reachable v w :=
begin
intro h,
have p : G.walk v w,
{ constructor,
assumption,
exact walk.nil, },
use p,
end
theorem reachable_if_passing {u v w : V} (p : G.walk u w) (hp : v β p.support) [decidable_eq V]:
G.reachable v w :=
begin
rw reachable_def,
fconstructor,
{
let p' : G.walk u w := p.to_path,
have hp' : v β p'.support,
{
apply walk.support_bypass_subset',
exact hp,
},
have hp'' : p'.is_path,
{ exact walk.bypass_is_path p, },
let q : G.walk v w := p'.drop_until v hp',
use q,
},
{ triv },
end
theorem reachable_if_support {u v w x : V} (p : G.walk u x) (h1 : v β p.support) (h2 : w β p.support) [decidable_eq V]:
G.reachable v w :=
begin
have h' : G.reachable v x,
{ apply reachable_if_passing G p h1, },
have h'' : G.reachable w x,
{ apply reachable_if_passing G p h2, },
have h''' : G.reachable x w,
{
rw G.reachable_symm at h'',
exact h'',
},
-- Unsure how to use "reachable_trans", thus copy over
cases h' with p1 hp1,
cases h''' with p2 hp2,
have p := p1.append p2,
use p,
end
/-### Extend to path-/
theorem reachable_path {v w : V} [decidable_eq V]:
G.reachable v w β β β¦p : G.walk v wβ¦, p.is_path :=
begin
split,
{ simp,
intro h,
rw reachable at h,
cases h with p _,
let p' : G.walk v w := p.bypass,
use p',
exact walk.bypass_is_path p,
},
{ simp,
intros x hx,
use x,
},
end
/-### Extend to trail-/
theorem reachable_trail {v w : V} [decidable_eq V]:
G.reachable v w β β β¦p : G.walk v wβ¦, p.is_trail :=
begin
split,
{ rw reachable_path,
intro h,
cases h with p hp,
use p,
let p' := hp.to_trail,
exact p',
},
{ simp,
intros x hx,
use x,
},
end
/-!## Connected-/
/-- All vertices are reachable from each other. --/
def is_connected : Prop :=
β β¦v w : Vβ¦, G.reachable v w
/-- The complete graph is connected.
Note that the empty graph may not be, unless `V` is empty. --/
theorem complete_graph_is_connected : G = β€ β G.is_connected :=
begin
intros hG v w,
by_cases eq : v = w,
{ rw eq,
exact G.reachable_self, },
{ rw β ne at eq,
have p : G.walk v w,
{
have h : G.adj v w,
{ finish, },
{ exact walk.cons h walk.nil },
},
rw reachable,
use p,
},
end
/-!## Eulerian Walks-/
section Eulerian
variables [decidable_eq V]
def is_euler_circuit {v : V} (p : G.walk v v) : Prop :=
p.is_circuit β§ (β β¦e : sym2 Vβ¦, e β G.edge_set β (e β p.edges)) β§ β β¦u : Vβ¦, u β p.support
def is_eulerian [decidable_eq V] : Prop :=
β {v : V}, β {p : G.walk v v}, G.is_euler_circuit p
theorem eulerian_is_connected :
G.is_eulerian β G.is_connected :=
begin
rw is_eulerian,
intro h,
cases h with u hu,
cases hu with p hp,
rw is_euler_circuit at hp,
obtain β¨hp, he, hVβ© := hp,
rw is_connected,
intros v w,
have hv : v β p.support,
{ tauto, },
have hw : w β p.support,
{ tauto, },
exact G.reachable_if_support p hv hw,
end
theorem eulerian_all_even_degree [decidable_rel G.adj] [fintype V] :
G.is_eulerian β β (v : V), even (G.degree v):=
begin
intro h,
cases h with u hu,
cases hu with p hp,
obtain β¨hp, he, hVβ© := hp,
intro v,
have hv : v β p.support,
{ apply hV, },
let pe := p.edges,
let deg := G.degree v,
have hdeg : even deg,
{
-- Either
-- - v == u, thus list's first sym2 has a v, and last sym2 has a v
-- - v != u, thus
-- for each (sym2 V), i-th element is (_, v) β i+1-th element is (v, __)
-- hence (sym2 V) comes in pairs
-- thus degree is even.
sorry,
},
exact hdeg,
end
/-Euler's Theorem-/
theorem is_eulerian_iff_all_even_degree [decidable_rel G.adj] [fintype V] :
is_connected G β§ β (v : V), even (G.degree v) β G.is_eulerian :=
begin
sorry
end
/-!### Example: Seven Bridges of Königsberg
A modified version of "Seven Bridges of Königsberg", which now contains an Euler path.
Two duplicated edges (bridges) are removed from the original graph.
-/
section Konigsberg
/- Four sectors -/
@[derive decidable_eq]
inductive sector_
| v1 : sector_
| v2 : sector_
| v3 : sector_
| v4 : sector_
/-
v1
| \
v2 - v4
| /
v3
-/
def bridge_ : sector_ β sector_ β Prop
| sector_.v1 sector_.v3 := false
| sector_.v3 sector_.v1 := false
| _ _ := true
/-
A Euler Graph
v1
/ \
v2 v4
\ /
v3
-/
def bridge_2 : sector_ β sector_ β Prop
| sector_.v1 sector_.v3 := false
| sector_.v3 sector_.v1 := false
| sector_.v2 sector_.v4 := false
| sector_.v4 sector_.v2 := false
| _ _ := true
def graph_ : simple_graph sector_ := simple_graph.from_rel bridge_
def graph_2 : simple_graph sector_ := simple_graph.from_rel bridge_2
example : sector_.v1 β sector_.v4 :=
begin
exact dec_trivial
end
example {G : simple_graph sector_} [decidable_rel G.adj] [fintype sector_]:
G = graph_2 β G.is_eulerian :=
begin
intro h,
-- construct such a circuit
let v1 := sector_.v1,
let v2 := sector_.v2,
let v3 := sector_.v3,
let v4 := sector_.v4,
rw is_eulerian,
use v4,
let p' : G.walk v4 v4 := walk.nil,
have e14 : G.adj v1 v4,
{
rw h,
fconstructor;
tauto,
},
have e21 : G.adj v2 v1,
{
rw h,
fconstructor;
tauto,
}, -- same as e14
have e32 : G.adj v3 v2,
{
rw h,
fconstructor;
tauto,
},
have e43 : G.adj v4 v3,
{
rw h,
fconstructor;
tauto,
},
let p : G.walk v4 v4 := walk.cons e43 (walk.cons e32 (walk.cons e21 (walk.cons e14 p'))),
-- END of circuit construction
use p,
rw is_euler_circuit,
split,
{
fconstructor,
{
fconstructor,
exact dec_trivial,
},
{ exact dec_trivial, },
},
split,
-- prove edges
{
intro e,
intro he,
let _set := G.edge_set,
rw β mem_edge_set at *,
have h_set : _set = G.edge_set,
{ refl, },
have hset : graph_2.edge_set = {β¦(v1, v4)β§, β¦(v2, v1)β§, β¦(v3, v2)β§, β¦(v4, v3)β§},
{
sorry,
},
rw β h at hset,
rw hset at h_set,
norm_num,
by_cases hc : e = β¦(v4, v3)β§,
{ left, exact hc, },
{
by_cases hc' : e = β¦(v3, v2)β§,
{ right, left, exact hc', },
{
by_cases hc'' : e = β¦(v2, v1)β§,
{ right, right, left, exact hc'', },
{
by_cases hc''' : e = β¦(v1, v4)β§,
{ right, right, right, exact hc''', },
{ finish, },
},
},
},
},
-- proving vertices
{
intro u,
norm_num,
cases u,
{
right, right, right, left,
refl,
},
{
right, right, left,
refl,
},
{
right, left,
refl,
},
{
left,
refl,
},
},
end
end Konigsberg
end Eulerian
end simple_graph
|
test : Set β Set
test record{} = record{}
-- Record pattern at non-record type Set
-- when checking that the clause test record {} = record {} has
-- type Set β Set
|
-- Andreas, 2016-10-04, issue #2236
-- Result splitting should not insert hidden arguments visibly
-- {-# OPTIONS -v interaction.case:100 #-}
-- {-# OPTIONS -v tc.cover:100 #-}
-- {-# OPTIONS -v reify.clause:100 #-}
-- {-# OPTIONS -v reify.implicit:100 #-}
splitMe : (A : Set) {B : Set} β Set
splitMe = {!!} -- C-c C-c RET inserts hidden {B}
splitMe' : (A : Set) {B : Set} (C : Set) {D : Set} β Set
splitMe' = {!!} -- C-c C-c RET inserts hidden {B} and {D}
splitMe'' : {B : Set} (C : Set) {D : Set} β Set
splitMe'' = {!!} -- C-c C-c RET inserts hidden {D}
postulate
N : Set
P : N β Set
test : (A : Set) β β {n} β P n β Set
test = {!!} -- C-c C-c RET inserts hidden {n}
-- Correct is:
-- No hidden arguments inserted on lhs.
|
function [der_loc, der_dir] = msi_file_DER_loc(set_fid)
%function [der_loc, der_dir] = msi_file_DER_loc(set_fid)
% MSI_FILE_DER_LOC(set_fid) Scan a file FID for the derived-channel
% position block and return the channel locations and orientations.
%
key_count = msi_file_find_keyword(set_fid, 'MSI.DerChanCount:');
if (key_count == 1)
DerChanCount = msi_file_get_long(set_fid, 'MSI.DerChanCount:');
end
frewind(set_fid);
keyword = 'MSI.Der_Position_Information.Begin:';
while feof(set_fid) == 0
line = fgetl(set_fid);
matches = findstr(line, keyword);
num = length(matches);
if num > 0
fprintf(1, 'I found a %s\n', line);
for (ii=1:DerChanCount)
line = fgetl(set_fid);
[name, count, errmsg, nextindex] = sscanf(line, '%s\t', 1);
[token, nextline] = strtok(line, ' ');
[bigresult, count, errmsg, nextindex] = sscanf(nextline, '\t%f\t%f\t%f\t%f\t%f\t%f', 6);
loc(1) = bigresult(1);
loc(2) = bigresult(2);
loc(3) = bigresult(3);
dir(1) = bigresult(4);
dir(2) = bigresult(5);
dir(3) = bigresult(6);
%fprintf(1, 'chan: %s\n', name);
%fprintf(1, ' %f %f %f ', loc(1), loc(2), loc(3));
%fprintf(1, ' %f %f %f \n', dir(1), dir(2), dir(3));
%fprintf(1, 'chan: %s (%f,%f,%f) (%f,%f,%f)\n', name, loc[1], loc[2], loc[3], dir[1], dir[2], dir[3]);
outloc(ii,1) = loc(1);
outloc(ii,2) = loc(2);
outloc(ii,3) = loc(3);
outdir(ii,1) = dir(1);
outdir(ii,2) = dir(2);
outdir(ii,3) = dir(3);
end
der_loc = outloc;
der_dir = outdir;
end
end
|
# ---
#
# Script to install dependencies
# This script will install the dependencies required to run Achilles.
# This script only needs to be run once for a given R instance (computer).
#
# ---
# install pkgbuild (you might need to run the following line individually)
install.packages("pkgbuild")
# check pkgbuild
pkgbuild::check_build_tools()
# install remotes
install.packages("remotes", INSTALL_opts = c("--no-multiarch"))
# install sql renderer
install.packages("SqlRender")
library(SqlRender)
# test that SqlRender was installed and works
translate("SELECT TOP 10 * FROM person;", "postgresql")
# install achilles packages
remotes::install_github("OHDSI/CohortMethod", INSTALL_opts = c("--no-multiarch"))
remotes::install_github("OHDSI/Achilles", INSTALL_opts = c("--no-multiarch"))
# check that the Achilles library loads
library(Achilles)
|
{-# LANGUAGE ScopedTypeVariables, BangPatterns #-}
module Mandelbrot
( mandelbrot
) where
import Data.Complex
import Debug.Trace
mag2 :: RealFloat a => Complex a -> a
mag2 z = (realPart z)^2 + (imagPart z)^2
{-# INLINE mag2 #-}
-- For given point and maxIterations returns iteration number when diverged
mandelbrot :: forall a. RealFloat a => Int -> Complex a -> Int
mandelbrot maxIter c = mandelbrot_ 0 0
where
mandelbrot_ :: Int -> Complex a -> Int
mandelbrot_ !iter !z0 =
if iter >= maxIter || (mag2 z0) > 4
then iter
else mandelbrot_ (iter+1) (z0*z0 + c)
{-# INLINE mandelbrot_ #-}
{-# SPECIALIZE mandelbrot :: Int -> Complex Double -> Int #-}
|
{-# OPTIONS --without-K #-}
module Model.Quantification where
open import Model.RGraph as RG using (RGraph)
open import Model.Size as MS using (_<_ ; β¦_β§Ξ ; β¦_β§n ; β¦_β§Ο)
open import Model.Type.Core
open import Source.Size.Substitution.Theory
open import Source.Size.Substitution.Universe as SS using (Subβ’α΅€)
open import Util.HoTT.Equiv
open import Util.HoTT.FunctionalExtensionality
open import Util.HoTT.HLevel
open import Util.Prelude hiding (_β_)
open import Util.Relation.Binary.PropositionalEquality using
( Ξ£-β‘βΊ ; subst-sym-subst ; subst-subst-sym )
import Source.Size as SS
import Source.Type as ST
open RGraph
open RG._β_
open SS.Ctx
open SS.Subβ’α΅€
record β¦ββ§β² {Ξ} n (T : β¦Typeβ§ β¦ Ξ β n β§Ξ) (Ξ΄ : Obj β¦ Ξ β§Ξ) : Set where
no-eta-equality
field
arr : β m (m<n : m < β¦ n β§n .fobj Ξ΄) β Obj T (Ξ΄ , m , m<n)
param : β m m<n mβ² mβ²<n β eq T _ (arr m m<n) (arr mβ² mβ²<n)
open β¦ββ§β² public
β¦ββ§β²Canon : β {Ξ} n (T : β¦Typeβ§ β¦ Ξ β n β§Ξ) (Ξ΄ : Obj β¦ Ξ β§Ξ) β Set
β¦ββ§β²Canon n T Ξ΄
= Ξ£[ f β (β m (m<n : m < β¦ n β§n .fobj Ξ΄) β Obj T (Ξ΄ , m , m<n)) ]
(β m m<n mβ² mβ²<n β eq T _ (f m m<n) (f mβ² mβ²<n))
β¦ββ§β²β
β¦ββ§β²Canon : β {Ξ n T Ξ΄} β β¦ββ§β² {Ξ} n T Ξ΄ β
β¦ββ§β²Canon n T Ξ΄
β¦ββ§β²β
β¦ββ§β²Canon = record
{ forth = Ξ» { record { arr = x ; param = y } β x , y }
; isIso = record
{ back = Ξ» x β record { arr = projβ x ; param = projβ x }
; backβforth = Ξ» { record {} β refl }
; forthβback = Ξ» _ β refl
}
}
abstract
β¦ββ§β²Canon-IsSet : β {Ξ n T Ξ΄} β IsSet (β¦ββ§β²Canon {Ξ} n T Ξ΄)
β¦ββ§β²Canon-IsSet {T = T} = Ξ£-IsSet
(β-IsSet Ξ» m β β-IsSet Ξ» m<n β Ξ» _ _ β T .Obj-IsSet _ _)
(Ξ» f β IsOfHLevel-suc 1 (β-IsProp Ξ» m β β-IsProp Ξ» m<n β β-IsProp Ξ» mβ²
β β-IsProp Ξ» mβ²<n β T .eq-IsProp))
β¦ββ§β²-IsSet : β {Ξ n T Ξ΄} β IsSet (β¦ββ§β² {Ξ} n T Ξ΄)
β¦ββ§β²-IsSet {Ξ} {n} {T} {Ξ΄}
= β
-pres-IsOfHLevel 2 (β
-sym β¦ββ§β²β
β¦ββ§β²Canon) (β¦ββ§β²Canon-IsSet {Ξ} {n} {T} {Ξ΄})
β¦ββ§β²-β‘βΊ : β {Ξ n T Ξ΄} (f g : β¦ββ§β² {Ξ} n T Ξ΄)
β (β m m<n β f .arr m m<n β‘ g .arr m m<n)
β f β‘ g
β¦ββ§β²-β‘βΊ {T = T} record{} record{} fβg = β
-Injective β¦ββ§β²β
β¦ββ§β²Canon (Ξ£-β‘βΊ
( (funext Ξ» m β funext Ξ» m<n β fβg m m<n)
, funext Ξ» m β funext Ξ» m<n β funext Ξ» mβ² β funext Ξ» m<nβ² β T .eq-IsProp _ _))
β¦ββ§β²-resp-ββ¦Typeβ§ : β {Ξ n T U}
β T ββ¦Typeβ§ U
β β {Ξ΄} β β¦ββ§β² {Ξ} n T Ξ΄ β
β¦ββ§β² n U Ξ΄
β¦ββ§β²-resp-ββ¦Typeβ§ TβU = record
{ forth = Ξ» f β record
{ arr = Ξ» m m<n β TβU .forth .fobj (f .arr m m<n)
; param = Ξ» m m<n mβ² mβ²<n β TβU .forth .feq _ (f .param m m<n mβ² mβ²<n)
}
; isIso = record
{ back = Ξ» f β record
{ arr = Ξ» m m<n β TβU .back .fobj (f .arr m m<n)
; param = Ξ» m m<n mβ² mβ²<n β TβU .back .feq _ (f .param m m<n mβ² mβ²<n)
}
; backβforth = Ξ» x β β¦ββ§β²-β‘βΊ _ _ Ξ» m m<n β TβU .back-forth .ββ» _ _
; forthβback = Ξ» x β β¦ββ§β²-β‘βΊ _ _ Ξ» m m<n β TβU .forth-back .ββ» _ _
}
}
β¦ββ§ : β {Ξ} n (T : β¦Typeβ§ β¦ Ξ β n β§Ξ) β β¦Typeβ§ β¦ Ξ β§Ξ
β¦ββ§ n T = record
{ ObjHSet = Ξ» Ξ΄ β HLevelβΊ (β¦ββ§β² n T Ξ΄) β¦ββ§β²-IsSet
; eqHProp = Ξ» _ f g β β-HProp _ Ξ» m β β-HProp _ Ξ» m<n β β-HProp _ Ξ» mβ²
β β-HProp _ Ξ» mβ²<n β T .eqHProp _ (f .arr m m<n) (g .arr mβ² mβ²<n)
; eq-refl = Ξ» f β f .param
}
β¦ββ§-resp-ββ¦Typeβ§ : β {Ξ} n {T U : β¦Typeβ§ β¦ Ξ β n β§Ξ}
β T ββ¦Typeβ§ U
β β¦ββ§ n T ββ¦Typeβ§ β¦ββ§ n U
β¦ββ§-resp-ββ¦Typeβ§ n TβU = record
{ forth = record
{ fobj = β¦ββ§β²-resp-ββ¦Typeβ§ TβU .forth
; feq = Ξ» Ξ΄βΞ΄β² xβy a aβ aβ aβ β TβU .forth .feq _ (xβy a aβ aβ aβ)
}
; back = record
{ fobj = β¦ββ§β²-resp-ββ¦Typeβ§ TβU .back
; feq = Ξ» Ξ΄βΞ΄β² xβy a aβ aβ aβ β TβU .back .feq _ (xβy a aβ aβ aβ)
}
; back-forth = ββΊ Ξ» Ξ΄ x β β¦ββ§β²-resp-ββ¦Typeβ§ TβU .backβforth _
; forth-back = ββΊ Ξ» Ξ΄ x β β¦ββ§β²-resp-ββ¦Typeβ§ TβU .forthβback _
}
absβ : β {Ξ n} {Ξ : β¦Typeβ§ β¦ Ξ β§Ξ} {T : β¦Typeβ§ β¦ Ξ β n β§Ξ}
β subT β¦ SS.Wk β§Ο Ξ β T
β Ξ β β¦ββ§ n T
absβ {Ξ} {n} {Ξ} {T} f = record
{ fobj = Ξ» {Ξ΄} x β record
{ arr = Ξ» m m<n β f .fobj x
; param = Ξ» m m<n mβ² mβ²<n β f .feq _ (Ξ .eq-refl x)
}
; feq = Ξ» _ xβy m mβ² m<nΞ³ mβ²<nΞ³β² β f .feq _ xβy
}
appβ : β {Ξ n m} {Ξ : β¦Typeβ§ β¦ Ξ β§Ξ} {T : β¦Typeβ§ β¦ Ξ β n β§Ξ}
β (m<n : m SS.< n)
β Ξ β β¦ββ§ n T
β Ξ β subT β¦ SS.Sing m<n β§Ο T
appβ {m = m} {T = T} m<n f = record
{ fobj = Ξ» {Ξ΄} x β f .fobj x .arr (β¦ m β§n .fobj Ξ΄) (MS.β¦<β§ m<n)
; feq = Ξ» Ξ΄βΞ΄β² {x y} xβy β f .feq _ xβy _ _ _ _
}
subT-β¦ββ§ : β {Ξ Ξ© n Ο} (β’Ο : Ο βΆ Ξ βα΅€ Ξ©) (T : β¦Typeβ§ β¦ Ξ© β n β§Ξ)
β β¦ββ§ (n [ Ο ]α΅€)
(subT (β¦_β§Ο {Ξ© = Ξ© β n} (Lift β’Ο refl)) T)
ββ¦Typeβ§ subT β¦ β’Ο β§Ο (β¦ββ§ n T)
subT-β¦ββ§ {Ξ} {Ξ©} {n} {Ο} β’Ο T = record
{ forth = record
{ fobj = Ξ» {Ξ³} f β record
{ arr = Ξ» m m<n
β transportObj T
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n _ _ refl refl)
(f .arr m (subst (m <_) (sym (MS.β¦subβ§ β’Ο n)) m<n))
; param = Ξ» m m<n mβ² mβ²<n
β transportObj-resp-eq T _ _ (f .param _ _ _ _)
}
; feq = Ξ» Ξ³βΞ³β² fβg a aβ aβ aβ β transportObj-resp-eq T _ _ (fβg _ _ _ _)
}
; back = record
{ fobj = Ξ» {Ξ³} f β record
{ arr = Ξ» m m<n
β transportObj T
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n _ _ refl refl)
(f .arr m (subst (m <_) (MS.β¦subβ§ β’Ο n) m<n))
; param = Ξ» m m<n mβ² mβ²<n
β transportObj-resp-eq T _ _ (f .param _ _ _ _)
}
; feq = Ξ» Ξ³βΞ³β² fβg a aβ aβ aβ β transportObj-resp-eq T _ _ (fβg _ _ _ _)
}
; back-forth = ββΊ Ξ» Ξ³ f β β¦ββ§β²-β‘βΊ _ _ Ξ» m m<n
β trans
(transportObjβtransportObj T
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n _ _ refl refl)
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n _ _ refl refl))
(trans
(cong
(Ξ» p β transportObj T
(trans
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n
(subst (m <_) (MS.β¦subβ§ β’Ο n) p)
(subst (m <_) (MS.β¦subβ§ β’Ο n) m<n)
refl refl)
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n
(subst (m <_) (MS.β¦subβ§ β’Ο n) m<n)
(subst (m <_) (MS.β¦subβ§ β’Ο n) m<n)
refl refl))
(f .arr m p))
(subst-sym-subst (MS.β¦subβ§ β’Ο n)))
(transportObj-refl T _))
; forth-back = ββΊ Ξ» Ξ³ f β β¦ββ§β²-β‘βΊ _ _ Ξ» m m<n
β trans
(transportObjβtransportObj T
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n _ _ refl refl)
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n _ _ refl refl))
(trans
(cong
(Ξ» p β transportObj T
(trans
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n p p refl refl)
(MS.β¦Ξβnβ§-β‘βΊ Ξ© n p m<n refl refl))
(f .arr m p))
(subst-subst-sym (MS.β¦subβ§ β’Ο n)))
(transportObj-refl T _))
}
|
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
#! [auto_compilation]
import openvino.runtime as ov
compiled_model = ov.compile_model("model.xml")
#! [auto_compilation]
#! [properties_example]
core = ov.Core()
input_a = ov.opset8.parameter([8])
res = ov.opset8.absolute(input_a)
model = ov.Model(res, [input_a])
compiled = core.compile_model(model, "CPU")
print(model.inputs)
print(model.outputs)
print(compiled.inputs)
print(compiled.outputs)
#! [properties_example]
#! [tensor_basics]
data_float64 = np.ones(shape=(2,8))
tensor = ov.Tensor(data_float64)
assert tensor.element_type == ov.Type.f64
data_int32 = np.ones(shape=(2,8), dtype=np.int32)
tensor = ov.Tensor(data_int32)
assert tensor.element_type == ov.Type.i32
#! [tensor_basics]
#! [tensor_shared_mode]
data_to_share = np.ones(shape=(2,8))
shared_tensor = ov.Tensor(data_to_share, shared_memory=True)
# Editing of the numpy array affects Tensor's data
data_to_share[0][2] = 6.0
assert shared_tensor.data[0][2] == 6.0
# Editing of Tensor's data affects the numpy array
shared_tensor.data[0][2] = 0.6
assert data_to_share[0][2] == 0.6
#! [tensor_shared_mode]
#! [tensor_slice_mode]
data_to_share = np.ones(shape=(2,8))
# Specify slice of memory and the shape
shared_tensor = ov.Tensor(data_to_share[1][:] , shape=ov.Shape([8]))
# Editing of the numpy array affects Tensor's data
data_to_share[1][:] = 2
assert np.array_equal(shared_tensor.data, data_to_share[1][:])
#! [tensor_slice_mode]
infer_request = compiled.create_infer_request()
data = np.random.randint(-5, 3 + 1, size=(8))
#! [passing_numpy_array]
# Passing inputs data in form of a dictionary
infer_request.infer(inputs={0: data})
# Passing inputs data in form of a list
infer_request.infer(inputs=[data])
#! [passing_numpy_array]
#! [getting_results]
# Get output tensor
results = infer_request.get_output_tensor().data
# Get tensor with CompiledModel's output node
results = infer_request.get_tensor(compiled.outputs[0]).data
# Get all results with special helper property
results = list(infer_request.results.values())
#! [getting_results]
#! [sync_infer]
# Simple call to InferRequest
results = infer_request.infer(inputs={0: data})
# Extra feature: calling CompiledModel directly
results = compiled_model(inputs={0: data})
#! [sync_infer]
#! [asyncinferqueue]
core = ov.Core()
# Simple model that adds two inputs together
input_a = ov.opset8.parameter([8])
input_b = ov.opset8.parameter([8])
res = ov.opset8.add(input_a, input_b)
model = ov.Model(res, [input_a, input_b])
compiled = core.compile_model(model, "CPU")
# Number of InferRequests that AsyncInferQueue holds
jobs = 4
infer_queue = ov.AsyncInferQueue(compiled, jobs)
# Create data
data = [np.array([i] * 8, dtype=np.float32) for i in range(jobs)]
# Run all jobs
for i in range(len(data)):
infer_queue.start_async({0: data[i], 1: data[i]})
infer_queue.wait_all()
#! [asyncinferqueue]
#! [asyncinferqueue_access]
results = infer_queue[3].get_output_tensor().data
#! [asyncinferqueue_access]
#! [asyncinferqueue_set_callback]
data_done = [False for _ in range(jobs)]
def f(request, userdata):
print(f"Done! Result: {request.get_output_tensor().data}")
data_done[userdata] = True
infer_queue.set_callback(f)
for i in range(len(data)):
infer_queue.start_async({0: data[i], 1: data[i]}, userdata=i)
infer_queue.wait_all()
assert all(data_done)
#! [asyncinferqueue_set_callback]
|
# Reference: Chirp Z-Transform Spectral Zoom Optimization with MATLAB
# http://prod.sandia.gov/techlib/access-control.cgi/2005/057084.pdf
# For spectral zoom, set:
# W = exp(-j*2*pi*(f2-f1)/(m*fs));
# A = exp(j*2*pi*f1/fs);
# where:
# f1 = start freq
# f2 = end freq
# m = length of x
# fs = sampling rate
function czt(x::Vector{Complex128}, m::Int, w::Complex128, a::Complex128)
# TODO: add argument validation
# TODO: figure out why output isn't matching FFT
n = length(x)
N = [0:n-1].+n
NM = [-(n-1):(m-1)].+n
M = [0:m-1].+n
nfft = nextpow2(n+m-1)
W2 = w.^(([-(n-1):max(m-1,n-1)].^2)/2)
fg = zeros(Complex128, nfft)
fg[1:n] = x.*(a.^-(N.-n)).*W2[N]
fg = fft(fg)
fw = zeros(Complex128, nfft)
fw[1:length(NM)] = 1./W2[NM]
fw = fft(fw)
gg = ifft(fg.*fw)
return gg[M].*W2[M]
end
function czt(x::Vector{Complex128}, m::Int)
# TODO: add argument validation
czt( x, m, exp(-im*2*pi/m), 1.0+0.0im)
end |
/*
* Copyright 2020 Robert Bosch GmbH
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* \file noisy_lane_sensor.cpp
*/
#include <Eigen/Geometry> // for Isometry3d
#include <memory> // for shared_ptr<>
#include <random> // for random_device
#include <string> // for string
#include <utility> // for pair
#include <cloe/component.hpp> // for Component, Json
#include <cloe/component/frustum.hpp> // for Frustum
#include <cloe/component/lane_boundary.hpp> // for LaneBoundary
#include <cloe/component/lane_sensor.hpp> // for LaneBoundarySensor
#include <cloe/conf/action.hpp> // for actions::ConfigureFactory
#include <cloe/plugin.hpp> // for EXPORT_CLOE_PLUGIN
#include <cloe/registrar.hpp> // for Registrar
#include <cloe/sync.hpp> // for Sync
#include <cloe/trigger/set_action.hpp> // for actions::SetVariableActionFactory
#include "noise_data.hpp" // for NoiseData, NoiseConf
namespace cloe {
enum class LaneBoundaryField {
// Lateral distance to vehicle reference point and direction [m].
DyStart,
// Start of road mark in driving direction [m].
DxStart,
// Yaw angle relative to vehicle direction [rad].
HeadingStart,
// Horizontal curvature at start point of the spiral [1/m].
CurvhorStart,
// Change of horizontal curvature at start point of the spiral [1/m^2].
CurvhorChange,
// Distance to last valid measurement [m].
DxEnd
};
// clang-format off
ENUM_SERIALIZATION(LaneBoundaryField, ({
{LaneBoundaryField::DyStart, "dy_start"},
{LaneBoundaryField::DxStart, "dx_start"},
{LaneBoundaryField::HeadingStart, "heading_start"},
{LaneBoundaryField::CurvhorStart, "curv_hor_start"},
{LaneBoundaryField::CurvhorChange, "curv_hor_change"},
{LaneBoundaryField::DxEnd, "dx_end"},
}))
// clang-format on
namespace component {
void add_noise_dy_start(LaneBoundary* lb, const NoiseConf* noise) {
lb->dy_start = lb->dy_start + noise->get();
}
void add_noise_dx_start(LaneBoundary* lb, const NoiseConf* noise) {
lb->dx_start = lb->dx_start + noise->get();
}
void add_noise_heading_start(LaneBoundary* lb, const NoiseConf* noise) {
lb->heading_start = lb->heading_start + noise->get();
}
void add_noise_curv_hor_start(LaneBoundary* lb, const NoiseConf* noise) {
lb->curv_hor_start = lb->curv_hor_start + noise->get();
}
void add_noise_curv_hor_change(LaneBoundary* lb, const NoiseConf* noise) {
lb->curv_hor_change = lb->curv_hor_change + noise->get();
}
void add_noise_dx_end(LaneBoundary* lb, const NoiseConf* noise) {
lb->dx_end = lb->dx_end + noise->get();
}
class LaneNoiseConf : public NoiseConf {
public:
LaneNoiseConf() = default;
virtual ~LaneNoiseConf() noexcept = default;
/**
* Add noise to target parameter.
*/
std::function<void(LaneBoundary*)> apply;
/**
* Set the appropriate target function.
*/
void set_target() {
using namespace std::placeholders; // for _1
switch (target_) {
case LaneBoundaryField::DyStart:
apply = std::bind(add_noise_dy_start, _1, this);
break;
case LaneBoundaryField::DxStart:
apply = std::bind(add_noise_dx_start, _1, this);
break;
case LaneBoundaryField::HeadingStart:
apply = std::bind(add_noise_heading_start, _1, this);
break;
case LaneBoundaryField::CurvhorStart:
apply = std::bind(add_noise_curv_hor_start, _1, this);
break;
case LaneBoundaryField::CurvhorChange:
apply = std::bind(add_noise_curv_hor_change, _1, this);
break;
case LaneBoundaryField::DxEnd:
apply = std::bind(add_noise_dx_end, _1, this);
break;
}
}
CONFABLE_SCHEMA(LaneNoiseConf) {
return Schema{
NoiseConf::schema_impl(),
fable::schema::PropertyList<fable::schema::Box>{
// clang-format off
{"target", Schema(&target_, "data field of the lane boundary the noise should be applied to")},
// clang-format on
},
};
}
void to_json(Json& j) const override {
NoiseConf::to_json(j);
j = Json{
{"target", target_},
};
}
private:
LaneBoundaryField target_{LaneBoundaryField::DyStart};
};
struct NoisyLaneSensorConf : public NoisySensorConf {
/// List of noisy lane boundary parameters.
std::vector<LaneNoiseConf> noisy_params;
CONFABLE_SCHEMA(NoisyLaneSensorConf) {
return Schema{
NoisySensorConf::schema_impl(),
fable::schema::PropertyList<fable::schema::Box>{
// clang-format off
{"noise", Schema(&noisy_params, "configure noisy parameters")},
// clang-format on
},
};
}
void to_json(Json& j) const override {
NoisySensorConf::to_json(j);
j = Json{
{"noise", noisy_params},
};
}
};
class NoisyLaneBoundarySensor : public LaneBoundarySensor {
public:
NoisyLaneBoundarySensor(const std::string& name, const NoisyLaneSensorConf& conf,
std::shared_ptr<LaneBoundarySensor> sensor)
: LaneBoundarySensor(name), config_(conf), sensor_(sensor) {
reset_random();
}
virtual ~NoisyLaneBoundarySensor() noexcept = default;
const LaneBoundaries& sensed_lane_boundaries() const override {
if (cached_) {
return lbs_;
}
for (const auto& kv : sensor_->sensed_lane_boundaries()) {
auto lb = kv.second;
apply_noise(lb);
lbs_.insert(std::pair<int, LaneBoundary>(kv.first, lb));
}
cached_ = true;
return lbs_;
}
const Frustum& frustum() const override { return sensor_->frustum(); }
const Eigen::Isometry3d& mount_pose() const override { return sensor_->mount_pose(); }
/**
* Process the underlying sensor and clear the cache.
*
* We could process and create the filtered list of objects now, but we can
* also delay it (lazy computation) and only do it when absolutely necessary.
* This comes at the minor cost of checking whether cached_ is true every
* time sensed_objects() is called.
*/
Duration process(const Sync& sync) override {
// This currently shouldn't do anything, but this class acts as a prototype
// for How It Should Be Done.
Duration t = LaneBoundarySensor::process(sync);
if (t < sync.time()) {
return t;
}
// Process the underlying sensor and clear the cache.
t = sensor_->process(sync);
clear_cache();
return t;
}
void reset() override {
LaneBoundarySensor::reset();
sensor_->reset();
clear_cache();
reset_random();
}
void abort() override {
LaneBoundarySensor::abort();
sensor_->abort();
}
void enroll(Registrar& r) override {
r.register_action(std::make_unique<actions::ConfigureFactory>(
&config_, "config", "configure noisy lane component"));
r.register_action<actions::SetVariableActionFactory<bool>>(
"noise_activation", "switch sensor noise on/off", "enable", &config_.enabled);
}
protected:
void apply_noise(LaneBoundary& lb) const {
if (!config_.enabled) {
return;
}
for (auto& np : config_.noisy_params) {
np.apply(&lb);
}
}
void reset_random() {
// Reset the sensor's "master" seed, if applicable.
unsigned long seed = config_.seed;
if (seed == 0) {
std::random_device r;
do {
seed = r();
} while (seed == 0);
if (config_.reuse_seed) {
config_.seed = seed;
}
}
for (auto& np : config_.noisy_params) {
np.set_target();
np.reset(seed);
++seed;
}
}
void clear_cache() {
lbs_.clear();
cached_ = false;
}
private:
NoisyLaneSensorConf config_;
std::shared_ptr<LaneBoundarySensor> sensor_;
mutable bool cached_{false};
mutable LaneBoundaries lbs_;
};
DEFINE_COMPONENT_FACTORY(NoisyLaneSensorFactory, NoisyLaneSensorConf, "noisy_lane_sensor",
"add gaussian noise to lane sensor output")
DEFINE_COMPONENT_FACTORY_MAKE(NoisyLaneSensorFactory, NoisyLaneBoundarySensor, LaneBoundarySensor)
} // namespace component
} // namespace cloe
EXPORT_CLOE_PLUGIN(cloe::component::NoisyLaneSensorFactory)
|
import ReactorModel.Determinism.ExecutionStep
open ReactorType Classical
variable [Indexable Ξ±]
namespace Execution
variable {s sβ : State Ξ±} in section
abbrev State.Trivial (s : State Ξ±) : Prop :=
s.rtr[.rcn] = β
theorem State.Trivial.of_not_Nontrivial (h : Β¬Nontrivial s) : s.Trivial :=
byContradiction (h β¨Β·β©)
variable (triv : sβ.Trivial) in section
namespace Instantaneous
theorem Step.not_Trivial (e : sβ βα΅’ sβ) : Β¬sβ.Trivial := by
by_contra ht
simp [State.Trivial, Partial.empty_iff] at ht
cases (Partial.mem_iff.mp e.allows_rcn.mem).choose_spec βΈ ht e.rcn
theorem Execution.trivial_eq : (sβ βα΅’* sβ) β sβ = sβ
| refl => rfl
| trans e _ => absurd triv e.not_Trivial
theorem ClosedExecution.preserves_Trivial {e : sβ β| sβ} : sβ.Trivial := by
simp [State.Trivial, βEquivalent.obj?_rcn_eq e.equiv, triv]
theorem ClosedExecution.trivial_eq (e : sβ β| sβ) : sβ = sβ :=
e.exec.trivial_eq triv
end Instantaneous
theorem State.Advance.preserves_Trivial : (Advance sβ sβ) β sβ.Trivial
| mk .. => triv
theorem AdvanceTag.preserves_Trivial (a : sβ β- sβ) : sβ.Trivial :=
a.advance.preserves_Trivial triv
theorem Step.preserves_Trivial : (sβ β sβ) β sβ.Trivial
| close e => e.preserves_Trivial triv
| advance a => a.preserves_Trivial triv
end
end
namespace AdvanceTag
inductive RTC : State Ξ± β State Ξ± β Type
| refl : RTC s s
| trans : (sβ β- sβ) β (RTC sβ sβ) β RTC sβ sβ
theorem RTC.tag_le {sβ sβ : State Ξ±} : (AdvanceTag.RTC sβ sβ) β sβ.tag β€ sβ.tag
| refl => le_refl _
| trans a a' => le_trans (le_of_lt a.tag_lt) a'.tag_le
theorem RTC.deterministic {s sβ sβ : State Ξ±} (ht : sβ.tag = sβ.tag) :
(AdvanceTag.RTC s sβ) β (AdvanceTag.RTC s sβ) β sβ = sβ
| refl, refl => rfl
| refl, trans a a' => absurd ht (ne_of_lt $ lt_of_lt_of_le a.tag_lt a'.tag_le)
| trans a a', refl => absurd ht.symm (ne_of_lt $ lt_of_lt_of_le a.tag_lt a'.tag_le)
| trans aβ aβ', trans aβ aβ' => aβ'.deterministic ht (aβ.deterministic aβ βΈ aβ')
end AdvanceTag
def to_AdvanceTagRTC {sβ sβ : State Ξ±} (triv : sβ.Trivial) : (sβ β* sβ) β AdvanceTag.RTC sβ sβ
| refl => .refl
| step (.advance a) e' => .trans a (e'.to_AdvanceTagRTC $ a.preserves_Trivial triv)
| step (.close e) e' => e.trivial_eq triv βΈ (e'.to_AdvanceTagRTC $ e.preserves_Trivial triv)
theorem trivial_deterministic {s : State Ξ±}
(triv : Β¬s.Nontrivial) (eβ : s β* sβ) (eβ : s β* sβ) (ht : sβ.tag = sβ.tag) : sβ = sβ :=
AdvanceTag.RTC.deterministic ht
(eβ.to_AdvanceTagRTC $ .of_not_Nontrivial triv)
(eβ.to_AdvanceTagRTC $ .of_not_Nontrivial triv)
end Execution |
State Before: a b : String
⊢ firstDiffPos a b = { byteIdx := utf8Len (List.takeWhile₂ (fun x x_1 => decide (x = x_1)) a.data b.data).fst } State After: no goals Tactic: simpa [firstDiffPos] using
firstDiffPos_loop_eq [] [] a.1 b.1 ((utf8Len a.1).min (utf8Len b.1)) 0 rfl rfl (by simp) State Before: a b : String
⊢ Nat.min (utf8Len a.data) (utf8Len b.data) = min (utf8Len [] + utf8Len a.data) (utf8Len [] + utf8Len b.data) State After: no goals Tactic: simp
# Note that this script can accept some limited command-line arguments, run
# `julia build_tarballs.jl --help` to see a usage message.
using BinaryBuilder
name = "lm_Sensors"
version = v"3.5.0"
# Collection of sources required to build lm_sensors
sources = [
"https://github.com/lm-sensors/lm-sensors/archive/V$(version.major)-$(version.minor)-$(version.patch).tar.gz" =>
"f671c1d63a4cd8581b3a4a775fd7864a740b15ad046fe92038bcff5c5134d7e0",
]
# Bash recipe for building across all platforms
script = raw"""
cd $WORKSPACE/srcdir/lm-sensors-*/
make -j${nproc} PREFIX=${prefix}
make install PREFIX=${prefix}
"""
# These are the platforms we will build for by default, unless further
# platforms are passed in on the command line
platforms = [p for p in supported_platforms() if p isa Linux]
# The products that we will ensure are always built
products = [
LibraryProduct("libsensors", :libsensors),
ExecutableProduct("sensors", :sensors),
]
# Dependencies that must be installed before this package can be built
dependencies = [
]
# Build the tarballs, and possibly a `build.jl` as well.
build_tarballs(ARGS, name, version, sources, script, platforms, products, dependencies; preferred_gcc_version=v"8")
|
module BackProp (
backPropRegression,
backPropClassification,
backProp,
sgdMethod
) where
import Numeric.LinearAlgebra
import Common
import ActivFunc
sgdMethod :: Int -> (Matrix R, Matrix R) -> ((Matrix R, Matrix R) -> [Matrix R] -> [Matrix R]) -> [Matrix R] -> IO [Matrix R]
sgdMethod n xy f ws = do
nxy <- pickupSets n xy
return $ f nxy ws
backPropRegression :: R -> (Matrix R -> Matrix R, Matrix R -> Matrix R) -> (Matrix R, Matrix R) -> [Matrix R] -> [Matrix R]
backPropRegression = backProp id
backPropClassification :: R -> (Matrix R -> Matrix R, Matrix R -> Matrix R) -> (Matrix R, Matrix R) -> [Matrix R] -> [Matrix R]
backPropClassification = backProp softmaxC
backProp :: (Matrix R -> Matrix R) -> R -> (Matrix R -> Matrix R, Matrix R -> Matrix R) -> (Matrix R, Matrix R) -> [Matrix R] -> [Matrix R]
backProp lf rate (f, df) (x, y) ws@(m:ms) = zipWith (-) ws (fmap (scalar rate *) dws)
where
dws = fmap (/ len) . zipWith (<>) ds $ fmap (tr . inputWithBias) vs
len = fromIntegral $ cols x
vs = x : fmap f rs -- length: L - 1
ds = calcDeltas df (zip rs ms) dInit -- length: L - 1
dInit = lf r - y
(rs, r) = (init us, last us)
us = forwards' f ms uInit -- length: L - 1
uInit = m <> inputWithBias x
forwards' :: (Matrix R -> Matrix R) -> [Matrix R] -> Matrix R -> [Matrix R]
forwards' f ws u = scanl (flip $ forward' f) u ws
forward' :: (Matrix R -> Matrix R) -> Matrix R -> Matrix R -> Matrix R
forward' f w u = w <> inputWithBias (f u)
calcDeltas :: (Matrix R -> Matrix R) -> [(Matrix R, Matrix R)] -> Matrix R -> [Matrix R]
calcDeltas df uws d = scanr (calcDelta df) d uws
calcDelta :: (Matrix R -> Matrix R) -> (Matrix R, Matrix R) -> Matrix R -> Matrix R
calcDelta df (u, w) d = df u * tr (weightWithoutBias w) <> d
|
-- |
module DaySeven (examineFile) where
import Data.Vector.Unboxed (Vector, fromList, (!))
import qualified Data.Vector.Unboxed as V
import Statistics.Sample (mean)
import Statistics.Function (sort)
findOptimalFuelp2 :: Vector Int -> Int
findOptimalFuelp2 xs = minimum $ map (`calcFuel` xs) $ meanPositions xs
where
calcFuel = calcFuelBy sumOfNats
findOptimalFuelp1 :: Vector Int -> Int
findOptimalFuelp1 xs = (`calcFuel` xs) $ medianPos xs
where
calcFuel = calcFuelBy id
calcFuelBy :: (Int -> Int) -> Int -> Vector Int -> Int
calcFuelBy f x = V.foldl (\total pos -> total + f (abs (pos - x))) 0
-- | the sum of natural numbers
sumOfNats :: Int -> Int
sumOfNats n = ((n*n) + n) `div` 2
meanPositions :: Vector Int -> [Int]
meanPositions vs = (map floor <> map ceiling) [mean $ V.map fromIntegral vs]
medianPos :: Vector Int -> Int
medianPos = floor . (\v -> sort v ! floor (fromIntegral (V.length v) / 2)) . V.map fromIntegral
examineFile :: String -> IO [Int]
examineFile xs = do
positions <- fromList . (\x -> read $ "["++x++"]") <$> readFile xs
return $ map (\f -> f positions) [findOptimalFuelp1,findOptimalFuelp2]
|
# Lotka-Volterra Introduction
The Lotka-Volterra model is a basic dynamic model named after two biomathematicians, Alfred Lotka and Vito Volterra,
who developed the system of equations independently of each other in the first half of the twentieth century.
Lotka had developed the model to explain the dynamics of predator and prey populations, expanding on his previous model
of autocatalytic chemical reactions. Volterra had developed the system of equations to model the population of predator
fish in the Adriatic Sea.
The basic system consists of two coupled, nonlinear first-order ordinary differential equations (ODEs). One equation represents the change in
the prey population over time and depends on the population of the predator. The other equation represents the same but
for the predator population.
As an example, we let the prey be sheep and the predators be wolves.
If sheep were to exist without a predator and without a limitation of resources, the population would grow
exponentially:
> $ \frac{dx}{dt}=\alpha x , $
where $\alpha$ is a constant rate of growth and $x$ is the number of sheep.
If wolves were added to the environment, then the sheep population would depend on how
often they met with the wolves. If there are more wolves, then there is more likelihood that they would meet sheep and
eat one. If there are more sheep, then they would have more likelihood of meeting wolves. If a wolf eats a sheep anytime
it finds one, then the death of the sheep is proportional to how many wolves there are. This relationship is the same
as the law of mass action: the rate of a chemical reaction is proportional to the concentration of the reactants.
In this case, the death of the sheep is then represented by a constant multiplied by the population of the sheep and the
population of the wolves. The sheep population equation is represented as:
> $ \frac{dx}{dt}=\alpha x-\beta x y $
The equation for the population of the wolves is given as:
> $ \frac{dy}{dt} = \delta x y - \gamma y, $
where $y$ is the wolf population, $\delta$ is the growth rate of the wolves, which is proportional to the number of
wolf-sheep encounters, and $\gamma$ is the mortality rate of the wolves.
There are many assumptions with this model and some are stated here:
- The sheep population grows exponentially without the presence of wolves.
- The wolves will always eat a sheep when it meets one.
- The wolves only eat sheep.
To see the interaction between the wolf and sheep populations, let $\alpha= 1.1, \beta = 0.4, \delta = 0.1$, and $\gamma = 0.4$.
If we then solve this system, assuming time is in weeks, with initial population of 10 thousand sheep
and 1 thousand wolves, the following plot shows the solutions on the interval [0,100] (100 weeks or a little under 2 years).
```python
import matplotlib.pyplot as plt
import numpy as np
import random
from scipy.integrate import odeint as ode
from tabulate import tabulate
```
```python
init_pop = [10,1] # initial population levels [prey, predator]
t_steps = 1000 # time steps
t_end = 100 # end of time interval
time = np.linspace(0, t_end, num=t_steps) # array to store time
alpha = 1.1 # prey birthrate
beta = 0.4 # prey death rate
delta = 0.1 # predator birthrate
gamma = 0.4 # predator death rate
k = 150 # prey carrying capacity
k_y = 100 # predator carrying capacity
# Array to track coefficients
coef = [alpha, beta, delta, gamma]
```
```python
# Define the simulation function
def run_simu(tmp_pop, tmp_time, tmp_coef):
tmp_x = tmp_pop[0] # Prey population value
tmp_y = tmp_pop[1] # Predator population value
tmp_alpha = tmp_coef[0]
tmp_beta = tmp_coef[1]
tmp_delta = tmp_coef[2]
tmp_gamma = tmp_coef[3]
dxdt = tmp_alpha * tmp_x - tmp_beta * tmp_x * tmp_y
dydt = tmp_delta * tmp_x * tmp_y - tmp_gamma * tmp_y
return[dxdt, dydt]
# Call the ode solver function
output = ode(run_simu, init_pop, time, args = (coef,))
plt.plot(time, output[:,0], color = "green", label = "Prey Population") # Prey output
plt.plot(time, output[:,1], color = "red", label = "Predator Population") # Predator output
plt.xlabel("Time")
plt.ylabel("Population Level (in thousands)")
plt.title("Predator / Prey Dynamics")
plt.legend()
plt.grid()
plt.show()
```
It makes sense that we see oscillations in the populations as seen in the plot because we have negative
feedback. As the predator population approaches its maximum, the prey population decreases rapidly. Then
after the predator population has decreased, the prey population begins to increase and eventually the predator population
will begin to increase again.
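These closed oscillations are not a numerical accident: the classical Lotka-Volterra system conserves the quantity $V(x, y) = \delta x - \gamma \ln x + \beta y - \alpha \ln y$ along every trajectory (a standard result for this model, not derived in this notebook). The sketch below, using a hand-rolled fixed-step RK4 integrator instead of `odeint` so it stands on its own, checks that $V$ barely drifts with the parameter values used above:

```python
import numpy as np

alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4

def lv(pop):
    # Right-hand side of the Lotka-Volterra system
    x, y = pop
    return np.array([alpha * x - beta * x * y,
                     delta * x * y - gamma * y])

def rk4_step(pop, dt):
    # One classical fourth-order Runge-Kutta step
    k1 = lv(pop)
    k2 = lv(pop + dt / 2 * k1)
    k3 = lv(pop + dt / 2 * k2)
    k4 = lv(pop + dt * k3)
    return pop + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def V(pop):
    # Conserved quantity of the Lotka-Volterra system
    x, y = pop
    return delta * x - gamma * np.log(x) + beta * y - alpha * np.log(y)

pop = np.array([10.0, 1.0])
values = [V(pop)]
for _ in range(5000):  # integrate to t = 50 with dt = 0.01
    pop = rk4_step(pop, 0.01)
    values.append(V(pop))
drift = max(values) - min(values)
print(drift)  # small compared to V itself
```

Because $V$ is constant on each orbit, every solution traces out a closed curve in the $(x, y)$ plane, which is exactly the cycling seen in the time-series plots.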
The steady state of the model is when there is equilibrium between the populations.
To find the steady state we need to find the equilibrium points.
In other words, we want to know when the rate of change (derivative) is equal to zero.
The not-so-meaningful solution is when the prey and the predator populations are zero.
The more meaningful solutions can be found as:
> $ \begin{aligned}
>0 &= \alpha x - \beta x y \\
>0 &= x ( \alpha - \beta y ) \\
> y &= \frac{\alpha}{\beta} \text{ or } x = 0
> \end{aligned} $
> $ \begin{aligned}
>0 &= \delta x y - \gamma y \\
>0 &= y ( \delta x - \gamma ) \\
> x &= \frac{\gamma}{\delta} \text{ or } y = 0
> \end{aligned} $
The non-zero equilibrium point for this system is $ (\frac{\gamma}{\delta}, \frac{\alpha}{\beta}) $.
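As a quick numerical sanity check, we can evaluate the derivatives at this point and confirm they vanish (a minimal sketch; `lv_derivs` is my own name for the right-hand side implemented by `run_simu` above):

```python
# Coefficient values from above; lv_derivs mirrors run_simu's right-hand side.
alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4

def lv_derivs(x, y):
    """Right-hand side of the Lotka-Volterra system."""
    return (alpha * x - beta * x * y,
            delta * x * y - gamma * y)

x_star, y_star = gamma / delta, alpha / beta  # (4.0, 2.75)
print(lv_derivs(x_star, y_star))  # both derivatives are (numerically) zero
```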
If we choose the birth and death rate parameters so that this fixed point coincides with the initial populations, we can verify that it is in fact an equilibrium state: the populations remain constant.
```python
alpha_eq = 1.1 # prey birthrate
beta_eq = 1.1 # prey death rate
delta_eq = 0.4 # predator birthrate
gamma_eq = 4 # predator death rate
# Array to track coefficients
coef_eq = [alpha_eq, beta_eq, delta_eq, gamma_eq]
# Call the ode solver function
output_eq = ode(run_simu, init_pop, time, args = (coef_eq,))
plt.plot(time, output_eq[:,0], color = "green", label = "Prey Population") # Prey output
plt.plot(time, output_eq[:,1], color = "red", label = "Predator Population") # Predator output
plt.xlabel("Time")
plt.ylabel("Population Level (in thousands)")
plt.title("Predator / Prey Dynamics")
plt.legend()
plt.grid()
plt.show()
```
It would be interesting to see how the change in the parameters will change the dynamics of the populations.
To visualize this, we can make a phase space plot with varying parameter values.
```python
def run_range_ode(lrange, urange, tmp_run_simu, tmp_init_pop, tmp_time, coef_base):
output_range = [np.zeros((t_steps,2)), np.zeros((t_steps,2)),
np.zeros((t_steps,2)), np.zeros((t_steps,2)),
np.zeros((t_steps,2)), np.zeros((t_steps,2)),
np.zeros((t_steps,2)), np.zeros((t_steps,2))]
idx = 0 # index to track output array storage
for num in [0,1,2,3]:
# for i in range(len(coef_base)):
coef_new = coef_base[:]
for j in [lrange,urange]:
coef_new[num] = round(coef_base[num] + j, 2)
output_temp = ode(tmp_run_simu, tmp_init_pop, tmp_time, args = (coef_new,))
output_range[idx] = output_temp
idx = idx + 1
return output_range
output_vary = run_range_ode(-0.2, 0.2, run_simu, init_pop, time, coef)
```
```python
# original parameters
plt.plot(output[:,0], output[:,1], color = "black", label = "a: 1.1, b: 0.4, d:0.1, g: 0.4")
# alpha changes
plt.plot(output_vary[0][:,0], output_vary[0][:,1], color = "mediumvioletred", label = "alpha: 0.9")
plt.plot(output_vary[1][:,0], output_vary[1][:,1], color = "hotpink", label = "alpha: 1.3")
# beta changes
#plt.plot(output_vary[2][:,0], output_vary[2][:,1], color = "gold", label = "beta: 0.2")
#plt.plot(output_vary[3][:,0], output_vary[3][:,1], color = "goldenrod", label = "beta: 0.8")
# delta changes
#plt.plot(output_vary[4][:,0], output_vary[4][:,1], color = "springgreen", label = "delta: 0.05")
#plt.plot(output_vary[5][:,0], output_vary[5][:,1], color = "green", label = "delta: 0.2")
# gamma changes
plt.plot(output_vary[6][:,0], output_vary[6][:,1], color = "aqua", label = "gamma: 0.2")
plt.plot(output_vary[7][:,0], output_vary[7][:,1], color = "darkcyan", label = "gamma: 0.8")
#plt.plot(output)
plt.xlabel("Prey Population (in thousands)")
plt.ylabel("Predator Population (in thousands)")
plt.title("Predator / Prey Phase Space Plot")
plt.legend()
plt.grid()
plt.show()
```
The black line is the phase plot with the original parameters. It is a closed orbit that oscillates around the
fixed point $ (\frac{\gamma}{\delta}, \frac{\alpha}{\beta}) = (4, 2.75) $ (one of the equilibrium points).
Keeping all other parameters constant and changing only $\alpha$ stretches or shrinks the ellipse vertically,
while adjusting only $\gamma$ stretches or shrinks it horizontally. This is because the closed orbit
oscillates around the fixed point: changing $\gamma$ moves the fixed point's x-value, and changing
$\alpha$ moves its y-value.
These dynamics can be shown algebraically using linear algebra, by computing the eigenvalues and
eigenvectors of the Jacobian matrix at the fixed point.
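A quick numerical version of that check: linearize the system at the fixed point and inspect the eigenvalues of the Jacobian (a sketch; the matrix below is my own derivation of the standard Lotka-Volterra Jacobian):

```python
import numpy as np

# Coefficient values from the original run.
alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4
x_star, y_star = gamma / delta, alpha / beta  # non-zero fixed point (4.0, 2.75)

# Jacobian of the Lotka-Volterra right-hand side, evaluated at the fixed point.
J = np.array([[alpha - beta * y_star, -beta * x_star],
              [delta * y_star, delta * x_star - gamma]])
eigvals = np.linalg.eigvals(J)
print(eigvals)  # a purely imaginary pair: the fixed point is a center
```

Purely imaginary eigenvalues are consistent with the closed orbits seen in the phase plot.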
The assumption that the prey grow exponentially in the absence of predators is not realistic.
A more realistic approach assumes an environmental carrying capacity: when the sheep population is below it,
the growth rate is large; when the population equals its carrying capacity, there is no growth;
and when it exceeds the carrying capacity, growth is negative.
Incorporating the carrying capacity into the model would alter the sheep population ODE:
> $ \frac{dx}{dt} = \alpha x (1-\frac{x}{k}), $
where $k$ is the carrying capacity.
Now we need to add the predators to the environment:
> $ \frac{dx}{dt} = \alpha x (1-\frac{x}{k}) - \beta x y $
And the predator equation remains the same:
>$ \frac{dy}{dt} = \delta x y - \gamma y $
>
Using these updated ODEs, we can rerun our system and visualize the dynamics.
```python
# Define the simulation function
def run_simu_carry_cap(tmp_pop, tmp_time, tmp_coef):
tmp_x = tmp_pop[0] # Prey population value
tmp_y = tmp_pop[1] # Predator population value
tmp_alpha = tmp_coef[0]
tmp_beta = tmp_coef[1]
tmp_delta = tmp_coef[2]
tmp_gamma = tmp_coef[3]
dxdt = tmp_alpha * tmp_x * (1 - (tmp_x/k)) - tmp_beta * tmp_x * tmp_y
dydt = tmp_delta * tmp_x * tmp_y - tmp_gamma * tmp_y
#dxdt = tmp_alpha * x * (1 - ((x+tmp_beta*y)/k))
#dydt = tmp_delta * x * y - tmp_gamma * y
return [dxdt, dydt]
# Call the ode solver function
output_carry_cap = ode(run_simu_carry_cap, init_pop, time, args = (coef,))
plt.plot(time, output_carry_cap[:,0], color = "green", label = "Prey Population") # Prey output
plt.plot(time, output_carry_cap[:,1], color = "red", label = "Predator Population") # Predator output
plt.xlabel("Time")
plt.ylabel("Population Level (in thousands)")
plt.title("Predator / Prey Dynamics with Prey Carrying Capacity")
plt.legend()
plt.grid()
plt.show()
```
There is still an oscillation between the predator and prey populations, but the difference between the
maximum and minimum values shrinks over time. The non-zero steady state of this system has also changed from
$ (\frac{\gamma}{\delta}, \frac{\alpha}{\beta}) $. Setting the prey equation to zero:
> $ \begin{align}
> 0 &= \alpha x (1-\frac{x}{k}) - \beta x y
>\end{align} $
Substitute $x =\frac{\gamma}{\delta}$ and solve for $y$:
> $ \begin{align}
> 0 &= \alpha \frac{\gamma}{\delta} (1-\frac{\frac{\gamma}{\delta}}{k}) - \beta\frac{\gamma}{\delta} y \\
> 0 &= \alpha (1-\frac{\gamma}{\delta k}) - \beta y \\
> y &= \frac{\alpha}{\beta}(1-\frac{\gamma}{\delta k}) \\
> \end{align} $
The new non-zero steady state is $(\frac{\gamma}{\delta}, \frac{\alpha}{\beta}(1-\frac{\gamma}{\delta k}))$.
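Evaluating the derivatives at this new point confirms it is a steady state (a minimal sketch; the variable names are mine, and the coefficient and carrying-capacity values are the ones used above):

```python
# Coefficients and prey carrying capacity from above.
alpha, beta, delta, gamma, k = 1.1, 0.4, 0.1, 0.4, 150

x_star = gamma / delta                               # unchanged: 4.0
y_star = (alpha / beta) * (1 - gamma / (delta * k))  # shifted down from 2.75

dxdt = alpha * x_star * (1 - x_star / k) - beta * x_star * y_star
dydt = delta * x_star * y_star - gamma * y_star
print(dxdt, dydt)  # both derivatives are (numerically) zero
```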
Because the oscillation amplitude narrows over time, the phase plot will spiral inward toward the fixed
point instead of tracing a closed ellipse.
```python
# original parameters
plt.plot(output_carry_cap[:,0], output_carry_cap[:,1], color = "black", label = "original")
#plt.plot(output)
plt.xlabel("Prey Population (in thousands)")
plt.ylabel("Predator Population (in thousands)")
plt.title("Predator / Prey with Carrying Capacity Phase Plot")
plt.legend()
plt.grid()
plt.show()
```
To add stochastic behavior to our system, we can use an algorithm to randomly choose events
and event times from probability distributions.
We can use the Gillespie algorithm to do this.
We keep track of the population of the predator and prey. An event occurs when birth or death
occurs in either the prey or predator population. The propensity for each event to occur at a given
time is shown in the table below.
```python
data = [['Prey + 1', 'alpha * x * (1 - x/k)'],
['Prey - 1', 'beta * x * y'],
['Predator + 1', 'delta * x * y'],
['Predator - 1','gamma * y']]
print(tabulate(data, headers=["Events", "Propensity"]))
```
Events Propensity
------------ ---------------------
Prey + 1 alpha * x * (1 - x/k)
Prey - 1 beta * x * y
Predator + 1 delta * x * y
Predator - 1 gamma * y
```python
# Initial conditions
# Original init_pop = [10,1]. Too likely for prey/predator to die out with 10 and 1
x = [10] # track prey
y = [1] # track predator
t = [0] # to keep track of time
end = 100
x_all = [] # track results of prey from all model runs
y_all = [] # track results of predator from all model runs
t_all = []
# Keep same rates of coefficients as above
#alpha = 10 # prey birthrate
#beta = 0.1 # prey death rate
#delta = 0.3 # predator birthrate
#gamma = 30 # predator death rate
#k = 150
#k_y = 100
#r = 0.05
#K = 100
for i in range(5): # simulate the model 5 times
t = [0] # reset time to rerun
x = [10] # reset prey population to rerun
y = [1] # reset predator population to rerun
while t[-1] < end: # keep running until last item in t is after end
#current_x = x[-1]
props = [alpha * x[-1] * (1- x[-1]/k),
beta*x[-1]*y[-1],
delta*x[-1]*y[-1],
gamma * y[-1]]
prop_sum = sum(props)
#print("x: ", x[-1], "; y: ", y[-1])
if prop_sum == 0: # can not divide by zero so break out of while loop
break
# choose the next time increment (the exact Gillespie algorithm draws from an exponential with mean 1/prop_sum)
#tau = np.random.exponential(scale=1/prop_sum)
tau = np.random.uniform(0.5, 4.5)
# add the randomly chosen tau to the current time
t.append(t[-1]+tau)
# draw a uniform number and compare it against the cumulative propensities
# to choose which event occurs at the next time point
rand = random.uniform(0,1)
if rand * prop_sum <= props[0]: # growth of prey
x.append(x[-1] + 1)
y.append(y[-1])
elif rand * prop_sum > props[0] and rand * prop_sum <= props[0] + props[1]: # death of prey
x.append(x[-1] - 1)
y.append(y[-1])
elif rand * prop_sum > props[0] and rand * prop_sum <= props[0] + props[1] + props[2]: # birth of predator
x.append(x[-1])
y.append(y[-1] + 1)
else:
x.append(x[-1])
y.append(y[-1] - 1)
x_all.append(x)
y_all.append(y)
t_all.append(t)
# plot results
for i in range(len(x_all)):
plt.plot(t_all[i],x_all[i], color = "green")
plt.plot(t_all[i],y_all[i], color = "red")
plt.legend(['Prey', 'Predator'])
plt.xlabel("Time")
plt.ylabel("Population (in thousands)")
plt.show()
```
In the stochastic model, there are several simulations in which the predator and/or prey populations go extinct.
This may be the more realistic outcome, especially given that the deterministic models already showed the
populations falling to very low levels, where extinction would be likely.
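Note that the loop above draws the waiting time from a uniform distribution for convenience; the exact Gillespie algorithm draws it from an exponential distribution with mean 1/(sum of propensities) and picks the event with probability proportional to its propensity. A minimal sketch of one exact step (the function name and structure are mine):

```python
import numpy as np

def gillespie_step(x, y, coef, k, rng):
    """One exact SSA step for the predator-prey system above.

    Returns (tau, new_x, new_y), or None if every propensity is zero."""
    alpha, beta, delta, gamma = coef
    props = np.array([alpha * x * (1 - x / k),  # prey birth
                      beta * x * y,             # prey death
                      delta * x * y,            # predator birth
                      gamma * y])               # predator death
    total = props.sum()
    if total == 0:
        return None
    tau = rng.exponential(1 / total)            # exponential waiting time
    event = rng.choice(4, p=props / total)      # event index, propensity-weighted
    dx = (1, -1, 0, 0)[event]
    dy = (0, 0, 1, -1)[event]
    return tau, x + dx, y + dy
```

Using `rng.choice` with `p=` weights replaces the manual cumulative-sum comparisons in the loop above.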
If we change the predator-prey model dynamics such that the two species are competing for the same resource(s), then we
can use the competitive Lotka-Volterra Equations which are slightly different:
> $ \frac{dx}{dt} = \alpha x (1-\frac{x+\beta y}{K_x}) $
> $ \frac{dy}{dt} = \delta y (1-\frac{y+\gamma x}{K_y}) $
>
As before, $\alpha$ and $\delta$ represent the growth of the x and y populations, respectively, while $\beta$
and $\gamma$ capture the effect that y has on x and that x has on y. In this situation, both the x and y
populations have carrying capacities.
This system (as with the others) is not restricted to two populations; it can be generalized to many more
interacting populations of the form:
> $ \frac{dx_i}{dt}=r_ix_i(1-\frac{\sum_{j=1}^{N} a_{ij}x_j}{K_i}) $,
>
where $r_i$ is the growth rate of $x_i$, $N$ is the number of populations that interact with $x_i$, and
$K_i$ is the carrying capacity for $x_i$.
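As an illustration of this generalized form, here is a sketch for a hypothetical three-species community (the growth rates, interaction matrix, and carrying capacities below are made-up values, not taken from the model above):

```python
import numpy as np

r = np.array([1.0, 0.8, 1.2])        # growth rates r_i (made-up values)
K = np.array([100.0, 80.0, 120.0])   # carrying capacities K_i
A = np.array([[1.0, 0.5, 0.3],       # interaction coefficients a_ij
              [0.4, 1.0, 0.6],       # (a_ii = 1: self-limitation)
              [0.2, 0.7, 1.0]])

def competitive_lv(x, t, r, A, K):
    """dx_i/dt = r_i * x_i * (1 - (A @ x)_i / K_i)."""
    return r * x * (1 - (A @ x) / K)
```

The function has the same `(populations, time, ...)` call pattern as `run_simu`, so it can be handed to the same ODE solver.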
|
module List.Sorted {A : Set}(_β€_ : A β A β Set) where
open import Data.List
data Sorted : List A β Set where
nils : Sorted []
singls : (x : A)
β Sorted [ x ]
conss : {x y : A}{xs : List A}
β x β€ y
β Sorted (y β· xs)
β Sorted (x β· y β· xs)
|
/-
Copyright (c) 2018 Kenny Lau. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kenny Lau, Chris Hughes, Mario Carneiro, Anne Baanen
! This file was ported from Lean 3 source module ring_theory.ideal.quotient
! leanprover-community/mathlib commit bd9851ca476957ea4549eb19b40e7b5ade9428cc
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathlib.Algebra.Ring.Fin
import Mathlib.Algebra.Ring.Prod
import Mathlib.LinearAlgebra.Quotient
import Mathlib.RingTheory.Congruence
import Mathlib.RingTheory.Ideal.Basic
import Mathlib.Tactic.FinCases
/-!
# Ideal quotients
This file defines ideal quotients as a special case of submodule quotients and proves some basic
results about these quotients.
See `Algebra.RingQuot` for quotients of non-commutative rings.
## Main definitions
- `Ideal.Quotient`: the quotient of a commutative ring `R` by an ideal `I : Ideal R`
## Main results
- `Ideal.quotientInfRingEquivPiQuotient`: the **Chinese Remainder Theorem**
-/
universe u v w
namespace Ideal
open Set
open BigOperators
variable {R : Type u} [CommRing R] (I : Ideal R) {a b : R}
variable {S : Type v}
-- Porting note: we need Ξ· for TC
set_option synthInstance.etaExperiment true
-- Note that at present `Ideal` means a left-ideal,
-- so this quotient is only useful in a commutative ring.
-- We should develop quotients by two-sided ideals as well.
/-- The quotient `R/I` of a ring `R` by an ideal `I`.
The ideal quotient of `I` is defined to equal the quotient of `I` as an `R`-submodule of `R`.
This definition is marked `reducible` so that typeclass instances can be shared between
`Ideal.Quotient I` and `Submodule.Quotient I`.
-/
@[reducible]
instance : HasQuotient R (Ideal R) :=
Submodule.hasQuotient
namespace Quotient
variable {I} {x y : R}
instance hasOne (I : Ideal R) : One (R β§Έ I) :=
β¨Submodule.Quotient.mk 1β©
#align ideal.quotient.has_one Ideal.Quotient.hasOne
/-- On `Ideal`s, `Submodule.quotientRel` is a ring congruence. -/
protected def ringCon (I : Ideal R) : RingCon R :=
{ QuotientAddGroup.con I.toAddSubgroup with
mul' := fun {aβ bβ aβ bβ} hβ hβ =>
by
rw [Submodule.quotientRel_r_def] at hβ hββ’
have F := I.add_mem (I.mul_mem_left aβ hβ) (I.mul_mem_right bβ hβ)
have : aβ * aβ - bβ * bβ = aβ * (aβ - bβ) + (aβ - bβ) * bβ := by
rw [mul_sub, sub_mul, sub_add_sub_cancel, mul_comm, mul_comm bβ]
rw [β this] at F
convert F }
#align ideal.quotient.ring_con Ideal.Quotient.ringCon
instance commRing (I : Ideal R) : CommRing (R β§Έ I) :=
{ @Submodule.Quotient.addCommGroup _ _ (id _) (id _) (id _) I,
inferInstanceAs (CommRing (Quotient.ringCon I).Quotient) with }
#align ideal.quotient.comm_ring Ideal.Quotient.commRing
/-- The ring homomorphism from a ring `R` to a quotient ring `R/I`. -/
def mk (I : Ideal R) : R β+* R β§Έ I where
toFun a := Submodule.Quotient.mk a
map_zero' := rfl
map_one' := rfl
map_mul' _ _ := rfl
map_add' _ _ := rfl
#align ideal.quotient.mk Ideal.Quotient.mk
/-- Two `RingHom`s from the quotient by an ideal are equal if their
compositions with `Ideal.Quotient.mk'` are equal.
See note [partially-applied ext lemmas]. -/
@[ext]
theorem ringHom_ext [NonAssocSemiring S] β¦f g : R β§Έ I β+* Sβ¦ (h : f.comp (mk I) = g.comp (mk I)) :
f = g :=
RingHom.ext fun x => Quotient.inductionOn' x <| (RingHom.congr_fun h : _)
#align ideal.quotient.ring_hom_ext Ideal.Quotient.ringHom_ext
instance inhabited : Inhabited (R β§Έ I) :=
β¨mk I 37β©
#align ideal.quotient.inhabited Ideal.Quotient.inhabited
protected theorem eq : mk I x = mk I y β x - y β I :=
Submodule.Quotient.eq I
#align ideal.quotient.eq Ideal.Quotient.eq
@[simp]
theorem mk_eq_mk (x : R) : (Submodule.Quotient.mk x : R β§Έ I) = mk I x := rfl
#align ideal.quotient.mk_eq_mk Ideal.Quotient.mk_eq_mk
theorem eq_zero_iff_mem {I : Ideal R} : mk I a = 0 β a β I :=
Submodule.Quotient.mk_eq_zero _
#align ideal.quotient.eq_zero_iff_mem Ideal.Quotient.eq_zero_iff_mem
-- Porting note: new theorem
theorem mk_eq_mk_iff_sub_mem (x y : R) : mk I x = mk I y β x - y β I := by
rw [β eq_zero_iff_mem, map_sub, sub_eq_zero]
theorem zero_eq_one_iff {I : Ideal R} : (0 : R β§Έ I) = 1 β I = β€ :=
eq_comm.trans <| eq_zero_iff_mem.trans (eq_top_iff_one _).symm
#align ideal.quotient.zero_eq_one_iff Ideal.Quotient.zero_eq_one_iff
theorem zero_ne_one_iff {I : Ideal R} : (0 : R β§Έ I) β 1 β I β β€ :=
not_congr zero_eq_one_iff
#align ideal.quotient.zero_ne_one_iff Ideal.Quotient.zero_ne_one_iff
protected theorem nontrivial {I : Ideal R} (hI : I β β€) : Nontrivial (R β§Έ I) :=
β¨β¨0, 1, zero_ne_one_iff.2 hIβ©β©
#align ideal.quotient.nontrivial Ideal.Quotient.nontrivial
theorem subsingleton_iff {I : Ideal R} : Subsingleton (R β§Έ I) β I = β€ := by
rw [eq_top_iff_one, β subsingleton_iff_zero_eq_one, eq_comm, β (mk I).map_one,
Quotient.eq_zero_iff_mem]
#align ideal.quotient.subsingleton_iff Ideal.Quotient.subsingleton_iff
instance : Unique (R β§Έ (β€ : Ideal R)) :=
β¨β¨0β©, by rintro β¨xβ©; exact Quotient.eq_zero_iff_mem.mpr Submodule.mem_topβ©
theorem mk_surjective : Function.Surjective (mk I) := fun y =>
Quotient.inductionOn' y fun x => Exists.intro x rfl
#align ideal.quotient.mk_surjective Ideal.Quotient.mk_surjective
instance : RingHomSurjective (mk I) :=
β¨mk_surjectiveβ©
/-- If `I` is an ideal of a commutative ring `R`, if `q : R β R/I` is the quotient map, and if
`s β R` is a subset, then `qβ»ΒΉ(q(s)) = βα΅’(i + s)`, the union running over all `i β I`. -/
theorem quotient_ring_saturate (I : Ideal R) (s : Set R) :
mk I β»ΒΉ' (mk I '' s) = β x : I, (fun y => x.1 + y) '' s := by
ext x
simp only [mem_preimage, mem_image, mem_unionα΅’, Ideal.Quotient.eq]
exact
β¨fun β¨a, a_in, hβ© => β¨β¨_, I.neg_mem hβ©, a, a_in, by simpβ©, fun β¨β¨i, hiβ©, a, ha, Eqβ© =>
β¨a, ha, by rw [β Eq, sub_add_eq_sub_sub_swap, sub_self, zero_sub]; exact I.neg_mem hiβ©β©
#align ideal.quotient.quotient_ring_saturate Ideal.Quotient.quotient_ring_saturate
instance noZeroDivisors (I : Ideal R) [hI : I.IsPrime] : NoZeroDivisors (R β§Έ I) where
eq_zero_or_eq_zero_of_mul_eq_zero {a b} := Quotient.inductionOnβ' a b fun {_ _} hab =>
(hI.mem_or_mem (eq_zero_iff_mem.1 hab)).elim (Or.inl β eq_zero_iff_mem.2)
(Or.inr β eq_zero_iff_mem.2)
#align ideal.quotient.no_zero_divisors Ideal.Quotient.noZeroDivisors
instance isDomain (I : Ideal R) [hI : I.IsPrime] : IsDomain (R β§Έ I) :=
let _ := Quotient.nontrivial hI.1
NoZeroDivisors.to_isDomain _
#align ideal.quotient.is_domain Ideal.Quotient.isDomain
theorem isDomain_iff_prime (I : Ideal R) : IsDomain (R β§Έ I) β I.IsPrime := by
refine' β¨fun H => β¨zero_ne_one_iff.1 _, fun {x y} h => _β©, fun h => inferInstanceβ©
Β· haveI : Nontrivial (R β§Έ I) := β¨H.2.1β©
exact zero_ne_one
Β· simp only [β eq_zero_iff_mem, (mk I).map_mul] at hβ’
haveI := @IsDomain.to_noZeroDivisors (R β§Έ I) _ H
exact eq_zero_or_eq_zero_of_mul_eq_zero h
#align ideal.quotient.is_domain_iff_prime Ideal.Quotient.isDomain_iff_prime
theorem exists_inv {I : Ideal R} [hI : I.IsMaximal] :
β {a : R β§Έ I}, a β 0 β β b : R β§Έ I, a * b = 1 := by
rintro β¨aβ© h
rcases hI.exists_inv (mt eq_zero_iff_mem.2 h) with β¨b, c, hc, abcβ©
rw [mul_comm] at abc
refine' β¨mk _ b, Quot.sound _β©
--quot.sound hb
rw [β eq_sub_iff_add_eq'] at abc
rw [abc, β neg_mem_iff (G := R) (H := I), neg_sub] at hc
rw [Submodule.quotientRel_r_def]
convert hc
#align ideal.quotient.exists_inv Ideal.Quotient.exists_inv
open Classical
/-- quotient by maximal ideal is a field. def rather than instance, since users will have
computable inverses in some applications.
See note [reducible non-instances]. -/
@[reducible]
protected noncomputable def field (I : Ideal R) [hI : I.IsMaximal] : Field (R β§Έ I) :=
{ Quotient.commRing I,
Quotient.isDomain
I with
inv := fun a => if ha : a = 0 then 0 else Classical.choose (exists_inv ha)
mul_inv_cancel := fun a (ha : a β 0) =>
show a * dite _ _ _ = _ by rw [dif_neg ha]; exact Classical.choose_spec (exists_inv ha)
inv_zero := dif_pos rfl }
#align ideal.quotient.field Ideal.Quotient.field
/-- If the quotient by an ideal is a field, then the ideal is maximal. -/
theorem maximal_of_isField (I : Ideal R) (hqf : IsField (R β§Έ I)) : I.IsMaximal := by
apply Ideal.isMaximal_iff.2
constructor
Β· intro h
rcases hqf.exists_pair_ne with β¨β¨xβ©, β¨yβ©, hxyβ©
exact hxy (Ideal.Quotient.eq.2 (mul_one (x - y) βΈ I.mul_mem_left _ h))
Β· intro J x hIJ hxnI hxJ
rcases hqf.mul_inv_cancel (mt Ideal.Quotient.eq_zero_iff_mem.1 hxnI) with β¨β¨yβ©, hyβ©
rw [β zero_add (1 : R), β sub_self (x * y), sub_add]
refine' J.sub_mem (J.mul_mem_right _ hxJ) (hIJ (Ideal.Quotient.eq.1 hy))
#align ideal.quotient.maximal_of_is_field Ideal.Quotient.maximal_of_isField
/-- The quotient of a ring by an ideal is a field iff the ideal is maximal. -/
theorem maximal_ideal_iff_isField_quotient (I : Ideal R) : I.IsMaximal β IsField (R β§Έ I) :=
β¨fun h =>
let _i := @Quotient.field _ _ I h
Field.toIsField _,
maximal_of_isField _β©
#align ideal.quotient.maximal_ideal_iff_is_field_quotient Ideal.Quotient.maximal_ideal_iff_isField_quotient
variable [CommRing S]
/-- Given a ring homomorphism `f : R β+* S` sending all elements of an ideal to zero,
lift it to the quotient by this ideal. -/
def lift (I : Ideal R) (f : R β+* S) (H : β a : R, a β I β f a = 0) : R β§Έ I β+* S :=
{
QuotientAddGroup.lift I.toAddSubgroup f.toAddMonoidHom
H with
map_one' := f.map_one
map_zero' := f.map_zero
map_add' := fun aβ aβ => Quotient.inductionOnβ' aβ aβ f.map_add
map_mul' := fun aβ aβ => Quotient.inductionOnβ' aβ aβ f.map_mul }
#align ideal.quotient.lift Ideal.Quotient.lift
@[simp]
theorem lift_surjective_of_surjective (I : Ideal R) {f : R β+* S} (H : β a : R, a β I β f a = 0)
(hf : Function.Surjective f) : Function.Surjective (Ideal.Quotient.lift I f H) := by
intro y
obtain β¨x, rflβ© := hf y
use Ideal.Quotient.mk I x
simp only [Ideal.Quotient.lift_mk]
#align ideal.quotient.lift_surjective_of_surjective Ideal.Quotient.lift_surjective_of_surjective
/-- The ring homomorphism from the quotient by a smaller ideal to the quotient by a larger ideal.
This is the `Ideal.Quotient` version of `Quot.Factor` -/
def factor (S T : Ideal R) (H : S β€ T) : R β§Έ S β+* R β§Έ T :=
Ideal.Quotient.lift S (mk T) fun _ hx => eq_zero_iff_mem.2 (H hx)
#align ideal.quotient.factor Ideal.Quotient.factor
@[simp]
theorem factor_mk (S T : Ideal R) (H : S β€ T) (x : R) : factor S T H (mk S x) = mk T x :=
rfl
#align ideal.quotient.factor_mk Ideal.Quotient.factor_mk
@[simp]
theorem factor_comp_mk (S T : Ideal R) (H : S β€ T) : (factor S T H).comp (mk S) = mk T := by
ext x
rw [RingHom.comp_apply, factor_mk]
#align ideal.quotient.factor_comp_mk Ideal.Quotient.factor_comp_mk
end Quotient
/-- Quotienting by equal ideals gives equivalent rings.
See also `Submodule.quotEquivOfEq`.
-/
def quotEquivOfEq {R : Type _} [CommRing R] {I J : Ideal R} (h : I = J) : R β§Έ I β+* R β§Έ J :=
{ Submodule.quotEquivOfEq I J h with
map_mul' := by
rintro β¨xβ© β¨yβ©
rfl }
#align ideal.quot_equiv_of_eq Ideal.quotEquivOfEq
@[simp]
theorem quotEquivOfEq_mk {R : Type _} [CommRing R] {I J : Ideal R} (h : I = J) (x : R) :
quotEquivOfEq h (Ideal.Quotient.mk I x) = Ideal.Quotient.mk J x :=
rfl
#align ideal.quot_equiv_of_eq_mk Ideal.quotEquivOfEq_mk
@[simp]
theorem quotEquivOfEq_symm {R : Type _} [CommRing R] {I J : Ideal R} (h : I = J) :
(Ideal.quotEquivOfEq h).symm = Ideal.quotEquivOfEq h.symm := by ext; rfl
#align ideal.quot_equiv_of_eq_symm Ideal.quotEquivOfEq_symm
section Pi
variable (ΞΉ : Type v)
/-- `R^n/I^n` is a `R/I`-module. -/
instance modulePi : Module (R β§Έ I) ((ΞΉ β R) β§Έ I.pi ΞΉ) where
smul c m :=
Quotient.liftOnβ' c m (fun r m => Submodule.Quotient.mk <| r β’ m) $ by
intro cβ mβ cβ mβ hc hm
apply Ideal.Quotient.eq.2
rw [Submodule.quotientRel_r_def] at hc hm
intro i
exact I.mul_sub_mul_mem hc (hm i)
one_smul := by
rintro β¨aβ©
convert_to Ideal.Quotient.mk (I.pi ΞΉ) _ = Ideal.Quotient.mk (I.pi ΞΉ) _
congr with i; exact one_mul (a i)
mul_smul := by
rintro β¨aβ© β¨bβ© β¨cβ©
convert_to Ideal.Quotient.mk (I.pi ΞΉ) _ = Ideal.Quotient.mk (I.pi ΞΉ) _
congr 1; funext i; exact mul_assoc a b (c i)
smul_add := by
rintro β¨aβ© β¨bβ© β¨cβ©
convert_to Ideal.Quotient.mk (I.pi ΞΉ) _ = Ideal.Quotient.mk (I.pi ΞΉ) _
congr with i; exact mul_add a (b i) (c i)
smul_zero := by
rintro β¨aβ©
convert_to Ideal.Quotient.mk (I.pi ΞΉ) _ = Ideal.Quotient.mk (I.pi ΞΉ) _
congr with _; exact mul_zero a
add_smul := by
rintro β¨aβ© β¨bβ© β¨cβ©
convert_to Ideal.Quotient.mk (I.pi ΞΉ) _ = Ideal.Quotient.mk (I.pi ΞΉ) _
congr with i; exact add_mul a b (c i)
zero_smul := by
rintro β¨aβ©
convert_to Ideal.Quotient.mk (I.pi ΞΉ) _ = Ideal.Quotient.mk (I.pi ΞΉ) _
congr with i; exact zero_mul (a i)
#align ideal.module_pi Ideal.modulePi
set_option synthInstance.etaExperiment false in -- Porting note: needed, otherwise type times out
/-- `R^n/I^n` is isomorphic to `(R/I)^n` as an `R/I`-module. -/
noncomputable def piQuotEquiv : ((ΞΉ β R) β§Έ I.pi ΞΉ) ββ[R β§Έ I] ΞΉ β (R β§Έ I) := by
refine' β¨β¨β¨?toFun, _β©, _β©, ?invFun, _, _β©
case toFun => set_option synthInstance.etaExperiment true in -- Porting note: to get `Module R R`
exact fun x β¦
Quotient.liftOn' x (fun f i => Ideal.Quotient.mk I (f i)) fun a b hab =>
funext fun i => (Submodule.Quotient.eq' _).2 (QuotientAddGroup.leftRel_apply.mp hab i)
case invFun =>
exact fun x β¦ Ideal.Quotient.mk (I.pi ΞΉ) fun i β¦ Quotient.out' (x i)
Β· rintro β¨_β© β¨_β©; rfl
Β· rintro β¨_β© β¨_β©; rfl
Β· rintro β¨xβ©
exact Ideal.Quotient.eq.2 fun i => Ideal.Quotient.eq.1 (Quotient.out_eq' _)
Β· intro x
ext i
obtain β¨_, _β© := @Quot.exists_rep _ _ (x i)
convert Quotient.out_eq' (x i)
#align ideal.pi_quot_equiv Ideal.piQuotEquiv
/-- If `f : R^n β R^m` is an `R`-linear map and `I β R` is an ideal, then the image of `I^n` is
contained in `I^m`. -/
theorem map_pi {ΞΉ : Type _} [Finite ΞΉ] {ΞΉ' : Type w} (x : ΞΉ β R) (hi : β i, x i β I)
(f : (ΞΉ β R) ββ[R] ΞΉ' β R) (i : ΞΉ') : f x i β I := by
classical
cases nonempty_fintype ΞΉ
rw [pi_eq_sum_univ x]
simp only [Finset.sum_apply, smul_eq_mul, LinearMap.map_sum, Pi.smul_apply, LinearMap.map_smul]
exact I.sum_mem fun j _ => I.mul_mem_right _ (hi j)
#align ideal.map_pi Ideal.map_pi
end Pi
section ChineseRemainder
variable {ΞΉ : Type v}
theorem exists_sub_one_mem_and_mem (s : Finset ΞΉ) {f : ΞΉ β Ideal R}
(hf : β i β s, β j β s, i β j β f i β f j = β€) (i : ΞΉ) (his : i β s) :
β r : R, r - 1 β f i β§ β j β s, j β i β r β f j := by
have : β j β s, j β i β β r : R, β _ : r - 1 β f i, r β f j := by
intro j hjs hji
specialize hf i his j hjs hji.symm
rw [eq_top_iff_one, Submodule.mem_sup] at hf
rcases hf with β¨r, hri, s, hsj, hrsβ©
refine' β¨1 - r, _, _β©
Β· rw [sub_right_comm, sub_self, zero_sub]
exact (f i).neg_mem hri
Β· rw [β hrs, add_sub_cancel']
exact hsj
classical
have : β g : ΞΉ β R, (β j, g j - 1 β f i) β§ β j β s, j β i β g j β f j := by
choose g hg1 hg2 using this
refine' β¨fun j => if H : j β s β§ j β i then g j H.1 H.2 else 1, fun j => _, fun j => _β©
Β· dsimp only
split_ifs with h
Β· apply hg1
rw [sub_self]
exact (f i).zero_mem
Β· intro hjs hji
dsimp only
rw [dif_pos]
Β· apply hg2
exact β¨hjs, hjiβ©
rcases this with β¨g, hgi, hgjβ©
use β x in s.erase i, g x
constructor
Β· rw [β Ideal.Quotient.mk_eq_mk_iff_sub_mem, map_one, map_prod]
apply Finset.prod_eq_one
intros
rw [β RingHom.map_one, Ideal.Quotient.mk_eq_mk_iff_sub_mem]
apply hgi
Β· intro j hjs hji
rw [β Quotient.eq_zero_iff_mem, map_prod]
-- Porting note: Added the below line to help instance inferrence
letI : CommMonoidWithZero (R β§Έ f j) := CommSemiring.toCommMonoidWithZero
refine' Finset.prod_eq_zero (Finset.mem_erase_of_ne_of_mem hji hjs) _
rw [Quotient.eq_zero_iff_mem]
exact hgj j hjs hji
#align ideal.exists_sub_one_mem_and_mem Ideal.exists_sub_one_mem_and_mem
theorem exists_sub_mem [Finite ΞΉ] {f : ΞΉ β Ideal R} (hf : β i j, i β j β f i β f j = β€)
(g : ΞΉ β R) : β r : R, β i, r - g i β f i := by
cases nonempty_fintype ΞΉ
have : β Ο : ΞΉ β R, (β i, Ο i - 1 β f i) β§ β i j, i β j β Ο i β f j := by
have := exists_sub_one_mem_and_mem (Finset.univ : Finset ΞΉ) fun i _ j _ hij => hf i j hij
choose Ο hΟ using this
exists fun i => Ο i (Finset.mem_univ i)
exact β¨fun i => (hΟ i _).1, fun i j hij => (hΟ i _).2 j (Finset.mem_univ j) hij.symmβ©
rcases this with β¨Ο, hΟ1, hΟ2β©
use β i, g i * Ο i
intro i
rw [β Quotient.mk_eq_mk_iff_sub_mem, map_sum]
refine' Eq.trans (Finset.sum_eq_single i _ _) _
Β· intro j _ hji
rw [Quotient.eq_zero_iff_mem]
exact (f i).mul_mem_left _ (hΟ2 j i hji)
Β· intro hi
exact (hi <| Finset.mem_univ i).elim
specialize hΟ1 i
rw [β Quotient.mk_eq_mk_iff_sub_mem, RingHom.map_one] at hΟ1
rw [RingHom.map_mul, hΟ1, mul_one]
#align ideal.exists_sub_mem Ideal.exists_sub_mem
/-- The homomorphism from `R/(β i, f i)` to `β i, (R / f i)` featured in the Chinese
Remainder Theorem. It is bijective if the ideals `f i` are comaximal. -/
def quotientInfToPiQuotient (f : ΞΉ β Ideal R) : (R β§Έ β¨
i, f i) β+* β i, R β§Έ f i :=
Quotient.lift (β¨
i, f i) (Pi.ringHom fun i : ΞΉ => (Quotient.mk (f i) : _)) fun r hr =>
by
rw [Submodule.mem_infα΅’] at hr
ext i
exact Quotient.eq_zero_iff_mem.2 (hr i)
#align ideal.quotient_inf_to_pi_quotient Ideal.quotientInfToPiQuotient
theorem quotientInfToPiQuotient_bijective [Finite ΞΉ] {f : ΞΉ β Ideal R}
(hf : β i j, i β j β f i β f j = β€) : Function.Bijective (quotientInfToPiQuotient f) :=
β¨fun x y =>
Quotient.inductionOnβ' x y fun r s hrs =>
Quotient.eq.2 <|
(Submodule.mem_infα΅’ _).2 fun i =>
Quotient.eq.1 <|
show quotientInfToPiQuotient f (Quotient.mk'' r) i = _ by rw [hrs]; rfl,
fun g =>
let β¨r, hrβ© := exists_sub_mem hf fun i => Quotient.out' (g i)
β¨Quotient.mk _ r, funext fun i => Quotient.out_eq' (g i) βΈ Quotient.eq.2 (hr i)β©β©
#align ideal.quotient_inf_to_pi_quotient_bijective Ideal.quotientInfToPiQuotient_bijective
/-- Chinese Remainder Theorem. Eisenbud Ex.2.6. Similar to Atiyah-Macdonald 1.10 and Stacks 00DT -/
noncomputable def quotientInfRingEquivPiQuotient [Finite ΞΉ] (f : ΞΉ β Ideal R)
(hf : β i j, i β j β f i β f j = β€) : (R β§Έ β¨
i, f i) β+* β i, R β§Έ f i :=
{ Equiv.ofBijective _ (quotientInfToPiQuotient_bijective hf), quotientInfToPiQuotient f with }
#align ideal.quotient_inf_ring_equiv_pi_quotient Ideal.quotientInfRingEquivPiQuotient
end ChineseRemainder
/-- **Chinese remainder theorem**, specialized to two ideals. -/
noncomputable def quotientInfEquivQuotientProd (I J : Ideal R) (coprime : I β J = β€) :
R β§Έ I β J β+* (R β§Έ I) Γ R β§Έ J :=
let f : Fin 2 β Ideal R := ![I, J]
have hf : β i j : Fin 2, i β j β f i β f j = β€ := by
intro i j h
fin_cases i <;> fin_cases j <;> try contradiction
Β· assumption
Β· rwa [sup_comm]
(Ideal.quotEquivOfEq (by simp [infα΅’, inf_comm])).trans <|
(Ideal.quotientInfRingEquivPiQuotient f hf).trans <| RingEquiv.piFinTwo fun i => R β§Έ f i
#align ideal.quotient_inf_equiv_quotient_prod Ideal.quotientInfEquivQuotientProd
@[simp]
theorem quotientInfEquivQuotientProd_fst (I J : Ideal R) (coprime : I β J = β€) (x : R β§Έ I β J) :
(quotientInfEquivQuotientProd I J coprime x).fst =
Ideal.Quotient.factor (I β J) I inf_le_left x :=
Quot.inductionOn x fun _ => rfl
#align ideal.quotient_inf_equiv_quotient_prod_fst Ideal.quotientInfEquivQuotientProd_fst
@[simp]
theorem quotientInfEquivQuotientProd_snd (I J : Ideal R) (coprime : I β J = β€) (x : R β§Έ I β J) :
(quotientInfEquivQuotientProd I J coprime x).snd =
Ideal.Quotient.factor (I β J) J inf_le_right x :=
Quot.inductionOn x fun _ => rfl
#align ideal.quotient_inf_equiv_quotient_prod_snd Ideal.quotientInfEquivQuotientProd_snd
@[simp]
theorem fst_comp_quotientInfEquivQuotientProd (I J : Ideal R) (coprime : I β J = β€) :
(RingHom.fst _ _).comp
(quotientInfEquivQuotientProd I J coprime : R β§Έ I β J β+* (R β§Έ I) Γ R β§Έ J) =
Ideal.Quotient.factor (I β J) I inf_le_left := by
apply Quotient.ringHom_ext; ext; rfl
#align ideal.fst_comp_quotient_inf_equiv_quotient_prod Ideal.fst_comp_quotientInfEquivQuotientProd
@[simp]
theorem snd_comp_quotientInfEquivQuotientProd (I J : Ideal R) (coprime : I β J = β€) :
(RingHom.snd _ _).comp
(quotientInfEquivQuotientProd I J coprime : R β§Έ I β J β+* (R β§Έ I) Γ R β§Έ J) =
Ideal.Quotient.factor (I β J) J inf_le_right := by
apply Quotient.ringHom_ext; ext; rfl
#align ideal.snd_comp_quotient_inf_equiv_quotient_prod Ideal.snd_comp_quotientInfEquivQuotientProd
end Ideal
|
> module Main
> %default total
> %access public export
> %auto_implicits off
> J : Type -> Type -> Type
> J R X = (X -> R) -> X
> K : Type -> Type -> Type
> K R X = (X -> R) -> R
> overline : {X, R : Type} -> J R X -> K R X
> overline e p = p (e p)
> otimes : {X, R : Type} -> J R X -> (X -> J R (List X)) -> J R (List X)
> otimes e f p = x :: xs where
> x = e (\ x' => overline (f x') (\ xs' => p (x' :: xs')))
> xs = f x (\ xs' => p (x :: xs'))
> partial
> bigotimes : {X, R : Type} -> List (List X -> J R X) -> J R (List X)
> bigotimes [] = \ p => []
> bigotimes (e :: es) = (e []) `otimes` (\x => bigotimes [\ xs => d (x :: xs) | d <- es])
> partial
> argsup : {X : Type} -> (xs : List X) -> J Int X
> argsup (x :: Nil) p = x
> argsup (x :: x' :: xs) p = if p x < p x' then argsup (x' :: xs) p else argsup (x :: xs) p
> partial
> e : List Int -> J Int Int
> e _ = argsup [0..7]
> p : List Int -> Int
> p _ = 0
> partial
> main : IO ()
> main = do putStr ("bigotimes (replicate 3 e) p = "
> ++
> show (bigotimes (replicate 3 e) p) ++ "\n")
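For readers less familiar with Idris, the selection-function machinery above transcribes almost line for line into plain Python. The sketch below mirrors the Idris names; `p = sum` is swapped in for the constant predicate of `main` so the optimisation is visible:

```python
def overline(e):
    # Turn a selection function into a quantifier: apply e, then the predicate.
    return lambda p: p(e(p))

def otimes(e, f):
    # Binary product of selection functions, as in the Idris `otimes`.
    def prod(p):
        x = e(lambda x1: overline(f(x1))(lambda xs1: p([x1] + xs1)))
        xs = f(x)(lambda xs1: p([x] + xs1))
        return [x] + xs
    return prod

def bigotimes(es):
    # Iterated product over a list of history-dependent selection functions.
    if not es:
        return lambda p: []
    e, rest = es[0], es[1:]
    return otimes(e([]), lambda x: bigotimes(
        [(lambda d: lambda xs: d([x] + xs))(d) for d in rest]))

def argsup(xs):
    # Selection function choosing an element of xs that maximises p.
    return lambda p: max(xs, key=p)

e = lambda history: argsup(range(8))
print(bigotimes([e] * 3)(sum))  # each coordinate is chosen to maximise the sum
```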
|
<div align="right"><a href="https://github.com/lucasliano/TC2">TCII Github</a></div>
<div align="center">
<h1>Weekly Assignment 5</h1>
<h2>High-Pass Notch Filter with GIC</h2>
<h3>Liaño, Lucas</h3>
</div>
## Assignment
>Design a high-pass filter with maximal flatness in the passband (cutoff frequency = 300 Hz) and a transmission zero at 100 Hz. The normalized low-pass prototype has the following response:
>
>
> - Determine the expression of H(s) for the normalized high-pass filter
> - Draw the pole-zero diagram of H(s)
> - Synthesize the requested filter circuit. The following circuit will be used for the second-order stage:
>
>
# Hand-Worked Solution
<div class="alert alert-success">
<strong>A PDF with the hand-worked solution is attached.</strong>
</div>
## Computer Implementation
#### Import the required libraries
```python
import numpy as np
import matplotlib.pyplot as plt
from splane import pzmap, grpDelay, bodePlot, convert2SOS
from scipy import signal
# Community-contributed helper, taken from [https://stackoverflow.com/questions/35304245/multiply-scipy-lti-transfer-functions?newreg=b12c460c179042b09ad75c2fb4297bc9]
from ltisys import *
# Jupyter modules (nicer plots!)
import warnings
warnings.filterwarnings('ignore')
plt.rcParams['figure.figsize'] = [12, 4]
plt.rcParams['figure.dpi'] = 150 # 200 e.g. is really fine, but slower
```
#### Initialize the variables
### Normalized Low-Pass Notch Transfer Function (given)
\begin{equation}
T(s) = \dfrac{s^{2} + 3^{2}} {s^2 + s + 1}
\end{equation}
### Normalized High-Pass Notch Transfer Function (after calculation)
\begin{equation}
T(s) = \dfrac{s^{2} + (\frac{1}{3})^{2}} {s^2 + s + 1} \cdot \dfrac{s}{s+1}
\end{equation}
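As a quick numeric sanity check, independent of the `ltisys` helper used below, we can evaluate this normalized high-pass transfer function directly on the $j\omega$ axis with plain Python complex arithmetic:

```python
def H(s):
    # Normalized high-pass notch from the expression above.
    T1 = (s**2 + (1 / 3)**2) / (s**2 + s + 1)
    T2 = s / (s + 1)
    return T1 * T2

print(abs(H(1j / 3)))    # transmission zero at w = 1/3 (100 Hz denormalized)
print(abs(H(1j * 1e6)))  # gain tends to 1 far into the passband
```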
```python
# LP transfer-function coefficients
wp = 1
Qp = 1
k = 3
wz = k * wp
```
## Define the transfer function from the derived expression
```python
# Build the transfer function T1 in s
den_1 = [1, (wp / Qp), wp**2]
num_1 = [1, 0, wz**2]
T1 = ltimul(num_1, den_1);
```
## Pole-Zero Diagram
### Transfer function $T_{1}(s)$
```python
pzmap(T1, 1);
```
## Bode Plot
### Transfer function $H_{LP}(s)$
```python
fig, ax = bodePlot(T1.to_ss(), 2);
```
# Now apply the LP-to-HP transformation
### Transfer function $H_{HP}(s)$
```python
# HP transfer-function coefficients
wp = 1
Qp = 1
k = 1/3
wz = k * wp
sigma = 1
```
```python
# Build the transfer function T1 in s
den_1 = [1, (wp / Qp), wp**2]
num_1 = [1, 0, wz**2]
T1 = ltimul(num_1, den_1);
den_2 = [1, sigma]
num_2 = [1,0]
T2 = ltimul(num_2, den_2);
H = T1 * T2
```
```python
pzmap(H, 1);
```
```python
fig, ax = bodePlot(H.to_ss(), 2);
```
---
# Circuit Simulation
|
Require Import
HoTT.Classes.orders.naturals
HoTT.Classes.implementations.peano_naturals.
Require Import
HoTT.Classes.interfaces.abstract_algebra
HoTT.Classes.interfaces.orders
HoTT.Classes.theory.naturals.
Generalizable Variables N.
Section contents.
Context `{Funext} `{Univalence}.
Context `{Naturals N}.
(* Add Ring N : (rings.stdlib_semiring_theory N). *)
(* NatDistance instances are all equivalent, because their behavior is fully
determined by the specification. *)
Lemma nat_distance_unique {a b : NatDistance N}
: forall x y, @nat_distance _ _ a x y = @nat_distance _ _ b x y.
Proof.
intros. unfold nat_distance.
destruct (@nat_distance_sig _ _ a x y) as [[z1 E1]|[z1 E1]],
(@nat_distance_sig _ _ b x y) as [[z2 E2]|[z2 E2]];simpl.
- apply (left_cancellation plus x). path_via y.
- rewrite <-(rings.plus_0_r y),<-E2,<-rings.plus_assoc in E1.
apply (left_cancellation plus y) in E1. apply naturals.zero_sum in E1.
destruct E1;path_via 0.
- rewrite <-(rings.plus_0_r x),<-E2,<-rings.plus_assoc in E1.
apply (left_cancellation plus x) in E1. apply naturals.zero_sum in E1.
destruct E1;path_via 0.
- apply (left_cancellation plus y);path_via x.
Qed.
End contents.
(* An existing instance of [CutMinus]
allows to create an instance of [NatDistance] *)
Global Instance natdistance_cut_minus `{Naturals N} `{!TrivialApart N}
{cm} `{!CutMinusSpec N cm} `{forall x y, Decidable (x β€ y)} : NatDistance N.
Proof.
red. intros. destruct (decide_rel (<=) x y) as [E|E].
- left. exists (y βΈ x).
rewrite rings.plus_comm;apply cut_minus_le;trivial.
- right. exists (x βΈ y).
rewrite rings.plus_comm;apply cut_minus_le, orders.le_flip;trivial.
Defined.
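Concretely, the `natdistance_cut_minus` construction decides `x ≤ y` and produces the distance witness with truncated subtraction. A small Python rendering of that case split, with `monus` playing the role of the cut-minus operation:

```python
def monus(x, y):
    # Truncated subtraction on naturals: x - y if that is nonnegative, else 0.
    return x - y if x >= y else 0

def nat_distance(x, y):
    # Decide x <= y, then witness the distance with monus, as in
    # natdistance_cut_minus: left branch gives x + d == y, right gives y + d == x.
    if x <= y:
        return monus(y, x)
    return monus(x, y)

for x, y in [(3, 10), (10, 3), (5, 5)]:
    d = nat_distance(x, y)
    print(x, y, d, x + d == y or y + d == x)
```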
(* Using the preceding instance we can make an instance
for arbitrary models of the naturals
by translation into [nat] on which we already have a [CutMinus] instance. *)
Global Instance natdistance_default `{Naturals N} : NatDistance N | 10.
Proof.
intros x y.
destruct (nat_distance_sig (naturals_to_semiring N nat x)
(naturals_to_semiring N nat y)) as [[n E]|[n E]].
- left. exists (naturals_to_semiring nat N n).
rewrite <-(naturals.to_semiring_involutive N nat y), <-E.
rewrite (rings.preserves_plus (A:=nat)), (naturals.to_semiring_involutive _ _).
split.
- right. exists (naturals_to_semiring nat N n).
rewrite <-(naturals.to_semiring_involutive N nat x), <-E.
rewrite (rings.preserves_plus (A:=nat)), (naturals.to_semiring_involutive _ _).
split.
Defined.
|
C Copyright(C) 2009-2017 National Technology & Engineering Solutions of
C Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
C NTESS, the U.S. Government retains certain rights in this software.
C
C Redistribution and use in source and binary forms, with or without
C modification, are permitted provided that the following conditions are
C met:
C
C * Redistributions of source code must retain the above copyright
C notice, this list of conditions and the following disclaimer.
C
C * Redistributions in binary form must reproduce the above
C copyright notice, this list of conditions and the following
C disclaimer in the documentation and/or other materials provided
C with the distribution.
C * Neither the name of NTESS nor the names of its
C contributors may be used to endorse or promote products derived
C from this software without specific prior written permission.
C
C THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
C "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
C LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
C A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
C OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
C SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
C LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
C DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
C THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
C (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
C OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
C $Log: grsdev.f,v $
C Revision 1.2 2009/03/25 12:36:44 gdsjaar
C Add copyright and license notice to all files.
C Permission to assert copyright has been granted; blot is now open source, BSD
C
C Revision 1.1 1994/04/07 20:02:44 gdsjaar
C Initial checkin of ACCESS/graphics/blotII2
C
c Revision 1.2 1990/12/14 08:51:52 gdsjaar
c Added RCS Id and Log to all files
c
C=======================================================================
SUBROUTINE GRSDEV (INDEV)
C=======================================================================
C --*** GRSDEV *** (GRPLIB) Select device
C -- Written by Amy Gilkey, revised 08/24/87
C --
C --GRSDEV selects a device and changes all the device parameters.
C --
C --Parameters:
C -- INDEV - IN - the device to be selected
C --
C --Common Variables:
C -- Uses ICURDV, DEVOK, IFONT, NUMCOL of /GRPCOM/
C -- Sets ICURDV of /GRPCOM/
C --Routines Called:
C -- GRCOLT - (GRPLIB) Set color table
C -- GRFONT - (GRPLIB) Set font
PARAMETER (KDVDI=10000)
COMMON /GRPCOC/ DEVNAM(2), DEVCOD(2)
CHARACTER*3 DEVNAM
CHARACTER*8 DEVCOD
COMMON /GRPCOM/ ICURDV, ISHARD, DEVOK(2), TALKOK(2),
& NSNAP(2), IFONT(2), SOFTCH(2), AUTOPL(2),
& MAXCOL(2), NUMCOL(0:1,2), MAPALT(2), MAPUSE(2)
LOGICAL ISHARD, DEVOK, TALKOK, SOFTCH, AUTOPL
C --If the device number is not given, choose the first available device
IF ((INDEV .NE. 1) .AND. (INDEV .NE. 2)) THEN
IF (DEVOK(1)) THEN
IDEV = 1
ELSE
IDEV = 2
END IF
ELSE
IDEV = INDEV
END IF
C --Skip if invalid parameter
IF (.NOT. DEVOK(IDEV)) GOTO 100
C --Skip if device already selected
IF (IDEV .EQ. ICURDV) GOTO 100
C --Turn off old device and turn on new device
CALL VDESCP (KDVDI + IDEV, 0, 0)
ICURDV = IDEV
C --Set color table
CALL GRCOLT
C --Set font
CALL GRFONT (IFONT(ICURDV))
C --Set number of frames to snap
CALL GRSNAP ('INIT', ICURDV)
C --Set line widths
CALL GRLWID
C --Reset the single hardcopy flag if terminal device selected
IF (ICURDV .EQ. 1) ISHARD = .FALSE.
100 CONTINUE
RETURN
END
|
SYNC TEAM (1)
sync team (race_team, STAT=stat_var, ERRMSG=err_var)
end
|
import time

import cv2
from rtcom import RealTimeCommunication
from utils import VideoCapture

with RealTimeCommunication("rpi") as rtcom:
    cap = VideoCapture(0)
    data = {}
    loop_start_time = time.perf_counter()
    while True:
        # Elapsed wall-clock time since the previous pass, in milliseconds.
        data["Cycle Time"] = ((time.perf_counter() - loop_start_time) * 1000, "ms")
        encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 70]
        frame = cap.read()
        loop_start_time = time.perf_counter()
        # Compress the frame to JPEG before broadcasting it.
        ret, jpg_image = cv2.imencode(".jpg", frame, encode_param)
        rtcom.broadcast_endpoint("camera", bytes(jpg_image), encoding="binary")
        rtcom.broadcast_endpoint("data", data)
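The loop's "Cycle Time" bookkeeping can be isolated into a standalone helper. Below is a sketch, with a hypothetical `do_work` callback standing in for the capture/encode/broadcast body:

```python
import time

def timed_loop(do_work, cycles=3):
    # Record the elapsed wall-clock time of each pass, in ms,
    # mirroring the "Cycle Time" entry broadcast above.
    loop_start_time = time.perf_counter()
    durations = []
    for _ in range(cycles):
        do_work()
        now = time.perf_counter()
        durations.append(((now - loop_start_time) * 1000, "ms"))
        loop_start_time = now
    return durations

durations = timed_loop(lambda: sum(range(10000)))
print(durations)
```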
|
/-
Copyright (c) 2020 Zhouhang Zhou. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Zhouhang Zhou, Yury Kudryashov
-/
import measure_theory.integral.integrable_on
import measure_theory.integral.bochner
import order.filter.indicator_function
import topology.metric_space.thickened_indicator
/-!
# Set integral
In this file we prove some properties of `β« x in s, f x βΞΌ`. Recall that this notation
is defined as `β« x, f x β(ΞΌ.restrict s)`. In `integral_indicator` we prove that for a measurable
function `f` and a measurable set `s` this definition coincides with another natural definition:
`β« x, indicator s f x βΞΌ = β« x in s, f x βΞΌ`, where `indicator s f x` is equal to `f x` for `x β s`
and is zero otherwise.
Since `β« x in s, f x βΞΌ` is a notation, one can rewrite or apply any theorem about `β« x, f x βΞΌ`
directly. In this file we prove some theorems about dependence of `β« x in s, f x βΞΌ` on `s`, e.g.
`integral_union`, `integral_empty`, `integral_univ`.
We use the property `integrable_on f s ΞΌ := integrable f (ΞΌ.restrict s)`, defined in
`measure_theory.integrable_on`. We also defined in that same file a predicate
`integrable_at_filter (f : Ξ± β E) (l : filter Ξ±) (ΞΌ : measure Ξ±)` saying that `f` is integrable at
some set `s β l`.
Finally, we prove a version of the
[Fundamental theorem of calculus](https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus)
for set integral, see `filter.tendsto.integral_sub_linear_is_o_ae` and its corollaries.
Namely, consider a measurably generated filter `l`, a measure `ΞΌ` finite at this filter, and
a function `f` that has a finite limit `c` at `l β ΞΌ.ae`. Then `β« x in s, f x βΞΌ = ΞΌ s β’ c + o(ΞΌ s)`
as `s` tends to `l.small_sets`, i.e. for any `Ξ΅>0` there exists `t β l` such that
`β₯β« x in s, f x βΞΌ - ΞΌ s β’ cβ₯ β€ Ξ΅ * ΞΌ s` whenever `s β t`. We also formulate a version of this
theorem for a locally finite measure `ΞΌ` and a function `f` continuous at a point `a`.
## Notation
We provide the following notations for expressing the integral of a function on a set :
* `β« a in s, f a βΞΌ` is `measure_theory.integral (ΞΌ.restrict s) f`
* `β« a in s, f a` is `β« a in s, f a βvolume`
Note that the set notations are defined in the file `measure_theory/integral/bochner`,
but we reference them here because all theorems about set integrals are in this file.
-/
noncomputable theory
open set filter topological_space measure_theory function
open_locale classical topological_space interval big_operators filter ennreal nnreal measure_theory
variables {Ξ± Ξ² E F : Type*} [measurable_space Ξ±]
namespace measure_theory
section normed_group
variables [normed_group E] {f g : Ξ± β E} {s t : set Ξ±} {ΞΌ Ξ½ : measure Ξ±}
{l l' : filter Ξ±}
variables [complete_space E] [normed_space β E]
lemma set_integral_congr_ae (hs : measurable_set s) (h : βα΅ x βΞΌ, x β s β f x = g x) :
β« x in s, f x βΞΌ = β« x in s, g x βΞΌ :=
integral_congr_ae ((ae_restrict_iff' hs).2 h)
lemma set_integral_congr (hs : measurable_set s) (h : eq_on f g s) :
β« x in s, f x βΞΌ = β« x in s, g x βΞΌ :=
set_integral_congr_ae hs $ eventually_of_forall h
lemma set_integral_congr_set_ae (hst : s =α΅[ΞΌ] t) :
β« x in s, f x βΞΌ = β« x in t, f x βΞΌ :=
by rw measure.restrict_congr_set hst
lemma integral_union_ae (hst : ae_disjoint ΞΌ s t) (ht : null_measurable_set t ΞΌ)
(hfs : integrable_on f s ΞΌ) (hft : integrable_on f t ΞΌ) :
β« x in s βͺ t, f x βΞΌ = β« x in s, f x βΞΌ + β« x in t, f x βΞΌ :=
by simp only [integrable_on, measure.restrict_unionβ hst ht, integral_add_measure hfs hft]
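Numerically, `integral_union_ae` is the familiar additivity of the integral over (essentially) disjoint sets. A throwaway Python illustration with midpoint Riemann sums, outside the formal development:

```python
def set_integral(f, mem, a=0.0, b=1.0, n=100_000):
    # Midpoint Riemann sum of f over {x in [a, b] : mem(x)}.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h
               for i in range(n) if mem(a + (i + 0.5) * h))

f = lambda x: x * x
s = lambda x: x < 0.3                # s = [0, 0.3)
t = lambda x: 0.5 <= x < 0.9         # t = [0.5, 0.9), disjoint from s
lhs = set_integral(f, lambda x: s(x) or t(x))
rhs = set_integral(f, s) + set_integral(f, t)
print(lhs, rhs)  # agree up to floating-point error
```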
lemma integral_diff (ht : measurable_set t) (hfs : integrable_on f s ΞΌ)
(hft : integrable_on f t ΞΌ) (hts : t β s) :
β« x in s \ t, f x βΞΌ = β« x in s, f x βΞΌ - β« x in t, f x βΞΌ :=
begin
rw [eq_sub_iff_add_eq, β integral_union, diff_union_of_subset hts],
exacts [disjoint_diff.symm, ht, hfs.mono_set (diff_subset _ _), hft]
end
lemma integral_finset_bUnion {ΞΉ : Type*} (t : finset ΞΉ) {s : ΞΉ β set Ξ±}
(hs : β i β t, measurable_set (s i)) (h's : set.pairwise βt (disjoint on s))
(hf : β i β t, integrable_on f (s i) ΞΌ) :
β« x in (β i β t, s i), f x β ΞΌ = β i in t, β« x in s i, f x β ΞΌ :=
begin
induction t using finset.induction_on with a t hat IH hs h's,
{ simp },
{ simp only [finset.coe_insert, finset.forall_mem_insert, set.pairwise_insert,
finset.set_bUnion_insert] at hs hf h's β’,
rw [integral_union _ _ hf.1 (integrable_on_finset_Union.2 hf.2)],
{ rw [finset.sum_insert hat, IH hs.2 h's.1 hf.2] },
{ simp only [disjoint_Union_right],
exact (Ξ» i hi, (h's.2 i hi (ne_of_mem_of_not_mem hi hat).symm).1) },
{ exact finset.measurable_set_bUnion _ hs.2 } }
end
lemma integral_fintype_Union {ΞΉ : Type*} [fintype ΞΉ] {s : ΞΉ β set Ξ±}
(hs : β i, measurable_set (s i)) (h's : pairwise (disjoint on s))
(hf : β i, integrable_on f (s i) ΞΌ) :
β« x in (β i, s i), f x β ΞΌ = β i, β« x in s i, f x β ΞΌ :=
begin
convert integral_finset_bUnion finset.univ (Ξ» i hi, hs i) _ (Ξ» i _, hf i),
{ simp },
{ simp [pairwise_univ, h's] }
end
lemma integral_empty : β« x in β…, f x βΞΌ = 0 := by rw [measure.restrict_empty, integral_zero_measure]
lemma integral_univ : β« x in univ, f x βΞΌ = β« x, f x βΞΌ := by rw [measure.restrict_univ]
lemma integral_add_compl (hs : measurable_set s) (hfi : integrable f ΞΌ) :
β« x in s, f x βΞΌ + β« x in sαΆ, f x βΞΌ = β« x, f x βΞΌ :=
by rw [β integral_union (@disjoint_compl_right (set Ξ±) _ _) hs.compl
hfi.integrable_on hfi.integrable_on, union_compl_self, integral_univ]
/-- For a function `f` and a measurable set `s`, the integral of `indicator s f`
over the whole space is equal to `β« x in s, f x βΞΌ` defined as `β« x, f x β(ΞΌ.restrict s)`. -/
lemma integral_indicator (hs : measurable_set s) :
β« x, indicator s f x βΞΌ = β« x in s, f x βΞΌ :=
begin
by_cases hfi : integrable_on f s ΞΌ, swap,
{ rwa [integral_undef, integral_undef],
rwa integrable_indicator_iff hs },
calc β« x, indicator s f x βΞΌ = β« x in s, indicator s f x βΞΌ + β« x in sαΆ, indicator s f x βΞΌ :
(integral_add_compl hs (hfi.indicator hs)).symm
... = β« x in s, f x βΞΌ + β« x in sαΆ, 0 βΞΌ :
congr_arg2 (+) (integral_congr_ae (indicator_ae_eq_restrict hs))
(integral_congr_ae (indicator_ae_eq_restrict_compl hs))
... = β« x in s, f x βΞΌ : by simp
end
lemma of_real_set_integral_one_of_measure_ne_top {Ξ± : Type*} {m : measurable_space Ξ±}
{ΞΌ : measure Ξ±} {s : set Ξ±} (hs : ΞΌ s β β) :
ennreal.of_real (β« x in s, (1 : β) βΞΌ) = ΞΌ s :=
calc
ennreal.of_real (β« x in s, (1 : β) βΞΌ)
= ennreal.of_real (β« x in s, β₯(1 : β)β₯ βΞΌ) : by simp only [norm_one]
... = β«β» x in s, 1 βΞΌ :
begin
rw of_real_integral_norm_eq_lintegral_nnnorm (integrable_on_const.2 (or.inr hs.lt_top)),
simp only [nnnorm_one, ennreal.coe_one],
end
... = ΞΌ s : set_lintegral_one _
lemma of_real_set_integral_one {Ξ± : Type*} {m : measurable_space Ξ±} (ΞΌ : measure Ξ±)
[is_finite_measure ΞΌ] (s : set Ξ±) :
ennreal.of_real (β« x in s, (1 : β) βΞΌ) = ΞΌ s :=
of_real_set_integral_one_of_measure_ne_top (measure_ne_top ΞΌ s)
lemma integral_piecewise [decidable_pred (β s)] (hs : measurable_set s)
{f g : Ξ± β E} (hf : integrable_on f s ΞΌ) (hg : integrable_on g sαΆ ΞΌ) :
β« x, s.piecewise f g x βΞΌ = β« x in s, f x βΞΌ + β« x in sαΆ, g x βΞΌ :=
by rw [β set.indicator_add_compl_eq_piecewise,
integral_add' (hf.indicator hs) (hg.indicator hs.compl),
integral_indicator hs, integral_indicator hs.compl]
lemma tendsto_set_integral_of_monotone {ΞΉ : Type*} [encodable ΞΉ] [semilattice_sup ΞΉ]
{s : ΞΉ β set Ξ±} {f : Ξ± β E} (hsm : β i, measurable_set (s i))
(h_mono : monotone s) (hfi : integrable_on f (β n, s n) ΞΌ) :
tendsto (Ξ» i, β« a in s i, f a βΞΌ) at_top (π (β« a in (β n, s n), f a βΞΌ)) :=
begin
have hfi' : β«β» x in β n, s n, β₯f xβ₯β βΞΌ < β := hfi.2,
set S := β i, s i,
have hSm : measurable_set S := measurable_set.Union hsm,
have hsub : β {i}, s i β S, from subset_Union s,
rw [β with_density_apply _ hSm] at hfi',
set Ξ½ := ΞΌ.with_density (Ξ» x, β₯f xβ₯β) with hΞ½,
refine metric.nhds_basis_closed_ball.tendsto_right_iff.2 (Ξ» Ξ΅ Ξ΅0, _),
lift Ξ΅ to ββ₯0 using Ξ΅0.le,
have : βαΆ i in at_top, Ξ½ (s i) β Icc (Ξ½ S - Ξ΅) (Ξ½ S + Ξ΅),
from tendsto_measure_Union h_mono (ennreal.Icc_mem_nhds hfi'.ne (ennreal.coe_pos.2 Ξ΅0).ne'),
refine this.mono (Ξ» i hi, _),
rw [mem_closed_ball_iff_norm', β integral_diff (hsm i) hfi (hfi.mono_set hsub) hsub,
β coe_nnnorm, nnreal.coe_le_coe, β ennreal.coe_le_coe],
refine (ennnorm_integral_le_lintegral_ennnorm _).trans _,
rw [β with_density_apply _ (hSm.diff (hsm _)), β hΞ½, measure_diff hsub (hsm _)],
exacts [tsub_le_iff_tsub_le.mp hi.1,
(hi.2.trans_lt $ ennreal.add_lt_top.2 β¨hfi', ennreal.coe_lt_topβ©).ne]
end
lemma has_sum_integral_Union_ae {ΞΉ : Type*} [encodable ΞΉ] {s : ΞΉ β set Ξ±} {f : Ξ± β E}
(hm : β i, null_measurable_set (s i) ΞΌ) (hd : pairwise (ae_disjoint ΞΌ on s))
(hfi : integrable_on f (β i, s i) ΞΌ) :
has_sum (Ξ» n, β« a in s n, f a β ΞΌ) (β« a in β n, s n, f a βΞΌ) :=
begin
simp only [integrable_on, measure.restrict_Union_ae hd hm] at hfi β’,
exact has_sum_integral_measure hfi
end
lemma has_sum_integral_Union {ΞΉ : Type*} [encodable ΞΉ] {s : ΞΉ β set Ξ±} {f : Ξ± β E}
(hm : β i, measurable_set (s i)) (hd : pairwise (disjoint on s))
(hfi : integrable_on f (β i, s i) ΞΌ) :
has_sum (Ξ» n, β« a in s n, f a β ΞΌ) (β« a in β n, s n, f a βΞΌ) :=
has_sum_integral_Union_ae (Ξ» i, (hm i).null_measurable_set) (hd.mono (Ξ» i j h, h.ae_disjoint)) hfi
lemma integral_Union {ΞΉ : Type*} [encodable ΞΉ] {s : ΞΉ β set Ξ±} {f : Ξ± β E}
(hm : β i, measurable_set (s i)) (hd : pairwise (disjoint on s))
(hfi : integrable_on f (β i, s i) ΞΌ) :
(β« a in (β n, s n), f a βΞΌ) = β' n, β« a in s n, f a β ΞΌ :=
(has_sum.tsum_eq (has_sum_integral_Union hm hd hfi)).symm
lemma integral_Union_ae {ΞΉ : Type*} [encodable ΞΉ] {s : ΞΉ β set Ξ±} {f : Ξ± β E}
(hm : β i, null_measurable_set (s i) ΞΌ) (hd : pairwise (ae_disjoint ΞΌ on s))
(hfi : integrable_on f (β i, s i) ΞΌ) :
(β« a in (β n, s n), f a βΞΌ) = β' n, β« a in s n, f a β ΞΌ :=
(has_sum.tsum_eq (has_sum_integral_Union_ae hm hd hfi)).symm
lemma set_integral_eq_zero_of_forall_eq_zero {f : Ξ± β E} (hf : strongly_measurable f)
(ht_eq : β x β t, f x = 0) :
β« x in t, f x βΞΌ = 0 :=
begin
refine integral_eq_zero_of_ae _,
rw [eventually_eq, ae_restrict_iff (hf.measurable_set_eq_fun strongly_measurable_zero)],
refine eventually_of_forall (Ξ» x hx, _),
rw pi.zero_apply,
exact ht_eq x hx,
end
lemma set_integral_union_eq_left {f : Ξ± β E} (hf : strongly_measurable f) (hfi : integrable f ΞΌ)
(hs : measurable_set s) (ht_eq : β x β t, f x = 0) :
β« x in (s βͺ t), f x βΞΌ = β« x in s, f x βΞΌ :=
begin
rw [β set.union_diff_self, union_comm, integral_union,
set_integral_eq_zero_of_forall_eq_zero _ (Ξ» x hx, ht_eq x (diff_subset _ _ hx)), zero_add],
exacts [hf, disjoint_diff.symm, hs, hfi.integrable_on, hfi.integrable_on]
end
lemma set_integral_neg_eq_set_integral_nonpos [linear_order E] [order_closed_topology E]
{f : Ξ± β E} (hf : strongly_measurable f) (hfi : integrable f ΞΌ) :
β« x in {x | f x < 0}, f x βΞΌ = β« x in {x | f x β€ 0}, f x βΞΌ :=
begin
have h_union : {x | f x β€ 0} = {x | f x < 0} βͺ {x | f x = 0},
by { ext, simp_rw [set.mem_union_eq, set.mem_set_of_eq], exact le_iff_lt_or_eq, },
rw h_union,
exact (set_integral_union_eq_left hf hfi (hf.measurable_set_lt strongly_measurable_const)
(Ξ» x hx, hx)).symm,
end
lemma integral_norm_eq_pos_sub_neg {f : Ξ± β β} (hf : strongly_measurable f)
(hfi : integrable f ΞΌ) :
β« x, β₯f xβ₯ βΞΌ = β« x in {x | 0 β€ f x}, f x βΞΌ - β« x in {x | f x β€ 0}, f x βΞΌ :=
have h_meas : measurable_set {x | 0 β€ f x}, from strongly_measurable_const.measurable_set_le hf,
calc β« x, β₯f xβ₯ βΞΌ = β« x in {x | 0 β€ f x}, β₯f xβ₯ βΞΌ + β« x in {x | 0 β€ f x}αΆ, β₯f xβ₯ βΞΌ :
by rw β integral_add_compl h_meas hfi.norm
... = β« x in {x | 0 β€ f x}, f x βΞΌ + β« x in {x | 0 β€ f x}αΆ, β₯f xβ₯ βΞΌ :
begin
congr' 1,
refine set_integral_congr h_meas (Ξ» x hx, _),
dsimp only,
rw [real.norm_eq_abs, abs_eq_self.mpr _],
exact hx,
end
... = β« x in {x | 0 β€ f x}, f x βΞΌ - β« x in {x | 0 β€ f x}αΆ, f x βΞΌ :
begin
congr' 1,
rw β integral_neg,
refine set_integral_congr h_meas.compl (Ξ» x hx, _),
dsimp only,
rw [real.norm_eq_abs, abs_eq_neg_self.mpr _],
rw [set.mem_compl_iff, set.nmem_set_of_eq] at hx,
linarith,
end
... = β« x in {x | 0 β€ f x}, f x βΞΌ - β« x in {x | f x β€ 0}, f x βΞΌ :
by { rw β set_integral_neg_eq_set_integral_nonpos hf hfi, congr, ext1 x, simp, }
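`integral_norm_eq_pos_sub_neg` is, numerically, the decomposition of the integral of the norm into the positive part minus the negative part. A quick Riemann-sum check in throwaway Python, again outside the formal development:

```python
def set_integral(f, mem, a=-1.0, b=1.0, n=100_000):
    # Midpoint Riemann sum of f over {x in [a, b] : mem(x)}.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h
               for i in range(n) if mem(a + (i + 0.5) * h))

f = lambda x: x  # changes sign at 0
norm_integral = set_integral(lambda x: abs(f(x)), lambda x: True)
pos_part = set_integral(f, lambda x: f(x) >= 0)
neg_part = set_integral(f, lambda x: f(x) <= 0)
print(norm_integral, pos_part - neg_part)  # both approximately 1
```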
lemma set_integral_const (c : E) : β« x in s, c βΞΌ = (ΞΌ s).to_real β’ c :=
by rw [integral_const, measure.restrict_apply_univ]
@[simp]
lemma integral_indicator_const (e : E) β¦s : set Ξ±β¦ (s_meas : measurable_set s) :
β« (a : Ξ±), s.indicator (Ξ» (x : Ξ±), e) a βΞΌ = (ΞΌ s).to_real β’ e :=
by rw [integral_indicator s_meas, β set_integral_const]
@[simp]
lemma integral_indicator_one β¦s : set Ξ±β¦ (hs : measurable_set s) :
β« a, s.indicator 1 a βΞΌ = (ΞΌ s).to_real :=
(integral_indicator_const 1 hs).trans ((smul_eq_mul _).trans (mul_one _))
lemma set_integral_indicator_const_Lp {p : ββ₯0β} (hs : measurable_set s) (ht : measurable_set t)
(hΞΌt : ΞΌ t β β) (x : E) :
β« a in s, indicator_const_Lp p ht hΞΌt x a βΞΌ = (ΞΌ (t β© s)).to_real β’ x :=
calc β« a in s, indicator_const_Lp p ht hΞΌt x a βΞΌ
= (β« a in s, t.indicator (Ξ» _, x) a βΞΌ) :
by rw set_integral_congr_ae hs (indicator_const_Lp_coe_fn.mono (Ξ» x hx hxs, hx))
... = (ΞΌ (t β© s)).to_real β’ x : by rw [integral_indicator_const _ ht, measure.restrict_apply ht]
lemma integral_indicator_const_Lp {p : ββ₯0β} (ht : measurable_set t) (hΞΌt : ΞΌ t β β) (x : E) :
β« a, indicator_const_Lp p ht hΞΌt x a βΞΌ = (ΞΌ t).to_real β’ x :=
calc β« a, indicator_const_Lp p ht hΞΌt x a βΞΌ
= β« a in univ, indicator_const_Lp p ht hΞΌt x a βΞΌ : by rw integral_univ
... = (ΞΌ (t β© univ)).to_real β’ x : set_integral_indicator_const_Lp measurable_set.univ ht hΞΌt x
... = (ΞΌ t).to_real β’ x : by rw inter_univ
lemma set_integral_map {Ξ²} [measurable_space Ξ²] {g : Ξ± β Ξ²} {f : Ξ² β E} {s : set Ξ²}
(hs : measurable_set s)
(hf : ae_strongly_measurable f (measure.map g ΞΌ)) (hg : ae_measurable g ΞΌ) :
β« y in s, f y β(measure.map g ΞΌ) = β« x in g β»ΒΉ' s, f (g x) βΞΌ :=
begin
rw [measure.restrict_map_of_ae_measurable hg hs,
integral_map (hg.mono_measure measure.restrict_le_self) (hf.mono_measure _)],
exact measure.map_mono_of_ae_measurable measure.restrict_le_self hg
end
lemma _root_.measurable_embedding.set_integral_map {Ξ²} {_ : measurable_space Ξ²} {f : Ξ± β Ξ²}
(hf : measurable_embedding f) (g : Ξ² β E) (s : set Ξ²) :
β« y in s, g y β(measure.map f ΞΌ) = β« x in f β»ΒΉ' s, g (f x) βΞΌ :=
by rw [hf.restrict_map, hf.integral_map]
lemma _root_.closed_embedding.set_integral_map [topological_space Ξ±] [borel_space Ξ±]
{Ξ²} [measurable_space Ξ²] [topological_space Ξ²] [borel_space Ξ²]
{g : Ξ± β Ξ²} {f : Ξ² β E} (s : set Ξ²) (hg : closed_embedding g) :
β« y in s, f y β(measure.map g ΞΌ) = β« x in g β»ΒΉ' s, f (g x) βΞΌ :=
hg.measurable_embedding.set_integral_map _ _
lemma measure_preserving.set_integral_preimage_emb {Ξ²} {_ : measurable_space Ξ²} {f : Ξ± β Ξ²} {Ξ½}
(hβ : measure_preserving f ΞΌ Ξ½) (hβ : measurable_embedding f) (g : Ξ² β E) (s : set Ξ²) :
β« x in f β»ΒΉ' s, g (f x) βΞΌ = β« y in s, g y βΞ½ :=
(hβ.restrict_preimage_emb hβ s).integral_comp hβ _
lemma measure_preserving.set_integral_image_emb {Ξ²} {_ : measurable_space Ξ²} {f : Ξ± β Ξ²} {Ξ½}
(hβ : measure_preserving f ΞΌ Ξ½) (hβ : measurable_embedding f) (g : Ξ² β E) (s : set Ξ±) :
β« y in f '' s, g y βΞ½ = β« x in s, g (f x) βΞΌ :=
eq.symm $ (hβ.restrict_image_emb hβ s).integral_comp hβ _
lemma set_integral_map_equiv {Ξ²} [measurable_space Ξ²] (e : Ξ± βα΅ Ξ²) (f : Ξ² β E) (s : set Ξ²) :
β« y in s, f y β(measure.map e ΞΌ) = β« x in e β»ΒΉ' s, f (e x) βΞΌ :=
e.measurable_embedding.set_integral_map f s
lemma norm_set_integral_le_of_norm_le_const_ae {C : β} (hs : ΞΌ s < β)
(hC : βα΅ x βΞΌ.restrict s, β₯f xβ₯ β€ C) :
β₯β« x in s, f x βΞΌβ₯ β€ C * (ΞΌ s).to_real :=
begin
rw β measure.restrict_apply_univ at *,
haveI : is_finite_measure (ΞΌ.restrict s) := β¨βΉ_βΊβ©,
exact norm_integral_le_of_norm_le_const hC
end
lemma norm_set_integral_le_of_norm_le_const_ae' {C : β} (hs : ΞΌ s < β)
(hC : βα΅ x βΞΌ, x β s β β₯f xβ₯ β€ C) (hfm : ae_strongly_measurable f (ΞΌ.restrict s)) :
β₯β« x in s, f x βΞΌβ₯ β€ C * (ΞΌ s).to_real :=
begin
apply norm_set_integral_le_of_norm_le_const_ae hs,
have A : βα΅ (x : Ξ±) βΞΌ, x β s β β₯ae_strongly_measurable.mk f hfm xβ₯ β€ C,
{ filter_upwards [hC, hfm.ae_mem_imp_eq_mk] with _ h1 h2 h3,
rw [β h2 h3],
exact h1 h3 },
have B : measurable_set {x | β₯(hfm.mk f) xβ₯ β€ C} :=
hfm.strongly_measurable_mk.norm.measurable measurable_set_Iic,
filter_upwards [hfm.ae_eq_mk, (ae_restrict_iff B).2 A] with _ h1 _,
rwa h1,
end
lemma norm_set_integral_le_of_norm_le_const_ae'' {C : β} (hs : ΞΌ s < β) (hsm : measurable_set s)
(hC : βα΅ x βΞΌ, x β s β β₯f xβ₯ β€ C) :
β₯β« x in s, f x βΞΌβ₯ β€ C * (ΞΌ s).to_real :=
norm_set_integral_le_of_norm_le_const_ae hs $ by rwa [ae_restrict_eq hsm, eventually_inf_principal]
lemma norm_set_integral_le_of_norm_le_const {C : β} (hs : ΞΌ s < β)
(hC : β x β s, β₯f xβ₯ β€ C) (hfm : ae_strongly_measurable f (ΞΌ.restrict s)) :
β₯β« x in s, f x βΞΌβ₯ β€ C * (ΞΌ s).to_real :=
norm_set_integral_le_of_norm_le_const_ae' hs (eventually_of_forall hC) hfm
lemma norm_set_integral_le_of_norm_le_const' {C : β} (hs : ΞΌ s < β) (hsm : measurable_set s)
(hC : β x β s, β₯f xβ₯ β€ C) :
β₯β« x in s, f x βΞΌβ₯ β€ C * (ΞΌ s).to_real :=
norm_set_integral_le_of_norm_le_const_ae'' hs hsm $ eventually_of_forall hC
lemma set_integral_eq_zero_iff_of_nonneg_ae {f : Ξ± β β} (hf : 0 β€α΅[ΞΌ.restrict s] f)
(hfi : integrable_on f s ΞΌ) :
β« x in s, f x βΞΌ = 0 β f =α΅[ΞΌ.restrict s] 0 :=
integral_eq_zero_iff_of_nonneg_ae hf hfi
lemma set_integral_pos_iff_support_of_nonneg_ae {f : Ξ± β β} (hf : 0 β€α΅[ΞΌ.restrict s] f)
(hfi : integrable_on f s ΞΌ) :
0 < β« x in s, f x βΞΌ β 0 < ΞΌ (support f β© s) :=
begin
rw [integral_pos_iff_support_of_nonneg_ae hf hfi, measure.restrict_applyβ],
rw support_eq_preimage,
exact hfi.ae_strongly_measurable.ae_measurable.null_measurable (measurable_set_singleton 0).compl
end
lemma set_integral_trim {Ξ±} {m m0 : measurable_space Ξ±} {ΞΌ : measure Ξ±} (hm : m β€ m0) {f : Ξ± β E}
(hf_meas : strongly_measurable[m] f) {s : set Ξ±} (hs : measurable_set[m] s) :
β« x in s, f x βΞΌ = β« x in s, f x β(ΞΌ.trim hm) :=
by rwa [integral_trim hm hf_meas, restrict_trim hm ΞΌ]
lemma integral_Icc_eq_integral_Ioc' [partial_order Ξ±] {f : Ξ± β E} {a b : Ξ±} (ha : ΞΌ {a} = 0) :
β« t in Icc a b, f t βΞΌ = β« t in Ioc a b, f t βΞΌ :=
set_integral_congr_set_ae (Ioc_ae_eq_Icc' ha).symm
lemma integral_Ioc_eq_integral_Ioo' [partial_order Ξ±] {f : Ξ± β E} {a b : Ξ±} (hb : ΞΌ {b} = 0) :
β« t in Ioc a b, f t βΞΌ = β« t in Ioo a b, f t βΞΌ :=
set_integral_congr_set_ae (Ioo_ae_eq_Ioc' hb).symm
lemma integral_Icc_eq_integral_Ioc [partial_order Ξ±] {f : Ξ± β E} {a b : Ξ±} [has_no_atoms ΞΌ] :
β« t in Icc a b, f t βΞΌ = β« t in Ioc a b, f t βΞΌ :=
integral_Icc_eq_integral_Ioc' $ measure_singleton a
lemma integral_Ioc_eq_integral_Ioo [partial_order Ξ±] {f : Ξ± β E} {a b : Ξ±} [has_no_atoms ΞΌ] :
β« t in Ioc a b, f t βΞΌ = β« t in Ioo a b, f t βΞΌ :=
integral_Ioc_eq_integral_Ioo' $ measure_singleton b
end normed_group
section mono
variables {ΞΌ : measure Ξ±} {f g : Ξ± β β} {s t : set Ξ±}
(hf : integrable_on f s ΞΌ) (hg : integrable_on g s ΞΌ)
lemma set_integral_mono_ae_restrict (h : f β€α΅[ΞΌ.restrict s] g) :
β« a in s, f a βΞΌ β€ β« a in s, g a βΞΌ :=
integral_mono_ae hf hg h
lemma set_integral_mono_ae (h : f β€α΅[ΞΌ] g) :
β« a in s, f a βΞΌ β€ β« a in s, g a βΞΌ :=
set_integral_mono_ae_restrict hf hg (ae_restrict_of_ae h)
lemma set_integral_mono_on (hs : measurable_set s) (h : β x β s, f x β€ g x) :
β« a in s, f a βΞΌ β€ β« a in s, g a βΞΌ :=
set_integral_mono_ae_restrict hf hg
(by simp [hs, eventually_le, eventually_inf_principal, ae_of_all _ h])
include hf hg -- why do I need this include, but we don't need it in other lemmas?
lemma set_integral_mono_on_ae (hs : measurable_set s) (h : βα΅ x βΞΌ, x β s β f x β€ g x) :
β« a in s, f a βΞΌ β€ β« a in s, g a βΞΌ :=
by { refine set_integral_mono_ae_restrict hf hg _, rwa [eventually_le, ae_restrict_iff' hs], }
omit hf hg
lemma set_integral_mono (h : f β€ g) :
β« a in s, f a βΞΌ β€ β« a in s, g a βΞΌ :=
integral_mono hf hg h
lemma set_integral_mono_set (hfi : integrable_on f t ΞΌ) (hf : 0 β€α΅[ΞΌ.restrict t] f)
(hst : s β€α΅[ΞΌ] t) :
β« x in s, f x βΞΌ β€ β« x in t, f x βΞΌ :=
integral_mono_measure (measure.restrict_mono_ae hst) hf hfi
end mono
section nonneg
variables {ΞΌ : measure Ξ±} {f : Ξ± β β} {s : set Ξ±}
lemma set_integral_nonneg_of_ae_restrict (hf : 0 β€α΅[ΞΌ.restrict s] f) :
0 β€ β« a in s, f a βΞΌ :=
integral_nonneg_of_ae hf
lemma set_integral_nonneg_of_ae (hf : 0 β€α΅[ΞΌ] f) : 0 β€ β« a in s, f a βΞΌ :=
set_integral_nonneg_of_ae_restrict (ae_restrict_of_ae hf)
lemma set_integral_nonneg (hs : measurable_set s) (hf : β a, a β s β 0 β€ f a) :
0 β€ β« a in s, f a βΞΌ :=
set_integral_nonneg_of_ae_restrict ((ae_restrict_iff' hs).mpr (ae_of_all ΞΌ hf))
lemma set_integral_nonneg_ae (hs : measurable_set s) (hf : βα΅ a βΞΌ, a β s β 0 β€ f a) :
0 β€ β« a in s, f a βΞΌ :=
set_integral_nonneg_of_ae_restrict $ by rwa [eventually_le, ae_restrict_iff' hs]
lemma set_integral_le_nonneg {s : set Ξ±} (hs : measurable_set s) (hf : strongly_measurable f)
(hfi : integrable f ΞΌ) :
β« x in s, f x βΞΌ β€ β« x in {y | 0 β€ f y}, f x βΞΌ :=
begin
rw [β integral_indicator hs,
β integral_indicator (strongly_measurable_const.measurable_set_le hf)],
exact integral_mono (hfi.indicator hs)
(hfi.indicator (strongly_measurable_const.measurable_set_le hf))
(indicator_le_indicator_nonneg s f),
end
lemma set_integral_nonpos_of_ae_restrict (hf : f β€α΅[ΞΌ.restrict s] 0) :
β« a in s, f a βΞΌ β€ 0 :=
integral_nonpos_of_ae hf
lemma set_integral_nonpos_of_ae (hf : f β€α΅[ΞΌ] 0) : β« a in s, f a βΞΌ β€ 0 :=
set_integral_nonpos_of_ae_restrict (ae_restrict_of_ae hf)
lemma set_integral_nonpos (hs : measurable_set s) (hf : β a, a β s β f a β€ 0) :
β« a in s, f a βΞΌ β€ 0 :=
set_integral_nonpos_of_ae_restrict ((ae_restrict_iff' hs).mpr (ae_of_all ΞΌ hf))
lemma set_integral_nonpos_ae (hs : measurable_set s) (hf : βα΅ a βΞΌ, a β s β f a β€ 0) :
β« a in s, f a βΞΌ β€ 0 :=
set_integral_nonpos_of_ae_restrict $ by rwa [eventually_le, ae_restrict_iff' hs]
lemma set_integral_nonpos_le {s : set Ξ±} (hs : measurable_set s) (hf : strongly_measurable f)
(hfi : integrable f ΞΌ) :
β« x in {y | f y β€ 0}, f x βΞΌ β€ β« x in s, f x βΞΌ :=
begin
rw [β integral_indicator hs,
β integral_indicator (hf.measurable_set_le strongly_measurable_const)],
exact integral_mono (hfi.indicator (hf.measurable_set_le strongly_measurable_const))
(hfi.indicator hs) (indicator_nonpos_le_indicator s f),
end
end nonneg
section tendsto_mono
variables {ΞΌ : measure Ξ±} [normed_group E] [complete_space E] [normed_space β E]
{s : β β set Ξ±} {f : Ξ± β E}
lemma _root_.antitone.tendsto_set_integral (hsm : β i, measurable_set (s i))
(h_anti : antitone s) (hfi : integrable_on f (s 0) ΞΌ) :
tendsto (Ξ»i, β« a in s i, f a βΞΌ) at_top (π (β« a in (β n, s n), f a βΞΌ)) :=
begin
let bound : Ξ± β β := indicator (s 0) (Ξ» a, β₯f aβ₯),
have h_int_eq : (Ξ» i, β« a in s i, f a βΞΌ) = (Ξ» i, β« a, (s i).indicator f a βΞΌ),
from funext (Ξ» i, (integral_indicator (hsm i)).symm),
rw h_int_eq,
rw β integral_indicator (measurable_set.Inter hsm),
refine tendsto_integral_of_dominated_convergence bound _ _ _ _,
{ intro n,
rw ae_strongly_measurable_indicator_iff (hsm n),
exact (integrable_on.mono_set hfi (h_anti (zero_le n))).1 },
{ rw integrable_indicator_iff (hsm 0),
exact hfi.norm, },
{ simp_rw norm_indicator_eq_indicator_norm,
refine Ξ» n, eventually_of_forall (Ξ» x, _),
exact indicator_le_indicator_of_subset (h_anti (zero_le n)) (Ξ» a, norm_nonneg _) _ },
{ filter_upwards with a using le_trans (h_anti.tendsto_indicator _ _ _) (pure_le_nhds _), },
end
end tendsto_mono
/-! ### Continuity of the set integral
We prove that for any set `s`, the function `Ξ» f : Ξ± ββ[ΞΌ] E, β« x in s, f x βΞΌ` is continuous. -/
section continuous_set_integral
variables [normed_group E] {π : Type*} [normed_field π] [normed_group F] [normed_space π F]
{p : ββ₯0β} {ΞΌ : measure Ξ±}
/-- For `f : Lp E p ΞΌ`, we can define an element of `Lp E p (ΞΌ.restrict s)` by
`((Lp.mem_βp f).restrict s).to_Lp f`. This map is additive. -/
lemma Lp_to_Lp_restrict_add (f g : Lp E p ΞΌ) (s : set Ξ±) :
((Lp.mem_βp (f + g)).restrict s).to_Lp β(f + g)
= ((Lp.mem_βp f).restrict s).to_Lp f + ((Lp.mem_βp g).restrict s).to_Lp g :=
begin
ext1,
refine (ae_restrict_of_ae (Lp.coe_fn_add f g)).mp _,
refine (Lp.coe_fn_add (mem_βp.to_Lp f ((Lp.mem_βp f).restrict s))
(mem_βp.to_Lp g ((Lp.mem_βp g).restrict s))).mp _,
refine (mem_βp.coe_fn_to_Lp ((Lp.mem_βp f).restrict s)).mp _,
refine (mem_βp.coe_fn_to_Lp ((Lp.mem_βp g).restrict s)).mp _,
refine (mem_βp.coe_fn_to_Lp ((Lp.mem_βp (f+g)).restrict s)).mono (Ξ» x hx1 hx2 hx3 hx4 hx5, _),
rw [hx4, hx1, pi.add_apply, hx2, hx3, hx5, pi.add_apply],
end
/-- For `f : Lp E p ΞΌ`, we can define an element of `Lp E p (ΞΌ.restrict s)` by
`((Lp.mem_βp f).restrict s).to_Lp f`. This map commutes with scalar multiplication. -/
lemma Lp_to_Lp_restrict_smul (c : π) (f : Lp F p ΞΌ) (s : set Ξ±) :
((Lp.mem_βp (c β’ f)).restrict s).to_Lp β(c β’ f) = c β’ (((Lp.mem_βp f).restrict s).to_Lp f) :=
begin
ext1,
refine (ae_restrict_of_ae (Lp.coe_fn_smul c f)).mp _,
refine (mem_βp.coe_fn_to_Lp ((Lp.mem_βp f).restrict s)).mp _,
refine (mem_βp.coe_fn_to_Lp ((Lp.mem_βp (c β’ f)).restrict s)).mp _,
refine (Lp.coe_fn_smul c (mem_βp.to_Lp f ((Lp.mem_βp f).restrict s))).mono
(Ξ» x hx1 hx2 hx3 hx4, _),
rw [hx2, hx1, pi.smul_apply, hx3, hx4, pi.smul_apply],
end
/-- For `f : Lp E p ΞΌ`, we can define an element of `Lp E p (ΞΌ.restrict s)` by
`((Lp.mem_βp f).restrict s).to_Lp f`. This map is non-expansive. -/
lemma norm_Lp_to_Lp_restrict_le (s : set Ξ±) (f : Lp E p ΞΌ) :
β₯((Lp.mem_βp f).restrict s).to_Lp fβ₯ β€ β₯fβ₯ :=
begin
rw [Lp.norm_def, Lp.norm_def, ennreal.to_real_le_to_real (Lp.snorm_ne_top _) (Lp.snorm_ne_top _)],
refine (le_of_eq _).trans (snorm_mono_measure _ measure.restrict_le_self),
{ exact s, },
exact snorm_congr_ae (mem_βp.coe_fn_to_Lp _),
end
variables (Ξ± F π)
/-- Continuous linear map sending a function of `Lp F p ΞΌ` to the same function in
`Lp F p (ΞΌ.restrict s)`. -/
def Lp_to_Lp_restrict_clm (ΞΌ : measure Ξ±) (p : ββ₯0β) [hp : fact (1 β€ p)] (s : set Ξ±) :
Lp F p ΞΌ βL[π] Lp F p (ΞΌ.restrict s) :=
@linear_map.mk_continuous π π (Lp F p ΞΌ) (Lp F p (ΞΌ.restrict s)) _ _ _ _ _ _ (ring_hom.id π)
β¨Ξ» f, mem_βp.to_Lp f ((Lp.mem_βp f).restrict s), Ξ» f g, Lp_to_Lp_restrict_add f g s,
Ξ» c f, Lp_to_Lp_restrict_smul c f sβ©
1 (by { intro f, rw one_mul, exact norm_Lp_to_Lp_restrict_le s f, })
variables {Ξ± F π}
variables (π)
lemma Lp_to_Lp_restrict_clm_coe_fn [hp : fact (1 β€ p)] (s : set Ξ±) (f : Lp F p ΞΌ) :
Lp_to_Lp_restrict_clm Ξ± F π ΞΌ p s f =α΅[ΞΌ.restrict s] f :=
mem_βp.coe_fn_to_Lp ((Lp.mem_βp f).restrict s)
variables {π}
@[continuity]
lemma continuous_set_integral [normed_space β E] [complete_space E] (s : set Ξ±) :
continuous (Ξ» f : Ξ± ββ[ΞΌ] E, β« x in s, f x βΞΌ) :=
begin
haveI : fact ((1 : ββ₯0β) β€ 1) := β¨le_rflβ©,
have h_comp : (Ξ» f : Ξ± ββ[ΞΌ] E, β« x in s, f x βΞΌ)
= (integral (ΞΌ.restrict s)) β (Ξ» f, Lp_to_Lp_restrict_clm Ξ± E β ΞΌ 1 s f),
{ ext1 f,
rw [function.comp_apply, integral_congr_ae (Lp_to_Lp_restrict_clm_coe_fn β s f)], },
rw h_comp,
exact continuous_integral.comp (Lp_to_Lp_restrict_clm Ξ± E β ΞΌ 1 s).continuous,
end
end continuous_set_integral
end measure_theory
open measure_theory asymptotics metric
variables {ΞΉ : Type*} [normed_group E]
/-- Fundamental theorem of calculus for set integrals: if `ΞΌ` is a measure that is finite at a
filter `l` and `f` is a measurable function that has a finite limit `b` at `l β ΞΌ.ae`, then `β« x in
s i, f x βΞΌ = ΞΌ (s i) β’ b + o(ΞΌ (s i))` at a filter `li` provided that `s i` tends to `l.small_sets`
along `li`. Since `ΞΌ (s i)` is an `ββ₯0β` number, we use `(ΞΌ (s i)).to_real` in the actual statement.
Often there is a good formula for `(ΞΌ (s i)).to_real`, so the formalization can take an optional
argument `m` with this formula and a proof of `(Ξ» i, (ΞΌ (s i)).to_real) =αΆ [li] m`. Without these
arguments, `m i = (ΞΌ (s i)).to_real` is used in the output. -/
lemma filter.tendsto.integral_sub_linear_is_o_ae
[normed_space β E] [complete_space E]
{ΞΌ : measure Ξ±} {l : filter Ξ±} [l.is_measurably_generated]
{f : Ξ± β E} {b : E} (h : tendsto f (l β ΞΌ.ae) (π b))
(hfm : strongly_measurable_at_filter f l ΞΌ) (hΞΌ : ΞΌ.finite_at_filter l)
{s : ΞΉ β set Ξ±} {li : filter ΞΉ} (hs : tendsto s li l.small_sets)
(m : ΞΉ β β := Ξ» i, (ΞΌ (s i)).to_real)
(hsΞΌ : (Ξ» i, (ΞΌ (s i)).to_real) =αΆ [li] m . tactic.interactive.refl) :
(Ξ» i, β« x in s i, f x βΞΌ - m i β’ b) =o[li] m :=
begin
suffices : (Ξ» s, β« x in s, f x βΞΌ - (ΞΌ s).to_real β’ b) =o[l.small_sets] (Ξ» s, (ΞΌ s).to_real),
from (this.comp_tendsto hs).congr' (hsΞΌ.mono $ Ξ» a ha, ha βΈ rfl) hsΞΌ,
refine is_o_iff.2 (Ξ» Ξ΅ Ξ΅β, _),
have : βαΆ s in l.small_sets, βαΆ x in ΞΌ.ae, x β s β f x β closed_ball b Ξ΅ :=
eventually_small_sets_eventually.2 (h.eventually $ closed_ball_mem_nhds _ Ξ΅β),
filter_upwards [hΞΌ.eventually, (hΞΌ.integrable_at_filter_of_tendsto_ae hfm h).eventually,
hfm.eventually, this],
simp only [mem_closed_ball, dist_eq_norm],
intros s hΞΌs h_integrable hfm h_norm,
rw [β set_integral_const, β integral_sub h_integrable (integrable_on_const.2 $ or.inr hΞΌs),
real.norm_eq_abs, abs_of_nonneg ennreal.to_real_nonneg],
exact norm_set_integral_le_of_norm_le_const_ae' hΞΌs h_norm (hfm.sub ae_strongly_measurable_const)
end
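In conventional asymptotic notation, the conclusion of the lemma above (with the default choice `m i = (μ (s i)).to_real`) reads, along the filter `li`:

```latex
\int_{s_i} f \, d\mu \;=\; \mu(s_i)\, b \;+\; o\bigl(\mu(s_i)\bigr)
\qquad \text{along } li .
```

Here `b` is the limit of `f` at the filter `l ⊓ μ.ae`, and the little-o is measured against the real number `(μ (s i)).to_real`.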
/-- Fundamental theorem of calculus for set integrals, `nhds_within` version: if `ΞΌ` is a locally
finite measure and `f` is an almost everywhere measurable function that is continuous at a point `a`
within a measurable set `t`, then `β« x in s i, f x βΞΌ = ΞΌ (s i) β’ f a + o(ΞΌ (s i))` at a filter `li`
provided that `s i` tends to `(π[t] a).small_sets` along `li`. Since `ΞΌ (s i)` is an `ββ₯0β`
number, we use `(ΞΌ (s i)).to_real` in the actual statement.
Often there is a good formula for `(ΞΌ (s i)).to_real`, so the formalization can take an optional
argument `m` with this formula and a proof of `(Ξ» i, (ΞΌ (s i)).to_real) =αΆ [li] m`. Without these
arguments, `m i = (ΞΌ (s i)).to_real` is used in the output. -/
lemma continuous_within_at.integral_sub_linear_is_o_ae
[topological_space Ξ±] [opens_measurable_space Ξ±]
[normed_space β E] [complete_space E]
{ΞΌ : measure Ξ±} [is_locally_finite_measure ΞΌ] {a : Ξ±} {t : set Ξ±}
{f : Ξ± β E} (ha : continuous_within_at f t a) (ht : measurable_set t)
(hfm : strongly_measurable_at_filter f (π[t] a) ΞΌ)
{s : ΞΉ β set Ξ±} {li : filter ΞΉ} (hs : tendsto s li (π[t] a).small_sets)
(m : ΞΉ β β := Ξ» i, (ΞΌ (s i)).to_real)
(hsΞΌ : (Ξ» i, (ΞΌ (s i)).to_real) =αΆ [li] m . tactic.interactive.refl) :
(Ξ» i, β« x in s i, f x βΞΌ - m i β’ f a) =o[li] m :=
by haveI : (π[t] a).is_measurably_generated := ht.nhds_within_is_measurably_generated _;
exact (ha.mono_left inf_le_left).integral_sub_linear_is_o_ae
hfm (ΞΌ.finite_at_nhds_within a t) hs m hsΞΌ
/-- Fundamental theorem of calculus for set integrals, `nhds` version: if `ΞΌ` is a locally finite
measure and `f` is an almost everywhere measurable function that is continuous at a point `a`, then
`β« x in s i, f x βΞΌ = ΞΌ (s i) β’ f a + o(ΞΌ (s i))` at `li` provided that `s` tends to
`(π a).small_sets` along `li`. Since `ΞΌ (s i)` is an `ββ₯0β` number, we use `(ΞΌ (s i)).to_real` in
the actual statement.
Often there is a good formula for `(ΞΌ (s i)).to_real`, so the formalization can take an optional
argument `m` with this formula and a proof of `(Ξ» i, (ΞΌ (s i)).to_real) =αΆ [li] m`. Without these
arguments, `m i = (ΞΌ (s i)).to_real` is used in the output. -/
lemma continuous_at.integral_sub_linear_is_o_ae
[topological_space Ξ±] [opens_measurable_space Ξ±]
[normed_space β E] [complete_space E]
{ΞΌ : measure Ξ±} [is_locally_finite_measure ΞΌ] {a : Ξ±}
{f : Ξ± β E} (ha : continuous_at f a) (hfm : strongly_measurable_at_filter f (π a) ΞΌ)
{s : ΞΉ β set Ξ±} {li : filter ΞΉ} (hs : tendsto s li (π a).small_sets)
(m : ΞΉ β β := Ξ» i, (ΞΌ (s i)).to_real)
(hsΞΌ : (Ξ» i, (ΞΌ (s i)).to_real) =αΆ [li] m . tactic.interactive.refl) :
(Ξ» i, β« x in s i, f x βΞΌ - m i β’ f a) =o[li] m :=
(ha.mono_left inf_le_left).integral_sub_linear_is_o_ae hfm (ΞΌ.finite_at_nhds a) hs m hsΞΌ
/-- Fundamental theorem of calculus for set integrals, `nhds_within` version: if `ΞΌ` is a locally
finite measure, `f` is continuous on a measurable set `t`, and `a β t`, then `β« x in (s i), f x βΞΌ =
ΞΌ (s i) β’ f a + o(ΞΌ (s i))` at `li` provided that `s i` tends to `(π[t] a).small_sets` along `li`.
Since `ΞΌ (s i)` is an `ββ₯0β` number, we use `(ΞΌ (s i)).to_real` in the actual statement.
Often there is a good formula for `(ΞΌ (s i)).to_real`, so the formalization can take an optional
argument `m` with this formula and a proof of `(Ξ» i, (ΞΌ (s i)).to_real) =αΆ [li] m`. Without these
arguments, `m i = (ΞΌ (s i)).to_real` is used in the output. -/
lemma continuous_on.integral_sub_linear_is_o_ae
[topological_space Ξ±] [opens_measurable_space Ξ±]
[normed_space β E] [complete_space E] [second_countable_topology_either Ξ± E]
{ΞΌ : measure Ξ±} [is_locally_finite_measure ΞΌ] {a : Ξ±} {t : set Ξ±}
{f : Ξ± β E} (hft : continuous_on f t) (ha : a β t) (ht : measurable_set t)
{s : ΞΉ β set Ξ±} {li : filter ΞΉ} (hs : tendsto s li (π[t] a).small_sets)
(m : ΞΉ β β := Ξ» i, (ΞΌ (s i)).to_real)
(hsΞΌ : (Ξ» i, (ΞΌ (s i)).to_real) =αΆ [li] m . tactic.interactive.refl) :
(Ξ» i, β« x in s i, f x βΞΌ - m i β’ f a) =o[li] m :=
(hft a ha).integral_sub_linear_is_o_ae ht
β¨t, self_mem_nhds_within, hft.ae_strongly_measurable htβ© hs m hsΞΌ
section
/-! ### Continuous linear maps composed with integration
The goal of this section is to prove that integration commutes with continuous linear maps.
This holds for simple functions. The general result follows from the continuity of all involved
operations on the space `LΒΉ`. Note that composition by a continuous linear map on `LΒΉ` is not just
the composition, as we are dealing with classes of functions, but it has already been defined
as `continuous_linear_map.comp_Lp`. We take advantage of this construction here.
-/
open_locale complex_conjugate
variables {ΞΌ : measure Ξ±} {π : Type*} [is_R_or_C π] [normed_space π E]
[normed_group F] [normed_space π F]
{p : ennreal}
namespace continuous_linear_map
variables [complete_space F] [normed_space β F]
lemma integral_comp_Lp (L : E βL[π] F) (Ο : Lp E p ΞΌ) :
β« a, (L.comp_Lp Ο) a βΞΌ = β« a, L (Ο a) βΞΌ :=
integral_congr_ae $ coe_fn_comp_Lp _ _
lemma set_integral_comp_Lp (L : E βL[π] F) (Ο : Lp E p ΞΌ) {s : set Ξ±} (hs : measurable_set s) :
β« a in s, (L.comp_Lp Ο) a βΞΌ = β« a in s, L (Ο a) βΞΌ :=
set_integral_congr_ae hs ((L.coe_fn_comp_Lp Ο).mono (Ξ» x hx hx2, hx))
lemma continuous_integral_comp_L1 (L : E βL[π] F) :
continuous (Ξ» (Ο : Ξ± ββ[ΞΌ] E), β« (a : Ξ±), L (Ο a) βΞΌ) :=
by { rw β funext L.integral_comp_Lp, exact continuous_integral.comp (L.comp_LpL 1 ΞΌ).continuous, }
variables [complete_space E] [normed_space β E]
lemma integral_comp_comm (L : E βL[π] F) {Ο : Ξ± β E} (Ο_int : integrable Ο ΞΌ) :
β« a, L (Ο a) βΞΌ = L (β« a, Ο a βΞΌ) :=
begin
apply integrable.induction (Ξ» Ο, β« a, L (Ο a) βΞΌ = L (β« a, Ο a βΞΌ)),
{ intros e s s_meas s_finite,
rw [integral_indicator_const e s_meas, β @smul_one_smul E β π _ _ _ _ _ (ΞΌ s).to_real e,
continuous_linear_map.map_smul, @smul_one_smul F β π _ _ _ _ _ (ΞΌ s).to_real (L e),
β integral_indicator_const (L e) s_meas],
congr' 1 with a,
rw set.indicator_comp_of_zero L.map_zero },
{ intros f g H f_int g_int hf hg,
simp [L.map_add, integral_add f_int g_int,
integral_add (L.integrable_comp f_int) (L.integrable_comp g_int), hf, hg] },
{ exact is_closed_eq L.continuous_integral_comp_L1 (L.continuous.comp continuous_integral) },
{ intros f g hfg f_int hf,
convert hf using 1 ; clear hf,
{ exact integral_congr_ae (hfg.fun_comp L).symm },
{ rw integral_congr_ae hfg.symm } },
all_goals { assumption }
end
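A minimal usage sketch of the lemma just proved; the names `L`, `φ`, and the integrability hypothesis `hφ` are illustrative assumptions, not part of the surrounding file:

```lean
-- Sketch: a continuous linear map commutes with the Bochner integral of an
-- integrable function. `L`, `φ`, `hφ` are hypothetical assumptions.
example (L : E →L[𝕜] F) {φ : α → E} (hφ : integrable φ μ) :
  ∫ a, L (φ a) ∂μ = L (∫ a, φ a ∂μ) :=
L.integral_comp_comm hφ
```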
lemma integral_apply {H : Type*} [normed_group H] [normed_space π H]
{Ο : Ξ± β H βL[π] E} (Ο_int : integrable Ο ΞΌ) (v : H) :
(β« a, Ο a βΞΌ) v = β« a, Ο a v βΞΌ :=
((continuous_linear_map.apply π E v).integral_comp_comm Ο_int).symm
lemma integral_comp_comm' (L : E βL[π] F) {K} (hL : antilipschitz_with K L) (Ο : Ξ± β E) :
β« a, L (Ο a) βΞΌ = L (β« a, Ο a βΞΌ) :=
begin
by_cases h : integrable Ο ΞΌ,
{ exact integral_comp_comm L h },
have : Β¬ (integrable (L β Ο) ΞΌ),
by rwa lipschitz_with.integrable_comp_iff_of_antilipschitz L.lipschitz hL (L.map_zero),
simp [integral_undef, h, this]
end
lemma integral_comp_L1_comm (L : E βL[π] F) (Ο : Ξ± ββ[ΞΌ] E) : β« a, L (Ο a) βΞΌ = L (β« a, Ο a βΞΌ) :=
L.integral_comp_comm (L1.integrable_coe_fn Ο)
end continuous_linear_map
namespace linear_isometry
variables [complete_space F] [normed_space β F] [complete_space E] [normed_space β E]
lemma integral_comp_comm (L : E ββα΅’[π] F) (Ο : Ξ± β E) : β« a, L (Ο a) βΞΌ = L (β« a, Ο a βΞΌ) :=
L.to_continuous_linear_map.integral_comp_comm' L.antilipschitz _
end linear_isometry
variables [complete_space E] [normed_space β E] [complete_space F] [normed_space β F]
@[norm_cast] lemma integral_of_real {f : Ξ± β β} : β« a, (f a : π) βΞΌ = ββ« a, f a βΞΌ :=
(@is_R_or_C.of_real_li π _).integral_comp_comm f
lemma integral_re {f : Ξ± β π} (hf : integrable f ΞΌ) :
β« a, is_R_or_C.re (f a) βΞΌ = is_R_or_C.re β« a, f a βΞΌ :=
(@is_R_or_C.re_clm π _).integral_comp_comm hf
lemma integral_im {f : Ξ± β π} (hf : integrable f ΞΌ) :
β« a, is_R_or_C.im (f a) βΞΌ = is_R_or_C.im β« a, f a βΞΌ :=
(@is_R_or_C.im_clm π _).integral_comp_comm hf
lemma integral_conj {f : Ξ± β π} : β« a, conj (f a) βΞΌ = conj β« a, f a βΞΌ :=
(@is_R_or_C.conj_lie π _).to_linear_isometry.integral_comp_comm f
lemma integral_coe_re_add_coe_im {f : Ξ± β π} (hf : integrable f ΞΌ) :
β« x, (is_R_or_C.re (f x) : π) βΞΌ + β« x, is_R_or_C.im (f x) βΞΌ * is_R_or_C.I = β« x, f x βΞΌ :=
begin
rw [mul_comm, β smul_eq_mul, β integral_smul, β integral_add],
{ congr,
ext1 x,
rw [smul_eq_mul, mul_comm, is_R_or_C.re_add_im] },
{ exact hf.re.of_real },
{ exact hf.im.of_real.smul is_R_or_C.I }
end
lemma integral_re_add_im {f : Ξ± β π} (hf : integrable f ΞΌ) :
((β« x, is_R_or_C.re (f x) βΞΌ : β) : π) + (β« x, is_R_or_C.im (f x) βΞΌ : β) * is_R_or_C.I =
β« x, f x βΞΌ :=
by { rw [β integral_of_real, β integral_of_real, integral_coe_re_add_coe_im hf] }
lemma set_integral_re_add_im {f : Ξ± β π} {i : set Ξ±} (hf : integrable_on f i ΞΌ) :
((β« x in i, is_R_or_C.re (f x) βΞΌ : β) : π) +
(β« x in i, is_R_or_C.im (f x) βΞΌ : β) * is_R_or_C.I = β« x in i, f x βΞΌ :=
integral_re_add_im hf
lemma fst_integral {f : Ξ± β E Γ F} (hf : integrable f ΞΌ) :
(β« x, f x βΞΌ).1 = β« x, (f x).1 βΞΌ :=
((continuous_linear_map.fst β E F).integral_comp_comm hf).symm
lemma snd_integral {f : Ξ± β E Γ F} (hf : integrable f ΞΌ) :
(β« x, f x βΞΌ).2 = β« x, (f x).2 βΞΌ :=
((continuous_linear_map.snd β E F).integral_comp_comm hf).symm
lemma integral_pair {f : Ξ± β E} {g : Ξ± β F} (hf : integrable f ΞΌ) (hg : integrable g ΞΌ) :
β« x, (f x, g x) βΞΌ = (β« x, f x βΞΌ, β« x, g x βΞΌ) :=
have _ := hf.prod_mk hg, prod.ext (fst_integral this) (snd_integral this)
lemma integral_smul_const {π : Type*} [is_R_or_C π] [normed_space π E] (f : Ξ± β π) (c : E) :
β« x, f x β’ c βΞΌ = (β« x, f x βΞΌ) β’ c :=
begin
by_cases hf : integrable f ΞΌ,
{ exact ((1 : π βL[π] π).smul_right c).integral_comp_comm hf },
{ by_cases hc : c = 0,
{ simp only [hc, integral_zero, smul_zero] },
rw [integral_undef hf, integral_undef, zero_smul],
simp_rw [integrable_smul_const hc, hf, not_false_iff] }
end
section inner
variables {E' : Type*} [inner_product_space π E'] [complete_space E'] [normed_space β E']
local notation `βͺ`x`, `y`β«` := @inner π E' _ x y
lemma integral_inner {f : Ξ± β E'} (hf : integrable f ΞΌ) (c : E') :
β« x, βͺc, f xβ« βΞΌ = βͺc, β« x, f x βΞΌβ« :=
((@innerSL π E' _ _ c).restrict_scalars β).integral_comp_comm hf
lemma integral_eq_zero_of_forall_integral_inner_eq_zero (f : Ξ± β E') (hf : integrable f ΞΌ)
(hf_int : β (c : E'), β« x, βͺc, f xβ« βΞΌ = 0) :
β« x, f x βΞΌ = 0 :=
by { specialize hf_int (β« x, f x βΞΌ), rwa [integral_inner hf, inner_self_eq_zero] at hf_int }
end inner
lemma integral_with_density_eq_integral_smul
{f : Ξ± β ββ₯0} (f_meas : measurable f) (g : Ξ± β E) :
β« a, g a β(ΞΌ.with_density (Ξ» x, f x)) = β« a, f a β’ g a βΞΌ :=
begin
by_cases hg : integrable g (ΞΌ.with_density (Ξ» x, f x)), swap,
{ rw [integral_undef hg, integral_undef],
rwa [β integrable_with_density_iff_integrable_smul f_meas];
apply_instance },
refine integrable.induction _ _ _ _ _ hg,
{ assume c s s_meas hs,
rw integral_indicator s_meas,
simp_rw [β indicator_smul_apply, integral_indicator s_meas],
simp only [s_meas, integral_const, measure.restrict_apply', univ_inter, with_density_apply],
rw [lintegral_coe_eq_integral, ennreal.to_real_of_real, β integral_smul_const],
{ refl },
{ exact integral_nonneg (Ξ» x, nnreal.coe_nonneg _) },
{ refine β¨(f_meas.coe_nnreal_real).ae_measurable.ae_strongly_measurable, _β©,
rw with_density_apply _ s_meas at hs,
rw has_finite_integral,
convert hs,
ext1 x,
simp only [nnreal.nnnorm_eq] } },
{ assume u u' h_disj u_int u'_int h h',
change β« (a : Ξ±), (u a + u' a) βΞΌ.with_density (Ξ» (x : Ξ±), β(f x)) =
β« (a : Ξ±), f a β’ (u a + u' a) βΞΌ,
simp_rw [smul_add],
rw [integral_add u_int u'_int, h, h', integral_add],
{ exact (integrable_with_density_iff_integrable_smul f_meas).1 u_int },
{ exact (integrable_with_density_iff_integrable_smul f_meas).1 u'_int } },
{ have C1 : continuous (Ξ» (u : Lp E 1 (ΞΌ.with_density (Ξ» x, f x))),
β« x, u x β(ΞΌ.with_density (Ξ» x, f x))) := continuous_integral,
have C2 : continuous (Ξ» (u : Lp E 1 (ΞΌ.with_density (Ξ» x, f x))),
β« x, f x β’ u x βΞΌ),
{ have : continuous ((Ξ» (u : Lp E 1 ΞΌ), β« x, u x βΞΌ) β (with_density_smul_li ΞΌ f_meas)) :=
continuous_integral.comp (with_density_smul_li ΞΌ f_meas).continuous,
convert this,
ext1 u,
simp only [function.comp_app, with_density_smul_li_apply],
exact integral_congr_ae (mem_β1_smul_of_L1_with_density f_meas u).coe_fn_to_Lp.symm },
exact is_closed_eq C1 C2 },
{ assume u v huv u_int hu,
rw [β integral_congr_ae huv, hu],
apply integral_congr_ae,
filter_upwards [(ae_with_density_iff f_meas.coe_nnreal_ennreal).1 huv] with x hx,
rcases eq_or_ne (f x) 0 with h'x|h'x,
{ simp only [h'x, zero_smul]},
{ rw [hx _],
simpa only [ne.def, ennreal.coe_eq_zero] using h'x } }
end
lemma integral_with_density_eq_integral_smulβ
{f : Ξ± β ββ₯0} (hf : ae_measurable f ΞΌ) (g : Ξ± β E) :
β« a, g a β(ΞΌ.with_density (Ξ» x, f x)) = β« a, f a β’ g a βΞΌ :=
begin
let f' := hf.mk _,
calc β« a, g a β(ΞΌ.with_density (Ξ» x, f x))
= β« a, g a β(ΞΌ.with_density (Ξ» x, f' x)) :
begin
congr' 1,
apply with_density_congr_ae,
filter_upwards [hf.ae_eq_mk] with x hx,
rw hx,
end
... = β« a, f' a β’ g a βΞΌ : integral_with_density_eq_integral_smul hf.measurable_mk _
... = β« a, f a β’ g a βΞΌ :
begin
apply integral_congr_ae,
filter_upwards [hf.ae_eq_mk] with x hx,
rw hx,
end
end
lemma set_integral_with_density_eq_set_integral_smul
{f : Ξ± β ββ₯0} (f_meas : measurable f) (g : Ξ± β E) {s : set Ξ±} (hs : measurable_set s) :
β« a in s, g a β(ΞΌ.with_density (Ξ» x, f x)) = β« a in s, f a β’ g a βΞΌ :=
by rw [restrict_with_density hs, integral_with_density_eq_integral_smul f_meas]
lemma set_integral_with_density_eq_set_integral_smulβ {f : Ξ± β ββ₯0} {s : set Ξ±}
(hf : ae_measurable f (ΞΌ.restrict s)) (g : Ξ± β E) (hs : measurable_set s) :
β« a in s, g a β(ΞΌ.with_density (Ξ» x, f x)) = β« a in s, f a β’ g a βΞΌ :=
by rw [restrict_with_density hs, integral_with_density_eq_integral_smulβ hf]
end
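Informally, the `with_density` lemmas above are the change-of-density identity: for an `ℝ≥0`-valued density `f` and a vector-valued `g`,

```latex
\int_{s} g \, d(f\,\mu) \;=\; \int_{s} f(x)\, g(x) \, d\mu(x),
```

where the product on the right is the scalar multiplication `f a • g a`.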
section thickened_indicator
variables [pseudo_emetric_space Ξ±]
lemma measure_le_lintegral_thickened_indicator_aux
(ΞΌ : measure Ξ±) {E : set Ξ±} (E_mble : measurable_set E) (Ξ΄ : β) :
ΞΌ E β€ β«β» a, (thickened_indicator_aux Ξ΄ E a : ββ₯0β) βΞΌ :=
begin
convert_to lintegral ΞΌ (E.indicator (Ξ» _, (1 : ββ₯0β)))
β€ lintegral ΞΌ (thickened_indicator_aux Ξ΄ E),
{ rw [lintegral_indicator _ E_mble],
simp only [lintegral_one, measure.restrict_apply, measurable_set.univ, univ_inter], },
{ apply lintegral_mono,
apply indicator_le_thickened_indicator_aux, },
end
lemma measure_le_lintegral_thickened_indicator
(ΞΌ : measure Ξ±) {E : set Ξ±} (E_mble : measurable_set E) {Ξ΄ : β} (Ξ΄_pos : 0 < Ξ΄) :
ΞΌ E β€ β«β» a, (thickened_indicator Ξ΄_pos E a : ββ₯0β) βΞΌ :=
begin
convert measure_le_lintegral_thickened_indicator_aux ΞΌ E_mble Ξ΄,
dsimp,
simp only [thickened_indicator_aux_lt_top.ne, ennreal.coe_to_nnreal, ne.def, not_false_iff],
end
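Stated informally, the lemmas in this section bound the measure of a measurable set `E` by the lower integral of its thickened indicator, since the thickened indicator dominates the plain indicator pointwise:

```latex
\mu(E) \;=\; \int \mathbf 1_{E} \, d\mu \;\le\; \int \mathbf 1_{E}^{\delta} \, d\mu ,
```

with `𝟙_E^δ` standing for `thickened_indicator δ E` (for any `δ > 0`).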
end thickened_indicator
Jimmy Choo's fall '17 men's collection, shown today during Milan Fashion Week, focused on the idea of clashing. It aimed to combine unexpected materials and to fuse categories together, such as using dressy finishes on more casual styles.
In addition to introducing new hybrid shoes, like a velvet slip-on sneaker that resembled a smoking slipper, the new offering made sure to tick all the right shoe trends, too, using shearlings and corduroys as well as introducing sock-fit sneakers.
Jimmy Choo fall '17 collection.
Compared with its previous seasons, the collection here felt much more cohesive overall. Though it cast a wide net in terms of treatments and styles, the brand made an effort to carry them across all of the footwear categories.
For more fall shoes, click through the gallery.
[STATEMENT]
lemma OR4_analz_knows_Spy:
"[| Gets B \<lbrace>N, X, Crypt (shrK B) X'\<rbrace> \<in> set evs; evs \<in> otway |]
==> X \<in> analz (knows Spy evs)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>Gets B \<lbrace>N, X, Crypt (shrK B) X'\<rbrace> \<in> set evs; evs \<in> otway\<rbrakk> \<Longrightarrow> X \<in> analz (knows Spy evs)
[PROOF STEP]
by blast
State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
β’ β (x y : Cofix F), r x y β x = y State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : Cofix F
β’ β (y : Cofix F), r x y β x = y Tactic: intro x State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : Cofix F
β’ β (y : Cofix F), r x y β x = y State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : Cofix F
β’ β (a : PFunctor.M (P F)) (y : Cofix F), r (Quot.mk Mcongr a) y β Quot.mk Mcongr a = y Tactic: apply Quot.inductionOn (motive := _) x State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : Cofix F
β’ β (a : PFunctor.M (P F)) (y : Cofix F), r (Quot.mk Mcongr a) y β Quot.mk Mcongr a = y State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
β’ β (a : PFunctor.M (P F)) (y : Cofix F), r (Quot.mk Mcongr a) y β Quot.mk Mcongr a = y Tactic: clear x State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
β’ β (a : PFunctor.M (P F)) (y : Cofix F), r (Quot.mk Mcongr a) y β Quot.mk Mcongr a = y State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : PFunctor.M (P F)
y : Cofix F
β’ r (Quot.mk Mcongr x) y β Quot.mk Mcongr x = y Tactic: intro x y State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : PFunctor.M (P F)
y : Cofix F
β’ r (Quot.mk Mcongr x) y β Quot.mk Mcongr x = y State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : PFunctor.M (P F)
y : Cofix F
β’ β (a : PFunctor.M (P F)), r (Quot.mk Mcongr x) (Quot.mk Mcongr a) β Quot.mk Mcongr x = Quot.mk Mcongr a Tactic: apply Quot.inductionOn (motive := _) y State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : PFunctor.M (P F)
y : Cofix F
β’ β (a : PFunctor.M (P F)), r (Quot.mk Mcongr x) (Quot.mk Mcongr a) β Quot.mk Mcongr x = Quot.mk Mcongr a State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : PFunctor.M (P F)
β’ β (a : PFunctor.M (P F)), r (Quot.mk Mcongr x) (Quot.mk Mcongr a) β Quot.mk Mcongr x = Quot.mk Mcongr a Tactic: clear y State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x : PFunctor.M (P F)
β’ β (a : PFunctor.M (P F)), r (Quot.mk Mcongr x) (Quot.mk Mcongr a) β Quot.mk Mcongr x = Quot.mk Mcongr a State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
β’ Quot.mk Mcongr x = Quot.mk Mcongr y Tactic: intro y rxy State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
β’ Quot.mk Mcongr x = Quot.mk Mcongr y State After: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
β’ Mcongr x y Tactic: apply Quot.sound State Before: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
β’ Mcongr x y State After: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
β’ Mcongr x y Tactic: let r' x y := r (Quot.mk _ x) (Quot.mk _ y) State Before: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
β’ Mcongr x y State After: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
this : IsPrecongr r'
β’ Mcongr x y Tactic: have : IsPrecongr r' := by
intro a b r'ab
have hβ :
Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) =
Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b) :=
h _ _ r'ab
have hβ : β u v : q.P.M, Mcongr u v β Quot.mk r' u = Quot.mk r' v := by
intro u v cuv
apply Quot.sound
simp only
rw [Quot.sound cuv]
apply h'
let f : Quot r β Quot r' :=
Quot.lift (Quot.lift (Quot.mk r') hβ)
(by
intro c; apply Quot.inductionOn (motive := _) c; clear c
intro c d; apply Quot.inductionOn (motive := _) d; clear d
intro d rcd; apply Quot.sound; apply rcd)
have : f β Quot.mk r β Quot.mk Mcongr = Quot.mk r' := rfl
rw [β this, PFunctor.comp_map _ _ f, PFunctor.comp_map _ _ (Quot.mk r), abs_map, abs_map,
abs_map, hβ]
rw [PFunctor.comp_map _ _ f, PFunctor.comp_map _ _ (Quot.mk r), abs_map, abs_map, abs_map] State Before: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
this : IsPrecongr r'
β’ Mcongr x y State After: no goals Tactic: refine' β¨r', this, rxyβ© State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
β’ IsPrecongr r' State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) Tactic: intro a b r'ab State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) Tactic: have hβ :
Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) =
Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b) :=
h _ _ r'ab State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) Tactic: have hβ : β u v : q.P.M, Mcongr u v β Quot.mk r' u = Quot.mk r' v := by
intro u v cuv
apply Quot.sound
simp only
rw [Quot.sound cuv]
apply h' State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
f : Quot r β Quot r' :=
Quot.lift (Quot.lift (Quot.mk r') hβ)
(_ : β (c b : Cofix F), r c b β Quot.lift (Quot.mk r') hβ c = Quot.lift (Quot.mk r') hβ b)
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) Tactic: let f : Quot r β Quot r' :=
Quot.lift (Quot.lift (Quot.mk r') hβ)
(by
intro c; apply Quot.inductionOn (motive := _) c; clear c
intro c d; apply Quot.inductionOn (motive := _) d; clear d
intro d rcd; apply Quot.sound; apply rcd) State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
f : Quot r β Quot r' :=
Quot.lift (Quot.lift (Quot.mk r') hβ)
(_ : β (c b : Cofix F), r c b β Quot.lift (Quot.mk r') hβ c = Quot.lift (Quot.mk r') hβ b)
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
f : Quot r β Quot r' :=
Quot.lift (Quot.lift (Quot.mk r') hβ)
(_ : β (c b : Cofix F), r c b β Quot.lift (Quot.mk r') hβ c = Quot.lift (Quot.mk r') hβ b)
this : f β Quot.mk r β Quot.mk Mcongr = Quot.mk r'
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) Tactic: have : f β Quot.mk r β Quot.mk Mcongr = Quot.mk r' := rfl State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
f : Quot r β Quot r' :=
Quot.lift (Quot.lift (Quot.mk r') hβ)
(_ : β (c b : Cofix F), r c b β Quot.lift (Quot.mk r') hβ c = Quot.lift (Quot.mk r') hβ b)
this : f β Quot.mk r β Quot.mk Mcongr = Quot.mk r'
β’ abs (Quot.mk r' <$> PFunctor.M.dest a) = abs (Quot.mk r' <$> PFunctor.M.dest b) State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
f : Quot r β Quot r' :=
Quot.lift (Quot.lift (Quot.mk r') hβ)
(_ : β (c b : Cofix F), r c b β Quot.lift (Quot.mk r') hβ c = Quot.lift (Quot.mk r') hβ b)
this : f β Quot.mk r β Quot.mk Mcongr = Quot.mk r'
β’ f <$> Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b) =
abs ((f β Quot.mk r β Quot.mk Mcongr) <$> PFunctor.M.dest b) Tactic: rw [β this, PFunctor.comp_map _ _ f, PFunctor.comp_map _ _ (Quot.mk r), abs_map, abs_map,
abs_map, hβ] State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
f : Quot r β Quot r' :=
Quot.lift (Quot.lift (Quot.mk r') hβ)
(_ : β (c b : Cofix F), r c b β Quot.lift (Quot.mk r') hβ c = Quot.lift (Quot.mk r') hβ b)
this : f β Quot.mk r β Quot.mk Mcongr = Quot.mk r'
β’ f <$> Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b) =
abs ((f β Quot.mk r β Quot.mk Mcongr) <$> PFunctor.M.dest b) State After: no goals Tactic: rw [PFunctor.comp_map _ _ f, PFunctor.comp_map _ _ (Quot.mk r), abs_map, abs_map, abs_map] State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
β’ β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
u v : PFunctor.M (P F)
cuv : Mcongr u v
β’ Quot.mk r' u = Quot.mk r' v Tactic: intro u v cuv State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
u v : PFunctor.M (P F)
cuv : Mcongr u v
β’ Quot.mk r' u = Quot.mk r' v State After: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
u v : PFunctor.M (P F)
cuv : Mcongr u v
β’ r' u v Tactic: apply Quot.sound State Before: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
u v : PFunctor.M (P F)
cuv : Mcongr u v
β’ r' u v State After: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
u v : PFunctor.M (P F)
cuv : Mcongr u v
β’ r (Quot.mk Mcongr u) (Quot.mk Mcongr v) Tactic: simp only State Before: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
u v : PFunctor.M (P F)
cuv : Mcongr u v
β’ r (Quot.mk Mcongr u) (Quot.mk Mcongr v) State After: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
u v : PFunctor.M (P F)
cuv : Mcongr u v
β’ r (Quot.mk Mcongr v) (Quot.mk Mcongr v) Tactic: rw [Quot.sound cuv] State Before: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
u v : PFunctor.M (P F)
cuv : Mcongr u v
β’ r (Quot.mk Mcongr v) (Quot.mk Mcongr v) State After: no goals Tactic: apply h' State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
β’ β (a b : Cofix F), r a b β Quot.lift (Quot.mk r') hβ a = Quot.lift (Quot.mk r') hβ b State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : Cofix F
β’ β (b : Cofix F), r c b β Quot.lift (Quot.mk r') hβ c = Quot.lift (Quot.mk r') hβ b Tactic: intro c State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : Cofix F
β’ β (b : Cofix F), r c b β Quot.lift (Quot.mk r') hβ c = Quot.lift (Quot.mk r') hβ b State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : Cofix F
β’ β (a : PFunctor.M (P F)) (b : Cofix F),
r (Quot.mk Mcongr a) b β Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr a) = Quot.lift (Quot.mk r') hβ b Tactic: apply Quot.inductionOn (motive := _) c State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : Cofix F
β’ β (a : PFunctor.M (P F)) (b : Cofix F),
r (Quot.mk Mcongr a) b β Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr a) = Quot.lift (Quot.mk r') hβ b State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
β’ β (a : PFunctor.M (P F)) (b : Cofix F),
r (Quot.mk Mcongr a) b β Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr a) = Quot.lift (Quot.mk r') hβ b Tactic: clear c State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
β’ β (a : PFunctor.M (P F)) (b : Cofix F),
r (Quot.mk Mcongr a) b β Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr a) = Quot.lift (Quot.mk r') hβ b State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : PFunctor.M (P F)
d : Cofix F
β’ r (Quot.mk Mcongr c) d β Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr c) = Quot.lift (Quot.mk r') hβ d Tactic: intro c d State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : PFunctor.M (P F)
d : Cofix F
β’ r (Quot.mk Mcongr c) d β Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr c) = Quot.lift (Quot.mk r') hβ d State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : PFunctor.M (P F)
d : Cofix F
β’ β (a : PFunctor.M (P F)),
r (Quot.mk Mcongr c) (Quot.mk Mcongr a) β
Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr c) = Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr a) Tactic: apply Quot.inductionOn (motive := _) d State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : PFunctor.M (P F)
d : Cofix F
β’ β (a : PFunctor.M (P F)),
r (Quot.mk Mcongr c) (Quot.mk Mcongr a) β
Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr c) = Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr a) State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : PFunctor.M (P F)
β’ β (a : PFunctor.M (P F)),
r (Quot.mk Mcongr c) (Quot.mk Mcongr a) β
Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr c) = Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr a) Tactic: clear d State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c : PFunctor.M (P F)
β’ β (a : PFunctor.M (P F)),
r (Quot.mk Mcongr c) (Quot.mk Mcongr a) β
Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr c) = Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr a) State After: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c d : PFunctor.M (P F)
rcd : r (Quot.mk Mcongr c) (Quot.mk Mcongr d)
β’ Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr c) = Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr d) Tactic: intro d rcd State Before: F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c d : PFunctor.M (P F)
rcd : r (Quot.mk Mcongr c) (Quot.mk Mcongr d)
β’ Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr c) = Quot.lift (Quot.mk r') hβ (Quot.mk Mcongr d) State After: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c d : PFunctor.M (P F)
rcd : r (Quot.mk Mcongr c) (Quot.mk Mcongr d)
β’ r' c d Tactic: apply Quot.sound State Before: case a
F : Type u β Type u
instβ : Functor F
q : Qpf F
r : Cofix F β Cofix F β Prop
h' : β (x : Cofix F), r x x
h : β (x y : Cofix F), r x y β Quot.mk r <$> dest x = Quot.mk r <$> dest y
x y : PFunctor.M (P F)
rxy : r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
r' : PFunctor.M (P F) β PFunctor.M (P F) β Prop := fun x y => r (Quot.mk Mcongr x) (Quot.mk Mcongr y)
a b : PFunctor.M (P F)
r'ab : r' a b
hβ : Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest a) = Quot.mk r <$> Quot.mk Mcongr <$> abs (PFunctor.M.dest b)
hβ : β (u v : PFunctor.M (P F)), Mcongr u v β Quot.mk r' u = Quot.mk r' v
c d : PFunctor.M (P F)
rcd : r (Quot.mk Mcongr c) (Quot.mk Mcongr d)
β’ r' c d State After: no goals Tactic: apply rcd |
Require Import Paco.paco.
From ITree Require Import ITree.
Require Import sflib.
Require Import ITreeTac.
Require Import Axioms.
Require Import StdlibExt.
Require Import Streams.
Require Import List.
Require Import Arith ZArith Lia.
(* Set Implicit Arguments. *)
Set Nested Proofs Allowed.
(* Lemma stream_app_eq_prefix_eq A *)
(* pref (s1 s2: stream A) *)
(* : s1 = s2 <-> stream_app pref s1 = stream_app pref s2. *)
(* Proof. *)
(* induction pref as [| h t IH]; ss. *)
(* split. *)
(* - intro TEQ. subst. clarify. *)
(* - intro CONS_EQ. clarify. *)
(* apply IH. eauto. *)
(* Qed. *)
(* Lemma deopt_list_Some_length A *)
(* (l: list A?) (l': list A) *)
(* (DEOPT: deopt_list l = Some l') *)
(* : length l' = length l. *)
(* Proof. *)
(* depgen l'. *)
(* induction l as [| h t IH]; i; ss. *)
(* { clarify. } *)
(* destruct h; desf. *)
(* hexploit IH; eauto. i. ss. *)
(* congruence. *)
(* Qed. *)
(** * Basic Event Definitions *)
(** InteractionTree-style events *)
(* event call (without response) *)
Inductive event_call (E: Type -> Type): Type :=
EventCall: forall R (e: E R), event_call E.
(* visible event (with response) *)
Inductive event (E: Type -> Type): Type :=
Event: forall {R} (e: E R) (r: R), event E.
Definition events (E: Type -> Type): Type :=
list (event E).
Arguments EventCall {E R}.
Arguments Event {E R}.
(* A special event type that cancels execution *)
Inductive nbE: Type -> Type :=
| NobehEvent : nbE unit
.
Inductive errE: Type -> Type :=
| ErrorEvent : errE unit
.
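
(* Illustrative only (not from the original source): packaging an event with
   its response.  [NobehEvent] answers in [unit], so [tt] is the only
   possible response value. *)
Example event_example : event nbE := Event NobehEvent tt.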
Instance embed_event_call {E1 E2}
`{E1 -< E2}
: Embeddable (event_call E1) (event_call E2) :=
fun ec1 =>
match ec1 with
| EventCall e =>
EventCall (subevent _ e)
end.
Instance embed_events {E1 E2}
`{E1 -< E2}
: Embeddable (event E1) (event E2) :=
fun ev1 =>
match ev1 with
| Event e r =>
Event (subevent _ e) r
end.
Instance embed_list {X Y}
`{Embeddable X Y}
: Embeddable (list X) (list Y) :=
List.map (fun x => embed x).
Instance embed_id A
: Embeddable A A := id.
Instance embed_pair {A A' B B'}
`{Embeddable A A'}
`{Embeddable B B'}
: Embeddable (A * B) (A' * B') :=
fun x => (embed (fst x), embed (snd x)).
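
(* A quick sanity check, ours for illustration: with [embed_id] and
   [embed_pair], embedding a pair is componentwise and here computes to the
   identity. *)
Example embed_pair_example : (embed (1, true) : nat * bool) = (1, true).
Proof. reflexivity. Qed.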
Class EventRetInhabit (E: Type -> Type): Prop :=
{ event_return_inhabit
: forall R (e: E R), exists r: R, True ; }.
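
(* A sketch of an instance (ours, not from the original source): [nbE]
   inhabits all of its return types, since its only event returns [unit]. *)
Instance nbE_ret_inhabit : EventRetInhabit nbE.
Proof.
  constructor. intros R e. destruct e. exists tt. trivial.
Qed.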
(** * System Semantics *)
(* timestamp type *)
Notation tsp := Z (only parsing).
Definition flatten_te {sysE: Type -> Type}
(ftr1: tsp * events sysE)
: list (tsp * event sysE) :=
map (fun e => (fst ftr1, e)) (snd ftr1).
Definition flatten_tes {sysE: Type -> Type}
(ftr: list (tsp * events sysE))
: list (tsp * event sysE) :=
List.concat (map flatten_te ftr).
Definition tes_equiv {sysE: Type -> Type}
(ftr1 ftr2: list (tsp * events sysE)): Prop :=
flatten_tes ftr1 = flatten_tes ftr2.
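
(* A small illustrative computation (the toy event type [unitE] is ours, for
   this example only): [flatten_tes] tags each event with its entry's
   timestamp, and entries with no events contribute nothing. *)
Inductive unitE : Type -> Type :=
| UnitEvent : unitE unit.

Example flatten_tes_example :
  flatten_tes [(1%Z, [Event UnitEvent tt]); (2%Z, [])]
  = [(1%Z, Event UnitEvent tt)].
Proof. reflexivity. Qed.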
Lemma tes_equiv_trans E
(ft1 ft2 ft3: list (tsp * events E))
(EQV1: tes_equiv ft1 ft2)
(EQV2: tes_equiv ft2 ft3)
: tes_equiv ft1 ft3.
Proof.
inv EQV1. inv EQV2.
r. congruence.
Qed.
Lemma flatten_tes_app E
(l1 l2: list (tsp * events E))
: flatten_tes (l1 ++ l2) =
flatten_tes l1 ++ flatten_tes l2.
Proof.
unfold flatten_tes.
rewrite map_app.
rewrite concat_app. ss.
Qed.
Lemma Forall2_tes_equiv_trans E
(trsl1 trsl2 trsl3: list (list (tsp * events E)))
(EQV1: Forall2 tes_equiv trsl1 trsl2)
(EQV2: Forall2 tes_equiv trsl2 trsl3)
: Forall2 tes_equiv trsl1 trsl3.
Proof.
depgen trsl3.
induction EQV1; i; ss.
inv EQV2.
econs.
{ eapply tes_equiv_trans; eauto. }
eauto.
Qed.
Lemma tes_when_eff_tstamp_identical E
tsp1 (tr: list (tsp * events E))
(TSP_ID: Forall (fun tes => fst tes = tsp1 \/ snd tes = []) tr)
: flatten_tes tr =
map (fun e => (tsp1, e)) (concat (map snd tr)).
Proof.
unfold flatten_tes.
induction tr as [| h t IH]; ss.
rewrite map_app. ss.
inv TSP_ID.
rewrite IH; eauto.
f_equal.
destruct h as [tsp' es'].
des; ss; subst; ss.
Qed.
Lemma Forall2_tes_equiv_refl E
(trsl: list (list (tsp * events E)))
: Forall2 tes_equiv trsl trsl.
Proof.
induction trsl; ss.
econs; ss.
Qed.
Section BEHAV_DEF.
(* Variable ts_t: Type. (* timestamp *) *)
Variable sysE: Type -> Type. (* system event kind *)
(* concrete behavior: including tau-event *)
Definition lexec_t: Type := stream (tsp * events sysE).
Definition exec_t: Type := list lexec_t.
Definition lbehav_t: Type := colist (tsp * event sysE).
Definition behav_t: Type := list lbehav_t.
(** ** Relations between executions and behaviors *)
Definition inftau_lexec (lexec: lexec_t): Prop :=
Forall_stream (fun x => snd x = []) lexec.
Definition silent_local_trace (tr_loc: list (tsp * events sysE)): Prop :=
Forall (fun x => snd x = []) tr_loc.
Lemma inftau_lexec_pref_iff
(lexec: lexec_t) pref
: <<INFTAU: inftau_lexec lexec>> /\
<<SILENT_PREF: silent_local_trace pref>> <->
inftau_lexec (stream_app pref lexec).
Proof.
split.
- i. des.
induction pref as [|h t IH]; ss.
inv SILENT_PREF.
destruct h as [ts []]; ss.
pfold. econs; ss.
left. apply IH. ss.
- intro INFTAU.
induction pref as [|h t IH]; ss.
{ esplits; ss. }
punfold INFTAU. inv INFTAU.
destruct h as [ts []]; ss.
pclearbot.
hexploit IH; eauto. i. des.
esplits; ss.
econs; ss.
Qed.
Inductive _local_exec_behav
(_leb: lexec_t -> lbehav_t -> Prop)
: lexec_t -> lbehav_t -> Prop :=
| LocalExecBehav_Inftau
lexec
(EXEC_SILENT: inftau_lexec lexec)
: _local_exec_behav _leb lexec cnil
| LocalExecBehav_Events
lexec tr_tau lexec' ts es
beh' beh
(LOCAL_EXEC_PREFIX: lexec = stream_app tr_tau (Cons (ts, es) lexec'))
(TRACE_TAU: silent_local_trace tr_tau)
(EVENTS_NONNIL: es <> [])
(LOCAL_EXEC_BEHAV_REST: _leb lexec' beh')
(BEHAVIOR: beh = colist_app (flatten_te (ts, es)) beh')
: _local_exec_behav _leb lexec beh.
Hint Constructors _local_exec_behav: paco.
Lemma local_exec_behav_monotone: monotone2 _local_exec_behav.
Proof. pmonauto. Qed.
Definition local_exec_behav: lexec_t -> lbehav_t -> Prop :=
paco2 _local_exec_behav bot2.
Definition exec_behav : exec_t -> behav_t -> Prop :=
List.Forall2 local_exec_behav.
Hint Resolve local_exec_behav_monotone: paco.
Lemma not_inftau_ex_events
: forall exec
(NOT_EX: ~ exists taus ts es exec',
exec = stream_app taus (Cons (ts, es) exec') /\
silent_local_trace taus /\
es <> []),
inftau_lexec exec.
Proof.
pcofix CIH. i.
destruct exec as [[ts es] exec'].
pfold.
destruct es; ss.
2: {
exfalso.
eapply NOT_EX.
exists [].
esplits; eauto; ss.
}
econs; eauto; ss.
right.
eapply CIH.
intros (taus' & ts' & es' & exec'' & A & B & C).
apply NOT_EX.
exists ((ts, []) :: taus').
esplits; eauto.
- subst. ss.
- r. econs; ss.
Qed.
Lemma exec_div_ex
(lexec: lexec_t)
: exists ox,
match ox with
| None => inftau_lexec lexec
| Some x =>
let '(tr_tau, lexec', ts, es) := x in
lexec = stream_app tr_tau (Cons (ts, es) lexec') /\
silent_local_trace tr_tau /\
es <> []
end.
Proof.
destruct (classic (exists tr_tau ts es lexec',
lexec = stream_app
tr_tau (Cons (ts, es) lexec') /\
silent_local_trace tr_tau /\
es <> [])).
{ des.
eexists (Some (_, _, _, _)).
esplits; eauto. }
hexploit not_inftau_ex_events; eauto.
i.
exists None. ss.
Qed.
Lemma exec_div_sig
(lexec: lexec_t)
: { ox |
match ox with
| None => inftau_lexec lexec
| Some x =>
let '(tr_tau, lexec', ts, es) := x in
lexec = stream_app tr_tau (Cons (ts, es) lexec') /\
silent_local_trace tr_tau /\
es <> []
end }.
Proof.
apply constructive_indefinite_description.
eapply exec_div_ex.
Qed.
CoFixpoint local_exec_beh_constr
: forall (t: Z) (pref: events sysE) (lexec: lexec_t), lbehav_t.
Proof.
i.
destruct pref as [| e_h pref'].
2: { exact (ccons (t, e_h) (local_exec_beh_constr
t pref' lexec)). }
pose (SIG := exec_div_sig lexec).
destruct (proj1_sig SIG) eqn:SIG_DES.
2: { econs 1. }
destruct p as [[[tr_tau lexec'] ts] es].
destruct es as [| es_h es_t].
{ (* exfalso. des. ss. } *)
econs 1. }
econs 2.
{ exact (ts, es_h). }
(* generalize (proj2_sig SIG). *)
(* rewrite SIG_DES. i. *)
(* destruct es as [| es_h es_t]. *)
(* { exfalso. des. ss. } *)
(* econs 2. *)
(* { eapply (ts, es_h). } *)
eapply (local_exec_beh_constr ts es_t lexec').
Defined.
Lemma local_exec_beh_constr_pref
t es exec
: local_exec_beh_constr t es exec =
colist_app (map (fun e => (t, e)) es)
(local_exec_beh_constr t [] exec).
Proof.
induction es as [ | e es' IH]; ss.
match goal with
| |- ?lhs = _ =>
rewrite (unfold_colist _ lhs)
end.
s. rewrite IH. ss.
Qed.
Lemma local_exec_behav_ex exec
: exists beh, local_exec_behav exec beh.
Proof.
exists (local_exec_beh_constr 0 [] exec).
generalize 0%Z.
revert exec.
pcofix CIH. i.
match goal with
| |- paco2 _ _ _ ?c =>
rewrite (unfold_colist _ c)
end.
s.
remember (exec_div_sig exec) as SIG.
destruct SIG. ss.
destruct x.
2: { pfold. econs 1. ss. }
destruct p as [[[tr_tau lexec'] ts] es].
desf.
rewrite local_exec_beh_constr_pref.
pfold. econs 2; eauto; ss.
Qed.
Lemma local_exec_behav_det
: forall exec beh1 beh2
(EXEC_BEH1: local_exec_behav exec beh1)
(EXEC_BEH2: local_exec_behav exec beh2),
colist_eq beh1 beh2.
Proof.
pcofix CIH. i.
punfold EXEC_BEH1. punfold EXEC_BEH2.
inv EXEC_BEH1; inv EXEC_BEH2.
- pfold. econs 1.
- exfalso.
clear - EXEC_SILENT EVENTS_NONNIL.
induction tr_tau as [| ctr_h ctr_t IH]; ss.
{ punfold EXEC_SILENT. inv EXEC_SILENT. ss. }
r in EXEC_SILENT.
punfold EXEC_SILENT. inv EXEC_SILENT.
pclearbot. eauto.
- exfalso.
clear - EXEC_SILENT EVENTS_NONNIL.
induction tr_tau as [| ctr_h ctr_t IH]; ss.
{ punfold EXEC_SILENT. inv EXEC_SILENT. ss. }
r in EXEC_SILENT.
punfold EXEC_SILENT. inv EXEC_SILENT.
pclearbot. eauto.
- assert (tr_tau0 = tr_tau /\ ts0 = ts /\ es0 = es /\ lexec'0 = lexec').
{ hexploit stream_app_3ways; eauto. i. des.
{ exfalso.
destruct l2r as [|hr tr]; ss.
{ rewrite app_nil_r in *. subst. nia. }
clarify.
rr in TRACE_TAU0.
rewrite Forall_forall in TRACE_TAU0.
hexploit TRACE_TAU0.
{ instantiate (1:= (ts, es)).
apply in_or_app. right. ss. eauto. }
ss.
}
{ exfalso.
destruct l1r as [|hr tr]; ss.
{ rewrite app_nil_r in *. subst. nia. }
clarify.
rr in TRACE_TAU.
rewrite Forall_forall in TRACE_TAU.
hexploit TRACE_TAU.
{ instantiate (1:= (ts0, es0)).
apply in_or_app. right. ss. eauto. }
ss.
}
clarify.
}
des. clarify. pclearbot.
destruct es as [| es_h es_t]; ss.
pfold. econs 2.
induction es_t as [| h t IH]; ss.
{ eauto. }
left. pfold. econs 2.
apply IH; ss.
Qed.
Lemma inftau_lexec_Cons
(lexec: lexec_t) ts
: inftau_lexec lexec <->
inftau_lexec (Cons (ts, []) lexec).
Proof.
split.
- i. pfold. econs; ss.
left. ss.
- intro CONS. punfold CONS. inv CONS. ss. pclearbot. ss.
Qed.
Lemma local_exec_beh_div
(lexec: lexec_t) beh
tr lexec'
(EBEH: local_exec_behav lexec beh)
(LEXEC_DIV: lexec = stream_app tr lexec')
: exists beh',
<<BEH_DIV: beh = colist_app (flatten_tes tr) beh'>> /\
<<EBEH': local_exec_behav lexec' beh'>>.
Proof.
subst lexec. depgen beh.
induction tr as [| h_tr t_tr IH]; i; ss.
{ esplits; eauto. }
destruct h_tr as [ts [| h_e t_e]].
{ unfold flatten_tes. ss.
fold (flatten_tes t_tr).
eapply (IH beh).
punfold EBEH. inv EBEH.
- pfold. econs.
apply inftau_lexec_Cons in EXEC_SILENT. ss.
- destruct tr_tau as [| h_tau t_tau]; ss; clarify.
pclearbot.
pfold. econs 2; eauto.
inv TRACE_TAU. ss.
}
punfold EBEH. inv EBEH.
- exfalso.
punfold EXEC_SILENT. inv EXEC_SILENT. ss.
- destruct tr_tau; ss.
2: { exfalso. clarify.
inv TRACE_TAU. ss. }
clarify. rename ts0 into ts.
pclearbot.
hexploit IH; eauto. i. des. clarify.
esplits; eauto.
ss. rewrite colist_app_assoc. ss.
Qed.
(* silent *)
Lemma flatten_silent_local_trace_iff
(tr: list (tsp * events sysE))
: silent_local_trace tr <->
flatten_tes tr = [].
Proof.
unfold flatten_tes, silent_local_trace.
split.
- intro SILENT_PREF.
induction SILENT_PREF; ss.
destruct x; ss. clarify.
- intro MAP.
induction tr; ss.
destruct a as [ts []]; ss.
econs; ss.
apply IHtr. ss.
Qed.
Lemma silent_tes_equiv
(tr1 tr2: list (tsp * events sysE))
(SILENT: silent_local_trace tr2)
(EQUIV: tes_equiv tr1 tr2)
: silent_local_trace tr1.
Proof.
unfold tes_equiv in EQUIV.
apply flatten_silent_local_trace_iff in SILENT.
apply flatten_silent_local_trace_iff.
congruence.
Qed.
End BEHAV_DEF.
Section BEHAV2_DEF.
(* behavior considering ErrorEvent *)
Variable safe_sysE: Type -> Type.
Notation sysE := (safe_sysE +' errE).
Fixpoint trace_find_error (es: events sysE)
: (events safe_sysE) * bool :=
match es with
| [] => ([], false)
| Event e r :: es' =>
match e with
| inl1 ssys_e =>
let (ssys_es', b) := trace_find_error es' in
(Event ssys_e r :: ssys_es', b)
| inr1 err_e =>
([], true)
end
end.
Lemma trace_find_error_spec
(es: events sysE) es_s b
(TR_ERR: trace_find_error es = (es_s, b))
: (if b then
exists es1 es2,
es = es1 ++ [Event (inr1 ErrorEvent) tt] ++ es2 /\
map embed es_s = es1
else map embed es_s = es:Prop).
Proof.
depgen es_s.
induction es as [| h t]; i; ss.
{ clarify. }
destruct h as [T [] r].
2: {
clarify.
destruct e.
destruct r.
exists [].
esplits; ss.
}
destruct (trace_find_error t) as [es_s' b'].
clarify.
hexploit IHt; eauto. i.
destruct b.
- des. subst.
eexists (Event (inl1 s) r :: _).
esplits; ss.
- subst. ss.
Qed.
Inductive _local_exec_behav2
(_leb: lexec_t sysE -> lbehav_t safe_sysE -> Prop)
: lexec_t sysE -> lbehav_t safe_sysE -> Prop :=
| LocalExecBehav2_Inftau
lexec
(EXEC_SILENT: inftau_lexec _ lexec)
: _local_exec_behav2 _leb lexec cnil
| LocalExecBehav2_Events
lexec tr_tau lexec' ts es
beh' beh
es_safe is_err
(LOCAL_EXEC_PREFIX: lexec = stream_app tr_tau (Cons (ts, es) lexec'))
(TRACE_TAU: silent_local_trace _ tr_tau)
(EVENTS_NONNIL: es <> [])
(TRACE_ERROR: trace_find_error es = (es_safe, is_err))
(LOCAL_EXEC_BEHAV_REST:
if is_err then True else _leb lexec' beh')
(BEHAVIOR: beh = colist_app (flatten_te (ts, es_safe)) beh')
: _local_exec_behav2 _leb lexec beh
.
Hint Constructors _local_exec_behav2: paco.
Lemma local_exec_behav2_monotone: monotone2 _local_exec_behav2.
Proof.
ii. inv IN.
- eauto with paco.
- desf.
+ econs; eauto.
+ econs; eauto.
s. eauto.
Qed.
Definition local_exec_behav2: lexec_t _ -> lbehav_t _ -> Prop :=
paco2 _local_exec_behav2 bot2.
Definition exec_behav2 : exec_t _ -> behav_t _ -> Prop :=
List.Forall2 local_exec_behav2.
End BEHAV2_DEF.
Arguments inftau_lexec [sysE].
Arguments local_exec_behav [sysE].
Arguments exec_behav [sysE].
Arguments local_exec_behav2 [safe_sysE].
Arguments exec_behav2 [safe_sysE].
Hint Resolve local_exec_behav_monotone: paco.
Hint Resolve local_exec_behav2_monotone: paco.
(* Distributed system *)
Module DSys.
Section SYSTEM_SEM.
Context {sysE: Type -> Type}.
Definition filter_nb1 (e: event (nbE +' sysE))
: (event sysE)? :=
match e with
| Event er r =>
match er with
| inl1 nbe => None
| inr1 syse => Some (Event syse r)
end
end.
Definition filter_nb_localstep
(tes: (tsp * events (nbE +' sysE)))
: (tsp * events sysE)? :=
let es' := deopt_list (map filter_nb1 (snd tes)) in
option_map (fun es => (fst tes, es)) es'.
Definition filter_nb_sysstep
(es: list (tsp * events (nbE +' sysE)))
: (list (tsp * events sysE))? :=
deopt_list (List.map filter_nb_localstep es).
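The three `filter_nb*` definitions above form an option pipeline: `filter_nb1` rejects an event if it is a `nbE` event, and the list-level variants fail (return `None`) as soon as any element fails, via `deopt_list`. A Python sketch of this behavior (illustrative only; the event encoding here is an assumption, not the Coq representation):

```python
# Sketch of the filter_nb* pipeline. An event is modeled as a tagged pair:
# ("nb", _) for a no-behavior event, ("sys", payload) for a system event.

def filter_nb1(e):
    """Keep a system event, reject (None) a nb event."""
    kind, payload = e
    return payload if kind == "sys" else None

def deopt_list(xs):
    """Some(list) if every element is Some, else None."""
    return None if any(x is None for x in xs) else list(xs)

def filter_nb_localstep(tes):
    """Filter one timestamped batch; fail if any event is nb."""
    ts, events = tes
    es = deopt_list([filter_nb1(e) for e in events])
    return None if es is None else (ts, es)

def filter_nb_sysstep(steps):
    """Filter a whole system step; fail if any local step fails."""
    return deopt_list([filter_nb_localstep(s) for s in steps])
```

The whole-system filter succeeds exactly when no site emitted a `nbE` event in that step, which is the `INTACT` side condition used later in `_safe_state`.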
(** ** System Semantics *)
Record t: Type :=
mk { state: Type ;
num_sites : state -> nat ;
step: state -> list (tsp * events (nbE +' sysE)) -> state -> Prop ;
initial_state: state -> Prop ;
step_prsv_num_sites:
forall st es st' (STEP: step st es st'),
<<NUM_EVTS: length es = num_sites st>> /\
<<NUM_SITES_AFTER:
num_sites st' = num_sites st>> ;
}.
(* Lemma filter_nobeh_Some_length *)
(* es es' *)
(* (FILTER: filter_nobeh es = Some es') *)
(* : length es' = length es. *)
(* Proof. *)
(* unfold filter_nobeh in *. *)
(* hexploit deopt_list_Some_length; eauto. *)
(* rewrite map_length. ss. *)
(* Qed. *)
Section BEHAVIOR.
(* Context {sys: t}. *)
Variable sys: t.
Inductive _exec_state
(exec_st: state sys -> exec_t sysE -> Prop)
: state sys -> exec_t sysE -> Prop :=
| ExecState_Stuck
st exec_undef
(STUCK: ~ exists es st', step _ st es st')
(EXEC_LEN: length exec_undef = num_sites _ st)
: _exec_state exec_st st exec_undef
| ExecState_Step
st es st'
es_sysE exec' exec
(STEP: step _ st es st')
(FILTER_NOBEH: filter_nb_sysstep es = Some es_sysE)
(BEH_CONS: Cons_each_rel es_sysE exec' exec)
(EXEC_REST: exec_st st' exec')
: _exec_state exec_st st exec
.
Hint Constructors _exec_state: paco.
Lemma exec_state_monotone: monotone2 _exec_state.
Proof. pmonauto. Qed.
Definition exec_state
: state sys -> exec_t sysE -> Prop :=
paco2 _exec_state bot2.
Definition behav_state
(st: state sys) (beh: behav_t sysE): Prop :=
exists exec, <<EXEC_ST: exec_state st exec>> /\
<<EXEC_BEH: exec_behav exec beh>>.
Definition behav_sys (beh: behav_t sysE): Prop :=
(~ exists st_i, initial_state sys st_i) \/
(exists st_i, <<INIT_STATE: initial_state sys st_i>> /\
<<BEH_ST:behav_state st_i beh>>).
End BEHAVIOR.
Section SAFE.
Inductive _safe_state {sys: t}
(safe_st: state sys -> Prop)
: state sys -> Prop :=
| SafeState
st
(PROGRESS: exists syse st', step _ st syse st')
(SAFE_NXT: forall syse st'
(STEP: step _ st syse st')
(INTACT: filter_nb_sysstep syse <> None),
safe_st st')
: _safe_state safe_st st.
Hint Constructors _safe_state: paco.
Lemma safe_state_monotone sys
: monotone1 (@_safe_state sys).
Proof. pmonauto. Qed.
Definition safe_state {sys: t}: state sys -> Prop :=
paco1 _safe_state bot1.
Inductive safe (sys: t) : Prop :=
Safe
(INIT_EXISTS: exists st_i, initial_state sys st_i)
(ALL_INIT_SAFE:
forall st_i (INIT: initial_state sys st_i),
safe_state st_i)
.
End SAFE.
End SYSTEM_SEM.
Section DSYS_WITH_ERROR.
Variable safe_sysE: Type -> Type.
Notation sysE := (safe_sysE +' errE).
Section BEHAVIOR.
(* Context {sys: t}. *)
Variable sys: @DSys.t sysE.
Definition behav_state2
(st: DSys.state sys) (beh: behav_t safe_sysE): Prop :=
exists exec, <<EXEC_ST: DSys.exec_state _ st exec>> /\
<<EXEC_BEH: exec_behav2 exec beh>>.
Definition behav_sys2 (beh: behav_t safe_sysE): Prop :=
(~ exists st_i, DSys.initial_state sys st_i) \/
(exists st_i, <<INIT_STATE: DSys.initial_state sys st_i>> /\
<<BEH_ST: behav_state2 st_i beh>>).
End BEHAVIOR.
Inductive _lbeh_err_incl
(err_incl: lbehav_t sysE -> lbehav_t safe_sysE -> Prop)
: lbehav_t sysE -> lbehav_t safe_sysE -> Prop :=
| LBehErrIncl_Inftau
: _lbeh_err_incl err_incl cnil cnil
| LBehErrIncl_SafeEvent
tsp (h1: event sysE) (h2: event safe_sysE)
t1 t2
(SAFE_EVENT: h1 = embed h2)
(INCL_REST: err_incl t1 t2)
: _lbeh_err_incl err_incl (ccons (tsp, h1) t1)
(ccons (tsp, h2) t2)
| LBehErrIncl_Error
err t1 any tsp
(ERROR_EVENT: err = (tsp, Event (inr1 ErrorEvent) tt))
: _lbeh_err_incl err_incl (ccons err t1) any
.
Definition lbeh_err_incl
: lbehav_t sysE -> lbehav_t safe_sysE -> Prop :=
paco2 _lbeh_err_incl bot2.
Hint Constructors _lbeh_err_incl: paco.
Lemma lbeh_err_incl_monotone: monotone2 _lbeh_err_incl.
Proof. pmonauto. Qed.
Hint Resolve lbeh_err_incl_monotone: paco.
Definition beh_err_incl
: behav_t sysE -> behav_t safe_sysE -> Prop :=
Forall2 lbeh_err_incl.
Lemma lbeh_err_incl_safe_prefix
ts (es_safe: events safe_sysE)
beh1' beh2
(INCL: lbeh_err_incl (colist_app (flatten_te (ts, map embed es_safe)) beh1') beh2)
: exists beh2',
beh2 = colist_app (flatten_te (ts, es_safe)) beh2' /\
lbeh_err_incl beh1' beh2'.
Proof.
depgen beh2.
induction es_safe as [| h t IH]; i; ss.
{ esplits; eauto. }
destruct h.
punfold INCL. inv INCL.
2: { ss. }
pclearbot.
destruct h2. clarify.
existT_elim. subst.
unf_resum. subst.
hexploit IH; eauto.
i. des.
esplits; eauto.
subst. eauto.
Qed.
Lemma err_incl_inv
ts es beh' beh2
es_safe err
(INCL : lbeh_err_incl (colist_app (flatten_te (ts, es)) beh') beh2)
(ERR: trace_find_error _ es = (es_safe, err))
: exists beh2',
(if err then True else lbeh_err_incl beh' beh2') /\
beh2 = colist_app (flatten_te (ts, es_safe)) beh2'.
Proof.
generalize (trace_find_error_spec _ es es_safe err ERR).
intro ERR_CASES.
destruct err.
- des. subst.
unfold flatten_te in INCL. ss.
rewrite map_app in INCL.
rewrite colist_app_assoc in INCL.
apply lbeh_err_incl_safe_prefix in INCL. des.
subst. esplits; eauto.
- subst.
apply lbeh_err_incl_safe_prefix in INCL. des.
subst.
esplits; eauto.
Qed.
Lemma local_exec_behav_12
: forall lexec (beh1: lbehav_t sysE) (beh2: lbehav_t safe_sysE)
(LEXEC_BEHAV: local_exec_behav lexec beh1)
(INCL: lbeh_err_incl beh1 beh2),
local_exec_behav2 lexec beh2.
Proof.
pcofix CIH. i.
punfold LEXEC_BEHAV. inv LEXEC_BEHAV.
{ punfold INCL.
pfold. inv INCL.
econs 1. ss. }
pclearbot.
assert (ERR: exists es_safe err,
trace_find_error _ es = (es_safe, err)).
{ eexists. eexists.
apply surjective_pairing. }
des.
hexploit err_incl_inv; eauto.
intros (beh2' & INCL_REST & BEH2_EQ).
subst.
pfold.
econs 2; eauto.
destruct err.
- ss.
- right.
eapply CIH; eauto.
Qed.
Lemma exec_behav_12
exec (beh1: behav_t sysE) (beh2: behav_t safe_sysE)
(EXEC_BEHAV: exec_behav exec beh1)
(INCL: beh_err_incl beh1 beh2)
: exec_behav2 exec beh2.
Proof.
apply Forall2_nth'.
split.
{ eapply Forall2_length in INCL.
eapply Forall2_length in EXEC_BEHAV.
nia. }
i.
eapply Forall2_nth1 in EXEC_BEHAV; eauto.
eapply Forall2_nth2 in INCL; eauto.
des.
eapply local_exec_behav_12; eauto.
clarify.
Qed.
Lemma paco2_err_incl_app
(r: lbehav_t sysE -> lbehav_t safe_sysE -> Prop)
(es_safe: events safe_sysE)
ts t1 t2
(INCL_TLS: paco2 _lbeh_err_incl r t1 t2)
: paco2 _lbeh_err_incl r
(colist_app (map (fun e => (ts, e))
(map embed es_safe)) t1)
(colist_app (map (fun e => (ts, e)) es_safe) t2).
Proof.
induction es_safe as [| h t].
{ ss. }
s. pfold.
econs 2; ss.
left. eauto.
Qed.
Lemma local_exec_behav_21
lexec (lbeh2: lbehav_t safe_sysE)
(LEXEC_BEHAV2: local_exec_behav2 lexec lbeh2)
: exists (lbeh1: lbehav_t sysE),
<<INCL: lbeh_err_incl lbeh1 lbeh2>> /\
<<LEXEC_BEHAV2: local_exec_behav lexec lbeh1>>.
Proof.
generalize (local_exec_behav_ex _ lexec).
intros [beh1 LEXEC_BEHAV1].
esplits; eauto.
revert_until safe_sysE.
pcofix CIH.
i.
punfold LEXEC_BEHAV1.
inv LEXEC_BEHAV1.
{ punfold LEXEC_BEHAV2.
inv LEXEC_BEHAV2.
{ pfold. econs 1. }
exfalso.
clear - EXEC_SILENT EVENTS_NONNIL.
rewrite <- inftau_lexec_pref_iff in EXEC_SILENT.
des.
punfold INFTAU. inv INFTAU. ss.
}
renames tr_tau ts es into tr_tau1 ts1 es1.
renames lexec' beh' TRACE_TAU into lexec'1 beh'1 TRACE_TAU1.
renames EVENTS_NONNIL LOCAL_EXEC_BEHAV_REST into
EVENTS_NONNIL1 LOCAL_EXEC_BEHAV_REST1.
punfold LEXEC_BEHAV2.
inv LEXEC_BEHAV2.
{ exfalso.
clear - EXEC_SILENT EVENTS_NONNIL1.
rewrite <- inftau_lexec_pref_iff in EXEC_SILENT.
des.
punfold INFTAU. inv INFTAU. ss.
}
assert (ts1 = ts /\ es1 = es /\ lexec'1 = lexec').
{ clear - LOCAL_EXEC_PREFIX TRACE_TAU1 TRACE_TAU
EVENTS_NONNIL1 EVENTS_NONNIL.
depgen tr_tau.
induction TRACE_TAU1 as [| h_tau1 t_tau1 IH]; i; ss.
{ inv TRACE_TAU; ss.
{ clarify. }
clarify.
}
inv TRACE_TAU; ss.
{ clarify. }
clarify.
hexploit IH; eauto.
}
des. subst.
generalize (trace_find_error_spec
_ es es_safe is_err TRACE_ERROR).
intro ERR.
destruct is_err.
{ des. subst.
(* pfold. *)
unfold flatten_te. ss.
rewrite map_app. ss.
rewrite colist_app_assoc. s.
eapply paco2_err_incl_app.
pfold. econs 3; eauto.
}
hexploit (des_snoc _ es_safe).
{ destruct es_safe; ss.
{ congruence. }
nia. }
i. des.
subst es_safe.
subst es.
unfold snoc. unfold flatten_te. s.
pclearbot.
repeat rewrite map_app.
repeat rewrite colist_app_assoc.
eapply paco2_err_incl_app. s.
pfold. econs 2; ss.
eauto.
Qed.
Lemma exec_behav_21
exec (beh2: behav_t safe_sysE)
(EXEC_BEHAV: exec_behav2 exec beh2)
: exists (beh1: behav_t sysE),
<<INCL: beh_err_incl beh1 beh2>> /\
<<EXEC_BEHAV2: exec_behav exec beh1>>.
Proof.
cut (forall n (N_UBND: n < length beh2),
exists lbeh1,
(forall lexec lbeh2
(LEXEC: nth_error exec n = Some lexec)
(LBEH2: nth_error beh2 n = Some lbeh2),
<<LINCL: lbeh_err_incl lbeh1 lbeh2>> /\
<<LEXEC_BEH: local_exec_behav lexec lbeh1>>)).
{ intro AUX.
eapply exists_list in AUX.
des.
exists l.
splits.
- r.
apply Forall2_nth'.
split; ss.
i. eapply Forall2_nth2 in EXEC_BEHAV; eauto.
des.
hexploit NTH_PROP; eauto. i. des.
eauto.
- r.
apply Forall2_nth'.
split; ss.
{ eapply Forall2_length in EXEC_BEHAV.
congruence. }
i. eapply Forall2_nth1 in EXEC_BEHAV; eauto.
des.
hexploit NTH_PROP; eauto. i. des.
eauto.
}
i.
hexploit (nth_error_Some2 _ beh2 n); eauto.
i. des.
eapply Forall2_nth2 in EXEC_BEHAV; eauto.
des.
hexploit local_exec_behav_21; eauto.
i. des.
esplits; eauto.
i. clarify.
split; eauto.
Qed.
Lemma lbeh_err_ub_incl_any
lbeh' lbeh_ub
tsp tl
(LBEH_UB_EQ: lbeh_ub = ccons (tsp, Event (inr1 ErrorEvent) tt) tl)
: lbeh_err_incl lbeh_ub lbeh'.
Proof.
subst lbeh_ub.
pfold. econs 3; eauto.
Qed.
Lemma beh_err_ub_incl_any
beh' beh_ub
(BEH_UB: Forall (fun x => exists tsp tl,
x = ccons (tsp, Event (inr1 ErrorEvent) tt) tl) beh_ub)
(LEN_EQ: length beh' = length beh_ub)
: beh_err_incl beh_ub beh'.
Proof.
apply Forall2_nth'.
split; ss.
i.
eapply Forall_nth1 in BEH_UB; eauto.
des.
eapply lbeh_err_ub_incl_any; eauto.
Qed.
Lemma behav1_refine_impl_behav2_refine
(dsys1 dsys2: @DSys.t sysE)
(REF1: DSys.behav_sys dsys1 <1= DSys.behav_sys dsys2)
: behav_sys2 dsys1 <1= behav_sys2 dsys2.
Proof.
intros beh BEH_SYS1.
inv BEH_SYS1.
{ assert (ANY_BEH: forall beh_any, behav_sys dsys1 beh_any).
{ i. econs 1. ss. }
assert (ANY_BEH2: forall beh_any, behav_sys dsys2 beh_any).
{ intro beh_any.
hexploit ANY_BEH; eauto. }
pose (beh_ub:= map (fun _ => ccons (0%Z, Event (inr1 ErrorEvent) tt: event sysE) cnil) beh).
specialize (ANY_BEH2 beh_ub).
inv ANY_BEH2.
{ econs 1. eauto. }
des.
econs 2.
esplits; eauto.
r in BEH_ST.
r.
des.
esplits; eauto.
eapply exec_behav_12; eauto.
eapply beh_err_ub_incl_any.
{ subst beh_ub.
apply Forall_forall.
intros x IN.
apply in_map_iff in IN. des.
clarify.
esplits; eauto.
}
subst beh_ub.
rewrite map_length. ss.
}
des.
r in BEH_ST. des.
hexploit exec_behav_21; eauto.
i. des.
hexploit REF1; eauto.
{ econs 2. esplits; eauto.
econs; eauto. }
intro BEH1.
r in BEH1. des.
- econs 1. eauto.
- econs 2.
esplits; eauto.
r in BEH_ST. r.
des.
esplits; eauto.
eapply exec_behav_12; eauto.
Qed.
End DSYS_WITH_ERROR.
End DSys.
(* TODO: position? *)
Hint Resolve DSys.exec_state_monotone: paco.
Hint Resolve DSys.safe_state_monotone: paco.
Lemma exec_state_len sysE
(sys: @DSys.t sysE) (st: DSys.state sys) exec
(EXEC_ST: DSys.exec_state _ st exec)
: length exec = DSys.num_sites sys st.
Proof.
punfold EXEC_ST. inv EXEC_ST.
- ss.
- unfold DSys.filter_nb_sysstep in *.
hexploit deopt_list_length; eauto.
rewrite map_length. i.
hexploit Forall3_length; eauto. i. des.
hexploit (DSys.step_prsv_num_sites sys); eauto. i. des.
rewrite <- NUM_EVTS.
unfold lexec_t, events in *. nia.
Qed.
Lemma behav_state_len sysE
(sys: @DSys.t sysE) (st: DSys.state sys) beh
(BEH_ST: DSys.behav_state _ st beh)
: length beh = DSys.num_sites sys st.
Proof.
rr in BEH_ST. des.
hexploit exec_state_len; eauto. intro LEN.
rr in EXEC_BEH.
hexploit Forall2_length; eauto. i. nia.
Qed.
Lemma DSys_filter_nb_localstep_inv E
tes_r (tes: tsp * events E)
(FILTER_LOC: DSys.filter_nb_localstep tes_r =
Some tes)
: <<TIMESTAMP_EQ: fst tes_r = fst tes>> /\
<<FILTERED_NB_EACH:
Forall2 (fun e_r e => DSys.filter_nb1 e_r = Some e)
(snd tes_r) (snd tes)>>.
Proof.
unfold DSys.filter_nb_localstep in *.
destruct (deopt_list (map DSys.filter_nb1 (snd tes_r)))
as [l'| ] eqn: DEOPT.
2: { ss. }
destruct tes as [t es].
destruct tes_r as [t' es_r].
ss. clarify.
split; ss.
apply Forall2_nth. i.
hexploit deopt_list_length; eauto.
rewrite map_length. intro LEN_EQ.
apply deopt_list_Some_iff in DEOPT.
destruct (nth_error es_r n) eqn: ES_R_N.
2: {
destruct (nth_error es n) eqn: ES_N.
2: { econs. }
exfalso.
apply nth_error_Some1' in ES_N.
apply nth_error_None in ES_R_N. nia.
}
destruct (nth_error es n) eqn: ES_N.
2: { exfalso.
apply nth_error_Some1' in ES_R_N.
apply nth_error_None in ES_N. nia. }
econs.
apply f_equal with (f:= fun l => nth_error l n) in DEOPT.
do 2 rewrite map_nth_error_rw in DEOPT.
rewrite ES_N, ES_R_N in DEOPT. ss. clarify.
Qed.
Lemma DSys_filter_nb_sysstep_repeat_nil E
tm n
: DSys.filter_nb_sysstep (repeat (tm, @nil (event (nbE +' E))) n) =
Some (repeat (tm, []) n).
Proof.
unfold DSys.filter_nb_sysstep.
apply deopt_list_Some_iff.
do 2 rewrite map_repeat. ss.
Qed.
Lemma filter_nb_localstep_embed E
tm (es: events E)
: DSys.filter_nb_localstep (tm, embed es) =
Some (tm, es).
Proof.
unfold DSys.filter_nb_localstep. ss.
induction es as [|h t IH]; ss.
replace (DSys.filter_nb1 (embed h)) with (Some h).
2: { unfold DSys.filter_nb1.
destruct h; ss. }
unfold embed in IH.
destruct (deopt_list (map DSys.filter_nb1 (embed_list t))) eqn: DEOPT; ss.
clarify.
Qed.
Lemma DSys_filter_nb_sysstep_inv E
es_r (es: list (tsp * events E))
(FILTER_SYS: DSys.filter_nb_sysstep es_r = Some es)
: Forall2 (fun es1 es2 =>
DSys.filter_nb_localstep es1 = Some es2)
es_r es.
Proof.
unfold DSys.filter_nb_sysstep in *.
apply deopt_list_Some_iff in FILTER_SYS.
depgen es.
induction es_r as [| hr tr IH]; i; ss.
{ destruct es; ss. }
destruct es as [| h t]; ss.
clarify.
econs; eauto.
Qed.
Lemma filter_nb_localstep_app E
tm es1 tr1 (syse: event E)
(FLT1: DSys.filter_nb_localstep (tm, es1) = Some tr1)
: DSys.filter_nb_localstep (tm, es1 ++ [embed syse]) =
Some (fst tr1, snd tr1 ++ [syse]).
Proof.
unfold DSys.filter_nb_localstep. ss.
rewrite map_app.
unfold DSys.filter_nb_localstep in FLT1.
match type of FLT1 with
| option_map _ ?x = _ =>
destruct x eqn: DEOPT1; ss
end.
clarify.
erewrite deopt_list_app; cycle 1.
{ eauto. }
{ destruct syse. ss. }
ss.
Qed.
Lemma tes_equiv_sym E
(t1 t2: list (tsp * events E))
(EQV: tes_equiv t1 t2)
: tes_equiv t2 t1.
Proof.
unfold tes_equiv in *. ss.
Qed.
Lemma Forall2_tes_equiv_sym E
(t1 t2: list (list (tsp * events E)))
(EQV: Forall2 tes_equiv t1 t2)
: Forall2 tes_equiv t2 t1.
Proof.
depgen t2.
induction t1; i; ss.
{ inv EQV. econs. }
inv EQV. econs; ss. eauto.
Qed.
|
cdis Forecast Systems Laboratory
cdis NOAA/OAR/ERL/FSL
cdis 325 Broadway
cdis Boulder, CO 80303
cdis
cdis Forecast Research Division
cdis Local Analysis and Prediction Branch
cdis LAPS
cdis
cdis This software and its documentation are in the public domain and
cdis are furnished "as is." The United States government, its
cdis instrumentalities, officers, employees, and agents make no
cdis warranty, express or implied, as to the usefulness of the software
cdis and documentation for any purpose. They assume no responsibility
cdis (1) for the use of the software and documentation; or (2) to provide
cdis technical support to users.
cdis
cdis Permission to use, copy, modify, and distribute this software is
cdis hereby granted, provided that the entire disclaimer notice appears
cdis in all copies. All modifications to this software must be clearly
cdis documented, and are solely the responsibility of the agent making
cdis the modifications. If significant modifications or enhancements
cdis are made to this software, the FSL Software Policy Manager
cdis ([email protected]) should be notified.
cdis
cdis
cdis
cdis
cdis
cdis
cdis
subroutine set_missing_sat(csatid,csattype,chtype,
& image_in,nx,ny,smsng,r_missing_data,scale_img,istatus)
c
c
c     J. Smart 11-13-95  include a technique to detect bad (outlier) data
c     in addition to missing sat pixels
c
implicit none
c
integer nx,ny,i,j,n
integer ii,jj
integer istatus
integer mstatus
integer istat_status
integer imiss_status
integer ibnd,jbnd
real image_in(nx,ny)
real image_temp(nx,ny)
real data(125)
real ave,adev,sdev,var,skew,curt
real smsng
real r_missing_data
real rlow,rhigh,scale_img
character csattype*(*)
character chtype*(*)
character csatid*(*)
write(6,*)'Enter set_missing_sat: ',csattype,' ',chtype
c
c     note that this quality-control step is performed on satellite counts,
c     both ir and vis.
c
ibnd=nx
jbnd=ny
c
c the test is different depending on the value of smsng (satellite missing).
c the missing satellite data values are defined in data/static/satellite_lvd.nl
c
rlow=0.0
if(smsng.gt.0.0)then
rhigh=550. !smsng
else
c per Eric Gregow 22 Aug 2008 -- get the IR channels ingested without any exclusion
c of points (that were previously set to missing values)
c rhigh=255.
rhigh=350.
endif
if(csatid.ne.'gmssat'.and.csatid.ne.'meteos')then
if(csattype.eq.'gvr'.or.csattype.eq.'gwc')then
rhigh = 1023.
if(chtype.eq.'4u ')rlow=69.
if(chtype.eq.'wv ')rlow=30.
if(chtype.eq.'11u'.or.chtype.eq.'12u')rlow=16.
if(chtype.eq.'vis')rlow=28.
endif
endif
if(csattype.eq.'rll')then ! Bad range thresholds
if(chtype .ne. 'vis')then
write(6,*)' scale_img passed in = ',scale_img
! if(csatid .ne. 'meteos' .AND. csatid .ne. 'fy')then
! scale_img = .01
! else
! scale_img = .1
! endif
rhigh = 500. /scale_img
rlow = 163.1/scale_img
else ! 'vis'
rhigh = 900.
rlow = 0.
endif
write(6,*)' Range testing thresholds set to ',rlow,rhigh
else
write(6,*)' Range testing thresholds set to ',rlow,rhigh
endif
istat_status=0
imiss_status=0
do j=2,jbnd-1
do i=2,ibnd-1
if(image_in(i,j).eq.smsng.or.
& image_in(i,j).lt.rlow .or.
& image_in(i,j).gt.rhigh )then
n=0
do jj=j-1,j+1
do ii=i-1,i+1
if(image_in(ii,jj).ne.smsng .and.
& image_in(ii,jj).gt.rlow .and.
& image_in(ii,jj).lt.rhigh )then
n=n+1
data(n)=image_in(ii,jj)
endif
enddo
enddo
if(n.ge.2)then
call moment(data,n,
& ave,adev,sdev,var,skew,curt,
& mstatus)
if(abs(image_in(i,j)-ave).gt.3.0*sdev.or.
& image_in(i,j).eq.smsng.or.
& image_in(i,j).lt.rlow .or.
& image_in(i,j).gt.rhigh)then
image_temp(i,j) = ave
istat_status=istat_status-1
else
image_temp(i,j) = r_missing_data
imiss_status=imiss_status-1
endif
else
image_temp(i,j)=r_missing_data
imiss_status=imiss_status-1
endif
else
image_temp(i,j) = image_in(i,j)
endif
end do
end do
c
c     take care of the boundaries
c
do i=1,ibnd
if(image_in(i,1).eq.smsng.or.
& image_in(i,1).lt.rlow .or.
& image_in(i,1).gt.rhigh)then
image_temp(i,1)=r_missing_data
imiss_status=imiss_status-1
else
image_temp(i,1)=image_in(i,1)
endif
if(image_in(i,jbnd).eq.smsng.or.
& image_in(i,jbnd).lt.rlow .or.
& image_in(i,jbnd).gt.rhigh)then
image_temp(i,jbnd)=r_missing_data
imiss_status=imiss_status-1
else
image_temp(i,jbnd)=image_in(i,jbnd)
endif
enddo
do j=1,jbnd
if(image_in(1,j).eq.smsng.or.
& image_in(1,j).lt.rlow .or.
& image_in(1,j).gt.rhigh)then
image_temp(1,j)=r_missing_data
imiss_status=imiss_status-1
else
image_temp(1,j)=image_in(1,j)
endif
if(image_in(ibnd,j).eq.smsng.or.
& image_in(ibnd,j).lt.rlow .or.
& image_in(ibnd,j).gt.rhigh)then
image_temp(ibnd,j)=r_missing_data
imiss_status=imiss_status-1
else
image_temp(ibnd,j)=image_in(ibnd,j)
endif
enddo
do j=1,jbnd
do i=1,ibnd
image_in(i,j)=image_temp(i,j)
enddo
enddo
write(6,*)' # reset to r_missing: ',imiss_status
write(6,*)' # reset to average : ',istat_status
istatus=imiss_status+istat_status
1000 return
end
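The core of the quality-control loop in `set_missing_sat` can be sketched in Python (an illustrative analogue, not the operational Fortran; the function name `qc_pixel` is invented here). A pixel that is missing (`smsng`) or outside `[rlow, rhigh]` is replaced by the mean of its valid 3x3 neighbors when at least two are available, and by `r_missing_data` otherwise:

```python
import statistics

def qc_pixel(image, i, j, smsng, rlow, rhigh, r_missing):
    """QC one interior pixel of a 2-D image (list of rows)."""
    v = image[i][j]
    if v != smsng and rlow <= v <= rhigh:
        return v                          # passes the range/missing test
    # valid 3x3 neighbours (the centre pixel fails the test, so it drops out)
    data = [image[ii][jj]
            for jj in (j - 1, j, j + 1)
            for ii in (i - 1, i, i + 1)
            if image[ii][jj] != smsng and rlow < image[ii][jj] < rhigh]
    if len(data) >= 2:
        ave = statistics.mean(data)
        sdev = statistics.pstdev(data)    # the Fortran computes moments here
        # the Fortran re-tests the pixel at this point; for a pixel that
        # already failed the range test the condition always holds, so the
        # neighbourhood mean is used whenever enough neighbours exist
        if abs(v - ave) > 3.0 * sdev or v == smsng or v < rlow or v > rhigh:
            return ave
    return r_missing                      # too few valid neighbours
```

The Fortran additionally handles the image boundary rows and columns separately (range test only, no neighborhood averaging), which the sketch omits.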