chapter \<open>Introduction\<close> text \<open> The Cook-Levin theorem states that the problem \SAT{} of deciding the satisfiability of Boolean formulas in conjunctive normal form is $\NP$-complete~\cite{Cook,Levin}. This article formalizes a proof of this theorem based on the textbook \emph{Computational Complexity:\ A Modern Approach} by Arora and Barak~\cite{ccama}. \<close> section \<open>Outline\<close> text \<open> We start out in Chapter~\ref{s:TM} with a definition of multi-tape Turing machines (TMs) slightly modified from Arora and Barak's definition. The remainder of the chapter is devoted to constructing ever more complex machines for arithmetic on binary numbers, evaluating polynomials, and performing basic operations on lists of numbers and even lists of lists of numbers. Specifying Turing machines and proving their correctness and running time is laborious at the best of times. We slightly alleviate the seemingly inevitable tedium of this by defining elementary reusable Turing machines and introducing ways of composing them sequentially as well as in if-then-else branches and while loops. Together with the representation of natural numbers and lists, we thus get something faintly resembling a structured programming language. In Chapter~\ref{s:TC} we introduce some basic concepts of complexity theory, such as $\mathcal{P}$, $\NP$, and polynomial-time many-one reduction. Following Arora and Barak, the complexity class $\NP$ is defined via verifier Turing machines rather than nondeterministic machines, and so the deterministic TMs introduced in the previous chapter suffice for all definitions. To flesh out the chapter a little we formalize obvious proofs of $\mathcal{P} \subseteq \NP$ and the transitivity of the reducibility relation, although neither result is needed for proving the Cook-Levin theorem. Chapter~\ref{s:Sat} introduces the problem \SAT{} as a language over bit strings. 
Boolean formulas in conjunctive normal form (CNF) are represented as lists of clauses, each consisting of a list of literals encoded in binary numbers. The list of lists of numbers ``data type'' defined in Chapter~\ref{s:TM} will come in handy at this point. The proof of the Cook-Levin theorem has two parts: Showing that \SAT{} is in $\NP$ and showing that \SAT{} is $\NP$-hard, that is, that every language in $\NP$ can be reduced to \SAT{} in polynomial time. The first part, also proved in Chapter~\ref{s:Sat}, is fairly easy: For a satisfiable CNF formula, a satisfying assignment can be given in roughly the size of the formula, because only the variables in the formula need be assigned a truth value. Moreover, whether an assignment satisfies a CNF formula can be verified easily. The hard part is showing the $\NP$-hardness of \SAT{}. The first step (Chapter~\ref{s:oblivious}) is to show that every polynomial-time computation on a multi-tape TM can be performed in polynomial time on a two-tape \emph{oblivious} TM. Oblivious means that the sequence of positions of the Turing machine's tape heads depends only on the \emph{length} of the input. Thus any language in $\NP$ has a polynomial-time two-tape oblivious verifier TM. In Chapter~\ref{s:Reducing} the proof goes on to show how the computations of such a machine can be mapped to CNF formulas such that a CNF formula is satisfiable if and only if the underlying computation was for a string in the language paired with a certificate. Finally, in Chapter~\ref{s:Aux_TM} and Chapter~\ref{s:Red_TM} we construct a Turing machine that carries out the reduction in polynomial time. \<close> section \<open>Related work\<close> text \<open> The Cook-Levin theorem has been formalized before. Gamboa and Cowles~\cite{Gamboa2004AMP} present a formalization in ACL2~\cite{acl2}. 
They formalize $\NP$ and reducibility in terms of Turing machines, but analyze the running time of the reduction from $\NP$ languages to \SAT{} in a different, somewhat ad hoc model of computation, which they call ``the major weakness'' of their formalization. Employing Coq~\cite{coq}, Gäher and Kunze~\cite{Gher2021MechanisingCT} define $\NP$ and reducibility in the computational model ``call-by-value $\lambda$-calculus L'' introduced by Forster and Smolka~\cite{Forster2017WeakCL}. They show the $\NP$-completeness of \SAT{} in this framework. Turing machines appear in an intermediate problem in the chain of reductions from $\NP$ languages to \SAT{}, but are not used to show the polynomiality of the reduction. Nevertheless, this is likely the first formalization of the Cook-Levin theorem where both the complexity-theoretic concepts and the proof of the polynomiality of the reduction use the same model of computation. With regard to Isabelle, Xu~et al.~\cite{Universal_Turing_Machine-AFP} provide a formalization of single-tape Turing machines with a fixed binary alphabet in the computability theory setting and construct a universal TM. While I was putting the finishing touches on this article, Dalvit and Thiemann~\cite{Multitape_To_Singletape_TM-AFP} published a formalization of (deterministic and nondeterministic) multi-tape and single-tape Turing machines and showed how to simulate the former on the latter with quadratic slowdown. Moreover, Thiemann and Schmidinger~\cite{Multiset_Ordering_NPC-AFP} prove the $\NP$-completeness of the Multiset Ordering problem, without, however, proving the polynomial-time computability of the reduction. This article uses Turing machines as the model of computation for both the complexity-theoretic concepts and the running-time analysis of the reduction. It is thus most similar to Gäher and Kunze's work, but has a more elementary, if not brute-force, flavor to it. 
\<close> section \<open>The core concepts\<close> text \<open> The proof of the Cook-Levin theorem awaits us in Section~\ref{s:complete} on the very last page of this article. The way there is filled with definitions of Turing machines, correctness proofs for Turing machines, and running time-bound proofs for Turing machines, all of which can easily drown out the more relevant concepts. For instance, for verifying that the theorem on the last page really is the Cook-Levin theorem, only a small fraction of this article is relevant, namely the definitions of $\NP$-completeness and of \SAT{}. Recursively breaking down these definitions yields: \begin{itemize} \item $\NP$-completeness: Section~\ref{s:TC-NP} \begin{itemize} \item languages: Section~\ref{s:TC-NP} \item $\NP$-hard: Section~\ref{s:TC-NP} \begin{itemize} \item $\NP$: Section~\ref{s:TC-NP} \begin{itemize} \item Turing machines: Section~\ref{s:tm-basic-tm} \item computing a function: Section~\ref{s:tm-basic-comp} \item pairing strings: Section~\ref{s:tm-basic-pair} \item Big-Oh, polynomial: Section~\ref{s:tm-basic-bigoh} \end{itemize} \item polynomial-time many-one reduction: Section~\ref{s:TC-NP} \end{itemize} \end{itemize} \item \SAT{}: Section~\ref{s:sat-sat-repr} \begin{itemize} \item literal, clause, CNF formula, assignment, satisfiability: Section~\ref{s:CNF} \item representing CNF formulas as strings: Section~\ref{s:sat-sat-repr} \begin{itemize} \item string: Section~\ref{s:tm-basic-tm} \item CNF formula: Section~\ref{s:CNF} \item mapping between symbols and strings: Section~\ref{s:tm-basic-comp} \item mapping between binary and quaternary alphabets: Section~\ref{s:tm-quaternary-encoding} \item lists of lists of natural numbers: Section~\ref{s:tm-numlistlist-repr} \begin{itemize} \item binary representation of natural numbers: Section~\ref{s:tm-arithmetic-binary} \item lists of natural numbers: Section~\ref{s:tm-numlist-repr} \end{itemize} \end{itemize} \end{itemize} \end{itemize} In other words the 
Sections~\ref{s:tm-basic}, \ref{s:tm-arithmetic-binary}, \ref{s:tm-numlist-repr}, \ref{s:tm-numlistlist-repr}, \ref{s:tm-quaternary-encoding}, \ref{s:TC-NP}, \ref{s:CNF}, and \ref{s:sat-sat-repr} cover all definitions for formalizing the statement ``\SAT{} is $\NP$-complete''. \<close> chapter \<open>Turing machines\label{s:TM}\<close> text \<open> This chapter introduces Turing machines as a model of computing functions within a running-time bound. Despite being quite intuitive, Turing machines are notoriously tedious to work with. And so most of the rest of the chapter is devoted to making this a little easier by providing means of combining TMs and a library of reusable TMs for common tasks. The basic idea (Sections~\ref{s:tm-basic} and~\ref{s:tm-trans}) is to treat Turing machines as a kind of GOTO programming language. A state of a TM corresponds to a line of code executing a rather complex command that, depending on the symbols read, can write symbols, move tape heads, and jump to another state (that is, line of code). States are identified by line numbers. This makes it easy to execute TMs in sequence by concatenating two TM ``programs''. On top of the GOTO implicit in all commands, we then define IF and WHILE in the traditional way (Section~\ref{s:tm-combining}). This makes TMs more composable. The interpretation of states as line numbers deprives TMs of the ability to memorize values ``in states'', for example, the carry bit during a binary addition. In Section~\ref{s:tm-memorizing} we recover some of this flexibility. Being able to combine TMs is helpful, but we also need TMs to combine. This takes up most of the remainder of the chapter. We start with simple operations, such as moving a tape head to the next blank symbol or copying symbols between tapes (Section~\ref{s:tm-elementary}). 
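The sequencing idea described above, with states as line numbers and composition by concatenating ``programs'', can be conveyed by a small Python sketch. This is only an illustration outside the formalization: machines are lists of commands, a command maps the list of read symbols to a pair of follow-up state and actions, the halting state of a machine is its length, and the names @{text compose} and @{text relocate} are mine (the formal combining operator is defined later, in Section~\ref{s:tm-combining}).

```python
# Hypothetical sketch: a machine is a list of commands, a command maps
# the symbols read to (next state, actions), and state len(m) halts m.

def compose(m1, m2):
    """Run m1 to completion, then m2: since states are 'line numbers',
    the halting state len(m1) of m1 is the start state of the shifted m2."""
    offset = len(m1)

    def relocate(cmd):
        # shift every jump target of an m2 command by the length of m1
        def shifted(gs):
            q, actions = cmd(gs)
            return (q + offset, actions)
        return shifted

    return m1 + [relocate(cmd) for cmd in m2]
```

For example, composing two one-command machines yields a two-command machine whose first command still jumps to state 1 and whose second command now jumps to the new halting state 2.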
Extending our programming language analogy for more complex TMs, we identify tapes with variables, so that a tape contains a value of a specific type, such as a number or a list of numbers. In the remaining Sections~\ref{s:tm-arithmetic} to~\ref{s:tm-wellformed} we define these ``data types'' and devise TMs for operations over them. It would be an exaggeration to say all this makes working with Turing machines easy or fun. But at least it makes TMs somewhat more feasible to use for complexity theory, as witnessed by the subsequent chapters. \<close> section \<open>Basic definitions\label{s:tm-basic}\<close> theory Basics imports Main begin text \<open> While Turing machines are fairly simple, there are still a few parts to define, especially if one allows multiple tapes and an arbitrary alphabet: states, tapes (read-only or read-write), cells, tape heads, head movements, symbols, and configurations. Beyond these are more semantic aspects like executing one or many steps of a Turing machine, its running time, and what it means for a TM to ``compute a function''. Our approach to formalizing all this must look rather crude compared to Dalvit and Thiemann's~\cite{Multitape_To_Singletape_TM-AFP}, but still it does get the job done. For lack of a better place, this section also introduces a minimal version of Big-Oh, polynomials, and a pairing function for strings. \<close> subsection \<open>Multi-tape Turing machines\label{s:tm-basic-tm}\<close> text \<open> Arora and Barak~\cite[p.~11]{ccama} define multi-tape Turing machines with these features: \begin{itemize} \item There are $k \geq 2$ infinite one-directional tapes, and each has one head. \item The first tape is the input tape and read-only; the other $k - 1$ tapes can be written to. \item The tape alphabet is a finite set $\Gamma$ containing at least the blank symbol $\Box$, the start symbol $\triangleright$, and the symbols \textbf{0} and \textbf{1}. 
\item There is a finite set $Q$ of states with start state and halting state $q_\mathit{start}, q_\mathit{halt} \in Q$. \item The behavior is described by a transition function $\delta\colon\ Q \times \Gamma^k \to Q \times \Gamma^{k-1} \times \{L, S, R\}^k$. If the TM is in a state $q$ and the symbols $g_1,\dots,g_k$ are under the $k$ tape heads and $\delta(q, (g_1, \dots, g_k)) = (q', (g'_2, \dots, g'_k), (d_1, \dots, d_k))$, then the TM writes $g'_2, \dots, g'_k$ to the writable tapes, moves the tape heads in the direction (Left, Stay, or Right) indicated by the $d_1, \dots, d_k$ and switches to state $q'$. \end{itemize} \<close> subsubsection \<open>Syntax\<close> text \<open> An obvious data type for the direction a tape head can move: \<close> datatype direction = Left | Stay | Right text \<open> We simplify the definition a bit in that we identify both symbols and states with natural numbers: \begin{itemize} \item We set $\Gamma = \{0, 1, \dots, G - 1\}$ for some $G \geq 4$ and represent the symbols $\Box$, $\triangleright$, \textbf{0}, and \textbf{1} by the numbers 0, 1, 2, and~3, respectively. We represent an alphabet $\Gamma$ by its size $G$. \item We let the set of states be of the form $\{0, 1, \dots, Q\}$ for some $Q\in\nat$ and set the start state $q_\mathit{start} = 0$ and halting state $q_\mathit{halt} = Q$. \end{itemize} The last item presents a fundamental difference to the textbook definition, because it requires that Turing machines with $q_\mathit{start} = q_\mathit{halt}$ have exactly one state, whereas the textbook definition allows them arbitrarily many states. However, if $q_\mathit{start} = q_\mathit{halt}$ then the TM starts in the halting state and thus does not actually do anything. But then it does not matter if there are other states besides that one start/halting state. Our simplified definition therefore does not restrict the expressive power of TMs. It does, however, simplify composing them. 
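As a toy illustration of the numeric encoding just fixed, a symbol sequence over an alphabet of size $G$ can be rendered with the four mandatory symbols. This is a hedged Python sketch outside the formalization; the glyph table and the function name @{text render} are mine.

```python
# Symbols are natural numbers: 0 = blank, 1 = start, 2 and 3 = the
# bold bits 0 and 1; an alphabet Gamma = {0, ..., G-1} is its size G.
GLYPHS = {0: '\u25a1', 1: '\u25b7', 2: '0', 3: '1'}  # box, triangle, 0, 1

def render(symbols, G):
    """Render a symbol sequence, assuming G >= 4 and all symbols < G;
    symbols >= 4 (allowed for larger alphabets) get a placeholder."""
    assert G >= 4 and all(s < G for s in symbols)
    return ''.join(GLYPHS.get(s, '?') for s in symbols)
```

For instance, the sequence $\triangleright\,\textbf{0}\,\textbf{1}\,\Box$ is the symbol list @{text "[1, 2, 3, 0]"} over any alphabet with $G \geq 4$.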
\<close> text \<open> The type @{type nat} is used for symbols and for states. \<close> type_synonym state = nat type_synonym symbol = nat text \<open> It is confusing to have the numbers 2 and 3 represent the symbols \textbf{0} and \textbf{1}. The next abbreviations try to hide this somewhat. The glyphs for symbols number~4 and~5 are chosen arbitrarily. While we will encounter Turing machines with huge alphabets, only the following symbols will be used literally: \<close> abbreviation (input) blank_symbol :: nat ("\<box>") where "\<box> \<equiv> 0" abbreviation (input) start_symbol :: nat ("\<triangleright>") where "\<triangleright> \<equiv> 1" abbreviation (input) zero_symbol :: nat ("\<zero>") where "\<zero> \<equiv> 2" abbreviation (input) one_symbol :: nat ("\<one>") where "\<one> \<equiv> 3" abbreviation (input) bar_symbol :: nat ("\<bar>") where "\<bar> \<equiv> 4" abbreviation (input) sharp_symbol :: nat ("\<sharp>") where "\<sharp> \<equiv> 5" no_notation abs ("\<bar>_\<bar>") text \<open> Tapes are infinite in one direction, so each cell can be addressed by a natural number. Likewise the position of a tape head is a natural number. The contents of a tape are represented by a mapping from cell numbers to symbols. A \emph{tape} is a pair of tape contents and head position: \<close> type_synonym tape = "(nat \<Rightarrow> symbol) \<times> nat" text \<open> Our formalization of Turing machines begins with a data type representing a more general concept, which we call \emph{machine}, and later adds a predicate to define which machines are \emph{Turing} machines. In this generalization the number $k$ of tapes is arbitrary, although machines with zero tapes are of little interest. Also, all tapes are writable and the alphabet is not limited, that is, $\Gamma = \nat$. 
The transition function becomes $ \delta\colon\ \{0, \dots, Q\} \times \nat^k \to \{0, \dots, Q\} \times \nat^k \times \{L,S,R\}^k $ or, saving us one occurrence of~$k$, $ \delta\colon\ \{0, \dots, Q\} \times \nat^k \to \{0, \dots, Q\} \times (\nat \times \{L,S,R\})^k\;. $ The transition function $\delta$ has a fixed behavior in the state $q_{halt} = Q$ (namely making the machine do nothing). Hence $\delta$ needs to be specified only for the $Q$ states $0, \dots, Q - 1$ and thus can be given as a sequence $\delta_0, \dots, \delta_{Q-1}$ where each $\delta_q$ is a function \begin{equation} \label{eq:wf} \delta_q\colon \nat^k \to \{0, \dots, Q\} \times (\nat \times \{L,S,R\})^k. \end{equation} Going one step further we allow the machine to jump to any state in $\nat$, and we will treat any state $q \geq Q$ as a halting state. The $\delta_q$ are then \begin{equation} \label{eq:proper} \delta_q\colon \nat^k \to \nat \times (\nat \times \{L,S,R\})^k. \end{equation} Finally we allow inputs and outputs of arbitrary length, turning the $\delta_q$ into \[ \delta_q\colon \nat^* \to \nat \times (\nat \times \{L,S,R\})^*. \] Such a $\delta_q$ will be called a \emph{command}, and the elements of $\nat \times \{L,S,R\}$ will be called \emph{actions}. An action consists of writing a symbol to a tape at the current tape head position and then moving the tape head. \<close> type_synonym action = "symbol \<times> direction" text \<open> A command maps the list of symbols read from the tapes to a follow-up state and a list of actions. It represents the machine's behavior in one state. \<close> type_synonym command = "symbol list \<Rightarrow> state \<times> action list" text \<open> Machines are then simply lists of commands. The $q$-th element of the list represents the machine's behavior in state $q$. The halting state of a machine $M$ is @{term "length M"}, but there is obviously no such element in the list. 
\<close> type_synonym machine = "command list" text \<open> Commands in this general form are too amorphous. We call a command \emph{well-formed} for $k$ tapes and the state space $Q$ if on reading $k$ symbols it performs $k$ actions and jumps to a state in $\{0, \dots, Q\}$. A well-formed command corresponds to (\ref{eq:wf}). \<close> definition wf_command :: "nat \<Rightarrow> nat \<Rightarrow> command \<Rightarrow> bool" where "wf_command k Q cmd \<equiv> \<forall>gs. length gs = k \<longrightarrow> length (snd (cmd gs)) = k \<and> fst (cmd gs) \<le> Q" text \<open> A well-formed command is a \emph{Turing command} for $k$ tapes and alphabet $G$ if it writes only symbols from $G$ when reading symbols from $G$ and does not write to tape $0$; that is, it writes to tape $0$ the symbol it read from tape~$0$. \<close> definition turing_command :: "nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> command \<Rightarrow> bool" where "turing_command k Q G cmd \<equiv> wf_command k Q cmd \<and> (\<forall>gs. length gs = k \<longrightarrow> ((\<forall>i<k. gs ! i < G) \<longrightarrow> (\<forall>i<k. fst (snd (cmd gs) ! i) < G)) \<and> (k > 0 \<longrightarrow> fst (snd (cmd gs) ! 0) = gs ! 0))" text \<open> A \emph{Turing machine} is a machine with at least two tapes and four symbols and only Turing commands. \<close> definition turing_machine :: "nat \<Rightarrow> nat \<Rightarrow> machine \<Rightarrow> bool" where "turing_machine k G M \<equiv> k \<ge> 2 \<and> G \<ge> 4 \<and> (\<forall>cmd\<in>set M. turing_command k (length M) G cmd)" subsubsection \<open>Semantics\<close> text \<open> Next we define the semantics of machines. The state and the list of tapes make up the \emph{configuration} of a machine. The semantics are given as functions mapping configurations to follow-up configurations. \<close> type_synonym config = "state \<times> tape list" text \<open> We start with the semantics of a single command. An action affects a tape in the following way. 
For the head movements we imagine the tapes having cell~0 at the left and the cell indices growing rightward. \<close> fun act :: "action \<Rightarrow> tape \<Rightarrow> tape" where "act (w, m) tp = ((fst tp)(snd tp:=w), case m of Left \<Rightarrow> snd tp - 1 | Stay \<Rightarrow> snd tp | Right \<Rightarrow> snd tp + 1)" text \<open> Reading symbols from one tape, from all tapes, and from configurations: \<close> abbreviation tape_read :: "tape \<Rightarrow> symbol" ("|.|") where "|.| tp \<equiv> fst tp (snd tp)" definition read :: "tape list \<Rightarrow> symbol list" where "read tps \<equiv> map tape_read tps" abbreviation config_read :: "config \<Rightarrow> symbol list" where "config_read cfg \<equiv> read (snd cfg)" text \<open> The semantics of a command: \<close> definition sem :: "command \<Rightarrow> config \<Rightarrow> config" where "sem cmd cfg \<equiv> let (newstate, actions) = cmd (config_read cfg) in (newstate, map (\<lambda>(a, tp). act a tp) (zip actions (snd cfg)))" text \<open> The semantics of one step of a machine consist in the semantics of the command corresponding to the state the machine is in. The following definition ensures that the configuration does not change when it is in a halting state. \<close> definition exe :: "machine \<Rightarrow> config \<Rightarrow> config" where "exe M cfg \<equiv> if fst cfg < length M then sem (M ! (fst cfg)) cfg else cfg" text \<open> Executing a machine $M$ for multiple steps: \<close> fun execute :: "machine \<Rightarrow> config \<Rightarrow> nat \<Rightarrow> config" where "execute M cfg 0 = cfg" | "execute M cfg (Suc t) = exe M (execute M cfg t)" text \<open> We have defined the semantics for arbitrary machines, but most lemmas we are going to prove about @{const exe}, @{const execute}, etc.\ will require the commands to be somewhat well-behaved, more precisely to map lists of $k$ symbols to lists of $k$ actions, as shown in (\ref{eq:proper}). We will call such commands \emph{proper}. 
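To make the step semantics just defined concrete, here is a hedged Python sketch mirroring @{const act}, @{const read}, @{const sem}, @{const exe}, and @{const execute}. It is not part of the theory: tape contents are modeled as dictionaries defaulting to the blank symbol 0 rather than total functions, and the two-tape copying machine at the end is my own example.

```python
LEFT, STAY, RIGHT = -1, 0, 1  # head movement directions

def act(action, tape):
    """Write a symbol at the head position, then move the head; as with
    natural-number subtraction, moving left from cell 0 stays at 0."""
    (symbol, move), (contents, pos) = action, tape
    return ({**contents, pos: symbol}, max(pos + move, 0))

def read(tapes):
    """The list of symbols under the tape heads (0 = blank)."""
    return [contents.get(pos, 0) for contents, pos in tapes]

def sem(cmd, cfg):
    """Semantics of one command applied to a configuration."""
    state, tapes = cfg
    new_state, actions = cmd(read(tapes))
    return (new_state, [act(a, tp) for a, tp in zip(actions, tapes)])

def exe(machine, cfg):
    """One step; configurations in a halting state do not change."""
    return sem(machine[cfg[0]], cfg) if cfg[0] < len(machine) else cfg

def execute(machine, cfg, t):
    """Execute a machine for t steps."""
    for _ in range(t):
        cfg = exe(machine, cfg)
    return cfg

# Example: a single-command machine (halting state 1) copying tape 0 to
# tape 1 until it reads a blank; writing back the symbol it read keeps
# tape 0 unchanged, as a Turing command must.
def copy_cmd(gs):
    if gs[0] != 0:
        return (0, [(gs[0], RIGHT), (gs[0], RIGHT)])
    return (1, [(gs[0], STAY), (gs[1], STAY)])

copy_machine = [copy_cmd]
```

Running @{text copy_machine} from state 0 on tapes containing $\textbf{0}\,\textbf{1}$ and an empty tape reaches the halting state 1 with the symbols copied to the second tape; extra steps beyond halting leave the configuration unchanged, matching @{const exe}.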
\<close> abbreviation proper_command :: "nat \<Rightarrow> command \<Rightarrow> bool" where "proper_command k cmd \<equiv> \<forall>gs. length gs = k \<longrightarrow> length (snd (cmd gs)) = length gs" text \<open> Being proper is a weaker condition than being well-formed. Since @{const exe} treats the state $Q$ and the states $q > Q$ the same, we do not need the $Q$-closure property of well-formedness for most lemmas about semantics. \<close> text \<open> Next we introduce a number of abbreviations for components of a machine and aspects of its behavior. In general, symbols between bars $|\cdot|$ represent operations on tapes, inside angle brackets $<\cdot>$ operations on configurations, between colons $:\!\cdot\!:$ operations on lists of tapes, and inside brackets $[\cdot]$ operations on state/action-list pairs. As for the symbol inside the delimiters, a dot ($.$) refers to a tape symbol, a colon ($:$) to the entire tape contents, and a hash ($\#$) to a head position; an equals sign ($=$) means some component of the left-hand side is changed. An exclamation mark ($!$) accesses an element in a list on the left-hand side term. \null \<close> abbreviation config_length :: "config \<Rightarrow> nat" ("||_||") where "config_length cfg \<equiv> length (snd cfg)" abbreviation tape_move_right :: "tape \<Rightarrow> nat \<Rightarrow> tape" (infixl "|+|" 60) where "tp |+| n \<equiv> (fst tp, snd tp + n)" abbreviation tape_move_left :: "tape \<Rightarrow> nat \<Rightarrow> tape" (infixl "|-|" 60) where "tp |-| n \<equiv> (fst tp, snd tp - n)" abbreviation tape_move_to :: "tape \<Rightarrow> nat \<Rightarrow> tape" (infixl "|#=|" 60) where "tp |#=| n \<equiv> (fst tp, n)" abbreviation tape_write :: "tape \<Rightarrow> symbol \<Rightarrow> tape" (infixl "|:=|" 60) where "tp |:=| h \<equiv> ((fst tp) (snd tp := h), snd tp)" abbreviation config_tape_by_no :: "config \<Rightarrow> nat \<Rightarrow> tape" (infix "<!>" 90) where "cfg <!> j \<equiv> snd cfg ! 
j" abbreviation config_contents_by_no :: "config \<Rightarrow> nat \<Rightarrow> (nat \<Rightarrow> symbol)" (infix "<:>" 100) where "cfg <:> j \<equiv> fst (cfg <!> j)" abbreviation config_pos_by_no :: "config \<Rightarrow> nat \<Rightarrow> nat" (infix "<#>" 100) where "cfg <#> j \<equiv> snd (cfg <!> j)" abbreviation config_symbol_read :: "config \<Rightarrow> nat \<Rightarrow> symbol" (infix "<.>" 100) where "cfg <.> j \<equiv> (cfg <:> j) (cfg <#> j)" abbreviation config_update_state :: "config \<Rightarrow> nat \<Rightarrow> config" (infix "<+=>" 90) where "cfg <+=> q \<equiv> (fst cfg + q, snd cfg)" abbreviation tapes_contents_by_no :: "tape list \<Rightarrow> nat \<Rightarrow> (nat \<Rightarrow> symbol)" (infix ":::" 100) where "tps ::: j \<equiv> fst (tps ! j)" abbreviation tapes_pos_by_no :: "tape list \<Rightarrow> nat \<Rightarrow> nat" (infix ":#:" 100) where "tps :#: j \<equiv> snd (tps ! j)" abbreviation tapes_symbol_read :: "tape list \<Rightarrow> nat \<Rightarrow> symbol" (infix ":.:" 100) where "tps :.: j \<equiv> (tps ::: j) (tps :#: j)" abbreviation jump_by_no :: "state \<times> action list \<Rightarrow> state" ("[*] _" [90]) where "[*] sas \<equiv> fst sas" abbreviation actions_of_cmd :: "state \<times> action list \<Rightarrow> action list" ("[!!] _" [100] 100) where "[!!] sas \<equiv> snd sas" abbreviation action_by_no :: "state \<times> action list \<Rightarrow> nat \<Rightarrow> action" (infix "[!]" 90) where "sas [!] j \<equiv> snd sas ! j" abbreviation write_by_no :: "state \<times> action list \<Rightarrow> nat \<Rightarrow> symbol" (infix "[.]" 90) where "sas [.] j \<equiv> fst (sas [!] j)" abbreviation direction_by_no :: "state \<times> action list \<Rightarrow> nat \<Rightarrow> direction" (infix "[~]" 100) where "sas [~] j \<equiv> snd (sas [!] 
j)" text \<open> Symbol sequences consisting of symbols from an alphabet $G$: \<close> abbreviation symbols_lt :: "nat \<Rightarrow> symbol list \<Rightarrow> bool" where "symbols_lt G rs \<equiv> \<forall>i<length rs. rs ! i < G" text \<open> We will frequently have to show that commands are proper or Turing commands. \<close> lemma turing_commandI [intro]: assumes "\<And>gs. length gs = k \<Longrightarrow> length ([!!] cmd gs) = length gs" and "\<And>gs. length gs = k \<Longrightarrow> (\<And>i. i < length gs \<Longrightarrow> gs ! i < G) \<Longrightarrow> (\<And>j. j < length gs \<Longrightarrow> cmd gs [.] j < G)" and "\<And>gs. length gs = k \<Longrightarrow> k > 0 \<Longrightarrow> cmd gs [.] 0 = gs ! 0" and "\<And>gs. length gs = k \<Longrightarrow> [*] (cmd gs) \<le> Q" shows "turing_command k Q G cmd" using assms turing_command_def wf_command_def by simp lemma turing_commandD: assumes "turing_command k Q G cmd" and "length gs = k" shows "length ([!!] cmd gs) = length gs" and "(\<And>i. i < length gs \<Longrightarrow> gs ! i < G) \<Longrightarrow> (\<And>j. j < length gs \<Longrightarrow> cmd gs [.] j < G)" and "k > 0 \<Longrightarrow> cmd gs [.] 0 = gs ! 0" and "\<And>gs. length gs = k \<Longrightarrow> [*] (cmd gs) \<le> Q" using assms turing_command_def wf_command_def by simp_all lemma turing_command_mono: assumes "turing_command k Q G cmd" and "Q \<le> Q'" shows "turing_command k Q' G cmd" using turing_command_def wf_command_def assms by auto lemma proper_command_length: assumes "proper_command k cmd" and "length gs = k" shows "length ([!!] cmd gs) = length gs" using assms by simp abbreviation proper_machine :: "nat \<Rightarrow> machine \<Rightarrow> bool" where "proper_machine k M \<equiv> \<forall>i<length M. proper_command k (M ! i)" lemma prop_list_append: assumes "\<forall>i<length M1. P (M1 ! i)" and "\<forall>i<length M2. P (M2 ! i)" shows "\<forall>i<length (M1 @ M2). P ((M1 @ M2) ! 
i)" using assms by (simp add: nth_append) text \<open> The empty Turing machine $[]$ is the one Turing machine where the start state is the halting state, that is, $q_\mathit{start} = q_\mathit{halt} = Q = 0$. It is a Turing machine for every $k \geq 2$ and $G \geq 4$: \<close> lemma Nil_tm: "G \<ge> 4 \<Longrightarrow> k \<ge> 2 \<Longrightarrow> turing_machine k G []" using turing_machine_def by simp lemma turing_machineI [intro]: assumes "k \<ge> 2" and "G \<ge> 4" and "\<And>i. i < length M \<Longrightarrow> turing_command k (length M) G (M ! i)" shows "turing_machine k G M" unfolding turing_machine_def using assms by (metis in_set_conv_nth) lemma turing_machineD: assumes "turing_machine k G M" shows "k \<ge> 2" and "G \<ge> 4" and "\<And>i. i < length M \<Longrightarrow> turing_command k (length M) G (M ! i)" using turing_machine_def assms by simp_all text \<open> A few lemmas about @{const act}, @{const read}, and @{const sem}: \null \<close> lemma act: "act a tp = ((fst tp)(snd tp := fst a), case snd a of Left \<Rightarrow> snd tp - 1 | Stay \<Rightarrow> snd tp | Right \<Rightarrow> snd tp + 1)" by (metis act.simps prod.collapse) lemma act_Stay: "j < length tps \<Longrightarrow> act (read tps ! j, Stay) (tps ! j) = tps ! j" by (simp add: read_def) lemma act_Right: "j < length tps \<Longrightarrow> act (read tps ! j, Right) (tps ! j) = tps ! j |+| 1" by (simp add: read_def) lemma act_Left: "j < length tps \<Longrightarrow> act (read tps ! j, Left) (tps ! j) = tps ! j |-| 1" by (simp add: read_def) lemma act_Stay': "act (h, Stay) (tps ! j) = tps ! j |:=| h" by simp lemma act_Right': "act (h, Right) (tps ! j) = tps ! j |:=| h |+| 1" by simp lemma act_Left': "act (h, Left) (tps ! j) = tps ! j |:=| h |-| 1" by simp lemma act_pos_le_Suc: "snd (act a (tps ! j)) \<le> Suc (snd (tps ! j))" proof - obtain w m where "a = (w, m)" by fastforce then show "snd (act a (tps ! j)) \<le> Suc (snd (tps ! 
j))" using act_Left' act_Stay' act_Right' by (cases m) simp_all qed lemma act_changes_at_most_pos: assumes "i \<noteq> snd tp" shows "fst (act (h, mv) tp) i = fst tp i" by (simp add: assms) lemma act_changes_at_most_pos': assumes "i \<noteq> snd tp" shows "fst (act a tp) i = fst tp i" by (simp add: assms act) lemma read_length: "length (read tps) = length tps" using read_def by simp lemma tapes_at_read: "j < length tps \<Longrightarrow> (q, tps) <.> j = read tps ! j" unfolding read_def by simp lemma tapes_at_read': "j < length tps \<Longrightarrow> tps :.: j = read tps ! j" unfolding read_def by simp lemma read_abbrev: "j < ||cfg|| \<Longrightarrow> read (snd cfg) ! j = cfg <.> j" unfolding read_def by simp lemma sem: "sem cmd cfg = (let rs = read (snd cfg) in (fst (cmd rs), map (\<lambda>(a, tp). act a tp) (zip (snd (cmd rs)) (snd cfg))))" using sem_def read_def by (metis (no_types, lifting) case_prod_beta) lemma sem': "sem cmd cfg = (fst (cmd (read (snd cfg))), map (\<lambda>(a, tp). act a tp) (zip (snd (cmd (read (snd cfg)))) (snd cfg)))" using sem_def read_def by (metis (no_types, lifting) case_prod_beta) lemma sem'': "sem cmd (q, tps) = (fst (cmd (read tps)), map (\<lambda>(a, tp). 
act a tp) (zip (snd (cmd (read tps))) tps))" using sem' by simp lemma sem_num_tapes_raw: "proper_command k cmd \<Longrightarrow> k = ||cfg|| \<Longrightarrow> k = ||sem cmd cfg||" using sem_def read_length by (simp add: case_prod_beta) lemma sem_num_tapes2: "turing_command k Q G cmd \<Longrightarrow> k = ||cfg|| \<Longrightarrow> k = ||sem cmd cfg||" using sem_num_tapes_raw turing_commandD(1) by simp corollary sem_num_tapes2': "turing_command ||cfg|| Q G cmd \<Longrightarrow> ||cfg|| = ||sem cmd cfg||" using sem_num_tapes2 by simp corollary sem_num_tapes3: "turing_command ||cfg|| Q G cmd \<Longrightarrow> ||cfg|| = ||sem cmd cfg||" by (simp add: turing_commandD(1) sem_num_tapes_raw) lemma sem_fst: assumes "cfg' = sem cmd cfg" and "rs = read (snd cfg)" shows "fst cfg' = fst (cmd rs)" using sem by (metis (no_types, lifting) assms(1) assms(2) fstI) lemma sem_snd: assumes "proper_command k cmd" and "||cfg|| = k" and "rs = read (snd cfg)" and "j < k" shows "sem cmd cfg <!> j = act (snd (cmd rs) ! j) (snd cfg ! j)" using assms sem' read_length by simp lemma snd_semI: assumes "proper_command k cmd" and "length tps = k" and "length tps' = k" and "\<And>j. j < k \<Longrightarrow> act (cmd (read tps) [!] j) (tps ! j) = tps' ! j" shows "snd (sem cmd (q, tps)) = snd (q', tps')" using assms sem_snd[OF assms(1)] sem_num_tapes_raw by (metis nth_equalityI snd_conv) lemma sem_snd_tm: assumes "turing_machine k G M" and "length tps = k" and "rs = read tps" and "j < k" and "q < length M" shows "sem (M ! q) (q, tps) <!> j = act (snd ((M ! q) rs) ! j) (tps ! j)" using assms sem_snd turing_machine_def turing_commandD(1) by (metis nth_mem snd_conv) lemma semI: assumes "proper_command k cmd" and "length tps = k" and "length tps' = k" and "fst (cmd (read tps)) = q'" and "\<And>j. j < k \<Longrightarrow> act (cmd (read tps) [!] j) (tps ! j) = tps' ! 
j" shows "sem cmd (q, tps) = (q', tps')" using snd_semI[OF assms(1,2,3)] assms(4,5) sem_fst by (metis prod.exhaust_sel snd_conv) text \<open> Commands ignore the state element of the configuration they are applied to. \<close> lemma sem_state_indep: assumes "snd cfg1 = snd cfg2" shows "sem cmd cfg1 = sem cmd cfg2" using sem_def assms by simp text \<open> A few lemmas about @{const exe} and @{const execute}: \<close> lemma exe_lt_length: "fst cfg < length M \<Longrightarrow> exe M cfg = sem (M ! (fst cfg)) cfg" using exe_def by simp lemma exe_ge_length: "fst cfg \<ge> length M \<Longrightarrow> exe M cfg = cfg" using exe_def by simp lemma exe_num_tapes: assumes "turing_machine k G M" and "k = ||cfg||" shows "k = ||exe M cfg||" using assms sem_num_tapes2 turing_machine_def exe_def by (metis nth_mem) lemma exe_num_tapes_proper: assumes "proper_machine k M" and "k = ||cfg||" shows "k = ||exe M cfg||" using assms sem_num_tapes_raw turing_machine_def exe_def by metis lemma execute_num_tapes_proper: assumes "proper_machine k M" and "k = ||cfg||" shows "k = ||execute M cfg t||" using exe_num_tapes_proper assms by (induction t) simp_all lemma execute_num_tapes: assumes "turing_machine k G M" and "k = ||cfg||" shows "k = ||execute M cfg t||" using exe_num_tapes assms by (induction t) simp_all lemma execute_after_halting: assumes "fst (execute M cfg0 t) = length M" shows "execute M cfg0 (t + n) = execute M cfg0 t" by (induction n) (simp_all add: assms exe_def) lemma execute_after_halting': assumes "fst (execute M cfg0 t) \<ge> length M" shows "execute M cfg0 (t + n) = execute M cfg0 t" by (induction n) (simp_all add: assms exe_ge_length) corollary execute_after_halting_ge: assumes "fst (execute M cfg0 t) = length M" and "t \<le> t'" shows "execute M cfg0 t' = execute M cfg0 t" using execute_after_halting assms le_Suc_ex by blast corollary execute_after_halting_ge': assumes "fst (execute M cfg0 t) \<ge> length M" and "t \<le> t'" shows "execute M cfg0 t' = execute M cfg0 t" 
using execute_after_halting' assms le_Suc_ex by blast lemma execute_additive: assumes "execute M cfg1 t1 = cfg2" and "execute M cfg2 t2 = cfg3" shows "execute M cfg1 (t1 + t2) = cfg3" using assms by (induction t2 arbitrary: cfg3) simp_all lemma turing_machine_execute_states: assumes "turing_machine k G M" and "fst cfg \<le> length M" and "||cfg|| = k" shows "fst (execute M cfg t) \<le> length M" proof (induction t) case 0 then show ?case by (simp add: assms(2)) next case (Suc t) then show ?case using turing_command_def assms(1,3) exe_def execute.simps(2) execute_num_tapes sem_fst turing_machine_def wf_command_def read_length by (smt (verit, best) nth_mem) qed text \<open> While running times are important, usually upper bounds for them suffice. The next predicate expresses that a machine \emph{transits} from one configuration to another one in at most a certain number of steps. \<close> definition transits :: "machine \<Rightarrow> config \<Rightarrow> nat \<Rightarrow> config \<Rightarrow> bool" where "transits M cfg1 t cfg2 \<equiv> \<exists>t'\<le>t. 
execute M cfg1 t' = cfg2" lemma transits_monotone: assumes "t \<le> t'" and "transits M cfg1 t cfg2" shows "transits M cfg1 t' cfg2" using assms dual_order.trans transits_def by auto lemma transits_additive: assumes "transits M cfg1 t1 cfg2" and "transits M cfg2 t2 cfg3" shows "transits M cfg1 (t1 + t2) cfg3" proof- from assms(1) obtain t1' where 1: "t1' \<le> t1" "execute M cfg1 t1' = cfg2" using transits_def by auto from assms(2) obtain t2' where 2: "t2' \<le> t2" "execute M cfg2 t2' = cfg3" using transits_def by auto then have "execute M cfg1 (t1' + t2') = cfg3" using execute_additive 1 by simp moreover have "t1' + t2' \<le> t1 + t2" using "1"(1) "2"(1) by simp ultimately show ?thesis using transits_def "1"(2) "2"(2) by auto qed lemma transitsI: assumes "execute M cfg1 t' = cfg2" and "t' \<le> t" shows "transits M cfg1 t cfg2" unfolding transits_def using assms by auto lemma execute_imp_transits: assumes "execute M cfg1 t = cfg2" shows "transits M cfg1 t cfg2" unfolding transits_def using assms by auto text \<open> In the vast majority of cases we are only interested in transitions from the start state to the halting state. One way to look at it is the machine \emph{transforms} a list of tapes to another list of tapes within a certain number of steps. \<close> definition transforms :: "machine \<Rightarrow> tape list \<Rightarrow> nat \<Rightarrow> tape list \<Rightarrow> bool" where "transforms M tps t tps' \<equiv> transits M (0, tps) t (length M, tps')" text \<open> The previous predicate will be the standard way in which we express the behavior of a (Turing) machine. 
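The predicates @{const transits} and @{const transforms} rest on the step functions @{const sem}, @{const exe} and @{const execute} defined earlier. A minimal Python sketch of this execution model (an illustration only, with deliberately simplified encodings: a tape is a pair of a cell dictionary, defaulting to the blank symbol 0, and a head position; a command maps the read symbols to a new state and a list of actions, each a pair of a symbol to write and a head movement):

```python
def sem(cmd, cfg):
    """Apply one command to a configuration (state, tapes)."""
    q, tapes = cfg
    rs = [tape.get(pos, 0) for tape, pos in tapes]   # symbols under the heads
    q2, actions = cmd(rs)
    tapes2 = []
    for (write, move), (tape, pos) in zip(actions, tapes):
        tape = dict(tape)                            # copy, then write
        tape[pos] = write
        tapes2.append((tape, max(0, pos + move)))    # head never leaves cell 0 to the left
    return (q2, tapes2)

def exe(M, cfg):
    """One step: once the state reaches len(M), the machine has halted."""
    return sem(M[cfg[0]], cfg) if cfg[0] < len(M) else cfg

def execute(M, cfg, t):
    """Run M for t steps."""
    for _ in range(t):
        cfg = exe(M, cfg)
    return cfg

def transforms(M, tps, t, tps2):
    """M transforms tps into tps2 within at most t steps."""
    return any(execute(M, (0, tps), t2) == (len(M), tps2) for t2 in range(t + 1))
```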
Consider, for example, the empty machine:
\<close>

lemma transforms_Nil: "transforms [] tps 0 tps"
  using transforms_def transits_def by simp

lemma transforms_monotone:
  assumes "transforms M tps t tps'" and "t \<le> t'"
  shows "transforms M tps t' tps'"
  using assms transforms_def transits_monotone by simp

text \<open>
Most often the tapes will have a start symbol in the first cell followed by a
finite sequence of symbols.
\<close>

definition contents :: "symbol list \<Rightarrow> (nat \<Rightarrow> symbol)" ("\<lfloor>_\<rfloor>") where
  "\<lfloor>xs\<rfloor> i \<equiv> if i = 0 then \<triangleright> else if i \<le> length xs then xs ! (i - 1) else \<box>"

lemma contents_at_0 [simp]: "\<lfloor>zs\<rfloor> 0 = \<triangleright>"
  using contents_def by simp

lemma contents_inbounds [simp]: "i > 0 \<Longrightarrow> i \<le> length zs \<Longrightarrow> \<lfloor>zs\<rfloor> i = zs ! (i - 1)"
  using contents_def by simp

lemma contents_outofbounds [simp]: "i > length zs \<Longrightarrow> \<lfloor>zs\<rfloor> i = \<box>"
  using contents_def by simp

text \<open>
When Turing machines are used to compute functions, they are started in a
specific configuration where all tapes have the format just defined and the
first tape contains the input. This is called the \emph{start
configuration}~\cite[p.~13]{ccama}.
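Both notions can be mirrored in Python (a sketch for illustration only, assuming the numeric symbol codes used throughout this development: blank @{text \<box>} = 0 and start @{text \<triangleright>} = 1):

```python
BLANK, START = 0, 1  # numeric codes of the blank and start symbols

def contents(xs):
    """Tape contents: the start symbol, then xs, then all blanks."""
    return lambda i: START if i == 0 else (xs[i - 1] if i <= len(xs) else BLANK)

def start_config(k, xs):
    """State 0; input tape holding xs, k - 1 empty tapes, all heads on cell 0."""
    return (0, [(contents(xs), 0)] + [(contents([]), 0) for _ in range(k - 1)])
```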
\<close>

definition start_config :: "nat \<Rightarrow> symbol list \<Rightarrow> config" where
  "start_config k xs \<equiv> (0, (\<lfloor>xs\<rfloor>, 0) # replicate (k - 1) (\<lfloor>[]\<rfloor>, 0))"

lemma start_config_length: "k > 0 \<Longrightarrow> ||start_config k xs|| = k"
  using start_config_def contents_def by simp

lemma start_config1:
  assumes "cfg = start_config k xs" and "0 < j" and "j < k" and "i > 0"
  shows "(cfg <:> j) i = \<box>"
  using start_config_def contents_def assms by simp

lemma start_config2:
  assumes "cfg = start_config k xs" and "j < k"
  shows "(cfg <:> j) 0 = \<triangleright>"
  using start_config_def contents_def assms by (cases "0 = j") simp_all

lemma start_config3:
  assumes "cfg = start_config k xs" and "i > 0" and "i \<le> length xs"
  shows "(cfg <:> 0) i = xs ! (i - 1)"
  using start_config_def contents_def assms by simp

lemma start_config4:
  assumes "0 < j" and "j < k"
  shows "snd (start_config k xs) ! j = (\<lambda>i. if i = 0 then \<triangleright> else \<box>, 0)"
  using start_config_def contents_def assms by auto

lemma start_config_pos: "j < k \<Longrightarrow> start_config k zs <#> j = 0"
  using start_config_def by (simp add: nth_Cons')

text \<open>
We call a symbol \emph{proper} if it is neither the blank symbol nor the start
symbol.
\<close>

abbreviation proper_symbols :: "symbol list \<Rightarrow> bool" where
  "proper_symbols xs \<equiv> \<forall>i<length xs. xs ! i > Suc 0"

lemma proper_symbols_append:
  assumes "proper_symbols xs" and "proper_symbols ys"
  shows "proper_symbols (xs @ ys)"
  using assms prop_list_append by (simp add: nth_append)

lemma proper_symbols_ne0: "proper_symbols xs \<Longrightarrow> \<forall>i<length xs. xs ! i \<noteq> \<box>"
  by auto

lemma proper_symbols_ne1: "proper_symbols xs \<Longrightarrow> \<forall>i<length xs. xs ! i \<noteq> \<triangleright>"
  by auto

text \<open>
We call the symbols \textbf{0} and \textbf{1} \emph{bit symbols}.
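As a Python sketch (illustration only; in this development the bit symbols carry the numeric codes 2 and 3, written @{text \<zero>} and @{text \<one>}, so that proper symbols are exactly those greater than 1):

```python
ZERO, ONE = 2, 3  # numeric codes of the two bit symbols

def proper_symbols(xs):
    """No blank (0) and no start symbol (1) occurs in xs."""
    return all(z > 1 for z in xs)

def bit_symbols(xs):
    """Every symbol of xs is one of the two bit symbols."""
    return all(z in (ZERO, ONE) for z in xs)
```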
\<close> abbreviation bit_symbols :: "nat list \<Rightarrow> bool" where "bit_symbols xs \<equiv> \<forall>i<length xs. xs ! i = \<zero> \<or> xs ! i = \<one>" lemma bit_symbols_append: assumes "bit_symbols xs" and "bit_symbols ys" shows "bit_symbols (xs @ ys)" using assms prop_list_append by (simp add: nth_append) subsubsection \<open>Basic facts about Turing machines\<close> text \<open> A Turing machine with alphabet $G$ started on a symbol sequence over $G$ will only ever have symbols from $G$ on any of its tapes. \<close> lemma tape_alphabet: assumes "turing_machine k G M" and "symbols_lt G zs" and "j < k" shows "((execute M (start_config k zs) t) <:> j) i < G" using assms(3) proof (induction t arbitrary: i j) case 0 have "G \<ge> 2" using turing_machine_def assms(1) by simp then show ?case using start_config_def contents_def 0 assms(2) start_config1 start_config2 by (smt One_nat_def Suc_1 Suc_lessD Suc_pred execute.simps(1) fst_conv lessI nat_less_le neq0_conv nth_Cons_0 snd_conv) next case (Suc t) let ?cfg = "execute M (start_config k zs) t" have *: "execute M (start_config k zs) (Suc t) = exe M ?cfg" by simp show ?case proof (cases "fst ?cfg \<ge> length M") case True then have "execute M (start_config k zs) (Suc t) = ?cfg" using * exe_def by simp then show ?thesis using Suc by simp next case False then have **: "execute M (start_config k zs) (Suc t) = sem (M ! (fst ?cfg)) ?cfg" using * exe_def by simp let ?rs = "config_read ?cfg" let ?cmd = "M ! (fst ?cfg)" let ?sas = "?cmd ?rs" let ?cfg' = "sem ?cmd ?cfg" have "\<forall>j<length ?rs. ?rs ! j < G" using Suc assms(1) execute_num_tapes start_config_length read_abbrev read_length by auto moreover have len: "length ?rs = k" using assms(1) assms(3) execute_num_tapes start_config_def read_length by auto moreover have 2: "turing_command k (length M) G ?cmd" using assms(1) turing_machine_def False leI by simp ultimately have sas: "\<forall>j<length ?rs. ?sas [.] 
j < G" using turing_command_def by simp have "?cfg' <!> j = act (?sas [!] j) (?cfg <!> j)" using Suc.prems 2 len read_length sem_snd turing_commandD(1) by metis then have "?cfg' <:> j = (?cfg <:> j)(?cfg <#> j := ?sas [.] j)" using act by simp then have "(?cfg' <:> j) i < G" by (simp add: len Suc sas) then show ?thesis using ** by simp qed qed corollary read_alphabet: assumes "turing_machine k G M" and "symbols_lt G zs" shows "\<forall>i<k. config_read (execute M (start_config k zs) t) ! i < G" using assms tape_alphabet execute_num_tapes start_config_length read_abbrev by simp corollary read_alphabet': assumes "turing_machine k G M" and "symbols_lt G zs" shows "symbols_lt G (config_read (execute M (start_config k zs) t))" using read_alphabet assms execute_num_tapes start_config_length read_length turing_machine_def by (metis neq0_conv not_numeral_le_zero) corollary read_alphabet_set: assumes "turing_machine k G M" and "symbols_lt G zs" shows "\<forall>h\<in>set (config_read (execute M (start_config k zs) t)). h < G" using read_alphabet'[OF assms] by (metis in_set_conv_nth) text \<open> The contents of the input tape never change. \<close> lemma input_tape_constant: assumes "turing_machine k G M" and "k = ||cfg||" shows "execute M cfg t <:> 0 = execute M cfg 0 <:> 0" proof (induction t) case 0 then show ?case by simp next case (Suc t) let ?cfg = "execute M cfg t" have 1: "execute M cfg (Suc t) = exe M ?cfg" by simp have 2: "length (read (snd ?cfg)) = k" using execute_num_tapes assms read_length by simp have k: "k > 0" using assms(1) turing_machine_def by simp show ?case proof (cases "fst ?cfg < length M") case True then have 3: "turing_command k (length M) G (M ! fst ?cfg)" using turing_machine_def assms(1) by simp then have "(M ! fst ?cfg) (read (snd ?cfg)) [.] 0 = read (snd ?cfg) ! 0" using turing_command_def 2 k by auto then have 4: "(M ! fst ?cfg) (read (snd ?cfg)) [.] 
0 = ?cfg <.> 0" using 2 k read_abbrev read_length by auto have "execute M cfg (Suc t) <:> 0 = sem (M ! fst ?cfg) ?cfg <:> 0" using True exe_def by simp also have "... = fst (act (((M ! fst ?cfg) (read (snd ?cfg))) [!] 0) (?cfg <!> 0))" using sem_snd 2 3 k read_length turing_commandD(1) by metis also have "... = (?cfg <:> 0) ((?cfg <#> 0):=(((M ! fst ?cfg) (read (snd ?cfg))) [.] 0))" using act by simp also have "... = (?cfg <:> 0) ((?cfg <#> 0):=?cfg <.> 0)" using 4 by simp also have "... = ?cfg <:> 0" by simp finally have "execute M cfg (Suc t) <:> 0 = ?cfg <:> 0" . then show ?thesis using Suc by simp next case False then have "execute M cfg (Suc t) = ?cfg" using exe_def by simp then show ?thesis using Suc by simp qed qed text \<open> A head position cannot be greater than the number of steps the machine has been running. \<close> lemma head_pos_le_time: assumes "turing_machine k G M" and "j < k" shows "execute M (start_config k zs) t <#> j \<le> t" proof (induction t) case 0 have "0 < k" using assms(1) turing_machine_def by simp then have "execute M (start_config k zs) 0 <#> j = 0" using start_config_def assms(2) start_config_pos by simp then show ?case by simp next case (Suc t) have *: "execute M (start_config k zs) (Suc t) = exe M (execute M (start_config k zs) t)" (is "_ = exe M ?cfg") by simp show ?case proof (cases "fst ?cfg = length M") case True then have "execute M (start_config k zs) (Suc t) = ?cfg" using * exe_def by simp then show ?thesis using Suc by simp next case False then have less: "fst ?cfg < length M" using assms(1) turing_machine_def by (simp add: start_config_def le_neq_implies_less turing_machine_execute_states) then have "exe M ?cfg = sem (M ! (fst ?cfg)) ?cfg" using exe_def by simp moreover have "proper_command k (M ! (fst ?cfg))" using assms(1) turing_commandD(1) less turing_machine_def nth_mem by blast ultimately have "exe M ?cfg <!> j = act (snd ((M ! (fst ?cfg)) (config_read ?cfg)) ! 
j) (?cfg <!> j)" using assms(1,2) execute_num_tapes start_config_length sem_snd by auto then have "exe M ?cfg <#> j \<le> Suc (?cfg <#> j)" using act_pos_le_Suc assms(1,2) execute_num_tapes start_config_length by auto then show ?thesis using * Suc.IH by simp qed qed lemma head_pos_le_halting_time: assumes "turing_machine k G M" and "fst (execute M (start_config k zs) T) = length M" and "j < k" shows "execute M (start_config k zs) t <#> j \<le> T" using assms execute_after_halting_ge[OF assms(2)] head_pos_le_time[OF assms(1,3)] by (metis nat_le_linear order_trans) text \<open> A tape cannot contain non-blank symbols at a position larger than the number of steps the Turing machine has been running, except on the input tape. \<close> lemma blank_after_time: assumes "i > t" and "j < k" and "0 < j" and "turing_machine k G M" shows "(execute M (start_config k zs) t <:> j) i = \<box>" using assms(1) proof (induction t) case 0 have "execute M (start_config k zs) 0 = start_config k zs" by simp then show ?case using start_config1 assms turing_machine_def by simp next case (Suc t) have "k \<ge> 2" using assms(2,3) by simp let ?icfg = "start_config k zs" have *: "execute M ?icfg (Suc t) = exe M (execute M ?icfg t)" by simp show ?case proof (cases "fst (execute M ?icfg t) \<ge> length M") case True then have "execute M ?icfg (Suc t) = execute M ?icfg t" using * exe_def by simp then show ?thesis using Suc by simp next case False then have "execute M ?icfg (Suc t) <:> j = sem (M ! (fst (execute M ?icfg t))) (execute M ?icfg t) <:> j" (is "_ = sem ?cmd ?cfg <:> j") using exe_lt_length * by simp also have "... = fst (map (\<lambda>(a, tp). act a tp) (zip (snd (?cmd (read (snd ?cfg)))) (snd ?cfg)) ! j)" using sem' by simp also have "... = fst (act (snd (?cmd (read (snd ?cfg))) ! j) (snd ?cfg ! j))" (is "_ = fst (act ?h (snd ?cfg ! 
j))") proof - have "||?cfg|| = k" using assms(2) execute_num_tapes[OF assms(4)] start_config_length turing_machine_def by simp moreover have "length (snd (?cmd (read (snd ?cfg)))) = k" using assms(4) execute_num_tapes[OF assms(4)] start_config_length turing_machine_def read_length False turing_command_def wf_command_def by simp ultimately show ?thesis using assms by simp qed finally have "execute M ?icfg (Suc t) <:> j = fst (act ?h (snd ?cfg ! j))" . moreover have "i \<noteq> ?cfg <#> j" using head_pos_le_time[OF assms(4,2)] Suc Suc_lessD leD by blast ultimately have "(execute M ?icfg (Suc t) <:> j) i = fst (?cfg <!> j) i" using act_changes_at_most_pos by (metis prod.collapse) then show ?thesis using Suc Suc_lessD by presburger qed qed subsection \<open>Computing a function\label{s:tm-basic-comp}\<close> text \<open> Turing machines are supposed to compute functions. The functions in question map bit strings to bit strings. We model such strings as lists of Booleans and denote the bits by @{text \<bbbO>} and @{text \<bbbI>}. \<close> type_synonym string = "bool list" notation False ("\<bbbO>") and True ("\<bbbI>") text \<open> This keeps the more abstract level of computable functions separate from the level of concrete implementations as Turing machines, which can use an arbitrary alphabet. We use the term ``string'' only for bit strings, on which functions operate, and the terms ``symbol sequence'' or ``symbols'' for the things written on the tapes of Turing machines. We translate between the two levels in a straightforward way: \<close> abbreviation string_to_symbols :: "string \<Rightarrow> symbol list" where "string_to_symbols x \<equiv> map (\<lambda>b. if b then \<one> else \<zero>) x" abbreviation symbols_to_string :: "symbol list \<Rightarrow> string" where "symbols_to_string zs \<equiv> map (\<lambda>z. 
z = \<one>) zs" proposition "string_to_symbols [\<bbbO>, \<bbbI>] = [\<zero>, \<one>]" "symbols_to_string [\<zero>, \<one>] = [\<bbbO>, \<bbbI>]" by simp_all lemma bit_symbols_to_symbols: assumes "bit_symbols zs" shows "string_to_symbols (symbols_to_string zs) = zs" using assms by (intro nth_equalityI) auto lemma symbols_to_string_to_symbols: "symbols_to_string (string_to_symbols x) = x" by (intro nth_equalityI) simp_all lemma proper_symbols_to_symbols: "proper_symbols (string_to_symbols zs)" by simp abbreviation string_to_contents :: "string \<Rightarrow> (nat \<Rightarrow> symbol)" where "string_to_contents x \<equiv> \<lambda>i. if i = 0 then \<triangleright> else if i \<le> length x then (if x ! (i - 1) then \<one> else \<zero>) else \<box>" lemma contents_string_to_contents: "string_to_contents xs = \<lfloor>string_to_symbols xs\<rfloor>" using contents_def by auto lemma bit_symbols_to_contents: assumes "bit_symbols ns" shows "\<lfloor>ns\<rfloor> = string_to_contents (symbols_to_string ns)" using assms bit_symbols_to_symbols contents_string_to_contents by simp text \<open> Definition~1.3 in the textbook~\cite{ccama} says that for a Turing machine $M$ to compute a function $f\colon\bbOI^*\to\bbOI^*$ on input $x$, ``it halts with $f(x)$ written on its output tape.'' My initial interpretation of this phrase, and the one formalized below, was that the output is written \emph{after} the start symbol $\triangleright$ in the same fashion as the input is given on the input tape. However after inspecting the Turing machine in Example~1.1, I now believe the more likely meaning is that the output \emph{overwrites} the start symbol, although Example~1.1 precedes Definition~1.3 and might not be subject to it. One advantage of the interpretation with start symbol intact is that the output tape can then be used unchanged as the input of another Turing machine, a property we exploit in Section~\ref{s:tm-composing}. 
Otherwise one would have to find the start cell of the output tape and either
copy the contents to another tape with start symbol or shift the string to the
right and restore the start symbol. One way to find the start cell is to move
the tape head left while ``marking'' the cells until one reaches an already
marked cell, which can only happen when the head is in the start cell, where
``moving left'' does not actually move the head. This process will take time
linear in the length of the output and thus will not change the asymptotic
running time of the machine. Therefore the choice of interpretation is purely
one of convenience.

\null
\<close>

definition halts :: "machine \<Rightarrow> config \<Rightarrow> bool" where
  "halts M cfg \<equiv> \<exists>t. fst (execute M cfg t) = length M"

lemma halts_impl_le_length:
  assumes "halts M cfg"
  shows "fst (execute M cfg t) \<le> length M"
  using assms execute_after_halting_ge' halts_def by (metis linear)

definition running_time :: "machine \<Rightarrow> config \<Rightarrow> nat" where
  "running_time M cfg \<equiv> LEAST t. fst (execute M cfg t) = length M"

lemma running_timeD:
  assumes "running_time M cfg = t" and "halts M cfg"
  shows "fst (execute M cfg t) = length M"
    and "\<And>t'. t' < t \<Longrightarrow> fst (execute M cfg t') \<noteq> length M"
  using assms running_time_def halts_def
    not_less_Least[of _ "\<lambda>t. fst (execute M cfg t) = length M"]
    LeastI[of "\<lambda>t. fst (execute M cfg t) = length M"]
  by auto

definition halting_config :: "machine \<Rightarrow> config \<Rightarrow> config" where
  "halting_config M cfg \<equiv> execute M cfg (running_time M cfg)"

abbreviation start_config_string :: "nat \<Rightarrow> string \<Rightarrow> config" where
  "start_config_string k x \<equiv> start_config k (string_to_symbols x)"

text \<open>
Another, inconsequential, difference from the textbook definition is that we
designate the second tape, rather than the last tape, as the output tape.
This means that the indices for the input and output tape are fixed at~0 and~1, respectively, regardless of the total number of tapes. Next is our definition of a $k$-tape Turing machine $M$ computing a function $f$ in $T$-time: \<close> definition computes_in_time :: "nat \<Rightarrow> machine \<Rightarrow> (string \<Rightarrow> string) \<Rightarrow> (nat \<Rightarrow> nat) \<Rightarrow> bool" where "computes_in_time k M f T \<equiv> \<forall>x. halts M (start_config_string k x) \<and> running_time M (start_config_string k x) \<le> T (length x) \<and> halting_config M (start_config_string k x) <:> 1 = string_to_contents (f x)" lemma computes_in_time_mono: assumes "computes_in_time k M f T" and "\<And>n. T n \<le> T' n" shows "computes_in_time k M f T'" using assms computes_in_time_def halts_def running_time_def halting_config_def execute_after_halting_ge by (meson dual_order.trans) text \<open> The definition of @{const computes_in_time} can be expressed with @{const transforms} as well, which will be more convenient for us. \<close> lemma halting_config_execute: assumes "fst (execute M cfg t) = length M" shows "halting_config M cfg = execute M cfg t" proof- have 1: "t \<ge> running_time M cfg" using assms running_time_def by (simp add: Least_le) then have "fst (halting_config M cfg) = length M" using assms LeastI[of "\<lambda>t. 
fst (execute M cfg t) = length M" t] by (simp add: halting_config_def running_time_def) then show ?thesis using execute_after_halting_ge 1 halting_config_def by metis qed lemma transforms_halting_config: assumes "transforms M tps t tps'" shows "halting_config M (0, tps) = (length M, tps')" using assms transforms_def halting_config_def halting_config_execute transits_def by (metis fst_eqD) lemma computes_in_time_execute: assumes "computes_in_time k M f T" shows "execute M (start_config_string k x) (T (length x)) <:> 1 = string_to_contents (f x)" proof - let ?t = "running_time M (start_config_string k x)" let ?cfg = "start_config_string k x" have "execute M ?cfg ?t = halting_config M ?cfg" using halting_config_def by simp then have "fst (execute M ?cfg ?t) = length M" using assms computes_in_time_def running_timeD(1) by blast moreover have "?t \<le> T (length x)" using computes_in_time_def assms by simp ultimately have "execute M ?cfg ?t = execute M ?cfg (T (length x)) " using execute_after_halting_ge by presburger moreover have "execute M ?cfg ?t <:> 1 = string_to_contents (f x)" using computes_in_time_def halting_config_execute assms halting_config_def by simp ultimately show ?thesis by simp qed lemma transforms_running_time: assumes "transforms M tps t tps'" shows "running_time M (0, tps) \<le> t" using running_time_def transforms_def transits_def by (smt Least_le[of _ t] assms execute_after_halting_ge fst_conv) text \<open> This is the alternative characterization of @{const computes_in_time}: \<close> lemma computes_in_time_alt: "computes_in_time k M f T = (\<forall>x. \<exists>tps. 
tps ::: 1 = string_to_contents (f x) \<and> transforms M (snd (start_config_string k x)) (T (length x)) tps)" (is "?lhs = ?rhs") proof show "?lhs \<Longrightarrow> ?rhs" proof fix x :: string let ?cfg = "start_config_string k x" assume "computes_in_time k M f T" then have 1: "halts M ?cfg" and 2: "running_time M ?cfg \<le> T (length x)" and 3: "halting_config M ?cfg <:> 1 = string_to_contents (f x)" using computes_in_time_def by simp_all define cfg where "cfg = halting_config M ?cfg" then have "transits M ?cfg (T (length x)) cfg" using 2 halting_config_def transits_def by auto then have "transforms M (snd ?cfg) (T (length x)) (snd cfg)" using transits_def transforms_def start_config_def by (metis (no_types, lifting) "1" cfg_def halting_config_def prod.collapse running_timeD(1) snd_conv) moreover have "snd cfg ::: 1 = string_to_contents (f x)" using cfg_def 3 by simp ultimately show "\<exists>tps. tps ::: 1 = string_to_contents (f x) \<and> transforms M (snd (start_config_string k x)) (T (length x)) tps" by auto qed show "?rhs \<Longrightarrow> ?lhs" unfolding computes_in_time_def proof assume rhs: ?rhs fix x :: string let ?cfg = "start_config_string k x" obtain tps where tps: "tps ::: 1 = string_to_contents (f x)" "transforms M (snd ?cfg) (T (length x)) tps" using rhs by auto then have "transits M ?cfg (T (length x)) (length M, tps)" using transforms_def start_config_def by simp then have 1: "halts M ?cfg" using halts_def transits_def by (metis fst_eqD) moreover have 2: "running_time M ?cfg \<le> T (length x)" using tps(2) transforms_running_time start_config_def by simp moreover have 3: "halting_config M ?cfg <:> 1 = string_to_contents (f x)" proof - have "halting_config M ?cfg = (length M, tps)" using transforms_halting_config[OF tps(2)] start_config_def by simp then show ?thesis using tps(1) by simp qed ultimately show "halts M ?cfg \<and> running_time M ?cfg \<le> T (length x) \<and> halting_config M ?cfg <:> 1 = string_to_contents (f x)" by simp qed qed lemma 
computes_in_timeD:
  fixes x
  assumes "computes_in_time k M f T"
  shows "\<exists>tps. tps ::: 1 = string_to_contents (f x) \<and>
    transforms M (snd (start_config k (string_to_symbols x))) (T (length x)) tps"
  using assms computes_in_time_alt by simp

lemma computes_in_timeI [intro]:
  assumes "\<And>x. \<exists>tps. tps ::: 1 = string_to_contents (f x) \<and>
    transforms M (snd (start_config k (string_to_symbols x))) (T (length x)) tps"
  shows "computes_in_time k M f T"
  using assms computes_in_time_alt by simp

text \<open>
As an example, the function mapping every string to the empty string is
computable within any time bound by the empty Turing machine.
\<close>

lemma computes_Nil_empty:
  assumes "k \<ge> 2"
  shows "computes_in_time k [] (\<lambda>x. []) T"
proof
  fix x :: string
  let ?tps = "snd (start_config_string k x)"
  let ?f = "\<lambda>x. []"
  have "?tps ::: 1 = string_to_contents (?f x)"
    using start_config4 assms by auto
  moreover have "transforms [] ?tps (T (length x)) ?tps"
    using transforms_Nil transforms_monotone by blast
  ultimately show "\<exists>tps. tps ::: 1 = string_to_contents (?f x) \<and>
      transforms [] ?tps (T (length x)) tps"
    by auto
qed

subsection \<open>Pairing strings\label{s:tm-basic-pair}\<close>

text \<open>
In order to define the computability of functions with two arguments, we need a
way to encode a pair of strings as one string. The idea is to write the two
strings with a separator, for example,
$\bbbO\bbbI\bbbO\bbbO\#\bbbI\bbbI\bbbI\bbbO$ and then encode every symbol
$\bbbO, \bbbI, \#$ by two bits from $\bbOI$. We slightly deviate from Arora and
Barak's encoding~\cite[p.~2]{ccama} and map $\bbbO$ to $\bbbO\bbbO$, $\bbbI$ to
$\bbbO\bbbI$, and \# to $\bbbI\bbbI$, the idea being that the first bit signals
whether the second bit is to be taken literally or as a special character. Our
example turns into
$\bbbO\bbbO\bbbO\bbbI\bbbO\bbbO\bbbO\bbbO\bbbI\bbbI\bbbO\bbbI\bbbO\bbbI\bbbO\bbbI\bbbO\bbbO$.
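A Python sketch of this encoding, with the bits @{text \<bbbO>} and @{text \<bbbI>} rendered as the booleans False and True (an illustration only):

```python
def bitenc(x):
    """Encode each bit b of x as the two bits [False, b]."""
    return [bit for b in x for bit in (False, b)]

def string_pair(x, y):
    """The pair <x, y>: encoded x, the separator [True, True], encoded y."""
    return bitenc(x) + [True, True] + bitenc(y)
```

Since every even-indexed bit produced by bitenc is False, the first True in an even position marks the separator, which is what makes the pairing decodable (and injective).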
\null \<close> abbreviation bitenc :: "string \<Rightarrow> string" where "bitenc x \<equiv> concat (map (\<lambda>h. [\<bbbO>, h]) x)" definition string_pair :: "string \<Rightarrow> string \<Rightarrow> string" ("\<langle>_, _\<rangle>") where "\<langle>x, y\<rangle> \<equiv> bitenc x @ [\<bbbI>, \<bbbI>] @ bitenc y" text \<open> Our example: \<close> proposition "\<langle>[\<bbbO>, \<bbbI>, \<bbbO>, \<bbbO>], [\<bbbI>, \<bbbI>, \<bbbI>, \<bbbO>]\<rangle> = [\<bbbO>, \<bbbO>, \<bbbO>, \<bbbI>, \<bbbO>, \<bbbO>, \<bbbO>, \<bbbO>, \<bbbI>, \<bbbI>, \<bbbO>, \<bbbI>, \<bbbO>, \<bbbI>, \<bbbO>, \<bbbI>, \<bbbO>, \<bbbO>]" using string_pair_def by simp lemma length_string_pair: "length \<langle>x, y\<rangle> = 2 * length x + 2 * length y + 2" proof - have "length (concat (map (\<lambda>h. [\<bbbO>, h]) z)) = 2 * length z" for z by (induction z) simp_all then show ?thesis using string_pair_def by simp qed lemma length_bitenc: "length (bitenc z) = 2 * length z" by (induction z) simp_all lemma bitenc_nth: assumes "i < length zs" shows "bitenc zs ! (2 * i) = \<bbbO>" and "bitenc zs ! (2 * i + 1) = zs ! i" proof - let ?f = "\<lambda>h. [\<bbbO>, h]" let ?xs = "concat (map ?f zs)" have eqtake: "bitenc (take i zs) = take (2 * i) (bitenc zs)" if "i \<le> length zs" for i zs proof - have "take (2 * i) (bitenc zs) = take (2 * i) (bitenc (take i zs @ drop i zs))" by simp then have "take (2 * i) (bitenc zs) = take (2 * i) (bitenc (take i zs) @ (bitenc (drop i zs)))" by (metis concat_append map_append) then show ?thesis using length_bitenc that by simp qed have eqdrop: "bitenc (drop i zs) = drop (2 * i) (bitenc zs)" if "i < length zs" for i proof - have "drop (2 * i) (bitenc zs) = drop (2 * i) (bitenc (take i zs @ drop i zs))" by simp then have "drop (2 * i) (bitenc zs) = drop (2 * i) (bitenc (take i zs) @ bitenc (drop i zs))" by (metis concat_append map_append) then show ?thesis using length_bitenc that by simp qed have take2: "take 2 (drop (2 * i) (bitenc zs)) = ?f (zs ! 
i)" if "i < length zs" for i proof - have 1: "1 \<le> length (drop i zs)" using that by simp have "take 2 (drop (2*i) (bitenc zs)) = take 2 (bitenc (drop i zs))" using that eqdrop by simp also have "... = bitenc (take 1 (drop i zs))" using 1 eqtake by simp also have "... = bitenc [zs ! i]" using that by (metis Cons_nth_drop_Suc One_nat_def take0 take_Suc_Cons) also have "... = ?f (zs ! i)" by simp finally show ?thesis . qed show "bitenc zs ! (2 * i) = \<bbbO>" proof - have "bitenc zs ! (2 * i) = drop (2 * i) (bitenc zs) ! 0" using assms drop0 length_bitenc by simp also have "... = take 2 (drop (2 * i) (bitenc zs)) ! 0" using eqdrop by simp also have "... = ?f (zs ! i) ! 0" using assms take2 by simp also have "... = \<bbbO>" by simp finally show ?thesis . qed show "bitenc zs ! (2*i + 1) = zs ! i" proof - have "bitenc zs ! (2*i+1) = drop (2 * i) (bitenc zs) ! 1" using assms length_bitenc by simp also have "... = take 2 (drop (2*i) (bitenc zs)) ! 1" using eqdrop by simp also have "... = ?f (zs ! i) ! 1" using assms(1) take2 by simp also have "... = zs ! i" by simp finally show ?thesis . qed qed lemma string_pair_first_nth: assumes "i < length x" shows "\<langle>x, y\<rangle> ! (2 * i) = \<bbbO>" and "\<langle>x, y\<rangle> ! (2 * i + 1) = x ! i" proof - have "\<langle>x, y\<rangle> ! (2*i) = concat (map (\<lambda>h. [\<bbbO>, h]) x) ! (2*i)" using string_pair_def length_bitenc by (simp add: assms nth_append) then show "\<langle>x, y\<rangle> ! (2 * i) = \<bbbO>" using bitenc_nth(1) assms by simp have "2 * i + 1 < 2 * length x" using assms by simp then have "\<langle>x, y\<rangle> ! (2*i+1) = concat (map (\<lambda>h. [\<bbbO>, h]) x) ! (2*i+1)" using string_pair_def length_bitenc[of x] assms nth_append by force then show "\<langle>x, y\<rangle> ! (2 * i + 1) = x ! i" using bitenc_nth(2) assms by simp qed lemma string_pair_sep_nth: shows "\<langle>x, y\<rangle> ! (2 * length x) = \<bbbI>" and "\<langle>x, y\<rangle> ! 
(2 * length x + 1) = \<bbbI>" using string_pair_def length_bitenc by (metis append_Cons nth_append_length) (simp add: length_bitenc nth_append string_pair_def) lemma string_pair_second_nth: assumes "i < length y" shows "\<langle>x, y\<rangle> ! (2 * length x + 2 + 2 * i) = \<bbbO>" and "\<langle>x, y\<rangle> ! (2 * length x + 2 + 2 * i + 1) = y ! i" proof - have "\<langle>x, y\<rangle> ! (2 * length x + 2 + 2*i) = concat (map (\<lambda>h. [\<bbbO>, h]) y) ! (2*i)" using string_pair_def length_bitenc by (simp add: assms nth_append) then show "\<langle>x, y\<rangle> ! (2 * length x + 2 + 2 * i) = \<bbbO>" using bitenc_nth(1) assms by simp have "2 * i + 1 < 2 * length y" using assms by simp then have "\<langle>x, y\<rangle> ! (2 * length x + 2 + 2*i+1) = concat (map (\<lambda>h. [\<bbbO>, h]) y) ! (2*i+1)" using string_pair_def length_bitenc[of x] assms nth_append by force then show "\<langle>x, y\<rangle> ! (2 * length x + 2 + 2 * i + 1) = y ! i" using bitenc_nth(2) assms by simp qed lemma string_pair_inj: assumes "\<langle>x1, y1\<rangle> = \<langle>x2, y2\<rangle>" shows "x1 = x2 \<and> y1 = y2" proof show "x1 = x2" proof (rule ccontr) assume neq: "x1 \<noteq> x2" consider "length x1 = length x2" | "length x1 < length x2" | "length x1 > length x2" by linarith then show False proof (cases) case 1 then obtain i where i: "i < length x1" "x1 ! i \<noteq> x2 ! i" using neq list_eq_iff_nth_eq by blast then have "\<langle>x1, y1\<rangle> ! (2 * i + 1) = x1 ! i" and "\<langle>x2, y2\<rangle> ! (2 * i + 1) = x2 ! i" using 1 string_pair_first_nth by simp_all then show False using assms i(2) by simp next case 2 let ?i = "length x1" have "\<langle>x1, y1\<rangle> ! (2 * ?i) = \<bbbI>" using string_pair_sep_nth by simp moreover have "\<langle>x2, y2\<rangle> ! (2 * ?i) = \<bbbO>" using string_pair_first_nth 2 by simp ultimately show False using assms by simp next case 3 let ?i = "length x2" have "\<langle>x2, y2\<rangle> ! 
(2 * ?i) = \<bbbI>"
        using string_pair_sep_nth by simp
      moreover have "\<langle>x1, y1\<rangle> ! (2 * ?i) = \<bbbO>"
        using string_pair_first_nth 3 by simp
      ultimately show False
        using assms by simp
    qed
  qed
  then have len_x_eq: "length x1 = length x2"
    by simp
  then have len_y_eq: "length y1 = length y2"
    using assms length_string_pair
    by (smt (verit) Suc_1 Suc_mult_cancel1 add_left_imp_eq add_right_cancel)
  show "y1 = y2"
  proof (rule ccontr)
    assume neq: "y1 \<noteq> y2"
    then obtain i where i: "i < length y1" "y1 ! i \<noteq> y2 ! i"
      using list_eq_iff_nth_eq len_y_eq by blast
    then have "\<langle>x1, y1\<rangle> ! (2 * length x1 + 2 + 2 * i + 1) = y1 ! i"
        and "\<langle>x2, y2\<rangle> ! (2 * length x2 + 2 + 2 * i + 1) = y2 ! i"
      using string_pair_second_nth len_y_eq by simp_all
    then show False
      using assms i(2) len_x_eq by simp
  qed
qed

text \<open>
Turing machines have to deal with pairs of symbol sequences rather than
strings.
\<close>

abbreviation pair :: "string \<Rightarrow> string \<Rightarrow> symbol list" ("\<langle>_; _\<rangle>") where
  "\<langle>x; y\<rangle> \<equiv> string_to_symbols \<langle>x, y\<rangle>"

lemma symbols_lt_pair: "symbols_lt 4 \<langle>x; y\<rangle>"
  by simp

lemma length_pair: "length \<langle>x; y\<rangle> = 2 * length x + 2 * length y + 2"
  by (simp add: length_string_pair)

lemma pair_inj:
  assumes "\<langle>x1; y1\<rangle> = \<langle>x2; y2\<rangle>"
  shows "x1 = x2 \<and> y1 = y2"
  using string_pair_inj assms symbols_to_string_to_symbols by metis


subsection \<open>Big-Oh and polynomials\label{s:tm-basic-bigoh}\<close>

text \<open>
The Big-Oh notation is standard~\cite[Definition~0.2]{ccama}. It can be defined
with $c$ ranging over real or natural numbers. We choose natural numbers for
simplicity.
\<close>

definition big_oh :: "(nat \<Rightarrow> nat) \<Rightarrow> (nat \<Rightarrow> nat) \<Rightarrow> bool" where
  "big_oh g f \<equiv> \<exists>c m. \<forall>n>m. g n \<le> c * f n"

text \<open>
Some examples:
\<close>

proposition "big_oh (\<lambda>n.
n) (\<lambda>n. n)"
  using big_oh_def by auto

proposition "big_oh (\<lambda>n. n) (\<lambda>n. n * n)"
  using big_oh_def by auto

proposition "big_oh (\<lambda>n. 42 * n) (\<lambda>n. n * n)"
proof-
  have "\<forall>n>0::nat. 42 * n \<le> 42 * n * n"
    by simp
  then have "\<exists>(c::nat)>0. \<forall>n>0. 42 * n \<le> c * n * n"
    using zero_less_numeral by blast
  then show ?thesis
    using big_oh_def by auto
qed

proposition "\<not> big_oh (\<lambda>n. n * n) (\<lambda>n. n)" (is "\<not> big_oh ?g ?f")
proof
  assume "big_oh (\<lambda>n. n * n) (\<lambda>n. n)"
  then obtain c m where "\<forall>n>m. ?g n \<le> c * ?f n"
    using big_oh_def by auto
  then have 1: "\<forall>n>m. n * n \<le> c * n"
    by auto
  define nn where "nn = max (m + 1) (c + 1)"
  then have 2: "nn > m"
    by simp
  then have "nn * nn > c * nn"
    by (simp add: nn_def max_def)
  with 1 2 show False
    using not_le by blast
qed

text \<open>
Some lemmas helping with polynomial upper bounds.
\<close>

lemma pow_mono:
  fixes n d1 d2 :: nat
  assumes "d1 \<le> d2" and "n > 0"
  shows "n ^ d1 \<le> n ^ d2"
  using assms by (simp add: Suc_leI power_increasing)

lemma pow_mono':
  fixes n d1 d2 :: nat
  assumes "d1 \<le> d2" and "0 < d1"
  shows "n ^ d1 \<le> n ^ d2"
  using assms by (metis dual_order.eq_iff less_le_trans neq0_conv pow_mono power_eq_0_iff)

lemma linear_le_pow:
  fixes n d1 :: nat
  assumes "0 < d1"
  shows "n \<le> n ^ d1"
  using assms by (metis One_nat_def gr_implies_not0 le_less_linear less_Suc0 self_le_power)

text \<open>
The next definition formalizes the phrase ``polynomially bounded'' and the term
``polynomial'' in ``polynomial running-time''. This is often written
``$f(n) = n^{O(1)}$'' (for example, Arora and Barak~\cite[Example 0.3]{ccama}).
\<close>

definition big_oh_poly :: "(nat \<Rightarrow> nat) \<Rightarrow> bool" where
  "big_oh_poly f \<equiv> \<exists>d. big_oh f (\<lambda>n. n ^ d)"

lemma big_oh_poly: "big_oh_poly f \<longleftrightarrow> (\<exists>d c n\<^sub>0. \<forall>n>n\<^sub>0.
f n \<le> c * n ^ d)" using big_oh_def big_oh_poly_def by auto lemma big_oh_polyI: assumes "\<And>n. n > n\<^sub>0 \<Longrightarrow> f n \<le> c * n ^ d" shows "big_oh_poly f" using assms big_oh_poly by auto lemma big_oh_poly_const: "big_oh_poly (\<lambda>n. c)" proof - let ?c = "max 1 c" have "(\<lambda>n. c) n \<le> ?c * n ^ 1" if "n > 0" for n proof - have "c \<le> n * ?c" by (metis (no_types) le_square max.cobounded2 mult.assoc mult_le_mono nat_mult_le_cancel_disj that) then show ?thesis by (simp add: mult.commute) qed then show ?thesis using big_oh_polyI[of 0 _ ?c] by simp qed lemma big_oh_poly_poly: "big_oh_poly (\<lambda>n. n ^ d)" using big_oh_polyI[of 0 _ 1 d] by simp lemma big_oh_poly_id: "big_oh_poly (\<lambda>n. n)" using big_oh_poly_poly[of 1] by simp lemma big_oh_poly_le: assumes "big_oh_poly f" and "\<And>n. g n \<le> f n" shows "big_oh_poly g" using assms big_oh_polyI by (metis big_oh_poly le_trans) lemma big_oh_poly_sum: assumes "big_oh_poly f1" and "big_oh_poly f2" shows "big_oh_poly (\<lambda>n. f1 n + f2 n)" proof- obtain d1 c1 m1 where 1: "\<forall>n>m1. f1 n \<le> c1 * n ^ d1" using big_oh_poly assms(1) by blast obtain d2 c2 m2 where 2: "\<forall>n>m2. f2 n \<le> c2 * n ^ d2" using big_oh_poly assms(2) by blast let ?f3 = "\<lambda>n. f1 n + f2 n" let ?c3 = "max c1 c2" let ?m3 = "max m1 m2" let ?d3 = "max d1 d2" have "\<forall>n>?m3. f1 n \<le> ?c3 * n ^ d1" using 1 by (simp add: max.coboundedI1 nat_mult_max_left) moreover have "\<forall>n>?m3. n ^ d1 \<le> n^?d3" using pow_mono by simp ultimately have *: "\<forall>n>?m3. f1 n \<le> ?c3 * n^?d3" using order_subst1 by fastforce have "\<forall>n>?m3. f2 n \<le> ?c3 * n ^ d2" using 2 by (simp add: max.coboundedI2 nat_mult_max_left) moreover have "\<forall>n>?m3. n ^ d2 \<le> n ^ ?d3" using pow_mono by simp ultimately have "\<forall>n>?m3. f2 n \<le> ?c3 * n ^ ?d3" using order_subst1 by fastforce then have "\<forall>n>?m3. 
f1 n + f2 n \<le> ?c3 * n ^ ?d3 + ?c3 * n ^ ?d3" using * by fastforce then have "\<forall>n>?m3. f1 n + f2 n \<le> 2 * ?c3 * n ^ ?d3" by auto then have "\<exists>d c m. \<forall>n>m. ?f3 n \<le> c * n ^ d" by blast then show ?thesis using big_oh_poly by simp qed lemma big_oh_poly_prod: assumes "big_oh_poly f1" and "big_oh_poly f2" shows "big_oh_poly (\<lambda>n. f1 n * f2 n)" proof- obtain d1 c1 m1 where 1: "\<forall>n>m1. f1 n \<le> c1 * n ^ d1" using big_oh_poly assms(1) by blast obtain d2 c2 m2 where 2: "\<forall>n>m2. f2 n \<le> c2 * n ^ d2" using big_oh_poly assms(2) by blast let ?f3 = "\<lambda>n. f1 n * f2 n" let ?c3 = "max c1 c2" let ?m3 = "max m1 m2" have "\<forall>n>?m3. f1 n \<le> ?c3 * n ^ d1" using 1 by (simp add: max.coboundedI1 nat_mult_max_left) moreover have "\<forall>n>?m3. n ^ d1 \<le> n ^ d1" using pow_mono by simp ultimately have *: "\<forall>n>?m3. f1 n \<le> ?c3 * n ^ d1" using order_subst1 by fastforce have "\<forall>n>?m3. f2 n \<le> ?c3 * n ^ d2" using 2 by (simp add: max.coboundedI2 nat_mult_max_left) moreover have "\<forall>n>?m3. n ^ d2 \<le> n ^ d2" using pow_mono by simp ultimately have "\<forall>n>?m3. f2 n \<le> ?c3 * n ^ d2" using order_subst1 by fastforce then have "\<forall>n>?m3. f1 n * f2 n \<le> ?c3 * n ^ d1 * ?c3 * n ^ d2" using * mult_le_mono by (metis mult.assoc) then have "\<forall>n>?m3. f1 n * f2 n \<le> ?c3 * ?c3 * n ^ d1 * n ^ d2" by (simp add: semiring_normalization_rules(16)) then have "\<forall>n>?m3. f1 n * f2 n \<le> ?c3 * ?c3 * n ^ (d1 + d2)" by (simp add: mult.assoc power_add) then have "\<exists>d c m. \<forall>n>m. ?f3 n \<le> c * n ^ d" by blast then show ?thesis using big_oh_poly by simp qed lemma big_oh_poly_offset: assumes "big_oh_poly f" shows "\<exists>b c d. d > 0 \<and> (\<forall>n. f n \<le> b + c * n ^ d)" proof - obtain d c m where dcm: "\<forall>n>m. 
f n \<le> c * n ^ d" using assms big_oh_poly by auto have *: "f n \<le> c * n ^ Suc d" if "n > m" for n proof - have "n > 0" using that by simp then have "n ^ d \<le> n ^ Suc d" by simp then have "c * n ^ d \<le> c * n ^ Suc d" by simp then show "f n \<le> c * n ^ Suc d" using dcm order_trans that by blast qed define b :: nat where "b = Max {f n | n. n \<le> m}" then have "y \<le> b" if "y \<in> {f n | n. n \<le> m}" for y using that by simp then have "f n \<le> b" if "n \<le> m" for n using that by auto then have "f n \<le> b + c * n ^ Suc d" for n using * by (meson trans_le_add1 trans_le_add2 verit_comp_simplify1(3)) then show ?thesis using * dcm(1) by blast qed lemma big_oh_poly_composition: assumes "big_oh_poly f1" and "big_oh_poly f2" shows "big_oh_poly (f2 \<circ> f1)" proof- obtain d1 c1 m1 where 1: "\<forall>n>m1. f1 n \<le> c1 * n ^ d1" using big_oh_poly assms(1) by blast obtain d2 c2 b where 2: "\<forall>n. f2 n \<le> b + c2 * n ^ d2" using big_oh_poly_offset assms(2) by blast define c where "c = c2 * c1 ^ d2" have 3: "\<forall>n>m1. f1 n \<le> c1 * n ^ d1" using 1 by simp have "\<forall>n>m1. f2 n \<le> b + c2 * n ^ d2" using 2 by simp { fix n assume "n > m1" then have 4: "(f1 n) ^ d2 \<le> (c1 * n ^ d1) ^ d2" using 3 by (simp add: power_mono) have "f2 (f1 n) \<le> b + c2 * (f1 n) ^ d2" using 2 by simp also have "... \<le> b + c2 * (c1 * n ^ d1) ^ d2" using 4 by simp also have "... = b + c2 * c1 ^ d2 * n ^ (d1 * d2)" by (simp add: power_mult power_mult_distrib) also have "... = b + c * n ^ (d1 * d2)" using c_def by simp also have "... \<le> b * n ^ (d1 * d2) + c * n ^ (d1 * d2)" using `n > m1` by simp also have "... \<le> (b + c) * n ^ (d1 * d2)" by (simp add: comm_semiring_class.distrib) finally have "f2 (f1 n) \<le> (b + c) * n ^ (d1 * d2)" . } then show ?thesis using big_oh_polyI[of m1 _ "b + c" "d1 * d2"] by simp qed lemma big_oh_poly_pow: fixes f :: "nat \<Rightarrow> nat" and d :: nat assumes "big_oh_poly f" shows "big_oh_poly (\<lambda>n. 
f n ^ d)"
proof -
  let ?g = "\<lambda>n. n ^ d"
  have "big_oh_poly ?g"
    using big_oh_poly_poly by simp
  moreover have "(\<lambda>n. f n ^ d) = ?g \<circ> f"
    by auto
  ultimately show ?thesis
    using assms big_oh_poly_composition by simp
qed

text \<open>
The textbook does not give an explicit definition of polynomials. It treats
them as functions between natural numbers. So it seems natural to assume that
the coefficients are natural numbers, too. We justify this choice when defining
$\NP$ in Section~\ref{s:TC-NP}.

\null
\<close>

definition polynomial :: "(nat \<Rightarrow> nat) \<Rightarrow> bool" where
  "polynomial f \<equiv> \<exists>cs. \<forall>n. f n = (\<Sum>i\<leftarrow>[0..<length cs]. cs ! i * n ^ i)"

lemma const_polynomial: "polynomial (\<lambda>_. c)"
proof -
  let ?cs = "[c]"
  have "\<forall>n. (\<lambda>_. c) n = (\<Sum>i\<leftarrow>[0..<length ?cs]. ?cs ! i * n ^ i)"
    by simp
  then show ?thesis
    using polynomial_def by blast
qed

lemma polynomial_id: "polynomial id"
proof -
  let ?cs = "[0, 1::nat]"
  have "\<forall>n::nat. id n = (\<Sum>i\<leftarrow>[0..<length ?cs]. ?cs ! i * n ^ i)"
    by simp
  then show ?thesis
    using polynomial_def by blast
qed

lemma big_oh_poly_polynomial:
  fixes f :: "nat \<Rightarrow> nat"
  assumes "polynomial f"
  shows "big_oh_poly f"
proof -
  have "big_oh_poly (\<lambda>n. (\<Sum>i\<leftarrow>[0..<length cs]. cs ! i * n ^ i))" for cs
  proof (induction "length cs" arbitrary: cs)
    case 0
    then show ?case
      using big_oh_poly_const by simp
  next
    case (Suc len)
    let ?cs = "butlast cs"
    have len: "length ?cs = len"
      using Suc by simp
    {
      fix n :: nat
      have "(\<Sum>i\<leftarrow>[0..<length cs]. cs ! i * n ^ i) = (\<Sum>i\<leftarrow>[0..<Suc len]. cs ! i * n ^ i)"
        using Suc by simp
      also have "... = (\<Sum>i\<leftarrow>[0..<len]. cs ! i * n ^ i) + cs ! len * n ^ len"
        using Suc(2)
        by (metis (mono_tags, lifting) Nat.add_0_right list.simps(8) list.simps(9) map_append
          sum_list.Cons sum_list.Nil sum_list_append upt_Suc zero_le)
      also have "... = (\<Sum>i\<leftarrow>[0..<len]. ?cs !
i * n ^ i) + cs ! len * n ^ len"
        using Suc(2) len
        by (metis (no_types, lifting) atLeastLessThan_iff map_eq_conv nth_butlast set_upt)
      finally have "(\<Sum>i\<leftarrow>[0..<length cs]. cs ! i * n ^ i) =
          (\<Sum>i\<leftarrow>[0..<len]. ?cs ! i * n ^ i) + cs ! len * n ^ len" .
    }
    then have "(\<lambda>n. \<Sum>i\<leftarrow>[0..<length cs]. cs ! i * n ^ i) =
        (\<lambda>n. (\<Sum>i\<leftarrow>[0..<len]. ?cs ! i * n ^ i) + cs ! len * n ^ len)"
      by simp
    moreover have "big_oh_poly (\<lambda>n. cs ! len * n ^ len)"
      using big_oh_poly_poly big_oh_poly_prod big_oh_poly_const by simp
    moreover have "big_oh_poly (\<lambda>n. (\<Sum>i\<leftarrow>[0..<len]. ?cs ! i * n ^ i))"
      using Suc len by blast
    ultimately show "big_oh_poly (\<lambda>n. \<Sum>i\<leftarrow>[0..<length cs]. cs ! i * n ^ i)"
      using big_oh_poly_sum by simp
  qed
  moreover obtain cs where "f = (\<lambda>n. (\<Sum>i\<leftarrow>[0..<length cs]. cs ! i * n ^ i))"
    using assms polynomial_def by blast
  ultimately show ?thesis
    by simp
qed


section \<open>Increasing the alphabet or the number of tapes\label{s:tm-trans}\<close>

text \<open>
For technical reasons it is sometimes necessary to add tapes to a machine or to
formally enlarge its alphabet such that it matches another machine's tape
number or alphabet size without changing the behavior of the machine. The
primary use of this is when composing machines with unequal alphabets or tape
numbers (see Section~\ref{s:tm-composing}).
\<close>


subsection \<open>Enlarging the alphabet\<close>

text \<open>
A Turing machine over alphabet $G$ is not necessarily a Turing machine over a
larger alphabet $G' > G$, because when reading a symbol in $\{G, \dots, G'-1\}$
the TM may write a symbol $\geq G'$. This is easy to remedy by modifying the TM
to do nothing when it reads a symbol $\geq G$. It then formally satisfies the
alphabet restriction property of Turing commands.
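In conventional notation, the remedy just described replaces every command
$\mathit{cmd}$ of the machine by the following case distinction. This is an
informal sketch only; the name $\mathit{cmd}'$ is ad-hoc notation and not part
of the formal development.

```latex
% Sketch: the modified command behaves like the original on symbol tuples
% over the original alphabet G, and otherwise jumps to state 0 while leaving
% every tape unchanged (rewrite the symbol just read, head stays put).
\[
  \mathit{cmd}'(r_1, \dots, r_k) =
  \begin{cases}
    \mathit{cmd}(r_1, \dots, r_k)
      & \text{if } r_j < G \text{ for all } j, \\[4pt]
    \bigl(0,\ ((r_1, \mathit{Stay}), \dots, (r_k, \mathit{Stay}))\bigr)
      & \text{otherwise.}
  \end{cases}
\]
```

Since state $0$ is the start state, a machine that hits the second case keeps
re-reading the same symbols and thus never halts.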
This is rather crude, because the new TM loops infinitely on encountering a
``forbidden'' symbol, but it is good enough for our purposes.

The next function performs this transformation on a TM $M$ over alphabet $G$.
The resulting machine is a Turing machine for every alphabet size $G' \ge G$.
\<close>

definition enlarged :: "nat \<Rightarrow> machine \<Rightarrow> machine" where
  "enlarged G M \<equiv> map (\<lambda>cmd rs. if symbols_lt G rs then cmd rs else (0, map (\<lambda>r. (r, Stay)) rs)) M"

lemma length_enlarged: "length (enlarged G M) = length M"
  using enlarged_def by simp

lemma enlarged_nth:
  assumes "symbols_lt G gs" and "i < length M"
  shows "(M ! i) gs = (enlarged G M ! i) gs"
  using assms enlarged_def by simp

lemma enlarged_write:
  assumes "length gs = k" and "i < length M" and "turing_machine k G M"
  shows "length (snd ((M ! i) gs)) = length (snd ((enlarged G M ! i) gs))"
proof (cases "symbols_lt G gs")
  case True
  then show ?thesis
    using assms enlarged_def by simp
next
  case False
  then have "(enlarged G M ! i) gs = (0, map (\<lambda>r. (r, Stay)) gs)"
    using assms enlarged_def by auto
  then show ?thesis
    using assms turing_commandD(1) turing_machine_def
    by (metis length_map nth_mem snd_conv)
qed

lemma turing_machine_enlarged:
  assumes "turing_machine k G M" and "G' \<ge> G"
  shows "turing_machine k G' (enlarged G M)"
proof
  let ?M = "enlarged G M"
  show "2 \<le> k" and "4 \<le> G'"
    using assms turing_machine_def by simp_all
  show "turing_command k (length ?M) G' (?M ! i)" if i: "i < length ?M" for i
  proof
    have len: "length ?M = length M"
      using enlarged_def by simp
    then have 1: "turing_command k (length M) G (M ! i)"
      using assms(1) that turing_machine_def by simp
    show "\<And>gs. length gs = k \<Longrightarrow> length ([!!] (?M ! i) gs) = length gs"
      using enlarged_write that 1 len assms(1) by (metis turing_commandD(1))
    show "(?M ! i) gs [.] j < G'" if "length gs = k" "(\<And>i. i < length gs \<Longrightarrow> gs !
i < G')" "j < length gs" for gs j proof (cases "symbols_lt G gs") case True then have "(?M ! i) gs = (M ! i) gs" using enlarged_def i by simp moreover have "(M ! i) gs [.] j < G" using "1" turing_commandD(2) that(1,3) True by simp ultimately show ?thesis using assms(2) by simp next case False then have "(?M ! i) gs = (0, map (\<lambda>r. (r, Stay)) gs)" using enlarged_def i by auto then show ?thesis using that by simp qed show "(?M ! i) gs [.] 0 = gs ! 0" if "length gs = k" and "k > 0" for gs proof (cases "symbols_lt G gs") case True then show ?thesis using enlarged_def i "1" turing_command_def that by simp next case False then have "(?M ! i) gs = (0, map (\<lambda>r. (r, Stay)) gs)" using that enlarged_def i by auto then show ?thesis using assms(1) turing_machine_def that by simp qed show "[*] ((?M ! i) gs) \<le> length ?M" if "length gs = k" for gs proof (cases "symbols_lt G gs") case True then show ?thesis using enlarged_def i that assms(1) turing_machine_def "1" turing_commandD(4) enlarged_nth len by (metis (no_types, lifting)) next case False then show ?thesis using that enlarged_def i by auto qed qed qed text \<open> The enlarged machine has the same behavior as the original machine when started on symbols over the original alphabet $G$. \<close> lemma execute_enlarged: assumes "turing_machine k G M" and "symbols_lt G zs" shows "execute (enlarged G M) (start_config k zs) t = execute M (start_config k zs) t" proof (induction t) case 0 then show ?case by simp next case (Suc t) let ?M = "enlarged G M" have "execute ?M (start_config k zs) (Suc t) = exe ?M (execute ?M (start_config k zs) t)" by simp also have "... = exe ?M (execute M (start_config k zs) t)" (is "_ = exe ?M ?cfg") using Suc by simp also have "... = execute M (start_config k zs) (Suc t)" proof (cases "fst ?cfg < length M") case True then have "exe ?M ?cfg = sem (?M ! 
(fst ?cfg)) ?cfg" (is "_ = sem ?cmd ?cfg")
      using exe_lt_length length_enlarged by simp
    then have "exe ?M ?cfg =
        (fst (?cmd (config_read ?cfg)),
         map (\<lambda>(a, tp). act a tp) (zip (snd (?cmd (config_read ?cfg))) (snd ?cfg)))"
      using sem' by simp
    moreover have "symbols_lt G (config_read ?cfg)"
      using read_alphabet' assms by auto
    ultimately have "exe ?M ?cfg =
        (fst ((M ! (fst ?cfg)) (config_read ?cfg)),
         map (\<lambda>(a, tp). act a tp) (zip (snd ((M ! (fst ?cfg)) (config_read ?cfg))) (snd ?cfg)))"
      using True enlarged_nth by auto
    then have "exe ?M ?cfg = exe M ?cfg"
      using sem' by (simp add: True exe_lt_length)
    then show ?thesis
      using Suc by simp
  next
    case False
    then show ?thesis
      using Suc enlarged_def exe_def by auto
  qed
  finally show ?case .
qed

lemma transforms_enlarged:
  assumes "turing_machine k G M" and "symbols_lt G zs"
    and "transforms M (snd (start_config k zs)) t tps1"
  shows "transforms (enlarged G M) (snd (start_config k zs)) t tps1"
proof -
  let ?tps = "snd (start_config k zs)"
  have "\<exists>t'\<le>t. execute M (start_config k zs) t' = (length M, tps1)"
    using assms(3) transforms_def transits_def start_config_def by simp
  then have "\<exists>t'\<le>t. execute (enlarged G M) (start_config k zs) t' = (length M, tps1)"
    using assms(1,2) transforms_def transits_def execute_enlarged by auto
  moreover have "length M = length (enlarged G M)"
    using enlarged_def by simp
  ultimately show ?thesis
    using start_config_def transforms_def transitsI by auto
qed


subsection \<open>Increasing the number of tapes\<close>

text \<open>
We can add tapes to a Turing machine in such a way that on the additional tapes
the machine does nothing. While the new tapes could go anywhere, we only
consider appending them at the end or inserting them at the beginning.
\<close>


subsubsection \<open>Appending tapes at the end\<close>

text \<open>
The next function turns a $k$-tape Turing machine into a $k'$-tape Turing
machine (for $k' \geq k$) by appending $k' - k$ tapes at the end.
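Spelled out in conventional notation, each command of the new machine runs the
original command on the symbols read from the first $k$ tapes and leaves the
appended tapes untouched. This is an informal sketch; $\mathit{cmd}'$ is ad-hoc
notation and not part of the formal development.

```latex
% Sketch: the old command sees only the first k read symbols; on each of the
% k' - k new tapes the machine writes back the symbol it read and stays put.
\[
  \mathit{cmd}'(r_1, \dots, r_{k'}) =
  \Bigl(\mathit{fst}\bigl(\mathit{cmd}(r_1, \dots, r_k)\bigr),\;
        \mathit{snd}\bigl(\mathit{cmd}(r_1, \dots, r_k)\bigr) \mathbin{@}
        \bigl[(r_{k+1}, \mathit{Stay}), \dots, (r_{k'}, \mathit{Stay})\bigr]\Bigr)
\]
```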
\<close>

definition append_tapes :: "nat \<Rightarrow> nat \<Rightarrow> machine \<Rightarrow> machine" where
  "append_tapes k k' M \<equiv>
    map (\<lambda>cmd rs. (fst (cmd (take k rs)), snd (cmd (take k rs)) @ (map (\<lambda>i. (rs ! i, Stay)) [k..<k']))) M"

lemma length_append_tapes: "length (append_tapes k k' M) = length M"
  unfolding append_tapes_def by simp

lemma append_tapes_nth:
  assumes "i < length M" and "length gs = k'"
  shows "(append_tapes k k' M ! i) gs =
    (fst ((M ! i) (take k gs)), snd ((M ! i) (take k gs)) @ (map (\<lambda>j. (gs ! j, Stay)) [k..<k']))"
  unfolding append_tapes_def using assms(1) by simp

lemma append_tapes_tm:
  assumes "turing_machine k G M" and "k' \<ge> k"
  shows "turing_machine k' G (append_tapes k k' M)"
proof
  let ?M = "append_tapes k k' M"
  show "2 \<le> k'"
    using assms turing_machine_def by simp
  show "4 \<le> G"
    using assms(1) turing_machine_def by simp
  show "turing_command k' (length ?M) G (?M ! i)" if "i < length ?M" for i
  proof
    have "i < length M"
      using that by (simp add: append_tapes_def)
    then have turing_command: "turing_command k (length M) G (M ! i)"
      using assms(1) that turing_machine_def by simp
    have ith: "append_tapes k k' M ! i = (\<lambda>rs.
        (fst ((M ! i) (take k rs)), snd ((M ! i) (take k rs)) @ (map (\<lambda>j. (rs ! j, Stay)) [k..<k'])))"
      unfolding append_tapes_def using `i < length M` by simp
    show "\<And>gs. length gs = k' \<Longrightarrow> length ([!!] (append_tapes k k' M ! i) gs) = length gs"
      using assms(2) ith turing_command turing_commandD by simp
    show "(append_tapes k k' M ! i) gs [.] j < G"
      if "length gs = k'" "\<And>i. i < length gs \<Longrightarrow> gs ! i < G" "j < length gs" for j gs
    proof (cases "j < k")
      case True
      let ?gs = "take k gs"
      have len: "length ?gs = k"
        using that(1) assms(2) by simp
      have "\<And>i. i < length ?gs \<Longrightarrow> ?gs ! i < G"
        using that(2) by simp
      then have "\<forall>i'<length ?gs. (M ! i) ?gs [.]
i' < G" using turing_commandD(2)[OF turing_command len] by simp then show ?thesis using ith that turing_commandD(1)[OF turing_command len] by (simp add: nth_append) next case False then have "j \<ge> k" by simp have *: "length (snd ((M ! i) (take k gs))) = k" using turing_commandD(1)[OF turing_command] assms(2) that(1) by auto have "(append_tapes k k' M ! i) gs [.] j = fst ((snd ((M ! i) (take k gs)) @ (map (\<lambda>j. (gs ! j, Stay)) [k..<k'])) ! j)" using ith by simp also have "... = fst ((map (\<lambda>j. (gs ! j, Stay)) [k..<k']) ! (j - k))" using * that `j \<ge> k` by (simp add: False nth_append) also have "... = fst (gs ! j, Stay)" by (metis False \<open>k \<le> j\<close> add_diff_inverse_nat diff_less_mono length_upt nth_map nth_upt that(1,3)) also have "... = gs ! j" by simp also have "... < G" using that(2,3) by simp finally show ?thesis by simp qed show "(append_tapes k k' M ! i) gs [.] 0 = gs ! 0" if "length gs = k'" for gs proof - have "k > 0" using assms(1) turing_machine_def by simp then have 1: "(M ! i) rs [.] 0 = rs ! 0" if "length rs = k" for rs using turing_commandD(3)[OF turing_command that] that by simp have len: "length (take k gs) = k" by (simp add: assms(2) min_absorb2 that(1)) then have *: "length (snd ((M ! i) (take k gs))) = k" using turing_commandD(1)[OF turing_command] by auto have "(append_tapes k k' M ! i) gs [.] 0 = fst ((snd ((M ! i) (take k gs)) @ (map (\<lambda>j. (gs ! j, Stay)) [k..<k'])) ! 0)" using ith by simp also have "... = fst (snd ((M ! i) (take k gs)) ! 0)" using * by (simp add: nth_append `0 < k`) finally show ?thesis using 1 len \<open>0 < k\<close> by simp qed show "[*] ((append_tapes k k' M ! i) gs) \<le> length (append_tapes k k' M)" if "length gs = k'" for gs proof - have "length (take k gs) = k" using assms(2) that by simp then have 1: "fst ((M ! 
i) (take k gs)) \<le> length M" using turing_commandD[OF turing_command] \<open>i < length M\<close> assms(1) turing_machine_def by blast moreover have "fst ((append_tapes k k' M ! i) gs) = fst ((M ! i) (take k gs))" using ith by simp ultimately show "fst ((append_tapes k k' M ! i) gs) \<le> length (append_tapes k k' M)" using length_append_tapes by metis qed qed qed lemma execute_append_tapes: assumes "turing_machine k G M" and "k' \<ge> k" and "length tps = k'" shows "execute (append_tapes k k' M) (q, tps) t = (fst (execute M (q, take k tps) t), snd (execute M (q, take k tps) t) @ drop k tps)" proof (induction t) case 0 then show ?case by simp next case (Suc t) let ?M = "append_tapes k k' M" let ?cfg = "execute M (q, take k tps) t" let ?cfg' = "execute M (q, take k tps) (Suc t)" have "execute ?M (q, tps) (Suc t) = exe ?M (execute ?M (q, tps) t)" by simp also have "... = exe ?M (fst ?cfg, snd ?cfg @ drop k tps)" using Suc by simp also have "... = (fst ?cfg', snd ?cfg' @ drop k tps)" proof (cases "fst ?cfg < length ?M") case True have "sem (?M ! (fst ?cfg)) (fst ?cfg, snd ?cfg @ drop k tps) = (fst ?cfg', snd ?cfg' @ drop k tps)" proof (rule semI) have "turing_machine k' G (append_tapes k k' M)" using append_tapes_tm[OF assms(1,2)] by simp then show 1: "proper_command k' (append_tapes k k' M ! fst (execute M (q, take k tps) t))" using True turing_machine_def turing_commandD by (metis nth_mem) show 2: "length (snd ?cfg @ drop k tps) = k'" using assms execute_num_tapes by fastforce show "length (snd ?cfg' @ drop k tps) = k'" by (metis (no_types, lifting) append_take_drop_id assms execute_num_tapes length_append length_take min_absorb2 snd_conv) show "fst ((?M ! 
fst ?cfg) (read (snd ?cfg @ drop k tps))) = fst ?cfg'" proof - have less': "fst ?cfg < length M" using True by (simp add: length_append_tapes) let ?tps = "snd ?cfg @ drop k tps" have "length (snd ?cfg) = k" using assms execute_num_tapes by fastforce then have take2: "take k ?tps = snd ?cfg" by simp let ?rs = "read ?tps" have len: "length ?rs = k'" using 2 read_length by simp have take2': "take k ?rs = read (snd ?cfg)" using read_def take2 by (metis (mono_tags, lifting) take_map) have "fst ((?M ! fst ?cfg) ?rs) = fst (fst ((M ! fst ?cfg) (take k ?rs)), snd ((M ! fst ?cfg) (take k ?rs)) @ (map (\<lambda>j. (?rs ! j, Stay)) [k..<k']))" using append_tapes_nth[OF less' len] by simp also have "... = fst ((M ! fst ?cfg) (read (snd ?cfg)))" using take2' by simp also have "... = fst (exe M ?cfg)" by (simp add: exe_def less' sem_fst) finally show ?thesis by simp qed show "(act ((?M ! fst ?cfg) (read (snd ?cfg @ drop k tps)) [!] j) ((snd ?cfg @ drop k tps) ! j) = (snd ?cfg' @ drop k tps) ! j)" if "j < k'" for j proof - have less': "fst ?cfg < length M" using True by (simp add: length_append_tapes) let ?tps = "snd ?cfg @ drop k tps" have len2: "length (snd ?cfg) = k" using assms execute_num_tapes by fastforce then have take2: "take k ?tps = snd ?cfg" by simp from len2 have len2': "length (snd ((M ! fst ?cfg) (read (snd ?cfg)))) = k" using assms(1) turing_commandD(1) less' read_length turing_machine_def by (metis nth_mem) let ?rs = "read ?tps" have len: "length ?rs = k'" using 2 read_length by simp have take2': "take k ?rs = read (snd ?cfg)" using read_def take2 by (metis (mono_tags, lifting) take_map) have "act ((?M ! fst ?cfg) ?rs [!] j) (?tps ! j) = act ((fst ((M ! fst ?cfg) (take k ?rs)), snd ((M ! fst ?cfg) (take k ?rs)) @ (map (\<lambda>j. (?rs ! j, Stay)) [k..<k'])) [!] j) (?tps ! j)" using append_tapes_nth[OF less' len] by simp also have "... = act ((fst ((M ! fst ?cfg) (read (snd ?cfg))), snd ((M ! fst ?cfg) (read (snd ?cfg))) @ (map (\<lambda>j. (?rs ! 
j, Stay)) [k..<k'])) [!] j) (?tps ! j)" using take2' by simp also have "... = act ((snd ((M ! fst ?cfg) (read (snd ?cfg))) @ (map (\<lambda>j. (?rs ! j, Stay)) [k..<k'])) ! j) (?tps ! j)" by simp also have "... = (snd ?cfg' @ drop k tps) ! j" proof (cases "j < k") case True then have tps: "?tps ! j = snd ?cfg ! j" by (simp add: len2 nth_append) have "(snd ?cfg' @ drop k tps) ! j = (snd (exe M ?cfg) @ drop k tps) ! j" by simp also have "... = snd (exe M ?cfg) ! j" using assms(1) True by (metis exe_num_tapes len2 nth_append) also have "... = snd (sem (M ! fst ?cfg) ?cfg) ! j" by (simp add: exe_lt_length less') also have "... = act (snd ((M ! fst ?cfg) (read (snd ?cfg))) ! j) (?tps ! j)" proof - have "proper_command k (M ! (fst ?cfg))" using turing_commandD(1) turing_machine_def assms(1) less' nth_mem by blast then show ?thesis using sem_snd True tps len2 by simp qed finally show ?thesis using len2' True by (simp add: nth_append) next case False then have tps: "?tps ! j = tps ! j" using len2 by (metis (no_types, lifting) "2" append_take_drop_id assms(3) length_take nth_append take2) from False have gt2: "j \<ge> k" by simp have len': "length (snd ?cfg') = k" using assms(1) exe_num_tapes len2 by auto have rs: "?rs ! j = read tps ! j" using tps by (metis (no_types, lifting) "2" assms(3) that nth_map read_def) have "act ((snd ((M ! fst ?cfg) (read (snd ?cfg))) @ (map (\<lambda>j. (?rs ! j, Stay)) [k..<k'])) ! j) (?tps ! j) = act ((map (\<lambda>j. (?rs ! j, Stay)) [k..<k']) ! (j - k)) (?tps ! j)" using False len2 len2' by (simp add: nth_append) also have "... = act (?rs ! j, Stay) (?tps ! j)" by (metis (no_types, lifting) False add_diff_inverse_nat diff_less_mono gt2 that length_upt nth_map nth_upt) also have "... = act (?rs ! j, Stay) (tps ! j)" using tps by simp also have "... = act (read tps ! j, Stay) (tps ! j)" using rs by simp also have "... = tps ! j" using act_Stay assms(3) that by simp also have "... = (snd (exe M ?cfg) @ drop k tps) ! 
j"
          using len'
          by (metis (no_types, lifting) "2" False append_take_drop_id assms(3) execute.simps(2)
            len2 length_take nth_append take2)
        also have "... = (snd ?cfg' @ drop k tps) ! j"
          by simp
        finally show ?thesis
          by simp
      qed
      finally show "act ((?M ! fst ?cfg) ?rs [!] j) (?tps ! j) = (snd ?cfg' @ drop k tps) ! j" .
    qed
  qed
  then show ?thesis
    using exe_def True by simp
next
  case False
  then show ?thesis
    using assms by (simp add: exe_ge_length length_append_tapes)
qed
  finally show "execute ?M (q, tps) (Suc t) = (fst ?cfg', snd ?cfg' @ drop k tps)" .
qed

lemma execute_append_tapes':
  assumes "turing_machine k G M" and "length tps = k"
  shows "execute (append_tapes k (k + length tps') M) (q, tps @ tps') t =
    (fst (execute M (q, tps) t), snd (execute M (q, tps) t) @ tps')"
  using assms execute_append_tapes by simp

lemma transforms_append_tapes:
  assumes "turing_machine k G M"
    and "length tps0 = k"
    and "transforms M tps0 t tps1"
  shows "transforms (append_tapes k (k + length tps') M) (tps0 @ tps') t (tps1 @ tps')"
    (is "transforms ?M _ _ _")
proof -
  have "execute M (0, tps0) t = (length M, tps1)"
    using assms(3) transforms_def transits_def
    by (metis (no_types, opaque_lifting) execute_after_halting_ge fst_conv)
  then have "execute ?M (0, tps0 @ tps') t = (length M, tps1 @ tps')"
    using assms(1,2) execute_append_tapes' by simp
  moreover have "length M = length ?M"
    by (simp add: length_append_tapes)
  ultimately show ?thesis
    by (simp add: execute_imp_transits transforms_def)
qed


subsubsection \<open>Inserting tapes at the beginning\<close>

text \<open>
The next function turns a $k$-tape Turing machine into a $(k + d)$-tape Turing
machine by inserting $d$ tapes at the beginning.
\<close>

definition prepend_tapes :: "nat \<Rightarrow> machine \<Rightarrow> machine" where
  "prepend_tapes d M \<equiv>
    map (\<lambda>cmd rs. (fst (cmd (drop d rs)), map (\<lambda>h. (h, Stay)) (take d rs) @ snd (cmd (drop d rs)))) M"

lemma prepend_tapes_at:
  assumes "i < length M"
  shows "(prepend_tapes d M !
i) gs = (fst ((M ! i) (drop d gs)), map (\<lambda>h. (h, Stay)) (take d gs) @ snd ((M ! i) (drop d gs)))" using assms prepend_tapes_def by simp lemma prepend_tapes_tm: assumes "turing_machine k G M" shows "turing_machine (d + k) G (prepend_tapes d M)" proof show "2 \<le> d + k" using assms turing_machine_def by simp show "4 \<le> G" using assms turing_machine_def by simp let ?M = "prepend_tapes d M" show "turing_command (d + k) (length ?M) G (?M ! i)" if "i < length ?M" for i proof have len: "i < length M" using that prepend_tapes_def by simp then have *: "(?M ! i) gs = (fst ((M ! i) (drop d gs)), map (\<lambda>h. (h, Stay)) (take d gs) @ snd ((M ! i) (drop d gs)))" if "length gs = d + k" for gs using prepend_tapes_def that by simp have tc: "turing_command k (length M) G (M ! i)" using that turing_machine_def len assms by simp show "length (snd ((?M ! i) gs)) = length gs" if "length gs = d + k" for gs using * that turing_commandD[OF tc] by simp show "(?M ! i) gs [.] j < G" if "length gs = d + k" "(\<And>i. i < length gs \<Longrightarrow> gs ! i < G)" "j < length gs" for gs j proof (cases "j < d") case True have "(?M ! i) gs [.] j = fst ((map (\<lambda>h. (h, Stay)) (take d gs) @ snd ((M ! i) (drop d gs))) ! j)" using * that(1) by simp also have "... = fst (map (\<lambda>h. (h, Stay)) (take d gs) ! j)" using True that(1) by (simp add: nth_append) also have "... = gs ! j" by (simp add: True that(3)) finally have "(?M ! i) gs [.] j = gs ! j" . then show ?thesis using that(2,3) by simp next case False have "(?M ! i) gs [.] j = fst ((map (\<lambda>h. (h, Stay)) (take d gs) @ snd ((M ! i) (drop d gs))) ! j)" using * that(1) by simp also have "... = fst (snd ((M ! i) (drop d gs)) ! (j - d))" using False that(1) by (metis (no_types, lifting) add_diff_cancel_left' append_take_drop_id diff_add_inverse2 length_append length_drop length_map nth_append) also have "... < G" using False that turing_commandD[OF tc] by simp finally show ?thesis by simp qed show "(?M ! i) gs [.] 
0 = gs ! 0" if "length gs = d + k" and "d + k > 0" for gs proof (cases "d = 0") case True then have "(?M ! i) gs [.] 0 = fst (snd ((M ! i) gs) ! 0)" using * that(1) by simp then show ?thesis using True that turing_commandD[OF tc] by simp next case False then have "(?M ! i) gs [.] 0 = fst ((map (\<lambda>h. (h, Stay)) (take d gs)) ! 0)" using * that(1) by (simp add: nth_append) also have "... = fst ((map (\<lambda>h. (h, Stay)) gs) ! 0)" using False by (metis gr_zeroI nth_take take_map) also have "... = gs ! 0" using False that by simp finally show ?thesis by simp qed show "[*] ((?M ! i) gs) \<le> length ?M" if "length gs = d + k" for gs proof - have "fst ((?M ! i) gs) = fst ((M ! i) (drop d gs))" using that * by simp moreover have "length (drop d gs) = k" using that by simp ultimately have "fst ((?M ! i) gs) \<le> length M" using turing_commandD(4)[OF tc] by fastforce then show "fst ((?M ! i) gs) \<le> length ?M" using prepend_tapes_def by simp qed qed qed definition shift_cfg :: "tape list \<Rightarrow> config \<Rightarrow> config" where "shift_cfg tps cfg \<equiv> (fst cfg, tps @ snd cfg)" lemma execute_prepend_tapes: assumes "turing_machine k G M" and "length tps = d" and "||cfg0|| = k" shows "execute (prepend_tapes d M) (shift_cfg tps cfg0) t = shift_cfg tps (execute M cfg0 t)" proof (induction t) case 0 show ?case by simp next case (Suc t) let ?M = "prepend_tapes d M" let ?scfg = "shift_cfg tps cfg0" let ?scfg' = "execute ?M ?scfg t" let ?cfg' = "execute M cfg0 t" have fst: "fst ?cfg' = fst ?scfg'" using shift_cfg_def Suc.IH by simp have len: "||?cfg'|| = k" using assms(1,3) execute_num_tapes read_length by auto have len_s: "||?scfg'|| = d + k" using prepend_tapes_tm[OF assms(1)] shift_cfg_def assms(2,3) execute_num_tapes read_length by (metis length_append snd_conv) let ?srs = "read (snd ?scfg')" let ?rs = "read (snd ?cfg')" have len_rs: "length ?rs = k" using assms(1,3) execute_num_tapes read_length by auto moreover have len_srs: "length ?srs = k + d" using 
prepend_tapes_tm[OF assms(1)] shift_cfg_def assms(2,3) by (metis add.commute execute_num_tapes length_append read_length snd_conv) ultimately have srs_rs: "drop d ?srs = ?rs" using Suc shift_cfg_def read_def by simp have *: "execute ?M ?scfg (Suc t) = exe ?M ?scfg'" by simp show ?case proof (cases "fst ?scfg' \<ge> length ?M") case True then show ?thesis using * Suc exe_ge_length shift_cfg_def prepend_tapes_def by auto next case running: False then have scmd: "?M ! (fst ?scfg') = (\<lambda>gs. (fst ((M ! (fst ?scfg')) (drop d gs)), map (\<lambda>h. (h, Stay)) (take d gs) @ snd ((M ! (fst ?scfg')) (drop d gs))))" (is "?scmd = _") using prepend_tapes_at prepend_tapes_def by auto then have cmd: "?M ! (fst ?scfg') = (\<lambda>gs. (fst ((M ! (fst ?cfg')) (drop d gs)), map (\<lambda>h. (h, Stay)) (take d gs) @ snd ((M ! (fst ?cfg')) (drop d gs))))" using fst by simp let ?cmd = "M ! (fst ?cfg')" have "execute ?M ?scfg (Suc t) = sem (?M ! (fst ?scfg')) ?scfg'" using running * exe_lt_length by simp then have lhs: "execute ?M ?scfg (Suc t) = (fst (?scmd ?srs), map (\<lambda>(a, tp). act a tp) (zip (snd (?scmd ?srs)) (snd ?scfg')))" (is "_ = ?lhs") using sem' by simp have "shift_cfg tps (execute M cfg0 (Suc t)) = shift_cfg tps (exe M ?cfg')" by simp also have "... = shift_cfg tps (sem (M ! (fst ?cfg')) ?cfg')" using exe_lt_length running fst prepend_tapes_def by auto also have "... = shift_cfg tps (fst (?cmd ?rs), map (\<lambda>(a, tp). act a tp) (zip (snd (?cmd ?rs)) (snd ?cfg')))" using sem' by simp also have "... = (fst (?cmd ?rs), tps @ map (\<lambda>(a, tp). act a tp) (zip (snd (?cmd ?rs)) (snd ?cfg')))" using shift_cfg_def by simp finally have rhs: "shift_cfg tps (execute M cfg0 (Suc t)) = (fst (?cmd ?rs), tps @ map (\<lambda>(a, tp). act a tp) (zip (snd (?cmd ?rs)) (snd ?cfg')))" (is "_ = ?rhs") . have "?lhs = ?rhs" proof standard+ show "fst (?scmd ?srs) = fst (?cmd ?rs)" using srs_rs cmd by simp show "map (\<lambda>(a, tp). 
act a tp) (zip (snd (?scmd ?srs)) (snd ?scfg')) = tps @ map (\<lambda>(a, tp). act a tp) (zip (snd (?cmd ?rs)) (snd ?cfg'))" (is "?l = ?r") proof (rule nth_equalityI) have lenl: "length ?l = d + k" using lhs execute_num_tapes assms prepend_tapes_tm len_s by (smt (z3) length_append shift_cfg_def snd_conv) moreover have "length ?r = d + k" using rhs execute_num_tapes assms shift_cfg_def by (metis (mono_tags, lifting) length_append snd_conv) ultimately show "length ?l = length ?r" by simp show "?l ! j = ?r ! j" if "j < length ?l" for j proof (cases "j < d") case True let ?at = "zip (snd (?scmd ?srs)) (snd ?scfg') ! j" have "?l ! j = act (fst ?at) (snd ?at)" using that by simp moreover have "fst ?at = snd (?scmd ?srs) ! j" using that by simp moreover have "snd ?at = snd ?scfg' ! j" using that by simp ultimately have "?l ! j = act (snd (?scmd ?srs) ! j) (snd ?scfg' ! j)" by simp moreover have "snd ?scfg' ! j = tps ! j" using shift_cfg_def assms(2) by (metis (no_types, lifting) Suc.IH True nth_append snd_conv) moreover have "snd (?scmd ?srs) ! j = (?srs ! j, Stay)" proof - have "snd (?scmd ?srs) = map (\<lambda>h. (h, Stay)) (take d ?srs) @ snd ((M ! (fst ?scfg')) (drop d ?srs))" using scmd by simp then have "snd (?scmd ?srs) ! j = map (\<lambda>h. (h, Stay)) (take d ?srs) ! j" using len_srs lenl True that by (smt (z3) add.commute length_map length_take min_less_iff_conj nth_append) then show ?thesis using len_srs True by simp qed moreover have "?r ! j = tps ! j" using True assms(2) by (simp add: nth_append) ultimately show ?thesis using len_s that lenl by (metis act_Stay) next case False have jle: "j < d + k" using lenl that by simp have jle': "j - d < k" using lenl that False by simp let ?at = "zip (snd (?scmd ?srs)) (snd ?scfg') ! j" have "?l ! j = act (fst ?at) (snd ?at)" using that by simp moreover have "fst ?at = snd (?scmd ?srs) ! j" using that by simp moreover have "snd ?at = snd ?scfg' ! j" using that by simp ultimately have "?l ! j = act (snd (?scmd ?srs) ! 
j) (snd ?scfg' ! j)" by simp moreover have "snd ?scfg' ! j = snd ?cfg' ! (j - d)" using shift_cfg_def assms(2) Suc False jle by (metis nth_append snd_conv) moreover have "snd (?scmd ?srs) ! j = snd (?cmd ?rs) ! (j - d)" proof - have "snd (?scmd ?srs) = map (\<lambda>h. (h, Stay)) (take d ?srs) @ snd ((M ! (fst ?cfg')) (drop d ?srs))" using cmd by simp then have "snd (?scmd ?srs) ! j = snd ((M ! (fst ?cfg')) (drop d ?srs)) ! (j - d)" using len_srs lenl False that len_rs by (smt (z3) Nat.add_diff_assoc add.right_neutral add_diff_cancel_left' append_take_drop_id le_add1 length_append length_map nth_append srs_rs) then have "snd (?scmd ?srs) ! j = snd (?cmd ?rs) ! (j - d)" using srs_rs by simp then show ?thesis by simp qed moreover have "?r ! j = act (snd (?cmd ?rs) ! (j - d)) (snd ?cfg' ! (j - d))" proof - have "fst (execute M cfg0 t) < length M" using running fst prepend_tapes_def by simp then have len1: "length (snd (?cmd ?rs)) = k" using assms(1) len_rs turing_machine_def[of k G M] turing_commandD(1) by fastforce have "?r ! j = map (\<lambda>(a, tp). act a tp) (zip (snd (?cmd ?rs)) (snd ?cfg')) ! (j - d)" using assms(2) False by (simp add: nth_append) also have "... = act (snd (?cmd ?rs) ! (j - d)) (snd ?cfg' ! (j - d))" using len1 len jle' by simp finally show ?thesis by simp qed ultimately show ?thesis by simp qed qed qed then show ?thesis using lhs rhs by simp qed qed lemma transforms_prepend_tapes: assumes "turing_machine k G M" and "length tps = d" and "length tps0 = k" and "transforms M tps0 t tps1" shows "transforms (prepend_tapes d M) (tps @ tps0) t (tps @ tps1)" proof - have "\<exists>t'\<le>t. execute M (0, tps0) t' = (length M, tps1)" using assms(4) transforms_def transits_def by simp then have "\<exists>t'\<le>t. 
execute (prepend_tapes d M) (shift_cfg tps (0, tps0)) t' = shift_cfg tps (length M, tps1)" using assms transforms_def transits_def execute_prepend_tapes shift_cfg_def by auto moreover have "length M = length (prepend_tapes d M)" using prepend_tapes_def by simp ultimately show ?thesis using shift_cfg_def transforms_def transitsI by auto qed end
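The tape-insertion construction formalized above can be summarized operationally: `prepend_tapes d M` lifts every command of a $k$-tape machine to $d + k$ tapes, where the first $d$ tapes simply write back the symbol they read and keep their heads in place (`Stay`), while the original command acts on the remaining $k$ tapes. A minimal Python sketch of this lifting, assuming a hypothetical command representation (a function from the tuple of read symbols to a pair of next state and per-tape actions) that stands in for the Isabelle `machine` type:

```python
# Hypothetical sketch, not the Isabelle formalization: a command maps the
# tuple of read symbols to (next_state, [(written_symbol, move), ...]).
STAY = "Stay"

def prepend_tapes(d, commands):
    """Lift each k-tape command to a (d+k)-tape command: the first d tapes
    re-write the symbol they read and stay put; the original command acts
    on the remaining k tapes (mirroring the definition above)."""
    def lift(cmd):
        def lifted(rs):
            state, actions = cmd(rs[d:])
            return state, [(h, STAY) for h in rs[:d]] + actions
        return lifted
    return [lift(cmd) for cmd in commands]
```

On a configuration with the extra tapes in front, each step of the lifted machine then agrees with the original machine on the last $k$ tapes and leaves the first $d$ tapes untouched, which is exactly what `execute_prepend_tapes` proves.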
classdef RunClass < TreeNodeClass methods % ---------------------------------------------------------------------------------- function obj = RunClass(varargin) % % Syntax: % obj = RunClass() % obj = RunClass(filename); % obj = RunClass(filename, iGroup, iSubj, iSess, iRun); % obj = RunClass(run); % % Example 1: % run1 = RunClass('./s1/neuro_run01.nirs',1,1,1,1); % run1copy = RunClass(run1); % obj@TreeNodeClass(varargin); obj.type = 'run'; if nargin==0 obj.name = ''; return; end if isa(varargin{1}, 'RunClass') obj.Copy(varargin{1}); return; elseif isa(varargin{1}, 'FileClass') [~, ~, ~, obj.name] = varargin{1}.ExtractNames(); obj.path = varargin{1}.GetFilesPath(); % Fix wrong root path elseif ischar(varargin{1}) && strcmp(varargin{1},'copy') return; elseif ischar(varargin{1}) obj.name = varargin{1}; end if nargin==5 obj.iGroup = varargin{2}; obj.iSubj = varargin{3}; obj.iSess = varargin{4}; obj.iRun = varargin{5}; end obj.LoadAcquiredData(); if obj.acquired.Error() obj = RunClass.empty(); return; end obj.procStream = ProcStreamClass(obj.acquired); obj.InitTincMan(); if isa(varargin{1}, 'FileClass') varargin{1}.Loaded(); end end % ---------------------------------------------------------------------------------- function b = Error(obj) if isempty(obj) b = -1; return; end b = obj.acquired.Error(); end % ---------------------------------------------------------------------------------- function err = Load(obj) err = 0; err1 = obj.LoadAcquiredData(); err2 = obj.LoadDerivedData(); if ~(err1==0 && err2==0) err = -1; end end % ---------------------------------------------------------------------------------- function err = LoadDerivedData(obj) err = 0; if isempty(obj) return; end err = obj.procStream.Load([obj.path, obj.GetOutputFilename]); end % ---------------------------------------------------------------------------------- function err = LoadAcquiredData(obj) err = -1; if isempty(obj) return; end if isempty(obj.SaveMemorySpace(obj.name)) % Storage scheme is memory: 
In this case load acquisition data unconditionally. dataStorageScheme = 'memory'; else dataStorageScheme = 'files'; end if isempty(obj.acquired) if obj.IsNirs() obj.acquired = NirsClass([obj.path, obj.name], dataStorageScheme); else obj.acquired = SnirfClass([obj.path, obj.name], dataStorageScheme); end else obj.acquired.Load([obj.path, obj.name]); end if obj.acquired.Error() < 0 obj.logger.Write( sprintf(' **** Error: "%s" failed to load - %s\n', obj.name, obj.acquired.GetErrorMsg()) ); return; elseif obj.acquired.Error() > 0 obj.logger.Write( sprintf(' **** Warning: %s in file "%s"\n', obj.acquired.GetErrorMsg(), obj.name) ); else %fprintf(' Loaded file %s to run.\n', obj.name); end err = 0; end % ---------------------------------------------------------------------------------- function FreeMemory(obj) if isempty(obj) return; end % Unload derived data obj.procStream.FreeMemory([obj.path, obj.GetOutputFilename()]); % Unload acquired data obj.acquired.FreeMemory([obj.path, obj.GetFilename()]); end % ---------------------------------------------------------------------------------- function SaveAcquiredData(obj) if isempty(obj) return; end obj.procStream.input.SaveAcquiredData() end % ---------------------------------------------------------------------------------- function b = AcquiredDataModified(obj) b = obj.procStream.AcquiredDataModified(); if b obj.logger.Write(sprintf('Acquisition data for run %s has been modified\n', obj.name)); end end % ---------------------------------------------------------------------------------- % Copy processing params (procInut and procResult) from % N2 to N1 if N1 and N2 are same nodes % ---------------------------------------------------------------------------------- function Copy(obj, obj2, conditional) if nargin==3 && strcmp(conditional, 'conditional') if obj.Mismatch(obj2) return end obj.Copy@TreeNodeClass(obj2, 'conditional'); else obj.Copy@TreeNodeClass(obj2); if isempty(obj.acquired) if obj.IsNirs() obj.acquired = 
NirsClass(); else obj.acquired = SnirfClass(); end end obj.acquired.Copy(obj2.acquired); end end % -------------------------------------------------------------- function CopyStims(obj, obj2) obj.CondNames = obj2.CondNames; obj.procStream.CopyStims(obj2.procStream); end % ---------------------------------------------------------------------------------- % Subjects obj1 and obj2 are considered equivalent if their names % are equivalent and their sets of runs are equivalent. % ---------------------------------------------------------------------------------- function B = equivalent(obj1, obj2) B=1; [p1,n1] = fileparts(obj1.name); [p2,n2] = fileparts(obj2.name); if ~strcmp([p1,'/',n1],[p2,'/',n2]) B=0; return; end end % ---------------------------------------------------------------------------------- function b = IsEmpty(obj) b = true; if isempty(obj) return; end if obj.acquired.IsEmpty() return; end b = false; end % ---------------------------------------------------------------------------------- function b = IsEmptyOutput(obj) b = true; if isempty(obj) return; end obj.LoadDerivedData(); if obj.procStream.IsEmptyOutput() return; end b = false; end end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% methods % ---------------------------------------------------------------------------------- function b = IsNirs(obj) b = false; [~,~,ext] = fileparts(obj.name); if strcmp(ext,'.nirs') b = true; end end % ---------------------------------------------------------------------------------- function varval = GetVar(obj, varname) varval = []; if isproperty(obj, varname) varval = eval( sprintf('obj.%s', varname) ); end if isempty(varval) varval = obj.procStream.GetVar(varname); end if isempty(varval) varval = obj.acquired.GetVar(varname); end end % ---------------------------------------------------------------------------------- function LoadInputVars(obj) % a) Find all 
variables needed by proc stream args = obj.procStream.GetInputArgs(); % b) Find these variables in this run for ii = 1:length(args) eval( sprintf('obj.inputVars.%s = obj.GetVar(args{ii});', args{ii}) ); end end % ---------------------------------------------------------------------------------- function Calc(obj, options) if ~exist('options','var') || isempty(options) options = 'overwrite'; end % Update call application GUI using its generic Update function if ~isempty(obj.updateParentGui) obj.updateParentGui('DataTreeClass', [obj.iGroup, obj.iSubj, obj.iSess, obj.iRun]); end % Load acquired data obj.acquired.Load(); if strcmpi(options, 'overwrite') % Recalculating result means deleting old results, if % option == 'overwrite' obj.procStream.output.Flush(); end if obj.DEBUG fprintf('Calculating processing stream for group %d, subject %d, session %d, run %d\n', obj.iGroup, obj.iSubj, obj.iSess, obj.iRun); end % Find all variables needed by proc stream, find them in this run, and load them to proc stream input obj.LoadInputVars(); Calc@TreeNodeClass(obj); if obj.DEBUG obj.logger.Write(sprintf('Completed processing stream for group %d, subject %d, session %d, run %d\n', obj.iGroup, obj.iSubj, obj.iSess, obj.iRun)); obj.logger.Write('\n') end end % ---------------------------------------------------------------------------------- function Print(obj, indent) if ~exist('indent', 'var') indent = 4; else indent = indent+4; end Print@TreeNodeClass(obj, indent); end % --------------------------------------------------------------- function PrintProcStream(obj) fcalls = obj.procStream.GetFuncCallChain(); obj.logger.Write('Run processing stream:\n'); for ii = 1:length(fcalls) obj.logger.Write('%s\n', fcalls{ii}); end end end % Public methods %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Public Set/Get methods for acquired data %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% methods %
---------------------------------------------------------------------------------- function t = GetTime(obj, iBlk) if nargin==1 iBlk=1; end % Sometimes the caller is NOT the current element in which case we need to load % (when dataStorageScheme = 'file' mode) here explicitly. err = -1; if obj.acquired.IsEmpty() err = obj.acquired.LoadTime(); end t = obj.acquired.GetTime(iBlk); if err==0 obj.acquired.FreeMemory(obj.GetFilename()); end end % ---------------------------------------------------------------------------------- function t = GetTimeCombined(obj) % Sometimes the caller is NOT the current element in which case we need to load % (when dataStorageScheme = 'file' mode) here explicitly. err = -1; if obj.acquired.IsEmpty() err = obj.acquired.LoadTime(); end t = obj.acquired.GetTimeCombined(); if err==0 obj.acquired.FreeMemory(obj.GetFilename()); end end % ---------------------------------------------------------------------------------- function t = GetAuxiliaryTime(obj) % Sometimes the caller is NOT the current element in which case we need to load % (when dataStorageScheme = 'file' mode) here explicitly. BUT this needs to be % optimized to NOT load the whole thing, just aux - see GetTime()/ GetTimeCombined(). % TBD. 
jdubb, 08/17/2022 err = -1; if obj.acquired.IsEmpty() obj.acquired.Load(); err = obj.GetError(); end t = obj.acquired.GetAuxiliaryTime(); if err==0 obj.acquired.FreeMemory(obj.GetFilename()); end end % ---------------------------------------------------------------------------------- function [d, t, ml] = GetRawData(obj, iBlk) if nargin<2 iBlk = 1; end [d, t, ml] = obj.acquired.GetDataTimeSeries('', iBlk); end % ---------------------------------------------------------------------------------- function [d, t, ml] = GetDataTimeSeries(obj, options, iBlk) if ~exist('options','var') options = ''; end if ~exist('iBlk','var') || isempty(iBlk) iBlk = 1; end if isempty(options) || strcmp(options, 'reshape') [d, t, ml] = obj.acquired.GetDataTimeSeries(options, iBlk); else [d, t, ml] = obj.GetDataTimeSeries@TreeNodeClass(options, iBlk); end end % ---------------------------------------------------------------------------------- function [iDataBlks, iCh] = GetDataBlocksIdxs(obj, iCh) if nargin<2 iCh = []; end [iDataBlks, iCh] = obj.acquired.GetDataBlocksIdxs(iCh); end % ---------------------------------------------------------------------------------- function n = GetDataBlocksNum(obj) n = obj.acquired.GetDataBlocksNum(); end % ---------------------------------------------------------------------------------- function SD = GetSDG(obj,option) % Sometimes the caller is NOT the current element in which case we need to load % (when dataStorageScheme = 'file' mode) here explicitly. BUT this needs to be % optimized to NOT load the whole thing, just SDG. TBD. 
jdubb, 08/17/2022 err = -1; if obj.acquired.IsEmpty() obj.acquired.Load(); err = obj.GetError(); end SD.Lambda = obj.acquired.GetWls(); if exist('option','var') SD.SrcPos = obj.acquired.GetSrcPos(option); SD.DetPos = obj.acquired.GetDetPos(option); else SD.SrcPos = obj.acquired.GetSrcPos(); SD.DetPos = obj.acquired.GetDetPos(); end if err==0 obj.acquired.FreeMemory(obj.GetFilename()); end end % ---------------------------------------------------------------------------------- function InitMlActMan(obj, iBlk) if ~exist('iBlk','var') iBlk = 1; end ml = obj.acquired.data.GetMeasListSrcDetPairs(iBlk); obj.procStream.input.SetMeasListActMan([ml, ones(size(ml, 1), 1)]); end % ---------------------------------------------------------------------------------- function InitMlVis(obj, iBlk) if ~exist('iBlk','var') iBlk = 1; end ml = obj.acquired.data.GetMeasListSrcDetPairs(iBlk); obj.procStream.input.SetMeasListActMan([ml, ones(size(ml, 1), 1)]); end % ---------------------------------------------------------------------------------- function ch = GetMeasList(obj, options, iBlk) if ~exist('iBlk','var') || isempty(iBlk) iBlk=1; end if ~exist('options','var') options = ''; end ch = struct('MeasList',[], 'MeasListVis',[], 'MeasListActMan',[], 'MeasListActAuto',[]); % Sometimes the caller is NOT the current element in which case we need to load % (when dataStorageScheme = 'file' mode) here explicitly. BUT this needs to be % optimized to NOT load the whole thing, just SDG. TBD. 
jdubb, 08/17/2022 err = -1; if obj.acquired.IsEmpty() obj.acquired.Load(); err = obj.GetError(); end ch.MeasList = obj.acquired.GetMeasList(iBlk); ch.MeasListActMan = obj.procStream.GetMeasListActMan(iBlk); ch.MeasListActAuto = obj.procStream.GetMeasListActAuto(iBlk); ch.MeasListActMan = mlAct_Initialize(ch.MeasListActMan, ch.MeasList); ch.MeasListActAuto = mlAct_Initialize(ch.MeasListActAuto, ch.MeasList); if strcmp(options,'reshape') ch.MeasList = sortrows(ch.MeasList); end if err==0 obj.acquired.FreeMemory(obj.GetFilename()); end end % ---------------------------------------------------------------------------------- function mlAct = GetActiveChannels(obj) % Load to memory if needed err = -1; if obj.procStream.output.IsEmpty() obj.Load(); err = 0; end mlAct = obj.GetVar('mlActAuto'); if ~isempty(mlAct) mlAct = mlAct{1}; end % Free memory if err==0 obj.FreeMemory(); end end % ---------------------------------------------------------------------------------- function SetStims_MatInput(obj, s, t, CondNames) obj.procStream.SetStims_MatInput(s, t, CondNames); end % ---------------------------------------------------------------------------------- function ReloadStim(obj) % Update call application GUI using its generic Update function if ~isempty(obj.updateParentGui) obj.updateParentGui('DataTreeClass', [obj.iGroup, obj.iSubj, obj.iSess, obj.iRun]); end if obj.DEBUG fprintf('group %d, subject %d, session %d, run %d\n', obj.iGroup, obj.iSubj, obj.iSess, obj.iRun); end obj.acquired.LoadStim(obj.acquired.GetFilename()); obj.procStream.CopyStims(obj.acquired) pause(.5) end % ---------------------------------------------------------------------------------- function s = GetStims(obj, t) % Proc stream output s_inp = obj.procStream.input.GetStims(t); s_out = obj.procStream.output.GetStims(t); k_inp_all = find(s_inp~=0); k_out_edit = find(s_out~=0 & s_out~=1); % Select only those output stims which exist in the input b = ismember(k_out_edit, k_inp_all); s = s_inp; 
s(k_out_edit(b)) = s_out(k_out_edit(b)); end % ---------------------------------------------------------------------------------- function SetConditions(obj, CondNames) if nargin==2 obj.procStream.SetConditions(CondNames); end obj.CondNames = unique(obj.procStream.GetConditions()); end % ---------------------------------------------------------------------------------- function CondNames = GetConditions(obj) CondNames = obj.procStream.GetConditions(); end % ---------------------------------------------------------------------------------- function CondNames = GetConditionsActive(obj) CondNames = obj.CondNames; t = obj.GetTime(); s = obj.GetStims(t); for ii=1:size(s,2) if ismember(abs(1), s(:,ii)) CondNames{ii} = ['-- ', CondNames{ii}]; end end end % ---------------------------------------------------------------------------------- function wls = GetWls(obj) wls = obj.acquired.GetWls(); end % ---------------------------------------------------------------------------------- function bbox = GetSdgBbox(obj) if obj.acquired.IsEmpty() % No need to load whole of acquired data need only probe here obj.acquired.LoadProbe(obj.acquired.GetFilename()); end bbox = obj.acquired.GetSdgBbox(); end % ---------------------------------------------------------------------------------- function aux = GetAux(obj) aux = obj.acquired.GetAux(); end % ---------------------------------------------------------------------------------- function aux = GetAuxiliary(obj) aux = obj.acquired.GetAuxiliary(); end % ---------------------------------------------------------------------------------- function tIncAuto = GetTincAuto(obj, iBlk) if nargin<2 iBlk = 1; end tIncAuto = obj.procStream.output.GetTincAuto(iBlk); end % ---------------------------------------------------------------------------------- function tIncAutoCh = GetTincAutoCh(obj, iBlk) if nargin<2 iBlk = 1; end tIncAutoCh = obj.procStream.output.GetTincAutoCh(iBlk); end % 
---------------------------------------------------------------------------------- function tIncMan = GetTincMan(obj, iBlk) if nargin<2 iBlk = 1; end tIncMan = obj.procStream.input.GetTincMan(iBlk); if isempty(tIncMan) % If the Tinc array is uninitialized. TODO: find a more sensible place to do this obj.InitTincMan(); tIncMan = obj.procStream.input.GetTincMan(iBlk); end end % ---------------------------------------------------------------------------------- function SetTincMan(obj, idxs, iBlk, excl_incl) if nargin<2 return end if nargin<4 excl_incl = 'exclude'; end tIncMan = obj.procStream.GetTincMan(iBlk); if strcmp(excl_incl, 'exclude') tIncMan(idxs) = 0; elseif strcmp(excl_incl, 'include') tIncMan(idxs) = 1; end obj.procStream.SetTincMan(tIncMan, iBlk); end % ---------------------------------------------------------------------------------- function InitTincMan(obj) iBlk = 1; % TODO: implement multiple data blocks while 1 t = obj.acquired.GetTime(iBlk); if isempty(t) break end tIncMan = ones(length(t),1); obj.procStream.SetTincMan(tIncMan, iBlk); iBlk = iBlk+1; end end end % Public Set/Get methods %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % All other public methods for acquired data %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% methods % ---------------------------------------------------------------------------------- function AddStims(obj, tPts, condition, duration, amp, more) if isempty(tPts) return; end if isempty(condition) return; end obj.procStream.AddStims(tPts, condition, duration, amp, more); end % ---------------------------------------------------------------------------------- function DeleteStims(obj, tPts, condition) if ~exist('tPts','var') || isempty(tPts) return; end if ~exist('condition','var') condition = ''; end obj.procStream.DeleteStims(tPts, condition); end % ---------------------------------------------------------------------------------- function ToggleStims(obj, tPts, 
condition) if ~exist('tPts','var') || isempty(tPts) return; end if ~exist('condition','var') condition = ''; end obj.procStream.ToggleStims(tPts, condition); end % ---------------------------------------------------------------------------------- function MoveStims(obj, tPts, condition) if ~exist('tPts','var') || isempty(tPts) return; end if ~exist('condition','var') condition = ''; end obj.procStream.MoveStims(tPts, condition); end % ---------------------------------------------------------------------------------- function AddStimColumn(obj, name, initValue) if ~exist('name', 'var') return; end obj.procStream.AddStimColumn(name, initValue); end % ---------------------------------------------------------------------------------- function DeleteStimColumn(obj, idx) if ~exist('idx', 'var') || idx <= 3 return; end obj.procStream.DeleteStimColumn(idx); end % ---------------------------------------------------------------------------------- function RenameStimColumn(obj, oldname, newname) if ~exist('oldname', 'var') || ~exist('newname', 'var') return; end obj.procStream.RenameStimColumn(oldname, newname); end % ---------------------------------------------------------------------------------- function probe = GetProbe(obj) probe = obj.acquired.GetProbe(); end % ---------------------------------------------------------------------------------- function data = GetStimData(obj, icond) data = obj.procStream.GetStimData(icond); end % ---------------------------------------------------------------------------------- function val = GetStimDataLabels(obj, icond) val = obj.procStream.GetStimDataLabels(icond); end % ---------------------------------------------------------------------------------- function SetStimTpts(obj, icond, tpts) obj.procStream.SetStimTpts(icond, tpts); end % ---------------------------------------------------------------------------------- function tpts = GetStimTpts(obj, icond) if ~exist('icond','var') icond=1; end tpts = 
obj.procStream.GetStimTpts(icond); end % ---------------------------------------------------------------------------------- function SetStimDuration(obj, icond, duration, tpts) obj.procStream.SetStimDuration(icond, duration, tpts); end % ---------------------------------------------------------------------------------- function duration = GetStimDuration(obj, icond) if ~exist('icond','var') icond=1; end duration = obj.procStream.GetStimDuration(icond); end % ---------------------------------------------------------------------------------- function SetStimAmplitudes(obj, icond, amps, tpts) obj.procStream.SetStimAmplitudes(icond, amps, tpts); end % ---------------------------------------------------------------------------------- function vals = GetStimAmplitudes(obj, icond) if ~exist('icond','var') icond=1; end vals = obj.procStream.GetStimAmplitudes(icond); end % ---------------------------------------------------------------------------------- function RenameCondition(obj, oldname, newname) % Function to rename a condition. Important to remember that changing the % condition involves two distinct, well-defined steps: % a) For the current element change the name of the specified (old) % condition ONLY for ALL the acquired data elements under the % currElem, be it run, subj, or group. In this step we DO NOT TOUCH % the condition names of the run, subject or group. % b) Rebuild condition names and tables of all the tree nodes group, subjects % and runs same as if you were loading during Homer3 startup from the % acquired data. 
% if ~exist('oldname','var') || ~ischar(oldname) return; end if ~exist('newname','var') || ~ischar(newname) return; end newname = obj.ErrCheckNewCondName(newname); if obj.err ~= 0 return; end obj.procStream.RenameCondition(oldname, newname); end % ---------------------------------------------------------------------------------- function StimReject(obj, t, iBlk) obj.procStream.StimReject(t, iBlk); end % ---------------------------------------------------------------------------------- function StimInclude(obj, t, iBlk) obj.procStream.StimInclude(t, iBlk); end % ---------------------------------------------------------------------------------- function vals = GetStimValSettings(obj) vals = obj.procStream.input.GetStimValSettings(); end % ---------------------------------------------------------------------------------- function ExportStim(obj, options) global cfg if ~exist('options','var') options = ''; if strcmpi(cfg.GetValue('Load Stim from TSV file'), 'no') options = 'regenerate'; end end SnirfFile2Tsv(obj.acquired, '', options); end % ---------------------------------------------------------------------------------- function fname = DeleteExportStim(obj) fname = obj.acquired.GetStimTsvFilename(); if ispathvalid(fname) try fprintf('Delete %s\n', fname) delete(fname); catch end end end end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% methods % ---------------------------------------------------------------------------------- function nbytes = MemoryRequired(obj, option) if ~exist('option','var') option = 'memory'; end nbytes = obj.procStream.MemoryRequired(); if strcmp(option, 'file') return end if isempty(obj.acquired) return end nbytes = nbytes + obj.acquired.MemoryRequired(); end % ----------------------------------------------------------------- function [fn_error, missing_args, prereqs] = CheckProcStreamOrder(obj) % Returns index of processing stream 
function which is missing % an argument, and a cell array of the missing arguments. % fn_error is 0 if there are no errors. missing_args = {}; fn_error = 0; prereqs = ''; % Processing stream begins with inputs available available = obj.procStream.input.GetProcInputs(); % Inputs which are usually optional or defined elsewhere extras = {'iRun' 'iSubj' 'iGroup' 'mlActAuto', 'tIncAuto', 'Aaux', 'rcMap'}; available = [available, extras]; % For all fcalls for i = 1:length(obj.procStream.fcalls) inputs = obj.procStream.fcalls(i).GetInputs(); % Check that each input is available for j = 1:length(inputs) if ~any(strcmp(available, inputs{j})) fn_error = obj.procStream.fcalls(i); missing_args{end+1} = inputs{j}; %#ok<AGROW> end end if isa(fn_error, 'FuncCallClass') entry = obj.procStream.reg.GetEntryByName(fn_error.name); if isfield(entry.help.sections, 'prerequisites') prereqs_list = splitlines(entry.help.sections.prerequisites.str); for k = 1:length(prereqs_list) if ~isempty(prereqs_list{k}) prereqs = [prereqs, sprintf('\n'), strtrim(prereqs_list{k})]; end end end return; end % Add outputs of the function to available list outputs = obj.procStream.fcalls(i).GetOutputs(); for j = 1:length(outputs) available{end + 1} = outputs{j}; %#ok<AGROW> end end end % ---------------------------------------------------------------------------------- function r = ListOutputFilenames(obj, options) if ~exist('options','var') options = ''; end r = obj.GetOutputFilename(options); fprintf(' %s %s\n', obj.path, r); end % ---------------------------------------------------------------------------------- function b = HaveOutput(obj) b1 = ~obj.procStream.output.IsEmpty(); fname = obj.procStream.output.SetFilename([obj.path, obj.GetOutputFilename()]); b2 = false; if ispathvalid(fname) r = load(fname); b2 = ~r.output.IsEmpty(); end b = b1 || b2; end end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Private methods 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% methods (Access = public) end % Private methods end
-- Implementation of Wireworld. -- Cell states 'cellT' are represented as follows. -- 'empty' -- 'ehead' for electron head -- 'etail' for electron tail -- 'condu' for conductor -- The definition 'mk_hpp' builds an instance of HPP -- from an initial configuration of cell states. import cell_automaton utils data.vector open utils namespace ww section ww -- empty | electron head | electron tail | conductor @[derive decidable_eq] inductive cellT | empty | ehead | etail | condu open cellT def cellT_str : cellT → string | empty := " " | ehead := "H" | etail := "T" | condu := "X" instance cellT_to_str : has_to_string cellT := ⟨cellT_str⟩ instance cellT_repr : has_repr cellT := ⟨cellT_str⟩ attribute [reducible] def ww := cell_automaton cellT def step : cellT → ℕ → cellT | empty _ := empty | ehead _ := etail | etail _ := condu | condu c := if c = 1 ∨ c = 2 then ehead else condu def ww_step (cell : cellT) (neigh : list cellT) := step cell $ count_at_single neigh ehead def mk_ww (g : vec_grid₀ cellT) : ww := ⟨g, empty, cell_automatons.moore, ww_step, cell_automatons.ext_id⟩ def wire_g := vec_grid₀.mk ⟨1, 5, dec_trivial, ⟨[etail, ehead, condu, condu, condu], rfl⟩⟩ ⟨0, 1⟩ def wire : ww := mk_ww wire_g def or_g_10 := vec_grid₀.mk ⟨5, 6, dec_trivial, ⟨[etail, ehead, empty, empty, empty, empty, empty, empty, condu, empty, empty, empty, empty, condu, condu, condu, condu, condu, empty, empty, condu, empty, empty, empty, condu, condu, empty, empty, empty, empty], rfl⟩⟩ ⟨0, 1⟩ def or_10 : ww := mk_ww or_g_10 def or_g_01 := vec_grid₀.mk ⟨5, 6, dec_trivial, ⟨[condu, condu, empty, empty, empty, empty, empty, empty, condu, empty, empty, empty, empty, condu, condu, condu, condu, condu, empty, empty, condu, empty, empty, empty, etail, ehead, empty, empty, empty, empty], rfl⟩⟩ ⟨0, 1⟩ def or_01 : ww := mk_ww or_g_01 open cardinals section ww_or def or_gate' := vec_grid₀.mk ⟨5, 6, dec_trivial, ⟨[condu, condu, empty, empty, empty, empty, empty, empty, condu, empty, empty, empty, empty, condu, 
condu, condu, condu, condu, empty, empty, condu, empty, empty, empty, etail, ehead, empty, empty, empty, empty], rfl⟩⟩ ⟨-5, -5⟩ def or_gate : ww := mk_ww or_gate' def write_input (i₁ : bool) (i₂ : bool) := let (b₁, b₂) := if i₁ then (etail, ehead) else (condu, condu) in let (b₃, b₄) := if i₂ then (etail, ehead) else (condu, condu) in mod_many [(⟨-5, -10⟩, b₁), (⟨-4, -10⟩, b₂), (⟨-5, -6⟩, b₃), (⟨-4, -6⟩, b₄)] or_gate def sim_or (i₁ i₂ : bool) : bool := let sim := step_n (write_input i₁ i₂) 3 in yield_at sim ⟨-8, -2⟩ = etail ∧ yield_at sim ⟨-8, -1⟩ = ehead end ww_or section ww_xor inductive direction | N | W | E | S open direction structure inout := (p₁ : point) (p₂ : point) (dir : direction) structure ww₁ := (aut : ww) (ins : list inout) (ous : list inout) def str_of_ww₁ : ww₁ → string | ⟨aut, _, _⟩ := to_string aut instance ww₁_to_str : has_to_string ww₁ := ⟨str_of_ww₁⟩ instance ww₁_repr : has_repr ww₁ := ⟨str_of_ww₁⟩ def mk_ww₁ (g : vec_grid₀ cellT) (inputs outputs : list inout) : ww₁ := ⟨mk_ww g, inputs, outputs⟩ def write (a : ww₁) (n : ℕ) (b : bool) : ww₁ := let input := list.nth a.ins n in match input with | none := a | some ⟨p₁, p₂, dir⟩ := if b then match dir with | N := ww₁.mk (mod_many [(up p₁ p₂, ehead), (down p₁ p₂, etail)] a.aut) a.ins a.ous | S := ww₁.mk (mod_many [(down p₁ p₂, ehead), (up p₁ p₂, etail)] a.aut) a.ins a.ous | W := ww₁.mk (mod_many [(left p₁ p₂, ehead), (right p₁ p₂, etail)] a.aut) a.ins a.ous | E := ww₁.mk (mod_many [(right p₁ p₂, ehead), (left p₁ p₂, etail)] a.aut) a.ins a.ous end else ww₁.mk (mod_many [(p₁, condu), (p₂, condu)] a.aut) a.ins a.ous end def read (a : ww₁) (n : ℕ) : bool := let output := list.nth a.ous n in match output with | none := ff | some ⟨p₁, p₂, dir⟩ := match dir with | N := yield_at a.aut (up p₁ p₂) = ehead ∧ yield_at a.aut (down p₁ p₂) = etail | S := yield_at a.aut (up p₁ p₂) = etail ∧ yield_at a.aut (down p₁ p₂) = ehead | W := yield_at a.aut (left p₁ p₂) = ehead ∧ yield_at a.aut (right p₁ p₂) = etail | E := 
yield_at a.aut (left p₁ p₂) = etail ∧ yield_at a.aut (right p₁ p₂) = ehead end end def xor_gate' := vec_grid₀.mk ⟨7, 7, dec_trivial, ⟨[condu, condu, empty, empty, empty, empty, empty, empty, empty, condu, empty, empty, empty, empty, empty, condu, condu, condu, condu, empty, empty, empty, condu, empty, empty, condu, condu, condu, empty, condu, condu, condu, condu, empty, empty, empty, empty, condu, empty, empty, empty, empty, condu, condu, empty, empty, empty, empty, empty], rfl⟩⟩ ⟨0, 0⟩ def xor_gate_inputs : list inout := [⟨⟨0, 6⟩, ⟨1, 6⟩, E⟩, ⟨⟨0, 0⟩, ⟨1, 0⟩, E⟩] def xor_gate_outputs : list inout := [⟨⟨5, 3⟩, ⟨6, 3⟩, E⟩] def mk_xor (a : ww) : ww₁ := ⟨a, xor_gate_inputs, xor_gate_outputs⟩ def xor_gate_w : ww := mk_ww xor_gate' def xor_gate : ww₁ := mk_xor xor_gate_w def xor' (b₁ b₂ : bool) : bool := read (mk_xor (step_n (write (write xor_gate 0 b₂) 1 b₁).aut 5)) 0 theorem xor_iff_xor' {b₁ b₂} : bxor b₁ b₂ ↔ xor' b₁ b₂ := begin cases b₁; cases b₂; split; intros h, { dsimp at h, contradiction }, { have : xor' ff ff = ff, from dec_trivial, rw this at h, contradiction }, { exact dec_trivial }, { dsimp, unfold_coes }, { exact dec_trivial }, { dsimp, unfold_coes }, { dsimp at h, contradiction }, { have : xor' tt tt = ff, from dec_trivial, rw this at h, contradiction } end end ww_xor end ww end ww
subroutine update(*) chk ================= include 'edp_main.inc' ! use edp_main include 'edp_dat.inc' ! use edp_dat common /main_para/ tolower 1 ,jnk(m_main_para-1) logical tolower character*6 options(4) data options/'xyz','b','weight','text'/ character*(108) file_name character*(32) form !,skip character*(80) tmp character*(max_num_chars) txt common /cmm_txt/n_len,txt,ib,ie logical nword logical open_file1 external open_file1 integer jt(2) equivalence (jt1,jt(1)),(jt2,jt(2)) n_of_syn=5 !000515 syntax(1)='syntax:' syntax(2)='update ' syntax(3)='update (xyz, w, b) input_filename.s fortran_format.s' syntax(4)= 1'update t column_1.i column_2.i input_filename.s fortran_format.s' syntax(5)=' [jump_after_string.s]' call find1(4,options,i_option) if(i_option.le.0) then j=0 do i=1,n_atom if(lf(i)) then read(text(i)(31:66),1009,err=900) x(i), y(i), z(i), w(i), b(i) 1009 format(3f8.3,2f6.2) j=j+1 endif enddo goto 200 else if(i_option.gt.4) then return 1 endif if(i_option.eq.4) then call read_ai(2,jt,*900,*900) if(jt1.le.0 .or. jt2.gt.72 .or. jt1.gt.jt2) return 1 endif if(.not. open_file1(29,file_name,'old','.txt')) return 1 if(nword(n_len,txt,ib,ie)) then form='*' if(verbose.ge.2 ) write(6,1005) 1005 format(' update-I2> free format will be used.') else ie=n_len ic=ichar(delimiter) if( ic .eq. ichar(txt(ib:ib))) then !use '(...)' to input the user prefored output-format ib= ib+1 do j=ib, n_len if( ichar(txt(j:j)).eq.ic) then ie=j-1 txt(j:j)=' ' !000303, it may screw up the history file. goto 80 endif enddo endif 80 form= txt(ib:ie) if(verbose.ge.2 ) write(6,1004) form 1004 format(' update-I2> the input format will be: ',a) endif if(.not.nword(n_len,txt,ib,ie)) then ie=n_len ic=ichar(delimiter) if( ic .eq. ichar(txt(ib:ib)) ) then !use '(...)' to input the skip string ib= ib+1 do j=ib, n_len if( ichar(txt(j:j)).eq.ic) then ie=j-1 txt(j:j)=' ' !000303, it may screw up the history file. 
goto 84 endif enddo endif 84 if(ie.ge.ib) then num_skip_lines=0 85 read(29,1002,end=920) tmp num_skip_lines=num_skip_lines+1 if(verbose.ge.4 ) write(6,*) tmp if(index(tmp,txt(ib:ie)).le.0) goto 85 if(verbose.ge.2 ) write(6,1006) num_skip_lines, txt(ib:ie) 1002 format(a) 1006 format(' update-I2>',i4, 1' lines are skipped until the first [',a,'] is met.') endif ! write(*,*) 'ok1' else ! write(*,*) 'ok2' endif j=0 do i=1,n_atom if(lf(i)) then if(i_option.eq.1) then ! update xyz if(form.ne.'*') then read(29,form,err=900,end=910) x(i), y(i), z(i) chk ^ chk the format must have correct syntax, if on an alpha machine. else read(29,*,err=900,end=910) x(i), y(i), z(i) endif write(text(i)(31:54),1001,err=900) x(i),y(i),z(i) 1001 format(3f8.3) else if(i_option.eq.2) then ! update b-factor if(form.ne.'*') then read(29,form,err=900,end=910) b(i) else read(29,*,err=900,end=910) b(i) endif else if(i_option.eq.3) then ! update occ if(form.ne.'*') then read(29,form,err=900,end=910) w(i) else read(29,*,err=900,end=910) w(i) endif else if(i_option.eq.4) then ! update text if(form.ne.'*') then read(29,form,err=900,end=910) text(i)(jt1:jt2) else read(29,'(a)',err=900,end=910) text(i)(jt1:jt2) endif if(tolower) then do jc=jt1,jt2 ic=ichar(text(i)(jc:jc)) if(65.le.ic.and.ic.le.90) text(i)(jc:jc)=char(ic+32) enddo endif endif j=j+1 endif enddo 200 if(verbose.ge.1 ) write(6,1007) j 1007 format(' update> ',i6,' records have been updated.') return 900 errmsg=' errmsg: format error' return 1 920 errmsg=' errmsg: the starting text string not found' return 1 910 write(6,1003) j 1003 format(' update-W> only',i6,' records have been updated.', 1 ' end of file detected.') end subroutine extract(*) c ===================== include 'edp_main.inc' ! 
use edp_main character*(max_num_chars) txt common /cmm_txt/ n_len,txt,ib,ie logical nword logical lf1(max_atom) n_of_syn=2 !000515 syntax(1)='syntax:' syntax(2)='load group2.s | extract group1 sub_group1 ' call dfgroup(igr,*901) ie=0 if( nword(n_len,txt,ib,ie)) goto 901 ! get ie back igr1= match_l( max_gr, cgroup) if( igr1 .le. 0) then errmsg=' errmsg: a group name is needed' return 1 end if n_group1= n_groupa(igr1) if( n_group1.le.0) then write(6,*) 1 'extract-W> UNDONE: define group ['//cgroup(igr1)//'] first.' return else if(n_group1.ne.n_groupa(igr)) then write(6,*) 1 'extract-W> UNDONE: num(',cgroup(igr1),')=/=num(',cgroup(igr), 1 ').' return end if igr2= match_l( max_gr, cgroup) if( igr2 .le. 0 .or. igr2 .eq. igr1) then errmsg=' errmsg: another group name needed' return 1 else if( n_groupa(igr2) .le.0) then write(6,*) 1'extract-W> UNDONE: define group ['//cgroup(igr2)//'] first.' return end if n_group2= n_groupa(igr2) do i=1,n_atom lf1(i)=.false. enddo j1=n_groupa(igr2) do j=1,j1 i=igroupa( j,igr2) lf1(i)=.true. enddo n=0 j1=n_groupa(igr1) do j=1,j1 i=igroupa( j,igr1) if(lf1(i)) then i=igroupa( j,igr) lf(i)=incl n=n+1 endif enddo if(n.le.0) write(6,*) 1 'extract-W> UNDONE: the overlap between ', 1 cgroup(igr1),' & ',cgroup(igr2), 1 ' is zero.' return 901 errmsg= ' errmsg: wrong group/zone information' return 1 end subroutine dock(*) chk =============== include 'edp_main.inc' include 'edp_dat.inc' ! use edp_main character*(max_num_chars) txt common /cmm_txt/n_len,txt,ib,ie logical nword dimension rad(max_atom), isort(max_atom) call read_ar(4,rad, *901, *901) if(rad(1).le.rad(2)) then zc=rad(1) ze=rad(2) else zc=rad(2) ze=rad(1) endif if(rad(3).gt.0.0) then dz=rad(3) else goto 901 endif if(rad(4).gt. 0.0) then shell=rad(4) else goto 901 endif 200 if(zc.le.ze) then j=0 do i=1,n_atom if(lf(i)) then j=j+1 rad(j)=sqrt(x(i)**2+y(i)**2+(z(i)-zc)**2) if(verbose .ge. 
6) w(i)= rad(j) endif enddo if(j.le.2) goto 901 call sort_rl(j,rad,isort) r_min=rad(isort(1)) n=0 do i=1,j if((rad(isort(i))-r_min).le.shell) then n=n+1 else write(6,1002) r_min, n zc=zc+dz goto 200 endif enddo endif return 1002 format(' dock> r=',f8.1,', # of atoms in touch=',i6) 901 return 1 end c*** end of pair.for copyright by X. Cai Zhang
import pybullet as p import numpy as np from scipy.spatial.transform import Rotation as R from assistive_gym.envs.env import AssistiveEnv def _make_unit_vector(v: np.ndarray) -> np.ndarray: assert len(v.shape) == 1 return v / np.linalg.norm(v) class TeleOpCamera: def __init__(self, env: AssistiveEnv): self.env = env self._tilt_unit = 0 self._tilt_gain = 2 self._pan_unit = 0 self._pan_gain = 2 @property def _tilt_value(self): return self._tilt_unit * self._tilt_gain @property def _pan_value(self): return self._pan_unit * self._pan_gain @property def _camera_kwargs(self): robot = self.env.robot robot_pos, robot_orient = robot.get_base_pos_orient() # compute correct heading for camera rotation1 = R.from_quat(robot_orient) rotation2 = R.from_rotvec(-(np.pi / 4) * np.array([0, 0, 1])) robot_heading = _make_unit_vector(np.ones_like(robot_pos)) robot_heading = rotation1.apply(robot_heading) robot_heading = rotation2.apply(robot_heading) # project onto xy-plane robot_heading[2] = 0 robot_heading = _make_unit_vector(robot_heading) # initialize camera eye camera_height = 1.25 camera_eye = robot_pos + (0.5 * robot_heading) camera_eye[2] += camera_height camera_heading = np.copy(robot_heading) # calculate yaw dot_x = np.dot(robot_heading, np.array([1, 0, 0])) dot_y = np.dot(robot_heading, np.array([0, 1, 0])) yaw = np.degrees(np.arccos(dot_y)) yaw *= -1 if dot_x > 0 else 1 yaw += self._pan_value # pan camera_heading pan_axis = np.array([0, 0, 1]) pan_rotation = R.from_rotvec(np.radians(self._pan_value) * pan_axis) camera_heading = pan_rotation.apply(camera_heading) # tilt camera heading pitch = self._tilt_value tilt_axis = np.cross(camera_heading, np.array([0, 0, 1])) tilt_rotation = R.from_rotvec(np.radians(pitch) * tilt_axis) camera_heading = tilt_rotation.apply(camera_heading) return dict( cameraDistance=1.0, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=camera_eye + camera_heading, ) def _tilt_camera(self, num_units: float): self._tilt_unit += num_units tilt_value 
= self._tilt_value tilt_limit = 90 if tilt_value <= -tilt_limit or tilt_value >= tilt_limit: self._tilt_unit -= num_units def _pan_camera(self, num_units: float): self._pan_unit += num_units pan_value = self._pan_value pan_limit = 90 if pan_value <= -pan_limit or pan_value >= pan_limit: self._pan_unit -= num_units def tilt_up(self): self._tilt_camera(num_units=1) def tilt_down(self): self._tilt_camera(num_units=-1) def pan_left(self): self._pan_camera(num_units=1) def pan_right(self): self._pan_camera(num_units=-1) def reset_drive_mode(self): self._tilt_unit = 0 self._pan_unit = 0 def reset_tool_mode(self): self._tilt_unit = -4 self._pan_unit = -44 def update_env_camera(self) -> dict: camera_kwargs = self._camera_kwargs p.resetDebugVisualizerCamera(**camera_kwargs) return camera_kwargs
#pragma once #include <gsl/gsl> #include <memory> #include <list> #include <chrono> #include <EASTL/fixed_vector.h> #include <EASTL/map.h> #include "../platform/d3d.h" #include "buffers.h" #include "shaders.h" #include "camera.h" #include "textures.h" struct ID3D11Texture2D; namespace gfx { class RenderingDevice; class Material; struct RasterizerSpec; class RasterizerState; struct BlendSpec; class BlendState; struct DepthStencilSpec; class DepthStencilState; struct SamplerSpec; class SamplerState; struct MaterialSamplerSpec; class BufferBinding; class TextEngine; using SamplerStatePtr = std::shared_ptr<SamplerState>; using DepthStencilStatePtr = std::shared_ptr<DepthStencilState>; using BlendStatePtr = std::shared_ptr<BlendState>; using RasterizerStatePtr = std::shared_ptr<RasterizerState>; using ResizeListener = std::function<void(int w, int h)>; // RAII style listener registration class ResizeListenerRegistration { friend class RenderingDevice; public: ~ResizeListenerRegistration(); NO_COPY(ResizeListenerRegistration); ResizeListenerRegistration(ResizeListenerRegistration &&o) : mDevice(o.mDevice), mKey(o.mKey) { o.mKey = 0; } ResizeListenerRegistration &operator =(ResizeListenerRegistration &&o) = delete; private: ResizeListenerRegistration(gfx::RenderingDevice &device, uint32_t key) : mDevice(device), mKey(key) {} gfx::RenderingDevice &mDevice; uint32_t mKey; }; class ResourceListener { public: virtual ~ResourceListener(); virtual void CreateResources(RenderingDevice&) = 0; virtual void FreeResources(RenderingDevice&) = 0; }; enum class StandardSlotSemantic { ViewProjMatrix, UiProjMatrix }; // RAII class for managing resource listener registrations class ResourceListenerRegistration { public: explicit ResourceListenerRegistration(RenderingDevice& device, ResourceListener* listener); ~ResourceListenerRegistration(); NO_COPY_OR_MOVE(ResourceListenerRegistration); private: RenderingDevice& mDevice; ResourceListener* mListener; }; using DynamicTexturePtr = 
std::shared_ptr<class DynamicTexture>; using RenderTargetTexturePtr = std::shared_ptr<class RenderTargetTexture>; using RenderTargetDepthStencilPtr = std::shared_ptr<class RenderTargetDepthStencil>; // An output of a display device (think: monitor) struct DisplayDeviceOutput { // Technical id for use in a configuration or log file std::string id; // Name to display to the end user std::string name; }; // A display device that can be used to render the game (think: GPU) struct DisplayDevice { // Technical id for use in a configuration or log file size_t id; // Name to display to the end user std::string name; // Outputs associated with this display device eastl::fixed_vector<DisplayDeviceOutput, 4> outputs; }; enum class PrimitiveType { TriangleList, TriangleStrip, LineStrip, LineList, PointList, }; enum class BufferFormat { A8, A8R8G8B8, X8R8G8B8 }; enum class MapMode { Read, Discard, NoOverwrite }; template<typename TBuffer, typename TElement> class MappedBuffer { public: using View = gsl::span<TElement>; using Iterator = gsl::contiguous_span_iterator<View>; MappedBuffer(TBuffer &buffer, RenderingDevice &device, View data, uint32_t rowPitch) : mBuffer(buffer), mDevice(device), mData(data), mRowPitch(rowPitch) {} MappedBuffer(MappedBuffer&& o) : mBuffer(o.mBuffer), mDevice(o.mDevice), mData(o.mData), mRowPitch(o.mRowPitch) { o.mMoved = true; } ~MappedBuffer(); Iterator begin() { return mData.begin(); } Iterator end() { return mData.end(); } TElement &operator[](size_t idx) { assert(!mMoved); return mData[idx]; } size_t size() const { return mData.size(); } TElement *GetData() const { return mData.data(); } // Only relevant for mapped texture buffers size_t GetRowPitch() const { return mRowPitch; } void Unmap(); private: TBuffer& mBuffer; RenderingDevice &mDevice; View mData; size_t mRowPitch; bool mMoved = false; }; template<typename TElement> using MappedVertexBuffer = MappedBuffer<VertexBuffer, TElement>; using MappedIndexBuffer = MappedBuffer<IndexBuffer,
uint16_t>; using MappedTexture = MappedBuffer<DynamicTexture, uint8_t>; class RenderingDevice { template<typename T> friend class Shader; friend class BufferBinding; friend class TextureLoader; friend class DebugUI; public: RenderingDevice(HWND mWindowHandle, uint32_t adapterIdx = 0, bool debugDevice = false); ~RenderingDevice(); bool BeginFrame(); bool Present(); void PresentForce(); void Flush(); void ClearCurrentColorTarget(XMCOLOR color); void ClearCurrentDepthTarget(bool clearDepth = true, bool clearStencil = true, float depthValue = 1.0f, uint8_t stencilValue = 0); using Clock = std::chrono::high_resolution_clock; using TimePoint = std::chrono::time_point<Clock>; TimePoint GetLastFrameStart() const { return mLastFrameStart; } TimePoint GetDeviceCreated() const { return mDeviceCreated; } const eastl::vector<DisplayDevice> &GetDisplayDevices(); // Resize the back buffer void ResizeBuffers(int w, int h); Material CreateMaterial( const BlendSpec &blendSpec, const DepthStencilSpec &depthStencilSpec, const RasterizerSpec &rasterizerSpec, const std::vector<MaterialSamplerSpec> &samplerSpecs, const VertexShaderPtr &vs, const PixelShaderPtr &ps ); BlendStatePtr CreateBlendState(const BlendSpec &spec); DepthStencilStatePtr CreateDepthStencilState(const DepthStencilSpec &spec); RasterizerStatePtr CreateRasterizerState(const RasterizerSpec &spec); SamplerStatePtr CreateSamplerState(const SamplerSpec &spec); // Changes the current scissor rect to the given rectangle void SetScissorRect(int x, int y, int width, int height); // Resets the scissor rect to the current render target's size void ResetScissorRect(); std::shared_ptr<class IndexBuffer> CreateEmptyIndexBuffer(size_t count); std::shared_ptr<class VertexBuffer> CreateEmptyVertexBuffer(size_t count, bool forPoints = false); DynamicTexturePtr CreateDynamicTexture(gfx::BufferFormat format, int width, int height); DynamicTexturePtr CreateDynamicStagingTexture(gfx::BufferFormat format, int width, int height); void 
CopyRenderTarget(gfx::RenderTargetTexture &renderTarget, gfx::DynamicTexture &stagingTexture); RenderTargetTexturePtr CreateRenderTargetTexture(gfx::BufferFormat format, int width, int height, bool multiSampled = false); RenderTargetTexturePtr CreateRenderTargetForNativeSurface(ID3D11Texture2D *surface); RenderTargetTexturePtr CreateRenderTargetForSharedSurface(IUnknown *surface); RenderTargetDepthStencilPtr CreateRenderTargetDepthStencil(int width, int height, bool multiSampled = false); template<typename T> VertexBufferPtr CreateVertexBuffer(gsl::span<T> data, bool immutable = true); VertexBufferPtr CreateVertexBufferRaw(gsl::span<const uint8_t> data, bool immutable = true); IndexBufferPtr CreateIndexBuffer(gsl::span<const uint16_t> data, bool immutable = true); void SetMaterial(const Material &material); void SetVertexShaderConstant(uint32_t startRegister, StandardSlotSemantic semantic); void SetPixelShaderConstant(uint32_t startRegister, StandardSlotSemantic semantic); void SetRasterizerState(const RasterizerState &state); void SetBlendState(const BlendState &state); void SetDepthStencilState(const DepthStencilState &state); void SetSamplerState(int samplerIdx, const SamplerState &state); void SetTexture(uint32_t slot, gfx::Texture &texture); void SetIndexBuffer(const gfx::IndexBuffer &indexBuffer); void Draw(PrimitiveType type, uint32_t vertexCount, uint32_t startVertex = 0); void DrawIndexed(PrimitiveType type, uint32_t vertexCount, uint32_t indexCount, uint32_t startVertex = 0, uint32_t vertexBase = 0); /* Changes the currently used cursor to the given surface. */ void SetCursor(int hotspotX, int hotspotY, const gfx::TextureRef &texture); void ShowCursor(); void HideCursor(); /* Take a screenshot with the given size. The image will be stretched to the given size. 
*/ void TakeScaledScreenshot(const std::string& filename, int width, int height, int quality = 90); // Creates a buffer binding for a MDF material that // is preinitialized with the correct shader BufferBinding CreateMdfBufferBinding(); Shaders& GetShaders() { return mShaders; } Textures& GetTextures() { return mTextures; } WorldCamera& GetCamera() { return mCamera; } void SetAntiAliasing(bool enable, uint32_t samples, uint32_t quality); template<typename T> void UpdateBuffer(VertexBuffer &buffer, gsl::span<T> data) { UpdateBuffer(buffer, data.data(), data.size_bytes()); } void UpdateBuffer(VertexBuffer &buffer, const void *data, size_t size); void UpdateBuffer(IndexBuffer &buffer, gsl::span<uint16_t> data); template<typename TElement> MappedVertexBuffer<TElement> Map(VertexBuffer &buffer, gfx::MapMode mode = gfx::MapMode::Discard) { auto data = MapVertexBufferRaw(buffer, mode); auto castData = gsl::span<TElement>((TElement*)data.data(), data.size() / sizeof(TElement)); return MappedVertexBuffer<TElement>(buffer, *this, castData, 0); } void Unmap(VertexBuffer &buffer); // Index buffer memory mapping techniques MappedIndexBuffer Map(IndexBuffer &buffer, gfx::MapMode mode = gfx::MapMode::Discard); void Unmap(IndexBuffer &buffer); MappedTexture Map(DynamicTexture &texture, gfx::MapMode mode = gfx::MapMode::Discard); void Unmap(DynamicTexture &texture); static constexpr uint32_t MaxVsConstantBufferSize = 2048; template<typename T> void SetVertexShaderConstants(uint32_t slot, const T &buffer) { static_assert(sizeof(T) <= MaxVsConstantBufferSize, "Constant buffer exceeds maximum size"); UpdateResource(mVsConstantBuffer, &buffer, sizeof(T)); VSSetConstantBuffer(slot, mVsConstantBuffer); } static constexpr uint32_t MaxPsConstantBufferSize = 512; template<typename T> void SetPixelShaderConstants(uint32_t slot, const T &buffer) { static_assert(sizeof(T) <= MaxPsConstantBufferSize, "Constant buffer exceeds maximum size"); UpdateResource(mPsConstantBuffer, &buffer, sizeof(T)); 
PSSetConstantBuffer(slot, mPsConstantBuffer); } const gfx::Size &GetBackBufferSize() const; // Pushes the back buffer and its depth buffer as the current render target void PushBackBufferRenderTarget(); void PushRenderTarget( const gfx::RenderTargetTexturePtr &colorBuffer, const gfx::RenderTargetDepthStencilPtr &depthStencilBuffer ); void PopRenderTarget(); const gfx::RenderTargetTexturePtr &GetCurrentRenderTargetColorBuffer() const { return mRenderTargetStack.back().colorBuffer; } const gfx::RenderTargetDepthStencilPtr &GetCurrentRenderTargetDepthStencilBuffer() const { return mRenderTargetStack.back().depthStencilBuffer; } ResizeListenerRegistration AddResizeListener(ResizeListener listener); bool IsDebugDevice() const; /** * Emits the start of a rendering call group if the debug device is being used. * This information can be used in the graphic debugger. */ template<typename... T> void BeginPerfGroup(const char *format, const T &... args) const { if (IsDebugDevice()) { BeginPerfGroupInternal(fmt::format(format, args...).c_str()); } } /** * Ends a previously started performance group.
*/ void EndPerfGroup() const; TextEngine& GetTextEngine() const; private: friend class ResourceListenerRegistration; friend class ResizeListenerRegistration; void BeginPerfGroupInternal(const char *msg) const; void RemoveResizeListener(uint32_t key); void AddResourceListener(ResourceListener* resourceListener); void RemoveResourceListener(ResourceListener* resourceListener); void UpdateResource(ID3D11Resource *resource, const void *data, size_t size); CComPtr<ID3D11Buffer> CreateConstantBuffer(const void *initialData, size_t initialDataSize); void VSSetConstantBuffer(uint32_t slot, ID3D11Buffer *buffer); void PSSetConstantBuffer(uint32_t slot, ID3D11Buffer *buffer); gsl::span<uint8_t> MapVertexBufferRaw(VertexBuffer &buffer, MapMode mode); CComPtr<IDXGIAdapter1> GetAdapter(size_t index); int mBeginSceneDepth = 0; HWND mWindowHandle; CComPtr<IDXGIFactory1> mDxgiFactory; // The DXGI adapter we use CComPtr<IDXGIAdapter1> mAdapter; // D3D11 device and related CComPtr<ID3D11Device> mD3d11Device; CComPtr<ID3D11Device1> mD3d11Device1; DXGI_SWAP_CHAIN_DESC mSwapChainDesc; CComPtr<IDXGISwapChain> mSwapChain; CComPtr<ID3D11DeviceContext> mContext; gfx::RenderTargetTexturePtr mBackBufferNew; gfx::RenderTargetDepthStencilPtr mBackBufferDepthStencil; struct RenderTarget { gfx::RenderTargetTexturePtr colorBuffer; gfx::RenderTargetDepthStencilPtr depthStencilBuffer; }; eastl::fixed_vector<RenderTarget, 16> mRenderTargetStack; D3D_FEATURE_LEVEL mFeatureLevel = D3D_FEATURE_LEVEL_9_1; eastl::vector<DisplayDevice> mDisplayDevices; CComPtr<ID3D11Buffer> mVsConstantBuffer; CComPtr<ID3D11Buffer> mPsConstantBuffer; eastl::map<uint32_t, ResizeListener> mResizeListeners; uint32_t mResizeListenersKey = 0; std::list<ResourceListener*> mResourcesListeners; bool mResourcesCreated = false; TimePoint mLastFrameStart = Clock::now(); TimePoint mDeviceCreated = Clock::now(); size_t mUsedSamplers = 0; Shaders mShaders; Textures mTextures; WorldCamera mCamera; struct Impl; std::unique_ptr<Impl> 
mImpl; }; template <typename T> VertexBufferPtr RenderingDevice::CreateVertexBuffer(gsl::span<T> data, bool immutable) { return CreateVertexBufferRaw(gsl::span(reinterpret_cast<const uint8_t*>(&data[0]), data.size_bytes()), immutable); } extern RenderingDevice *renderingDevice; template<typename TBuffer, typename TElement> inline MappedBuffer<TBuffer, TElement>::~MappedBuffer() { if (!mMoved) { mDevice.Unmap(mBuffer); } } template<typename TBuffer, typename TElement> inline void MappedBuffer<TBuffer, TElement>::Unmap() { assert(!mMoved); mMoved = true; mDevice.Unmap(mBuffer); } inline ResizeListenerRegistration::~ResizeListenerRegistration() { mDevice.RemoveResizeListener(mKey); } // RAII style demarcation of render call groups for performance debugging class PerfGroup { public: template<typename... T> PerfGroup(RenderingDevice &device, const char *format, T... args) : mDevice(device) { device.BeginPerfGroup(format, args...); } ~PerfGroup() { mDevice.EndPerfGroup(); } private: const RenderingDevice &mDevice; }; }
(** CoLoR, a Coq library on rewriting and termination. See the COPYRIGHTS and LICENSE files. - Adam Koprowski, 2006-04-27 Constructing terms. *) Set Implicit Arguments. From CoLoR Require Import RelExtras ListExtras LogicUtil. From CoLoR Require TermsActiveEnv. From Coq Require Import Lia. Module TermsBuilding (Sig : TermsSig.Signature). Module Export TAE := TermsActiveEnv.TermsActiveEnv Sig. Record appCond : Type := { appL: Term; appR: Term; eqEnv: env appL = env appR; typArr: isArrowType (type appL); typOk: type_left (type appL) = type appR }. Definition buildApp : appCond -> Term. Proof. intro t; inversion t as [L R eq_env typ_arr typ_ok]. destruct L as [?? typeL typingL]; destruct R as [??? typingR]; simpl in *. rewrite eq_env in typingL. destruct typeL; try contr; simpl in *. rewrite typ_ok in typingL. exact (buildT (TApp typingL typingR)). Defined. Lemma buildApp_isApp : forall a, isApp (buildApp a). Proof. intros; destruct a; term_type_inv appL0; term_type_inv appR0. Qed. Lemma buildApp_Lok : forall a, appBodyL (buildApp_isApp a) = a.(appL). Proof. destruct a; destruct appR0; term_type_inv appL0. Qed. Lemma buildApp_Rok : forall a, appBodyR (buildApp_isApp a) = a.(appR). Proof. destruct a; destruct appR0; term_type_inv appL0. Qed. Lemma buildApp_preterm : forall a, term (buildApp a) = term a.(appL) @@ term a.(appR). Proof. destruct a; destruct appR0; term_type_inv appL0. Qed. Lemma buildApp_env_l : forall a, env (buildApp a) = env a.(appL). Proof. destruct a; destruct appR0; term_type_inv appL0. Qed. Lemma buildApp_type : forall a, type (buildApp a) = type_right (type a.(appL)). Proof. destruct a; destruct appR0; term_type_inv appL0. Qed. Record absCond : Type := { absB: Term; absT: SimpleType; envNotEmpty: env absB |= 0 := absT }. Definition buildAbs : absCond -> Term. Proof. intro t; inversion t as [aBody aType envCond]. destruct aBody as [env ?? typing]; simpl in *; destruct env. try_solve. destruct o. exact (buildT (TAbs typing)). try_solve. Defined. 
Lemma buildAbs_isAbs: forall a, isAbs (buildAbs a). Proof. destruct a as [[env ???] ??]; destruct env. try_solve. destruct o; try_solve. Qed. Lemma buildAbs_absBody : forall a, absBody (buildAbs_isAbs a) = a.(absB). Proof. destruct a as [[env ???] ??]; destruct env. try_solve. destruct o; try_solve. Qed. Lemma buildAbs_absType : forall a, absType (buildAbs_isAbs a) = a.(absT). Proof. destruct a as [[env ???] ??]; destruct env. try_solve. destruct o; try_solve. unfold VarD in * . inversion envNotEmpty0; trivial. Qed. Lemma buildAbs_env : forall a, env (buildAbs a) = tail (env a.(absB)). Proof. destruct a as [[env ???] ??]; destruct env. try_solve. destruct o; try_solve. Qed. Definition buildVar : forall A x, (copy x None ++ A [#] EmptyEnv) |- %x := A. Proof. constructor; unfold VarD. rewrite nth_app_right; autorewrite with datatypes using try lia. replace (x - x) with 0; trivial. lia. Defined. Lemma buildVar_minimal : forall A x, envMinimal (buildT (buildVar A x)). Proof. intros; unfold envMinimal; trivial. Qed. End TermsBuilding.
(*<*)theory AB imports Main begin(*>*) section{*Case Study: A Context Free Grammar*} text{*\label{sec:CFG} \index{grammars!defining inductively|(}% Grammars are nothing but shorthands for inductive definitions of nonterminals which represent sets of strings. For example, the production $A \to B c$ is short for \[ w \in B \Longrightarrow wc \in A \] This section demonstrates this idea with an example due to Hopcroft and Ullman, a grammar for generating all words with an equal number of $a$'s and~$b$'s: \begin{eqnarray} S &\to& \epsilon \mid b A \mid a B \nonumber\\ A &\to& a S \mid b A A \nonumber\\ B &\to& b S \mid a B B \nonumber \end{eqnarray} At the end we say a few words about the relationship between the original proof @{cite \<open>p.\ts81\<close> HopcroftUllman} and our formal version. We start by fixing the alphabet, which consists only of @{term a}'s and~@{term b}'s: *} datatype alfa = a | b text{*\noindent For convenience we include the following easy lemmas as simplification rules: *} text{*\noindent Words over this alphabet are of type @{typ"alfa list"}, and the three nonterminals are declared as sets of such words. The productions above are recast as a \emph{mutual} inductive definition\index{inductive definition!simultaneous} of @{term S}, @{term A} and~@{term B}: *} inductive_set S :: "alfa list set" and A :: "alfa list set" and B :: "alfa list set" where "[] \<in> S" | "w \<in> A \<Longrightarrow> b#w \<in> S" | "w \<in> B \<Longrightarrow> a#w \<in> S" | "w \<in> S \<Longrightarrow> a#w \<in> A" | "\<lbrakk> v\<in>A; w\<in>A \<rbrakk> \<Longrightarrow> b#v@w \<in> A" | "w \<in> S \<Longrightarrow> b#w \<in> B" | "\<lbrakk> v \<in> B; w \<in> B \<rbrakk> \<Longrightarrow> a#v@w \<in> B" text{*\noindent First we show that all words in @{term S} contain the same number of @{term a}'s and @{term b}'s. 
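Before the formal proof, the claimed characterisation can be spot-checked executably. The following Python sketch (an illustration only, not part of the Isabelle theory) mirrors the mutual inductive definition of S, A and B as three mutually recursive membership tests; for small words, membership in S coincides with having equally many a's and b's. The hard direction of that equivalence (completeness) is what the remainder of the section proves.

```python
# Membership tests mirroring the mutual inductive definition:
#   S -> eps | b A | a B,   A -> a S | b A A,   B -> b S | a B B
# A sketch for experimentation, not part of the theory.
from functools import lru_cache

@lru_cache(maxsize=None)
def in_S(w: str) -> bool:
    if w == "":
        return True                      # [] in S
    if w[0] == 'b':
        return in_A(w[1:])               # w in A ==> b#w in S
    return in_B(w[1:])                   # w in B ==> a#w in S

@lru_cache(maxsize=None)
def in_A(w: str) -> bool:
    if not w:
        return False
    if w[0] == 'a':
        return in_S(w[1:])               # w in S ==> a#w in A
    rest = w[1:]                         # b#v@u with v, u in A
    return any(in_A(rest[:i]) and in_A(rest[i:]) for i in range(1, len(rest)))

@lru_cache(maxsize=None)
def in_B(w: str) -> bool:
    if not w:
        return False
    if w[0] == 'b':
        return in_S(w[1:])               # w in S ==> b#w in B
    rest = w[1:]                         # a#v@u with v, u in B
    return any(in_B(rest[:i]) and in_B(rest[i:]) for i in range(1, len(rest)))
```

Exhaustive checking of all words up to a small length against the balance criterion agrees with the theorem; the recursion terminates because every recursive call is on a strictly shorter word.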
Since the definition of @{term S} is by mutual induction, so is the proof: we show at the same time that all words in @{term A} contain one more @{term a} than @{term b} and all words in @{term B} contain one more @{term b} than @{term a}. *} lemma correctness: "(w \<in> S \<longrightarrow> size[x\<leftarrow>w. x=a] = size[x\<leftarrow>w. x=b]) \<and> (w \<in> A \<longrightarrow> size[x\<leftarrow>w. x=a] = size[x\<leftarrow>w. x=b] + 1) \<and> (w \<in> B \<longrightarrow> size[x\<leftarrow>w. x=b] = size[x\<leftarrow>w. x=a] + 1)" txt{*\noindent These propositions are expressed with the help of the predefined @{term filter} function on lists, which has the convenient syntax @{text"[x\<leftarrow>xs. P x]"}, the list of all elements @{term x} in @{term xs} such that @{prop"P x"} holds. Remember that on lists @{text size} and @{text length} are synonymous. The proof itself is by rule induction and afterwards automatic: *} by (rule S_A_B.induct, auto) text{*\noindent This may seem surprising at first, and is indeed an indication of the power of inductive definitions. But it is also quite straightforward. For example, consider the production $A \to b A A$: if $v,w \in A$ and the elements of $A$ contain one more $a$ than~$b$'s, then $bvw$ must again contain one more $a$ than~$b$'s. As usual, the correctness of syntactic descriptions is easy, but completeness is hard: does @{term S} contain \emph{all} words with an equal number of @{term a}'s and @{term b}'s? It turns out that this proof requires the following lemma: every string with two more @{term a}'s than @{term b}'s can be cut somewhere such that each half has one more @{term a} than @{term b}. This is best seen by imagining counting the difference between the number of @{term a}'s and @{term b}'s starting at the left end of the word. We start with 0 and end (at the right end) with 2. Since each move to the right increases or decreases the difference by 1, we must have passed through 1 on our way from 0 to 2. 
Formally, we appeal to the following discrete intermediate value theorem @{thm[source]nat0_intermed_int_val} @{thm[display,margin=60]nat0_intermed_int_val[no_vars]} where @{term f} is of type @{typ"nat \<Rightarrow> int"}, @{typ int} are the integers, @{text"\<bar>.\<bar>"} is the absolute value function\footnote{See Table~\ref{tab:ascii} in the Appendix for the correct \textsc{ascii} syntax.}, and @{term"1::int"} is the integer 1 (see \S\ref{sec:numbers}). First we show that our specific function, the difference between the numbers of @{term a}'s and @{term b}'s, does indeed only change by 1 in every move to the right. At this point we also start generalizing from @{term a}'s and @{term b}'s to an arbitrary property @{term P}. Otherwise we would have to prove the desired lemma twice, once as stated above and once with the roles of @{term a}'s and @{term b}'s interchanged. *} lemma step1: "\<forall>i < size w. \<bar>(int(size[x\<leftarrow>take (i+1) w. P x])-int(size[x\<leftarrow>take (i+1) w. \<not>P x])) - (int(size[x\<leftarrow>take i w. P x])-int(size[x\<leftarrow>take i w. \<not>P x]))\<bar> \<le> 1" txt{*\noindent The lemma is a bit hard to read because of the coercion function @{text"int :: nat \<Rightarrow> int"}. It is required because @{term size} returns a natural number, but subtraction on type~@{typ nat} will do the wrong thing. Function @{term take} is predefined and @{term"take i xs"} is the prefix of length @{term i} of @{term xs}; below we also need @{term"drop i xs"}, which is what remains after that prefix has been dropped from @{term xs}. The proof is by induction on @{term w}, with a trivial base case, and a not so trivial induction step. Since it is essentially just arithmetic, we do not discuss it. 
*} apply(induct_tac w) apply(auto simp add: abs_if take_Cons split: nat.split) done text{* Finally we come to the above-mentioned lemma about cutting in half a word with two more elements of one sort than of the other sort: *} lemma part1: "size[x\<leftarrow>w. P x] = size[x\<leftarrow>w. \<not>P x]+2 \<Longrightarrow> \<exists>i\<le>size w. size[x\<leftarrow>take i w. P x] = size[x\<leftarrow>take i w. \<not>P x]+1" txt{*\noindent This is proved by @{text force} with the help of the intermediate value theorem, instantiated appropriately and with its first premise disposed of by lemma @{thm[source]step1}: *} apply(insert nat0_intermed_int_val[OF step1, of "P" "w" "1"]) by force text{*\noindent Lemma @{thm[source]part1} tells us only about the prefix @{term"take i w"}. An easy lemma deals with the suffix @{term"drop i w"}: *} lemma part2: "\<lbrakk>size[x\<leftarrow>take i w @ drop i w. P x] = size[x\<leftarrow>take i w @ drop i w. \<not>P x]+2; size[x\<leftarrow>take i w. P x] = size[x\<leftarrow>take i w. \<not>P x]+1\<rbrakk> \<Longrightarrow> size[x\<leftarrow>drop i w. P x] = size[x\<leftarrow>drop i w. \<not>P x]+1" by(simp del: append_take_drop_id) text{*\noindent In the proof we have disabled the normally useful lemma \begin{isabelle} @{thm append_take_drop_id[no_vars]} \rulename{append_take_drop_id} \end{isabelle} to allow the simplifier to apply the following lemma instead: @{text[display]"[x\<in>xs@ys. P x] = [x\<in>xs. P x] @ [x\<in>ys. P x]"} To dispose of trivial cases automatically, the rules of the inductive definition are declared simplification rules: *} declare S_A_B.intros[simp] text{*\noindent This could have been done earlier but was not necessary so far. The completeness theorem tells us that if a word has the same number of @{term a}'s and @{term b}'s, then it is in @{term S}, and similarly for @{term A} and @{term B}: *} theorem completeness: "(size[x\<leftarrow>w. x=a] = size[x\<leftarrow>w. 
x=b] \<longrightarrow> w \<in> S) \<and> (size[x\<leftarrow>w. x=a] = size[x\<leftarrow>w. x=b] + 1 \<longrightarrow> w \<in> A) \<and> (size[x\<leftarrow>w. x=b] = size[x\<leftarrow>w. x=a] + 1 \<longrightarrow> w \<in> B)" txt{*\noindent The proof is by induction on @{term w}. Structural induction would fail here because, as we can see from the grammar, we need to make bigger steps than merely appending a single letter at the front. Hence we induct on the length of @{term w}, using the induction rule @{thm[source]length_induct}: *} apply(induct_tac w rule: length_induct) apply(rename_tac w) txt{*\noindent The @{text rule} parameter tells @{text induct_tac} explicitly which induction rule to use. For details see \S\ref{sec:complete-ind} below. In this case the result is that we may assume the lemma already holds for all words shorter than @{term w}. Because the induction step renames the induction variable we rename it back to @{text w}. The proof continues with a case distinction on @{term w}, on whether @{term w} is empty or not. *} apply(case_tac w) apply(simp_all) (*<*)apply(rename_tac x v)(*>*) txt{*\noindent Simplification disposes of the base case and leaves only a conjunction of two step cases to be proved: if @{prop"w = a#v"} and @{prop[display]"size[x\<in>v. x=a] = size[x\<in>v. x=b]+2"} then @{prop"b#v \<in> A"}, and similarly for @{prop"w = b#v"}. We only consider the first case in detail. After breaking the conjunction up into two cases, we can apply @{thm[source]part1} to the assumption that @{term w} contains two more @{term a}'s than @{term b}'s. *} apply(rule conjI) apply(clarify) apply(frule part1[of "\<lambda>x. x=a", simplified]) apply(clarify) txt{*\noindent This yields an index @{prop"i \<le> length v"} such that @{prop[display]"length [x\<leftarrow>take i v . x = a] = length [x\<leftarrow>take i v . x = b] + 1"} With the help of @{thm[source]part2} it follows that @{prop[display]"length [x\<leftarrow>drop i v . 
x = a] = length [x\<leftarrow>drop i v . x = b] + 1"} *} apply(drule part2[of "\<lambda>x. x=a", simplified]) apply(assumption) txt{*\noindent Now it is time to decompose @{term v} in the conclusion @{prop"b#v \<in> A"} into @{term"take i v @ drop i v"}, *} apply(rule_tac n1=i and t=v in subst[OF append_take_drop_id]) txt{*\noindent (the variables @{term n1} and @{term t} are the result of composing the theorems @{thm[source]subst} and @{thm[source]append_take_drop_id}) after which the appropriate rule of the grammar reduces the goal to the two subgoals @{prop"take i v \<in> A"} and @{prop"drop i v \<in> A"}: *} apply(rule S_A_B.intros) txt{* Both subgoals follow from the induction hypothesis because both @{term"take i v"} and @{term"drop i v"} are shorter than @{term w}: *} apply(force simp add: min_less_iff_disj) apply(force split: nat_diff_split) txt{* The case @{prop"w = b#v"} is proved analogously: *} apply(clarify) apply(frule part1[of "\<lambda>x. x=b", simplified]) apply(clarify) apply(drule part2[of "\<lambda>x. x=b", simplified]) apply(assumption) apply(rule_tac n1=i and t=v in subst[OF append_take_drop_id]) apply(rule S_A_B.intros) apply(force simp add: min_less_iff_disj) by(force simp add: min_less_iff_disj split: nat_diff_split) text{* We conclude this section with a comparison of our proof with Hopcroft\index{Hopcroft, J. E.} and Ullman's\index{Ullman, J. D.} @{cite \<open>p.\ts81\<close> HopcroftUllman}. For a start, the textbook grammar, for no good reason, excludes the empty word, thus complicating matters just a little bit: they have 8 instead of our 7 productions. More importantly, the proof itself is different: rather than separating the two directions, they perform one induction on the length of a word. This deprives them of the beauty of rule induction, and in the easy direction (correctness) their reasoning is more detailed than our @{text auto}. 
For the hard part (completeness), they consider just one of the cases that our @{text simp_all} disposes of automatically. Then they conclude the proof by saying about the remaining cases: ``We do this in a manner similar to our method of proof for part (1); this part is left to the reader''. But this is precisely the part that requires the intermediate value theorem and thus is not at all similar to the other cases (which are automatic in Isabelle). The authors are at least cavalier about this point and may even have overlooked the slight difficulty lurking in the omitted cases. Such errors are found in many pen-and-paper proofs when they are scrutinized formally.% \index{grammars!defining inductively|)} *} (*<*)end(*>*)
(*<*) theory IHOML imports Relations begin nitpick_params[user_axioms=true, show_all, expect=genuine, format = 3, atoms e = a b c d] (*>*) section \<open>Introduction\<close> text\<open> We present a study on Computational Metaphysics: a computer-formalisation and verification of Fitting's variant of the ontological argument (for the existence of God) as presented in his textbook \emph{Types, Tableaus and G\"odel's God} @{cite "Fitting"}. Fitting's argument is an emendation of Kurt G\"odel's modern variant @{cite "GoedelNotes"} (resp. Dana Scott's variant @{cite "ScottNotes"}) of the ontological argument. \<close> text\<open> The motivation is to avoid the \emph{modal collapse} @{cite Sobel and sobel2004logic}, which has been criticised as an undesirable side-effect of the axioms of G\"odel resp. Scott. The modal collapse essentially states that there are no contingent truths and that everything is determined. Several authors (e.g. @{cite "anderson90:_some_emend_of_goedel_ontol_proof" and "AndersonGettings" and "Hajek2002" and "bjordal99"}) have proposed emendations of the argument with the aim of maintaining the essential result (the necessary existence of God) while at the same time avoiding the modal collapse. Related work has formalised several of these variants on the computer and verified or falsified them. For example, G\"odel's axioms @{cite "GoedelNotes"} have been shown inconsistent @{cite C55 and C60} while Scott's version has been verified @{cite "ECAI"}. Further experiments, contributing amongst others to the clarification of a related debate between H\'ajek and Anderson, are presented and discussed in @{cite "J23"}. The enabling technique in all of these experiments has been shallow semantical embeddings of (extensional) higher-order modal logics in classical higher-order logic (see @{cite J23 and R59} and the references therein). \<close> text\<open> Fitting's emendation also intends to avoid the modal collapse. 
However, in contrast to the above variants, Fitting's solution is based on the use of an intensional as opposed to an extensional higher-order modal logic. For our work this imposed the additional challenge of providing a shallow embedding of this more advanced logic. The experiments presented below confirm that Fitting's argument as presented in his textbook @{cite "Fitting"} is valid and that it avoids the modal collapse as intended. \<close> text\<open> The work presented here originates from the \emph{Computational Metaphysics} lecture course held at FU Berlin in Summer 2016 @{cite "C65"}. \pagebreak \<close> section \<open>Embedding of Intensional Higher-Order Modal Logic\<close> text\<open> The object logic being embedded, intensional higher-order modal logic (IHOML), is a modification of the intensional logic developed by Montague and Gallin @{cite "Gallin75"}. IHOML is introduced by Fitting in the second part of his textbook @{cite "Fitting"} in order to formalise his emendation of G\"odel's ontological argument. We offer here a shallow embedding of this logic in Isabelle/HOL, which has been inspired by previous work on the semantical embedding of multimodal logics with quantification @{cite "J23"}. We expand this approach to allow for actualist quantifiers, intensional types and their related operations. \<close> subsection \<open>Type Declarations\<close> text\<open> Since IHOML and Isabelle/HOL are both typed languages, we introduce a type-mapping between them. We follow as closely as possible the syntax given by Fitting (see p. 86). According to this syntax, if \<open>\<tau>\<close> is an extensional type, \<open>\<up>\<tau>\<close> is the corresponding intensional type. For instance, a set of (red) objects has the extensional type \<open>\<langle>\<zero>\<rangle>\<close>, whereas the concept `red' has intensional type \<open>\<up>\<langle>\<zero>\<rangle>\<close>.
In what follows, terms having extensional (intensional) types will be called extensional (intensional) terms. \<close> typedecl i \<comment> \<open>type for possible worlds\<close> type_synonym io = "(i\<Rightarrow>bool)" \<comment> \<open>formulas with world-dependent truth-value\<close> typedecl e ("\<zero>") \<comment> \<open>individuals\<close> text\<open> Aliases for common unary predicate types: \<close> type_synonym ie = "(i\<Rightarrow>\<zero>)" ("\<up>\<zero>") type_synonym se = "(\<zero>\<Rightarrow>bool)" ("\<langle>\<zero>\<rangle>") type_synonym ise = "(\<zero>\<Rightarrow>io)" ("\<up>\<langle>\<zero>\<rangle>") type_synonym sie = "(\<up>\<zero>\<Rightarrow>bool)" ("\<langle>\<up>\<zero>\<rangle>") type_synonym isie = "(\<up>\<zero>\<Rightarrow>io)" ("\<up>\<langle>\<up>\<zero>\<rangle>") type_synonym sise = "(\<up>\<langle>\<zero>\<rangle>\<Rightarrow>bool)" ("\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>") type_synonym isise = "(\<up>\<langle>\<zero>\<rangle>\<Rightarrow>io)" ("\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>") type_synonym sisise= "(\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>\<Rightarrow>bool)" ("\<langle>\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>\<rangle>") type_synonym isisise= "(\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>\<Rightarrow>io)" ("\<up>\<langle>\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>\<rangle>") type_synonym sse = "\<langle>\<zero>\<rangle>\<Rightarrow>bool" ("\<langle>\<langle>\<zero>\<rangle>\<rangle>") type_synonym isse = "\<langle>\<zero>\<rangle>\<Rightarrow>io" ("\<up>\<langle>\<langle>\<zero>\<rangle>\<rangle>") text\<open> Aliases for common binary relation types: \<close> type_synonym see = "(\<zero>\<Rightarrow>\<zero>\<Rightarrow>bool)" ("\<langle>\<zero>,\<zero>\<rangle>") type_synonym isee = "(\<zero>\<Rightarrow>\<zero>\<Rightarrow>io)" ("\<up>\<langle>\<zero>,\<zero>\<rangle>") type_synonym sieie = 
"(\<up>\<zero>\<Rightarrow>\<up>\<zero>\<Rightarrow>bool)" ("\<langle>\<up>\<zero>,\<up>\<zero>\<rangle>") type_synonym isieie = "(\<up>\<zero>\<Rightarrow>\<up>\<zero>\<Rightarrow>io)" ("\<up>\<langle>\<up>\<zero>,\<up>\<zero>\<rangle>") type_synonym ssese = "(\<langle>\<zero>\<rangle>\<Rightarrow>\<langle>\<zero>\<rangle>\<Rightarrow>bool)" ("\<langle>\<langle>\<zero>\<rangle>,\<langle>\<zero>\<rangle>\<rangle>") type_synonym issese = "(\<langle>\<zero>\<rangle>\<Rightarrow>\<langle>\<zero>\<rangle>\<Rightarrow>io)" ("\<up>\<langle>\<langle>\<zero>\<rangle>,\<langle>\<zero>\<rangle>\<rangle>") type_synonym ssee = "(\<langle>\<zero>\<rangle>\<Rightarrow>\<zero>\<Rightarrow>bool)" ("\<langle>\<langle>\<zero>\<rangle>,\<zero>\<rangle>") type_synonym issee = "(\<langle>\<zero>\<rangle>\<Rightarrow>\<zero>\<Rightarrow>io)" ("\<up>\<langle>\<langle>\<zero>\<rangle>,\<zero>\<rangle>") type_synonym isisee = "(\<up>\<langle>\<zero>\<rangle>\<Rightarrow>\<zero>\<Rightarrow>io)" ("\<up>\<langle>\<up>\<langle>\<zero>\<rangle>,\<zero>\<rangle>") type_synonym isiseise = "(\<up>\<langle>\<zero>\<rangle>\<Rightarrow>\<up>\<langle>\<zero>\<rangle>\<Rightarrow>io)" ("\<up>\<langle>\<up>\<langle>\<zero>\<rangle>,\<up>\<langle>\<zero>\<rangle>\<rangle>") type_synonym isiseisise= "(\<up>\<langle>\<zero>\<rangle>\<Rightarrow>\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>\<Rightarrow>io)" ("\<up>\<langle>\<up>\<langle>\<zero>\<rangle>,\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>\<rangle>") subsection \<open>Definitions\<close> subsubsection \<open>Logical Operators as Truth-Sets\<close> abbreviation mnot :: "io\<Rightarrow>io" ("\<^bold>\<not>_"[52]53) where "\<^bold>\<not>\<phi> \<equiv> \<lambda>w. \<not>(\<phi> w)" abbreviation negpred :: "\<langle>\<zero>\<rangle>\<Rightarrow>\<langle>\<zero>\<rangle>" ("\<rightharpoondown>_"[52]53) where "\<rightharpoondown>\<Phi> \<equiv> \<lambda>x. 
\<not>(\<Phi> x)" abbreviation mnegpred :: "\<up>\<langle>\<zero>\<rangle>\<Rightarrow>\<up>\<langle>\<zero>\<rangle>" ("\<^bold>\<rightharpoondown>_"[52]53) where "\<^bold>\<rightharpoondown>\<Phi> \<equiv> \<lambda>x.\<lambda>w. \<not>(\<Phi> x w)" abbreviation mand :: "io\<Rightarrow>io\<Rightarrow>io" (infixr"\<^bold>\<and>"51) where "\<phi>\<^bold>\<and>\<psi> \<equiv> \<lambda>w. (\<phi> w)\<and>(\<psi> w)" abbreviation mor :: "io\<Rightarrow>io\<Rightarrow>io" (infixr"\<^bold>\<or>"50) where "\<phi>\<^bold>\<or>\<psi> \<equiv> \<lambda>w. (\<phi> w)\<or>(\<psi> w)" abbreviation mimp :: "io\<Rightarrow>io\<Rightarrow>io" (infixr"\<^bold>\<rightarrow>"49) where "\<phi>\<^bold>\<rightarrow>\<psi> \<equiv> \<lambda>w. (\<phi> w)\<longrightarrow>(\<psi> w)" abbreviation mequ :: "io\<Rightarrow>io\<Rightarrow>io" (infixr"\<^bold>\<leftrightarrow>"48) where "\<phi>\<^bold>\<leftrightarrow>\<psi> \<equiv> \<lambda>w. (\<phi> w)\<longleftrightarrow>(\<psi> w)" abbreviation xor:: "bool\<Rightarrow>bool\<Rightarrow>bool" (infixr"\<oplus>"50) where "\<phi>\<oplus>\<psi> \<equiv> (\<phi>\<or>\<psi>) \<and> \<not>(\<phi>\<and>\<psi>)" abbreviation mxor :: "io\<Rightarrow>io\<Rightarrow>io" (infixr"\<^bold>\<oplus>"50) where "\<phi>\<^bold>\<oplus>\<psi> \<equiv> \<lambda>w. (\<phi> w)\<oplus>(\<psi> w)" subsubsection \<open>Possibilist Quantification\<close> abbreviation mforall :: "('t\<Rightarrow>io)\<Rightarrow>io" ("\<^bold>\<forall>") where "\<^bold>\<forall>\<Phi> \<equiv> \<lambda>w.\<forall>x. (\<Phi> x w)" abbreviation mexists :: "('t\<Rightarrow>io)\<Rightarrow>io" ("\<^bold>\<exists>") where "\<^bold>\<exists>\<Phi> \<equiv> \<lambda>w.\<exists>x. (\<Phi> x w)" abbreviation mforallB :: "('t\<Rightarrow>io)\<Rightarrow>io" (binder"\<^bold>\<forall>"[8]9) \<comment> \<open>Binder notation\<close> where "\<^bold>\<forall>x. 
\<phi>(x) \<equiv> \<^bold>\<forall>\<phi>" abbreviation mexistsB :: "('t\<Rightarrow>io)\<Rightarrow>io" (binder"\<^bold>\<exists>"[8]9) where "\<^bold>\<exists>x. \<phi>(x) \<equiv> \<^bold>\<exists>\<phi>" subsubsection \<open>Actualist Quantification\<close> text\<open> The following predicate is used to model actualist quantifiers by restricting the domain of quantification at every possible world. This standard technique has been referred to as \emph{existence relativization} (@{cite "fitting98"}, p. 106), highlighting the fact that this predicate can be seen as a kind of meta-logical `existence predicate' telling us which individuals \emph{actually} exist at a given world. This meta-logical concept does not appear in our object language. \<close> consts Exists::"\<up>\<langle>\<zero>\<rangle>" ("existsAt") abbreviation mforallAct :: "\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>" ("\<^bold>\<forall>\<^sup>E") where "\<^bold>\<forall>\<^sup>E\<Phi> \<equiv> \<lambda>w.\<forall>x. (existsAt x w)\<longrightarrow>(\<Phi> x w)" abbreviation mexistsAct :: "\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>" ("\<^bold>\<exists>\<^sup>E") where "\<^bold>\<exists>\<^sup>E\<Phi> \<equiv> \<lambda>w.\<exists>x. (existsAt x w) \<and> (\<Phi> x w)" abbreviation mforallActB :: "\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>" (binder"\<^bold>\<forall>\<^sup>E"[8]9) \<comment> \<open>binder notation\<close> where "\<^bold>\<forall>\<^sup>Ex. \<phi>(x) \<equiv> \<^bold>\<forall>\<^sup>E\<phi>" abbreviation mexistsActB :: "\<up>\<langle>\<up>\<langle>\<zero>\<rangle>\<rangle>" (binder"\<^bold>\<exists>\<^sup>E"[8]9) where "\<^bold>\<exists>\<^sup>Ex. 
\<phi>(x) \<equiv> \<^bold>\<exists>\<^sup>E\<phi>" subsubsection \<open>Modal Operators\<close> consts aRel::"i\<Rightarrow>i\<Rightarrow>bool" (infixr "r" 70) \<comment> \<open>accessibility relation \emph{r}\<close> abbreviation mbox :: "io\<Rightarrow>io" ("\<^bold>\<box>_"[52]53) where "\<^bold>\<box>\<phi> \<equiv> \<lambda>w.\<forall>v. (w r v)\<longrightarrow>(\<phi> v)" abbreviation mdia :: "io\<Rightarrow>io" ("\<^bold>\<diamond>_"[52]53) where "\<^bold>\<diamond>\<phi> \<equiv> \<lambda>w.\<exists>v. (w r v)\<and>(\<phi> v)" subsubsection \<open>\emph{Extension-of} Operator\<close> text\<open> According to Fitting's semantics (@{cite "Fitting"}, pp. 92-4) \<open>\<down>\<close> is a unary operator applying only to intensional terms. A term of the form \<open>\<down>\<alpha>\<close> designates the extension of the intensional object designated by \<open>\<alpha>\<close>, at some \emph{given} world. For instance, suppose we take possible worlds as persons; we can then think of the concept `red' as a function that maps each person to the set of objects that person classifies as red (its extension). We can further state that the intensional term \emph{r} of type \<open>\<up>\<langle>\<zero>\<rangle>\<close> designates the concept `red'. As can be seen, intensional terms in IHOML designate functions on possible worlds, and they always do so \emph{rigidly}. We will sometimes refer to an intensional object explicitly as `rigid', implying that its (rigidly) designated function has the same extension in all possible worlds. \<close> text\<open> Terms of the form \<open>\<down>\<alpha>\<close> are called \emph{relativized} (extensional) terms; they are always derived from intensional terms and their type is \emph{extensional} (in the color example \<open>\<down>r\<close> would be of type \<open>\<langle>\<zero>\<rangle>\<close>).
Relativized terms may vary their denotation from world to world of a model, because the extension of an intensional term can change from world to world, i.e. they are non-rigid. \<close> text\<open> To recap: an intensional term denotes the same function in all worlds (i.e. it's rigid), whereas a relativized term denotes a (possibly) different extension (an object or a set) at every world (i.e. it's non-rigid). To find out the denotation of a relativized term, a world must be given. Relativized terms are the \emph{only} non-rigid terms. \bigbreak \<close> text\<open> For our Isabelle/HOL embedding, we had to follow a slightly different approach; we model \<open>\<down>\<close> as a predicate applying to formulas of the form \<open>\<Phi>(\<down>\<alpha>\<^sub>1,\<dots>\<alpha>\<^sub>n)\<close> (for our treatment we only need to consider cases involving one or two arguments, the first one being a relativized term). For instance, the formula \<open>Q(\<down>a\<^sub>1)\<^sup>w\<close> (evaluated at world \emph{w}) is modelled as \<open>\<downharpoonleft>(Q,a\<^sub>1)\<^sup>w\<close> (or \<open>(Q \<downharpoonleft> a\<^sub>1)\<^sup>w\<close> using infix notation), which gets further translated into \<open>Q(a\<^sub>1(w))\<^sup>w\<close>. Depending on the particular types involved, we have to define \<open>\<down>\<close> differently to ensure type correctness (see \emph{a-d} below). Nevertheless, the essence of the \emph{Extension-of} operator remains the same: a term \<open>\<alpha>\<close> preceded by \<open>\<down>\<close> behaves as a non-rigid term, whose denotation at a given possible world corresponds to the extension of the original intensional term \<open>\<alpha>\<close> at that world. 
\<close> text\<open> (\emph{a}) Predicate \<open>\<phi>\<close> takes as argument a relativized term derived from an (intensional) individual of type \<open>\<up>\<zero>\<close>: \<close> abbreviation extIndivArg::"\<up>\<langle>\<zero>\<rangle>\<Rightarrow>\<up>\<zero>\<Rightarrow>io" (infix "\<downharpoonleft>" 60) where "\<phi> \<downharpoonleft>c \<equiv> \<lambda>w. \<phi> (c w) w" text\<open> (\emph{b}) A variant of (\emph{a}) for terms derived from predicates (types of form \<open>\<up>\<langle>t\<rangle>\<close>): \<close> abbreviation extPredArg::"(('t\<Rightarrow>bool)\<Rightarrow>io)\<Rightarrow>('t\<Rightarrow>io)\<Rightarrow>io" (infix "\<down>" 60) where "\<phi> \<down>P \<equiv> \<lambda>w. \<phi> (\<lambda>x. P x w) w" text\<open> (\emph{c}) A variant of (\emph{b}) with a second argument (the first one being relativized): \<close> abbreviation extPredArg1::"(('t\<Rightarrow>bool)\<Rightarrow>'b\<Rightarrow>io)\<Rightarrow>('t\<Rightarrow>io)\<Rightarrow>'b\<Rightarrow>io" (infix "\<down>\<^sub>1" 60) where "\<phi> \<down>\<^sub>1P \<equiv> \<lambda>z. \<lambda>w. \<phi> (\<lambda>x. P x w) z w" text\<open> In what follows, the `\<open>\<lparr>_\<rparr>\<close>' parentheses are an operator used to convert extensional objects into `rigid' intensional ones: \<close> abbreviation trivialConversion::"bool\<Rightarrow>io" ("\<lparr>_\<rparr>") where "\<lparr>\<phi>\<rparr> \<equiv> (\<lambda>w. \<phi>)" text\<open> (\emph{d}) A variant of (\emph{b}) where \<open>\<phi>\<close> takes `rigid' intensional terms as argument: \<close> abbreviation mextPredArg::"(('t\<Rightarrow>io)\<Rightarrow>io)\<Rightarrow>('t\<Rightarrow>io)\<Rightarrow>io" (infix "\<^bold>\<down>" 60) where "\<phi> \<^bold>\<down>P \<equiv> \<lambda>w. \<phi> (\<lambda>x. \<lparr>P x w\<rparr>) w" (* where "\<phi> \<^bold>\<down>P \<equiv> \<lambda>w. \<phi> (\<lambda>x u. 
P x w) w"*) subsubsection \<open>Equality\<close> abbreviation meq :: "'t\<Rightarrow>'t\<Rightarrow>io" (infix"\<^bold>\<approx>"60) \<comment> \<open>normal equality (for all types)\<close> where "x \<^bold>\<approx> y \<equiv> \<lambda>w. x = y" abbreviation meqC :: "\<up>\<langle>\<up>\<zero>,\<up>\<zero>\<rangle>" (infixr"\<^bold>\<approx>\<^sup>C"52) \<comment> \<open>eq. for individual concepts\<close> where "x \<^bold>\<approx>\<^sup>C y \<equiv> \<lambda>w. \<forall>v. (x v) = (y v)" abbreviation meqL :: "\<up>\<langle>\<zero>,\<zero>\<rangle>" (infixr"\<^bold>\<approx>\<^sup>L"52) \<comment> \<open>Leibniz eq. for individuals\<close> where "x \<^bold>\<approx>\<^sup>L y \<equiv> \<^bold>\<forall>\<phi>. \<phi>(x)\<^bold>\<rightarrow>\<phi>(y)" subsubsection \<open>Meta-logical Predicates\<close> abbreviation valid :: "io\<Rightarrow>bool" ("\<lfloor>_\<rfloor>" [8]) where "\<lfloor>\<psi>\<rfloor> \<equiv> \<forall>w.(\<psi> w)" abbreviation satisfiable :: "io\<Rightarrow>bool" ("\<lfloor>_\<rfloor>\<^sup>s\<^sup>a\<^sup>t" [8]) where "\<lfloor>\<psi>\<rfloor>\<^sup>s\<^sup>a\<^sup>t \<equiv> \<exists>w.(\<psi> w)" abbreviation countersat :: "io\<Rightarrow>bool" ("\<lfloor>_\<rfloor>\<^sup>c\<^sup>s\<^sup>a\<^sup>t" [8]) where "\<lfloor>\<psi>\<rfloor>\<^sup>c\<^sup>s\<^sup>a\<^sup>t \<equiv> \<exists>w.\<not>(\<psi> w)" abbreviation invalid :: "io\<Rightarrow>bool" ("\<lfloor>_\<rfloor>\<^sup>i\<^sup>n\<^sup>v" [8]) where "\<lfloor>\<psi>\<rfloor>\<^sup>i\<^sup>n\<^sup>v \<equiv> \<forall>w.\<not>(\<psi> w)" subsection \<open>Verifying the Embedding\<close> text\<open> The above definitions introduce modal logic \emph{K} with possibilist and actualist quantifiers, as evidenced by the following tests: \<close> text\<open> Verifying \emph{K} Principle and Necessitation: \<close> text\<open> Barcan and Converse Barcan Formulas are satisfied for standard (possibilist) quantifiers: \<close> lemma "\<lfloor>(\<^bold>\<forall>x.\<^bold>\<box>(\<phi> x)) 
\<^bold>\<rightarrow> \<^bold>\<box>(\<^bold>\<forall>x.(\<phi> x))\<rfloor>" by simp lemma "\<lfloor>\<^bold>\<box>(\<^bold>\<forall>x.(\<phi> x)) \<^bold>\<rightarrow> (\<^bold>\<forall>x.\<^bold>\<box>(\<phi> x))\<rfloor>" by simp text\<open> (Converse) Barcan Formulas not satisfied for actualist quantifiers: \<close> lemma "\<lfloor>(\<^bold>\<forall>\<^sup>Ex.\<^bold>\<box>(\<phi> x)) \<^bold>\<rightarrow> \<^bold>\<box>(\<^bold>\<forall>\<^sup>Ex.(\<phi> x))\<rfloor>" nitpick oops \<comment> \<open>countersatisfiable\<close> lemma "\<lfloor>\<^bold>\<box>(\<^bold>\<forall>\<^sup>Ex.(\<phi> x)) \<^bold>\<rightarrow> (\<^bold>\<forall>\<^sup>Ex.\<^bold>\<box>(\<phi> x))\<rfloor>" nitpick oops \<comment> \<open>countersatisfiable\<close> text\<open> Above we have made use of the (counter-)model finder \emph{Nitpick} @{cite "Nitpick"} for the first time. For all the conjectured lemmas above, \emph{Nitpick} has found a countermodel, i.e. a model satisfying all the axioms which falsifies the given formula. This means that the formulas are not valid.
\<close> text\<open> Well known relations between meta-logical notions: \<close> lemma "\<lfloor>\<phi>\<rfloor> \<longleftrightarrow> \<not>\<lfloor>\<phi>\<rfloor>\<^sup>c\<^sup>s\<^sup>a\<^sup>t" by simp lemma "\<lfloor>\<phi>\<rfloor>\<^sup>s\<^sup>a\<^sup>t \<longleftrightarrow> \<not>\<lfloor>\<phi>\<rfloor>\<^sup>i\<^sup>n\<^sup>v " by simp text\<open> Contingent truth does not allow for necessitation: \<close> lemma "\<lfloor>\<^bold>\<diamond>\<phi>\<rfloor> \<longrightarrow> \<lfloor>\<^bold>\<box>\<phi>\<rfloor>" nitpick oops \<comment> \<open>countersatisfiable\<close> lemma "\<lfloor>\<^bold>\<box>\<phi>\<rfloor>\<^sup>s\<^sup>a\<^sup>t \<longrightarrow> \<lfloor>\<^bold>\<box>\<phi>\<rfloor>" nitpick oops \<comment> \<open>countersatisfiable\<close> text\<open> \emph{Modal collapse} is countersatisfiable: \<close> lemma "\<lfloor>\<phi> \<^bold>\<rightarrow> \<^bold>\<box>\<phi>\<rfloor>" nitpick oops \<comment> \<open>countersatisfiable\<close> text\<open> \pagebreak \<close> subsection \<open>Useful Definitions for Axiomatization of Further Logics\<close> text\<open> The best known normal logics (\emph{K4, K5, KB, K45, KB5, D, D4, D5, D45, ...}) can be obtained by combinations of the following axioms: \<close> abbreviation M where "M \<equiv> \<^bold>\<forall>\<phi>. \<^bold>\<box>\<phi> \<^bold>\<rightarrow> \<phi>" abbreviation B where "B \<equiv> \<^bold>\<forall>\<phi>. \<phi> \<^bold>\<rightarrow> \<^bold>\<box>\<^bold>\<diamond>\<phi>" abbreviation D where "D \<equiv> \<^bold>\<forall>\<phi>. \<^bold>\<box>\<phi> \<^bold>\<rightarrow> \<^bold>\<diamond>\<phi>" abbreviation IV where "IV \<equiv> \<^bold>\<forall>\<phi>. \<^bold>\<box>\<phi> \<^bold>\<rightarrow> \<^bold>\<box>\<^bold>\<box>\<phi>" abbreviation V where "V \<equiv> \<^bold>\<forall>\<phi>. 
\<^bold>\<diamond>\<phi> \<^bold>\<rightarrow> \<^bold>\<box>\<^bold>\<diamond>\<phi>" text\<open> Instead of postulating (combinations of) the above axioms we instead make use of the well-known \emph{Sahlqvist correspondence}, which links axioms to constraints on a model's accessibility relation (e.g. reflexive, symmetric, etc.; the definitions of which are not shown here). We show that reflexivity, symmetry, seriality, transitivity and euclideanness imply axioms $M, B, D, IV, V$ respectively. \<close> lemma "reflexive aRel \<Longrightarrow> \<lfloor>M\<rfloor>" by blast \<comment> \<open>aka T\<close> lemma "symmetric aRel \<Longrightarrow> \<lfloor>B\<rfloor>" by blast lemma "serial aRel \<Longrightarrow> \<lfloor>D\<rfloor>" by blast lemma "transitive aRel \<Longrightarrow> \<lfloor>IV\<rfloor>" by blast lemma "euclidean aRel \<Longrightarrow> \<lfloor>V\<rfloor>" by blast lemma "preorder aRel \<Longrightarrow> \<lfloor>M\<rfloor> \<and> \<lfloor>IV\<rfloor>" by blast \<comment> \<open>S4: reflexive + transitive\<close> lemma "equivalence aRel \<Longrightarrow> \<lfloor>M\<rfloor> \<and> \<lfloor>V\<rfloor>" by blast \<comment> \<open>S5: preorder + symmetric\<close> lemma "reflexive aRel \<and> euclidean aRel \<Longrightarrow> \<lfloor>M\<rfloor> \<and> \<lfloor>V\<rfloor>" by blast \<comment> \<open>S5\<close> text\<open> Using these definitions, we can derive axioms for the most common modal logics (see also @{cite "C47"}). Thereby we are free to use either the semantic constraints or the related \emph{Sahlqvist} axioms. Here we provide both versions. In what follows we use the semantic constraints (for improved performance). \pagebreak \<close> (*<*) end (*>*)
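The countermodels that Nitpick reports for the actualist Barcan formula can be illustrated by a small brute-force search over two-world Kripke models with a single individual and varying domains. This is a hypothetical Python sketch of the idea (all names are ours, not part of the theory), not the algorithm Nitpick actually uses:

```python
from itertools import product

worlds = (0, 1)  # a two-world frame suffices for a countermodel

def box(phi, r, w):
    """Truth of box(phi) at world w: phi holds at every r-accessible world."""
    return all(phi(v) for v in worlds if r[(w, v)])

def barcan_actualist(r, exists_at, phi, w):
    """(forall^E x. box phi(x)) -> box (forall^E x. phi(x)) at world w,
    specialised to a single individual; exists_at[v] and phi[v] give that
    individual's existence and phi-truth at world v."""
    premise = (not exists_at[w]) or box(lambda v: phi[v], r, w)
    conclusion = box(lambda v: (not exists_at[v]) or phi[v], r, w)
    return (not premise) or conclusion

def find_countermodel():
    """Enumerate all accessibility relations, domains and valuations."""
    for r_bits, ex_bits, phi_bits in product(
            product([False, True], repeat=4),
            product([False, True], repeat=2),
            product([False, True], repeat=2)):
        r = {(w, v): r_bits[2 * w + v] for w in worlds for v in worlds}
        exists_at = dict(zip(worlds, ex_bits))
        phi = dict(zip(worlds, phi_bits))
        for w in worlds:
            if not barcan_actualist(r, exists_at, phi, w):
                return r, exists_at, phi, w
    return None
```

A typical countermodel has the individual failing to exist at the evaluation world (making the premise vacuous) while existing, without satisfying phi, at an accessible world. With possibilist quantifiers the domain restriction disappears and no such countermodel exists, matching the `by simp` proofs above.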
We’re searching for project managers and quantity surveyors to work with our in-house and consultant specialists. Lower Richmond Properties (LRP) was established over 15 years ago as a family-owned and -run property development and management company. This is a fantastic opportunity for someone at Associate Project Manager level who might be looking for a clear route to senior management.
(* Copyright © 2006-2008 Russell O’Connor Permission is hereby granted, free of charge, to any person obtaining a copy of this proof and associated documentation files (the "Proof"), to deal in the Proof without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Proof, and to permit persons to whom the Proof is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Proof. THE PROOF IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE PROOF OR THE USE OR OTHER DEALINGS IN THE PROOF. *) Require Import CoRN.metric2.Metric. Require Import CoRN.metric2.UniformContinuity. Require Import CoRN.reals.fast.CRAlternatingSum. Require Import CoRN.reals.fast.CRstreams. Require Import CoRN.model.totalorder.QposMinMax. Require Import CoRN.model.totalorder.QMinMax. Require Import Coq.QArith.Qpower. Require Import Coq.QArith.Qabs. Require Import Coq.ZArith.Zdiv. Set Implicit Arguments. Opaque Qabs. Local Open Scope Q_scope. Import MathClasses.theory.CoqStreams. (** [InfiniteSum] approximates the limit of the series s within err_prop. *) (* This Fixpoint is fat because the fuel is a unary nat, which can waste a lot of memory in normal form inside vm_compute. *) Fixpoint InfiniteSum_fat (filter:Stream Q -> bool) (s:Stream Q) (q : Q) (fuel:nat) : Q := if filter s then q else match fuel with | O => q | S p => InfiniteSum_fat filter (tl s) (Qplus' q (hd s)) p end. 
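The behavior of `InfiniteSum_fat` can be sketched in Python (a sketch with our own names, not part of the Coq development), using `Fraction` for exact rationals and representing the stream as a function from indices to terms:

```python
# A Python sketch of InfiniteSum_fat: add terms of the stream to the
# accumulator q until either the filter accepts the current tail or the
# unary fuel is exhausted.
from fractions import Fraction

def infinite_sum_fat(filt, stream, q, fuel):
    i = 0
    while not filt(i):          # filter is tested before each step, as in Coq
        if fuel == 0:
            return q
        q += stream(i)          # Qplus' q (hd s)
        i += 1                  # move to the tail of the stream
        fuel -= 1
    return q

half = Fraction(1, 2)
geo = lambda n: half ** n                       # the stream 1, 1/2, 1/4, ...
small = lambda n: geo(n) < Fraction(1, 100)     # stop once the head is tiny
print(infinite_sum_fat(small, geo, Fraction(0), 1000))  # 127/64
```

With ample fuel the loop stops at the first index where the filter fires (here index 7, since $2^{-7} < 1/100$), summing the first seven terms.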
Definition InfiniteSum_raw_F (cont: Stream Q -> Q -> Q) (err_prop:Stream Q -> bool) (s:Stream Q) (q : Q) : Q := if (err_prop s) then q (* the error is small enough, stop the sum *) else cont (tl s) (Qplus' q (hd s)). (* continue the calculations *) (** Sum a rational stream up to the first index at which the filter Stream Q -> bool equals true, or 2^n. n is intended as a termination proof (fuel). It is otherwise useless for the calculations and we want it as small as possible in memory, because vm_compute will start by computing its normal form. By choosing 2^n we make the fuel logarithmic in the actual number of iterations. cont is a continuation that holds the rest of the computations to do in the recursion; it starts with fun _ q => q. This has the same calculation speed as InfiniteSum_fat, and should take less memory. *) Fixpoint InfiniteSum_raw_N (n:nat) (filter: Stream Q -> bool) (cont: Stream Q -> Q -> Q) {struct n} : Stream Q -> Q -> Q := match n with | O => InfiniteSum_raw_F cont filter | S p => InfiniteSum_raw_N p filter (fun s => InfiniteSum_raw_N p filter cont s) end. (* Remark: the eta expansion here is important, else the virtual machine will compute the value of (InfiniteSum_raw_N n') before reducing the call of InfiniteSum_raw_F.*) (* Get an idea of how the recursion goes. The continuation will unfold n layers deep, before being folded by additions. *) Lemma InfiniteSum_raw_N_unfold : forall n cont (filter : Stream Q -> bool) s, InfiniteSum_raw_N (S (S n)) filter cont s = InfiniteSum_raw_N n filter (fun s0 => InfiniteSum_raw_N n filter (fun s1 => InfiniteSum_raw_N (S n) filter cont s1) s0) s. Proof. reflexivity. Qed. Lemma InfiniteSum_fat_plus : forall (fuel:nat) (filter:Stream Q -> bool) (s:Stream Q) (q : Q), InfiniteSum_fat filter s q fuel == q + InfiniteSum_fat filter s 0 fuel. Proof. induction fuel. - intros. simpl. destruct (filter s); rewrite Qplus_0_r; reflexivity. - intros. simpl. destruct (filter s). symmetry.
apply Qplus_0_r. rewrite IHfuel. rewrite (IHfuel filter (tl s) (Qplus' 0 (hd s))). rewrite Qplus_assoc. apply Qplus_comp. 2: reflexivity. rewrite Qplus'_correct, Qplus'_correct, Qplus_0_l. reflexivity. Qed. Lemma InfiniteSum_fat_remove_filter : forall (fuel:nat) (filter:Stream Q -> bool) (s:Stream Q) (q : Q), filter (Str_nth_tl fuel s) = true -> exists n:nat, InfiniteSum_fat filter s q fuel = InfiniteSum_fat (fun _ => false) s q n /\ (forall p:nat, lt p n -> filter (Str_nth_tl p s) = false) /\ filter (Str_nth_tl n s) = true. Proof. induction fuel. - intros. exists O. split. 2: split. simpl. destruct (filter s); reflexivity. intros. exfalso; inversion H0. exact H. - intros. destruct (filter s) eqn:des. + exists O. split. 2: split. simpl. rewrite des. reflexivity. intros. exfalso; inversion H0. exact des. + specialize (IHfuel filter (tl s) (Qplus' q (hd s)) H) as [n [H1 H2]]. exists (S n). split. 2: split. simpl. rewrite des. rewrite H1. reflexivity. intros. destruct p. exact des. simpl. apply H2. apply le_S_n in H0. exact H0. apply H2. Qed. Lemma InfiniteSum_fat_add_stop : forall (p n : nat) (s : Stream Q) (filter : Stream Q -> bool) (q : Q), le n p -> filter (Str_nth_tl n s) = true -> InfiniteSum_fat filter s q p = InfiniteSum_fat filter s q n. Proof. induction p. - intros n s filter q H H0. inversion H. reflexivity. - intros n s filter q H H0. destruct n. + simpl in H0. simpl. rewrite H0. reflexivity. + specialize (IHp n (tl s) filter). simpl. destruct (filter s) eqn:des. reflexivity. rewrite IHp. reflexivity. apply le_S_n, H. exact H0. Qed. Lemma InfiniteSum_fat_extend : forall (n : nat) (s : Stream Q) (filter : Stream Q -> bool) (q : Q), filter (Str_nth_tl n s) = true -> InfiniteSum_fat filter s q n = InfiniteSum_fat filter s q (S n). Proof. intros. symmetry. apply InfiniteSum_fat_add_stop. apply le_S, le_refl. exact H. Qed. 
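The doubling trick in `InfiniteSum_raw_N` — fuel `n` buys up to 2^n additions because each level runs level `n-1` twice through the continuation — can be sketched in Python (our own names; streams are indexed by position rather than destructed):

```python
from fractions import Fraction

def raw_N(n, filt, cont, i, q, stream):
    # Sketch of InfiniteSum_raw_N: level 0 is one InfiniteSum_raw_F step;
    # level n runs level n-1, continuing with another run of level n-1,
    # so fuel n allows up to 2^n additions.
    if n == 0:
        if filt(i):                        # error small enough: stop
            return q
        return cont(i + 1, q + stream(i))  # add the head, continue on the tail
    inner = lambda j, r: raw_N(n - 1, filt, cont, j, r, stream)
    return raw_N(n - 1, filt, inner, i, q, stream)

# With the identity continuation and a filter that never fires, fuel n
# sums exactly 2^n terms, matching InfiniteSum_raw_N_correct below.
identity = lambda j, r: r
stream = lambda k: Fraction(1, 2) ** k
never = lambda k: False
val = raw_N(4, never, identity, 0, Fraction(0), stream)
direct = sum((stream(k) for k in range(2 ** 4)), Fraction(0))
print(val == direct)  # True
```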
Lemma InfiniteSum_fat_add_pass : forall (n p : nat) (s : Stream Q) (filter : Stream Q -> bool) (q : Q), (forall k:nat, lt k n -> filter (Str_nth_tl k s) = false) -> InfiniteSum_fat filter s q (n+p) = InfiniteSum_fat filter (Str_nth_tl n s) (InfiniteSum_fat filter s q n) p. Proof. induction n. - intros. simpl. destruct (filter s); reflexivity. - intros. pose proof (H O (le_n_S 0 n (le_0_n _))) as zeroFalse. simpl in zeroFalse. simpl. rewrite zeroFalse. rewrite IHn. reflexivity. intros k H0. destruct n. exfalso; inversion H0. apply Nat.le_succ_r in H0. destruct H0. apply (H (S k)), le_n_S, le_S, H0. inversion H0. subst k. apply (H (S n) (le_refl _)). Qed. Lemma decide_filter_before_n : forall (n : nat) (filter : Stream Q -> bool) (s : Stream Q), (exists p:nat, lt p n /\ filter (Str_nth_tl p s) = true) \/ (forall p:nat, lt p n -> filter (Str_nth_tl p s) = false). Proof. induction n. - intros. right. intros. exfalso. inversion H. - intros. destruct (filter (Str_nth_tl n s)) eqn:des. left. exists n. split. apply le_refl. exact des. destruct (IHn filter s). + left. destruct H as [p [H H0]]. exists p. split. apply le_S, H. exact H0. + right. intros. apply Nat.le_succ_r in H0. destruct H0. apply H, H0. inversion H0. subst p. exact des. Qed. Lemma InfiniteSum_raw_N_step : forall (fuel : nat) c (filter : Stream Q -> bool) (s : Stream Q) (q : Q), (forall p:nat, p < 2 ^ fuel -> filter (Str_nth_tl p s) = false)%nat -> InfiniteSum_raw_N fuel filter c s q = c (Str_nth_tl (2^fuel) s) (InfiniteSum_raw_N fuel filter (fun _ r => r) s q). Proof. induction fuel. - intros. simpl. unfold InfiniteSum_raw_F. destruct (filter s) eqn:des. 2: reflexivity. exfalso. specialize (H O (le_refl _)). simpl in H. rewrite H in des. discriminate. - intros. simpl. assert (forall p : nat, (p < 2 ^ fuel)%nat -> filter (Str_nth_tl p s) = false) as firstHalf. { intros. apply (H p). simpl. rewrite Nat.add_0_r. apply (lt_le_trans _ (0+2^fuel)). exact H0. apply Nat.add_le_mono_r, le_0_n. 
} rewrite IHfuel, IHfuel. 3: exact firstHalf. rewrite Nat.add_0_r, Str_nth_tl_plus. apply f_equal. symmetry. apply IHfuel. exact firstHalf. intros. rewrite Str_nth_tl_plus. apply H. simpl. rewrite Nat.add_0_r. apply Nat.add_lt_mono_r, H0. Qed. (* The initial continuation is not reached when the filter is triggered before. *) Lemma InfiniteSum_raw_N_cont_invariant : forall (fuel p : nat) c d (filter : Stream Q -> bool) (s : Stream Q) (q : Q), (p < 2 ^ fuel)%nat -> filter (Str_nth_tl p s) = true -> InfiniteSum_raw_N fuel filter c s q = InfiniteSum_raw_N fuel filter d s q. Proof. induction fuel. - intros. simpl in H. simpl. unfold InfiniteSum_raw_F. destruct (filter s) eqn:des. reflexivity. apply le_S_n in H. inversion H. exfalso. subst p. simpl in H0. rewrite H0 in des. discriminate. - intros. simpl. simpl in H. rewrite Nat.add_0_r in H. destruct (decide_filter_before_n (2^fuel) filter s). destruct H1 as [k [H1 H2]]. apply (IHfuel k). exact H1. exact H2. (* Now 2^fuel <= p *) destruct (Nat.lt_ge_cases p (2^fuel)). exfalso. specialize (H1 p H2). rewrite H0 in H1. discriminate. apply Nat.le_exists_sub in H2. destruct H2 as [k [H2 _]]. subst p. rewrite <- Str_nth_tl_plus in H0. rewrite (InfiniteSum_raw_N_step fuel (fun s0 : Stream Q => InfiniteSum_raw_N fuel filter c s0)). 2: exact H1. rewrite (InfiniteSum_raw_N_step fuel (fun s0 : Stream Q => InfiniteSum_raw_N fuel filter d s0)). 2: exact H1. apply (IHfuel k). apply Nat.add_lt_mono_r in H. exact H. exact H0. Qed. Lemma InfiniteSum_raw_N_correct : forall (fuel : nat) (s : Stream Q) (filter : Stream Q -> bool) (q : Q), InfiniteSum_raw_N fuel filter (fun _ r => r) s q = InfiniteSum_fat filter s q (2 ^ fuel)%nat. Proof. induction fuel. - intros. simpl. unfold InfiniteSum_raw_F. destruct (filter s). reflexivity. simpl. destruct (filter (tl s)); reflexivity. - intros s filter q. simpl. rewrite Nat.add_0_r. destruct (decide_filter_before_n (2^fuel)%nat filter s). + destruct H as [p [H H0]]. 
rewrite (@InfiniteSum_fat_add_stop _ p). 3: exact H0. rewrite <- (@InfiniteSum_fat_add_stop (2^fuel)). 2: apply (le_trans _ (S p) _ (le_S _ _ (le_refl p)) H). 2: exact H0. rewrite <- IHfuel. apply (@InfiniteSum_raw_N_cont_invariant fuel p). exact H. exact H0. apply (le_trans _ (2^fuel + 0)). rewrite Nat.add_0_r. apply (le_trans _ (S p) _ (le_S _ _ (le_refl p)) H). apply Nat.add_le_mono_l, le_0_n. + rewrite InfiniteSum_fat_add_pass. 2: exact H. rewrite <- IHfuel. rewrite <- IHfuel. rewrite InfiniteSum_raw_N_step. reflexivity. exact H. Qed. Lemma InfiniteSum_raw_N_extend : forall (p q:nat) s (err : Stream Q -> bool) (r:Q), (Is_true (err (Str_nth_tl (2^p) s))) -> (p <= q)%nat -> InfiniteSum_raw_N p err (fun _ r => r) s r = InfiniteSum_raw_N q err (fun _ r => r) s r. Proof. intros. rewrite InfiniteSum_raw_N_correct, InfiniteSum_raw_N_correct. symmetry. apply InfiniteSum_fat_add_stop. apply Nat.pow_le_mono_r. discriminate. exact H0. unfold Is_true in H. destruct (err (Str_nth_tl (2 ^ p) s)). reflexivity. contradiction. Qed. Lemma InfiniteSum_fat_minus : forall (i p : nat) (s : Stream Q) (q : Q), InfiniteSum_fat (fun _ => false) s q (p + i) - InfiniteSum_fat (fun _ => false) s q i == InfiniteSum_fat (fun _ => false) (Str_nth_tl i s) 0 p. Proof. induction i. - intros. simpl. rewrite Nat.add_0_r. unfold Qminus. rewrite InfiniteSum_fat_plus. ring. - intros. rewrite Nat.add_succ_r. simpl. rewrite IHi. reflexivity. Qed. Lemma InfiniteSum_fat_wd : forall (fuel:nat) (filter:Stream Q -> bool) (s:Stream Q) (q r : Q), q == r -> InfiniteSum_fat filter s q fuel == InfiniteSum_fat filter s r fuel. Proof. induction fuel. - intros. simpl. destruct (filter s); exact H. - intros. simpl. destruct (filter s). exact H. apply IHfuel. rewrite Qplus'_correct, Qplus'_correct. apply Qplus_comp. exact H. reflexivity. Qed. (** ** Geometric Series A geometric series is simple to sum. However we do something slightly more general. We sum a series that satisfies the ratio test.
*) Section GeometricSeries. Variable a : Q. Hypothesis Ha0 : 0 <= a. Hypothesis Ha1 : a < 1. (** The definition of what we are calling a [GeometricSeries]: a series that satisfies the ratio test. *) Definition GeometricSeries := ForAll (fun s => Qabs ((hd (tl s))) <= a*(Qabs(hd s))). (** [err_bound] bounds the distance between the head of the series and its limit. *) Let err_bound (s:Stream Q) : Q := Qabs (hd s)/(1-a). (** [err_prop]: is err a bound on the series s? *) Let err_prop (err:Q) (s:Stream Q) : bool := match ((err_bound s) ?= err) with Gt => false |_ => true end. Lemma err_prop_prop : forall e s, err_prop e s = true <-> err_bound s <= e. Proof. intros e s. unfold err_prop, err_bound, Qcompare, Qle, Z.le. destruct (Qnum (Qabs (hd s) / (1 - a))%Q * Zpos (Qden e) ?= Qnum e * Zpos (Qden (Qabs (hd s) / (1 - a))%Q))%Z; split; auto with *. Qed. Lemma err_prop_nonneg : forall e s, err_prop e s = true -> 0 <= e. Proof. intros. apply err_prop_prop in H. refine (Qle_trans _ _ _ _ H). apply Qmult_le_0_compat. apply Qabs_nonneg. apply Qlt_le_weak, Qinv_lt_0_compat. unfold Qminus. rewrite <- Qlt_minus_iff. exact Ha1. Qed. (** The key lemma about error bounds. *) Lemma err_prop_key : forall (e:Q) (s: Stream Q) (x:Q), err_prop e s = true -> Qabs x <= a*e -> Qabs (Qplus' (hd s) x) <= e. Proof. intros e s x Hs Hx. rewrite -> Qplus'_correct. eapply Qle_trans. apply Qabs_triangle. setoid_replace e with (e*(1-a) + a*e) by (unfold canonical_names.equiv, stdlib_rationals.Q_eq; simpl; ring). assert (X:0 < 1 - a). change (0 < 1 + - a). rewrite <- Qlt_minus_iff. assumption. apply Qplus_le_compat; try assumption. rewrite -> err_prop_prop in Hs. unfold err_bound in Hs. apply Qmult_lt_0_le_reg_r with (/(1-a)). apply Qinv_lt_0_compat; assumption. rewrite <- Qmult_assoc, Qmult_inv_r, Qmult_1_r. assumption. auto with *. Qed. Lemma err_prop_key' : forall (e:Q) (s: Stream Q), GeometricSeries s -> err_prop e s = true -> err_prop (a*e) (tl s) = true. Proof. intros e s [H _] Hs.
rewrite -> err_prop_prop in *. unfold err_bound in *. rewrite -> Qle_minus_iff in H, Hs |- *. rewrite -> Qlt_minus_iff in Ha1. setoid_replace (a * e + - (Qabs (hd (tl s)) / (1 - a))) with (a * (e + - (Qabs (hd s)/(1-a)))+ (a * Qabs (hd s) + - Qabs (hd (tl s)))/(1+-a)) by (unfold canonical_names.equiv, stdlib_rationals.Q_eq; simpl; field). rewrite <- Qplus_0_r. apply Qplus_le_compat. apply Qmult_le_0_compat. exact Ha0. exact Hs. apply Qmult_le_0_compat. exact H. apply Qlt_le_weak, Qinv_lt_0_compat, Ha1. intro abs. rewrite abs in Ha1. exact (Qlt_irrefl 0 Ha1). Qed. Lemma err_prop_monotone : forall (e0 e1:Q) (s: Stream Q), (e0 <= e1) -> err_prop e0 s = true -> err_prop e1 s = true. Proof. intros e0 e1 s He H. rewrite -> err_prop_prop in *. apply Qle_trans with e0; assumption. Qed. Lemma err_prop_monotone' : forall (e:Q) (s: Stream Q), GeometricSeries s -> err_prop e s = true -> err_prop e (tl s) = true. Proof. intros e s Hs H. rewrite -> err_prop_prop in *. eapply Qle_trans;[|apply H]. unfold err_bound. apply Qmult_le_r. - apply Qinv_lt_0_compat. unfold Qminus. rewrite <- Qlt_minus_iff. exact Ha1. - destruct Hs as [H0 _]. eapply Qle_trans;[apply H0|]. rewrite <- (Qmult_1_l (Qabs(hd s))) at 2. apply Qmult_le_compat_r. apply Qlt_le_weak, Ha1. apply Qabs_nonneg. Qed. (** If a geometric sum s is bounded by e, summing s to any index p is within bound e. *) Lemma err_prop_correct : forall (p:nat) (e:Q) (s : Stream Q) (e':Stream Q -> bool), GeometricSeries s -> err_prop e s = true -> Qabs (InfiniteSum_fat e' s 0%Q p) <= e. Proof. induction p. - intros. simpl. apply err_prop_nonneg in H0. destruct (e' s); exact H0. - intros. simpl. destruct (e' s). apply err_prop_nonneg in H0. exact H0. rewrite InfiniteSum_fat_plus. rewrite Qplus'_correct, Qplus_0_l. rewrite <- Qplus'_correct. apply err_prop_key. exact H0. apply (IHp (a*e)). apply H. apply err_prop_key'; assumption. Qed. (** This lemma tells us how to compute an upper bound on the number of terms we will need to compute. 
It is okay for this error to be loose because the partial sums will bail out early when it sees that its estimate of the error is small enough. *) Lemma GeometricCovergenceLemma : forall (n:positive) (e:Qpos), /(proj1_sig e*(1 - a)) <= inject_Z (Zpos n) -> a^Zpos n <= proj1_sig e. Proof. destruct (Qle_lt_or_eq _ _ Ha0) as [Ha0'|Ha0']. - intros n e H. assert (0 < a^Zpos n). { assert (X:0 < proj1_sig (Qpos_power (exist _ _ Ha0') (Zpos n))%Qpos) by auto with *. exact X. } apply Qmult_lt_0_le_reg_r with ((/proj1_sig e)*/(a^Zpos n)). apply (Qle_lt_trans _ (0 * (/a^Zpos n))). rewrite Qmult_0_l. apply Qle_refl. apply Qmult_lt_r. apply Qinv_lt_0_compat; exact H0. apply Qinv_lt_0_compat, Qpos_ispos. assert (0 < proj1_sig e) by (apply Qpos_ispos). rewrite (Qmult_assoc (proj1_sig e)), Qmult_inv_r, Qmult_1_l. 2: apply Qpos_nonzero. setoid_replace (a ^ Zpos n * (/ proj1_sig e * / a ^ Zpos n)) with (/proj1_sig e) by (unfold canonical_names.equiv, stdlib_rationals.Q_eq; simpl; field). 2: split. 2: apply Qpos_nonzero. 2: auto with *. rewrite -> Qlt_minus_iff in Ha1. change (0<1-a) in Ha1. rewrite -> Qle_minus_iff in H. apply Qle_trans with (1 + inject_Z (Zpos n) * (/a -1)). + rewrite -> Qle_minus_iff. setoid_replace (1 + inject_Z (Zpos n) * (/ a - 1) + - / proj1_sig e) with (1+(1 - a)*((inject_Z (Zpos n)*(1-a)*/a + (inject_Z (Zpos n) +-(/(proj1_sig e*(1 - a))))))) by (unfold canonical_names.equiv, stdlib_rationals.Q_eq; simpl; field). 2: split; auto with *. apply (Qle_trans _ (1+0)). discriminate. apply Qplus_le_r. repeat apply Qmult_le_0_compat; simpl; auto with *. assert (0 <= 1-a) by auto with *. apply (Qle_trans _ (0+0)). discriminate. apply Qplus_le_compat. 2: exact H. apply Qmult_le_0_compat. apply Qmult_le_0_compat. discriminate. exact H2. apply Qlt_le_weak, Qinv_lt_0_compat, Ha0'. + clear -n Ha0'. induction n using Pind. simpl. setoid_replace (1 + inject_Z 1 * (/ a - 1)) with (/a) by (unfold canonical_names.equiv, stdlib_rationals.Q_eq; simpl; ring). apply Qle_refl. 
rewrite Zpos_succ_morphism. unfold Z.succ. rewrite -> Qpower_plus;[|auto with *]. rewrite -> Qinv_mult_distr. rewrite -> Q.Zplus_Qplus. apply Qle_trans with ((1 + inject_Z (Zpos n) * (/ a - 1))*/a). rewrite -> Qle_minus_iff. setoid_replace ( (1 + inject_Z (Z.pos n) * (/ a - 1)) * / a + - (1 + (inject_Z (Z.pos n) + inject_Z 1) * (/ a - 1))) with (inject_Z (Zpos n)*(/a -1)^2) by (unfold canonical_names.equiv, stdlib_rationals.Q_eq; simpl; ring). apply Qmult_le_0_compat. discriminate. unfold Qle. rewrite Z.mul_0_l. simpl. rewrite Z.mul_1_r. apply Z.square_nonneg. apply Qmult_le_compat_r. assumption. apply Qinv_le_0_compat; auto with *. - intros n e _. rewrite <- Ha0'. rewrite -> Qpower_0; auto with *. Qed. Definition InfiniteGeometricSum_maxIter series (err:Qpos) : positive := let x := (1-a) in let (n,d) := (Qabs (hd series))/(proj1_sig err*x*x) in match Z.succ (Z.div n (Zpos d)) with | Zpos p => p | _ => 1%positive end. Lemma InfiniteGeometricSum_maxIter_monotone : forall series (err:Qpos), GeometricSeries series -> (InfiniteGeometricSum_maxIter (tl series) err <= InfiniteGeometricSum_maxIter series err)%positive. Proof. intros series err Gs. unfold InfiniteGeometricSum_maxIter. cut ((Qabs (hd (tl series)) / (proj1_sig err * (1 - a) * (1 - a))) <= (Qabs (hd series) / (proj1_sig err * (1 - a) * (1 - a)))). - generalize (Qabs (hd (tl series)) / (proj1_sig err * (1 - a) * (1 - a))) (Qabs (hd series) / (proj1_sig err * (1 - a) * (1 - a))). intros [na da] [nb db] H. cut (Z.succ (na/Zpos da) <= Z.succ (nb/Zpos db))%Z. generalize (Z.succ (na / Zpos da)) (Z.succ (nb/Zpos db)). intros [|x|x] [|y|y] Hxy; try solve [apply Hxy | apply Qle_refl | elim Hxy; constructor | unfold Qle; simpl; repeat rewrite Pmult_1_r]. apply Pos.le_1_l. discriminate. apply Pos.le_1_l. discriminate. apply Zsucc_le_compat. unfold Qle in H. simpl in H. rewrite <- (Zdiv_mult_cancel_r na (Zpos da) (Zpos db)). 2: discriminate. rewrite <- (Zdiv_mult_cancel_r nb (Zpos db) (Zpos da)). 2: discriminate. 
rewrite (Zmult_comm (Zpos db) (Zpos da)). apply Z_div_le. reflexivity. exact H. - assert (X:0 < 1 - a). change (0 < 1 + - a). rewrite <- Qlt_minus_iff. assumption. apply Qle_shift_div_l. apply (Qpos_ispos (err * (exist _ _ X) * (exist _ _ X))). unfold Qdiv. rewrite <- Qmult_assoc. rewrite <- (Qmult_comm (proj1_sig err * (1 - a) * (1 - a))). rewrite Qmult_inv_r, Qmult_1_r. destruct Gs as [H _]. eapply Qle_trans. apply H. rewrite <- (Qmult_1_l (Qabs (hd series))) at 2. apply Qmult_le_compat_r. apply Qlt_le_weak, Ha1. apply Qabs_nonneg. apply (Qpos_nonzero (err * (exist _ _ X) * (exist _ _ X))). Qed. Lemma InfiniteGeometricSum_maxIter_correct : forall series (err:Qpos), GeometricSeries series -> err_prop (proj1_sig err) (Str_nth_tl (nat_of_P (InfiniteGeometricSum_maxIter series err)) series) = true. Proof. intros series err H. rewrite -> err_prop_prop. unfold err_bound. assert (X:0 < 1 - a). change (0 < 1 + - a). rewrite <- Qlt_minus_iff. assumption. apply Qle_shift_div_r; try assumption. assert (Y:(Qabs (hd series) * a ^ Zpos (InfiniteGeometricSum_maxIter series err) <= proj1_sig err * (1 - a))). { destruct (Qlt_le_dec 0 (Qabs (hd series))). apply Qmult_lt_0_le_reg_r with (/Qabs (hd series)). apply Qinv_lt_0_compat; assumption. rewrite (Qmult_comm (Qabs (hd series))), <- Qmult_assoc. rewrite Qmult_inv_r, Qmult_1_r. 2: auto with *. cut (a ^ Zpos (InfiniteGeometricSum_maxIter series err) <= proj1_sig (err * exist _ _ X * Qpos_inv (exist _ _ q))%Qpos). autorewrite with QposElim; auto. apply GeometricCovergenceLemma. autorewrite with QposElim. unfold InfiniteGeometricSum_maxIter. simpl (/ (proj1_sig (err * exist (Qlt 0) (1 - a) X * Qpos_inv (exist (Qlt 0) (Qabs (hd series)) q))%Qpos * (1 - a))). setoid_replace (/ (proj1_sig err * (1 - a) * / Qabs (hd series) * (1 - a))) with (Qabs (hd series) / (proj1_sig err * (1 - a) * (1 - a))) by (unfold canonical_names.equiv, stdlib_rationals.Q_eq; simpl; field). 2: repeat split;auto with *;apply Qpos_nonzero. 
cut (0 < (Qabs (hd series) / (proj1_sig err * (1 - a) * (1 - a)))). generalize (Qabs (hd series) / (proj1_sig err * (1 - a) * (1 - a))). intros [n d] Hnd. apply Qle_trans with (inject_Z (Z.succ (n/Zpos d))). unfold Qle. simpl. unfold Z.succ. apply Zle_0_minus_le. replace ((n / Zpos d + 1) * Zpos d - n * 1)%Z with (Zpos d*(n/Zpos d) + n mod (Zpos d) - n mod (Zpos d) - n + Zpos d)%Z by ring. rewrite <- Z_div_mod_eq_full. replace (n - n mod (Zpos d) - n + Zpos d)%Z with (Zpos d - n mod (Zpos d))%Z by ring. apply Zle_minus_le_0. destruct (Z_mod_lt n (Zpos d)); auto with *. generalize (Z.succ (n/Zpos d)). intros [|z|z]. discriminate. apply Qle_refl. discriminate. cut (0 < proj1_sig ((exist _ _ q) * Qpos_inv(err * (exist _ _ X)*(exist _ _ X)))%Qpos). simpl; auto. apply Q.Qmult_lt_0_compat; auto with *. setoid_replace (Qabs (hd series)) with 0. rewrite Qmult_0_l. apply (Qpos_nonneg (err * (exist _ _ X))). apply Qle_antisym; try assumption. apply Qabs_nonneg. } apply Qle_trans with (Qabs (hd series)*a^Zpos (InfiniteGeometricSum_maxIter series err)); try assumption. clear Y. generalize (InfiniteGeometricSum_maxIter series err). intros p. revert series H. induction p using Pind; intros series H. simpl. destruct H. rewrite -> Qmult_comm. assumption. rewrite nat_of_P_succ_morphism. rewrite Zpos_succ_morphism. unfold Z.succ. rewrite -> Qpower_plus';[|discriminate]. rewrite Qmult_assoc. apply Qle_trans with (Qabs (hd (Str_nth_tl (nat_of_P p) series))*a). change (S (nat_of_P p)) with (1+(nat_of_P p))%nat. rewrite <- Str_nth_tl_plus. cut (GeometricSeries (Str_nth_tl (nat_of_P p) series)). generalize (Str_nth_tl (nat_of_P p) series). intros s [H0 _]. rewrite -> Qmult_comm. assumption. clear -H. induction (nat_of_P p). auto. change (S n) with (1+n)%nat. rewrite <- Str_nth_tl_plus. simpl. destruct IHn; assumption. apply Qmult_le_compat_r; try assumption. apply IHp; assumption. Qed. (** The implementation of [InfiniteGeometricSum].
*) Definition InfiniteGeometricSum_fat series (e:QposInf) : Q := match e with | QposInfinity => 0 | Qpos2QposInf err => InfiniteSum_fat (err_prop (proj1_sig err)) series 0%Q (Pos.to_nat (InfiniteGeometricSum_maxIter series err)) end. Definition InfiniteGeometricSum_raw series (e:QposInf) : Q := match e with | QposInfinity => 0 | Qpos2QposInf err => InfiniteSum_raw_N (Pos.to_nat (Pos.size (InfiniteGeometricSum_maxIter series err))) (err_prop (proj1_sig err)) (fun _ r => r) series 0%Q end. Lemma InfiniteGeometricSum_raw_correct : forall (series : Stream Q) (e : QposInf), GeometricSeries series -> InfiniteGeometricSum_raw series e = InfiniteGeometricSum_fat series e. Proof. assert (forall n:nat, lt 0 n -> Pos.of_nat (2 ^ n) = (2 ^ Pos.of_nat n)%positive) as inj_pow. { induction n. - intros. exfalso; inversion H. - intros _. destruct n. reflexivity. rewrite Nat2Pos.inj_succ. 2: discriminate. rewrite Pos.pow_succ_r. rewrite <- IHn. 2: apply le_n_S, le_0_n. clear IHn. generalize (S n). intro k. change (2 ^ S k)%nat with (2 * 2 ^ k)%nat. rewrite Nat2Pos.inj_mul. reflexivity. discriminate. apply Nat.pow_nonzero. discriminate. } intros. destruct e. 2: reflexivity. simpl. rewrite InfiniteSum_raw_N_correct. apply InfiniteSum_fat_add_stop. 2: apply InfiniteGeometricSum_maxIter_correct, H. specialize (inj_pow (Pos.to_nat (Pos.size (InfiniteGeometricSum_maxIter series q))) (Pos2Nat.is_pos _)). rewrite Pos2Nat.id in inj_pow. rewrite <- Nat2Pos.id. rewrite inj_pow. apply Pos2Nat.inj_le. apply Pos.lt_le_incl, Pos.size_gt. apply Nat.pow_nonzero. discriminate. Qed. (* Now we prove that bounds are correct when applied to tails of a geometric series at indexes p and p0. *) Lemma err_prop_tail_correct : forall (series : Stream Q) (e0 e1:Q) (p p0 : nat), GeometricSeries series -> e1 <= e0 -> err_prop e0 (Str_nth_tl p series) = true -> err_prop e1 (Str_nth_tl p0 series) = true -> Qball e0 (InfiniteSum_fat (err_prop e0) series 0%Q p) (InfiniteSum_fat (err_prop e1) series 0%Q p0). Proof. 
intros series e0 e1 p0 p1 H He H0 H1. (* err_prop e1 implies err_prop e0 so the e1 sum is longer. Replace by the first indexes where the filters are triggered, the one of e1 being higher. The subtraction is a tail sum of e0 after the convergence index, so it is below e0. *) pose proof (@InfiniteSum_fat_remove_filter p0 (err_prop e0) series 0%Q) as [i [H2 H3]]. exact H0. rewrite H2. pose proof (@InfiniteSum_fat_remove_filter p1 (err_prop e1) series 0%Q) as [j [H4 H5]]. exact H1. rewrite H4. clear H4 H2 H1 H0 p0 p1. destruct H3. destruct H5. destruct (Nat.lt_ge_cases j i). - exfalso. specialize (H0 j H4). pose proof (err_prop_monotone (Str_nth_tl j series) He H3) as mon. rewrite mon in H0. discriminate. - unfold Qball. rewrite <- AbsSmall_Qabs, Qabs_Qminus. apply Nat.le_exists_sub in H4. destruct H4 as [p [H4 _]]. subst j. clear H3 H2 He e1. rewrite InfiniteSum_fat_minus. apply err_prop_correct. 2: exact H1. apply ForAll_Str_nth_tl, H. Qed. Lemma InfiniteGeometricSum_raw_prf : forall series, GeometricSeries series -> is_RegularFunction Qball (InfiniteGeometricSum_raw series). Proof. intros series H e0 e1. rewrite InfiniteGeometricSum_raw_correct. rewrite InfiniteGeometricSum_raw_correct. 2: exact H. 2: exact H. pose proof (InfiniteGeometricSum_maxIter_correct e0 H) as H0. pose proof (InfiniteGeometricSum_maxIter_correct e1 H) as H1. destruct (Qle_total (proj1_sig e1) (proj1_sig e0)). - apply ball_weak. apply Qpos_nonneg. apply (err_prop_tail_correct _ _ H q H0 H1). - rewrite Qplus_comm. apply ball_weak. apply Qpos_nonneg. apply ball_sym. apply (err_prop_tail_correct _ _ H q H1 H0). Qed. Definition InfiniteGeometricSum series (Gs:GeometricSeries series) : CR := Build_RegularFunction (InfiniteGeometricSum_raw_prf Gs). (** The [InfiniteGeometricSum] is correct. *) Lemma InfiniteGeometricSum_step : forall series (Gs:GeometricSeries series), (InfiniteGeometricSum Gs == ('(hd series))+(InfiniteGeometricSum (ForAll_Str_nth_tl 1%nat Gs)))%CR. Proof. intros series Gs. 
rewrite -> CRplus_translate. apply regFunEq_equiv, regFunEq_e. intros e. change (approximate (InfiniteGeometricSum Gs) e) with (InfiniteGeometricSum_raw series e). rewrite InfiniteGeometricSum_raw_correct. 2: exact Gs. simpl (InfiniteGeometricSum (ForAll_Str_nth_tl 1 Gs)). change (approximate (translate (hd series) (InfiniteGeometricSum (ForAll_Str_nth_tl 1 Gs))) e) with (hd series + (InfiniteGeometricSum_raw (tl series) e)). rewrite InfiniteGeometricSum_raw_correct. 2: apply Gs. simpl. rewrite InfiniteSum_fat_extend. 2: apply InfiniteGeometricSum_maxIter_correct, Gs. simpl. case_eq (err_prop (proj1_sig e) series); intros He. - apply ball_sym. simpl. unfold Qball. rewrite <- AbsSmall_Qabs. unfold Qminus. rewrite Qplus_0_r. eapply Qle_trans. apply Qabs_triangle. apply Qplus_le_compat; simpl. rewrite -> err_prop_prop in He. unfold err_bound in He. assert (X:0 < 1 - a). change (0 < 1 + - a). rewrite <- Qlt_minus_iff. assumption. clear - He Ha0 X. setoid_replace (Qabs (hd series)) with ((Qabs (hd series)/(1-a))*(1-a)) by (unfold canonical_names.equiv, stdlib_rationals.Q_eq; simpl; field). 2: auto with *. apply (Qle_trans _ ((proj1_sig e) * (1-a))). apply Qmult_le_compat_r. exact He. apply Qlt_le_weak, X. rewrite <- (Qmult_1_r (proj1_sig e)) at 2. apply Qmult_le_l. apply Qpos_ispos. rewrite <- (Qplus_0_r 1) at 2. apply Qplus_le_r. apply (Qopp_le_compat 0), Ha0. apply err_prop_correct. destruct Gs; assumption. apply err_prop_monotone'; assumption. - assert (Qplus' 0 (hd series) == hd series). { rewrite Qplus'_correct. apply Qplus_0_l. } rewrite (InfiniteSum_fat_wd _ _ _ H). rewrite InfiniteSum_fat_plus. rewrite (@InfiniteSum_fat_add_stop (Pos.to_nat (InfiniteGeometricSum_maxIter series e)) (Pos.to_nat (InfiniteGeometricSum_maxIter (tl series) e))). apply Qball_Reflexive. apply (Qpos_nonneg (e+e)). apply Pos2Nat.inj_le. apply (@InfiniteGeometricSum_maxIter_monotone series e), Gs. apply InfiniteGeometricSum_maxIter_correct, Gs. Qed. 
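The quantity `err_bound s = |hd s| / (1 - a)`, which the next lemma shows bounds the whole sum, admits a quick numeric sanity check in Python (a sketch with our own names, using `Fraction` for exact rationals):

```python
from fractions import Fraction

# For a series with |s(n+1)| <= a * |s(n)| and 0 <= a < 1, every partial sum
# is bounded in absolute value by |s(0)| / (1 - a), the err_bound of the text.
a = Fraction(1, 3)
s = lambda n: Fraction(5) * a ** n          # a geometric series with ratio a
bound = abs(s(0)) / (1 - a)                 # 5 / (2/3) = 15/2
partials = [sum((s(k) for k in range(m)), Fraction(0)) for m in range(60)]
print(all(abs(p) <= bound for p in partials))  # True
```

For this example the partial sums approach 15/2 from below, so the bound is tight exactly when the ratio test inequality is an equality at every term.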
Lemma InfiniteGeometricSum_bound : forall series (Gs:GeometricSeries series), (-'(err_bound series) <= InfiniteGeometricSum Gs /\ InfiniteGeometricSum Gs <= '(err_bound series))%CR. Proof. intros series Gs. assert (Y:0 < 1 - a). { change (0 < 1 + - a). rewrite <- Qlt_minus_iff. assumption. } destruct (Qeq_dec (err_bound series) 0) as [Hq|Hq]. - setoid_replace (InfiniteGeometricSum Gs) with 0%CR. split; simpl; rewrite -> Hq; try apply CRle_refl. rewrite CRopp_0. apply CRle_refl. apply regFunEq_equiv, regFunEq_e. intros e. apply ball_sym. change (approximate (InfiniteGeometricSum Gs) e) with (InfiniteGeometricSum_raw series e). rewrite InfiniteGeometricSum_raw_correct. 2: exact Gs. simpl. unfold Qball. unfold QAbsSmall. setoid_replace (0 - InfiniteSum_fat (err_prop (proj1_sig e)) series 0 (Pos.to_nat (InfiniteGeometricSum_maxIter series e)))%Q with 0. split. apply (Qopp_le_compat 0), (Qpos_nonneg (e+e)). apply (Qpos_nonneg (e+e)). unfold canonical_names.equiv, stdlib_rationals.Q_eq. unfold Qminus. rewrite Qplus_0_l. assert (X:err_prop (proj1_sig e) series = true). rewrite -> err_prop_prop. rewrite -> Hq. apply Qpos_nonneg. destruct (InfiniteGeometricSum_maxIter series e) using Pind. simpl. destruct (err_prop (proj1_sig e) series). reflexivity. discriminate. rewrite Pos2Nat.inj_succ. simpl. destruct (err_prop (proj1_sig e) series). reflexivity. discriminate. - cut (-(' err_bound series) <= InfiniteGeometricSum Gs /\ InfiniteGeometricSum Gs <= 'err_bound series)%CR. + intros [H0 H1]. split; assumption. + setoid_replace (InfiniteGeometricSum Gs) with (InfiniteGeometricSum Gs - 0)%CR by (unfold canonical_names.equiv, msp_Equiv; ring). apply CRAbsSmall_ball. apply regFunBall_e. intros d. change (approximate (InfiniteGeometricSum Gs) d) with (InfiniteGeometricSum_raw series d). rewrite InfiniteGeometricSum_raw_correct. 2: exact Gs. simpl. set (p:=(InfiniteGeometricSum_maxIter series d)). unfold Qball. rewrite <- AbsSmall_Qabs. unfold Qminus. rewrite Qplus_0_r. 
apply (err_prop_correct _ (proj1_sig d+err_bound series+proj1_sig d)); try assumption. apply err_prop_monotone with (err_bound series). simpl. apply (Qle_trans _ (0 + err_bound series + 0)). rewrite Qplus_0_l, Qplus_0_r. apply Qle_refl. apply Qplus_le_compat. apply Qplus_le_compat. apply Qpos_nonneg. apply Qle_refl. apply Qpos_nonneg. rewrite -> err_prop_prop. apply Qle_refl. Qed. Lemma InfiniteGeometricSum_small_tail : forall series (e : Qpos), GeometricSeries series -> {n : nat & forall Gs : GeometricSeries (Str_nth_tl n series), (- ' proj1_sig e <= InfiniteGeometricSum Gs /\ InfiniteGeometricSum Gs <= 'proj1_sig e)%CR }. Proof. intros series e. exists (nat_of_P (InfiniteGeometricSum_maxIter series e)). intros Gs. pose proof (InfiniteGeometricSum_bound Gs) as [H0 H1]. split. refine (CRle_trans _ H0). apply CRopp_le_compat. rewrite -> CRle_Qle. rewrite <- err_prop_prop. apply InfiniteGeometricSum_maxIter_correct. assumption. apply (CRle_trans H1). rewrite -> CRle_Qle. rewrite <- err_prop_prop. apply InfiniteGeometricSum_maxIter_correct. assumption. Qed. End GeometricSeries. (** If one stream is [DecreasingNonNegative] and the other is a [GeometricSeries], then the result is a [GeometricSeries]. *) Lemma mult_Streams_Gs : forall a (x y : Stream Q), (DecreasingNonNegative x) -> (GeometricSeries a y) -> (GeometricSeries a (mult_Streams x y)). Proof. cofix mult_Streams_Gs. intros a x y Hx Hy. constructor. destruct Hy as [Hy _]. apply dnn_alt in Hx. destruct Hx as [[[Hx2 _] [[Hx0 Hx1] _]] _]. simpl. rewrite -> Qabs_Qmult. apply Qle_trans with (Qabs (CoqStreams.hd x) * Qabs (CoqStreams.hd (CoqStreams.tl y))). apply Qmult_le_compat_r. do 2 (rewrite -> Qabs_pos; try assumption). apply Qabs_nonneg. rewrite -> Qabs_Qmult. rewrite Qmult_comm. rewrite (Qmult_comm (Qabs (CoqStreams.hd x))), Qmult_assoc. apply Qmult_le_compat_r; try assumption. apply Qabs_nonneg. destruct Hy. apply mult_Streams_Gs. 2: exact Hy. apply Hx. Qed. (** [powers] is a [GeometricSeries]. 
*) Lemma powers_help_Gs (a : Q) : (0 <= a) -> forall c, (GeometricSeries a (powers_help a c)). Proof. intros Ha. cofix powers_help_Gs. intros c. constructor. simpl. rewrite -> Qmult_comm. rewrite -> Qabs_Qmult. rewrite -> (Qabs_pos a); try assumption. apply Qle_refl. apply powers_help_Gs. Qed. Lemma powers_Gs (a : Q) : (0 <= a) -> (GeometricSeries a (powers a)). Proof. intros Ha. apply (powers_help_Gs Ha). Qed. Definition InfiniteGeometricSum_shift_raw (s : Stream Q) (n : nat) {a : Q} (Gs : GeometricSeries a (Str_nth_tl n s)) (e : QposInf) : Q := take s n Qplus' 0 + InfiniteGeometricSum_raw a (Str_nth_tl n s) e. Lemma InfiniteGeometricSum_raw_shift_prf : forall (s : Stream Q) (n : nat) {a : Q} (Gs : GeometricSeries a (Str_nth_tl n s)), 0 <= a -> a < 1 -> is_RegularFunction Qball (InfiniteGeometricSum_shift_raw s n Gs). Proof. intros. intros e1 e2. apply AbsSmall_Qabs. unfold InfiniteGeometricSum_shift_raw. setoid_replace (take s n Qplus' 0 + InfiniteGeometricSum_raw a (Str_nth_tl n s) e1 - (take s n Qplus' 0 + InfiniteGeometricSum_raw a (Str_nth_tl n s) e2)) with (InfiniteGeometricSum_raw a (Str_nth_tl n s) e1 - InfiniteGeometricSum_raw a (Str_nth_tl n s) e2) by (unfold canonical_names.equiv, stdlib_rationals.Q_eq; ring). apply AbsSmall_Qabs. apply (InfiniteGeometricSum_raw_prf H H0 Gs). Qed. Definition InfiniteGeometricSum_shift (s : Stream Q) (n : nat) (a : Q) (Gs : GeometricSeries a (Str_nth_tl n s)) (apos : 0 <= a) (aone : a < 1) : CR := Build_RegularFunction (InfiniteGeometricSum_raw_shift_prf s n Gs apos aone). (* Proof of correctness : the limit of the geometric series does not depend on the geometric ratio. *) Lemma InfiniteGeometricSum_wd : forall (s : Stream Q) (a b : Q) (Gsa : GeometricSeries a s) (Gsb : GeometricSeries b s) (apos : 0 <= a) (aone : a < 1) (bpos : 0 <= b) (bone : b < 1), msp_eq (InfiniteGeometricSum apos aone Gsa) (InfiniteGeometricSum bpos bone Gsb). Proof. 
assert (forall (s : Stream Q) (a b : Q) (Gsa : GeometricSeries a s) (Gsb : GeometricSeries b s) (apos : 0 <= a) (aone : a < 1) (bpos : 0 <= b) (bone : b < 1), a <= b -> msp_eq (InfiniteGeometricSum apos aone Gsa) (InfiniteGeometricSum bpos bone Gsb)). { intros. (* The same series is summed up to 2 different indexes, the distance is the sum between the lower and upper index. The upper index is associated to b, which corresponds to a slower geometric series. *) intros e1 e2. change (approximate (InfiniteGeometricSum apos aone Gsa) e1) with (InfiniteGeometricSum_raw a s e1). rewrite (InfiniteGeometricSum_raw_correct apos aone _ Gsa). change (approximate (InfiniteGeometricSum bpos bone Gsb) e2) with (InfiniteGeometricSum_raw b s e2). rewrite (InfiniteGeometricSum_raw_correct bpos bone _ Gsb). simpl. pose proof (@InfiniteSum_fat_remove_filter (Pos.to_nat (InfiniteGeometricSum_maxIter a s e1)) (fun s0 : Stream Q => match Qabs (hd s0) / (1 - a) ?= proj1_sig e1 with | Gt => false | _ => true end) s 0%Q) as [i [H2 H3]]. apply (InfiniteGeometricSum_maxIter_correct apos aone _ Gsa). rewrite H2. pose proof (@InfiniteSum_fat_remove_filter (Pos.to_nat (InfiniteGeometricSum_maxIter b s e2)) (fun s0 : Stream Q => match Qabs (hd s0) / (1 - b) ?= proj1_sig e2 with | Gt => false | _ => true end) s 0%Q) as [j [H4 H5]]. apply (InfiniteGeometricSum_maxIter_correct bpos bone _ Gsb). rewrite H4. destruct (Nat.lt_ge_cases i j) as [H0|H0]. - rewrite Qplus_0_r. apply Nat.le_exists_sub in H0. destruct H0 as [k [H0 _]]. subst j. unfold Qball. clear H5. rewrite <- AbsSmall_Qabs, Qabs_Qminus. replace (k + S i)%nat with (S k + i)%nat by (rewrite Nat.add_succ_r; reflexivity). rewrite InfiniteSum_fat_minus. apply (Qle_trans _ (proj1_sig e1 + 0)). rewrite Qplus_0_r. apply (err_prop_correct apos aone). apply ForAll_Str_nth_tl, Gsa. apply H3. apply Qplus_le_r, Qpos_nonneg. - rewrite Qplus_0_r. apply Nat.le_exists_sub in H0. destruct H0 as [k [H0 _]]. subst i. unfold Qball. clear H3. 
rewrite <- AbsSmall_Qabs, Qabs_Qminus. rewrite Qabs_Qminus. rewrite InfiniteSum_fat_minus. apply (Qle_trans _ (0 + proj1_sig e2)). rewrite Qplus_0_l. apply (err_prop_correct bpos bone). apply ForAll_Str_nth_tl, Gsb. apply H5. apply Qplus_le_l, Qpos_nonneg. } intros. destruct (Qle_total a b). apply H, q. symmetry. apply H, q. Qed.
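The inequality formalized in `InfiniteGeometricSum_bound` above can be illustrated numerically (a hypothetical Python sketch, independent of the Coq development): when successive terms shrink by a ratio `a < 1`, every partial sum stays within `|head| / (1 - a)`, the analogue of `err_bound`:

```python
def partial_sums(head, ratio, n):
    # head, head*ratio, head*ratio^2, ... summed term by term
    s, term, sums = 0.0, head, []
    for _ in range(n):
        s += term
        sums.append(s)
        term *= ratio
    return sums

head, ratio = 1.0, 0.5
err_bound = abs(head) / (1.0 - ratio)  # numeric analogue of err_bound series
sums = partial_sums(head, ratio, 50)
within_bound = all(-err_bound <= s <= err_bound for s in sums)
```

Here the limit is 2 and the bound is exactly 2, so the partial sums approach the bound without crossing it.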
In memory of the Royal Navy Sailors and Army Commandos killed in the raid on St Nazaire on 28 March 1942
Campus Cinema Alternative Film Club 20071017 07:40:46 nbsp Howdy James. I was wondering if I could contact you outside the wiki somehow. I'm trying to put together a history of Campus Cinema as far back as I can take it. If you could Mailto(jw DOT dw AT timewarp DOT org, email me), I would very much appreciate it (my phone number and other contact info is also on my user entry). Thanks! Users/JabberWokky
-- ----------------------------------------------------------------- [ Lib.idr ] -- Module : Lib.idr -- Copyright : (c) Jan de Muijnck-Hughes -- License : see LICENSE -- --------------------------------------------------------------------- [ EOH ] ||| A library of Patterns module Sif.Library import Effect.Default import Sif.Types import Sif.Pattern import public Data.DList import public Data.AVL.Dict -- -------------------------------------------------------------- [ Directives ] %access export -- --------------------------------------------------------- [ Data Structures ] public export record LibEntry (impl : SifTy -> SifDomain -> Type) where constructor MkEntry idx : Nat entry : PATTERN impl d public export record SifLib where constructor MkSLib counter : Nat patts : DList (SifTy -> SifDomain -> Type) LibEntry is emptyLib : SifLib emptyLib = MkSLib Z Nil addToLibrary : PATTERN impl d -> SifLib -> SifLib addToLibrary p (MkSLib c ps) = MkSLib (S c) (MkEntry c p::ps) addToLibraryM : List (PATTERN impl d) -> SifLib -> SifLib addToLibraryM xs lib = foldl (flip $ addToLibrary) lib xs defaultLib : SifLib defaultLib = emptyLib Default SifLib where default = defaultLib getLibraryIndex : SifLib -> Dict Nat String getLibraryIndex lib = Dict.fromList idx where f : LibEntry impl -> (Nat, String) f (MkEntry n p) = (n, unwords ["Pattern:", SifExpr.getTitle p]) idx : List (Nat, String) idx = mapDList f (patts lib) getPatternByIndex : Nat -> SifLib -> Maybe (d ** (impl ** PATTERN impl d)) getPatternByIndex n lib = case index (minus (length (patts lib)) (n + 1)) (patts lib) of Nothing => Nothing Just (_ ** p) => Just (_ ** (_ ** entry p)) indexToString : Dict Nat String -> String indexToString idx = unlines res where itemToString : (Nat, String) -> List String -> List String itemToString (i,t) rs = (unwords [show i, "<--", t]) :: rs res : List String res = foldr (itemToString) Nil $ Dict.toList idx -- --------------------------------------------------------------------- [ EOF ]
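The `getPatternByIndex` function above recovers an entry by its original counter by indexing from the end of the newest-first list (`minus (length (patts lib)) (n + 1)`). The same scheme in a small Python sketch (hypothetical, assuming entries are prepended as in `addToLibrary`):

```python
def add(lib, item):
    counter, items = lib
    # Prepend the new entry, tagged with the current counter (MkEntry c p :: ps)
    return (counter + 1, [(counter, item)] + items)

def get_by_index(lib, n):
    # Entries are newest-first, so counter n lives at position len - (n + 1)
    _, items = lib
    pos = len(items) - (n + 1)
    return items[pos][1] if 0 <= pos < len(items) else None

lib = (0, [])
for name in ("alpha", "beta", "gamma"):
    lib = add(lib, name)
results = [get_by_index(lib, i) for i in range(3)]
```

Each counter resolves to the entry it tagged at insertion time, even though the list itself is stored in reverse order.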
Require Import List Arith Omega. Section mirror. Variable A : Type. Inductive remove_last (a:A) : list A -> list A -> Prop := | remove_last_hd : remove_last a (a :: nil) nil | remove_last_tl : forall (b:A) (l m:list A), remove_last a l m -> remove_last a (b :: l) (b :: m). Inductive palindrome : list A -> Prop := | empty_pal : palindrome nil | single_pal : forall a:A, palindrome (a :: nil) | cons_pal : forall (a:A) (l m:list A), palindrome l -> remove_last a m l -> palindrome (a :: m). Hint Constructors remove_last palindrome. Lemma ababa : forall a b:A, palindrome (a :: b :: a :: b :: a :: nil). Proof. eauto 7. Qed. (* more about palindromes *) Lemma remove_last_inv : forall (a:A) (l m:list A), remove_last a m l -> m = l ++ a :: nil. Proof. intros a l m H; elim H; simpl; auto with datatypes. intros b l0 m0 H0 e; rewrite e; trivial. Qed. Lemma rev_app : forall l m:list A, rev (l ++ m) = rev m ++ rev l. Proof. intros l m; elim l; simpl; auto with datatypes. intros a l0 H0; rewrite ass_app; rewrite H0; auto. Qed. Lemma palindrome_rev : forall l:list A, palindrome l -> rev l = l. Proof. intros l H; elim H; simpl; auto with datatypes. intros a l0 m H0 H1 H2; generalize H1; inversion_clear H2. - simpl; auto. - rewrite (remove_last_inv _ _ _ H3); simpl; repeat (rewrite rev_app; simpl). intro eg; rewrite eg; simpl; auto. Qed. (* A new induction principle for lists *) (* preliminaries *) Lemma length_app : forall l l':list A, length (l ++ l') = length l + length l'. Proof. intro l; elim l; simpl; auto. Qed. Lemma fib_ind : forall P:nat -> Prop, P 0 -> P 1 -> (forall n:nat, P n -> P (S n) -> P (S (S n))) -> forall n:nat, P n. Proof. intros P H0 H1 HSSn n. assert (H2 : P n /\ P (S n)). - induction n ;[tauto | ]. destruct IHn;split;auto. - destruct H2; auto. Qed. Section Proof_of_list_new_ind. Variables (P : list A -> Prop). Hypotheses (H0 : P nil) (H1 : forall a: A, P (a::nil)) (H2 : forall (a b:A) (l:list A), P l -> P (a :: l ++ b :: nil)). 
Lemma list_cut : forall (l:list A) (x:A), exists b : A, exists l' : list A, x :: l = l' ++ b :: nil. Proof. intro l; elim l; simpl. intro x; exists x; exists (nil (A:=A)); auto. intros a1 l3 H x. case (H a1). intros x0 H7. case H7; intros b Hb. rewrite Hb. exists x0. exists (x :: b); auto. Qed. Lemma list_new_ind_length : forall (n:nat) (l:list A), length l = n -> P l. Proof. intro n; pattern n; apply fib_ind. - intro l; case l; [simpl; auto with datatypes | discriminate]. - intro l; case l; simpl; [ discriminate | ]. + intros a l0; case l0; simpl; [auto | discriminate]. - intros n0 H3 H4 l; case l; simpl;[discriminate |]. + intros a l0 H5; generalize H5; case l0. * simpl; discriminate 1. * intros a0 l1 H6; destruct (list_cut l1 a0) as [x [l' Hx]]; rewrite Hx; apply H2. apply H3. rewrite Hx in H6. rewrite length_app in H6. simpl in H6; omega. Qed. Lemma list_new_ind : forall l:list A, P l. Proof. intro l; now apply list_new_ind_length with (length l). Qed. End Proof_of_list_new_ind. Lemma app_left_reg : forall l l1 l2:list A, l ++ l1 = l ++ l2 -> l1 = l2. Proof. intro l; elim l; simpl; auto. intros a l0 H0 l1 l2 H; injection H; auto. Qed. Lemma app_right_reg : forall l l1 l2:list A, l1 ++ l = l2 ++ l -> l1 = l2. Proof. intros l l1 l2 e. assert (H: rev (l1 ++ l) = rev (l2 ++ l)). - now rewrite e. - repeat rewrite rev_app in H. generalize (app_left_reg _ _ _ H). intro H1; rewrite <- (rev_involutive l1) ; rewrite <- (rev_involutive l2); rewrite H1; auto. Qed. Theorem rev_pal : forall l:list A, rev l = l -> palindrome l. Proof. intro l; elim l using list_new_ind; auto. - intros a b l0 H H0. apply cons_pal with l0. + apply H; simpl in H0; rewrite rev_app in H0. simpl in H0; injection H0. intros H1 e; generalize H1; rewrite e. intro H2; generalize (app_right_reg _ _ _ H2); auto. + simpl in H0; rewrite rev_app in H0; simpl in H0. injection H0; intros H1 H2; rewrite <- H2. generalize l0; intro l1; induction l1; simpl; auto. Qed. End mirror.
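The theorem `rev_pal` above states that a list equal to its own reverse is a palindrome; an executable analogue of the statement (a quick sketch, not derived from the Coq proof):

```python
def is_palindrome(xs):
    # Mirrors rev l = l: a list is a palindrome iff it equals its reverse
    return list(xs) == list(reversed(xs))

checks = [
    is_palindrome([]),                         # empty_pal
    is_palindrome(['a']),                      # single_pal
    is_palindrome(['a', 'b', 'a', 'b', 'a']),  # the ababa lemma
    not is_palindrome(['a', 'b']),
]
```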
State Before: 𝓕 : Type ?u.129849 𝕜 : Type ?u.129852 α : Type ?u.129855 ι : Type ?u.129858 κ : Type ?u.129861 E : Type u_1 F : Type ?u.129867 G : Type ?u.129870 inst✝² : SeminormedGroup E inst✝¹ : SeminormedGroup F inst✝ : SeminormedGroup G s : Set E a a₁ a₂ b b₁ b₂ : E r r₁ r₂ : ℝ ⊢ HasBasis (𝓝 1) (fun ε => 0 < ε) fun ε => {y | ‖y‖ < ε} State After: case h.e'_5.h.h.e'_2.h.h.e'_3.h.e'_3 𝓕 : Type ?u.129849 𝕜 : Type ?u.129852 α : Type ?u.129855 ι : Type ?u.129858 κ : Type ?u.129861 E : Type u_1 F : Type ?u.129867 G : Type ?u.129870 inst✝² : SeminormedGroup E inst✝¹ : SeminormedGroup F inst✝ : SeminormedGroup G s : Set E a a₁ a₂ b b₁ b₂ : E r r₁ r₂ x✝¹ : ℝ x✝ : E ⊢ x✝ = x✝ / 1 Tactic: convert NormedCommGroup.nhds_basis_norm_lt (1 : E) State Before: case h.e'_5.h.h.e'_2.h.h.e'_3.h.e'_3 𝓕 : Type ?u.129849 𝕜 : Type ?u.129852 α : Type ?u.129855 ι : Type ?u.129858 κ : Type ?u.129861 E : Type u_1 F : Type ?u.129867 G : Type ?u.129870 inst✝² : SeminormedGroup E inst✝¹ : SeminormedGroup F inst✝ : SeminormedGroup G s : Set E a a₁ a₂ b b₁ b₂ : E r r₁ r₂ x✝¹ : ℝ x✝ : E ⊢ x✝ = x✝ / 1 State After: no goals Tactic: simp
Formal statement is: lemma LIMSEQ_le_const2: "X \<longlonglongrightarrow> x \<Longrightarrow> \<exists>N. \<forall>n\<ge>N. X n \<le> a \<Longrightarrow> x \<le> a" for a x :: "'a::linorder_topology" Informal statement is: If $X_n$ converges to $x$ and there exists $N$ such that for all $n \geq N$, $X_n \leq a$, then $x \leq a$.
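A concrete numeric instance of the lemma (a hypothetical Python illustration, not tied to the Isabelle proof): the sequence X_n = 1 + 1/n converges to x = 1 and is bounded by a = 2 from some N onward, so the limit is bounded by a as well:

```python
# X_n = 1 + 1/n converges to x = 1; from N = 1 onward, X_n <= a = 2
xs = [1 + 1 / n for n in range(1, 1001)]
a, x = 2.0, 1.0
eventually_le = all(v <= a for v in xs)  # the hypothesis: X_n <= a for n >= N
limit_le = x <= a                        # the conclusion: x <= a
```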
import lib.lib provability universes u v open_locale logic_symbol namespace logic @[reducible] def Theory (F : Type*) [has_logic_symbol F] := set F variables {F : Type*} [has_logic_symbol F] namespace Theory variables [axiomatic_classical_logic F] (T : Theory F) def mk (S : set F) : Theory F := S def consistent : Prop := ¬∃p : F, (T ⊢ p) ∧ (T ⊢ ∼p) class Consistent (T : Theory F) := (consis : consistent T) lemma consistent_def : consistent T ↔ ¬∃p : F, (T ⊢ p) ∧ (T ⊢ ∼p) := by refl open axiomatic_classical_logic axiomatic_classical_logic' variables {T} lemma consistent_iff_bot : consistent T ↔ ¬T ⊢ ⊥ := ⟨by { simp[consistent_def], intros h A, have : ¬T ⊢ ∼⊤, from h ⊤ (by simp), have : T ⊢ ∼⊤, from of_equiv A (by simp), contradiction }, by { intros h, simp[consistent_def], intros p hp hnp, have : T ⊢ ⊥, from explosion hp hnp, exact h this }⟩ lemma not_consistent_iff_bot : ¬consistent T ↔ T ⊢ ⊥ := by simp[consistent_iff_bot] lemma not_consistent_iff : ¬consistent T ↔ ∃p : F, (T ⊢ p) ∧ (T ⊢ ∼p) := by simp[consistent_def] instance : has_le (Theory F) := ⟨λ T U, ∀ ⦃p : F⦄, T ⊢ p → U ⊢ p⟩ @[simp] lemma le_refl (T : Theory F) : T ≤ T := λ p h, h @[trans] lemma le_trans {T₁ T₂ T₃ : Theory F} : T₁ ≤ T₂ → T₂ ≤ T₃ → T₁ ≤ T₃ := λ le₁₂ le₂₃ p b, le₂₃ (le₁₂ b) class extend (T₀ T : set F) := (le : T₀ ≤ T) namespace extend instance extend_refl (T : set F) : extend T T := ⟨λ p h, h⟩ def of_ss {T U : Theory F} (ss : T ⊆ U) : extend T U := ⟨by intros p h; exact weakening ss h⟩ @[trans] def extend.trans (T₁ T₂ T₃ : set F) [extend T₁ T₂] [extend T₂ T₃] : extend T₁ T₃ := ⟨λ p b, extend.le (extend.le b : T₂ ⊢ p)⟩ lemma by_axiom (T₁ T₂ : set F) [extend T₁ T₂] {p : F} (hp : p ∈ T₁) : T₂ ⊢ p := extend.le (axiomatic_classical_logic'.by_axiom hp) end extend def th (T : Theory F) : Theory F := {p | T ⊢ p} end Theory variables (F) class semantics (𝓢 : Type*) := (models : 𝓢 → F → Prop) namespace semantics variables {F} {𝓢 : Type*} [semantics F 𝓢] (S : 𝓢) instance : has_double_turnstile 𝓢 F := 
⟨models⟩ instance : has_double_turnstile 𝓢 (Theory F) := ⟨λ S T, ∀ ⦃p⦄, p ∈ T → S ⊧ p⟩ variables {S} lemma Models_def {T : Theory F} : S ⊧ T ↔ ∀ p ∈ T, S ⊧ p := by refl variables (𝓢) def valid (p : F) : Prop := ∀ S : 𝓢, S ⊧ p def satisfiable (p : F) : Prop := ∃ S : 𝓢, S ⊧ p def Valid (T : Theory F) : Prop := ∀ S : 𝓢, S ⊧ T def Satisfiable (T : Theory F) : Prop := ∃ S : 𝓢, S ⊧ T def toutology (p : F) : Prop := ∀ ⦃S : 𝓢⦄, S ⊧ p def consequence (T : Theory F) (p : F) : Prop := ∀ ⦃S : 𝓢⦄, S ⊧ T → S ⊧ p variables {𝓢} {S} (T U : Theory F) @[simp] lemma models_empty : S ⊧ (∅ : Theory F) := λ _, by simp @[simp] lemma models_of_ss {U T : Theory F} (ss : U ⊆ T) : S ⊧ T → S ⊧ U := λ h p hp, h (ss hp) @[simp] lemma models_union {T U : Theory F} : S ⊧ T ∪ U ↔ S ⊧ T ∧ S ⊧ U := ⟨λ h, ⟨λ p hp, h (set.mem_union_left U hp), λ p hp, h (set.mem_union_right T hp)⟩, by { rintros ⟨hT, hU⟩ p (hp | hp), { exact hT hp}, { exact hU hp } }⟩ @[simp] lemma models_insert {T : Theory F} {p : F} : S ⊧ insert p T ↔ S ⊧ p ∧ S ⊧ T := by simp[Models_def] @[simp] lemma models_Union {ι} {T : ι → Theory F} : S ⊧ (⋃ n, T n) ↔ ∀ n, S ⊧ T n := by simp[Models_def]; refine ⟨λ h i p, h p i, λ h p i, h i p⟩ lemma Satisfiable_of_ss {T U : Theory F} (ss : T ⊆ U) : Satisfiable 𝓢 U → Satisfiable 𝓢 T := by rintros ⟨S, hS⟩; refine ⟨S, by { intros p hp,refine hS (ss hp) }⟩ variables (F 𝓢) class nontrivial := (verum : ∀ S : 𝓢, S ⊧ (⊤ : F)) (falsum : ∀ S : 𝓢, ¬S ⊧ (⊥ : F)) attribute [simp] nontrivial.verum nontrivial.falsum end semantics variables (F) [axiomatic_classical_logic F] class sound (𝓢 : Type*) [semantics F 𝓢] := (soundness : ∀ {T : Theory F} {p}, T ⊢ p → semantics.consequence 𝓢 T p) namespace sound variables {F} {𝓢 : Type*} [semantics F 𝓢] [sound F 𝓢] {S : 𝓢} theorem consistent_of_Satisfiable [semantics.nontrivial F 𝓢] {T : Theory F} : semantics.Satisfiable 𝓢 T → Theory.consistent T := begin rintros ⟨S, hS⟩, by_contradiction A, have : T ⊢ ⊥, from Theory.not_consistent_iff_bot.mp A, exact 
semantics.nontrivial.falsum S (soundness this hS) end variables (S) lemma tautology_of_tautology (p : F) (h : ⬝⊢ p) : S ⊧ p := by { have : semantics.consequence 𝓢 ∅ p, from soundness h, exact this (show S ⊧ ∅, by simp) } end sound class complete (𝓢 : Type*) [semantics F 𝓢] extends sound F 𝓢 := (completeness' : ∀ {T : Theory F} {p}, semantics.consequence 𝓢 T p → T ⊢ p) namespace complete variables {F} {𝓢 : Type*} [semantics F 𝓢] [complete F 𝓢] {S : 𝓢} theorem completeness {T : Theory F} {p} : T ⊢ p ↔ semantics.consequence 𝓢 T p := ⟨sound.soundness, completeness'⟩ theorem consistent_iff_Satisfiable [semantics.nontrivial F 𝓢] {T : Theory F} : Theory.consistent T ↔ semantics.Satisfiable 𝓢 T := ⟨by { contrapose, intros h, have : semantics.consequence 𝓢 T ⊥, { intros S hS, exfalso, exact h ⟨S, hS⟩ }, have : T ⊢ ⊥, from completeness.mpr this, exact Theory.not_consistent_iff_bot.mpr this }, sound.consistent_of_Satisfiable⟩ end complete end logic
/- Copyright (c) 2022 Yuma Mizuno. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Yuma Mizuno -/ import category_theory.discrete_category import category_theory.bicategory.functor import category_theory.bicategory.strict /-! # Locally discrete bicategories > THIS FILE IS SYNCHRONIZED WITH MATHLIB4. > Any changes to this file require a corresponding PR to mathlib4. A category `C` can be promoted to a strict bicategory `locally_discrete C`. The objects and the 1-morphisms in `locally_discrete C` are the same as the objects and the morphisms, respectively, in `C`, and the 2-morphisms in `locally_discrete C` are the equalities between 1-morphisms. In other words, the category consisting of the 1-morphisms between each pair of objects `X` and `Y` in `locally_discrete C` is defined as the discrete category associated with the type `X ⟶ Y`. -/ namespace category_theory open bicategory discrete open_locale bicategory universes w₂ v v₁ v₂ u u₁ u₂ variables {C : Type u} /-- A type synonym for promoting any type to a category, with the only morphisms being equalities. -/ def locally_discrete (C : Type u) := C namespace locally_discrete instance : Π [inhabited C], inhabited (locally_discrete C) := id instance [category_struct.{v} C] : category_struct (locally_discrete C) := { hom := λ (X Y : C), discrete (X ⟶ Y), id := λ X : C, ⟨𝟙 X⟩, comp := λ X Y Z f g, ⟨f.as ≫ g.as⟩ } variables {C} [category_struct.{v} C] @[priority 900] instance hom_small_category (X Y : locally_discrete C) : small_category (X ⟶ Y) := category_theory.discrete_category (X ⟶ Y) /-- Extract the equation from a 2-morphism in a locally discrete 2-category. 
-/ lemma eq_of_hom {X Y : locally_discrete C} {f g : X ⟶ Y} (η : f ⟶ g) : f = g := begin have : discrete.mk (f.as) = discrete.mk (g.as) := congr_arg discrete.mk (eq_of_hom η), simpa using this end end locally_discrete variables (C) [category.{v} C] /-- The locally discrete bicategory on a category is a bicategory in which the objects and the 1-morphisms are the same as those in the underlying category, and the 2-morphisms are the equalities between 1-morphisms. -/ instance locally_discrete_bicategory : bicategory (locally_discrete C) := { whisker_left := λ X Y Z f g h η, eq_to_hom (congr_arg2 (≫) rfl (locally_discrete.eq_of_hom η)), whisker_right := λ X Y Z f g η h, eq_to_hom (congr_arg2 (≫) (locally_discrete.eq_of_hom η) rfl), associator := λ W X Y Z f g h, eq_to_iso $ by { unfold_projs, simp only [category.assoc] }, left_unitor := λ X Y f, eq_to_iso $ by { unfold_projs, simp only [category.id_comp, mk_as] }, right_unitor := λ X Y f, eq_to_iso $ by { unfold_projs, simp only [category.comp_id, mk_as] } } /-- A locally discrete bicategory is strict. -/ instance locally_discrete_bicategory.strict : strict (locally_discrete C) := { id_comp' := by { intros, ext1, unfold_projs, apply category.id_comp }, comp_id' := by { intros, ext1, unfold_projs, apply category.comp_id }, assoc' := by { intros, ext1, unfold_projs, apply category.assoc } } variables {I : Type u₁} [category.{v₁} I] {B : Type u₂} [bicategory.{w₂ v₂} B] [strict B] /-- If `B` is a strict bicategory and `I` is a (1-)category, any functor (of 1-categories) `I ⥤ B` can be promoted to an oplax functor from `locally_discrete I` to `B`. -/ @[simps] def functor.to_oplax_functor (F : I ⥤ B) : oplax_functor (locally_discrete I) B := { obj := F.obj, map := λ X Y f, F.map f.as, map₂ := λ i j f g η, eq_to_hom (congr_arg _ (eq_of_hom η)), map_id := λ i, eq_to_hom (F.map_id i), map_comp := λ i j k f g, eq_to_hom (F.map_comp f.as g.as) } end category_theory
module TypedContainers.RBTree.Base import Data.List import TypedContainers.In import TypedContainers.LawfulOrd %default total public export data Color = Red | Black public export data GoodTree : {height : Nat} -> {color : Color} -> {kt : Type} -> {kord : LawfulOrd kt} -> {keys : List kt} -> {vt : kt -> Type} -> Type where Empty : (0 _ : keys = []) -> GoodTree {height = 0, color = Black, keys} RedNode : {0 kord : LawfulOrd kt} -> (k : kt) -> {0 kp : In (k ==) keys} -> vt k -> GoodTree {height, color = Black, kt, kord, keys = filter (k >) keys, vt} -> GoodTree {height, color = Black, kt, kord, keys = filter (k <) keys, vt} -> GoodTree {height, color = Red, kt, kord, keys, vt} BlackNode : {0 kord : LawfulOrd kt} -> (k : kt) -> {0 kp : In (k ==) keys} -> vt k -> GoodTree {height, color = colorLeft, kt, kord, keys = filter (k >) keys, vt} -> GoodTree {height, color = colorRight, kt, kord, keys = filter (k <) keys, vt} -> GoodTree {height = S height, color = Black, kt, kord, keys, vt} -- Like GoodTree but BadRedNode can have red children public export data BadTree : {height : Nat} -> {kt : Type} -> {kord : LawfulOrd kt} -> {keys : List kt} -> {vt : kt -> Type} -> Type where BadRedNode : {0 kord : LawfulOrd kt} -> (k : kt) -> {0 kp : In (k ==) keys} -> vt k -> GoodTree {height, kt, kord, keys = filter (k >) keys, vt} -> GoodTree {height, kt, kord, keys = filter (k <) keys, vt} -> BadTree {height, kt, kord, keys, vt}
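The `GoodTree` index above enforces the classic red–black invariants in the types: a red node has black children, and both children of any node have the same black height. The same invariants as an executable check on plain tuples (a sketch, independent of the Idris encoding):

```python
RED, BLACK = "red", "black"

def check(node):
    # Returns the black height if the invariants hold, else None.
    if node is None:
        return 0  # Empty counts as black with height 0
    color, left, right = node
    lh, rh = check(left), check(right)
    if lh is None or rh is None or lh != rh:
        return None
    if color == RED:
        # Red nodes must have black children, as in RedNode's type
        for child in (left, right):
            if child is not None and child[0] == RED:
                return None
        return lh
    return lh + 1

good = (BLACK, (RED, None, None), (RED, None, None))
bad = (RED, (RED, None, None), None)   # red node with a red child
results = (check(good), check(bad))
```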
module bohm-out where open import lib open import general-util open import cedille-types open import syntax-util {- Implementation of the Böhm-Out Algorithm -} private nfoldr : ℕ → ∀ {ℓ} {X : Set ℓ} → X → (ℕ → X → X) → X nfoldr zero z s = z nfoldr (suc n) z s = s n (nfoldr n z s) nfoldl : ℕ → ∀ {ℓ} {X : Set ℓ} → X → (ℕ → X → X) → X nfoldl zero z s = z nfoldl (suc n) z s = nfoldl n (s n z) s set-nth : ∀ {ℓ} {X : Set ℓ} → ℕ → X → 𝕃 X → 𝕃 X set-nth n x [] = [] set-nth zero x (x' :: xs) = x :: xs set-nth (suc n) x (x' :: xs) = x' :: set-nth n x xs -- Böhm Tree data BT : Set where Node : (n i : ℕ) → 𝕃 BT → BT -- n: number of lambdas currently bound -- i: head variable -- 𝕃 BT: list of arguments -- Path to difference data path : Set where hd : path -- Difference in heads as : path -- Difference in number of arguments ps : (n : ℕ) → path → path -- Difference in nth subtrees (recursive) -- η functions η-expand'' : ℕ → 𝕃 BT → 𝕃 BT η-expand'' g [] = [] η-expand'' g (Node n i b :: bs) = Node (suc n) (if i ≥ g then suc i else i) (η-expand'' g b) :: η-expand'' g bs η-expand' : ℕ → BT → BT η-expand' g (Node n i b) = Node (suc n) (if i ≥ g then suc i else i) (η-expand'' g b) η-expand : BT → BT η-expand t @ (Node n _ _) with η-expand' (suc n) t ...| Node n' i' b' = Node n' i' (b' ++ [ Node n' n' [] ]) bt-n : BT → ℕ bt-n (Node n i b) = n η-equate : BT → BT → BT × BT η-equate t₁ t₂ = nfoldr (bt-n t₂ ∸ bt-n t₁) t₁ (λ _ → η-expand) , nfoldr (bt-n t₁ ∸ bt-n t₂) t₂ (λ _ → η-expand) -- η-equates all nodes along path to difference η-equate-path : BT → BT → path → BT × BT η-equate-path (Node n₁ i₁ b₁) (Node n₂ i₂ b₂) (ps d p) = let b-b = h d b₁ b₂ in η-equate (Node n₁ i₁ (fst b-b)) (Node n₂ i₂ (snd b-b)) where h : ℕ → 𝕃 BT → 𝕃 BT → 𝕃 BT × 𝕃 BT h zero (b₁ :: bs₁) (b₂ :: bs₂) with η-equate-path b₁ b₂ p ...| b₁' , b₂' = b₁' :: bs₁ , b₂' :: bs₂ h (suc d) (b₁ :: bs₁) (b₂ :: bs₂) with h d bs₁ bs₂ ...| bs₁' , bs₂' = b₁ :: bs₁' , b₂ :: bs₂' h d b₁ b₂ = b₁ , b₂ η-equate-path t₁ t₂ p = η-equate t₁ 
t₂ -- Rotation functions rotate : (k : ℕ) → BT rotate k = Node (suc k) (suc k) (nfoldl k [] (λ k' → Node (suc k) (suc k') [] ::_)) rotate-BT' : ℕ → 𝕃 BT → 𝕃 BT rotate-BT' k [] = [] rotate-BT' k (Node n i b :: bs) with i =ℕ k ...| ff = Node n i (rotate-BT' k b) :: rotate-BT' k bs ...| tt = Node (suc n) (suc n) (η-expand'' (suc n) (rotate-BT' k b)) :: rotate-BT' k bs rotate-BT : ℕ → BT → BT rotate-BT k (Node n i b) with i =ℕ k ...| ff = Node n i (rotate-BT' k b) ...| tt = Node (suc n) (suc n) (η-expand'' (suc n) (rotate-BT' k b)) -- Returns the greatest number of arguments k ever has at each node it where it is the head greatest-apps' : ℕ → 𝕃 BT → ℕ greatest-apps' k [] = zero greatest-apps' k (Node n k' bs' :: bs) with k =ℕ k' ...| ff = max (greatest-apps' k bs') (greatest-apps' k bs) ...| tt = max (length bs') (max (greatest-apps' k bs') (greatest-apps' k bs)) greatest-apps : ℕ → BT → ℕ greatest-apps k (Node n i b) with k =ℕ i ...| ff = greatest-apps' k b ...| tt = max (length b) (greatest-apps' k b) greatest-η' : ℕ → ℕ → 𝕃 BT → 𝕃 BT greatest-η' k m [] = [] greatest-η' k m (Node n i bs :: bs') with k =ℕ i ...| ff = Node n i (greatest-η' k m bs) :: greatest-η' k m bs' ...| tt = nfoldr (m ∸ length bs) (Node n i (greatest-η' k m bs)) (λ _ → η-expand) :: greatest-η' k m bs' greatest-η : ℕ → ℕ → BT → BT greatest-η k m (Node n i b) with k =ℕ i ...| ff = Node n i (greatest-η' k m b) ...| tt = nfoldr (m ∸ length b) (Node n i (greatest-η' k m b)) (λ _ → η-expand) -- Returns tt if k ever is at the head of a node along the path to the difference occurs-in-path : ℕ → BT → path → 𝔹 occurs-in-path k (Node n i b) (ps d p) = k =ℕ i || maybe-else ff (λ t → occurs-in-path k t p) (nth d b) occurs-in-path k (Node n i b) p = k =ℕ i adjust-path : ℕ → BT → path → path adjust-path k (Node n i b) (ps d p) = maybe-else' (nth d b) (ps d p) λ n → ps d (adjust-path k n p) adjust-path k (Node n i b) as with k =ℕ i ...| tt = hd ...| ff = as adjust-path k (Node n i b) hd = hd -- Δ functions 
construct-BT : term → maybe BT construct-BT = h zero empty-trie Node where h : ℕ → trie ℕ → ((n i : ℕ) → 𝕃 BT → BT) → term → maybe BT h n vm f (Var _ x) = just (f n (trie-lookup-else zero vm x) []) h n vm f (App t NotErased t') = h n vm Node t' ≫=maybe λ t' → h n vm (λ n i b → f n i (b ++ [ t' ])) t h n vm f (Lam _ NotErased _ x NoClass t) = h (suc n) (trie-insert vm x (suc n)) f t h n vm f t = nothing {-# TERMINATING #-} construct-path' : BT → BT → maybe (path × BT × BT) construct-path : BT → BT → maybe (path × BT × BT) construct-path (Node _ zero _) _ = nothing construct-path _ (Node _ zero _) = nothing construct-path t₁ t₂ = uncurry construct-path' (η-equate t₁ t₂) construct-path' t₁ @ (Node n₁ i₁ b₁) t₂ @ (Node n₂ i₂ b₂) = if ~ i₁ =ℕ i₂ then just (hd , t₁ , t₂) else if length b₁ =ℕ length b₂ then maybe-map (λ {(p , b₁ , b₂) → p , Node n₁ i₁ b₁ , Node n₂ i₂ b₂}) (h zero b₁ b₂) else just (as , t₁ , t₂) where h : ℕ → 𝕃 BT → 𝕃 BT → maybe (path × 𝕃 BT × 𝕃 BT) h n (b₁ :: bs₁) (b₂ :: bs₂) = maybe-else (maybe-map (λ {(p , bs₁ , bs₂) → p , b₁ :: bs₁ , b₂ :: bs₂}) (h (suc n) bs₁ bs₂)) (λ {(p , b₁ , b₂) → just (ps n p , b₁ :: bs₁ , b₂ :: bs₂)}) (construct-path b₁ b₂) h _ _ _ = nothing {-# TERMINATING #-} construct-Δ : BT → BT → path → 𝕃 BT construct-Δ (Node n₁ i₁ b₁) (Node n₂ i₂ b₂) hd = nfoldl n₁ [] λ m → _::_ (if suc m =ℕ i₁ then Node (2 + length b₁) (1 + length b₁) [] else if suc m =ℕ i₂ then Node (2 + length b₂) (2 + length b₂) [] else Node 1 1 []) construct-Δ (Node n₁ i₁ b₁) (Node n₂ i₂ b₂) as = let l₁ = length b₁ l₂ = length b₂ d = l₁ > l₂ lM = if d then l₁ else l₂ lm = if d then l₂ else l₁ l = lM ∸ lm in nfoldl n₁ (nfoldr l [ Node (2 + l) ((if d then 1 else 2) + l) [] ] λ l' → _++ [ if suc l' =ℕ l then Node 2 (if d then 2 else 1) [] else Node 1 1 [] ]) (λ n' → _::_ (if suc n' =ℕ i₁ then Node (suc lM) (suc lM) [] else Node 1 1 [])) construct-Δ t₁ @ (Node n₁ i₁ b₁) t₂ @ (Node n₂ i₂ b₂) (ps d p) with nth d b₁ ≫=maybe λ b₁ → nth d b₂ ≫=maybe λ b₂ → just (b₁ , b₂) ...| 
nothing = [] -- Shouldn't happen ...| just (t₁' @ (Node n₁' i₁' b₁') , t₂' @ (Node n₂' i₂' b₂')) with occurs-in-path i₁ t₁' p || occurs-in-path i₂ t₂' p ...| ff = set-nth (pred i₁) (Node (length b₁) (suc d) []) (construct-Δ t₁' t₂' p) ...| tt with max (greatest-apps i₁ t₁) (greatest-apps i₂ t₂) ...| kₘ with η-equate-path (rotate-BT i₁ (greatest-η i₁ kₘ t₁)) (rotate-BT i₂ (greatest-η i₂ kₘ t₂)) (ps d p) ...| t₁'' , t₂'' = set-nth (pred i₁) (rotate kₘ) (construct-Δ t₁'' t₂'' (ps d $ adjust-path i₁ t₁' p)) reconstruct : BT → term reconstruct = h zero where mkvar : ℕ → var mkvar n = "x" ^ ℕ-to-string n h : ℕ → BT → term a : ℕ → term → 𝕃 BT → term a n t [] = t a n t (b :: bs) = a n (mapp t (h n b)) bs h m (Node n i b) = nfoldl (n ∸ m) (a n (mvar (mkvar i)) b) (λ nm → mlam (mkvar (suc (m + nm)))) -- Returns a term f such that f t₁ ≃ λ t. λ f. t and f t₂ ≃ λ t. λ f. f, assuming two things: -- 1. t₁ ≄ t₂ -- 2. The head of each node along the path to the difference between t₁ and t₂ is bound -- withing the terms (so λ x. λ y. y y (x y) and λ x. λ y. y y (x x) works, but not -- λ x. λ y. y y (f y), where f is already declared/defined) make-contradiction : (t₁ t₂ : term) → maybe term make-contradiction t₁ t₂ = construct-BT t₁ ≫=maybe λ t₁ → construct-BT t₂ ≫=maybe λ t₂ → construct-path t₁ t₂ ≫=maybe λ {(p , t₁ , t₂) → just (reconstruct (Node (suc zero) (suc zero) (map (η-expand' zero) (construct-Δ t₁ t₂ p))))} -- Returns tt if the two terms are provably not equal is-contradiction : term → term → 𝔹 is-contradiction t₁ t₂ = isJust (make-contradiction t₁ t₂)
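The separation this algorithm achieves can be shown in miniature (a hypothetical Python illustration with Church booleans, not the general Δ construction above): two distinct βη-normal terms admit a discriminating context that sends them to different results:

```python
# Two distinct beta-normal lambda terms, encoded as Python closures
t1 = lambda x: lambda y: x   # Church true:  λx. λy. x
t2 = lambda x: lambda y: y   # Church false: λx. λy. y

def delta(t):
    # A discriminating context: apply the term to two distinguishable values
    return t("top")("bottom")

outputs = (delta(t1), delta(t2))
```

Böhm-out generalizes this idea: for any two terms with distinct Böhm trees, it constructs such a separating sequence of arguments.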
```python import animation import numpy as np import sympy as sym import matplotlib.pyplot as plt ``` # CW08 Examples This notebook is intended as a gallery of useful examples and functionality, to reinforce what we have been doing in the class so far. Go through the examples and corresponding code, and see if you can understand what is happening. Discuss with your group. Feel free to create new code in new cells to play with the ideas further, which can help you understand. Save your updated notebooks and turn them in if you do so. ## Language-specific Cells You can change which kernel evaluates individual cells in Jupyter, which can be useful for quick demonstrations. For example, the cell below has been switched to `bash` instead of the default `python3` kernel being used for the rest of the notebook. ```python %%script bash # The line above acts exactly as the #!/bin/bash line of a script file # It tells Jupyter that you are running a script, and which interpreter to use for the script acc=0 acc2=1 for i in $(seq 20); do echo $acc tmp=$acc2 acc2=$(($acc2 + $acc)) acc=$tmp done ``` 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 ```python %%script octave # For example, this script is now in Octave, a free clone of MATLAB # Next week, we will be looking into using MATLAB itself # Note in the code below that the function names look very similar to numpy # The similarity of numpy to MATLAB is deliberate - you will be able to use most of # the syntax you are familiar with from numpy with only a few minor tweaks e = zeros(10,10); # 10x10 matrix of zeros v = linspace(0,1,10); # vector of length 10 storing domain of points between 0 and 1 e(2:end,1:end-1) = -eye(9); # Set lower off-diagonal to -1 using 9x9 identity matrix e(1:end-1,2:end) += eye(9) # Set upper off-diagonal to +1 using 9x9 identity matrix, do not suppress output ``` e = 0 1 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 -1 0 1 
0 0 0 0 0 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 -1 0 octave: X11 DISPLAY environment variable not set octave: disabling GUI features ## Matplotlib Animations Recent versions of matplotlib have now enabled easy creation of plot animations. Below I highlight two distinct animation modes, using the code from the supplementary module `animation.py`. ### HTML5 MP4 Video Rendering First we define a generator that yields new frames for the animation. We define a frame to be the updated range for the plot, seeding the generator with the constant domain. For simplicity, we will show a Gaussian wave packet. ```python def wave_packet(x): for i in range(100): yield 10*np.sin(2*np.pi*(x - 0.01*i))*np.exp(-x**2/2) ``` Now, using the code in the helper module, we plot the wave packet, choosing a domain of points $x\in(-4,4)$. ```python animation.plot_anim(wave_packet,xlim=(-4,4),title="Gaussian Wave Packet",xlabel="x",ylabel="g(x)") ``` The plot then renders as an MP4 video directly inside the notebook using an HTML5 wrapper. You can play, pause, and fullscreen the animation, as well as download it as a compressed mp4 file. ### Animated GIF File Output Alternatively, you can have matplotlib directly save an animated gif file. These are nice for web display, but are not optimized in filesize by default. As a test below, we define a sinc function, but animate the drawing of the sinc one point at a time for visual effect. The file will be output to `draw_sinc.gif` in the same directory as the notebook. ```python def draw_sinc(x): y = np.zeros_like(x) for ind in range(len(x)): y[ind] = 1 if x[ind]==0 else np.sin(x[ind])/x[ind] yield y ``` ```python animation.plot_anim(draw_sinc,xlim=(-15*np.pi,15*np.pi),n=200,delay=10,ylim=(-0.5,1),title="Sinc Function",xlabel="x",ylabel="sinc(x)",gif=True) ```
This means that it can perform symbolic manipulations of expressions, including automatic algebraic simplification. ```python sym.init_printing() # Tell sympy to use the prettiest printing available (e.g., LaTeX) x,y,z = sym.symbols('x y z') # Define the variables x and y to be mathematical variable symbols f,g = sym.symbols('f g', cls=sym.Function) # define the variables f and g to be mathematical function symbols ``` Expand $(x+y)^3$: ```python sym.expand((x+y)**3) ``` $$x^{3} + 3 x^{2} y + 3 x y^{2} + y^{3}$$ Factor $x^2 + 2xy + y^2$: ```python sym.factor(x**2 + 2*x*y + y**2) ``` $$\left(x + y\right)^{2}$$ Differentiate: $\frac{d}{dy}(x+y)^3$ ```python sym.diff((x+y)**3, y) ``` $$3 \left(x + y\right)^{2}$$ Differentiate: $\frac{d^3}{dy^2dx}(x+y)^3$ ```python sym.diff((x+y)**3, y, 2, x) ``` $$6$$ Integrate: $\int_{-\infty}^\infty e^{-x^2}dx$ ```python sym.integrate(sym.exp(-x**2),(x,-sym.oo,sym.oo)) # Note that sym.oo is infinity ``` $$\sqrt{\pi}$$ Solve algebraic equation: $x^3 = 1$ ```python sym.solve( sym.Eq(x**3,1) ) ``` $$\left [ 1, \quad - \frac{1}{2} - \frac{\sqrt{3} i}{2}, \quad - \frac{1}{2} + \frac{\sqrt{3} i}{2}\right ]$$ Solve (damped Harmonic Oscillator) differential equation: $f''(x) + z f'(x) + y^2 f(x) = 0$ ```python diffeq = sym.Eq(f(x).diff(x,x) + z * f(x).diff(x) + y**2 * f(x), 0) diffeq ``` $$y^{2} f{\left (x \right )} + z \frac{d}{d x} f{\left (x \right )} + \frac{d^{2}}{d x^{2}} f{\left (x \right )} = 0$$ ```python sym.dsolve(diffeq, f(x)) ``` $$f{\left (x \right )} = C_{1} e^{\frac{x}{2} \left(- z - \sqrt{- 4 y^{2} + z^{2}}\right)} + C_{2} e^{\frac{x}{2} \left(- z + \sqrt{- 4 y^{2} + z^{2}}\right)}$$ Define a Gaussian function, take its derivative and integral symbolically, then convert them to numerical versions that use numpy: ```python gaussexp = sym.exp(-x**2/2)/sym.sqrt(2*sym.pi) gaussdiffexp = gaussexp.diff(x) gaussintexp = sym.integrate(gaussexp, (x, -sym.oo, x)) ``` $g(x) = $ ```python gaussexp ``` $$\frac{\sqrt{2} e^{- 
\frac{x^{2}}{2}}}{2 \sqrt{\pi}}$$ $\frac{d}{dx}g(x) = $ ```python gaussdiffexp ``` $$- \frac{\sqrt{2} x e^{- \frac{x^{2}}{2}}}{2 \sqrt{\pi}}$$ $\int_{-\infty}^x g(x')dx' = $ ```python gaussintexp ``` $$\frac{1}{2} \operatorname{erf}{\left (\frac{\sqrt{2} x}{2} \right )} + \frac{1}{2}$$ ```python # If functions are known to numpy, just replace them with numpy versions gauss = sym.lambdify(x, gaussexp) dgaussdx = sym.lambdify(x, gaussdiffexp) # Erf (error function) is not known to numpy, so evaluate it pointwise explicitly intgauss = np.vectorize(lambda x0: gaussintexp.replace(x,x0)) ``` ```python xnp = np.linspace(-5,5,1000) plt.plot(xnp, gauss(xnp), label="Gaussian") plt.plot(xnp, dgaussdx(xnp), label="Gaussian Derivative") plt.plot(xnp, intgauss(xnp), label="Gaussian Integral") plt.title("Gaussian Functions") plt.xlabel("x") plt.ylabel("g(x)") plt.legend() ```
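As a follow-up to the Octave cell earlier in the notebook (whose comments point out that the numpy syntax is deliberately similar), the same off-diagonal matrix construction can be sketched in NumPy. This cell is a sketch added for comparison, not part of the original notebook; note that NumPy slicing is 0-based where Octave's is 1-based:

```python
import numpy as np

# NumPy analogue of the Octave cell above:
# a 10x10 matrix with -1 on the lower off-diagonal and +1 on the upper one.
e = np.zeros((10, 10))
v = np.linspace(0, 1, 10)   # vector of 10 domain points between 0 and 1
e[1:, :-1] -= np.eye(9)     # set lower off-diagonal to -1 (Octave: e(2:end,1:end-1))
e[:-1, 1:] += np.eye(9)     # set upper off-diagonal to +1 (Octave: e(1:end-1,2:end))
print(e)
```

The slice `e[1:, :-1]` plays exactly the role of Octave's `e(2:end,1:end-1)`, with the index shift absorbed by 0-based indexing.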
#redirect Bainer Hall
{-# OPTIONS --without-K --rewriting #-} module lib.groups.Groups where open import lib.groups.CommutingSquare public open import lib.groups.FreeAbelianGroup public open import lib.groups.FreeGroup public open import lib.groups.GroupProduct public open import lib.groups.Homomorphism public open import lib.groups.HomotopyGroup public open import lib.groups.Int public open import lib.groups.Isomorphism public open import lib.groups.Lift public open import lib.groups.LoopSpace public open import lib.groups.QuotientGroup public open import lib.groups.PullbackGroup public open import lib.groups.Subgroup public open import lib.groups.SubgroupProp public open import lib.groups.TruncationGroup public open import lib.groups.Unit public
module Proofs.GroupCancelationLemmas import Specifications.Group import Symmetry.Opposite %default total %access export infixl 8 # ||| cancel two factors in a product of three groupCancel1 : {(#) : Binop s} -> GroupSpec (#) e inv -> (a,b : s) -> (inv a # a) # b = b groupCancel1 spec a b = o2 === o3 where o1 : inv a # a = e o1 = inverseL spec a o2 : (inv a # a) # b = e # b o2 = cong {f = (# b)} o1 o3 : e # b = b o3 = neutralL (monoid spec) b ||| cancel two factors in a product of three groupCancel1bis : {(#) : Binop s} -> GroupSpec (#) e inv -> (a,b : s) -> inv a # (a # b) = b groupCancel1bis spec a b = assoc === groupCancel1 spec a b where assoc : inv a # (a # b) = (inv a # a) # b assoc = associative (monoid spec) _ _ _ ||| cancel two factors in a product of three groupCancel2 : {(#) : Binop s} -> GroupSpec (#) e inv -> (a,b : s) -> a # (inv b # b) = a groupCancel2 spec a b = o2 === o3 where o1 : inv b # b = e o1 = inverseL spec b o2 : a # (inv b # b) = a # e o2 = cong o1 o3 : a # e = a o3 = neutralR (monoid spec) a ||| cancel two factors in a product of three groupCancel2bis : {(#) : Binop s} -> GroupSpec (#) e inv -> (a,b : s) -> (a # inv b) # b = a groupCancel2bis spec a b = assoc @== groupCancel2 spec a b where assoc : a # (inv b # b) = (a # inv b) # b assoc = associative (monoid spec) _ _ _ ||| cancel two factors in a product of three groupCancel3 : {(#) : Binop s} -> GroupSpec (#) e inv -> (a,b : s) -> a # (b # inv b) = a groupCancel3 spec a b = groupCancel1 (opposite spec) b a ||| cancel two factors in a product of three groupCancel3bis : {(#) : Binop s} -> GroupSpec (#) e inv -> (a,b : s) -> (a # b) # inv b = a groupCancel3bis spec a b = groupCancel1bis (opposite spec) b a ||| cancel two factors in a product of three groupCancelAbelian : {(#) : Binop s} -> GroupSpec (#) e inv -> isAbelian (#) -> (a,b : s) -> a # (b # inv a) = b groupCancelAbelian spec abel a b = abel a _ === o1 where o1 : (b # inv a) # a = b o1 = groupCancel2bis spec b a ||| Translations 
are injective. groupTranslationInjectiveL : {(#) : Binop s} -> GroupSpec (#) e inv -> (a,x,y : s) -> a # x = a # y -> x = y groupTranslationInjectiveL spec a x y given = o2 where o1 : inv a # (a # x) = inv a # (a # y) o1 = cong given o2 : x = y o2 = groupCancel1bis spec a x @== o1 === groupCancel1bis spec a y ||| ab = a implies b = e groupCancelGivesNeutral : {(#) : Binop s} -> GroupSpec (#) e inv -> (a,b : s) -> a # b = a -> b = e groupCancelGivesNeutral spec a b given = o2 === inverseL spec a where o1 : inv a # (a # b) = inv a # a o1 = cong given o2 : b = inv a # a o2 = groupCancel1bis spec a b @== o1
Trabzonspor striker Waris sustained an injury in the Turkish Super Lig side's 4-1 defeat to Eskisehirspor nine days ago. Waris has been replaced in the squad by Mahatma Otoo, who plays his football in the Norwegian Tippeligaen for Sogndal. Leicester City defender Schlupp also misses out because of a knee problem that he sustained in the Premier League club's 2-2 draw with Liverpool on New Year's Day. The Ghana Football Association revealed in a statement that Schlupp will be sidelined for three weeks. Forward Kwesi Appiah, on loan at Cambridge United from Crystal Palace, is one of several members of Avram Grant's 23-man squad set to make their first appearance at the tournament. Baba Rahman, Daniel Amartey, Frank Acheampong and David Accam will all be making their AFCON debuts. However, there is disappointment for midfielder Ibrahim Moro as well as defenders Samuel Inkoom and Kwabena Adusei, with that trio all missing out after initially being named in Grant's provisional squad for the competition. Ghana start their AFCON campaign a week on Monday against Group C rivals Senegal in Mongomo. Strikers: Jordan Ayew (Lorient), Asamoah Gyan (Al Ain), Kwesi Appiah (Crystal Palace), David Accam (Chicago Fire), Mahatma Otoo (Sogndal).
Random.seed!(0) #sig = [1 2; 2 2] #sig = [1 1; 2 3] #sig = [2 3] #sig = [2 2] sig = [3 2; 2 3] S = random_S0Graph(sig) T = complement(S) n = S.n D = S.D J = block_expander(S.sig) w = random_bounded(S.n) @time opt0, x1, Z1 = dsw(S, w, eps=solver_eps) if true @time opt1, _, x2, Z2 = dsw_via_complement(T, w, eps=solver_eps) @test opt1 ≈ opt0 atol=tol else @time opt1, _, y = dsw_via_complement(T, w, eps=solver_eps) @test opt1 ≈ opt0 atol=tol @time opt2, x2, Z2 = dsw(T, y, eps=solver_eps) @test opt2 ≈ 1 atol=tol end x1 /= opt0 Z1 /= opt0 Z1 = J * Z1 * J' Z2 = J * Z2 * J' @test Z1 ≈ Z1' @test Z2 ≈ Z2' Z1 = Hermitian(Z1) Z2 = Hermitian(Z2) @test tr(√D * x1 * √D * x2) ≈ 1 atol=tol @test partialtrace(Z1, 1, [n,n]) ≈ transpose(x1) atol=tol @test partialtrace(Z2, 1, [n,n]) ≈ transpose(x2) atol=tol v1 = reshape(conj(x1), n^2) v2 = reshape(conj(x2), n^2) Q1 = [1 v1'; v1 Z1] Q2 = [1 v2'; v2 Z2] @test Q1 ≈ Q1' @test Q2 ≈ Q2' Q1 = Hermitian(Q1) Q2 = Hermitian(Q2) D2 = cat([ 1, -kron(√D, √D) ]..., dims=(1,2)) @test minimum(eigvals(Q1)) > -tol @test minimum(eigvals(Q2)) > -tol @test tr(Q1 * D2 * Q2 * D2) ≈ 0 atol=tol @test Z1 * kron(√D, √D) * v2 ≈ v1 atol=tol @test Z2 * kron(√D, √D) * v1 ≈ v2 atol=tol @test Z1 * kron(√D, √D) * Z2 ≈ v1 * v2' atol=tol @test Z1 * kron(eye(n), D) * Z2 ≈ v1 * v2' atol=tol
```python import numpy as np import pandas as pd ``` # Describing the CSV files ## TermosUnicos_&lt;collection name&gt;-Amostra.csv In the first version of the experiment, we computed the required sample size from the average number of characters per document in each collection (the &lt;collection name&gt;-Amostra files). However, since stemmers operate on words rather than characters, in version 2 we changed the sampling calculation to use the average number of unique terms per document. Since the files all share the same structure, we will use the Second-Degree Rulings collection (Acórdãos do Segundo Grau) to illustrate. ```python amostra = pd.read_csv('../csv/TermosUnicos_SegundoGrauAcordaos-Amostra.csv') amostra ``` <div> <style> .dataframe thead tr:only-child th { text-align: right; } .dataframe thead th { text-align: left; } .dataframe tbody tr th { vertical-align: top; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Media</th> <th>Sigma</th> <th>N</th> <th>e</th> <th>TamanhoDaAmostra</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>638.44896</td> <td>322.149428</td> <td>181994</td> <td>16.107471</td> <td>1524</td> </tr> </tbody> </table> </div> We compute the sample size using the formula: \begin{equation} n = \frac{z^2 \cdot \sigma^2 \cdot N}{e^2 \cdot (N - 1) + z^2 \cdot \sigma^2} \end{equation} where $z = 1.96$ (95% confidence level), $\sigma$ is the standard deviation of the number of unique terms in the collection, $N$ is the number of documents in the collection, and the sampling error is $e = 0.05\sigma$. ## &lt;collection name&gt;-Sumario.csv This file records the size of the processed sample (Processados), the number of unique terms in the sample (TUC), and the stemming algorithm used (Stemmer).
```python sumario = pd.read_csv('../csv/SegundoGrauAcordaos-Sumario.csv') sumario ``` <div> <style> .dataframe thead tr:only-child th { text-align: right; } .dataframe thead th { text-align: left; } .dataframe tbody tr th { vertical-align: top; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Processados</th> <th>TUC</th> <th>Stemmer</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1524</td> <td>42141</td> <td>NoStem</td> </tr> <tr> <th>1</th> <td>1524</td> <td>23019</td> <td>Porter</td> </tr> <tr> <th>2</th> <td>1524</td> <td>20220</td> <td>RSLP</td> </tr> <tr> <th>3</th> <td>1524</td> <td>36976</td> <td>RSLPS</td> </tr> <tr> <th>4</th> <td>1524</td> <td>32007</td> <td>UniNE</td> </tr> </tbody> </table> </div> ## &lt;collection name&gt;-Termos.csv Records the number of times each term appeared in the sample after the stemmer was applied. ```python termos = pd.read_csv('../csv/SegundoGrauAcordaos-Termos.csv', encoding='Latin-1') termos.head() ``` <div> <style> .dataframe thead tr:only-child th { text-align: right; } .dataframe thead th { text-align: left; } .dataframe tbody tr th { vertical-align: top; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Termo</th> <th>Quantidade</th> <th>Stemmer</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>desditosa</td> <td>1</td> <td>NoStem</td> </tr> <tr> <th>1</th> <td>guarde</td> <td>1</td> <td>NoStem</td> </tr> <tr> <th>2</th> <td>titularizar</td> <td>4</td> <td>NoStem</td> </tr> <tr> <th>3</th> <td>espelhada</td> <td>4</td> <td>NoStem</td> </tr> <tr> <th>4</th> <td>3640</td> <td>1</td> <td>NoStem</td> </tr> </tbody> </table> </div> ## &lt;collection name&gt;.csv For each document with id "Chave", we record its number of unique terms after stemming.
```python colecao = pd.read_csv('../csv/SegundoGrauAcordaos.csv') colecao.head() ``` <div> <style> .dataframe thead tr:only-child th { text-align: right; } .dataframe thead th { text-align: left; } .dataframe tbody tr th { vertical-align: top; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Chave</th> <th>TUD</th> <th>Stemmer</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>201100200560443632531</td> <td>576</td> <td>NoStem</td> </tr> <tr> <th>1</th> <td>201100200560443632531</td> <td>517</td> <td>Porter</td> </tr> <tr> <th>2</th> <td>201100200560443632531</td> <td>480</td> <td>RSLP</td> </tr> <tr> <th>3</th> <td>201100200560443632531</td> <td>540</td> <td>RSLPS</td> </tr> <tr> <th>4</th> <td>201100200560443632531</td> <td>535</td> <td>UniNE</td> </tr> </tbody> </table> </div> ## &lt;collection name&gt;&lowbar;&lt;algorithm name&gt;&lowbar;&lt;metric name&gt;.csv Records the metric value obtained for each of the queries issued against the search engine. ```python metrica = pd.read_csv('../csv/asg_nostem_map.csv', header=None, index_col=0) metrica.head() ``` <div> <style> .dataframe thead tr:only-child th { text-align: right; } .dataframe thead th { text-align: left; } .dataframe tbody tr th { vertical-align: top; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>1</th> </tr> <tr> <th>0</th> <th></th> </tr> </thead> <tbody> <tr> <th>1</th> <td>0.947681</td> </tr> <tr> <th>2</th> <td>0.500000</td> </tr> <tr> <th>3</th> <td>1.000000</td> </tr> <tr> <th>4</th> <td>0.557677</td> </tr> <tr> <th>5</th> <td>1.000000</td> </tr> </tbody> </table> </div> **Note that we did not set a random seed for the randomization phases of the experiments, so the files generated by each run may show values different from those presented here. Statistically, however, the hypotheses described in the dissertation are still expected to be confirmed.**
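The sample-size formula described above translates directly into code. Here is a minimal Python sketch (the function name `sample_size` is ours; the input values are the ones reported in the TermosUnicos_SegundoGrauAcordaos-Amostra.csv table):

```python
import math

def sample_size(sigma, N, z=1.96, e=None):
    """n = z^2 * sigma^2 * N / (e^2 * (N - 1) + z^2 * sigma^2), rounded up."""
    if e is None:
        e = 0.05 * sigma  # sampling error of 5% of the standard deviation
    n = (z**2 * sigma**2 * N) / (e**2 * (N - 1) + z**2 * sigma**2)
    return math.ceil(n)

# Sigma and N for the Acordaos collection, from the Amostra table
print(sample_size(322.149428, 181994))  # 1524, matching TamanhoDaAmostra
```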
Fearless rejoined Marlborough around 04:00 and both ships briefly fired at the German zeppelin <unk>. Commodore Reginald Tyrwhitt's Harwich Force had been ordered to reinforce the Grand Fleet, particularly to relieve ships low on fuel; they departed at 03:50 but this was too late for them to reach the fleet by morning, so Jellicoe ordered Tyrwhitt to detach destroyers to escort Marlborough back to port. On the way, Marlborough and Fearless encountered the British submarines G3 and G5; the two submarines prepared to attack the ships but fortunately recognised them before they launched torpedoes. By 15:00, eight destroyers from the Harwich Force had joined Marlborough and another pump had been lowered into the flooded boiler room. At around 23:30, the pump was being moved to clean it when the roll of the ship threw the pump into the damaged bulkhead, knocking the shores loose. Water flooded into the ship and Marlborough's captain ordered Fearless and the destroyers to prepare to come alongside at 00:47 on 2 June, to rescue the crew if the flooding worsened. A diver was sent into the boiler room at that time, and he was able to keep the pump clean, which slowly reduced the water level in the ship.
import numpy as np from evobench import Solution from evosolve.discrete.hc.fihc import FIHC from evosolve.linkage import BaseEmpiricalLinkage, LinkageScrap class EmpiricalLinkage(BaseEmpiricalLinkage): def __init__(self, fihc: FIHC): super(EmpiricalLinkage, self).__init__(fihc.benchmark) self.fihc = fihc def get_scrap(self, base: Solution, target_index: int) -> LinkageScrap: loci_order = self.fihc.get_loci_order() base_converged = self.fihc(base, loci_order) perturbed = base.genome.copy() perturbed[target_index] = not perturbed[target_index] modified = Solution(perturbed) perturbed_converged = self.fihc(modified, loci_order) interactions = np.abs(base_converged.genome - perturbed_converged.genome) return LinkageScrap(target_index, interactions)
(*-------------------------------------------* | CSP-Prover on Isabelle2004 | | December 2004 | | August 2005 (modified) | | | | CSP-Prover on Isabelle2005 | | October 2005 (modified) | | April 2006 (modified) | | | | CSP-Prover on Isabelle2009-2 | | October 2010 (modified) | | | | CSP-Prover on Isabelle2016 | | May 2016 (modified) | | | | CSP-Prover on Isabelle2017 | | April 2018 (modified) | | | | Yoshinao Isobe (AIST JAPAN) | *-------------------------------------------*) theory Domain_F_cpo imports Domain_F CSP_T.Domain_T_cpo Set_F_cpo CSP.CPO_pair CSP_T.CSP_T_continuous begin (***************************************************************** 1. 2. 3. 4. *****************************************************************) (* The following simplification rules are deleted in this theory file *) (* because they unexpectedly rewrite UnionT and InterT. *) (* Union (B ` A) = (UN x:A. B x) *) (* Inter (B ` A) = (INT x:A. B x) *) (* declare Union_image_eq [simp del] declare Inter_image_eq [simp del] *) (* no simp rules in Isabelle 2017 declare Sup_image_eq [simp del] declare Inf_image_eq [simp del] *) (********************************************************* Bottom in Dom_F *********************************************************) (* instance domF :: (type) bot0 by (intro_classes) *) instantiation domF :: (type) bot0 begin definition bottom_domF_def : "Bot == ({<>}t ,, {}f)" instance ..
end (* defs (overloaded) bottom_domF_def : "Bot == ({<>}t ,, {}f)" *) lemma bottom_domF: "Bot <= (F::'a domF)" apply (simp add: bottom_domF_def pairF_def) apply (simp add: subdomF_def Abs_domF_inverse) done instance domF :: (type) bot apply (intro_classes) by (simp add: bottom_domF) (*** fstF and sndF ***) lemma fstF_bottom_domF[simp]: "(fstF o Bot) = Bot" apply (simp add: prod_Bot_def) apply (simp add: bottom_domF_def) apply (simp add: bottom_domT_def) apply (simp add: comp_def pairF) done lemma sndF_bottom_domF[simp]: "(sndF o Bot) = Bot" apply (simp add: prod_Bot_def) apply (simp add: bottom_domF_def) apply (simp add: bottom_setF_def) apply (simp add: comp_def pairF) done (********************************************************** lemmas used in a proof that domain_F is a cpo. **********************************************************) (* LUB_TF TFs is an upper bound of TFs *) definition LUB_TF :: "'a domTsetF set => 'a domTsetF" where LUB_TF_def : "LUB_TF TFs == (UnionT (fst ` TFs), UnionF (snd ` TFs))" definition LUB_domF :: "'a domF set => 'a domF" where LUB_domF_def : "LUB_domF Fs == Abs_domF (LUB_TF (Rep_domF ` Fs))" (************* LUB_TF *************) (*** LUB_TF --> LUB ***) lemma LUB_TF_isLUB: "TFs ~= {} ==> LUB_TF TFs isLUB TFs" apply (simp add: pair_LUB_decompo) apply (simp add: LUB_TF_def) apply (simp add: isLUB_UnionT isLUB_UnionF) done (*** LUB --> LUB_TF ***) lemma isLUB_LUB_TF_only_if: "[| TFs ~= {} ; TF isLUB TFs |] ==> TF = LUB_TF TFs" apply (insert LUB_TF_isLUB[of TFs]) by (simp add: LUB_unique) (* iff *) lemma isLUB_LUB_TF : "TFs ~= {} ==> TF isLUB TFs = (TF = LUB_TF TFs)" apply (rule iffI) apply (simp add: isLUB_LUB_TF_only_if) apply (simp add: LUB_TF_isLUB) done (*** LUB TF = LUB_TF ***) lemma LUB_LUB_TF: "TFs ~= {} ==> LUB TFs = LUB_TF TFs" by (simp add: isLUB_LUB LUB_TF_isLUB) (****** LUB_TF TFs in domF ******) (* T3_F4 *) lemma LUB_TF_in_T3_F4: "[| TFs ~= {} ; ALL TF:TFs. 
TF:domF |] ==> HC_T3_F4 (LUB_TF TFs)" apply (simp add: LUB_TF_def HC_T3_F4_def) apply (intro allI impI) apply (elim bexE conjE) apply (drule_tac x="x" in bspec, simp) apply (simp add: domF_iff HC_T3_F4_def) by (auto) (* F3 *) lemma LUB_TF_in_F3: "[| TFs ~= {} ; ALL TF:TFs. TF:domF |] ==> HC_F3 (LUB_TF TFs)" apply (simp add: LUB_TF_def HC_F3_def) apply (intro allI impI) apply (elim bexE conjE) apply (drule_tac x="x" in bspec, simp) apply (simp add: domF_def HC_F3_def) apply (elim conjE) apply (drule_tac x="s" in spec) apply (drule_tac x="X" in spec) apply (drule_tac x="Y" in spec) by (auto) (* T2 *) lemma LUB_TF_in_T2: "[| TFs ~= {} ; ALL TF:TFs. TF:domF |] ==> HC_T2 (LUB_TF TFs)" apply (simp add: LUB_TF_def HC_T2_def) apply (intro allI impI) apply (elim exE bexE) apply (drule_tac x="x" in bspec, simp) apply (simp add: domF_def HC_T2_def) apply (elim conjE) apply (drule_tac x="s" in spec) by (auto) (*** LUB_TF TFs in domF ***) lemma LUB_TF_in: "[| TFs ~= {} ; ALL TF:TFs. TF:domF |] ==> (LUB_TF TFs): domF" apply (simp (no_asm) add: domF_iff) apply (simp add: LUB_TF_in_T2) apply (simp add: LUB_TF_in_F3) apply (simp add: LUB_TF_in_T3_F4) done lemma LUB_TF_in_Rep: "Fs ~= {} ==> (LUB_TF (Rep_domF ` Fs)): domF" apply (rule LUB_TF_in) apply (auto) done (************* LUB_domF *************) (* isLUB lemma *) lemma TF_isLUB_domFs: "[| TF:domF ; TF isLUB Rep_domF ` Fs |] ==> Abs_domF TF isLUB Fs" apply (simp add: isUB_def isLUB_def) apply (rule conjI) (* ub *) apply (intro allI impI) apply (elim bexE conjE) apply (drule_tac x="fst (Rep_domF y)" in spec) apply (drule_tac x="snd (Rep_domF y)" in spec) apply (simp add: subdomF_def Abs_domF_inverse) (* lub *) apply (intro allI impI) apply (elim bexE conjE) apply (rotate_tac -1) apply (drule_tac x="fst (Rep_domF y)" in spec) apply (rotate_tac -1) apply (drule_tac x="snd (Rep_domF y)" in spec) apply (simp) apply (drule mp) apply (intro allI impI) apply (simp add: image_def) apply (elim bexE conjE) apply (drule_tac x="x" in spec) 
apply (simp) apply (simp add: subdomF_def) apply (simp add: subdomF_def Abs_domF_inverse) done (*** LUB_domF --> LUB ***) lemma LUB_domF_isLUB: "Fs ~= {} ==> LUB_domF Fs isLUB Fs" apply (simp add: LUB_domF_def) apply (rule TF_isLUB_domFs) apply (simp add: LUB_TF_in) apply (simp add: LUB_TF_isLUB) done lemma LUB_domF_isLUB_I: "[| Fs ~= {} ; F = LUB_domF Fs |] ==> F isLUB Fs" by (simp add: LUB_domF_isLUB) (*** LUB --> LUB_domF ***) lemma isLUB_LUB_domF_only_if: "[| Fs ~= {} ; F isLUB Fs |] ==> F = LUB_domF Fs" apply (insert LUB_domF_isLUB[of Fs]) by (simp add: LUB_unique) (* iff *) lemma isLUB_LUB_domF : "Fs ~= {} ==> F isLUB Fs = (F = LUB_domF Fs)" apply (rule iffI) apply (simp add: isLUB_LUB_domF_only_if) apply (simp add: LUB_domF_isLUB) done (********************************************************** ( domF, <= ) is a CPO **********************************************************) instance domF :: (type) cpo apply (intro_classes) apply (simp add: hasLUB_def) apply (rule_tac x="LUB_domF X" in exI) by (simp add: directed_def LUB_domF_isLUB) (********************************************************** ( domF, <= ) is a pointed CPO **********************************************************) instance domF :: (type) cpo_bot by (intro_classes) (********************************************************** continuity of Abs_domF **********************************************************) (*** Abs_domF ***) lemma continuous_Abs_domF: "[| ALL x. 
f x: domF ; continuous f |] ==> continuous (Abs_domF o f)" apply (simp add: continuous_iff) apply (intro allI impI) apply (drule_tac x="X" in spec, simp) apply (elim conjE exE) apply (rule_tac x="x" in exI, simp) apply (rule TF_isLUB_domFs) apply (simp) apply (subgoal_tac "Rep_domF ` (Abs_domF o f) ` X = f ` X") by (auto simp add: image_def Abs_domF_inverse) (*** Rep_domF ***) lemma cont_Rep_domF: "continuous Rep_domF" apply (simp add: continuous_iff) apply (intro allI impI) apply (rule_tac x="LUB_domF X" in exI) apply (simp add: directed_def LUB_domF_isLUB) apply (simp add: isLUB_LUB_TF) apply (simp add: LUB_domF_def) apply (simp add: LUB_TF_in Abs_domF_inverse) done (*** fstF and sndF ***) lemma fstF_continuous: "continuous fstF" apply (simp add: fstF_def) apply (rule compo_continuous) apply (simp add: cont_Rep_domF) apply (simp add: fst_continuous) done lemma sndF_continuous: "continuous sndF" apply (simp add: sndF_def) apply (rule compo_continuous) apply (simp add: cont_Rep_domF) apply (simp add: snd_continuous) done (********************************************************** continuity decomposition **********************************************************) (*** if ***) lemma continuous_domF: "[| ALL x. (f x, g x): domF ; continuous f ; continuous g |] ==> continuous (%x. (f x ,, g x))" apply (simp add: pairF_def) apply (subgoal_tac "(%x. Abs_domF (f x, g x)) = Abs_domF o (%x. (f x, g x))") apply (simp) apply (rule continuous_Abs_domF) apply (simp) apply (simp add: pair_continuous) apply (simp add: comp_def) apply (simp add: fun_eq_iff) done lemmas continuous_domF_decompo_if = continuous_domF (*** only if ***) lemma continuous_domF_decompo_only_if_lm: "[| ALL x. (f x, g x) : domF; continuous (%x. (f x , g x)) |] ==> continuous f & continuous g" apply (simp add: pair_continuous) apply (simp add: comp_def) done lemma continuous_domF_decompo_only_if: "[| ALL x. (f x, g x) : domF; continuous (%x. 
(f x ,, g x)) |] ==> continuous f & continuous g" apply (rule continuous_domF_decompo_only_if_lm) apply (simp) apply (simp add: pairF_def) apply (insert compo_continuous [of "(%x. Abs_domF (f x, g x))" "Rep_domF" ]) apply (simp add: comp_def) apply (simp add: cont_Rep_domF) apply (simp add: Abs_domF_inverse) done lemma continuous_domF_decompo: "ALL x. (f x, g x) : domF ==> continuous (%x. (f x ,, g x)) = (continuous f & continuous g)" apply (rule) apply (simp add: continuous_domF_decompo_only_if) apply (simp add: continuous_domF_decompo_if) done (********************************************************** continuity of (op o fstF) **********************************************************) (* lemma continuous_op_fstF: "continuous (op o fstF)" *) lemma continuous_op_fstF: "continuous ((o) fstF)" apply (simp add: continuous_iff) apply (intro allI impI) apply (insert complete_cpo_lm) apply (drule_tac x="X" in spec) apply (simp add: hasLUB_def) apply (elim exE) apply (rule_tac x="x" in exI) apply (simp) apply (simp add: image_def) apply (simp add: isLUB_def) apply (simp add: isUB_def) apply (rule) (* UB *) apply (intro allI impI) apply (elim conjE bexE) apply (simp) apply (drule_tac x="xa" in spec) apply (simp add: order_prod_def) apply (simp add: mono_fstF[simplified mono_def]) (* LUB *) apply (intro allI impI) apply (elim conjE) apply (rotate_tac -1) apply (drule_tac x="(%pn. 
(y pn ,, maxFof (y pn)))" in spec) apply (drule mp) apply (intro allI impI) apply (drule_tac x="fstF o ya" in spec) apply (drule mp, fast) apply (simp add: order_prod_def) apply (intro allI impI) apply (drule_tac x="xa" in spec) apply (simp add: subdomF_decompo) apply (simp add: pairF maxFof_domF) apply (rule) apply (simp add: subdomT_iff) apply (drule_tac x="s" in spec) apply (simp add: pairF_domF_T2) apply (simp add: maxFof_max) apply (simp add: order_prod_def) apply (simp add: subdomF_decompo) apply (simp add: pairF maxFof_domF) done (********************************************************** fstF-distribution over LUB **********************************************************) lemma dist_fstF_LUB: "X ~= {} ==> fstF o LUB X = LUB (((o) fstF) ` X)" (* "X ~= {} ==> fstF o LUB X = LUB ((op o fstF) ` X)" *) apply (subgoal_tac "X hasLUB") apply (rule sym) apply (rule isLUB_LUB) apply (simp add: isLUB_def isUB_def) apply (rule) (* UB *) apply (intro allI impI) apply (simp add: image_iff) apply (simp add: comp_def) apply (elim bexE) apply (simp add: order_prod_def) apply (intro allI) apply (subgoal_tac "x <= LUB X") apply (simp add: order_prod_def) apply (simp add: mono_fstF[simplified mono_def]) apply (subgoal_tac "LUB X isLUB X") apply (simp add: isLUB_def isUB_def) apply (simp add: LUB_is) (* LUB *) apply (intro allI impI) apply (simp add: order_prod_def) apply (intro allI impI) apply (subgoal_tac "(LUB X) isLUB X") apply (simp add: prod_LUB_decompo) apply (simp add: proj_fun_def) apply (drule_tac x="x" in spec) apply (subgoal_tac "(%xa. 
xa x) ` X ~= {}") apply (simp add: isLUB_LUB_domF) apply (simp add: LUB_domF_def) apply (simp add: fstF_def) apply (simp add: LUB_TF_in_Rep Abs_domF_inverse) apply (simp add: LUB_TF_def) apply (rule subdomTI) apply (simp) apply (elim conjE bexE) apply (drule_tac x="fstF o xa" in spec) apply (drule mp) apply (simp add: image_iff) apply (rule_tac x="xa" in bexI) apply (simp add: fstF_def) apply (simp) apply (drule_tac x="x" in spec) apply (simp add: subdomT_iff) apply (drule_tac x="t" in spec) apply (simp add: fstF_def) apply (force) apply (rule LUB_is) apply (simp) apply (simp add: hasLUB_def) apply (simp add: prod_LUB_decompo) apply (simp add: proj_fun_def) apply (rule_tac x="(%i. LUB_domF ((%x. x i) ` X))" in exI) apply (simp add: isLUB_LUB_domF) done (****************** to add them again ******************) (* declare Union_image_eq [simp] declare Inter_image_eq [simp] *) (* declare Sup_image_eq [simp] declare Inf_image_eq [simp] *) end
from collections import OrderedDict import pytest import numpy import torch from functools import partial import traceback import io import syft from syft.serde import protobuf from test.serde.serde_helpers import * # Dictionary containing test sample functions samples = OrderedDict() # Native samples[type(None)] = make_none def test_serde_coverage(): """Checks all types in serde are tested""" for cls, _ in protobuf.serde.bufferizers.items(): has_sample = cls in samples assert has_sample is True, "Serde for %s is not tested" % cls @pytest.mark.parametrize("cls", samples) def test_serde_roundtrip_protobuf(cls, workers): """Checks that values passed through serialization-deserialization stay the same""" _samples = samples[cls](workers=workers) for sample in _samples: _to_protobuf = ( protobuf.serde._bufferize if not sample.get("forced", False) else protobuf.serde._force_full_bufferize ) serde_worker = syft.hook.local_worker serde_worker.framework = sample.get("framework", torch) obj = sample.get("value") protobuf_obj = _to_protobuf(serde_worker, obj) roundtrip_obj = None if not isinstance(obj, Exception): roundtrip_obj = protobuf.serde._unbufferize(serde_worker, protobuf_obj) assert type(roundtrip_obj) == type(obj) assert roundtrip_obj == obj
% CXCORR Circular Cross Correlation function estimates. % CXCORR(a,b), where a and b represent samples taken over time interval T % which is assumed to be a common period of two corresponding periodic signals. % a and b are supposed to be length M row vectors, either real or complex. % % [x,c]=CXCORR(a,b) returns the length M circular cross correlation sequence c % with corresponding lags x. % % The circular cross correlation is: % c(k) = sum[a(n)*conj(b(n+k))]/[norm(a)*norm(b)]; % where vector b is shifted CIRCULARLY by k samples. % % The function doesn't check the format of input vectors a and b! % % For circular covariance between a and b look for CXCOV(a,b) in % http://www.mathworks.com/matlabcentral/fileexchange/loadAuthor.do?objectType=author&objectId=1093734 % % Reference: % A. V. Oppenheim, R. W. Schafer and J. R. Buck, Discrete-Time Signal Processing, % Upper Saddle River, NJ : Prentice Hall, 1999. % % Author: G. Levin, Apr. 26, 2004. function [x,c]=CXCORR(a,b) na=norm(a); nb=norm(b); a=a/na; %normalization b=b/nb; for k=1:length(b) c(k)=a*b'; b=[b(end),b(1:end-1)]; %circular shift end x=[0:length(b)-1]; %lags
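The MATLAB routine above can be cross-checked with a NumPy port. This is a sketch added for illustration, not part of the original submission; `np.roll(b, k)` reproduces the effect of applying the MATLAB circular shift `b=[b(end),b(1:end-1)]` k times:

```python
import numpy as np

def cxcorr(a, b):
    """Normalized circular cross-correlation of equal-length vectors a and b."""
    a = np.asarray(a, dtype=complex) / np.linalg.norm(a)  # normalization
    b = np.asarray(b, dtype=complex) / np.linalg.norm(b)
    # c[k] = sum_n a[n] * conj(b circularly shifted by k samples)
    c = np.array([np.sum(a * np.conj(np.roll(b, k))) for k in range(len(b))])
    x = np.arange(len(b))  # lags 0..M-1
    return x, c

x, c = cxcorr([1, 2, 3, 4], [1, 2, 3, 4])
print(c.real)  # c[0] == 1: a signal is perfectly correlated with itself at lag 0
```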
Formal statement is: lemma IVT: fixes f :: "'a::linear_continuum_topology \<Rightarrow> 'b::linorder_topology" shows "f a \<le> y \<Longrightarrow> y \<le> f b \<Longrightarrow> a \<le> b \<Longrightarrow> (\<forall>x. a \<le> x \<and> x \<le> b \<longrightarrow> isCont f x) \<Longrightarrow> \<exists>x. a \<le> x \<and> x \<le> b \<and> f x = y" Informal statement is: If $f$ is continuous on the interval $[a,b]$ and $f(a) \leq y \leq f(b)$, then there exists $x \in [a,b]$ such that $f(x) = y$.
function spm_dem_cue_movie(DEM,q) % creates a movie of cued pointing % FORMAT spm_dem_cue_movie(DEM,q) % % DEM - DEM structure from reaching simulations % q - flag switching from true to perceived reaching %__________________________________________________________________________ % Copyright (C) 2008 Wellcome Trust Centre for Neuroimaging % Karl Friston % $Id: spm_dem_set_movie.m 4231 2011-03-07 21:00:02Z karl $ % Dimensions %-------------------------------------------------------------------------- N = size(DEM.pU.v{1},2); n = size(DEM.pP.P{1},2); % evaluate true location (targets) %---------------------------------------------------------------------- for i = 1:N L(:,:,i) = DEM.pP.P{1}; end if nargin > 1 % evaluate perceived positions (motor plant) %---------------------------------------------------------------------- x = tan(DEM.qU.v{1}(1:2,:)); % finger location c = DEM.qU.v{1}(4 + (1:n),:); % target contrast else % evaluate true positions (motor plant) %---------------------------------------------------------------------- x = tan(DEM.pU.v{1}(1:2,:)); % finger location c = DEM.pU.v{1}(4 + (1:n),:); % target contrast end c = c - min(c(:)) + 1/32; c = c/max(c(:)); fin = imread('pointfinger.jpg'); % movie %-------------------------------------------------------------------------- s = 2; for i = 1:N cla axis image ij hold on % finger %---------------------------------------------------------------------- imagesc(([-1 0] + .68)*s + x(1,i),([-1 0] + .96)*s + x(2,i),fin); hold on % trajectory %---------------------------------------------------------------------- plot(x(1,1:i),x(2,1:i),'k:') % targets %---------------------------------------------------------------------- for j = 1:n plot(L(1,j,i),L(2,j,i),'.','MarkerSize',64,'color',[c(j,i) (1 - c(j,i)) 0]) end axis([-1 1 -1 1]*2) hold off drawnow % save %---------------------------------------------------------------------- M(i) = getframe(gca); end % set ButtonDownFcn 
%-------------------------------------------------------------------------- h = findobj(gca,'type','image'); set(h(1),'Userdata',{M,16}) set(h(1),'ButtonDownFcn','spm_DEM_ButtonDownFcn')
Formal statement is: lemma zero_less_dist_iff: "0 < dist x y \<longleftrightarrow> x \<noteq> y" Informal statement is: $0 < \text{dist}(x, y)$ if and only if $x \neq y$.
! { dg-do compile } ! { dg-options "-std=f2008" } ! ! PR fortran/48820 ! ! subroutine foo(x) integer :: x(..) ! { dg-error "TS 29113/TS 18508: Assumed-rank array" } end subroutine foo
/* Copyright (c) 2006, Arvid Norberg & Daniel Wallin All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef FIND_DATA_050323_HPP #define FIND_DATA_050323_HPP #include <vector> #include <map> #include <libtorrent/kademlia/traversal_algorithm.hpp> #include <libtorrent/kademlia/node_id.hpp> #include <libtorrent/kademlia/routing_table.hpp> #include <libtorrent/kademlia/rpc_manager.hpp> #include <libtorrent/kademlia/observer.hpp> #include <libtorrent/kademlia/msg.hpp> #include <boost/optional.hpp> #include <boost/function.hpp> namespace libtorrent { namespace dht { typedef std::vector<char> packet_t; class rpc_manager; class node_impl; // -------- find data ----------- class find_data : public traversal_algorithm { public: typedef boost::function<void(std::vector<tcp::endpoint> const&)> data_callback; typedef boost::function<void(std::vector<std::pair<node_entry, std::string> > const&)> nodes_callback; void got_data(msg const* m); void got_write_token(node_id const& n, std::string const& write_token) { m_write_tokens[n] = write_token; } find_data(node_impl& node, node_id target , data_callback const& dcallback , nodes_callback const& ncallback); virtual char const* name() const { return "get_peers"; } node_id const target() const { return m_target; } private: void done(); void invoke(node_id const& id, udp::endpoint addr); data_callback m_data_callback; nodes_callback m_nodes_callback; std::map<node_id, std::string> m_write_tokens; node_id const m_target; bool m_done; }; class find_data_observer : public observer { public: find_data_observer( boost::intrusive_ptr<find_data> const& algorithm , node_id self) : observer(algorithm->allocator()) , m_algorithm(algorithm) , m_self(self) {} ~find_data_observer(); void send(msg& m) { m.reply = false; m.message_id = messages::get_peers; m.info_hash = m_algorithm->target(); } void timeout(); void reply(msg const&); void abort() { m_algorithm = 0; } private: boost::intrusive_ptr<find_data> m_algorithm; node_id const m_self; }; } } // namespace libtorrent::dht #endif // FIND_DATA_050323_HPP
Load LFindLoad. From lfind Require Import LFind. From QuickChick Require Import QuickChick. From adtind Require Import goal33. Derive Show for natural. Derive Arbitrary for natural. Instance Dec_Eq_natural : Dec_Eq natural. Proof. dec_eq. Qed. Lemma conj3synthconj1 : forall (lv0 : natural) (lv1 : natural), (@eq natural (plus lv0 (Succ lv1)) (plus lv0 (Succ lv1))). Admitted. QuickChick conj3synthconj1.
During the planning for the invasion of South Korea in the years before the war, the North Korean leadership began to create large numbers of commando and special forces units to send south. These units subverted South Korean authority before and during the war with terror campaigns, sabotage and inducing rebellions in ROK military units. Hundreds of commandos were sent to South Korea in this fashion, and by the end of the war up to 3,000 of them had been trained and armed. During this time, North Korean leadership also ordered the creation of large conventional units to act as advance forces for the actual invasion. The 766th Unit was formed in April 1949 at the Third Military Academy in <unk>, North Korea. The academy was specially designed to train commandos, and the 766th was originally designed to supervise North Korean light infantry ranger units. Over the next year, the 766th Unit received extensive training in unconventional warfare and amphibious warfare. During this time, the unit was expanded in size to 3,000 men in six battalions.
module DirectedRoundings export RoundNearest, RoundUp, RoundDown, RoundToZero, RoundFromZero, RoundLoHi, RoundValue import Base: RoundNearest, RoundUp, RoundDown, RoundToZero, RoundFromZero, +, -, *, /, \, hypot, sqrt, cbrt, div, fld, cld, mod, rem, divrem, fldmod, abs, flipsign, copysign export +, -, *, /, \, hypot, sqrt, cbrt, div, fld, cld, mod, rem, divrem, fldmod, abs, flipsign, copysign using Base.Rounding const RoundLoHi = RoundingMode{:HiLo}() const RoundValue = RoundingMode{:Value}() #= • round the significand to use fewer bits • RoundNearest (default) • RoundUp • RoundDown • round the floating point value to an integer • RoundToZero • RoundFromZero =# # allow multiparameter operators taking 1, 2, 3, or 4 args @inline RoundNearest(fn::Function, a::T) where {T<:AbstractFloat} = rounded(fn, a, RoundNearest) @inline RoundNearest(fn::Function, a::T, b::T) where {T<:AbstractFloat} = rounded(fn, a, b, RoundNearest) @inline RoundNearest(fn::Function, a::T, b::T, c::T) where {T<:AbstractFloat} = rounded(fn, a, b, c, RoundNearest) @inline RoundNearest(fn::Function, a::T, b::T, c::T, d::T) where {T<:AbstractFloat} = rounded(fn, a, b, c, d, RoundNearest) @inline RoundUp(fn::Function, a::T) where {T<:AbstractFloat} = rounded(fn, a, RoundUp) @inline RoundUp(fn::Function, a::T, b::T) where {T<:AbstractFloat} = rounded(fn, a, b, RoundUp) @inline RoundUp(fn::Function, a::T, b::T, c::T) where {T<:AbstractFloat} = rounded(fn, a, b, c, RoundUp) @inline RoundUp(fn::Function, a::T, b::T, c::T, d::T) where {T<:AbstractFloat} = rounded(fn, a, b, c, d, RoundUp) @inline RoundDown(fn::Function, a::T) where {T<:AbstractFloat} = rounded(fn, a, RoundDown) @inline RoundDown(fn::Function, a::T, b::T) where {T<:AbstractFloat} = rounded(fn, a, b, RoundDown) @inline RoundDown(fn::Function, a::T, b::T, c::T) where {T<:AbstractFloat} = rounded(fn, a, b, c, RoundDown) @inline RoundDown(fn::Function, a::T, b::T, c::T, d::T) where {T<:AbstractFloat} = rounded(fn, a, b, c, d, RoundDown) @inline
RoundToZero(fn::Function, a::T) where {T<:AbstractFloat} = rounded(fn, a, RoundToZero) @inline RoundToZero(fn::Function, a::T, b::T) where {T<:AbstractFloat} = rounded(fn, a, b, RoundToZero) @inline RoundToZero(fn::Function, a::T, b::T, c::T) where {T<:AbstractFloat} = rounded(fn, a, b, c, RoundToZero) @inline RoundToZero(fn::Function, a::T, b::T, c::T, d::T) where {T<:AbstractFloat} = rounded(fn, a, b, c, d, RoundToZero) # support RoundFromZero @inline RoundFromZero(fn::Function, a::T) where {T<:BigFloat} = rounded(fn, a, RoundFromZero) @inline RoundFromZero(fn::Function, a::T, b::T) where {T<:BigFloat} = rounded(fn, a, b, RoundFromZero) @inline RoundFromZero(fn::Function, a::T, b::T, c::T) where {T<:BigFloat} = rounded(fn, a, b, c, RoundFromZero) @inline RoundFromZero(fn::Function, a::T, b::T, c::T, d::T) where {T<:BigFloat} = rounded(fn, a, b, c, d, RoundFromZero) @inline function RoundFromZero(fn::Function, a::T) where {T<:AbstractFloat} result = RoundUp(fn, a) if result < 0 result = RoundDown(fn, a) end return result end @inline function RoundFromZero(fn::Function, a::T, b::T) where {T<:AbstractFloat} result = RoundUp(fn, a, b) if result < 0 result = RoundDown(fn, a, b) end return result end @inline function RoundFromZero(fn::Function, a::T, b::T, c::T) where {T<:AbstractFloat} result = RoundUp(fn, a, b, c) if result < 0 result = RoundDown(fn, a, b, c) end return result end @inline function RoundFromZero(fn::Function, a::T, b::T, c::T, d::T) where {T<:AbstractFloat} result = RoundUp(fn, a, b, c, d) if result < 0 result = RoundDown(fn, a, b, c, d) end return result end # directed rounding in a functional context @inline function rounded(fn::Function, a::T, mode::RoundingMode) where {T<:AbstractFloat} setrounding(T, mode) do fn(a) end end @inline function rounded(fn::Function, a::T, b::T, mode::RoundingMode) where {T<:AbstractFloat} setrounding(T, mode) do fn(a, b) end end @inline function rounded(fn::Function, a::T, b::T, c::T, mode::RoundingMode) where {T<:AbstractFloat} setrounding(T,
mode) do fn(a, b, c) end end @inline function rounded(fn::Function, a::T, b::T, c::T, d::T, mode::RoundingMode) where {T<:AbstractFloat} setrounding(T, mode) do fn(a, b, c, d) end end # bidirectional rounding in a functional context @inline function RoundLoHi(fn::Function, a::T) where {T<:AbstractFloat} hi = rounded(fn, a, RoundUp) lo = rounded(fn, a, RoundDown) return minmax(hi, lo) end @inline function RoundLoHi(fn::Function, a::T, b::T) where {T<:AbstractFloat} hi = rounded(fn, a, b, RoundUp) lo = rounded(fn, a, b, RoundDown) return minmax(hi, lo) end @inline function RoundLoHi(fn::Function, a::T, b::T, c::T) where {T<:AbstractFloat} hi = rounded(fn, a, b, c, RoundUp) lo = rounded(fn, a, b, c, RoundDown) return minmax(hi, lo) end @inline function RoundLoHi(fn::Function, a::T, b::T, c::T, d::T) where {T<:AbstractFloat} hi = rounded(fn, a, b, c, d, RoundUp) lo = rounded(fn, a, b, c, d, RoundDown) return minmax(hi, lo) end # obtain the most informing, least misleading representation of value const twopowex = [2.0, 4.0, 8.0, 16.0, 32.0, 64.0] function RoundValue(fn::Function, a::T) where {T<:AbstractFloat} lo, hi = RoundLoHi(fn, a) # minmax returns (lo, hi) frhi, exhi = frexp(hi) frlo, exlo = frexp(lo) if exhi !== exlo dex = exhi - exlo fex = dex < 7 ? twopowex[dex] : 2.0^dex frlo = frlo / fex # rescale lo's significand to hi's exponent exlo = exlo + dex end signifhi = frhi signiflo = frlo sigbits = 53 while sigbits > 0 && (signifhi !== signiflo) sigbits -= 1 signifhi = signif(hi, sigbits, 2) signiflo = signif(lo, sigbits, 2) end if sigbits == 0 && (signifhi !== signiflo) span = hi - lo throw(ErrorException("interval [$lo, $hi] spans $span, which does not resolve")) end return signiflo end end # DirectedRoundings module
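For comparison outside Julia, the module's RoundLoHi pattern — evaluating the same expression once under downward and once under upward rounding — can be imitated with Python's `decimal` module, whose per-context rounding plays the role of `setrounding`. The helper name `round_lo_hi` below is an invented illustration, not part of either library:

```python
from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_CEILING

def round_lo_hi(fn, *args, prec=6):
    """Evaluate fn under floor and ceiling rounding; return the enclosure."""
    results = []
    for mode in (ROUND_FLOOR, ROUND_CEILING):
        with localcontext() as ctx:   # temporary context, restored on exit
            ctx.prec = prec
            ctx.rounding = mode
            results.append(fn(*args))
    lo, hi = sorted(results)
    return lo, hi

# 1/3 is not representable in 6 digits, so the two directions disagree
lo, hi = round_lo_hi(lambda a, b: a / b, Decimal(1), Decimal(3))
```

The pair `(lo, hi)` brackets the exact value, which is the same guarantee RoundLoHi provides via `minmax`.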
The nest is 12–15 cm (4.7–5.9 in) in diameter and 3–4 cm (1.2–1.6 in) deep. The clutch is 6–14, usually 8–12 eggs; these are oval, slightly glossy, creamy or tinted with green, blue or grey, and blotched red-brown. They average 37 mm × 26 mm (1.5 in × 1.0 in) and weigh about 13–16 g (0.46–0.56 oz), of which 7% is shell. The eggs are laid at daily intervals, but second clutches may sometimes have two eggs added per day. Incubation is by the female only; her tendency to sit tight when disturbed, or wait until the last moment to flee, leads to many deaths during hay-cutting and harvesting. The eggs hatch together after 19–20 days, and the precocial chicks leave the nest within a day or two. They are fed by the female for three or four days, but can find their own food thereafter. The juveniles fledge after 34–38 days. The second brood is started about 42 days after the first, and the incubation period is slightly shorter at 16–18 days. The grown young may stay with the female until departure for Africa.
from sage.rings.integer import Integer from sage.structure.sage_object import SageObject from sage.lfunctions.dokchitser import Dokchitser from l_series_gross_zagier_coeffs import gross_zagier_L_series from sage.modular.dirichlet import kronecker_character class GrossZagierLseries(SageObject): def __init__(self, E, A, prec=53): r""" Class for the Gross-Zagier L-series. This is attached to a pair `(E,A)` where `E` is an elliptic curve over `\QQ` and `A` is an ideal class in an imaginary quadratic number field. For the exact definition, in the more general setting of modular forms instead of elliptic curves, see section IV of [GrossZagier]_. INPUT: - ``E`` -- an elliptic curve over `\QQ` - ``A`` -- an ideal class in an imaginary quadratic number field - ``prec`` -- an integer (default 53) giving the required precision EXAMPLES:: sage: e = EllipticCurve('37a') sage: K.<a> = QuadraticField(-40) sage: A = K.class_group().gen(0) sage: from sage.modular.modform.l_series_gross_zagier import GrossZagierLseries sage: G = GrossZagierLseries(e, A) TESTS:: sage: K.<b> = QuadraticField(131) sage: A = K.class_group().one() sage: G = GrossZagierLseries(e, A) Traceback (most recent call last): ... 
ValueError: A is not an ideal class in an imaginary quadratic field """ self._E = E self._N = N = E.conductor() self._A = A ideal = A.ideal() K = A.gens()[0].parent() D = K.disc() if not(K.degree() == 2 and D < 0): raise ValueError("A is not an ideal class in an" " imaginary quadratic field") Q = ideal.quadratic_form().reduced_form() epsilon = - kronecker_character(D)(N) self._dokchister = Dokchitser(N ** 2 * D ** 2, [0, 0, 1, 1], weight=2, eps=epsilon, prec=prec) self._nterms = nterms = Integer(self._dokchister.gp()('cflength()')) if nterms > 1e6: # just takes way to long raise ValueError("Too many terms: {}".format(nterms)) zeta_ord = ideal.number_field().zeta_order() an_list = gross_zagier_L_series(E.anlist(nterms + 1), Q, N, zeta_ord) self._dokchister.gp().set('a', an_list[1:]) self._dokchister.init_coeffs('a[k]', 1) def __call__(self, s, der=0): r""" Return the value at `s`. INPUT: - `s` -- complex number - ``der`` -- ? (default 0) EXAMPLES:: sage: e = EllipticCurve('37a') sage: K.<a> = QuadraticField(-40) sage: A = K.class_group().gen(0) sage: from sage.modular.modform.l_series_gross_zagier import GrossZagierLseries sage: G = GrossZagierLseries(e, A) sage: G(3) -0.272946890617590 """ return self._dokchister(s, der) def taylor_series(self, s=1, series_prec=6, var='z'): r""" Return the Taylor series at `s`. INPUT: - `s` -- complex number (default 1) - ``series_prec`` -- number of terms (default 6) in the Taylor series - ``var`` -- variable (default 'z') EXAMPLES:: sage: e = EllipticCurve('37a') sage: K.<a> = QuadraticField(-40) sage: A = K.class_group().gen(0) sage: from sage.modular.modform.l_series_gross_zagier import GrossZagierLseries sage: G = GrossZagierLseries(e, A) sage: G.taylor_series(2,3) -0.613002046122894 + 0.490374999263514*z - 0.122903033710382*z^2 + O(z^3) """ return self._dokchister.taylor_series(s, series_prec, var) def _repr_(self): """ Return the string representation. 
EXAMPLES:: sage: e = EllipticCurve('37a') sage: K.<a> = QuadraticField(-40) sage: A = K.class_group().gen(0) sage: from sage.modular.modform.l_series_gross_zagier import GrossZagierLseries sage: GrossZagierLseries(e, A) Gross Zagier L-series attached to Elliptic Curve defined by y^2 + y = x^3 - x over Rational Field with ideal class Fractional ideal class (2, 1/2*a) """ msg = "Gross Zagier L-series attached to {} with ideal class {}" return msg.format(self._E, self._A)
\documentclass[12pt]{article} \usepackage[margin=1in]{geometry} \usepackage{amsmath,amsthm,amssymb,amsfonts} \usepackage{graphicx} \usepackage{authblk} \usepackage{todonotes} \usepackage{listings} \usepackage{matlab-prettifier} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newenvironment{problem}[2][problem]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}} %If you want to title your bold things something different just make another thing exactly like this but replace "problem" with the name of the thing you want, like theorem or lemma or whatever \begin{document} %\renewcommand{\qedsymbol}{\filledbox} %Good resources for looking up how to do stuff: %Binary operators: http://www.access2science.com/latex/Binary.html %General help: http://en.wikibooks.org/wiki/LaTeX/Mathematics %Or just google stuff \title{\textbf{Sequence to Sequence modeling of breakpoints in time series}} \date{} \author{ Amy Pitts \\ Marist College - DATA 440 \\ April 10, 2019 } \maketitle \begin{abstract} \end{abstract} \hspace{10pt} \thispagestyle{empty} \clearpage \setcounter{page}{1} %--------------------------------------------------------- \section{Introduction} When modeling time series data, it can be necessary to identify places or points in time where significant change occurs in the behavior of the data. By identifying these breakpoints or change points, different parts of the data can be fitted with separate, more appropriate models, allowing for noteworthy changes to be better characterized in the combined model. In the summer of 2018 I spent eight weeks working with a team at Lafayette College developing a Bayesian procedure that identifies the number and location of breakpoints in time series.
The final product was Bayesian Adaptive Auto-Regression (BAAR), a new procedure for accurately and efficiently finding the distribution of the number and location of breakpoints in time series. I would cite my own research; however, it has not been peer-reviewed yet. The goal of this project is to attempt to create another technique that identifies breakpoints. This research project will use a sequence to sequence neural network approach to locate breakpoints. The algorithm will be trained on simulated data where the breakpoint locations are known, and the technique will then be tested on real-world data. These results will be compared to the results that my REU group found over the summer, as well as to results from well-regarded journals. %--------------------------------------------------------- \section{Background and History} Since breakpoints are found in numerous types of time series, there has been ample interest in developing techniques to detect them in recent decades across a wide range of fields. Existing techniques have been applied to everything from United States Treasury bill rates \cite{Pesaran} to hydrology \cite{Seidou} to climate records \cite{Ruggieri}. There have been various techniques for detecting change-point locations. The simplest relies on expert opinion: experts approximate where a breakpoint will occur based on historical knowledge. However, that approach introduces a lot of human error, which sparked the development of more computational methods. One widely used technique is the Bai-Perron test \cite{Bai}, which has an accessible R package, ``strucchange'' \cite{Zeileis}. The test returns a single optimal breakpoint set but requires the user to specify a maximum number of breakpoints and a minimum segment size. Another technique is Bayesian Adaptive Regression Splines (BARS), a Bayesian curve fitting technique developed by DiMatteo et al.
\cite{DiMatteo} and implemented by Wallstrom, Liebner, and Kass \cite{Wallstrom}. These two methods inspired the Bayesian Adaptive Auto-Regression (BAAR) procedure. The Sequence to Sequence neural network is a technique that is typically used for language processing. The process is to train a model to convert a sequence from one domain into another. The most commonly used example is taking English words and translating them into French \cite{Sutskever}. The Sequence to Sequence process is able to take a sequence and convert it into another sequence of a different length \cite{Sutskever}. Other Deep Neural Networks (DNNs) can also perform arbitrary parallel computation for a modest number of steps; however, some techniques require the input and output sequences to be of the same length. %--------------------------------------------------------- \section{Method} I need to expand on this. I am basing my work off of a Keras blog, primarily its toy example of training a sequence to sequence neural network to add two numbers. That code can be found at the link $$https://github.com/keras-team/keras/blob/master/examples/addition_rnn.py$$ Also the Keras blog is $https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html$. %--------------------------------------------------------- \section{Implementation in python} One of the bigger stumbling blocks that I have come across is my lack of data. Therefore, I am going to train the sequence to sequence neural network on simulated time series. What this code does is randomly generate a specified amount of numbers between two values. The code alternates between two sets of values to create breaks in the data. The locations of the breaks are randomly generated and kept track of. For each iteration, 10 different data sets are produced, the first starting with one break and each data set after that having one more break, up to 10 breakpoints.
\lstset{language=Python,breaklines=true,basicstyle=\small\ttfamily} \begin{lstlisting}[frame=single] import random as r import numpy as np import matplotlib.pyplot as plt ### Setting the dataset length = 1000 # number of datapoints in the set iterations = 10 # number of iterations; not too high, please master_data = [] master_breaks = [] # I ran into a problem where the set of breaks was not in terms of time # but just the number of datapoints in each segment def add_one_by_one(l): new_l = [] cumsum = 0 for elt in l: cumsum += elt new_l.append(cumsum) return new_l # creating the data: 1-10 breaks for each iteration for i in range(iterations): for y in range(10): num_of_breaks = y + 1 s = [] for j in range(num_of_breaks): s.append(r.randint(1, round(length / num_of_breaks))) s.append(length - sum(s)) # making sure the data is the right length numbers = [] for k in range(num_of_breaks + 1): # every other segment draws values from 0-5, the rest from 5-10 if k % 2 == 0: l = np.random.uniform(0, 5, s[k]) else: l = np.random.uniform(5, 10, s[k]) numbers = numbers + [*l] master_data.append(numbers) # saving that dataset breaks = add_one_by_one(s) # converting segment lengths to breakpoints master_breaks.append(breaks) # saving the breakpoints # testing my creation plt.plot(master_data[1]) master_breaks[1] \end{lstlisting} %--------------------------------------------------------- \section{Results} %--------------------------------------------------------- \section{Discussion} %--------------------------------------------------------- \section{Conclusion} %--------------------------------------------------------- \begin{thebibliography}{9} %Starting at background: \bibitem{Sutskever} I. Sutskever, O. Vinyals, and Q. V. Le, ``Sequence to Sequence Learning with Neural Networks,'' \textit{Advances in neural information processing systems}, pp. 3104-3112, 2014. \bibitem{Bai} J. Bai and P. Perron, ``Computation and analysis of multiple structural change models,'' \textit{Journal of applied econometrics}, vol. 18(1), pp. 1-22, 2003. \bibitem{DiMatteo} I. DiMatteo, C.R. Genovese, R.E.
Kass, ``Bayesian curve-fitting with free-knot splines,'' \textit{Biometrika}, vol. 88(4), pp. 1055-1071, 2001. \bibitem{Pesaran} M.H. Pesaran, D. Pettenuzzo, and A. Timmermann, ``Forecasting time series subject to multiple structural breaks,'' \textit{The Review of Economic Studies}, vol. 73(4), pp. 1057-1084, 2006. \bibitem{Ruggieri} E. Ruggieri, ``A Bayesian approach to detecting change points in climatic records,'' \textit{International Journal of Climatology}, vol. 33(2), pp. 520-528, 2013. \bibitem{Seidou} O. Seidou and T.B. Ouarda, ``Recursion-based multiple changepoint detection in multiple linear regression and application to river streamflows,'' \textit{Water Resources Research}, vol. 43(7), 2007. \bibitem{Wallstrom} G. Wallstrom, J. Liebner, and R.E. Kass, ``An implementation of Bayesian adaptive regression splines (BARS) in C with S and R wrappers,'' \textit{Journal of Statistical Software}, vol. 26(1), p. 1, 2008. \bibitem{Zeileis} A. Zeileis, F. Leisch, B. Hansen, K. Hornik, and C. Kleiber, \textit{The strucchange Package}, R manual, 2007. \end{thebibliography} \end{document}
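The simulated series described in the implementation section alternate between two value ranges, so even a very simple detector can serve as a sanity check. A hedged baseline — not the paper's sequence to sequence method — is a least-squares single change-point fit, which places the breakpoint where splitting the series minimizes the within-segment squared deviation:

```python
# Least-squares single change-point fit (baseline only; the paper's
# proposed method is a sequence to sequence network, not this).

def sse(seg):
    """Sum of squared deviations of a segment from its own mean."""
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def single_breakpoint(series, min_seg=2):
    """Index k splitting series into [0:k], [k:] with minimal total SSE."""
    best_k, best_cost = None, float("inf")
    for k in range(min_seg, len(series) - min_seg + 1):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

data = [0.0] * 50 + [10.0] * 50
k = single_breakpoint(data)  # k == 50: the split between the two regimes
```

Extending this exhaustive search to an unknown number of breakpoints is what makes the problem hard — the search space grows combinatorially — which is the motivation for procedures like BAAR and for the learned approach proposed here.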
[STATEMENT] lemma Y_cond_X_0[simp]: assumes "x \<in> set_pmf S0" shows "Y_cond_X p S0 0 x = p [] x" [PROOF STATE] proof (prove) goal (1 subgoal): 1. Y_cond_X p S0 0 x = p [] x [PROOF STEP] by (auto intro: pmf_eqI simp: assms pmf_Y_cond_X pmf_eq_0_set_pmf)
(* * Copyright 2020, Data61, CSIRO (ABN 41 687 119 230) * * SPDX-License-Identifier: GPL-2.0-only *) theory Types imports ArchTypes begin subsection \<open>Policy definition\<close> text\<open> The goal is to place limits on what untrusted agents can do while the trusted agents are not running. This supports a framing constraint in the descriptions (Hoare triples) of syscalls. Roughly, the existing proofs show the effect of the syscall, and this proof summarises what doesn't (or isn't allowed to) change. The basic intuition is to map all object references to the agent they belong to and show that changes to the object reference graph are allowed by the policy specified by the user. The policy is a labelled graph amongst agents independent of the current state, i.e. a static external summary of what should be talking to what and how. The interesting cases occur between trusted and untrusted components: e.g. we assume that it is unsafe for untrusted components (UT) to send capabilities to trusted components (T), and so T must ensure that all endpoints it shares with UT that it receives on do not have the grant bit set. \<close> type_synonym 'a agent_map = "obj_ref \<Rightarrow> 'a" type_synonym 'a agent_asid_map = "asid \<Rightarrow> 'a" type_synonym 'a agent_irq_map = "irq \<Rightarrow> 'a" type_synonym 'a agent_domain_map = "domain \<Rightarrow> 'a set" text\<open> What one agent can do to another. We allow multiple edges between agents in the graph. Control is special. It implies the ability to do pretty much anything, including getting access to the other rights, creating, removing, etc. DeleteDerived allows you to delete a cap derived from a cap you own. \<close> datatype auth = Control | Receive | SyncSend | Notify | Reset | Grant | Call | Reply | Write | Read | DeleteDerived | AAuth arch_auth text\<open> The interesting case is for endpoints. Consider an EP between T and UT (across a trust boundary).
We want to be able to say that all EPs and tcbs that T does not expose to UT do not change when it is not running. If UT had a direct Send right to T then integrity (see below) could not guarantee this, as it doesn't know which EP can be changed by UT. Thus we set things up like this (all distinct labels): T -Receive-> EP <-Send- UT Now UT can interfere with EP and all of T's tcbs blocked for receive on EP, but not endpoints internal to T, or tcbs blocked on other (suitably labelled) EPs, etc. \<close> type_synonym 'a auth_graph = "('a \<times> auth \<times> 'a) set" text \<open> Each global namespace will need a labeling function. We map each scheduling domain to a single label; concretely, each tcb in a scheduling domain has to have the same label. We will want to weaken this in the future. The booleans @{text pasMayActivate} and @{text pasMayEditReadyQueues} are used to weaken the integrity property. When @{const True}, @{text pasMayActivate} causes the integrity property to permit activation of newly-scheduled threads. Likewise, @{text pasMayEditReadyQueues} has the integrity property permit the removal of threads from ready queues, as occurs when scheduling a new domain for instance. By setting each of these @{const False} we get a more constrained integrity property that is useful for establishing some of the proof obligations for the infoflow proofs, particularly those over @{const handle_event} that neither activates new threads nor schedules new domains. The @{text pasDomainAbs} relation describes which labels may run in a given scheduler domain. This relation is not relevant to integrity but will be used in the information flow theory (with additional constraints on its structure). 
\<close> record 'a PAS = pasObjectAbs :: "'a agent_map" pasASIDAbs :: "'a agent_asid_map" pasIRQAbs :: "'a agent_irq_map" pasPolicy :: "'a auth_graph" pasSubject :: "'a" \<comment> \<open>The active label\<close> pasMayActivate :: "bool" pasMayEditReadyQueues :: "bool" pasMaySendIrqs :: "bool" pasDomainAbs :: "'a agent_domain_map" text\<open> Very often we want to say that the agent currently running owns a given pointer. \<close> abbreviation is_subject :: "'a PAS \<Rightarrow> obj_ref \<Rightarrow> bool" where "is_subject aag ptr \<equiv> pasObjectAbs aag ptr = pasSubject aag" text\<open> Also we often want to say the current agent can do something to a pointer that he doesn't own but has some authority to. \<close> abbreviation(input) abs_has_auth_to :: "'a PAS \<Rightarrow> auth \<Rightarrow> obj_ref \<Rightarrow> obj_ref \<Rightarrow> bool" where "abs_has_auth_to aag auth ptr ptr' \<equiv> (pasObjectAbs aag ptr, auth, pasObjectAbs aag ptr') \<in> pasPolicy aag" abbreviation(input) aag_has_auth_to :: "'a PAS \<Rightarrow> auth \<Rightarrow> obj_ref \<Rightarrow> bool" where "aag_has_auth_to aag auth ptr \<equiv> (pasSubject aag, auth, pasObjectAbs aag ptr) \<in> pasPolicy aag" abbreviation aag_subjects_have_auth_to_label :: "'a set \<Rightarrow> 'a PAS \<Rightarrow> auth \<Rightarrow> 'a \<Rightarrow> bool" where "aag_subjects_have_auth_to_label subs aag auth label \<equiv> \<exists>s \<in> subs. (s, auth, label) \<in> pasPolicy aag" abbreviation aag_subjects_have_auth_to :: "'a set \<Rightarrow> 'a PAS \<Rightarrow> auth \<Rightarrow> obj_ref \<Rightarrow> bool" where "aag_subjects_have_auth_to subs aag auth oref \<equiv> aag_subjects_have_auth_to_label subs aag auth (pasObjectAbs aag oref)" subsection \<open>Misc. definitions\<close> definition ptr_range where "ptr_range p sz \<equiv> {p .. 
p + 2 ^ sz - 1}" lemma ptr_range_0[simp]: "ptr_range (p :: obj_ref) 0 = {p}" unfolding ptr_range_def by simp definition tcb_states_of_state where "tcb_states_of_state s \<equiv> \<lambda>p. option_map tcb_state (get_tcb p s)" fun can_receive_ipc :: "thread_state \<Rightarrow> bool" where "can_receive_ipc (BlockedOnReceive _ _) = True" | "can_receive_ipc (BlockedOnSend _ pl) = (sender_is_call pl \<and> (sender_can_grant pl \<or> sender_can_grant_reply pl))" | "can_receive_ipc (BlockedOnNotification _) = True" | "can_receive_ipc BlockedOnReply = True" | "can_receive_ipc _ = False" end
(* This program is free software; you can redistribute it and/or *) (* modify it under the terms of the GNU Lesser General Public License *) (* as published by the Free Software Foundation; either version 2.1 *) (* of the License, or (at your option) any later version. *) (* *) (* This program is distributed in the hope that it will be useful, *) (* but WITHOUT ANY WARRANTY; without even the implied warranty of *) (* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *) (* GNU General Public License for more details. *) (* *) (* You should have received a copy of the GNU Lesser General Public *) (* License along with this program; if not, write to the Free *) (* Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA *) (* 02110-1301 USA *) (* MACHINE *) (* Common descriptions for all abstract machines *) Require Import List. Require Export fifo. Require Export table. Require Export reduce. Require Export sigma2. Unset Standard Proposition Elimination Names. Section MACHINE. Parameter Site : Set. Parameter owner : Site. Axiom eq_site_dec : eq_dec Site. Definition change_site (E : Set) := change_x0 Site E eq_site_dec. Lemma that_site : forall (E : Set) (f : Site -> E) (s0 : Site) (x : E), change_site E f s0 x s0 = x. Proof. intros; unfold change_site in |- *; apply here. Qed. Lemma other_site : forall (E : Set) (f : Site -> E) (s0 s1 : Site) (x : E), s0 <> s1 -> change_site E f s0 x s1 = f s1. Proof. intros; unfold change_site in |- *; apply elsewhere; trivial. Qed. (* pairs of sites correspond to the message queues *) Definition eq_queue_dec := eq_couple_dec Site Site eq_site_dec eq_site_dec. Definition change_queue (Q : Set) := update2_table Site eq_site_dec Q. Lemma that_queue : forall (E : Set) (f : Site -> Site -> E) (s0 s1 : Site) (x : E), change_queue E f s0 s1 x s0 s1 = x. Proof. intros; unfold change_queue in |- *; unfold update2_table in |- *; apply here2. Qed. 
Lemma other_queue : forall (E : Set) (f : Site -> Site -> E) (s0 s1 s2 s3 : Site) (x : E), s2 <> s0 \/ s3 <> s1 -> change_queue E f s0 s1 x s2 s3 = f s2 s3. Proof. intros; unfold change_queue in |- *; unfold update2_table in |- *; apply elsewhere2; trivial. Qed. (* l'ensemble des sites est fini, la liste des sites sert a parcourir et \`a sommer sur l'ensemble des sites *) Parameter LS : list Site. Axiom finite_site : list_of_elements Site eq_site_dec LS. Lemma in_s_LS : forall s : Site, In s LS. Proof. intros; apply only_once_in with (eq_E_dec := eq_site_dec); exact (finite_site s). Qed. Variable Data : Set. Definition Bag_of_Data := Site -> Site -> queue Data. End MACHINE. Section M_ACTION. Variable Message : Set. Let Bag_of_message := Bag_of_Data Message. (* This is taken straight from Jean's file *) Definition Post_message (m0 : Message) (b0 : Bag_of_message) (s0 s1 : Site) := change_queue (queue Message) b0 s0 s1 (input Message m0 (b0 s0 s1)). Definition Collect_message (b0 : Bag_of_message) (s0 s1 : Site) := change_queue (queue Message) b0 s0 s1 (first_out Message (b0 s0 s1)). Lemma post_here : forall (b0 : Bag_of_message) (m0 : Message) (s0 s1 : Site), Post_message m0 b0 s0 s1 s0 s1 = input Message m0 (b0 s0 s1). Proof. intros; unfold Post_message in |- *; apply that_queue. Qed. Lemma post_elsewhere : forall (b0 : Bag_of_message) (m0 : Message) (s0 s1 s2 s3 : Site), s0 <> s2 \/ s1 <> s3 -> Post_message m0 b0 s0 s1 s2 s3 = b0 s2 s3. Proof. intros; unfold Post_message in |- *; apply other_queue; elim H; auto. Qed. Lemma collect_here : forall (b0 : Bag_of_message) (s0 s1 : Site), Collect_message b0 s0 s1 s0 s1 = first_out Message (b0 s0 s1). Proof. intros; unfold Collect_message in |- *; apply that_queue. Qed. Lemma collect_elsewhere : forall (b0 : Bag_of_message) (s0 s1 s2 s3 : Site), s0 <> s2 \/ s1 <> s3 -> Collect_message b0 s0 s1 s2 s3 = b0 s2 s3. Proof. intros; unfold Collect_message in |- *; apply other_queue; elim H; auto. Qed. End M_ACTION. 
Section IN_Q_EFFECT. Variable Message : Set. Let Bag_of_message := Bag_of_Data Message. (* consequence sur l'appartenance *) Lemma in_post : forall (m m' : Message) (bom : Bag_of_message) (s1 s2 s3 s4 : Site), m <> m' -> In_queue Message m (Post_message Message m' bom s1 s2 s3 s4) -> In_queue Message m (bom s3 s4). Proof. intros m m' bom s1 s2 s3 s4; case (eq_queue_dec s1 s3 s2 s4); intro. decompose [and] a; rewrite H; rewrite H0. rewrite post_here. intros; apply in_q_input with (d' := m'); trivial. rewrite post_elsewhere; auto. Qed. Lemma not_in_post : forall (m m' : Message) (bom : Bag_of_message) (s1 s2 s3 s4 : Site), m <> m' -> ~ In_queue Message m (bom s3 s4) -> ~ In_queue Message m (Post_message Message m' bom s1 s2 s3 s4). Proof. intros m m' bom s1 s2 s3 s4; case (eq_queue_dec s1 s3 s2 s4); intro. decompose [and] a; rewrite H; rewrite H0. rewrite post_here. intros H1 H2. simpl in |- *. intuition. intros H H0. rewrite post_elsewhere. auto. auto. Qed. Lemma not_in_collect : forall (m : Message) (bom : Bag_of_message) (s1 s2 s3 s4 : Site), ~ In_queue Message m (bom s3 s4) -> ~ In_queue Message m (Collect_message Message bom s1 s2 s3 s4). Proof. intros m bom s1 s2 s3 s4; case (eq_queue_dec s1 s3 s2 s4); intro. decompose [and] a; rewrite H; rewrite H0. rewrite collect_here. intro H1. apply not_in_q_output. auto. intros H1. rewrite collect_elsewhere. auto. auto. Qed. Lemma in_collect : forall (m : Message) (bom : Bag_of_message) (s1 s2 s3 s4 : Site), In_queue Message m (Collect_message Message bom s1 s2 s3 s4) -> In_queue Message m (bom s3 s4). Proof. intros m bom s1 s2 s3 s4; case (eq_queue_dec s1 s3 s2 s4); intro. decompose [and] a; rewrite H; rewrite H0. rewrite collect_here; intros; apply in_q_output; trivial. rewrite collect_elsewhere; auto. Qed. Hypothesis eq_message_dec : eq_dec Message. Lemma in_queue_decidable : forall (m : Message) (q : queue Message), {In_queue Message m q} + {~ In_queue Message m q}. Proof. simple induction q. right; simpl in |- *. 
simpl in |- *; red in |- *; trivial. intros d q0 H. case (eq_message_dec m d). intro; left; left. trivial. intro; simpl in |- *; elim H. left; right; auto. intro; right; intuition. Qed. End IN_Q_EFFECT. Section REDUCE_EFFECT. Variable Message : Set. Let Bag_of_message := Bag_of_Data Message. Variable f : Message -> Z. Lemma reduce_post_message : forall (m : Message) (s1 s2 : Site) (b : Bag_of_message), reduce Message f (Post_message Message m b s1 s2 s1 s2) = (reduce Message f (b s1 s2) + f m)%Z. Proof. intros m s1 s2 b. rewrite post_here. apply reduce_append_nil1. Qed. Lemma reduce_collect_message : forall (m : Message) (s1 s2 : Site) (b : Bag_of_message), first Message (b s1 s2) = value Message m -> reduce Message f (Collect_message Message b s1 s2 s1 s2) = (reduce Message f (b s1 s2) - f m)%Z. Proof. intros m s1 s2 b H. rewrite collect_here. apply reduce_first_out. auto. Qed. Lemma reduce_post_message_null : forall (m : Message) (s1 s2 s3 s4 : Site) (b : Bag_of_message), f m = 0%Z -> reduce Message f (Post_message Message m b s1 s2 s3 s4) = reduce Message f (b s3 s4). Proof. intros m s1 s2 s3 s4 b H. case (eq_queue_dec s1 s3 s2 s4). intros a. decompose [and] a. rewrite H0; rewrite H1. rewrite post_here. simpl in |- *. rewrite H. omega. intro o. rewrite post_elsewhere. auto. auto. Qed. Lemma reduce_collect_message_null : forall (m : Message) (s1 s2 s3 s4 : Site) (b : Bag_of_message), first Message (b s1 s2) = value Message m -> f m = 0%Z -> reduce Message f (Collect_message Message b s1 s2 s3 s4) = reduce Message f (b s3 s4). Proof. intros m s1 s2 s3 s4 b H H0. case (eq_queue_dec s1 s3 s2 s4). intros a. decompose [and] a. rewrite H1; rewrite H2. rewrite collect_here. rewrite reduce_first_out with (m := m). rewrite H0. omega. rewrite <- H1; rewrite <- H2; auto. intro o. rewrite collect_elsewhere. auto. auto. Qed. End REDUCE_EFFECT. Section SIG_SEND. Let Send_T := Site -> Z. 
Definition sigma_send_table (t : Send_T) := sigma_table Site LS Z (fun (s : Site) (x : Z) => x) t. Definition Inc_send_table (t0 : Send_T) (s0 : Site) := update_table Site eq_site_dec Z t0 s0 (t0 s0 + 1)%Z. Definition Dec_send_table (t0 : Send_T) (s0 : Site) := update_table Site eq_site_dec Z t0 s0 (t0 s0 - 1)%Z. Let Date_T := Site -> nat. Definition Set_date_table (d0 : Date_T) (s0 : Site) (new_date : nat) := update_table Site eq_site_dec nat d0 s0 new_date. End SIG_SEND. Section SEND_EFFECT. Let Send_T := Site -> Z. Lemma sigma_inc_send_table : forall (t : Send_T) (s : Site), sigma_send_table (Inc_send_table t s) = (sigma_send_table t + 1)%Z. Proof. intros t s. unfold Inc_send_table in |- *. unfold sigma_send_table in |- *. rewrite (sigma_table_change Site eq_site_dec LS Z t s (t s + 1)%Z (fun (s : Site) (x : Z) => x)). omega. apply finite_site. Qed. Lemma sigma_dec_send_table : forall (t : Send_T) (s : Site), sigma_send_table (Dec_send_table t s) = (sigma_send_table t - 1)%Z. Proof. intros t s. unfold Dec_send_table in |- *. unfold sigma_send_table in |- *. rewrite (sigma_table_change Site eq_site_dec LS Z t s (t s - 1)%Z (fun (s : Site) (x : Z) => x)). omega. apply finite_site. Qed. End SEND_EFFECT. Section SIG_RECV. Let Recv_T := Site -> bool. Definition sigma_receive_table (t : Recv_T) := sigma_table Site LS bool (fun s : Site => Int) t. Definition Set_rec_table (r0 : Recv_T) (s0 : Site) := change_site bool r0 s0 true. Definition Reset_rec_table (r0 : Recv_T) (s0 : Site) := change_site bool r0 s0 false. Lemma sigma_set_receive_table : forall (t : Recv_T) (s : Site), t s = false -> sigma_receive_table (Set_rec_table t s) = (sigma_receive_table t + 1)%Z. Proof. intros t s H. unfold Set_rec_table in |- *. unfold sigma_receive_table in |- *. rewrite (sigma_table_change Site eq_site_dec LS bool t s true). rewrite H. simpl in |- *. omega. apply finite_site. Qed. 
Lemma sigma_reset_receive_table : forall (t : Recv_T) (s : Site), t s = true -> sigma_receive_table (Reset_rec_table t s) = (sigma_receive_table t - 1)%Z. Proof. intros t s H. unfold Reset_rec_table in |- *. unfold sigma_receive_table in |- *. rewrite (sigma_table_change Site eq_site_dec LS bool t s false). rewrite H. simpl in |- *. omega. apply finite_site. Qed. End SIG_RECV. Section BUT_OWNER. Variable Data : Set. Let Table := Site -> Data. Variable f : Site -> Data -> Z. Definition sigma_table_but_owner (st : Table) := sigma_but Site owner eq_site_dec LS (fun s : Site => f s (st s)). Lemma sigma_sigma_but_owner : forall t : Table, sigma_table Site LS Data f t = (sigma_table_but_owner t + f owner (t owner))%Z. Proof. intros t. unfold sigma_table_but_owner in |- *. unfold sigma_table in |- *. rewrite (sigma_sigma_but Site owner eq_site_dec). auto. apply finite_site. Qed. End BUT_OWNER.
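The `Post_message`/`Collect_message` operations in the Coq development above manipulate a bag of FIFO queues indexed by (sender, receiver) pairs: posting appends to exactly one queue, collecting pops the first element of exactly one queue, and all other queues are untouched (the `post_elsewhere`/`collect_elsewhere` lemmas). A minimal Python sketch of that interface — the function names and dict-based representation are illustrative, not part of the Coq sources:

```python
from collections import deque

# A "bag of messages" maps a (sender, receiver) pair to a FIFO queue.
# Both operations are persistent: they return a fresh bag and leave the
# input bag unchanged, mirroring the functional update in the Coq model.

def post_message(bag, s0, s1, m):
    # Append m to the queue from s0 to s1; every other queue is unchanged.
    new = {k: deque(v) for k, v in bag.items()}
    new.setdefault((s0, s1), deque()).append(m)
    return new

def collect_message(bag, s0, s1):
    # Remove the first (oldest) message of the queue from s0 to s1.
    new = {k: deque(v) for k, v in bag.items()}
    new[(s0, s1)].popleft()
    return new
```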
lemma chinese_remainder: fixes a::nat assumes ab: "coprime a b" and a: "a \<noteq> 0" and b: "b \<noteq> 0" shows "\<exists>x q1 q2. x = u + q1 * a \<and> x = v + q2 * b"
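The existence claim of this lemma can be made concrete with the extended Euclidean algorithm: for coprime moduli, Bézout coefficients yield a simultaneous representative. A minimal Python sketch (illustrative only; the helper names `ext_gcd` and `crt` are hypothetical, and unlike the natural-number lemma it works with remainders modulo `a*b`):

```python
def ext_gcd(a, b):
    # returns (g, s, t) with s*a + t*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def crt(u, a, v, b):
    # assumes gcd(a, b) == 1; returns x with x % a == u % a and x % b == v % b
    g, s, t = ext_gcd(a, b)
    assert g == 1, "moduli must be coprime"
    # s*a + t*b == 1, so t*b is 1 mod a and s*a is 1 mod b
    return (u * t * b + v * s * a) % (a * b)
```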
-- Andreas, 2013-02-21 issue seems to have been fixed along with issue 796 -- {-# OPTIONS -v tc.decl:10 #-} module Issue4 where open import Common.Equality abstract abstract -- a second abstract seems to have no effect T : Set -> Set T A = A see-through : ∀ {A} → T A ≡ A see-through = refl data Ok (A : Set) : Set where ok : T (Ok A) → Ok A opaque : ∀ {A} → T A ≡ A opaque = see-through data Bad (A : Set) : Set where bad : T (Bad A) -> Bad A
import data.vect.basic import tactic.csimp universe u --- Check whether a given vector is monotonic with respect to a given binary relation. definition is_monotonic {α : Type*} (r : α → α → Prop) : ∀ {n : ℕ}, vect α n → Prop | _ vect.nil := true | _ (vect.cons _ vect.nil) := true | _ (vect.cons x (vect.cons y ys)) := r x y ∧ is_monotonic (vect.cons y ys) inductive based_chain {α : Type u} (r : α → α → Prop) : α → ℕ → Type u | base (x : α) : based_chain x 1 | cons {n : ℕ} (x : α) {y : α} (_ : r x y) (xs : based_chain y n) : based_chain x (n+1) namespace based_chain definition to_vect {α : Type*} {r : α → α → Prop} : Π {x : α} {n : ℕ}, based_chain r x n → vect α n | x _ (base _) := vect.cons x vect.nil | x _ (cons _ h xs) := vect.cons x (to_vect xs) lemma to_vect_is_monotonic {α : Type*} {r : α → α → Prop} : Π {x : α} {n : ℕ} (ch : based_chain r x n), is_monotonic r ch.to_vect | _ _ (base x) := by csimp [to_vect,is_monotonic] | _ _ (cons x h (base y)) := by csimp [to_vect,is_monotonic]; exact ⟨h,true.intro⟩ | _ _ (cons x hxy (cons y h ch)) := begin have h_ind : is_monotonic r (cons y h ch).to_vect, from to_vect_is_monotonic (cons y h ch), csimp [to_vect,is_monotonic] at *, split, show r x y, from hxy, show is_monotonic r (vect.cons y ch.to_vect), from h_ind end definition from_vect {α : Type*} {r : α → α → Prop} : Π {n : ℕ} (xs : vect α (n+1)) (hmono : is_monotonic r xs), based_chain r xs.head (n+1) | _ (vect.cons x vect.nil) hmono := based_chain.base x | n (vect.cons x (vect.cons y ys)) hmono := let tail := from_vect (vect.cons y ys) hmono.right in cons x hmono.left tail definition from_vect' {α : Type*} {r : α → α → Prop} {n : ℕ} (xs : vect α (n+1)) (hmono : is_monotonic r xs) : Σ x, based_chain r x (n+1) := ⟨xs.head, from_vect xs hmono⟩ lemma tail_heq {α : Type _} {r : α → α → Prop} (x : α) {n : ℕ} : Π {y₁ y₂ : α} (hxy₁ : r x y₁) (hxy₂ : r x y₂) {ys₁ : based_chain r y₁ n} {ys₂ : based_chain r y₂ n}, (y₁ = y₂) → ys₁ == ys₂ → cons x hxy₁ ys₁ = cons x hxy₂ ys₂ := 
begin intros _ _ _ _ _ _ hy hys, cases hy, cases hy, have : ys₁ = ys₂ := eq_of_heq hys, rw [this] end lemma to_vect_of_from {α : Type*} {r : α → α → Prop} : Π {n : ℕ} (xs : vect α (n+1)) (hmono : is_monotonic r xs), (from_vect xs hmono).to_vect = xs | _ (vect.cons x vect.nil) hmono := by dsimp [from_vect,to_vect,vect.head]; refl | _ (vect.cons x (vect.cons y ys)) hmono := begin dsimp only [from_vect,to_vect], unfold vect.head, let h_ind := to_vect_of_from (vect.cons y ys) hmono.right, rw [h_ind] end attribute [simp,reducible] lemma head_of_to_vect {α : Type*} {r : α → α → Prop} : Π {x : α} {n : ℕ} (ch : based_chain r x (n+1)), ch.to_vect.head = x | _ _ (base x) := rfl | _ _ (cons x h ys) := rfl lemma from_vect_of_to {α : Type*} {r : α → α → Prop} : Π {x : α} {n : ℕ} (ch : based_chain r x (n+1)), (from_vect' ch.to_vect (to_vect_is_monotonic ch)) = ⟨x,ch⟩ | _ _ (base x) := rfl | _ _ (cons x h (base y)) := rfl | _ _ (cons x hxy (cons y h ys)) := begin let h_ind := from_vect_of_to (cons y h ys), unfold to_vect at *, unfold from_vect' at *, apply sigma.eq; try {unfold sigma.fst at *}; try {unfold sigma.snd at *}, dsimp [from_vect], refine tail_heq x _ hxy _ _; try {refl}, exact (sigma.mk.inj h_ind).right end end based_chain
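The `is_monotonic` predicate above just demands that the relation hold between every adjacent pair of entries (and holds vacuously for empty and singleton vectors). A direct Python transliteration, illustrative only, using plain lists instead of length-indexed vectors:

```python
def is_monotonic(rel, xs):
    # True iff rel(xs[i], xs[i+1]) holds for every adjacent pair;
    # vacuously True for empty and singleton sequences.
    return all(rel(a, b) for a, b in zip(xs, xs[1:]))
```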
# Laplace transform This notebook is a short tutorial on the Laplace transform using SymPy. The main functions to use are ``laplace_transform`` and ``inverse_laplace_transform``. ```python from sympy import * ``` ```python init_session() ``` IPython console for SymPy 1.0 (Python 2.7.13-64-bit) (ground types: python) These commands were executed: >>> from __future__ import division >>> from sympy import * >>> x, y, z, t = symbols('x y z t') >>> k, m, n = symbols('k m n', integer=True) >>> f, g, h = symbols('f g h', cls=Function) >>> init_printing() Documentation can be found at http://docs.sympy.org/1.0/ Let us compute the Laplace transform from variable $t$ to $s$; since the transform integrates over $t>0$, we declare $t$ real and positive. ```python t = symbols("t", real=True, positive=True) s = symbols("s") ``` To calculate the Laplace transform of the expression $t^4$, we enter ```python laplace_transform(t**4, t, s) ``` This function returns ``(F, a, cond)`` where ``F`` is the Laplace transform of ``f``, $\mathcal{R}(s)>a$ is the half-plane of convergence, and ``cond`` are auxiliary convergence conditions. If we are not interested in the conditions for the convergence of this transform, we can use ``noconds=True`` ```python laplace_transform(t**4, t, s, noconds=True) ``` ```python fun = 1/((s-2)*(s-1)**2) fun ``` ```python inverse_laplace_transform(fun, s, t) ``` Right now, SymPy does not support the transformation of derivatives. If we do ```python laplace_transform(f(t).diff(t), t, s, noconds=True) ``` we do not obtain the expected ```python s*LaplaceTransform(f(t), t, s) - f(0) ``` or, in general, $$\mathcal{L}\lbrace f^{(n)}(t)\rbrace = s^n F(s) - \sum_{k=1}^{n} s^{n - k} f^{(k - 1)}(0)\, .$$ We can still operate with the transform of a differential equation.
For example, let us consider the equation $$\frac{d f(t)}{dt} = 3f(t) + e^{-t}\, ,$$ whose Laplace transform is $$sF(s) - f(0) = 3F(s) + \frac{1}{s+1}\, .$$ ```python eq = Eq(s*LaplaceTransform(f(t), t, s) - f(0), 3*LaplaceTransform(f(t), t, s) + 1/(s +1)) eq ``` We then solve for $F(s)$ ```python sol = solve(eq, LaplaceTransform(f(t), t, s)) sol ``` and compute the inverse Laplace transform ```python inverse_laplace_transform(sol[0], s, t) ``` and we verify this using ``dsolve`` ```python factor(dsolve(f(t).diff(t) - 3*f(t) - exp(-t))) ``` which agrees if $4C_1 = 4f(0) + 1$. It is common to use partial fraction decomposition when computing inverse Laplace transforms. We can do this using ``apart``, as follows ```python frac = 1/(x**2*(x**2 + 1)) frac ``` ```python apart(frac) ``` We can also compute the Laplace transform of the Heaviside and Dirac delta "functions" ```python laplace_transform(Heaviside(t - 3), t, s, noconds=True) ``` ```python laplace_transform(DiracDelta(t - 2), t, s, noconds=True) ``` ```python ``` The next cell changes the format of the notebook.
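As a quick self-contained sanity check of the round trip (an illustrative sketch in the same spirit as the cells above), we can transform a monomial and invert the result:

```python
from sympy import symbols, laplace_transform, inverse_laplace_transform, simplify

t = symbols("t", real=True, positive=True)
s = symbols("s")

# L{t**2} = 2/s**3 for Re(s) > 0
F = laplace_transform(t**2, t, s, noconds=True)

# Inverting recovers t**2; the Heaviside factor evaluates to 1
# because t is declared positive.
f_back = inverse_laplace_transform(F, s, t)
```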
```python from IPython.core.display import HTML def css_styling(): styles = open('./styles/custom_barba.css', 'r').read() return HTML(styles) css_styling() ``` <link href='http://fonts.googleapis.com/css?family=Fenix' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400' rel='stylesheet' type='text/css'> <style> /* Based on Lorena Barba template available at: https://github.com/barbagroup/AeroPython/blob/master/styles/custom.css*/ @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } div.cell{ width:800px; margin-left:16% !important; margin-right:auto; } h1 { font-family: 'Alegreya Sans', sans-serif; } h2 { font-family: 'Fenix', serif; } h3{ font-family: 'Fenix', serif; margin-top:12px; margin-bottom: 3px; } h4{ font-family: 'Fenix', serif; } h5 { font-family: 'Alegreya Sans', sans-serif; } div.text_cell_render{ font-family: 'Alegreya Sans',Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif; line-height: 135%; font-size: 120%; width:600px; margin-left:auto; margin-right:auto; } .CodeMirror{ font-family: "Source Code Pro"; font-size: 90%; } /* .prompt{ display: None; }*/ .text_cell_render h1 { font-weight: 200; font-size: 50pt; line-height: 100%; color:#CD2305; margin-bottom: 0.5em; margin-top: 0.5em; display: block; } .text_cell_render h5 { font-weight: 300; font-size: 16pt; color: #CD2305; font-style: italic; margin-bottom: .5em; margin-top: 0.5em; display: block; } .warning{ color: rgb( 240, 20, 20 ) } </style>
(* File: Pairwise_Majority_Rule.thy Copyright 2021 Karlsruhe Institute of Technology (KIT) *) \<^marker>\<open>creator "Stephan Bohr, Karlsruhe Institute of Technology (KIT)"\<close> \<^marker>\<open>contributor "Michael Kirsten, Karlsruhe Institute of Technology (KIT)"\<close> section \<open>Pairwise Majority Rule\<close> theory Pairwise_Majority_Rule imports "Compositional_Structures/Basic_Modules/Condorcet_Module" "Compositional_Structures/Defer_One_Loop_Composition" begin text \<open>This is the pairwise majority rule, a voting rule that implements the Condorcet criterion, i.e., it elects the Condorcet winner if it exists, otherwise a tie remains between all alternatives.\<close> subsection \<open>Definition\<close> fun pairwise_majority_rule :: "'a Electoral_Module" where "pairwise_majority_rule A p vs = elector condorcet A p vs" fun condorcet' :: "'a Electoral_Module" where "condorcet' A p vs = ((min_eliminator condorcet_score) \<circlearrowleft>\<^sub>\<exists>\<^sub>!\<^sub>d) A p vs" fun pairwise_majority_rule' :: "'a Electoral_Module" where "pairwise_majority_rule' A p vs = iterelect condorcet' A p vs" subsection \<open>Condorcet Consistency Property\<close> theorem condorcet_condorcet: "condorcet_consistency pairwise_majority_rule" proof - have "condorcet_consistency (elector condorcet)" using condorcet_is_dcc dcc_imp_cc_elector by metis thus ?thesis using condorcet_consistency2 electoral_module_def pairwise_majority_rule.simps by metis qed end
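For intuition, the behaviour described above — elect the Condorcet winner if one exists, otherwise leave all alternatives tied — can be sketched in Python. This is an illustrative sketch only, not the Isabelle definition; ballots are assumed to be strict rankings listed from most to least preferred:

```python
def condorcet_winner(alternatives, ballots):
    # a beats b iff a strict majority of ballots rank a above b
    def beats(a, b):
        pref = sum(1 for r in ballots if r.index(a) < r.index(b))
        return 2 * pref > len(ballots)

    # the Condorcet winner beats every other alternative pairwise
    for a in alternatives:
        if all(beats(a, b) for b in alternatives if b != a):
            return a
    return None  # no Condorcet winner: all alternatives stay tied
```

The second test below is the classic Condorcet paradox, where the pairwise majority relation cycles and no winner exists.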
Lock in a great price for North Shire Lodge – rated 9.4 by recent guests! Location and view were outstanding. Tim and the staff were pleasant and helpful. Breakfast in the Pub room was excellent. It's a very nice place to stay, relaxing, and comfortable! The private restaurant and lodge area was very relaxing. Hosts, owners and workers very pleasant and hospitable. The views from the room were splendid and had great sunsets. And one of the coolest, well curated DVD collections I have seen. Complimentary hot breakfast is a huge plus, delicious and great fuel for the slopes. I loved the sliders that opened up to a beautiful Mountain View. It was gorgeous! I'm literally going to write a review everywhere about this place! My husband and I stayed at the lodge for three nights and it was the best place we've ever been. Rooms are very spacious, clean and cozy. Each room has a different interior. Great view and amenities. It's owned by a great couple (Tim & Kerry) who are on site 24/7. Very friendly and always ready to help. We had a chance to spend some time with Tim at the pub and he told us some very interesting stories about the lodge. Will definitely come back. Highly recommend! Love the hosts Love the accommodations Great place to stay in Manchester Great breakfast Love Vermont! Will be back! Loved the owners and the blueberry pancakes in the morning are to die for! Quirky motel/B&B. Lady on Reception very welcoming. Breakfast (which you order in advance) was huge. North Shire Lodge This rating is a reflection of how the property compares to the industry standard when it comes to price, facilities and services available. It's based on a self-evaluation by the property. Use this rating to help choose your stay! Set in the Battenkill Valley, amid spectacular views of the Green and Taconic Mountains, this Manchester, Vermont lodge features an on-site pub. A free, chef-prepared breakfast is served daily. 
A flat-screen TV and DVD player is provided in each room at North Shire Lodge. Guests are also provided with a dining area that includes a refrigerator and dining table. Private bathrooms come with a hairdryer. The Mountain View Pub, located on site features an extensive bourbon, single malt and cocktail list. An on-site pool offers guests a place to cool down during summer. Wi-Fi is provided free of charge. North Shire Lodge is located 28 mi from Stratton and 12 mi from Bromley Mountain ski resorts. When would you like to stay at North Shire Lodge? This room features a flat-screen TV, microwave oven, refrigerator and furniture with a rustic lodge theme. House Rules North Shire Lodge takes special requests – add in the next step! North Shire Lodge accepts these cards and reserves the right to temporarily hold an amount prior to arrival. Please inform North Shire Lodge of your expected arrival time in advance. You can use the Special Requests box when booking, or contact the property directly using the contact details in your confirmation. We had no problems with our stay. Great breakfast made to order in a really cool pub. We ate on top of a piano made into a table. Nothing! We came in winter and everything was perfect for a weekend get away. It had a beautiful view from our room! Bar not open every night. If you’re late for breakfast you’ll find your hot items already cooked and reheated. Great bar and food. Great rooms. The short length of our stay ! After dinner in town we spent pleasant evening in bar with owner taking about malt whisky, bourbon and photos of his ancestors which decorate the room. My first visit here was about seven years ago. They've updated the rooms since then. The beds are very cozy, the rooms are big and I love having the sliding glass doors and outdoor furniture to sit outside. It's got more character than your average motel. Lovely, and deep, swimming pool. Make sure you sign up for the hot breakfast - it's yummy. 
Close to Manchester and all the amenities.
using Plots include("rw_1d.jl") nsteps = length(ARGS)>0 ? parse(Int, ARGS[1]) : 1000 prob = length(ARGS)>1 ? parse(Float64, ARGS[2]) : 0.5 result = randomwalk(nsteps,prob) pl = plot(result, xaxis=("t"), yaxis=("x"), color=[:black], legend=false) savefig(pl, "rw_1d.pdf")
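The included `rw_1d.jl` is not shown here. Under the assumed interface `randomwalk(nsteps, prob)` returning the trajectory, a minimal Python sketch of a biased ±1 random walk (illustrative; the seed parameter is added for reproducibility):

```python
import random

def randomwalk(nsteps, prob, seed=None):
    # Returns positions x_0, ..., x_nsteps starting at 0;
    # each step is +1 with probability `prob`, else -1.
    rng = random.Random(seed)
    xs = [0]
    for _ in range(nsteps):
        xs.append(xs[-1] + (1 if rng.random() < prob else -1))
    return xs
```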
lemma closure_scaleR: fixes S :: "'a::real_normed_vector set" shows "((*\<^sub>R) c) ` (closure S) = closure (((*\<^sub>R) c) ` S)"
Formal statement is: lemma has_vector_derivative_polynomial_function: fixes p :: "real \<Rightarrow> 'a::euclidean_space" assumes "polynomial_function p" obtains p' where "polynomial_function p'" "\<And>x. (p has_vector_derivative (p' x)) (at x)" Informal statement is: If $p$ is a polynomial function, then there exists a polynomial function $p'$ such that $p'$ is the derivative of $p$.
\name{uh} \alias{uh} \title{ Convert units } \description{ Convert units } \usage{ uh(...) } \arguments{ \item{...}{passed to \code{\link{convert_length}}} } \details{ This function is the same as \code{\link{convert_length}}. } \author{ Zuguang Gu <[email protected]> } \examples{ # see example in `convert_length` page NULL }
Require Import Crypto.Arithmetic.PrimeFieldTheorems. Require Import Crypto.Specific.solinas64_2e256m2e32m977_7limbs.Synthesis. (* TODO : change this to field once field isomorphism happens *) Definition carry : { carry : feBW_loose -> feBW_tight | forall a, phiBW_tight (carry a) = (phiBW_loose a) }. Proof. Set Ltac Profiling. Time synthesize_carry (). Show Ltac Profile. Time Defined. Print Assumptions carry.
/- Copyright (c) 2022 Patrick Massot. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Patrick Massot, Floris van Doorn, Yury Kudryashov -/ import order.filter.lift import order.filter.at_top_bot /-! # The filter of small sets This file defines the filter of small sets w.r.t. a filter `f`, which is the largest filter containing all powersets of members of `f`. `g` converges to `f.small_sets` if for all `s ∈ f`, eventually we have `g x ⊆ s`. An example usage is that if `f : ι → E → ℝ` is a family of nonnegative functions with integral 1, then saying that `λ i, support (f i)` tendsto `(𝓝 0).small_sets` is a way of saying that `f` tends to the Dirac delta distribution. -/ open_locale filter open filter set variables {α β : Type*} {ι : Sort*} namespace filter variables {l l' la : filter α} {lb : filter β} /-- The filter `l.small_sets` is the largest filter containing all powersets of members of `l`. -/ def small_sets (l : filter α) : filter (set α) := l.lift' powerset lemma small_sets_eq_generate {f : filter α} : f.small_sets = generate (powerset '' f.sets) := by { simp_rw [generate_eq_binfi, small_sets, infi_image], refl } lemma has_basis.small_sets {p : ι → Prop} {s : ι → set α} (h : has_basis l p s) : has_basis l.small_sets p (λ i, 𝒫 (s i)) := h.lift' monotone_powerset lemma has_basis_small_sets (l : filter α) : has_basis l.small_sets (λ t : set α, t ∈ l) powerset := l.basis_sets.small_sets /-- `g` converges to `f.small_sets` if for all `s ∈ f`, eventually we have `g x ⊆ s`. 
-/ lemma tendsto_small_sets_iff {f : α → set β} : tendsto f la lb.small_sets ↔ ∀ t ∈ lb, ∀ᶠ x in la, f x ⊆ t := (has_basis_small_sets lb).tendsto_right_iff lemma eventually_small_sets {p : set α → Prop} : (∀ᶠ s in l.small_sets, p s) ↔ ∃ s ∈ l, ∀ t ⊆ s, p t := eventually_lift'_iff monotone_powerset lemma eventually_small_sets' {p : set α → Prop} (hp : ∀ ⦃s t⦄, s ⊆ t → p t → p s) : (∀ᶠ s in l.small_sets, p s) ↔ ∃ s ∈ l, p s := eventually_small_sets.trans $ exists₂_congr $ λ s hsf, ⟨λ H, H s subset.rfl, λ hs t ht, hp ht hs⟩ lemma frequently_small_sets {p : set α → Prop} : (∃ᶠ s in l.small_sets, p s) ↔ ∀ t ∈ l, ∃ s ⊆ t, p s := l.has_basis_small_sets.frequently_iff lemma frequently_small_sets_mem (l : filter α) : ∃ᶠ s in l.small_sets, s ∈ l := frequently_small_sets.2 $ λ t ht, ⟨t, subset.rfl, ht⟩ lemma has_antitone_basis.tendsto_small_sets {ι} [preorder ι] {s : ι → set α} (hl : l.has_antitone_basis s) : tendsto s at_top l.small_sets := tendsto_small_sets_iff.2 $ λ t ht, hl.eventually_subset ht @[mono] lemma monotone_small_sets : monotone (@small_sets α) := monotone_lift' monotone_id monotone_const @[simp] lemma small_sets_bot : (⊥ : filter α).small_sets = pure ∅ := by rw [small_sets, lift'_bot monotone_powerset, powerset_empty, principal_singleton] @[simp] lemma small_sets_top : (⊤ : filter α).small_sets = ⊤ := by rw [small_sets, lift'_top, powerset_univ, principal_univ] @[simp] lemma small_sets_principal (s : set α) : (𝓟 s).small_sets = 𝓟(𝒫 s) := lift'_principal monotone_powerset lemma small_sets_comap (l : filter β) (f : α → β) : (comap f l).small_sets = l.lift' (powerset ∘ preimage f) := comap_lift'_eq2 monotone_powerset lemma comap_small_sets (l : filter β) (f : α → set β) : comap f l.small_sets = l.lift' (preimage f ∘ powerset) := comap_lift'_eq lemma small_sets_infi {f : ι → filter α} : (infi f).small_sets = (⨅ i, (f i).small_sets) := lift'_infi_of_map_univ powerset_inter powerset_univ lemma small_sets_inf (l₁ l₂ : filter α) : (l₁ ⊓ l₂).small_sets = l₁.small_sets 
⊓ l₂.small_sets := lift'_inf _ _ powerset_inter instance small_sets_ne_bot (l : filter α) : ne_bot l.small_sets := (lift'_ne_bot_iff monotone_powerset).2 $ λ _ _, powerset_nonempty lemma tendsto.small_sets_mono {s t : α → set β} (ht : tendsto t la lb.small_sets) (hst : ∀ᶠ x in la, s x ⊆ t x) : tendsto s la lb.small_sets := begin rw [tendsto_small_sets_iff] at ht ⊢, exact λ u hu, (ht u hu).mp (hst.mono $ λ a hst ht, subset.trans hst ht) end @[simp] lemma eventually_small_sets_eventually {p : α → Prop} : (∀ᶠ s in l.small_sets, ∀ᶠ x in l', x ∈ s → p x) ↔ ∀ᶠ x in l ⊓ l', p x := calc _ ↔ ∃ s ∈ l, ∀ᶠ x in l', x ∈ s → p x : eventually_small_sets' $ λ s t hst ht, ht.mono $ λ x hx hs, hx (hst hs) ... ↔ ∃ (s ∈ l) (t ∈ l'), ∀ x, x ∈ t → x ∈ s → p x : by simp only [eventually_iff_exists_mem] ... ↔ ∀ᶠ x in l ⊓ l', p x : by simp only [eventually_inf, and_comm, mem_inter_iff, ← and_imp] @[simp] lemma eventually_small_sets_forall {p : α → Prop} : (∀ᶠ s in l.small_sets, ∀ x ∈ s, p x) ↔ ∀ᶠ x in l, p x := by simpa only [inf_top_eq, eventually_top] using @eventually_small_sets_eventually α l ⊤ p alias eventually_small_sets_forall ↔ eventually.of_small_sets eventually.small_sets @[simp] lemma eventually_small_sets_subset {s : set α} : (∀ᶠ t in l.small_sets, t ⊆ s) ↔ s ∈ l := eventually_small_sets_forall end filter
Require Import SpecDeps. Require Import RData. Require Import EventReplay. Require Import MoverTypes. Require Import Constants. Require Import CommonLib. Require Import AbsAccessor.Spec. Local Open Scope Z_scope. Section SpecLow. Definition realm_ns_step_spec0 (adt: RData) : option (RData * Z64) := when' ret, adt == user_step_spec adt; rely is_int64 ret; Some (adt, VZ64 ret) . End SpecLow.
{-# OPTIONS --cubical --safe #-} module Subtyping where open import Cubical.Core.Everything hiding (Type) open import Cubical.Foundations.Prelude using (refl; sym; symP; cong; _∙_; transport; subst; transportRefl; transport-filler; toPathP; fromPathP; congP) open import Cubical.Foundations.Transport using (transport⁻Transport) open import Cubical.Data.Nat using (ℕ; zero; suc; _+_; +-comm; snotz; znots; +-suc; +-zero; injSuc; isSetℕ) open import Cubical.Data.Nat.Order using (_≟_; lt; eq; gt; ≤-k+; ≤-+k; ≤-trans; pred-≤-pred; _≤_; _<_; ¬m+n<m; ¬-<-zero; suc-≤-suc; <-k+; <-+k; zero-≤; m≤n-isProp; <≤-trans; ≤-refl; <-weaken) open import Cubical.Data.Fin using (Fin; toℕ; fzero; fsuc; Fin-fst-≡) open import Cubical.Data.Sigma using (_×_; _,_; fst; snd; ΣPathP; Σ-cong-snd) open import Cubical.Data.Sum using (_⊎_; inl; inr) import Cubical.Data.Empty as Empty open import Label using (Label; Record; nil; cons; _∈_; find; l∈r-isProp) -- A term with at most `n` free variables. data Term (n : ℕ) : Set where var : Fin n -> Term n abs : Term (suc n) -> Term n _·_ : Term n -> Term n -> Term n rec : forall {l} -> Record (Term n) l -> Term n _#_ : Term n -> Label -> Term n shift : forall {m : ℕ} (n : ℕ) (i : Fin (suc m)) (e : Term m) -> Term (m + n) shiftRecord : forall {m : ℕ} (n : ℕ) (i : Fin (suc m)) {l : Label} (r : Record (Term m) l) -> Record (Term (m + n)) l shift {m} n i (var j) with toℕ i ≟ toℕ j ... | lt _ = var (toℕ j + n , ≤-+k (snd j)) ... | eq _ = var (toℕ j + n , ≤-+k (snd j)) ... 
| gt _ = var (toℕ j , ≤-trans (snd j) (n , +-comm n m)) shift n i (abs e) = abs (shift n (fsuc i) e) shift n i (e · e₁) = shift n i e · shift n i e₁ shift n i (rec r) = rec (shiftRecord n i r) shift n i (e # l) = shift n i e # l shiftRecord n i nil = nil shiftRecord n i (cons r l' x x₁) = cons (shiftRecord n i r) l' (shift n i x) x₁ subst′ : forall {m n : ℕ} -> (e' : Term m) -> (i : Fin (suc n)) -> (e1 : Term (suc (n + m))) -> Term (n + m) substRecord : forall {m n : ℕ} -> (e' : Term m) -> (i : Fin (suc n)) -> forall {l : Label} -> (r1 : Record (Term (suc (n + m))) l) -> Record (Term (n + m)) l subst′ {m} {n} e' i (var j) with toℕ j ≟ toℕ i ... | lt j<i = var (toℕ j , <≤-trans j<i (pred-≤-pred (≤-trans (snd i) (suc-≤-suc (m , +-comm m n))))) ... | eq _ = transport (λ i₁ → Term (+-comm m n i₁)) (shift n fzero e') ... | gt i<j with j ... | zero , _ = Empty.rec (¬-<-zero i<j) ... | suc fst₁ , snd₁ = var (fst₁ , pred-≤-pred snd₁) subst′ e' i (abs e1) = abs (subst′ e' (fsuc i) e1) subst′ e' i (e1 · e2) = subst′ e' i e1 · subst′ e' i e2 subst′ e' i (rec r) = rec (substRecord e' i r) subst′ e' i (e # l) = subst′ e' i e # l substRecord e' i nil = nil substRecord e' i (cons r1 l' x x₁) = cons (substRecord e' i r1) l' (subst′ e' i x) x₁ infix 3 _▷_ data _▷_ {n : ℕ} : Term n -> Term n -> Set where beta/=> : forall {e1 : Term (suc n)} {e2 : Term n} -> abs e1 · e2 ▷ subst′ e2 fzero e1 cong/app : forall {e1 e1' e2 : Term n} -> e1 ▷ e1' -> e1 · e2 ▷ e1' · e2 beta/rec : forall {l'} {r : Record (Term n) l'} {l} {l∈r : l ∈ r} -> rec r # l ▷ find l r l∈r cong/# : forall {e e' : Term n} {l} -> e ▷ e' -> e # l ▷ e' # l data Base : Set where Unit : Base Int : Base infixr 8 _=>_ data Type : Set where base : Base -> Type Top : Type _=>_ : Type -> Type -> Type rec : forall {l} -> Record Type l -> Type data Context : ℕ -> Set where [] : Context 0 _∷_ : forall {n} -> Type -> Context n -> Context (suc n) data _[_]=_ : forall {n} -> Context n -> Fin n -> Type -> Set where here : forall {n} A 
(G : Context n) -> (A ∷ G) [ 0 , suc-≤-suc zero-≤ ]= A there : forall {n} {A} B {G : Context n} {i} -> G [ i ]= A -> (B ∷ G) [ fsuc i ]= A lookup : forall {n} -> Context n -> Fin n -> Type lookup [] (fst₁ , snd₁) = Empty.rec (¬-<-zero snd₁) lookup (A ∷ G) (zero , snd₁) = A lookup (A ∷ G) (suc fst₁ , snd₁) = lookup G (fst₁ , pred-≤-pred snd₁) lookup-[]= : forall {n} (G : Context n) i -> G [ i ]= lookup G i lookup-[]= [] (fst₁ , snd₁) = Empty.rec (¬-<-zero snd₁) lookup-[]= (A ∷ G) (zero , snd₁) = subst (λ f -> (A ∷ G) [ f ]= A) (Fin-fst-≡ refl) (here A G) lookup-[]= (A ∷ G) (suc fst₁ , snd₁) = subst (λ f -> (A ∷ G) [ f ]= lookup G (fst₁ , pred-≤-pred snd₁)) (Fin-fst-≡ refl) (there A (lookup-[]= G (fst₁ , pred-≤-pred snd₁))) _++_ : forall {m n} -> Context m -> Context n -> Context (m + n) [] ++ G2 = G2 (A ∷ G1) ++ G2 = A ∷ (G1 ++ G2) ++-[]= : forall {m n} {G : Context m} (G' : Context n) {j : Fin m} {A} -> G [ j ]= A -> (G' ++ G) [ n + toℕ j , <-k+ (snd j) ]= A ++-[]= [] l = subst (λ f → _ [ f ]= _) (Fin-fst-≡ refl) l ++-[]= {G = G} (C ∷ G') {A = A} l = subst (λ f → ((C ∷ G') ++ G) [ f ]= A) (Fin-fst-≡ refl) (there C (++-[]= G' l)) inserts : forall {m n} -> Fin (suc m) -> Context n -> Context m -> Context (m + n) inserts {m} {n} (zero , snd₁) G' G = subst Context (+-comm n m) (G' ++ G) inserts (suc fst₁ , snd₁) G' [] = Empty.rec (¬-<-zero (pred-≤-pred snd₁)) inserts (suc fst₁ , snd₁) G' (A ∷ G) = A ∷ inserts (fst₁ , pred-≤-pred snd₁) G' G inserts-[]=-unaffected : forall {m n} (G : Context m) (G' : Context n) {j : Fin m} (i : Fin (suc m)) {A} -> toℕ j < toℕ i -> G [ j ]= A -> inserts i G' G [ toℕ j , ≤-trans (snd j) (n , +-comm n m) ]= A inserts-[]=-unaffected (A ∷ G) G' (zero , snd₁) j<i (here .A .G) = Empty.rec (¬-<-zero j<i) inserts-[]=-unaffected (A ∷ G) G' (suc fst₁ , snd₁) j<i (here .A .G) = subst (λ f → inserts (suc fst₁ , snd₁) G' (A ∷ G) [ f ]= A) (Fin-fst-≡ refl) (here A (inserts (fst₁ , pred-≤-pred snd₁) G' G)) inserts-[]=-unaffected (B ∷ _) G' (zero , snd₁) 
j<i (there .B l) = Empty.rec (¬-<-zero j<i) inserts-[]=-unaffected (B ∷ G) G' (suc fst₁ , snd₁) {A = A} j<i (there .B l) = subst (λ f → inserts (suc fst₁ , snd₁) G' (B ∷ G) [ f ]= A) (Fin-fst-≡ refl) (there B (inserts-[]=-unaffected G G' (fst₁ , pred-≤-pred snd₁) (pred-≤-pred j<i) l)) helper1 : forall m n (j : Fin m) -> PathP (λ i -> Fin (+-comm n m i)) (n + toℕ j , <-k+ (snd j)) (toℕ j + n , ≤-+k (snd j)) helper1 m n j = ΣPathP (+-comm n (toℕ j) , toPathP (m≤n-isProp _ _)) helper2 : forall m n (G : Context m) (G' : Context n) -> PathP (λ i → Context (+-comm n m i)) (G' ++ G) (subst Context (+-comm n m) (G' ++ G)) helper2 m n G G' = toPathP refl inserts-[]=-shifted : forall {m n} (G : Context m) (G' : Context n) {j : Fin m} (i : Fin (suc m)) {A} -> toℕ i ≤ toℕ j -> G [ j ]= A -> inserts i G' G [ toℕ j + n , ≤-+k (snd j) ]= A inserts-[]=-shifted {m} {n} G G' {j} (zero , snd₁) {A} i≤j l = transport (λ i -> helper2 m n G G' i [ helper1 m n j i ]= A) (++-[]= G' l) inserts-[]=-shifted (B ∷ G) G' (suc fst₁ , snd₁) i≤j (here .B .G) = Empty.rec (¬-<-zero i≤j) inserts-[]=-shifted (B ∷ G) G' {j = suc fst₂ , snd₂} (suc fst₁ , snd₁) {A = A} i≤j (there .B l) = subst (λ f → inserts (suc fst₁ , snd₁) G' (B ∷ G) [ f ]= A) (Fin-fst-≡ refl) (there B (inserts-[]=-shifted G G' (fst₁ , pred-≤-pred snd₁) (pred-≤-pred i≤j) l)) _+++_+++_ : forall {m n} -> Context m -> Type -> Context n -> Context (suc (m + n)) [] +++ A +++ G2 = A ∷ G2 (B ∷ G1) +++ A +++ G2 = B ∷ (G1 +++ A +++ G2) ++++++-[]=-unaffected : forall {m n} (G1 : Context m) (G2 : Context n) {A B} {j : Fin (suc (n + m))} -> (j<n : toℕ j < n) -> (G2 +++ A +++ G1) [ j ]= B -> (G2 ++ G1) [ toℕ j , <≤-trans j<n (m , +-comm m n) ]= B ++++++-[]=-unaffected G1 [] j<n l = Empty.rec (¬-<-zero j<n) ++++++-[]=-unaffected G1 (C ∷ G2) j<n (here .C .(G2 +++ _ +++ G1)) = subst (λ f -> (C ∷ (G2 ++ G1)) [ f ]= C) (Fin-fst-≡ refl) (here C (G2 ++ G1)) ++++++-[]=-unaffected G1 (C ∷ G2) {B = B} j<n (there .C l) = let a = ++++++-[]=-unaffected G1 G2 
(pred-≤-pred j<n) l in subst (λ f -> (C ∷ (G2 ++ G1)) [ f ]= B) (Fin-fst-≡ refl) (there C a) -- Note that `j` stands for `suc fst₁`. ++++++-[]=-shifted : forall {m n} (G1 : Context m) (G2 : Context n) {A B} {fst₁ : ℕ} {snd₁ : suc (fst₁) < suc (n + m)} -> (n<j : n < suc fst₁) -> (G2 +++ A +++ G1) [ suc fst₁ , snd₁ ]= B -> (G2 ++ G1) [ fst₁ , pred-≤-pred snd₁ ]= B ++++++-[]=-shifted G1 [] n<j (there _ l) = subst (λ f → G1 [ f ]= _) (Fin-fst-≡ refl) l ++++++-[]=-shifted {m} {suc n} G1 (C ∷ G2) n<j (there .C {i = zero , snd₁} l) = Empty.rec (¬-<-zero (pred-≤-pred n<j)) ++++++-[]=-shifted {m} {suc n} G1 (C ∷ G2) {B = B} n<j (there .C {i = suc fst₁ , snd₁} l) = let a = ++++++-[]=-shifted G1 G2 (pred-≤-pred n<j) l in subst (λ f → (C ∷ (G2 ++ G1)) [ f ]= B) (Fin-fst-≡ refl) (there C a) ++++++-[]=-hit : forall {m n} (G1 : Context m) (G2 : Context n) {A B} {j : Fin (suc (n + m))} -> toℕ j ≡ n -> (G2 +++ A +++ G1) [ j ]= B -> A ≡ B ++++++-[]=-hit G1 [] j≡n (here _ .G1) = refl ++++++-[]=-hit G1 [] j≡n (there _ l) = Empty.rec (snotz j≡n) ++++++-[]=-hit G1 (C ∷ G2) j≡n (here .C .(G2 +++ _ +++ G1)) = Empty.rec (znots j≡n) ++++++-[]=-hit G1 (C ∷ G2) j≡n (there .C l) = ++++++-[]=-hit G1 G2 (injSuc j≡n) l infix 2 _<:_ infix 2 _<::_ data _<:_ : Type -> Type -> Set data _<::_ {l1 l2 : Label} : Record Type l1 -> Record Type l2 -> Set data _<:_ where S-Refl : forall {A} -> A <: A S-Arr : forall {A1 B1 A2 B2} -> A2 <: A1 -> B1 <: B2 -> A1 => B1 <: A2 => B2 S-Top : forall {A} -> A <: Top S-Record : forall {l1 l2} {r1 : Record Type l1} {r2 : Record Type l2} -> r1 <:: r2 -> rec r1 <: rec r2 data _<::_ {l1} {l2} where S-nil : l2 ≤ l1 -> nil <:: nil S-cons1 : forall {l1'} {r1 : Record Type l1'} {r2 : Record Type l2} {A} {l1'<l1 : l1' < l1} -> r1 <:: r2 -> cons r1 l1 A l1'<l1 <:: r2 S-cons2 : forall {l1' l2'} {r1 : Record Type l1'} {r2 : Record Type l2'} {A B} {l1'<l1 : l1' < l1} .{l2'<l2 : l2' < l2} -> r1 <:: r2 -> A <: B -> l1 ≡ l2 -> cons r1 l1 A l1'<l1 <:: cons r2 l2 B l2'<l2 <::-implies-≥ 
: forall {l1 l2} {r1 : Record Type l1} {r2 : Record Type l2} -> r1 <:: r2 -> l2 ≤ l1 <::-implies-≥ (S-nil x) = x <::-implies-≥ (S-cons1 {l1'<l1 = l1'<l1} s) = ≤-trans (<::-implies-≥ s) (<-weaken l1'<l1) <::-implies-≥ (S-cons2 s x x₁) = 0 , sym x₁ helper/<::-∈ : forall {l1 l2} {r1 : Record Type l1} {r2 : Record Type l2} -> r1 <:: r2 -> forall {l} -> l ∈ r2 -> l ∈ r1 helper/<::-∈ (S-cons1 {l1'<l1 = l1'<l1} s) l∈r2 = _∈_.there {lt = l1'<l1} (helper/<::-∈ s l∈r2) helper/<::-∈ (S-cons2 {r1 = r1} {A = A} {l1'<l1 = l1'<l1} _ x l1≡l2) (_∈_.here {lt = a} e) = transport (λ i -> (l1≡l2 ∙ sym e) i ∈ cons r1 _ A l1'<l1) (_∈_.here {lt = l1'<l1} refl) helper/<::-∈ (S-cons2 {l1'<l1 = l1'<l1} s x l1≡l2) (_∈_.there l∈r2) = _∈_.there {lt = l1'<l1} (helper/<::-∈ s l∈r2) infix 2 _⊢_::_ infix 2 _⊢_:::_ data _⊢_::_ {n : ℕ} (G : Context n) : Term n -> Type -> Set data _⊢_:::_ {n : ℕ} (G : Context n) {l} : Record (Term n) l -> Record Type l -> Set data _⊢_::_ {n} G where axiom : forall {i : Fin n} {A} -> G [ i ]= A -> G ⊢ var i :: A =>I : forall {A B : Type} {e : Term (suc n)} -> A ∷ G ⊢ e :: B -> G ⊢ abs e :: A => B =>E : forall {A B : Type} {e1 e2 : Term n} -> G ⊢ e1 :: A => B -> G ⊢ e2 :: A -> G ⊢ e1 · e2 :: B recI : forall {l} {r : Record (Term n) l} {rt : Record Type l} -> G ⊢ r ::: rt -> G ⊢ rec r :: rec rt recE : forall {l'} {r : Record Type l'} {e : Term n} {l : Label} -> G ⊢ e :: rec r -> (l∈r : l ∈ r) -> G ⊢ e # l :: find l r l∈r sub : forall {A B : Type} {e} -> G ⊢ e :: A -> A <: B -> G ⊢ e :: B data _⊢_:::_ {n} G {l} where rec/nil : G ⊢ nil ::: nil rec/cons : forall {l'} {r : Record (Term n) l'} {rt : Record Type l'} {e A} .{l'<l : (l' < l)} -> G ⊢ r ::: rt -> G ⊢ e :: A -> G ⊢ cons r l e l'<l ::: cons rt l A l'<l helper/∈ : forall {n} {G : Context n} {l} {r : Record (Term n) l} {rt : Record Type l} -> G ⊢ r ::: rt -> forall {l₁} -> l₁ ∈ r -> l₁ ∈ rt helper/∈ (rec/cons D x) (_∈_.here {lt = y} e) = _∈_.here {lt = y} e helper/∈ (rec/cons D x) (_∈_.there {lt = y} l₁∈r) = _∈_.there 
{lt = y} (helper/∈ D l₁∈r) helper/∈′ : forall {n} {G : Context n} {l} {r : Record (Term n) l} {rt : Record Type l} -> G ⊢ r ::: rt -> forall {l₁} -> l₁ ∈ rt -> l₁ ∈ r helper/∈′ (rec/cons D x) (_∈_.here {lt = y} e) = _∈_.here {lt = y} e helper/∈′ (rec/cons D x) (_∈_.there {lt = y} l₁∈r) = _∈_.there {lt = y} (helper/∈′ D l₁∈r) weakening : forall {m n} (i : Fin (suc m)) {G : Context m} (G' : Context n) {e : Term m} {A} -> G ⊢ e :: A -> inserts i G' G ⊢ shift n i e :: A weakeningRecord : forall {m n} (i : Fin (suc m)) {G : Context m} (G' : Context n) {l} {r : Record (Term m) l} {rt} -> G ⊢ r ::: rt -> inserts i G' G ⊢ shiftRecord n i r ::: rt weakening {m = m} {n = n} i {G = G} G' {e = var j} (axiom l) with toℕ i ≟ toℕ j ... | lt i<j = axiom (inserts-[]=-shifted G G' i (≤-trans (1 , refl) i<j) l) ... | eq i≡j = axiom (inserts-[]=-shifted G G' i (0 , i≡j) l) ... | gt j<i = axiom (inserts-[]=-unaffected G G' i j<i l) weakening {n = n} i {G = G} G' {e = abs e} {A = A => B} (=>I D) = =>I (subst (λ f -> (A ∷ inserts f G' G) ⊢ shift n (fsuc i) e :: B) (Fin-fst-≡ {j = i} refl) (weakening (fsuc i) G' D)) weakening i G' (=>E D D₁) = =>E (weakening i G' D) (weakening i G' D₁) weakening i G' (sub D s) = sub (weakening i G' D) s weakening i G' (recI D) = recI (weakeningRecord i G' D) weakening i G' (recE D l∈r) = recE (weakening i G' D) l∈r weakeningRecord i G' rec/nil = rec/nil weakeningRecord i G' (rec/cons x x₁) = rec/cons (weakeningRecord i G' x) (weakening i G' x₁) helper3 : forall {n} -> (suc n , ≤-refl) ≡ (suc n , suc-≤-suc ≤-refl) helper3 = Fin-fst-≡ refl helper4 : forall m n (j : Fin (suc (n + m))) j<n -> (toℕ j , <≤-trans j<n (m , +-comm m n)) ≡ (toℕ j , <≤-trans j<n (pred-≤-pred (≤-trans (0 , refl) (suc-≤-suc (m , +-comm m n))))) helper4 m n j j<n = Fin-fst-≡ refl helper5 : forall m n (G1 : Context m) (G2 : Context n) -> PathP (λ i -> Context (+-comm n m (~ i))) (subst Context (+-comm n m) (G2 ++ G1)) (G2 ++ G1) helper5 m n G1 G2 = symP {A = λ i -> Context (+-comm n m 
i)} (toPathP refl) helper6 : forall m n (e : Term (m + n)) -> PathP (λ i -> Term (+-comm m n i)) e (transport (λ i -> Term (+-comm m n i)) e) helper6 m n e = toPathP refl helper' : forall m n -> +-comm m n ≡ sym (+-comm n m) helper' m n = isSetℕ (m + n) (n + m) (+-comm m n) (sym (+-comm n m)) helper7 : forall m n (e : Term (m + n)) -> PathP (λ i -> Term (+-comm n m (~ i))) e (transport (λ i -> Term (+-comm m n i)) e) helper7 m n e = subst (λ m+n≡n+m → PathP (λ i → Term (m+n≡n+m i)) e (transport (λ i -> Term (+-comm m n i)) e)) (helper' m n) (helper6 m n e) substitution : forall {m n} (G1 : Context m) (G2 : Context n) (e1 : Term (suc (n + m))) {e2 : Term m} {A B} -> G1 ⊢ e2 :: A -> G2 +++ A +++ G1 ⊢ e1 :: B -> G2 ++ G1 ⊢ subst′ e2 (n , ≤-refl) e1 :: B substitutionRecord : forall {m n} (G1 : Context m) (G2 : Context n) {l} (r : Record (Term (suc (n + m))) l) {e2 : Term m} {A} {rt} -> G1 ⊢ e2 :: A -> G2 +++ A +++ G1 ⊢ r ::: rt -> G2 ++ G1 ⊢ substRecord e2 (n , ≤-refl) r ::: rt substitution G1 G2 e1 D' (sub D s) = sub (substitution G1 G2 e1 D' D) s substitution {m} {n} G1 G2 (var j) {e2 = e2} {B = B} D' (axiom l) with toℕ j ≟ toℕ (n , ≤-refl) ... | lt j<n = axiom (transport (λ i -> (G2 ++ G1) [ helper4 m n j j<n i ]= B) (++++++-[]=-unaffected G1 G2 j<n l)) ... | eq j≡n = let a = weakening fzero G2 D' in transport (λ i → helper5 m n G1 G2 i ⊢ helper7 m n (shift n fzero e2) i :: ++++++-[]=-hit G1 G2 j≡n l i ) a ... | gt n<j with j ... | zero , snd₁ = Empty.rec (¬-<-zero n<j) ... 
| suc fst₁ , snd₁ = axiom (++++++-[]=-shifted G1 G2 n<j l) substitution G1 G2 (abs e1) {e2 = e2} D' (=>I {A} {B} D) = =>I (transport (λ i → (A ∷ (G2 ++ G1)) ⊢ subst′ e2 (helper3 i) e1 :: B) (substitution G1 (A ∷ G2) e1 D' D)) substitution G1 G2 (e · e') D' (=>E D D₁) = =>E (substitution G1 G2 e D' D) (substitution G1 G2 e' D' D₁) substitution G1 G2 (rec r) D' (recI D) = recI (substitutionRecord G1 G2 r D' D) substitution G1 G2 (e # l) D' (recE D l∈r) = recE (substitution G1 G2 e D' D) l∈r substitutionRecord G1 G2 nil D' rec/nil = rec/nil substitutionRecord G1 G2 (cons r l e _) D' (rec/cons D x) = rec/cons (substitutionRecord G1 G2 r D' D) (substitution G1 G2 e D' x) S-Trans : forall {A B C} -> A <: B -> B <: C -> A <: C S-TransRecord : forall {l1 l2 l3} {r1 : Record Type l1} {r2 : Record Type l2} {r3 : Record Type l3} -> r1 <:: r2 -> r2 <:: r3 -> r1 <:: r3 S-Trans S-Refl s2 = s2 S-Trans (S-Arr s1 s3) S-Refl = S-Arr s1 s3 S-Trans (S-Arr s1 s3) (S-Arr s2 s4) = S-Arr (S-Trans s2 s1) (S-Trans s3 s4) S-Trans (S-Arr s1 s3) S-Top = S-Top S-Trans S-Top S-Refl = S-Top S-Trans S-Top S-Top = S-Top S-Trans (S-Record s1) S-Refl = S-Record s1 S-Trans (S-Record s1) S-Top = S-Top S-Trans (S-Record s1) (S-Record s2) = S-Record (S-TransRecord s1 s2) S-TransRecord (S-nil x) (S-nil y) = S-nil (≤-trans y x) S-TransRecord (S-cons1 {l1'<l1 = l1'<l1} x) x₁ = S-cons1 {l1'<l1 = l1'<l1} (S-TransRecord x x₁) S-TransRecord (S-cons2 {l1'<l1 = a} x x₂ _) (S-cons1 x₁) = S-cons1 {l1'<l1 = a} (S-TransRecord x x₁) S-TransRecord (S-cons2 {l1'<l1 = a} x x₂ l1≡l2) (S-cons2 x₁ x₃ l2≡l3) = S-cons2 {l1'<l1 = a} (S-TransRecord x x₁) (S-Trans x₂ x₃) (l1≡l2 ∙ l2≡l3) inversion/S-Arr : forall {A1 B1 A2 B2} -> A1 => B1 <: A2 => B2 -> (A2 <: A1) × (B1 <: B2) inversion/S-Arr S-Refl = S-Refl , S-Refl inversion/S-Arr (S-Arr s s₁) = s , s₁ helper/inversion/S-Record : forall {l1 l2} {r1 : Record Type l1} {r2 : Record Type l2} -> (s : r1 <:: r2) -> forall {l} -> (l∈r2 : l ∈ r2) -> find l r1 (helper/<::-∈ s l∈r2) <: 
find l r2 l∈r2 helper/inversion/S-Record (S-cons1 s) l∈r2 = helper/inversion/S-Record s l∈r2 helper/inversion/S-Record {l1} {r1 = cons r1 l1 A k} (S-cons2 {B = B} {l1'<l1 = l1'<l1} _ x l1≡l2) {l = l} (_∈_.here {lt = u} e) = subst (λ z -> find l (cons r1 l1 A k) z <: B) (l∈r-isProp l (cons r1 l1 A k) (_∈_.here {lt = l1'<l1} (e ∙ sym l1≡l2)) (transport (λ i → (l1≡l2 ∙ sym e) i ∈ cons r1 l1 A k) (_∈_.here refl))) x helper/inversion/S-Record (S-cons2 s x x₁) (_∈_.there l∈r2) = helper/inversion/S-Record s l∈r2 inversion/S-Record : forall {l1 l2} {r1 : Record Type l1} {r2 : Record Type l2} -> rec r1 <: rec r2 -> forall {l} (l∈r2 : l ∈ r2) -> Σ[ l∈r1 ∈ (l ∈ r1) ] (find l r1 l∈r1 <: find l r2 l∈r2) inversion/S-Record S-Refl l∈r2 = l∈r2 , S-Refl inversion/S-Record (S-Record s) l∈r2 = helper/<::-∈ s l∈r2 , helper/inversion/S-Record s l∈r2 inversion/=>I : forall {n} {G : Context n} {e : Term (suc n)} {A} -> G ⊢ abs e :: A -> Σ[ B ∈ Type ] Σ[ C ∈ Type ] ((B ∷ G ⊢ e :: C) × (B => C <: A)) inversion/=>I (=>I D) = _ , _ , D , S-Refl inversion/=>I (sub D s) with inversion/=>I D ... | B , C , D' , s' = B , C , D' , S-Trans s' s helper/inversion/recI : forall {n} {G : Context n} {l} {r : Record (Term n) l} {rt : Record Type l} -> (D : G ⊢ r ::: rt) -> forall {l₁} -> (l₁∈r : l₁ ∈ r) -> G ⊢ find l₁ r l₁∈r :: find l₁ rt (helper/∈ D l₁∈r) helper/inversion/recI (rec/cons D x) (_∈_.here e) = x helper/inversion/recI (rec/cons D x) (_∈_.there l₁∈r) = helper/inversion/recI D l₁∈r inversion/recI : forall {n} {G : Context n} {l} {r : Record (Term n) l} {A} -> G ⊢ rec r :: A -> Σ[ rt ∈ Record Type l ] Σ[ f ∈ (forall {l₁} -> l₁ ∈ r -> l₁ ∈ rt) ] Σ[ g ∈ (forall {l₁} -> l₁ ∈ rt -> l₁ ∈ r) ] ((forall {l₁} (l₁∈r : l₁ ∈ r) -> (G ⊢ find l₁ r l₁∈r :: find l₁ rt (f l₁∈r))) × (rec rt <: A)) inversion/recI (recI D) = _ , helper/∈ D , helper/∈′ D , (helper/inversion/recI D) , S-Refl inversion/recI (sub D s) with inversion/recI D ... 
| rt , f , g , x , s' = rt , f , g , x , S-Trans s' s preservation : forall {n} {G : Context n} (e : Term n) {e' : Term n} {A} -> G ⊢ e :: A -> e ▷ e' -> G ⊢ e' :: A preservation e (sub D s) st = sub (preservation e D st) s preservation (_ · _) (=>E D D₁) (cong/app s) = =>E (preservation _ D s) D₁ preservation {G = G} (abs e1 · e2) (=>E D D₁) beta/=> with inversion/=>I D ... | _ , _ , D , s with inversion/S-Arr s ... | sdom , scod = substitution G [] e1 (sub D₁ sdom) (sub D scod) preservation (e # l) (recE D l∈r) (cong/# s) = recE (preservation e D s) l∈r preservation {G = G} (rec r # l) (recE D l∈r) (beta/rec {l∈r = l∈r′}) with inversion/recI D ... | rt , f , _ , x , s with inversion/S-Record s ... | sr = let a = x l∈r′ in let l∈rt , b = sr l∈r in sub (subst (λ z -> G ⊢ find l r l∈r′ :: find l rt z) (l∈r-isProp l rt (f l∈r′) (l∈rt)) a) b -- Path. data P {n : ℕ} : Term n -> Set where var : forall {i : Fin n} -> P (var i) app : forall {e1 e2 : Term n} -> P e1 -> P (e1 · e2) proj : forall {e} {l} -> P e -> P (e # l) data Whnf {n : ℕ} : Term n -> Set where `_ : forall {p : Term n} -> P p -> Whnf p abs : forall {e : Term (suc n)} -> Whnf (abs e) rec : forall {l} {r : Record (Term n) l} -> Whnf (rec r) =>Whnf : forall {n} {G : Context n} {e : Term n} {A B : Type} -> G ⊢ e :: A => B -> Whnf e -> P e ⊎ (Σ[ e' ∈ Term (suc n) ] e ≡ abs e') =>Whnf {e = var x} D (` x₁) = inl x₁ =>Whnf {e = abs e} D abs = inr (e , refl) =>Whnf {e = e · e₁} D (` x) = inl x =>Whnf {e = rec x} D w with inversion/recI D ... | () =>Whnf {e = e # x} D (` x₁) = inl x₁ recWhnf : forall {n} {G : Context n} {e : Term n} {l} {rt : Record Type l} -> G ⊢ e :: rec rt -> Whnf e -> P e ⊎ (Σ[ l' ∈ Label ] Σ[ r ∈ Record (Term n) l' ] e ≡ rec r) recWhnf {e = var x} D (` x₁) = inl x₁ recWhnf {e = abs e} D w with inversion/=>I D ... 
| () recWhnf {e = e · e₁} D (` x) = inl x recWhnf {e = rec x} D rec = inr (_ , x , refl) recWhnf {e = e # x} D (` x₁) = inl x₁ helper/progress : forall {n} {G : Context n} {l1 l2} {r : Record _ l1} {rt : Record _ l2} -> G ⊢ rec r :: rec rt -> forall {l} -> l ∈ rt -> l ∈ r helper/progress D l∈rt with inversion/recI D ... | rt0 , f , g , x , s with inversion/S-Record s l∈rt ... | l∈rt0 , s' = g l∈rt0 progress : forall {n} {G : Context n} {e : Term n} {A} -> G ⊢ e :: A -> (Σ[ e' ∈ Term n ] e ▷ e') ⊎ Whnf e progress (axiom x) = inr (` var) progress (=>I D) = inr abs progress {n} {e = e1 · e2} (=>E D D₁) with progress D ... | inl (e1' , s) = inl ((e1' · e2) , cong/app s) ... | inr w with =>Whnf D w ... | inl p = inr (` app p) ... | inr (e1 , x) = inl (transport (Σ-cong-snd λ x₁ i → (x (~ i) · e2) ▷ x₁) (subst′ e2 fzero e1 , beta/=>)) progress (sub D _) = progress D progress (recI D) = inr rec progress {G = G} {e = e # l} (recE D l∈r) with progress D ... | inl (e' , s) = inl ((e' # l) , cong/# s) ... | inr w with recWhnf D w ... | inl p = inr (` proj p) ... | inr (l' , r , x) = inl (transport (Σ-cong-snd λ x₁ i → x (~ i) # l ▷ x₁) (find l r (helper/progress (subst (λ x₁ → G ⊢ x₁ :: _) x D) l∈r) , beta/rec))
The modulus of the product of a complex number with its conjugate is equal to the square of the modulus of the complex number.
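A worked derivation of this statement, writing $z = a + bi$ with conjugate $\bar{z} = a - bi$ (standard notation, not taken from the row above):

```latex
% For z = a + bi, the conjugate is \bar{z} = a - bi, so
z\bar{z} = (a+bi)(a-bi) = a^2 - (bi)^2 = a^2 + b^2 = |z|^2.
% Since z\bar{z} = a^2 + b^2 is a non-negative real number,
% |z\bar{z}| = z\bar{z} = |z|^2,
% i.e. the modulus of z times its conjugate equals the square of the modulus of z.
```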
[STATEMENT] lemma elem_set_of: "\<And> x st. \<lbrakk>x \<in> set_of st; set_of st \<subseteq> set_of t\<rbrakk> \<Longrightarrow> x \<in> set_of t" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x st. \<lbrakk>x \<in> set_of st; set_of st \<subseteq> set_of t\<rbrakk> \<Longrightarrow> x \<in> set_of t [PROOF STEP] by blast
[STATEMENT] lemma norm_eq_1_iff_in_S: shows "norm v = 1 \<longleftrightarrow> proj2_pt v \<in> S" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (norm v = 1) = (proj2_pt v \<in> S) [PROOF STEP] proof - [PROOF STATE] proof (state) goal (1 subgoal): 1. (norm v = 1) = (proj2_pt v \<in> S) [PROOF STEP] let ?v' = "vector2_append1 v" [PROOF STATE] proof (state) goal (1 subgoal): 1. (norm v = 1) = (proj2_pt v \<in> S) [PROOF STEP] have "?v' \<noteq> 0" [PROOF STATE] proof (prove) goal (1 subgoal): 1. vector2_append1 v \<noteq> 0 [PROOF STEP] by (rule vector2_append1_non_zero) [PROOF STATE] proof (state) this: vector2_append1 v \<noteq> 0 goal (1 subgoal): 1. (norm v = 1) = (proj2_pt v \<in> S) [PROOF STEP] from real_sqrt_unique [of "norm v" 1] [PROOF STATE] proof (chain) picking this: \<lbrakk>(norm v)\<^sup>2 = 1; 0 \<le> norm v\<rbrakk> \<Longrightarrow> sqrt 1 = norm v [PROOF STEP] have "norm v = 1 \<longleftrightarrow> (norm v)\<^sup>2 = 1" [PROOF STATE] proof (prove) using this: \<lbrakk>(norm v)\<^sup>2 = 1; 0 \<le> norm v\<rbrakk> \<Longrightarrow> sqrt 1 = norm v goal (1 subgoal): 1. (norm v = 1) = ((norm v)\<^sup>2 = 1) [PROOF STEP] by auto [PROOF STATE] proof (state) this: (norm v = 1) = ((norm v)\<^sup>2 = 1) goal (1 subgoal): 1. (norm v = 1) = (proj2_pt v \<in> S) [PROOF STEP] hence "norm v = 1 \<longleftrightarrow> ?v' \<bullet> (M *v ?v') = 0" [PROOF STATE] proof (prove) using this: (norm v = 1) = ((norm v)\<^sup>2 = 1) goal (1 subgoal): 1. (norm v = 1) = (vector2_append1 v \<bullet> (M *v vector2_append1 v) = 0) [PROOF STEP] by (simp add: norm_M) [PROOF STATE] proof (state) this: (norm v = 1) = (vector2_append1 v \<bullet> (M *v vector2_append1 v) = 0) goal (1 subgoal): 1. 
(norm v = 1) = (proj2_pt v \<in> S) [PROOF STEP] with \<open>?v' \<noteq> 0\<close> [PROOF STATE] proof (chain) picking this: vector2_append1 v \<noteq> 0 (norm v = 1) = (vector2_append1 v \<bullet> (M *v vector2_append1 v) = 0) [PROOF STEP] have "norm v = 1 \<longleftrightarrow> proj2_abs ?v' \<in> S" [PROOF STATE] proof (prove) using this: vector2_append1 v \<noteq> 0 (norm v = 1) = (vector2_append1 v \<bullet> (M *v vector2_append1 v) = 0) goal (1 subgoal): 1. (norm v = 1) = (proj2_abs (vector2_append1 v) \<in> S) [PROOF STEP] by (subst S_abs) [PROOF STATE] proof (state) this: (norm v = 1) = (proj2_abs (vector2_append1 v) \<in> S) goal (1 subgoal): 1. (norm v = 1) = (proj2_pt v \<in> S) [PROOF STEP] thus "norm v = 1 \<longleftrightarrow> proj2_pt v \<in> S" [PROOF STATE] proof (prove) using this: (norm v = 1) = (proj2_abs (vector2_append1 v) \<in> S) goal (1 subgoal): 1. (norm v = 1) = (proj2_pt v \<in> S) [PROOF STEP] by (unfold proj2_pt_def) [PROOF STATE] proof (state) this: (norm v = 1) = (proj2_pt v \<in> S) goal: No subgoals! [PROOF STEP] qed
import os import json import cv2 import sys import barcodeQrSDK import time import numpy as np # set license barcodeQrSDK.initLicense("DLS2eyJoYW5kc2hha2VDb2RlIjoiMjAwMDAxLTE2NDk4Mjk3OTI2MzUiLCJvcmdhbml6YXRpb25JRCI6IjIwMDAwMSIsInNlc3Npb25QYXNzd29yZCI6IndTcGR6Vm05WDJrcEQ5YUoifQ==") # initialize barcode reader reader = barcodeQrSDK.createInstance() results = None # The callback function for receiving barcode results def onBarcodeResult(data): global results results = data def get_time(): localtime = time.localtime() capturetime = time.strftime("%Y%m%d%H%M%S", localtime) return capturetime def read_barcode(): global results video_width = 640 video_height = 480 vc = cv2.VideoCapture(0) vc.set(3, video_width) #set width vc.set(4, video_height) #set height if vc.isOpened(): rval, frame = vc.read() else: return windowName = "Barcode Reader" max_buffer = 2 max_results = 10 image_format = 1 # 0: gray; 1: rgb888 reader.startVideoMode(max_buffer, max_results, video_width, video_height, image_format, onBarcodeResult) while True: if results != None: thickness = 2 color = (0,255,0) for result in results: print("barcode format: " + result.format) print("barcode value: " + result.text) x1 = result.x1 y1 = result.y1 x2 = result.x2 y2 = result.y2 x3 = result.x3 y3 = result.y3 x4 = result.x4 y4 = result.y4 cv2.drawContours(frame, [np.array([(x1, y1), (x2, y2), (x3, y3), (x4, y4)])], 0, (0, 255, 0), 2) results = None cv2.imshow(windowName, frame) rval, frame = vc.read() # start = time.time() try: ret = reader.appendVideoFrame(frame) except: pass # cost = (time.time() - start) * 1000 # print('time cost: ' + str(cost) + ' ms') # 'ESC' for quit key = cv2.waitKey(1) if key == 27: break reader.stopVideoMode() cv2.destroyWindow(windowName) if __name__ == "__main__": print("OpenCV version: " + cv2.__version__) read_barcode()
program main logical :: x logical :: y = .true. logical :: z = .false. x = y .and. z x = y .or. z x = y .eqv. z x = y .neqv. z x = 1 < 2 x = 1 .lt. 2 x = 1 <= 2 x = 1 .le. 2 x = 1 > 2 x = 1 .gt. 2 x = 1 >= 2 x = 1 .ge. 2 x = 1 == 1 x = 1 .eq. 1 x = 1 /= 1 x = 1 .ne. 1 end program main
#ifndef _EM_LA_H_ #define _EM_LA_H_ 1 #include <petsc.h> struct EMContext; PetscErrorCode access_vec(Vec, std::vector<PetscInt> &, int, double *); PetscErrorCode setup_ams(EMContext *); PetscErrorCode destroy_ams(EMContext *); PetscErrorCode create_pc(EMContext *); PetscErrorCode destroy_pc(EMContext *); PetscErrorCode pc_apply_b(PC, Vec, Vec); PetscErrorCode matshell_createvecs_a(Mat, Vec *, Vec *); PetscErrorCode matshell_mult_a(Mat, Vec, Vec); PetscErrorCode solve_linear_system(EMContext *, const PETScBlockVector &, PETScBlockVector &, PetscInt, PetscReal); #endif
import clip import torch from PIL import Image from sklearn.preprocessing import normalize from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize import torch import tqdm import numpy as np import sklearn.preprocessing import collections from packaging import version class CLIPCapDataset(torch.utils.data.Dataset): def __init__(self, data, prefix='A photo depicts'): self.data = data self.prefix = prefix if self.prefix[-1] != ' ': self.prefix += ' ' def __getitem__(self, idx): c_data = self.data[idx] c_data = clip.tokenize(self.prefix + c_data, truncate=True).squeeze() return {'caption': c_data} def __len__(self): return len(self.data) class CLIPImageDataset(torch.utils.data.Dataset): def __init__(self, data): self.data = data # only 224x224 ViT-B/32 supported for now self.preprocess = self._transform_test(224) def _transform_test(self, n_px): return Compose([ Resize(n_px, interpolation=Image.BICUBIC), CenterCrop(n_px), lambda image: image.convert("RGB"), ToTensor(), Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)), ]) def __getitem__(self, idx): c_data = self.data[idx] image = Image.open(c_data) image = self.preprocess(image) return {'image':image} def __len__(self): return len(self.data) def extract_all_captions(captions, model, device, batch_size=256, num_workers=8): data = torch.utils.data.DataLoader( CLIPCapDataset(captions), batch_size=batch_size, num_workers=num_workers, shuffle=False) all_text_features = [] with torch.no_grad(): for b in tqdm.tqdm(data): b = b['caption'].to(device) all_text_features.append(model.encode_text(b).cpu().numpy()) all_text_features = np.vstack(all_text_features) return all_text_features def extract_all_images(images, model, device, batch_size=64, num_workers=8): data = torch.utils.data.DataLoader( CLIPImageDataset(images), batch_size=batch_size, num_workers=num_workers, shuffle=False) all_image_features = [] with torch.no_grad(): for b in tqdm.tqdm(data): b = 
b['image'].to(device) all_image_features.append(model.encode_image(b).cpu().numpy()) all_image_features = np.vstack(all_image_features) return all_image_features def get_clip_score(model, images, candidates, device, w=2.5): ''' get standard image-text clipscore. images can either be: - a list of strings specifying filepaths for images - a precomputed, ordered matrix of image features ''' if isinstance(images, list): # need to extract image features images = extract_all_images(images, model, device) candidates = extract_all_captions(candidates, model, device) #as of numpy 1.21, normalize doesn't work properly for float16 if version.parse(np.__version__) < version.parse('1.21'): images = sklearn.preprocessing.normalize(images, axis=1) candidates = sklearn.preprocessing.normalize(candidates, axis=1) else: print( 'due to a numerical instability, new numpy normalization is slightly different than paper results. ' 'to exactly replicate paper results, please use numpy version less than 1.21, e.g., 1.20.3.') images = images / np.sqrt(np.sum(images**2, axis=1, keepdims=True)) candidates = candidates / np.sqrt(np.sum(candidates**2, axis=1, keepdims=True)) per = w*np.clip(np.sum(images * candidates, axis=1), 0, None) return np.mean(per), per, candidates def get_refonlyclipscore(model, references, candidates, device): ''' The text only side for refclipscore ''' if isinstance(candidates, list): candidates = extract_all_captions(candidates, model, device) flattened_refs = [] flattened_refs_idxs = [] for idx, refs in enumerate(references): flattened_refs.extend(refs) flattened_refs_idxs.extend([idx for _ in refs]) flattened_refs = extract_all_captions(flattened_refs, model, device) if version.parse(np.__version__) < version.parse('1.21'): candidates = sklearn.preprocessing.normalize(candidates, axis=1) flattened_refs = sklearn.preprocessing.normalize(flattened_refs, axis=1) else: print( 'due to a numerical instability, new numpy normalization is slightly different than paper 
results. ' 'to exactly replicate paper results, please use numpy version less than 1.21, e.g., 1.20.3.') candidates = candidates / np.sqrt(np.sum(candidates**2, axis=1, keepdims=True)) flattened_refs = flattened_refs / np.sqrt(np.sum(flattened_refs**2, axis=1, keepdims=True)) cand_idx2refs = collections.defaultdict(list) for ref_feats, cand_idx in zip(flattened_refs, flattened_refs_idxs): cand_idx2refs[cand_idx].append(ref_feats) assert len(cand_idx2refs) == len(candidates) cand_idx2refs = {k: np.vstack(v) for k, v in cand_idx2refs.items()} per = [] for c_idx, cand in tqdm.tqdm(enumerate(candidates)): cur_refs = cand_idx2refs[c_idx] all_sims = cand.dot(cur_refs.transpose()) per.append(np.max(all_sims)) return np.mean(per), per
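The fallback branch in the snippet above replaces `sklearn.preprocessing.normalize` with a manual L2 row normalization to sidestep a float16 numerical issue in numpy ≥ 1.21. A minimal sketch of that manual normalization on a toy float64 matrix (the data here is hypothetical, standing in for CLIP embeddings):

```python
import numpy as np

# Toy feature matrix standing in for CLIP image/text embeddings (hypothetical data).
feats = np.array([[3.0, 4.0], [0.0, 2.0]])

# Manual L2 row normalization, as in the numpy >= 1.21 branch above.
normed = feats / np.sqrt(np.sum(feats**2, axis=1, keepdims=True))

# Every row now has unit Euclidean length, so the row-wise dot product
# np.sum(images * candidates, axis=1) in get_clip_score computes cosine similarity.
print(np.linalg.norm(normed, axis=1))  # -> [1. 1.]
```

On float64 inputs this agrees with `sklearn.preprocessing.normalize(feats, axis=1)`; the divergence the snippet warns about only arises for float16 features.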
module TTImp.WithClause import Core.Context import Core.Context.Log import Core.TT import TTImp.BindImplicits import TTImp.TTImp import Data.List %default covering matchFail : FC -> Core a matchFail loc = throw (GenericMsg loc "With clause does not match parent") mutual export getMatch : (lhs : Bool) -> RawImp -> RawImp -> Core (List (String, RawImp)) getMatch lhs (IBindVar _ n) tm = pure [(n, tm)] getMatch lhs (Implicit _ _) tm = pure [] getMatch lhs (IVar _ (NS ns n)) (IVar loc (NS ns' n')) = if n == n' && isParentOf ns' ns then pure [] else matchFail loc getMatch lhs (IVar _ (NS ns n)) (IVar loc n') = if n == n' then pure [] else matchFail loc getMatch lhs (IVar _ n) (IVar loc n') = if n == n' then pure [] else matchFail loc getMatch lhs (IPi _ c p n arg ret) (IPi loc c' p' n' arg' ret') = if c == c' && samePiInfo p p' && n == n' then matchAll lhs [(arg, arg'), (ret, ret')] else matchFail loc where samePiInfo : PiInfo RawImp -> PiInfo RawImp -> Bool samePiInfo Explicit Explicit = True samePiInfo Implicit Implicit = True samePiInfo AutoImplicit AutoImplicit = True samePiInfo (DefImplicit _) (DefImplicit _) = True samePiInfo _ _ = False -- TODO: Lam, Let, Case, Local, Update getMatch lhs (IApp _ f a) (IApp loc f' a') = matchAll lhs [(f, f'), (a, a')] getMatch lhs (IImplicitApp _ f n a) (IImplicitApp loc f' n' a') = if n == n' then matchAll lhs [(f, f'), (a, a')] else matchFail loc getMatch lhs (IWithApp _ f a) (IWithApp loc f' a') = matchAll lhs [(f, f'), (a, a')] -- On LHS: If there's an implicit in the parent, but not the clause, add the -- implicit to the clause. 
This will propagate the implicit through to the -- body getMatch True (IImplicitApp fc f n a) f' = matchAll True [(f, f'), (a, a)] -- On RHS: Rely on unification to fill in the implicit getMatch False (IImplicitApp fc f n a) f' = getMatch False f f -- Can't have an implicit in the clause if there wasn't a matching -- implicit in the parent getMatch lhs f (IImplicitApp fc f' n a) = matchFail fc -- Alternatives are okay as long as the alternatives correspond, and -- one of them is okay getMatch lhs (IAlternative fc _ as) (IAlternative _ _ as') = matchAny fc lhs (zip as as') getMatch lhs (IAs _ _ (UN n) p) (IAs fc _ (UN n') p') = do ms <- getMatch lhs p p' mergeMatches lhs ((n, IBindVar fc n') :: ms) getMatch lhs (IAs _ _ (UN n) p) p' = do ms <- getMatch lhs p p' mergeMatches lhs ((n, p') :: ms) getMatch lhs (IAs _ _ _ p) p' = getMatch lhs p p' getMatch lhs p (IAs _ _ _ p') = getMatch lhs p p' getMatch lhs (IType _) (IType _) = pure [] getMatch lhs (IPrimVal fc c) (IPrimVal fc' c') = if c == c' then pure [] else matchFail fc getMatch lhs pat spec = matchFail (getFC pat) matchAny : FC -> (lhs : Bool) -> List (RawImp, RawImp) -> Core (List (String, RawImp)) matchAny fc lhs [] = matchFail fc matchAny fc lhs ((x, y) :: ms) = catch (getMatch lhs x y) (\err => matchAny fc lhs ms) matchAll : (lhs : Bool) -> List (RawImp, RawImp) -> Core (List (String, RawImp)) matchAll lhs [] = pure [] matchAll lhs ((x, y) :: ms) = do matches <- matchAll lhs ms mxy <- getMatch lhs x y mergeMatches lhs (mxy ++ matches) mergeMatches : (lhs : Bool) -> List (String, RawImp) -> Core (List (String, RawImp)) mergeMatches lhs [] = pure [] mergeMatches lhs ((n, tm) :: rest) = do rest' <- mergeMatches lhs rest case lookup n rest' of Nothing => pure ((n, tm) :: rest') Just tm' => do getMatch lhs tm tm' -- just need to know it succeeds mergeMatches lhs rest -- Get the arguments for the rewritten pattern clause of a with by looking -- up how the argument names matched getArgMatch : FC -> Bool -> RawImp 
-> List (String, RawImp) -> Maybe (PiInfo RawImp, Name) -> RawImp getArgMatch ploc search warg ms Nothing = warg getArgMatch ploc True warg ms (Just (AutoImplicit, UN n)) = case lookup n ms of Nothing => ISearch ploc 500 Just tm => tm getArgMatch ploc True warg ms (Just (AutoImplicit, _)) = ISearch ploc 500 getArgMatch ploc search warg ms (Just (_, UN n)) = case lookup n ms of Nothing => Implicit ploc True Just tm => tm getArgMatch ploc search warg ms _ = Implicit ploc True export getNewLHS : {auto c : Ref Ctxt Defs} -> FC -> (drop : Nat) -> NestedNames vars -> Name -> List (Maybe (PiInfo RawImp, Name)) -> RawImp -> RawImp -> Core RawImp getNewLHS ploc drop nest wname wargnames lhs_raw patlhs = do (mlhs_raw, wrest) <- dropWithArgs drop patlhs autoimp <- isUnboundImplicits setUnboundImplicits True (_, lhs) <- bindNames False lhs_raw (_, mlhs) <- bindNames False mlhs_raw setUnboundImplicits autoimp let (warg :: rest) = reverse wrest | _ => throw (GenericMsg ploc "Badly formed 'with' clause") log "with" 5 $ show lhs ++ " against " ++ show mlhs ++ " dropping " ++ show (warg :: rest) ms <- getMatch True lhs mlhs log "with" 5 $ "Matches: " ++ show ms let newlhs = apply (IVar ploc wname) (map (getArgMatch ploc False warg ms) wargnames ++ rest) log "with" 5 $ "New LHS: " ++ show newlhs pure newlhs where dropWithArgs : Nat -> RawImp -> Core (RawImp, List RawImp) dropWithArgs Z tm = pure (tm, []) dropWithArgs (S k) (IApp _ f arg) = do (tm, rest) <- dropWithArgs k f pure (tm, arg :: rest) -- Shouldn't happen if parsed correctly, but there's no guarantee that -- inputs come from parsed source so throw an error. 
dropWithArgs _ _ = throw (GenericMsg ploc "Badly formed 'with' clause") -- Find a 'with' application on the RHS and update it export withRHS : {auto c : Ref Ctxt Defs} -> FC -> (drop : Nat) -> Name -> List (Maybe (PiInfo RawImp, Name)) -> RawImp -> RawImp -> Core RawImp withRHS fc drop wname wargnames tm toplhs = wrhs tm where withApply : FC -> RawImp -> List RawImp -> RawImp withApply fc f [] = f withApply fc f (a :: as) = withApply fc (IWithApp fc f a) as updateWith : FC -> RawImp -> List RawImp -> Core RawImp updateWith fc (IWithApp _ f a) ws = updateWith fc f (a :: ws) updateWith fc tm [] = throw (GenericMsg fc "Badly formed 'with' application") updateWith fc tm (arg :: args) = do log "with" 10 $ "With-app: Matching " ++ show toplhs ++ " against " ++ show tm ms <- getMatch False toplhs tm log "with" 10 $ "Result: " ++ show ms let newrhs = apply (IVar fc wname) (map (getArgMatch fc True arg ms) wargnames) log "with" 10 $ "With args for RHS: " ++ show wargnames log "with" 10 $ "New RHS: " ++ show newrhs pure (withApply fc newrhs args) mutual wrhs : RawImp -> Core RawImp wrhs (IPi fc c p n ty sc) = pure $ IPi fc c p n !(wrhs ty) !(wrhs sc) wrhs (ILam fc c p n ty sc) = pure $ ILam fc c p n !(wrhs ty) !(wrhs sc) wrhs (ILet fc c n ty val sc) = pure $ ILet fc c n !(wrhs ty) !(wrhs val) !(wrhs sc) wrhs (ICase fc sc ty clauses) = pure $ ICase fc !(wrhs sc) !(wrhs ty) !(traverse wrhsC clauses) wrhs (ILocal fc decls sc) = pure $ ILocal fc decls !(wrhs sc) -- TODO! wrhs (IUpdate fc upds tm) = pure $ IUpdate fc upds !(wrhs tm) -- TODO! 
wrhs (IApp fc f a) = pure $ IApp fc !(wrhs f) !(wrhs a) wrhs (IImplicitApp fc f n a) = pure $ IImplicitApp fc !(wrhs f) n !(wrhs a) wrhs (IWithApp fc f a) = updateWith fc f [a] wrhs (IRewrite fc rule tm) = pure $ IRewrite fc !(wrhs rule) !(wrhs tm) wrhs (IDelayed fc r tm) = pure $ IDelayed fc r !(wrhs tm) wrhs (IDelay fc tm) = pure $ IDelay fc !(wrhs tm) wrhs (IForce fc tm) = pure $ IForce fc !(wrhs tm) wrhs tm = pure tm wrhsC : ImpClause -> Core ImpClause wrhsC (PatClause fc lhs rhs) = pure $ PatClause fc lhs !(wrhs rhs) wrhsC c = pure c
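The `matchAll`/`mergeMatches` pair above accumulates `(name, term)` bindings and, on a duplicate name, re-matches the two terms to check they are compatible. A rough Python sketch of that bookkeeping (names and terms are plain strings here, and compatibility is simplified to equality; the real Idris code calls `getMatch` recursively instead):

```python
class MatchFail(Exception):
    """Raised when a with-clause binding conflicts with an earlier one."""

def merge_matches(pairs):
    # Fold the bindings right-to-left, as mergeMatches does; a name
    # that appears twice must bind a compatible (here: equal) term.
    merged = {}
    for name, term in reversed(pairs):
        if name in merged and merged[name] != term:
            raise MatchFail(name)
        merged[name] = term
    return merged
```

For example, `merge_matches([("x", "a"), ("y", "b"), ("x", "a")])` succeeds, while binding `"x"` to two different terms raises `MatchFail`.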
lemma tendsto_cong: "(f \<longlongrightarrow> c) F \<longleftrightarrow> (g \<longlongrightarrow> c) F" if "eventually (\<lambda>x. f x = g x) F"
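Informally, the lemma says that eventual equality of f and g under the filter F makes their limit statements interchangeable; a LaTeX note (not part of the theory file):

```latex
% One direction suffices: eventual equality is symmetric, so applying
% the one-sided transform rule twice yields the equivalence.
\[
  \big(\forall^{F}x.\; f(x) = g(x)\big)
  \;\Longrightarrow\;
  \big(\, (f \longrightarrow c)\,F \;\Longleftrightarrow\; (g \longrightarrow c)\,F \,\big)
\]
```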
subroutine findnm_kcs_flowwrapper(xp , yp , mp , np , & & rmp , rnp , inside,spheric, gdp) !----- GPL --------------------------------------------------------------------- ! ! Copyright (C) Stichting Deltares, 2011-2016. ! ! This program is free software: you can redistribute it and/or modify ! it under the terms of the GNU General Public License as published by ! the Free Software Foundation version 3. ! ! This program is distributed in the hope that it will be useful, ! but WITHOUT ANY WARRANTY; without even the implied warranty of ! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ! GNU General Public License for more details. ! ! You should have received a copy of the GNU General Public License ! along with this program. If not, see <http://www.gnu.org/licenses/>. ! ! contact: [email protected] ! Stichting Deltares ! P.O. Box 177 ! 2600 MH Delft, The Netherlands ! ! All indications and logos of, and references to, "Delft3D" and "Deltares" ! are registered trademarks of Stichting Deltares, and remain the property of ! Stichting Deltares. All rights reserved. ! !------------------------------------------------------------------------------- ! $Id: findnm_kcs_flowwrapper.f90 5717 2016-01-12 11:35:24Z mourits $ ! $HeadURL: https://svn.oss.deltares.nl/repos/delft3d/tags/6686/src/engines_gpl/flow2d3d/packages/io/src/output/findnm_kcs_flowwrapper.f90 $ !!--description----------------------------------------------------------------- ! ! Function: - Locate a point in the Delft3D curvilinear mesh ! Method used: ! !!--pseudo code and references-------------------------------------------------- ! NONE !!--declarations---------------------------------------------------------------- use precision ! use globaldata ! implicit none include 'fsm.i' include 'tri-dyn.igd' ! type(globdat),target :: gdp ! ! The following list of pointer parameters is used to point inside the gdp structure ! 
integer(pntrsize) , pointer :: xcor integer(pntrsize) , pointer :: ycor integer(pntrsize) , pointer :: kcs integer , pointer :: mlb integer , pointer :: mmax integer , pointer :: mub integer , pointer :: nlb integer , pointer :: nmaxus integer , pointer :: nub real(hp), pointer :: dearthrad ! ! Global variables ! integer , intent(inout) :: mp ! M index of point (initially last value) integer , intent(inout) :: np ! N index of point (initially last value) real(fp) , intent(in) :: xp ! X coordinate of point real(fp) , intent(in) :: yp ! Y coordinate of point real(fp) , intent(out) :: rmp ! Fractional M index of point real(fp) , intent(out) :: rnp ! Fractional N index of point logical , intent(inout) :: inside ! True if point lies inside grid (or mp,np valid) logical , intent(in) :: spheric ! Spherical coordinates ! ! Local variables ! ! !! executable statements ------------------------------------------------------- ! xcor => gdp%gdr_i_ch%xcor ycor => gdp%gdr_i_ch%ycor kcs => gdp%gdr_i_ch%kcs mlb => gdp%d%mlb mmax => gdp%d%mmax mub => gdp%d%mub nlb => gdp%d%nlb nmaxus => gdp%d%nmaxus nub => gdp%d%nub dearthrad => gdp%gdconstd%dearthrad ! call findnm_kcs (xp , yp ,r(xcor),r(ycor), mlb , mub , & & nlb , nub , mmax , nmaxus, mp , np , & & rmp , rnp ,i(kcs) , inside, spheric,dearthrad) end subroutine findnm_kcs_flowwrapper
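`findnm_kcs` has to decide which curvilinear cell, if any, contains the point `(xp, yp)`. The essential geometric test can be sketched in Python as a cross-product sign check against each cell edge (a generic convex-quad test; the real routine additionally handles spherical coordinates, the fractional indices `rmp`/`rnp`, and the `kcs` mask):

```python
def point_in_quad(px, py, quad):
    """True if (px, py) lies inside the convex quadrilateral given as
    four (x, y) corners in counter-clockwise order."""
    for i in range(4):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % 4]
        # Cross product of the edge vector with the vector to the point;
        # negative means the point is to the right of this CCW edge.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0.0:
            return False
    return True
```

A grid search would apply this test to candidate cells (typically starting from the last known `mp`, `np`) until a containing cell is found.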
||| An implementation of `Data.Heap` based on left-skewed binary trees. ||| ||| This implementation is a direct adaptation of Okasaki's `LeftistHeap` ||| found on pp.18-20 of his book without trying to lift anything at the ||| Type-level nor prove any properties. Its purpose is to be used as a ||| baseline for performance analysis and improvements in the "proven" ||| version found in `Data.Heap.LeftistHeap`. module Data.Heap.RawBinaryHeap import public Data.Heap ||| A implementation of a leftist binary tree which is not `Nat`-indexed public export data BinaryTree : (a : Type) -> Type where Empty : BinaryTree a Node : (elem : a) -> (rank : Int) -> (left : BinaryTree a) -> (right : BinaryTree a) -> BinaryTree a rank : BinaryTree a -> Int rank Empty = 0 rank (Node _ rk _ _) = rk makeNode : (elem : a) -> BinaryTree a -> BinaryTree a -> BinaryTree a makeNode elem left right = if rank left < rank right then Node elem (rank left + rank right) right left else Node elem (rank left + rank right) left right mergeTree : (Ord a) => (left : BinaryTree a) -> (right : BinaryTree a) -> BinaryTree a mergeTree Empty right = right mergeTree left Empty = left mergeTree l@(Node elem rank left right) r@(Node elem' rank' left' right') = if (elem < elem') then makeNode elem left (mergeTree right r) else makeNode elem' left' (mergeTree l right') findMin : BinaryTree a -> Maybe a findMin Empty = Nothing findMin (Node elem _ _ _) = Just elem popMin : (Ord a) => BinaryTree a -> (BinaryTree a, Maybe a) popMin Empty = (Empty, Nothing) popMin (Node elem _ left right) = (mergeTree left right, Just elem) insert : (Ord a) => a -> BinaryTree a -> BinaryTree a insert x Empty = Node x 1 Empty Empty insert x node = mergeTree (Node x 1 Empty Empty) node public export data BinaryHeap : Type -> Type where ||| A Heap that keeps track of elements that's been inserted into ||| it. 
||| For debugging and analysis purpose MkHeap : (tree : BinaryTree a) -> (inserts: Lazy (List a)) -> BinaryHeap a traceInserts : BinaryHeap a -> List a traceInserts (MkHeap tree inserts) = inserts -- implementation of Heap public export Heap BinaryHeap where empty = MkHeap Empty [] isEmpty (MkHeap Empty _) = True isEmpty _ = False push a (MkHeap tree ins) = MkHeap (insert a tree) (a :: ins) peek (MkHeap tree _ ) = findMin tree pop (MkHeap tree ins) = let (tree', a) = popMin tree in (MkHeap tree' ins, a) merge (MkHeap tree ins) (MkHeap tree' ins') = MkHeap (mergeTree tree tree') (ins ++ ins') stats (MkHeap _ ins) = Just $ show ins
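The `mergeTree`/`makeNode` pair above follows Okasaki's leftist-heap merge, except that `makeNode` stores `rank left + rank right` where the textbook stores `rank(right) + 1` (the rank bookkeeping only steers which subtree goes right, so the order of popped elements is still correct). A pure-Python sketch of the textbook version:

```python
class Node:
    __slots__ = ("elem", "rank", "left", "right")
    def __init__(self, elem, rank, left, right):
        self.elem, self.rank, self.left, self.right = elem, rank, left, right

def rank(t):
    return t.rank if t else 0

def make_node(elem, a, b):
    # Okasaki's makeT: the subtree with the shorter right spine goes on
    # the right, and rank = rank(right) + 1.
    if rank(a) >= rank(b):
        return Node(elem, rank(b) + 1, a, b)
    return Node(elem, rank(a) + 1, b, a)

def merge(a, b):
    if a is None: return b
    if b is None: return a
    if a.elem <= b.elem:
        return make_node(a.elem, a.left, merge(a.right, b))
    return make_node(b.elem, b.left, merge(a, b.right))

def push(x, t):
    return merge(Node(x, 1, None, None), t)

def pop_min(t):
    # Returns (rest, minimum) like popMin; (None, None) on an empty heap.
    if t is None:
        return None, None
    return merge(t.left, t.right), t.elem
```

Repeatedly popping yields the elements in ascending order, which is a convenient correctness check.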
function [ x, seed ] = pearson_05_sample ( a, b, c, seed ) %*****************************************************************************80 % %% PEARSON_05_SAMPLE samples the Pearson 5 PDF. % % Licensing: % % This code is distributed under the GNU LGPL license. % % Modified: % % 19 September 2004 % % Author: % % John Burkardt % % Parameters: % % Input, real A, B, C, the parameters of the PDF. % 0.0 < A, 0.0 < B. % % Input, integer SEED, a seed for the random number generator. % % Output, real X, a sample of the PDF. % % Output, integer SEED, an updated seed for the random number generator. % a2 = 0.0; b2 = b; c2 = 1.0 / a; [ x2, seed ] = gamma_sample ( a2, b2, c2, seed ); x = c + 1.0 / x2; return end
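The routine above samples the Pearson Type 5 distribution by drawing a gamma variate and returning `c + 1/G` (an inverse-gamma shifted by `c`). A Python sketch of the same shape; note the mapping of `a` and `b` onto `gammavariate`'s shape and scale arguments here is an assumption, not read off from `gamma_sample`:

```python
import random

def pearson_05_sample(a, b, c):
    """Sample X = c + 1/G with G gamma-distributed, mirroring the
    MATLAB routine above. Requires a > 0 and b > 0; the (shape, scale)
    mapping below is illustrative."""
    g = random.gammavariate(a, 1.0 / b)  # assumed parameter mapping
    return c + 1.0 / g
```

Since the gamma variate is strictly positive, every sample lies strictly above `c`, matching the support of the Pearson 5 PDF.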
Omar produced moderate damage throughout numerous islands, amounting to at least $60 million (2008 USD), and one death was related to the storm.
\documentclass[9pt,twocolumn,twoside]{../../styles/osajnl} \journal{i524} \title{An Overview of Pivotal Web Services} \author[1,*]{Harshit Krishnakumar} \affil[1]{School of Informatics and Computing, Bloomington, IN 47408, U.S.A.} \affil[*]{Corresponding authors: [email protected], S17-IR-2014} \dates{Project-02, \today} \ociscodes{Cloud, I524, Web Services} % replace this with your url in github/gitlab \doi{\url{https://github.com/cloudmesh/sp17-i524/raw/master/paper2/S17-IR-2014/report.pdf}} \begin{abstract} Pivotal Web Services is a platform as a service (PAAS) provider which allows developers to deploy applications written in six programming languages. PWS provides the infrastructure to host applications on the cloud, and allows vertical scaling for each instance and horizontal scaling for the application. PWS is built on CloudFoundry, an open source software for hosting applications on the cloud. This paper presents the different features of Pivotal Web Services and a basic overview of hands-on of application deployment in Pivotal Web Services. \newline \end{abstract} \setboolean{displaycopyright}{true} \begin{document} \maketitle \section{Introduction} The current scenario for software product based companies is such, that coming up with ground breaking ideas to add extra functionality for an existing application is simply not enough. They need to be able to get it out to the users as quickly as possible, else they loose ground to competitors who might have already implemented it. To make software development and deployment process quicker, software companies follow a few methods and concepts. Pivotal Web Services comes in this line of thought, where it allows the application developer to focus on just the application development and getting the business requirements right, without worrying about platform compatibility, dependencies and differences between production/development/testing environment. 
PWS is built on Cloud Foundry, one of the leading open source PaaS platforms \cite{www-pivotal}. To understand the need for a service like PWS, one needs a basic knowledge of agile development, DevOps, and PaaS. \subsection{Agile Development} With the widespread use of the Internet to push quick updates and the emergence of automated software testing and deployment, software companies are moving away from the traditional waterfall methodology toward agile development practices, which emphasize iterative development with close collaboration between self-organizing, cross-functional teams to evolve requirements and solutions. Agile methods encourage deploying high-quality, goal-oriented software in quick successions; feedback and changes are handled in the next update version. \subsection{DevOps} DevOps is a set of practices for software testing and deployment that enables agile development. Development processes typically suffer latency because many tasks are manual. DevOps sets standards to automate testing and to ensure that production, testing, and development environments are in sync. It gives developers greater responsibility and access for easy testing and development. With automated testing, developers get feedback within minutes and can work on fixes immediately. The final aspect of DevOps is automated deployment: software programs automatically deploy an application to a host of servers with the right configurations and connections, reducing manual effort and latency. \subsection{Platform as a service} Containers gained popularity because of the advantages of modularity in software development: a container packages the actual software along with all its dependencies.
Platform as a service (PaaS) or application platform as a service (aPaaS) is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app \cite{www-paas-wiki}. PaaS providers generally provide a cloud environment to deploy the application on, the networks, servers, OS, storage, databases and other services to run applications. This removes the hassles of maintaining and running the servers and systems for an application from the developers, and also minimises the risk of server failures. \subsection{Cloud Foundry} Cloud Foundry is an open source PAAS software provider. It provides with all the software and tools required to host applications on multiple clouds. Cloud Foundry does not offer the hardware for hosting clouds, there are many commercial options which provide the platform hardware along with hosted Cloud Foundry software, which takes the responsibility of handling and maintaining the cloud hardware away from application developers. \subsection{Pivotal Web Services} PWS is built based on an open source PaaS Cloud Foundry along with some proprietary additions such as Pivotal's Developer Console, Billing Service and Marketplace \cite{www-pws-register}. PWS offers hosted cloud systems with a web interface for managing the environment, and a number of pre-provisioned services like relational databases and messaging queues \cite{www-pws-stackoverflow}. Pivotal Cloud Foundry enables developers to provision and bind web and mobile apps with platform and data services such as Jenkins, MongoDB, Hadoop, etc. on a unified platform. \section{Features of PWS} PWS offers many different options to deploy and manage software \cite{www-pws-features}. \subsection{Upload} There is a single command way to upload software developed on local to the cloud. 
The code is transformed into a running application on the cloud. The steps for uploading an application with name <APP-NAME> are given in \cite{www-pws-push}. \subsection{Stage} Behind the scenes, the deployed application goes through staging scripts called buildpacks to create a ready-to-run package. Buildpacks are software packages that provide framework and runtime support for applications, and they are provided along with the PWS cloud. Buildpacks typically examine user-provided artifacts to determine what dependencies to download and how to configure applications to communicate with bound services. Cloud Foundry automatically detects which buildpack is required and installs it on the Diego cell where the application needs to run \cite{www-pws-buildpacks}. For example, if a particular application requires d3.js to run and needs to connect to a database, buildpacks will determine that the application needs these dependencies, attach the d3.js package to the application, and provide connectors to the database. \subsection{Distribute} Diego is the container management system for Cloud Foundry, which handles application scheduling and management. Each application VM has a Diego Cell that executes application start and stop actions locally, manages the VM's containers, and reports app status and other data \cite{www-pws-diego}. \subsection{Run} Applications receive entry in a dynamic routing tier, which load balances traffic across all app instances. \section{Licensing} Though Cloud Foundry is open source, it is not easy for an individual developer to maintain a cloud and set up the architecture. PWS charges for the use of its services, with a monthly cost depending upon the memory per application instance and the number of instances. \section{Use Cases} PWS can be used for a range of applications, from running websites to maintaining mobile applications.
For example, if we need to host a website which accesses data, we can write the base code and deploy to PWS cloud. For instance, if there is a Web Page that has to be hosted on cloud, we need to create an account in Pivotal and create the command line interface. Normally, deploying a web page requires web servers like Apache or Nginx, but with Pivotal it will automatically take care of the web server. We need to copy the web page HTML files in our local to the cloud where application needs to be hosted. Next we login to the Pivotal Cloud instance by giving username and password, and create a staticfile. Last step is to push the application. \begin{verbatim} cf login -a https://api.run.pivotal.io touch Staticfile cf push <<application file name>> \end{verbatim} We can verify the deployed webpage using the link which we will get after the above steps. \section{Conclusion} PWS is a hosted cloud platform service, which uses Cloud Foundry open source platform. It has options for scaling and updating the cloud with no downtime. As given in Section 2 (Features of PWS) there are a few basic commands to upload an application, and PWS automatically binds applications with dependencies and configurations required. PWS allows developers to concentrate on their business requirements and developing applications, rather than hosting and hardware requirements. PWS also makes up-scaling and downscaling easy. \cite{www-pws-adv}. \section{Further Education} Further learning about Pivotal is encouraged and informative materials can be found at the Pivotal homepage \cite{www-pws-agile}. \section*{Acknowledgements} The author thanks Professor Gregor Von Lazewski for providing us with the guidance and topics for the paper. The author also thanks the AIs of Big Data Class for providing the technical support. 
% Bibliography \bibliography{references} \section*{Author Biographies} \begingroup \setlength\intextsep{0pt} \begin{minipage}[t][3.2cm][t]{1.0\columnwidth} % Adjust height [3.2cm] as required for separation of bio photos. {\bfseries Harshit Krishnakumar} is pursuing his MSc in Data Science from Indiana University Bloomington \end{minipage} \endgroup \end{document}
function play(db,index) %PLAY - plays a MatlabADT sentence. %play(ADTobj,index) %See also query, filterdb, read. [data,smpr] = read(db,index); for i=1:length(data) sound(data{i},smpr); end end
#!/usr/bin/env julia using Primitiv using Base.Test this_file = basename(@__FILE__) function test_dir(dir) jl_files = sort(filter(f -> ismatch(r"^.+\.jl$", f), readdir(dir)), by = fn -> stat(joinpath(dir, fn)).mtime) map(reverse(jl_files)) do file file == this_file && return include(joinpath(dir, file)) end end @testset "Primitiv Test" begin test_dir(dirname(@__FILE__)) end
lemma Lim_transform_eventually: "\<lbrakk>(f \<longlongrightarrow> l) F; eventually (\<lambda>x. f x = g x) F\<rbrakk> \<Longrightarrow> (g \<longlongrightarrow> l) F"
# Matrix Image Reader (M.I.R) M.I.R helps you quickly create matrix objects with an image of one. Just scan a quick image and run it through the code, and you can quickly apply operations, like dot and cross product, RREF, finding determinant or eigenvectors, and more! # Import libraries ```python import numpy as np import cv2 from google.colab import files import matplotlib.pyplot as plt import tensorflow as tf from typing import Tuple import sympy ``` ```python def upload_files(): '''Upload files from personal computer to Google Colab''' uploaded = files.upload() for k, v in uploaded.items(): open(k, 'wb').write(v) return list(uploaded.keys()) ``` ```python upload_files() ``` <input type="file" id="files-8d7e08e9-afa7-48e8-8e8b-5d3216919bc3" name="files[]" multiple disabled style="border:none" /> <output id="result-8d7e08e9-afa7-48e8-8e8b-5d3216919bc3"> Upload widget is only available when the cell has been executed in the current browser session. Please rerun this cell to enable. </output> Saving test2.png to test2 (6).png ['test2.png'] # Creating the Digit Recognition Model Credit to Sendex, thank you! 
URL: https://www.youtube.com/watch?v=wQ8BIBpya2k ```python mnist = tf.keras.datasets.mnist # 28x28 images of handwritten digits (0-9) # Obtain training and testing data (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = tf.keras.utils.normalize(x_train, axis=1) x_test = tf.keras.utils.normalize(x_test, axis=1) model = tf.keras.models.Sequential() model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu)) model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu)) model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax)) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=3) val_loss, val_acc = model.evaluate(x_test, y_test) print(val_loss, val_acc) ``` Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step Epoch 1/3 1875/1875 [==============================] - 5s 2ms/step - loss: 0.4725 - accuracy: 0.8612 Epoch 2/3 1875/1875 [==============================] - 4s 2ms/step - loss: 0.1089 - accuracy: 0.9672 Epoch 3/3 1875/1875 [==============================] - 4s 2ms/step - loss: 0.0691 - accuracy: 0.9783 313/313 [==============================] - 0s 1ms/step - loss: 0.0903 - accuracy: 0.9719 0.09028234332799911 0.9718999862670898 ```python predictions = model.predict([x_test]) print(np.argmax(predictions[0])) ``` WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'tuple'> input: (<tf.Tensor 'IteratorGetNext:0' shape=(None, 28, 28) dtype=float32>,) Consider rewriting this model with the Functional API. 
7 ```python plt.imshow(x_test[0]) ``` ```python tf.keras.models.save_model(model, 'digit.model') ``` INFO:tensorflow:Assets written to: digit.model/assets # Predict the Digits ```python def predict_digit(digit, dim=(28, 28)): digit = cv2.resize(digit, dim) digit = np.reshape(digit, (1, 28, 28)) prediction = model.predict(digit) return np.argmax(prediction) ``` # Obtaining the Digits from Matrix ```python def draw_borders(fname: str='test1.png', dim: Tuple=(3,3)) -> sympy.Matrix: '''Draws borders around each digit of the matrix and predicts them. Args: fname (str): the filepath to image dim (str): the dimension of the matrix (mxn) Returns: sympy.Matrix: the sympy Matrix object ''' # Reading in the image img = cv2.imread(fname) images = [] # later to append predictted numbers # Resize to 512x512 pixel image img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_NEAREST) # Turn image to gray scale gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Use Gaussian blur to reduce # of noisy edges blur = cv2.GaussianBlur(src=gray, ksize=(13, 13), sigmaX=1) # Seperate foreground and background, results in clearer matrix thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 199, 25) # Invert image thresh = 255*(thresh < 128).astype(np.uint8) # Find contours contours, hierarchy = cv2.findContours( thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Remove the two brackets on the side contours = sorted( contours, key=lambda x: cv2.contourArea(x), reverse=True)[2:] # Read contours in order left-to-right, top-to-bottom contours = sorted( contours, key=lambda x: cv2.boundingRect(x)[0]*40 + cv2.boundingRect(x)[1]*200) # Loop through each contour for cnt in contours: # We don't want noisy useless edges if cv2.contourArea(cnt) > 50: # Find rectangle coords x, y, w, h = cv2.boundingRect(cnt) # Draw border cv2.rectangle(thresh, (x-7, y-7), (x+w+7, y+h+7), (255, 255, 255)) # Get sectioned off digits digit = thresh[y-2:y+h+2, x-2:x+w+2] # Add 
padding around image padded_digit = np.pad(digit, ((7,7),(7,7)), "constant", constant_values=0) # Predict the digits and append to image images.append(predict_digit(padded_digit)) # Show the border plt.imshow(thresh) return sympy.Matrix(np.array(images).reshape(dim)) ``` ```python m1 = draw_borders() m1 ``` ```python m2 = draw_borders('test2.png', dim=(4, 4)) m2 ``` ```python m3 = draw_borders('test3.png', dim=(3, 1)) m3 ``` ```python m4 = draw_borders('test4.png', dim=(3, 3)) m4 ``` # Sympy Matrix Operation URL: https://docs.sympy.org/latest/tutorial/matrices.html ```python display(m1, m4) # Multiply matrices m1 * m4 ``` $\displaystyle \left[\begin{matrix}9 & 4 & 4\\1 & 6 & 7\\7 & 8 & 8\end{matrix}\right]$ $\displaystyle \left[\begin{matrix}1 & 2 & 3\\4 & 5 & 6\\7 & 8 & 9\end{matrix}\right]$ $\displaystyle \left[\begin{matrix}53 & 70 & 87\\74 & 88 & 102\\95 & 118 & 141\end{matrix}\right]$ ```python m1.inv() ``` $\displaystyle \left[\begin{matrix}\frac{2}{11} & 0 & - \frac{1}{11}\\- \frac{41}{44} & -1 & \frac{59}{44}\\\frac{17}{22} & 1 & - \frac{25}{22}\end{matrix}\right]$ ```python m2.rref() ``` (Matrix([ [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]), (0, 1, 2, 3)) ```python m2.columnspace() ``` [Matrix([ [1], [6], [6], [1]]), Matrix([ [8], [7], [2], [5]]), Matrix([ [9], [7], [3], [8]]), Matrix([ [4], [2], [4], [8]])] ```python m2.nullspace() ``` [] ```python m2.det() ``` $\displaystyle 70$ ```python ```
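`draw_borders` orders contours by the key `x*40 + y*200`, a weighted sum that approximates reading order (top-to-bottom, then left-to-right) for the resized 512x512 image. The same heuristic in isolation, on plain `(x, y, w, h)` boxes (the weights are the notebook's own and assume digits in different rows are well separated vertically):

```python
def reading_order(boxes, col_weight=40, row_weight=200):
    """Sort bounding boxes (x, y, w, h) into reading order: the heavy
    y weight groups rows first, then x orders boxes within a row."""
    return sorted(boxes, key=lambda b: b[0] * col_weight + b[1] * row_weight)
```

This keeps the predicted digits aligned with their positions in the matrix before the final `reshape(dim)`.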
# Part I - Linear Equations ## 1. Views of linear equations It is reasonable to say that linear equations form the simplest equation system and the one we know best. Thus, they constitute the basic part of our toolkit and of the study of linear algebra. Consider the linear equation system below. $$ \begin{equation} 2x - y = 0 \\ -x + 2y = 3 \\ \end{equation} $$ The system can be organized into the matrix representation below, which leads to two kinds of views of the system. $$ \begin{equation} \left[ \begin{matrix} 2 & -1 \\ -1 & 2 \\ \end{matrix} \right] \left[ \begin{matrix} x \\ y \\ \end{matrix} \right] = \left[ \begin{matrix} 0 \\ 3 \\ \end{matrix} \right] \end{equation} $$ This representation can further be summarized in the general form: $$\begin{equation} \rm{A} \textbf{x} = \textbf{b} \end{equation}$$ where $ \begin{equation} \rm{A} = \left[ \begin{matrix} 2 & -1 \\ -1 & 2 \\ \end{matrix} \right] \end{equation} $ is called the $\textit{coefficient matrix}$, $ \begin{equation} \textbf{x} = \left[ \begin{matrix} x \\ y \\ \end{matrix} \right] \end{equation} $ is the $\textit{vector of unknowns}$, and $ \begin{equation} \textbf{b} = \left[ \begin{matrix} 0 \\ 3 \\ \end{matrix} \right] \end{equation} $ is the vector coming from the right hand side of the equations. ### View of Row (View of Separate Equations) By seeing the system as separate equations, we get the view of row. $$ \begin{equation} \begin{cases} 2x - y = 0, & Eq1\\ -x + 2y = 3, & Eq2\\ \end{cases} \end{equation} $$ Under the view of row, solutions are interpreted as the set of points that satisfy all of the equations. Therefore, in geometry, they are the **intersection points of all the geometric objects (lines/planes/hyperplanes)** these equations represent.
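For the system above, the solution is x = 1, y = 2, and both views can be checked numerically; a small pure-Python sketch:

```python
A = [[2, -1],
     [-1, 2]]
b = [0, 3]
x, y = 1, 2  # the solution of the system above

# Row view: each equation (row) is satisfied at (x, y).
for row, rhs in zip(A, b):
    assert row[0] * x + row[1] * y == rhs

# Column view: the same solution combines the column vectors into b.
col1 = [A[0][0], A[1][0]]
col2 = [A[0][1], A[1][1]]
combo = [x * col1[i] + y * col2[i] for i in range(2)]
assert combo == b
```

The two checks are the same arithmetic grouped differently, which is exactly the point of having both views.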
### View of Column (View of Linear Combination) An alternative view is this: since all the equations share the same vector of unknowns (a missing unknown can be represented with a 0 coefficient), we can **"factor out"** those unknowns and reshape the system like this. $$ \begin{equation} x \left[ \begin{matrix} 2 \\ -1 \\ \end{matrix} \right] + y \left[ \begin{matrix} -1 \\ 2 \\ \end{matrix} \right] = \left[ \begin{matrix} 0 \\ 3 \\ \end{matrix} \right] \end{equation} $$ In this view, the different rows of the coefficient vectors are treated as independent "coordinates". For example, the $2$ and $-1$ in $ \left[ \begin{matrix} 2 \\ -1 \\ \end{matrix} \right] $ are treated as the x-coordinate and the y-coordinate. The solution to the equations is interpreted as the **correct multiples** of these coefficient vectors such that their sum equals the right hand side $\textbf{b}$. Following this view, since the coefficient vectors constitute the columns of the coefficient matrix, we call them the $\textit{column vectors}$. Solving the linear equations is then equivalent to finding the correct multiples of the column vectors, or in other words, finding the correct $\textit{linear combination}$ of the column vectors. By linear combination, we mean adding/subtracting vectors and multiplying them by **scalars (pure numbers)**. ## 2. Gaussian Elimination ### Prerequisite: Matrix Multiplication The most elementary and mechanical view of matrix multiplication is the **element view**. This view focuses on every single element of the resulting matrix, and is more often used in simple calculations. In particular, for an $m\times{n}$ matrix $\rm{A}$ and an $n\times{p}$ matrix $\rm{B}$, the resulting matrix $\rm{C}=\rm{AB}$ is an $m\times{p}$ matrix.
And a general element in row $i$ and column $j$ of matrix $\rm{C}$ is: $$ \begin{equation} c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} \end{equation} $$ In Section 1 we saw that when a matrix is multiplied by a vector on the right, the product is a $\textit{linear combination}$ of the columns of that matrix. That observation introduces the **vector view** of matrix multiplication. Notice from the **element view** formula that all of the elements in column $j$ (denoted $\rm{C_{j}}$) of the resulting matrix $\rm{C}$ are influenced only by the respective column $j$ (denoted $\rm{B_{j}}$) of matrix $\rm{B}$, rather than the other columns of matrix $\rm{B}$. And the elements of $\rm{C_{j}}$ can be seen as a linear combination of the columns of $\rm{A}$. In other words, $\rm{A}B_{j}=C_{j}$. For a detailed demonstration, consider the case where a $3\times{3}$ matrix $\rm{A}$ multiplies a $3\times{3}$ matrix $\rm{B}$, so the resulting matrix $\rm{C}$ is also a $3\times{3}$ matrix. The first column of $\rm{C}$ comes from the first column of $\rm{B}$, with elements computed as follows.
$$ \begin{equation} \begin{aligned} c_{11} &= a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31} \\ c_{21} &= a_{21}b_{11} + a_{22}b_{21} + a_{23}b_{31} \\ c_{31} &= a_{31}b_{11} + a_{32}b_{21} + a_{33}b_{31} \\ \end{aligned} \end{equation} $$

Going through the trick we have seen in Section 1, we have

$$ \begin{equation} C_{1} = \left[ \begin{matrix} c_{11} \\ c_{21} \\ c_{31} \\ \end{matrix} \right] = b_{11} \left[ \begin{matrix} a_{11} \\ a_{21} \\ a_{31} \\ \end{matrix} \right] + b_{21} \left[ \begin{matrix} a_{12} \\ a_{22} \\ a_{32} \\ \end{matrix} \right] + b_{31} \left[ \begin{matrix} a_{13} \\ a_{23} \\ a_{33} \\ \end{matrix} \right] = \left[ \begin{matrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\\ \end{matrix} \right] \left[ \begin{matrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{matrix} \right] = \rm{AB_{1}} \end{equation} $$

Therefore, in the **vector view**, we have the following important conclusions. For the product of matrix multiplication $\rm{C}=\rm{AB}$,

1. Each column of $\rm{C}$ is a linear combination of the columns of $\rm{A}$.
2. Each row of $\rm{C}$ is a linear combination of the rows of $\rm{B}$.

The second conclusion can be proved in the same way as the first. Since matrix multiplication is not commutative, $\rm{A}$ and $\rm{B}$ cannot be interchanged under most circumstances.

### Warm-up from High School

In high school, students learn how to solve simple linear equations by eliminating variables and simplifying the system. Since general linear equation systems can be represented by matrices, the idea of finding the solutions is similar, but extended to operations on matrices. For example, concerning the linear equations below, we may cancel the $x$ variable in $Eq2$ by multiplying $Eq1$ by 3 and subtracting that from $Eq2$, which results in a new equation $Eq2^{'}$ with no $x$ variable. We then further eliminate $y$ in $Eq3$ by subtracting 2 times $Eq2^{'}$ from it.
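Before turning to elimination, the vector-view conclusion above ($\rm{A}B_j = C_j$) can be verified numerically. A small sketch in plain Python, using a hand-rolled element-view `matmul` (the names are illustrative, not from any library):

```python
def matmul(A, B):
    # Element view: c_ij = sum_k a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4], [5, 6]]        # 3x2
B = [[7, 8, 9], [10, 11, 12]]       # 2x3
C = matmul(A, B)                    # 3x3

# Vector view: column j of C equals A times column j of B.
j = 0
Bj = [[row[j]] for row in B]        # column j of B as a 2x1 matrix
Cj = matmul(A, Bj)
assert [row[0] for row in Cj] == [row[j] for row in C]
```

The final assertion is exactly the statement $\rm{A}B_{j}=C_{j}$ for $j=1$ (index 0 in code).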
Then the equations can be easily solved by $\textit{backsubstitution}$: solve for $z$ using $Eq3^{'}$, then use $z$ to solve for $y$ using $Eq2^{'}$, and so on.

$$ \begin{equation} \begin{aligned} x + 2y + z &= 2 &Eq1\\ 3x + 8y + z &= 12 &Eq2\\ 4y + z &= 2 &Eq3\\ \end{aligned} \ \rightarrow\ \begin{aligned} x + 2y + z &= 2 &Eq1\\ 2y - 2z &= 6 &Eq2^{'}\\ 4y + z &= 2 &Eq3\\ \end{aligned} \ \rightarrow\ \begin{aligned} x + 2y + z &= 2 &Eq1\\ 2y - 2z &= 6 &Eq2^{'}\\ 5z &= -10 &Eq3^{'}\\ \end{aligned} \end{equation} $$

### Matrix Representation of the Elimination Process

Since the linear equations can be represented by matrices, the elimination process above can also be stated in the language of matrices. First of all, the original equations can be represented by the following $\textit{augmented matrix}$. By $\textit{augmented}$, we mean adding an extra column to the coefficient matrix $\rm{A}$ to represent the right-hand-side vector $\textbf{b}$. Remember that in order to keep the same solution, when multiplying or subtracting equations in the system, we must apply the same manipulation to both sides of each equation.

$$ \begin{equation} \left[ \begin {array}{c|c} \rm{A}& \textbf{b} \end {array} \right] = \left[{ \begin {array}{c|c} \begin{matrix} 1 & 2 & 1 \\ 3 & 8 & 1 \\ 0 & 4 & 1 \\ \end{matrix}& \begin{matrix} 2 \\ 12 \\ 2\\ \end{matrix} \end{array}} \right] \end{equation} $$

Given this, the elimination above can be restated in matrix language, which again can be carried out with ease.
$$ \begin{equation} \left[ {\begin{array}{c|c} \begin{matrix} 1 & 2 & 1 \\ 3 & 8 & 1 \\ 0 & 4 & 1 \\ \end{matrix}& \begin{matrix} 2 \\ 12 \\ 2\\ \end{matrix} \end{array}} \right] \ \rightarrow \ \left[ {\begin{array}{c|c} \begin{matrix} 1 & 2 & 1 \\ 0 & 2 & -2 \\ 0 & 4 & 1 \\ \end{matrix}& \begin{matrix} 2 \\ 6 \\ 2\\ \end{matrix} \end{array}} \right] \ \rightarrow \ \left[ {\begin{array}{c|c} \begin{matrix} 1 & 2 & 1 \\ 0 & 2 & -2 \\ 0 & 0 & 5 \\ \end{matrix}& \begin{matrix} 2 \\ 6 \\ -10\\ \end{matrix} \end{array}} \right] = \left[ \begin {array}{c|c} \rm{U}& \textbf{c} \end {array} \right] \end{equation} $$ ### Mathematical View of Matrix Operation The next thing we are going to do is to express the operations in the elimination process through mathematical language. In the elimination process, what we have done is taking multiples of the equations and performing addition or subtraction operations among them. In matrix language, we are manipulating the rows of the coefficient matrix or the augmented matrix. Recall from matrix multiplication, this can be expressed by multiplying a matrix on the left side, since taking multiples of the rows and performing addition or subtraction operations among them are just taking the $\textit{linear combinations}$. Basically, the procedures taken in the elimination process can be **decomposed** into fundamental ones represented by the so-called $\textit{elimination matrices}$. In particular, the matrix representing the procedures needed to eliminate the entry $\textit a_{ij}$ is denoted as the elimination matrix $\textit E_{ij}$. For example, for eliminating the 3 in row 2 and column 1 in the above matrix $\rm{A}$, we need to multiply the first row by 3 and subtract it from the second row of $\rm{A}$. In the language of matrix operation, $ \begin{equation} \textit E_{21} = \left[ \begin{matrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right] \end{equation} $. 
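The forward-elimination pass on the augmented matrix $[\rm{A} \mid \textbf{b}]$ shown above can be sketched in plain Python (this assumes no pivot happens to be zero, so no row exchanges are needed):

```python
# Augmented matrix [A | b] for the running example.
M = [[1, 2, 1,  2],
     [3, 8, 1, 12],
     [0, 4, 1,  2]]
n = 3

# Forward elimination: knock out the entries below each pivot.
for col in range(n):
    for row in range(col + 1, n):
        factor = M[row][col] / M[col][col]
        M[row] = [x - factor * p for x, p in zip(M[row], M[col])]

print(M)  # rows of [U | c]: [[1,2,1,2], [0,2,-2,6], [0,0,5,-10]]
```

The result matches the $[\rm{U} \mid \textbf{c}]$ computed by hand above, after which backsubstitution finishes the solve.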
And $\begin{equation} \textit E_{21}\rm{A} = \left[ \begin{matrix} 1 & 2 & 1 \\ 0 & 2 & -2 \\ 0 & 4 & 1 \\ \end{matrix} \right] \end{equation} $ eliminates the element $\textit a_{21}=3$. Then the process can continue to eliminate $\textit a_{32}$ by multiplying by $ \begin{equation} \textit E_{32} = \left[ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2 & 1 \\ \end{matrix} \right] \end{equation} $, which gives the final result $\rm{U}$.

Since matrix multiplication is **associative**, all the elimination matrices can be grouped together and computed first, denoted as $\textit{E}$.

$$ \begin{equation} \textit E_{32}\ (E_{21}\rm{A}) = (\textit E_{32}\ \textit E_{21})\rm{A} = \left[ \begin{matrix} 1 & 2 & 1 \\ 0 & 2 & -2 \\ 0 & 0 & 5 \\ \end{matrix} \right] = \rm{U} \end{equation} $$

$$ \textit E = \textit E_{32}\ \textit E_{21} $$

And for the equations to have the same solutions as the original ones, the same multiplication should also be applied to the right-hand-side vector $\textbf{b}$, such that $\textit E\textbf{b} = \textbf{c}$. Thus, the original linear equation system $ \begin{equation} \rm{A} \textbf{x} = \textbf{b} \end{equation} $ has been transformed into a new system $ \begin{equation} (\textit E\rm{A}) \textbf{x} = \rm{U} \textbf{x} = \textbf{c} \end{equation} $, which can be easily solved by the $\textit{backsubstitution}$ procedure. And most importantly, the solutions stay the same.

### Additional comments

When conducting the elimination process, the final elements we put on the diagonal are often called $\textit{pivots}$, including the elements $\textit u_{11}=1$, $\textit u_{22}=2$, $\textit u_{33}=5$ in the resulting matrix $\rm{U}$. Those are the elements we use to eliminate the elements below them. What is worth noticing is that a $\textit{pivot}$ cannot be zero. If we see a zero in a diagonal position, we try a **row exchange** with a row below to make the pivot non-zero.
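The grouping of elimination matrices, and the associativity that justifies it, can be verified directly with a small sketch (same hand-rolled `matmul`, same running example):

```python
def matmul(A, B):
    # Element view: c_ij = sum_k a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A   = [[1, 2, 1], [3, 8, 1], [0, 4, 1]]
E21 = [[1, 0, 0], [-3, 1, 0], [0, 0, 1]]   # row2 -= 3*row1
E32 = [[1, 0, 0], [0, 1, 0], [0, -2, 1]]   # row3 -= 2*row2

U1 = matmul(E32, matmul(E21, A))   # E32 (E21 A)
U2 = matmul(matmul(E32, E21), A)   # (E32 E21) A  -- associativity
assert U1 == U2 == [[1, 2, 1], [0, 2, -2], [0, 0, 5]]
```

Both groupings produce the same $\rm{U}$, which is exactly why $\textit{E} = \textit E_{32}\textit E_{21}$ can be computed first.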
This can be done by multiplying by one of a group of matrices called the $\textit{permutation matrices}$. However, if no non-zero element is available, then the matrix has a problem, or rather, an additional property: we call this kind of matrix $\textit{not invertible}$.

For example, if $\begin{equation} \rm{A} = \left[ \begin{matrix} 1 & 2 & 1 \\ 3 & 6 & 1 \\ 0 & 4 & 1 \\ \end{matrix} \right] \end{equation} $, then after eliminating $a_{21}=3$, $\textit E_{21}\rm{A} = \left[ \begin{matrix} 1 & 2 & 1 \\ 0 & 0 & -2 \\ 0 & 4 & 1 \\ \end{matrix} \right] $ has $a_{22}=0$. At this point, we need to multiply by an additional permutation matrix which performs a swap between row 2 and row 3 of $\textit E_{21}\rm{A}$. The permutation matrix $\begin{equation} P_{23} = \left[ \begin{matrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ \end{matrix} \right] \end{equation} $ finishes the job. (Using the **vector view** of matrix multiplication, multiplying by a matrix from the left takes linear combinations of the rows. Therefore, the second row of the product is the third row of $\textit E_{21}\rm{A}$, and the third row is the second row of $\textit E_{21}\rm{A}$. Thus, $\begin{equation} P_{23}\textit E_{21}\rm{A} = \left[ \begin{matrix} 1 & 2 & 1 \\ 0 & 4 & 1 \\ 0 & 0 & -2 \\ \end{matrix} \right] \end{equation} $.)

## 3. Inverse

For square matrices, there is one special family of members that deserves further attention. Each member, say $\rm{A}$, is paired with another matrix called its $\textit{inverse}$ $\rm A^{-1}$ such that $\rm AA^{-1} = A^{-1}A = I$. Not all square matrices have inverse matrices. Those that do are called $\textit{invertible}$ or $\textit{non-singular}$, and those that do not are called $\textit{singular}$.
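The row swap performed by $P_{23}$ in the example above can be checked with the same kind of sketch:

```python
def matmul(A, B):
    # Element view: c_ij = sum_k a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# E21*A from the zero-pivot example: a zero showed up in position (2,2).
E21A = [[1, 2, 1], [0, 0, -2], [0, 4, 1]]
P23  = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]   # swaps rows 2 and 3 from the left

swapped = matmul(P23, E21A)
assert swapped == [[1, 2, 1], [0, 4, 1], [0, 0, -2]]
```

After the swap, the pivot in position (2,2) is the non-zero 4, and elimination can continue.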
The simplest examples of inverse matrices are those corresponding to the $\textit{elimination matrices}$ or the $\textit{permutation matrices}$, which simply undo the elimination or permutation steps. For example, if $ \begin{equation} \textit E_{21} = \left[ \begin{matrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right] \end{equation} $, then the corresponding inverse is $ \begin{equation} \textit E_{21}^{-1} = \left[ \begin{matrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right] \end{equation} $. In words, what $\textit E_{21}^{-1}$ does is add 3 times row 1 to row 2 of matrix $\rm{A}$, which is just the opposite of what $\textit E_{21}$ does. Therefore, the result of the two operations together is the matrix $\rm{A}$ itself, represented by $\textit E_{21}^{-1}\textit E_{21}=\rm{I}$, i.e., multiplying by an identity matrix.

### Relation with linear equations

The idea of inverse matrices is closely related to the solution of linear equations. If the coefficient matrix $\rm{A}$ has an inverse, then the solution of the linear equation system $ \begin{equation} \rm{A} \textbf{x} = \textbf{b} \end{equation} $ can easily be found by multiplying both sides by the inverse $\rm{A^{-1}}$, such that

$$ \begin{equation} \rm{A^{-1}A} \textbf{x} = \rm{A^{-1}}\textbf{b} \\ \textbf{x} = \rm{A^{-1}}\textbf{b} \end{equation} $$

Moreover, if $\rm{A}$ has an inverse, the corresponding **homogeneous equation system** $\rm{A}\textbf{x} = 0$ has the unique solution $\textbf{x}=0$, which means that the column vectors of $\rm{A}$ are $\textit{linearly independent}$. The concept of linear independence is the key to the overall properties of the solutions to a linear equation system $ \begin{equation} \rm{A} \textbf{x} = \textbf{b} \end{equation} $ and will be introduced in later parts.

### Gauss-Jordan Elimination

The next issue becomes: how do we find the inverse of a square matrix, if it exists?
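The claim that $\textit E_{21}^{-1}\textit E_{21}=\rm{I}$ (in both orders) is easy to check numerically; a sketch:

```python
def matmul(A, B):
    # Element view: c_ij = sum_k a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E21    = [[1, 0, 0], [-3, 1, 0], [0, 0, 1]]
E21inv = [[1, 0, 0], [ 3, 1, 0], [0, 0, 1]]   # undoes E21

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(E21inv, E21) == I3
assert matmul(E21, E21inv) == I3
```

Subtracting 3 times row 1 and then adding it back (or vice versa) leaves every matrix unchanged, which is exactly what the two assertions say.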
In fact, finding the inverse can also be accomplished through **row operations**, given that $\rm A^{-1}A = I$. Assume that we perform a series of row operations that turn $\rm{A}$ into $\rm{I}$; the product of those row-operation matrices is then $E=\rm A^{-1}$. Now suppose we perform this series of operations on the two matrices $\rm{A}$ and $\rm{I}$ simultaneously, recording them in two blocks of a matrix. Then the inverse automatically appears in the second block.

$$ \begin{equation} E \begin{matrix} \left[ \begin {array}{c|c} \rm{A}& \rm{I} \end {array} \right] \end{matrix} = \begin{matrix} \left[ \begin {array}{c|c} \rm{I}& E \end {array} \right] \end{matrix} \end{equation} $$

Therefore, by **augmenting** an extra block containing an identity matrix, we can keep track of the row operations which turn $\rm{A}$ into $\rm{I}$. Specifically, the row operations are done in the manner of Gaussian elimination, but we continue to **eliminate the entries above the diagonal** as well, and normalize the diagonal entries to 1. For example, to find the inverse of the matrix $ \left[ \begin{matrix} 1 & 3 \\ 2 & 7 \\ \end{matrix} \right] $, we augment an identity matrix to the right and start to perform the elimination.

$$ \left[ \begin{array}{c|c} \begin{matrix} 1 & 3 \\ 2 & 7 \\ \end{matrix}& \begin{matrix} 1 & 0 \\ 0 & 1 \\ \end{matrix} \end{array} \right] \ \rightarrow \ \left[ \begin{array}{c|c} \begin{matrix} 1 & 3 \\ 0 & 1 \\ \end{matrix}& \begin{matrix} 1 & 0 \\ -2 & 1 \\ \end{matrix} \end{array} \right] \ \rightarrow \ \left[ \begin{array}{c|c} \begin{matrix} 1 & 0 \\ 0 & 1 \\ \end{matrix}& \begin{matrix} 7 & -3 \\ -2 & 1 \\ \end{matrix} \end{array} \right] $$

## 4. First Factorization: A=LU

The elimination process produces $\textit E\rm{A} = \rm{U}$ and $\rm{U}\textbf{x} = \textbf{c}$, where $\textit {E}$ is the product of a series of elimination matrices.
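The $2\times 2$ Gauss-Jordan example above can be carried out step by step in plain Python (the pivots of this particular matrix are already 1, so no normalization step is needed):

```python
# Augmented matrix [A | I] for A = [[1, 3], [2, 7]].
M = [[1, 3, 1, 0],
     [2, 7, 0, 1]]

# Downward elimination: row2 -= 2 * row1.
f = M[1][0] / M[0][0]
M[1] = [M[1][j] - f * M[0][j] for j in range(4)]

# Upward elimination: row1 -= 3 * row2.
g = M[0][1] / M[1][1]
M[0] = [M[0][j] - g * M[1][j] for j in range(4)]

# The right-hand block is now the inverse.
Ainv = [M[0][2:], M[1][2:]]
assert Ainv == [[7, -3], [-2, 1]]
```

The right-hand block ends up holding exactly the $ \left[ \begin{matrix} 7 & -3 \\ -2 & 1 \\ \end{matrix} \right] $ found by hand above.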
Assume that we do not need to perform any permutations, that is, we do not multiply by any permutation matrices during the elimination process. We know that the $\textit{elimination matrices}$ always have inverses, since what they do is take $\textit{linear combinations}$ of the rows of $\rm{A}$. For example, $ \begin{equation} \textit E_{21} = \left[ \begin{matrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right] \end{equation} $ above just takes 3 times the first row and subtracts it from the second row of matrix $\rm{A}$. Therefore, adding 3 times the first row back to the second row of the **product**, namely multiplying by $ \begin{equation} \textit E_{21}^{-1} = \left[ \begin{matrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right] \end{equation} $ on the left, undoes the operation, so that $\textit E_{21}^{-1}\textit E_{21}$ is the identity matrix $\rm{I}$.

Back to the product of elimination matrices $\textit {E}$: $\textit {E}$ must have an inverse. If we multiply both sides of $\textit E\rm{A} = \rm{U}$ by the inverse $\textit E^{-1}$, we get

$$ \rm{A} = \textit E^{-1}\rm{U} $$

Notice two important properties of the matrices on the right:

1. $\textit {E}$ and $\textit E^{-1}$ are both lower triangular matrices.
2. $\rm {U}$ is an upper triangular matrix.

The fact that $\rm{U}$ is upper triangular is obvious, since we knock out all the entries below the diagonal in the elimination process. And every elimination matrix $\textit E_{ij}$ is lower triangular, since we always subtract a multiple of an **upper** row from the row $i$ that contains the element $a_{ij}$. Similarly, the inverse of $\textit E_{ij}$, which **undoes** the row operation in $\textit E_{ij}$, is also a lower triangular matrix. Lastly, the **product** of two lower triangular matrices is once again a lower triangular matrix, which gives us property 1.
Because of these two properties, we often denote the $\textit E^{-1}$ on the right-hand side by $\rm{L}$ to signify its lower triangular nature. Thus, we arrive at the first **factorization** of a matrix.

$$ \rm{A} = \rm{L} \rm{U} $$

This means the information contained in $\rm{A}$ is now **decomposed** into two parts, stored in $\rm{L}$ and $\rm{U}$ respectively.

### Additional comments

In order to decompose $\rm{A}$ into two triangular matrices $\rm{L}$ and $\rm{U}$, we have assumed that no permutations need to be performed. That is surely not always the case. If we do need to perform a permutation, represented by a permutation matrix $\textit{P}$, we carry it out **before** the elimination or decomposition process, so that the result of the permutation, $\textit{P}\rm{A}$, is a matrix that requires no further permutations. This additional procedure brings us back to the case discussed above, with the generalized form

$$ \textit{P} \rm{A} = \rm{L} \rm{U} $$
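Putting the pieces together for the running example: a sketch verifying that $\rm{L}\rm{U}$ reproduces $\rm{A}$, where $\rm{L} = \textit E^{-1} = \textit E_{21}^{-1}\textit E_{32}^{-1}$ simply collects the multipliers 3 and 2 below the diagonal:

```python
def matmul(A, B):
    # Element view: c_ij = sum_k a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 1], [3, 8, 1], [0, 4, 1]]
L = [[1, 0, 0], [3, 1, 0], [0, 2, 1]]   # multipliers 3 and 2 drop into place
U = [[1, 2, 1], [0, 2, -2], [0, 0, 5]]

assert matmul(L, U) == A
```

This also illustrates the pleasant fact that, with no row exchanges, the multipliers used in elimination land directly in the subdiagonal entries of $\rm{L}$.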
section \<open>\isaheader{Hashable Interface}\<close> theory Intf_Hash imports Main "../../Lib/HashCode" "../../Lib/Code_Target_ICF" Automatic_Refinement.Automatic_Refinement begin type_synonym 'a eq = "'a \<Rightarrow> 'a \<Rightarrow> bool" type_synonym 'k bhc = "nat \<Rightarrow> 'k \<Rightarrow> nat" subsection \<open>Abstract and concrete hash functions\<close> definition is_bounded_hashcode :: "('c\<times>'a) set \<Rightarrow> 'c eq \<Rightarrow> 'c bhc \<Rightarrow> bool" where "is_bounded_hashcode R eq bhc \<equiv> ((eq,(=)) \<in> R \<rightarrow> R \<rightarrow> bool_rel) \<and> (\<forall>n. \<forall> x \<in> Domain R. \<forall> y \<in> Domain R. eq x y \<longrightarrow> bhc n x = bhc n y) \<and> (\<forall>n x. 1 < n \<longrightarrow> bhc n x < n)" definition abstract_bounded_hashcode :: "('c\<times>'a) set \<Rightarrow> 'c bhc \<Rightarrow> 'a bhc" where "abstract_bounded_hashcode Rk bhc n x' \<equiv> if x' \<in> Range Rk then THE c. \<exists>x. (x,x') \<in> Rk \<and> bhc n x = c else 0" lemma is_bounded_hashcodeI[intro]: "((eq,(=)) \<in> R \<rightarrow> R \<rightarrow> bool_rel) \<Longrightarrow> (\<And>x y n. x \<in> Domain R \<Longrightarrow> y \<in> Domain R \<Longrightarrow> eq x y \<Longrightarrow> bhc n x = bhc n y) \<Longrightarrow> (\<And>x n. 1 < n \<Longrightarrow> bhc n x < n) \<Longrightarrow> is_bounded_hashcode R eq bhc" unfolding is_bounded_hashcode_def by force lemma is_bounded_hashcodeD[dest]: assumes "is_bounded_hashcode R eq bhc" shows "(eq,(=)) \<in> R \<rightarrow> R \<rightarrow> bool_rel" and "\<And>n x y. x \<in> Domain R \<Longrightarrow> y \<in> Domain R \<Longrightarrow> eq x y \<Longrightarrow> bhc n x = bhc n y" and "\<And>n x. 
1 < n \<Longrightarrow> bhc n x < n" using assms unfolding is_bounded_hashcode_def by simp_all lemma bounded_hashcode_welldefined: assumes BHC: "is_bounded_hashcode Rk eq bhc" and R1: "(x1,x') \<in> Rk" and R2: "(x2,x') \<in> Rk" shows "bhc n x1 = bhc n x2" proof- from is_bounded_hashcodeD[OF BHC] have "(eq,(=)) \<in> Rk \<rightarrow> Rk \<rightarrow> bool_rel" by simp with R1 R2 have "eq x1 x2" by (force dest: fun_relD) thus ?thesis using R1 R2 BHC by blast qed lemma abstract_bhc_correct[intro]: assumes "is_bounded_hashcode Rk eq bhc" shows "(bhc, abstract_bounded_hashcode Rk bhc) \<in> nat_rel \<rightarrow> Rk \<rightarrow> nat_rel" (is "(bhc, ?bhc') \<in> _") proof (intro fun_relI) fix n n' x x' assume A: "(n,n') \<in> nat_rel" and B: "(x,x') \<in> Rk" hence C: "n = n'" and D: "x' \<in> Range Rk" by auto have "?bhc' n' x' = bhc n x" unfolding abstract_bounded_hashcode_def apply (simp add: C D, rule) apply (intro exI conjI, fact B, rule refl) apply (elim exE conjE, hypsubst, erule bounded_hashcode_welldefined[OF assms _ B]) done thus "(bhc n x, ?bhc' n' x') \<in> nat_rel" by simp qed lemma abstract_bhc_is_bhc[intro]: fixes Rk :: "('c\<times>'a) set" assumes bhc: "is_bounded_hashcode Rk eq bhc" shows "is_bounded_hashcode Id (=) (abstract_bounded_hashcode Rk bhc)" (is "is_bounded_hashcode _ (=) ?bhc'") proof fix x'::'a and y'::'a and n'::nat assume "x' = y'" thus "?bhc' n' x' = ?bhc' n' y'" by simp next fix x'::'a and n'::nat assume "1 < n'" from abstract_bhc_correct[OF bhc] show "?bhc' n' x' < n'" proof (cases "x' \<in> Range Rk") case False with \<open>1 < n'\<close> show ?thesis unfolding abstract_bounded_hashcode_def by simp next case True then obtain x where "(x,x') \<in> Rk" .. have "(n',n') \<in> nat_rel" .. from abstract_bhc_correct[OF assms] have "?bhc' n' x' = bhc n' x" apply - apply (drule fun_relD[OF _ \<open>(n',n') \<in> nat_rel\<close>], drule fun_relD[OF _ \<open>(x,x') \<in> Rk\<close>], simp) done also from \<open>1 < n'\<close> and bhc have "... 
< n'" by blast finally show "?bhc' n' x' < n'" . qed qed simp (*lemma hashable_bhc_is_bhc[autoref_ga_rules]: "\<lbrakk>STRUCT_EQ_tag eq (=;) REL_IS_ID R\<rbrakk> \<Longrightarrow> is_bounded_hashcode R eq bounded_hashcode" unfolding is_bounded_hashcode_def by (simp add: bounded_hashcode_bounds)*) (* TODO: This is a hack that causes the relation to be instantiated to Id, if it is not yet fixed! *) lemma hashable_bhc_is_bhc[autoref_ga_rules]: "\<lbrakk>STRUCT_EQ_tag eq (=); REL_FORCE_ID R\<rbrakk> \<Longrightarrow> is_bounded_hashcode R eq bounded_hashcode_nat" unfolding is_bounded_hashcode_def by (simp add: bounded_hashcode_nat_bounds) subsection \<open>Default hash map size\<close> definition is_valid_def_hm_size :: "'k itself \<Rightarrow> nat \<Rightarrow> bool" where "is_valid_def_hm_size type n \<equiv> n > 1" lemma hashable_def_size_is_def_size[autoref_ga_rules]: shows "is_valid_def_hm_size TYPE('k::hashable) (def_hashmap_size TYPE('k))" unfolding is_valid_def_hm_size_def by (fact def_hashmap_size) end
subroutine adjust(km, nl, Cpd, Ps, PH, P, T, Th, kap) ! Hard adiabatic adjustment to lapse rate specified by kap ! Follows Kerry Emanuel's convect43b.f but does not account for ! virtual temperature effects and does not mix tracers ! NOTE: K=1 AT THE SURFACE!` ! km = vertical dimension ! nl = maximum level to which convection penetrates ! Cpd = specific heat of dry air ! Ps = surface pressure ! PH = pressure at level interfaces ! P = pressure at midlevel ! T = temperature at midlevel ! Th = (effective) potential temperature at midlevel ! kap = (effective) Rd/Cpd implicit none integer km,nl,jn,i,j,k,jc real Th(km),T(km),PH(km+1),P(km),TOLD(km) real sum,thbar,ahm,a2,RDCP,kap,X,Cpd,Ps do k=1,km told(k)=0. enddo ! Set (effective) Rd/Cpd rdcp = kap ! Perform adiabatic adjustment jc=0 do 30 i=nl-1,1,-1 jn=0 sum=th(i) do j=i+1,nl sum=sum+th(j) thbar=sum/float(j+1-i) if(th(j).lt.thbar)jn=j enddo ! if (i.eq.1) jn=max(jn,2) if (jn.eq.0) goto 30 12 continue ahm=0.0 do 15 j=i,jn ahm= ahm + cpd * t(j)*( ph(j)-ph(j+1) ) 15 continue a2=0.0 do 20 j=i,jn x=(p(j)/ps)**rdcp told(j)=t(j) t(j)=x a2=a2+cpd*x*(ph(j)-ph(j+1)) 20 continue do 25 j=i,jn th(j)=ahm/a2 t(j)=t(j)*th(j) 25 continue if(th(jn+1).lt.th(jn).and.jn.le.nl)then jn=jn+1 goto 12 end if if(i.eq.1)jc=jn 30 continue end
/- Copyright (c) 2021 Anne Baanen. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Anne Baanen -/ import number_theory.class_number.admissible_abs import number_theory.class_number.finite import number_theory.number_field.basic /-! # Class numbers of number fields This file defines the class number of a number field as the (finite) cardinality of the class group of its ring of integers. It also proves some elementary results on the class number. ## Main definitions - `number_field.class_number`: the class number of a number field is the (finite) cardinality of the class group of its ring of integers -/ namespace number_field variables (K : Type*) [field K] [number_field K] namespace ring_of_integers noncomputable instance : fintype (class_group (ring_of_integers K)) := class_group.fintype_of_admissible_of_finite ℚ K absolute_value.abs_is_admissible end ring_of_integers /-- The class number of a number field is the (finite) cardinality of the class group. -/ noncomputable def class_number : ℕ := fintype.card (class_group (ring_of_integers K)) variables {K} /-- The class number of a number field is `1` iff the ring of integers is a PID. -/ theorem class_number_eq_one_iff : class_number K = 1 ↔ is_principal_ideal_ring (ring_of_integers K) := card_class_group_eq_one_iff end number_field namespace rat open number_field theorem class_number_eq : number_field.class_number ℚ = 1 := class_number_eq_one_iff.mpr $ by convert is_principal_ideal_ring.of_surjective (rat.ring_of_integers_equiv.symm : ℤ →+* ring_of_integers ℚ) (rat.ring_of_integers_equiv.symm.surjective) end rat
State Before: G : Type u_1 G' : Type ?u.435622 inst✝² : Group G inst✝¹ : Group G' A : Type ?u.435631 inst✝ : AddGroup A s : Set G N : Subgroup G tn : Subgroup.Normal N a : G h : a ∈ N ⊢ conjugatesOf a ⊆ ↑N State After: G : Type u_1 G' : Type ?u.435622 inst✝² : Group G inst✝¹ : Group G' A : Type ?u.435631 inst✝ : AddGroup A s : Set G N : Subgroup G tn : Subgroup.Normal N a✝ : G h : a✝ ∈ N a : G hc : a ∈ conjugatesOf a✝ ⊢ a ∈ ↑N Tactic: rintro a hc State Before: G : Type u_1 G' : Type ?u.435622 inst✝² : Group G inst✝¹ : Group G' A : Type ?u.435631 inst✝ : AddGroup A s : Set G N : Subgroup G tn : Subgroup.Normal N a✝ : G h : a✝ ∈ N a : G hc : a ∈ conjugatesOf a✝ ⊢ a ∈ ↑N State After: case intro G : Type u_1 G' : Type ?u.435622 inst✝² : Group G inst✝¹ : Group G' A : Type ?u.435631 inst✝ : AddGroup A s : Set G N : Subgroup G tn : Subgroup.Normal N a : G h : a ∈ N c : G hc : c * a * c⁻¹ ∈ conjugatesOf a ⊢ c * a * c⁻¹ ∈ ↑N Tactic: obtain ⟨c, rfl⟩ := isConj_iff.1 hc State Before: case intro G : Type u_1 G' : Type ?u.435622 inst✝² : Group G inst✝¹ : Group G' A : Type ?u.435631 inst✝ : AddGroup A s : Set G N : Subgroup G tn : Subgroup.Normal N a : G h : a ∈ N c : G hc : c * a * c⁻¹ ∈ conjugatesOf a ⊢ c * a * c⁻¹ ∈ ↑N State After: no goals Tactic: exact tn.conj_mem a h c
###################################################################### `is_element/linord` := (A::set) -> proc(L) global reason; if not type(L,list) then reason := ["is_element/linord","L is not a list",L]; return false; fi; if nops(L) <> nops(A) then reason := ["is_element/linord","L does not the same length as A",L,A]; return false; fi; if {op(L)} <> A then reason := ["is_element/linord","L is not an enumeration of A",L,A]; return false; fi; return true; end; ###################################################################### `is_equal/linord` := (A::set) -> proc(L0,L1) return evalb(L0 = L1); end; ###################################################################### `op/linord` := (A::set) -> proc(L) local n,i; n := nops(A); return [seq(L[n-i],i=0..n-1)]; end; `res/linord` := (A::set,B::set) -> proc(L) return select(a -> member(a,B),L); end: ###################################################################### `flip/linord` := (A::set) -> (L) -> proc(a) local n,i; n := nops(L); if n <= 1 then return L; fi; i := 1; while i <= n and L[i] <> a do i := i+1; od; if i > n then error("a is not in L"); fi; return [op(i..n,L),op(1..i-1,L)]; end: `loop/linord` := (A::set) -> (L) -> L; ###################################################################### `random_element/linord` := (A::set) -> proc() return combinat[randperm](A); end: `list_elements/linord` := proc(A::set) return combinat[permute](A); end: `count_elements/linord` := proc(A::set) return nops(A) !; end: ###################################################################### `linord_is_leq` := (A::set) -> (L) -> proc(a,b) local x; for x in L do if x = a then return true; fi; if x = b then return false; fi; od; error("Neither a nor b is in L"); end: `linord_is_less` := (A::set) -> (L) -> proc(a,b) return evalb((a <> b) and `linord_is_leq`(A)(L)(a,b)); end:
[STATEMENT] lemma (in wf_digraph) scc_of_in_sccs_verts: assumes "u \<in> verts G" shows "scc_of u \<in> sccs_verts" [PROOF STATE] proof (prove) goal (1 subgoal): 1. scc_of u \<in> sccs_verts [PROOF STEP] using assms [PROOF STATE] proof (prove) using this: u \<in> verts G goal (1 subgoal): 1. scc_of u \<in> sccs_verts [PROOF STEP] by (auto simp: in_sccs_verts_conv_reachable scc_of_def intro: reachable_trans exI[where x=u])
\section{Quantum Spin Liquids}
\subsection{LSM Constraints}
If the Hamiltonian has symmetry
\begin{align}
G = \underbrace{SO(3)}_{\text{spin rotation}}\times \underbrace{ \mathbb{Z}^d}_{\text{trans sym}}
\end{align}
and an odd number of spin-$\frac{1}{2}$s per unit cell, then no trivial unique gapped ground state is possible. In $d=1$, either
\begin{enumerate}
\item The ground state breaks the symmetry
\item Or it is gapless
\end{enumerate}
In $d>1$
\begin{enumerate}
\item ''''
\item ''''
\item Topological order (compatible with symmetry)
\end{enumerate}
With
\begin{align}
G &= U(1) \times \mathbb{Z}^d
\end{align}
the filling $\nu$ is the charge per unit cell. If $\nu$ is fractional, then no unique gapped ground state is possible.

The general version:
\begin{align}
G &= G_{\textrm{on-site}} \times \mathbb{Z}^d
\end{align}
If there is a projective representation per unit cell,
\begin{align}
[w] \in H^2 \left( G, U(1) \right)
\end{align}
then no unique gapped ground state is possible.

Consider $U(1)$ symmetry. Consider a cylinder with space like $T^d$. Let $L_x=: L$. $V$ is the number of unit cells, and $C=V/L$. Then translational symmetry acts like
\begin{align}
T_x \ket{\Psi_0} &= e^{iP_x^0} \ket{\Psi_0}
\end{align}
Suppose the ground state is unique and gapped. Insert one flux quantum adiabatically; then the large gauge transformation
\begin{align}
U &= e^{i \frac{2\pi}{L} \sum_r x n_r}
\end{align}
acts like
\begin{align}
\ket{\Psi_0} &\to U \ket{\Psi_0} = \ket{\tilde{\Psi}_0}
\end{align}
and
\begin{align}
T_x \ket{\tilde{\Psi}_0} &= e^{i \tilde{P}_x^0} \ket{\tilde{\Psi}_0}
\end{align}
Then the new momentum is
\begin{align}
\tilde{P}_x^0 &= P_x^0 + \frac{2\pi}{L} \sum_r n_r\\
&= P_x^0 + \frac{2\pi N}{L}\\
&= P_x^0 + 2\pi \frac{V}{L} \frac{p}{q}
\end{align}
where we let $\nu = N/V$. Assume $\nu = p/q$ for some coprime integers $p$ and $q$. Pick $V/L$ to be coprime with $q$ so that $\frac{V}{L} \frac{p}{q}$ is not an integer.
So then
\begin{align}
e^{i\tilde{P}_x^0} \ne e^{i P_x^0}
\end{align}
which implies $\ket{\Psi_0}$ and $\ket{\tilde{\Psi}_0}$ must be orthogonal, which is a contradiction.

\begin{question}
This is not a full proof?
\end{question}
No; the full and rigorous proof was done originally by LSM. They considered a variational state and showed you could get to within $O(1/L)$ of the ground-state energy. Hastings gave a long rigorous proof, but had to make a number of assumptions about the number of unit cells. He was able to prove it for even numbers of cells, but still had restrictions on the size.

\begin{question}
How do you have the freedom to pick $V/L$?
\end{question}
We're assuming we have enough freedom to choose the geometry. This proof would not work for specific values of $V/L$. The next step is that if I cannot have a unique ground state for this system size, then I also cannot have it for any system size. The intuition is that if you did have a gapped ground state, even if it were degenerate, then it has a finite correlation length, so the spectrum shouldn't care exactly what the system size is, because it's a global property up to exponentially small corrections. We go from specific system sizes to generic system sizes by arguing with correlation lengths.

\begin{question}
This smells like the FQH argument, where $V/L$ is the number of orbitals.
\end{question}
In FQH you have fractional filling, and this is exactly a way of proving FQH has degenerate ground states; if you did the work, the number of degenerate ground states is $q$. If you went further, you could show that inserting one flux quantum toggles from one state to another, and you can go further and argue that you have to have $q$ states.

\begin{question}
What if there is no translational symmetry?
\end{question}
What we learn here is that if you have translational symmetry and on-site symmetry, no unique ground state is possible. For $d>1$ there are 3 possibilities. Topological order is compatible with symmetry.
The degeneracy itself is robust to breaking translational symmetry. The topological degeneracy when you insert flux does change which degenerate state you are in. The main point is that once you have topological order you can satisfy this theorem; the theorem itself doesn't require topological order, but only that you have these almost degenerate states. The translation symmetry and this filling requirement imply that you must have a degeneracy, but that degeneracy can still be robust to breaking the translational symmetry. Nothing says the degeneracy isn't robust if I break the translational symmetry.

\begin{question}
Here you assumed periodicity in the $x$ direction, but where did we use it?
\end{question}
Technically we used it, because I assumed translational symmetry in every direction, and if I had a boundary, I would technically break translational symmetry. And I needed translational symmetry to define a filling. It's there implicitly, but explicitly I didn't use it. In practice it would get a bit more subtle with boundaries, because you would have chiral edge modes on the boundary, and it wouldn't necessarily be gapped. This result applies to many more states if I put it on a torus.

\section{Spin liquids}
That was all motivation. Once you have a system with fractional filling, topological order is the only way to still have a gapped state and be compatible with the symmetry. Interestingly, if you have such a system with an odd number of spin-$\frac{1}{2}$s per unit cell, plus the other conditions, then that implies topological order. You have systems with fractional filling and spin-half per unit cell, and if the system wants to preserve the symmetry and be gapped, the only thing that can make that happen is topological order.

The physical picture for what the quantum spin liquid is comes from the idea of a resonating valence bond.
\begin{align}
H &= \sum_{\langle ij\rangle} J_{ij} \vec{s}_{i} \cdot \vec{s}_{j}
\end{align}
for $J_{ij}>0$.
For a single pair $J_{ij} \vec{s}_i \cdot \vec{s}_j$, the lowest energy state is a spin singlet. The idea of the resonating valence bond is that the state is a superposition of different singlet configurations. This goes back to chemistry, where you have molecules like benzene: you have 6 carbon atoms, with hydrogen on the outside, and the electrons can form singlet configurations. But the actual state of the molecule is a superposition between one singlet configuration and the other. People knew about this in chemistry, and Linus Pauling started thinking about many-body valence bond resonance in metals, and that motivated Phil Anderson to think about resonating valence bonds in insulators. Anderson in 1973 proposed the idea that you could have an electrically insulating system where the spin degrees of freedom form resonating valence bonds, meaning that the ground state is a superposition over valence bond configurations. The system is forming a quantum liquid of singlets: a ground state superposition of all possible singlet configurations. Anderson talked about it in 1973 and it was completely ignored. In 1986, physicists discovered high-temperature superconductivity in the cuprates. Right after that there was a big explosion in the field. People tried to come up with theories for why the cuprates were high-temperature superconductors, and Anderson had this spin liquid lying in a drawer, and he said: look, maybe this spin liquid could have something to do with it. Since 1987, there's been a huge explosion in understanding spin liquids and resonance states. I'm not going to mention the relationship with high-temperature superconductivity, because it's very tenuous and controversial. It may or may not have anything to do with high-$T_c$ superconductivity, and there are opposing camps of physicists who hate each other. It led to the destruction of the careers of many young scientists in the 1990s, and there should be dramatic books written about the sociology of this; it's extremely fascinating. 
Now we're getting to a point where those people are dying and retiring, so maybe the new generation can solve this problem. It was pretty bad. As for the kinds of ground states that are natural to think about once you realise that the lowest energy state of a bond is a singlet: the first natural states are a crystal of these valence bonds, sometimes called a VBS or valence bond solid. These bonds form singlets, and do so in some ordered pattern. This kind of state breaks translation symmetry, but it preserves the SO(3). In the context of LSM, it's the first possibility, where you just break the symmetry. Another possible state is a spin-density wave. The simplest example is a ferromagnet, so you have spins up and down. An antiferromagnet is a special spin-density wave with wave vector $k=\pi$. This thing breaks SO(3) and translation, but it preserves some combination. The third interesting possibility is this RVB state. There are two kinds of RVB states we talk about. One is the short-range RVB, where spins near each other form valence bonds, so it is a superposition only over states where nearby spins form valence bonds. This kind of thing means the system has a finite correlation length. Anything happening in some region: spins are only connected to spins nearby, so they come with a finite correlation length, which is typically associated with gapped states. If you had a short-range RVB state, a gapped state, then it has to have topological order by the LSM theorem. Specifically, if you put it on a torus, there will be a ground state degeneracy as required by LSM. Then you have the long-range RVB, where you have valence bonds arbitrarily far apart; of course the amplitude for having long-range valence bonds will decrease with distance, but perhaps with power-law decay. It's the same reason why you have gapless systems at all. A gapless system is one where the correlations decay as a power law. How can that happen when I only have short-range interactions? 
The system somehow conspires to have long-range correlations: power-law decay, but still long range. One thing you can think of physically: suppose that these bonds are really strong. This guy has really strong bonds with this one, but the bonds with this other guy are weak. So far I've given you a picture from which you can construct variational states, where you can write down a ground state that is a superposition of different configurations. You can try to figure out whether these are energetically favourable ground states. That's what Anderson did. It's not super concrete either: I have a Hamiltonian whose ground state is going to look like this form. \section{Quantum Dimer Model} An important development after 1987 was quantum dimer models, which gave a nice way to think about spin liquids, changing the rules a little to make this picture more precise. It changed the rules a bit, so some people don't like that. A good review article is arXiv 0809.3051, and the original paper was Rokhsar-Kivelson 1987. The quantum dimer model says: let's forget about the spins entirely, and think of a quantum system of dimers, which are just colourings of edges. The idea is to directly model valence bonds. We replace valence bonds with dimers. The key point is that we treat different dimer configurations as orthogonal quantum states. This is the key thing, and it is actually incorrect for spin systems. If I take a spin system and consider some configuration of singlets, valence bonds, that's not going to be orthogonal to every other configuration. But the idea of the dimer model is to forget that and just demand they are orthogonal. The next thing is to assume the dimers cannot touch each other. If one spin forms a singlet with one spin, it cannot also form a singlet with another spin. So dimers cannot touch. Here we have just set up the Hilbert space. Now we want a Hamiltonian, for example on a square lattice. 
The Hamiltonian will have two terms, summed over plaquettes of the square lattice. \begin{align} H_{QDM} &= \sum_{\square} -t \left( \ket{=}\bra{||} + \mathrm{h.c.} \right) + V \left( \ket{=}\bra{=} + \ket{||}\bra{||} \right) \end{align} The properties of the ground state depend very sensitively on what dimension we are considering, whether square or cubic, and also very sensitively on the geometry, like triangular, honeycomb, etc. Because we have these local rules which flip configurations, let me define some terminology. We could have a \emph{flippable plaquette}, which takes $=$ to $||$. For a non-flippable plaquette, consider 3 plaquettes, each with 1 singlet, but with the bonds not touching; then that's not flippable. We could also have a \emph{flippable loop}. I can draw a fictitious loop that surrounds these guys. For a dimer configuration on this loop, if I can flip it with local moves to get the complementary configuration, doing local moves in the interior of the loop and maybe a little outside of it, then that is a flippable loop. Once we introduce these local moves, the question is which dimer configurations can be converted to other dimer configurations by local moves. And that introduces topology and winding into the system. You can ask if there are dimer configurations that cannot be transformed into each other by local moves, and if there are, then you can argue they are topologically distinct from one another. It turns out the biggest difference in dimer models is whether your lattice is bipartite or non-bipartite. Bipartite means that you can colour the vertices with 2 colours such that no nearest neighbours have the same colour. On the square lattice you can do that, so it's bipartite, but you can't do that on a triangular lattice, so it's non-bipartite. The notion of a topological sector is that you count the number of dimers crossing some line. What I mean is: suppose you draw a triangular lattice. 
Then you draw a dotted line that crosses some plaquettes. You draw your line and you count how many crossings with dimers there are. One thing you'll notice is that the local moves only change the number of crossings by an even amount. Local moves here means a specific move on two triangles. The key point is that it's always 2 to 2. That's where this parity thing comes up. \begin{question} Is there a physical reason we have this constraint? \end{question} No. We're just writing down interesting Hamiltonians for these and considering interesting classes of moves. We'll find these map to lattice gauge theory, and changing the kind of moves allowed is changing the gauge constraints, but there's really no physical reason. It's made up anyway: physically motivated, but the actual Hamiltonian is argued from another starting point. \begin{question} Perhaps it's conservation of energy? \end{question} There are terms you can write down like $S_i^+ S_j^+ S_k^- S_l^-$. But then you can write down 3-spin terms as well\ldots Suppose I have a full dimer covering, so there are no extra dimers you can place anywhere that are compatible with the rules. Then you cannot move a single dimer. Requiring that we have a full dimer covering will restrict the local moves you can have. The point is that if I count the parity of the number of crossings, that's a topological invariant, because it's invariant under local moves. Putting this on a torus gives four topological sectors, because we have 2 non-contractible cycles, one called $\alpha$ and another called $\beta$. Suppose the number of crossings is $n_c^\alpha$ and $n_c^\beta$; then the invariants are \begin{align} n_c^\alpha \mod 2\\ n_c^\beta \mod 2 \end{align} You can't change these with local plaquette moves; we don't have such a term in the Hamiltonian that would allow me to do that. On a bipartite lattice, we have more than just the number of crossings $\mod 2$. If you have a bipartite lattice, then you have A sites and B sites. 
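The parity argument above can be sanity-checked with a toy computation (an illustrative sketch, not from the lecture; the single-plaquette lattice and reference line are minimal hypothetical examples). A plaquette flip swaps the two horizontal dimers for the two vertical ones, and a vertical reference line crosses only horizontal edges, so the flip changes the crossing count by 2, leaving the parity invariant.

```python
# Toy check of the crossing-parity invariant for the quantum dimer
# model: a plaquette flip changes the number of dimers crossing a
# reference line by an even amount.

def edge(a, b):
    """An undirected edge between lattice sites a and b."""
    return frozenset({a, b})

# The '=' configuration: two horizontal dimers on one square plaquette.
horizontal = {edge((0, 0), (1, 0)), edge((0, 1), (1, 1))}
# The '||' configuration: two vertical dimers on the same plaquette.
vertical = {edge((0, 0), (0, 1)), edge((1, 0), (1, 1))}

# Edges crossed by a vertical reference line drawn between x=0 and x=1:
# exactly the horizontal edges of the plaquette.
cut = {edge((0, 0), (1, 0)), edge((0, 1), (1, 1))}

def crossings(dimers):
    """Number of dimers in this configuration crossing the line."""
    return len(dimers & cut)

# The flip takes 2 crossings to 0 crossings: the count changes by an
# even amount, so the parity (crossings mod 2) is unchanged.
assert crossings(horizontal) % 2 == crossings(vertical) % 2
```

On a torus one would keep such a parity for each of the two non-contractible cycles, giving the four sectors mentioned above.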
Imagine that all of our dimers actually have an arrow on them, pointing from the A sublattice to the B sublattice, so they implicitly have arrows from A to B and so on. I can count the winding number, which is \begin{align} n_{\textrm{winding}} &= n_{\textrm{left-crossings}} - n_{\textrm{right-crossings}} \end{align} On a torus, there are $\mathbb{Z}\times\mathbb{Z}$ sectors. \section{Topological Excitations} There are two kinds of excitations we can have. The first ones are called \emph{visons}. \begin{question} It's not clear it's the same for each vison \end{question} ??? A vison is a dual excitation that lives on plaquettes. Imagine a line that connects the two. Then count the dimer crossings on that line. The number of crossings is $n_c$. The wave function of a vison is \begin{align} \ket{\mathrm{vison}} = \sum_c (-1)^{n_c} A_c \ket{c} \end{align} but the ground state is \begin{align} \ket{\mathrm{ground state}} = \sum_c A_c \ket{c} \end{align} Why are they of this form? Because it's like a vortex. These excitations become well-defined finite-energy excitations. In general, the choice of line matters, and the energy would even depend on\ldots The next type of topological excitation is called a \emph{monomer}. I can add terms like this to $H$: \begin{align} \ket{|}\bra{:} + \ket{:}\bra{|} \end{align} The dots just mean separate spins by themselves. The name vison is controversial. It was introduced in 1997 by Matt Fisher. What people had for 10 years been calling $\mathbb{Z}_2$ vortices got renamed to something new; there's a reason to change the name. One way to think about what's going on: say you have a normal superconductor with vortices. You can have a $+1$ vortex, or a $-1$ vortex. A vison is kind of a superposition between a $+1$ and a $-1$ vortex. You can think of it as a quantum superposition of odd-winding-number vortices. The -on is for particle, V for vortex, and is for Ising. So: vison. 
The key point is that depending on whether the ground state is an ordered valence bond crystal, or a resonating valence bond state (a uniform superposition over all kinds of valence bond configurations), these excitations will either be confined or deconfined. Confined means the energy cost grows linearly in the separation. Deconfined means you can separate them arbitrarily far apart without energy cost. There are mutual statistics between visons and monomers. In the ordered states they are confined, and in the spin liquid RVB they are deconfined: ordered VBS, confined, versus spin liquid RVB, deconfined.
A lot of extremely useful information for boaters is produced by government agencies. Since those agencies have a mandate to share that information with the public, and since free distribution via the Internet seems like an obvious way to do that, one would think that a lot of government information would be available that way. Well, some of it is. I previously wrote about the data available from NOAA weather buoys, and I’ll probably be writing in the not-too-distant future about how cool it is that one can download current Coast Pilots and Local Notices to Mariners. So you’ll understand that I was really excited by this article in the current issue of BoatUS by Elaine Dickinson: Charts go PC. As digital technology moves forward at an ever-increasing speed, a bewildering array of chart products are out in the consumer marketplace, with more coming. Now, in a major development, the National Oceanic and Atmospheric Administration (NOAA) will soon, if it hasn’t done so by presstime, make its full suite of 970 raster electronic charts of U.S. waters available free to the public via the Internet. Up until now, boaters with navigation software had to purchase their charts from a vendor or pay a vendor for a subscription to a chart updating service. Now all of the charts, plus weekly “patches” of chart updates, can be downloaded from NOAA at no cost. The site is www.nauticalcharts.noaa.gov. The change has come about following the expiration of an exclusive agreement between NOAA and Maptech, a private company that co-developed the electronic chart format with the federal agency. Maptech’s Cooperative Research and Development Agreement ended in June, freeing up NOAA to release electronic raster charts to the public since it co-owns the resulting format and files. 
The Office of Coast Survey (OCS) does intend to distribute Raster Nautical Charts (RNCs) and updates for free over the Internet in the same manner as our distribution of Electronic Navigational Charts (see http://nauticalcharts.noaa.gov/MCD/enc/index.htm). In addition, OCS also anticipates establishing a program by which commercial, value-added providers will be able to download RNCs for free; reformat, encrypt, and/or package them with additional data or services; and eventually sell the resulting product for whatever price the market will bear. By adhering to a simple set of NOAA-specified practices, these RNCs will retain their official status. However, as of today (November 7, 2005) a commencement date for either program has not been established by the Office of Coast Survey. So, no free raster charts for now. But soon, hopefully. NOAA pushed back the release of the raster charts after we went to press. It was supposed to be mid-October. Right now, they are saying mid-November. Please check back with them in a few weeks or watch our web site for an update. Later update: The charts are now available. Yay! See Downloadable Raster Charts Available Free from NOAA.
Paddy Cullen’s Pub in Dublin’s leafy Ballsbridge is an ideal location for the discerning customer to enjoy a drink or fine food in good company and pleasant traditional surroundings. Hugely popular around match-time, be it Soccer or Rugby (thanks to its close proximity to the Aviva Stadium), there is always a good atmosphere around the pub. They have a great food menu, with everything from Tapas to a large selection of lunch and dinner options. This is a great spot to have a pint of Dan Kelly’s Cider and definitely worth a visit. Get social with them on Facebook and Twitter.
-- Andreas, 2019-04-30, issue #3731 -- Compiler backend: Do not look for a main function if not the main module. open import Agda.Builtin.Nat module Issue3731 where module M where module main where record R : Set where field main : Nat data Main : Set where main : Main module N where data main : Set where module O where record main : Set where module P where postulate main : Nat module Q where abstract main : Nat main = 1 main : Nat main = 0
-- Example usage of solver {-# OPTIONS --without-K --safe #-} open import Categories.Category module Experiment.Categories.Solver.Category.Example {o ℓ e} (𝒞 : Category o ℓ e) where open import Experiment.Categories.Solver.Category 𝒞 open Category 𝒞 open HomReasoning private variable A B C D E : Obj module _ (f : D ⇒ E) (g : C ⇒ D) (h : B ⇒ C) (i : A ⇒ B) where _ : (f ∘ id ∘ g) ∘ id ∘ h ∘ i ≈ f ∘ (g ∘ h) ∘ i _ = solve ((∥-∥ :∘ :id :∘ ∥-∥) :∘ :id :∘ ∥-∥ :∘ ∥-∥) (∥-∥ :∘ (∥-∥ :∘ ∥-∥) :∘ ∥-∥) refl
theory Ell1 imports Main Tools Setsum_Infinite Real_Vector_Spaces Complete_Lattices "~~/src/HOL/Probability/Binary_Product_Measure" begin subsection {* ell1 (absolutely convergent real series) *} typedef 'a ell1 = "{\<mu>::'a\<Rightarrow>real. SetSums (\<lambda>x. abs(\<mu> x)) UNIV}" apply (rule exI[of _ "\<lambda>x. 0"], auto) unfolding SetSums_def using setsum_0 by auto instantiation ell1 :: (type) zero begin definition zero_ell1 :: "'a ell1" where "zero_ell1 = Abs_ell1 (\<lambda>x. 0)"; instance .. end instantiation ell1 :: (type) comm_monoid_add begin definition "\<mu> + \<nu> = Abs_ell1 (\<lambda>x. Rep_ell1 \<mu> x + Rep_ell1 \<nu> x)" instance apply intro_classes sorry end instantiation ell1 :: (type) real_vector begin definition "\<mu> - \<nu> = Abs_ell1 (\<lambda>x. Rep_ell1 \<mu> x - Rep_ell1 \<nu> x)" definition "-(\<nu>::'a ell1) = 0-\<nu>" definition "scaleR r (\<mu>::'a ell1) = Abs_ell1 (\<lambda>x. r * Rep_ell1 \<mu> x)" instance apply intro_classes sorry end instantiation ell1 :: (type) real_normed_vector begin definition "norm_ell1 (s::'a ell1) = SetSum (\<lambda>x. abs(Rep_ell1 s x)) UNIV" definition "dist_ell1 (s::'a ell1) t = norm (s-t)" definition "open_ell1 (S::'a ell1 set) = (\<forall>x\<in>S. \<exists>e>0. \<forall>y. dist y x < e \<longrightarrow> y \<in> S)" definition "sgn_ell1 (s::'a ell1) = s /\<^sub>R norm s" instance apply intro_classes sorry end instantiation ell1 :: (type) order begin definition "s \<le> (t::'a ell1) = (\<forall>x. Rep_ell1 s x \<le> Rep_ell1 t x)" definition "s < (t::'a ell1) = (s \<le> t \<and> s \<noteq> t)" instance apply intro_classes sorry end instantiation ell1 :: (type) ordered_real_vector begin instance apply intro_classes sorry end definition "weight_ell1 \<mu> = SetSum (\<lambda>x. Rep_ell1 \<mu> x) UNIV" definition point_ell1 :: "'a \<Rightarrow> 'a ell1" where "point_ell1 a = Abs_ell1 (\<lambda>x. 
if x=a then 1 else 0)"; consts compose_ell1 :: "('a \<Rightarrow> 'b ell1) \<Rightarrow> 'a ell1 \<Rightarrow> 'b ell1"; definition apply_to_ell1 :: "('a \<Rightarrow> 'b) \<Rightarrow> 'a ell1 \<Rightarrow> 'b ell1" where "apply_to_ell1 f = compose_ell1 (\<lambda>x. point_ell1 (f x))" definition "support_ell1 \<mu> = {x. Rep_ell1 \<mu> x \<noteq> 0}" lemma apply_to_ell1_twice [simp]: "apply_to_ell1 f (apply_to_ell1 g \<mu>) = apply_to_ell1 (\<lambda>x. f (g x)) \<mu>" sorry lemma apply_to_ell1_id [simp]: "apply_to_ell1 (\<lambda>x. x) \<mu> = \<mu>" sorry lemma support_compose_ell1 [simp]: "support_ell1 (compose_ell1 f g) = (\<Union>x\<in>support_ell1 g. support_ell1 (f x))" sorry lemma support_apply_to_ell1 [simp]: "support_ell1 (apply_to_ell1 f \<mu>) = f ` support_ell1 \<mu>" sorry lemma support_point_ell1 [simp]: "support_ell1 (point_ell1 x) = {x}" sorry definition "product_ell1 \<mu> \<nu> = Abs_ell1 (\<lambda>(x,y). Rep_ell1 \<mu> x * Rep_ell1 \<nu> y)" lemma fst_product_ell1 [simp]: "apply_to_ell1 fst (product_ell1 \<mu> \<nu>) = weight_ell1 \<nu> *\<^sub>R \<mu>" sorry lemma snd_product_ell1 [simp]: "apply_to_ell1 snd (product_ell1 \<mu> \<nu>) = weight_ell1 \<mu> *\<^sub>R \<nu>" sorry lemma support_product_ell1 [simp]: "support_ell1 (product_ell1 \<mu> \<nu>) = support_ell1 \<mu> \<times> support_ell1 \<nu>" sorry lemma product_ell1_sym: "apply_to_ell1 (\<lambda>(x,y). (y,x)) (product_ell1 \<mu> \<nu>) = product_ell1 \<nu> \<mu>" sorry lemma apply_to_point_ell1 [simp]: "apply_to_ell1 f (point_ell1 x) = point_ell1 (f x)" sorry lemma point_ell1_inj: "point_ell1 x = point_ell1 y \<Longrightarrow> x = y" sorry subsection {* Distributions (with weight <= 1) *} typedef 'a distr = "{M::'a measure. 
emeasure M (space M) \<le> 1 \<and> space M = UNIV \<and> sets M = UNIV}" by (rule exI[of _ "sigma UNIV UNIV"], auto simp: emeasure_sigma) definition "distr_pre d == emeasure (Rep_distr d)" definition "distr_pr d == measure (Rep_distr d)" abbreviation "distr_pr1 d x == distr_pr d {x}" definition support_distr :: "'a distr \<Rightarrow> 'a set" where "support_distr \<mu> = {x. distr_pr1 \<mu> x > 0}" instantiation distr :: (type) zero begin definition zero_distr :: "'a distr" where "zero_distr = Abs_distr (sigma UNIV UNIV)"; instance .. end instantiation distr :: (type) scaleR begin definition "scaleR_distr r \<mu> = Abs_distr (measure_of (space (Rep_distr \<mu>)) (sets (Rep_distr \<mu>)) (\<lambda>E. ereal r * emeasure (Rep_distr \<mu>) E))" instance .. end lemma scaleR_one_distr [simp]: "1 *\<^sub>R (\<mu>::'a distr) = \<mu>" unfolding scaleR_distr_def one_ereal_def[symmetric] by (auto simp: measure_of_of_measure Rep_distr_inverse) abbreviation "weight_distr \<mu> == distr_pr \<mu> UNIV" lemma Rep_Abs_distr_measure_of: "X UNIV \<le> 1 \<Longrightarrow> Rep_distr (Abs_distr (measure_of UNIV UNIV X)) = measure_of UNIV UNIV X" apply (subst Abs_distr_inverse) by (auto simp: emeasure_measure_of_conv) definition "mk_distr (f::_\<Rightarrow>real) == Abs_distr (measure_of UNIV UNIV f)" definition "mk_distre (f::_\<Rightarrow>ereal) == Abs_distr (measure_of UNIV UNIV f)" print_theorems lemma mk_distre_pr: assumes "f UNIV \<le> 1" assumes "\<And>x. f x \<ge> 0" assumes "f {} = 0" assumes "\<And>A. disjoint_family A \<Longrightarrow> (\<Sum>i. f (A i)) = f (\<Union>i. 
A i)" shows "distr_pre (mk_distre f) = f" proof - have sigma_UNIV: "sigma_sets UNIV UNIV = UNIV" by (metis UNIV_eq_I iso_tuple_UNIV_I sigma_sets.Basic) have "measure_space UNIV (sigma_sets UNIV UNIV) f" unfolding measure_space_def apply auto apply (metis Pow_UNIV sigma_algebra_sigma_sets top_greatest) unfolding positive_def using assms close auto unfolding sigma_UNIV countably_additive_def using assms by auto thus ?thesis unfolding mk_distre_def distr_pre_def apply (subst Abs_distr_inverse) by (auto simp: emeasure_measure_of_conv assms) qed lemma mk_distr_pr: assumes "f UNIV \<le> 1" assumes "\<And>x. f x \<ge> 0" assumes "f {} = 0" assumes "\<And>A. disjoint_family A \<Longrightarrow> (\<Sum>i. f (A i)) = f (\<Union>i. A i)" shows "distr_pr (mk_distr f) = f" sorry definition point_distr :: "'a \<Rightarrow> 'a distr" where "point_distr a = mk_distr (\<lambda>E. if a\<in>E then 1 else 0)"; lemma point_distr_pr: "distr_pr (point_distr a) E = (if a\<in>E then 1 else 0)" unfolding point_distr_def apply (subst mk_distr_pr, auto) sorry lemma weight_point_distr [simp]: "weight_distr (point_distr x) = 1" unfolding point_distr_pr by simp definition compose_distr :: "('a \<Rightarrow> 'b distr) \<Rightarrow> 'a distr \<Rightarrow> 'b distr" where "compose_distr f \<mu> == mk_distre (\<lambda>E. (\<integral>\<^sup>+ a. distr_pre (f a) E \<partial>Rep_distr \<mu>))" definition apply_to_distr :: "('a \<Rightarrow> 'b) \<Rightarrow> 'a distr \<Rightarrow> 'b distr" where "apply_to_distr f \<mu> = Abs_distr (distr (Rep_distr \<mu>) (sigma UNIV UNIV) f)" lemma apply_to_distr_twice [simp]: "apply_to_distr f (apply_to_distr g \<mu>) = apply_to_distr (\<lambda>x. 
f (g x)) \<mu>" proof - let ?\<mu> = "Rep_distr \<mu>" have valid: "emeasure (distr (Rep_distr \<mu>) (sigma UNIV UNIV) g) UNIV \<le> 1" sorry show ?thesis unfolding apply_to_distr_def apply (subst Abs_distr_inverse, auto simp: valid) apply (subst distr_distr) unfolding measurable_def o_def by (auto intro: Rep_distr) qed lemma apply_to_distr_id [simp]: "apply_to_distr (\<lambda>x. x) \<mu> = \<mu>" proof - let ?\<mu> = "Rep_distr \<mu>" have "?\<mu> = distr ?\<mu> ?\<mu> (\<lambda>x .x)" using distr_id by auto moreover have "... = distr ?\<mu> (sigma UNIV UNIV) (\<lambda>x. x)" by (rule distr_cong, auto) finally have eq:"... = ?\<mu>" by simp show ?thesis unfolding apply_to_distr_def unfolding eq by (rule Rep_distr_inverse) qed lemma support_compose_distr [simp]: "support_distr (compose_distr f g) = (\<Union>x\<in>support_distr g. support_distr (f x))" sorry lemma support_apply_to_distr [simp]: "support_distr (apply_to_distr f \<mu>) = f ` support_distr \<mu>" sorry lemma support_point_distr [simp]: "support_distr (point_distr x) = {x}" sorry definition "product_distr \<mu> \<nu> = Abs_distr (Rep_distr \<mu> \<Otimes>\<^sub>M Rep_distr \<nu>)" lemma [simp]: "sigma_finite_measure (Rep_distr \<mu>)" unfolding sigma_finite_measure_def apply (rule exI[of _ "{UNIV}"]) apply auto using Rep_distr by (metis (full_types, lifting) ereal_infty_less_eq(1) ereal_times(1) mem_Collect_eq) lemma fst_product_distr [simp]: "apply_to_distr fst (product_distr \<mu> \<nu>) = weight_distr \<nu> *\<^sub>R \<mu>" sorry lemma snd_product_distr [simp]: "apply_to_distr snd (product_distr \<mu> \<nu>) = weight_distr \<mu> *\<^sub>R \<nu>" sorry lemma support_product_distr [simp]: "support_distr (product_distr \<mu> \<nu>) = support_distr \<mu> \<times> support_distr \<nu>" sorry lemma product_distr_sym: "apply_to_distr (\<lambda>(x,y). 
(y,x)) (product_distr \<mu> \<nu>) = product_distr \<nu> \<mu>" proof - have \<mu>1: "emeasure (Rep_distr \<mu>) UNIV \<le> 1" and \<nu>1: "emeasure (Rep_distr \<nu>) UNIV \<le> 1" using Rep_distr by auto have 11: "1::ereal == 1 * 1" by auto have mult_mono: "\<And>a b c d. a\<le>c \<Longrightarrow> b\<le>d \<Longrightarrow> b\<ge>0 \<Longrightarrow> c\<ge>0 \<Longrightarrow> (a::ereal) * b \<le> c * d" by (metis ereal_mult_left_mono mult.commute order_trans) have leq1: "emeasure (Rep_distr \<mu>) UNIV * emeasure (Rep_distr \<nu>) UNIV \<le> 1" apply (subst 11) apply (rule mult_mono) using Rep_distr by auto have leq1': "emeasure (Rep_distr \<mu> \<Otimes>\<^sub>M Rep_distr \<nu>) UNIV \<le> 1" apply (subst UNIV_Times_UNIV[symmetric]) unfolding space_pair_measure apply simp apply (subst UNIV_Times_UNIV[symmetric]) by (subst sigma_finite_measure.emeasure_pair_measure_Times, auto simp: leq1) show ?thesis unfolding apply_to_distr_def product_distr_def apply (subst Abs_distr_inverse) apply (auto simp: leq1' space_pair_measure sets_pair_measure) sorry qed lemma apply_to_point_distr [simp]: "apply_to_distr f (point_distr x) = point_distr (f x)" sorry lemma point_distr_inj: "point_distr x = point_distr y \<Longrightarrow> x = y" sorry definition uniform :: "'a set \<Rightarrow> 'a distr" where "uniform S = Abs_distr (\<lambda>x. if x \<in> S then 1/(card S) else 0)" lemma markov_chain: assumes "apply_to_distr snd \<mu>1 = apply_to_distr fst \<mu>2" obtains \<mu> where "apply_to_distr (\<lambda>(x::'a,y::'b,z::'c). (x,y)) \<mu> = \<mu>1" and "apply_to_distr (\<lambda>(x,y,z). (y,z)) \<mu> = \<mu>2" proof def \<mu> == "undefined::('a*'b*'c) distr" show "apply_to_distr (\<lambda>(x,y,z). (x,y)) \<mu> = \<mu>1" sorry show "apply_to_distr (\<lambda>(x,y,z). 
(y,z)) \<mu> = \<mu>2" sorry qed lemma compose_point_distr_r [simp]: "compose_distr f (point_distr x) = f x" sorry lemma compose_point_distr_l [simp]: "compose_distr (\<lambda>x. point_distr (f x)) \<mu> = apply_to_distr f \<mu>" unfolding apply_to_distr_def .. lemma compose_distr_trans: "compose_distr (\<lambda>x. compose_distr g (f x)) \<mu> = compose_distr g (compose_distr f \<mu>)" sorry subsection {* Combining ell1 and distr *} definition "distr_to_ell1 \<mu> = Abs_ell1 (Rep_distr \<mu>)" definition "ell1_to_distr \<mu> = Abs_distr (Rep_ell1 \<mu>)" lemma distr_to_ell1_apply_comm [simp]: "distr_to_ell1 (apply_to_distr f \<mu>) = apply_to_ell1 f (distr_to_ell1 \<mu>)" sorry lemma support_distr_to_ell1 [simp]: "support_ell1 (distr_to_ell1 \<mu>) = support_distr \<mu>" sorry end
Require Import Rupicola.Lib.Core. Require Import Rupicola.Lib.Notations. Require Import Rupicola.Lib.Tactics. Require Import Rupicola.Lib.Invariants. Require Import Rupicola.Lib.Gensym. Local Open Scope nat_scope. Section Gallina. Context {A: Type}. Implicit Type a : A. Implicit Type step : A -> nat -> A. Definition downto' a0 start count step : A := fold_left step (rev (skipn start (seq 0 count))) a0. Definition downto a0 count step : A := downto' a0 0 count step. Open Scope nat_scope. Lemma downto'_step start count step a0 : 0 < start <= count -> step (downto' a0 start count step) (start - 1) = downto' a0 (start - 1) count step. Proof. cbv [downto']; apply fold_left_skipn_seq. Qed. Lemma Nat_iter_as_downto' n (f: A -> A) : forall a i, Nat.iter n f a = downto' a i (i + n) (fun a _ => f a). Proof. unfold downto'. setoid_rewrite skipn_seq_step; setoid_rewrite minus_plus. simpl; induction n; simpl; intros. - reflexivity. - rewrite fold_left_app. auto using f_equal. Qed. Lemma Nat_iter_as_downto'_sub n (f: A -> A) a i: i <= n -> Nat.iter (n - i) f a = downto' a i n (fun a _ => f a). Proof. intros; replace n with (i + (n - i)) at 2 by lia. apply Nat_iter_as_downto'. Qed. Lemma Nat_iter_as_downto n (f: A -> A) a : Nat.iter n f a = downto a n (fun a _ => f a). Proof. apply (Nat_iter_as_downto' n f a 0). Qed. End Gallina. Definition cmd_downto i_var step_impl := (* while (i > 0) { i--; step i } *) (cmd.while (expr.op bopname.ltu (expr.literal 0) (expr.var i_var)) (cmd.seq (cmd.set i_var (expr.op bopname.sub (expr.var i_var) (expr.literal 1))) step_impl)). Definition cmd_downto_fresh i_var i_expr step_impl k_impl := cmd.seq (cmd.set i_var i_expr) (cmd.seq (cmd_downto i_var step_impl) k_impl). Section Compilation. Context {width: Z} {BW: Bitwidth width} {word: word.word width} {mem: map.map word Byte.byte}. Context {locals: map.map String.string word}. Context {env: map.map String.string (list String.string * list String.string * Syntax.cmd)}. 
Context {ext_spec: bedrock2.Semantics.ExtSpec}. Context {word_ok : word.ok word} {mem_ok : map.ok mem}. Context {locals_ok : map.ok locals}. Context {env_ok : map.ok env}. Context {ext_spec_ok : Semantics.ext_spec.ok ext_spec}. Implicit Types (x : word). (* helper lemma for subtracting one from the loop counter *) Lemma word_to_nat_sub_1 x n : (0 < word.unsigned x)%Z -> word.unsigned x = Z.of_nat n -> word.unsigned (word.sub x (word.of_Z 1)) = Z.of_nat (n - 1). Proof. intros. pose proof (word.unsigned_range x). rewrite Nat2Z.inj_sub by lia. rewrite word.unsigned_sub, word.unsigned_of_Z_1. rewrite word.wrap_small by lia. f_equal. congruence. Qed. (* helper lemma for continuation case *) Lemma word_to_nat_0 x n : (word.unsigned x <= 0)%Z -> word.unsigned x = Z.of_nat n -> x = word.of_Z 0. Proof. intros. pose proof (word.unsigned_range x). rewrite <- (word.of_Z_unsigned x). assert (n = 0) by lia; subst. change (Z.of_nat 0) with 0%Z in *. congruence. Qed. Lemma word_of_Z_sub_1 n: n > 0 -> word.of_Z (Z.of_nat (n - 1)) = word.sub (word := word) (word.of_Z (Z.of_nat n)) (word.of_Z 1). Proof. intros; rewrite <- word.ring_morph_sub. f_equal; lia. Qed. 
Lemma compile_downto_continued : forall {tr mem locals functions} {A} (a0: A) count step, let v := downto a0 count step in forall {P} {pred: P v -> predicate} {k: nlet_eq_k P v} {k_impl step_impl} (loop_pred : nat -> A -> predicate) i_var vars, (Z.of_nat count < 2 ^ width)%Z -> loop_pred count a0 tr mem locals -> (forall i st tr mem locals, loop_pred i st tr mem locals -> map.get locals i_var = Some (word.of_Z (Z.of_nat i))) -> ((* loop body *) forall tr l m i, let st := downto' a0 (S i) count step in let wi := word.of_Z (Z.of_nat i) in loop_pred (S i) st tr m l -> i < count -> <{ Trace := tr; Memory := m; Locals := map.put l i_var wi; Functions := functions }> step_impl <{ loop_pred i (step st i) }>) -> (let v := v in (* continuation *) forall tr l m, loop_pred 0 v tr m l -> <{ Trace := tr; Memory := m; Locals := l; Functions := functions }> k_impl <{ pred (k v eq_refl) }>) -> <{ Trace := tr; Memory := mem; Locals := locals; Functions := functions }> cmd.seq (cmd_downto i_var step_impl) k_impl <{ pred (nlet_eq vars v k) }>. Proof. repeat straightline. (* handle while *) WeakestPrecondition.unfold1_cmd_goal; (cbv beta match delta [WeakestPrecondition.cmd_body]). exists nat, lt. exists (fun i t m l => let st := downto' a0 i count step in loop_pred i st t m l /\ i <= count). ssplit; eauto using lt_wf; [ | ]. { cbv zeta. subst. exists count; split. - unfold downto'; rewrite skipn_all2 by (rewrite seq_length; lia); eauto. - eauto using Zle_0_nat. } { repeat straightline'. repeat (eexists; split; repeat straightline; eauto). rewrite word.unsigned_ltu, word.unsigned_of_Z_0. rewrite word.unsigned_of_Z_nowrap by lia. destruct_one_match; rewrite ?word.unsigned_of_Z_0, ?word.unsigned_of_Z_1; ssplit; try lia; [ | ]. { repeat straightline'. repeat (eexists; split; repeat straightline; eauto). subst_lets_in_goal. 
lazymatch goal with | [ Hcmd : context [WeakestPrecondition.cmd _ ?impl ], Hinv : context [loop_pred ?i ?st] |- WeakestPrecondition.cmd _ ?impl ?tr ?mem (map.put ?locals ?i_var (word.sub ?wi (word.of_Z 1))) ?post ] => specialize (Hcmd tr locals mem (i - 1)); replace (S (i-1)) with i in Hcmd by lia; unshelve epose proof (Hcmd _ _); clear Hcmd end; [ eauto; lia .. | ]. rewrite <- word_of_Z_sub_1 by lia. use_hyp_with_matching_cmd; [ ]. cbv [postcondition_cmd] in *; sepsimpl; cleanup; subst. repeat match goal with | |- exists _, _ => eexists; ssplit | _ => erewrite <- downto'_step; [ cbn [fst snd]; ecancel_assumption | ] | _ => lia || solve [eauto using word_to_nat_sub_1] end. } { repeat straightline'. match goal with | [ H: (Z.of_nat ?n <= 0)%Z |- _ ] => replace n with 0 in * by lia end. use_hyp_with_matching_cmd; subst_lets_in_goal; eauto. } } Qed. Lemma compile_downto : forall {tr mem locals functions} {A} (a0: A) count step, let v := downto a0 count step in forall {P} {pred: P v -> predicate} {k: nlet_eq_k P v} {k_impl step_impl} (loop_pred : nat -> A -> predicate) i_var i_expr vars, let zcount := Z.of_nat count in let wcount := word.of_Z zcount in (zcount < 2 ^ width)%Z -> WeakestPrecondition.dexpr mem locals i_expr wcount -> let locals0 := map.put locals i_var wcount in loop_pred count a0 tr mem locals0 -> (forall i st tr mem locals, loop_pred i st tr mem locals -> map.get locals i_var = Some (word.of_Z (Z.of_nat i))) -> (let lp := loop_pred in (* loop body *) forall tr l m i, let st := downto' a0 (S i) count step in let wi := word.of_Z (Z.of_nat i) in loop_pred (S i) st tr m l -> i < count -> <{ Trace := tr; Memory := m; Locals := map.put l i_var wi; Functions := functions }> step_impl <{ lp i (step st i) }>) -> (let v := v in (* continuation *) forall tr l m, loop_pred 0 v tr m l -> <{ Trace := tr; Memory := m; Locals := l; Functions := functions }> k_impl <{ pred (k v eq_refl) }>) -> <{ Trace := tr; Memory := mem; Locals := locals; Functions := functions }> 
cmd_downto_fresh i_var i_expr step_impl k_impl <{ pred (nlet_eq vars v k) }>. Proof. intros. unfold cmd_downto_fresh. repeat straightline; eexists; split; eauto. eapply compile_downto_continued; eauto. Qed. Lemma compile_Nat_iter : forall [tr mem locals functions] n {A} f (a: A), let v := Nat.iter n f a in forall {P} {pred: P v -> predicate} {k: nlet_eq_k P v} {k_impl f_impl} (loop_pred : nat -> A -> predicate) i_var i_expr vars, let zn := Z.of_nat n in let wn := word.of_Z zn in (zn < 2 ^ width)%Z -> WeakestPrecondition.dexpr mem locals i_expr wn -> let locals0 := map.put locals i_var wn in loop_pred n a tr mem locals0 -> (forall i st tr mem locals, loop_pred i st tr mem locals -> map.get locals i_var = Some (word.of_Z (Z.of_nat i))) -> (let lp := loop_pred in (* loop body *) forall tr l m i, let st := Nat.iter (n - S i) f a in let wi := word.of_Z (Z.of_nat i) in loop_pred (S i) st tr m l -> i < n -> <{ Trace := tr; Memory := m; Locals := map.put l i_var wi; Functions := functions }> f_impl <{ lp i (f st) }>) -> (let v := v in (* continuation *) forall tr l m, loop_pred 0 v tr m l -> <{ Trace := tr; Memory := m; Locals := l; Functions := functions }> k_impl <{ pred (k v eq_refl) }>) -> <{ Trace := tr; Memory := mem; Locals := locals; Functions := functions }> cmd_downto_fresh i_var i_expr f_impl k_impl <{ pred (nlet_eq vars v k) }>. Proof. cbv zeta; intros until a. rewrite Nat_iter_as_downto; intros * ???? Hf Hk. eapply compile_downto; eauto; []. cbv zeta; intros. rewrite <- Nat_iter_as_downto'_sub by lia; eapply Hf; [ | lia]; rewrite Nat_iter_as_downto'_sub by lia. eassumption. Qed. End Compilation. Ltac make_downto_predicate i_var i_arg vars args tr pred locals := lazymatch substitute_target i_var i_arg pred locals with | (?pred, ?locals) => make_predicate vars args tr pred locals end. 
Ltac infer_downto_predicate' i_var argstype vars tr pred locals := let val_pred := constr:(fun (idx: nat) (args: argstype) => ltac:(let f := make_downto_predicate i_var idx vars args tr pred locals in exact f)) in eval cbv beta in val_pred. Ltac infer_downto_predicate i_var := _infer_predicate_from_context ltac:(infer_downto_predicate' i_var). Ltac _compile_downto locals lemma := let i_v := gensym locals "i" in let lp := infer_downto_predicate i_v in eapply lemma with (i_var := i_v) (loop_pred := lp). Ltac compile_downto := lazymatch goal with | [ |- WeakestPrecondition.cmd _ _ _ _ ?locals (_ (nlet_eq _ ?v _)) ] => lazymatch v with | downto _ _ _ => _compile_downto locals compile_downto | Nat.iter _ _ _ => _compile_downto locals compile_Nat_iter end end. Module DownToCompiler. #[export] Hint Extern 1 (WP_nlet_eq (downto _ _ _)) => compile_downto; shelve : compiler. #[export] Hint Extern 1 (WP_nlet_eq (Nat.iter _ _ _)) => compile_downto; shelve : compiler. End DownToCompiler. Section GhostCompilation. Context {width: Z} {BW: Bitwidth width} {word: word.word width} {mem: map.map word Byte.byte}. Context {locals: map.map String.string word}. Context {env: map.map String.string (list String.string * list String.string * Syntax.cmd)}. Context {ext_spec: bedrock2.Semantics.ExtSpec}. Context {word_ok : word.ok word} {mem_ok : map.ok mem}. Context {locals_ok : map.ok locals}. Context {env_ok : map.ok env}. Context {ext_spec_ok : Semantics.ext_spec.ok ext_spec}. Implicit Types (x : word). (* helper definition *) Definition downto'_dependent {A B} initA initB start count (stepA : A -> nat -> A) (stepB : A -> B -> nat -> B) := fold_left_dependent stepA stepB (rev (skipn start (seq 0 count))) initA initB. Lemma downto'_dependent_fst {A B} initA initB stepA stepB i count : fst (@downto'_dependent A B initA initB i count stepA stepB) = downto' initA i count stepA. Proof. apply fold_left_dependent_fst. Qed. Open Scope nat_scope. 
Lemma downto'_dependent_step {A B} i count stepA stepB (initA : A) (initB : B) : 0 < i <= count -> (fun ab c => (stepA (fst ab) c, stepB (fst ab) (snd ab) c)) (downto'_dependent initA initB i count stepA stepB) (i-1) = downto'_dependent initA initB (i-1) count stepA stepB. Proof. apply fold_left_skipn_seq. Qed. (* In this lemma, state refers to the accumulator type for the Gallina downto loop, and ghost_state is any extra information that locals/memory invariants need access to. *) (* FIXME Do we actually need ghost state? *) (* TODO: consider taking in range of count instead of providing word? *) Lemma compile_downto_with_ghost_state : forall {tr mem locals functions} {state} (init : state) count step, let v := downto init count step in forall {P} {pred: P v -> predicate} {k: nlet_eq_k P v} {k_impl step_impl} wcount {ghost_state} (ginit : ghost_state) (ghost_step : state -> ghost_state -> nat -> ghost_state) (Inv : nat -> ghost_state -> state -> predicate) (* loop invariant *) i_var vars, Inv count ginit init tr mem (map.remove locals i_var) -> map.get locals i_var = Some wcount -> word.unsigned wcount = Z.of_nat count -> (let v := v in (* loop iteration case *) forall tr l m i wi, let stgst := downto'_dependent init ginit (S i) count step ghost_step in let st := fst stgst in let gst := snd stgst in let inv' v tr' mem' locals := map.get locals i_var = Some wi /\ Inv i (ghost_step st gst i) v tr' mem' (map.remove locals i_var) in let gst' := ghost_step st gst i in Inv (S i) gst st tr m (map.remove l i_var) -> word.unsigned wi = Z.of_nat i -> i < count -> <{ Trace := tr; Memory := m; Locals := map.put l i_var wi; Functions := functions }> step_impl <{ inv' (step st i)}>) -> (let v := v in (* continuation *) forall tr l m gst, Inv 0 gst v tr m (map.remove l i_var) -> map.get l i_var = Some (word.of_Z 0) -> <{ Trace := tr; Memory := m; Locals := l; Functions := functions }> k_impl <{ pred (k v eq_refl) }>) -> <{ Trace := tr; Memory := mem; Locals := locals; Functions 
:= functions }> cmd.seq (cmd_downto i_var step_impl) k_impl <{ pred (nlet_eq vars v k) }>. Proof. repeat straightline'. (* handle while *) WeakestPrecondition.unfold1_cmd_goal; (cbv beta match delta [WeakestPrecondition.cmd_body]). exists nat, lt. exists (fun i t m l => let stgst := downto'_dependent init ginit i count step ghost_step in let st := fst stgst in let gst := snd stgst in Inv i gst st t m (map.remove l i_var) /\ i <= count /\ (exists wi, word.unsigned wi = Z.of_nat i /\ map.get l i_var = Some wi)). ssplit; eauto using lt_wf; [ | ]. { cbv zeta. subst. exists count; split. - unfold downto'_dependent; rewrite skipn_all2 by (rewrite seq_length; lia); eauto. - eauto using Zle_0_nat. } { intros. cleanup; subst. repeat straightline'. lazymatch goal with x := context [word.ltu] |- _ => subst x end. rewrite word.unsigned_ltu, word.unsigned_of_Z_0. destruct_one_match; rewrite ?word.unsigned_of_Z_0, ?word.unsigned_of_Z_1; ssplit; try lia; [ | ]. { repeat straightline'. subst_lets_in_goal. lazymatch goal with | Hcmd:context [ WeakestPrecondition.cmd _ ?impl ], Hinv : context [Inv _ (snd ?stgst) (fst ?stgst)], Hi : word.unsigned ?wi = Z.of_nat ?i |- WeakestPrecondition.cmd _ ?impl ?tr ?mem (map.put ?locals ?i_var (word.sub ?wi (word.of_Z 1))) ?post => specialize (Hcmd tr locals mem (i-1) (word.sub wi (word.of_Z 1))); replace (S (i-1)) with i in Hcmd by lia; unshelve epose proof (Hcmd _ _ _); clear Hcmd end; [ eauto using word_to_nat_sub_1; lia .. | ]. use_hyp_with_matching_cmd; [ ]. cbv [postcondition_cmd] in *; sepsimpl; cleanup; subst. repeat match goal with | |- exists _, _ => eexists; ssplit | _ => erewrite <-downto'_dependent_step; [ cbn [fst snd]; ecancel_assumption | ] | _ => lia || solve [eauto using word_to_nat_sub_1] end. } { repeat straightline'. rewrite @downto'_dependent_fst in *. match goal with | H : (word.unsigned ?x <= 0)%Z |- _ => eapply word_to_nat_0 in H; [ | solve [eauto] .. ]; subst end. 
match goal with H : word.unsigned (word.of_Z 0) = Z.of_nat ?n |- _ => assert (n = 0) by (rewrite word.unsigned_of_Z_0 in H; lia) end; subst. use_hyp_with_matching_cmd; subst_lets_in_goal; eauto. } } Qed. End GhostCompilation.
Near the end of World War I, the Erzherzog Karl-class battleships were handed over to the newly formed State of Slovenes, Croats and Serbs, but Erzherzog Ferdinand Max was later transferred to Great Britain as a war reparation. She was broken up for scrap in 1921.
Require Import Map. Local Open Scope map_scope. Require PeanoNat. Require Import Program. Create HintDb term discriminated. Delimit Scope term_scope with t. Local Open Scope term_scope. Reserved Notation "[ ]". Reserved Notation "[ x ]". Reserved Infix "@" (at level 20, left associativity). Reserved Infix "\" (at level 30, right associativity). Reserved Infix "//" (at level 50, no associativity). Reserved Notation "x ## s" (at level 50, no associativity). Reserved Infix "\\" (at level 30, no associativity). Local Ltac decide_right Q := pattern Q; apply decide_right; [easy | intros _]. Local Ltac decide_left Q := pattern Q; apply decide_left; [easy | intros _]. Module Id := PeanoNat.Nat. Definition id := Id.t. Definition id_eq := Id.eq_dec. Inductive term := | Box | Free (y: id) | App (s t: term) | Abs (n: map) (s: term). Bind Scope term_scope with term. Notation "[ ]" := Box (format "[ ]"): term_scope. Notation "[ x ]" := (Free x): term_scope. Infix "@" := App: term_scope. Infix "\" := Abs: term_scope. Function occb x s := match s with | [] => false | [y] => if id_eq x y then true else false | s @ t => occb x s || occb x t | _ \ s => occb x s end%bool. Notation "x ## s" := (occb x s = false): term_scope. Theorem occb_app x s1 s2: x ## (s1 @ s2) <-> x ## s1 /\ x ## s2. Proof. split. - now intros []%Bool.orb_false_elim. - intros []; apply Bool.orb_false_intro; easy. Qed. Inductive dvd: map -> term -> Prop := | DBox0: 0 // [] | DBox1: [1] // [] | DFree y: 0 // [y] | DApp m s n t: m // s -> n // t -> m & n // s @ t | DAbs m n s: m ⊥ n -> m // s -> n // s -> m // n \ s where "m // s" := (dvd m s): term_scope. Hint Constructors dvd: term. Notation map_for s := {m | m // s}. Notation wf := (dvd 0). Notation wfterm := {t | wf t}. Lemma div_wf n s: n // s -> wf s. Proof. induction 1; split_0; auto with map term. Qed. Lemma div_app m s n t: (m & n) // (s @ t) -> m // s /\ n // t. Proof. inversion 1; mapp_inj; subst; auto. Qed. Lemma div_abs m n s: m // n \ s -> m // s. 
Proof. now inversion 1. Qed. Corollary wf_app s t: wf (s @ t) <-> wf s /\ wf t. Proof. split_0; split; [apply div_app | intros []; auto with term]. Qed. Ltac div_app := match goal with | H: (_ & _) // (_ @ _) |- _ => let H1 := fresh H in let H2 := fresh H in apply div_app in H as [H1 H2] | H: wf (_ @ _) |- _ => let H1 := fresh H in let H2 := fresh H in apply wf_app in H as [H1 H2] end. Hint Extern 3 => div_app: term. (* uses PI *) Theorem wf_eq (s t: wfterm): `s = `t -> s = t. Proof. destruct s, t; apply subset_eq_compat. Qed. Hint Resolve div_wf div_abs wf_eq: term. Function fill (s: term) (m: map) (e: term): option term := match s with | [] => match m with | 0 => Some [] | M 1 => Some e | _ => None end | [y] => match m with | 0 => Some [y] | _ => None end | s1 @ s2 => match unmapp m with | Some (m1, m2) => match fill s1 m1 e, fill s2 m2 e with | Some t1, Some t2 => Some (t1 @ t2) | _, _ => None end | None => None end | n \ s' => match fill s' m e with | Some t => Some (n \ t) | None => None end end. Theorem fill_some s m e: wf s -> m // s -> exists r, fill s m e = Some r. Proof. intro S; induction 1; simpl; eauto. - destruct IHdvd1 as [? R1], IHdvd2 as [? R2]; eauto with term. rewrite mapp_unmapp, R1, R2; eauto. - destruct IHdvd1 as [? R]; eauto with term. rewrite R; eauto. Qed. Theorem fill_0 s e r: fill s 0 e = Some r -> r = s. Proof. revert r. remember 0 as m. functional induction (fill s m e); try congruence; intros r R. - inversion e1; inversion R; subst; f_equal; auto. - inversion R; subst; f_equal; auto. Qed. Theorem fill_div s m e r n: n // s -> m // s -> m ⊥ n -> wf s -> wf e -> fill s m e = Some r -> n // r. Proof. intro NS; revert m r. induction NS as [| | | p1 ? p2 | p]; intros m r MS MN S E R; try solve [functional inversion R; now subst]. - functional inversion R; subst. + eauto with term. + now apply orth_1 in MN. - functional inversion R; subst. replace m with (m1 & m2) in *; [orth_inj |]. + constructor; eauto with term. 
+ pose proof (mapp_unmapp m1 m2). apply unmapp_inj; congruence. - apply div_abs in S. inversion MS; subst. functional inversion R; subst. constructor; eauto. Qed. Theorem fill_wf s m e r: wf s -> m // s -> wf e -> fill s m e = Some r -> wf r. Proof. revert r. functional induction (fill s m e); intros r S MS E R; inversion R; subst; auto. - rewrite (mapp_unmapp' m m1 m2) in MS; auto. apply wf_app; split. + apply IHo; eauto with term. + apply IHo0; eauto with term. - inversion MS; subst. apply div_abs in S. constructor; auto with map. apply fill_div with (s := s') (m := m) (e := e); auto. Qed. Fixpoint make_map x s := match s with | [] => 0 | [y] => if id_eq x y then M 1 else 0 | s @ t => make_map x s & make_map x t | _ \ s => make_map x s end. Fixpoint make_skel x s := match s with | [] => [] | [y] => if id_eq x y then [] else [y] | s @ t => make_skel x s @ make_skel x t | n \ s => n \ make_skel x s end. Theorem make_map_orth x s m: m // s -> make_map x s ⊥ m. Proof. revert m; induction s; simpl; inversion_clear 1; auto with map term. Qed. Hint Resolve make_map_orth: term. Lemma make_skel_div x s m: wf s -> m // s -> m // make_skel x s. Proof with (auto with term). revert m. induction s; intros m S MS; simpl; auto. - destruct (id_eq x y); inversion MS; subst... - inversion_clear MS... - inversion_clear MS; apply div_abs in S... Qed. Hint Resolve make_skel_div: term. Theorem make_map_skel_div x s: wf s -> make_map x s // make_skel x s. Proof with (simpl; eauto with term). induction s; intro S... - destruct (id_eq x y)... - inversion_clear S; constructor... Qed. Hint Resolve make_map_skel_div: term. Theorem make_skel_wf x s: wf s -> wf (make_skel x s). Proof with (auto with term). induction s; intro S; simpl... - destruct (id_eq x y)... - split_0; constructor... - inversion_clear S... Qed. Hint Resolve make_skel_wf: term. Theorem make_map_not_occ x s: x ## s -> make_map x s = 0. Proof. intro S. induction s; auto. - functional inversion S; subst; simpl. now rewrite H1. 
- apply occb_app in S as []. simpl; split_0; f_equal; auto. Qed. Hint Resolve make_map_not_occ: term. Theorem make_skel_not_occ x s: x ## make_skel x s. Proof. induction s; auto. - simpl. destruct (id_eq x y); simpl; auto. decide_right (id_eq x y); auto. - now apply occb_app. Qed. Hint Resolve make_skel_not_occ: term. Theorem make_skel_occ_diff x s z: x <> z -> occb z s = occb z (make_skel x s). Proof. intro XY; induction s; auto. - simpl. destruct (id_eq z y), (id_eq x y); subst; simpl. + congruence. + now decide_left (id_eq y y). + easy. + now decide_right (id_eq z y). - simpl; f_equal; easy. Qed. Theorem make_skel_not_occ_eq x s: x ## s -> make_skel x s = s. Proof. intro X. induction s; auto. - simpl. functional inversion X; subst. now decide_right (id_eq x y). - apply occb_app in X as []. simpl; f_equal; auto. - simpl in *; f_equal; auto. Qed. Hint Resolve make_skel_not_occ_eq: term. Definition subst s x := fill (make_skel x s) (make_map x s). Theorem subst_wf s x e r: wf s -> wf e -> subst s x e = Some r -> wf r. Proof. unfold subst. intros. apply fill_wf with (s := make_skel x s) (m := make_map x s) (e := e); auto with term. Qed. Section Subst_subterms. Local Arguments subst s x /. Theorem subst_self x s: subst s x [x] = Some s. Proof. induction s; simpl in *; auto. - destruct (id_eq x y); simpl; congruence. - now rewrite mapp_unmapp, IHs1, IHs2. - now rewrite IHs. Qed. Lemma subst_box x e: subst [] x e = Some []. Proof. easy. Qed. Lemma subst_free_eq x e: subst [x] x e = Some e. Proof. simpl; destruct (id_eq x x); easy. Qed. Lemma subst_free_ne x y e: x <> y -> subst [y] x e = Some [y]. Proof. intro; simpl; destruct (id_eq x y); easy. Qed. Lemma subst_app s1 s2 x e r1 r2: subst s1 x e = Some r1 -> subst s2 x e = Some r2 -> subst (s1 @ s2) x e = Some (r1 @ r2). Proof. simpl; rewrite mapp_unmapp; intros -> ->; easy. Qed. Lemma subst_abs n s x e r: subst s x e = Some r -> subst (n \ s) x e = Some (n \ r). Proof. simpl; intros ->; easy. Qed. 
Lemma subst_app_split s1 s2 x e r: subst (s1 @ s2) x e = Some r -> exists r1 r2, r = r1 @ r2 /\ subst s1 x e = Some r1 /\ subst s2 x e = Some r2. Proof. simpl. rewrite mapp_unmapp. destruct fill, fill; try easy. inversion 1; eauto. Qed. Lemma subst_abs_split n s x e r: subst (n \ s) x e = Some r -> exists r', r = n \ r' /\ subst s x e = Some r'. Proof. simpl. destruct fill; try easy. inversion 1; eauto. Qed. End Subst_subterms. Hint Rewrite subst_box subst_free_eq subst_free_ne subst_app subst_abs using solve [auto]: subst. Lemma subst_not_occ s x e: x ## s -> subst s x e = Some s. Proof. induction s; simpl; intro XS; auto. - unfold subst; simpl. now destruct (id_eq x y). - apply occb_app in XS as []. erewrite subst_app; eauto. - unfold subst; simpl. fold (subst s x). rewrite IHs; auto. Qed. Ltac invert_some := match goal with | H: Some _ = Some _ |- _ => inversion H; subst; clear H; simpl in * end. Ltac subst_app_split := (* these two matches can't be merged :/ *) match goal with | a: term |- _ => match goal with | H: subst (_ @ _) _ _ = Some a |- _ => let a1 := fresh a in let a2 := fresh a in let H1 := fresh H in let H2 := fresh H in apply subst_app_split in H as (a1 & a2 & -> & H1 & H2) end end. Ltac subst_abs_split := match goal with | a: term |- _ => match goal with | H: subst (_ \ _) _ _ = Some a |- _ => let a' := fresh a in let H' := fresh H in apply subst_abs_split in H as (a' & -> & H') end end. Ltac simpl_subst := progress repeat (invert_some + subst_app_split + subst_abs_split + (progress autorewrite with subst in *)). Theorem substitution: forall a x1 e1 x2 e2 a1 a2 e1' r1 r2, x1 <> x2 -> x1 ## e2 -> subst a x1 e1 = Some a1 -> subst a1 x2 e2 = Some r1 -> subst a x2 e2 = Some a2 -> subst e1 x2 e2 = Some e1' -> subst a2 x1 e1' = Some r2 -> r1 = r2. Proof. intros until r2; intros X XE A1 R1 A2 E1' R2. revert a1 a2 r1 r2 A1 A2 R1 R2. induction a; intros. - now simpl_subst. - destruct (id_eq x1 y), (id_eq x2 y); subst. + contradiction. 
+ simpl_subst; congruence. + simpl_subst; rewrite subst_not_occ in R2; congruence. + now simpl_subst. - simpl_subst; f_equal; eauto. - simpl_subst; f_equal; eauto. Qed. Definition lam x s := make_map x s \ make_skel x s. Infix "\\" := lam: term. Theorem subst_lam_eq x s e: subst (lam x s) x e = Some (lam x s). Proof. apply subst_not_occ, make_skel_not_occ. Qed. Theorem subst_lam_not_occ x s y e: y ## s -> subst (lam x s) y e = Some (lam x s). Proof. intro; destruct (id_eq x y) as [-> | ]. + apply subst_lam_eq. + apply subst_not_occ; simpl. rewrite <- make_skel_occ_diff; auto. Qed.
BBQ and refreshments will be available from 12:30pm. All members and non-members, adults and juniors, are welcome to come along, play some tennis, and find out about the club. Colin Dunbar, our qualified coach, will be available to give some tips on improving your game and to advise on the coaching sessions currently available. Rackets and balls are provided. For more information please contact Michael Goldie (committee member) on 07734 073397.
\section{Introduction} \label{sec:intro} A binary code parser converts the machine code representation of a program, library, or code snippet to abstractions such as the instructions, basic blocks, and functions that the binary code represents. The ParseAPI is a multi-platform library for creating such abstractions from binary code sources. The current incarnation uses the Dyninst SymtabAPI as the default binary code source; all platforms and architectures handled by the SymtabAPI are supported. The ParseAPI is designed to be easily extensible to other binary code sources. Support for parsing binary code in memory dumps or other formats requires only implementation of a small interface as described in this document. This API provides the user with a control flow-oriented view of a binary code source. Each code object such as a program binary or library is represented as a top-level collection containing the functions, basic blocks, and edges that represent the control flow graph. A simple query interface is provided for retrieving lower level objects like functions and basic blocks through address or other attribute lookups. These objects can be used to navigate the program structure as described below.
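The control-flow abstractions described above — a top-level code object containing functions, basic blocks, and edges, with address-based queries — can be sketched as a small toy model. This is purely illustrative and is \emph{not} the ParseAPI itself: all type and member names below are hypothetical stand-ins for the real interfaces, chosen only to mirror the structure the text describes.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Toy model of the CFG abstractions described above.
// (Illustrative only; the real ParseAPI types and names differ.)
struct Block {
    uint64_t start, end;             // address range covered by the basic block
    std::vector<Block*> targets;     // outgoing control-flow edges
};

struct Function {
    std::string name;
    uint64_t entry;                  // entry-point address
    std::vector<Block*> blocks;      // basic blocks belonging to this function
};

// Top-level collection (one per binary or library) with a simple
// address-based query interface, as the text describes.
struct CodeObject {
    std::vector<Function> funcs;

    // Look up a function by its entry address; null if none matches.
    const Function* findFuncByEntry(uint64_t addr) const {
        for (const auto& f : funcs)
            if (f.entry == addr) return &f;
        return nullptr;
    }
};
```

In the real library, such a collection would be populated by parsing a binary code source rather than constructed by hand; the point here is only the containment and query structure.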
/- Copyright (c) 2022 Peter Nelson. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Peter Nelson -/ import order.antichain /-! # Orders with involution This file concerns orders that admit an order-reversing involution. In the case of a lattice, these are sometimes referred to as 'i-lattices' or 'lattices with involution'. Such an involution is more general than a `boolean_algebra` complement, but retains many of its properties. Other than a boolean algebra, an example is the subspace lattice of the vector space `𝕂ⁿ` for `𝕂` of nonzero characteristic, where for each subspace `W` we have `invo W = {x ∈ V | ∀ w ∈ W, wᵀx = 0}`; this is not a complement in the stronger sense because `invo W` can intersect `W`. ## Main declarations * `has_involution`: typeclass applying to types with a `preorder` that admit an antitone involution. * `ⁱ` : postfix notation for the function `invo : α → α` given a type `α` with `[has_involution α]` ## TODO Provide instances other than the one from `boolean_algebra`. 
-/ universe u class has_involution (α : Type u) [preorder α] := (invo : α → α) (invo_antitone' : ∀ (x y : α), x ≤ y → invo y ≤ invo x) (invo_involutive' : function.involutive invo) open has_involution variables {α : Type u} postfix `ⁱ`:(max+1) := invo section preorder variables [preorder α] [has_involution α] {x y : α} @[simp] lemma invo_invo (x : α) : xⁱⁱ = x := invo_involutive' x lemma invo_eq_iff_invo_eq : xⁱ = y ↔ yⁱ = x := by {rw [eq_comm], exact invo_involutive'.eq_iff.symm} lemma eq_invo_iff_eq_invo : x = yⁱ ↔ y = xⁱ := by rw [← invo_invo x, invo_eq_iff_invo_eq, invo_invo, invo_invo] lemma invo_le_invo (hxy : x ≤ y) : yⁱ ≤ xⁱ := invo_antitone' _ _ hxy lemma le_of_invo_le (hx : xⁱ ≤ yⁱ) : y ≤ x := by {rw [←invo_invo x, ←invo_invo y], exact invo_le_invo hx,} lemma invo_le_invo_iff_le : xⁱ ≤ yⁱ ↔ y ≤ x := ⟨le_of_invo_le, invo_le_invo⟩ lemma le_invo_iff_le_invo : x ≤ yⁱ ↔ y ≤ xⁱ := by rw [←invo_le_invo_iff_le, invo_invo] lemma invo_le_iff_invo_le : xⁱ ≤ y ↔ yⁱ ≤ x := by rw [←invo_le_invo_iff_le, invo_invo] lemma invo_inj (h : xⁱ = yⁱ) : x = y := invo_involutive'.injective h lemma invo_lt_invo_iff_lt : xⁱ < yⁱ ↔ y < x := by simp [lt_iff_le_not_le, invo_le_invo_iff_le] lemma lt_invo_iff_lt_invo : x < yⁱ ↔ y < xⁱ := by rw [←invo_lt_invo_iff_lt, invo_invo] lemma invo_lt_iff_invo_lt : xⁱ < y ↔ yⁱ < x := by rw [←invo_lt_invo_iff_lt, invo_invo] lemma le_invo_of_le_invo (h : y ≤ xⁱ) : x ≤ yⁱ := le_invo_iff_le_invo.mp h lemma invo_le_of_invo_le (h : yⁱ ≤ x) : xⁱ ≤ y := invo_le_iff_invo_le.mp h lemma invo_involutive : function.involutive (has_involution.invo : α → α) := invo_invo lemma invo_bijective : function.bijective (invo : α → α) := invo_involutive.bijective lemma invo_surjective : function.surjective (invo : α → α) := invo_involutive.surjective lemma invo_injective : function.injective (invo : α → α) := invo_involutive.injective lemma invo_antitone : antitone (invo: α → α) := λ a b, invo_le_invo @[simp] lemma invo_inj_iff : xⁱ = yⁱ ↔ x = y := invo_injective.eq_iff 
lemma invo_comp_invo : invo ∘ invo = @id α := funext invo_invo end preorder section lattice variables [lattice α] [has_involution α] @[simp] lemma invo_inf (x y : α) : (x ⊓ y)ⁱ = xⁱ ⊔ yⁱ := le_antisymm (invo_le_iff_invo_le.mpr (le_inf (invo_le_iff_invo_le.mp le_sup_left) ((invo_le_iff_invo_le.mp le_sup_right)))) (sup_le (invo_le_invo inf_le_left) (invo_le_invo inf_le_right)) @[simp] lemma invo_sup (x y : α) : (x ⊔ y)ⁱ = xⁱ ⊓ yⁱ := by rw [invo_eq_iff_invo_eq, invo_inf, invo_invo, invo_invo] end lattice section boolean_algebra @[priority 100] instance boolean_algebra.to_has_involution [boolean_algebra α] : has_involution α := { invo := compl, invo_antitone' := λ _ _, compl_le_compl, invo_involutive' := compl_involutive } end boolean_algebra section hom variables (α) [preorder α] [has_involution α] instance order_dual.has_involution : has_involution αᵒᵈ := { invo := λ x, order_dual.to_dual (order_dual.of_dual x)ⁱ, invo_antitone' := λ a b h, @invo_antitone' α _ _ b a h, invo_involutive' := invo_involutive' } /-- Taking the involution as an order isomorphism to the order dual. -/ @[simps] def order_iso.invo : α ≃o αᵒᵈ := { to_fun := order_dual.to_dual ∘ invo, inv_fun := invo ∘ order_dual.of_dual, left_inv := invo_invo, right_inv := invo_invo, map_rel_iff' := λ _ _, invo_le_invo_iff_le } lemma invo_strict_anti : strict_anti (invo : α → α) := (order_iso.invo α).strict_mono end hom section antichain variables [preorder α] [has_involution α] {s : set α} lemma is_antichain.image_invo (hs : is_antichain (≤) s) : is_antichain (≤) (invo '' s) := (hs.image_embedding (order_iso.invo α).to_order_embedding).flip lemma is_antichain.preimage_invo (hs : is_antichain (≤) s) : is_antichain (≤) (invo ⁻¹' s) := λ a ha a' ha' hne hle, hs ha' ha (λ h, hne (invo_inj_iff.mp h.symm)) (invo_le_invo hle) end antichain
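As a quick sanity check of the development above, the `boolean_algebra.to_has_involution` instance makes any boolean algebra an example, with `invo` given by complement; on such a type, `invo_inf` specializes to a De Morgan law. The sketch below assumes mathlib's `boolean_algebra bool` instance is in scope:

```lean
-- Illustrative: via `boolean_algebra.to_has_involution`, the involution on
-- `bool` is boolean complement, so `invo_inf` yields a De Morgan law.
example (x y : bool) : (x ⊓ y)ⁱ = xⁱ ⊔ yⁱ := invo_inf x y
```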