Datasets: AI4M
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal33.

Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.

Lemma conj8synthconj3 : forall (lv0 : natural) (lv1 : natural),
  (@eq natural (plus (mult lv0 lv1) lv0) (plus lv0 (mult lv0 lv1))).
Admitted.

QuickChick conj8synthconj3.
#' Vaginal microbiome OTU table metadata
#'
#' Associated metadata to \code{\link{otu_table}}.
#'
#' @name metadata
#'
#' @docType data
#'
#' @usage data(metadata)
#'
#' @references Macklaim et al (2014). Microb Ecol Health Dis.
#' doi: http://dx.doi.org/10.3402/mehd.v26.27799
#'
#' @format A data frame with 297 rows and 35 columns, where rows are samples
#' and columns are collected metadata.
#'
#' @seealso \code{\link{otu_table}}
#'
NULL
lemma LIMSEQ_linear: "X \<longlonglongrightarrow> x \<Longrightarrow> l > 0 \<Longrightarrow> (\<lambda> n. X (n * l)) \<longlonglongrightarrow> x"
module BFF where

open import Data.Nat using (ℕ)
open import Data.Fin using (Fin)
open import Level using () renaming (zero to ℓ₀)
import Category.Monad
import Category.Functor
open import Data.Maybe using (Maybe ; just ; nothing ; maybe′)
open Category.Monad.RawMonad {Level.zero} Data.Maybe.monad using (_>>=_)
open Category.Functor.RawFunctor {Level.zero} Data.Maybe.functor using (_<$>_)
open import Data.List using (List ; [] ; _∷_ ; map ; length)
open import Data.Vec using (Vec ; toList ; fromList ; allFin) renaming (lookup to lookupV ; map to mapV ; [] to []V ; _∷_ to _∷V_)
open import Function using (_∘_ ; flip)
open import Relation.Binary using (Setoid ; DecSetoid ; module DecSetoid)

open import FinMap
open import Generic using (sequenceV ; ≡-to-Π)
open import Structures using (Shaped ; module Shaped)
open import Instances using (VecShaped)
import CheckInsert
open import GetTypes using (VecVec-to-PartialVecVec ; PartialVecVec-to-PartialShapeShape)

module PartialShapeBFF (A : DecSetoid ℓ₀ ℓ₀) where
  open GetTypes.PartialShapeShape public using (Get ; module Get)
  open module A = DecSetoid A using (Carrier) renaming (_≟_ to deq)
  open CheckInsert A

  assoc : {n m : ℕ} → Vec (Fin n) m → Vec Carrier m → Maybe (FinMapMaybe n Carrier)
  assoc []V       []V       = just empty
  assoc (i ∷V is) (b ∷V bs) = (assoc is bs) >>= (checkInsert i b)

  enumerate : {S : Set} {C : Set → S → Set} → (ShapeT : Shaped S C) → (s : S) → C (Fin (Shaped.arity ShapeT s)) s
  enumerate ShapeT s = fill s (allFin (arity s))
    where open Shaped ShapeT

  denumerate : {S : Set} {C : Set → S → Set} → (ShapeT : Shaped S C) → {α : Set} {s : S} → (c : C α s) → Fin (Shaped.arity ShapeT s) → α
  denumerate ShapeT c = flip lookupV (Shaped.content ShapeT c)

  bff : (G : Get) → {i : Get.I G} → (j : Get.I G) → Get.SourceContainer G Carrier (Get.gl₁ G i) → Get.ViewContainer G Carrier (Get.gl₂ G j) → Maybe (Get.SourceContainer G (Maybe Carrier) (Get.gl₁ G j))
  bff G {i} j s v =
    let s′ = enumerate SourceShapeT (gl₁ i)
        t′ = get s′
        g  = fromFunc (denumerate SourceShapeT s)
        g′ = delete-many (Shaped.content ViewShapeT t′) g
        t  = enumerate SourceShapeT (gl₁ j)
        h  = assoc (Shaped.content ViewShapeT (get t)) (Shaped.content ViewShapeT v)
        h′ = (flip union (reshape g′ (Shaped.arity SourceShapeT (gl₁ j)))) <$> h
    in ((λ f → fmapS f t) ∘ flip lookupM) <$> h′
    where open Get G

  sbff : (G : Get) → {i : Get.I G} → (j : Get.I G) → Get.SourceContainer G Carrier (Get.gl₁ G i) → Get.ViewContainer G Carrier (Get.gl₂ G j) → Maybe (Get.SourceContainer G Carrier (Get.gl₁ G j))
  sbff G j s v = bff G j s v >>= Shaped.sequence (Get.SourceShapeT G)

module PartialVecBFF (A : DecSetoid ℓ₀ ℓ₀) where
  open GetTypes.PartialVecVec public using (Get)
  open module A = DecSetoid A using (Carrier) renaming (_≟_ to deq)
  open CheckInsert A

  open PartialShapeBFF A public using (assoc)

  enumerate : {n : ℕ} → Vec Carrier n → Vec (Fin n) n
  enumerate {n} _ = PartialShapeBFF.enumerate A VecShaped n

  enumeratel : (n : ℕ) → Vec (Fin n) n
  enumeratel = PartialShapeBFF.enumerate A VecShaped

  denumerate : {n : ℕ} → Vec Carrier n → Fin n → Carrier
  denumerate = PartialShapeBFF.denumerate A VecShaped

  bff : (G : Get) → {i : Get.I G} → (j : Get.I G) → Vec Carrier (Get.gl₁ G i) → Vec Carrier (Get.gl₂ G j) → Maybe (Vec (Maybe Carrier) (Get.gl₁ G j))
  bff G j s v = PartialShapeBFF.bff A (PartialVecVec-to-PartialShapeShape G) j s v

  sbff : (G : Get) → {i : Get.I G} → (j : Get.I G) → Vec Carrier (Get.gl₁ G i) → Vec Carrier (Get.gl₂ G j) → Maybe (Vec Carrier (Get.gl₁ G j))
  sbff G j s v = PartialShapeBFF.sbff A (PartialVecVec-to-PartialShapeShape G) j s v

module VecBFF (A : DecSetoid ℓ₀ ℓ₀) where
  open GetTypes.VecVec public using (Get)
  open module A = DecSetoid A using (Carrier) renaming (_≟_ to deq)
  open CheckInsert A

  open PartialVecBFF A public using (assoc ; enumerate ; denumerate)

  bff : (G : Get) → {n : ℕ} → (m : ℕ) → Vec Carrier n → Vec Carrier (Get.getlen G m) → Maybe (Vec (Maybe Carrier) m)
  bff G = PartialVecBFF.bff A (VecVec-to-PartialVecVec G)

  sbff : (G : Get) → {n : ℕ} → (m : ℕ) → Vec Carrier n → Vec Carrier (Get.getlen G m) → Maybe (Vec Carrier m)
  sbff G = PartialVecBFF.sbff A (VecVec-to-PartialVecVec G)
Set Implicit Arguments. Require Import Variable_Sets. Require Import deBruijn_Isomorphism. (** This file provides a connection between the generic meta library and some simple typed languages such as STLC or ML based on some isomorphisms. A concrete type/term class [TT] is isomorphic to the generic representation [RR] when there is an isomorphism between [TT] and [Interpret RR]. In the following, two modules [MT] and [MY] are used. - MT stands for module for terms of e.g. STLC. - MY stands for module for types of e.g. STLC *) Module dBTemplate (iso1 iso2 : Iso_full). Hint Rewrite iso1.To_From iso1.From_To iso2.To_From iso2.From_To : isorew. Hint Resolve iso1.To_From iso1.From_To iso2.To_From iso2.From_To. (**************************************************************) (** * Shifting *) (**************************************************************) (** Shifting for a iso1 variable in a iso2 *) Definition Tshift (X : atom) (T : iso2.TT) : iso2.TT := iso2.To (shift X iso1.RR (iso2.From T)). (**************************************************************) (** * Substitution *) (**************************************************************) (** Substitution for a iso1 variable in a iso2 *) Definition Tsubst (T:iso2.TT) (m:atom) (U:iso1.TT) : iso2.TT := iso2.To (subst (iso2.From T) m (iso1.From U)). (**************************************************************) (** * Term size *) (**************************************************************) Definition Ysize (T:iso1.TT) : nat := size (iso1.From T). (**************************************************************) (** * A tactic unfolding everything *) (**************************************************************) Ltac gunfold := unfold Tshift, Tsubst in *; unfold Ysize in *; intros; repeat rewrite iso2.To_From in *; repeat rewrite iso2.From_To in *; repeat rewrite iso1.From_To in *; repeat rewrite iso1.From_To in *; simpl in *. (**************************************************************) (** * Homomorphisms *) (**************************************************************) (** [From] is a homomorphism w.r.t. substitutions. *) Lemma From_Tshift : forall (T:iso2.TT) (a:atom), iso2.From (Tshift a T) = shift a iso1.RR (iso2.From T). Proof. unfold Tshift; intros; autorewrite with isorew; auto. Qed. (** [From] is a homomorphism w.r.t. substitutions. *) Lemma From_Tsubst : forall (T:iso2.TT) (a:atom) (U:iso1.TT), iso2.From (Tsubst T a U) = subst (iso2.From T) a (iso1.From U). Proof. unfold Tsubst; intros; autorewrite with isorew; auto. Qed. (**************************************************************) (** Tshift and Tsubst are identity function when no [Repr] occurs. *) (**************************************************************) Lemma Tshift_id: forall n T, noRepr iso2.RR -> iso1.RR <> iso2.RR -> Tshift n T = T. Proof. gunfold; rewrite <- noRepr_shift_hetero; autorewrite with isorew; auto. Qed. Lemma Tsubst_id : forall n T U, noRepr iso2.RR -> iso1.RR <> iso2.RR -> Tsubst T n U = T. Proof. gunfold; rewrite <- noRepr_subst_hetero; autorewrite with isorew; auto. Qed. (**************************************************************) (** * Environments *) (**************************************************************) (** TEnv is (TT * TT) list. - [inl] is used for type variable binding. - [inr] is used for term variable binding. *) Notation TEnv := (Env iso1.TT). 
Fixpoint From_TEnv (e:TEnv) : (ENV iso1.RR) := match e with | nil => nil | inl T :: e' => inl (iso1.From T) :: (From_TEnv e') | inr T :: e' => inr (iso1.From T) :: (From_TEnv e') end. Fixpoint To_TEnv (e:ENV iso1.RR) : TEnv := match e with | nil => nil | inl T :: e' => inl (iso1.To T) :: (To_TEnv e') | inr T :: e' => inr (iso1.To T) :: (To_TEnv e') end. Lemma From_To_TEnv : forall e : ENV iso1.RR, From_TEnv (To_TEnv e) = e. Proof. induction e; [ simpl; auto | destruct a; simpl; rewrite IHe; rewrite iso1.From_To; auto ]. Qed. Lemma To_From_TEnv : forall e : TEnv, To_TEnv (From_TEnv e) = e. Proof. induction e; simpl; auto. destruct a; simpl; rewrite IHe, iso1.To_From; auto. Qed. Hint Resolve From_To_TEnv. Hint Rewrite From_To_TEnv : isorew. Definition Tlth (e : TEnv) : atom := lth (From_TEnv e). Notation "[[[ e ]]]" := (Tlth e). (** Tremove_right removes the x th element in environment. *) Fixpoint Tremove_right (e : TEnv) (x : nat) {struct e} : TEnv := match e with | nil => nil | (inl T)::e' => (inl T)::(Tremove_right e' x) | (inr T)::e' => match x with | O => e' | S x => (inr T::(Tremove_right e' x)) end end. (**************************************************************) (** To and From function on (option TT) *) (**************************************************************) Fixpoint opt_To (T : option (Interpret iso1.RR)) : option iso1.TT := match T with | None => None | Some T' => Some (iso1.To T') end. Fixpoint opt_From (T : option (iso1.TT)) : option (Interpret iso1.RR) := match T with | None => None | Some T' => Some (iso1.From T') end. Lemma opt_To_preserving_none (T : option (Interpret iso1.RR)) : T = None -> opt_To T = None. Proof. intros; rewrite H; auto. Qed. Lemma opt_To_preserving_none_rev (T : option (Interpret iso1.RR)) : opt_To T = None -> T = None. Proof. induction T; simpl; intros; [discriminate | auto]. Qed. Lemma opt_From_preserving_some (T : option iso1.TT) (t : iso1.TT): T = Some t -> opt_From T = Some (iso1.From t). Proof. intros; rewrite H; auto. Qed. Lemma opt_To_preserving_some (T : option (Interpret iso1.RR)) (t : Interpret iso1.RR): T = Some t -> opt_To T = Some (iso1.To t). Proof. intros; rewrite H; auto. Qed. Lemma opt_To_preserving_some_rev (T : option (Interpret iso1.RR)) (t : Interpret iso1.RR): opt_To T = Some (iso1.To t) -> T = Some t. Proof. induction T; simpl; intros; [idtac | discriminate]. rewrite <- (iso1.From_To a);rewrite <- (iso1.From_To t0). rewrite <- opt_To_preserving_some with (T:= Some a) in H;auto. rewrite <- opt_To_preserving_some with (T:= Some t0) in H;auto. inversion H; rewrite H1;reflexivity. Qed. Lemma opt_To_preserving_eq (T U : option (Interpret iso1.RR)) : T = U -> opt_To T = opt_To U. Proof. intros; rewrite H; auto. Qed. Lemma opt_To_preserving_eq_rev (T U : option (Interpret iso1.RR)) : opt_To T = opt_To U -> T = U. Proof. induction T; induction U; intros; [ inversion H; rewrite <- (iso1.From_To a); rewrite <- (iso1.From_To a0); rewrite H1; reflexivity | inversion H | inversion H | auto ]. Qed. Hint Resolve opt_To_preserving_none opt_To_preserving_none_rev opt_To_preserving_some opt_To_preserving_some_rev opt_To_preserving_eq opt_To_preserving_eq_rev opt_From_preserving_some : opt. (**************************************************************) (** Generic versions of [Tget_left] and [Tget_right] *) (**************************************************************) Definition gTget_left (e : TEnv) (X : atom) : option (iso1.TT) := opt_To (get_left (From_TEnv e) X). 
Definition gTget_right (e : TEnv) (X : atom) : option (iso1.TT) := opt_To (get_right (From_TEnv e) X). (**************************************************************) (** * Well-formedness in an environment *) (**************************************************************) (** Well-formed types in an environment *) Definition Twf_typ (e : TEnv) (T : iso1.TT) : Prop := HO_wf [[From_TEnv e]] (iso1.From T). Fixpoint Twf_env (e : TEnv) : Prop := match e with nil => True | (inr T)::e => Twf_typ e T /\ Twf_env e | (inl T)::e => Twf_typ e T /\ Twf_env e end. Definition gTwf_env (e : TEnv) : Prop := wf_env (From_TEnv e). Lemma Twf_env_gTwf_env : forall (e: TEnv), Twf_env e -> gTwf_env e. Proof. unfold gTwf_env;induction e;auto. simpl;destruct a;intros;destruct H;simpl; (split; [ unfold Twf_typ in H;auto | auto ]). Qed. Lemma gTwf_env_Twf_env : forall (e: TEnv), gTwf_env e -> Twf_env e. Proof. unfold gTwf_env;induction e;auto. simpl;destruct a;intros;destruct H;simpl; (split; [ unfold Twf_typ in H;auto | auto ]). Qed. Lemma Twf_env_weaken : forall (T : iso1.TT) (n m : TEnv), [[[n]]] <= [[[m]]] -> Twf_typ n T -> Twf_typ m T. Proof. unfold Twf_typ. eauto using HO_wf_weaken. Qed. (** Generic version of [Tremove_right] *) Definition gTremove_right (e : TEnv) (x : nat) : TEnv := To_TEnv (remove_right (From_TEnv e) x). Lemma Tremove_right_gTremove_right: forall (e:TEnv)(x:nat), Tremove_right e x = gTremove_right e x. Proof. unfold gTremove_right. induction e; simpl; intros; auto. destruct a; [ simpl; rewrite IHe, iso1.To_From; auto | destruct x; [ simpl; rewrite To_From_TEnv; auto | simpl; rewrite iso1.To_From; rewrite IHe; auto ] ]. Qed. Lemma Twf_typ_remove_right : forall (e : TEnv) (x : nat) (T : iso1.TT), Twf_typ e T -> Twf_typ (Tremove_right e x) T. Proof. intros. rewrite Tremove_right_gTremove_right. unfold Twf_typ, gTremove_right; intros. autorewrite with isorew. auto using HO_wf_remove_right. Qed. Lemma Twf_typ_insert_right : forall (e : TEnv) (n : nat) (T : iso1.TT), Twf_typ (Tremove_right e n) T -> Twf_typ e T. Proof. intros. rewrite Tremove_right_gTremove_right in H. unfold Twf_typ, gTremove_right in *; intros. autorewrite with isorew in *. eauto using HO_wf_insert_right. Qed. Lemma Twf_env_remove_right : forall (e : TEnv) (x : nat), Twf_env e -> Twf_env (Tremove_right e x). Proof. intros. rewrite Tremove_right_gTremove_right. apply gTwf_env_Twf_env. apply Twf_env_gTwf_env in H. unfold gTwf_env, gTremove_right in *; intros. autorewrite with isorew. eauto using wf_env_remove_right. Qed. (** Generic version of [Tinsert_left] *) Definition gTinsert_left n (e e':TEnv) := insert_left n (From_TEnv e) (From_TEnv e'). (** Isomorphisms between generic well-formedness and specific well-formedness *) Definition Tinsert (n: nat) (e: TEnv) (T: iso1.TT) (H:Twf_typ nil T) : TEnv := To_TEnv (insert n (From_TEnv e) (iso1.From T) H). Lemma Tinsert_S : forall (e:TEnv) (n:nat) U H, S [[[e]]] = [[[Tinsert n e U H]]]. Proof. unfold Tinsert, Tlth; intros. autorewrite with isorew. apply insert_S. Qed. Hint Resolve Tinsert_S. Lemma Twf_typ_weakening_right : forall (e : TEnv) (T U : iso1.TT), Twf_typ e U -> Twf_typ ((inr T)::e) U. Proof. unfold Twf_typ. auto using HO_wf_weakening_right. Qed. Lemma Twf_typ_strengthening_right : forall (e : TEnv) (T U : iso1.TT), Twf_typ ((inr T)::e) U -> Twf_typ e U. Proof. unfold Twf_typ. eauto using HO_wf_strengthening_right. Qed. Lemma Twf_typ_eleft : forall (T U V : iso1.TT) (e : TEnv), Twf_typ ((inl U)::e) T -> Twf_typ ((inl V)::e) T. Proof. unfold Twf_typ. 
eauto using HO_wf_left. Qed. End dBTemplate.
[Open in Colab](https://colab.research.google.com/github/starhou/Algorithm/blob/master/Classic_MLmodel.ipynb)

# Loss Functions

### **KL Divergence**

$D_{\mathrm{KL}}(P \| Q)=-\sum_{i} P(i) \ln \frac{Q(i)}{P(i)}=\sum_{i} P(i) \ln \frac{P(i)}{Q(i)}=\sum_{i}P(i)(\log{P(i)}-\log{Q(i)})$

where $Q(i)>0$ and $P(i)>0$; $P$ is the original (true) distribution and $Q$ the approximating distribution. The KL divergence is the expectation of the log-difference between the two distributions. It is not a distance metric, since it is not symmetric.

### **Cross-Entropy** = entropy + KL divergence

Cross-entropy compares different distributions of the same variable. Given two probability distributions $p(x)$ and $q(x)$ over a sample set, where $p(x)$ is the true distribution and $q(x)$ the non-true (model) distribution:

$\mathbf{H}(p, q)=\mathbf{E}_{p}[-\log q]=\mathbf{H}(p)+D_{\mathrm{KL}}(p \| q)=\sum_{x} p(x) \log \frac{1}{q(x)}$

### **Conditional Entropy**: the uncertainty of a random variable $Y$ given that the random variable $X$ is known

$\begin{aligned} H(Y | X) &=\sum_{x} p(x) H(Y | X=x) \\ &=-\sum_{x} p(x) \sum_{y} p(y | x) \log p(y | x) \\ &=-\sum_{x} \sum_{y} p(x, y) \log p(y | x) \\ &=-\sum_{x, y} p(x, y) \log p(y | x) \end{aligned}$

### **Joint Entropy**: measures the uncertainty of different variables taken together

$H(X, Y)=-\sum_{x, y} p(x, y) \log p(x, y)=-\sum_{i=1}^{n} \sum_{j=1}^{m} p\left(x_{i}, y_{j}\right) \log p\left(x_{i}, y_{j}\right)$

# Support Vector Machines (SVM)

The general form of an optimization problem:

\begin{equation} \begin{aligned} &\min \quad f_{0}(x)\\ &\text { s.t. } \quad f_{i}(x) \leq 0, \quad i=1, \cdots, m\\ &\quad \quad \quad h_{i}(x)=0, \quad i=1, \cdots, p \end{aligned} \end{equation}

1. Whatever the form of the primal problem, its dual problem is convex.
2. Any problem can be converted into its Lagrangian dual and solved there.
3. The dual optimum is a lower bound on the optimal value of the primal problem (weak duality).
4. Slater's condition: there exists a point $x \in$ relint $D$ (the relative interior of $D$) satisfying $f_{i}(x)<0, i=1, \ldots, m$ and $A x=b$; such a point is called strictly feasible.
5. Slater's theorem: when Slater's condition holds and the primal problem is a convex optimization problem, strong duality holds.
6. Slater's condition guarantees the existence of a saddle point; the KKT conditions are a sufficient condition for a saddle point to be an optimum. When the primal problem is convex, the KKT conditions are necessary and sufficient.
7. The KKT conditions, where the $h_{i}(x)$ are equality constraints and the $g_{i}(x)$ are inequality constraints:
$$ \begin{array}{c} \nabla f(x)+\sum_{i=1}^{n} \lambda_{i} \nabla h_{i}(x)+\sum_{i=1}^{n} \mu_{i} \nabla g_{i}(x)=0 \\ \mu_{i} g_{i}(x)=0 \\ \mu_{i} \geq 0 \\ h_{i}(x)=0 \\ g_{i}(x) \leq 0 \\ i=1,2, \ldots, n \end{array} $$
8. A convex optimization problem is one whose objective function is convex and whose feasible region is a convex set.

# GBDT

```python

```
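The identity $\mathbf{H}(p,q)=\mathbf{H}(p)+D_{\mathrm{KL}}(p \| q)$ above is easy to check numerically. Below is a minimal plain-Python sketch; the two distributions `p` and `q` are arbitrary example values, not taken from the notes:

```python
import math

# Two discrete distributions on the same support (arbitrary example values).
p = [0.5, 0.3, 0.2]   # true distribution P
q = [0.4, 0.4, 0.2]   # approximating distribution Q

# D_KL(P || Q) = sum_i P(i) * (log P(i) - log Q(i)), requiring P(i), Q(i) > 0.
kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# Entropy H(p) and cross-entropy H(p, q) = -sum_x p(x) log q(x).
h_p  = -sum(pi * math.log(pi) for pi in p)
h_pq = -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# Cross-entropy = entropy + KL divergence, as stated above.
assert abs(h_pq - (h_p + kl)) < 1e-12

# Asymmetry: D_KL(P||Q) != D_KL(Q||P) in general, so it is not a metric.
kl_rev = sum(qi * (math.log(qi) - math.log(pi)) for pi, qi in zip(p, q))
print(kl, kl_rev)
```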
% Isoparametric Formulation Implementation
% clear memory
clear all
close all
clc

% E: modulus of elasticity
% A: area of cross section
% L: length of bar
E=8; L=4;
u_exact=@(x) (56-8*(x-2)-24*heaviside(x-5))/2/x;
hold on;
ezplot(u_exact,[2 6])
title('Exact Solution v.s. FEM','interpreter','latex');
xlabel('x','interpreter','latex');
ylabel('Axial stress, $\it{\sigma}_{x}$','interpreter','latex','FontSize',12);

for i=1:4 % for NEL = 1, 2, 4, 8
    fprintf('\nNumber of elements:%d\n\n',2^(i-1));
    % numberElements: number of elements
    numberElements=2^(i-1);
    % numberNodes: number of nodes
    numberNodes=2*numberElements+1;
    A=zeros(1,numberElements);
    % generation of coordinates and connectivities
    NNOD=3;
    nodeCoordinates=linspace(2,L+2,numberNodes);
    % generate element length vector; use a loop variable other than "i"
    % so the outer loop counter is not shadowed
    for el=1:numberElements
        Le(el)=L/numberElements;
        elementNodes(el,:)=[(el-1)*2+1 (el-1)*2+2 (el-1)*2+3];
        A(el)=(nodeCoordinates(2*el-1)+Le(el)/2)*2;
    end
    % for structure:
    % displacements: displacement vector
    % force : force vector
    % stiffness: stiffness matrix
    force=zeros(numberNodes,1);
    stiffness=zeros(numberNodes,numberNodes);
    % computation of the system stiffness matrix and force vector
    for e=1:numberElements
        % elementDof: element degrees of freedom (Dof)
        elementDof=elementNodes(e,:);
        detJacobian=Le(e)/2;
        invJacobian=1/detJacobian;
        ngp=3;
        [w,xi]=gauss1d(ngp);
        xc=0.5*(nodeCoordinates(elementDof(1))+nodeCoordinates(elementDof(end)));
        for ip=1:ngp
            [shape,naturalDerivatives]=shapeFunctionL3(xi(ip));
            B=naturalDerivatives*invJacobian;
            stiffness(elementDof,elementDof)=...
                stiffness(elementDof,elementDof)+B'*B*w(ip)*detJacobian*E*A(e);
            force(elementDof)=force(elementDof)+...
                8*shape'*detJacobian*w(ip);
        end
        % point load of 24 applied at x = 5 (element boundary or interior)
        if(nodeCoordinates(elementDof(end))==5)
            x=(5-xc)/detJacobian;
            [s,n]=shapeFunctionL3(x);
            force(elementDof)=force(elementDof)+...
                24*s';
        end
        if(nodeCoordinates(elementDof(1))<5&&...
                nodeCoordinates(elementDof(3))>5)
            x=(5-xc)/detJacobian;
            [s,n]=shapeFunctionL3(x);
            force(elementDof)=force(elementDof)+...
                24*s';
        end
    end
    % boundary conditions and solution
    % prescribed dofs
    prescribedDof=[1];
    % solution
    GDof=numberNodes;
    displacements=solution(GDof,prescribedDof,stiffness,force);
    % output displacements/reactions
    outputDisplacementsReactionsPretty(displacements,stiffness, ...
        numberNodes,prescribedDof,force)
    fprintf('Axial stress\n')
    fprintf('element\t\taxial stress\n')
    % stress and strain recovery
    ngp=3;
    elementNodeCoor=zeros(numberElements*NNOD,1);
    elementNodeStr=zeros(numberElements*NNOD,1);
    for e=1:numberElements
        % elementDof: element degrees of freedom (Dof)
        elementDof=elementNodes(e,:);
        detJacobian=Le(e)/2;
        invJacobian=1/detJacobian;
        xi=[-1 0 1];
        for ip=1:NNOD
            [shape,naturalDerivatives]=shapeFunctionL3(xi(ip));
            B=naturalDerivatives*invJacobian;
            elementNodeCoor(ip+(e-1)*NNOD,1)=nodeCoordinates(elementDof(ip));
            elementNodeStr(ip+(e-1)*NNOD,1)=E*B*displacements(elementDof,1);
            fprintf('%2.0f\t%2.0fth node\t%10.4e\n', e, ip, ...
                elementNodeStr(ip+(e-1)*NNOD,1))
        end
    end
    % post process
    switch numberElements
        case 1
            str='r*--';
        case 2
            str='g*--';
        case 4
            str='k*--';
        case 8
            str='m*--';
    end
    plot(elementNodeCoor,elementNodeStr,str)
    hold on;
end
legend('Exact solution','NEL=1','NEL=2','NEL=4','NEL=8','interpreter','latex');
Require Import Terms. Require Import LNaVSyntax. Require Import LNaVBigStep. (** * Equivalences *) (** Low-equivalence judgments. *) Inductive eq_atom : Atom -> Atom -> Lab -> Prop := | eq_a : forall b1 l1 b2 l2 l, l1 = l2 -> ((l1 <: l \/ l2 <: l) -> eq_box b1 b2 l) -> eq_atom (b1,l1) (b2,l2) l with eq_box : Box -> Box -> Lab -> Prop := | eq_v : forall v1 v2 l, eq_val v1 v2 l -> eq_box (V v1) (V v2) l | eq_d : forall e l, eq_box (D e) (D e) l with eq_val : Val -> Val -> Lab -> Prop := | eq_vconst : forall c' l, eq_val (VConst c') (VConst c') l | eq_vinx : forall d a a' l, eq_atom a a' l -> eq_val (VInx d a) (VInx d a') l | eq_vclos : forall r1 r2 x t l, eq_env r1 r2 l -> eq_val (VClos r1 x t) (VClos r2 x t) l with eq_env : Env -> Env -> Lab -> Prop := | eq_e_nil : forall l, eq_env nil nil l | eq_e_cons : forall x a1 a2 r1 r2 l, eq_atom a1 a2 l -> eq_env r1 r2 l -> eq_env ((x,a1)::r1) ((x,a2)::r2) l. Hint Constructors eq_atom eq_box eq_val eq_env. Scheme eq_atom_ind' := Minimality for eq_atom Sort Prop with eq_box_ind' := Minimality for eq_box Sort Prop with eq_val_ind' := Minimality for eq_val Sort Prop with eq_env_ind' := Minimality for eq_env Sort Prop. Combined Scheme eq_mutind from eq_val_ind', eq_box_ind', eq_atom_ind', eq_env_ind'. (** Low-equivalence judgments, that take the pc label into account. *) Definition eq_atom' a1 pc1 a2 pc2 l := (pc1 <: l \/ pc2 <: l) -> (pc1 = pc2 /\ eq_atom a1 a2 l). Definition eq_env' r1 pc1 r2 pc2 l := (pc1 <: l \/ pc2 <: l) -> (pc1 = pc2 /\ eq_env r1 r2 l). (** * Preliminary lemmas. *) Lemma maps_eq_env : forall x a1 a2 r1 r2 l, eq_env r1 r2 l -> maps r1 x a1 -> maps r2 x a2 -> eq_atom a1 a2 l. Proof. intros x a1 a2 r1 r2 l Heq_env Hmaps1 Hmaps2. induction Heq_env. invs Hmaps1. invs Hmaps1; invs Hmaps2. destruct (beq_nat x x0). congruence. auto. Qed. Lemma maps_eq_env' : forall x a1 a2 r1 r2 pc1 pc2 l, eq_env' r1 pc1 r2 pc2 l -> maps r1 x a1 -> maps r2 x a2 -> eq_atom' a1 pc1 a2 pc2 l. Proof. intros x a1 a2 r1 r2 pc1 pc2 l Heq_env' Hmaps1 Hmaps2 Hflows. apply Heq_env' in Hflows. intuition eauto using maps_eq_env. Qed. Lemma eq_env'_cons_inv_env : forall x a1 a2 r1 r2 pc1 pc2 l, eq_env' ((x,a1) :: r1) pc1 ((x,a2) :: r2) pc2 l -> eq_env' r1 pc1 r2 pc2 l. Proof. intros x a1 a2 r1 r2 pc1 pc2 l H Hflows. apply H in Hflows as [? Henv]. invsc Henv. auto. Qed. Lemma eq_env'_cons_inv_atom : forall x a1 a2 r1 r2 pc1 pc2 l, eq_env' ((x,a1) :: r1) pc1 ((x,a2) :: r2) pc2 l -> eq_atom' a1 pc1 a2 pc2 l. Proof. intros x a1 a2 r1 r2 pc1 pc2 l H Hflows. apply H in Hflows as [? Henv]. invsc Henv. auto. Qed. Lemma cons_eq_env' : forall x a1 a2 r1 r2 pc1 pc2 l, eq_atom' a1 pc1 a2 pc2 l -> eq_env' r1 pc1 r2 pc2 l -> eq_env' ((x,a1) :: r1) pc1 ((x,a2) :: r2) pc2 l. Proof. intros x a1 a2 r1 r2 pc1 pc2 l Hatom Henv Hflows. specialize (Hatom Hflows). specialize (Henv Hflows). intuition auto. Qed. (** Reflexivity of low-equivalence. Later, we prove this is actually an equivalence relation. *) Lemma eq_refl : (forall v l, eq_val v v l) /\ (forall b l, eq_box b b l) /\ (forall a l, eq_atom a a l) /\ (forall r l, eq_env r r l). Proof. apply val_box_atom_env_mutind; eauto. Qed. Lemma eq_val_refl : forall v l, eq_val v v l. Proof. pose proof eq_refl. intuition. Qed. Lemma eq_box_refl : forall b l, eq_box b b l. Proof. pose proof eq_refl. intuition. Qed. Lemma eq_atom_refl : forall a l, eq_atom a a l. Proof. pose proof eq_refl. intuition. Qed. Lemma eq_env_refl : forall r l, eq_env r r l. Proof. pose proof eq_refl. intuition. Qed. 
Lemma intro_and2 : forall (P P1 P2: Prop), (P -> (P1 /\ P2)) -> (P -> P1) /\ (P -> P2). Proof. tauto. Qed. Lemma eq_val_eq_tag : forall v1 v2 l, eq_val v1 v2 l -> tag_of v1 = tag_of v2. Proof. intros v1 v2 l Heq. inversion Heq; eauto. Qed. Lemma or_introrefl : forall (P : Prop), P -> P \/ P. Proof. left. assumption. Qed. Lemma or_elimrefl : forall (P : Prop), P \/ P -> P. Proof. intros P H. destruct H; assumption. Qed. (** Monotonicity of the pc label. This property is essential for the non-interference proof to go through. *) Lemma pc_eval_monotonic : forall r t pc a pc', r |- t, pc ==> a, pc' -> pc <: pc'. Proof. intros r t pc a pc' Heval. (eval_cases (induction Heval) Case); eauto 4 using flows_refl, flows_trans, join_1_rev. Qed. (* Binary operations respect equality *) Lemma eq_box_eq_bop : forall b b11 b12 b21 b22 l, eq_box b11 b21 l -> eq_box b12 b22 l -> bop_box b b11 b12 = bop_box b b21 b22. Proof. intros b b11 b12 b21 b22 l Heq1 Heq2. invsc Heq1; eauto; invsc Heq2; eauto; destruct b; invsc H; try invsc H0; eauto. Qed. (** * Non-interference *) (** Strengthened version of non-interference. *) Lemma non_interference_strong : forall r1 t pc1 a1 pc1', r1 |- t, pc1 ==> a1, pc1' -> forall r2 pc2 a2 pc2' l, r2 |- t, pc2 ==> a2, pc2' -> eq_env' r1 pc1 r2 pc2 l -> (eq_env' r1 pc1' r2 pc2' l /\ eq_atom' a1 pc1' a2 pc2' l). Proof. intros r1 t pc1 a1 pc1' H. (eval_cases (induction H) Case); intros r2 pc2 a2 pc2'; try (rename l into l1); intro l; intros Heval2 Heq_env; invsc Heval2. Case "eval_var". eauto using maps_eq_env'. Case "eval_const". intuition. intros Hpc. specialize (Heq_env Hpc). invsc Heq_env; eauto. Case "eval_let". apply intro_and2. intro Hpc. pose proof (pc_eval_monotonic _ _ _ _ _ H). pose proof (pc_eval_monotonic _ _ _ _ _ H0). pose proof (pc_eval_monotonic _ _ _ _ _ H8). pose proof (pc_eval_monotonic _ _ _ _ _ H9). assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using flows_trans). assert (pc' <: l \/ pc2' <: l) by (destruct Hpc; eauto using flows_trans). specialize (Heq_env H5). invsc Heq_env. apply IHeval1 with (l:=l) in H8; try (intro; eauto). destruct H8. apply IHeval2 with (l:=l) in H9; eauto using cons_eq_env'. destruct H9. specialize (H11 Hpc). invsc H11; eauto. Case "eval_abs". intuition. intros Hpc. specialize (Heq_env Hpc). invsc Heq_env; eauto. apply intro_and2. intro Hpc. pose proof (pc_eval_monotonic _ _ _ _ _ H1). pose proof (pc_eval_monotonic _ _ _ _ _ H10). pose proof (maps_eq_env' _ _ _ _ _ _ _ _ Heq_env H H4). pose proof (maps_eq_env' _ _ _ _ _ _ _ _ Heq_env H0 H6). assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using join_1_rev, flows_trans). specialize (H5 H8). invsc H5. specialize (H7 H8). invsc H7. invsc H11. assert (l0 <: l \/ l0 <: l) by (destruct Hpc; eauto using join_2_rev, flows_trans). specialize (H17 H7). invsc H17. invsc H13. specialize (Heq_env H8). invsc Heq_env. apply IHeval with (l:=l) in H10. invsc H10. specialize (H14 Hpc). invsc H14; eauto. apply cons_eq_env'; intro; eauto. SCase "type error". (* spurious *) apply intro_and2. intro Hpc. pose proof (pc_eval_monotonic _ _ _ _ _ H1). assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using join_1_rev, flows_trans). pose proof (maps_eq_env' _ _ _ _ _ _ _ _ Heq_env H H5). specialize (H4 H3). invsc H4. invsc H7. assert (l0 <: l \/ l0 <: l) by (destruct Hpc; eauto using join_2_rev, flows_trans). specialize (H13 H4). invsc H13. invsc H7. simpl in H9. exfalso; auto. Case "eval_app_no_abs". SCase "no type error". (* spurious *) apply intro_and2. intro Hpc. 
pose proof (pc_eval_monotonic _ _ _ _ _ H9). assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using join_1_rev, flows_trans). pose proof (maps_eq_env' _ _ _ _ _ _ _ _ Heq_env H H3). specialize (H4 H2). invsc H4. invsc H7. assert (l0 <: l \/ l0 <: l) by (destruct Hpc; eauto using join_2_rev, flows_trans). specialize (H13 H4). invsc H13. invsc H8. simpl in H0. exfalso; auto. SCase "eval_app_no_abs". apply intro_and2. intro Hpc. assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using join_1_rev, flows_trans). specialize (Heq_env H1). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H3 H H4). invsc H2. assert (l0 <: l \/ l0 <: l) by (destruct Hpc; eauto using join_2_rev, flows_trans). specialize (H12 H2). invsc H12; eauto. Case "eval_inx". intuition. intros Hpc. specialize (Heq_env Hpc). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H1 H H6). eauto using maps_eq_env. Case "eval_match". pose proof (pc_eval_monotonic _ _ _ _ _ H0). pose proof (pc_eval_monotonic _ _ _ _ _ H10). apply intro_and2. intro Hpc. assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using flows_trans, join_1_rev). specialize (Heq_env H3). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H5 H H9). invsc H4. assert (l0 <: l \/ l0 <: l) by (destruct Hpc; eauto using flows_trans, join_2_rev). specialize (H14 H4). invsc H14. invsc H8. apply IHeval with (l:=l) in H10. invsc H10. specialize (H7 Hpc). invsc H7. eauto. eapply cons_eq_env'; intro; eauto. SCase "type error". (* spurious *) pose proof (pc_eval_monotonic _ _ _ _ _ H0). apply intro_and2. intro Hpc. assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using flows_trans, join_1_rev). specialize (Heq_env H2). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H4 H H9). invsc H3. assert (l0 <: l \/ l0 <: l) by (destruct Hpc; eauto using flows_trans, join_2_rev). specialize (H13 H3). invsc H13. invsc H6. simpl in H10. exfalso; auto. Case "eval_match_no_sum". SCase "no type error". (* spurious *) pose proof (pc_eval_monotonic _ _ _ _ _ H10). apply intro_and2. intro Hpc. assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using flows_trans, join_1_rev). specialize (Heq_env H2). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H4 H H9). invsc H3. assert (l0 <: l \/ l0 <: l) by (destruct Hpc; eauto using flows_trans, join_2_rev). specialize (H13 H3). invsc H13. invsc H7. simpl in H0. exfalso; auto. SCase "type error". apply intro_and2. intro Hpc. assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using flows_trans, join_1_rev). specialize (Heq_env H1). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H3 H H9). invsc H2. assert (l0 <: l \/ l0 <: l) by (destruct Hpc; eauto using flows_trans, join_2_rev). specialize (H12 H2). invsc H12; eauto. Case "eval_tag". apply intro_and2. intro Hpc. specialize (Heq_env Hpc). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H1 H H2). invsc H0. split. auto. split. reflexivity. constructor. reflexivity. intro. specialize (H9 H0). unfold tag_box; invsc H9; eauto. invsc H3; eauto. Case "eval_bop". apply intro_and2. intro Hpc. specialize (Heq_env Hpc). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H2 H H8). pose proof (maps_eq_env _ _ _ _ _ _ H2 H0 H9). split. intuition. split. reflexivity. invsc H1. invsc H3. constructor. reflexivity. intro. assert (l'0 <: l \/ l'0 <: l) by (destruct H1; eauto using flows_trans, join_1_rev). assert (l''0 <: l \/ l''0 <: l) by (destruct H1; eauto using flows_trans, join_2_rev). specialize (H12 H3). specialize (H11 H4). pose proof (eq_box_eq_bop bo _ _ _ _ l H12 H11). rewrite H5. 
apply eq_box_refl. Case "eval_bracket". pose proof (pc_eval_monotonic _ _ _ _ _ H0). pose proof (pc_eval_monotonic _ _ _ _ _ H8). apply intro_and2. intro Hpc. assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using join_1_rev). specialize (Heq_env H3). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H6 H H4). invsc H5. assert (l'0 <: l \/ l'0 <: l) by (destruct Hpc; eauto using join_2_rev). specialize (H14 H5). invsc H14. invsc H10. split. intuition. split. reflexivity. apply IHeval with (l:=l) in H8; try (intro; eauto). invsc H8. remember (flows_dec (l'' \_/ pc') (l0 \_/ (pc2 \_/ l'0))) as f1. remember (flows_dec (l''0 \_/ pc'0) (l0 \_/ (pc2 \_/ l'0))) as f2. destruct f1; destruct f2; constructor; auto; intro. SCase "flow, flow". assert (pc' <: l \/ pc'0 <: l) by (destruct H8; destruct Hpc; eauto using join_2_rev, flows_trans, join_minimal). specialize (H9 H10). invsc H9. invsc H12. apply H17. destruct H8; destruct Hpc; eauto using join_1_rev, flows_trans, join_minimal. SCase "flow, no flow". (* spurious *) assert (pc' <: l \/ pc'0 <: l) by (destruct H8; destruct Hpc; eauto using join_2_rev, flows_trans, join_minimal). specialize (H9 H10). invsc H9. invsc H12. contradiction. SCase "no flow, flow". (* spurious *) assert (pc' <: l \/ pc'0 <: l) by (destruct H8; destruct Hpc; eauto using join_2_rev, flows_trans, join_minimal). specialize (H9 H10). invsc H9. invsc H12. contradiction. Case "eval_bracket". SCase "type error". (* spurious *) apply intro_and2. intro Hpc. pose proof (pc_eval_monotonic _ _ _ _ _ H0). assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using join_1_rev, flows_trans). specialize (Heq_env H2). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H5 H H4). invsc H3. assert (l'0 <: l \/ l'0 <: l) by (destruct Hpc; eauto using join_2_rev, flows_trans). specialize (H13 H3). invsc H13. invsc H7. simpl in H8. exfalso; auto. Case "eval_bracket_no_lab". SCase "no type error". (* spurious *) apply intro_and2. intro Hpc. pose proof (pc_eval_monotonic _ _ _ _ _ H8). assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using join_1_rev, flows_trans). specialize (Heq_env H2). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H5 H H4). invsc H3. assert (l'0 <: l \/ l'0 <: l) by (destruct Hpc; eauto using join_2_rev, flows_trans). specialize (H13 H3). invsc H13. invsc H9. simpl in H0. exfalso; auto. Case "eval_bracket_no_lab". apply intro_and2. intro Hpc. assert (pc <: l \/ pc2 <: l) by (destruct Hpc; eauto using join_1_rev). specialize (Heq_env H1). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H3 H H4). invsc H2. assert (l'0 <: l \/ l'0 <: l) by (destruct Hpc; eauto using join_2_rev). specialize (H12 H2). invsc H12; eauto. Case "eval_label_of". apply intro_and2. intro Hpc. specialize (Heq_env Hpc). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H1 H H2). invsc H0. split; eauto. Case "eval_get_pc". apply intro_and2. intro Hpc. specialize (Heq_env Hpc). invsc Heq_env. split; eauto. Case "eval_mk_nav". apply intro_and2. intro Hpc. specialize (Heq_env Hpc). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H1 H H2). invsc H0. split. intuition. split. reflexivity. constructor. reflexivity. intro. specialize (H9 H0). invsc H9; unfold mk_nav_box; try invsc H3; eauto; destruct c'; auto. Case "eval_to_sum". apply intro_and2. intro Hpc. specialize (Heq_env Hpc). invsc Heq_env. pose proof (maps_eq_env _ _ _ _ _ _ H1 H H2). invsc H0. split. intuition. split. reflexivity. constructor. reflexivity. intro. specialize (H9 H0). unfold to_sum_box. 
invsc H9; constructor; constructor; eauto. Qed. (** Finally, non-interference. *) Theorem non_interference: forall r1 t pc1 a1 pc1' r2 pc2 a2 pc2' l, eq_env' r1 pc1 r2 pc2 l -> r1 |- t, pc1 ==> a1, pc1' -> r2 |- t, pc2 ==> a2, pc2' -> eq_atom' a1 pc1' a2 pc2' l. Proof. intros r1 t pc1 a1 pc1' r2 pc2 a2 pc2' l Hequiv_env Heval1 Heval2. assert (eq_env' r1 pc1' r2 pc2' l /\ eq_atom' a1 pc1' a2 pc2' l) by eauto using non_interference_strong. intuition. Qed.
import tactic.basic import .ch07_indprop open nat ( le less_than_or_equal.refl less_than_or_equal.step lt ) open indprop (next_nat total_relation total_relation.intro) open indprop.next_nat variables {α : Type} variables {n m o p: ℕ} namespace rel /- Definition relation (X: Type) := X → X → Prop. -/ def relation (α : Type) := α → α → Prop /- Print le. (* ====> Inductive le (n : nat) : nat -> Prop := le_n : n <= n | le_S : forall m : nat, n <= m -> n <= S m *) Check le : nat → nat → Prop. Check le : relation nat. -/ #print le #check (le : ℕ → ℕ → Prop) #check (le : relation ℕ) /- Definition partial_function {X: Type} (R: relation X) := ∀x y1 y2 : X, R x y1 → R x y2 → y1 = y2. -/ def partial_function (R : relation α) := ∀{x y₁ y₂} (h₁ : R x y₁) (h₂ : R x y₂), y₁ = y₂ /- Print next_nat. (* ====> Inductive next_nat (n : nat) : nat -> Prop := nn : next_nat n (S n) *) Check next_nat : relation nat. Theorem next_nat_partial_function : partial_function next_nat. Proof. unfold partial_function. intros x y1 y2 H1 H2. inversion H1. inversion H2. reflexivity. Qed. -/ #print next_nat #check (next_nat : relation ℕ) theorem next_nat_partial_function : partial_function next_nat := begin unfold partial_function, intros, cases h₁, cases h₂, refl, end /- Theorem le_not_a_partial_function : ¬(partial_function le). Proof. unfold not. unfold partial_function. intros Hc. assert (0 = 1) as Nonsense. { apply Hc with (x := 0). - apply le_n. - apply le_S. apply le_n. } discriminate Nonsense. Qed. -/ theorem le_not_a_partial_function : ¬partial_function le := begin unfold partial_function, by_contradiction c, have nonsense : 0 = 1, apply @c 0 0 1, apply less_than_or_equal.refl, apply less_than_or_equal.step, apply less_than_or_equal.refl, cases nonsense, end theorem total_relation_not_partial : ¬partial_function total_relation := begin unfold partial_function, by_contradiction c, cases c (total_relation.intro 0 0) (total_relation.intro 0 1), end theorem empty_relation_partial : partial_function (@empty_relation α) := begin unfold partial_function, intros, cases h₁, end /- Definition reflexive {X: Type} (R: relation X) := ∀a : X, R a a. Theorem le_reflexive : reflexive le. Proof. unfold reflexive. intros n. apply le_n. Qed. -/ def reflexive (R : relation α) := ∀a, R a a theorem le_refl : reflexive le := begin unfold reflexive, intro n, apply less_than_or_equal.refl, end /- Definition transitive {X: Type} (R: relation X) := ∀a b c : X, (R a b) → (R b c) → (R a c). Theorem le_trans : transitive le. Proof. intros n m o Hnm Hmo. induction Hmo. - (* le_n *) apply Hnm. - (* le_S *) apply le_S. apply IHHmo. Qed. Theorem lt_trans: transitive lt. Proof. unfold lt. unfold transitive. intros n m o Hnm Hmo. apply le_S in Hnm. apply le_trans with (a := (S n)) (b := (S m)) (c := o). apply Hnm. apply Hmo. Qed. -/ #check transitive def transitive (R : relation α) := ∀{a b c : α} (hab : R a b) (hbc : R b c), R a c theorem le_trans: transitive le := begin unfold transitive, intros, induction hbc with b' h ih, exact hab, apply less_than_or_equal.step, exact ih, end theorem lt_trans : transitive lt := begin unfold transitive, intros, exact le_trans (less_than_or_equal.step hab) hbc, end /- Theorem lt_trans' : transitive lt. Proof. (* Prove this by induction on evidence that m is less than o. *) unfold lt. unfold transitive. intros n m o Hnm Hmo. induction Hmo as [| m' Hm'o]. (* FILL IN HERE *) Admitted. 
-/ theorem lt_trans' : transitive lt := begin unfold transitive, intros, induction hbc with b' h ih, exact less_than_or_equal.step hab, exact less_than_or_equal.step ih, end /- Theorem lt_trans'' : transitive lt. Proof. unfold lt. unfold transitive. intros n m o Hnm Hmo. induction o as [| o']. (* FILL IN HERE *) Admitted. -/ theorem lt_trans'' : transitive lt := begin unfold transitive, intros, induction c with c ih, cases hbc, cases hbc with _ h, exact less_than_or_equal.step hab, exact less_than_or_equal.step (ih h), end /- Theorem le_Sn_le : ∀n m, S n ≤ m → n ≤ m. Proof. intros n m H. apply le_trans with (S n). - apply le_S. apply le_n. - apply H. Qed. -/ theorem nat.le_of_succ_le (h : n + 1 ≤ m) : n ≤ m := begin apply le_trans, apply less_than_or_equal.step, apply less_than_or_equal.refl, exact h, end /- Theorem le_S_n : ∀n m, (S n ≤ S m) → (n ≤ m). Proof. (* FILL IN HERE *) Admitted. -/ theorem nat.le_of_succ_le_succ (h : n + 1 ≤ m + 1) : n ≤ m := begin cases h with _ h, exact less_than_or_equal.refl, exact nat.le_of_succ_le h, end /- Theorem le_Sn_n : ∀n, ¬(S n ≤ n). Proof. (* FILL IN HERE *) Admitted. -/ theorem nat.not_succ_le_self (n) : ¬(n + 1 ≤ n) := begin by_contra c, induction n with n ih, cases c, exact ih (nat.le_of_succ_le_succ c), end /- Definition symmetric {X: Type} (R: relation X) := ∀a b : X, (R a b) → (R b a). -/ def symmetric (R : relation α) := ∀{a b : α} (h : R a b), R b a /- Theorem le_not_symmetric : ¬(symmetric le). Proof. (* FILL IN HERE *) Admitted. -/ theorem le_not_symmetric : ¬symmetric le := begin unfold symmetric, by_contra c, cases c (less_than_or_equal.step (@less_than_or_equal.refl 0)), end /- Definition antisymmetric {X: Type} (R: relation X) := ∀a b : X, (R a b) → (R b a) → a = b. -/ def anti_symmetric (R : relation α) := ∀{a b} (hab : R a b) (hba : R b a), a = b /- Theorem le_antisymmetric : antisymmetric le. Proof. (* FILL IN HERE *) Admitted. -/ theorem le_antisymm : anti_symmetric le := begin unfold anti_symmetric, intros, induction a with a ih generalizing b, cases hba, refl, cases b, cases hab, apply congr_arg, apply ih, exact nat.le_of_succ_le_succ hab, exact nat.le_of_succ_le_succ hba, end /- Theorem le_step : ∀n m p, n < m → m ≤ S p → n ≤ p. Proof. (* FILL IN HERE *) Admitted. -/ theorem le_step (hn : n < m) (hm : m ≤ p + 1) : n ≤ p := nat.le_of_succ_le_succ $ le_trans hn hm /- Definition equivalence {X:Type} (R: relation X) := (reflexive R) ∧ (symmetric R) ∧ (transitive R). -/ def equivalence (R : relation α) := reflexive R ∧ symmetric R ∧ transitive R /- Definition order {X:Type} (R: relation X) := (reflexive R) ∧ (antisymmetric R) ∧ (transitive R). -/ def partial_order (R : relation α) := reflexive R ∧ anti_symmetric R ∧ transitive R /- Definition preorder {X:Type} (R: relation X) := (reflexive R) ∧ (transitive R). Theorem le_order : order le. Proof. unfold order. split. - (* refl *) apply le_reflexive. - split. + (* antisym *) apply le_antisymmetric. + (* transitive. *) apply le_trans. Qed. -/ def preorder (R : relation α) := reflexive R ∧ transitive R theorem le_order : partial_order le := begin unfold partial_order, split, apply le_refl, split, apply le_antisymm, apply le_trans, end /- Inductive clos_refl_trans {A: Type} (R: relation A) : relation A := | rt_step x y (H : R x y) : clos_refl_trans R x y | rt_refl x : clos_refl_trans R x x | rt_trans x y z (Hxy : clos_refl_trans R x y) (Hyz : clos_refl_trans R y z) : clos_refl_trans R x z. 
-/ inductive clos_refl_trans (R : relation α) : relation α | rt_step {x y} (h : R x y) : clos_refl_trans x y | rt_refl (x) : clos_refl_trans x x | rt_trans {x y z} (hxy : clos_refl_trans x y) (hyz : clos_refl_trans y z) : clos_refl_trans x z open clos_refl_trans /- Theorem next_nat_closure_is_le : ∀n m, (n ≤ m) ↔ ((clos_refl_trans next_nat) n m). Proof. intros n m. split. - (* -> *) intro H. induction H. + (* le_n *) apply rt_refl. + (* le_S *) apply rt_trans with m. apply IHle. apply rt_step. apply nn. - (* <- *) intro H. induction H. + (* rt_step *) inversion H. apply le_S. apply le_n. + (* rt_refl *) apply le_n. + (* rt_trans *) apply le_trans with y. apply IHclos_refl_trans1. apply IHclos_refl_trans2. Qed. -/ theorem next_nat_closure_is_le : n ≤ m ↔ (clos_refl_trans next_nat) n m := begin split, intro h, induction h with m h ih, apply rt_refl, apply rt_trans, apply ih, apply rt_step, apply nn, intro h, induction h, case rt_step : x y h { cases h, apply less_than_or_equal.step, exact less_than_or_equal.refl, }, case rt_refl : x { exact less_than_or_equal.refl, }, case rt_trans : x y z hxy hyz ihx ihy { exact le_trans ihx ihy, }, end /- Inductive clos_refl_trans_1n {A : Type} (R : relation A) (x : A) : A → Prop := | rt1n_refl : clos_refl_trans_1n R x x | rt1n_trans (y z : A) (Hxy : R x y) (Hrest : clos_refl_trans_1n R y z) : clos_refl_trans_1n R x z. -/ inductive clos_refl_trans_1n (R : relation α) : α → α → Prop | rt1n_refl (x) : clos_refl_trans_1n x x | rt1n_trans {x y z} (hxy : R x y) (hyz : clos_refl_trans_1n y z) : clos_refl_trans_1n x z open clos_refl_trans_1n /- Lemma rsc_R : ∀(X:Type) (R:relation X) (x y : X), R x y → clos_refl_trans_1n R x y. Proof. intros X R x y H. apply rt1n_trans with y. apply H. apply rt1n_refl. Qed. -/ lemma rsc_R {R : relation α} {x y} (h : R x y) : clos_refl_trans_1n R x y := rt1n_trans h (rt1n_refl y) /- Lemma rsc_trans : ∀(X:Type) (R: relation X) (x y z : X), clos_refl_trans_1n R x y → clos_refl_trans_1n R y z → clos_refl_trans_1n R x z. Proof. (* FILL IN HERE *) Admitted. -/ lemma rsc_trans {R : relation α} {x y z} (hxy : clos_refl_trans_1n R x y) (hyz : clos_refl_trans_1n R y z) : clos_refl_trans_1n R x z := begin induction hxy, case rt1n_refl { exact hyz, }, case rt1n_trans : x' y' z' hxy' hyz' ih { exact rt1n_trans hxy' (ih hyz), }, end /- Theorem rtc_rsc_coincide : ∀(X:Type) (R: relation X) (x y : X), clos_refl_trans R x y ↔ clos_refl_trans_1n R x y. Proof. (* FILL IN HERE *) Admitted. -/ theorem rtc_rsc_coincide {R : relation α} {x y} : clos_refl_trans R x y ↔ clos_refl_trans_1n R x y := begin split, intro h, induction h, case rt_step : x' y' h' { exact rsc_R h', }, case rt_refl { exact rt1n_refl h, }, case rt_trans : x' y' z' hxy hyz ihy ihz { exact rsc_trans ihy ihz, }, intro h, induction h, case rt1n_refl { exact rt_refl h }, case rt1n_trans : x' y' z' hxy hyz ih { exact rt_trans (rt_step hxy) ih, }, end end rel
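As an aside, for a finite relation the reflexive-transitive closure defined by `clos_refl_trans` above can be computed by saturation. The following plain-Python sketch (illustrative only, not part of the Lean development) mirrors the three constructors and checks `next_nat_closure_is_le` on a small fragment of ℕ:

```python
# rt_closure computes the reflexive-transitive closure of a finite relation,
# mirroring the constructors of clos_refl_trans above.
def rt_closure(elems, rel):
    closure = {(x, x) for x in elems} | set(rel)   # rt_refl and rt_step
    while True:
        # rt_trans: compose pairs until no new pair appears (fixpoint)
        extra = {(x, z) for (x, y) in closure for (y2, z) in closure if y == y2}
        if extra <= closure:
            return closure
        closure |= extra

# next_nat on {0..4}: relates n to n + 1, as in the next_nat relation above.
elems = range(5)
next_nat = {(n, n + 1) for n in elems if n + 1 in elems}

# On this finite fragment the closure coincides with <=
# (next_nat_closure_is_le).
assert rt_closure(elems, next_nat) == \
    {(a, b) for a in elems for b in elems if a <= b}
```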
-- --------------------------------------------------------------- [ Model.idr ]
-- Module    : Model.idr
-- Copyright : (c) Jan de Muijnck-Hughes
-- License   : see LICENSE
-- --------------------------------------------------------------------- [ EOH ]
||| Example of using PML to model a paper.
module GRL.Lang.Test.PML

import GRL.Lang.PML

-- ------------------------------------------------------------------- [ Paper ]
paper : PAPER
paper = MkPaper "My First Paper"

abst : ABSTRACT
abst = MkAbs

bib : BIB
bib = MkBib

intr : SECT
intr = MkSect "Introduction"

meth : SECT
meth = MkSect "Methodology"

res : SECT
res = MkSect "Results"

disc : SECT
disc = MkSect "Discussion"

-- ------------------------------------------------------------------- [ Tasks ]
wabs : WRITING
wabs = MkAuth "Abstract" SATISFIED

rabs : REVIEW
rabs = MkRev "Abstract" WEAKSATIS

wbib : WRITING
wbib = MkAuth "Bib" WEAKSATIS

rbib : REVIEW
rbib = MkRev "Bib" WEAKSATIS

wIntro : WRITING
wIntro = MkAuth "Intro" DENIED

rIntro : REVIEW
rIntro = MkRev "Intro" DENIED

wMeth : WRITING
wMeth = MkAuth "Meth" DENIED

rMeth : REVIEW
rMeth = MkRev "Meth" DENIED

wRes : WRITING
wRes = MkAuth "Res" DENIED

rRes : REVIEW
rRes = MkRev "Res" DENIED

wDis : WRITING
wDis = MkAuth "Dis" DENIED

rDis : REVIEW
rDis = MkRev "Dis" DENIED

-- ------------------------------------------------------------- [ Build Model ]
paperPlan : GModel
paperPlan = emptyModel
  \= paper \= abst \= wabs \= rabs
  \= bib \= wbib \= rbib
  \= intr \= wIntro \= rIntro
  \= meth \= wMeth \= rMeth
  \= res \= wRes \= rRes
  \= disc \= wDis \= rDis
  \= (paper &= abst) \= (wabs ==> abst) \= (rabs ==> abst)
  \= (paper &= bib)  \= (wbib ==> bib)  \= (rbib ==> bib)
  \= (paper &= intr) \= (wIntro ==> intr) \= (rIntro ==> intr)
  \= (paper &= meth) \= (wMeth ==> meth) \= (rMeth ==> meth)
  \= (paper &= res)  \= (wRes ==> res)  \= (rRes ==> res)
  \= (paper &= disc) \= (wDis ==> disc) \= (rDis ==> disc)

-- -------------------------------------------------------------------- [ Test ]
export
runTest : IO ()
runTest = do
  putStrLn $ prettyModel paperPlan
-- --------------------------------------------------------------------- [ EOF ]
{-# OPTIONS --cubical --no-import-sorts --safe #-} module Cubical.DStructures.Structures.Higher where open import Cubical.Foundations.Prelude open import Cubical.Foundations.Equiv open import Cubical.Foundations.HLevels open import Cubical.Foundations.Isomorphism open import Cubical.Foundations.Function open import Cubical.Foundations.Pointed open import Cubical.Foundations.Univalence open import Cubical.Functions.FunExtEquiv open import Cubical.Homotopy.Base open import Cubical.Homotopy.Connected open import Cubical.Data.Sigma open import Cubical.Data.Nat open import Cubical.Relation.Binary open import Cubical.Algebra.Group open import Cubical.Algebra.Group.Higher open import Cubical.Algebra.Group.EilenbergMacLane1 open import Cubical.HITs.EilenbergMacLane1 open import Cubical.DStructures.Base open import Cubical.DStructures.Meta.Properties open import Cubical.DStructures.Meta.Isomorphism open import Cubical.DStructures.Structures.Universe open import Cubical.DStructures.Structures.Type open import Cubical.DStructures.Structures.Group open import Cubical.DStructures.Structures.Constant private variable ℓ ℓ' : Level 𝒮ᴰ-connected : {ℓ : Level} (k : ℕ) → URGStrᴰ (𝒮-universe {ℓ}) (isConnected k) ℓ-zero 𝒮ᴰ-connected k = Subtype→Sub-𝒮ᴰ (λ A → isConnected k A , isPropIsContr) 𝒮-universe 𝒮ᴰ-truncated : {ℓ : Level} (n : ℕ) → URGStrᴰ (𝒮-universe {ℓ}) (isOfHLevel n) ℓ-zero 𝒮ᴰ-truncated n = Subtype→Sub-𝒮ᴰ (λ A → isOfHLevel n A , isPropIsOfHLevel n) 𝒮-universe 𝒮ᴰ-BGroup : (n k : ℕ) → URGStrᴰ (𝒮-universe {ℓ}) (λ A → A × (isConnected (k + 1) A) × (isOfHLevel (n + k + 2) A)) ℓ 𝒮ᴰ-BGroup n k = combine-𝒮ᴰ 𝒮ᴰ-pointed (combine-𝒮ᴰ (𝒮ᴰ-connected (k + 1)) (𝒮ᴰ-truncated (n + k + 2))) 𝒮-BGroup : (n k : ℕ) → URGStr (Σ[ A ∈ Type ℓ ] A × (isConnected (k + 1) A) × (isOfHLevel (n + k + 2) A)) ℓ 𝒮-BGroup n k = ∫⟨ 𝒮-universe ⟩ 𝒮ᴰ-BGroup n k 𝒮-1BGroup : URGStr 1BGroupΣ ℓ 𝒮-1BGroup = 𝒮-BGroup 0 1 𝒮-Iso-BGroup-Group : {ℓ : Level} → 𝒮-PIso (𝒮-group ℓ) 𝒮-1BGroup RelIso.fun 𝒮-Iso-BGroup-Group G = EM₁ G , embase , EM₁Connected G , EM₁Groupoid G RelIso.inv 𝒮-Iso-BGroup-Group = π₁-1BGroupΣ RelIso.leftInv 𝒮-Iso-BGroup-Group = π₁EM₁≃ RelIso.rightInv 𝒮-Iso-BGroup-Group BG = basetype-≅ , basepoint-≅ , tt , tt where -- notation type = fst BG * = fst (snd BG) conn = fst (snd (snd BG)) trunc = snd (snd (snd BG)) BG' = (bgroup (type , *) conn trunc) π₁BG : Group π₁BG = π₁-1BGroupΣ BG EM₁π₁BG : 1BGroupΣ EM₁π₁BG = EM₁ π₁BG , embase , EM₁Connected π₁BG , EM₁Groupoid π₁BG -- equivalences basetype-≅ : EM₁ π₁BG ≃ type fst basetype-≅ = EM₁-functor-lInv-function π₁BG BG' (GroupEquiv.hom (π₁EM₁≃ π₁BG)) snd basetype-≅ = EM₁-functor-lInv-onIso-isEquiv π₁BG BG' (π₁EM₁≃ π₁BG) basepoint-≅ : * ≡ * basepoint-≅ = refl 𝒮ᴰ-BGroupHom : (n k : ℕ) → URGStrᴰ (𝒮-BGroup {ℓ} n k ×𝒮 𝒮-BGroup {ℓ'} n k) (λ (BG , BH) → BGroupHomΣ BG BH) (ℓ-max ℓ ℓ') 𝒮ᴰ-BGroupHom n k = make-𝒮ᴰ (λ {(BG , BH)} {(BG' , BH')} f (((eᴳ , _) , eᴳ-pt , _), ((eᴴ , _) , eᴴ-pt , _)) f' → ((eᴴ , eᴴ-pt) ∘∙ f) ∙∼ (f' ∘∙ (eᴳ , eᴳ-pt))) (λ {(BG , BH)} f → q {(BG , BH)} f) contrSingl where module _ {(BG , BH) : BGroupΣ n k × BGroupΣ n k} (f : BGroupHomΣ BG BH) where q : (id∙ (baseΣ BH) ∘∙ f) ∙∼ (f ∘∙ id∙ (baseΣ BG)) q = funExt∙⁻ (id∙ (baseΣ BH) ∘∙ f ≡⟨ ∘∙-idʳ f ⟩ f ≡⟨ sym (∘∙-idˡ f) ⟩ (f ∘∙ id∙ (baseΣ BG)) ∎) module _ ((BG , BH) : BGroupΣ n k × BGroupΣ n k) (f : BGroupHomΣ BG BH) where contrSingl : isContr (Σ[ f' ∈ BGroupHomΣ BG BH ] ((id∙ (baseΣ BH) ∘∙ f) ∙∼ (f' ∘∙ id∙ (baseΣ BG)))) contrSingl = isContrRespectEquiv (Σ-cong-equiv-snd (λ f' → f ≡ f' ≃⟨ invEquiv (funExt∙≃ f f') ⟩ f ∙∼ f' ≃⟨ pathToEquiv 
(cong (_∙∼ f') (sym (∘∙-idʳ f)) ∙ cong ((id∙ (baseΣ BH) ∘∙ f) ∙∼_) (sym (∘∙-idˡ f'))) ⟩ (id∙ (baseΣ BH) ∘∙ f) ∙∼ (f' ∘∙ id∙ (baseΣ BG)) ■)) (isContrSingl f)
lemma subset_snd_imageI: "A \<times> B \<subseteq> S \<Longrightarrow> x \<in> A \<Longrightarrow> B \<subseteq> snd ` S"
(* Title: A state based hotel key card system Author: Tobias Nipkow, TU Muenchen *) (*<*) theory State imports Basis begin declare if_split_asm[split] (*>*) section\<open>A state based model\<close> text\<open>The model is based on three opaque types @{typ guest}, @{typ key} and @{typ room}. Type @{typ card} is just an abbreviation for @{typ"key \<times> key"}. The state of the system is modelled as a record which combines the information about the front desk, the rooms and the guests. \<close> record state = owns :: "room \<Rightarrow> guest option" currk :: "room \<Rightarrow> key" issued :: "key set" cards :: "guest \<Rightarrow> card set" roomk :: "room \<Rightarrow> key" isin :: "room \<Rightarrow> guest set" safe :: "room \<Rightarrow> bool" text\<open>\noindent Reception records who @{const owns} a room (if anybody, hence @{typ"guest option"}), the current key @{const currk} that has been issued for a room, and which keys have been @{const issued} so far. Each guest has a set of @{const cards}. Each room has a key @{const roomk} recorded in the lock and a set @{const isin} of occupants. The auxiliary variable @{const safe} is explained further below; we ignore it for now. In specification languages like Z, VDM and B we would now define a number of operations on this state space. Since they are the only permissible operations on the state, this defines a set of \emph{reachable} states. In a purely logical environment like Isabelle/HOL this set can be defined directly by an inductive definition. Each clause of the definition corresponds to a transition/operation/event. This is the standard approach to modelling state machines in theorem provers. The set of reachable states of the system (called \<open>reach\<close>) is defined by four transitions: initialization, checking in, entering a room, and leaving a room:\<close> (*<*) inductive_set reach :: "state set" where (*>*) init: "inj initk \<Longrightarrow> \<lparr> owns = (\<lambda>r. None), currk = initk, issued = range initk, cards = (\<lambda>g. {}), roomk = initk, isin = (\<lambda>r. {}), safe = (\<lambda>r. True) \<rparr> \<in> reach" | check_in: "\<lbrakk> s \<in> reach; k \<notin> issued s \<rbrakk> \<Longrightarrow> s\<lparr> currk := (currk s)(r := k), issued := issued s \<union> {k}, cards := (cards s)(g := cards s g \<union> {(currk s r, k)}), owns := (owns s)(r := Some g), safe := (safe s)(r := False) \<rparr> \<in> reach" | enter_room: "\<lbrakk> s \<in> reach; (k,k') \<in> cards s g; roomk s r \<in> {k,k'} \<rbrakk> \<Longrightarrow> s\<lparr> isin := (isin s)(r := isin s r \<union> {g}), roomk := (roomk s)(r := k'), safe := (safe s)(r := owns s r = \<lfloor>g\<rfloor> \<and> isin s r = {} \<and> k' = currk s r \<or> safe s r) \<rparr> \<in> reach" | exit_room: "\<lbrakk> s \<in> reach; g \<in> isin s r \<rbrakk> \<Longrightarrow> s\<lparr> isin := (isin s)(r := isin s r - {g}) \<rparr> \<in> reach" text\<open>\bigskip There is no check-out event because it is implicit in the next check-in for that room: this covers the cases where a guest leaves without checking out (in which case the room should not be blocked forever) or where the hotel decides to rent out a room prematurely, probably by accident. Neither do guests have to return their cards at any point because they may loose cards or may pretended to have lost them. We will now explain the events. \begin{description} \item[\<open>init\<close>] Initialization requires that every room has a different key, i.e.\ that @{const currk} is injective. 
Nobody owns a room, the keys of all rooms are recorded as issued, nobody has a card, and all rooms are empty. \item[@{thm[source] enter_room}] A guest may enter if either of the two keys on his card equal the room key. Then \<open>g\<close> is added to the occupants of \<open>r\<close> and the room key is set to the second key on the card. Normally this has no effect because the second key is already the room key. But when entering for the first time, the first key on the card equals the room key and then the lock is actually recoded. \item[\<open>exit_room\<close>] removes an occupant from the occupants of a room. \item[\<open>check_in\<close>] for room \<open>r\<close> and guest \<open>g\<close> issues the card @{term"(currk s r, k)"} to \<open>g\<close>, where \<open>k\<close> is new, makes \<open>g\<close> the owner of the room, and sets @{term"currk s r"} to the new key \<open>k\<close>. \end{description} The reader can easily check that our specification allows the intended distributed implementation: entering only reads and writes the key in that lock, and check-in only reads and writes the information at reception. In contrast to Jackson we require that initially distinct rooms have distinct keys. This protects the hotel from its guests: otherwise a guest may be able to enter rooms he does not own, potentially stealing objects from those rooms. Of course he can also steal objects from his own room, but in that case it is easier to hold him responsible. In general, the hotel may just want to minimize the opportunity for theft. The main difference to Jackson's model is that his can talk about transitions between states rather than merely about reachable states. This means that he can specify that unauthorized entry into a room should not occur. Because our specification does not formalize the transition relation itself, we need to include the \<open>isin\<close> component in order to express the same requirement. In the end, we would like to establish that the system is \emph{safe}: only the owner of a room can be in a room: \begin{center} @{prop"s \<in> reach \<Longrightarrow> g \<in> isin s r \<Longrightarrow> owns s r = Some g"} \end{center} Unfortunately, this is just not true. It does not take a PhD in computer science to come up with the following scenario: because guests can retain their cards, there is nothing to stop a guest from reentering his old room after he has checked out (in our model: after the next guest has checked in), but before the next guest has entered his room. Hence the best we can do is to prove a conditional safety property: under certain conditions, the above safety property holds. The question is: which conditions? It is clear that the room must be empty when its owner enters it, or all bets are off. But is that sufficient? Unfortunately not. Jackson's Alloy tool took 2 seconds~\cite[p.~303]{Jackson06} to find the following ``guest-in-the-middle'' attack: \begin{enumerate} \item Guest 1 checks in and obtains a card $(k_1,k_2)$ for room 1 (whose key in the lock is $k_1$). Guest 1 does not enter room 1. \item Guest 2 checks in, obtains a card $(k_2,k_3)$ for room 1, but does not enter room 1 either. \item Guest 1 checks in again, obtains a card $(k_3,k_4)$, goes to room 1, opens it with his old card $(k_1,k_2)$, finds the room empty, and feels safe \ldots \end{enumerate} After Guest~1 has left his room, Guest~2 enters and makes off with the luggage. 
Jackson now assumes that guests return their cards upon check-out, which can be modelled as follows: upon check-in, the new card is not added to the guest's set of cards but it replaces his previous set of cards, i.e.\ guests return old cards the next time they check in. (A sketch of the correspondingly modified \<open>check_in\<close> rule is given after the end of the theory below.) Under this assumption, Alloy finds no more counterexamples to safety --- at least not up to 6 cards and keys and 3 guests and rooms. This is not a proof but a strong indication that the given assumptions suffice for safety. We prove that this is indeed the case. It should be noted that the system also suffers from a liveness problem: if a guest never enters the room he checked in to, that room is forever blocked. In practice this is dealt with by a master key. We ignore liveness. \subsection{Formalizing safety} \label{sec:formalizing-safety} It should be clear that one cannot force guests to always return their cards (or, equivalently, never to use an old card). We can only prove that if they do, their room is safe. However, we do not follow Jackson's approach of globally assuming everybody returns their old cards upon check-in. Instead we would like to take a local approach where it is up to each guest whether he follows this safety policy. We allow guests to keep their cards but make safety dependent on how they use them. This generality requires a finer-grained model: we need to record if a guest has entered his room in a safe manner, i.e.\ if it was empty and if he used the latest key for the room, the one stored at reception. The auxiliary variable @{const safe} records for each room if this was the case at some point between his last check-in and now. The main theorem will be that if a room is safe in this manner, then only the owner can be in the room. Now we explain how @{const safe} is modified with each event: \begin{description} \item[\<open>init\<close>] sets @{const safe} to @{const True} for every room. \item[\<open>check_in\<close>] for room \<open>r\<close> resets @{prop"safe s r"} because it is not safe for the new owner yet. \item[@{thm[source] enter_room}] for room \<open>r\<close> sets @{prop"safe s r"} if the owner entered an empty room using the latest card issued for that room by reception, or if the room was already safe. \item[\<open>exit_room\<close>] does not modify @{const safe}. \end{description} The reader should convince himself or herself that @{const safe} corresponds to the informal safety policy set out above. Note that a guest may find his room non-empty the first time he enters, and @{const safe} will not be set, but he may come back later, find the room empty, and then @{const safe} will be set. Furthermore, it is important that @{thm[source] enter_room} cannot reset @{const safe} due to the disjunct \<open>\<or> safe s r\<close>. Hence \<open>check_in\<close> is the only event that can reset @{const safe}. That is, a room stays safe until the next \<open>check_in\<close>. Additionally, @{const safe} is initially @{const True}, which is fine because initially injectivity of \<open>initk\<close> prohibits illegal entries by non-owners. Note that because none of the other state components depend on @{const safe}, it is truly auxiliary: it can be deleted from the system and the same set of reachable states is obtained, modulo the absence of @{const safe}. We have formalized a very general safety policy of always using the latest card. 
A special case of this policy is the one called \emph{NoIntervening} by Jackson~\cite[p.~200]{Jackson06}: every \<open>check_in\<close> must immediately be followed by the corresponding @{thm[source] enter_room}. \<close> (*<*) lemma currk_issued[simp]: "s : reach \<Longrightarrow> currk s r : issued s" by (induct set: reach) auto lemma key1_issued[simp]: "s : reach \<Longrightarrow> (k,k') : cards s g \<Longrightarrow> k : issued s" by (induct set: reach) auto lemma key2_issued[simp]: "s : reach \<Longrightarrow> (k,k') : cards s g \<Longrightarrow> k' : issued s" by (induct set: reach) auto lemma roomk_issued[simp]: "s : reach \<Longrightarrow> roomk s k : issued s" by (induct set: reach) auto lemma currk_inj[simp]: "s : reach \<Longrightarrow> \<forall>r r'. (currk s r = currk s r') = (r = r')" by (induct set: reach) (auto simp:inj_on_def) lemma key1_not_currk[simp]: "s : reach \<Longrightarrow> (currk s r,k') \<notin> cards s g" by (induct set: reach) auto lemma guest_key2_disj[simp]: "\<lbrakk> s : reach; (k\<^sub>1,k) \<in> cards s g\<^sub>1; (k\<^sub>2,k) \<in> cards s g\<^sub>2 \<rbrakk> \<Longrightarrow> g\<^sub>1=g\<^sub>2" by (induct set: reach) auto lemma safe_roomk_currk[simp]: "s : reach \<Longrightarrow> safe s r \<Longrightarrow> roomk s r = currk s r" by (induct set: reach) auto lemma safe_only_owner_enter_normal_aux[simp]: "\<lbrakk> s : reach; safe s r; (k',roomk s r) \<in> cards s g \<rbrakk> \<Longrightarrow> owns s r = Some g" by (induct set: reach) (auto) lemma safe_only_owner_enter_normal: assumes "s : reach" shows "\<lbrakk> safe s r; (k',roomk s r) \<in> cards s g \<rbrakk> \<Longrightarrow> owns s r = Some g" using assms proof induct case (enter_room s k k1 g1 r1) let ?s' = "s\<lparr>isin := (isin s)(r1 := isin s r1 \<union> {g1}), roomk := (roomk s)(r1 := k1), safe := (safe s) (r1 := owns s r1 = Some g1 \<and> isin s r1 = {} \<and> k1 = currk s r1 \<or> safe s r1)\<rparr>" note s = \<open>s \<in> reach\<close> and IH = \<open>\<lbrakk> safe s r; (k', roomk s r) \<in> cards s g \<rbrakk> \<Longrightarrow> owns s r = Some g\<close> and card_g1 = \<open>(k,k1) \<in> cards s g1\<close> and safe = \<open>safe ?s' r\<close> and card_g = \<open>(k',roomk ?s' r) \<in> cards ?s' g\<close> have "roomk s r1 = k \<or> roomk s r1 = k1" using \<open>roomk s r1 \<in> {k,k1}\<close> by simp thus ?case proof assume [symmetric,simp]: "roomk s r1 = k" show ?thesis proof (cases "r1 = r") assume "r1 \<noteq> r" with IH safe card_g show ?thesis by simp next assume [simp]: "r1 = r" hence safe': "owns s r = Some g1 \<or> safe s r" using safe by auto thus ?thesis proof assume "safe s r" with s card_g1 have False by simp thus ?thesis .. 
next assume [simp]: "owns s r = Some g1" thus "owns ?s' r = Some g" using s card_g card_g1 by simp qed qed next assume "roomk s r1 = k1" with enter_room show ?case by auto qed qed auto theorem "s : reach \<Longrightarrow> safe s r \<Longrightarrow> g : isin s r \<Longrightarrow> owns s r = Some g" by (induct set: reach) auto theorem safe: assumes "s : reach" shows "safe s r \<Longrightarrow> g : isin s r \<Longrightarrow> owns s r = Some g" using assms proof induct case (enter_room s k1 k2 g1 r1) let ?s' = "s\<lparr>isin := (isin s)(r1 := isin s r1 \<union> {g1}), roomk := (roomk s)(r1 := k2), safe := (safe s) (r1 := owns s r1 = Some g1 \<and> isin s r1 = {} \<and> k2 = currk s r1 \<or> safe s r1)\<rparr>" note s = \<open>s \<in> reach\<close> and IH = \<open>\<lbrakk> safe s r; g \<in> isin s r \<rbrakk> \<Longrightarrow> owns s r = Some g\<close> and card_g1 = \<open>(k1,k2) \<in> cards s g1\<close> and safe = \<open>safe ?s' r\<close> and isin = \<open>g \<in> isin ?s' r\<close> show ?case proof (cases "r1 = r") assume "r1 \<noteq> r" with IH isin safe show ?thesis by simp next assume [simp]: "r1 = r" have "g \<in> isin s r \<or> g = g1" using isin by auto thus ?thesis proof assume g: "g \<in> isin s r" then have "safe s r" using safe by auto with g show ?thesis using IH by simp next assume [simp]: "g = g1" have "k2 = roomk s r1 \<or> k1 = roomk s r1" using \<open>roomk s r1 \<in> {k1,k2}\<close> by auto thus ?thesis proof assume "k2 = roomk s r1" with card_g1 s safe show ?thesis by auto next assume [simp]: "k1 = roomk s r1" have "owns s r = Some g1 \<or> safe s r" using safe by auto thus ?thesis proof assume "owns s r = Some g1" thus ?thesis by simp next assume "safe s r" hence False using s card_g1 by auto thus ?thesis .. qed qed qed qed qed auto (*>*) text\<open> \subsection{Verifying safety} \label{sec:verisafe} All of our lemmas are invariants of @{const reach}. The complete list, culminating in the main theorem, is this: \begin{lemma}\label{state-lemmas} \begin{enumerate} \item @{thm currk_issued} \item @{thm key1_issued} \item @{thm key2_issued} \item @{thm roomk_issued} \item \label{currk_inj} @{thm currk_inj} \item \label{key1_not_currk} @{thm key1_not_currk} \item @{thm guest_key2_disj} \item \label{safe_roomk_currk} @{thm[display] safe_roomk_currk} \item \label{safe_only_owner_enter_normal} @{thm safe_only_owner_enter_normal} \end{enumerate} \end{lemma} \begin{theorem}\label{safe-state} @{thm[mode=IfThen] safe} \end{theorem} The lemmas and the theorem are proved in this order, each one is marked as a simplification rule, and each proof is a one-liner: induction on @{prop"s \<in> reach"} followed by \<open>auto\<close>. Although, or maybe even because, these proofs work so smoothly, one may like to understand why. Hence we examine the proof of Theorem~\ref{safe-state} in more detail. The only interesting case is @{thm[source] enter_room}. We assume that guest \<open>g\<^sub>1\<close> enters room \<open>r\<^sub>1\<close> with card @{term"(k\<^sub>1,k\<^sub>2)"} and call the new state \<open>t\<close>. We assume @{prop"safe t r"} and @{prop"g \<in> isin t r"} and prove @{prop"owns t r = \<lfloor>g\<rfloor>"} by case distinction. If @{prop"r\<^sub>1 \<noteq> r"}, the claim follows directly from the induction hypothesis using \mbox{@{prop"safe s r"}} and @{prop"g \<in> isin t r"} because @{prop"owns t r = owns s r"} and @{prop"safe t r = safe s r"}. If @{prop"r\<^sub>1 = r"} then @{prop"g \<in> isin t r"} is equivalent to @{prop"g \<in> isin s r \<or> g = g\<^sub>1"}. 
If @{prop"g \<in> isin s r"} then \mbox{@{prop"safe s r"}} follows from @{prop"safe t r"} by definition of @{thm[source]enter_room} because @{prop"g \<in> isin s r"} implies @{prop"isin s r \<noteq> {}"}. Hence the induction hypothesis implies the claim. If @{prop"g = g\<^sub>1"} we make another case distinction. If @{prop"k\<^sub>2 = roomk s r"}, the claim follows immediately from Lemma~\ref{state-lemmas}.\ref{safe_only_owner_enter_normal} above: only the owner of a room can possess a card where the second key is the room key. If @{prop"k\<^sub>1 = roomk s r"} then, by definition of @{thm[source]enter_room}, @{prop"safe t r"} implies @{prop"owns s r = \<lfloor>g\<rfloor> \<or> safe s r"}. In the first case the claim is immediate. If @{prop"safe s r"} then @{prop"roomk s r = currk s r"} (by Lemma~\ref{state-lemmas}.\ref{safe_roomk_currk}) and thus @{prop"(currk s r, k\<^sub>2) \<in> cards s g"} by assumption @{prop"(k\<^sub>1,k\<^sub>2) \<in> cards s g\<^sub>1"}, thus contradicting Lemma~\ref{state-lemmas}.\ref{key1_not_currk}. This detailed proof shows that a number of case distinctions are required. Luckily, they all suggest themselves to Isabelle via the definition of function update (\<open>:=\<close>) or via disjunctions that arise automatically. \<close> (*<*) end (*>*)
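The card-return variant discussed in the text above (Jackson's assumption that a guest's old cards are returned at his next check-in) is not part of the theory. The following is a minimal sketch of how its check-in rule would differ; the rule name \<open>check_in'\<close> and the relation \<open>reach'\<close> are hypothetical names introduced here for illustration, not part of the original development:

(* Sketch only: Jackson's card-return assumption. The new card replaces
   the guest's previous set of cards instead of being added to it;
   all other fields are updated exactly as in check_in. *)
check_in': "\<lbrakk> s \<in> reach'; k \<notin> issued s \<rbrakk> \<Longrightarrow>
  s\<lparr> currk := (currk s)(r := k),
     issued := issued s \<union> {k},
     cards := (cards s)(g := {(currk s r, k)}),
     owns := (owns s)(r := Some g),
     safe := (safe s)(r := False) \<rparr> \<in> reach'"

Only the \<open>cards\<close> update differs from \<open>check_in\<close> of @{const reach}; whether the safety proofs above go through unchanged against \<open>reach'\<close> is not claimed here.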
[STATEMENT] lemma parCasesOutputFrame[consumes 11, case_names cPar1 cPar2]: fixes \<Psi> :: 'b and P :: "('a, 'b, 'c) psi" and Q :: "('a, 'b, 'c) psi" and M :: 'a and xvec :: "name list" and N :: 'a and T :: "('a, 'b, 'c) psi" and C :: "'d::fs_name" assumes Trans: "\<Psi> \<rhd> P \<parallel> Q \<longmapsto>M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> T" and "xvec \<sharp>* \<Psi>" and "xvec \<sharp>* P" and "xvec \<sharp>* Q" and "xvec \<sharp>* M" and "extractFrame(P \<parallel> Q) = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle>" and "distinct A\<^sub>P\<^sub>Q" and "A\<^sub>P\<^sub>Q \<sharp>* \<Psi>" and "A\<^sub>P\<^sub>Q \<sharp>* P" and "A\<^sub>P\<^sub>Q \<sharp>* Q" and "A\<^sub>P\<^sub>Q \<sharp>* M" and rPar1: "\<And>P' A\<^sub>P \<Psi>\<^sub>P A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto>M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>P; distinct A\<^sub>Q; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>P \<sharp>* \<Psi>\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>\<^sub>P; A\<^sub>P \<sharp>* A\<^sub>Q; A\<^sub>P\<^sub>Q = A\<^sub>P@A\<^sub>Q; \<Psi>\<^sub>P\<^sub>Q = \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q)" and rPar2: "\<And>Q' A\<^sub>P \<Psi>\<^sub>P A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto>M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>P; distinct A\<^sub>Q; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>P \<sharp>* \<Psi>\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>\<^sub>P; A\<^sub>P \<sharp>* A\<^sub>Q; A\<^sub>P\<^sub>Q = A\<^sub>P@A\<^sub>Q; \<Psi>\<^sub>P\<^sub>Q = \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q')" shows "Prop T" [PROOF STATE] proof (prove) goal (1 subgoal): 1. Prop T [PROOF STEP] using Trans \<open>xvec \<sharp>* \<Psi>\<close> \<open>xvec \<sharp>* P\<close> \<open>xvec \<sharp>* Q\<close> \<open>xvec \<sharp>* M\<close> [PROOF STATE] proof (prove) using this: \<Psi> \<rhd> P \<parallel> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> T xvec \<sharp>* \<Psi> xvec \<sharp>* P xvec \<sharp>* Q xvec \<sharp>* M goal (1 subgoal): 1. Prop T [PROOF STEP] proof(induct rule: parOutputCases[of _ _ _ _ _ _ _ "(A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q)"]) [PROOF STATE] proof (state) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. 
\<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] case(cPar1 P' A\<^sub>Q \<Psi>\<^sub>Q) [PROOF STATE] proof (state) this: \<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> distinct A\<^sub>Q A\<^sub>Q \<sharp>* \<Psi> A\<^sub>Q \<sharp>* P A\<^sub>Q \<sharp>* Q A\<^sub>Q \<sharp>* M A\<^sub>Q \<sharp>* xvec A\<^sub>Q \<sharp>* N A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q) A\<^sub>Q \<sharp>* xvec distinct xvec goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q)\<close> [PROOF STATE] proof (chain) picking this: A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q) [PROOF STEP] have "A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q" and "A\<^sub>Q \<sharp>* \<Psi>\<^sub>P\<^sub>Q" [PROOF STATE] proof (prove) using this: A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q) goal (1 subgoal): 1. A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q &&& A\<^sub>Q \<sharp>* \<Psi>\<^sub>P\<^sub>Q [PROOF STEP] by simp+ [PROOF STATE] proof (state) this: A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q A\<^sub>Q \<sharp>* \<Psi>\<^sub>P\<^sub>Q goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] obtain A\<^sub>P \<Psi>\<^sub>P where FrP: "extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>" and "distinct A\<^sub>P" "A\<^sub>P \<sharp>* (P, Q, \<Psi>, M, A\<^sub>Q, A\<^sub>P\<^sub>Q, \<Psi>\<^sub>Q)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<And>A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* (P, Q, \<Psi>, M, A\<^sub>Q, A\<^sub>P\<^sub>Q, \<Psi>\<^sub>Q)\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by(rule freshFrame) [PROOF STATE] proof (state) this: extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> distinct A\<^sub>P A\<^sub>P \<sharp>* (P, Q, \<Psi>, M, A\<^sub>Q, A\<^sub>P\<^sub>Q, \<Psi>\<^sub>Q) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] hence "A\<^sub>P \<sharp>* P" and "A\<^sub>P \<sharp>* Q" and "A\<^sub>P \<sharp>* \<Psi>" and "A\<^sub>P \<sharp>* M" and "A\<^sub>P \<sharp>* A\<^sub>Q" and "A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q" and "A\<^sub>P \<sharp>* \<Psi>\<^sub>Q" [PROOF STATE] proof (prove) using this: extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> distinct A\<^sub>P A\<^sub>P \<sharp>* (P, Q, \<Psi>, M, A\<^sub>Q, A\<^sub>P\<^sub>Q, \<Psi>\<^sub>Q) goal (1 subgoal): 1. (A\<^sub>P \<sharp>* P &&& A\<^sub>P \<sharp>* Q &&& A\<^sub>P \<sharp>* \<Psi>) &&& (A\<^sub>P \<sharp>* M &&& A\<^sub>P \<sharp>* A\<^sub>Q) &&& A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q &&& A\<^sub>P \<sharp>* \<Psi>\<^sub>Q [PROOF STEP] by simp+ [PROOF STATE] proof (state) this: A\<^sub>P \<sharp>* P A\<^sub>P \<sharp>* Q A\<^sub>P \<sharp>* \<Psi> A\<^sub>P \<sharp>* M A\<^sub>P \<sharp>* A\<^sub>Q A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q A\<^sub>P \<sharp>* \<Psi>\<^sub>Q goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] have FrQ: "extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>" [PROOF STATE] proof (prove) goal (1 subgoal): 1. extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> [PROOF STEP] by fact [PROOF STATE] proof (state) this: extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>Q \<sharp>* P\<close> \<open>A\<^sub>P \<sharp>* A\<^sub>Q\<close> FrP [PROOF STATE] proof (chain) picking this: A\<^sub>Q \<sharp>* P A\<^sub>P \<sharp>* A\<^sub>Q extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> [PROOF STEP] have "A\<^sub>Q \<sharp>* \<Psi>\<^sub>P" [PROOF STATE] proof (prove) using this: A\<^sub>Q \<sharp>* P A\<^sub>P \<sharp>* A\<^sub>Q extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> goal (1 subgoal): 1. A\<^sub>Q \<sharp>* \<Psi>\<^sub>P [PROOF STEP] by(force dest: extractFrameFreshChain) [PROOF STATE] proof (state) this: A\<^sub>Q \<sharp>* \<Psi>\<^sub>P goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>extractFrame(P \<parallel> Q) = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle>\<close> FrP FrQ \<open>A\<^sub>P \<sharp>* A\<^sub>Q\<close> \<open>A\<^sub>P \<sharp>* \<Psi>\<^sub>Q\<close> \<open>A\<^sub>Q \<sharp>* \<Psi>\<^sub>P\<close> [PROOF STATE] proof (chain) picking this: extractFrame (P \<parallel> Q) = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> A\<^sub>P \<sharp>* A\<^sub>Q A\<^sub>P \<sharp>* \<Psi>\<^sub>Q A\<^sub>Q \<sharp>* \<Psi>\<^sub>P [PROOF STEP] have "\<langle>(A\<^sub>P@A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle>" [PROOF STATE] proof (prove) using this: extractFrame (P \<parallel> Q) = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> A\<^sub>P \<sharp>* A\<^sub>Q A\<^sub>P \<sharp>* \<Psi>\<^sub>Q A\<^sub>Q \<sharp>* \<Psi>\<^sub>P goal (1 subgoal): 1. \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> [PROOF STEP] by simp [PROOF STATE] proof (state) this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>distinct A\<^sub>P\<close> \<open>distinct A\<^sub>Q\<close> \<open>A\<^sub>P \<sharp>* A\<^sub>Q\<close> [PROOF STATE] proof (chain) picking this: distinct A\<^sub>P distinct A\<^sub>Q A\<^sub>P \<sharp>* A\<^sub>Q [PROOF STEP] have "distinct(A\<^sub>P@A\<^sub>Q)" [PROOF STATE] proof (prove) using this: distinct A\<^sub>P distinct A\<^sub>Q A\<^sub>P \<sharp>* A\<^sub>Q goal (1 subgoal): 1. distinct (A\<^sub>P @ A\<^sub>Q) [PROOF STEP] by(auto simp add: fresh_star_def fresh_def name_list_supp) [PROOF STATE] proof (state) this: distinct (A\<^sub>P @ A\<^sub>Q) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> distinct (A\<^sub>P @ A\<^sub>Q) [PROOF STEP] obtain p where S: "set p \<subseteq> set(A\<^sub>P@A\<^sub>Q) \<times> set((p \<bullet> A\<^sub>P)@(p \<bullet> A\<^sub>Q))" and "distinctPerm p" and \<Psi>eq: "\<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q)" and Aeq: "A\<^sub>P\<^sub>Q = (p \<bullet> A\<^sub>P)@(p \<bullet> A\<^sub>Q)" [PROOF STATE] proof (prove) using this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> distinct (A\<^sub>P @ A\<^sub>Q) goal (1 subgoal): 1. (\<And>p. \<lbrakk>set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q); distinctPerm p; \<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q); A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] using \<open>A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q\<close> \<open>A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q\<close> \<open>distinct A\<^sub>P\<^sub>Q\<close> [PROOF STATE] proof (prove) using this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> distinct (A\<^sub>P @ A\<^sub>Q) A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q distinct A\<^sub>P\<^sub>Q goal (1 subgoal): 1. (\<And>p. \<lbrakk>set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q); distinctPerm p; \<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q); A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by(rule_tac frameChainEq') (assumption | simp add: eqvts)+ [PROOF STATE] proof (state) this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) distinctPerm p \<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q) A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. 
\<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto>M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'\<close> S \<open>A\<^sub>P\<^sub>Q \<sharp>* P\<close> \<open>A\<^sub>P \<sharp>* P\<close> \<open>A\<^sub>Q \<sharp>* P\<close> \<open>A\<^sub>P\<^sub>Q \<sharp>* M\<close> \<open>A\<^sub>P \<sharp>* M\<close> \<open>A\<^sub>Q \<sharp>* M\<close> Aeq [PROOF STATE] proof (chain) picking this: \<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* P A\<^sub>P \<sharp>* P A\<^sub>Q \<sharp>* P A\<^sub>P\<^sub>Q \<sharp>* M A\<^sub>P \<sharp>* M A\<^sub>Q \<sharp>* M A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q [PROOF STEP] have "(p \<bullet> (\<Psi> \<otimes> \<Psi>\<^sub>Q)) \<rhd> P \<longmapsto>M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'" [PROOF STATE] proof (prove) using this: \<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* P A\<^sub>P \<sharp>* P A\<^sub>Q \<sharp>* P A\<^sub>P\<^sub>Q \<sharp>* M A\<^sub>P \<sharp>* M A\<^sub>Q \<sharp>* M A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q goal (1 subgoal): 1. p \<bullet> \<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' [PROOF STEP] by(rule_tac outputPermFrame) (assumption | simp)+ [PROOF STATE] proof (state) this: p \<bullet> \<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] with S \<open>A\<^sub>P\<^sub>Q \<sharp>* \<Psi>\<close> \<open>A\<^sub>P \<sharp>* \<Psi>\<close> \<open>A\<^sub>Q \<sharp>* \<Psi>\<close> Aeq [PROOF STATE] proof (chain) picking this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* \<Psi> A\<^sub>P \<sharp>* \<Psi> A\<^sub>Q \<sharp>* \<Psi> A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> \<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' [PROOF STEP] have "\<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>Q) \<rhd> P \<longmapsto>M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'" [PROOF STATE] proof (prove) using this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* \<Psi> A\<^sub>P \<sharp>* \<Psi> A\<^sub>Q \<sharp>* \<Psi> A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> \<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' goal (1 subgoal): 1. \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>Q) \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' [PROOF STEP] by(simp add: eqvts) [PROOF STATE] proof (state) this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>Q) \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>Q) \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from FrP [PROOF STATE] proof (chain) picking this: extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> [PROOF STEP] have "(p \<bullet> extractFrame P) = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>" [PROOF STATE] proof (prove) using this: extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> goal (1 subgoal): 1. p \<bullet> extractFrame P = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> [PROOF STEP] by simp [PROOF STATE] proof (state) this: p \<bullet> extractFrame P = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] with S \<open>A\<^sub>P\<^sub>Q \<sharp>* P\<close> \<open>A\<^sub>P \<sharp>* P\<close> \<open>A\<^sub>Q \<sharp>* P\<close> Aeq [PROOF STATE] proof (chain) picking this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* P A\<^sub>P \<sharp>* P A\<^sub>Q \<sharp>* P A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> extractFrame P = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> [PROOF STEP] have "extractFrame P = \<langle>(p \<bullet> A\<^sub>P), p \<bullet> \<Psi>\<^sub>P\<rangle>" [PROOF STATE] proof (prove) using this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* P A\<^sub>P \<sharp>* P A\<^sub>Q \<sharp>* P A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> extractFrame P = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> goal (1 subgoal): 1. extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> [PROOF STEP] by(simp add: eqvts) [PROOF STATE] proof (state) this: extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from FrQ [PROOF STATE] proof (chain) picking this: extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> [PROOF STEP] have "(p \<bullet> extractFrame Q) = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>" [PROOF STATE] proof (prove) using this: extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> goal (1 subgoal): 1. p \<bullet> extractFrame Q = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> [PROOF STEP] by simp [PROOF STATE] proof (state) this: p \<bullet> extractFrame Q = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] with S \<open>A\<^sub>P\<^sub>Q \<sharp>* Q\<close> \<open>A\<^sub>P \<sharp>* Q\<close> \<open>A\<^sub>Q \<sharp>* Q\<close> Aeq [PROOF STATE] proof (chain) picking this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* Q A\<^sub>P \<sharp>* Q A\<^sub>Q \<sharp>* Q A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> extractFrame Q = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> [PROOF STEP] have "extractFrame Q = \<langle>(p \<bullet> A\<^sub>Q), p \<bullet> \<Psi>\<^sub>Q\<rangle>" [PROOF STATE] proof (prove) using this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* Q A\<^sub>P \<sharp>* Q A\<^sub>Q \<sharp>* Q A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> extractFrame Q = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> goal (1 subgoal): 1. extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> [PROOF STEP] by(simp add: eqvts) [PROOF STATE] proof (state) this: extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>distinct A\<^sub>P\<close> \<open>distinct A\<^sub>Q\<close> [PROOF STATE] proof (chain) picking this: distinct A\<^sub>P distinct A\<^sub>Q [PROOF STEP] have "distinct(p \<bullet> A\<^sub>P)" and "distinct(p \<bullet> A\<^sub>Q)" [PROOF STATE] proof (prove) using this: distinct A\<^sub>P distinct A\<^sub>Q goal (1 subgoal): 1. distinct (p \<bullet> A\<^sub>P) &&& distinct (p \<bullet> A\<^sub>Q) [PROOF STEP] by simp+ [PROOF STATE] proof (state) this: distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>P \<sharp>* A\<^sub>Q\<close> [PROOF STATE] proof (chain) picking this: A\<^sub>P \<sharp>* A\<^sub>Q [PROOF STEP] have "(p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q)" [PROOF STATE] proof (prove) using this: A\<^sub>P \<sharp>* A\<^sub>Q goal (1 subgoal): 1. (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) [PROOF STEP] by(simp add: pt_fresh_star_bij[OF pt_name_inst, OF at_name_inst]) [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>P \<sharp>* \<Psi>\<^sub>Q\<close> [PROOF STATE] proof (chain) picking this: A\<^sub>P \<sharp>* \<Psi>\<^sub>Q [PROOF STEP] have "(p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q)" [PROOF STATE] proof (prove) using this: A\<^sub>P \<sharp>* \<Psi>\<^sub>Q goal (1 subgoal): 1. (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) [PROOF STEP] by(simp add: pt_fresh_star_bij[OF pt_name_inst, OF at_name_inst]) [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>Q \<sharp>* \<Psi>\<^sub>P\<close> [PROOF STATE] proof (chain) picking this: A\<^sub>Q \<sharp>* \<Psi>\<^sub>P [PROOF STEP] have "(p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P)" [PROOF STATE] proof (prove) using this: A\<^sub>Q \<sharp>* \<Psi>\<^sub>P goal (1 subgoal): 1. (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) [PROOF STEP] by(simp add: pt_fresh_star_bij[OF pt_name_inst, OF at_name_inst]) [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) goal (2 subgoals): 1. \<And>P' A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>Q \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P'; extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* \<Psi>; A\<^sub>Q \<sharp>* P; A\<^sub>Q \<sharp>* Q; A\<^sub>Q \<sharp>* M; A\<^sub>Q \<sharp>* xvec; A\<^sub>Q \<sharp>* N; A\<^sub>Q \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>Q \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P' \<parallel> Q) 2. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>Q) \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) [PROOF STEP] show ?case [PROOF STATE] proof (prove) using this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>Q) \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) goal (1 subgoal): 1. 
Prop (P' \<parallel> Q) [PROOF STEP] using \<open>A\<^sub>P\<^sub>Q \<sharp>* \<Psi>\<close> \<open>A\<^sub>P\<^sub>Q \<sharp>* P\<close> \<open>A\<^sub>P\<^sub>Q \<sharp>* Q\<close> \<open>A\<^sub>P\<^sub>Q \<sharp>* M\<close> Aeq \<Psi>eq [PROOF STATE] proof (prove) using this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>Q) \<rhd> P \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> P' extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) A\<^sub>P\<^sub>Q \<sharp>* \<Psi> A\<^sub>P\<^sub>Q \<sharp>* P A\<^sub>P\<^sub>Q \<sharp>* Q A\<^sub>P\<^sub>Q \<sharp>* M A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q \<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q) goal (1 subgoal): 1. Prop (P' \<parallel> Q) [PROOF STEP] by(rule_tac rPar1) (assumption | simp)+ [PROOF STATE] proof (state) this: Prop (P' \<parallel> Q) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] next [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] case(cPar2 Q' A\<^sub>P \<Psi>\<^sub>P) [PROOF STATE] proof (state) this: \<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> distinct A\<^sub>P A\<^sub>P \<sharp>* \<Psi> A\<^sub>P \<sharp>* P A\<^sub>P \<sharp>* Q A\<^sub>P \<sharp>* M A\<^sub>P \<sharp>* xvec A\<^sub>P \<sharp>* N A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q) A\<^sub>P \<sharp>* xvec distinct xvec goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q)\<close> [PROOF STATE] proof (chain) picking this: A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q) [PROOF STEP] have "A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q" and "A\<^sub>P \<sharp>* \<Psi>\<^sub>P\<^sub>Q" [PROOF STATE] proof (prove) using this: A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q) goal (1 subgoal): 1. A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q &&& A\<^sub>P \<sharp>* \<Psi>\<^sub>P\<^sub>Q [PROOF STEP] by simp+ [PROOF STATE] proof (state) this: A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q A\<^sub>P \<sharp>* \<Psi>\<^sub>P\<^sub>Q goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] obtain A\<^sub>Q \<Psi>\<^sub>Q where FrQ: "extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>" and "distinct A\<^sub>Q" "A\<^sub>Q \<sharp>* (P, Q, \<Psi>, M, A\<^sub>P, A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<And>A\<^sub>Q \<Psi>\<^sub>Q. \<lbrakk>extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>; distinct A\<^sub>Q; A\<^sub>Q \<sharp>* (P, Q, \<Psi>, M, A\<^sub>P, A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P)\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by(rule freshFrame) [PROOF STATE] proof (state) this: extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> distinct A\<^sub>Q A\<^sub>Q \<sharp>* (P, Q, \<Psi>, M, A\<^sub>P, A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] hence "A\<^sub>Q \<sharp>* P" and "A\<^sub>Q \<sharp>* Q" and "A\<^sub>Q \<sharp>* \<Psi>" and "A\<^sub>Q \<sharp>* M" and "A\<^sub>Q \<sharp>* A\<^sub>P" and "A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q" and "A\<^sub>Q \<sharp>* \<Psi>\<^sub>P" [PROOF STATE] proof (prove) using this: extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> distinct A\<^sub>Q A\<^sub>Q \<sharp>* (P, Q, \<Psi>, M, A\<^sub>P, A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P) goal (1 subgoal): 1. 
(A\<^sub>Q \<sharp>* P &&& A\<^sub>Q \<sharp>* Q &&& A\<^sub>Q \<sharp>* \<Psi>) &&& (A\<^sub>Q \<sharp>* M &&& A\<^sub>Q \<sharp>* A\<^sub>P) &&& A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q &&& A\<^sub>Q \<sharp>* \<Psi>\<^sub>P [PROOF STEP] by simp+ [PROOF STATE] proof (state) this: A\<^sub>Q \<sharp>* P A\<^sub>Q \<sharp>* Q A\<^sub>Q \<sharp>* \<Psi> A\<^sub>Q \<sharp>* M A\<^sub>Q \<sharp>* A\<^sub>P A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q A\<^sub>Q \<sharp>* \<Psi>\<^sub>P goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] have FrP: "extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>" [PROOF STATE] proof (prove) goal (1 subgoal): 1. extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> [PROOF STEP] by fact [PROOF STATE] proof (state) this: extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>P \<sharp>* Q\<close> \<open>A\<^sub>Q \<sharp>* A\<^sub>P\<close> FrQ [PROOF STATE] proof (chain) picking this: A\<^sub>P \<sharp>* Q A\<^sub>Q \<sharp>* A\<^sub>P extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> [PROOF STEP] have "A\<^sub>P \<sharp>* \<Psi>\<^sub>Q" [PROOF STATE] proof (prove) using this: A\<^sub>P \<sharp>* Q A\<^sub>Q \<sharp>* A\<^sub>P extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> goal (1 subgoal): 1. A\<^sub>P \<sharp>* \<Psi>\<^sub>Q [PROOF STEP] by(force dest: extractFrameFreshChain) [PROOF STATE] proof (state) this: A\<^sub>P \<sharp>* \<Psi>\<^sub>Q goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>extractFrame(P \<parallel> Q) = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle>\<close> FrP FrQ \<open>A\<^sub>Q \<sharp>* A\<^sub>P\<close> \<open>A\<^sub>P \<sharp>* \<Psi>\<^sub>Q\<close> \<open>A\<^sub>Q \<sharp>* \<Psi>\<^sub>P\<close> [PROOF STATE] proof (chain) picking this: extractFrame (P \<parallel> Q) = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> A\<^sub>Q \<sharp>* A\<^sub>P A\<^sub>P \<sharp>* \<Psi>\<^sub>Q A\<^sub>Q \<sharp>* \<Psi>\<^sub>P [PROOF STEP] have "\<langle>(A\<^sub>P@A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle>" [PROOF STATE] proof (prove) using this: extractFrame (P \<parallel> Q) = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> A\<^sub>Q \<sharp>* A\<^sub>P A\<^sub>P \<sharp>* \<Psi>\<^sub>Q A\<^sub>Q \<sharp>* \<Psi>\<^sub>P goal (1 subgoal): 1. \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> [PROOF STEP] by simp [PROOF STATE] proof (state) this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>distinct A\<^sub>P\<close> \<open>distinct A\<^sub>Q\<close> \<open>A\<^sub>Q \<sharp>* A\<^sub>P\<close> [PROOF STATE] proof (chain) picking this: distinct A\<^sub>P distinct A\<^sub>Q A\<^sub>Q \<sharp>* A\<^sub>P [PROOF STEP] have "distinct(A\<^sub>P@A\<^sub>Q)" [PROOF STATE] proof (prove) using this: distinct A\<^sub>P distinct A\<^sub>Q A\<^sub>Q \<sharp>* A\<^sub>P goal (1 subgoal): 1. distinct (A\<^sub>P @ A\<^sub>Q) [PROOF STEP] by(auto simp add: fresh_star_def fresh_def name_list_supp) [PROOF STATE] proof (state) this: distinct (A\<^sub>P @ A\<^sub>Q) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> distinct (A\<^sub>P @ A\<^sub>Q) [PROOF STEP] obtain p where S: "set p \<subseteq> set(A\<^sub>P@A\<^sub>Q) \<times> set((p \<bullet> A\<^sub>P)@(p \<bullet> A\<^sub>Q))" and "distinctPerm p" and \<Psi>eq: "\<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q)" and Aeq: "A\<^sub>P\<^sub>Q = (p \<bullet> A\<^sub>P)@(p \<bullet> A\<^sub>Q)" [PROOF STATE] proof (prove) using this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> distinct (A\<^sub>P @ A\<^sub>Q) goal (1 subgoal): 1. (\<And>p. \<lbrakk>set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q); distinctPerm p; \<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q); A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] using \<open>A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q\<close> \<open>A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q\<close> \<open>distinct A\<^sub>P\<^sub>Q\<close> [PROOF STATE] proof (prove) using this: \<langle>(A\<^sub>P @ A\<^sub>Q), \<Psi>\<^sub>P \<otimes> \<Psi>\<^sub>Q\<rangle> = \<langle>A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q\<rangle> distinct (A\<^sub>P @ A\<^sub>Q) A\<^sub>P \<sharp>* A\<^sub>P\<^sub>Q A\<^sub>Q \<sharp>* A\<^sub>P\<^sub>Q distinct A\<^sub>P\<^sub>Q goal (1 subgoal): 1. (\<And>p. 
\<lbrakk>set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q); distinctPerm p; \<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q); A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by(rule_tac frameChainEq') (assumption | simp add: eqvts)+ [PROOF STATE] proof (state) this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) distinctPerm p \<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q) A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto>M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'\<close> S \<open>A\<^sub>P\<^sub>Q \<sharp>* Q\<close> \<open>A\<^sub>P \<sharp>* Q\<close> \<open>A\<^sub>Q \<sharp>* Q\<close> \<open>A\<^sub>P\<^sub>Q \<sharp>* M\<close> \<open>A\<^sub>P \<sharp>* M\<close> \<open>A\<^sub>Q \<sharp>* M\<close> Aeq [PROOF STATE] proof (chain) picking this: \<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* Q A\<^sub>P \<sharp>* Q A\<^sub>Q \<sharp>* Q A\<^sub>P\<^sub>Q \<sharp>* M A\<^sub>P \<sharp>* M A\<^sub>Q \<sharp>* M A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q [PROOF STEP] have "(p \<bullet> (\<Psi> \<otimes> \<Psi>\<^sub>P)) \<rhd> Q \<longmapsto>M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'" [PROOF STATE] proof (prove) using this: \<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* Q A\<^sub>P \<sharp>* Q A\<^sub>Q \<sharp>* Q A\<^sub>P\<^sub>Q \<sharp>* M A\<^sub>P \<sharp>* M A\<^sub>Q \<sharp>* M A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q goal (1 subgoal): 1. p \<bullet> \<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' [PROOF STEP] by(rule_tac outputPermFrame) (assumption | simp)+ [PROOF STATE] proof (state) this: p \<bullet> \<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] with S \<open>A\<^sub>P\<^sub>Q \<sharp>* \<Psi>\<close> \<open>A\<^sub>P \<sharp>* \<Psi>\<close> \<open>A\<^sub>Q \<sharp>* \<Psi>\<close> Aeq [PROOF STATE] proof (chain) picking this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* \<Psi> A\<^sub>P \<sharp>* \<Psi> A\<^sub>Q \<sharp>* \<Psi> A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> \<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' [PROOF STEP] have "\<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>P) \<rhd> Q \<longmapsto>M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'" [PROOF STATE] proof (prove) using this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* \<Psi> A\<^sub>P \<sharp>* \<Psi> A\<^sub>Q \<sharp>* \<Psi> A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> \<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' goal (1 subgoal): 1. \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>P) \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' [PROOF STEP] by(simp add: eqvts) [PROOF STATE] proof (state) this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>P) \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>P) \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from FrP [PROOF STATE] proof (chain) picking this: extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> [PROOF STEP] have "(p \<bullet> extractFrame P) = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>" [PROOF STATE] proof (prove) using this: extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> goal (1 subgoal): 1. p \<bullet> extractFrame P = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> [PROOF STEP] by simp [PROOF STATE] proof (state) this: p \<bullet> extractFrame P = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] with S \<open>A\<^sub>P\<^sub>Q \<sharp>* P\<close> \<open>A\<^sub>P \<sharp>* P\<close> \<open>A\<^sub>Q \<sharp>* P\<close> Aeq [PROOF STATE] proof (chain) picking this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* P A\<^sub>P \<sharp>* P A\<^sub>Q \<sharp>* P A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> extractFrame P = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> [PROOF STEP] have "extractFrame P = \<langle>(p \<bullet> A\<^sub>P), p \<bullet> \<Psi>\<^sub>P\<rangle>" [PROOF STATE] proof (prove) using this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* P A\<^sub>P \<sharp>* P A\<^sub>Q \<sharp>* P A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> extractFrame P = p \<bullet> \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle> goal (1 subgoal): 1. extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> [PROOF STEP] by(simp add: eqvts) [PROOF STATE] proof (state) this: extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from FrQ [PROOF STATE] proof (chain) picking this: extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> [PROOF STEP] have "(p \<bullet> extractFrame Q) = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle>" [PROOF STATE] proof (prove) using this: extractFrame Q = \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> goal (1 subgoal): 1. p \<bullet> extractFrame Q = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> [PROOF STEP] by simp [PROOF STATE] proof (state) this: p \<bullet> extractFrame Q = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] with S \<open>A\<^sub>P\<^sub>Q \<sharp>* Q\<close> \<open>A\<^sub>P \<sharp>* Q\<close> \<open>A\<^sub>Q \<sharp>* Q\<close> Aeq [PROOF STATE] proof (chain) picking this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* Q A\<^sub>P \<sharp>* Q A\<^sub>Q \<sharp>* Q A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> extractFrame Q = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> [PROOF STEP] have "extractFrame Q = \<langle>(p \<bullet> A\<^sub>Q), p \<bullet> \<Psi>\<^sub>Q\<rangle>" [PROOF STATE] proof (prove) using this: set p \<subseteq> set (A\<^sub>P @ A\<^sub>Q) \<times> set (p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q) A\<^sub>P\<^sub>Q \<sharp>* Q A\<^sub>P \<sharp>* Q A\<^sub>Q \<sharp>* Q A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q p \<bullet> extractFrame Q = p \<bullet> \<langle>A\<^sub>Q, \<Psi>\<^sub>Q\<rangle> goal (1 subgoal): 1. 
extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> [PROOF STEP] by(simp add: eqvts) [PROOF STATE] proof (state) this: extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>distinct A\<^sub>P\<close> \<open>distinct A\<^sub>Q\<close> [PROOF STATE] proof (chain) picking this: distinct A\<^sub>P distinct A\<^sub>Q [PROOF STEP] have "distinct(p \<bullet> A\<^sub>P)" and "distinct(p \<bullet> A\<^sub>Q)" [PROOF STATE] proof (prove) using this: distinct A\<^sub>P distinct A\<^sub>Q goal (1 subgoal): 1. distinct (p \<bullet> A\<^sub>P) &&& distinct (p \<bullet> A\<^sub>Q) [PROOF STEP] by simp+ [PROOF STATE] proof (state) this: distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. 
\<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>Q \<sharp>* A\<^sub>P\<close> [PROOF STATE] proof (chain) picking this: A\<^sub>Q \<sharp>* A\<^sub>P [PROOF STEP] have "(p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q)" [PROOF STATE] proof (prove) using this: A\<^sub>Q \<sharp>* A\<^sub>P goal (1 subgoal): 1. (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) [PROOF STEP] by(simp add: pt_fresh_star_bij[OF pt_name_inst, OF at_name_inst]) [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>P \<sharp>* \<Psi>\<^sub>Q\<close> [PROOF STATE] proof (chain) picking this: A\<^sub>P \<sharp>* \<Psi>\<^sub>Q [PROOF STEP] have "(p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q)" [PROOF STATE] proof (prove) using this: A\<^sub>P \<sharp>* \<Psi>\<^sub>Q goal (1 subgoal): 1. (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) [PROOF STEP] by(simp add: pt_fresh_star_bij[OF pt_name_inst, OF at_name_inst]) [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] moreover [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) goal (1 subgoal): 1. 
\<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] from \<open>A\<^sub>Q \<sharp>* \<Psi>\<^sub>P\<close> [PROOF STATE] proof (chain) picking this: A\<^sub>Q \<sharp>* \<Psi>\<^sub>P [PROOF STEP] have "(p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P)" [PROOF STATE] proof (prove) using this: A\<^sub>Q \<sharp>* \<Psi>\<^sub>P goal (1 subgoal): 1. (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) [PROOF STEP] by(simp add: pt_fresh_star_bij[OF pt_name_inst, OF at_name_inst]) [PROOF STATE] proof (state) this: (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) goal (1 subgoal): 1. \<And>Q' A\<^sub>P \<Psi>\<^sub>P. \<lbrakk>\<Psi> \<otimes> \<Psi>\<^sub>P \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q'; extractFrame P = \<langle>A\<^sub>P, \<Psi>\<^sub>P\<rangle>; distinct A\<^sub>P; A\<^sub>P \<sharp>* \<Psi>; A\<^sub>P \<sharp>* P; A\<^sub>P \<sharp>* Q; A\<^sub>P \<sharp>* M; A\<^sub>P \<sharp>* xvec; A\<^sub>P \<sharp>* N; A\<^sub>P \<sharp>* (A\<^sub>P\<^sub>Q, \<Psi>\<^sub>P\<^sub>Q); A\<^sub>P \<sharp>* xvec; distinct xvec\<rbrakk> \<Longrightarrow> Prop (P \<parallel> Q') [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>P) \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) [PROOF STEP] show ?case [PROOF STATE] proof (prove) using this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>P) \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) goal (1 subgoal): 1. 
Prop (P \<parallel> Q') [PROOF STEP] using \<open>A\<^sub>P\<^sub>Q \<sharp>* \<Psi>\<close> \<open>A\<^sub>P\<^sub>Q \<sharp>* P\<close> \<open>A\<^sub>P\<^sub>Q \<sharp>* Q\<close> \<open>A\<^sub>P\<^sub>Q \<sharp>* M\<close> Aeq \<Psi>eq [PROOF STATE] proof (prove) using this: \<Psi> \<otimes> (p \<bullet> \<Psi>\<^sub>P) \<rhd> Q \<longmapsto> M\<lparr>\<nu>*xvec\<rparr>\<langle>N\<rangle> \<prec> Q' extractFrame P = \<langle>p \<bullet> A\<^sub>P, p \<bullet> \<Psi>\<^sub>P\<rangle> extractFrame Q = \<langle>p \<bullet> A\<^sub>Q, p \<bullet> \<Psi>\<^sub>Q\<rangle> distinct (p \<bullet> A\<^sub>P) distinct (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> A\<^sub>Q) (p \<bullet> A\<^sub>P) \<sharp>* (p \<bullet> \<Psi>\<^sub>Q) (p \<bullet> A\<^sub>Q) \<sharp>* (p \<bullet> \<Psi>\<^sub>P) A\<^sub>P\<^sub>Q \<sharp>* \<Psi> A\<^sub>P\<^sub>Q \<sharp>* P A\<^sub>P\<^sub>Q \<sharp>* Q A\<^sub>P\<^sub>Q \<sharp>* M A\<^sub>P\<^sub>Q = p \<bullet> A\<^sub>P @ p \<bullet> A\<^sub>Q \<Psi>\<^sub>P\<^sub>Q = (p \<bullet> \<Psi>\<^sub>P) \<otimes> (p \<bullet> \<Psi>\<^sub>Q) goal (1 subgoal): 1. Prop (P \<parallel> Q') [PROOF STEP] by(rule_tac rPar2) (assumption | simp)+ [PROOF STATE] proof (state) this: Prop (P \<parallel> Q') goal: No subgoals! [PROOF STEP] qed
[STATEMENT] lemma current_methd: "\<lbrakk>table_of (methods c) sig = Some new; ws_prog G; class G C = Some c; C \<noteq> Object; methd G (super c) sig = Some old\<rbrakk> \<Longrightarrow> methd G C sig = Some (C,new)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>table_of (methods c) sig = Some new; ws_prog G; class G C = Some c; C \<noteq> Object; methd G (super c) sig = Some old\<rbrakk> \<Longrightarrow> methd G C sig = Some (C, new) [PROOF STEP] by (auto simp add: methd_rec intro: filter_tab_SomeI map_add_find_right table_of_map_SomeI)
Oregon Duck fans – and Pac-12 fans, for that matter – are sick and tired of hearing about how great the Southeastern Conference (SEC) is. Broadcasters constantly speak about how the SEC is ahead of the Pac-12 in recruiting, quality of play and revenue. Well, that no longer appears to be the case. The absence of an SEC team in the 2015 NCAA Football National Championship Game is one reason to believe there’s change coming. However, it may surprise you to know that the Pac-12 made more money than the SEC and the B1G in terms of total revenue during the 2013-14 fiscal year (FY14), according to Steve Berkowitz of USA TODAY Sports. The Pac-12’s revenue nearly doubled from FY13’s $175 million to $334 million in FY14. Berkowitz also reported that Pac-12 Commissioner Larry Scott remained the nation’s most highly-paid conference CEO, earning just over $3.5 million in total compensation for the 2013 calendar year. Here’s a breakdown of the Pac-12’s total revenues for FY14: Bowl games $41.6 million, NCAA championships $26.3 million, television and media rights $252.7 million, advertising $9 million, investments totaling around $4.2 million – for a grand total of $333,992,599. For now, the Pac-12 leads all other conferences in revenue. That may change in a few years, due to the SEC’s launch of its own television network with ESPN. However, it’s safe to say that the Pac-12 isn’t content with playing second fiddle to other conferences anymore.
classdef testInverseSymmetricFourthOrderTensor < testInverseFourthOrderTensor

    methods (Access = protected)

        % Build a random symmetric fourth-order tensor (a 3D stiffness
        % tensor) for the inversion test defined in the parent class
        function createRandomFourthOrderTensor(obj)
            obj.tensor = Stiffness3DTensor;
            obj.tensor.createRandomTensor();
        end

    end

    methods (Static, Access = protected)

        % Symmetrized fourth-order identity:
        % Id(i,j,k,l) = (I(i,k)*I(j,l) + I(i,l)*I(j,k))/2
        function Id = computeIdentityTensor(I,i,j,k,l)
            Id = 0.5*(I(i,k)*I(j,l) + I(i,l)*I(j,k));
        end

    end

end
-- | REPL for console version of Minesweeper module Minesweeper.REPL import Minesweeper.Game import Minesweeper.Board import Minesweeper.Helper import Control.Monad.State import Data.Vect import Effects import Effect.Random import Effect.StdIO import Effect.System implementation Show Difficulty where show Easy = "Easy" show Medium = "Medium" show Hard = "Hard" intro : String intro = """ ___ ________ _ _ _____ _____ _ _ _____ ___________ ___________ | \/ |_ _| \ | || ___/ ___|| | | | ___| ___| ___ \ ___| ___ \ | . . | | | | \| || |__ \ `--. | | | | |__ | |__ | |_/ / |__ | |_/ / | |\/| | | | | . ` || __| `--. \| |/\| | __|| __|| __/| __|| / | | | |_| |_| |\ || |___/\__/ /\ /\ / |___| |___| | | |___| |\ \ \_| |_/\___/\_| \_/\____/\____/ \/ \/\____/\____/\_| \____/\_| \_| Written by Ross Meikleham 2015""" invalid : String -> String invalid s = "Unknown option \"" ++ s ++ "\". " ++ "Enter h or help to display the list of possible options" data GameAction = GQuit | GHelp | GShow | Reveal Pos | GInvalid String parseGameAction : String -> GameAction parseGameAction "q" = GQuit parseGameAction "quit" = GQuit parseGameAction "h" = GHelp parseGameAction "help" = GHelp parseGameAction "d" = GShow parseGameAction "display" = GShow parseGameAction s = case parseReveal (words s) of (Just pos) => Reveal pos (Nothing) => GInvalid (invalid s) where parsePos : String -> String -> Maybe Pos parsePos strRow strCol = do row <- readNat strRow col <- readNat strCol pure $ MkPos col row parseReveal : List String -> Maybe Pos parseReveal (command :: row :: col :: []) = case command of "reveal" => parsePos row col "r" => parsePos row col parseReveal _ = Nothing gHelp : Nat -> Nat -> String gHelp (S r) (S c) = ("r [row] [column] or reveal [row] [column] to reveal the given square\n" ++ "\t where row is between 0 and " ++ show r ++ ", and column is between 0 and " ++ show c ++ ".\n" ++ "d or display to display the board\n" ++ "q or quit to exit\n" ++ "h or help to display help\n") gHelp _ _ = "Error with row/column size\n" playGame' : Board m n -> IO () playGame' {m} {n} board = do putStr "Enter Option> " str <- getLine let option = parseGameAction str case option of (Reveal pos) => let (res, newBoard) = runState (revealPos pos) board in case res of Playing str => putStrLn (showBoard newBoard) >>= \_ => putStrLn str >>= \_ => playGame' newBoard Won => putStrLn (showRevealed newBoard) >>= \_ => putStrLn "You Win!" Lost => putStrLn (showBoard newBoard) >>= \_ => putStrLn "You Hit a Mine!" >>= \_ => putStrLn (showRevealed newBoard) GHelp => putStrLn (gHelp m n) >>= \_ => playGame' board GQuit => putStrLn "Quitting Game..." 
      GShow => putStrLn (showBoard board) >>= \_ => playGame' board
      GInvalid s => putStrLn s >>= \_ => playGame' board

playGame : Difficulty -> IO ()
playGame difficulty = do
  let ((MkPos x y), nMines) = getSetupDetails difficulty
  mines <- run (generateMines x y nMines)
  case mines of
    Nothing => putStrLn "More mines than positions available"
    Just m => do
      let board = createBoard x y m
      case board of
        Nothing => putStrLn "Error creating board"
        Just b => putStrLn (showBoard b) >>= \_ => playGame' b

-- | Difficulty Options/Menu
data DifficultyAction = DSelected Difficulty | DHelp | DQuit | DInvalid String

difficulty : String
difficulty = "Enter difficulty> "

difficultyHelp : String
difficultyHelp = """
b or beginner to start an easy game
i or intermediate to start a medium difficulty game
e or expert to start a hard game
h or help to display this help option
q or quit to exit to the main menu"""

parseDifficulty : String -> DifficultyAction
parseDifficulty "b" = DSelected Easy
parseDifficulty "beginner" = DSelected Easy
parseDifficulty "i" = DSelected Medium
parseDifficulty "intermediate" = DSelected Medium
parseDifficulty "e" = DSelected Hard
parseDifficulty "expert" = DSelected Hard
parseDifficulty "h" = DHelp
parseDifficulty "help" = DHelp
parseDifficulty "q" = DQuit
parseDifficulty "quit" = DQuit
parseDifficulty s = DInvalid s

difficultyMenu : IO ()
difficultyMenu = do
  putStrLn ""
  putStr difficulty
  optionStr <- getLine
  let option = parseDifficulty optionStr
  case option of
    DSelected d => putStrLn ("Starting " ++ show d ++ " game...") >>= \_ => playGame d
    DHelp => putStrLn difficultyHelp >>= \_ => difficultyMenu
    DQuit => putStrLn "Returning to main menu..."
    DInvalid s => (putStrLn $ invalid s) >>= \_ => difficultyMenu

-- | Main Menu options
data MainMenuAction = Quit | Play | Help | Invalid String

parseOption : String -> MainMenuAction
parseOption "q" = Quit
parseOption "quit" = Quit
parseOption "p" = Play
parseOption "play" = Play
parseOption "h" = Help
parseOption "help" = Help
parseOption i = Invalid i

help : String
help = """
p or play to start a game
q or quit to exit
h or help to display help"""

mainMenu : IO ()
mainMenu = do
  putStrLn ""
  putStr "Enter option> "
  optionStr <- getLine
  let option = parseOption optionStr
  case option of
    Play => difficultyMenu >>= \_ => mainMenu
    Help => putStrLn help >>= \_ => mainMenu
    Quit => putStrLn "Goodbye :)"
    Invalid s => putStrLn (invalid s) >>= \_ => mainMenu

export
repl : IO ()
repl = do
  putStrLn intro
  mainMenu
-- Idris2

import System
import System.Concurrency

-- Test `conditionSignal` works for 1 main and 1 child thread
main : IO ()
main = do
  cvMutex <- makeMutex
  cv <- makeCondition
  t <- fork $ do
    -- the child must hold the mutex before waiting on the condition
    mutexAcquire cvMutex
    conditionWait cv cvMutex
    putStrLn "Hello mother"
    mutexRelease cvMutex
  putStrLn "Hello child"
  -- give the child a second to reach conditionWait; a signal sent before
  -- the wait has started would be lost and leave the child blocked forever
  sleep 1
  conditionSignal cv
  threadWait t
#include "RecordEngine.h" #include "Serialize/SlamSerialize.pb.h" #include "Serialize/MessageTypes.h" #include <opencv2/imgcodecs.hpp> #include <algorithm> #include <vector> #include <sstream> #include <boost/lexical_cast.hpp> using namespace LpSlam; namespace { template <class TVec3In, class TVec3Out> TVec3Out * toProtoVec3( TVec3In const& vec3) { auto outPosition = new TVec3Out(); outPosition->set_x(vec3.value[0]); outPosition->set_y(vec3.value[1]); outPosition->set_z(vec3.value[2]); return outPosition; } template <class TPosOrient> std::pair<LpgfSlamSerialize::Position*, LpgfSlamSerialize::Orientation* > toProtoPositionOrientation( TPosOrient const& posOrient) { auto outPosition = new LpgfSlamSerialize::Position(); outPosition->set_x(posOrient.position.value[0]); outPosition->set_y(posOrient.position.value[1]); outPosition->set_z(posOrient.position.value[2]); outPosition->set_x_sigma(posOrient.position.sigma[0]); outPosition->set_y_sigma(posOrient.position.sigma[1]); outPosition->set_z_sigma(posOrient.position.sigma[2]); auto outOrientation = new LpgfSlamSerialize::Orientation(); outOrientation->set_w(posOrient.orientation.value.w()); outOrientation->set_x(posOrient.orientation.value.x()); outOrientation->set_y(posOrient.orientation.value.y()); outOrientation->set_z(posOrient.orientation.value.z()); outOrientation->set_sigma(posOrient.orientation.sigma); return {outPosition, outOrientation}; } } RecordEngine::~RecordEngine() { stop(); } RecordEngine::RecordEngine() : m_recordThread( [](RecordThreadParams params) -> bool { const auto lmdStoreState = [] (GlobalState const & state) { auto [outPosition, outOrientation] = toProtoPositionOrientation(state); auto outState = new LpgfSlamSerialize::GlobalState(); outState->set_allocated_position(outPosition); outState->set_allocated_orientation(outOrientation); LpgfSlamSerialize::Velocity * velocity = nullptr; if (state.velocityValid) { velocity = toProtoVec3<Velocity3, LpgfSlamSerialize::Velocity>( state.velocity); } else { velocity = new LpgfSlamSerialize::Velocity(); } outState->set_allocated_velocity(velocity); return outState; }; // check if there is some work on our work queue try { RecordQueueEntry recordEntry; params.m_q.pop(recordEntry); if (recordEntry.valid == false) { return false; } if (recordEntry.type == EntryType::Camera) { LpgfSlamSerialize::CameraImage outCamEntry; outCamEntry.set_timestamp(recordEntry.camera.timestamp.time_since_epoch().count()); std::vector<unsigned char> imgData; cv::imencode(".jpg", recordEntry.camera.image, imgData); outCamEntry.set_imagedata(imgData.data(), imgData.size()); outCamEntry.set_cameranumber(recordEntry.camera.cameraNumber); { auto [outBasePosition, outBaseOrientation] = toProtoPositionOrientation(recordEntry.camera.base); auto outBase = new LpgfSlamSerialize::TrackerCoordinateSystem(); outBase->set_allocated_position(outBasePosition); outBase->set_allocated_orientation(outBaseOrientation); outCamEntry.set_allocated_imagebase(outBase); } // check if there is a second image if (recordEntry.camera.image_second.has_value()) { imgData.clear(); cv::imencode(".jpg", recordEntry.camera.image_second.value(), imgData); outCamEntry.set_imagedata_second(imgData.data(), imgData.size()); outCamEntry.set_cameranumber_second(recordEntry.camera.cameraNumber_second); { auto [outBasePosition, outBaseOrientation] = toProtoPositionOrientation( recordEntry.camera.base_second.value()); auto outBase = new LpgfSlamSerialize::TrackerCoordinateSystem(); outBase->set_allocated_position(outBasePosition); 
outBase->set_allocated_orientation(outBaseOrientation); outCamEntry.set_allocated_imagebase_second(outBase); } } auto outStateOdom = lmdStoreState(recordEntry.state_odom); outCamEntry.set_allocated_state_odom(outStateOdom); outCamEntry.set_hasglobalstate_odom(recordEntry.state_odom_valid); auto outStateMap = lmdStoreState(recordEntry.state_map); outCamEntry.set_allocated_state_map(outStateMap); outCamEntry.set_hasglobalstate_map(recordEntry.state_map_valid); params.m_stream.toStream( Serialization::CameraImage, outCamEntry, params.m_out.get()); if (params.writeRawFile) { std::stringstream sFileOut; std::string rawFileNumber = boost::lexical_cast<std::string>(params.imgCount); const size_t n_zero = 6; // fill with leading zeros rawFileNumber = std::string(n_zero - rawFileNumber.length(), '0') + rawFileNumber; sFileOut << rawFileNumber << "_left.jpg"; cv::imwrite(sFileOut.str(), recordEntry.camera.image); if (recordEntry.camera.image_second.has_value()) { std::stringstream sFileOutRight; sFileOutRight << rawFileNumber << "_right.jpg"; cv::imwrite(sFileOutRight.str(), recordEntry.camera.image_second.value()); } } //cv::imshow("Display Window", recordEntry.camera.image); params.imgCount++; } else if (recordEntry.type == EntryType::Sensor) { if (recordEntry.sensor.getSensorType() == SensorQueueEntry::SensorType::Imu) { LpgfSlamSerialize::SensorImu outSensorImu; outSensorImu.set_timestamp(recordEntry.sensor.timestamp.time_since_epoch().count()); auto outAcc = toProtoVec3<Acceleration3, LpgfSlamSerialize::Acceleration>(recordEntry.sensor.getAcceleration()); auto outGyro = toProtoVec3<AngularVelocity3, LpgfSlamSerialize::AngularVelocity>(recordEntry.sensor.getAngluarVelocity()); outSensorImu.set_allocated_acc(outAcc); outSensorImu.set_allocated_gyro(outGyro); params.m_stream.toStream( Serialization::SensorImu, outSensorImu, params.m_out.get()); } else if (recordEntry.sensor.getSensorType() == SensorQueueEntry::SensorType::GlobalState) { // right now, just using one ProtoBuf message ... 
LpgfSlamSerialize::SensorGlobalState outSensorGlobalState; outSensorGlobalState.set_timestamp(recordEntry.sensor.timestamp.time_since_epoch().count()); outSensorGlobalState.set_reference(recordEntry.sensor.reference); auto [outPosition, outOrientation] = toProtoPositionOrientation(recordEntry.sensor.getGlobalState()); auto outState = new LpgfSlamSerialize::GlobalState(); outState->set_allocated_position(outPosition); outState->set_allocated_orientation(outOrientation); outSensorGlobalState.set_allocated_globalstate(outState); params.m_stream.toStream( Serialization::SensorGlobalState, outSensorGlobalState, params.m_out.get()); } else if (recordEntry.sensor.getSensorType() == SensorQueueEntry::SensorType::FeatureList) { for (auto const& feature: recordEntry.sensor.getFeatureList()) { LpgfSlamSerialize::SensorFeature outSensorFeature; //std::cout << feature.m_timestamp.time_since_epoch().count() << std::endl; outSensorFeature.set_timestamp(feature.m_timestamp.time_since_epoch().count()); outSensorFeature.set_lastobserved(feature.m_lastObserved.time_since_epoch().count()); outSensorFeature.set_observationcount(feature.m_observationCount); outSensorFeature.set_allocated_position(toProtoVec3<Position3, LpgfSlamSerialize::Position> (feature.m_position)); outSensorFeature.set_allocated_closestkeyframeposition(toProtoVec3<Position3, LpgfSlamSerialize::Position> (feature.m_closestKeyframePosition)); outSensorFeature.set_anchorid(feature.m_anchorId); params.m_stream.toStream( Serialization::SensorFeatureList, outSensorFeature, params.m_out.get()); } } else { spdlog::error("Sensor type not support for recording"); } } else if (recordEntry.type == EntryType::Result) { LpgfSlamSerialize::GlobalStateInTime outResult; auto [res_timestamp, res_gs] = recordEntry.result.globalStateInTime; outResult.set_timestamp(res_timestamp.system_time.time_since_epoch().count()); auto [outPosition, outOrientation] = toProtoPositionOrientation(res_gs); auto outState = new LpgfSlamSerialize::GlobalState(); outState->set_allocated_position(outPosition); outState->set_allocated_orientation(outOrientation); outResult.set_allocated_globalstate(outState); params.m_stream.toStream( Serialization::Result, outResult, params.m_out.get()); } else { spdlog::error("Recording entry not supported."); } // pop all from the sensor values } catch (tbb::user_abort &) { // all waits on the fusion queue were aborted return false; } // continue thread return true; }) { GOOGLE_PROTOBUF_VERIFY_VERSION; } void RecordEngine::setOutputFile(std::string const& outputFile) { bool startAgain = false; if (m_out) { stop(); startAgain = true; } m_outputFile = outputFile; if (startAgain) { start(); } } void RecordEngine::setStoreImages(bool b) { m_storeImages = b; } void RecordEngine::storeSensor( SensorQueueEntry const& sensor) { if (!m_out) { // not recording return; } RecordQueueEntry entry; entry.type = EntryType::Sensor; entry.sensor = sensor; m_q.push(entry); } void RecordEngine::storeCameraImage( CameraQueueEntry const& camera, std::optional<GlobalStateInTime> state_odom, std::optional<GlobalStateInTime> state_map) { if (!m_out || !m_storeImages) { // not recording return; } RecordQueueEntry entry; entry.type = EntryType::Camera; entry.camera = camera; if (state_odom.has_value()) { entry.state_odom = state_odom.value().second; entry.state_odom_valid = true; } else { entry.state_odom_valid = false; } if (state_map.has_value()) { entry.state_map = state_map.value().second; entry.state_map_valid = true; } else { entry.state_map_valid = false; } 
m_q.push(entry); } void RecordEngine::storeResult( ResultQueueEntry const& result) { if (!m_out) { // not recording return; } RecordQueueEntry entry; entry.type = EntryType::Result; entry.result = result; m_q.push(entry); } void RecordEngine::setWriteRawFile(bool b) { m_writeRawFile = b; } void RecordEngine::start() { if (m_out) { // already recording ... return; } m_out = std::make_shared<std::ofstream>(m_outputFile, std::ofstream::binary); m_recordThread.start(RecordThreadParams(m_q, m_stream, m_out, m_imgCount, m_writeRawFile)); std::ofstream (m_outputFile, std::ios::binary); } void RecordEngine::stop() { if (m_recordThread.isRunning()) { // send the terminate and all previous measurements will be stored RecordQueueEntry camEntry; camEntry.valid = false; m_recordThread.stopAsync(); m_q.push(camEntry); m_recordThread.stop(); m_q.abort(); m_q.clear(); } if (m_out) { if (m_out->is_open()) { *m_out << std::flush; m_out->close(); } m_out.reset(); } }
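Every branch of the record thread above funnels its entry through `params.m_stream.toStream(tag, message, out)`, i.e. the file is a type-tagged record stream. As a rough illustration of that general pattern only — not LpSlam's actual wire format; the tag values, the fixed 8-byte header, and the use of JSON in place of protobuf serialization are all assumptions here — a minimal Python sketch:

```
import json
import struct

# Hypothetical numeric type tags, standing in for Serialization::CameraImage etc.
CAMERA_IMAGE, SENSOR_IMU, RESULT = 1, 2, 3

def to_stream(msg_type: int, payload: dict, out) -> None:
    """Write one record: 4-byte type tag, 4-byte body length, then the body."""
    body = json.dumps(payload).encode("utf-8")  # stand-in for protobuf's SerializeToString()
    out.write(struct.pack("<II", msg_type, len(body)))
    out.write(body)

def read_stream(inp):
    """Yield (type, payload) records until end of file."""
    while header := inp.read(8):
        msg_type, size = struct.unpack("<II", header)
        yield msg_type, json.loads(inp.read(size))

with open("record.bin", "wb") as f:
    to_stream(SENSOR_IMU, {"timestamp": 123, "acc": [0.0, 0.0, 9.81]}, f)

with open("record.bin", "rb") as f:
    print(list(read_stream(f)))  # [(2, {'timestamp': 123, 'acc': [0.0, 0.0, 9.81]})]
```

A production format would additionally want a file magic/version header and some integrity checking; the point here is only the tag-length-body framing, which lets a reader skip record types it does not understand.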
– you will fully comply with the terms and conditions of these terms of service as well as our acceptable use policy

Intellectual Property Rights

The Amirahadaratube.com Video website, except all user submissions (as defined below), including without limitation, the text, software, visuals, sounds, music, videos, interactive features and the like (content) and the trademarks, service marks and logos contained therein (marks), are owned by or licensed to Amirahadaratube.com Limited.

– you may not use, copy, reproduce, distribute, broadcast, display, sell, license, or otherwise exploit them for any other purposes without the prior written consent of their respective owners.

You retain all of your ownership rights in your submissions. However, by submitting the material to Amirahadaratube.com Video, you grant a worldwide, non-exclusive, royalty-free, sublicenseable and transferable license to use, reproduce, distribute, prepare derivative works of, display, and perform the submission in connection with the Amirahadaratube.com Video website and business. The license granted by you terminates once you delete the submission from the Amirahadaratube.com Video website.

Amirahadaratube.com Limited does not endorse any user submission, and expressly disclaims any and all liability in connection with user submissions. Amirahadaratube.com Limited does not permit copyright infringing activities or infringement of intellectual property rights on its website, and will promptly and without prior notice remove all content and user submissions if properly notified of infringements on a third party's intellectual property rights. Repeat infringers will have their user access terminated.

D. You understand that when using the Amirahadaratube.com Video website, you will be exposed to user submissions from a wide variety of sources, and that Amirahadaratube.com Limited is not responsible for the accuracy, usefulness, safety, or intellectual property rights of or relating to such submissions. You further understand and acknowledge that you may be exposed to user submissions that are inaccurate, indecent, offensive, or objectionable, and you agree to waive, and hereby do waive, any legal or equitable rights or remedies you have or may have against Amirahadaratube.com Limited with respect thereto, and agree to indemnify and hold Amirahadaratube.com Limited and its owners, affiliates, and/or licensors, harmless to the fullest extent allowed by law regarding all matters related to your use of the site.

E. Amirahadaratube.com Limited permits you to link to materials on the Amirahadaratube.com Video website for personal, non-commercial purposes only. In addition, Amirahadaratube.com Video provides an "embeddable player" feature, which you may incorporate into your own personal, non-commercial websites for use in accessing the materials contained on the Amirahadaratube.com Video website. Amirahadaratube.com Limited reserves the right to discontinue any aspect of the Amirahadaratube.com Video website at any time and for any reason.

TRANSMITTED, OR OTHERWISE MADE AVAILABLE VIA THE Amirahadaratube.com VIDEO WEBSITE.
Amirahadaratube.com LIMITED DOES NOT WARRANT, ENDORSE, GUARANTEE, OR ASSUME RESPONSIBILITY FOR ANY PRODUCT OR SERVICE ADVERTISED OR OFFERED BY A THIRD PARTY THROUGH THE Amirahadaratube.com VIDEO WEBSITE OR ANY HYPERLINKED WEBSITE OR FEATURED IN ANY BANNER OR OTHER ADVERTISING, AND Amirahadaratube.com LIMITED WILL NOT BE A PARTY TO OR IN ANY WAY BE RESPONSIBLE FOR MONITORING ANY TRANSACTION BETWEEN YOU AND THIRD-PARTY PROVIDERS OF PRODUCTS OR SERVICES. AS WITH THE PURCHASE OF A PRODUCT OR SERVICE THROUGH ANY MEDIUM OR IN ANY ENVIRONMENT, YOU SHOULD USE YOUR BEST JUDGMENT AND EXERCISE CAUTION WHERE APPROPRIATE.

ANY ERRORS OR OMISSIONS IN ANY CONTENT OR FOR ANY LOSS OR DAMAGE OF ANY KIND INCURRED AS A RESULT OF YOUR USE OF ANY CONTENT POSTED, EMAILED, TRANSMITTED, OR OTHERWISE MADE AVAILABLE VIA THE Amirahadaratube.com VIDEO WEBSITE, WHETHER BASED ON WARRANTY, CONTRACT, TORT, OR ANY OTHER LEGAL THEORY, AND WHETHER OR NOT THE COMPANY IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

YOU SPECIFICALLY ACKNOWLEDGE THAT Amirahadaratube.com LIMITED SHALL NOT BE LIABLE FOR USER SUBMISSIONS OR THE DEFAMATORY, OFFENSIVE, OR ILLEGAL CONDUCT OF ANY THIRD PARTY AND THAT THE RISK OF HARM OR DAMAGE FROM THE FOREGOING RESTS ENTIRELY WITH YOU.

These terms of service, together with the privacy notice and any other legal notices published on the Amirahadaratube.com Video website, shall constitute the entire agreement between you and Amirahadaratube.com Limited. If any provision of these terms of service is deemed invalid by a court of competent jurisdiction, the invalidity of such provision shall not affect the validity of the remaining provisions of these terms of service, which shall remain in full force and effect. No waiver of any term of these terms of service shall be deemed a further or continuing waiver of such term or any other term, and Amirahadaratube.com Limited's failure to assert any right or provision under these terms of service shall not constitute a waiver of such right or provision. Amirahadaratube.com Limited reserves the right to amend these terms of service at any time and without notice, and it is your responsibility to review these terms of service for any changes. Your use of the Amirahadaratube.com Video website following any amendment of these terms of service will signify your assent to and acceptance of its revised terms.
(*-------------------------------------------* | CSP-Prover on Isabelle2004 | | December 2004 | | July 2005 (modified) | | September 2005 (modified) | | | | CSP-Prover on Isabelle2005 | | October 2005 (modified) | | April 2006 (modified) | | March 2007 (modified) | | | | Yoshinao Isobe (AIST JAPAN) | *-------------------------------------------*) theory CSP_T_law_SKIP imports CSP_T_law_basic begin (***************************************************************** 1. SKIP |[X]| SKIP 2. SKIP |[X]| P 3. P |[X]| SKIP 4. SKIP -- X 5. SKIP [[r]] 6. SKIP ;; P 7. P ;; SKIP 8. SKIP |. n *****************************************************************) (********************************************************* SKIP |[X]| SKIP *********************************************************) lemma cspT_Parallel_term: "SKIP |[X]| SKIP =T[M1,M2] SKIP" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule) apply (simp add: in_traces) apply (elim disjE conjE exE) apply (simp_all) (* <= *) apply (rule) apply (simp add: in_traces) apply (elim disjE conjE exE) apply (simp_all) done (********************************************************* SKIP |[X]| P *********************************************************) lemma cspT_Parallel_preterm_l: "SKIP |[X]| (? :Y -> Qf) =T[M,M] ? x:(Y-X) -> (SKIP |[X]| Qf x)" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule) apply (simp add: in_traces) apply (insert trace_nil_or_Tick_or_Ev) apply (elim disjE conjE exE) apply (simp_all) apply (drule_tac x="t" in spec) apply (erule disjE, simp) apply (erule disjE, simp) apply (elim conjE exE, simp) apply (simp add: par_tr_head) apply (rule_tac x="<>" in exI) apply (rule_tac x="sa" in exI, simp) apply (drule_tac x="t" in spec) apply (erule disjE, simp) apply (erule disjE, simp) apply (elim conjE exE, simp) apply (simp add: par_tr_head) apply (rule_tac x="<Tick>" in exI) apply (rule_tac x="sa" in exI, simp) (* <= *) apply (rule) apply (simp add: in_traces) apply (elim disjE conjE exE) apply (simp_all) apply (rule_tac x="<>" in exI) apply (rule_tac x="<Ev a> ^^^ ta" in exI, simp) apply (simp add: par_tr_head) apply (rule_tac x="<Tick>" in exI) apply (rule_tac x="<Ev a> ^^^ ta" in exI, simp) apply (simp add: par_tr_head) done (********************************************************* P |[X]| SKIP *********************************************************) lemma cspT_Parallel_preterm_r: "(? :Y -> Pf) |[X]| SKIP =T[M,M] ? x:(Y-X) -> (Pf x |[X]| SKIP)" apply (rule cspT_trans) apply (rule cspT_Parallel_commut) apply (rule cspT_trans) apply (rule cspT_Parallel_preterm_l) apply (rule cspT_rm_head, simp) apply (rule cspT_Parallel_commut) done lemmas cspT_Parallel_preterm = cspT_Parallel_preterm_l cspT_Parallel_preterm_r (********************************************************* SKIP and Parallel *********************************************************) (* p.288 *) lemma cspT_SKIP_Parallel_Ext_choice_SKIP_l: "((? :Y -> Pf) [+] SKIP) |[X]| SKIP =T[M,M] (? 
x:(Y - X) -> (Pf x |[X]| SKIP)) [+] SKIP" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule, simp add: in_traces) apply (elim conjE exE disjE) apply (simp_all) apply (rule disjI2) apply (rule disjI1) apply (simp add: par_tr_nil_right) apply (elim conjE) apply (simp add: image_iff) apply (rule_tac x="sa" in exI) apply (rule_tac x="<>" in exI) apply (simp add: par_tr_nil_right) apply (rule disjI2) apply (rule disjI1) apply (simp add: par_tr_Tick_right) apply (elim conjE) apply (simp add: image_iff) apply (rule_tac x="sa" in exI) apply (rule_tac x="<Tick>" in exI) apply (simp add: par_tr_Tick_right) (* <= *) apply (rule, simp add: in_traces) apply (elim conjE exE disjE) apply (simp_all) apply (simp add: par_tr_nil_right) apply (elim conjE) apply (rule_tac x="<Ev a> ^^^ sa" in exI) apply (rule_tac x="<>" in exI) apply (simp add: par_tr_nil_right) apply (simp add: image_iff) apply (simp add: par_tr_Tick_right) apply (elim conjE) apply (rule_tac x="<Ev a> ^^^ sa" in exI) apply (rule_tac x="<Tick>" in exI) apply (simp add: par_tr_Tick_right) apply (simp add: image_iff) done lemma cspT_SKIP_Parallel_Ext_choice_SKIP_r: "SKIP |[X]| ((? :Y -> Pf) [+] SKIP) =T[M,M] (? x:(Y - X) -> (SKIP |[X]| Pf x)) [+] SKIP" apply (rule cspT_rw_left) apply (rule cspT_commut) apply (rule cspT_rw_left) apply (rule cspT_SKIP_Parallel_Ext_choice_SKIP_l) apply (rule cspT_rw_left) apply (rule cspT_decompo) apply (rule cspT_decompo) apply (simp) apply (rule cspT_commut) apply (rule cspT_reflex) apply (rule cspT_reflex) done lemmas cspT_SKIP_Parallel_Ext_choice_SKIP = cspT_SKIP_Parallel_Ext_choice_SKIP_l cspT_SKIP_Parallel_Ext_choice_SKIP_r (********************************************************* SKIP -- X *********************************************************) lemma cspT_SKIP_Hiding_Id: "SKIP -- X =T[M1,M2] SKIP" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule) apply (simp add: in_traces) apply (elim disjE conjE exE) apply (simp_all) (* <= *) apply (rule) apply (simp add: in_traces) apply (elim disjE conjE exE) apply (simp_all) apply (rule_tac x="<>" in exI) apply (simp) apply (rule_tac x="<Tick>" in exI) apply (simp) done (********************************************************* SKIP and Hiding *********************************************************) (* p.288 version "((? :Y -> Pf) [+] SKIP) -- X =T[M1,M2] IF (Y Int X = {}) THEN ((? x:Y -> (Pf x -- X)) [+] SKIP) ELSE (((? x:(Y-X) -> (Pf x -- X)) [+] SKIP) |~| (! x:(Y Int X) .. (Pf x -- X)))" *) lemma cspT_SKIP_Hiding_step: "((? :Y -> Pf) [+] SKIP) -- X =T[M,M] ((? x:(Y-X) -> (Pf x -- X)) [+] SKIP) |~| (! x:(Y Int X) .. 
(Pf x -- X))" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule, simp add: in_traces) apply (elim conjE exE disjE) apply (simp_all) apply (case_tac "a : X", force) apply (force) (* <= *) apply (rule) apply (simp add: in_traces) apply (elim conjE exE bexE disjE) apply (simp_all) apply (force) apply (rule_tac x="<Ev a> ^^^ sa" in exI) apply (simp) apply (force) apply (rule_tac x="<Tick>" in exI) apply (simp) apply (force) apply (rule_tac x="<Ev a> ^^^ s" in exI) apply (simp) done (********************************************************* SKIP [[r]] *********************************************************) lemma cspT_SKIP_Renaming_Id: "SKIP [[r]] =T[M1,M2] SKIP" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule) apply (simp add: in_traces) apply (force) (* <= *) apply (rule) apply (simp add: in_traces) apply (force) done (********************************************************* SKIP ;; P *********************************************************) lemma cspT_Seq_compo_unit_l: "SKIP ;; P =T[M,M] P" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule, simp add: in_traces) apply (force) (* <= *) apply (rule, simp add: in_traces) apply (rule disjI2) apply (rule_tac x="<>" in exI) apply (rule_tac x="t" in exI) apply (simp) done (********************************************************* P ;; SKIP *********************************************************) lemma cspT_Seq_compo_unit_r: "P ;; SKIP =T[M,M] P" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule, simp add: in_traces) apply (elim conjE exE disjE) apply (simp_all) apply (rule memT_prefix_closed, simp) apply (simp add: rmTick_prefix_rev) apply (rule memT_prefix_closed, simp, simp) (* <= *) apply (rule, simp add: in_traces) apply (insert trace_last_noTick_or_Tick) apply (drule_tac x="t" in spec) apply (erule disjE) apply (rule disjI1) apply (rule_tac x="t" in exI, simp) (* *) apply (rule disjI2) apply (elim conjE exE) apply (rule_tac x="s" in exI) apply (rule_tac x="<Tick>" in exI) apply (simp) done lemmas cspT_Seq_compo_unit = cspT_Seq_compo_unit_l cspT_Seq_compo_unit_r (********************************************************* SKIP and Sequential composition *********************************************************) (* p.141 *) lemma cspT_SKIP_Seq_compo_step: "((? :X -> Pf) [> SKIP) ;; Q =T[M,M] (? x:X -> (Pf x ;; Q)) [> Q" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule, simp add: in_traces) apply (elim conjE exE disjE) apply (simp_all) apply (rule disjI1) apply (fast) apply (rule disjI2) apply (rule disjI1) apply (insert trace_nil_or_Tick_or_Ev) apply (drule_tac x="s" in spec) apply (elim disjE conjE exE) apply (simp_all) apply (simp add: appt_assoc) apply (rule disjI2) apply (rule_tac x="sb" in exI) apply (rule_tac x="ta" in exI) apply (simp) (* <= *) apply (rule, simp add: in_traces) apply (elim conjE exE disjE) apply (simp_all) apply (rule disjI1) apply (rule_tac x="<>" in exI) apply (simp) apply (rule disjI1) apply (rule_tac x="<Ev a> ^^^ sa" in exI) apply (simp) apply (rule disjI2) apply (rule_tac x="<Ev a> ^^^ sa" in exI) apply (rule_tac x="ta" in exI) apply (simp add: appt_assoc) apply (rule disjI1) apply (rule_tac x="<>" in exI) apply (simp) apply (rule disjI2) apply (rule_tac x="<>" in exI) apply (rule_tac x="t" in exI) apply (simp) done (********************************************************* SKIP |. 
n *********************************************************) lemma cspT_SKIP_Depth_rest: "SKIP |. (Suc n) =T[M1,M2] SKIP" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule) apply (simp add: in_traces) (* <= *) apply (rule) apply (simp add: in_traces) apply (force) done (********************************************************* cspT_SKIP *********************************************************) lemmas cspT_SKIP = cspT_Parallel_term cspT_Parallel_preterm cspT_SKIP_Parallel_Ext_choice_SKIP cspT_SKIP_Hiding_Id cspT_SKIP_Hiding_step cspT_SKIP_Renaming_Id cspT_Seq_compo_unit cspT_SKIP_Seq_compo_step cspT_SKIP_Depth_rest (********************************************************* P [+] SKIP *********************************************************) (* p.141 *) lemma cspT_Ext_choice_SKIP_resolve: "P [+] SKIP =T[M,M] P [> SKIP" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule, simp add: in_traces) (* <= *) apply (rule, simp add: in_traces) done lemma cspT_Ext_choice_SKIP_resolve_sym: "P [> SKIP =T[M,M] P [+] SKIP" apply (rule cspT_sym) apply (simp add: cspT_Ext_choice_SKIP_resolve) done (********************************************************* SKIP ||| P *********************************************************) lemma cspT_Interleave_unit_l: "SKIP ||| P =T[M,M] P" apply (simp add: cspT_semantics) apply (rule order_antisym) (* => *) apply (rule) apply (simp add: in_traces) apply (elim disjE conjE exE) apply (simp add: par_tr_nil_left) apply (simp add: par_tr_Tick_left) (* <= *) apply (rule) apply (simp add: in_traces) apply (case_tac "noTick t") apply (rule_tac x="<>" in exI) apply (rule_tac x="t" in exI) apply (simp) apply (simp add: par_tr_nil_left) apply (simp add: noTick_def) apply (rule_tac x="<Tick>" in exI) apply (rule_tac x="t" in exI) apply (simp) apply (simp add: par_tr_Tick_left) apply (simp add: noTick_def) done (********************************************************* P ||| SKIP *********************************************************) lemma cspT_Interleave_unit_r: "P ||| SKIP =T[M,M] P" apply (rule cspT_rw_left) apply (rule cspT_commut) apply (simp add: cspT_Interleave_unit_l) done lemmas cspT_Interleave_unit = cspT_Interleave_unit_l cspT_Interleave_unit_r end
import datetime
import random

import numpy as np
import torch
from torch.utils import data


class TimeSeries(data.Dataset):

    def __init__(self, data_frame, input_time_interval, output_time_interval, output_keyword,
                 valid_rate=0.2, shuffle_seed=0):
        self.data_frame = data_frame
        self.data_channels = self.data_frame.head(1).values.shape[1]
        self.input_time_interval = input_time_interval
        self.output_time_interval = output_time_interval
        self.output_keyword = output_keyword

        self.get_data_list()
        self.dataset_size = len(self.inputs)
        self._split(valid_rate, shuffle_seed)

    def get_data_list(self):
        # slide a daily window over the frame: `input_time_interval` days of inputs
        # followed immediately by `output_time_interval` days of targets
        self.inputs = []
        self.targets = []
        head = self.data_frame.head(1).index[0]
        tail = self.data_frame.tail(1).index[0]
        data_head = head - datetime.timedelta(days=1)
        while True:
            data_head = data_head + datetime.timedelta(days=1)
            data_tail = data_head + datetime.timedelta(days=self.input_time_interval - 1)
            target_head = data_tail + datetime.timedelta(days=1)
            target_tail = target_head + datetime.timedelta(days=self.output_time_interval - 1)
            if target_tail > tail:
                break
            input = self.data_frame[data_head:data_tail]
            target = self.data_frame[target_head:target_tail][self.output_keyword]
            self.inputs.append(input)
            self.targets.append(target)

    def _split(self, valid_rate, shuffle_seed):
        self.indices = list(range(self.dataset_size))
        random.seed(shuffle_seed)
        random.shuffle(self.indices)
        split = int(np.floor((1 - valid_rate) * self.dataset_size))
        self.train_indices, self.valid_indices = self.indices[:split], self.indices[split:]
        self.train_dataset = data.Subset(self, self.train_indices)
        self.valid_dataset = data.Subset(self, self.valid_indices)
        self.train_sampler = data.RandomSampler(self.train_dataset)
        self.valid_sampler = data.SequentialSampler(self.valid_dataset)
        self.test_sampler = data.SequentialSampler(self)

    def get_dataloader(self, batch_size=1, num_workers=0):
        train_loader = data.DataLoader(self.train_dataset, batch_size=batch_size,
                                       sampler=self.train_sampler, num_workers=num_workers)
        valid_loader = data.DataLoader(self.valid_dataset, batch_size=batch_size,
                                       sampler=self.valid_sampler, num_workers=num_workers)
        test_loader = data.DataLoader(self, batch_size=batch_size,
                                      sampler=self.test_sampler, num_workers=num_workers)
        return train_loader, valid_loader, test_loader

    def __getitem__(self, index):
        # `np.float` was removed from NumPy; use the builtin float instead.
        # Inputs are transposed to (channels, time_steps).
        input = self.inputs[index].values.astype(float).transpose(1, 0)
        target = self.targets[index].values.astype(float)
        input = torch.from_numpy(input).float()
        target = torch.from_numpy(target).float()
        return input, target

    def __len__(self):
        return self.dataset_size
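A hedged usage sketch of the class above on synthetic data (the daily `DatetimeIndex`, the column names, and the window lengths are illustrative assumptions, not part of the original):

```
import numpy as np
import pandas as pd

# Synthetic daily data: 100 days, two channels. The DatetimeIndex matters because
# get_data_list() slides its windows with datetime.timedelta(days=...).
index = pd.date_range("2021-01-01", periods=100, freq="D")
frame = pd.DataFrame({"price": np.random.rand(100),
                      "volume": np.random.rand(100)}, index=index)

dataset = TimeSeries(frame, input_time_interval=14, output_time_interval=7,
                     output_keyword="price")
train_loader, valid_loader, test_loader = dataset.get_dataloader(batch_size=8)

x, y = next(iter(train_loader))
print(x.shape, y.shape)  # expected: torch.Size([8, 2, 14]) inputs, torch.Size([8, 7]) targets
```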
-- @@stderr -- dtrace: failed to compile script test/unittest/actions/freopen/err.D_FREOPEN_INVALID.d: [D_FREOPEN_INVALID] line 18: freopen( ) argument #1 cannot be "."
#redirect UC Davis Ski and Snowboard Team
header "Soundness" theory Soundness imports Completeness begin lemma permutation_validS: "fs <~~> gs --> (validS fs = validS gs)" apply(simp add: validS_def) apply(simp add: evalS_def) apply(simp add: perm_set_eq) done lemma modelAssigns_vblcase: "phi \<in> modelAssigns M \<Longrightarrow> x \<in> objects M \<Longrightarrow> vblcase x phi \<in> modelAssigns M" apply (simp add: modelAssigns_def, rule) apply(erule_tac rangeE) apply(case_tac xaa rule: vbl_casesE, auto) done lemma tmp: "(!x : A. P x | Q) ==> (! x : A. P x) | Q " by blast lemma soundnessFAll: "!!Gamma. [| u ~: freeVarsFL (FAll Pos A # Gamma); validS (instanceF u A # Gamma) |] ==> validS (FAll Pos A # Gamma)" apply (simp add: validS_def, rule) apply (drule_tac x=M in spec, rule) apply(simp add: evalF_instance) apply (rule tmp, rule) apply(drule_tac x="% y. if y = u then x else phi y" in bspec) apply(simp add: modelAssigns_def) apply force apply(erule disjE) apply (rule disjI1, simp) apply(subgoal_tac "evalF M (vblcase x (\<lambda>y. if y = u then x else phi y)) A = evalF M (vblcase x phi) A") apply force apply(rule evalF_equiv) apply(rule equalOn_vblcaseI) apply(rule,rule) apply(simp add: freeVarsFL_cons) apply (rule equalOnI, force) apply(rule disjI2) apply(subgoal_tac "evalS M (\<lambda>y. if y = u then x else phi y) Gamma = evalS M phi Gamma") apply force apply(rule evalS_equiv) apply(rule equalOnI) apply(force simp: freeVarsFL_cons) done lemma soundnessFEx: "validS (instanceF x A # Gamma) ==> validS (FAll Neg A # Gamma)" apply(simp add: validS_def) apply (simp add: evalF_instance, rule, rule) apply(drule_tac x=M in spec) apply (drule_tac x=phi in bspec, assumption) apply(erule disjE) apply(rule disjI1) apply (rule_tac x="phi x" in bexI, assumption) apply(force dest: modelAssignsD subsetD) apply (rule disjI2, assumption) done lemma soundnessFCut: "[| validS (C # Gamma); validS (FNot C # Delta) |] ==> validS (Gamma @ Delta)" (* apply(force simp: validS_def evalS_append evalS_cons evalF_FNot)*) apply (simp add: validS_def, rule, rule) apply(drule_tac x=M in spec) apply(drule_tac x=M in spec) apply(drule_tac x=phi in bspec) apply assumption apply(drule_tac x=phi in bspec) apply assumption apply (simp add: evalS_append evalF_FNot, blast) done lemma completeness: "fs : deductions (PC) = validS fs" apply rule apply(rule soundness) apply assumption apply(subgoal_tac "fs : deductions CutFreePC") apply(rule subsetD) prefer 2 apply assumption apply(rule mono_deductions) apply(simp add: PC_def CutFreePC_def) apply blast apply(rule adequacy) by assumption end
from typing import List, Dict, Union, Tuple import numpy as np import pickle import math import sympy from igp2 import AgentState, Lane, VelocityTrajectory, StateTrajectory, Map from shapely.geometry import Point, LineString, Polygon, MultiPolygon from shapely.ops import unary_union, split from grit.core.goal_generator import TypedGoal, GoalGenerator from grit.core.base import get_occlusions_dir class FeatureExtractor: MAX_ONCOMING_VEHICLE_DIST = 100 # Minimum area the occlusion must have to contain a vehicle (assuming a 4m*3m vehicle) MIN_OCCLUSION_AREA = 12 # Maximum distance the occlusion can be to be considered as significant for creating occlusions. MAX_OCCLUSION_DISTANCE = 30 FRAME_STEP_SIZE = 25 MISSING = True NON_MISSING = False feature_names = {'path_to_goal_length': 'scalar', 'in_correct_lane': 'binary', 'speed': 'scalar', 'acceleration': 'scalar', 'angle_in_lane': 'scalar', 'vehicle_in_front_dist': 'scalar', 'vehicle_in_front_speed': 'scalar', 'oncoming_vehicle_dist': 'scalar', 'oncoming_vehicle_speed': 'scalar', 'road_heading': 'scalar', 'exit_number': 'integer'} indicator_features = ['exit_number_missing', 'vehicle_in_front_missing', 'oncoming_vehicle_missing'] possibly_missing_features = {'exit_number': 'exit_number_missing', 'oncoming_vehicle_dist': 'oncoming_vehicle_missing', 'oncoming_vehicle_speed': 'oncoming_vehicle_missing', 'vehicle_in_front_dist': 'vehicle_in_front_missing', 'vehicle_in_front_speed': 'vehicle_in_front_missing'} def __init__(self, scenario_map: Map, *args): self.scenario_map = scenario_map # If we want to consider occlusions, we need to provide the scenario map and episode index as parameter, # in this order. if len(args) > 1: self.scenario_name = args[0] self.episode_idx = args[1] with open(get_occlusions_dir() + f"{self.scenario_name}_e{self.episode_idx}.p", 'rb') as file: self.occlusions = pickle.load(file) def extract(self, agent_id: int, frames: List[Dict[int, AgentState]], goal: TypedGoal, ego_agent_id: int = None, initial_frame: Dict[int, AgentState] = None) \ -> Dict[str, Union[float, bool]]: """Extracts a dict of features describing the observation Args: agent_id: identifier for the agent of which we want the features frames: list of observed frames goal: goal of the agent ego_agent_id: id of the ego agent from whose pov the occlusions are taken. Used for indicator features initial_frame: first frame in which the target agent is visible to the ego. 
Used for indicator features Returns: dict of features values """ current_frame = frames[-1] current_state = current_frame[agent_id] initial_state = frames[0][agent_id] current_lane = goal.lane_path[0] lane_path = goal.lane_path speed = current_state.speed acceleration = np.linalg.norm(current_state.acceleration) in_correct_lane = self.in_correct_lane(lane_path) path_to_goal_length = self.path_to_goal_length(current_state, goal, lane_path) angle_in_lane = self.angle_in_lane(current_state, current_lane) road_heading = self.road_heading(lane_path) exit_number = self.exit_number(initial_state, lane_path) goal_type = goal.goal_type vehicle_in_front_id, vehicle_in_front_dist = self.vehicle_in_front(agent_id, lane_path, current_frame) if vehicle_in_front_id is None: vehicle_in_front_speed = 20 vehicle_in_front_dist = 100 else: vehicle_in_front = current_frame[vehicle_in_front_id] vehicle_in_front_speed = vehicle_in_front.speed oncoming_vehicle_id, oncoming_vehicle_dist = self.oncoming_vehicle(agent_id, lane_path, current_frame) if oncoming_vehicle_id is None: oncoming_vehicle_speed = 20 else: oncoming_vehicle_speed = current_frame[oncoming_vehicle_id].speed features = {'path_to_goal_length': path_to_goal_length, 'in_correct_lane': in_correct_lane, 'speed': speed, 'acceleration': acceleration, 'angle_in_lane': angle_in_lane, 'vehicle_in_front_dist': vehicle_in_front_dist, 'vehicle_in_front_speed': vehicle_in_front_speed, 'oncoming_vehicle_dist': oncoming_vehicle_dist, 'oncoming_vehicle_speed': oncoming_vehicle_speed, 'road_heading': road_heading, 'exit_number': exit_number, 'goal_type': goal_type} # We pass the ego_agent_id only if we want to extract the indicator features. if ego_agent_id is not None: occlusion_frame_id = math.ceil(current_state.time / self.FRAME_STEP_SIZE) frame_occlusions = self.occlusions[occlusion_frame_id] occlusions = unary_union(self.get_occlusions_ego_polygon(frame_occlusions, ego_agent_id)) vehicle_in_front_occluded = self.is_vehicle_in_front_missing(vehicle_in_front_dist, agent_id, lane_path, current_frame, occlusions) oncoming_vehicle_occluded = self.is_oncoming_vehicle_missing(oncoming_vehicle_dist, lane_path, occlusions) # Get the first state in which both the ego and target vehicles are alive (even if target is occluded). 
initial_state = initial_frame[agent_id] exit_number_occluded = self.is_exit_number_missing(initial_state, goal) \ if self.scenario_name == "round" else False indicator_features = {'vehicle_in_front_missing': vehicle_in_front_occluded, 'oncoming_vehicle_missing': oncoming_vehicle_occluded, 'exit_number_missing': exit_number_occluded} features.update(indicator_features) return features @staticmethod def get_vehicles_in_route(ego_agent_id: int, path: List[Lane], frame: Dict[int, AgentState]): agents = [] for agent_id, agent in frame.items(): agent_point = Point(*agent.position) if agent_id != ego_agent_id: for lane in path: if lane.boundary.contains(agent_point): agents.append(agent_id) return agents @staticmethod def angle_in_lane(state: AgentState, lane: Lane) -> float: """ Get the signed angle between the vehicle heading and the lane heading Args: state: current state of the vehicle lane: : current lane of the vehicle Returns: angle in radians """ lon = lane.distance_at(state.position) lane_heading = lane.get_heading_at(lon) angle_diff = np.diff(np.unwrap([lane_heading, state.heading]))[0] return angle_diff @staticmethod def road_heading(lane_path: List[Lane]): lane = lane_path[-1] start_heading = lane.get_heading_at(0) end_heading = lane.get_heading_at(lane.length) heading_change = np.diff(np.unwrap([start_heading, end_heading]))[0] return heading_change @staticmethod def in_correct_lane(lane_path: List[Lane]): for idx in range(0, len(lane_path) - 1): if lane_path[idx].lane_section == lane_path[idx+1].lane_section: return False return True @classmethod def path_to_goal_length(cls, state: AgentState, goal: TypedGoal, path: List[Lane]) -> float: end_point = goal.goal.center return cls.path_to_point_length(state, end_point, path) @classmethod def vehicle_in_front(cls, target_agent_id: int, lane_path: List[Lane], frame: Dict[int, AgentState]): state = frame[target_agent_id] vehicles_in_route = cls.get_vehicles_in_route(target_agent_id, lane_path, frame) min_dist = np.inf vehicle_in_front = None target_dist_along = cls.dist_along_path(lane_path, state.position) # find the vehicle in front with closest distance for agent_id in vehicles_in_route: agent_point = frame[agent_id].position agent_dist = cls.dist_along_path(lane_path, agent_point) dist = agent_dist - target_dist_along if 1e-4 < dist < min_dist: vehicle_in_front = agent_id min_dist = dist return vehicle_in_front, min_dist def is_vehicle_in_front_missing(self, dist: float, target_id: int, lane_path: List[Lane], frame: Dict[int, AgentState], occlusions: MultiPolygon): """ Args: dist: distance of the closest oncoming vehicle, if any. target_id: id of the vehicle for which we are extracting the features lane_path: lanes executed by the target vehicle if it had the assigned goal frame: current state of the world occlusions: must be unary union of all the occlusions for the ego at that point in time """ target_state = frame[target_id] target_point = Point(*target_state.position) current_lane = self.scenario_map.best_lane_at(target_state.position, target_state.heading, True) midline = current_lane.midline # Remove all the occlusions that are behind the target vehicle as we want possible hidden vehicles in front. 
area_before, area_after = self._get_split_at(midline, target_point) occlusions = self._get_occlusions_past_point(current_lane, lane_path, occlusions, target_point, area_after) if occlusions is None: return self.NON_MISSING occlusions = self._get_significant_occlusions(occlusions) if occlusions is None: # The occlusions are not large enough to hide a vehicle. return self.NON_MISSING distance_to_occlusion = occlusions.distance(target_point) if distance_to_occlusion > self.MAX_OCCLUSION_DISTANCE: # The occlusion is far away, and won't affect the target vehicle decisions. return self.NON_MISSING # Otherwise, the feature is missing if there is an occlusion closer than the vehicle in front. return not dist < distance_to_occlusion + 2.5 @classmethod def dist_along_path(cls, path: List[Lane], point: np.ndarray): shapely_point = Point(*point) midline = cls.get_lane_path_midline(path) dist = midline.project(shapely_point) return dist @staticmethod def get_current_path_lane_idx(path: List[Lane], point: np.ndarray) -> int: """ Get the index of the lane closest to a point""" if type(point) == Point: shapely_point = point else: shapely_point = Point(point[0], point[1]) for idx, lane in enumerate(path): if lane.boundary.contains(shapely_point): return idx closest_lane_dist = np.inf closest_lane_idx = None for idx, lane in enumerate(path): dist = lane.boundary.exterior.distance(shapely_point) if dist < closest_lane_dist: closest_lane_dist = dist closest_lane_idx = idx return closest_lane_idx @staticmethod def path_to_point_length(state: AgentState, point: np.ndarray, path: List[Lane]) -> float: """ Get the length of a path across multiple lanes Args: state: initial state of the vehicle point: final point to be reached path: sequence of lanes traversed Returns: path length """ end_lane = path[-1] end_lane_dist = end_lane.distance_at(point) start_point = state.position start_lane = path[0] start_lane_dist = start_lane.distance_at(start_point) dist = end_lane_dist - start_lane_dist if len(path) > 1: prev_lane = None for idx in range(len(path) - 1): lane = path[idx] lane_change = prev_lane is not None and prev_lane.lane_section == lane.lane_section if not lane_change: dist += lane.length prev_lane = lane return dist @staticmethod def angle_to_goal(state, goal): goal_heading = np.arctan2(goal[1] - state.y, goal[0] - state.x) return np.diff(np.unwrap([goal_heading, state.heading]))[0] @staticmethod def get_junction_lane(lane_path: List[Lane]) -> Union[Lane, None]: for lane in lane_path: if lane.parent_road.junction is not None: return lane return None @staticmethod def get_lane_path_midline(lane_path: List[Lane]) -> LineString: midline_points = [] for idx, lane in enumerate(lane_path[:-1]): # check if next lane is adjacent if lane_path[idx + 1] not in lane.lane_section.all_lanes: midline_points.extend(lane.midline.coords[:-1]) midline_points.extend(lane_path[-1].midline.coords) lane_ls = LineString(midline_points) return lane_ls @staticmethod def _get_split_at(midline, point): """ Split the midline at a specific point. """ point_on_midline = midline.interpolate(midline.project(point)).buffer(0.0001) split_lanes = split(midline, point_on_midline) if len(split_lanes) == 2: # Handle the case in which the split point is at the start/end of the lane. 
line_before, line_after = split_lanes else: line_before, _, line_after = split_lanes return line_before, line_after def _get_oncoming_vehicles(self, lane_path: List[Lane], ego_agent_id: int, frame: Dict[int, AgentState]) \ -> Dict[int, Tuple[AgentState, float]]: oncoming_vehicles = {} ego_junction_lane = self.get_junction_lane(lane_path) if ego_junction_lane is None: return oncoming_vehicles ego_junction_lane_boundary = ego_junction_lane.boundary.buffer(0) lanes_to_cross = self._get_lanes_to_cross(ego_junction_lane) agent_lanes = [(i, self.scenario_map.best_lane_at(s.position, s.heading, True)) for i, s in frame.items()] for lane_to_cross in lanes_to_cross: lane_sequence = self._get_predecessor_lane_sequence(lane_to_cross) midline = self.get_lane_path_midline(lane_sequence) crossing_point = lane_to_cross.boundary.buffer(0).intersection(ego_junction_lane_boundary).centroid crossing_lon = midline.project(crossing_point) # find agents in lane to cross for agent_id, agent_lane in agent_lanes: agent_state = frame[agent_id] if agent_id != ego_agent_id and agent_lane in lane_sequence: agent_lon = midline.project(Point(agent_state.position)) dist = crossing_lon - agent_lon if 0 < dist < self.MAX_ONCOMING_VEHICLE_DIST: oncoming_vehicles[agent_id] = (agent_state, dist) return oncoming_vehicles def _get_lanes_to_cross(self, ego_lane: Lane) -> List[Lane]: ego_road = ego_lane.parent_road ego_incoming_lane = ego_lane.link.predecessor[0] ego_lane_boundary = ego_lane.boundary.buffer(0) lanes = [] for connection in ego_road.junction.connections: for lane_link in connection.lane_links: lane = lane_link.to_lane same_predecessor = (ego_incoming_lane.id == lane_link.from_id and ego_incoming_lane.parent_road.id == connection.incoming_road.id) if not (same_predecessor or self._has_priority(ego_road, lane.parent_road)): overlap = ego_lane_boundary.intersection(lane.boundary.buffer(0)) if overlap.area > 1: lanes.append(lane) return lanes def _get_occlusions_past_point(self, current_lane, other_lanes, all_occlusions, point_of_cut, area_to_keep): """ Get the occlusions that are both on the 'other_lanes' and on the area_to_keep. Args: current_lane: lane on which the vehicle is currently on other_lanes: lanes for which we want to find the occluded areas all_occlusions: all the occlusions in the current frame point_of_cut: point in the current lane at which we want to cut the total occluded areas area_to_keep: part of the MIDLINE we want the occlusions on """ # Find the occlusions that intersect the lanes we want. possible_occlusions = [] for lane in other_lanes: o = all_occlusions.intersection(lane.boundary.buffer(0)) if isinstance(o, MultiPolygon): possible_occlusions.extend(list(o.geoms)) elif isinstance(o, Polygon): possible_occlusions.append(o) possible_occlusions = unary_union(possible_occlusions) if possible_occlusions.area == 0: return None # Find the line perpendicular to the current lane that passes through the point_of_cut ds = current_lane.boundary.boundary.project(point_of_cut) p = current_lane.boundary.boundary.interpolate(ds) slope = (p.y - point_of_cut.y) / (p.x - point_of_cut.x) s_p = sympy.Point(p.x, p.y) direction1 = Point(p.x - point_of_cut.x, p.y - point_of_cut.y) direction2 = Point(point_of_cut.x - p.x, point_of_cut.y - p.y) p1 = self.get_extended_point(30, slope, direction1, s_p) p2 = self.get_extended_point(30, slope, direction2, s_p) # Split the occluded areas along the line we just computed. 
line = LineString([Point(p1.x, p1.y), Point(p2.x, p2.y)]) intersections = split(possible_occlusions, line) # Get the occlusions that are on the area_to_keep. return unary_union([intersection for intersection in intersections.geoms if intersection.intersection(area_to_keep).length > 1]) @staticmethod def get_extended_point(length, slope, direction, point): delta_x = math.sqrt(length ** 2 / (1 + slope ** 2)) delta_y = math.sqrt(length ** 2 - delta_x ** 2) if direction.x < 0: delta_x = -delta_x if direction.y < 0: delta_y = -delta_y return point.translate(delta_x, delta_y) def _get_significant_occlusions(self, occlusions): """ Return a Multipolygon or Polygon with the occlusions that are large enough to fit a hidden vehicle. """ if isinstance(occlusions, MultiPolygon): return unary_union([occlusion for occlusion in occlusions.geoms if occlusion.area > self.MIN_OCCLUSION_AREA]) elif isinstance(occlusions, Polygon): return occlusions if occlusions.area > self.MIN_OCCLUSION_AREA else None def _get_min_dist_from_occlusions_oncoming_lanes(self, lanes_to_cross, ego_junction_lane, ego_junction_lane_boundary, occlusions): """ Get the minimum distance from any of the crossing points to the occlusions that could hide an oncoming vehicle. A crossing point is a point along the target vehicle's path inside a junction. Args: lanes_to_cross: list of lanes that the target vehicle will intersect while inside the junction. ego_junction_lane: lane the target vehicle travels on ego_junction_lane_boundary: boundary of the ego_junction lane occlusions: list of all the occlusions in the frame """ occluded_oncoming_areas = [] crossing_points = [] for lane_to_cross in lanes_to_cross: crossing_point = lane_to_cross.boundary.buffer(0).intersection(ego_junction_lane_boundary).centroid crossing_points.append(crossing_point) lane_sequence = self._get_predecessor_lane_sequence(lane_to_cross) midline = self.get_lane_path_midline(lane_sequence) # Find the occlusions on the lanes that the ego vehicle will cross. if occlusions: # Get the part of the midline of the lanes in which there could be oncoming vehicles, that is before # the crossing point. # Ignore the occlusions that are "after" (w.r.t traffic direction) the crossing point. # We only want to check if there is a hidden vehicle that could collide with the ego. # This can only happen with vehicles that are driving in the lane's direction of traffic # and have not passed the crossing point that the ego will drive through. area_before, area_after = self._get_split_at(midline, crossing_point) # Get the significant occlusions. lane_occlusions = self._get_occlusions_past_point(ego_junction_lane, lane_sequence, occlusions, crossing_point, area_before) if lane_occlusions is None: continue occluded_oncoming_areas.append(lane_occlusions) if occluded_oncoming_areas: occluded_oncoming_areas = unary_union(occluded_oncoming_areas) # Only take the occlusions that could fit a hidden vehicle. occluded_oncoming_areas = self._get_significant_occlusions(occluded_oncoming_areas) # Get the minimum distance from any of the crossing points and the relevant occlusions. if occluded_oncoming_areas: return min([crossing_point.distance(occluded_oncoming_areas) for crossing_point in crossing_points]) # If there are no occlusions large enough to fit a hidden vehicle. return math.inf @staticmethod def get_occlusions_ego_polygon(frame_occlusions, ego_id): """ Given the occlusions in a frame, extract the occlusions w.r.t the ego and return them as list of MultiPolygons. 
""" occlusions_vehicle_frame = frame_occlusions[ego_id] occlusions = [] for road_occlusions in occlusions_vehicle_frame: for lane_occlusions in occlusions_vehicle_frame[road_occlusions]: lane_occlusion = occlusions_vehicle_frame[road_occlusions][lane_occlusions] if lane_occlusion is not None: occlusions.append(lane_occlusion) return occlusions @classmethod def _get_predecessor_lane_sequence(cls, lane: Lane) -> List[Lane]: lane_sequence = [] total_length = 0 while lane is not None and total_length < 100: lane_sequence.insert(0, lane) total_length += lane.midline.length lane = lane.link.predecessor[0] if lane.link.predecessor else None return lane_sequence @staticmethod def _has_priority(ego_road, other_road): for priority in ego_road.junction.priorities: if (priority.high_id == ego_road.id and priority.low_id == other_road.id): return True return False def oncoming_vehicle(self, ego_agent_id: int, lane_path: List[Lane], frame: Dict[int, AgentState], max_dist=100): oncoming_vehicles = self._get_oncoming_vehicles(lane_path, ego_agent_id, frame) min_dist = max_dist closest_vehicle_id = None for agent_id, (agent, dist) in oncoming_vehicles.items(): if dist < min_dist: min_dist = dist closest_vehicle_id = agent_id return closest_vehicle_id, min_dist def is_oncoming_vehicle_missing(self, min_dist: int, lane_path: List[Lane], occlusions: MultiPolygon): ego_junction_lane = self.get_junction_lane(lane_path) if ego_junction_lane is None: return False ego_junction_lane_boundary = ego_junction_lane.boundary.buffer(0) lanes_to_cross = self._get_lanes_to_cross(ego_junction_lane) min_occlusion_distance = self._get_min_dist_from_occlusions_oncoming_lanes(lanes_to_cross, ego_junction_lane, ego_junction_lane_boundary, occlusions) # If the closest occlusion is too far away (or missing), we say that occlusion is not significant. if min_occlusion_distance > self.MAX_OCCLUSION_DISTANCE: return False # If the closest oncoming vehicle is further away to any of the crossing points that the occlusion, # then the feature is missing. The 2.5 meters offset is in case the vehicle is partially occluded. return min_occlusion_distance + 2.5 < min_dist def exit_number(self, initial_state: AgentState, future_lane_path: List[Lane]): # get the exit number in a roundabout if (future_lane_path[-1].parent_road.junction is None or future_lane_path[-1].parent_road.junction.junction_group is None or future_lane_path[-1].parent_road.junction.junction_group.type != 'roundabout'): return 0 position = initial_state.position heading = initial_state.heading possible_lanes = self.scenario_map.lanes_within_angle(position, heading, np.pi / 4, drivable_only=True, max_distance=3) initial_lane = possible_lanes[GoalGenerator.get_best_lane(possible_lanes, position, heading)] lane_path = self.path_to_lane(initial_lane, future_lane_path[-1]) # iterate through lane path and count number of junctions exit_number = 0 entrance_passed = False if lane_path is not None: for lane in lane_path: if self.is_roundabout_entrance(lane): entrance_passed = True elif entrance_passed and self.is_roundabout_junction(lane): exit_number += 1 return exit_number def is_exit_number_missing(self, initial_state: AgentState, goal: TypedGoal): """ The exit number feature is missing if we cannot get the exit number. This happens when: - the target vehicle is already in the roundabout when it becomes visible to the ego. - the target vehicle is occluded w.r.t the ego when it enters the roundabout. 
Args: initial_state: state of the target vehicle when it first became visible to the ego goal: the goal we are trying to get the probability for """ return self.exit_number(initial_state, goal.lane_path) == 0 @staticmethod def is_roundabout_junction(lane: Lane): junction = lane.parent_road.junction return (junction is not None and junction.junction_group is not None and junction.junction_group.type == 'roundabout') def is_roundabout_entrance(self, lane: Lane) -> bool: predecessor_in_roundabout = (lane.link.predecessor is not None and len(lane.link.predecessor) == 1 and self.scenario_map.road_in_roundabout(lane.link.predecessor[0].parent_road)) return self.is_roundabout_junction(lane) and not predecessor_in_roundabout def get_typed_goals(self, trajectory: VelocityTrajectory, goals: List[Tuple[int, int]]): typed_goals = [] goal_gen = GoalGenerator() gen_goals = goal_gen.generate(self.scenario_map, trajectory) for goal in goals: for gen_goal in gen_goals: if gen_goal.goal.reached(Point(*goal)): break else: gen_goal = None typed_goals.append(gen_goal) return typed_goals @staticmethod def goal_type(route: List[Lane]): return GoalGenerator.get_juction_goal_type(route[-1]) @staticmethod def path_to_lane(initial_lane: Lane, target_lane: Lane, max_depth=20) -> List[Lane]: visited_lanes = {initial_lane} open_set = [[initial_lane]] while len(open_set) > 0: lane_sequence = open_set.pop(0) if len(lane_sequence) > max_depth: break lane = lane_sequence[-1] if lane == target_lane: return lane_sequence junction = lane.parent_road.junction neighbours = lane.traversable_neighbours() for neighbour in neighbours: if neighbour not in visited_lanes: visited_lanes.add(neighbour) open_set.append(lane_sequence + [neighbour]) return None class GoalDetector: """ Detects the goals of agents based on their trajectories""" def __init__(self, possible_goals, dist_threshold=1.5): self.dist_threshold = dist_threshold self.possible_goals = possible_goals def detect_goals(self, trajectory: StateTrajectory): goals = [] goal_frame_idxes = [] for point_idx, agent_point in enumerate(trajectory.path): for goal_idx, goal_point in enumerate(self.possible_goals): dist = np.linalg.norm(agent_point - goal_point) if dist <= self.dist_threshold and goal_idx not in goals: goals.append(goal_idx) goal_frame_idxes.append(point_idx) return goals, goal_frame_idxes def get_agents_goals_ind(self, tracks, static_info, meta_info, map_meta, agent_class='car'): goal_locations = map_meta.goals agent_goals = {} for track_idx in range(len(static_info)): if static_info[track_idx]['class'] == agent_class: track = tracks[track_idx] agent_goals[track_idx] = [] for i in range(static_info[track_idx]['numFrames']): point = np.array([track['xCenter'][i], track['yCenter'][i]]) for goal_idx, loc in enumerate(goal_locations): dist = np.linalg.norm(point - loc) if dist < self.dist_threshold and loc not in agent_goals[track_idx]: agent_goals[track_idx].append(loc) return agent_goals
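Several of the features above (`angle_in_lane`, `road_heading`, `angle_to_goal`) compute a signed angular difference with the same `np.diff(np.unwrap([a, b]))[0]` idiom, which avoids the wrap-around artifact at ±π that naive subtraction produces. A standalone sketch of just that idiom (the heading values are made up):

```
import numpy as np

def signed_angle_diff(reference: float, heading: float) -> float:
    """Signed difference heading - reference, mapped into [-pi, pi]."""
    return np.diff(np.unwrap([reference, heading]))[0]

# Naive subtraction across the +pi/-pi seam reports a ~ -6.18 rad turn;
# the unwrap idiom recovers the small positive turn instead.
lane_heading = np.pi - 0.05      # lane points just below +pi
vehicle_heading = -np.pi + 0.05  # vehicle points just above -pi
print(vehicle_heading - lane_heading)                    # ~ -6.18
print(signed_angle_diff(lane_heading, vehicle_heading))  # ~ +0.10
```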
State Before: R : Type u a✝ b : R m n✝ : ℕ inst✝ : Semiring R p q : R[X] n : ℕ a : R ⊢ support (↑(monomial n) a) ⊆ {n} State After: R : Type u a✝ b : R m n✝ : ℕ inst✝ : Semiring R p q : R[X] n : ℕ a : R ⊢ (match { toFinsupp := Finsupp.single n a } with | { toFinsupp := p } => p.support) ⊆ {n} Tactic: rw [← ofFinsupp_single, support] State Before: R : Type u a✝ b : R m n✝ : ℕ inst✝ : Semiring R p q : R[X] n : ℕ a : R ⊢ (match { toFinsupp := Finsupp.single n a } with | { toFinsupp := p } => p.support) ⊆ {n} State After: no goals Tactic: exact Finsupp.support_single_subset
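The two tactic steps in this trace come from one short lemma about polynomial monomials. A hedged Lean 4 reconstruction of the whole declaration (the import path and variable context are assumptions read off the goal state; mathlib appears to prove this statement under the name `Polynomial.support_monomial'`):

```
-- A sketch, not verbatim mathlib; stated as `example` to avoid claiming the name.
import Mathlib.Data.Polynomial.Basic

open Polynomial

variable {R : Type*} [Semiring R]

example (n : ℕ) (a : R) : support (monomial n a) ⊆ {n} := by
  rw [← ofFinsupp_single, support]
  exact Finsupp.support_single_subset
```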
State Before: α : Type u_1 E : Type ?u.30268 F : Type u_2 𝕜 : Type ?u.30274 inst✝¹ : NormedAddCommGroup F inst✝ : NormedSpace ℝ F m✝ : MeasurableSpace α μ✝ : Measure α m : MeasurableSpace α μ : Measure α c : ℝ≥0∞ s : Set α ⊢ weightedSMul (c • μ) s = ENNReal.toReal c • weightedSMul μ s State After: case h α : Type u_1 E : Type ?u.30268 F : Type u_2 𝕜 : Type ?u.30274 inst✝¹ : NormedAddCommGroup F inst✝ : NormedSpace ℝ F m✝ : MeasurableSpace α μ✝ : Measure α m : MeasurableSpace α μ : Measure α c : ℝ≥0∞ s : Set α x : F ⊢ ↑(weightedSMul (c • μ) s) x = ↑(ENNReal.toReal c • weightedSMul μ s) x Tactic: ext1 x State Before: case h α : Type u_1 E : Type ?u.30268 F : Type u_2 𝕜 : Type ?u.30274 inst✝¹ : NormedAddCommGroup F inst✝ : NormedSpace ℝ F m✝ : MeasurableSpace α μ✝ : Measure α m : MeasurableSpace α μ : Measure α c : ℝ≥0∞ s : Set α x : F ⊢ ↑(weightedSMul (c • μ) s) x = ↑(ENNReal.toReal c • weightedSMul μ s) x State After: case h α : Type u_1 E : Type ?u.30268 F : Type u_2 𝕜 : Type ?u.30274 inst✝¹ : NormedAddCommGroup F inst✝ : NormedSpace ℝ F m✝ : MeasurableSpace α μ✝ : Measure α m : MeasurableSpace α μ : Measure α c : ℝ≥0∞ s : Set α x : F ⊢ ↑(weightedSMul (c • μ) s) x = (ENNReal.toReal c • ↑(weightedSMul μ s)) x Tactic: push_cast State Before: case h α : Type u_1 E : Type ?u.30268 F : Type u_2 𝕜 : Type ?u.30274 inst✝¹ : NormedAddCommGroup F inst✝ : NormedSpace ℝ F m✝ : MeasurableSpace α μ✝ : Measure α m : MeasurableSpace α μ : Measure α c : ℝ≥0∞ s : Set α x : F ⊢ ↑(weightedSMul (c • μ) s) x = (ENNReal.toReal c • ↑(weightedSMul μ s)) x State After: case h α : Type u_1 E : Type ?u.30268 F : Type u_2 𝕜 : Type ?u.30274 inst✝¹ : NormedAddCommGroup F inst✝ : NormedSpace ℝ F m✝ : MeasurableSpace α μ✝ : Measure α m : MeasurableSpace α μ : Measure α c : ℝ≥0∞ s : Set α x : F ⊢ ENNReal.toReal (↑↑(c • μ) s) • x = ENNReal.toReal c • ENNReal.toReal (↑↑μ s) • x Tactic: simp_rw [Pi.smul_apply, weightedSMul_apply] State Before: case h α : Type u_1 E : Type ?u.30268 F : Type u_2 𝕜 : Type ?u.30274 inst✝¹ : NormedAddCommGroup F inst✝ : NormedSpace ℝ F m✝ : MeasurableSpace α μ✝ : Measure α m : MeasurableSpace α μ : Measure α c : ℝ≥0∞ s : Set α x : F ⊢ ENNReal.toReal (↑↑(c • μ) s) • x = ENNReal.toReal c • ENNReal.toReal (↑↑μ s) • x State After: case h α : Type u_1 E : Type ?u.30268 F : Type u_2 𝕜 : Type ?u.30274 inst✝¹ : NormedAddCommGroup F inst✝ : NormedSpace ℝ F m✝ : MeasurableSpace α μ✝ : Measure α m : MeasurableSpace α μ : Measure α c : ℝ≥0∞ s : Set α x : F ⊢ ENNReal.toReal ((c • ↑↑μ) s) • x = ENNReal.toReal c • ENNReal.toReal (↑↑μ s) • x Tactic: push_cast State Before: case h α : Type u_1 E : Type ?u.30268 F : Type u_2 𝕜 : Type ?u.30274 inst✝¹ : NormedAddCommGroup F inst✝ : NormedSpace ℝ F m✝ : MeasurableSpace α μ✝ : Measure α m : MeasurableSpace α μ : Measure α c : ℝ≥0∞ s : Set α x : F ⊢ ENNReal.toReal ((c • ↑↑μ) s) • x = ENNReal.toReal c • ENNReal.toReal (↑↑μ s) • x State After: no goals Tactic: simp_rw [Pi.smul_apply, smul_eq_mul, toReal_mul, smul_smul]
SUBROUTINE FN_TRUNC( FILE_NAME ) !*********************************************************************** !* Truncates a File Name for the Operating System !* !* Language: Fortran !* !* Author: Stuart G. Mentzer !* !* Date: 2003/07/18 !*********************************************************************** ! Headers INCLUDE 'platform.fi' INCLUDE 'uds_fxn.fi' ! Arguments ______________________________________________________ CHARACTER*(*) FILE_NAME ! File name to truncate ! Variables ______________________________________________________ INTEGER IE, ITV, PFN, L_NAM, L_EXT CHARACTER TEMP_NAME*256 ! Functions ______________________________________________________ CHARACTER LJUST*256 EXTERNAL LJUST ! Find start of path-free file name FILE_NAME = LJUST( FILE_NAME ) PFN = FN_POSN( FILE_NAME ) IF ( PFN .EQ. 0 ) RETURN ! Truncate name if necessary IE = FE_POSN( FILE_NAME(PFN:) ) IF ( IE .GT. 0 ) THEN L_NAM = IE - 1 ELSE L_NAM = LEN_TRIM( FILE_NAME(PFN:) ) END IF IF ( L_NAM .GT. LEN_FN_NAM ) THEN ! Truncate name TEMP_NAME = & FILE_NAME(:PFN-1+LEN_FN_NAM)//FILE_NAME(PFN+L_NAM:) FILE_NAME = TEMP_NAME END IF ! Truncate extension if necessary IE = FE_POSN( FILE_NAME ) IF ( IE .GT. 0 ) THEN ! Has extension ITV = FT_POSN( FILE_NAME ) IF ( ITV .EQ. 0 ) ITV = FV_POSN( FILE_NAME ) IF ( ITV .EQ. 0 ) THEN ! No type/version L_EXT = LEN_TRIM( FILE_NAME(IE+1:) ) ELSE IF ( ITV .GT. IE + 1 ) THEN L_EXT = LEN_TRIM( FILE_NAME(IE+1:ITV-1) ) ELSE L_EXT = 0 END IF IF ( L_EXT .GT. LEN_FN_EXT ) THEN ! Truncate extension IF ( ITV .EQ. 0 ) THEN ! No type/version FILE_NAME(IE+LEN_FN_EXT+1:) = ' ' ELSE ! Has type/version TEMP_NAME = & FILE_NAME(:IE+LEN_FN_EXT)//FILE_NAME(ITV:) FILE_NAME = TEMP_NAME END IF END IF END IF RETURN END
lemma closed_Collect_le: fixes f g :: "'a :: topological_space \<Rightarrow> 'b::linorder_topology" assumes f: "continuous_on UNIV f" and g: "continuous_on UNIV g" shows "closed {x. f x \<le> g x}"
Formal statement is: lemma locally_connected_2: assumes "locally connected S" "openin (top_of_set S) t" "x \<in> t" shows "openin (top_of_set S) (connected_component_set t x)" Informal statement is: If $S$ is locally connected and $t$ is an open subset of $S$, then the connected component of $t$ containing $x$ is open in $S$.
# Training a neuron with the LogLoss loss function

<h3 style="text-align: center;"><b>A neuron with a sigmoid</b></h3>

Let us again consider a neuron with a sigmoid activation, that is

$$f(x) = \sigma(x)=\frac{1}{1+e^{-x}}$$

Earlier we established that **training a sigmoid neuron with the quadratic loss function**:

$$MSE(w, x) = \frac{1}{2n}\sum_{i=1}^{n} (\hat{y_i} - y_i)^2 = \frac{1}{2n}\sum_{i=1}^{n} (\sigma(w \cdot x_i) - y_i)^2$$

where $w \cdot x_i$ is the dot product and $\sigma(w \cdot x_i) =\frac{1}{1+e^{-w \cdot x_i}} $ is the sigmoid -- is **inefficient**: we saw that even after a large number of iterations the neuron still predicts poorly.

Let us take another look at the gradient-descent formula for the $MSE$ loss with respect to the neuron's weights:

$$ \frac{\partial MSE}{\partial w} = \frac{1}{n} X^T (\sigma(w \cdot X) - y)\sigma(w \cdot X)(1 - \sigma(w \cdot X))$$

Now look at the graph of the sigmoid:

**Its values are numbers between 0 and 1.**

Examining the formula more closely, note that since the sigmoid takes values between 0 and 1 (and hence $(1-\sigma)$ does too), we multiply $X^T$ by the column $(\sigma(w \cdot X) - y)$ of numbers between -1 and 1, and then also by the columns $\sigma(w \cdot X)$ and $(1 - \sigma(w \cdot X))$ of numbers between 0 and 1. Thus, at best, $\frac{\partial{Loss}}{\partial{w}}$ is a column of numbers of order 0.01 at most (on average; clearly, if the sigmoid outputs all zeros the gradient is 0, and if all ones, it is 0 as well). We then multiply by the gradient-descent step, which is usually of order 0.001, or 0.1 at most. So we subtract numbers of order ~0.0001 from the weights. A rather slow descent, isn't it? This is called the **vanishing gradients problem**.

To avoid this problem in classification tasks where the model is a neuron with a sigmoid activation function predicting class-membership "probabilities", one uses **LogLoss**:

$$J(\hat{y}, y) = -\frac{1}{n} \sum_{i=1}^n y_i \log(\hat{y_i}) + (1 - y_i) \log(1 - \hat{y_i}) = -\frac{1}{n} \sum_{i=1}^n y_i \log(\sigma(w \cdot x_i)) + (1 - y_i) \log(1 - \sigma(w \cdot x_i))$$

where, as before, $y$ is the $(n, 1)$ column of true class labels and $\hat{y}$ is the $(n, 1)$ column of the neuron's predictions.


```
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
import pandas as pd
```


```
def loss(y_pred, y):
    return -np.mean(y * np.log(y_pred) + (1 - y) * np.log(1 - y_pred))
```

Note that here we are talking specifically about **binary classification (two classes)**; in multi-class classification one uses the loss function called *cross-entropy*, which generalizes LogLoss to the case of several classes.

Why will things be better now? Previously the problem was the multiplication of small numbers in the gradient.
Let's see what happens now:

* For the weight $w_j$:

$$ \frac{\partial Loss}{\partial w_j} = -\frac{1}{n} \sum_{i=1}^n \left(\frac{y_i}{\sigma(w \cdot x_i)} - \frac{1 - y_i}{1 - \sigma(w \cdot x_i)}\right)(\sigma(w \cdot x_i))_{w_j}' = -\frac{1}{n} \sum_{i=1}^n \left(\frac{y_i}{\sigma(w \cdot x_i)} - \frac{1 - y_i}{1 - \sigma(w \cdot x_i)}\right)\sigma(w \cdot x_i)(1 - \sigma(w \cdot x_i))x_{ij} = $$
$$-\frac{1}{n} \sum_{i=1}^n \left(y_i - \sigma(w \cdot x_i)\right)x_{ij}$$

* The gradient of the $Loss$ with respect to the weight vector is the vector whose $j$-th component is $\frac{\partial Loss}{\partial w_j}$ (recall that there are $m$ weights in total):

$$\begin{align}
    \frac{\partial Loss}{\partial w} &= \begin{bmatrix}
           -\frac{1}{n} \sum_{i=1}^n \left(y_i - \sigma(w \cdot x_i)\right)x_{i1} \\
           -\frac{1}{n} \sum_{i=1}^n \left(y_i - \sigma(w \cdot x_i)\right)x_{i2} \\
           \vdots \\
           -\frac{1}{n} \sum_{i=1}^n \left(y_i - \sigma(w \cdot x_i)\right)x_{im}
         \end{bmatrix}
\end{align}=\frac{1}{n} X^T \left(\hat{y} - y\right)$$

By analogy with $w_j$, derive the formula for the free term (the bias) $b$ (*hint*: you may assume that it comes with a feature $x_{i0}=1$ for all $i$):

We have obtained a new rule for updating $w$ and $b$.


```
def sigmoid(x):
    """Sigmoid function"""
    return 1 / (1 + np.exp(-x))
```

Implement a neuron with the LogLoss loss function:


```
class Neuron:
    def __init__(self, w=None, b=0):
        """
        :param: w -- weight vector
        :param: b -- bias
        """
        # for now we do not know the size of the matrix X, so we do not know how many weights there will be
        self.w = w
        self.b = b

    def activate(self, x):
        return sigmoid(x)

    def forward_pass(self, X):
        """
        This function computes the neuron's output for a given set of objects
        :param: X -- matrix of objects of size (n, m), each row is a separate object
        :return: vector of size (n, 1) with the neuron's predicted probabilities
        """
        # implement forward_pass
        n = X.shape[0]
        y_pred = np.zeros((n, 1))
        y_pred = self.activate(X @ self.w.reshape(X.shape[1], 1) + self.b)
        return y_pred.reshape(-1, 1)

    def backward_pass(self, X, y, y_pred, learning_rate=0.1):
        """
        Updates the neuron's weight values on the given data
        :param: X -- matrix of objects of size (n, m)
                y -- vector of correct answers of size (n, 1)
                learning_rate -- the "learning rate" (the symbol alpha in the formulas above)
        This method returns nothing; it only updates the weights
        correctly using gradient descent.
        """
        # update the weights according to the formulas written above
        n = len(y)
        y = np.array(y).reshape(-1, 1)
        sigma = self.activate(X @ self.w + self.b)
        self.w = self.w - learning_rate * (X.T @ (sigma - y)) / n
        self.b = self.b - learning_rate * np.mean(sigma - y)

    def fit(self, X, y, num_epochs=5000):
        """
        Descend to the minimum
        :param: X -- matrix of objects of size (n, m)
                y -- vector of correct answers of size (n, 1)
                num_epochs -- number of training iterations
        :return: loss_values -- vector of loss-function values
        """
        self.w = np.zeros((X.shape[1], 1))  # column (m, 1)
        self.b = 0  # bias
        loss_values = []  # loss values at successive weight-update iterations

        for i in range(num_epochs):
            # predictions with the current weights
            y_pred = self.forward_pass(X)
            # compute the loss with the current weights
            loss_values.append(loss(y_pred, y))
            # update the weights via the gradient-descent formula
            self.backward_pass(X, y, y_pred)

        return loss_values
```

<h3 style="text-align: center;"><b>Testing</b></h3>

Let us test the neuron trained with the new loss function on the same data as in the previous notebook:

**Checking forward_pass()**


```
w = np.array([1., 2.]).reshape(2, 1)
b = 2.
X = np.array([[1., 3.],
              [2., 4.],
              [-1., -3.2]])

neuron = Neuron(w, b)
y_pred = neuron.forward_pass(X)
print("y_pred = " + str(y_pred))
```

    y_pred = [[0.99987661]
     [0.99999386]
     [0.00449627]]

**Checking backward_pass()**


```
y = np.array([1, 0, 1]).reshape(3, 1)
```


```
neuron.backward_pass(X, y, y_pred)
print("w = " + str(neuron.w))
print("b = " + str(neuron.b))
```

    w = [[0.9001544 ]
     [1.76049276]]
    b = 1.9998544421863216

Check it on the "apples and pears" and "voice" datasets.


```
data_apples_pears = pd.read_csv('apples_pears.csv')
```


```
data_apples_pears.head()
```

|   | yellowness | symmetry | target |
|---|-----------|----------|--------|
| 0 | 0.779427 | 0.257305 | 1.0 |
| 1 | 0.777005 | 0.015915 | 1.0 |
| 2 | 0.977092 | 0.304210 | 1.0 |
| 3 | 0.043032 | 0.140899 | 0.0 |
| 4 | 0.760433 | 0.193123 | 1.0 |


```
plt.figure(figsize=(10, 8))
plt.scatter(data_apples_pears.iloc[:, 0], data_apples_pears.iloc[:, 1], c=data_apples_pears['target'], cmap='rainbow')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('yellowness', fontsize=14)
plt.ylabel('symmetry', fontsize=14)
plt.show();
```


```
X = data_apples_pears.iloc[:,:2].values  # object-feature matrix
y = data_apples_pears['target'].values.reshape((-1, 1))  # classes (a column of zeros and ones)
```


```
%%time
neuron = Neuron(w=np.random.rand(X.shape[1], 1), b=np.random.rand(1))
losses = neuron.fit(X, y, num_epochs=10000)

plt.figure(figsize=(10, 8))
plt.plot(losses)
plt.title('Loss function', fontsize=15)
plt.xlabel('iteration number', fontsize=14)
plt.ylabel('$LogLoss(\hat{y}, y)$', fontsize=14)
plt.show()
```


```
plt.figure(figsize=(10, 8))
plt.scatter(data_apples_pears.iloc[:, 0], data_apples_pears.iloc[:, 1], c=np.array(neuron.forward_pass(X) > 0.7).ravel(), cmap='spring')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('yellowness', fontsize=14)
plt.ylabel('symmetry', fontsize=14)
plt.show();
```


```
y_pred = np.array(neuron.forward_pass(X) > 0.7).ravel()
from sklearn.metrics import accuracy_score
print('Accuracy (share of correct answers, out of 100%) of our neuron: {:.3f} %'.format(
    accuracy_score(y, y_pred) * 100))
```

    Accuracy (share of correct answers, out of 100%) of our neuron: 98.800 %


```
data_voice = pd.read_csv("voice.csv")
data_voice['label'] = data_voice['label'].apply(lambda x: 1 if x == 'male' else 0)
```


```
data_voice.head()
```

| | meanfreq | sd | median | Q25 | Q75 | IQR | skew | kurt | sp.ent | sfm | mode | centroid | meanfun | minfun | maxfun | meandom | mindom | maxdom | dfrange | modindx | label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.059781 | 0.064241 | 0.032027 | 0.015071 | 0.090193 | 0.075122 | 12.863462 | 274.402906 | 0.893369 | 0.491918 | 0.000000 | 0.059781 | 0.084279 | 0.015702 | 0.275862 | 0.007812 | 0.007812 | 0.007812 | 0.000000 | 0.000000 | 1 |
| 1 | 0.066009 | 0.067310 | 0.040229 | 0.019414 | 0.092666 | 0.073252 | 22.423285 | 634.613855 | 0.892193 | 0.513724 | 0.000000 | 0.066009 | 0.107937 | 0.015826 | 0.250000 | 0.009014 | 0.007812 | 0.054688 | 0.046875 | 0.052632 | 1 |
| 2 | 0.077316 | 0.083829 | 0.036718 | 0.008701 | 0.131908 | 0.123207 | 30.757155 | 1024.927705 | 0.846389 | 0.478905 | 0.000000 | 0.077316 | 0.098706 | 0.015656 | 0.271186 | 0.007990 | 0.007812 | 0.015625 | 0.007812 | 0.046512 | 1 |
| 3 | 0.151228 | 0.072111 | 0.158011 | 0.096582 | 0.207955 | 0.111374 | 1.232831 | 4.177296 | 0.963322 | 0.727232 | 0.083878 | 0.151228 | 0.088965 | 0.017798 | 0.250000 | 0.201497 | 0.007812 | 0.562500 | 0.554688 | 0.247119 | 1 |
| 4 | 0.135120 | 0.079146 | 0.124656 | 0.078720 | 0.206045 | 0.127325 | 1.101174 | 4.333713 | 0.971955 | 0.783568 | 0.104261 | 0.135120 | 0.106398 | 0.016931 | 0.266667 | 0.712812 | 0.007812 | 5.484375 | 5.476562 | 0.208274 | 1 |


```
# Shuffle the data. Initially all the male samples come first, followed by all the female ones
data_voice = data_voice.sample(frac=1)
```


```
X_train = data_voice.iloc[:int(len(data_voice)*0.7), :-1]  # object-feature matrix
y_train = data_voice.iloc[:int(len(data_voice)*0.7), -1]  # true sex labels (male/female)
X_test = data_voice.iloc[int(len(data_voice)*0.7):, :-1]  # object-feature matrix
y_test = data_voice.iloc[int(len(data_voice)*0.7):, -1]  # true sex labels (male/female)
```


```
from sklearn.preprocessing import StandardScaler
```


```
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train.values)
X_test = scaler.transform(X_test.values)
```


```
plt.figure(figsize=(10, 8))
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap='rainbow')
plt.title('Male and female voices', fontsize=15)
plt.show();
```


```
neuron = Neuron(w=np.random.rand(X.shape[1], 1), b=np.random.rand(1))
losses = neuron.fit(X_train, y_train.values);
```


```
y_pred = neuron.forward_pass(X_test)
y_pred = (y_pred > 0.5).astype(int)
```


```
from sklearn.metrics import accuracy_score
print('Accuracy (share of correct answers, out of 100%) of our neuron: {:.3f} %'.format(
    accuracy_score(y_test, y_pred) * 100))
```

    Accuracy (share of correct answers, out of 100%) of our neuron: 97.476 %


```
plt.figure(figsize=(10, 8))
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_pred, cmap='spring')
plt.title('Male and female voices', fontsize=15)
plt.show();
```


```

```
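As a sanity check on the gradient formulas derived above, here is a minimal sketch (not part of the original notebook) that compares the analytic LogLoss gradients $\frac{1}{n}X^T(\hat{y} - y)$ for $w$ and $\frac{1}{n}\sum_i(\hat{y_i} - y_i)$ for $b$ against central finite differences; all variable names here are illustrative.


```
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def logloss(w, b, X, y):
    p = sigmoid(X @ w + b)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.integers(0, 2, size=(50, 1)).astype(float)
w = rng.normal(size=(3, 1))
b = 0.1

# analytic gradients, exactly as derived above
grad_w = X.T @ (sigmoid(X @ w + b) - y) / len(y)
grad_b = np.mean(sigmoid(X @ w + b) - y)

# central finite differences, one coordinate at a time
eps = 1e-6
num_w = np.zeros_like(w)
for j in range(w.size):
    e = np.zeros_like(w)
    e[j] = eps
    num_w[j] = (logloss(w + e, b, X, y) - logloss(w - e, b, X, y)) / (2 * eps)
num_b = (logloss(w, b + eps, X, y) - logloss(w, b - eps, X, y)) / (2 * eps)

# both differences should be tiny (around 1e-9)
print(np.max(np.abs(grad_w - num_w)), abs(grad_b - num_b))
```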
From Undecidability.L Require Export Datatypes.LNat Datatypes.LBool Tactics.LTactics Computability.Computability Tactics.Lbeta. Section MuRecursor. Variable P : term. Hypothesis P_proc : proc P. Hint Resolve P_proc : LProc. Hypothesis dec'_P : forall (n:nat), (exists (b:bool), app P (ext n) == ext b ). Lemma dec_P : forall n:nat, {b:bool | app P (ext n) == ext b}. intros. eapply lcomp_comp. -apply bool_enc_inv_correct. -apply dec'_P. Qed. Section hoas. Import HOAS_Notations. Definition mu' := Eval cbn -[enc] in rho (λ mu P n, (P n) (!!K n) (λ Sn, mu P Sn) (!!(ext S) n)). End hoas. Import L_Notations. Lemma mu'_proc : proc mu'. unfold mu'; Lproc. Qed. Hint Resolve mu'_proc : LProc. Lemma mu'_n_false n: P (ext n) == ext false -> mu' P (ext n) >* mu' P (ext (S n)). Proof. intros R. apply equiv_lambda in R;[|Lproc]. recStep mu'. unfold K. Lsimpl. Qed. Lemma mu'_0_false n: (forall n', n' < n -> P (ext n') == ext false) -> mu' P (ext 0) >* mu' P (ext n). Proof. intros H. induction n. -reflexivity. -rewrite IHn. +apply mu'_n_false. apply H. lia. +intros. apply H. lia. Qed. Lemma mu'_n_true (n:nat): P (ext n) == ext true -> mu' P (ext n) == ext n. Proof. intros R. recStep mu'. Lsimpl. rewrite R. unfold K. Lsimpl. Qed. (* TODO: mu' sound*) Lemma mu'_sound v n: proc v -> mu' P (ext (n:nat)) == v -> (forall n', n' < n -> P (ext n') == ext false) -> exists n0, n0 >= n /\ P (ext n0) == ext true /\ v == ext n0 /\ forall n', n' < n0 -> P (ext (n':nat)) == ext false. Proof. intros pv. intros R. apply equiv_lambda in R;try Lproc. apply star_pow in R. destruct R as [k R]. revert n R. apply complete_induction with (x:=k);clear k;intros k. intros IH n R H. specialize (dec_P n). destruct (dec_P n) as [[] eq]. -exists n;intuition. apply pow_star in R. apply star_equiv in R. rewrite <- R. now rewrite mu'_n_true. -assert (R':=mu'_n_false eq). apply star_pow in R'. destruct R' as [k' R']. destruct (parametrized_confluence uniform_confluence R R') as [x [l [u [le1 [le2 [R1 [R2 eq']]]]]]]. destruct x. +inv R1. apply IH in R2 as [n0 [ge1 [Rn0 [eq0 H0]]]]. *exists n0. repeat split;try assumption;lia. *decide (l=k);[|lia]. subst l. assert (k'=0) by lia. subst k'. inv R'. apply inj_enc in H1. lia. *intros. decide (n'=n). subst. tauto. apply H. lia. +destruct R1 as [? [C _]]. destruct pv as [_ [v']]. subst v. inv C. Qed. Lemma mu'_complete n0 : P (ext n0) == ext true -> (forall n', n' < n0 -> P (ext n') == ext false) -> mu' P (ext 0) == ext n0. Proof. intros. rewrite mu'_0_false with (n:=n0);try tauto. -recStep mu'. Lsimpl. rewrite H. unfold K. Lsimpl. Qed. (* the mu combinator:*) Definition mu :term := lam (mu' #0 (ext 0)). Lemma mu_proc : proc mu. unfold mu. Lproc. Qed. Hint Resolve mu_proc : LProc. Lemma mu_sound v : lambda v -> mu P == v -> exists n, v = ext n /\ P (ext n) == ext true /\ (forall n', n' < n -> P (ext n') == ext false). Proof. unfold mu. intros lv R. standardizeHypo 100. apply mu'_sound in R. -destruct R as [n ?]. exists n. intuition. apply unique_normal_forms;try Lproc. assumption. -split;[|Lproc]. apply equiv_lambda in R;auto. apply closed_star in R;Lproc. -intros. lia. Qed. Lemma mu_complete (n:nat) : P (ext n) == ext true -> exists n0:nat, mu P == ext n0. Proof. remember 0 as n0. assert (forall n':nat, n'< n-(n-n0) -> P (ext n') == ext false) by (intros;lia). assert ((n-n0)+n0=n) by lia. remember (n-n0) as k. clear Heqk Heqn0 H0 n0. induction k. -simpl in *. subst. intros. eexists. unfold mu. Lsimpl. apply mu'_complete;eauto. intros. apply H. lia. -intros. destruct (dec_P (n-S k)) as [y P']. 
destruct y. +eexists. unfold mu. Lsimpl. apply mu'_complete. exact P'. exact H. +apply IHk. intros. decide (n' = n - (S k)). *subst. exact P'. *apply H. lia. *assumption. Qed. Lemma mu_spec : converges (mu P) <-> exists n : nat, P (ext n) == ext true. Proof. split. - intros (? & ? & ?). eapply mu_sound in H as (? & ? & ? & ?); eauto. - intros []. eapply mu_complete in H as []. exists (ext x0). split. eauto. eapply proc_ext. Qed. End MuRecursor. Hint Resolve mu'_proc : LProc. Hint Resolve mu_proc : LProc.
#! /usr/bin/env python3 import numpy as np ngenes = 30 nsamps_a = 2 nsamps_b = 3 header = [] header.append("gene") header.extend([f"sample{1 + int(s) :02}" for s in range(nsamps_a + nsamps_b)]) print("\t".join(header)) for i in range(ngenes): record = [] record.append(f"gene{1 + i :03}") record.extend([str(x) for x in np.random.poisson(lam=100, size=nsamps_a)]) record.extend([str(x) for x in np.random.poisson(lam=100, size=nsamps_b)]) print("\t".join(record))
function pass = test_sample( ) % Test diskfun sample() command tol = 100*chebfunpref().cheb2Prefs.chebfun2eps; % Function to test f = diskfun(@(x,y) sin(pi*x.*y)); % Ensure the matrix of sampled values is correct. [m,n] = length(f); [nn,mm] = size(sample(f)); pass(1) = (m == mm) && (n == nn); % Sample on fixed grids of various sizes to make sure the right size output % is given. m = 120; n = 121; [nn,mm] = size(sample(f, m, n)); pass(2) = (m == mm) && (n == nn); m = 121; n = 120; [nn,mm] = size(sample(f, m, n)); pass(3) = (m == mm) && (n == nn); % Check samples are correct. % m even and n odd m = 30; n = 2*20+1; cp = chebpts(n); [t,r] = meshgrid(trigpts(m, [-pi, pi]), cp((n+1)/2:end)); F = f(t,r, 'polar'); G = sample(f, m, (n+1)/2); pass(4) = norm(F(:) - G(:), inf) < tol; [U, D, V] = sample(f, m, (n+1)/2); G = U * D * V.'; pass(5) = norm(F(:) - G(:), inf) < tol; % m odd and n odd m = 31; n = 2*20+1; cp = chebpts(n); [t,r] = meshgrid(trigpts(m, [-pi, pi]), cp((n+1)/2:end)); F = f(t,r, 'polar'); G = sample(f, m, (n+1)/2); pass(6) = norm(F(:) - G(:), inf) < tol; [U, D, V] = sample(f, m, (n+1)/2); G = U * D * V.'; pass(7) = norm(F(:) - G(:), inf) < tol; % m odd and n even m = 31; n = 2*20-1; cp = chebpts(n); [t,r] = meshgrid(trigpts(m, [-pi, pi]), cp((n+1)/2:end)); F = f(t,r, 'polar'); G = sample(f, m, (n+1)/2); pass(8) = norm(F(:) - G(:), inf) < tol; [U, D, V] = sample(f, m, (n+1)/2); G = U * D * V.'; pass(9) = norm(F(:) - G(:), inf) < tol; % m even and n even m = 30; n = 2*20-1; cp = chebpts(n); [t,r] = meshgrid(trigpts(m, [-pi, pi]), cp((n+1)/2:end)); F = f(t,r, 'polar'); G = sample(f, m, (n+1)/2); pass(10) = norm(F(:) - G(:), inf) < tol; [U, D, V] = sample(f, m, (n+1)/2); G = U * D * V.'; pass(11) = norm(F(:) - G(:), inf) < tol; % Sample should return all ones for the function 1. f = diskfun(@(x,y) 1 + 0*x); F = sample(f, 128, 128); pass(12) = norm(F(:) - 1, inf) < tol; % Check that errors are caught try F = sample(f, 0, 20); pass(13) = false; catch ME pass(13) = strcmp(ME.identifier, 'CHEBFUN:DISKFUN:sample:inputs'); end try F = sample(f, 20, 0); pass(14) = false; catch ME pass(14) = strcmp(ME.identifier, 'CHEBFUN:DISKFUN:sample:inputs'); end end
(* (c) Copyright 2006-2018 Microsoft Corporation and Inria. *) (* Distributed under the terms of CeCILL-B. *) From mathcomp Require Import ssreflect ssrfun ssrbool ssrnat ssrint. From fourcolor Require Import part hubcap present. (******************************************************************************) (* This file contains the unavoidability proof for cartwheels with a hub *) (* arity of 6. This proof is a reencoding of the argument that appeared in *) (* the main text of the Robertson et al. revised proof. *) (******************************************************************************) Set Implicit Arguments. Unset Strict Implicit. Unset Printing Implicit Defensive. Lemma exclude6 : excluded_arity 6. Proof. Presentation red. Pcase L0_1: s[1] > 6. Pcase L1_1: s[3] > 6. Pcase: s[2] > 5. Pcase: s[5] > 6. Pcase: s[4] > 5. Pcase: s[6] > 5. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Hubcap $[1,2]<=(-1) $[3,5]<=(-1) $[4,6]<=2 $. Pcase: s[6] > 5. Hubcap $[1,2]<=0 $[3,5]<=(-2) $[4,6]<=2 $. Hubcap $[1,2]<=(-1) $[3,5]<=(-3) $[4,6]<=4 $. Pcase L3_1: s[6] > 6. Pcase: s[4] > 6. Pcase: s[5] > 5. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Hubcap $[1,2]<=0 $[3,5]<=2 $[4,6]<=(-2) $. Pcase: s[4] > 5. Pcase: s[5] > 5. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Hubcap $[1,2]<=0 $[3,5]<=1 $[4,6]<=(-1) $. Pcase: s[5] > 5. Hubcap $[1,2]<=0 $[3,5]<=(-1) $[4,6]<=1 $. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Pcase: s[4] > 6. Similar to *L3_1[3]. Pcase L3_2: s[6] <= 5. Pcase: s[4] <= 5. Reducible. Pcase: s[5] <= 5. Hubcap $[2,3]<=(-1) $[1,5]<=0 $[4,6]<=1 $. Hubcap $[2,3]<=0 $[1,5]<=(-1) $[4,6]<=1 $. Pcase: s[4] <= 5. Similar to *L3_2[3]. Pcase: s[5] <= 5. Pcase L4_1: h[6] > 6. Hubcap $[1,2]<=(-1) $[3,5]<=1 $[4,6]<=0 $. Pcase: h[5] > 6. Similar to *L4_1[3]. Hubcap $[2,3]<=(-1) $[1,5]<=1 $[4,6]<=0 $. Pcase: h[6] > 6. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Pcase: h[6] <= 5. Pcase: f1[6] > 6. Hubcap $[1,2]<=(-1) $[3,5]<=1 $[4,6]<=0 $. Pcase: f1[6] <= 5. Hubcap $[1,2]<=(-2) $[3,5]<=2 $[4,6]<=0 $. Hubcap $[1,2]<=(-1) $[3,5]<=1 $[4,6]<=0 $. Pcase: f1[6] > 6. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Pcase: f1[6] <= 5. Pcase: h[1] <= 5. Hubcap $[1,2]<=(-2) $[3,5]<=2 $[4,6]<=0 $. Hubcap $[1,2]<=(-1) $[3,5]<=1 $[4,6]<=0 $. Pcase: h[1] <= 5. Hubcap $[1,2]<=(-1) $[3,5]<=1 $[4,6]<=0 $. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Pcase: s[5] > 6. Pcase: s[4] > 5. Pcase: s[6] > 5. Hubcap $[1,2]<=1 $[3,5]<=(-1) $[4,6]<=0 $. Hubcap $[1,2]<=0 $[3,5]<=(-2) $[4,6]<=2 $. Pcase: s[6] > 5. Hubcap $[1,2]<=1 $[3,5]<=(-3) $[4,6]<=2 $. Hubcap $[1,2]<=0 $[3,5]<=(-4) $[4,6]<=4 $. Pcase L3_1: s[6] > 6. Pcase: s[4] > 6. Pcase: s[5] > 5. Hubcap $[1,2]<=1 $[3,5]<=(-1) $[4,6]<=0 $. Hubcap $[1,2]<=1 $[3,5]<=1 $[4,6]<=(-2) $. Pcase: s[4] > 5. Pcase: s[5] > 5. Hubcap $[1,2]<=1 $[3,5]<=(-1) $[4,6]<=0 $. Hubcap $[1,2]<=1 $[3,5]<=0 $[4,6]<=(-1) $. Pcase: s[5] > 5. Hubcap $[1,2]<=1 $[3,5]<=(-2) $[4,6]<=1 $. Hubcap $[1,2]<=1 $[3,5]<=(-1) $[4,6]<=0 $. Pcase: s[4] > 6. Similar to *L3_1[3]. Pcase L3_2: s[6] <= 5. Pcase: s[4] <= 5. Reducible. Pcase: s[5] <= 5. Hubcap $[2,3]<=0 $[1,5]<=(-1) $[4,6]<=1 $. Hubcap $[2,3]<=1 $[1,5]<=(-2) $[4,6]<=1 $. Pcase: s[4] <= 5. Similar to *L3_2[3]. Pcase: s[5] <= 5. Pcase L4_1: h[6] > 6. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Pcase: h[5] > 6. Similar to *L4_1[3]. Hubcap $[2,3]<=0 $[1,5]<=0 $[4,6]<=0 $. Pcase: h[6] > 6. Hubcap $[1,2]<=1 $[3,5]<=(-1) $[4,6]<=0 $. Pcase: h[6] <= 5. Pcase: f1[6] > 6. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Pcase: f1[6] <= 5. Hubcap $[1,2]<=(-1) $[3,5]<=1 $[4,6]<=0 $. 
Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Pcase: f1[6] > 6. Hubcap $[1,2]<=1 $[3,5]<=(-1) $[4,6]<=0 $. Pcase: f1[6] <= 5. Pcase: h[1] <= 5. Hubcap $[1,2]<=(-1) $[3,5]<=1 $[4,6]<=0 $. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Pcase: h[1] <= 5. Hubcap $[1,2]<=0 $[3,5]<=0 $[4,6]<=0 $. Hubcap $[1,2]<=1 $[3,5]<=(-1) $[4,6]<=0 $. Pcase: s[5] > 6. Similar to L1_1[4]. Pcase: s[4] > 6. Pcase: s[2] > 6. Similar to L1_1[1]. Pcase: s[6] > 6. Similar to L1_1[3]. Pcase L2_1: s[2] <= 5. Pcase: s[3] <= 5. Pcase L4_1: s[5] <= 5. Pcase: s[6] <= 5. Pcase: h[6] <= 5. Hubcap $[1,3]<=(-3) $[2,4]<=(-3) $[5,6]<=6 $. Hubcap $[1,3]<=(-2) $[2,4]<=(-2) $[5,6]<=4 $. Pcase: h[3] <= 5. Hubcap $[1,5]<=(-2) $[4,6]<=(-4) $[2,3]<=6 $. Hubcap $[1,5]<=(-1) $[4,6]<=(-3) $[2,3]<=4 $. Pcase: s[6] <= 5. Similar to *L4_1[2]. Pcase: h[3] <= 5. Hubcap $[1,5]<=(-3) $[4,6]<=(-3) $[2,3]<=6 $. Hubcap $[1,5]<=(-2) $[4,6]<=(-2) $[2,3]<=4 $. Pcase: s[5] <= 5. Pcase: s[6] <= 5. Pcase: h[6] <= 5. Hubcap $[1,3]<=(-4) $[2,4]<=(-2) $[5,6]<=6 $. Hubcap $[1,3]<=(-3) $[2,4]<=(-1) $[5,6]<=4 $. Pcase: h[6] <= 5. Pcase: h[5] > 5. Hubcap $[1,3]<=(-3) $[2,4]<=(-1) $[5,6]<=4 $. Hubcap $[1,3]<=(-3) $[2,4]<=(-2) $[5,6]<=5 $. Pcase: h[6] <= 6. Pcase: h[5] > 5. Hubcap $[1,3]<=(-2) $[2,4]<=0 $[5,6]<=2 $. Hubcap $[1,3]<=(-2) $[2,4]<=(-1) $[5,6]<=3 $. Hubcap $[1,3]<=(-2) $[2,4]<=0 $[5,6]<=2 $. Pcase: s[6] <= 5. Pcase: h[6] <= 5. Pcase: h[1] > 5. Hubcap $[1,3]<=(-3) $[2,4]<=(-1) $[5,6]<=4 $. Hubcap $[1,3]<=(-4) $[2,4]<=(-1) $[5,6]<=5 $. Pcase: h[1] > 5. Hubcap $[1,3]<=(-2) $[2,4]<=0 $[5,6]<=2 $. Pcase: h[6] > 6. Hubcap $[1,3]<=(-2) $[2,4]<=0 $[5,6]<=2 $. Hubcap $[1,3]<=(-3) $[2,4]<=0 $[5,6]<=3 $. Pcase: h[3] <= 5. Pcase: h[2] <= 5. Hubcap $[1,5]<=(-3) $[4,6]<=(-2) $[2,3]<=5 $. Hubcap $[1,5]<=(-2) $[4,6]<=(-2) $[2,3]<=4 $. Pcase: h[3] > 6. Hubcap $[1,5]<=(-1) $[4,6]<=(-1) $[2,3]<=2 $. Pcase: h[2] <= 5. Hubcap $[1,5]<=(-2) $[4,6]<=(-1) $[2,3]<=3 $. Hubcap $[1,5]<=(-1) $[4,6]<=(-1) $[2,3]<=2 $. Pcase: s[3] <= 5. Similar to *L2_1[2]. Pcase: s[5] <= 5. Similar to L2_1[3]. Pcase: s[6] <= 5. Similar to *L2_1[5]. Pcase: h[3] > 6. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Pcase: h[3] <= 5. Pcase L3_1: f1[2] <= 5. Hubcap $[1,5]<=(-2) $[4,6]<=(-1) $[2,3]<=3 $. Pcase: f1[3] <= 5. Similar to *L3_1[2]. Hubcap $[1,5]<=(-1) $[4,6]<=(-1) $[2,3]<=2 $. Pcase L2_2: f1[2] <= 5. Pcase: h[2] <= 5. Hubcap $[1,5]<=(-2) $[4,6]<=0 $[2,3]<=2 $. Hubcap $[1,5]<=(-1) $[4,6]<=0 $[2,3]<=1 $. Pcase: f1[3] <= 5. Similar to *L2_2[2]. Pcase L2_3: f1[2] > 6. Pcase: f1[3] > 6. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Pcase: h[4] > 5. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Hubcap $[1,5]<=0 $[4,6]<=(-1) $[2,3]<=1 $. Pcase: f1[3] > 6. Similar to *L2_3[2]. Pcase L2_4: h[2] <= 5. Hubcap $[1,5]<=(-1) $[4,6]<=0 $[2,3]<=1 $. Pcase: h[4] <= 5. Similar to *L2_4[2]. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Pcase L1_2: s[2] > 6. Pcase: s[6] > 6. Similar to L1_1[5]. Pcase L2_1: s[3] <= 5. Pcase: s[5] <= 5. Reducible. Pcase: s[6] <= 5. Reducible. Pcase: s[4] <= 5. Hubcap $[3,5]<=1 $[2,4]<=0 $[1,6]<=(-1) $. Hubcap $[3,5]<=1 $[2,4]<=(-1) $[1,6]<=0 $. Pcase: s[6] <= 5. Similar to *L2_1[4]. Pcase L2_2: s[4] <= 5. Pcase: s[5] <= 5. Hubcap $[1,2]<=(-2) $[3,5]<=1 $[4,6]<=1 $. Pcase: h[4] > 5. Hubcap $[1,2]<=(-1) $[3,5]<=0 $[4,6]<=1 $. Hubcap $[4,2]<=1 $[3,5]<=0 $[1,6]<=(-1) $. Pcase: s[5] <= 5. Similar to *L2_2[4]. Pcase L2_3: h[4] > 6. Pcase: h[5] > 6. Hubcap $[1,5]<=0 $[2,3]<=0 $[4,6]<=0 $. Pcase: h[5] <= 5. Pcase: f1[4] > 5. Hubcap $[1,5]<=1 $[2,3]<=(-1) $[4,6]<=0 $. Hubcap $[1,5]<=2 $[2,3]<=(-2) $[4,6]<=0 $. 
Pcase: f1[4] <= 5. Hubcap $[1,5]<=1 $[2,3]<=(-1) $[4,6]<=0 $. Hubcap $[1,5]<=0 $[2,3]<=0 $[4,6]<=0 $. Pcase: h[6] > 6. Similar to *L2_3[4]. Pcase: h[5] > 6. Pcase: h[4] <= 5. Pcase: f1[3] <= 5. Hubcap $[1,5]<=(-1) $[4,6]<=2 $[2,3]<=(-1) $. Pcase: f1[4] <= 5. Hubcap $[1,5]<=(-2) $[4,6]<=1 $[2,3]<=1 $. Hubcap $[1,5]<=(-1) $[4,6]<=1 $[2,3]<=0 $. Pcase: f1[3] <= 5. Pcase: h[3] <= 5. Hubcap $[1,5]<=0 $[4,6]<=2 $[2,3]<=(-2) $. Hubcap $[1,5]<=0 $[4,6]<=1 $[2,3]<=(-1) $. Pcase: f1[4] <= 5. Hubcap $[1,5]<=(-1) $[4,6]<=0 $[2,3]<=1 $. Pcase: f1[3] > 6. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Pcase: h[3] <= 5. Hubcap $[1,5]<=0 $[4,6]<=1 $[2,3]<=(-1) $. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Pcase L2_4: h[6] <= 5. Pcase: h[4] <= 5. Reducible. Pcase: f1[3] <= 5. Pcase: h[3] <= 5. Hubcap $[1,5]<=0 $[4,6]<=2 $[2,3]<=(-2) $. Hubcap $[1,5]<=0 $[4,6]<=1 $[2,3]<=(-1) $. Pcase: f1[3] > 6. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Pcase: h[3] <= 5. Hubcap $[1,5]<=0 $[4,6]<=1 $[2,3]<=(-1) $. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Pcase: h[4] <= 5. Similar to *L2_4[4]. Pcase L2_5: f1[4] > 6. Pcase: h[5] > 5. Pcase: f1[3] > 6. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Pcase: f1[3] > 5. Pcase: h[3] > 5. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Hubcap $[1,5]<=0 $[4,6]<=1 $[2,3]<=(-1) $. Pcase: h[3] > 5. Hubcap $[1,5]<=0 $[4,6]<=1 $[2,3]<=(-1) $. Hubcap $[1,5]<=0 $[4,6]<=2 $[2,3]<=(-2) $. Pcase: f1[3] > 6. Hubcap $[1,5]<=1 $[4,6]<=0 $[2,3]<=(-1) $. Pcase: f1[3] > 5. Pcase: h[3] > 5. Hubcap $[1,5]<=1 $[4,6]<=0 $[2,3]<=(-1) $. Hubcap $[1,5]<=1 $[4,6]<=1 $[2,3]<=(-2) $. Pcase: h[3] > 5. Hubcap $[1,5]<=1 $[4,6]<=1 $[2,3]<=(-2) $. Hubcap $[1,5]<=1 $[4,6]<=2 $[2,3]<=(-3) $. Pcase: f1[5] > 6. Similar to *L2_5[4]. Pcase: f1[3] > 6. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Pcase: f1[3] > 5. Pcase: h[3] > 5. Hubcap $[1,5]<=0 $[4,6]<=0 $[2,3]<=0 $. Hubcap $[1,5]<=0 $[4,6]<=1 $[2,3]<=(-1) $. Pcase: h[3] > 5. Hubcap $[1,5]<=0 $[4,6]<=1 $[2,3]<=(-1) $. Hubcap $[1,5]<=0 $[4,6]<=2 $[2,3]<=(-2) $. Pcase: s[6] > 6. Similar to *L1_2[5]. Pcase L1_3: s[2] <= 5. Pcase: s[4] <= 5. Reducible. Pcase: s[5] <= 5. Reducible. Pcase: s[6] <= 5. Reducible. Pcase: h[6] <= 5. Reducible. Pcase: s[3] <= 5. Hubcap $[1,3]<=0 $[2,4]<=1 $[5,6]<=(-1) $. Pcase: h[6] > 6. Hubcap $[1,3]<=(-1) $[2,4]<=1 $[5,6]<=0 $. Pcase: f1[5] <= 5. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Pcase: f1[6] > 6. Hubcap $[1,3]<=(-1) $[2,4]<=1 $[5,6]<=0 $. Pcase: f1[6] <= 5. Reducible. Pcase: h[1] > 5. Hubcap $[1,3]<=(-1) $[2,4]<=1 $[5,6]<=0 $. Hubcap $[1,3]<=(-2) $[2,4]<=1 $[5,6]<=1 $. Pcase: s[6] <= 5. Similar to *L1_3[5]. Pcase L1_4: s[3] <= 5. Pcase: s[5] <= 5. Reducible. Pcase: s[4] <= 5. Hubcap $[1,6]<=(-2) $[2,4]<=1 $[3,5]<=1 $. Pcase: h[3] > 6. Hubcap $[1,6]<=(-1) $[2,4]<=0 $[3,5]<=1 $. Hubcap $[1,3]<=1 $[2,4]<=0 $[5,6]<=(-1) $. Pcase: s[5] <= 5. Similar to *L1_4[5]. Pcase: s[4] <= 5. Pcase: h[4] > 6. Hubcap $[1,2]<=(-1) $[3,5]<=0 $[4,6]<=1 $. Hubcap $[1,6]<=(-1) $[2,4]<=1 $[3,5]<=0 $. Pcase L1_5: h[3] <= 5. Pcase: h[6] <= 5. Hubcap $[1,3]<=(-1) $[2,4]<=(-1) $[5,6]<=2 $. Pcase: h[6] > 6. Pcase: h[5] <= 5. Reducible. Pcase: h[5] > 6. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Pcase: f1[5] <= 5. Hubcap $[1,3]<=0 $[2,4]<=1 $[5,6]<=(-1) $. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Pcase: f1[6] <= 5. Reducible. Pcase: f1[6] > 6. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Pcase: h[1] > 5. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Pcase: h[6] <= 5. Similar to *L1_5[5]. Pcase L1_6: h[4] <= 5. Pcase: f1[3] <= 5. Pcase: f1[4] <= 5. 
Hubcap $[1,5]<=(-2) $[2,3]<=0 $[4,6]<=2 $. Hubcap $[1,5]<=(-1) $[2,3]<=(-1) $[4,6]<=2 $. Pcase: f1[4] <= 5. Pcase: h[3] > 6. Hubcap $[1,5]<=(-2) $[2,3]<=1 $[4,6]<=1 $. Pcase: f1[2] <= 5. Reducible. Pcase: f1[2] > 6. Hubcap $[1,5]<=(-2) $[2,3]<=1 $[4,6]<=1 $. Pcase: h[2] > 5. Hubcap $[1,5]<=(-2) $[2,3]<=1 $[4,6]<=1 $. Hubcap $[1,5]<=(-3) $[2,3]<=2 $[4,6]<=1 $. Pcase: h[3] > 6. Hubcap $[1,5]<=(-1) $[2,3]<=0 $[4,6]<=1 $. Pcase: f1[2] <= 5. Reducible. Pcase: f1[2] > 6. Hubcap $[1,5]<=(-1) $[2,3]<=0 $[4,6]<=1 $. Pcase: h[2] > 5. Hubcap $[1,5]<=(-1) $[2,3]<=0 $[4,6]<=1 $. Hubcap $[1,5]<=(-2) $[2,3]<=1 $[4,6]<=1 $. Pcase: h[5] <= 5. Similar to *L1_6[5]. Pcase L1_7: h[4] > 6. Pcase: h[5] > 6. Pcase L3_1: h[3] > 6. Hubcap $[1,5]<=0 $[2,3]<=0 $[4,6]<=0 $. Pcase: h[6] > 6. Similar to *L3_1[5]. Pcase: f1[5] <= 5. Hubcap $[1,3]<=0 $[2,4]<=(-1) $[5,6]<=1 $. Pcase: f1[6] > 6. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Pcase: f1[6] <= 5. Pcase: h[1] <= 5. Hubcap $[1,3]<=(-2) $[2,4]<=0 $[5,6]<=2 $. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Pcase: h[1] <= 5. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Pcase: h[6] > 6. Pcase: f1[4] > 5. Pcase: f1[5] > 5. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Hubcap $[1,3]<=0 $[2,4]<=1 $[5,6]<=(-1) $. Pcase: f1[5] > 5. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Hubcap $[1,3]<=(-1) $[2,4]<=1 $[5,6]<=0 $. Pcase: f1[4] <= 5. Pcase: f1[6] <= 5. Hubcap $[1,3]<=(-2) $[2,4]<=0 $[5,6]<=2 $. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Pcase: f1[6] > 6. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Pcase: f1[6] <= 5. Pcase: h[1] <= 5. Hubcap $[1,3]<=(-2) $[2,4]<=0 $[5,6]<=2 $. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Pcase: h[1] <= 5. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Pcase: h[5] > 6. Similar to *L1_7[5]. Pcase L1_8: h[6] > 6. Pcase: f1[5] > 5. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Hubcap $[1,3]<=0 $[2,4]<=1 $[5,6]<=(-1) $. Pcase: h[3] > 6. Similar to *L1_8[5]. Pcase: f1[6] > 6. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Pcase: f1[6] <= 5. Pcase: h[1] <= 5. Hubcap $[1,3]<=(-2) $[2,4]<=0 $[5,6]<=2 $. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Pcase: h[1] <= 5. Hubcap $[1,3]<=(-1) $[2,4]<=0 $[5,6]<=1 $. Hubcap $[1,3]<=0 $[2,4]<=0 $[5,6]<=0 $. Pcase: s[2] > 6. Similar to L0_1[1]. Pcase: s[3] > 6. Similar to L0_1[2]. Pcase: s[4] > 6. Similar to L0_1[3]. Pcase: s[5] > 6. Similar to L0_1[4]. Pcase: s[6] > 6. Similar to L0_1[5]. Pcase: s[1] <= 5. Reducible. Pcase: s[2] <= 5. Reducible. Pcase: s[3] <= 5. Reducible. Pcase: s[4] <= 5. Reducible. Pcase: s[5] <= 5. Reducible. Pcase: s[6] <= 5. Reducible. Pcase L0_2: h[1] > 6. Pcase L1_1: h[3] > 6. Pcase: h[5] > 6. Pcase L3_1: h[2] > 6. Pcase: h[6] > 6. Hubcap $[1,6]<=0 $[2,4]<=0 $[3,5]<=0 $. Pcase: h[6] <= 5. Pcase: f1[5] <= 5. Hubcap $[1,6]<=1 $[2,4]<=(-2) $[3,5]<=1 $. Pcase: f1[6] <= 5. Hubcap $[1,6]<=(-1) $[2,4]<=(-1) $[3,5]<=2 $. Hubcap $[1,6]<=0 $[2,4]<=(-1) $[3,5]<=1 $. Pcase: f1[5] <= 5. Hubcap $[1,6]<=1 $[2,4]<=(-1) $[3,5]<=0 $. Pcase: f1[6] <= 5. Hubcap $[1,6]<=(-1) $[2,4]<=0 $[3,5]<=1 $. Hubcap $[1,6]<=0 $[2,4]<=0 $[3,5]<=0 $. Pcase: h[6] > 6. Similar to *L3_1[0]. Pcase L3_2: h[2] <= 5. Pcase: f1[1] <= 5. Pcase: f1[5] <= 5. Hubcap $[1,6]<=0 $[2,4]<=1 $[3,5]<=(-1) $. Pcase: f1[6] <= 5. Hubcap $[1,6]<=(-2) $[2,4]<=2 $[3,5]<=0 $. Hubcap $[1,6]<=(-1) $[2,4]<=2 $[3,5]<=(-1) $. Pcase: f1[2] <= 5. Pcase: f1[5] <= 5. Hubcap $[1,6]<=2 $[2,4]<=0 $[3,5]<=(-2) $. Pcase: f1[6] <= 5. Hubcap $[1,6]<=0 $[2,4]<=1 $[3,5]<=(-1) $. Hubcap $[1,6]<=1 $[2,4]<=1 $[3,5]<=(-2) $. 
Pcase: f1[5] <= 5. Hubcap $[1,6]<=1 $[2,4]<=0 $[3,5]<=(-1) $. Pcase: f1[6] <= 5. Hubcap $[1,6]<=(-1) $[2,4]<=1 $[3,5]<=0 $. Hubcap $[1,6]<=0 $[2,4]<=1 $[3,5]<=(-1) $. Pcase: h[6] <= 5. Similar to *L3_2[0]. Pcase L3_3: f1[1] <= 5. Pcase: f1[5] <= 5. Hubcap $[1,6]<=0 $[2,4]<=0 $[3,5]<=0 $. Pcase: f1[6] <= 5. Hubcap $[1,6]<=(-2) $[2,4]<=1 $[3,5]<=1 $. Hubcap $[1,6]<=(-1) $[2,4]<=1 $[3,5]<=0 $. Pcase: f1[6] <= 5. Similar to *L3_3[0]. Pcase L3_4: f1[2] <= 5. Pcase: f1[5] <= 5. Hubcap $[1,6]<=2 $[2,4]<=(-1) $[3,5]<=(-1) $. Hubcap $[1,6]<=1 $[2,4]<=0 $[3,5]<=(-1) $. Pcase: f1[5] <= 5. Similar to *L3_4[0]. Hubcap $[1,6]<=0 $[2,4]<=0 $[3,5]<=0 $. Pcase L2_1: h[4] > 6. Pcase: h[6] > 6. Pcase: h[5] <= 5. Pcase: f1[5] <= 5. Hubcap $[1,3]<=(-1) $[2,6]<=(-2) $[4,5]<=3 $. Pcase: f1[4] <= 5. Hubcap $[1,3]<=(-2) $[2,6]<=(-1) $[4,5]<=3 $. Hubcap $[1,3]<=(-1) $[2,6]<=(-1) $[4,5]<=2 $. Pcase: f1[5] <= 5. Hubcap $[1,3]<=0 $[2,6]<=(-1) $[4,5]<=1 $. Pcase: f1[4] <= 5. Hubcap $[1,3]<=(-1) $[2,6]<=0 $[4,5]<=1 $. Hubcap $[1,3]<=0 $[2,6]<=0 $[4,5]<=0 $. Pcase: h[5] <= 5. Pcase: f1[4] <= 5. Pcase: f1[6] <= 5. Hubcap $[1,3]<=(-3) $[2,6]<=(-1) $[4,5]<=4 $. Hubcap $[1,3]<=(-2) $[2,6]<=(-1) $[4,5]<=3 $. Pcase: f1[6] <= 5. Hubcap $[1,3]<=(-2) $[2,6]<=(-1) $[4,5]<=3 $. Hubcap $[1,3]<=(-1) $[2,6]<=(-1) $[4,5]<=2 $. Pcase: h[6] <= 5. Pcase: f1[4] <= 5. Pcase: f1[6] <= 5. Hubcap $[1,3]<=(-3) $[2,6]<=1 $[4,5]<=2 $. Hubcap $[1,3]<=(-2) $[2,6]<=1 $[4,5]<=1 $. Pcase: f1[6] <= 5. Hubcap $[1,3]<=(-2) $[2,6]<=1 $[4,5]<=1 $. Hubcap $[1,3]<=(-1) $[2,6]<=1 $[4,5]<=0 $. Pcase: f1[4] <= 5. Pcase: f1[6] <= 5. Hubcap $[1,3]<=(-2) $[2,6]<=0 $[4,5]<=2 $. Hubcap $[1,3]<=(-1) $[2,6]<=0 $[4,5]<=1 $. Pcase: f1[6] <= 5. Hubcap $[1,3]<=(-1) $[2,6]<=0 $[4,5]<=1 $. Hubcap $[1,3]<=0 $[2,6]<=0 $[4,5]<=0 $. Pcase: h[6] > 6. Similar to *L2_1[4]. Pcase L2_2: h[4] <= 5. Pcase: f1[3] <= 5. Pcase: f1[6] <= 5. Hubcap $[1,3]<=0 $[2,6]<=(-2) $[4,5]<=2 $. Hubcap $[1,3]<=1 $[2,6]<=(-2) $[4,5]<=1 $. Pcase: f1[6] <= 5. Hubcap $[1,3]<=0 $[2,6]<=(-1) $[4,5]<=1 $. Hubcap $[1,3]<=1 $[2,6]<=(-1) $[4,5]<=0 $. Pcase: h[6] <= 5. Similar to *L2_2[4]. Pcase L2_3: f1[3] <= 5. Pcase: f1[6] <= 5. Hubcap $[1,3]<=(-1) $[2,6]<=(-1) $[4,5]<=2 $. Pcase: h[5] <= 5. Hubcap $[1,3]<=(-1) $[2,6]<=(-2) $[4,5]<=3 $. Hubcap $[1,3]<=0 $[2,6]<=(-1) $[4,5]<=1 $. Pcase: f1[6] <= 5. Similar to *L2_3[4]. Pcase: h[5] <= 5. Hubcap $[1,3]<=(-1) $[2,6]<=(-1) $[4,5]<=2 $. Hubcap $[1,3]<=0 $[2,6]<=0 $[4,5]<=0 $. Pcase: h[5] > 6. Similar to L1_1[4]. Pcase L1_2: h[2] > 6. Pcase: h[4] > 6. Similar to L1_1[1]. Pcase: h[6] > 6. Similar to L1_1[5]. Pcase L3_1: h[3] <= 5. Pcase: f1[2] <= 5. Hubcap $[1,5]<=(-2) $[2,3]<=3 $[4,6]<=(-1) $. Hubcap $[1,5]<=(-1) $[2,3]<=2 $[4,6]<=(-1) $. Pcase: h[6] <= 5. Similar to *L3_1[5]. Pcase L3_2: f1[2] <= 5. Pcase: f1[6] <= 5. Hubcap $[1,6]<=(-2) $[2,4]<=0 $[3,5]<=2 $. Pcase: h[5] <= 5. Hubcap $[1,6]<=(-2) $[2,4]<=1 $[3,5]<=1 $. Hubcap $[1,6]<=(-1) $[2,4]<=0 $[3,5]<=1 $. Pcase: f1[6] <= 5. Similar to *L3_2[5]. Pcase: h[5] <= 5. Hubcap $[1,6]<=(-1) $[2,4]<=1 $[3,5]<=0 $. Hubcap $[1,6]<=0 $[2,4]<=0 $[3,5]<=0 $. Pcase: h[6] > 6. Similar to *L1_2[0]. Pcase L1_3: h[2] <= 5. Pcase: h[5] <= 5. Hubcap $[1,6]<=(-1) $[2,4]<=2 $[3,5]<=(-1) $. Pcase: f1[1] <= 5. Hubcap $[1,6]<=(-1) $[2,4]<=2 $[3,5]<=(-1) $. Hubcap $[1,6]<=0 $[2,4]<=1 $[3,5]<=(-1) $. Pcase: h[6] <= 5. Similar to *L1_3[0]. Pcase L1_4: h[3] <= 5. Hubcap $[1,6]<=(-1) $[2,4]<=0 $[3,5]<=1 $. Pcase: h[5] <= 5. Similar to *L1_4[0]. Pcase L1_5: f1[1] <= 5. Pcase: f1[6] <= 5. Hubcap $[1,6]<=(-2) $[2,4]<=1 $[3,5]<=1 $. 
Hubcap $[1,6]<=(-1) $[2,4]<=1 $[3,5]<=0 $. Pcase: f1[6] <= 5. Similar to *L1_5[0]. Hubcap $[1,6]<=0 $[2,4]<=0 $[3,5]<=0 $. Pcase: h[2] > 6. Similar to L0_2[1]. Pcase: h[3] > 6. Similar to L0_2[2]. Pcase: h[4] > 6. Similar to L0_2[3]. Pcase: h[5] > 6. Similar to L0_2[4]. Pcase: h[6] > 6. Similar to L0_2[5]. Pcase L0_3: h[1] <= 5. Hubcap $[1,6]<=2 $[2,4]<=(-1) $[3,5]<=(-1) $. Pcase: h[2] <= 5. Similar to L0_3[1]. Pcase: h[3] <= 5. Similar to L0_3[2]. Pcase: h[4] <= 5. Similar to L0_3[3]. Pcase: h[5] <= 5. Similar to L0_3[4]. Pcase: h[6] <= 5. Similar to L0_3[5]. Hubcap $[1,6]<=0 $[2,4]<=0 $[3,5]<=0 $. Qed.
/* ============================================================ * * halomodel.h * * Martin Kilbinger 2006-2009 * * ============================================================ */ #ifndef __HALOMODEL_H #define __HALOMODEL_H #include <stdio.h> #include <stdlib.h> #include <math.h> #include <assert.h> #include <string.h> #include <fftw3.h> #include <gsl/gsl_sf_erf.h> #include "io.h" #include "errorlist.h" #include "config.h" #include "maths.h" #include "cosmo.h" #include "nofz.h" #define hm_base -1900 #define hm_hodtype hm_base + 1 #define hm_Mmin hm_base + 2 #define hm_pofk hm_base + 3 #define hm_nfw hm_base + 4 #define hm_par hm_base + 5 #define hm_overflow hm_base + 6 #define hm_io hm_base + 7 #define hm_zbin hm_base + 8 #define hm_alpha hm_base + 9 #define hm_negative hm_base + 10 #define hm_zmean_2h hm_base + 11 #define hm_halo_bias hm_base + 12 #define hm_undef hm_base + 13 #define hm_gsl_int hm_base + 14 /* Ranges of interpolation tables */ #define k_max_HOD 3336.0 /* Present critical density [M_sol h^2 / Mpc^3] */ #define rho_c0 2.7754e11 /* Mass limits for integration over mass functions */ #define logMmin (3.0*log(10.0)) #define logMmax (16.0*log(10.0)) /* Number of steps for scale-factor-integration (redshift) */ #define Na_hm 20 /* Bit-coded power spectrum types */ typedef enum {pofk_undef=-1, pl=1, pnl=2, p1hdm=4, p2hdm=8, pthdm=16, p1hg=32, p2hg=64, pthg=128, p1hgcs=256, p1hgss=512, pstellar=1024} pofk_t; /* Halo mass function type */ typedef enum {ps, st, st2, j01} massfct_t; #define smassfct_t(i) ( \ i==ps ? "ps" : \ i==st ? "st" : \ i==st2 ? "st2" : \ i==j01 ? "j01" : \ "") #define Nmassfct_t 4 /* Halo bias type */ typedef enum {halo_bias_sc, halo_bias_tinker05, halo_bias_tinker10} halo_bias_t; #define shalo_bias_t(i) ( \ i==halo_bias_sc ? "halo_bias_sc" : \ i==halo_bias_tinker05 ? "halo_bias_tinker05" : \ i==halo_bias_tinker10 ? "halo_bias_tinker10" : \ "") #define Nhalo_bias_t 3 /* HOD (Halo occupation distribution) type */ #define Nhod_t 5 typedef enum {hod_none, hamana04, berwein02, berwein02_hexcl, leauthaud11} hod_t; #define shod_t(i) ( \ i==hod_none ? "hod_none" : \ i==hamana04 ? "hamana04" : \ i==berwein02 ? "berwein02" : \ i==berwein02_hexcl ? "berwein02_hexcl" :\ i==leauthaud11 ? "leauthaud11" : \ "") /* ---------------------------------------------------------------- * * Global variables and functions * * ---------------------------------------------------------------- */ double FFTLog_TMP; typedef struct FFTLog_complex { double re; double im; double amp; double arg; } FFTLog_complex; typedef struct { int N; fftw_plan p_forward; fftw_plan p_backward; fftw_complex *an; fftw_complex *ak; fftw_complex *cm; fftw_complex *um; fftw_complex *cmum; double min; double max; double q; double mu; double kr; } FFTLog_config; typedef struct { cosmo *cosmo; redshift_t *redshift; double zmin, zmax; /* Dark matter halo profile */ double c0; /* concentration parameter */ double alpha_NFW; /* density slope */ double beta_NFW; /* concentration slope as fct of mass */ massfct_t massfct; /* halo mass function */ halo_bias_t halo_bias; /* Halo bias */ /* Mass function parameters (Sheth&Torman). Do not set manually, they are set * * in set_massfct() according to enum massfct. */ double nmz_a; /* Called q in CS02 */ double nmz_p; /* a=1, p=1/2 is Press-Schechter mass fct. 
*/ /* HOD (halo occupation distribution) parameters */ hod_t hod; /* HOD type */ double M1, M0, sigma_log_M; double M_min; double alpha; double pi_max; double eta; /* central galaxy proportion */ /* galaxy-galaxy lensing and wp(rp) */ double log10Mhalo; double coord_phys; /* For Leauthaud11 model */ double beta,delta,gamma,Mstar0; double beta_sat,B_sat,beta_cut,B_cut; double x; /* any parameter to propagate if needed */ double Mstellar_min, Mstellar_max; double fcen1, fcen2; /* Precomputed stuff */ double A; /* Mass function normalisation */ double Mstar; /* M_*(a=1.0) */ interTable2D *Pthdm; interTable *xir; interTable *xi_dm; interTable2D *rhohat; splineTable* sigRsqr; double a_xir; /* FFTLOG flag - OBSOLETE */ int FFTLog; } cosmo_hm; typedef struct { cosmo *cosmo; cosmo_hm *model; double a, r, k, ng, ngp, eps, c; double logMlim, bias_fac, Mh, Mstellar, Mstellar_min, Mstellar_max; double M, r_vir, *kk; error **err; double logrmin, logrmax, rp, xi; gsl_interp_accel *acc; gsl_spline *spline; int i, j, type, asymptotic, logintegrate; double (*bias_func)(double, void *); } cosmo_hm_params; typedef struct gsl_int_params { void *params; funcwithpars func; error **err; } gsl_int_params; typedef struct { double *z; double *fac; double *ypn; /* for spline interpolation */ double zm; /* average weighted redshift*/ int nbins; } nz_t; cosmo_hm* init_parameters_hm(double OMEGAM, double OMEGADE, double W0_DE, double W1_DE, double *W_POLY_DE, int N_POLY_DE, double H100, double OMEGAB, double OMEGANUMASS, double NEFFNUMASS, double NORM, double NSPEC, int Nzbin, const int *Nnz, const nofz_t *nofz, double *par_nz, double zmin, double zmax, nonlinear_t NONLINEAR, transfer_t TRANSFER, growth_t GROWTH, de_param_t DEPARAM, norm_t normmode, double C0, double ALPHANFW, double BETANFW, massfct_t MASSFCT, halo_bias_t HALO_BIAS, double M_min, double M1, double M0, double sigma_log_M, double alpha, double Mstar0, double beta, double delta, double gamma, double B_cut, double B_sat, double beta_cut, double beta_sat, double Mstellar_min, double Mstellar_max, double eta, double fcen1, double fcen2, hod_t HOD, double pi_max, error **err); cosmo_hm* copy_parameters_hm_only(cosmo_hm* source, error **err); cosmo_hm *copy_parameters_hm(cosmo_hm *source, error **err); void read_cosmological_parameters_hm(cosmo_hm **model, FILE *F, error **err); cosmo_hm *set_cosmological_parameters_to_default_hm(error **err); void free_parameters_hm(cosmo_hm** model); void set_massfct(massfct_t massfct, double *nmz_a, double *nmz_p, error **err); void dump_param_only_hm(cosmo_hm* model, FILE *F); void dump_param_hm(cosmo_hm* model, FILE *F, error **err); double sm2_rtbis(double (*func)(double, void *, error **), double x1, double x2, double xacc, void *param, error **err); /* From nrcomplex.h,c */ #ifndef _DCOMPLEX_DECLARE_T_ typedef struct DCOMPLEX {double r,i;} dcomplex; #define _DCOMPLEX_DECLARE_T_ #endif /* _DCOMPLEX_DECLARE_T_ */ dcomplex Complex(double re, double im); dcomplex Cadd(dcomplex a, dcomplex b); dcomplex Cmul(dcomplex a, dcomplex b); dcomplex Cdiv(dcomplex a, dcomplex b); dcomplex RCmul(double x, dcomplex a); void sm2_cisi(double x, double *ci, double *si, error **err); double delta_c(cosmo *model, double a, error **err); double bis_Mstar(double logM, void *param, error **err); double bis_Mstar_a(double logM, void *param, error **err); double Mstar(cosmo_hm *model, error **err); double Mstar_a(cosmo_hm *model, double a, error **err); double concentration(cosmo_hm *model, double Mh, double a, error **err); double 
Delta_vir(cosmo_hm *model, double a); double dsigma_R_sqr_dR(cosmo_hm *model, double R, error **err); double nufnu(cosmo_hm *model, double nu, int asymptotic, error **err); double nufnu_j01(double x); double sigma_R_sqr(cosmo_hm *model, double R, error **err); double sigmasqr_M(cosmo_hm *model, double M, error **err); double dsigma_m1_dlnM(cosmo_hm *model, double M, error **err); double dnu_dlnM(cosmo_hm *model, double M, double a, error **err); double dn_dlnM_lnM(double logM, void *intpar, error **err); double dn_dlnM_uf(double M, cosmo_hm *model, double a, error **err); double dn_dlnM(double M, void *intpar, error **err); double r_vir(cosmo_hm *model, double M, double a, error **err); double M_vir(cosmo_hm *model, double r_vir, double a, error **err); double Delta_h(cosmo_hm *model, double a, error **err); double rho_crit(cosmo_hm *model, double a, error **err); double rho_crit_halo(cosmo_hm *model, double a, error **err); double Omega_m_halo(cosmo_hm *model, double a, error **err); double rho_halo(cosmo_hm *model, double r, double a, double Mh, double c, error **err); double DeltaSigma_WB2000(cosmo_hm *model, double r, const double a, const double M, double c, double Delta, error **err); double g_inf(double x, error **err); double g_sup(double x, error **err); double int_for_rhohat(double, void *, error **err); double rhohat_halo(cosmo_hm *model, double k, double M, double a, double c, error **err); double halo_bias(cosmo_hm *model, double M, double a, int k, error **err); double bias(cosmo_hm *model, double M, double a, int k, error **err); double bias_tinker(cosmo_hm *model, double M, double a, error **err); double bias_tinker10(cosmo_hm *model, double M, double a, error **err); double int_for_bias_norm(double logM, void *intpar, error **err); double bias_norm(cosmo_hm *model, double a, error **err); double int_for_M_ij(double, void *, error **); double M_ij(cosmo_hm *model, int i, int j, double a, const double *k, error **err); double P1h_dm(cosmo_hm *model, double a, double k, error **err); double P2h_dm(cosmo_hm *model, double a, double k, error **err); double xi_dm_NL_OBSOLETE(cosmo_hm *model, double a, double r, error **err); // non-linear DM xi double int_for_xi_dm_NL_OBSOLETE(double k, void *intpar, error **err); // non-linear DM xi #define CHANGE(fct) int change_##fct(cosmo_hm*, cosmo_hm*) /* ---------------------------------------------------------------- * * Utils * * ---------------------------------------------------------------- */ double int_gsl(funcwithpars func,void *params, double a, double b, double eps, error **err); double integrand_gsl(double x,void *p); CHANGE(massfct); CHANGE(massfct_params); CHANGE(halo_bias); CHANGE(sigma_R_sqr); CHANGE(Mstar); CHANGE(rhohat_halo); CHANGE(Pth); #undef CHANGE #endif
\documentclass[11pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
%\usepackage[ddmmyyyy]{datetime}
\usepackage[short,nodayofweek,level,12hr]{datetime}
%\usepackage{cite}
%\usepackage{wrapfig}
%\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}

\newcommand{\e}{\epsilon}
\newcommand{\dl}{\delta}
\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\vect}[1]{\underline{#1}}
\newcommand{\uvect}[1]{\hat{#1}}
\newcommand{\1}{\vect{1}}
\newcommand{\grad}{\nabla}
\newcommand{\lc}{l_c}

\title{Surface Tension}
\date{\displaydate{date}}
\newdate{date}{22}{09}{2018}
\author{}

\begin{document}
\maketitle

If a \textit{thin} tube is half-dipped in water, we know that water rises in the tube to a height greater than that of the surrounding fluid. But what do we mean by a thin tube? There must be a critical thickness of the tube beyond which we expect gravity to dominate and below which surface tension will be important. Since there is a competition between gravity ($g$) and surface tension ($\Gamma$), we can obtain the critical length ($\lc$) by comparing the pressures exerted by each of these forces:
\begin{align*}
&\rho g \lc \sim \frac{\Gamma}{\lc}\\
\Rightarrow &\lc \sim \bigg(\frac{\Gamma}{\rho g}\bigg)^{1/2}
\end{align*}
This is the length scale over which the effects of surface tension are comparable with those of gravity. At lengths much smaller than this, surface tension will dominate gravity. For water, $\Gamma = 0.07 N/m$ and hence $l_c \approx 2.7$mm. Therefore a \textit{thin} tube, for water, refers to a tube whose diameter is about 2.7mm or less.

But why is the surface of a fluid under tension? A fluid consists of a bulk and a surface. The surface, though usually idealized as a sheet with zero thickness, is actually a few molecules thick. For a depth of the order of a few molecules near the water surface, the potential energy of the molecules is substantially higher than that in the bulk. The surface molecules are more energetic, jiggling about more rapidly, often escaping from the liquid to the air. If we wanted to calculate the total potential energy of the liquid by evaluating the potential energy per molecule in the bulk and then multiplying it by the total number of molecules, we would incur an error because the potential energy of the surface molecules is much larger. This surface energy is usually incorporated by considering the surface to be an area (even though it is a volume a few molecules thick). The excess energy associated with the area is proportional to the area, and the coefficient of proportionality is the surface tension. Increasing the surface area causes more molecules to move from the bulk to the surface, which increases their potential energy. We know that force is the negative derivative of the potential energy ($F=-dU/dx$) and therefore pushing the molecules up the potential energy curve leads to a force. In order to minimize its potential energy, then, a fluid must minimize its surface area, which is what we observe in nature.

\section{The Young - Laplace Equation}
The fact that the surface of a fluid is under tension leads to a pressure jump between the two fluids. This can be seen by a force balance across the surface. Since the mass of a surface element is zero, the net force must be zero. Therefore, the sum of the pressure forces acting on the two sides and the surface tension force must add up to zero.
\begin{align*}
&\int(\hat p - p)\vect n \, dA + \oint \Gamma \vect t' ds = 0
\end{align*}
where $\vect n$ is the normal to the surface, $\vect t$ is the tangent to the curve bounding the surface and $\vect t' = \vect n \wedge \vect t$ is a vector perpendicular to the curve. Using Stokes Theorem, this can be written as
\begin{align*}
&\underbrace{\bigg[(\hat p - p)- \Gamma \pd{n_k}{x_k} \bigg]n_i}_{\text{Normal force balance}} + \underbrace{(\dl_{ik} - n_in_k)\pd{\Gamma}{x_k}}_{\text{tangential force balance}} = 0
\end{align*}
If the gradient of the surface tension is zero, the second term vanishes and we get
\begin{align*}
&\hat p - p = \Gamma \pd{n_k}{x_k}\\
&\hat p - p = \Gamma (\grad\cdot \vect{n})
\end{align*}
which is the Young-Laplace Equation. This equation gives us the pressure jump across an interface due to surface tension provided $\Gamma$ is constant everywhere. If $\Gamma$ varies with position, then the gradient of $\Gamma$ will not be zero and there will be flow due to the surface-tension gradient. Such flows are called Marangoni flows.

We can see a quick application of the Young-Laplace Equation by evaluating the pressure jump across a spherical bubble (say, an air bubble in water). For a sphere, the normal is given by $\vect n = x_i/r$. Hence:
\begin{align*}
\grad\cdot n &= \pd{}{x_i}\frac{x_i}{r}\\
&= \frac{1}{r}\pd{x_i}{x_i} - \frac{x_i x_i}{r^3} \tag{writing $r= (x_ix_i)^{1/2}$ }\\
&= \frac{\dl_{ii}}{r} - \frac{x_i x_i}{r^3} = \frac{3}{r} - \frac{1}{r} = \frac{2}{r}
\end{align*}
Therefore, for a spherical bubble, the pressure balance gives us
\begin{align*}
&\hat p - p = \frac{2 \Gamma}{r}
\end{align*}
with $\hat p$ the pressure inside the bubble.

\section{Shape of a 2D static meniscus}
The Young-Laplace Equation can be used to obtain the shape of a static meniscus. In this section we look at a meniscus close to a plane wall. Just next to the wall, the rise in water level is highest and it tapers off as we move away from the wall. We want to find out the functional form for the interface $z \equiv z(x)$ where $x$ is the distance from the wall and $z$ is the height of the meniscus at a given $x$. Evaluation of the Young-Laplace Equation requires us to find the divergence of the unit normal. The normal to any surface $F = 0$ is given by:
\begin{align*}
& \vect n = \frac{\grad F}{|\grad F|}\\
&\Rightarrow \pd{}{x_i}n_i = \pd{}{x_i}\frac{\pd{F}{x_i}}{|\grad F|}\\
&\Rightarrow \pd{n_i}{x_i} = \frac{1}{|\grad F|}\pd{^2F}{x_i^2} - \frac{1}{|\grad F|^3}\pd{F}{x_i}\pd{F}{x_k}\pd{^2F}{x_ix_k}
\end{align*}
For the 2D static meniscus $z=f(x)$ and hence $F(x,z) = z-f(x) = 0$. Evaluating the necessary derivatives:
\begin{align*}
&\pd{F}{x} = -\pd{f}{x}\\
&\pd{F}{z} = 1\\
&\pd{^2F}{x^2} = -\pd{^2f}{x^2}\\
\end{align*}
and substituting them in the Young-Laplace equation, we get the following:
\begin{align*}
&\hat p - p = -\Gamma \bigg(\frac{\pd{^2f}{x^2}}{\big(1+(\pd{f}{x})^2\big)^{3/2}} \bigg)\\
\Rightarrow &\hat p - p = -\rho g z = -\Gamma \bigg(\frac{\pd{^2f}{x^2}}{\big(1+(\pd{f}{x})^2\big)^{3/2}} \bigg)
\end{align*}
Here $\hat p$ is the pressure in the fluid just below the interface and $p$ is the atmospheric pressure; far from the wall the interface is flat ($z=0$) and the fluid pressure there equals the atmospheric one, so hydrostatics gives $\hat p - p = -\rho g z$. We can non-dimensionalize the equation by considering the length scale $l_c = \sqrt{\Gamma/\rho g}$. This leaves us with
\begin{align*}
& z = \frac{\pd{^2f}{x^2}}{\big(1+(\pd{f}{x})^2\big)^{3/2}}
\end{align*}
We can greatly simplify matters if we linearize this equation. Consider the case where the slope of the meniscus is very small everywhere, i.e. $dz/dx \ll 1$.
Then the equation becomes
\begin{align*}
& z = \pd{^2f}{x^2}
\end{align*}
whose decaying solution is an exponential (the growing solution is discarded since the meniscus must flatten as $x \to \infty$)
\begin{align*}
&z = z_0 e^{-x}
\end{align*}
or in dimensional terms
\begin{align*}
&z = z_0 e^{-x/l_c}
\end{align*}
which tells us that the height of the meniscus decays exponentially away from the wall with a characteristic length scale of $l_c$. Another quantity of interest here is the maximum height to which water rises along the wall. In order to find it, we must impose the contact angle boundary condition:
\begin{align*}
&\frac{dz}{dx} = \tan(\frac{\pi}{2} + \theta_c) \quad @ \quad x=0\\
\Rightarrow &\frac{-z_0}{l_c} = \tan(\frac{\pi}{2} + \theta_c) \\
\Rightarrow & z_0 =-l_c \tan(\frac{\pi}{2} + \theta_c) \\
\Rightarrow & z_0 =l_c \cot(\theta_c)
\end{align*}
This gives us the height of the meniscus at the wall. But what if $\theta_c = 0$? The height comes out to be infinity. This is obviously wrong, but we must remember that the linearization was only valid when the slope of the meniscus was small. Since we have violated this assumption, the method is expected to give the wrong answer. In order to obtain the height of the meniscus at the wall when $\theta_c = 0$, we must solve the full non-linear equation. Multiplying the non-dimensional equation by $dz/dx$ and integrating once, with the far-field conditions $z \to 0$ and $dz/dx \to 0$ as $x \to \infty$ fixing the constant of integration, leads us to
\begin{align*}
&1-\frac{z^2}{2} = \frac{1}{\big(1+(\frac{dz}{dx})^2\big)^{1/2}}
\end{align*}
From this, we can obtain the height at the wall to be $\sqrt 2 l_c$ when $\theta_c = 0$: the interface then meets the wall vertically, so $dz/dx \to \infty$ there, the right-hand side vanishes, and $z = \sqrt 2$ in units of $l_c$. The equation can be further solved to obtain the complete shape of the meniscus as a transcendental function $x\equiv x(z)$.

\section{Appendix}
\subsection{Force balance interpretation}
Describe
\begin{align*}
&1-\frac{z^2}{2} = \frac{1}{\big(1+(\frac{dz}{dx})^2\big)^{1/2}}
\end{align*}
as a force balance
\subsection{Solution of the non-linear equation for the 2D meniscus}

\end{document}
Formal statement is: lemma at_within_interior: "x \<in> interior S \<Longrightarrow> at x within S = at x" Informal statement is: If $x$ is in the interior of $S$, then the filter at $x$ within $S$ is the same as the filter at $x$.
lemma measure_Diff_null_set: "A \<in> sets M \<Longrightarrow> B \<in> null_sets M \<Longrightarrow> measure M (A - B) = measure M A"
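Informal statement is: If $A$ is a measurable set and $B$ is a null set, then the measure of $A - B$ is equal to the measure of $A$.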
module LeafClass use, intrinsic :: iso_fortran_env use KinematicClass use FEMDomainClass use PetiClass use StemClass use LightClass use AirClass implicit none type :: Leaf_ type(FEMDomain_) :: FEMDomain real(real64),allocatable :: LeafSurfaceNode2D(:,:) real(real64) :: ShapeFactor,Thickness,length,width,center(3) real(real64) :: MaxThickness,Maxlength,Maxwidth real(real64) :: center_bottom(3),center_top(3) real(real64) :: outer_normal_bottom(3),outer_normal_top(3) real(real64),allocatable :: source(:), ppfd(:),A(:) integer(int32) :: Division type(leaf_),pointer :: pleaf type(Peti_),pointer :: pPeti real(real64) :: rot_x = 0.0d0 real(real64) :: rot_y = 0.0d0 real(real64) :: rot_z = 0.0d0 real(real64) :: disp_x = 0.0d0 real(real64) :: disp_y = 0.0d0 real(real64) :: disp_z = 0.0d0 real(real64) :: shaperatio = 0.30d0 real(real64) :: minwidth,minlength,MinThickness integer(int32),allocatable :: I_planeNodeID(:) integer(int32),allocatable :: I_planeElementID(:) integer(int32),allocatable :: II_planeNodeID(:) integer(int32),allocatable :: II_planeElementID(:) integer(int32) :: A_PointNodeID integer(int32) :: B_PointNodeID integer(int32) :: A_PointElementID integer(int32) :: B_PointElementID integer(int32) :: xnum = 10 integer(int32) :: ynum = 10 integer(int32) :: znum = 10 ! physiological parameters real(real64) :: V_cmax = 100.0d0 ! maximum carboxylation rate, micro-mol/m-2/s real(real64) :: V_omax = 100.0d0 ! maximum oxygenation rate, micro-mol/m-2/s, estimated from Lambda real(real64) :: O2 = 202000.0d0! oxygen concentration, ppm (roughly 21% of air) real(real64) :: CO2=380.0d0! carbon dioxide concentration, ppm real(real64) :: R_d=1.0d0 ! dark respiration rate, micro-mol/m-2/s real(real64) :: K_c=272.380d0 ! Michaelis constant for CO2 real(real64) :: K_o=165820.0d0 ! Michaelis constant for O2 real(real64) :: J_=0.0d0 ! electron transport rate real(real64) :: I_=0.0d0 ! light intensity real(real64) :: phi=0.0d0 ! initial slope of the I-J curve real(real64) :: J_max=180.0d0 !maximum electron transport rate, micro-mol/m-2/s real(real64) :: theta_r=0.0d0 ! convexity of the light-response curve real(real64) :: maxPPFD=1.0d0 ! micro-mol/m^2/s real(real64) :: Lambda= 37.430d0 ! CO2 compensation point with dark respiration ignored, ppm real(real64) :: temp=303.0d0 ! temperature, K real(real64),allocatable :: DryDensity(:) real(real64),allocatable :: WaterContent(:) contains procedure, public :: Init => initLeaf procedure, public :: rotate => rotateleaf procedure, public :: move => moveleaf procedure, public :: curve => curveleaf procedure, public :: create => createLeaf procedure,pass :: connectLeafLeaf => connectLeafLeaf procedure,pass :: connectLeafStem => connectLeafStem generic :: connect => connectLeafLeaf, connectLeafStem procedure, public :: photosynthesis => photosynthesisLeaf procedure, public :: rescale => rescaleleaf procedure, public :: adjust => adjustLeaf procedure, public :: resize => resizeleaf procedure, public :: getCoordinate => getCoordinateleaf procedure, public :: gmsh => gmshleaf procedure, public :: msh => mshleaf procedure, public :: vtk => vtkleaf procedure, public :: stl => stlleaf end type contains subroutine createLeaf(obj,SurfacePoints,filename,x_num,y_num,x_len,y_len) class(Leaf_),intent(inout) :: obj real(real64),optional,intent(in) :: SurfacePoints(:,:),x_len,y_len character(*),optional,intent(in) :: filename integer(int32),optional,intent(in) :: x_num,y_num type(IO_) :: f type(FEMDomain_) :: domain type(Math_) :: math character(:),allocatable :: line real(real64) :: x, y, r ,theta,x_sum,y_sum,center(2),max_r,coord(2), ret real(real64),allocatable :: r_data(:),theta_data(:),tx(:),tfx(:) integer(int32) :: num_ptr, i,id,ids(5),id_n if(present(filename) )then call f%open(filename,"r") ! 
get brief info num_ptr = 0 x_sum = 0.0d0 y_sum = 0.0d0 do line = f%readline() if(f%EOF) exit num_ptr = num_ptr+1 ! read x-y read(line,*) x, y x_sum = x_sum + x y_sum = y_sum + y enddo call f%close() center(1) = x_sum/dble(num_ptr) center(2) = y_sum/dble(num_ptr) r_data = zeros(num_ptr) theta_data = zeros(num_ptr) ! get detail call f%open(filename,"r") num_ptr=0 do line = f%readline() if(f%EOF) exit ! read x-y read(line,*) x, y coord(1) = x - center(1) coord(2) = y - center(2) r = sqrt( dot_product(coord,coord) ) theta = angles( coord ) num_ptr = num_ptr + 1 r_data(num_ptr) = r theta_data(num_ptr) = theta enddo max_r = maxval(r_data) r_data = r_data/max_r call f%close() elseif(present(SurfacePoints) )then num_ptr = size(SurfacePoints,1) ! centroid of the supplied surface points x_sum = sum(SurfacePoints(:,1)) y_sum = sum(SurfacePoints(:,2)) center(1) = x_sum/dble(num_ptr) center(2) = y_sum/dble(num_ptr) r_data = zeros(num_ptr) theta_data = zeros(num_ptr) num_ptr=0 do i=1,size(SurfacePoints,1) ! read x-y x = SurfacePoints(i,1) y = SurfacePoints(i,2) coord(1) = x - center(1) coord(2) = y - center(2) r = sqrt( dot_product(coord,coord) ) theta = angles( coord ) num_ptr = num_ptr + 1 r_data(num_ptr) = r theta_data(num_ptr) = theta enddo max_r = maxval(r_data) r_data = r_data/max_r else print *, "ERROR :: Leaf%create >> Please import SurfacePoints or Filename" stop endif call obj%femdomain%create("Cylinder3D",x_num=x_num,y_num=y_num) call obj%femdomain%resize(x=2.0d0) call obj%femdomain%resize(y=2.0d0) call obj%femdomain%resize(z=0.010d0) ! #################################### ! test interpolate !tx = [0.0d0, 1.0d0, 2.0d0, 3.0d0] !tfx = [0.0d0, 2.0d0, 4.0d0, 8.0d0] !ret = interpolate(x =tx,Fx=tfx,x_value = -0.50d0) !print *, ret !stop ! #################################### ! adjust shape do i=1,obj%femdomain%nn() x = obj%femdomain%mesh%nodcoord(i,1) y = obj%femdomain%mesh%nodcoord(i,2) r = sqrt(x**2 + y**2) coord(1:2) = obj%femdomain%mesh%nodcoord(i,1:2) r = norm(coord) theta = angles(coord) ! find nearest theta r = r * interpolate(x=theta_data,Fx=r_data,x_value=theta) x = r*x y = r*y obj%femdomain%mesh%nodcoord(i,1) = x obj%femdomain%mesh%nodcoord(i,2) = y enddo obj%A_PointNodeID = randi(obj%femdomain%nn()) obj%B_PointNodeID = randi(obj%femdomain%nn()) obj%A_PointElementID = randi(obj%femdomain%ne()) obj%B_PointElementID = randi(obj%femdomain%ne()) if(present(x_len) )then call obj%femdomain%resize(x=x_len) endif if(present(y_len) )then call obj%femdomain%resize(y=y_len) endif ! ! export data ! call f%open("theta_r_relation.txt","w") ! do i=1,size(r_data) ! call f%write(theta_data(i),r_data(i) ) ! enddo ! call f%close() ! call f%plot("theta_r_relation.txt","w l") ! call f%plot(filename,"w l") end subroutine ! ######################################## subroutine initLeaf(obj,config,regacy,Thickness,length,width,ShapeFactor,& MaxThickness,Maxlength,Maxwidth,rotx,roty,rotz,location,species,SoyWidthRatio,& curvature) class(leaf_),intent(inout) :: obj real(real64),optional,intent(in) :: Thickness,length,width,ShapeFactor real(real64),optional,intent(in) :: MaxThickness,Maxlength,Maxwidth real(real64),optional,intent(in):: rotx,roty,rotz,location(3),SoyWidthRatio,curvature integer(int32),optional,intent(in) :: species logical, optional,intent(in) :: regacy character(*),optional,intent(in) :: config type(IO_) :: leafconf,f character(200) :: fn,conf,line integer(int32),allocatable :: buf(:) integer(int32) :: id,rmc,n,node_id,node_id2,elemid,blcount,i,j real(real64) :: loc(3),radius,z,leaf_L logical :: debug=.false. ! open the configuration script used to generate this leaf if(.not.present(config) .or. index(config,".json")==0 )then ! 
generate a default configuration if(debug) print *, "New leaf-configuration >> leafconfig.json" call leafconf%open("leafconfig.json") write(leafconf%fh,*) '{' write(leafconf%fh,*) ' "type": "leaf",' write(leafconf%fh,*) ' "minlength": 0.005,' write(leafconf%fh,*) ' "minwidth": 0.005,' write(leafconf%fh,*) ' "minthickness": 0.0001,' write(leafconf%fh,*) ' "maxlength": 0.07,' write(leafconf%fh,*) ' "maxwidth": 0.045,' write(leafconf%fh,*) ' "maxthickness": 0.001,' write(leafconf%fh,*) ' "shaperatio": 0.3,' write(leafconf%fh,*) ' "drydensity": 0.0,' write(leafconf%fh,*) ' "watercontent": 0.0,' write(leafconf%fh,*) ' "xnum": 10,' write(leafconf%fh,*) ' "ynum": 10,' write(leafconf%fh,*) ' "znum": 20' write(leafconf%fh,*) '}' conf="leafconfig.json" call leafconf%close() else conf = trim(config) endif call leafconf%open(trim(conf)) blcount=0 do read(leafconf%fh,'(a)') line if(debug) print *, trim(line) if( adjustl(trim(line))=="{" )then blcount=1 cycle endif if( adjustl(trim(line))=="}" )then exit endif if(blcount==1)then if(index(line,"type")/=0 .and. index(line,"leaf")==0 )then print *, "ERROR: This config-file is not for leaf" return endif if(index(line,"maxlength")/=0 )then ! maximum leaf length rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%maxlength endif if(index(line,"maxwidth")/=0 )then ! maximum leaf width rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%maxwidth endif if(index(line,"maxthickness")/=0 )then ! maximum leaf thickness rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%maxthickness endif if(index(line,"minlength")/=0 )then ! minimum leaf length rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%minlength endif if(index(line,"shaperatio")/=0 )then ! leaf shape ratio rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%shaperatio endif if(index(line,"minwidth")/=0 )then ! minimum leaf width rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%minwidth endif if(index(line,"minthickness")/=0 )then ! minimum leaf thickness rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%minthickness endif if(index(line,"xnum")/=0 )then ! number of divisions along x rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%xnum endif if(index(line,"ynum")/=0 )then ! number of divisions along y rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%ynum endif if(index(line,"znum")/=0 )then ! number of divisions along z rmc=index(line,",") ! strip the trailing comma, if any if(rmc /= 0)then line(rmc:rmc)=" " endif id = index(line,":") read(line(id+1:),*) obj%znum endif cycle endif enddo call leafconf%close() ! generate the graph structure and the mesh structure. ! ! %%%%%%%%%%%%%%%%%%%%%%%%%%%%% B ! %% % % ! %% % %% ! %% % %% ! %% % %% ! %% % %% ! %% %% ! A %% %% ! <I> %%%%%%%%%%%%%%%% ! generate the mesh call obj%FEMdomain%create(meshtype="rectangular3D",x_num=obj%xnum,y_num=obj%ynum,z_num=obj%znum,& x_len=obj%minwidth/2.0d0,y_len=obj%minwidth/2.0d0,z_len=obj%minlength,shaperatio=obj%shaperatio) ! physical parameters allocate(obj%A(size(obj%FEMDomain%Mesh%ElemNod,1) ) ) obj%A(:) = 0.0d0 allocate(obj%source(size(obj%FEMDomain%Mesh%ElemNod,1) ) ) obj%source(:) = 0.0d0 allocate(obj%ppfd(size(obj%FEMDomain%Mesh%ElemNod,1) ) ) obj%ppfd(:) = 0.0d0 ! 
initialize physical parameters obj%DryDensity = zeros( obj%FEMDomain%ne() ) obj%watercontent = zeros(obj%FEMDomain%ne()) obj%DryDensity(:) = freal(leafconf%parse(conf,key1="drydensity")) obj%watercontent(:) = freal(leafconf%parse(conf,key1="watercontent")) ! build the lists of node IDs and element IDs that belong to face <I> obj%I_planeNodeID = obj%FEMdomain%mesh%getNodeList(zmax=0.0d0) obj%I_planeElementID = obj%FEMdomain%mesh%getElementList(zmax=0.0d0) ! build the lists of node IDs and element IDs that belong to face <II> obj%II_planeNodeID = obj%FEMdomain%mesh%getNodeList(zmin=obj%minlength) obj%II_planeElementID = obj%FEMdomain%mesh%getElementList(zmin=obj%minlength) buf = obj%FEMDomain%mesh%getNodeList(& xmin=obj%minwidth/2.0d0 - obj%minwidth/dble(obj%xnum)/2.0d0 ,& xmax=obj%minwidth/2.0d0 + obj%minwidth/dble(obj%xnum)/2.0d0 ,& ymin=obj%minwidth/2.0d0 - obj%minwidth/dble(obj%ynum)/2.0d0 ,& ymax=obj%minwidth/2.0d0 + obj%minwidth/dble(obj%ynum)/2.0d0 ,& zmax=0.0d0) obj%A_PointNodeID = buf(1) buf = obj%FEMDomain%mesh%getNodeList(& xmin=obj%minwidth/2.0d0 - obj%minwidth/dble(obj%xnum)/2.0d0 ,& xmax=obj%minwidth/2.0d0 + obj%minwidth/dble(obj%xnum)/2.0d0 ,& ymin=obj%minwidth/2.0d0 - obj%minwidth/dble(obj%ynum)/2.0d0 ,& ymax=obj%minwidth/2.0d0 + obj%minwidth/dble(obj%ynum)/2.0d0 ,& zmin=obj%minlength) obj%B_PointNodeID = buf(1) buf = obj%FEMDomain%mesh%getElementList(& xmin=obj%minwidth/2.0d0 - obj%minwidth/dble(obj%xnum)/2.0d0 ,& xmax=obj%minwidth/2.0d0 + obj%minwidth/dble(obj%xnum)/2.0d0 ,& ymin=obj%minwidth/2.0d0 - obj%minwidth/dble(obj%ynum)/2.0d0 ,& ymax=obj%minwidth/2.0d0 + obj%minwidth/dble(obj%ynum)/2.0d0 ,& zmax=0.0d0) obj%A_PointElementID = buf(1) buf = obj%FEMDomain%mesh%getElementList(& xmin=obj%minwidth/2.0d0 - obj%minwidth/dble(obj%xnum)/2.0d0 ,& xmax=obj%minwidth/2.0d0 + obj%minwidth/dble(obj%xnum)/2.0d0 ,& ymin=obj%minwidth/2.0d0 - obj%minwidth/dble(obj%ynum)/2.0d0 ,& ymax=obj%minwidth/2.0d0 + obj%minwidth/dble(obj%ynum)/2.0d0 ,& zmin=obj%minlength) obj%B_PointElementID = buf(1) !print *, obj%A_PointNodeID !print *, obj%B_PointNodeID !print *, obj%A_PointElementID !print *, obj%B_PointElementID ! call obj%FEMdomain%remove() if(present(species) )then call obj%FEMdomain%create(meshtype="Leaf3D",x_num=obj%xnum,y_num=obj%ynum,z_num=obj%znum,& x_len=obj%minwidth/2.0d0,y_len=obj%minthickness/2.0d0,z_len=obj%minlength,species=species,SoyWidthRatio=SoyWidthRatio) else call obj%FEMdomain%create(meshtype="Leaf3D",x_num=obj%xnum,y_num=obj%ynum,z_num=obj%znum,& x_len=obj%minwidth/2.0d0,y_len=obj%minthickness/2.0d0,z_len=obj%minlength,shaperatio=obj%shaperatio) endif ! for debugging ! call f%open("I_phaseNodeID.txt") ! do i=1,size(obj%I_planeNodeID) ! write(f%fh,*) obj%femdomain%mesh%NodCoord( obj%I_planeNodeID(i) ,:) ! enddo ! call f%close() ! ! call f%open("II_phaseNodeID.txt") ! do i=1,size(obj%II_planeNodeID) ! write(f%fh,*) obj%femdomain%mesh%NodCoord( obj%II_planeNodeID(i) ,:) ! enddo ! call f%close() ! ! call f%open("I_phaseElementID.txt") ! do i=1,size(obj%I_planeElementID) ! do j=1,size(obj%femdomain%mesh%elemnod,2) ! write(f%fh,*) obj%femdomain%mesh%NodCoord( & ! obj%femdomain%mesh%elemnod(obj%I_planeElementID(i),j),:) ! enddo ! enddo ! call f%close() ! ! call f%open("II_phaseElementID.txt") ! do i=1,size(obj%II_planeElementID) ! do j=1,size(obj%femdomain%mesh%elemnod,2) ! write(f%fh,*) obj%femdomain%mesh%NodCoord( & ! obj%femdomain%mesh%elemnod(obj%II_planeElementID(i),j),:) ! enddo ! enddo ! call f%close() ! return ! for point A, build the lists of element IDs, node IDs, and their coordinates if( present(regacy))then if(regacy .eqv. 
.true.)then loc(:)=0.0d0 if(present(location) )then loc(:)=location(:) endif obj%ShapeFactor = input(default=0.30d0 ,option= ShapeFactor ) obj%Thickness = input(default=0.10d0,option= Thickness ) obj%length = input(default=0.10d0,option= length ) obj%width = input(default=0.10d0,option= width) obj%MaxThickness = input(default=0.10d0 ,option= MaxThickness ) obj%Maxlength = input(default=10.0d0 ,option= Maxlength ) obj%Maxwidth = input(default=2.0d0 ,option= Maxwidth) obj%outer_normal_bottom(:)=0.0d0 obj%outer_normal_bottom(1)=1.0d0 obj%outer_normal_top(:)=0.0d0 obj%outer_normal_top(1)=1.0d0 ! rotate obj%outer_normal_Bottom(:) = Rotation3D(vector=obj%outer_normal_bottom,rotx=rotx,roty=roty,rotz=rotz) obj%outer_normal_top(:) = Rotation3D(vector=obj%outer_normal_top,rotx=rotx,roty=roty,rotz=rotz) obj%center_bottom(:)=loc(:) obj%center_top(:) = obj%center_bottom(:) + obj%length*obj%outer_normal_bottom(:) endif endif end subroutine ! ######################################## subroutine curveleaf(obj,curvature) ! deform by curvature class(leaf_),intent(inout) :: obj real(real64),intent(in) :: curvature real(real64) :: leaf_L,radius,z integer(int32) :: i if(curvature < dble(1.0e-5))then print *, "Caution >> initLeaf >> curvature is too small < 1.0e-5" print *, "Then, ignored." return endif radius = 1.0d0/curvature leaf_L = maxval(obj%femdomain%mesh%nodcoord(:,3)) - minval(obj%femdomain%mesh%nodcoord(:,3)) leaf_L = 0.50d0*leaf_L do i=1, obj%femdomain%nn() z = obj%femdomain%mesh%nodcoord(i,3) obj%femdomain%mesh%nodcoord(i,2) = & obj%femdomain%mesh%nodcoord(i,2) & - sqrt(radius*radius - leaf_L*leaf_L ) & + sqrt(radius*radius - (z - leaf_L)*(z - leaf_L) ) enddo end subroutine ! ######################################## recursive subroutine rotateleaf(obj,x,y,z,reset) class(leaf_),intent(inout) :: obj real(real64),optional,intent(in) :: x,y,z logical,optional,intent(in) :: reset real(real64),allocatable :: origin1(:),origin2(:),disp(:) if(present(reset) )then if(reset .eqv. .true.)then call obj%femdomain%rotate(-obj%rot_x,-obj%rot_y,-obj%rot_z) obj%rot_x = 0.0d0 obj%rot_y = 0.0d0 obj%rot_z = 0.0d0 endif endif origin1 = obj%getCoordinate("A") call obj%femdomain%rotate(x,y,z) obj%rot_x = obj%rot_x + input(default=0.0d0, option=x) obj%rot_y = obj%rot_y + input(default=0.0d0, option=y) obj%rot_z = obj%rot_z + input(default=0.0d0, option=z) origin2 = obj%getCoordinate("A") disp = origin1 disp(:) = origin1(:) - origin2(:) call obj%femdomain%move(x=disp(1),y=disp(2),z=disp(3) ) end subroutine ! ######################################## ! ######################################## recursive subroutine moveleaf(obj,x,y,z,reset) class(leaf_),intent(inout) :: obj real(real64),optional,intent(in) :: x,y,z logical,optional,intent(in) :: reset real(real64),allocatable :: origin1(:),origin2(:),disp(:) if(present(reset) )then if(reset .eqv. .true.)then call obj%femdomain%move(-obj%disp_x,-obj%disp_y,-obj%disp_z) obj%disp_x = 0.0d0 obj%disp_y = 0.0d0 obj%disp_z = 0.0d0 endif endif call obj%femdomain%move(x,y,z) obj%disp_x = obj%disp_x + input(default=0.0d0, option=x) obj%disp_y = obj%disp_y + input(default=0.0d0, option=y) obj%disp_z = obj%disp_z + input(default=0.0d0, option=z) end subroutine ! ######################################## ! ######################################## subroutine connectleafleaf(obj,direct,leaf) class(leaf_),intent(inout) :: obj class(leaf_),intent(inout) :: leaf character(2),intent(in) :: direct real(real64),allocatable :: x1(:),x2(:),disp(:) !if(present(Stem) )then ! if(direct=="->" .or. 
direct=="=>")then ! ! move obj to connect stem (stem is not moved.) ! x1 = leaf%getCoordinate("A") ! x2 = stem%getCoordinate("B") ! disp = x2 - x1 ! call leaf%move(x=disp(1),y=disp(2),z=disp(3) ) ! endif ! ! ! if(direct=="<-" .or. direct=="<=")then ! ! move obj to connect stem (stem is not moved.) ! x1 = stem%getCoordinate("A") ! x2 = leaf%getCoordinate("B") ! disp = x2 - x1 ! call stem%move(x=disp(1),y=disp(2),z=disp(3) ) ! endif ! return !endif if(direct=="->" .or. direct=="=>")then ! move obj to connect leaf (leaf is not moved.) x1 = obj%getCoordinate("A") x2 = leaf%getCoordinate("B") disp = x2 - x1 call obj%move(x=disp(1),y=disp(2),z=disp(3) ) endif if(direct=="<-" .or. direct=="<=")then ! move obj to connect leaf (leaf is not moved.) x1 = leaf%getCoordinate("A") x2 = obj%getCoordinate("B") disp = x2 - x1 call leaf%move(x=disp(1),y=disp(2),z=disp(3) ) endif end subroutine ! ######################################## ! ######################################## subroutine connectLeafStem(obj,direct,Stem) class(leaf_),intent(inout) :: obj class(Stem_),intent(inout) :: stem character(2),intent(in) :: direct real(real64),allocatable :: x1(:),x2(:),disp(:) if(direct=="->" .or. direct=="=>")then ! move obj to connect stem (stem is not moved.) x1 = obj%getCoordinate("A") x2 = stem%getCoordinate("B") disp = x2 - x1 call obj%move(x=disp(1),y=disp(2),z=disp(3) ) endif if(direct=="<-" .or. direct=="<=")then ! move obj to connect stem (stem is not moved.) x1 = stem%getCoordinate("A") x2 = obj%getCoordinate("B") disp = x2 - x1 call stem%move(x=disp(1),y=disp(2),z=disp(3) ) endif end subroutine ! ######################################## ! ######################################## function getCoordinateleaf(obj,nodetype) result(ret) class(leaf_),intent(inout) :: obj character(*),intent(in) :: nodetype real(real64),allocatable :: ret(:) integer(int32) :: dimnum dimnum = size(obj%femdomain%mesh%nodcoord,2) allocate(ret(dimnum) ) if( trim(nodetype)=="A" .or. trim(nodetype)=="a")then ret = obj%femdomain%mesh%nodcoord(obj%A_PointNodeID,:) endif if( trim(nodetype)=="B" .or. trim(nodetype)=="b")then ret = obj%femdomain%mesh%nodcoord(obj%B_PointNodeID,:) endif end function ! ######################################## ! ######################################## subroutine gmshleaf(obj,name) class(leaf_),intent(inout) :: obj character(*),intent(in) ::name if(obj%femdomain%mesh%empty() )then return endif call obj%femdomain%gmsh(Name=name) ! output PPFD call obj%femdomain%gmsh(Name=name//"_PPFD_",field=obj%PPFD) ! output the source amount call obj%femdomain%gmsh(Name=name//"_SOURCE_",field=obj%source) ! output the photosynthetic rate call obj%femdomain%gmsh(Name=name//"_A_",field=obj%A) end subroutine ! ######################################## ! ######################################## subroutine mshleaf(obj,name) class(leaf_),intent(inout) :: obj character(*),intent(in) ::name if(obj%femdomain%mesh%empty() )then return endif call obj%femdomain%msh(Name=name) ! output PPFD !call obj%femdomain%msh(Name=name//"_PPFD_",field=obj%PPFD) ! output the source amount !call obj%femdomain%msh(Name=name//"_SOURCE_",field=obj%source) ! output the photosynthetic rate !call obj%femdomain%msh(Name=name//"_A_",field=obj%A) end subroutine ! ######################################## ! ######################################## subroutine vtkleaf(obj,name) class(leaf_),intent(inout) :: obj character(*),intent(in) ::name if(obj%femdomain%mesh%empty() )then return endif call obj%femdomain%vtk(Name=name) ! output PPFD !call obj%femdomain%msh(Name=name//"_PPFD_",field=obj%PPFD) ! 
output the source amount !call obj%femdomain%msh(Name=name//"_SOURCE_",field=obj%source) ! output the photosynthetic rate !call obj%femdomain%msh(Name=name//"_A_",field=obj%A) end subroutine ! ######################################## ! ######################################## subroutine stlleaf(obj,name) class(leaf_),intent(inout) :: obj character(*),intent(in) ::name if(obj%femdomain%mesh%empty() )then return endif call obj%femdomain%stl(Name=name) ! output PPFD !call obj%femdomain%msh(Name=name//"_PPFD_",field=obj%PPFD) ! output the source amount !call obj%femdomain%msh(Name=name//"_SOURCE_",field=obj%source) ! output the photosynthetic rate !call obj%femdomain%msh(Name=name//"_A_",field=obj%A) end subroutine ! ######################################## ! ######################################## subroutine resizeleaf(obj,x,y,z) class(Leaf_),intent(inout) :: obj real(real64),optional,intent(in) :: x,y,z real(real64),allocatable :: origin1(:), origin2(:),disp(:) origin1 = obj%getCoordinate("A") call obj%femdomain%resize(x_len=x,y_len=y,z_len=z) origin2 = obj%getCoordinate("A") disp = origin1 - origin2 call obj%move(x=disp(1),y=disp(2),z=disp(3) ) end subroutine ! ######################################## ! ######################################## subroutine rescaleleaf(obj,x,y,z) class(Leaf_),intent(inout) :: obj real(real64),optional,intent(in) :: x,y,z real(real64),allocatable :: origin1(:), origin2(:),disp(:) origin1 = obj%getCoordinate("A") call obj%femdomain%resize(x_rate=x,y_rate=y,z_rate=z) origin2 = obj%getCoordinate("A") disp = origin1 - origin2 call obj%move(x=disp(1),y=disp(2),z=disp(3) ) end subroutine ! ######################################## ! ######################################## !subroutine LayTracingLeaf(obj,maxPPFD,light,) ! class(Leaf_),intent(inout) :: obj ! class(Light_),intent(in) :: light ! real(real64),intent(in) :: maxPPFD ! integer(int32) :: i,j,n,m,node_id ! real(real64) :: lx(3) ! real(real64),allocatable :: Elem_x(:,:) ! ! compute the PPFD ! ! Photosynthetic photon flux density (PPFD) ! ! micro-mol/m^2/s ! ! ! ignore reflection and refraction; straight rays only ! ! n=size(obj%FEMDomain%Mesh%ElemNod,2) ! m=size(obj%FEMDomain%Mesh%NodCoord,2) ! ! allocate(Elem_x(n,m) ) ! ! for each element ! do i=1, size(obj%FEMDomain%Mesh%ElemNod,1) ! do j=1,size(obj%FEMDomain%Mesh%ElemNod,2) ! node_id = obj%FEMDomain%Mesh%ElemNod(i,j) ! Elem_x(j,:) = obj%FEMDomain%Mesh%NodCoord(node_id,:) ! enddo ! ! element coordinates >> Elem_x(:,:) ! ! light-source coordinates >> lx(:) ! ! enddo ! ! ! !end subroutine ! ######################################## ! ######################################## subroutine photosynthesisLeaf(obj,dt,air) ! https://eprints.lib.hokudai.ac.jp/dspace/bitstream/2115/39102/1/67-013.pdf class(Leaf_),intent(inout) :: obj type(Air_),intent(in) :: air type(IO_) :: f real(real64),intent(in) :: dt ! parameters of the Farquhar model real(real64) :: A ! CO2 assimilation rate real(real64) :: V_c ! carboxylation rate real(real64) :: V_o ! oxygenation rate real(real64) :: W_c! CO2 assimilation rate when RuBP is saturated real(real64) :: W_j! CO2 assimilation rate when RuBP regeneration is limiting real(real64) :: V_cmax ! maximum carboxylation rate real(real64) :: V_omax ! maximum oxygenation rate real(real64) :: O2 ! oxygen concentration real(real64) :: CO2 ! carbon dioxide concentration real(real64) :: R_d ! dark respiration rate real(real64) :: K_c ! Michaelis constant for CO2 real(real64) :: K_o ! Michaelis constant for O2 real(real64) :: J_ ! electron transport rate real(real64) :: I_ ! light intensity real(real64) :: phi ! initial slope of the I-J curve real(real64) :: J_max !maximum electron transport rate real(real64) :: theta_r ! convexity of the curve real(real64) :: pfd real(real64) :: Lambda, volume integer(int32) :: i, element_id obj%temp=air%temp obj%CO2 = air%CO2 obj%O2 = air%O2 ! TT-model do i=1,size(obj%source) ! 
compute the electron transport rate for each element element_id = i pfd = obj%ppfd(element_id) ! non-rectangular light response: J = phi*I/sqrt(1+(phi*I/J_max)^2), with phi = 0.24 obj%J_ = 0.240d0*pfd/sqrt(1.0d0 + (0.240d0*0.240d0)*pfd*pfd/obj%J_max/obj%J_max) ! estimate V_omax from Lambda obj%V_omax = obj%Lambda*( 2.0d0 * obj%V_cmax*obj%K_o )/(obj%K_c*obj%O2) ! compute the CO2 fixation rates V_c = (obj%V_cmax*obj%CO2)/(obj%CO2 + obj%K_c * (1.0d0+ obj%O2/obj%K_o) ) V_o = (obj%V_omax*obj%O2 )/(obj%O2 + obj%K_o * (1.0d0 + obj%CO2/obj%K_c) ) ! CO2 assimilation rate when RuBP is saturated W_c = (obj%V_cmax*(obj%CO2 - obj%Lambda))/(obj%CO2 + obj%K_c*(1.0d0 + obj%O2/obj%K_o)) ! CO2 assimilation rate when RuBP regeneration is limiting W_j = obj%J_ * (obj%CO2 - obj%Lambda)/(4.0d0 * obj%CO2 + 8.0d0 * obj%Lambda ) - obj%R_d if(W_j >= W_c )then A = W_c else A = W_j endif ! compute the element volume, m^3 obj%A(element_id) = A volume = obj%femdomain%getVolume(elem=element_id) !CO2 fixation, micro-mol/m-2/s ! this needs to be a per-volume quantity ! assuming a typical leaf thickness of 2 mm for now, ! 1 micro-mol/m^2/s is treated as 1 micro-mol/ 0.002m^3/s = 500 micro-mol/m^3/s ! the source amount is then converted to the mass of C6H12O6 in grams. ! molecular weight of CO2: 44.01 g/mol ! molecular weight of C6H12O6: 180.16 g/mol ! 6CO2 + 12H2O => C6H12O6 + 6H2O + 6O2 ! hence, the amount of source produced is ! {CO2 fixed, mol} x {1/6, to moles of glucose} x {molecular weight of glucose} obj%source(i) =obj%source(i)+ A*dt/500.0d0*volume * 1.0d0/6.0d0 * 180.160d0 enddo ! ! For each element, estimate photosynthesis by the Farquhar model ! do i=1,size(obj%source) ! ! ! compute photosynthesis ! ! Farquhar model ! V_c = (V_cmax*CO2)/(CO2 + K_o * (1.0d0 + O2/K_o) ) ! V_o = (V_omax*O2 )/(O2 + K_o * (1.0d0 + CO2/K_c) ) ! ! Lambda = (V_omax*K_c*O2)/( 2.0d0 * V_cmax*K_o ) ! ! W_c = (V_cmax*(CO2 - Lambda))/(CO2 + K_c*(1.0d0 + O2/K_o) ) ! ! J_ = (phi*I_ + J_max - & ! sqrt( (phi*I_ + J_max)**(2.0d0) - 4.0d0*phi*I_*theta_r*J_max)& ! /(2.0d0 * theta_r) ) ! W_j = J_ * (CO2 - Lambda)/(4.0d0 * CO2 + 8.0d0 * Lambda ) - R_d ! ! CO2 assimilation rate ! A = V_c + 0.50d0*V_o - R_d ! ! if(W_j >= W_c )then ! A = W_c ! else ! A = W_j ! endif ! ! ! enddo ! ! end subroutine subroutine adjustLeaf(obj,width) class(Leaf_),intent(inout) :: obj real(real64),intent(in) :: width(:,:) end subroutine end module
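! ---------------------------------------------------------------------
! Minimal usage sketch for the module above (illustrative, not part of
! the original file): it assumes a plantFEM-style build that provides
! FEMDomainClass, AirClass, etc., and relies only on the Air_ fields
! (temp, CO2, O2) that photosynthesisLeaf reads. The program name
! leaf_demo is hypothetical.
program leaf_demo
    use LeafClass
    use AirClass
    implicit none
    type(Leaf_) :: leaf
    type(Air_)  :: air
    ! ambient state read by photosynthesisLeaf (temperature in K, gases in ppm)
    air%temp = 303.0d0
    air%CO2  = 380.0d0
    air%O2   = 202000.0d0
    call leaf%init()                               ! writes/reads the default leafconfig.json
    call leaf%curve(curvature=2.0d0)               ! bend the blade with radius 1/curvature
    call leaf%photosynthesis(dt=60.0d0, air=air)   ! accumulate source over a 60 s step
    call leaf%vtk("leaf_demo")                     ! export the mesh for inspection
end program leaf_demo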
{-# OPTIONS --cubical --safe --postfix-projections #-} module Data.Binary.Skew where open import Prelude open import Data.Nat open import Data.List 𝔹 : Type 𝔹 = List ℕ inc : 𝔹 → 𝔹 inc [] = zero ∷ [] inc (x ∷ []) = zero ∷ x ∷ [] inc (x₁ ∷ zero ∷ xs) = suc x₁ ∷ xs inc (x₁ ∷ suc x₂ ∷ xs) = zero ∷ x₁ ∷ x₂ ∷ xs ⟦_⇑⟧ : ℕ → 𝔹 ⟦ zero ⇑⟧ = [] ⟦ suc n ⇑⟧ = inc ⟦ n ⇑⟧ skew : ℕ → ℕ skew n = suc (n + n) w : ℕ → ℕ → ℕ w zero a = a w (suc n) a = skew (w n a) ⟦_∷_⇓⟧^ : ℕ → (ℕ → ℕ) → ℕ → ℕ ⟦ x ∷ xs ⇓⟧^ a = let a′ = w x a in a′ + xs (skew a′) ⟦_⇓⟧ : 𝔹 → ℕ ⟦ [] ⇓⟧ = zero ⟦ x ∷ xs ⇓⟧ = let a = w x 1 in a + foldr ⟦_∷_⇓⟧^ (const zero) xs a -- open import Path.Reasoning -- import Data.Nat.Properties as ℕ -- inc-suc : ∀ x → ⟦ inc x ⇓⟧ ≡ suc ⟦ x ⇓⟧ -- inc-suc [] = refl -- inc-suc (x ∷ []) = refl -- inc-suc (x ∷ zero ∷ xs) = cong suc (ℕ.+-assoc (w x 1) (w x 1) _) -- inc-suc (x₁ ∷ suc x₂ ∷ xs) = cong suc (cong (w x₁ 1 +_) {!!}) -- 𝔹-rightInv : ∀ x → ⟦ ⟦ x ⇑⟧ ⇓⟧ ≡ x -- 𝔹-rightInv zero = refl -- 𝔹-rightInv (suc x) = {!!} -- 𝔹-leftInv : ∀ x → ⟦ ⟦ x ⇓⟧ ⇑⟧ ≡ x -- 𝔹-leftInv [] = refl -- 𝔹-leftInv (x ∷ xs) = {!!} -- 𝔹⇔ℕ : 𝔹 ⇔ ℕ -- 𝔹⇔ℕ .fun = ⟦_⇓⟧ -- 𝔹⇔ℕ .inv = ⟦_⇑⟧ -- 𝔹⇔ℕ .rightInv x = {!!} -- 𝔹⇔ℕ .leftInv = {!!}
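-- A few concrete values of the conversion, for intuition:
--   ⟦ 0 ⇑⟧ ≡ []            ⟦ 4 ⇑⟧ ≡ 0 ∷ 1 ∷ []
--   ⟦ 1 ⇑⟧ ≡ 0 ∷ []        ⟦ 5 ⇑⟧ ≡ 0 ∷ 0 ∷ 0 ∷ []
--   ⟦ 2 ⇑⟧ ≡ 0 ∷ 0 ∷ []    ⟦ 6 ⇑⟧ ≡ 1 ∷ 0 ∷ []
--   ⟦ 3 ⇑⟧ ≡ 1 ∷ []        ⟦ 7 ⇑⟧ ≡ 2 ∷ []
-- Digits are gap-encoded: each entry records how many extra times skew
-- is applied on top of the running weight, so ⟦ 2 ∷ [] ⇓⟧ ≡ w 2 1 ≡ 7.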
The circlepath function is defined as $z + r \exp(2 \pi i x)$.
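That is, `circlepath z r` is the path $x \mapsto z + r e^{2\pi i x}$ for $x \in [0,1]$, which traverses the circle of radius $r$ centred at $z$ once counterclockwise.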
module Text.WebIDL.Types.Definition import Generics.Derive import Text.WebIDL.Types.Attribute import Text.WebIDL.Types.Argument import Text.WebIDL.Types.Identifier import Text.WebIDL.Types.Member import Text.WebIDL.Types.StringLit import Text.WebIDL.Types.Type %hide Language.Reflection.TT.Namespace %language ElabReflection ||| CallbackRest :: ||| identifier = Type ( ArgumentList ) ; public export record Callback where constructor MkCallback attributes : ExtAttributeList name : Identifier type : IdlType args : ArgumentList %runElab derive "Callback" [Generic,Meta,Eq,Show,HasAttributes] ||| CallbackRestOrInterface :: ||| CallbackRest ||| interface identifier { CallbackInterfaceMembers } ; public export record CallbackInterface where constructor MkCallbackInterface attributes : ExtAttributeList name : Identifier members : CallbackInterfaceMembers %runElab derive "CallbackInterface" [Generic,Meta,Eq,Show,HasAttributes] ||| Dictionary :: ||| dictionary identifier Inheritance { DictionaryMembers } ; public export record Dictionary where constructor MkDictionary attributes : ExtAttributeList name : Identifier inherits : Inheritance members : DictionaryMembers %runElab derive "Dictionary" [Generic,Meta,Eq,Show,HasAttributes] ||| Enum :: ||| enum identifier { EnumValueList } ; ||| ||| EnumValueList :: ||| string EnumValueListComma ||| ||| EnumValueListComma :: ||| , EnumValueListString ||| ε ||| ||| EnumValueListString :: ||| string EnumValueListComma ||| ε public export record Enum where constructor MkEnum attributes : ExtAttributeList name : Identifier values : List1 StringLit %runElab derive "Enum" [Generic,Meta,Eq,Show,HasAttributes] ||| IncludesStatement :: ||| identifier includes identifier ; public export record Includes where constructor MkIncludes attributes : ExtAttributeList name : Identifier includes : Identifier %runElab derive "Includes" [Generic,Meta,Eq,Show,HasAttributes] ||| InterfaceRest :: ||| identifier Inheritance { InterfaceMembers } ; public export record Interface where constructor MkInterface attributes : ExtAttributeList name : Identifier inherits : Inheritance members : InterfaceMembers %runElab derive "Interface" [Generic,Meta,Eq,Show,HasAttributes] ||| MixinRest :: ||| mixin identifier { MixinMembers } ; public export record Mixin where constructor MkMixin attributes : ExtAttributeList name : Identifier members : MixinMembers %runElab derive "Mixin" [Generic,Meta,Eq,Show,HasAttributes] ||| Namespace :: ||| namespace identifier { NamespaceMembers } ; public export record Namespace where constructor MkNamespace attributes : ExtAttributeList name : Identifier members : NamespaceMembers %runElab derive "Namespace" [Generic,Meta,Eq,Show,HasAttributes] ||| Typedef :: ||| typedef TypeWithExtendedAttributes identifier ; public export record Typedef where constructor MkTypedef attributes : ExtAttributeList typeAttributes : ExtAttributeList type : IdlType name : Identifier %runElab derive "Typedef" [Generic,Meta,Eq,Show,HasAttributes] ||| PartialDictionary :: ||| dictionary identifier { DictionaryMembers } ; public export record PDictionary where constructor MkPDictionary attributes : ExtAttributeList name : Identifier members : DictionaryMembers %runElab derive "PDictionary" [Generic,Meta,Eq,Show,HasAttributes] ||| PartialInterfaceRest :: ||| identifier { PartialInterfaceMembers } ; public export record PInterface where constructor MkPInterface attributes : ExtAttributeList name : Identifier members : PartialInterfaceMembers %runElab derive "PInterface" 
[Generic,Meta,Eq,Show,HasAttributes] ||| MixinRest :: ||| mixin identifier { MixinMembers } ; public export record PMixin where constructor MkPMixin attributes : ExtAttributeList name : Identifier members : MixinMembers %runElab derive "PMixin" [Generic,Meta,Eq,Show,HasAttributes] ||| Namespace :: ||| namespace identifier { NamespaceMembers } ; public export record PNamespace where constructor MkPNamespace attributes : ExtAttributeList name : Identifier members : NamespaceMembers %runElab derive "PNamespace" [Generic,Meta,Eq,Show,HasAttributes] public export DefTypes : List Type DefTypes = [ Callback , CallbackInterface , Dictionary , Enum , Includes , Interface , Mixin , Namespace , Typedef ] public export PartTypes : List Type PartTypes = [PDictionary, PInterface, PMixin, PNamespace] ||| Definition :: ||| CallbackOrInterfaceOrMixin ||| Namespace ||| Partial ||| Dictionary ||| Enum ||| Typedef ||| IncludesStatement ||| CallbackOrInterfaceOrMixin :: ||| callback CallbackRestOrInterface ||| interface InterfaceOrMixin ||| ||| InterfaceOrMixin :: ||| InterfaceRest ||| MixinRest public export Definition : Type Definition = NS I DefTypes public export 0 Definitions : Type Definitions = NP List DefTypes ||| PartialDefinition :: ||| interface PartialInterfaceOrPartialMixin ||| PartialDictionary ||| Namespace ||| ||| PartialInterfaceOrPartialMixin :: ||| PartialInterfaceRest ||| MixinRest public export Part : Type Part = NS I PartTypes public export accumNs : {ts : _} -> List (NS I ts) -> NP List ts accumNs = foldl (\np,ns => hliftA2 (++) (toNP ns) np) hempty public export 0 PartOrDef : Type PartOrDef = NS I [Part,Definition] public export 0 PartsAndDefs : Type PartsAndDefs = NP List [Part,Definition] public export defs : PartsAndDefs -> Definitions defs = accumNs . 
get Definition -------------------------------------------------------------------------------- -- Domain -------------------------------------------------------------------------------- update : Eq k => (b -> b) -> k -> (b -> k) -> List b -> List b update f k bk = map (\b => if bk b == k then f b else b) mergeDict : PDictionary -> Dictionary -> Dictionary mergeDict d = record { members $= (++ d.members) } mergeIface : PInterface -> Interface -> Interface mergeIface i = record { members $= (++ map to i.members) } where to : (a,b) -> (a, NS I [c,b]) to (x, y) = (x, inject y) mergeMixin : PMixin -> Mixin -> Mixin mergeMixin m = record { members $= (++ m.members) } mergeNamespace : PNamespace -> Namespace -> Namespace mergeNamespace n = record { members $= (++ n.members) } public export record Domain where constructor MkDomain domain : String callbacks : List Callback callbackInterfaces : List CallbackInterface dictionaries : List Dictionary enums : List Enum includeStatements : List Includes interfaces : List Interface mixins : List Mixin namespaces : List Namespace typedefs : List Typedef %runElab derive "Domain" [Generic,Meta,Eq,Show,HasAttributes] applyPart : Domain -> Part -> Domain applyPart d (Z v) = record { dictionaries $= update (mergeDict v) v.name name } d applyPart d (S $ Z v) = record { interfaces $= update (mergeIface v) v.name name } d applyPart d (S $ S $ Z v) = record { mixins $= update (mergeMixin v) v.name name } d applyPart d (S $ S $ S $ Z v) = record { namespaces $= update (mergeNamespace v) v.name name } d export toDomains : List (String,PartsAndDefs) -> List Domain toDomains ps = let defs = map (\(s,pad) => fromNP s (defs pad)) ps prts = concatMap (\(_,pad) => get Part pad) ps in map (\d => foldl applyPart d prts) defs where fromNP : String -> Definitions -> Domain fromNP s [c,ci,d,e,ic,it,m,n,t] = MkDomain s c ci d e ic it m n t
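-- Example: a WebIDL fragment such as
--
--   interface Foo { };
--   partial interface Foo { attribute long bar; };
--
-- parses to one `Interface` and one `PInterface` sharing the name
-- `Foo`; `toDomains` then folds the partial definition's members into
-- the full interface via `applyPart`/`mergeIface`, so the resulting
-- `Domain` carries a single, complete `Foo`.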
{-# OPTIONS --without-K --safe #-} open import Categories.Category open import Categories.Category.Monoidal module Categories.Category.Monoidal.Symmetric {o ℓ e} {C : Category o ℓ e} (M : Monoidal C) where open import Level open import Data.Product using (Σ; _,_) open import Categories.Functor.Bifunctor open import Categories.NaturalTransformation.NaturalIsomorphism open import Categories.Morphism C open import Categories.Category.Monoidal.Braided M open Category C open Commutation private variable X Y Z : Obj -- symmetric monoidal category -- commutative braided monoidal category record Symmetric : Set (levelOfTerm M) where field braided : Braided module braided = Braided braided open braided public private B : ∀ {X Y} → X ⊗₀ Y ⇒ Y ⊗₀ X B {X} {Y} = braiding.⇒.η (X , Y) field commutative : B {X} {Y} ∘ B {Y} {X} ≈ id braided-iso : X ⊗₀ Y ≅ Y ⊗₀ X braided-iso = record { from = B ; to = B ; iso = record { isoˡ = commutative ; isoʳ = commutative } }
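-- For intuition: in the cartesian monoidal structure on Sets the
-- braiding is the swap (x , y) ↦ (y , x), and `commutative` holds
-- because swapping twice is the identity. A symmetric monoidal
-- category is thus exactly a braided one whose braiding is an
-- involution, which is what `braided-iso` packages up.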
-- Andreas, 2013-10-21 -- Test case for CheckInternal extracted from The Agda standard library -- Propositional (intensional) equality module FunExt where open import Common.Level open import Common.Equality Extensionality : (a b : Level) → Set _ Extensionality a b = {A : Set a} {B : A → Set b} {f g : (x : A) → B x} → (∀ x → f x ≡ g x) → f ≡ g -- Functional extensionality implies a form of extensionality for -- Π-types. ∀-extensionality : ∀ {a b} → Extensionality a (lsuc b) → {A : Set a} (B₁ B₂ : A → Set b) → (∀ x → B₁ x ≡ B₂ x) → (∀ x → B₁ x) ≡ (∀ x → B₂ x) ∀-extensionality ext B₁ B₂ B₁≡B₂ with ext B₁≡B₂ ∀-extensionality ext B .B B₁≡B₂ | refl = refl
#include "gateserver.h" #include <boost/property_tree/ptree.hpp> #include <boost/property_tree/xml_parser.hpp> #include "mylogger.h" #include "protos/login.pb.h" #include "protos/server.pb.h" #include "net_asio/timer.h" #include "url.h" static LoggerPtr logger(Logger::getLogger("gate")); void onLogin(Cmd::Login::LoginRequest* msg, NetAsio::ConnectionPtr conn) { LOG4CXX_INFO(logger, "[" << boost::this_thread::get_id() << "] " << msg->GetTypeName() << ":\n" << msg->DebugString()); Cmd::Login::LoginReply reply; reply.set_retcode(Cmd::Login::LoginReply::OK); conn->send(reply); GateServer::instance()->syncInfo2Login(); } GateServer::GateServer() : NetApp("gateserver", user_msg_dispatcher_, server_msg_dispatcher_) { user_msg_dispatcher_.bind<Cmd::Login::LoginRequest, NetAsio::ConnectionPtr>(onLogin); } struct SyncInfo2LoginTimer : public NetAsio::Timer { SyncInfo2LoginTimer(boost::asio::io_service& ios) : NetAsio::Timer(ios, 3000) {} virtual void run() { GateServer::instance()->syncInfo2Login(); } }; bool GateServer::init() { if (NetApp::init()) { // TODO: temporary workaround SyncInfo2LoginTimer* timer = new SyncInfo2LoginTimer(worker_pool_->get_io_service()); if (timer) { } return true; } return false; } void GateServer::fini() { NetApp::fini(); } void GateServer::syncInfo2Login() { LOG4CXX_INFO(logger, "syncing gate state to the login server"); // send to the WorldManager URL listen; if (listen.parse(config_.get<std::string>("listen"))) { Cmd::Server::SyncGateUserCount sync; sync.set_gate_ip(listen.host); sync.set_gate_port(listen.port); sync.set_usercount(acceptor()->size()); connector()->sendTo(1, sync); } }
We provide quality welding and cutting products that feature the latest developments in design and innovation to the Australian market. Our commitment to service and product development, utilizing the latest technology, means that we now offer the Australian market a comprehensive range of MIG welders, TIG welders, MMA welders, plasma cutting and spot welding machines. These products, combined with our extensive range of welding consumables and spare parts, enable us to meet all welding requirements, from home handyman to heavy industrial applications. All the machines we sell are manufactured to, and comply with, the latest Australian standards AS60974-1 and EN 50199, thus providing the operator assurance and certainty of safety, duty cycle performance and quality. Uni-Mig is dedicated to after-sales service, with our warehouses/service centres located in Sydney, Melbourne, Brisbane and Perth. These warehouses/service centres are run by our team of sales staff and service technicians, who maintain a large network of distributors and service agents throughout Australia to provide quick delivery and service backup that is second to none.
-- Andreas, 2017-09-03, issue #2729. -- Expect non-indexed or -primed variables when splitting. -- {-# OPTIONS -v interaction.case:100 #-} -- {-# OPTIONS -v tc.cover:40 #-} data Size : Set where ↑ : Size → Size data Nat : Size → Set where zero : ∀ i → Nat (↑ i) suc : ∀ i → Nat i → Nat (↑ i) pred : ∀ i → Nat i → Nat i pred i x = {!x!} -- C-c C-c -- WRONG (agda-2.5.3): -- pred .(↑ i₁) (zero i₁) = ? -- pred .(↑ i₁) (suc i₁ x) = ? -- EXPECTED (correct in agda-2.5.1.1): -- pred .(↑ i) (zero i) = ? -- pred .(↑ i) (suc i x) = ?
{- Formal verification of authenticated append-only skiplists in Agda, version 1.0. Copyright (c) 2020 Oracle and/or its affiliates. Licensed under the Universal Permissive License v 1.0 as shown at https://opensource.oracle.com/licenses/upl -} open import Data.Empty open import Data.Fin.Properties using (toℕ<n; toℕ-injective) open import Data.Product open import Data.Sum open import Data.Nat open import Data.Nat.Divisibility open import Data.Nat.Properties open import Data.Nat.Induction open import Data.List renaming (map to List-map) open import Data.List.Relation.Unary.Any open import Data.List.Relation.Unary.All import Relation.Binary.PropositionalEquality as Eq open import Relation.Binary.Definitions open Eq using (_≡_; refl; trans; sym; cong; cong-app; subst) open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _∎) open import Relation.Binary.PropositionalEquality renaming ( [_] to Reveal[_]) open import Relation.Binary.PropositionalEquality open import Relation.Binary.HeterogeneousEquality using (_≅_; ≅-to-≡; ≡-to-≅; _≇_) renaming (cong to ≅-cong; refl to ≅-refl; cong₂ to ≅-cong₂) open import Relation.Nullary open import Relation.Binary.Core open import Relation.Nullary.Negation using (contradiction; contraposition) import Relation.Nullary using (¬_) open import Function -- This module defines the hop relation used by the original AAOSL due to Maniatis -- and Baker, and proves various properties needed to establish it as a valid -- DepRel, so that we can instantiate the abstract model with it to demonstrate that -- it is an instance of the class of AAOSLs for which we prove our properties. module AAOSL.Hops where open import AAOSL.Lemmas open import Data.Nat.Even -- The level of an index is 0 for index 0, -- otherwise, it is one plus the number of times -- that two divides said index. -- -- lvlOf must be marked terminating because in one branch -- we make a recursive call on the quotient of the argument, which -- is not obviously smaller than that argument. -- This is justified by proving that lvlOf is equal to lvlOfWF, -- which uses well-founded recursion {-# TERMINATING #-} lvlOf : ℕ → ℕ lvlOf 0 = 0 lvlOf (suc n) with even? (suc n) ...| no _ = 1 ...| yes e = suc (lvlOf (quotient e)) -- level of an index with well-founded recursion lvlOfWFHelp : (n : ℕ) → Acc _<_ n → ℕ lvlOfWFHelp 0 p = 0 lvlOfWFHelp (suc n) (acc rs) with even? (suc n) ... | no _ = 1 ... | yes (divides q eq) = suc (lvlOfWFHelp q (rs q (1+n=m*2⇒m<1+n q n eq))) lvlOfWF : ℕ → ℕ lvlOfWF n = lvlOfWFHelp n (<-wellFounded n) -- When looking at an index in the form 2^k * d, the level of -- said index is more easily defined. lvlOf' : ∀{n} → Pow2 n → ℕ lvlOf' zero = zero lvlOf' (pos l _ _ _) = suc l ------------------------------------------- -- Properties of lvlOf, lvlOfWF, and lvlOf' lvlOf≡lvlOfWFHelp : (n : ℕ) (p : Acc _<_ n) → lvlOf n ≡ lvlOfWFHelp n p lvlOf≡lvlOfWFHelp 0 p = refl lvlOf≡lvlOfWFHelp (suc n) (acc rs) with even? (suc n) ... | no _ = refl ... | yes (divides q eq) = cong suc (lvlOf≡lvlOfWFHelp q (rs q (1+n=m*2⇒m<1+n q n eq))) lvlOf≡lvlOfWF : (n : ℕ) → lvlOf n ≡ lvlOfWF n lvlOf≡lvlOfWF n = lvlOf≡lvlOfWFHelp n (<-wellFounded n) lvlOf≡lvlOf' : ∀ n → lvlOf n ≡ lvlOf' (to n) lvlOf≡lvlOf' n rewrite lvlOf≡lvlOfWF n = go n (<-wellFounded n) where go : (n : ℕ) (p : Acc _<_ n) → lvlOfWFHelp n p ≡ lvlOf' (to n) go 0 p = refl go (suc n) (acc rs) with even? (suc n) ... | no _ = refl ... | yes (divides q eq) with go q (rs q (1+n=m*2⇒m<1+n q n eq)) ... | ih with to q ... 
| pos l d odd prf = cong suc ih lvl≥2-even : ∀ {n} → 2 ≤ lvlOf n → Even n lvl≥2-even {suc n} x with 2 ∣? (suc n) ...| yes prf = prf ...| no prf = ⊥-elim ((≤⇒≯ x) (s≤s (s≤s z≤n))) lvlOfodd≡1 : ∀ n → Odd n → lvlOf n ≡ 1 lvlOfodd≡1 0 nodd = ⊥-elim (nodd (divides zero refl)) lvlOfodd≡1 (suc n) nodd with even? (suc n) ...| yes prf = ⊥-elim (nodd prf) ...| no prf = refl -- We eventually need to 'undo' a level lvlOf-undo : ∀{j}(e : Even (suc j)) → suc (lvlOf (quotient e)) ≡ lvlOf (suc j) lvlOf-undo {j} e with even? (suc j) ...| no abs = ⊥-elim (abs e) ...| yes prf rewrite even-irrelevant e prf = refl ∣-cmp : ∀{t u n} → (suc t * u) ∣ n → (d : suc t ∣ n) → u ∣ (quotient d) ∣-cmp {t} {u} {n} (divides q1 e1) (divides q2 e2) rewrite sym (*-assoc q1 (suc t) u) | *-comm q1 (suc t) | *-comm q2 (suc t) | *-assoc (suc t) q1 u = divides q1 (*-cancelˡ-≡ t (trans (sym e2) e1)) ∣-0< : ∀{n t} → 0 < n → (d : suc t ∣ n) → 0 < quotient d ∣-0< hip (divides zero e) = ⊥-elim (<⇒≢ hip (sym e)) ∣-0< hip (divides (suc q) e) = s≤s z≤n lvlOf-mono : ∀{n} k → 0 < n → 2 ^ k ∣ n → k ≤ lvlOf n lvlOf-mono zero hip prf = z≤n lvlOf-mono {suc n} (suc k) hip prf with even? (suc n) ...| no abs = ⊥-elim (abs (divides (quotient prf * (2 ^ k)) (trans (_∣_.equality prf) (trans (cong ((quotient prf) *_) (sym (*-comm (2 ^ k) 2))) (sym (*-assoc (quotient prf) (2 ^ k) 2)))))) ...| yes prf' = s≤s (lvlOf-mono {quotient prf'} k (∣-0< hip prf') (∣-cmp prf prf')) -- This property can be strengthened to < if we ever need it. lvlOf'-mono : ∀{k} d → 0 < d → k ≤ lvlOf' (to (2 ^ k * d)) lvlOf'-mono {k} d 0<d with to d ...| pos {d} kk dd odd eq with (2 ^ (k + kk)) * dd ≟ (2 ^ k) * d ...| no xx = ⊥-elim (xx ( trans (cong (_* dd) (^-distribˡ-+-* 2 k kk)) (trans (*-assoc (2 ^ k) (2 ^ kk) dd) (cong (λ x → (2 ^ k) * x) (sym eq))))) ...| yes xx with to-reduce {(2 ^ k) * d} {k + kk} {dd} (sym xx) odd ...| xx1 = ≤-trans (≮⇒≥ (m+n≮m k kk)) (≤-trans (n≤1+n (k + kk)) -- TODO-1: easy to strengthen to <; omit this step (≤-reflexive (sym (cong lvlOf' xx1)))) -- And a progress property about levels: -- These will be much easier to reason about in terms of lvlOf' -- as we can see in lvlOf-correct. lvlOf-correct : ∀{l j} → l < lvlOf j → 2 ^ l ≤ j lvlOf-correct {l} {j} prf rewrite lvlOf≡lvlOf' j with to j ...| zero = ⊥-elim (1+n≢0 (n≤0⇒n≡0 prf)) ...| pos l' d odd refl = 2^kd-mono (≤-unstep2 prf) (0<odd odd) -- lvlOf-prog states that if we have not reached 0, we landed somewhere -- where we can hop again at the same level. lvlOf-prog : ∀{l j} → 0 < j ∸ 2 ^ l → l < lvlOf j → l < lvlOf (j ∸ 2 ^ l) lvlOf-prog {l} {j} hip l<lvl rewrite lvlOf≡lvlOf' j | lvlOf≡lvlOf' (j ∸ 2 ^ l) with to j ...| zero = ⊥-elim (1+n≰n (≤-trans l<lvl z≤n)) ...| pos l₁ d₁ o₁ refl rewrite 2^ld-2l l₁ l d₁ (≤-unstep2 l<lvl) with l ≟ l₁ ...| no l≢l₁ rewrite to-2^kd l (odd-2^kd-1 (l₁ ∸ l) d₁ (0<m-n (≤∧≢⇒< (≤-unstep2 l<lvl) l≢l₁)) (0<odd o₁)) = ≤-refl ...| yes refl rewrite n∸n≡0 l₁ | +-comm d₁ 0 with odd∸1-even o₁ ...| divides q prf rewrite prf | sym (*-assoc (2 ^ l₁) q 2) | a*b*2-lemma (2 ^ l₁) q = lvlOf'-mono {suc l₁} q (1≤m*n⇒0<n {m = 2 ^ suc l₁} hip) lvlOf-no-overshoot : ∀ j l → suc l < lvlOf j → 0 < j ∸ 2 ^ l lvlOf-no-overshoot j l hip rewrite lvlOf≡lvlOf' j with to j ...| zero = ⊥-elim (1+n≰n (≤-trans (s≤s z≤n) hip)) ...| pos k d o refl = 0<m-n {2 ^ k * d} {2 ^ l} (<-≤-trans (2^-mono (≤-unstep2 hip)) (2^kd-mono {k} {k} ≤-refl (0<odd o))) --------------------------- -- The AAOSL Structure -- --------------------------- ------------------------------- -- Hops -- Encoding our hops into a relation. 
A value of type 'H l j i' -- witnesses the existence of a hop from j to i at level l. data H : ℕ → ℕ → ℕ → Set where hz : ∀ x → H 0 (suc x) x hs : ∀ {l x y z} → H l x y → H l y z → suc l < lvlOf x → H (suc l) x z ----------------------------- -- Hop's universal properties -- The universal property comes for free h-univ : ∀{l j i} → H l j i → i < j h-univ (hz x) = s≤s ≤-refl h-univ (hs h h₁ _) = <-trans (h-univ h₁) (h-univ h) -- It is easy to prove there are no hops from zero h-from0-⊥ : ∀{l i} → H l 0 i → ⊥ h-from0-⊥ (hs h h₁ _) = h-from0-⊥ h -- And it is easy to prove that i is a distance of 2 ^ l away -- from j. h-univ₂ : ∀{l i j} → H l j i → i ≡ j ∸ 2 ^ l h-univ₂ (hz x) = refl h-univ₂ (hs {l = l} {j} h₀ h₁ _) rewrite h-univ₂ h₀ | h-univ₂ h₁ | +-comm (2 ^ l) 0 | sym (∸-+-assoc j (2 ^ l) (2 ^ l)) = refl -- and vice versa. h-univ₁ : ∀{l i j} → H l j i → j ≡ i + 2 ^ l h-univ₁ (hz x) = sym (+-comm x 1) h-univ₁ (hs {l = l} {z = i} h₀ h₁ _) rewrite h-univ₁ h₀ | h-univ₁ h₁ | +-comm (2 ^ l) 0 = +-assoc i (2 ^ l) (2 ^ l) -------------- -- H and lvlOf -- A value of type H says something about the levels of their indices h-lvl-src : ∀{l j i} → H l j i → l < lvlOf j h-lvl-src (hz x) with even? (suc x) ...| no _ = s≤s z≤n ...| yes _ = s≤s z≤n h-lvl-src (hs h₀ h₁ prf) = prf h-lvl-tgt : ∀{l j i} → 0 < i → H l j i → l < lvlOf i h-lvl-tgt prf h rewrite h-univ₂ h = lvlOf-prog prf (h-lvl-src h) h-lvl-inj : ∀{l₁ l₂ j i} (h₁ : H l₁ j i)(h₂ : H l₂ j i) → l₁ ≡ l₂ h-lvl-inj {i = i} h₁ h₂ = 2^-injective (+-cancelˡ-≡ i (trans (sym (h-univ₁ h₁)) (h-univ₁ h₂))) -- TODO-1: document reasons for this pragma and justify it {-# TERMINATING #-} h-lvl-half : ∀{l j i y l₁} → H l j y → H l y i → H l₁ j i → lvlOf y ≡ suc l h-lvl-half w₀ w₁ (hz n) = ⊥-elim (1+n≰n (≤-<-trans (h-univ w₁) (h-univ w₀))) h-lvl-half {l}{j}{i}{y} w₀ w₁ (hs {l = l₁} {y = y₁} sh₀ sh₁ x) -- TODO-2: factor out a lemma to prove l₁ ≡ l and y₁ ≡ y (already exists?) with l₁ ≟ l ...| no imp with j ≟ i + (2 ^ l₁) + (2 ^ l₁) | j ≟ i + (2 ^ l) + (2 ^ l) ...| no imp1 | _ rewrite h-univ₁ sh₁ = ⊥-elim (imp1 (h-univ₁ sh₀)) ...| yes _ | no imp1 rewrite h-univ₁ w₁ = ⊥-elim (imp1 (h-univ₁ w₀)) ...| yes j₁ | yes j₂ with trans (sym j₂) j₁ ...| xx5 rewrite +-assoc i (2 ^ l) (2 ^ l) | +-assoc i (2 ^ l₁) (2 ^ l₁) with +-cancelˡ-≡ i xx5 ...| xx6 rewrite sym (+-identityʳ (2 ^ l)) | sym (+-identityʳ (2 ^ l₁)) | +-assoc (2 ^ l) 0 ((2 ^ l) + 0) | +-assoc (2 ^ l₁) 0 ((2 ^ l₁) + 0) | *-comm 2 (2 ^ l) | *-comm 2 (2 ^ l₁) = ⊥-elim (imp (sym (2^-injective {l} {l₁} ( sym (*2-injective (2 ^ l) (2 ^ l₁) xx6))))) h-lvl-half {l = l}{j = j}{i = i}{y = y} w₀ w₁ (hs {l = l₁} {y = y₁} sh₀ sh₁ x) | yes xx1 rewrite xx1 with y₁ ≟ y ...| no imp = ⊥-elim (imp (+-cancelʳ-≡ y₁ y (trans (sym (h-univ₁ sh₀)) (h-univ₁ w₀)))) ...| yes y₁≡y rewrite y₁≡y with w₀ ...| hs {l = l-1} ssh₀ ssh₁ xx rewrite sym xx1 = h-lvl-half sh₀ sh₁ (hs sh₀ sh₁ x) ...| hz y = lvlOfodd≡1 y (even-suc-odd y (lvl≥2-even {suc y} x)) -- If a hop goes over an index, then the level of this index is strictly -- less than the level of the hop. The '≤' is there because -- l starts at zero. -- -- For example, lvlOf 4 ≡ 3; the only hops that can go over 4 are -- those with l of 3 or higher. 
In fact, there is one at l ≡ 2 -- from 4 to 0: H 2 4 0 h-lvl-mid : ∀{l j i} → (k : ℕ) → H l j i → i < k → k < j → lvlOf k ≤ l h-lvl-mid k (hz x) i<k k<j = ⊥-elim (n≮n k (<-≤-trans k<j i<k)) h-lvl-mid {j = j} k (hs {l = l₀}{y = y} w₀ w₁ x) i<k k<j with <-cmp k y ...| tri< k<y k≢y k≯y = ≤-step (h-lvl-mid k w₁ i<k k<y) ...| tri> k≮y k≢y k>y = ≤-step (h-lvl-mid k w₀ k>y k<j) ...| tri≈ k≮y k≡y k≯y rewrite k≡y = ≤-reflexive (h-lvl-half w₀ w₁ (hs {l = l₀}{y = y} w₀ w₁ x)) h-lvl-≤₁ : ∀{l₁ l₂ j i₁ i₂} → (h : H l₁ j i₁)(v : H l₂ j i₂) → i₂ < i₁ → l₁ < l₂ h-lvl-≤₁ {l₁} {l₂} {j} {i₁} {i₂} h v i₂<i₁ = let h-univ = h-univ₁ h v-univ = h-univ₁ v eqj = trans (sym v-univ) h-univ in log-mono l₁ l₂ (n+p≡m+q∧n<m⇒q<p i₂<i₁ eqj) h-lvl-≤₂ : ∀{l₁ l₂ j₁ j₂ i} → (h : H l₁ j₁ i)(v : H l₂ j₂ i) → j₁ < j₂ → l₁ < l₂ h-lvl-≤₂ {l₁} {l₂} {j₁} {j₂} {i} h v j₂<j₁ = let h-univ = h-univ₁ h v-univ = h-univ₁ v in log-mono l₁ l₂ (+-cancelˡ-< i (subst (i + (2 ^ l₁) <_) v-univ (subst (_< j₂) h-univ j₂<j₁))) ------------------------------ -- Correctness and Irrelevance h-correct : ∀ j l → l < lvlOf j → H l j (j ∸ 2 ^ l) h-correct (suc j) zero prf = hz j h-correct (suc j) (suc l) prf with h-correct (suc j) l ...| ind with 2 ∣? (suc j) ...| no _ = ⊥-elim (ss≰1 prf) ...| yes e with ind (≤-unstep prf) ...| res₀ with h-correct (suc j ∸ 2 ^ l) l (lvlOf-prog {l} {suc j} (lvlOf-no-overshoot (suc j) l (subst (suc l <_ ) (lvlOf-undo e) prf)) (subst (l <_) (lvlOf-undo e) (≤-unstep prf))) ...| res₁ rewrite +-comm (2 ^ l) 0 | ∸-+-assoc (suc j) (2 ^ l) (2 ^ l) = hs res₀ res₁ (subst (suc l <_) (lvlOf-undo e) prf) h-irrelevant : ∀{l i j} → (h₁ : H l j i) → (h₂ : H l j i) → h₁ ≡ h₂ h-irrelevant (hz x) (hz .x) = refl h-irrelevant (hs {y = y} h₁ h₃ x) (hs {y = z} h₂ h₄ x₁) rewrite ≤-irrelevant x x₁ with y ≟ z ...| no abs = ⊥-elim (abs (trans (h-univ₂ h₁) (sym (h-univ₂ h₂)))) ...| yes refl = cong₂ (λ P Q → hs P Q x₁) (h-irrelevant h₁ h₂) (h-irrelevant h₃ h₄) ------------------------------------------------------------------- -- The non-overlapping property is stated in terms -- of subhops. The idea is that a hop is either separate -- from another one, or is entirely contained within the larger one. 
-- -- Entirely contained comes from _⊆Hop_ data _⊆Hop_ : ∀{l₁ i₁ j₁ l₂ i₂ j₂} → H l₁ j₁ i₁ → H l₂ j₂ i₂ → Set where here : ∀{l i j}(h : H l j i) → h ⊆Hop h left : ∀{l₁ i₁ j₁ l₂ i₂ w j₂ } → (h : H l₁ j₁ i₁) → (w₀ : H l₂ j₂ w) → (w₁ : H l₂ w i₂) → (p : suc l₂ < lvlOf j₂) → h ⊆Hop w₀ → h ⊆Hop (hs w₀ w₁ p) right : ∀{l₁ i₁ j₁ l₂ i₂ w j₂} → (h : H l₁ j₁ i₁) → (w₀ : H l₂ j₂ w) → (w₁ : H l₂ w i₂) → (p : suc l₂ < lvlOf j₂) → h ⊆Hop w₁ → h ⊆Hop (hs w₀ w₁ p) ⊆Hop-refl : ∀{l₁ l₂ j i} → (h₁ : H l₁ j i) → (h₂ : H l₂ j i) → h₁ ⊆Hop h₂ ⊆Hop-refl h₁ h₂ with h-lvl-inj h₁ h₂ ...| refl rewrite h-irrelevant h₁ h₂ = here h₂ ⊆Hop-univ : ∀{l₁ i₁ j₁ l₂ i₂ j₂} → (h1 : H l₁ j₁ i₁) → (h2 : H l₂ j₂ i₂) → h1 ⊆Hop h2 → i₂ ≤ i₁ × j₁ ≤ j₂ × l₁ ≤ l₂ ⊆Hop-univ h1 .h1 (here .h1) = ≤-refl , ≤-refl , ≤-refl ⊆Hop-univ h1 (hs w₀ w₁ p) (left h1 w₀ w₁ q hip) with ⊆Hop-univ h1 w₀ hip ...| a , b , c = (≤-trans (<⇒≤ (h-univ w₁)) a) , b , ≤-step c ⊆Hop-univ h1 (hs w₀ w₁ p) (right h1 w₀ w₁ q hip) with ⊆Hop-univ h1 w₁ hip ...| a , b , c = a , ≤-trans b (<⇒≤ (h-univ w₀)) , ≤-step c ⊆Hop-univ₁ : ∀{l₁ i₁ j₁ l₂ i₂ j₂} → (h1 : H l₁ j₁ i₁) → (h2 : H l₂ j₂ i₂) → h1 ⊆Hop h2 → i₂ ≤ i₁ ⊆Hop-univ₁ h1 h2 h1h2 = proj₁ (⊆Hop-univ h1 h2 h1h2) ⊆Hop-src-≤ : ∀{l₁ i₁ j₁ l₂ i₂ j₂} → (h1 : H l₁ j₁ i₁) → (h2 : H l₂ j₂ i₂) → h1 ⊆Hop h2 → j₁ ≤ j₂ ⊆Hop-src-≤ h1 h2 h1h2 = (proj₁ ∘ proj₂) (⊆Hop-univ h1 h2 h1h2) -- If two hops are not strictly the same, then the level of -- the smaller hop is strictly smaller than the level of -- the bigger hop. -- -- VERY IMPORTANT ⊆Hop-univ-lvl : ∀{l₁ i₁ j₁ l₂ i₂ j₂} → (h₁ : H l₁ j₁ i₁) → (h₂ : H l₂ j₂ i₂) → h₁ ⊆Hop h₂ → j₁ < j₂ → l₁ < l₂ ⊆Hop-univ-lvl {l₁}{i₁}{j₁}{l₂}{i₂}{j₂} h₁ h₂ h₁⊆Hoph₂ j₁<j₂ = let r₁ : i₂ + (2 ^ l₁) ≤ i₁ + (2 ^ l₁) r₁ = +-monoˡ-≤ (2 ^ l₁) (proj₁ (⊆Hop-univ h₁ h₂ h₁⊆Hoph₂)) r₂ : i₁ + (2 ^ l₁) < i₂ + (2 ^ l₂) r₂ = subst₂ _<_ (h-univ₁ h₁) (h-univ₁ h₂) j₁<j₂ in log-mono l₁ l₂ ((+-cancelˡ-< i₂) (≤-<-trans r₁ r₂)) hz-⊆ : ∀{l j i k} → (v : H l j i) → i ≤ k → k < j → hz k ⊆Hop v hz-⊆ (hz x) i<k k<j rewrite ≤-antisym (≤-unstep2 k<j) i<k = here (hz x) hz-⊆ {k = k} (hs {y = y} v v₁ x) i<k k<j with k <? 
y ...| yes k<y = right (hz k) v v₁ x (hz-⊆ v₁ i<k k<y) ...| no k≮y = left (hz k) v v₁ x (hz-⊆ v (≮⇒≥ k≮y) k<j) ⊆Hop-inj₁ : ∀{l₁ l₂ j i₁ i₂} → (h : H l₁ j i₁)(v : H l₂ j i₂) → i₂ < i₁ → h ⊆Hop v ⊆Hop-inj₁ {i₁ = i₁} h (hz x) prf = ⊥-elim (n≮n i₁ (<-≤-trans (h-univ h) prf)) ⊆Hop-inj₁ {l} {j = j} {i₁ = i₁} h (hs {l = l₁} {y = y} v v₁ x) prf with y ≟ i₁ ...| yes refl = left h v v₁ x (⊆Hop-refl h v) ...| no y≢i₁ with h-lvl-≤₁ h (hs v v₁ x) prf ...| sl≤sl₁ with h-univ₂ h | h-univ₂ v ...| prf1 | prf2 = let r : j ∸ (2 ^ l₁) ≤ j ∸ (2 ^ l) r = ∸-monoʳ-≤ {m = 2 ^ l} {2 ^ l₁} j (^-mono l l₁ (≤-unstep2 sl≤sl₁)) in left h v v₁ x (⊆Hop-inj₁ h v (≤∧≢⇒< (subst₂ _≤_ (sym prf2) (sym prf1) r) y≢i₁)) ⊆Hop-inj₂ : ∀{l₁ l₂ j₁ j₂ i} → (h : H l₁ j₁ i)(v : H l₂ j₂ i) → j₁ < j₂ → h ⊆Hop v ⊆Hop-inj₂ h (hz x) prf = ⊥-elim (n≮n _ (<-≤-trans prf (h-univ h))) ⊆Hop-inj₂ {l} {j₁ = j₁} {i = i} h (hs {l = l₁} {y = y} v v₁ x) prf with y ≟ j₁ ...| yes refl = right h v v₁ x (⊆Hop-refl h v₁) ...| no y≢j₁ with h-lvl-≤₂ h (hs v v₁ x) prf ...| sl≤sl₁ with h-univ₁ h | h-univ₁ v₁ ...| prf1 | prf2 = let r : i + 2 ^ l ≤ i + 2 ^ l₁ r = +-monoʳ-≤ i (^-mono l l₁ (≤-unstep2 sl≤sl₁)) in right h v v₁ x (⊆Hop-inj₂ h v₁ (≤∧≢⇒< (subst₂ _≤_ (sym prf1) (sym prf2) r) (y≢j₁ ∘ sym))) ⊆Hop-inj₃ : ∀{l₁ l₂ j₁ j₂ i₁ i₂} → (h : H l₁ j₁ i₁)(v : H l₂ j₂ i₂) → i₁ ≡ i₂ → j₁ ≡ j₂ → h ⊆Hop v ⊆Hop-inj₃ h v refl refl with h-lvl-inj h v ...| refl rewrite h-irrelevant h v = here v -- This datatype encodes all the possible hop situations. This makes is -- much easier to structure proofs talking about two hops. data HopStatus : ∀{l₁ i₁ j₁ l₂ i₂ j₂} → H l₁ j₁ i₁ → H l₂ j₂ i₂ → Set where -- Same hop; we carry the proofs explicitly here to be able to control -- when to perform the rewrites. Same : ∀{l₁ i₁ j₁ l₂ i₂ j₂}(h₁ : H l₁ j₁ i₁)(h₂ : H l₂ j₂ i₂) → i₁ ≡ i₂ → j₁ ≡ j₂ → HopStatus h₁ h₂ -- h₂ h₁ -- ⌜⁻⁻⁻⁻⁻⁻⁻⌝ ⌜⁻⁻⁻⁻⁻⁻⁻⌝ -- | | | | -- i₂ < j₂ ≤ i₁ < j₁ SepL : ∀{l₁ i₁ j₁ l₂ i₂ j₂}(h₁ : H l₁ j₁ i₁)(h₂ : H l₂ j₂ i₂) → j₂ ≤ i₁ → HopStatus h₁ h₂ -- h₁ h₂ -- ⌜⁻⁻⁻⁻⁻⁻⁻⌝ ⌜⁻⁻⁻⁻⁻⁻⁻⌝ -- | | | | -- i₁ < j₁ ≤ i₂ < j₂ SepR : ∀{l₁ i₁ j₁ l₂ i₂ j₂}(h₁ : H l₁ j₁ i₁)(h₂ : H l₂ j₂ i₂) → j₁ ≤ i₂ → HopStatus h₁ h₂ -- h₂ -- ⌜⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⌝ -- ∣ ∣ -- ∣ h₁ ∣ -- ∣ ⌜⁻⁻⁻⁻⁻⁻⁻⌝ | -- | | | | -- i₂ ≤ i₁ ⋯ j₁ ≤ j₂ SubL : ∀{l₁ i₁ j₁ l₂ i₂ j₂}(h₁ : H l₁ j₁ i₁)(h₂ : H l₂ j₂ i₂) → i₂ < i₁ ⊎ j₁ < j₂ -- makes sure hops differ! → h₁ ⊆Hop h₂ → HopStatus h₁ h₂ -- h₁ -- ⌜⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⌝ -- ∣ ∣ -- ∣ h₂ ∣ -- ∣ ⌜⁻⁻⁻⁻⁻⁻⁻⌝ | -- | | | | -- i₁ ≤ i₂ ⋯ j₂ < j₁ SubR : ∀{l₁ i₁ j₁ l₂ i₂ j₂}(h₁ : H l₁ j₁ i₁)(h₂ : H l₂ j₂ i₂) → i₁ < i₂ ⊎ j₂ < j₁ -- makes sure hops differ → h₂ ⊆Hop h₁ → HopStatus h₁ h₂ -- Finally, we can prove our no-overlap property. As it turns out, it is -- just a special case of general non-overlapping, and therefore, it is -- defined as such. mutual -- Distinguish is used to understand the relation between two arbitrary hops. -- It is used to perform the induction step on arbitrary hops. Note how -- 'no-overlap' has a clause that impedes the hops from being equal. 
distinguish : ∀{l₁ i₁ j₁ l₂ i₂ j₂} → (h₁ : H l₁ j₁ i₁) → (h₂ : H l₂ j₂ i₂) → HopStatus h₁ h₂ distinguish {l₁} {i₁} {j₁} {l₂} {i₂} {j₂} h1 h2 with <-cmp i₁ i₂ ...| tri≈ i₁≮i₂ i₁≡i₂ i₂≮i₁ with <-cmp j₁ j₂ ...| tri≈ j₁≮j₂ j₁≡j₂ j₂≮j₁ = Same h1 h2 i₁≡i₂ j₁≡j₂ ...| tri< j₁<j₂ j₁≢j₂ j₂≮j₁ rewrite i₁≡i₂ = SubL h1 h2 (inj₂ j₁<j₂) (⊆Hop-inj₂ h1 h2 j₁<j₂) ...| tri> j₁≮j₂ j₁≢j₂ j₂<j₁ rewrite i₁≡i₂ = SubR h1 h2 (inj₂ j₂<j₁) (⊆Hop-inj₂ h2 h1 j₂<j₁) distinguish {l₁} {i₁} {j₁} {l₂} {i₂} {j₂} h1 h2 | tri< i₁<i₂ i₁≢i₂ i₂≮i₁ with <-cmp j₁ j₂ ...| tri≈ j₁≮j₂ j₁≡j₂ j₂≮j₁ rewrite j₁≡j₂ = SubR h1 h2 (inj₁ i₁<i₂) (⊆Hop-inj₁ h2 h1 i₁<i₂) ...| tri< j₁<j₂ j₁≢j₂ j₂≮j₁ with no-overlap h2 h1 i₁<i₂ ...| inj₁ a = SepR h1 h2 a ...| inj₂ b = SubR h1 h2 (inj₁ i₁<i₂) b distinguish {l₁} {i₁} {j₁} {l₂} {i₂} {j₂} h1 h2 | tri< i₁<i₂ i₁≢i₂ i₂≮i₁ | tri> j₁≮j₂ j₁≢j₂ j₂<j₁ with no-overlap h2 h1 i₁<i₂ ...| inj₁ a = SepR h1 h2 a ...| inj₂ b = SubR h1 h2 (inj₁ i₁<i₂) b distinguish {l₁} {i₁} {j₁} {l₂} {i₂} {j₂} h1 h2 | tri> i₁≮i₂ i₁≢i₂ i₂<i₁ with <-cmp j₁ j₂ ...| tri≈ j₁≮j₂ j₁≡j₂ j₂≮j₁ rewrite j₁≡j₂ = SubL h1 h2 (inj₁ i₂<i₁) (⊆Hop-inj₁ h1 h2 i₂<i₁) ...| tri< j₁<j₂ j₁≢j₂ j₂≮j₁ with no-overlap h1 h2 i₂<i₁ ...| inj₁ a = SepL h1 h2 a ...| inj₂ b = SubL h1 h2 (inj₁ i₂<i₁) b distinguish {l₁} {i₁} {j₁} {l₂} {i₂} {j₂} h1 h2 | tri> i₁≮i₂ i₁≢i₂ i₂<i₁ | tri> j₁≮j₂ j₁≢j₂ j₂<j₁ with no-overlap h1 h2 i₂<i₁ ...| inj₁ a = SepL h1 h2 a ...| inj₂ b = SubL h1 h2 (inj₁ i₂<i₁) b no-overlap-< : ∀{l₁ i₁ j₁ l₂ i₂ j₂} → (h₁ : H l₁ j₁ i₁) → (h₂ : H l₂ j₂ i₂) → i₂ < i₁ → i₁ < j₂ → j₁ ≤ j₂ no-overlap-< h₁ h₂ prf hip with no-overlap h₁ h₂ prf ...| inj₁ imp = ⊥-elim (1+n≰n (≤-trans hip imp)) ...| inj₂ res = ⊆Hop-src-≤ h₁ h₂ res -- TODO-1: rename to nocross for consistency with paper -- Non-overlapping is more general, as hops might be completely -- separate and then, naturally won't overlap. no-overlap : ∀{l₁ i₁ j₁ l₂ i₂ j₂} → (h₁ : H l₁ j₁ i₁) → (h₂ : H l₂ j₂ i₂) → i₂ < i₁ -- this ensures h₁ ≢ h₂. → (j₂ ≤ i₁) ⊎ (h₁ ⊆Hop h₂) no-overlap h (hz x) prf = inj₁ prf no-overlap {l₁} {i₁} {j₁} {l₂} {i₂} {j₂} h₁ (hs {y = y} v₀ v₁ v-ok) hip with distinguish h₁ v₀ ...| SepL _ _ prf = inj₁ prf ...| SubL _ _ case prf = inj₂ (left h₁ v₀ v₁ v-ok prf) ...| Same _ _ p1 p2 = inj₂ (left h₁ v₀ v₁ v-ok (⊆Hop-inj₃ h₁ v₀ p1 p2)) no-overlap {l₁} {i₁} {j₁} {l₂} {i₂} {j₂} h₁ (hs {y = y} v₀ v₁ v-ok) hip | SepR _ _ j₁≤y with distinguish h₁ v₁ ...| SepL _ _ prf = ⊥-elim (<⇒≱ (h-univ h₁) (≤-trans j₁≤y prf)) ...| SepR _ _ prf = ⊥-elim (n≮n i₂ (<-trans hip (<-≤-trans (h-univ h₁) prf))) ...| SubR _ _ (inj₁ i₁<i₂) prf = ⊥-elim (n≮n i₂ (<-trans hip i₁<i₂)) ...| SubR _ _ (inj₂ y<j₁) prf = ⊥-elim (n≮n j₁ (≤-<-trans j₁≤y y<j₁)) ...| SubL _ _ case prf = inj₂ (right h₁ v₀ v₁ v-ok prf) ...| Same _ _ p1 p2 = inj₂ (right h₁ v₀ v₁ v-ok (⊆Hop-inj₃ h₁ v₁ p1 p2)) no-overlap {l₁} {i₁} {j₁} {l₂} {i₂} {j₂} h₁ (hs {y = y} v₀ v₁ v-ok) hip | SubR _ _ (inj₁ i₁<y) v₀⊆h₁ with distinguish h₁ v₁ ...| SepL _ _ prf = ⊥-elim (n≮n i₁ (<-≤-trans i₁<y prf)) ...| SepR _ _ prf = ⊥-elim (n≮n i₂ (<-≤-trans (<-trans hip (h-univ h₁)) prf)) ...| SubR _ _ (inj₁ i₁<i₂) prf = ⊥-elim (n≮n i₂ (<-trans hip i₁<i₂)) ...| SubR _ _ (inj₂ y<j₁) prf = ⊥-elim (≤⇒≯ (no-overlap-< h₁ v₁ hip i₁<y) y<j₁) ...| SubL _ _ case prf = inj₂ (right h₁ v₀ v₁ v-ok prf) ...| Same _ _ p1 p2 = inj₂ (right h₁ v₀ v₁ v-ok (⊆Hop-inj₃ h₁ v₁ p1 p2)) no-overlap {l₁} {i₁} {j₁} {l₂} {i₂} {j₂} h₁ (hs {y = y} v₀ v₁ v-ok) hip -- Here is the nasty case. 
We have to argue why this is impossible -- WITHOUT resorting to 'nov h₁ (hs v₀ v₁ v-ok)', otherwise this would -- result in an infinite loop. Note how 'nov' doesn't pattern match -- on any argument. -- -- Here's what this looks like: -- -- (hs v₀ v₁ v-ok) -- ⌜⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⌝ -- | h₁ | -- | ⌜⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁺⁻⁻⁻⁻⁻⁻⁻⌝ -- | ∣ | ∣ -- | v₁ ∣ v₀ | ∣ -- ⌜⁻⁻⁻⁻⁻⁻⁻⁻⁺⁻⁻⁻⁻⁻⁻⌜⁻⁻⁻⁻⁻⁻⁻⌝ | -- | | | | | -- i₂ < i₁ ≤ y ⋯ j₂ < j₁ -- -- We can pattern match on i₁ ≟ y | SubR _ _ (inj₂ j₂<j₁) v₀⊆h₁ with i₁ ≟ y -- And we quickly discover that if i≢y, we have a crossing between -- v₁ and h₁, and that's impossible. ...| no i₁≢y = ⊥-elim (n≮n y (<-≤-trans (<-trans (h-univ v₀) j₂<j₁) (no-overlap-< h₁ v₁ hip (≤∧≢⇒< (⊆Hop-univ₁ v₀ h₁ v₀⊆h₁) i₁≢y)))) -- The hard part really is when i₁ ≡ y, here's how this looks like: -- -- (hs v₀ v₁ v-ok) -- lvl l+1 ⌜⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⁻⌝ -- | | h₁ -- | ⌜⁻⁻⁻⁻⁻⁻⁻⁺⁻⁻⁻⁻⁻⁻⁻⌝ lvl l₁ -- | ∣ | ∣ -- | v₁ ∣ v₀ | ∣ -- lvl l ⌜⁻⁻⁻⁻⁻⁻⁻⁻⁺⁻⁻⁻⁻⁻⁻⁻⌝ | -- | | | | -- i₂ < i₁ ⋯ j₂ < j₁ -- -- We must show that the composite hop (hs v₀ v₁ v-ok) is impossible to build -- to show that the crossing doesn't happen. -- -- Hence, we MUST reason about the levels of the indices and eliminate 'v-ok', -- Which is possible with a bit of struggling about levels. ...| yes refl with h-lvl-tgt (≤-trans (s≤s z≤n) hip) v₀ ...| l≤lvli₁ with ⊆Hop-univ-lvl _ _ v₀⊆h₁ j₂<j₁ ...| l<l₁ with h-lvl-mid i₁ (hs v₀ v₁ v-ok) hip (h-univ v₀) ...| lvli₁≤l+1 with h-lvl-tgt (≤-trans (s≤s z≤n) hip) h₁ ...| l₁≤lvli₁ rewrite ≤-antisym lvli₁≤l+1 l≤lvli₁ = ⊥-elim (n≮n _ (<-≤-trans l<l₁ (≤-unstep2 l₁≤lvli₁)))
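As an aside, a small Python sketch of the arithmetic behind these hops (our illustration; the encoding of a hop as a (level, start) pair and the example positions are assumptions of this note, not part of the Agda development above):

# A hop at level l starting at position i ends at j = i + 2**l
# (this is the invariant named h-univ1 in the proofs above).
def endpoint(l, i):
    return i + 2 ** l

def crosses(h1, h2):
    # Two hops cross when each strictly contains exactly one endpoint of
    # the other; the no-overlap lemma rules this out for well-formed hops.
    (l1, i1), (l2, i2) = h1, h2
    j1, j2 = endpoint(l1, i1), endpoint(l2, i2)
    return (i1 < i2 < j1 < j2) or (i2 < i1 < j2 < j1)

# Hops aligned to multiples of their span (as in a skip list) never cross;
# misaligned hops can.
aligned = [(l, i) for l in range(3) for i in range(0, 8, 2 ** l)]
assert not any(crosses(a, b) for a in aligned for b in aligned)
assert crosses((1, 0), (1, 1))  # spans [0,2] and [1,3] do cross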
Sb2S3 + 3 Fe → 2 Sb + 3 FeS
[STATEMENT] lemma emeasure_L[simp]: "emeasure (qbs_to_measure X) = (\<lambda>A. if A = {} \<or> A \<notin> sigma_Mx X then 0 else \<infinity>)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. emeasure (qbs_to_measure X) = (\<lambda>A. if A = {} \<or> A \<notin> sigma_Mx X then 0 else \<infinity>) [PROOF STEP] by(auto simp: emeasure_def)
/* roots/brent.c * * Copyright (C) 1996, 1997, 1998, 1999, 2000 Reid Priedhorsky, Brian Gough * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or (at * your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. */ /* brent.c -- brent root finding algorithm */ #include <config.h> #include <stddef.h> #include <stdlib.h> #include <stdio.h> #include <math.h> #include <float.h> #include <gsl/gsl_math.h> #include <gsl/gsl_errno.h> #include <gsl/gsl_roots.h> #include "roots.h" typedef struct { double a, b, c, d, e; double fa, fb, fc; } brent_state_t; static int brent_init (void * vstate, gsl_function * f, double * root, double x_lower, double x_upper); static int brent_iterate (void * vstate, gsl_function * f, double * root, double * x_lower, double * x_upper); static int brent_init (void * vstate, gsl_function * f, double * root, double x_lower, double x_upper) { brent_state_t * state = (brent_state_t *) vstate; double f_lower, f_upper ; *root = 0.5 * (x_lower + x_upper) ; SAFE_FUNC_CALL (f, x_lower, &f_lower); SAFE_FUNC_CALL (f, x_upper, &f_upper); state->a = x_lower; state->fa = f_lower; state->b = x_upper; state->fb = f_upper; state->c = x_upper; state->fc = f_upper; state->d = x_upper - x_lower ; state->e = x_upper - x_lower ; if ((f_lower < 0.0 && f_upper < 0.0) || (f_lower > 0.0 && f_upper > 0.0)) { GSL_ERROR ("endpoints do not straddle y=0", GSL_EINVAL); } return GSL_SUCCESS; } static int brent_iterate (void * vstate, gsl_function * f, double * root, double * x_lower, double * x_upper) { brent_state_t * state = (brent_state_t *) vstate; double tol, m; int ac_equal = 0; double a = state->a, b = state->b, c = state->c; double fa = state->fa, fb = state->fb, fc = state->fc; double d = state->d, e = state->e; if ((fb < 0 && fc < 0) || (fb > 0 && fc > 0)) { ac_equal = 1; c = a; fc = fa; d = b - a; e = b - a; } if (fabs (fc) < fabs (fb)) { ac_equal = 1; a = b; b = c; c = a; fa = fb; fb = fc; fc = fa; } tol = 0.5 * GSL_DBL_EPSILON * fabs (b); m = 0.5 * (c - b); if (fb == 0) { *root = b; *x_lower = b; *x_upper = b; return GSL_SUCCESS; } if (fabs (m) <= tol) { *root = b; if (b < c) { *x_lower = b; *x_upper = c; } else { *x_lower = c; *x_upper = b; } return GSL_SUCCESS; } if (fabs (e) < tol || fabs (fa) <= fabs (fb)) { d = m; /* use bisection */ e = m; } else { double p, q, r; /* use inverse cubic interpolation */ double s = fb / fa; if (ac_equal) { p = 2 * m * s; q = 1 - s; } else { q = fa / fc; r = fb / fc; p = s * (2 * m * q * (q - r) - (b - a) * (r - 1)); q = (q - 1) * (r - 1) * (s - 1); } if (p > 0) { q = -q; } else { p = -p; } if (2 * p < GSL_MIN (3 * m * q - fabs (tol * q), fabs (e * q))) { e = d; d = p / q; } else { /* interpolation failed, fall back to bisection */ d = m; e = m; } } a = b; fa = fb; if (fabs (d) > tol) { b += d; } else { b += (m > 0 ? 
+tol : -tol); } SAFE_FUNC_CALL (f, b, &fb); state->a = a ; state->b = b ; state->c = c ; state->d = d ; state->e = e ; state->fa = fa ; state->fb = fb ; state->fc = fc ; /* Update the best estimate of the root and bounds on each iteration */ *root = b; if ((fb < 0 && fc < 0) || (fb > 0 && fc > 0)) { c = a; } if (b < c) { *x_lower = b; *x_upper = c; } else { *x_lower = c; *x_upper = b; } return GSL_SUCCESS ; } static const gsl_root_fsolver_type brent_type = {"brent", /* name */ sizeof (brent_state_t), &brent_init, &brent_iterate}; const gsl_root_fsolver_type * gsl_root_fsolver_brent = &brent_type;
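A simplified Python sketch of the bracketing strategy used above (illustrative only: it keeps a secant step with a bisection fallback, and omits the inverse quadratic interpolation and the d/e step-size bookkeeping of the GSL code):

def hybrid_root(f, lo, hi, tol=1e-12, max_iter=200):
    """Find a root of f in [lo, hi]; endpoints must straddle zero."""
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0:
        raise ValueError("endpoints do not straddle y=0")
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        if fhi != flo:
            # secant (linear interpolation) candidate
            x = hi - fhi * (hi - lo) / (fhi - flo)
        if not (lo < x < hi):
            # interpolation left the bracket: fall back to bisection
            x = 0.5 * (lo + hi)
        fx = f(x)
        if abs(fx) < tol or (hi - lo) < tol:
            return x
        # keep the sub-interval that still straddles the root
        if flo * fx < 0:
            hi, fhi = x, fx
        else:
            lo, flo = x, fx
    return x

print(hybrid_root(lambda x: x * x - 2.0, 0.0, 2.0))  # ~= 1.4142135623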
State Before: α : Type u β : Type v γ : Type w ι : Sort x inst✝² : Preorder α inst✝¹ : Preorder β inst✝ : Preorder γ f : α → β → γ s : Set α t : Set β a : α b : β h₀ : ∀ (b : β), Antitone (swap f b) h₁ : ∀ (a : α), Monotone (f a) ⊢ BddBelow s → BddAbove t → BddAbove (Set.image2 f s t) State After: case intro.intro α : Type u β : Type v γ : Type w ι : Sort x inst✝² : Preorder α inst✝¹ : Preorder β inst✝ : Preorder γ f : α → β → γ s : Set α t : Set β a✝ : α b✝ : β h₀ : ∀ (b : β), Antitone (swap f b) h₁ : ∀ (a : α), Monotone (f a) a : α ha : a ∈ lowerBounds s b : β hb : b ∈ upperBounds t ⊢ BddAbove (Set.image2 f s t) Tactic: rintro ⟨a, ha⟩ ⟨b, hb⟩ State Before: case intro.intro α : Type u β : Type v γ : Type w ι : Sort x inst✝² : Preorder α inst✝¹ : Preorder β inst✝ : Preorder γ f : α → β → γ s : Set α t : Set β a✝ : α b✝ : β h₀ : ∀ (b : β), Antitone (swap f b) h₁ : ∀ (a : α), Monotone (f a) a : α ha : a ∈ lowerBounds s b : β hb : b ∈ upperBounds t ⊢ BddAbove (Set.image2 f s t) State After: no goals Tactic: exact ⟨f a b, mem_upperBounds_image2_of_mem_upperBounds_of_mem_upperBounds h₀ h₁ ha hb⟩
Insuring your home against financial catastrophe can involve a number of high-stakes decisions. From which carrier to use to how large a deductible to choose, the insurance industry has become endlessly complicated. Volcanic eruptions, riots and falling planes? Your standard policy definitely covers those unlikely events. Pit bull bites, sinkholes and mold infestations? Perhaps, perhaps not. Flooding? Definitely not. Here you will find an overview of what homeowners insurance protects and what it doesn't, and how you can make informed decisions about buying coverage.
What Type of Homeowners Insurance Should You Get?
If you have a mortgage, homeowners insurance isn't just extra security and protection — it's required. Find out the average cost of home insurance in your state by using the below map.
Source: National Association of Insurance Commissioners (NAIC).
Homeowners insurance pays to repair your house and replace your belongings, and it protects you financially if someone is injured on your property. The most popular insurance is an "all risk" policy, although people with newly constructed homes may qualify for comprehensive coverage. Whether you live in a mobile home, condo, single family home, or rent an apartment, you can find a policy to protect your home and/or property.
HO-1: A bare-bones policy that doesn't include liability coverage. It's no longer available in most states, according to the Insurance Information Institute.
HO-2: Also called a "broad form" policy. The coverage protects your home from 16 different types of named perils.
HO-3: A "special form" policy that covers the attached structures of your home as well as detached structures like a garage, cottage and fence. It also includes personal liability coverage. The name "All Risk" Policy is misleading as the policy doesn't cover all risks. It does cover the aforementioned 16 perils and everything except the named exclusions in the policy, which include war, power failure, flood, vermin and nuclear disaster, among others.
HO-4: This is an insurance policy for renters. It covers your personal belongings, though not the structure of the property you rent.
HO-5: A "comprehensive form" policy that covers the structure of your home and offers broader coverage for personal property. You're protected from the same perils as an HO-3 but don't have to prove your belongings were damaged by a peril named in the policy. This form of coverage is usually for new or recently constructed homes.
HO-6: A policy for condo and co-op owners that works similarly to HO-2 coverage. It covers the structural parts of the building you own, plus your personal belongings.
HO-7: This coverage is like an HO-2 policy, but it protects mobile homes.
HO-8: A policy that typically covers the replacement cost — minus depreciation — of damage to an older home.
Homeowners insurance is complicated, and rate quotes hinge on numerous variables. Check out our guide on comparing homeowners insurance quotes.
Homeowners and renters insurance policies almost never cover floods, hurricanes, earthquakes and other natural hazards. Learn how to protect yourself from Mother Nature's worst.
You can't control the wrath of nature, but you can prepare for the worst. Here are steps to take to get ready for an emergency, and what to do if disaster strikes.
Homeowners coverage varies based on the type of policy you buy and where you live. The most bare-bones insurance policy is known as "dwelling fire," also called HO-1 coverage, and it insures against hazards that include fire, smoke, lightning and explosions. No homeowners policy covers flood damage if it's caused by rising water.
That peril is covered through a separate flood insurance policy from the federal government.
Flood: You need a separate federal flood insurance policy.
Earthquake: This risk is covered by separate earthquake insurance.
Windstorm: If you live in Hurricane Alley, you might need a policy from your state's windstorm plan.
Mold: Many carriers have begun to exclude mold damage from policies. You might be able to buy a special rider for this peril.
Dog bites: Some insurers exclude such breeds as pit bulls, Akitas and Rottweilers. You might be able to buy a special rider for this hazard, too.
Home business: Say FedEx delivers a work-related package to your house, and the driver slips and falls. Your policy doesn't cover liability.
Wear and tear: If your roof is simply old rather than damaged, you pay for the repair.
Does Your Location Affect Your Coverage?
Yes, where you live affects your coverage and policy costs. If you live in California, earthquake coverage isn't part of your standard policy. You'll need separate coverage, which usually carries a high deductible. And if you live in Florida or another hurricane-prone state, you might need separate windstorm insurance. Some of the language in your policy might seem ridiculously specific, but there's a reason for the hairsplitting. "Civil unrest" is covered in standard policies, but "war" isn't. And the extensive damage left by Hurricane Katrina led to many lawsuits over whether water damage was technically caused by a flood or a windstorm.
For most homeowners, the high-value part of the policy applies to the structure of your home. If an insured hazard hits your humble abode, your coverage will pay to fix or rebuild your house. Insurers typically cover not just your living quarters but also garages, fences, tool sheds and gazebos. In addition, insurance policies generally pay to replace personal belongings if they're stolen or damaged in a fire, hurricane or other covered disaster.
Be sure to review your dwelling coverage occasionally because capital improvements and inflation can affect your home's cost of replacement. Due to the so-called 80 percent rule, if your coverage limits fall below 80 percent of the full replacement cost of your home, your insurance company may reduce the amount it pays on a claim. In a typical policy with $200,000 in coverage for the dwelling, the insurer would cover up to $100,000 to replace furniture, electronics and other personal items. Pricey items such as jewelry, furs and silverware typically are covered, but many insurers impose dollar limits if they're stolen. If you have an extensive jewelry collection, though, you might consider paying for an endorsement, which specifies coverage under certain special circumstances or for itemized valuables.
Homeowners policies also include liability protection, which pays for legal costs and any court awards in the event of a lawsuit against the homeowner. Typically, this coverage is limited to $100,000. Finally, your policy might pay for your living costs if your home is rendered uninhabitable.
Actual cash value: Pays to replace home and possessions minus a deduction for depreciation — so don't expect to get $2,000 for that ancient laptop.
Guaranteed replacement cost: The most generous coverage that pays to rebuild your home no matter the cost, useful if you're the victim of a disaster that causes a spike in the costs of labor and construction materials.
If you've lived in your house for years, creating an inventory might seem tedious. Start with your most recent purchases and then work backward to your older possessions. It's better to have an incomplete list than no list at all.
Source: Adapted from the Insurance Information Institute. Don’t feel like cataloging every article of clothing, book and knickknack in your home? Jack Hungelmann, author of Insurance for Dummies, suggests tallying the value of furniture, TVs and other major items, then doubling it. That should approximate the value of all your stuff. The average user is able to develop “a complete picture inventory of a property and generate detailed reports” in about an hour or less by using this app, according to its iTunes Store description. This app gives you the ability to quickly produce a complete estimate of your belongings. The free version has a 25-item limit, while the $4.99 version unlocks all of the app’s available features. Doubling as an organizer and home inventory resource, MyStuff2 allows you to keep track of your personal possessions. The “Lite” version is free; the main version of the app is $4.99 and the “Pro” version is $8.99. This app is described as the “perfect companion for your move.” It helps with creating a visual inventory of your belongings through photos, videos, tags and other features. The basic version is free and it’s $4.99 for the “Plus” version which has additional features. Stuffanizer is a visual inventory app that allows users to set custom locations, create tags and take photos to keep track of their stuff — for $2.99. One of the most important variables determining how much you’ll pay for coverage. Hurricane Alley and Tornado Alley are the most expensive places to insure a home — Florida, Louisiana, Texas and Oklahoma top the list of priciest markets. Idaho, Oregon and Utah are the cheapest states to buy homeowners coverage. If your Florida abode is a wood-frame house with jalousie windows, you’ll pay more. If it’s a concrete-block structure with impact-resistant windows, you’ll pay less. Insurers use a catastrophe model to determine prices. Rate regulation is the duty of insurance commissioners in each state. If State Farm, Allstate, Nationwide and other carriers are competing for customers in your area, rates are likely to fall. This is known as a soft market. In the opposite case, a hard market, rates tend to rise. How much insurers pay for this form of reinsurance can affect rates. Catastrophe reinsurance indemnifies the insurer for losses in excess of a stipulated sum arising from a single catastrophic event or series of events. What Kind of Discounts Might You Expect? Bundling your home insurance with your car insurance (buying it from the same insurance carrier). These are known as multi-line discounts. Having dead-bolts and/or a home security system. Bringing your old wiring up to code. What happens if you’re the victim of a burglary, fire or other insured claim? First, your policy requires you to mitigate the damage as soon as possible. So, for instance, if you have a water leak, turn off the water and immediately call a company that handles water damage. Next, contact your insurance company to report the damage. When you reach your company’s adjuster, be sure to get his name and cell phone number, along with your claim number. In most cases, your carrier will come through with a check to cover your claim, up to the limits of your policy. Whether you store it in the cloud or in a filing cabinet, your policy spells out what your insurer owes you in case of a loss. If you have a large collection of jewelry or art, buy an endorsement to insure it. Say you have a $1,000 deductible and your $500 bike is stolen from your garage. 
In that case, you'd be wise to keep your insurer out of the matter. If your home is hit by a storm or suffers significant water damage, it makes sense to file a claim. However, you may want to skip a small, non-weather claim, lest your insurer drop you when your policy comes up for renewal.
Insurers handle each type of claim a bit differently. In case of a burglary, you'll need to provide a police report and an inventory of the items taken. In the event of a fire or weather-related claim that damages the structure of your house, you'll need to hire a contractor, and your insurer ultimately will reimburse you for your costs.
Source: National Association of Insurance Commissioners. Cost adjusted to 2014 dollars.
Do I Need a Public Insurance Adjuster?
The typical insurance claim goes relatively smoothly, and the carrier pays up as promised. But in some cases, an insurer simply refuses to pay what you think you're owed. If you're faced with an unreasonable settlement offer, it might make sense to hire a public adjuster. Unlike an adjuster who's employed by your insurance company, a public adjuster works for you and is paid by you. Public adjusters say they know policies and insurers' processes, and they can persuade insurers to pay up when you can't. You'll pay your public adjuster a percentage of the settlement he negotiates on your behalf. Public adjusters stress that they are neither insurance agents nor paid by carriers, which puts them firmly on the side of policyholders. The National Association of Public Insurance Adjusters offers a listing of its members.
Sources: National Association of Insurance Commissioners, based on average annual premiums for 2013; Insurance Information Institute.
If you own a car, you're familiar with the difference between insurance and a warranty. If your car is stolen or damaged in a crash or natural disaster, that's a matter for your insurance policy. If your transmission blows out, you'll turn to your warranty for financial help. The concept is similar for real estate. A home warranty is a service contract that helps offset expenses if the roof springs a leak or the air conditioning needs a repair. Home warranties are a type of insurance, but they're designed to cushion the blow of the smaller costs of routine maintenance rather than some significant loss from a catastrophic event. Sellers often provide home warranties as a tool to market their homes. A lender will require you to have homeowners insurance but not a warranty.
Adjuster: An individual employed by an insurance company to evaluate losses and settle policyholder claims.
Catastrophe model: This computerized method of predicting claims combines long-term disaster trends with current demographics and building patterns. The result is a prediction of the potential cost of catastrophic losses for a given area.
Catastrophe reinsurance: Insurers typically can handle billions of dollars of losses, but to protect against major disasters such as Hurricane Andrew and Hurricane Katrina, they buy reinsurance that helps cover claims filed after large-scale catastrophes.
Deductible: The amount of loss paid by the policyholder. It's either a specific dollar amount or a percentage of the claim amount.
Dwelling fire policy: A cut-rate type of insurance policy that covers only the most basic risks. This type of policy covers a structure and its contents but has a high deductible.
Endorsement: A written form attached to an insurance policy altering the policy's coverage, terms or conditions.
Exclusion: A provision in an insurance policy disallowing coverage for certain risks.
Flood insurance: Coverage for flood damage is available from the federal government under the National Flood Insurance Program. It is sold by licensed insurance agents.
Hard market: A period when carriers are reluctant to sell coverage.
Insurance score: A numerical ranking based on a consumer's credit history. Insurers say people with poor credit histories have proven more likely to file claims.
Peril: A specific risk or cause of loss covered by an insurance policy, such as fire or theft.
Public adjuster: A person who negotiates with insurers on behalf of policyholders and receives a portion of a claims settlement.
Rate regulation: The process by which states monitor insurance companies' rate changes.
Soft market: An environment of plentiful, low-cost coverage.
Windstorm insurance pools: State-sponsored insurance pools sell hurricane coverage to people who can't buy it in the voluntary market because of their high risk. Alabama, Florida, Louisiana, Mississippi, North Carolina, South Carolina and Texas offer these plans. Georgia and New York also have special windstorm pools for certain coastal communities.
*DECK SSPEV SUBROUTINE SSPEV (A, N, E, V, LDV, WORK, JOB, INFO) C***BEGIN PROLOGUE SSPEV C***PURPOSE Compute the eigenvalues and, optionally, the eigenvectors C of a real symmetric matrix stored in packed form. C***LIBRARY SLATEC (EISPACK) C***CATEGORY D4A1 C***TYPE SINGLE PRECISION (SSPEV-S) C***KEYWORDS EIGENVALUES, EIGENVECTORS, EISPACK, PACKED, SYMMETRIC C***AUTHOR Kahaner, D. K., (NBS) C Moler, C. B., (U. of New Mexico) C Stewart, G. W., (U. of Maryland) C***DESCRIPTION C C Abstract C SSPEV computes the eigenvalues and, optionally, the eigenvectors C of a real symmetric matrix stored in packed form. C C Call Sequence Parameters- C (The values of parameters marked with * (star) will be changed C by SSPEV.) C C A* REAL(N*(N+1)/2) C real symmetric packed input matrix. Contains upper C triangle and diagonal of A, by column (elements C 11, 12, 22, 13, 23, 33, ...). C C N INTEGER C set by the user to C the order of the matrix A. C C E* REAL(N) C on return from SSPEV, E contains the eigenvalues of A. C See also INFO below. C C V* REAL(LDV,N) C on return from SSPEV, if the user has set JOB C = 0 V is not referenced. C = nonzero the N eigenvectors of A are stored in the C first N columns of V. See also INFO below. C C LDV INTEGER C set by the user to C the leading dimension of the array V if JOB is also C set nonzero. In that case, N must be .LE. LDV. C If JOB is set to zero, LDV is not referenced. C C WORK* REAL(2N) C temporary storage vector. Contents changed by SSPEV. C C JOB INTEGER C set by the user to C = 0 eigenvalues only to be calculated by SSPEV. C Neither V nor LDV are referenced. C = nonzero eigenvalues and vectors to be calculated. C In this case, A & V must be distinct arrays. C Also, if LDA .GT. LDV, SSPEV changes all the C elements of A thru column N. If LDA < LDV, C SSPEV changes all the elements of V through C column N. If LDA=LDV, only A(I,J) and V(I, C J) for I,J = 1,...,N are changed by SSPEV. C C INFO* INTEGER C on return from SSPEV, the value of INFO is C = 0 for normal return. C = K if the eigenvalue iteration fails to converge. C Eigenvalues and vectors 1 through K-1 are correct. C C C Error Messages- C No. 1 recoverable N is greater than LDV and JOB is nonzero C No. 2 recoverable N is less than one C C***REFERENCES (NONE) C***ROUTINES CALLED IMTQL2, TQLRAT, TRBAK3, TRED3, XERMSG C***REVISION HISTORY (YYMMDD) C 800808 DATE WRITTEN C 890831 Modified array declarations. (WRB) C 890831 REVISION DATE from Version 3.2 C 891214 Prologue converted to Version 4.0 format. (BAB) C 900315 CALLs to XERROR changed to CALLs to XERMSG. (THJ) C 900326 Removed duplicate information from DESCRIPTION section. C (WRB) C***END PROLOGUE SSPEV INTEGER I,INFO,J,LDV,M,N REAL A(*),E(*),V(LDV,*),WORK(*) C***FIRST EXECUTABLE STATEMENT SSPEV IF (N .GT. LDV) CALL XERMSG ('SLATEC', 'SSPEV', 'N .GT. LDV.', + 1, 1) IF(N .GT. LDV) RETURN IF (N .LT. 1) CALL XERMSG ('SLATEC', 'SSPEV', 'N .LT. 1', 2, 1) IF(N .LT. 1) RETURN C C CHECK N=1 CASE C E(1) = A(1) INFO = 0 IF(N .EQ. 1) RETURN C IF(JOB.NE.0) GO TO 20 C C EIGENVALUES ONLY C CALL TRED3(N,1,A,E,WORK(1),WORK(N+1)) CALL TQLRAT(N,E,WORK(N+1),INFO) RETURN C C EIGENVALUES AND EIGENVECTORS C 20 CALL TRED3(N,1,A,E,WORK(1),WORK(1)) DO 30 I = 1, N DO 25 J = 1, N 25 V(I,J) = 0. 30 V(I,I) = 1. CALL IMTQL2(LDV,N,E,WORK,V,INFO) M = N IF(INFO .NE. 0) M = INFO - 1 CALL TRBAK3(LDV,N,1,A,M,V) RETURN END
\documentclass{article} \title{Learning Incoherent Subspaces: Classification via Incoherent Dictionary Learning.} \author{Daniele Barchiesi and Mark D. Plumbley} %\affiliation{Centre for Digital Music\\ % Queen Mary University of London\\ % Mile End Road, London E1 4NS, UK} %%%%%%%%%%%%%%%%%%%%%%PACKAGES%%%%%%%%%%%%%%%%%%%%% \usepackage{amsmath} \usepackage[english]{babel} \usepackage[applemac]{inputenc} \usepackage[T1]{fontenc} \usepackage{amsfonts,amssymb,amsmath,amsthm,bm} \usepackage[boxruled,linesnumbered]{algorithm2e} \usepackage{subfigure,graphicx} \usepackage{setspace} \usepackage{hyperref} \usepackage{enumitem} \usepackage{graphicx} \usepackage{epstopdf} %%%%%%%%%%%%%%%%%%%%%DEFINITIONS%%%%%%%%%%%%%%%%%% \input{definitions.tex} \def \nComponents{L} %number of significant components of ICA feature transform \def \fea{\Vector{x}} %vector of features \def \Feas{\Matrix{X}} %matrix containing set of training features \def \iFea{n} %component index of features vector \def \nDim{N} %dimensionality of features vector \def \newFea{\Vector{y}} %features vector after feature transform \def \NewFeas{\Matrix{Y}}%matrix of training features after feature transform \def \Dic{\Matrix{\Phi}} %dictionary learned from features \def \nAto{K} %number of atoms in the dictionary \def \iAto{k} %atom index \def \atom{\Vector{\phi}}%atom in a dictionary \def \nFea{M} %number of signals or features \def \iFea{m} %signal index \def \Coeff{\Matrix{A}} %matrix of sparse approximation coefficients \def \coherence{\mu} %mutual coherence \def \coeff{\Vector{\alpha}}%vector of sparse approximation coefficients \def \nActiveAtoms{S} %number of active atoms \def \orthmat{\Matrix{W}} %orthonormal matrix \def \cat{c} %category \def \cats{\Vector{c}} %vector of categories of training signals \def \Cat{\mathcal{C}} %set of possible categories \def \uniCat{C} %element of the set of possible categories \def \iCat{p} %index of the elements in the set of possible categories \def \nCat{P} %number of elements in the set of possible categories \def \definition{\overset{\Label{def}}{=}} %definitions \def \Gram{\Matrix{G}} %Gram matrix \def \gram{g} %element of the Gram matrix \def \admissibleDictionary{\Function{D}} %set of admissible dictionaries \def \ambient{\Set{R}} %ambient \def \ipr{\Acronym{ipr}} %Iterative projections and rotations \def \ip{\Acronym{ip}} %Iterative projections \def \nDimSub{Q} \def \Spa{\Matrix{\Psi}} \begin{document} %\ninept % \maketitle % \begin{abstract} In this article we present the supervised iterative projections and rotations (\Acronym{s-ipr}) algorithm, a method for learning discriminative incoherent subspaces from data. We derive \Acronym{s-ipr} as a supervised extension of our previously proposed iterative projections and rotations (\Acronym{ipr}) algorithm for incoherent dictionary learning, and we employ it to learn incoherent sub-spaces that model signals belonging to different classes. We test our method as a feature transform for supervised classification, first by visualising transformed features from a synthetic dataset and from the `iris' dataset, then by using the resulting features in a classification experiment. While the visualisation results are promising, we find that \Acronym{s-ipr} generally performs worse than traditional and state-of-the-art techniques for supervised dimensionality reduction in terms of the misclassification ratio. \end{abstract} % %\begin{keywords} %Feature transforms, sparse approximation, dictionary learning, supervised classification. 
%\end{keywords}
%
\section{Introduction: Classification And Feature Transforms}\label{sec:intro}
Supervised classification is one of the classic problems in machine learning where a system is designed to discriminate the category of an observed signal, having previously observed representative examples from the considered classes \cite{Duda1973Pa}. Typically, a classification algorithm consists of a training phase where class-specific models are learned from labelled samples, followed by a testing phase where unlabelled data are classified by comparison with the learned models.

Both training and testing comprise various stages. Firstly, we observe a signal that measures a process of interest, such as the recording of a sound or image, or a log of the temperatures in a particular geographic area. Then, a set of features is extracted from the raw signals using signal processing techniques. This step is performed in order to reduce the dimensionality of the data and provide a new signal that allows generalisation among examples of the same class, while retaining enough information to discriminate between different classes. Following the feature extraction step, a feature transform can be employed to further reduce the dimensionality of the data and to enhance discrimination between classes. Thus classification benefits from feature transforms especially when features are not separable, that is, when it is not possible to optimise a simple function that maps features belonging to signals of a given class to the corresponding category. A further dimensionality reduction may be performed when dealing with high dimensional signals (such as audio or high resolution images) by fitting the parameters of global statistical distributions with features learned on portions of the signal. Finally, the models learned for the different classes are compared, using a distance metric, to the model learned from an unlabelled signal, which is typically assigned to the nearest class.

\subsection{Traditional Algorithms For Feature Transform}\label{sec:aft}
Two of the main feature transform techniques are principal component analysis (\Acronym{pca}) \cite{Pearson1901On} and Fisher's linear discriminant analysis (\Acronym{lda}) \cite{Duda1973Pa}.

\subsubsection{\Acronym{pca}}
Let $\curlyb{\fea_{\iFea} \in \real^{\nDim}}_{\iFea=1}^{\nFea}$ be a set of vectors containing features extracted from $\nFea$ training signals. The goal of \Acronym{pca} is to learn an orthonormal set of basis functions $\curlyb{\atom_{\iAto} \in \ambient^{\nDim}}_{\iAto=1}^{\nDim}$ such that $\norm{\atom_{\iAto}}{2}=1$ and $\inner{\atom_{i}}{\atom_{j}}=0 \; \forall i\neq j$ that are placed along the columns of a so-called \emph{dictionary} $\Dic\in\real^{\nDim\times\nDim}$. The bases are optimised from the data to identify their principal components, that is, the sub-spaces that retain the maximum variance of the features. To compute the dictionary, the eigenvalue decomposition of the outer product
\begin{equation}
\Matrix{X}\Transpose{\Matrix{X}} = \Matrix{Q}\Matrix{\Lambda}\Transpose{\Matrix{Q}}
\end{equation}
is first calculated. Then, the $\nComponents$ eigenvectors corresponding to the $\nComponents$ largest eigenvalues are selected from the matrix $\Matrix{Q}$, and scaled to unit $\ell_{2}$ norm to form the dictionary $\Dic$.
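For concreteness, this construction can be written in a few lines of NumPy; the sketch below is purely illustrative (function and variable names are our own) and is not the code used for the experiments in this paper.
\begin{verbatim}
import numpy as np

def pca_dictionary(X, L):
    """X: N x M matrix of training features (one feature vector per
    column); returns the N x L dictionary spanned by the eigenvectors
    of X X^T associated with the L largest eigenvalues."""
    w, Q = np.linalg.eigh(X @ X.T)        # eigenvalues in ascending order
    Phi = Q[:, np.argsort(w)[::-1][:L]]   # top-L principal directions
    return Phi / np.linalg.norm(Phi, axis=0)  # unit ell-2 norm columns
\end{verbatim}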
A new set of transformed features $\newFea_{\Acronym{pca}} = \Dic\Transpose{\Dic}\fea$ is computed by projecting the data onto the sub-space spanned by the columns of $\Dic$ (that is, onto the $\nComponents$-dimensional principal sub-space). This operation reduces the dimensionality of the features by projecting them onto a linear subspace embedded in $\real^{\nDim}$. It is an unsupervised technique that does not exploit knowledge about the classes associated with the training set, but implicitly relies on the assumption that the principal component directions encode relevant differences between classes.

\subsubsection{\Acronym{lda}}
In contrast, \Acronym{lda} is a supervised method for feature transform whose objective is to explicitly maximise the separability of classes in the transformed domain. Let $\obsSet_{\iUniCat}$ be a set indexing features extracted from data belonging to the $\iUniCat$-th category, let
\begin{equation}
\average{\fea}_{\iUniCat} \definition \frac{1}{\abs{\obsSet_{\iUniCat}}}\sum_{\iObs\in\obsSet_{\iUniCat}} \fea_{\iObs}
\end{equation}
be the $\iUniCat$-th class feature centroid, and $\average{\fea}\definition\frac{1}{\nObs}\sum_{\iObs=1}^{\nObs}\fea_{\iObs}$ the centroid of the features extracted from the entire training dataset. The between-classes scatter matrix
\begin{equation}
\Matrix{S_{b}} \definition \sum_{\iUniCat=1}^{\nUniCat}\abs{\obsSet_{\iUniCat}}\roundb{\average{\fea}_{\iUniCat}-\average{\fea}}\Transpose{\roundb{\average{\fea}_{\iUniCat}-\average{\fea}}}
\end{equation}
is defined to measure the mutual distances between the centroids of different classes, while the within-classes scatter matrix
\begin{equation}
\Matrix{S}_{w}\definition \sum_{\iUniCat=1}^{\nUniCat}\sum_{\iObs\in\obsSet_{\iUniCat}}\roundb{\fea_{\iObs}-\average{\fea}_{\iUniCat}}\Transpose{\roundb{\fea_{\iObs}-\average{\fea}_{\iUniCat}}}
\end{equation}
quantifies the distances between features belonging to the same class. To maximise an objective function $\objective(\Matrix{W})\definition\frac{\abs{\Transpose{\Matrix{W}}\Matrix{S}_{b}\Matrix{W}}}{\abs{\Transpose{\Matrix{W}}\Matrix{S}_{w}\Matrix{W}}}$ that promotes features belonging to the same class to be near each other and far away from features belonging to other classes, the eigenvalue decomposition of the matrix
\begin{equation}
\pseudoinverse{\Matrix{S}_{w}}\Matrix{S}_{b} = \Matrix{Q}\Matrix{\Lambda}\Transpose{\Matrix{Q}}
\end{equation}
is computed, and the features $\fea$ are projected onto the space spanned by its $(\nUniCat-1)$ eigenvectors corresponding to the largest $(\nUniCat-1)$ eigenvalues. \Acronym{lda} explicitly seeks to enhance the discriminative power of features by optimising the objective $\objective$.

\subsection{Supervised \Acronym{pca}}
Related works that extend \Acronym{pca} include the supervised \Acronym{pca} (\Acronym{s-pca}) proposed by Barshan et al. \cite{Barshan2011Su}. \Acronym{s-pca} is based on the theory of reproducing kernel Hilbert spaces (\Acronym{rkhs}) (spaces of functions that satisfy certain properties and map elements from an arbitrary set to the set of complex numbers) \cite{Aronszajn:1950}, and on the so-called Hilbert-Schmidt independence criterion (\Acronym{hsic}) \cite{gretton2005measuring}. The \Acronym{hsic} is used to estimate the statistical dependence of two random variables, based on the fact that this quantity is related to the correlation of functions belonging to their respective \Acronym{rkhs}.
While \Acronym{hsic} is defined in terms of the probability density function of the two random variables, empirical estimates of \Acronym{hsic} can be obtained from finite sequences of their realisations. The empirical \Acronym{hsic} can in turn be used to construct an objective function that maximises the dependence between the two variables. Hence, this strategy is adopted within the context of classification to maximise the statistical dependence between a transformed feature $\newFea_{\Acronym{s-pca}}$ and its corresponding category $\cat$. In practice, \Acronym{s-pca} differs from \Acronym{pca} in that it calculates the eigenvalue decomposition of a matrix $\Matrix{R}$ defined as follows:
\begin{equation}
\Matrix{R} \definition \Matrix{X}\Matrix{H}\Matrix{L}\Matrix{H}\Transpose{\Matrix{X}}
\end{equation}
where $\Matrix{H} \definition \Matrix{I} - \Vector{e}\Transpose{\Vector{e}}$ is a so-called \emph{centring} matrix\footnote{Here $\Vector{e}$ is a vector of ones.} and $\Matrix{L} \definition \Vector{\cat}\Transpose{\Vector{\cat}}$ is the kernel matrix of the class variable, constructed by computing the outer product of the vectors resulting from assigning different numerical values to each category.

\subsection{Other related work}
The union of incoherent sub-spaces model proposed by Schnass and Vandergheynst \cite{Schnass2010A-} employs a very similar intuition to the one that inspired our proposed method, and models features belonging to different classes using incoherent subspaces. Other methods for supervised dimensionality reduction include metric learning algorithms \cite{xing2002distance}, sufficient dimensionality reduction \cite{li1991sliced} and Bair's supervised principal components \cite{Bair06predictionby}. Manifold learning techniques are used to model nonlinear data and are reviewed by Van Der Maaten et al. \cite{Van-Der-Maaten2009Di}. Finally, the sparse sub-space clustering technique developed by Elhamifar and Vidal \cite{Elhamifar2013Sp} applies concepts and algorithms from the field of sparse approximation to tackle unsupervised clustering problems.

\subsection{Paper organisation}
The method proposed in this paper is aimed at learning discriminative sub-spaces that allow dimensionality reduction, while at the same time enhancing the separability between classes. It is derived from our previous work on learning incoherent dictionaries for sparse approximation \cite{Barchiesi2013Le}. The incoherent dictionary learning problem will be introduced in Section \ref{sec:idl}, while Section \ref{sec:lis} will contain the main contribution of this paper, namely learning incoherent subspaces for classification. Numerical experiments are presented in Section \ref{sec:ne}, and conclusions are drawn in Section \ref{sec:end}.

\section{Incoherent Dictionary Learning}\label{sec:idl}
A sparse approximation of a signal $\fea\in\real^{\nDim}$ is a linear combination of $\nAto\geq\nDim$ basis functions $\curlyb{\atom_{\iAto}\in\real^{\nDim}}_{\iAto=1}^{\nAto}$ called \emph{atoms} described by:
\begin{equation}
\fea \approx \approximant{\fea} = \sum_{\iAto=1}^{\nAto} \alpha_{\iAto}\atom_{\iAto}
\end{equation}
where the vector of coefficients $\coeff$ contains a \emph{small} number of non-zero components, corresponding to a small number of atoms actively contributing to the approximation $\approximant{\fea}$.
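As a toy illustration of this model (a purely illustrative NumPy sketch with arbitrary dimensions and coefficient values, not code from the experiments in this paper):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 16                          # signal dimension, number of atoms
Phi = rng.standard_normal((N, K))
Phi /= np.linalg.norm(Phi, axis=0)    # unit-norm atoms

alpha = np.zeros(K)
alpha[[3, 11]] = [0.7, -1.2]          # only S = 2 active atoms
x_hat = Phi @ alpha                   # the sparse approximation
print(np.count_nonzero(alpha))        # prints 2
\end{verbatim}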
Given a signal $\fea$ and a dictionary, various algorithms have been proposed to find a sparse approximation that minimises the residual error $\norm{\fea-\approximant{\fea}}{2}$ \cite{Elad2010Sp}. Dictionary learning aims at optimising a dictionary $\Dic$ for sparse approximation given a set of training data. It is an unsupervised technique that can be thought of as a generalisation of \Acronym{pca}, as both methods learn linear subspaces that minimise the approximation error of the signals. Dictionary learning, however, is generally more flexible than \Acronym{pca} because it can be employed to learn more general non-orthogonal over-complete dictionaries \cite{Rubinstein2010Di}.

\subsection{The incoherent dictionary learning problem}
Dictionaries for sparse approximation have important intrinsic properties that describe the relations between their atoms, such as the mutual coherence $\mu(\Dic)=\underset{i\neq j}{\max}{\abs{\inner{\atom_{i}}{\atom_{j}}}}$, defined as the maximum absolute inner product between any two different atoms. The goal of incoherent dictionary learning is to learn atoms that are well adapted to sparsely approximate a set of training signals, and that are at the same time mutually incoherent \cite{Barchiesi2013Le}. Given a set of $\nFea$ training signals contained in the columns of the matrix $\Feas \in \real^{\nDim\times\nFea}$ and a matrix $\Coeff\in\real^{\nAto\times\nFea}$ indicating the sparse approximation coefficients, the incoherent dictionary learning problem can be expressed as:
\begin{align}\label{eq:iprcost}
\optimal{\Dic} = \MinimiseST{\Dic}{\norm{\Feas-\Dic\Coeff}{\F}}{\coherence(\Dic) \leq \coherence_{0} \nonumber \\
&\norm{\coeff_{\iFea}}{0}\leq \nActiveAtoms \quad \forall \iFea}
\end{align}
where $\coherence_{0}$ is a fixed mutual coherence constraint, the $\ell_{0}$ pseudo-norm $\norm{\cdot}{0}$ counts the number of non-zero components of its argument and $\nActiveAtoms$ is a fixed number of active atoms.

Algorithms for (incoherent) dictionary learning generally follow an alternating optimisation heuristic, iteratively updating $\Dic$ and $\Coeff$ until a stopping criterion is met. In the case of the iterative projections and rotations (\Acronym{ipr}) algorithm \cite{Barchiesi2013Le}, a dictionary de-correlation step is added after updating the dictionary in order to satisfy the mutual coherence constraint. Given $\Feas$, fixed $\coherence_{0}$, $\nActiveAtoms$ and a stopping criterion (such as a maximum number of iterations), the optimisation of \eqref{eq:iprcost} is tackled by iteratively performing the following steps:
\begin{itemize}
\item\emph{Sparse coding}: fix $\Dic$ and compute the matrix $\Coeff$ using a suitable sparse approximation method.
\item\emph{Dictionary update}: fix $\Coeff$ and update $\Dic$ using a suitable method for dictionary learning.
\item\emph{Dictionary de-correlation}: given $\Feas$, $\Dic$ and $\Coeff$, update the dictionary $\Dic$ to reduce its mutual coherence below the level $\coherence_{0}$.
\end{itemize}

\subsection{The iterative projections and rotations algorithm}\label{sec:ipr}
The \Acronym{ipr} algorithm has been proposed in order to solve the dictionary de-correlation step, while ensuring that the updated dictionary provides a sparse approximation with low residual norm, as indicated by the objective function \eqref{eq:iprcost} \cite{Barchiesi2013Le}.
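Before detailing the de-correlation step, the outer alternating loop can be sketched as follows. The sketch is purely illustrative: it uses simple stand-ins (correlation thresholding with a least-squares fit for the sparse coding step, and a method-of-optimal-directions style dictionary update), not the algorithms adopted in this paper, and the de-correlation step is left as a placeholder.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, K, M, S = 8, 16, 200, 2         # dimension, atoms, signals, sparsity
X = rng.standard_normal((N, M))
D = rng.standard_normal((N, K))
D /= np.linalg.norm(D, axis=0)

for _ in range(10):
    # sparse coding: keep the S largest correlations per signal, then
    # fit their coefficients by least squares
    idx = np.argsort(-np.abs(D.T @ X), axis=0)[:S]
    A = np.zeros((K, M))
    for m in range(M):
        A[idx[:, m], m] = np.linalg.lstsq(D[:, idx[:, m]], X[:, m],
                                          rcond=None)[0]
    # dictionary update: D minimising ||X - D A||_F for fixed A
    D = X @ np.linalg.pinv(A)
    norms = np.linalg.norm(D, axis=0)
    D /= np.where(norms > 0, norms, 1.0)  # renormalise, guard unused atoms
    # the de-correlation step (Section 2.2) would be applied to D here

print(np.linalg.norm(X - D @ A) / np.linalg.norm(X))
\end{verbatim}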
The \Acronym{ipr} algorithm requires the calculation of the Gram matrix $\Gram=\Transpose{\Dic}\Dic$ which contains the inner products between any two atoms in the dictionary. $\Gram$ is iteratively projected onto two constraint sets, namely the structural constraint set $\stcset$ and the spectral constraint set $\spcset$. The former is the set of symmetric square matrices with unit diagonal values and off-diagonal values with magnitude smaller than or equal to $\coherence_{0}$:
\small
\begin{equation*}
\stcset \definition \curlyb{\stcmat \in \ambient^{\nAtoms \times \nAtoms} : \stcmat = \Transpose{\stcmat}, \stcel_{i,i}=1,\max_{i > j}|\stcel_{i,j}|\leq \coherence_{0}}.
\end{equation*}
\normalsize
The latter is the set of symmetric positive semidefinite square matrices with rank smaller than or equal to $\nDimensions$:
\begin{equation*}
\spcset \definition \curlyb{ \spcmat \in \ambient^{\nAtoms \times \nAtoms} : \spcmat = \Transpose{\spcmat}, \operatorname{eig}(\spcmat)\geq \Vector{0}, \operatorname{rank}(\spcmat)\leq \nDimensions}
\end{equation*}
where the operator $\operatorname{eig}(\cdot)$ returns the vector of eigenvalues of its argument. Starting from the Gram matrix of an initial dictionary $\Dictionary$, the \Acronym{ipr} method iteratively performs the following operations.
\begin{itemize}
\item \emph{Projection onto the structural constraint set}. The projection $\stcmat = \Projection_{\stcset}(\Gram)$ can be obtained by:
\begin{enumerate}
\item setting $\stcel_{i,i} = 1$,
\item limiting the off-diagonal elements so that, for $i \neq j$,
\small
\begin{equation}\label{eq:pscs}
\stcel_{i,j} = \operatorname{Limit}({\gram}_{i,j},\coherence_{0}) = \left\{ \begin{array}{rl} \gram_{i,j} & \text{if} \quad |\gram_{i,j}|\leq \coherence_{0} \\ \operatorname{sgn}(\gram_{i,j})\coherence_{0} & \text{if} \quad |\gram_{i,j}| > \coherence_{0} \end{array} \right.
\end{equation}
\normalsize
\end{enumerate}
\item \emph{Projection onto the spectral constraint set and factorization}. The projection $\spcmat = \Projection_{\spcset}(\Gram)$ and subsequent factorisation are obtained by:
\begin{enumerate}
\item calculating the eigenvalue decomposition (\Acronym{evd}) $\Gram = \eigvecmat \eigvalmat \Transpose{\eigvecmat}$,
\item thresholding the eigenvalues by keeping only the $\nDimensions$ largest positive ones.
\begin{equation*}
\left[\operatorname{Thresh}(\eigvalmat,\nDimensions) \right]_{i,i} = \left\{ \begin{array}{rl} \lambda_{i,i} & \text{if} \quad i \leq N \; \text{and} \; \lambda_{i,i}>0 \\ 0 & \text{if} \quad i > N \; \text{or} \; \lambda_{i,i}\leq 0 \end{array}\right.
\end{equation*}
where the eigenvalues in $\eigvalmat$ are ordered from the largest to the smallest. Following this step, at most $\nDimensions$ eigenvalues of the Gram matrix are different from zero,
\item factorizing the projected Gram matrix into the product $\Gram=\Transpose{\Dic}\Dic$ by setting:
\begin{equation}
\Dic = \eigvalmat^{1/2}\Transpose{\eigvecmat}.
\end{equation}
\end{enumerate}
\item \emph{Dictionary rotation}. Rotate the dictionary $\Dic$ to align it with the training set by solving the problem:
\begin{equation}\label{eq:rot}
\optimal{\orthmat} = \Minimise{\orthmat \Transpose{\orthmat} = \Matrix{I}}{\norm{\Feas - \orthmat\Dic\Coeff}{\Label{F}}}.
\end{equation} The optimal rotation matrix can be calculated by: \begin{enumerate} \item computing the sample covariance between the observed signals and their approximations $\CovMat \definition (\Dic\Coeff)\Transpose{\Feas}$, \item calculating the \Acronym{svd} of the covariance $\CovMat = \Matrix{U}\Matrix{\Sigma}\Transpose{\Matrix{V}}$, \item setting the optimal rotation matrix to $\optimal{\orthmat}=\Matrix{V}\Transpose{\Matrix{U}}$, \item rotating the dictionary $\Dic \leftarrow \optimal{\orthmat}\Dic$. \end{enumerate} \end{itemize} More details about the \Acronym{ipr} algorithm can be found in \cite{Barchiesi2013Le}, including details of its computational cost. %The code of the \Acronym{ipr} method is illustrated in Algorithm \ref{algo:ipr}. % %\begin{algo} %\KwIn{$\Feas, \Dictionary, \Coeff, \coherence_{0}, \nIter$} %\KwOut{$\optimal{\Dictionary}$} %$\iter\gets1$\; %\While{$\iter \leq \nIter$ and $\coherence(\Dictionary)>\coherence_{0}$}{ % \tcp{Calculate Gram matrix} % $\Gram \gets \Transpose{\Dictionary}\Dictionary$\; % \tcp{Project ont structural c.s.} % $\operatorname{diag(\Gram)} \gets \Vector{1}$\; % $\Gram \gets \operatorname{Limit}(\Gram,\coherence_{0})$\; % \tcp{Project Gram matrix onto spectral c.s. and factorize} % $[\eigvecmat, \eigvalmat] \gets \Acronym{evd}(\Gram)$\; % $\eigvalmat \gets \operatorname{Thresh}(\eigvalmat,\nDimensions)$\; % $\Dictionary \gets \eigvalmat^{1/2}\Transpose{\eigvecmat}$\; % \tcp{Rotate dictionary} % $\CovMat \gets \Feas\Transpose{(\Dictionary\Coeff)}$\; % $[\Matrix{U},\Matrix{\Sigma},\Matrix{V}] \gets \Acronym{svd}(\CovMat)$\label{algo:ipr:svd}\; % $\OrthMat \gets \Matrix{V}\Transpose{\Matrix{U}}$\; % $\Dictionary \gets \OrthMat\Dictionary$\; % $\iter\gets\iter+1$\; %} %\caption{\label{algo:ipr}Iterative projections and rotations (\Acronym{ipr})} %\end{algo} \section{Learning Incoherent Subspaces}\label{sec:lis} The \Acronym{ipr} algorithm learns a dictionary where all the atoms are mutually incoherent. Therefore, given any two disjoint sets $\Lambda\bigcap\Gamma=\emptyset$ that identify non-overlapping collections of atoms, the sub-dictionaries $\Dic_{\Lambda}, \Dic_{\Gamma}$ are also mutually incoherent. Starting from this observation, the main intuition driving the development of a supervised \Acronym{ipr} (\Acronym{s-ipr}) algorithm for classification is to learn mutually incoherent sub-dictionaries that approximate features from different classes of signals. The sub-dictionaries are in turn used to define incoherent sub-spaces, and features are projected onto these sub-spaces yielding discriminative dimensionality reduction. \subsection{The supervised \Acronym{ipr} algorithm}\label{sec:iprclass} Let $\curlyb{\cat_{\iFea}\in\Cat}_{\iFea=1}^{\nFea},\; \Cat=\curlyb{\uniCat_{1},\uniCat_{2},\dots,\uniCat_{\nCat}}$ be a set of labels that identify the category of the vectors of features $\fea_{\iFea}$, whose elements belong to a set $\Cat$ of $\nCat$ possible categories. The columns of the matrix $\Feas_{\iCat}$ contain a selection of the features extracted from signals belonging to the $\iCat$-th category. 
To learn incoherent sub-dictionaries from the entire set of features, we must first cluster the atoms into different classes\footnote{Note that the term \emph{cluster} implies that at this stage the algorithm needs to make an unsupervised decision, since there is no a-priori reason to assign a given atom to any particular class.}, and then only proceed with their de-correlation if they are assigned to different categories (while allowing coherent atoms to approximate features from the same class). To this aim, we employ the matrix $\Coeff$ to measure the contribution of every atom to the approximation of features belonging to each class. Let $\coeff_{\iCat}^{\iAto}$ indicate the $\iAto$-th row of the matrix $\Coeff_{\iCat}$ containing the coefficients that contribute to the approximation of $\Feas_{\iCat}$, and $\nDim_{\iCat}$ indicate the number of its elements. A coefficient $\gamma_{\iAto,\iCat}$ is defined as:
\begin{equation}
\gamma_{\iAto,\iCat} \definition \frac{1}{\nDim_{\iCat}}\norm{\coeff_{\iCat}^{\iAto}}{1},
\end{equation}
and every atom $\atom_{\iAtom}$ is associated with the category to which it maximally contributes $\optimal{\iCat}_{\iAto} = \underset{\iCat}{\arg\max}\curlyb{\gamma_{\iAto,\iCat}}$. Grouping together atoms that have been assigned to the same class leads to a set of sub-dictionaries whose size and rank depend on the number of atoms assigned to each class and on their linear dependence. As a general heuristic, if features corresponding to different classes do not occupy the same sub-space (according to the active elements in $\Coeff$), a full-rank dictionary $\Dic$ with $\nAto \geq \nDim \gg \nCat$ ensures that $\optimal{\iCat}_{\iAto}$ identify $\nCat$ non-empty and disjoint sub-dictionaries $\curlyb{\Dic_{\iCat}}_{\iCat=1}^{\nCat}$.

Once the atoms have been clustered, the Gram matrix $\Gram$ is computed and iteratively projected as in the method described in Section \ref{sec:ipr}, with the difference that equation \eqref{eq:pscs} is modified in order to constrain only the mutual coherence between atoms assigned to different categories:
\small
\begin{equation}\label{eq:pscs2}
\operatorname{Limit}({\gram}_{i,j},\coherence_{0},\optimal{\Vector{\iCat}}) = \left\{ \begin{array}{rl} \gram_{i,j} & \text{if} \quad |\gram_{i,j}|\leq \coherence_{0} \, \text{or} \, \optimal{\iCat}_{i}=\optimal{\iCat}_{j} \\ \operatorname{sgn}(\gram_{i,j}) \coherence_{0} & \text{if} \quad |\gram_{i,j}| > \coherence_{0} \, \text{and} \, \optimal{\iCat}_{i}\neq\optimal{\iCat}_{j} \end{array} \right.
\end{equation}
\normalsize
A further modification of the standard \Acronym{ipr} algorithm presented in \cite{Barchiesi2013Le} consists in the update of the Gram matrix, performed by computing its element-wise average with the projection $\stcmat = \Projection_{\stcset}(\Gram)$ (rather than by using the projection alone). This heuristic has led to improved empirical results by preventing $\Gram$ from changing too abruptly. The complete supervised \Acronym{s-ipr} method is summarised in Algorithm \ref{algo:sipr}. Note that the mutual coherence $\coherence_{\optimal{\iCat}}(\Dic) = \underset{\optimal{\iCat}_{i}\neq\optimal{\iCat}_{j}}{\max}\abs{\inner{\atom_{i}}{\atom_{j}}}$ indicated in this algorithm measures the largest absolute inner product between any two atoms assigned to different categories, since atoms assigned to the same category are allowed to be mutually coherent.
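The atom clustering step at the top of Algorithm \ref{algo:sipr} can be sketched in a few lines of NumPy (purely illustrative, not the implementation used for the experiments):
\begin{verbatim}
import numpy as np

def cluster_atoms(A, labels):
    """A: K x M coefficient matrix, labels: length-M array of class
    indices; returns, for each atom, the class it maximally
    contributes to (the argmax over the gamma coefficients)."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    gamma = np.stack([np.abs(A[:, labels == c]).sum(axis=1)
                      / np.count_nonzero(labels == c)
                      for c in classes], axis=1)   # K x P gammas
    return classes[np.argmax(gamma, axis=1)]
\end{verbatim}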
\begin{algo}
\KwIn{$\Feas, \Dictionary, \Coeff, \coherence_{0}, \cats, \nIter$}
\KwOut{$\optimal{\Dictionary}$}
$\iter\gets1$\;
\tcp{Cluster atoms}
$\Coeff_{\iCat} \gets \squareb{\coeff_{j}} \forall j \in \uniCat_{\iCat}$\;
$\gamma_{\iAto,\iCat} \gets \norm{\coeff_{\iCat}^{\iAto}}{1}/\nDim_{\iCat}$\;
$\optimal{\iCat}_{\iAto} = \underset{\iCat}{\arg\max}\curlyb{\gamma_{\iAto,\iCat}}$\;
\While{$\iter \leq \nIter$ and $\coherence_{\optimal{\iCat}}(\Dictionary)>\coherence_{0}$}{
  \tcp{Calculate Gram matrix}
  $\Gram \gets \Transpose{\Dictionary}\Dictionary$\;
  \tcp{Project onto structural c.s.}
  $\operatorname{diag}(\stcmat) \gets \Vector{1}$\;
  $\stcmat \gets \operatorname{Limit}(\Gram,\coherence_{0},\optimal{\Vector{\iCat}})$\;
  $\Gram \gets \frac{1}{2}\Gram + \frac{1}{2}\stcmat$\;
  \tcp{Project onto spectral c.s. and factorize}
  $[\eigvecmat, \eigvalmat] \gets \Acronym{evd}(\Gram)$\;
  $\eigvalmat \gets \operatorname{Thresh}(\eigvalmat,\nDimensions)$\;
  $\Dictionary \gets \eigvalmat^{1/2}\Transpose{\eigvecmat}$\;
  \tcp{Rotate dictionary}
  $\CovMat \gets \Feas\Transpose{(\Dictionary\Coeff)}$\;
  $[\Matrix{U},\Matrix{\Sigma},\Matrix{V}] \gets \Acronym{svd}(\CovMat)$\label{algo:ipr:svd}\;
  $\OrthMat \gets \Matrix{V}\Transpose{\Matrix{U}}$\;
  $\Dictionary \gets \OrthMat\Dictionary$\;
  $\iter\gets\iter+1$\;
}
\caption{\label{algo:sipr}Supervised \Acronym{ipr}}
\end{algo}

\subsection{Classification via incoherent subspaces}\label{sec:class}
The \Acronym{s-ipr} algorithm makes it possible to learn a set of sub-dictionaries $\curlyb{\Dic_{\iCat}}$ that contain mutually incoherent atoms. These cannot be directly used to define discriminative subspaces because, depending on $\nDim$ and on the rank of each sub-dictionary, atoms belonging to disjoint sub-dictionaries might span identical subspaces. Instead, we fix a rank $\nDimSub\leq\floor{\nDim/\nCat}$ and choose a collection of $\nDimSub$ linearly independent atoms from each sub-dictionary $\Dic_{\iCat}$, using the largest values of $\gamma_{\iAto,\iCat}$ to define a picking order. Thus, we obtain a set $\curlyb{\Spa_{\iCat}}_{\iCat=1}^{\nCat}$ of incoherent sub-spaces of rank $\nDimSub$ embedded in the space $\ambient^{\nDim}$, and use them to derive a feature transform for classification. Each feature vector $\fea_{\iFea}$ that belongs to the class $\cat_{\iFea}$ is projected onto the corresponding subspace, yielding a set of transformed features $\curlyb{\newFea_{\iFea}}_{\iFea=1}^{\nFea}$:
\begin{equation}
\newFea_{\iFea} = \Spa_{\cat_{\iFea}}\pseudoinverse{\Spa_{\cat_{\iFea}}}\fea_{\iFea}
\end{equation}
where $\Spa^{\dagger}$ denotes the Moore-Penrose pseudo-inverse of the matrix $\Spa$ and needs to be used in place of the transposition operator because the columns of $\Spa$ are in general not orthogonal. When an unlabelled signal is presented to the classifier, the corresponding vector of features $\fea$ is projected onto all the learned sub-spaces. Then, the nearest sub-space is chosen using a Euclidean distance measure, and the corresponding projection $\newFea$ is used as the transformed feature:
\begin{align}
\optimal{\iCat} &= \underset{\iCat}{\arg\min}\norm{\fea-\Spa_{\iCat}\pseudoinverse{\Spa_{\iCat}}\fea}{2} \\
\newFea &= \Spa_{\optimal{\iCat}}\pseudoinverse{\Spa_{\optimal{\iCat}}}\fea
\end{align}
The subspace $\optimal{\iCat}$ can be directly used as an estimator of the category of the signal $\optimal{\cat}$.
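The nearest sub-space rule just described can be sketched as follows (a purely illustrative NumPy fragment; the names are our own):
\begin{verbatim}
import numpy as np

def nearest_subspace(x, subspaces):
    """subspaces: list of N x Q bases Psi_p; returns the index of the
    nearest subspace and the projection of x onto it. The
    pseudo-inverse handles non-orthogonal columns."""
    best, y_best, err_best = -1, None, np.inf
    for p, Psi in enumerate(subspaces):
        y = Psi @ (np.linalg.pinv(Psi) @ x)
        err = np.linalg.norm(x - y)
        if err < err_best:
            best, y_best, err_best = p, y, err
    return best, y_best
\end{verbatim}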
Alternatively, a simple \emph{k-nearest neighbour} classifier can be employed on the transformed features, and a class can be inferred as: \begin{equation} \optimal{\cat} = \texttt{knn}(\newFea,\NewFeas,\cats) \end{equation} where $\NewFeas$ represents the matrix of training features after the transform stage. This latter approach is especially suitable when working with a large number of classes in a space of relatively small dimension, as in this case multiple classes might be assigned to the same subspace. \section{Numerical Experiments}\label{sec:ne} \subsection{Feature visualisation}\label{sec:visu} To illustrate the \Acronym{s-ipr} algorithm for feature transform, we first run visualisation experiments depicting how different feature transform methods act on training and test data. \subsubsection{Synthetic data} \begin{figure} \centering \includegraphics[width=.6\textwidth]{./Code/Datasets/GetToyExampleDataset.pdf} \caption{\label{fig:toy}Synthetic data generated along one-dimensional subspaces of $\real^{2}$.} \end{figure} Figure \ref{fig:toy} displays a total of $1500$ synthetic features in $\real^{2}$ belonging to $3$ different classes that we generated for this experiment. For each class, first we draw values distributed uniformly in the interval $\squareb{-1,1}$ and assign them to the first component of the features (the \emph{x} coordinate). Then, we add Gaussian noise with variance $0.1$ to the second component (the \emph{y} coordinate), and we rotate the resulting data by the angles $\theta_{0}=0$, $\theta_{1}=\pi/4$ and $\theta_{2}=\pi/2$ for the $3$ classes respectively. This way, features belonging to different classes are clustered along different one-dimensional sub-spaces of $\real^{2}$. \begin{figure} \includegraphics[width=\textwidth]{./Code/toyvisu.pdf} \caption{\label{fig:toyNewFea}Feature transform applied to the synthetic data in Figure \ref{fig:toy}. Different colours correspond to different classes; `+' and `o' markers represent samples taken from the training and test set respectively.} \end{figure} Figure \ref{fig:toyNewFea} displays the result of applying feature transforms to the data depicted in Figure \ref{fig:toy} using subspaces of dimension $1$ (with the exception of \Acronym{lda}, which projects the data onto a space of dimension $\nUniCat-1=2$). To generate the plots, we divided the data into a training set (displayed using the `+' marker) and a test set (displayed using the `o' marker). Samples were drawn in random order from the dataset and assigned to either the training set or the test set, with the former containing $70\%$ of the total data and the latter containing the remaining $30\%$. Then, we applied feature transforms on the training set, thereby learning the transform operators, and applied them to the test set. Starting from the top-left plot, we can observe that \Acronym{pca} identified the direction $x=y$ as the one-dimensional subspace that contains most of the variance of the training set. However, given the type of dataset and the dimensionality reduction caused by \Acronym{pca}, features from all classes overlap, making this transform a poor choice for classification. Similar observations can be drawn from analysing the result of \Acronym{s-pca}, although this transform identifies the direction $y=0$ as the one that leads to statistical dependence between the value of the transformed features in the training set and the corresponding class.
\Acronym{lda} does not introduce any dimensionality reduction in this case, as it projects the features onto a space of dimension $\nUniCat-1=2$, leaving the original features unaltered. However, in the \Acronym{lda} plot we can see the separation between training set and test set that is difficult to notice in the other plots. Finally, the plot at the bottom-right corner of Figure \ref{fig:toyNewFea} displays the results of the \Acronym{s-ipr} algorithm. In setting the parameters of \Acronym{s-ipr}, we chose a $2$ times over-complete dictionary, a number of active atoms equal to half the dimension of the data, and minimal mutual coherence. In the case considered here, this means $\nAto = 4$, $\nActiveAtoms = 1$ and $\coherence = \sqrt{(\nAto-\nDim)/(\nDim(\nAto-1))} \approx 0.58$. As discussed in Section \ref{sec:class}, \Acronym{s-ipr} does not project whole sets of features onto a unique sub-space, but rather learns one sub-space for each category, and projects features onto the nearest sub-space. The result depicted here shows that three directions were identified, each containing data mostly from one category. Since incoherent dictionary learning is designed to learn atoms with minimal mutual coherence, the angles between the directions of the sub-spaces learned by \Acronym{s-ipr} are approximately equal. Prior information regarding the directions of the data would allow us to relax the parameter $\coherence$ and track the directions of the three data classes more closely. \subsubsection{Iris dataset} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{./Code/Datasets/GetFisherIrisDataset.pdf} \caption{\label{fig:fisher3} First three features of the `iris' dataset depicting measurements of sepal length, sepal width and petal length of three iris species.} \end{figure} Figure \ref{fig:fisher3} displays a subset of the `iris' dataset, a popular database that has been used extensively to test and benchmark classification algorithms. The original dataset contains measurements of the sepal length, sepal width, petal length and petal width of three species of iris, namely `setosa', `versicolor' and `virginica'. In this visualisation experiment we selected the first $3$ features to be able to depict the data using three-dimensional scatter plots. From observing the distribution of the data in the feature space, we see that `setosa' is relatively separated from the other two classes, while the features of `virginica' and `versicolor' substantially overlap, with only a few exemplars of `virginica' being distinguishable due to large sepal length and petal length. \begin{figure} \includegraphics[width=\textwidth]{./Code/fishervisu.eps} \caption{\label{fig:fisher3NewFea}Feature transform applied to the iris data in Figure \ref{fig:fisher3}. Different colours correspond to different classes; `+' and `o' markers represent samples taken from the training and test set respectively.} \end{figure} The results of the feature transforms are depicted in Figure \ref{fig:fisher3NewFea}. This time, we learn $2$-dimensional subspaces from the $3$-dimensional data points and plot the transformed features, along with the learned planes. We observe that \Acronym{pca} identifies a direction along a diagonal axis that follows the distribution of features displayed in Figure \ref{fig:fisher3}. \Acronym{s-pca}, on the other hand, projects the features onto a horizontal plane that slightly enhances the separation between `versicolor' and `virginica' samples.
\Acronym{lda} results in a projection where features belonging to the same category are closely clustered together, but fails to separate the classes `versicolor' and `virginica'. Finally, the output of \Acronym{s-ipr} displays three distinct sub-spaces associated with the three classes. As in the other plots, the separation between `versicolor' and `virginica' is far from perfect; however, features from the two classes are mostly projected onto their respective sub-spaces. Features belonging to the `setosa' category are mostly clustered together as a result of their projection onto the black subspace; however, we can note a few test samples that have been associated by the algorithm with the blue sub-space. \subsection{Classification}\label{sec:cla} \begin{table} \centering \begin{tabular}{lccc}\textbf{Name} & \nDim & \nUniCat & \nFea \\ \hline Iris & 4 & 3 & 150\\ Balance & 4 & 3 & 625\\ Parkinsons & 23 & 2 & 197 \\ Sonar & 60 & 2 & 208\\ USPS & 256 & 3 & 1405 \end{tabular} \caption{\label{tab:datasets}Datasets used in the classification evaluation of feature transform algorithms. All the datasets can be downloaded from \href{http://archive.ics.uci.edu/ml/datasets.html}{http://archive.ics.uci.edu/ml/datasets.html}. Note that we only use a subset of the USPS dataset containing the digits $1$, $3$ and $8$.} \end{table} In the previous section, we illustrated how the \Acronym{s-ipr} algorithm is able to learn incoherent sub-spaces that model the distribution of features belonging to different classes. Here we evaluate \Acronym{s-ipr} and the other feature transform algorithms in the context of supervised classification. To perform the classification, features are transformed using the methods already used for comparison in Section \ref{sec:visu}, by learning a transform operator on the training set and applying it to the test set. We use a $5$-fold stratified cross-validation to classify all the features in a dataset during the test stage. This method produces $5$ independent classification problems with a ratio between the number of training and test samples equal to $8:2$. Once the features have been transformed, a $k$-nearest neighbour classifier with $k=5$ is used to estimate a class. We employ the datasets detailed in Table \ref{tab:datasets}, and for each of them we evaluate the misclassification ratio, defined as the proportion of misclassified samples in the test set, averaged over the $5$ independent classification problems created by the stratified cross-validation protocol. \begin{figure} \centering \subfigure{\includegraphics[width=.48\textwidth]{Code/Util/fisheriris.pdf}} \subfigure{\includegraphics[width=.48\textwidth]{Code/Util/balance.pdf}} \subfigure{\includegraphics[width=.48\textwidth]{Code/Util/parkinsons.pdf}} \subfigure{\includegraphics[width=.48\textwidth]{Code/Util/sonar.pdf}} %\subfigure{\includegraphics[width=.7\textwidth]{Code/Util/usps.pdf}} \caption{\label{fig:class} Misclassification ratio as a function of the rank of the subspace employed during feature transforms for the datasets `iris', `balance', `Parkinsons' and `sonar'.} \end{figure} Figure \ref{fig:class} displays, for each dataset, the misclassification ratio as a function of the rank of the subspace learned by the algorithms. In the plots, `none' indicates that no feature transform was applied (hence resulting in a sub-space rank equal to the dimension of the original features).
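For concreteness, the evaluation protocol just described can be sketched as follows (Python, using scikit-learn purely for illustration; this is not necessarily the code used to produce the reported figures, and \texttt{transform\_fit} stands for any of the compared methods).
\begin{verbatim}
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

def misclassification_ratio(F, y, transform_fit, n_splits=5, k=5):
    # F: (n_samples, n_dim) features; y: class labels
    # transform_fit(F_train, y_train) returns the learned operator,
    # i.e. a function mapping features to the transformed space
    errs = []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True)
    for tr, te in cv.split(F, y):
        T = transform_fit(F[tr], y[tr])     # learn on the training fold
        knn = KNeighborsClassifier(n_neighbors=k).fit(T(F[tr]), y[tr])
        errs.append(np.mean(knn.predict(T(F[te])) != y[te]))
    return np.mean(errs)                    # averaged over the folds
\end{verbatim}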
In general we can see that \Acronym{s-ipr} does not perform as well as the other techniques, and is only comparable at high sub-space ranks, which do not yield the best overall misclassification ratio. Starting from the `iris' dataset, \Acronym{lda} achieves the best performance, followed by one-dimensional subspaces learned using \Acronym{pca}. Both \Acronym{s-pca} and \Acronym{s-ipr} work better when learning subspaces of high rank. Note that at rank $\nDim=4$ all the methods are equivalent, because they are not performing dimensionality reduction. The results for the `balance' dataset are similar, with again \Acronym{lda} achieving the best misclassification ratio. Although the results on the `Parkinsons' and `sonar' datasets present similar trends regarding \Acronym{s-ipr}, here \Acronym{lda} does not prove to be as successful as \Acronym{pca} and \Acronym{s-pca} in separating features belonging to different classes. \section{Conclusion}\label{sec:end} \subsection{Summary} We have presented the \Acronym{s-ipr} algorithm for learning incoherent subspaces from data belonging to different categories. The encouraging experimental results obtained on the visualisation of the synthetic dataset and of a subset of features taken from the `iris' dataset motivated us to test \Acronym{s-ipr} as a general method for feature transform to be used in classification problems. Unfortunately, we found that, on a group of datasets commonly used to benchmark classification algorithms, the performance of our proposed method is competitive with traditional and state-of-the-art feature transform methods only at high sub-space ranks. The negative results presented in Section \ref{sec:cla} do not imply that \Acronym{s-ipr} is completely unsuitable as a tool for modelling data for classification, but rather open a few important areas of future research that should be investigated to better understand the strengths and limitations of the proposed method. \subsection{Future work} The main assumption made when using incoherent dictionary learning for classification is that high dimensional features are arranged onto lower-dimensional sub-spaces, and that features belonging to different classes can be modelled using different subspaces that are mutually incoherent. This assumption might be met by some datasets, but not by others. Understanding the general distribution of the features in a dataset might be a necessary first step to inform a subsequent choice of algorithm, so that \Acronym{s-ipr} can be used in cases where its premise about the feature distribution is valid. The same argument holds for the whole class of linear models to which the dictionary learning model belongs. Indeed, many feature transform techniques have equivalent \emph{kernelized} versions to model non-linear data. Other substantial improvements can be made to the algorithm itself. The present implementation of \Acronym{s-ipr} contains a fixed parameter $\coherence$ that promotes minimal mutual coherence between the sub-spaces used to approximate different data classes. Knowledge about the distribution of the features might lead to relaxing this parameter, learning sub-spaces that are closer to the true distribution of the features and in turn improving class separation. Moreover, different values of mutual coherence for different pairs of subspaces can be easily included in the optimisation, greatly enhancing the flexibility of \Acronym{s-ipr} as a modelling tool.
\bibliographystyle{IEEEbib} \bibliography{bibliography.bib} \end{document}
%/* ----------------------------------------------------------- */ %/* */ %/* ___ */ %/* |_| | |_/ SPEECH */ %/* | | | | \ RECOGNITION */ %/* ========= SOFTWARE */ %/* */ %/* */ %/* ----------------------------------------------------------- */ %/* developed at: */ %/* */ %/* Speech Vision and Robotics group */ %/* Cambridge University Engineering Department */ %/* http://svr-www.eng.cam.ac.uk/ */ %/* */ %/* Entropic Cambridge Research Laboratory */ %/* (now part of Microsoft) */ %/* */ %/* ----------------------------------------------------------- */ %/* Copyright: Microsoft Corporation */ %/* 1995-2000 Redmond, Washington USA */ %/* http://www.microsoft.com */ %/* */ %/* 2001 Cambridge University */ %/* Engineering Department */ %/* */ %/* Use of this software is governed by a License Agreement */ %/* ** See the file License for the Conditions of Use ** */ %/* ** This banner notice must not be removed ** */ %/* */ %/* ----------------------------------------------------------- */ % % HTKBook - Steve Young 1/12/97 % \mychap{Speech Input/Output}{speechio} Many tools need to input parameterised speech data and \HTK\ provides a number of different methods for doing this: \begin{itemize} \item input from a previously encoded speech parameter file \item input from a waveform file which is encoded as part of the input processing \item input from an audio device which is encoded as part of the input processing. \end{itemize} For input from a waveform file, a large number of different file formats are supported, including all of the commonly used CD-ROM formats. Input/output for parameter files is limited to the standard \HTK\ file format and the new Entropic Esignal format. \sidepic{Tool.spio}{60}{} All \HTK\ speech input\index{speech input} is controlled by configuration parameters which give details of what processing operations to apply to each input speech file or audio source. This chapter describes speech input/output in \HTK. The general mechanisms are explained and the various configuration parameters are defined. The facilities for signal pre-processing, linear prediction-based processing, Fourier-based processing and vector quantisation are presented and the supported file formats are given. Also described are the facilities for augmenting the basic speech parameters with energy measures, delta coefficients and acceleration (delta-delta) coefficients and for splitting each parameter vector into multiple data streams to form \textit{observations}. The chapter concludes with a brief description of the tools \htool{HList} and \htool{HCopy} which are provided for viewing, manipulating and encoding speech files. \mysect{General Mechanism}{genio} The facilities for speech input and output in \HTK\ are provided by five distinct modules: \htool{HAudio}, \htool{HWave}, \htool{HParm}, \htool{HVQ} and \htool{HSigP}. The interconnections between these modules are shown in Fig.~\href{f:Spmods}. \index{speech input!general mechanism} \sidefig{Spmods}{62}{Speech Input Subsystem}{2}{ Waveforms are read from files using \htool{HWave}, or are input direct from an audio device using \htool{HAudio}. In a few rare cases, such as in the display tool \htool{HSLab}, only the speech waveform is needed. However, in most cases the waveform is wanted in parameterised form and the required encoding is performed by \htool{HParm} using the signal processing operations defined in \htool{HSigP}. 
The parameter vectors are output by \htool{HParm} in the form of observations which are the basic units of data processed by the \HTK\ recognition and training tools. An observation contains all components of a raw parameter vector but it may be split into a number of independent parts. Each such part is regarded by a \HTK\ tool as a statistically independent data stream. Also, an observation may include VQ indices attached to each data stream. Alternatively, VQ indices can be read directly from a parameter file in which case the observation will contain only VQ indices. } Usually a \HTK\ tool will require a number of speech data files to be specified on the command line. In the majority of cases, these files will be required in parameterised form. Thus, the following example invokes the \HTK\ embedded training tool \htool{HERest} to re-estimate a set of models using the speech data files \texttt{s1}, \texttt{s2}, \texttt{s3}, \ldots . These are input via the library module \htool{HParm} and they must be in exactly the form needed by the models. \begin{verbatim} HERest ... s1 s2 s3 s4 ... \end{verbatim} However, if the external form of the speech data files is not in the required form, it will often be possible to convert them automatically during the input process. To do this, configuration parameter values are specified whose function is to define exactly how the conversion should be done. The key idea is that there is a \textit{source parameter kind} and a \textit{target parameter kind}. The source refers to the natural form of the data in the external medium and the target refers to the form of the data that is required internally by the \HTK\ tool. The principal function of the speech input subsystem is to convert the source parameter kind into the required target parameter kind. \index{speech input!automatic conversion} Parameter kinds consist of a base form to which one or more qualifiers may be attached, where each qualifier consists of a single letter preceded by an underscore character.\index{qualifiers} Some examples of parameter kinds are \begin{varlist} \fwitem{2cm}{WAVEFORM} simple waveform \fwitem{2cm}{LPC} linear prediction coefficients \fwitem{2cm}{LPC\_D\_E} LPC with energy and delta coefficients \fwitem{2cm}{MFCC\_C} compressed mel-cepstral coefficients \end{varlist} \index{speech input!target kind} The required source and target parameter kinds are specified using the configuration parameters \texttt{SOURCEKIND} \index{sourcekind@\texttt{SOURCEKIND}} and \texttt{TARGETKIND}\index{targetkind@\texttt{TARGETKIND}}. Thus, if the following configuration parameters were defined \begin{verbatim} SOURCEKIND = WAVEFORM TARGETKIND = MFCC_E \end{verbatim} then the speech input subsystem would expect each input file to contain a speech waveform and it would convert it to mel-frequency cepstral coefficients with log energy appended. The source need not be a waveform. For example, the configuration parameters \begin{verbatim} SOURCEKIND = LPC TARGETKIND = LPREFC \end{verbatim} would be used to read in files containing linear prediction coefficients and convert them to reflection coefficients. For convenience, a special parameter kind called \texttt{ANON}\index{anon@\texttt{ANON}} is provided. When the source is specified as \texttt{ANON} then the actual kind of the source is determined from the input file. When \texttt{ANON} is used in the target kind, then it is assumed to be identical to the source.
For example, the effect of the following configuration parameters \begin{verbatim} SOURCEKIND = ANON TARGETKIND = ANON_D \end{verbatim} would simply be to add delta coefficients to whatever the source form happened to be. The source and target parameter kinds default to \texttt{ANON} to indicate that by default no input conversions are performed. Note, however, that where two or more files are listed on the command line, the meaning of \texttt{ANON} will not be re-interpreted from one file to the next. Thus, it is a general rule that any tool reading multiple source speech files requires that all the files have the same parameter kind. The conversions applied by \HTK's input subsystem can be complex and may not always behave exactly as expected. There are two facilities that can be used to help check and debug the set-up of the speech i/o configuration parameters. Firstly, the tool \htool{HList} simply displays speech data by listing it on the terminal. However, since \htool{HList} uses the speech input subsystem like all \HTK\ tools, if a value for \texttt{TARGETKIND} is set, then it will display the target form rather than the source form. This is the simplest way to check the form of the speech data that will actually be delivered to a \HTK\ tool. \htool{HList} is described in more detail in section~\ref{s:UseHList} below. Secondly, trace output can be generated from the \htool{HParm} module by setting the \texttt{TRACE} configuration file parameter. This is a bit-string in which individual bits cover different parts of the conversion processing. The details are given in the reference section. To summarise, speech input in \HTK\ is controlled by configuration parameters. The key parameters are \texttt{SOURCEKIND} and \texttt{TARGETKIND} which specify the source and target parameter kinds. These determine the end-points of the required input conversion. However, to properly specify the detailed steps in between, more configuration parameters must be defined. These are described in subsequent sections. \mysect{Speech Signal Processing}{sigproc} In this section, the basic mechanisms involved in transforming a speech waveform into a sequence of parameter vectors will be described. Throughout this section, it is assumed that the \texttt{SOURCEKIND} is \texttt{WAVEFORM} and that data is being read from a HTK format file via \htool{HWave}. Reading from different format files is described below in section~\ref{s:waveform}. Much of the material in this section also applies to data read direct from an audio device; the additional features needed to deal with this latter case are described later in section~\ref{s:audioio}. \vspace{0.2cm} \index{speech input!blocking} The overall process is illustrated in Fig.~\href{f:Blocking} which shows the sampled waveform being converted into a sequence of parameter blocks. In general, \HTK\ regards both waveform files and parameter files as being just sample sequences, the only difference being that in the former case the samples are 2-byte integers and in the latter they are multi-component vectors. The sample rate of the input waveform will normally be determined from the input file itself. However, it can be set explicitly using the configuration parameter \texttt{SOURCERATE}. The period between each parameter vector determines the output sample rate and it is set using the configuration parameter \texttt{TARGETRATE}.
The segment of waveform used to determine each parameter vector is usually referred to as a window and its size is set by the configuration parameter \texttt{WINDOWSIZE}. Notice that the window size and frame rate are independent. Normally, the window size will be larger than the frame rate so that successive windows overlap as illustrated in Fig.~\href{f:Blocking}. \index{sourcerate@\texttt{SOURCERATE}} \index{targetrate@\texttt{TARGETRATE}} \index{windowsize@\texttt{WINDOWSIZE}} For example, a waveform sampled at 16kHz would be converted into 100 parameter vectors per second using a 25 msec window by setting the following configuration parameters. \begin{verbatim} SOURCERATE = 625 TARGETRATE = 100000 WINDOWSIZE = 250000 \end{verbatim} Remember that all durations are specified in 100 nsec units\footnote{ The somewhat bizarre choice of 100nsec units originated in Version 1 of \HTK\ when times were represented by integers and this unit was the best compromise between precision and range. Times are now represented by doubles and hence the constraints no longer apply. However, the need for backwards compatibility means that 100nsec units have been retained. The names \texttt{SOURCERATE} and \texttt{TARGETRATE} are also non-ideal, \texttt{SOURCEPERIOD} and \texttt{TARGETPERIOD} would be better. }. \sidefig{Blocking}{50}{Speech Encoding Process}{2}{} Independent of what parameter kind is required, there are some simple pre-processing operations that can be applied prior to performing the actual signal analysis.\index{speech input!pre-processing} Firstly, the DC mean can be removed from the source waveform by setting the Boolean configuration parameter \texttt{ZMEANSOURCE}\index{zmeansource@\texttt{ZMEANSOURCE}} to true (i.e.\ \texttt{T}). This is useful when\index{speech input!DC offset} the original analogue-digital conversion has added a DC offset to the signal. It is applied to each window individually so that it can be used both when reading from a file and when using direct audio input\footnote{ This method of applying a zero mean is different to HTK Version 1.5 where the mean was calculated and subtracted from the whole speech file in one operation. The configuration variable \texttt{V1COMPAT} can be set to revert to this older behaviour.}. Secondly, it is common practice to pre-emphasise the signal by applying the first order difference equation \hequation{ {s^{\prime}}_n = s_n - k\,s_{n-1} }{preemp} to the samples\index{speech input!pre-emphasis} $\{s_n, n=1,N \}$ in each window. Here $k$ is the pre-emphasis\index{pre-emphasis} coefficient which should be in the range $0 \leq k < 1$. It is specified using the configuration parameter \texttt{PREEMCOEF}\index{preemcoef@\texttt{PREEMCOEF}}. Finally, it is usually beneficial to taper the samples in each window so that discontinuities at the window edges are attenuated. This is done by setting the Boolean configuration parameter \texttt{USEHAMMING}\index{usehamming@\texttt{USEHAMMING}} to true. This applies the following transformation to the samples $\{s_n, n=1,N\}$ in the window \hequation{ {s^{\prime}}_n = \left\{ 0.54 - 0.46 \cos \left( \frac{2 \pi (n-1)}{N-1} \right) \right\} s_n }{ham} When both pre-emphasis and Hamming windowing are enabled, pre-emphasis is performed first.\index{speech input!Hamming window function} \index{Hamming Window} In practice, all three of the above are usually applied. 
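The fragment below (Python/NumPy) sketches the framing arithmetic implied by these parameters together with the three pre-processing operations; it is an illustration rather than HTK source code, and the treatment of the first sample under pre-emphasis is an assumption that may differ from HTK's own implementation.
\begin{verbatim}
import numpy as np

# all HTK durations are in 100 ns units
SOURCERATE, TARGETRATE, WINDOWSIZE = 625, 100000, 250000
window = WINDOWSIZE // SOURCERATE   # 400 samples (25 ms at 16 kHz)
shift = TARGETRATE // SOURCERATE    # 160 samples (10 ms at 16 kHz)

def preprocess(w, k=0.97):
    # one analysis window: DC removal, pre-emphasis, Hamming taper
    w = w.astype(float)
    w -= w.mean()                   # ZMEANSOURCE = T
    w[1:] -= k * w[:-1]             # s'_n = s_n - k * s_{n-1}
    n = np.arange(len(w))
    return w * (0.54 - 0.46 * np.cos(2 * np.pi * n / (len(w) - 1)))
\end{verbatim}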
Hence, a configuration file will typically contain the following \begin{verbatim} ZMEANSOURCE = T USEHAMMING = T PREEMCOEF = 0.97 \end{verbatim} Certain types of artificially generated waveform data can cause numerical overflows with some coding schemes. In such cases adding a small amount of random noise to the waveform data solves the problem. The noise is added to the samples using \hequation{ {s^{\prime}}_n = s_n + q RND() }{dither} where $RND()$ is a uniformly distributed random value over the interval $[-1.0, +1.0)$ and $q$ is the scaling factor. The amount of noise added to the data ($q$) is set with the configuration parameter \index{adddither@\texttt{ADDDITHER}}\texttt{ADDDITHER} (default value $0.0$). A positive value causes the noise signal added to be the same every time (ensuring that the same file always gives exactly the same results). With a negative value the noise is random and the same file may produce slightly different results in different trials. One problem that can arise when processing speech waveform files obtained from external sources, such as databases on CD-ROM, is that the byte-order\index{byte-order} may be different to that used by the machine on which \HTK\ is running. To deal with this problem, \htool{HWave} can perform automatic byte-swapping in order to preserve proper byte order. \HTK\ assumes by default that speech waveform data is encoded as a sequence of 2-byte integers as is the case for most current speech databases\footnote{Many of the more recent speech databases use compression. In these cases, the data may be regarded as being logically encoded as a sequence of 2-byte integers even if the actual storage uses a variable length encoding scheme.}. If the source format is known, then \htool{HWave} will also make an assumption about the byte order used to create speech files in that format. It then checks the byte order of the machine that it is running on and automatically performs byte-swapping if the order is different. For unknown formats, proper byte order can be ensured by setting the configuration parameter \texttt{BYTEORDER}\index{byteorder@\texttt{BYTEORDER}} to \texttt{VAX} if the speech data was created on a little-endian machine such as a VAX or an IBM PC, and to anything else (e.g. \texttt{NONVAX}) if the speech data was created on a big-endian machine such as a SUN, HP or Macintosh machine. \index{speech input!byte order} The reading/writing of \HTK\ format waveform files can be further controlled via the configuration parameters \texttt{NATURALREADORDER} and \texttt{NATURALWRITEORDER}. The effect and default settings of these parameters are described in section~\href{s:byteswap}. \index{byte swapping} Note that \texttt{BYTEORDER} should not be used when \texttt{NATURALREADORDER} is set to true. Finally, note that \HTK\ can also byte-swap parameterised files in a similar way provided that only the byte-order of each 4 byte float requires inversion. \mysect{Linear Prediction Analysis}{lpcanal} In linear prediction (LP) \index{linear prediction} analysis, the vocal tract transfer function is modelled by an all-pole filter\index{all-pole filter} with transfer function\footnote{ Note that some textbooks define the denominator of equation~\ref{e:allpole} as $1 - \sum_{i=1}^p a_i z^{-i}$ so that the filter coefficients are the negatives of those computed by \HTK.} \hequation{ H(z) = \frac{1}{\sum_{i=0}^p a_i z^{-i}} }{allpole} where $p$ is the number of poles and $a_0 \equiv 1$. 
The filter coefficients $\{a_i \}$ are chosen to minimise the mean square filter prediction error summed over the analysis window. The \HTK\ module \htool{HSigP} uses the \textit{autocorrelation method} to perform this optimisation as follows. Given a window of speech samples $\{s_n, n=1,N \}$, the first $p+1$ terms of the autocorrelation sequence are calculated from \hequation{ r_i = \sum_{j=1}^{N-i} s_j s_{j+i} }{autoco} where $i = 0,p$. The filter coefficients are then computed recursively using a set of auxiliary coefficients $\{k_i\}$ which can be interpreted as the reflection coefficients of an equivalent acoustic tube and the prediction error $E$ which is initially equal to $r_0$. Let $\{k_j^{(i-1)} \}$ and $\{a_j^{(i-1)} \}$ be the reflection and filter coefficients for a filter of order $i-1$, then a filter of order $i$ can be calculated in three steps. Firstly, a new set of reflection coefficients\index{reflection coefficients} are calculated. \hequation{ k_j^{(i)} = k_j^{(i-1)} }{kupdate1} for $j = 1,i-1$ and \hequation{ k_i^{(i)} = \left\{ r_i + \sum_{j=1}^{i-1} a_j^{(i-1)} r_{i-j} \right\} / E^{(i-1)} }{kupdate2} Secondly, the prediction energy is updated. \hequation{ E^{(i)} = (1 - k_i^{(i)} k_i^{(i)} ) E^{(i-1)} }{Eupdate} Finally, new filter coefficients are computed \hequation{ a_j^{(i)} = a_j^{(i-1)} - k_i^{(i)} a_{i-j}^{(i-1)} }{aupdate1} for $j = 1,i-1$ and \hequation{ a_i^{(i)} = - k_i^{(i)} }{aupdate2} This process is repeated from $i=1$ through to the required filter order $i=p$. To effect the above transformation, the target parameter kind must be set to either \texttt{LPC}\index{lpc@\texttt{LPC}} to obtain the LP filter parameters $\{a_i\}$ or \texttt{LPREFC}\index{lprefc@\texttt{LPREFC}} to obtain the reflection coefficients $\{k_i \}$. The required filter order must also be set using the configuration parameter \texttt{LPCORDER}\index{lpcorder@\texttt{LPCORDER}}. Thus, for example, the following configuration settings would produce a target parameterisation consisting of 12 reflection coefficients per vector. \begin{verbatim} TARGETKIND = LPREFC LPCORDER = 12 \end{verbatim} An alternative LPC-based parameterisation is obtained by setting the target kind to \texttt{LPCEPSTRA}\index{lpcepstra@\texttt{LPCEPSTRA}} to generate linear prediction cepstra. The cepstrum of a signal is computed by taking a Fourier (or similar) transform of the log spectrum. In the case of linear prediction cepstra\index{linear prediction!cepstra}, the required spectrum is the linear prediction spectrum which can be obtained from the Fourier transform of the filter coefficients. However, it can be shown that the required cepstra can be more efficiently computed using a simple recursion \hequation{ c_n = -a_n - \frac{1}{n} \sum_{i=1}^{n-1} (n-i) a_i c_{n-i} }{lpcepstra} The number of cepstra generated need not be the same as the number of filter coefficients, hence it is set by a separate configuration parameter called \texttt{NUMCEPS}\index{numceps@\texttt{NUMCEPS}}. The principal advantage of cepstral coefficients is that they are generally decorrelated and this allows diagonal covariances to be used in the HMMs. However, one minor problem with them is that the higher order cepstra are numerically quite small and this results in a very wide range of variances when going from the low to high cepstral coefficients\index{cepstral coefficients!liftering}. 
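The two recursions above can be sketched as follows (Python/NumPy; an illustration of the equations, not the \htool{HSigP} source, and the extension of the cepstral recursion beyond the filter order assumes the usual convention $a_n = 0$ for $n > p$).
\begin{verbatim}
import numpy as np

def autocorr_lpc(s, p):
    # autocorrelation method (Levinson-Durbin recursion)
    N = len(s)
    r = np.array([np.dot(s[:N - i], s[i:]) for i in range(p + 1)])
    a = np.zeros(p + 1)
    a[0] = 1.0
    E = r[0]
    for i in range(1, p + 1):
        k = (r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / E  # reflection coef
        a_new = a.copy()
        a_new[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a_new[i] = -k
        a = a_new
        E *= (1.0 - k * k)          # prediction error update
    return a[1:], E                 # filter coefficients a_1..a_p

def lpc_to_cepstra(a, n_ceps):
    # c_n = -a_n - (1/n) sum_{i=1}^{n-1} (n-i) a_i c_{n-i}
    p = len(a)
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        acc = sum((n - i) * (a[i - 1] if i <= p else 0.0) * c[n - i]
                  for i in range(1, n))
        c[n] = -(a[n - 1] if n <= p else 0.0) - acc / n
    return c[1:]
\end{verbatim}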
\HTK\ does not have a problem with this wide range of variances, but for pragmatic reasons such as displaying model parameters, flooring variances, etc., it is convenient to re-scale the cepstral coefficients to have similar magnitudes. This is done by setting the configuration parameter \texttt{CEPLIFTER}\index{ceplifter@\texttt{CEPLIFTER}} to some value $L$ to \textit{lifter} the cepstra according to the following formula \hequation{ {c^{\prime}}_n = \left( 1 + \frac{L}{2} \sin \frac{\pi n}{L} \right) c_n }{ceplifter} As an example, the following configuration parameters would use a 14'th order linear prediction analysis to generate 12 liftered LP cepstra per target vector \begin{verbatim} TARGETKIND = LPCEPSTRA LPCORDER = 14 NUMCEPS = 12 CEPLIFTER = 22 \end{verbatim} These are typical of the values needed to generate a good front-end parameterisation for a speech recogniser based on linear prediction. \index{cepstral analysis!LPC based}\index{cepstral analysis!liftering coefficient} Finally, note that the conversions supported by \HTK\ are not limited to the case where the source is a waveform. \HTK\ can convert any LP-based parameter into any other LP-based parameter. \mysect{Filterbank Analysis}{fbankanal} The human ear resolves frequencies non-linearly across the audio spectrum and empirical evidence suggests that designing a front-end to operate in a similar non-linear manner improves recognition performance. A popular alternative to linear prediction based analysis is therefore filterbank analysis since this provides a much more straightforward route to obtaining the desired non-linear frequency resolution. However, filterbank amplitudes are highly correlated and hence, the use of a cepstral transformation in this case is virtually mandatory if the data is to be used in a HMM based recogniser with diagonal covariances. \index{cepstral analysis!filter bank} \index{speech input!filter bank} \HTK\ provides a simple Fourier transform based filterbank designed to give approximately equal resolution on a mel-scale. Fig.~\href{f:melfbank} illustrates the general form of this filterbank. As can be seen, the filters used are triangular and they are equally spaced along the mel-scale which is defined by \hequation{ \mbox{Mel}(f) = 2595 \log_{10}(1 + \frac{f}{700}) }{melscale} To implement this filterbank, the window of speech data is transformed\index{mel scale} using a Fourier transform and the magnitude is taken. The magnitude coefficients are then \textit{binned} by correlating them with each triangular filter. Here binning means that each FFT magnitude coefficient is multiplied by the corresponding filter gain and the results accumulated. Thus, each bin holds a weighted sum representing the spectral magnitude in that filterbank channel.\index{binning} As an alternative, the Boolean configuration parameter \texttt{USEPOWER}\index{usepower@\texttt{USEPOWER}} can be set true to use the power rather than the magnitude of the Fourier transform in the binning process. \index{cepstral analysis!power vs magnitude} \centrefig{melfbank}{110}{Mel-Scale Filter Bank} \index{speech input!bandpass filtering} Normally the triangular filters are spread over the whole frequency range from zero up to the Nyquist frequency. However, band-limiting is often useful to reject unwanted frequencies or avoid allocating filters to frequency regions in which there is no useful signal energy.
For filterbank analysis only, lower and upper frequency cut-offs can be set using the configuration parameters \texttt{LOFREQ}\index{lofreq@\texttt{LOFREQ}} and \texttt{HIFREQ}\index{hifreq@\texttt{HIFREQ}}. For example, \begin{verbatim} LOFREQ = 300 HIFREQ = 3400 \end{verbatim} might be used for processing telephone speech. When low and high pass cut-offs are set in this way, the specified number of filterbank channels are distributed equally on the mel-scale across the resulting pass-band such that the lower cut-off of the first filter is at \texttt{LOFREQ} and the upper cut-off of the last filter is at \texttt{HIFREQ}. If mel-scale filterbank parameters are required directly, then the target kind should be set to \texttt{MELSPEC}\index{melspec@\texttt{MELSPEC}}. Alternatively, log filterbank parameters can be generated by setting the target kind to \texttt{FBANK}. \mysect{Vocal Tract Length Normalisation}{vtln} A simple speaker normalisation technique can be implemented by modifying the filterbank analysis described in the previous section. Vocal tract length normalisation (VTLN) aims to compensate for the fact that speakers have vocal tracts of different sizes. VTLN can be implemented by warping the frequency axis in the filterbank analysis. In \HTK\ simple linear frequency warping is supported. The warping factor~$\alpha$ is controlled by the configuration variable \texttt{WARPFREQ}\index{melspec@\texttt{WARPFREQ}}. Here values of $\alpha < 1.0$ correspond to a compression of the frequency axis. As the warping would lead to some filters being placed outside the analysis frequency range, the simple linear warping function is modified at the upper and lower boundaries. The result is that the lower boundary frequency of the analysis (\texttt{LOFREQ}\index{melspec@\texttt{LOFREQ}}) and the upper boundary frequency (\texttt{HIFREQ}\index{melspec@\texttt{HIFREQ}}) are always mapped to themselves. The regions in which the warping function deviates from the linear warping with factor~$\alpha$ are controlled with the two configuration variables (\texttt{WARPLCUTOFF}\index{melspec@\texttt{WARPLCUTOFF}}) and (\texttt{WARPUCUTOFF}\index{melspec@\texttt{WARPUCUTOFF}}). Figure~\href{f:vtlnpiecewise} shows the overall shape of the resulting piece-wise linear warping functions. \centrefig{vtlnpiecewise}{60}{Frequency Warping} The warping factor~$\alpha$ can, for example, be found using a search procedure that compares likelihoods at different warping factors. A typical procedure would involve recognising an utterance with $\alpha=1.0$ and then performing forced alignment of the hypothesis for all warping factors in the range $0.8 - 1.2$. The factor that gives the highest likelihood is selected as the final warping factor. Instead of estimating a separate warping factor for each utterance, larger units can be used, for example by estimating only one~$\alpha$ per speaker. Vocal tract length normalisation can be applied in testing as well as in training the acoustic models. \mysect{Cepstral Features}{cepstrum} Most often, however, cepstral parameters are required and these are indicated by setting the target kind to \texttt{MFCC} standing for Mel-Frequency Cepstral Coefficients (MFCCs).
These are calculated from the log filterbank amplitudes $\{m_j\}$ using the Discrete Cosine Transform \hequation{ c_i = \sqrt{\frac{2}{N}} \sum_{j=1}^N m_j \cos \left( \frac{\pi i}{N}(j-0.5) \right) }{dct} where $N$ is the number of filterbank channels set by the configuration parameter \texttt{NUMCHANS}\index{numchans@\texttt{NUMCHANS}}. The required number of cepstral coefficients is set by \texttt{NUMCEPS}\index{numceps@\texttt{NUMCEPS}} as in the linear prediction case. Liftering can also be applied to MFCCs using the \texttt{CEPLIFTER}\index{ceplifter@\texttt{CEPLIFTER}} configuration parameter (see equation~\ref{e:ceplifter}). MFCCs are the parameterisation of choice for many speech recognition applications. They give good discrimination and lend themselves to a number of manipulations. In particular, the effect of inserting a transmission channel on the input speech is to multiply the speech spectrum by the channel transfer function. In the log cepstral domain, this multiplication becomes a simple addition which can be removed by subtracting the cepstral mean from all input vectors. In practice, of course, the mean has to be estimated over a limited amount of speech data so the subtraction will not be perfect. Nevertheless, this simple technique is very effective in practice where it compensates for long-term spectral effects such as those caused by different microphones and audio channels. To perform this so-called \textit{Cepstral Mean Normalisation} (CMN) in \HTK\, it is only necessary to add the \texttt{\_Z}\index{qualifiers!aaaz@\texttt{\_Z}} qualifier to the target parameter kind. The mean is estimated by computing the average of each cepstral parameter across each input speech file. Since this cannot be done with live audio, cepstral mean compensation is not supported for this case. \index{cepstral mean normalisation} In addition to mean normalisation, the variance of the data can be normalised. For improved robustness both mean and variance of the data should be calculated on larger units (e.g.\ on all the data from a speaker instead of just on a single utterance). To use speaker-/cluster-based normalisation the mean and variance estimates are computed offline before the actual recognition and stored in separate files (two files per cluster). The configuration variables \texttt{CMEANDIR}\index{numchans@\texttt{CMEANDIR}} and \texttt{VARSCALEDIR}\index{numchans@\texttt{VARSCALEDIR}} point to the directories where these files are stored. To find the actual filename a second set of variables (\texttt{CMEANMASK}\index{numchans@\texttt{CMEANMASK}} and \texttt{VARSCALEMASK}\index{numchans@\texttt{VARSCALEMASK}}) has to be specified. These masks are regular expressions in which you can use the special characters \texttt{?}, \texttt{*} and \texttt{\%}. The appropriate mask is matched against the filename of the file to be recognised and the substring that was matched against the \texttt{\%} characters is used as the filename of the normalisation file.
An example config setting is: \begin{verbatim} CMEANDIR = /data/eval01/plp/cmn CMEANMASK = %%%%%%%%%%_* VARSCALEDIR = /data/eval01/plp/cvn VARSCALEMASK = %%%%%%%%%%_* VARSCALEFN = /data/eval01/plp/globvar \end{verbatim} So, if the file \verb|sw1-4930-B_4930Bx-sw1_000126_000439.plp| is to be recognised then the normalisation estimates would be loaded from the following files: \begin{verbatim} /data/eval01/plp/cmn/sw1-4930-B /data/eval01/plp/cvn/sw1-4930-B \end{verbatim} The file specified by \texttt{VARSCALEFN}\index{numchans@\texttt{VARSCALEFN}} contains the global target variance vector, i.e. the variance of the data is first normalised to 1.0 based on the estimate in the appropriate file in \texttt{VARSCALEDIR}\index{numchans@\texttt{VARSCALEDIR}} and then scaled to the target variance given in \texttt{VARSCALEFN}\index{numchans@\texttt{VARSCALEFN}}. The format of the files is very simple and each of them just contains one vector. Note that in the case of the cepstral mean only the static coefficients will be normalised. A cmn file could for example look like: \begin{verbatim} <CEPSNORM> <PLP_0> <MEAN> 13 -10.285290 -9.484871 -6.454639 ... \end{verbatim} The cepstral variance normalisation always applies to the full observation vector after all qualifiers like delta and acceleration coefficients have been added, e.g.: \begin{verbatim} <CEPSNORM> <PLP_D_A_Z_0> <VARIANCE> 39 33.543018 31.241779 36.076199 ... \end{verbatim} The global variance vector will always have the same number of dimensions as the cvn vector, e.g.: \begin{verbatim} <VARSCALE> 39 2.974308e+01 4.143743e+01 3.819999e+01 ... \end{verbatim} These estimates can be generated using \htool{HCompV}. See the reference section for details. \mysect{Perceptual Linear Prediction}{plp} An alternative to the Mel-Frequency Cepstral Coefficients is the use of Perceptual Linear Prediction (PLP) coefficients. As implemented in HTK the PLP feature extraction is based on the standard mel-frequency filterbank (possibly warped). The mel filterbank coefficients are weighted by an equal-loudness curve and then compressed by taking the cubic root.\footnote{The degree of compression can be controlled by setting the configuration parameter \texttt{COMPRESSFACT}\index{enormalise@\texttt{COMPRESSFACT}}, which is the power to which the amplitudes are raised and defaults to 0.33.} From the resulting auditory spectrum LP coefficients are estimated, which are then converted to cepstral coefficients in the normal way (see above). \mysect{Energy Measures}{energy} \index{speech input!energy measures} To augment the spectral parameters derived from linear prediction or mel-filterbank analysis, an energy term can be appended by including the qualifier \texttt{\_E}\index{qualifiers!aaae@\texttt{\_E}} in the target kind. The energy is computed as the log of the signal energy, that is, for speech samples $\{s_n, n=1,N \}$ \hequation{ E = \log \sum_{n=1}^N s_n^2 }{logenergy} This log energy measure can be normalised to the range $-E_{min}..1.0$ by setting the Boolean configuration parameter \texttt{ENORMALISE}\index{enormalise@\texttt{ENORMALISE}} to true (default setting). This normalisation is implemented by subtracting the maximum value of $E$ in the utterance and adding $1.0$. Note that energy normalisation is incompatible with live audio input and in such circumstances the configuration variable \texttt{ENORMALISE} should be explicitly set false.
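As an illustration, the log energy computation and its normalisation might be sketched as follows (Python/NumPy; the silence floor and scaling used here are described in the next paragraph, and the ordering of the operations is an assumption rather than a statement about HTK's internals).
\begin{verbatim}
import numpy as np

def log_energies(frames, enormalise=True, silfloor_db=50.0, escale=0.1):
    # frames: (n_frames, window_size) array of windowed samples
    E = np.log(np.sum(frames ** 2, axis=1))     # E = log sum s_n^2
    if enormalise:
        floor = E.max() - silfloor_db / 10.0 * np.log(10.0)
        E = np.maximum(E, floor)                # clamp to SILFLOOR
        E = E - E.max() + 1.0                   # subtract max, add 1.0
    return escale * E                           # scale by ESCALE
\end{verbatim}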
The lowest energy in the utterance can be clamped using the configuration parameter \texttt{SILFLOOR}\index{silfloor@\texttt{SILFLOOR}} which gives the ratio between the maximum and minimum energies in the utterance in dB. Its default value is 50dB. Finally, the overall log energy can be arbitrarily scaled by the value of the configuration parameter \texttt{ESCALE}\index{escale@\texttt{ESCALE}} whose default is $0.1$. \index{silence floor} When calculating energy for LPC-derived parameterisations, the default is to use the zero-th delay autocorrelation coefficient ($r_0$). However, this means that the energy is calculated after windowing and pre-emphasis. If the configuration parameter \texttt{RAWENERGY}\index{rawenergy@\texttt{RAWENERGY}} is set true, however, then energy is calculated separately before any windowing or pre-emphasis regardless of the requested parameterisation\footnote{ In any event, setting the compatibility variable \texttt{V1COMPAT} to true in \htool{HPARM} will ensure that the calculation of energy is compatible with that computed by the Version 1 tool \htool{HCode}. }. In addition to, or in place of, the log energy, the qualifier \texttt{\_O}\index{qualifiers!aaao@\texttt{\_O}} can be added to a target kind to indicate that the 0'th cepstral parameter $C_0$ is to be appended. This qualifier is only valid if the target kind is \texttt{MFCC}. Unlike earlier versions of \HTK\, scaling factors set by the configuration variable \texttt{ESCALE} are not applied to $C_0$\footnote{ Unless \texttt{V1COMPAT} is set to true. }. \mysect{Delta, Acceleration and Third Differential Coefficients}{delta} \index{speech input!dynamic coefficients} The performance of a speech recognition system can be greatly enhanced by adding time derivatives to the basic static parameters. In \HTK, these are indicated by attaching qualifiers to the basic parameter kind. The qualifier \texttt{\_D} indicates that first order regression coefficients (referred to as delta coefficients) are appended, the qualifier \texttt{\_A}\index{qualifiers!aaaa@\texttt{\_A}} indicates that second order regression coefficients (referred to as acceleration coefficients) are appended, and the qualifier \texttt{\_T}\index{qualifiers!aaaa@\texttt{\_T}} indicates that third order regression coefficients (referred to as third differential coefficients) are appended. The \texttt{\_A} qualifier cannot be used without also using the \texttt{\_D}\index{qualifiers!aaad@\texttt{\_D}} qualifier. Similarly the \texttt{\_T} qualifier cannot be used without also using the \texttt{\_D} and \texttt{\_A} qualifiers. The delta coefficients\index{delta coefficients} are computed using the following regression formula\index{regression formula} \hequation{ d_t = \frac{ \sum_{\theta =1}^\Theta \theta(c_{t+\theta} - c_{t-\theta}) }{ 2 \sum_{\theta = 1}^\Theta \theta^2 } }{deltas} where $d_t$ is a delta coefficient at time $t$ computed in terms of the corresponding static coefficients $c_{t-\Theta}$ to $c_{t+\Theta}$. The value of $\Theta$ is set using the configuration parameter \texttt{DELTAWINDOW}\index{deltawindow@\texttt{DELTAWINDOW}}. The same formula is applied to the delta coefficients to obtain acceleration coefficients except that in this case the window size is set by \texttt{ACCWINDOW}\index{accwindow@\texttt{ACCWINDOW}}. Similarly the third differentials use \texttt{THIRDWINDOW}. Since equation~\ref{e:deltas} relies on past and future speech parameter values, some modification is needed at the beginning and end of the speech.
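A sketch of this regression (Python/NumPy; not HTK source code) using the default end-effect behaviour described in the next paragraph, namely replicating the first and last vectors, is:
\begin{verbatim}
import numpy as np

def deltas(C, theta=2):
    # C: (T, n) array of static vectors; theta is DELTAWINDOW
    T = len(C)
    pad = np.concatenate([np.repeat(C[:1], theta, axis=0), C,
                          np.repeat(C[-1:], theta, axis=0)])
    denom = 2.0 * sum(t * t for t in range(1, theta + 1))
    D = np.zeros_like(C, dtype=float)
    for t in range(1, theta + 1):
        D += t * (pad[theta + t:theta + t + T] -
                  pad[theta - t:theta - t + T])
    return D / denom

# acceleration coefficients: the same regression applied to the deltas,
# with the window set by ACCWINDOW, e.g. deltas(deltas(C, 2), accwindow)
\end{verbatim}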
The default behaviour is to replicate the first or last vector as needed to fill the regression window. In version 1.5 of \HTK\ and earlier, this end-effect problem was solved by using simple first order differences at the start and end of the speech, that is \begin{equation} d_t = c_{t+1} - c_t,\;\;\; t<\Theta \end{equation} and \begin{equation} d_t = c_t - c_{t-1}, \;\;\; t \geq T-\Theta \end{equation} where $T$ is the length of the data file. If required, this older behaviour can be restored by setting the configuration variable \texttt{V1COMPAT}\index{v1compat@\texttt{V1COMPAT}} to true in \htool{HParm}. For some purposes, it is useful to use simple differences throughout. This can be achieved by setting the configuration variable \texttt{SIMPLEDIFFS}\index{simplediffs@\texttt{SIMPLEDIFFS}} to true in \htool{HParm}. In this case, just the end-points of the delta window are used, i.e. \hequation{ d_t = \frac{ (c_{t+\Theta} - c_{t-\Theta}) }{ 2 \Theta} }{simdiffs} \index{simple differences} When delta and acceleration coefficients are requested, they are computed for all static parameters including energy if present. In some applications, the absolute energy is not useful but time derivatives of the energy may be. By including the \texttt{\_E} qualifier together with the \texttt{\_N}\index{qualifiers!aaan@\texttt{\_N}} qualifier, the absolute energy is suppressed leaving just the delta and acceleration coefficients of the energy. \mysect{Storage of Parameter Files}{parmstore} Whereas \HTK\ can handle waveform data in a variety of file formats, all parameterised speech data is stored externally in either native \HTK\ format data files or Entropic Esignal format files. Entropic ESPS format is no longer supported directly, but input and output filters can be used to convert ESPS to Esignal format on input and Esignal to ESPS on output. \subsection{\HTK\ Format Parameter Files} \HTK\ format files consist of a contiguous sequence of \textit{samples} preceded by a header. Each sample is a vector of either 2-byte integers or 4-byte floats. 2-byte integers are used for compressed forms as described below and for vector quantised data as described later in section~\ref{s:vquant}. \HTK\ format data files can also be used to store speech waveforms as described in section~\ref{s:waveform}. \index{file formats!HTK} The \HTK\ file format header is 12 bytes long and contains the following data \begin{tabbing} ++ \= +++++++++ \= \kill \>\texttt{nSamples}\>-- number of samples in file (4-byte integer) \\ \>\texttt{sampPeriod}\>-- sample period in 100ns units (4-byte integer) \\ \>\texttt{sampSize}\>-- number of bytes per sample (2-byte integer) \\ \>\texttt{parmKind}\>-- a code indicating the sample kind (2-byte integer) \end{tabbing} The parameter kind\index{parameter kind} consists of a 6 bit code representing the basic parameter kind plus additional bits for each of the possible qualifiers\index{qualifiers}.
The basic parameter kind codes are \begin{tabbing} ++++\= +++ \= ++++++++ \= \kill \>0 \> \texttt{WAVEFORM} \> sampled waveform \\ \>1 \> \texttt{LPC} \> linear prediction filter coefficients \\ \>2 \> \texttt{LPREFC} \> linear prediction reflection coefficients \\ \>3 \> \texttt{LPCEPSTRA} \> LPC cepstral coefficients \\ \>4 \> \texttt{LPDELCEP} \> LPC cepstra plus delta coefficients \\ \>5 \> \texttt{IREFC} \> LPC reflection coef in 16 bit integer format \\ \>6 \> \texttt{MFCC} \> mel-frequency cepstral coefficients \\ \>7 \> \texttt{FBANK} \> log mel-filter bank channel outputs \\ \>8 \> \texttt{MELSPEC} \> linear mel-filter bank channel outputs \\ \>9 \> \texttt{USER} \> user defined sample kind \\ \>10 \> \texttt{DISCRETE} \> vector quantised data \\ \>11 \> \texttt{PLP} \> PLP cepstral coefficients \\ \end{tabbing} and the bit-encoding for the qualifiers (in octal) is \begin{tabbing} ++++\= +++ \= ++++++++ \= \kill \>\texttt{\_E} \> 000100 \> has energy \\ \>\texttt{\_N} \> 000200 \> absolute energy suppressed \\ \>\texttt{\_D} \> 000400 \> has delta coefficients \\ \>\texttt{\_A} \> 001000 \> has acceleration coefficients\\ \>\texttt{\_C} \> 002000 \> is compressed \\ \>\texttt{\_Z} \> 004000 \> has zero mean static coef. \\ \>\texttt{\_K} \> 010000 \> has CRC checksum \\ \>\texttt{\_O} \> 020000 \> has 0'th cepstral coef. \\ \>\texttt{\_V} \> 040000 \> has VQ data \\ \>\texttt{\_T} \> 100000 \> has third differential coef. \\ \end{tabbing}\index{qualifiers!codes} The \texttt{\_A} qualifier can only be specified when \texttt{\_D} is also specified. The \texttt{\_N} qualifier is only valid when both energy and delta coefficients are present. The sample kind \texttt{LPDELCEP} is identical to \texttt{LPCEPSTRA\_D} and is retained for compatibility with older versions of \HTK. The \texttt{\_C}\index{qualifiers!aaac@\texttt{\_C}} and \texttt{\_K}\index{qualifiers!aaak@\texttt{\_K}} qualifiers only exist in external files. Compressed files are always decompressed on loading and any attached CRC is checked and removed. An external file can contain both an energy term and a 0'th order cepstral coefficient. These may be retained on loading but normally one or the other is discarded\footnote{ Some applications may require the 0'th order cepstral coefficient in order to recover the filterbank coefficients from the cepstral coefficients.}. \putfig{HTKFormat}{130}{Parameter Vector Layout in \HTK\ Format Files} All parameterised forms of \HTK\ data files consist of a sequence of vectors. Each vector is organised as shown by the examples in Fig.~\href{f:HTKFormat} where various different qualified forms are listed. As can be seen, an energy value, if present, immediately follows the base coefficients. If delta coefficients are added, these follow the base coefficients and energy value. Note that the base form \texttt{LPC} is used in this figure only as an example; the same layout applies to all base sample kinds. If the 0'th order cepstral coefficient is included as well as energy then it is inserted immediately before the energy coefficient, otherwise it replaces it. For external storage of speech parameter files, two compression methods are provided. For LP coding only, the \texttt{IREFC} parameter kind exploits the fact that the reflection coefficients are bounded by $\pm 1$ and hence they can be stored as scaled integers such that $+1.0$ is stored as $32767$ and $-1.0$ is stored as $-32767$.
For other types of parameterisation, a more general compression facility indicated by the \texttt{\_C}\index{qualifiers!aaac@\texttt{\_C}} qualifier is used. \HTK\ compressed parameter files consist of a set of compressed parameter vectors stored as shorts such that for parameter $x$
\begin{eqnarray}
x_{short} & = & A*x_{float}-B \nonumber
\end{eqnarray}
The coefficients $A$ and $B$ are defined as
\begin{eqnarray}
A & = & 2*I/(x_{max}-x_{min}) \nonumber\\
B & = & (x_{max}+x_{min})*I/(x_{max}-x_{min}) \nonumber
\end{eqnarray}
where $x_{max}$ is the maximum value of parameter $x$ in the whole file and $x_{min}$ is the corresponding minimum. $I$ is the maximum range of a 2-byte integer i.e.\ 32767. The values of $A$ and $B$ are stored as two floating point vectors at the start of the data, immediately after the header.

When a \HTK\ tool writes out a speech file to external storage, no further signal conversions are performed. Thus, for most purposes, the target parameter kind specifies both the required internal representation and the form of the written output, if any. However, there is a distinction in the way that the external data is actually stored. Firstly, it can be compressed as described above by setting the configuration parameter \texttt{SAVECOMPRESSED} to true. If the target kind is \texttt{LPREFC} then this compression is implemented by converting to \texttt{IREFC}, otherwise the general compression algorithm described above is used. Secondly, in order to avoid data corruption problems, externally stored \HTK\ parameter files can have a cyclic redundancy checksum appended. This is indicated by the qualifier \texttt{\_K}\index{qualifiers!aaak@\texttt{\_K}} and it is generated by setting the configuration parameter \texttt{SAVEWITHCRC} to true. The principal tool which uses these output conversions is \htool{HCopy} (see section~\ref{s:UseHCopy}).
\subsection{Esignal Format Parameter Files}
\index{file formats!Esignal} The default for parameter files is native \HTK\ format. However, \HTK\ tools also support the Entropic Esignal format for both input and output. Esignal replaces the Entropic ESPS file format. To ensure compatibility, Entropic provides conversion programs from ESPS to ESIG and vice versa. To indicate that a source file is in Esignal format the configuration variable \texttt{SOURCEFORMAT}\index{sourceformat@\texttt{SOURCEFORMAT}} should be set to \texttt{ESIG}. Alternatively, \texttt{-F ESIG}\index{standard options!aaaf@\texttt{-F}} can be specified as a command-line option. To generate Esignal format output files, the configuration variable \texttt{TARGETFORMAT} should be set to \texttt{ESIG} or the command line option \texttt{-O ESIG} should be set.

ESIG files consist of three parts: a preamble, a sequence of field specifications called the field list and a sequence of records. The preamble and the field list together constitute the header. The preamble is purely ASCII. Currently it consists of 6 information items that are all terminated by a new line.
The information in the preamble is the following:
\begin{tabbing}
++ \= +++++++++ \= \kill
\>\texttt{line 1}\>-- identification of the file format \\
\>\texttt{line 2}\>-- version of the file format\\
\>\texttt{line 3}\>-- architecture (ASCII, EDR1, EDR2, machine name)\\
\>\texttt{line 4}\>-- preamble size (48 bytes)\\
\>\texttt{line 5}\>-- total header size\\
\>\texttt{line 6}\>-- record size\\
\end{tabbing}
All ESIG files that are output by \HTK\ programs contain the following global fields:
\begin{description}
\item[commandLine] the command-line used to generate the file;
\item[recordFreq] a double value that indicates the sample frequency in Hertz;
\item[startTime] a double value that indicates the time at which the first sample is presumed to start;
\item[parmKind] a character string that indicates the full type of parameters in the file, e.g.\ \texttt{MFCC\_E\_D}.
\item[source\_1] if the input file was an ESIG file, this field includes the header items in the input file.
\end{description}
After that there are field specifiers for the records. The first specifier is for the basekind of the parameters, e.g.\ \texttt{MFCC}. Then for each available qualifier there are additional specifiers. Possible specifiers are:
\begin{tabbing}
++++\= \kill
\>\texttt{zeroc} \\
\>\texttt{energy}\\
\>\texttt{delta}\\
\>\texttt{delta\_zeroc} \\
\>\texttt{delta\_energy}\\
\>\texttt{accs}\\
\>\texttt{accs\_zeroc} \\
\>\texttt{accs\_energy}\\
\end{tabbing}\index{qualifiers!ESIG field specifiers}
The data segments of the ESIG files have exactly the same format as the corresponding \HTK\ files. This format was described in the previous section. \HTK\ can only input parameter files that have a valid parameter kind as value of the header field \texttt{parmKind}. If this field does not exist or if the value of this field does not contain a valid parameter kind, the file is rejected. After the header has been read the file is treated as an \HTK\ file.
\mysect{Waveform File Formats}{waveform}
For reading waveform data files, \HTK\ can support a variety of different formats and these are all briefly described in this section. The default speech file format is \HTK. If a different format is to be used, it can be specified by setting the configuration parameter \texttt{SOURCEFORMAT}\index{sourceformat@\texttt{SOURCEFORMAT}}. However, since file formats need to be changed often, they can also be set individually via the \texttt{-F}\index{standard options!aaaf@\texttt{-F}} command-line option. This over-rides any setting of the \texttt{SOURCEFORMAT} configuration parameter.

Similarly for the output of waveforms, the format can be set using either the configuration parameter \texttt{TARGETFORMAT} or the \texttt{-O} command-line option. However, for output only native \HTK\ format (\texttt{HTK}), Esignal format (\texttt{ESIG}) and headerless (\texttt{NOHEAD}) waveform files are supported.

The following sub-sections give a brief description of each of the waveform file formats supported by \HTK.
\subsection{HTK File Format}
\index{file formats!HTK} The \HTK\ file format for waveforms is identical to that described in section~\ref{s:parmstore} above. It consists of a 12 byte header followed by a sequence of 2 byte integer speech samples. For waveforms, the \texttt{sampSize} field will be 2 and the \texttt{parmKind} field will be 0.
The \texttt{sampPeriod} field gives the sample period in 100ns units, hence for example, it will have the value 1000 for speech files sampled at 10kHz and 625 for speech files sampled at 16kHz. \subsection{Esignal File Format} \index{file formats!Esignal} The Esignal file format for waveforms is similar to that described in section~\ref{s:parmstore} above with the following exceptions. When reading an ESIG waveform file the \HTK\ programs only check whether the record length equals 2 and whether the datatype of the only field in the data records is \texttt{SHORT}. The data field that is created on output of a waveform is called \texttt{WAVEFORM}. \subsection{TIMIT File Format} \index{file formats!TIMIT} The TIMIT format has the same structure as the HTK format except that the 12-byte header contains the following \begin{tabbing} ++ \= +++++++++ \= \kill \>\texttt{hdrSize}\>-- number of bytes in header ie 12 (2-byte integer) \\ \>\texttt{version}\>-- version number (2-byte integer) \\ \>\texttt{numChannels}\>-- number of channels (2-byte integer) \\ \>\texttt{sampRate}\>-- sample rate (2-byte integer) \\ \>\texttt{nSamples}\>-- number of samples in file (4-byte integer) \end{tabbing} TIMIT format data is used only on the prototype TIMIT CD ROM. \subsection{NIST File Format} \index{file formats!NIST} The NIST file format is also referred to as the Sphere file format. A NIST header consists of ASCII text. It begins with a label of the form \texttt{NISTxx} where xx is a version code followed by the number of bytes in the header. The remainder of the header consists of name value pairs of which \HTK\ decodes the following \begin{tabbing} ++ \= +++++++++++++ \= \kill \>\texttt{sample\_rate} \>-- sample rate in Hz \\ \>\texttt{sample\_n\_bytes} \>-- number of bytes in each sample \\ \>\texttt{sample\_count} \>-- number of samples in file \\ \>\texttt{sample\_byte\_format} \>-- byte order \\ \>\texttt{sample\_coding} \>-- speech coding eg pcm, $\mu$law, shortpack \\ \>\texttt{channels\_interleaved} \>-- for 2 channel data only \end{tabbing} The current NIST Sphere data format\index{NIST Sphere data format} subsumes a variety of internal data organisations. HTK currently supports interleaved $\mu$law used in Switchboard, Shortpack compression used in the original version of WSJ0 and standard 16bit linear PCM as used in Resource Management, TIMIT, etc. It does not currently support the Shorten compression format as used in WSJ1 due to licensing restrictions. Hence, to read WSJ1, the files must be converted using the NIST supplied decompression routines into standard 16 bit linear PCM. This is most conveniently done under UNIX by using the decompression program as an input filter set via the environment variable \texttt{HWAVEFILTER}\index{hwavefilter@\texttt{HWAVEFILTER}} (see section~\ref{s:iopipes}). For interleaved $\mu$law as used in Switchboard, the default is to add the two channels together. The left channel only can be obtained by setting the environment variable \texttt{STEREOMODE} to \texttt{LEFT} and the right channel only can be obtained by setting the environment variable \texttt{STEREOMODE} to \texttt{RIGHT}. \index{mu law encoded files } \subsection{SCRIBE File Format} \index{file formats!SCRIBE} The SCRIBE format is a subset of the standard laid down by the European Esprit Programme SAM Project. SCRIBE data files are headerless and therefore consist of just a sequence of 16 bit sample values. \HTK\ assumes by default that the sample rate is 20kHz. 
The configuration parameter \texttt{SOURCERATE} should be set to over-ride this. The byte ordering assumed for SCRIBE data files is \texttt{VAX} (little-endian).
\subsection{SDES1 File Format}
\index{file formats!Sound Designer(SDES1)} The SDES1 format refers to the ``Sound Designer I'' format defined by Digidesign Inc in 1985 for multimedia and general audio applications. It is used for storing short monaural sound samples. The SDES1 header is complex (1336 bytes) since it allows for associated display window information to be stored in it as well as providing facilities for specifying repeat loops. The \HTK\ input routine for this format just picks out the following information
\begin{tabbing}
++ \= +++++++++ \= \kill
\>\texttt{headerSize} \>-- size of header ie 1336 (2 byte integer) \\
\>(182 byte filler) \\
\>\texttt{fileSize} \>-- number of bytes of sampled data (4 byte integer)\\
\>(832 byte filler) \\
\>\texttt{sampRate} \>-- sample rate in Hz (4 byte integer) \\
\>\texttt{sampPeriod} \>-- sample period in microseconds (4 byte integer) \\
\>\texttt{sampSize} \>-- number of bits per sample ie 16 (2 byte integer)
\end{tabbing}
\subsection{AIFF File Format}
\index{file formats!Audio Interchange (AIFF)} The AIFF format was defined by Apple Computer for storing monaural and multichannel sampled sounds. An AIFF file consists of a number of {\it chunks}. A {\it Common} chunk contains the fundamental parameters of the sound (sample rate, number of channels, etc) and a {\it Sound Data} chunk contains sampled audio data. \HTK\ only partially supports AIFF since some of the information in it is stored as floating point numbers. In particular, the sample rate is stored in this form and to avoid portability problems, \HTK\ ignores the given sample rate and assumes that it is 16kHz. If this default rate is incorrect, then the true sample period should be specified by setting the \texttt{SOURCERATE} configuration parameter. Full details of the AIFF format are available from Apple Developer Technical Support.
\subsection{SUNAU8 File Format}
\index{file formats!Sun audio (SUNAU8)} The SUNAU8 format defines a subset of the ``.au'' and ``.snd'' audio file format used by Sun and NeXT. A SUNAU8 speech data file consists of a header followed by 8 bit $\mu$law encoded speech samples. The header is 28 bytes and contains the following fields, each of which is 4 bytes
\begin{tabbing}
++ \= +++++++++ \= \kill
\>\texttt{magicNumber} \>-- magic number 0x2e736e64 \\
\>\texttt{dataLocation} \>-- offset to start of data \\
\>\texttt{dataSize} \>-- number of bytes of data \\
\>\texttt{dataFormat} \>-- data format code which is 1 for 8 bit $\mu$law \\
\>\texttt{sampRate} \>-- a sample rate code which is always 8012.821 Hz \\
\>\texttt{numChan} \>-- the number of channels \\
\>\texttt{info} \>-- arbitrary character string min length 4 bytes
\end{tabbing}
No default byte ordering is assumed for this format. If the data source is known to be different to the machine being used, then the environment variable \texttt{BYTEORDER} must be set appropriately. Note that when used on Sun Sparc machines with a 16 bit audio device the sampling rate of 8012.821Hz is not supported and playback will be performed at 8kHz.
\subsection{OGI File Format}
\index{file formats!OGI} The OGI format is similar to TIMIT.
The header contains the following
\begin{tabbing}
++ \= +++++++++ \= \kill
\>\texttt{hdrSize}\>-- number of bytes in header \\
\>\texttt{version}\>-- version number (2-byte integer) \\
\>\texttt{numChannels}\>-- number of channels (2-byte integer) \\
\>\texttt{sampRate}\>-- sample rate (2-byte integer) \\
\>\texttt{nSamples}\>-- number of samples in file (4-byte integer) \\
\>\texttt{lendian}\>-- used to test for byte swapping (4-byte integer)
\end{tabbing}
\subsection{WAV File Format}
\index{file formats!WAV} The WAV file format is a subset of Microsoft's RIFF specification for the storage of multimedia files. A RIFF file starts out with a file header followed by a sequence of data ``chunks''. A WAV file is often just a RIFF file with a single ``WAVE'' chunk which consists of two sub-chunks - a ``fmt'' chunk specifying the data format and a ``data'' chunk containing the actual sample data. The WAV file header contains the following
\begin{tabbing}
++ \= +++++++++ \= \kill
\>\texttt{'RIFF'}\>-- RIFF file identification (4 bytes) \\
\>\texttt{<length>}\>-- length field (4 bytes)\\
\>\texttt{'WAVE'}\>-- WAVE chunk identification (4 bytes) \\
\>\texttt{'fmt '}\>-- format sub-chunk identification (4 bytes) \\
\>\texttt{flength}\>-- length of format sub-chunk (4 byte integer) \\
\>\texttt{format}\>-- format specifier (2 byte integer) \\
\>\texttt{chans}\>-- number of channels (2 byte integer) \\
\>\texttt{sampsRate}\>-- sample rate in Hz (4 byte integer) \\
\>\texttt{bpsec}\>-- bytes per second (4 byte integer) \\
\>\texttt{bpsample}\>-- bytes per sample (2 byte integer) \\
\>\texttt{bpchan}\>-- bits per channel (2 byte integer) \\
\>\texttt{'data'}\>-- data sub-chunk identification (4 bytes) \\
\>\texttt{dlength}\>-- length of data sub-chunk (4 byte integer)
\end{tabbing}
Support is provided for 8-bit CCITT mu-law, 8-bit CCITT a-law, 8-bit PCM linear and 16-bit PCM linear - all in stereo or mono (use of \texttt{STEREOMODE} parameter as per NIST). The default byte ordering assumed for \texttt{WAV} data files is \texttt{VAX} (little-endian).
\subsection{ALIEN and NOHEAD File Formats}
\index{file formats!ALIEN} \index{file formats!NOHEAD} \HTK\ tools can read speech waveform files with alien formats provided that their overall structure is that of a header followed by data. This is done by setting the format to \texttt{ALIEN} and setting the environment variable \texttt{HEADERSIZE} to the number of bytes in the header. \HTK\ will then attempt to infer the rest of the information it needs. However, if input is from a pipe, then the number of samples expected must be set using the environment variable \texttt{NSAMPLES}\index{nsamples@\texttt{NSAMPLES}}. The sample rate of the source file is defined by the configuration parameter \texttt{SOURCERATE} as described in section~\ref{s:sigproc}. If the file has no header then the format \texttt{NOHEAD} may be specified instead of \texttt{ALIEN}\index{alien@\texttt{ALIEN}} in which case \texttt{HEADERSIZE}\index{headersize@\texttt{HEADERSIZE}} is assumed to be zero.
\mysect{Direct Audio Input/Output}{audioio}
\index{speech input!direct audio} Many \HTK\ tools, particularly recognition tools, can input speech waveform data directly from an audio device. The basic mechanism for doing this is to simply specify the \texttt{SOURCEKIND} as being \texttt{HAUDIO}\index{haudio@\texttt{HAUDIO}} following which speech samples will be read directly from the host computer's audio input device.
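For example, a tool can be switched to 16kHz live audio input with a configuration fragment such as the following (the values shown are illustrative):
\begin{verbatim}
# Illustrative fragment: direct audio input at 16kHz
SOURCEKIND = HAUDIO
SOURCERATE = 625       # 625 x 100ns units = 16kHz
\end{verbatim}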
Note that for live audio input, the configuration variable \texttt{ENORMALISE} should be set to false both during training and recognition: energy normalisation cannot be used with live audio input, yet this variable defaults to \texttt{TRUE}. If existing models have been trained with \texttt{ENORMALISE} set to true, they can be retrained using {\it single-pass retraining} (see section~\ref{s:singlepass}).

When using direct audio input\index{direct audio input}, the input sampling rate may be set explicitly using the configuration parameter \texttt{SOURCERATE}, \index{sourcerate@\texttt{SOURCERATE}} otherwise \HTK\ will assume that it has been set by some external means such as an audio control panel. In the latter case, it must be possible for \htool{HAudio} to obtain the sample rate from the audio driver otherwise an error message will be generated.

Although the detailed control of audio hardware is typically machine dependent, \HTK\ provides a number of Boolean configuration variables to request specific input and output sources. These are listed in the following table
\begin{center}\index{audio source}\index{audio output}
\begin{tabular}{|c|l|} \hline
Variable & Source/Sink \\ \hline
\texttt{LINEIN} & line input \\
\texttt{MICIN} & microphone input \\
\texttt{LINEOUT} & line output \\
\texttt{PHONESOUT} & headphones output \\
\texttt{SPEAKEROUT} & speaker output \\ \hline
\end{tabular}
\end{center}
\index{linein@\texttt{LINEIN}} \index{micin@\texttt{MICIN}} \index{lineout@\texttt{LINEOUT}} \index{phonesout@\texttt{PHONESOUT}} \index{speakerout@\texttt{SPEAKEROUT}}
The major complication in using direct audio is in starting and stopping the input device. The simplest approach to this is for \HTK\ tools to take direct control and, for example, enable the audio input for a fixed period determined via a command line option. However, the \htool{HAudio}/\htool{HParm} modules provide two more powerful built-in facilities for audio input control. \index{direct audio input!silence detector!speech detector} The first method of audio input control involves the use of an automatic energy-based speech/silence detector which is enabled by setting the configuration parameter \texttt{USESILDET}\index{usesildet@\texttt{USESILDET}} to true. Note that the speech/silence detector can also operate on waveform input files.

The automatic speech/silence detector uses a two level algorithm which first classifies each frame of data as either speech or silence and then applies a heuristic to determine the start and end of each utterance.\index{HParm!SILENERGY} \index{HParm!SPEECHTHRESH}The detector classifies each frame as speech or silence based solely on the log energy of the signal. When the energy value exceeds a threshold the frame is marked as speech otherwise as silence. The threshold is made up of two components both of which can be set by configuration variables. The first component represents the mean energy level of silence and can be set explicitly via the configuration parameter \texttt{SILENERGY}. However, it is more usual to take a measurement from the environment directly. Setting the configuration parameter \texttt{MEASURESIL} to true will cause the detector to calibrate its parameters from the current acoustic environment just prior to sampling. The second threshold component is the level above which frames are classified as speech (\texttt{SPEECHTHRESH}).
\index{HParm!SPCSEQCOUNT} \index{HParm!SPCGLCHCOUNT} \index{HParm!SILGLCHCOUNT} Once each frame has been classified as speech or silence, the frames are grouped into windows consisting of \texttt{SPCSEQCOUNT} consecutive frames. When the number of frames marked as silence within each window falls below a glitch count, the whole window is classed as speech. Two separate glitch counts are used, {\tt SPCGLCHCOUNT} before speech onset is detected and {\tt SILGLCHCOUNT} whilst searching for the end of the utterance. This allows the algorithm to take account of the tendency for the end of an utterance to be somewhat quieter than the beginning.

\index{HParm!SILMARGIN} \index{HParm!SILSEQCOUNT} Finally, a top level heuristic is used to determine the start and end of the utterance. The heuristic defines the start of speech as the beginning of the first window classified as speech. The actual start of the processed utterance is \texttt{SILMARGIN} frames before the detected start of speech to ensure that when the speech detector triggers slightly late the recognition accuracy is not affected. Once the start of the utterance has been found the detector searches for \texttt{SILSEQCOUNT} windows all classified as silence and sets the end of speech to be the end of the last window classified as speech. Once again the processed utterance is extended \texttt{SILMARGIN} frames to ensure that if the silence detector has triggered slightly early the whole of the speech is still available for further processing.

\centrefig{endpointer}{120}{Endpointer Parameters}
Fig~\href{f:endpointer} shows an example of the speech/silence detection process. The waveform data is first classified as speech or silence at frame level and then at window level before finally the start and end of the utterance are marked. In the example, audio input starts at point {\tt A} and is stopped automatically at point {\tt H}. The start of speech, {\tt C}, occurs when a window of \texttt{SPCSEQCOUNT} frames is classified as speech and the start of the utterance occurs \texttt{SILMARGIN} frames earlier at {\tt B}. The period of silence from {\tt D} to {\tt E} is not marked as the end of the utterance because it is shorter than \texttt{SILSEQCOUNT}. However, after point {\tt F} no more windows are classified as speech (although a few frames are) and so this is marked as the end of speech with the end of the utterance extended to {\tt G}.

\index{direct audio input!signal control!keypress} The second built-in mechanism for controlling audio input is by arranging for a signal to be sent from some other process. Sending the signal for the first time starts the audio device. If the speech detector is not enabled then sampling starts immediately and is stopped by sending the signal a second time. If automatic speech/silence detection is enabled, then the first signal starts the detector. Sampling stops immediately when a second signal is received or when silence is detected. The signal number is set using the configuration parameter \texttt{AUDIOSIG}\index{audiosig@\texttt{AUDIOSIG}}. Keypress control operates in a similar fashion and is enabled by setting the configuration parameter \texttt{AUDIOSIG} to a negative number. In this mode an initial keypress will be required to start sampling/speech detection and a second keypress will stop sampling immediately.

Audio output\index{audio output} is also supported by \HTK. There are no generic facilities for output and the precise behaviour will depend on the tool used.
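Returning to the speech/silence detector, the parameters introduced above might be gathered into a configuration fragment such as the following. The values shown are simply the defaults listed in the summary tables at the end of this chapter.
\begin{verbatim}
# Illustrative fragment: energy-based speech/silence detection
USESILDET    = T      # enable the speech/silence detector
MEASURESIL   = T      # calibrate silence level before sampling
SPEECHTHRESH = 9.0    # speech threshold above silence (dB)
SPCSEQCOUNT  = 10     # window length for speech/silence decision
SILSEQCOUNT  = 100    # silence windows needed to end utterance
SILMARGIN    = 40     # extra frames kept around the utterance
\end{verbatim}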
It should be noted, however, that the audio input facilities provided by \htool{HAudio} include provision for attaching a \textit{replay buffer} to an audio input channel. This is typically used to store the last few seconds of each input to a recognition tool in a circular buffer so that the last utterance input can be replayed on demand.
\mysect{Multiple Input Streams}{streams}
\index{multiple streams} As noted in section~\ref{s:genio}, \HTK\ tools regard the input observation sequence as being divided into a number of independent \textit{data streams}. For building continuous density HMM systems, this facility is of limited use and by far the most common case is that of a single data stream. However, when building tied-mixture systems or when using vector quantisation, a more uniform coverage of the acoustic space is obtained by separating energy, deltas, etc., into separate streams.

This separation of parameter vectors into streams takes place at the point where the vectors are extracted from the converted input file or audio device and transformed into an observation. The tools for HMM construction and for recognition thus view the input data as a sequence of observations but note that this is entirely internal to \HTK. Externally, data is always stored as a single sequence of parameter vectors.

When multiple streams\index{multiple streams!rules for} are required, the division of the parameter vectors is performed automatically based on the parameter kind. This works according to the following rules.
\begin{description}
\item[1 stream] single parameter vector. This is the default case.
\item[2 streams] if the parameter vector contains energy terms, then they are extracted and placed in stream 2. Stream 1 contains the remaining static coefficients and their deltas and accelerations, if any. Otherwise, the parameter vector must have appended delta coefficients and no appended acceleration coefficients. The vector is then split so that the static coefficients form stream 1 and the corresponding delta coefficients form stream 2.
\item[3 streams] if the parameter vector has acceleration coefficients, then the vector is split with static coefficients plus any energy in stream 1, delta coefficients plus any delta energy in stream 2 and acceleration coefficients plus any acceleration energy in stream 3. Otherwise, the parameter vector must include log energy and must have appended delta coefficients. The vector is then split into three parts so that the static coefficients form stream 1, the delta coefficients form stream 2, and the log energy and delta log energy are combined to form stream 3.
\item[4 streams] the parameter vector must include log energy and must have appended delta and acceleration coefficients. The vector is split into 4 parts so that the static coefficients form stream 1, the delta coefficients form stream 2, the acceleration coefficients form stream 3 and the log energy, delta energy and acceleration energy are combined to form stream 4.
\end{description}
In all cases, the static log energy can be suppressed (via the \texttt{\_N}\index{qualifiers!aaan@\texttt{\_N}} qualifier). If none of the above rules apply for some required number of streams, then the parameter vector is simply incompatible with that form of observation. For example, the parameter kind \texttt{LPC\_D\_A} cannot be split into 2 streams; instead, 3 streams should be used.
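To make these rules concrete, the following C fragment computes the width of each stream for a fully qualified parameter vector, that is, one with energy, delta and acceleration coefficients such as \texttt{MFCC\_E\_D\_A}, of $n$ static coefficients. It is a simplified sketch of the rules above, not \htool{HParm} source code.
\begin{verbatim}
/* Sketch: stream widths for n static coefficients with energy,
   delta and acceleration terms (e.g. MFCC_E_D_A).  A simplified
   rendering of the rules above, not HParm code. */
void StreamWidths(int n, int nStreams, int w[4])
{
   switch (nStreams) {
   case 1: w[0] = 3*(n+1);                         break;
   case 2: w[0] = 3*n; w[1] = 3;                   break;
   case 3: w[0] = n+1; w[1] = n+1; w[2] = n+1;     break;
   case 4: w[0] = n; w[1] = n; w[2] = n; w[3] = 3; break;
   }
}
\end{verbatim}
For instance, with $n=12$ and 3 streams, each stream holds 13 values: the statics plus energy, the deltas plus delta energy, and the accelerations plus acceleration energy.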
\index{energy suppression}
\putfig{streams}{100}{Example Stream Construction}
Fig.~\href{f:streams} illustrates the way that streams are constructed for a number of common cases. As earlier, the choice of \texttt{LPC} as the static coefficients is purely for illustration and the same mechanism applies to all base parameter kinds.

As discussed further in the next section, multiple data streams are often used with vector quantised data. In this case, each VQ symbol per input sample is placed in a separate data stream.
\mysect{Vector Quantisation}{vquant}
Although \HTK\ was designed primarily for building continuous density HMM systems, it also supports discrete density HMMs. Discrete HMMs are particularly useful for modelling data which is naturally symbolic. They can also be used with continuous signals such as speech by quantising each speech vector to give a unique VQ symbol for each input frame. The \HTK\ module \htool{HVQ} provides a basic facility for performing this vector quantisation\index{vector quantisation}. The VQ table (or codebook) can be constructed using the \HTK\ tool \htool{HQuant}.

When used with speech, the principal justification for using discrete HMMs is the much reduced computation. However, the use of vector quantisation introduces errors and it can lead to rather fragile systems. For this reason, the use of continuous density systems is generally preferred. To facilitate the use of continuous density systems when there are computational constraints, \HTK\ also allows VQ to be used as the basis for pre-selecting a subset of Gaussian\index{Gaussian pre-selection} components for evaluation at each time frame.

\sidefig{VQUse}{65}{Using Vector Quantisation}{2}{
Fig.~\href{f:VQUse} illustrates the different ways that VQ can be used in \HTK\ for a single data stream. For multiple streams, the same principles are applied to each stream individually. A converted speech waveform or file of parameter vectors can have VQ indices attached simply by specifying the name of a VQ table using the configuration parameter \texttt{VQTABLE}\index{vqtable@\texttt{VQTABLE}} and by adding the \texttt{\_V} qualifier to the target kind. The effect of this is that each \textit{observation} passed to a recogniser can include both a conventional parameter vector and a VQ index. \index{vector quantisation!uses of} \index{qualifiers!aaav@\texttt{\_V}} For continuous density HMM systems, a possible use of this might be to preselect Gaussians for evaluation (but note that \HTK\ does not currently support this facility). When used with a discrete HMM system, the continuous parameter vectors are ignored and only the VQ indices are used. For training and evaluating discrete HMMs, it is convenient to store speech data in vector quantised form. This is done using the tool \htool{HCopy} to read in and vector quantise each speech file. Normally, \htool{HCopy} copies the target form directly into the output file. However, if the configuration parameter \texttt{SAVEASVQ} is set, then it will store only the VQ indices and mark the kind of the newly created file as \texttt{DISCRETE}. Discrete files created in this way can be read directly by \htool{HParm} and the VQ symbols passed directly to a tool as indicated by the lower part of Fig.~\href{f:VQUse}. }
\index{saveasvq@\texttt{SAVEASVQ}} \index{discrete@\texttt{DISCRETE}} \index{vector quantisation!distance metrics}
\htool{HVQ} supports three types of distance metric and two organisations of VQ codebook.
Each codebook consists of a collection of nodes where each node has a mean vector and optionally a covariance matrix or diagonal variance vector. The corresponding distance metric used for each of these is simple Euclidean, full covariance Mahalanobis or diagonal covariance Mahalanobis. The codebook nodes are arranged in the form of a simple linear table or as a binary tree. In the linear case, the input vector is compared with every node in turn and the nearest determines the VQ index. In the binary tree case, each non-terminal node has a left and a right daughter. Starting with the top-most root node, the input is compared with the left and right daughter node and the nearest is selected. This process is repeated until a terminal node is reached. \index{vector quantisation!type of} \index{vector quantisation!code book external format}\index{files!VQ codebook} VQ Tables are stored externally in text files consisting of a header followed by a sequence of node entries. The header consists of the following information \begin{tabbing} ++ \= +++++++ \= + \= \kill \> \textit{magic}\> --\> a magic number usually the original parameter kind \\ \> \textit{type} \> --\> 0 = linear tree, 1 = binary tree \\ \> \textit{mode} \> --\> 1 = diagonal covariance Mahalanobis \\ \>\>\> 2 = full covariance Mahalanobis \\ \>\>\> 5 = Euclidean \\ \> \textit{numNodes} \> --\> total number of nodes in the codebook \\ \> \textit{numS}\> --\> number of independent data streams \\ \> \textit{sw1,sw2,...}\> --\> width of each data stream \\ \end{tabbing} Every node has a unique integer identifier and consists of the following \begin{tabbing} ++ \= +++++++ \= + \= \kill \> \textit{stream}\> --\>stream number for this node \\ \> \textit{vqidx}\> --\>VQ index for this node (0 if non-terminal) \\ \> \textit{nodeId}\> --\>integer id of this node \\ \> \textit{leftId}\> --\>integer id of left daughter node \\ \> \textit{rightId}\> --\>integer id of right daughter node \\ \> \textit{mean}\> --\>mean vector \\ \> \textit{cov}\> --\>diagonal variance or full covariance \\ \end{tabbing} The inclusion of the optional variance vector or covariance matrix depends on the mode in the header. If present they are stored in inverse form. In a binary tree, the root id is always 1. In linear codebooks, the left and right daughter node id's are ignored. \mysect{Viewing Speech with \htool{HList}}{UseHList} \index{speech input!monitoring} As mentioned in section~\ref{s:genio}, the tool \htool{HList}\index{hlist@\htool{HList}} provides a dual r\^{o}le in \HTK. Firstly, it can be used for examining the contents of speech data files. In general, \htool{HList} displays three types of information \begin{enumerate} \item \textit{source header}: requested using the \texttt{-h} option \item \textit{target header}: requested using the \texttt{-t} option \item \textit{target data}: printed by default. The begin and end samples of the displayed data can be specified using the \texttt{-s} and \texttt{-e} options. \end{enumerate} When the default configuration parameters are used, no conversions are applied and the target data is identical to the contents of the file. \index{files!listing contents} As an example, suppose that the file called \texttt{timit.wav} holds speech waveform data using the TIMIT format. The command \begin{verbatim} HList -h -e 49 -F TIMIT timit.wav \end{verbatim} would display the source header information and the first 50 samples of the file. 
The output would look something like the following \begin{list}{}{\setlength{\leftmargin}{-1cm}} \item \begin{verbatim} ----------------------------- Source: timit.wav --------------------------- Sample Bytes: 2 Sample Kind: WAVEFORM Num Comps: 1 Sample Period: 62.5 us Num Samples: 31437 File Format: TIMIT ------------------------------ Samples: 0->49 ----------------------------- 0: 8 -4 -1 0 -2 -1 -3 -2 0 0 10: -1 0 -1 -2 -1 1 0 -1 -2 1 20: -2 0 0 0 2 1 -2 2 1 0 30: 1 0 0 -1 4 2 0 -1 4 0 40: 2 2 1 -1 -1 1 1 2 1 1 ------------------------------------ END ---------------------------------- \end{verbatim} \end{list} The source information confirms that the file contains \texttt{WAVEFORM} data with 2 byte samples and 31437 samples in total. The sample period is $62.5\mu s$ which corresponds to a 16kHz sample rate. The displayed data is numerically small because it corresponds to leading silence. Any part of the file could be viewed by suitable choice of the begin and end sample indices. For example, \begin{verbatim} HList -s 5000 -e 5049 -F TIMIT timit.wav \end{verbatim} would display samples 5000 through to 5049. The output might look like the following \begin{list}{}{\setlength{\leftmargin}{-1cm}} \item \begin{verbatim} ---------------------------- Samples: 5000->5049 -------------------------- 5000: 85 -116 -159 -252 23 99 69 92 79 -166 5010: -100 -123 -111 48 -19 15 111 41 -126 -304 5020: -189 91 162 255 80 -134 -174 -55 57 155 5030: 90 -1 33 154 68 -149 -70 91 165 240 5040: 297 50 13 72 187 189 193 244 198 128 ------------------------------------ END ---------------------------------- \end{verbatim} \end{list} The second use of \htool{HList} is to check that input conversions are being performed properly. Suppose that the above TIMIT format file is part of a database to be used for training a recogniser and that mel-frequency cepstra are to be used along with energy and the first differential coefficients. Suitable configuration parameters needed to achieve this might be as follows \begin{verbatim} # Wave -> MFCC config file SOURCEFORMAT = TIMIT # same as -F TIMIT TARGETKIND = MFCC_E_D # MFCC + Energy + Deltas TARGETRATE = 100000 # 10ms frame rate WINDOWSIZE = 200000 # 20ms window NUMCHANS = 24 # num filterbank chans NUMCEPS = 8 # compute c1 to c8 \end{verbatim} \htool{HList} can be used to check this. For example, typing \begin{verbatim} HList -C config -o -h -t -s 100 -e 104 -i 9 timit.wav \end{verbatim} will cause the waveform file to be converted, then the source header, the target header and parameter vectors 100 through to 104 to be listed. 
A typical output would be as follows \begin{verbatim} ------------------------------ Source: timit.wav --------------------------- Sample Bytes: 2 Sample Kind: WAVEFORM Num Comps: 1 Sample Period: 62.5 us Num Samples: 31437 File Format: TIMIT ------------------------------------ Target -------------------------------- Sample Bytes: 72 Sample Kind: MFCC_E_D Num Comps: 18 Sample Period: 10000.0 us Num Samples: 195 File Format: HTK -------------------------- Observation Structure --------------------------- x: MFCC-1 MFCC-2 MFCC-3 MFCC-4 MFCC-5 MFCC-6 MFCC-7 MFCC-8 E Del-1 Del-2 Del-3 Del-4 Del-5 Del-6 Del-7 Del-8 DelE ------------------------------ Samples: 100->104 --------------------------- 100: 3.573 -19.729 -1.256 -6.646 -8.293 -15.601 -23.404 10.988 0.834 3.161 -1.913 0.573 -0.069 -4.935 2.309 -5.336 2.460 0.080 101: 3.372 -16.278 -4.683 -3.600 -11.030 -8.481 -21.210 10.472 0.777 0.608 -1.850 -0.903 -0.665 -2.603 -0.194 -2.331 2.180 0.069 102: 2.823 -15.624 -5.367 -4.450 -12.045 -15.939 -22.082 14.794 0.830 -0.051 0.633 -0.881 -0.067 -1.281 -0.410 1.312 1.021 0.005 103: 3.752 -17.135 -5.656 -6.114 -12.336 -15.115 -17.091 11.640 0.825 -0.002 -0.204 0.015 -0.525 -1.237 -1.039 1.515 1.007 0.015 104: 3.127 -16.135 -5.176 -5.727 -14.044 -14.333 -18.905 15.506 0.833 -0.034 -0.247 0.103 -0.223 -1.575 0.513 1.507 0.754 0.006 ------------------------------------- END ---------------------------------- \end{verbatim} The target header information shows that the converted data consists of 195 parameter vectors, each vector having 18 components and being 72 bytes in size. The structure of each parameter vector is displayed as a simple sequence of floating-point numbers. The layout information described in section~\ref{s:parmstore} can be used to interpret the data. However, including the \texttt{-o} option, as in the example, causes \htool{HList} to output a schematic of the observation structure. Thus, it can be seen that the first row of each sample contains the static coefficients and the second contains the delta coefficients. The energy is in the final column. The command line option \texttt{-i 9} controls the number of values displayed per line and can be used to aid in the visual interpretation of the data. Notice finally that the command line option \texttt{-F TIMIT} was not required in this case because the source format was specified in the configuration file. It should be stressed that when \htool{HList} displays parameterised data, it does so in exactly the form that \textit{observations} are passed to a \HTK\ tool. So, for example, if the above data was input to a system built using 3 data streams, then this can be simulated by using the command line option \texttt{-n} to set the number of streams. 
For example, typing
\begin{verbatim}
HList -C config -n 3 -o -s 100 -e 101 -i 9 timit.wav
\end{verbatim}
would result in the following output
\begin{verbatim}
------------------------ Observation Structure -----------------------
nTotal=18 nStatic=8 nDel=16 eSep=T
 x.1: MFCC-1 MFCC-2 MFCC-3 MFCC-4 MFCC-5 MFCC-6 MFCC-7 MFCC-8
 x.2: Del-1  Del-2  Del-3  Del-4  Del-5  Del-6  Del-7  Del-8
 x.3: E      DelE
-------------------------- Samples: 100->101 -------------------------
100.1: 3.573 -19.729 -1.256 -6.646 -8.293 -15.601 -23.404 10.988
100.2: 3.161 -1.913 0.573 -0.069 -4.935 2.309 -5.336 2.460
100.3: 0.834 0.080
101.1: 3.372 -16.278 -4.683 -3.600 -11.030 -8.481 -21.210 10.472
101.2: 0.608 -1.850 -0.903 -0.665 -2.603 -0.194 -2.331 2.180
101.3: 0.777 0.069
--------------------------------- END --------------------------------
\end{verbatim}
Notice that the data is identical to the previous case, but it has been re-organised into separate streams.\index{observations!displaying structure of}
\mysect{Copying and Coding using \htool{HCopy}}{UseHCopy}
\index{files!copying} \htool{HCopy}\index{hcopy@\htool{HCopy}} is a general-purpose tool for copying and manipulating speech files. The general form of invocation is
\begin{verbatim}
HCopy src tgt
\end{verbatim}
which will make a new copy called \texttt{tgt} of the file called \texttt{src}. \htool{HCopy} can also concatenate several sources together as in
\begin{verbatim}
HCopy src1 + src2 + src3 tgt
\end{verbatim}
which concatenates the contents of \texttt{src1}, \texttt{src2} and \texttt{src3}, storing the results in the file \texttt{tgt}. As well as putting speech files together, \htool{HCopy} can also take them apart. For example,
\begin{verbatim}
HCopy -s 100 -e -100 src tgt
\end{verbatim}
will extract samples 100 through to N-100 of the file \texttt{src} to the file \texttt{tgt} where N is the total number of samples in the source file. The range of samples to be copied can also be specified with reference to a label file, and modifications made to the speech file can be tracked in a copy of the label file. All of the various options provided by \htool{HCopy} are given in the reference section and in total they provide a powerful facility for manipulating speech data files.

However, the use of \htool{HCopy} extends beyond that of copying, chopping and concatenating files. \htool{HCopy} reads in all files using the speech input/output subsystem described in the preceding sections. Hence, by specifying an appropriate configuration file, \htool{HCopy} is also a speech coding tool. For example, if the configuration file \texttt{config} was set-up to convert waveform data to MFCC coefficients, the command
\begin{verbatim}
HCopy -C config -s 100 -e -100 src.wav tgt.mfc
\end{verbatim}
would parameterise the waveform file \texttt{src.wav}, excluding the first and last 100 samples, and store the result in \texttt{tgt.mfc}.

\htool{HCopy} will process its arguments in pairs, and as with all \HTK\ tools, argument lists can be written in a script file specified via the \texttt{-S} option. When coding a large database, the separate invocation of \htool{HCopy} for each file needing to be processed would incur a very large overhead.
Hence, it is better to create a file, \texttt{flist} say, containing a list of all source and target files, as in for example, \begin{verbatim} src1.wav tgt1.mfc src2.wav tgt2.mfc src3.wav tgt3.mfc src4.wav tgt4.mfc etc \end{verbatim} and then invoke \htool{HCopy} by \begin{verbatim} HCopy -C config -s 100 -e -100 -S flist \end{verbatim} which would encode each file listed in \texttt{flist} in a single invocation. Normally \htool{HCopy} makes a direct copy of the target speech data in the output file. However, if the configuration parameter \texttt{SAVECOMPRESSED}\index{savecompressed@\texttt{SAVECOMPRESSED}} is set true then the output is saved in compressed form and if the configuration parameter \texttt{SAVEWITHCRC}\index{savewithcrc@\texttt{SAVEWITHCRC}} is set true then a checksum is appended to the output (see section~\ref{s:parmstore}). If the configuration parameter \texttt{SAVEASVQ} is set true then only VQ indices are saved and the kind of the target file is changed to \texttt{DISCRETE}. For this to work, the target kind must have the qualifier \texttt{\_V} \index{qualifiers!aaav@\texttt{\_V}} attached (see section~\ref{s:vquant}). \index{compression}\index{check sums} \index{files!compressing}\index{files!adding checksums} \centrefig{coercions}{100}{Valid Parameter Kind Conversions} \mysect{Version 1.5 Compatibility}{v1spcompat} The redesign of the \HTK\ front-end in version 2 has introduced a number of differences in parameter encoding. The main changes are \begin{enumerate} \item Source waveform zero mean processing is now performed on a frame-by-frame basis. \item Delta coefficients use a modified form of regression rather than simple differences at the start and end of the utterance. \item Energy scaling is no longer applied to the zero'th MFCC coefficient. \end{enumerate} If a parameter encoding is required which is as close as possible to the version 1.5 encoding, then the compatibility configuration variable \texttt{V1COMPAT} should be set to true. Note also in this context that the default values for the various configuration values have been chosen to be consistent with the defaults or recommended practice for version 1.5. \mysect{Summary}{spiosum} \index{speech input!summary of variables} This section summarises the various file formats, parameter kinds, qualifiers and configuration parameters used by \HTK. Table~\href{t:fileform} lists the audio speech file formats which can be read by the \htool{HWave} module. Table~\href{t:parmkinds} lists the basic parameter kinds supported by the \htool{HParm} module and Fig.~\href{f:coercions} shows the various automatic conversions that can be performed by appropriate choice of source and target parameter kinds. Table~\href{t:qualifiers} lists the available qualifiers for parameter kinds. The first 6 of these are used to describe the target kind. The source kind may already have some of these, \htool{HParm} adds the rest as needed. Note that \htool{HParm} can also delete qualifiers when converting from source to target. The final two qualifiers in Table~\href{t:qualifiers} are only used in external files to indicate compression and an attached checksum. \htool{HParm} adds these qualifiers to the target form during output and only in response to setting the configuration parameters \texttt{SAVECOMPRESSED} and \texttt{SAVEWITHCRC}. Adding the \texttt{\_C}\index{qualifiers!aaac@\texttt{\_C}} or \texttt{\_K}\index{qualifiers!aaak@\texttt{\_K}} qualifiers to the target kind simply causes an error. 
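As an illustration, a coding configuration which also requests compressed output with an attached checksum might contain the following fragment (the values shown are illustrative):
\begin{verbatim}
# Illustrative fragment: compressed, checksummed MFCC output
TARGETKIND     = MFCC_E_D
SAVECOMPRESSED = T      # write _C compressed data
SAVEWITHCRC    = T      # append a _K checksum
\end{verbatim}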
Finally, Tables \href{t:spiocparms1} and \href{t:spiocparms2} list all of the configuration parameters along with their meaning and default values.
\begin{center}
\begin{tabular}{|p{2.6cm}|p{8.7cm}|} \hline
Name & Description \\ \hline
\texttt{HTK} & The standard \HTK\ file format\\
\texttt{TIMIT} & As used in the original prototype TIMIT CD-ROM\\
\texttt{NIST} & The standard SPHERE format used by the US NIST\\
\texttt{SCRIBE} & Subset of the European SAM standard used in the SCRIBE CD-ROM\\
\texttt{SDES1} & The Sound Designer 1 format defined by Digidesign Inc. \\
\texttt{AIFF} & Audio interchange file format\\
\texttt{SUNAU8} & Subset of 8bit ``.au'' and ``.snd'' formats used by Sun and NeXT\\
\texttt{OGI} & Format used by Oregon Graduate Institute similar to TIMIT\\
\texttt{WAV} & Microsoft WAVE files used on PCs\\
\texttt{ESIG} & Entropic Esignal file format\\ \hline
\texttt{AUDIO} & Pseudo format to indicate direct audio input \\
\texttt{ALIEN} & Pseudo format to indicate unsupported file, the alien header size must be set via the environment variable \texttt{HEADERSIZE} \\
\texttt{NOHEAD} & As for the ALIEN format but header size is zero \\ \hline
\end{tabular}
\tabcap{fileform}{Supported File Formats}
\end{center}
\begin{center}
\begin{tabular}{|p{2.6cm}|p{8.7cm}|} \hline
Kind & Meaning \\ \hline
\texttt{WAVEFORM} & scalar samples (usually raw speech data) \\
\texttt{LPC} & linear prediction coefficients \\
\texttt{LPREFC} & linear prediction reflection coefficients \\
\texttt{LPCEPSTRA} & LP derived cepstral coefficients \\
\texttt{LPDELCEP} & LP cepstra + delta coef (obsolete) \\
\texttt{IREFC} & LPREFC stored as 16bit (short) integers \\
\texttt{MFCC} & mel-frequency cepstral coefficients \\
\texttt{FBANK} & log filter-bank parameters \\
\texttt{MELSPEC} & linear filter-bank parameters \\
\texttt{USER} & user defined parameters \\
\texttt{DISCRETE} & vector quantised codebook symbols \\
\texttt{PLP} & perceptual linear prediction coefficients \\
\texttt{ANON} & matches actual parameter kind \\ \hline
\end{tabular}
\tabcap{parmkinds}{Supported Parameter Kinds}
\end{center}
\begin{center}
\begin{tabular}{|p{2.6cm}|p{8.7cm}|} \hline
Qualifier & Meaning \\ \hline
\texttt{\_A} & Acceleration coefficients appended \\
\texttt{\_C} & External form is compressed\\
\texttt{\_D} & Delta coefficients appended \\
\texttt{\_E} & Log energy appended\\
\texttt{\_K} & External form has checksum appended\\
\texttt{\_N} & Absolute log energy suppressed \\
\texttt{\_T} & Third differential coefficients appended \\
\texttt{\_V} & VQ index appended\\
\texttt{\_Z} & Cepstral mean subtracted\\
\texttt{\_0} & Cepstral C0 coefficient appended\\ \hline
\end{tabular}
\tabcap{qualifiers}{Parameter Kind Qualifiers}
\end{center}\index{qualifiers!summary}
\begin{center}
\begin{tabular}{|p{1.2cm}|p{3.0cm}|p{1.3cm}|p{6.5cm}|} \hline
Module & Name & Default & Description \\ \hline
\htool{HAudio} & \texttt{LINEIN} & \texttt{T} & Select line input for audio\\
\htool{HAudio} & \texttt{MICIN} & \texttt{F} & Select microphone input for audio\\
\htool{HAudio} & \texttt{LINEOUT} & \texttt{T} & Select line output for audio\\
\htool{HAudio} & \texttt{SPEAKEROUT} & \texttt{F} & Select speaker output for audio\\
\htool{HAudio} & \texttt{PHONESOUT} & \texttt{T} & Select headphones output for audio\\
 & \texttt{SOURCEKIND} & \texttt{ANON} & Parameter kind of source \\
 & \texttt{SOURCEFORMAT} & \texttt{HTK} & File format of source \\
 & \texttt{SOURCERATE} & \texttt{0.0} & Sample period of source in 100ns units \\
\htool{HWave} &
\texttt{NSAMPLES} & & Num samples in alien file input via a pipe\\ \htool{HWave} & \texttt{HEADERSIZE} & & Size of header in an alien file\\ \htool{HWave} & \texttt{STEREOMODE} & & Select channel: \texttt{RIGHT} or \texttt{LEFT} \\ \htool{HWave} & \texttt{BYTEORDER} & & Define byte order \texttt{VAX} or other\\ & \texttt{NATURALREADORDER} & \texttt{F} & Enable natural read order for HTK files \\ & \texttt{NATURALWRITEORDER} & \texttt{F} & Enable natural write order for HTK files \\ & \texttt{TARGETKIND} & \texttt{ANON} & Parameter kind of target \\ & \texttt{TARGETFORMAT} & \texttt{HTK} & File format of target \\ & \texttt{TARGETRATE} & \texttt{0.0} & Sample period of target in 100ns units \\ \htool{HParm} & \texttt{SAVECOMPRESSED} & \texttt{F} & Save the output file in compressed form \\ \htool{HParm} & \texttt{SAVEWITHCRC} & \texttt{T} & Attach a checksum to output parameter file \\ \htool{HParm} & \texttt{ADDDITHER} & \texttt{0.0} & Level of noise added to input signal \\ \htool{HParm} & \texttt{ZMEANSOURCE} & \texttt{F} & Zero mean source waveform before analysis \\ \htool{HParm} & \texttt{WINDOWSIZE} & \texttt{256000.0} & Analysis window size in 100ns units \\ \htool{HParm} & \texttt{USEHAMMING} & \texttt{T} & Use a Hamming window \\ \htool{HParm} & \texttt{PREEMCOEF} & \texttt{0.97} & Set pre-emphasis coefficient \\ \htool{HParm} & \texttt{LPCORDER} & \texttt{12} & Order of LPC analysis \\ \htool{HParm} & \texttt{NUMCHANS} & \texttt{20} & Number of filterbank channels \\ \htool{HParm} & \texttt{LOFREQ} & \texttt{-1.0} & Low frequency cut-off in fbank analysis \\ \htool{HParm} & \texttt{HIFREQ} & \texttt{-1.0} & High frequency cut-off in fbank analysis \\ \htool{HParm} & \texttt{USEPOWER} & \texttt{F} & Use power not magnitude in fbank analysis \\ \htool{HParm} & \texttt{NUMCEPS} & \texttt{12} & Number of cepstral parameters \\ \htool{HParm} & \texttt{CEPLIFTER} & \texttt{22} & Cepstral liftering coefficient \\ \htool{HParm} & \texttt{ENORMALISE} & \texttt{T} & Normalise log energy \\ \htool{HParm} & \texttt{ESCALE} & \texttt{0.1} & Scale log energy \\ \htool{HParm} & \texttt{SILFLOOR} & \texttt{50.0} & Energy silence floor (dB) \\ \htool{HParm} & \texttt{DELTAWINDOW} & \texttt{2} & Delta window size \\ \htool{HParm} & \texttt{ACCWINDOW} & \texttt{2} & Acceleration window size \\ \htool{HParm} & \texttt{VQTABLE} & \texttt{NULL} & Name of VQ table \\ \htool{HParm} & \texttt{SAVEASVQ} & \texttt{F} & Save only the VQ indices \\ \htool{HParm} & \texttt{AUDIOSIG} & \texttt{0} & Audio signal number for remote control \\ \hline \end{tabular} \tabcap{spiocparms1}{Configuration Parameters} \end{center} \begin{center} \begin{tabular}{|p{1.1cm}|p{2.6cm}|p{1.4cm}|p{6.5cm}|} \hline Module & Name & Default & Description \\ \hline \htool{HParm} & \texttt{USESILDET} & \texttt{F} & Enable speech/silence detector \\ \htool{HParm} & \texttt{MEASURESIL} & \texttt{T} & Measure background noise level prior to sampling \\ \htool{HParm} & \texttt{OUTSILWARN} & \texttt{T} & Print a warning message to {\tt stdout} before measuring audio levels \\ \htool{HParm} & \texttt{SPEECHTHRESH} & \texttt{9.0} & Threshold for speech above silence level (dB) \\ \htool{HParm} & \texttt{SILENERGY} & \texttt{0.0} & Average background noise level (dB) \\ \htool{HParm} & \texttt{SPCSEQCOUNT} & \texttt{10} & Window over which speech/silence decision reached \\ \htool{HParm} & \texttt{SPCGLCHCOUNT} & \texttt{0} & Maximum number of frames marked as silence in window which is classified as speech whilst expecting start of speech \\ 
\htool{HParm} & \texttt{SILSEQCOUNT} & \texttt{100} & Number of frames classified as silence needed to mark end of utterance \\ \htool{HParm} & \texttt{SILGLCHCOUNT} & \texttt{2} & Maximum number of frames marked as silence in window which is classified as speech whilst expecting silence \\ \htool{HParm} & \texttt{SILMARGIN} & \texttt{40} & Number of extra frames included before and after start and end of speech marks from the speech/silence detector \\ \htool{HParm} & \texttt{V1COMPAT} & \texttt{F} & Set Version 1.5 compatibility mode \\ & \texttt{TRACE} & \texttt{0} & Trace setting\\ \hline \end{tabular} \tabcap{spiocparms2}{Configuration Parameters (cont)} \end{center} %%% Local Variables: %%% mode: latex %%% TeX-master: "htkbook" %%% End:
subroutine ptrac1 !*********************************************************************** ! Copyright, 2004, The Regents of the University of California. ! This program was prepared by the Regents of the University of ! California at Los Alamos National Laboratory (the University) under ! contract No. W-7405-ENG-36 with the U.S. Department of Energy (DOE). ! All rights in the program are reserved by the DOE and the University. ! Permission is granted to the public to copy and use this software ! without charge, provided that this Notice and any statement of ! authorship are reproduced on all copies. Neither the U.S. Government ! nor the University makes any warranty, express or implied, or ! assumes any liability or responsibility for the use of this software. !*********************************************************************** !D1 !D1 PURPOSE !D1 !D1 Perform initial setup functions for the streamline particle !D1 tracking calculations. !D1 !*********************************************************************** !D2 !D2 REVISION HISTORY !D2 !D2 FEHM Version 2.0, SC-194 !D2 !D2 $Log: /pvcs.config/fehm90/src/ptrac1.f_a $ !D2 !D2 Rev 2.5 06 Jan 2004 10:43:06 pvcs !D2 FEHM Version 2.21, STN 10086-2.21-00, Qualified October 2003 !D2 !D2 Rev 2.4 29 Jan 2003 09:12:28 pvcs !D2 FEHM Version 2.20, STN 10086-2.20-00 !D2 !D2 Rev 2.3 14 Nov 2001 13:11:44 pvcs !D2 FEHM Version 2.12, STN 10086-2.12-00 !D2 !D2 Rev 2.2 06 Jun 2001 13:36:12 pvcs !D2 FEHM Version 2.11, STN 10086-2.11-00 !D2 !D2 Rev 2.2 06 Jun 2001 08:26:14 pvcs !D2 Update for extended dispersion tensor model !D2 !D2 Rev 2.1 30 Nov 2000 12:06:02 pvcs !D2 FEHM Version 2.10, STN 10086-2.10-00 !D2 !D2 Rev 2.0 Fri May 07 14:44:28 1999 pvcs !D2 FEHM Version 2.0, SC-194 (Fortran 90) !D2 !*********************************************************************** !D3 !D3 REQUIREMENTS TRACEABILITY !D3 !D3 2.3.6 Streamline particle-tracking module !D3 !*********************************************************************** !D4 !D4 SPECIAL COMMENTS AND REFERENCES !D4 !D4 Requirements from SDN: 10086-RD-2.20-00 !D4 SOFTWARE REQUIREMENTS DOCUMENT (RD) for the !D4 FEHM Application Version 2.20 !D4 !*********************************************************************** use comai use combi use comci use comdi use comsptr use compart use comsk use comxi, only : nmfil use davidi use comwt implicit none integer neqp1,n50 integer i3,ii1,ii2,i1,i33,ix,iy,iz integer ip,is,i5,kbp,kbm,kb,i,np1 integer current_node integer n_porosi0,itemp_col,itemp_node, kb2,flag_box,inp1 real*8 dx,dy,dz,ep,ep5,x60,x33 real*8 rprime, spacing real*8 xcoordw, ycoordw, zcoordw, del_plus, del_minus real*8 ps_print real*8 s_print integer connect_flag, upper_limit integer iprcount, iprint real*8 tol_c parameter(tol_c=1.d-20) integer position_in_string, final_position integer n_written, iprops, jprops, ijkv_find c......dec 4 01 s kelkar insert omr changes ................... integer inode,iwsk real*8 aread,aread_max integer flag_sk integer iomr_flag integer idbg,kdbg real*8 gotcord integer npart_ist2 real*8 epsilonwt c c&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& ! Called from insptr now ! if(cliff_flag) call setup_cliffnodes c&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& epsilonwt=1.e-12 count_steps = 0 if (omr_flag) iboulist = 0 if (omr_flag) ipc_save = 0 ! Added to read static arrays for calculations once they have been computed zvd 20-Apr-04 if (save_omr) then inquire ( file = nmfil(23), exist = sptr_exists ) if (sptr_exists) then ! 
Read arrays, no need to recompute call sptr_save (0) end if else sptr_exists = .false. end if c!!!for debuggin 5/13/04 c do i=1,neq c if((i).eq.1..or.ddy(i).eq.1..or.ddz(i).eq.1.) then c write(*,*)'i,ddx(i),ddy(i),ddz(i);',i,ddx(i),ddy(i),ddz(i) c endif c enddo c!!!!!! c Initialize parameters neqp1=neq+1 ep=1.e-7 ep5=.5*ep c Initialize particle time arrays ttt=0 if (irsttime .ne. 0) then tt1 = rsttime c ZVD - 10-Dec-09, time can't be less than the simulation start time do i = 1, num_part if (ttp1(i) .lt. tt1) ttp1(i) = tt1 end do else tt1= 0.d0 end if tt = 0 ! ZVD - 14-Oct-05, initialized in insptr (for particle restarts) ! ttp1=0 c Initialize omr arrays if data not read in if (.not. sptr_exists .and. omr_flag) then node_omr = 0 isave_omr = 0 end if c 5-Nov-02 zvd Add default nsp and icns values (liquid only) nsp = 1 icns = 1 if(.not.unstruct_sptr) then c Determine setup arrays for structured grids c Call routine to construct the connectivity array for c structured grids c !!!!!!!!!!!!!!!!!5/13/04 next two lines taken out for debugging if (.not. sptr_exists) then call struct_conn_array(iomr_flag) c.. s kelkar sep 28 04, moved call to geom_array inside conn_array c Call routine to determine geometric sizes of structured grid cells c call struct_geom_array endif if(node_count.eq.0) omr_flag=.false. else c Setup for unstructured grids call unstruct_arrays end if c Determine initial state of particles (positions, nodes) if(ist.eq.2) then call particle_patch(npart_ist2) end if c *******initial state ********* c Find cell location where particle starts if(abs(ist).ge.1) then call find_particle_cell(npart_ist2,ierr,iptty) c.............................................................. c...Oct 15 2008 s kelkar if porosity<=0 move down the column c routine wtsi_column sorts nodes in vertical columns c wcol(node)=column# corresponding to the node c n_col(column#)=# of nodes in the column c col(kb,wcol(node))=node #s in that column n_porosi0= 0 do i=1,num_part if (ijkv(i) .ne. 0) then if(ps(ijkv(i)).le.0.) then if (ifree .ne. 0) then c Only try to move particle if wtsi problem n_porosi0 = n_porosi0 +1 if( n_porosi0.eq.1) then if(.not.allocated(wcol)) then call wtsi_column endif endif inp1=ijkv(i) itemp_col = wcol(inp1) do kb2 = 1, n_col(itemp_col) itemp_node=col(itemp_col,kb2) if(inp1.eq.itemp_node) then do kb = kb2+1, n_col(itemp_col) itemp_node=col(itemp_col,kb) if(ps(itemp_node).gt.0.) then ijkv(i) = itemp_node z1(i)=cord(itemp_node,3)- & corn(itemp_node,3) goto 96969 endif enddo c did not find a porosity>0 node in the column. Do a neighbor search call tree_search_porosity(ijkv(i),5,flag_box) if(flag_box.gt.0) then ijkv(i)=flag_box z1(i) = cord(flag_box,3) else write(ierr, 222) i, ijkv(i) ijkv(i) = 0 istop(i) = 1 c write(ierr,*)"error in ptrac1. can't find" c write(ierr,*)'neighbor with porosity>0. ', c & 'STOP.' c stop endif endif enddo 96969 continue else c Just remove the particle write(ierr, 223) ijkv(i), i istop(i) = 1 ijkv(i) = 0 end if endif end if enddo 222 format ("Error in ptrac1: can't find neighbor with porosity>0", & ' for particle ', i8, 'at node ', i8) 223 format ('Error in ptrac1: Invalid particle start ', & 'at 0 porosity node ', i8, ' for particle number ', i8) c.............................................................. if (ist .eq. 2) then do i = 1, num_part part_id(i,1) = i part_id(i,2) = ijkv(i) end do end if end if c zvd 06-21-07 Set x3,y3,z3 to initial particle location do i = 1, num_part if (abs(ijkv(i)) .ne. 
0) then x3(i) = x1(i) + corn(ijkv(i), 1) y3(i) = y1(i) + corn(ijkv(i), 2) z3(i) = z1(i) + corn(ijkv(i), 3) c zvd 03-16-2010 Set x3, y3, z3 to initial particle location in insptr for ist = 1 else if (ist .ne. 1) then x3(i) = x1(i) y3(i) = y1(i) z3(i) = z1(i) end if end do c...for a quick fix of bouncing particles s kelkar 1/16/02 do np1=1,num_part oldnode(np1)=ijkv(np1) oldnode2(np1)=ijkv(np1) oldnode_rand(np1)=ijkv(np1) enddo c...................................... c Output of particle information changed: BAR 6-15-99 c if(iprto.ne.0) then call output_info end if c Subroutine to initialize particle tracking transport parameters c s kelkar may 28 09 moved call to fehmn.f where ptrac1 used to be called c call init_sptr_params c s kelkar may 20 09 moved call to fehmn.f where ptrac1 used to be called c if (.not. compute_flow) then c if (.not. sptr_exists) call load_omr_flux_array c if(.not.random_flag) then c if(allocated(sx)) deallocate(sx) c if(allocated(istrw)) deallocate(istrw) c end if c end if ! Added to save static arrays for calculations once they have been computed zvd 20-Apr-04 c zvd 19-Nov-2010 ! Moved to fehmn, needs to be called after call to load_omr_flux_array ! if (.not. sptr_exists .and. save_omr) call sptr_save (1) return contains ****************************************************************** ****************************************************************** subroutine struct_conn_array(iomr_flag) use comsk use comai, only : ierr, iout, iptty implicit none integer ibou, iflag_boundry, iomr_flag,idum,irray0,idumm c **** get irray ***** if(icnl.eq.0) then upper_limit = 3 else upper_limit = 2 end if do i=1,neq do i3=-3,3 ggg(i,i3)=0. enddo enddo c ...dec 4 01, s kelkar nov 1 01, OMR stuff ............................... node_count=0 c................................................................. do i=1,neq do i3=-3,3 c do not zero out irray(i,0) because it has particle capture c information read in insptr if(i3.ne.0) irray(i,i3)=0 enddo c ...dec 4 01, s kelkar nov 1 01, OMR stuff ...................... iomr_count=0 iomr_flag=0 isave_omr=0 do i1=1,iomrmax iomr_neighbour(i1)=0 enddo c................................................................. ii1=nelm(i)+1 ii2=nelm(i+1) c.....dec 4, 01 s kelkar 9/21/01 find the max area connection at this node c for flagging connection-areas that are zero (less than epsilon) aread_max=0. aread=0. do i1=ii1,ii2 kb=nelm(i1) if(kb.ne.i) then c istrw is a pointer array for sx, corresponding to the connection c i-i1. Generally (but not always) it is a scalar, the negative of the c magnitude of (cross sectional area0/3. divided by the inter-nodal c distance iwsk=istrw(i1-neq-1) aread=-sx(iwsk,1) if(aread.gt.aread_max) aread_max=aread endif enddo aread_max=aread_max*1.e-8 c................................................... do i1=ii1,ii2 kb=nelm(i1) if(kb.ne.i) then c Do loop filters out any connections that aren't c oriented only in the x, y, or z directions connect_flag = 0 do i3=1,upper_limit x33=cord(kb,i3)-cord(i,i3) if(x33.gt. tol_c ) connect_flag = connect_flag + 1 if(x33.lt.-tol_c ) connect_flag = connect_flag + 1 end do if(connect_flag.lt.2) then c if connect_flag =1 then its a good connection, store the node c number in the correct slot of irray i33=0 do i3=1,upper_limit x33=cord(kb,i3)-cord(i,i3) if(x33.gt. tol_c) then i33= i3 endif if(x33.lt.-tol_c) then i33=-i3 endif enddo c s kelkar, may 25,04, check for -ve porosity at kb, these are treated c as no flow connections. Flag these with -ve sign ! 
Use rock matrix porosity here to account for nodes that have been
! eliminated using negative porosities
                  if(ps(kb).gt.0.) then
                     irray(i,i33)=+kb
                  else
                     irray(i,i33)=-kb
                  endif
               endif
               if(connect_flag.gt.1) then
c ........... dec 4 01 s kelkar 9/21/01..........................
c if connect_flag > 1 then the connection is not lined up
c with any of the axes. If this connection has a nonzero area
c then it signals a change from a structured to an unstructured
c part of the grid. However, zero area connections that are
c not lined up with the axes can also occur
c in a structured part of the grid, ie a diagonal in a square.
c these have to be filtered out.
                  iwsk=istrw(i1-neq-1)
                  aread=-sx(iwsk,1)
                  if(aread.gt.aread_max) then
c ...s kelkar nov 1 01, OMR stuff ...............................
c here is how the information is stored:
c node_count counts the number of nodes that have at least one bad
c connection with any one of its neighbours, and the node number
c is stored in node_omr(node_count).
c for each such node i, the # of its neighbours that have a bad
c connection with it are counted in iomr_count, and the pointers
c (i1) to the neighbour
c node numbers are stored temporarily in iomr_neighbour(iomr_count)
c When the loop over the neighbours 'kb' is finished, then for each
c node i with a bad connection, the subroutine 'subomr' is called,
c which sorts out the faces on which bad connections occur.
c komr_count(k) is a temporary counter for the number of neighbours
c of 'i' on a particular face (k), and the pointers (i1) to the node
c numbers of such neighbours are saved in
c isave_omr(face#,komr_count(k))
                     if(iomr_flag.eq.0) then
                        iomr_flag=1
                        node_count=node_count+1
                        if(node_count.gt.omr_nodes) then
                           write (ierr, 1001) node_count, omr_nodes
                           if (iptty .ne. 0)
     .                        write (iptty, 1001) node_count, omr_nodes
                           call exit_ptrac1
                        endif
 1001                   format ( 'ERROR in PTRAC1: count ', i8,
     &                       ' greater than number of omr nodes ',
     &                       i8, /, 'STOPPING')
                        node_omr(node_count)=i
                     endif
                     iomr_count=iomr_count+1
                     if(iomr_count.gt.iomrmax) then
                        write(ierr,*)'iomr_count.gt.iomrmax in ptrac1'
                        write(ierr,*)'increase dimension of ',
     .                       'iomr_neighbour'
                        write(ierr,*)'STOPPING'
                        if (iptty .ne. 0) then
                           write(iptty,*)'iomr_count.gt.iomrmax ',
     .                          'in ptrac1'
                           write(iptty,*)'increase dimension of ',
     .                          'iomr_neighbour'
                           write(iptty,*)'STOPPING'
                        end if
                        call exit_ptrac1
                     endif
c NOTE: saving the pointer i1 rather than the node # kb because
c i1 can be directly used as a pointer to sx and a_axy arrays
c and kb can be retrieved from nelm(i1)
                     iomr_neighbour(iomr_count)=i1
c     close(87)
                  endif
               endif
c.................................................................
            endif
         enddo
         if(iomr_flag.eq.1) then
c OMR node cannot be specified as a well-capture node. At this stage
c this could have only come from insptr - the sptr input file must be
c modified
            irray0=irray(i,0)
            if(irray0.eq.-i.or.irray0.eq.-(i+1000000)
     1        .or.(irray0.lt.-(10000000).and.irray0.gt.-(100000000))
     2        )then
               write(ierr,*)'OMR node cannot be specified as a '
               write(ierr,*)'well-capture node. sptr input file'
               write(ierr,*)'must be modified. check keyword cpatur'
               write(ierr,*)'SUBROUTINE struct_conn_array (ptrac1)'
               write(ierr,*)'Node Number=',i
               write(ierr,*)'STOP'
               call exit_ptrac1
            endif
            if(irray0.lt.-100000000) then
               idumm=-(irray0+100000000)
               irray0=-200000000-idumm
            elseif(irray(i,0).ne.-(i+2000000)) then
               irray0 = 0
            endif
            irray(i,0) = irray0
c omr node allowed to be a spring node
c....s kelkar Jan 27march 10, 04, 3D ORM stuff............
c irray(i,0) = +i : regular, interior node, not a source/sink c irray(i,0) = -i-2000 : regular node on a external boundary c irray(i,0) = -i : regular interior node that is a sink/source c but not explicitly specified in sptr macro c -100000000 < irray(i,0) < -10000000 : regular interior node that is c specified as a sink/source in the sptr macro c in this case -(irray(i,0)+10000000) is the pointer c for the storage location in well_radius for this node c -200000000 < irray(i,0) < -100000000 : non-OMR cliff node c irray(i,0) < -200000000 : OMR cliff node c if the node is a cliff node, but has a specified boundary c outflow at it, remove cliff tag and mark as a regular c boundry node in load_omr_flux_array c = -(i+2000000) : spring node c = -(i+1000000) : well-capture node on external bound c similar to =-i case but with half space solution c irray(i,0) = 0 : OMR node not on boundary c irray(i,0) = -i-1000 : OMR node on a external boundary c c flag OMR nodes that are on the exterior boundary, and save the c exterior faces in iboulist. Also set irray(i,0)=-i-1000 for c OMR nodes on a external boundary call boundary_nodes(i,+1) c................................................... else c flag non-OMR nodes that are on the exterior boundary, and save the c exterior faces in iboulist. call boundary_nodes(i,0) irray0=irray(i,0) if(irray0.ne.-i.and.irray0.ne.-(i+1000000).and. 1 irray0.ne.-(i+2000000).and.irray0.ne.-(i+2000). 2 and.irray0.gt.-10000000) then irray(i,0) = i endif endif c ...dec 4 01 s kelkar nov 1 01, OMR stuff .......................... if(iomr_flag.eq.1) call subomr2(i) c................................................................. c... s kelkar sep 28, 04 reduce storage by calling geom_array here c and changin the dim of iomr_save to (la,k) c calculate del_plus, del_minus- the distances to the c control volume boundaries, for use in ddx_corn_array call struct_geom_array(i) enddo c set up the ddx,ddy,ddz and corn arrays call ddx_corn_array c do i=1,neq do i3=-3,3 ggg(i,i3)=0. enddo enddo write(iout, 1003) omr_nodes, node_count if (iptty .ne. 0) write(iptty, 1003) omr_nodes, node_count 1003 format ('Number of OMR nodes set to ', i8, /, . 'Actual number of OMR nodes ', i8) c **** got irray ***** return end subroutine struct_conn_array ****************************************************************** ****************************************************************** subroutine output_info implicit none real*8 sptr_time character*200 sptr_heading, sptr_prop_values c Sets up output of particle tracking info c Only used for regular output with transient particle start times, c trans_flag is used for minimal output options pstart_out = .true. c Robinson added minimal write option 3-11-02 if(iprto.lt.0) then if (xyz_flag) then c ZVD modified minimal write option to include coordinates 12-08-2005 if (iprto .eq. -1) then write(isptr2,100) else write (isptr2) 'XYZ' end if do np1 = 1, num_part current_node = ijkv(np1) xcoordw = x1(np1) + corn(current_node,1) ycoordw = y1(np1) + corn(current_node,2) zcoordw = z1(np1) + corn(current_node,3) sptr_time = ttp1(np1) if (iprto .eq. -1) then write(isptr2,105) part_id(np1,1),sptr_time, & current_node,xcoordw, ycoordw, zcoordw else write (isptr2) part_id(np1,1),sptr_time,current_node, & xcoordw, ycoordw, zcoordw end if end do else if (ip_flag .or. trans_flag) then c ZVD added option to write initial position to abbreviated output file if (iprto .eq. 
-1) then if (ip_flag) then write(isptr2, 110) '' else write(isptr2, 110) 'TRA : ' end if else if (trans_flag) write (isptr2) 'TRA' end if do np1 = 1, num_part c ZVD added option for transient where particle start time is saved but c initial node is set to 0 c ZVD 07-Feb-2011 negative of starting node is now output c (this way particles that have been excluded can be distinguished from c particles that have a delayed start time) if (trans_flag) then c current_node = 0 current_node = -ijkv(np1) else current_node = ijkv(np1) end if sptr_time = ttp1(np1) if (iprto .eq. -1) then write(isptr2,105) part_id(np1,1),sptr_time, & current_node else write (isptr2) part_id(np1,1),sptr_time,current_node end if end do end if 100 format ('XYZ : Part_no time_days cell_leaving', & ' X Y Z') 105 format(1x,i8,1x,g21.14,1x,i8,3(1x,g16.9)) 110 format (a, 'Part_no time_days cell_leaving') elseif(iprto.eq.1) then pstart_out = .false. do iprint = 1, 200 sptr_heading(iprint:iprint) = ' ' end do sptr_prop_values = '' sptr_heading(1:58) = 2 ' particle_number x(m) y(m) z(m) time(days)' c 2 ' particle_number x y z time ' position_in_string = 60 c Determine how many property columns are being written n_written = 0 do iprops = 1, nsptrprops if(write_prop(iprops).ne.0) then n_written = n_written + 1 end if end do c Find which one is written next, write it to string do iprops = 1, n_written inner: do jprops = 1, nsptrprops if(write_prop(jprops).eq.iprops) then if(jprops.eq.1) then sptr_heading(position_in_string:position_in_string+9) 2 = 'porosity ' position_in_string = position_in_string + 10 exit inner c Fluid saturation elseif(jprops.eq.2) then sptr_heading(position_in_string:position_in_string+11) 2 = 'saturation ' position_in_string = position_in_string + 12 exit inner c Permeability elseif(jprops.eq.3) then sptr_heading(position_in_string:position_in_string+13) 2 = 'permeability ' position_in_string = position_in_string + 14 exit inner c Rock density elseif(jprops.eq.4) then sptr_heading(position_in_string:position_in_string+13) 2 = 'rock_density ' position_in_string = position_in_string + 14 exit inner c Pressure elseif(jprops.eq.5) then sptr_heading(position_in_string:position_in_string+9) 2 = 'pressure ' position_in_string = position_in_string + 10 exit inner c Temperature elseif(jprops.eq.6) then sptr_heading(position_in_string:position_in_string+12) 2 = 'temperature ' position_in_string = position_in_string + 13 exit inner c Zone number elseif(jprops.eq.7) then sptr_heading(position_in_string:position_in_string+5) 2 = 'zone ' position_in_string = position_in_string + 6 exit inner c Particle ID elseif(jprops.eq.8) then sptr_heading(position_in_string:position_in_string+3) 2 = 'ID ' position_in_string = position_in_string + 4 exit inner end if end if end do inner end do sptr_heading(position_in_string:position_in_string+17) 2 = 'old_node new_node' final_position = position_in_string+17 write(isptr2,'(a)') trim(sptr_heading) do np1 = 1, num_part current_node = ijkv(np1) xcoordw = x1(np1) + corn(current_node,1) ycoordw = y1(np1) + corn(current_node,2) zcoordw = z1(np1) + corn(current_node,3) sptr_prop = 0. 
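c Illustrative sketch (comment added for clarity, not in the original
c source) of one full-output record assembled below. The column order
c follows sptr_heading built above: particle_number, x(m), y(m), z(m),
c time(days), then any requested property columns in write_prop order,
c then old_node and new_node. With porosity and saturation requested,
c a line written by format 8001 might look like (hypothetical values):
c        1   1.2500e+02   3.4000e+01   1.2000e+01   365.0
c        0.25   1.0        101        101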
if(current_node.eq.0) then iprcount = 0 do iprint = 1, nsptrprops if(write_prop(iprint).ne.0) then iprcount = iprcount + 1 sptr_prop(write_prop(iprint)) = 1.d-30 end if end do c ps_print = 1.e-30 c s_print = 1.e-30 else iprcount = 0 c Porosity if(write_prop(1).ne.0) then iprcount = iprcount + 1 sptr_prop(write_prop(1)) = ps_trac(current_node) end if c Fluid saturation if(write_prop(2).ne.0) then iprcount = iprcount + 1 if (irdof .ne. 13 .or. ifree .ne. 0) then sptr_prop(write_prop(2)) = s(current_node) else sptr_prop(write_prop(2)) = 1.0d0 end if end if c Permeability if(write_prop(3).ne.0) then iprcount = iprcount + 1 sptr_prop(write_prop(3)) = 1.d-6*pnx(current_node) end if c Rock density if(write_prop(4).ne.0) then iprcount = iprcount + 1 sptr_prop(write_prop(4)) = denr(current_node) end if c Pressure if(write_prop(5).ne.0) then iprcount = iprcount + 1 sptr_prop(write_prop(5)) = phi(current_node) end if c Temperature if(write_prop(6).ne.0) then iprcount = iprcount + 1 sptr_prop(write_prop(6)) = t(current_node) end if c Zone number if(write_prop(7).ne.0) then iprcount = iprcount + 1 sptr_prop(write_prop(7)) = izonef(current_node) end if c Particle ID if(write_prop(8).ne.0) then iprcount = iprcount + 1 sptr_prop(write_prop(8)) = part_id(np1, 2) end if position_in_string = 1 do i = 1, iprcount if (i .eq. write_prop(7) .or. i .eq. write_prop(8)) & then write (sptr_prop_values(position_in_string: & position_in_string+9), '(i8,2x)') & int(sptr_prop(i)) position_in_string = position_in_string+10 else write (sptr_prop_values(position_in_string: & position_in_string+17), '(g16.9,2x)') & sptr_prop(i) position_in_string = position_in_string+18 end if end do write (sptr_prop_values(position_in_string: & position_in_string+17), '(2(i8,2x))') & current_node, current_node sptr_time = ttp1(np1) c Don't output if particle time is greater than starting time c if (sptr_time .gt. days) sptr_time = days if (sptr_time .le. days) then pstart_out(np1) = .true. position_in_string = len_trim(sptr_prop_values) write(isptr2,8001) part_id(np1,1), xcoordw, ycoordw, 2 zcoordw, sptr_time, 3 sptr_prop_values(1: position_in_string) end if end if end do end if 8001 format(1x, i8, 3(1x,g16.9), 1x, g21.14, 200a) return end subroutine output_info ****************************************************************** ****************************************************************** subroutine struct_geom_array(i) use comsptr use comsk implicit none integer i,j,kb,kb_omr,jab real*8 gotcord, delkb,del c NOTE: using ggg as scratch storage for saving del+ and del- c temporarily do j=-3,3 delkb=0. if(j.ne.0) then jab=abs(j) kb= abs (irray(i, j)) if((irray(i,0).eq.0).or.(irray(i,0).eq.-(i+1000)).or. 1 (irray(i,0).lt.-200000000)) then kb_omr=0 call getcord(i,kb_omr,j,gotcord) del = 0.5*abs((gotcord-cord(i,jab))) c on return from gotcord, kb_omr =0 only for boundary nodes c use expected symmetry of ggg to set ggg(kb_omr,-j) c Also if del(i,j) and del(kb,-j) c are not equal, then use the greater of the two- this way we may c increase overlap, but we reduce chances of holes. c for omr nodes, kb can be 0 either bcs its a boundry node or bcs c grid refinement has a missing node there. In the case of a c missing node, return value of kb_omr can be a legitimate node. c in that case, use symmetry of ggg and irray. ggg(kb_omr could be c nonzero thru this process even if kb_omr lt i. Hence when i c becomes value of kb_omr, that del is recalculated and the c greater value is used. ggg(i,j)=del if(irray(i,j).eq.0 .and. kb_omr .gt. 
0) then if (ps(kb_omr).gt.0.) then irray(i,j)=+kb_omr else irray(i,j)=-kb_omr endif endif c if(kb_omr.gt.0) then c delkb=ggg(kb_omr,-j) c if(delkb.gt.del) del=delkb c ggg(kb_omr,-j)=del c ggg(i,j)=del c irray(i,j)=kb_omr c irray(kb_omr,-j)=i c else c ggg(i,j)=del c endif else c elseif(ggg(i,j).eq.0.) then c non-OMR node. c there is no need to recalculate ggg if it is already nonzero kb_omr=abs(kb) call getcord(i,kb_omr,j,gotcord) del = 0.5*abs((gotcord-cord(i,jab))) ggg(i,j)=del c if (kb_omr.gt.0) ggg(kb_omr,-j)=del endif endif enddo return end subroutine struct_geom_array c****************************************************************** c Subroutine is empty - not yet implemented subroutine unstruct_arrays return end subroutine unstruct_arrays ****************************************************************** ****************************************************************** subroutine particle_patch(npart_ist2) implicit none integer npart_ist2 c *******initial state section ********* if(nx.eq.1) then dx=0 else dx=xdim/(1.e-20+nx-1) endif if(ny.eq.1) then dy=0 else dy=ydim/(1.e-20+ny-1) endif if(icnl.ne.0) nz = 1 if(nz.eq.1) then dz=0 else dz=zdim/(1.e-20+nz-1) endif do ix=1,nx do iy=1,ny do iz=1,nz ip=ix+(iy-1)*nx+(iz-1)*nx*ny x1(ip)=x10+(ix-1)*dx y1(ip)=y10+(iy-1)*dy z1(ip)=z10+(iz-1)*dz enddo enddo enddo npart_ist2=ip c....dec 4 01, s kelkar sep 20 2001 if(ist.eq.3) then do ip=1,num_part inode=ijkv(ip) x1(ip)=cord(inode,1) y1(ip)=cord(inode,2) z1(ip)=cord(inode,3) enddo endif c..................................... return end subroutine particle_patch ****************************************************************** subroutine find_particle_cell(npart_ist2,ierr,iptty) implicit none integer flag_box,npart_ist2,ijkv_last,ierr,iptty integer nout,save_out_face(6) c *** fudged initial xo=x1 yo=y1 zo=z1 c If the search algorithm starts at node 1 for all nodes, c it can't get inside a locally structured part of the grid c from outside of it. The following search gets each particle c close or even at its starting node, so as long as that c coordinate is inside a locally structured part of the c grid and inside the model domain, the subsequent search c should work. BAR 3-9-00 c if(abs(ist).eq.1) then ! If we have read in the corresponding nodes use them otherwise search if (.not. sptr_snode) then c Particles could be anywhere, do search on each one do i1 = 1, num_part call near3(xo(i1),yo(i1),zo(i1),ijkv(i1),0) end do end if elseif(ist.eq.2) then c Particles are in a cluster, do search on first c one only, use that as starting location call near3(xo(1),yo(1),zo(1),ijkv_find,0) ijkv(1)=ijkv_find c..s kelkar Jan 21,05 replacing with tree-search c *** n50 must exceed max flow path length/smallest nodal separation c n50=100000 c c c do is=1,n50 c ijkvss=ijkv c x1=xo-corn(ijkv,1) c y1=yo-corn(ijkv,2) c z1=zo-corn(ijkv,3) c ddxv=ddx(ijkv) c ddyv=ddy(ijkv) c ddzv=ddz(ijkv) c c ijkvs=ijkv c where(x1/ddxv.gt.1.) c ijkv=irray(ijkv, 1) c endwhere c where(x1/ddxv.lt.0.) c ijkv=irray(ijkv,-1) c endwhere c where(ijkv.eq.0) ijkv=ijkvs c c ijkvs=ijkv c where(y1/ddyv.gt.1.) c ijkv=irray(ijkv, 2) c endwhere c where(y1/ddyv.lt.0.) c ijkv=irray(ijkv,-2) c endwhere c where(ijkv.eq.0) ijkv=ijkvs c c ijkvs=ijkv c where(z1/ddzv.gt.1.) c ijkv=irray(ijkv, 3) c endwhere c where(z1/ddzv.lt.0.) 
c ijkv=irray(ijkv,-3) c endwhere c where(ijkv.eq.0) ijkv=ijkvs c c i5=0 c do i1=1,num_part c if(ijkv(i1).ne.ijkvss(i1)) i5=1 c enddo c if(i5.eq.0) go to 201 c c enddo c 201 continue c........................................................ ijkv_last=ijkv_find do is=2,npart_ist2 call tree_search(ijkv_last,20,ierr,iptty, $ xo(is),yo(is),zo(is),flag_box,nout,save_out_face) if(flag_box.gt.0) then ijkv(is)=flag_box ijkv_last=flag_box else call tree_search(ijkv(1),20,ierr,iptty, $ xo(is),yo(is),zo(is),flag_box,nout,save_out_face) if(flag_box.gt.0) then ijkv(is)=flag_box ijkv_last=flag_box else c tree-search failed, do a global search call near3(xo(is),yo(is),zo(is),flag_box,0) if(flag_box.gt.0) then ijkv(is)=flag_box ijkv_last=flag_box else write(ierr,*)'Error in find_particle_cell. Ist=2 ' write(ierr,*)'Cant find the CC for particle #', is write(ierr,*)'xo,yo,zo=',xo(is),yo(is),zo(is) write(ierr,*)'STOP' if (iptty. ne. 0) then write(iptty,*)'Error in find_particle_cell. Ist=2 ' write(iptty,*)'Cant find the CC for particle #', is write(iptty,*)'xo,yo,zo=',xo(is),yo(is),zo(is) write(iptty,*)'STOP' end if call exit_ptrac1 end if endif end if enddo end if ijkvs=ijkv c*** c s kelkar aug 29, 06 c for water table nodes, if S(ijkv(np1))>Smin then set the initial c position below ddz*S, if S<Smin then move the particle vertically c downward until a node with S>Smin is encountered. c zvd added to time loop (ptrac3), to move particle down to wt c when it starts to move aug 27, 2007 c if (ifree.ne.0) then c if(deltawt.gt.epsilonwt) then c call wtsi_ptrac1_init c endif c endif c**** x1=xo-corn(ijkv,1) y1=yo-corn(ijkv,2) z1=zo-corn(ijkv,3) c ********at final relative initial state ********* ddxv=ddx(ijkv) ddyv=ddy(ijkv) ddzv=ddz(ijkv) c **** is the initial state valid? ***** cc where((ijkv.lt.1).or.(ijkv.gt.neq)) ijkv=0 c where((x1.gt.ddxv).or.(x1.lt.0.)) ijkv=0 c where((y1.gt.ddyv).or.(y1.lt.0.)) ijkv=0 c if(icnl.eq.0) then c where((z1.gt.ddzv).or.(z1.lt.0.)) ijkv=0 c end if do is=1,num_part c Check to see if particle should be excluded if out side the model domain if (exclude_particle) then if(x1(is).gt.ddxv(is) .or. x1(is).lt.0. .or. & y1(is).gt.ddyv(is) .or. y1(is).lt.0. .or. & ps(ijkv(is)) .le. 0.) then istop(is) = 1 ijkv(is) = 0 end if if (icnl.eq.0) then if(z1(is).gt.ddzv(is) .or. z1(is).lt.0.) then istop(is) = 1 ijkv(is) = 0 end if end if else if(x1(is).gt.ddxv(is)) x1(is)=ddxv(is) if(x1(is).lt.0.) x1(is)=0. if(y1(is).gt.ddyv(is)) y1(is)=ddyv(is) if(y1(is).lt.0.) y1(is)=0. if(icnl.eq.0) then if(z1(is).gt.ddzv(is)) z1(is)=ddzv(is) if(z1(is).lt.0.) z1(is)=0. end if end if enddo c....dec4 01 s kelkar nov 11 01 commented out next 3 lines to c allow omr nodes as initial locations c do np1 = 1, num_part c if(irray(ijkv(np1),0).lt.0) ijkv(np1) = 0 c end do c................................... c *** print initial state ****** c ***** stop if initial state of any particle is invalid: ijkv=0 **** istop=0 do i1=1,num_part if(ijkv(i1).eq.0) then write(ierr, 224) i1 ! call exit_ptrac1 istop (i1)=1 end if enddo 224 format ('Error in ptrac1: Initial state of particle is invalid', & ' for particle number ', i8) c **** set istop=1 if point out of domain**** c where(ijkv.eq.0) istop=1 c *** move initial points off element boundaries*** ddxv=ddx(ijkv) ddyv=ddy(ijkv) ddzv=ddz(ijkv) where(x1.eq.0. ) x1=ddxv*ep where(x1.eq.ddxv) x1=ddxv*(1.-ep) where(y1.eq.0. ) y1=ddyv*ep where(y1.eq.ddyv) y1=ddyv*(1.-ep) if(icnl.eq.0) then where(z1.eq.0. 
) z1=ddzv*ep where(z1.eq.ddzv) z1=ddzv*(1.-ep) end if return end subroutine find_particle_cell c****************************************************************** end subroutine ptrac1 c*********************************************************************** subroutine subomr2(i) c s kelkar 11 jul 05 c this is a modified version (and simplified) of 'subomr' c each connected node with a 'bad' connection, from array c iomr_neighbour(:) is counted and stored for every direction c that the connection if off-axis in the array isave_omr(:,:) c connections are not classified as type I or II (that is done c in subomr, but no longer needed. In the current version of ptrac1, c that classification leads to wrong ddx,ddy,ddz values) use comai use combi use comci use comdi use comflow use comsptr use comsk use compart implicit none integer i,i1,k,ia,kb,ikb,ipos real*8 epsilon,xkb,xia,d epsilon=1.e-8 do i1=-3,3 komr_count(i1)=0 enddo do i1=1,iomr_count c NOTE: in iomr_neighbour,saved ikb rather than the node # kb c because ikb can be directly used as a pointer to sx and a_axy c arrays and kb can be retrieved from nelm(ikb) ikb=iomr_neighbour(i1) kb=nelm(ikb) do k=1,3 c ia=abs(irray(i,k)) d=cord(kb,k)-cord(i,k) if(d.gt.epsilon) then komr_count(k)=komr_count(k)+1 if(komr_count(k).gt.komrmax) then write(ierr,*)'komr_count(k).gt.komrmax in ', & 'subomr2' write(ierr,*)'change dimension in comomr.' write(ierr,*)'i,iomr_count',i,iomr_count write(ierr,*)'STOP' if (iptty .ne. 0) then write(iptty,*)'komr_count(k).gt.komrmax in ', & 'subomr2' write(iptty,*)'change dimension in comomr.' write(iptty,*)'i,iomr_count',i,iomr_count write(iptty,*)'STOP' end if call exit_ptrac1 endif isave_omr(k,komr_count(k))=ikb endif c ia=abs(irray(i,-k)) d=cord(kb,k)-cord(i,k) if(d.lt.-(epsilon)) then komr_count(-k)=komr_count(-k)+1 if(komr_count(-k).gt.komrmax) then write(ierr,*)'komr_count(k).gt.komrmax in ', & 'subomr2' write(ierr,*)'change dimension in comomr.' write(ierr,*)'STOP' if (iptty .ne. 0) then write(iptty,*)'komr_count(k).gt.komrmax in ', & 'subomr2' write(iptty,*)'change dimension in comomr.' write(iptty,*)'STOP' end if call exit_ptrac1 endif isave_omr(-k,komr_count(-k))=ikb endif enddo enddo return end c....................................................................... subroutine getcord(i,kb,la,gotcord) c s kelkar, modified Feb 10,05 use comai, only : ierr, iptty use combi use comsptr use comsk implicit none integer i,kb,k,l,la,j,jk,jkb,jkbmax,ibou,j1,j2,jkc integer itempf, jtemp(200), jtemp_count, lasign,jjjj integer i_augment,i1,i2,iii,jkd, jkbb,jkb1,jkb2,i_dir integer l_perp,irray0 real*8 gotcord,dmax,d, djtemp, djkb,dtemp1,dtemp2 l=abs(la) lasign=isign(1,la) if(kb.gt.0) then gotcord=cord(kb,l) elseif(kb.eq.0) then irray0=irray(i,0) if((irray0.eq.i).or.(irray0.eq.-(i+1000000)).or. 1 (irray0.eq.-(i+2000000)).or.irray0.eq.-(i+2000).or. 2 (irray0.lt.-100000000.and.irray0.gt.-200000000)) 3 then c i is a non-OMR boundary node, a well-capture node on boundary c or a spring node on a boundaryor a cliff node. c Set gotcord = cord of i gotcord=cord(i,l) kb=0 c......s kelkar 1/27/04 3-D stuff................ elseif(irray(i,0).eq.-(i+1000).or. 1 (irray(i,0).lt.-200000000)) then c handle OMR nodes on exterior boundaries, including omr-cliff nodes itempf = 0 do ibou=1,6 if(iboulist(i,ibou).eq.la) then c node i is an OMR node on an exterior boundary, with the exterior in c the la direction. set gotcord =cord of i if (irray(i, la) .lt. 
0) then kb = irray(i, la) gotcord = cord (abs(kb), l) else gotcord=cord(i,l) kb=0 end if itempf = 1 endif enddo endif c................................................. if(irray(i,0).eq.0.or.((irray(i,0).eq.(-i-1000).or. 1 (irray(i,0).lt.-200000000)).and. 2 itempf.eq.0)) then c we have either an non-boundary omr node c , or a boundary OMR node but with boundary oriented in a plane c different from that given by la. Define a c fictious control volume face c find the node furtherest from i in the la direction c form the list of nodes with a bad connection with i in the c la direction (these are saved in isave_omr) dmax=0. jtemp_count=0 do j=0,komrmax,1 if(j.eq.0) then jkb=abs(irray(i,la)) else jkb=0 jk=isave_omr(la,j) if(jk.gt.0) jkb=nelm(jk) endif if(jkb.gt.0) then d=abs(cord(i,l)-cord(jkb,l)) if(d.gt.dmax) then c saving jkb in jtemp if a search is needed below over c the neighbours of jkb to avoind creating holes in the mesh. jtemp_count=jtemp_count+1 jtemp(jtemp_count)=jkb dmax=d jkbmax=jkb endif endif enddo if(jkbmax.gt.0 ) then gotcord=cord(jkbmax,l) kb=jkbmax else write(ierr,*)'STOP. getcord found jkbmax=0' write(ierr,*)'i,kb,la=',i,kb,la if (iptty .ne. 0) then write(iptty,*)'STOP. getcord found jkbmax=0' write(iptty,*)'i,kb,la=',i,kb,la end if call exit_ptrac1 endif c now check for the rare situation when c the node i lies on the side of a rectangle which has the central c node missing due to refinement on all sides, c and if so, getcord is set equal to the node on the other side c of the squarerectangle, thus creating cc's that overlap in the c middle of this rectangle, but avoid creating holes in the model c this situation can only arise if the grid on the 'la' side is c one level coarse compared to the grid on the '-la' side. c the search for a node needs to be only over the neighbours of the c nodes stored in jtemp(1:jtemp_count) jkc=jkbmax j1=nelm(jkc)+1 j2=nelm(jkc+1) do j=j1,j2 jkb=nelm(j) if(jkb.ne.i) then djkb=abs(cord(i,l)-cord(jkb,l)) if(djkb.eq.dmax*2.) then c the node jkb is at the expected distance from i along la axix c check if the other 2 coordinates match do jjjj=1,3 if(jjjj.ne.l) then if(cord(i,jjjj).ne.cord(jkb,jjjj)) goto 91911 endif enddo c the coordinates match, c find the normal axis to the coordinate plane formed the nodes c i,jkc. note c that i and jkb lie along the la coordinate axis do jjjj=1,3 if(jjjj.ne.l) then if(cord(i,jjjj).eq.cord(jkc,jjjj)) then i_dir=jjjj goto 91913 endif endif enddo c i and jkc not in a coordinate plane, continue with other c neighbours of jkc goto 91911 91913 continue c now check if the nodes i, jkc and jkb form c a closed figure with another neighbour of i in the plane c corrosponding to the coordinate i_dir. need to search only c only the nodes on the la side of i, ie isave_omr(la,iii) do iii=1,komrmax jk=isave_omr(la,iii) if(jk.gt.0) then jkd=nelm(jk) if(jkd.ne.jkc) then c jkd is already ne jkb, and ne i if(cord(i,i_dir).eq.cord(jkd,i_dir)) then c found a neighbour of i in the plane of i-jkc-jkb. See if it is c connected to jkb, forming a closed figure jkb1=nelm(jkb)+1 jkb2=nelm(jkb+1) do jkbb=jkb1,jkb2 if(jkd.eq.nelm(jkbb)) then c closed figure is formed. By construction, we know that jkb and c i are on the opposite sides of the line jkc-jkd: this is because c jkc is the neighbour of i that is furthest from i in the la c direction, and jkb is twice as far. Now check if c jkc and jkd are on the opposite sides of the line i-jkb. Note that c the line i-jkb is parallel to the la axis. 
First find l_perp, c the axis normal to la(and l) and i_dir if(l.eq.1) then if(i_dir.eq.2) then l_perp=3 elseif(i_dir.eq.3) then l_perp=2 else write(iptty,*)' Error in gotcord. l=i_dir=1' stop endif elseif(l.eq.2) then if(i_dir.eq.3) then l_perp=1 elseif(i_dir.eq.1) then l_perp=3 else write(iptty,*)' Error in gotcord. l=i_dir=2' stop endif elseif(l.eq.3) then if(i_dir.eq.2) then l_perp=1 elseif(i_dir.eq.1) then l_perp=2 else write(iptty,*)' Error in gotcord. l=i_dir=3' stop endif endif dtemp1=cord(i,l_perp)-cord(jkd,l_perp) dtemp2=cord(i,l_perp)-cord(jkc,l_perp) if((dtemp1*dtemp2).lt.0.) then c set getchord equal to cord of node jkb c and exit the search loop. jkb is the neighbour of a neighbour, c and not in the original neighbour list for i in nelm. So c save jkb as the la-neighbour of i in irray(i,la) c irray(i,la)=jkb c irray(jkb,-la)=i gotcord=cord(jkb,l) kb=jkb goto 91912 endif endif enddo endif endif endif enddo endif endif 91911 continue enddo 91912 continue endif endif return end c.................................................................. subroutine flag_boundry(i,i1,i2,i3,iflag_boundry) use combi use comdi, only : ps use comsk implicit none integer i,i1,i2,i3,iflag_boundry,i3ab,i3sign,kb,ksign integer k real*8 dist c....s kelkar feb 3, 04, 3D ORM stuff............ c flag OMR nodes that are on the exterior boundary. The subroutine c flag_boundry checks orientations for missing nodes, and if they are c present, then checks if any other connetcions exist on the same side c of the axis. If not, then it is a boundry node. i3ab=iabs(i3) i3sign=isign(1,i3) do k=i1,i2 kb=nelm(k) if (ps(kb) .gt. 0.d0) then dist=cord(kb,i3ab)-cord(i,i3ab) ksign=dsign(1.d0,dist) if(ksign.eq.i3sign .and. abs(dist).gt.1.e-20) then c found a neighbour node on the same side as the missing c node. So i is not a boundry node. set flag and return iflag_boundry = 0 goto 9999 endif end if enddo c did not find any neighbour nodes on the same side as the c missing node, so node i must be on an exterior boundry. c set flag iflag_boundry= +1 9999 continue return end subroutine flag_boundry c................................................................... subroutine exit_ptrac1 stop return end subroutine exit_ptrac1 c........................................................... subroutine ddx_corn_array c set up ddx, ddy,ddz and corn arrays c NOTE: using ggg as scratch storage for saving del+ and del- c temperorl use comai, only : neq, isptr9 use combi, only : cord use comsptr use comsk implicit none integer i,j real*8 del_plus,del_minus,dtemp(3) do i=1,neq do j=1,3 del_plus=ggg(i,j) del_minus=ggg(i,-j) dtemp(j)=del_plus+del_minus corn(i,j)=cord(i,j)-del_minus enddo ddx(i)=dtemp(1) ddy(i)=dtemp(2) ddz(i)=dtemp(3) enddo c s kelkar sep 12 05 volume output for plumecalc if(sptrx_flag) then call sptr_volume_out endif return end subroutine ddx_corn_array c.................................................................... subroutine boundary_nodes(i,omrflag) c....s kelkar April 4, 2005 c flag nodes that are on the exterior boundary. The subroutine c flag_boundry checks orientations for missing nodes, and if they are c present, then checks if any other connetcions exist on the same side c of the axis. If not, then it is a boundry node. 
c
c iboulist(i,7 )=# of boundary faces for node i (max 6)
c iboulist(i,1:6)= codes for boundary faces of node i
c irray(i,0) = +i : regular node, not a source/sink
c irray(i,0) = -i-2000 : regular node on an external boundary
c irray(i,0) = -i : regular interior node that is a sink/source
c but not explicitly specified in sptr macro
c -100000000 < irray(i,0) < -10000000 : regular interior node that is
c specified as a sink/source in the sptr macro
c in this case -(irray(i,0)+10000000) is the pointer
c for the storage location in well_radius for this node
c = -(i+2000000) : spring node
c = -(i+1000000) : well-capture node on external bound
c similar to =-i case but with half space solution
c irray(i,0) = 0 : OMR node not on boundary
c irray(i,0) = -i-1000 : OMR node on an external boundary
c -200000000 < irray(i,0) < -100000000 : non-OMR cliff node
c irray(i,0) < -200000000 : OMR cliff node
c
      use comai, only : ierr, iptty
      use combi, only : nelm
      use comsk
      use comsptr
      implicit none
      integer i,ii1,ii2,i3,iflag_boundry,omrflag
      integer ibou,upper_limit

      ibou=0
      upper_limit=3
      ii1=nelm(i)+1
      ii2=nelm(i+1)
      do i3=1,upper_limit
         iflag_boundry=0
         if(irray(i,+i3).le.0) then
            call flag_boundry(i,ii1,ii2,+i3,iflag_boundry)
         endif
         if(iflag_boundry.eq.1) then
            if(irray(i,0).gt.-100000000) then
               if(omrflag.eq.1) then
                  irray(i,0)=-i-1000
               elseif(irray(i,0).ne.-(i+1000000).and.irray(i,0).ne.
     $              -(i+2000000)) then
                  irray(i,0)=-i-2000
               endif
            endif
            ibou=ibou+1
            if(ibou.gt.6) then
               write(ierr, 1002)
               if (iptty .ne. 0) write(iptty, 1002)
               call exit_ptrac1
            endif
            if (omr_flag) iboulist(i,ibou)=i3
         endif
         iflag_boundry=0
         if(irray(i,-i3).le.0) then
            call flag_boundry(i,ii1,ii2,-i3,iflag_boundry)
         endif
         if(iflag_boundry.eq.1) then
            if(irray(i,0).gt.-100000000) then
               if(omrflag.eq.1) then
                  irray(i,0)=-i-1000
               elseif(irray(i,0).ne.-(i+1000000).and.irray(i,0).ne.
     $              -(i+2000000)) then
                  irray(i,0)=-i-2000
               endif
            endif
            ibou=ibou+1
            if(ibou.gt.6) then
               write(ierr, 1002)
               if (iptty .ne. 0) write(iptty, 1002)
               call exit_ptrac1
            endif
            if (omr_flag) iboulist(i,ibou)=-i3
         endif
      enddo
 1002 format ('Error in struct_conn_array, ibou > 6: STOPPING')
c save the number of faces on the boundary
      if (omr_flag) iboulist(i,7)=ibou

      return
      end subroutine boundary_nodes
c...................................................
      subroutine subomr(i)
c s kelkar 1 nov 0
c determine which faces the omr nodes lie on
      use comai
      use combi
      use comci
      use comdi
      use comflow
      use comsptr
      use comsk
      use compart
      implicit none
      integer i,i1,k,ia,kb,ikb,ipos
      real*8 epsilon,xkb,xia,d

      epsilon=1.e-8
      do i1=-3,3
         komr_count(i1)=0
      enddo
      do i1=1,iomr_count
c NOTE: in iomr_neighbour,saved ikb rather than the node # kb
c because ikb can be directly used as a pointer to sx and a_axy
c arrays and kb can be retrieved from nelm(ikb)
         ikb=iomr_neighbour(i1)
         kb=nelm(ikb)
c now figure out the geometry stuff
c Look for type II connection, ie, kb is on a face defined
c without a central node, going from higher to lower level omr.
c we need to look at only those values of l for which irray(i,l)=0
c a connection node is missing. Use the sign of the difference
c in the l coordinate as an indicator
         do k=1,3
            ia=abs(irray(i,k))
            if(ia.eq.0) then
               d=cord(kb,k)-cord(i,k)
               if(d.gt.epsilon) then
                  komr_count(k)=komr_count(k)+1
                  if(komr_count(k).gt.komrmax) then
                     write(ierr,*)'komr_count(k).gt.komrmax in ',
     &                    'subomr'
                     write(ierr,*)'change dimension in comomr.'
                     write(ierr,*)'i,iomr_count',i,iomr_count
                     write(ierr,*)'STOP'
                     if (iptty .ne. 0) then
                        write(iptty,*)'komr_count(k).gt.komrmax in ',
     &                       'subomr'
                        write(iptty,*)'change dimension in comomr.'
write(iptty,*)'i,iomr_count',i,iomr_count write(iptty,*)'STOP' end if call exit_ptrac1 endif isave_omr(k,komr_count(k))=ikb ! commenting out the next goto 9191 to allow node kb to be included ! as a neighbour on multiple sides of i c goto 9191 endif endif ia=abs(irray(i,-k)) if(ia.eq.0) then d=cord(kb,k)-cord(i,k) if(d.lt.-(epsilon)) then komr_count(-k)=komr_count(-k)+1 if(komr_count(-k).gt.komrmax) then write(ierr,*)'komr_count(k).gt.komrmax in ', & 'subomr' write(ierr,*)'change dimension in comomr.' write(ierr,*)'STOP' if (iptty .ne. 0) then write(iptty,*)'komr_count(k).gt.komrmax in ', & 'subomr' write(iptty,*)'change dimension in comomr.' write(iptty,*)'STOP' end if call exit_ptrac1 endif isave_omr(-k,komr_count(-k))=ikb c goto 9191 endif endif enddo c check if its a type-I connection, ie if kb is on a face defined c by a central node, going from lower to higher level omr. c check distance from each normal plane to kb to see if it is in the c plane c do k=1,3 xkb=cord(kb,k) ia=abs(irray(i,k)) if(ia.gt.0) then xia=cord(ia,k) d=abs(xkb-xia) if(d.lt.epsilon) then komr_count(k)=komr_count(k)+1 if(komr_count(k).gt.komrmax) then write(ierr,*)'komr_count(k).gt.komrmax in ', & 'subomr' write(ierr,*)'change dimension in comomr.' write(ierr,*)'STOP' if (iptty .ne. 0) then write(iptty,*)'komr_count(k).gt.komrmax in ', & 'subomr' write(iptty,*)'change dimension in comomr.' write(iptty,*)'STOP' end if call exit_ptrac1 endif isave_omr(k,komr_count(k))=ikb c goto 9191 endif endif ia=abs(irray(i,-k)) if(ia.gt.0) then xia=cord(ia,k) d=abs(xkb-xia) if(d.lt.-(epsilon)) then komr_count(-k)=komr_count(-k)+1 if(komr_count(-k).gt.komrmax) then write(ierr,*)'komr_count(-k).gt.komrmax in ', & 'subomr' write(ierr,*)'change dimension in comomr.' write(ierr,*)'STOP' if (iptty .ne. 0) then write(iptty,*)'komr_count(k).gt.komrmax in ', & 'subomr' write(iptty,*)'change dimension in comomr.' write(iptty,*)'STOP' end if call exit_ptrac1 endif isave_omr(-k,komr_count(-k))=ikb c goto 9191 endif endif enddo 9191 continue enddo return end c....................................................................... subroutine wtsi_ptrac1_init c s kelkar aug 29, 05 c for water table nodes, if S(ijkv(np1))>Smin then set the initial c position below ddz*S, if S<Smin then move the particle vertically c downward until a node with S>Smin is encountered. use comai, only : days use comdi, only : izone_free_nodes,s use comsptr use comsk implicit none integer i,inp1,newnode real*8 xp,yp,zp,dumm,zwt do i=1,num_part inp1=ijkv(i) c move only if it is time for the particle to move if (izone_free_nodes(inp1).gt.1 .and. ttp1(i) .le. days) then zp=zo(i)-corn(inp1,3) zwt=ddz(inp1)*s(inp1) if(zp.gt.zwt) then xp=xo(i)-corn(inp1,1) yp=yo(i)-corn(inp1,2) newnode=inp1 call wtsi_find_water(inp1,i,xp,yp,zp,newnode) if (newnode .ne. 0) then call wtsi_displace_node(inp1,i,xp,yp,zp,newnode) ijkv(i)=newnode xo(i)=xp+corn(newnode,1) yo(i)=yp+corn(newnode,2) zo(i)=zp+corn(newnode,3) end if endif endif enddo end subroutine wtsi_ptrac1_init c.................................................................... subroutine wtsi_find_water(inp1,np1,xp,yp,zp,newnode) c s kelkar aug 30, 05 c If newnode has irreducible water c saturation, flagged by izone_free_nodes(inp1).gt.1 then c search vertically downward to find a node with flowing water. 
use comai, only : ierr, iptty use comdi use comsptr use comsk implicit none integer inp1,np1,newnode,j,nextnode, node_flag, nodetemp integer node_previous,ibou,i1 real*8 xp,yp,zp real*8 epsilon epsilon= 1.e-10 if (izone_free_nodes(newnode).ge.3.or.s(newnode).lt.smin) then node_previous=newnode do j=1,1000000 node_flag=irray(node_previous,0) if(node_flag.eq.0) then c nextnode is interior OMR, do OMR check c xc and yc are updated wrt nodetemp in wtsi_neighbour and also c zc is set on the +3 boundary of nodetemp. call wtsi_neighbour2(inp1,np1,node_previous, 1 xp,yp,zp, nodetemp) if(izone_free_nodes(nodetemp).le.2.and.s(nodetemp) 1 .ge.smin) then c found a valid flowing node, return newnode=nodetemp goto 99999 else node_previous = nodetemp endif elseif(node_flag.eq.-(1000+node_previous)) then c node_previous is a boundary OMR node. check boundary faces, c stored in iboulist(), to see if the -3 plane is a c boundary plane. if so, particle is exiting the model- return. c If it is not, find the new nearest node. ibou=iboulist(node_previous,7) do i1=1,ibou if(-3.eq.iboulist(node_previous,i1)) then c particle exited the model newnode=0 write(ierr,*)'ptrac1:find_water_table. cant find' write(ierr,*)'water table. Check initial particle' write(ierr,*)'locations. np1=',np1 write(ierr,*)'exit_ptrac1. stop.' call exit_ptrac1 c goto 99999 endif enddo c node_previous not on -3 boundary. In wtsi_neighbour, c xc and yc are updated wrt nodetemp in wtsi_neighbour and also c zc is set on the +3 boundary of nodetemp. call wtsi_neighbour2(inp1,np1,node_previous, 1 xp,yp,zp, nodetemp) if(izone_free_nodes(nodetemp).le.2.and.s(nodetemp) 1 .ge.smin) then c found a valid flowing node, return newnode=nodetemp goto 99999 else node_previous = nodetemp endif else c regular interior node nextnode=irray(node_previous,-3) if(nextnode.gt.0) then c wtsi_neighbour has not been called, so need to update xc,yc,zc c change xc,yc to refere to nextnode and zc slightly below c + 3 face of nextnode. xp=xp+corn(node_previous,1)-corn(nextnode,1) yp=yp+corn(node_previous,2)-corn(nextnode,2) zp=(1.-epsilon)*ddz(nextnode) If(izone_free_nodes(nextnode).le.2.and.s(nextnode) 1 .ge.smin) then c found a valid flowing node, return newnode=nextnode goto 99999 else c nextnode doesnt have flowing water,continue checking -3 neighbour node_previous = nextnode endif elseif(nextnode.eq.0) then c bottom node- particle has to exit newnode=0 else write(iptty,*)'error wtsi_newnode.nextnode=',nextnode write(iptty,*)'STOP' write(ierr,*)'error wtsi_newnode.nextnode=',nextnode write(ierr,*)'STOP' stop endif endif enddo else c found a valid flowing node, return endif 99999 continue return end subroutine wtsi_find_water c........................................................... subroutine wtsi_neighbour2(inp1,np1,node_previous,xc,yc,zc, 1 nodetemp) use comai, only : ierr, iptty use comdi use comsptr use comsk implicit none integer inp1,np1,j, nodetemp, node_previous real*8 xc,yc,zc,epsilon,zcc epsilon=1.e-8 c an Internal OMR node, check for neighbours on -3 side c to begin ,place zcc slightly below -3 face of node_previous zcc=-epsilon*ddz(node_previous) call nearest_node(node_previous,np1,-3,xc,yc,zcc, nodetemp) if(nodetemp.le.0) then write(ierr,*)'Hole in the model. node=',node_previous write(ierr,*)'stop in wtsi_neighbour2' if (iptty .ne. 0) then write(iptty,*)'Hole in the model. 
node=',node_previous write(iptty,*)'stop in wtsi_neighbour2' endif call update_exit(-inp1,np1,-100,nodetemp, $ 0.,xc,yc,zc) c stop end if c change xc,yc to refere to nodetemp c place zc at +3 face of nodetemp xc=xc+corn(node_previous,1)-corn(nodetemp,1) yc=yc+corn(node_previous,2)-corn(nodetemp,2) zc=(1.-epsilon)*ddz(nodetemp) return end subroutine wtsi_neighbour2 c........................................................... subroutine wtsi_newlocation2(nodeabove,np1,nodebelow,zc) use comdi use comsptr use comsk implicit none integer nodeabove,np1,nodebelow,j real*8 dzw,zc, epsilon epsilon=1.e-12 dzw=s(nodebelow)*ddz(nodebelow) if((dzw+corn(nodebelow,3)).lt.(ddz(nodeabove)+corn(nodeabove,3))) 1 then c at this point zc is assumed to be wrt nodebelow already. if(zc.gt.dzw) then zc=dzw*(1.-deltawt) endif endif return end subroutine wtsi_newlocation2 c........................................................... subroutine wtsi_displace_node(inp1,np1,xp,yp,zp,newnode) c s kelkar aug 30, 05 c displace particle location vertically downward to deltawt c meters below the water table and find the new node use comai, only : ierr, iptty use comdi use comsptr use comsk implicit none integer inp1,np1,newnode,j,nextnode, node_flag, nodetemp integer node_previous,ibou,i1 real*8 xp,yp,zp,zwt,zptemp real*8 epsilon epsilon= 1.e-10 c place the particle deltawt(m) below the water table. note that c xp,yp,zp are wrt newnode zwt=ddz(newnode)*s(newnode) zp=zwt*(1-epsilon)-abs(deltawt) if (zp.lt.0.) then node_previous=newnode do j=1,1000000 node_flag=irray(node_previous,0) if(node_flag.eq.0) then c nextnode is interior OMR, do OMR check c xp,yp,zp are updated wrt nodetemp in wtsi_neighbour3 c zp might be far below corn of node_previous, in which c case nearest_node might not find nodetemp in level_max c iterations, so use zptemp to place the particle c just below corn(node_previous,3) zptemp=-epsilon call wtsi_neighbour3(inp1,np1,node_previous, 1 xp,yp,zptemp, nodetemp) c recalculate zp wrt nodetemp zp=zp+corn(node_previous,3)-corn(nodetemp,3) if(zp.gt.0.) then c found the new CC, return newnode=nodetemp goto 99999 else node_previous = nodetemp endif elseif(node_flag.eq.-(1000+node_previous)) then c node_previous is a boundary OMR node. check boundary faces, c stored in iboulist(), to see if the -3 plane is a c boundary plane. if so, particle is exiting the model- return. c If it is not, find the new nearest node. ibou=iboulist(node_previous,7) do i1=1,ibou if(-3.eq.iboulist(node_previous,i1)) then c particle exited the model newnode=0 write(ierr,*)'ptrac1:find_water_table. cant find' write(ierr,*)'water table. Check initial particle' write(ierr,*)'locations. np1=',np1 write(ierr,*)'exit_ptrac1. stop.' call exit_ptrac1 c goto 99999 endif enddo c node_previous not on -3 boundary. In wtsi_neighbour, c xc and yc are updated wrt nodetemp and also c zc is set on the +3 boundary of nodetemp. zptemp=-epsilon call wtsi_neighbour3(inp1,np1,node_previous, 1 xp,yp,zptemp, nodetemp) c recalculate zp wrt nodetemp zp=zp+corn(node_previous,3)-corn(nodetemp,3) if(zp.gt.0.) then c found the new CC, return newnode=nodetemp goto 99999 else node_previous = nodetemp endif else c regular interior node nextnode=irray(node_previous,-3) if(nextnode.gt.0) then c wtsi_neighbour has not been called, so need to update xc,yc,zc c change xc,yc to refere to nextnode and zc slightly below c + 3 face of nextnode. 
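c Clarifying note (added): particle coordinates are stored relative
c to the cell corner, global = local + corn(node,axis), so moving a
c particle to a new cell re-bases each coordinate by the difference
c of the two corners, e.g. xp_new = xp_old + corn(old,1) - corn(new,1),
c which is exactly the pattern of the three statements below.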
xp=xp+corn(node_previous,1)-corn(nextnode,1) yp=yp+corn(node_previous,2)-corn(nextnode,2) zp=zp+corn(node_previous,3)-corn(nextnode,3) If(zp.gt.0.) then c found the new CC, return newnode=nextnode goto 99999 else c not in CC of nextnode ,continue checking -3 neighbour node_previous = nextnode endif elseif(nextnode.eq.0) then c bottom node- particle displacement outside the model c place the particle in the last CC newnode=node_previous else write(iptty,*)'error wtsi_displace_node.nextnode=', 1 nextnode write(iptty,*)'STOP' write(ierr,*)'error wtsi_displace_node.nextnode=', 1 nextnode write(ierr,*)'STOP' stop endif endif enddo else c found the new CC, return endif 99999 continue return end subroutine wtsi_displace_node c........................................................... subroutine wtsi_neighbour3(inp1,np1,node_previous,xc,yc,zc, 1 nodetemp) use comai, only : ierr, iptty use comdi use comsptr use comsk implicit none integer inp1,np1,j, nodetemp, node_previous real*8 xc,yc,zc,epsilon,zcc epsilon=1.e-8 c an Internal OMR node, check for neighbours on -3 side call nearest_node(node_previous,np1,-3,xc,yc,zc, nodetemp) if(nodetemp.le.0) then write(ierr,*)'Hole in the model. node=',node_previous write(ierr,*)'stop in wtsi_neighbour' write(iptty,*)'Hole in the model. node=',node_previous write(iptty,*)'stop in wtsi_neighbour' stop endif c change xc,yc,zc to refere to nodetemp xc=xc+corn(node_previous,1)-corn(nodetemp,1) yc=yc+corn(node_previous,2)-corn(nodetemp,2) zc=zc+corn(node_previous,3)-corn(nodetemp,3) return end subroutine wtsi_neighbour3 c........................................................... subroutine sptr_volume_out c sep 13 05 s kelkar c store volumes for plumecalc use comai, only : iw, neq, neq_primary, isptr9 use combi use comsptr use comsk use comxi, only : cform implicit none logical opened integer i1, i1flag, i, ncoef, max_con real*8, allocatable :: sx1temp(:) allocate (sx1temp(neq)) do i1=1,neq i1flag=irray(i1,0) if(i1flag.eq.0.or.i1flag.eq.-(i1+1000).or. 1 (i1flag.lt.-200000000)) then c omr node- compute the volume of the approximate brick shape sx1temp(i1)=ddx(i1)*ddy(i1)*ddz(i1) else sx1temp(i1)=sx1(i1) endif enddo ncoef=0 max_con=0 if(cform(26).eq.'formatted') then c formatted write(isptr9, '(5i10)' ) iw, neq_primary, & nelm(neq_primary+1), ncoef, max_con write(isptr9, '(5(1pe20.10))') & (sx1temp(i), i = 1, neq_primary) write(isptr9, '(5i10)' ) & (nelm(i), i = 1, nelm(neq_primary+1)) elseif(cform(26).eq.'unformatted') then c unformatted write(isptr9) iw, neq_primary, nelm(neq_primary+1), & ncoef, max_con write(isptr9) & (sx1temp(i), i = 1, neq_primary) write(isptr9) & (nelm(i), i = 1, nelm(neq_primary+1) ) endif if(allocated(sx1temp)) deallocate(sx1temp) close(isptr9) return end subroutine sptr_volume_out ****************************************************************** subroutine init_sptr_params use comai, only : neq use combi, only : izonef use comci, only : rolf use comdi, only : denr, diffmfl, ifree, itrc, ps_trac, s use compart, only : aperture, kd, matrix_por, secondary use comsptr use davidi, only : irdof implicit none integer i, np1 real*8 denominator, rprime, spacing c Subroutine to initialize particle tracking parameters if(nzbtc.gt.0) then c zvd - initialized in allocmem and used in insptr, don't reset here c izonebtc = 0 end if cHari 01-Nov-06 include colloid diversity model (tprpflag=11) do i = 1, neq if(tprpflag(itrc(i)).eq.1.or.tprpflag(itrc(i)).eq.2.or. 2 tprpflag(itrc(i)).eq.11) then c Compute denominator, make sure no divide by 0 if (irdof .ne. 
13 .or. ifree .ne. 0) then denominator = rolf(i)*s(i)*ps_trac(i) else denominator = rolf(i)*ps_trac(i) endif denominator = max(1.d-30, denominator) omega_partial(i) = 1.+kd(itrc(i),1)* 2 denr(i)/denominator elseif(tprpflag(itrc(i)).eq.3.or.tprpflag(itrc(i)).eq.4) then if (irdof .ne. 13 .or. ifree .ne. 0) then denominator = rolf(i)*s(i)*matrix_por(itrc(i)) else denominator = rolf(i)*matrix_por(itrc(i)) endif denominator = max(1.d-30, denominator) rprime = 1.+kd(itrc(i),1)* 2 denr(i)/denominator c aperture = 2 * b in Sudicky and Frind solution c spacing = 2 * B in Sudicky and Frind solution if(aperture(itrc(i)).lt.0.) then if (irdof .ne. 13 .or. ifree .ne. 0) then omega_partial(i) = (matrix_por(itrc(i))*s(i))**2* 2 diffmfl(1,itrc(i))*rprime else omega_partial(i) = (matrix_por(itrc(i)))**2* 2 diffmfl(1,itrc(i))*rprime endif sigma_partial(i) = aperture(itrc(i)) else if (secondary(itrc(i)) .ne. 0.) then spacing = secondary(itrc(i)) else spacing = abs(aperture(itrc(i)))/max(1.d-30, & ps_trac(i)) end if if (irdof .ne. 13 .or. ifree .ne. 0) then omega_partial(i) = s(i)*matrix_por(itrc(i))* 2 sqrt(rprime*diffmfl(1,itrc(i)))/(0.5* 3 abs(aperture(itrc(i)))) else omega_partial(i) = matrix_por(itrc(i))* 2 sqrt(rprime*diffmfl(1,itrc(i)))/(0.5* 3 abs(aperture(itrc(i)))) endif sigma_partial(i) = sqrt(rprime/diffmfl(1,itrc(i)))*0.5* 3 (spacing-abs(aperture(itrc(i)))) end if end if end do return end subroutine init_sptr_params ******************************************************************
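c Stand-alone illustrative sketch (not part of FEHM): demonstrates the
c linear cell indexing used in particle_patch above,
c ip = ix + (iy-1)*nx + (iz-1)*nx*ny, together with its inverse. The
c nx/ny/nz values here are arbitrary assumptions for the demo.
      program patch_index_demo
      implicit none
      integer nx, ny, nz, ix, iy, iz, ip, jx, jy, jz
      nx = 3
      ny = 4
      nz = 2
      do iz = 1, nz
         do iy = 1, ny
            do ix = 1, nx
c forward map: (ix,iy,iz) -> linear particle index ip
               ip = ix + (iy-1)*nx + (iz-1)*nx*ny
c inverse map: recover (jx,jy,jz) from ip
               jz = (ip-1)/(nx*ny) + 1
               jy = (ip-1-(jz-1)*nx*ny)/nx + 1
               jx = ip - (jy-1)*nx - (jz-1)*nx*ny
               if (jx.ne.ix .or. jy.ne.iy .or. jz.ne.iz) then
                  write(*,*) 'index mismatch at ip =', ip
               end if
            end do
         end do
      end do
      write(*,*) 'all', nx*ny*nz, 'indices round-trip correctly'
      end program patch_index_demo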
(* This Isabelle theory is produced using the TIP tool offered at the
   following website: https://github.com/tip-org/tools
   This file was originally provided as part of the TIP benchmark at the
   following website: https://github.com/tip-org/benchmarks
   Yutaka Nagashima at CIIRC, CTU changed the TIP output theory file
   slightly to make it compatible with Isabelle2017. *)
theory TIP_list_concat_map_bind
  imports "../../Test_Base"
begin

datatype 'a list = nil2 | cons2 "'a" "'a list"

fun x :: "'a list => 'a list => 'a list" where
  "x (nil2) y2 = y2"
| "x (cons2 z2 xs) y2 = cons2 z2 (x xs y2)"

fun y :: "'a list => ('a => 'b list) => 'b list" where
  "y (nil2) y2 = nil2"
| "y (cons2 z2 xs) y2 = x (y2 z2) (y xs y2)"

fun map :: "('a => 'b) => 'a list => 'b list" where
  "map f (nil2) = nil2"
| "map f (cons2 y2 xs) = cons2 (f y2) (map f xs)"

fun concat :: "('a list) list => 'a list" where
  "concat (nil2) = nil2"
| "concat (cons2 y2 xs) = x y2 (concat xs)"

theorem property0 :
  "((concat (map f xs)) = (y xs f))"
  oops

end
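(* Worked instance of property0, added as an informal sketch: here x is
   list append and y is the monadic bind for lists, so the statement is
   the usual "concat . map f = bind f" law. With
   xs = cons2 a (cons2 b nil2), both sides unfold to the same term:
     concat (map f xs) = concat (cons2 (f a) (cons2 (f b) nil2))
                       = x (f a) (x (f b) nil2)
     y xs f            = x (f a) (y (cons2 b nil2) f)
                       = x (f a) (x (f b) nil2)
   i.e. the concatenation of (f a) and (f b). *)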
library(GenomicRanges)
library(bumphunter)
library(RColorBrewer)
library(ggplot2) # needed for the ggplot figure code below
load('/dcl01/lieber/ajaffe/lab/brain-epigenomics/bumphunting/BSobj_bsseqSmooth_Neuron_minCov_3.Rdata')
load("/dcl01/lieber/ajaffe/lab/brain-epigenomics/rdas/DMR/DMR_objects.rda")
load("/dcl01/lieber/ajaffe/lab/brain-epigenomics/bumphunting/rda/limma_Neuron_CpGs_minCov_3_ageInfo_dmrs.Rdata")
# load HARs
HARs = openxlsx::read.xlsx('/dcl01/lieber/ajaffe/lab/brain-epigenomics/rdas/HARs_hg19_Doan_Walsh_Table_S1.xlsx')
hars = makeGRangesFromDataFrame(HARs, keep.extra.columns=T)
length(hars) # 2737
# Identify all CpG clusters in the genome
gr <- granges(BSobj)
cl = clusterMaker( chr = as.character(seqnames(gr)), pos = start(gr), maxGap = 1000)
gr.clusters = split(gr, cl)
gr.clusters = unlist(range(gr.clusters))
df.clusters = as.data.frame(gr.clusters)
# Find overlaps with DMRs in all three models
dmrs = split(dmrs, dmrs$k6cluster_label)
names(dmrs) = c("Gr1","Gr2","Gr3","Gr4","Gr5","Gr6")
oo = lapply(dmrs, function(x) findOverlaps(x, makeGRangesFromDataFrame(DMR$Interaction)))
dmrs = lapply(oo, function(x) DMR$Interaction[subjectHits(x),])
DMRgr = lapply(c(DMR, dmrs), function(x) makeGRangesFromDataFrame(x[which(x$fwer<=0.05),], keep.extra.columns=T))
oo = lapply(DMRgr, function(x) findOverlaps(gr.clusters, x))
lapply(oo, function(x) length(unique(queryHits(x))))
harOverlap = findOverlaps(hars, gr.clusters)
df.clusters$regionID = paste0(df.clusters$seqnames,":",df.clusters$start,"-",df.clusters$end)
df.clusters$rnum = 1:length(gr.clusters)
df.clusters$CellType = ifelse(df.clusters$rnum %in% queryHits(oo$CellType), "CellType","no")
df.clusters$Age = ifelse(df.clusters$rnum %in% queryHits(oo$Age), "Age","no")
df.clusters$Interaction = ifelse(df.clusters$rnum %in% queryHits(oo$Interaction), "Interaction","no")
df.clusters$Gr1 = ifelse(df.clusters$rnum %in% queryHits(oo$Gr1), "Gr1","no")
df.clusters$Gr2 = ifelse(df.clusters$rnum %in% queryHits(oo$Gr2), "Gr2","no")
df.clusters$Gr3 = ifelse(df.clusters$rnum %in% queryHits(oo$Gr3), "Gr3","no")
df.clusters$Gr4 = ifelse(df.clusters$rnum %in% queryHits(oo$Gr4), "Gr4","no")
df.clusters$Gr5 = ifelse(df.clusters$rnum %in% queryHits(oo$Gr5), "Gr5","no")
df.clusters$Gr6 = ifelse(df.clusters$rnum %in% queryHits(oo$Gr6), "Gr6","no")
df.clusters$HARs = ifelse(df.clusters$rnum %in% subjectHits(harOverlap), "HAR","no")
## make contingency tables
tables = list()
for (i in 1:length(names(DMRgr))) { tables[[i]] = data.frame(YesHAR = c(nrow(df.clusters[df.clusters$HARs=="HAR" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$HARs=="HAR" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHAR = c(nrow(df.clusters[df.clusters$HARs=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$HARs=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")) }
names(tables) = names(DMRgr)
fisher = lapply(tables, fisher.test)
df = do.call(rbind, Map(cbind, lapply(fisher, function(x) data.frame(OR = x$estimate, upper = x$conf.int[2], lower = x$conf.int[1], pval = x$p.value)), Model = as.list(names(fisher))))
df$fdr = p.adjust(df$pval, method= "fdr")
## test for enrichment of conserved, evolutionarily dated enhancers
noonan = openxlsx::read.xlsx('/dcl01/lieber/ajaffe/lab/brain-epigenomics/rdas/Emera_Noonan_supptable_1.xlsx')
x = GRanges(noonan[,1])
mcols(x) = noonan[,2:3]
noonan = x
length(noonan) # 30526
noonan = c(list(enhancers = noonan),
as.list(split(noonan, noonan$Phylogenetic.age.assignment))) noonanOv = lapply(noonan, function(x) findOverlaps(x, gr.clusters)) hits = lapply(noonanOv, function(x) ifelse(df.clusters$rnum %in% subjectHits(x), "yes","no")) for (i in 1:length(hits)) { df.clusters[,17+i] = hits[[i]] } colnames(df.clusters)[18:28] = names(hits) ## make contingency tables tables = list() for (i in 1:length(names(DMRgr))) { tables[[i]] = list(enhancers = data.frame(YesHit = c(nrow(df.clusters[df.clusters$enhancers=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$enhancers=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$enhancers=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$enhancers=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), Amniota = data.frame(YesHit = c(nrow(df.clusters[df.clusters$Amniota=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Amniota=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$Amniota=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Amniota=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), Eutheria = data.frame(YesHit = c(nrow(df.clusters[df.clusters$Eutheria=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Eutheria=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$Eutheria=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Eutheria=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), Gnathostomata = data.frame(YesHit = c(nrow(df.clusters[df.clusters$Gnathostomata=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Gnathostomata=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$Gnathostomata=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Gnathostomata=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), Human = data.frame(YesHit = c(nrow(df.clusters[df.clusters$Human=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Human=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$Human=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Human=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), Mammalia = data.frame(YesHit = c(nrow(df.clusters[df.clusters$Mammalia=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Mammalia=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$Mammalia=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), 
nrow(df.clusters[df.clusters$Mammalia=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), "No age assignment" = data.frame(YesHit = c(nrow(df.clusters[df.clusters$"No age assignment"=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$"No age assignment"=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$"No age assignment"=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$"No age assignment"=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), Primate = data.frame(YesHit = c(nrow(df.clusters[df.clusters$Primate=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Primate=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$Primate=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Primate=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), Tetrapoda = data.frame(YesHit = c(nrow(df.clusters[df.clusters$Tetrapoda=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Tetrapoda=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$Tetrapoda=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Tetrapoda=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), Theria = data.frame(YesHit = c(nrow(df.clusters[df.clusters$Theria=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Theria=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$Theria=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Theria=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR")), Vertebrata = data.frame(YesHit = c(nrow(df.clusters[df.clusters$Vertebrata=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Vertebrata=="yes" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), NoHit = c(nrow(df.clusters[df.clusters$Vertebrata=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]==names(DMRgr)[i],]), nrow(df.clusters[df.clusters$Vertebrata=="no" & df.clusters[,colnames(df.clusters)==names(DMRgr)[i]]=="no",])), row.names = c("YesDMR","NoDMR"))) } names(tables) = names(DMRgr) fisher = lapply(tables, function(x) lapply(x, fisher.test)) res = do.call(rbind, Map(cbind, lapply(fisher, function(y) do.call(rbind, Map(cbind, lapply(y, function(x) data.frame(OR = x$estimate, upper = x$conf.int[2], lower = x$conf.int[1], pval = x$p.value)), Feature = as.list(names(y))))), Model = as.list(names(fisher)))) res$fdr = p.adjust(res$pval, method= "fdr") res$Feature = gsub("enhancers","All Enhancers", res$Feature) res$Feature = gsub("assignment","assigned", res$Feature) res$Feature = factor(res$Feature, levels=c("All Enhancers","No age assigned","Vertebrata","Gnathostomata","Tetrapoda", 
"Amniota","Mammalia","Theria","Eutheria","Primate","Human")) res = rbind(res, cbind(df[,1:4], Feature = "HAR", df[,5:6])) rownames(res) = NULL write.csv(res, quote=F, file = "/dcl01/lieber/ajaffe/lab/brain-epigenomics/rdas/DMR/HAR_conservedEnhancers_fisher.results.csv") ## Plot results pdf("/dcl01/lieber/ajaffe/lab/brain-epigenomics/DMR/figures/HARs_DMR_oddsRatios.pdf", height = 4) ggplot(df[which(df$Model %in% c("CellType", "Age", "Interaction")),], aes(Model, OR, fill = Model)) + geom_col() + theme_classic() + ylab("Odds Ratio") + theme(axis.text.x = element_text(angle = 45, hjust = 1)) + xlab("") + geom_hline(yintercept=1, linetype="dotted") + ggtitle("Enrichment for HARs") + theme(title = element_text(size = 20)) + theme(text = element_text(size = 20), legend.title=element_blank()) + theme(legend.position="bottom") ggplot(df[which(df$Model %in% c("Gr1", "Gr2", "Gr3","Gr4","Gr5","Gr6")),], aes(Model, OR, fill = Model)) + geom_col() + scale_fill_brewer(8, palette="Dark2") + theme_classic() + geom_hline(yintercept=1, linetype="dotted") + ylab("Odds Ratio") + xlab("") + theme(axis.text.x = element_text(angle = 45, hjust = 1)) + ggtitle("Enrichment for HARs") + theme(title = element_text(size = 20)) + theme(text = element_text(size = 20), legend.title=element_blank()) + theme(legend.position="bottom") dev.off() pdf("/dcl01/lieber/ajaffe/lab/brain-epigenomics/DMR/figures/conservedEnhancers_DMR_oddsRatios.pdf", width = 16, height = 4) ggplot(res[which(res$Model %in% c("CellType", "Age", "Interaction") & res$fdr<=0.05),], aes(Model,OR, fill = Model)) + geom_col() + theme_classic() + facet_grid(. ~ Feature) + ylab("Odds Ratio") + xlab("") + geom_hline(yintercept=1, linetype="dotted") + theme(axis.text.x = element_text(angle = 45, hjust = 1)) + ggtitle("Enrichment for Enhancers") + theme(title = element_text(size = 20)) + theme(text = element_text(size = 20), legend.title=element_blank()) + theme(legend.position="bottom") ggplot(res[which(res$Model %in% c("Gr1", "Gr2", "Gr3","Gr4","Gr5","Gr6") & res$fdr<=0.05),], aes(Model,OR, fill = Model)) + geom_col() + scale_fill_brewer(8, palette="Dark2") + theme_classic() + geom_hline(yintercept=1, linetype="dotted") + facet_grid(. ~ Feature) + ylab("Odds Ratio") + xlab("") + theme(axis.text.x = element_text(angle = 45, hjust = 1)) + ggtitle("Enrichment for Enhancers") + theme(title = element_text(size = 20)) + theme(text = element_text(size = 20), legend.title=element_blank()) + theme(legend.position="bottom") dev.off() df = read.csv("../Desktop/BAMS/HAR_conservedEnhancers_fisher.results.csv") df$sig = ifelse(df$fdr<=0.05, "Significant", "Not Significant") df$sig = factor(df$sig, levels = c("Significant", "Not Significant")) df = df[which(df$Model %in% c("Gr1", "Gr2", "Gr3","Gr4","Gr5","Gr6")),] df$Model = factor(df$Model, levels = c("Gr1", "Gr2", "Gr3","Gr4","Gr5","Gr6")) df$Feature = factor(df$Feature, levels=c("HAR","All Enhancers","No age assigned","Vertebrata","Gnathostomata", "Tetrapoda", "Amniota","Mammalia","Theria","Eutheria","Primate","Human")) pdf("./brain-epigenomics/DMR/figures/conservedEnhancers_DMR_oddsRatios_dotplot.pdf", height = 5.5, width = 9.5) ggplot(data = df[-which(df$Feature %in% c("HAR", "Primate", "Human")),], aes(x = Model, y = log(OR), col = Model, shape = sig)) + geom_point() + geom_pointrange(aes(ymin = log(lower), ymax = log(upper))) + theme_bw() + facet_wrap(. 
~ Feature) + xlab("") + ylab("log(OR)") + geom_hline(yintercept = 0, lty = 2) + scale_shape_manual(values = c(16, 1)) + scale_size_manual(values = c(2, 1)) + scale_colour_brewer(8, palette="Dark2") + ggtitle("Enrichment for Enhancers") + theme(axis.ticks.x = element_blank(), title = element_text(size = 20), text = element_text(size = 20), legend.title = element_blank(), legend.position = "bottom") + guides(col = FALSE) dev.off() df$Model = gsub("Gr1", "Gr1\n(G-N+)", df$Model) df$Model = gsub("Gr2", "Gr2\n(G0N+)", df$Model) df$Model = gsub("Gr3", "Gr3\n(G0N-)", df$Model) df$Model = gsub("Gr4", "Gr4\n(G+N0)", df$Model) df$Model = gsub("Gr5", "Gr5\n(G+N-)", df$Model) df$Model = gsub("Gr6", "Gr6\n(G-N0)", df$Model) df$Model = factor(df$Model, levels = c("Gr1\n(G-N+)","Gr2\n(G0N+)","Gr3\n(G0N-)", "Gr4\n(G+N0)","Gr5\n(G+N-)","Gr6\n(G-N0)")) pdf("./brain-epigenomics/DMR/figures/HARs_DMR_oddsRatios_dotplot.pdf", height = 2.5, width = 6) ggplot(data = df[which(df$Feature=="HAR"),], aes(x = Model, y = log(OR), col = Model, shape = sig, size = sig)) + geom_point() + geom_pointrange(aes(ymin = log(lower), ymax = log(upper))) + theme_bw() + xlab("") + ylab("log(OR)") + geom_hline(yintercept = 0, lty = 2) + scale_shape_manual(values = c(16, 1)) + scale_size_manual(values = c(1, 2)) + scale_colour_brewer(8, palette="Dark2") + ggtitle("Enrichment for HARs") + theme(axis.ticks.x = element_blank(), title = element_text(size = 20), text = element_text(size = 20)) + guides(col = FALSE, shape = FALSE, size = FALSE) dev.off()
# I arranged the working directory into "input" (all input files went here)
# and "output" - for submission files.
# For completeness I will paste all the code in this one file. This includes:
# 1. Reading Data
# 2. Preparing Data
# 3. Building the Model.
# 4. Writing Submission File.
###############################################################################
################################### SOURCES ###################################
###############################################################################
setwd("~/GitHub/Data/mlsp-2014-mri")
getwd()
projectTree = getwd() # root of the project tree (assumed: the directory set above)
data_txt = read.csv("breadwrapper.txt")
data = read.csv("bf_study.csv")
data
# Used libraries
library(verification)
library(DWD)
###############################################################################
################################## LOAD DATA ##################################
###############################################################################
trainFNC = read.csv(file='Train/train_FNC.csv', head=TRUE, sep=",")
trainSBM = read.csv(file='Train/train_SBM.csv', head=TRUE, sep=",")
trainLAB = read.csv(file='Train/train_labels.csv', head=TRUE, sep=",")
testFNC = read.csv(file='Test/test_FNC.csv', head=TRUE, sep=",")
testSBM = read.csv(file='Test/test_SBM.csv', head=TRUE, sep=",")
myExample <- read.csv(file.path(projectTree, "input/submission_example.csv"), as.is=T, header=T, sep=",")
###############################################################################
################################## PREP DATA ##################################
###############################################################################
myTrain = rbind(t(trainFNC[,-1]), t(trainSBM[,-1]))
colnames(myTrain) = trainLAB$Class
myTest = rbind(t(testFNC[,-1]), t(testSBM[,-1]))
colnames(myTest) = testFNC$Id
###############################################################################
############################## CROSS-VALIDATION ###############################
###############################################################################
# This part is optional and was used to select the values of C constraint.
# (This runs 100 iterations of 10-fold cross-validation.)
ROCS = list()
Cs = c(1, 5, 10, 50, 100, 300, 500, 1000)
for(Cind in 1:length(Cs)) {
C = Cs[Cind]
tmpRocs = numeric()
for(i in 1:100) {
trainInds1 = sample(which(colnames(myTrain)==0), 42)
trainInds2 = sample(which(colnames(myTrain)==1), 36)
trainInds = c(trainInds1, trainInds2)
theTrain = myTrain[,trainInds]
theTest = myTrain[,-trainInds]
myFit = kdwd(t(theTrain), colnames(theTrain), C=C) # fit on the CV training split only
testScores = t(myFit@w[[1]]) %*% theTest
testScores = 1 - ((testScores - min(testScores)) / max(testScores - min(testScores)))
tmpRocs[i] = roc.area(as.numeric(colnames(theTest)), testScores)$A
print(i)
}
ROCS[[Cind]] = tmpRocs
}
###############################################################################
################################## FIT MODEL ##################################
###############################################################################
myFit = kdwd(t(myTrain), colnames(myTrain), C=300)
# Get scores for training data (meaningless for now).
scores = t(myFit@w[[1]]) %*% myTrain
scores = 1 - ((scores - min(scores)) / max(scores - min(scores)))
# Check ROC area.
(meaningless, because of possible overfitting) roc.area(as.numeric(colnames(myTrain)), scores) ############################################################################### ################################ WRITE SCORES ################################# ############################################################################### testScores = t(myFit@w[[1]]) %*% myTest testScores = 1 - ((testScores - min(testScores)) / max(testScores - min(testScores))) myExample$Probability = as.numeric(testScores) write.csv(myExample, file=file.path(projectTree, "output/submission.csv"), row.names=F)
Mike Carey returned to the title for a single issue between Denise Mina and Andy Diggle's runs on the title, and also wrote the well-received Hellblazer graphic novel All His Engines, about a strange illness sweeping the globe.
Any other trail is good if you like long XC rides. The trail is fast, with technical sections filled with rocks and roots, lots of JUMPS, and some singletrack sections; the trail is 1.9 miles long. Any other trail if you like XC fun. Echo Ridge was developed as a cross-country ski area by the U.S. Forest Service. It has 18 miles of trail cut into the hillsides. There are a number of options available to the rider depending on their experience. One of my favorites is to ride from the Echo Valley Ski Area up the Forest Service road to the Echo Ridge parking lot. This ensures that you have earned a great downhill run through Bergman Gulch. From the Ridge parking lot, head out to Grand Junction. At the GJ kiosk, look to your right. Between the two trails is a fun singletrack which takes the rider to Chaos Corner. From there head on down Alley Oop and do that loop, which will bring you back to Chaos Corner. Then head up Windsinger to Grand Junction. From there head up Ridge View or go out Morning Glory to 5-Corners. Hit Little Critter. Keep your eye open for the singletrack taking off to the right after you have ridden most of Little Critter. This will dump you at North Junction at the beginning of the 1.9-mile Gulch. It is fast, sometimes technical, and definitely dangerous if you hit a huge water bar going too fast. Your ride ends back at Echo Valley. There are close to 20-plus miles of trails, old jammer (logging) roads, fire roads and XC ski trails open to mountain bikers. Pick up a map at the Chelan Ranger District located next to the Caravel Resort. The Gulch rocks, if you have a shuttle to the top. Echo Ridge XC trails are good, plenty of different trails to ride. Plenty of jumps on the Gulch and lots of different spurs off the main trail, look for them.
Antimony is incompatible with strong acids, halogenated acids, and oxidizers; when exposed to newly formed hydrogen it may form stibine (SbH3).
# Fidelity estimation: Minimax Method

We show how to use the code for the minimax method using some examples.

## Example 1: Cluster state and Pauli measurements

To demonstrate how to use the minimax method to estimate fidelity, we consider a 3-qubit linear cluster state as our target state.\
We will focus on Pauli measurements.

The minimax method constructs an estimator when the target state, the measurement settings, and the confidence level are specified. Once we construct the estimator, we can repeatedly reuse it for the chosen settings.

We outline the basic steps that we will go through in this example - from specifying the settings to computing the estimate.
1. Create a YAML file describing the settings required to construct the estimator.
2. Construct the estimator using the specified settings.
3. Create a CSV file containing the measurement outcomes.
4. Compute the fidelity estimate using the CSV file and the constructed estimator.

We remark that step 2 is usually computation intensive. Nevertheless, once the estimator has been constructed, the fidelity estimates (step 4) can be obtained almost instantaneously.

Let's look at each step in more detail.

## Step 1: Create a YAML file with the measurement settings.

[YAML](https://en.wikipedia.org/wiki/YAML) is a markup language that is human-readable, and can be parsed by a computer. The combination of these attributes makes it a good medium to specify the settings to the code.

We allow for different ways to specify the settings in the YAML file.\
For example, one could specify the target state as a list in the YAML file, or provide a path to a `.npy` file containing the numpy array for the target state.\
Since Pauli measurements and some special states like the stabilizer states are commonly used, we have provided a special interface to conveniently specify these settings.\
We will be using the latter interface in this demo for convenience.

For details on all available formats to specify the settings, we encourage the reader to refer to the documentation of the code.

We create the following settings file for the cluster state. We specify those Pauli operators $P$ that have a non-zero weight $\text{Tr}(P \rho)$.

-------------------------------------------------
### cluster_state_settings.yaml
```
target:
- cluster: 3
POVM_list:
- pauli: [IZX, XIX, XZI, YXY, YYZ, ZXZ, ZYY]
R_list: 100
confidence_level: 0.95
```
-------------------------------------------------

Let's take a closer look at the settings.
- `target` refers to the target state. We can conveniently provide a linear cluster state using the syntax: `- cluster: nq`, where `nq` is the number of qubits.\
We have therefore specified a 3-qubit cluster state.
- `POVM_list` is a list of POVMs that will be measured in order to estimate the fidelity.\
Pauli measurements can be specified in a few different ways, but here we use the most obvious one: list the Pauli operators that you want to measure.\
The default measurement is projection on each eigenvector of the Pauli operator, but if collective measurement on eigenspace with $+1$ and $-1$ eigenvalue is required, you can include the keyword `subspace` after listing all the Pauli operators.
- `R_list` corresponds to the number of outcomes recorded for each POVM.\
We want 100 outcomes for each Pauli measurement, so we simply write 100.\
If something more specific is required, write a list of outcomes, one for each Pauli measurement.
- `confidence_level` should be a number between 0.75 and 1, and it determines the confidence level of the computed risk.

----------

> **It is important to adhere to the syntax specified in the documentation when creating the YAML file.**\
The code is expected to throw an error when incorrect syntax is used. However, there could be some cases that slip past the sanity checks, and the code may end up constructing an estimator that was not intended by the user!

## Step 2: Construct the estimator

In order to construct the estimator, we use the function ```construct_fidelity_estimator``` included in the ```handle_fidelity_estimation.py``` module. The syntax for this function is pretty straightforward:

----------------------
```
construct_fidelity_estimator(yaml_filename, estimator_filename,
                             yaml_file_dir = './yaml_files', estimator_dir = './estimator_files')
```
----------------------

A closer look at the options:
- ```yaml_filename``` refers to the name of the YAML settings file.
- ```estimator_filename``` refers to the name of the (JSON) file to which the constructed estimator is saved.
- ```yaml_file_dir``` specifies the directory in which the YAML settings file is stored.\
This is an optional argument, and if nothing is specified, the code assumes that the YAML file lives in a sub-directory named `yaml_files` of the current directory.
- ```estimator_dir``` specifies the directory where the constructed estimator is saved.\
As before, this is an optional argument, and the default location is assumed to be a sub-directory named `estimator_files` of the current directory.

------------------
> We save the estimator because the same estimator can be re-used later for the same settings.\
The estimator is saved as a JSON file. These files are internally handled by the functions in the module, and need not be edited manually by the user.
------------------

Following the default options, we have created a subdirectory called `yaml_files` and placed the `cluster_state_settings.yaml` YAML file there. Let us now construct the estimator.

> **It can take anywhere from a few minutes to many hours to compute the estimator depending on the settings that were specified.**\
If the dimension of the system is large or many measurement settings are being used, please consider running the code on a workstation or a cluster. The following code is expected to run in about 4 minutes on a laptop, though the actual time may vary depending on the hardware and the OS.

```python
import project_root # adds the root directory of the project to Python Path
from handle_fidelity_estimation import construct_fidelity_estimator

construct_fidelity_estimator(yaml_filename = 'cluster_state_settings.yaml',\
                             estimator_filename = 'cluster_state_estimator.json')
```

Optimization complete.

> 1. Note that `construct_fidelity_estimator` prints the progress of optimization by default.\
If you wish to turn this off, supply `print_progress = False` as an additional argument to the function.
2. If an estimator file already exists, the `construct_fidelity_estimator` function will throw an error. You can delete the existing estimator, move it to a different directory, or use another name to save the estimator in that case.

## Step 3: Generate measurement outcomes

We generate the measurement outcomes separately and store them in a CSV file.\
Note that these outcomes are generated using the state obtained by applying 10% depolarizing noise to the target state.

> In practice, these outcomes will come from experiments.
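For illustration, here is a minimal sketch of how such an outcomes file could be produced in Python. The outcome distribution below is a placeholder (uniform, not the true statistics of the depolarized cluster state), and the file follows the row format shown next: one Pauli label per row, followed by its $R = 100$ outcome indices.

```python
import csv
import os
import numpy as np

rng = np.random.default_rng(0)

# Pauli settings and shot count, matching the YAML settings file above.
paulis = ['IZX', 'XIX', 'XZI', 'YXY', 'YYZ', 'ZXZ', 'ZYY']
R = 100

os.makedirs('./outcome_files', exist_ok=True)
with open('./outcome_files/cluster_state_outcomes.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for p in paulis:
        # Placeholder: uniform over the 8 eigenvectors of a 3-qubit Pauli.
        # Real data would come from an experiment or a simulation of the
        # noisy state.
        outcomes = rng.integers(0, 8, size=R)
        writer.writerow([p] + outcomes.tolist())
```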
The CSV file looks as follows:

---------------------
### cluster_state_outcomes.csv

|     |   |   |     |   |
| --- | - | - | --- | - |
| IZX | 7 | 0 | ... | 7 |
| XIX | 7 | 7 | ... | 0 |
| .   | . | . | .   | . |
| ZYY | 5 | 3 | ... | 0 |
---------------------

The first column contains the labels of the Pauli measurements performed.\
Corresponding to each Pauli operator, we store the measurement outcomes in the same row as the Pauli operator.\
Outcome $i$ points to the eigenvector $\vert i \rangle$ that was observed upon measurement.

> 1. **It is important that the order of eigenvectors used for outcomes matches the POVM that was specified for constructing the estimator.**\
We use the following convention for the eigenvectors: $\vert+++\rangle$, $\vert++-\rangle$, $\vert+-+\rangle$, ..., $\vert---\rangle$.\
Basically, we use the binary expansion of numbers from $0$ to $2^{n_q} - 1$, where $n_q$ is the number of qubits, with $0$ replaced by $+$ and $1$ replaced by $-$.
2. **It is important that the outcomes for Pauli operators are listed in the same order as what we used for constructing the estimator.**\
That is, we must have outcomes for IZX, XIX, ..., ZYY in that order in the CSV file.

Note that for any Pauli operator $P = X_1 \dotsb X_{n_q}$, a $+$ at the $i$th qubit location means that we are looking at the $+1$ eigenvector of $X_i$, where $X_i \in \{I, X, Y, Z\}$.

> In practice, steps 2 & 3 can occur in any order.

## Step 4: Compute the fidelity estimate

Let's use the estimator that we constructed in step 2 and the outcomes generated in step 3 to compute the fidelity estimate.\
This task is handled by the `compute_fidelity_estimate_risk` function in the `handle_fidelity_estimation.py` module. This function takes the following form.

----------------
```
compute_fidelity_estimate_risk(outcomes, estimator_filename,
                               estimator_dir = './estimator_files')
```
----------------

The options accept the following formats:
- `outcomes` can be one of the following:
    1. A list of outcomes for each POVM measurement.
    2. Path to a YAML file containing a list of outcomes for each POVM measurement.
    3. - Path to a CSV file, or
       - A dictionary:\
         `{'csv_file_path': Path to CSV file, 'entries': 'row'/'column', 'start': (row index, column index)}`\
         where `row` (`column`) is used if data is stored in rows (columns),\
         and `start` denotes the index of the cell where the data starts (we start the row and column at index 0).
- `estimator_filename` is the name of the estimator file that we constructed previously.
- `estimator_dir` refers to the directory in which the estimator file has been saved.

We refer the reader to the documentation of the code, which elaborates these options further.

As we can see from the CSV file outlined in step 3, the data starts at the first row and the second column.\
The first column describes the data, but is not actually a part of it. Therefore, we set `start = (0, 1)`.\
As noted earlier, we label the rows and columns starting from 0, following Python convention.\
Also, it is clear that the data is stored row-wise, so we set `entries = 'row'`.

Note that we have saved the `cluster_state_outcomes.csv` file in a subdirectory called `outcome_files`.\
Using this, we compute the estimate as follows.
```python
import project_root # adds the root directory of the project to Python Path
from handle_fidelity_estimation import compute_fidelity_estimate_risk

compute_fidelity_estimate_risk(outcomes = {'csv_file_path': './outcome_files/cluster_state_outcomes.csv',\
                                           'entries': 'row', 'start': (0, 1)},\
                               estimator_filename = 'cluster_state_estimator.json')
```

Fidelity estimate: 0.925
Risk: 0.086

We can see that the estimate $\widehat{F} \approx 0.925$ is close to the actual fidelity $F = 0.9125$.\
The risk can be reduced by increasing the number of shots and/or the Pauli measurements performed.

--------------------
--------------------

## Example 2: The Bell State and Randomized Pauli Measurement scheme

Suppose that our target state $\rho$ is the two-qubit Bell state
\begin{align}
\rho &= \vert \psi \rangle \langle \psi \vert \\
\text{where} \quad \vert \psi \rangle &= \frac{1}{\sqrt{2}} \left(\vert 00 \rangle + \vert 11 \rangle\right)
\end{align}

Observe that $\vert \psi \rangle$ is a stabilizer state that is generated by the stabilizers $XX$ and $ZZ$.

To compute the fidelity, we use the minimax optimal measurement scheme for stabilizer states. This amounts to sampling uniformly from the stabilizer group elements (except the identity) and recording their measurement outcome ($\pm 1$).

Let's compute the estimator given by the minimax method for such a setting. We know that
\begin{equation}
R = \left\lceil 2\frac{\ln\left(2/\epsilon\right)}{\left|\ln\left(1 - \left(\frac{d}{d - 1}\right)^2 \widehat{\mathcal{R}}_*^2\right)\right|} \right\rceil
\end{equation}
outcomes are sufficient to achieve a risk $\widehat{\mathcal{R}}_* \in (0, 0.5)$ with a confidence level of $1 - \epsilon \in (0.75, 1)$.

As before, we break down the process of constructing an estimator & computing an estimate into four steps:
1. Create a YAML file describing the settings to construct the estimator and the risk.
2. Construct the estimator for the specified settings.
3. Store the outcomes in a YAML file. Convert outcomes to indices in case they are eigenvalues.
4. Use the outcomes and constructed estimator to compute the fidelity estimate (and the risk).

## Step 1: Create the YAML file containing the settings

The YAML file looks as follows.

### bell_state_settings.yaml
```
target:
- stabilizer: [XX, ZZ]
POVM_list:
- pauli: [RPM]
R_list: 1657
confidence_level: 0.95
```

We describe a couple of the above options in more detail:
- The general syntax for specifying a target stabilizer state is `- stabilizer: list of stabilizer generators`. Note that we can include a sign in front of the Pauli operator if necessary.\
For example, we specify a stabilizer state above with $XX$ and $ZZ$ as the stabilizer generators. We could as well have used $XX$ and $-YY$ as the stabilizer generators.
- We have included a shortcut to specify the Randomized Pauli Measurement (RPM) scheme described in section II.E. of the PRA submission. The syntax is always `- pauli: [RPM]` for specifying this measurement scheme.\
For stabilizer states, this amounts to randomly sampling the stabilizer group (excluding the identity) and recording the eigenvalues of outcomes.

Note that we use a confidence level of $95\%$ and a risk of $\widehat{\mathcal{R}}_* = 0.05$ to obtain $R = 1657$.

## Step 2: Construct the estimator using the YAML settings file

As before, we use the function ```construct_fidelity_estimator``` in the ```handle_fidelity_estimation.py``` module to construct the estimator.

We have placed the `bell_state_settings.yaml` settings file in the `yaml_files` subdirectory.
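Before running the construction, we can sanity-check the outcome count $R = 1657$ quoted above. A minimal sketch of the formula from Step 1, with $d = 4$ for two qubits, $\epsilon = 0.05$, and $\widehat{\mathcal{R}}_* = 0.05$:

```python
import numpy as np

d, epsilon, risk = 4, 0.05, 0.05  # two qubits, 95% confidence, target risk
R = np.ceil(2 * np.log(2 / epsilon)
            / abs(np.log(1 - (d / (d - 1)) ** 2 * risk ** 2)))
print(int(R))  # prints 1657, matching R_list above
```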
> The estimator for the RPM measurement scheme is constructed efficiently. It should take at most a few minutes, if not seconds, to construct the estimator.

```python
import project_root # adds the root directory of the project to Python Path
from handle_fidelity_estimation import construct_fidelity_estimator

construct_fidelity_estimator(yaml_filename = 'bell_state_settings.yaml',\
                             estimator_filename = 'bell_state_estimator.json')
```

Optimization complete

You can check that there is a subdirectory called `estimator_files` (if it wasn't already there), and you can find the file `bell_state_estimator.json` there.

## Step 3: Create a YAML file with the measurement outcomes

We created some outcomes beforehand to test the estimator. For this purpose, we added $10\%$ depolarizing noise to the target state $\rho$, and then performed the Pauli measurements as prescribed by the Randomized Pauli Measurement (RPM) scheme.

Note that for the RPM scheme, only the *number* of $+1$ and $-1$ eigenvalues is important. It doesn't matter which Pauli measurement gave a $+1$ outcome or a $-1$ outcome.\
Before we supply the outcomes to the estimator, we need to convert $+1 \to 0$ and $-1 \to 1$. The reason is that the estimator works by referring to the POVM elements, and we used $\{E_+, E_-\}$ as the POVM when constructing the estimator, in that order.\
Because the outcomes are going to be just $0$ and $1$, we put the outcomes in a list inside a YAML file.

> For the sake of demonstration, this time we choose to save our outcomes in a YAML file instead of a CSV file.\
A CSV file can be used if that's preferred.

The YAML file containing the outcomes looks as follows.

### bell_state_measurement_outcomes.yaml
```
outcomes:
- [0, 0, ...]
```

Note that there must be exactly $R = 1657$ measurement outcomes, because the estimator was constructed for this case.

> The syntax used in the YAML file is important for ensuring proper parsing of the file.\
The code documentation can be referred to for details.

We use these outcomes to compute the fidelity estimate.

## Step 4: Compute the fidelity estimate

We now supply the outcomes to the `compute_fidelity_estimate_risk` function in the `handle_fidelity_estimation.py` module.

We have saved the `bell_state_measurement_outcomes.yaml` file in a subdirectory called `outcome_files`, and we use this to compute the estimate.

```python
import project_root # adds the root directory of the project to Python Path
from handle_fidelity_estimation import compute_fidelity_estimate_risk

compute_fidelity_estimate_risk(outcomes = './outcome_files/bell_state_measurement_outcomes.yaml',\
                               estimator_filename = 'bell_state_estimator.json')
```

Fidelity estimate: 0.933
Risk: 0.05

Observe that the risk is very close to the value of $0.05$ (only a rounded value is displayed) that we chose to determine the number of outcomes $R = 1657$.

We can also see that the fidelity estimate $\widehat{F} \approx 0.933$ is close to the actual fidelity $F = 0.925$, and within the specified risk of $0.05$. This estimate can be found in Table II of the PRA submission.

--------------------
--------------------

# Epilogue

Other formats are supported by the YAML settings file. You can directly supply lists to it or give a path to a `.npy` file which contains the array describing your target state or POVMs. Please read the documentation to see all the available options.

Note that the code can be run directly from the commandline. This is especially helpful if one needs to run the code on a cluster or even on a workstation.
Please refer to the documentation for details on how to use this functionality.
[STATEMENT] lemma (in group) compl_fam_empty[simp]: "compl_fam S {}" [PROOF STATE] proof (prove) goal (1 subgoal): 1. compl_fam S {} [PROOF STEP] unfolding compl_fam_def [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<forall>i\<in>{}. complementary (S i) (IDirProds G S ({} - {i})) [PROOF STEP] by simp
-- Proof by chaining equations
-- ============================

-- Let a, b and c be real numbers. Prove that
-- (a * b) * c = b * (a * c)

import data.real.basic

variables (a b c : ℝ)

-- 1st proof
example : (a * b) * c = b * (a * c) :=
begin
  rw mul_comm a b,
  rw mul_assoc,
end

-- 2nd proof
example : (a * b) * c = b * (a * c) :=
begin
  calc (a * b) * c = (b * a) * c : by rw mul_comm a b
  ... = b * (a * c) : by rw mul_assoc,
end

-- 3rd proof
example : (a * b) * c = b * (a * c) :=
by linarith

-- 4th proof
example : (a * b) * c = b * (a * c) :=
by finish

-- 5th proof
example : (a * b) * c = b * (a * c) :=
by ring
-- 2012-10-20 Andreas module Issue721b where data Bool : Set where false true : Bool record Foo (b : Bool) : Set where field _*_ : Bool → Bool → Bool data _≡_ {A : Set} (x : A) : A → Set where refl : x ≡ x test : (F : Foo false) → let open Foo F in (x : Bool) → _*_ x ≡ (λ x → x) test F x = x where open Foo F -- Don't want to see any anonymous module
Require Import CodeDeps. Require Import Ident. Local Open Scope Z_scope. Definition _bit := 1%positive. Definition _bits := 2%positive. Definition _g := 3%positive. Definition _i := 4%positive. Definition _intid := 5%positive. Definition _pending := 6%positive. Definition _rec := 7%positive. Definition _rec_rvic_state := 8%positive. Definition _ret := 9%positive. Definition _rvic := 10%positive. Definition _t'1 := 11%positive. Definition rvic_clear_pending_body := (Ssequence (Scall (Some _t'1) (Evar _get_rvic_pending_bits (Tfunction (Tcons (tptr Tvoid) Tnil) (tptr Tvoid) cc_default)) ((Etempvar _rvic (tptr Tvoid)) :: nil)) (Scall None (Evar _rvic_clear_flag (Tfunction (Tcons tulong (Tcons (tptr Tvoid) Tnil)) tvoid cc_default)) ((Etempvar _intid tulong) :: (Etempvar _t'1 (tptr Tvoid)) :: nil))) . Definition f_rvic_clear_pending := {| fn_return := tvoid; fn_callconv := cc_default; fn_params := ((_rvic, (tptr Tvoid)) :: (_intid, tulong) :: nil); fn_vars := nil; fn_temps := ((_t'1, (tptr Tvoid)) :: nil); fn_body := rvic_clear_pending_body |}.
Formal statement is: lemma frontier_subset_closed: "closed S \<Longrightarrow> frontier S \<subseteq> S" Informal statement is: If $S$ is a closed set, then the frontier of $S$ is a subset of $S$.
% SEGMENT_GRAPH Given a sparse, square graph of edges weights segment the nodes % of the graph into connected sub-components using the greedy merge-based % method of "Graph Based Image Segmentation". % % C = segment_graph(A) % C = segment_graph(A,'ParameterName',ParameterValue, ...) % % Inputs: % A #A by #A sparse, square matrix of edge weights % Optional: % 'Threshold' followed by "C" threshold to use (paper writes that this % roughly corresponds to minimum size, though it's really just adding a % weight of size/C to components. In any case, increasing this will tend % to produce larger segments. % 'MinSize' followed by the minimum size of an output component. This % constraint is enforced as a _post process_. % Output: % C #A by 1 list of component ids % % Example: % [V,F] = load_mesh('~/Dropbox/models/Cosmic blobs/Model9.off'); % A = adjacency_dihedral_angle_matrix(V,F); % [AI,AJ,AV] = find(A); % A = sparse(AI,AJ,exp(abs(pi-abs(AV-pi))),size(A,1),size(A,2)); % L = -(A - diag(sum(A,2))); % C = segment_graph(L,'Threshold',500,'MinSize',20); % tsurf(F,V,'CData',C); % colormap(cbrewer('Set1',(max(C)))); % view(2); % axis equal;
[STATEMENT] lemma trnl\<^sub>\<epsilon>_eq: assumes "ide u" and "ide v" and "src f = trg v" and "src g = trg u" and "\<nu> \<in> hom v (g \<star> u)" shows "trnl\<^sub>\<epsilon> u \<nu> = (\<epsilon> \<star> u) \<cdot> \<a>\<^sup>-\<^sup>1[f, g, u] \<cdot> (f \<star> \<nu>)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. trnl\<^sub>\<epsilon> u \<nu> = (\<epsilon> \<star> u) \<cdot> \<a>\<^sup>-\<^sup>1[f, g, u] \<cdot> (f \<star> \<nu>) [PROOF STEP] using assms trnl\<^sub>\<epsilon>_def antipar strict_lunit comp_cod_arr hcomp_obj_arr [PROOF STATE] proof (prove) using this: ide u ide v src f = trg v src g = trg u \<nu> \<in> hom v (g \<star> u) trnl\<^sub>\<epsilon> ?u ?\<nu> \<equiv> \<l>[?u] \<cdot> (\<epsilon> \<star> ?u) \<cdot> \<a>\<^sup>-\<^sup>1[f, g, ?u] \<cdot> (f \<star> ?\<nu>) trg g = src f src g = trg f ide ?f \<Longrightarrow> \<l>[?f] = ?f \<lbrakk>arr ?f; cod ?f = ?b\<rbrakk> \<Longrightarrow> ?b \<cdot> ?f = ?f \<lbrakk>obj ?b; arr ?f; ?b = trg ?f\<rbrakk> \<Longrightarrow> ?b \<star> ?f = ?f goal (1 subgoal): 1. trnl\<^sub>\<epsilon> u \<nu> = (\<epsilon> \<star> u) \<cdot> \<a>\<^sup>-\<^sup>1[f, g, u] \<cdot> (f \<star> \<nu>) [PROOF STEP] by auto
{-# OPTIONS --cubical --no-import-sorts #-} open import Bundles module Properties.ConstructiveField {ℓ ℓ'} (F : ConstructiveField {ℓ} {ℓ'}) where open import Agda.Primitive renaming (_⊔_ to ℓ-max; lsuc to ℓ-suc; lzero to ℓ-zero) private variable ℓ'' : Level open import Cubical.Foundations.Everything renaming (_⁻¹ to _⁻¹ᵖ; assoc to ∙-assoc) open import Cubical.Data.Sum.Base renaming (_⊎_ to infixr 4 _⊎_) open import Cubical.Data.Sigma.Base renaming (_×_ to infixr 4 _×_) open import Cubical.Data.Empty renaming (elim to ⊥-elim) -- `⊥` and `elim` open import Function.Base using (it) -- instance search open import MoreLogic open MoreLogic.Reasoning import MoreAlgebra -- Lemma 4.1.6. -- For a constructive field (F, 0, 1, +, ·, #), the following hold. -- 1. 1 # 0. -- 2. Addition + is #-compatible in the sense that for all x, y, z : F -- x # y ⇔ x + z # y + z. -- 3. Multiplication · is #-extensional in the sense that for all w, x, y, z : F -- w · x # y · z ⇒ w # y ∨ x # z. open ConstructiveField F open import Cubical.Structures.Ring R = (makeRing 0f 1f _+_ _·_ -_ is-set +-assoc +-rid +-rinv +-comm ·-assoc ·-rid ·-lid ·-rdist-+ ·-ldist-+) open Cubical.Structures.Ring.Theory R open MoreAlgebra.Properties.Ring R -- Lemma 4.1.6.1 1f#0f : 1f # 0f 1f#0f with ·-identity 1f 1f#0f | 1·1≡1 , _ = fst (·-inv-back _ _ 1·1≡1) -- Lemma 4.1.6.2 -- For #-compatibility of +, suppose x # y, that is, (x +z) −z # (y +z) −z. -- Then #-extensionality gives (x + z # y + z) ∨ (−z # −z), where the latter case is excluded by irreflexivity of #. +-#-compatible : ∀(x y z : Carrier) → x # y → x + z # y + z +-#-compatible x y z x#y with let P = transport (λ i → a+b-b≡a x z i # a+b-b≡a y z i ) x#y in +-#-extensional _ _ _ _ P ... | inl x+z#y+z = x+z#y+z ... | inr -z#-z = ⊥-elim (#-irrefl _ -z#-z) -- The other direction is similar. +-#-compatible-inv : ∀(x y z : Carrier) → x + z # y + z → x # y +-#-compatible-inv _ _ _ x+z#y+z with +-#-extensional _ _ _ _ x+z#y+z ... | inl x#y = x#y ... | inr z#z = ⊥-elim (#-irrefl _ z#z) -- Lemma 4.1.6.3 ·-#-extensional-case1 : ∀(w x y z : Carrier) → w · x # y · z → w · x # w · z → x # z ·-#-extensional-case1 w x y z w·x#y·z w·x#w·z = let instance -- this allows to use ⁻¹ᶠ without an instance argument w·[z-x]#0f = ( w · x # w · z ⇒⟨ +-#-compatible _ _ (- (w · x)) ⟩ w · x - w · x # w · z - w · x ⇒⟨ transport (λ i → (fst (+-inv (w · x)) i) # a·b-a·c≡a·[b-c] w z x i) ⟩ 0f # w · (z - x) ⇒⟨ #-sym _ _ ⟩ w · (z - x) # 0f ◼) w·x#w·z in ( w · (z - x) # 0f ⇒⟨ (λ _ → ·-rinv (w · (z - x)) it ) ⟩ -- NOTE: "plugging in" the instance did not work, ∴ `it` w · (z - x) · (w · (z - x)) ⁻¹ᶠ ≡ 1f ⇒⟨ transport (λ i → ·-comm w (z - x) i · (w · (z - x)) ⁻¹ᶠ ≡ 1f) ⟩ (z - x) · w · (w · (z - x)) ⁻¹ᶠ ≡ 1f ⇒⟨ transport (λ i → ·-assoc (z - x) w ((w · (z - x)) ⁻¹ᶠ) (~ i) ≡ 1f) ⟩ (z - x) · (w · (w · (z - x)) ⁻¹ᶠ) ≡ 1f ⇒⟨ fst ∘ (·-inv-back _ _) ⟩ z - x # 0f ⇒⟨ +-#-compatible _ _ x ⟩ (z - x) + x # 0f + x ⇒⟨ transport (λ i → +-assoc z (- x) x (~ i) # snd (+-identity x) i) ⟩ z + (- x + x) # x ⇒⟨ transport (λ i → z + snd (+-inv x) i # x) ⟩ z + 0f # x ⇒⟨ transport (λ i → fst (+-identity z) i # x) ⟩ z # x ⇒⟨ #-sym _ _ ⟩ x # z ◼) it -- conceptually we would plug `w·[z-x]#0f` in, but this breaks the very first step ·-#-extensional : ∀(w x y z : Carrier) → w · x # y · z → (w # y) ⊎ (x # z) ·-#-extensional w x y z w·x#y·z with #-cotrans _ _ w·x#y·z (w · z) ... | inl w·x#w·z = inr (·-#-extensional-case1 w x y z w·x#y·z w·x#w·z) -- first case ... 
| inr w·z#y·z = let z·w≡z·y = (transport (λ i → ·-comm w z i # ·-comm y z i) w·z#y·z) in inl (·-#-extensional-case1 _ _ _ _ z·w≡z·y z·w≡z·y) -- second case reduced to first case
" Behind the Crooked Cross " is rarely played live as Hanneman hates the track , though King has always wanted to play it " because it 's got a cool intro " despite it not being his favorite song . King said " that 's fine " when speaking of the situation , noting " there are songs that he wants to play that I always shoot down . " " Ghosts of War " isn 't King 's favorite song either , which he attests " everybody always wants to hear " performed live . He confessed ; " I like the ending , you know , I like the big heavy part and I always say , ‘ Let 's put the heavy ending at the end of " Chemical Warfare " and just do the last half . ’ But I could never make that fly . "
/* test.c * * Copyright (C) 2018 Patrick Alken * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 3 of the License, or (at * your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. */ #include <config.h> #include <stdlib.h> #include <math.h> #include <assert.h> #include <gsl/gsl_math.h> #include <gsl/gsl_bst.h> #include <gsl/gsl_rng.h> #include <gsl/gsl_sort.h> #include <gsl/gsl_test.h> enum array_order { ORD_RANDOM = 0, /* random order */ ORD_ASCENDING, /* ascending order */ ORD_DESCENDING, /* descending order */ ORD_BALANCED, /* balanced tree order */ ORD_ZIGZAG, /* zig-zag order */ ORD_ASCENDING_SHIFTED, /* ascending from middle, then beginning */ ORD_END_NODUP, /* end of no-duplicate ordering */ ORD_RANDOM_DUP /* random order with duplicates */ }; /* fill array[] with random integers in [lower,upper] with duplicates allowed */ static void random_integers(const size_t n, const int lower, const int upper, int array[], gsl_rng * r) { size_t i; for (i = 0; i < n; ++i) array[i] = (int) ((upper - lower) * gsl_rng_uniform(r) + lower); } /* fills array[] with a random permutation of the integers between 0 and n - 1 */ static void random_permuted_integers (const size_t n, int array[], gsl_rng * r) { size_t i; for (i = 0; i < n; i++) array[i] = i; for (i = 0; i < n; i++) { size_t j = i + (unsigned) (gsl_rng_uniform(r) * (n - i)); int t = array[j]; array[j] = array[i]; array[i] = t; } } static int compare_ints(const void *pa, const void *pb, void *params) { const int *a = pa; const int *b = pb; (void) params; return (*a < *b) ? -1 : (*a > *b); } /* Generates a list of integers that produce a balanced tree when inserted in order into a binary tree in the usual way. |min| and |max| inclusively bound the values to be inserted. Output is deposited starting at |*array|. 
*/ static void gen_balanced_tree (const int min, const int max, int **array) { int i; if (min > max) return; i = (min + max + 1) / 2; *(*array)++ = i; gen_balanced_tree (min, i - 1, array); gen_balanced_tree (i + 1, max, array); } /* generates a permutation of the integers |0| to |n - 1| */ static void gen_int_array (const size_t n, const enum array_order order, int array[], gsl_rng * r) { size_t i; switch (order) { case ORD_RANDOM: random_permuted_integers (n, array, r); break; case ORD_ASCENDING: for (i = 0; i < n; i++) array[i] = i; break; case ORD_DESCENDING: for (i = 0; i < n; i++) array[i] = n - i - 1; break; case ORD_BALANCED: gen_balanced_tree (0, n - 1, &array); break; case ORD_ZIGZAG: for (i = 0; i < n; i++) { if (i % 2 == 0) array[i] = i / 2; else array[i] = n - i / 2 - 1; } break; case ORD_ASCENDING_SHIFTED: for (i = 0; i < n; i++) { array[i] = i + n / 2; if ((size_t) array[i] >= n) array[i] -= n; } break; case ORD_RANDOM_DUP: random_integers(n, -10, 10, array, r); break; default: assert (0); } } static void check_traverser(const size_t n, const enum array_order order, gsl_bst_trav * trav, int data, const char *desc, const gsl_bst_workspace * w) { int *prev, *cur, *next; prev = gsl_bst_trav_prev(trav); if (prev != NULL) { gsl_test(*prev > data, "bst %s[n=%zu,order=%d] %s traverser ahead of %d, but should be ahead of %d", gsl_bst_name(w), n, order, desc, *prev, data); } gsl_bst_trav_next(trav); cur = gsl_bst_trav_cur(trav); gsl_test(*cur != data, "bst %s[n=%zu,order=%d] %s traverser at %d, but should be at %d", gsl_bst_name(w), n, order, desc, *cur, data); next = gsl_bst_trav_next(trav); if (next != NULL) { gsl_test(*next < data, "bst %s[n=%zu,order=%d] %s traverser behind %d, but should be behind %d", gsl_bst_name(w), n, order, desc, *next, data); } gsl_bst_trav_prev(trav); } static void test_bst_int(const size_t n, const gsl_bst_type * T, const enum array_order order, gsl_rng * r) { int *data = malloc(n * sizeof(int)); int *data_delete = malloc(n * sizeof(int)); int *sorted_data = malloc(n * sizeof(int)); gsl_bst_workspace * w = gsl_bst_alloc(T, NULL, compare_ints, NULL); gsl_bst_trav trav; int *p; int i; size_t nodes; /* generate data to be inserted in tree */ gen_int_array(n, order, data, r); for (i = 0; i < (int) n; ++i) sorted_data[i] = data[i]; gsl_sort_int(sorted_data, 1, n); if (order != ORD_RANDOM_DUP) { /* generate random order to delete data from tree */ gen_int_array(n, ORD_RANDOM, data_delete, r); } else { for (i = 0; i < (int) n; ++i) data_delete[i] = sorted_data[i]; } /* insert data */ for (i = 0; i < (int) n; ++i) { p = gsl_bst_insert(&data[i], w); gsl_test(p != NULL, "bst_int %s[n=%zu,order=%d] insert i=%d", gsl_bst_name(w), n, order, i); } if (order != ORD_RANDOM_DUP) { nodes = gsl_bst_nodes(w); gsl_test(nodes != n, "bst_int %s[n=%zu,order=%d] after insertion count = %zu/%zu", gsl_bst_name(w), n, order, nodes, n); } /* test data was inserted and can be found */ for (i = 0; i < (int) n; ++i) { p = gsl_bst_find(&data[i], w); gsl_test(*p != data[i], "bst_int %s[n=%zu,order=%d] find [%d,%d]", gsl_bst_name(w), n, order, *p, data[i]); p = gsl_bst_trav_find(&data[i], &trav, w); gsl_test(p == NULL, "bst_int %s[n=%zu,order=%d] trav_find unable to find item %d", gsl_bst_name(w), n, order, data[i]); check_traverser(n, order, &trav, data[i], "post-insertion", w); } /* traverse tree in-order */ p = gsl_bst_trav_first(&trav, w); i = 0; while (p != NULL) { int *q = gsl_bst_trav_cur(&trav); gsl_test(*p != sorted_data[i], "bst_int %s[n=%zu,order=%d] traverse i=%d [%d,%d]", 
gsl_bst_name(w), n, order, i, *p, sorted_data[i]);
gsl_test(*p != *q, "bst_int %s[n=%zu,order=%d] traverse cur i=%d [%d,%d]", gsl_bst_name(w), n, order, i, *p, *q);
p = gsl_bst_trav_next(&trav);
++i;
}
gsl_test(i != (int) n, "bst_int %s[n=%zu,order=%d] traverse number=%d", gsl_bst_name(w), n, order, i);
/* traverse tree in reverse order */
p = gsl_bst_trav_last(&trav, w);
i = n - 1;
while (p != NULL)
{
int *q = gsl_bst_trav_cur(&trav);
gsl_test(*p != sorted_data[i], "bst_int %s[n=%zu,order=%d] traverse reverse i=%d [%d,%d]", gsl_bst_name(w), n, order, i, *p, sorted_data[i]);
gsl_test(*p != *q, "bst_int %s[n=%zu,order=%d] traverse reverse cur i=%d [%d,%d]", gsl_bst_name(w), n, order, i, *p, *q);
p = gsl_bst_trav_prev(&trav);
--i;
}
gsl_test(i != -1, "bst_int %s[n=%zu,order=%d] traverse reverse number=%d", gsl_bst_name(w), n, order, i);
/* test traversal during tree modifications */
for (i = 0; i < (int) n; ++i)
{
gsl_bst_trav x, y, z;
gsl_bst_trav_find(&data[i], &x, w);
check_traverser(n, order, &x, data[i], "pre-deletion", w);
if (data[i] == data_delete[i])
continue;
p = gsl_bst_remove(&data_delete[i], w);
gsl_test(*p != data_delete[i], "bst_int %s[n=%zu,order=%d] remove i=%d [%d,%d]", gsl_bst_name(w), n, order, i, *p, data_delete[i]);
p = gsl_bst_trav_copy(&y, &x);
gsl_test(*p != data[i], "bst_int %s[n=%zu,order=%d] copy i=%d [%d,%d]", gsl_bst_name(w), n, order, i, *p, data[i]);
/* re-insert item */
p = gsl_bst_trav_insert(&data_delete[i], &z, w);
check_traverser(n, order, &x, data[i], "post-deletion", w);
check_traverser(n, order, &y, data[i], "copied", w);
check_traverser(n, order, &z, data_delete[i], "insertion", w);
#if 0
/* delete again */
gsl_bst_remove(&data[i], w);
#endif
}
/* empty tree */
gsl_bst_empty(w);
nodes = gsl_bst_nodes(w);
gsl_test(nodes != 0, "bst_int %s[n=%zu,order=%d] empty count = %zu", gsl_bst_name(w), n, order, nodes);
gsl_bst_free(w);
free(data);
free(data_delete);
free(sorted_data);
}
static void
test_bst(const gsl_bst_type * T, gsl_rng * r)
{
enum array_order order;
for (order = 0; order < ORD_END_NODUP; ++order)
{
test_bst_int(50, T, order, r);
test_bst_int(100, T, order, r);
test_bst_int(500, T, order, r);
}
}
int
main(void)
{
gsl_rng * r = gsl_rng_alloc(gsl_rng_default);
test_bst(gsl_bst_avl, r);
test_bst(gsl_bst_rb, r);
gsl_rng_free(r);
exit (gsl_test_summary());
}
PROGRAM STEPFOR INTEGER I C This will print all even numbers from -10 to +10, inclusive. DO 10 I = -10, 10, 2 WRITE (*,*) I 10 CONTINUE STOP END
With the Samsung Galaxy S9+, you get all the great features of the S9, including the shiny glass-and-metal design, the immersive Infinity Display, the stereo speakers, and the dual-aperture camera with 960fps slow-motion and AR Emoji. On top of this, you also get a larger screen, a bigger battery, and a second camera at the back enabling 2X lossless zoom and bokeh effect for your portraits. All of these goodies are backed up by a top-notch specs sheet making this one of the most powerful Android phones around. 3GLEB delivers the Samsung Galaxy S8 Plus to any location in Lebanon via Aramex.
module Protocol.Hex import Data.Bits import Data.List -- Those three imports are for compatibility and should be removed after release of 0.6.0 import Data.DPair import Data.Nat import Data.Fin %default total hexDigit : Bits64 -> Char hexDigit 0 = '0' hexDigit 1 = '1' hexDigit 2 = '2' hexDigit 3 = '3' hexDigit 4 = '4' hexDigit 5 = '5' hexDigit 6 = '6' hexDigit 7 = '7' hexDigit 8 = '8' hexDigit 9 = '9' hexDigit 10 = 'a' hexDigit 11 = 'b' hexDigit 12 = 'c' hexDigit 13 = 'd' hexDigit 14 = 'e' hexDigit 15 = 'f' hexDigit _ = 'X' -- TMP HACK: Ideally we'd have a bounds proof, generated below -- `i4` is to be replaced with a `4` literal after release of 0.6.0 namespace Old export i4 : Subset Nat (`LT` 64) i4 = Element (the Nat 4) %search namespace New export i4 : Fin 64 i4 = 4 ||| Convert a Bits64 value into a list of (lower case) hexadecimal characters export asHex : Bits64 -> String asHex 0 = "0" asHex n = pack $ asHex' n [] where asHex' : Bits64 -> List Char -> List Char asHex' 0 hex = hex asHex' n hex = asHex' (assert_smaller n (n `shiftR` i4)) (hexDigit (n .&. 0xf) :: hex) export leftPad : Char -> Nat -> String -> String leftPad paddingChar padToLength str = if length str < padToLength then pack (List.replicate (minus padToLength (length str)) paddingChar) ++ str else str export fromHexDigit : Char -> Maybe Int fromHexDigit '0' = Just 0 fromHexDigit '1' = Just 1 fromHexDigit '2' = Just 2 fromHexDigit '3' = Just 3 fromHexDigit '4' = Just 4 fromHexDigit '5' = Just 5 fromHexDigit '6' = Just 6 fromHexDigit '7' = Just 7 fromHexDigit '8' = Just 8 fromHexDigit '9' = Just 9 fromHexDigit 'a' = Just 10 fromHexDigit 'b' = Just 11 fromHexDigit 'c' = Just 12 fromHexDigit 'd' = Just 13 fromHexDigit 'e' = Just 14 fromHexDigit 'f' = Just 15 fromHexDigit _ = Nothing export fromHexChars : List Char -> Maybe Integer fromHexChars = fromHexChars' 1 where fromHexChars' : Integer -> List Char -> Maybe Integer fromHexChars' _ [] = Just 0 fromHexChars' m (d :: ds) = do digit <- fromHexDigit (toLower d) digits <- fromHexChars' (m*16) ds pure $ cast digit * m + digits export fromHex : String -> Maybe Integer fromHex = fromHexChars . unpack
If $f$ is convex on the interval $[x,y]$, then for all $t \in [0,1]$, we have $f((1-t)x + ty) \leq (1-t)f(x) + tf(y)$.
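As a concrete instance (our own example, not from the source), take $f(x)=x^2$: the defining inequality reduces to a perfect square, $$(1-t)x^2+t y^2-\bigl((1-t)x+ty\bigr)^2 = t(1-t)(x-y)^2 \;\geq\; 0 \quad\text{for } t\in[0,1],$$ so $f((1-t)x+ty)\leq (1-t)f(x)+tf(y)$ holds.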
<a href="https://colab.research.google.com/github/jhmartel/fp/blob/master/_notebooks/2022-02-22-Positronium_Part1.ipynb" target="_parent">Open in Colab</a>

# Positronium Part I.
> "Weber potential, positronium, two-body problem."

- toc: false
- branch: master
- badges: false
- comments: true
- author: JHM
- categories: [weber, positronium, two-body]

Today we begin the study of Weber's potential in the isolated two-body system consisting of an electron and positron pair $e^-$ and $e^+$. We assume the particles $e^\pm$ have equal mass $m=m_{e^{\pm}}$. The two-body system is then equivalent to a one-body problem for the reduced mass $\mu=m/2$.

Weber's force is attractive between the pair $e^\pm$ at all distances. The particles $e^\pm$ do not indefinitely spiral inwards. Simulations indicate that the radial distance between $e^\pm$ stays strictly bounded between a lower and an upper limit $$0 < r_{lower} \leq r \leq r_{upper} < + \infty .$$ This is rigorously proved in [Weber-Clemente, 1990](https://www.ifi.unicamp.br/~assis/Int-J-Theor-Phys-V30-p537-545(1991).pdf).

If the electron is an indivisible particle, then the above two-body problem models a pair $e^-$ and $e^+$ of isolated electron and positron. *But do the particles $e^\pm$ ever 'collide' and annihilate?* In the standard physics textbooks it seems well known that annihilation between $e^\pm$ occurs: two gamma rays are ejected in opposite directions when $e^\pm$ collide, conserving momentum and converting *all* their mass into energy. Thus it is determined that two gamma rays of energy $0.511~\mathrm{MeV}$ each are released, where Einstein's formula $E=m_ec^2$ is applied with $m_e$ the electron mass. [ref] The annihilation of $e^\pm$ is apparently an experimental test of the validity of Einstein's "mass-energy" hypothesis. But what does Weber's potential say about the annihilation of $e^+$ and $e^-$?

If we know the centre of mass feels zero net force, then we can replace the positions $r_1$, $r_2$ of the particles by the centre of mass $R$ and their relative position $r_{12}=r_1-r_2$. This yields $$r_1=R +\frac{m_2}{m_1+m_2} r_{12}$$ and $$r_2=R -\frac{m_1}{m_1+m_2} r_{12}.$$ Applying Newton's laws, with $F_{21}=-F_{12}$, yields the following equation for $r_{12}''$: $$ \mu \, r_{12}'' = F_{21},$$ where $\mu$ is the reduced mass of the system, namely $\mu=\frac{m_1 m_2}{m_1+m_2}=0.5$ in our units. In the following cells we use scipy's odeint to solve Weber's equations of motion for the relative position $r_{12}$. Therefore we have reduced the two-body problem to a one-body problem. This is a standard reduction.

Given the solution for $r_{12}$, how do we reconstruct the paths/positions of the particles $r_1$, $r_2$? Answer: via the relations $r_1=R+\frac{m_2}{m_1+m_2} r_{12}$ and $r_2=R-\frac{m_1}{m_1 + m_2}r_{12}$.

Now the relative distance $r=|r_{12}|$ is a radial coordinate, and if $(r, \theta)$ are polar coordinates in the orbital plane, then we have $$|v|^2= x'^2+y'^2+z'^2=(r')^2+r^2 (\theta')^2. $$ The above formula is the usual decomposition $|v|^2=v_r^2+v_t^2$, where the tangential velocity $v_t$ satisfies $v_t=r\theta'$ and $\theta'$ is the angular velocity. The conservation of angular momentum says that the angular momentum $L=\mu \, r_{12}\times v$, where $v$ is the velocity of $r_{12}$, is constant along the motion. Moreover one has $$|L|=\mu r^2 \theta'.$$ Thus we find the formula $$\theta'=\frac{|L|}{\mu r^2}.$$ This implies $$T=\frac{\mu}{2}|v|^2= \frac{\mu}{2}\left[(r')^2+\frac{|r_{12}\times v|^2}{r^{2}}\right]$$ represents the kinetic energy of the system. The conservation of energy says $T+U$ is *constant* along trajectories.
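Before turning to the solver, a small self-contained numerical check (our own sketch, not from the original notebook) that the velocity decomposition $|v|^2=(r')^2+|r_{12}\times v|^2/r^2$ holds, and that $r_1$, $r_2$ are recovered from $R$ and $r_{12}$:

```python
import numpy as np

# Check |v|^2 = rdot^2 + |r x v|^2 / r^2 for a random state, using the
# identity |r x v|^2 = r^2 |v|^2 - (r.v)^2.
rng = np.random.default_rng(0)
r12, v = rng.normal(size=3), rng.normal(size=3)

r = np.linalg.norm(r12)
rdot = np.dot(r12, v) / r                                  # radial velocity r'
vt2 = np.dot(np.cross(r12, v), np.cross(r12, v)) / r**2   # tangential part
assert np.isclose(np.dot(v, v), rdot**2 + vt2)

# Reconstruct r1, r2 from the centre of mass R and the relative position r12.
m1 = m2 = 1.0                      # equal masses, as assumed in the text
R = np.zeros(3)                    # centre of mass at the origin
r1 = R + m2 / (m1 + m2) * r12
r2 = R - m1 / (m1 + m2) * r12
assert np.allclose(r1 - r2, r12)   # r12 is indeed the relative position
```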
```python
#collapse
# Here we define basic functions.

def cross(v1, v2):
    x1, y1, z1 = v1
    x2, y2, z2 = v2
    return [y1*z2 - z1*y2, -(x1*z2 - z1*x2), x1*y2 - y1*x2]

def rho(rel_position):
    x, y, z = rel_position
    return (x*x + y*y + z*z)**0.5

def dot(vector1, vector2):
    x1, y1, z1 = vector1
    x2, y2, z2 = vector2
    return x1*x2 + y1*y2 + z1*z2

def rdot(position, vector):
    return dot(position, vector)/rho(position)

def norm(rel_velocity):
    return rho(rel_velocity)

mu = 0.5  ## reduced mass of the system. We assume m1 and m2 are equal (each =1), hence mu=1/2.
c = 1.0   ## speed of light constant in Weber's potential

# Define the angular momentum
def AngMom(rel_position, rel_velocity):
    return cross(rel_position, rel_velocity)

def L(rel_position, rel_velocity):
    return norm(cross(rel_position, rel_velocity))

# Kinetic energy
def T(rel_position, rel_velocity):
    rxv = norm(cross(rel_position, rel_velocity))  # |r x v| = r * v_t
    # next formula decomposes v^2=(vr)^2+(vt)^2, where vt=r*θ'=|L|/(mu*r)
    return (mu/2)*(rdot(rel_position, rel_velocity)**2) + (mu/2)*(rho(rel_position)**-2)*(rxv**2)

## Weber potential energy
## Negative sign given -1=q1*q2
def U(rel_position, rel_velocity):
    rd = rdot(rel_position, rel_velocity)  # radial velocity r' (with c=1, r'^2/2c^2 = r'^2/2)
    return -(1/rho(rel_position))*(1 - (rd*rd)/2)
```

```python
## import the basic packages
import numpy as np
from scipy.integrate import odeint, solve_ivp
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

## Integrating the two-body isolated system of oppositely charged particles,
## i.e. a positron+electron pair.
## The product of the charges q1*q2 is a factor in Weber's force law, and appears twice
## in the formula of Newton's F=ma.

def weber(t, rel_state):
    x, y, z, vx, vy, vz = rel_state
    r = (x*x + y*y + z*z)**0.5
    rdot = (x*vx + y*vy + z*vz)/r
    A = (-1)*r**-2                 ## minus sign from q1*q2
    B = 1 - (rdot*rdot)/2
    C = (mu + ((c*c*r)**-1))**-1   ## +plus instead of -minus.
    dxdt = vx
    dydt = vy
    dzdt = vz
    dvxdt = (x/r)*A*B*C
    dvydt = (y/r)*A*B*C
    dvzdt = (z/r)*A*B*C
    return [dxdt, dydt, dzdt, dvxdt, dvydt, dvzdt]

t_span = (0.0, 100.0)
t = np.arange(0.0, 100.0, 0.1)
y1 = [2.0, 0, 0]      # initial relative position
v1 = [-0.4, 0.4, 0]   # initial relative velocity

# odeint solves the Weber equations of motion with initial state y1+v1 over t.
result = odeint(weber, y1 + v1, t, tfirst=True)

Energy = T(y1, v1) + U(y1, v1)
print('The initial total energy T+U is equal to:', Energy)
print('The initial angular momentum is equal to', norm(AngMom(y1, v1)))

fig = plt.figure()
ax = fig.add_subplot(1, 2, 1, projection='3d')
ax.plot(result[:, 0], result[:, 1], result[:, 2])
ax.set_title("position")
ax = fig.add_subplot(1, 2, 2, projection='3d')
ax.plot(result[:, 3], result[:, 4], result[:, 5])
ax.set_title("velocity")
```

What does the above plot demonstrate? It reveals a precession of the orbit around the centre of mass. This is not predicted by Coulomb's force, which, like Newton's law of gravitation, confines bound trajectories to closed elliptical orbits. The force is central, therefore angular momentum is conserved, and this constrains the motion to a plane, namely the plane orthogonal to the angular momentum of the system. Moreover the system conserves energy: the sum $T+U$ is constant along the motion (a quick numerical check of this claim follows below).

* Problem: Verify Assis-Clemente's 1990 formula for the lower and upper limits of the relative distance along the orbits.
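Here is the promised check of energy conservation along the computed orbit (our own sketch; it reuses `result`, `T`, `U` from the cells above and simply reports the drift of $T+U$):

```python
# Sample T+U along the integrated trajectory and report the maximum drift
# from its initial value. A small drift indicates energy conservation
# holds up to integration error.
energies = []
for j in range(len(result)):
    pos = [result[j, 0], result[j, 1], result[j, 2]]
    vel = [result[j, 3], result[j, 4], result[j, 5]]
    energies.append(T(pos, vel) + U(pos, vel))

print('max |T+U - (T+U)_0| =', max(abs(e - energies[0]) for e in energies))
```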
```python
#collapse-output
import matplotlib.pyplot
import pylab

r_list = []
for j in range(1000):
    sample_position = [result[j, 0], result[j, 1], result[j, 2]]
    sample_velocity = [result[j, 3], result[j, 4], result[j, 5]]
    r_list.append((int(j), rho(sample_position)))

prelistr = list(zip(*r_list))
pylab.scatter(list(prelistr[0]), list(prelistr[1]))
pylab.xlabel('time')
pylab.ylabel('rho')
pylab.title('Solutions have Upper and Lower Limits')
pylab.show()

#TU_list=[]
#for j in range(1000):
#    sample_position=[result[j,0], result[j,1], result[j,2]]
#    sample_velocity=[result[j,3], result[j,4], result[j,5]]
#    TU_list.append(
#        (rho(sample_position), T(sample_position,sample_velocity)+U(sample_position, sample_velocity)))
#prelist1 = list(zip(*TU_list))
#pylab.scatter(list(prelist1[0]),list(prelist1[1]))
#pylab.xlabel('distance r')
#pylab.ylabel('T+U')
#pylab.title('')
#pylab.show()

# The plot below demonstrates the conservation of angular momentum.
# Note that rdot is directly equal to the sample_velocity. I.e. there is
# no need to define rdot=v.hatr/r. This was error.

#A_list=[]
#for j in range(1000):
#    sample_position=[result[j,0], result[j,1], result[j,2]]
#    sample_velocity=[result[j,3], result[j,4], result[j,5]]
#    A_list.append(
#        (rho(sample_position), norm(cross(sample_position, sample_velocity))))
#prelist2 = list(zip(*A_list))
#pylab.scatter(list(prelist2[0]),list(prelist2[1]))
#pylab.xlabel('rho')
#pylab.ylabel('Angular Momentum')
#pylab.title('Conservation of Angular Momentum')
#pylab.show()

#rho_list=[]
#for j in range(180):
#    rho_list.append(
#        (int(j), rho([result[j,0], result[j,1], result[j,2]]),
#        )
#    )
```

```python
#collapse
from sympy import *

t = symbols('t')
m = symbols('m')
c = symbols('c')
r = Function('r')(t)
P = Function('P')(r, t)
F = Function('F')(r, t)
U = -(r**-1)*(1 - ((r.diff(t))**2)*(2*c*c)**-1)
F = (-1)*(U.diff(t))*((r.diff(t))**-1)
pprint(simplify(U))
print()
pprint(simplify(F))
## symbolic computation of the Force law.
```

The simplified expressions, reset from the pretty-printed output in standard notation:

$$U=\frac{\dot r^{\,2}/2-c^{2}}{c^{2}\,r},\qquad F=\frac{\dot r^{\,2}/2-c^{2}-r\,\ddot r}{c^{2}\,r^{2}},$$

equivalently $U=-\frac{1}{r}\Bigl(1-\frac{\dot r^{2}}{2c^{2}}\Bigr)$ and $F=-\frac{1}{r^{2}}\Bigl(1-\frac{\dot r^{2}}{2c^{2}}+\frac{r\,\ddot r}{c^{2}}\Bigr)$.
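The symbolic result can be cross-checked against the textbook form of Weber's force; a small sketch of ours (standalone, with its own symbols):

```python
from sympy import symbols, Function, simplify

# Cross-check: the force derived from F = -(dU/dt)/(dr/dt) should equal
# the Weber form F = -(1/r^2) * (1 - rdot^2/(2 c^2) + r * rddot / c^2).
t, c = symbols('t c')
r = Function('r')(t)
U = -(1/r) * (1 - r.diff(t)**2 / (2*c**2))
F = -U.diff(t) / r.diff(t)
F_weber = -(1/r**2) * (1 - r.diff(t)**2/(2*c**2) + r*r.diff(t, 2)/c**2)
assert simplify(F - F_weber) == 0
```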
c anirec - program to calculate receiver-function response of c a stack of anisotropic layers to a plane wave incident from below c CAN BE USED IN GRIDSEARCH OVER CC AND BAZ - SEE COMMENTED LINES FOR LOOP c cannibalized from aniprop.f 11/18/95 c xf77 -o anirec_osc -fast -native -O5 anirec.f /data/d4/park/Plotxy/plotlib /data/d4/park/Ritz/eislib /data/d4/park/Ritz/jlib c f77 -o anirec -fast -native -O5 anirec.f /data/d4/park/Ritz/eislib /data/d4/park/Ritz/jlib c for hexagonally symmetric media c reads fast axis orientation, constants A,B,C,D,E from file animodel c calculate quadratic eigenvalue problem based on the Christoffel matrix c see appendix of P. Shearer's thesis c c read model, phase velocity of incident wave, P, SV, or SH c c calc the eigenvector decomps for the layers c loop over frequency, calc reflection/transmission matrices c calc 3-comp transfer fct response at surface c find distortion of reference wavelet c modified to run with NA search by HY 2005 c outputs are in N-E-Z, R-T-Z or SV-SH-P coordinate systems c note jeff's T is 180 off of that of sac. c minor fix by HY 2006. add out_rot in the input c out_rot = 0: output traces in N-E-Z c out_rot = 1: R-T-Z c out_rot = 2: P-SV-SH. But not computed here. just rotate to c rtz and the is rotated by fs_traces_anirec elsewehere. c this version has t-comp flipped to keep up the sac convention. subroutine anirec(ntr, nsamp,dt,synth_cart,synth_cart2, c synth_cart3,thick,rho1,alpha, beta, pct_a, c pct_b,trend,plunge,nlay, c per,shiftp,shifts,baz0,slow,out_rot,ipulse) implicit real*8 (a-h,o-z) implicit integer*4 (i-n) include 'params.h' c integer maxlay, maxtr, maxsamp c parameter (maxlay =15, maxtr=200,maxsamp=2000) c integer ntr, nsamp, nlay ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc c raysum parameters passed in real synth_cart(3,maxsamp,maxtr), synth_cart2(3,maxsamp,maxtr) real synth_cart3(3,maxsamp,maxtr) real thick(maxlay),rho1(maxlay),alpha(maxlay),beta(maxlay) real pct_a(maxlay),pct_b(maxlay),trend(maxlay),plunge(maxlay) real strike(maxlay),dip(maxlay),dt real baz0(maxtr), slow(maxtr), width, shiftp,shifts,per integer out_rot,pulse ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc character*80 name,title,ylabel(3),name2 c character *4 chead(158) c character *3 wf(3) complex*16 pp,u0,ee,z1,z0,xnu,e1,e2,zla,xl complex*16 rt,tt,rt0,trc,pfac,u,resp,zz real*4 cc4,zz4,frqq,amn,amx,baseline,dat4,a,dat44 common/data/a(40000) c common/header/ahead(158) common/stfff/w(3,101),t(3,3),ttl(3,3),s(3,3),stl(3,3), x r(3,3),x(3),y(3) common/model/z(100),dz(100),rho(101),vp(101),vs(101),vp2(101), x vp4(101),vs2(101),vss(101) common/model2/xmu(101),xla(101),xmu2(101),xla2(101),xla4(101) common/propag/xnu(6,101),xl(6,100),pfac(6,3),u(3,6) common/mstff/qq(6,6),wr(6),wi(6),zr(6,6),zi(6,6),iv(6),fv(6) common/pstff/pp(3),u0(3),ee(6,6,101),e1(6,6),e2(6,6),zla(6) common/rstff/rt(3,3,101),tt(3,3,101),rt0(3,3),trc(3,3) common/nstff/cc4(8200),zz4(8200),dat4(8200,3,3),ccc(8200) common/nstfff/dat44(8200,3,3) common/disper/resp(3,3,2050),frqq(2050) common/disper2/roota(101),rootb(101),jtrval(101),kroots(2050) common/evanes/ievan(10000) c dimension iah(158) c equivalence (iah,ahead),(chead,ahead) data eps/1.d-6/,tol/1.d-3/ c we reduce the condition numbers of matrices by c normalizing physical quantities to make them dimensionless c Ill use the normal-mode normalizations c which are a little peculiar for the crust, but what the hey! 
rbar=5.515d3 ren=1.075190645d-3 radi=6.371d6 vbar=ren*radi con=rbar*vbar**2 z1=dcmplx(1.d0,0.d0) z0=dcmplx(0.d0,0.d0) ! write(*,*)ntr, nsamp,dt,nlay ! do i=1,nlay ! write(*,*)'layer',i,'-----------------------' ! write(*,*)thick(i),rho1(i),alpha(i), beta(i) ! write(*,*)pct_a(i),pct_b(i),trend(i),plunge(i) ! enddo ! write(*,*)baz0(1),slow(1) ! write(*,*)shift,out_rot,ipulse,per ! write(*,*)'------------------------------------------' c Notes on the angle conventions for w-hat, the axis of symmetry: c c In the anisotropic reflectivity code, subroutine matget *assumes* a c coordinate system in which z is down (anti-vertical), x is the radial c direction, and y is anti-transverse. Therefore, the position angles c theta,phi for w-hat are tilt relative to down, and azimuth defined as a c rotation from x towards y. This rotation is CCW if viewed from below, c and CW if viewed from above. Since w-hat and -(w-hat) define the same c axis of symmetry, the position angles *also* can be defined as c theta=(tilt from vertical) and phi=(rotation from anti-x (anti-radial) c towards anti-y (transverse)). Viewed from above, this phi rotation is c CW, and defines the strike of w-hat relative to the arrival azimuth of c the wave. c c In order to compute seismograms for a variety of back-azimuths, we c assume that the default is a wave approaching from the north, so that c radial is south and transverse is east. Due to this orientation, the c synthetic code interprets the layered model as having w-hat position c angles defined as theta=(tilt from vertical) phi=(strike CW from N). c For an event at back-azimuth psi (CW from N), routine matget rotates w-hat c from geographic coordinates to ray-based coordinates before computing c reflectivity matrices. If a wave arrives at back-azimuth psi, the strike c of the axis of symmetry w-hat relative to its arrival azimuth is c phi'=phi-psi. The code performs this rotation with this code in c subroutine matget, for w-hat azimuth "az": c c caz=dcosd(az) c saz=-dsind(az) ! sin(-az) c do n=1,nlp c ww(3)=w(3,n) c ww(1)=w(1,n)*caz-w(2,n)*saz c ww(2)=w(1,n)*saz+w(2,n)*caz c ... c c In this manner, the axes of symmetry of the model, saved in array w(.,.), c are never modified. c in the driver code, "az" is the variable "baz" for back azimuth c c default baz = 0 c baz=0. c set up for 10 sps data c dt=0.1 c nfrq must be .le.npad/2 npad=2048!2048 nfrq=512!1024 dur=npad*dt df=1./dur frqmax=nfrq*df if (dur < nsamp * dt + shiftp) then write(*,*) 'HAHAHA IN ANIREC.F' write(*,*) 'dur =',dur , ' < ', 'nsamp * dt + shiftp=', &nsamp * dt + shiftp endif if (dur < nsamp * dt + shifts) then write(*,*) 'HAHAHA IN ANIREC.F' write(*,*) 'dur =',dur , ' < ', 'nsamp * dt + shifts=', &nsamp * dt + shifts endif c print *,'dt,df,duration of record,max frequency:',dt,df,dur,frqmax c print *,'NOTE: cosine^2 taper will be applied up to fmax' c print *,'input model? (space-return: animodel)' c read(5,102) name c 102 format(a) c if(name(1:1).eq.' 
') then c open(7,file='animodel',form='formatted') c else c open(7,file=name,form='formatted') c endif c read(7,102) title c print *,title c read(7,*) nl nl = nlay-1 c read in theta,phi in degrees - polar coords of fast axis nlp=nl+1 nlm=nl-1 z(1) = thick(1) do i=2,nlp z(i) = z(i-1) + thick(i) enddo c note the theta phi passed in are in radian c also dip needs a 90 plus do i=1,nlp phi = trend(i) !baz theta = pi/2.+plunge(i) !tilt w(1,i)=dble(sin(theta )*cos(phi)) w(2,i)=dble(sin(theta)*sin(phi)) w(3,i)=dble(cos(theta)) c print *,(w(j,i),j=1,3) c read depth to ith interface, vp (m/sec), pk-to-pk cos(2th) relative P pert c pk-to-pk cos(4th) relative P pert, v_s, pk-to-pk cos(2th) relative S pert c density (kg/m**3) c read(7,*) z(i),vp(i),vp2(i),vp4(i),vs(i),vs2(i),rho(i) vp(i) = alpha(i) vp2(i) = pct_a(i) /100. vp4(i) = 0. vs(i) = beta(i) vs2(i) = pct_b(i) /100. rho(i) = rho1(i) c recall that we interpret fractional values of b,c,e c as peak-to-peak relative velocity perts. c therefore, e=0.02 is 2% pert to mu from slowest to fastest xmu(i)=rho(i)*vs(i)**2/con xmu2(i)=vs2(i)*xmu(i) xla(i)=rho(i)*vp(i)**2/con xla2(i)=vp2(i)*xla(i) xla4(i)=vp4(i)*xla(i) vs(i)=vs(i)/vbar vp(i)=vp(i)/vbar rho(i)=rho(i)/rbar z(i)=z(i)/radi end do do i=2,nl dz(i)=z(i)-z(i-1) end do dz(1)=z(1) c print the organ-pipe mode count for 1Hz c the lowest layer (nl+1) is taken as evanescent region. sdelay=0. pdelay=0. do i=1,nl sdelay=sdelay+(dz(i)/vs(i))/ren pdelay=pdelay+(dz(i)/vp(i))/ren end do c print *, sdelay,pdelay c search for cmin, cmax is halfspace velocity cmin=vs(1) vss(1)=vs(1) do i=2,nlp if(cmin.gt.vs(i)) cmin=vs(i) vss(i)=vs(i) end do 900 csmin=vs(nlp)*vbar/1000. cpmin=vp(nlp)*vbar/1000. c source; passed in from the main body c ipulse=1 c per=width c print *, 'ipulse is ', ipulse, ' per is ', per t1=0. t2=nsamp * dt npts=t2/dt+1 nst=t1/dt+1 c iah(80) = npts !number of points c source pulse npul=per/dt do i=1,npad zz4(i)=0.d0 end do fac=2.d0*pi/per if(ipulse.eq.1) then do i=1,npul time=i*dt zz4(i)=(dsin(fac*time/2.d0))**2 end do elseif(ipulse.eq.2) then do i=1,npul time=i*dt zz4(i)=dsin(fac*time)*(dsin(fac*time/2.d0))**2 end do elseif(ipulse.eq.3) then do i=1,160 xx=0.05*(i-80) zz4(i)=xx*cos(2.*pi*xx/sqrt(1.+xx**2))*exp(-0.4*xx**2) end do elseif(ipulse.eq.5) then do i=1,npul*2 xx=0.01*(i-npul) zz4(i)=xx*cos(2.*pi*xx/sqrt(1.+xx**2))*exp(-0.4*xx**2) end do else do i=1,160 xx=0.05*(i-80) zz4(i)=cos(5.*pi*xx/sqrt(1.+4*xx**2))*exp(-4.0*xx**2) end do endif c print *,(zz4(i),i=1,npul*2) c pause call refft(zz4,npad,1,1) c zero the DC and Nyquist c ick switches the sign of y and z components to transverse & vertical zz4(1)=0. zz4(2)=0. ccadd loop here for slowness/baz do intra = 1, ntr cc = 1./(slow(intra) * 1000.) baz = baz0(intra) cs=cos(baz ) sn=sin(baz ) if(cc.le.0.d0) go to 950 c non-dimensionalize cc cc=cc*1000./vbar c calc the eigenvector decomps for the layers c need to identify upgoing P,SV,SH in the halfspace call matget(nl,cc,baz * 180./pi) c loop over frequency, calc reflection/transmission matrices c calc 3-comp transfer fct response at surface do jf=1,nfrq om=2.d0*pi*jf*df/ren frqq(jf)=jf*df call respget(nl,om,cc,resp(1,1,jf)) end do cccc=cc*vbar/1000. 
c lets run a pulse thru these functions c s(t)=cos(2pi*t/T)*sin^2(2pi*t/2T) for two oscillations of the cos c OR c s(t)=sin(2pi*t/2T)**2 for 1/2 oscillation of the sin c OR bbpulse c s(t)=t*cos(2.*pi*t/sqrt(1.+t**2))*exp(-0.4*t**2) c let T=1 and 2 sec c version for user-supplied cc and baz c print *, c x 'wavelet print: 1-onesidedpulse 2-oscillation 3-bbpulse 4-hfbb' c read(5,*) ipulse c version for gridsearch cc and baz c ipulse=3 c c another version for gridsearch c ipulse=1 c end gridsearch lines c if(ipulse.le.2) then c print *,'enter wavelet period in seconds' c read(5,*) per c endif c t1=0. c dur=npad*dt c t1=0. c t2=dur c version for usersupplied cc and baz c 960 print *,'tstart, duration to plot? currently:',t1,t2 c read(5,*) t1,t2 c if(t1.lt.0.0.or.t2.le.0.) go to 900 c version for gridsearch cc and baz c note S wave is aligned to start at shift sec. c t1=0. c t2=nsamp * dt c end versions c npts=t2/dt+1 c nst=t1/dt+1 c we start at dt, with duration 2T -- ONE CYCLE do iprint=1,3 ick=1 baseline=0. do k=1,3 cc4(1)=0. cc4(2)=0. if(k.gt.1) ick=-1 do jf=1,nfrq zz=ick*dcmplx(dble(zz4(2*jf+1)),dble(zz4(2*jf+2))) zz=zz*resp(k,iprint,jf) cc4(2*jf+1)=dreal(zz) cc4(2*jf+2)=dimag(zz) end do do jf=2*nfrq+3,npad cc4(jf)=0. end do call refft(cc4,npad,-1,-1) amx=cc4(1) amn=cc4(1) do i=1,npts amx=amax1(amx,cc4(i+nst)) amn=amin1(amn,cc4(i+nst)) end do baseline=baseline-amn do i=1,npad dat4(i,k,iprint)=cc4(i)+baseline dat44(i,k,iprint)=cc4(i) end do baseline=baseline+amx end do end do cc4(1)=nst*dt cc4(2)=dt c wf(1) = '.P' c wf(2) = '.SV' c wf(3) = '.SH' cwe need SV and SH only do ity=1,3 c use organ-pipe count to correct for traveltime of Swave thru stack c tdelay=sdelay-shift. c use organ-pipe count to correct for traveltime of Pwave thru stack c tdelay=pdelay-4. ! write(*,*)sdelay,pdelay,shift if (ity .ne.1) then tdelay = sdelay-shifts!shift!sdelay!-t2/2. !write(*,*)'sdel',sdelay !if (tdelay.lt.0) stop "shift too large" c tdelay =sdelay-per else tdelay= pdelay-shiftp!shift!pdelay!-t2/2. !write(*,*)'pdel',pdelay !if (tdelay.lt.0) stop "shift too large" c tdelay =sdelay-per endif c if (ity .eq. 1) tdelay=tdelay*0.5 nst0=tdelay/dt if(nst0.le.0) nst0=1 c print *, t2,tdelay c switching the sign on T HY jan -11 for rtz if (ity .eq. 1) then do i=1,npts synth_cart(3,i,intra)=dat44(nst0+i-nst,3,ity) if (out_rot .ne. 0) then synth_cart(1,i,intra)=dat44(nst0+i-nst,1,ity) synth_cart(2,i,intra)=-dat44(nst0+i-nst,2,ity) else synth_cart(1,i,intra)=-sn*dat44(nst0+i,2,ity) x -cs*dat44(nst0+i,1,ity) synth_cart(2,i,intra)= cs*dat44(nst0+i,2,ity) x -sn*dat44(nst0+i,1,ity) endif end do end if if (ity .eq. 2) then amx = dat44(1,3,2) amn = dat44(1,3,2) do i=1,npts!per/dt*2.5!for the main pulse only synth_cart2(3,i,intra)=dat44(nst0+i-nst,3,ity) if (out_rot .ne. 0) then synth_cart2(1,i,intra)=dat44(nst0+i-nst,1,ity) synth_cart2(2,i,intra)=-dat44(nst0+i-nst,2,ity) else synth_cart2(1,i,intra)=-sn*dat44(nst0+i,2,ity) x -cs*dat44(nst0+i,1,ity) synth_cart2(2,i,intra)= cs*dat44(nst0+i,2,ity) x -sn*dat44(nst0+i,1,ity) endif end do end if if (ity .eq. 3) then do i=1,npts !per/dt*2.5!for the main pulse only synth_cart3(3,i,intra)=dat44(nst0+i-nst,3,ity) if (out_rot .ne. 0) then synth_cart3(1,i,intra)=dat44(nst0+i-nst,1,ity) synth_cart3(2,i,intra)=-dat44(nst0+i-nst,2,ity) else synth_cart3(1,i,intra)=-sn*dat44(nst0+i,2,ity) x -cs*dat44(nst0+i,1,ity) synth_cart3(2,i,intra)= cs*dat44(nst0+i,2,ity) x -sn*dat44(nst0+i,1,ity) endif end do cc if ((abs(amx) .lt. 
abs(amn)) ) then cc do i=1,npts cc synth_cart3(3,i,intra)=-synth_cart3(3,i,intra) cc synth_cart3(2,i,intra)=-synth_cart3(2,i,intra) cc synth_cart3(1,i,intra)=-synth_cart3(1,i,intra) cc end do cc end if end if end do 103 format(a,i2,a,i3,a) c version for user-supplied cc and baz end do !ntr c end version for user-supplied cc and baz c version for grid search over cc and baz c end do c end do c end version for grid search over cc and baz 950 continue 101 format(80a) c stop end subroutine matget(nl,cc,az) c SPECIAL VERSION: az rotates the w-hat vector by -az degrees c returns stress-displacement vectors for a stack of anisotropic layers c P waves may be evanescent, but S waves are oscillatory in the stack c the weirdness seen in the surface wave code should not appear in c a receiver function code c however, the iev parameter is retained to avoid leaving timebombs implicit real*8 (a-h,o-z) implicit integer*4 (i-n) complex*16 pp,u0,ee,pw,uw,pu,z1,z0,xnu,eye,e1,e2,zla,rtm complex*16 pfac,u,xl common/stfff/w(3,101),t(3,3),ttl(3,3),s(3,3),stl(3,3), x r(3,3),x(3),y(3) common/model/z(100),dz(100),rho(101),vp(101),vs(101),vp2(101), x vp4(101),vs2(101),vss(101) common/model2/xmu(101),xla(101),xmu2(101),xla2(101),xla4(101) common/propag/xnu(6,101),xl(6,100),pfac(6,3),u(3,6) common/rrt/rtm(6,6,100) common/mstff/qq(6,6),wr(6),wi(6),zr(6,6),zi(6,6),iv(6),fv(6) common/pstff/pp(3),u0(3),ee(6,6,101),e1(6,6),e2(6,6),zla(6) common/qstff/qi(6,6),xr(6),xi(6),yr(6),yi(6),ips(3) dimension ww(3) data pi/3.14159265358979d0/,eps/1.d-6/,tol/1.d-7/ c set iev=1 ** should be superfluous, save for now c toggle to iev=0 if there is a purely propagating wave in the top layer n=1 iev=1 z1=dcmplx(1.d0,0.d0) z0=dcmplx(0.d0,0.d0) eye=dcmplx(0.d0,1.d0) rbar=5.515d3 ren=1.075190645d-3 radi=6.371d6 vbar=radi*ren con=rbar*radi*radi*ren*ren nlp=nl+1 nlm=nl-1 c first calculate vertical wavenumbers and propagating waves for each layer c requires an eigenvector problem be solved c in general, the evanescent vertical wavenumbers have nonzero real parts c complex exponential fct is used to avoid endless branching c horizontal slowness p_x px=1.d0/cc caz=dble(cos(az* pi / 180.0)) saz=-dble(sin(az* pi / 180.0)) ! 
sin(-az) do n=1,nlp ww(3)=w(3,n) ww(1)=w(1,n)*caz-w(2,n)*saz ww(2)=w(1,n)*saz+w(2,n)*caz a=xla(n) b=xla2(n) c=xla4(n) d=xmu(n) e=xmu2(n) c print *,'a,b,c,d,e',a,b,c,d,e fact=8.d0*ww(1)*ww(1)*c+2.d0*e facs=16.d0*ww(1)*ww(3)*c facr=8.d0*ww(3)*ww(3)*c+2.d0*e c print *,'a,b,c,d,e',a,b,c,d,e c print *,'w(.,n),fact,facs,facr',(ww(l),l=1,3),fact,facs,facr do i=1,3 c first the what-0-what tensor do j=1,3 t(j,i)=fact*ww(j)*ww(i) s(j,i)=facs*ww(j)*ww(i) r(j,i)=facr*ww(j)*ww(i) end do c next the identity tensor - correct an error on 7/6/95 t(i,i)=t(i,i)+d+e*(2.d0*ww(1)*ww(1)-1.d0) s(i,i)=s(i,i)+4.d0*e*ww(1)*ww(3) r(i,i)=r(i,i)+d+e*(2.d0*ww(3)*ww(3)-1.d0) end do c print 101,(ww(i),i=1,3) c print 101,fact,facs,facr c print *,'t,s,r' c print 101,((t(i,j),j=1,3),i=1,3) c print 101,((s(i,j),j=1,3),i=1,3) c print 101,((r(i,j),j=1,3),i=1,3) fac=b-4.d0*c-2.d0*e c next the what-0-xhat and what-0-zhat tensors do i=1,3 t(1,i)=t(1,i)+fac*ww(1)*ww(i) t(i,1)=t(i,1)+fac*ww(1)*ww(i) s(1,i)=s(1,i)+fac*ww(3)*ww(i) s(i,1)=s(i,1)+fac*ww(3)*ww(i) s(3,i)=s(3,i)+fac*ww(1)*ww(i) s(i,3)=s(i,3)+fac*ww(1)*ww(i) r(3,i)=r(3,i)+fac*ww(3)*ww(i) r(i,3)=r(i,3)+fac*ww(3)*ww(i) end do fac=a-b+c-d+e c finally the xhat-0-xhat, zhat-0-zhat, xhat-0-zhat, zhat-0-xhat tensors t(1,1)=t(1,1)+fac s(3,1)=s(3,1)+fac s(1,3)=s(1,3)+fac r(3,3)=r(3,3)+fac c mult by horizontal slowness and calc the modified T-matrix do i=1,3 do j=1,3 t(j,i)=t(j,i)*px*px s(j,i)=s(j,i)*px end do t(i,i)=t(i,i)-rho(n) end do c calculate R**(-1).S, R**(-1).T, using routine solve nn=3 do i=1,3 do j=1,3 y(j)=s(j,i) end do call solve(nn,r,x,y) do j=1,3 stl(j,i)=x(j) end do nn=-3 end do do i=1,3 do j=1,3 y(j)=t(j,i) end do call solve(nn,r,x,y) do j=1,3 ttl(j,i)=x(j) end do end do c fill the 6x6 Q-matrix do i=1,3 do j=1,3 qq(j,i)=-stl(j,i) qq(j,i+3)=-ttl(j,i) qq(j+3,i)=0.d0 qq(j+3,i+3)=0.d0 end do qq(i+3,i)=1.d0 end do c solve eigenvalue problem for polarization vectors and vertical slownesses c matrix system is nonsymmetric real valued c solution from the eispack guide call balanc(6,6,qq,is1,is2,fv) call elmhes(6,6,is1,is2,qq,iv) call eltran(6,6,is1,is2,qq,iv,zr) call hqr2(6,6,is1,is2,qq,wr,wi,zr,ierr) if(ierr.ne.0) then print *, ierr,' error!' stop endif call balbak(6,6,is1,is2,fv,6,zr) c print *,'for layer',n c print *, 'for phase velocity',cc,' the vertical slownesses are' c print 101,(wr(i),wi(i),i=1,6) c pause 101 format(6g12.4) c eigenvector unpacking, see EISPACK guide, page 88 c bad eigenvector order is flagged by wi(i)>0. 
for odd i iflag=0 do i=1,6 if(wi(i).eq.0.d0) then if(n.eq.1) iev=0 do j=1,6 zi(j,i)=0.d0 end do elseif(wi(i).gt.0.d0) then c bad eigenvector order is flagged by wi(i)>0 for even i if((i/2)*2.eq.i) then iflag=iflag+1 iv(iflag)=i endif do j=1,6 zi(j,i)=zr(j,i+1) end do else do j=1,6 zi(j,i)=-zi(j,i-1) zr(j,i)=zr(j,i-1) end do endif c normalize by the last three indices sum=0.d0 do j=4,6 sum=sum+zr(j,i)**2+zi(j,i)**2 end do sum=dsqrt(sum) do j=1,6 zr(j,i)=zr(j,i)/sum zi(j,i)=zi(j,i)/sum end do end do c assemble the stress-displacement vectors c calculate the traction components, with i removed pp(1)=dcmplx(px,0.d0) pp(2)=z0 do k=1,6 pp(3)=dcmplx(wr(k),wi(k)) do i=1,3 u0(i)=dcmplx(zr(i+3,k),zi(i+3,k)) end do pu=z0 pw=z0 uw=z0 abcde=a-b+c-2.d0*d+2.d0*e bce=b-4.d0*c-4.d0*e de=d-e do i=1,3 pu=pu+pp(i)*u0(i) pw=pw+pp(i)*ww(i) uw=uw+u0(i)*ww(i) end do do i=1,3 e1(i,k)=u0(i) e1(i+3,k)=ww(i)*(pu*ww(3)*bce+8.d0*pw*uw*ww(3)*c x +2.d0*(pw*u0(3)+uw*pp(3))*e) e1(i+3,k)=e1(i+3,k)+pp(i)*(u0(3)*de+2.d0*uw*ww(3)*e) e1(i+3,k)=e1(i+3,k)+u0(i)*(pp(3)*de+2.d0*pw*ww(3)*e) end do e1(6,k)=e1(6,k)+pu*abcde+pw*uw*bce c almost lastly, mult traction by i do i=1,3 e1(i+3,k)=eye*e1(i+3,k) end do end do c reorder into upgoing and downgoing waves c we use the exp(-i*omega*t) convention with z increasing downward c so downgoing oscillatory waves have p_z>0, k_z real c downgoing evanescent waves have Im(p_z)>0 c if the axis of symmetry is tilted, there are cases where a pair of c near-horizontal plane waves will be both upgoing or both downgoing c since Chen's algorithm depends on a 3,3 split, we must adopt a kluge c similarly, there are cases where the EISPACK routines dont return c the vertical wavenumbers in ordered pairs, but mix them up a bit c this seems to cause problems, so a fix is necessary c c first, test for bad eigenvector order, switch k-1->k+1, k->k-1, k+1->k c worst case is iflag=2, real,imag1+,imag1-,imag2+,imag2-,real if(iflag.gt.0) then do i=1,iflag k=iv(i) wrr=wr(k-1) wii=wi(k-1) wr(k-1)=wr(k) wi(k-1)=wi(k) wr(k)=wr(k+1) wi(k)=wi(k+1) wr(k+1)=wrr wi(k+1)=wii do j=1,6 pu=e1(j,k-1) e1(j,k-1)=e1(j,k) e1(j,k)=e1(j,k+1) e1(j,k+1)=pu end do end do endif c second, divide into upgoing and downgoing waves isum=0 do k=1,6 iv(k)=0 if(wi(k).eq.0.d0.and.wr(k).gt.0) iv(k)=1 if(wi(k).gt.0.d0) iv(k)=1 isum=isum+iv(k) end do c if up and downgoing cohorts are not equal, switch the sense of the c pure-oscillatory wave with smallest wavenumber 140 continue if(isum.ne.3) then wr0=0.d0 do k=1,6 wr0=dmax1(wr0,dabs(wr(k))) end do do k=1,6 if(wi(k).eq.0.d0) then if(dabs(wr(k)).lt.wr0) then wr0=dabs(wr(k)) kk=k endif endif end do if(iv(kk).eq.0) then iv(kk)=1 else iv(kk)=0 endif c check that we have equal up/down cohorts isum=0 do k=1,6 isum=isum+iv(k) end do go to 140 endif jdown=1 jup=4 c print *,'for layer',n,' the vert wavenums are (0=up,1=dn)' do k=1,6 if(iv(k).eq.1) then ki=jdown jdown=jdown+1 else ki=jup jup=jup+1 endif do i=1,6 ee(i,ki,n)=e1(i,k) end do c incorporate the factor of i into the stored vertical slowness xnu(ki,n)=dcmplx(-wi(k),wr(k)) end do 1008 format(a,2g15.6,a,2g15.6) end do c now, must identify which upgoing waves in the halfspace are P,SV,SH c crud, this goes back to array ee c 3: SH is y-motion c 2: SV is (-sqrt((1/vs)**2-p_x**2),0,-p_x) ! 
recall that z points down c 1: P is (p_x,0,-sqrt((1/vp)**2-p_x**2) c so we branch on size of u_y, and relative sign of u_x and u_z c print *,'in the halfspace:' c do i=4,6 c print *,'for i*k_z=',xnu(i,nlp),', the disp-stress vector is' c do j=1,6 c xi(j)=dimag(ee(j,i,nlp)) c xr(j)=dreal(ee(j,i,nlp)) c end do c print 101,(xr(j),j=1,6),(xi(j),j=1,6) c end do do i=4,6 ips(i-3)=3 if(zabs(ee(2,i,nlp)).lt.dsqrt(tol)) then ! not SH test=dreal(ee(1,i,nlp))/dreal(ee(3,i,nlp)) if(test.gt.0.d0) then ips(i-3)=2 else ips(i-3)=1 endif endif end do c print *,'wave prints:',(ips(i),i=1,3) return end subroutine matget_old(nl,cc) c returns stress-displacement vectors for a stack of anisotropic layers c P waves may be evanescent, but S waves are oscillatory in the stack c the weirdness seen in the surface wave code should not appear in c a receiver function code c however, the iev parameter is retained to avoid leaving timebombs implicit real*8 (a-h,o-z) implicit integer*4 (i-n) complex*16 pp,u0,ee,pw,uw,pu,z1,z0,zz,xnu,eye,e1,e2,zla,rtm complex*16 pfac,u,xl common/stfff/w(3,101),t(3,3),ttl(3,3),s(3,3),stl(3,3), x r(3,3),x(3),y(3) common/model/z(100),dz(100),rho(101),vp(101),vs(101),vp2(101), x vp4(101),vs2(101),vss(101) common/model2/xmu(101),xla(101),xmu2(101),xla2(101),xla4(101) common/propag/xnu(6,101),xl(6,100),pfac(6,3),u(3,6) common/defect/idfct(4,101),adf(2,101) common/rrt/rtm(6,6,100) common/mstff/qq(6,6),wr(6),wi(6),zr(6,6),zi(6,6),iv(6),fv(6) common/pstff/pp(3),u0(3),ee(6,6,101),e1(6,6),e2(6,6),zla(6) common/qstff/qi(6,6),xr(6),xi(6),yr(6),yi(6),ips(3) data pi/3.14159265358979d0/,eps/1.d-6/,tol/1.d-7/ c set iev=1 ** should be superfluous, but save for now c toggle to iev=0 if there is a purely propagating wave in the top layer n=1 iev=1 z1=dcmplx(1.d0,0.d0) z0=dcmplx(0.d0,0.d0) eye=dcmplx(0.d0,1.d0) rbar=5.515d3 ren=1.075190645d-3 radi=6.371d6 vbar=radi*ren con=rbar*radi*radi*ren*ren nlp=nl+1 nlm=nl-1 c first calculate vertical wavenumbers and propagating waves for each layer c requires an eigenvector problem be solved c in general, the evanescent vertical wavenumbers have nonzero real parts c complex exponential fct is used to avoid endless branching c horizontal slowness p_x px=1.d0/cc do n=1,nlp a=xla(n) b=xla2(n) c=xla4(n) d=xmu(n) e=xmu2(n) c print *,'a,b,c,d,e',a,b,c,d,e fact=8.d0*w(1,n)*w(1,n)*c+2.d0*e facs=16.d0*w(1,n)*w(3,n)*c facr=8.d0*w(3,n)*w(3,n)*c+2.d0*e do i=1,3 c first the what-0-what tensor do j=1,3 t(j,i)=fact*w(j,n)*w(i,n) s(j,i)=facs*w(j,n)*w(i,n) r(j,i)=facr*w(j,n)*w(i,n) end do c next the identity tensor - correct an error on 7/6/95 t(i,i)=t(i,i)+d+e*(2.d0*w(1,n)*w(1,n)-1.d0) s(i,i)=s(i,i)+4.d0*e*w(1,n)*w(3,n) r(i,i)=r(i,i)+d+e*(2.d0*w(3,n)*w(3,n)-1.d0) end do c print 101,(w(i,n),i=1,3) c print 101,fact,facs,facr c print *,'t,s,r' c print 101,((t(i,j),j=1,3),i=1,3) c print 101,((s(i,j),j=1,3),i=1,3) c print 101,((r(i,j),j=1,3),i=1,3) fac=b-4.d0*c-2.d0*e c next the what-0-xhat and what-0-zhat tensors do i=1,3 t(1,i)=t(1,i)+fac*w(1,n)*w(i,n) t(i,1)=t(i,1)+fac*w(1,n)*w(i,n) s(1,i)=s(1,i)+fac*w(3,n)*w(i,n) s(i,1)=s(i,1)+fac*w(3,n)*w(i,n) s(3,i)=s(3,i)+fac*w(1,n)*w(i,n) s(i,3)=s(i,3)+fac*w(1,n)*w(i,n) r(3,i)=r(3,i)+fac*w(3,n)*w(i,n) r(i,3)=r(i,3)+fac*w(3,n)*w(i,n) end do fac=a-b+c-d+e c finally the xhat-0-xhat, zhat-0-zhat, xhat-0-zhat, zhat-0-xhat tensors t(1,1)=t(1,1)+fac s(3,1)=s(3,1)+fac s(1,3)=s(1,3)+fac r(3,3)=r(3,3)+fac c mult by horizontal slowness and calc the modified T-matrix do i=1,3 do j=1,3 t(j,i)=t(j,i)*px*px s(j,i)=s(j,i)*px end do t(i,i)=t(i,i)-rho(n) end do c 
calculate R**(-1).S, R**(-1).T, using routine solve nn=3 do i=1,3 do j=1,3 y(j)=s(j,i) end do call solve(nn,r,x,y) do j=1,3 stl(j,i)=x(j) end do nn=-3 end do do i=1,3 do j=1,3 y(j)=t(j,i) end do call solve(nn,r,x,y) do j=1,3 ttl(j,i)=x(j) end do end do c fill the 6x6 Q-matrix do i=1,3 do j=1,3 qq(j,i)=-stl(j,i) qq(j,i+3)=-ttl(j,i) qq(j+3,i)=0.d0 qq(j+3,i+3)=0.d0 end do qq(i+3,i)=1.d0 end do c solve eigenvalue problem for polarization vectors and vertical slownesses c matrix system is nonsymmetric real valued c solution from the eispack guide call balanc(6,6,qq,is1,is2,fv) call elmhes(6,6,is1,is2,qq,iv) call eltran(6,6,is1,is2,qq,iv,zr) call hqr2(6,6,is1,is2,qq,wr,wi,zr,ierr) if(ierr.ne.0) then print *, ierr,' error!' stop endif call balbak(6,6,is1,is2,fv,6,zr) c print *,'for layer',n c print *, 'for phase velocity',cc,' the vertical slownesses are' c print 101,(wr(i),wi(i),i=1,6) c pause 101 format(6g12.4) c eigenvector unpacking, see EISPACK guide, page 88 c bad eigenvector order is flagged by wi(i)>0. for odd i iflag=0 do i=1,6 if(wi(i).eq.0.d0) then if(n.eq.1) iev=0 do j=1,6 zi(j,i)=0.d0 end do elseif(wi(i).gt.0.d0) then c bad eigenvector order is flagged by wi(i)>0 for even i if((i/2)*2.eq.i) then iflag=iflag+1 iv(iflag)=i endif do j=1,6 zi(j,i)=zr(j,i+1) end do else do j=1,6 zi(j,i)=-zi(j,i-1) zr(j,i)=zr(j,i-1) end do endif c normalize by the last three indices sum=0.d0 do j=4,6 sum=sum+zr(j,i)**2+zi(j,i)**2 end do sum=dsqrt(sum) do j=1,6 zr(j,i)=zr(j,i)/sum zi(j,i)=zi(j,i)/sum end do end do c assemble the stress-displacement vectors c calculate the traction components, with i removed pp(1)=dcmplx(px,0.d0) pp(2)=z0 do k=1,6 pp(3)=dcmplx(wr(k),wi(k)) do i=1,3 u0(i)=dcmplx(zr(i+3,k),zi(i+3,k)) end do pu=z0 pw=z0 uw=z0 abcde=a-b+c-2.d0*d+2.d0*e bce=b-4.d0*c-4.d0*e de=d-e do i=1,3 pu=pu+pp(i)*u0(i) pw=pw+pp(i)*w(i,n) uw=uw+u0(i)*w(i,n) end do do i=1,3 e1(i,k)=u0(i) e1(i+3,k)=w(i,n)*(pu*w(3,n)*bce+8.d0*pw*uw*w(3,n)*c x +2.d0*(pw*u0(3)+uw*pp(3))*e) e1(i+3,k)=e1(i+3,k)+pp(i)*(u0(3)*de+2.d0*uw*w(3,n)*e) e1(i+3,k)=e1(i+3,k)+u0(i)*(pp(3)*de+2.d0*pw*w(3,n)*e) end do e1(6,k)=e1(6,k)+pu*abcde+pw*uw*bce c almost lastly, mult traction by i do i=1,3 e1(i+3,k)=eye*e1(i+3,k) end do end do c reorder into upgoing and downgoing waves c we use the exp(-i*omega*t) convention with z increasing downward c so downgoing oscillatory waves have p_z>0, k_z real c downgoing evanescent waves have Im(p_z)>0 c if the axis of symmetry is tilted, there are cases where a pair of c near-horizontal plane waves will be both upgoing or both downgoing c since Chen's algorithm depends on a 3,3 split, we must adopt a kluge c similarly, there are cases where the EISPACK routines dont return c the vertical wavenumbers in ordered pairs, but mix them up a bit c this seems to cause problems, so a fix is necessary c c first, test for bad eigenvector order, switch k-1->k+1, k->k-1, k+1->k c worst case is iflag=2, real,imag1+,imag1-,imag2+,imag2-,real if(iflag.gt.0) then do i=1,iflag k=iv(i) wrr=wr(k-1) wii=wi(k-1) wr(k-1)=wr(k) wi(k-1)=wi(k) wr(k)=wr(k+1) wi(k)=wi(k+1) wr(k+1)=wrr wi(k+1)=wii do j=1,6 pu=e1(j,k-1) e1(j,k-1)=e1(j,k) e1(j,k)=e1(j,k+1) e1(j,k+1)=pu end do end do endif c second, divide into upgoing and downgoing waves isum=0 do k=1,6 iv(k)=0 if(wi(k).eq.0.d0.and.wr(k).gt.0) iv(k)=1 if(wi(k).gt.0.d0) iv(k)=1 isum=isum+iv(k) end do c if up and downgoing cohorts are not equal, switch the sense of the c pure-oscillatory wave with smallest wavenumber 140 continue if(isum.ne.3) then wr0=0.d0 do k=1,6 wr0=dmax1(wr0,dabs(wr(k))) 
end do do k=1,6 if(wi(k).eq.0.d0) then if(dabs(wr(k)).lt.wr0) then wr0=dabs(wr(k)) kk=k endif endif end do if(iv(kk).eq.0) then iv(kk)=1 else iv(kk)=0 endif c check that we have equal up/down cohorts isum=0 do k=1,6 isum=isum+iv(k) end do go to 140 endif jdown=1 jup=4 c print *,'for layer',n,' the vert wavenums are (0=up,1=dn)' 1001 format(i2,2g15.6) do k=1,6 if(iv(k).eq.1) then ki=jdown jdown=jdown+1 else ki=jup jup=jup+1 endif do i=1,6 ee(i,ki,n)=e1(i,k) end do c incorporate the factor of i into the stored vertical slowness xnu(ki,n)=dcmplx(-wi(k),wr(k)) end do c OK, here's where we check whether two downgoing stress-disp vectors c are nearly parallel - we check the dotproducts of displacement components do i=1,4 idfct(i,n)=0 adf((i+1)/2,n)=0.d0 end do do i=1,2 do j=i+1,3 r1=0.d0 r2=0.d0 zz=z0 do k=1,3 r1=r1+zabs(ee(k,i,n))**2 r2=r2+zabs(ee(k,j,n))**2 zz=zz+ee(k,j,n)*conjg(ee(k,i,n)) end do qqq=1.d0-zabs(zz)/dsqrt(r1*r2) if(qqq.lt.tol) then ccc=cc*vbar idfct(1,n)=i idfct(2,n)=j c print 1008,'vert slownesses',xnu(i,n),' and',xnu(j,n) c we average eigenvalues (vert slownesses) c and solve for eigenvector in subroutine defective xnu(i,n)=(xnu(i,n)+xnu(j,n))/2.d0 xnu(j,n)=xnu(i,n) c calculate the extravector for defective repeated eigenvalue call defective(i,j,n,adf(1,n),a,b,c,d,e,px) c print *,i,j,n,ccc,qqq,adf(1,n) endif end do end do 1008 format(a,2g15.6,a,2g15.6) c OK, here's where we check whether two upgoing stress-disp vectors c are nearly parallel - we check the dotproducts of displacement components do i=4,5 do j=i+1,6 r1=0.d0 r2=0.d0 zz=z0 do k=1,3 r1=r1+zabs(ee(k,i,n))**2 r2=r2+zabs(ee(k,j,n))**2 zz=zz+ee(k,j,n)*conjg(ee(k,i,n)) end do qqq=1.d0-zabs(zz)/dsqrt(r1*r2) if(qqq.lt.tol) then ccc=cc*vbar idfct(3,n)=i idfct(4,n)=j c print 1008,'vert slownesses',xnu(i,n),' and',xnu(j,n) c we average the eigenvalues xnu(i,n)=(xnu(i,n)+xnu(j,n))/2.d0 xnu(j,n)=xnu(i,n) c calculate the extravector for defective repeated eigenvalue c as well as coefficient (adf) of linear (z-z0) term call defective(i,j,n,adf(2,n),a,b,c,d,e,px) c print *,i,j,n,ccc,qqq,adf(2,n) endif end do end do end do c now, must identify which upgoing waves in the halfspace are P,SV,SH c crud, this goes back to array ee c 3: SH is y-motion c 2: SV is (-sqrt((1/vs)**2-p_x**2),0,-p_x) ! recall that z points down c 1: P is (p_x,0,-sqrt((1/vp)**2-p_x**2) c so we branch on size of u_y, and relative sign of u_x and u_z print *,'in the halfspace:' do i=4,6 print *,'for i*k_z=',xnu(i,nlp),', the disp-stress vector is' do j=1,6 xi(j)=dimag(ee(j,i,nlp)) xr(j)=dreal(ee(j,i,nlp)) end do print 101,(xr(j),j=1,6),(xi(j),j=1,6) end do do i=4,6 ips(i-3)=3 if(zabs(ee(2,i,nlp)).lt.dsqrt(tol)) then ! 
not SH test=dreal(ee(1,i,nlp))/dreal(ee(3,i,nlp)) if(test.gt.0.d0) then ips(i-3)=2 else ips(i-3)=1 endif endif end do print *,'wave prints:',(ips(i),i=1,3) return end subroutine respget(nl,om,cc,resp) c returns surface response for a stack of anisotropic layers c incident p,sv,sh waves with freq om and phase velocity cc c iev=1 if the waves are evanescent in the top layer, iev=0 otherwise implicit real*8 (a-h,o-z) implicit integer*4 (i-n) complex*16 pp,u0,ee,z1,z0,xnu,eye,e1,e2,zla,rtm complex*16 rt,tt,rt0,trc,xl,pfac,u,co,resp,ur common/stfff/w(3,101),t(3,3),ttl(3,3),s(3,3),stl(3,3), x r(3,3),x(3),y(3) common/model/z(100),dz(100),rho(101),vp(101),vs(101),vp2(101), x vp4(101),vs2(101),vss(101) common/model2/xmu(101),xla(101),xmu2(101),xla2(101),xla4(101) common/propag/xnu(6,101),xl(6,100),pfac(6,3),u(3,6) common/defect/idfct(4,101),adf(2,101) common/rrt/rtm(6,6,100) common/mstff/qq(6,6),wr(6),wi(6),zr(6,6),zi(6,6),iv(6),fv(6) common/pstff/pp(3),u0(3),ee(6,6,101),e1(6,6),e2(6,6),zla(6) common/pstf2/co(6,101),ur(3) common/qstff/qi(6,6),xr(6),xi(6),yr(6),yi(6),ips(3) common/rstff/rt(3,3,101),tt(3,3,101),rt0(3,3),trc(3,3) dimension resp(3,3) data pi/3.14159265358979d0/,eps/1.d-6/,tol/1.d-7/ c set iev=1 c toggle to iev=0 if there is a purely propagating wave in the top layer n=1 iev=1 z1=dcmplx(1.d0,0.d0) z0=dcmplx(0.d0,0.d0) eye=dcmplx(0.d0,1.d0) rbar=5.515d3 ren=1.075190645d-3 radi=6.371d6 vbar=radi*ren con=rbar*radi*radi*ren*ren nlp=nl+1 nlm=nl-1 c first calculate vertical wavenumbers and propagating waves for each layer c an eigenvector problem was solved in prior subroutine c results stored in array ee(6,6,101) c in general, the evanescent vertical wavenumbers have nonzero real parts c complex exponential fct is used to avoid endless branching c c calculate modified R/T coefficients c first calc the propagation factors c note that for dipping fast axes the upgoing and downgoing wavenumbers are c independent, so we must calc all to be safe do n=1,nl do k=1,3 xl(k,n)=zexp(om*xnu(k,n)*dz(n)) ! downgoing xl(k+3,n)=zexp(-om*xnu(k+3,n)*dz(n)) ! 
upgoing end do end do c do i=1,6 c print 1002,xnu(i,3),xl(i,3) c end do 1002 format('i*k_z:',2g15.6,', propfac is',2g15.6) c calculate modified R/T coefficients at each interface do n=1,nl c rearrange to e1: waves approaching and e2: waves leaving an interface do k=1,3 do i=1,6 e1(i,k)=ee(i,k,n+1) e2(i,k)=ee(i,k,n) e1(i,k+3)=-ee(i,k+3,n) e2(i,k+3)=-ee(i,k+3,n+1) end do zla(k)=xl(k,n) if(n.lt.nl) then zla(k+3)=xl(k+3,n+1) else c reference the upcoming wave amplitude to the top of halfspace c therefore no propagation factor, not valid for evanescent waves in halfspace c in surface wave code this is zero, so that upgoing evanescent waves vanish zla(k+3)=1.d0 endif end do c mult the columns of e2 do k=1,6 do i=1,6 e2(i,k)=e2(i,k)*zla(k) end do end do c the possibility of defective matrices must be contemplated here c k=1,2,3 columns are downgoing in nth layer c k=4,5,6 columns are upgoing in (n+1)th layer c the vector e2(.,k1) has already been multiplied by exponential factor zla if(idfct(1,n).ne.0) then k1=idfct(1,n) k2=idfct(2,n) do i=1,6 e2(i,k2)=e2(i,k2)+adf(1,n)*dz(n)*e2(i,k1) end do endif c the sign change on dz is for upgoing waves if(idfct(3,n+1).ne.0) then k1=idfct(3,n+1) k2=idfct(4,n+1) do i=1,6 e2(i,k2)=e2(i,k2)-adf(2,n+1)*dz(n+1)*e2(i,k1) end do endif c in order to use csolve to invert e1, must separate into real/imag parts c its clumsy, but im lazy c we calc e1^{-1}\cdot e2\cdot \Gamma one column at a time do k=1,6 do i=1,6 qq(i,k)=dreal(e1(i,k)) qi(i,k)=dimag(e1(i,k)) end do end do nn=6 do k=1,6 do i=1,6 yr(i)=dreal(e2(i,k)) yi(i)=dimag(e2(i,k)) end do call csolve(nn,qq,qi,xr,xi,yr,yi) nn=-6 do i=1,6 rtm(i,k,n)=dcmplx(xr(i),xi(i)) end do end do end do c calc R_ud at the free surface c note that first two factors in Chen (20) dont collapse c mult by inv-matrix one column at a time do k=1,3 do i=1,3 rt0(i,k)=ee(i+3,k+3,1)*xl(k+3,1) s(i,k)=dreal(ee(i+3,k,1)) t(i,k)=dimag(ee(i+3,k,1)) end do end do c the possibility of defective matrices must be contemplated here c these waves are upgoing in 1st layer c the sign change on dz is for upgoing waves, and xl(k1,1)=xl(k2,1) if(idfct(3,1).ne.0) then k1=idfct(3,1) k2=idfct(4,1)-3 do i=1,3 rt0(i,k2)=rt0(i,k2)-adf(2,1)*dz(1)*ee(i+3,k1,1)*xl(k1,1) end do endif nn=3 do k=1,3 do i=1,3 yr(i)=dreal(rt0(i,k)) yi(i)=dimag(rt0(i,k)) end do call csolve(nn,s,t,xr,xi,yr,yi) nn=-3 do i=1,3 rt0(i,k)=-dcmplx(xr(i),xi(i)) end do end do c recursive calc of generalized R/T coefs: c in contrast to the surface-wave code, we start from the top layer and c iterate down to the halfspace c we also uses submatrices of generalized R/T matrix in different order do n=1,nl c first the generalized upward-transmission coef: do k=1,3 do i=1,3 trc(i,k)=z0 if(n.gt.1) then do j=1,3 trc(i,k)=trc(i,k)-rtm(i+3,j,n)*rt(j,k,n-1) end do else c use free-surface reflection matrix in top layer (interface "zero") do j=1,3 trc(i,k)=trc(i,k)-rtm(i+3,j,n)*rt0(j,k) end do endif end do trc(k,k)=trc(k,k)+z1 end do do k=1,3 do i=1,3 s(i,k)=dreal(trc(i,k)) t(i,k)=dimag(trc(i,k)) end do end do nn=3 do k=1,3 do i=1,3 yr(i)=dreal(rtm(i+3,k+3,n)) yi(i)=dimag(rtm(i+3,k+3,n)) end do call csolve(nn,s,t,xr,xi,yr,yi) nn=-3 do i=1,3 tt(i,k,n)=dcmplx(xr(i),xi(i)) end do end do c next the generalized reflection coef: do k=1,3 do i=1,3 trc(i,k)=z0 if(n.gt.1) then do j=1,3 trc(i,k)=trc(i,k)+rt(i,j,n-1)*tt(j,k,n) end do else c use free-surface reflection matrix in top layer (interface "zero") do j=1,3 trc(i,k)=trc(i,k)+rt0(i,j)*tt(j,k,n) end do endif end do end do do k=1,3 do i=1,3 rt(i,k,n)=rtm(i,k+3,n) do 
j=1,3 rt(i,k,n)=rt(i,k,n)+rtm(i,j,n)*trc(j,k) end do end do end do end do 1001 format(6f14.6) c print *,'free-surface reflection' c print 1001,((rt0(i,j),j=1,3),i=1,3) c do n=1,nl c print *,'interface',n c print 1001,((rt(i,j,n),j=1,3),i=1,3) c print 1001,((tt(i,j,n),j=1,3),i=1,3) c end do c using the p,sv,sh identification, we propagate upward to the surface, c calculate the wave coefs in the top layer, then the particle displacement do iup=1,3 do i=1,3 co(i+3,nlp)=z0 end do co(iup+3,nlp)=z1 c from upgoing coefs in the n+1 layer, calculate c upgoing coefs in the nth layer, downgoing coefs in the n+1 layer do n=nl,1,-1 do i=1,3 co(i+3,n)=z0 co(i,n+1)=z0 do j=1,3 co(i+3,n)=co(i+3,n)+tt(i,j,n)*co(j+3,n+1) co(i,n+1)=co(i,n+1)+rt(i,j,n)*co(j+3,n+1) end do end do end do c then downgoing coefs in the top layer: do i=1,3 co(i,1)=z0 do j=1,3 co(i,1)=co(i,1)+rt0(i,j)*co(j+3,1) end do end do c print *,'upgoing coefs' c print 1001,((co(j+3,n),j=1,3),n=1,nlp) c print *,'downgoing coefs' c print 1001,((co(j,n),j=1,3),n=1,nlp) c calc the surface displacement h1=0.d0 h2=z(1) do i=1,3 ur(i)=z0 do k=1,3 ur(i)=ur(i)+co(k,1)*ee(i,k,1) x +co(k+3,1)*ee(i,k+3,1)*(zexp(om*xnu(k+3,1)*(-h2))) end do c check for the xtra terms associated with defective matrices if(idfct(1,1).ne.0) then ii=idfct(1,1) jj=idfct(2,1) ur(i)=ur(i)+co(jj,1)*adf(1,1)*ee(i,ii,1)*(-h1) endif if(idfct(3,1).ne.0) then ii=idfct(3,1) jj=idfct(4,1) ur(i)=ur(i) x +co(jj,1)*adf(2,1)*ee(i,ii,1)*(-h2)*(zexp(om*xnu(ii,1)*(-h2))) endif end do c copy the surface displacement into the response matrix do i=1,3 resp(i,ips(iup))=ur(i) end do end do return end subroutine defective(i,j,n,adf,a,b,c,d,e,px) c kluge for dealing with nearly defective propagator matrices c in which the eigenvectors, c which represent the particle motion of upgoing and downgoing waves c become nearly parallel. 
c in this case the solution for system of ODEs is c a_1 \bf_1 e^xnu*(z-z0) + a_2*(\bf_2 + adf*(z-z0)*\bf_1)e^xnu*(z-z0) c implicit real*8 (a-h,o-z) implicit integer*4 (i-n) complex*16 pp,u0,ee,z1,z0,znu,xnu,e1,e2,zla,xl,u,pfac,eye complex*16 zq1,zq2,u1,u2,zq3,xee common/stfff/w(3,101),t(3,3),ttl(3,3),s(3,3),stl(3,3), x r(3,3),x(3),y(3) common/propag/xnu(6,101),xl(6,100),pfac(6,3),u(3,6) common/defect1/zq1(3,3),zq2(3,3),u1(3),u2(3),zq3(2,2),xee(3) common/defect2/edr(6),edi(6),qdr(5,5),qdi(5,5),ydr(6),ydi(6) common/defect3/q1r(3,3),q1i(3,3),q2r(3,3),q2i(3,3),fv2(3),fv3(3) common/mstff/qq(6,6),wr(6),wi(6),zr(6,6),zi(6,6),iv(6),fv(6) common/pstff/pp(3),u0(3),ee(6,6,101),e1(6,6),e2(6,6),zla(6) common/qstff/qi(6,6),xr(6),xi(6),yr(6),yi(6),ips(3) z1=dcmplx(1.d0,0.d0) z0=dcmplx(0.d0,0.d0) eye=dcmplx(0.d0,1.d0) c for the extravector, need to solve system of equations c based on original 6x6 Q matrix c the plane-wave solutions generalize to the form c u0*e^{i*nu*(z-z0)} and u1*e^{i*nu*(z-z0)} + adf* u0*(z-z0)*e^{i*nu*(z-z0)} c u1 is the solution to c (\bTtil + nu*\bStil + nu^2*\bI).u1=i*adf*(\bStil + 2*nu*\bI).u0 c in practice, we absorb the adf factor into u1, then normalize c (\bTtil + nu*\bStil + nu^2*\bI).(u1/adf)=i*(\bStil + 2*nu*\bI).u0 c since nu is the known eigenvalue of u0, the solution is easier c form the matrices on either side znu=-eye*xnu(i,n) do ii=1,3 do jj=1,3 zq1(jj,ii)=dcmplx(ttl(jj,ii),0.d0)+znu*stl(jj,ii) zq2(jj,ii)=dcmplx(stl(jj,ii),0.d0) end do zq1(ii,ii)=zq1(ii,ii)+znu*znu zq2(ii,ii)=zq2(ii,ii)+2.d0*znu end do c we wish to find the eigenvector of the near-defective matrix c in the region where its eigenvectors are numerically unstable c we explicitly calculate the eigenvector with smallest right-eigenvalue of c (\bTtil + nu*\bStil + nu^2*\bI)=zq1 c copy into real, imag matrices do ii=1,3 do jj=1,3 q1r(jj,ii)=dreal(zq1(jj,ii)) q1i(jj,ii)=dimag(zq1(jj,ii)) end do end do c into eispack call cbal(3,3,q1r,q1i,low,igh,fv) call corth(3,3,low,igh,q1r,q1i,fv2,fv3) call comqr2(3,3,low,igh,fv2,fv3,q1r,q1i,wr,wi,q2r,q2i,ierr) if(ierr.ne.0) go to 400 call cbabk2(3,3,low,igh,fv,3,q2r,q2i) amn=wr(1)**2+wi(1)**2 ij=1 do ii=2,3 amm=wr(ii)**2+wi(ii)**2 if(amm.lt.amn) then ij=ii amn=amm endif end do sum=0.d0 do ii=1,3 u0(ii)=dcmplx(q2r(ii,ij),q2i(ii,ij)) sum=sum+zabs(u0(ii))**2 end do sum=dsqrt(sum) do ii=1,3 u0(ii)=u0(ii)/sum end do c assemble the ith stress-displacement vector c calculate the traction components, with i removed pp(1)=dcmplx(px,0.d0) pp(2)=z0 pp(3)=znu pu=z0 pw=z0 uw=z0 abcde=a-b+c-2.d0*d+2.d0*e bce=b-4.d0*c-4.d0*e de=d-e do ii=1,3 pu=pu+pp(ii)*u0(ii) pw=pw+pp(ii)*w(ii,n) uw=uw+u0(ii)*w(ii,n) end do do ii=1,3 ee(ii,i,n)= u0(ii) ee(ii+3,i,n)=w(ii,n)*(pu*w(3,n)*bce+8.d0*pw*uw*w(3,n)*c x +2.d0*(pw*u0(3)+uw*pp(3))*e) ee(ii+3,i,n)=ee(ii+3,i,n)+pp(ii)*(u0(3)*de+2.d0*uw*w(3,n)*e) ee(ii+3,i,n)=ee(ii+3,i,n)+u0(ii)*(pp(3)*de+2.d0*pw*w(3,n)*e) end do ee(6,i,n)=ee(6,i,n)+pu*abcde+pw*uw*bce c almost lastly, mult traction by i do ii=1,3 ee(ii+3,i,n)=eye*ee(ii+3,i,n) end do c extract u0 from ee(*,i,n) use it to calculate the additional traction terms c and store in ee(*,j,n) c additional traction terms involve gradient of (z-z0) c so can be calculated from standard formulas with \bk=zhat c we dont multiply by i pp(1)=z0 pp(2)=z0 pp(3)=z1 pu=z0 pw=z0 uw=z0 abcde=a-b+c-2.d0*d+2.d0*e bce=b-4.d0*c-4.d0*e de=d-e do ii=1,3 u0(ii)=ee(ii,i,n) pu=pu+pp(ii)*u0(ii) pw=pw+pp(ii)*w(ii,n) uw=uw+u0(ii)*w(ii,n) end do do ii=1,3 xee(ii)=w(ii,n)*(pu*w(3,n)*bce+8.d0*pw*uw*w(3,n)*c x 
+2.d0*(pw*u0(3)+uw*pp(3))*e) xee(ii)=xee(ii)+pp(ii)*(u0(3)*de+2.d0*uw*w(3,n)*e) xee(ii)=xee(ii)+u0(ii)*(pp(3)*de+2.d0*pw*w(3,n)*e) end do xee(3)=xee(3)+pu*abcde+pw*uw*bce c extract u0 from ee(*,i,n), mult by i*(\bStil + 2*nu*\bI), replace in u0 do ii=1,3 u0(ii)=z0 do jj=1,3 u0(ii)=u0(ii)+zq2(ii,jj)*ee(jj,i,n) end do u0(ii)=eye*u0(ii) end do 1002 format(3(2g14.6,3x)) c for znu NOT an eigenvalue, c but rather the average of closely-space eigenvalues c in this case, zq1 is nonsingular, and we just solve for u1 do ii=1,3 yr(ii)=dreal(u0(ii)) yi(ii)=dimag(u0(ii)) do jj=1,3 q1r(jj,ii)=dreal(zq1(jj,ii)) q1i(jj,ii)=dimag(zq1(jj,ii)) end do end do call csolve(3,q1r,q1i,xr,xi,yr,yi) do ii=1,3 u1(ii)=dcmplx(xr(ii),xi(ii)) end do c End, different tactic c c normalize sum=0.d0 do ii=1,3 sum=sum+zabs(u1(ii))**2 end do sum=dsqrt(sum) do ii=1,3 u1(ii)=u1(ii)/sum end do c adf is the normalization constant adf=1.d0/sum c calculate the traction c and place the new stress-displacement vector in column j c pp is the wavenumber vector, and first two components are already in place pp(1)=dcmplx(px,0.d0) pp(2)=z0 pp(3)=znu pu=z0 pw=z0 uw=z0 abcde=a-b+c-2.d0*d+2.d0*e bce=b-4.d0*c-4.d0*e de=d-e do ii=1,3 pu=pu+pp(ii)*u1(ii) pw=pw+pp(ii)*w(ii,n) uw=uw+u1(ii)*w(ii,n) end do do ii=1,3 ee(ii,j,n)=u1(ii) ee(ii+3,j,n)=w(ii,n)*(pu*w(3,n)*bce+8.d0*pw*uw*w(3,n)*c x +2.d0*(pw*u1(3)+uw*pp(3))*e) ee(ii+3,j,n)=ee(ii+3,j,n)+pp(ii)*(u1(3)*de+2.d0*uw*w(3,n)*e) ee(ii+3,j,n)=ee(ii+3,j,n)+u1(ii)*(pp(3)*de+2.d0*pw*w(3,n)*e) end do ee(6,j,n)=ee(6,j,n)+pu*abcde+pw*uw*bce c almost lastly, mult traction by i c and add extra traction from (z-z0) term (not mult by i) c TEST - mult xee by zero, see if it is important --- it IS important do ii=1,3 ee(ii+3,j,n)=eye*ee(ii+3,j,n)+adf*xee(ii) end do return 400 print *,'eispack error' stop end
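The back-azimuth rotation that `matget` applies to the symmetry axis w-hat (quoted in the comments above) is easy to sanity-check outside Fortran; a minimal Python sketch of ours, mirroring the `caz`/`saz` lines:

```python
import numpy as np

# matget rotates w-hat by -az about the vertical (z) axis:
#   caz = cos(az); saz = -sin(az)   ! sin(-az)
#   ww1 = w1*caz - w2*saz ; ww2 = w1*saz + w2*caz ; ww3 = w3
def rotate_what(w, az_deg):
    az = np.radians(az_deg)
    caz, saz = np.cos(az), -np.sin(az)   # saz holds sin(-az), as in the Fortran
    w1, w2, w3 = w
    return np.array([w1*caz - w2*saz, w1*saz + w2*caz, w3])

w = np.array([1.0, 0.0, 0.0])      # horizontal symmetry axis along x
print(rotate_what(w, 90.0))        # -> approx [0, -1, 0]: a rotation by -90 degrees
```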
From Undecidability.L Require Export Util.L_facts. (* **** Closure calculus *) Inductive Comp : Type := | CompVar (x:nat) | CompApp (s : Comp) (t : Comp) : Comp | CompClos (s : term) (A : list Comp) : Comp. Coercion CompApp : Comp >-> Funclass. Inductive lamComp : Comp -> Prop := lambdaComp s A: lamComp (CompClos (lam s) A). Inductive validComp : Comp -> Prop := | validCompApp s t : validComp s -> validComp t -> validComp (s t) | validCompClos (s : term) (A : list Comp) : (forall a, a el A -> validComp a) -> (forall a, a el A -> lamComp a) -> bound (length A) s -> validComp (CompClos s A). Hint Constructors Comp lamComp validComp : core. Definition validEnv A := forall a, a el A -> validComp a (*/\ lamComp a)*). Definition validEnv' A := forall a, a el A -> closed a. Hint Unfold validEnv validEnv' : core. Lemma validEnv_cons a A : validEnv (a::A) <-> ((validComp a) /\ validEnv A). Proof. unfold validEnv. simpl. split. auto. intros [? ?] a' [eq|el']; subst;auto. Qed. Lemma validEnv'_cons a A : validEnv' (a::A) <-> (closed a /\ validEnv' A). Proof. unfold validEnv'. simpl. intuition. now subst. Qed. Ltac inv_validComp := match goal with | H : validComp (CompApp _ _) |- _ => inv H | H : validComp (CompClos _ _) |- _ => inv H end. Definition Comp_ind_deep' (P : Comp -> Prop) (Pl : list Comp -> Prop) (IHVar : forall x : nat, P (CompVar x)) (IHApp : forall s : Comp, P s -> forall t : Comp, P t -> P (s t)) (IHClos : forall (s : term) (A : list Comp), Pl A -> P (CompClos s A)) (IHNil : Pl nil) (IHCons : forall (a:Comp) (A : list Comp), P a -> Pl A -> Pl (a::A)) (x:Comp) : P x := (fix f c : P c:= match c with | CompVar x => IHVar x | CompApp s t => IHApp s (f s) t (f t) | CompClos s A => IHClos s A ((fix g A : Pl A := match A with [] => IHNil | a::A => IHCons a A (f a) (g A) end) A) end) x . Definition Comp_ind_deep (P : Comp -> Prop) (IHVar : forall x : nat, P (CompVar x)) (IHApp : forall s : Comp, P s -> forall t : Comp, P t -> P (s t)) (IHClos : forall (s : term) (A : list Comp), (forall a, a el A -> P a) -> P (CompClos s A)) : forall x, P x. Proof. apply Comp_ind_deep' with (Pl:=fun A => (forall a, a el A -> P a));auto. intros. inv H1;auto. Qed. (* Lemma subst_comm s x1 u1 x2 u2 : closed u1 -> closed u2 -> x1 <> x2 -> subst (subst s x1 u1) x2 u2 = subst (subst s x2 u2) x1 u1. Proof with try (congruence||auto). intros cl1 cl2 neq. revert x1 x2 neq;induction s;simpl;intros. -decide (n=x1); decide (n=x2); try rewrite cl1;try rewrite cl2;subst;simpl... +decide (x1=x1)... +decide (x2=x2)... +decide (n=x2);decide (n=x1)... -rewrite IHs1,IHs2... -rewrite IHs... Qed. *) (* Lemma subst_twice s x u1 u2 : closed u1 -> subst (subst s x u1) x u2 = subst s x u1. Proof with try (congruence||auto). intros cl. revert x;induction s;simpl;intros. -decide (n=x);subst. now rewrite cl. simpl. decide (n=x);subst;congruence. -rewrite IHs1,IHs2... -rewrite IHs... Qed.*) (* Lemma subst_free a s k u y: closed a -> subst s k u = s -> subst (subst s y a) k u = subst s y a. Proof. intros ca eq. revert y k u eq. induction s;simpl;intros. -decide (n=y). now rewrite ca. apply eq. -simpl in eq. inversion eq. rewrite H0, H1, IHs1, IHs2;auto. -f_equal. simpl in eq. inversion eq. rewrite !H0. now rewrite IHs. Qed.*) (* Lemma bound_ge k s m: bound k s -> m >= k -> bound m s. Proof. intros. decide (m=k);subst. -auto. -eapply bound_gt;eauto. lia. Qed. *) (* Lemma bound_subst' x s a y: bound x s -> closed a -> bound x (subst s y a). Proof. intros dcl cl. revert y. induction dcl;simpl;intros. -decide (n=y);subst. +eapply bound_ge. 
now apply closed_dcl. lia. +now constructor. -now constructor. -now constructor. Qed. *) (* Fixpoint substList' (s:term) (x:nat) (A: list term): term := match A with | nil => s | a::A => substList' (subst s x a) (S x) A end.*) Fixpoint substList (s:term) (x:nat) (A: list term): term := match s with | var n => if Dec (x>n) then var n else nth (n-x) A (var n) | app s t => app (substList s x A) (substList t x A) | lam s => lam (substList s (S x) A) end. Fixpoint deClos (s:Comp) : term := match s with | CompVar x => var x | CompApp s t => app (deClos s) (deClos t) | CompClos s A => substList s 0 (map deClos A) end. (* Reduction *) Reserved Notation "s '>[(' l ')]' t" (at level 50, format "s '>[(' l ')]' t"). Declare Scope LClos. Inductive CPow : nat -> Comp -> Comp -> Prop := | CPowRefl (s:Comp) : s >[(0)] s | CPowTrans (s t u:Comp) i j : s >[(i)] t -> t >[(j)] u -> s >[(i+j)] u | CPowAppL (s s' t :Comp) l: s >[(l)] s' -> (s t) >[(l)] (s' t) | CPowAppR (s t t':Comp) l: t >[(l)] t' -> (s t) >[(l)] (s t') | CPowApp (s t:term) (A:list Comp) : CompClos (app s t) A >[(0)] (CompClos s A) (CompClos t A) | CPowVar (x:nat) (A:list Comp): CompClos (var x) A >[(0)] nth x A (CompVar x) | CPowVal (s t:term) (A B:list Comp): lambda t -> (CompClos (lam s) A) (CompClos t B) >[(1)] (CompClos s ((CompClos t B)::A)) where "s '>[(' l ')]' t" := (CPow l s t) : LClos. Open Scope LClos. Ltac inv_CompStep := match goal with | H : (CompApp _ _) >(_) CompClos _ _ |- _ => inv H | H : (CompClos _ _) >(_) CompApp _ _ |- _ => inv H end. Hint Constructors CPow : core. Lemma CPow_congL n s s' t : s >[(n)] s' -> s t >[(n)] s' t. Proof. induction 1;eauto. Qed. Lemma CPow_congR n (s t t' : Comp) : t >[(n)] t' -> s t >[(n)] s t'. Proof. induction 1;eauto. Qed. Lemma CPow_trans s t u i j k : s >[(i)] t -> t >[(j)] u -> i + j = k -> s >[(k)] u. Proof. intros. subst. eauto. Qed. Instance CPow'_App_properR n: Proper (eq ==> (CPow n) ==> (CPow n)) CompApp. Proof. intros ? ? -> ? ? ?. now eapply CPow_congR. Qed. (* Definition CStar s t:= exists k , CPow k s t . Notation "s '>[]*' t" := (CStar s t) (at level 50) : L.los. Instance rStar'_PreOrder : PreOrder CStar. Proof. constructor; hnf. -now eexists. -eapply star_trans. Qed. Lemma rStar'_trans_l s s' t : s >[]* s' -> s t >[]* s' t. Proof. induction 1; eauto using star. Qed. Lemma rStar'_trans_r (s t t' : Comp): t >[]* t' -> s t >[]* s t'. Proof. induction 1; eauto using star. Qed. Instance rStar'_App_proper : Proper ((star CStep) ==> (star CStep) ==> (star CStep)) CompApp. Proof. cbv. intros s s' A t t' B. etransitivity. apply rStar'_trans_l, A. apply rStar'_trans_r, B. Qed. Instance CStep_star_subrelation : subrelation CStep (star CStep). Proof. intros s t st. eauto using star. Qed. *) (* Properties of step-indexed version *) (* Notation "x '>[]^' n y" := (ARS.pow CStep n x y) (at level 50) : L.cope. Lemma CStep_Lam n: forall (s t u:Comp), lamComp u -> (ARS.pow CStep n (s t) u) -> exists m1 m2 (s' t':Comp),(m1 < n /\ ARS.pow CStep m1 s s' /\ lamComp s') /\ (m2 < n /\ ARS.pow CStep m2 t t' /\ lamComp t'). Proof with repeat intuition;try now reflexivity. induction n;intros ? ? ? lu R. -inv R. inv lu. -destruct R as [u' [R R']]. inv R. +apply IHn in R'... decompose [ex and] R'. exists (S x), x0, x1, x2... change (S x) with (1+x). apply pow_add;simpl. exists s';intuition. eexists;simpl... +apply IHn in R'... decompose [ex and] R'. exists x, (S x0), x1, x2... change (S x0) with (1+x0). apply pow_add;simpl. exists t';intuition. eexists;simpl... +inv H2. eexists 0,0,_,_... Qed. 
Lemma CStep_Lam' (s t u:Comp) : lamComp u -> (s t) >[]* u -> exists (s' t':Comp),( s >[]* s' /\ lamComp s') /\ (t >[]* t' /\ lamComp t'). Proof with repeat intuition;try now reflexivity. intros lu R. apply star_pow in R. destruct R as [n R]. revert s t u lu R. induction n;intros. -inv R. inv lu. -destruct R as [u' [R R']]. inv R. +apply IHn in R'... decompose [ex and] R'. exists x, x0... eauto using star. +apply IHn in R'... decompose [ex and] R'. exists x, x0... eauto using star. +inv H2. eexists _,_... Qed. *) Lemma substList_bound x s A: bound x s -> substList s x A = s. Proof. revert x;induction s;intros;simpl. -inv H. decide (x>n);tauto. -inv H. now rewrite IHs1,IHs2. -inv H. rewrite IHs;auto. Qed. Lemma substList_closed s A x: closed s -> substList s x A = s. Proof. intros. apply substList_bound. destruct x. now apply closed_dcl. eapply bound_gt;[rewrite <- closed_dcl|];auto. lia. Qed. Lemma substList_var' y x A: y >= x -> substList (var y) x A = nth (y-x) A (var y). Proof. intros ge. simpl. decide (x>y). lia. auto. Qed. Lemma substList_var y A: substList (var y) 0 A = nth y A (var y). Proof. rewrite substList_var'. f_equal. lia. lia. Qed. Lemma substList_is_bound y A s: validEnv' A -> bound (y+|A|) (s) -> bound y (substList s y A). Proof. intros vA. revert y. induction s;intros y dA. -apply closed_k_bound. intros k u ge. simpl. decide (y>n). +simpl. destruct (Nat.eqb_spec n k). lia. auto. +inv dA. assert (n-y<|A|) by lia. now rewrite (vA _ (nth_In A #n H)). -inv dA. simpl. constructor;auto. -simpl. constructor. apply IHs. now inv dA. Qed. Lemma substList_closed' A s: validEnv' A -> bound (|A|) (s) -> closed (substList s 0 A). Proof. intros. rewrite closed_dcl. apply substList_is_bound;auto. Qed. Lemma deClos_valComp a: validComp a -> closed (deClos a). Proof. intros va. induction va;simpl. -now apply app_closed. -apply substList_closed'. intros a ain. rewrite in_map_iff in ain. destruct ain as [a' [eq a'in]];subst. now apply H0. now rewrite map_length. Qed. Lemma deClos_validEnv A : validEnv A -> validEnv' (map deClos A). Proof. intros vA. induction A;simpl. -unfold validEnv'. simpl. tauto. -rewrite validEnv'_cons. apply validEnv_cons in vA as [ca cA]. split;auto. apply deClos_valComp; auto. Qed. Hint Resolve deClos_validEnv : core. Lemma subst_substList x s t A: validEnv' A -> subst (substList s (S x) A) x t = substList s x (t::A). Proof. revert x;induction s;simpl;intros x cl. -decide (S x > n);simpl. decide (x>n); destruct (Nat.eqb_spec n x);try lia;try tauto. subst. now rewrite minus_diag. decide (x>n). lia. destruct (n-x) eqn: eq. lia. assert (n2=n-S x) by lia. subst n2. destruct (nth_in_or_default (n-S x) A #n). + apply cl in i. now rewrite i. +rewrite e. simpl. destruct (Nat.eqb_spec n x). lia. auto. -now rewrite IHs1,IHs2. -now rewrite IHs. Qed. Lemma validComp_step s t l: validComp s -> s >[(l)] t -> validComp t. Proof with repeat (subst || firstorder). intros vs R. induction R;repeat inv_validComp... -inv H3. constructor... -inv H3. apply H1. apply nth_In. lia. -inv H8. constructor;auto;intros a [?|?];subst;auto. Qed. Hint Resolve validComp_step : core. (* Lemma deClos_correct''' s t : validComp s -> s >(0) t -> deClos s = deClos t. Proof with repeat (cbn in * || eauto || congruence || lia || subst). intros cs R. remember 0 as n eqn:eq in R. revert eq. induction R;intros ?;repeat inv_validComp... -destruct i... rewrite IHR1,IHR2... -destruct IHR... -rewrite IHR... -simpl. rewrite <- minus_n_O. rewrite <-map_nth with (f:=deClos)... Qed. 
Lemma deClos_correct'' s t : validComp s -> s >(1) t -> deClos s = deClos t \/ deClos s ≻ deClos t. Proof with repeat (cbn in * || eauto || congruence || lia || subst). intros cs R. remember 1 as n eqn:eq in R. revert eq. induction R;intros ?;repeat inv_validComp... -destruct i... +destruct IHR2... apply deClos_correct''' in R1... left... aply deClos_correct''' in R1... right... right... split;eauto. destruct IHR. auto. left... right... -destruct IHR. auto. left... right... -left... -left. simpl. rewrite <- minus_n_O. rewrite <-map_nth with (f:=deClos)... -right. inv H. simpl. rewrite <-subst_substList... Qed.*) Lemma deClos_correct l s t : validComp s -> s >[(l)] t -> deClos s >(l) deClos t. Proof with repeat (cbn in * || eauto 10 using star || congruence || lia || subst). intros cs R. induction R... -eapply pow_trans;eauto. -inv cs;apply pow_step_congL... -inv cs;apply pow_step_congR... -rewrite <- minus_n_O. rewrite <-map_nth with (f:=deClos)... -inv H. inv cs. inv H1. eexists;split... rewrite <- subst_substList... Qed. (* (* relation that tries to capture that two closures 'reduce' to one another *) Reserved Notation "s '=[]>' t" (at level 70). Inductive reduceC : Comp -> Comp -> Prop := | redC s t: deClos s >* deClos t -> s =[]> t where "s '=[]>' t" := (reduceC s t). Hint Constructors reduceC. Lemma reduceC_if s t : s =[]> t -> deClos s >* deClos t. Proof. now inversion 1. Qed. (* ** Properties of the extended reduction relation *) Instance reduceC_PreOrder : PreOrder reduceC. Proof. constructor;repeat intros;constructor. -reflexivity. -inv H. inv H0. now rewrite H1. Qed. Instance reduceC_App_proper : Proper (reduceC ==> reduceC ==> reduceC) CompApp. Proof. cbv. intros s s' A t t' B. constructor. simpl. apply star_step_app_proper. -now inv A. -now inv B. Qed. Lemma CStep_reduceC l s t: validComp s -> s >(l) t -> s =[]> t. Proof. intros. constructor. eapply deClos_correct;eauto. Qed. (* relation that tries to capture that two closures 'are the same' *) Reserved Notation "s '=[]=' t" (at level 70). Inductive equivC : Comp -> Comp -> Prop := | eqC s t: deClos s == deClos t -> s =[]= t where "s '=[]=' t" := (equivC s t). Hint Constructors equivC. Lemma equivC_if s t : s =[]= t -> deClos s == deClos t. Proof. now inversion 1. Qed. (* ** Properties of the equivalence relation *) Instance equivC_Equivalence : Equivalence equivC. Proof. constructor;repeat intros;constructor. -reflexivity. -inv H. now rewrite H0. -inv H0. inv H. now rewrite H0. Qed. Instance equivC_App_proper : Proper (equivC ==> equivC ==> equivC) CompApp. Proof. cbv. intros s s' A t t' B. constructor. simpl. apply equiv_app_proper. -now inv A. -now inv B. Qed. Lemma CStep_equivC s t: validComp s -> s >[]> t -> s =[]= t. intros vs R. induction R;repeat inv_validComp. -now rewrite IHR. -now rewrite IHR. -constructor. reflexivity. -constructor. simpl. rewrite <- minus_n_O. rewrite <-map_nth with (f:= deClos). reflexivity. -constructor. rewrite deClos_correct'. reflexivity. auto. auto. Qed. Lemma starC_equivC s t : validComp s -> s >[]* t -> s =[]= t. Proof. intros vs R. induction R. -reflexivity. -rewrite <-IHR. +eauto using CStep_equivC. +eauto using validComp_step. Qed. *) Lemma substList_nil s x: substList s x [] = s. Proof. revert x. induction s;intros;simpl. -decide (x>n). reflexivity. now destruct(n-x). -congruence. -congruence. Qed. (* Lemma equivC_deClos s : s =[]> CompClos (deClos s) []. Proof. constructor. simpl. induction s;simpl. -now destruct x. -rewrite IHs1 at 1. rewrite IHs2 at 1. reflexivity. 
-now rewrite substList_nil. Qed. *) (* Goal uniform_confluent CStep. Proof with try (congruence||(now (subst;tauto))||(now (right;eauto))||(now (right;eauto;eexists;eauto))). intros s. induction s;intros. -inv H. -inv H;inv H0... +destruct (IHs1 _ _ H4 H3) as [?|[? [? ?]]]... +destruct (IHs2 _ _ H4 H3) as [?|[? [? ?]]]... +inv H4; now inv H3. +inv H3; now inv H4. -inv H; inv H0... Qed.*) (* Lemma lamComp_noStep l s t : lamComp s -> ~ s>(S l)t. Proof. intros H R. remember (S l). revert Heqn. revert H. induction R;intros;try congruence. destruct i. inv H. inv R.lia. . Qed. *) Lemma validComp_closed s: closed s -> validComp (CompClos s []). Proof. intros cs. constructor;simpl;try tauto. now apply closed_dcl. Qed. (* Lemma lamComp_star s t : lamComp s -> s >[]* t -> s = t. Proof. intros H R. induction R. auto. now apply lamComp_noStep in H0. Qed. Lemma validComp_star s t: validComp s -> s >[]* t -> validComp t. Proof. intros vs R. induction R; eauto using validComp_step. Qed. *) (* Lemma deClos_lam p s: (λ s) = deClos p -> exists t, lamComp t /\ deClos t = (lam s) /\ p >[]* t. Proof. revert s. apply Comp_ind_deep with (x:=p);clear p;simpl. -congruence. -congruence. -intros p A IH s eq. destruct p; simpl in eq. +rewrite <- minus_n_O in eq. change (var n) with (deClos (CompVar n)) in eq. rewrite map_nth in eq. apply IH in eq as [t [? [? R]]]. exists t;repeat split;auto. now rewrite CStepVar. destruct (nth_in_or_default n A (CompVar n)). *auto. *rewrite e in eq. simpl in eq. congruence. +inv eq. +exists (CompClos (lam p) A). simpl. repeat split;auto. reflexivity. Qed. Fixpoint normComp' s A:= match s with | app s t => (normComp' s A) (normComp' t A) | var x => CompClos (var x) A (*nth x A (CompVar x)*) | lam s => CompClos (lam s) A end. Fixpoint normComp s := match s with | CompApp s t => (normComp s) (normComp t) | CompClos s A => normComp' s A | s => s end. Lemma normComp'_deClos s A: deClos (CompClos s A) = deClos (normComp' s A). Proof. induction s;simpl. -rewrite <- minus_n_O. reflexivity. -simpl in *. congruence. -simpl in *. congruence. Qed. Lemma normComp_deClos s: deClos s = deClos (normComp s). Proof. induction s;simpl. -auto. -congruence. -rewrite <- normComp'_deClos. reflexivity. Qed. Lemma normComp'_star s A: CompClos s A >[]* normComp' s A. Proof. induction s;simpl;eauto using star. -rewrite CStepApp. now rewrite IHs1,IHs2. Qed. Lemma normComp_star s: s >[]* normComp s. Proof. induction s;simpl. -reflexivity. -now rewrite <- IHs1,<-IHs2. -apply normComp'_star. Qed. Lemma normComp'_idem s A:normComp (normComp' s A)=normComp' s A. Proof. induction s;simpl; congruence. Qed. Lemma normComp_idem s: normComp (normComp s)=normComp s. Proof. induction s;simpl. -reflexivity. -congruence. -apply normComp'_idem. Qed. Lemma normComp'_valid s A: validComp (CompClos s A) -> validComp (normComp' s A). Proof. intros vA. induction s;simpl. -auto. -inv vA. inv H3. auto. -auto. Qed. Lemma normComp_valid s: validComp s -> validComp (normComp s). Proof. intros vs. induction s;simpl. -auto. -inv vs. auto. -apply normComp'_valid. auto. Qed. Lemma CompStep_correct2' s t : normComp s = s -> validComp s -> deClos s ≻ t -> exists t', t = deClos t' /\ s >[]* t'. Proof. intros nc vs. revert t. induction vs as [s1 s2|]; intros t R. -simpl in R. inv R;simpl in nc. +destruct (deClos_lam H0) as [t'[lt' [eq R]]]. destruct (deClos_lam H1) as [u [lu [equ Ru]]]. inv lt'. exists (CompClos s0 (u::A)). simpl;split. *rewrite equ. rewrite <-subst_substList. simpl in eq. congruence. apply deClos_validEnv. 
apply validComp_star in R;auto. inv R. auto. *rewrite R, Ru. inv lu. rewrite <- CStepVal. reflexivity. auto. +apply IHvs2 in H2 as [u [? R]]. exists (s1 u). split; simpl. congruence. now rewrite R. congruence. +apply IHvs1 in H2 as [u [? R]]. exists (u s2). split; simpl. congruence. now rewrite R. congruence. -destruct s;simpl in nc. +simpl in R. rewrite <- minus_n_O in R. change (var n) with (deClos (CompVar n)) in R. rewrite map_nth in R. apply H0 in R. destruct R as [t' [? ?]]. *eexists. split. eauto. now rewrite CStepVar. *apply nth_In. now inv H2. *destruct (nth_in_or_default n A (CompVar n)). apply H1 in i. inv i. now simpl. rewrite e. reflexivity. +inv nc. +simpl in R. inv R. Qed. Lemma CompStep_correct2 s t : validComp s -> deClos s ≻ t -> exists t', t = deClos t' /\ s >[]* t'. Proof. intros vs R. rewrite normComp_deClos in R. destruct (CompStep_correct2' (normComp_idem s) (normComp_valid vs) R) as [t' [eq R']]. exists t'. split. auto. now rewrite normComp_star. Qed. Close Scope L.los.*)
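The substList/deClos pair in the development above is the heart of the closure calculus: substList s x A replaces de Bruijn indices x, x+1, ... in s by the terms in A, and deClos flattens a closure by substituting its environment. A small Python model of substList, assuming (as validEnv' requires) that every term in A is closed, so no lifting is needed under binders:

# Terms are tuples: ('var', n) | ('app', s, t) | ('lam', s).
def subst_list(s, x, A):
    tag = s[0]
    if tag == 'var':
        n = s[1]
        if x > n:
            return s                                   # bound below the cutoff
        return A[n - x] if n - x < len(A) else s       # nth (n-x) A, default (var n)
    if tag == 'app':
        return ('app', subst_list(s[1], x, A), subst_list(s[2], x, A))
    return ('lam', subst_list(s[1], x + 1, A))         # raise the cutoff under a binder

# Example: subst_list(('lam', ('app', ('var', 0), ('var', 1))), 0, [('lam', ('var', 0))])
# yields ('lam', ('app', ('var', 0), ('lam', ('var', 0)))).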
#ifndef QUBUS_QTL_KERNEL_HELPERS_HPP
#define QUBUS_QTL_KERNEL_HELPERS_HPP

#include <boost/hana/for_each.hpp>
#include <boost/hana/functional/apply.hpp>
#include <boost/hana/range.hpp>
#include <boost/hana/transform.hpp>
#include <boost/hana/tuple.hpp>
#include <boost/hana/type.hpp>
#include <boost/hana/unpack.hpp>

#include <qubus/util/function_traits.hpp>

#include <functional>
#include <type_traits>
#include <utility>

namespace qubus
{
namespace qtl
{
template <typename Kernel>
struct get_kernel_arg_type_t
{
    template <typename Index>
    constexpr auto operator()(Index index) const
    {
        return boost::hana::type_c<util::arg_type<Kernel, Index::value>>;
    }
};

template <typename Kernel>
constexpr auto get_kernel_arg_type = get_kernel_arg_type_t<Kernel>{};

struct instantiate_t
{
    template <typename Type>
    auto operator()(Type type) const
    {
        using value_type = typename Type::type;

        return value_type{};
    }
};

constexpr auto instantiate = instantiate_t{};

template <typename Kernel>
constexpr auto instantiate_kernel_args()
{
    constexpr std::size_t kernel_arity = util::function_traits<Kernel>::arity;

    constexpr auto arg_types = boost::hana::transform(
        boost::hana::to_tuple(boost::hana::range_c<std::size_t, 0, kernel_arity>),
        get_kernel_arg_type<Kernel>);

    return boost::hana::transform(arg_types, instantiate);
}
}
}

#endif
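instantiate_kernel_args uses util::function_traits and Boost.Hana to turn a kernel's parameter list into a tuple of default-constructed argument values. A rough Python analogue (hypothetical, not part of the qubus API) that default-constructs one instance per annotated parameter:

import inspect

def instantiate_kernel_args(kernel):
    # Read each parameter's annotation and default-construct an instance of it;
    # assumes every parameter is annotated with a no-arg-constructible class.
    sig = inspect.signature(kernel)
    return tuple(p.annotation() for p in sig.parameters.values())

class Index: ...
class Tensor: ...

def kernel(i: Index, a: Tensor): ...

args = instantiate_kernel_args(kernel)   # (Index(), Tensor())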
module _ where

open import Common.Prelude renaming (_+_ to _+N_)
open import Common.Integer

diff : Nat → Nat → Integer
diff a zero = pos a
diff zero (suc b) = negsuc b
diff (suc a) (suc b) = diff a b

_+_ : Integer → Integer → Integer
pos    a + pos    b = pos (a +N b)
pos    a + negsuc b = diff a (suc b)
negsuc a + pos    b = diff b (suc a)
negsuc a + negsuc b = negsuc (suc a +N b)

printInt : Integer → IO Unit
printInt n = putStrLn (intToString n)

main : IO Unit
main = printInt (pos 42 + pos 58)      ,,
       printInt (pos 42 + negsuc 141)  ,,
       printInt (pos 42 + negsuc 31)   ,,
       printInt (negsuc 42 + pos 143)  ,,
       printInt (negsuc 42 + pos 33)   ,,
       printInt (negsuc 42 + negsuc 56)
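In this encoding pos n represents n and negsuc n represents -(n+1), so diff a b computes a - b by peeling matched successors. A Python rendering of the same encoding, which reproduces the program's output (100, -100, 10, 100, -10, -100):

def diff(a, b):
    if b == 0:
        return ('pos', a)
    if a == 0:
        return ('negsuc', b - 1)
    return diff(a - 1, b - 1)

def add(x, y):
    (tx, a), (ty, b) = x, y
    if tx == 'pos' and ty == 'pos':
        return ('pos', a + b)
    if tx == 'pos':                  # pos a + negsuc b = diff a (suc b)
        return diff(a, b + 1)
    if ty == 'pos':                  # negsuc a + pos b = diff b (suc a)
        return diff(b, a + 1)
    return ('negsuc', a + 1 + b)     # negsuc a + negsuc b

def to_int(x):
    return x[1] if x[0] == 'pos' else -(x[1] + 1)

print([to_int(add(('pos', 42), ('pos', 58))),
       to_int(add(('pos', 42), ('negsuc', 141))),
       to_int(add(('pos', 42), ('negsuc', 31))),
       to_int(add(('negsuc', 42), ('pos', 143))),
       to_int(add(('negsuc', 42), ('pos', 33))),
       to_int(add(('negsuc', 42), ('negsuc', 56)))])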
[STATEMENT]
lemma Union_in_lattice: "[|M \<subseteq> L; lattice L|] ==> \<Union>M \<in> L"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. \<lbrakk>M \<subseteq> L; lattice L\<rbrakk> \<Longrightarrow> \<Union> M \<in> L
[PROOF STEP]
by (simp add: lattice_def)
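Read informally, and assuming the lattice_def unfolded here includes closure under arbitrary unions of subfamilies (which the one-step simp proof suggests), the lemma states:

\[
M \subseteq L \;\land\; L \text{ a lattice} \;\Longrightarrow\; \bigcup M \in L.
\]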
Require Import Crypto.Arithmetic.PrimeFieldTheorems.

Require Import Crypto.Specific.solinas64_2e382m105_7limbs.Synthesis.

(* TODO : change this to field once field isomorphism happens *)
Definition carry :
  { carry : feBW_loose -> feBW_tight
  | forall a, phiBW_tight (carry a) = (phiBW_loose a) }.
Proof.
  Set Ltac Profiling.
  Time synthesize_carry ().
  Show Ltac Profile.
Time Defined.

Print Assumptions carry.
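synthesize_carry produces a carry function from loosely bounded to tightly bounded field elements for p = 2^382 - 105 in 7 unsaturated 64-bit limbs of weight 2^ceil(382*i/7). A hedged Python sketch of the kind of carry chain being synthesized (the real generated code is straight-line word arithmetic with proven bounds; nonnegative loose limbs are assumed here):

P_C = 105                               # 2^382 ≡ 105 (mod p)
WIDTHS = [55, 55, 54, 55, 54, 55, 54]   # ceil(382*(i+1)/7) - ceil(382*i/7)

def carry(limbs):
    """Push each limb's overflow upward; wrap the top carry into limb 0."""
    x = list(limbs)
    for i in range(6):
        x[i + 1] += x[i] >> WIDTHS[i]
        x[i] &= (1 << WIDTHS[i]) - 1
    top = x[6] >> WIDTHS[6]
    x[6] &= (1 << WIDTHS[6]) - 1
    x[0] += P_C * top                   # since 2^382 = 105 (mod p)
    return x                            # a second pass tightens x[0] if needed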
MODULE GWFLPFMODULE INTEGER, SAVE, POINTER ::ILPFCB,IWDFLG,IWETIT,IHDWET INTEGER, SAVE, POINTER ::ISFAC,ICONCV,ITHFLG,NOCVCO,NOVFC REAL, SAVE, POINTER ::WETFCT INTEGER, SAVE, POINTER, DIMENSION(:) ::LAYTYP INTEGER, SAVE, POINTER, DIMENSION(:) ::LAYAVG REAL, SAVE, POINTER, DIMENSION(:) ::CHANI INTEGER, SAVE, POINTER, DIMENSION(:) ::LAYVKA INTEGER, SAVE, POINTER, DIMENSION(:) ::LAYWET INTEGER, SAVE, POINTER, DIMENSION(:) ::LAYSTRT INTEGER, SAVE, POINTER, DIMENSION(:,:) ::LAYFLG REAL, SAVE, POINTER, DIMENSION(:,:,:) ::VKA REAL, SAVE, POINTER, DIMENSION(:,:,:) ::VKCB REAL, SAVE, POINTER, DIMENSION(:,:,:) ::SC1 REAL, SAVE, POINTER, DIMENSION(:,:,:) ::SC2 REAL, SAVE, POINTER, DIMENSION(:,:,:) ::HANI REAL, SAVE, POINTER, DIMENSION(:,:,:) ::WETDRY REAL, SAVE, POINTER, DIMENSION(:,:,:) ::HK TYPE GWFLPFTYPE INTEGER, POINTER ::ILPFCB,IWDFLG,IWETIT,IHDWET INTEGER, POINTER ::ISFAC,ICONCV,ITHFLG,NOCVCO,NOVFC REAL, POINTER ::WETFCT INTEGER, POINTER, DIMENSION(:) ::LAYTYP INTEGER, POINTER, DIMENSION(:) ::LAYAVG REAL, POINTER, DIMENSION(:) ::CHANI INTEGER, POINTER, DIMENSION(:) ::LAYVKA INTEGER, POINTER, DIMENSION(:) ::LAYWET INTEGER, POINTER, DIMENSION(:) ::LAYSTRT INTEGER, POINTER, DIMENSION(:,:) ::LAYFLG REAL, POINTER, DIMENSION(:,:,:) ::VKA REAL, POINTER, DIMENSION(:,:,:) ::VKCB REAL, POINTER, DIMENSION(:,:,:) ::SC1 REAL, POINTER, DIMENSION(:,:,:) ::SC2 REAL, POINTER, DIMENSION(:,:,:) ::HANI REAL, POINTER, DIMENSION(:,:,:) ::WETDRY REAL, POINTER, DIMENSION(:,:,:) ::HK END TYPE TYPE(GWFLPFTYPE) GWFLPFDAT(10) END MODULE GWFLPFMODULE SUBROUTINE GWF2LPF7AR(IN,IGRID) C ****************************************************************** C ALLOCATE AND READ DATA FOR LAYER PROPERTY FLOW PACKAGE C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,NLAY,ITRSS,LAYHDT,LAYHDS,LAYCBD, 1 NCNFBD,IBOUND,BUFF,BOTM,NBOTM,DELR,DELC,IOUT USE GWFBASMODULE,ONLY:HDRY USE GWFLPFMODULE,ONLY:ILPFCB,IWDFLG,IWETIT,IHDWET, 1 ISFAC,ICONCV,ITHFLG,NOCVCO,NOVFC,WETFCT, 2 LAYTYP,LAYAVG,CHANI,LAYVKA,LAYWET,LAYSTRT, 3 LAYFLG,VKA,VKCB,SC1,SC2,HANI,WETDRY,HK C CHARACTER*14 LAYPRN(5),AVGNAM(3),TYPNAM(2),VKANAM(2),WETNAM(2), 1 HANNAM DATA AVGNAM/' HARMONIC',' LOGARITHMIC',' LOG-ARITH'/ DATA TYPNAM/' CONFINED',' CONVERTIBLE'/ DATA VKANAM/' VERTICAL K',' ANISOTROPY'/ DATA WETNAM/' NON-WETTABLE',' WETTABLE'/ DATA HANNAM/' VARIABLE'/ CHARACTER*200 LINE CHARACTER*24 ANAME(9),STOTXT CHARACTER*4 PTYP C DATA ANAME(1) /' HYD. COND. ALONG ROWS'/ DATA ANAME(2) /' HORIZ. ANI. (COL./ROW)'/ DATA ANAME(3) /' VERTICAL HYD. COND.'/ DATA ANAME(4) /' HORIZ. TO VERTICAL ANI.'/ DATA ANAME(5) /'QUASI3D VERT. HYD. COND.'/ DATA ANAME(6) /' SPECIFIC STORAGE'/ DATA ANAME(7) /' SPECIFIC YIELD'/ DATA ANAME(8) /' WETDRY PARAMETER'/ DATA ANAME(9) /' STORAGE COEFFICIENT'/ C ------------------------------------------------------------------ C1------Allocate scalar data. ALLOCATE(ILPFCB,IWDFLG,IWETIT,IHDWET) ALLOCATE(ISFAC,ICONCV,ITHFLG,NOCVCO,NOVFC) ALLOCATE(WETFCT) ZERO=0. C C2------IDENTIFY PACKAGE WRITE(IOUT,1) IN 1 FORMAT(1X,/1X,'LPF -- LAYER-PROPERTY FLOW PACKAGE, VERSION 7', 1', 5/2/2005',/,9X,'INPUT READ FROM UNIT ',I4) C C3------READ COMMENTS AND ITEM 1. 
CALL URDCOM(IN,IOUT,LINE) LLOC=1 CALL URWORD(LINE,LLOC,ISTART,ISTOP,2,ILPFCB,R,IOUT,IN) CALL URWORD(LINE,LLOC,ISTART,ISTOP,3,I,HDRY,IOUT,IN) CALL URWORD(LINE,LLOC,ISTART,ISTOP,2,NPLPF,R,IOUT,IN) C C3A-----WRITE ITEM 1 IF(ILPFCB.LT.0) WRITE(IOUT,8) 8 FORMAT(1X,'CONSTANT-HEAD CELL-BY-CELL FLOWS WILL BE PRINTED', 1 ' WHEN ICBCFL IS NOT 0') IF(ILPFCB.GT.0) WRITE(IOUT,9) ILPFCB 9 FORMAT(1X,'CELL-BY-CELL FLOWS WILL BE SAVED ON UNIT ',I4) WRITE(IOUT,11) HDRY 11 FORMAT(1X,'HEAD AT CELLS THAT CONVERT TO DRY=',1PG13.5) IF(NPLPF.GT.0) THEN WRITE(IOUT,15) NPLPF 15 FORMAT(1X,I5,' Named Parameters ') ELSE NPLPF=0 WRITE(IOUT,'(A)') ' No named parameters' END IF C C3B-----GET OPTIONS. ISFAC=0 ICONCV=0 ITHFLG=0 NOCVCO=0 NOVFC=0 NOPCHK=0 STOTXT=ANAME(6) 20 CALL URWORD(LINE,LLOC,ISTART,ISTOP,1,I,R,IOUT,IN) IF(LINE(ISTART:ISTOP).EQ.'STORAGECOEFFICIENT') THEN ISFAC=1 STOTXT=ANAME(9) WRITE(IOUT,21) 21 FORMAT(1X,'STORAGECOEFFICIENT OPTION:',/, 1 1X,'Read storage coefficient rather than specific storage') ELSE IF(LINE(ISTART:ISTOP).EQ.'CONSTANTCV') THEN ICONCV=1 WRITE(IOUT,23) 23 FORMAT(1X,'CONSTANTCV OPTION:',/,1X,'Constant vertical', 1 ' conductance for convertible layers') ELSE IF(LINE(ISTART:ISTOP).EQ.'THICKSTRT') THEN ITHFLG=1 WRITE(IOUT,25) 25 FORMAT(1X,'THICKSTRT OPTION:',/,1X,'Negative LAYTYP indicates', 1 ' confined layer with thickness computed from STRT-BOT') ELSE IF(LINE(ISTART:ISTOP).EQ.'NOCVCORRECTION') THEN NOCVCO=1 WRITE(IOUT,27) 27 FORMAT(1X,'NOCVCORRECTION OPTION:',/,1X, 1 'Do not adjust vertical conductance when applying', 2 ' the vertical flow correction') ELSE IF(LINE(ISTART:ISTOP).EQ.'NOVFC') THEN NOVFC=1 NOCVCO=1 WRITE(IOUT,29) 29 FORMAT(1X,'NOVFC OPTION:',/,1X, 1 'Do not apply the vertical flow correction') ELSE IF(LINE(ISTART:ISTOP).EQ.'NOPARCHECK') THEN NOPCHK=1 WRITE(IOUT,30) 30 FORMAT(1X,'NOPARCHECK OPTION:',/,1X, 1 'For data defined by parameters, do not check to see if ', 2 'parameters define data at all cells') END IF IF(LLOC.LT.200) GO TO 20 C C4------ALLOCATE AND READ LAYTYP, LAYAVG, CHANI, LAYVKA, LAYWET, LAYSTRT. ALLOCATE(LAYTYP(NLAY)) ALLOCATE(LAYAVG(NLAY)) ALLOCATE(CHANI(NLAY)) ALLOCATE(LAYVKA(NLAY)) ALLOCATE(LAYWET(NLAY)) ALLOCATE(LAYSTRT(NLAY)) READ(IN,*) (LAYTYP(K),K=1,NLAY) READ(IN,*) (LAYAVG(K),K=1,NLAY) READ(IN,*) (CHANI(K),K=1,NLAY) READ(IN,*) (LAYVKA(K),K=1,NLAY) READ(IN,*) (LAYWET(K),K=1,NLAY) C C4A-----PRINT A TABLE OF VALUES FOR LAYTYP, LAYAVG, CHANI, LAYVKA, LAYWET. WRITE(IOUT,47) 47 FORMAT(1X,/3X,'LAYER FLAGS:',/1X, 1 'LAYER LAYTYP LAYAVG CHANI ', 2 ' LAYVKA LAYWET',/1X,75('-')) DO 50 K=1,NLAY WRITE(IOUT,48) K,LAYTYP(K),LAYAVG(K),CHANI(K),LAYVKA(K),LAYWET(K) 48 FORMAT(1X,I4,2I14,1PE14.3,2I14) C C4A1----SET GLOBAL HEAD-DEPENDENT TRANSMISSIVITY AND STORAGE FLAGS. IF (LAYTYP(K).NE.0) THEN LAYHDT(K)=1 LAYHDS(K)=1 ELSE LAYHDT(K)=0 LAYHDS(K)=0 ENDIF 50 CONTINUE C C4A2----SET LAYSTRT AND RESET LAYTYP IF THICKSTRT OPTION IS ACTIVE. DO 60 K=1,NLAY LAYSTRT(K)=0 IF(LAYTYP(K).LT.0 .AND. ITHFLG.NE.0) THEN LAYSTRT(K)=1 LAYTYP(K)=0 LAYHDT(K)=0 LAYHDS(K)=0 WRITE(IOUT,57) K 57 FORMAT(1X,'Layer',I5, 1 ' is confined because LAYTYP<0 and THICKSTRT option is active') END IF 60 CONTINUE C C4B-----BASED ON LAYTYP, LAYAVG, CHANI, LAYWET, COUNT THE NUMBER OF EACH C4B-----TYPE OF 2-D ARRAY; CHECK VALUES FOR CONSISTENCY; AND SETUP C4B-----POINTERS IN LAYTYP, CHANI, AND LAYWET FOR CONVENIENT ACCESS C4B-----TO SC2, HANI, and WETDRY. PRINT INTERPRETED VALUES OF FLAGS. 
NCNVRT=0 NHANI=0 NWETD=0 WRITE(IOUT,67) 67 FORMAT(1X,/3X,'INTERPRETATION OF LAYER FLAGS:',/1X, 1 ' INTERBLOCK HORIZONTAL', 2 ' DATA IN',/1X, 3 ' LAYER TYPE TRANSMISSIVITY ANISOTROPY', 4 ' ARRAY VKA WETTABILITY',/1X, 5 'LAYER (LAYTYP) (LAYAVG) (CHANI)', 6 ' (LAYVKA) (LAYWET)',/1X,75('-')) DO 100 K=1,NLAY IF(LAYTYP(K).NE.0) THEN NCNVRT=NCNVRT+1 LAYTYP(K)=NCNVRT END IF IF(CHANI(K).LE.ZERO) THEN NHANI=NHANI+1 CHANI(K)=-NHANI END IF IF(LAYWET(K).NE.0) THEN IF(LAYTYP(K).EQ.0) THEN WRITE(IOUT,*) 1 ' LAYWET is not 0 and LAYTYP is 0 for layer:',K WRITE(IOUT,*) ' LAYWET must be 0 if LAYTYP is 0' CALL USTOP(' ') ELSE NWETD=NWETD+1 LAYWET(K)=NWETD END IF END IF IF(LAYAVG(K).LT.0 .OR. LAYAVG(K).GT.2) THEN WRITE(IOUT,74) LAYAVG(K) 74 FORMAT(1X,I8, 1 ' IS AN INVALID LAYAVG VALUE -- MUST BE 0, 1, or 2') CALL USTOP(' ') END IF LAYPRN(1)=TYPNAM(1) IF(LAYTYP(K).NE.0) LAYPRN(1)=TYPNAM(2) LAYPRN(2)=AVGNAM(LAYAVG(K)+1) IF(CHANI(K).LE.0) THEN LAYPRN(3)=HANNAM ELSE WRITE(LAYPRN(3),'(1PE14.3)') CHANI(K) END IF LAYPRN(4)=VKANAM(1) IF(LAYVKA(K).NE.0) LAYPRN(4)=VKANAM(2) LAYPRN(5)=WETNAM(1) IF(LAYWET(K).NE.0) LAYPRN(5)=WETNAM(2) WRITE(IOUT,78) K,(LAYPRN(I),I=1,5) 78 FORMAT(1X,I4,5A) 100 CONTINUE C C4C-----PRINT WETTING INFORMATION. IF(NWETD.EQ.0) THEN WRITE(IOUT,13) 13 FORMAT(1X,/,1X,'WETTING CAPABILITY IS NOT ACTIVE IN ANY LAYER') IWDFLG=0 ELSE WRITE(IOUT,12) NWETD 12 FORMAT(1X,/,1X,'WETTING CAPABILITY IS ACTIVE IN',I4,' LAYERS') IWDFLG=1 READ(IN,*) WETFCT,IWETIT,IHDWET IF(IWETIT.LE.0) IWETIT=1 WRITE(IOUT,*) ' WETTING FACTOR=',WETFCT WRITE(IOUT,*) ' WETTING ITERATION INTERVAL=',IWETIT WRITE(IOUT,*) ' IHDWET=',IHDWET END IF C C5------ALLOCATE MEMORY FOR ARRAYS. ALLOCATE(LAYFLG(6,NLAY)) ALLOCATE(HK(NCOL,NROW,NLAY)) ALLOCATE(VKA(NCOL,NROW,NLAY)) IF(NCNFBD.GT.0) THEN ALLOCATE(VKCB(NCOL,NROW,NCNFBD)) ELSE ALLOCATE(VKCB(1,1,1)) END IF IF(ITRSS.NE.0) THEN ALLOCATE(SC1(NCOL,NROW,NLAY)) ELSE ALLOCATE(SC1(1,1,1)) END IF IF(ITRSS.NE.0 .AND. NCNVRT.GT.0) THEN ALLOCATE(SC2(NCOL,NROW,NCNVRT)) ELSE ALLOCATE(SC2(1,1,1)) END IF IF(NHANI.GT.0) THEN ALLOCATE(HANI(NCOL,NROW,NHANI)) ELSE ALLOCATE(HANI(1,1,1)) END IF IF(NWETD.GT.0) THEN ALLOCATE(WETDRY(NCOL,NROW,NWETD)) ELSE ALLOCATE(WETDRY(1,1,1)) END IF C C6------READ PARAMETER DEFINITIONS NPHK=0 NPVKCB=0 NPVK=0 NPVANI=0 NPSS=0 NPSY=0 NPHANI=0 IF(NPLPF.GT.0) THEN WRITE(IOUT,115) 115 FORMAT(/,' PARAMETERS DEFINED IN THE LPF PACKAGE') DO 120 K=1,NPLPF CALL UPARARRRP(IN,IOUT,N,1,PTYP,1,0,-1) C Note that NPHK and the other NP variables in C this group are used only as flags, not counts IF(PTYP.EQ.'HK') THEN NPHK=1 ELSE IF(PTYP.EQ.'HANI') THEN C6A-----WHEN A HANI PARAMETER IS USED, THEN ALL HORIZONTAL ANISOTROPY C6A-----MUST BE DEFINED USING PARAMETERS. ENSURE THAT ALL CHANI <= 0 DO 118 I = 1, NLAY IF (CHANI(I).GT.0.0) THEN WRITE(IOUT,117) 117 FORMAT(/, &' ERROR: WHEN A HANI PARAMETER IS USED, CHANI FOR ALL LAYERS',/, &' MUST BE LESS THAN OR EQUAL TO 0.0 -- STOP EXECUTION', &' (GWF2LPF7AR)') CALL USTOP(' ') ENDIF 118 CONTINUE NPHANI=1 ELSE IF(PTYP.EQ.'VKCB') THEN NPVKCB=1 ELSE IF(PTYP.EQ.'VK') THEN NPVK=1 CALL SGWF2LPF7CK(IOUT,N,'VK ') ELSE IF(PTYP.EQ.'VANI') THEN NPVANI=1 CALL SGWF2LPF7CK(IOUT,N,'VANI') ELSE IF(PTYP.EQ.'SS') THEN NPSS=1 ELSE IF(PTYP.EQ.'SY') THEN NPSY=1 ELSE WRITE(IOUT,*) ' Invalid parameter type for LPF Package' CALL USTOP(' ') END IF 120 CONTINUE END IF C C7------DEFINE DATA FOR EACH LAYER -- VIA READING OR NAMED PARAMETERS. 
DO 200 K=1,NLAY KK=K C C7A-----DEFINE HORIZONTAL HYDRAULIC CONDUCTIVITY (HK) IF(NPHK.EQ.0) THEN CALL U2DREL(HK(:,:,KK),ANAME(1),NROW,NCOL,KK,IN,IOUT) ELSE READ(IN,*) LAYFLG(1,K) WRITE(IOUT,121) ANAME(1),K,LAYFLG(1,K) 121 FORMAT(1X,/1X,A,' FOR LAYER',I4, 1 ' WILL BE DEFINED BY PARAMETERS',/1X,'(PRINT FLAG=',I4,')') CALL UPARARRSUB1(HK(:,:,KK),NCOL,NROW,KK,'HK', 1 IOUT,ANAME(1),LAYFLG(1,KK)) IF(NOPCHK.EQ.0) CALL UPARARRCK(BUFF,IBOUND,IOUT,K,NCOL,NLAY, 1 NROW,'HK ') END IF C C7B-----READ HORIZONTAL ANISOTROPY IF CHANI IS NON-ZERO IF(CHANI(K).LE.ZERO) THEN KHANI=-CHANI(K) IF(NPHANI.EQ.0) THEN CALL U2DREL(HANI(:,:,KHANI),ANAME(2),NROW,NCOL,KK,IN,IOUT) ELSE READ(IN,*) LAYFLG(6,K) WRITE(IOUT,121) ANAME(2),K,LAYFLG(6,K) CALL UPARARRSUB1(HANI(:,:,KHANI),NCOL,NROW,KK,'HANI', 1 IOUT,ANAME(2),LAYFLG(6,KK)) IF(NOPCHK.EQ.0) CALL UPARARRCK(BUFF,IBOUND,IOUT,K,NCOL, 1 NLAY,NROW,'HANI') END IF END IF C C7C-----DEFINE VERTICAL HYDRAULIC CONDUCTIVITY OR HORIZONTAL TO VERTICAL C7C-----ANISOTROPY (VKA). IANAME=3 PTYP='VK' IF(LAYVKA(K).NE.0) THEN IANAME=4 PTYP='VANI' END IF IF(NPVK.EQ.0 .AND. NPVANI.EQ.0) THEN CALL U2DREL(VKA(:,:,KK),ANAME(IANAME),NROW,NCOL,KK,IN,IOUT) ELSE READ(IN,*) LAYFLG(2,K) WRITE(IOUT,121) ANAME(IANAME),K,LAYFLG(2,K) CALL UPARARRSUB1(VKA(:,:,KK),NCOL,NROW,KK,PTYP,IOUT, 1 ANAME(IANAME),LAYFLG(2,KK)) IF(NOPCHK.EQ.0) CALL UPARARRCK(BUFF,IBOUND,IOUT,K,NCOL,NLAY, 1 NROW,PTYP) END IF C C7D-----DEFINE SPECIFIC STORAGE OR STORAGE COEFFICIENT IN ARRAY SC1 IF TRANSIENT. IF(ITRSS.NE.0) THEN IF(NPSS.EQ.0) THEN CALL U2DREL(SC1(:,:,KK),STOTXT,NROW,NCOL,KK,IN,IOUT) ELSE READ(IN,*) LAYFLG(3,K) WRITE(IOUT,121) STOTXT,K,LAYFLG(3,K) CALL UPARARRSUB1(SC1(:,:,KK),NCOL,NROW,KK,'SS', 1 IOUT,STOTXT,LAYFLG(3,KK)) IF(NOPCHK.EQ.0) CALL UPARARRCK(BUFF,IBOUND,IOUT,K,NCOL, 1 NLAY,NROW,'SS ') END IF IF(ISFAC.EQ.0) THEN CALL SGWF2LPF7SC(SC1(:,:,KK),KK,1) ELSE CALL SGWF2LPF7SC(SC1(:,:,KK),KK,0) END IF END IF C C7E-----DEFINE SPECIFIC YIELD IN ARRAY SC2 IF TRANSIENT AND LAYER IS C7E-----IS CONVERTIBLE. IF(LAYTYP(K).NE.0) THEN IF(ITRSS.NE.0) THEN IF(NPSY.EQ.0) THEN CALL U2DREL(SC2(:,:,LAYTYP(K)),ANAME(7),NROW,NCOL,KK,IN, 1 IOUT) ELSE READ(IN,*) LAYFLG(4,K) WRITE(IOUT,121) ANAME(7),K,LAYFLG(4,K) CALL UPARARRSUB1(SC2(:,:,LAYTYP(K)),NCOL, 1 NROW,KK,'SY',IOUT,ANAME(7),LAYFLG(4,KK)) IF(NOPCHK.EQ.0) CALL UPARARRCK(BUFF,IBOUND,IOUT,K, 1 NCOL,NLAY,NROW,'SY ') END IF CALL SGWF2LPF7SC(SC2(:,:,LAYTYP(K)),KK,0) END IF END IF C C7F-----READ CONFINING BED VERTICAL HYDRAULIC CONDUCTIVITY (VKCB) IF(LAYCBD(K).NE.0) THEN IF(NPVKCB.EQ.0) THEN CALL U2DREL(VKCB(:,:,LAYCBD(K)),ANAME(5),NROW,NCOL,KK,IN, 1 IOUT) ELSE READ(IN,*) LAYFLG(5,K) WRITE(IOUT,121) ANAME(5),K,LAYFLG(5,K) CALL UPARARRSUB1(VKCB(:,:,LAYCBD(K)),NCOL,NROW,KK, 1 'VKCB',IOUT,ANAME(5),LAYFLG(5,KK)) IF(NOPCHK.EQ.0) CALL UPARARRCK(BUFF,IBOUND,IOUT,K,NCOL, 1 NLAY,NROW,'VKCB') END IF END IF C C7G-----READ WETDRY CODES IF WETTING CAPABILITY HAS BEEN INVOKED C7G-----(LAYWET NOT 0). IF(LAYWET(K).NE.0) THEN CALL U2DREL(WETDRY(:,:,LAYWET(K)),ANAME(8),NROW,NCOL,KK,IN, 1 IOUT) END IF 200 CONTINUE C C8------PREPARE AND CHECK LPF DATA. 
CALL SGWF2LPF7N() C C9------RETURN CALL GWF2LPF7PSV(IGRID) RETURN END SUBROUTINE GWF2LPF7AD(KPER,IGRID) C ****************************************************************** C SET HOLD TO BOTM WHENEVER A WETTABLE CELL IS DRY C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,NLAY,ISSFLG,IBOUND,HOLD,BOTM,LBOTM USE GWFLPFMODULE,ONLY:LAYWET,WETDRY C ------------------------------------------------------------------ C CALL SGWF2LPF7PNT(IGRID) ISS=ISSFLG(KPER) C C1------RETURN IF STEADY STATE. IF(ISS.NE.0) RETURN C C2------LOOP THROUGH ALL LAYERS TO SET HOLD=BOT IF A WETTABLE CELL IS DRY ZERO=0. DO 100 K=1,NLAY C C2A-----SKIP LAYERS THAT CANNOT CONVERT BETWEEN WET AND DRY IF(LAYWET(K).EQ.0) GO TO 100 DO 90 I=1,NROW DO 90 J=1,NCOL C C2B-----SKIP CELLS THAT ARE CURRENTLY WET OR ARE NOT WETTABLE IF(IBOUND(J,I,K).NE.0) GO TO 90 IF(WETDRY(J,I,LAYWET(K)).EQ.ZERO) GO TO 90 C C2C-----SET HOLD=BOT HOLD(J,I,K)=BOTM(J,I,LBOTM(K)) 90 CONTINUE 100 CONTINUE C C3-----RETURN RETURN END SUBROUTINE GWF2LPF7FM(KITER,KSTP,KPER,IGRID) C ****************************************************************** C ADD LEAKAGE CORRECTION AND STORAGE TO HCOF AND RHS, AND CALCULATE C CONDUCTANCE AS REQUIRED. C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,NLAY,IBOUND,BOTM,NBOTM,DELR,DELC, 1 LBOTM,CV,HNEW,RHS,HCOF,HOLD,ISSFLG,IOUT USE GWFBASMODULE,ONLY:DELT USE GWFLPFMODULE,ONLY:LAYTYP,SC1,SC2,NOVFC C ------------------------------------------------------------------ C C1------SET POINTERS TO DATA, GET STEADY-STATE FLAG FOR STRESS PERIOD, C1------DEFINE CONSTANT. CALL SGWF2LPF7PNT(IGRID) ISS=ISSFLG(KPER) ONE=1. C C2------FOR EACH LAYER: IF CONVERTIBLE, CALCULATE CONDUCTANCES. DO 100 K=1,NLAY KK=K IF(LAYTYP(K).NE.0) 1 CALL SGWF2LPF7HCOND(KK,KITER,KSTP,KPER) 100 CONTINUE DO 101 K=1,NLAY KK=K IF(K.NE.NLAY) THEN IF(LAYTYP(K).NE.0 .OR. LAYTYP(K+1).NE.0) 1 CALL SGWF2LPF7VCOND(KK) END IF 101 CONTINUE C C3------IF THE STRESS PERIOD IS TRANSIENT, ADD STORAGE TO HCOF AND RHS IF(ISS.EQ.0) THEN TLED=ONE/DELT DO 200 K=1,NLAY C C4------SEE IF THIS LAYER IS CONVERTIBLE OR NON-CONVERTIBLE. IF(LAYTYP(K).EQ.0) THEN C5------NON-CONVERTIBLE LAYER, SO USE PRIMARY STORAGE DO 140 I=1,NROW DO 140 J=1,NCOL IF(IBOUND(J,I,K).LE.0) GO TO 140 RHO=SC1(J,I,K)*TLED HCOF(J,I,K)=HCOF(J,I,K)-RHO RHS(J,I,K)=RHS(J,I,K)-RHO*HOLD(J,I,K) 140 CONTINUE ELSE C C6------A CONVERTIBLE LAYER, SO CHECK OLD AND NEW HEADS TO DETERMINE C6------WHEN TO USE PRIMARY AND SECONDARY STORAGE DO 180 I=1,NROW DO 180 J=1,NCOL C C6A-----IF THE CELL IS EXTERNAL THEN SKIP IT. IF(IBOUND(J,I,K).LE.0) GO TO 180 TP=BOTM(J,I,LBOTM(K)-1) RHO2=SC2(J,I,LAYTYP(K))*TLED RHO1=SC1(J,I,K)*TLED C C6B-----FIND STORAGE FACTOR AT START OF TIME STEP. SOLD=RHO2 IF(HOLD(J,I,K).GT.TP) SOLD=RHO1 C C6C-----FIND STORAGE FACTOR AT END OF TIME STEP. HTMP=HNEW(J,I,K) SNEW=RHO2 IF(HTMP.GT.TP) SNEW=RHO1 C C6D-----ADD STORAGE TERMS TO RHS AND HCOF. HCOF(J,I,K)=HCOF(J,I,K)-SNEW RHS(J,I,K)=RHS(J,I,K) - SOLD*(HOLD(J,I,K)-TP) - SNEW*TP C 180 CONTINUE END IF C 200 CONTINUE END IF C C7------FOR EACH LAYER DETERMINE IF CORRECTION TERMS ARE NEEDED FOR C7------FLOW DOWN INTO PARTIALLY SATURATED LAYERS. IF(NOVFC.EQ.0) THEN DO 300 K=1,NLAY C C8------SEE IF CORRECTION IS NEEDED FOR LEAKAGE FROM ABOVE. IF(LAYTYP(K).NE.0 .AND. 
K.NE.1) THEN C C8A-----FOR EACH CELL MAKE THE CORRECTION IF NEEDED. DO 220 I=1,NROW DO 220 J=1,NCOL C C8B-----IF THE CELL IS EXTERNAL(IBOUND<=0) THEN SKIP IT. IF(IBOUND(J,I,K).LE.0) GO TO 220 HTMP=HNEW(J,I,K) C C8C-----IF HEAD IS ABOVE TOP THEN CORRECTION NOT NEEDED TOP=BOTM(J,I,LBOTM(K)-1) IF(HTMP.GE.TOP) GO TO 220 C C8D-----WITH HEAD BELOW TOP ADD CORRECTION TERMS TO RHS. RHS(J,I,K)=RHS(J,I,K) + CV(J,I,K-1)*(TOP-HTMP) 220 CONTINUE END IF C C9------SEE IF THIS LAYER MAY NEED CORRECTION FOR LEAKAGE TO BELOW. IF(K.EQ.NLAY) GO TO 300 IF(LAYTYP(K+1).NE.0) THEN C C9A-----FOR EACH CELL MAKE THE CORRECTION IF NEEDED. DO 280 I=1,NROW DO 280 J=1,NCOL C C9B-----IF CELL IS EXTERNAL (IBOUND<=0) THEN SKIP IT. IF(IBOUND(J,I,K).LE.0) GO TO 280 C C9C-----IF HEAD IN THE LOWER CELL IS LESS THAN TOP ADD CORRECTION C9C-----TERM TO RHS. HTMP=HNEW(J,I,K+1) TOP=BOTM(J,I,LBOTM(K+1)-1) IF(HTMP.LT.TOP) RHS(J,I,K)=RHS(J,I,K)- CV(J,I,K)*(TOP-HTMP) 280 CONTINUE END IF C 300 CONTINUE END IF C C10-----RETURN RETURN END SUBROUTINE SGWF2LPF7N() C ****************************************************************** C INITIALIZE AND CHECK LPF DATA C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,NLAY,IBOUND,HNEW,LAYCBD,CV, 1 BOTM,NBOTM,DELR,DELC,IOUT USE GWFBASMODULE,ONLY:HNOFLO USE GWFLPFMODULE,ONLY:LAYWET,WETDRY,HK,VKCB,LAYTYP,VKA C ------------------------------------------------------------------ C C1------DEFINE CONSTANTS. ZERO=0. HCNV=HNOFLO C C2------INSURE THAT EACH ACTIVE CELL HAS AT LEAST ONE NON-ZERO C2------TRANSMISSIVE PARAMETER. DO 60 K=1,NLAY IF(LAYWET(K).NE.0) THEN C C3------WETTING IS ACTIVE. DO 40 I=1,NROW DO 40 J=1,NCOL IF(IBOUND(J,I,K).EQ.0 .AND. WETDRY(J,I,LAYWET(K)).EQ.ZERO) 1 GO TO 40 C C3A-----CHECK HORIZONTAL HYDRAULIC CONDUCTIVITY (HK). IF(HK(J,I,K).NE.ZERO) GO TO 40 C C3B-----CHECK VERTICAL HYDRAULIC CONDUCTIVITY AND CONFINING BED C3B-----VERTICAL HYDRAULIC CONDUCTIVITY. IF(NLAY.GT.1) THEN IF(VKA(J,I,K).NE.ZERO) THEN IF(K.NE.NLAY) THEN IF (VKA(J,I,K+1).NE.ZERO) THEN IF(LAYCBD(K).NE.0) THEN IF(VKCB(J,I,LAYCBD(K)).NE.ZERO) GO TO 40 ELSE GO TO 40 END IF END IF END IF IF(K.NE.1) THEN IF (VKA(J,I,K-1).NE.ZERO) THEN IF (LAYCBD(K-1).NE.0) THEN IF(VKCB(J,I,LAYCBD(K-1)).NE.ZERO) GO TO 40 ELSE GO TO 40 END IF ENDIF END IF END IF END IF C C3C-----ALL TRANSMISSIVE TERMS ARE ALL 0, SO CONVERT CELL TO NO FLOW. IBOUND(J,I,K)=0 HNEW(J,I,K)=HCNV WETDRY(J,I,LAYWET(K))=ZERO WRITE(IOUT,43) K,I,J 40 CONTINUE C ELSE C C4------WETTING IS INACTIVE DO 50 I=1,NROW DO 50 J=1,NCOL IF(IBOUND(J,I,K).EQ.0) GO TO 50 C C4A-----CHECK HORIZONTAL HYDRAULIC CONDUCTIVITY (HK). IF(HK(J,I,K).NE.ZERO) GO TO 50 C C4B-----CHECK VERTICAL HYDRAULIC CONDUCTIVITY AND CONFINING BED C4B-----VERTICAL HYDRAULIC CONDUCTIVITY. IF(NLAY.GT.1) THEN IF(VKA(J,I,K).NE.ZERO) THEN IF(K.NE.NLAY) THEN IF (VKA(J,I,K+1).NE.ZERO) THEN IF(LAYCBD(K).NE.0) THEN IF(VKCB(J,I,LAYCBD(K)).NE.ZERO) GO TO 50 ELSE GO TO 50 END IF END IF END IF IF(K.NE.1) THEN IF (VKA(J,I,K-1).NE.ZERO) THEN IF (LAYCBD(K-1).NE.0) THEN IF(VKCB(J,I,LAYCBD(K-1)).NE.ZERO) GO TO 50 ELSE GO TO 50 END IF ENDIF END IF END IF END IF C C4C-----ALL TRANSMISSIVE TERMS ARE 0, SO CONVERT CELL TO NO FLOW. IBOUND(J,I,K)=0 HNEW(J,I,K)=HCNV WRITE(IOUT,43) K,I,J 43 FORMAT(1X,'NODE (LAYER,ROW,COL) ',I3,2(1X,I5), 1 ' ELIMINATED BECAUSE ALL HYDRAULIC',/, 2 ' CONDUCTIVITIES TO NODE ARE 0') 50 CONTINUE END IF 60 CONTINUE C C5------CALCULATE HOR. 
CONDUCTANCE(CR AND CC) FOR CONSTANT T LAYERS. DO 70 K=1,NLAY KK=K IF(LAYTYP(K).EQ.0) CALL SGWF2LPF7HCOND(KK,0,0,0) 70 CONTINUE C C6------CALCULATE VERTICAL CONDUCTANCE BETWEEN CONFINED LAYERS. IF(NLAY.GT.1) THEN DO 10 K=1,NLAY-1 KK=K IF(LAYTYP(K).EQ.0 .AND. LAYTYP(K+1).EQ.0) 1 CALL SGWF2LPF7VCOND(KK) 10 CONTINUE END IF C C7------RETURN. RETURN END SUBROUTINE GWF2LPF7BDADJ(KSTP,KPER,IDIR,IBDRET, 1 IC1,IC2,IR1,IR2,IL1,IL2,IGRID) C ****************************************************************** C COMPUTE FLOW BETWEEN ADJACENT CELLS IN A SUBREGION OF THE GRID C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,NLAY,IBOUND,HNEW,BUFF,CR,CC,CV, 1 BOTM,LBOTM,IOUT USE GWFBASMODULE,ONLY:ICBCFL,DELT,PERTIM,TOTIM,ICHFLG USE GWFLPFMODULE,ONLY:ILPFCB,LAYTYP,NOVFC C CHARACTER*16 TEXT(3) DOUBLE PRECISION HD C DATA TEXT(1),TEXT(2),TEXT(3) 1 /'FLOW RIGHT FACE ','FLOW FRONT FACE ','FLOW LOWER FACE '/ C ------------------------------------------------------------------ C CALL SGWF2LPF7PNT(IGRID) C C1------IF CELL-BY-CELL FLOWS WILL BE SAVED IN A FILE, SET FLAG IBD. C1------RETURN IF FLOWS ARE NOT BEING SAVED OR RETURNED. ZERO=0. IBD=0 IF(ILPFCB.GT.0) IBD=ICBCFL IF(IBD.EQ.0 .AND. IBDRET.EQ.0) RETURN C C2------SET THE SUBREGION EQUAL TO THE ENTIRE GRID IF VALUES ARE BEING C2------SAVED IN A FILE. IF(IBD.NE.0) THEN K1=1 K2=NLAY I1=1 I2=NROW J1=1 J2=NCOL END IF C C3------TEST FOR DIRECTION OF CALCULATION; IF NOT ACROSS COLUMNS, GO TO C3------STEP 4. IF ONLY 1 COLUMN, RETURN. IF(IDIR.NE.1) GO TO 405 IF(NCOL.EQ.1) RETURN C C3A-----CALCULATE FLOW ACROSS COLUMNS (THROUGH RIGHT FACE). IF NOT C3A-----SAVING IN A FILE, SET THE SUBREGION. CLEAR THE BUFFER. IF(IBD.EQ.0) THEN K1=IL1 K2=IL2 I1=IR1 I2=IR2 J1=IC1-1 IF(J1.LT.1) J1=1 J2=IC2 END IF DO 310 K=K1,K2 DO 310 I=I1,I2 DO 310 J=J1,J2 BUFF(J,I,K)=ZERO 310 CONTINUE C C3B-----FOR EACH CELL CALCULATE FLOW THRU RIGHT FACE & STORE IN BUFFER. IF(J2.EQ.NCOL) J2=J2-1 DO 400 K=K1,K2 DO 400 I=I1,I2 DO 400 J=J1,J2 IF(ICHFLG.EQ.0) THEN IF((IBOUND(J,I,K).LE.0) .AND. (IBOUND(J+1,I,K).LE.0)) GO TO 400 ELSE IF((IBOUND(J,I,K).EQ.0) .OR. (IBOUND(J+1,I,K).EQ.0)) GO TO 400 END IF HDIFF=HNEW(J,I,K)-HNEW(J+1,I,K) BUFF(J,I,K)=HDIFF*CR(J,I,K) 400 CONTINUE C C3C-----RECORD CONTENTS OF BUFFER AND RETURN. IF(IBD.EQ.1) 1 CALL UBUDSV(KSTP,KPER,TEXT(1),ILPFCB,BUFF,NCOL,NROW,NLAY,IOUT) IF(IBD.EQ.2) CALL UBDSV1(KSTP,KPER,TEXT(1),ILPFCB,BUFF,NCOL,NROW, 1 NLAY,IOUT,DELT,PERTIM,TOTIM,IBOUND) RETURN C C4------TEST FOR DIRECTION OF CALCULATION; IF NOT ACROSS ROWS, GO TO C4------STEP 5. IF ONLY 1 ROW, RETURN. 405 IF(IDIR.NE.2) GO TO 505 IF(NROW.EQ.1) RETURN C C4A-----CALCULATE FLOW ACROSS ROWS (THROUGH FRONT FACE). IF NOT SAVING C4A-----IN A FILE, SET THE SUBREGION. CLEAR THE BUFFER. IF(IBD.EQ.0) THEN K1=IL1 K2=IL2 I1=IR1-1 IF(I1.LT.1) I1=1 I2=IR2 J1=IC1 J2=IC2 END IF DO 410 K=K1,K2 DO 410 I=I1,I2 DO 410 J=J1,J2 BUFF(J,I,K)=ZERO 410 CONTINUE C C4B-----FOR EACH CELL CALCULATE FLOW THRU FRONT FACE & STORE IN BUFFER. IF(I2.EQ.NROW) I2=I2-1 DO 500 K=K1,K2 DO 500 I=I1,I2 DO 500 J=J1,J2 IF(ICHFLG.EQ.0) THEN IF((IBOUND(J,I,K).LE.0) .AND. (IBOUND(J,I+1,K).LE.0)) GO TO 500 ELSE IF((IBOUND(J,I,K).EQ.0) .OR. (IBOUND(J,I+1,K).EQ.0)) GO TO 500 END IF HDIFF=HNEW(J,I,K)-HNEW(J,I+1,K) BUFF(J,I,K)=HDIFF*CC(J,I,K) 500 CONTINUE C C4C-----RECORD CONTENTS OF BUFFER AND RETURN. 
IF(IBD.EQ.1) 1 CALL UBUDSV(KSTP,KPER,TEXT(2),ILPFCB,BUFF,NCOL,NROW,NLAY,IOUT) IF(IBD.EQ.2) CALL UBDSV1(KSTP,KPER,TEXT(2),ILPFCB,BUFF,NCOL,NROW, 1 NLAY,IOUT,DELT,PERTIM,TOTIM,IBOUND) RETURN C C5------DIRECTION OF CALCULATION IS ACROSS LAYERS BY ELIMINATION. IF C5------ONLY 1 LAYER, RETURN. 505 IF(NLAY.EQ.1) RETURN C C5A-----CALCULATE FLOW ACROSS LAYERS (THROUGH LOWER FACE). IF NOT C5A-----SAVING IN A FILE, SET THE SUBREGION. CLEAR THE BUFFER. IF(IBD.EQ.0) THEN K1=IL1-1 IF(K1.LT.1) K1=1 K2=IL2 I1=IR1 I2=IR2 J1=IC1 J2=IC2 END IF DO 510 K=K1,K2 DO 510 I=I1,I2 DO 510 J=J1,J2 BUFF(J,I,K)=ZERO 510 CONTINUE C C5B-----FOR EACH CELL CALCULATE FLOW THRU LOWER FACE & STORE IN BUFFER. IF(K2.EQ.NLAY) K2=K2-1 DO 600 K=1,K2 IF(K.LT.K1) GO TO 600 DO 590 I=I1,I2 DO 590 J=J1,J2 IF(ICHFLG.EQ.0) THEN IF((IBOUND(J,I,K).LE.0) .AND. (IBOUND(J,I,K+1).LE.0)) GO TO 590 ELSE IF((IBOUND(J,I,K).EQ.0) .OR. (IBOUND(J,I,K+1).EQ.0)) GO TO 590 END IF HD=HNEW(J,I,K+1) IF(NOVFC.NE.0 .OR. LAYTYP(K+1).EQ.0) GO TO 580 TMP=HD TOP=BOTM(J,I,LBOTM(K+1)-1) IF(TMP.LT.TOP) HD=TOP 580 HDIFF=HNEW(J,I,K)-HD BUFF(J,I,K)=HDIFF*CV(J,I,K) 590 CONTINUE 600 CONTINUE C C5C-----RECORD CONTENTS OF BUFFER AND RETURN. IF(IBD.EQ.1) 1 CALL UBUDSV(KSTP,KPER,TEXT(3),ILPFCB,BUFF,NCOL,NROW,NLAY,IOUT) IF(IBD.EQ.2) CALL UBDSV1(KSTP,KPER,TEXT(3),ILPFCB,BUFF,NCOL,NROW, 1 NLAY,IOUT,DELT,PERTIM,TOTIM,IBOUND) RETURN END SUBROUTINE GWF2LPF7BDS(KSTP,KPER,IGRID) C ****************************************************************** C COMPUTE STORAGE BUDGET FLOW TERM FOR LPF. C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,NLAY,ISSFLG,IBOUND,HNEW,HOLD, 1 BUFF,BOTM,LBOTM,IOUT USE GWFBASMODULE,ONLY:MSUM,ICBCFL,VBVL,VBNM,DELT,PERTIM,TOTIM USE GWFLPFMODULE,ONLY:ILPFCB,LAYTYP,SC1,SC2 CHARACTER*16 TEXT DOUBLE PRECISION STOIN,STOUT,SSTRG C DATA TEXT /' STORAGE'/ C ------------------------------------------------------------------ C CALL SGWF2LPF7PNT(IGRID) C C1------INITIALIZE BUDGET ACCUMULATORS AND 1/DELT. ISS=ISSFLG(KPER) ZERO=0. STOIN=ZERO STOUT=ZERO C2------IF STEADY STATE, STORAGE TERM IS ZERO IF(ISS.NE.0) GOTO 400 ONE=1. TLED=ONE/DELT C C3------IF CELL-BY-CELL FLOWS WILL BE SAVED, SET FLAG IBD. IBD=0 IF(ILPFCB.GT.0) IBD=ICBCFL C C4------CLEAR BUFFER. DO 210 K=1,NLAY DO 210 I=1,NROW DO 210 J=1,NCOL BUFF(J,I,K)=ZERO 210 CONTINUE C C5------LOOP THROUGH EVERY CELL IN THE GRID. KT=0 DO 300 K=1,NLAY LC=LAYTYP(K) IF(LC.NE.0) KT=KT+1 DO 300 I=1,NROW DO 300 J=1,NCOL C C6------SKIP NO-FLOW AND CONSTANT-HEAD CELLS. IF(IBOUND(J,I,K).LE.0) GO TO 300 HSING=HNEW(J,I,K) C C7-----CHECK LAYER TYPE TO SEE IF ONE STORAGE CAPACITY OR TWO. IF(LC.EQ.0) GO TO 285 C C7A----TWO STORAGE CAPACITIES. TP=BOTM(J,I,LBOTM(K)-1) RHO2=SC2(J,I,KT)*TLED RHO1=SC1(J,I,K)*TLED SOLD=RHO2 IF(HOLD(J,I,K).GT.TP) SOLD=RHO1 SNEW=RHO2 IF(HSING.GT.TP) SNEW=RHO1 STRG=SOLD*(HOLD(J,I,K)-TP) + SNEW*TP - SNEW*HSING GO TO 288 C C7B----ONE STORAGE CAPACITY. 285 RHO=SC1(J,I,K)*TLED STRG=RHO*HOLD(J,I,K) - RHO*HSING C C8-----STORE CELL-BY-CELL FLOW IN BUFFER AND ADD TO ACCUMULATORS. 288 BUFF(J,I,K)=STRG SSTRG=STRG IF(STRG.LT.ZERO) THEN STOUT=STOUT-SSTRG ELSE STOIN=STOIN+SSTRG END IF C 300 CONTINUE C C9-----IF IBD FLAG IS SET RECORD THE CONTENTS OF THE BUFFER. 
IF(IBD.EQ.1) CALL UBUDSV(KSTP,KPER,TEXT, 1 ILPFCB,BUFF,NCOL,NROW,NLAY,IOUT) IF(IBD.EQ.2) CALL UBDSV1(KSTP,KPER,TEXT,ILPFCB, 1 BUFF,NCOL,NROW,NLAY,IOUT,DELT,PERTIM,TOTIM,IBOUND) C C10-----ADD TOTAL RATES AND VOLUMES TO VBVL & PUT TITLE IN VBNM. 400 CONTINUE SIN=STOIN SOUT=STOUT VBVL(1,MSUM)=VBVL(1,MSUM)+SIN*DELT VBVL(2,MSUM)=VBVL(2,MSUM)+SOUT*DELT VBVL(3,MSUM)=SIN VBVL(4,MSUM)=SOUT VBNM(MSUM)=TEXT MSUM=MSUM+1 C C11----RETURN. RETURN END SUBROUTINE GWF2LPF7BDCH(KSTP,KPER,IGRID) C ****************************************************************** C COMPUTE FLOW FROM CONSTANT-HEAD CELLS C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,NLAY,IBOUND,HNEW,BUFF,CR,CC,CV, 1 BOTM,LBOTM,IOUT USE GWFBASMODULE,ONLY:MSUM,VBVL,VBNM,DELT,PERTIM,TOTIM,ICBCFL, 1 ICHFLG USE GWFLPFMODULE,ONLY:ILPFCB,LAYTYP,NOVFC CHARACTER*16 TEXT DOUBLE PRECISION HD,CHIN,CHOUT,XX1,XX2,XX3,XX4,XX5,XX6 C DATA TEXT /' CONSTANT HEAD'/ C ------------------------------------------------------------------ CALL SGWF2LPF7PNT(IGRID) C C1------SET IBD TO INDICATE IF CELL-BY-CELL BUDGET VALUES WILL BE SAVED. IBD=0 IF(ILPFCB.LT.0 .AND. ICBCFL.NE.0) IBD=-1 IF(ILPFCB.GT.0) IBD=ICBCFL C C2------CLEAR BUDGET ACCUMULATORS. ZERO=0. CHIN=ZERO CHOUT=ZERO IBDLBL=0 C C3------CLEAR BUFFER. DO 5 K=1,NLAY DO 5 I=1,NROW DO 5 J=1,NCOL BUFF(J,I,K)=ZERO 5 CONTINUE C C3A-----IF SAVING CELL-BY-CELL FLOW IN A LIST, COUNT CONSTANT-HEAD C3A-----CELLS AND WRITE HEADER RECORDS. IF(IBD.EQ.2) THEN NCH=0 DO 7 K=1,NLAY DO 7 I=1,NROW DO 7 J=1,NCOL IF(IBOUND(J,I,K).LT.0) NCH=NCH+1 7 CONTINUE CALL UBDSV2(KSTP,KPER,TEXT,ILPFCB,NCOL,NROW,NLAY, 1 NCH,IOUT,DELT,PERTIM,TOTIM,IBOUND) END IF C C4------LOOP THROUGH EACH CELL AND CALCULATE FLOW INTO MODEL FROM EACH C4------CONSTANT-HEAD CELL. DO 200 K=1,NLAY DO 200 I=1,NROW DO 200 J=1,NCOL C C5------IF CELL IS NOT CONSTANT HEAD SKIP IT & GO ON TO NEXT CELL. IF (IBOUND(J,I,K).GE.0)GO TO 200 C C6------CLEAR VALUES FOR FLOW RATE THROUGH EACH FACE OF CELL. X1=ZERO X2=ZERO X3=ZERO X4=ZERO X5=ZERO X6=ZERO CHCH1=ZERO CHCH2=ZERO CHCH3=ZERO CHCH4=ZERO CHCH5=ZERO CHCH6=ZERO C C7------CALCULATE FLOW THROUGH THE LEFT FACE. C7------COMMENTS A-C APPEAR ONLY IN THE SECTION HEADED BY COMMENT 7, C7------BUT THEY APPLY IN A SIMILAR MANNER TO SECTIONS 8-12. C C7A-----IF THERE IS NO FLOW TO CALCULATE THROUGH THIS FACE, THEN GO ON C7A-----TO NEXT FACE. NO FLOW OCCURS AT THE EDGE OF THE GRID, TO AN C7A-----ADJACENT NO-FLOW CELL, OR TO AN ADJACENT CONSTANT-HEAD CELL. IF(J.EQ.1) GO TO 30 IF(IBOUND(J-1,I,K).EQ.0) GO TO 30 IF(IBOUND(J-1,I,K).LT.0 .AND. ICHFLG.EQ.0) GO TO 30 C C7B-----CALCULATE FLOW THROUGH THIS FACE INTO THE ADJACENT CELL. HDIFF=HNEW(J,I,K)-HNEW(J-1,I,K) CHCH1=HDIFF*CR(J-1,I,K) IF(IBOUND(J-1,I,K).LT.0) GO TO 30 X1=CHCH1 XX1=X1 C C7C-----ACCUMULATE POSITIVE AND NEGATIVE FLOW. IF(X1.LT.ZERO) THEN CHOUT=CHOUT-XX1 ELSE CHIN=CHIN+XX1 END IF C C8------CALCULATE FLOW THROUGH THE RIGHT FACE. 30 IF(J.EQ.NCOL) GO TO 60 IF(IBOUND(J+1,I,K).EQ.0) GO TO 60 IF(IBOUND(J+1,I,K).LT.0 .AND. ICHFLG.EQ.0) GO TO 60 HDIFF=HNEW(J,I,K)-HNEW(J+1,I,K) CHCH2=HDIFF*CR(J,I,K) IF(IBOUND(J+1,I,K).LT.0) GO TO 60 X2=CHCH2 XX2=X2 IF(X2.LT.ZERO) THEN CHOUT=CHOUT-XX2 ELSE CHIN=CHIN+XX2 END IF C C9------CALCULATE FLOW THROUGH THE BACK FACE. 60 IF(I.EQ.1) GO TO 90 IF (IBOUND(J,I-1,K).EQ.0) GO TO 90 IF (IBOUND(J,I-1,K).LT.0 .AND. 
ICHFLG.EQ.0) GO TO 90 HDIFF=HNEW(J,I,K)-HNEW(J,I-1,K) CHCH3=HDIFF*CC(J,I-1,K) IF(IBOUND(J,I-1,K).LT.0) GO TO 90 X3=CHCH3 XX3=X3 IF(X3.LT.ZERO) THEN CHOUT=CHOUT-XX3 ELSE CHIN=CHIN+XX3 END IF C C10-----CALCULATE FLOW THROUGH THE FRONT FACE. 90 IF(I.EQ.NROW) GO TO 120 IF(IBOUND(J,I+1,K).EQ.0) GO TO 120 IF(IBOUND(J,I+1,K).LT.0 .AND. ICHFLG.EQ.0) GO TO 120 HDIFF=HNEW(J,I,K)-HNEW(J,I+1,K) CHCH4=HDIFF*CC(J,I,K) IF(IBOUND(J,I+1,K).LT.0) GO TO 120 X4=CHCH4 XX4=X4 IF(X4.LT.ZERO) THEN CHOUT=CHOUT-XX4 ELSE CHIN=CHIN+XX4 END IF C C11-----CALCULATE FLOW THROUGH THE UPPER FACE. 120 IF(K.EQ.1) GO TO 150 IF (IBOUND(J,I,K-1).EQ.0) GO TO 150 IF (IBOUND(J,I,K-1).LT.0 .AND. ICHFLG.EQ.0) GO TO 150 HD=HNEW(J,I,K) IF(NOVFC.NE.0 .OR. LAYTYP(K).EQ.0) GO TO 122 TMP=HD TOP=BOTM(J,I,LBOTM(K)-1) IF(TMP.LT.TOP) HD=TOP 122 HDIFF=HD-HNEW(J,I,K-1) CHCH5=HDIFF*CV(J,I,K-1) IF(IBOUND(J,I,K-1).LT.0) GO TO 150 X5=CHCH5 XX5=X5 IF(X5.LT.ZERO) THEN CHOUT=CHOUT-XX5 ELSE CHIN=CHIN+XX5 END IF C C12-----CALCULATE FLOW THROUGH THE LOWER FACE. 150 IF(K.EQ.NLAY) GO TO 180 IF(IBOUND(J,I,K+1).EQ.0) GO TO 180 IF(IBOUND(J,I,K+1).LT.0 .AND. ICHFLG.EQ.0) GO TO 180 HD=HNEW(J,I,K+1) IF(NOVFC.NE.0 .OR. LAYTYP(K+1).EQ.0) GO TO 152 TMP=HD TOP=BOTM(J,I,LBOTM(K+1)-1) IF(TMP.LT.TOP) HD=TOP 152 HDIFF=HNEW(J,I,K)-HD CHCH6=HDIFF*CV(J,I,K) IF(IBOUND(J,I,K+1).LT.0) GO TO 180 X6=CHCH6 XX6=X6 IF(X6.LT.ZERO) THEN CHOUT=CHOUT-XX6 ELSE CHIN=CHIN+XX6 END IF C C13-----SUM THE FLOWS THROUGH SIX FACES OF CONSTANT HEAD CELL, AND C13-----STORE SUM IN BUFFER. 180 RATE=CHCH1+CHCH2+CHCH3+CHCH4+CHCH5+CHCH6 BUFF(J,I,K)=RATE C C14-----PRINT THE FLOW FOR THE CELL IF REQUESTED. IF(IBD.LT.0) THEN IF(IBDLBL.EQ.0) WRITE(IOUT,899) TEXT,KPER,KSTP 899 FORMAT(1X,/1X,A,' PERIOD ',I4,' STEP ',I3) WRITE(IOUT,900) K,I,J,RATE 900 FORMAT(1X,'LAYER ',I3,' ROW ',I5,' COL ',I5, 1 ' RATE ',1PG15.6) IBDLBL=1 END IF C C15-----IF SAVING CELL-BY-CELL FLOW IN LIST, WRITE FLOW FOR CELL. IF(IBD.EQ.2) CALL UBDSVA(ILPFCB,NCOL,NROW,J,I,K,RATE,IBOUND,NLAY) 200 CONTINUE C C16-----IF SAVING CELL-BY-CELL FLOW IN 3-D ARRAY, WRITE THE ARRAY. IF(IBD.EQ.1) CALL UBUDSV(KSTP,KPER,TEXT, 1 ILPFCB,BUFF,NCOL,NROW,NLAY,IOUT) C C17-----SAVE TOTAL CONSTANT HEAD FLOWS AND VOLUMES IN VBVL TABLE C17-----FOR INCLUSION IN BUDGET. PUT LABELS IN VBNM TABLE. CIN=CHIN COUT=CHOUT VBVL(1,MSUM)=VBVL(1,MSUM)+CIN*DELT VBVL(2,MSUM)=VBVL(2,MSUM)+COUT*DELT VBVL(3,MSUM)=CIN VBVL(4,MSUM)=COUT VBNM(MSUM)=TEXT MSUM=MSUM+1 C C18-----RETURN. RETURN END SUBROUTINE SGWF2LPF7SC(SC,K,ISPST) C ****************************************************************** C COMPUTE STORAGE CAPACITY C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,DELR,DELC,BOTM,LBOTM,LAYCBD C DIMENSION SC(NCOL,NROW) C ------------------------------------------------------------------ C C1------MULTIPLY SPECIFIC STORAGE BY THICKNESS, DELR, AND DELC TO GET C1------CONFINED STORAGE CAPACITY. IF(ISPST.NE.0) THEN DO 80 I=1,NROW DO 80 J=1,NCOL THICK=BOTM(J,I,LBOTM(K)-1)-BOTM(J,I,LBOTM(K)) SC(J,I)=SC(J,I)*THICK*DELR(J)*DELC(I) 80 CONTINUE ELSE C C2------MULTIPLY SPECIFIC YIELD BY DELR AND DELC TO GET UNCONFINED C2------STORAGE CAPACITY(SC2). DO 85 I=1,NROW DO 85 J=1,NCOL SC(J,I)=SC(J,I)*DELR(J)*DELC(I) 85 CONTINUE END IF C RETURN END SUBROUTINE SGWF2LPF7HCOND(K,KITER,KSTP,KPER) C ****************************************************************** C COMPUTE HORIZONTAL BRANCH CONDUCTANCE FOR ONE LAYER. 
C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:IOUT,NCOL,NROW,IBOUND,HNEW,BOTM,NBOTM, 1 LBOTM,CC,STRT USE GWFBASMODULE,ONLY:HDRY USE GWFLPFMODULE,ONLY:LAYWET,IWETIT,LAYTYP,LAYAVG,LAYSTRT C CHARACTER*3 ACNVRT DIMENSION ICNVRT(5),JCNVRT(5),ACNVRT(5) C C ------------------------------------------------------------------ C1------INITIALIZE DATA. ZERO=0. NCNVRT=0 IHDCNV=0 C C2------IF LAYER IS WETTABLE CONVERT DRY CELLS TO WET WHEN APPROPRIATE. ITFLG=1 IF(LAYWET(K).NE.0) ITFLG=MOD(KITER,IWETIT) IF(ITFLG.EQ.0) CALL SGWF2LPF7WET(K,KITER,KSTP,KPER, 2 IHDCNV,NCNVRT,ICNVRT,JCNVRT,ACNVRT) C C3------LOOP THROUGH EACH CELL, AND CALCULATE SATURATED THICKNESS. DO 200 I=1,NROW DO 200 J=1,NCOL C C3A-----SET SATURATED THICKNESS=0. FOR DRY CELLS. IF(IBOUND(J,I,K).EQ.0) THEN CC(J,I,K)=ZERO ELSE C C3B-----CALCULATE SATURATED THICKNESS FOR A WET CELL. BBOT=BOTM(J,I,LBOTM(K)) IF(LAYSTRT(K).NE.0) THEN TTOP=STRT(J,I,K) IF(BBOT.GT.TTOP) THEN WRITE(IOUT,33) K,I,J 33 FORMAT(1X,/1X,'Negative cell thickness at (layer,row,col)', 1 I4,',',I5,',',I5) WRITE(IOUT,34) TTOP,BBOT 34 FORMAT(1X,'Initial head, bottom elevation:',1P,2G13.5) CALL USTOP(' ') END IF ELSE TTOP=BOTM(J,I,LBOTM(K)-1) IF(BBOT.GT.TTOP) THEN WRITE(IOUT,35) K,I,J 35 FORMAT(1X,/1X,'Negative cell thickness at (layer,row,col)', 1 I4,',',I5,',',I5) WRITE(IOUT,36) TTOP,BBOT 36 FORMAT(1X,'Top elevation, bottom elevation:',1P,2G13.5) CALL USTOP(' ') END IF END IF IF(LAYTYP(K).NE.0) THEN HHD=HNEW(J,I,K) IF(HHD.LT.TTOP) TTOP=HHD END IF THCK=TTOP-BBOT CC(J,I,K)=THCK C C C3C-----WHEN SATURATED THICKNESS <= 0, PRINT A MESSAGE AND SET C3C-----HNEW=HDRY, SATURATED THICKNESS=0.0, AND IBOUND=0. IF(THCK.LE.ZERO) THEN CALL SGWF2LPF7WDMSG(1,NCNVRT,ICNVRT,JCNVRT,ACNVRT,IHDCNV, 1 IOUT,KITER,J,I,K,KSTP,KPER,NCOL,NROW) HNEW(J,I,K)=HDRY CC(J,I,K)=ZERO IF(IBOUND(J,I,K).LT.0) THEN WRITE(IOUT,151) 151 FORMAT(1X,/1X,'CONSTANT-HEAD CELL WENT DRY', 1 ' -- SIMULATION ABORTED') WRITE(IOUT,*) TTOP, BBOT, THCK WRITE(IOUT,152) K,I,J,KITER,KSTP,KPER 152 FORMAT(1X,'LAYER=',I3,' ROW=',I5,' COLUMN=',I5, 1 ' ITERATION=',I3,' TIME STEP=',I3,' STRESS PERIOD=',I4) CALL USTOP(' ') END IF IBOUND(J,I,K)=0 END IF END IF 200 CONTINUE C C4------PRINT ANY REMAINING CELL CONVERSIONS NOT YET PRINTED. CALL SGWF2LPF7WDMSG(0,NCNVRT,ICNVRT,JCNVRT,ACNVRT,IHDCNV, 1 IOUT,KITER,J,I,K,KSTP,KPER,NCOL,NROW) C C5------CHANGE IBOUND VALUE FOR CELLS THAT CONVERTED TO WET THIS C5------ITERATION FROM 30000 to 1. IF(LAYWET(K).NE.0) THEN DO 205 I=1,NROW DO 205 J=1,NCOL IF(IBOUND(J,I,K).EQ.30000) IBOUND(J,I,K)=1 205 CONTINUE END IF C C6------COMPUTE HORIZONTAL BRANCH CONDUCTANCES FROM CELL HYDRAULIC C6------CONDUCTIVITY, SATURATED THICKNESS, AND GRID DIMENSIONS. IF(LAYAVG(K).EQ.0) THEN CALL SGWF2LPF7HHARM(K) ELSE IF(LAYAVG(K).EQ.1) THEN CALL SGWF2LPF7HLOG(K) ELSE CALL SGWF2LPF7HUNCNF(K) END IF C C7------RETURN. RETURN END SUBROUTINE SGWF2LPF7WET(K,KITER,KSTP,KPER,IHDCNV,NCNVRT, 1 ICNVRT,JCNVRT,ACNVRT) C C ****************************************************************** C CONVERT DRY CELLS TO WET. 
C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:IOUT,NCOL,NROW,NLAY,HNEW,IBOUND,BOTM,LBOTM USE GWFLPFMODULE, ONLY:LAYTYP,CHANI,LAYVKA,LAYWET,WETDRY, 1 WETFCT,IHDWET C CHARACTER*3 ACNVRT DIMENSION ICNVRT(5),JCNVRT(5),ACNVRT(5) C ------------------------------------------------------------------ C C1------LOOP THROUGH ALL CELLS. ZERO=0.0 DO 100 I=1,NROW DO 100 J=1,NCOL C C2------IF CELL IS DRY AND IF IT IS WETTABLE, CONTINUE CHECKING TO SEE C2------IF IT SHOULD BECOME WET. IF(IBOUND(J,I,K).EQ.0 .AND. WETDRY(J,I,LAYWET(K)).NE.ZERO) THEN C C3------CALCULATE WETTING ELEVATION. WD=WETDRY(J,I,LAYWET(K)) IF(WD.LT.ZERO) WD=-WD TURNON=BOTM(J,I,LBOTM(K))+WD C C4------CHECK HEAD IN CELL BELOW TO SEE IF WETTING ELEVATION HAS BEEN C4------REACHED. IF(K.NE.NLAY) THEN HTMP=HNEW(J,I,K+1) IF(IBOUND(J,I,K+1).GT.0 .AND. HTMP.GE.TURNON) GO TO 50 END IF C C5------CHECK HEAD IN ADJACENT HORIZONTAL CELLS TO SEE IF WETTING C5------ELEVATION HAS BEEN REACHED. IF(WETDRY(J,I,LAYWET(K)).GT.ZERO) THEN IF(J.NE.1) THEN HTMP=HNEW(J-1,I,K) IF(IBOUND(J-1,I,K).GT.0 .AND. IBOUND(J-1,I,K).NE.30000. 1 AND. HTMP.GE.TURNON) GO TO 50 END IF IF(J.NE.NCOL) THEN HTMP=HNEW(J+1,I,K) IF(IBOUND(J+1,I,K).GT.0 .AND. HTMP.GE.TURNON) GO TO 50 END IF IF(I.NE.1) THEN HTMP=HNEW(J,I-1,K) IF(IBOUND(J,I-1,K).GT.0 .AND. IBOUND(J,I-1,K).NE.30000. 1 AND. HTMP.GE.TURNON) GO TO 50 END IF IF(I.NE.NROW) THEN HTMP=HNEW(J,I+1,K) IF(IBOUND(J,I+1,K).GT.0 .AND. HTMP.GE.TURNON) GO TO 50 END IF END IF C C6------WETTING ELEVATION HAS NOT BEEN REACHED, SO CELL REMAINS DRY. GO TO 100 C C7------CELL BECOMES WET. PRINT MESSAGE, SET INITIAL HEAD, AND SET C7------IBOUND. 50 CALL SGWF2LPF7WDMSG(2,NCNVRT,ICNVRT,JCNVRT,ACNVRT,IHDCNV, 1 IOUT,KITER,J,I,K,KSTP,KPER,NCOL,NROW) C C7A-----USE EQUATION 3A IF IHDWET=0; USE EQUATION 3B IF IHDWET IS NOT 0. IF(IHDWET.EQ.0) THEN HNEW(J,I,K)=BOTM(J,I,LBOTM(K))+ 1 WETFCT*(HTMP-BOTM(J,I,LBOTM(K))) ELSE HNEW(J,I,K)=BOTM(J,I,LBOTM(K))+WETFCT*WD END IF IBOUND(J,I,K)=30000 END IF C C8------END OF LOOP FOR ALL CELLS IN LAYER. 100 CONTINUE C C9------RETURN. RETURN END SUBROUTINE SGWF2LPF7WDMSG(ICODE,NCNVRT,ICNVRT,JCNVRT,ACNVRT, 1 IHDCNV,IOUT,KITER,J,I,K,KSTP,KPER,NCOL,NROW) C ****************************************************************** C PRINT MESSAGE WHEN CELLS CONVERT BETWEEN WET AND DRY. C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ CHARACTER*3 ACNVRT DIMENSION ICNVRT(5),JCNVRT(5),ACNVRT(5) C ------------------------------------------------------------------ C C1------KEEP TRACK OF CELL CONVERSIONS. IF(ICODE.GT.0) THEN NCNVRT=NCNVRT+1 ICNVRT(NCNVRT)=I JCNVRT(NCNVRT)=J IF(ICODE.EQ.1) THEN ACNVRT(NCNVRT)='DRY' ELSE ACNVRT(NCNVRT)='WET' END IF END IF C C2------PRINT A LINE OF DATA IF 5 CONVERSIONS HAVE OCCURRED OR IF ICODE C2------INDICATES THAT A PARTIAL LINE SHOULD BE PRINTED. IF(NCNVRT.EQ.5 .OR. (ICODE.EQ.0 .AND. NCNVRT.GT.0)) THEN IF(IHDCNV.EQ.0) WRITE(IOUT,17) KITER,K,KSTP,KPER 17 FORMAT(1X,/1X,'CELL CONVERSIONS FOR ITER.=',I3,' LAYER=', 1 I3,' STEP=',I3,' PERIOD=',I4,' (ROW,COL)') IHDCNV=1 IF (NROW.LE.999 .AND. NCOL.LE.999) THEN WRITE(IOUT,18) (ACNVRT(L),ICNVRT(L),JCNVRT(L),L=1,NCNVRT) 18 FORMAT(1X,3X,5(A,'(',I3,',',I3,') ')) ELSE WRITE(IOUT,19) (ACNVRT(L),ICNVRT(L),JCNVRT(L),L=1,NCNVRT) 19 FORMAT(1X,2X,5(A,'(',I5,',',I5,')')) ENDIF NCNVRT=0 END IF C C3------RETURN. 
RETURN END SUBROUTINE SGWF2LPF7HHARM(K) C ****************************************************************** C COMPUTE HORIZONTAL BRANCH CONDUCTANCE USING HARMONIC MEAN OF BLOCK C CONDUCTANCES (DISTANCE WEIGHTED HARMONIC MEAN OF TRANSMISSIVITY). C CELL THICKNESS IS IN CC UPON ENTRY. C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,IBOUND,CR,CC,DELR,DELC USE GWFLPFMODULE,ONLY:HK,CHANI,HANI C ------------------------------------------------------------------ C ZERO=0. TWO=2. C C1------FOR EACH CELL CALCULATE BRANCH CONDUCTANCES FROM THAT CELL C1------TO THE ONE ON THE RIGHT AND THE ONE IN FRONT. DO 100 I=1,NROW DO 100 J=1,NCOL C C2------IF CELL IS DRY OR HK=0., SET CONDUCTANCE EQUAL TO 0 AND GO ON C2------TO NEXT CELL. IF(IBOUND(J,I,K).EQ.0 .OR. HK(J,I,K).EQ.ZERO) THEN CR(J,I,K)=ZERO CC(J,I,K)=ZERO ELSE C C3------CELL IS WET -- CALCULATE TRANSMISSIVITY OF CELL. T1=HK(J,I,K)*CC(J,I,K) C3A-----IF THIS IS NOT THE LAST COLUMN (RIGHTMOST), CALCULATE C3A-----BRANCH CONDUCTANCE IN THE ROW DIRECTION (CR) TO THE RIGHT. IF(J.NE.NCOL) THEN IF(IBOUND(J+1,I,K).NE.0) THEN T2=HK(J+1,I,K)*CC(J+1,I,K) CR(J,I,K)=TWO*T2*T1*DELC(I)/(T1*DELR(J+1)+T2*DELR(J)) ELSE CR(J,I,K)=ZERO END IF ELSE C3B-----IF THIS IS THE LAST COLUMN, SET BRANCH CONDUCTANCE=0. CR(J,I,K)=ZERO END IF C C3C-----IF THIS IS NOT THE LAST ROW (FRONTMOST) THEN CALCULATE C3C-----BRANCH CONDUCTANCE IN THE COLUMN DIRECTION (CC) TO THE FRONT. IF(I.NE.NROW) THEN IF(IBOUND(J,I+1,K).NE.0) THEN T2=HK(J,I+1,K)*CC(J,I+1,K) IF(CHANI(K).LE.ZERO) THEN KHANI=-CHANI(K) T1=T1*HANI(J,I,KHANI) T2=T2*HANI(J,I+1,KHANI) ELSE T1=T1*CHANI(K) T2=T2*CHANI(K) END IF CC(J,I,K)=TWO*T2*T1*DELR(J)/(T1*DELC(I+1)+T2*DELC(I)) ELSE C3D-----IF THIS IS THE LAST ROW, SET BRANCH CONDUCTANCE=0. CC(J,I,K)=ZERO END IF ELSE CC(J,I,K)=ZERO END IF END IF 100 CONTINUE C C4------RETURN RETURN END SUBROUTINE SGWF2LPF7HLOG(K) C ****************************************************************** C-----COMPUTE HORIZONTAL CONDUCTANCE USING LOGARITHMIC MEAN C-----TRANSMISSIVITY -- ACTIVATED BY LAYAVG=1 C-----CELL SATURATED THICKNESS IS IN CC. C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,IBOUND,CR,CC,DELR,DELC USE GWFLPFMODULE,ONLY:HK,CHANI,HANI C ------------------------------------------------------------------ C ZERO=0. TWO=2. HALF=0.5 FRAC1=1.005 FRAC2=0.995 C C1------FOR EACH CELL CALCULATE BRANCH CONDUCTANCES FROM THAT CELL C1------TO THE ONE ON THE RIGHT AND THE ONE IN FRONT. DO 100 I=1,NROW DO 100 J=1,NCOL C C2------IF CELL IS DRY OR HK=0., SET CONDUCTANCE EQUAL TO 0 AND GO ON C2------TO NEXT CELL. IF(IBOUND(J,I,K).EQ.0 .OR. HK(J,I,K).EQ.ZERO) THEN CR(J,I,K)=ZERO CC(J,I,K)=ZERO ELSE C C3------CELL IS WET -- CALCULATE TRANSMISSIVITY OF CELL. T1=HK(J,I,K)*CC(J,I,K) C3A-----IF THIS IS NOT THE LAST COLUMN(RIGHTMOST) THEN CALCULATE C3A-----BRANCH CONDUCTANCE IN THE ROW DIRECTION (CR) TO THE RIGHT. IF(J.NE.NCOL) THEN IF(IBOUND(J+1,I,K).NE.0) THEN C3A1----LOGARITHMIC MEAN INTERBLOCK TRANSMISSIVITY T2=HK(J+1,I,K)*CC(J+1,I,K) RATIO=T2/T1 IF(RATIO.GT.FRAC1 .OR. 
RATIO.LT.FRAC2) THEN T=(T2-T1)/LOG(RATIO) ELSE T=HALF*(T1+T2) END IF CR(J,I,K)=TWO*DELC(I)*T/(DELR(J+1)+DELR(J)) ELSE CR(J,I,K)=ZERO END IF ELSE CR(J,I,K)=ZERO END IF C C3B-----IF THIS IS NOT THE LAST ROW (FRONTMOST) THEN CALCULATE C3B-----BRANCH CONDUCTANCE IN THE COLUMN DIRECTION (CC) TO THE FRONT. IF(I.NE.NROW) THEN IF(IBOUND(J,I+1,K).NE.0) THEN T2=HK(J,I+1,K)*CC(J,I+1,K) IF(CHANI(K).LE.ZERO) THEN KHANI=-CHANI(K) T1=T1*HANI(J,I,KHANI) T2=T2*HANI(J,I+1,KHANI) ELSE T1=T1*CHANI(K) T2=T2*CHANI(K) END IF RATIO=T2/T1 IF(RATIO.GT.FRAC1 .OR. RATIO.LT.FRAC2) THEN T=(T2-T1)/LOG(RATIO) ELSE T=HALF*(T1+T2) END IF CC(J,I,K)=TWO*DELR(J)*T/(DELC(I+1)+DELC(I)) ELSE CC(J,I,K)=ZERO END IF ELSE CC(J,I,K)=ZERO END IF END IF 100 CONTINUE C C4------RETURN RETURN END SUBROUTINE SGWF2LPF7HUNCNF(K) C ****************************************************************** C-----COMPUTE HORIZONTAL CONDUCTANCE USING ARITHMETIC MEAN SATURATED C-----THICKNESS AND LOGARITHMIC MEAN HYDRAULIC CONDUCTIVITY. C-----CELL SATURATED THICKNESS IS IN CC. C-----ACTIVATED BY LAYAVG=2 C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,IBOUND,CR,CC,DELR,DELC USE GWFLPFMODULE,ONLY:HK,CHANI,HANI C ------------------------------------------------------------------ C ZERO=0. HALF=0.5 FRAC1=1.005 FRAC2=0.995 C C1------FOR EACH CELL CALCULATE BRANCH CONDUCTANCES FROM THAT CELL C1------TO THE ONE ON THE RIGHT AND THE ONE IN FRONT. DO 100 I=1,NROW DO 100 J=1,NCOL C C2------IF CELL IS DRY OR HK=0., SET CONDUCTANCE EQUAL TO 0 AND GO ON C2------TO NEXT CELL. IF(IBOUND(J,I,K).EQ.0 .OR. HK(J,I,K).EQ.ZERO) THEN CR(J,I,K)=ZERO CC(J,I,K)=ZERO ELSE C C3------CELL IS WET -- CALCULATE TRANSMISSIVITY OF CELL. HYC1=HK(J,I,K) C3A-----IF THIS IS NOT THE LAST COLUMN(RIGHTMOST) THEN CALCULATE C3A-----BRANCH CONDUCTANCE IN THE ROW DIRECTION (CR) TO THE RIGHT. IF(J.NE.NCOL) THEN IF(IBOUND(J+1,I,K).NE.0) THEN C3A1----LOGARITHMIC MEAN HYDRAULIC CONDUCTIVITY HYC2=HK(J+1,I,K) RATIO=HYC2/HYC1 IF(RATIO.GT.FRAC1 .OR. RATIO.LT.FRAC2) THEN HYC=(HYC2-HYC1)/LOG(RATIO) ELSE HYC=HALF*(HYC1+HYC2) END IF C3A2----MULTIPLY LOGARITHMIC K BY ARITHMETIC SATURATED THICKNESS. CR(J,I,K)=DELC(I)*HYC*(CC(J,I,K)+CC(J+1,I,K))/ 1 (DELR(J+1)+DELR(J)) ELSE CR(J,I,K)=ZERO END IF ELSE CR(J,I,K)=ZERO END IF C C3B-----IF THIS IS NOT THE LAST ROW (FRONTMOST) THEN CALCULATE C3B-----BRANCH CONDUCTANCE IN THE COLUMN DIRECTION (CC) TO THE FRONT. IF(I.NE.NROW) THEN IF(IBOUND(J,I+1,K).NE.0) THEN C3B1----LOGARITHMIC MEAN HYDRAULIC CONDUCTIVITY HYC2=HK(J,I+1,K) IF(CHANI(K).LE.ZERO) THEN KHANI=-CHANI(K) HYC1=HYC1*HANI(J,I,KHANI) HYC2=HYC2*HANI(J,I+1,KHANI) ELSE HYC1=HYC1*CHANI(K) HYC2=HYC2*CHANI(K) END IF RATIO=HYC2/HYC1 IF(RATIO.GT.FRAC1 .OR. RATIO.LT.FRAC2) THEN HYC=(HYC2-HYC1)/LOG(RATIO) ELSE HYC=HALF*(HYC1+HYC2) END IF C3B2----MULTIPLY LOGARITHMIC K BY ARITHMETIC SATURATED THICKNESS. CC(J,I,K)=DELR(J)*HYC*(CC(J,I,K)+CC(J,I+1,K))/ 1 (DELC(I+1)+DELC(I)) ELSE CC(J,I,K)=ZERO END IF ELSE CC(J,I,K)=ZERO END IF END IF 100 CONTINUE C C4------RETURN. RETURN END SUBROUTINE SGWF2LPF7VCOND(K) C ****************************************************************** C COMPUTE VERTICAL BRANCH CONDUCTANCE BETWEEN A LAYER AND THE NEXT C LOWER LAYER FROM VERTICAL HYDRAULIC CONDUCTIVITY. 
C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GLOBAL, ONLY:NCOL,NROW,NLAY,IBOUND,HNEW,CV,DELR,DELC, 1 BOTM,LBOTM,LAYCBD,IOUT,STRT USE GWFLPFMODULE, ONLY:LAYTYP,LAYAVG,CHANI,LAYVKA,LAYWET, 1 HK,VKA,VKCB,NOCVCO,ICONCV,LAYSTRT C DOUBLE PRECISION BBOT,TTOP,HHD C ------------------------------------------------------------------ C IF(K.EQ.NLAY) RETURN ZERO=0. HALF=0.5 C C1------LOOP THROUGH ALL CELLS IN THE LAYER. DO 100 I=1,NROW DO 100 J=1,NCOL CV(J,I,K)=ZERO IF(IBOUND(J,I,K).NE.0 .AND. IBOUND(J,I,K+1).NE.0) THEN C C2------CALCULATE VERTICAL HYDRAULIC CONDUCTIVITY FOR CELL. IF(LAYVKA(K).EQ.0) THEN HYC1=VKA(J,I,K) ELSE HYC1=HK(J,I,K)/VKA(J,I,K) END IF IF(HYC1.GT.ZERO) THEN C3------CALCULATE VERTICAL HYDRAULIC CONDUCTIVITY FOR CELL BELOW. IF(LAYVKA(K+1).EQ.0) THEN HYC2=VKA(J,I,K+1) ELSE HYC2=(HK(J,I,K+1)/VKA(J,I,K+1)) END IF IF(HYC2.GT.ZERO) THEN C C4------CALCULATE INVERSE LEAKANCE FOR CELL. ICONCV FLAG PREVENTS C4------CV FROM BEING HEAD DEPENDENT. BBOT=BOTM(J,I,LBOTM(K)) TTOP=BOTM(J,I,LBOTM(K)-1) IF(LAYSTRT(K).NE.0) TTOP=STRT(J,I,K) IF(LAYTYP(K).NE.0 .AND. ICONCV.EQ.0) THEN HHD=HNEW(J,I,K) IF(HHD.LT.TTOP) TTOP=HHD END IF BOVK1=(TTOP-BBOT)*HALF/HYC1 C C5------CALCULATE INVERSE LEAKANCE FOR CELL BELOW. BBOT=BOTM(J,I,LBOTM(K+1)) TTOP=BOTM(J,I,LBOTM(K+1)-1) IF(LAYSTRT(K+1).NE.0) TTOP=STRT(J,I,K+1) B=(TTOP-BBOT)*HALF C C5A-----IF CELL BELOW IS NOT SATURATED, DO NOT INCLUDE ITS CONDUCTANCE C5A-----IN THE VERTICAL CONDUCTANCE CALULATION, EXCEPT THAT THE NOCVCO C5A-----AND ICONCV FLAGS TURN OFF THIS CORRECTION. IF(LAYTYP(K+1).NE.0 1 .AND.NOCVCO.EQ.0 .AND. ICONCV.EQ.0) THEN HHD=HNEW(J,I,K+1) IF(HHD.LT.TTOP) B=ZERO END IF BOVK2=B/HYC2 C C6------CALCULATE VERTICAL HYDRAULIC CONDUCTIVITY FOR CONFINING BED. IF(LAYCBD(K).NE.0) THEN IF(VKCB(J,I,LAYCBD(K)).GT.ZERO) THEN C C7------CALCULATE INVERSE LEAKANCE FOR CONFINING BED. B=BOTM(J,I,LBOTM(K))-BOTM(J,I,LBOTM(K)+1) IF(B.LT.ZERO) THEN WRITE(IOUT,45) K,I,J 45 FORMAT(1X,/1X, 1 'Negative confining bed thickness below cell (Layer,row,col)', 2 I4,',',I5,',',I5) WRITE(IOUT,46) BOTM(J,I,LBOTM(K)),BOTM(J,I,LBOTM(K)+1) 46 FORMAT(1X,'Top elevation, bottom elevation:',1P,2G13.5) CALL USTOP(' ') END IF CBBOVK=B/VKCB(J,I,LAYCBD(K)) CV(J,I,K)=DELR(J)*DELC(I)/(BOVK1+CBBOVK+BOVK2) END IF ELSE CV(J,I,K)=DELR(J)*DELC(I)/(BOVK1+BOVK2) END IF END IF END IF END IF 100 CONTINUE C C8------RETURN. RETURN END SUBROUTINE SGWF2LPF7CK(IOUT,NP,PTYP) C ****************************************************************** C CHECK THAT JUST-DEFINED PARAMETER OF TYPE 'VK' OR 'VANI' IS USED C CONSISTENTLY WITH LAYVKA ENTRIES FOR LAYERS LISTED IN CLUSTERS FOR C THE PARAMETER C ****************************************************************** C C SPECIFICATIONS: C ------------------------------------------------------------------ USE GWFLPFMODULE, ONLY:LAYTYP,LAYAVG,CHANI,LAYVKA,LAYWET USE PARAMMODULE C CHARACTER*4 PTYP C ------------------------------------------------------------------ C C1------LOOP THROUGH THE CLUSTERS FOR THIS PARAMETER. DO 10 ICL = IPLOC(1,NP),IPLOC(2,NP) LAY = IPCLST(1,ICL) LV = LAYVKA(LAY) IF (PTYP.EQ.'VK ' .AND. 
LV.NE.0) THEN WRITE (IOUT,590) LAY,LV,LAY,PARNAM(NP),'VK' 590 FORMAT(/, &1X,'LAYVKA entered for layer ',i3,' is: ',i3,'; however,', &' layer ',i3,' is',/,' listed in a cluster for parameter "',a, &'" of type ',a,' and') WRITE (IOUT,600) 600 FORMAT( &1X,'parameters of type VK can apply only to layers for which', &/,' LAYVKA is specified as zero -- STOP EXECUTION (SGWF2LPF7CK)') CALL USTOP(' ') ELSEIF (PTYP.EQ.'VANI' .AND. LV.EQ.0) THEN WRITE (IOUT,590) LAY,LV,LAY,PARNAM(NP),'VANI' WRITE (IOUT,610) 610 FORMAT( &1X,'parameters of type VANI can apply only to layers for which',/, &' LAYVKA is not specified as zero -- STOP EXECUTION', &' (SGWF2LPF7CK)') CALL USTOP(' ') ENDIF 10 CONTINUE C C2------Return. RETURN END SUBROUTINE GWF2LPF7DA(IGRID) C Deallocate LPF DATA USE GWFLPFMODULE C DEALLOCATE(GWFLPFDAT(IGRID)%ILPFCB) DEALLOCATE(GWFLPFDAT(IGRID)%IWDFLG) DEALLOCATE(GWFLPFDAT(IGRID)%IWETIT) DEALLOCATE(GWFLPFDAT(IGRID)%IHDWET) DEALLOCATE(GWFLPFDAT(IGRID)%ISFAC) DEALLOCATE(GWFLPFDAT(IGRID)%ICONCV) DEALLOCATE(GWFLPFDAT(IGRID)%ITHFLG) DEALLOCATE(GWFLPFDAT(IGRID)%NOCVCO) DEALLOCATE(GWFLPFDAT(IGRID)%NOVFC) DEALLOCATE(GWFLPFDAT(IGRID)%WETFCT) DEALLOCATE(GWFLPFDAT(IGRID)%LAYTYP) DEALLOCATE(GWFLPFDAT(IGRID)%LAYAVG) DEALLOCATE(GWFLPFDAT(IGRID)%CHANI) DEALLOCATE(GWFLPFDAT(IGRID)%LAYVKA) DEALLOCATE(GWFLPFDAT(IGRID)%LAYWET) DEALLOCATE(GWFLPFDAT(IGRID)%LAYSTRT) DEALLOCATE(GWFLPFDAT(IGRID)%LAYFLG) DEALLOCATE(GWFLPFDAT(IGRID)%VKA) DEALLOCATE(GWFLPFDAT(IGRID)%VKCB) DEALLOCATE(GWFLPFDAT(IGRID)%SC1) DEALLOCATE(GWFLPFDAT(IGRID)%SC2) DEALLOCATE(GWFLPFDAT(IGRID)%HANI) DEALLOCATE(GWFLPFDAT(IGRID)%WETDRY) DEALLOCATE(GWFLPFDAT(IGRID)%HK) C RETURN END SUBROUTINE SGWF2LPF7PNT(IGRID) C Point to LPF data for a grid. USE GWFLPFMODULE C ILPFCB=>GWFLPFDAT(IGRID)%ILPFCB IWDFLG=>GWFLPFDAT(IGRID)%IWDFLG IWETIT=>GWFLPFDAT(IGRID)%IWETIT IHDWET=>GWFLPFDAT(IGRID)%IHDWET ISFAC=>GWFLPFDAT(IGRID)%ISFAC ICONCV=>GWFLPFDAT(IGRID)%ICONCV ITHFLG=>GWFLPFDAT(IGRID)%ITHFLG NOCVCO=>GWFLPFDAT(IGRID)%NOCVCO NOVFC=>GWFLPFDAT(IGRID)%NOVFC WETFCT=>GWFLPFDAT(IGRID)%WETFCT LAYTYP=>GWFLPFDAT(IGRID)%LAYTYP LAYAVG=>GWFLPFDAT(IGRID)%LAYAVG CHANI=>GWFLPFDAT(IGRID)%CHANI LAYVKA=>GWFLPFDAT(IGRID)%LAYVKA LAYWET=>GWFLPFDAT(IGRID)%LAYWET LAYSTRT=>GWFLPFDAT(IGRID)%LAYSTRT LAYFLG=>GWFLPFDAT(IGRID)%LAYFLG VKA=>GWFLPFDAT(IGRID)%VKA VKCB=>GWFLPFDAT(IGRID)%VKCB SC1=>GWFLPFDAT(IGRID)%SC1 SC2=>GWFLPFDAT(IGRID)%SC2 HANI=>GWFLPFDAT(IGRID)%HANI WETDRY=>GWFLPFDAT(IGRID)%WETDRY HK=>GWFLPFDAT(IGRID)%HK C RETURN END SUBROUTINE GWF2LPF7PSV(IGRID) C Save LPF data for a grid. USE GWFLPFMODULE C GWFLPFDAT(IGRID)%ILPFCB=>ILPFCB GWFLPFDAT(IGRID)%IWDFLG=>IWDFLG GWFLPFDAT(IGRID)%IWETIT=>IWETIT GWFLPFDAT(IGRID)%IHDWET=>IHDWET GWFLPFDAT(IGRID)%ISFAC=>ISFAC GWFLPFDAT(IGRID)%ICONCV=>ICONCV GWFLPFDAT(IGRID)%ITHFLG=>ITHFLG GWFLPFDAT(IGRID)%NOCVCO=>NOCVCO GWFLPFDAT(IGRID)%NOVFC=>NOVFC GWFLPFDAT(IGRID)%WETFCT=>WETFCT GWFLPFDAT(IGRID)%LAYTYP=>LAYTYP GWFLPFDAT(IGRID)%LAYAVG=>LAYAVG GWFLPFDAT(IGRID)%CHANI=>CHANI GWFLPFDAT(IGRID)%LAYVKA=>LAYVKA GWFLPFDAT(IGRID)%LAYWET=>LAYWET GWFLPFDAT(IGRID)%LAYSTRT=>LAYSTRT GWFLPFDAT(IGRID)%LAYFLG=>LAYFLG GWFLPFDAT(IGRID)%VKA=>VKA GWFLPFDAT(IGRID)%VKCB=>VKCB GWFLPFDAT(IGRID)%SC1=>SC1 GWFLPFDAT(IGRID)%SC2=>SC2 GWFLPFDAT(IGRID)%HANI=>HANI GWFLPFDAT(IGRID)%WETDRY=>WETDRY GWFLPFDAT(IGRID)%HK=>HK C RETURN END
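For reference (not part of the MODFLOW source above): the three horizontal-conductance subroutines SGWF2LPF7HHARM, SGWF2LPF7HLOG, and SGWF2LPF7HUNCNF implement three interblock-averaging schemes. Writing $T_j = K_j b_j$ for transmissivity (the saturated thickness $b$ is held in CC on entry), the row-direction branch conductances they compute are

$$\text{LAYAVG}=0\ \text{(harmonic mean):}\qquad CR_{j+\frac12} = \frac{2\,\Delta c_i\, T_j T_{j+1}}{T_j\,\Delta r_{j+1} + T_{j+1}\,\Delta r_j}$$

$$\text{LAYAVG}=1\ \text{(logarithmic mean):}\qquad CR_{j+\frac12} = \frac{2\,\Delta c_i}{\Delta r_j + \Delta r_{j+1}}\cdot\frac{T_{j+1}-T_j}{\ln(T_{j+1}/T_j)}$$

$$\text{LAYAVG}=2\ \text{(log-mean $K$, arithmetic thickness):}\qquad CR_{j+\frac12} = \frac{\Delta c_i\,(b_j + b_{j+1})}{\Delta r_j + \Delta r_{j+1}}\cdot\frac{K_{j+1}-K_j}{\ln(K_{j+1}/K_j)}$$

with the logarithmic mean replaced by the arithmetic mean $\tfrac12(T_j+T_{j+1})$ when the ratio lies within 0.995–1.005, which is exactly the FRAC1/FRAC2 test in the code. The column-direction conductances CC are analogous with $\Delta r$ and $\Delta c$ exchanged and the anisotropy factor (CHANI/HANI) applied.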
\hypertarget{naomi-and-ruth}{% \subsection{Naomi and Ruth}\label{naomi-and-ruth}} \hypertarget{section}{% \section{1}\label{section}} \bibverse{1} In the time when the judges ruled, there was once a famine in the land. A man from Bethlehem in Judah took his wife and two sons to live in the territory of Moab. \bibverse{2} His name was Elimelech and his wife's was Naomi, and his two sons were Mahlon and Chilion. They were Ephrathites from Bethlehem in Judah. After they had been living in Moab for some time, \bibverse{3} Elimelech died, and Naomi was left with her two sons, \bibverse{4} who married Moabite women named Orpah and Ruth. After they had lived there about ten years, \bibverse{5} Mahlon and Chilion both died, and Naomi was left alone, without husband or sons. \bibverse{6} So she set out with her daughters-in-law to return from the land of Moab, for she had heard that the Lord had remembered his people and given them food. \bibverse{7} As they were setting out together on the journey to Judah, \bibverse{8} Naomi said to her daughters-in-law, ``Go, return both of you to the home of your mother. May the Lord be kind to you as you have been kind to the dead and to me. \bibverse{9} The Lord grant that each of you may find peace and happiness in the house of a new husband.'' Then she kissed them; but they began to weep aloud \bibverse{10} and said to her, ``No, we will return with you to your people.'' \bibverse{11} But Naomi said, ``Go back, my daughters; why should you go with me? Can I still bear sons who might become your husbands? \bibverse{12} Go back, my daughters, go your own way, because I am too old to have a husband. Even if I should say, `I have hope,' even if I should have a husband tonight and should bear sons, \bibverse{13} would you wait for them until they were grown up? Would you remain single for them? No, my daughters! My heart grieves for you, for the Lord has sent me adversity.'' \bibverse{14} Then they again wept aloud, and Orpah kissed her mother-in-law goodbye, but Ruth stayed with her. \bibverse{15} ``Look,'' said Naomi, ``your sister-in-law is going back to her own people and to her own gods; go along with her!'' \bibverse{16} But Ruth answered, ``Do not urge me to leave you or to go back. I will go where you go, and I will stay wherever you stay. Your people will be my people, and your God my God; \bibverse{17} I will die where you die, and be buried there. May the Lord bring a curse upon me, if anything but death separate you and me.'' \bibverse{18} When Naomi saw that Ruth was determined to go with her, she ceased urging her to return. \bibverse{19} So they journeyed on until they came to Bethlehem. Their arrival stirred the whole town, and the women said, ``Can this be Naomi?'' \bibverse{20} ``Do not call me Naomi,'' she said to them, ``call me Mara+ 1.20 In Hebrew ``Naomi'' means ``pleasant,'' and ``Mara'' means ``bitter.'', for the Almighty has given me a bitter lot. \bibverse{21} I had plenty when I left, but the Lord has brought me back empty handed. Why should you call me Naomi, now that the Lord has afflicted me, and the Almighty has brought misfortune on me?'' \bibverse{22} So Naomi and Ruth, her Moabite daughter-in-law, returned from Moab. They reached Bethlehem at the beginning of the barley harvest. 
\hypertarget{in-the-fields-of-boaz}{%
\subsection{In the Fields of Boaz}\label{in-the-fields-of-boaz}}

\hypertarget{section-1}{%
\section{2}\label{section-1}}

\bibverse{1} Now Naomi was related through her husband to a very wealthy man of the family of Elimelech named Boaz. \bibverse{2} Ruth the Moabite said to Naomi, ``Let me now go into the fields and gather leftover grain behind anyone who will allow me.'' ``Go, my daughter,'' she replied. \bibverse{3} So she went to glean in the field after the reapers. As it happened, she was in that part of the field which belonged to Boaz, who was of the family of Elimelech. \bibverse{4} When Boaz came from Bethlehem and said to the reapers, ``The Lord be with you,'' they answered him, ``May the Lord bless you.'' \bibverse{5} ``Whose girl is this?'' Boaz asked his servant who had charge of the reapers. \bibverse{6} The servant who had charge of the reapers replied, ``It is the Moabite girl who came back with Naomi from the territory of Moab. \bibverse{7} She asked to be allowed to glean and gather sheaves after the reapers. So she came and has continued to work until now and she has not rested a moment in the field.'' \bibverse{8} Then Boaz said to Ruth, ``Listen, my daughter. Do not go to glean in another field nor leave this place, but stay here with my girls. \bibverse{9} Watch where the men are reaping and follow the gleaners. I have told the young men not to trouble you. When you are thirsty, go to the jars and drink of that which the young men have drawn.'' \bibverse{10} Then she bowed low and said to him, ``Why are you so kind to me, to take interest in me when I am just a foreigner?'' \bibverse{11} Boaz replied, ``I have heard what you have done for your mother-in-law since the death of your husband, and how you left your father and mother and your native land to come to a people that you did not know before. \bibverse{12} May the Lord repay you for what you have done, and may you be fully rewarded by the God of Israel, under whose wings you have come to take refuge.'' \bibverse{13} Then she said, ``I trust I may please you, my lord, for you have comforted me and spoken kindly to your servant, although I am not really equal to one of your own servants.'' \bibverse{14} At mealtime Boaz said to Ruth, ``Come here and eat some of the food and dip your piece of bread in the vinegar.'' So she sat beside the reapers, and he passed her some roasted grain. She ate until she was satisfied and had some left. \bibverse{15} When she rose to glean, Boaz gave this order to his young men: ``Let her glean even amongst the sheaves and do not disturb her. \bibverse{16} Also pull out some for her from the bundles and leave for her to glean, and do not find fault with her.'' \bibverse{17} So she gleaned in the field until evening, then beat out what she had gleaned. It was about a bushel of barley. \bibverse{18} Then she took it up and went into the town and showed her mother-in-law what she had gleaned. She also brought out and gave her that which she had left from her meal after she had had enough. \bibverse{19} ``Where did you glean today, and where did you work?'' asked her mother-in-law. ``A blessing on him who took notice of you!'' So she told her mother-in-law where she had worked. ``The name of the man with whom I worked today,'' she said, ``is Boaz.'' \bibverse{20} Naomi said to her daughter-in-law, ``May the blessing of the Lord rest on this man who has not ceased to show his loving-kindness to the living and to the dead. 
The man,'' she added, ``is a near relation of ours.'' \bibverse{21} ``He told me,'' Ruth said, ``that I must keep near his young men until they have completed all his harvest.'' \bibverse{22} Naomi said to Ruth, ``It is best, my daughter, that you should go out with his girls because you might not be as safe in another field.'' \bibverse{23} So she gleaned with the girls of Boaz until the end of the barley and wheat harvest; but she lived with her mother-in-law.

\hypertarget{night-and-morning}{%
\subsection{Night and morning}\label{night-and-morning}}

\hypertarget{section-2}{%
\section{3}\label{section-2}}

\bibverse{1} One day, Naomi said to Ruth, ``My daughter, should I not seek to secure a home for you where you will be happy and prosperous? \bibverse{2} Is not Boaz, with whose girls you have been, a relative of ours? \bibverse{3} Tonight he is going to winnow barley on the threshing-floor. So bathe and anoint yourself and put on your best clothes and go down to the threshing-floor. But do not make yourself known to the man until he has finished eating and drinking. \bibverse{4} Then when he lies down, mark the place where he lies. Go in, uncover his feet, lie down, and then he will tell you what to do.'' \bibverse{5} ``I will do as you say,'' Ruth said to her. \bibverse{6} So she went down to the threshing-floor and did just as her mother-in-law told her. \bibverse{7} When Boaz had finished eating and drinking and was in a happy mood, he went to lie down at the end of the heap of grain. Then Ruth came quietly and uncovered his feet and lay down. \bibverse{8} At midnight the man was startled and turned over, and there was a woman lying at his feet! \bibverse{9} ``Who are you?'' he said. ``I am Ruth your servant,'' she answered. ``Spread your cloak over your servant, for you are a near relative.'' \bibverse{10} He said, ``May you be blest by the Lord, my daughter. You have shown me greater favour now than at first, for you have not followed young men, whether poor or rich. \bibverse{11} My daughter, have no fear; I will do for you all that you ask; for the whole town knows that you are a virtuous woman. \bibverse{12} Now it is true that I am a near relative, but there is another man nearer than I. \bibverse{13} Stay here tonight, and then in the morning, if he will perform for you the duty of a kinsman, well, let him do it. But if he will not perform for you the duty of a kinsman, then as surely as the Lord lives, I will do it for you. Lie down until morning.'' \bibverse{14} So she lay at his feet until morning, but rose before anyone could recognise her, for Boaz said, ``No one must know that a woman came to the threshing-floor.'' \bibverse{15} He also said, ``Bring the cloak which you have on and hold it.'' So she held it while he poured into it six measures of barley and laid it on her shoulders. Then he went into the city. \bibverse{16} When Ruth came to her mother-in-law, Naomi asked, ``Is it you, my daughter?'' Then Ruth told Naomi all that the man had done for her. \bibverse{17} ``He gave me these six measures of barley,'' she said, ``for he said I should not go to my mother-in-law empty-handed.'' \bibverse{18} ``Wait quietly, my daughter,'' Naomi said, ``until you know how the affair will turn out, for the man will not rest unless he settles it all today.''

\hypertarget{section-3}{%
\section{4}\label{section-3}}

\bibverse{1} Then Boaz went up to the gate and sat down. Just then the near kinsman of whom Boaz had spoken came along. 
Boaz said, ``Hello, So-and-so (calling him by name), come here and sit down.'' So he stopped and sat down. \bibverse{2} Boaz also took ten of the town elders and said, ``Sit down here.'' So they sat down. \bibverse{3} Then he said to the near relative, ``Naomi, who has come back from the country of Moab, is offering for sale the piece of land which belonged to our relative Elimelech, \bibverse{4} and I thought that I would lay the matter before you, suggesting that you buy it in the presence of these men who sit here and of the elders of my people. If you will buy it and so keep it in the possession of the family, do so; but if not, then tell me, so that I may know; for no one but you has the right to buy it, and I am next to you.'' ``I will buy it,'' he said. \bibverse{5} Then Boaz said, ``On the day you buy the field from Naomi, you must also marry Ruth the Moabite, the widow of the dead, in order to preserve the name of the dead in connection with his inheritance.'' \bibverse{6} ``I cannot buy it for myself without spoiling my own inheritance,'' the near relative said. ``You take my right of buying it as a relative, because I cannot do so.'' \bibverse{7} Now this used to be the custom in Israel: to make valid anything relating to a matter of redemption or exchange, a man drew off his sandal and gave it to the other man; and this was the way contracts were attested in Israel. \bibverse{8} So when the near relative said to Boaz, ``Buy it for yourself,'' Boaz drew off the man's sandal. \bibverse{9} Then Boaz said to the elders and to all the people, ``You are witnesses at this time that I have bought all that was Elimelech's and all that was Chilion's and Mahlon's from Naomi. \bibverse{10} Moreover I have secured Ruth the Moabite, the wife of Mahlon, to be my wife, in order to perpetuate the name of the dead in connection with his inheritance, so that his name will not disappear from amongst his relatives and from the household where he lived. You are witnesses this day.'' \bibverse{11} Then all the people who were at the gate and the elders said, ``We are witnesses. May the Lord make the woman who is coming into your house like Rachel and Leah, who together built the house of Israel. May you do well in Ephrata, and become famous in Bethlehem. \bibverse{12} From the children whom the Lord will give you by this young woman may your household become like the household of Perez, whom Tamar bore to Judah.'' \bibverse{13} So Boaz married Ruth, and she became his wife; and the Lord gave to her a son. \bibverse{14} Then the women said to Naomi, ``Blessed be the Lord who has not left you at this time without a near relative, and may his name be famous in Israel. \bibverse{15} This child will restore your vigour and nourish you in your old age; for your daughter-in-law who loves you, who is worth more to you than seven sons, has borne a son to Boaz!'' \bibverse{16} So Naomi took the child in her arms and cared for him as if he was her own. \bibverse{17} The women of the neighbourhood gave him a name, saying, ``A son is born to Naomi!'' They named him Obed; he became the father of Jesse, who was the father of David.

\hypertarget{genealogy}{%
\subsection{Genealogy}\label{genealogy}}

\bibverse{18} This is the genealogy of Perez: Perez was the father of Hezron, \bibverse{19} Hezron of Ram, Ram of Amminadab, \bibverse{20} Amminadab of Nashon, Nashon of Salmon, \bibverse{21} Salmon of Boaz, Boaz of Obed, \bibverse{22} Obed of Jesse, Jesse of David.
{-# OPTIONS --without-K --safe #-} module Dodo.Binary.Intersection where -- Stdlib imports open import Level using (Level; _⊔_) open import Data.Product as P open import Data.Product using (_×_; _,_; swap; proj₁; proj₂) open import Relation.Binary using (REL) -- Local imports open import Dodo.Binary.Equality -- # Definitions infixl 30 _∩₂_ _∩₂_ : {a b ℓ₁ ℓ₂ : Level} {A : Set a} {B : Set b} → REL A B ℓ₁ → REL A B ℓ₂ → REL A B (ℓ₁ ⊔ ℓ₂) _∩₂_ P Q x y = P x y × Q x y -- # Properties module _ {a b ℓ : Level} {A : Set a} {B : Set b} {R : REL A B ℓ} where ∩₂-idem : (R ∩₂ R) ⇔₂ R ∩₂-idem = ⇔: ⊆-proof ⊇-proof where ⊆-proof : (R ∩₂ R) ⊆₂' R ⊆-proof _ _ = proj₁ ⊇-proof : R ⊆₂' (R ∩₂ R) ⊇-proof _ _ Rxy = (Rxy , Rxy) module _ {a b ℓ₁ ℓ₂ : Level} {A : Set a} {B : Set b} {P : REL A B ℓ₁} {Q : REL A B ℓ₂} where ∩₂-comm : (P ∩₂ Q) ⇔₂ (Q ∩₂ P) ∩₂-comm = ⇔: (λ _ _ → swap) (λ _ _ → swap) module _ {a b ℓ₁ ℓ₂ ℓ₃ : Level} {A : Set a} {B : Set b} {P : REL A B ℓ₁} {Q : REL A B ℓ₂} {R : REL A B ℓ₃} where ∩₂-assoc : P ∩₂ (Q ∩₂ R) ⇔₂ (P ∩₂ Q) ∩₂ R ∩₂-assoc = ⇔: ⊆-proof ⊇-proof where ⊆-proof : P ∩₂ (Q ∩₂ R) ⊆₂' (P ∩₂ Q) ∩₂ R ⊆-proof _ _ (Pxy , (Qxy , Rxy)) = ((Pxy , Qxy) , Rxy) ⊇-proof : (P ∩₂ Q) ∩₂ R ⊆₂' P ∩₂ (Q ∩₂ R) ⊇-proof _ _ ((Pxy , Qxy) , Rxy) = (Pxy , (Qxy , Rxy)) -- # Operations -- ## Operations: ⊆₂ module _ {a b ℓ₁ ℓ₂ ℓ₃ : Level} {A : Set a} {B : Set b} {P : REL A B ℓ₁} {Q : REL A B ℓ₂} {R : REL A B ℓ₃} where ∩₂-combine-⊆₂ : P ⊆₂ Q → P ⊆₂ R → P ⊆₂ (Q ∩₂ R) ∩₂-combine-⊆₂ (⊆: P⊆Q) (⊆: P⊆R) = ⊆: (λ x y Pxy → (P⊆Q x y Pxy , P⊆R x y Pxy)) module _ {a b ℓ₁ ℓ₂ : Level} {A : Set a} {B : Set b} {P : REL A B ℓ₁} {Q : REL A B ℓ₂} where ∩₂-introˡ-⊆₂ : (P ∩₂ Q) ⊆₂ Q ∩₂-introˡ-⊆₂ = ⊆: λ _ _ → proj₂ ∩₂-introʳ-⊆₂ : (P ∩₂ Q) ⊆₂ P ∩₂-introʳ-⊆₂ = ⊆: λ _ _ → proj₁ module _ {a b ℓ₁ ℓ₂ ℓ₃ : Level} {A : Set a} {B : Set b} {P : REL A B ℓ₁} {Q : REL A B ℓ₂} {R : REL A B ℓ₃} where ∩₂-elimˡ-⊆₂ : P ⊆₂ (Q ∩₂ R) → P ⊆₂ R ∩₂-elimˡ-⊆₂ (⊆: P⊆[Q∩R]) = ⊆: (λ x y Pxy → proj₂ (P⊆[Q∩R] x y Pxy)) ∩₂-elimʳ-⊆₂ : P ⊆₂ (Q ∩₂ R) → P ⊆₂ Q ∩₂-elimʳ-⊆₂ (⊆: P⊆[Q∩R]) = ⊆: (λ x y Pxy → proj₁ (P⊆[Q∩R] x y Pxy)) module _ {a b ℓ₁ ℓ₂ ℓ₃ : Level} {A : Set a} {B : Set b} {P : REL A B ℓ₁} {Q : REL A B ℓ₂} {R : REL A B ℓ₃} where ∩₂-substˡ-⊆₂ : P ⊆₂ Q → (P ∩₂ R) ⊆₂ (Q ∩₂ R) ∩₂-substˡ-⊆₂ (⊆: P⊆Q) = ⊆: (λ x y → P.map₁ (P⊆Q x y)) ∩₂-substʳ-⊆₂ : P ⊆₂ Q → (R ∩₂ P) ⊆₂ (R ∩₂ Q) ∩₂-substʳ-⊆₂ (⊆: P⊆Q) = ⊆: (λ x y → P.map₂ (P⊆Q x y)) -- ## Operations: ⇔₂ module _ {a b ℓ₁ ℓ₂ ℓ₃ : Level} {A : Set a} {B : Set b} {P : REL A B ℓ₁} {Q : REL A B ℓ₂} {R : REL A B ℓ₃} where ∩₂-substˡ : P ⇔₂ Q → (P ∩₂ R) ⇔₂ (Q ∩₂ R) ∩₂-substˡ = ⇔₂-compose ∩₂-substˡ-⊆₂ ∩₂-substˡ-⊆₂ ∩₂-substʳ : P ⇔₂ Q → (R ∩₂ P) ⇔₂ (R ∩₂ Q) ∩₂-substʳ = ⇔₂-compose ∩₂-substʳ-⊆₂ ∩₂-substʳ-⊆₂
If $f$ is continuous on the closed segment $[a,b]$ and $c$ is a point of that segment, then the contour integral of $f$ along the line segment $[a,b]$ is equal to the sum of the contour integrals of $f$ along the line segments $[a,c]$ and $[c,b]$.
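In symbols (a rendering of the prose above, not the HOL source text), writing $L[x,y]$ for the straight-line path from $x$ to $y$:

$$\int_{L[a,b]} f(z)\,dz \;=\; \int_{L[a,c]} f(z)\,dz \;+\; \int_{L[c,b]} f(z)\,dz, \qquad c \in [a,b].$$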
Formal statement is: lemma orthogonal_self: "orthogonal x x \<longleftrightarrow> x = 0" Informal statement is: A vector is orthogonal to itself if and only if it is the zero vector.
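A one-line justification of the informal reading, assuming the standard inner-product definition of orthogonality and the identity $x \cdot x = \lVert x \rVert^2$:

$$\mathrm{orthogonal}\;x\;x \;\iff\; x \cdot x = 0 \;\iff\; \lVert x \rVert^2 = 0 \;\iff\; x = 0.$$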
module ReadNum

import Data.Strings

%default total

readNumber : HasIO io => io (Maybe Nat)
readNumber = do
  n <- getLine
  if all isDigit (unpack n)
     then pure $ Just (stringToNatOrZ n)
     else pure Nothing

readNumbers : HasIO io => io (Maybe (Nat, Nat))
readNumbers = do
  m <- readNumber
  case m of
       Nothing => pure Nothing
       Just mm => do
         n <- readNumber
         case n of
              Nothing => pure Nothing
              Just nn => pure $ Just (mm, nn)

-- pattern-matched destructuring: the | alternative handles the
-- non-matching (Nothing) case, avoiding the nested case expressions
readNumbersImproved : HasIO io => io (Maybe (Nat, Nat))
readNumbersImproved = do
  Just m <- readNumber
    | Nothing => pure Nothing
  Just n <- readNumber
    | Nothing => pure Nothing
  pure $ Just (m, n)
Formal statement is: lemma asymp_equiv_sym: "f \<sim>[F] g \<longleftrightarrow> g \<sim>[F] f" Informal statement is: $f$ is asymptotically equivalent to $g$ if and only if $g$ is asymptotically equivalent to $f$; that is, asymptotic equivalence is symmetric.
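A sketch of why the equivalence is symmetric, assuming the quotient characterisation $f \sim_F g \iff f/g \longrightarrow 1$ on $F$ (the HOL definition also handles zero values via an eventual-equality side condition):

$$\frac{f}{g} \longrightarrow 1 \neq 0 \quad\Longrightarrow\quad \frac{g}{f} = \Bigl(\frac{f}{g}\Bigr)^{-1} \longrightarrow 1^{-1} = 1,$$

and the same argument with $f$ and $g$ swapped gives the other direction.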
program main
use mpi
implicit none
double precision f(10,2)
integer code,np,world,ierr,i

do i=1,10
   f(i,1)=1
   f(i,2)=2
enddo

call mpi_init(ierr)
call mpi_comm_dup(mpi_comm_world,world,ierr)
call mpi_comm_rank(world,code,ierr)
call mpi_comm_size(world,np,ierr)

do i=1,3
   call update()
   if(code.eq.0)then
      print*,'global f',f
   endif
enddo

call mpi_comm_free(world,ierr)
call mpi_finalize(ierr)

contains

subroutine update()
integer i,r,istr,iend,avgtask,extra,currtasknum
integer counts(np),displs(np)
double precision, allocatable :: kf(:,:)

! Block-partition the 10 rows over np ranks; the first `extra`
! ranks each take one extra row.
avgtask=10/np
extra=mod(10,np)
if(code .lt. extra)then
   istr=(avgtask+1)*code+1
   iend=(avgtask+1)*(code+1)
   currtasknum=avgtask+1
else
   istr=avgtask*code+1+extra
   iend=avgtask*(code+1)+extra
   currtasknum=avgtask
endif
print*,'code,istr,iend',code,',',istr,',',iend

! Each rank updates only its own block of rows.
do i=istr,iend
   f(i,1)=f(i,1)+10*code
   f(i,2)=f(i,2)+10*code
enddo

! Per-rank element counts and displacements for the gather; the
! chunks differ in size when np does not divide 10, so the
! variable-count mpi_gatherv is required rather than plain mpi_gather.
do r=0,np-1
   if(r .lt. extra)then
      counts(r+1)=avgtask+1
      displs(r+1)=(avgtask+1)*r
   else
      counts(r+1)=avgtask
      displs(r+1)=avgtask*r+extra
   endif
enddo

! The receive buffer is allocated on every rank so the argument is
! always valid to pass; its contents are significant only on rank 0.
allocate(kf(10,2))

! f is double precision, so the matching MPI datatype is
! mpi_double_precision; each rank contributes currtasknum elements
! per column (one gather per column).
call mpi_gatherv(f(istr,1),currtasknum,mpi_double_precision, &
                 kf(1,1),counts,displs,mpi_double_precision,0,world,ierr)
call mpi_gatherv(f(istr,2),currtasknum,mpi_double_precision, &
                 kf(1,2),counts,displs,mpi_double_precision,0,world,ierr)

if(code.eq.0)then
   f=kf
endif
deallocate(kf)
end subroutine update

end program
module TTImp.Elab.Prim import Core.TT %default covering export checkPrim : FC -> Constant -> (Term vars, Term vars) checkPrim fc (I i) = (PrimVal fc (I i), PrimVal fc IntType) checkPrim fc (BI i) = (PrimVal fc (BI i), PrimVal fc IntegerType) checkPrim fc (B8 i) = (PrimVal fc (B8 i), PrimVal fc Bits8Type) checkPrim fc (B16 i) = (PrimVal fc (B16 i), PrimVal fc Bits16Type) checkPrim fc (B32 i) = (PrimVal fc (B32 i), PrimVal fc Bits32Type) checkPrim fc (B64 i) = (PrimVal fc (B64 i), PrimVal fc Bits64Type) checkPrim fc (Str s) = (PrimVal fc (Str s), PrimVal fc StringType) checkPrim fc (Ch c) = (PrimVal fc (Ch c), PrimVal fc CharType) checkPrim fc (Db d) = (PrimVal fc (Db d), PrimVal fc DoubleType) checkPrim fc WorldVal = (PrimVal fc WorldVal, PrimVal fc WorldType) checkPrim fc IntType = (PrimVal fc IntType, TType fc) checkPrim fc IntegerType = (PrimVal fc IntegerType, TType fc) checkPrim fc Bits8Type = (PrimVal fc Bits8Type, TType fc) checkPrim fc Bits16Type = (PrimVal fc Bits16Type, TType fc) checkPrim fc Bits32Type = (PrimVal fc Bits32Type, TType fc) checkPrim fc Bits64Type = (PrimVal fc Bits64Type, TType fc) checkPrim fc StringType = (PrimVal fc StringType, TType fc) checkPrim fc CharType = (PrimVal fc CharType, TType fc) checkPrim fc DoubleType = (PrimVal fc DoubleType, TType fc) checkPrim fc WorldType = (PrimVal fc WorldType, TType fc)
{-# OPTIONS --without-K --safe #-} open import Definition.Typed.EqualityRelation module Definition.LogicalRelation {{eqrel : EqRelSet}} where open EqRelSet {{...}} open import Definition.Untyped as U open import Definition.Untyped.Properties open import Definition.Typed.Properties open import Definition.Typed open import Definition.Typed.Weakening open import Tools.Product import Tools.PropositionalEquality as PE -- The different cases of the logical relation are spread out through out -- this file. This is due to them having different dependencies. -- We will refer to expressions that satisfies the logical relation as reducible. -- Reducibility of Neutrals: -- Neutral type record _⊩ne_ (Γ : Con Term) (A : Term) : Set where constructor ne field K : Term D : Γ ⊢ A :⇒*: K neK : Neutral K K≡K : Γ ⊢ K ~ K ∷ U -- Neutral type equality record _⊩ne_≡_/_ (Γ : Con Term) (A B : Term) ([A] : Γ ⊩ne A) : Set where constructor ne₌ open _⊩ne_ [A] field M : Term D′ : Γ ⊢ B :⇒*: M neM : Neutral M K≡M : Γ ⊢ K ~ M ∷ U -- Neutral term in WHNF record _⊩neNf_∷_ (Γ : Con Term) (k A : Term) : Set where inductive constructor neNfₜ field neK : Neutral k ⊢k : Γ ⊢ k ∷ A k≡k : Γ ⊢ k ~ k ∷ A -- Neutral term record _⊩ne_∷_/_ (Γ : Con Term) (t A : Term) ([A] : Γ ⊩ne A) : Set where inductive constructor neₜ open _⊩ne_ [A] field k : Term d : Γ ⊢ t :⇒*: k ∷ K nf : Γ ⊩neNf k ∷ K -- Neutral term equality in WHNF record _⊩neNf_≡_∷_ (Γ : Con Term) (k m A : Term) : Set where inductive constructor neNfₜ₌ field neK : Neutral k neM : Neutral m k≡m : Γ ⊢ k ~ m ∷ A -- Neutral term equality record _⊩ne_≡_∷_/_ (Γ : Con Term) (t u A : Term) ([A] : Γ ⊩ne A) : Set where constructor neₜ₌ open _⊩ne_ [A] field k m : Term d : Γ ⊢ t :⇒*: k ∷ K d′ : Γ ⊢ u :⇒*: m ∷ K nf : Γ ⊩neNf k ≡ m ∷ K -- Reducibility of natural numbers: -- Natural number type _⊩ℕ_ : (Γ : Con Term) (A : Term) → Set Γ ⊩ℕ A = Γ ⊢ A :⇒*: ℕ -- Natural number type equality _⊩ℕ_≡_ : (Γ : Con Term) (A B : Term) → Set Γ ⊩ℕ A ≡ B = Γ ⊢ B ⇒* ℕ mutual -- Natural number term record _⊩ℕ_∷ℕ (Γ : Con Term) (t : Term) : Set where inductive constructor ℕₜ field n : Term d : Γ ⊢ t :⇒*: n ∷ ℕ n≡n : Γ ⊢ n ≅ n ∷ ℕ prop : Natural-prop Γ n -- WHNF property of natural number terms data Natural-prop (Γ : Con Term) : (n : Term) → Set where sucᵣ : ∀ {n} → Γ ⊩ℕ n ∷ℕ → Natural-prop Γ (suc n) zeroᵣ : Natural-prop Γ zero ne : ∀ {n} → Γ ⊩neNf n ∷ ℕ → Natural-prop Γ n mutual -- Natural number term equality record _⊩ℕ_≡_∷ℕ (Γ : Con Term) (t u : Term) : Set where inductive constructor ℕₜ₌ field k k′ : Term d : Γ ⊢ t :⇒*: k ∷ ℕ d′ : Γ ⊢ u :⇒*: k′ ∷ ℕ k≡k′ : Γ ⊢ k ≅ k′ ∷ ℕ prop : [Natural]-prop Γ k k′ -- WHNF property of Natural number term equality data [Natural]-prop (Γ : Con Term) : (n n′ : Term) → Set where sucᵣ : ∀ {n n′} → Γ ⊩ℕ n ≡ n′ ∷ℕ → [Natural]-prop Γ (suc n) (suc n′) zeroᵣ : [Natural]-prop Γ zero zero ne : ∀ {n n′} → Γ ⊩neNf n ≡ n′ ∷ ℕ → [Natural]-prop Γ n n′ -- Natural extraction from term WHNF property natural : ∀ {Γ n} → Natural-prop Γ n → Natural n natural (sucᵣ x) = sucₙ natural zeroᵣ = zeroₙ natural (ne (neNfₜ neK ⊢k k≡k)) = ne neK -- Natural extraction from term equality WHNF property split : ∀ {Γ a b} → [Natural]-prop Γ a b → Natural a × Natural b split (sucᵣ x) = sucₙ , sucₙ split zeroᵣ = zeroₙ , zeroₙ split (ne (neNfₜ₌ neK neM k≡m)) = ne neK , ne neM -- Reducibility of Empty -- Empty type _⊩Empty_ : (Γ : Con Term) (A : Term) → Set Γ ⊩Empty A = Γ ⊢ A :⇒*: Empty -- Empty type equality _⊩Empty_≡_ : (Γ : Con Term) (A B : Term) → Set Γ ⊩Empty A ≡ B = Γ ⊢ B ⇒* Empty -- WHNF property of absurd 
terms data Empty-prop (Γ : Con Term) : (n : Term) → Set where ne : ∀ {n} → Γ ⊩neNf n ∷ Empty → Empty-prop Γ n -- Empty term record _⊩Empty_∷Empty (Γ : Con Term) (t : Term) : Set where inductive constructor Emptyₜ field n : Term d : Γ ⊢ t :⇒*: n ∷ Empty n≡n : Γ ⊢ n ≅ n ∷ Empty prop : Empty-prop Γ n data [Empty]-prop (Γ : Con Term) : (n n′ : Term) → Set where ne : ∀ {n n′} → Γ ⊩neNf n ≡ n′ ∷ Empty → [Empty]-prop Γ n n′ -- Empty term equality record _⊩Empty_≡_∷Empty (Γ : Con Term) (t u : Term) : Set where inductive constructor Emptyₜ₌ field k k′ : Term d : Γ ⊢ t :⇒*: k ∷ Empty d′ : Γ ⊢ u :⇒*: k′ ∷ Empty k≡k′ : Γ ⊢ k ≅ k′ ∷ Empty prop : [Empty]-prop Γ k k′ empty : ∀ {Γ n} → Empty-prop Γ n → Neutral n empty (ne (neNfₜ neK _ _)) = neK esplit : ∀ {Γ a b} → [Empty]-prop Γ a b → Neutral a × Neutral b esplit (ne (neNfₜ₌ neK neM k≡m)) = neK , neM -- Reducibility of Unit -- Unit type _⊩Unit_ : (Γ : Con Term) (A : Term) → Set Γ ⊩Unit A = Γ ⊢ A :⇒*: Unit -- Unit type equality _⊩Unit_≡_ : (Γ : Con Term) (A B : Term) → Set Γ ⊩Unit A ≡ B = Γ ⊢ B ⇒* Unit record _⊩Unit_∷Unit (Γ : Con Term) (t : Term) : Set where inductive constructor Unitₜ field n : Term d : Γ ⊢ t :⇒*: n ∷ Unit prop : Whnf n -- Unit term equality record _⊩Unit_≡_∷Unit (Γ : Con Term) (t u : Term) : Set where constructor Unitₜ₌ field ⊢t : Γ ⊢ t ∷ Unit ⊢u : Γ ⊢ u ∷ Unit -- Type levels data TypeLevel : Set where ⁰ : TypeLevel ¹ : TypeLevel data _<_ : (i j : TypeLevel) → Set where 0<1 : ⁰ < ¹ -- Logical relation -- Exported interface record LogRelKit : Set₁ where constructor Kit field _⊩U : (Γ : Con Term) → Set _⊩B⟨_⟩_ : (Γ : Con Term) (W : BindingType) → Term → Set _⊩_ : (Γ : Con Term) → Term → Set _⊩_≡_/_ : (Γ : Con Term) (A B : Term) → Γ ⊩ A → Set _⊩_∷_/_ : (Γ : Con Term) (t A : Term) → Γ ⊩ A → Set _⊩_≡_∷_/_ : (Γ : Con Term) (t u A : Term) → Γ ⊩ A → Set module LogRel (l : TypeLevel) (rec : ∀ {l′} → l′ < l → LogRelKit) where -- Reducibility of Universe: -- Universe type record _⊩¹U (Γ : Con Term) : Set where constructor Uᵣ field l′ : TypeLevel l< : l′ < l ⊢Γ : ⊢ Γ -- Universe type equality _⊩¹U≡_ : (Γ : Con Term) (B : Term) → Set Γ ⊩¹U≡ B = B PE.≡ U -- Note lack of reduction -- Universe term record _⊩¹U_∷U/_ {l′} (Γ : Con Term) (t : Term) (l< : l′ < l) : Set where constructor Uₜ open LogRelKit (rec l<) field A : Term d : Γ ⊢ t :⇒*: A ∷ U typeA : Type A A≡A : Γ ⊢ A ≅ A ∷ U [t] : Γ ⊩ t -- Universe term equality record _⊩¹U_≡_∷U/_ {l′} (Γ : Con Term) (t u : Term) (l< : l′ < l) : Set where constructor Uₜ₌ open LogRelKit (rec l<) field A B : Term d : Γ ⊢ t :⇒*: A ∷ U d′ : Γ ⊢ u :⇒*: B ∷ U typeA : Type A typeB : Type B A≡B : Γ ⊢ A ≅ B ∷ U [t] : Γ ⊩ t [u] : Γ ⊩ u [t≡u] : Γ ⊩ t ≡ u / [t] mutual -- Reducibility of Binding types (Π, Σ): -- B-type record _⊩¹B⟨_⟩_ (Γ : Con Term) (W : BindingType) (A : Term) : Set where inductive constructor Bᵣ field F : Term G : Term D : Γ ⊢ A :⇒*: ⟦ W ⟧ F ▹ G ⊢F : Γ ⊢ F ⊢G : Γ ∙ F ⊢ G A≡A : Γ ⊢ ⟦ W ⟧ F ▹ G ≅ ⟦ W ⟧ F ▹ G [F] : ∀ {ρ Δ} → ρ ∷ Δ ⊆ Γ → ⊢ Δ → Δ ⊩¹ U.wk ρ F [G] : ∀ {ρ Δ a} → ([ρ] : ρ ∷ Δ ⊆ Γ) (⊢Δ : ⊢ Δ) → Δ ⊩¹ a ∷ U.wk ρ F / [F] [ρ] ⊢Δ → Δ ⊩¹ U.wk (lift ρ) G [ a ] G-ext : ∀ {ρ Δ a b} → ([ρ] : ρ ∷ Δ ⊆ Γ) (⊢Δ : ⊢ Δ) → ([a] : Δ ⊩¹ a ∷ U.wk ρ F / [F] [ρ] ⊢Δ) → ([b] : Δ ⊩¹ b ∷ U.wk ρ F / [F] [ρ] ⊢Δ) → Δ ⊩¹ a ≡ b ∷ U.wk ρ F / [F] [ρ] ⊢Δ → Δ ⊩¹ U.wk (lift ρ) G [ a ] ≡ U.wk (lift ρ) G [ b ] / [G] [ρ] ⊢Δ [a] -- B-type equality record _⊩¹B⟨_⟩_≡_/_ (Γ : Con Term) (W : BindingType) (A B : Term) ([A] : Γ ⊩¹B⟨ W ⟩ A) : Set where inductive constructor B₌ open _⊩¹B⟨_⟩_ [A] field F′ : Term G′ : Term D′ : Γ ⊢ B ⇒* ⟦ W ⟧ F′ ▹ G′ 
A≡B : Γ ⊢ ⟦ W ⟧ F ▹ G ≅ ⟦ W ⟧ F′ ▹ G′ [F≡F′] : ∀ {ρ Δ} → ([ρ] : ρ ∷ Δ ⊆ Γ) (⊢Δ : ⊢ Δ) → Δ ⊩¹ U.wk ρ F ≡ U.wk ρ F′ / [F] [ρ] ⊢Δ [G≡G′] : ∀ {ρ Δ a} → ([ρ] : ρ ∷ Δ ⊆ Γ) (⊢Δ : ⊢ Δ) → ([a] : Δ ⊩¹ a ∷ U.wk ρ F / [F] [ρ] ⊢Δ) → Δ ⊩¹ U.wk (lift ρ) G [ a ] ≡ U.wk (lift ρ) G′ [ a ] / [G] [ρ] ⊢Δ [a] -- Term reducibility of Π-type _⊩¹Π_∷_/_ : (Γ : Con Term) (t A : Term) ([A] : Γ ⊩¹B⟨ BΠ ⟩ A) → Set Γ ⊩¹Π t ∷ A / Bᵣ F G D ⊢F ⊢G A≡A [F] [G] G-ext = ∃ λ f → Γ ⊢ t :⇒*: f ∷ Π F ▹ G × Function f × Γ ⊢ f ≅ f ∷ Π F ▹ G × (∀ {ρ Δ a b} ([ρ] : ρ ∷ Δ ⊆ Γ) (⊢Δ : ⊢ Δ) ([a] : Δ ⊩¹ a ∷ U.wk ρ F / [F] [ρ] ⊢Δ) ([b] : Δ ⊩¹ b ∷ U.wk ρ F / [F] [ρ] ⊢Δ) ([a≡b] : Δ ⊩¹ a ≡ b ∷ U.wk ρ F / [F] [ρ] ⊢Δ) → Δ ⊩¹ U.wk ρ f ∘ a ≡ U.wk ρ f ∘ b ∷ U.wk (lift ρ) G [ a ] / [G] [ρ] ⊢Δ [a]) × (∀ {ρ Δ a} → ([ρ] : ρ ∷ Δ ⊆ Γ) (⊢Δ : ⊢ Δ) → ([a] : Δ ⊩¹ a ∷ U.wk ρ F / [F] [ρ] ⊢Δ) → Δ ⊩¹ U.wk ρ f ∘ a ∷ U.wk (lift ρ) G [ a ] / [G] [ρ] ⊢Δ [a]) {- NOTE(WN): Last 2 fields could be refactored to a single forall. But touching this definition is painful, so only do it if you have to change it anyway. -} -- Issue: Agda complains about record use not being strictly positive. -- Therefore we have to use × -- Term equality of Π-type _⊩¹Π_≡_∷_/_ : (Γ : Con Term) (t u A : Term) ([A] : Γ ⊩¹B⟨ BΠ ⟩ A) → Set Γ ⊩¹Π t ≡ u ∷ A / [A]@(Bᵣ F G D ⊢F ⊢G A≡A [F] [G] G-ext) = ∃₂ λ f g → Γ ⊢ t :⇒*: f ∷ Π F ▹ G × Γ ⊢ u :⇒*: g ∷ Π F ▹ G × Function f × Function g × Γ ⊢ f ≅ g ∷ Π F ▹ G × Γ ⊩¹Π t ∷ A / [A] × Γ ⊩¹Π u ∷ A / [A] × (∀ {ρ Δ a} ([ρ] : ρ ∷ Δ ⊆ Γ) (⊢Δ : ⊢ Δ) ([a] : Δ ⊩¹ a ∷ U.wk ρ F / [F] [ρ] ⊢Δ) → Δ ⊩¹ U.wk ρ f ∘ a ≡ U.wk ρ g ∘ a ∷ U.wk (lift ρ) G [ a ] / [G] [ρ] ⊢Δ [a]) -- Issue: Same as above. -- Term reducibility of Σ-type _⊩¹Σ_∷_/_ : (Γ : Con Term) (t A : Term) ([A] : Γ ⊩¹B⟨ BΣ ⟩ A) → Set Γ ⊩¹Σ t ∷ A / [A]@(Bᵣ F G D ⊢F ⊢G A≡A [F] [G] G-ext) = ∃ λ p → Γ ⊢ t :⇒*: p ∷ Σ F ▹ G × Product p × Γ ⊢ p ≅ p ∷ Σ F ▹ G × (Σ (Γ ⊩¹ fst p ∷ U.wk id F / [F] id (wf ⊢F)) λ [fst] → Γ ⊩¹ snd p ∷ U.wk (lift id) G [ fst p ] / [G] id (wf ⊢F) [fst]) -- Term equality of Σ-type _⊩¹Σ_≡_∷_/_ : (Γ : Con Term) (t u A : Term) ([A] : Γ ⊩¹B⟨ BΣ ⟩ A) → Set Γ ⊩¹Σ t ≡ u ∷ A / [A]@(Bᵣ F G D ⊢F ⊢G A≡A [F] [G] G-ext) = ∃₂ λ p r → Γ ⊢ t :⇒*: p ∷ Σ F ▹ G × Γ ⊢ u :⇒*: r ∷ Σ F ▹ G × Product p × Product r × Γ ⊢ p ≅ r ∷ Σ F ▹ G × Γ ⊩¹Σ t ∷ A / [A] × Γ ⊩¹Σ u ∷ A / [A] × (Σ (Γ ⊩¹ fst p ∷ U.wk id F / [F] id (wf ⊢F)) λ [fstp] → Γ ⊩¹ fst r ∷ U.wk id F / [F] id (wf ⊢F) × Γ ⊩¹ fst p ≡ fst r ∷ U.wk id F / [F] id (wf ⊢F) × Γ ⊩¹ snd p ≡ snd r ∷ U.wk (lift id) G [ fst p ] / [G] id (wf ⊢F) [fstp]) -- Logical relation definition data _⊩¹_ (Γ : Con Term) : Term → Set where Uᵣ : Γ ⊩¹U → Γ ⊩¹ U ℕᵣ : ∀ {A} → Γ ⊩ℕ A → Γ ⊩¹ A Emptyᵣ : ∀ {A} → Γ ⊩Empty A → Γ ⊩¹ A Unitᵣ : ∀ {A} → Γ ⊩Unit A → Γ ⊩¹ A ne : ∀ {A} → Γ ⊩ne A → Γ ⊩¹ A Bᵣ : ∀ {A} W → Γ ⊩¹B⟨ W ⟩ A → Γ ⊩¹ A emb : ∀ {A l′} (l< : l′ < l) (let open LogRelKit (rec l<)) ([A] : Γ ⊩ A) → Γ ⊩¹ A _⊩¹_≡_/_ : (Γ : Con Term) (A B : Term) → Γ ⊩¹ A → Set Γ ⊩¹ A ≡ B / Uᵣ UA = Γ ⊩¹U≡ B Γ ⊩¹ A ≡ B / ℕᵣ D = Γ ⊩ℕ A ≡ B Γ ⊩¹ A ≡ B / Emptyᵣ D = Γ ⊩Empty A ≡ B Γ ⊩¹ A ≡ B / Unitᵣ D = Γ ⊩Unit A ≡ B Γ ⊩¹ A ≡ B / ne neA = Γ ⊩ne A ≡ B / neA Γ ⊩¹ A ≡ B / Bᵣ W BA = Γ ⊩¹B⟨ W ⟩ A ≡ B / BA Γ ⊩¹ A ≡ B / emb l< [A] = Γ ⊩ A ≡ B / [A] where open LogRelKit (rec l<) _⊩¹_∷_/_ : (Γ : Con Term) (t A : Term) → Γ ⊩¹ A → Set Γ ⊩¹ t ∷ .U / Uᵣ (Uᵣ l′ l< ⊢Γ) = Γ ⊩¹U t ∷U/ l< Γ ⊩¹ t ∷ A / ℕᵣ D = Γ ⊩ℕ t ∷ℕ Γ ⊩¹ t ∷ A / Emptyᵣ D = Γ ⊩Empty t ∷Empty Γ ⊩¹ t ∷ A / Unitᵣ D = Γ ⊩Unit t ∷Unit Γ ⊩¹ t ∷ A / ne neA = Γ ⊩ne t ∷ A / neA Γ ⊩¹ t ∷ A / Bᵣ BΠ ΠA = Γ ⊩¹Π t ∷ A / ΠA Γ ⊩¹ t ∷ A / Bᵣ BΣ ΣA = Γ ⊩¹Σ t ∷ A / ΣA Γ ⊩¹ t ∷ A / emb l< 
[A] = Γ ⊩ t ∷ A / [A] where open LogRelKit (rec l<) _⊩¹_≡_∷_/_ : (Γ : Con Term) (t u A : Term) → Γ ⊩¹ A → Set Γ ⊩¹ t ≡ u ∷ .U / Uᵣ (Uᵣ l′ l< ⊢Γ) = Γ ⊩¹U t ≡ u ∷U/ l< Γ ⊩¹ t ≡ u ∷ A / ℕᵣ D = Γ ⊩ℕ t ≡ u ∷ℕ Γ ⊩¹ t ≡ u ∷ A / Emptyᵣ D = Γ ⊩Empty t ≡ u ∷Empty Γ ⊩¹ t ≡ u ∷ A / Unitᵣ D = Γ ⊩Unit t ≡ u ∷Unit Γ ⊩¹ t ≡ u ∷ A / ne neA = Γ ⊩ne t ≡ u ∷ A / neA Γ ⊩¹ t ≡ u ∷ A / Bᵣ BΠ ΠA = Γ ⊩¹Π t ≡ u ∷ A / ΠA Γ ⊩¹ t ≡ u ∷ A / Bᵣ BΣ ΣA = Γ ⊩¹Σ t ≡ u ∷ A / ΣA Γ ⊩¹ t ≡ u ∷ A / emb l< [A] = Γ ⊩ t ≡ u ∷ A / [A] where open LogRelKit (rec l<) kit : LogRelKit kit = Kit _⊩¹U _⊩¹B⟨_⟩_ _⊩¹_ _⊩¹_≡_/_ _⊩¹_∷_/_ _⊩¹_≡_∷_/_ open LogRel public using (Uᵣ; ℕᵣ; Emptyᵣ; Unitᵣ; ne; Bᵣ; B₌; emb; Uₜ; Uₜ₌) -- Patterns for the non-records of Π pattern Πₜ f d funcF f≡f [f] [f]₁ = f , d , funcF , f≡f , [f] , [f]₁ pattern Πₜ₌ f g d d′ funcF funcG f≡g [f] [g] [f≡g] = f , g , d , d′ , funcF , funcG , f≡g , [f] , [g] , [f≡g] pattern Σₜ p d pProd p≅p [fst] [snd] = p , d , pProd , p≅p , ([fst] , [snd]) pattern Σₜ₌ p r d d′ pProd rProd p≅r [t] [u] [fstp] [fstr] [fst≡] [snd≡] = p , r , d , d′ , pProd , rProd , p≅r , [t] , [u] , ([fstp] , [fstr] , [fst≡] , [snd≡]) pattern Uᵣ′ a b c = Uᵣ (Uᵣ a b c) pattern ne′ a b c d = ne (ne a b c d) pattern Bᵣ′ W a b c d e f g h i = Bᵣ W (Bᵣ a b c d e f g h i) pattern Πᵣ′ a b c d e f g h i = Bᵣ′ BΠ a b c d e f g h i pattern Σᵣ′ a b c d e f g h i = Bᵣ′ BΣ a b c d e f g h i logRelRec : ∀ l {l′} → l′ < l → LogRelKit logRelRec ⁰ = λ () logRelRec ¹ 0<1 = LogRel.kit ⁰ (λ ()) kit : ∀ (i : TypeLevel) → LogRelKit kit l = LogRel.kit l (logRelRec l) -- a bit of repetition in "kit ¹" definition, would work better with Fin 2 for -- TypeLevel because you could recurse. _⊩′⟨_⟩U : (Γ : Con Term) (l : TypeLevel) → Set Γ ⊩′⟨ l ⟩U = Γ ⊩U where open LogRelKit (kit l) _⊩′⟨_⟩B⟨_⟩_ : (Γ : Con Term) (l : TypeLevel) (W : BindingType) → Term → Set Γ ⊩′⟨ l ⟩B⟨ W ⟩ A = Γ ⊩B⟨ W ⟩ A where open LogRelKit (kit l) _⊩⟨_⟩_ : (Γ : Con Term) (l : TypeLevel) → Term → Set Γ ⊩⟨ l ⟩ A = Γ ⊩ A where open LogRelKit (kit l) _⊩⟨_⟩_≡_/_ : (Γ : Con Term) (l : TypeLevel) (A B : Term) → Γ ⊩⟨ l ⟩ A → Set Γ ⊩⟨ l ⟩ A ≡ B / [A] = Γ ⊩ A ≡ B / [A] where open LogRelKit (kit l) _⊩⟨_⟩_∷_/_ : (Γ : Con Term) (l : TypeLevel) (t A : Term) → Γ ⊩⟨ l ⟩ A → Set Γ ⊩⟨ l ⟩ t ∷ A / [A] = Γ ⊩ t ∷ A / [A] where open LogRelKit (kit l) _⊩⟨_⟩_≡_∷_/_ : (Γ : Con Term) (l : TypeLevel) (t u A : Term) → Γ ⊩⟨ l ⟩ A → Set Γ ⊩⟨ l ⟩ t ≡ u ∷ A / [A] = Γ ⊩ t ≡ u ∷ A / [A] where open LogRelKit (kit l)
Formal statement is: lemma lim_cnj: "((\<lambda>x. cnj(f x)) \<longlongrightarrow> cnj l) F \<longleftrightarrow> (f \<longlongrightarrow> l) F" Informal statement is: The complex conjugate of a function tends to the complex conjugate of $l$ if and only if the function itself tends to $l$.
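A brief justification: conjugation $z \mapsto \bar z$ is continuous, so $f \longrightarrow l$ implies $\bar f \longrightarrow \bar l$; applying the same implication to $\bar f$ and using that conjugation is an involution yields the converse:

$$f \xrightarrow{F} l \;\Longrightarrow\; \overline{f} \xrightarrow{F} \overline{l} \;\Longrightarrow\; f = \overline{\overline{f}} \xrightarrow{F} \overline{\overline{l}} = l.$$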
module Y2016.M12.D14.Exercise where

import Data.Complex

-- import available via 1HaskellADay git repository

import Data.Matrix

{--

I'm thinking about quantum computation. IBM has released a 5-qubit computer
for public experimentation. So, let's experiment.

One way to go about that is to dive right in, so, yes, if you wish: dive
right in.

Another approach is to comprehend the maths behind quantum computation. So,
let's look at that.

I was going to bewail that Shor's prime factors algorithm needs 7 qubits to
work, but, NEWSFLASH! IBM has added Shor's algorithm to their API, so ...
CANCEL BEWAILMENT.

*ahem* Moving on.

First, let's look at qubits. Qubits are 'bra-ket'ted numbers (ket numbers)
with the representation

|0> = | 1 |   or   |1> = | 0 |
      | 0 |              | 1 |

OOH! MATRICES!

exercise 1. Represent ket0 and ket1 states as matrices in Haskell
--}

type Qubit = Matrix (Complex Float) -- where each Float is 1 or 0, but okay

ket0, ket1 :: Qubit
ket0 = undefined
ket1 = undefined

{--
It MAY be helpful to have a show-instance of a qubit that abbreviates the
complex number to something more presentable. Your choice.

A qubit state is most-times in a super-position of |0> or |1> and we
represent that as

|ψ> = α|0> + β|1>

And we KNOW that

|α|² + |β|² = 1

YAY!

Okay. Whatever. So, we have a qubit at |0>-state and we want to flip it to
|1>-state, or vice versa. How do we do that?

We put it through a Pauli X gate

The Pauli X operator is

X = | 0 1 |
    | 1 0 |

That is to say, zero goes to 1 and 1 goes to zero.

exercise 2: represent the Pauli X, Y, and Z operators
--}

type PauliOperator = Matrix (Complex Float)

pauliX, pauliY, pauliZ :: PauliOperator
pauliX = undefined
pauliY = undefined
pauliZ = undefined

-- exercise 3: rotate the qubits ket0 and ket1 through the pauliX operator
-- (figure out what that means). The intended result is:

-- X|0> = |1> and X|1> = |0>

-- what are your results?

rotate :: PauliOperator -> Qubit -> Qubit
rotate p q = undefined
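One possible way to fill in the stubs above — a sketch, not the official 1HaskellADay solution. It assumes only fromLists and multStd from Data.Matrix, represents kets as 2×1 column vectors, and primes the names so it can sit beside the exercise's own declarations:

module Y2016.M12.D14.Sketch where

import Data.Complex (Complex((:+)))
import Data.Matrix (Matrix, fromLists, multStd)

type Qubit = Matrix (Complex Float)
type PauliOperator = Matrix (Complex Float)

-- Kets as 2x1 column vectors.
ket0', ket1' :: Qubit
ket0' = fromLists [[1 :+ 0], [0 :+ 0]]
ket1' = fromLists [[0 :+ 0], [1 :+ 0]]

-- The three Pauli operators as 2x2 matrices.
pauliX', pauliY', pauliZ' :: PauliOperator
pauliX' = fromLists [[0 :+ 0, 1 :+ 0 ], [1 :+ 0, 0    :+ 0]]
pauliY' = fromLists [[0 :+ 0, 0 :+ (-1)], [0 :+ 1, 0    :+ 0]]
pauliZ' = fromLists [[1 :+ 0, 0 :+ 0 ], [0 :+ 0, (-1) :+ 0]]

-- Rotating a qubit through a gate is just matrix multiplication:
-- a 2x2 operator times a 2x1 column vector is again a 2x1 column vector.
rotate' :: PauliOperator -> Qubit -> Qubit
rotate' = multStd

-- ghci> rotate' pauliX' ket0' == ket1'   -- True
-- ghci> rotate' pauliX' ket1' == ket0'   -- True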