Dataset: AI4M
Column: text (string lengths 0 to 3.34M)
The Almond Tree books’ foreign and movie rights continue to grow with the addition of Chinese Complex Characters rights (Taiwan), sold to Faces/Cite Publishing. View all foreign and movie rights and the various cover versions of the book on the Foreign and Movie Rights page.
Require Import Nat. Theorem plus_1_neq_0: forall n: nat, (n + 1) =? 0 = false. Proof. intros n. destruct n as [| n'] eqn:E. - reflexivity. - reflexivity. Qed. Check negb. Theorem negb_involutive: forall b: bool, negb (negb b) = b. Proof. intros b. destruct b. - reflexivity. - reflexivity. Qed. Theorem andb_commutative'' : forall b c, andb b c = andb c b. Proof. intros [] []. - reflexivity. - reflexivity. - reflexivity. - reflexivity. Qed. Theorem andb_true_elim2: forall b c: bool, andb b c = true -> c = true. Proof. intros b c H. destruct c. - reflexivity. - rewrite <- H. destruct b. * reflexivity. * reflexivity. Qed. Theorem zero_nbeq_plus_1: forall n: nat, 0 =? (n + 1) = false. Proof. intros [|n']. reflexivity. reflexivity. Qed. Theorem identity_fn_applied_twice: forall (f: bool->bool), (forall (x: bool), f x = x) -> forall (b: bool), f (f b) = b. Proof. intros f H []. rewrite -> H. rewrite -> H. reflexivity. rewrite -> H. rewrite -> H. reflexivity. Qed. Theorem neg_fn_applied_twice: forall (f: bool->bool), (forall (x: bool), f x = negb x) -> forall (b: bool), f (f b) = b. Proof. intros f H []. rewrite -> H. rewrite -> H. reflexivity. rewrite -> H. rewrite -> H. reflexivity. Qed. Lemma andb_true: forall (b: bool), andb b true = b. Proof. intros []. reflexivity. reflexivity. Qed. Lemma andb_false: forall (b: bool), andb b false = false. Proof. intros []. reflexivity. reflexivity. Qed. Lemma orb_true: forall (b:bool), orb b true = true. Proof. intros []. reflexivity. reflexivity. Qed. Lemma orb_false: forall (b: bool), orb b false = b. Proof. intros []. reflexivity. reflexivity. Qed. Theorem andb_eq_orb: forall (b c: bool), (andb b c = orb b c) -> b = c. Proof. intros b c. destruct c. rewrite -> andb_true. rewrite -> orb_true. intros H. rewrite -> H. reflexivity. rewrite -> andb_false. rewrite -> orb_false. intros H1. rewrite -> H1. reflexivity. Qed. Inductive bin: Type := | Z | A (n: bin) | B (n: bin). Fixpoint incr (m:bin):bin := match m with | Z => B Z | A n => B n | B n => A (incr n) end. Fixpoint bin_to_nat (m: bin): nat := match m with | Z => 0 | A n => 2 * bin_to_nat n | B n => 1 + 2 * bin_to_nat n end. Compute bin_to_nat (incr (A (A (A (A (B Z)))))). Example test_bin_incr1 : (incr (B Z)) = A (B Z). Proof. simpl. reflexivity. Qed. Example test_bin_incr2 : (incr (A (B Z))) = B (B Z). Proof. simpl. reflexivity. Qed. Example test_bin_incr3 : (incr (B (B Z))) = A (A (B Z)). Proof. simpl. reflexivity. Qed. Example test_bin_incr4 : bin_to_nat (A (B Z)) = 2. Proof. simpl. reflexivity. Qed. Example test_bin_incr5 : bin_to_nat (incr (B Z)) = 1 + bin_to_nat (B Z). Proof. simpl. reflexivity. Qed. Example test_bin_incr6 : bin_to_nat (incr (incr (B Z))) = 2 + bin_to_nat (B Z). Proof. simpl. reflexivity. Qed.
Jordella Skincare cleansers are easy to use and a must in one’s everyday regimen, removing pollutants and make-up to keep cells clean, fresh and vibrant. Follow with Green Coffee Antioxidant Toner, which neutralises the skin’s pH level to accept moisture and removes any residue left from cleansing. We do not use alcohol in our products.
//---------------------------------------------------------------------------// // Copyright (c) 2015 Jakub Pola <[email protected]> // // Distributed under the Boost Software License, Version 1.0 // See accompanying file LICENSE_1_0.txt or copy at // http://www.boost.org/LICENSE_1_0.txt // // See http://boostorg.github.com/compute for more information. //---------------------------------------------------------------------------// #define BOOST_TEST_MODULE TestScatterIf #include <boost/test/unit_test.hpp> #include <boost/compute/system.hpp> #include <boost/compute/algorithm/scatter_if.hpp> #include <boost/compute/container/vector.hpp> #include <boost/compute/iterator/constant_buffer_iterator.hpp> #include <boost/compute/iterator/counting_iterator.hpp> #include <boost/compute/functional.hpp> #include "check_macros.hpp" #include "context_setup.hpp" namespace bc = boost::compute; BOOST_AUTO_TEST_CASE(scatter_if_int) { int input_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; bc::vector<int> input(input_data, input_data + 10, queue); int map_data[] = {9, 8, 7, 6, 5, 4, 3, 2, 1, 0}; bc::vector<int> map(map_data, map_data + 10, queue); int stencil_data[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1}; bc::vector<bc::uint_> stencil(stencil_data, stencil_data + 10, queue); bc::vector<int> output(input.size(), -1, queue); bc::scatter_if(input.begin(), input.end(), map.begin(), stencil.begin(), output.begin()); CHECK_RANGE_EQUAL(int, 10, output, (9, -1, 7, -1, 5, -1, 3, -1, 1, -1) ); } BOOST_AUTO_TEST_CASE(scatter_if_constant_indices) { int input_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; bc::vector<int> input(input_data, input_data + 10, queue); int map_data[] = {9, 8, 7, 6, 5, 4, 3, 2, 1, 0}; bc::buffer map_buffer(context, 10 * sizeof(int), bc::buffer::read_only | bc::buffer::use_host_ptr, map_data); int stencil_data[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1}; bc::buffer stencil_buffer(context, 10 * sizeof(bc::uint_), bc::buffer::read_only | bc::buffer::use_host_ptr, stencil_data); bc::vector<int> output(input.size(), -1, queue); bc::scatter_if(input.begin(), input.end(), bc::make_constant_buffer_iterator<int>(map_buffer, 0), bc::make_constant_buffer_iterator<int>(stencil_buffer, 0), output.begin(), queue); CHECK_RANGE_EQUAL(int, 10, output, (9, -1, 7, -1, 5, -1, 3, -1, 1, -1) ); } BOOST_AUTO_TEST_CASE(scatter_if_function) { int input_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; bc::vector<int> input(input_data, input_data + 10, queue); int map_data[] = {9, 8, 7, 6, 5, 4, 3, 2, 1, 0}; bc::vector<int> map(map_data, map_data + 10, queue); int stencil_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; bc::vector<bc::uint_> stencil(stencil_data, stencil_data + 10, queue); bc::vector<int> output(input.size(), -1, queue); BOOST_COMPUTE_FUNCTION(int, gt_than_5, (int x), { if (x > 5) return true; else return false; }); bc::scatter_if(input.begin(), input.end(), map.begin(), stencil.begin(), output.begin(), gt_than_5, queue); CHECK_RANGE_EQUAL(int, 10, output, (9, 8, 7, 6, -1, -1, -1, -1, -1, -1) ); } BOOST_AUTO_TEST_CASE(scatter_if_counting_iterator) { int input_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; bc::vector<int> input(input_data, input_data + 10, queue); int map_data[] = {9, 8, 7, 6, 5, 4, 3, 2, 1, 0}; bc::vector<int> map(map_data, map_data + 10, queue); bc::vector<int> output(input.size(), -1, queue); BOOST_COMPUTE_FUNCTION(int, gt_than_5, (int x), { if (x > 5) return true; else return false; }); bc::scatter_if(input.begin(), input.end(), map.begin(), bc::make_counting_iterator<int>(0), output.begin(), gt_than_5, queue); 
CHECK_RANGE_EQUAL(int, 10, output, (9, 8, 7, 6, -1, -1, -1, -1, -1, -1) ); } BOOST_AUTO_TEST_SUITE_END()
However, Georgiev and Kolev's pessimistic assessment of 6.Bg5 has since been called into question, as White succeeded with <unk> (another critical line) in several later high-level games. GM <unk> <unk> wrote in 2013 that after <unk>, "a forced draw results", but that after <unk>, "we reach a very sharp position, with mutual chances."
Unset Strict Universe Declaration. (* File reduced by coq-bug-finder from original input, then from 11542 lines to 325 lines, then from 347 lines to 56 lines, then from 58 lines to 15 lines *) (* coqc version trunk (September 2014) compiled on Sep 25 2014 2:53:46 with OCaml 4.01.0 coqtop version cagnode16:/afs/csail.mit.edu/u/j/jgross/coq-trunk,trunk (bec7e0914f4a7144cd4efa8ffaccc9f72dbdb790) *) Axiom transport : forall {A : Type} (P : A -> Type) {x y : A} (p : x = y) (u : P x), P y. Notation "p # x" := (transport _ p x) (right associativity, at level 65, only parsing). Inductive V : Type@{U'} := | set {A : Type@{U}} (f : A -> V) : V. Module NonPrim. Record hProp := hp { hproptype :> Type ; isp : Set}. Goal forall (A B : Type) (H_f : A -> V -> hProp) (H_g : B -> V -> hProp) (C : Type) (h : C -> V) (b : B) (a : A) (c : C), H_f a (h c) -> H_f a (h c) = H_g b (h c) -> H_g b (h c). intros A B H_f H_g C h b a c H3 H'. exact (@transport hProp (fun x => x) _ _ H' H3). Undo. Set Debug Unification. exact (H' # H3). Defined. End NonPrim. Module Prim. Set Primitive Projections. Set Universe Polymorphism. Record hProp := hp { hproptype :> Type ; isp : Set}. Goal forall (A B : Type) (H_f : A -> V -> hProp) (H_g : B -> V -> hProp) (C : Type) (h : C -> V) (b : B) (a : A) (c : C), H_f a (h c) -> H_f a (h c) = H_g b (h c) -> H_g b (h c). intros A B H_f H_g C h b a c H3 H'. exact (@transport hProp (fun x => x) _ _ H' H3). Undo. Set Debug Unification. exact (H' # H3). (* Toplevel input, characters 7-14: Error: In environment A : Type B : Type H_f : A -> V -> hProp H_g : B -> V -> hProp C : Type h : C -> V b : B a : A c : C H3 : H_f a (h c) H' : H_f a (h c) = H_g b (h c) Unable to unify "hproptype (H_f a (h c))" with "?T (H_f a (h c))". *) Defined. End Prim.
Add Rec LoadPath "." as DEP_AI. Require Import language. Require Import values. Require Import over_instrumented_values. Require Import oitval_instantiation. Lemma is_instantiable_oival : forall (u:oival) (l:label) (vl:val), exists (v':val), instantiate_oival l vl u = Some v'. Proof. intros u l vl. apply (oival_ind2 (fun u => exists (v':val), instantiate_oival l vl u = Some v') (fun v => exists (v':val), instantiate_oival0 l vl v = Some v') (fun c => exists (c':env), instantiate_oienv l vl c = Some c')); intros; simpl; eauto. induction (apply_impact_fun l vl d). exists a; reflexivity. assumption. unfold option_map. inversion H. rewrite H0. eauto. unfold option_map. inversion H. rewrite H0. eauto. unfold option_map2. inversion H. inversion H0. rewrite H1. rewrite H2. eauto. unfold option_map. inversion H. rewrite H0. eauto. unfold option_map2. inversion H. inversion H0. rewrite H1. rewrite H2. eauto. Qed. Lemma is_instantiable_oitval : forall (uu:oitval) (td:oitdeps) (u:oival) (l:label) (vl:val), uu = OIV td u -> apply_timpact_fun l vl td <> Some true -> exists (v':val), instantiate_oitval l vl uu = Some v'. Proof. intros uu td u l vl H H0. rewrite H; simpl. induction (apply_timpact_fun l vl td). induction a. elim (H0 eq_refl). apply is_instantiable_oival. apply is_instantiable_oival. Qed. Lemma is_instantiable_oienv : forall (c:oienv) (l:label) (vl:val), exists (c':env), instantiate_oienv l vl c = Some c'. Proof. intros u l vl. apply (oienv_ind2 (fun u => exists (v':val), instantiate_oival l vl u = Some v') (fun v => exists (v':val), instantiate_oival0 l vl v = Some v') (fun c => exists (c':env), instantiate_oienv l vl c = Some c')); intros; simpl; eauto. induction (apply_impact_fun l vl d). exists a; reflexivity. assumption. unfold option_map. inversion H. rewrite H0. eauto. unfold option_map. inversion H. rewrite H0. eauto. unfold option_map2. inversion H. inversion H0. rewrite H1. rewrite H2. eauto. unfold option_map. inversion H. rewrite H0. eauto. unfold option_map2. inversion H. inversion H0. rewrite H1. rewrite H2. eauto. Qed.
[STATEMENT] theorem pmp: assumes "|~ F" and "|~ F \<longrightarrow> G" shows "|~ G" [PROOF STATE] proof (prove) goal (1 subgoal): 1. |~ G [PROOF STEP] using assms[unlifted] [PROOF STATE] proof (prove) using this: F ?w F ?w \<longrightarrow> G ?w goal (1 subgoal): 1. |~ G [PROOF STEP] by auto
Antimony forms two series of halides: <unk>
############################################################ # A function evaluator routine that # finds the formation of a trapped surface, # i.e. black hole formation; see the paper # for its definition: http://arxiv.org/pdf/1508.01614v2.pdf ############################################################ read "/home/arman/FD/FD.mpl": CFD(); MFD(); grid_functions:= {a1,b1,alpha,beta,ctfm,ctfmp}; AVGT := f -> (FD(f,[[1],[0,0,0]]) + FD(f,[[0],[0,0,0]]))/2 ; FD_table[t] := [ [0] , [0,1] ]; spec := [ { i = [1,1,1] , b=xmin } = 1+myzero*x(i), { i = [2,Nx-1,1] } = (AVGT(Gen_Sten(1/alpha(t,x)))*Gen_Sten(diff(b1(t,x),t)) + AVGT( Gen_Sten( (1/a1(t,x) - beta(t,x)/alpha(t,x))*( diff(b1(t,x),x)/ctfmp(x) + b1(t,x)/ctfm(x) ) ) ) )/ (AVGT(Gen_Sten(b1(t,x)))), { i=[Nx,Nx,1], b=xmax } = myzero*x(i) ]; A_Gen_Eval_Code(spec,input="d",proc_name="compute_th");
module Issue215 where open import Imports.Bool {-# COMPILE GHC Bool = data Bool (True | False) #-}
{-# LANGUAGE DataKinds #-} {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE GADTs #-} {-# LANGUAGE PatternSynonyms #-} {-# LANGUAGE RankNTypes #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# OPTIONS_GHC -Wall #-} -- | This module is wrongly named. module VectorSpace ( LinMap (.., LI), lmul, HasDim(Dim, dimDict), toRawMatrix, evalL, L (..), linear, VectorSpace (..), toVector, fromVector, ) where import Data.Constraint (Dict (..), withDict, (:-)) import Data.Proxy (Proxy (..)) import GHC.TypeLits import Overloaded.Categories import qualified Control.Category as C import qualified Data.Constraint.Nat as C import qualified Numeric.LinearAlgebra as L -- import qualified Numeric.LinearAlgebra.Static as LS data LinMap a b where LZ :: LinMap a b LD :: Double -> LinMap a a LH :: LinMap a b -> LinMap a c -> LinMap a (b, c) LV :: LinMap a c -> LinMap b c -> LinMap (a, b) c LA :: LinMap a b -> LinMap a b -> LinMap a b deriving instance Show (LinMap a b) pattern LI :: forall a b. () => (b ~ a) => LinMap a b pattern LI = LD 1 lmul :: Double -> LinMap a b -> LinMap a b lmul _ LZ = LZ lmul k (LD x) = LD (k * x) lmul k (LH f g) = LH (lmul k f) (lmul k g) lmul k (LV f g) = LV (lmul k f) (lmul k g) lmul k (LA f g) = LA (lmul k f) (lmul k g) lcomp :: LinMap b c -> LinMap a b -> LinMap a c lcomp LZ _ = LZ lcomp _ LZ = LZ lcomp (LD k) h = lmul k h lcomp h (LD k) = lmul k h lcomp (LA f g) h = LA (lcomp f h) (lcomp g h) lcomp f (LA g h) = LA (lcomp f g) (lcomp f h) lcomp (LH f g) h = LH (lcomp f h) (lcomp g h) lcomp h (LV f g) = LV (lcomp h f) (lcomp h g) lcomp (LV f g) (LH u v) = LA (lcomp f u) (lcomp g v) instance Category LinMap where id = LI (.) = lcomp instance CategoryWith1 LinMap where type Terminal LinMap = () terminal = LZ instance CartesianCategory LinMap where type Product LinMap = (,) proj1 = LV C.id LZ proj2 = LV LZ C.id fanout = LH instance CategoryWith0 LinMap where type Initial LinMap = () initial = LZ instance CocartesianCategory LinMap where type Coproduct LinMap = (,) inl = LH C.id LZ inr = LH LZ C.id fanin = LV instance BicartesianCategory LinMap where distr = LH (LH (LV (LV LI LZ) LZ) (LV LZ LI)) (LH (LV (LV LZ LI) LZ) (LV LZ LI)) newtype L a b = L (forall r. LinMap r a -> LinMap r b) lfst :: LinMap a (b, c) -> LinMap a b lfst (LA f g) = LA (lfst f) (lfst g) lfst (LH f _) = f lfst (LV f g) = LV (lfst f) (lfst g) lfst LZ = LZ lfst (LD k) = LV (LD k) LZ lsnd :: LinMap a (b, c) -> LinMap a c lsnd (LH _ g) = g lsnd (LA f g) = LA (lsnd f) (lsnd g) lsnd (LV f g) = LV (lsnd f) (lsnd g) lsnd LZ = LZ lsnd (LD k) = LV LZ (LD k) linitial :: LinMap r () -> LinMap r a linitial _ = LZ linear :: Double -> L a a linear k = L $ lmul k -- lmult :: Double -> Double -> LinMap r (a, a) -> LinMap r a -- lmult x y (LH f g) = LA (LK y f) (LK x g) -- lmult x y (LV f g) = LV (lmult x y f) (lmult x y g) -- lmult x y (LA f g) = LA (lmult x y f) (lmult x y g) -- lmult x y (LK k f) = LK k (lmult x y f) -- lmult _ _ LZ = LZ -- lmult x y LI = LV (LK y LI) (LK x LI) instance Category L where id = L id L f . L g = L (f . g) instance CategoryWith1 L where type Terminal L = () terminal = L (\_ -> LZ) instance CartesianCategory L where type Product L = (,) proj1 = L lfst proj2 = L lsnd fanout (L f) (L g) = L $ \x -> LH (f x) (g x) instance CategoryWith0 L where type Initial L = () initial = L linitial -- Is this correct? 
instance CocartesianCategory L where type Coproduct L = (,) inl = L $ \f -> LH f LZ inr = L $ \g -> LH LZ g fanin (L f) (L g) = L $ \x -> LA (f (lfst x)) (g (lsnd x)) class HasDim a where type Dim a :: Nat dimDict :: Proxy a -> Dict (KnownNat (Dim a)) splitPair :: (a ~ (b, c)) => (Dict (HasDim b), Dict (HasDim c)) splitPair = error "impossible: splitPair" instance HasDim () where type Dim () = 0 dimDict _ = Dict instance HasDim Double where type Dim Double = 1 dimDict _ = Dict instance (HasDim a, HasDim b) => HasDim (a, b) where type Dim (a, b) = Dim a + Dim b dimDict _ = withDimDict (Proxy :: Proxy a) $ withDimDict (Proxy :: Proxy b) $ withDict (C.plusNat :: (KnownNat (Dim a), KnownNat (Dim b)) :- KnownNat (Dim a + Dim b)) Dict splitPair = (Dict, Dict) withDimDict :: HasDim a => Proxy a -> (KnownNat (Dim a) => r) -> r withDimDict p = withDict (dimDict p) dim :: forall a. HasDim a => Proxy a -> Int dim p = withDimDict p $ fromInteger $ natVal (Proxy :: Proxy (Dim a)) toRawMatrix :: forall a b. (HasDim a, HasDim b) => LinMap a b -> L.Matrix Double toRawMatrix LZ = (dim (Proxy :: Proxy a) L.>< dim (Proxy :: Proxy b)) (repeat 0) toRawMatrix (LD k) = L.scale k (L.ident (dim (Proxy :: Proxy a))) toRawMatrix (LA f g) = L.add (toRawMatrix f) (toRawMatrix g) toRawMatrix (LH f g) = go splitPair f g where go :: (Dict (HasDim x), Dict (HasDim y)) -> LinMap a x -> LinMap a y -> L.Matrix Double go (Dict, Dict) f' g' = toRawMatrix f' L.||| toRawMatrix g' toRawMatrix (LV f g) = go splitPair f g where go :: (Dict (HasDim x), Dict (HasDim y)) -> LinMap x b -> LinMap y b -> L.Matrix Double go (Dict, Dict) f' g' = toRawMatrix f' L.=== toRawMatrix g' evalL :: (HasDim a, HasDim b) => L a b -> L.Matrix Double evalL (L f) = toRawMatrix (f (LD 1)) -- toStaticMatrix :: forall a b. (HasDim a, HasDim b) => LinMap a b -> LS.L (Dim a) (Dim b) -- toStaticMatrix LZ = -- withDimDict (Proxy :: Proxy a) $ -- withDimDict (Proxy :: Proxy b) 0 -- toStaticMatrix LI = -- withDimDict (Proxy :: Proxy a) LS.eye -- toStaticMatrix (LA f g) = -- withDimDict (Proxy :: Proxy a) $ -- withDimDict (Proxy :: Proxy b) $ -- L.add (toStaticMatrix f) (toStaticMatrix g) -- toStaticMatrix (LK k f) = -- withDimDict (Proxy :: Proxy a) $ -- withDimDict (Proxy :: Proxy b) $ -- toStaticMatrix f LS.<> LS.diag (LS.konst k) -- toStaticMatrix (LH f g) = go splitPair f g where -- go :: forall x y. (x,y) ~ b => (Dict (HasDim x), Dict (HasDim y)) -> LinMap a x -> LinMap a y -> LS.L (Dim a) (Dim x + Dim y) -- go (Dict, Dict) f' g' = -- withDimDict (Proxy :: Proxy a) $ -- withDimDict (Proxy :: Proxy b) $ -- withDimDict (Proxy :: Proxy x) $ -- withDimDict (Proxy :: Proxy y) $ -- toStaticMatrix f' LS.||| toStaticMatrix g' -- toStaticMatrix (LV f g) = go splitPair f g where -- go :: forall x y. 
(x,y) ~ a => (Dict (HasDim x), Dict (HasDim y)) -> LinMap x b -> LinMap y b -> LS.L (Dim x + Dim y) (Dim b) -- go (Dict, Dict) f' g' = -- withDimDict (Proxy :: Proxy a) $ -- withDimDict (Proxy :: Proxy b) $ -- withDimDict (Proxy :: Proxy x) $ -- withDimDict (Proxy :: Proxy y) $ -- toStaticMatrix f' LS.=== toStaticMatrix g' ------------------------------------------------------------------------------- -- Vector space ------------------------------------------------------------------------------- class HasDim a => VectorSpace a where toVector' :: a -> [Double] -> [Double] fromVector' :: [Double] -> (a -> [Double] -> r) -> r toVector :: VectorSpace a => a -> [Double] toVector x = toVector' x [] fromVector :: VectorSpace a => [Double] -> a fromVector ds = fromVector' ds const instance VectorSpace Double where toVector' d = (d :) fromVector' [] k = k 0 [] fromVector' (d:ds) k = k d ds instance (VectorSpace a, VectorSpace b) => VectorSpace (a, b) where toVector' (a, b) = toVector' a . toVector' b fromVector' xs k = fromVector' xs $ \a ys -> fromVector' ys $ \b zs -> k (a, b) zs
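To make the `evalL`/`linear` interface above concrete, here is a minimal usage sketch; it is not part of the original module, and it assumes the file above compiles as the module `VectorSpace` against hmatrix as imported there:

```haskell
-- Minimal, hypothetical driver for the VectorSpace module above.
-- `linear 2` builds the scaling map, and `evalL` materialises it as a
-- raw hmatrix Matrix Double; for L Double Double this is a 1x1 matrix.
module Main (main) where

import VectorSpace (L, evalL, linear)

main :: IO ()
main = print (evalL (linear 2 :: L Double Double))  -- the 1x1 matrix [2.0]
```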
! { dg-do compile } module op implicit none type a integer i end type a type b real i end type b interface operator(==) module procedure f1 end interface operator(.eq.) interface operator(.eq.) module procedure f2 end interface operator(==) interface operator(/=) module procedure f1 end interface operator(.ne.) interface operator(.ne.) module procedure f2 end interface operator(/=) interface operator(<=) module procedure f1 end interface operator(.le.) interface operator(.le.) module procedure f2 end interface operator(<=) interface operator(<) module procedure f1 end interface operator(.lt.) interface operator(.lt.) module procedure f2 end interface operator(<) interface operator(>=) module procedure f1 end interface operator(.ge.) interface operator(.ge.) module procedure f2 end interface operator(>=) interface operator(>) module procedure f1 end interface operator(.gt.) interface operator(.gt.) module procedure f2 end interface operator(>) contains function f2(x,y) logical f2 type(a), intent(in) :: x, y end function f2 function f1(x,y) logical f1 type(b), intent(in) :: x, y end function f1 end module op
{- Terms of the language. Based on Pfenning and Davies' "Judgmental reconstruction of modal logic." -} module Syntax.Terms where open import Syntax.Types open import Syntax.Context mutual -- Pure terms of the language, expressed as typing judgements infix 10 _⊢_ data _⊢_ : Context -> Judgement -> Set where -- | Simply typed lambda calculus -- Variables var : ∀{Γ A} -> A ∈ Γ ------- -> Γ ⊢ A -- Lambda abstraction lam : ∀{Γ A B} -> Γ , A now ⊢ B now ------------------- -> Γ ⊢ A => B now -- Application _$_ : ∀{Γ A B} -> Γ ⊢ A => B now -> Γ ⊢ A now ------------------------------- -> Γ ⊢ B now -- | Basic data types -- Unit --------------- unit : ∀{Γ} -> Γ ⊢ Unit now -- Pair of two terms [_,,_] : ∀{Γ A B} -> Γ ⊢ A now -> Γ ⊢ B now ---------------------------- -> Γ ⊢ A & B now -- First projection fst : ∀{Γ A B} -> Γ ⊢ A & B now --------------- -> Γ ⊢ A now -- Second projection snd : ∀{Γ A B} -> Γ ⊢ A & B now --------------- -> Γ ⊢ B now -- Left injection inl : ∀{Γ A B} -> Γ ⊢ A now --------------- -> Γ ⊢ A + B now -- Right injection inr : ∀{Γ A B} -> Γ ⊢ B now --------------- -> Γ ⊢ A + B now -- Case split case_inl↦_||inr↦_ : ∀{Γ A B C} -> Γ ⊢ A + B now -> Γ , A now ⊢ C now -> Γ , B now ⊢ C now ------------------------------------------- -> Γ ⊢ C now -- | Modal operators -- A stable type can be sampled now sample : ∀{A Γ} -> Γ ⊢ A always -------------- -> Γ ⊢ A now -- Types in stable contexts are always inhabited stable : ∀{Γ A} -> Γ ˢ ⊢ A now -------------- -> Γ ⊢ A always -- Signal constructor sig : ∀{Γ A} -> Γ ⊢ A always ------------------ -> Γ ⊢ Signal A now -- Signal destructor letSig_In_ : ∀{Γ A B} -> Γ ⊢ Signal A now -> Γ , A always ⊢ B now ---------------------------------------- -> Γ ⊢ B now -- Event constructor event : ∀{Γ A} -> Γ ⊨ A now ------------------ -> Γ ⊢ Event A now -- Computational terms of the language infix 10 _⊨_ data _⊨_ : Context -> Judgement -> Set where -- Pure term is a computational term pure : ∀{A Γ} -> Γ ⊢ A ------- -> Γ ⊨ A -- Computational signal destructor letSig_InC_ : ∀{Γ A B} -> Γ ⊢ Signal A now -> Γ , A always ⊨ B now ---------------------------------------------- -> Γ ⊨ B now -- Event destructor letEvt_In_ : ∀{Γ A B} -> Γ ⊢ Event A now -> Γ ˢ , A now ⊨ B now -------------------------------------------- -> Γ ⊨ B now -- Select the event that happens first select_↦_||_↦_||both↦_ : ∀{Γ A B C} -> Γ ⊢ Event A now -> Γ ˢ , A now , Event B now ⊨ C now -- A happens first -> Γ ⊢ Event B now -> Γ ˢ , Event A now , B now ⊨ C now -- B happens first -> Γ ˢ , A now , B now ⊨ C now -- A and B happen at the same time ------------------------------------------------- -> Γ ⊨ C now
{-# LANGUAGE CPP , GADTs , KindSignatures , TypeOperators , TypeFamilies , EmptyCase , DataKinds , PolyKinds , ExistentialQuantification , FlexibleContexts , OverloadedStrings #-} {-# OPTIONS_GHC -Wall -fwarn-tabs #-} module Language.Hakaru.Sample where import Numeric.SpecFunctions (logFactorial) import qualified Data.Number.LogFloat as LF import qualified Math.Combinatorics.Exact.Binomial as EB -- import qualified Numeric.Integration.TanhSinh as TS import qualified System.Random.MWC as MWC import qualified System.Random.MWC.CondensedTable as MWC import qualified System.Random.MWC.Distributions as MWCD import qualified Data.Vector as V import Data.STRef import Data.Sequence (Seq) import qualified Data.Foldable as F import qualified Data.List.NonEmpty as L import Data.List.NonEmpty (NonEmpty(..)) import Data.Maybe (fromMaybe) #if __GLASGOW_HASKELL__ < 710 import Control.Applicative (Applicative(..), (<$>)) #endif import Control.Monad import Control.Monad.ST import Control.Monad.Identity import Control.Monad.Trans.Maybe import Control.Monad.State.Strict import qualified Data.IntMap as IM import Data.Number.Nat (fromNat) import Data.Number.Natural (fromNatural, fromNonNegativeRational, Natural, unsafeNatural) import Language.Hakaru.Types.DataKind import Language.Hakaru.Types.Coercion import Language.Hakaru.Types.Sing import Language.Hakaru.Types.HClasses import Language.Hakaru.Syntax.IClasses import Language.Hakaru.Syntax.TypeOf import Language.Hakaru.Syntax.Value import Language.Hakaru.Syntax.Reducer import Language.Hakaru.Syntax.Datum import Language.Hakaru.Syntax.DatumCase import Language.Hakaru.Syntax.AST import Language.Hakaru.Syntax.ABT data EAssoc = forall a. EAssoc {-# UNPACK #-} !(Variable a) !(Value a) newtype Env = Env (IM.IntMap EAssoc) emptyEnv :: Env emptyEnv = Env IM.empty updateEnv :: EAssoc -> Env -> Env updateEnv v@(EAssoc x _) (Env xs) = Env $ IM.insert (fromNat $ varID x) v xs updateEnvs :: List1 Variable xs -> List1 Value xs -> Env -> Env updateEnvs Nil1 Nil1 env = env updateEnvs (Cons1 x xs) (Cons1 y ys) env = updateEnvs xs ys (updateEnv (EAssoc x y) env) lookupVar :: Variable a -> Env -> Maybe (Value a) lookupVar x (Env env) = do EAssoc x' e' <- IM.lookup (fromNat $ varID x) env Refl <- varEq x x' return e' --------------------------------------------------------------- -- Makes use of Atkinson's algorithm as described in: -- Monte Carlo Statistical Methods pg. 
55 -- -- Further discussion at: -- http://www.johndcook.com/blog/2010/06/14/generating-poisson-random-values/ poisson_rng :: Double -> MWC.GenIO -> IO Int poisson_rng lambda g' = make_poisson g' where smu = sqrt lambda b = 0.931 + 2.53*smu a = -0.059 + 0.02483*b vr = 0.9277 - 3.6224/(b - 2) arep = 1.1239 + 1.1368/(b - 3.4) lnlam = log lambda make_poisson :: MWC.GenIO -> IO Int make_poisson g = do u <- MWC.uniformR (-0.5,0.5) g v <- MWC.uniformR (0,1) g let us = 0.5 - abs u k = floor $ (2*a / us + b)*u + lambda + 0.43 case () of () | us >= 0.07 && v <= vr -> return k () | k < 0 -> make_poisson g () | us <= 0.013 && v > us -> make_poisson g () | accept_region us v k -> return k _ -> make_poisson g accept_region :: Double -> Double -> Int -> Bool accept_region us v k = log (v * arep / (a/(us*us)+b)) <= -lambda + fromIntegral k * lnlam - logFactorial k normalize :: [Value 'HProb] -> (LF.LogFloat, Double, [Double]) normalize [] = (0, 0, []) normalize [(VProb x)] = (x, 1, [1]) normalize xs = (m, y, ys) where xs' = map (\(VProb x) -> x) xs m = maximum xs' ys = [ LF.fromLogFloat (x/m) | x <- xs' ] y = sum ys normalizeVector :: Value ('HArray 'HProb) -> (LF.LogFloat, Double, V.Vector Double) normalizeVector (VArray xs) = let xs' = V.map (\(VProb x) -> x) xs in case V.length xs of 0 -> (0, 0, V.empty) 1 -> (V.unsafeHead xs', 1, V.singleton 1) _ -> let m = V.maximum xs' ys = V.map (\x -> LF.fromLogFloat (x/m)) xs' y = V.sum ys in (m, y, ys) --------------------------------------------------------------- runEvaluate :: (ABT Term abt) => abt '[] a -> Value a runEvaluate prog = evaluate prog emptyEnv evaluate :: (ABT Term abt) => abt '[] a -> Env -> Value a evaluate e env = caseVarSyn e (evaluateVar env) (flip evaluateTerm env) evaluateVar :: Env -> Variable a -> Value a evaluateVar env v = case lookupVar v env of Nothing -> error "variable not found!" Just a -> a evaluateTerm :: (ABT Term abt) => Term abt a -> Env -> Value a evaluateTerm t env = case t of o :$ es -> evaluateSCon o es env NaryOp_ o es -> evaluateNaryOp o es env Literal_ v -> evaluateLiteral v Empty_ _ -> evaluateEmpty Array_ n es -> evaluateArray n es env ArrayLiteral_ es -> VArray . 
V.fromList $ map (flip evaluate env) es Bucket b e rs -> evaluateBucket b e rs env Datum_ d -> evaluateDatum d env Case_ o es -> evaluateCase o es env Superpose_ es -> evaluateSuperpose es env Reject_ _ -> VMeasure $ \_ _ -> return Nothing evaluateSCon :: (ABT Term abt) => SCon args a -> SArgs abt args -> Env -> Value a evaluateSCon Lam_ (e1 :* End) env = caseBind e1 $ \x e1' -> VLam $ \v -> evaluate e1' (updateEnv (EAssoc x v) env) evaluateSCon App_ (e1 :* e2 :* End) env = case evaluate e1 env of VLam f -> f (evaluate e2 env) evaluateSCon Let_ (e1 :* e2 :* End) env = let v = evaluate e1 env in caseBind e2 $ \x e2' -> evaluate e2' (updateEnv (EAssoc x v) env) evaluateSCon (CoerceTo_ c) (e1 :* End) env = coerceTo c $ evaluate e1 env evaluateSCon (UnsafeFrom_ c) (e1 :* End) env = coerceFrom c $ evaluate e1 env evaluateSCon (PrimOp_ o) es env = evaluatePrimOp o es env evaluateSCon (ArrayOp_ o) es env = evaluateArrayOp o es env evaluateSCon (MeasureOp_ m) es env = evaluateMeasureOp m es env evaluateSCon Dirac (e1 :* End) env = VMeasure $ \p _ -> return $ Just (evaluate e1 env, p) evaluateSCon MBind (e1 :* e2 :* End) env = case evaluate e1 env of VMeasure m1 -> VMeasure $ \ p g -> do x <- m1 p g case x of Nothing -> return Nothing Just (a, p') -> caseBind e2 $ \x' e2' -> case evaluate e2' (updateEnv (EAssoc x' a) env) of VMeasure y -> y p' g evaluateSCon Plate (n :* e2 :* End) env = case evaluate n env of VNat n' -> caseBind e2 $ \x e' -> VMeasure $ \(VProb p) g -> runMaybeT $ do (v', ps) <- fmap V.unzip . V.mapM (performMaybe g) $ V.generate (fromInteger $ fromNatural n') $ \v -> evaluate e' $ updateEnv (EAssoc x . VNat $ intToNatural v) env return ( VArray v' , VProb $ p * V.product (V.map (\(VProb y) -> y) ps) ) where performMaybe :: MWC.GenIO -> Value ('HMeasure a) -> MaybeT IO (Value a, Value 'HProb) performMaybe g (VMeasure m) = MaybeT $ m (VProb 1) g evaluateSCon Chain (n :* s :* e :* End) env = case (evaluate n env, evaluate s env) of (VNat n', start) -> caseBind e $ \x e' -> let s' = VLam $ \v -> evaluate e' (updateEnv (EAssoc x v) env) in VMeasure (\(VProb p) g -> runMaybeT $ do (evaluates, sout) <- runStateT (replicateM (unsafeInt n') $ convert g s') start let (v', ps) = unzip evaluates bodyType :: Sing ('HMeasure (HPair a b)) -> Sing ('HArray a) bodyType = SArray . fst . sUnPair . sUnMeasure return ( VDatum $ dPair_ (bodyType $ caseBind e (const typeOf)) (typeOf s) (VArray . 
V.fromList $ v') sout , VProb $ p * product (map (\(VProb y) -> y) ps) )) where convert :: MWC.GenIO -> Value (s ':-> 'HMeasure (HPair a s)) -> StateT (Value s) (MaybeT IO) (Value a, Value 'HProb) convert g (VLam f) = StateT $ \s' -> case f s' of VMeasure f' -> do (as'', p') <- MaybeT (f' (VProb 1) g) let (a, s'') = unPair as'' return ((a, p'), s'') unPair :: Value (HPair a b) -> (Value a, Value b) unPair (VDatum (Datum "pair" _typ (Inl (Et (Konst a) (Et (Konst b) Done))))) = (a, b) unPair x = case x of {} evaluateSCon (Summate hd hs) (e1 :* e2 :* e3 :* End) env = case (evaluate e1 env, evaluate e2 env) of (lo, hi) -> caseBind e3 $ \x e3' -> foldl (\t i -> evalOp (Sum hs) t $ evaluate e3' (updateEnv (EAssoc x i) env)) (identityElement $ Sum hs) (enumFromUntilValue hd lo hi) evaluateSCon (Product hd hs) (e1 :* e2 :* e3 :* End) env = case (evaluate e1 env, evaluate e2 env) of (lo, hi) -> caseBind e3 $ \x e3' -> foldl (\t i -> evalOp (Prod hs) t $ evaluate e3' (updateEnv (EAssoc x i) env)) (identityElement $ Prod hs) (enumFromUntilValue hd lo hi) evaluateSCon s _ _ = error $ "TODO: evaluateSCon{" ++ show s ++ "}" evaluatePrimOp :: ( ABT Term abt, typs ~ UnLCs args, args ~ LCs typs) => PrimOp typs a -> SArgs abt args -> Env -> Value a evaluatePrimOp Not (e1 :* End) env = case evaluate e1 env of VDatum a -> if a == dTrue then VDatum dFalse else VDatum dTrue evaluatePrimOp Pi End _ = VProb . LF.logFloat $ pi evaluatePrimOp Cos (e1 :* End) env = case evaluate e1 env of VReal v1 -> VReal . cos $ v1 evaluatePrimOp Sin (e1 :* End) env = case evaluate e1 env of VReal v1 -> VReal . sin $ v1 evaluatePrimOp Tan (e1 :* End) env = case evaluate e1 env of VReal v1 -> VReal . tan $ v1 evaluatePrimOp RealPow (e1 :* e2 :* End) env = case (evaluate e1 env, evaluate e2 env) of (VProb v1, VReal v2) -> VProb $ LF.pow v1 v2 evaluatePrimOp Choose (e1 :* e2 :* End) env = case (evaluate e1 env, evaluate e2 env) of (VNat v1, VNat v2) -> VNat $ EB.choose v1 v2 evaluatePrimOp Exp (e1 :* End) env = case evaluate e1 env of VReal v1 -> VProb . LF.logToLogFloat $ v1 evaluatePrimOp Log (e1 :* End) env = case evaluate e1 env of VProb v1 -> VReal . LF.logFromLogFloat $ v1 evaluatePrimOp (Infinity h) End _ = case h of HIntegrable_Nat -> error "Can not evaluate infinity for natural numbers" HIntegrable_Prob -> VProb $ LF.logFloat LF.infinity evaluatePrimOp (Equal _) (e1 :* e2 :* End) env = (VDatum . dBool) $ evaluate e1 env == evaluate e2 env evaluatePrimOp (Less _) (e1 :* e2 :* End) env = case (evaluate e1 env, evaluate e2 env) of (VNat v1, VNat v2) -> VDatum $ if v1 < v2 then dTrue else dFalse (VInt v1, VInt v2) -> VDatum $ if v1 < v2 then dTrue else dFalse (VProb v1, VProb v2) -> VDatum $ if v1 < v2 then dTrue else dFalse (VReal v1, VReal v2) -> VDatum $ if v1 < v2 then dTrue else dFalse _ -> error "TODO: evaluatePrimOp{Less}" evaluatePrimOp (NatPow _) (e1 :* e2 :* End) env = case evaluate e2 env of VNat v2 -> let v2' = fromNatural v2 in case evaluate e1 env of VNat v1 -> VNat (v1 ^ v2') VInt v1 -> VInt (v1 ^ v2') VProb v1 -> VProb (v1 ^ v2') VReal v1 -> VReal (v1 ^ v2') _ -> error "NatPow should always return some kind of number" evaluatePrimOp (Negate _) (e1 :* End) env = case evaluate e1 env of VInt v -> VInt (negate v) VReal v -> VReal (negate v) v -> case v of {} evaluatePrimOp (Abs _) (e1 :* End) env = case evaluate e1 env of VInt v -> VNat . unsafeNatural $ abs v VReal v -> VProb . 
LF.logFloat $ abs v v -> case v of {} evaluatePrimOp (Recip _) (e1 :* End) env = case evaluate e1 env of VProb v -> VProb (recip v) VReal v -> VReal (recip v) v -> case v of {} evaluatePrimOp (NatRoot _) (e1 :* e2 :* End) env = case (evaluate e1 env, evaluate e2 env) of (VProb v1, VNat v2) -> VProb $ LF.pow v1 (recip . fromIntegral $ v2) v -> case v of {} evaluatePrimOp (Floor) (e1 :* End) env = case (evaluate e1 env) of VProb v1 -> VNat (floor (LF.fromLogFloat v1)) evaluatePrimOp prim _ _ = error ("TODO: evaluatePrimOp{" ++ show prim ++ "}") evaluateArrayOp :: ( ABT Term abt , typs ~ UnLCs args , args ~ LCs typs) => ArrayOp typs a -> SArgs abt args -> Env -> Value a evaluateArrayOp (Index _) = \(e1 :* e2 :* End) env -> case (evaluate e1 env, evaluate e2 env) of (VArray v, VNat n) -> v V.! unsafeInt n evaluateArrayOp (Size _) = \(e1 :* End) env -> case evaluate e1 env of VArray v -> VNat . intToNatural $ V.length v evaluateArrayOp (Reduce _) = \(e1 :* e2 :* e3 :* End) env -> case ( evaluate e1 env , evaluate e2 env , evaluate e3 env) of (f, a, VArray v) -> V.foldl' (lam2 f) a v evaluateMeasureOp :: ( ABT Term abt , typs ~ UnLCs args , args ~ LCs typs) => MeasureOp typs a -> SArgs abt args -> Env -> Value ('HMeasure a) evaluateMeasureOp Lebesgue = \(e1 :* e2 :* End) env -> case (evaluate e1 env, evaluate e2 env) of (VReal v1, VReal v2) | v1 < v2 -> VMeasure $ \(VProb p) g -> case (isInfinite v1, isInfinite v2) of (False, False) -> do x <- MWC.uniformR (v1, v2) g return $ Just (VReal $ x, VProb $ p * LF.logFloat (v2 - v1)) (False, True) -> do u <- MWC.uniform g let l = log u let n = -l return $ Just (VReal $ v1 + n, VProb $ p * LF.logToLogFloat n) (True, False) -> do u <- MWC.uniform g let l = log u let n = -l return $ Just (VReal $ v2 - n, VProb $ p * LF.logToLogFloat n) (True, True) -> do (u,b) <- MWC.uniform g let l = log u let n = -l return $ Just (VReal $ if b then n else l, VProb $ p * 2 * LF.logToLogFloat n) (VReal _, VReal _) -> error "Lebesgue with length 0 or flipped endpoints" evaluateMeasureOp Counting = \End _ -> VMeasure $ \(VProb p) g -> do let success = LF.logToLogFloat (-3 :: Double) let pow x y = LF.logToLogFloat (LF.logFromLogFloat x * (fromIntegral y :: Double)) u' <- MWCD.geometric0 (LF.fromLogFloat success) g let u = toInteger u' b <- MWC.uniform g return $ Just ( VInt $ if b then -1-u else u , VProb $ p * 2 / pow (1-success) u / success) evaluateMeasureOp Categorical = \(e1 :* End) env -> VMeasure $ \p g -> do let (_,y,ys) = normalizeVector (evaluate e1 env) if not (y > (0::Double)) -- TODO: why not use @y <= 0@ ?? then error "Categorical needs positive weights" else do u <- MWC.uniformR (0, y) g return $ Just ( VNat . intToNatural . fromMaybe 0 . V.findIndex (u <=) . 
V.scanl1' (+) $ ys , p) evaluateMeasureOp Uniform = \(e1 :* e2 :* End) env -> case (evaluate e1 env, evaluate e2 env) of (VReal v1, VReal v2) -> VMeasure $ \p g -> do x <- MWC.uniformR (v1, v2) g return $ Just (VReal x, p) evaluateMeasureOp Normal = \(e1 :* e2 :* End) env -> case (evaluate e1 env, evaluate e2 env) of (VReal v1, VProb v2) -> VMeasure $ \ p g -> do x <- MWCD.normal v1 (LF.fromLogFloat v2) g return $ Just (VReal x, p) evaluateMeasureOp Poisson = \(e1 :* End) env -> case evaluate e1 env of VProb v1 -> VMeasure $ \ p g -> do x <- MWC.genFromTable (MWC.tablePoisson (LF.fromLogFloat v1)) g return $ Just (VNat $ intToNatural x, p) evaluateMeasureOp Gamma = \(e1 :* e2 :* End) env -> case (evaluate e1 env, evaluate e2 env) of (VProb v1, VProb v2) -> VMeasure $ \ p g -> do x <- MWCD.gamma (LF.fromLogFloat v1) (LF.fromLogFloat v2) g return $ Just (VProb $ LF.logFloat x, p) evaluateMeasureOp Beta = \(e1 :* e2 :* End) env -> case (evaluate e1 env, evaluate e2 env) of (VProb v1, VProb v2) -> VMeasure $ \ p g -> do x <- MWCD.beta (LF.fromLogFloat v1) (LF.fromLogFloat v2) g return $ Just (VProb $ LF.logFloat x, p) evaluateNaryOp :: (ABT Term abt) => NaryOp a -> Seq (abt '[] a) -> Env -> Value a evaluateNaryOp s es = F.foldr (evalOp s) (identityElement s) . mapEvaluate es identityElement :: NaryOp a -> Value a identityElement And = VDatum dTrue identityElement (Sum HSemiring_Nat) = VNat 0 identityElement (Sum HSemiring_Int) = VInt 0 identityElement (Sum HSemiring_Prob) = VProb 0 identityElement (Sum HSemiring_Real) = VReal 0 identityElement (Prod HSemiring_Nat) = VNat 1 identityElement (Prod HSemiring_Int) = VInt 1 identityElement (Prod HSemiring_Prob) = VProb 1 identityElement (Prod HSemiring_Real) = VReal 1 identityElement (Max HOrd_Prob) = VProb 0 identityElement (Max HOrd_Real) = VReal LF.negativeInfinity identityElement (Min HOrd_Prob) = VProb (LF.logFloat LF.infinity) identityElement (Min HOrd_Real) = VReal LF.infinity identityElement _ = error "Missing identity elements?" evalOp :: NaryOp a -> Value a -> Value a -> Value a evalOp And (VDatum a) (VDatum b) | a == dTrue && b == dTrue = VDatum dTrue | otherwise = VDatum dFalse evalOp (Sum HSemiring_Nat) (VNat a) (VNat b) = VNat (a + b) evalOp (Sum HSemiring_Int) (VInt a) (VInt b) = VInt (a + b) evalOp (Sum HSemiring_Prob) (VProb a) (VProb b) = VProb (a + b) evalOp (Sum HSemiring_Real) (VReal a) (VReal b) = VReal (a + b) evalOp (Prod HSemiring_Nat) (VNat a) (VNat b) = VNat (a * b) evalOp (Prod HSemiring_Int) (VInt a) (VInt b) = VInt (a * b) evalOp (Prod HSemiring_Prob) (VProb a) (VProb b) = VProb (a * b) evalOp (Prod HSemiring_Real) (VReal a) (VReal b) = VReal (a * b) evalOp (Max HOrd_Prob) (VProb a) (VProb b) = VProb (max a b) evalOp (Max HOrd_Real) (VReal a) (VReal b) = VReal (max a b) evalOp (Min HOrd_Prob) (VProb a) (VProb b) = VProb (min a b) evalOp (Min HOrd_Real) (VReal a) (VReal b) = VReal (min a b) evalOp op _ _ = error ("TODO: evalOp{" ++ show op ++ "}") mapEvaluate :: (ABT Term abt) => Seq (abt '[] a) -> Env -> Seq (Value a) mapEvaluate es env = fmap (flip evaluate env) es evaluateLiteral :: Literal a -> Value a evaluateLiteral (LNat n) = VNat . fromInteger $ fromNatural n -- TODO: catch overflow errors evaluateLiteral (LInt n) = VInt $ fromInteger n -- TODO: catch overflow errors evaluateLiteral (LProb n) = VProb . 
fromRational $ fromNonNegativeRational n evaluateLiteral (LReal n) = VReal $ fromRational n evaluateEmpty :: Value ('HArray a) evaluateEmpty = VArray V.empty evaluateArray :: (ABT Term abt) => (abt '[] 'HNat) -> (abt '[ 'HNat ] a) -> Env -> Value ('HArray a) evaluateArray n e env = case evaluate n env of VNat n' -> caseBind e $ \x e' -> VArray $ V.generate (unsafeInt n') $ \v -> let v' = VNat $ intToNatural v in evaluate e' (updateEnv (EAssoc x v') env) evaluateBucket :: (ABT Term abt) => abt '[] 'HNat -> abt '[] 'HNat -> Reducer abt '[] a -> Env -> Value a evaluateBucket b e rs env = case (evaluate b env, evaluate e env) of (VNat b', VNat e') -> runST $ do s' <- init Nil1 rs env mapM_ (\i -> accum (VNat i) Nil1 rs s' env) [b' .. e' - 1] done s' where init :: (ABT Term abt) => List1 Value xs -> Reducer abt xs a -> Env -> ST s (VReducer s a) init ix (Red_Fanout r1 r2) env = VRed_Pair (type_ r1) (type_ r2) <$> init ix r1 env <*> init ix r2 env init ix (Red_Index n _ mr) env' = let (vars, n') = caseBinds n in case evaluate n' (updateEnvs vars ix env') of VNat n'' -> VRed_Array <$> V.generateM (fromIntegral n'') (\bb -> init (Cons1 (vnat bb) ix) mr env') init ix (Red_Split _ r1 r2) env' = VRed_Pair (type_ r1) (type_ r2) <$> init ix r1 env <*> init ix r2 env' init _ Red_Nop _ = return VRed_Unit init _ (Red_Add h _) _ = VRed_Num <$> newSTRef (identityElement (Sum h)) type_ = typeOfReducer vnat :: Int -> Value 'HNat vnat = VNat . fromIntegral accum :: (ABT Term abt) => Value 'HNat -> List1 Value xs -> Reducer abt xs a -> VReducer s a -> Env -> ST s () accum n ix (Red_Fanout r1 r2) (VRed_Pair _ _ v1 v2) env' = accum n ix r1 v1 env >> accum n ix r2 v2 env' accum n ix (Red_Index n' a1 r2) (VRed_Array v) env' = caseBind a1 $ \i a1' -> let (vars, a1'') = caseBinds a1' VNat ov = evaluate a1'' (updateEnv (EAssoc i n) (updateEnvs vars ix env')) ov' = fromIntegral ov in accum n (Cons1 (VNat ov) ix) r2 (v V.! ov') env accum n ix (Red_Split bb r1 r2) (VRed_Pair _ _ v1 v2) env' = caseBind bb $ \i b' -> let (vars, b'') = caseBinds b' in case evaluate b'' (updateEnv (EAssoc i n) (updateEnvs vars ix env')) of VDatum bb -> if bb == dTrue then accum n ix r1 v1 env' else accum n ix r2 v2 env' accum n ix (Red_Add h ee) (VRed_Num s) env' = caseBind ee $ \i e' -> let (vars, e'') = caseBinds e' v = evaluate e'' (updateEnv (EAssoc i n) (updateEnvs vars ix env')) in modifySTRef' s (evalOp (Sum h) v) accum _ _ Red_Nop _ _ = return () accum _ _ _ _ _ = error "Some impossible combinations happened?" done :: VReducer s a -> ST s (Value a) done (VRed_Num s) = readSTRef s done VRed_Unit = return (VDatum dUnit) done (VRed_Pair s1 s2 v1 v2) = do v1' <- done v1 v2' <- done v2 return (VDatum $ dPair_ s1 s2 v1' v2') done (VRed_Array v) = VArray <$> V.sequence (V.map done v) evaluateDatum :: (ABT Term abt) => Datum (abt '[]) (HData' a) -> Env -> Value (HData' a) evaluateDatum d env = VDatum (fmap11 (flip evaluate env) d) evaluateCase :: forall abt a b . (ABT Term abt) => abt '[] a -> [Branch a abt b] -> Env -> Value b evaluateCase o es env = case runIdentity $ matchBranches evaluateDatum' (evaluate o env) es of Just (Matched rho b) -> evaluate b (extendFromMatch (fromAssocs rho) env) _ -> error "Missing cases in match expression" where extendFromMatch :: [Assoc Value] -> Env -> Env extendFromMatch [] env' = env' extendFromMatch (Assoc x v : xvs) env' = extendFromMatch xvs (updateEnv (EAssoc x v) env') evaluateDatum' :: DatumEvaluator Value Identity evaluateDatum' = return . Just . 
getVDatum getVDatum :: Value (HData' a) -> Datum Value (HData' a) getVDatum (VDatum a) = a evaluateSuperpose :: (ABT Term abt) => NonEmpty (abt '[] 'HProb, abt '[] ('HMeasure a)) -> Env -> Value ('HMeasure a) evaluateSuperpose ((q, m) :| []) env = case evaluate m env of VMeasure m' -> let VProb q' = evaluate q env in VMeasure (\(VProb p) g -> m' (VProb $ p * q') g) evaluateSuperpose pms@((_, m) :| _) env = case evaluate m env of VMeasure m' -> let pms' = L.toList pms weights = map ((flip evaluate env) . fst) pms' (x,y,ys) = normalize weights in VMeasure $ \(VProb p) g -> if not (y > (0::Double)) then return Nothing else do u <- MWC.uniformR (0, y) g case [ m1 | (v,(_,m1)) <- zip (scanl1 (+) ys) pms', u <= v ] of m2 : _ -> case evaluate m2 env of VMeasure m2' -> m2' (VProb $ p * x * LF.logFloat y) g [] -> m' (VProb $ p * x * LF.logFloat y) g ---------------------------------------------------------------- -- Useful 'short-hand' intToNatural :: Int -> Natural intToNatural = unsafeNatural . toInteger unsafeInt :: Natural -> Int unsafeInt = fromInteger . fromNatural ----------------------------------------------------------- fin.
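The `poisson_rng` sampler above (Atkinson's algorithm) can be exercised with a short driver; a sketch, assuming the module compiles as `Language.Hakaru.Sample` and that mwc-random is available:

```haskell
-- Hypothetical driver for poisson_rng; only poisson_rng and the
-- mwc-random API come from above, the rest is illustrative.
module Main (main) where

import Control.Monad (replicateM)
import qualified System.Random.MWC as MWC

import Language.Hakaru.Sample (poisson_rng)

main :: IO ()
main = do
  g  <- MWC.createSystemRandom             -- OS-seeded generator
  ks <- replicateM 5 (poisson_rng 12.5 g)  -- five Poisson(12.5) draws
  print ks
```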
#include <boost/numeric/odeint/external/mtl4/mtl4.hpp>
\documentclass{beamer} \usefonttheme[onlymath]{serif} %\usepackage[utf8]{inputenc} %\usepackage[T1]{fontenc} \usepackage{fontspec} \defaultfontfeatures{Extension = .otf} % ...and this line \usepackage{subfig} \usepackage{multirow} \usepackage{xcolor} \usepackage{amsmath} \usepackage{lmodern} %\usepackage[english]{babel} %\frenchbsetup{CompactItemize=false} % \usepackage[babel]{csquotes} % \usepackage[url=false, doi=false, style=science, backend=bibtex, bibencoding=ascii]{biblatex} % \bibliography{IEEEabrv,bib/OAM} \graphicspath{{img/}} \DeclareGraphicsExtensions{.jpg,.png,.pdf,.eps} \mode<presentation> { % \useoutertheme{infolines} % For themes that have no footer \usetheme{ulaval} \usecolortheme{ulaval} % \setbeamercovered{transparent} \setbeamercovered{invisible} %\setbeamertemplate{navigation symbols}{} % Remove the navigation icons } %\usepackage{bm} % For typesetting bold math (not \mathbold) %\includeonlyframes{} %\usepackage{pgfpages} %\setbeameroption{show notes on second screen} %\setbeameroption{show notes} %\setbeamertemplate{note page}[plain] \logo{ \includegraphics[height=0.5cm]{UL_P} \hspace{.5cm}% \includegraphics[height=0.5cm]{lvsn} \hspace{.5cm} \vspace{.85\paperheight} \includegraphics[height=0.5cm]{logoCNR-ITC}} \title[Cold food chain-IRT application]{Cold food chain: Infrared thermography applied to the evaluation of insulation anomalies in refrigerated vehicles for the transport of food \& Exploration of cold approach in infrared thermography for Non-Destructive Testing} %\subtitle[]{} \author[Lei Lei]{Lei Lei} \institute[Université Laval] { Electrical and Computer Engineering Department \\ Laval University, Quebec City, Canada \\ \medskip % {\emph{[email protected]}} } \date{July 16, 2018} % \today will show current date. Alternatively, you can specify a date.
\AtBeginSection[]{ \begin{frame} \Huge \centerline{\insertsection} % \small \tableofcontents[currentsection, hideothersubsections] \end{frame} } \begin{document} \begin{frame}[label=titre, plain] \titlepage \begin{center} \includegraphics[height=0.9cm]{UL_P}% \hspace{1cm} \includegraphics[height=0.9cm]{lvsn} \hspace{1cm} \includegraphics[height=0.9cm]{logoCNR-ITC} \end{center} \end{frame} \section*{Contents} \begin{frame}[label=toc]{Contents} \setlength{\leftskip}{5cm}% \tableofcontents[subsubsectionstyle=hide] \end{frame} \section{Introduction} \include{chp1} \section{IRT application for ATP} \include{chp2} \include{chp3} \section{Exploration of cold approach} \include{chp4} \include{chp5} \section{Conclusion \& Perspectives} \begin{frame}{Conclusion} \begin{itemize}%[<+->] % \pause \item IRT application for ATP \begin{itemize} \item Thermal resistance model works well \item Homography application offers good results in mapping the roll-container % \item Large size of vehicle panel \item Thermal reflection in vehicle surfaces \pause \bigskip \item Favorable result in panoramic view compared to ATP \item Uniform and repeatable texture inside the truck made automatic feature detection and image stitching more difficult \item Improvements needed in image processing, with more advanced feature detection and description \end{itemize} \end{itemize} \end{frame} \begin{frame}{Conclusion} \begin{itemize}%[<+->] \item Exploration of cold approach \begin{itemize} \item Detection of steel dust, water defects and thermal bridges \item Heating approach provides clearer results \item Compressed air spray is faster \item In practice, the air spray is more convenient than the heating method for achieving successful detection \pause \bigskip \item All techniques highlight part of the flaws in the sample \item PCT post-processing method displays better results for all procedures \item More defects are exhibited in Flash stimulation with PCT processing \item ROC curve analysis and its AUC analysis have elucidated a straightforward classification comparison \item The best values are obtained with the Flash technique with PCT processing, trailed narrowly by the Liquid Nitrogen method. \end{itemize} \end{itemize} \end{frame} \begin{frame}{Perspectives} \large \begin{itemize} \item Exploration of better feature detection in automatic panoramic algorithms \item A suitable software package for post-processing and computation of K-value \item Stimulus strategy of heating one side and cooling the other \item More applications of ROC curve analysis in NDT techniques \end{itemize} \end{frame} \begin{frame} \centering \Huge{\textsf{\textit{Thanks for your attention!}}} \end{frame} \section*{Q \& A} \include{qas} \end{document}
== Politics ==
theory prop_01 imports Main "../../TestTheories/Listing" "../../TestTheories/Naturals" "$HIPSTER_HOME/IsaHipster" begin theorem initLast: "ts \<noteq> Nil \<Longrightarrow> app (init ts) (Cons (last ts) Nil) = ts" by (hipster_induct_schemes) end
using Docile, Lexicon, RCall const api_directory = "api" const modules = Docile.Collector.submodules(RCall) cd(dirname(@__FILE__)) do # Run the doctests *before* we start to generate *any* documentation. for m in modules failures = failed(doctest(m)) if !isempty(failures.results) println("\nDoctests failed, aborting commit.\n") display(failures) exit(1) # Bail when doctests fail. end end # Generate and save the contents of docstrings as markdown files. index = Index() config = Config(md_subheader = :category, category_order = [:module, :function, :method, :type, :typealias, :macro, :global]) for mod in modules update!(index, save(joinpath(api_directory, "$(mod).md"), mod, config)) end save(joinpath(api_directory, "index.md"), index, config) end
Formal statement is: lemma Reals_of_nat [simp]: "of_nat n \<in> \<real>" Informal statement is: Every natural number is a real number.
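For comparison, a loose Lean 4 rendering of the same informal statement (the lemma itself is Isabelle/HOL; this Mathlib analogue is an assumption, not part of the source):

```lean
import Mathlib.Data.Real.Basic

-- Loose analogue of `Reals_of_nat`: every natural number yields a
-- real number via the canonical cast ℕ → ℝ.
example (n : ℕ) : ∃ r : ℝ, r = (n : ℝ) := ⟨(n : ℝ), rfl⟩
```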
[GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β s t : Set α ⊢ restrict s f '' (Subtype.val ⁻¹' t) = f '' (t ∩ s) [PROOFSTEP] rw [restrict_eq, image_comp, image_preimage_eq_inter_range, Subtype.range_coe] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ ⊢ restrict (range f) (extend f g g') = fun x => g (Exists.choose (_ : ↑x ∈ range f)) [PROOFSTEP] classical exact restrict_dite _ _ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ ⊢ restrict (range f) (extend f g g') = fun x => g (Exists.choose (_ : ↑x ∈ range f)) [PROOFSTEP] exact restrict_dite _ _ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ ⊢ restrict (range f)ᶜ (extend f g g') = g' ∘ Subtype.val [PROOFSTEP] classical exact restrict_dite_compl _ _ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ ⊢ restrict (range f)ᶜ (extend f g g') = g' ∘ Subtype.val [PROOFSTEP] exact restrict_dite_compl _ _ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ ⊢ range (extend f g g') ⊆ range g ∪ g' '' (range f)ᶜ [PROOFSTEP] classical rintro _ ⟨y, rfl⟩ rw [extend_def] split_ifs with h exacts [Or.inl (mem_range_self _), Or.inr (mem_image_of_mem _ h)] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ ⊢ range (extend f g g') ⊆ range g ∪ g' '' (range f)ᶜ [PROOFSTEP] rintro _ ⟨y, rfl⟩ [GOAL] case intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ y : β ⊢ extend f g g' y ∈ range g ∪ g' '' (range f)ᶜ [PROOFSTEP] rw [extend_def] [GOAL] case intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ y : β ⊢ (if h : ∃ a, f a = y then g (Classical.choose h) else g' y) ∈ range g ∪ g' '' (range f)ᶜ [PROOFSTEP] split_ifs with h [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ y : β h : ∃ a, f a = y ⊢ g (Classical.choose h) ∈ range g ∪ g' '' (range f)ᶜ case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β g : α → γ g' : β → γ y : β h : ¬∃ a, f a = y ⊢ g' y ∈ range g ∪ g' '' (range f)ᶜ [PROOFSTEP] exacts [Or.inl (mem_range_self _), Or.inr (mem_image_of_mem _ h)] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β hf : Injective f g : α → γ g' : β → γ ⊢ range (extend f g g') = range g ∪ g' '' (range f)ᶜ [PROOFSTEP] refine' (range_extend_subset _ _ _).antisymm _ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β hf : Injective f g : α → γ g' : β → γ ⊢ range g ∪ g' '' (range f)ᶜ ⊆ range (extend f g g') [PROOFSTEP] rintro z (⟨x, rfl⟩ | ⟨y, hy, rfl⟩) [GOAL] case inl.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β hf : Injective f g : α → γ g' : β → γ x : α ⊢ g x ∈ range (extend f g g') case inr.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : α → β hf : Injective f g : α → γ g' : β → γ y : β hy : y ∈ (range f)ᶜ ⊢ g' y ∈ range (extend f g g') [PROOFSTEP] exacts [⟨f x, hf.extend_apply _ _ _⟩, ⟨y, extend_apply' _ _ _ hy⟩] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 f : ι → α s : Set α h : ∀ (x : ι), f x ∈ s ⊢ Injective (codRestrict 
f s h) ↔ Injective f [PROOFSTEP] simp only [Injective, Subtype.ext_iff, val_codRestrict_apply] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β ⊢ EqOn f₁ f₂ {a} ↔ f₁ a = f₂ a [PROOFSTEP] simp [Set.EqOn] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β heq : EqOn f₁ f₂ s t : Set β x : α hx : x ∈ s ⊢ x ∈ f₁ ⁻¹' t ↔ x ∈ f₂ ⁻¹' t [PROOFSTEP] rw [mem_preimage, mem_preimage, heq hx] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β inst✝¹ : Preorder α inst✝ : Preorder β h₁ : MonotoneOn f₁ s h : EqOn f₁ f₂ s ⊢ MonotoneOn f₂ s [PROOFSTEP] intro a ha b hb hab [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a✝ : α b✝ : β inst✝¹ : Preorder α inst✝ : Preorder β h₁ : MonotoneOn f₁ s h : EqOn f₁ f₂ s a : α ha : a ∈ s b : α hb : b ∈ s hab : a ≤ b ⊢ f₂ a ≤ f₂ b [PROOFSTEP] rw [← h ha, ← h hb] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a✝ : α b✝ : β inst✝¹ : Preorder α inst✝ : Preorder β h₁ : MonotoneOn f₁ s h : EqOn f₁ f₂ s a : α ha : a ∈ s b : α hb : b ∈ s hab : a ≤ b ⊢ f₁ a ≤ f₁ b [PROOFSTEP] exact h₁ ha hb hab [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β inst✝¹ : Preorder α inst✝ : Preorder β h₁ : StrictMonoOn f₁ s h : EqOn f₁ f₂ s ⊢ StrictMonoOn f₂ s [PROOFSTEP] intro a ha b hb hab [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a✝ : α b✝ : β inst✝¹ : Preorder α inst✝ : Preorder β h₁ : StrictMonoOn f₁ s h : EqOn f₁ f₂ s a : α ha : a ∈ s b : α hb : b ∈ s hab : a < b ⊢ f₂ a < f₂ b [PROOFSTEP] rw [← h ha, ← h hb] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a✝ : α b✝ : β inst✝¹ : Preorder α inst✝ : Preorder β h₁ : StrictMonoOn f₁ s h : EqOn f₁ f₂ s a : α ha : a ∈ s b : α hb : b ∈ s hab : a < b ⊢ f₁ a < f₁ b [PROOFSTEP] exact h₁ ha hb hab [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g✝ g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β x✝ : ∃ g, ∀ (x : ↑s), f ↑x = ↑(g x) x : α hx : x ∈ s g : ↑s → ↑t hg : ∀ (x : ↑s), f ↑x = ↑(g x) ⊢ f x ∈ t [PROOFSTEP] erw [hg ⟨x, hx⟩] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g✝ g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β x✝ : ∃ g, ∀ (x : ↑s), f ↑x = ↑(g x) x : α hx : x ∈ s g : ↑s → ↑t hg : ∀ (x : ↑s), f ↑x = ↑(g x) ⊢ ↑(g { val := x, property := hx }) ∈ t [PROOFSTEP] apply Subtype.coe_prop [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → 
Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β f : α → α s : Set α h : MapsTo f s s n : ℕ ⊢ (restrict f s s h)^[n] = restrict f^[n] s s (_ : MapsTo f^[n] s s) [PROOFSTEP] funext x [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β f : α → α s : Set α h : MapsTo f s s n : ℕ x : ↑s ⊢ (restrict f s s h)^[n] x = restrict f^[n] s s (_ : MapsTo f^[n] s s) x [PROOFSTEP] rw [Subtype.ext_iff, MapsTo.val_restrict_apply] [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β f : α → α s : Set α h : MapsTo f s s n : ℕ x : ↑s ⊢ ↑((restrict f s s h)^[n] x) = f^[n] ↑x [PROOFSTEP] induction' n with n ihn generalizing x [GOAL] case h.zero α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β f : α → α s : Set α h : MapsTo f s s x✝ x : ↑s ⊢ ↑((restrict f s s h)^[Nat.zero] x) = f^[Nat.zero] ↑x [PROOFSTEP] rfl [GOAL] case h.succ α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β f : α → α s : Set α h : MapsTo f s s x✝ : ↑s n : ℕ ihn : ∀ (x : ↑s), ↑((restrict f s s h)^[n] x) = f^[n] ↑x x : ↑s ⊢ ↑((restrict f s s h)^[Nat.succ n] x) = f^[Nat.succ n] ↑x [PROOFSTEP] simp [Nat.iterate, ihn] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β f : α → β s : Set α ⊢ MapsTo f s (f '' s) [PROOFSTEP] rw [mapsTo'] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g✝ g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β f : α → β g : γ → α s : Set β ⊢ MapsTo f (range g) s ↔ MapsTo (f ∘ g) univ s [PROOFSTEP] rw [← image_univ, maps_image_to] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β ⊢ range (restrictPreimage t f) = Subtype.val ⁻¹' range f [PROOFSTEP] delta Set.restrictPreimage [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β ⊢ range (MapsTo.restrict f (f ⁻¹' t) t (_ : MapsTo f (f ⁻¹' t) t)) = Subtype.val ⁻¹' range f [PROOFSTEP] rw [MapsTo.range_restrict, Set.image_preimage_eq_inter_range, Set.preimage_inter, Subtype.coe_preimage_self, Set.univ_inter] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ ⊢ InjOn f (s₁ ∪ s₂) ↔ InjOn f s₁ ∧ InjOn f s₂ ∧ ∀ (x : α), x ∈ s₁ → ∀ (y : α), y ∈ s₂ → f x ≠ f y [PROOFSTEP] refine' ⟨fun H => ⟨H.mono <| subset_union_left _ _, H.mono <| subset_union_right _ _, _⟩, _⟩ [GOAL] case refine'_1 α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ 
s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ H : InjOn f (s₁ ∪ s₂) ⊢ ∀ (x : α), x ∈ s₁ → ∀ (y : α), y ∈ s₂ → f x ≠ f y [PROOFSTEP] intro x hx y hy hxy [GOAL] case refine'_1 α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ H : InjOn f (s₁ ∪ s₂) x : α hx : x ∈ s₁ y : α hy : y ∈ s₂ hxy : f x = f y ⊢ False [PROOFSTEP] obtain rfl : x = y := H (Or.inl hx) (Or.inr hy) hxy [GOAL] case refine'_1 α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ H : InjOn f (s₁ ∪ s₂) x : α hx : x ∈ s₁ hy : x ∈ s₂ hxy : f x = f x ⊢ False [PROOFSTEP] exact h.le_bot ⟨hx, hy⟩ [GOAL] case refine'_2 α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ ⊢ (InjOn f s₁ ∧ InjOn f s₂ ∧ ∀ (x : α), x ∈ s₁ → ∀ (y : α), y ∈ s₂ → f x ≠ f y) → InjOn f (s₁ ∪ s₂) [PROOFSTEP] rintro ⟨h₁, h₂, h₁₂⟩ [GOAL] case refine'_2.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ h₁ : InjOn f s₁ h₂ : InjOn f s₂ h₁₂ : ∀ (x : α), x ∈ s₁ → ∀ (y : α), y ∈ s₂ → f x ≠ f y ⊢ InjOn f (s₁ ∪ s₂) [PROOFSTEP] rintro x (hx | hx) y (hy | hy) hxy [GOAL] case refine'_2.intro.intro.inl.inl α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ h₁ : InjOn f s₁ h₂ : InjOn f s₂ h₁₂ : ∀ (x : α), x ∈ s₁ → ∀ (y : α), y ∈ s₂ → f x ≠ f y x : α hx : x ∈ s₁ y : α hy : y ∈ s₁ hxy : f x = f y ⊢ x = y case refine'_2.intro.intro.inl.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ h₁ : InjOn f s₁ h₂ : InjOn f s₂ h₁₂ : ∀ (x : α), x ∈ s₁ → ∀ (y : α), y ∈ s₂ → f x ≠ f y x : α hx : x ∈ s₁ y : α hy : y ∈ s₂ hxy : f x = f y ⊢ x = y case refine'_2.intro.intro.inr.inl α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ h₁ : InjOn f s₁ h₂ : InjOn f s₂ h₁₂ : ∀ (x : α), x ∈ s₁ → ∀ (y : α), y ∈ s₂ → f x ≠ f y x : α hx : x ∈ s₂ y : α hy : y ∈ s₁ hxy : f x = f y ⊢ x = y case refine'_2.intro.intro.inr.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Disjoint s₁ s₂ h₁ : InjOn f s₁ h₂ : InjOn f s₂ h₁₂ : ∀ (x : α), x ∈ s₁ → ∀ (y : α), y ∈ s₂ → f x ≠ f y x : α hx : x ∈ s₂ y : α hy : y ∈ s₂ hxy : f x = f y ⊢ x = y [PROOFSTEP] exacts [h₁ hx hy hxy, (h₁₂ _ hx _ hy hxy).elim, (h₁₂ _ hy _ hx hxy.symm).elim, h₂ hx hy hxy] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β 
→ α g' : γ → β a✝ : α b : β f : α → β s : Set α a : α has : ¬a ∈ s ⊢ InjOn f (insert a s) ↔ InjOn f s ∧ ¬f a ∈ f '' s [PROOFSTEP] have : Disjoint s { a } := disjoint_iff_inf_le.mpr fun x ⟨hxs, (hxa : x = a)⟩ => has (hxa ▸ hxs) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a✝ : α b : β f : α → β s : Set α a : α has : ¬a ∈ s this : Disjoint s {a} ⊢ InjOn f (insert a s) ↔ InjOn f s ∧ ¬f a ∈ f '' s [PROOFSTEP] rw [← union_singleton, injOn_union this] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a✝ : α b : β f : α → β s : Set α a : α has : ¬a ∈ s this : Disjoint s {a} ⊢ (InjOn f s ∧ InjOn f {a} ∧ ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ {a} → f x ≠ f y) ↔ InjOn f s ∧ ¬f a ∈ f '' s [PROOFSTEP] simp [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Injective (g ∘ f) ⊢ InjOn g (range f) [PROOFSTEP] rintro _ ⟨x, rfl⟩ _ ⟨y, rfl⟩ H [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : Injective (g ∘ f) x y : α H : g (f x) = g (f y) ⊢ f x = f y [PROOFSTEP] exact congr_arg f (h H) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : MapsTo f s t ⊢ Injective (restrict f s t h) ↔ InjOn f s [PROOFSTEP] rw [h.restrict_eq_codRestrict, injective_codRestrict, injOn_iff_injective] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β inst✝ : Nonempty β x✝ : ∃ f, Injective f f : ↑s → β hf : Injective f ⊢ ∃ f, InjOn f s [PROOFSTEP] lift f to α → β using trivial [GOAL] case intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β inst✝ : Nonempty β x✝ : ∃ f, Injective f f : α → β hf : Injective fun i => f ↑i ⊢ ∃ f, InjOn f s [PROOFSTEP] exact ⟨f, injOn_iff_injective.2 hf⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s t u : Set α hf : InjOn f u hs : s ⊆ u ht : t ⊆ u ⊢ f '' (s ∩ t) = f '' s ∩ f '' t [PROOFSTEP] apply Subset.antisymm (image_inter_subset _ _ _) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s t u : Set α hf : InjOn f u hs : s ⊆ u ht : t ⊆ u ⊢ f '' s ∩ f '' t ⊆ f '' (s ∩ t) [PROOFSTEP] intro x ⟨⟨y, ys, hy⟩, ⟨z, zt, hz⟩⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s t u : Set α hf : InjOn f u hs : s ⊆ u ht : t ⊆ u x : β y : α ys : y ∈ s hy : f y = x z : α zt : z ∈ t 
hz : f z = x ⊢ x ∈ f '' (s ∩ t) [PROOFSTEP] have : y = z := by apply hf (hs ys) (ht zt) rwa [← hz] at hy [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s t u : Set α hf : InjOn f u hs : s ⊆ u ht : t ⊆ u x : β y : α ys : y ∈ s hy : f y = x z : α zt : z ∈ t hz : f z = x ⊢ y = z [PROOFSTEP] apply hf (hs ys) (ht zt) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s t u : Set α hf : InjOn f u hs : s ⊆ u ht : t ⊆ u x : β y : α ys : y ∈ s hy : f y = x z : α zt : z ∈ t hz : f z = x ⊢ f y = f z [PROOFSTEP] rwa [← hz] at hy [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s t u : Set α hf : InjOn f u hs : s ⊆ u ht : t ⊆ u x : β y : α ys : y ∈ s hy : f y = x z : α zt : z ∈ t hz : f z = x this : y = z ⊢ x ∈ f '' (s ∩ t) [PROOFSTEP] rw [← this] at zt [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s t u : Set α hf : InjOn f u hs : s ⊆ u ht : t ⊆ u x : β y : α ys : y ∈ s hy : f y = x z : α zt : y ∈ t hz : f z = x this : y = z ⊢ x ∈ f '' (s ∩ t) [PROOFSTEP] exact ⟨y, ⟨ys, zt⟩, hy⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s t u : Set α f : α → β h : Disjoint s t hf : InjOn f u hs : s ⊆ u ht : t ⊆ u ⊢ Disjoint (f '' s) (f '' t) [PROOFSTEP] rw [disjoint_iff_inter_eq_empty] at h ⊢ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f✝ f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s t u : Set α f : α → β h : s ∩ t = ∅ hf : InjOn f u hs : s ⊆ u ht : t ⊆ u ⊢ f '' s ∩ f '' t = ∅ [PROOFSTEP] rw [← hf.image_inter hs ht, h, image_empty] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g✝ g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β x✝ : ∃ t' g, t ⊆ t' ∧ Surjective g ∧ ∀ (x : ↑s), f ↑x = ↑(g x) y : β hy : y ∈ t t' : Set β g : ↑s → ↑t' htt' : t ⊆ t' hg : Surjective g hfg : ∀ (x : ↑s), f ↑x = ↑(g x) x : ↑s hx : g x = { val := y, property := (_ : y ∈ t') } ⊢ f ↑x = y [PROOFSTEP] rw [hfg, hx, Subtype.coe_mk] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : SurjOn f₁ s t H : EqOn f₁ f₂ s ⊢ SurjOn f₂ s t [PROOFSTEP] rwa [SurjOn, ← H.image_eq] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h₁ : SurjOn f s₁ t₁ h₂ : SurjOn f s₂ t₂ h : InjOn f (s₁ ∪ s₂) ⊢ SurjOn f (s₁ ∩ s₂) (t₁ ∩ t₂) [PROOFSTEP] intro y hy [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h₁ : 
SurjOn f s₁ t₁ h₂ : SurjOn f s₂ t₂ h : InjOn f (s₁ ∪ s₂) y : β hy : y ∈ t₁ ∩ t₂ ⊢ y ∈ f '' (s₁ ∩ s₂) [PROOFSTEP] rcases h₁ hy.1 with ⟨x₁, hx₁, rfl⟩ [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h₁ : SurjOn f s₁ t₁ h₂ : SurjOn f s₂ t₂ h : InjOn f (s₁ ∪ s₂) x₁ : α hx₁ : x₁ ∈ s₁ hy : f x₁ ∈ t₁ ∩ t₂ ⊢ f x₁ ∈ f '' (s₁ ∩ s₂) [PROOFSTEP] rcases h₂ hy.2 with ⟨x₂, hx₂, heq⟩ [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h₁ : SurjOn f s₁ t₁ h₂ : SurjOn f s₂ t₂ h : InjOn f (s₁ ∪ s₂) x₁ : α hx₁ : x₁ ∈ s₁ hy : f x₁ ∈ t₁ ∩ t₂ x₂ : α hx₂ : x₂ ∈ s₂ heq : f x₂ = f x₁ ⊢ f x₁ ∈ f '' (s₁ ∩ s₂) [PROOFSTEP] obtain rfl : x₁ = x₂ := h (Or.inl hx₁) (Or.inr hx₂) heq.symm [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h₁ : SurjOn f s₁ t₁ h₂ : SurjOn f s₂ t₂ h : InjOn f (s₁ ∪ s₂) x₁ : α hx₁ : x₁ ∈ s₁ hy : f x₁ ∈ t₁ ∩ t₂ hx₂ : x₁ ∈ s₂ heq : f x₁ = f x₁ ⊢ f x₁ ∈ f '' (s₁ ∩ s₂) [PROOFSTEP] exact mem_image_of_mem f ⟨hx₁, hx₂⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s : Set α ⊢ SurjOn id s s [PROOFSTEP] simp [SurjOn, subset_rfl] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g✝ g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : SurjOn f s t g : β → γ ⊢ SurjOn (g ∘ f) s (g '' t) [PROOFSTEP] rw [SurjOn, image_comp g f] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g✝ g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : SurjOn f s t g : β → γ ⊢ g '' t ⊆ g '' (f '' s) [PROOFSTEP] exact image_subset _ hf [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β s : Set β t : Set γ hf : Surjective f hg : SurjOn g s t ⊢ SurjOn (g ∘ f) (f ⁻¹' s) t [PROOFSTEP] rwa [SurjOn, image_comp g f, image_preimage_eq _ hf] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β ⊢ Surjective f ↔ SurjOn f univ univ [PROOFSTEP] simp [Surjective, SurjOn, subset_def] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β ⊢ f '' s = t ↔ SurjOn f s t ∧ MapsTo f s t [PROOFSTEP] refine' ⟨_, fun h => h.1.image_eq_of_mapsTo h.2⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β ⊢ f '' s = t → SurjOn f s t ∧ MapsTo f s t [PROOFSTEP] rintro rfl [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t₁ t₂ : 
Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β ⊢ SurjOn f s (f '' s) ∧ MapsTo f s (f '' s) [PROOFSTEP] exact ⟨s.surjOn_image f, s.mapsTo_image f⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : EqOn (g₁ ∘ f) (g₂ ∘ f) s hf' : SurjOn f s t ⊢ EqOn g₁ g₂ t [PROOFSTEP] intro b hb [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b✝ : β hf : EqOn (g₁ ∘ f) (g₂ ∘ f) s hf' : SurjOn f s t b : β hb : b ∈ t ⊢ g₁ b = g₂ b [PROOFSTEP] obtain ⟨a, ha, rfl⟩ := hf' hb [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a✝ : α b : β hf : EqOn (g₁ ∘ f) (g₂ ∘ f) s hf' : SurjOn f s t a : α ha : a ∈ s hb : f a ∈ t ⊢ g₁ (f a) = g₂ (f a) [PROOFSTEP] exact hf ha [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β ⊢ BijOn f {a} {b} ↔ f a = b [PROOFSTEP] simp [BijOn, eq_comm] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : LeftInvOn f' f s hf : SurjOn f s t y : β hy : y ∈ t ⊢ f' y ∈ s [PROOFSTEP] let ⟨x, hs, hx⟩ := hf hy [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β h : LeftInvOn f' f s hf : SurjOn f s t y : β hy : y ∈ t x : α hs : x ∈ s hx : f x = y ⊢ f' y ∈ s [PROOFSTEP] rwa [← hx, h hs] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s ⊢ f '' (s₁ ∩ s) = f' ⁻¹' s₁ ∩ f '' s [PROOFSTEP] apply Subset.antisymm [GOAL] case h₁ α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s ⊢ f '' (s₁ ∩ s) ⊆ f' ⁻¹' s₁ ∩ f '' s [PROOFSTEP] rintro _ ⟨x, ⟨h₁, h⟩, rfl⟩ [GOAL] case h₁.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s x : α h₁ : x ∈ s₁ h : x ∈ s ⊢ f x ∈ f' ⁻¹' s₁ ∩ f '' s [PROOFSTEP] exact ⟨by rwa [mem_preimage, hf h], mem_image_of_mem _ h⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s x : α h₁ : x ∈ s₁ h : x ∈ s ⊢ f x ∈ f' ⁻¹' s₁ [PROOFSTEP] rwa [mem_preimage, hf h] [GOAL] case h₂ α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s ⊢ f' ⁻¹' s₁ ∩ f '' s ⊆ f '' (s₁ ∩ s) [PROOFSTEP] rintro _ ⟨h₁, 
⟨x, h, rfl⟩⟩ [GOAL] case h₂.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s x : α h : x ∈ s h₁ : f x ∈ f' ⁻¹' s₁ ⊢ f x ∈ f '' (s₁ ∩ s) [PROOFSTEP] exact mem_image_of_mem _ ⟨by rwa [← hf h], h⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s x : α h : x ∈ s h₁ : f x ∈ f' ⁻¹' s₁ ⊢ x ∈ s₁ [PROOFSTEP] rwa [← hf h] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s ⊢ f '' (s₁ ∩ s) = f' ⁻¹' (s₁ ∩ s) ∩ f '' s [PROOFSTEP] rw [hf.image_inter'] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s ⊢ f' ⁻¹' s₁ ∩ f '' s = f' ⁻¹' (s₁ ∩ s) ∩ f '' s [PROOFSTEP] refine' Subset.antisymm _ (inter_subset_inter_left _ (preimage_mono <| inter_subset_left _ _)) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s ⊢ f' ⁻¹' s₁ ∩ f '' s ⊆ f' ⁻¹' (s₁ ∩ s) ∩ f '' s [PROOFSTEP] rintro _ ⟨h₁, x, hx, rfl⟩ [GOAL] case intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s x : α hx : x ∈ s h₁ : f x ∈ f' ⁻¹' s₁ ⊢ f x ∈ f' ⁻¹' (s₁ ∩ s) ∩ f '' s [PROOFSTEP] exact ⟨⟨h₁, by rwa [hf hx]⟩, mem_image_of_mem _ hx⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s x : α hx : x ∈ s h₁ : f x ∈ f' ⁻¹' s₁ ⊢ f' (f x) ∈ s [PROOFSTEP] rwa [hf hx] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : LeftInvOn f' f s ⊢ f' '' (f '' s) = s [PROOFSTEP] rw [Set.image_image, image_congr hf, image_id'] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : SurjOn f s t hf' : RightInvOn f f' s y : β hy : y ∈ t ⊢ f (f' y) = y [PROOFSTEP] let ⟨x, hx, heq⟩ := hf hy [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β hf : SurjOn f s t hf' : RightInvOn f f' s y : β hy : y ∈ t x : α hx : x ∈ s heq : f x = y ⊢ f (f' y) = y [PROOFSTEP] rw [← heq, hf' hx] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 inst✝ : Nonempty α s : Set α f : α → β a : α b : β h : ∃ a, a ∈ s ∧ f a = b ⊢ invFunOn f s b ∈ s ∧ f (invFunOn f s b) = b [PROOFSTEP] rw [invFunOn, dif_pos h] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 inst✝ : 
Nonempty α s : Set α f : α → β a : α b : β h : ∃ a, a ∈ s ∧ f a = b ⊢ Classical.choose h ∈ s ∧ f (Classical.choose h) = b [PROOFSTEP] exact Classical.choose_spec h [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 inst✝ : Nonempty α s : Set α f : α → β a : α b : β h : ¬∃ a, a ∈ s ∧ f a = b ⊢ invFunOn f s b = Classical.choice inst✝ [PROOFSTEP] rw [invFunOn, dif_neg h] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β inst✝ : Nonempty α h : s ⊆ invFunOn f s '' (f '' s) x : α hx : x ∈ s ⊢ invFunOn f s (f x) = x [PROOFSTEP] obtain ⟨-, ⟨x, hx', rfl⟩, rfl⟩ := h hx [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β inst✝ : Nonempty α h : s ⊆ invFunOn f s '' (f '' s) x : α hx' : x ∈ s hx : invFunOn f s (f x) ∈ s ⊢ invFunOn f s (f (invFunOn f s (f x))) = invFunOn f s (f x) [PROOFSTEP] rw [invFunOn_apply_eq (f := f) hx'] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β inst✝ : Nonempty α f : α → β s : Set α ⊢ InjOn (invFunOn f s) (f '' s) [PROOFSTEP] rintro _ ⟨x, hx, rfl⟩ _ ⟨x', hx', rfl⟩ he [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β inst✝ : Nonempty α f : α → β s : Set α x : α hx : x ∈ s x' : α hx' : x' ∈ s he : invFunOn f s (f x) = invFunOn f s (f x') ⊢ f x = f x' [PROOFSTEP] rw [← invFunOn_apply_eq (f := f) hx, he, invFunOn_apply_eq (f := f) hx'] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β inst✝ : Nonempty α f : α → β s : Set α ⊢ invFunOn f s '' (f '' s) ⊆ s [PROOFSTEP] rintro _ ⟨_, ⟨x, hx, rfl⟩, rfl⟩ [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β inst✝ : Nonempty α f : α → β s : Set α x : α hx : x ∈ s ⊢ invFunOn f s (f x) ∈ s [PROOFSTEP] exact invFunOn_apply_mem hx [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β inst✝ : Nonempty α h : SurjOn f s t ⊢ InvOn (invFunOn f s) f (invFunOn f s '' t) t [PROOFSTEP] refine' ⟨_, h.rightInvOn_invFunOn⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β inst✝ : Nonempty α h : SurjOn f s t ⊢ LeftInvOn (invFunOn f s) f (invFunOn f s '' t) [PROOFSTEP] rintro _ ⟨y, hy, rfl⟩ [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β inst✝ : Nonempty α h : SurjOn f s t y : β hy : y ∈ t ⊢ invFunOn f s (f (invFunOn f s y)) = invFunOn f s y [PROOFSTEP] rw [h.rightInvOn_invFunOn hy] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β inst✝ : Nonempty α h : SurjOn f s t ⊢ BijOn f (invFunOn f s '' t) t [PROOFSTEP] refine' h.invOn_invFunOn.bijOn _ (mapsTo_image _ _) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β inst✝ : Nonempty α h : SurjOn f s t ⊢ MapsTo f (invFunOn f s '' t) t [PROOFSTEP] rintro _ ⟨y, hy, rfl⟩ [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β inst✝ : Nonempty α h : SurjOn f s t y : β hy : y ∈ t ⊢ f (invFunOn f s y) ∈ t [PROOFSTEP] rwa [h.rightInvOn_invFunOn hy] 
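The goal states above prove that a surjection of s onto t becomes a bijection once restricted to the preimages chosen by invFunOn. A minimal Lean sketch restating that result: the statement is copied from the goal state in the trace, while the import line, the open command, and the example wrapper are assumptions added only to make the sketch self-contained.

import Mathlib.Data.Set.Function

open Set Function

-- Sketch: the statement below is copied from the goal state proved in the
-- trace above. Given `f` surjective from `s` onto `t`, `invFunOn f s`
-- chooses a preimage in `s` for each point of `t`, and `f` maps the image
-- of `t` under that choice function bijectively onto `t`. The proof term
-- `h.bijOn_subset` is the lemma name the trace itself invokes.
example {α : Type _} {β : Type _} [Nonempty α] {f : α → β} {s : Set α}
    {t : Set β} (h : SurjOn f s t) : BijOn f (invFunOn f s '' t) t :=
  h.bijOn_subset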
[GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β ⊢ SurjOn f s t ↔ ∃ s' x, BijOn f s' t [PROOFSTEP] constructor [GOAL] case mp α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β ⊢ SurjOn f s t → ∃ s' x, BijOn f s' t [PROOFSTEP] rcases eq_empty_or_nonempty t with (rfl | ht) [GOAL] case mp.inl α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α f : α → β ⊢ SurjOn f s ∅ → ∃ s' x, BijOn f s' ∅ [PROOFSTEP] exact fun _ => ⟨∅, empty_subset _, bijOn_empty f⟩ [GOAL] case mp.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β ht : Set.Nonempty t ⊢ SurjOn f s t → ∃ s' x, BijOn f s' t [PROOFSTEP] intro h [GOAL] case mp.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β ht : Set.Nonempty t h : SurjOn f s t ⊢ ∃ s' x, BijOn f s' t [PROOFSTEP] haveI : Nonempty α := ⟨Classical.choose (h.comap_nonempty ht)⟩ [GOAL] case mp.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β ht : Set.Nonempty t h : SurjOn f s t this : Nonempty α ⊢ ∃ s' x, BijOn f s' t [PROOFSTEP] exact ⟨_, h.mapsTo_invFunOn.image_subset, h.bijOn_subset⟩ [GOAL] case mpr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β ⊢ (∃ s' x, BijOn f s' t) → SurjOn f s t [PROOFSTEP] rintro ⟨s', hs', hfs'⟩ [GOAL] case mpr.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s s₁ s₂ : Set α t : Set β f : α → β s' : Set α hs' : s' ⊆ s hfs' : BijOn f s' t ⊢ SurjOn f s t [PROOFSTEP] exact hfs'.surjOn.mono hs' (Subset.refl _) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β n : Nonempty α f : α → β hf : Injective f s : Set α h : Classical.choice n ∈ s ⊢ Function.invFun f ⁻¹' s = f '' s ∪ (range f)ᶜ [PROOFSTEP] ext x [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β n : Nonempty α f : α → β hf : Injective f s : Set α h : Classical.choice n ∈ s x : β ⊢ x ∈ Function.invFun f ⁻¹' s ↔ x ∈ f '' s ∪ (range f)ᶜ [PROOFSTEP] rcases em (x ∈ range f) with (⟨a, rfl⟩ | hx) [GOAL] case h.inl.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β n : Nonempty α f : α → β hf : Injective f s : Set α h : Classical.choice n ∈ s a : α ⊢ f a ∈ Function.invFun f ⁻¹' s ↔ f a ∈ f '' s ∪ (range f)ᶜ [PROOFSTEP] simp only [mem_preimage, mem_union, mem_compl_iff, mem_range_self, not_true, or_false, leftInverse_invFun hf _, hf.mem_set_image] [GOAL] case h.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β n : Nonempty α f : α → β hf : Injective f s : Set α h : Classical.choice n ∈ s x : β hx : ¬x ∈ range f ⊢ x ∈ Function.invFun f ⁻¹' s ↔ x ∈ f '' s ∪ (range f)ᶜ [PROOFSTEP] simp only [mem_preimage, invFun_neg hx, h, hx, mem_union, mem_compl_iff, not_false_iff, or_true] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β n : Nonempty α f : α → β hf : Injective f s : Set α h : ¬Classical.choice n ∈ s ⊢ Function.invFun f ⁻¹' s = f '' s [PROOFSTEP] ext x [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β n : Nonempty α f 
: α → β hf : Injective f s : Set α h : ¬Classical.choice n ∈ s x : β ⊢ x ∈ Function.invFun f ⁻¹' s ↔ x ∈ f '' s [PROOFSTEP] rcases em (x ∈ range f) with (⟨a, rfl⟩ | hx) [GOAL] case h.inl.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β n : Nonempty α f : α → β hf : Injective f s : Set α h : ¬Classical.choice n ∈ s a : α ⊢ f a ∈ Function.invFun f ⁻¹' s ↔ f a ∈ f '' s [PROOFSTEP] rw [mem_preimage, leftInverse_invFun hf, hf.mem_set_image] [GOAL] case h.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β n : Nonempty α f : α → β hf : Injective f s : Set α h : ¬Classical.choice n ∈ s x : β hx : ¬x ∈ range f ⊢ x ∈ Function.invFun f ⁻¹' s ↔ x ∈ f '' s [PROOFSTEP] have : x ∉ f '' s := fun h' => hx (image_subset_range _ _ h') [GOAL] case h.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 s✝ s₁ s₂ : Set α t : Set β f✝ : α → β n : Nonempty α f : α → β hf : Injective f s : Set α h : ¬Classical.choice n ∈ s x : β hx : ¬x ∈ range f this : ¬x ∈ f '' s ⊢ x ∈ Function.invFun f ⁻¹' s ↔ x ∈ f '' s [PROOFSTEP] simp only [mem_preimage, invFun_neg hx, h, this] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (i : α) → Decidable (i ∈ ∅) ⊢ piecewise ∅ f g = g [PROOFSTEP] ext i [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (i : α) → Decidable (i ∈ ∅) i : α ⊢ piecewise ∅ f g i = g i [PROOFSTEP] simp [piecewise] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (i : α) → Decidable (i ∈ univ) ⊢ piecewise univ f g = f [PROOFSTEP] ext i [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (i : α) → Decidable (i ∈ univ) i : α ⊢ piecewise univ f g i = f i [PROOFSTEP] simp [piecewise] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i j : α inst✝ : (i : α) → Decidable (i ∈ insert j s) ⊢ piecewise (insert j s) f g j = f j [PROOFSTEP] simp [piecewise] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) inst✝¹ : DecidableEq α j : α inst✝ : (i : α) → Decidable (i ∈ insert j s) ⊢ piecewise (insert j s) f g = update (piecewise s f g) j (f j) [PROOFSTEP] simp [piecewise] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) inst✝¹ : DecidableEq α j : α inst✝ : (i : α) → Decidable (i ∈ insert j s) ⊢ (fun i => if i = j ∨ i ∈ s then f i else g i) = update (fun i => if i ∈ s then f i else g i) j (f j) [PROOFSTEP] ext i [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) inst✝¹ : DecidableEq α j : α inst✝ : (i : α) → Decidable (i ∈ insert j s) i : α ⊢ (if i = j ∨ i ∈ s then f i else g i) = update (fun i => if i ∈ s then f i else g i) j (f j) i [PROOFSTEP] by_cases h : i = j [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) inst✝¹ : DecidableEq α 
j : α inst✝ : (i : α) → Decidable (i ∈ insert j s) i : α h : i = j ⊢ (if i = j ∨ i ∈ s then f i else g i) = update (fun i => if i ∈ s then f i else g i) j (f j) i [PROOFSTEP] rw [h] [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) inst✝¹ : DecidableEq α j : α inst✝ : (i : α) → Decidable (i ∈ insert j s) i : α h : i = j ⊢ (if j = j ∨ j ∈ s then f j else g j) = update (fun i => if i ∈ s then f i else g i) j (f j) j [PROOFSTEP] simp [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) inst✝¹ : DecidableEq α j : α inst✝ : (i : α) → Decidable (i ∈ insert j s) i : α h : ¬i = j ⊢ (if i = j ∨ i ∈ s then f i else g i) = update (fun i => if i ∈ s then f i else g i) j (f j) i [PROOFSTEP] by_cases h' : i ∈ s [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) inst✝¹ : DecidableEq α j : α inst✝ : (i : α) → Decidable (i ∈ insert j s) i : α h : ¬i = j h' : i ∈ s ⊢ (if i = j ∨ i ∈ s then f i else g i) = update (fun i => if i ∈ s then f i else g i) j (f j) i [PROOFSTEP] simp [h, h'] [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) inst✝¹ : DecidableEq α j : α inst✝ : (i : α) → Decidable (i ∈ insert j s) i : α h : ¬i = j h' : ¬i ∈ s ⊢ (if i = j ∨ i ∈ s then f i else g i) = update (fun i => if i ∈ s then f i else g i) j (f j) i [PROOFSTEP] simp [h, h'] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) x : α inst✝¹ : (y : α) → Decidable (y ∈ {x}) inst✝ : DecidableEq α f g : α → β ⊢ piecewise {x} f g = update g x (f x) [PROOFSTEP] ext y [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) x : α inst✝¹ : (y : α) → Decidable (y ∈ {x}) inst✝ : DecidableEq α f g : α → β y : α ⊢ piecewise {x} f g y = update g x (f x) y [PROOFSTEP] by_cases hy : y = x [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) x : α inst✝¹ : (y : α) → Decidable (y ∈ {x}) inst✝ : DecidableEq α f g : α → β y : α hy : y = x ⊢ piecewise {x} f g y = update g x (f x) y [PROOFSTEP] subst y [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) x : α inst✝¹ : (y : α) → Decidable (y ∈ {x}) inst✝ : DecidableEq α f g : α → β ⊢ piecewise {x} f g x = update g x (f x) x [PROOFSTEP] simp [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝² : (j : α) → Decidable (j ∈ s) x : α inst✝¹ : (y : α) → Decidable (y ∈ {x}) inst✝ : DecidableEq α f g : α → β y : α hy : ¬y = x ⊢ piecewise {x} f g y = update g x (f x) y [PROOFSTEP] simp [hy] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s✝ : Set α f g✝ : (i : α) → δ✝ i inst✝² : (j : α) → Decidable (j ∈ s✝) δ : α → Type u_7 inst✝¹ : (i : α) → Preorder (δ i) s : Set α 
inst✝ : (j : α) → Decidable (j ∈ s) f₁ f₂ g : (i : α) → δ i h₁ : ∀ (i : α), i ∈ s → f₁ i ≤ g i h₂ : ∀ (i : α), ¬i ∈ s → f₂ i ≤ g i i : α h : i ∈ s ⊢ piecewise s f₁ f₂ i ≤ g i [PROOFSTEP] simp [*] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s✝ : Set α f g✝ : (i : α) → δ✝ i inst✝² : (j : α) → Decidable (j ∈ s✝) δ : α → Type u_7 inst✝¹ : (i : α) → Preorder (δ i) s : Set α inst✝ : (j : α) → Decidable (j ∈ s) f₁ f₂ g : (i : α) → δ i h₁ : ∀ (i : α), i ∈ s → f₁ i ≤ g i h₂ : ∀ (i : α), ¬i ∈ s → f₂ i ≤ g i i : α h : ¬i ∈ s ⊢ piecewise s f₁ f₂ i ≤ g i [PROOFSTEP] simp [*] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s✝ : Set α f g : (i : α) → δ✝ i inst✝² : (j : α) → Decidable (j ∈ s✝) δ : α → Type u_7 inst✝¹ : (i : α) → Preorder (δ i) s : Set α inst✝ : (j : α) → Decidable (j ∈ s) f₁ f₂ g₁ g₂ : (i : α) → δ i h₁ : ∀ (i : α), i ∈ s → f₁ i ≤ g₁ i h₂ : ∀ (i : α), ¬i ∈ s → f₂ i ≤ g₂ i ⊢ piecewise s f₁ f₂ ≤ piecewise s g₁ g₂ [PROOFSTEP] apply piecewise_le [GOAL] case h₁ α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s✝ : Set α f g : (i : α) → δ✝ i inst✝² : (j : α) → Decidable (j ∈ s✝) δ : α → Type u_7 inst✝¹ : (i : α) → Preorder (δ i) s : Set α inst✝ : (j : α) → Decidable (j ∈ s) f₁ f₂ g₁ g₂ : (i : α) → δ i h₁ : ∀ (i : α), i ∈ s → f₁ i ≤ g₁ i h₂ : ∀ (i : α), ¬i ∈ s → f₂ i ≤ g₂ i ⊢ ∀ (i : α), i ∈ s → f₁ i ≤ piecewise s g₁ g₂ i [PROOFSTEP] intros [GOAL] case h₂ α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s✝ : Set α f g : (i : α) → δ✝ i inst✝² : (j : α) → Decidable (j ∈ s✝) δ : α → Type u_7 inst✝¹ : (i : α) → Preorder (δ i) s : Set α inst✝ : (j : α) → Decidable (j ∈ s) f₁ f₂ g₁ g₂ : (i : α) → δ i h₁ : ∀ (i : α), i ∈ s → f₁ i ≤ g₁ i h₂ : ∀ (i : α), ¬i ∈ s → f₂ i ≤ g₂ i ⊢ ∀ (i : α), ¬i ∈ s → f₂ i ≤ piecewise s g₁ g₂ i [PROOFSTEP] intros [GOAL] case h₁ α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s✝ : Set α f g : (i : α) → δ✝ i inst✝² : (j : α) → Decidable (j ∈ s✝) δ : α → Type u_7 inst✝¹ : (i : α) → Preorder (δ i) s : Set α inst✝ : (j : α) → Decidable (j ∈ s) f₁ f₂ g₁ g₂ : (i : α) → δ i h₁ : ∀ (i : α), i ∈ s → f₁ i ≤ g₁ i h₂ : ∀ (i : α), ¬i ∈ s → f₂ i ≤ g₂ i i✝ : α a✝ : i✝ ∈ s ⊢ f₁ i✝ ≤ piecewise s g₁ g₂ i✝ [PROOFSTEP] simp [*] [GOAL] case h₂ α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s✝ : Set α f g : (i : α) → δ✝ i inst✝² : (j : α) → Decidable (j ∈ s✝) δ : α → Type u_7 inst✝¹ : (i : α) → Preorder (δ i) s : Set α inst✝ : (j : α) → Decidable (j ∈ s) f₁ f₂ g₁ g₂ : (i : α) → δ i h₁ : ∀ (i : α), i ∈ s → f₁ i ≤ g₁ i h₂ : ∀ (i : α), ¬i ∈ s → f₂ i ≤ g₂ i i✝ : α x✝ : ¬i✝ ∈ s ⊢ f₂ i✝ ≤ piecewise s g₁ g₂ i✝ [PROOFSTEP] simp [*] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝¹ : (j : α) → Decidable (j ∈ s) i j : α h : i ≠ j inst✝ : (i : α) → Decidable (i ∈ insert j s) ⊢ piecewise (insert j s) f g i = piecewise s f g i [PROOFSTEP] simp [piecewise, h] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝¹ : (j : α) → Decidable (j ∈ s) inst✝ : (i : α) → Decidable (i ∈ sᶜ) x : α hx : x ∈ s ⊢ piecewise sᶜ f g x = piecewise s g f x [PROOFSTEP] simp [hx] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝¹ : (j : α) → Decidable (j ∈ s) 
inst✝ : (i : α) → Decidable (i ∈ sᶜ) x : α hx : ¬x ∈ s ⊢ piecewise sᶜ f g x = piecewise s g f x [PROOFSTEP] simp [hx] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s✝ : Set α f g : (i : α) → δ i inst✝¹ : (j : α) → Decidable (j ∈ s✝) s s₁ s₂ : Set α t t₁ t₂ : Set β f₁ f₂ : α → β inst✝ : (i : α) → Decidable (i ∈ s) h₁ : MapsTo f₁ (s₁ ∩ s) (t₁ ∩ t) h₂ : MapsTo f₂ (s₂ ∩ sᶜ) (t₂ ∩ tᶜ) ⊢ MapsTo (piecewise s f₁ f₂) (Set.ite s s₁ s₂) (Set.ite t t₁ t₂) [PROOFSTEP] refine' (h₁.congr _).union_union (h₂.congr _) [GOAL] case refine'_1 α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s✝ : Set α f g : (i : α) → δ i inst✝¹ : (j : α) → Decidable (j ∈ s✝) s s₁ s₂ : Set α t t₁ t₂ : Set β f₁ f₂ : α → β inst✝ : (i : α) → Decidable (i ∈ s) h₁ : MapsTo f₁ (s₁ ∩ s) (t₁ ∩ t) h₂ : MapsTo f₂ (s₂ ∩ sᶜ) (t₂ ∩ tᶜ) ⊢ EqOn f₁ (piecewise s f₁ f₂) (s₁ ∩ s) case refine'_2 α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s✝ : Set α f g : (i : α) → δ i inst✝¹ : (j : α) → Decidable (j ∈ s✝) s s₁ s₂ : Set α t t₁ t₂ : Set β f₁ f₂ : α → β inst✝ : (i : α) → Decidable (i ∈ s) h₁ : MapsTo f₁ (s₁ ∩ s) (t₁ ∩ t) h₂ : MapsTo f₂ (s₂ ∩ sᶜ) (t₂ ∩ tᶜ) ⊢ EqOn f₂ (piecewise s f₁ f₂) (s₂ ∩ sᶜ) [PROOFSTEP] exacts [(piecewise_eqOn s f₁ f₂).symm.mono (inter_subset_right _ _), (piecewise_eqOn_compl s f₁ f₂).symm.mono (inter_subset_right _ _)] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f f' g : α → β t : Set α ⊢ EqOn (piecewise s f f') g t ↔ EqOn f g (t ∩ s) ∧ EqOn f' g (t ∩ sᶜ) [PROOFSTEP] simp only [EqOn, ← forall_and] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f f' g : α → β t : Set α ⊢ (∀ ⦃x : α⦄, x ∈ t → piecewise s f f' x = g x) ↔ ∀ (x : α), (x ∈ t ∩ s → f x = g x) ∧ (x ∈ t ∩ sᶜ → f' x = g x) [PROOFSTEP] refine' forall_congr' fun a => _ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f f' g : α → β t : Set α a : α ⊢ a ∈ t → piecewise s f f' a = g a ↔ (a ∈ t ∩ s → f a = g a) ∧ (a ∈ t ∩ sᶜ → f' a = g a) [PROOFSTEP] by_cases a ∈ s [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f f' g : α → β t : Set α a : α ⊢ a ∈ t → piecewise s f f' a = g a ↔ (a ∈ t ∩ s → f a = g a) ∧ (a ∈ t ∩ sᶜ → f' a = g a) [PROOFSTEP] by_cases a ∈ s [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f f' g : α → β t : Set α a : α h : a ∈ s ⊢ a ∈ t → piecewise s f f' a = g a ↔ (a ∈ t ∩ s → f a = g a) ∧ (a ∈ t ∩ sᶜ → f' a = g a) [PROOFSTEP] simp [*] [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f f' g : α → β t : Set α a : α h : ¬a ∈ s ⊢ a ∈ t → piecewise s f f' a = g a ↔ (a ∈ t ∩ s → f a = g a) ∧ (a ∈ t ∩ sᶜ → f' a = g a) [PROOFSTEP] simp [*] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f f' g : α → β t t' : Set α h : EqOn f g (t ∩ s) h' : EqOn f' g 
(t' ∩ sᶜ) ⊢ EqOn (piecewise s f f') g (Set.ite s t t') [PROOFSTEP] simp [eqOn_piecewise, *] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β t : Set β x : α ⊢ x ∈ piecewise s f g ⁻¹' t ↔ x ∈ Set.ite s (f ⁻¹' t) (g ⁻¹' t) [PROOFSTEP] by_cases x ∈ s [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β t : Set β x : α ⊢ x ∈ piecewise s f g ⁻¹' t ↔ x ∈ Set.ite s (f ⁻¹' t) (g ⁻¹' t) [PROOFSTEP] by_cases x ∈ s [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β t : Set β x : α h : x ∈ s ⊢ x ∈ piecewise s f g ⁻¹' t ↔ x ∈ Set.ite s (f ⁻¹' t) (g ⁻¹' t) [PROOFSTEP] simp [*, Set.ite] [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β t : Set β x : α h : ¬x ∈ s ⊢ x ∈ piecewise s f g ⁻¹' t ↔ x ∈ Set.ite s (f ⁻¹' t) (g ⁻¹' t) [PROOFSTEP] simp [*, Set.ite] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) δ' : α → Sort u_7 h : (i : α) → δ i → δ' i x : α ⊢ h x (piecewise s f g x) = piecewise s (fun x => h x (f x)) (fun x => h x (g x)) x [PROOFSTEP] by_cases hx : x ∈ s [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) δ' : α → Sort u_7 h : (i : α) → δ i → δ' i x : α hx : x ∈ s ⊢ h x (piecewise s f g x) = piecewise s (fun x => h x (f x)) (fun x => h x (g x)) x [PROOFSTEP] simp [hx] [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) δ' : α → Sort u_7 h : (i : α) → δ i → δ' i x : α hx : ¬x ∈ s ⊢ h x (piecewise s f g x) = piecewise s (fun x => h x (f x)) (fun x => h x (g x)) x [PROOFSTEP] simp [hx] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) δ' : α → Sort u_7 δ'' : α → Sort u_8 f' g' : (i : α) → δ' i h : (i : α) → δ i → δ' i → δ'' i x : α ⊢ h x (piecewise s f g x) (piecewise s f' g' x) = piecewise s (fun x => h x (f x) (f' x)) (fun x => h x (g x) (g' x)) x [PROOFSTEP] by_cases hx : x ∈ s [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) δ' : α → Sort u_7 δ'' : α → Sort u_8 f' g' : (i : α) → δ' i h : (i : α) → δ i → δ' i → δ'' i x : α hx : x ∈ s ⊢ h x (piecewise s f g x) (piecewise s f' g' x) = piecewise s (fun x => h x (f x) (f' x)) (fun x => h x (g x) (g' x)) x [PROOFSTEP] simp [hx] [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) δ' : α → Sort u_7 δ'' : α → Sort u_8 f' g' : (i : α) → δ' i h : (i : α) → δ i → δ' i → δ'' i x : α hx : ¬x ∈ s ⊢ h x (piecewise s f g x) (piecewise s f' g' x) = piecewise s (fun x => h x (f x) (f' x)) (fun x => h x (g x) (g' x)) x [PROOFSTEP] simp [hx] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : 
Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) ⊢ piecewise s f f = f [PROOFSTEP] ext x [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) x : α ⊢ piecewise s f f x = f x [PROOFSTEP] by_cases hx : x ∈ s [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) x : α hx : x ∈ s ⊢ piecewise s f f x = f x [PROOFSTEP] simp [hx] [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f g : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) x : α hx : ¬x ∈ s ⊢ piecewise s f f x = f x [PROOFSTEP] simp [hx] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β ⊢ range (piecewise s f g) = f '' s ∪ g '' sᶜ [PROOFSTEP] ext y [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β y : β ⊢ y ∈ range (piecewise s f g) ↔ y ∈ f '' s ∪ g '' sᶜ [PROOFSTEP] constructor [GOAL] case h.mp α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β y : β ⊢ y ∈ range (piecewise s f g) → y ∈ f '' s ∪ g '' sᶜ [PROOFSTEP] rintro ⟨x, rfl⟩ [GOAL] case h.mp.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α ⊢ piecewise s f g x ∈ f '' s ∪ g '' sᶜ [PROOFSTEP] by_cases h : x ∈ s <;> [left; right] [GOAL] case h.mp.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α ⊢ piecewise s f g x ∈ f '' s ∪ g '' sᶜ [PROOFSTEP] by_cases h : x ∈ s [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α h : x ∈ s ⊢ piecewise s f g x ∈ f '' s ∪ g '' sᶜ [PROOFSTEP] left [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α h : ¬x ∈ s ⊢ piecewise s f g x ∈ f '' s ∪ g '' sᶜ [PROOFSTEP] right [GOAL] case pos.h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α h : x ∈ s ⊢ piecewise s f g x ∈ f '' s [PROOFSTEP] use x [GOAL] case neg.h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α h : ¬x ∈ s ⊢ piecewise s f g x ∈ g '' sᶜ [PROOFSTEP] use x [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α h : x ∈ s ⊢ x ∈ s ∧ f x = piecewise s f g x [PROOFSTEP] simp [h] [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i 
inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α h : ¬x ∈ s ⊢ x ∈ sᶜ ∧ g x = piecewise s f g x [PROOFSTEP] simp [h] [GOAL] case h.mpr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β y : β ⊢ y ∈ f '' s ∪ g '' sᶜ → y ∈ range (piecewise s f g) [PROOFSTEP] rintro (⟨x, hx, rfl⟩ | ⟨x, hx, rfl⟩) [GOAL] case h.mpr.inl.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α hx : x ∈ s ⊢ f x ∈ range (piecewise s f g) [PROOFSTEP] use x [GOAL] case h.mpr.inr.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α hx : x ∈ sᶜ ⊢ g x ∈ range (piecewise s f g) [PROOFSTEP] use x [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α hx : x ∈ s ⊢ piecewise s f g x = f x [PROOFSTEP] simp_all [GOAL] case h α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α hx : x ∈ sᶜ ⊢ piecewise s f g x = g x [PROOFSTEP] simp_all [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β ⊢ Injective (piecewise s f g) ↔ InjOn f s ∧ InjOn g sᶜ ∧ ∀ (x : α), x ∈ s → ∀ (y : α), ¬y ∈ s → f x ≠ g y [PROOFSTEP] rw [injective_iff_injOn_univ, ← union_compl_self s, injOn_union (@disjoint_compl_right _ _ s), (piecewise_eqOn s f g).injOn_iff, (piecewise_eqOn_compl s f g).injOn_iff] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β ⊢ (InjOn f s ∧ InjOn g sᶜ ∧ ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ sᶜ → piecewise s f g x ≠ piecewise s f g y) ↔ InjOn f s ∧ InjOn g sᶜ ∧ ∀ (x : α), x ∈ s → ∀ (y : α), ¬y ∈ s → f x ≠ g y [PROOFSTEP] refine' and_congr Iff.rfl (and_congr Iff.rfl <| forall₄_congr fun x hx y hy => _) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ i inst✝ : (j : α) → Decidable (j ∈ s) f g : α → β x : α hx : x ∈ s y : α hy : y ∈ sᶜ ⊢ piecewise s f g x ≠ piecewise s f g y ↔ f x ≠ g y [PROOFSTEP] rw [piecewise_eq_of_mem s f g hx, piecewise_eq_of_not_mem s f g hy] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ✝ i inst✝ : (j : α) → Decidable (j ∈ s) δ : α → Type u_7 t : Set α t' : (i : α) → Set (δ i) f g : (i : α) → δ i hf : f ∈ pi t t' hg : g ∈ pi t t' ⊢ piecewise s f g ∈ pi t t' [PROOFSTEP] intro i ht [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ✝ i inst✝ : (j : α) → Decidable (j ∈ s) δ : α → Type u_7 t : Set α t' : (i : α) → Set (δ i) f g : (i : α) → δ i hf : f ∈ pi t t' hg : g ∈ pi t t' i : α ht : i ∈ t ⊢ piecewise s f g i ∈ t' i [PROOFSTEP] by_cases hs : i ∈ s [GOAL] case pos α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ✝ i inst✝ : (j : α) → Decidable (j ∈ s) δ : α → Type u_7 t : Set α t' : 
(i : α) → Set (δ i) f g : (i : α) → δ i hf : f ∈ pi t t' hg : g ∈ pi t t' i : α ht : i ∈ t hs : i ∈ s ⊢ piecewise s f g i ∈ t' i [PROOFSTEP] simp [hf i ht, hg i ht, hs] [GOAL] case neg α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 δ✝ : α → Sort u_6 s : Set α f✝ g✝ : (i : α) → δ✝ i inst✝ : (j : α) → Decidable (j ∈ s) δ : α → Type u_7 t : Set α t' : (i : α) → Set (δ i) f g : (i : α) → δ i hf : f ∈ pi t t' hg : g ∈ pi t t' i : α ht : i ∈ t hs : ¬i ∈ s ⊢ piecewise s f g i ∈ t' i [PROOFSTEP] simp [hf i ht, hg i ht, hs] [GOAL] α✝ : Type u_1 β : Type u_2 γ : Type u_3 ι✝ : Sort u_4 π : α✝ → Type u_5 δ : α✝ → Sort u_6 s✝ : Set α✝ f g : (i : α✝) → δ i inst✝¹ : (j : α✝) → Decidable (j ∈ s✝) ι : Type u_7 α : ι → Type u_8 s : Set ι t t' : (i : ι) → Set (α i) inst✝ : (x : ι) → Decidable (x ∈ s) ⊢ pi univ (piecewise s t t') = pi s t ∩ pi sᶜ t' [PROOFSTEP] simp [compl_eq_univ_diff] [GOAL] α✝ : Type u_1 β : Type u_2 γ : Type u_3 ι✝ : Sort u_4 π : α✝ → Type u_5 δ : α✝ → Sort u_6 s✝ : Set α✝ f g : (i : α✝) → δ i inst✝¹ : (j : α✝) → Decidable (j ∈ s✝) ι : Type u_7 α : ι → Type u_8 s : Set ι t : (i : ι) → Set (α i) inst✝ : (x : ι) → Decidable (x ∈ s) ⊢ pi univ (piecewise s t fun x => univ) = pi s t [PROOFSTEP] simp [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 inst✝¹ : Preorder α inst✝ : Preorder β f : α → β s : Set α ⊢ StrictMono (restrict s f) ↔ StrictMonoOn f s [PROOFSTEP] simp [Set.restrict, StrictMono, StrictMonoOn] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : SurjOn fa s t ⊢ SurjOn fb (f '' s) (f '' t) [PROOFSTEP] rintro y ⟨x, hxt, rfl⟩ [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : SurjOn fa s t x : α hxt : x ∈ t ⊢ f x ∈ fb '' (f '' s) [PROOFSTEP] rcases ha hxt with ⟨x, hxs, rfl⟩ [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : SurjOn fa s t x : α hxs : x ∈ s hxt : fa x ∈ t ⊢ f (fa x) ∈ fb '' (f '' s) [PROOFSTEP] rw [h x] [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : SurjOn fa s t x : α hxs : x ∈ s hxt : fa x ∈ t ⊢ fb (f x) ∈ fb '' (f '' s) [PROOFSTEP] exact mem_image_of_mem _ (mem_image_of_mem _ hxs) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : Surjective fa ⊢ SurjOn fb (range f) (range f) [PROOFSTEP] rw [← image_univ] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : Surjective fa ⊢ SurjOn fb (f '' univ) (f '' univ) [PROOFSTEP] exact h.surjOn_image (ha.surjOn univ) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : InjOn fa s hf : InjOn f (fa '' s) ⊢ InjOn fb (f '' s) [PROOFSTEP] rintro _ ⟨x, hx, rfl⟩ _ ⟨y, hy, rfl⟩ H [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : InjOn fa s hf : InjOn f (fa '' 
s) x : α hx : x ∈ s y : α hy : y ∈ s H : fb (f x) = fb (f y) ⊢ f x = f y [PROOFSTEP] simp only [← h.eq] at H [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : InjOn fa s hf : InjOn f (fa '' s) x : α hx : x ∈ s y : α hy : y ∈ s H : f (fa x) = f (fa y) ⊢ f x = f y [PROOFSTEP] exact congr_arg f (ha hx hy <| hf (mem_image_of_mem fa hx) (mem_image_of_mem fa hy) H) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : Injective fa hf : InjOn f (range fa) ⊢ InjOn fb (range f) [PROOFSTEP] rw [← image_univ] at * [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : Injective fa hf : InjOn f (fa '' univ) ⊢ InjOn fb (f '' univ) [PROOFSTEP] exact h.injOn_image (ha.injOn univ) hf [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : Bijective fa hf : Injective f ⊢ BijOn fb (range f) (range f) [PROOFSTEP] rw [← image_univ] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s t : Set α h : Semiconj f fa fb ha : Bijective fa hf : Injective f ⊢ BijOn fb (f '' univ) (f '' univ) [PROOFSTEP] exact h.bijOn_image (bijective_iff_bijOn_univ.1 ha) (hf.injOn univ) [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s✝ t✝ : Set α h : Semiconj f fa fb s t : Set β hb : MapsTo fb s t x : α hx : x ∈ f ⁻¹' s ⊢ fa x ∈ f ⁻¹' t [PROOFSTEP] simp only [mem_preimage, h x, hb hx] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s✝ t : Set α h : Semiconj f fa fb s : Set β hb : InjOn fb s hf : InjOn f (f ⁻¹' s) ⊢ InjOn fa (f ⁻¹' s) [PROOFSTEP] intro x hx y hy H [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s✝ t : Set α h : Semiconj f fa fb s : Set β hb : InjOn fb s hf : InjOn f (f ⁻¹' s) x : α hx : x ∈ f ⁻¹' s y : α hy : y ∈ f ⁻¹' s H : fa x = fa y ⊢ x = y [PROOFSTEP] have := congr_arg f H [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s✝ t : Set α h : Semiconj f fa fb s : Set β hb : InjOn fb s hf : InjOn f (f ⁻¹' s) x : α hx : x ∈ f ⁻¹' s y : α hy : y ∈ f ⁻¹' s H : fa x = fa y this : f (fa x) = f (fa y) ⊢ x = y [PROOFSTEP] rw [h.eq, h.eq] at this [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 fa : α → α fb : β → β f : α → β g : β → γ s✝ t : Set α h : Semiconj f fa fb s : Set β hb : InjOn fb s hf : InjOn f (f ⁻¹' s) x : α hx : x ∈ f ⁻¹' s y : α hy : y ∈ f ⁻¹' s H : fa x = fa y this : fb (f x) = fb (f y) ⊢ x = y [PROOFSTEP] exact hf hx hy (hb hx hy this) [GOAL] α✝ : Type u_1 β✝ : Type u_2 γ : Type u_3 ι : Sort u_4 π : α✝ → Type u_5 fa : α✝ → α✝ fb : β✝ → β✝ f : α✝ → β✝ g : β✝ → γ s✝ t✝ : Set α✝ α : Type u_6 β : Type u_7 inst✝¹ : PartialOrder α inst✝ : LinearOrder β φ : β → α ψ : α → β t : Set β s : Set α hφ : MonotoneOn φ t φψs : RightInvOn ψ φ s ψts : MapsTo ψ s t ⊢ MonotoneOn ψ s [PROOFSTEP] rintro x xs y ys l [GOAL] α✝ : Type u_1 β✝ : Type u_2 γ : Type u_3 ι : Sort u_4 π : α✝ → Type u_5 fa : α✝ → α✝ fb : β✝ → β✝ f : α✝ → β✝ g : 
β✝ → γ s✝ t✝ : Set α✝ α : Type u_6 β : Type u_7 inst✝¹ : PartialOrder α inst✝ : LinearOrder β φ : β → α ψ : α → β t : Set β s : Set α hφ : MonotoneOn φ t φψs : RightInvOn ψ φ s ψts : MapsTo ψ s t x : α xs : x ∈ s y : α ys : y ∈ s l : x ≤ y ⊢ ψ x ≤ ψ y [PROOFSTEP] rcases le_total (ψ x) (ψ y) with (ψxy | ψyx) [GOAL] case inl α✝ : Type u_1 β✝ : Type u_2 γ : Type u_3 ι : Sort u_4 π : α✝ → Type u_5 fa : α✝ → α✝ fb : β✝ → β✝ f : α✝ → β✝ g : β✝ → γ s✝ t✝ : Set α✝ α : Type u_6 β : Type u_7 inst✝¹ : PartialOrder α inst✝ : LinearOrder β φ : β → α ψ : α → β t : Set β s : Set α hφ : MonotoneOn φ t φψs : RightInvOn ψ φ s ψts : MapsTo ψ s t x : α xs : x ∈ s y : α ys : y ∈ s l : x ≤ y ψxy : ψ x ≤ ψ y ⊢ ψ x ≤ ψ y [PROOFSTEP] exact ψxy [GOAL] case inr α✝ : Type u_1 β✝ : Type u_2 γ : Type u_3 ι : Sort u_4 π : α✝ → Type u_5 fa : α✝ → α✝ fb : β✝ → β✝ f : α✝ → β✝ g : β✝ → γ s✝ t✝ : Set α✝ α : Type u_6 β : Type u_7 inst✝¹ : PartialOrder α inst✝ : LinearOrder β φ : β → α ψ : α → β t : Set β s : Set α hφ : MonotoneOn φ t φψs : RightInvOn ψ φ s ψts : MapsTo ψ s t x : α xs : x ∈ s y : α ys : y ∈ s l : x ≤ y ψyx : ψ y ≤ ψ x ⊢ ψ x ≤ ψ y [PROOFSTEP] have := hφ (ψts ys) (ψts xs) ψyx [GOAL] case inr α✝ : Type u_1 β✝ : Type u_2 γ : Type u_3 ι : Sort u_4 π : α✝ → Type u_5 fa : α✝ → α✝ fb : β✝ → β✝ f : α✝ → β✝ g : β✝ → γ s✝ t✝ : Set α✝ α : Type u_6 β : Type u_7 inst✝¹ : PartialOrder α inst✝ : LinearOrder β φ : β → α ψ : α → β t : Set β s : Set α hφ : MonotoneOn φ t φψs : RightInvOn ψ φ s ψts : MapsTo ψ s t x : α xs : x ∈ s y : α ys : y ∈ s l : x ≤ y ψyx : ψ y ≤ ψ x this : φ (ψ y) ≤ φ (ψ x) ⊢ ψ x ≤ ψ y [PROOFSTEP] rw [φψs.eq ys, φψs.eq xs] at this [GOAL] case inr α✝ : Type u_1 β✝ : Type u_2 γ : Type u_3 ι : Sort u_4 π : α✝ → Type u_5 fa : α✝ → α✝ fb : β✝ → β✝ f : α✝ → β✝ g : β✝ → γ s✝ t✝ : Set α✝ α : Type u_6 β : Type u_7 inst✝¹ : PartialOrder α inst✝ : LinearOrder β φ : β → α ψ : α → β t : Set β s : Set α hφ : MonotoneOn φ t φψs : RightInvOn ψ φ s ψts : MapsTo ψ s t x : α xs : x ∈ s y : α ys : y ∈ s l : x ≤ y ψyx : ψ y ≤ ψ x this : y ≤ x ⊢ ψ x ≤ ψ y [PROOFSTEP] induction le_antisymm l this [GOAL] case inr.refl α✝ : Type u_1 β✝ : Type u_2 γ : Type u_3 ι : Sort u_4 π : α✝ → Type u_5 fa : α✝ → α✝ fb : β✝ → β✝ f : α✝ → β✝ g : β✝ → γ s✝ t✝ : Set α✝ α : Type u_6 β : Type u_7 inst✝¹ : PartialOrder α inst✝ : LinearOrder β φ : β → α ψ : α → β t : Set β s : Set α hφ : MonotoneOn φ t φψs : RightInvOn ψ φ s ψts : MapsTo ψ s t x : α xs : x ∈ s y : α ys : x ∈ s l : x ≤ x ψyx : ψ x ≤ ψ x this : x ≤ x ⊢ ψ x ≤ ψ x [PROOFSTEP] exact le_refl _ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : MapsTo (↑g) s t ⊢ MapsTo (↑(extendDomain g f)) (Subtype.val ∘ ↑f '' s) (Subtype.val ∘ ↑f '' t) [PROOFSTEP] rintro _ ⟨a, ha, rfl⟩ [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : MapsTo (↑g) s t a : α ha : a ∈ s ⊢ ↑(extendDomain g f) ((Subtype.val ∘ ↑f) a) ∈ Subtype.val ∘ ↑f '' t [PROOFSTEP] exact ⟨_, h ha, by simp_rw [Function.comp_apply, extendDomain_apply_image]⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : MapsTo (↑g) s t a : α ha : a ∈ s ⊢ (Subtype.val ∘ ↑f) (↑g a) = ↑(extendDomain g f) ((Subtype.val ∘ ↑f) a) [PROOFSTEP] simp_rw [Function.comp_apply, extendDomain_apply_image] [GOAL] α : Type 
u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : SurjOn (↑g) s t ⊢ SurjOn (↑(extendDomain g f)) (Subtype.val ∘ ↑f '' s) (Subtype.val ∘ ↑f '' t) [PROOFSTEP] rintro _ ⟨a, ha, rfl⟩ [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : SurjOn (↑g) s t a : α ha : a ∈ t ⊢ (Subtype.val ∘ ↑f) a ∈ ↑(extendDomain g f) '' (Subtype.val ∘ ↑f '' s) [PROOFSTEP] obtain ⟨b, hb, rfl⟩ := h ha [GOAL] case intro.intro.intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : SurjOn (↑g) s t b : α hb : b ∈ s ha : ↑g b ∈ t ⊢ (Subtype.val ∘ ↑f) (↑g b) ∈ ↑(extendDomain g f) '' (Subtype.val ∘ ↑f '' s) [PROOFSTEP] exact ⟨_, ⟨_, hb, rfl⟩, by simp_rw [Function.comp_apply, extendDomain_apply_image]⟩ [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : SurjOn (↑g) s t b : α hb : b ∈ s ha : ↑g b ∈ t ⊢ ↑(extendDomain g f) ((Subtype.val ∘ ↑f) b) = (Subtype.val ∘ ↑f) (↑g b) [PROOFSTEP] simp_rw [Function.comp_apply, extendDomain_apply_image] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : LeftInvOn (↑g₁) (↑g₂) s ⊢ LeftInvOn (↑(extendDomain g₁ f)) (↑(extendDomain g₂ f)) (Subtype.val ∘ ↑f '' s) [PROOFSTEP] rintro _ ⟨a, ha, rfl⟩ [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : LeftInvOn (↑g₁) (↑g₂) s a : α ha : a ∈ s ⊢ ↑(extendDomain g₁ f) (↑(extendDomain g₂ f) ((Subtype.val ∘ ↑f) a)) = (Subtype.val ∘ ↑f) a [PROOFSTEP] simp_rw [Function.comp_apply, extendDomain_apply_image, h ha] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : RightInvOn (↑g₁) (↑g₂) t ⊢ RightInvOn (↑(extendDomain g₁ f)) (↑(extendDomain g₂ f)) (Subtype.val ∘ ↑f '' t) [PROOFSTEP] rintro _ ⟨a, ha, rfl⟩ [GOAL] case intro.intro α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 p : β → Prop inst✝ : DecidablePred p f : α ≃ Subtype p g g₁ g₂ : Perm α s t : Set α h : RightInvOn (↑g₁) (↑g₂) t a : α ha : a ∈ t ⊢ ↑(extendDomain g₂ f) (↑(extendDomain g₁ f) ((Subtype.val ∘ ↑f) a)) = (Subtype.val ∘ ↑f) a [PROOFSTEP] simp_rw [Function.comp_apply, extendDomain_apply_image, h ha] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 e : α ≃ β s : Set α t : Set β h : ∀ (a : α), ↑e a ∈ t ↔ a ∈ s b : β hb : b ∈ t ⊢ ↑e (↑e.symm b) ∈ t [PROOFSTEP] rwa [apply_symm_apply] [GOAL] α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 e : α ≃ β s : Set α t : Set β inst✝ : DecidableEq α a b : α ha : a ∈ s hb : b ∈ s x : α ⊢ ↑(swap a b) x ∈ s ↔ x ∈ s [PROOFSTEP] obtain rfl | hxa := eq_or_ne x a [GOAL] case inl α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 e : α ≃ β s : Set α t : Set β inst✝ : DecidableEq α b : α hb : b ∈ s x : α ha : x ∈ s ⊢ ↑(swap x b) x ∈ s ↔ x ∈ s [PROOFSTEP] obtain rfl | hxb := eq_or_ne x b [GOAL] case inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 e : α ≃ β s : Set α t 
: Set β inst✝ : DecidableEq α a b : α ha : a ∈ s hb : b ∈ s x : α hxa : x ≠ a ⊢ ↑(swap a b) x ∈ s ↔ x ∈ s [PROOFSTEP] obtain rfl | hxb := eq_or_ne x b [GOAL] case inl.inl α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 e : α ≃ β s : Set α t : Set β inst✝ : DecidableEq α x : α ha hb : x ∈ s ⊢ ↑(swap x x) x ∈ s ↔ x ∈ s [PROOFSTEP] simp [*, swap_apply_of_ne_of_ne] [GOAL] case inl.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 e : α ≃ β s : Set α t : Set β inst✝ : DecidableEq α b : α hb : b ∈ s x : α ha : x ∈ s hxb : x ≠ b ⊢ ↑(swap x b) x ∈ s ↔ x ∈ s [PROOFSTEP] simp [*, swap_apply_of_ne_of_ne] [GOAL] case inr.inl α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 e : α ≃ β s : Set α t : Set β inst✝ : DecidableEq α a : α ha : a ∈ s x : α hxa : x ≠ a hb : x ∈ s ⊢ ↑(swap a x) x ∈ s ↔ x ∈ s [PROOFSTEP] simp [*, swap_apply_of_ne_of_ne] [GOAL] case inr.inr α : Type u_1 β : Type u_2 γ : Type u_3 ι : Sort u_4 π : α → Type u_5 e : α ≃ β s : Set α t : Set β inst✝ : DecidableEq α a b : α ha : a ∈ s hb : b ∈ s x : α hxa : x ≠ a hxb : x ≠ b ⊢ ↑(swap a b) x ∈ s ↔ x ∈ s [PROOFSTEP] simp [*, swap_apply_of_ne_of_ne]
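The proof states above exercise Mathlib's `Set.piecewise` and `Equiv.swap` lemmas. For orientation, here is a minimal self-contained Lean sketch of the two rewrite lemmas the traces rely on (`piecewise_eq_of_mem` and `piecewise_eq_of_not_mem`, both of which appear verbatim in the steps above); it assumes a recent Mathlib and is illustrative only, not part of the traced development:

import Mathlib.Data.Set.Function

-- On s, the piecewise function agrees with f ...
example {α β : Type*} (s : Set α) (f g : α → β) [∀ j, Decidable (j ∈ s)]
    {x : α} (hx : x ∈ s) : s.piecewise f g x = f x :=
  Set.piecewise_eq_of_mem s f g hx

-- ... and off s it agrees with g.
example {α β : Type*} (s : Set α) (f g : α → β) [∀ j, Decidable (j ∈ s)]
    {x : α} (hx : x ∉ s) : s.piecewise f g x = g x :=
  Set.piecewise_eq_of_not_mem s f g hx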
Ajial pressbrake of 3 mts. x 90 tons.
Casanova pressbrake of 6 mts.
AJIAL shear of 3 mts. x 10 mm. Second-hand machine in perfect condition.
Pressbrake of 3 mts. x 110 tons. Second-hand machine: revised, repainted, with ground blades and CE safety compliance.
Hydraulic bending roll ASTRIDA model CTC-3030 of 3 mts. x 1013 mm.
Hydraulic bending roll ASTRIDA model CTC-3033 of 3 mts. x 1316 mm. Machine revised, painted, with tooling, etc.
Hydraulic shear, COLLY brand, of 3 mts. x 6 mm.
Laser cutting machine ASTRIDA LX-3015 of 3000 W. Machine manufactured in 2007 by HG-LASER FARLEY LASERLAB. Equipped with an El.En generator (Italy) of 3000 W. The machine has 2000 working hours. Equipped with a Siemens 840 CNC with touch screen. Interchangeable tables of 3000 x 1500 mm.
Pressbrake of 3 mts. x 15 tons. It was kept in a warehouse as an exhibition unit.
*###[ ffxb2p: subroutine ffxb2p(cb2i,cb1,cb0,ca0i,xp,xm1,xm2,piDpj,ier) ***#[*comment:*********************************************************** * * * Compute the PV B2, the coefficients of p(mu)p(nu) and g(mu,nu) * * of 1/(ipi^2)\int d^nQ Q(mu)Q(nu)/(Q^2-m_1^2)/((Q+p)^2-m_2^2) * * originally based on aaxbx by Andre Aeppli. * * * * Input: cb1 complex vector two point function * * cb0 complex scalar two point function * * ca0i(2) complex scalar onepoint function with * * m1,m2 * * xp real p.p in B&D metric * * xm1,2 real m_1^2,m_2^2 * * piDpj(3,3) real dotproducts between s1,s2,p * * ier integer digits lost so far * * * * Output: cb2i(2) complex B21,B22: coeffs of p*p, g in B2 * * * ***#]*comment:*********************************************************** * #[ declarations: implicit none * * arguments * integer ier DOUBLE PRECISION xp,xm1,xm2,piDpj(3,3) DOUBLE COMPLEX cb2i(2),cb1,cb0,ca0i(2) * * local variables * DOUBLE PRECISION dm1p,dm2p,dm1m2 * * #] declarations: * #[ work: * dm1p = xm1 - xp dm2p = xm2 - xp dm1m2= xm1 - xm2 call ffxb2q(cb2i,cb1,cb0,ca0i,xp,xm1,xm2,dm1p,dm2p,dm1m2, + piDpj,ier) * * #] work: *###] ffxb2p: end *###[ ffxb2q: subroutine ffxb2q(cb2i,cb1,cb0,ca0i,xp,xm1,xm2,dm1p,dm2p,dm1m2, + piDpj,ier) ***#[*comment:*********************************************************** * * * Compute the PV B2, the coefficients of p(mu)p(nu) and g(mu,nu) * * of 1/(ipi^2)\int d^nQ Q(mu)Q(nu)/(Q^2-m_1^2)/((Q+p)^2-m_2^2) * * originally based on aaxbx by Andre Aeppli. * * * * Input: cb1 complex vector two point function * * cb0 complex scalar two point function * * ca0i(2) complex scalar onepoint function with * * m1,m2 * * xp real p.p in B&D metric * * xm1,2 real m_1^2,m_2^2 * * piDpj(3,3) real dotproducts between s1,s2,p * * ier integer digits lost so far * * * * Output: cb2i(2) complex B21,B22: coeffs of p*p, g in B2 * * * ***#]*comment:*********************************************************** * #[ declarations: implicit none * * arguments * integer ier DOUBLE PRECISION xp,xm1,xm2,dm1p,dm2p,dm1m2,piDpj(3,3) DOUBLE COMPLEX cb2i(2),cb1,cb0,ca0i(2) * * local variables * integer i,j,ier0,ier1 logical llogmm DOUBLE PRECISION xmax,absc,xlam,slam,alp,bet,xmxp,dfflo3,xlo3, + xmxsav,xnoe,xnoe2,xlogmm,dfflo1,rloss, + qiDqj(3,3) DOUBLE COMPLEX cs(16),cc,csom,clo2,clo3,zfflo2,zfflo3 * * common blocks * include 'ff.h' * * statement function * absc(cc) = abs(DBLE(cc)) + abs(DIMAG(cc)) * * #] declarations: * #[ test input: if ( ltest ) then ier0 = ier call ffdot2(qiDqj,xp,xm1,xm2,dm1p,dm2p,dm1m2,ier0) rloss = xloss*DBLE(10)**(-mod(ier0,50)) do 20 j=1,3 do 10 i=1,3 if ( rloss*abs(piDpj(i,j)-qiDqj(i,j)).gt.precx* + abs(piDpj(i,j))) print *,'ffxb2q: error: piDpj(' + ,i,j,') wrong: ',piDpj(i,j),qiDqj(i,j), + piDpj(i,j)-qiDqj(i,j),ier0 10 continue 20 continue endif * #] test input: * #[ normal case: ier0 = ier ier1 = ier * * with thanks to Andre Aeppli, off whom I stole the original * if ( xp .ne. 0) then cs(1) = ca0i(2) cs(2) = DBLE(xm1)*cb0 cs(3) = DBLE(2*piDpj(1,3))*cb1 cs(4) = (xm1+xm2)/2 cs(5) = -xp/6 cb2i(1) = cs(1) - cs(2) + 2*cs(3) - cs(4) - cs(5) cb2i(2) = cs(1) + 2*cs(2) - cs(3) + 2*cs(4) + 2*cs(5) xmax = max(absc(cs(2)),absc(cs(3)),absc(cs(4)),absc(cs(5))) xmxsav = xmax if ( absc(cb2i(1)) .ge. xloss*xmax ) goto 100 if ( lwrite ) then print *,'cb2i(1) = ',cb2i(1),xmax print *,'with cs ' print '(i3,2e30.16)',1,cs(1),2,-cs(2),3,2*cs(3),4, + -cs(4),5,-cs(5) endif * #] normal case: * #[ improve: m1=m2: * * a relatively simple case: dm1m2 = 0 (bi0.frm) * if ( dm1m2.eq.0 .and. 
xm1.ne.0 ) then if ( xp.lt.0 ) then slam = sqrt(xp**2-4*xm1*xp) xlo3 = dfflo3((xp-slam)/(2*xm1),ier) cs(1) = xp*(-1/DBLE(3) + slam/(4*xm1)) cs(2) = xp**2*(-slam/(4*xm1**2) - 3/(4*xm1)) cs(3) = xp**3/(4*xm1**2) cs(4) = DBLE(xp/xm1)*ca0i(1) cs(5) = xlo3/xp*(-xm1*slam) cs(6) = xlo3*slam else slam = isgnal*sqrt(-xp**2+4*xm1*xp) clo3 = zfflo3(DCMPLX(DBLE(xp/(2*xm1)), + DBLE(-slam/(2*xm1))),ier) cs(1) = DBLE(xp)*DCMPLX(-1/DBLE(3), + DBLE(slam/(4*xm1))) cs(2) = DBLE(xp**2)*DCMPLX(DBLE(-3/(4*xm1)), + DBLE(-slam/(4*xm1**2))) cs(3) = DBLE(xp**3/(4*xm1**2)) cs(4) = DBLE(xp/xm1)*ca0i(1) cs(5) = clo3*DCMPLX(DBLE(0),DBLE(-xm1*slam/xp)) cs(6) = clo3*DCMPLX(DBLE(0),DBLE(slam)) endif csom = cs(1) + cs(2) + cs(3) + cs(4) + cs(5) + cs(6) xmxp = max(absc(cs(2)),absc(cs(3)),absc(cs(4)), + absc(cs(5)),absc(cs(6))) * * get rid of noise in the imaginary part * if ( xloss*abs(DIMAG(csom)).lt.precc*abs(DBLE(csom)) ) + csom = DCMPLX(DBLE(csom),DBLE(0)) if ( lwrite ) then print *,'cb2i(1)+= ',csom,xmxp print *,'with cs ' print '(i3,2e30.16)',(i,cs(i),i=1,6) endif if ( xmxp.lt.xmax ) then cb2i(1) = csom xmax = xmxp endif if ( absc(cb2i(1)).ge.xloss**2*xmax ) goto 100 endif * #] improve: m1=m2: * #[ improve: |xp| < xm1 < xm2: * * try again (see bi.frm) * xlam = 4*(piDpj(1,3)**2 - xm1*xp) if ( xm1.eq.0 .or. xm2.eq.0 ) then xlogmm = 0 elseif ( abs(dm1m2).lt.xloss*xm1 ) then xlogmm = dfflo1(dm1m2/xm1,ier) else xlogmm = log(xm2/xm1) endif if ( xlam.gt.0 .and. abs(xp).lt.xloss*xm2 .and. + xm1.lt.xm2 ) then slam = sqrt(xlam) alp = (2*xm1*xm2/(2*piDpj(1,2)+slam) + xm1)/(slam-dm1m2) * bet = [xm2-xm1-xp-slam] bet = 4*xm1*xp/(2*piDpj(1,3)+slam) cs(1) = DBLE(xp/xm2)*ca0i(2) cs(2) = xlogmm*bet*(-2*xm1**2*xm2 - 2*xm1**3) + /((-dm1m2+slam)*(2*piDpj(1,2)+slam)*(2*piDpj(1,3)+slam)) cs(3) = xlogmm*(-4*xp*xm1**3) + /((-dm1m2+slam)*(2*piDpj(1,2)+slam)*(2*piDpj(1,3)+slam)) xnoe = 1/(2*piDpj(2,3)+slam) xnoe2 = xnoe**2 cs(4) = xnoe2*xm1*bet*(xp-4*xm2) cs(5) = xnoe2*xm1*2*xp*xm2 cs(6) = xnoe2*xm1**2*bet cs(7) = xnoe2*xm1**2*4*xp cs(8) = xnoe2*bet*(xp*xm2+3*xm2**2) cs(9) = xnoe2*(-6*xp*xm2**2) cs(10)= xp*(7/6.d0 - 2*xm1*slam*xnoe2 + + 4*xm2*slam*xnoe2 - 2*slam*xnoe) cs(11)= xp**2*( -2*slam*xnoe2 ) xlo3 = dfflo3(2*xp*xnoe,ier) cs(12) = xlo3*dm1m2**2*slam/xp**2 cs(13) = xlo3*(xm1 - 2*xm2)*slam/xp cs(14) = xlo3*slam csom = 0 xmxp = 0 do 50 i=1,14 csom = csom + cs(i) xmxp = max(xmxp,absc(cs(i))) 50 continue if ( lwrite ) then print *,'cb2i(1)+= ',csom,xmxp print *,'with cs ' print '(i3,2e30.16)',(i,cs(i),i=1,14) endif if ( xmxp.lt.xmax ) then cb2i(1) = csom xmax = xmxp endif if ( absc(cb2i(1)).ge.xloss**2*xmax ) goto 100 endif * #] improve: |xp| < xm1 < xm2: * #[ improve: |xp| < xm2 < xm1: if ( xlam.gt.0 .and. abs(xp).lt.xloss*xm1 .and. 
+ xm2.lt.xm1 ) then slam = sqrt(xlam) alp = (2*xm2*xm1/(2*piDpj(1,2)+slam) + xm2)/(slam+dm1m2) * bet = [xm1-xm2-xp-slam] bet = 4*xm2*xp/(-2*piDpj(2,3)+slam) xnoe = 1/(-2*piDpj(1,3)+slam) xnoe2 = xnoe**2 cs(1) = DBLE(xp/xm1)*ca0i(1) cs(2) = -xlogmm*bet*(12*xp*xm1*xm2+6*xp*xm2**2- + 6*xp**2*xm2-2*xm1*xm2**2-2*xm2**3) + /((dm1m2+slam)*(2*piDpj(1,2)+slam)*(-2*piDpj(2,3)+slam)) cs(3) = -xlogmm*(-24*xp*xm1**2*xm2-4*xp*xm2**3+36* + xp**2*xm1*xm2+12*xp**2*xm2**2-12*xp**3*xm2) + /((dm1m2+slam)*(2*piDpj(1,2)+slam)*(-2*piDpj(2,3)+slam)) cs(4) = xnoe2*xm2*bet*(xp-4*xm1) cs(5) = xnoe2*xm2*(-10*xp*xm1) cs(6) = xnoe2*xm2**2*bet cs(7) = xnoe2*xm2**2*4*xp cs(8) = xnoe2*bet*(xp*xm1+3*xm1**2) cs(9) = xnoe2*6*xp*xm1**2 cs(10)= xp*(7/6.d0 - 2*xm1*slam*xnoe2 + + 4*xm2*slam*xnoe2 - 2*slam*xnoe) cs(11)= xp**2*( -2*slam*xnoe2 ) xlo3 = dfflo3(2*xp*xnoe,ier) cs(12) = xlo3*dm1m2**2*slam/xp**2 cs(13) = xlo3*(xm1 - 2*xm2)*slam/xp cs(14) = xlo3*slam csom = 0 xmxp = 0 do 60 i=1,14 csom = csom + cs(i) xmxp = max(xmxp,absc(cs(i))) 60 continue if ( lwrite ) then print *,'cb2i(1)+= ',csom,xmxp print *,'with cs ' print '(i3,2e30.16)',(i,cs(i),i=1,14) endif if ( xmxp.lt.xmax ) then cb2i(1) = csom xmax = xmxp endif if ( absc(cb2i(1)).ge.xloss**2*xmax ) goto 100 endif * #] improve: |xp| < xm2 < xm1: * #[ wrap up: if ( lwarn ) then call ffwarn(225,ier0,absc(cb2i(1)),xmax) if ( lwrite ) then print *,'xp,xm1,xm2 = ',xp,xm1,xm2 endif endif 100 continue xmax = xmxsav if ( absc(cb2i(2)) .lt. xloss**2*xmax ) then if ( lwrite ) then print *,'cb2i(2) = ',cb2i(2),xmax print *,'with cs ' print '(i3,2e30.16)',1,cs(1),2,2*cs(2),3,-cs(3), + 4,2*cs(4) endif * if ( lwarn ) then call ffwarn(226,ier1,absc(cb2i(2)),xmax) endif 110 continue if ( lwrite ) print *,'cb2i(2)+= ',cb2i(2) endif cb2i(1) = DBLE(1/(3*xp)) * cb2i(1) cb2i(2) = DBLE(1/6.d0) * cb2i(2) * #] wrap up: * #[ xp=0, m1!=m2: elseif (dm1m2 .ne. 0) then * #[ old code: * first calculate B21 * cs(1) = +DBLE(xm1*xm1/dm1m2) * ca0i(1) * cs(2) = - xm1*xm1/dm1m2 * xm1 * cs(3) = -DBLE((3*xm1**2-3*xm1*xm2+xm2**2)/dm1m2) * ca0i(2) * cs(4) = + (3*xm1**2-3*xm1*xm2+xm2**2)/dm1m2 * xm2 * cs(5) = (11*xm1**2-7*xm1*xm2+2*xm2**2)/6 ** * cb2i(2) = cs(1)+cs(2)+cs(3)+cs(4)+cs(5) * if ( lwarn ) then * xmax = max(absc(cs(1)),absc(cs(2)),absc(cs(3)), * + absc(cs(4)),absc(cs(5))) * if ( absc(cb2i(2)) .lt. xloss*xmax ) * + call ffwarn(298,ier0,absc(cb2i(2)),xmax) * endif * cb2i(1)=1/(3*dm1m2**2) * cb2i(2) * B22 in the same way as with xp diff from zero * 18-nov-1993 fixed sign error in cs(2) GJ * cs(1) = ca0i(2) * cs(2) =+DBLE(2*xm1)*cb0 * cs(3) = DBLE(dm1m2)*cb1 * cs(4) = xm1+xm2 * cb2i(2) = cs(1) + cs(2) + cs(3) + cs(4) * if ( lwarn ) then * xmax = max(absc(cs(1)),absc(cs(3)),absc(cs(4))) * if ( absc(cb2i(2)) .lt. xloss*xmax ) * + call ffwarn(298,ier1,absc(cb2i(2)),xmax) * endif * cb2i(2) = cb2i(2)/6 * #] old code: * #[ B21: llogmm = .FALSE. * * B21 (see thesis, b21.frm) * cs(1) = DBLE(xm1**2/3/dm1m2**3)*ca0i(1) cs(2) = DBLE((-xm1**2 + xm1*xm2 - xm2**2/3)/dm1m2**3)* + ca0i(2) cs(3) = (5*xm1**3/18 - xm1*xm2**2/2 + 2*xm2**3/9) + /dm1m2**3 cb2i(1) = cs(1)+cs(2)+cs(3) xmax = max(absc(cs(2)),absc(cs(3))) if ( absc(cb2i(1)).gt.xloss**2*xmax ) goto 160 if ( lwrite ) then print *,'cb2i(1) = ',cb2i(1),xmax print *,'with cs ' print '(i3,2e30.16)',(i,cs(i),i=1,3) endif * * ma ~ mb * if ( abs(dm1m2).lt.xloss*xm1 ) then xlogmm = dfflo1(dm1m2/xm1,ier) else xlogmm = log(xm2/xm1) endif llogmm = .TRUE. 
cs(1) = (xm1/dm1m2)/6 cs(2) = (xm1/dm1m2)**2/3 cs(3) = (xm1/dm1m2)**3*xlogmm/3 cs(4) = -2/DBLE(9) + ca0i(1)*DBLE(1/(3*xm1)) cs(5) = -xlogmm/3 csom = cs(1)+cs(2)+cs(3)+cs(4)+cs(5) xmxp = max(absc(cs(2)),absc(cs(3)),absc(cs(4)), + absc(cs(5))) if ( lwrite ) then print *,'cb2i(1)+= ',csom,xmxp print *,'with cs ' print '(i3,2e30.16)',(i,cs(i),i=1,5) endif if ( xmxp.lt.xmax ) then xmax = xmxp cb2i(1) = csom if ( absc(cb2i(1)).gt.xloss**2*xmax ) goto 160 endif * * and last try * xlo3 = dfflo3(dm1m2/xm1,ier) cs(1) = (dm1m2/xm1)**2/6 cs(2) = (dm1m2/xm1)/3 cs(3) = xlo3/(3*(dm1m2/xm1)**3) *same cs(4) = -2/DBLE(9) + ca0i(1)*DBLE(1/(3*xm1)) cs(5) = -xlo3/3 csom = cs(1)+cs(2)+cs(3)+cs(4)+cs(5) xmxp = max(absc(cs(2)),absc(cs(3)),absc(cs(4)), + absc(cs(5))) if ( lwrite ) then print *,'cb2i(1)+= ',csom,xmxp print *,'with cs ' print '(i3,2e30.16)',(i,cs(i),i=1,5) endif if ( xmxp.lt.xmax ) then xmax = xmxp cb2i(1) = csom if ( absc(cb2i(1)).gt.xloss**2*xmax ) goto 160 endif * * give up * if ( lwarn ) then call ffwarn(225,ier,absc(cb2i(1)),xmax) if ( lwrite ) then print *,'xp,xm1,xm2 = ',xp,xm1,xm2 endif endif 160 continue * #] B21: * #[ B22: * * B22 * cs(1) = +DBLE(xm1/(4*dm1m2))*ca0i(1) cs(2) = -DBLE(xm2/(4*dm1m2))*ca0i(2) cs(3) = (xm1+xm2)/8 cb2i(2) = cs(1) + cs(2) + cs(3) xmax = max(absc(cs(2)),absc(cs(3))) if ( absc(cb2i(2)).gt.xloss*xmax ) goto 210 if ( lwrite ) then print *,'cb2i(2) = ',cb2i(2),xmax print *,'with cs ' print '(i3,2e30.16)',(i,cs(i),i=1,3) endif * * second try, close together * if ( .not.llogmm ) then if ( abs(dm1m2).lt.xloss*xm1 ) then xlogmm = dfflo1(dm1m2/xm1,ier) else xlogmm = log(xm2/xm1) endif endif cs(1) = dm1m2*( -1/DBLE(8) - ca0i(1)*DBLE(1/(4*xm1)) ) cs(2) = dm1m2*xlogmm/4 cs(3) = xm1*(xm1/dm1m2)/4*xlogmm cs(4) = xm1*( 1/DBLE(4) + ca0i(1)*DBLE(1/(2*xm1)) ) cs(5) = -xm1*xlogmm/2 csom = cs(1) + cs(2) + cs(3) + cs(4) + cs(5) xmxp = max(absc(cs(2)),absc(cs(3)),absc(cs(4)), + absc(cs(5))) if ( lwrite ) then print *,'cb2i(2)+= ',csom,xmxp print *,'with cs ' print '(i3,2e30.16)',(i,cs(i),i=1,2) endif if ( xmxp.lt.xmax ) then xmax = xmxp cb2i(2) = csom endif if ( absc(cb2i(2)).gt.xloss*xmax ) goto 210 * * give up * if ( lwarn ) then call ffwarn(226,ier,absc(cb2i(2)),xmax) if ( lwrite ) then print *,'xp,xm1,xm2 = ',xp,xm1,xm2 endif endif 210 continue * #] B22: * #] xp=0, m1!=m2: * #[ xp=0, m1==m2: else * * taken over from ffxb2a, which in turns stem from my thesis GJ * cb2i(1) = cb0/3 cb2i(2) = DBLE(xm1/2)*(cb0 + 1) endif * #] xp=0, m1==m2: * #[ finish up: ier = max(ier0,ier1) * #] finish up: *###] ffxb2q: end
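For readability, the tensor integral decomposed by ffxb2p/ffxb2q above can be written out in LaTeX; this is a direct restatement of the header comment, with B21 and B22 being exactly the two components returned in cb2i(1) and cb2i(2):

\[
\frac{1}{i\pi^{2}} \int d^{n}Q \, \frac{Q_{\mu} Q_{\nu}}{\left(Q^{2}-m_{1}^{2}\right)\left((Q+p)^{2}-m_{2}^{2}\right)}
  \;=\; p_{\mu} p_{\nu} \, B_{21} \;+\; g_{\mu\nu} \, B_{22} .
\]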
#' Extract the EMM plot as a ggplot element from a Jamovi call to ANOVA
#'
#' @param res The result of an ANOVA call
#' @param plotnumber The EMM plot to extract if using a multi-way ANOVA
#'
#' @return a ggplot object
#' @export
#'
#' @examples
#' \dontrun{
#' # `res` is assumed to be the result of a jamovi ANOVA call
#' # with estimated marginal means requested
#' anova_emm_plot(res, plotnumber = 1)
#' }
anova_emm_plot <- function(res, plotnumber = 1){
  res$emm[[plotnumber]]$emmPlot$plot
}
[STATEMENT] lemma continuous_on_ext_cont[continuous_intros]: "continuous_on (cbox a b) f \<Longrightarrow> continuous_on S (ext_cont f a b)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. continuous_on (cbox a b) f \<Longrightarrow> continuous_on S (ext_cont f a b) [PROOF STEP] by (auto intro!: clamp_continuous_on simp: ext_cont_def)
#include <stdio.h> #include <stdlib.h> #include <math.h> #include <string.h> #include <gsl/gsl_errno.h> #include <gsl/gsl_integration.h> #include "ccl.h" #ifdef HAVE_ANGPOW #include "Angpow/angpow_ccl.h" #endif #define CCL_FRAC_RELEVANT 5E-4 //#define CCL_FRAC_RELEVANT 1E-3 //Gets the x-interval where the values of y are relevant //(meaning, that the values of y for those x are at least above a fraction frac of its maximum) static void get_support_interval(int n,double *x,double *y,double frac, double *xmin_out,double *xmax_out) { int ix; double ythr=-1000; //Initialize as the original edges in case we don't find an interval *xmin_out=x[0]; *xmax_out=x[n-1]; //Find threshold for(ix=0;ix<n;ix++) { if(y[ix]>ythr) ythr=y[ix]; } ythr*=frac; //Find minimum for(ix=0;ix<n;ix++) { if(y[ix]>=ythr) { *xmin_out=x[ix]; break; } } //Find maximum for(ix=n-1;ix>=0;ix--) { if(y[ix]>=ythr) { *xmax_out=x[ix]; break; } } } //Wrapper around spline_eval with GSL function syntax static double speval_bis(double x,void *params) { return ccl_spline_eval(x,(SplPar *)params); } void ccl_cl_workspace_free(CCL_ClWorkspace *w) { free(w->l_arr); free(w); } CCL_ClWorkspace *ccl_cl_workspace_new(int lmax,int l_limber, double l_logstep,int l_linstep,int *status) { int i_l,l0,increment; CCL_ClWorkspace *w=(CCL_ClWorkspace *)malloc(sizeof(CCL_ClWorkspace)); if(w==NULL) *status=CCL_ERROR_MEMORY; if(*status==0) { //Set params w->lmax=lmax; w->l_limber=l_limber; w->l_logstep=l_logstep; w->l_linstep=l_linstep; //Compute number of multipoles i_l=0; l0=0; increment=CCL_MAX(((int)(l0*(w->l_logstep-1.))),1); while((l0 < w->lmax) && (increment < w->l_linstep)) { i_l++; l0+=increment; increment=CCL_MAX(((int)(l0*(w->l_logstep-1))),1); } increment=w->l_linstep; while(l0 < w->lmax) { i_l++; l0+=increment; } //Allocate array of multipoles w->n_ls=i_l+1; w->l_arr=(int *)malloc(w->n_ls*sizeof(int)); if(w->l_arr==NULL) *status=CCL_ERROR_MEMORY; } if(*status==0) { //Redo the computation above and store values of ell i_l=0; l0=0; increment=CCL_MAX(((int)(l0*(w->l_logstep-1.))),1); while((l0 < w->lmax) && (increment < w->l_linstep)) { w->l_arr[i_l]=l0; i_l++; l0+=increment; increment=CCL_MAX(((int)(l0*(w->l_logstep-1))),1); } increment=w->l_linstep; while(l0 < w->lmax) { w->l_arr[i_l]=l0; i_l++; l0+=increment; } //Don't go further than lmaw w->l_arr[w->n_ls-1]=w->lmax; } return w; } CCL_ClWorkspace *ccl_cl_workspace_new_limber(int lmax,double l_logstep,int l_linstep,int *status) { return ccl_cl_workspace_new(lmax,-1,l_logstep,l_linstep,status); } //Params for lensing kernel integrand typedef struct { double chi; SplPar *spl_pz; ccl_cosmology *cosmo; int *status; } IntLensPar; //Integrand for lensing kernel static double integrand_wl(double chip,void *params) { IntLensPar *p=(IntLensPar *)params; double chi=p->chi; double a=ccl_scale_factor_of_chi(p->cosmo,chip, p->status); double z=1./a-1; double pz=ccl_spline_eval(z,p->spl_pz); double h=p->cosmo->params.h*ccl_h_over_h0(p->cosmo,a, p->status)/CLIGHT_HMPC; if(chi==0) return h*pz; else return h*pz*ccl_sinn(p->cosmo,chip-chi,p->status)/ccl_sinn(p->cosmo,chip,p->status); } //Integral to compute lensing window function //chi -> comoving distance //cosmo -> ccl_cosmology object //spl_pz -> normalized N(z) spline //chi_max -> maximum comoving distance to which the integral is computed //win -> result is stored here static int window_lensing(double chi,ccl_cosmology *cosmo,SplPar *spl_pz,double chi_max,double *win) { int gslstatus =0, status =0; double result,eresult; IntLensPar ip; gsl_function F; 
gsl_integration_workspace *w=gsl_integration_workspace_alloc(ccl_gsl->N_ITERATION); ip.chi=chi; ip.cosmo=cosmo; ip.spl_pz=spl_pz; ip.status = &status; F.function=&integrand_wl; F.params=&ip; // This conputes the lensing kernel: // w_L(chi) = Integral[ dN/dchi(chi') * f(chi'-chi)/f(chi') , chi < chi' < chi_horizon ] // Where f(chi) is the comoving angular distance (which is just chi for zero curvature). gslstatus=gsl_integration_qag(&F, chi, chi_max, 0, ccl_gsl->INTEGRATION_EPSREL, ccl_gsl->N_ITERATION, ccl_gsl->INTEGRATION_GAUSS_KRONROD_POINTS, w, &result, &eresult); *win=result; gsl_integration_workspace_free(w); if(gslstatus!=GSL_SUCCESS || *ip.status) { ccl_raise_gsl_warning(gslstatus, "ccl_cls.c: window_lensing():"); return 1; } //TODO: chi_max should be changed to chi_horizon //we should precompute this quantity and store it in cosmo by default return 0; } //Params for lensing kernel integrand typedef struct { double chi; SplPar *spl_pz; SplPar *spl_sz; ccl_cosmology *cosmo; int *status; } IntMagPar; //Integrand for magnification kernel static double integrand_mag(double chip,void *params) { IntMagPar *p=(IntMagPar *)params; double chi=p->chi; double a=ccl_scale_factor_of_chi(p->cosmo,chip, p->status); double z=1./a-1; double pz=ccl_spline_eval(z,p->spl_pz); double sz=ccl_spline_eval(z,p->spl_sz); double h=p->cosmo->params.h*ccl_h_over_h0(p->cosmo,a, p->status)/CLIGHT_HMPC; if(chi==0) return h*pz*(1-2.5*sz); else return h*pz*(1-2.5*sz)*ccl_sinn(p->cosmo,chip-chi,p->status)/ccl_sinn(p->cosmo,chip,p->status); } //Integral to compute magnification window function //chi -> comoving distance //cosmo -> ccl_cosmology object //spl_pz -> normalized N(z) spline //spl_pz -> magnification bias s(z) //chi_max -> maximum comoving distance to which the integral is computed //win -> result is stored here static int window_magnification(double chi,ccl_cosmology *cosmo,SplPar *spl_pz,SplPar *spl_sz, double chi_max,double *win) { int gslstatus =0, status =0; double result,eresult; IntMagPar ip; gsl_function F; gsl_integration_workspace *w=gsl_integration_workspace_alloc(ccl_gsl->N_ITERATION); ip.chi=chi; ip.cosmo=cosmo; ip.spl_pz=spl_pz; ip.spl_sz=spl_sz; ip.status = &status; F.function=&integrand_mag; F.params=&ip; // This conputes the magnification lensing kernel: // w_M(chi) = Integral[ dN/dchi(chi') * (1-5/2 * s(chi)) * f(chi'-chi)/f(chi') , chi < chi' < chi_horizon ] // Where f(chi) is the comoving angular distance (which is just chi for zero curvature) // and s(chi) is the magnification bias parameter. 
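// Equivalently, in explicit notation (a restatement of the comment above,
// not additional physics): writing f_K for the comoving angular diameter
// distance and chi_h for the horizon distance,
//
//   w_M(chi) = Int_{chi}^{chi_h} dchi' (dN/dchi')(chi')
//              * (1 - (5/2)*s(chi')) * f_K(chi' - chi) / f_K(chi')
//
// The call below evaluates this with GSL's adaptive Gauss-Kronrod rule (QAG),
// integrating up to chi_max as a stand-in for chi_horizon (see the TODO note
// after the call).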
gslstatus=gsl_integration_qag(&F, chi, chi_max, 0, ccl_gsl->INTEGRATION_EPSREL, ccl_gsl->N_ITERATION, ccl_gsl->INTEGRATION_GAUSS_KRONROD_POINTS, w, &result, &eresult); *win=result; gsl_integration_workspace_free(w); if(gslstatus!=GSL_SUCCESS || *ip.status) { ccl_raise_gsl_warning(gslstatus, "ccl_cls.c: window_magnification():"); return 1; } //TODO: chi_max should be changed to chi_horizon //we should precompute this quantity and store it in cosmo by default return 0; } static void clt_init_nz(CCL_ClTracer *clt,ccl_cosmology *cosmo, int nz_n,double *z_n,double *n,int *status) { int gslstatus; gsl_function F; double nz_norm,nz_enorm; double *nz_normalized; //Find redshift range where the N(z) has support get_support_interval(nz_n,z_n,n,CCL_FRAC_RELEVANT,&(clt->zmin),&(clt->zmax)); clt->chimax=ccl_comoving_radial_distance(cosmo,1./(1+clt->zmax),status); clt->chimin=ccl_comoving_radial_distance(cosmo,1./(1+clt->zmin),status); clt->spl_nz=ccl_spline_init(nz_n,z_n,n,0,0); if(clt->spl_nz==NULL) { *status=CCL_ERROR_SPLINE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_nz(): error initializing spline for N(z)\n"); } if(*status==0) { //Normalize n(z) nz_normalized=(double *)malloc(nz_n*sizeof(double)); if(nz_normalized==NULL) { *status=CCL_ERROR_MEMORY; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_nz(): memory allocation\n"); return; } } if(*status==0) { gsl_integration_workspace *w=gsl_integration_workspace_alloc(ccl_gsl->N_ITERATION); F.function=&speval_bis; F.params=clt->spl_nz; //Here we're just integrating the N(z) to normalize it to unit probability. gslstatus=gsl_integration_qag(&F, z_n[0], z_n[nz_n-1], 0, ccl_gsl->INTEGRATION_EPSREL, ccl_gsl->N_ITERATION, ccl_gsl->INTEGRATION_GAUSS_KRONROD_POINTS, w, &nz_norm, &nz_enorm); gsl_integration_workspace_free(w); if(gslstatus!=GSL_SUCCESS) { ccl_raise_gsl_warning(gslstatus, "ccl_cls.c: clt_init_nz():"); *status=CCL_ERROR_INTEG; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_nz(): integration error when normalizing N(z)\n"); } } if(*status==0) { for(int ii=0;ii<nz_n;ii++) nz_normalized[ii]=n[ii]/nz_norm; ccl_spline_free(clt->spl_nz); clt->spl_nz=ccl_spline_init(nz_n,z_n,nz_normalized,0,0); if(clt->spl_nz==NULL) { *status=CCL_ERROR_SPLINE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_nz(): error initializing normalized spline for N(z)\n"); } } free(nz_normalized); } static void clt_init_bz(CCL_ClTracer *clt,ccl_cosmology *cosmo, int nz_b,double *z_b,double *b,int *status) { //Initialize bias spline clt->spl_bz=ccl_spline_init(nz_b,z_b,b,b[0],b[nz_b-1]); if(clt->spl_bz==NULL) { *status=CCL_ERROR_SPLINE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_bz(): error initializing spline for b(z)\n"); } } static void clt_init_wM(CCL_ClTracer *clt,ccl_cosmology *cosmo, int nz_s,double *z_s,double *s,int *status) { //Compute magnification kernel int nchi; double *x,*y; double dchi_here=5.; double zmax=clt->spl_nz->xf; double chimax=ccl_comoving_radial_distance(cosmo,1./(1+zmax),status); //TODO: The interval in chi (5. Mpc) should be made a macro //In this case we need to integrate all the way to z=0. 
Reset zmin and chimin clt->zmin=0; clt->chimin=0; clt->spl_sz=ccl_spline_init(nz_s,z_s,s,s[0],s[nz_s-1]); if(clt->spl_sz==NULL) { *status=CCL_ERROR_SPLINE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_wM(): error initializing spline for s(z)\n"); } if(*status==0) { nchi=(int)(chimax/dchi_here)+1; x=ccl_linear_spacing(0.,chimax,nchi); dchi_here=chimax/nchi; if(x==NULL || (fabs(x[0]-0)>1E-5) || (fabs(x[nchi-1]-chimax)>1e-5)) { *status=CCL_ERROR_LINSPACE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_wM(): Error creating linear spacing in chi\n"); } } if(*status==0) { y=(double *)malloc(nchi*sizeof(double)); if(y==NULL) { *status=CCL_ERROR_MEMORY; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_wM(): memory allocation\n"); } } if(*status==0) { int clstatus=0; for(int j=0;j<nchi;j++) clstatus|=window_magnification(x[j],cosmo,clt->spl_nz,clt->spl_sz,chimax,&(y[j])); if(clstatus) { *status=CCL_ERROR_INTEG; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_wM(): error computing lensing window\n"); } } if(*status==0) { clt->spl_wM=ccl_spline_init(nchi,x,y,y[0],0); if(clt->spl_wM==NULL) { *status=CCL_ERROR_SPLINE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_wM(): error initializing spline for lensing window\n"); } } free(x); free(y); } //CCL_ClTracer initializer for number counts static void clt_nc_init(CCL_ClTracer *clt,ccl_cosmology *cosmo, int has_rsd,int has_magnification, int nz_n,double *z_n,double *n, int nz_b,double *z_b,double *b, int nz_s,double *z_s,double *s,int *status) { clt->has_rsd=has_rsd; clt->has_magnification=has_magnification; if ( ((cosmo->params.N_nu_mass)>0) && clt->has_rsd){ *status=CCL_ERROR_NOT_IMPLEMENTED; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: ccl_cl_tracer_new(): Number counts tracers with RSD not yet implemented in cosmologies with massive neutrinos."); return; } clt_init_nz(clt,cosmo,nz_n,z_n,n,status); clt_init_bz(clt,cosmo,nz_b,z_b,b,status); if(clt->has_magnification) clt_init_wM(clt,cosmo,nz_s,z_s,s,status); } static void clt_init_wL(CCL_ClTracer *clt,ccl_cosmology *cosmo, int *status) { //Compute weak lensing kernel int nchi; double *x,*y; double dchi_here=5.; double zmax=clt->spl_nz->xf; double chimax=ccl_comoving_radial_distance(cosmo,1./(1+zmax),status); //TODO: The interval in chi (5. Mpc) should be made a macro //In this case we need to integrate all the way to z=0. 
Reset zmin and chimin clt->zmin=0; clt->chimin=0; nchi=(int)(chimax/dchi_here)+1; x=ccl_linear_spacing(0.,chimax,nchi); dchi_here=chimax/nchi; if(x==NULL || (fabs(x[0]-0)>1E-5) || (fabs(x[nchi-1]-chimax)>1e-5)) { *status=CCL_ERROR_LINSPACE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_wL(): Error creating linear spacing in chi\n"); } if(*status==0) { y=(double *)malloc(nchi*sizeof(double)); if(y==NULL) { *status=CCL_ERROR_MEMORY; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_wL(): memory allocation\n"); } } if(*status==0) { int clstatus=0; for(int j=0;j<nchi;j++) clstatus|=window_lensing(x[j],cosmo,clt->spl_nz,chimax,&(y[j])); if(clstatus) { *status=CCL_ERROR_INTEG; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_wL(): error computing lensing window\n"); } } if(*status==0) { clt->spl_wL=ccl_spline_init(nchi,x,y,y[0],0); if(clt->spl_wL==NULL) { *status=CCL_ERROR_SPLINE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_wL(): error initializing spline for lensing window\n"); } } free(x); free(y); } static void clt_init_rf(CCL_ClTracer *clt,ccl_cosmology *cosmo, int nz_rf,double *z_rf,double *rf,int *status) { //Initialize bias spline clt->spl_rf=ccl_spline_init(nz_rf,z_rf,rf,rf[0],rf[nz_rf-1]); if(clt->spl_rf==NULL) { *status=CCL_ERROR_SPLINE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_rf(): error initializing spline for b(z)\n"); } } static void clt_init_ba(CCL_ClTracer *clt,ccl_cosmology *cosmo, int nz_ba,double *z_ba,double *ba,int *status) { //Initialize bias spline clt->spl_ba=ccl_spline_init(nz_ba,z_ba,ba,ba[0],ba[nz_ba-1]); if(clt->spl_ba==NULL) { *status=CCL_ERROR_SPLINE; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: clt_init_ba(): error initializing spline for b(z)\n"); } } static void clt_wl_init(CCL_ClTracer *clt,ccl_cosmology *cosmo, int has_intrinsic_alignment, int nz_n,double *z_n,double *n, int nz_ba,double *z_ba,double *ba, int nz_rf,double *z_rf,double *rf,int *status) { clt->has_intrinsic_alignment=has_intrinsic_alignment; clt_init_nz(clt,cosmo,nz_n,z_n,n,status); clt_init_wL(clt,cosmo,status); if(clt->has_intrinsic_alignment) { clt_init_rf(clt,cosmo,nz_rf,z_rf,rf,status); clt_init_ba(clt,cosmo,nz_ba,z_ba,ba,status); } } //CCL_ClTracer creator //cosmo -> ccl_cosmology object //tracer_type -> type of tracer. Supported: ccl_number_counts_tracer, ccl_weak_lensing_tracer //nz_n -> number of points for N(z) //z_n -> array of z-values for N(z) //n -> corresponding N(z)-values. Normalization is irrelevant // N(z) will be set to zero outside the range covered by z_n //nz_b -> number of points for b(z) //z_b -> array of z-values for b(z) //b -> corresponding b(z)-values. 
// b(z) will be assumed constant outside the range covered by z_n static CCL_ClTracer *cl_tracer(ccl_cosmology *cosmo,int tracer_type, int has_rsd,int has_magnification,int has_intrinsic_alignment, int nz_n,double *z_n,double *n, int nz_b,double *z_b,double *b, int nz_s,double *z_s,double *s, int nz_ba,double *z_ba,double *ba, int nz_rf,double *z_rf,double *rf, double z_source, int * status) { int clstatus=0; CCL_ClTracer *clt=(CCL_ClTracer *)malloc(sizeof(CCL_ClTracer)); if(clt==NULL) { *status=CCL_ERROR_MEMORY; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: ccl_cl_tracer(): memory allocation\n"); } if(*status==0) { clt->tracer_type=tracer_type; double hub=cosmo->params.h*ccl_h_over_h0(cosmo,1.,status)/CLIGHT_HMPC; clt->prefac_lensing=1.5*hub*hub*cosmo->params.Omega_m; if(tracer_type==ccl_number_counts_tracer) clt_nc_init(clt,cosmo,has_rsd,has_magnification, nz_n,z_n,n,nz_b,z_b,b,nz_s,z_s,s,status); else if(tracer_type==ccl_weak_lensing_tracer) clt_wl_init(clt,cosmo,has_intrinsic_alignment, nz_n,z_n,n,nz_ba,z_ba,ba,nz_rf,z_rf,rf,status); else if(tracer_type==ccl_cmb_lensing_tracer) { clt->chi_source=ccl_comoving_radial_distance(cosmo,1./(1+z_source),status); clt->chimax=clt->chi_source; clt->chimin=0; } else { free(clt); *status=CCL_ERROR_INCONSISTENT; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: ccl_cl_tracer(): unknown tracer type\n"); return NULL; } } if(*status) { free(clt); clt=NULL; } return clt; } //CCL_ClTracer constructor with error checking //cosmo -> ccl_cosmology object //tracer_type -> type of tracer. Supported: ccl_number_counts_tracer, ccl_weak_lensing_tracer //nz_n -> number of points for N(z) //z_n -> array of z-values for N(z) //n -> corresponding N(z)-values. Normalization is irrelevant // N(z) will be set to zero outside the range covered by z_n //nz_b -> number of points for b(z) //z_b -> array of z-values for b(z) //b -> corresponding b(z)-values. 
// b(z) will be assumed constant outside the range covered by z_n CCL_ClTracer *ccl_cl_tracer(ccl_cosmology *cosmo,int tracer_type, int has_rsd,int has_magnification,int has_intrinsic_alignment, int nz_n,double *z_n,double *n, int nz_b,double *z_b,double *b, int nz_s,double *z_s,double *s, int nz_ba,double *z_ba,double *ba, int nz_rf,double *z_rf,double *rf, double z_source, int * status) { CCL_ClTracer *clt=cl_tracer(cosmo,tracer_type,has_rsd,has_magnification,has_intrinsic_alignment, nz_n,z_n,n,nz_b,z_b,b,nz_s,z_s,s, nz_ba,z_ba,ba,nz_rf,z_rf,rf,z_source,status); ccl_check_status(cosmo,status); return clt; } //CCL_ClTracer destructor void ccl_cl_tracer_free(CCL_ClTracer *clt) { if((clt->tracer_type==ccl_number_counts_tracer) || (clt->tracer_type==ccl_weak_lensing_tracer)) ccl_spline_free(clt->spl_nz); if(clt->tracer_type==ccl_number_counts_tracer) { ccl_spline_free(clt->spl_bz); if(clt->has_magnification) { ccl_spline_free(clt->spl_sz); ccl_spline_free(clt->spl_wM); } } else if(clt->tracer_type==ccl_weak_lensing_tracer) { ccl_spline_free(clt->spl_wL); if(clt->has_intrinsic_alignment) { ccl_spline_free(clt->spl_ba); ccl_spline_free(clt->spl_rf); } } free(clt); } CCL_ClTracer *ccl_cl_tracer_cmblens(ccl_cosmology *cosmo,double z_source,int *status) { return ccl_cl_tracer(cosmo,ccl_cmb_lensing_tracer, 0,0,0, 0,NULL,NULL,0,NULL,NULL,0,NULL,NULL, 0,NULL,NULL,0,NULL,NULL,z_source,status); } CCL_ClTracer *ccl_cl_tracer_number_counts(ccl_cosmology *cosmo, int has_rsd,int has_magnification, int nz_n,double *z_n,double *n, int nz_b,double *z_b,double *b, int nz_s,double *z_s,double *s, int * status) { return ccl_cl_tracer(cosmo,ccl_number_counts_tracer,has_rsd,has_magnification,0, nz_n,z_n,n,nz_b,z_b,b,nz_s,z_s,s, -1,NULL,NULL,-1,NULL,NULL,0, status); } CCL_ClTracer *ccl_cl_tracer_number_counts_simple(ccl_cosmology *cosmo, int nz_n,double *z_n,double *n, int nz_b,double *z_b,double *b, int * status) { return ccl_cl_tracer(cosmo,ccl_number_counts_tracer,0,0,0, nz_n,z_n,n,nz_b,z_b,b,-1,NULL,NULL, -1,NULL,NULL,-1,NULL,NULL,0, status); } CCL_ClTracer *ccl_cl_tracer_lensing(ccl_cosmology *cosmo, int has_alignment, int nz_n,double *z_n,double *n, int nz_ba,double *z_ba,double *ba, int nz_rf,double *z_rf,double *rf, int * status) { return ccl_cl_tracer(cosmo,ccl_weak_lensing_tracer,0,0,has_alignment, nz_n,z_n,n,-1,NULL,NULL,-1,NULL,NULL, nz_ba,z_ba,ba,nz_rf,z_rf,rf,0, status); } CCL_ClTracer *ccl_cl_tracer_lensing_simple(ccl_cosmology *cosmo, int nz_n,double *z_n,double *n, int * status) { return ccl_cl_tracer(cosmo,ccl_weak_lensing_tracer,0,0,0, nz_n,z_n,n,-1,NULL,NULL,-1,NULL,NULL, -1,NULL,NULL,-1,NULL,NULL,0, status); } static double f_dens(double a,ccl_cosmology *cosmo,CCL_ClTracer *clt, int * status) { double z=1./a-1; double pz=ccl_spline_eval(z,clt->spl_nz); double bz=ccl_spline_eval(z,clt->spl_bz); double h=cosmo->params.h*ccl_h_over_h0(cosmo,a,status)/CLIGHT_HMPC; return pz*bz*h; } static double f_rsd(double a,ccl_cosmology *cosmo,CCL_ClTracer *clt, int * status) { double z=1./a-1; double pz=ccl_spline_eval(z,clt->spl_nz); double fg=ccl_growth_rate(cosmo,a,status); double h=cosmo->params.h*ccl_h_over_h0(cosmo,a,status)/CLIGHT_HMPC; return pz*fg*h; } static double f_mag(double a,double chi,ccl_cosmology *cosmo,CCL_ClTracer *clt, int * status) { double wM=ccl_spline_eval(chi,clt->spl_wM); if(wM<=0) return 0; else return wM/(a*chi); } //Transfer function for number counts //l -> angular multipole //k -> wavenumber modulus //cosmo -> ccl_cosmology object //w -> CCL_ClWorskpace object //clt -> 
CCL_ClTracer object (must be of the ccl_number_counts_tracer type) static double transfer_nc(int l,double k, ccl_cosmology *cosmo,CCL_ClWorkspace *w,CCL_ClTracer *clt, int * status) { double ret=0; double x0=(l+0.5); double chi0=x0/k; if(chi0<=clt->chimax) { double a0=ccl_scale_factor_of_chi(cosmo,chi0,status); double f_all=f_dens(a0,cosmo,clt,status); if(clt->has_rsd) { double x1=(l+1.5); double chi1=x1/k; if(chi1<=clt->chimax) { double a1=ccl_scale_factor_of_chi(cosmo,chi1,status); double pk0=ccl_nonlin_matter_power(cosmo,k,a0,status); double pk1=ccl_nonlin_matter_power(cosmo,k,a1,status); double fg0=f_rsd(a0,cosmo,clt,status); double fg1=f_rsd(a1,cosmo,clt,status); f_all+=fg0*(1.-l*(l-1.)/(x0*x0))-fg1*2.*sqrt((l+0.5)*pk1/((l+1.5)*pk0))/x1; } } if(clt->has_magnification) f_all+=-2*clt->prefac_lensing*l*(l+1)*f_mag(a0,chi0,cosmo,clt,status)/(k*k); ret=f_all; } return ret; } static double f_lensing(double a,double chi,ccl_cosmology *cosmo,CCL_ClTracer *clt, int * status) { double wL=ccl_spline_eval(chi,clt->spl_wL); if(wL<=0) return 0; else return clt->prefac_lensing*wL/(a*chi); } static double f_IA_NLA(double a,double chi,ccl_cosmology *cosmo,CCL_ClTracer *clt, int * status) { if(chi<=1E-10) return 0; else { double a=ccl_scale_factor_of_chi(cosmo,chi, status); double z=1./a-1; double pz=ccl_spline_eval(z,clt->spl_nz); double ba=ccl_spline_eval(z,clt->spl_ba); double rf=ccl_spline_eval(z,clt->spl_rf); double h=cosmo->params.h*ccl_h_over_h0(cosmo,a,status)/CLIGHT_HMPC; return pz*ba*rf*h/(chi*chi); } } //Transfer function for shear //l -> angular multipole //k -> wavenumber modulus //cosmo -> ccl_cosmology object //w -> CCL_ClWorskpace object //clt -> CCL_ClTracer object (must be of the ccl_weak_lensing_tracer type) static double transfer_wl(int l,double k, ccl_cosmology *cosmo,CCL_ClWorkspace *w,CCL_ClTracer *clt, int * status) { double ret=0; double chi=(l+0.5)/k; if(chi<=clt->chimax) { double a=ccl_scale_factor_of_chi(cosmo,chi,status); double f_all=f_lensing(a,chi,cosmo,clt,status); if(clt->has_intrinsic_alignment) f_all+=f_IA_NLA(a,chi,cosmo,clt,status); ret=f_all; } return sqrt((l+2.)*(l+1.)*l*(l-1.))*ret/(k*k); //return (l+1.)*l*ret/(k*k); } static double transfer_cmblens(int l,double k,ccl_cosmology *cosmo,CCL_ClTracer *clt,int *status) { double chi=(l+0.5)/k; if(chi>=clt->chi_source) return 0; if(chi<=clt->chimax) { double a=ccl_scale_factor_of_chi(cosmo,chi,status); double w=1-chi/clt->chi_source; return clt->prefac_lensing*l*(l+1.)*w/(a*chi*k*k); } return 0; } //Wrapper for transfer function //l -> angular multipole //k -> wavenumber modulus //cosmo -> ccl_cosmology object //clt -> CCL_ClTracer object static double transfer_wrap(int il,double lk,ccl_cosmology *cosmo, CCL_ClWorkspace *w,CCL_ClTracer *clt, int * status) { double transfer_out=0; double k=pow(10.,lk); if(clt->tracer_type==ccl_number_counts_tracer) transfer_out=transfer_nc(w->l_arr[il],k,cosmo,w,clt,status); else if(clt->tracer_type==ccl_weak_lensing_tracer) transfer_out=transfer_wl(w->l_arr[il],k,cosmo,w,clt,status); else if(clt->tracer_type==ccl_cmb_lensing_tracer) transfer_out=transfer_cmblens(w->l_arr[il],k,cosmo,clt,status); else transfer_out=-1; return transfer_out; } //Params for power spectrum integrand typedef struct { int il; ccl_cosmology *cosmo; CCL_ClWorkspace *w; CCL_ClTracer *clt1; CCL_ClTracer *clt2; int *status; } IntClPar; //Integrand for integral power spectrum static double cl_integrand(double lk,void *params) { double d1,d2; IntClPar *p=(IntClPar *)params; 
d1=transfer_wrap(p->il,lk,p->cosmo,p->w,p->clt1,p->status); if(d1==0) return 0; d2=transfer_wrap(p->il,lk,p->cosmo,p->w,p->clt2,p->status); if(d2==0) return 0; double k=pow(10.,lk); double chi=(p->w->l_arr[p->il]+0.5)/k; double a=ccl_scale_factor_of_chi(p->cosmo,chi,p->status); double pk=ccl_nonlin_matter_power(p->cosmo,k,a,p->status); return k*pk*d1*d2; } //Figure out k intervals where the Limber kernel has support //clt1 -> tracer #1 //clt2 -> tracer #2 //l -> angular multipole //lkmin, lkmax -> log10 of the range of scales where the transfer functions have support static void get_k_interval(ccl_cosmology *cosmo,CCL_ClWorkspace *w, CCL_ClTracer *clt1,CCL_ClTracer *clt2,int l, double *lkmin,double *lkmax) { double chimin,chimax; int cut_low_1=0,cut_low_2=0; //Define a minimum distance only if no lensing is needed if((clt1->tracer_type==ccl_number_counts_tracer) && (clt1->has_magnification==0)) cut_low_1=1; if((clt2->tracer_type==ccl_number_counts_tracer) && (clt2->has_magnification==0)) cut_low_2=1; if(cut_low_1) { if(cut_low_2) { chimin=fmax(clt1->chimin,clt2->chimin); chimax=fmin(clt1->chimax,clt2->chimax); } else { chimin=clt1->chimin; chimax=clt1->chimax; } } else if(cut_low_2) { chimin=clt2->chimin; chimax=clt2->chimax; } else { chimin=0.5*(l+0.5)/ccl_splines->K_MAX; chimax=2*(l+0.5)/ccl_splines->K_MIN; } if(chimin<=0) chimin=0.5*(l+0.5)/ccl_splines->K_MAX; *lkmax=log10(fmin( ccl_splines->K_MAX ,2 *(l+0.5)/chimin)); *lkmin=log10(fmax( ccl_splines->K_MIN ,0.5*(l+0.5)/chimax)); } //Compute angular power spectrum between two bins //cosmo -> ccl_cosmology object //il -> index in angular multipole array //clt1 -> tracer #1 //clt2 -> tracer #2 static double ccl_angular_cl_native(ccl_cosmology *cosmo,CCL_ClWorkspace *cw,int il, CCL_ClTracer *clt1,CCL_ClTracer *clt2,int * status) { int clastatus=0, gslstatus; IntClPar ipar; double result=0,eresult; double lkmin,lkmax; gsl_function F; gsl_integration_workspace *w=gsl_integration_workspace_alloc(ccl_gsl->N_ITERATION); ipar.il=il; ipar.cosmo=cosmo; ipar.w=cw; ipar.clt1=clt1; ipar.clt2=clt2; ipar.status = &clastatus; F.function=&cl_integrand; F.params=&ipar; get_k_interval(cosmo,cw,clt1,clt2,cw->l_arr[il],&lkmin,&lkmax); // This computes the angular power spectra in the Limber approximation between two quantities a and b: // C_ell^ab = 2/(2*ell+1) * Integral[ Delta^a_ell(k) Delta^b_ell(k) * P(k) , k_min < k < k_max ] // Note that we use log10(k) as an integration variable, and the ell-dependent prefactor is included // at the end of this function. gslstatus=gsl_integration_qag(&F, lkmin, lkmax, 0, ccl_gsl->INTEGRATION_LIMBER_EPSREL, ccl_gsl->N_ITERATION, ccl_gsl->INTEGRATION_LIMBER_GAUSS_KRONROD_POINTS, w, &result, &eresult); gsl_integration_workspace_free(w); // Test if a round-off error occured in the evaluation of the integral // If so, try another integration function, more robust but potentially slower if(gslstatus == GSL_EROUND) { ccl_raise_gsl_warning(gslstatus, "ccl_cls.c: ccl_angular_cl_native(): Default GSL integration failure, attempting backup method."); gsl_integration_cquad_workspace *w_cquad= gsl_integration_cquad_workspace_alloc (ccl_gsl->N_ITERATION); size_t nevals=0; gslstatus=gsl_integration_cquad(&F, lkmin, lkmax, 0, ccl_gsl->INTEGRATION_LIMBER_EPSREL, w_cquad, &result, &eresult, &nevals); gsl_integration_cquad_workspace_free(w_cquad); } if(gslstatus!=GSL_SUCCESS || *ipar.status) { ccl_raise_gsl_warning(gslstatus, "ccl_cls.c: ccl_angular_cl_native():"); // If an error status was already set, don't overwrite it. 
if(*status == 0){ *status=CCL_ERROR_INTEG; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: ccl_angular_cl_native(): error integrating over k\n"); } return -1; } ccl_check_status(cosmo,status); return result*M_LN10/(cw->l_arr[il]+0.5); } void ccl_angular_cls(ccl_cosmology *cosmo,CCL_ClWorkspace *w, CCL_ClTracer *clt1,CCL_ClTracer *clt2, int nl_out,int *l_out,double *cl_out,int *status) { int ii,do_angpow; double *l_nodes,*cl_nodes; SplPar *spcl_nodes; //First check if ell range is within workspace for(ii=0;ii<nl_out;ii++) { if(l_out[ii]>w->lmax) { *status=CCL_ERROR_SPLINE_EV; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: ccl_angular_cls(); " "requested l beyond range allowed by workspace\n"); return; } } if(*status==0) { //Allocate array for power spectrum at interpolation nodes l_nodes=(double *)malloc(w->n_ls*sizeof(double)); if(l_nodes==NULL) { *status=CCL_ERROR_MEMORY; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: ccl_angular_cls(); memory allocation\n"); } } if(*status==0) { cl_nodes=(double *)malloc(w->n_ls*sizeof(double)); if(cl_nodes==NULL) { *status=CCL_ERROR_MEMORY; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: ccl_cl_angular_cls(); memory allocation\n"); } } if(*status==0) { for(ii=0;ii<w->n_ls;ii++) l_nodes[ii]=(double)(w->l_arr[ii]); do_angpow=0; //Now check if angpow is needed at all if(w->l_limber>0) { for(ii=0;ii<w->n_ls;ii++) { if(w->l_arr[ii]<=w->l_limber) do_angpow=1; } } #ifndef HAVE_ANGPOW do_angpow=0; #endif //HAVE_ANGPOW //Resort to Limber if we have lensing (this will hopefully only be temporary) if(clt1->tracer_type==ccl_weak_lensing_tracer || clt2->tracer_type==ccl_weak_lensing_tracer || clt1->has_magnification || clt2->has_magnification) { do_angpow=0; } //Use angpow if non-limber is needed if(do_angpow) ccl_angular_cls_angpow(cosmo,w,clt1,clt2,cl_nodes,status); ccl_check_status(cosmo,status); } if(*status==0) { //Compute limber nodes for(ii=0;ii<w->n_ls;ii++) { if((!do_angpow) || (w->l_arr[ii]>w->l_limber)) cl_nodes[ii]=ccl_angular_cl_native(cosmo,w,ii,clt1,clt2,status); } //Interpolate into ells requested by user spcl_nodes=ccl_spline_init(w->n_ls,l_nodes,cl_nodes,0,0); if(spcl_nodes==NULL) { *status=CCL_ERROR_MEMORY; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: ccl_cl_angular_cls(); memory allocation\n"); } } if(*status==0) { for(ii=0;ii<nl_out;ii++) cl_out[ii]=ccl_spline_eval((double)(l_out[ii]),spcl_nodes); } //Cleanup ccl_spline_free(spcl_nodes); free(cl_nodes); free(l_nodes); } static int check_clt_fa_inconsistency(CCL_ClTracer *clt,int func_code) { if(((func_code==ccl_trf_nz) && (clt->tracer_type==ccl_cmb_lensing_tracer)) || //lensing has no n(z) (((func_code==ccl_trf_bz) || (func_code==ccl_trf_sz) || (func_code==ccl_trf_wM)) && (clt->tracer_type!=ccl_number_counts_tracer)) || //bias and magnification only for clustering (((func_code==ccl_trf_rf) || (func_code==ccl_trf_ba) || (func_code==ccl_trf_wL)) && (clt->tracer_type!=ccl_weak_lensing_tracer))) //IAs only for weak lensing return 1; if((((func_code==ccl_trf_sz) || (func_code==ccl_trf_wM)) && (clt->has_magnification==0)) || //correct combination, but no magnification (((func_code==ccl_trf_rf) || (func_code==ccl_trf_ba)) && (clt->has_intrinsic_alignment==0))) //Correct combination, but no IAs return 1; return 0; } double ccl_get_tracer_fa(ccl_cosmology *cosmo,CCL_ClTracer *clt,double a,int func_code,int *status) { SplPar *spl; if(check_clt_fa_inconsistency(clt,func_code)) { *status=CCL_ERROR_INCONSISTENT; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: inconsistent 
combination of tracer and internal function to be evaluated"); return -1; } switch(func_code) { case ccl_trf_nz : spl=clt->spl_nz; break; case ccl_trf_bz : spl=clt->spl_bz; break; case ccl_trf_sz : spl=clt->spl_sz; break; case ccl_trf_rf : spl=clt->spl_rf; break; case ccl_trf_ba : spl=clt->spl_ba; break; case ccl_trf_wL : spl=clt->spl_wL; break; case ccl_trf_wM : spl=clt->spl_wM; break; } double x; if((func_code==ccl_trf_wL) || (func_code==ccl_trf_wM)) x=ccl_comoving_radial_distance(cosmo,a,status); //x-variable is comoving distance for lensing kernels else x=1./a-1; //x-variable is redshift by default return ccl_spline_eval(x,spl); } int ccl_get_tracer_fas(ccl_cosmology *cosmo,CCL_ClTracer *clt,int na,double *a,double *fa, int func_code,int *status) { SplPar *spl; if(check_clt_fa_inconsistency(clt,func_code)) { *status=CCL_ERROR_INCONSISTENT; ccl_cosmology_set_status_message(cosmo, "ccl_cls.c: inconsistent combination of tracer and internal function to be evaluated"); return -1; } switch(func_code) { case ccl_trf_nz : spl=clt->spl_nz; break; case ccl_trf_bz : spl=clt->spl_bz; break; case ccl_trf_sz : spl=clt->spl_sz; break; case ccl_trf_rf : spl=clt->spl_rf; break; case ccl_trf_ba : spl=clt->spl_ba; break; case ccl_trf_wL : spl=clt->spl_wL; break; case ccl_trf_wM : spl=clt->spl_wM; break; } int compchi = (func_code==ccl_trf_wL) || (func_code==ccl_trf_wM); int ia; for(ia=0;ia<na;ia++) { double x; if(compchi) //x-variable is comoving distance for lensing kernels x=ccl_comoving_radial_distance(cosmo,a[ia],status); else //x-variable is redshift by default x=1./a[ia]-1; fa[ia]=ccl_spline_eval(x,spl); } return 0; }
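To show how the pieces above fit together, here is a minimal, hypothetical usage sketch in C. It only calls functions whose definitions appear in this file (the simple number-counts tracer constructor, the Limber-only workspace, and ccl_angular_cls); the cosmology object itself is assumed to have been created elsewhere with the usual CCL setup, and all array values are illustrative placeholders.

#include "ccl.h"

/* Sketch only: `cosmo` is assumed to be a fully initialized ccl_cosmology
 * created elsewhere; the node values below are placeholders. */
void example_number_counts_cls(ccl_cosmology *cosmo, int *status)
{
  /* Toy redshift distribution N(z) and linear bias b(z) on three nodes */
  double z[3]  = {0.2, 0.5, 0.8};
  double nz[3] = {0.5, 1.0, 0.5};
  double bz[3] = {1.2, 1.3, 1.4};

  /* Clustering tracer without RSD or magnification */
  CCL_ClTracer *tr = ccl_cl_tracer_number_counts_simple(cosmo, 3, z, nz,
                                                        3, z, bz, status);

  /* Limber-only workspace up to ell=1000: logarithmic multipole sampling
   * with step factor 1.05, switching to linear steps of 20 */
  CCL_ClWorkspace *w = ccl_cl_workspace_new_limber(1000, 1.05, 20, status);

  /* Evaluate the tracer's auto-spectrum at a few requested multipoles;
   * intermediate ells are handled by the workspace's internal spline */
  int ells[3] = {100, 300, 500};
  double cls[3];
  ccl_angular_cls(cosmo, w, tr, tr, 3, ells, cls, status);

  ccl_cl_workspace_free(w);
  ccl_cl_tracer_free(tr);
}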
-- import SciLean.Core.CoreFunctionProperties -- import SciLean.Core.AdjDiff import SciLean.Core.FinVec import SciLean.Core.SmoothMap namespace SciLean opaque LocIntDom (X : Type) [Vec X] : Type -------------------------------------------------------------------------------- -- Integral -------------------------------------------------------------------------------- -- If `f` is integrable on `Ω` return integral otherwise return zero -- IMPORTANT: We choose to integrate only over **bounded** domains. -- This way the function `λ (f : X⟿Y) => ∫ x, f x` can be linear. -- QUESTION: Do we need Y to be complete? For example smooth function -- with compact support do not form closed subspace in `ℝ ⟿ ℝ`. -- Can we have `γ : ℝ ⟿ {f : ℝ ⟿ ℝ // TestFun f}` such that -- `∫ t ∈ [0,1], γ.1` is not a `TestFun`? noncomputable opaque integral {X Y ι : Type} [Enumtype ι] [FinVec X ι] [Vec Y] (f : X ⟿ Y) (Ω : LocIntDom X) : Y noncomputable opaque limitOverWholeDomain {X Y ι : Type} [Enumtype ι] [FinVec X ι] [Vec Y] (F : LocIntDom X → Y) : Y instance integral.instNotationIntegral {X Y ι : Type} [Enumtype ι] [FinVec X ι] [Vec Y] (f : X ⟿ Y) : Integral f (integral f) := ⟨⟩ syntax intBinderType := ":" term syntax intBinder := ident (intBinderType)? syntax "∫" intBinder "," term:66 : term syntax "∫" "(" intBinder ")" "," term:66 : term macro_rules | `(∫ $x:ident, $f) => `(∫ (SmoothMap.mk' λ $x => $f)) | `(∫ $x:ident : $type:term, $f) => `(∫ (SmoothMap.mk' λ ($x : $type) => $f)) | `(∫ ($x:ident : $type:term), $f) => `(∫ $x:ident : $type:term, $f) -------------------------------------------------------------------------------- -- SemiHilbert structure on spaces like `ℝ^{n}⟿ℝ` -------------------------------------------------------------------------------- variable {X Y ι : Type} [Enumtype ι] [FinVec X ι] [Hilbert Y] noncomputable instance : Inner (X⟿Y) where inner f g := (integral (SmoothMap.mk' (λ x => ⟪f x, g x⟫) sorry)) |> limitOverWholeDomain instance : TestFunctions (X⟿Y) where TestFun f := sorry -- has compact support noncomputable instance : SemiHilbert (X⟿Y) := SemiHilbert.mkSorryProofs
public export data X : Type where [noHints] U : X W : X public export %hint xHint : X -- Intentionally unimplemented f : Int -> X => Int f a = a + 1 g : Int g = f 4 -- Typechecks even when `xHint` is unimplemented.
#-- libraries used below: readr for read_csv()/read_delim(), dplyr for filter() and joins
library(readr)
library(dplyr)

#-- read main metadata file
Projects_metadata <- read_csv(PMeta)

#-- choose the line containing the project we want to analyse
Projects_metadata = Projects_metadata %>% filter(Proj_name == Name_project)

#-- define WD: path to the folder
WD = NA
WD = ifelse(Projects_metadata$source_data == "this_github", "../data", WD)
WD = ifelse(Projects_metadata$source_data == "USB_stick", STICK, WD)
WD = ifelse(Projects_metadata$source_data == "https:/", "https:/", WD)

#-- get additional metadata
animal_meta = read_csv(
  paste(
    WD,
    Projects_metadata$Folder_path,
    Projects_metadata$animal_metadata,
    sep = "/"
  ),
  col_names = TRUE,
  cols(
    animal_ID = col_character(),
    animal_birthdate = col_date(format = ""),
    gender = col_character(),
    treatment = col_character(),
    genotype = col_character(),
    other_category = col_character(),
    date = col_date(format = ""),
    test_cage = col_character(),
    real_time_start = col_time(format = ""),
    Lab_ID = col_character(),
    Exclude_data = col_character(),
    comment = col_character(),
    experiment_folder_name = col_character(),
    Behavior_sequence = col_character(),
    Onemin_summary = col_character(),
    Onehour_summary = col_character(),
    primary_behav_sequence = col_character(),
    primary_position_time = col_character(),
    primary_datafile = col_character()
  )
)
animal_meta$Proj_name = Name_project

lab_meta = read_csv(paste(
  WD,
  Projects_metadata$Folder_path,
  Projects_metadata$lab_metadata,
  sep = "/"
))

#-- put all information together
metadata = left_join(left_join(animal_meta, lab_meta, by = "Lab_ID"),
                     Projects_metadata, by = "Proj_name")

#-- add an ID for each animal (numbers starting at 100)
metadata$ID = as.character(c(100:(99 + nrow(metadata))))

if (Projects_metadata$video_analysis == "HCS 3.0") {
  Behav_code <- read_delim(
    "infos/HCS_MBR_Code_Details/Short Behavior Codes-Table 1.csv",
    ",",
    escape_double = FALSE,
    col_types = cols(`Behavior Code` = col_integer()),
    trim_ws = TRUE,
    skip = 1
  )[, c(1, 3)]
  names(Behav_code) = c("behavior", "beh_name")
  framepersec = 25
}
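#-- Hypothetical driver (an illustration, not part of the original script):
#-- the code above expects these three names to exist before it runs; the
#-- values here are assumptions only.
PMeta <- "metadata/Projects_metadata.csv"   # path to the main metadata file
Name_project <- "Example_project"           # project to analyse
STICK <- "/media/usb"                       # only used when source_data == "USB_stick"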
[GOAL] n : ℕ z : ℂ ⊢ ¬(z ^ n = z + 1 ∧ z ^ n + z ^ 2 = 0) [PROOFSTEP] rintro ⟨h1, h2⟩ [GOAL] case intro n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 ⊢ False [PROOFSTEP] replace h3 : z ^ 3 = 1 [GOAL] case h3 n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 ⊢ z ^ 3 = 1 [PROOFSTEP] linear_combination (1 - z - z ^ 2 - z ^ n) * h1 + (z ^ n - 2) * h2 [GOAL] case intro n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 ⊢ False [PROOFSTEP] have key : z ^ n = 1 ∨ z ^ n = z ∨ z ^ n = z ^ 2 := by rw [← Nat.mod_add_div n 3, pow_add, pow_mul, h3, one_pow, mul_one] have : n % 3 < 3 := Nat.mod_lt n zero_lt_three interval_cases n % 3 <;> simp only [this, pow_zero, pow_one, eq_self_iff_true, or_true_iff, true_or_iff] [GOAL] n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 ⊢ z ^ n = 1 ∨ z ^ n = z ∨ z ^ n = z ^ 2 [PROOFSTEP] rw [← Nat.mod_add_div n 3, pow_add, pow_mul, h3, one_pow, mul_one] [GOAL] n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 ⊢ z ^ (n % 3) = 1 ∨ z ^ (n % 3) = z ∨ z ^ (n % 3) = z ^ 2 [PROOFSTEP] have : n % 3 < 3 := Nat.mod_lt n zero_lt_three [GOAL] n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 this : n % 3 < 3 ⊢ z ^ (n % 3) = 1 ∨ z ^ (n % 3) = z ∨ z ^ (n % 3) = z ^ 2 [PROOFSTEP] interval_cases n % 3 [GOAL] case «0» n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 this : 0 < 3 ⊢ z ^ 0 = 1 ∨ z ^ 0 = z ∨ z ^ 0 = z ^ 2 [PROOFSTEP] simp only [this, pow_zero, pow_one, eq_self_iff_true, or_true_iff, true_or_iff] [GOAL] case «1» n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 this : 1 < 3 ⊢ z ^ 1 = 1 ∨ z ^ 1 = z ∨ z ^ 1 = z ^ 2 [PROOFSTEP] simp only [this, pow_zero, pow_one, eq_self_iff_true, or_true_iff, true_or_iff] [GOAL] case «2» n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 this : 2 < 3 ⊢ z ^ 2 = 1 ∨ z ^ 2 = z ∨ z ^ 2 = z ^ 2 [PROOFSTEP] simp only [this, pow_zero, pow_one, eq_self_iff_true, or_true_iff, true_or_iff] [GOAL] case intro n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 key : z ^ n = 1 ∨ z ^ n = z ∨ z ^ n = z ^ 2 ⊢ False [PROOFSTEP] have z_ne_zero : z ≠ 0 := fun h => zero_ne_one ((zero_pow zero_lt_three).symm.trans (show (0 : ℂ) ^ 3 = 1 from h ▸ h3)) [GOAL] case intro n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 key : z ^ n = 1 ∨ z ^ n = z ∨ z ^ n = z ^ 2 z_ne_zero : z ≠ 0 ⊢ False [PROOFSTEP] rcases key with (key | key | key) [GOAL] case intro.inl n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 z_ne_zero : z ≠ 0 key : z ^ n = 1 ⊢ False [PROOFSTEP] exact z_ne_zero (by rwa [key, self_eq_add_left] at h1 ) [GOAL] n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 z_ne_zero : z ≠ 0 key : z ^ n = 1 ⊢ z = 0 [PROOFSTEP] rwa [key, self_eq_add_left] at h1 [GOAL] case intro.inr.inl n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 z_ne_zero : z ≠ 0 key : z ^ n = z ⊢ False [PROOFSTEP] exact one_ne_zero (by rwa [key, self_eq_add_right] at h1 ) [GOAL] n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 z_ne_zero : z ≠ 0 key : z ^ n = z ⊢ 1 = 0 [PROOFSTEP] rwa [key, self_eq_add_right] at h1 [GOAL] case intro.inr.inr n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 z_ne_zero : z ≠ 0 key : z ^ n = z ^ 2 ⊢ False [PROOFSTEP] exact z_ne_zero (pow_eq_zero (by rwa [key, add_self_eq_zero] at h2 )) [GOAL] n : ℕ z : ℂ h1 : z ^ n = z + 1 h2 : z ^ n + z ^ 2 = 0 h3 : z ^ 3 = 1 z_ne_zero : z ≠ 0 key : z ^ n = z ^ 2 ⊢ 
z ^ ?m.10353 = 0 [PROOFSTEP] rwa [key, add_self_eq_zero] at h2 [GOAL] n : ℕ hn1 : n ≠ 1 ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] by_cases hn0 : n = 0 [GOAL] case pos n : ℕ hn1 : n ≠ 1 hn0 : n = 0 ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] rw [hn0, pow_zero, sub_sub, add_comm, ← sub_sub, sub_self, zero_sub] [GOAL] case pos n : ℕ hn1 : n ≠ 1 hn0 : n = 0 ⊢ Irreducible (-X) [PROOFSTEP] exact Associated.irreducible ⟨-1, mul_neg_one X⟩ irreducible_X [GOAL] case neg n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] have hn : 1 < n := Nat.one_lt_iff_ne_zero_and_ne_one.mpr ⟨hn0, hn1⟩ [GOAL] case neg n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] have hp : (X ^ n - X - 1 : ℤ[X]) = trinomial 0 1 n (-1) (-1) 1 := by simp only [trinomial, C_neg, C_1]; ring [GOAL] n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n ⊢ X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 [PROOFSTEP] simp only [trinomial, C_neg, C_1] [GOAL] n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n ⊢ X ^ n - X - 1 = -1 * X ^ 0 + -1 * X ^ 1 + 1 * X ^ n [PROOFSTEP] ring [GOAL] case neg n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] rw [hp] [GOAL] case neg n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 ⊢ Irreducible (trinomial 0 1 n (-1) (-1) 1) [PROOFSTEP] apply IsUnitTrinomial.irreducible_of_coprime' ⟨0, 1, n, zero_lt_one, hn, -1, -1, 1, rfl⟩ [GOAL] case neg n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 ⊢ ∀ (z : ℂ), ¬(↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 ∧ ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0) [PROOFSTEP] rintro z ⟨h1, h2⟩ [GOAL] case neg.intro n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1 : ↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 h2 : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 ⊢ False [PROOFSTEP] apply X_pow_sub_X_sub_one_irreducible_aux z [GOAL] case neg.intro n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1 : ↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 h2 : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 ⊢ z ^ ?m.21676 = z + 1 ∧ z ^ ?m.21676 + z ^ 2 = 0 n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1 : ↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 h2 : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 ⊢ ℕ [PROOFSTEP] rw [trinomial_mirror zero_lt_one hn (-1 : ℤˣ).ne_zero (1 : ℤˣ).ne_zero] at h2 [GOAL] case neg.intro n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1 : ↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 h2✝ : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 h2 : ↑(aeval z) (trinomial 0 (n - 1 + 0) n ↑1 ↑(-1) ↑(-1)) = 0 ⊢ z ^ ?m.21676 = z + 1 ∧ z ^ ?m.21676 + z ^ 2 = 0 n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1 : ↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 h2 : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 ⊢ ℕ [PROOFSTEP] simp_rw [trinomial, aeval_add, aeval_mul, aeval_X_pow, aeval_C, Units.val_neg, Units.val_one, map_neg, map_one] at h1 h2 [GOAL] case neg.intro n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1✝ : ↑(aeval z) (↑C ↑(-1) * X ^ 0 + ↑C ↑(-1) * X ^ 1 + ↑C ↑1 * X ^ n) = 0 h2✝ : ↑(aeval z) (mirror 
(trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 h1 : -1 * z ^ 0 + -1 * z ^ 1 + 1 * z ^ n = 0 h2 : 1 * z ^ 0 + -1 * z ^ (n - 1 + 0) + -1 * z ^ n = 0 ⊢ z ^ ?m.21676 = z + 1 ∧ z ^ ?m.21676 + z ^ 2 = 0 n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1 : ↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 h2 : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 ⊢ ℕ [PROOFSTEP] replace h1 : z ^ n = z + 1 := by linear_combination h1 [GOAL] n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1✝ : ↑(aeval z) (↑C ↑(-1) * X ^ 0 + ↑C ↑(-1) * X ^ 1 + ↑C ↑1 * X ^ n) = 0 h2✝ : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 h1 : -1 * z ^ 0 + -1 * z ^ 1 + 1 * z ^ n = 0 h2 : 1 * z ^ 0 + -1 * z ^ (n - 1 + 0) + -1 * z ^ n = 0 ⊢ z ^ n = z + 1 [PROOFSTEP] linear_combination h1 [GOAL] case neg.intro n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1✝ : ↑(aeval z) (↑C ↑(-1) * X ^ 0 + ↑C ↑(-1) * X ^ 1 + ↑C ↑1 * X ^ n) = 0 h2✝ : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 h2 : 1 * z ^ 0 + -1 * z ^ (n - 1 + 0) + -1 * z ^ n = 0 h1 : z ^ n = z + 1 ⊢ z ^ ?m.21676 = z + 1 ∧ z ^ ?m.21676 + z ^ 2 = 0 n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1 : ↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 h2 : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 ⊢ ℕ [PROOFSTEP] replace h2 := mul_eq_zero_of_left h2 z [GOAL] case neg.intro n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1✝ : ↑(aeval z) (↑C ↑(-1) * X ^ 0 + ↑C ↑(-1) * X ^ 1 + ↑C ↑1 * X ^ n) = 0 h2✝ : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 h1 : z ^ n = z + 1 h2 : (1 * z ^ 0 + -1 * z ^ (n - 1 + 0) + -1 * z ^ n) * z = 0 ⊢ z ^ ?m.21676 = z + 1 ∧ z ^ ?m.21676 + z ^ 2 = 0 n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1 : ↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 h2 : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 ⊢ ℕ [PROOFSTEP] rw [add_mul, add_mul, add_zero, mul_assoc (-1 : ℂ), ← pow_succ', Nat.sub_add_cancel hn.le] at h2 [GOAL] case neg.intro n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1✝ : ↑(aeval z) (↑C ↑(-1) * X ^ 0 + ↑C ↑(-1) * X ^ 1 + ↑C ↑1 * X ^ n) = 0 h2✝ : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 h1 : z ^ n = z + 1 h2 : 1 * z ^ 0 * z + -1 * z ^ n + -1 * z ^ n * z = 0 ⊢ z ^ ?m.21676 = z + 1 ∧ z ^ ?m.21676 + z ^ 2 = 0 n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1 : ↑(aeval z) (trinomial 0 1 n ↑(-1) ↑(-1) ↑1) = 0 h2 : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 ⊢ ℕ [PROOFSTEP] rw [h1] at h2 ⊢ [GOAL] case neg.intro n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1✝ : ↑(aeval z) (↑C ↑(-1) * X ^ 0 + ↑C ↑(-1) * X ^ 1 + ↑C ↑1 * X ^ n) = 0 h2✝ : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 h1 : z ^ n = z + 1 h2 : 1 * z ^ 0 * z + -1 * (z + 1) + -1 * (z + 1) * z = 0 ⊢ z + 1 = z + 1 ∧ z + 1 + z ^ 2 = 0 [PROOFSTEP] exact ⟨rfl, by linear_combination -h2⟩ [GOAL] n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hn : 1 < n hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 z : ℂ h1✝ : ↑(aeval z) (↑C ↑(-1) * X ^ 0 + ↑C ↑(-1) * X ^ 1 + ↑C ↑1 * X ^ n) = 0 h2✝ : ↑(aeval z) (mirror (trinomial 0 1 n ↑(-1) ↑(-1) ↑1)) = 0 h1 : z ^ n = z + 1 h2 : 1 * z ^ 0 * z + -1 * 
(z + 1) + -1 * (z + 1) * z = 0 ⊢ z + 1 + z ^ 2 = 0 [PROOFSTEP] linear_combination -h2 [GOAL] n : ℕ hn1 : n ≠ 1 ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] by_cases hn0 : n = 0 [GOAL] case pos n : ℕ hn1 : n ≠ 1 hn0 : n = 0 ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] rw [hn0, pow_zero, sub_sub, add_comm, ← sub_sub, sub_self, zero_sub] [GOAL] case pos n : ℕ hn1 : n ≠ 1 hn0 : n = 0 ⊢ Irreducible (-X) [PROOFSTEP] exact Associated.irreducible ⟨-1, mul_neg_one X⟩ irreducible_X [GOAL] case neg n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] have hp : (X ^ n - X - 1 : ℤ[X]) = trinomial 0 1 n (-1) (-1) 1 := by simp only [trinomial, C_neg, C_1]; ring [GOAL] n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 ⊢ X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 [PROOFSTEP] simp only [trinomial, C_neg, C_1] [GOAL] n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 ⊢ X ^ n - X - 1 = -1 * X ^ 0 + -1 * X ^ 1 + 1 * X ^ n [PROOFSTEP] ring [GOAL] case neg n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] have hn : 1 < n := Nat.one_lt_iff_ne_zero_and_ne_one.mpr ⟨hn0, hn1⟩ [GOAL] case neg n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 hn : 1 < n ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] have h := (IsPrimitive.Int.irreducible_iff_irreducible_map_cast ?_).mp (X_pow_sub_X_sub_one_irreducible hn1) [GOAL] case neg.refine_2 n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 hn : 1 < n h : Irreducible (map (Int.castRingHom ℚ) (X ^ n - X - 1)) ⊢ Irreducible (X ^ n - X - 1) [PROOFSTEP] rwa [Polynomial.map_sub, Polynomial.map_sub, Polynomial.map_pow, Polynomial.map_one, Polynomial.map_X] at h [GOAL] case neg.refine_1 n : ℕ hn1 : n ≠ 1 hn0 : ¬n = 0 hp : X ^ n - X - 1 = trinomial 0 1 n (-1) (-1) 1 hn : 1 < n ⊢ IsPrimitive (X ^ n - X - 1) [PROOFSTEP] exact hp.symm ▸ (trinomial_monic zero_lt_one hn).isPrimitive
(* Title: HOL/Metis_Examples/Type_Encodings.thy Author: Jasmin Blanchette, TU Muenchen Example that exercises Metis's (and hence Sledgehammer's) type encodings. *) section \<open> Example that Exercises Metis's (and Hence Sledgehammer's) Type Encodings \<close> theory Type_Encodings imports MainRLT begin declare [[metis_new_skolem]] text \<open>Setup for testing Metis exhaustively\<close> lemma fork: "P \<Longrightarrow> P \<Longrightarrow> P" by assumption ML \<open> val type_encs = ["erased", "poly_guards", "poly_guards?", "poly_guards??", "poly_guards@", "poly_tags", "poly_tags?", "poly_tags??", "poly_tags@", "poly_args", "poly_args?", "raw_mono_guards", "raw_mono_guards?", "raw_mono_guards??", "raw_mono_guards@", "raw_mono_tags", "raw_mono_tags?", "raw_mono_tags??", "raw_mono_tags@", "raw_mono_args", "raw_mono_args?", "mono_guards", "mono_guards?", "mono_guards??", "mono_tags", "mono_tags?", "mono_tags??", "mono_args"] fun metis_exhaust_tac ctxt ths = let fun tac [] st = all_tac st | tac (type_enc :: type_encs) st = st |> ((if null type_encs then all_tac else resolve_tac ctxt @{thms fork} 1) THEN Metis_Tactic.metis_tac [type_enc] ATP_Problem_Generate.combsN ctxt ths 1 THEN COND (has_fewer_prems 2) all_tac no_tac THEN tac type_encs) in tac type_encs end \<close> method_setup metis_exhaust = \<open> Attrib.thms >> (fn ths => fn ctxt => SIMPLE_METHOD (metis_exhaust_tac ctxt ths)) \<close> "exhaustively run Metis with all type encodings" text \<open>Miscellaneous tests\<close> lemma "x = y \<Longrightarrow> y = x" by metis_exhaust lemma "[a] = [Suc 0] \<Longrightarrow> a = 1" by (metis_exhaust last.simps One_nat_def) lemma "map Suc [0] = [Suc 0]" by (metis_exhaust list.map) lemma "map Suc [1 + 1] = [Suc 2]" by (metis_exhaust list.map nat_1_add_1) lemma "map Suc [2] = [Suc (1 + 1)]" by (metis_exhaust list.map nat_1_add_1) definition "null xs = (xs = [])" lemma "P (null xs) \<Longrightarrow> null xs \<Longrightarrow> xs = []" by (metis_exhaust null_def) lemma "(0::nat) + 0 = 0" by (metis_exhaust add_0_left) end
#This module implements `Partition`---Hakaru's replacement for Maple's #endogenous and unwieldy `piecewise`. #The outer data structure for a Partition is a function, PARTITION(...), (just like it #is for piecewise. Partition:= module() option package; # Constructor for Partition pieces global Piece; local Umap := proc(f,x,$) f(op(0,x))( map( p -> Piece(f(condOf(p)),f(valOf(p))), piecesOf(x) ) , `if`(nops(x)>1,map(f,[op(2..-1,x)]),[])[] ) end proc, isPartitionPieceOf := proc( p, elem_t := anything ) type(p, 'Piece(PartitionCond, elem_t)'); end proc, isPartitionOf := proc( e, elem_t := anything ) type(e, 'PARTITION(list(PartitionPiece(elem_t)))' ) or type(e, 'PARTITION(list(PartitionPiece(Or(elem_t,PieceRef))),list(elem_t))' ); end proc, ModuleLoad::static:= proc() ModuleUnload(); :-`print/PARTITION`:= proc() local SetOfRecords, branch; SetOfRecords := piecesOf(PARTITION(args)); `print/%piecewise`( seq([ condOf(eval(branch)), valOf(eval(branch))][], branch= SetOfRecords)) end proc; TypeTools:-AddType(PieceRef, And(specfunc(nonnegint,PieceRef),satisfies(x->nops(x)>0))); TypeTools:-AddType(PartitionCond, {relation, boolean, `::`, specfunc({`And`,`Or`,`Not`}), `and`, `or`, `not`}); TypeTools:-AddType(PartitionPiece, isPartitionPieceOf); TypeTools:-AddType(Partition, isPartitionOf); # global extensions to maple functionality :-`eval/PARTITION` := proc(p, eqs, $) Partition:-Simpl(Umap(x->eval(x,eqs), p)); end proc; # :-`depends/PARTITION` := # proc(parts, nms, $) # local dp := (x -> depends(x, nms)); # # `or`(op ( map(p-> dp(condOf(p)) or dp(valOf(p)), parts) ), dp(x,); # end proc; :-`diff/PARTITION` := proc() local pw, wrt, dpw, r, r0, r1; wrt := args[-1]; pw := PARTITION(args[1..-2]); pw := PartitionToPW(pw); dpw := diff(pw, wrt); r := PWToPartition(dpw, 'do_solve'); r0 := Simpl:-singular_pts(r); # probably a better way to do this; we really only want to simplify # sums and products of integrals and summations r1 := subsindets(r0, algebraic, `simplify`); userinfo(10, 'disint_trace', printf(" input : \n\t%a\n\n" " diff : \n\t%a\n\n" " singular pts : \n\t%a\n\n" " simplified : \n\t%a\n\n\n" , parts, r, r0, r1 )); r1; end proc; :-`simplify/PARTITION` := Simpl; NULL end proc, ModuleUnload::static:= proc() map(proc(x::uneval) try eval(x) catch: NULL; end try end proc, ['TypeTools:-RemoveType(Partition)' ,'TypeTools:-RemoveType(PartitionPiece)' ]); NULL end proc, # abstract out all argument checking for map-like functions map_check := proc(p) local pos, err; if p::indexed then pos:= op(p); if not [pos]::[posint] then err := sprintf("Expected positive integer index; received %a", [pos]); return err; end if else pos:= 1 end if; if nargs-1 <= pos then err := sprintf("Expected at least %d arguments; received %d", pos+1, nargs-1); return err; end if; if not args[pos+2]::Partition then err := sprintf("Expected a Partition; received %a", args[pos+2]); return err; end if; return pos; end proc, extr_conjs := proc(x,$) if x::{specfunc(`And`), `and`} then map(extr_conjs, [op(x)])[]; else x end if end proc; export piecesOf := proc(x, $) local ps, rs; ps := op(1,x); if nops(x)=2 then rs := op(2,x); ps := map(mapPiece(proc(c0,v0) local c,c1,v,s; c,v := c0,v0; s := (x->x); if v::PieceRef then c1 := is_lhs(type,c,name); if c1<>FAIL then s := x->subs(c1,x) end if; c, s(op(op(1,v),rs)) else c, v end if; end proc), ps ); elif has(x,PieceRef) then error "found piece references (%1) but no table of pieces: %2", indets(x,PieceRef), x end if; ps; end proc, condOf := proc(x::specfunc(`Piece`),$) op(1,x); end proc, valOf := 
proc(x::specfunc(`Piece`),$) op(2,x); end proc, mapPiece := proc(f,$) proc(x::PartitionPiece,$) Piece(f(condOf(x), valOf(x))) end proc; end proc, unPiece := mapPiece(ident), ident := proc() args end proc, is_lhs := proc(test,x0) local x := x0; if test(rhs(x),_rest) then x := op(0,x)(rhs(x),lhs(x)) end if; if test(lhs(x),_rest) then return x else return FAIL end if; end proc, # This is an alternate (to PARTITION) constructor for partition, which has the # same call convention as piecewise, except there are no implicit cases. if # there is an otherwise case, its condition is the conjunction of negations of # the other conditions. ModuleApply := proc()::Partition; local ps, as, ops_r; if nargs=0 then error "empty partition"; end if; ps := [args]; ops_r := iquo(nops(ps),2); if nops(ps)::odd then ps := [op(1..-2,ps), Not(bool_And(seq(op(2*i-1,ps),i=1..ops_r))), op(-1,ps)]; ops_r := ops_r+1; end if; ps := [seq(Piece(op(2*i-1,ps),op(2*i,ps)),i=1..ops_r)]; PARTITION(ps); end proc, Pieces := proc(cs0,es0)::list(PartitionPiece); local es, cs; es := `if`(es0::{set,list},x->x,x->{x})(es0); cs := `if`(cs0::{set,list,specfunc(`And`),`and`},x->x,x->{x})(cs0); [seq(seq(Piece(c,e),c=cs),e=es)]; end proc, #This is just `map` for Partitions. #Allow additional args, just like `map` Pmap::static:= proc(f)::Partition; local pair,pos,res; res := map_check(procname, args); if res::string then error res else pos := res; end if; PARTITION([seq( Piece(condOf(pair) ,f(args[2..pos], valOf(pair), args[pos+2..]) ),pair= piecesOf(args[pos+1]))]) end proc, # a more complex mapping combinator which works on all 3 parts # not fully general, but made to work with KB # also, does not handle extra arguments (on purpose!) Amap::static:= proc( funcs::[anything, anything, anything], #`appliable` not inclusive enough. 
part::Partition)::Partition; local pair,pos,f,g,h,doIt; (f,g,h) := op(funcs); #sigh, we don't have a decent 'let', need to use a local proc doIt := proc(pair) local kb0 := h(condOf(pair)); Piece( f(condOf(pair), kb0), g(valOf(pair), kb0)); end proc; PARTITION(map(doIt,piecesOf(part))); end proc, Foldr := proc( cons, nil, prt :: Partition, $ ) foldr(proc(p, x) cons(condOf(p), valOf(p), x); end proc,nil,op(piecesOf(prt))); end proc, Case := proc(ty,f,g) proc(x) if x::ty then f(x) else g(x) end if end proc end proc, Foldr_mb := proc(cons,nil,prt) Case(Partition, x->Foldr(cons,nil,x), x->cons(true,x,nil)) end proc, PartitionToPW_mb := Case(Partition, PartitionToPW, x->x), PWToPartition_mb := Case(specfunc(piecewise), PWToPartition, x->x), PartitionToPW := module() export ModuleApply; local pw_cond_ctx; ModuleApply := proc(x::Partition, $) local parts := piecesOf(x); if nops(parts) = 1 and is(op([1,1],parts)) then return op([1,2], parts) end if; parts := foldl(pw_cond_ctx, [ [], {} ], op(parts) ); parts := [seq([condOf(p), valOf(p)][], p=op(1,parts))]; if op(-2, parts) :: identical(true) then parts := subsop(-2=NULL, parts); end if; piecewise(op(parts)); end proc; pw_cond_ctx := proc(ctx_, p, $) local ps, ctx, ctxN, cond, ncond; ps, ctx := op(ctx_); cond := condOf(p); cond := {extr_conjs(cond)}; cond := remove(x->x in ctx, cond); ncond := `if`(nops(cond)=1 , KB:-negate_rel(op(1,cond)) , `Not`(bool_And(op(cond))) ); cond := bool_And(op(cond)); ctx := ctx union { ncond }; [ [ op(ps), Piece(cond, valOf(p)) ], ctx ] end proc; end module, isShape := kind -> module() option record; export MakeCtx := proc(p0,_rec) local p := p0, pw, p1, wps, ws, vs, cs, w, ps; if kind='piecewise' then p := Partition:-PWToPartition(p); end if; w, p1 := Partition:-Simpl:-single_nonzero_piece(p); if not w :: identical(true) or p1 <> p then [ w, p1 ] else ps := piecesOf(p); wps := map(_rec@valOf, ps); ws, vs, cs := map2(op, 1, wps), map2(op, 2, wps), map(condOf, ps); if nops(vs) > 0 and andmap(v->op(1,vs)=v, vs) and ormap(a->a<>{}, ws) then [ `bool_Or`( op( zip(`bool_And`, ws, cs) ) ) , op(1,vs) ]; else [ true, p0 ]; end if; end if; end proc; export MapleType := `if`(kind=`PARTITION`,'Partition','specfunc(piecewise)'); end module, # convert a piecewise to a partition, which is straightforward except: # - if any of the branches are unreachable, they are removed # - if the last clause is (implicitly) `otherwise`, that clause is filled in # appropriately # note that if the piecewise does not cover the entire domain, # then this Partition will be 'invalid' (in the sense that it also # will not cover the entire domain) - the 'correct' thing to do would # probably be to add a new clause whose value is 'undefined' # the logic of this function is already essentially implemented, by KB # in fact, kb_piecewise does something extremely similar to this PWToPartition := proc(x::specfunc(piecewise))::Partition; # each clause evaluated under the context so far, which is the conjunction # of the negations of all clauses so far local ctx := true, n := nops(x), cls := [], cnd, ncnd, i, q, ctxC, cl; # handles all but the `otherwise` case if there is such a case for i in seq(q, q = 1 .. 
iquo(n, 2)) do cnd := op(2*i-1,x); # the clause as given # if this clause is unreachable, then every subsequent clause will be as well if ctx :: identical(false) then return PARTITION( cls ); else ctxC := `And`(cnd, ctx); # the condition, along with the context (which is implicit in pw) ctxC := Simpl:-condition(ctxC, _rest); if cnd :: `=` then ncnd := lhs(cnd) <> rhs(cnd); else ncnd := Not(cnd) end if; ctx := `And`(ncnd, ctx); # the context for the next clause if ctx :: identical(false,[]) then # this clause is actually unreachable return(PARTITION(cls)); else cls := [ op(cls), op(Pieces(ctxC,[op(2*i,x)])) ]; end if; end if; end do; # if there is an otherwise case, handle that. if n::odd then ctx := Simpl:-condition(ctx, _rest); if not ctx :: identical(false,[]) then cls := [ op(cls), op(Pieces(ctx,[op(n,x)])) ]; end if; end if; if nops(cls) = 0 then WARNING("PWToPartition: the piecewise %1 produced an empty partition", x); return 0; end if; PARTITION( cls ); end proc, # applies a function to the arg if arg::Partition, # and if arg::piecewise, then converts the piecewise to a partition, # applies the function, then converts back to piecewise # this mainly acts as a sanity check AppPartOrPw := proc(f,x::Or(Partition,specfunc(piecewise))) if x::Partition then f(x); else PartitionToPW(f(PWToPartition(x))) end if; end proc, #Check whether the conditions of a Partition depend on any of a set of names. ConditionsDepend:= proc(P::Partition, V::{name, list(name), set(name)}, $) local p; for p in piecesOf(P) do if depends(condOf(p), V) then return true end if end do; false end proc, # The cartesian product of two Partitions PProd := proc(p0::Partition,p1::Partition,{_add := `+`})::Partition; local ps0, ps1, cs, rs, rs0, rs1; ps0,ps1 := map(ps -> sort(piecesOf(ps), key=(z->condOf(z))), [p0,p1])[]; cs := zip(proc(p0,p1) if condOf(p0)=condOf(p1) then Piece(condOf(p0),_add(valOf(p0),valOf(p1))) ; else [p0,p1]; end if; end proc, ps0,ps1); rs, cs := selectremove(c->type(c,list),cs); rs0, rs1 := map(k->map(r->op(k,r),rs),[1,2])[]; rs := map(r0->map(r1-> Piece( bool_And(condOf(r0),condOf(r1)), _add(valOf(r0),valOf(r1)) ) ,rs1)[],rs0); PARTITION([op(cs),op(rs)]); end proc, Simpl := module() export ModuleApply := proc(p) local ps, qs, qs1, mk; if p :: Partition then reduce_branches(remove_false_pieces(flatten(p))); elif assigned(distrib_op_Partition[op(0,p)]) then mk := distrib_op_Partition[op(0,p)]; ps := [op(p)]; ps := map(x->Simpl(x,_rest), ps); ps, qs := selectremove(type, ps, Partition); if nops(ps)=0 then return p end if; mk(op(qs),foldr(((a,b)->Partition:-PProd(a,b,_add=mk)),op(ps))); else subsindets(p,{Partition,indices(distrib_op_Partition,nolist)},x->Simpl(x,_rest)); end if; end proc; local distrib_op_Partition := table([`+`=`+`,`*`=`*`]); export flatten := module() export ModuleApply; local unpiece, unpart, unpartProd; ModuleApply := proc(pr0, { _with := [ 'Partition', unpart ] } ) local ty, un_ty, pr1, pr2; ty, un_ty := op(_with); pr1 := subsindets(pr0, ty, un_ty); if pr0 <> pr1 then ModuleApply(pr1, _with=[ Or('Partiton',`*`), unpartProd ]); else pr0 end if; end proc; # like Piece, but tries to not be a Piece unpiece := proc(c, pr, $) if pr :: Partition then map(q -> applyop(z->bool_And(z,c),1,q), piecesOf(pr))[] else Piece(c, pr) end if end proc; unpart := proc(pr, $) if pr :: Partition then PARTITION(map(q -> unpiece(condOf(q),valOf(q)), piecesOf(pr))) else pr end if end proc: unpartProd := proc(pr, $) local ps, ws; if pr :: `*` then ps, ws := selectremove(q->type(q,Partition), [op(pr)]); if 
nops(ps) = 1 then Pmap(x->`*`(op(ws),x), unpartProd(piecesOf(pr))); else pr end if; else unpart(pr) end if; end proc; end module; export single_nonzero_piece_cps := proc(k) local r,p; r, p := single_nonzero_piece(_rest); if r :: identical(true) then args[2] else k(r, p); end if; end proc; export single_nonzero_piece := proc(e, { _testzero := Testzero }) local zs, nzs; if e :: Partition then zs, nzs := selectremove(p -> _testzero(valOf(p)), piecesOf(e)); if nops(nzs) = 1 then return condOf(op(1,nzs)) , valOf(op(1,nzs)) end if; end if; true, e end proc; export remove_false_pieces := proc(e::Partition, $) PARTITION(remove(p -> type(KB:-assert(condOf(p), KB:-empty), t_not_a_kb), piecesOf(e))); end proc; local `&on` := proc(f,k,$) proc(a,b,$) f(k(a),k(b)) end proc end proc; local condition_complexity := proc(x) nops(indets(x,PartitionCond)) end proc; export reduce_branches := proc(e::Partition, { _testequal := ((a,b) -> Testzero(a-b)) }) local vs, ps1, ps; ps := piecesOf(e); vs := map(valOf,ps); userinfo(3, :-reduce_branches, printf("Input: %a\n", ps)); if nops(ps)=1 then return op([1,2],ps); end if; ps1 := [ListTools:-Categorize(_testequal &on valOf, ps)]; if nops(ps1) >= nops(ps) then return e; end if; userinfo(3, :-reduce_branches, printf("Categorize: %a\n", ps1)); ps1 := map(p->Piece(bool_Or(condition(bool_Or(map(condOf,p)[]), 'do_solve', 'do_kb')[]) ,valOf(op(1,p))) ,ps1); userinfo(3, :-reduce_branches, printf("condition: %a\n", ps1)); if nops(ps1)=2 then ps1 := sort(ps1, key=tree_size); ps1 := subsop([2,1]=Not(op([1,1],ps1)),ps1); end if; return PARTITION(ps1); end proc; # Removal of singular points from partitions export singular_pts := module() # we can simplify the pieces which are equalities and whose LHS or # RHS is a name. local canSimp := c -> condOf(c) :: `=` and (lhs(condOf(c)) :: name or rhs(condOf(c)) :: name); # determines if a given variable `t' has the given upper/lower `bnd'. 
local mentions_t_hi := t -> bnd -> cl -> has(cl, t<bnd) or has(cl, bnd>t); local mentions_t_lo := t -> bnd -> cl -> has(cl, bnd<t) or has(cl, t>bnd); # replace bounds with what we would get if the equality can be # integrated into other pieces local replace_with := t -> bnd -> x -> if mentions_t_hi(t)(bnd)(x) then t <= bnd elif mentions_t_lo(t)(bnd)(x) then t >= bnd else x end if; local set_xor := ((a,b)->(a union b) intersect (a minus b)); # this loops over the pieces to replace, keeping a state consisting of # the "rest" of the pieces local tryReplacePieces := proc(replPieces, otherPieces,cmp,$) local rpp := replPieces, otp := otherPieces, nm, val, rp, rpv; for rp in rpp do rp, rpv := op(rp); nm := `if`(lhs(rp)::name, lhs(rp), rhs(rp)); val := `if`(lhs(rp)::name, rhs(rp), lhs(rp)); otp := tryReplacePiece( nm, val, rpv, otp, cmp ) end do; otp; end proc; local eval_IntSum := proc(r,ev,$) local q,vs,body,BODY,range,mk,mk_e; if r::specfunc({Int,Sum}) then mk,body,range := op([0..-1],r); mk_e := `if`(mk=Int,'int','sum'); q := mk_e( BODY(body), eval(range,ev) ); if not(q :: specfunc(mk_e)) and not has(q,BODY) then return q end if; end if; eval(r,ev); end proc; local do_eval_for_cmp := proc(eval_cmp, ev, x, $) local r; try r := eval_cmp(eval_IntSum(x,ev)); userinfo(3, 'Partition', printf("evaluating\n\tsubs(%a,%a)\n\tproduced %a\n",x,ev,r)): catch "numeric exception: division by zero": r := eval_cmp(limit(x, ev)); userinfo(3, 'Partition', printf("evaluating\n\tlimit(%a,%a)\n\tproduced %a\n",x,ev,r)): end try; subsindets(r,And(specfunc(NewSLO:-applyintegrand),anyfunc(anything, identical(0))),_->0); end proc; local tryReplacePiece := proc(vrNm, vrVal, pc0val, pcs, eval_cmp,$) local pcs0 := pcs, pcs1, qs0, qs1, qs2, vrEq := vrNm=vrVal, vs2, ret, q, q_i; ret := [ Piece(vrEq, pc0val), op(pcs0) ] ; # speculatively replace the conditions pcs1 := subsindets(pcs0, relation, replace_with(vrNm)(vrVal)); # convert to sets and take the "set xor", which will contain # only those elements which are not common to both sets. qs0, qs1 := seq({op(qs)},qs=(pcs0,pcs1)); qs2 := set_xor(qs1, qs0); # if we have updated precisely two pieces (an upper and lower bound) if nops(qs2) = 2 then # get the values of those pieces, and the value of the # piece to be replaced, if that isn't undefined vs2 := map(valOf, qs2); if not pc0val :: identical('undefined') then vs2 := { pc0val, op(vs2) }; end if; # substitute the equality over the piece values vs2 := map(x->do_eval_for_cmp(eval_cmp,vrEq,x), vs2); # if they are identically equal, return the original # "guess" if nops(vs2) = 1 then # arbitrarily pick the first candidate to be the one to # be replaced q := op(1,qs2); q_i := seq(`if`(op(i,pcs1)=q,[i],[])[],i=1..nops(pcs1)); ret := subsop(q_i=op(q_i,pcs0), pcs1); end if; end if; ret; end proc; export ModuleApply := proc(p_,{eval_cmp:='value'},$) local p := p_, r := p, uc, oc; # if the partition contains case of the form `x = t', where `t' is a # constant (or term??) 
and `x' is a variable, and the value of that # case is `undefined', then we may be able to eliminate it (if another # case includes that point) r := piecesOf(r); uc, oc := selectremove(canSimp, r); PARTITION(tryReplacePieces(uc, oc, eval_cmp)); end proc; end module; # singular_pts export condition := module() local is_extra_sol := x -> (x :: `=` and rhs(x)=lhs(x) and lhs(x) :: name); local postproc_for_solve := proc(ctx, ctxSlv) ::{identical(false), list({boolean,relation,specfunc(boolean,And),`and`(boolean)})}; local ctxC := ctxSlv; if ctxC = [] then ctxC := [] ; elif nops(ctxC)> 1 then ctxC := map(x -> postproc_for_solve(ctx, [x], _rest)[], ctxC); elif nops(ctxC) = 1 then ctxC := op(1,ctxC); if ctxC :: set then if ctxC :: identical('{}') then ctxC := NULL; else ctxC := remove(is_extra_sol, ctxC); ctxC := bool_And(op(ctxC)); end if ; ctxC := [ctxC]; elif ctxC :: specfunc('piecewise') then ctxC := PWToPartition(ctxC, _rest); ctxC := [ seq( map( o -> condOf(c) and o , postproc_for_solve(ctx, valOf(c), _rest))[] , c=piecesOf(ctxC) )] ; else ctxC := FAIL; end if; else ctxC := FAIL; end if; if ctxC = FAIL then error "don't know what to do with %1", ctxSlv; else ctxC; end if; end proc; export ModuleApply := proc(ctx)::list(PartitionCond); local ctxC, ctxC1, ctxC_c, ctxC1_c; ctxC := ctx; if ctx :: identical(true) then error "Simpl:-condition: don't know what to do with %1", ctxC; elif condition_complexity(ctx)=1 then return [ctx]; end if; if 'do_kb' in {_rest} then ctxC1 := KB:-assert( ctxC, KB:-empty ); ctxC1 := KB:-kb_to_constraints(ctxC1); ctxC1 := bool_And(op(ctxC1)); ctxC1_c, ctxC_c := map(condition_complexity, [ctxC1,ctxC])[]; if ctxC1_c < ctxC_c then ctxC := ctxC1; if ctxC1_c = 1 then return [ctxC]; end if; end if; end if; ctxC := KB:-chill(ctxC); if 'do_solve' in {_rest} and _Env_HakaruSolve<>false then ctxC := solve({ctxC}, 'useassumptions'=true); if ctxC = NULL and indets(ctx, specfunc(`exp`)) <> {} then ctxC := [ctx]; else ctxC := postproc_for_solve(ctx, [ctxC], _rest); end if; if indets(ctxC, specfunc({`Or`, `or`})) <> {} then userinfo(10, 'Simpl:-condition', printf("output: \n" " %a\n\n" , ctxC )); end if; else ctxC := Domain:-simpl_relation(ctxC, norty='DNF'); ctxC := eval(ctxC,`And`=bool_And); if ctxC :: specfunc(`Or`) then ctxC := [op(ctxC)] else ctxC := [ctxC]; end if; end if; if 'no_split_disj' in {_rest} then ctxC := [ bool_Or(op(ctxC)) ]; end if; KB:-warm(ctxC); end proc; end module; #Simpl:-condition end module, #Simpl SamePartition := proc(eqCond, eqPart, p0 :: Partition, p1 :: Partition, $) local ps0, ps1, pc0, the; ps0, ps1 := map(piecesOf,[p0,p1])[]; if nops(ps0) <> nops(ps1) then return false end if; for pc0 in ps0 do the, ps1 := selectremove(pc1 -> eqCond( condOf(pc1), condOf(pc0) ) and eqPart( valOf(pc1) , valOf(pc0) ) ,ps1); if nops(the) <> 1 then return false; end if; end do; true; end proc ; uses Hakaru; end module:
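# A minimal usage sketch (an illustration, not part of the package source;
# assumes the package above is loaded and `x` is an unassigned name).
# `Partition` accepts piecewise-style arguments; a trailing odd argument is
# the `otherwise` branch, whose condition is filled in as the negated
# conjunction of the other conditions.
p := Partition(x < 0, -x, x):       # second piece gets condition Not(x < 0)
parts := Partition:-piecesOf(p):    # list of Piece(condition, value) records
pw := Partition:-PartitionToPW(p):  # convert back to Maple's native piecewise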
# Make sure JuliaFormatter is available, installing it on demand.
# (A missing package surfaces as an ArgumentError at top level, or as a
# LoadError when this file is `include`d, so both are handled.)
try
    using JuliaFormatter
catch error
    if isa(error, ArgumentError) || isa(error, LoadError)
        using Pkg
        Pkg.add("JuliaFormatter")
        using JuliaFormatter
    else
        rethrow()
    end
end

# Format all Julia source files under the current directory in place.
JuliaFormatter.format(".")
(* Title: Partial_Function_Set.thy Author: Andreas Lochbihler, ETH Zurich *) theory Partial_Function_Set imports Main begin subsection \<open>Setup for \<open>partial_function\<close> for sets\<close> lemma (in complete_lattice) lattice_partial_function_definition: "partial_function_definitions (\<le>) Sup" by(unfold_locales)(auto intro: Sup_upper Sup_least) interpretation set: partial_function_definitions "(\<subseteq>)" Union by(rule lattice_partial_function_definition) lemma set_admissible: "set.admissible (\<lambda>f :: 'a \<Rightarrow> 'b set. \<forall>x y. y \<in> f x \<longrightarrow> P x y)" by(rule ccpo.admissibleI)(auto simp add: fun_lub_Sup) abbreviation "mono_set \<equiv> monotone (fun_ord (\<subseteq>)) (\<subseteq>)" lemma fixp_induct_set_scott: fixes F :: "'c \<Rightarrow> 'c" and U :: "'c \<Rightarrow> 'b \<Rightarrow> 'a set" and C :: "('b \<Rightarrow> 'a set) \<Rightarrow> 'c" and P :: "'b \<Rightarrow> 'a \<Rightarrow> bool" and x and y assumes mono: "\<And>x. mono_set (\<lambda>f. U (F (C f)) x)" and eq: "f \<equiv> C (ccpo.fixp (fun_lub Sup) (fun_ord (\<le>)) (\<lambda>f. U (F (C f))))" and inverse2: "\<And>f. U (C f) = f" and step: "\<And>f x y. \<lbrakk> \<And>x y. y \<in> U f x \<Longrightarrow> P x y; y \<in> U (F f) x \<rbrakk> \<Longrightarrow> P x y" and enforce_variable_ordering: "x = x" and elem: "y \<in> U f x" shows "P x y" using step elem set.fixp_induct_uc[of U F C, OF mono eq inverse2 set_admissible, of P] by blast lemma fixp_Sup_le: defines "le \<equiv> ((\<le>) :: _ :: complete_lattice \<Rightarrow> _)" shows "ccpo.fixp Sup le = ccpo_class.fixp" proof - have "class.ccpo Sup le (<)" unfolding le_def by unfold_locales thus ?thesis by(simp add: ccpo.fixp_def fixp_def ccpo.iterates_def iterates_def ccpo.iteratesp_def iteratesp_def fun_eq_iff le_def) qed lemma fun_ord_le: "fun_ord (\<le>) = (\<le>)" by(auto simp add: fun_ord_def fun_eq_iff le_fun_def) lemma fixp_induct_set: fixes F :: "'c \<Rightarrow> 'c" and U :: "'c \<Rightarrow> 'b \<Rightarrow> 'a set" and C :: "('b \<Rightarrow> 'a set) \<Rightarrow> 'c" and P :: "'b \<Rightarrow> 'a \<Rightarrow> bool" and x and y assumes mono: "\<And>x. mono_set (\<lambda>f. U (F (C f)) x)" and eq: "f \<equiv> C (ccpo.fixp (fun_lub Sup) (fun_ord (\<le>)) (\<lambda>f. U (F (C f))))" and inverse2: "\<And>f. U (C f) = f" and step: "\<And>f' x y. \<lbrakk> \<And>x. U f' x = U f' x; y \<in> U (F (C (inf (U f) (\<lambda>x. {y. P x y})))) x \<rbrakk> \<Longrightarrow> P x y" \<comment> \<open>partial\_function requires a quantifier over f', so let's have a fake one\<close> and elem: "y \<in> U f x" shows "P x y" proof - from mono have mono': "mono (\<lambda>f. U (F (C f)))" by(simp add: fun_ord_le mono_def le_fun_def) hence eq': "f \<equiv> C (lfp (\<lambda>f. U (F (C f))))" using eq unfolding fun_ord_le fun_lub_Sup fixp_Sup_le by(simp add: lfp_eq_fixp) let ?f = "C (lfp (\<lambda>f. U (F (C f))))" have step': "\<And>x y. \<lbrakk> y \<in> U (F (C (inf (U ?f) (\<lambda>x. {y. P x y})))) x \<rbrakk> \<Longrightarrow> P x y" unfolding eq'[symmetric] by(rule step[OF refl]) let ?P = "\<lambda>x. {y. P x y}" from mono' have "lfp (\<lambda>f. 
U (F (C f))) \<le> ?P" by(rule lfp_induct)(auto intro!: le_funI step' simp add: inverse2) with elem show ?thesis by(subst (asm) eq')(auto simp add: inverse2 le_fun_def) qed declaration \<open>Partial_Function.init "set" @{term set.fixp_fun} @{term set.mono_body} @{thm set.fixp_rule_uc} @{thm set.fixp_induct_uc} (SOME @{thm fixp_induct_set})\<close> partial_function (set) test :: "'a list \<Rightarrow> nat \<Rightarrow> bool \<Rightarrow> int set" where "test xs i j = insert 4 (test [] 0 j \<union> test [] 1 True \<inter> test [] 2 False - {5} \<union> uminus ` test [undefined] 0 True \<union> uminus -` test [] 1 False)" interpretation coset: partial_function_definitions "(\<supseteq>)" Inter by(rule complete_lattice.lattice_partial_function_definition[OF dual_complete_lattice]) lemma fun_lub_Inf: "fun_lub Inf = (Inf :: _ \<Rightarrow> _ :: complete_lattice)" by(auto simp add: fun_lub_def fun_eq_iff Inf_fun_def intro: Inf_eqI INF_lower INF_greatest) lemma fun_ord_ge: "fun_ord (\<ge>) = (\<ge>)" by(auto simp add: fun_ord_def fun_eq_iff le_fun_def) lemma coset_admissible: "coset.admissible (\<lambda>f :: 'a \<Rightarrow> 'b set. \<forall>x y. P x y \<longrightarrow> y \<in> f x)" by(rule ccpo.admissibleI)(auto simp add: fun_lub_Inf) abbreviation "mono_coset \<equiv> monotone (fun_ord (\<supseteq>)) (\<supseteq>)" lemma gfp_eq_fixp: fixes f :: "'a :: complete_lattice \<Rightarrow> 'a" assumes f: "monotone (\<ge>) (\<ge>) f" shows "gfp f = ccpo.fixp Inf (\<ge>) f" proof (rule antisym) from f have f': "mono f" by(simp add: mono_def monotone_def) interpret ccpo Inf "(\<ge>)" "mk_less (\<ge>) :: 'a \<Rightarrow> _" by(rule ccpo)(rule complete_lattice.lattice_partial_function_definition[OF dual_complete_lattice]) show "ccpo.fixp Inf (\<ge>) f \<le> gfp f" by(rule gfp_upperbound)(subst fixp_unfold[OF f], rule order_refl) show "gfp f \<le> ccpo.fixp Inf (\<ge>) f" by(rule fixp_lowerbound[OF f])(subst gfp_unfold[OF f'], rule order_refl) qed lemma fixp_coinduct_set: fixes F :: "'c \<Rightarrow> 'c" and U :: "'c \<Rightarrow> 'b \<Rightarrow> 'a set" and C :: "('b \<Rightarrow> 'a set) \<Rightarrow> 'c" and P :: "'b \<Rightarrow> 'a \<Rightarrow> bool" and x and y assumes mono: "\<And>x. mono_coset (\<lambda>f. U (F (C f)) x)" and eq: "f \<equiv> C (ccpo.fixp (fun_lub Inter) (fun_ord (\<ge>)) (\<lambda>f. U (F (C f))))" and inverse2: "\<And>f. U (C f) = f" and step: "\<And>f' x y. \<lbrakk> \<And>x. U f' x = U f' x; \<not> P x y \<rbrakk> \<Longrightarrow> y \<in> U (F (C (sup (\<lambda>x. {y. \<not> P x y}) (U f)))) x" \<comment> \<open>partial\_function requires a quantifier over f', so let's have a fake one\<close> and elem: "y \<notin> U f x" shows "P x y" using elem proof(rule contrapos_np) have mono': "monotone (\<ge>) (\<ge>) (\<lambda>f. U (F (C f)))" and mono'': "mono (\<lambda>f. U (F (C f)))" using mono by(simp_all add: monotone_def fun_ord_def le_fun_def mono_def) hence eq': "U f = gfp (\<lambda>f. U (F (C f)))" by(subst eq)(simp add: fun_lub_Inf fun_ord_ge gfp_eq_fixp inverse2) let ?P = "\<lambda>x. {y. \<not> P x y}" have "?P \<le> gfp (\<lambda>f. 
U (F (C f)))" using mono'' by(rule coinduct)(auto intro!: le_funI dest: step[OF refl] simp add: eq') moreover assume "\<not> P x y" ultimately show "y \<in> U f x" by(auto simp add: le_fun_def eq') qed declaration \<open>Partial_Function.init "coset" @{term coset.fixp_fun} @{term coset.mono_body} @{thm coset.fixp_rule_uc} @{thm coset.fixp_induct_uc} (SOME @{thm fixp_coinduct_set})\<close> abbreviation "mono_set' \<equiv> monotone (fun_ord (\<supseteq>)) (\<supseteq>)" lemma [partial_function_mono]: shows insert_mono': "mono_set' A \<Longrightarrow> mono_set' (\<lambda>f. insert x (A f))" and UNION_mono': "\<lbrakk>mono_set' B; \<And>y. mono_set' (\<lambda>f. C y f)\<rbrakk> \<Longrightarrow> mono_set' (\<lambda>f. \<Union>y\<in>B f. C y f)" and set_bind_mono': "\<lbrakk>mono_set' B; \<And>y. mono_set' (\<lambda>f. C y f)\<rbrakk> \<Longrightarrow> mono_set' (\<lambda>f. Set.bind (B f) (\<lambda>y. C y f))" and Un_mono': "\<lbrakk> mono_set' A; mono_set' B \<rbrakk> \<Longrightarrow> mono_set' (\<lambda>f. A f \<union> B f)" and Int_mono': "\<lbrakk> mono_set' A; mono_set' B \<rbrakk> \<Longrightarrow> mono_set' (\<lambda>f. A f \<inter> B f)" unfolding bind_UNION by(fast intro!: monotoneI dest: monotoneD)+ context begin private partial_function (coset) test2 :: "nat \<Rightarrow> nat set" where "test2 x = insert x (test2 (Suc x))" private lemma test2_coinduct: assumes "P x y" and *: "\<And>x y. P x y \<Longrightarrow> y = x \<or> (P (Suc x) y \<or> y \<in> test2 (Suc x))" shows "y \<in> test2 x" using \<open>P x y\<close> apply(rule contrapos_pp) apply(erule test2.raw_induct[rotated]) apply(simp add: *) done end end
/** * Copyright (C) 2019-present MongoDB, Inc. * * This program is free software: you can redistribute it and/or modify * it under the terms of the Server Side Public License, version 1, * as published by MongoDB, Inc. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * Server Side Public License for more details. * * You should have received a copy of the Server Side Public License * along with this program. If not, see * <http://www.mongodb.com/licensing/server-side-public-license>. * * As a special exception, the copyright holders give permission to link the * code of portions of this program with the OpenSSL library under certain * conditions as described in each individual source file and distribute * linked combinations including the program with the OpenSSL library. You * must comply with the Server Side Public License in all respects for * all of the code used other than as permitted herein. If you modify file(s) * with this exception, you may extend this exception to your version of the * file(s), but you are not obligated to do so. If you do not wish to do so, * delete this exception statement from your version. If you delete this * exception statement from all source files in the program, then also delete * it in the license file. */ #include "mongo/logv2/plain_formatter.h" #include "mongo/bson/bsonobj.h" #include "mongo/logv2/attribute_storage.h" #include "mongo/logv2/attributes.h" #include "mongo/logv2/constants.h" #include "mongo/stdx/variant.h" #include "mongo/util/str_escape.h" #include <boost/container/small_vector.hpp> #include <boost/log/attributes/value_extraction.hpp> #include <boost/log/expressions/message.hpp> #include <boost/log/utility/formatting_ostream.hpp> #include <any> #include <deque> #include <fmt/format.h> namespace mongo::logv2 { namespace { struct TextValueExtractor { void operator()(const char* name, CustomAttributeValue const& val) { if (val.stringSerialize) { fmt::memory_buffer buffer; val.stringSerialize(buffer); _addString(name, fmt::to_string(buffer)); } else if (val.toString) { _addString(name, val.toString()); } else if (val.BSONAppend) { BSONObjBuilder builder; val.BSONAppend(builder, name); BSONElement element = builder.done().getField(name); _addString(name, element.toString(false)); } else if (val.BSONSerialize) { BSONObjBuilder builder; val.BSONSerialize(builder); operator()(name, builder.done()); } else if (val.toBSONArray) { operator()(name, val.toBSONArray()); } } void operator()(const char* name, const BSONObj& val) { _addString(name, val.jsonString(JsonStringFormat::ExtendedRelaxedV2_0_0)); } void operator()(const char* name, const BSONArray& val) { _addString(name, val.jsonString(JsonStringFormat::ExtendedRelaxedV2_0_0, 0, true)); } template <typename Period> void operator()(const char* name, const Duration<Period>& val) { _addString(name, val.toString()); } template <typename T> void operator()(const char* name, const T& val) { _add(name, val); } const auto& args() const { return _args; } void reserve(std::size_t sz) { _args.reserve(sz, sz); } private: /** * Workaround for `dynamic_format_arg_store`'s desire to copy string * values and user-defined values. 
*/ static auto _wrapValue(StringData val) { return std::string_view{val.rawData(), val.size()}; } template <typename T> static auto _wrapValue(const T& val) { return std::cref(val); } template <typename T> void _add(const char* name, const T& val) { // Store our own fmt::arg results in a container of std::any, // and give reference_wrappers to _args. This avoids a string // copy of the 'name' inside _args. _args.push_back(std::cref(_store(fmt::arg(name, _wrapValue(val))))); } void _addString(const char* name, std::string&& val) { _add(name, StringData{_store(std::move(val))}); } template <typename T> const T& _store(T&& val) { return std::any_cast<const T&>(_storage.emplace_back(std::forward<T>(val))); } std::deque<std::any> _storage; fmt::dynamic_format_arg_store<fmt::format_context> _args; }; } // namespace void PlainFormatter::operator()(boost::log::record_view const& rec, fmt::memory_buffer& buffer) const { using boost::log::extract; StringData message = extract<StringData>(attributes::message(), rec).get(); const auto& attrs = extract<TypeErasedAttributeStorage>(attributes::attributes(), rec).get(); // Log messages logged via logd are already formatted and have the id == 0 if (attrs.empty()) { if (extract<int32_t>(attributes::id(), rec).get() == 0) { buffer.append(message.begin(), message.end()); return; } } TextValueExtractor extractor; extractor.reserve(attrs.size()); attrs.apply(extractor); fmt::vformat_to(buffer, std::string_view{message.rawData(), message.size()}, extractor.args()); size_t attributeMaxSize = buffer.size(); if (extract<LogTruncation>(attributes::truncation(), rec).get() == LogTruncation::Enabled) { if (_maxAttributeSizeKB) attributeMaxSize = _maxAttributeSizeKB->loadRelaxed() * 1024; else attributeMaxSize = constants::kDefaultMaxAttributeOutputSizeKB * 1024; } buffer.resize(std::min(attributeMaxSize, buffer.size())); if (StringData sd(buffer.data(), buffer.size()); sd.endsWith("\n"_sd)) buffer.resize(buffer.size() - 1); } void PlainFormatter::operator()(boost::log::record_view const& rec, boost::log::formatting_ostream& strm) const { fmt::memory_buffer buffer; operator()(rec, buffer); strm.write(buffer.data(), buffer.size()); strm.put(boost::log::formatting_ostream::char_type('\n')); } } // namespace mongo::logv2
' The file "table1_reg_on_indicators.r" creates table 1 with the regression estimates and correlations of the main indicators (mortality, expropriation risk and GDP) from the corresponding r-file in the analysis directory "regression_on_indicators.r" ' source("project_paths.r") library(xtable) ## import vector mat = dget( paste(PATH_OUT_ANALYSIS, "/", "regression_on_indicators.txt", sep="") ) ## change format and make matrix mat = sprintf("%.2f",as.numeric(mat)) mat = matrix(mat,c(10,3)) for (i in 1:3) { mat[3,i] = paste( "(", mat[3,i], ")", sep="") mat[5,i] = paste( "(", mat[5,i], ")", sep="") mat[1,i] = "" mat[7,i] = "" mat[8,i] = "" } ## add header and row names, make latex table header = matrix( c( "", "Log mortality ", "Expropriation risk", "Log GDP", "Dependent variable", "(1)", "(2)", "(3)", "\\midrule", "", "", "", "", "", "", "" ), c(4,4), byrow=TRUE ) row_names = c( "Original sample (64 countries)", "~~ Campaign indicator", "", "~~ Laborer indicator", "", "~~ $R^{2}$", "", "Correlation with log mortality", "~~ Full", "~~ Partial, controlling for indicators" ) tex_table = cbind(row_names, mat) tex_table_final = rbind(header, tex_table) tex_table_final = xtable( tex_table_final, caption="Relationship of Main Variables Campaign And Laborer Indicators" ) align(tex_table_final) = "llccc" ## export the latex table print( tex_table_final, sanitize.text.function = function(x){x}, file=paste(PATH_OUT_TABLES, "/", "table1_reg_on_indicators.tex", sep=""), include.rownames=FALSE, include.colnames=FALSE, caption.placement="top", booktabs=TRUE )
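#-- Hypothetical smoke test (not part of the original script): the code above
#-- reads a 30-element numeric vector written with dput()/dget(). This writes
#-- a dummy one so the table script can be exercised standalone.
dummy_estimates = round(runif(30, -1, 1), 2)
dput(dummy_estimates,
     file = paste(PATH_OUT_ANALYSIS, "/", "regression_on_indicators.txt", sep = ""))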
module Server.Template.Template import Extra.String -- Simple template library, can be vastly improved ident : String -> String ident k = "<%% " ++ k ++ " %%>" export template : String -> List (String, String) -> String template tmpl = foldl (\acc, (k, v) => replace (ident k) v acc) tmpl
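-- A minimal usage sketch (hypothetical values, not part of the library):
-- each (key, value) pair replaces every occurrence of "<%% key %%>" in the
-- template string.
greeting : String
greeting = template "Hello, <%% name %%>!" [("name", "world")]
-- greeting evaluates to "Hello, world!"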
data Nat : Set where succ : Nat → Nat data Fin : Nat → Set where zero : (n : Nat) → Fin (succ n) data Tm (n : Nat) : Set where var : Fin n → Tm n piv : Fin (succ n) → Tm n data Cx : Nat → Set where succ : (n : Nat) → Tm n → Cx (succ n) data CxChk : ∀ n → Cx n → Set where succ : (n : Nat) (T : Tm n) → CxChk (succ n) (succ n T) data TmChk (n : Nat) : Cx n → Tm n → Set where vtyp : (g : Cx n) (v : Fin n) → CxChk n g → TmChk n g (var v) error : ∀ n g s → TmChk n g s → Set error n g s (vtyp g' (zero x) (succ n' (piv (zero y)))) = Nat -- Internal error here. error _ _ _ (vtyp g' (zero n) (succ n (var x))) = Nat -- This clause added to pass 2.5.3.
/- Copyright (c) 2022 Anand Rao, Rémi Bottinelli. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Anand Rao, Rémi Bottinelli -/ import combinatorics.simple_graph.ends.defs import data.finite.set import data.finset.basic /-! # Properties of the ends of graphs This file is meant to contain results about the ends of (usually locally finite and connected) graphs. -/ variables {V : Type} (G : simple_graph V) namespace simple_graph lemma ends_of_finite [finite V] : is_empty G.end := begin rw is_empty_iff, rintro ⟨s, -⟩, casesI nonempty_fintype V, obtain ⟨v, h⟩ := (s $ opposite.op finset.univ).nonempty, exact set.disjoint_iff.mp (s _).disjoint_right ⟨by simp only [opposite.unop_op, finset.coe_univ], h⟩, end section TODO def induce_univ_iso : G.induce ⊤ ≃g G := sorry def connected_component.iso {V V' : Type*} {G : simple_graph V} {G' : simple_graph V'} : G ≃g G' → G.connected_component ≃ G'.connected_component := sorry theorem component_compl.eq_of_not_disjoint {G : simple_graph V} {K : set V} (C D : G.component_compl K) : ¬(disjoint (C : set V) (D : set V)) → C = D := sorry end TODO /--! ## Ends of locally finite, connected graphs -/ /- For a locally finite preconnected graph, the number of components outside of any finite set is finite. -/ lemma component_compl_finite [locally_finite G] (Gpc : preconnected G) (K : finset V) : finite (G.component_compl K) := begin classical, rcases K.eq_empty_or_nonempty with h|h, -- If K is empty, then removing K doesn't change the graph, which is connected, hence has a -- single connected component { cases h, dsimp [component_compl], rw set.compl_empty, haveI := @finite.of_subsingleton _ Gpc.subsingleton_connected_component, exact finite.of_equiv _ (connected_component.iso (induce_univ_iso G)).symm, }, -- Otherwise, we consider the function `touch` mapping a connected component to one of its -- vertices adjacent to `K`. { let touch : G.component_compl K → {v : V | ∃ k : V, k ∈ K ∧ G.adj k v} := λ C, let p := C.exists_adj_boundary_pair Gpc h in ⟨p.some.1, p.some.2, p.some_spec.2.1, p.some_spec.2.2.symm⟩, -- `touch` is injective have touch_inj : touch.injective := λ C D h', component_compl.eq_of_not_disjoint C D (by { rw set.not_disjoint_iff, use touch C, exact ⟨ (C.exists_adj_boundary_pair Gpc h).some_spec.1, h'.symm ▸ (D.exists_adj_boundary_pair Gpc h).some_spec.1⟩, }), -- `touch` has finite range haveI : finite (set.range touch), by { apply @subtype.finite _ _ _, apply set.finite.to_subtype, have : {v : V | ∃ (k : V), k ∈ K ∧ G.adj k v} = finset.bUnion K (λ v, G.neighbor_finset v), by { ext v, simp only [set.mem_Union, exists_prop, set.mem_set_of_eq, finset.coe_bUnion, finset.mem_coe, mem_neighbor_finset], }, rw this, apply finset.finite_to_set, }, apply finite.of_injective_finite_range touch_inj, }, end /-- In an infinite graph, the set of components out of a finite set is nonempty. -/ lemma component_compl_nonempty_of_infinite [infinite V] (K : finset V) : nonempty (G.component_compl K) := begin obtain ⟨k,kK⟩ := set.infinite.nonempty (set.finite.infinite_compl $ K.finite_to_set), exact ⟨component_compl_mk _ kK⟩, end -- TODO: Fix the definitions below using the newer inverse system implementation #exit /-- The `component_compl`s chosen by an end are all infinite. 
-/ lemma end_component_compl_infinite (e : G.end) (K : (finset V)ᵒᵖ) : (e.val K).supp.infinite := begin apply (e.val K).inf_iff_in_all_ranges.mpr (λ L h, _), change opposite.unop K ⊆ opposite.unop (opposite.op L) at h, exact ⟨e.val (opposite.op L), (e.prop (category_theory.op_hom_of_le h)).symm⟩, end /-- A locally finite preconnected infinite graph has at least one end. -/ lemma nonempty_ends_of_infinite [Glf : locally_finite G] (Gpc : preconnected G) [Vi : infinite V] : G.end.nonempty := begin classical, exact @nonempty_sections_of_fintype_inverse_system _ _ _ G.component_compl_functor (λ K, @fintype.of_finite _ $ G.component_compl_finite Gpc K.unop) (λ K, G.component_compl_nonempty_of_infinite K.unop) end end simple_graph
-- An ATP conjecture must be used with postulates. -- This error is detected by Syntax.Translation.ConcreteToAbstract. module ATPBadConjecture1 where data Bool : Set where false true : Bool {-# ATP prove false #-}
In fact, the British fleet was unable to pursue Villaret, having only 11 ships still capable of battle to the French 12, and having numerous dismasted ships and prizes to protect. Retiring and regrouping, the British crews set about making hasty repairs and securing their prizes; seven in total, including the badly damaged Vengeur du Peuple. Vengeur had been holed by cannon firing from Brunswick directly through the ship's bottom, and after her surrender no British ship had managed to get men aboard. This left Vengeur's few remaining unwounded crew to attempt to salvage what they could — a task made harder when some of her sailors broke into the spirit room and became drunk. Ultimately the ship's pumps became unmanageable, and Vengeur began to sink. Only the timely arrival of boats from the undamaged Alfred and HMS Culloden, as well as the services of the cutter HMS Rattler, saved any of the Vengeur's crew from drowning, these ships taking off nearly 500 sailors between them. Lieutenant John Winne of Rattler was especially commended for this hazardous work. By 18:15, Vengeur was clearly beyond salvage and only the very worst of the wounded, the dead, and the drunk remained aboard. Several sailors are said to have waved the tricolor from the bow of the ship and cried "Vive la Nation, vive la République!"
namespace prop_08 variable P : Prop theorem prop_8 : ¬ ¬ ¬ P → ¬ P := assume h1: ¬ ¬ ¬ P, show ¬ P, from (classical.by_contradiction h1) -- end namespace end prop_08
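-- Aside (added): triple-negation elimination is in fact intuitionistically
-- valid, so the same statement also admits a direct term-mode proof that
-- avoids classical.by_contradiction. The namespace and theorem name below
-- are illustrative, not part of the original file.
namespace prop_08_alt

variable P : Prop

theorem prop_8' : ¬ ¬ ¬ P → ¬ P :=
assume (h1 : ¬ ¬ ¬ P) (hp : P),
h1 (assume hnp : ¬ P, hnp hp)

end prop_08_alt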
Shinola Jewelry bracelet in sterling silver. Smooth bangle with thorn buckle detail. Alexis Bittar crystal hinge bracelet. Polished yellow golden hardware. Hand-sculpted and painted Lucite®. Crystal encrusted accent stations. Approx. 2.3" diameter; 0.5"H. Hinged opening; slip-on style. Break hinge bangle bracelet by Alexis Bittar. 10-karat gold plated hardware and settings. Signature hand-sculpted, hand-painted Lucite®. Encrusted pavé Swarovski® crystal end caps. Hinged opening; slip-on style.
GRANT COUNTY, N.M. - "Everywhere I go, the kids call me the book lady," Dolly Parton said. She may be known for her country hits, but for years Dolly Parton has been giving out books to homes all over the country, with the help of people like Barbara and Loren Nelson. "Dolly started in an underserved mining area. And we thought we know an underserved mining area," Loren Nelson said. These retired school teachers began Dolly's Imagination Library in Grant County with just a thousand dollars of their own money and donations from a few dozen businesses. "There's great poverty. The people in Grant County can't afford to buy books," Nelson said. "We know because of all kinds of research how important it is to have books in the home. And quality books, not just for reading, but the snuggle, the family bonding. And so we decided this is going to be our passion." While Dolly takes care of choosing and shipping the books, the local affiliates are in charge of raising the funds. "We kind of work month-to-month-to-month, but we've been able to keep the program going," Nelson said. Now, they've expanded into 22 of New Mexico's 33 counties. 18 of those are just partial expansions, and Bernalillo County is one example of that. "I only have funds to work with families that work with zip codes in 87105 and 87121. And as we raise more money we'll open up more zip codes. We're anxious to do that," said John Heinrich, President of Libros for Kids, Imagination Library in Bernalillo County. If you'd like to sign up for the program, go to imaginationlibrary.org and you can see if your zip code participates. Or to donate, go to that website as well to connect with a local affiliate.
State Before: n : ℕ five_le_n : 5 ≤ n ⊢ n ≤ fib n State After: case refl n : ℕ ⊢ 5 ≤ fib 5 case step n✝ n : ℕ five_le_n : Nat.le 5 n IH : n ≤ fib n ⊢ succ n ≤ fib (succ n) Tactic: induction' five_le_n with n five_le_n IH State Before: case refl n : ℕ ⊢ 5 ≤ fib 5 State After: no goals Tactic: rfl State Before: case step n✝ n : ℕ five_le_n : Nat.le 5 n IH : n ≤ fib n ⊢ succ n ≤ fib (succ n) State After: case step n✝ n : ℕ five_le_n : Nat.le 5 n IH : n ≤ fib n ⊢ n < fib (succ n) Tactic: rw [succ_le_iff] State Before: case step n✝ n : ℕ five_le_n : Nat.le 5 n IH : n ≤ fib n ⊢ n < fib (succ n) State After: no goals Tactic: calc n ≤ fib n := IH _ < fib (n + 1) := fib_lt_fib_succ (le_trans (by decide) five_le_n) State Before: n✝ n : ℕ five_le_n : Nat.le 5 n IH : n ≤ fib n ⊢ 2 ≤ 5 State After: no goals Tactic: decide
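The trace above reassembles into the following single Lean 4 tactic script (a reconstruction for illustration: the theorem name `five_le_fib` is invented here, and `fib`, `succ_le_iff`, and `fib_lt_fib_succ` are assumed to be the Mathlib declarations appearing in the goal states):

theorem five_le_fib {n : ℕ} (five_le_n : 5 ≤ n) : n ≤ fib n := by
  induction' five_le_n with n five_le_n IH
  -- base case: 5 ≤ fib 5
  · rfl
  -- inductive step: succ n ≤ fib (succ n)
  · rw [succ_le_iff]
    calc n ≤ fib n := IH
      _ < fib (n + 1) := fib_lt_fib_succ (le_trans (by decide) five_le_n)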
Police said that their investigations found no evidence that a rape had taken place. Shaoguan government spokesman Wang <unk> called it "a very ordinary incident", which he said had been exaggerated to foment <unk> Guardian reported that video of the riots and photographs of the victims were quickly circulated on the internet by Uighur exile groups, along with claims that the death toll was under-reported and the police were slow to act; protests in Ürümqi were assembled by email. Xinhua reported that Guangdong authorities had arrested two people who are suspected of having spread rumours online which alleged sexual assault of Han women had taken place. In addition, it reported on 7 July 2009 that 13 suspects had been taken into custody following the incident, of which 3 were Uyghurs from Xinjiang. Xinhua quoted 23-year-old Huang <unk> saying that he was angry at being turned down for a job in June at the toy factory, and thus posted an article at a forum on <unk> on 16 June which alleged six Xinjiang boys had raped two innocent girls at the <unk> Toy Factory; Huang <unk>, 19, was detained for writing on his online chat space on 28 June that eight Xinjiang people had died in the factory fight. Kang <unk>, vice director with the Shaoguan Public Security Bureau, said that the offenders would face up to 15 days in administrative detention.
#include <stdio.h> #include <stdlib.h> #include <gsl/gsl_sf_bessel.h> int main(void) { double x = 5.0; printf("J0(%g) = %.18e\n", x, gsl_sf_bessel_J0(x)); return EXIT_SUCCESS; }
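/* Build note (added; exact flags depend on the local GSL installation): this
 * program is typically compiled and linked with something like
 *
 *     gcc example.c -lgsl -lgslcblas -lm
 *
 * where "example.c" stands in for whatever this file is named. Running it
 * prints J0(5), which is approximately -1.7760e-01.
 */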
theory Mod_Ring_Numeral imports "Berlekamp_Zassenhaus.Poly_Mod" "Berlekamp_Zassenhaus.Poly_Mod_Finite_Field" "HOL-Library.Numeral_Type" begin section \<open>Lemmas for Simplification of Modulo Equivalences\<close> lemma to_int_mod_ring_of_int [simp]: "to_int_mod_ring (of_int n :: 'a :: nontriv mod_ring) = n mod int CARD('a)" by transfer auto lemma to_int_mod_ring_of_nat [simp]: "to_int_mod_ring (of_nat n :: 'a :: nontriv mod_ring) = n mod CARD('a)" by transfer (auto simp: of_nat_mod) lemma to_int_mod_ring_numeral [simp]: "to_int_mod_ring (numeral n :: 'a :: nontriv mod_ring) = numeral n mod CARD('a)" by (metis of_nat_numeral to_int_mod_ring_of_nat) lemma of_int_mod_ring_eq_iff [simp]: "((of_int a :: 'a :: nontriv mod_ring) = of_int b) \<longleftrightarrow> ((a mod CARD('a)) = (b mod CARD('a)))" by (metis to_int_mod_ring_hom.eq_iff to_int_mod_ring_of_int) lemma of_nat_mod_ring_eq_iff [simp]: "((of_nat a :: 'a :: nontriv mod_ring) = of_nat b) \<longleftrightarrow> ((a mod CARD('a)) = (b mod CARD('a)))" by (metis of_nat_eq_iff to_int_mod_ring_hom.eq_iff to_int_mod_ring_of_nat) lemma one_eq_numeral_mod_ring_iff [simp]: "(1 :: 'a :: nontriv mod_ring) = numeral a \<longleftrightarrow> (1 mod CARD('a)) = (numeral a mod CARD('a))" using of_nat_mod_ring_eq_iff[of 1 "numeral a", where ?'a = 'a] by (simp del: of_nat_mod_ring_eq_iff) lemma numeral_eq_one_mod_ring_iff [simp]: "numeral a = (1 :: 'a :: nontriv mod_ring) \<longleftrightarrow> (numeral a mod CARD('a)) = (1 mod CARD('a))" using of_nat_mod_ring_eq_iff[of "numeral a" 1, where ?'a = 'a] by (simp del: of_nat_mod_ring_eq_iff) lemma zero_eq_numeral_mod_ring_iff [simp]: "(0 :: 'a :: nontriv mod_ring) = numeral a \<longleftrightarrow> 0 = (numeral a mod CARD('a))" using of_nat_mod_ring_eq_iff[of 0 "numeral a", where ?'a = 'a] by (simp del: of_nat_mod_ring_eq_iff) lemma numeral_eq_zero_mod_ring_iff [simp]: "numeral a = (0 :: 'a :: nontriv mod_ring) \<longleftrightarrow> (numeral a mod CARD('a)) = 0" using of_nat_mod_ring_eq_iff[of "numeral a" 0, where ?'a = 'a] by (simp del: of_nat_mod_ring_eq_iff) lemma numeral_mod_ring_eq_iff [simp]: "((numeral a :: 'a :: nontriv mod_ring) = numeral b) \<longleftrightarrow> ((numeral a mod CARD('a)) = (numeral b mod CARD('a)))" using of_nat_mod_ring_eq_iff[of "numeral a" "numeral b", where ?'a = 'a] by (simp del: of_nat_mod_ring_eq_iff) instantiation bit1 :: (finite) nontriv begin instance proof show "1 < CARD('a bit1)" by simp qed end end
lemma scaleR_mono: "a \<le> b \<Longrightarrow> x \<le> y \<Longrightarrow> 0 \<le> b \<Longrightarrow> 0 \<le> x \<Longrightarrow> a *\<^sub>R x \<le> b *\<^sub>R y"
function [Gamma,scale] = id_Gamma_inference(L,Pi,order) % inference for independent samples (ignoring time structure) % scale is the likelihood L(L<realmin) = realmin; Gamma = repmat(Pi,size(L,1),1) .* L; Gamma = Gamma(1+order:end,:); scale = sum(Gamma,2); Gamma = rdiv(Gamma,scale); end
classdef TestInterpolateCornersCharuco %TestInterpolateCornersCharuco methods (Static) function test_1 [img, board] = get_image_markers(); [corners, ids] = cv.detectMarkers(img, board{end}); [charucoCorners, charucoIds, num] = cv.interpolateCornersCharuco(... corners, ids, img, board); validateattributes(charucoCorners, {'cell'}, {'vector'}); cellfun(@(c) validateattributes(c, {'numeric'}, ... {'vector', 'numel',2}), charucoCorners); validateattributes(charucoIds, {'numeric'}, ... {'vector', 'integer', 'nonnegative'}); assert(isequal(numel(charucoCorners),numel(charucoIds))); validateattributes(num, {'numeric'}, ... {'scalar', 'integer', 'nonnegative'}); assert(isequal(num,numel(charucoIds))); end function test_error_argnum try cv.interpolateCornersCharuco(); throw('UnitTest:Fail'); catch e assert(strcmp(e.identifier,'mexopencv:error')); end end end end function [img, board] = get_image_markers() % markers in a 5x7 charuco board board = {5, 7, 60, 40, '6x6_50'}; img = cv.drawCharucoBoard(board, [340 460], 'MarginSize',20); img = repmat(img, [1 1 3]); end
function spline_test001 ( )

%*****************************************************************************80
%
%% TEST001 tests PARABOLA_VAL2.
%
%  Licensing:
%
%    This code is distributed under the GNU LGPL license.
%
%  Modified:
%
%    01 February 2009
%
%  Author:
%
%    John Burkardt
%
  ndim = 1;
  ndata = 5;

  fprintf ( 1, '\n' );
  fprintf ( 1, 'TEST001\n' );
  fprintf ( 1, ' PARABOLA_VAL2 evaluates parabolas through\n' );
  fprintf ( 1, ' 3 points in a table\n' );
  fprintf ( 1, '\n' );
  fprintf ( 1, ' Our data tables will actually be parabolas:\n' );
  fprintf ( 1, ' A: 2*x**2 + 3 * x + 1.\n' );
  fprintf ( 1, ' B: 4*x**2 - 2 * x + 5.\n' );
  fprintf ( 1, '\n' );

  for i = 1 : ndata
    xval = 2.0 * i;
    xdata(i) = xval;
    ydata(1,i) = 2.0 * xval * xval + 3.0 * xval + 1.0;
    zdata(i) = 4.0 * xval * xval - 2.0 * xval + 5.0;
    fprintf ( 1, '%6d %10f %10f %10f\n', i, xdata(i), ydata(1,i), zdata(i) );
  end

  fprintf ( 1, '\n' );
  fprintf ( 1, 'Interpolated data:\n' );
  fprintf ( 1, '\n' );
  fprintf ( 1, ' LEFT, X, Y1, Y2\n' );
  fprintf ( 1, '\n' );

  for i = 1 : 5
    xval = 2 * i - 1;
    left = i;
    if ( ndata - 2 < left )
      left = ndata - 2;
    end
    if ( left < 1 )
      left = 1;
    end
    yval = parabola_val2 ( ndim, ndata, xdata, ydata, left, xval );
    zval = parabola_val2 ( ndim, ndata, xdata, zdata, left, xval );
    fprintf ( 1, '%6d %10f %10f %10f\n', left, xval, yval(1), zval(1) );
  end

  return
end
SUBROUTINE LA_TEST_SGECON( NORM, N, A, LDA, ANORM, RCOND, WORK, & IWORK, INFO ) ! -- LAPACK95 interface driver routine (version 0.0) -- ! UNI-C, Denmark; Univ. of Tennessee, USA; NAG Ltd., UK ! October 31, 1996 ! ! .. Use Statements .. USE LA_PRECISION, ONLY: WP => SP USE F95_LAPACK, ONLY: LA_GETRF ! .. Implicit Statement .. IMPLICIT NONE ! .. Scalar Arguments .. CHARACTER(LEN=1), INTENT(IN) :: NORM INTEGER, INTENT(IN) :: N, LDA INTEGER, INTENT(INOUT) :: INFO REAL(WP), INTENT(IN) :: ANORM REAL(WP), INTENT(OUT) :: RCOND ! .. Array Arguments .. INTEGER, INTENT(INOUT) :: IWORK(1:N) REAL(WP), INTENT(INOUT) :: A(1:LDA,1:N), WORK(*) ! .. Parameters .. CHARACTER(LEN=8), PARAMETER :: SRNAME = 'LA_GETRF' CHARACTER(LEN=14), PARAMETER :: SRNAMT = 'LA_TEST_SGECON' ! .. Local Scalars .. CHARACTER(LEN=1) LNORM INTEGER :: I, J, IA1, IA2, IIWORK REAL(WP) :: W1 LOGICAL, SAVE :: CTEST = .TRUE., ETEST = .TRUE. ! .. Executable Statements .. W1 = ANORM; RCOND = WORK(1) IA1 = N; IA2 = N; IIWORK = N; LNORM = NORM I = INFO / 100; J = INFO - I*100 SELECT CASE(I) CASE(0) IF( LNORM == 'N' )THEN CALL LA_GETRF( A(1:IA1,1:IA2), IWORK(1:IIWORK), RCOND ) ELSE CALL LA_GETRF( A(1:IA1,1:IA2), IWORK(1:IIWORK), RCOND, LNORM ) END IF INFO = 0 CASE (1) IA2 = IA1-1 CASE (2) IIWORK = IA1-1 CASE(4) SELECT CASE(J) CASE(1) CALL LA_GETRF( A(1:IA1,1:IA2), IWORK(1:IIWORK), NORM = LNORM, & INFO = INFO ) CASE(2) LNORM = '/' CASE(:0,3,5:) CALL UESTOP(SRNAMT) END SELECT CASE(:-1,5:) CALL UESTOP(SRNAMT) END SELECT IF ( I /= 0 ) THEN CALL LA_GETRF( A(1:IA1,1:IA2),IWORK(1:IIWORK), RCOND, LNORM, INFO ) END IF IF( RCOND == 0.0_WP .AND. I == 0 ) INFO = 0 CALL LA_AUX_AA01( I, CTEST, ETEST, SRNAMT ) END SUBROUTINE LA_TEST_SGECON
[STATEMENT] theorem pumping_lemma: fixes r :: "'a :: finite rexp" obtains n where "\<And>z. z \<in> lang r \<Longrightarrow> length z \<ge> n \<Longrightarrow> \<exists>u v w. z = u @ v @ w \<and> length (u @ v) \<le> n \<and> v \<noteq> [] \<and> (\<forall>i. u @ repeat i v @ w \<in> lang r)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<And>n. (\<And>z. \<lbrakk>z \<in> lang r; n \<le> length z\<rbrakk> \<Longrightarrow> \<exists>u v w. z = u @ v @ w \<and> length (u @ v) \<le> n \<and> v \<noteq> [] \<and> (\<forall>i. u @ repeat i v @ w \<in> lang r)) \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] proof - [PROOF STATE] proof (state) goal (1 subgoal): 1. (\<And>n. (\<And>z. \<lbrakk>z \<in> lang r; n \<le> length z\<rbrakk> \<Longrightarrow> \<exists>u v w. z = u @ v @ w \<and> length (u @ v) \<le> n \<and> v \<noteq> [] \<and> (\<forall>i. u @ repeat i v @ w \<in> lang r)) \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] let ?n = "card (range (\<lambda>w. Derivs w (lang r)))" [PROOF STATE] proof (state) goal (1 subgoal): 1. (\<And>n. (\<And>z. \<lbrakk>z \<in> lang r; n \<le> length z\<rbrakk> \<Longrightarrow> \<exists>u v w. z = u @ v @ w \<and> length (u @ v) \<le> n \<and> v \<noteq> [] \<and> (\<forall>i. u @ repeat i v @ w \<in> lang r)) \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] have "\<exists>u v w. z = u @ v @ w \<and> length (u @ v) \<le> ?n \<and> v \<noteq> [] \<and> (\<forall>i. u @ repeat i v @ w \<in> lang r)" if "z \<in> lang r" and "length z \<ge> ?n" for z [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<exists>u v w. z = u @ v @ w \<and> length (u @ v) \<le> card (range (\<lambda>w. Derivs w (lang r))) \<and> v \<noteq> [] \<and> (\<forall>i. u @ repeat i v @ w \<in> lang r) [PROOF STEP] by (intro pumping_lemma_aux[of z] that regular_Derivs_finite) [PROOF STATE] proof (state) this: \<lbrakk>?z \<in> lang r; card (range (\<lambda>w. Derivs w (lang r))) \<le> length ?z\<rbrakk> \<Longrightarrow> \<exists>u v w. ?z = u @ v @ w \<and> length (u @ v) \<le> card (range (\<lambda>w. Derivs w (lang r))) \<and> v \<noteq> [] \<and> (\<forall>i. u @ repeat i v @ w \<in> lang r) goal (1 subgoal): 1. (\<And>n. (\<And>z. \<lbrakk>z \<in> lang r; n \<le> length z\<rbrakk> \<Longrightarrow> \<exists>u v w. z = u @ v @ w \<and> length (u @ v) \<le> n \<and> v \<noteq> [] \<and> (\<forall>i. u @ repeat i v @ w \<in> lang r)) \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] thus ?thesis [PROOF STATE] proof (prove) using this: \<lbrakk>?z \<in> lang r; card (range (\<lambda>w. Derivs w (lang r))) \<le> length ?z\<rbrakk> \<Longrightarrow> \<exists>u v w. ?z = u @ v @ w \<and> length (u @ v) \<le> card (range (\<lambda>w. Derivs w (lang r))) \<and> v \<noteq> [] \<and> (\<forall>i. u @ repeat i v @ w \<in> lang r) goal (1 subgoal): 1. thesis [PROOF STEP] by (rule that) [PROOF STATE] proof (state) this: thesis goal: No subgoals! [PROOF STEP] qed
module WhiskerStructs struct HydroWhisker λ::Float64 γ::Float64 Ac::Float64 At::Float64 ϕ::Float64 ϵ::Float64 T::Float64 end struct TopoWhisker a::Float64 b::Float64 k::Float64 l::Float64 α::Float64 β::Float64 M::Float64 end export HydroWhisker, TopoWhisker end
// Copyright 2019, Winfield Chen and Lloyd T. Elliott.

#ifndef __STATISTICS_H_
#define __STATISTICS_H_

#include <gsl/gsl_sf.h>

#include "matrix.h"

double tcdf1m(double t, double nu);

double log_tcdf1m(double t, double nu);

void regression(t_matrix g, t_matrix y, t_matrix yt, t_matrix obs,
    t_matrix denom, t_matrix beta, t_matrix se, t_matrix tstat, t_matrix pval,
    t_matrix b1, t_matrix w1, t_matrix w2);

#endif
\chapter{$b$-hadron Lifetimes}\label{appendix:B_hadron_lifetimes}

$b$-hadrons (hadronically bound states containing at least one $b$-flavor quark) are comparatively long-lived before they decay. Using the charged $B$ meson, $B^{-}$, as an example, with quark content of $B^{-} = \Ket{\bar{u}\, b}$, a decay mediated by the strong force is forbidden by electrical charge conservation. Thus, the decay must proceed through a flavor-changing charged current mediated by a $W$ boson. Some possible decays are
\[
\underbrace{\bar{u}\,b}_{B^{-}} \to \underbrace{u\bar{u}}_{\pi^0} \left(W^{-} \to\right) \ell^{-} \bar{\nu}_{\ell},
\qquad
\underbrace{\bar{u}\,b}_{B^{-}} \to \underbrace{u\bar{u}}_{\pi^0} \left(W^{-} \to \right) \underbrace{\bar{u}d}_{\pi^-},
\]
%
\[
\underbrace{\bar{u}\,b}_{B^{-}} \to \underbrace{c\bar{u}}_{D^0} \left(W^{-} \to \right) \ell^{-} \bar{\nu}_{\ell},
\qquad
\underbrace{\bar{u}\,b}_{B^{-}} \to \underbrace{c\bar{u}}_{D^0} \left(W^{-} \to \right) \underbrace{\bar{u}d}_{\pi^-}.
\]
As the $b$-decay is cross-generational, it is ``Cabibbo suppressed,'' which further increases the lifetime~\cite{Vaandering}.

Cabibbo suppression is also relevant in the decays of kaons and charged $D$-mesons. With the introduction of the ``strangeness'' quantum number, it was observed that the decay rates of particles with nonzero strangeness differed from those of non-strange particles. Cabibbo suggested~\cite{Cabibbo:1963yz} that these decays were also mediated by weak interactions but that the participating states (weak eigenstates) were mixtures of the mass eigenstates,
\[
\Ket{d'} = \alpha \Ket{d} + \beta \Ket{s},
\]
such that through normalization, $\Braket{d'|d'} = 1$, and absorbing phases, one free parameter remains. The choices of
\[
\alpha = \cos\theta_C, \qquad \beta = \sin\theta_C,
\]
are made, and the remaining free parameter $\theta_C$ is empirically determined from fits to data to be $\theta_C \approx 0.23~\mathrm{rad} \approx 13.15^{\circ}$. With Glashow, Iliopoulos, and Maiani's (GIM) introduction of a fourth quark, $c$,~\cite{Glashow:1970gm} the Cabibbo-GIM scheme established the ``Cabibbo-rotated'' weak eigenstates
\[
\Ket{d'} = \cos\theta_C \Ket{d} + \sin\theta_C \Ket{s},
\qquad
\Ket{s'} = -\sin\theta_C \Ket{d} + \cos\theta_C \Ket{s}
\]
which comprised the flavor doublets
\[
\begin{pmatrix} u \\ d' \end{pmatrix}, \quad \begin{pmatrix} c \\ s' \end{pmatrix}
\]
that the $W$ bosons couple to in the same manner as they couple to lepton flavor doublets. The Cabibbo rotation matrix follows immediately,
\[
\begin{pmatrix} d' \\ s' \end{pmatrix}
=
\begin{pmatrix}
\cos\theta_C & \sin\theta_C \\
-\sin\theta_C & \cos\theta_C
\end{pmatrix}
\begin{pmatrix} d \\ s \end{pmatrix}.
\]
With Kobayashi and Maskawa's generalization of the Cabibbo-GIM scheme to three generations~\cite{Kobayashi:1973fv}, the CKM transformation matrix was formed,
\[
\begin{pmatrix} d' \\ s' \\ b' \end{pmatrix}
=
\begin{pmatrix}
V_{ud} & V_{us} & V_{ub} \\
V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb}
\end{pmatrix}
\begin{pmatrix} d \\ s \\ b \end{pmatrix}\,.
\]
Taking the mixing of the third generation with the first and second to be small (i.e., in terms of the generalized Cabibbo angles $(\theta_{12},\theta_{23},\theta_{13})$, taking $\theta_{13} \approx \theta_{23} \sim 0$), it is seen that the Cabibbo-GIM mixing matrix is recovered.
It is seen from the CKM matrix (whose on-diagonal elements are close to unity) that cross-generational decays (off-diagonal elements) are ``Cabibbo suppressed'' while intragenerational decays (on-diagonal elements) are ``Cabibbo favored.''

Thus, noting that
\[
\beta = \frac{\abs{\vec{p}}c}{E},
\qquad
E = \gamma\, mc^2,
\]
it is seen that for a hadron with mass $m$, mean lifetime $\tau$, and 3-momentum $\abs{\vec{p}}$, the distance it travels, $x'$, in the lab frame, $O'$, before decaying is
\[
\begin{split}
x' &= v' t' \\
   &= \left(\beta c\right) \left(\gamma \tau\right) \\
   &= \frac{\abs{\vec{p}}c^2}{E} \gamma \tau \\
   &= \frac{\abs{\vec{p}}c^2}{\gamma\, mc^2} \gamma \tau \\
   &= \frac{\abs{\vec{p}}}{m}\, \tau.
\end{split}
\]
It is also seen that the characteristic length scale of the particle, where $\beta\gamma=1$ and so $p=mc$, is equal to $c\tau$. The boost of the particle then rescales this characteristic length: the lab-frame decay length is $x' = \beta\gamma\, c\tau$.
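As a numerical illustration (added here; the $B^{\pm}$ lifetime and mass used below are the standard world-average values, $\tau_{B^{\pm}} \approx 1.64~\mathrm{ps}$ and $m_{B^{\pm}} \approx 5.28~\mathrm{GeV}/c^2$, quoted as an assumption rather than taken from this document), the characteristic length of the charged $B$ meson is
\[
c\tau \approx \left(3.0\times10^{8}~\mathrm{m/s}\right)\left(1.64\times10^{-12}~\mathrm{s}\right)
\approx 4.9\times10^{-4}~\mathrm{m} \approx 0.49~\mathrm{mm},
\]
so a $B^{-}$ produced with $\abs{\vec{p}} = 10~\mathrm{GeV}/c$ travels on average
\[
x' = \beta\gamma\, c\tau \approx \frac{10}{5.28} \times 0.49~\mathrm{mm} \approx 0.93~\mathrm{mm}
\]
before decaying. Displacements of this size are what make $b$-hadron decay vertices resolvable with silicon vertex detectors.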
# -*- coding:utf-8 -*- import setproctitle setproctitle.setproctitle("STSGCN@lifuxian") import time import json import argparse import numpy as np import mxnet as mx from utils import (construct_model, generate_data, masked_mae_np, masked_mape_np, masked_mse_np) parser = argparse.ArgumentParser() parser.add_argument("--config", type=str, help='configuration file') parser.add_argument("--test", action="store_true", help="test program") parser.add_argument("--plot", help="plot network graph", action="store_true") parser.add_argument("--save", action="store_true", help="save model") args = parser.parse_args() config_filename = args.config with open(config_filename, 'r') as f: config = json.loads(f.read()) print(json.dumps(config, sort_keys=True, indent=4)) net = construct_model(config) batch_size = config['batch_size'] num_of_vertices = config['num_of_vertices'] graph_signal_matrix_filename = config['graph_signal_matrix_filename'] if isinstance(config['ctx'], list): ctx = [mx.gpu(i) for i in config['ctx']] elif isinstance(config['ctx'], int): ctx = mx.gpu(config['ctx']) loaders = [] true_values = [] for idx, (x, y) in enumerate(generate_data(graph_signal_matrix_filename)): if args.test: x = x[: 100] y = y[: 100] y = y.squeeze(axis=-1) print(x.shape, y.shape) loaders.append( mx.io.NDArrayIter( x, y if idx == 0 else None, batch_size=batch_size, shuffle=(idx == 0), label_name='label' ) ) if idx == 0: training_samples = x.shape[0] else: true_values.append(y) train_loader, val_loader, test_loader = loaders val_y, test_y = true_values global_epoch = 1 global_train_steps = training_samples // batch_size + 1 all_info = [] epochs = config['epochs'] mod = mx.mod.Module( net, data_names=['data'], label_names=['label'], context=ctx ) mod.bind( data_shapes=[( 'data', (batch_size, config['points_per_hour'], num_of_vertices, 1) ), ], label_shapes=[( 'label', (batch_size, config['points_per_hour'], num_of_vertices) )] ) mod.init_params(initializer=mx.init.Xavier(magnitude=0.0003)) lr_sch = mx.lr_scheduler.PolyScheduler( max_update=global_train_steps * epochs * config['max_update_factor'], base_lr=config['learning_rate'], pwr=2, warmup_steps=global_train_steps ) mod.init_optimizer( optimizer=config['optimizer'], optimizer_params=(('lr_scheduler', lr_sch),) ) num_of_parameters = 0 for param_name, param_value in mod.get_params()[0].items(): # print(param_name, param_value.shape) num_of_parameters += np.prod(param_value.shape) print("Number of Parameters: {}".format(num_of_parameters), flush=True) metric = mx.metric.create(['RMSE', 'MAE'], output_names=['pred_output']) if args.plot: graph = mx.viz.plot_network(net) graph.format = 'png' graph.render('graph') def training(epochs): global global_epoch lowest_val_loss = 1e6 tolerance = 50 cnt_temp = 0 for epoch in range(epochs): t = time.time() info = [global_epoch] train_loader.reset() metric.reset() for idx, databatch in enumerate(train_loader): mod.forward_backward(databatch) mod.update_metric(metric, databatch.label) mod.update() metric_values = dict(zip(*metric.get())) print('training: Epoch: %s, RMSE: %.2f, MAE: %.2f, time: %.2f s' % ( global_epoch, metric_values['rmse'], metric_values['mae'], time.time() - t), flush=True) info.append(metric_values['mae']) val_loader.reset() prediction = mod.predict(val_loader)[1].asnumpy() loss = masked_mae_np(val_y, prediction, 0) print('validation: Epoch: %s, loss: %.2f, time: %.2f s' % ( global_epoch, loss, time.time() - t), flush=True) info.append(loss) if loss < lowest_val_loss: test_loader.reset() prediction = 
mod.predict(test_loader)[1].asnumpy() tmp_info = [] for idx in range(config['num_for_predict']): # y, x = test_y[:, : idx + 1, :], prediction[:, : idx + 1, :] y, x = test_y[:, idx : idx + 1, :], prediction[:, idx : idx + 1, :] tmp_info.append(( masked_mae_np(y, x, 0), masked_mape_np(y, x, 0), masked_mse_np(y, x, 0) ** 0.5 )) mae, mape, rmse = tmp_info[-1] print('test: Epoch: {}, MAE: {:.2f}, MAPE: {:.2f}, RMSE: {:.2f}, ' 'time: {:.2f}s'.format( global_epoch, mae, mape, rmse, time.time() - t)) print(flush=True) info.extend((mae, mape, rmse)) info.append(tmp_info) all_info.append(info) lowest_val_loss = loss cnt_temp = 0 else: cnt_temp += 1 if cnt_temp < tolerance: global_epoch += 1 else: print('earlystopping at epoch ', epoch) break if args.test: epochs = 5 training(epochs) the_best = min(all_info, key=lambda x: x[2]) # print('step: {}\ntraining loss: {:.2f}\nvalidation loss: {:.2f}\n' # 'tesing: MAE: {:.2f}\ntesting: MAPE: {:.2f}\n' # 'testing: RMSE: {:.2f}\n'.format(*the_best)) # print(the_best) # the_best = [68, 1.411298357080995, 1.796820238713304, 2.261148341010781, 5.400548858947783, 5.2094251746191915, [(0.960964881090736, 1.8586036151815766, 1.7065738206433105), (1.231127729653431, 2.5126222235681794, 2.4261496147356), (1.4356823103392997, 3.037844077552284, 3.0128969394052794), (1.5964147147430807, 3.481093203674549, 3.4896500217794113), (1.729473497683941, 3.8699448526983558, 3.8808571468819997), (1.834626912422782, 4.16717613882355, 4.181905769364844), (1.9326727985467773, 4.442828320728456, 4.432138512797938), (2.0172858990449942, 4.6921727248928145, 4.63484797416534), (2.086977551399003, 4.89622051400139, 4.810464363080575), (2.146455060549889, 5.077811552706208, 4.952446549782243), (2.202791783490587, 5.23813464423378, 5.084228846970088), (2.261148341010781, 5.400548858947783, 5.2094251746191915)]] print('step: {}\ntraining loss: {:.2f}\nvalidation loss: {:.2f}\n'.format(*the_best)) for i in [2,5,11]: print('Horizon ' + str(i+1)+':') print('test\tMAE\t\tMAPE\t\tRMSE') print(the_best[6][i]) if args.save: mod.save_checkpoint('STSGCN', epochs)
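# Added note: the masked metrics imported from `utils` above are defined
# elsewhere in the repository. The sketch below shows the standard masked-MAE
# pattern used by traffic-forecasting codebases; it is an assumption about
# masked_mae_np's behaviour, not its actual source, and is renamed to avoid
# shadowing the real import.
def masked_mae_np_sketch(y_true, y_pred, null_val=0):
    # entries equal to null_val (missing sensor readings) are excluded
    mask = (y_true != null_val).astype('float32')
    mask /= mask.mean()  # rescale so the mean over kept entries stays unbiased
    mae = np.abs(y_true - y_pred) * mask
    return np.nan_to_num(mae).mean()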
# E. coli Glycolytic Network Construction ## Growth Medium: Glucose Growth data obtained from the following sources: - Gerosa, Luca et al. “Pseudo-transition Analysis Identifies the Key Regulators of Dynamic Metabolic Adaptations from Steady-State Data.” Cell systems vol. 1,4 (2015): 270-82. doi:10.1016/j.cels.2015.09.008 - Volkmer, Benjamin, and Matthias Heinemann. “Condition-dependent cell volume and concentration of Escherichia coli to facilitate data conversion for systems biology modeling.” PloS one vol. 6,7 (2011): e23126. doi:10.1371/journal.pone.0023126 ### Import packages ```python # Disable gurobi logging output for this notebook. try: import gurobipy gurobipy.setParam("OutputFlag", 0) except ImportError: pass import numpy as np import pandas as pd import sympy as sym import matplotlib.pyplot as plt import cobra from cobra.io.json import load_json_model as load_cobra_json_model import mass from mass import MassConfiguration, MassModel, MassMetabolite, MassReaction, Simulation from mass.io.json import save_json_model as save_mass_json_model from mass.visualization import plot_comparison, plot_time_profile print(f"COBRApy version: {cobra.__version__}") print(f"MASSpy version: {mass.__version__}") ``` Set parameter Username Academic license - for non-commercial use only - expires 2022-01-21 COBRApy version: 0.22.1 MASSpy version: 0.1.5 ### Set solver ```python MASSCONFIGURATION = MassConfiguration() MASSCONFIGURATION.solver = "gurobi" ``` ## Load COBRA model ```python cobra_model = load_cobra_json_model(f"./models/cobra/iML1515.json") ``` ## Obtain Flux State ### Load growth data ```python medium = "Glucose" flux_data = pd.read_excel( io="./data/growth_data.xlsx", sheet_name="flux_data", index_col=0 ) flux_data = flux_data.loc[lambda x: x['Growth Medium'] == medium] flux_data = flux_data.drop("Growth Medium", axis=1) flux_data ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Flux (mmol * gDW-1 * h-1)</th> </tr> <tr> <th>ID</th> <th></th> </tr> </thead> <tbody> <tr> <th>EX_ac_e</th> <td>6.827019e+00</td> </tr> <tr> <th>ACt2rpp</th> <td>-6.827019e+00</td> </tr> <tr> <th>ACKr</th> <td>-6.827019e+00</td> </tr> <tr> <th>PTAr</th> <td>6.827019e+00</td> </tr> <tr> <th>ACS</th> <td>0.000000e+00</td> </tr> <tr> <th>...</th> <td>...</td> </tr> <tr> <th>ME1</th> <td>-4.778621e-08</td> </tr> <tr> <th>ME2</th> <td>-4.778621e-08</td> </tr> <tr> <th>ICL</th> <td>9.654038e-10</td> </tr> <tr> <th>MALS</th> <td>9.654038e-10</td> </tr> <tr> <th>BIOMASS_Ec_iML1515_core_75p37M</th> <td>6.500000e-01</td> </tr> </tbody> </table> <p>68 rows × 1 columns</p> </div> ### Set bounds #### Growth rate and media ```python biomass_rxn = cobra_model.reactions.BIOMASS_Ec_iML1515_core_75p37M growth_rate = flux_data.loc[biomass_rxn.id][0] biomass_rxn.bounds = (growth_rate, growth_rate) biomass_rxn.bounds ``` (0.65, 0.65) ```python EX_glc__D_e = cobra_model.reactions.EX_glc__D_e medium_uptake = flux_data.loc[EX_glc__D_e.id][0] EX_glc__D_e.bounds = (medium_uptake, medium_uptake) EX_glc__D_e.bounds ``` (-9.654, -9.654) ### Formulate QP minimization for fluxes ```python v_vars = [] v_data = [] # For irreversible enzyme pairs, flux data is given as Enzyme1 - Enzyme2 = value. 
# To ensure all enzymes have some flux, add a percentage of the net flux for each enzyme # The netflux will still remain the same value. reverse_flux_percent = 0.1 irreversible_enzyme_pairs = [["PFK", "FBP"], ["PYK", "PPS"]] for rid, flux in flux_data.itertuples(): # Make adjustments to net flux of PFK/FBP and PYK/PPS to ensure # no target flux value is 0 in order to create an enzyme module. for irreversible_enzyme_pair in irreversible_enzyme_pairs: if rid in irreversible_enzyme_pair: flux1, flux2 = flux_data.loc[irreversible_enzyme_pair, "Flux (mmol * gDW-1 * h-1)"].values if flux1 == 0: flux += reverse_flux_percent * flux2 # mmol*gDW^-1*hr^-1 if flux2 == 0: flux += reverse_flux_percent * flux1 # mmol*gDW^-1*hr^-1 print(rid, flux) v_vars.append(sym.Symbol(rid)) v_data.append(flux) # Make symbolic for optlang objective v_vars = sym.Matrix(v_vars) v_data = sym.Matrix(v_data) F = sym.Matrix(2 * sym.eye(len(v_vars))) objective = 0.5 * v_vars.T * F * v_vars - (2 * v_data).T * v_vars cobra_model.objective = objective[0] cobra_model.objective_direction = "min" flux_solution = cobra_model.optimize() flux_solution ``` PFK 7.76432470140912 FBP 0.7058477001281019 PYK 2.7360389076214697 PPS 0.24873080978377 <strong><em>Optimal</em> solution with objective value -1075.383</strong><br><div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>fluxes</th> <th>reduced_costs</th> </tr> </thead> <tbody> <tr> <th>CYTDK2</th> <td>0.000000</td> <td>0.000000</td> </tr> <tr> <th>XPPT</th> <td>0.000000</td> <td>0.000000</td> </tr> <tr> <th>HXPRT</th> <td>0.000000</td> <td>0.045104</td> </tr> <tr> <th>NDPK5</th> <td>0.017561</td> <td>0.000000</td> </tr> <tr> <th>SHK3Dr</th> <td>0.247727</td> <td>0.000000</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> </tr> <tr> <th>MPTS</th> <td>0.000000</td> <td>0.000000</td> </tr> <tr> <th>MOCOS</th> <td>0.000000</td> <td>0.000000</td> </tr> <tr> <th>BMOGDS2</th> <td>0.000000</td> <td>0.000000</td> </tr> <tr> <th>FESD2s</th> <td>0.000000</td> <td>0.000000</td> </tr> <tr> <th>OCTNLL</th> <td>0.000000</td> <td>2.611245</td> </tr> </tbody> </table> <p>2712 rows × 2 columns</p> </div> ```python flux_comparison_fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(5, 5)) plot_comparison( x=flux_data["Flux (mmol * gDW-1 * h-1)"], y=flux_solution, compare="fluxes", observable=[rid for rid in flux_data.index], ax=ax, legend="right outside", plot_function="plot", xlim=(-16.5, 16.5), ylim=(-16.5, 16.5), xy_line=True, xy_legend="best", xlabel="Measured [mmol/(gDW * h)]", ylabel="Adjusted [mmol/(gDW * h)]") flux_comparison_fig.tight_layout() ``` #### Export data for analysis ```python flux_data_for_comparison = pd.concat(objs=(flux_data, flux_solution.fluxes), axis=1).dropna() flux_data_for_comparison.index.name = "ID" flux_data_for_comparison.columns = ["Initial", "Adjusted"] flux_data_for_comparison.to_csv("./data/analysis_data/fluxes.csv") ``` ## Create MASS Model ```python # Create MassModel mass_model = MassModel("Glycolysis", array_type="DataFrame") # Reactions to extract into subnetwork reaction_list = [ "PGI", "PFK", "FBP", "FBA", "TPI", "GAPD", "PGK", "PGM", "ENO", "PYK", "PPS", "LDH_D", ] cobra_reactions = cobra_model.reactions.get_by_any(reaction_list) mass_model.add_reactions([MassReaction(rxn) for rxn in cobra_reactions]) mass_model ``` <table> <tr> 
<td><strong>Name</strong></td><td>Glycolysis</td> </tr><tr> <td><strong>Memory address</strong></td><td>0x07ff0d1ebc5b0</td> </tr><tr> <td><strong>Stoichiometric Matrix</strong></td> <td>19x12</td> </tr><tr> <td><strong>Matrix Rank</strong></td> <td>12</td> </tr><tr> <td><strong>Number of metabolites</strong></td> <td>19</td> </tr><tr> <td><strong>Initial conditions defined</strong></td> <td>0/19</td> </tr><tr> <td><strong>Number of reactions</strong></td> <td>12</td> </tr><tr> <td><strong>Number of genes</strong></td> <td>12</td> </tr><tr> <td><strong>Number of enzyme modules</strong></td> <td>0</td> </tr><tr> <td><strong>Number of groups</strong></td> <td>0</td> </tr><tr> <td><strong>Objective expression</strong></td> <td>0</td> </tr><tr> <td><strong>Compartments</strong></td> <td>c</td> </tr> </table> ### Convert flux units to M/s ```python T = 313.15 gas_constant = 0.008314 e_coli_density = 1.1 # g / mL assumption volume = 3.2 # femtoliter # Perform conversions doubling_time_per_minute = np.log(2) / growth_rate * 60 cell_gDW = 42000 * doubling_time_per_minute**-1.232 * 1e-15 real_cell_total_weight = e_coli_density * (volume * 1e-12) # fL --> mL # Assume water is 70% adj_volume = volume * 0.7 gDW_L_conversion_factor = real_cell_total_weight / (adj_volume * 1e-15) for reaction in mass_model.reactions.get_by_any(reaction_list): flux = flux_solution[reaction.id] reaction.steady_state_flux = flux * gDW_L_conversion_factor * 0.001 / 3600 ``` ## Set equilibrium constants ```python Keq_data = pd.read_excel( io="./data/growth_data.xlsx", sheet_name="Keq_data", index_col=0 ) for reaction in mass_model.reactions.get_by_any(reaction_list): reaction.Keq = Keq_data.loc[reaction.Keq_str][0] ``` ```python conc_data = pd.read_excel( io="./data/growth_data.xlsx", sheet_name="conc_data", index_col=0 ) conc_data = conc_data.loc[lambda x: x['Growth Medium'] == "Glucose"] conc_data = conc_data.drop("Growth Medium", axis=1) conc_data ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Concentration (mol * L-1)</th> </tr> <tr> <th>ID</th> <th></th> </tr> </thead> <tbody> <tr> <th>adp_c</th> <td>0.002185</td> </tr> <tr> <th>amp_c</th> <td>0.001743</td> </tr> <tr> <th>atp_c</th> <td>0.012466</td> </tr> <tr> <th>nad_c</th> <td>0.007636</td> </tr> <tr> <th>nadh_c</th> <td>0.000099</td> </tr> <tr> <th>13dpg_c</th> <td>0.000184</td> </tr> <tr> <th>f6p_c</th> <td>0.001214</td> </tr> <tr> <th>dhap_c</th> <td>0.004772</td> </tr> <tr> <th>g6p_c</th> <td>0.003431</td> </tr> <tr> <th>pep_c</th> <td>0.001276</td> </tr> <tr> <th>pyr_c</th> <td>0.005079</td> </tr> <tr> <th>2pg_c</th> <td>0.002876</td> </tr> <tr> <th>3pg_c</th> <td>0.002876</td> </tr> <tr> <th>fdp_c</th> <td>0.004196</td> </tr> <tr> <th>gdp_c</th> <td>0.001206</td> </tr> </tbody> </table> </div> ### Add PFK1 activator GDP ```python gdp_c = MassMetabolite(cobra_model.metabolites.gdp_c) # Set the activator as a constant gdp_c.fixed = True mass_model.add_metabolites(gdp_c) ``` ### Set initial concentrations from growth data ```python mass_model.update_initial_conditions({ mid: value for mid, value in conc_data.itertuples() }) # Fix hydrogen and water as constants and set concentration to 1. 
for metabolite in mass_model.metabolites.get_by_any(["h2o_c", "h_c"]): metabolite.fixed = True metabolite.initial_condition = 1 missing_ics = mass_model.metabolites.query(lambda m: m.initial_condition is None) # Provide initial guesses for missing metabolites (pi_c, g3p_c, and lac__D_c) print(missing_ics) for metabolite in missing_ics: metabolite.initial_condition = 0.001 ``` [<MassMetabolite pi_c at 0x7ff0d1ebcfa0>, <MassMetabolite g3p_c at 0x7ff100dd8280>, <MassMetabolite lac__D_c at 0x7ff100dd8eb0>] ### Formulate QP minimization for concentrations ```python from mass.thermo import ( ConcSolver, sample_concentrations, update_model_with_concentration_solution) ``` ```python conc_solver = ConcSolver( mass_model, excluded_metabolites=["h_c", "h2o_c"], constraint_buffer=1, equilibrium_reactions=[x.id for x in mass_model.reactions if x.steady_state_flux == 0] ) conc_solver.setup_feasible_qp_problem( fixed_conc_bounds=list(mass_model.fixed), ) conc_solution = conc_solver.optimize() conc_solution ``` <strong><em>Optimal</em> solution with objective value 0.000</strong><br><div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>variables</th> <th>reduced_costs</th> </tr> </thead> <tbody> <tr> <th>f6p_c</th> <td>0.001711</td> <td>0.0</td> </tr> <tr> <th>g6p_c</th> <td>0.002435</td> <td>0.0</td> </tr> <tr> <th>adp_c</th> <td>0.002185</td> <td>0.0</td> </tr> <tr> <th>atp_c</th> <td>0.012466</td> <td>0.0</td> </tr> <tr> <th>fdp_c</th> <td>0.009390</td> <td>0.0</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> </tr> <tr> <th>Keq_PGM</th> <td>2.178140</td> <td>0.0</td> </tr> <tr> <th>Keq_ENO</th> <td>5.190000</td> <td>0.0</td> </tr> <tr> <th>Keq_PYK</th> <td>21739.130400</td> <td>0.0</td> </tr> <tr> <th>Keq_PPS</th> <td>2.410000</td> <td>0.0</td> </tr> <tr> <th>Keq_LDH_D</th> <td>0.000240</td> <td>0.0</td> </tr> </tbody> </table> <p>30 rows × 2 columns</p> </div> ```python conc_comparison_fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(5, 5)) plot_comparison( x=conc_data["Concentration (mol * L-1)"], y=conc_solution, compare="concentrations", observable=[mid for mid in conc_data.index], ax=ax, legend="right outside", plot_function="loglog", xlim=(1e-5, 1e-1), ylim=(1e-5, 1e-1), xy_line=True, xy_legend="best", xlabel="Initial [mol/L]", ylabel="Adjusted [mol/L]") conc_comparison_fig.tight_layout() update_model_with_concentration_solution( mass_model, conc_solution, concentrations=True, Keqs=True, inplace=True); ``` #### Export data for analysis ```python conc_data_for_comparison = pd.concat(objs=(conc_data, conc_solution.concentrations), axis=1).dropna() conc_data_for_comparison.index.name = "ID" conc_data_for_comparison.columns = ["Initial", "Adjusted"] conc_data_for_comparison.to_csv("./data/analysis_data/concentrations.csv") conc_data_for_comparison ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Initial</th> <th>Adjusted</th> </tr> <tr> <th>ID</th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>adp_c</th> <td>0.002185</td> <td>0.002185</td> </tr> <tr> <th>amp_c</th> <td>0.001743</td> <td>0.001743</td> </tr> <tr> <th>atp_c</th> <td>0.012466</td> 
<td>0.012466</td> </tr> <tr> <th>nad_c</th> <td>0.007636</td> <td>0.035436</td> </tr> <tr> <th>nadh_c</th> <td>0.000099</td> <td>0.000021</td> </tr> <tr> <th>13dpg_c</th> <td>0.000184</td> <td>0.000141</td> </tr> <tr> <th>f6p_c</th> <td>0.001214</td> <td>0.001711</td> </tr> <tr> <th>dhap_c</th> <td>0.004772</td> <td>0.004062</td> </tr> <tr> <th>g6p_c</th> <td>0.003431</td> <td>0.002435</td> </tr> <tr> <th>pep_c</th> <td>0.001276</td> <td>0.001276</td> </tr> <tr> <th>pyr_c</th> <td>0.005079</td> <td>0.001420</td> </tr> <tr> <th>2pg_c</th> <td>0.002876</td> <td>0.001182</td> </tr> <tr> <th>3pg_c</th> <td>0.002876</td> <td>0.006999</td> </tr> <tr> <th>fdp_c</th> <td>0.004196</td> <td>0.009390</td> </tr> <tr> <th>gdp_c</th> <td>0.001206</td> <td>0.001206</td> </tr> </tbody> </table> </div> ```python Keq_comparison_fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(5, 5)) plot_comparison( x=Keq_data["Equilibrium Constant"], y=conc_solution, compare="Keqs", ax=ax, legend="right outside", plot_function="loglog", xlim=(1e-5, 1e5), ylim=(1e-5, 1e5), xy_line=True, xy_legend="best", xlabel="Initial", ylabel="Adjusted") conc_comparison_fig.tight_layout() update_model_with_concentration_solution( mass_model, conc_solution, concentrations=True, Keqs=True, inplace=True); ``` #### Export data for analysis ```python Keq_data_for_comparison = pd.concat(objs=(Keq_data, conc_solution.Keqs), axis=1).dropna() Keq_data_for_comparison.index.name = "ID" Keq_data_for_comparison.columns = ["Initial", "Adjusted"] Keq_data_for_comparison.to_csv("./data/analysis_data/equilibrium_constants.csv") Keq_data_for_comparison ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Initial</th> <th>Adjusted</th> </tr> <tr> <th>ID</th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>Keq_PGI</th> <td>0.364000</td> <td>0.258394</td> </tr> <tr> <th>Keq_PFK</th> <td>2000.000000</td> <td>2000.000000</td> </tr> <tr> <th>Keq_FBP</th> <td>41.500000</td> <td>41.500000</td> </tr> <tr> <th>Keq_FBA</th> <td>0.000160</td> <td>0.000358</td> </tr> <tr> <th>Keq_TPI</th> <td>0.106952</td> <td>0.203736</td> </tr> <tr> <th>Keq_GAPD</th> <td>0.452000</td> <td>0.586631</td> </tr> <tr> <th>Keq_PGK</th> <td>0.000530</td> <td>0.000530</td> </tr> <tr> <th>Keq_PGM</th> <td>5.300000</td> <td>2.178140</td> </tr> <tr> <th>Keq_ENO</th> <td>5.190000</td> <td>5.190000</td> </tr> <tr> <th>Keq_PYK</th> <td>21739.130400</td> <td>21739.130400</td> </tr> <tr> <th>Keq_PPS</th> <td>2.410000</td> <td>2.410000</td> </tr> <tr> <th>Keq_LDH_D</th> <td>0.000067</td> <td>0.000240</td> </tr> </tbody> </table> </div> ```python # Fix Metabolite IDs as SBML compatible before next step for metabolite in mass_model.metabolites: if metabolite.id[0].isdigit(): metabolite.id = f"_{metabolite.id}" mass_model.repair() ``` ```python n_models = 5 ``` ```python conc_solver = ConcSolver( mass_model, excluded_metabolites=["h_c", "h2o_c"], constraint_buffer=1, equilibrium_reactions=[x.id for x in mass_model.reactions if x.steady_state_flux == 0] ) conc_solver.setup_sampling_problem( fixed_conc_bounds=list(mass_model.fixed), ) for variable in conc_solver.variables: try: met = mass_model.metabolites.get_by_id(variable.name) variable.lb, variable.ub = np.log([met.ic / 10, met.ic * 10]) except: pass conc_samples = sample_concentrations(conc_solver, n=n_models, seed=4) 
conc_samples ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>f6p_c</th> <th>g6p_c</th> <th>adp_c</th> <th>atp_c</th> <th>fdp_c</th> <th>pi_c</th> <th>dhap_c</th> <th>g3p_c</th> <th>_13dpg_c</th> <th>nad_c</th> <th>nadh_c</th> <th>_3pg_c</th> <th>_2pg_c</th> <th>pep_c</th> <th>pyr_c</th> <th>amp_c</th> <th>lac__D_c</th> <th>gdp_c</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0.010470</td> <td>0.010661</td> <td>0.013486</td> <td>0.105633</td> <td>0.024770</td> <td>0.004267</td> <td>0.005646</td> <td>0.000339</td> <td>0.000149</td> <td>0.073327</td> <td>0.000050</td> <td>0.011918</td> <td>0.001586</td> <td>0.001170</td> <td>0.000168</td> <td>0.000468</td> <td>0.000491</td> <td>0.000121</td> </tr> <tr> <th>1</th> <td>0.000553</td> <td>0.000743</td> <td>0.003268</td> <td>0.001477</td> <td>0.005574</td> <td>0.004123</td> <td>0.002493</td> <td>0.000171</td> <td>0.000108</td> <td>0.112689</td> <td>0.000071</td> <td>0.038825</td> <td>0.005840</td> <td>0.002756</td> <td>0.001839</td> <td>0.008645</td> <td>0.004783</td> <td>0.000456</td> </tr> <tr> <th>2</th> <td>0.003326</td> <td>0.004251</td> <td>0.003567</td> <td>0.001996</td> <td>0.011121</td> <td>0.011735</td> <td>0.003586</td> <td>0.000109</td> <td>0.000027</td> <td>0.147767</td> <td>0.000113</td> <td>0.019942</td> <td>0.000497</td> <td>0.000485</td> <td>0.000204</td> <td>0.004712</td> <td>0.000692</td> <td>0.000146</td> </tr> <tr> <th>3</th> <td>0.001739</td> <td>0.002718</td> <td>0.009105</td> <td>0.001749</td> <td>0.002127</td> <td>0.008857</td> <td>0.002809</td> <td>0.000071</td> <td>0.000029</td> <td>0.043257</td> <td>0.000055</td> <td>0.038715</td> <td>0.000583</td> <td>0.000700</td> <td>0.000203</td> <td>0.002591</td> <td>0.001066</td> <td>0.000122</td> </tr> <tr> <th>4</th> <td>0.001519</td> <td>0.001621</td> <td>0.004541</td> <td>0.002020</td> <td>0.003339</td> <td>0.012093</td> <td>0.000720</td> <td>0.000049</td> <td>0.000028</td> <td>0.033218</td> <td>0.000030</td> <td>0.028609</td> <td>0.000541</td> <td>0.000468</td> <td>0.000170</td> <td>0.001633</td> <td>0.000637</td> <td>0.000121</td> </tr> </tbody> </table> </div> ### Balance network with pseudoreactions and calculate PERCs ```python models_for_ensemble = [] for idx, conc_sample in conc_samples.iterrows(): # Make copy of new model new_model = mass_model.copy() new_model.id += "_C{0:d}".format(idx) print(f"Creating model {new_model.id}") # Get concentration sample and update model with sample new_model.update_initial_conditions(conc_sample.to_dict()) fluxes = np.array(list(new_model.steady_state_fluxes.values())) imbalanced_metabolites = new_model.S.dot(fluxes) # Iterate through metabolites for mid, imbalance in imbalanced_metabolites.iteritems(): # Ignore balanced metabolites if imbalance == 0: continue # Get metabolite object met = new_model.metabolites.get_by_id(mid) # Add boundary reactions for imbalanced metabolites boundary_type = "sink" # Add boundary reaction with imbalance as flux value boundary_reaction = new_model.add_boundary( mid, boundary_type, boundary_condition=met.ic) boundary_reaction.Keq = 1 if imbalance < 0: boundary_reaction.reverse_stoichiometry(inplace=True) imbalance = -imbalance boundary_reaction.kf = imbalance / met.ic boundary_reaction.steady_state_flux = imbalance try: # Update PERCs new_model.calculate_PERCs( fluxes={ r: 
v for r, v in new_model.steady_state_fluxes.items() if not r.boundary}, update_reactions=True) except: print("Negative PERCs for {0}".format(new_model.id)) continue models_for_ensemble.append(new_model) print("Number of models in ensemble: {0:d}".format( len(models_for_ensemble))) ``` Creating model Glycolysis_C0 Creating model Glycolysis_C1 Creating model Glycolysis_C2 Creating model Glycolysis_C3 Creating model Glycolysis_C4 Number of models in ensemble: 5 ```python reference_model = models_for_ensemble[0].copy() reference_model.id = "Glycolysis_Ref_wo_Enzymes" sim = Simulation(reference_model) sim.integrator.absolute_tolerance = 1e-15 sim.integrator.relative_tolerance = 1e-9 tfinal = 1e6 conc_sol, flux_sol = sim.simulate(reference_model, time=(0, tfinal)) conc_sol.view_time_profile() ``` #### Save a reference MASS model w/o enzymes ```python conc_sol, flux_sol = sim.find_steady_state( models=reference_model, strategy="simulate", tfinal=tfinal) if conc_sol and flux_sol: reference_model.update_initial_conditions(conc_sol) reference_model.update_parameters({f"v_{k}": v for k, v in flux_sol.items()}) # Save a reference MASS model save_mass_json_model( mass_model=reference_model, filename=f"./models/mass/without_enzymes/{reference_model.id}.json") print(f"Saving {reference_model.id}") reference_model ``` Saving Glycolysis_Ref_wo_Enzymes <table> <tr> <td><strong>Name</strong></td><td>Glycolysis_Ref_wo_Enzymes</td> </tr><tr> <td><strong>Memory address</strong></td><td>0x07ff0f115da30</td> </tr><tr> <td><strong>Stoichiometric Matrix</strong></td> <td>20x27</td> </tr><tr> <td><strong>Matrix Rank</strong></td> <td>19</td> </tr><tr> <td><strong>Number of metabolites</strong></td> <td>20</td> </tr><tr> <td><strong>Initial conditions defined</strong></td> <td>20/20</td> </tr><tr> <td><strong>Number of reactions</strong></td> <td>27</td> </tr><tr> <td><strong>Number of genes</strong></td> <td>12</td> </tr><tr> <td><strong>Number of enzyme modules</strong></td> <td>0</td> </tr><tr> <td><strong>Number of groups</strong></td> <td>0</td> </tr><tr> <td><strong>Objective expression</strong></td> <td>0</td> </tr><tr> <td><strong>Compartments</strong></td> <td>c</td> </tr> </table> ## Create Enzyme Modules Assume 90% of flux goes through the major isozyme, the remaining through the minor isozyme ```python from construction_functions import make_enzyme_module_from_dir ``` ```python isozyme1_percent = 0.9 isozyme2_percent = 0.1 # Isozymes and flux split percentages, isozymes_and_flux_splits = { "PFK": { "PFK1": isozyme1_percent, "PFK2": isozyme2_percent, }, "FBP": { "FBP1": isozyme1_percent, "FBP2": isozyme2_percent, }, "FBA": { "FBA1": isozyme1_percent, "FBA2": isozyme2_percent, }, "PGM": { "PGMi": isozyme1_percent, "PGMd": isozyme2_percent, }, "PYK": { "PYK1": isozyme1_percent, "PYK2": isozyme2_percent, }, } isozymes_and_flux_splits ``` {'PFK': {'PFK1': 0.9, 'PFK2': 0.1}, 'FBP': {'FBP1': 0.9, 'FBP2': 0.1}, 'FBA': {'FBA1': 0.9, 'FBA2': 0.1}, 'PGM': {'PGMi': 0.9, 'PGMd': 0.1}, 'PYK': {'PYK1': 0.9, 'PYK2': 0.1}} ```python final_ensemble = [] for model in models_for_ensemble: enzyme_modules = {} for reaction in model.reactions.get_by_any(reaction_list): # PGM & PGK needs flux flipped since enzyme module stoichiometry # is reversed compared to lone reaction. 
        if reaction.id in ["PGK", "PGM"]:
            flux = -reaction.steady_state_flux
        else:
            flux = reaction.steady_state_flux
        # Make isozymes
        if reaction.id in isozymes_and_flux_splits:
            isozymes_and_flux_split = isozymes_and_flux_splits[reaction.id]
            isozyme_modules = []
            for isozyme, flux_split in isozymes_and_flux_split.items():
                enzyme_module = make_enzyme_module_from_dir(
                    enzyme_id=isozyme,
                    steady_state_flux=flux * flux_split,  # Split flux for isozymes
                    metabolite_concentrations=model.initial_conditions,
                    path_to_dir="./data/enzyme_module_data",
                    kcluster=1,
                    enzyme_gpr=reaction.gene_reaction_rule,
                    zero_tol=1e-10)
                isozyme_modules += [enzyme_module]
            enzyme_modules[reaction] = isozyme_modules
        else:
            enzyme_module = make_enzyme_module_from_dir(
                enzyme_id=reaction.id,
                steady_state_flux=flux,
                metabolite_concentrations=new_model.initial_conditions,
                path_to_dir="./data/enzyme_module_data",
                kcluster=1,
                enzyme_gpr=reaction.gene_reaction_rule,
                zero_tol=1e-10)
            enzyme_modules[reaction] = [enzyme_module]
    for reaction_to_remove, enzymes_to_add in enzyme_modules.items():
        model.remove_reactions([reaction_to_remove])
        for enzyme in enzymes_to_add:
            model = model.merge(enzyme, inplace=True)
    final_ensemble += [model]
    print(f"Finished {model.id}")
```

    Finished Glycolysis_C0
    Finished Glycolysis_C1
    Finished Glycolysis_C2
    Finished Glycolysis_C3
    Finished Glycolysis_C4

# Inspect a model

```python
reference_model = models_for_ensemble[0].copy()
reference_model.id = "Glycolysis_Ref_w_Enzymes"

sim = Simulation(reference_model)
sim.integrator.absolute_tolerance = 1e-15
sim.integrator.relative_tolerance = 1e-9

tfinal = 1e6
conc_sol, flux_sol = sim.simulate(reference_model, time=(0, tfinal))
conc_sol.view_time_profile(plot_function="semilogx")
```

#### Save a reference MASS model w/ enzymes

```python
conc_sol, flux_sol = sim.find_steady_state(
    models=reference_model, strategy="simulate",
    update_values=True,
    tfinal=tfinal)

if conc_sol and flux_sol:
    # Save a reference MASS model
    save_mass_json_model(
        mass_model=reference_model,
        filename=f"./models/mass/with_enzymes/{reference_model.id}.json")
    print(f"Saving {reference_model.id}")
reference_model
```

    Saving Glycolysis_Ref_w_Enzymes

<table>
  <tr><td><strong>Name</strong></td><td>Glycolysis_Ref_w_Enzymes</td></tr>
  <tr><td><strong>Memory address</strong></td><td>0x07ff0c1f03880</td></tr>
  <tr><td><strong>Stoichiometric Matrix</strong></td><td>120x121</td></tr>
  <tr><td><strong>Matrix Rank</strong></td><td>102</td></tr>
  <tr><td><strong>Number of metabolites</strong></td><td>120</td></tr>
  <tr><td><strong>Initial conditions defined</strong></td><td>120/120</td></tr>
  <tr><td><strong>Number of reactions</strong></td><td>121</td></tr>
  <tr><td><strong>Number of genes</strong></td><td>19</td></tr>
  <tr><td><strong>Number of enzyme modules</strong></td><td>17</td></tr>
  <tr><td><strong>Number of groups</strong></td><td>0</td></tr>
  <tr><td><strong>Objective expression</strong></td><td>0</td></tr>
  <tr><td><strong>Compartments</strong></td><td>c</td></tr>
</table>

## Simulate to steady state and export ensemble

```python
sim = Simulation(final_ensemble[0])
sim.integrator.absolute_tolerance = 1e-15
sim.integrator.relative_tolerance = 1e-9
sim.add_models(final_ensemble[1:], disable_safe_load=True) for model in final_ensemble: # Attempt to determine steady state conc_sol, flux_sol = sim.find_steady_state( models=model, strategy="simulate", update_values=True, tfinal=tfinal) if conc_sol and flux_sol: # Save a reference MASS model save_mass_json_model( mass_model=model, filename=f"./models/mass/with_enzymes/{model.id}.json") print(f"Saving {model.id}") else: print(f"No steady state for {model.id}.") ``` Saving Glycolysis_C0 Saving Glycolysis_C1 Saving Glycolysis_C2 Saving Glycolysis_C3 Saving Glycolysis_C4 ```python ```
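The exported models can be reloaded for downstream work. This is a sketch only: it assumes `mass.io.json` exposes a `load_json_model` counterpart to the `save_json_model` imported at the top of this notebook.

```python
# Hypothetical reload of the saved ensemble (assumes load_json_model exists
# in mass.io.json, mirroring the save_json_model used above).
from mass.io.json import load_json_model

reloaded_ensemble = [
    load_json_model(f"./models/mass/with_enzymes/Glycolysis_C{i}.json")
    for i in range(n_models)
]
print([model.id for model in reloaded_ensemble])
```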
" Создать десять переменных разных типов, проверить их тип с помощью функции typeof(). Проверить с помощью этой функции типы следующих констант 29, 23i, -34L, 2/3, 4/2, 0xA, 0XbL – 120L, 0XbL – 120, 0XbL * 17. Объяснить полученные результаты. " #Десять переменных print("Вывод 10 переменных разных типов") a <- "String" typeof(a) b <- 1.2 typeof(b) c1 <- c(1,2,5.3,6,-2,4) typeof(c1) d <- 10i typeof(d) e <- matrix(1:20, nrow=5,ncol=4) typeof(e) f <- list(name="Fred", age=5.3) typeof(f) g <- c(rep("male",20), rep("female", 30)) g <- factor(g) typeof(g) h = FALSE print(h) i = 0xAb typeof(i) #Вывод констант и их типов print("Вывод констант и их типов") localelist <- list(29, 23i, -34L, 2/3, 4/2, 0xA, 0XbL - 120L, 0XbL - 120, 0XbL * 17.) for (element in localelist) { print(element) print(typeof(element)) }
! { dg-do compile } ! { dg-options "-std=legacy" } ! ! PR 32938 subroutine r (*) integer(kind=8) :: i return i end
Formal statement is: lemma connected_component_set_empty [simp]: "connected_component_set {} x = {}" Informal statement is: The connected component of the empty set is the empty set.
function [r2,mn,st]= cv_mean(a,r,loss_type,param1)
% Summarize cross-validation results r via get_mean2: r2 is the first
% output of get_mean2 without the final flag, mn is the first output with
% the flag set, and st is the second output of the last call.
% Note: inputs a and param1 are currently unused.
if nargin<3
    [r2,st]=get_mean2(r);
    [mn,st]=get_mean2(r,[],1);
else
    [r2,st]=get_mean2(r,loss_type);
    [mn,st]=get_mean2(r,loss_type,1);
end
Even after complete remission is achieved, leukemic cells likely remain in numbers too small to be detected with current diagnostic techniques. If no further postremission or consolidation therapy is given, almost all people with AML will eventually relapse. Therefore, more therapy is necessary to eliminate residual disease and prevent relapse — that is, to achieve a cure.
{-# LANGUAGE BangPatterns, CPP, FlexibleContexts, GADTs, OverloadedStrings, RankNTypes #-} {- | Module : TestBench.Evaluate Description : Tree-based representation for Criterion and Weigh Copyright : (c) Ivan Lazar Miljenovic License : MIT Maintainer : [email protected] An extremely simple rose tree-based representation of criterion benchmarks and weigh measurements. -} module TestBench.Evaluate ( -- * Types EvalTree , EvalForest , Eval(..) -- ** Weights , GetWeight , getWeight , getWeightIO -- * Conversion , flattenBenchTree , flattenBenchForest -- * Running benchmarks , evalForest -- ** Weighing individual functions , weighIndex ) where import TestBench.Commands (resetUnusedConfig, weighFileArg, weighIndexArg) import TestBench.LabelTree import Criterion.Analysis (OutlierVariance(ovFraction), SampleAnalysis(..)) import Criterion.Internal (runAndAnalyseOne) import Criterion.Measurement (initializeTime, secs) import Criterion.Measurement.Types (Benchmark, Benchmarkable, bench, bgroup) import Criterion.Monad (withConfig) import Criterion.Types (Config(..), DataRecord(..), Report(..)) import Statistics.Types (ConfInt(..), Estimate(..)) import Weigh (weighAction, weighFunc) import Data.Csv (DefaultOrdered(..), Field, Name, ToField, ToNamedRecord(..), ToRecord(..), header, namedRecord, record, toField) import qualified Data.DList as DL import Streaming (Of, Stream) import Streaming.Cassava (encodeByNameDefault) import qualified Streaming.Prelude as S import Streaming.With (writeBinaryFile) import Control.Applicative (liftA2) import Control.DeepSeq (NFData) import Control.Monad (join, when, zipWithM_) import Control.Monad.IO.Class (liftIO) import Control.Monad.Trans.Class (lift) import Control.Monad.Trans.State.Strict (StateT, evalStateT, get, put) import Data.Int (Int64) import Data.List (intercalate) import Data.Maybe (isJust, listToMaybe, mapMaybe) import Data.String (IsString) import System.Environment (getExecutablePath) import System.Exit (ExitCode(..)) import System.IO (hClose) import System.IO.Temp (withSystemTempFile) import System.Process (rawSystem) import Text.Printf (printf) #if MIN_VERSION_base (4,9,0) import Data.Semigroup (Semigroup(..)) #endif -------------------------------------------------------------------------------- -- | A more explicit tree-like structure for benchmarks than using -- Criterion's 'Benchmark' type. type EvalTree = LabelTree Eval type EvalForest = [EvalTree] data Eval = Eval { eName :: !String , eBench :: !(Maybe Benchmarkable) , eWeigh :: !(Maybe GetWeight) } -- | The results from measuring memory usage. -- -- @since 0.2.0.0 data GetWeight where GetWeight :: forall a b. (NFData b) => (a -> b) -> a -> GetWeight GetWeightIO :: forall a b. (NFData b) => (a -> IO b) -> a -> GetWeight runGetWeight :: GetWeight -> IO Weight runGetWeight gw = mkWeight <$> case gw of GetWeight f a -> weighFunc f a GetWeightIO f a -> weighAction f a where mkWeight (b,gc,_,_) = Weight b gc data Weight = Weight { bytesAlloc :: !Int64 , numGC :: !Int64 } deriving (Eq, Ord, Show, Read) -- | How to weigh a function. -- -- @since 0.2.0.0 getWeight :: (NFData b) => (a -> b) -> a -> GetWeight getWeight = GetWeight -- | An IO-based variant of 'getWeight'. -- -- @since 0.2.0.0 getWeightIO :: (NFData b) => (a -> IO b) -> a -> GetWeight getWeightIO = GetWeightIO flattenBenchTree :: EvalTree -> Maybe Benchmark flattenBenchTree = fmap (foldLTree (const bgroup) (flip const)) . mapMaybeTree (liftA2 fmap (bench . 
eName) eBench) -- | Remove the explicit tree-like structure into the implicit one -- used by Criterion. -- -- Useful for embedding the results into an existing benchmark -- suite. flattenBenchForest :: EvalForest -> [Benchmark] flattenBenchForest = mapMaybe flattenBenchTree -- | Run the specified benchmarks, printing the results (once they're -- all complete) to stdout in a tabular format for easier -- comparisons. evalForest :: Config -> EvalForest -> IO () evalForest cfg ef = do when (hasBench ep) initializeTime let ec = EC cfg ep printHeaders ep (`evalStateT` zeroIndex) . maybeCSV . S.mapM_ (liftIO . printRow ep) . S.copy $ toRows ec ef where ep = checkForest ef -- Ideally we would do the S.copy here only if we're writing to -- CSV; unfortunately, as we're not longer using ResourceT we -- don't have a MonadMask instance and thus can't use another -- Stream as the inner Monad; this /must/ be run after processing -- the stdout version. maybeCSV = maybe S.effects streamCSV (csvFile cfg) -- In reality, this type signature contains StateT, but that -- over-complicates understanding what it does, and to specify it -- generically requires bringing in MonadBaseControl and -- MonadThrow. -- streamCSV :: FilePath -> Stream (Of Row) IO () -> IO () streamCSV fp = writeBinaryFile fp . encodeByNameDefault . S.filter isLeaf data EvalParams = EP { hasBench :: !Bool , hasWeigh :: !Bool , nameWidth :: !Width } deriving (Eq, Ord, Show, Read) #if MIN_VERSION_base (4,9,0) instance Semigroup EvalParams where (<>) = appendEP #endif instance Monoid EvalParams where mempty = EP { hasBench = False , hasWeigh = False , nameWidth = 0 } mappend = appendEP appendEP :: EvalParams -> EvalParams -> EvalParams appendEP ec1 ec2 = EP { hasBench = appendBy hasBench , hasWeigh = appendBy hasWeigh , nameWidth = nameWidth ec1 `max` nameWidth ec2 } where appendBy f = f ec1 || f ec2 checkForest :: EvalForest -> EvalParams checkForest = mconcat . map (foldLTree mergeNode calcConfig) where mergeNode d lbl ls = mempty { nameWidth = width d lbl } `mappend` mconcat ls calcConfig d e = EP { hasBench = isJust (eBench e) , hasWeigh = isJust (eWeigh e) , nameWidth = width d (eName e) } width d nm = indentPerLevel * d + length nm data EvalConfig = EC { benchConfig :: {-# UNPACK #-}!Config , evalParam :: {-# UNPACK #-}!EvalParams } deriving (Eq, Show, Read) -------------------------------------------------------------------------------- type PathList = DL.DList String data Row = Row { rowLabel :: !String , rowPath :: !PathList -- ^ Invariant: length == rowDepth , rowDepth :: {-# UNPACK #-} !Depth , isLeaf :: !Bool , rowBench :: !(Maybe BenchResults) , rowWeight :: !(Maybe Weight) } deriving (Eq, Show, Read) pathLabel :: Row -> String pathLabel row = intercalate "/" (DL.toList (DL.snoc (rowPath row) (rowLabel row))) -- | Unlike terminal output, this instance creates columns for benchmarks, weighing, etc. even if they're not used. instance ToRecord Row where toRecord row = record (toField fmtLabel : benchRecord ++ weightRecord) where fmtLabel = pathLabel row benchRecord = timed resMean ++ timed resStdDev ++ [bField (ovFraction . resOutVar)] where bField = mField . (. rowBench) . fmap timed f = map (bField . (. f)) [estPoint, estLowerBound, estUpperBound] weightRecord = [wField bytesAlloc, wField numGC] where wField = mField . (. rowWeight) . fmap mField :: (ToField a) => (Row -> Maybe a) -> Field mField f = maybe mempty toField (f row) -- | Unlike terminal output, this instance creates columns for benchmarks, weighing, etc. 
even if they're not used. instance ToNamedRecord Row where toNamedRecord row = namedRecord (fmtLabel : benchRecord ++ weightRecord) where fmtLabel = (labelName, toField (pathLabel row)) benchRecord = zip benchNames (timed resMean ++ timed resStdDev ++ [bField (ovFraction . resOutVar)]) where bField = mField . (. rowBench) . fmap timed f = map (bField . (. f)) [estPoint, estLowerBound, estUpperBound] weightRecord = zip weighNames [wField bytesAlloc, wField numGC] where wField = mField . (. rowWeight) . fmap mField :: (ToField a) => (Row -> Maybe a) -> Field mField f = maybe mempty toField (f row) instance DefaultOrdered Row where headerOrder _ = header (labelName : benchNames ++ weighNames) toRows :: EvalConfig -> EvalForest -> Stream (Of Row) (StateT Index IO) () toRows cfg = f2r DL.empty where f2r :: PathList -> EvalForest -> Stream (Of Row) (StateT Index IO) () f2r pl = mapM_ (t2r pl) t2r :: PathList -> EvalTree -> Stream (Of Row) (StateT Index IO) () t2r pl bt = case bt of Leaf d e -> do i <- lift get lift (put (i+1)) r <- liftIO (makeRow cfg pl i d e) S.yield r Branch d lbl ts -> S.cons (Row lbl pl d False Nothing Nothing) (f2r (pl `DL.snoc` lbl) ts) makeRow :: EvalConfig -> PathList -> Index -> Depth -> Eval -> IO Row makeRow cfg pl idx d e = Row lbl pl d True <$> tryRun hasBench eBench (getBenchResults (benchConfig cfg) lbl) <*> tryRun hasWeigh eWeigh (const (tryGetWeight idx)) where lbl = eName e ep = evalParam cfg tryRun :: (EvalParams -> Bool) -> (Eval -> Maybe a) -> (a -> IO (Maybe b)) -> IO (Maybe b) tryRun p f r = if p ep then maybe (return Nothing) r (f e) else return Nothing data BenchResults = BenchResults { resMean :: !(Estimate ConfInt Double) , resStdDev :: !(Estimate ConfInt Double) , resOutVar :: !OutlierVariance } deriving (Eq, Show, Read) getBenchResults :: Config -> String -> Benchmarkable -> IO (Maybe BenchResults) getBenchResults cfg lbl b = do dr <- withConfig cfg' (runAndAnalyseOne i lbl b) return $ case dr of Measurement{} -> Nothing Analysed rpt -> Just $ let sa = reportAnalysis rpt in BenchResults { resMean = anMean sa , resStdDev = anStdDev sa , resOutVar = anOutlierVar sa } where -- Set this here just in case someone didn't use the top-level -- 'testBench' function. -- -- Also, just in case a CSV file is being outputted, don't try and -- write to it. cfg' = resetUnusedConfig cfg { csvFile = Nothing } i = 0 -- We're ignoring this value anyway, so it should be OK to -- just set it. -------------------------------------------------------------------------------- type Index = Int zeroIndex :: Index zeroIndex = 0 tryGetWeight :: Index -> IO (Maybe Weight) tryGetWeight idx = withSystemTempFile "testBench.weigh" $ \fp h -> do -- We use a temporary file in case the program prints something else -- out to stdout. hClose h -- We're not writing to it, just need the file exe <- getExecutablePath ec <- rawSystem exe ["--" ++ weighIndexArg, show idx, "--" ++ weighFileArg, fp, "+RTS", "-T", "-RTS"] case ec of ExitFailure{} -> return Nothing ExitSuccess -> do out <- readFile fp case reads out of [(!mw,_)] -> return mw _ -> return Nothing weighIndex :: EvalForest -> Index -> IO (Maybe Weight) weighIndex ef = fmap join . mapM (mapM runGetWeight . eWeigh) . index es where es = concatMap leaves ef index :: [a] -> Index -> Maybe a index as n = listToMaybe . 
drop n $ as -------------------------------------------------------------------------------- printHeaders :: EvalParams -> IO () printHeaders ep = do putStr (replicate (nameWidth ep) ' ') when (hasBench ep) (mapM_ toPrintf benchHeaders) when (hasWeigh ep) (mapM_ toPrintf weighHeaders) putStr "\n" where toPrintf (w,hdr) = printf "%s%*s" columnSpace w hdr printRow :: EvalParams -> Row -> IO () printRow ep r = do printf "%-*s" (nameWidth ep) label when (hasBench ep) (printBench (rowBench r)) when (hasWeigh ep) (printWeigh (rowWeight r)) putStr "\n" where label :: String label = printf "%s%s" (replicate (rowDepth r * indentPerLevel) ' ') (rowLabel r) type Width = Int indentPerLevel :: Width indentPerLevel = 2 columnGap :: Width columnGap = 2 columnSpace :: String columnSpace = replicate columnGap ' ' labelName :: Name labelName = "Label" benchNames :: (IsString str) => [str] benchNames = ["Mean", "MeanLB", "MeanUB", "Stddev", "StddevLB", "StddevUB", "OutlierVariance"] benchHeaders :: [(Width, String)] benchHeaders = map addWidth benchNames weighNames :: (IsString str) => [str] weighNames = ["AllocBytes", "NumGC"] weighHeaders :: [(Width, String)] weighHeaders = map addWidth weighNames -- Maximum width a numeric field can take. Might as well make them -- all the same width. All other formatters have been manually -- adjusted to produce nothing longer than this. secsWidth :: Width secsWidth = length (secs ((-pi) / 1000)) addWidth :: String -> (Width, String) addWidth nm = (max (length nm) secsWidth, nm) printBench :: Maybe BenchResults -> IO () printBench mr = zipWithM_ (printf "%s%*s" columnSpace) wdths cols where cols = maybe (repeat "") (\r -> timed (resMean r) ++ timed (resStdDev r) ++ [ov r]) mr timed bs = [secIt estPoint, secIt estLowerBound, secIt estUpperBound] where secIt f = secs (f bs) ov r = percent (ovFraction (resOutVar r)) wdths = map fst benchHeaders estLowerBound :: (Num a) => Estimate ConfInt a -> a estLowerBound e = estPoint e - confIntLDX (estError e) estUpperBound :: (Num a) => Estimate ConfInt a -> a estUpperBound e = estPoint e + confIntUDX (estError e) printWeigh :: Maybe Weight -> IO () printWeigh mr = zipWithM_ (printf "%s%*s" columnSpace) wdths cols where cols = maybe (repeat "") (\r -> [bytes (bytesAlloc r), count (numGC r)]) mr wdths = map fst weighHeaders percent :: Double -> String percent p = printf "%.3f%%" (p * 100) -- | Human-readable description of the number of bytes used. Assumed -- non-negative. bytes :: Int64 -> String bytes b = printf "%.3f %sB" val p where prefixes = [" ", "K", "M", "G", "T", "P", "E"] :: [String] base = 1024 :: Num a => a mult | b == 0 = 0 | otherwise = floor (logBase (base :: Double) (fromIntegral b)) val = fromIntegral b / (base ^ mult) p = prefixes !! mult count :: Int64 -> String count = printf "%.3e" . (`asTypeOf` (0::Double)) . fromIntegral
#include "lah.h" #ifdef HAVE_LAPACK #include <lapacke.h> lah_Return lah_solveLU(lah_mat *B, lah_mat const *L, lapack_int *ipiv) { if ( L == NULL || B == NULL || L->nR != B->nR || B->nR != L->nC) { return lahReturnParameterError; } /* if (matrix_layout == LAPACK_COL_MAJOR) { lda = L->nR; ldb = B->nR; } else { lda = L->nC; ldb = B->nC; } */ return (!GETRS(LAH_LAPACK_LAYOUT, 'N', B->nR, B->nC, L->data , LAH_LEADING_DIM(L), ipiv, B->data, LAH_LEADING_DIM(B))) ? lahReturnOk : lahReturnExternError; } #endif
function fmridat = rescale(fmridat, meth, varargin) % Rescales data in an fmri_data object % Data is observations x images, so operating on the columns operates on % images, and operating on the rows operates on voxels (or variables more % generally) across images. % % :Usage: % :: % % fmridat = rescale(fmridat, meth) % % :Inputs: % % **Methods:** % Rescaling of voxels % - centervoxels subtract voxel means % - zscorevoxels subtract voxel means, divide by voxel std. dev % - rankvoxels replace values in each voxel with rank across images % % *Note: these methods must exclude invalid (0 or % NaN) voxels image-wise. Some images (but not others) in an % object may be missing some voxels. % % Rescaling of images % - centerimages subtract image means % - zscoreimages subtract image means, divide by image std. dev % % *Note: these methods must exclude invalid (0 or % NaN) voxels image-wise. Some images (but not others) in an % object may be missing some voxels. % % - l2norm_images divide each image by its l2 norm, multiply by sqrt(n valid voxels) % - divide_by_csf_l2norm divide each image by CSF l2 norm. requires MNI space images for ok results! % - rankimages rank voxels within image; % - csf_mean_var Subtract mean CSF signal from image and divide by CSF variance. (Useful for PE % maps where CSF mean and var should be 0). % % Other procedures % - windsorizevoxels Winsorize extreme values to 3 SD for each voxel (separately) using trimts % - percentchange Scale each voxel (row) to percent signal change with a mean of 100 % based on smoothed mean values across space (cols), using iimg_smooth_3d % - tanh Rescale variables by hyperbolic tangent function % this will shrink the tails (and outliers) towards the mean, % making subsequent algorithms more robust to % outliers. Affects normality of distribution. % % Appropriate for multi-session (time series) only: % - session_global_percent_change % - session_global_z % - session_multiplicative % % See also fmri_data.preprocess switch meth case 'centervoxels' % Consider missing voxels, and exclude case-wise (image-wise) ismissing = fmridat.dat == 0 | isnan(fmridat.dat); fmridat.dat(ismissing) = NaN; m = nanmean(fmridat.dat'); fmridat.dat = (fmridat.dat' - m)'; % Note: Sizes are wrong without repmat, but Matlab 2020b at least figures this out. % Old - this does not handle different missing voxels in different images % fmridat.dat = scale(fmridat.dat', 1)'; fmridat.dat(ismissing) = 0; % Replace with 0 for compatibility with image format fmridat.history{end+1} = 'Centered voxels (rows) across images'; case 'zscorevoxels' % Consider missing voxels, and exclude case-wise (image-wise) ismissing = fmridat.dat == 0 | isnan(fmridat.dat); fmridat.dat(ismissing) = NaN; m = nanmean(fmridat.dat'); s = nanstd(fmridat.dat'); fmridat.dat = ((fmridat.dat' - m) ./ s)'; % Note: Sizes are wrong without repmat, but Matlab 2020b at least figures this out.
fmridat.dat(ismissing) = 0; % Replace with 0 for compatibility with image format % Old - this does not handle different missing voxels in different images % fmridat.dat = scale(fmridat.dat')'; fmridat.history{end+1} = 'Z-scored voxels (rows) across images'; case 'rankvoxels' for i = 1:size(fmridat.dat, 1) % for each voxel d = fmridat.dat(i, :)'; % Consider missing voxels, and exclude case-wise (image-wise) ismissing = d == 0 | isnan(d); d(ismissing) = NaN; if ~all(d == 0) fmridat.dat(i, ~ismissing) = rankdata(d(~ismissing))'; end end ismissing = fmridat.dat == 0 | isnan(fmridat.dat); fmridat.dat(ismissing) = 0; % Replace with 0 for compatibility with image format fmridat.history{end+1} = 'Ranked voxels (rows) across images'; case 'rankimages' dat = zeros(size(fmridat.dat)); parfor i = 1:size(fmridat.dat, 2) d = fmridat.dat(:,i); % Consider missing voxels, and exclude case-wise (image-wise) ismissing = d == 0 | isnan(d); d(ismissing) = NaN; if ~all(d == 0) d(~ismissing) = rankdata(d(~ismissing)); end dat(:, i) = d; end dat(isnan(dat)) = 0; % Replace with 0 for compatibility with image format fmridat.dat = dat; fmridat.history{end+1} = 'Ranked images (columns) across voxels'; case 'centerimages' % center images (observations) % Consider missing voxels, and exclude case-wise (image-wise) ismissing = fmridat.dat == 0 | isnan(fmridat.dat); fmridat.dat(ismissing) = NaN; m = nanmean(fmridat.dat); fmridat.dat = (fmridat.dat - m); % Note: Sizes are wrong without repmat, but Matlab 2020b at least figures this out. fmridat.dat(ismissing) = 0; % Replace with 0 for compatibility with image format % Old - this does not handle different missing voxels in different images % fmridat.dat = scale(fmridat.dat, 1); fmridat.history{end+1} = 'Centered images (columns) across voxels'; case 'zscoreimages' % Consider missing voxels, and exclude case-wise (image-wise) ismissing = fmridat.dat == 0 | isnan(fmridat.dat); fmridat.dat(ismissing) = NaN; m = nanmean(fmridat.dat); s = nanstd(fmridat.dat); fmridat.dat = (fmridat.dat - m) ./ s; % Note: Sizes are wrong without repmat, but Matlab 2020b at least figures this out. fmridat.dat(ismissing) = 0; % Replace with 0 for compatibility with image format % Old - this does not handle different missing voxels in different images % fmridat.dat = scale(fmridat.dat); fmridat.history{end+1} = 'Z-scored each image (columns) across voxels, excluding missing values (0 or NaN) for the image'; case 'doublecenter' % Consider missing voxels, and exclude case-wise (image-wise) ismissing = fmridat.dat == 0 | isnan(fmridat.dat); fmridat.dat(ismissing) = NaN; imagemeans = nanmean(fmridat.dat); [v, n] = size(fmridat.dat); imagemeanmatrix = repmat(imagemeans, v, 1); dat_doublecent = fmridat.dat - imagemeanmatrix; voxelmeans = nanmean(dat_doublecent, 2); voxelmeanmatrix = repmat(voxelmeans, 1, n); dat_doublecent = dat_doublecent - voxelmeanmatrix; fmridat.dat = dat_doublecent; fmridat.dat(ismissing) = 0; % Replace with 0 for compatibility with image format fmridat.history{end+1} = 'Double-centered data matrix across images and voxels'; case 'l1norm_images' % Vector L1 norm of vector % divide by this value to normalize image % wh = x ~= 0 & ~isnan(x); % valid values % nvalid = sum(wh); % l1norm = double(sum(abs(x(wh)))) % normalized_x = x .* nvalid ./ l1norm; % divides each voxel x in image by the mean valid voxel value.
normfun = @(x) sum(abs(x)); x = fmridat.dat; % Consider missing voxels, and exclude case-wise (image-wise) ismissing = fmridat.dat == 0 | isnan(fmridat.dat); x(ismissing) = NaN; for i = 1:size(x, 2) % remove nans, 0s xx = x(~ismissing(:, i), i); % divides each voxel xx in image by the mean valid voxel value. n(i) = normfun(xx); xx = xx.* length(xx) ./ n(i); x(~ismissing(:, i), i) = xx; end x(ismissing) = 0; % Replace with 0 for compatibility with image format fmridat.dat = x; case 'l2norm_images' % Vector L2 norm / sqrt(length) of vector % divide by this value to normalize image normfun = @(x) sum(x .^ 2) .^ .5; x = fmridat.dat; % Consider missing voxels, and exclude case-wise (image-wise) ismissing = fmridat.dat == 0 | isnan(fmridat.dat); x(ismissing) = NaN; for i = 1:size(x, 2) % remove nans, 0s xx = x(~ismissing(:, i), i); % isbad = xx == 0 | isnan(xx); % xx(isbad) = []; % divide by sqrt(length) so number of elements will not change scaling n(i) = normfun(xx) ./ sqrt(length(xx)); xx = xx ./ n(i); % x(:, i) = zeroinsert(isbad, xx); x(~ismissing(:, i), i) = xx; end x(ismissing) = 0; % Replace with 0 for compatibility with image format fmridat.dat = x; case 'divide_by_csf_l2norm' [~, ~, ~, l2norms] = extract_gray_white_csf(fmridat); % divide each column image by its respective ventricle l2norm fmridat.dat = bsxfun(@rdivide, fmridat.dat, l2norms(:, 3)') ; case 'session_global_percent_change' nscan = fmridat.images_per_session; % num images per session I = intercept_model(nscan); for i = 1:size(I, 2) wh = find(I(:, i)); y = fmridat.dat(:, wh)'; % y is images x voxels gm = mean(y); % mean at each voxel, 1 x voxels % subtract mean at each vox, divide by global session mean y = (y - repmat(gm, size(y, 1), 1)) ./ std(y(:)); fmridat.dat(:, wh) = y; end case 'session_spm_style' % SPM's default method of global mean scaling % useful for replicating SPM analyses or comparing SPM's scaling to % other methods. % not implemented yet because tor decided to use spm_global on images for comparison; this could be done though... % see help spm_global % nscan = fmridat.images_per_session; % num images per session % I = intercept_model(nscan); % for i = 1:size(I, 2) % % wh = find(I(:, i)); % y = fmridat.dat(:, wh)'; % y is images x voxels % gm = mean(y); % mean at each voxel, 1 x voxels % % % subtract mean at each vox, divide by global session mean % y = (y - repmat(gm, size(y, 1), 1)) ./ std(y(:)); % fmridat.dat(:, wh) = y; % end case 'session_global_z' % scale each session so that global brain mean and global brain std % across time are the same for each session % underlying model: % session-specific shifts in mean signal and scaling exist and are % independent % minus global (whole-brain) mean / global (whole-brain) std. 
% across time nscan = fmridat.images_per_session; % num images per session I = intercept_model(nscan); for i = 1:size(I, 2) wh = find(I(:, i)); y = fmridat.dat(:, wh); gm = mean(y); % mean at each time point % subtract mean at each vox, divide by session global std y = (y - repmat(gm, size(y, 1), 1)) ./ std(y(:)); fmridat.dat(:, wh) = y; end case 'session_multiplicative' % scale - multiplicative % underlying model: % a is a process constant across time, but different for each voxel % (T2 contrast as a function of voxel properties) % b is a process constant across voxels, but different at each time % point within a session % (overall scaling, which varies across time) % these interact multiplicatively to create variation in both % global mean and std deviation jointly, which is why global mean % and global std are intercorrelated. % % image std scales with b, and so does image mean. % model assumes a linear relationship between mean and std with % slope = 1 gm = mean(fmridat.dat, 1); % mean across brain, obs series gs = std(fmridat.dat, 1); create_figure('global mean vs std', 1, 2); plot(gm, gs, 'k.') xlabel('Image mean'); ylabel('Image std'); title('Mean vs. std, before scaling, each dot is 1 image') drawnow % if length(varargin) == 0 || isempty(varargin{1}) % error('Must enter number of images in each session as input argument'); % end if isempty(fmridat.images_per_session) fmridat.images_per_session = size(fmridat.dat, 2); end nscan = fmridat.images_per_session; % num images per session I = intercept_model(nscan); for i = 1:size(I, 2) wh = find(I(:, i)); y = fmridat.dat(:, wh); y(y < 0) = 0; % images should be all positive-valued a = mean(y')'; % mean across time, for each voxel b = mean(y); % mean across voxels, for each time point % ystar is reconstruction based on marginal means ystar = a*b ./ (mean(a)); fmridat.dat(:, wh) = y ./ (ystar + .05*mean(ystar(:))); % like m-estimator; avoid dividing by zero by adding a constant % fmridat.dat(:, wh) = y ./ repmat(b, size(y, 1), 1); % intensity normalization end gm = mean(fmridat.dat, 1); % mean across brain, obs series gs = std(fmridat.dat, 1); subplot(1, 2, 2) plot(gm, gs, 'k.') xlabel('Image mean'); ylabel('Image std'); title('After scaling') drawnow case 'windsorizevoxels' whbad = all(fmridat.dat == 0, 2); nok = sum(~whbad); fprintf('windsorizing %3.0f voxels to 3 std: %05d', nok, 0); for i = 1:nok if mod(i, 100) == 0, fprintf('\b\b\b\b\b%05d', i); end fmridat.dat(i, :) = trimts(fmridat.dat(i, :)', 3, [])'; end fmridat.history{end+1} = 'Windsorized each voxel data series to 3 sd'; case 'tanh' % rescale variables by hyperbolic tangent function % this will shrink the tails (and outliers) towards the mean, % making subsequent algorithms more robust to outliers.
% However, it also truncates and flattens the distribution % (non-normal) fmridat.dat = tanh(zscore(fmridat.dat'))'; case 'percentchange' % scale each voxel (column) to percent signal change with a mean of 100 % based on smoothed mean values across space (cols), using iimg_smooth_3d m = mean(fmridat.dat',1)'; % mean at each voxel, voxels x 1 sfwhm = 16; ms = iimg_smooth_3d(m, fmridat.volInfo, sfwhm, fmridat.removed_voxels); % subtract mean at each vox, divide by global session mean fmridat.dat = 100 + 100 .* (fmridat.dat - repmat(m, 1, size(fmridat.dat, 2))) ./ repmat(ms, 1, size(fmridat.dat, 2)); fmridat.history{end+1} = 'Rescaled to mean 100, voxelwise % signal change with 16 mm fwhm smoothing of divisor mean'; case 'correly' % correl with y % provides implicit feature selection when using algorithms that % are scale-dependent (e.g., SVM, PCA) case 'csf_mean_var' [~,~,tissues] = extract_gray_white_csf(fmridat); csfStd = nanstd(tissues{3}.dat); csfMean = nanmean(tissues{3}.dat); fmridat = fmridat.remove_empty; dat = fmridat.dat; dat = bsxfun(@minus,dat,csfMean); dat = bsxfun(@rdivide,dat,csfStd); fmridat.dat = dat; fmridat = fmridat.replace_empty; otherwise error('Unknown scaling method.') end end
(* Title: HOL/Nonstandard_Analysis/HyperNat.thy Author: Jacques D. Fleuriot Copyright: 1998 University of Cambridge Converted to Isar and polished by lcp *) section \<open>Hypernatural numbers\<close> theory HyperNat imports StarDef begin type_synonym hypnat = "nat star" abbreviation hypnat_of_nat :: "nat \<Rightarrow> nat star" where "hypnat_of_nat \<equiv> star_of" definition hSuc :: "hypnat \<Rightarrow> hypnat" where hSuc_def [transfer_unfold]: "hSuc = *f* Suc" subsection \<open>Properties Transferred from Naturals\<close> lemma hSuc_not_zero [iff]: "\<And>m. hSuc m \<noteq> 0" by transfer (rule Suc_not_Zero) lemma zero_not_hSuc [iff]: "\<And>m. 0 \<noteq> hSuc m" by transfer (rule Zero_not_Suc) lemma hSuc_hSuc_eq [iff]: "\<And>m n. hSuc m = hSuc n \<longleftrightarrow> m = n" by transfer (rule nat.inject) lemma zero_less_hSuc [iff]: "\<And>n. 0 < hSuc n" by transfer (rule zero_less_Suc) lemma hypnat_minus_zero [simp]: "\<And>z::hypnat. z - z = 0" by transfer (rule diff_self_eq_0) lemma hypnat_diff_0_eq_0 [simp]: "\<And>n::hypnat. 0 - n = 0" by transfer (rule diff_0_eq_0) lemma hypnat_add_is_0 [iff]: "\<And>m n::hypnat. m + n = 0 \<longleftrightarrow> m = 0 \<and> n = 0" by transfer (rule add_is_0) lemma hypnat_diff_diff_left: "\<And>i j k::hypnat. i - j - k = i - (j + k)" by transfer (rule diff_diff_left) lemma hypnat_diff_commute: "\<And>i j k::hypnat. i - j - k = i - k - j" by transfer (rule diff_commute) lemma hypnat_diff_add_inverse [simp]: "\<And>m n::hypnat. n + m - n = m" by transfer (rule diff_add_inverse) lemma hypnat_diff_add_inverse2 [simp]: "\<And>m n::hypnat. m + n - n = m" by transfer (rule diff_add_inverse2) lemma hypnat_diff_cancel [simp]: "\<And>k m n::hypnat. (k + m) - (k + n) = m - n" by transfer (rule diff_cancel) lemma hypnat_diff_cancel2 [simp]: "\<And>k m n::hypnat. (m + k) - (n + k) = m - n" by transfer (rule diff_cancel2) lemma hypnat_diff_add_0 [simp]: "\<And>m n::hypnat. n - (n + m) = 0" by transfer (rule diff_add_0) lemma hypnat_diff_mult_distrib: "\<And>k m n::hypnat. (m - n) * k = (m * k) - (n * k)" by transfer (rule diff_mult_distrib) lemma hypnat_diff_mult_distrib2: "\<And>k m n::hypnat. k * (m - n) = (k * m) - (k * n)" by transfer (rule diff_mult_distrib2) lemma hypnat_le_zero_cancel [iff]: "\<And>n::hypnat. n \<le> 0 \<longleftrightarrow> n = 0" by transfer (rule le_0_eq) lemma hypnat_mult_is_0 [simp]: "\<And>m n::hypnat. m * n = 0 \<longleftrightarrow> m = 0 \<or> n = 0" by transfer (rule mult_is_0) lemma hypnat_diff_is_0_eq [simp]: "\<And>m n::hypnat. m - n = 0 \<longleftrightarrow> m \<le> n" by transfer (rule diff_is_0_eq) lemma hypnat_not_less0 [iff]: "\<And>n::hypnat. \<not> n < 0" by transfer (rule not_less0) lemma hypnat_less_one [iff]: "\<And>n::hypnat. n < 1 \<longleftrightarrow> n = 0" by transfer (rule less_one) lemma hypnat_add_diff_inverse: "\<And>m n::hypnat. \<not> m < n \<Longrightarrow> n + (m - n) = m" by transfer (rule add_diff_inverse) lemma hypnat_le_add_diff_inverse [simp]: "\<And>m n::hypnat. n \<le> m \<Longrightarrow> n + (m - n) = m" by transfer (rule le_add_diff_inverse) lemma hypnat_le_add_diff_inverse2 [simp]: "\<And>m n::hypnat. n \<le> m \<Longrightarrow> (m - n) + n = m" by transfer (rule le_add_diff_inverse2) declare hypnat_le_add_diff_inverse2 [OF order_less_imp_le] lemma hypnat_le0 [iff]: "\<And>n::hypnat. 0 \<le> n" by transfer (rule le0) lemma hypnat_le_add1 [simp]: "\<And>x n::hypnat. x \<le> x + n" by transfer (rule le_add1) lemma hypnat_add_self_le [simp]: "\<And>x n::hypnat. 
x \<le> n + x" by transfer (rule le_add2) lemma hypnat_add_one_self_less [simp]: "x < x + 1" for x :: hypnat by (fact less_add_one) lemma hypnat_neq0_conv [iff]: "\<And>n::hypnat. n \<noteq> 0 \<longleftrightarrow> 0 < n" by transfer (rule neq0_conv) lemma hypnat_gt_zero_iff: "0 < n \<longleftrightarrow> 1 \<le> n" for n :: hypnat by (auto simp add: linorder_not_less [symmetric]) lemma hypnat_gt_zero_iff2: "0 < n \<longleftrightarrow> (\<exists>m. n = m + 1)" for n :: hypnat by (auto intro!: add_nonneg_pos exI[of _ "n - 1"] simp: hypnat_gt_zero_iff) lemma hypnat_add_self_not_less: "\<not> x + y < x" for x y :: hypnat by (simp add: linorder_not_le [symmetric] add.commute [of x]) lemma hypnat_diff_split: "P (a - b) \<longleftrightarrow> (a < b \<longrightarrow> P 0) \<and> (\<forall>d. a = b + d \<longrightarrow> P d)" for a b :: hypnat \<comment> \<open>elimination of \<open>-\<close> on \<open>hypnat\<close>\<close> proof (cases "a < b" rule: case_split) case True then show ?thesis by (auto simp add: hypnat_add_self_not_less order_less_imp_le hypnat_diff_is_0_eq [THEN iffD2]) next case False then show ?thesis by (auto simp add: linorder_not_less dest: order_le_less_trans) qed subsection \<open>Properties of the set of embedded natural numbers\<close> lemma of_nat_eq_star_of [simp]: "of_nat = star_of" proof show "of_nat n = star_of n" for n by transfer simp qed lemma Nats_eq_Standard: "(Nats :: nat star set) = Standard" by (auto simp: Nats_def Standard_def) lemma hypnat_of_nat_mem_Nats [simp]: "hypnat_of_nat n \<in> Nats" by (simp add: Nats_eq_Standard) lemma hypnat_of_nat_one [simp]: "hypnat_of_nat (Suc 0) = 1" by transfer simp lemma hypnat_of_nat_Suc [simp]: "hypnat_of_nat (Suc n) = hypnat_of_nat n + 1" by transfer simp lemma of_nat_eq_add: fixes d::hypnat shows "of_nat m = of_nat n + d \<Longrightarrow> d \<in> range of_nat" proof (induct n arbitrary: d) case (Suc n) then show ?case by (metis Nats_def Nats_eq_Standard Standard_simps(4) hypnat_diff_add_inverse of_nat_in_Nats) qed auto lemma Nats_diff [simp]: "a \<in> Nats \<Longrightarrow> b \<in> Nats \<Longrightarrow> a - b \<in> Nats" for a b :: hypnat by (simp add: Nats_eq_Standard) subsection \<open>Infinite Hypernatural Numbers -- \<^term>\<open>HNatInfinite\<close>\<close> text \<open>The set of infinite hypernatural numbers.\<close> definition HNatInfinite :: "hypnat set" where "HNatInfinite = {n. n \<notin> Nats}" lemma Nats_not_HNatInfinite_iff: "x \<in> Nats \<longleftrightarrow> x \<notin> HNatInfinite" by (simp add: HNatInfinite_def) lemma HNatInfinite_not_Nats_iff: "x \<in> HNatInfinite \<longleftrightarrow> x \<notin> Nats" by (simp add: HNatInfinite_def) lemma star_of_neq_HNatInfinite: "N \<in> HNatInfinite \<Longrightarrow> star_of n \<noteq> N" by (auto simp add: HNatInfinite_def Nats_eq_Standard) lemma star_of_Suc_lessI: "\<And>N. 
star_of n < N \<Longrightarrow> star_of (Suc n) \<noteq> N \<Longrightarrow> star_of (Suc n) < N" by transfer (rule Suc_lessI) lemma star_of_less_HNatInfinite: assumes N: "N \<in> HNatInfinite" shows "star_of n < N" proof (induct n) case 0 from N have "star_of 0 \<noteq> N" by (rule star_of_neq_HNatInfinite) then show ?case by simp next case (Suc n) from N have "star_of (Suc n) \<noteq> N" by (rule star_of_neq_HNatInfinite) with Suc show ?case by (rule star_of_Suc_lessI) qed lemma star_of_le_HNatInfinite: "N \<in> HNatInfinite \<Longrightarrow> star_of n \<le> N" by (rule star_of_less_HNatInfinite [THEN order_less_imp_le]) subsubsection \<open>Closure Rules\<close> lemma Nats_less_HNatInfinite: "x \<in> Nats \<Longrightarrow> y \<in> HNatInfinite \<Longrightarrow> x < y" by (auto simp add: Nats_def star_of_less_HNatInfinite) lemma Nats_le_HNatInfinite: "x \<in> Nats \<Longrightarrow> y \<in> HNatInfinite \<Longrightarrow> x \<le> y" by (rule Nats_less_HNatInfinite [THEN order_less_imp_le]) lemma zero_less_HNatInfinite: "x \<in> HNatInfinite \<Longrightarrow> 0 < x" by (simp add: Nats_less_HNatInfinite) lemma one_less_HNatInfinite: "x \<in> HNatInfinite \<Longrightarrow> 1 < x" by (simp add: Nats_less_HNatInfinite) lemma one_le_HNatInfinite: "x \<in> HNatInfinite \<Longrightarrow> 1 \<le> x" by (simp add: Nats_le_HNatInfinite) lemma zero_not_mem_HNatInfinite [simp]: "0 \<notin> HNatInfinite" by (simp add: HNatInfinite_def) lemma Nats_downward_closed: "x \<in> Nats \<Longrightarrow> y \<le> x \<Longrightarrow> y \<in> Nats" for x y :: hypnat using HNatInfinite_not_Nats_iff Nats_le_HNatInfinite by fastforce lemma HNatInfinite_upward_closed: "x \<in> HNatInfinite \<Longrightarrow> x \<le> y \<Longrightarrow> y \<in> HNatInfinite" using HNatInfinite_not_Nats_iff Nats_downward_closed by blast lemma HNatInfinite_add: "x \<in> HNatInfinite \<Longrightarrow> x + y \<in> HNatInfinite" using HNatInfinite_upward_closed hypnat_le_add1 by blast lemma HNatInfinite_diff: "\<lbrakk>x \<in> HNatInfinite; y \<in> Nats\<rbrakk> \<Longrightarrow> x - y \<in> HNatInfinite" by (metis HNatInfinite_not_Nats_iff Nats_add Nats_le_HNatInfinite le_add_diff_inverse) lemma HNatInfinite_is_Suc: "x \<in> HNatInfinite \<Longrightarrow> \<exists>y. x = y + 1" for x :: hypnat using hypnat_gt_zero_iff2 zero_less_HNatInfinite by blast subsection \<open>Existence of an infinite hypernatural number\<close> text \<open>\<open>\<omega>\<close> is in fact an infinite hypernatural number = \<open>[<1, 2, 3, \<dots>>]\<close>\<close> definition whn :: hypnat where hypnat_omega_def: "whn = star_n (\<lambda>n::nat. n)" lemma hypnat_of_nat_neq_whn: "hypnat_of_nat n \<noteq> whn" by (simp add: FreeUltrafilterNat.singleton' hypnat_omega_def star_of_def star_n_eq_iff) lemma whn_neq_hypnat_of_nat: "whn \<noteq> hypnat_of_nat n" by (simp add: FreeUltrafilterNat.singleton hypnat_omega_def star_of_def star_n_eq_iff) lemma whn_not_Nats [simp]: "whn \<notin> Nats" by (simp add: Nats_def image_def whn_neq_hypnat_of_nat) lemma HNatInfinite_whn [simp]: "whn \<in> HNatInfinite" by (simp add: HNatInfinite_def) lemma lemma_unbounded_set [simp]: "eventually (\<lambda>n::nat. m < n) \<U>" by (rule filter_leD[OF FreeUltrafilterNat.le_cofinite]) (auto simp add: cofinite_eq_sequentially eventually_at_top_dense) lemma hypnat_of_nat_eq: "hypnat_of_nat m = star_n (\<lambda>n::nat. m)" by (simp add: star_of_def) lemma SHNat_eq: "Nats = {n. \<exists>N. 
n = hypnat_of_nat N}" by (simp add: Nats_def image_def) lemma Nats_less_whn: "n \<in> Nats \<Longrightarrow> n < whn" by (simp add: Nats_less_HNatInfinite) lemma Nats_le_whn: "n \<in> Nats \<Longrightarrow> n \<le> whn" by (simp add: Nats_le_HNatInfinite) lemma hypnat_of_nat_less_whn [simp]: "hypnat_of_nat n < whn" by (simp add: Nats_less_whn) lemma hypnat_of_nat_le_whn [simp]: "hypnat_of_nat n \<le> whn" by (simp add: Nats_le_whn) lemma hypnat_zero_less_hypnat_omega [simp]: "0 < whn" by (simp add: Nats_less_whn) lemma hypnat_one_less_hypnat_omega [simp]: "1 < whn" by (simp add: Nats_less_whn) subsubsection \<open>Alternative characterization of the set of infinite hypernaturals\<close> text \<open>\<^term>\<open>HNatInfinite = {N. \<forall>n \<in> Nats. n < N}\<close>\<close> text\<open>unused, but possibly interesting\<close> lemma HNatInfinite_FreeUltrafilterNat_eventually: assumes "\<And>k::nat. eventually (\<lambda>n. f n \<noteq> k) \<U>" shows "eventually (\<lambda>n. m < f n) \<U>" proof (induct m) case 0 then show ?case using assms eventually_mono by fastforce next case (Suc m) then show ?case using assms [of "Suc m"] eventually_elim2 by fastforce qed lemma HNatInfinite_iff: "HNatInfinite = {N. \<forall>n \<in> Nats. n < N}" using HNatInfinite_def Nats_less_HNatInfinite by auto subsubsection \<open>Alternative Characterization of \<^term>\<open>HNatInfinite\<close> using Free Ultrafilter\<close> lemma HNatInfinite_FreeUltrafilterNat: "star_n X \<in> HNatInfinite \<Longrightarrow> \<forall>u. eventually (\<lambda>n. u < X n) \<U>" by (metis (full_types) starP2_star_of starP_star_n star_less_def star_of_less_HNatInfinite) lemma FreeUltrafilterNat_HNatInfinite: "\<forall>u. eventually (\<lambda>n. u < X n) \<U> \<Longrightarrow> star_n X \<in> HNatInfinite" by (auto simp add: star_less_def starP2_star_n HNatInfinite_iff SHNat_eq hypnat_of_nat_eq) lemma HNatInfinite_FreeUltrafilterNat_iff: "(star_n X \<in> HNatInfinite) = (\<forall>u. eventually (\<lambda>n. u < X n) \<U>)" by (rule iffI [OF HNatInfinite_FreeUltrafilterNat FreeUltrafilterNat_HNatInfinite]) subsection \<open>Embedding of the Hypernaturals into other types\<close> definition of_hypnat :: "hypnat \<Rightarrow> 'a::semiring_1_cancel star" where of_hypnat_def [transfer_unfold]: "of_hypnat = *f* of_nat" lemma of_hypnat_0 [simp]: "of_hypnat 0 = 0" by transfer (rule of_nat_0) lemma of_hypnat_1 [simp]: "of_hypnat 1 = 1" by transfer (rule of_nat_1) lemma of_hypnat_hSuc: "\<And>m. of_hypnat (hSuc m) = 1 + of_hypnat m" by transfer (rule of_nat_Suc) lemma of_hypnat_add [simp]: "\<And>m n. of_hypnat (m + n) = of_hypnat m + of_hypnat n" by transfer (rule of_nat_add) lemma of_hypnat_mult [simp]: "\<And>m n. of_hypnat (m * n) = of_hypnat m * of_hypnat n" by transfer (rule of_nat_mult) lemma of_hypnat_less_iff [simp]: "\<And>m n. of_hypnat m < (of_hypnat n::'a::linordered_semidom star) \<longleftrightarrow> m < n" by transfer (rule of_nat_less_iff) lemma of_hypnat_0_less_iff [simp]: "\<And>n. 0 < (of_hypnat n::'a::linordered_semidom star) \<longleftrightarrow> 0 < n" by transfer (rule of_nat_0_less_iff) lemma of_hypnat_less_0_iff [simp]: "\<And>m. \<not> (of_hypnat m::'a::linordered_semidom star) < 0" by transfer (rule of_nat_less_0_iff) lemma of_hypnat_le_iff [simp]: "\<And>m n. of_hypnat m \<le> (of_hypnat n::'a::linordered_semidom star) \<longleftrightarrow> m \<le> n" by transfer (rule of_nat_le_iff) lemma of_hypnat_0_le_iff [simp]: "\<And>n. 
0 \<le> (of_hypnat n::'a::linordered_semidom star)" by transfer (rule of_nat_0_le_iff) lemma of_hypnat_le_0_iff [simp]: "\<And>m. (of_hypnat m::'a::linordered_semidom star) \<le> 0 \<longleftrightarrow> m = 0" by transfer (rule of_nat_le_0_iff) lemma of_hypnat_eq_iff [simp]: "\<And>m n. of_hypnat m = (of_hypnat n::'a::linordered_semidom star) \<longleftrightarrow> m = n" by transfer (rule of_nat_eq_iff) lemma of_hypnat_eq_0_iff [simp]: "\<And>m. (of_hypnat m::'a::linordered_semidom star) = 0 \<longleftrightarrow> m = 0" by transfer (rule of_nat_eq_0_iff) lemma HNatInfinite_of_hypnat_gt_zero: "N \<in> HNatInfinite \<Longrightarrow> (0::'a::linordered_semidom star) < of_hypnat N" by (rule ccontr) (simp add: linorder_not_less) end
% test for run length coding % some signal with long cluster of 0/1 n = 1024*2; options.alpha = 0.1; x = load_signal('regular',n,options); x = (x-mean(x))>0; options.rle_coding_mode = 'shannon'; [tmp,nb_shannon] = perform_rle_coding(x, +1, options); disp(sprintf('Shannon = %.2f', nb_shannon)); options.rle_coding_mode = 'arithmetic'; [tmp,nb_arith] = perform_rle_coding(x, +1, options); disp(sprintf('Arithmetic = %.2f', nb_arith)); options.rle_coding_mode = 'arithfixed'; [tmp,nb_arithfixed] = perform_rle_coding(x, +1, options); disp(sprintf('Arithmetic(laplacian) = %.2f', nb_arithfixed)); [tmp,nb_direct] = perform_arithmetic_coding(x, +1); disp(sprintf('Direct(entropy) = %.2f', nb_direct)); options.rle_coding_mode = 'nocoding'; [tmp,nb_nocode] = perform_rle_coding(x, +1, options); disp(sprintf('No code = %.2f', nb_nocode)); % test for bijectivity options.rle_coding_mode = 'nocoding'; stream = perform_rle_coding(x, +1, options); xx = perform_rle_coding(stream, -1, options); disp( sprintf('Error (should be 0): %.2f', norme(x-xx)) ); options.rle_coding_mode = 'arithfixed'; stream = perform_rle_coding(x, +1, options); xx = perform_rle_coding(stream, -1, options); disp( sprintf('Error (should be 0): %.2f', norme(x-xx)) );
import logic.misc attribute [reducible] definition is_binary (p : Prop) : Prop := p ∨ ¬p --- An "internal" version of decidability. class idecidable (p : Prop) : Prop := (is_either : is_binary p) @[reducible] definition idecidable_pred {α : Sort _} (p : α → Prop) := ∀ a, idecidable (p a) --- Class of types that has internally decidable equality. @[reducible] definition idecidable_eq (α : Sort _) : Prop := ∀ (a b : α), idecidable (a=b) @[reducible] def idecidable_rel {α : Sort _} (r : α → α → Prop) := ∀ a b, idecidable (r a b) @[elab_as_eliminator] lemma whether (p : Prop) [idec : idecidable p] : p ∨ ¬ p := idec.is_either lemma not_binary {p : Prop} (hbp : is_binary p) : is_binary (¬p) := begin apply or.elim hbp, show p → is_binary (¬p), { intro hp; right; exact not_not_intro hp}, show ¬p → is_binary (¬p), { intro hnp; left; assumption }, end lemma or_binary {p q : Prop} (hbp : is_binary p) (hbq : is_binary q) : is_binary (p∨ q) := begin apply or.elim hbq; apply or.elim hbp, show p → q → is_binary (p∨ q), { intros hp hq, left; left; exact hp }, show ¬p → q → is_binary (p∨ q), { intros hnp hq, left; right; exact hq }, show p → ¬q → is_binary (p∨ q), { intros hp hnq, left; left; exact hp }, show ¬p → ¬q → is_binary (p∨ q), { intros hnp hnq, right, intros hnpq, apply or.elim hnpq; intros; contradiction } end lemma and_binary {p q : Prop} (hbp : is_binary p) (hbq : is_binary q) : is_binary (p∧q) := begin apply or.elim hbq; apply or.elim hbp, show p → q → is_binary (p∧q), { intros hp hq, left; exact ⟨hp,hq⟩ }, show ¬p → q → is_binary (p∧q), { intros hnp hq, right; intros hnpq, exact hnp hnpq.left }, show p → ¬q → is_binary (p∧q), { intros hp hnq, right; intros hnpq, exact hnq hnpq.right }, show ¬p → ¬q → is_binary (p∧q), { intros hnp hnq, right; intros hnpq, exact hnp hnpq.left }, end lemma xor_binary {p q : Prop} (hbp : is_binary p) (hbq : is_binary q) : is_binary (xor p q) := begin have hbnp : is_binary ¬p, from not_binary hbp, have hbnq : is_binary ¬q, from not_binary hbq, apply or_binary; apply and_binary; assumption end instance not_idecidable (p : Prop) [idecidable p] : idecidable ¬p := {is_either := not_binary (whether p)} instance or_idecidable (p q : Prop) [idecidable p] [idecidable q] : idecidable (p∨q) := {is_either := or_binary (whether p) (whether q)} instance and_idecidable (p q : Prop) [idecidable p] [idecidable q] : idecidable (p∧q) := {is_either := and_binary (whether p) (whether q)} instance xor_idecidable (p q : Prop) [idecidable p] [idecidable q] : idecidable (xor p q) := {is_either := xor_binary (whether p) (whether q)} #print axioms xor_idecidable lemma not_xor_iff {p q : Prop} [idecidable p] [idecidable q] : ¬xor p q → (p ↔ q) := begin intros hnx, let h := not_or_distrib.mp hnx, apply or.elim (whether q); apply or.elim (whether p), show p → q → (p↔ q), from λ hp hq, {mp := (λ _, hq), mpr := (λ _, hp)}, show ¬p → q → (p↔ q), from λ hnp hq, (h.right ⟨hq,hnp⟩).elim, show p → ¬q → (p↔ q), from λ hp hnq, (h.left ⟨hp,hnq⟩).elim, show ¬p → ¬q → (p↔ q), from λ hnp hnq, {mp := (λ hp, (hnp hp).elim), mpr := (λ hq, (hnq hq).elim)} end private lemma xor_assoc_impl (p q r : Prop) [idecidable p] [idecidable q] [idecidable r] : xor (xor p q) r → xor p (xor q r) := begin intros h, apply or.elim h, show (xor p q) ∧ ¬r → xor p (xor q r), { intros hpqr, apply or.elim (whether r), show r → xor p (xor q r), {intro hr; exfalso; exact hpqr.right hr}, intros hnr, apply or.elim hpqr.left, show p∧¬q → xor p (xor q r), { intros hpq, left; split; try {exact hpq.left}, intros hx; apply or.elim hx, 
show q∧¬r → false, { intros hqr, exact hpq.right hqr.left }, show r∧¬q → false, { intros hrq, exact hnr hrq.left }, }, show q∧¬p → xor p (xor q r), { intros hqr, right; split; try {exact hqr.right}, exact or.inl ⟨hqr.left,hnr⟩ } }, show r ∧ ¬xor p q → xor p (xor q r), { intros hrpq, have hpq : p↔q, from not_xor_iff hrpq.right, apply or.elim (whether p), show p → xor p (xor q r), { intros hp, left; split; try {assumption}, intros hx, apply or.elim hx, show q ∧ ¬r → false, { intro hqr, exact hqr.right hrpq.left }, show r ∧ ¬q → false, { intro hrq, exact hrq.right (hpq.mp hp) }, }, show ¬p → xor p (xor q r), { intros hnp, right; split; try {assumption}, right, exact ⟨hrpq.left, (λ hq, hnp (hpq.mpr hq))⟩ } }, end definition xor_assoc (p q r : Prop) [idecidable p] [idecidable q] [idecidable r] : xor (xor p q) r ↔ xor p (xor q r) := { mp := xor_assoc_impl p q r, mpr := by calc xor p (xor q r) → xor p (xor r q) : (xor_congr iff.rfl (xor_comm q r)).mp ... → xor (xor r q) p : (xor_comm p (xor r q)).mp ... → xor r (xor q p) : xor_assoc_impl r q p ... → xor r (xor p q) : (xor_congr iff.rfl (xor_comm q p)).mp ... → xor (xor p q) r : (xor_comm r (xor p q)).mp } #print axioms xor_assoc
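An informal gloss on why associativity should hold (not part of the formal development above): for internally decidable propositions, xor behaves like addition in $\mathbb{F}_2$ once $p$ and $q$ are read as the values $0$ or $1$,

$$\mathrm{xor}\ p\ q \;\longleftrightarrow\; p + q \equiv 1 \pmod 2,$$

so associativity of xor mirrors associativity of $+$ in $\mathbb{F}_2$; the idecidable instances are exactly what licenses the case analyses used in the Lean proof.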
import os

import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
from torch.utils.data import Dataset, DataLoader


class Encoding_Dataset(Dataset):
    def __init__(self, root_path, transform=None):
        self.root_path = root_path
        self.data = []
        self.transform = transform
        self.classes = os.listdir(root_path)
        for c in self.classes:
            class_path = root_path / c
            for data_path in os.listdir(class_path):
                self.data.append((class_path / data_path, self.classes.index(c)))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        x = np.load(self.data[i][0])
        y = self.data[i][1]
        return x, y


def make_weights_for_balanced_classes(data, nclasses):
    count = [0] * nclasses
    for item in data:
        count[item[1]] += 1
    weight_per_class = [0.] * nclasses
    N = float(sum(count))
    for i in range(nclasses):
        weight_per_class[i] = N / float(count[i])
    weight = [0] * len(data)
    for idx, val in enumerate(data):
        weight[idx] = weight_per_class[val[1]]
    return weight


def testing(model, loader, e, device="cuda"):
    data_iter = iter(loader)
    true_images, _ = next(data_iter)  # next(it) works on all Python 3 iterators; it.next() does not
    true_images = true_images.to(device)
    pred_images = model(true_images).to(device)
    fig = plt.figure(figsize=(8, 8))
    rows, columns = 4, 2
    for i, (t, p) in enumerate(zip(true_images[:4], pred_images[:4])):
        t = t[0].cpu().detach().numpy()
        p = p[0].cpu().detach().numpy()
        fig.add_subplot(rows, columns, i * 2 + 1)
        plt.imshow(t, cmap=cm.gray)
        fig.add_subplot(rows, columns, (i + 1) * 2)
        plt.imshow(p, cmap=cm.gray)
    plt.savefig(f'outputs/{e}.png', dpi=300)  # save before show(), which clears the current figure
    plt.show()
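A short usage sketch tying the pieces above together: feed the per-sample weights from `make_weights_for_balanced_classes` into PyTorch's `WeightedRandomSampler` to get class-balanced batches. The `./encodings` directory is a hypothetical example of the expected `root/<class>/<sample>.npy` layout.

```python
# Usage sketch: class-balanced loading with the helpers defined above.
# "./encodings" is a hypothetical root with one subdirectory per class.
from pathlib import Path

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

root = Path("./encodings")
dataset = Encoding_Dataset(root)

weights = make_weights_for_balanced_classes(dataset.data, len(dataset.classes))
sampler = WeightedRandomSampler(
    weights=torch.DoubleTensor(weights),
    num_samples=len(weights),
    replacement=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for x, y in loader:
    print(x.shape, y[:8])
    break
```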
## Pre script stuff library(tidyverse) library(data.table) library(plotly) library(patchwork) ## Functions load_runsummary <- function(in_dir, result_path) { fread(paste0(in_dir, result_path, "RunSummary.txt")) %>% as_tibble } load_module_files <- function(in_dir, file_ext) { list.files(in_dir, pattern = file_ext, recursive = T, full.names = T) %>% enframe("Index", "File") %>% mutate(data = purrr::map(File, function(x) fread(x, header = T) %>% as_tibble), File = map(File, function(file_name) {str_split(file_name, "/") %>% unlist %>% tail(n = 3) %>% paste(collapse = "/") %>% str_remove_all(file_ext)})) %>% separate(File, c("Mode", "Module", "File"), sep = "/") %>% unnest %>% rename_at(vars(matches("Taxon")), function(x) str_replace_all(x, "Taxon", "Node")) %>% mutate(Mode = factor(Mode, levels = c("default", "ancient"))) } filter_module_files <- function(in_dat, filt, remove_string, File, taxon) { sample <- enquo(File) taxon <- enquo(taxon) if (remove_string == "") { in_dat %>% filter(File %in% !! sample, Node == !! taxon, Mode %in% filt) } else { ## Note static requires second str_remove, but not in shiny! in_dat %>% mutate(File = str_remove(File, remove_string)) %>% filter(File %in% str_remove(!! sample, remove_string), Node == !! taxon, Mode %in% filt) } } plot_damage <- function(x){ ggplot(x, aes(Position, Frequency, colour = Mismatch, group = Mismatch)) + geom_line() + xlab("Position (bp)") + ylab("Alignments (n)") + facet_wrap(File ~ Strand, scales = "free_x") + scale_colour_manual(values = mismatch_colours) + theme_minimal() } plot_col <- function(dat, xaxis, yaxis, xlabel, ylabel) { ggplot(dat, aes_string(xaxis, yaxis, fill = "Mode")) + geom_col() + xlab(xlabel) + ylab(ylabel) + facet_wrap(File ~ Node, scales = "free_x") + scale_fill_manual(values = mode_colours) + theme_minimal() } ## default aesthetics mode_colours <- c(default = "#1b9e77", ancient = "#7570b3") mismatch_colours <- c(`C>T` = "#e41a1c", `G>A` = "#377eb8", `D>V(11Substitution)` = "grey", `H>B(11Substitution)` = "grey") damage_xaxis <- c("11" = "-10", "12" = "-9", "13" = "-8", "14" = "-7", "15" = "-6", "16" = "-5", "17" = "-4", "18" = "-3", "19" = "-2", "20" = "-1") filterstats_info <- c(`Reference Length` = "ReferenceLength", `Filtering Status` = "turnedOn?", `Downsampling Status` = "downSampling?", `Unfiltered Reads (n)` = "NumberOfUnfilteredReads", `Filtered Reads (n)` = "NumberOfFilteredReads", `Total Alignments (n)` = "numberOfAlignments", `Unfiltered Alignments (n)` = "NumberOfUnfilteredAlignments", `Total Alignments on Reference (n)` = "TotalAlignmentsOnReference", `Non-Duplicates on Reference (n)` = "nonDuplicatesonReference", `Non-Stacked (n)` = "nonStacked", `Read Distribution` = "uniquePerReference", `Average Coverage on Reference (X)` = "AverageCoverage", `Standard Deviation Coverage on Reference (X)` = "Coverge_StandardDeviation", `Mean Read Length (bp)` = "Mean", `Standard Deviation Read Length (bp)` = "StandardDev", `Geometric Mean Read Length (bp)` = "GeometricMean", `Median Read Length (bp)` = "Median") ## Load data and get input selection options select_input <- "~/Documents/Scripts/shiny_web_apps/MEx-IPA/dev/test_data/output_dairymicrobes_archive/" input_dir <- paste0(select_input,"/") default_runsummary <- load_runsummary(input_dir, "default/") damageMismatch <- load_module_files(input_dir, "_damageMismatch.txt") editDistance <- load_module_files(input_dir, "_editDistance.txt") readLengthDist <- load_module_files(input_dir, "_readLengthDist.txt") filterStats <- load_module_files(input_dir, 
"_filterTable.txt") percentIdentity <- load_module_files(input_dir, "_percentIdentity.txt") alignmentDist <- load_module_files(input_dir, "_alignmentDist.txt") readLengthStat <- load_module_files(input_dir, "_readLengthStat.txt") coverageHist <- load_module_files(input_dir, "_coverageHist.txt") positionsCovered <- load_module_files(input_dir, "_postionsCovered.txt") ## Load data specific information for user options node_names <- default_runsummary %>% pull(Node) file_names <- default_runsummary %>% colnames %>% .[. != "Node"] filter_names <- list(all = c("default", "ancient"), default = c("default"), ancient = c("ancient")) ## User options selection ###################################################### selected_node <- node_names[37] selected_file <- file_names[1] selected_filter <- filter_names[1] %>% unlist %>% unname selected_interactive <- T remove_string <- "_S0_L001_R1_001.fastq.combined.fq.prefixed.extractunmapped.bam.rma6" ################################################################################ ## Data processing damage_data <- filter_module_files(damageMismatch, selected_filter, remove_string, selected_file, selected_node) %>% gather(Mismatch, Frequency, 6:(ncol(.) - 1)) %>% separate(Mismatch, c("Mismatch", "Position"), "_") %>% mutate(Strand = if_else(Mismatch %in% c("C>T", "D>V(11Substitution)"), "5 prime", "3 prime"), Strand = factor(Strand, levels = c("5 prime", "3 prime")), Position = if_else(Position %in% names(damage_xaxis), damage_xaxis[Position], Position), Position = as_factor(Position) ) edit_data <- filter_module_files(editDistance, selected_filter, remove_string, selected_file, selected_node) %>% gather(Edit_Distance, Alignment_Count, 6:ncol(.)) %>% mutate(Edit_Distance = factor(Edit_Distance, levels = c(0:10, "higher"))) percentidentity_data <- filter_module_files(percentIdentity, selected_filter, remove_string, selected_file, selected_node) %>% gather(Percent_Identity, Alignment_Count, 6:ncol(.)) %>% mutate(Percent_Identity = factor(Percent_Identity, levels = seq(80, 100, 5))) length_data <- filter_module_files(readLengthDist, selected_filter, remove_string, selected_file, selected_node) %>% gather(Length_Bin, Alignment_Count, 6:ncol(.)) %>% mutate(Length_Bin = str_replace_all(Length_Bin, "bp", ""), Length_Bin = as.numeric(Length_Bin)) positionscovered_data <- filter_module_files(positionsCovered, selected_filter, remove_string, selected_file, selected_node) %>% select(-contains("Average"), -contains("_"), -Reference) %>% gather(Breadth, Percentage, 6:ncol(.)) %>% mutate(Breadth = str_remove_all(Breadth, "percCoveredHigher") %>% as.numeric) coveragehist_data <- filter_module_files(coverageHist, selected_filter, remove_string, selected_file, selected_node) %>% gather(Fold_Coverage, Base_Pairs, 7:ncol(.)) %>% mutate(Fold_Coverage = as_factor(Fold_Coverage)) %>% filter(Fold_Coverage != "0") filterstats_data <- filterStats %>% select(-Module) %>% left_join(readLengthStat %>% select(-Module), by = c("Index", "Mode", "File", "Node")) %>% left_join(alignmentDist %>% select(-Module, -Reference), by = c("Index", "Mode", "File", "Node")) %>% left_join(positionsCovered %>% select(-Module, -Reference, -contains("perc")), by = c("Index", "Mode", "File", "Node")) %>% filter_module_files(selected_filter, remove_string, selected_file, selected_node) %>% mutate(Mean = round(Mean, digits = 2), GeometricMean = round(Mean, digits = 2), StandardDev = round(StandardDev, digits = 2)) %>% gather(Information, Value, 5:ncol(.)) %>% select(-Index) %>% mutate(Information = 
map(Information, function(x) names(filterstats_info[filterstats_info == x])) %>% unlist) %>% mutate(Information = factor(Information, levels = rev(names(filterstats_info)))) %>% arrange(Information) ## Plotting plot_damage <- function(x){ ggplot(x, aes(Position, Frequency, colour = Mismatch, group = Mismatch)) + geom_line() + labs(title = "Misincorporation (Damage) Plot") + xlab("Position (bp)") + ylab("Alignments (n)") + facet_wrap(~ Strand, scales = "free_x") + scale_colour_manual(values = mismatch_colours) + theme_minimal() } if (any(selected_filter %in% "default")) { damage_plot <- plot_damage(damage_data %>% filter(Mode == "default")) } else { damage_plot <- plot_damage(damage_data) } length_plot <- plot_col(length_data, "Length_Bin", "Alignment_Count", "Read Length Bins (bp)", "Alignments (n)") edit_plot <- plot_col(edit_data, "Edit_Distance", "Alignment_Count", "Edit Distance", "Alignments (n)") percentidentity_plot <- plot_col(percentidentity_data, "Percent_Identity", "Alignment_Count", "Sequence Identity (%)", "Alignments (n)") positionscovered_plot <- plot_col(positionscovered_data, "Breadth", "Percentage", "Fold Coverage (X)", "Percentage of Reference (%)") coveragehist_plot <- plot_col(coveragehist_data, "Fold_Coverage", "Base_Pairs", "Fold Coverage (X)", "Base Pairs (n)") filterstats_plot <- ggplot(filterstats_data, aes(y = Information, x = Mode, label = Value)) + geom_tile(colour = "darkgrey", fill = NA, size = 0.3) + geom_text() + scale_x_discrete(position = "top") + theme_minimal() + theme(panel.grid = element_blank()) ## Final single sample plot display if (!isTRUE(selected_interactive)) { damage_plot + edit_plot + percentidentity_plot + length_plot + positionscovered_plot + coveragehist_plot filterstats_plot } else if (isTRUE(selected_interactive)) { ggplotly(damage_plot) ggplotly(edit_plot) ggplotly(length_plot) ggplotly(percentidentity_plot) filterstats_plot } ## Multiple sample comparison damage_data_comparison <- filter_module_files(damageMismatch, selected_filter, remove_string, file_names, selected_node) %>% gather(Mismatch, Frequency, 6:(ncol(.) - 1)) %>% separate(Mismatch, c("Mismatch", "Position"), "_") %>% mutate(Strand = if_else(Mismatch %in% c("C>T", "D>V(11Substitution)"), "5 prime", "3 prime"), Strand = factor(Strand, levels = c("5 prime", "3 prime")), Position = if_else(Position %in% names(damage_xaxis), damage_xaxis[Position], Position), Position = as_factor(Position) ) ## Following https://tbradley1013.github.io/2018/08/10/create-a-dynamic-number-of-ui-elements-in-shiny-with-purrr/ multi_plot_data <- damage_data_comparison %>% group_by(File) %>% nest() %>% mutate(plots = map(data, function(x) plot_damage(x))) %>% pull(plots) output <- list() final <- iwalk(multi_plot_data, ~{ output_name <- paste0("plot_", .y) output[[output_name]] <- renderPlotly(.x) }) graphs <- eventReactive(input$submit, { req(maltExtract_data()) dat <- maltExtract_data() if (input$selected_filter == "all") { selected_filter <- c("default", "ancient") } else { selected_filter <- input$selected_filter } if (input$remove_string == "") { file_names <- dat$file_names } else { file_names <- dat$file_names %>% str_remove(., input$remove_string) } damage_data_comparison <- filter_module_files(dat$damageMismatch, selected_filter, input$remove_string, file_names, input$selected_node) %>% gather(Mismatch, Frequency, 6:(ncol(.) 
    separate(Mismatch, c("Mismatch", "Position"), "_") %>%
    mutate(Strand = if_else(Mismatch %in% c("C>T", "D>V(11Substitution)"),
                            "5 prime", "3 prime"),
           Strand = factor(Strand, levels = c("5 prime", "3 prime")),
           Position = if_else(Position %in% names(damage_xaxis),
                              damage_xaxis[Position], Position),
           Position = as_factor(Position))

  damage_data_comparison %>%
    group_by(File) %>%
    nest() %>%
    mutate(plots = map(data, function(x) plot_damage(x))) %>%
    pull(plots)
})

# use purrr::iwalk to create a dynamic number of outputs
observeEvent(input$submit, {
  req(graphs())
  iwalk(graphs(), ~{
    output_name <- paste0("plot_", .y)
    output[[output_name]] <- renderPlotly(.x)
  })
})

# use renderUI to create a dynamic number of output ui elements
output$comparison_plots <- renderUI({
  req(graphs())
  plots_list <- imap(graphs(), ~{
    tagList(
      plotlyOutput(
        outputId = paste0("plot_", .y)
      ),
      br()
    )
  })
  tagList(plots_list)
})

# use renderUI to create a dynamic number of output ui elements
output$graphs_ui <- renderUI({
  req(graphs())
  plots_list <- imap(graphs(), ~{
    tagList(
      plotlyOutput(
        outputId = paste0("plot_", .y)
      ),
      br()
    )
  })
  tagList(plots_list)
})
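## ----------------------------------------------------------------------------
## Minimal self-contained sketch of the dynamic-output pattern used above
## (after the tbradley1013 blog post linked earlier). Everything here is
## illustrative only: the toy data, the 'n_plots' input and the panel titles
## are invented for the example and are not part of the MaltExtract viewer.
## ----------------------------------------------------------------------------
library(shiny)
library(plotly)
library(purrr)
library(ggplot2)

demo_ui <- fluidPage(
  numericInput("n_plots", "Number of plots", value = 2, min = 1, max = 5),
  actionButton("submit", "Render"),
  uiOutput("graphs_ui")
)

demo_server <- function(input, output) {
  ## build one ggplot per requested panel, only when the button is pressed
  graphs <- eventReactive(input$submit, {
    map(seq_len(input$n_plots), function(i) {
      ggplot(data.frame(x = 1:10, y = rnorm(10)), aes(x, y)) +
        geom_line() +
        labs(title = paste("Panel", i))
    })
  })

  ## register one plotly output per plot; iwalk supplies the index as .y
  observeEvent(input$submit, {
    iwalk(graphs(), ~{
      output[[paste0("plot_", .y)]] <- renderPlotly(.x)
    })
  })

  ## emit the matching plotlyOutput placeholders dynamically
  output$graphs_ui <- renderUI({
    req(graphs())
    tagList(imap(graphs(), ~tagList(plotlyOutput(paste0("plot_", .y)), br())))
  })
}

## shinyApp(demo_ui, demo_server)   # launch the sketch interactively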
/- Copyright (c) 2020 Joseph Myers. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Joseph Myers ! This file was ported from Lean 3 source module linear_algebra.affine_space.finite_dimensional ! leanprover-community/mathlib commit b875cbb7f2aa2b4c685aaa2f99705689c95322ad ! Please do not edit these lines, except to modify the commit id ! if you have ported upstream changes. -/ import Mathbin.LinearAlgebra.AffineSpace.Basis import Mathbin.LinearAlgebra.FiniteDimensional /-! # Finite-dimensional subspaces of affine spaces. This file provides a few results relating to finite-dimensional subspaces of affine spaces. ## Main definitions * `collinear` defines collinear sets of points as those that span a subspace of dimension at most 1. -/ noncomputable section open BigOperators Affine section AffineSpace' variable (k : Type _) {V : Type _} {P : Type _} variable {ι : Type _} include V open AffineSubspace FiniteDimensional Module variable [DivisionRing k] [AddCommGroup V] [Module k V] [affine_space V P] /-- The `vector_span` of a finite set is finite-dimensional. -/ theorem finiteDimensional_vectorSpan_of_finite {s : Set P} (h : Set.Finite s) : FiniteDimensional k (vectorSpan k s) := span_of_finite k <| h.vsub h #align finite_dimensional_vector_span_of_finite finiteDimensional_vectorSpan_of_finite /-- The `vector_span` of a family indexed by a `fintype` is finite-dimensional. -/ instance finiteDimensional_vectorSpan_range [Finite ι] (p : ι → P) : FiniteDimensional k (vectorSpan k (Set.range p)) := finiteDimensional_vectorSpan_of_finite k (Set.finite_range _) #align finite_dimensional_vector_span_range finiteDimensional_vectorSpan_range /-- The `vector_span` of a subset of a family indexed by a `fintype` is finite-dimensional. -/ instance finiteDimensional_vectorSpan_image_of_finite [Finite ι] (p : ι → P) (s : Set ι) : FiniteDimensional k (vectorSpan k (p '' s)) := finiteDimensional_vectorSpan_of_finite k (Set.toFinite _) #align finite_dimensional_vector_span_image_of_finite finiteDimensional_vectorSpan_image_of_finite /-- The direction of the affine span of a finite set is finite-dimensional. -/ theorem finiteDimensional_direction_affineSpan_of_finite {s : Set P} (h : Set.Finite s) : FiniteDimensional k (affineSpan k s).direction := (direction_affineSpan k s).symm ▸ finiteDimensional_vectorSpan_of_finite k h #align finite_dimensional_direction_affine_span_of_finite finiteDimensional_direction_affineSpan_of_finite /-- The direction of the affine span of a family indexed by a `fintype` is finite-dimensional. -/ instance finiteDimensional_direction_affineSpan_range [Finite ι] (p : ι → P) : FiniteDimensional k (affineSpan k (Set.range p)).direction := finiteDimensional_direction_affineSpan_of_finite k (Set.finite_range _) #align finite_dimensional_direction_affine_span_range finiteDimensional_direction_affineSpan_range /-- The direction of the affine span of a subset of a family indexed by a `fintype` is finite-dimensional. -/ instance finiteDimensional_direction_affineSpan_image_of_finite [Finite ι] (p : ι → P) (s : Set ι) : FiniteDimensional k (affineSpan k (p '' s)).direction := finiteDimensional_direction_affineSpan_of_finite k (Set.toFinite _) #align finite_dimensional_direction_affine_span_image_of_finite finiteDimensional_direction_affineSpan_image_of_finite /-- An affine-independent family of points in a finite-dimensional affine space is finite. 
-/ theorem finite_of_fin_dim_affineIndependent [FiniteDimensional k V] {p : ι → P} (hi : AffineIndependent k p) : Finite ι := by nontriviality ι; inhabit ι rw [affineIndependent_iff_linearIndependent_vsub k p default] at hi letI : IsNoetherian k V := IsNoetherian.iff_fg.2 inferInstance exact (Set.finite_singleton default).finite_of_compl (Set.finite_coe_iff.1 hi.finite_of_is_noetherian) #align finite_of_fin_dim_affine_independent finite_of_fin_dim_affineIndependent /-- An affine-independent subset of a finite-dimensional affine space is finite. -/ theorem finite_set_of_fin_dim_affineIndependent [FiniteDimensional k V] {s : Set ι} {f : s → P} (hi : AffineIndependent k f) : s.Finite := @Set.toFinite _ s (finite_of_fin_dim_affineIndependent k hi) #align finite_set_of_fin_dim_affine_independent finite_set_of_fin_dim_affineIndependent open Classical variable {k} /-- The `vector_span` of a finite subset of an affinely independent family has dimension one less than its cardinality. -/ theorem AffineIndependent.finrank_vectorSpan_image_finset {p : ι → P} (hi : AffineIndependent k p) {s : Finset ι} {n : ℕ} (hc : Finset.card s = n + 1) : finrank k (vectorSpan k (s.image p : Set P)) = n := by have hi' := hi.range.mono (Set.image_subset_range p ↑s) have hc' : (s.image p).card = n + 1 := by rwa [s.card_image_of_injective hi.injective] have hn : (s.image p).Nonempty := by simp [hc', ← Finset.card_pos] rcases hn with ⟨p₁, hp₁⟩ have hp₁' : p₁ ∈ p '' s := by simpa using hp₁ rw [affineIndependent_set_iff_linearIndependent_vsub k hp₁', ← Finset.coe_singleton, ← Finset.coe_image, ← Finset.coe_sdiff, Finset.sdiff_singleton_eq_erase, ← Finset.coe_image] at hi' have hc : (Finset.image (fun p : P => p -ᵥ p₁) ((Finset.image p s).eraseₓ p₁)).card = n := by rw [Finset.card_image_of_injective _ (vsub_left_injective _), Finset.card_erase_of_mem hp₁] exact Nat.pred_eq_of_eq_succ hc' rwa [vectorSpan_eq_span_vsub_finset_right_ne k hp₁, finrank_span_finset_eq_card, hc] #align affine_independent.finrank_vector_span_image_finset AffineIndependent.finrank_vectorSpan_image_finset /-- The `vector_span` of a finite affinely independent family has dimension one less than its cardinality. -/ theorem AffineIndependent.finrank_vectorSpan [Fintype ι] {p : ι → P} (hi : AffineIndependent k p) {n : ℕ} (hc : Fintype.card ι = n + 1) : finrank k (vectorSpan k (Set.range p)) = n := by rw [← Finset.card_univ] at hc rw [← Set.image_univ, ← Finset.coe_univ, ← Finset.coe_image] exact hi.finrank_vector_span_image_finset hc #align affine_independent.finrank_vector_span AffineIndependent.finrank_vectorSpan /-- The `vector_span` of a finite affinely independent family whose cardinality is one more than that of the finite-dimensional space is `⊤`. -/ theorem AffineIndependent.vectorSpan_eq_top_of_card_eq_finrank_add_one [FiniteDimensional k V] [Fintype ι] {p : ι → P} (hi : AffineIndependent k p) (hc : Fintype.card ι = finrank k V + 1) : vectorSpan k (Set.range p) = ⊤ := eq_top_of_finrank_eq <| hi.finrank_vectorSpan hc #align affine_independent.vector_span_eq_top_of_card_eq_finrank_add_one AffineIndependent.vectorSpan_eq_top_of_card_eq_finrank_add_one variable (k) /-- The `vector_span` of `n + 1` points in an indexed family has dimension at most `n`. 
-/ theorem finrank_vectorSpan_image_finset_le (p : ι → P) (s : Finset ι) {n : ℕ} (hc : Finset.card s = n + 1) : finrank k (vectorSpan k (s.image p : Set P)) ≤ n := by have hn : (s.image p).Nonempty := by rw [Finset.Nonempty.image_iff, ← Finset.card_pos, hc] apply Nat.succ_pos rcases hn with ⟨p₁, hp₁⟩ rw [vectorSpan_eq_span_vsub_finset_right_ne k hp₁] refine' le_trans (finrank_span_finset_le_card (((s.image p).eraseₓ p₁).image fun p => p -ᵥ p₁)) _ rw [Finset.card_image_of_injective _ (vsub_left_injective p₁), Finset.card_erase_of_mem hp₁, tsub_le_iff_right, ← hc] apply Finset.card_image_le #align finrank_vector_span_image_finset_le finrank_vectorSpan_image_finset_le /-- The `vector_span` of an indexed family of `n + 1` points has dimension at most `n`. -/ theorem finrank_vectorSpan_range_le [Fintype ι] (p : ι → P) {n : ℕ} (hc : Fintype.card ι = n + 1) : finrank k (vectorSpan k (Set.range p)) ≤ n := by rw [← Set.image_univ, ← Finset.coe_univ, ← Finset.coe_image] rw [← Finset.card_univ] at hc exact finrank_vectorSpan_image_finset_le _ _ _ hc #align finrank_vector_span_range_le finrank_vectorSpan_range_le /-- `n + 1` points are affinely independent if and only if their `vector_span` has dimension `n`. -/ theorem affineIndependent_iff_finrank_vectorSpan_eq [Fintype ι] (p : ι → P) {n : ℕ} (hc : Fintype.card ι = n + 1) : AffineIndependent k p ↔ finrank k (vectorSpan k (Set.range p)) = n := by have hn : Nonempty ι := by simp [← Fintype.card_pos_iff, hc] cases' hn with i₁ rw [affineIndependent_iff_linearIndependent_vsub _ _ i₁, linearIndependent_iff_card_eq_finrank_span, eq_comm, vectorSpan_range_eq_span_range_vsub_right_ne k p i₁] congr rw [← Finset.card_univ] at hc rw [Fintype.subtype_card] simp [Finset.filter_ne', Finset.card_erase_of_mem, hc] #align affine_independent_iff_finrank_vector_span_eq affineIndependent_iff_finrank_vectorSpan_eq /-- `n + 1` points are affinely independent if and only if their `vector_span` has dimension at least `n`. -/ theorem affineIndependent_iff_le_finrank_vectorSpan [Fintype ι] (p : ι → P) {n : ℕ} (hc : Fintype.card ι = n + 1) : AffineIndependent k p ↔ n ≤ finrank k (vectorSpan k (Set.range p)) := by rw [affineIndependent_iff_finrank_vectorSpan_eq k p hc] constructor · rintro rfl rfl · exact fun hle => le_antisymm (finrank_vectorSpan_range_le k p hc) hle #align affine_independent_iff_le_finrank_vector_span affineIndependent_iff_le_finrank_vectorSpan /-- `n + 2` points are affinely independent if and only if their `vector_span` does not have dimension at most `n`. -/ theorem affineIndependent_iff_not_finrank_vectorSpan_le [Fintype ι] (p : ι → P) {n : ℕ} (hc : Fintype.card ι = n + 2) : AffineIndependent k p ↔ ¬finrank k (vectorSpan k (Set.range p)) ≤ n := by rw [affineIndependent_iff_le_finrank_vectorSpan k p hc, ← Nat.lt_iff_add_one_le, lt_iff_not_ge] #align affine_independent_iff_not_finrank_vector_span_le affineIndependent_iff_not_finrank_vectorSpan_le /-- `n + 2` points have a `vector_span` with dimension at most `n` if and only if they are not affinely independent. 
-/ theorem finrank_vectorSpan_le_iff_not_affineIndependent [Fintype ι] (p : ι → P) {n : ℕ} (hc : Fintype.card ι = n + 2) : finrank k (vectorSpan k (Set.range p)) ≤ n ↔ ¬AffineIndependent k p := (not_iff_comm.1 (affineIndependent_iff_not_finrank_vectorSpan_le k p hc).symm).symm #align finrank_vector_span_le_iff_not_affine_independent finrank_vectorSpan_le_iff_not_affineIndependent variable {k} /-- If the `vector_span` of a finite subset of an affinely independent family lies in a submodule with dimension one less than its cardinality, it equals that submodule. -/ theorem AffineIndependent.vectorSpan_image_finset_eq_of_le_of_card_eq_finrank_add_one {p : ι → P} (hi : AffineIndependent k p) {s : Finset ι} {sm : Submodule k V} [FiniteDimensional k sm] (hle : vectorSpan k (s.image p : Set P) ≤ sm) (hc : Finset.card s = finrank k sm + 1) : vectorSpan k (s.image p : Set P) = sm := eq_of_le_of_finrank_eq hle <| hi.finrank_vectorSpan_image_finset hc #align affine_independent.vector_span_image_finset_eq_of_le_of_card_eq_finrank_add_one AffineIndependent.vectorSpan_image_finset_eq_of_le_of_card_eq_finrank_add_one /-- If the `vector_span` of a finite affinely independent family lies in a submodule with dimension one less than its cardinality, it equals that submodule. -/ theorem AffineIndependent.vectorSpan_eq_of_le_of_card_eq_finrank_add_one [Fintype ι] {p : ι → P} (hi : AffineIndependent k p) {sm : Submodule k V} [FiniteDimensional k sm] (hle : vectorSpan k (Set.range p) ≤ sm) (hc : Fintype.card ι = finrank k sm + 1) : vectorSpan k (Set.range p) = sm := eq_of_le_of_finrank_eq hle <| hi.finrank_vectorSpan hc #align affine_independent.vector_span_eq_of_le_of_card_eq_finrank_add_one AffineIndependent.vectorSpan_eq_of_le_of_card_eq_finrank_add_one /-- If the `affine_span` of a finite subset of an affinely independent family lies in an affine subspace whose direction has dimension one less than its cardinality, it equals that subspace. -/ theorem AffineIndependent.affineSpan_image_finset_eq_of_le_of_card_eq_finrank_add_one {p : ι → P} (hi : AffineIndependent k p) {s : Finset ι} {sp : AffineSubspace k P} [FiniteDimensional k sp.direction] (hle : affineSpan k (s.image p : Set P) ≤ sp) (hc : Finset.card s = finrank k sp.direction + 1) : affineSpan k (s.image p : Set P) = sp := by have hn : s.nonempty := by rw [← Finset.card_pos, hc] apply Nat.succ_pos refine' eq_of_direction_eq_of_nonempty_of_le _ ((hn.image _).to_set.affineSpan _) hle have hd := direction_le hle rw [direction_affineSpan] at hd⊢ exact hi.vector_span_image_finset_eq_of_le_of_card_eq_finrank_add_one hd hc #align affine_independent.affine_span_image_finset_eq_of_le_of_card_eq_finrank_add_one AffineIndependent.affineSpan_image_finset_eq_of_le_of_card_eq_finrank_add_one /-- If the `affine_span` of a finite affinely independent family lies in an affine subspace whose direction has dimension one less than its cardinality, it equals that subspace. 
-/ theorem AffineIndependent.affineSpan_eq_of_le_of_card_eq_finrank_add_one [Fintype ι] {p : ι → P} (hi : AffineIndependent k p) {sp : AffineSubspace k P} [FiniteDimensional k sp.direction] (hle : affineSpan k (Set.range p) ≤ sp) (hc : Fintype.card ι = finrank k sp.direction + 1) : affineSpan k (Set.range p) = sp := by rw [← Finset.card_univ] at hc rw [← Set.image_univ, ← Finset.coe_univ, ← Finset.coe_image] at hle⊢ exact hi.affine_span_image_finset_eq_of_le_of_card_eq_finrank_add_one hle hc #align affine_independent.affine_span_eq_of_le_of_card_eq_finrank_add_one AffineIndependent.affineSpan_eq_of_le_of_card_eq_finrank_add_one /-- The `affine_span` of a finite affinely independent family is `⊤` iff the family's cardinality is one more than that of the finite-dimensional space. -/ theorem AffineIndependent.affineSpan_eq_top_iff_card_eq_finrank_add_one [FiniteDimensional k V] [Fintype ι] {p : ι → P} (hi : AffineIndependent k p) : affineSpan k (Set.range p) = ⊤ ↔ Fintype.card ι = finrank k V + 1 := by constructor · intro h_tot let n := Fintype.card ι - 1 have hn : Fintype.card ι = n + 1 := (Nat.succ_pred_eq_of_pos (card_pos_of_affine_span_eq_top k V P h_tot)).symm rw [hn, ← finrank_top, ← (vector_span_eq_top_of_affine_span_eq_top k V P) h_tot, ← hi.finrank_vector_span hn] · intro hc rw [← finrank_top, ← direction_top k V P] at hc exact hi.affine_span_eq_of_le_of_card_eq_finrank_add_one le_top hc #align affine_independent.affine_span_eq_top_iff_card_eq_finrank_add_one AffineIndependent.affineSpan_eq_top_iff_card_eq_finrank_add_one theorem Affine.Simplex.span_eq_top [FiniteDimensional k V] {n : ℕ} (T : Affine.Simplex k V n) (hrank : finrank k V = n) : affineSpan k (Set.range T.points) = ⊤ := by rw [AffineIndependent.affineSpan_eq_top_iff_card_eq_finrank_add_one T.independent, Fintype.card_fin, hrank] #align affine.simplex.span_eq_top Affine.Simplex.span_eq_top /-- The `vector_span` of adding a point to a finite-dimensional subspace is finite-dimensional. -/ instance finiteDimensional_vectorSpan_insert (s : AffineSubspace k P) [FiniteDimensional k s.direction] (p : P) : FiniteDimensional k (vectorSpan k (insert p (s : Set P))) := by rw [← direction_affineSpan, ← affineSpan_insert_affineSpan] rcases(s : Set P).eq_empty_or_nonempty with (hs | ⟨p₀, hp₀⟩) · rw [coe_eq_bot_iff] at hs rw [hs, bot_coe, span_empty, bot_coe, direction_affineSpan] convert finiteDimensional_bot _ _ <;> simp · rw [affine_span_coe, direction_affine_span_insert hp₀] infer_instance #align finite_dimensional_vector_span_insert finiteDimensional_vectorSpan_insert /-- The direction of the affine span of adding a point to a finite-dimensional subspace is finite-dimensional. -/ instance finiteDimensional_direction_affineSpan_insert (s : AffineSubspace k P) [FiniteDimensional k s.direction] (p : P) : FiniteDimensional k (affineSpan k (insert p (s : Set P))).direction := (direction_affineSpan k (insert p (s : Set P))).symm ▸ finiteDimensional_vectorSpan_insert s p #align finite_dimensional_direction_affine_span_insert finiteDimensional_direction_affineSpan_insert variable (k) /-- The `vector_span` of adding a point to a set with a finite-dimensional `vector_span` is finite-dimensional. 
-/ instance finiteDimensional_vectorSpan_insert_set (s : Set P) [FiniteDimensional k (vectorSpan k s)] (p : P) : FiniteDimensional k (vectorSpan k (insert p s)) := by haveI : FiniteDimensional k (affineSpan k s).direction := (direction_affineSpan k s).symm ▸ inferInstance rw [← direction_affineSpan, ← affineSpan_insert_affineSpan, direction_affineSpan] exact finiteDimensional_vectorSpan_insert (affineSpan k s) p #align finite_dimensional_vector_span_insert_set finiteDimensional_vectorSpan_insert_set /-- A set of points is collinear if their `vector_span` has dimension at most `1`. -/ def Collinear (s : Set P) : Prop := Module.rank k (vectorSpan k s) ≤ 1 #align collinear Collinear /-- The definition of `collinear`. -/ theorem collinear_iff_dim_le_one (s : Set P) : Collinear k s ↔ Module.rank k (vectorSpan k s) ≤ 1 := Iff.rfl #align collinear_iff_dim_le_one collinear_iff_dim_le_one variable {k} /-- A set of points, whose `vector_span` is finite-dimensional, is collinear if and only if their `vector_span` has dimension at most `1`. -/ theorem collinear_iff_finrank_le_one {s : Set P} [FiniteDimensional k (vectorSpan k s)] : Collinear k s ↔ finrank k (vectorSpan k s) ≤ 1 := by have h := collinear_iff_dim_le_one k s rw [← finrank_eq_dim] at h exact_mod_cast h #align collinear_iff_finrank_le_one collinear_iff_finrank_le_one alias collinear_iff_finrank_le_one ↔ Collinear.finrank_le_one _ #align collinear.finrank_le_one Collinear.finrank_le_one /-- A subset of a collinear set is collinear. -/ theorem Collinear.subset {s₁ s₂ : Set P} (hs : s₁ ⊆ s₂) (h : Collinear k s₂) : Collinear k s₁ := (dim_le_of_submodule (vectorSpan k s₁) (vectorSpan k s₂) (vectorSpan_mono k hs)).trans h #align collinear.subset Collinear.subset /-- The `vector_span` of collinear points is finite-dimensional. -/ theorem Collinear.finiteDimensional_vectorSpan {s : Set P} (h : Collinear k s) : FiniteDimensional k (vectorSpan k s) := IsNoetherian.iff_fg.1 (IsNoetherian.iff_dim_lt_aleph0.2 (lt_of_le_of_lt h Cardinal.one_lt_aleph0)) #align collinear.finite_dimensional_vector_span Collinear.finiteDimensional_vectorSpan /-- The direction of the affine span of collinear points is finite-dimensional. -/ theorem Collinear.finiteDimensional_direction_affineSpan {s : Set P} (h : Collinear k s) : FiniteDimensional k (affineSpan k s).direction := (direction_affineSpan k s).symm ▸ h.finiteDimensional_vectorSpan #align collinear.finite_dimensional_direction_affine_span Collinear.finiteDimensional_direction_affineSpan variable (k P) /-- The empty set is collinear. -/ theorem collinear_empty : Collinear k (∅ : Set P) := by rw [collinear_iff_dim_le_one, vectorSpan_empty] simp #align collinear_empty collinear_empty variable {P} /-- A single point is collinear. -/ theorem collinear_singleton (p : P) : Collinear k ({p} : Set P) := by rw [collinear_iff_dim_le_one, vectorSpan_singleton] simp #align collinear_singleton collinear_singleton variable {k} /-- Given a point `p₀` in a set of points, that set is collinear if and only if the points can all be expressed as multiples of the same vector, added to `p₀`. 
-/ theorem collinear_iff_of_mem {s : Set P} {p₀ : P} (h : p₀ ∈ s) : Collinear k s ↔ ∃ v : V, ∀ p ∈ s, ∃ r : k, p = r • v +ᵥ p₀ := by simp_rw [collinear_iff_dim_le_one, dim_submodule_le_one_iff', Submodule.le_span_singleton_iff] constructor · rintro ⟨v₀, hv⟩ use v₀ intro p hp obtain ⟨r, hr⟩ := hv (p -ᵥ p₀) (vsub_mem_vectorSpan k hp h) use r rw [eq_vadd_iff_vsub_eq] exact hr.symm · rintro ⟨v, hp₀v⟩ use v intro w hw have hs : vectorSpan k s ≤ k ∙ v := by rw [vectorSpan_eq_span_vsub_set_right k h, Submodule.span_le, Set.subset_def] intro x hx rw [SetLike.mem_coe, Submodule.mem_span_singleton] rw [Set.mem_image] at hx rcases hx with ⟨p, hp, rfl⟩ rcases hp₀v p hp with ⟨r, rfl⟩ use r simp have hw' := SetLike.le_def.1 hs hw rwa [Submodule.mem_span_singleton] at hw' #align collinear_iff_of_mem collinear_iff_of_mem /-- A set of points is collinear if and only if they can all be expressed as multiples of the same vector, added to the same base point. -/ theorem collinear_iff_exists_forall_eq_smul_vadd (s : Set P) : Collinear k s ↔ ∃ (p₀ : P)(v : V), ∀ p ∈ s, ∃ r : k, p = r • v +ᵥ p₀ := by rcases Set.eq_empty_or_nonempty s with (rfl | ⟨⟨p₁, hp₁⟩⟩) · simp [collinear_empty] · rw [collinear_iff_of_mem hp₁] constructor · exact fun h => ⟨p₁, h⟩ · rintro ⟨p, v, hv⟩ use v intro p₂ hp₂ rcases hv p₂ hp₂ with ⟨r, rfl⟩ rcases hv p₁ hp₁ with ⟨r₁, rfl⟩ use r - r₁ simp [vadd_vadd, ← add_smul] #align collinear_iff_exists_forall_eq_smul_vadd collinear_iff_exists_forall_eq_smul_vadd variable (k) /-- Two points are collinear. -/ theorem collinear_pair (p₁ p₂ : P) : Collinear k ({p₁, p₂} : Set P) := by rw [collinear_iff_exists_forall_eq_smul_vadd] use p₁, p₂ -ᵥ p₁ intro p hp rw [Set.mem_insert_iff, Set.mem_singleton_iff] at hp cases hp · use 0 simp [hp] · use 1 simp [hp] #align collinear_pair collinear_pair variable {k} /-- Three points are affinely independent if and only if they are not collinear. -/ theorem affineIndependent_iff_not_collinear {p : Fin 3 → P} : AffineIndependent k p ↔ ¬Collinear k (Set.range p) := by rw [collinear_iff_finrank_le_one, affineIndependent_iff_not_finrank_vectorSpan_le k p (Fintype.card_fin 3)] #align affine_independent_iff_not_collinear affineIndependent_iff_not_collinear /-- Three points are collinear if and only if they are not affinely independent. -/ theorem collinear_iff_not_affineIndependent {p : Fin 3 → P} : Collinear k (Set.range p) ↔ ¬AffineIndependent k p := by rw [collinear_iff_finrank_le_one, finrank_vectorSpan_le_iff_not_affineIndependent k p (Fintype.card_fin 3)] #align collinear_iff_not_affine_independent collinear_iff_not_affineIndependent /-- Three points are affinely independent if and only if they are not collinear. -/ theorem affineIndependent_iff_not_collinear_set {p₁ p₂ p₃ : P} : AffineIndependent k ![p₁, p₂, p₃] ↔ ¬Collinear k ({p₁, p₂, p₃} : Set P) := by simp [affineIndependent_iff_not_collinear, -Set.union_singleton] #align affine_independent_iff_not_collinear_set affineIndependent_iff_not_collinear_set /-- Three points are collinear if and only if they are not affinely independent. -/ theorem collinear_iff_not_affineIndependent_set {p₁ p₂ p₃ : P} : Collinear k ({p₁, p₂, p₃} : Set P) ↔ ¬AffineIndependent k ![p₁, p₂, p₃] := affineIndependent_iff_not_collinear_set.not_left.symm #align collinear_iff_not_affine_independent_set collinear_iff_not_affineIndependent_set /-- Three points are affinely independent if and only if they are not collinear. 
-/ theorem affineIndependent_iff_not_collinear_of_ne {p : Fin 3 → P} {i₁ i₂ i₃ : Fin 3} (h₁₂ : i₁ ≠ i₂) (h₁₃ : i₁ ≠ i₃) (h₂₃ : i₂ ≠ i₃) : AffineIndependent k p ↔ ¬Collinear k ({p i₁, p i₂, p i₃} : Set P) := by have hu : (Finset.univ : Finset (Fin 3)) = {i₁, i₂, i₃} := by decide! rw [affineIndependent_iff_not_collinear, ← Set.image_univ, ← Finset.coe_univ, hu, Finset.coe_insert, Finset.coe_insert, Finset.coe_singleton, Set.image_insert_eq, Set.image_pair] #align affine_independent_iff_not_collinear_of_ne affineIndependent_iff_not_collinear_of_ne /-- Three points are collinear if and only if they are not affinely independent. -/ theorem collinear_iff_not_affineIndependent_of_ne {p : Fin 3 → P} {i₁ i₂ i₃ : Fin 3} (h₁₂ : i₁ ≠ i₂) (h₁₃ : i₁ ≠ i₃) (h₂₃ : i₂ ≠ i₃) : Collinear k ({p i₁, p i₂, p i₃} : Set P) ↔ ¬AffineIndependent k p := (affineIndependent_iff_not_collinear_of_ne h₁₂ h₁₃ h₂₃).not_left.symm #align collinear_iff_not_affine_independent_of_ne collinear_iff_not_affineIndependent_of_ne /-- If three points are not collinear, the first and second are different. -/ theorem ne₁₂_of_not_collinear {p₁ p₂ p₃ : P} (h : ¬Collinear k ({p₁, p₂, p₃} : Set P)) : p₁ ≠ p₂ := by rintro rfl simpa [collinear_pair] using h #align ne₁₂_of_not_collinear ne₁₂_of_not_collinear /-- If three points are not collinear, the first and third are different. -/ theorem ne₁₃_of_not_collinear {p₁ p₂ p₃ : P} (h : ¬Collinear k ({p₁, p₂, p₃} : Set P)) : p₁ ≠ p₃ := by rintro rfl simpa [collinear_pair] using h #align ne₁₃_of_not_collinear ne₁₃_of_not_collinear /-- If three points are not collinear, the second and third are different. -/ theorem ne₂₃_of_not_collinear {p₁ p₂ p₃ : P} (h : ¬Collinear k ({p₁, p₂, p₃} : Set P)) : p₂ ≠ p₃ := by rintro rfl simpa [collinear_pair] using h #align ne₂₃_of_not_collinear ne₂₃_of_not_collinear /-- A point in a collinear set of points lies in the affine span of any two distinct points of that set. -/ theorem Collinear.mem_affineSpan_of_mem_of_ne {s : Set P} (h : Collinear k s) {p₁ p₂ p₃ : P} (hp₁ : p₁ ∈ s) (hp₂ : p₂ ∈ s) (hp₃ : p₃ ∈ s) (hp₁p₂ : p₁ ≠ p₂) : p₃ ∈ line[k, p₁, p₂] := by rw [collinear_iff_of_mem hp₁] at h rcases h with ⟨v, h⟩ rcases h p₂ hp₂ with ⟨r₂, rfl⟩ rcases h p₃ hp₃ with ⟨r₃, rfl⟩ rw [vadd_left_mem_affineSpan_pair] refine' ⟨r₃ / r₂, _⟩ have h₂ : r₂ ≠ 0 := by rintro rfl simpa using hp₁p₂ simp [smul_smul, h₂] #align collinear.mem_affine_span_of_mem_of_ne Collinear.mem_affineSpan_of_mem_of_ne /-- The affine span of any two distinct points of a collinear set of points equals the affine span of the whole set. -/ theorem Collinear.affineSpan_eq_of_ne {s : Set P} (h : Collinear k s) {p₁ p₂ : P} (hp₁ : p₁ ∈ s) (hp₂ : p₂ ∈ s) (hp₁p₂ : p₁ ≠ p₂) : line[k, p₁, p₂] = affineSpan k s := le_antisymm (affineSpan_mono _ (Set.insert_subset.2 ⟨hp₁, Set.singleton_subset_iff.2 hp₂⟩)) (affineSpan_le.2 fun p hp => h.mem_affineSpan_of_mem_of_ne hp₁ hp₂ hp hp₁p₂) #align collinear.affine_span_eq_of_ne Collinear.affineSpan_eq_of_ne /-- Given a collinear set of points, and two distinct points `p₂` and `p₃` in it, a point `p₁` is collinear with the set if and only if it is collinear with `p₂` and `p₃`. 
-/ theorem Collinear.collinear_insert_iff_of_ne {s : Set P} (h : Collinear k s) {p₁ p₂ p₃ : P} (hp₂ : p₂ ∈ s) (hp₃ : p₃ ∈ s) (hp₂p₃ : p₂ ≠ p₃) : Collinear k (insert p₁ s) ↔ Collinear k ({p₁, p₂, p₃} : Set P) := by have hv : vectorSpan k (insert p₁ s) = vectorSpan k ({p₁, p₂, p₃} : Set P) := by conv_lhs => rw [← direction_affineSpan, ← affineSpan_insert_affineSpan] conv_rhs => rw [← direction_affineSpan, ← affineSpan_insert_affineSpan] rw [h.affine_span_eq_of_ne hp₂ hp₃ hp₂p₃] rw [Collinear, Collinear, hv] #align collinear.collinear_insert_iff_of_ne Collinear.collinear_insert_iff_of_ne /-- Adding a point in the affine span of a set does not change whether that set is collinear. -/ theorem collinear_insert_iff_of_mem_affineSpan {s : Set P} {p : P} (h : p ∈ affineSpan k s) : Collinear k (insert p s) ↔ Collinear k s := by rw [Collinear, Collinear, vectorSpan_insert_eq_vectorSpan h] #align collinear_insert_iff_of_mem_affine_span collinear_insert_iff_of_mem_affineSpan /-- If a point lies in the affine span of two points, those three points are collinear. -/ theorem collinear_insert_of_mem_affineSpan_pair {p₁ p₂ p₃ : P} (h : p₁ ∈ line[k, p₂, p₃]) : Collinear k ({p₁, p₂, p₃} : Set P) := by rw [collinear_insert_iff_of_mem_affineSpan h] exact collinear_pair _ _ _ #align collinear_insert_of_mem_affine_span_pair collinear_insert_of_mem_affineSpan_pair /-- If two points lie in the affine span of two points, those four points are collinear. -/ theorem collinear_insert_insert_of_mem_affineSpan_pair {p₁ p₂ p₃ p₄ : P} (h₁ : p₁ ∈ line[k, p₃, p₄]) (h₂ : p₂ ∈ line[k, p₃, p₄]) : Collinear k ({p₁, p₂, p₃, p₄} : Set P) := by rw [collinear_insert_iff_of_mem_affineSpan ((AffineSubspace.le_def' _ _).1 (affineSpan_mono k (Set.subset_insert _ _)) _ h₁), collinear_insert_iff_of_mem_affineSpan h₂] exact collinear_pair _ _ _ #align collinear_insert_insert_of_mem_affine_span_pair collinear_insert_insert_of_mem_affineSpan_pair /-- If three points lie in the affine span of two points, those five points are collinear. -/ theorem collinear_insert_insert_insert_of_mem_affineSpan_pair {p₁ p₂ p₃ p₄ p₅ : P} (h₁ : p₁ ∈ line[k, p₄, p₅]) (h₂ : p₂ ∈ line[k, p₄, p₅]) (h₃ : p₃ ∈ line[k, p₄, p₅]) : Collinear k ({p₁, p₂, p₃, p₄, p₅} : Set P) := by rw [collinear_insert_iff_of_mem_affineSpan ((AffineSubspace.le_def' _ _).1 (affineSpan_mono k ((Set.subset_insert _ _).trans (Set.subset_insert _ _))) _ h₁), collinear_insert_iff_of_mem_affineSpan ((AffineSubspace.le_def' _ _).1 (affineSpan_mono k (Set.subset_insert _ _)) _ h₂), collinear_insert_iff_of_mem_affineSpan h₃] exact collinear_pair _ _ _ #align collinear_insert_insert_insert_of_mem_affine_span_pair collinear_insert_insert_insert_of_mem_affineSpan_pair /-- If three points lie in the affine span of two points, the first four points are collinear. -/ theorem collinear_insert_insert_insert_left_of_mem_affineSpan_pair {p₁ p₂ p₃ p₄ p₅ : P} (h₁ : p₁ ∈ line[k, p₄, p₅]) (h₂ : p₂ ∈ line[k, p₄, p₅]) (h₃ : p₃ ∈ line[k, p₄, p₅]) : Collinear k ({p₁, p₂, p₃, p₄} : Set P) := by refine' (collinear_insert_insert_insert_of_mem_affineSpan_pair h₁ h₂ h₃).Subset _ simp [Set.insert_subset_insert] #align collinear_insert_insert_insert_left_of_mem_affine_span_pair collinear_insert_insert_insert_left_of_mem_affineSpan_pair /-- If three points lie in the affine span of two points, the first three points are collinear. 
-/ theorem collinear_triple_of_mem_affineSpan_pair {p₁ p₂ p₃ p₄ p₅ : P} (h₁ : p₁ ∈ line[k, p₄, p₅]) (h₂ : p₂ ∈ line[k, p₄, p₅]) (h₃ : p₃ ∈ line[k, p₄, p₅]) : Collinear k ({p₁, p₂, p₃} : Set P) := by refine' (collinear_insert_insert_insert_left_of_mem_affineSpan_pair h₁ h₂ h₃).Subset _ simp [Set.insert_subset_insert] #align collinear_triple_of_mem_affine_span_pair collinear_triple_of_mem_affineSpan_pair variable (k) /-- A set of points is coplanar if their `vector_span` has dimension at most `2`. -/ def Coplanar (s : Set P) : Prop := Module.rank k (vectorSpan k s) ≤ 2 #align coplanar Coplanar variable {k} /-- The `vector_span` of coplanar points is finite-dimensional. -/ theorem Coplanar.finiteDimensional_vectorSpan {s : Set P} (h : Coplanar k s) : FiniteDimensional k (vectorSpan k s) := by refine' IsNoetherian.iff_fg.1 (IsNoetherian.iff_dim_lt_aleph0.2 (lt_of_le_of_lt h _)) simp #align coplanar.finite_dimensional_vector_span Coplanar.finiteDimensional_vectorSpan /-- The direction of the affine span of coplanar points is finite-dimensional. -/ theorem Coplanar.finiteDimensional_direction_affineSpan {s : Set P} (h : Coplanar k s) : FiniteDimensional k (affineSpan k s).direction := (direction_affineSpan k s).symm ▸ h.finiteDimensional_vectorSpan #align coplanar.finite_dimensional_direction_affine_span Coplanar.finiteDimensional_direction_affineSpan /-- A set of points, whose `vector_span` is finite-dimensional, is coplanar if and only if their `vector_span` has dimension at most `2`. -/ theorem coplanar_iff_finrank_le_two {s : Set P} [FiniteDimensional k (vectorSpan k s)] : Coplanar k s ↔ finrank k (vectorSpan k s) ≤ 2 := by have h : Coplanar k s ↔ Module.rank k (vectorSpan k s) ≤ 2 := Iff.rfl rw [← finrank_eq_dim] at h exact_mod_cast h #align coplanar_iff_finrank_le_two coplanar_iff_finrank_le_two alias coplanar_iff_finrank_le_two ↔ Coplanar.finrank_le_two _ #align coplanar.finrank_le_two Coplanar.finrank_le_two /-- A subset of a coplanar set is coplanar. -/ theorem Coplanar.subset {s₁ s₂ : Set P} (hs : s₁ ⊆ s₂) (h : Coplanar k s₂) : Coplanar k s₁ := (dim_le_of_submodule (vectorSpan k s₁) (vectorSpan k s₂) (vectorSpan_mono k hs)).trans h #align coplanar.subset Coplanar.subset /-- Collinear points are coplanar. -/ theorem Collinear.coplanar {s : Set P} (h : Collinear k s) : Coplanar k s := le_trans h one_le_two #align collinear.coplanar Collinear.coplanar variable (k) (P) /-- The empty set is coplanar. -/ theorem coplanar_empty : Coplanar k (∅ : Set P) := (collinear_empty k P).Coplanar #align coplanar_empty coplanar_empty variable {P} /-- A single point is coplanar. -/ theorem coplanar_singleton (p : P) : Coplanar k ({p} : Set P) := (collinear_singleton k p).Coplanar #align coplanar_singleton coplanar_singleton /-- Two points are coplanar. -/ theorem coplanar_pair (p₁ p₂ : P) : Coplanar k ({p₁, p₂} : Set P) := (collinear_pair k p₁ p₂).Coplanar #align coplanar_pair coplanar_pair variable {k} /-- Adding a point in the affine span of a set does not change whether that set is coplanar. 
-/ theorem coplanar_insert_iff_of_mem_affineSpan {s : Set P} {p : P} (h : p ∈ affineSpan k s) : Coplanar k (insert p s) ↔ Coplanar k s := by rw [Coplanar, Coplanar, vectorSpan_insert_eq_vectorSpan h] #align coplanar_insert_iff_of_mem_affine_span coplanar_insert_iff_of_mem_affineSpan end AffineSpace' section DivisionRing variable {k : Type _} {V : Type _} {P : Type _} include V open AffineSubspace FiniteDimensional Module variable [DivisionRing k] [AddCommGroup V] [Module k V] [affine_space V P] /-- Adding a point to a finite-dimensional subspace increases the dimension by at most one. -/ theorem finrank_vectorSpan_insert_le (s : AffineSubspace k P) (p : P) : finrank k (vectorSpan k (insert p (s : Set P))) ≤ finrank k s.direction + 1 := by by_cases hf : FiniteDimensional k s.direction; swap · have hf' : ¬FiniteDimensional k (vectorSpan k (insert p (s : Set P))) := by intro h have h' : s.direction ≤ vectorSpan k (insert p (s : Set P)) := by conv_lhs => rw [← affine_span_coe s, direction_affineSpan] exact vectorSpan_mono k (Set.subset_insert _ _) exact hf (Submodule.finiteDimensional_of_le h') rw [finrank_of_infinite_dimensional hf, finrank_of_infinite_dimensional hf', zero_add] exact zero_le_one haveI := hf rw [← direction_affineSpan, ← affineSpan_insert_affineSpan] rcases(s : Set P).eq_empty_or_nonempty with (hs | ⟨p₀, hp₀⟩) · rw [coe_eq_bot_iff] at hs rw [hs, bot_coe, span_empty, bot_coe, direction_affineSpan, direction_bot, finrank_bot, zero_add] convert zero_le_one' ℕ rw [← finrank_bot k V] convert rfl <;> simp · rw [affine_span_coe, direction_affine_span_insert hp₀, add_comm] refine' (Submodule.dim_add_le_dim_add_dim _ _).trans (add_le_add_right _ _) refine' finrank_le_one ⟨p -ᵥ p₀, Submodule.mem_span_singleton_self _⟩ fun v => _ have h := v.property rw [Submodule.mem_span_singleton] at h rcases h with ⟨c, hc⟩ refine' ⟨c, _⟩ ext exact hc #align finrank_vector_span_insert_le finrank_vectorSpan_insert_le variable (k) /-- Adding a point to a set with a finite-dimensional span increases the dimension by at most one. -/ theorem finrank_vectorSpan_insert_le_set (s : Set P) (p : P) : finrank k (vectorSpan k (insert p s)) ≤ finrank k (vectorSpan k s) + 1 := by rw [← direction_affineSpan, ← affineSpan_insert_affineSpan, direction_affineSpan] refine' (finrank_vectorSpan_insert_le _ _).trans (add_le_add_right _ _) rw [direction_affineSpan] #align finrank_vector_span_insert_le_set finrank_vectorSpan_insert_le_set variable {k} /-- Adding a point to a collinear set produces a coplanar set. -/ theorem Collinear.coplanar_insert {s : Set P} (h : Collinear k s) (p : P) : Coplanar k (insert p s) := by haveI := h.finite_dimensional_vector_span rw [coplanar_iff_finrank_le_two] exact (finrank_vectorSpan_insert_le_set k s p).trans (add_le_add_right h.finrank_le_one _) #align collinear.coplanar_insert Collinear.coplanar_insert /-- A set of points in a two-dimensional space is coplanar. -/ theorem coplanar_of_finrank_eq_two (s : Set P) (h : finrank k V = 2) : Coplanar k s := by haveI := finite_dimensional_of_finrank_eq_succ h rw [coplanar_iff_finrank_le_two, ← h] exact Submodule.finrank_le _ #align coplanar_of_finrank_eq_two coplanar_of_finrank_eq_two /-- A set of points in a two-dimensional space is coplanar. -/ theorem coplanar_of_fact_finrank_eq_two (s : Set P) [h : Fact (finrank k V = 2)] : Coplanar k s := coplanar_of_finrank_eq_two s h.out #align coplanar_of_fact_finrank_eq_two coplanar_of_fact_finrank_eq_two variable (k) /-- Three points are coplanar. 
-/ theorem coplanar_triple (p₁ p₂ p₃ : P) : Coplanar k ({p₁, p₂, p₃} : Set P) := (collinear_pair k p₂ p₃).coplanar_insert p₁ #align coplanar_triple coplanar_triple end DivisionRing namespace AffineBasis universe u₁ u₂ u₃ u₄ variable {ι : Type u₁} {k : Type u₂} {V : Type u₃} {P : Type u₄} variable [AddCommGroup V] [affine_space V P] section DivisionRing variable [DivisionRing k] [Module k V] include V protected theorem finiteDimensional [Finite ι] (b : AffineBasis ι k P) : FiniteDimensional k V := let ⟨i⟩ := b.Nonempty FiniteDimensional.of_fintype_basis (b.basisOf i) #align affine_basis.finite_dimensional AffineBasis.finiteDimensional protected theorem finite [FiniteDimensional k V] (b : AffineBasis ι k P) : Finite ι := finite_of_fin_dim_affineIndependent k b.ind #align affine_basis.finite AffineBasis.finite protected theorem finite_set [FiniteDimensional k V] {s : Set ι} (b : AffineBasis s k P) : s.Finite := finite_set_of_fin_dim_affineIndependent k b.ind #align affine_basis.finite_set AffineBasis.finite_set theorem card_eq_finrank_add_one [Fintype ι] (b : AffineBasis ι k P) : Fintype.card ι = FiniteDimensional.finrank k V + 1 := haveI := b.finite_dimensional b.ind.affine_span_eq_top_iff_card_eq_finrank_add_one.mp b.tot #align affine_basis.card_eq_finrank_add_one AffineBasis.card_eq_finrank_add_one variable {k V P} theorem exists_affineBasis_of_finiteDimensional [Fintype ι] [FiniteDimensional k V] (h : Fintype.card ι = FiniteDimensional.finrank k V + 1) : Nonempty (AffineBasis ι k P) := by obtain ⟨s, b, hb⟩ := AffineBasis.exists_affineBasis k V P lift s to Finset P using b.finite_set refine' ⟨b.reindex <| Fintype.equivOfCardEq _⟩ rw [h, ← b.card_eq_finrank_add_one] #align affine_basis.exists_affine_basis_of_finite_dimensional AffineBasis.exists_affineBasis_of_finiteDimensional end DivisionRing end AffineBasis
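/- Illustrative usage, not part of the ported file: the collinearity lemmas
above compose as expected, e.g. any subset of a pair of points is coplanar.
The variable binders are restated explicitly because the sections above are
closed at this point. -/
example {k : Type _} {V : Type _} {P : Type _} [DivisionRing k] [AddCommGroup V]
    [Module k V] [affine_space V P] {p₁ p₂ : P} {s : Set P}
    (hs : s ⊆ {p₁, p₂}) : Coplanar k s :=
  ((collinear_pair k p₁ p₂).subset hs).coplanar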
% $Header: ch1.tex,v 1.11 93/08/02 21:16:12 goossens Exp $
\Filename{H1-FZ-User-Specifications}
\chapter{User specifications for the FZ package}
\Filename{H2-FZ-Data-Structure-Representation}
\section{Representation of a data-structure}

The unit of information on a ZEBRA file is the \textbf{data-structure}.
It may consist of zero, one, two, or more \textbf{data-segments}. The
data-segments reflect the original residence of different parts of the
data-structure in different divisions at the moment when the d/s was
transferred from memory to the file with \Rind{FZOUT}. When the d/s is
transferred back from the file to memory with \Rind{FZIN} individual
data-segments may be directed to separate divisions, or may be ignored.

User information which may be associated with each d/s is the 'user header
vector', specified and received via parameters to \Rind{FZOUT} and
\Rind{FZIN}, and the 'text vector' taken from and delivered to the
text-buffer associated with the file with \Rind{FZTXAS} (implementation of
this routine is pending).

On the file the data-structure is represented by the 'pilot information'
followed by the 'bank material'. The pilot carries all the control and
context information, namely:
\begin{itemize}
\item the amount of memory required to receive the data-structure;
\item the entry address into the data-structure, if any;
\item the user-header vector with its I/O characteristics, if any;
\item the text-vector, if any;
\item the segment table, if any, describing which data segment comes from
      which division;
\item the relocation table, if any, describing the original position in
      memory of each contiguous set of banks, needed to update all links
      in the data-structure for the new positions in memory on input.
\end{itemize}

This is followed by the 'bank-material', which carries the copy of the
memory regions originally occupied by the banks of the data-structure.
The data-structure may in fact be empty, in which case there is no
bank-material.

In Native Data Format the bank material on the file is a simple dump of
the memory; but in Exchange Data Format the numbers have to be transformed
from the internal to the exchange representation. To make this possible
automatically, every bank has to carry its 'I/O characteristic' describing
the integer/floating/Hollerith nature of its contents exactly; see the
Zebra Reference Manual, book MZ, paragraphs MZLIFT and MZFORM for a
description. Banks of type 'undefined' cannot be transported. The exact
details for the file and data formats are found in Chapter 3.

\Filename{H2-F2-events-runs-files}
\section{Events, Runs, and Files}

The unit of information on a ZEBRA file is the data-structure. Several
data-structures may (but need not) be grouped into an \textbf{event}. On
the file events are separated by the 'start-of-event' flag being present
in the first data-structure of each event. \Rind{FZIN} may be asked to
skip forward to and read the next 'start-of-event' data-structure.

Several events (or d/s) may (but need not) be grouped into a \textbf{run}.
On the file the start and the end of a run are marked by special
\textbf{StoR} and \textbf{EoR} records written by calling \Rind{FZRUN}.
\Rind{FZIN} may be asked to skip forward to and read the next
'start-of-run' record. (Skipping forward to the next run or event should
not be used for the medium Memory or Channel.)
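For instance, a producer might bracket its events as follows (a sketch
only; the \Rind{FZRUN} and \Rind{FZOUT} calls are specified in detail
later in this chapter):
\begin{verbatim}
      CALL FZRUN (LUN,JRUN,0,0)       start-of-run record for run JRUN
          ... FZOUT calls for the events of run JRUN ...
      CALL FZRUN (LUN,JRUN+1,0,0)     implies end of the previous run
\end{verbatim}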
A ZEBRA file has to be terminated. The writing of End-of-File is a
perennial problem, as the requirements for different kinds of files are
different for different machines and different media. Thus for example,
on the IBM system MVS one should not terminate a disk file by an ENDFILE
statement, as this inhibits the release of the unused space on the disk.
A tape, on the other hand, should be terminated by a double EoF which may
or may not be provided by the system, yet on the VAX the program will
collapse if one tries to ENDFILE an unlabelled tape.

In principle, a Zebra file may logically consist of several files on the
same medium. To implement this rigorously on all machines the special
\textbf{Zebra EoF} record is provided (an end-of-run record which
immediately follows a true end-of-run is also interpreted as EoF). It is
written by a call to \Rind{FZENDO} with one of the options T, N, C, or I.
Whether or not the writing of a Zebra EoF signal is followed by the
explicit request to write one or two system file-marks (for end-of-file or
end-of-data) depends on the circumstances.

Most machines do not support multi-file disk files, and some machines do
not even support multi-file tape files. To a first approximation, a Zebra
file is assumed not to contain imbedded system file-marks. For output this
means that no file-marks are written explicitly, leaving the file
termination to the system; for input it means that a system file-mark is
interpreted as 'end-of-data'. A different behaviour can be selected when
calling \Rind{FZFILE} by setting the NEOF parameter associated with the
file to 1, 2, or 3 as explained in para. 1.04.

\Filename{H2-FZ-Outline-Usage-for-disk-tape}
\section{Outline of usage for medium Disk or Tape}

Before using FZ, the routines MZEBRA and MZSTOR must have been called. FZ
uses the system division of the primary store to hold the
control-information about all its files. One bank per file is used,
containing the parameters of the file, the statistics of usage of the
file, and also the physical record buffer, if the file format is
'exchange'.

{\large\bf Initialization}

Before using a particular file, it should be \textbf{opened}, normally
with the Fortran OPEN statement or with the C interface routine CFOPEN
(except for files which are read/written by special machine-dependent
packages, such as IOPACK on IBM). The ZEBRA handling of this file must be
\textbf{initialized} by calling \Rind{FZFILE}. Machine-dependent details
about opening files are given in chapter 2.

The call to \Rind{FZFILE} specifies the properties of the file and the
processing direction, for example:
\begin{verbatim}
      CALL FZFILE (LUN,0,'.')    native mode, input only, disk file
      CALL FZFILE (LUN,0,'IO')   native mode, disk file,
                                 input-output or output-input,
      CALL FZFILE (LUN,0,'XO')   exchange mode, output only, disk file
      CALL FZFILE (LUN,0,'D')    exchange mode, input only, disk file
                                 reading with direct-access Fortran
      CALL FZFILE (LUN,0,'TL')   exchange mode, input only, tape file
                                 to be read via the C Library
\end{verbatim}
Note that the Fortran systems on some Unix machines, like on SUN or
Silicon Graphics, are not capable of handling fixed-length records in
sequential mode, i.e. \Lit{RECORDTYPE='FIXED'} is not available in their
Fortran OPEN statement. In this case one has to use the direct-access
mode, or the C library mode, of FZ for exchange format files.
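As an illustration (a sketch only: the file name is invented, the exact
OPEN keywords are machine-dependent, and the unit of RECL -- bytes or
words -- varies from system to system, cf. chapter 2), reading such a file
through the direct-access route might be initialized with:
\begin{verbatim}
      OPEN (LUN, FILE='events.fzx', STATUS='OLD',
     +      FORM='UNFORMATTED', ACCESS='DIRECT', RECL=3600)
      CALL FZFILE (LUN,900,'D')
\end{verbatim}
with the standard 900-word physical records of the exchange format
corresponding to 3600 bytes on a 32-bit machine.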
If one is debugging a program, it can be useful to set the logging level
of FZ for this file to 2 with
\begin{verbatim}
      CALL FZLOGL (LUN,2)
\end{verbatim}
causing FZ to print a log message whenever it is called for this file.

\subsection*{Input}

To simply read the next data-structure, one calls for example with:
\begin{verbatim}
      PARAMETER (NUHMAX=100)
      DIMENSION IUHEAD(NUHMAX)
      COMMON /QUEST/IQUEST(100)

      NUH = NUHMAX
      CALL FZIN (LUN, IXDIV, LSUP,JBIAS, '.', NUH,IUHEAD)
      IF (IQUEST(1).NE.0) GO TO special
\end{verbatim}
This will read the next d/s into the division indicated by IXDIV; it will
transfer the user-header-vector into IUHEAD*, NUHMAX words at most,
returning in *NUH* its useful size. It will connect the d/s read into a
higher level d/s (if any) according to the parameters !LSUP and JBIAS,
which have the same significance as with MZLIFT or ZSHUNT. (For the use of
the symbols '*' and '!' see 'Principles'.)

On normal completion \Rind{FZIN} returns IQUEST(1)=0; a positive value
indicates an exception, like Start-of-run or End-of-data; a negative value
signals trouble. IQUEST(1) \textbf{must} be tested after every call to
\Rind{FZIN}.

Frequently one is interested in processing only a particular kind of
data-structure, wanting to rapidly skip any others which might be on the
file. To make this possible the data must be organised to contain all the
information relevant to selection in the user header vector, because one
can ask \Rind{FZIN} to start the d/s by reading the pilot information
only, delivering the user header vector to the caller, leaving the
bank-material in suspense, waiting for a decision. If the d/s is to be
rejected, all the work of bringing it into memory with adjustment of the
links can be saved.

To get the user header vector of the next d/s one specifies the S option
(Select) in a first call to \Rind{FZIN}; a second call with the A option
will transfer the d/s to memory, for example:
\begin{verbatim}
C--       Ready to select next d/s
   11 NUH = NUHMAX
      CALL FZIN (LUN, IXDIV, 0,0, 'S', NUH,IUHEAD)
      IF (IQUEST(1).NE.0) GO TO special
      IF (not wanted)     GO TO 11

C--       Accept pending D/S
      CALL FZIN (LUN, IXDIV, LSUP,JBIAS, 'A', 0,0)
      IF (IQUEST(1).NE.0) GO TO special
\end{verbatim}
Whilst accepting is done by an explicit call with the A option, rejection
is done implicitly by asking for the next d/s.

Having reached the end of the input file (or having decided to stop input
for some other reason), one can get the statistics of file usage printed
by
\begin{verbatim}
      CALL FZENDI (LUN,option)
\end{verbatim}
'option' indicates the further action to be taken on this file, such as
REWIND and re-start reading from the beginning, or start writing on the
file positioned by reading it, or simply terminate. Beware: for exchange
format files one can switch from input to output only after having read an
end-of-run or end-of-file.

If one wants to read several different files on the same logical unit
number (thereby possibly saving I/O buffers in the system), this can be
done as indicated by this sketch, provided all the files have the same
characteristics:
\begin{verbatim}
      OPEN (LUN,FILE=<file 1>,...)
      CALL FZFILE (LUN,0,opt)
          read first file
      CALL FZENDI (LUN,'NX')         new file to be connected
      OPEN (LUN,FILE=<file 2>,...)
          read second file
      CALL FZENDI (LUN,'NX')
      OPEN (LUN,FILE=<file 3>,...)
          read third file
      . . . . .
\end{verbatim}
If the files are \textbf{not} of the same kind, for example if the first
file is in native mode and the second file is in exchange mode,
\Rind{FZENDI} must be told to forget all about the first file, so that a
new file can be started on the same logical unit number, for example:
\begin{verbatim}
      OPEN (LUN,FILE=<file 1>,...)
      CALL FZFILE (LUN,0,'.')        native mode
          read first file
      CALL FZENDI (LUN,'TX')         terminate
      OPEN (LUN,FILE=<file 2>,...)
      CALL FZFILE (LUN,0,'X')        exchange mode
          read second file
      CALL FZENDI (LUN,'TX')
\end{verbatim}
(This is necessary because the size and character of the FZ control bank
depends on the nature of the file.)

\subsection*{Output}

It may be desirable to group the output into 'runs', in which case one
would start a new run with, for example:
\begin{verbatim}
      JRUN = run number
      . . .
      CALL FZRUN (LUN,JRUN,0,0)
\end{verbatim}
It is possible to store user information into the 'start-of-run' record
via the last two parameters of the call. There is however the danger, if
this information is essential for the processing of the data of the run,
that the start-of-run record may get lost due to read errors. (An
end-of-run record can be requested explicitly, but normally this is not
necessary, since it is triggered by a new run, or by \Rind{FZENDO}.)

To output a d/s from the primary store, supported by the bank at !LHEAD,
together with a user header vector in IUHEAD of NUH integer words, one may
call:
\begin{verbatim}
      CALL FZOUT (LUN,0,LHEAD,0,'L',2,NUH,IUHEAD)
\end{verbatim}
In this case, the material to be output is defined solely by the entry
address !LHEAD into the d/s. Therefore \Rind{FZOUT} has to do a logical
walk through the complete d/s by following all the structural links, to
mark all the banks belonging to this d/s. A subsequent sequential scan
over the memory constructs the table of the memory regions to be output.

For a large d/s the time spent on this operation may be non-negligible;
it can be saved if the user has organized his data such that the d/s to be
output resides in a separate division [or divisions] of which it has
exclusive use. In this case one can instruct \Rind{FZOUT} to simply output
the complete division IXDIV [or divisions IXDIV1 + IXDIV2], without the
need for the logical walk, by calling:
\begin{verbatim}
      [ IXDIV = MZIXCO (IXDIV1,IXDIV2,0,0) ]
      CALL FZOUT (LUN,IXDIV,LHEAD,0,'D',2,NUH,IUHEAD)
\end{verbatim}
The entry address !LHEAD is still needed, no longer to define the data to
be written, but for the receiver to find his way into the d/s read.

Although option 'D' saves the logical walk, \Rind{FZOUT} still has to do
the sequential scan of the division[s] to identify the live banks to be
written, and the dead banks to be suppressed. If the user knows that there
are no dead banks, or that their volume is negligible, he can indicate
this to \Rind{FZOUT} with the 'DI' option, causing it to write the
complete division[s] as it stands:
\begin{verbatim}
      CALL FZOUT (LUN,IXDIV,LHEAD,0,'DI',2,NUH,IUHEAD)
\end{verbatim}

Occasionally the d/s to be written out is not described as easily as
assumed above, for example one may want to write a data-structure minus
some of its sub-structures. In this case (cf para.
1.21) the user may pre-mark the banks to be output and
\begin{verbatim}
      CALL FZOUT (LUN,IXDIV,LHEAD,0,'M',2,NUH,IUHEAD)
\end{verbatim}

Output of a file must be terminated, to make sure that the last physical
record is transferred from the buffer to the file, for example with:
\begin{verbatim}
      CALL FZENDO (LUN,'I')
\end{verbatim}
to re-read the file just written; or with:
\begin{verbatim}
      CALL FZENDO (LUN,'TX')
\end{verbatim}
if the program no longer needs this file.

The recommended procedure is to have a standard job-termination routine,
called \Rind{ZEND}, normally called from the Main program. This routine is
called also from the ZEBRA recovery system in case of abnormal job
termination. Into this routine one should include a
\begin{verbatim}
      CALL FZENDO (0,'TX')
\end{verbatim}
to terminate all pending output files. However, this call pulls in the
non-negligible volume of code for the \Rind{FZOUT} complex, and should
hence be present only for programs really using \Rind{FZOUT}.

Writing several different files to the same logical unit can be done in
complete analogy to the case of reading; in the examples given above one
has to add the 'O' option for \Rind{FZFILE}, and one has to change the
calls to \Rind{FZENDI} into calls to \Rind{FZENDO}.

\Filename{H2-FZ-Initialize-ZEBRA-file}
\section{FZFILE - initialize a ZEBRA file}

To initialize a Zebra file:
\begin{verbatim}
   Fortran:  OPEN (LUN, FILE=name, ...
   C:        CALL CFOPEN (LUNPTR, ..., name, ...)
             IQUEST(1) = LUNPTR
\end{verbatim}

\Shubr{FZFILE}{(LUN,LREC,CHOPT)}
\begin{verbatim}
 with  LUN:   logical unit number (Fortran)
              or Zebra stream identifier (otherwise),
              this must be a unique small positive integer

       LREC:  record length, in words (ignored if A option)
              native file format - maximum logical record length
                 zero: standard limit: 2440 words
                 +ve:  user defined limit, but < 2500
              exchange file format - physical record length
                 zero: standard length: 900 words
                 +ve:  user defined length
                       must be a multiple of 30 words

       CHOPT: character string, individual characters select options:

       medium:    *  sequential binary disk file, default
                  T  magnetic tape
                  D  direct access disk file
                  A  alfa: 80 column card-image disk file
                  C  channel mode
                  M  memory mode

       usage:     F  read/write with Fortran, default (except IBM)
                  Y  read/write with special machine specific code
                     (IBM has IOPACK, NORD has MAGTAP)
                  L  read/write with interface to the C Library
                  K  read/write with user supplied code

       file format:  native file format is default
                  X  exchange file format
                     modes M, C, A, D, L, K all imply 'X'

       data format:  native is default for native file format
                     exchange is default for exchange file format
                  N  native data format
\end{verbatim}
\begin{verbatim}
       direction:    default direction is 'input only'
                  I  input enabled
                  O  output enabled
                  IO input/output enabled

       various:   S  separate d/ss
                  U  unpacked d/ss, only with modes M or C
                  R  initial rewind
                  Q  quiet, set logging level to -2
                  P  permissive, enable error return,
                     see 'Status returned' just below

       NEOF:  handling of system EoF
              for output:  0  write no file-marks at all
                           1  write file-mark only for End-of-File
                           2  write file-mark only for End-of-Data
                           3  write file-marks both for EoF and EoD
              for input:   1 or 3  one file-mark signals 'end-of-file',
                           otherwise:  file-mark signals 'end-of-data'
\end{verbatim}

\subsection*{Status returned in \Lit{/QUEST/IQUEST(100)}}
\begin{verbatim}
  IQUEST(1) = 0  all is well
              1  file has already been initialized with FZFILE
              2  LUN is invalid
              3  requested format is not available on the
                 particular Zebra library (either because of
                 the installation options taken, or
                 because the code is not ready for the
                 particular machine)
              4  the file pointer is zero for modes L or K
\end{verbatim}
The error returns are enabled only if the P option is selected, otherwise
control goes to ZFATAL. If the P option is selected, the status must be
checked, because the file will not be initialized if an error exit is
taken.

\subsection*{Option compatibility diagram}
\begin{verbatim}
                M C A D T * K L Y
  channel   C   -                    +  combination useful
  alfa      A   - -                  -  combination not allowed
  direct    D   - + -                i  option implied
  tape      T   - - -                d  option default
  neither   *   - - - - -            ?  depends on the user's implementation
  user      K   - - - ? ? +
  lib C     L   - - - + + + -
  special   Y   - - - - + + - -
  Fortran   F   - - i d d d - - -
  exchange  X   i i i i + + i i i
  native    N   + + + + + d + + +
  separate  S   i + + + + + + + +
  unpacked  U   + + - - - - - - -
                M C A D T * K L Y
\end{verbatim}

\subsection*{Notes}

\subsubsection*{OPENing:}
\Rind{FZFILE} initializes only the Zebra controls for this file; the
opening of the file has to be done by the user in his calling program,
according to the needs of his machine and operating system.

\subsubsection*{Fortran OPEN:}
if the file is to be handled with Fortran READ/WRITE one needs an OPEN
statement; one will find some hints in chapter 2.

\subsubsection*{C open:}
for modes L or K the file should be opened by calling CFOPEN (see the
specifications at the end of this paragraph) and the 'file pointer'
returned by CFOPEN must be passed on to \Rind{FZFILE} via IQUEST(1).

\subsubsection*{LUN:}
this is the Zebra stream identifier which will be used in all subsequent
calls for this file; if the file is to be handled with Fortran this is at
the same time the logical unit number.

\subsubsection*{LREC:}
for exchange file format it is important to choose a good value for all
one's files, and then stick to it. One has to compromise between
conflicting things: on tapes one would like to make this large, but this
costs memory for the Zebra buffer, multiplied by the number of files
concurrently open, and it wastes disk space for end-of-run records which
occupy a whole physical record. Some numbers can be found in chapter 3.

The physical record size for the exchange file format needs to be
specified both to Zebra with \Rind{FZFILE} and to the system with the OPEN
statement and maybe even with some JCL, in which case the user may need to
know this: the block size is specified to Zebra in words, the default is
900 words. These words correspond to words in the Zebra dynamic store,
such that a bank of 900 words could just fill one block. Except for 32-bit
machines, the number of bits written to the file for each word depends on
the data format: for the exchange data format each word generates 32 bits,
for the native data format a full machine word is transferred. To the
system the block size has to be specified either in bytes or in native
words. For example, on the CRAY (64-bit words) the record-size of a
standard block will have to be given as 7200 bytes (900 machine words) for
the native data format, but as 3600 bytes (450 machine words) for the
exchange data format.

\subsection*{medium M or C:}
for the media 'memory' and 'channel' the Exchange File Format is implied,
and this cannot be changed. The Native Data Format can be selected by
giving the N option. Instructions on the use of these media are given in
separate paragraphs near the end of this chapter.

\subsection*{medium A:}
Alfa mode should only be used to transmit data over a network connection
which cannot handle binary file transfers.
The character representation (ASCII, EBCDIC, etc) used is that of the originating machine; the translation is expected to happen in the network station. Alfa mode must not be used for writing magnetic tapes, it is at least a factor of ten slower than binary.

\subsection*{medium D:}
this serves two different purposes: on some machines Fortran is not capable of handling fixed-length records without system control words in sequential mode, only in direct-access mode, and this only for disk files. A side-effect advantage is better error recovery from lost records on files which have been moved from tape to disk. No timing studies have yet been made to check whether direct access is slower than sequential access. The other purpose is random access to the d/ss on the file, this is described in para. 1.18 'Usage for random access'. Selecting D only gives the possibility, but no obligation for random access: for input Zebra will read the file sequentially except at moments when the user interferes with calls to FZINXT; for output Zebra operates strictly sequentially.

\subsection*{medium T:}
at the moment no distinction is made internally in Zebra between disk and tape files (exception: NORD), but it may turn out that the C interface will have to have a separate branch for tape files on some machines.

\subsection*{usage F or Y:}
read/write with Fortran, option F, is the default if none of Y, L, K are specified. Exception IBM: up to and including Zebra version 3.66 the default for sequential files is Y, that is handling with IOPACK; from version 3.67 onwards the default will be F. Most people now (version 3.65) give option F; those who really want IOPACK should change their programs to request Y, to be insensitive to the transition to 3.67. The only other machine sensitive to Y is presently the NORD: magnetic tapes must be written through the MAGTAP utility, on this machine TX implies Y.

\subsection*{usage L:}
read/write is with the routines CFGET/CFPUT which are part of the interface to the C Library for handling files with fixed-length records. This mode must be used for exchange file format tape files on those Unix machines where Fortran does not provide the parameter \Lit{RECORDTYPE='FIXED'} (or equivalent) in the OPEN statement, like the SUN, or SGI, or DECSTATION. On the same machines one might use this also for disk files as an alternative to option D; no studies have yet been made to see which is faster. L can be combined with D for random access using the C interface.

\subsection*{usage K:}
this is a hook to enable a user to write his own handling of physical records in case that none of the modes provided are satisfactory. Chapter 4 will give some hints of how to do this.

\subsection*{file-format:}
default is 'native' if none of D, A, C, M, L, K is given, which necessarily operate with exchange file format.

\subsection*{data-format:}
for native file format this is 'native'; for exchange file format data format 'exchange' is assumed by default, but native data format can be requested by giving the N option. In this case LREC native words are written for each physical record, and no data translation, packing, or byte inversion, is done.

\subsection*{direction:}
the option 'IO' is needed in two separate cases:
- if the program first writes a new file which it then reads;
- if the program positions an existing file by reading for further output.
In this case the input or output mode of the file is defined by the first I/O action on the file; it can be changed at the end of the first phase only with \Rind{FZENDI} from input to output, or with \Rind{FZENDO} the other way round.

\subsection*{various S:}
for the exchange file format, \Rind{FZOUT} normally places the start of a given d/s just after the end of the previous structure in the same physical record, to economize file space. This may be inconvenient if the file is later to be handled by means other than calling \Rind{FZIN}: giving the S option will force each d/s to start on a new physical record. For the medium 'memory' the S option is implied.

\subsection*{various U:}
only for media 'memory' and 'channel': When handling the physical records for the Data Format 'exchange' it may be more convenient for the user to do himself the unpacking (\Rind{FZIN}) or packing (\Rind{FZOUT}) operation needed, because in this case he has immediate access to the control information in the records. (Note: on the VAX 'packing/unpacking' is in fact byte inversion.) The U option allows this: if given, \Rind{FZOUT} delivers the data non-packed, and \Rind{FZIN} expects data which have already been unpacked by the user.

\subsection*{various R:}
if the initial REWIND is selected the file has to be OPENed before calling \Rind{FZFILE}.

\subsection*{various Q:}
giving this option sets the logging level for this file to -2, i.e. only error messages are printed.

\subsection*{NEOF:}
this parameter controls for output the explicit writing of system file-marks; for input it controls the interpretation of a system file-mark, which can mean either end-of-file or end-of-data (two file-marks in succession always act as end-of-data). On most machines the default value is NEOF=0, meaning single-file files only. This can be over-ridden by giving the 1, 2, or 3 option if multi-file files must be handled. See also para. 1.02 for more explanations.

\subsection*{Specifications for CFOPEN}

Since this is a new KERNLIB routine not yet documented we print its specifications here.

\Shubr{CFOPEN}{(LUNPTR*,MEDIUM,NWREC,IOMODE,NBUF,NAME,ISTAT*)}
\begin{verbatim}
 LUNPTR*  is the 'file pointer' returned by the C library routine
          'fopen', CFOPEN returns it to the caller who must hand it
          on to FZFILE via IQUEST(1).
          This will be zero if the open fails.
 MEDIUM   = 0  for disk file, normal
             1  tape file, normal
             2  disk file, user coded I/O
             3  tape file, user coded I/O
 NWREC    the number of machine words per physical record,
          this is used to calculate the buffer size if NBUF not zero.
 IOMODE   the 'type' parameter of 'fopen', of type CHARACTER:
            r   open for reading
            w   truncate or create for writing
            a   append: open for writing at end of file,
                or create for writing
            r+  open for update (reading and writing)
            w+  truncate or create for update
            a+  append; open or create for update at EOF
 NBUF     not currently used, always give zero
 NAME     the name of the file, of Fortran type CHARACTER.
 ISTAT*   status returned, zero if all is well,
          otherwise a system error code.
\end{verbatim}

\Filename{H2-FZ-Change-logging-level}
\section{FZLOGL - change the logging level of a file}

To change the logging level for a file:

\Shubr{FZLOGL}{(LUN,LOGLEV)}
\begin{verbatim}
 with LUN:    logical unit number
      LOGLEV: logging level
              -3: suppress all messages
              -2: print error messages only
               0: normal mode
               1: normal mode + details of conversion problems
               2: print to monitor CALLs to FZ
               3: print short diagnostic dumps
               4: print full diagnostic dumps
                  to debug user-written output routines
\end{verbatim}
A logging level is attached to each FZ file; by default this is the general system-wide default logging level set by MZEBRA. By giving the Q (quiet) option with \Rind{FZFILE} the level is set to -2. It can be changed later at any time by calling FZLOGL.

\Filename{H2-FZMEMO-Connect-memory-area-for-medium-memory}
\section{FZMEMO - connect user memory area for medium Memory}

To connect the memory area for use by a 'file':

\Shubr{FZMEMO}{(LUN,MBUF,NWBUF)}
\begin{verbatim}
 with LUN:   stream number
      MBUF:  user memory of NWBUF machine words
\end{verbatim}
This must be called after the 'file' has been initialized with \Rind{FZFILE}, and before it is used with \Rind{FZIN} or \Rind{FZOUT}. Different memory areas may be connected by recalling this routine any number of times; see para. 1.19 for explanations.

\Filename{H2-FZHOOK-connect-user-routine-for-medium-Channel}
\section{FZHOOK - connect user routine for medium Channel}

To connect a particular user routine to be called by \Rind{FZIN} or \Rind{FZOUT} for this 'file':
\begin{verbatim}
      EXTERNAL UserSR
\end{verbatim}
\Shubr{FZHOOK}{(LUN, UserSR, 0)}
\spcomp
\begin{verbatim}
 with LUN:    stream number
      UserSR: name of the user routine
      dummy:  the third parameter is not at the moment used,
              but it must be present in the call
\end{verbatim}
\spnorm
This must be called after the 'file' has been initialized with \Rind{FZFILE}, and before it is used with \Rind{FZIN} or \Rind{FZOUT}. Different user routines may be connected by recalling this routine any number of times; see para. 1.20 for explanations.

\Filename{H2-FZLIMI-limit-size-output-file}
\section{FZLIMI - limit the size of an output file}

\Shubr{FZLIMI}{(LUN,ALIMIT)}
\begin{verbatim}
 with LUN:    logical unit number
      ALIMIT: floating point number giving, in Mega-words,
              the limit of the data to be written
              to one reel of tape;
              if zero: increase the limit by one more reel of tape
              if -ve:  unlimited (as initialized by FZFILE)

 Example:            CALL FZLIMI (21, 12.75)
   sets the file-size to 12.75 Mwords for unit 21
 Re-calling later with:
                     CALL FZLIMI (21, 0.)
   sets the file-size to be the current data-volume
   plus 12.75 Mwords
\end{verbatim}
\spnorm
The reason for this facility is the fact that detecting 'end-of-tape' is a problem which cannot be solved satisfactorily in full generality. To help the user who wants control over tape reel switching, ZEBRA counts the total number of words written, and checks after every data-structure written out (but not for start-of-run, end-of-run, end-of-file) whether the limit has been reached. If so, it returns the 'pseudo end-of-tape' condition (cf. \Rind{FZOUT}) for every data-structure output until an increase of the limit to include one more reel of tape is requested with ALIMIT=0. Thus the user can switch tape, call FZLIMI (LUN,0.), and continue to write another tape, again waiting for the 'end-of-tape' signal.
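To illustrate, the resulting tape-switching logic might be organized as follows. This is a sketch only: the routine NEWREL, standing for whatever the installation provides to mount the next reel, is hypothetical, and the FZOUT argument values are just an example (new event, no user header).
\begin{verbatim}
      CALL FZLIMI (LUN, 12.75)
C--   for every data-structure:
   20 CALL FZOUT (LUN,IXDIV,LHEAD,1,' ',2,0,0)
      IF (IQUEST(1).EQ.1) THEN
C--      pseudo end-of-tape: mount the next reel (NEWREL is
C--      hypothetical) and extend the limit by one more reel
         CALL NEWREL (LUN)
         CALL FZLIMI (LUN, 0.)
      ENDIF
\end{verbatim}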
\Filename{H2-FZODAT-storing-recovering-direct-access-table}
\section{FZODAT - storing and recovering the direct access table}

The routines FZODAT and FZIDAT store and retrieve the direct-access table onto and from a file. See para. 1.18 'Usage for random access' for explanations.

To store the Direct-access Table bank:

\Shubr{FZODAT}{(LUN,IXDIV,!LDAT)}
\begin{verbatim}
 with LUN:    logical unit number
      IXDIV:  index of division or store having the DaT bank
      !LDAT:  address of the DaT bank, if non-zero
\end{verbatim}
If LDAT is zero the DaT 'forward reference' record is written to be updated later to contain the address of the DaT; this is useful only as the very first record on the file. If LDAT is non-zero the DaT bank is written and the forward reference record is updated if possible.
\vspace*{12pt}

To retrieve the DaT bank:

\Shubr{FZIDAT}{(LUN,IXDIV,!LSUP,JBIAS)}
\begin{verbatim}
 with LUN:    logical unit number
      IXDIV:  index of the division to receive the DaT bank

 The d/s read is linked into a pre-existing d/s as directed
 by !LSUP and JBIAS, which have the same significance as for MZLIFT:
      !LSUP: if JBIAS < 1: !LSUP is the supporting bank,
      JBIAS:               connection to link LQ(!LSUP-JBIAS)
                           IQUEST(13) returns the entry adr to the d/s
             if JBIAS = 1: *!LSUP is the supporting link,
                           connection to *!LSUP* (top-level d/s)
                           !LSUP* returns the entry adr to the d/s
             if JBIAS = 2: stand-alone d/s, no connection
                           !LSUP* returns the entry adr to the d/s

 Status return:  IQUEST(1) =  0  success
                             -1  DaT not found
                             -2  file is empty
\end{verbatim}
For the symbols '!' and '*' see the convention in 'Principles'.

\Filename{H2-FZRUN-write-RUN-record}
\section{FZRUN - write a RUN record}

To write a start-of-run or end-of-run record:

\Shubr{FZRUN}{(LUN,NRUN,NUH,IUHEAD)}
\begin{verbatim}
 with LUN:    logical unit number
      NRUN:   run number, if +ve:  new run, run number literal
                             zero: new run, increase current
                                   run number by one
                             -ve:  end-of-run record
      NUH:    length of the user information, may be zero, < 401
      IUHEAD: NUH words of user information, integers only
\end{verbatim}
\textbf{Write / Error status} returned: as for \Rind{FZOUT}

A start-of-run record will be preceded by an end-of-run signal if the last action on the file was the writing of a data-structure. The request to write an end-of-run will be by-passed if the last action on the file was the writing of EoR or EoF. For the media 'memory' or 'channel' the writing of end-of-run, if needed, should be requested by an explicit call to \Rind{FZRUN} with NRUN negative, since an implicit generation will not get through to the user.
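For example, a job writing a single run might bracket its event output as follows (a sketch only; NUH=0 means that no user information is attached, and the run number 1042 is arbitrary):
\begin{verbatim}
C--   start-of-run record for run 1042
      CALL FZRUN (LUN, 1042, 0, 0)
      ... repeated calls to FZOUT for the events ...
C--   explicit end-of-run record
      CALL FZRUN (LUN, -1, 0, 0)
\end{verbatim}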
\Filename{H2-FZOUT-write-one-data-structure}
\section{FZOUT - write one data-structure}

To write one data-structure:

\Shubr{FZOUT}{(LUN,IXDIV,!LENTRY,IEVENT,options,IOCH,NUH,IUHEAD)}
\begin{verbatim}
 with LUN:     logical unit number
      IXDIV:   index of division(s)
               may be zero [or IXSTOR] if the D option is not selected
               may be a compound index if the D option is selected
      !LENTRY: entry address of the d/s
               may be zero if the Z option is selected
      IEVENT:  start-of-event flag
               =  0  for event continued
                  1  for new event
               the following values are for use by FZRUN and FZENDO
               and are illegal for calls by the user:
                 13  flush the buffer
                 15  write end-of-file (X mode only)
                 16  write end-of-data
                 14  write end-of-run
                 -1  write start-of-run
      options: character string, individual characters select options:
        select d/s:  mutually exclusive options
               by default the d/s supported by the bank at LENTRY
               is written out (link 0 not followed)
            L  write the d/s supported by the linear structure
               at LENTRY (link 0 followed)
            M  write the banks marked by the user
               see para. 1.21 for details
            D  write complete division(s)
               default: dead banks are squeezed out
               (slower but maybe more economic than DI)
            DI immediate dump of division(s),
               dead banks, if any, are also written out
            S  write the single bank at LENTRY
            Z  zero banks, ie. empty d/s, header only
\end{verbatim}
\begin{verbatim}
        others:
            N  no links, ie. linkless handling (cf 'Principles')
               default: links are significant
            P  permit error returns
               default: exit to ZTELL
      IOCH:    the I/O characteristic for the user header vector;
               as for a bank this may be either 'immediate' if the
               whole vector is of the same type, or it may be composite.
               - immediate: IOCH = 1  all bits
                                   2  all integers
                                   3  all floating
                                   4  all double precision
                                   5  all Hollerith
                                   7  self-describing
               - composite: set up with CALL MZIOCH (IOCH,NW,'format')
                 where IOCH is now a vector of NW words at most
      NUH:     number of words in the user header vector, < 401,
               may be zero, in which case IOCH is not used
      IUHEAD:  the user header vector
\end{verbatim}

\subsection*{Write status returned in \Lit{/QUEST/IQUEST(100)}}
\begin{verbatim}
 IQUEST(1) =  0  normal completion
             +1  'pseudo end-of-tape' condition (cf. FZLIMI)
             -1  first attempt to write after end-of-data
             -2  error return
 IQUEST(5)  = word 1 of the direct access adr of the d/s just written
 IQUEST(6)  = word 2
 IQUEST(9)  = # of useful machine words ready in the user's memory
              only for medium 'memory'
 IQUEST(11) = NWBK, number of words of bank material
 IQUEST(12) = NWTB, size of the relocation table
 IQUEST(13) = number of pilot records written so far
 IQUEST(14) = number of Mwords written so far
 IQUEST(15) = number of words (up to 1 M) written so far
              ie. the total is IQUEST(15) + IQUEST(14)*10**6
 IQUEST(16) = number of logical records written so far
 IQUEST(17) = number of physical records written (exchange mode only)
\end{verbatim}
Further information about the file can be obtained by calling \Rind{FZINFO}, see para. 4.01.

\subsection*{Error status returned in \Lit{/QUEST/IQUEST(100)}}

Normally \Rind{FZOUT} does not return to the caller for (program) errors, but exits to ZTELL. Exceptionally, error returns may be enabled by the P option.
\begin{verbatim}
 IQUEST(1) = -2
 IQUEST(2) = 11: !LENTRY invalid or pointing to a dead bank
           = 12: bank chaining clobbered
           = 13: not enough space for the relocation table
           = 14: medium 'memory': user's memory too small
\end{verbatim}
If the P option is not taken exit is with \Lit{CALL ZTELL (i,1)} with $i=11,12,13,14$.
\\[2mm]
If the actual write operation fails, for example because the disk is full, control is handed to \Rind{ZTELL} (which may return) with:
\begin{verbatim}
      CALL ZTELL (19,0)
 with IQUEST(1) = 19
      IQUEST(2) = who is in trouble ?
                   1 - Fortran sequential
                   2 - Fortran direct access
                  21 - L mode sequential
                  22 - L mode direct-access
                  41 - Alfa mode
      IQUEST(3) = IOSTAT error code returned by the 'write'
      IQUEST(4) = LUN (Zebra stream identifier)
      IQUEST(5) = C file descriptor if writing in L mode
\end{verbatim}

\Filename{H2-FZIN-read-one-data-structure}
\section{FZIN - read one data-structure}

To read the next data-structure one calls \Rind{FZIN}. The return code in IQUEST(1) will tell the caller whether the READ operation was free of error, and whether the object read was a d/s, a start-of-run, an end-of-run, or an end-of-file signal. \Rind{FZIN} may be asked to skip to and then read the next start-of-event d/s or the next start-of-run record.

In the simplest case (opt = '.' or blank) \Rind{FZIN} will read the next data-structure into the division indicated by the parameter IXDIV, at the same time delivering the user-header vector to IUHEAD.

The selective read has been provided to rapidly skip unwanted d/ss without expansion into memory and without relocation of the links: calling \Rind{FZIN} with opt='S' causes reading of the next pilot information only, returning to the user the header-vector (and the text-vector, if any) for taking a decision to read or to skip the 'pending d/s'. Skipping is done by asking for the next d/s; accepting is done by calling \Rind{FZIN} with opt='A'. Note that every call to \Rind{FZIN} has to be checked for the success of the operation by testing on IQUEST(1).

In the cases described so far the complete data-structure is read and is deposited into one particular division. It is however possible to steer individual data segments of the d/s into particular divisions, or to cause them to be ignored. This can be done by using the options T and D, as described separately in the next paragraph.

\subsection*{Specifications for \Rind{FZIN}}

\Rind{FZIN} returns the read status, either normal or error, in IQUEST; be careful about the meaning of status codes 4 and 5: '4' means EoF seen on a file which can be a multi-file file; '5' means 'End-of-Data'. Reading a file which cannot be multi-file can never produce status 4, the end will always be indicated by status 5.

\subsubsection*{Error status}
\begin{verbatim}
 IQUEST(1) = -8 . . . -7  for 3 consecutive errors
             -6  for 2 consecutive errors
             -5  read error
             -4  bad constructs, maybe not a file written by FZOUT
             -3  bad data
             -2  not enough space to read the d/s and its table
             -1  faulty call: T,D,A option given, but no pending d/s
 IQUEST(2) = number of logical records read so far
 IQUEST(3) = number of physical records read so far (exchange mode)
\end{verbatim}
Details about the error that occurred are stored in IQUEST(11) ff. as described in the Zebra Reference Manual, book DIA.
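As an illustration, a minimal sequential read loop checking these status codes could be organized as shown below. This is a sketch only (the full calling sequence is specified next): PROCES is a hypothetical user routine which is assumed to drop the d/s again once it has been processed, and IUHEAD is assumed to be dimensioned to 100 words.
\begin{verbatim}
   10 NUH = 100
      CALL FZIN (LUN,IXDIV,LHEAD,2,' ',NUH,IUHEAD)
      IF (IQUEST(1).LT.0) GO TO 90
      IF (IQUEST(1).EQ.0) CALL PROCES (LHEAD)
      IF (IQUEST(1).NE.5) GO TO 10
C--   end-of-data: terminate the input file
      CALL FZENDI (LUN,'TX')
      RETURN
C--   read error, details in IQUEST(11) ff.
   90 ...
\end{verbatim}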
To read the next data-structure:

\Shubr{FZIN}{(LUN,IXDIV,!LSUP,JBIAS,opt,*NUH*,IUHEAD*)}
\begin{verbatim}
 with LUN:    logical unit number
      IXDIV:  index of the default division to receive the d/s
              zero: division 2 of the primary store
              (ignored if S option given)

 The d/s read is linked into a pre-existing d/s as directed
 by !LSUP and JBIAS, which have the same meaning as for MZLIFT:
      !LSUP: if JBIAS < 1: !LSUP is the supporting bank,
      JBIAS:               connection to link LQ(!LSUP-JBIAS)
                           IQUEST(13) returns the entry adr to the d/s
             if JBIAS = 1: *!LSUP is the supporting link,
                           connection to *!LSUP* (top-level d/s)
                           !LSUP* returns the entry adr to the d/s
             if JBIAS = 2: stand-alone d/s, no connection
                           !LSUP* returns the entry adr to the d/s
              (ignored if options S or T selected)

      options: character string, individual characters select options:
        event:  default: go for the next d/s
             E  skip to and read the next start-of-event d/s
             R  skip to and read the next start-of-run record
             2  skip to and read the next end-of-run record
             3  skip to and read the next Zebra end-of-file
             4  skip to and read the next machine end-of-file
                any skip operation stops also on machine EoF;
                option E or 2 skipping stops also on Zebra EoF,
                option R skipping does not stop on Zebra EoF.
        select: default: read the next header and its d/s
                (may mean: skip pending d/s or current event)
             S  select, read next header and text-vector only
                (may mean: skip pending d/s or current event)
                (LSUP and JBIAS not used)
             T  table, load the segment table for the current d/s
                into /FZCSEG/
                (LSUP, JBIAS, NUH, and IUHEAD not used)
             A  accept, read the pending d/s
                (NUH and IUHEAD not used)
             D  divisional accept, read the pending d/s
                under control from /FZCSEG/
                (NUH and IUHEAD not used)
             F  accept also DaT records, which are normally ignored;
                see para. 1.18 'Usage for random access'
      *NUH*:   size of the user header vector
               on input:  maximum size of IUHEAD
               on output: useful size stored in IUHEAD
               (ignored if options T, A, or D selected)
      IUHEAD*: user header vector
               (ignored if options T, A, or D selected)
\end{verbatim}

\subsubsection*{Normal read status returned in \Lit{/QUEST/IQUEST(100)}}
\begin{verbatim}
 IQUEST(1) = -ve  error, see separate list
               0  normal completion
               1  start-of-run record
               2  end-of-run record
               3  Zebra end-of-file
               4  system end-of-file, continuation possible
               5  system end-of-data, continuation not possible
               6  first attempt to read beyond EoD
 IQUEST(2)  = number of logical records read so far
 IQUEST(3)  = number of physical records read so far (exchange mode)
 IQUEST(5)  = word 1 of the direct access adr of the d/s read
 IQUEST(6)  = word 2  (exchange mode only)
 IQUEST(11)   if IQUEST(1)=0:      = 1 or 0 for yes/no start new event
              if IQUEST(1)=1 or 2: = run number for start/end of run
 IQUEST(12) = processing bits of pilot, normally zero
 IQUEST(13) = LENTRY, the entry address into the data structure
              zero means: empty d/s
              (not yet a valid address if S option return)
 IQUEST(14) = NWBK, the number of words occupied by the d/s in memory
              zero means: empty d/s
 IQUEST(20) = NWIOCH, size of the I/O characteristic
 IQUEST(21) = NWIOCH words of I/O characteristic ...
              for the user header vector
\end{verbatim}
Further information about the file can be obtained by calling FZINFO, see para. 4.01.
\spnorm

\Filename{H2-FZIN-read-one-data-structure-by-segments}
\section{FZIN - read one data-structure by segments}

It may be convenient to represent an event by several separate d/ss on the file. This makes it easy to read selectively only a particular part of every event.
This method has one draw-back: if there are reference links pointing from one part to another part of the event, where both parts are residing simultaneously in memory, and if the two parts are written out by two separate calls to \Rind{FZOUT}, the cross links will be lost on read-back.

To remedy this, the following scheme has been implemented: when a data-structure is transferred from several divisions to the FZ file, the data are 'segmented', i.e. a table is included into the pilot information, indicating the divisions from which the different data segments originated, together with their sizes.

On read-back the user can either skip particular data-segments or he can direct data-segments into particular divisions individually. To do so, three calls to \Rind{FZIN} are necessary:
\begin{enumerate}
\item call with the S option to read the pilot;
\item call with the T option to ready the segment table in /FZCSEG/;
\item call with the D option to read the d/s with distribution.
\end{enumerate}
The second call will present the 'segment table' to the caller in the labelled Common Block
\begin{verbatim}
      COMMON /FZCSEG/ NQSEG,IQSEGH(2,20),IQSEGD(22)

 where: NQSEG = number of segments contained in the pending d/s
                if NQSEG = 0: d/s is not segmented
        IQSEGH(1,J) = char 1-4
        IQSEGH(2,J) =      5-8  of the Hollerith name of the
                      division from which segment J derives
        IQSEGD(J)   index of division selected for segment J

 Note: IQSEGD(21) and IQSEGD(22) are working elements of the system
       and, like NQSEG, must not be modified by the user.
\end{verbatim}
To direct segment J into a given division one should set IQSEGD(J) to the index of that division (or merely to the division number; the store is selected by the parameter IXDIV to \Rind{FZIN}). To cause this segment to be ignored IQSEGD(J) = -1 should be set. IQSEGD(J) containing zero directs this segment into the 'default' division selected by the parameter IXDIV to \Rind{FZIN}. (The vector IQSEGD is preset to zero by \Rind{FZIN}.)

Since /FZCSEG/ is used for segment handling with all streams, both input and output, no other call to FZ for any stream may occur between the second and the third call. Also, having called with the T option does not oblige the user to follow it by a call with the D option; he may call with the A or even the S option, in which cases the segment table is simply ignored.

\Filename{H2-FZINXT-reset-read-point-on-direct-access-file}
\section{FZINXT - reset the read point on a direct access file}

To reset the read point:

\Shubr{FZINXT}{(LUN,MDSA1,MDSA2)}
\begin{verbatim}
 with LUN:   logical unit number or Zebra stream ID
 the 2 word d/s address of the next d/s to be read,
      MDSA1: word 1: physical record number
      MDSA2:      2: off-set within the record, if this is zero
                     the first d/s starting in the record
                     will be used
\end{verbatim}
See para. 1.18 for context information.

\Filename{H2-FZCOPY-copy-one-data-structure-from-input-to-output}
\section{FZCOPY - copy one data-structure from input to output}

FZCOPY will copy a data-structure from the input to the output 'file' without expansion into memory and without translating the data representation, thereby saving the time which would otherwise be spent on these operations. The file-format and the data-format of the input or the output file may be 'exchange' or 'native', but the following restrictions are imposed:
\begin{enumerate}
\item the data-format of the output file must be the same as that of the input file, ie. FZCOPY will not translate between native and exchange data-format.
(Note however that on some machines native and exchange data-format are identical.)
\item the input and the output cannot both be in 'channel' mode.
\item Alfa format is not handled.
\item the logical record length for the input in native file-format must not be longer than 2500 words.
\end{enumerate}
To copy a data-structure one has first to start reading it by calling \Rind{FZIN} with the S option; thereby one obtains the user header, which will normally be used to decide whether or not the copy is wanted. This 'pending' data-structure may then be copied by calling:

\Shubr{FZCOPY}{(LUNIN,LUNOUT,IEVENT,options,IOCH,NUH,IUHEAD)}
\begin{verbatim}
 with LUNIN:   logical unit number of the input file
      LUNOUT:  logical unit number of the output file
      IEVENT:  start-of-event flag
               =  0  for event continued
                  1  for new event
      options: character string, individual characters select options:
        I/O descr.: by default the I/O descriptor from the
                    input file is used for IUHEAD
             I  use the new I/O descriptor given in IOCH
                for the user header vector
             P  special 'permit' option not normally given
      IOCH:    the I/O characteristic for the user header vector;
               this is ignored if the I option is not given;
               as for a bank this may be either 'immediate' if the
               whole vector is of the same type, or it may be composite.
               - immediate: IOCH = 1  all bits
                                   2  all integers
                                   3  all floating
                                   4  all double precision
                                   5  all Hollerith
                                   7  self-describing
               - composite: set up with CALL MZIOCH (IOCH,NW,'format')
                 where IOCH is now a vector of NW words at most
      NUH:     number of words in the user header vector, < 401,
               may be zero, in which case IOCH is not used
      IUHEAD:  the user header vector
\end{verbatim}

\subsection*{Status returned in \Lit{/QUEST/IQUEST(100)}}
\begin{verbatim}
 IQUEST(1) =  0  normal completion
             +1  'pseudo end-of-tape' condition (cf. FZLIMI)
            < 0  input error return, see below
            > 1  output error return, see below

 If normal completion:
 IQUEST(5)  = word 1 of the direct access adr of the d/s just written
 IQUEST(6)  = word 2
 IQUEST(9)  = # of useful machine words ready in the user's memory
              only for medium 'memory'
 IQUEST(11) = NWBK, number of words of bank material
 IQUEST(12) = NWTB, size of the relocation table
 IQUEST(13) = number of pilot records written so far
 IQUEST(14) = number of Mwords written so far
 IQUEST(15) = number of words (up to 1 M) written so far
              ie. the total is IQUEST(15) + IQUEST(14)*10**6
              careful: if this compound is bigger than 2G
              it needs more than 32 bits to hold it
 IQUEST(16) = number of logical records written so far
 IQUEST(17) = number of physical records written (exchange mode only)
\end{verbatim}

\subsubsection*{Input error status}
\begin{verbatim}
 IQUEST(1) = -8 . . . -7  for 3 consecutive errors
             -6  for 2 consecutive errors
             -5  read error
             -4  bad constructs, maybe not a file written by FZOUT
             -3  bad data
             -2  not enough space to read the d/s and its table
             -1  faulty call: no pending d/s,
                 or: Alfa mode,
                 or: input/output have different data format,
                 or: both input/output in channel mode,
                 or: native input record length too long;
                 (code -1 causes ZFATAL unless P option given)
 IQUEST(2) = number of logical records read so far
 IQUEST(3) = number of physical records read so far (exchange mode)
\end{verbatim}
Details about the error that occurred are stored in IQUEST(11) ff. as described in the Zebra Reference Manual, book DIA for \Rind{FZIN}.

\subsubsection*{Output error status returned in \Lit{/QUEST/IQUEST(100)}}

Normally \Rind{FZCOPY} does not return to the caller for (program) errors, but exits to \Rind{ZTELL} or to \Rind{ZFATAL}.
Exceptionally, some such error returns may be enabled by giving the P option in the call. \begin{verbatim} IQUEST(1) = +2 IQUEST(2) = 14: medium 'memory': user's memory too small \end{verbatim} \Filename{H2-FZENDO-output-file-termination} \section{FZENDO - output file termination} To terminate one or all \textbf{output} files: \Shubr{FZENDO}{(LUN,options)} \begin{verbatim} with LUN: logical unit number if zero: all FZ output files options: character string, individual characters select options: main: T terminate: - ensure end-of-data (unless done) - print file statistics (unless done) - drop FZ control-bank N continue output to a new file to be connected by the user to LUN after this call: - ensure end-of-data (unless done) - print file statistics (unless done) C continue on the next file of the same stream: - ensure end-of-file (unless done) - print file statistics (unless done) I switch to input, to read the file just written: - ensure end-of-data (unless done) - print file statistics (unless done) - remove the 'output' permission - rewind and change status to 'input' O output again, to over-write the file just written: - ensure end-of-data (unless done) - print file statistics (unless done) - rewind none print file statistics only over-ruling: T -> N -> C -> I -> O variants: R execute REWIND function, only with T or N U execute UNLOAD function, only with T or N (no action yet) X execute CLOSE function, only with T or N O keep the 'output' permission, only with I Q quiet, suppress printing of file statistics 0,1,2 or 3 only with I: change the NEOF parameter of FZFILE for reading \end{verbatim} To be sure that all output files are closed correctly, even on abnormal job termination, the user should call from \Rind{ZEND}: \qquad\Lit{CALL FZENDO (0, 'TX')} If necessary this is taken as a final close-down signal to be passed on to special I/O packages on some machines (such as \Rind{IOPACK} on IBM). \Filename{H2-FZENDI-input-file-termination} \section{FZENDI - input file termination} To terminate one or all \textbf{input} files: \Shubr{FZENDI}{(LUN,options)} \begin{verbatim} with LUN: logical unit number if zero: all FZ input files options: character string, individual characters select options: main: T terminate: - print file statistics (unless done) - drop FZ control-bank N continue input from a new file to be connected by the user to LUN after this call: - print file statistics (unless done) C continue on the next file of the same stream: - print file statistics (unless done) - step over the system EOF as required on some machines I input again, to re-read the same file: - print file statistics (unless done) - rewind O switch to output, to permit writing on a file positioned by reading: - print file statistics (unless done) - change status to 'output' none print file statistics only over-ruling: T -> N -> C -> I -> O variants: R execute REWIND function, only with T or N U execute UNLOAD function, only with T or N (no action yet) X execute CLOSE function, only with T or N Q quiet, suppress printing of file statistics 0,1,2 or 3 only with O: change the NEOF parameter of FZFILE for writing \end{verbatim} \spnorm Both \Rind{FZENDI} and \Rind{FZENDO} also load the file statistics into the Common \Lit{/FZSTAT/} just like the routine \Rind{FZINFO}, to provide the final statistics, which are not yet available just before \Rind{FZENDO}, and maybe no longer available just after \Rind{FZENDO}. 
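Tying this together with the recommendation given earlier, a minimal job-termination routine might read as follows (a sketch only):
\begin{verbatim}
      SUBROUTINE ZEND
C--   standard job termination, also reached from the
C--   Zebra recovery system on abnormal termination
      CALL FZENDO (0,'TX')
      END
\end{verbatim}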
\Filename{H2-Usage-for-random-access}
\section{Usage for random access}

\subsection*{Principles}

For random access the location of a d/s within its file is specified by its 'data-structure address', the DsA, consisting of 2 words:\\
\hspace*{8mm} word 1 is the number of the physical record in which the d/s starts,\\
\hspace*{8mm} word 2 is the off-set within this record.\\
If the file contains less than 2 Gigawords one can pack this into one word with\\
\hspace*{16mm} $ JDSAP = (JDSA1-1) * LREC + JDSA2 $\\
with LREC the number of words in each physical record, as specified to \Rind{FZFILE}.

For every successful call to \Rind{FZOUT} or \Rind{FZIN} Zebra returns the DsA of the latest d/s in IQUEST(5) and IQUEST(6), provided the file-format is 'exchange', even for a tape file.

To prepare a file for reading it randomly the user would construct a table, the 'direct access table', or DaT, which contains the relevant properties of the d/ss on the file plus their DsA's. This can be done either by an extra read pass over the file to collect the data, or more economically at the time when the file is created. In the latter case the DaT can be written to the file as the last d/s before end-of-file, and is then ready for use for any future reading of the file.

When reading a file initialized with mode D given to \Rind{FZFILE}, repeated calls to \Rind{FZIN} will read the data sequentially. To obtain a particular data-structure one has to call \Rind{FZINXT} specifying the DsA of the wanted d/s; this will reset the 'current read point' for the file and the next call to \Rind{FZIN} will then deliver the wanted d/s.

\subsection*{File creation}

Do this:
\begin{enumerate}
\item call \Rind{FZFILE} giving\\
\hspace*{35mm} CHOPT = 'DO' or 'LDO' if it is a disk file,\\
\hspace*{35mm} CHOPT = 'TO' or 'TLO' if it is a tape file.
\item lift a bank, generously large enough to hold the DaT.
\item \Lit{CALL FZODAT (LUN,0,0)} to create the 'DaT forward-reference record'. This is a small record to provide 2 words to hold the DsA of the DaT itself; they are initialized to zero and on the final call FZODAT will try to update them with the true address if the file is ready for random access. This should be provided on every file which one may later want to access randomly, like a tape file to be copied to disk. This is useful only if this record is the very first record on the file.
\item create the file by repeated calls to \Rind{FZOUT}, compiling at the same time the Direct Access Table, including into it IQUEST(5) and (6) returned from \Rind{FZOUT}.
\item call \Rind{MZPUSH} to reduce the bank holding the DaT to the size actually needed.
\item \Lit{CALL FZODAT (LUN,IXDIV,LBANK)} to store the bank with the DaT onto the file and to update the forward reference record if possible; Zebra will remember the DsA of this d/s in the control-bank for this file.
\item \Lit{CALL FZENDO (LUN,'TX')} to terminate and close the file. This will store the DsA of the DaT bank into the user header of the special Zebra EoF record, which is the very last item on the file.
\end{enumerate}

\subsection*{Reading the file}

Do this:

1) call \Rind{FZFILE} giving CHOPT = 'D' or 'LD' to permit random access.

2) retrieve the direct-access table:
\begin{verbatim}
      CALL FZIDAT (LUN,IXDIV,LDAT,1)
      IF (IQUEST(1).NE.0) GO TO trouble
\end{verbatim}
\hspace*{4mm} This will try to find the Direct access Table:
\begin{enumerate}
\item first it will read the first record on the file to see whether it is a DaT forward reference record containing a non-zero DsA.
If so it will use this adr to read the DaT and deliver it to the user.
\item if this is not successful it will try to get from the operating system the length of the file and then read its last physical record; this should contain the Zebra EoF record with the DsA of the DaT.
\item if this fails it will scan either the last 24 physical records or the whole file for the DaT record, depending on whether it did or did not get some indication of the file size.
\end{enumerate}

3) to get a wanted d/s: find it in the DaT, get its DsA and call FZINXT (LUN,MDS1,MDS2), with MDS the 2-word data-structure address. This resets the read point; calling \Rind{FZIN} (LUN,...) will then deliver the wanted d/s.
\vspace*{12pt}

If access to the table was not immediate, FZIDAT will try to update the forward reference record, if it exists, to ease future use. It can do this only if the file has been opened for read/write, and this has to be signalled by giving CHOPT='DIO' to \Rind{FZFILE}.
\vspace*{12pt}

Keeping the DaT on the file to be read is obviously the simplest way to proceed, but other organisations are possible. For example, if one has a large disk with many files as the data-base for the events of an experiment, one may want to have a separate global DaT for all the events on the disk, with a structure reflecting the files and holding their names. Remember: if one needs a data-base with key-word access or update capability one should use the RZ package of Zebra.

\Filename{H2-Usage-for-medium-Memory}
\section{Usage for medium Memory}

In this mode all the records assembled by \Rind{FZOUT} to represent one particular data-structure are placed one after the other contiguously into an area of memory belonging to the user (rather than being written to a file). One is then free to move the data around, normally from a machine which has no I/O facilities to a machine which has, as in emulator farms. There they may be written to tape. Eventually the data will be brought back to be read with \Rind{FZIN}, either directly from a file, or again from user memory. To 'read' a d/s from user memory, \Rind{FZIN} expects all the records representing the d/s to be present in the memory area.

FZ handles the medium 'memory' only with the \textbf{file format} 'exchange', ie.~with logical 'records' blocked onto fixed-length physical 'records'. The \textbf{data format} can be 'exchange' or 'native'. The details of this format are found in chapter 3.

To use this mode the following is necessary:
\begin{itemize}
\item[--] the 'stream' must be initialized with the M option in \Rind{FZFILE},
\item[--] the options I or O set this stream for input or output (it cannot be used for both concurrently),
\item[--] the parameter LUN to \Rind{FZFILE} identifies the stream,
\item[--] the N option selects the native data format,
\item[--] the U option selects unpacked mode, as explained below;
\item[--] the user memory to be used for this stream has to be connected by calling FZMEMO; different streams may use different memory regions.
\end{itemize}
The connection call is

\Shubr{FZMEMO}{(LUN,MBUF,NWBUF)}

indicating to FZ the whereabouts and the size of one's memory region for stream LUN. NWBUF gives the size in terms of machine words, it must be large enough to hold the largest d/s to be handled; see the memory-size considerations further down.

\subsection*{Output}

In the case of output one calls \Rind{FZOUT} (also \Rind{FZRUN}), on return one will find the physical records representing the d/s in one's memory.
\Rind{FZOUT} starts this string of records with a 'steering block', ie.~a physical record with 8 control words (see chapter 3 for details), placed at the start of the user's memory; the remaining data are placed as 'fast blocks', if any. The last block is normally used only partially, the unused part is marked out with a padding record, ie.~a logical record of type 5, of which only the record length and the type code are significant, the data contents are irrelevant.

\Rind{FZOUT} returns in IQUEST(9) the number of useful machine words stored into the memory. It includes the (normally) 2 control words of the padding record, but not its data words. If one needs it, one can compute the number of physical records as 1 + (IQUEST(9)-1)/LENREC where LENREC is the physical record size in machine words. On machines with a word-size greater than 32 bits this needs a little care, as explained in the note about LREC of para. 1.04 for \Rind{FZFILE}, and also below.

To obtain a start-of-run record one can call \Rind{FZRUN} to get it into the memory, the useful size is again found in IQUEST(9). To get an end-of-run record one has to call \Rind{FZRUN} explicitly; one may call \Rind{FZENDO} to have the file statistics printed, but this will not put anything into the user's memory.

\subsection*{Input}

To read back a d/s originally produced by \Rind{FZOUT} and then shipped around, one has to store it in the user memory, connected with FZMEMO, and call \Rind{FZIN}. This will transfer the data to the dynamic store (converting them if the Exchange Data Format is in use) and relocate the links. The user's memory is left intact. IQUEST(1) returns the read status and must be checked after every call to \Rind{FZIN}.

\subsection*{Lay-out of the memory and size considerations}

Using the information of chapter 3, one obtains the lay-out of the d/s in the memory produced by \Rind{FZOUT} as shown on the next page. NWPHR is the physical record length as defined by the call to \Rind{FZFILE}, it is 900 by default.

If one is using the native data format (or if one is running on a machine where the native format is identical to the exchange format, ie.~on IEEE machines such as Apollo, Alliant, Motorola) this lay-out applies literally, the numbers are usable directly, and \Rind{FZOUT} will return IQUEST(9)=NWUSE.

However, if one is using the exchange data format the data are converted to their exchange representation, and they are also 'packed'. For example, on the Cray two words are packed into one machine word, and \Rind{FZOUT} will return IQUEST(9)=(NWUSE+1)/2. On a VAX 'packing' means byte inversion, so also in this case the data are not usable directly.

Use of the U option: the situation can arise that one needs the exchange format, because communication is between different machines, but that one also needs access to the control information at the beginning of the data. One could gain this access by local unpacking, but this is messy. It is more elegant to take charge of the packing oneself, and instruct \Rind{FZOUT} to deliver the data non-packed (by selecting the U option in \Rind{FZFILE}). In this case the lay-out shown on the next page applies again literally, or almost: the data are delivered as one word per each machine word (and on the VAX the bytes are not yet inverted), but they are converted to the exchange representation, 32 bits right justified with zero-fill. This conversion affects negative integers, floating point numbers, Hollerith, but not positive integers and bit patterns.
Since the control information has been deliberately chosen to be of this invariant kind, it remains usable. \Rind{FZOUT} returns again IQUEST(9)=NWUSE. When shipping the data one has to execute the 'packing' operation applicable on the given machine such that they arrive in the correct form at the destination.

From these considerations one easily derives the size requirement on the memory to be connected with FZMEMO. On the Cray for example, to handle data-structures of not more than 80000 words (including the control and context information) one needs a memory region of 40000 words if one operates in normal exchange mode, but 80000 words for native mode or if the U option is selected.

\subsection*{Lay-out of a data-structure in the user memory}
\begin{verbatim}
 word 1-4  steering block stamp =
           hex 0123CDEF 80708070 4321ABCD 80618061
      5    bits 1->24: NWPHR, the physical record length
                   30: set if start-of-run in this block
                   31: set if end-of-run
                   32: set if emergency stop block
      6    zero (physical record counter)
      7    NWTOLR = 8
      8    NFAST, the number of fast blocks to follow
      9    NWLR, logical record length
     10    LRTYP = 2 or 3, the record type
     11    floating point 12345.0 as check word
     12    Zebra version number, integer = 10000. * QVERSIO
     13    zero
     14    zero
     15    NWTX=0, number of words in the text-vector
     16    NWSEG, number of words in the segment table
     17    NWTAB, number of words in the relocation table
     18    NWBK, number of words of bank material
     19    LENTRY, entry address into the d/s
     20    NWUHIO = NWUH + NWIO, number of words in the user
           header vector plus its I/O characteristic,
           zero if no header
     NWIO  words of the I/O characteristic for the u. h. vector
     NWUH  words of the user header vector
     NWSEG words of the segment table
     NWTX  words of the text-vector
     NWTAB words of the relocation table
     NWBK  words of bank material
 word NT is the last word of the d/s
   NT+1    NWPAD, record length of the padding record
   NT+2    5, record type
           NWPAD-1 untouched padding words
\end{verbatim}
The following numbers are calculated:
\begin{verbatim}
 log. record length    NWLR  = 10+NWIO+NWUH+NWSEG+NWTX+NWTAB+NWBK
 incl. log.  c/words   NWDS  = 2 + NWLR
 incl. phys. c/words   NT    = 8 + NWDS
 # of fast blocks      NFAST = (NT-1) / NWPHR
 total # of words      NALL  = NWPHR * (NFAST+1)
 padding length        NWPAD = NALL - NT - 1   (can be 0 or -1)
 # of useful words     NWUSE = NT + 1 + MIN(NWPAD,1)
\end{verbatim}

\Filename{H2-Usage-for-medium-Channel}
\section{Usage for medium Channel}

In this mode the records assembled by \Rind{FZOUT} are channelled through a user routine one-by-one to their destination (rather than being written to a file or to memory). Similarly for \Rind{FZIN} the data are acquired not from tape or disk directly, but through the same user routine. The name of this routine is not decided by Zebra.

Channelled mode operates with \textbf{file format} 'exchange', ie.~the data are collected into fixed-length records, and each record is handed to the user routine when complete (for \Rind{FZOUT}, the inverse for \Rind{FZIN}). The details of this format are found in chapter 3. The \textbf{data format} can be 'exchange' or 'native'.
To use this mode the following is necessary:
\begin{itemize}
\item the 'stream' must be initialized with the C option in \Rind{FZFILE},
\item the options I or O set this stream for input or output (it cannot be used for both concurrently),
\item the parameter LUN to \Rind{FZFILE} identifies the stream,
\item the N option selects the native data format,
\item the S option forces each d/s to start on a new physical record, the U option selects unpacked mode, as explained in the previous paragraph;
\item the user routine UserSR to be used for this stream must be hooked up to FZ by calling FZHOOK; different streams may use different user routines;
\item the user routine UserSR must be provided.
\end{itemize}
The connection call is
\begin{verbatim}
      EXTERNAL UserSR
\end{verbatim}
\Shubr{FZHOOK}{(LUN, UserSR, 0)}

passing to FZ the address of the user routine; the third argument is not used for the time being.

The specifications for the user routine are:

\fbox{\Lit{SUBROUTINE UserSR (IBUF,IOWAY)}}
\begin{verbatim}
 with IBUF:  the data of the 'record'
      IOWAY: the I/O direction:
             = 0 if called from FZIN  for input
               1 if called from FZOUT for output
             other values are reserved to the user

 IQUEST (1): on entry: LUN, the stream ID
             on exit:  status flag
        (2): on entry: the number of machine words for transmission
             on exit:  number of machine words delivered
        (3): kind of record
        (4): = zero if sequential access
             = ordinal number of the record wanted if direct-access
        (5): 0 / 1  for disk / tape
        (6): if FZIN: number of words per physical record
\end{verbatim}

\subsection*{UserSR called from \Rind{FZIN}}

In this case IOWAY is zero on entry, and IQUEST(2) specifies the maximum number of words which the buffer IBUF can accept without the program being destroyed.

\Lit{IQUEST(3)} indicates the kind of record expected, if this is zero a normal continuation record is wanted; if it is =1 then \Rind{FZIN} is expecting a physical record starting a new d/s; the user routine is supposed to discard trailing records of the previous d/s if this has been de-selected. Note that selective reading with \Rind{FZIN} in channel mode is not yet fully tuned.

UserSR is supposed to fill the buffer \Lit{IBUF}, store into \Lit{IQUEST(2)} the number of words received, and return zero in \Lit{IQUEST(1)}. Exceptions may be signalled by setting
\begin{verbatim}
 IQUEST(1) = -1  end of data
            > 0  error, the value of this status code will be
                 displayed to the caller of FZIN in IQUEST(14)
\end{verbatim}

\subsection*{UserSR called from \Rind{FZOUT}}

In this case IOWAY is 1 on entry, and IQUEST(2) specifies the number of words in the buffer IBUF waiting to be transmitted. UserSR is supposed to dispatch the buffer IBUF, and return zero in IQUEST(1). (At least for the time being, a non-zero status code in IQUEST(1) is ignored.)

\Filename{H2-User-marking-of-data-structures-for-FZOUT}
\section{User marking of data-structures for \protect\Rind{FZOUT}}

Normally the identification of the banks belonging to the data-structure to be written out is left to \Rind{FZOUT} itself. Naturally this can cover only logically simple cases, such as the complete d/s supported by the bank at the entry address specified to \Rind{FZOUT}. The situation does however arise that one needs a more complex description.
For this the M option has been provided which tells \Rind{FZOUT} that the user has already marked the banks to be transferred by setting system status-bit IQMARK (=26), and that he has designated the memory interval which contains his banks by storing the addresses of the lowest and the highest bank into the COMMON /ZLIMIT/LLOW,LHIGH. Thus in principle one can set up one's selection in full generality, except that one must take care that the banks marked actually form a connected d/s with the entry address LENTRY.

It may however be quite tedious to do this job completely 'by hand', so some tools are provided for formalizable situations. At present the only such tool is MZMARK (cf. book MZ chapter 2), which scans a d/s for marking, but at the start of every new linear structure reached during the scan it checks the Hollerith ID of the start bank against a list to see whether the new sub-structure should be included into the marking process.

To take an example, suppose the header bank at LENTRY supports 4 primary sub-structures with bank names RAW, GEOM, KIN, DST, of which the first 3 support in turn 4 sub-structures with bank names TEC, BGO, CAL, MUC, and at the even lower levels there may be any unspecified further sub-structures. If now one wants to write out only the data for GEOM, KIN, DST, and for the first 2 only the BGO results, one can do this with
\begin{verbatim}
      PARAMETER (NID=4)
      DIMENSION IDLIST(NID)
      DATA IDLIST / 4HRAW , 4HTEC , 4HCAL , 4HMUC /

      CALL MZMARK (0,LENTRY,'-',NID,IDLIST)
      CALL FZOUT (LUN,0,LENTRY,IEVENT,'M',IOCH,NUH,IUHEAD)
\end{verbatim}
Note that we have used the anti-selection option ('-') of \Rind{MZMARK} to veto at the high levels, which permits the low level linear structures to be accepted without one having to specify exactly which they are.

\Filename{H2-Suppress-loading-of-unused-parts-of-FZ}
\section{Suppress loading of unused parts of FZ}

Because the format for processing an FZ file (native, exchange, ALFA mode) is selected by the initializing call to \Rind{FZFILE}, and because the handling of all formats is done by the one set of routines \Rind{FZIN} and \Rind{FZOUT}, the user's call to \Rind{FZIN} or to \Rind{FZOUT} causes the loading of all the code to handle all the formats. This volume is non-negligible, and for production programs one may want to suppress loading of the non-used parts of it.

This can be done by adapting this dummy routine, which as it stands is valid for a program which only reads files in native mode and which does not call \Rind{FZOUT}/\Rind{FZENDO}:
\begin{verbatim}
      SUBROUTINE FZDUMY
      CHARACTER NAME*6

C--        No output native mode
*n    ENTRY FZOFFN
*n    NAME = 'FZOFFN'
*n    GO TO 17

C--        No output exchange mode, neither binary nor ALFA
*n    ENTRY FZOFFX
*n    NAME = 'FZOFFX'
*n    GO TO 17

C--        No output ALFA mode, but binary mode
*n    ENTRY FZOASC
*n    NAME = 'FZOASC'
*n    GO TO 17

C--        No input native mode
*u    ENTRY FZIFFN
*u    NAME = 'FZIFFN'
*u    GO TO 17

C--        No input exchange mode, neither binary nor ALFA
      ENTRY FZIFFX
      NAME = 'FZIFFX'
      GO TO 17

C--        No input ALFA mode, but binary mode
*n    ENTRY FZIPHA
*n    NAME = 'FZIPHA'

   17 CALL ZFATAM (NAME//' in FZDUMMY reached.')
      END
\end{verbatim}
Note that the dummy entry \Lit{FZIFFN} is not active because the true routine is needed, \Lit{FZIPHA} is not needed because \Lit{FZIFFX} is stronger, and that the entries \Lit{FZO...} are not needed because \Rind{FZOUT} is not called. If the dummy is reached by mistake, the program is stopped via \Rind{ZFATAL} with a message.
\Filename{H2-FZ-installation-options} \section{FZ installation options} The various modes of operation of FZ have been made optional at the source code level to allow a tailor-made installation for specific applications. For example, if on a given machine, like an emulator, FZ is used exclusively in 'channeled mode', the code for all other modes can be removed by giving the command line \begin{verbatim} +USE, FZFFNAT, FZDACC, FZLIBC, FZMEMORY, FZALFA, T=INHIBIT. \end{verbatim} in the cradle to the Patchy run which generates the code to be compiled for the Zebra Library. These are the options: \begin{DLtt}{123456789} \item[FZFFNAT] file format Native \item[FZFORTRAN] sequential Fortran I/O for file format exchange \item[FZLIBC] read/write through the C library interface \item[FZDACC] any direct access mode \item[FZDACCF] direct access with Fortran \item[FZDACCL] direct access with C Library \item[FZCHANNEL] channeled mode \item[FZMEMORY] memory mode \item[FZALFA] Alfa exchange mode \end{DLtt} The standard Zebra libraries are usually prepared with all FZ modes selected, except \Lit{FZLIBC} which is selected only under UNIX.
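By analogy with the example above (an untested sketch; it assumes the option names cover exactly what their descriptions suggest), a production program doing only sequential Fortran I/O in exchange file format might inhibit all other modes with:
\begin{verbatim}
 +USE, FZFFNAT, FZDACC, FZLIBC, FZCHANNEL, FZMEMORY, FZALFA, T=INHIBIT.
\end{verbatim}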
(* Title: HOL/MicroJava/Comp/TranslCompTp.thy Author: Martin Strecker *) theory TranslCompTp imports Index "../BV/JVMType" begin (**********************************************************************) definition comb :: "['a \<Rightarrow> 'b list \<times> 'c, 'c \<Rightarrow> 'b list \<times> 'd, 'a] \<Rightarrow> 'b list \<times> 'd" where "comb == (\<lambda> f1 f2 x0. let (xs1, x1) = f1 x0; (xs2, x2) = f2 x1 in (xs1 @ xs2, x2))" definition comb_nil :: "'a \<Rightarrow> 'b list \<times> 'a" where "comb_nil a == ([], a)" notation (xsymbols) comb (infixr "\<box>" 55) lemma comb_nil_left [simp]: "comb_nil \<box> f = f" by (simp add: comb_def comb_nil_def split_beta) lemma comb_nil_right [simp]: "f \<box> comb_nil = f" by (simp add: comb_def comb_nil_def split_beta) lemma comb_assoc [simp]: "(fa \<box> fb) \<box> fc = fa \<box> (fb \<box> fc)" by (simp add: comb_def split_beta) lemma comb_inv: "(xs', x') = (f1 \<box> f2) x0 \<Longrightarrow> \<exists> xs1 x1 xs2 x2. (xs1, x1) = (f1 x0) \<and> (xs2, x2) = f2 x1 \<and> xs'= xs1 @ xs2 \<and> x'=x2" apply (case_tac "f1 x0") apply (case_tac "f2 x1") apply (simp add: comb_def split_beta) done (**********************************************************************) abbreviation (input) mt_of :: "method_type \<times> state_type \<Rightarrow> method_type" where "mt_of == fst" abbreviation (input) sttp_of :: "method_type \<times> state_type \<Rightarrow> state_type" where "sttp_of == snd" definition nochangeST :: "state_type \<Rightarrow> method_type \<times> state_type" where "nochangeST sttp == ([Some sttp], sttp)" definition pushST :: "[ty list, state_type] \<Rightarrow> method_type \<times> state_type" where "pushST tps == (\<lambda> (ST, LT). ([Some (ST, LT)], (tps @ ST, LT)))" definition dupST :: "state_type \<Rightarrow> method_type \<times> state_type" where "dupST == (\<lambda> (ST, LT). ([Some (ST, LT)], (hd ST # ST, LT)))" definition dup_x1ST :: "state_type \<Rightarrow> method_type \<times> state_type" where "dup_x1ST == (\<lambda> (ST, LT). ([Some (ST, LT)], (hd ST # hd (tl ST) # hd ST # (tl (tl ST)), LT)))" definition popST :: "[nat, state_type] \<Rightarrow> method_type \<times> state_type" where "popST n == (\<lambda> (ST, LT). ([Some (ST, LT)], (drop n ST, LT)))" definition replST :: "[nat, ty, state_type] \<Rightarrow> method_type \<times> state_type" where "replST n tp == (\<lambda> (ST, LT). ([Some (ST, LT)], (tp # (drop n ST), LT)))" definition storeST :: "[nat, ty, state_type] \<Rightarrow> method_type \<times> state_type" where "storeST i tp == (\<lambda> (ST, LT). ([Some (ST, LT)], (tl ST, LT [i:= OK tp])))" (* Expressions *) primrec compTpExpr :: "java_mb \<Rightarrow> java_mb prog \<Rightarrow> expr \<Rightarrow> state_type \<Rightarrow> method_type \<times> state_type" and compTpExprs :: "java_mb \<Rightarrow> java_mb prog \<Rightarrow> expr list \<Rightarrow> state_type \<Rightarrow> method_type \<times> state_type" where "compTpExpr jmb G (NewC c) = pushST [Class c]" | "compTpExpr jmb G (Cast c e) = (compTpExpr jmb G e) \<box> (replST 1 (Class c))" | "compTpExpr jmb G (Lit val) = pushST [the (typeof (\<lambda>v. None) val)]" | "compTpExpr jmb G (BinOp bo e1 e2) = (compTpExpr jmb G e1) \<box> (compTpExpr jmb G e2) \<box> (case bo of Eq => popST 2 \<box> pushST [PrimT Boolean] \<box> popST 1 \<box> pushST [PrimT Boolean] | Add => replST 2 (PrimT Integer))" | "compTpExpr jmb G (LAcc vn) = (\<lambda> (ST, LT). pushST [ok_val (LT ! 
(index jmb vn))] (ST, LT))" | "compTpExpr jmb G (vn::=e) = (compTpExpr jmb G e) \<box> dupST \<box> (popST 1)" | "compTpExpr jmb G ( {cn}e..fn ) = (compTpExpr jmb G e) \<box> replST 1 (snd (the (field (G,cn) fn)))" | "compTpExpr jmb G (FAss cn e1 fn e2 ) = (compTpExpr jmb G e1) \<box> (compTpExpr jmb G e2) \<box> dup_x1ST \<box> (popST 2)" | "compTpExpr jmb G ({C}a..mn({fpTs}ps)) = (compTpExpr jmb G a) \<box> (compTpExprs jmb G ps) \<box> (replST ((length ps) + 1) (method_rT (the (method (G,C) (mn,fpTs)))))" | "compTpExprs jmb G [] = comb_nil" | "compTpExprs jmb G (e#es) = (compTpExpr jmb G e) \<box> (compTpExprs jmb G es)" (* Statements *) primrec compTpStmt :: "java_mb \<Rightarrow> java_mb prog \<Rightarrow> stmt \<Rightarrow> state_type \<Rightarrow> method_type \<times> state_type" where "compTpStmt jmb G Skip = comb_nil" | "compTpStmt jmb G (Expr e) = (compTpExpr jmb G e) \<box> popST 1" | "compTpStmt jmb G (c1;; c2) = (compTpStmt jmb G c1) \<box> (compTpStmt jmb G c2)" | "compTpStmt jmb G (If(e) c1 Else c2) = (pushST [PrimT Boolean]) \<box> (compTpExpr jmb G e) \<box> popST 2 \<box> (compTpStmt jmb G c1) \<box> nochangeST \<box> (compTpStmt jmb G c2)" | "compTpStmt jmb G (While(e) c) = (pushST [PrimT Boolean]) \<box> (compTpExpr jmb G e) \<box> popST 2 \<box> (compTpStmt jmb G c) \<box> nochangeST" definition compTpInit :: "java_mb \<Rightarrow> (vname * ty) \<Rightarrow> state_type \<Rightarrow> method_type \<times> state_type" where "compTpInit jmb == (\<lambda> (vn,ty). (pushST [ty]) \<box> (storeST (index jmb vn) ty))" primrec compTpInitLvars :: "[java_mb, (vname \<times> ty) list] \<Rightarrow> state_type \<Rightarrow> method_type \<times> state_type" where "compTpInitLvars jmb [] = comb_nil" | "compTpInitLvars jmb (lv#lvars) = (compTpInit jmb lv) \<box> (compTpInitLvars jmb lvars)" definition start_ST :: "opstack_type" where "start_ST == []" definition start_LT :: "cname \<Rightarrow> ty list \<Rightarrow> nat \<Rightarrow> locvars_type" where "start_LT C pTs n == (OK (Class C))#((map OK pTs))@(replicate n Err)" definition compTpMethod :: "[java_mb prog, cname, java_mb mdecl] \<Rightarrow> method_type" where "compTpMethod G C == \<lambda> ((mn,pTs),rT, jmb). let (pns,lvars,blk,res) = jmb in (mt_of ((compTpInitLvars jmb lvars \<box> compTpStmt jmb G blk \<box> compTpExpr jmb G res \<box> nochangeST) (start_ST, start_LT C pTs (length lvars))))" definition compTp :: "java_mb prog => prog_type" where "compTp G C sig == let (D, rT, jmb) = (the (method (G, C) sig)) in compTpMethod G C (sig, rT, jmb)" (**********************************************************************) (* Computing the maximum stack size from the method_type *) definition ssize_sto :: "(state_type option) \<Rightarrow> nat" where "ssize_sto sto == case sto of None \<Rightarrow> 0 | (Some (ST, LT)) \<Rightarrow> length ST" definition max_of_list :: "nat list \<Rightarrow> nat" where "max_of_list xs == foldr max xs 0" definition max_ssize :: "method_type \<Rightarrow> nat" where "max_ssize mt == max_of_list (map ssize_sto mt)" end
program t implicit none !inquire stmt 'number' specifier integer::a inquire (file='infile', number=a) print *,a open (95, file='infile') inquire (95, number=a) print *,a endprogram t
We went to Hound Habits, which operates out of Kenmore's Petbarn. It's not far from the end of Gap Creek Road. Our puppy seemed to enjoy it and learnt a bit.
{-# LANGUAGE TypeFamilies #-} {-# LANGUAGE EmptyDataDecls #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE BangPatterns #-} {-# LANGUAGE DeriveDataTypeable #-} {-# LANGUAGE ViewPatterns #-} -- | -- Module : Data.Matrix.Symmetric.Mutable -- Copyright : Copyright (c) 2012 Aleksey Khudyakov <[email protected]> -- License : BSD3 -- Maintainer : Aleksey Khudyakov <[email protected]> -- Stability : experimental -- -- Symmetric and hermitian matrices module Data.Matrix.Symmetric.Mutable ( -- * Data types MSymmetric , MHermitian -- ** Implementation , MSymmetricRaw(..) , IsSymmetric , IsHermitian -- * Function , new , symmetricAsDense , hermitianAsDense , symmIndex -- * Complex number , NumberType , IsReal , IsComplex , castSymmetric , Conjugate(..) ) where import Control.Monad import Control.Monad.Primitive import Data.Complex (Complex,conjugate) import Data.Typeable (Typeable) import Data.Vector.Storable.Internal import qualified Data.Vector.Generic.Mutable as M -- import Foreign.Ptr import Foreign.Marshal.Array ( advancePtr ) import Foreign.ForeignPtr import Foreign.Storable import Data.Internal import Data.Vector.Storable.Strided.Mutable import Data.Matrix.Generic.Mutable import Data.Matrix.Dense.Mutable (MMatrix(..)) import Unsafe.Coerce ---------------------------------------------------------------- -- Data types ---------------------------------------------------------------- -- | Symmetric/hermitian matrix. Whether it's symmetric of hermitian -- is determined by type tag. See 'IsSymmetric' and 'IsHermitian'. -- -- Storage takes n² elements and data is stored in column major -- order. Fields are -- -- * Order of matrix -- -- * Leading dimension size -- -- * Pointer to data data MSymmetricRaw tag s a = MSymmetricRaw {-# UNPACK #-} !Int -- Order of matrix {-# UNPACK #-} !Int -- Leading dimension size {-# UNPACK #-} !(ForeignPtr a) deriving (Typeable) -- | Type tag for symmetric matrices. data IsSymmetric -- | Type tag for hermitian matrices. data IsHermitian -- | Mutable symmetric matrix type MSymmetric = MSymmetricRaw IsSymmetric -- | Mutable hermitian matrix type MHermitian = MSymmetricRaw IsHermitian instance Storable a => IsMMatrix (MSymmetricRaw IsSymmetric) a where basicRows (MSymmetricRaw n _ _) = n {-# INLINE basicRows #-} basicCols (MSymmetricRaw _ n _) = n {-# INLINE basicCols #-} basicIsIndexMutable _ _ = True {-# INLINE basicIsIndexMutable #-} basicUnsafeRead (MSymmetricRaw _ lda fp) (symmIndex -> (!i,!j)) = unsafePrimToPrim $ withForeignPtr fp (`peekElemOff` (i + j*lda)) {-# INLINE basicUnsafeRead #-} basicUnsafeWrite (MSymmetricRaw _ lda fp) (symmIndex -> (!i,!j)) x = unsafePrimToPrim $ withForeignPtr fp $ \p -> pokeElemOff p (i + j*lda) x {-# INLINE basicUnsafeWrite #-} basicCloneShape = new . 
rows {-# INLINE basicCloneShape #-} basicClone = cloneSym {-# INLINE basicClone #-} instance (Conjugate a, Storable a) => IsMMatrix (MSymmetricRaw IsHermitian) a where basicRows (MSymmetricRaw n _ _) = n {-# INLINE basicRows #-} basicCols (MSymmetricRaw _ n _) = n {-# INLINE basicCols #-} basicIsIndexMutable _ _ = True {-# INLINE basicIsIndexMutable #-} basicUnsafeRead (MSymmetricRaw _ lda fp) (!i,!j) = unsafePrimToPrim $ case () of _| i > j -> conjugateNum `liftM` withForeignPtr fp (`peekElemOff` (j + i*lda)) | otherwise -> withForeignPtr fp (`peekElemOff` (i + j*lda)) {-# INLINE basicUnsafeRead #-} basicUnsafeWrite (MSymmetricRaw _ lda fp) (!i,!j) x = unsafePrimToPrim $ case () of _| i > j -> withForeignPtr fp $ \p -> pokeElemOff p (j + i*lda) (conjugateNum x) | otherwise -> withForeignPtr fp $ \p -> pokeElemOff p (i + j*lda) x {-# INLINE basicUnsafeWrite #-} basicCloneShape = new . rows {-# INLINE basicCloneShape #-} basicClone = cloneSym {-# INLINE basicClone #-} -- | Choose index so upper part of matrix is accessed symmIndex :: (Int,Int) -> (Int,Int) {-# INLINE symmIndex #-} symmIndex (i,j) | i > j = (j,i) | otherwise = (i,j) -- | Allocate new matrix. It works for both symmetric and hermitian -- matrices. new :: (PrimMonad m, Storable a) => Int -- ^ Matrix order -> m (MSymmetricRaw tag (PrimState m) a) {-# INLINE new #-} new n = do fp <- unsafePrimToPrim $ mallocVector $ n * n return $ MSymmetricRaw n n fp -- | Convert to dense matrix. Dense matrix will use same buffer as -- symmetric symmetricAsDense :: (PrimMonad m, Storable a) => MSymmetricRaw IsSymmetric (PrimState m) a -> m (MMatrix (PrimState m) a) symmetricAsDense (MSymmetricRaw n lda fp) = do let m = MMatrix n n lda fp forM_ [0 .. n-2] $ \i -> do let row = MVector (n-i-1) lda $ updPtr (`advancePtr` ( i + (i+1) * lda)) fp col = MVector (n-i-1) 1 $ updPtr (`advancePtr` (1+i + i * lda)) fp M.move col row return m -- | Convert to dense matrix. Dense matrix will use same buffer as -- hermitian hermitianAsDense :: (PrimMonad m, Storable a, Conjugate a) => MSymmetricRaw IsHermitian (PrimState m) a -> m (MMatrix (PrimState m) a) hermitianAsDense (MSymmetricRaw n lda fp) = do let m = MMatrix n n lda fp forM_ [0 .. n-2] $ \i -> do let len = n - i - 1 row = MVector (n-i-1) lda $ updPtr (`advancePtr` ( i + (i+1) * lda)) fp col = MVector (n-i-1) 1 $ updPtr (`advancePtr` (1+i + i * lda)) fp forM_ [0 .. len - 1] $ \j -> do M.unsafeWrite col j . conjugateNum =<< M.unsafeRead row j return m ---------------------------------------------------------------- -- Real/Complex distinction ---------------------------------------------------------------- -- | Type tag for real numbers data IsReal -- | Type tag for complex numbers data IsComplex type family NumberType a :: * type instance NumberType Float = IsReal type instance NumberType Double = IsReal type instance NumberType (Complex a) = IsComplex -- | Cast between symmetric and hermitian matrices if data parameter -- is real. castSymmetric :: (NumberType a ~ IsReal) => MSymmetricRaw tag s a -> MSymmetricRaw tag' s a {-# INLINE castSymmetric #-} castSymmetric = unsafeCoerce -- | Conjugate which works for both real (noop) and complex values. 
class Conjugate a where conjugateNum :: a -> a instance Conjugate Float where conjugateNum = id {-# INLINE conjugateNum #-} instance Conjugate Double where conjugateNum = id {-# INLINE conjugateNum #-} instance RealFloat a => Conjugate (Complex a) where conjugateNum = conjugate {-# INLINE conjugateNum #-} ---------------------------------------------------------------- -- Helpers ---------------------------------------------------------------- -- Get n'th column of matrix as mutable vector. Internal since part of -- vector contain junk unsafeGetCol :: Storable a => MSymmetricRaw tag s a -> Int -> MVector s a {-# INLINE unsafeGetCol #-} unsafeGetCol (MSymmetricRaw n lda fp) i = MVector n 1 $ updPtr (`advancePtr` (i*lda)) fp cloneSym :: (Storable a, PrimMonad m) => MSymmetricRaw tag (PrimState m) a -> m (MSymmetricRaw tag (PrimState m) a) {-# INLINE cloneSym #-} cloneSym m@(MSymmetricRaw n _ _) = do q <- new n forM_ [0 .. n - 1] $ \i -> M.unsafeCopy (unsafeGetCol q i) (unsafeGetCol m i) return q
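-- Added note (illustrative): 'symmIndex' always maps into the upper
-- triangle, so it is insensitive to argument order and idempotent, e.g.
--
-- >>> map symmIndex [(2,0), (0,2), (1,1)]
-- [(0,2),(0,2),(1,1)]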
lemma homeomorphic_minimal: "s homeomorphic t \<longleftrightarrow> (\<exists>f g. (\<forall>x\<in>s. f(x) \<in> t \<and> (g(f(x)) = x)) \<and> (\<forall>y\<in>t. g(y) \<in> s \<and> (f(g(y)) = y)) \<and> continuous_on s f \<and> continuous_on t g)" (is "?lhs = ?rhs")
module RecordList import Flexidisc.Record import Flexidisc.Record.Transformation import Flexidisc.RecordList ||| A `RecordList` is a heterogeneous list of records people : RecordList String [ [ "firstname" ::: String , "lastname" ::: String , "age" ::: Nat , "location" ::: Maybe String ] , [ "firstname" ::: String , "location" ::: String ] ] people = [ [ "firstname" := "John" , "lastname" := "Doe" , "age" := 42 , "location" := Nothing ] , [ "firstname" := "Waldo" , "location" := "Hidden" ] ] ||| We can find the first record that matches the partial information we provide whereIsWaldo : Maybe (header ** Record String header) whereIsWaldo = firstWith ["firstname" := is "Waldo"] people whereIsDoe : Maybe (header ** Record String header) whereIsDoe = firstWith ["lastname" := is "Doe"] people ||| You can even search for something that is not available in every record whoIs42 : Maybe (header ** Record String header) whoIs42 = firstWith ["age" := is (the Nat 42)] people -- with one limitation: if you look for a specific row, -- it should have the same type in every row where it's defined -- -- this would fail: -- -- whoIsHidden : Maybe (header ** Record String header) -- whoIsHidden = firstWith ["location" := is "Hidden"] people
# Importing the dataset
# In RStudio: select the folder containing the dataset in the Files pane
# (bottom right), then click "More" - "Set As Working Directory"
dataset = read.csv('Data.csv')

# Taking care of missing data
# If a column contains NAs, replace them with the mean of that column
dataset$Age = ifelse(is.na(dataset$Age),
                     ave(dataset$Age, FUN = function(x) mean(x, na.rm = TRUE)),
                     dataset$Age)
dataset$Salary = ifelse(is.na(dataset$Salary),
                        ave(dataset$Salary, FUN = function(x) mean(x, na.rm = TRUE)),
                        dataset$Salary)

# Encoding categorical data
dataset$Country = factor(dataset$Country,
                         levels = c('France', 'Spain', 'Germany'),
                         labels = c(1, 2, 3))
dataset$Purchased = factor(dataset$Purchased,
                           levels = c('No', 'Yes'),
                           labels = c(0, 1))

# Splitting the dataset into the Training set and Test set
# install.packages('caTools')  # run once to install
library(caTools)               # needed for sample.split()
set.seed(123)  # fix the random seed so the split is reproducible
split = sample.split(dataset$Purchased, SplitRatio = 0.8)  # 80% of observations go to the training set
training_set = subset(dataset, split == TRUE)   # split == TRUE: observation goes to the training set
test_set = subset(dataset, split == FALSE)      # split == FALSE: observation goes to the test set

# Feature Scaling
# The first and last columns are factors, so only the numeric columns
# "Age" and "Salary" (columns 2 and 3) are scaled.
training_set[,2:3] = scale(training_set[,2:3])
test_set[,2:3] = scale(test_set[,2:3])
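# A caveat worth noting (added sketch, not part of the original script):
# the block above standardizes the test set with its own mean and sd.
# Strictly, the test set should reuse the scaling parameters learned on
# the training set, which scale() exposes as attributes of its result:
sc <- scale(training_set[, 2:3])
training_set[, 2:3] <- sc
test_set[, 2:3] <- scale(test_set[, 2:3],
                         center = attr(sc, "scaled:center"),
                         scale  = attr(sc, "scaled:scale"))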
module Network.Curl.Prim.Easy ------------------------------------------------- -- Easy ------------------------------------------------- -- curl_easy_cleanup -- curl_easy_duphandle -- curl_easy_escape -- curl_easy_getinfo -- curl_easy_init -- curl_easy_option_by_id -- curl_easy_option_by_name -- curl_easy_option_next -- curl_easy_pause -- curl_easy_perform -- curl_easy_recv -- curl_easy_reset -- curl_easy_send -- curl_easy_setopt -- curl_easy_strerror -- curl_easy_unescape -- curl_easy_upkeep ------------------------------------------------- import Data.Buffer -- import Network.Curl.Prim.Mem import Network.Curl.Prim.Other import Network.Curl.Types -- import Network.Curl.Types.CurlEOption as EO -- import Derive.Enum import Derive.Prim %language ElabReflection ------------------------------------------------- -- curl_easy_duphandle %runElab makeHasIO "curl_easy_duphandle" Export `[ %foreign "C:curl_easy_duphandle,libcurl,curl/curl.h" export prim_curl_easy_duphandle : Ptr HandlePtr -> PrimIO (Ptr HandlePtr) ] --` ------------------------------------------------- %foreign "C:curl_easy_init,libcurl,curl/curl.h" prim_curl_easy_init : PrimIO (Ptr HandlePtr) export curl_easy_init : HasIO io => io (Maybe (CurlHandle Easy)) curl_easy_init = do r <- primIO prim_curl_easy_init pure $ if believe_me r == 0 then Nothing else Just (MkH r) ------------------------------------------------- %foreign "C:curl_easy_cleanup,libcurl,curl/curl.h" prim_curl_easy_cleanup : Ptr HandlePtr -> PrimIO () export curl_easy_cleanup : HasIO io => CurlHandle Easy -> io () curl_easy_cleanup (MkH h) = primIO $ prim_curl_easy_cleanup h -- It's just a lookup, expect this to work unless curl itself is broken %foreign "C:curl_easy_strerror,libcurl,curl/curl.h" curl_easy_strerror : (curlcode : Int) -> String export curlEasyStrError : CurlECode -> String curlEasyStrError c = curl_easy_strerror (toCode c) ------------------------------------------------- %foreign "C:curl_easy_setopt,libcurl,curl/curl.h" prim_curl_easy_setopt_long : Ptr HandlePtr -> Int -> Int -> PrimIO Int %foreign "C:curl_easy_setopt,libcurl,curl/curl.h" prim_curl_easy_setopt_objptr : Ptr HandlePtr -> Int -> AnyPtr -> PrimIO Int %foreign "C:curl_easy_setopt,libcurl,curl/curl.h" prim_curl_easy_setopt_off_t : Ptr HandlePtr -> Int -> Bits64 -> PrimIO Int %foreign "C:curl_easy_setopt,libcurl,curl/curl.h" prim_curl_easy_setopt_blob : Ptr HandlePtr -> Int -> Buffer -> PrimIO Int %foreign "C:curl_easy_setopt,libcurl,curl/curl.h" prim_curl_easy_setopt_string : Ptr HandlePtr -> Int -> String -> PrimIO Int -- This is to fill the role of {- %foreign "C:curl_easy_setopt,libcurl,curl/curl.h" prim_curl_easy_setopt : Ptr HandlePtr -> Int -> any -> PrimIO CurlECode -} -- We can't pass 'any' or an arbitrary Type to a foreign function, but we can -- generate that function at the type we need. %macro eSetOptPrim : {opty : _} -> (opt : CurlEOption opty) -> Elab (Ptr HandlePtr -> Int -> paramType opt -> PrimIO Int) eSetOptPrim opt = do let name = UN $ "setOptPrim_" ++ show opt z <- quote (paramType opt) str <- quote "C:curl_easy_setopt,libcurl,curl/curl.h" let ty = MkTy EmptyFC EmptyFC name `(Ptr HandlePtr -> Int -> ~z -> PrimIO Int) let claim = IClaim EmptyFC MW Private [ForeignFn [str]] ty declare [claim] -- generate prim check (IVar EmptyFC name) -- insert prim's name ||| It's kind of unfortunate we need to case all of these `opt` when the code ||| used on them is exactly the same and while it could be done with elaboration ||| it's not really worth the work. 
Users shouldn't actually be exposed to this ||| definition to be scared away by it anyway. export curl_easy_setopt : HasIO io => CurlHandle Easy -> {ty : _} -> (opt : CurlEOption ty) -> paramType opt -> io CurlECode curl_easy_setopt (MkH h) opt@CURLOPT_WRITEFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_READFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_PROGRESSFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_HEADERFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_DEBUGFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_SSL_CTX_FUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_IOCTLFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_CONV_FROM_NETWORK_FUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_CONV_TO_NETWORK_FUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_CONV_FROM_UTF8_FUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_SOCKOPTFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_OPENSOCKETFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_SEEKFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_SSH_KEYFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_INTERLEAVEFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_CHUNK_BGN_FUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_CHUNK_END_FUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_FNMATCH_FUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_CLOSESOCKETFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_XFERINFOFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_RESOLVER_START_FUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt (MkH h) opt@CURLOPT_TRAILERFUNCTION v = do let prim = eSetOptPrim opt pure (unsafeFromCode !(primIO $ prim h (toCode opt) v)) curl_easy_setopt {ty = CURLOPTTYPE_LONG} (MkH h) opt v = pure $ unsafeFromCode !(primIO $ prim_curl_easy_setopt_long h (toCode opt) v) 
curl_easy_setopt {ty = CURLOPTTYPE_OBJECTPOINT} (MkH h) opt v = pure $ unsafeFromCode !(primIO $ prim_curl_easy_setopt_objptr h (toCode opt) v) curl_easy_setopt {ty = CURLOPTTYPE_OFF_T} (MkH h) opt v = pure $ unsafeFromCode !(primIO $ prim_curl_easy_setopt_off_t h (toCode opt) v) curl_easy_setopt {ty = CURLOPTTYPE_BLOB} (MkH h) opt v = pure $ unsafeFromCode !(primIO $ prim_curl_easy_setopt_blob h (toCode opt) v) curl_easy_setopt {ty = CURLOPTTYPE_STRINGPOINT} (MkH h) opt v = pure $ unsafeFromCode !(primIO $ prim_curl_easy_setopt_string h (toCode opt) v) curl_easy_setopt {ty = UnusedOptType} _ _ _ = pure CURLE_UNKNOWN_OPTION -- ^can't happen during normal use ------------------------------------------------- %foreign "C:curl_easy_perform,libcurl,curl/curl.h" prim_curl_easy_perform : Ptr HandlePtr -> PrimIO Int export curl_easy_perform : HasIO io => CurlHandle Easy -> io CurlECode curl_easy_perform (MkH ptr) = unsafeFromCode <$> primIO (prim_curl_easy_perform ptr) ------------------------------------------------- -- This String is allocated by curl but idris ffi should copy it so we should be -- safe in the presence or lack of curl_free -- TODO check this ^ -- TODO check for possible issues with \0 -- char *curl_easy_escape( CURL *curl, const char *string , int length ); %foreign "C:curl_easy_escape,libcurl,curl/curl.h" prim_curl_easy_escape : Ptr HandlePtr -> (url : String) -> (url_len : Int) -> PrimIO String -- idris calls free after copying the PrimIO String it converts but curl wants you -- to call curl_free, how can I reconcile this? ||| Escapes a String byte-by-byte without awareness of encoding. ||| String is expected to be free of \0 since strlen() is used by curl and so ||| will truncate on \0. This shouldn't be a problem in idris since the ||| primitive String already truncate on \0 in idris. curl_easy_escape : HasIO io => CurlHandle ty -> String -> io String curl_easy_escape (MkH h) str = primIO $ prim_curl_easy_escape h str 0 -- 0 is to tell curl to compute the length itself via strlen() ------------------------------------------------- {- -- This String is allocated by curl but idris ffi should copy it so we should be -- safe in the presence or lack of curl_free. -- TODO check this ^ -- char *curl_easy_unescape( CURL *curl, const char *url , int inlength, int *outlength ); %foreign "C:curl_easy_unescape,libcurl,curl/curl.h" prim_curl_easy_unescape : Ptr HandlePtr -> (url : String) -> (url_len : Int) -> (intptr : Buffer) -> PrimIO String %foreign "C:curl_easy_unescape,libcurl,curl/curl.h" prim_curl_easy_unescape' : Ptr HandlePtr -> (url : String) -> (url_len : Int) -> Ptr Int -> PrimIO String ||| Unescapes a String byte-by-byte without awareness of encoding. When ||| encountering %00 or some other source of \0 it returns as much of the String ||| as it got to that point. ||| NB I'm not bothering with proper handling %00, if you need this I'll accept ||| a tested PR though. It's just that idris's \0 terminated Strings can make ||| this a hassle. I'll revisit this when/if I made a Text type since it won't ||| have a weakness to \0. curl_easy_unescape : HasIO io => CurlHandle ty -> String -> io (Either String String) curl_easy_unescape (MkH h) str = withAllocElems {a=Int} 1 $ \iptr => do str <- primIO $ prim_curl_easy_unescape h str 0 (unsafeForeignPtrToBuffer iptr) len <- peek iptr pure $ if len > 0 && cast (length str) /= len -- str contained %00 then Left str -- This is where a Text type could help. 
else Right str testo : IO () testo = do Just h <- curl_easy_init | _ => printLn "fuck" l <- curl_easy_escape h "fafodoba" Right m <- curl_easy_unescape h l | Left s => printLn s -- print remainder if there is one n <- curl_easy_escape h m printLn n Right m <- curl_easy_unescape h "gabbo%00nock" | Left s => printLn s -- print remainder if there is one printLn m ------------------------------------------------- -}
There may be a shift in the military aircraft noise from Marine Corps Air Station Miramar, this week and in the long term, base commander Col. Jason Woodworth told the Mira Mesa Planning Group Monday night. "I assure you - and the general does too - the aircraft that are flying are still meeting the exacting standards before military use," Woodworth said. According to the base community plans and liaison officer, there will be increased flight operations at the air station due to troop deployments. "Those living and working near MCAS Miramar may notice large, heavy aircraft (contracted 777, 747s and 767s) departing," the announcement says. The base gets a half dozen noise complaints a day, a duty officer said. Miramar is an air base open every day all day, Woodworth said, but the Marines tend to fly between 8 am and 12:30 am, with the last two hours part of the 'modified quiet' approach. "Our pilots need night training as much if not more than day training," Woodworth said. The command is preparing for the arrival of the F-35 — Lightnings, in the trade language. They are tearing down old hangars and building new ones to house the jet, which ultimately will replace the FA-18. For now, the Marine air base at Yuma has several of the F-35s. Miramar is home to the 3rd Marine Air Wing, pilots and crews who fly FA-18 Hornets, the MV-22 Osprey, and the KC-130 Hercules. "The current pattern for decibel levels will stay about the same," Woodworth said. "The sound is different — it's a different craft with a different sound." Much of the noise occurs in University City, he said, and base complaint counts show that's where most of the complaints come from. "It's really University City where they are in after-burner mode," he said. "Folks along Genessee and around the 805 will get more noise than Mira Mesa." The transition to the F-35s is expected to take 11 years, starting in 2020 and going to 2031, he said.
Two finite sets are homeomorphic if and only if they have the same number of elements.
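In symbols (an added paraphrase; finiteness makes both sets discrete in the subspace topology, which is the implicit assumption here): \[ S \cong T \iff \lvert S \rvert = \lvert T \rvert \qquad \text{for finite } S, T. \]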
import numpy as np

__all__ = ["EM_hopf"]


def EM_hopf(M=100, N=2**16, xlambda=-1., omega=1, sigma=.5, Xzero=1, Yzero=0, T=10):
    """
    Euler-Maruyama (EM) simulation of a stochastic Hopf bifurcation.

    The Brownian paths over [0, T] are discretized with dt = T/N;
    Euler-Maruyama uses timestep Dt = R*dt.
    """
    np.random.seed(M)
    dt = float(T) / N
    t = np.linspace(0, T, N + 1)  # time grid (kept for plotting against Xem/Yem)
    # Independent Brownian increments for the two noise components
    dW1 = np.sqrt(dt) * np.random.randn(N)
    dW2 = np.sqrt(dt) * np.random.randn(N)

    R = 1          # one EM step consumes R Brownian increments
    Dt = R * dt
    L = N // R     # number of EM steps
    Xem = np.zeros(L + 1)
    Xem[0] = Xzero
    Yem = np.zeros(L + 1)
    Yem[0] = Yzero
    for j in range(1, L + 1):
        Winc1 = np.sum(dW1[R * (j - 1):R * j])
        Winc2 = np.sum(dW2[R * (j - 1):R * j])
        Rtemp = Xem[j - 1] * Xem[j - 1] + Yem[j - 1] * Yem[j - 1]
        Xdrift = xlambda * Xem[j - 1] - omega * Yem[j - 1] - Xem[j - 1] * Rtemp
        Ydrift = xlambda * Yem[j - 1] + omega * Xem[j - 1] - Yem[j - 1] * Rtemp
        Xem[j] = Xem[j - 1] + Dt * Xdrift + sigma * Winc1
        Yem[j] = Yem[j - 1] + Dt * Ydrift + sigma * Winc2
    return Xem, Yem
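# A quick usage sketch (illustrative parameter values, not from the original):
if __name__ == "__main__":
    Xem, Yem = EM_hopf(M=7, N=2**12, sigma=0.3, T=5)
    print(Xem[-1], Yem[-1])  # endpoint of one simulated path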
#include <boost/date_time/local_time/local_time.hpp> #include <iostream> using namespace boost::posix_time; using namespace boost::gregorian; int main() { ptime pt{date{2014, 5, 12}, time_duration{12, 0, 0}}; time_iterator it{pt, time_duration{6, 30, 0}}; std::cout << *++it << '\n'; std::cout << *++it << '\n'; }
for i in [1, 3 .. 11] do Print(i, "\n"); od; 1 3 5 7 9 11
! { dg-do compile } ! { dg-options "-pedantic -mdalign" { target sh*-*-* } } ! ! PR fortran/50273 ! subroutine test() character :: a integer :: b character :: c common /global_var/ a, b, c ! { dg-warning "Padding of 3 bytes required before 'b' in COMMON" } print *, a, b, c end subroutine test
Require Import CodeDeps. Require Import Ident. Local Open Scope Z_scope. Definition _addr := 1%positive. Definition _g := 2%positive. Definition _g_rd := 3%positive. Definition _granule := 4%positive. Definition _i := 5%positive. Definition _level := 6%positive. Definition _lock := 7%positive. Definition _map_addr := 8%positive. Definition _pa := 9%positive. Definition _rd := 10%positive. Definition _rd_addr := 11%positive. Definition _ret := 12%positive. Definition _rt := 13%positive. Definition _table := 14%positive. Definition _val := 15%positive. Definition _valid := 16%positive. Definition _t'1 := 17%positive. Definition _t'2 := 18%positive. Definition _t'3 := 19%positive. Definition _t'4 := 20%positive. Definition smc_rtt_unmap_body := (Ssequence (Ssequence (Scall (Some _t'1) (Evar _validate_table_commands (Tfunction (Tcons tulong (Tcons tulong (Tcons tulong (Tcons tulong (Tcons tulong Tnil))))) tuint cc_default)) ((Etempvar _map_addr tulong) :: (Etempvar _level tulong) :: (Econst_int (Int.repr 2) tint) :: (Econst_long (Int64.repr 3) tulong) :: (Econst_int (Int.repr 2) tint) :: nil)) (Sset _ret (Ecast (Etempvar _t'1 tuint) tulong))) (Ssequence (Sifthenelse (Ebinop Oeq (Etempvar _ret tulong) (Econst_int (Int.repr 0) tuint) tint) (Ssequence (Ssequence (Scall (Some _t'2) (Evar _find_lock_granule (Tfunction (Tcons tulong (Tcons tulong Tnil)) (tptr Tvoid) cc_default)) ((Etempvar _rd_addr tulong) :: (Econst_int (Int.repr 2) tuint) :: nil)) (Sset _g_rd (Etempvar _t'2 (tptr Tvoid)))) (Ssequence (Scall (Some _t'4) (Evar _is_null (Tfunction (Tcons (tptr Tvoid) Tnil) tuint cc_default)) ((Etempvar _g_rd (tptr Tvoid)) :: nil)) (Sifthenelse (Ebinop Oeq (Etempvar _t'4 tuint) (Econst_int (Int.repr 1) tuint) tint) (Sset _ret (Econst_long (Int64.repr 1) tulong)) (Ssequence (Ssequence (Scall (Some _t'3) (Evar _table_unmap3 (Tfunction (Tcons (tptr Tvoid) (Tcons tulong (Tcons tulong Tnil))) tulong cc_default)) ((Etempvar _g_rd (tptr Tvoid)) :: (Etempvar _map_addr tulong) :: (Etempvar _level tulong) :: nil)) (Sset _ret (Etempvar _t'3 tulong))) (Scall None (Evar _granule_unlock (Tfunction (Tcons (tptr Tvoid) Tnil) tvoid cc_default)) ((Etempvar _g_rd (tptr Tvoid)) :: nil)))))) Sskip) (Sreturn (Some (Etempvar _ret tulong))))) . Definition f_smc_rtt_unmap := {| fn_return := tulong; fn_callconv := cc_default; fn_params := ((_rd_addr, tulong) :: (_map_addr, tulong) :: (_level, tulong) :: nil); fn_vars := nil; fn_temps := ((_g_rd, (tptr Tvoid)) :: (_ret, tulong) :: (_t'4, tuint) :: (_t'3, tulong) :: (_t'2, (tptr Tvoid)) :: (_t'1, tuint) :: nil); fn_body := smc_rtt_unmap_body |}.
module Main import Data.Vect import Data.Stream foo : List Char foo = unpack $ pack $ take 4000 (repeat 'a') factorialAux : Integer -> Integer -> Integer factorialAux 0 a = a factorialAux n a = factorialAux (n-1) (a*n) factorial : Integer -> Integer factorial n = factorialAux n 1 sum : Nat -> Nat sum n = go 0 n where go : Nat -> Nat -> Nat go acc Z = acc go acc n@(S k) = go (acc + n) k main : IO () main = do printLn $ factorial 100 printLn $ factorial 10000 printLn $ show $ the (Vect 3 String) ["red", "green", "blue"] printLn foo printLn (sum 50000)
#include <gsl/gsl_math.h> #include <gsl/gsl_cblas.h> #include "cblas.h" void cblas_sger (const enum CBLAS_ORDER order, const int M, const int N, const float alpha, const float *X, const int incX, const float *Y, const int incY, float *A, const int lda) { #define BASE float #include "source_ger.h" #undef BASE }
import numpy as np from brian2.codegen.translation import analyse_identifiers, get_identifiers_recursively from brian2.core.variables import Subexpression, Variable from brian2.units.fundamentalunits import Unit def test_analyse_identifiers(): ''' Test that the analyse_identifiers function works on a simple clear example. ''' code = ''' a = b+c d = e+f ''' known = ['b', 'c', 'd', 'g'] defined, used_known, dependent = analyse_identifiers(code, known) assert defined==set(['a']) assert used_known==set(['b', 'c', 'd']) assert dependent==set(['e', 'f']) def test_get_identifiers_recursively(): ''' Test finding identifiers including subexpressions. ''' variables = {} variables['sub1'] = Subexpression(Unit(1), np.float32, 'sub2 * z', variables, {}) variables['sub2'] = Subexpression(Unit(1), np.float32, '5 + y', variables, {}) variables['x'] = Variable(unit=None) identifiers = get_identifiers_recursively('_x = sub1 + x', variables) assert identifiers == set(['x', '_x', 'y', 'z', 'sub1', 'sub2']) if __name__ == '__main__': test_analyse_identifiers() test_get_identifiers_recursively()
myTestRule { #Input parameters are: # Type of quota (user or group) # User or group name # Optional resource on which the quota applies (or total for all resources) # Quota value in bytes msiSetQuota(*Type, *Name, *Resource, *Value); writeLine("stdout","Set quota on *Name for resource *Resource to *Value bytes"); } INPUT *Type="user", *Name="rods", *Resource="demoResc", *Value="1000000000" OUTPUT ruleExecOut
function out = CO_PartialAutoCorr(y,maxTau,whatMethod)
% CO_PartialAutoCorr   Compute the partial autocorrelation of an input time series
%
%---INPUTS:
% y, a scalar time series column vector.
%
% maxTau, the maximum time-delay. Returns for lags up to this maximum.
%
% whatMethod, the method used to compute: 'ols' or 'yule_walker'
%
%---OUTPUT: the partial autocorrelations across the set of time lags.
%
% ------------------------------------------------------------------------------
% Copyright (C) 2020, Ben D. Fulcher <[email protected]>,
% <http://www.benfulcher.com>
%
% If you use this code for your research, please cite the following two papers:
%
% (1) B.D. Fulcher and N.S. Jones, "hctsa: A Computational Framework for Automated
% Time-Series Phenotyping Using Massive Feature Extraction", Cell Systems 5: 527 (2017).
% DOI: 10.1016/j.cels.2017.10.001
%
% (2) B.D. Fulcher, M.A. Little, N.S. Jones, "Highly comparative time-series
% analysis: the empirical structure of time series and their methods",
% J. Roy. Soc. Interface 10(83) 20130048 (2013).
% DOI: 10.1098/rsif.2013.0048
%
% This function is free software: you can redistribute it and/or modify it under
% the terms of the GNU General Public License as published by the Free Software
% Foundation, either version 3 of the License, or (at your option) any later
% version.
%
% This program is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
% FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
% details.
%
% You should have received a copy of the GNU General Public License along with
% this program. If not, see <http://www.gnu.org/licenses/>.
% ------------------------------------------------------------------------------

% ------------------------------------------------------------------------------
%% Check inputs and set defaults:
% ------------------------------------------------------------------------------
if nargin < 2
    % Use a maximum lag of 10 by default
    maxTau = 10;
end
if nargin < 3 || isempty(whatMethod)
    % ordinary least squares by default
    whatMethod = 'ols';
end

%-------------------------------------------------------------------------------
%% Initial checks on maxTau
%-------------------------------------------------------------------------------
N = length(y); % time-series length
if maxTau <= 0
    error('Time lags must be positive')
end

% ------------------------------------------------------------------------------
%% Do the computation
% ------------------------------------------------------------------------------
pacf = parcorr(y,'NumLags',maxTau,'Method',whatMethod);

% Zero lag is the first entry in the PACF (and should always be 1)
for i = 1:maxTau
    out.(sprintf('pac_%u',i)) = pacf(i+1);
end

end
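% Example call (illustrative; parcorr requires the Econometrics Toolbox):
%   out = CO_PartialAutoCorr(randn(500,1), 5, 'ols');
%   out.pac_1 ... out.pac_5 then hold the partial autocorrelations at lags 1:5.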
section \<open>The Residue Theorem, the Argument Principle and Rouch\'{e}'s Theorem\<close> theory Residue_Theorem imports Complex_Residues "HOL-Library.Landau_Symbols" begin subsection \<open>Cauchy's residue theorem\<close> lemma get_integrable_path: assumes "open s" "connected (s-pts)" "finite pts" "f holomorphic_on (s-pts) " "a\<in>s-pts" "b\<in>s-pts" obtains g where "valid_path g" "pathstart g = a" "pathfinish g = b" "path_image g \<subseteq> s-pts" "f contour_integrable_on g" using assms proof (induct arbitrary:s thesis a rule:finite_induct[OF \<open>finite pts\<close>]) case 1 obtain g where "valid_path g" "path_image g \<subseteq> s" "pathstart g = a" "pathfinish g = b" using connected_open_polynomial_connected[OF \<open>open s\<close>,of a b ] \<open>connected (s - {})\<close> valid_path_polynomial_function "1.prems"(6) "1.prems"(7) by auto moreover have "f contour_integrable_on g" using contour_integrable_holomorphic_simple[OF _ \<open>open s\<close> \<open>valid_path g\<close> \<open>path_image g \<subseteq> s\<close>,of f] \<open>f holomorphic_on s - {}\<close> by auto ultimately show ?case using "1"(1)[of g] by auto next case idt:(2 p pts) obtain e where "e>0" and e:"\<forall>w\<in>ball a e. w \<in> s \<and> (w \<noteq> a \<longrightarrow> w \<notin> insert p pts)" using finite_ball_avoid[OF \<open>open s\<close> \<open>finite (insert p pts)\<close>, of a] \<open>a \<in> s - insert p pts\<close> by auto define a' where "a' \<equiv> a+e/2" have "a'\<in>s-{p} -pts" using e[rule_format,of "a+e/2"] \<open>e>0\<close> by (auto simp add:dist_complex_def a'_def) then obtain g' where g'[simp]:"valid_path g'" "pathstart g' = a'" "pathfinish g' = b" "path_image g' \<subseteq> s - {p} - pts" "f contour_integrable_on g'" using idt.hyps(3)[of a' "s-{p}"] idt.prems idt.hyps(1) by (metis Diff_insert2 open_delete) define g where "g \<equiv> linepath a a' +++ g'" have "valid_path g" unfolding g_def by (auto intro: valid_path_join) moreover have "pathstart g = a" and "pathfinish g = b" unfolding g_def by auto moreover have "path_image g \<subseteq> s - insert p pts" unfolding g_def proof (rule subset_path_image_join) have "closed_segment a a' \<subseteq> ball a e" using \<open>e>0\<close> by (auto dest!:segment_bound1 simp:a'_def dist_complex_def norm_minus_commute) then show "path_image (linepath a a') \<subseteq> s - insert p pts" using e idt(9) by auto next show "path_image g' \<subseteq> s - insert p pts" using g'(4) by blast qed moreover have "f contour_integrable_on g" proof - have "closed_segment a a' \<subseteq> ball a e" using \<open>e>0\<close> by (auto dest!:segment_bound1 simp:a'_def dist_complex_def norm_minus_commute) then have "continuous_on (closed_segment a a') f" using e idt.prems(6) holomorphic_on_imp_continuous_on[OF idt.prems(5)] apply (elim continuous_on_subset) by auto then have "f contour_integrable_on linepath a a'" using contour_integrable_continuous_linepath by auto then show ?thesis unfolding g_def apply (rule contour_integrable_joinI) by (auto simp add: \<open>e>0\<close>) qed ultimately show ?case using idt.prems(1)[of g] by auto qed lemma Cauchy_theorem_aux: assumes "open s" "connected (s-pts)" "finite pts" "pts \<subseteq> s" "f holomorphic_on s-pts" "valid_path g" "pathfinish g = pathstart g" "path_image g \<subseteq> s-pts" "\<forall>z. (z \<notin> s) \<longrightarrow> winding_number g z = 0" "\<forall>p\<in>s. h p>0 \<and> (\<forall>w\<in>cball p (h p). 
w\<in>s \<and> (w\<noteq>p \<longrightarrow> w \<notin> pts))" shows "contour_integral g f = (\<Sum>p\<in>pts. winding_number g p * contour_integral (circlepath p (h p)) f)" using assms proof (induct arbitrary:s g rule:finite_induct[OF \<open>finite pts\<close>]) case 1 then show ?case by (simp add: Cauchy_theorem_global contour_integral_unique) next case (2 p pts) note fin[simp] = \<open>finite (insert p pts)\<close> and connected = \<open>connected (s - insert p pts)\<close> and valid[simp] = \<open>valid_path g\<close> and g_loop[simp] = \<open>pathfinish g = pathstart g\<close> and holo[simp]= \<open>f holomorphic_on s - insert p pts\<close> and path_img = \<open>path_image g \<subseteq> s - insert p pts\<close> and winding = \<open>\<forall>z. z \<notin> s \<longrightarrow> winding_number g z = 0\<close> and h = \<open>\<forall>pa\<in>s. 0 < h pa \<and> (\<forall>w\<in>cball pa (h pa). w \<in> s \<and> (w \<noteq> pa \<longrightarrow> w \<notin> insert p pts))\<close> have "h p>0" and "p\<in>s" and h_p: "\<forall>w\<in>cball p (h p). w \<in> s \<and> (w \<noteq> p \<longrightarrow> w \<notin> insert p pts)" using h \<open>insert p pts \<subseteq> s\<close> by auto obtain pg where pg[simp]: "valid_path pg" "pathstart pg = pathstart g" "pathfinish pg=p+h p" "path_image pg \<subseteq> s-insert p pts" "f contour_integrable_on pg" proof - have "p + h p\<in>cball p (h p)" using h[rule_format,of p] by (simp add: \<open>p \<in> s\<close> dist_norm) then have "p + h p \<in> s - insert p pts" using h[rule_format,of p] \<open>insert p pts \<subseteq> s\<close> by fastforce moreover have "pathstart g \<in> s - insert p pts " using path_img by auto ultimately show ?thesis using get_integrable_path[OF \<open>open s\<close> connected fin holo,of "pathstart g" "p+h p"] that by blast qed obtain n::int where "n=winding_number g p" using integer_winding_number[OF _ g_loop,of p] valid path_img by (metis DiffD2 Ints_cases insertI1 subset_eq valid_path_imp_path) define p_circ where "p_circ \<equiv> circlepath p (h p)" define p_circ_pt where "p_circ_pt \<equiv> linepath (p+h p) (p+h p)" define n_circ where "n_circ \<equiv> \<lambda>n. ((+++) p_circ ^^ n) p_circ_pt" define cp where "cp \<equiv> if n\<ge>0 then reversepath (n_circ (nat n)) else n_circ (nat (- n))" have n_circ:"valid_path (n_circ k)" "winding_number (n_circ k) p = k" "pathstart (n_circ k) = p + h p" "pathfinish (n_circ k) = p + h p" "path_image (n_circ k) = (if k=0 then {p + h p} else sphere p (h p))" "p \<notin> path_image (n_circ k)" "\<And>p'. 
p'\<notin>s - pts \<Longrightarrow> winding_number (n_circ k) p'=0 \<and> p'\<notin>path_image (n_circ k)" "f contour_integrable_on (n_circ k)" "contour_integral (n_circ k) f = k * contour_integral p_circ f" for k proof (induct k) case 0 show "valid_path (n_circ 0)" and "path_image (n_circ 0) = (if 0=0 then {p + h p} else sphere p (h p))" and "winding_number (n_circ 0) p = of_nat 0" and "pathstart (n_circ 0) = p + h p" and "pathfinish (n_circ 0) = p + h p" and "p \<notin> path_image (n_circ 0)" unfolding n_circ_def p_circ_pt_def using \<open>h p > 0\<close> by (auto simp add: dist_norm) show "winding_number (n_circ 0) p'=0 \<and> p'\<notin>path_image (n_circ 0)" when "p'\<notin>s- pts" for p' unfolding n_circ_def p_circ_pt_def apply (auto intro!:winding_number_trivial) by (metis Diff_iff pathfinish_in_path_image pg(3) pg(4) subsetCE subset_insertI that)+ show "f contour_integrable_on (n_circ 0)" unfolding n_circ_def p_circ_pt_def by (auto intro!:contour_integrable_continuous_linepath simp add:continuous_on_sing) show "contour_integral (n_circ 0) f = of_nat 0 * contour_integral p_circ f" unfolding n_circ_def p_circ_pt_def by auto next case (Suc k) have n_Suc:"n_circ (Suc k) = p_circ +++ n_circ k" unfolding n_circ_def by auto have pcirc:"p \<notin> path_image p_circ" "valid_path p_circ" "pathfinish p_circ = pathstart (n_circ k)" using Suc(3) unfolding p_circ_def using \<open>h p > 0\<close> by (auto simp add: p_circ_def) have pcirc_image:"path_image p_circ \<subseteq> s - insert p pts" proof - have "path_image p_circ \<subseteq> cball p (h p)" using \<open>0 < h p\<close> p_circ_def by auto then show ?thesis using h_p pcirc(1) by auto qed have pcirc_integrable:"f contour_integrable_on p_circ" by (auto simp add:p_circ_def intro!: pcirc_image[unfolded p_circ_def] contour_integrable_continuous_circlepath holomorphic_on_imp_continuous_on holomorphic_on_subset[OF holo]) show "valid_path (n_circ (Suc k))" using valid_path_join[OF pcirc(2) Suc(1) pcirc(3)] unfolding n_circ_def by auto show "path_image (n_circ (Suc k)) = (if Suc k = 0 then {p + complex_of_real (h p)} else sphere p (h p))" proof - have "path_image p_circ = sphere p (h p)" unfolding p_circ_def using \<open>0 < h p\<close> by auto then show ?thesis unfolding n_Suc using Suc.hyps(5) \<open>h p>0\<close> by (auto simp add: path_image_join[OF pcirc(3)] dist_norm) qed then show "p \<notin> path_image (n_circ (Suc k))" using \<open>h p>0\<close> by auto show "winding_number (n_circ (Suc k)) p = of_nat (Suc k)" proof - have "winding_number p_circ p = 1" by (simp add: \<open>h p > 0\<close> p_circ_def winding_number_circlepath_centre) moreover have "p \<notin> path_image (n_circ k)" using Suc(5) \<open>h p>0\<close> by auto then have "winding_number (p_circ +++ n_circ k) p = winding_number p_circ p + winding_number (n_circ k) p" using valid_path_imp_path Suc.hyps(1) Suc.hyps(2) pcirc apply (intro winding_number_join) by auto ultimately show ?thesis using Suc(2) unfolding n_circ_def by auto qed show "pathstart (n_circ (Suc k)) = p + h p" by (simp add: n_circ_def p_circ_def) show "pathfinish (n_circ (Suc k)) = p + h p" using Suc(4) unfolding n_circ_def by auto show "winding_number (n_circ (Suc k)) p'=0 \<and> p'\<notin>path_image (n_circ (Suc k))" when "p'\<notin>s-pts" for p' proof - have " p' \<notin> path_image p_circ" using \<open>p \<in> s\<close> h p_circ_def that using pcirc_image by blast moreover have "p' \<notin> path_image (n_circ k)" using Suc.hyps(7) that by blast moreover have "winding_number p_circ p' = 0" proof - have 
"path_image p_circ \<subseteq> cball p (h p)" using h unfolding p_circ_def using \<open>p \<in> s\<close> by fastforce moreover have "p'\<notin>cball p (h p)" using \<open>p \<in> s\<close> h that "2.hyps"(2) by fastforce ultimately show ?thesis unfolding p_circ_def apply (intro winding_number_zero_outside) by auto qed ultimately show ?thesis unfolding n_Suc apply (subst winding_number_join) by (auto simp: valid_path_imp_path pcirc Suc that not_in_path_image_join Suc.hyps(7)[OF that]) qed show "f contour_integrable_on (n_circ (Suc k))" unfolding n_Suc by (rule contour_integrable_joinI[OF pcirc_integrable Suc(8) pcirc(2) Suc(1)]) show "contour_integral (n_circ (Suc k)) f = (Suc k) * contour_integral p_circ f" unfolding n_Suc by (auto simp add:contour_integral_join[OF pcirc_integrable Suc(8) pcirc(2) Suc(1)] Suc(9) algebra_simps) qed have cp[simp]:"pathstart cp = p + h p" "pathfinish cp = p + h p" "valid_path cp" "path_image cp \<subseteq> s - insert p pts" "winding_number cp p = - n" "\<And>p'. p'\<notin>s - pts \<Longrightarrow> winding_number cp p'=0 \<and> p' \<notin> path_image cp" "f contour_integrable_on cp" "contour_integral cp f = - n * contour_integral p_circ f" proof - show "pathstart cp = p + h p" and "pathfinish cp = p + h p" and "valid_path cp" using n_circ unfolding cp_def by auto next have "sphere p (h p) \<subseteq> s - insert p pts" using h[rule_format,of p] \<open>insert p pts \<subseteq> s\<close> by force moreover have "p + complex_of_real (h p) \<in> s - insert p pts" using pg(3) pg(4) by (metis pathfinish_in_path_image subsetCE) ultimately show "path_image cp \<subseteq> s - insert p pts" unfolding cp_def using n_circ(5) by auto next show "winding_number cp p = - n" unfolding cp_def using winding_number_reversepath n_circ \<open>h p>0\<close> by (auto simp: valid_path_imp_path) next show "winding_number cp p'=0 \<and> p' \<notin> path_image cp" when "p'\<notin>s - pts" for p' unfolding cp_def apply (auto) apply (subst winding_number_reversepath) by (auto simp add: valid_path_imp_path n_circ(7)[OF that] n_circ(1)) next show "f contour_integrable_on cp" unfolding cp_def using contour_integrable_reversepath_eq n_circ(1,8) by auto next show "contour_integral cp f = - n * contour_integral p_circ f" unfolding cp_def using contour_integral_reversepath[OF n_circ(1)] n_circ(9) by auto qed define g' where "g' \<equiv> g +++ pg +++ cp +++ (reversepath pg)" have "contour_integral g' f = (\<Sum>p\<in>pts. winding_number g' p * contour_integral (circlepath p (h p)) f)" proof (rule "2.hyps"(3)[of "s-{p}" "g'",OF _ _ \<open>finite pts\<close> ]) show "connected (s - {p} - pts)" using connected by (metis Diff_insert2) show "open (s - {p})" using \<open>open s\<close> by auto show " pts \<subseteq> s - {p}" using \<open>insert p pts \<subseteq> s\<close> \<open> p \<notin> pts\<close> by blast show "f holomorphic_on s - {p} - pts" using holo \<open>p \<notin> pts\<close> by (metis Diff_insert2) show "valid_path g'" unfolding g'_def cp_def using n_circ valid pg g_loop by (auto intro!:valid_path_join ) show "pathfinish g' = pathstart g'" unfolding g'_def cp_def using pg(2) by simp show "path_image g' \<subseteq> s - {p} - pts" proof - define s' where "s' \<equiv> s - {p} - pts" have s':"s' = s-insert p pts " unfolding s'_def by auto then show ?thesis using path_img pg(4) cp(4) unfolding g'_def apply (fold s'_def s') apply (intro subset_path_image_join) by auto qed note path_join_imp[simp] show "\<forall>z. 
z \<notin> s - {p} \<longrightarrow> winding_number g' z = 0" proof clarify fix z assume z:"z\<notin>s - {p}" have "winding_number (g +++ pg +++ cp +++ reversepath pg) z = winding_number g z + winding_number (pg +++ cp +++ (reversepath pg)) z" proof (rule winding_number_join) show "path g" using \<open>valid_path g\<close> by (simp add: valid_path_imp_path) show "z \<notin> path_image g" using z path_img by auto show "path (pg +++ cp +++ reversepath pg)" using pg(3) cp by (simp add: valid_path_imp_path) next have "path_image (pg +++ cp +++ reversepath pg) \<subseteq> s - insert p pts" using pg(4) cp(4) by (auto simp:subset_path_image_join) then show "z \<notin> path_image (pg +++ cp +++ reversepath pg)" using z by auto next show "pathfinish g = pathstart (pg +++ cp +++ reversepath pg)" using g_loop by auto qed also have "... = winding_number g z + (winding_number pg z + winding_number (cp +++ (reversepath pg)) z)" proof (subst add_left_cancel,rule winding_number_join) show "path pg" and "path (cp +++ reversepath pg)" and "pathfinish pg = pathstart (cp +++ reversepath pg)" by (auto simp add: valid_path_imp_path) show "z \<notin> path_image pg" using pg(4) z by blast show "z \<notin> path_image (cp +++ reversepath pg)" using z by (metis Diff_iff \<open>z \<notin> path_image pg\<close> contra_subsetD cp(4) insertI1 not_in_path_image_join path_image_reversepath singletonD) qed also have "... = winding_number g z + (winding_number pg z + (winding_number cp z + winding_number (reversepath pg) z))" apply (auto intro!:winding_number_join simp: valid_path_imp_path) apply (metis Diff_iff contra_subsetD cp(4) insertI1 singletonD z) by (metis Diff_insert2 Diff_subset contra_subsetD pg(4) z) also have "... = winding_number g z + winding_number cp z" apply (subst winding_number_reversepath) apply (auto simp: valid_path_imp_path) by (metis Diff_iff contra_subsetD insertI1 pg(4) singletonD z) finally have "winding_number g' z = winding_number g z + winding_number cp z" unfolding g'_def . moreover have "winding_number g z + winding_number cp z = 0" using winding z \<open>n=winding_number g p\<close> by auto ultimately show "winding_number g' z = 0" unfolding g'_def by auto qed show "\<forall>pa\<in>s - {p}. 0 < h pa \<and> (\<forall>w\<in>cball pa (h pa). w \<in> s - {p} \<and> (w \<noteq> pa \<longrightarrow> w \<notin> pts))" using h by fastforce qed moreover have "contour_integral g' f = contour_integral g f - winding_number g p * contour_integral p_circ f" proof - have "contour_integral g' f = contour_integral g f + contour_integral (pg +++ cp +++ reversepath pg) f" unfolding g'_def apply (subst contour_integral_join) by (auto simp add:open_Diff[OF \<open>open s\<close>,OF finite_imp_closed[OF fin]] intro!: contour_integrable_holomorphic_simple[OF holo _ _ path_img] contour_integrable_reversepath) also have "... = contour_integral g f + contour_integral pg f + contour_integral (cp +++ reversepath pg) f" apply (subst contour_integral_join) by (auto simp add:contour_integrable_reversepath) also have "... = contour_integral g f + contour_integral pg f + contour_integral cp f + contour_integral (reversepath pg) f" apply (subst contour_integral_join) by (auto simp add:contour_integrable_reversepath) also have "... = contour_integral g f + contour_integral cp f" using contour_integral_reversepath by (auto simp add:contour_integrable_reversepath) also have "... = contour_integral g f - winding_number g p * contour_integral p_circ f" using \<open>n=winding_number g p\<close> by auto finally show ?thesis . 
qed moreover have "winding_number g' p' = winding_number g p'" when "p'\<in>pts" for p' proof - have [simp]: "p' \<notin> path_image g" "p' \<notin> path_image pg" "p'\<notin>path_image cp" using "2.prems"(8) that apply blast apply (metis Diff_iff Diff_insert2 contra_subsetD pg(4) that) by (meson DiffD2 cp(4) rev_subsetD subset_insertI that) have "winding_number g' p' = winding_number g p' + winding_number (pg +++ cp +++ reversepath pg) p'" unfolding g'_def apply (subst winding_number_join) apply (simp_all add: valid_path_imp_path) apply (intro not_in_path_image_join) by auto also have "... = winding_number g p' + winding_number pg p' + winding_number (cp +++ reversepath pg) p'" apply (subst winding_number_join) apply (simp_all add: valid_path_imp_path) apply (intro not_in_path_image_join) by auto also have "... = winding_number g p' + winding_number pg p'+ winding_number cp p' + winding_number (reversepath pg) p'" apply (subst winding_number_join) by (simp_all add: valid_path_imp_path) also have "... = winding_number g p' + winding_number cp p'" apply (subst winding_number_reversepath) by (simp_all add: valid_path_imp_path) also have "... = winding_number g p'" using that by auto finally show ?thesis . qed ultimately show ?case unfolding p_circ_def apply (subst (asm) sum.cong[OF refl, of pts _ "\<lambda>p. winding_number g p * contour_integral (circlepath p (h p)) f"]) by (auto simp add:sum.insert[OF \<open>finite pts\<close> \<open>p\<notin>pts\<close>] algebra_simps) qed lemma Cauchy_theorem_singularities: assumes "open s" "connected s" "finite pts" and holo:"f holomorphic_on s-pts" and "valid_path g" and loop:"pathfinish g = pathstart g" and "path_image g \<subseteq> s-pts" and homo:"\<forall>z. (z \<notin> s) \<longrightarrow> winding_number g z = 0" and avoid:"\<forall>p\<in>s. h p>0 \<and> (\<forall>w\<in>cball p (h p). w\<in>s \<and> (w\<noteq>p \<longrightarrow> w \<notin> pts))" shows "contour_integral g f = (\<Sum>p\<in>pts. winding_number g p * contour_integral (circlepath p (h p)) f)" (is "?L=?R") proof - define circ where "circ \<equiv> \<lambda>p. winding_number g p * contour_integral (circlepath p (h p)) f" define pts1 where "pts1 \<equiv> pts \<inter> s" define pts2 where "pts2 \<equiv> pts - pts1" have "pts=pts1 \<union> pts2" "pts1 \<inter> pts2 = {}" "pts2 \<inter> s={}" "pts1\<subseteq>s" unfolding pts1_def pts2_def by auto have "contour_integral g f = (\<Sum>p\<in>pts1. circ p)" unfolding circ_def proof (rule Cauchy_theorem_aux[OF \<open>open s\<close> _ _ \<open>pts1\<subseteq>s\<close> _ \<open>valid_path g\<close> loop _ homo]) have "finite pts1" unfolding pts1_def using \<open>finite pts\<close> by auto then show "connected (s - pts1)" using \<open>open s\<close> \<open>connected s\<close> connected_open_delete_finite[of s] by auto next show "finite pts1" using \<open>pts = pts1 \<union> pts2\<close> assms(3) by auto show "f holomorphic_on s - pts1" by (metis Diff_Int2 Int_absorb holo pts1_def) show "path_image g \<subseteq> s - pts1" using assms(7) pts1_def by auto show "\<forall>p\<in>s. 0 < h p \<and> (\<forall>w\<in>cball p (h p). 
w \<in> s \<and> (w \<noteq> p \<longrightarrow> w \<notin> pts1))" by (simp add: avoid pts1_def) qed moreover have "sum circ pts2=0" proof - have "winding_number g p=0" when "p\<in>pts2" for p using \<open>pts2 \<inter> s={}\<close> that homo[rule_format,of p] by auto thus ?thesis unfolding circ_def apply (intro sum.neutral) by auto qed moreover have "?R=sum circ pts1 + sum circ pts2" unfolding circ_def using sum.union_disjoint[OF _ _ \<open>pts1 \<inter> pts2 = {}\<close>] \<open>finite pts\<close> \<open>pts=pts1 \<union> pts2\<close> by blast ultimately show ?thesis apply (fold circ_def) by auto qed theorem Residue_theorem: fixes s pts::"complex set" and f::"complex \<Rightarrow> complex" and g::"real \<Rightarrow> complex" assumes "open s" "connected s" "finite pts" and holo:"f holomorphic_on s-pts" and "valid_path g" and loop:"pathfinish g = pathstart g" and "path_image g \<subseteq> s-pts" and homo:"\<forall>z. (z \<notin> s) \<longrightarrow> winding_number g z = 0" shows "contour_integral g f = 2 * pi * \<i> *(\<Sum>p\<in>pts. winding_number g p * residue f p)" proof - define c where "c \<equiv> 2 * pi * \<i>" obtain h where avoid:"\<forall>p\<in>s. h p>0 \<and> (\<forall>w\<in>cball p (h p). w\<in>s \<and> (w\<noteq>p \<longrightarrow> w \<notin> pts))" using finite_cball_avoid[OF \<open>open s\<close> \<open>finite pts\<close>] by metis have "contour_integral g f = (\<Sum>p\<in>pts. winding_number g p * contour_integral (circlepath p (h p)) f)" using Cauchy_theorem_singularities[OF assms avoid] . also have "... = (\<Sum>p\<in>pts. c * winding_number g p * residue f p)" proof (intro sum.cong) show "pts = pts" by simp next fix x assume "x \<in> pts" show "winding_number g x * contour_integral (circlepath x (h x)) f = c * winding_number g x * residue f x" proof (cases "x\<in>s") case False then have "winding_number g x=0" using homo by auto thus ?thesis by auto next case True have "contour_integral (circlepath x (h x)) f = c* residue f x" using \<open>x\<in>pts\<close> \<open>finite pts\<close> avoid[rule_format,OF True] apply (intro base_residue[of "s-(pts-{x})",THEN contour_integral_unique,folded c_def]) by (auto intro:holomorphic_on_subset[OF holo] open_Diff[OF \<open>open s\<close> finite_imp_closed]) then show ?thesis by auto qed qed also have "... = c * (\<Sum>p\<in>pts. winding_number g p * residue f p)" by (simp add: sum_distrib_left algebra_simps) finally show ?thesis unfolding c_def . qed subsection \<open>The argument principle\<close> theorem argument_principle: fixes f::"complex \<Rightarrow> complex" and poles s:: "complex set" defines "pz \<equiv> {w. f w = 0 \<or> w \<in> poles}" \<comment> \<open>\<^term>\<open>pz\<close> is the set of poles and zeros\<close> assumes "open s" and "connected s" and f_holo:"f holomorphic_on s-poles" and h_holo:"h holomorphic_on s" and "valid_path g" and loop:"pathfinish g = pathstart g" and path_img:"path_image g \<subseteq> s - pz" and homo:"\<forall>z. (z \<notin> s) \<longrightarrow> winding_number g z = 0" and finite:"finite pz" and poles:"\<forall>p\<in>poles. is_pole f p" shows "contour_integral g (\<lambda>x. deriv f x * h x / f x) = 2 * pi * \<i> * (\<Sum>p\<in>pz. winding_number g p * h p * zorder f p)" (is "?L=?R") proof - define c where "c \<equiv> 2 * complex_of_real pi * \<i> " define ff where "ff \<equiv> (\<lambda>x. deriv f x * h x / f x)" define cont where "cont \<equiv> \<lambda>ff p e. (ff has_contour_integral c * zorder f p * h p ) (circlepath p e)" define avoid where "avoid \<equiv> \<lambda>p e. 
\<forall>w\<in>cball p e. w \<in> s \<and> (w \<noteq> p \<longrightarrow> w \<notin> pz)" have "\<exists>e>0. avoid p e \<and> (p\<in>pz \<longrightarrow> cont ff p e)" when "p\<in>s" for p proof - obtain e1 where "e1>0" and e1_avoid:"avoid p e1" using finite_cball_avoid[OF \<open>open s\<close> finite] \<open>p\<in>s\<close> unfolding avoid_def by auto have "\<exists>e2>0. cball p e2 \<subseteq> ball p e1 \<and> cont ff p e2" when "p\<in>pz" proof - define po where "po \<equiv> zorder f p" define pp where "pp \<equiv> zor_poly f p" define f' where "f' \<equiv> \<lambda>w. pp w * (w - p) powr po" define ff' where "ff' \<equiv> (\<lambda>x. deriv f' x * h x / f' x)" obtain r where "pp p\<noteq>0" "r>0" and "r<e1" and pp_holo:"pp holomorphic_on cball p r" and pp_po:"(\<forall>w\<in>cball p r-{p}. f w = pp w * (w - p) powr po \<and> pp w \<noteq> 0)" proof - have "isolated_singularity_at f p" proof - have "f holomorphic_on ball p e1 - {p}" apply (intro holomorphic_on_subset[OF f_holo]) using e1_avoid \<open>p\<in>pz\<close> unfolding avoid_def pz_def by force then show ?thesis unfolding isolated_singularity_at_def using \<open>e1>0\<close> analytic_on_open open_delete by blast qed moreover have "not_essential f p" proof (cases "is_pole f p") case True then show ?thesis unfolding not_essential_def by auto next case False then have "p\<in>s-poles" using \<open>p\<in>s\<close> poles unfolding pz_def by auto moreover have "open (s-poles)" using \<open>open s\<close> apply (elim open_Diff) apply (rule finite_imp_closed) using finite unfolding pz_def by simp ultimately have "isCont f p" using holomorphic_on_imp_continuous_on[OF f_holo] continuous_on_eq_continuous_at by auto then show ?thesis unfolding isCont_def not_essential_def by auto qed moreover have "\<exists>\<^sub>F w in at p. f w \<noteq> 0 " proof (rule ccontr) assume "\<not> (\<exists>\<^sub>F w in at p. f w \<noteq> 0)" then have "\<forall>\<^sub>F w in at p. f w= 0" unfolding frequently_def by auto then obtain rr where "rr>0" "\<forall>w\<in>ball p rr - {p}. f w =0" unfolding eventually_at by (auto simp add:dist_commute) then have "ball p rr - {p} \<subseteq> {w\<in>ball p rr-{p}. f w=0}" by blast moreover have "infinite (ball p rr - {p})" using \<open>rr>0\<close> using finite_imp_not_open by fastforce ultimately have "infinite {w\<in>ball p rr-{p}. f w=0}" using infinite_super by blast then have "infinite pz" unfolding pz_def infinite_super by auto then show False using \<open>finite pz\<close> by auto qed ultimately obtain r where "pp p \<noteq> 0" and r:"r>0" "pp holomorphic_on cball p r" "(\<forall>w\<in>cball p r - {p}. f w = pp w * (w - p) powr of_int po \<and> pp w \<noteq> 0)" using zorder_exist[of f p,folded po_def pp_def] by auto define r1 where "r1=min r e1 / 2" have "r1<e1" unfolding r1_def using \<open>e1>0\<close> \<open>r>0\<close> by auto moreover have "r1>0" "pp holomorphic_on cball p r1" "(\<forall>w\<in>cball p r1 - {p}. f w = pp w * (w - p) powr of_int po \<and> pp w \<noteq> 0)" unfolding r1_def using \<open>e1>0\<close> r by auto ultimately show ?thesis using that \<open>pp p\<noteq>0\<close> by auto qed define e2 where "e2 \<equiv> r/2" have "e2>0" using \<open>r>0\<close> unfolding e2_def by auto define anal where "anal \<equiv> \<lambda>w. deriv pp w * h w / pp w" define prin where "prin \<equiv> \<lambda>w. po * h w / (w - p)" have "((\<lambda>w. 
prin w + anal w) has_contour_integral c * po * h p) (circlepath p e2)" proof (rule has_contour_integral_add[of _ _ _ _ 0,simplified]) have "ball p r \<subseteq> s" using \<open>r<e1\<close> avoid_def ball_subset_cball e1_avoid by (simp add: subset_eq) then have "cball p e2 \<subseteq> s" using \<open>r>0\<close> unfolding e2_def by auto then have "(\<lambda>w. po * h w) holomorphic_on cball p e2" using h_holo by (auto intro!: holomorphic_intros) then show "(prin has_contour_integral c * po * h p ) (circlepath p e2)" using Cauchy_integral_circlepath_simple[folded c_def, of "\<lambda>w. po * h w"] \<open>e2>0\<close> unfolding prin_def by (auto simp add: mult.assoc) have "anal holomorphic_on ball p r" unfolding anal_def using pp_holo h_holo pp_po \<open>ball p r \<subseteq> s\<close> \<open>pp p\<noteq>0\<close> by (auto intro!: holomorphic_intros) then show "(anal has_contour_integral 0) (circlepath p e2)" using e2_def \<open>r>0\<close> by (auto elim!: Cauchy_theorem_disc_simple) qed then have "cont ff' p e2" unfolding cont_def po_def proof (elim has_contour_integral_eq) fix w assume "w \<in> path_image (circlepath p e2)" then have "w\<in>ball p r" and "w\<noteq>p" unfolding e2_def using \<open>r>0\<close> by auto define wp where "wp \<equiv> w-p" have "wp\<noteq>0" and "pp w \<noteq>0" unfolding wp_def using \<open>w\<noteq>p\<close> \<open>w\<in>ball p r\<close> pp_po by auto moreover have der_f':"deriv f' w = po * pp w * (w-p) powr (po - 1) + deriv pp w * (w-p) powr po" proof (rule DERIV_imp_deriv) have "(pp has_field_derivative (deriv pp w)) (at w)" using DERIV_deriv_iff_has_field_derivative pp_holo \<open>w\<noteq>p\<close> by (meson open_ball \<open>w \<in> ball p r\<close> ball_subset_cball holomorphic_derivI holomorphic_on_subset) then show " (f' has_field_derivative of_int po * pp w * (w - p) powr of_int (po - 1) + deriv pp w * (w - p) powr of_int po) (at w)" unfolding f'_def using \<open>w\<noteq>p\<close> by (auto intro!: derivative_eq_intros DERIV_cong[OF has_field_derivative_powr_of_int]) qed ultimately show "prin w + anal w = ff' w" unfolding ff'_def prin_def anal_def apply simp apply (unfold f'_def) apply (fold wp_def) apply (auto simp add:field_simps) by (metis (no_types, lifting) diff_add_cancel mult.commute powr_add powr_to_1) qed then have "cont ff p e2" unfolding cont_def proof (elim has_contour_integral_eq) fix w assume "w \<in> path_image (circlepath p e2)" then have "w\<in>ball p r" and "w\<noteq>p" unfolding e2_def using \<open>r>0\<close> by auto have "deriv f' w = deriv f w" proof (rule complex_derivative_transform_within_open[where s="ball p r - {p}"]) show "f' holomorphic_on ball p r - {p}" unfolding f'_def using pp_holo by (auto intro!: holomorphic_intros) next have "ball p e1 - {p} \<subseteq> s - poles" using ball_subset_cball e1_avoid[unfolded avoid_def] unfolding pz_def by auto then have "ball p r - {p} \<subseteq> s - poles" apply (elim dual_order.trans) using \<open>r<e1\<close> by auto then show "f holomorphic_on ball p r - {p}" using f_holo by auto next show "open (ball p r - {p})" by auto show "w \<in> ball p r - {p}" using \<open>w\<in>ball p r\<close> \<open>w\<noteq>p\<close> by auto next fix x assume "x \<in> ball p r - {p}" then show "f' x = f x" using pp_po unfolding f'_def by auto qed moreover have " f' w = f w " using \<open>w \<in> ball p r\<close> ball_subset_cball subset_iff pp_po \<open>w\<noteq>p\<close> unfolding f'_def by auto ultimately show "ff' w = ff w" unfolding ff'_def ff_def by simp qed moreover have "cball p e2 \<subseteq> ball p 
e1" using \<open>0 < r\<close> \<open>r<e1\<close> e2_def by auto ultimately show ?thesis using \<open>e2>0\<close> by auto qed then obtain e2 where e2:"p\<in>pz \<longrightarrow> e2>0 \<and> cball p e2 \<subseteq> ball p e1 \<and> cont ff p e2" by auto define e4 where "e4 \<equiv> if p\<in>pz then e2 else e1" have "e4>0" using e2 \<open>e1>0\<close> unfolding e4_def by auto moreover have "avoid p e4" using e2 \<open>e1>0\<close> e1_avoid unfolding e4_def avoid_def by auto moreover have "p\<in>pz \<longrightarrow> cont ff p e4" by (auto simp add: e2 e4_def) ultimately show ?thesis by auto qed then obtain get_e where get_e:"\<forall>p\<in>s. get_e p>0 \<and> avoid p (get_e p) \<and> (p\<in>pz \<longrightarrow> cont ff p (get_e p))" by metis define ci where "ci \<equiv> \<lambda>p. contour_integral (circlepath p (get_e p)) ff" define w where "w \<equiv> \<lambda>p. winding_number g p" have "contour_integral g ff = (\<Sum>p\<in>pz. w p * ci p)" unfolding ci_def w_def proof (rule Cauchy_theorem_singularities[OF \<open>open s\<close> \<open>connected s\<close> finite _ \<open>valid_path g\<close> loop path_img homo]) have "open (s - pz)" using open_Diff[OF _ finite_imp_closed[OF finite]] \<open>open s\<close> by auto then show "ff holomorphic_on s - pz" unfolding ff_def using f_holo h_holo by (auto intro!: holomorphic_intros simp add:pz_def) next show "\<forall>p\<in>s. 0 < get_e p \<and> (\<forall>w\<in>cball p (get_e p). w \<in> s \<and> (w \<noteq> p \<longrightarrow> w \<notin> pz))" using get_e using avoid_def by blast qed also have "... = (\<Sum>p\<in>pz. c * w p * h p * zorder f p)" proof (rule sum.cong[of pz pz,simplified]) fix p assume "p \<in> pz" show "w p * ci p = c * w p * h p * (zorder f p)" proof (cases "p\<in>s") assume "p \<in> s" have "ci p = c * h p * (zorder f p)" unfolding ci_def apply (rule contour_integral_unique) using get_e \<open>p\<in>s\<close> \<open>p\<in>pz\<close> unfolding cont_def by (metis mult.assoc mult.commute) thus ?thesis by auto next assume "p\<notin>s" then have "w p=0" using homo unfolding w_def by auto then show ?thesis by auto qed qed also have "... = c*(\<Sum>p\<in>pz. w p * h p * zorder f p)" unfolding sum_distrib_left by (simp add:algebra_simps) finally have "contour_integral g ff = c * (\<Sum>p\<in>pz. w p * h p * of_int (zorder f p))" . then show ?thesis unfolding ff_def c_def w_def by simp qed subsection \<open>Coefficient asymptotics for generating functions\<close> text \<open> For a formal power series that has a meromorphic continuation on some disc in the context plane, we can use the Residue Theorem to extract precise asymptotic information from the residues at the poles. This can be used to derive the asymptotic behaviour of the coefficients (\<open>a\<^sub>n \<sim> \<dots>\<close>). If the function is meromorphic on the entire complex plane, one can even derive a full asymptotic expansion. We will first show the relationship between the coefficients and the sum over the residues with an explicit remainder term (the contour integral along the circle used in the Residue theorem). \<close> theorem fixes f :: "complex \<Rightarrow> complex" and n :: nat and r :: real defines "g \<equiv> (\<lambda>w. f w / w ^ Suc n)" and "\<gamma> \<equiv> circlepath 0 r" assumes "open A" "connected A" "cball 0 r \<subseteq> A" "r > 0" assumes "f holomorphic_on A - S" "S \<subseteq> ball 0 r" "finite S" "0 \<notin> S" shows fps_coeff_conv_residues: "(deriv ^^ n) f 0 / fact n = contour_integral \<gamma> g / (2 * pi * \<i>) - (\<Sum>z\<in>S. 
residue g z)" (is ?thesis1) and fps_coeff_residues_bound: "(\<And>z. norm z = r \<Longrightarrow> z \<notin> k \<Longrightarrow> norm (f z) \<le> C) \<Longrightarrow> C \<ge> 0 \<Longrightarrow> finite k \<Longrightarrow> norm ((deriv ^^ n) f 0 / fact n + (\<Sum>z\<in>S. residue g z)) \<le> C / r ^ n" proof - have holo: "g holomorphic_on A - insert 0 S" unfolding g_def using assms by (auto intro!: holomorphic_intros) have "contour_integral \<gamma> g = 2 * pi * \<i> * (\<Sum>z\<in>insert 0 S. winding_number \<gamma> z * residue g z)" proof (rule Residue_theorem) show "g holomorphic_on A - insert 0 S" by fact from assms show "\<forall>z. z \<notin> A \<longrightarrow> winding_number \<gamma> z = 0" unfolding \<gamma>_def by (intro allI impI winding_number_zero_outside[of _ "cball 0 r"]) auto qed (insert assms, auto simp: \<gamma>_def) also have "winding_number \<gamma> z = 1" if "z \<in> insert 0 S" for z unfolding \<gamma>_def using assms that by (intro winding_number_circlepath) auto hence "(\<Sum>z\<in>insert 0 S. winding_number \<gamma> z * residue g z) = (\<Sum>z\<in>insert 0 S. residue g z)" by (intro sum.cong) simp_all also have "\<dots> = residue g 0 + (\<Sum>z\<in>S. residue g z)" using \<open>0 \<notin> S\<close> and \<open>finite S\<close> by (subst sum.insert) auto also from \<open>r > 0\<close> have "0 \<in> cball 0 r" by simp with assms have "0 \<in> A - S" by blast with assms have "residue g 0 = (deriv ^^ n) f 0 / fact n" unfolding g_def by (subst residue_holomorphic_over_power'[of "A - S"]) (auto simp: finite_imp_closed) finally show ?thesis1 by (simp add: field_simps) assume C: "\<And>z. norm z = r \<Longrightarrow> z \<notin> k \<Longrightarrow> norm (f z) \<le> C" "C \<ge> 0" and k: "finite k" have "(deriv ^^ n) f 0 / fact n + (\<Sum>z\<in>S. residue g z) = contour_integral \<gamma> g / (2 * pi * \<i>)" using \<open>?thesis1\<close> by (simp add: algebra_simps) also have "norm \<dots> = norm (contour_integral \<gamma> g) / (2 * pi)" by (simp add: norm_divide norm_mult) also have "norm (contour_integral \<gamma> g) \<le> C / r ^ Suc n * (2 * pi * r)" proof (rule has_contour_integral_bound_circlepath_strong) from \<open>open A\<close> and \<open>finite S\<close> have "open (A - insert 0 S)" by (blast intro: finite_imp_closed) with assms show "(g has_contour_integral contour_integral \<gamma> g) (circlepath 0 r)" unfolding \<gamma>_def by (intro has_contour_integral_integral contour_integrable_holomorphic_simple [OF holo]) auto next fix z assume z: "norm (z - 0) = r" "z \<notin> k" hence "norm (g z) = norm (f z) / r ^ Suc n" by (simp add: norm_divide g_def norm_mult norm_power) also have "\<dots> \<le> C / r ^ Suc n" using k and \<open>r > 0\<close> and z by (intro divide_right_mono C zero_le_power) auto finally show "norm (g z) \<le> C / r ^ Suc n" . qed (insert C(2) k \<open>r > 0\<close>, auto) also from \<open>r > 0\<close> have "C / r ^ Suc n * (2 * pi * r) / (2 * pi) = C / r ^ n" by simp finally show "norm ((deriv ^^ n) f 0 / fact n + (\<Sum>z\<in>S. residue g z)) \<le> \<dots>" by - (simp_all add: divide_right_mono) qed text \<open> Since the circle is fixed, we can get an upper bound on the values of the generating function on the circle and therefore show that the integral is $O(r^{-n})$. 
\<close> corollary fps_coeff_residues_bigo: fixes f :: "complex \<Rightarrow> complex" and r :: real assumes "open A" "connected A" "cball 0 r \<subseteq> A" "r > 0" assumes "f holomorphic_on A - S" "S \<subseteq> ball 0 r" "finite S" "0 \<notin> S" assumes g: "eventually (\<lambda>n. g n = -(\<Sum>z\<in>S. residue (\<lambda>z. f z / z ^ Suc n) z)) sequentially" (is "eventually (\<lambda>n. _ = -?g' n) _") shows "(\<lambda>n. (deriv ^^ n) f 0 / fact n - g n) \<in> O(\<lambda>n. 1 / r ^ n)" (is "(\<lambda>n. ?c n - _) \<in> O(_)") proof - from assms have "compact (f ` sphere 0 r)" by (intro compact_continuous_image holomorphic_on_imp_continuous_on holomorphic_on_subset[OF \<open>f holomorphic_on A - S\<close>]) auto hence "bounded (f ` sphere 0 r)" by (rule compact_imp_bounded) then obtain C where C: "\<And>z. z \<in> sphere 0 r \<Longrightarrow> norm (f z) \<le> C" by (auto simp: bounded_iff sphere_def) have "0 \<le> norm (f (of_real r))" by simp also from C[of "of_real r"] and \<open>r > 0\<close> have "\<dots> \<le> C" by simp finally have C_nonneg: "C \<ge> 0" . have "(\<lambda>n. ?c n + ?g' n) \<in> O(\<lambda>n. of_real (1 / r ^ n))" proof (intro bigoI[of _ C] always_eventually allI ) fix n :: nat from assms and C and C_nonneg have "norm (?c n + ?g' n) \<le> C / r ^ n" by (intro fps_coeff_residues_bound[where A = A and k = "{}"]) auto also have "\<dots> = C * norm (complex_of_real (1 / r ^ n))" using \<open>r > 0\<close> by (simp add: norm_divide norm_power) finally show "norm (?c n + ?g' n) \<le> \<dots>" . qed also have "?this \<longleftrightarrow> (\<lambda>n. ?c n - g n) \<in> O(\<lambda>n. of_real (1 / r ^ n))" by (intro landau_o.big.in_cong eventually_mono[OF g]) simp_all finally show ?thesis . qed corollary fps_coeff_residues_bigo': fixes f :: "complex \<Rightarrow> complex" and r :: real assumes exp: "f has_fps_expansion F" assumes "open A" "connected A" "cball 0 r \<subseteq> A" "r > 0" assumes "f holomorphic_on A - S" "S \<subseteq> ball 0 r" "finite S" "0 \<notin> S" assumes "eventually (\<lambda>n. g n = -(\<Sum>z\<in>S. residue (\<lambda>z. f z / z ^ Suc n) z)) sequentially" (is "eventually (\<lambda>n. _ = -?g' n) _") shows "(\<lambda>n. fps_nth F n - g n) \<in> O(\<lambda>n. 1 / r ^ n)" (is "(\<lambda>n. ?c n - _) \<in> O(_)") proof - have "fps_nth F = (\<lambda>n. (deriv ^^ n) f 0 / fact n)" using fps_nth_fps_expansion[OF exp] by (intro ext) simp_all with fps_coeff_residues_bigo[OF assms(2-)] show ?thesis by simp qed subsection \<open>Rouche's theorem \<close> theorem Rouche_theorem: fixes f g::"complex \<Rightarrow> complex" and s:: "complex set" defines "fg\<equiv>(\<lambda>p. f p + g p)" defines "zeros_fg\<equiv>{p. fg p = 0}" and "zeros_f\<equiv>{p. f p = 0}" assumes "open s" and "connected s" and "finite zeros_fg" and "finite zeros_f" and f_holo:"f holomorphic_on s" and g_holo:"g holomorphic_on s" and "valid_path \<gamma>" and loop:"pathfinish \<gamma> = pathstart \<gamma>" and path_img:"path_image \<gamma> \<subseteq> s " and path_less:"\<forall>z\<in>path_image \<gamma>. cmod(f z) > cmod(g z)" and homo:"\<forall>z. (z \<notin> s) \<longrightarrow> winding_number \<gamma> z = 0" shows "(\<Sum>p\<in>zeros_fg. winding_number \<gamma> p * zorder fg p) = (\<Sum>p\<in>zeros_f. 
winding_number \<gamma> p * zorder f p)" proof - have path_fg:"path_image \<gamma> \<subseteq> s - zeros_fg" proof - have False when "z\<in>path_image \<gamma>" and "f z + g z=0" for z proof - have "cmod (f z) > cmod (g z)" using \<open>z\<in>path_image \<gamma>\<close> path_less by auto moreover have "f z = - g z" using \<open>f z + g z =0\<close> by (simp add: eq_neg_iff_add_eq_0) then have "cmod (f z) = cmod (g z)" by auto ultimately show False by auto qed then show ?thesis unfolding zeros_fg_def fg_def using path_img by auto qed have path_f:"path_image \<gamma> \<subseteq> s - zeros_f" proof - have False when "z\<in>path_image \<gamma>" and "f z =0" for z proof - have "cmod (g z) < cmod (f z) " using \<open>z\<in>path_image \<gamma>\<close> path_less by auto then have "cmod (g z) < 0" using \<open>f z=0\<close> by auto then show False by auto qed then show ?thesis unfolding zeros_f_def using path_img by auto qed define w where "w \<equiv> \<lambda>p. winding_number \<gamma> p" define c where "c \<equiv> 2 * complex_of_real pi * \<i>" define h where "h \<equiv> \<lambda>p. g p / f p + 1" obtain spikes where "finite spikes" and spikes: "\<forall>x\<in>{0..1} - spikes. \<gamma> differentiable at x" using \<open>valid_path \<gamma>\<close> by (auto simp: valid_path_def piecewise_C1_differentiable_on_def C1_differentiable_on_eq) have h_contour:"((\<lambda>x. deriv h x / h x) has_contour_integral 0) \<gamma>" proof - have outside_img:"0 \<in> outside (path_image (h o \<gamma>))" proof - have "h p \<in> ball 1 1" when "p\<in>path_image \<gamma>" for p proof - have "cmod (g p/f p) <1" using path_less[rule_format,OF that] apply (cases "cmod (f p) = 0") by (auto simp add: norm_divide) then show ?thesis unfolding h_def by (auto simp add:dist_complex_def) qed then have "path_image (h o \<gamma>) \<subseteq> ball 1 1" by (simp add: image_subset_iff path_image_compose) moreover have " (0::complex) \<notin> ball 1 1" by (simp add: dist_norm) ultimately show "?thesis" using convex_in_outside[of "ball 1 1" 0] outside_mono by blast qed have valid_h:"valid_path (h \<circ> \<gamma>)" proof (rule valid_path_compose_holomorphic[OF \<open>valid_path \<gamma>\<close> _ _ path_f]) show "h holomorphic_on s - zeros_f" unfolding h_def using f_holo g_holo by (auto intro!: holomorphic_intros simp add:zeros_f_def) next show "open (s - zeros_f)" using \<open>finite zeros_f\<close> \<open>open s\<close> finite_imp_closed by auto qed have "((\<lambda>z. 1/z) has_contour_integral 0) (h \<circ> \<gamma>)" proof - have "0 \<notin> path_image (h \<circ> \<gamma>)" using outside_img by (simp add: outside_def) then have "((\<lambda>z. 1/z) has_contour_integral c * winding_number (h \<circ> \<gamma>) 0) (h \<circ> \<gamma>)" using has_contour_integral_winding_number[of "h o \<gamma>" 0,simplified] valid_h unfolding c_def by auto moreover have "winding_number (h o \<gamma>) 0 = 0" proof - have "0 \<in> outside (path_image (h \<circ> \<gamma>))" using outside_img . 
moreover have "path (h o \<gamma>)" using valid_h by (simp add: valid_path_imp_path) moreover have "pathfinish (h o \<gamma>) = pathstart (h o \<gamma>)" by (simp add: loop pathfinish_compose pathstart_compose) ultimately show ?thesis using winding_number_zero_in_outside by auto qed ultimately show ?thesis by auto qed moreover have "vector_derivative (h \<circ> \<gamma>) (at x) = vector_derivative \<gamma> (at x) * deriv h (\<gamma> x)" when "x\<in>{0..1} - spikes" for x proof (rule vector_derivative_chain_at_general) show "\<gamma> differentiable at x" using that \<open>valid_path \<gamma>\<close> spikes by auto next define der where "der \<equiv> \<lambda>p. (deriv g p * f p - g p * deriv f p)/(f p * f p)" define t where "t \<equiv> \<gamma> x" have "f t\<noteq>0" unfolding zeros_f_def t_def by (metis DiffD1 image_eqI norm_not_less_zero norm_zero path_defs(4) path_less that) moreover have "t\<in>s" using contra_subsetD path_image_def path_fg t_def that by fastforce ultimately have "(h has_field_derivative der t) (at t)" unfolding h_def der_def using g_holo f_holo \<open>open s\<close> by (auto intro!: holomorphic_derivI derivative_eq_intros) then show "h field_differentiable at (\<gamma> x)" unfolding t_def field_differentiable_def by blast qed then have " ((/) 1 has_contour_integral 0) (h \<circ> \<gamma>) = ((\<lambda>x. deriv h x / h x) has_contour_integral 0) \<gamma>" unfolding has_contour_integral apply (intro has_integral_spike_eq[OF negligible_finite, OF \<open>finite spikes\<close>]) by auto ultimately show ?thesis by auto qed then have "contour_integral \<gamma> (\<lambda>x. deriv h x / h x) = 0" using contour_integral_unique by simp moreover have "contour_integral \<gamma> (\<lambda>x. deriv fg x / fg x) = contour_integral \<gamma> (\<lambda>x. deriv f x / f x) + contour_integral \<gamma> (\<lambda>p. deriv h p / h p)" proof - have "(\<lambda>p. deriv f p / f p) contour_integrable_on \<gamma>" proof (rule contour_integrable_holomorphic_simple[OF _ _ \<open>valid_path \<gamma>\<close> path_f]) show "open (s - zeros_f)" using finite_imp_closed[OF \<open>finite zeros_f\<close>] \<open>open s\<close> by auto then show "(\<lambda>p. deriv f p / f p) holomorphic_on s - zeros_f" using f_holo by (auto intro!: holomorphic_intros simp add:zeros_f_def) qed moreover have "(\<lambda>p. deriv h p / h p) contour_integrable_on \<gamma>" using h_contour by (simp add: has_contour_integral_integrable) ultimately have "contour_integral \<gamma> (\<lambda>x. deriv f x / f x + deriv h x / h x) = contour_integral \<gamma> (\<lambda>p. deriv f p / f p) + contour_integral \<gamma> (\<lambda>p. deriv h p / h p)" using contour_integral_add[of "(\<lambda>p. deriv f p / f p)" \<gamma> "(\<lambda>p. 
deriv h p / h p)" ] by auto moreover have "deriv fg p / fg p = deriv f p / f p + deriv h p / h p" when "p\<in> path_image \<gamma>" for p proof - have "fg p\<noteq>0" and "f p\<noteq>0" using path_f path_fg that unfolding zeros_f_def zeros_fg_def by auto have "h p\<noteq>0" proof (rule ccontr) assume "\<not> h p \<noteq> 0" then have "g p / f p= -1" unfolding h_def by (simp add: add_eq_0_iff2) then have "cmod (g p/f p) = 1" by auto moreover have "cmod (g p/f p) <1" using path_less[rule_format,OF that] apply (cases "cmod (f p) = 0") by (auto simp add: norm_divide) ultimately show False by auto qed have der_fg:"deriv fg p = deriv f p + deriv g p" unfolding fg_def using f_holo g_holo holomorphic_on_imp_differentiable_at[OF _ \<open>open s\<close>] path_img that by auto have der_h:"deriv h p = (deriv g p * f p - g p * deriv f p)/(f p * f p)" proof - define der where "der \<equiv> \<lambda>p. (deriv g p * f p - g p * deriv f p)/(f p * f p)" have "p\<in>s" using path_img that by auto then have "(h has_field_derivative der p) (at p)" unfolding h_def der_def using g_holo f_holo \<open>open s\<close> \<open>f p\<noteq>0\<close> by (auto intro!: derivative_eq_intros holomorphic_derivI) then show ?thesis unfolding der_def using DERIV_imp_deriv by auto qed show ?thesis apply (simp only:der_fg der_h) apply (auto simp add:field_simps \<open>h p\<noteq>0\<close> \<open>f p\<noteq>0\<close> \<open>fg p\<noteq>0\<close>) by (auto simp add:field_simps h_def \<open>f p\<noteq>0\<close> fg_def) qed then have "contour_integral \<gamma> (\<lambda>p. deriv fg p / fg p) = contour_integral \<gamma> (\<lambda>p. deriv f p / f p + deriv h p / h p)" by (elim contour_integral_eq) ultimately show ?thesis by auto qed moreover have "contour_integral \<gamma> (\<lambda>x. deriv fg x / fg x) = c * (\<Sum>p\<in>zeros_fg. w p * zorder fg p)" unfolding c_def zeros_fg_def w_def proof (rule argument_principle[OF \<open>open s\<close> \<open>connected s\<close> _ _ \<open>valid_path \<gamma>\<close> loop _ homo , of _ "{}" "\<lambda>_. 1",simplified]) show "fg holomorphic_on s" unfolding fg_def using f_holo g_holo holomorphic_on_add by auto show "path_image \<gamma> \<subseteq> s - {p. fg p = 0}" using path_fg unfolding zeros_fg_def . show " finite {p. fg p = 0}" using \<open>finite zeros_fg\<close> unfolding zeros_fg_def . qed moreover have "contour_integral \<gamma> (\<lambda>x. deriv f x / f x) = c * (\<Sum>p\<in>zeros_f. w p * zorder f p)" unfolding c_def zeros_f_def w_def proof (rule argument_principle[OF \<open>open s\<close> \<open>connected s\<close> _ _ \<open>valid_path \<gamma>\<close> loop _ homo , of _ "{}" "\<lambda>_. 1",simplified]) show "f holomorphic_on s" using f_holo g_holo holomorphic_on_add by auto show "path_image \<gamma> \<subseteq> s - {p. f p = 0}" using path_f unfolding zeros_f_def . show " finite {p. f p = 0}" using \<open>finite zeros_f\<close> unfolding zeros_f_def . qed ultimately have " c* (\<Sum>p\<in>zeros_fg. w p * (zorder fg p)) = c* (\<Sum>p\<in>zeros_f. w p * (zorder f p))" by auto then show ?thesis unfolding c_def using w_def by auto qed end
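For readers skimming the Isabelle source, here is a plain-LaTeX restatement of what fps_coeff_conv_residues and fps_coeff_residues_bound above assert; this is only a paraphrase of the formal statements, not additional formal content. With g(w) = f(w)/w^(n+1), S the finite set of singularities inside the circle of radius r, and f holomorphic on A - S:

\[
  \frac{f^{(n)}(0)}{n!}
    \;=\; \frac{1}{2\pi i}\oint_{|z|=r} \frac{f(z)}{z^{n+1}}\,dz
    \;-\; \sum_{z \in S} \operatorname{Res}_{z}\!\left(\frac{f(z)}{z^{n+1}}\right),
\]
and, whenever $\lvert f(z)\rvert \le C$ on the circle $\lvert z\rvert = r$ (outside a finite exceptional set $k$),
\[
  \left\lvert \frac{f^{(n)}(0)}{n!}
    + \sum_{z \in S} \operatorname{Res}_{z}\!\left(\frac{f(z)}{z^{n+1}}\right) \right\rvert
    \;\le\; \frac{C}{r^{n}}.
\]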
## Dale W.R. Rosenthal, 2018
## You are free to distribute and use this code so long as you attribute
## it to me or cite the text.
## The legal disclaimer in _A Quantitative Primer on Investments with R_
## applies to this code. Use or distribution without these comment lines
## is forbidden.

price.tree <- function(numsteps, under.tree, opt.payoffs, rf, delta.t,
                       ups=0, downs=0) {
  if (numsteps > 0) {
    up.value <- price.tree(numsteps-1, under.tree, opt.payoffs, rf, delta.t,
                           ups+1, downs)
    down.value <- price.tree(numsteps-1, under.tree, opt.payoffs, rf, delta.t,
                             ups, downs+1)
  } else {
    # We have reached a leaf; return the option payoff
    return(opt.payoffs[downs+1])
  }
  under.up.value <- under.tree[downs+1,ups+1+1]
  under.down.value <- under.tree[downs+1+1,ups+1]
  H <- (up.value-down.value)/(under.up.value-under.down.value)
  B <- (under.up.value*down.value - under.down.value*up.value)/
    (under.up.value - under.down.value)*exp(-rf*delta.t)
  H*under.tree[downs+1,ups+1] + B
}

rf <- 0.03            # 3% risk-free rates
sigma <- 0.45         # 45% volatility
T <- 1.25             # 15-month option
n.steps <- 16         # number of steps total
delta.t <- T/n.steps  # interpolated time step size
u <- exp(sigma*sqrt(delta.t))   # up move
d <- exp(-sigma*sqrt(delta.t))  # down move
s0 <- 100             # initial stock price
K <- 80               # strike price

## This creates a full matrix with the tree -- and steps beyond the tree
## in the lower-right triangle of the matrix. Since this is an easy way
## to create the tree, we will just ignore the lower-right triangle.
underlier.tree <- s0*d^(0:n.steps)%*%t(u^(0:n.steps))
option.payoffs <- pmax(s0*u^(n.steps:0)*d^(0:n.steps) - K, 0)
price.tree(numsteps=n.steps, underlier.tree, option.payoffs, rf, delta.t)
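A quick plausibility check on the tree price above: for a European call, the CRR binomial price at 16 steps should already be close to the closed-form Black-Scholes value and converge to it as n.steps grows. The helper bs.call below is a standard Black-Scholes call formula and is not part of the original script; it is added only as an illustrative sketch.

## Illustrative add-on (not from the original script): standard
## Black-Scholes call price, used to sanity-check the tree price above.
bs.call <- function(s0, K, rf, sigma, T) {
  d1 <- (log(s0/K) + (rf + 0.5*sigma^2)*T) / (sigma*sqrt(T))
  d2 <- d1 - sigma*sqrt(T)
  s0*pnorm(d1) - K*exp(-rf*T)*pnorm(d2)
}
bs.call(s0, K, rf, sigma, T)  # compare with the 16-step tree price above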
#################################################################################################################
# Name:
#   GEOdb_CreadorTablas.r
# Author:
#   Escobedo M. A. Sof and D. Monzalvo Andrea G.
# Version:
#   v1
# Description:
#   The script queries GEO metadata (from the GEOmetadb SQLite file) for a
#   given organism name and builds an annotated metadata table.
# Input parameters:
#   -O or --Org             Name of the organism
#   -p or --pathSQLiteFile  Path of the SQLite file
#   -o or --outPath         Path where the output files will be deposited
#   -n or --nameOutFile     Name of the output file
# Output:
#   1) Metadata table (TSV)
#
# Examples:
#   Rscript /export/storage/users/aescobed/Systems/GEOdb/bin/GEOdb_CreadorTablas_v1.r -O 'Escherichia coli' -p /export/storage/users/aescobed/Systems/GEOdb/input/GEOmetadb.sqlite -o /export/storage/users/aescobed/Systems/GEOdb/output -n metadata_Ecoli.tsv
#################################################################################################################

message('====================================================================================================================')
message('\t\t\t\t\t|-|-|-|-|-|-|\tWELCOME TO GEOdbQuery\t|-|-|-|-|-|-|')

# Import libraries
message('\t\t\tSTEP 1: Importing libraries...')
suppressPackageStartupMessages(library("GEOmetadb"))
suppressPackageStartupMessages(library(httr))
suppressPackageStartupMessages(library('stringr'))
suppressPackageStartupMessages(library("argparse"))
suppressPackageStartupMessages(library("tidyverse"))
suppressPackageStartupMessages(library("reshape2"))
suppressPackageStartupMessages(library("plyr"))
suppressPackageStartupMessages(library("dplyr"))
suppressPackageStartupMessages(library('progress'))
message('\t\t\tDone!\n')

################################################## FUNCTIONS ####################################################

# Function 1: query the metadata directly from the SQLite file
funcionChidota <- function(geo_con, organism){
  query <- paste("SELECT gpl.organism,gpl.bioc_package, gpl.supplementary_file, gpl.technology, gpl.distribution, gpl.manufacturer, gse_gpl.gse, gpl.gpl, gse_gsm.gsm, gse.overall_design FROM gpl JOIN gse_gpl, gse_gsm, gse WHERE gpl.organism= '",organism,"' AND gse_gsm.gse=gse_gpl.gse AND gse_gpl.gpl=gpl.gpl AND gse.gse=gse_gpl.gse",sep="")
  tabla_iden <- dbGetQuery(geo_con, query)
  return(tabla_iden)
}

# Helper function 1: get the number of platforms associated with a gse
getMultiPlatCateSupport <- function(x, tableFull){
  lele <- length(unique(filter(tableFull, gse == x)$gpl))
  return(lele)
}

# Function 2: build the download metadata, download type and download boolean
getDownloadTypeModTable <- function(tableFC){
  uniqGSE <- unique(tableFC$gse)
  lenVecPlat <- sapply(uniqGSE, getMultiPlatCateSupport, tableFull = tableFC)
  columnCatPlat <- mapvalues(tableFC$gse, from = names(lenVecPlat),
                             to = as.numeric(as.vector(lenVecPlat)))
  tableFCnew <- tableFC %>%
    mutate(gseCatNP = ifelse(columnCatPlat == 1, 0, 1),
           gseSuSe = ifelse(grepl(x = tableFC$overall_design,
                                  pattern = 'Refer to individual Series'),
                            'B', 'A'))
  tableFCfinal <- unite(tableFCnew, col = 'download_type', gseCatNP:gseSuSe,
                        remove = TRUE, sep = '')
  tableFCfinal <- tableFCfinal %>%
    mutate(download_bool = ifelse(tableFCfinal$download_type %in% c('1A', '0A'),
                                  TRUE, FALSE))
  return(tableFCfinal)
}

# Helper function 2: scrape the full-table download size from the GEO accession HTML page
getSizeDownload <- function(gID){
  link <- GET(paste("https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=", gID, sep = ""))
  matchSize <- str_extract(content(link, 'text', encoding = "Latin1"),
                           'full table size <strong>\\d+')
  size <- unlist(strsplit(matchSize, "<strong>"))[2]
  row <- data.frame(ID = gID, download_size = size)
  return(row)
}

# Function 3: get the download sizes of the gsm entries of a table
createGsmDataSize <- function(tableData){
  dfSamples <- data.frame()
  gSamples <- unique(tableData$gsm)
  for (gSample in gSamples){
    rowSam <- getSizeDownload(gSample)
    dfSamples <- rbind(dfSamples, rowSam)
  }
  names(dfSamples) <- c('gsm', 'download_size_gsm')
  return(dfSamples)
}

# Helper function 3: scrape the taxID from the GEO accession HTML page
getTaxID <- function(gID){
  link <- GET(paste("https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=", gID, sep = ""))
  matchTaxId <- str_extract(content(link, 'text', encoding = "Latin1"),
                            '<a href="/Taxonomy/.*onmouseout')
  taxId <- unlist(strsplit(str_remove(matchTaxId, '^.*id='), '" '))[1]
  taxID <- as.double(taxId)
  row <- data.frame(ID = gID, taxon_id = taxID)
  return(row)
}

# Function 4: get the taxIDs of both gsm and gpl and compare them
createGsmGplDataTaxIDComp <- function(tableData){
  dfSamples <- data.frame()
  dfPlatforms <- data.frame()
  subsetData <- data.frame(gsm = tableData$gsm, gpl = tableData$gpl)
  gPlatforms <- unique(tableData$gpl)
  gSamples <- unique(tableData$gsm)
  for (gSample in gSamples){
    rowSam <- getTaxID(gSample)
    dfSamples <- rbind(dfSamples, rowSam)
  }
  names(dfSamples) <- c('gsm', 'taxon_id_gsm')
  for (gPlatform in gPlatforms){
    rowPlat <- getTaxID(gPlatform)
    dfPlatforms <- rbind(dfPlatforms, rowPlat)
  }
  names(dfPlatforms) <- c('gpl', 'taxon_id_gpl')
  df <- subsetData %>% left_join(dfSamples, by = 'gsm') %>% left_join(dfPlatforms, by = 'gpl')
  dfFinal <- df %>% mutate(comparation_tax_id = ifelse(taxon_id_gsm == taxon_id_gpl, 'T', 'F'))
  dfFinal <- dfFinal %>% select(-gpl)
  return(dfFinal)
}

# Function 5: run all phases, report progress and return the final metadata
getReportTable <- function (organism, geo_con){
  message('\t\t\t\tSTEP 4.1: Querying SQLite and filtering...')
  tableDataOrg <- funcionChidota(geo_con, organism) %>%
    filter(manufacturer %in% c('Affymetrix'))
  message('\t\t\t\tDone!\n')
  message('\t\t\t\tSTEP 4.2: Creating download metadata and boolean categories...')
  downloadMetaData <- getDownloadTypeModTable(tableDataOrg)
  message('\t\t\t\tDone!\n')
  message('\t\t\t\tSTEP 4.3: Getting gsm/gpl taxIDs, comparing them and creating dataframe...')
  taxIDData <- createGsmGplDataTaxIDComp(downloadMetaData)
  message('\t\t\t\tDone!\n')
  message('\t\t\t\tSTEP 4.4: Getting gsm sizes and creating dataframe...')
  sizeData <- createGsmDataSize(downloadMetaData)
  message('\t\t\t\tDone!\n')
  message('\t\t\t\tSTEP 4.5: Joining taxID and size dataframes to the main metadata table...')
  preReportDf <- downloadMetaData %>% left_join(taxIDData, by = 'gsm') %>% left_join(sizeData, by = 'gsm')
  message('\t\t\t\tDone!\n')
  message('\t\t\t\tSTEP 4.6: Making categories using distribution and bioc_package status...')
  reportDf <- preReportDf %>%
    mutate(bioconductor_disagree = ifelse(distribution %in% c('commercial') &
                                            bioc_package %in% c(NA), 'T', 'F'))
  message('\t\t\t\tDone!\n')
  return(reportDf)
}

################################################## MAIN CODE ##########################################################

# Start timing
start <- Sys.time()

# Create command-line arguments
message('\t\t\tSTEP 2: Creating arguments...')
parser <- ArgumentParser()
parser$add_argument("-O", "--Org", type = "character",
                    help = "Name of the organism", default = "Escherichia coli")
parser$add_argument("-p", "--pathSQLiteFile", type = "character",
                    help = 'Path of the SQLite file', default = 'GEOmetadb.sqlite')
parser$add_argument("-o", "--outPath", type = "character",
                    help = "Output path", default = getwd())
parser$add_argument("-n", "--nameOutFile", type = "character",
                    help = "Name of the output file",
                    default = paste('metaTableData', paste(Sys.Date(), '/', sep = ''), sep = '_'))
args <- parser$parse_args()
message('\t\t\tDone!\n')

# Connection
message('\t\t\tSTEP 3: Creating connection to SQLite...')
sqlfile <- file.path(args$pathSQLiteFile)
geo_con <- dbConnect(SQLite(), sqlfile)
message('\t\t\tDone!\n')

# Build the metadata for the organism
message(paste('\t\t\tSTEP 4: Processing metadata of', args$Org, '...'))
metadata <- getReportTable(args$Org, geo_con)
message('\t\t\tDone!\n')

# Save the metadata
message('\t\t\tSTEP 5: Saving metadata results...')
outputFinal <- paste(args$outPath, args$nameOutFile, sep = '/')
write.table(x = metadata, file = outputFinal, sep = '\t', quote = FALSE,
            col.names = TRUE, row.names = FALSE, na = "NA")
message('\t\t\tDone!\n')

end <- Sys.time()
timeTaken <- end - start
timeTaken

message('\t\t\t\t\t|-|-|-|-|-|-|\tThanks for using GEOdbQuery! Have a nice day!\t|-|-|-|-|-|-|')
message('====================================================================================================================')
using PhantomSegmentation
using Test
using TestSetExtensions
using DICOM
using DICOMUtils
He was known for his support of representative democracy and his populist style. For example, he would hold town halls and let constituents vote on motions to decide what he would do in Congress on their behalf. These meetings helped Bedell understand the problems of his constituents; as a result, he backed issues that were important to his farming constituency, such as waterway usage fees and production constraints.