(** * Logic: Logic in Coq *)
(*Require Export MoreCoq.
*)
(** Coq's built-in logic is very small: the only primitives are
[Inductive] definitions, universal quantification ([forall]), and
implication ([->]), while all the other familiar logical
connectives -- conjunction, disjunction, negation, existential
quantification, even equality -- can be encoded using just these.
This chapter explains the encodings and shows how the tactics
we've seen can be used to carry out standard forms of logical
reasoning involving these connectives.
*)
(* ########################################################### *)
(** * Propositions *)
(** In previous chapters, we have seen many examples of factual
claims (_propositions_) and ways of presenting evidence of their
truth (_proofs_). In particular, we have worked extensively with
_equality propositions_ of the form [e1 = e2], with
implications ([P -> Q]), and with quantified propositions
([forall x, P]).
*)
(** In Coq, the type of things that can (potentially)
be proven is [Prop]. *)
(** Here is an example of a provable proposition: *)
Check (3 = 3).
(* ===> Prop *)
(** Here is an example of an unprovable proposition: *)
Check (forall (n:nat), n = 2).
(* ===> Prop *)
(** Recall that [Check] asks Coq to tell us the type of the indicated
expression. *)
(* ########################################################### *)
(** * Proofs and Evidence *)
(** In Coq, propositions have the same status as other types, such as
[nat]. Just as the natural numbers [0], [1], [2], etc. inhabit
the type [nat], a Coq proposition [P] is inhabited by its
_proofs_. We will refer to such an inhabitant as a _proof term_,
a _proof object_, or _evidence_ for the truth of [P].
In Coq, when we state and then prove a lemma such as:
Lemma silly : 0 * 3 = 0.
Proof. reflexivity. Qed.
the tactics we use between the [Proof] and [Qed] keywords tell Coq
how to construct a proof term that inhabits the proposition. In
this case, the proposition [0 * 3 = 0] is justified by a
combination of the _definition_ of [mult], which says that [0 * 3]
_simplifies_ to just [0], and the _reflexive_ principle of
equality, which says that [0 = 0].
*)
(** *** *)
Lemma silly : 0 * 3 = 0.
Proof. reflexivity. Qed.
(** We can see which proof term Coq constructs for a given Lemma by
using the [Print] directive: *)
Print silly.
(* ===> silly = eq_refl : 0 * 3 = 0 *)
(** Here, the [eq_refl] proof term witnesses the equality. (More on
equality later!)*)
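(** In fact, we can write this proof term down directly, skipping
the tactics entirely (a definition of our own, not in the original
chapter); [0 * 3] computes to [0], so [eq_refl] suffices: *)
Definition silly' : 0 * 3 = 0 := eq_refl.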
(** ** Implications _are_ functions *)
(** Just as we can implement natural number multiplication as a
function:
[
mult : nat -> nat -> nat
]
the _proof term_ for an implication [P -> Q] is a _function_ that
takes evidence for [P] as input and produces evidence for [Q] as its
output.
*)
Lemma silly_implication : (1 + 1) = 2 -> 0 * 3 = 0.
Proof. intros H. reflexivity. Qed.
(** We can see that the proof term for the above lemma is indeed a
function: *)
Print silly_implication.
(* ===> silly_implication = fun _ : 1 + 1 = 2 => eq_refl
: 1 + 1 = 2 -> 0 * 3 = 0 *)
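(** Indeed, we can also write this function down directly as a
term, without tactics (our own sketch, mirroring the printed term
above): *)
Definition silly_implication' : (1 + 1) = 2 -> 0 * 3 = 0 :=
  fun _ : 1 + 1 = 2 => eq_refl.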
(** ** Defining propositions *)
(** Just as we can create user-defined inductive types (like the
lists, binary representations of natural numbers, etc., that we
have seen before), we can also create _user-defined_ propositions.
Question: How do you define the meaning of a proposition?
*)
(** *** *)
(** The meaning of a proposition is given by _rules_ and _definitions_
that say how to construct _evidence_ for the truth of the
proposition from other evidence.
- Typically, rules are defined _inductively_, just like any other
datatype.
- Sometimes a proposition is declared to be true without
substantiating evidence. Such propositions are called _axioms_;
an example appears just below.
In this and subsequent chapters, we'll see in more detail how
these proof terms work.
*)
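(** For instance (an illustration of our own, not part of the
original chapter), the [Axiom] keyword declares a proposition to
be true without evidence; functional extensionality is a standard
example of a principle that Coq cannot prove internally: *)
Axiom functional_extensionality : forall (X Y : Type) (f g : X -> Y),
  (forall x, f x = g x) -> f = g.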
(* ########################################################### *)
(** * Conjunction (Logical "and") *)
(** The logical conjunction of propositions [P] and [Q] can be
represented using an [Inductive] definition with one
constructor. *)
Inductive and (P Q : Prop) : Prop :=
conj : P -> Q -> (and P Q).
(** The intuition behind this definition is simple: to
construct evidence for [and P Q], we must provide evidence
for [P] and evidence for [Q]. More precisely:
- [conj p q] can be taken as evidence for [and P Q] if [p]
is evidence for [P] and [q] is evidence for [Q]; and
- this is the _only_ way to give evidence for [and P Q] --
that is, if someone gives us evidence for [and P Q], we
know it must have the form [conj p q], where [p] is
evidence for [P] and [q] is evidence for [Q].
Since we'll be using conjunction a lot, let's introduce a more
familiar-looking infix notation for it. *)
Notation "P /\ Q" := (and P Q) : type_scope.
(** (The [type_scope] annotation tells Coq that this notation
will be appearing in propositions, not values.) *)
(** Consider the "type" of the constructor [conj]: *)
Check conj.
(* ===> forall P Q : Prop, P -> Q -> P /\ Q *)
(** Notice that it takes 4 inputs -- namely the propositions [P]
and [Q] and evidence for [P] and [Q] -- and returns as output the
evidence of [P /\ Q]. *)
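(** For instance, we can apply [conj] by hand to build such a
proof term directly (a sketch of our own, not in the original
text); note that [mult 2 2] computes to [4], so [eq_refl] works
for the second component: *)
Definition and_example_term : (0 = 0) /\ (4 = mult 2 2) :=
  conj (0 = 0) (4 = mult 2 2) eq_refl eq_refl.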
(** ** "Introducing" conjunctions *)
(** Besides the elegance of building everything up from a tiny
foundation, what's nice about defining conjunction this way is
that we can prove statements involving conjunction using the
tactics that we already know. For example, if the goal statement
is a conjunction, we can prove it by applying the single
constructor [conj], which (as can be seen from the type of [conj])
solves the current goal and leaves the two parts of the
conjunction as subgoals to be proved separately. *)
Theorem and_example :
(0 = 0) /\ (4 = mult 2 2).
Proof.
apply conj.
(*Case "left".*) reflexivity.
(*Case "right".*) reflexivity. Qed.
(** Just for convenience, we can use the tactic [split] as a shorthand for
[apply conj]. *)
Theorem and_example' :
(0 = 0) /\ (4 = mult 2 2).
Proof.
split.
(*Case "left".*) reflexivity.
(*Case "right".*) reflexivity. Qed.
(** ** "Eliminating" conjunctions *)
(** Conversely, the [destruct] tactic can be used to take a
conjunction hypothesis in the context, calculate what evidence
must have been used to build it, and add variables representing
this evidence to the proof context. *)
Theorem proj1 : forall P Q : Prop,
P /\ Q -> P.
Proof.
intros P Q H.
destruct H as [HP HQ].
apply HP. Qed.
(** **** Exercise: 1 star, optional (proj2) *)
Theorem proj2 : forall P Q : Prop,
P /\ Q -> Q.
Proof.
intros. destruct H as [HP HQ].
apply HQ.
Qed.
(** [] *)
Theorem and_commut : forall P Q : Prop,
P /\ Q -> Q /\ P.
Proof.
(* WORKED IN CLASS *)
intros P Q H.
destruct H as [HP HQ].
split.
(*Case "left".*) apply HQ.
(*Case "right".*) apply HP. Qed.
(** **** Exercise: 2 stars (and_assoc) *)
(** In the following proof, notice how the _nested pattern_ in the
[destruct] breaks the hypothesis [H : P /\ (Q /\ R)] down into
[HP: P], [HQ : Q], and [HR : R]. Finish the proof from there: *)
Theorem and_assoc : forall P Q R : Prop,
P /\ (Q /\ R) -> (P /\ Q) /\ R.
Proof.
intros P Q R H.
destruct H as [HP [HQ HR]].
split. split. apply HP. apply HQ. apply HR.
Qed.
(** [] *)
(* ###################################################### *)
(** * Iff *)
(** The handy "if and only if" connective is just the conjunction of
two implications. *)
Definition iff (P Q : Prop) := (P -> Q) /\ (Q -> P).
Notation "P <-> Q" := (iff P Q)
(at level 95, no associativity)
: type_scope.
Theorem iff_implies : forall P Q : Prop,
(P <-> Q) -> P -> Q.
Proof.
intros P Q H.
destruct H as [HAB HBA]. apply HAB. Qed.
Theorem iff_sym : forall P Q : Prop,
(P <-> Q) -> (Q <-> P).
Proof.
(* WORKED IN CLASS *)
intros P Q H.
destruct H as [HAB HBA].
split.
(*Case "->".*) apply HBA.
(*Case "<-".*) apply HAB. Qed.
(** **** Exercise: 1 star, optional (iff_properties) *)
(** Using the above proof that [<->] is symmetric ([iff_sym]) as
a guide, prove that it is also reflexive and transitive. *)
Theorem iff_refl : forall P : Prop,
P <-> P.
Proof.
intros. split. intros. apply H.
intros. apply H.
Qed.
Theorem iff_trans : forall P Q R : Prop,
(P <-> Q) -> (Q <-> R) -> (P <-> R).
Proof.
intros. split. destruct H. destruct H0.
intros. apply H0. apply H. apply H3.
intros. destruct H. destruct H0.
apply H2. apply H3. apply H1.
Qed.
(** Hint: If you have an iff hypothesis in the context, you can use
[inversion] to break it into two separate implications. (Think
about why this works.) *)
(** [] *)
(** Some of Coq's tactics treat [iff] statements specially, thus
avoiding the need for some low-level manipulation when reasoning
with them. In particular, [rewrite] can be used with [iff]
statements, not just equalities. *)
(* ############################################################ *)
(** * Disjunction (Logical "or") *)
(** ** Implementing disjunction *)
(** Disjunction ("logical or") can also be defined as an
inductive proposition. *)
Inductive or (P Q : Prop) : Prop :=
| or_introl : P -> or P Q
| or_intror : Q -> or P Q.
Notation "P \/ Q" := (or P Q) : type_scope.
(** Consider the "type" of the constructor [or_introl]: *)
Check or_introl.
(* ===> forall P Q : Prop, P -> P \/ Q *)
(** It takes 3 inputs, namely the propositions [P], [Q] and
evidence of [P], and returns, as output, the evidence of [P \/ Q].
Next, look at the type of [or_intror]: *)
Check or_intror.
(* ===> forall P Q : Prop, Q -> P \/ Q *)
(** It is like [or_introl] but it requires evidence of [Q]
instead of evidence of [P]. *)
(** Intuitively, there are two ways of giving evidence for [P \/ Q]:
- give evidence for [P] (and say that it is [P] you are giving
evidence for -- this is the function of the [or_introl]
constructor), or
- give evidence for [Q], tagged with the [or_intror]
constructor. *)
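(** As a direct-term sketch of our own, here is evidence for a
disjunction, built by applying the appropriate constructor to
evidence for the chosen side: *)
Definition or_example_term : 0 = 0 \/ 1 = 2 :=
  or_introl (0 = 0) (1 = 2) eq_refl.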
(** *** *)
(** Since [P \/ Q] has two constructors, doing [destruct] on a
hypothesis of type [P \/ Q] yields two subgoals. *)
Theorem or_commut : forall P Q : Prop,
P \/ Q -> Q \/ P.
Proof.
intros P Q H.
destruct H as [HP | HQ].
(*Case "left".*) apply or_intror. apply HP.
(*Case "right".*) apply or_introl. apply HQ. Qed.
(** From here on, we'll use the shorthand tactics [left] and [right]
in place of [apply or_introl] and [apply or_intror]. *)
Theorem or_commut' : forall P Q : Prop,
P \/ Q -> Q \/ P.
Proof.
intros P Q H.
destruct H as [HP | HQ].
(*Case "left".*) right. apply HP.
(*Case "right".*) left. apply HQ. Qed.
Theorem or_distributes_over_and_1 : forall P Q R : Prop,
P \/ (Q /\ R) -> (P \/ Q) /\ (P \/ R).
Proof.
intros P Q R. intros H. destruct H as [HP | [HQ HR]].
(*Case "left".*) split.
(*SCase "left".*) left. apply HP.
(*SCase "right".*) left. apply HP.
(*Case "right".*) split.
(*SCase "left".*) right. apply HQ.
(*SCase "right".*) right. apply HR. Qed.
(** **** Exercise: 2 stars (or_distributes_over_and_2) *)
Theorem or_distributes_over_and_2 : forall P Q R : Prop,
(P \/ Q) /\ (P \/ R) -> P \/ (Q /\ R).
Proof.
intros. inversion H as [HPQ HPR]. inversion HPQ. left. apply H0.
inversion HPR. left. apply H1.
right. split. apply H0. apply H1.
Qed.
(** [] *)
(** **** Exercise: 1 star, optional (or_distributes_over_and) *)
Theorem or_distributes_over_and : forall P Q R : Prop,
P \/ (Q /\ R) <-> (P \/ Q) /\ (P \/ R).
Proof.
intros. split. apply or_distributes_over_and_1.
apply or_distributes_over_and_2.
Qed.
(** [] *)
(* ################################################### *)
(** ** Relating [/\] and [\/] with [andb] and [orb] *)
(** We've already seen several places where analogous structures
can be found in Coq's computational ([Type]) and logical ([Prop])
worlds. Here is one more: the boolean operators [andb] and [orb]
are clearly analogs of the logical connectives [/\] and [\/].
This analogy can be made more precise by the following theorems,
which show how to translate knowledge about [andb] and [orb]'s
behaviors on certain inputs into propositional facts about those
inputs. *)
Theorem andb_prop : forall b c,
andb b c = true -> b = true /\ c = true.
Proof.
(* WORKED IN CLASS *)
intros b c H.
destruct b.
(*Case "b = true".*) destruct c.
(*SCase "c = true".*) apply conj. reflexivity. reflexivity.
(*SCase "c = false".*) inversion H.
(*Case "b = false".*) inversion H. Qed.
Theorem andb_true_intro : forall b c,
b = true /\ c = true -> andb b c = true.
Proof.
(* WORKED IN CLASS *)
intros b c H.
destruct H.
rewrite H. rewrite H0. reflexivity. Qed.
(** **** Exercise: 2 stars, optional (andb_false) *)
Theorem andb_false : forall b c,
andb b c = false -> b = false \/ c = false.
Proof.
intros. destruct b.
destruct c. inversion H.
right. reflexivity.
left. reflexivity.
Qed.
(** **** Exercise: 2 stars, optional (orb_false) *)
Theorem orb_prop : forall b c,
orb b c = true -> b = true \/ c = true.
Proof.
intros. destruct b. left. reflexivity.
destruct c. right. reflexivity.
left. inversion H.
Qed.
(** **** Exercise: 2 stars, optional (orb_false_elim) *)
Theorem orb_false_elim : forall b c,
orb b c = false -> b = false /\ c = false.
Proof.
intros. destruct b. destruct c. inversion H.
split. inversion H. reflexivity.
destruct c. split. reflexivity. inversion H.
split. auto. auto.
Qed.
(** [] *)
(* ################################################### *)
(** * Falsehood *)
(** Logical falsehood can be represented in Coq as an inductively
defined proposition with no constructors. *)
Inductive False : Prop := .
(** Intuition: [False] is a proposition for which there is no way
to give evidence. *)
(** Since [False] has no constructors, inverting an assumption
of type [False] always yields zero subgoals, allowing us to
immediately prove any goal. *)
Theorem False_implies_nonsense :
False -> 2 + 2 = 5.
Proof.
intros contra.
inversion contra. Qed.
(** How does this work? The [inversion] tactic breaks [contra] into
each of its possible cases, and yields a subgoal for each case.
As [contra] is evidence for [False], it has _no_ possible cases,
hence, there are no possible subgoals and the proof is done. *)
(** *** *)
(** Conversely, the only way to prove [False] is if there is already
something nonsensical or contradictory in the context: *)
Theorem nonsense_implies_False :
2 + 2 = 5 -> False.
Proof.
intros contra.
inversion contra. Qed.
(** Actually, the proof of [False_implies_nonsense] doesn't
have anything to do with the specific nonsensical thing being
proved; it can easily be generalized to work for an
arbitrary [P]: *)
Theorem ex_falso_quodlibet : forall (P:Prop),
False -> P.
Proof.
(* WORKED IN CLASS *)
intros P contra.
inversion contra. Qed.
(** The Latin _ex falso quodlibet_ means, literally, "from
falsehood follows whatever you please." This theorem is also
known as the _principle of explosion_. *)
(* #################################################### *)
(** ** Truth *)
(** Since we have defined falsehood in Coq, one might wonder whether
it is possible to define truth in the same way. We can. *)
(** **** Exercise: 2 stars, advanced (True) *)
(** Define [True] as another inductively defined proposition. (The
intuition is that [True] should be a proposition for which it is
trivial to give evidence.) *)
Inductive True : Prop :=
I : True.
(** [] *)
(** However, unlike [False], which we'll use extensively, [True] is
used fairly rarely. By itself, it is trivial (and therefore
uninteresting) to prove as a goal, and it carries no useful
information as a hypothesis. But it can be useful when defining
complex [Prop]s using conditionals, or as a parameter to
higher-order [Prop]s. *)
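(** For example (a small illustration of our own, not from the
original text), [True] can serve as the trivially satisfiable
branch of a [Prop] defined by pattern matching: *)
Definition is_zero (n : nat) : Prop :=
  match n with
  | O => True
  | S _ => False
  end.
Example zero_is_zero : is_zero 0.
Proof. simpl. apply I. Qed.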
(* #################################################### *)
(** * Negation *)
(** The logical complement of a proposition [P] is written [not
P] or, for shorthand, [~P]: *)
Definition not (P:Prop) := P -> False.
(** The intuition is that, if [P] is not true, then anything at
all (even [False]) follows from assuming [P]. *)
Notation "~ x" := (not x) : type_scope.
Check not.
(* ===> Prop -> Prop *)
(** It takes a little practice to get used to working with
negation in Coq. Even though you can see perfectly well why
something is true, it can be a little hard at first to get things
into the right configuration so that Coq can see it! Here are
proofs of a few familiar facts about negation to get you warmed
up. *)
Theorem not_False :
~ False.
Proof.
unfold not. intros H. inversion H. Qed.
(** *** *)
Theorem contradiction_implies_anything : forall P Q : Prop,
(P /\ ~P) -> Q.
Proof.
(* WORKED IN CLASS *)
intros P Q H. destruct H as [HP HNA]. unfold not in HNA.
apply HNA in HP. inversion HP. Qed.
Theorem double_neg : forall P : Prop,
P -> ~~P.
Proof.
(* WORKED IN CLASS *)
intros P H. unfold not. intros G. apply G. apply H. Qed.
(** **** Exercise: 2 stars, advanced (double_neg_inf) *)
(** Write an informal proof of [double_neg]:
_Theorem_: [P] implies [~~P], for any proposition [P].
_Proof_:
(* FILL IN HERE *)
[]
*)
(** **** Exercise: 2 stars (contrapositive) *)
Theorem contrapositive : forall P Q : Prop,
(P -> Q) -> (~Q -> ~P).
Proof.
intros. unfold not in H0. unfold not. intros. apply H0. apply H.
apply H1.
Qed.
(** [] *)
(** **** Exercise: 1 star (not_both_true_and_false) *)
Theorem not_both_true_and_false : forall P : Prop,
~ (P /\ ~P).
Proof.
intros. unfold not. intros. destruct H.
apply H0. apply H.
Qed.
(** [] *)
(** **** Exercise: 1 star, advanced (informal_not_PNP) *)
(** Write an informal proof (in English) of the proposition [forall P
: Prop, ~(P /\ ~P)]. *)
(* FILL IN HERE *)
(** [] *)
(** *** Constructive logic *)
(** Note that some theorems that are true in classical logic are _not_
provable in Coq's (constructive) logic. E.g., let's look at how
this proof gets stuck... *)
Theorem classic_double_neg : forall P : Prop,
~~P -> P.
Proof.
(* WORKED IN CLASS *)
intros P H. unfold not in H.
(* But now what? There is no way to "invent" evidence for [~P]
from evidence for [P]. *)
Abort.
(** **** Exercise: 5 stars, advanced, optional (classical_axioms) *)
(** For those who like a challenge, here is an exercise
taken from the Coq'Art book (p. 123). The following five
statements are often considered as characterizations of
classical logic (as opposed to constructive logic, which is
what is "built in" to Coq). We can't prove them in Coq, but
we can consistently add any one of them as an unproven axiom
if we wish to work in classical logic. Prove that these five
propositions are equivalent. *)
Definition peirce := forall P Q: Prop,
((P->Q)->P)->P.
Definition classic := forall P:Prop,
~~P -> P.
Definition excluded_middle := forall P:Prop,
P \/ ~P.
Definition de_morgan_not_and_not := forall P Q:Prop,
~(~P /\ ~Q) -> P\/Q.
Definition implies_to_or := forall P Q:Prop,
(P->Q) -> (~P\/Q).
(** [] *)
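(** To give the flavor of such proofs -- this one direction is a
sketch of our own, not a solution to the whole exercise -- here is
a proof that [excluded_middle] implies [classic]: *)
Theorem excluded_middle_implies_classic :
  excluded_middle -> classic.
Proof.
  unfold excluded_middle, classic.
  intros EM P HNN.
  destruct (EM P) as [HP | HNP].
  apply HP.
  apply ex_falso_quodlibet. apply HNN. apply HNP.
Qed.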
(** **** Exercise: 3 stars (excluded_middle_irrefutable) *)
(** This theorem implies that it is always safe to add a decidability
axiom (i.e. an instance of excluded middle) for any _particular_ Prop [P].
Why? Because we cannot prove the negation of such an axiom; if we could,
we would have both [~ (P \/ ~P)] and [~ ~ (P \/ ~P)], a contradiction. *)
Theorem excluded_middle_irrefutable: forall (P:Prop), ~ ~ (P \/ ~ P).
Proof.
intros. unfold not. intros. apply H. right. intros. apply H.
left. apply H0.
Qed.
(* ########################################################## *)
(** ** Inequality *)
(** Saying [x <> y] is just the same as saying [~(x = y)]. *)
Notation "x <> y" := (~ (x = y)) : type_scope.
(** Since inequality involves a negation, it again requires
a little practice to be able to work with it fluently. Here
is one very useful trick. If you are trying to prove a goal
that is nonsensical (e.g., the goal state is [false = true]),
apply the lemma [ex_falso_quodlibet] to change the goal to
[False]. This makes it easier to use assumptions of the form
[~P] that are available in the context -- in particular,
assumptions of the form [x<>y]. *)
Theorem not_false_then_true : forall b : bool,
b <> false -> b = true.
Proof.
intros b H. destruct b.
(*Case "b = true".*) reflexivity.
(*Case "b = false".*)
unfold not in H.
apply ex_falso_quodlibet.
apply H. reflexivity. Qed.
(** *** *)
(** **** Exercise: 2 stars (false_beq_nat) *)
Fixpoint beq_nat (n m : nat) : bool :=
  match n with
  | O => match m with
         | O => true
         | S m' => false
         end
  | S n' => match m with
            | O => false
            | S m' => beq_nat n' m'
            end
  end.
Theorem false_beq_nat : forall n m : nat,
n <> m ->
beq_nat n m = false.
Proof.
induction n. destruct m.
intros H. apply ex_falso_quodlibet. apply H. reflexivity.
intros H. reflexivity.
destruct m.
intros H. reflexivity.
simpl. intros H. apply IHn. intros Heq. apply H.
apply f_equal. apply Heq.
Qed.
(** [] *)
(** **** Exercise: 2 stars, optional (beq_nat_false) *)
Theorem beq_nat_false : forall n m,
beq_nat n m = false -> n <> m.
Proof.
induction n. unfold not. intros. destruct m. inversion H.
inversion H0.
unfold not. intros. destruct m. inversion H0. inversion H. inversion H0.
revert H2 H3. apply IHn.
Qed.
(** [] *)
(** $Date: 2014-12-31 11:17:56 -0500 (Wed, 31 Dec 2014) $ *)
|
import pandas as pd
import numpy as np
# dataset source:
# https://www.kaggle.com/ucffool/amazon-sales-rank-data-for-print-and-kindle-books/downloads/amazon_com.csv/3
src = "../books.csv"
dst = "../books_fitted.csv"
df = pd.read_csv(src, usecols=[0, 3, 4, 5], nrows=10000)
df['STORAGE'] = np.random.randint(1, 100, size=10000)
df['SELLING PRICE'] = np.around(
    np.random.uniform(1, 50, size=10000), decimals=2)
# purchase price is 50-100% of the selling price
df['PURCHASE PRICE'] = np.around(
    df['SELLING PRICE'] * (np.random.rand(10000) * 0.5 + 0.5), decimals=2)
# drop_duplicates returns a new DataFrame; assign it back so the
# deduplication actually takes effect
df = df.drop_duplicates(subset=['ASIN'])
df.to_csv(dst, index=False, header=False)
|
/-
Copyright (c) 2020 Aaron Anderson. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Aaron Anderson
! This file was ported from Lean 3 source module ring_theory.simple_module
! leanprover-community/mathlib commit cce7f68a7eaadadf74c82bbac20721cdc03a1cc1
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.LinearAlgebra.Isomorphisms
import Mathbin.Order.JordanHolder
/-!
# Simple Modules
## Main Definitions
* `is_simple_module` indicates that a module has no proper submodules
(the only submodules are `⊥` and `⊤`).
* `is_semisimple_module` indicates that every submodule has a complement, or equivalently,
the module is a direct sum of simple modules.
* A `division_ring` structure on the endomorphism ring of a simple module.
## Main Results
* Schur's Lemma: `bijective_or_eq_zero` shows that a linear map between simple modules
is either bijective or 0, leading to a `division_ring` structure on the endomorphism ring.
## TODO
* Artin-Wedderburn Theory
* Unify with the work on Schur's Lemma in a category theory context
-/
variable (R : Type _) [Ring R] (M : Type _) [AddCommGroup M] [Module R M]
/-- A module is simple when it has only two submodules, `⊥` and `⊤`. -/
abbrev IsSimpleModule :=
IsSimpleOrder (Submodule R M)
#align is_simple_module IsSimpleModule
/-- A module is semisimple when every submodule has a complement, or equivalently, the module
is a direct sum of simple modules. -/
abbrev IsSemisimpleModule :=
ComplementedLattice (Submodule R M)
#align is_semisimple_module IsSemisimpleModule
-- Making this an instance causes the linter to complain of "dangerous instances"
theorem IsSimpleModule.nontrivial [IsSimpleModule R M] : Nontrivial M :=
⟨⟨0, by
have h : (⊥ : Submodule R M) ≠ ⊤ := bot_ne_top
contrapose! h
ext
simp [Submodule.mem_bot, Submodule.mem_top, h x]⟩⟩
#align is_simple_module.nontrivial IsSimpleModule.nontrivial
variable {R} {M} {m : Submodule R M} {N : Type _} [AddCommGroup N] [Module R N]
theorem IsSimpleModule.congr (l : M ≃ₗ[R] N) [IsSimpleModule R N] : IsSimpleModule R M :=
(Submodule.orderIsoMapComap l).IsSimpleOrder
#align is_simple_module.congr IsSimpleModule.congr
theorem isSimpleModule_iff_isAtom : IsSimpleModule R m ↔ IsAtom m :=
by
rw [← Set.isSimpleOrder_Iic_iff_isAtom]
apply OrderIso.isSimpleOrder_iff
exact Submodule.MapSubtype.relIso m
#align is_simple_module_iff_is_atom isSimpleModule_iff_isAtom
theorem isSimpleModule_iff_isCoatom : IsSimpleModule R (M ⧸ m) ↔ IsCoatom m :=
by
rw [← Set.isSimpleOrder_Ici_iff_isCoatom]
apply OrderIso.isSimpleOrder_iff
exact Submodule.comapMkQRelIso m
#align is_simple_module_iff_is_coatom isSimpleModule_iff_isCoatom
theorem covby_iff_quot_is_simple {A B : Submodule R M} (hAB : A ≤ B) :
A ⋖ B ↔ IsSimpleModule R (B ⧸ Submodule.comap B.Subtype A) :=
by
set f : Submodule R B ≃o Set.Iic B := Submodule.MapSubtype.relIso B with hf
rw [covby_iff_coatom_Iic hAB, isSimpleModule_iff_isCoatom, ← OrderIso.isCoatom_iff f, hf]
simp [-OrderIso.isCoatom_iff, Submodule.MapSubtype.relIso, Submodule.map_comap_subtype,
inf_eq_right.2 hAB]
#align covby_iff_quot_is_simple covby_iff_quot_is_simple
namespace IsSimpleModule
variable [hm : IsSimpleModule R m]
@[simp]
theorem isAtom : IsAtom m :=
isSimpleModule_iff_isAtom.1 hm
#align is_simple_module.is_atom IsSimpleModule.isAtom
end IsSimpleModule
theorem is_semisimple_of_supₛ_simples_eq_top
(h : supₛ { m : Submodule R M | IsSimpleModule R m } = ⊤) : IsSemisimpleModule R M :=
complementedLattice_of_supₛ_atoms_eq_top (by simp_rw [← h, isSimpleModule_iff_isAtom])
#align is_semisimple_of_Sup_simples_eq_top is_semisimple_of_supₛ_simples_eq_top
namespace IsSemisimpleModule
variable [IsSemisimpleModule R M]
theorem supₛ_simples_eq_top : supₛ { m : Submodule R M | IsSimpleModule R m } = ⊤ :=
by
simp_rw [isSimpleModule_iff_isAtom]
exact supₛ_atoms_eq_top
#align is_semisimple_module.Sup_simples_eq_top IsSemisimpleModule.supₛ_simples_eq_top
instance is_semisimple_submodule {m : Submodule R M} : IsSemisimpleModule R m :=
haveI f : Submodule R m ≃o Set.Iic m := Submodule.MapSubtype.relIso m
f.complemented_lattice_iff.2 IsModularLattice.complementedLattice_Iic
#align is_semisimple_module.is_semisimple_submodule IsSemisimpleModule.is_semisimple_submodule
end IsSemisimpleModule
theorem is_semisimple_iff_top_eq_supₛ_simples :
supₛ { m : Submodule R M | IsSimpleModule R m } = ⊤ ↔ IsSemisimpleModule R M :=
⟨is_semisimple_of_supₛ_simples_eq_top, by
intro
exact IsSemisimpleModule.supₛ_simples_eq_top⟩
#align is_semisimple_iff_top_eq_Sup_simples is_semisimple_iff_top_eq_supₛ_simples
namespace LinearMap
theorem injective_or_eq_zero [IsSimpleModule R M] (f : M →ₗ[R] N) : Function.Injective f ∨ f = 0 :=
by
rw [← ker_eq_bot, ← ker_eq_top]
apply eq_bot_or_eq_top
#align linear_map.injective_or_eq_zero LinearMap.injective_or_eq_zero
theorem injective_of_ne_zero [IsSimpleModule R M] {f : M →ₗ[R] N} (h : f ≠ 0) :
Function.Injective f :=
f.injective_or_eq_zero.resolve_right h
#align linear_map.injective_of_ne_zero LinearMap.injective_of_ne_zero
theorem surjective_or_eq_zero [IsSimpleModule R N] (f : M →ₗ[R] N) :
Function.Surjective f ∨ f = 0 :=
by
rw [← range_eq_top, ← range_eq_bot, or_comm']
apply eq_bot_or_eq_top
#align linear_map.surjective_or_eq_zero LinearMap.surjective_or_eq_zero
theorem surjective_of_ne_zero [IsSimpleModule R N] {f : M →ₗ[R] N} (h : f ≠ 0) :
Function.Surjective f :=
f.surjective_or_eq_zero.resolve_right h
#align linear_map.surjective_of_ne_zero LinearMap.surjective_of_ne_zero
/-- **Schur's Lemma** for linear maps between (possibly distinct) simple modules -/
theorem bijective_or_eq_zero [IsSimpleModule R M] [IsSimpleModule R N] (f : M →ₗ[R] N) :
Function.Bijective f ∨ f = 0 := by
by_cases h : f = 0
· right
exact h
exact Or.intro_left _ ⟨injective_of_ne_zero h, surjective_of_ne_zero h⟩
#align linear_map.bijective_or_eq_zero LinearMap.bijective_or_eq_zero
theorem bijective_of_ne_zero [IsSimpleModule R M] [IsSimpleModule R N] {f : M →ₗ[R] N} (h : f ≠ 0) :
Function.Bijective f :=
f.bijective_or_eq_zero.resolve_right h
#align linear_map.bijective_of_ne_zero LinearMap.bijective_of_ne_zero
theorem isCoatom_ker_of_surjective [IsSimpleModule R N] {f : M →ₗ[R] N}
(hf : Function.Surjective f) : IsCoatom f.ker :=
by
rw [← isSimpleModule_iff_isCoatom]
exact IsSimpleModule.congr (f.quot_ker_equiv_of_surjective hf)
#align linear_map.is_coatom_ker_of_surjective LinearMap.isCoatom_ker_of_surjective
/-- Schur's Lemma makes the endomorphism ring of a simple module a division ring. -/
noncomputable instance Module.End.divisionRing [DecidableEq (Module.End R M)] [IsSimpleModule R M] :
DivisionRing (Module.End R M) :=
{ (Module.End.ring : Ring (Module.End R M)) with
inv := fun f =>
if h : f = 0 then 0
else
LinearMap.inverse f (Equiv.ofBijective _ (bijective_of_ne_zero h)).invFun
(Equiv.ofBijective _ (bijective_of_ne_zero h)).left_inv
(Equiv.ofBijective _ (bijective_of_ne_zero h)).right_inv
exists_pair_ne :=
⟨0, 1, by
haveI := IsSimpleModule.nontrivial R M
have h := exists_pair_ne M
contrapose! h
intro x y
simp_rw [ext_iff, one_apply, zero_apply] at h
rw [← h x, h y]⟩
mul_inv_cancel := by
intro a a0
change a * dite _ _ _ = 1
ext
rw [dif_neg a0, mul_eq_comp, one_apply, comp_apply]
exact (Equiv.ofBijective _ (bijective_of_ne_zero a0)).right_inv x
inv_zero := dif_pos rfl }
#align module.End.division_ring Module.End.divisionRing
end LinearMap
instance jordanHolderModule : JordanHolderLattice (Submodule R M)
where
IsMaximal := (· ⋖ ·)
lt_of_isMaximal x y := Covby.lt
sup_eq_of_isMaximal x y z hxz hyz := Wcovby.sup_eq hxz.Wcovby hyz.Wcovby
isMaximal_inf_left_of_isMaximal_sup A B := inf_covby_of_covby_sup_of_covby_sup_left
Iso X Y := Nonempty <| (X.2 ⧸ X.1.comap X.2.Subtype) ≃ₗ[R] Y.2 ⧸ Y.1.comap Y.2.Subtype
iso_symm := fun A B ⟨f⟩ => ⟨f.symm⟩
iso_trans := fun A B C ⟨f⟩ ⟨g⟩ => ⟨f.trans g⟩
second_iso A B h :=
⟨by
rw [sup_comm, inf_comm]
exact (LinearMap.quotientInfEquivSupQuotient B A).symm⟩
#align jordan_holder_module jordanHolderModule
|
(*
Title: Inverse.thy
Author: Jose Divasón <jose.divasonm at unirioja.es>
Author: Jesús Aransay <jesus-maria.aransay at unirioja.es>
*)
header{*Inverse of a matrix using the Gauss Jordan algorithm*}
theory Inverse
imports
Gauss_Jordan_PA
begin
subsection{*Several properties*}
text{*Properties about the Gauss Jordan algorithm, reduced row echelon form, rank, the identity matrix and invertibility*}
lemma rref_id_implies_invertible:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
assumes Gauss_mat_1: "Gauss_Jordan A = mat 1"
shows "invertible A"
proof -
obtain P where P: "invertible P" and PA: "Gauss_Jordan A = P ** A" using invertible_Gauss_Jordan[of A] by blast
have "A = mat 1 ** A" unfolding matrix_mul_lid ..
also have "... = (matrix_inv P ** P) ** A" using P invertible_def matrix_inv_unique by metis
also have "... = (matrix_inv P) ** (P ** A)" by (metis PA assms calculation matrix_eq matrix_vector_mul_assoc matrix_vector_mul_lid)
also have "... = (matrix_inv P) ** mat 1" unfolding PA[symmetric] Gauss_mat_1 ..
also have "... = (matrix_inv P)" unfolding matrix_mul_rid ..
finally have "A = (matrix_inv P)" .
thus ?thesis using P unfolding invertible_def using matrix_inv_unique by blast
qed
text{*In the following case, nrows is equivalent to ncols because we are working with a square matrix*}
lemma full_rank_implies_invertible:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
assumes rank_n: "rank A = nrows A"
shows "invertible A"
proof (unfold invertible_left_inverse[of A] matrix_left_invertible_ker, clarify)
fix x
assume Ax: "A *v x = 0"
have rank_eq_card_n: "rank A = CARD('n)" using rank_n unfolding nrows_def .
have "vec.dim (null_space A)=0" unfolding dim_null_space unfolding rank_eq_card_n dimension_vector by simp
hence "null_space A = {0}" using vec.dim_zero_eq using Ax null_space_def by auto
thus "x = 0" unfolding null_space_def using Ax by blast
qed
lemma invertible_implies_full_rank:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
assumes inv_A: "invertible A"
shows "rank A = nrows A"
proof -
have "(\<forall>x. A *v x = 0 \<longrightarrow> x = 0)" using inv_A unfolding invertible_left_inverse[unfolded matrix_left_invertible_ker] .
hence null_space_eq_0: "(null_space A) = {0}" unfolding null_space_def using matrix_vector_zero by fast
have dim_null_space: "vec.dim (null_space A) = 0" unfolding vec.dim_def
by (rule someI2[of _"0"], rule exI[of _ "{}"], simp add: vec.independent_empty null_space_eq_0,
metis card_empty empty_subsetI null_space_eq_0 vec.span_empty vec.spanning_subset_independent)
show ?thesis using rank_nullity_theorem_matrices[of A] unfolding dim_null_space rank_eq_dim_col_space nrows_def
unfolding col_space_eq unfolding ncols_def by simp
qed
definition id_upt_k :: "'a::{zero, one}^'n::{mod_type}^'n::{mod_type} \<Rightarrow> nat => bool"
where "id_upt_k A k = (\<forall>i j. to_nat i < k \<and> to_nat j < k \<longrightarrow> ((i = j \<longrightarrow> A $ i $ j = 1) \<and> (i \<noteq> j \<longrightarrow> A $ i $ j = 0)))"
lemma id_upt_nrows_mat_1:
assumes "id_upt_k A (nrows A)"
shows "A = mat 1"
unfolding mat_def apply vector using assms unfolding id_upt_k_def nrows_def
using to_nat_less_card[where ?'a='b]
by presburger
subsection{*Computing the inverse of a matrix using the Gauss Jordan algorithm*}
text{*This lemma is essential for demonstrating that the Gauss Jordan form of an invertible matrix is the identity.
The proof proceeds by induction and is explained in
\url{http://www.unirioja.es/cu/jodivaso/Isabelle/Gauss-Jordan-2013-2-Generalized/Demonstration_invertible.pdf}*}
lemma id_upt_k_Gauss_Jordan:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
assumes inv_A: "invertible A"
shows "id_upt_k (Gauss_Jordan A) k"
proof (induct k)
case 0
show ?case unfolding id_upt_k_def by fast
next
case (Suc k)
note id_k=Suc.hyps
have rref_k: "reduced_row_echelon_form_upt_k (Gauss_Jordan A) k" using rref_implies_rref_upt[OF rref_Gauss_Jordan] .
have rref_suc_k: "reduced_row_echelon_form_upt_k (Gauss_Jordan A) (Suc k)" using rref_implies_rref_upt[OF rref_Gauss_Jordan] .
have inv_gj: "invertible (Gauss_Jordan A)" by (metis inv_A invertible_Gauss_Jordan invertible_mult)
show "id_upt_k (Gauss_Jordan A) (Suc k)"
proof (unfold id_upt_k_def, auto)
fix j::'n
assume j_less_suc: "to_nat j < Suc k"
--"First of all we prove a property which will be useful later"
have greatest_prop: "j \<noteq> 0 \<Longrightarrow> to_nat j = k \<Longrightarrow> (GREATEST' m. \<not> is_zero_row_upt_k m k (Gauss_Jordan A)) = j - 1"
proof (rule Greatest'_equality)
assume j_not_zero: "j \<noteq> 0" and j_eq_k: "to_nat j = k"
have j_minus_1: "to_nat (j - 1) < k" by (metis (full_types) Suc_le' diff_add_cancel j_eq_k j_not_zero to_nat_mono)
show "\<not> is_zero_row_upt_k (j - 1) k (Gauss_Jordan A)"
unfolding is_zero_row_upt_k_def
proof (auto, rule exI[of _ "j - 1"], rule conjI)
show "to_nat (j - 1) < k" using j_minus_1 .
show "Gauss_Jordan A $ (j - 1) $ (j - 1) \<noteq> 0" using id_k unfolding id_upt_k_def using j_minus_1 by simp
qed
fix a::'n
assume not_zero_a: "\<not> is_zero_row_upt_k a k (Gauss_Jordan A)"
show "a \<le> j - 1"
proof (rule ccontr)
assume " \<not> a \<le> j - 1"
hence a_greater_i_minus_1: "a > j - 1" by simp
have "is_zero_row_upt_k a k (Gauss_Jordan A)"
unfolding is_zero_row_upt_k_def
proof (clarify)
fix b::'n assume a: "to_nat b < k"
have Least_eq: "(LEAST n. Gauss_Jordan A $ b $ n \<noteq> 0) = b"
proof (rule Least_equality)
show "Gauss_Jordan A $ b $ b \<noteq> 0" by (metis a id_k id_upt_k_def zero_neq_one)
show "\<And>y. Gauss_Jordan A $ b $ y \<noteq> 0 \<Longrightarrow> b \<le> y"
by (metis (hide_lams, no_types) a dual_linorder.not_less_iff_gr_or_eq id_k id_upt_k_def less_trans not_less to_nat_mono)
qed
moreover have "\<not> is_zero_row_upt_k b k (Gauss_Jordan A)"
unfolding is_zero_row_upt_k_def apply auto apply (rule exI[of _ b]) using a id_k unfolding id_upt_k_def by simp
moreover have "a \<noteq> b"
proof -
have "b < from_nat k" by (metis a from_nat_to_nat_id j_eq_k not_less_iff_gr_or_eq to_nat_le)
also have "... = j" using j_eq_k to_nat_from_nat by auto
also have "... \<le> a" using a_greater_i_minus_1 by (metis diff_add_cancel le_Suc)
finally show ?thesis by simp
qed
ultimately show "Gauss_Jordan A $ a $ b = 0" using rref_upt_condition4[OF rref_k] by auto
qed
thus "False" using not_zero_a by contradiction
qed
qed
show Gauss_jj_1: "Gauss_Jordan A $ j $ j = 1"
proof (cases "j=0")
--"In case that j be zero, the result is trivial"
case True show ?thesis
proof (unfold True, rule rref_first_element)
show "reduced_row_echelon_form (Gauss_Jordan A)" by (rule rref_Gauss_Jordan)
show "column 0 (Gauss_Jordan A) \<noteq> 0" by (metis det_zero_column inv_gj invertible_det_nz)
qed
next
case False note j_not_zero = False
show ?thesis
proof (cases "to_nat j < k")
case True thus ?thesis using id_k unfolding id_upt_k_def by presburger --"Easy due to the inductive hypothesis"
next
case False
hence j_eq_k: "to_nat j = k" using j_less_suc by auto
have j_minus_1: "to_nat (j - 1) < k" by (metis (full_types) Suc_le' diff_add_cancel j_eq_k j_not_zero to_nat_mono)
have "(GREATEST' m. \<not> is_zero_row_upt_k m k (Gauss_Jordan A)) = j - 1" by (rule greatest_prop[OF j_not_zero j_eq_k])
hence zero_j_k: "is_zero_row_upt_k j k (Gauss_Jordan A)"
by (metis not_le greatest_ge_nonzero_row j_eq_k j_minus_1 to_nat_mono')
show ?thesis
proof (rule ccontr, cases "Gauss_Jordan A $ j $ j = 0")
case False
note gauss_jj_not_0 = False
assume gauss_jj_not_1: "Gauss_Jordan A $ j $ j \<noteq> 1"
have "(LEAST n. Gauss_Jordan A $ j $ n \<noteq> 0) = j"
proof (rule Least_equality)
show "Gauss_Jordan A $ j $ j \<noteq> 0" using gauss_jj_not_0 .
show "\<And>y. Gauss_Jordan A $ j $ y \<noteq> 0 \<Longrightarrow> j \<le> y" by (metis le_less_linear is_zero_row_upt_k_def j_eq_k to_nat_mono zero_j_k)
qed
hence "Gauss_Jordan A $ j $ (LEAST n. Gauss_Jordan A $ j $ n \<noteq> 0) \<noteq> 1" using gauss_jj_not_1 by auto --"Contradiction with the second condition of rref"
thus False by (metis gauss_jj_not_0 is_zero_row_upt_k_def j_eq_k lessI rref_suc_k rref_upt_condition2)
next
case True
note gauss_jj_0 = True
have zero_j_suc_k: "is_zero_row_upt_k j (Suc k) (Gauss_Jordan A)"
by (rule is_zero_row_upt_k_suc[OF zero_j_k], metis gauss_jj_0 j_eq_k to_nat_from_nat)
have "\<not> (\<exists>B. B ** (Gauss_Jordan A) = mat 1)" --"This will be a contradiction"
proof (unfold matrix_left_invertible_independent_columns, simp,
rule exI[of _ "\<lambda>i. (if i < j then column j (Gauss_Jordan A) $ i else if i=j then -1 else 0)"], rule conjI)
show "(\<Sum>i\<in>UNIV. (if i < j then column j (Gauss_Jordan A) $ i else if i=j then -1 else 0) *s column i (Gauss_Jordan A)) = 0"
proof (unfold vec_eq_iff setsum_component, auto)
--"We write the column j in a linear combination of the previous ones, which is a contradiction (the matrix wouldn't be invertible)"
let ?f="\<lambda>i. (if i < j then column j (Gauss_Jordan A) $ i else if i=j then -1 else 0)"
fix i
let ?g="(\<lambda>x. ?f x * column x (Gauss_Jordan A) $ i)"
show "setsum ?g UNIV = 0"
proof (cases "i<j")
case True note i_less_j = True
have setsum_rw: "setsum ?g (UNIV - {i}) = ?g j + setsum ?g ((UNIV - {i}) - {j})"
proof (rule setsum.remove)
show "finite (UNIV - {i})" using finite_code by simp
show "j \<in> UNIV - {i}" using True by blast
qed
have setsum_g0: "setsum ?g (UNIV - {i} - {j}) = 0"
proof (rule setsum.neutral, auto)
fix a
assume a_not_j: "a \<noteq> j" and a_not_i: "a \<noteq> i" and a_less_j: "a < j" and column_a_not_zero: "column a (Gauss_Jordan A) $ i \<noteq> 0"
have "Gauss_Jordan A $ i $ a = 0" using id_k unfolding id_upt_k_def using a_less_j j_eq_k using i_less_j a_not_i to_nat_mono by blast
thus "column j (Gauss_Jordan A) $ a = 0" using column_a_not_zero unfolding column_def by simp --"Contradiction"
qed
have "setsum ?g UNIV = ?g i + setsum ?g (UNIV - {i})" by (rule setsum.remove, simp_all)
also have "... = ?g i + ?g j + setsum ?g (UNIV - {i} - {j})" unfolding setsum_rw by auto
also have "... = ?g i + ?g j" unfolding setsum_g0 by simp
also have "... = 0" using True unfolding column_def
by (simp, metis id_k id_upt_k_def j_eq_k to_nat_mono)
finally show ?thesis .
next
case False
have zero_i_suc_k: "is_zero_row_upt_k i (Suc k) (Gauss_Jordan A)"
by (metis False zero_j_suc_k linorder_cases rref_suc_k rref_upt_condition1)
show ?thesis
proof (rule setsum.neutral, auto)
show "column j (Gauss_Jordan A) $ i = 0"
using zero_i_suc_k unfolding column_def is_zero_row_upt_k_def
by (metis j_eq_k lessI vec_lambda_beta)
next
fix a
assume a_not_j: "a \<noteq> j" and a_less_j: "a < j" and column_a_i: "column a (Gauss_Jordan A) $ i \<noteq> 0"
have "Gauss_Jordan A $ i $ a = 0" using zero_i_suc_k unfolding is_zero_row_upt_k_def
by (metis (full_types) a_less_j j_eq_k less_SucI to_nat_mono)
thus "column j (Gauss_Jordan A) $ a = 0" using column_a_i unfolding column_def by simp
qed
qed
qed
next
show "\<exists>i. (if i < j then column j (Gauss_Jordan A) $ i else if i = j then -1 else 0) \<noteq> 0"
by (metis False j_eq_k neg_equal_0_iff_equal to_nat_mono zero_neq_one)
qed
thus False using inv_gj unfolding invertible_def by simp
qed
qed
qed
fix i::'n
assume i_less_suc: "to_nat i < Suc k" and i_not_j: "i \<noteq> j"
show "Gauss_Jordan A $ i $ j = 0" --"This result is proved making use of the 4th condition of rref"
proof (cases "to_nat i < k \<and> to_nat j < k")
case True thus ?thesis using id_k i_not_j unfolding id_upt_k_def by blast --"Easy due to the inductive hypothesis"
next
case False note i_or_j_ge_k = False
show ?thesis
proof (cases "to_nat i < k")
case True
hence j_eq_k: "to_nat j = k" using i_or_j_ge_k j_less_suc by simp
have j_noteq_0: "j \<noteq> 0" by (metis True j_eq_k less_nat_zero_code to_nat_0)
have j_minus_1: "to_nat (j - 1) < k" by (metis (full_types) Suc_le' diff_add_cancel j_eq_k j_noteq_0 to_nat_mono)
have "(GREATEST' m. \<not> is_zero_row_upt_k m k (Gauss_Jordan A)) = j - 1" by (rule greatest_prop[OF j_noteq_0 j_eq_k])
hence zero_j_k: "is_zero_row_upt_k j k (Gauss_Jordan A)"
by (metis (lifting, mono_tags) dual_linorder.less_linear dual_order.less_asym j_eq_k j_minus_1 not_greater_Greatest' to_nat_mono)
have Least_eq_j: "(LEAST n. Gauss_Jordan A $ j $ n \<noteq> 0) = j"
proof (rule Least_equality)
show "Gauss_Jordan A $ j $ j \<noteq> 0" using Gauss_jj_1 by simp
show "\<And>y. Gauss_Jordan A $ j $ y \<noteq> 0 \<Longrightarrow> j \<le> y"
by (metis True dual_linorder.le_cases from_nat_to_nat_id i_or_j_ge_k is_zero_row_upt_k_def j_less_suc less_Suc_eq_le less_le to_nat_le zero_j_k)
qed
moreover have "\<not> is_zero_row_upt_k j (Suc k) (Gauss_Jordan A)" unfolding is_zero_row_upt_k_def by (metis Gauss_jj_1 j_less_suc zero_neq_one)
ultimately show ?thesis using rref_upt_condition4[OF rref_suc_k] i_not_j by fastforce
next
case False
hence i_eq_k: "to_nat i = k" by (metis `to_nat i < Suc k` less_SucE)
hence j_less_k: "to_nat j < k" by (metis i_not_j j_less_suc less_SucE to_nat_from_nat)
have "(LEAST n. Gauss_Jordan A $ j $ n \<noteq> 0) = j"
proof (rule Least_equality)
show "Gauss_Jordan A $ j $ j \<noteq> 0" by (metis Gauss_jj_1 zero_neq_one)
show "\<And>y. Gauss_Jordan A $ j $ y \<noteq> 0 \<Longrightarrow> j \<le> y"
by (metis dual_linorder.le_cases id_k id_upt_k_def j_less_k less_trans not_less to_nat_mono)
qed
moreover have "\<not> is_zero_row_upt_k j k (Gauss_Jordan A)" by (metis (full_types) Gauss_jj_1 is_zero_row_upt_k_def j_less_k zero_neq_one)
ultimately show ?thesis using rref_upt_condition4[OF rref_k] i_not_j by fastforce
qed
qed
qed
qed
lemma invertible_implies_rref_id:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
assumes inv_A: "invertible A"
shows "Gauss_Jordan A = mat 1"
using id_upt_k_Gauss_Jordan[OF inv_A, of "nrows (Gauss_Jordan A)"]
using id_upt_nrows_mat_1
by fast
lemma matrix_inv_Gauss:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
assumes inv_A: "invertible A" and Gauss_eq: "Gauss_Jordan A = P ** A"
shows "matrix_inv A = P"
proof (unfold matrix_inv_def, rule some1_equality)
show "\<exists>!A'. A ** A' = mat 1 \<and> A' ** A = mat 1" by (metis inv_A invertible_def matrix_inv_unique matrix_left_right_inverse)
show "A ** P = mat 1 \<and> P ** A = mat 1" by (metis Gauss_eq inv_A invertible_implies_rref_id matrix_left_right_inverse)
qed
lemma matrix_inv_Gauss_Jordan_PA:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
assumes inv_A: "invertible A"
shows "matrix_inv A = fst (Gauss_Jordan_PA A)"
by (metis Gauss_Jordan_PA_eq fst_Gauss_Jordan_PA inv_A matrix_inv_Gauss)
lemma invertible_eq_full_rank[code_unfold]:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
shows "invertible A = (rank A = nrows A)"
by (metis full_rank_implies_invertible invertible_implies_full_rank)
definition "inverse_matrix A = (if invertible A then Some (matrix_inv A) else None)"
lemma the_inverse_matrix:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
assumes "invertible A"
shows "the (inverse_matrix A) = P_Gauss_Jordan A"
by (metis P_Gauss_Jordan_def assms inverse_matrix_def matrix_inv_Gauss_Jordan_PA option.sel)
lemma inverse_matrix:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
shows "inverse_matrix A = (if invertible A then Some (P_Gauss_Jordan A) else None)"
by (metis inverse_matrix_def option.sel the_inverse_matrix)
lemma inverse_matrix_code[code_unfold]:
fixes A::"'a::{field}^'n::{mod_type}^'n::{mod_type}"
shows "inverse_matrix A = (let GJ = Gauss_Jordan_PA A;
rank_A = (if A = 0 then 0 else to_nat (GREATEST' a. row a (snd GJ) \<noteq> 0) + 1) in
if nrows A = rank_A then Some (fst(GJ)) else None)"
unfolding inverse_matrix
unfolding invertible_eq_full_rank
unfolding rank_Gauss_Jordan_code
unfolding P_Gauss_Jordan_def
unfolding Let_def Gauss_Jordan_PA_eq by presburger
end
|
#' Backup do files for master files
#'
#' No longer used.
#'
#'
#' @author ILO / bescond
#' @keywords ILO, microdataset, preprocessing
#' @examples
#' ## Not run:
#'
#' ## End(**Not run**)
#' @export
#' @rdname Data_get_workflow
Data_backup_cmd <- function(wd){
test <- ilo:::path$data
master <- Data_get_workflow() %>% distinct(repo, project) %>%
mutate(init = paste0(test, repo, '/', project, '/do'),
backup = paste0(wd, 'iloData/inst/doc'))
for (i in 1:nrow(master)){
try(file.copy(from = master$init[i], to = master$backup[i],copy.mode = TRUE, copy.date = TRUE, overwrite = TRUE, recursive = TRUE), silent = TRUE)
}
file.copy( from = paste0(ilo:::path$data, '_Admin/0_WorkFlow.xlsx'),
to = paste0(wd, 'iloData/inst/doc/0_WorkFlow.xlsx'),copy.mode = TRUE, copy.date = TRUE, overwrite = TRUE)
}
|
Require Export euclidean__axioms.
Require Export euclidean__defs.
Require Export lemma__NCorder.
Require Export lemma__collinearorder.
Require Export logic.
Definition lemma__samesideflip : forall A B P Q, (euclidean__defs.OS P Q A B) -> (euclidean__defs.OS P Q B A).
Proof.
intro A.
intro B.
intro P.
intro Q.
intro H.
assert (* Cut *) (exists p q r, (euclidean__axioms.Col A B p) /\ ((euclidean__axioms.Col A B q) /\ ((euclidean__axioms.BetS P p r) /\ ((euclidean__axioms.BetS Q q r) /\ ((euclidean__axioms.nCol A B P) /\ (euclidean__axioms.nCol A B Q)))))) as H0.
- assert (exists X U V, (euclidean__axioms.Col A B U) /\ ((euclidean__axioms.Col A B V) /\ ((euclidean__axioms.BetS P U X) /\ ((euclidean__axioms.BetS Q V X) /\ ((euclidean__axioms.nCol A B P) /\ (euclidean__axioms.nCol A B Q)))))) as H0 by exact H.
assert (exists X U V, (euclidean__axioms.Col A B U) /\ ((euclidean__axioms.Col A B V) /\ ((euclidean__axioms.BetS P U X) /\ ((euclidean__axioms.BetS Q V X) /\ ((euclidean__axioms.nCol A B P) /\ (euclidean__axioms.nCol A B Q)))))) as __TmpHyp by exact H0.
destruct __TmpHyp as [x H1].
destruct H1 as [x0 H2].
destruct H2 as [x1 H3].
destruct H3 as [H4 H5].
destruct H5 as [H6 H7].
destruct H7 as [H8 H9].
destruct H9 as [H10 H11].
destruct H11 as [H12 H13].
exists x0.
exists x1.
exists x.
split.
-- exact H4.
-- split.
--- exact H6.
--- split.
---- exact H8.
---- split.
----- exact H10.
----- split.
------ exact H12.
------ exact H13.
- destruct H0 as [p H1].
destruct H1 as [q H2].
destruct H2 as [r H3].
destruct H3 as [H4 H5].
destruct H5 as [H6 H7].
destruct H7 as [H8 H9].
destruct H9 as [H10 H11].
destruct H11 as [H12 H13].
assert (* Cut *) (euclidean__axioms.Col B A p) as H14.
-- assert (* Cut *) ((euclidean__axioms.Col B A p) /\ ((euclidean__axioms.Col B p A) /\ ((euclidean__axioms.Col p A B) /\ ((euclidean__axioms.Col A p B) /\ (euclidean__axioms.Col p B A))))) as H14.
--- apply (@lemma__collinearorder.lemma__collinearorder A B p H4).
--- destruct H14 as [H15 H16].
destruct H16 as [H17 H18].
destruct H18 as [H19 H20].
destruct H20 as [H21 H22].
exact H15.
-- assert (* Cut *) (euclidean__axioms.Col B A q) as H15.
--- assert (* Cut *) ((euclidean__axioms.Col B A q) /\ ((euclidean__axioms.Col B q A) /\ ((euclidean__axioms.Col q A B) /\ ((euclidean__axioms.Col A q B) /\ (euclidean__axioms.Col q B A))))) as H15.
---- apply (@lemma__collinearorder.lemma__collinearorder A B q H6).
---- destruct H15 as [H16 H17].
destruct H17 as [H18 H19].
destruct H19 as [H20 H21].
destruct H21 as [H22 H23].
exact H16.
--- assert (* Cut *) (euclidean__axioms.nCol B A P) as H16.
---- assert (* Cut *) ((euclidean__axioms.nCol B A P) /\ ((euclidean__axioms.nCol B P A) /\ ((euclidean__axioms.nCol P A B) /\ ((euclidean__axioms.nCol A P B) /\ (euclidean__axioms.nCol P B A))))) as H16.
----- apply (@lemma__NCorder.lemma__NCorder A B P H12).
----- destruct H16 as [H17 H18].
destruct H18 as [H19 H20].
destruct H20 as [H21 H22].
destruct H22 as [H23 H24].
exact H17.
---- assert (* Cut *) (euclidean__axioms.nCol B A Q) as H17.
----- assert (* Cut *) ((euclidean__axioms.nCol B A Q) /\ ((euclidean__axioms.nCol B Q A) /\ ((euclidean__axioms.nCol Q A B) /\ ((euclidean__axioms.nCol A Q B) /\ (euclidean__axioms.nCol Q B A))))) as H17.
------ apply (@lemma__NCorder.lemma__NCorder A B Q H13).
------ destruct H17 as [H18 H19].
destruct H19 as [H20 H21].
destruct H21 as [H22 H23].
destruct H23 as [H24 H25].
exact H18.
----- assert (* Cut *) (euclidean__defs.OS P Q B A) as H18.
------ exists r.
exists p.
exists q.
split.
------- exact H14.
------- split.
-------- exact H15.
-------- split.
--------- exact H8.
--------- split.
---------- exact H10.
---------- split.
----------- exact H16.
----------- exact H17.
------ exact H18.
Qed.
|
/-
Copyright (c) 2019 Sébastien Gouëzel. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sébastien Gouëzel
-/
import analysis.specific_limits.basic
import order.filter.countable_Inter
import topology.G_delta
/-!
# Baire theorem
In a complete metric space, a countable intersection of dense open subsets is dense.
The good concept underlying the theorem is that of a Gδ set, i.e., a countable intersection
of open sets. Then Baire theorem can also be formulated as the fact that a countable
intersection of dense Gδ sets is a dense Gδ set. We prove Baire theorem, giving several different
formulations that can be handy. We also prove the important consequence that, if the space is
covered by a countable union of closed sets, then the union of their interiors is dense.
The names of the theorems do not contain the string "Baire", but are instead built from the form of
the statement. "Baire" is however in the docstring of all the theorems, to facilitate grep searches.
We also define the filter `residual α` generated by dense `Gδ` sets and prove that this filter
has the countable intersection property.
-/
noncomputable theory
open_locale classical topological_space filter ennreal
open filter encodable set
variables {α : Type*} {β : Type*} {γ : Type*} {ι : Type*}
section Baire_theorem
open emetric ennreal
variables [pseudo_emetric_space α] [complete_space α]
/-- Baire theorem: a countable intersection of dense open sets is dense. Formulated here when
the source space is ℕ (and subsumed below by `dense_Inter_of_open` working with any
encodable source space). -/
theorem dense_Inter_of_open_nat {f : ℕ → set α} (ho : ∀n, is_open (f n))
(hd : ∀n, dense (f n)) : dense (⋂n, f n) :=
begin
let B : ℕ → ℝ≥0∞ := λn, 1/2^n,
have Bpos : ∀n, 0 < B n,
{ intro n,
simp only [B, one_div, one_mul, ennreal.inv_pos],
exact pow_ne_top two_ne_top },
/- Translate the density assumption into two functions `center` and `radius` associating
to any n, x, δ, δpos a center and a positive radius such that
`closed_ball center radius` is included both in `f n` and in `closed_ball x δ`.
We can also require `radius ≤ (1/2)^(n+1)`, to ensure we get a Cauchy sequence later. -/
have : ∀n x δ, δ ≠ 0 → ∃y r, 0 < r ∧ r ≤ B (n+1) ∧ closed_ball y r ⊆ (closed_ball x δ) ∩ f n,
{ assume n x δ δpos,
have : x ∈ closure (f n) := hd n x,
rcases emetric.mem_closure_iff.1 this (δ/2) (ennreal.half_pos δpos) with ⟨y, ys, xy⟩,
rw edist_comm at xy,
obtain ⟨r, rpos, hr⟩ : ∃ r > 0, closed_ball y r ⊆ f n :=
nhds_basis_closed_eball.mem_iff.1 (is_open_iff_mem_nhds.1 (ho n) y ys),
refine ⟨y, min (min (δ/2) r) (B (n+1)), _, _, λz hz, ⟨_, _⟩⟩,
show 0 < min (min (δ / 2) r) (B (n+1)),
from lt_min (lt_min (ennreal.half_pos δpos) rpos) (Bpos (n+1)),
show min (min (δ / 2) r) (B (n+1)) ≤ B (n+1), from min_le_right _ _,
show z ∈ closed_ball x δ, from calc
edist z x ≤ edist z y + edist y x : edist_triangle _ _ _
... ≤ (min (min (δ / 2) r) (B (n+1))) + (δ/2) : add_le_add hz (le_of_lt xy)
... ≤ δ/2 + δ/2 : add_le_add (le_trans (min_le_left _ _) (min_le_left _ _)) le_rfl
... = δ : ennreal.add_halves δ,
show z ∈ f n, from hr (calc
edist z y ≤ min (min (δ / 2) r) (B (n+1)) : hz
... ≤ r : le_trans (min_le_left _ _) (min_le_right _ _)) },
choose! center radius Hpos HB Hball using this,
refine λ x, (mem_closure_iff_nhds_basis nhds_basis_closed_eball).2 (λ ε εpos, _),
/- `ε` is positive. We have to find a point in the ball of radius `ε` around `x` belonging to all
`f n`. For this, we construct inductively a sequence `F n = (c n, r n)` such that the closed ball
`closed_ball (c n) (r n)` is included in the previous ball and in `f n`, and such that
`r n` is small enough to ensure that `c n` is a Cauchy sequence. Then `c n` converges to a
limit which belongs to all the `f n`. -/
let F : ℕ → (α × ℝ≥0∞) := λn, nat.rec_on n (prod.mk x (min ε (B 0)))
(λn p, prod.mk (center n p.1 p.2) (radius n p.1 p.2)),
let c : ℕ → α := λn, (F n).1,
let r : ℕ → ℝ≥0∞ := λn, (F n).2,
have rpos : ∀ n, 0 < r n,
{ assume n,
induction n with n hn,
exact lt_min εpos (Bpos 0),
exact Hpos n (c n) (r n) hn.ne' },
have r0 : ∀ n, r n ≠ 0 := λ n, (rpos n).ne',
have rB : ∀n, r n ≤ B n,
{ assume n,
induction n with n hn,
exact min_le_right _ _,
exact HB n (c n) (r n) (r0 n) },
have incl : ∀n, closed_ball (c (n+1)) (r (n+1)) ⊆ (closed_ball (c n) (r n)) ∩ (f n) :=
λ n, Hball n (c n) (r n) (r0 n),
have cdist : ∀n, edist (c n) (c (n+1)) ≤ B n,
{ assume n,
rw edist_comm,
have A : c (n+1) ∈ closed_ball (c (n+1)) (r (n+1)) := mem_closed_ball_self,
have I := calc
closed_ball (c (n+1)) (r (n+1)) ⊆ closed_ball (c n) (r n) :
subset.trans (incl n) (inter_subset_left _ _)
... ⊆ closed_ball (c n) (B n) : closed_ball_subset_closed_ball (rB n),
exact I A },
have : cauchy_seq c :=
cauchy_seq_of_edist_le_geometric_two _ one_ne_top cdist,
-- as the sequence `c n` is Cauchy in a complete space, it converges to a limit `y`.
rcases cauchy_seq_tendsto_of_complete this with ⟨y, ylim⟩,
-- this point `y` will be the desired point. We will check that it belongs to all
-- `f n` and to `ball x ε`.
use y,
simp only [exists_prop, set.mem_Inter],
have I : ∀n, ∀m ≥ n, closed_ball (c m) (r m) ⊆ closed_ball (c n) (r n),
{ assume n,
refine nat.le_induction _ (λm hnm h, _),
{ exact subset.refl _ },
{ exact subset.trans (incl m) (subset.trans (inter_subset_left _ _) h) }},
have yball : ∀n, y ∈ closed_ball (c n) (r n),
{ assume n,
refine is_closed_ball.mem_of_tendsto ylim _,
refine (filter.eventually_ge_at_top n).mono (λ m hm, _),
exact I n m hm mem_closed_ball_self },
split,
show ∀n, y ∈ f n,
{ assume n,
have : closed_ball (c (n+1)) (r (n+1)) ⊆ f n := subset.trans (incl n) (inter_subset_right _ _),
exact this (yball (n+1)) },
show edist y x ≤ ε, from le_trans (yball 0) (min_le_left _ _),
end
/-- Baire theorem: a countable intersection of dense open sets is dense. Formulated here with ⋂₀. -/
theorem dense_sInter_of_open {S : set (set α)} (ho : ∀s∈S, is_open s) (hS : countable S)
(hd : ∀s∈S, dense s) : dense (⋂₀S) :=
begin
cases S.eq_empty_or_nonempty with h h,
{ simp [h] },
{ rcases hS.exists_surjective h with ⟨f, hf⟩,
have F : ∀n, f n ∈ S := λn, by rw hf; exact mem_range_self _,
rw [hf, sInter_range],
exact dense_Inter_of_open_nat (λn, ho _ (F n)) (λn, hd _ (F n)) }
end
/-- Baire theorem: a countable intersection of dense open sets is dense. Formulated here with
an index set which is a countable set in any type. -/
theorem dense_bInter_of_open {S : set β} {f : β → set α} (ho : ∀s∈S, is_open (f s))
(hS : countable S) (hd : ∀s∈S, dense (f s)) : dense (⋂s∈S, f s) :=
begin
rw ← sInter_image,
apply dense_sInter_of_open,
{ rwa ball_image_iff },
{ exact hS.image _ },
{ rwa ball_image_iff }
end
/-- Baire theorem: a countable intersection of dense open sets is dense. Formulated here with
an index set which is an encodable type. -/
theorem dense_Inter_of_open [encodable β] {f : β → set α} (ho : ∀s, is_open (f s))
(hd : ∀s, dense (f s)) : dense (⋂s, f s) :=
begin
rw ← sInter_range,
apply dense_sInter_of_open,
{ rwa forall_range_iff },
{ exact countable_range _ },
{ rwa forall_range_iff }
end
/-- Baire theorem: a countable intersection of dense Gδ sets is dense. Formulated here with ⋂₀. -/
theorem dense_sInter_of_Gδ {S : set (set α)} (ho : ∀s∈S, is_Gδ s) (hS : countable S)
(hd : ∀s∈S, dense s) : dense (⋂₀S) :=
begin
-- the result follows from the result for a countable intersection of dense open sets,
-- by rewriting each set as a countable intersection of open sets, which are of course dense.
choose T hTo hTc hsT using ho,
have : ⋂₀ S = ⋂₀ (⋃ s ∈ S, T s ‹_›), -- := (sInter_bUnion (λs hs, (hT s hs).2.2)).symm,
by simp only [sInter_Union, (hsT _ _).symm, ← sInter_eq_bInter],
rw this,
refine dense_sInter_of_open _ (hS.bUnion hTc) _;
simp only [mem_Union]; rintro t ⟨s, hs, tTs⟩,
show is_open t, from hTo s hs t tTs,
show dense t,
{ intro x,
have := hd s hs x,
rw hsT s hs at this,
exact closure_mono (sInter_subset_of_mem tTs) this }
end
/-- Baire theorem: a countable intersection of dense Gδ sets is dense. Formulated here with
an index set which is an encodable type. -/
theorem dense_Inter_of_Gδ [encodable β] {f : β → set α} (ho : ∀s, is_Gδ (f s))
(hd : ∀s, dense (f s)) : dense (⋂s, f s) :=
begin
rw ← sInter_range,
exact dense_sInter_of_Gδ (forall_range_iff.2 ‹_›) (countable_range _) (forall_range_iff.2 ‹_›)
end
/-- Baire theorem: a countable intersection of dense Gδ sets is dense. Formulated here with
an index set which is a countable set in any type. -/
theorem dense_bInter_of_Gδ {S : set β} {f : Π x ∈ S, set α} (ho : ∀s∈S, is_Gδ (f s ‹_›))
(hS : countable S) (hd : ∀s∈S, dense (f s ‹_›)) : dense (⋂s∈S, f s ‹_›) :=
begin
rw bInter_eq_Inter,
haveI := hS.to_encodable,
exact dense_Inter_of_Gδ (λ s, ho s s.2) (λ s, hd s s.2)
end
/-- Baire theorem: the intersection of two dense Gδ sets is dense. -/
theorem dense.inter_of_Gδ {s t : set α} (hs : is_Gδ s) (ht : is_Gδ t) (hsc : dense s)
(htc : dense t) :
dense (s ∩ t) :=
begin
rw [inter_eq_Inter],
apply dense_Inter_of_Gδ; simp [bool.forall_bool, *]
end
/-- A property holds on a residual (comeagre) set if and only if it holds on some dense `Gδ` set. -/
lemma eventually_residual {p : α → Prop} :
(∀ᶠ x in residual α, p x) ↔ ∃ (t : set α), is_Gδ t ∧ dense t ∧ ∀ x ∈ t, p x :=
calc (∀ᶠ x in residual α, p x) ↔
∀ᶠ x in ⨅ (t : set α) (ht : is_Gδ t ∧ dense t), 𝓟 t, p x :
by simp only [residual, infi_and]
... ↔ ∃ (t : set α) (ht : is_Gδ t ∧ dense t), ∀ᶠ x in 𝓟 t, p x : mem_binfi_of_directed
(λ t₁ h₁ t₂ h₂, ⟨t₁ ∩ t₂, ⟨h₁.1.inter h₂.1, dense.inter_of_Gδ h₁.1 h₂.1 h₁.2 h₂.2⟩, by simp⟩)
⟨univ, is_Gδ_univ, dense_univ⟩
... ↔ _ : by simp [and_assoc]
/-- A set is residual (comeagre) if and only if it includes a dense `Gδ` set. -/
lemma mem_residual {s : set α} : s ∈ residual α ↔ ∃ t ⊆ s, is_Gδ t ∧ dense t :=
(@eventually_residual α _ _ (λ x, x ∈ s)).trans $ exists_congr $
λ t, by rw [exists_prop, and_comm (t ⊆ s), subset_def, and_assoc]
lemma dense_of_mem_residual {s : set α} (hs : s ∈ residual α) : dense s :=
let ⟨t, hts, _, hd⟩ := mem_residual.1 hs in hd.mono hts
instance : countable_Inter_filter (residual α) :=
⟨begin
intros S hSc hS,
simp only [mem_residual] at *,
choose T hTs hT using hS,
refine ⟨⋂ s ∈ S, T s ‹_›, _, _, _⟩,
{ rw [sInter_eq_bInter],
exact Inter₂_mono hTs },
{ exact is_Gδ_bInter hSc (λ s hs, (hT s hs).1) },
{ exact dense_bInter_of_Gδ (λ s hs, (hT s hs).1) hSc (λ s hs, (hT s hs).2) }
end⟩
/-- Baire theorem: if countably many closed sets cover the whole space, then their interiors
are dense. Formulated here with an index set which is a countable set in any type. -/
theorem dense_bUnion_interior_of_closed {S : set β} {f : β → set α} (hc : ∀s∈S, is_closed (f s))
(hS : countable S) (hU : (⋃s∈S, f s) = univ) : dense (⋃s∈S, interior (f s)) :=
begin
let g := λs, (frontier (f s))ᶜ,
have : dense (⋂s∈S, g s),
{ refine dense_bInter_of_open (λs hs, _) hS (λs hs, _),
show is_open (g s), from is_open_compl_iff.2 is_closed_frontier,
show dense (g s),
{ intro x,
simp [interior_frontier (hc s hs)] }},
refine this.mono _,
show (⋂s∈S, g s) ⊆ (⋃s∈S, interior (f s)),
assume x hx,
have : x ∈ ⋃s∈S, f s, { have := mem_univ x, rwa ← hU at this },
rcases mem_Union₂.1 this with ⟨s, hs, xs⟩,
have : x ∈ g s := mem_Inter₂.1 hx s hs,
have : x ∈ interior (f s),
{ have : x ∈ f s \ (frontier (f s)) := mem_inter xs this,
simpa [frontier, xs, (hc s hs).closure_eq] using this },
exact mem_Union₂.2 ⟨s, ⟨hs, this⟩⟩
end
/-- Baire theorem: if countably many closed sets cover the whole space, then their interiors
are dense. Formulated here with `⋃₀`. -/
theorem dense_sUnion_interior_of_closed {S : set (set α)} (hc : ∀s∈S, is_closed s)
(hS : countable S) (hU : (⋃₀ S) = univ) : dense (⋃s∈S, interior s) :=
by rw sUnion_eq_bUnion at hU; exact dense_bUnion_interior_of_closed hc hS hU
/-- Baire theorem: if countably many closed sets cover the whole space, then their interiors
are dense. Formulated here with an index set which is an encodable type. -/
theorem dense_Union_interior_of_closed [encodable β] {f : β → set α} (hc : ∀s, is_closed (f s))
(hU : (⋃s, f s) = univ) : dense (⋃s, interior (f s)) :=
begin
rw ← bUnion_univ,
apply dense_bUnion_interior_of_closed,
{ simp [hc] },
{ apply countable_encodable },
{ rwa ← bUnion_univ at hU }
end
/-- One of the most useful consequences of Baire theorem: if a countable union of closed sets
covers the space, then one of the sets has nonempty interior. -/
theorem nonempty_interior_of_Union_of_closed [nonempty α] [encodable β] {f : β → set α}
(hc : ∀s, is_closed (f s)) (hU : (⋃s, f s) = univ) :
∃s, (interior $ f s).nonempty :=
begin
by_contradiction h,
simp only [not_exists, not_nonempty_iff_eq_empty] at h,
have := calc ∅ = closure (⋃s, interior (f s)) : by simp [h]
... = univ : (dense_Union_interior_of_closed hc hU).closure_eq,
exact univ_nonempty.ne_empty this.symm
end
end Baire_theorem
|
[GOAL]
a b : PosNum
⊢ Decidable ((fun x x_1 => x < x_1) a b)
[PROOFSTEP]
dsimp [LT.lt]
[GOAL]
a b : PosNum
⊢ Decidable (cmp a b = lt)
[PROOFSTEP]
infer_instance
[GOAL]
a b : PosNum
⊢ Decidable ((fun x x_1 => x ≤ x_1) a b)
[PROOFSTEP]
dsimp [LE.le]
[GOAL]
a b : PosNum
⊢ Decidable ¬b < a
[PROOFSTEP]
infer_instance
[GOAL]
a b : Num
⊢ Decidable ((fun x x_1 => x < x_1) a b)
[PROOFSTEP]
dsimp [LT.lt]
[GOAL]
a b : Num
⊢ Decidable (cmp a b = lt)
[PROOFSTEP]
infer_instance
[GOAL]
a b : Num
⊢ Decidable ((fun x x_1 => x ≤ x_1) a b)
[PROOFSTEP]
dsimp [LE.le]
[GOAL]
a b : Num
⊢ Decidable ¬b < a
[PROOFSTEP]
infer_instance
[GOAL]
a b : ZNum
⊢ Decidable ((fun x x_1 => x < x_1) a b)
[PROOFSTEP]
dsimp [LT.lt]
[GOAL]
a b : ZNum
⊢ Decidable (cmp a b = lt)
[PROOFSTEP]
infer_instance
[GOAL]
a b : ZNum
⊢ Decidable ((fun x x_1 => x ≤ x_1) a b)
[PROOFSTEP]
dsimp [LE.le]
[GOAL]
a b : ZNum
⊢ Decidable ¬b < a
[PROOFSTEP]
infer_instance
|
using DifferentialEquations
using Plots
μ = 0.15 # Drift
σ = 1.0 # Vol of Vol
θ = 0.04 # long-term variance
κ = 1.0 # Mean reversion speed
ρ = -0.75 # spot-vol correlation
u0 = [100.0, 0.0225]
tspan = (0.0, 1.0)
dt = 1.0/365
# HestonProblem is provided by the DiffEqFinancial component of the
# DifferentialEquations ecosystem.
prob = HestonProblem(μ, κ, θ, σ, ρ, u0, tspan)
sol = solve(prob, EM(), dt=dt)
print(sol.u[1][1])
# `sol.u` is a Vector of state vectors; `reduce(hcat, ...)` concatenates them
# into a 2×N matrix (row 1: spot price, row 2: variance).
path_matrix = reduce(hcat, sol.u)
print(typeof(path_matrix))
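# Hypothetical post-processing (the names are illustrative, not from the
# original script): split the solution into its two sample paths.
spot_path = [u[1] for u in sol.u]  # S_t samples
var_path  = [u[2] for u in sol.u]  # v_t samples
# plot(sol.t, spot_path, label = "spot")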
# print(path_matrix) |
Frankincense, a dominant ingredient in this popular scent, creates a refreshing energy. Sandalwood, cinnamon, ginger lily and other spices add a precious sparkle to this scent.
30 sticks - 5.25" long, plus a biodegradable incense holder.
Approximate burning time: 30 minutes per stick.
Ingredients: Sandalwood (Santalum), Frankincense (Olibanum), Cinnamon (Cinnamomum), Ginger Lily (Hedychii rhizoma) and other spices. |
[STATEMENT]
lemma distribute_rely_choice:
assumes nonabort_i: "chaos \<sqsubseteq> i"
shows "(c \<sqinter> d) // i \<sqsubseteq> (c // i) \<sqinter> (d // i)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (c \<sqinter> d) // i \<sqsubseteq> c // i \<sqinter> d // i
variables:
c, d, i :: 'a
(\<parallel>) :: 'a \<Rightarrow> 'a \<Rightarrow> 'a
type variables:
'a :: refinement_lattice
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. (c \<sqinter> d) // i \<sqsubseteq> c // i \<sqinter> d // i
variables:
c, d, i :: 'a
(\<parallel>) :: 'a \<Rightarrow> 'a \<Rightarrow> 'a
type variables:
'a :: refinement_lattice
[PROOF STEP]
have "c \<sqinter> d \<sqsubseteq> ((c // i) \<parallel> i) \<sqinter> ((d // i) \<parallel> i)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. c \<sqinter> d \<sqsubseteq> c // i \<parallel> i \<sqinter> d // i \<parallel> i
variables:
(\<parallel>) :: 'a \<Rightarrow> 'a \<Rightarrow> 'a
c, d, i :: 'a
type variables:
'a :: refinement_lattice
[PROOF STEP]
by (metis nonabort_i inf_mono rely_quotient)
[PROOF STATE]
proof (state)
this:
(c::'a::refinement_lattice) \<sqinter> (d::'a::refinement_lattice) \<sqsubseteq> c // (i::'a::refinement_lattice) \<parallel> i \<sqinter> d // i \<parallel> i
goal (1 subgoal):
1. (c \<sqinter> d) // i \<sqsubseteq> c // i \<sqinter> d // i
variables:
c, d, i :: 'a
(\<parallel>) :: 'a \<Rightarrow> 'a \<Rightarrow> 'a
type variables:
'a :: refinement_lattice
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
(c::'a::refinement_lattice) \<sqinter> (d::'a::refinement_lattice) \<sqsubseteq> c // (i::'a::refinement_lattice) \<parallel> i \<sqinter> d // i \<parallel> i
[PROOF STEP]
have "c \<sqinter> d \<sqsubseteq> ((c // i) \<sqinter> (d // i)) \<parallel> i"
[PROOF STATE]
proof (prove)
using this:
(c::'a::refinement_lattice) \<sqinter> (d::'a::refinement_lattice) \<sqsubseteq> c // (i::'a::refinement_lattice) \<parallel> i \<sqinter> d // i \<parallel> i
goal (1 subgoal):
1. c \<sqinter> d \<sqsubseteq> (c // i \<sqinter> d // i) \<parallel> i
variables:
(\<parallel>) :: 'a \<Rightarrow> 'a \<Rightarrow> 'a
c, d, i :: 'a
type variables:
'a :: refinement_lattice
[PROOF STEP]
by (metis inf_par_distrib)
[PROOF STATE]
proof (state)
this:
(c::'a::refinement_lattice) \<sqinter> (d::'a::refinement_lattice) \<sqsubseteq> (c // (i::'a::refinement_lattice) \<sqinter> d // i) \<parallel> i
goal (1 subgoal):
1. (c \<sqinter> d) // i \<sqsubseteq> c // i \<sqinter> d // i
variables:
c, d, i :: 'a
(\<parallel>) :: 'a \<Rightarrow> 'a \<Rightarrow> 'a
type variables:
'a :: refinement_lattice
[PROOF STEP]
thus ?thesis
[PROOF STATE]
proof (prove)
using this:
(c::'a::refinement_lattice) \<sqinter> (d::'a::refinement_lattice) \<sqsubseteq> (c // (i::'a::refinement_lattice) \<sqinter> d // i) \<parallel> i
goal (1 subgoal):
1. (c \<sqinter> d) // i \<sqsubseteq> c // i \<sqinter> d // i
variables:
c, d, i :: 'a
(\<parallel>) :: 'a \<Rightarrow> 'a \<Rightarrow> 'a
type variables:
'a :: refinement_lattice
[PROOF STEP]
by (metis nonabort_i rely_refinement)
[PROOF STATE]
proof (state)
this:
((c::'a::refinement_lattice) \<sqinter> (d::'a::refinement_lattice)) // (i::'a::refinement_lattice) \<sqsubseteq> c // i \<sqinter> d // i
goal:
No subgoals!
variables:
c, d, i :: 'a
(\<parallel>) :: 'a \<Rightarrow> 'a \<Rightarrow> 'a
type variables:
'a :: refinement_lattice
[PROOF STEP]
qed |
[STATEMENT]
lemma orelse_conv[simp]:
"((x orelse y) = TT) \<longleftrightarrow> (x = TT \<or> (x = FF \<and> y = TT))"
"((x orelse y) = \<bottom>) \<longleftrightarrow> (x = \<bottom> \<or> (x = FF \<and> y = \<bottom>))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((x orelse y) = TT) = (x = TT \<or> x = FF \<and> y = TT) &&& ((x orelse y) = \<bottom>) = (x = \<bottom> \<or> x = FF \<and> y = \<bottom>)
[PROOF STEP]
by (cases x; cases y; simp)+ |
lemma norm_ii [simp]: "norm \<i> = 1" |
import Data.Complex
main :: IO ()
main = do let res = dft $ enumerate $ map (\x -> x :+ 0) [1..100]
putStrLn $ show res
enumerate :: Integral b => [a] -> [(a, b)]
enumerate l = zip l [0..]
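-- The rest is a direct O(N^2) transcription of the DFT:
--   X_k = sum_{n=0}^{N-1} x_n * exp(-2*pi*i*k*n/N).
-- Quick sanity check: dft (enumerate [1 :+ 0, 0 :+ 0]) evaluates
-- (up to floating-point signed zeros) to [1 :+ 0, 1 :+ 0].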
dft :: (RealFloat a, Integral b) => [(Complex a, b)] -> [Complex a]
dft x = map (\(_, k) -> singleDFT k x) x
singleDFT :: (RealFloat a, Integral b) => b -> [(Complex a, b)] -> Complex a
singleDFT k x = sum $ map (\(e, n) -> (e *) $ fCoeff k n $ length x) $ x
fCoeff :: (RealFloat a, Integral b) => b -> b -> b -> Complex a
fCoeff k n n_tot = exp (0.0 :+ ((-2.0) * pi * (fromIntegral $ k * n) /
(fromIntegral n_tot)))
|
Formal statement is: lemmas continuous_mult [continuous_intros] = bounded_bilinear.continuous [OF bounded_bilinear_mult] Informal statement is: The multiplication function is continuous. |
State Before: α : Type u_1
β : Type ?u.160
n : ℕ
a a' : α
v : Vector α n
⊢ a ∈ toList v ↔ ∃ i, get v i = a State After: α : Type u_1
β : Type ?u.160
n : ℕ
a a' : α
v : Vector α n
⊢ (∃ i h, List.get (toList v) { val := i, isLt := h } = a) ↔
∃ i h, List.get (toList v) (↑(Fin.cast (_ : n = List.length (toList v))) { val := i, isLt := h }) = a Tactic: simp only [List.mem_iff_get, Fin.exists_iff, Vector.get_eq_get] State Before: α : Type u_1
β : Type ?u.160
n : ℕ
a a' : α
v : Vector α n
⊢ (∃ i h, List.get (toList v) { val := i, isLt := h } = a) ↔
∃ i h, List.get (toList v) (↑(Fin.cast (_ : n = List.length (toList v))) { val := i, isLt := h }) = a State After: no goals Tactic: exact
⟨fun ⟨i, hi, h⟩ => ⟨i, by rwa [toList_length] at hi, h⟩, fun ⟨i, hi, h⟩ =>
⟨i, by rwa [toList_length], h⟩⟩ State Before: α : Type u_1
β : Type ?u.160
n : ℕ
a a' : α
v : Vector α n
x✝ : ∃ i h, List.get (toList v) { val := i, isLt := h } = a
i : ℕ
hi : i < List.length (toList v)
h : List.get (toList v) { val := i, isLt := hi } = a
⊢ i < n State After: no goals Tactic: rwa [toList_length] at hi State Before: α : Type u_1
β : Type ?u.160
n : ℕ
a a' : α
v : Vector α n
x✝ : ∃ i h, List.get (toList v) (↑(Fin.cast (_ : n = List.length (toList v))) { val := i, isLt := h }) = a
i : ℕ
hi : i < n
h : List.get (toList v) (↑(Fin.cast (_ : n = List.length (toList v))) { val := i, isLt := hi }) = a
⊢ i < List.length (toList v) State After: no goals Tactic: rwa [toList_length] |
proposition orthogonal_extension: fixes S :: "'a::euclidean_space set" assumes S: "pairwise orthogonal S" obtains U where "pairwise orthogonal (S \<union> U)" "span (S \<union> U) = span (S \<union> T)" |
{-# LANGUAGE Haskell98 #-}
{-# LINE 1 "Data/Default/Class.hs" #-}
{-
Copyright (c) 2013 Lukas Mai
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the author nor the names of his contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY LUKAS MAI AND CONTRIBUTORS "AS IS" AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-}
{-# LANGUAGE CPP #-}
{-# LANGUAGE DefaultSignatures, TypeOperators, FlexibleContexts #-}
module Data.Default.Class (
-- | This module defines a class for types with a default value.
-- It also defines 'Default' instances for the types 'Int', 'Int8',
-- 'Int16', 'Int32', 'Int64', 'Word', 'Word8', 'Word16', 'Word32', 'Word64',
-- 'Integer', 'Float', 'Double', 'Ratio', 'Complex', 'CShort', 'CUShort',
-- 'CInt', 'CUInt', 'CLong', 'CULong', 'CLLong', 'CULLong', 'CPtrdiff',
-- 'CSize', 'CSigAtomic', 'CIntPtr', 'CUIntPtr', 'CIntMax', 'CUIntMax',
-- 'CClock', 'CTime', 'CUSeconds', 'CSUSeconds', 'CFloat', 'CDouble', '(->)',
-- 'IO', 'Maybe', '()', '[]', 'Ordering', 'Any', 'All', 'Last', 'First', 'Sum',
-- 'Product', 'Endo', 'Dual', and tuples.
Default(..)
) where
import Data.Int
import Data.Word
import Data.Monoid
import Data.Ratio
import Data.Complex
import Foreign.C.Types
import GHC.Generics
class GDefault f where
gdef :: f a
instance GDefault U1 where
gdef = U1
instance (Default a) => GDefault (K1 i a) where
gdef = K1 def
instance (GDefault a, GDefault b) => GDefault (a :*: b) where
gdef = gdef :*: gdef
instance (GDefault a) => GDefault (M1 i c a) where
gdef = M1 gdef
-- | A class for types with a default value.
class Default a where
-- | The default value for this type.
def :: a
default def :: (Generic a, GDefault (Rep a)) => a
def = to gdef
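-- A sketch of the generics-based default (hypothetical type; needs the
-- DeriveGeneric extension in the defining module):
--
-- > data Config = Config { retries :: Int, label :: String } deriving Generic
-- > instance Default Config
-- >
-- > -- def :: Config evaluates to Config 0 ""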
instance Default Int where def = 0
instance Default Int8 where def = 0
instance Default Int16 where def = 0
instance Default Int32 where def = 0
instance Default Int64 where def = 0
instance Default Word where def = 0
instance Default Word8 where def = 0
instance Default Word16 where def = 0
instance Default Word32 where def = 0
instance Default Word64 where def = 0
instance Default Integer where def = 0
instance Default Float where def = 0
instance Default Double where def = 0
instance (Integral a) => Default (Ratio a) where def = 0
instance (Default a, RealFloat a) => Default (Complex a) where def = def :+ def
instance Default CShort where def = 0
instance Default CUShort where def = 0
instance Default CInt where def = 0
instance Default CUInt where def = 0
instance Default CLong where def = 0
instance Default CULong where def = 0
instance Default CLLong where def = 0
instance Default CULLong where def = 0
instance Default CPtrdiff where def = 0
instance Default CSize where def = 0
instance Default CSigAtomic where def = 0
instance Default CIntPtr where def = 0
instance Default CUIntPtr where def = 0
instance Default CIntMax where def = 0
instance Default CUIntMax where def = 0
instance Default CClock where def = 0
instance Default CTime where def = 0
instance Default CUSeconds where def = 0
instance Default CSUSeconds where def = 0
instance Default CFloat where def = 0
instance Default CDouble where def = 0
instance (Default r) => Default (e -> r) where def = const def
instance (Default a) => Default (IO a) where def = return def
instance Default (Maybe a) where def = Nothing
instance Default () where def = mempty
instance Default [a] where def = mempty
instance Default Ordering where def = mempty
instance Default Any where def = mempty
instance Default All where def = mempty
instance Default (Last a) where def = mempty
instance Default (First a) where def = mempty
instance (Num a) => Default (Sum a) where def = mempty
instance (Num a) => Default (Product a) where def = mempty
instance Default (Endo a) where def = mempty
instance (Default a) => Default (Dual a) where def = Dual def
instance (Default a, Default b) => Default (a, b) where def = (def, def)
instance (Default a, Default b, Default c) => Default (a, b, c) where def = (def, def, def)
instance (Default a, Default b, Default c, Default d) => Default (a, b, c, d) where def = (def, def, def, def)
instance (Default a, Default b, Default c, Default d, Default e) => Default (a, b, c, d, e) where def = (def, def, def, def, def)
instance (Default a, Default b, Default c, Default d, Default e, Default f) => Default (a, b, c, d, e, f) where def = (def, def, def, def, def, def)
instance (Default a, Default b, Default c, Default d, Default e, Default f, Default g) => Default (a, b, c, d, e, f, g) where def = (def, def, def, def, def, def, def)
|
(** This file implements the type [coGset A] of finite/cofinite sets
of elements of any countable type [A].
Note that [coGset positive] cannot represent all elements of [coPset]
(e.g., [coPset_suffixes], [coPset_l], and [coPset_r] construct
infinite sets that cannot be represented). *)
From stdpp Require Export sets countable.
From stdpp Require Import decidable finite gmap coPset.
From stdpp Require Import options.
(* Pick up extra assumptions from section parameters. *)
Set Default Proof Using "Type*".
Inductive coGset `{Countable A} :=
| FinGSet (X : gset A)
| CoFinGset (X : gset A).
Arguments coGset _ {_ _} : assert.
Instance coGset_eq_dec `{Countable A} : EqDecision (coGset A).
Proof. solve_decision. Defined.
Instance coGset_countable `{Countable A} : Countable (coGset A).
Proof.
apply (inj_countable'
(λ X, match X with FinGSet X => inl X | CoFinGset X => inr X end)
(λ s, match s with inl X => FinGSet X | inr X => CoFinGset X end)).
by intros [].
Qed.
Section coGset.
Context `{Countable A}.
Global Instance coGset_elem_of : ElemOf A (coGset A) := λ x X,
match X with FinGSet X => x ∈ X | CoFinGset X => x ∉ X end.
Global Instance coGset_empty : Empty (coGset A) := FinGSet ∅.
Global Instance coGset_top : Top (coGset A) := CoFinGset ∅.
Global Instance coGset_singleton : Singleton A (coGset A) := λ x,
FinGSet {[x]}.
Global Instance coGset_union : Union (coGset A) := λ X Y,
match X, Y with
| FinGSet X, FinGSet Y => FinGSet (X ∪ Y)
| CoFinGset X, CoFinGset Y => CoFinGset (X ∩ Y)
| FinGSet X, CoFinGset Y => CoFinGset (Y ∖ X)
| CoFinGset X, FinGSet Y => CoFinGset (X ∖ Y)
end.
Global Instance coGset_intersection : Intersection (coGset A) := λ X Y,
match X, Y with
| FinGSet X, FinGSet Y => FinGSet (X ∩ Y)
| CoFinGset X, CoFinGset Y => CoFinGset (X ∪ Y)
| FinGSet X, CoFinGset Y => FinGSet (X ∖ Y)
| CoFinGset X, FinGSet Y => FinGSet (Y ∖ X)
end.
Global Instance coGset_difference : Difference (coGset A) := λ X Y,
match X, Y with
| FinGSet X, FinGSet Y => FinGSet (X ∖ Y)
| CoFinGset X, CoFinGset Y => FinGSet (Y ∖ X)
| FinGSet X, CoFinGset Y => FinGSet (X ∩ Y)
| CoFinGset X, FinGSet Y => CoFinGset (X ∪ Y)
end.
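(* For example, the instances above compute the co-singleton [⊤ ∖ {[x]}],
i.e. [CoFinGset ∅ ∖ FinGSet {[x]}], as [CoFinGset (∅ ∪ {[x]})]: the
cofinite set of all elements distinct from [x]. *)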
Global Instance coGset_set : TopSet A (coGset A).
Proof.
split; [split; [split| |]|].
- by intros ??.
- intros x y. unfold elem_of, coGset_elem_of; simpl.
by rewrite elem_of_singleton.
- intros [X|X] [Y|Y] x; unfold elem_of, coGset_elem_of, coGset_union; simpl.
+ set_solver.
+ by rewrite not_elem_of_difference, (comm (∨)).
+ by rewrite not_elem_of_difference.
+ by rewrite not_elem_of_intersection.
- intros [] [];
unfold elem_of, coGset_elem_of, coGset_intersection; set_solver.
- intros [X|X] [Y|Y] x;
unfold elem_of, coGset_elem_of, coGset_difference; simpl.
+ set_solver.
+ rewrite elem_of_intersection. destruct (decide (x ∈ Y)); tauto.
+ set_solver.
+ rewrite elem_of_difference. destruct (decide (x ∈ Y)); tauto.
- done.
Qed.
End coGset.
Instance coGset_elem_of_dec `{Countable A} : RelDecision (∈@{coGset A}) :=
λ x X,
match X with
| FinGSet X => decide_rel elem_of x X
| CoFinGset X => not_dec (decide_rel elem_of x X)
end.
Section infinite.
Context `{Countable A, Infinite A}.
Global Instance coGset_leibniz : LeibnizEquiv (coGset A).
Proof.
intros [X|X] [Y|Y]; rewrite elem_of_equiv;
unfold elem_of, coGset_elem_of; simpl; intros HXY.
- f_equal. by apply leibniz_equiv.
- by destruct (exist_fresh (X ∪ Y)) as [? [? ?%HXY]%not_elem_of_union].
- by destruct (exist_fresh (X ∪ Y)) as [? [?%HXY ?]%not_elem_of_union].
- f_equal. apply leibniz_equiv; intros x. by apply not_elem_of_iff.
Qed.
Global Instance coGset_equiv_dec : RelDecision (≡@{coGset A}).
Proof.
refine (λ X Y, cast_if (decide (X = Y))); abstract (by fold_leibniz).
Defined.
Global Instance coGset_disjoint_dec : RelDecision (##@{coGset A}).
Proof.
refine (λ X Y, cast_if (decide (X ∩ Y = ∅)));
abstract (by rewrite disjoint_intersection_L).
Defined.
Global Instance coGset_subseteq_dec : RelDecision (⊆@{coGset A}).
Proof.
refine (λ X Y, cast_if (decide (X ∪ Y = Y)));
abstract (by rewrite subseteq_union_L).
Defined.
Definition coGset_finite (X : coGset A) : bool :=
match X with FinGSet _ => true | CoFinGset _ => false end.
Lemma coGset_finite_spec X : set_finite X ↔ coGset_finite X.
Proof.
destruct X as [X|X];
unfold set_finite, elem_of at 1, coGset_elem_of; simpl.
- split; [done|intros _]. exists (elements X). set_solver.
- split; [intros [Y HXY]%(pred_finite_set(C:=gset A))|done].
by destruct (exist_fresh (X ∪ Y)) as [? [?%HXY ?]%not_elem_of_union].
Qed.
Global Instance coGset_finite_dec (X : coGset A) : Decision (set_finite X).
Proof.
refine (cast_if (decide (coGset_finite X)));
abstract (by rewrite coGset_finite_spec).
Defined.
End infinite.
(** * Pick elements from infinite sets *)
Definition coGpick `{Countable A, Infinite A} (X : coGset A) : A :=
fresh (match X with FinGSet _ => ∅ | CoFinGset X => X end).
Lemma coGpick_elem_of `{Countable A, Infinite A} X :
¬set_finite X → coGpick X ∈ X.
Proof.
unfold coGpick.
destruct X as [X|X]; rewrite coGset_finite_spec; simpl; [done|].
by intros _; apply is_fresh.
Qed.
(** * Conversion to and from gset *)
Definition coGset_to_gset `{Countable A} (X : coGset A) : gset A :=
match X with FinGSet X => X | CoFinGset _ => ∅ end.
Definition gset_to_coGset `{Countable A} : gset A → coGset A := FinGSet.
Section to_gset.
Context `{Countable A}.
Lemma elem_of_gset_to_coGset (X : gset A) x : x ∈ gset_to_coGset X ↔ x ∈ X.
Proof. done. Qed.
Context `{Infinite A}.
Lemma elem_of_coGset_to_gset (X : coGset A) x :
set_finite X → x ∈ coGset_to_gset X ↔ x ∈ X.
Proof. rewrite coGset_finite_spec. by destruct X. Qed.
Lemma gset_to_coGset_finite (X : gset A) : set_finite (gset_to_coGset X).
Proof. by rewrite coGset_finite_spec. Qed.
End to_gset.
(** * Conversion to coPset *)
Definition coGset_to_coPset (X : coGset positive) : coPset :=
match X with
| FinGSet X => gset_to_coPset X
| CoFinGset X => ⊤ ∖ gset_to_coPset X
end.
Lemma elem_of_coGset_to_coPset X x : x ∈ coGset_to_coPset X ↔ x ∈ X.
Proof.
destruct X as [X|X]; simpl.
- by rewrite elem_of_gset_to_coPset.
- by rewrite elem_of_difference, elem_of_gset_to_coPset, (left_id True (∧)).
Qed.
(** * Inefficient conversion to arbitrary sets with a top element *)
(** This shows that, when [A] is countable, [coGset A] is initial
among sets with [∪], [∩], [∖], [∅], [{[_]}], and [⊤]. *)
Definition coGset_to_top_set `{Countable A, Empty C, Singleton A C, Union C,
Top C, Difference C} (X : coGset A) : C :=
match X with
| FinGSet X => list_to_set (elements X)
| CoFinGset X => ⊤ ∖ list_to_set (elements X)
end.
Lemma elem_of_coGset_to_top_set `{Countable A, TopSet A C} X x :
x ∈@{C} coGset_to_top_set X ↔ x ∈ X.
Proof. destruct X; set_solver. Qed.
(** * Domain of finite maps *)
Instance coGset_dom `{Countable K} {A} : Dom (gmap K A) (coGset K) := λ m,
gset_to_coGset (dom _ m).
Instance coGset_dom_spec `{Countable K} : FinMapDom K (gmap K) (coGset K).
Proof.
split; try apply _. intros B m i. unfold dom, coGset_dom.
by rewrite elem_of_gset_to_coGset, elem_of_dom.
Qed.
Typeclasses Opaque coGset_elem_of coGset_empty coGset_top coGset_singleton.
Typeclasses Opaque coGset_union coGset_intersection coGset_difference.
Typeclasses Opaque coGset_dom.
|
Require Import
MathClasses.interfaces.abstract_algebra
CoRN.stdlib_omissions.Q
CoRN.algebra.RSetoid
CoRN.metric2.Metric
CoRN.metric2.UniformContinuity
CoRN.reals.fast.CRpi_fast
CoRN.reals.fast.CRarctan_small
CoRN.reals.faster.ARarctan_small.
Section ARpi.
Context `{AppRationals AQ}.
Lemma AQpi_prf (x : Z) : 1 < x → -('x : AQ) < 1 < ('x : AQ).
Proof.
split. 2: apply semirings.preserves_gt_1, H5.
rewrite <- (rings.preserves_1 (f:=cast Z AQ)).
rewrite <- (rings.preserves_negate (f:=cast Z AQ)).
apply (strictly_order_preserving (cast Z AQ)).
unfold one, stdlib_binary_integers.Z_1.
rewrite <- (Z.opp_involutive 1).
apply Z.gt_lt, CornBasics.Zlt_opp.
apply (Z.lt_trans _ 1). reflexivity. exact H5.
Qed.
Lemma ZtoAQ_pos : forall (z:Z), 0 < z -> 0 < ('z : AQ).
Proof.
intros z zpos.
pose proof (rings.preserves_0 (f:=cast Z AQ)).
rewrite <- H5.
exact (strictly_order_preserving (cast Z AQ) 0 z zpos).
Qed.
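(* [AQpi x] computes [x * π] via the Machin-like identity also used by
[CRpi_fast]:
π = 176*arctan(1/57) + 28*arctan(1/239) - 48*arctan(1/682) + 96*arctan(1/12943),
with each arctangent evaluated by [AQarctan_small] on its small argument. *)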
Definition AQpi (x : AQ) : msp_car AR :=
ucFun (ARscale (' 176%Z * x))
(AQarctan_small (AQpi_prf 57 eq_refl) (ZtoAQ_pos 57 eq_refl)) +
ucFun (ARscale (' 28%Z * x))
(AQarctan_small (AQpi_prf 239 eq_refl) (ZtoAQ_pos 239 eq_refl)) +
(ucFun (ARscale (' (-48)%Z * x))
(AQarctan_small (AQpi_prf 682 eq_refl) (ZtoAQ_pos 682 eq_refl)) +
ucFun (ARscale (' 96%Z * x))
(AQarctan_small (AQpi_prf 12943 eq_refl) (ZtoAQ_pos 12943 eq_refl))).
Lemma ARtoCR_preserves_AQpi x : 'AQpi x = r_pi ('x).
Proof.
unfold AQpi, r_pi.
assert (∀ (k : Z) (d : positive)
(Pnd: -(cast Z AQ (Zpos d)) < 1 < cast Z AQ (Zpos d))
(dpos : 0 < cast Z AQ (Zpos d))
(Pa : (-1 <= 1#d <= 1)%Q),
' ucFun (ARscale ('k * x)) (AQarctan_small Pnd dpos)
= ucFun (scale (inject_Z k * 'x)) (rational_arctan_small Pa)) as PP.
{ intros.
rewrite ARtoCR_preserves_scale.
apply Cmap_wd.
rewrite rings.preserves_mult, AQtoQ_ZtoAQ.
reflexivity.
rewrite AQarctan_small_correct.
rewrite rational_arctan_small_wd.
reflexivity.
rewrite rings.preserves_1.
pose proof (AQtoQ_ZtoAQ (Zpos d)). rewrite H5.
reflexivity. }
assert (forall x y : msp_car AR, ARtoCR (x+y) = 'x + 'y) as plusMorph.
{ intros x0 y. apply (rings.preserves_plus x0 y). }
unfold cast. unfold cast in plusMorph.
rewrite plusMorph. apply ucFun2_wd.
rewrite plusMorph. apply ucFun2_wd.
apply PP. apply PP.
rewrite plusMorph.
apply ucFun2_wd. apply PP. apply PP.
Qed.
Definition ARpi := AQpi 1.
Lemma ARtoCR_preserves_pi : 'ARpi = CRpi.
Proof.
unfold ARpi, CRpi.
rewrite ARtoCR_preserves_AQpi.
rewrite rings.preserves_1.
reflexivity.
Qed.
End ARpi.
|
\name{NEWS}
\title{News for Package \pkg{adductomicsR}}
\section{Changes in version 0.3.0 (2018-09-19)}{
\itemize{
\item Package prepared for Bioconductor submission.
}
} |
||| The Core Computation Context.
|||
||| Borrowed from Idris2 `Rug` is the core computation context that
||| brings the computations together.
module Toolkit.TheRug
import System
import System.File
import System.Clock
import Data.Vect
import Data.String
import Decidable.Equality
import Text.Parser
import Text.Lexer
import Toolkit.System
import Toolkit.Text.Parser.Run
import Toolkit.Data.Location
import Toolkit.Text.Lexer.Run
import Toolkit.Data.DList
import Toolkit.Decidable.Informative
%default total
||| Because it ties everything together.
export
record TheRug e t where
constructor MkTheRug
rugRun : IO (Either e t)
export
run : (whenErr : e -> IO b)
-> (whenOK : a -> IO b)
-> (prog : TheRug e a)
-> IO b
run whenErr whenOK (MkTheRug rugRun)
= either whenErr
whenOK
!rugRun
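-- A hypothetical use of `run`, handling failure and success uniformly:
--
--   main : IO ()
--   main = run (\err => putStrLn ("error: " ++ err))
--              printLn
--              (the (TheRug String Nat) (pure 42))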
export
%inline
fail : (msg : e)
-> TheRug e a
fail e
= MkTheRug (pure (Left e))
export
%inline
throw : (msg : e) -> TheRug e a
throw = fail
export
%inline
map : (a -> b)
-> TheRug e a
-> TheRug e b
map f (MkTheRug a)
= MkTheRug (map (map f) a)
export
%inline
ignore : TheRug e a -> TheRug e ()
ignore
= map (\_ => ())
export
%inline
embed : (this : IO a)
-> TheRug e a
embed op
= MkTheRug (do o <- op
pure (Right o))
export
%inline
embed_ : (this : IO a)
-> TheRug e ()
embed_ this = ignore (embed this)
export
%inline
(>>=) : TheRug e a
-> (a -> TheRug e b)
-> TheRug e b
(>>=) (MkTheRug act) f
= MkTheRug (act >>=
(\res =>
case res of
Left err => pure (Left err)
Right val => rugRun (f val)))
export
%inline
(>>) : TheRug e ()
-> TheRug e a
-> TheRug e a
(>>) ma mb = ma >>= const mb
export
%inline
pure : a -> TheRug e a
pure x = MkTheRug (pure (pure x))
export
(<*>) : TheRug e (a -> b)
-> TheRug e a
-> TheRug e b
(<*>) (MkTheRug f)
(MkTheRug a) = MkTheRug [| f <*> a |]
export
(*>) : TheRug e a
-> TheRug e b
-> TheRug e b
(*>) (MkTheRug a)
(MkTheRug b) = MkTheRug [| a *> b |]
export
(<*) : TheRug e a
-> TheRug e b
-> TheRug e a
(<*) (MkTheRug a)
(MkTheRug b) = MkTheRug [| a <* b |]
export
%inline
when : (test : Bool)
-> Lazy (TheRug e ())
-> TheRug e ()
when False _
= pure ()
when True f
= f
export
%inline
tryCatch : (onErr : ea -> eb)
-> (prog : TheRug ea a)
-> TheRug eb a
tryCatch onErr prog
= MkTheRug (run (pure . Left . onErr)
(pure . Right)
prog)
namespace Decidable
export
%inline
embed : (err : b)
-> (result : Dec r)
-> TheRug b r
embed _ (Yes prfWhy)
= pure prfWhy
embed err (No prfWhyNot)
= throw err
export
%inline
when : (result : Dec a)
-> (this : Lazy (TheRug e ()))
-> TheRug e ()
when (Yes _) this
= this
when (No _) _
= pure ()
export
%inline
whenNot : (result : Dec r)
-> (this : Lazy (TheRug e ()))
-> TheRug e ()
whenNot (Yes _) _
= pure ()
whenNot (No _) this
= this
namespace Informative
export
%inline
embed : (f : a -> b)
-> (result : DecInfo a r)
-> TheRug b r
embed f (Yes prfWhy)
= pure prfWhy
embed f (No msgWhyNot prfWhyNot)
= throw (f msgWhyNot)
namespace Traverse
namespace List
traverse' : (acc : List b)
-> (f : a -> TheRug e b)
-> (xs : List a)
-> TheRug e (List b)
traverse' acc f []
= pure (reverse acc)
traverse' acc f (x :: xs)
= traverse' (!(f x) :: acc) f xs
export
%inline
traverse : (f : a -> TheRug e b)
-> (xs : List a)
-> TheRug e (List b)
traverse = traverse' Nil
namespace Vect
export
%inline
traverse : (f : a -> TheRug e b)
-> (xs : Vect n a)
-> TheRug e (Vect n b)
traverse f []
= pure Nil
traverse f (x :: xs)
= [| f x :: traverse f xs |]
namespace IO
export
%inline
putStr : (s : String)
-> TheRug e ()
putStr = (TheRug.embed . putStr)
export
%inline
putStrLn : (s : String)
-> TheRug e ()
putStrLn = (TheRug.embed . putStrLn)
export
%inline
print : Show a
=> (this : a)
-> TheRug e ()
print = (TheRug.embed . print)
export
%inline
printLn : Show a
=> (this : a)
-> TheRug e ()
printLn = (TheRug.embed . printLn)
export
covering -- not my fault
readFile : (onErr : String -> FileError -> e)
-> (fname : String)
-> TheRug e String
readFile onErr fname
= do Right content <- (TheRug.embed (readFile fname))
| Left err => throw (onErr fname err)
pure content
namespace Parsing
export
covering -- not my fault
parseFile : {e : _}
-> (onErr : ParseError a -> err)
-> (lexer : Lexer a)
-> (rule : Grammar () a e ty)
-> (fname : String)
-> TheRug err ty
parseFile onErr lexer rule fname
= do Right res <- TheRug.embed (parseFile lexer rule fname)
| Left err => throw (onErr err)
pure res
namespace Cheap
export
%inline
log : (msg : String)
-> TheRug e ()
log = putStrLn
namespace Timed
export
%inline
log : (showTime : Bool)
-> (time : Lazy (Clock type))
-> (msg : String)
-> TheRug e ()
log showTime t m
= if showTime
then do print t
putStrLn m
else putStrLn m
export
%inline
try : Show e
=> (showTime : Bool)
-> (msg : String)
-> (f : a -> TheRug e b)
-> (val : a)
-> TheRug e b
try showTime msg f val
= do start <- (embed $ clockTime UTC)
res <- embed (run (\err => do stop <- clockTime UTC
putStrLn "Error Happened"
let d = timeDifference stop start
if showTime
then do putStrLn (unwords [msg, show d])
printLn err
exitFailure
else do putStrLn msg
printLn err
exitFailure)
(\res => do stop <- clockTime UTC
let d = timeDifference stop start
if showTime
then do putStrLn (unwords [msg, show d])
pure res
else do putStrLn msg
pure res)
(f val))
pure res
-- [ EOF ]
|
import algebra.group
variable {G: Type*}
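-- Informally: a * b = (a * b)⁻¹ = b⁻¹ * a⁻¹ = b * a, using the hypothesis
-- x⁻¹ = x three times; the calc block below packages this with a left and a
-- right cancellation.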
theorem Q_10 [group G]:
(∀ a: G, a⁻¹ = a) →
(∀ a b: G, a * b = b * a) :=
λ h a b, mul_left_cancel (mul_right_cancel $
calc a * (a * b) * b
= a * (a⁻¹ * b⁻¹) * b : by rw [h, h]
... = a * (b * a)⁻¹ * b : by rw [←mul_inv_rev]
... = a * (b * a) * b : by rw h )
|
#define BOOST_TEST_MODULE "dsn::log::Base"
#include <dsnutil/log/base.h>
#include <dsnutil/log/init.h>
#include <boost/test/unit_test.hpp>
BOOST_AUTO_TEST_CASE(log_init) { dsn::log::init(); }
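// LoggerTest below exercises the CRTP base dsn::log::Base<LoggerTest>, which
// (as used here) supplies the `log` source and the `severity` levels consumed
// by BOOST_LOG_SEV.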
namespace {
class LoggerTest : public dsn::log::Base<LoggerTest> {
public:
LoggerTest() { BOOST_LOG_SEV(log, severity::info) << "Hello from LoggerTest constructor"; }
void error() { BOOST_LOG_SEV(log, severity::error) << "This is a demo error message"; }
~LoggerTest() { BOOST_LOG_SEV(log, severity::info) << "Goodbye from LoggerTest destructor"; }
};
}
BOOST_AUTO_TEST_CASE(derived_logger)
{
std::unique_ptr<LoggerTest> ptr(new LoggerTest());
BOOST_CHECK(ptr.get() != nullptr);
ptr->error();
}
|
(*
This is the definition of formal syntax for Dan Grossman's Thesis,
"SAFE PROGRAMMING AT THE C LEVEL OF ABSTRACTION".
Let's define partial functions as lists.
We have just too many unique domain constraints.
*)
Require Import List.
Export ListNotations.
Require Import ZArith.
Require Import Init.Datatypes.
Definition pfun (A : Type) (B : Type) := list (A * B).
(* TODO I could make this a relation and then check at each add that I am
not overwriting the domain. *)
(* The type arguments are implicit so that the notations below typecheck. *)
Definition plusl {A : Type} {B : Type} (x : A*B) (p : pfun A B) : pfun A B :=
x :: p.
Definition plusr {A : Type} {B : Type} (p : pfun A B) (x : A * B) : pfun A B :=
p ++ [x].
Definition union {A : Type} {B : Type}
(p1 : pfun A B) (p2 : pfun A B) : pfun A B :=
p1 ++ p2.
Notation "x |+ y" := (plusl x y)
(at level 60, right associativity).
Notation "[ x |; .. |; y ]" := (cons x .. (cons y []) ..).
Notation "|[ ]" := [].
Notation "x |++ y" := (union x y)
(at level 60, right associativity).
(* TODO Map or ? *)
Fixpoint Dom {A : Type} {B : Type} (p : pfun A B) : list A :=
match p with
| (a,b) :: p' => a :: Dom p'
| |[] => |[]
end.
(* [notindom a p] holds exactly when [a] differs from every key of [p]. *)
Fixpoint notindom {A : Type} {B : Type} (a : A) (p : pfun A B) : Prop :=
match p with
| (b',_) :: p' => a <> b' /\ notindom a p'
| |[] => True
end.
Inductive WFpfun {A : Type} {B : Type} : pfun A B -> Prop :=
| WFpfun_nil : WFpfun |[]
| WFpfun_plus : forall (a : A * B) (p : pfun A B),
    WFpfun p -> WFpfun (a |+ p).
(* How do I put an invariant in here? I don't think I can. *)
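(* A quick sanity check of the definitions above (hypothetical data). *)
Example Dom_example :
  Dom [ (1%Z, true) |; (2%Z, false) ] = 1%Z :: 2%Z :: nil.
Proof. reflexivity. Qed.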
|
Formal statement is: lemma "prime(97::int)" Informal statement is: The number 97 is prime. |
Have we got a summer splash for you!
What could you accomplish this summer with a clear intention and some dedicated time?
Your exercise routine, your performance in an art or sport, creating a calm space for yourself during a hectic day? Whether you’d like to improve your balance, gait, posture, workout, or just basic comfort and coordination for daily activities, the Feldenkrais Method® can get you started and accelerate your progress.
We’re offering our first-ever Feldenkrais® Summer Camp for Grown-Ups. It’s a wonderful way to explore the Feldenkrais Method for the first time, to resume your practice and get back “in the groove,” or to add to your current course.
A $665 value for $525.
Register by 6/2 for $495.
You’ll schedule your private lessons (one or two a week) anytime between June 5 and July 28.
You’ll choose one of our four summer ATM classes, held between June 6 and July 27.
CHOOSE FROM: TUESDAYS @ the Jung Center (Museum District), noon or 6 p.m.
THURSDAYS @ The Feldenkrais Center of Houston (610/290), noon or 6 p.m.
Summer camp participants will also qualify for a special tuition price for our June workshop, Easy Comfortable Knees on Saturday, June 17.
Questions? Call us at 713-622-8794, or email us.
Register by June 2 and get this $665 value for the special tuition price of $495.
After June 2 – still a great opportunity at $525.
Click the button below to register for Feldenkrais Summer Camp for Grown-Ups. We will EMAIL you to schedule your first private lesson, and to enroll you in the class time/location of your choice.
See you at Feldenkrais Summer Camp for Grown-Ups! |
(* Title: IL_Interval.thy
Date: Oct 2006
Author: David Trachtenherz
*)
header {* Intervals and operations for temporal logic declarations *}
theory IL_Interval
imports
"../List-Infinite/CommonSet/InfiniteSet2"
"../List-Infinite/CommonSet/SetIntervalStep"
begin
subsection {* Time intervals -- definitions and basic lemmata *}
subsubsection {* Definitions *}
type_synonym Time = nat
(* Time interval *)
type_synonym iT = "Time set"
text {* Infinite interval starting at some natural @{term "n"}. *}
definition
iFROM :: "Time \<Rightarrow> iT" ("[_\<dots>]") (* [n, \<infinity>) *)
where
"[n\<dots>] \<equiv> {n..}"
text {* Finite interval starting at @{term "0"} and ending at some natural @{term "n"}. *}
definition
iTILL :: "Time \<Rightarrow> iT" ("[\<dots>_]") (* [0, n] *) (* Equivalent to [0\<dots>,n] *)
where
"[\<dots>n] \<equiv> {..n}"
text {*
Finite bounded interval containing the naturals between
@{term "n"} and @{term "n + d"}.
@{term "d"} denotes the difference between left and right interval bound.
The number of elements is @{term "d + 1"} so that an empty interval cannot be defined. *}
definition
iIN :: "Time \<Rightarrow> nat \<Rightarrow> iT" ( "[_\<dots>,_]") (* [n, n+d] *)
where
"[n\<dots>,d] \<equiv> {n..n+d}"
text {*
Infinite modulo interval containing all naturals
having the same division remainder modulo @{term "m"}
as @{term "r"}, and beginning at @{term "n"}. *}
definition
iMOD :: "Time \<Rightarrow> nat \<Rightarrow> iT" ( "[ _, mod _ ]" )
where
"[r, mod m] \<equiv> { x. x mod m = r mod m \<and> r \<le> x}"
text {*
Finite bounded modulo interval containing all naturals
having the same division remainder modulo @{term "m"}
as @{term "r"}, beginning at @{term "n"},
and ending after @{term "c"} cycles at @{term "r + m * c"}.
The number of elements is @{term "c + 1"} so that an empty interval cannot be defined. *}
definition
iMODb :: "Time \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> iT" ( "[ _, mod _, _ ]" )
where
"[r, mod m, c] \<equiv> { x. x mod m = r mod m \<and> r \<le> x \<and> x \<le> r + m * c}"
subsubsection {* Membership in an interval *}
lemmas iT_defs = iFROM_def iTILL_def iIN_def iMOD_def iMODb_def
lemma iFROM_iff: "x \<in> [n\<dots>] = (n \<le> x)"
by (simp add: iFROM_def)
lemma iFROM_D: "x \<in> [n\<dots>] \<Longrightarrow> (n \<le> x)"
by (rule iFROM_iff[THEN iffD1])
lemma iTILL_iff: "x \<in> [\<dots>n] = (x \<le> n)"
by (simp add: iTILL_def)
lemma iTILL_D: "x \<in> [\<dots>n] \<Longrightarrow> (x \<le> n)"
by (rule iTILL_iff[THEN iffD1])
lemma iIN_iff: "x \<in> [n\<dots>,d] = (n \<le> x \<and> x \<le> n + d)"
by (simp add: iIN_def)
lemma iMOD_iff: "x \<in> [r, mod m] = (x mod m = r mod m \<and> r \<le> x)"
by (simp add: iMOD_def)
lemma iMODb_iff: "x \<in> [r, mod m, c] = (x mod m = r mod m \<and> r \<le> x \<and> x \<le> r + m * c)"
by (simp add: iMODb_def)
corollary iIN_geD: "x \<in> [n\<dots>,d] \<Longrightarrow> n \<le> x"
by (simp add: iIN_iff)
corollary iIN_leD: "x \<in> [n\<dots>,d] \<Longrightarrow> x \<le> n + d"
by (simp add: iIN_iff)
corollary iMOD_modD: "x \<in> [r, mod m] \<Longrightarrow> x mod m = r mod m"
by (simp add: iMOD_iff)
corollary iMOD_geD: "x \<in> [r, mod m] \<Longrightarrow> r \<le> x"
by (simp add: iMOD_iff)
corollary iMODb_modD: "x \<in> [r, mod m, c] \<Longrightarrow> x mod m = r mod m"
by (simp add: iMODb_iff)
corollary iMODb_geD: "x \<in> [r, mod m, c] \<Longrightarrow> r \<le> x"
by (simp add: iMODb_iff)
corollary iMODb_leD: "x \<in> [r, mod m, c] \<Longrightarrow> x \<le> r + m * c"
by (simp add: iMODb_iff)
lemmas iT_iff = iFROM_iff iTILL_iff iIN_iff iMOD_iff iMODb_iff
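(* A small sanity check, in the spirit of the commented examples further below. *)
lemma "(12::Time) \<in> [2, mod 10]"
by (simp add: iMOD_iff)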
lemmas iT_drule =
iFROM_D
iTILL_D
iIN_geD iIN_leD
iMOD_modD iMOD_geD
iMODb_modD iMODb_geD iMODb_leD
thm iT_drule
lemma
iFROM_I [intro]: "n \<le> x \<Longrightarrow> x \<in> [n\<dots>]" and
iTILL_I [intro]: "x \<le> n \<Longrightarrow> x \<in> [\<dots>n]" and
iIN_I [intro]: "n \<le> x \<Longrightarrow> x \<le> n + d \<Longrightarrow> x \<in> [n\<dots>,d]" and
iMOD_I [intro]: "x mod m = r mod m \<Longrightarrow> r \<le> x \<Longrightarrow> x \<in> [r, mod m]" and
iMODb_I [intro]: "x mod m = r mod m \<Longrightarrow> r \<le> x \<Longrightarrow> x \<le> r + m * c \<Longrightarrow> x \<in> [r, mod m, c]"
by (simp add: iT_iff)+
lemma
iFROM_E [elim]: "x \<in> [n\<dots>] \<Longrightarrow> (n \<le> x \<Longrightarrow> P) \<Longrightarrow> P" and
iTILL_E [elim]: "x \<in> [\<dots>n] \<Longrightarrow> (x \<le> n \<Longrightarrow> P) \<Longrightarrow> P" and
iIN_E [elim]: "x \<in> [n\<dots>,d] \<Longrightarrow> (n \<le> x \<Longrightarrow> x \<le> n + d \<Longrightarrow> P) \<Longrightarrow> P" and
iMOD_E [elim]: "x \<in> [r, mod m] \<Longrightarrow> (x mod m = r mod m \<Longrightarrow> r \<le> x \<Longrightarrow> P) \<Longrightarrow> P" and
iMODb_E [elim]: "x \<in> [r, mod m, c] \<Longrightarrow> (x mod m = r mod m \<Longrightarrow> r \<le> x \<Longrightarrow> x \<le> r + m * c \<Longrightarrow> P) \<Longrightarrow> P"
by (simp add: iT_iff)+
(*
lemma "0 < n \<Longrightarrow> \<exists>x \<in> [n\<dots>,2*n]. x mod 2 = 0"
apply (simp add: iT_defs)
apply (rule_tac x="2*n" in bexI)
apply simp
apply simp
done
lemma "0 < n \<Longrightarrow> \<exists>x \<in> [n\<dots>,2*n]. x mod 2 = 0"
apply (simp add: iT_defs atLeastAtMost_def atLeast_def atMost_def Collect_conj_eq[symmetric])
apply (rule_tac x="2*n" in exI)
apply simp
done
*)
lemma iIN_Suc_insert_conv: "
insert (Suc (n + d)) [n\<dots>,d] = [n\<dots>,Suc d]"
by (fastforce simp: iIN_iff)
lemma iTILL_Suc_insert_conv: "insert (Suc n) [\<dots>n] = [\<dots>Suc n]"
by (fastforce simp: iIN_Suc_insert_conv[of 0 n])
lemma iMODb_Suc_insert_conv: "
insert (r + m * Suc c) [r, mod m, c] = [r, mod m, Suc c]"
apply (rule set_eqI)
apply (simp add: iMODb_iff add.commute[of _ r])
apply (simp add: add.commute[of m])
apply (simp add: add.assoc[symmetric])
apply (rule iffI)
apply fastforce
apply (elim conjE)
apply (drule_tac x=x in order_le_less[THEN iffD1, rule_format])
apply (erule disjE)
thm less_mod_eq_imp_add_divisor_le
apply (frule less_mod_eq_imp_add_divisor_le[where m=m], simp)
thm add_le_imp_le_right
apply (drule add_le_imp_le_right)
apply simp
apply simp
done
lemma iFROM_pred_insert_conv: "insert (n - Suc 0) [n\<dots>] = [n - Suc 0\<dots>]"
by (fastforce simp: iFROM_iff)
lemma iIN_pred_insert_conv: "
0 < n \<Longrightarrow> insert (n - Suc 0) [n\<dots>,d] = [n - Suc 0\<dots>,Suc d]"
by (fastforce simp: iIN_iff)
lemma iMOD_pred_insert_conv: "
m \<le> r \<Longrightarrow> insert (r - m) [r, mod m] = [r - m, mod m]"
apply (case_tac "m = 0")
apply (simp add: iMOD_iff insert_absorb)
apply simp
apply (rule set_eqI)
apply (simp add: iMOD_iff mod_diff_self2)
apply (rule iffI)
apply (erule disjE)
apply (simp add: mod_diff_self2)
apply (simp add: le_imp_diff_le)
apply (erule conjE)
apply (drule order_le_less[THEN iffD1, of "r-m"], erule disjE)
prefer 2
apply simp
apply (frule order_less_le_trans[of _ m r], assumption)
thm less_mod_eq_imp_add_divisor_le[of "r-m" x m]
apply (drule less_mod_eq_imp_add_divisor_le[of "r-m" _ m])
apply (simp add: mod_diff_self2)
apply simp
done
lemma iMODb_pred_insert_conv: "
m \<le> r \<Longrightarrow> insert (r - m) [r, mod m, c] = [r - m, mod m, Suc c]"
apply (rule set_eqI)
apply (frule iMOD_pred_insert_conv)
apply (drule_tac f="\<lambda>s. x \<in> s" in arg_cong)
apply (force simp: iMOD_iff iMODb_iff)
done
lemma iFROM_Suc_pred_insert_conv: "insert n [Suc n\<dots>] = [n\<dots>]"
by (insert iFROM_pred_insert_conv[of "Suc n"], simp)
lemma iIN_Suc_pred_insert_conv: "insert n [Suc n\<dots>,d] = [n\<dots>,Suc d]"
by (insert iIN_pred_insert_conv[of "Suc n"], simp)
lemma iMOD_Suc_pred_insert_conv: "insert r [r + m, mod m] = [r, mod m]"
by (insert iMOD_pred_insert_conv[of m "r + m"], simp)
lemma iMODb_Suc_pred_insert_conv: "insert r [r + m, mod m, c] = [r, mod m, Suc c]"
by (insert iMODb_pred_insert_conv[of m "r + m"], simp)
lemmas iT_Suc_insert =
iIN_Suc_insert_conv
iTILL_Suc_insert_conv
iMODb_Suc_insert_conv
lemmas iT_pred_insert =
iFROM_pred_insert_conv
iIN_pred_insert_conv
iMOD_pred_insert_conv
iMODb_pred_insert_conv
lemmas iT_Suc_pred_insert =
iFROM_Suc_pred_insert_conv
iIN_Suc_pred_insert_conv
iMOD_Suc_pred_insert_conv
iMODb_Suc_pred_insert_conv
lemma iMOD_mem_diff: "\<lbrakk> a \<in> [r, mod m]; b \<in> [r, mod m] \<rbrakk> \<Longrightarrow> (a - b) mod m = 0"
by (simp add: iMOD_iff mod_eq_imp_diff_mod_0)
lemma iMODb_mem_diff: "\<lbrakk> a \<in> [r, mod m, c]; b \<in> [r, mod m, c] \<rbrakk> \<Longrightarrow> (a - b) mod m = 0"
by (simp add: iMODb_iff mod_eq_imp_diff_mod_0)
subsubsection {* Interval conversions *}
lemma iIN_0_iTILL_conv:"[0\<dots>,n] = [\<dots>n]"
by (simp add: iTILL_def iIN_def atMost_atLeastAtMost_0_conv)
lemma iIN_iTILL_iTILL_conv: "0 < n \<Longrightarrow> [n\<dots>,d] = [\<dots>n+d] - [\<dots>n - Suc 0]"
by (fastforce simp: iTILL_iff iIN_iff)
lemma iIN_iFROM_iTILL_conv: "[n\<dots>,d] = [n\<dots>] \<inter> [\<dots>n+d]"
by (simp add: iT_defs atLeastAtMost_def)
lemma iMODb_iMOD_iTILL_conv: "[r, mod m, c] = [r, mod m] \<inter> [\<dots>r+m*c]"
by (force simp: iT_defs set_interval_defs)
lemma iMODb_iMOD_iIN_conv: "[r, mod m, c] = [r, mod m] \<inter> [r\<dots>,m*c]"
by (force simp: iT_defs set_interval_defs)
lemma iFROM_iTILL_iIN_conv: "n \<le> n' \<Longrightarrow> [n\<dots>] \<inter> [\<dots>n'] = [n\<dots>,n'-n]"
by (simp add: iT_defs atLeastAtMost_def)
lemma iMOD_iTILL_iMODb_conv: "
r \<le> n \<Longrightarrow> [r, mod m] \<inter> [\<dots>n] = [r, mod m, (n - r) div m]"
apply (rule set_eqI)
apply (simp add: iT_iff mult_div_cancel)
apply (rule iffI)
apply clarify
thm le_imp_sub_mod_le
apply (frule_tac x=x and y=n and m=m in le_imp_sub_mod_le)
apply (simp add: mod_diff_right_eq)
apply fastforce
done
lemma iMOD_iIN_iMODb_conv: "
[r, mod m] \<inter> [r\<dots>,d] = [r, mod m, d div m]"
apply (case_tac "r = 0")
thm iMOD_iTILL_iMODb_conv
apply (simp add: iIN_0_iTILL_conv iMOD_iTILL_iMODb_conv)
apply (simp add: iIN_iTILL_iTILL_conv Diff_Int_distrib iMOD_iTILL_iMODb_conv diff_add_inverse)
thm subst[of "{}" _ "\<lambda>t. \<forall>x.(x - t) = x"]
thm subst[of "{}" _ "\<lambda>t. \<forall>x.(x - t) = x", THEN spec]
apply (rule subst[of "{}" _ "\<lambda>t. \<forall>x.(x - t) = x", THEN spec])
prefer 2
apply simp
apply (rule sym)
thm disjoint_iff_not_equal
apply (fastforce simp: disjoint_iff_not_equal iMOD_iff iTILL_iff)
done
thm UNIV_def
lemma iFROM_0: "[0\<dots>] = UNIV"
by (simp add: iFROM_def)
lemma iMOD_1: "[r, mod Suc 0] = [r\<dots>]"
by (fastforce simp: iFROM_iff)
lemma iMODb_mod_1: "[r, mod Suc 0, c] = [r\<dots>,c]"
by (fastforce simp: iT_iff)
subsubsection {* Finiteness and emptiness of intervals *}
lemma
iFROM_not_empty: "[n\<dots>] \<noteq> {}" and
iTILL_not_empty: "[\<dots>n] \<noteq> {}" and
iIN_not_empty: "[n\<dots>,d] \<noteq> {}" and
iMOD_not_empty: "[r, mod m] \<noteq> {}" and
iMODb_not_empty: "[r, mod m, c] \<noteq> {}"
by (fastforce simp: iT_iff)+
lemmas iT_not_empty =
iFROM_not_empty
iTILL_not_empty
iIN_not_empty
iMOD_not_empty
iMODb_not_empty
thm iT_not_empty
lemma
iTILL_finite: "finite [\<dots>n]" and
iIN_finite: "finite [n\<dots>,d]" and
iMODb_finite: "finite [r, mod m, c]" and
iMOD_0_finite: "finite [r, mod 0]"
by (simp add: iT_defs)+
lemma iFROM_infinite: "infinite [n\<dots>]"
by (simp add: iT_defs infinite_atLeast)
lemma iMOD_infinite: "0 < m \<Longrightarrow> infinite [r, mod m]"
thm infinite_nat_iff_asc_chain
apply (rule infinite_nat_iff_asc_chain[THEN iffD2])
apply (rule iT_not_empty)
apply (rule ballI, rename_tac n)
apply (rule_tac x="n+m" in bexI, simp)
apply (simp add: iMOD_iff)
done
lemmas iT_finite =
iTILL_finite
iIN_finite
iMODb_finite iMOD_0_finite
thm iT_finite
lemmas iT_infinite =
iFROM_infinite
iMOD_infinite
thm iT_infinite
thm
iMax_finite_conv
iMax_infinite_conv
subsubsection {* @{text Min} and @{text Max} element of an interval *}
lemma
iTILL_Min: "iMin [\<dots>n] = 0" and
iFROM_Min: "iMin [n\<dots>] = n" and
iIN_Min: "iMin [n\<dots>,d] = n" and
iMOD_Min: "iMin [r, mod m] = r" and
iMODb_Min: "iMin [r, mod m, c] = r"
thm iMin_equality
by (rule iMin_equality, (simp add: iT_iff)+)+
lemmas iT_Min =
iIN_Min
iTILL_Min
iFROM_Min
iMOD_Min
iMODb_Min
thm iT_Min
lemma
iTILL_Max: "Max [\<dots>n] = n" and
iIN_Max: "Max [n\<dots>,d] = n+d" and
iMODb_Max: "Max [r, mod m, c] = r + m * c" and
iMOD_0_Max: "Max [r, mod 0] = r"
by (rule Max_equality, (simp add: iT_iff iT_finite)+)+
lemmas iT_Max =
iTILL_Max
iIN_Max
iMODb_Max
iMOD_0_Max
thm iT_Max
lemma
iTILL_iMax: "iMax [\<dots>n] = enat n" and
iIN_iMax: "iMax [n\<dots>,d] = enat (n+d)" and
iMODb_iMax: "iMax [r, mod m, c] = enat (r + m * c)" and
iMOD_0_iMax: "iMax [r, mod 0] = enat r" and
iFROM_iMax: "iMax [n\<dots>] = \<infinity>" and
iMOD_iMax: "0 < m \<Longrightarrow> iMax [r, mod m] = \<infinity>"
by (simp add: iMax_def iT_finite iT_infinite iT_Max)+
lemmas iT_iMax =
iTILL_iMax
iIN_iMax
iMODb_iMax
iMOD_0_iMax
iFROM_iMax
iMOD_iMax
thm iT_iMax
subsection {* Adding and subtracting constants to interval elements *}
lemma
iFROM_plus: "x \<in> [n\<dots>] \<Longrightarrow> x + k \<in> [n\<dots>]" and
iFROM_Suc: "x \<in> [n\<dots>] \<Longrightarrow> Suc x \<in> [n\<dots>]" and
iFROM_minus: "\<lbrakk> x \<in> [n\<dots>]; k \<le> x - n \<rbrakk> \<Longrightarrow> x - k \<in> [n\<dots>]" and
iFROM_pred: "n < x \<Longrightarrow> x - Suc 0 \<in> [n\<dots>]"
by (simp add: iFROM_iff)+
lemma
iTILL_plus: "\<lbrakk> x \<in> [\<dots>n]; k \<le> n - x \<rbrakk> \<Longrightarrow> x + k \<in> [\<dots>n]" and
iTILL_Suc: "x < n \<Longrightarrow> Suc x \<in> [\<dots>n]" and
iTILL_minus: "x \<in> [\<dots>n] \<Longrightarrow> x - k \<in> [\<dots>n]" and
iTILL_pred: "x \<in> [\<dots>n] \<Longrightarrow> x - Suc 0 \<in> [\<dots>n]"
by (simp add: iTILL_iff)+
lemma iIN_plus: "\<lbrakk> x \<in> [n\<dots>,d]; k \<le> n + d - x \<rbrakk> \<Longrightarrow> x + k \<in> [n\<dots>,d]"
by (fastforce simp: iIN_iff)
lemma iIN_Suc: "\<lbrakk> x \<in> [n\<dots>,d]; x < n + d \<rbrakk> \<Longrightarrow> Suc x \<in> [n\<dots>,d]"
by (simp add: iIN_iff)
lemma iIN_minus: "\<lbrakk> x \<in> [n\<dots>,d]; k \<le> x - n \<rbrakk> \<Longrightarrow> x - k \<in> [n\<dots>,d]"
by (fastforce simp: iIN_iff)
lemma iIN_pred: "\<lbrakk> x \<in> [n\<dots>,d]; n < x \<rbrakk> \<Longrightarrow> x - Suc 0 \<in> [n\<dots>,d]"
by (fastforce simp: iIN_iff)
lemma iMOD_plus_divisor_mult: "x \<in> [r, mod m] \<Longrightarrow> x + k * m \<in> [r, mod m]"
by (simp add: iMOD_def)
corollary iMOD_plus_divisor: "x \<in> [r, mod m] \<Longrightarrow> x + m \<in> [r, mod m]"
by (simp add: iMOD_def)
lemma iMOD_minus_divisor_mult: "
\<lbrakk> x \<in> [r, mod m]; k * m \<le> x - r \<rbrakk> \<Longrightarrow> x - k * m \<in> [r, mod m]"
thm mod_diff_mult_self1
by (fastforce simp: iMOD_def mod_diff_mult_self1)
corollary iMOD_minus_divisor_mult2: "
\<lbrakk> x \<in> [r, mod m]; k \<le> (x - r) div m \<rbrakk> \<Longrightarrow> x - k * m \<in> [r, mod m]"
apply (rule iMOD_minus_divisor_mult, assumption)
apply (clarsimp simp: iMOD_iff)
apply (drule mult_le_mono1[of _ _ m])
thm mod_0_div_mult_cancel[THEN iffD1, OF mod_eq_imp_diff_mod_0]
apply (simp add: mod_0_div_mult_cancel[THEN iffD1, OF mod_eq_imp_diff_mod_0])
done
corollary iMOD_minus_divisor: "
\<lbrakk> x \<in> [r, mod m]; m + r \<le> x \<rbrakk> \<Longrightarrow> x - m \<in> [r, mod m]"
apply (frule iMOD_geD)
thm iMOD_minus_divisor_mult[of x r m 1]
apply (insert iMOD_minus_divisor_mult[of x r m 1])
apply simp
done
lemma iMOD_plus: "
x \<in> [r, mod m] \<Longrightarrow> (x + k \<in> [r, mod m]) = (k mod m = 0)"
apply safe
apply (drule iMOD_modD)+
thm mod_add_eq_imp_mod_0[THEN iffD1]
apply (rule mod_add_eq_imp_mod_0[of x, THEN iffD1])
apply simp
apply (simp add: mult.commute iMOD_plus_divisor_mult)
done
corollary iMOD_Suc: "
x \<in> [r, mod m] \<Longrightarrow> (Suc x \<in> [r, mod m]) = (m = Suc 0)"
apply (simp add: iMOD_iff, safe)
apply (simp add: mod_Suc, split split_if_asm)
apply simp+
done
lemma iMOD_minus: "
\<lbrakk> x \<in> [r, mod m]; k \<le> x - r \<rbrakk> \<Longrightarrow> (x - k \<in> [r, mod m]) = (k mod m = 0)"
apply safe
apply (clarsimp simp: iMOD_iff)
apply (rule mod_add_eq_imp_mod_0[of "x - k" k, THEN iffD1])
apply simp
apply (simp add: mult.commute iMOD_minus_divisor_mult)
done
corollary iMOD_pred: "
\<lbrakk> x \<in> [r, mod m]; r < x \<rbrakk> \<Longrightarrow> (x - Suc 0 \<in> [r, mod m]) = (m = Suc 0)"
apply safe
thm iMOD_Suc[of "x - Suc 0", THEN iffD1]
apply (simp add: iMOD_Suc[of "x - Suc 0" r, THEN iffD1])
apply (simp add: iMOD_iff)
done
lemma iMODb_plus_divisor_mult: "
\<lbrakk> x \<in> [r, mod m, c]; k * m \<le> r + m * c - x \<rbrakk> \<Longrightarrow> x + k * m \<in> [r, mod m, c]"
by (fastforce simp: iMODb_def)
lemma iMODb_plus_divisor_mult2: "
\<lbrakk> x \<in> [r, mod m, c]; k \<le> c - (x - r) div m \<rbrakk> \<Longrightarrow>
x + k * m \<in> [r, mod m, c]"
apply (rule iMODb_plus_divisor_mult, assumption)
apply (clarsimp simp: iMODb_iff)
apply (drule mult_le_mono1[of _ _ m])
apply (simp add: diff_mult_distrib
mod_0_div_mult_cancel[THEN iffD1, OF mod_eq_imp_diff_mod_0]
add.commute[of r] mult.commute[of c])
done
lemma iMODb_plus_divisor: "
\<lbrakk> x \<in> [r, mod m, c]; x < r + m * c \<rbrakk> \<Longrightarrow> x + m \<in> [r, mod m, c]"
thm less_mod_eq_imp_add_divisor_le
by (simp add: iMODb_iff less_mod_eq_imp_add_divisor_le)
lemma iMODb_minus_divisor_mult: "
\<lbrakk> x \<in> [r, mod m, c]; r + k * m \<le> x \<rbrakk> \<Longrightarrow> x - k * m \<in> [r, mod m, c]"
thm mod_diff_mult_self1
by (fastforce simp: iMODb_def mod_diff_mult_self1)
lemma iMODb_plus: "
\<lbrakk> x \<in> [r, mod m, c]; k \<le> r + m * c - x \<rbrakk> \<Longrightarrow>
(x + k \<in> [r, mod m, c]) = (k mod m = 0)"
apply safe
thm mod_add_eq_imp_mod_0[THEN iffD1]
apply (rule mod_add_eq_imp_mod_0[of x, THEN iffD1])
apply (simp add: iT_iff)
apply fastforce
done
corollary iMODb_Suc: "
\<lbrakk> x \<in> [r, mod m, c]; x < r + m * c \<rbrakk> \<Longrightarrow>
(Suc x \<in> [r, mod m, c]) = (m = Suc 0)"
apply (rule iffI)
apply (simp add: iMODb_iMOD_iTILL_conv iMOD_Suc)
apply (simp add: iMODb_iMOD_iTILL_conv iMOD_1 iFROM_Suc iTILL_Suc)
done
lemma iMODb_minus: "
\<lbrakk> x \<in> [r, mod m, c]; k \<le> x - r \<rbrakk> \<Longrightarrow>
(x - k \<in> [r, mod m, c]) = (k mod m = 0)"
apply (rule iffI)
apply (simp add: iMODb_iMOD_iTILL_conv iMOD_minus)
apply (simp add: iMODb_iMOD_iTILL_conv iMOD_minus iTILL_minus)
done
corollary iMODb_pred: "
\<lbrakk> x \<in> [r, mod m, c]; r < x \<rbrakk> \<Longrightarrow>
(x - Suc 0 \<in> [r, mod m, c]) = (m = Suc 0)"
apply (rule iffI)
thm iMOD_pred[THEN iffD1, of x r m]
apply (subgoal_tac "x \<in> [r, mod m] \<and> x - Suc 0 \<in> [r, mod m]")
prefer 2
apply (simp add: iT_iff)
apply (clarsimp simp: iMOD_pred)
apply (fastforce simp add: iMODb_iff)
done
lemmas iFROM_plus_minus =
iFROM_plus
iFROM_Suc
iFROM_minus
iFROM_pred
thm iFROM_plus_minus
lemmas iTILL_plus_minus =
iTILL_plus
iTILL_Suc
iTILL_minus
iTILL_pred
thm iTILL_plus_minus
lemmas iIN_plus_minus =
iIN_plus
iIN_Suc
iIN_minus
iIN_pred
thm iIN_plus_minus
lemmas iMOD_plus_minus_divisor =
iMOD_plus_divisor_mult
iMOD_plus_divisor
iMOD_minus_divisor_mult
iMOD_minus_divisor_mult2
iMOD_minus_divisor
thm iMOD_plus_minus_divisor
lemmas iMOD_plus_minus =
iMOD_plus
iMOD_Suc
iMOD_minus
iMOD_pred
thm iMOD_plus_minus
lemmas iMODb_plus_minus_divisor =
iMODb_plus_divisor_mult
iMODb_plus_divisor_mult2
iMODb_plus_divisor
iMODb_minus_divisor_mult
thm iMODb_plus_minus_divisor
lemmas iMODb_plus_minus =
iMODb_plus
iMODb_Suc
iMODb_minus
iMODb_pred
thm iMODb_plus_minus
lemmas iT_plus_minus =
iFROM_plus_minus
iTILL_plus_minus
iIN_plus_minus
iMOD_plus_minus_divisor
iMOD_plus_minus
iMODb_plus_minus_divisor
iMODb_plus_minus
thm iT_plus_minus
(*
lemma "a \<in> [3\<dots>,2] \<Longrightarrow> 3 \<le> a \<and> a \<le> 5"
by (simp add: iT_iff)
lemma "15 \<in> [5, mod 10]"
by (simp add: iT_iff)
lemma "n \<in> [15, mod 10] \<Longrightarrow> n \<in> [5, mod 10]"
by (simp add: iT_iff)
lemma "[15, mod 10] \<subseteq> [5, mod 10]"
by (fastforce simp: iMOD_def)
lemma "n \<le> i \<Longrightarrow> n \<in> [\<dots>i]"
by (simp add: iT_iff)
lemma "\<forall>n \<in> [\<dots>i]. n \<le> i"
by (simp add: iT_iff)
lemma "\<exists>n \<in> [2, mod 10].n \<notin> [12, mod 10]"
apply (simp add: iT_defs)
apply (rule_tac x=2 in exI)
apply simp
done
*)
subsection {* Relations between intervals *}
subsubsection {* Auxiliary lemmata *}
lemma Suc_in_imp_not_subset_iMOD: "
\<lbrakk> n \<in> S; Suc n \<in> S; m \<noteq> Suc 0 \<rbrakk> \<Longrightarrow> \<not> S \<subseteq> [r, mod m]"
thm iMOD_Suc[THEN iffD1]
by (blast intro: iMOD_Suc[THEN iffD1])
corollary Suc_in_imp_neq_iMOD: "
\<lbrakk> n \<in> S; Suc n \<in> S; m \<noteq> Suc 0 \<rbrakk> \<Longrightarrow> S \<noteq> [r, mod m]"
by (blast dest: Suc_in_imp_not_subset_iMOD)
lemma Suc_in_imp_not_subset_iMODb: "
\<lbrakk> n \<in> S; Suc n \<in> S; m \<noteq> Suc 0 \<rbrakk> \<Longrightarrow> \<not> S \<subseteq> [r, mod m, c]"
apply (rule ccontr, simp)
apply (frule subsetD[of _ _ n], assumption)
apply (drule subsetD[of _ _ "Suc n"], assumption)
thm iMODb_Suc[THEN iffD1]
apply (frule iMODb_Suc[THEN iffD1])
apply (drule iMODb_leD[of "Suc n"])
apply simp
apply blast+
done
corollary Suc_in_imp_neq_iMODb: "
\<lbrakk> n \<in> S; Suc n \<in> S; m \<noteq> Suc 0 \<rbrakk> \<Longrightarrow> S \<noteq> [r, mod m, c]"
by (blast dest: Suc_in_imp_not_subset_iMODb)
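(* e.g. [3\<dots>,1] = {3, 4} contains two successive elements, hence it fits
   into no [r, mod m] with m \<noteq> Suc 0 (sketch):
lemma "\<not> [3\<dots>,1] \<subseteq> [r, mod 2]"
apply (rule Suc_in_imp_not_subset_iMOD[of 3])
apply (simp add: iT_iff)+
done
*)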
subsubsection {* Subset relation between intervals *}
lemma
iIN_iFROM_subset_same: "[n\<dots>,d] \<subseteq> [n\<dots>]" and
iIN_iTILL_subset_same: "[n\<dots>,d] \<subseteq> [\<dots>n+d]" and
iMOD_iFROM_subset_same: "[r, mod m] \<subseteq> [r\<dots>]" and
iMODb_iTILL_subset_same: "[r, mod m, c] \<subseteq> [\<dots>r+m*c]" and
iMODb_iIN_subset_same: "[r, mod m, c] \<subseteq> [r\<dots>,m*c]" and
iMODb_iMOD_subset_same: "[r, mod m, c] \<subseteq> [r, mod m]"
by (simp add: subset_iff iT_iff)+
lemmas iT_subset_same =
iIN_iFROM_subset_same
iIN_iTILL_subset_same
iMOD_iFROM_subset_same
iMODb_iTILL_subset_same
iMODb_iIN_subset_same
iMODb_iMOD_subset_same
thm iT_subset_same
lemma iMODb_imp_iMOD: "x \<in> [r, mod m, c] \<Longrightarrow> x \<in> [r, mod m]"
by (blast intro: iMODb_iMOD_subset_same)
lemma iMOD_imp_iMODb: "
\<lbrakk> x \<in> [r, mod m]; x \<le> r + m * c \<rbrakk> \<Longrightarrow> x \<in> [r, mod m, c]"
by (simp add: iT_iff)
lemma iMOD_singleton_subset_conv: "([r, mod m] \<subseteq> {a}) = (r = a \<and> m = 0)"
apply (rule iffI)
apply (simp add: subset_singleton_conv iT_not_empty)
apply (simp add: set_eq_iff iT_iff)
apply (frule_tac x=r in spec, drule_tac x="r+m" in spec)
apply simp
apply (simp add: iMOD_0 iIN_0)
done
lemma iMOD_singleton_eq_conv: "([r, mod m] = {a}) = (r = a \<and> m = 0)"
apply (rule_tac t="[r, mod m] = {a}" and s="[r, mod m] \<subseteq> {a}" in subst)
apply (simp add: subset_singleton_conv iMOD_not_empty)
apply (simp add: iMOD_singleton_subset_conv)
done
lemma iMODb_singleton_subset_conv: "
([r, mod m, c] \<subseteq> {a}) = (r = a \<and> (m = 0 \<or> c = 0))"
apply (rule iffI)
apply (simp add: subset_singleton_conv iT_not_empty)
apply (simp add: set_eq_iff iT_iff)
apply (frule_tac x=r in spec, drule_tac x="r+m" in spec)
apply clarsimp
apply (fastforce simp: iMODb_0 iMODb_mod_0 iIN_0)
done
lemma iMODb_singleton_eq_conv: "
([r, mod m, c] = {a}) = (r = a \<and> (m = 0 \<or> c = 0))"
apply (rule_tac t="[r, mod m, c] = {a}" and s="[r, mod m, c] \<subseteq> {a}" in subst)
apply (simp add: subset_singleton_conv iMODb_not_empty)
apply (simp add: iMODb_singleton_subset_conv)
done
lemma iMODb_subset_imp_divisor_mod_0: "
\<lbrakk> 0 < c'; [r', mod m', c'] \<subseteq> [r, mod m, c] \<rbrakk> \<Longrightarrow> m' mod m = 0"
apply (simp add: subset_iff iMODb_iff)
apply (drule gr0_imp_self_le_mult1[of _ m'])
thm mod_add_eq_imp_mod_0[of r' m' m]
apply (rule mod_add_eq_imp_mod_0[of r' m' m, THEN iffD1])
apply (frule_tac x=r' in spec, drule_tac x="r'+m'" in spec)
apply simp
done
lemma iMOD_subset_imp_divisor_mod_0: "
[r', mod m'] \<subseteq> [r, mod m] \<Longrightarrow> m' mod m = 0"
apply (simp add: subset_iff iMOD_iff)
thm mod_add_eq_imp_mod_0[of r' m' m]
apply (rule mod_add_eq_imp_mod_0[of r' m' m, THEN iffD1])
apply simp
done
lemma iMOD_subset_imp_iMODb_subset: "
\<lbrakk> [r', mod m'] \<subseteq> [r, mod m]; r' + m' * c' \<le> r + m * c \<rbrakk> \<Longrightarrow>
[r', mod m', c'] \<subseteq> [r, mod m, c]"
by (simp add: subset_iff iT_iff)
lemma iMODb_subset_imp_iMOD_subset: "
\<lbrakk> [r', mod m', c'] \<subseteq> [r, mod m, c]; 0 < c' \<rbrakk> \<Longrightarrow>
[r', mod m'] \<subseteq> [r, mod m]"
thm subsetD
apply (frule subsetD[of _ _ r'])
apply (simp add: iMODb_iff)
thm subsetI
apply (rule subsetI)
apply (simp add: iMOD_iff iMODb_iff, clarify)
thm mod_eq_mod_0_imp_mod_eq
apply (drule mod_eq_mod_0_imp_mod_eq[where m=m and m'=m'])
thm iMODb_subset_imp_divisor_mod_0
apply (simp add: iMODb_subset_imp_divisor_mod_0)
apply simp
done
lemma iMODb_0_iMOD_subset_conv: "
([r', mod m', 0] \<subseteq> [r, mod m]) =
(r' mod m = r mod m \<and> r \<le> r')"
by (simp add: iMODb_0 iIN_0 singleton_subset_conv iMOD_iff)
lemma iFROM_subset_conv: "([n'\<dots>] \<subseteq> [n\<dots>]) = (n \<le> n')"
by (simp add: iFROM_def)
lemma iFROM_iMOD_subset_conv: "([n'\<dots>] \<subseteq> [r, mod m]) = (r \<le> n' \<and> m = Suc 0)"
apply (rule iffI)
apply (rule conjI)
thm iMin_subset[OF iFROM_not_empty]
apply (drule iMin_subset[OF iFROM_not_empty])
apply (simp add: iT_Min)
apply (rule ccontr)
thm Suc_in_imp_not_subset_iMOD
apply (cut_tac Suc_in_imp_not_subset_iMOD[of n' "[n'\<dots>]" m r])
apply (simp add: iT_iff)+
apply (simp add: subset_iff iT_iff)
done
lemma iIN_subset_conv: "([n'\<dots>,d'] \<subseteq> [n\<dots>,d]) = (n \<le> n' \<and> n'+d' \<le> n+d)"
apply (rule iffI)
apply (frule iMin_subset[OF iIN_not_empty])
apply (drule Max_subset[OF iIN_not_empty _ iIN_finite])
apply (simp add: iIN_Min iIN_Max)
apply (simp add: subset_iff iIN_iff)
done
lemma iIN_iFROM_subset_conv: "([n'\<dots>,d'] \<subseteq> [n\<dots>]) = (n \<le> n')"
by (fastforce simp: subset_iff iFROM_iff iIN_iff)
lemma iIN_iTILL_subset_conv: "([n'\<dots>,d'] \<subseteq> [\<dots>n]) = (n' + d' \<le> n)"
by (fastforce simp: subset_iff iT_iff)
lemma iIN_iMOD_subset_conv: "
0 < d' \<Longrightarrow> ([n'\<dots>,d'] \<subseteq> [r, mod m]) = (r \<le> n' \<and> m = Suc 0)"
apply (rule iffI)
apply (frule iMin_subset[OF iIN_not_empty])
apply (simp add: iT_Min)
apply (subgoal_tac "n' \<in> [n'\<dots>,d']")
prefer 2
apply (simp add: iIN_iff)
apply (rule ccontr)
thm Suc_in_imp_not_subset_iMOD
apply (frule Suc_in_imp_not_subset_iMOD[where r=r and m=m])
apply (simp add: iIN_Suc)+
apply (simp add: iMOD_1 iIN_iFROM_subset_conv)
done
lemma iIN_iMODb_subset_conv: "
0 < d' \<Longrightarrow>
([n'\<dots>,d'] \<subseteq> [r, mod m, c]) =
(r \<le> n' \<and> m = Suc 0 \<and> n' + d' \<le> r + m * c)"
apply (rule iffI)
thm subset_trans[OF _ iMODb_iMOD_subset_same]
apply (frule subset_trans[OF _ iMODb_iMOD_subset_same])
apply (simp add: iIN_iMOD_subset_conv iMODb_mod_1 iIN_subset_conv)
apply (clarsimp simp: iMODb_mod_1 iIN_subset_conv)
done
lemma iTILL_subset_conv: "([\<dots>n'] \<subseteq> [\<dots>n]) = (n' \<le> n)"
by (simp add: iTILL_def)
lemma iTILL_iFROM_subset_conv: "([\<dots>n'] \<subseteq> [n\<dots>]) = (n = 0)"
apply (rule iffI)
apply (drule subsetD[of _ _ 0])
apply (simp add: iT_iff)+
apply (simp add: iFROM_0)
done
lemma iTILL_iIN_subset_conv: "([\<dots>n'] \<subseteq> [n\<dots>,d]) = (n = 0 \<and> n' \<le> d)"
apply (rule iffI)
apply (frule iMin_subset[OF iTILL_not_empty])
apply (drule Max_subset[OF iTILL_not_empty _ iIN_finite])
apply (simp add: iT_Min iT_Max)
apply (simp add: iIN_0_iTILL_conv iTILL_subset_conv)
done
lemma iTILL_iMOD_subset_conv: "
0 < n' \<Longrightarrow> ([\<dots>n'] \<subseteq> [r, mod m]) = (r = 0 \<and> m = Suc 0)"
apply (drule iIN_iMOD_subset_conv[of n' 0 r m])
apply (simp add: iIN_0_iTILL_conv)
done
lemma iTILL_iMODb_subset_conv: "
0 < n' \<Longrightarrow> ([\<dots>n'] \<subseteq> [r, mod m, c]) = (r = 0 \<and> m = Suc 0 \<and> n' \<le> r + m * c)"
apply (drule iIN_iMODb_subset_conv[of n' 0 r m c])
apply (simp add: iIN_0_iTILL_conv)
done
lemma iMOD_iFROM_subset_conv: "([r', mod m'] \<subseteq> [n\<dots>]) = (n \<le> r')"
by (fastforce simp: subset_iff iT_iff)
lemma iMODb_iFROM_subset_conv: "([r', mod m', c'] \<subseteq> [n\<dots>]) = (n \<le> r')"
by (fastforce simp: subset_iff iT_iff)
lemma iMODb_iIN_subset_conv: "
([r', mod m', c'] \<subseteq> [n\<dots>,d]) = (n \<le> r' \<and> r' + m' * c' \<le> n + d)"
by (fastforce simp: subset_iff iT_iff)
lemma iMODb_iTILL_subset_conv: "
([r', mod m', c'] \<subseteq> [\<dots>n]) = (r' + m' * c' \<le> n)"
by (fastforce simp: subset_iff iT_iff)
lemma iMOD_0_subset_conv: "([r', mod 0] \<subseteq> [r, mod m]) = (r' mod m = r mod m \<and> r \<le> r')"
by (fastforce simp: iMOD_0 iIN_0 singleton_subset_conv iMOD_iff)
lemma iMOD_subset_conv: "0 < m \<Longrightarrow>
([r', mod m'] \<subseteq> [r, mod m]) =
(r' mod m = r mod m \<and> r \<le> r' \<and> m' mod m = 0)"
apply (rule iffI)
apply (frule subsetD[of _ _ r'])
apply (simp add: iMOD_iff)
apply (drule iMOD_subset_imp_divisor_mod_0)
apply (simp add: iMOD_iff)
apply (rule subsetI)
apply (simp add: iMOD_iff, elim conjE)
thm mod_eq_mod_0_imp_mod_eq
apply (drule mod_eq_mod_0_imp_mod_eq[where m'=m' and m=m])
apply simp+
done
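(* With iMOD_subset_conv, the subset example from the commented block
   further above becomes a pure computation (sketch):
lemma "[15, mod 10] \<subseteq> [5, mod 10]"
by (simp add: iMOD_subset_conv)
*)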
lemma iMODb_subset_mod_0_conv: "
([r', mod m', c'] \<subseteq> [r, mod 0, c ]) = (r'=r \<and> (m'=0 \<or> c'=0))"
by (simp add: iMODb_mod_0 iIN_0 iMODb_singleton_subset_conv)
lemma iMODb_subset_0_conv: "
([r', mod m', c'] \<subseteq> [r, mod m, 0 ]) = (r'=r \<and> (m'=0 \<or> c'=0))"
by (simp add: iMODb_0 iIN_0 iMODb_singleton_subset_conv)
lemma iMODb_0_subset_conv: "
([r', mod m', 0] \<subseteq> [r, mod m, c ]) = (r' \<in> [r, mod m, c])"
by (simp add: iMODb_0 iIN_0)
lemma iMODb_mod_0_subset_conv: "
([r', mod 0, c'] \<subseteq> [r, mod m, c ]) = (r' \<in> [r, mod m, c])"
by (simp add: iMODb_mod_0 iIN_0)
lemma iMODb_subset_conv': "\<lbrakk> 0 < c; 0 < c' \<rbrakk> \<Longrightarrow>
([r', mod m', c'] \<subseteq> [r, mod m, c]) =
(r' mod m = r mod m \<and> r \<le> r' \<and> m' mod m = 0 \<and>
r' + m' * c' \<le> r + m * c)"
apply (rule iffI)
thm iMODb_subset_imp_iMOD_subset
apply (frule iMODb_subset_imp_iMOD_subset, assumption)
apply (drule iMOD_subset_imp_divisor_mod_0)
apply (frule subsetD[OF _ iMinI_ex2[OF iMODb_not_empty]])
apply (drule Max_subset[OF iMODb_not_empty _ iMODb_finite])
apply (simp add: iMODb_iff iMODb_Min iMODb_Max)
apply (elim conjE)
thm iMOD_subset_imp_iMODb_subset
apply (case_tac "m = 0", simp add: iMODb_mod_0)
apply (simp add: iMOD_subset_imp_iMODb_subset iMOD_subset_conv)
done
lemma iMODb_iMOD_subset_conv: "0 < c' \<Longrightarrow>
([r', mod m', c'] \<subseteq> [r, mod m]) =
(r' mod m = r mod m \<and> r \<le> r' \<and> m' mod m = 0)"
apply (rule iffI)
thm subsetD[OF _ iMinI_ex2[OF iMODb_not_empty]]
apply (frule subsetD[OF _ iMinI_ex2[OF iMODb_not_empty]])
apply (simp add: iMODb_Min iMOD_iff, elim conjE)
apply (simp add: iMODb_iMOD_iTILL_conv)
apply (subgoal_tac "[ r', mod m', c' ] \<subseteq> [ r, mod m ] \<inter> [\<dots>r' + m' * c']")
prefer 2
apply (simp add: iMODb_iMOD_iTILL_conv)
thm iMOD_iTILL_iMODb_conv iMODb_subset_imp_divisor_mod_0
apply (simp add: iMOD_iTILL_iMODb_conv iMODb_subset_imp_divisor_mod_0)
thm subset_trans[OF iMODb_iMOD_subset_same]
apply (rule subset_trans[OF iMODb_iMOD_subset_same])
apply (case_tac "m = 0", simp)
apply (simp add: iMOD_subset_conv)
done
lemmas iT_subset_conv =
iFROM_subset_conv
iFROM_iMOD_subset_conv
iTILL_subset_conv
iTILL_iFROM_subset_conv
iTILL_iIN_subset_conv
iTILL_iMOD_subset_conv
iTILL_iMODb_subset_conv
iIN_subset_conv
iIN_iFROM_subset_conv
iIN_iTILL_subset_conv
iIN_iMOD_subset_conv
iIN_iMODb_subset_conv
iMOD_subset_conv
iMOD_iFROM_subset_conv
iMODb_subset_conv'
iMODb_subset_conv
iMODb_iFROM_subset_conv
iMODb_iIN_subset_conv
iMODb_iTILL_subset_conv
iMODb_iMOD_subset_conv
thm iT_subset_conv
lemma iFROM_subset: "n \<le> n' \<Longrightarrow> [n'\<dots>] \<subseteq> [n\<dots>]"
by (simp add: iFROM_subset_conv)
lemma not_iFROM_iIN_subset: "\<not> [n'\<dots>] \<subseteq> [n\<dots>,d]"
apply (rule ccontr, simp)
apply (drule subsetD[of _ _ "max n' (Suc (n + d))"])
apply (simp add: iFROM_iff)
apply (simp add: iIN_iff)
done
lemma not_iFROM_iTILL_subset: "\<not> [n'\<dots>] \<subseteq> [\<dots>n]"
by (simp add: iIN_0_iTILL_conv [symmetric] not_iFROM_iIN_subset)
lemma iIN_subset: "\<lbrakk> n \<le> n'; n' + d' \<le> n + d \<rbrakk> \<Longrightarrow> [n'\<dots>,d'] \<subseteq> [n\<dots>,d]"
by (simp add: iIN_subset_conv)
lemma iIN_iFROM_subset: "n \<le> n' \<Longrightarrow> [n'\<dots>,d'] \<subseteq> [n\<dots>]"
by (simp add: subset_iff iT_iff)
lemma iIN_iTILL_subset: "n' + d' \<le> n \<Longrightarrow> [n'\<dots>,d'] \<subseteq> [\<dots>n]"
by (simp add: iIN_0_iTILL_conv[symmetric] iIN_subset)
lemma not_iIN_iMODb_subset: "\<lbrakk> 0 < d'; m \<noteq> Suc 0 \<rbrakk> \<Longrightarrow> \<not> [n'\<dots>,d'] \<subseteq> [r, mod m, c]"
apply (rule Suc_in_imp_not_subset_iMODb[of n'])
apply (simp add: iIN_iff)+
done
lemma not_iIN_iMOD_subset: "\<lbrakk> 0 < d'; m \<noteq> Suc 0 \<rbrakk> \<Longrightarrow> \<not> [n'\<dots>,d'] \<subseteq> [r, mod m]"
apply (rule ccontr, simp)
apply (case_tac "r \<le> n' + d'")
thm iIN_iTILL_subset[OF order_refl]
thm Int_greatest[OF _ iIN_iTILL_subset[OF order_refl]]
apply (drule Int_greatest[OF _ iIN_iTILL_subset[OF order_refl]])
thm iMOD_iTILL_iMODb_conv not_iIN_iMODb_subset
apply (simp add: iMOD_iTILL_iMODb_conv not_iIN_iMODb_subset)
apply (drule subsetD[of _ _ "n'+d'"])
apply (simp add: iT_iff)+
done
lemma iTILL_subset: "n' \<le> n \<Longrightarrow> [\<dots>n'] \<subseteq> [\<dots>n]"
by (rule iTILL_subset_conv[THEN iffD2])
lemma iTILL_iFROM_subset: "([\<dots>n'] \<subseteq> [0\<dots>])"
by (simp add: iFROM_0)
lemma iTILL_iIN_subset: "n' \<le> d \<Longrightarrow> ([\<dots>n'] \<subseteq> [0\<dots>,d])"
by (simp add: iIN_0_iTILL_conv iTILL_subset)
thm not_iIN_iMOD_subset
lemma not_iTILL_iMOD_subset: "
\<lbrakk> 0 < n'; m \<noteq> Suc 0 \<rbrakk> \<Longrightarrow> \<not> [\<dots>n'] \<subseteq> [r, mod m]"
by (simp add: iIN_0_iTILL_conv[symmetric] not_iIN_iMOD_subset)
lemma not_iTILL_iMODb_subset: "
\<lbrakk> 0 < n'; m \<noteq> Suc 0 \<rbrakk> \<Longrightarrow> \<not> [\<dots>n'] \<subseteq> [r, mod m, c]"
by (simp add: iIN_0_iTILL_conv[symmetric] not_iIN_iMODb_subset)
lemma iMOD_iFROM_subset: "n \<le> r' \<Longrightarrow> [r', mod m'] \<subseteq> [n\<dots>]"
by (rule iMOD_iFROM_subset_conv[THEN iffD2])
lemma not_iMOD_iIN_subset: "0 < m' \<Longrightarrow> \<not> [r', mod m'] \<subseteq> [n\<dots>,d]"
by (rule infinite_not_subset_finite[OF iMOD_infinite iIN_finite])
lemma not_iMOD_iTILL_subset: "0 < m' \<Longrightarrow> \<not> [r', mod m'] \<subseteq> [\<dots>n]"
by (rule infinite_not_subset_finite[OF iMOD_infinite iTILL_finite])
thm iMOD_subset_conv
lemma iMOD_subset: "
\<lbrakk> r \<le> r'; r' mod m = r mod m; m' mod m = 0 \<rbrakk> \<Longrightarrow> [r', mod m'] \<subseteq> [r, mod m]"
apply (case_tac "m = 0", simp)
apply (simp add: iMOD_subset_conv)
done
lemma not_iMOD_iMODb_subset: "0 < m' \<Longrightarrow> \<not> [r', mod m'] \<subseteq> [r, mod m, c]"
by (rule infinite_not_subset_finite[OF iMOD_infinite iMODb_finite])
lemma iMODb_iFROM_subset: "n \<le> r' \<Longrightarrow> [r', mod m', c'] \<subseteq> [n\<dots>]"
thm iMODb_iFROM_subset_conv[THEN iffD2]
by (rule iMODb_iFROM_subset_conv[THEN iffD2])
lemma iMODb_iTILL_subset: "
r' + m' * c' \<le> n \<Longrightarrow> [r', mod m', c'] \<subseteq> [\<dots>n]"
by (rule iMODb_iTILL_subset_conv[THEN iffD2])
thm iMODb_iIN_subset_conv
lemma iMODb_iIN_subset: "
\<lbrakk> n \<le> r'; r' + m' * c' \<le> n + d \<rbrakk> \<Longrightarrow> [r', mod m', c'] \<subseteq> [n\<dots>,d]"
by (simp add: iMODb_iIN_subset_conv)
thm iMODb_iMOD_subset_conv
lemma iMODb_iMOD_subset: "
\<lbrakk> r \<le> r'; r' mod m = r mod m; m' mod m = 0 \<rbrakk> \<Longrightarrow> [r', mod m', c'] \<subseteq> [r, mod m]"
apply (case_tac "c' = 0")
apply (simp add: iMODb_0 iIN_0 iMOD_iff)
thm iMODb_iMOD_subset_conv
apply (simp add: iMODb_iMOD_subset_conv)
done
thm iMODb_subset_conv
lemma iMODb_subset: "
\<lbrakk> r \<le> r'; r' mod m = r mod m; m' mod m = 0; r' + m' * c' \<le> r + m * c \<rbrakk> \<Longrightarrow>
[r', mod m', c'] \<subseteq> [r, mod m, c]"
apply (case_tac "m' = 0")
apply (simp add: iMODb_mod_0 iIN_0 iMODb_iff)
apply (case_tac "c' = 0")
apply (simp add: iMODb_0 iIN_0 iMODb_iff)
apply (simp add: iMODb_subset_conv)
done
lemma iFROM_trans: "\<lbrakk> y \<in> [x\<dots>]; z \<in> [y\<dots>] \<rbrakk> \<Longrightarrow> z \<in> [x\<dots>]"
by (rule subsetD[OF iFROM_subset[OF iFROM_D]])
lemma iTILL_trans: "\<lbrakk> y \<in> [\<dots>x]; z \<in> [\<dots>y] \<rbrakk> \<Longrightarrow> z \<in> [\<dots>x]"
by (rule subsetD[OF iTILL_subset[OF iTILL_D]])
thm iIN_subset
lemma iIN_trans: "
\<lbrakk> y \<in> [x\<dots>,d]; z \<in> [y\<dots>,d']; d' \<le> x + d - y \<rbrakk> \<Longrightarrow> z \<in> [x\<dots>,d]"
by fastforce
lemma iMOD_trans: "
\<lbrakk> y \<in> [x, mod m]; z \<in> [y, mod m] \<rbrakk> \<Longrightarrow> z \<in> [x, mod m]"
by (rule subsetD[OF iMOD_subset[OF iMOD_geD iMOD_modD mod_self]])
lemma iMODb_trans: "
\<lbrakk> y \<in> [x, mod m, c]; z \<in> [y, mod m, c']; m * c' \<le> x + m * c - y \<rbrakk> \<Longrightarrow>
z \<in> [x, mod m, c]"
by fastforce
lemma iMODb_trans': "
\<lbrakk> y \<in> [x, mod m, c]; z \<in> [y, mod m, c']; c' \<le> x div m + c - y div m \<rbrakk> \<Longrightarrow>
z \<in> [x, mod m, c]"
apply (rule iMODb_trans[where c'=c'], assumption+)
apply (frule iMODb_geD, frule div_le_mono[of x y m])
apply (simp add: add.commute[of _ c] add.commute[of _ "m*c"])
apply (drule mult_le_mono[OF le_refl, of _ _ m])
apply (simp add: add_mult_distrib2 diff_mult_distrib2 mult_div_cancel)
apply (simp add: iMODb_iff)
done
subsubsection {* Equality of intervals *}
lemma iFROM_eq_conv: "([n\<dots>] = [n'\<dots>]) = (n = n')"
apply (rule iffI)
apply (drule set_eq_subset[THEN iffD1])
apply (simp add: iFROM_subset_conv)
apply simp
done
lemma iIN_eq_conv: "([n\<dots>,d] = [n'\<dots>,d']) = (n = n' \<and> d = d')"
apply (rule iffI)
apply (drule set_eq_subset[THEN iffD1])
apply (simp add: iIN_subset_conv)
apply simp
done
lemma iTILL_eq_conv: "([\<dots>n] = [\<dots>n']) = (n = n')"
thm iIN_eq_conv[of 0 n 0 n']
by (simp add: iIN_0_iTILL_conv[symmetric] iIN_eq_conv)
thm iMOD_singleton_eq_conv
lemma iMOD_0_eq_conv: "([r, mod 0] = [r', mod m']) = (r = r' \<and> m' = 0)"
apply (simp add: iMOD_0 iIN_0)
thm iMOD_singleton_eq_conv
apply (simp add: iMOD_singleton_eq_conv eq_sym_conv[of "{r}"] eq_sym_conv[of "r"])
done
lemma iMOD_eq_conv: "0 < m \<Longrightarrow> ([r, mod m] = [r', mod m']) = (r = r' \<and> m = m')"
apply (case_tac "m' = 0")
apply (simp add: eq_sym_conv[of "[r, mod m]"] iMOD_0_eq_conv)
apply (rule iffI)
apply (fastforce simp add: set_eq_subset iMOD_subset_conv)
apply simp
done
thm iMODb_singleton_eq_conv
lemma iMODb_mod_0_eq_conv: "
([r, mod 0, c] = [r', mod m', c']) = (r = r' \<and> (m' = 0 \<or> c' = 0))"
apply (simp add: iMODb_mod_0 iIN_0)
apply (fastforce simp: iMODb_singleton_eq_conv eq_sym_conv[of "{r}"])
done
lemma iMODb_0_eq_conv: "
([r, mod m, 0] = [r', mod m', c']) = (r = r' \<and> (m' = 0 \<or> c' = 0))"
apply (simp add: iMODb_0 iIN_0)
apply (fastforce simp: iMODb_singleton_eq_conv eq_sym_conv[of "{r}"])
done
lemma iMODb_eq_conv: "\<lbrakk> 0 < m; 0 < c \<rbrakk> \<Longrightarrow>
([r, mod m, c] = [r', mod m', c']) = (r = r' \<and> m = m' \<and> c = c')"
apply (case_tac "c' = 0")
apply (simp add: iMODb_0 iIN_0 iMODb_singleton_eq_conv)
apply (rule iffI)
apply (fastforce simp: set_eq_subset iMODb_subset_conv')
apply simp
done
lemma iMOD_iFROM_eq_conv: "([n\<dots>] = [r, mod m]) = (n = r \<and> m = Suc 0)"
by (fastforce simp: iMOD_1[symmetric] iMOD_eq_conv)
thm iMODb_singleton_eq_conv
lemma iMODb_iIN_0_eq_conv: "
([n\<dots>,0] = [r, mod m, c]) = (n = r \<and> (m = 0 \<or> c = 0))"
by (simp add: iIN_0 eq_commute[of "{n}"] eq_commute[of n] iMODb_singleton_eq_conv)
lemma iMODb_iIN_eq_conv: "
0 < d \<Longrightarrow> ([n\<dots>,d] = [r, mod m, c]) = (n = r \<and> m = Suc 0 \<and> c = d)"
by (fastforce simp: iMODb_mod_1[symmetric] iMODb_eq_conv)
subsubsection {* Inequality of intervals *}
lemma iFROM_iIN_neq: "[n'\<dots>] \<noteq> [n\<dots>,d]"
apply (rule ccontr)
apply (insert iFROM_infinite[of n'], insert iIN_finite[of n d])
apply simp
done
corollary iFROM_iTILL_neq: "[n'\<dots>] \<noteq> [\<dots>n]"
by (simp add: iIN_0_iTILL_conv[symmetric] iFROM_iIN_neq)
corollary iFROM_iMOD_neq: "m \<noteq> Suc 0 \<Longrightarrow> [n\<dots>] \<noteq> [r, mod m]"
apply (subgoal_tac "n \<in> [n\<dots>]")
prefer 2
apply (simp add: iFROM_iff)
apply (simp add: Suc_in_imp_neq_iMOD iFROM_Suc)
done
corollary iFROM_iMODb_neq: "[n\<dots>] \<noteq> [r, mod m, c]"
apply (rule ccontr)
apply (insert iMODb_finite[of r m c], insert iFROM_infinite[of n])
apply simp
done
corollary iIN_iMOD_neq: "0 < m \<Longrightarrow> [n\<dots>,d] \<noteq> [r, mod m]"
apply (rule ccontr)
apply (insert iMOD_infinite[of m r], insert iIN_finite[of n d])
apply simp
done
corollary iIN_iMODb_neq2: "\<lbrakk> m \<noteq> Suc 0; 0 < d \<rbrakk> \<Longrightarrow> [n\<dots>,d] \<noteq> [r, mod m, c]"
apply (subgoal_tac "n \<in> [n\<dots>,d]")
prefer 2
apply (simp add: iIN_iff)
apply (simp add: Suc_in_imp_neq_iMODb iIN_Suc)
done
lemma iIN_iMODb_neq: "\<lbrakk> 2 \<le> m; 0 < c \<rbrakk> \<Longrightarrow> [n\<dots>,d] \<noteq> [r, mod m, c]"
apply (simp add: nat_ge2_conv, elim conjE)
apply (case_tac "d=0")
thm iMODb_singleton_eq_conv
apply (rule not_sym)
apply (simp add: iIN_0 iMODb_singleton_eq_conv)
apply (simp add: iIN_iMODb_neq2)
done
lemma iTILL_iIN_neq: "0 < n \<Longrightarrow> [\<dots>n'] \<noteq> [n\<dots>,d]"
by (fastforce simp: set_eq_iff iT_iff)
corollary iTILL_iMOD_neq: "0 < m \<Longrightarrow> [\<dots>n] \<noteq> [r, mod m]"
by (simp add: iIN_0_iTILL_conv[symmetric] iIN_iMOD_neq)
corollary iTILL_iMODb_neq: "
\<lbrakk> m \<noteq> Suc 0; 0 < n \<rbrakk> \<Longrightarrow> [\<dots>n] \<noteq> [r, mod m, c]"
by (simp add: iIN_0_iTILL_conv[symmetric] iIN_iMODb_neq2)
lemma iMOD_iMODb_neq: "0 < m \<Longrightarrow> [r, mod m] \<noteq> [r', mod m', c']"
apply (rule ccontr)
apply (insert iMODb_finite[of r' m' c'], insert iMOD_infinite[of m r])
apply simp
done
lemmas iT_neq =
iFROM_iTILL_neq iFROM_iIN_neq iFROM_iMOD_neq iFROM_iMODb_neq
iTILL_iIN_neq iTILL_iMOD_neq iTILL_iMODb_neq
iIN_iMOD_neq iIN_iMODb_neq iIN_iMODb_neq2
iMOD_iMODb_neq
thm iT_neq
subsection {* Union and intersection of intervals *}
lemma iFROM_union': "[n\<dots>] \<union> [n'\<dots>] = [min n n'\<dots>]"
by (fastforce simp: iFROM_iff)
corollary iFROM_union: "n \<le> n' \<Longrightarrow> [n\<dots>] \<union> [n'\<dots>] = [n\<dots>]"
by (simp add: iFROM_union' min_eqL)
lemma iFROM_inter': "[n\<dots>] \<inter> [n'\<dots>] = [max n n'\<dots>]"
by (fastforce simp: iFROM_iff)
corollary iFROM_inter: "n' \<le> n \<Longrightarrow> [n\<dots>] \<inter> [n'\<dots>] = [n\<dots>]"
by (simp add: iFROM_inter' max_eqL)
lemma iTILL_union': "[\<dots>n] \<union> [\<dots>n'] = [\<dots>max n n']"
by (fastforce simp: iTILL_iff)
corollary iTILL_union: "n' \<le> n \<Longrightarrow> [\<dots>n] \<union> [\<dots>n'] = [\<dots>n]"
by (simp add: iTILL_union' max_eqL)
lemma iTILL_iFROM_union: "n \<le> n' \<Longrightarrow> [\<dots>n'] \<union> [n\<dots>] = UNIV"
by (fastforce simp: iT_iff)
lemma iTILL_inter': "[\<dots>n] \<inter> [\<dots>n'] = [\<dots>min n n']"
by (fastforce simp: iT_iff)
corollary iTILL_inter: "n \<le> n' \<Longrightarrow> [\<dots>n] \<inter> [\<dots>n'] = [\<dots>n]"
by (simp add: iTILL_inter' min_eqL)
text {*
Union and intersection results for @{text iIN} intervals are stated
only for the case that their intersection is not empty and
neither interval is a subset of the other,
for instance (a concrete instance is sketched after the lemma below):
n ........ n+d
    n' ........ n'+d'
*}
lemma iIN_union: "
\<lbrakk> n \<le> n'; n' \<le> Suc (n + d); n + d \<le> n' + d' \<rbrakk> \<Longrightarrow>
[n\<dots>,d] \<union> [n'\<dots>,d'] = [n\<dots>,n' - n + d'] "
by (fastforce simp add: iIN_iff)
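(* A concrete instance of iIN_union (sketch): [2\<dots>,4] = {2..6} and
   [5\<dots>,4] = {5..9} overlap, and their union is {2..9} = [2\<dots>,7].
lemma "[2\<dots>,4] \<union> [5\<dots>,4] = [2\<dots>,7]"
by (simp add: iIN_union)
*)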
(* The case of the second interval starting directly after the first one *)
lemma iIN_append_union: "
[n\<dots>,d] \<union> [n + d\<dots>,d'] = [n\<dots>,d + d']"
by (simp add: iIN_union)
lemma iIN_append_union_Suc: "
[n\<dots>,d] \<union> [Suc (n + d)\<dots>,d'] = [n\<dots>,Suc (d + d')]"
by (simp add: iIN_union)
lemma iIN_append_union_pred: "
0 < d \<Longrightarrow> [n\<dots>,d - Suc 0] \<union> [n + d\<dots>,d'] = [n\<dots>,d + d']"
by (simp add: iIN_union)
lemma iIN_iFROM_union: "
n' \<le> Suc (n + d) \<Longrightarrow> [n\<dots>,d] \<union> [n'\<dots>] = [min n n'\<dots>]"
by (fastforce simp: iIN_iff)
lemma iIN_iFROM_append_union: "
[n\<dots>,d] \<union> [n + d\<dots>] = [n\<dots>]"
by (simp add: iIN_iFROM_union min_eqL)
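(* The collection iT_union_append below also names Suc/pred variants for
   iIN/iFROM which are not present in this excerpt; the following
   statements and proofs are reconstructed by analogy with
   iIN_append_union_Suc and iIN_append_union_pred (sketch). *)
lemma iIN_iFROM_append_union_Suc: "
[n\<dots>,d] \<union> [Suc (n + d)\<dots>] = [n\<dots>]"
by (simp add: iIN_iFROM_union min_eqL)
lemma iIN_iFROM_append_union_pred: "
0 < d \<Longrightarrow> [n\<dots>,d - Suc 0] \<union> [n + d\<dots>] = [n\<dots>]"
by (simp add: iIN_iFROM_union min_eqL)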
lemma iIN_inter: "
\<lbrakk> n \<le> n'; n' \<le> n + d; n + d \<le> n' + d' \<rbrakk> \<Longrightarrow>
[n\<dots>,d] \<inter> [n'\<dots>,d'] = [n'\<dots>,n + d - n']"
by (fastforce simp: iIN_iff)
lemma iMOD_union: "
\<lbrakk> r \<le> r'; r mod m = r' mod m \<rbrakk> \<Longrightarrow>
[r, mod m] \<union> [r', mod m] = [r, mod m]"
by (fastforce simp: iT_iff)
lemma iMOD_union': "
r mod m = r' mod m \<Longrightarrow>
[r, mod m] \<union> [r', mod m] = [min r r', mod m]"
apply (case_tac "r \<le> r'")
apply (fastforce simp: iMOD_union min_eq)+
done
lemma iMOD_inter: "
\<lbrakk> r \<le> r'; r mod m = r' mod m \<rbrakk> \<Longrightarrow>
[r, mod m] \<inter> [r', mod m] = [r', mod m]"
by (fastforce simp: iT_iff)
lemma iMOD_inter': "
r mod m = r' mod m \<Longrightarrow>
[r, mod m] \<inter> [r', mod m] = [max r r', mod m]"
apply (case_tac "r \<le> r'")
apply (fastforce simp: iMOD_inter max_eq)+
done
lemma iMODb_union: "
\<lbrakk> r \<le> r'; r mod m = r' mod m; r' \<le> r + m * c; r + m * c \<le> r' + m * c' \<rbrakk> \<Longrightarrow>
[r, mod m, c] \<union> [r', mod m, c'] = [r, mod m, r' div m - r div m + c']"
apply (rule set_eqI)
apply (simp add: iMODb_iff)
apply (drule sym[of "r mod m"], simp)
apply (fastforce simp: add_mult_distrib2 diff_mult_distrib2 mult_div_cancel)
done
thm iMODb_iMOD_subset_same
thm Un_absorb1[OF iMODb_iMOD_subset_same]
lemma iMODb_append_union: "
[r, mod m, c] \<union> [ r + m * c, mod m, c'] = [r, mod m, c + c']"
thm iMODb_union[of r "r + m * c" m c c']
apply (insert iMODb_union[of r "r + m * c" m c c'])
apply (case_tac "m = 0")
apply (simp add: iMODb_mod_0)
apply simp
done
lemma iMODb_iMOD_append_union': "
\<lbrakk> r mod m = r' mod m; r' \<le> r + m * Suc c \<rbrakk> \<Longrightarrow>
[r, mod m, c] \<union> [ r', mod m ] = [min r r', mod m]"
apply (subgoal_tac "(min r r') mod m = r' mod m")
prefer 2
apply (simp add: min_def)
apply (rule set_eqI)
apply (simp add: iT_iff)
apply (drule sym[of "r mod m"], simp)
apply (rule iffI)
apply fastforce
apply (clarsimp simp: linorder_not_le)
apply (case_tac "r \<le> r'")
apply (simp add: min_eqL)
thm add_le_imp_le_right[of _ m]
apply (rule add_le_imp_le_right[of _ m])
thm less_mod_eq_imp_add_divisor_le
apply (rule less_mod_eq_imp_add_divisor_le)
apply simp+
done
lemma iMODb_iMOD_append_union: "
\<lbrakk> r \<le> r'; r mod m = r' mod m; r' \<le> r + m * Suc c \<rbrakk> \<Longrightarrow>
[r, mod m, c] \<union> [ r', mod m ] = [r, mod m]"
thm iMODb_iMOD_append_union'
by (simp add: iMODb_iMOD_append_union' min_eqL)
lemma iMODb_append_union_Suc: "
[r, mod m, c] \<union> [ r + m * Suc c, mod m, c'] = [r, mod m, Suc (c + c')]"
thm insert_absorb[of "r + m * c" "[r, mod m, c] \<union> [ r + m * Suc c, mod m, c']"]
apply (subst insert_absorb[of "r + m * c" "[r, mod m, c] \<union> [ r + m * Suc c, mod m, c']", symmetric])
apply (simp add: iT_iff)
apply (simp del: Un_insert_right add: Un_insert_right[symmetric] add.commute[of m] add.assoc[symmetric] iMODb_Suc_pred_insert_conv)
thm iMODb_append_union[of r m c]
apply (simp add: iMODb_append_union)
done
lemma iMODb_append_union_pred: "
0 < c \<Longrightarrow> [r, mod m, c - Suc 0] \<union> [ r + m * c, mod m, c'] = [r, mod m, c + c']"
by (insert iMODb_append_union_Suc[of r m "c - Suc 0" c'], simp)
lemma iMODb_inter: "
\<lbrakk> r \<le> r'; r mod m = r' mod m; r' \<le> r + m * c; r + m * c \<le> r' + m * c' \<rbrakk> \<Longrightarrow>
[r, mod m, c] \<inter> [r', mod m, c'] = [r', mod m, c - (r'-r) div m]"
apply (rule set_eqI)
apply (simp add: iMODb_iff)
apply (simp add: diff_mult_distrib2)
apply (simp add: mult.commute[of _ "(r' - r) div m"])
thm mod_0_div_mult_cancel[THEN iffD1, OF mod_eq_imp_diff_mod_0]
apply (simp add: mod_0_div_mult_cancel[THEN iffD1, OF mod_eq_imp_diff_mod_0])
apply (simp add: add.commute[of _ r])
apply fastforce
done
lemmas iT_union' =
iFROM_union'
iTILL_union'
iMOD_union'
iMODb_iMOD_append_union'
lemmas iT_union =
iFROM_union
iTILL_union
iTILL_iFROM_union
iIN_union
iIN_iFROM_union
iMOD_union
iMODb_union
lemmas iT_union_append =
iIN_append_union
iIN_append_union_Suc
iIN_append_union_pred
iIN_iFROM_append_union
iIN_iFROM_append_union_Suc
iIN_iFROM_append_union_pred
iMODb_append_union
iMODb_iMOD_append_union
iMODb_append_union_Suc
iMODb_append_union_pred
lemmas iT_inter' =
iFROM_inter'
iTILL_inter'
iMOD_inter'
lemmas iT_inter =
iFROM_inter
iTILL_inter
iIN_inter
iMOD_inter
iMODb_inter
thm iT_union'
thm iT_union
thm iT_union_append
thm iT_inter'
thm iT_inter
thm partition_Union
lemma mod_partition_Union: "
0 < m \<Longrightarrow> (\<Union>k. A \<inter> [k * m\<dots>,m - Suc 0]) = A"
apply simp
thm subst[where s=UNIV and P="\<lambda>x. A \<inter> x = A"]
apply (rule subst[where s=UNIV and P="\<lambda>x. A \<inter> x = A"])
apply (rule set_eqI)
apply (simp add: iT_iff)
apply (rule_tac x="x div m" in exI)
thm div_mult_cancel
apply (simp add: div_mult_cancel)
thm le_add_diff
apply (subst add.commute)
apply (rule le_add_diff)
thm Suc_mod_le_divisor
apply (simp add: Suc_mod_le_divisor)
apply simp
done
thm mod_partition_Union
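(* For m = 2, mod_partition_Union partitions A into the blocks
   [0\<dots>,Suc 0], [2\<dots>,Suc 0], [4\<dots>,Suc 0], ... (sketch):
lemma "(\<Union>k. A \<inter> [k * 2\<dots>,Suc 0]) = A"
by (insert mod_partition_Union[of 2 A], simp)
*)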
lemma finite_mod_partition_Union: "
\<lbrakk> 0 < m; finite A \<rbrakk> \<Longrightarrow>
(\<Union>k\<le>Max A div m. A \<inter> [k*m\<dots>,m - Suc 0]) = A"
thm subst[OF mod_partition_Union[of m], where
P="\<lambda>x. (\<Union>k\<le>Max A div m. A \<inter> [k*m\<dots>,m - Suc 0]) = x"]
apply (rule subst[OF mod_partition_Union[of m], where
P="\<lambda>x. (\<Union>k\<le>Max A div m. A \<inter> [k*m\<dots>,m - Suc 0]) = x"])
apply assumption
apply (rule set_eqI)
apply (simp add: Bex_def iIN_iff)
apply (rule iffI, blast)
apply clarsimp
apply (rename_tac x x1)
apply (rule_tac x="x div m" in exI)
apply (frule in_imp_not_empty[where A=A])
apply (frule_tac Max_ge, assumption)
apply (cut_tac n=x and k="x div m" and m=m in div_imp_le_less)
apply clarsimp+
apply (drule_tac m=x in less_imp_le_pred)
apply (simp add: add.commute[of m])
apply (simp add: div_le_mono)
done
thm setsum.UNION_disjoint
lemma mod_partition_is_disjoint: "
\<lbrakk> 0 < (m::nat); k \<noteq> k' \<rbrakk> \<Longrightarrow>
(A \<inter> [k * m\<dots>,m - Suc 0]) \<inter> (A \<inter> [k' * m\<dots>,m - Suc 0]) = {}"
apply (clarsimp simp add: all_not_in_conv[symmetric] iT_iff)
apply (subgoal_tac "
\<And>k. \<lbrakk> k * m \<le> x; x \<le> k * m + m - Suc 0 \<rbrakk> \<Longrightarrow> x div m = k", blast)
thm le_less_imp_div
apply (rule le_less_imp_div, assumption)
apply simp
done
subsection {* Cutting intervals *}
thm i_cut_defs
thm i_cut_subset
thm i_cut_bound
thm
cut_le_less_conv
cut_less_le_conv
thm
cut_ge_greater_conv
cut_greater_ge_conv
(*
lemma "[10\<dots>,5] \<down>< 12 = [10\<dots>,1]"
apply (simp add: iT_iff cut_less_def)
apply (simp add: iT_defs set_interval_defs Collect_conj_eq[symmetric])
apply fastforce
done
*)
thm
cut_le_Min_empty
cut_less_Min_empty
cut_le_Min_not_empty
cut_less_Min_not_empty
thm
i_cut_min_empty
thm
i_cut_min_all
thm
i_cut_max_empty
thm
i_cut_max_all
lemma iTILL_cut_le: "[\<dots>n] \<down>\<le> t = (if t \<le> n then [\<dots>t] else [\<dots>n])"
unfolding i_cut_defs iT_defs atMost_def
by force
corollary iTILL_cut_le1: "t \<in> [\<dots>n] \<Longrightarrow> [\<dots>n] \<down>\<le> t = [\<dots>t]"
by (simp add: iTILL_cut_le iT_iff)
corollary iTILL_cut_le2: "t \<notin> [\<dots>n] \<Longrightarrow> [\<dots>n] \<down>\<le> t = [\<dots>n]"
by (simp add: iTILL_cut_le iT_iff)
lemma iFROM_cut_le: "
[n\<dots>] \<down>\<le> t =
(if t < n then {} else [n\<dots>,t-n])"
by (simp add: set_eq_iff i_cut_mem_iff iT_iff)
corollary iFROM_cut_le1: "t \<in> [n\<dots>] \<Longrightarrow> [n\<dots>] \<down>\<le> t = [n\<dots>,t - n]"
by (simp add: iFROM_cut_le iT_iff)
lemma iIN_cut_le: "
[n\<dots>,d] \<down>\<le> t = (
if t < n then {} else
if t \<le> n+d then [n\<dots>,t-n]
else [n\<dots>,d])"
by (force simp: set_eq_iff i_cut_mem_iff iT_iff)
corollary iIN_cut_le1: "
t \<in> [n\<dots>,d] \<Longrightarrow> [n\<dots>,d] \<down>\<le> t = [n\<dots>,t - n]"
by (simp add: iIN_cut_le iT_iff)
lemma iMOD_cut_le: "
[r, mod m] \<down>\<le> t = (
if t < r then {}
else [r, mod m, (t - r) div m])"
apply (case_tac "m = 0")
apply (simp add: iMOD_0 iMODb_0 iIN_0 i_cut_empty i_cut_singleton)
apply (case_tac "t < r")
apply (simp add: cut_le_Min_empty iMOD_Min)
apply (clarsimp simp: linorder_not_less set_eq_iff i_cut_mem_iff iT_iff)
apply (rule conj_cong, simp)+
apply (clarsimp simp: mult_div_cancel)
apply (drule_tac x=r and y=x in le_imp_less_or_eq, erule disjE)
prefer 2
apply simp
thm less_mod_eq_imp_add_divisor_le
apply (drule_tac x=r and y=x and m=m in less_mod_eq_imp_add_divisor_le, simp)
apply (rule iffI)
thm mod_eq_imp_diff_mod_eq[of _ m r t, rule_format]
apply (rule_tac x=x in subst[OF mod_eq_imp_diff_mod_eq[of _ m r t], rule_format], simp+)
apply (subgoal_tac "r + (t - x) mod m \<le> t")
prefer 2
thm add_le_mono2[OF mod_le_divisor]
thm order_trans[OF add_le_mono2[OF mod_le_divisor]]
apply (simp add: order_trans[OF add_le_mono2[OF mod_le_divisor]])
apply simp
thm le_imp_sub_mod_le[of _ t]
apply (simp add: le_imp_sub_mod_le)
apply (subgoal_tac "r + (t - r) mod m \<le> t")
prefer 2
apply (rule ccontr)
apply simp
apply simp
done
lemma iMOD_cut_le1: "
t \<in> [r, mod m] \<Longrightarrow>
[r, mod m] \<down>\<le> t = [r, mod m, (t - r) div m]"
by (simp add: iMOD_cut_le iT_iff)
lemma iMODb_cut_le: "
[r, mod m, c] \<down>\<le> t = (
if t < r then {}
else if t < r + m * c then [r, mod m, (t - r) div m]
else [r, mod m, c])"
apply (case_tac "m = 0")
apply (simp add: iMODb_mod_0 iIN_0 cut_le_singleton)
apply (case_tac "t < r")
apply (simp add: cut_le_Min_empty iT_Min)
apply (case_tac "r + m * c \<le> t")
apply (simp add: cut_le_Max_all iT_Max iT_finite)
apply (simp add: linorder_not_le linorder_not_less)
thm iMOD_iTILL_iMODb_conv[of r "r + m * c" m]
apply (rule_tac t=c and s="(r + m * c - r) div m" in subst, simp)
apply (subst iMOD_iTILL_iMODb_conv[symmetric], simp)
apply (simp add: cut_le_Int_right iTILL_cut_le)
thm iMOD_iTILL_iMODb_conv
apply (simp add: iMOD_iTILL_iMODb_conv)
done
lemma iMODb_cut_le1: "
t \<in> [r, mod m, c] \<Longrightarrow>
[r, mod m, c] \<down>\<le> t = [r, mod m, (t - r) div m]"
by (clarsimp simp: iMODb_cut_le iT_iff iMODb_mod_0)
lemma iTILL_cut_less: "
[\<dots>n] \<down>< t = (
if n < t then [\<dots>n] else
if t = 0 then {}
else [\<dots>t - Suc 0])"
apply (case_tac "n < t")
apply (simp add: cut_less_Max_all iT_Max iT_finite)
apply (case_tac "t = 0")
apply (simp add: cut_less_0_empty)
apply (fastforce simp: nat_cut_less_le_conv iTILL_cut_le)
done
lemma iTILL_cut_less1: "
\<lbrakk> t \<in> [\<dots>n]; 0 < t \<rbrakk> \<Longrightarrow> [\<dots>n] \<down>< t = [\<dots>t - Suc 0]"
thm iTILL_cut_less
by (simp add: iTILL_cut_less iT_iff)
lemma iFROM_cut_less: "
[n\<dots>] \<down>< t = (
if t \<le> n then {}
else [n\<dots>,t - Suc n])"
apply (case_tac "t \<le> n")
apply (simp add: cut_less_Min_empty iT_Min)
apply (fastforce simp: nat_cut_less_le_conv iFROM_cut_le)
done
lemma iFROM_cut_less1: "
n < t \<Longrightarrow> [n\<dots>] \<down>< t = [n\<dots>,t - Suc n]"
by (simp add: iFROM_cut_less)
lemma iIN_cut_less: "
[n\<dots>,d] \<down>< t = (
if t \<le> n then {} else
if t \<le> n + d then [n\<dots>, t - Suc n]
else [n\<dots>,d])"
apply (case_tac "t \<le> n")
apply (simp add: cut_less_Min_empty iT_Min)
apply (case_tac "n + d < t")
apply (simp add: cut_less_Max_all iT_Max iT_finite)
apply (fastforce simp: nat_cut_less_le_conv iIN_cut_le)
done
lemma iIN_cut_less1: "
\<lbrakk> t \<in> [n\<dots>,d]; n < t \<rbrakk> \<Longrightarrow> [n\<dots>,d] \<down>< t = [n\<dots>, t - Suc n]"
thm iIN_cut_less
by (simp add: iIN_cut_less iT_iff)
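(* The commented cut example from the beginning of this subsection,
   [10\<dots>,5] \<down>< 12 = [10\<dots>,1], is now immediate (sketch):
lemma "[10\<dots>,5] \<down>< 12 = [10\<dots>,1]"
by (simp add: iIN_cut_less)
*)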
thm
cut_le_less_conv
cut_less_le_conv
lemma iMOD_cut_less: "
[r, mod m] \<down>< t = (
if t \<le> r then {}
else [r, mod m, (t - Suc r) div m])"
thm iMOD_cut_le
apply (case_tac "t = 0")
apply (simp add: cut_less_0_empty)
apply (simp add: nat_cut_less_le_conv iMOD_cut_le)
apply fastforce
done
lemma iMOD_cut_less1: "
\<lbrakk> t \<in> [r, mod m]; r < t \<rbrakk> \<Longrightarrow>
[r, mod m] \<down>< t = [r, mod m, (t - r) div m - Suc 0]"
apply (case_tac "m = 0")
apply (simp add: iMOD_0 iMODb_mod_0 iIN_0)
apply (simp add: iMOD_cut_less)
thm mod_0_imp_diff_Suc_div_conv mod_eq_imp_diff_mod_0
apply (simp add: mod_0_imp_diff_Suc_div_conv mod_eq_imp_diff_mod_0 iT_iff)
done
lemma iMODb_cut_less: "
[r, mod m, c] \<down>< t = (
if t \<le> r then {} else
if r + m * c < t then [r, mod m, c]
else [r, mod m, (t - Suc r) div m])"
thm iMODb_cut_le
apply (case_tac "t = 0")
apply (simp add: cut_less_0_empty)
apply (simp add: nat_cut_less_le_conv iMODb_cut_le)
apply fastforce
done
lemma iMODb_cut_less1: "\<lbrakk> t \<in> [r, mod m, c]; r < t \<rbrakk> \<Longrightarrow>
[r, mod m, c] \<down>< t = [r, mod m, (t - r) div m - Suc 0]"
apply (case_tac "m = 0")
apply (simp add: iMODb_mod_0 iIN_0)
apply (simp add: iMODb_cut_less)
thm mod_0_imp_diff_Suc_div_conv mod_eq_imp_diff_mod_0
apply (simp add: mod_0_imp_diff_Suc_div_conv mod_eq_imp_diff_mod_0 iT_iff)
done
lemmas iT_cut_le =
iTILL_cut_le
iFROM_cut_le
iIN_cut_le
iMOD_cut_le
iMODb_cut_le
thm iT_cut_le
lemmas iT_cut_le1 =
iTILL_cut_le1
iFROM_cut_le1
iIN_cut_le1
iMOD_cut_le1
iMODb_cut_le1
thm iT_cut_le1
lemmas iT_cut_less =
iTILL_cut_less
iFROM_cut_less
iIN_cut_less
iMOD_cut_less
iMODb_cut_less
thm iT_cut_less
lemmas iT_cut_less1 =
iTILL_cut_less1
iFROM_cut_less1
iIN_cut_less1
iMOD_cut_less1
iMODb_cut_less1
thm iT_cut_less1
lemmas iT_cut_le_less =
iTILL_cut_le
iTILL_cut_less
iFROM_cut_le
iFROM_cut_less
iIN_cut_le
iIN_cut_less
iMOD_cut_le
iMOD_cut_less
iMODb_cut_le
iMODb_cut_less
lemmas iT_cut_le_less1 =
iTILL_cut_le1
iTILL_cut_less1
iFROM_cut_le1
iFROM_cut_less1
iIN_cut_le1
iIN_cut_less1
iMOD_cut_le1
iMOD_cut_less1
iMODb_cut_le1
iMODb_cut_less1
thm iT_cut_le_less
thm iT_cut_le_less1
lemma iTILL_cut_ge: "
[\<dots>n] \<down>\<ge> t = (if n < t then {} else [t\<dots>,n-t])"
by (force simp: i_cut_mem_iff iT_iff)
corollary iTILL_cut_ge1: "t \<in> [\<dots>n] \<Longrightarrow> [\<dots>n] \<down>\<ge> t = [t\<dots>,n-t]"
by (simp add: iTILL_cut_ge iT_iff)
corollary iTILL_cut_ge2: "t \<notin> [\<dots>n] \<Longrightarrow> [\<dots>n] \<down>\<ge> t = {}"
by (simp add: iTILL_cut_ge iT_iff)
lemma iTILL_cut_greater: "
[\<dots>n] \<down>> t = (if n \<le> t then {} else [Suc t\<dots>,n - Suc t])"
by (force simp: i_cut_mem_iff iT_iff)
corollary iTILL_cut_greater1: "
t \<in> [\<dots>n] \<Longrightarrow> t < n \<Longrightarrow> [\<dots>n] \<down>> t = [Suc t\<dots>,n - Suc t]"
by (simp add: iTILL_cut_greater iT_iff)
corollary iTILL_cut_greater2: "t \<notin> [\<dots>n] \<Longrightarrow> [\<dots>n] \<down>> t = {}"
by (simp add: iTILL_cut_greater iT_iff)
lemma iFROM_cut_ge: "
[n\<dots>] \<down>\<ge> t = (if n \<le> t then [t\<dots>] else [n\<dots>])"
by (force simp: i_cut_mem_iff iT_iff)
corollary iFROM_cut_ge1: "t \<in> [n\<dots>] \<Longrightarrow> [n\<dots>] \<down>\<ge> t = [t\<dots>]"
by (simp add: iFROM_cut_ge iT_iff)
lemma iFROM_cut_greater: "
[n\<dots>] \<down>> t = (if n \<le> t then [Suc t\<dots>] else [n\<dots>])"
by (force simp: i_cut_mem_iff iT_iff)
corollary iFROM_cut_greater1: "
t \<in> [n\<dots>] \<Longrightarrow> [n\<dots>] \<down>> t = [Suc t\<dots>]"
by (simp add: iFROM_cut_greater iT_iff)
lemma iIN_cut_ge: "
[n\<dots>,d] \<down>\<ge> t = (
if t < n then [n\<dots>,d] else
if t \<le> n+d then [t\<dots>,n+d-t]
else {})"
by (force simp: i_cut_mem_iff iT_iff)
corollary iIN_cut_ge1: "t \<in> [n\<dots>,d] \<Longrightarrow>
[n\<dots>,d] \<down>\<ge> t = [t\<dots>,n+d-t]"
by (simp add: iIN_cut_ge iT_iff)
corollary iIN_cut_ge2: "t \<notin> [n\<dots>,d] \<Longrightarrow>
[n\<dots>,d] \<down>\<ge> t = (if t < n then [n\<dots>,d] else {})"
by (simp add: iIN_cut_ge iT_iff)
lemma iIN_cut_greater: "
[n\<dots>,d] \<down>> t = (
if t < n then [n\<dots>,d] else
if t < n+d then [Suc t\<dots>,n + d - Suc t]
else {})"
by (force simp: i_cut_mem_iff iT_iff)
corollary iIN_cut_greater1: "
\<lbrakk> t \<in> [n\<dots>,d]; t < n + d \<rbrakk>\<Longrightarrow>
[n\<dots>,d] \<down>> t = [Suc t\<dots>,n + d - Suc t]"
by (simp add: iIN_cut_greater iT_iff)
(*
lemma "let m=5 in let r = 12 in let t = 16 in
[r, mod m] \<down>> t = (
if t < r then [r, mod m] else
if (m = 0 \<and> r \<le> t) then {}
else [r + (t - r) div m * m + m, mod m])"
apply (simp add: Let_def)
oops
lemma "let m=5 in let r = 12 in let t = 16 in
[r, mod m] \<down>> t = (
if t < r then [r, mod m] else
if (m = 0 \<and> r \<le> t) then {}
else [t + m - (t - r) mod m, mod m])"
apply (simp add: Let_def)
oops
*)
thm sub_diff_mod_eq'[of r t 1 m, simplified]
lemma mod_cut_greater_aux_t_less: "
\<lbrakk> 0 < (m::nat); r \<le> t \<rbrakk> \<Longrightarrow>
t < t + m - (t - r) mod m"
thm less_add_diff
by (simp add: less_add_diff add.commute)
lemma mod_cut_greater_aux_le_x: "
\<lbrakk> (r::nat) \<le> t; t < x; x mod m = r mod m\<rbrakk> \<Longrightarrow>
t + m - (t - r) mod m \<le> x"
thm diff_mod_le
apply (insert diff_mod_le[of t r m])
thm diff_add_assoc2[of "(t - r) mod m" t m]
apply (subst diff_add_assoc2, simp)
thm less_mod_eq_imp_add_divisor_le[of "t - (t - r) mod m" x m]
apply (rule less_mod_eq_imp_add_divisor_le, simp)
thm sub_diff_mod_eq
apply (simp add: sub_diff_mod_eq)
done
lemma iMOD_cut_greater: "
[r, mod m] \<down>> t = (
if t < r then [r, mod m] else
if m = 0 then {}
else [t + m - (t - r) mod m, mod m])"
apply (case_tac "m = 0")
apply (simp add: iMOD_0 iIN_0 i_cut_singleton)
apply (case_tac "t < r")
apply (simp add: iT_Min cut_greater_Min_all)
apply (simp add: linorder_not_less)
apply (simp add: set_eq_iff i_cut_mem_iff iT_iff, clarify)
apply (subgoal_tac "(t - r) mod m \<le> t")
prefer 2
thm order_trans[OF mod_le_dividend]
apply (rule order_trans[OF mod_le_dividend], simp)
thm diff_add_assoc2[of "(t - r) mod m" t m]
apply (simp add: diff_add_assoc2 del: add_diff_assoc2)
thm sub_diff_mod_eq
apply (simp add: sub_diff_mod_eq del: add_diff_assoc2)
apply (rule conj_cong, simp)
apply (rule iffI)
apply clarify
thm less_mod_eq_imp_add_divisor_le[of "t - (t - r) mod m" x m]
apply (rule less_mod_eq_imp_add_divisor_le)
apply simp
apply (simp add: sub_diff_mod_eq)
apply (subgoal_tac "t < x")
prefer 2
apply (rule_tac y="t - (t - r) mod m + m" in order_less_le_trans)
apply (simp add: mod_cut_greater_aux_t_less)
apply simp+
done
lemma iMOD_cut_greater1: "
t \<in> [r, mod m] \<Longrightarrow>
[r, mod m] \<down>> t = (
if m = 0 then {}
else [t + m, mod m])"
by (simp add: iMOD_cut_greater iT_iff mod_eq_imp_diff_mod_0)
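(* A concrete instance of iMOD_cut_greater1 (sketch): cutting [2, mod 4]
   above the element 14 keeps exactly the elements from 14 + 4 on.
lemma "[2, mod 4] \<down>> 14 = [18, mod 4]"
by (simp add: iMOD_cut_greater1 iT_iff)
*)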
lemma iMODb_cut_greater_aux: "
\<lbrakk> 0 < m; t < r + m * c; r \<le> t\<rbrakk> \<Longrightarrow>
(r + m * c - (t + m - (t - r) mod m)) div m =
c - Suc ((t - r) div m)"
thm diff_diff_right[of r "t+m-(t-r)mod m" "m*c", symmetric]
apply (subgoal_tac "r \<le> t + m - (t - r) mod m")
prefer 2
apply (rule order_trans[of _ t], simp)
thm mod_cut_greater_aux_t_less
apply (simp add: mod_cut_greater_aux_t_less less_imp_le)
apply (rule_tac t="(r + m * c - (t + m - (t - r) mod m))" and s="m * (c - Suc ((t - r) div m))" in subst)
apply (simp add: diff_mult_distrib2 mult_div_cancel del: diff_diff_left)
apply simp
done
lemma iMODb_cut_greater: "
[r, mod m, c] \<down>> t = (
if t < r then [r, mod m, c] else
if r + m * c \<le> t then {}
else [t + m - (t - r) mod m, mod m, c - Suc ((t-r) div m)])"
apply (case_tac "m = 0")
apply (simp add: iMODb_mod_0 iIN_0 i_cut_singleton)
apply (case_tac "r + m * c \<le> t")
apply (simp add: cut_greater_Max_empty iT_Max iT_finite)
apply (case_tac "t < r")
apply (simp add: cut_greater_Min_all iT_Min)
apply (simp add: linorder_not_less linorder_not_le)
thm iMOD_iTILL_iMODb_conv[of r "r + m * c" m]
apply (rule_tac t="[ r, mod m, c ]" and s="[ r, mod m ] \<inter> [\<dots>r + m * c]" in subst)
apply (simp add: iMOD_iTILL_iMODb_conv)
thm iMOD_cut_greater
apply (simp add: i_cut_Int_left iMOD_cut_greater)
thm iMOD_iTILL_iMODb_conv
apply (subst iMOD_iTILL_iMODb_conv)
thm mod_cut_greater_aux_le_x
apply (rule mod_cut_greater_aux_le_x, simp+)
apply (simp add: iMODb_cut_greater_aux)
done
lemma iMODb_cut_greater1: "
t \<in> [r, mod m, c] \<Longrightarrow>
[r, mod m, c] \<down>> t = (
if r + m * c \<le> t then {}
else [t + m, mod m, c - Suc ((t-r) div m)])"
by (simp add: iMODb_cut_greater iT_iff mod_eq_imp_diff_mod_0)
(*
lemma "let m=5 in let r = 12 in let t = 17 in
[r, mod m] \<down>\<ge> t = (
if t \<le> r then [r, mod m] else
if m=0 then {}
else [t + m - Suc ((t - Suc r) mod m), mod m])"
apply (simp add: Let_def)
oops
*)
lemma iMOD_cut_ge: "
[r, mod m] \<down>\<ge> t = (
if t \<le> r then [r, mod m] else
if m = 0 then {}
else [t + m - Suc ((t - Suc r) mod m), mod m])"
apply (case_tac "t = 0")
apply (simp add: cut_ge_0_all)
thm iMOD_cut_greater nat_cut_greater_ge_conv[symmetric]
apply (force simp: nat_cut_greater_ge_conv[symmetric] iMOD_cut_greater)
done
lemma iMOD_cut_ge1: "
t \<in> [r, mod m] \<Longrightarrow>
[r, mod m] \<down>\<ge> t = [t, mod m]"
by (fastforce simp: iMOD_cut_ge)
(*
lemma "let m=5 in let r = 12 in let t = 21 in let c=5 in
[r, mod m, c] \<down>\<ge> t = (
if t \<le> r then [r, mod m, c] else
if r + m * c < t then {}
else [t + m - Suc ((t - Suc r) mod m), mod m, c - (t + m - Suc r) div m])"
apply (simp add: Let_def)
oops
*)
lemma iMODb_cut_ge: "
[r, mod m, c] \<down>\<ge> t = (
if t \<le> r then [r, mod m, c] else
if r + m * c < t then {}
else [t + m - Suc ((t - Suc r) mod m), mod m, c - (t + m - Suc r) div m])"
thm iMOD_cut_ge
apply (case_tac "m = 0")
apply (simp add: iMODb_mod_0 iIN_0 i_cut_singleton)
apply (case_tac "r + m * c < t")
apply (simp add: cut_ge_Max_empty iT_Max iT_finite)
apply (case_tac "t \<le> r")
apply (simp add: cut_ge_Min_all iT_Min)
apply (simp add: linorder_not_less linorder_not_le)
apply (case_tac "r mod m = t mod m")
thm diff_mod_pred
apply (simp add: diff_mod_pred)
thm mod_0_imp_diff_Suc_div_conv
apply (simp add: mod_0_imp_diff_Suc_div_conv mod_eq_diff_mod_0_conv diff_add_assoc2 del: add_diff_assoc2)
apply (subgoal_tac "0 < (t - r) div m")
prefer 2
apply (frule_tac x=r in less_mod_eq_imp_add_divisor_le)
apply (simp add: mod_eq_diff_mod_0_conv)
apply (drule add_le_imp_le_diff2)
thm div_le_mono
apply (drule_tac m=m and k=m in div_le_mono)
apply simp
apply (simp add: set_eq_iff i_cut_mem_iff iT_iff, intro allI)
apply (simp add: mod_eq_diff_mod_0_conv[symmetric])
apply (rule conj_cong, simp)
apply (case_tac "t \<le> x")
prefer 2
apply simp
apply (simp add: diff_mult_distrib2 mult_div_cancel mod_eq_diff_mod_0_conv add.commute[of r])
apply (subgoal_tac "Suc ((t - Suc r) mod m) = (t - r) mod m")
prefer 2
thm diff_mod_pred
apply (clarsimp simp add: diff_mod_pred mod_eq_diff_mod_0_conv)
thm iMOD_iTILL_iMODb_conv[of r "r + m * c" m]
apply (rule_tac t="[ r, mod m, c ]" and s="[ r, mod m ] \<inter> [\<dots>r + m * c]" in subst)
apply (simp add: iMOD_iTILL_iMODb_conv)
thm iMOD_cut_ge
apply (simp add: i_cut_Int_left iMOD_cut_ge)
thm iMOD_iTILL_iMODb_conv
apply (subst iMOD_iTILL_iMODb_conv)
apply (drule_tac x=t in le_imp_less_or_eq, erule disjE)
thm mod_cut_greater_aux_le_x
apply (rule mod_cut_greater_aux_le_x, simp+)
apply (rule arg_cong [where y="c - (t + m - Suc r) div m"])
apply (drule_tac x=t in le_imp_less_or_eq, erule disjE)
prefer 2
apply simp
thm iMODb_cut_greater_aux[of m t r c]
apply (simp add: iMODb_cut_greater_aux)
apply (rule arg_cong[where f="op - c"])
apply (simp add: diff_add_assoc2 del: add_diff_assoc2)
apply (rule_tac t="t - Suc r" and s="t - r - Suc 0" in subst, simp)
thm div_diff1_eq
apply (subst div_diff1_eq[of _ "Suc 0"])
apply (case_tac "m = Suc 0", simp)
apply simp
done
thm iMODb_cut_greater1
lemma iMODb_cut_ge1: "
t \<in> [r, mod m, c] \<Longrightarrow>
[r, mod m, c] \<down>\<ge> t = (
if r + m * c < t then {}
else [t, mod m, c - (t - r) div m])"
apply (case_tac "m = 0")
apply (simp add: iMODb_mod_0 iT_iff iIN_0 i_cut_singleton)
thm iMODb_cut_ge
apply (clarsimp simp: iMODb_cut_ge iT_iff)
thm mod_eq_imp_diff_mod_eq_divisor
apply (simp add: mod_eq_imp_diff_mod_eq_divisor)
apply (rule_tac t="t + m - Suc r" and s="t - r + (m - Suc 0)" in subst, simp)
thm div_add1_eq
apply (subst div_add1_eq)
apply (simp add: mod_eq_imp_diff_mod_0)
done
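(* A bounded counterpart (sketch): cutting [2, mod 4, 5] = {2, 6, ..., 22}
   at t = 14 keeps {14, 18, 22} = [14, mod 4, 2].
lemma "[2, mod 4, 5] \<down>\<ge> 14 = [14, mod 4, 2]"
by (simp add: iMODb_cut_ge1 iT_iff)
*)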
lemma iMOD_0_cut_greater: "
t \<in> [r, mod 0] \<Longrightarrow> [r, mod 0] \<down>> t = {}"
by (simp add: iT_iff iMOD_0 iIN_0 i_cut_singleton)
lemma iMODb_0_cut_greater: "t \<in> [r, mod 0, c] \<Longrightarrow>
[r, mod 0, c] \<down>> t = {}"
by (simp add: iT_iff iMODb_mod_0 iIN_0 i_cut_singleton)
lemmas iT_cut_ge =
iTILL_cut_ge
iFROM_cut_ge
iIN_cut_ge
iMOD_cut_ge
iMODb_cut_ge
thm iT_cut_ge
lemmas iT_cut_ge1 =
iTILL_cut_ge1
iFROM_cut_ge1
iIN_cut_ge1
iMOD_cut_ge1
iMODb_cut_ge1
thm iT_cut_le1
lemmas iT_cut_greater =
iTILL_cut_greater
iFROM_cut_greater
iIN_cut_greater
iMOD_cut_greater
iMODb_cut_greater
thm iT_cut_greater
lemmas iT_cut_greater1 =
iTILL_cut_greater1
iFROM_cut_greater1
iIN_cut_greater1
iMOD_cut_greater1
iMODb_cut_greater1
thm iT_cut_greater1
lemmas iT_cut_ge_greater =
iTILL_cut_ge
iTILL_cut_greater
iFROM_cut_ge
iFROM_cut_greater
iIN_cut_ge
iIN_cut_greater
iMOD_cut_ge
iMOD_cut_greater
iMODb_cut_ge
iMODb_cut_greater
lemmas iT_cut_ge_greater1 =
iTILL_cut_ge1
iTILL_cut_greater1
iFROM_cut_ge1
iFROM_cut_greater1
iIN_cut_ge1
iIN_cut_greater1
iMOD_cut_ge1
iMOD_cut_greater1
iMODb_cut_ge1
iMODb_cut_greater1
thm iT_cut_ge_greater
thm iT_cut_ge_greater1
subsection {* Cardinality of intervals *}
lemma iFROM_card: "card [n\<dots>] = 0"
by (simp add: iFROM_infinite)
lemma iTILL_card: "card [\<dots>n] = Suc n"
by (simp add: iTILL_def)
lemma iIN_card: "card [n\<dots>,d] = Suc d"
by (simp add: iIN_def)
lemma iMOD_0_card: "card [r, mod 0] = Suc 0"
by (simp add: iMOD_0 iIN_card)
lemma iMOD_card: "0 < m \<Longrightarrow> card [r, mod m] = 0"
by (simp add: iMOD_infinite)
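(* The collection iT_card below and the icard proofs further down refer to
   iMOD_card_if, iMODb_mod_0_card, iMODb_card and iMODb_card_if, which are
   not part of this excerpt; by analogy with the icard lemmas below, their
   expected statements are (sketch, proofs omitted):
lemma iMOD_card_if: "card [r, mod m] = (if m = 0 then Suc 0 else 0)"
lemma iMODb_mod_0_card: "card [r, mod 0, c] = Suc 0"
lemma iMODb_card: "0 < m \<Longrightarrow> card [r, mod m, c] = Suc c"
lemma iMODb_card_if: "card [r, mod m, c] = (if m = 0 then Suc 0 else Suc c)"
*)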
lemmas iT_card =
iFROM_card
iTILL_card
iIN_card
iMOD_card_if
iMODb_card_if
text {* Cardinality with @{text icard} *}
lemma iFROM_icard: "icard [n\<dots>] = \<infinity>"
by (simp add: iFROM_infinite)
lemma iTILL_icard: "icard [\<dots>n] = enat (Suc n)"
by (simp add: icard_finite iT_finite iT_card)
lemma iIN_icard: "icard [n\<dots>,d] = enat (Suc d)"
by (simp add: icard_finite iT_finite iT_card)
lemma iMOD_0_icard: "icard [r, mod 0] = eSuc 0"
by (simp add: icard_finite iT_finite iT_card eSuc_enat)
lemma iMOD_icard: "0 < m \<Longrightarrow> icard [r, mod m] = \<infinity>"
by (simp add: iMOD_infinite)
lemma iMOD_icard_if: "icard [r, mod m] = (if m = 0 then eSuc 0 else \<infinity>)"
by (simp add: icard_finite iT_finite iT_infinite eSuc_enat iT_card)
lemma iMODb_mod_0_icard: "icard [r, mod 0, c] = eSuc 0"
by (simp add: icard_finite iT_finite eSuc_enat iT_card)
lemma iMODb_icard: "0 < m \<Longrightarrow> icard [r, mod m, c] = enat (Suc c)"
by (simp add: icard_finite iT_finite iMODb_card)
lemma iMODb_icard_if: "icard [r, mod m, c] = enat (if m = 0 then Suc 0 else Suc c)"
by (simp add: icard_finite iT_finite iMODb_card_if)
lemmas iT_icard =
iFROM_icard
iTILL_icard
iIN_icard
iMOD_icard_if
iMODb_icard_if
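(* e.g. the bounded interval [2, mod 4, 5] has exactly Suc 5 elements (sketch):
lemma "icard [2, mod 4, 5] = enat 6"
by (simp add: iMODb_icard)
*)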
subsection {* Functions @{text inext} and @{text iprev} with intervals *}
thm
inext_def
iprev_def
(*
lemma "inext 5 [\<dots>10] = 6"
apply (simp add: inext_def)
apply (simp add: iT_iff)
apply (simp add: iT_cut_greater)
apply (simp add: iT_not_empty)
apply (simp add: iT_Min)
done
lemma "inext 12 [\<dots>10] = 12"
apply (simp add: inext_def)
apply (simp add: iT_iff)
done
lemma "inext 5 [4\<dots>,5] = 6"
apply (simp add: inext_def)
apply (simp add: iT_iff)
apply (simp add: iT_cut_greater)
apply (simp add: iT_not_empty)
apply (simp add: iT_Min)
done
lemma "inext 14 [2, mod 4] = 18"
apply (simp add: inext_def)
apply safe
thm iMOD_cut_greater[of 14 2 4]
apply (simp add: iMOD_cut_greater iT_iff
iT_Min)
apply (simp add: iT_iff)
thm iMOD_cut_greater[of 14 2 4, simplified]
apply (simp add: iMOD_cut_greater iT_iff
iT_not_empty)
done
lemma "iprev 5 [\<dots>10] = 4"
apply (simp add: iprev_def)
apply (simp add: iT_iff)
apply (simp add: i_cut_defs)
apply safe
apply (simp add: iT_iff)
apply (rule le_antisym)
thm Max_le_iff
thm iffD2[OF Max_le_iff]
apply (rule iffD2[OF Max_le_iff])
apply fastforce+
done
*)
thm
inext_Max
iprev_iMin
thm
inext_closed
iprev_closed
thm
iMin_subset
iMin_Un
thm
Max_Un
Min_Un
lemma
iFROM_inext: "t \<in> [n\<dots>] \<Longrightarrow> inext t [n\<dots>] = Suc t" and
iTILL_inext: "t < n \<Longrightarrow> inext t [\<dots>n] = Suc t" and
iIN_inext: "\<lbrakk> n \<le> t; t < n + d \<rbrakk> \<Longrightarrow> inext t [n\<dots>,d] = Suc t"
by (simp add: iT_defs inext_atLeast inext_atMost inext_atLeastAtMost)+
lemma
iFROM_iprev': "t \<in> [n\<dots>] \<Longrightarrow> iprev (Suc t) [n\<dots>] = t" and
iFROM_iprev: "n < t \<Longrightarrow> iprev t [n\<dots>] = t - Suc 0" and
iTILL_iprev: "t \<in> [\<dots>n] \<Longrightarrow> iprev t [\<dots>n] = t - Suc 0" and
iIN_iprev: "\<lbrakk> n < t; t \<le> n + d \<rbrakk> \<Longrightarrow> iprev t [n\<dots>,d] = t - Suc 0" and
iIN_iprev': "\<lbrakk> n \<le> t; t < n + d \<rbrakk> \<Longrightarrow> iprev (Suc t) [n\<dots>,d] = t"
by (simp add: iT_defs iprev_atLeast iprev_atMost iprev_atLeastAtMost)+
lemma iMOD_inext: "t \<in> [r, mod m] \<Longrightarrow> inext t [r, mod m] = t + m"
by (clarsimp simp add: inext_def iMOD_cut_greater iT_iff iT_Min iT_not_empty mod_eq_imp_diff_mod_0)
lemma iMOD_iprev: "\<lbrakk> t \<in> [r, mod m]; r < t \<rbrakk> \<Longrightarrow> iprev t [r, mod m] = t - m"
apply (case_tac "m = 0", simp add: iMOD_iff)
apply (clarsimp simp add: iprev_def iMOD_cut_less iT_iff iT_Max iT_not_empty mult_div_cancel)
thm mod_eq_imp_diff_mod_eq_divisor
apply (simp del: add_Suc_right add: add_Suc_right[symmetric] mod_eq_imp_diff_mod_eq_divisor)
thm less_mod_eq_imp_add_divisor_le
apply (simp add: less_mod_eq_imp_add_divisor_le)
done
lemma iMOD_iprev': "t \<in> [r, mod m] \<Longrightarrow> iprev (t + m) [r, mod m] = t"
apply (case_tac "m = 0")
apply (simp add: iMOD_0 iIN_0 iprev_singleton)
apply (simp add: iMOD_iprev iT_iff)
done
lemma iMODb_inext: "
\<lbrakk> t \<in> [r, mod m, c]; t < r + m * c \<rbrakk> \<Longrightarrow>
inext t [r, mod m, c] = t + m"
by (clarsimp simp add: inext_def iMODb_cut_greater iT_iff iT_Min iT_not_empty mod_eq_imp_diff_mod_0)
lemma iMODb_iprev: "
\<lbrakk> t \<in> [r, mod m, c]; r < t \<rbrakk> \<Longrightarrow>
iprev t [r, mod m, c] = t - m"
apply (case_tac "m = 0", simp add: iMODb_iff)
apply (clarsimp simp add: iprev_def iMODb_cut_less iT_iff iT_Max iT_not_empty mult_div_cancel)
thm mod_eq_imp_diff_mod_eq_divisor
apply (simp del: add_Suc_right add: add_Suc_right[symmetric] mod_eq_imp_diff_mod_eq_divisor)
thm less_mod_eq_imp_add_divisor_le
apply (simp add: less_mod_eq_imp_add_divisor_le)
done
lemma iMODb_iprev': "
\<lbrakk> t \<in> [r, mod m, c]; t < r + m * c \<rbrakk> \<Longrightarrow>
iprev (t + m) [r, mod m, c] = t"
apply (case_tac "m = 0")
apply (simp add: iMODb_mod_0 iIN_0 iprev_singleton)
thm less_mod_eq_imp_add_divisor_le
apply (simp add: iMODb_iprev iT_iff less_mod_eq_imp_add_divisor_le)
done
lemmas iT_inext =
iFROM_inext
iTILL_inext
iIN_inext
iMOD_inext
iMODb_inext
lemmas iT_iprev =
iFROM_iprev'
iFROM_iprev
iTILL_iprev
iIN_iprev
iIN_iprev'
iMOD_iprev
iMOD_iprev'
iMODb_iprev
iMODb_iprev'
thm iT_inext
thm iT_iprev
thm iprev_iMin
thm iT_finite[THEN inext_Max]
lemma iFROM_inext_if: "
inext t [n\<dots>] = (if t \<in> [n\<dots>] then Suc t else t)"
by (simp add: iFROM_inext not_in_inext_fix)
lemma iTILL_inext_if: "
inext t [\<dots>n] = (if t < n then Suc t else t)"
by (simp add: iTILL_inext iT_finite iT_Max inext_ge_Max)
lemma iIN_inext_if: "
inext t [n\<dots>,d] = (if n \<le> t \<and> t < n + d then Suc t else t)"
by (fastforce simp: iIN_inext iT_iff not_in_inext_fix iT_finite iT_Max inext_ge_Max)
lemma iMOD_inext_if: "
inext t [r, mod m] = (if t \<in> [r, mod m] then t + m else t)"
by (simp add: iMOD_inext not_in_inext_fix)
lemma iMODb_inext_if: "
inext t [r, mod m, c] =
(if t \<in> [r, mod m, c] \<and> t < r + m * c then t + m else t)"
by (fastforce simp: iMODb_inext iT_iff not_in_inext_fix iT_finite iT_Max inext_ge_Max)
lemmas iT_inext_if =
iFROM_inext_if
iTILL_inext_if
iIN_inext_if
iMOD_inext_if
iMODb_inext_if
thm iT_inext_if
lemma iFROM_iprev_if: "
iprev t [n\<dots>] = (if n < t then t - Suc 0 else t)"
by (simp add: iFROM_iprev iT_Min iprev_le_iMin)
lemma iTILL_iprev_if: "
iprev t [\<dots>n] = (if t \<in> [\<dots>n] then t - Suc 0 else t)"
by (simp add: iTILL_iprev not_in_iprev_fix)
lemma iIN_iprev_if: "
iprev t [n\<dots>,d] = (if n < t \<and> t \<le> n + d then t - Suc 0 else t)"
by (fastforce simp: iIN_iprev iT_iff not_in_iprev_fix iT_Min iprev_le_iMin)
lemma iMOD_iprev_if: "
iprev t [r, mod m] =
(if t \<in> [r, mod m] \<and> r < t then t - m else t)"
by (fastforce simp add: iMOD_iprev iT_iff not_in_iprev_fix iT_Min iprev_le_iMin)
lemma iMODb_iprev_if: "
iprev t [r, mod m, c] =
(if t \<in> [r, mod m, c] \<and> r < t then t - m else t)"
by (fastforce simp add: iMODb_iprev iT_iff not_in_iprev_fix iT_Min iprev_le_iMin)
lemmas iT_iprev_if =
iFROM_iprev_if
iTILL_iprev_if
iIN_iprev_if
iMOD_iprev_if
iMODb_iprev_if
thm iT_iprev_if
text {*
The difference between an element and its next/previous element is constant,
provided the element is distinct from the Min/Max of the interval *}
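(* A concrete instance, kept as a comment sketch: 5 \<in> [2, mod 3] because
   5 mod 3 = 2 mod 3, so by iMOD_inext_diff_const below the step to the next
   element equals the divisor, i.e. inext 5 [2, mod 3] - 5 = 3. The exact
   simp set needed to discharge this may vary.
lemma "inext 5 [2, mod 3] - 5 = 3"
by (simp add: iMOD_inext_diff_const iT_iff)
*)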
lemma iFROM_inext_diff_const: "
t \<in> [n\<dots>] \<Longrightarrow> inext t [n\<dots>] - t = Suc 0"
by (simp add: iFROM_inext)
lemma iFROM_iprev_diff_const: "
n < t \<Longrightarrow> t - iprev t [n\<dots>] = Suc 0"
by (simp add: iFROM_iprev)
lemma iFROM_iprev_diff_const': "
t \<in> [n\<dots>] \<Longrightarrow> Suc t - iprev (Suc t) [n\<dots>] = Suc 0"
by (simp add: iFROM_iprev')
lemma iTILL_inext_diff_const: "
t < n \<Longrightarrow> inext t [\<dots>n] - t = Suc 0"
by (simp add: iTILL_inext)
lemma iTILL_iprev_diff_const: "
\<lbrakk> t \<in> [\<dots>n]; 0 < t \<rbrakk> \<Longrightarrow> t - iprev t [\<dots>n] = Suc 0"
by (simp add: iTILL_iprev)
thm iIN_inext
lemma iIN_inext_diff_const: "
\<lbrakk> n \<le> t; t < n + d \<rbrakk> \<Longrightarrow> inext t [n\<dots>,d] - t = Suc 0"
by (simp add: iIN_inext)
thm iIN_iprev
lemma iIN_iprev_diff_const: "
\<lbrakk> n < t; t \<le> n + d \<rbrakk> \<Longrightarrow> t - iprev t [n\<dots>,d] = Suc 0"
by (simp add: iIN_iprev)
lemma iIN_iprev_diff_const': "
\<lbrakk> n \<le> t; t < n + d \<rbrakk> \<Longrightarrow> Suc t - iprev (Suc t) [n\<dots>,d] = Suc 0"
by (simp add: iIN_iprev)
thm iMOD_inext
lemma iMOD_inext_diff_const: "
t \<in> [r, mod m] \<Longrightarrow> inext t [r, mod m] - t = m"
by (simp add: iMOD_inext)
lemma iMOD_iprev_diff_const': "
t \<in> [r, mod m] \<Longrightarrow> (t + m) - iprev (t + m) [r, mod m] = m"
by (simp add: iMOD_iprev')
thm iMOD_iprev
lemma iMOD_iprev_diff_const: "
\<lbrakk> t \<in> [r, mod m]; r < t \<rbrakk> \<Longrightarrow> t - iprev t [r, mod m] = m"
apply (simp add: iMOD_iprev iT_iff)
apply (drule less_mod_eq_imp_add_divisor_le[where m=m], simp+)
done
thm iMODb_inext
lemma iMODb_inext_diff_const: "
\<lbrakk> t \<in> [r, mod m, c]; t < r + m * c \<rbrakk> \<Longrightarrow> inext t [r, mod m, c] - t = m"
by (simp add: iMODb_inext)
thm iMODb_iprev'
lemma iMODb_iprev_diff_const': "
\<lbrakk> t \<in> [r, mod m, c]; t < r + m * c \<rbrakk> \<Longrightarrow> (t + m) - iprev (t + m) [r, mod m, c] = m"
by (simp add: iMODb_iprev')
thm iMODb_iprev
lemma iMODb_iprev_diff_const: "
\<lbrakk> t \<in> [r, mod m, c]; r < t \<rbrakk> \<Longrightarrow> t - iprev t [r, mod m, c] = m"
apply (simp add: iMODb_iprev iT_iff)
apply (drule less_mod_eq_imp_add_divisor_le[where m=m], simp+)
done
lemmas iT_inext_diff_const =
iFROM_inext_diff_const
iTILL_inext_diff_const
iIN_inext_diff_const
iMOD_inext_diff_const
iMODb_inext_diff_const
lemmas iT_iprev_diff_const =
iFROM_iprev_diff_const
iFROM_iprev_diff_const'
iTILL_iprev_diff_const
iIN_iprev_diff_const
iIN_iprev_diff_const'
iMOD_iprev_diff_const'
iMOD_iprev_diff_const
iMODb_iprev_diff_const'
iMODb_iprev_diff_const
thm iT_inext_diff_const
thm iT_iprev_diff_const
subsubsection {* Mirroring of intervals *}
thm mirror_elem_def
lemma
iIN_mirror_elem: "mirror_elem x [n\<dots>,d] = n + n + d - x" and
iTILL_mirror_elem: "mirror_elem x [\<dots>n] = n - x" and
iMODb_mirror_elem: "mirror_elem x [r, mod m, c] = r + r + m * c - x"
by (simp add: mirror_elem_def nat_mirror_def iT_Min iT_Max)+
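(* For example, with n = 2, d = 4 and x = 3, iIN_mirror_elem gives
   mirror_elem 3 [2\<dots>,4] = 2 + 2 + 4 - 3 = 5: the points 3 and 5 lie
   symmetrically inside the interval {2,...,6}. *)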
lemma iMODb_imirror_bounds: "
r' + m' * c' \<le> l + r \<Longrightarrow>
imirror_bounds [r', mod m', c'] l r = [l + r - r' - m' * c', mod m', c']"
apply (clarsimp simp: set_eq_iff Bex_def imirror_bounds_iff iT_iff)
apply (frule diff_le_mono[of _ _ r'], simp)
thm mod_diff_right_eq
apply (simp add: mod_diff_right_eq)
apply (rule iffI)
apply (clarsimp, rename_tac x')
thm mod_diff_right_eq
apply (rule_tac a=x' in ssubst[OF mod_diff_right_eq, rule_format], simp+)
apply (simp add: diff_le_mono2)
apply clarsimp
apply (rule_tac x="l+r-x" in exI)
apply (simp add: le_diff_swap)
apply (simp add: le_diff_conv2)
thm mod_sub_eq_mod_swap
apply (subst mod_sub_eq_mod_swap, simp+)
thm mod_diff_right_eq
apply (simp add: mod_diff_right_eq[symmetric])
done
thm imirror_bounds_def
lemma iIN_imirror_bounds: "
n + d \<le> l + r \<Longrightarrow> imirror_bounds [n\<dots>,d] l r = [l + r - n - d\<dots>,d]"
apply (insert iMODb_imirror_bounds[of n "Suc 0" d l r])
apply (simp add: iMODb_mod_1)
done
lemma iTILL_imirror_bounds: "
n \<le> l + r \<Longrightarrow> imirror_bounds [\<dots>n] l r = [l + r - n\<dots>,n]"
apply (insert iIN_imirror_bounds[of 0 n l r])
apply (simp add: iIN_0_iTILL_conv)
done
lemmas iT_imirror_bounds =
iTILL_imirror_bounds
iIN_imirror_bounds
iMODb_imirror_bounds
thm iT_imirror_bounds
lemmas iT_imirror_ident =
iTILL_imirror_ident
iIN_imirror_ident
iMODb_imirror_ident
thm iT_imirror_ident
subsubsection {* Functions @{term inext_nth} and @{term iprev_nth} on intervals *}
lemma iFROM_inext_nth : "[n\<dots>] \<rightarrow> a = n + a"
by (simp add: iT_defs inext_nth_atLeast)
lemma iIN_inext_nth : "a \<le> d \<Longrightarrow> [n\<dots>,d] \<rightarrow> a = n + a"
by (simp add: iT_defs inext_nth_atLeastAtMost)
lemma iIN_iprev_nth: "a \<le> d \<Longrightarrow> [n\<dots>,d] \<leftarrow> a = n + d - a"
by (simp add: iT_defs iprev_nth_atLeastAtMost)
lemma iIN_inext_nth_if : "
[n\<dots>,d] \<rightarrow> a = (if a \<le> d then n + a else n + d)"
by (simp add: iIN_inext_nth inext_nth_card_Max iT_finite iT_not_empty iT_Max iT_card)
lemma iIN_iprev_nth_if: "
[n\<dots>,d] \<leftarrow> a = (if a \<le> d then n + d - a else n)"
by (simp add: iIN_iprev_nth iprev_nth_card_iMin iT_finite iT_not_empty iT_Min iT_card)
lemma iTILL_inext_nth : "a \<le> n \<Longrightarrow> [\<dots>n] \<rightarrow> a = a"
by (simp add: iTILL_def inext_nth_atMost)
lemma iTILL_inext_nth_if : "
[\<dots>n] \<rightarrow> a = (if a \<le> n then a else n)"
by (insert iIN_inext_nth_if[of 0 n a], simp add: iIN_0_iTILL_conv)
lemma iTILL_iprev_nth: "a \<le> n \<Longrightarrow> [\<dots>n] \<leftarrow> a = n - a"
by (simp add: iTILL_def iprev_nth_atMost)
lemma iTILL_iprev_nth_if: "
[\<dots>n] \<leftarrow> a = (if a \<le> n then n - a else 0)"
by (insert iIN_iprev_nth_if[of 0 n a], simp add: iIN_0_iTILL_conv)
lemma iMOD_inext_nth: "[r, mod m] \<rightarrow> a = r + m * a"
apply (induct a)
apply (simp add: iT_Min)
apply (simp add: iMOD_inext_if iT_iff)
done
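(* For example, iMOD_inext_nth with r = 2, m = 3 and a = 4 yields
   [2, mod 3] \<rightarrow> 4 = 2 + 3 * 4 = 14, the fifth element of the
   progression {2, 5, 8, 11, 14, ...}. *)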
lemma iMODb_inext_nth: "a \<le> c \<Longrightarrow> [r, mod m, c] \<rightarrow> a = r + m * a"
apply (case_tac "m = 0")
apply (simp add: iMODb_mod_0 iIN_0 inext_nth_singleton)
apply (induct a)
apply (simp add: iMODb_Min)
apply (simp add: iMODb_inext_if iT_iff)
done
lemma iMODb_inext_nth_if: "
[r, mod m, c] \<rightarrow> a = (if a \<le> c then r + m * a else r + m * c)"
by (simp add: iMODb_inext_nth inext_nth_card_Max iT_finite iT_not_empty iT_Max iT_card)
lemma iMODb_iprev_nth: "
a \<le> c \<Longrightarrow> [r, mod m, c] \<leftarrow> a = r + m * (c - a)"
apply (case_tac "m = 0")
apply (simp add: iMODb_mod_0 iIN_0 iprev_nth_singleton)
apply (induct a)
apply (simp add: iMODb_Max)
apply (simp add: iMODb_iprev_if iT_iff)
apply (frule mult_left_mono[of _ _ m], simp)
apply (simp add: diff_mult_distrib2)
done
lemma iMODb_iprev_nth_if: "
[r, mod m, c] \<leftarrow> a = (if a \<le> c then r + m * (c - a) else r)"
by (simp add: iMODb_iprev_nth iprev_nth_card_iMin iT_finite iT_not_empty iT_Min iT_card)
lemma iIN_iFROM_inext_nth: "
a \<le> d \<Longrightarrow> [n\<dots>,d] \<rightarrow> a = [n\<dots>] \<rightarrow> a"
by (simp add: iIN_inext_nth iFROM_inext_nth)
lemma iIN_iFROM_inext: "
a < n + d \<Longrightarrow> inext a [n\<dots>,d] = inext a [n\<dots>]"
by (simp add: iT_inext_if iT_iff)
lemma iMOD_iMODb_inext_nth: "
a \<le> c \<Longrightarrow> [r, mod m, c] \<rightarrow> a = [r, mod m] \<rightarrow> a"
by (simp add: iMOD_inext_nth iMODb_inext_nth)
lemma iMOD_iMODb_inext: "
a < r + m * c \<Longrightarrow> inext a [r, mod m, c] = inext a [r, mod m]"
by (simp add: iT_inext_if iT_iff)
lemma iMOD_inext_nth_Suc_diff: "
([r, mod m] \<rightarrow> (Suc n)) - ([r, mod m] \<rightarrow> n) = m"
by (simp add: iMOD_inext_nth del: inext_nth.simps)
lemma iMOD_inext_nth_diff: "
([r, mod m] \<rightarrow> a) - ([r, mod m] \<rightarrow> b) = (a - b) * m"
by (simp add: iMOD_inext_nth diff_mult_distrib mult.commute[of m])
lemma iMODb_inext_nth_diff: "\<lbrakk> a \<le> c; b \<le> c \<rbrakk> \<Longrightarrow>
([r, mod m, c] \<rightarrow> a) - ([r, mod m, c] \<rightarrow> b) = (a - b) * m"
by (simp add: iMODb_inext_nth diff_mult_distrib mult.commute[of m])
subsection {* Induction with intervals *}
thm
inext_induct
iFROM_inext
lemma iFROM_induct: "
\<lbrakk> P n; \<And>k. \<lbrakk> k \<in> [n\<dots>]; P k \<rbrakk> \<Longrightarrow> P (Suc k); a \<in> [n\<dots>] \<rbrakk> \<Longrightarrow> P a"
apply (rule inext_induct[of _ "[n\<dots>]"])
apply (simp add: iT_Min iT_inext_if)+
done
lemma iIN_induct: "
\<lbrakk> P n; \<And>k. \<lbrakk> k \<in> [n\<dots>,d]; k \<noteq> n + d; P k \<rbrakk> \<Longrightarrow> P (Suc k); a \<in> [n\<dots>,d] \<rbrakk> \<Longrightarrow> P a"
apply (rule inext_induct[of _ "[n\<dots>,d]"])
apply (simp add: iT_Min iT_inext_if)+
done
lemma iTILL_induct: "
\<lbrakk> P 0; \<And>k. \<lbrakk> k \<in> [\<dots>n]; k \<noteq> n; P k \<rbrakk> \<Longrightarrow> P (Suc k); a \<in> [\<dots>n] \<rbrakk> \<Longrightarrow> P a"
apply (rule inext_induct[of _ "[\<dots>n]"])
apply (simp add: iT_Min iT_inext_if)+
done
lemma iMOD_induct: "
\<lbrakk> P r; \<And>k. \<lbrakk> k \<in> [r, mod m]; P k \<rbrakk> \<Longrightarrow> P (k + m); a \<in> [r, mod m] \<rbrakk> \<Longrightarrow> P a"
apply (rule inext_induct[of _ "[r, mod m]"])
apply (simp add: iT_Min iT_inext_if)+
done
lemma iMODb_induct: "
\<lbrakk> P r; \<And>k. \<lbrakk> k \<in> [r, mod m, c]; k \<noteq> r + m * c; P k \<rbrakk> \<Longrightarrow> P (k + m); a \<in> [r, mod m, c] \<rbrakk> \<Longrightarrow> P a"
apply (rule inext_induct[of _ "[r, mod m, c]"])
apply (simp add: iT_Min iT_inext_if)+
done
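(* Usage sketch, kept as a comment: any property that holds at r and is
   preserved by adding the divisor m holds on all of [r, mod m], e.g.
lemma "t \<in> [r, mod m] \<Longrightarrow> r \<le> t"
by (rule iMOD_induct[of _ r m t], simp_all)
   The exact instantiation of the rule may need adjustment. *)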
thm
iprev_induct
iIN_inext
lemma iIN_rev_induct: "
\<lbrakk> P (n + d); \<And>k. \<lbrakk> k \<in> [n\<dots>,d]; k \<noteq> n; P k \<rbrakk> \<Longrightarrow> P (k - Suc 0); a \<in> [n\<dots>,d] \<rbrakk> \<Longrightarrow> P a"
apply (rule iprev_induct[of _ "[n\<dots>,d]"])
apply (simp add: iT_Max iT_finite iT_iprev_if)+
done
lemma iTILL_rev_induct: "
\<lbrakk> P n; \<And>k. \<lbrakk> k \<in> [\<dots>n]; 0 < k; P k \<rbrakk> \<Longrightarrow> P (k - Suc 0); a \<in> [\<dots>n] \<rbrakk> \<Longrightarrow> P a"
apply (rule iprev_induct[of _ "[\<dots>n]"])
apply (fastforce simp: iT_Max iT_finite iT_iprev_if)+
done
lemma iMODb_rev_induct: "
\<lbrakk> P (r + m * c); \<And>k. \<lbrakk> k \<in> [r, mod m, c]; k \<noteq> r; P k \<rbrakk> \<Longrightarrow> P (k - m); a \<in> [r, mod m, c] \<rbrakk> \<Longrightarrow> P a"
apply (rule iprev_induct[of _ "[r, mod m, c]"])
apply (simp add: iT_Max iT_finite iT_iprev_if)+
done
end
|
module Test.ColorTest
import IdrTest.Test
import IdrTest.Expectation
import Color
simpleTest : Test
simpleTest =
test "Red & Bold" (\_ => assertEq
(decorate (Text Red <&> BG Blue <&> Bold) "Hello")
"\ESC[31m\ESC[44m\ESC[1mHello\ESC[0m\ESC[0m\ESC[0m"
)
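-- A single-decoration sketch: assuming each decoration contributes exactly one
-- escape prefix and one trailing reset, as the expected string above suggests,
-- a lone `Text Red` wraps the string once. Not yet wired into the suite below.
singleTest : Test
singleTest =
  test "Red only" (\_ => assertEq
    (decorate (Text Red) "Hi")
    "\ESC[31mHi\ESC[0m"
  )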
export
suite : Test
suite =
describe "Color Tests"
[ simpleTest
]
|
/-
Copyright (c) 2021 Yaël Dillies, Bhavik Mehta. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Yaël Dillies, Bhavik Mehta
! This file was ported from Lean 3 source module analysis.convex.gauge
! leanprover-community/mathlib commit 468b141b14016d54b479eb7a0fff1e360b7e3cf6
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Analysis.Convex.Basic
import Mathbin.Analysis.NormedSpace.Pointwise
import Mathbin.Analysis.Seminorm
import Mathbin.Data.IsROrC.Basic
import Mathbin.Tactic.Congrm
/-!
# The Minkowski functional
This file defines the Minkowski functional, aka gauge.
The Minkowski functional of a set `s` is the function which associates each point to how much you
need to scale `s` for `x` to be inside it. When `s` is symmetric, convex and absorbent, its gauge is
a seminorm. Reciprocally, any seminorm arises as the gauge of some set, namely its unit ball. This
induces the equivalence of seminorms and locally convex topological vector spaces.
## Main declarations
For a real vector space,
* `gauge`: Aka the Minkowski functional. `gauge s x` is the least (actually, an infimum) `r` such
that `x ∈ r • s`.
* `gauge_seminorm`: The Minkowski functional as a seminorm, when `s` is symmetric, convex and
absorbent.
## References
* [H. H. Schaefer, *Topological Vector Spaces*][schaefer1966]
## Tags
Minkowski functional, gauge
-/
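/- A worked instance, as a sketch: for `s = Metric.ball (0 : ℝ) 1` and `x = 2`,
we have `x ∈ r • s` exactly when `r > 2`, so `gauge s 2 = 2`. This is the
content of `gauge_unit_ball` below, which shows `gauge (Metric.ball 0 1) x = ‖x‖`. -/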
open NormedField Set
open Pointwise
noncomputable section
variable {𝕜 E F : Type _}
section AddCommGroup
variable [AddCommGroup E] [Module ℝ E]
/-- The Minkowski functional. Given a set `s` in a real vector space, `gauge s` is the functional
which sends `x : E` to the smallest `r : ℝ` such that `x` is in `s` scaled by `r`. -/
def gauge (s : Set E) (x : E) : ℝ :=
infₛ { r : ℝ | 0 < r ∧ x ∈ r • s }
#align gauge gauge
variable {s t : Set E} {a : ℝ} {x : E}
theorem gauge_def : gauge s x = infₛ ({ r ∈ Set.Ioi 0 | x ∈ r • s }) :=
rfl
#align gauge_def gauge_def
/- ./././Mathport/Syntax/Translate/Tactic/Builtin.lean:73:14: unsupported tactic `congrm #[[expr Inf (λ r, _)]] -/
/-- An alternative definition of the gauge using scalar multiplication on the element rather than on
the set. -/
theorem gauge_def' : gauge s x = infₛ ({ r ∈ Set.Ioi 0 | r⁻¹ • x ∈ s }) :=
by
trace
"./././Mathport/Syntax/Translate/Tactic/Builtin.lean:73:14: unsupported tactic `congrm #[[expr Inf (λ r, _)]]"
exact and_congr_right fun hr => mem_smul_set_iff_inv_smul_mem₀ hr.ne' _ _
#align gauge_def' gauge_def'
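/- The two descriptions agree because, for `r > 0`, `x ∈ r • s ↔ r⁻¹ • x ∈ s`
(this is `mem_smul_set_iff_inv_smul_mem₀`, used in the proof above), so both
infima are taken over the same set of positive reals. -/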
private theorem gauge_set_bdd_below : BddBelow { r : ℝ | 0 < r ∧ x ∈ r • s } :=
⟨0, fun r hr => hr.1.le⟩
#align gauge_set_bdd_below gauge_set_bdd_below
/-- If the given subset is `absorbent` then the set we take an infimum over in `gauge` is nonempty,
which is useful for proving many properties about the gauge. -/
theorem Absorbent.gauge_set_nonempty (absorbs : Absorbent ℝ s) :
{ r : ℝ | 0 < r ∧ x ∈ r • s }.Nonempty :=
let ⟨r, hr₁, hr₂⟩ := Absorbs x
⟨r, hr₁, hr₂ r (Real.norm_of_nonneg hr₁.le).ge⟩
#align absorbent.gauge_set_nonempty Absorbent.gauge_set_nonempty
theorem gauge_mono (hs : Absorbent ℝ s) (h : s ⊆ t) : gauge t ≤ gauge s := fun x =>
cinfₛ_le_cinfₛ gauge_set_bddBelow hs.gauge_set_nonempty fun r hr => ⟨hr.1, smul_set_mono h hr.2⟩
#align gauge_mono gauge_mono
theorem exists_lt_of_gauge_lt (absorbs : Absorbent ℝ s) (h : gauge s x < a) :
∃ b, 0 < b ∧ b < a ∧ x ∈ b • s :=
by
obtain ⟨b, ⟨hb, hx⟩, hba⟩ := exists_lt_of_cinfₛ_lt absorbs.gauge_set_nonempty h
exact ⟨b, hb, hba, hx⟩
#align exists_lt_of_gauge_lt exists_lt_of_gauge_lt
/-- The gauge evaluated at `0` is always zero (mathematically this requires `0` to be in the set `s`
but, the real infimum of the empty set in Lean being defined as `0`, it holds unconditionally). -/
@[simp]
theorem gauge_zero : gauge s 0 = 0 := by
rw [gauge_def']
by_cases (0 : E) ∈ s
· simp only [smul_zero, sep_true, h, cinfₛ_Ioi]
· simp only [smul_zero, sep_false, h, Real.infₛ_empty]
#align gauge_zero gauge_zero
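/- Concretely: `gauge s 0 = infₛ {r : ℝ | 0 < r ∧ 0 ∈ r • s}`. When `(0 : E) ∈ s`
this set is all of `Ioi 0`, whose infimum is `0`; when `0 ∉ s` it is empty, and
`Real.infₛ ∅ = 0` by convention, exactly the case split in the proof above. -/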
@[simp]
theorem gauge_zero' : gauge (0 : Set E) = 0 := by
ext
rw [gauge_def']
obtain rfl | hx := eq_or_ne x 0
· simp only [cinfₛ_Ioi, mem_zero, Pi.zero_apply, eq_self_iff_true, sep_true, smul_zero]
· simp only [mem_zero, Pi.zero_apply, inv_eq_zero, smul_eq_zero]
convert Real.infₛ_empty
exact eq_empty_iff_forall_not_mem.2 fun r hr => hr.2.elim (ne_of_gt hr.1) hx
#align gauge_zero' gauge_zero'
@[simp]
theorem gauge_empty : gauge (∅ : Set E) = 0 := by
ext
simp only [gauge_def', Real.infₛ_empty, mem_empty_iff_false, Pi.zero_apply, sep_false]
#align gauge_empty gauge_empty
theorem gauge_of_subset_zero (h : s ⊆ 0) : gauge s = 0 :=
by
obtain rfl | rfl := subset_singleton_iff_eq.1 h
exacts[gauge_empty, gauge_zero']
#align gauge_of_subset_zero gauge_of_subset_zero
/-- The gauge is always nonnegative. -/
theorem gauge_nonneg (x : E) : 0 ≤ gauge s x :=
Real.infₛ_nonneg _ fun x hx => hx.1.le
#align gauge_nonneg gauge_nonneg
theorem gauge_neg (symmetric : ∀ x ∈ s, -x ∈ s) (x : E) : gauge s (-x) = gauge s x :=
by
have : ∀ x, -x ∈ s ↔ x ∈ s := fun x => ⟨fun h => by simpa using Symmetric _ h, Symmetric x⟩
simp_rw [gauge_def', smul_neg, this]
#align gauge_neg gauge_neg
theorem gauge_neg_set_neg (x : E) : gauge (-s) (-x) = gauge s x := by
simp_rw [gauge_def', smul_neg, neg_mem_neg]
#align gauge_neg_set_neg gauge_neg_set_neg
theorem gauge_neg_set_eq_gauge_neg (x : E) : gauge (-s) x = gauge s (-x) := by
rw [← gauge_neg_set_neg, neg_neg]
#align gauge_neg_set_eq_gauge_neg gauge_neg_set_eq_gauge_neg
theorem gauge_le_of_mem (ha : 0 ≤ a) (hx : x ∈ a • s) : gauge s x ≤ a :=
by
obtain rfl | ha' := ha.eq_or_lt
· rw [mem_singleton_iff.1 (zero_smul_set_subset _ hx), gauge_zero]
· exact cinfₛ_le gauge_set_bdd_below ⟨ha', hx⟩
#align gauge_le_of_mem gauge_le_of_mem
theorem gauge_le_eq (hs₁ : Convex ℝ s) (hs₀ : (0 : E) ∈ s) (hs₂ : Absorbent ℝ s) (ha : 0 ≤ a) :
{ x | gauge s x ≤ a } = ⋂ (r : ℝ) (H : a < r), r • s :=
by
ext
simp_rw [Set.mem_interᵢ, Set.mem_setOf_eq]
refine' ⟨fun h r hr => _, fun h => le_of_forall_pos_lt_add fun ε hε => _⟩
· have hr' := ha.trans_lt hr
rw [mem_smul_set_iff_inv_smul_mem₀ hr'.ne']
obtain ⟨δ, δ_pos, hδr, hδ⟩ := exists_lt_of_gauge_lt hs₂ (h.trans_lt hr)
suffices (r⁻¹ * δ) • δ⁻¹ • x ∈ s by rwa [smul_smul, mul_inv_cancel_right₀ δ_pos.ne'] at this
rw [mem_smul_set_iff_inv_smul_mem₀ δ_pos.ne'] at hδ
refine' hs₁.smul_mem_of_zero_mem hs₀ hδ ⟨by positivity, _⟩
rw [inv_mul_le_iff hr', mul_one]
exact hδr.le
· have hε' := (lt_add_iff_pos_right a).2 (half_pos hε)
exact
(gauge_le_of_mem (ha.trans hε'.le) <| h _ hε').trans_lt (add_lt_add_left (half_lt_self hε) _)
#align gauge_le_eq gauge_le_eq
theorem gauge_lt_eq' (absorbs : Absorbent ℝ s) (a : ℝ) :
{ x | gauge s x < a } = ⋃ (r : ℝ) (H : 0 < r) (H : r < a), r • s :=
by
ext
simp_rw [mem_set_of_eq, mem_Union, exists_prop]
exact
⟨exists_lt_of_gauge_lt Absorbs, fun ⟨r, hr₀, hr₁, hx⟩ =>
(gauge_le_of_mem hr₀.le hx).trans_lt hr₁⟩
#align gauge_lt_eq' gauge_lt_eq'
theorem gauge_lt_eq (absorbs : Absorbent ℝ s) (a : ℝ) :
{ x | gauge s x < a } = ⋃ r ∈ Set.Ioo 0 (a : ℝ), r • s :=
by
ext
simp_rw [mem_set_of_eq, mem_Union, exists_prop, mem_Ioo, and_assoc']
exact
⟨exists_lt_of_gauge_lt Absorbs, fun ⟨r, hr₀, hr₁, hx⟩ =>
(gauge_le_of_mem hr₀.le hx).trans_lt hr₁⟩
#align gauge_lt_eq gauge_lt_eq
theorem gauge_lt_one_subset_self (hs : Convex ℝ s) (h₀ : (0 : E) ∈ s) (absorbs : Absorbent ℝ s) :
{ x | gauge s x < 1 } ⊆ s := by
rw [gauge_lt_eq Absorbs]
refine' Set.unionᵢ₂_subset fun r hr _ => _
rintro ⟨y, hy, rfl⟩
exact hs.smul_mem_of_zero_mem h₀ hy (Ioo_subset_Icc_self hr)
#align gauge_lt_one_subset_self gauge_lt_one_subset_self
theorem gauge_le_one_of_mem {x : E} (hx : x ∈ s) : gauge s x ≤ 1 :=
gauge_le_of_mem zero_le_one <| by rwa [one_smul]
#align gauge_le_one_of_mem gauge_le_one_of_mem
theorem self_subset_gauge_le_one : s ⊆ { x | gauge s x ≤ 1 } := fun x => gauge_le_one_of_mem
#align self_subset_gauge_le_one self_subset_gauge_le_one
theorem Convex.gauge_le (hs : Convex ℝ s) (h₀ : (0 : E) ∈ s) (absorbs : Absorbent ℝ s) (a : ℝ) :
Convex ℝ { x | gauge s x ≤ a } := by
by_cases ha : 0 ≤ a
· rw [gauge_le_eq hs h₀ Absorbs ha]
exact convex_interᵢ fun i => convex_interᵢ fun hi => hs.smul _
· convert convex_empty
exact eq_empty_iff_forall_not_mem.2 fun x hx => ha <| (gauge_nonneg _).trans hx
#align convex.gauge_le Convex.gauge_le
theorem Balanced.starConvex (hs : Balanced ℝ s) : StarConvex ℝ 0 s :=
starConvex_zero_iff.2 fun x hx a ha₀ ha₁ =>
hs _ (by rwa [Real.norm_of_nonneg ha₀]) (smul_mem_smul_set hx)
#align balanced.star_convex Balanced.starConvex
theorem le_gauge_of_not_mem (hs₀ : StarConvex ℝ 0 s) (hs₂ : Absorbs ℝ s {x}) (hx : x ∉ a • s) :
a ≤ gauge s x := by
rw [starConvex_zero_iff] at hs₀
obtain ⟨r, hr, h⟩ := hs₂
refine' le_cinfₛ ⟨r, hr, singleton_subset_iff.1 <| h _ (Real.norm_of_nonneg hr.le).ge⟩ _
rintro b ⟨hb, x, hx', rfl⟩
refine' not_lt.1 fun hba => hx _
have ha := hb.trans hba
refine' ⟨(a⁻¹ * b) • x, hs₀ hx' (by positivity) _, _⟩
· rw [← div_eq_inv_mul]
exact div_le_one_of_le hba.le ha.le
· rw [← mul_smul, mul_inv_cancel_left₀ ha.ne']
#align le_gauge_of_not_mem le_gauge_of_not_mem
theorem one_le_gauge_of_not_mem (hs₁ : StarConvex ℝ 0 s) (hs₂ : Absorbs ℝ s {x}) (hx : x ∉ s) :
1 ≤ gauge s x :=
le_gauge_of_not_mem hs₁ hs₂ <| by rwa [one_smul]
#align one_le_gauge_of_not_mem one_le_gauge_of_not_mem
section LinearOrderedField
variable {α : Type _} [LinearOrderedField α] [MulActionWithZero α ℝ] [OrderedSMul α ℝ]
theorem gauge_smul_of_nonneg [MulActionWithZero α E] [IsScalarTower α ℝ (Set E)] {s : Set E} {a : α}
(ha : 0 ≤ a) (x : E) : gauge s (a • x) = a • gauge s x :=
by
obtain rfl | ha' := ha.eq_or_lt
· rw [zero_smul, gauge_zero, zero_smul]
rw [gauge_def', gauge_def', ← Real.infₛ_smul_of_nonneg ha]
congr 1
ext r
simp_rw [Set.mem_smul_set, Set.mem_sep_iff]
constructor
· rintro ⟨hr, hx⟩
simp_rw [mem_Ioi] at hr⊢
rw [← mem_smul_set_iff_inv_smul_mem₀ hr.ne'] at hx
have := smul_pos (inv_pos.2 ha') hr
refine' ⟨a⁻¹ • r, ⟨this, _⟩, smul_inv_smul₀ ha'.ne' _⟩
rwa [← mem_smul_set_iff_inv_smul_mem₀ this.ne', smul_assoc,
mem_smul_set_iff_inv_smul_mem₀ (inv_ne_zero ha'.ne'), inv_inv]
· rintro ⟨r, ⟨hr, hx⟩, rfl⟩
rw [mem_Ioi] at hr⊢
rw [← mem_smul_set_iff_inv_smul_mem₀ hr.ne'] at hx
have := smul_pos ha' hr
refine' ⟨this, _⟩
rw [← mem_smul_set_iff_inv_smul_mem₀ this.ne', smul_assoc]
exact smul_mem_smul_set hx
#align gauge_smul_of_nonneg gauge_smul_of_nonneg
theorem gauge_smul_left_of_nonneg [MulActionWithZero α E] [SMulCommClass α ℝ ℝ]
[IsScalarTower α ℝ ℝ] [IsScalarTower α ℝ E] {s : Set E} {a : α} (ha : 0 ≤ a) :
gauge (a • s) = a⁻¹ • gauge s :=
by
obtain rfl | ha' := ha.eq_or_lt
· rw [inv_zero, zero_smul, gauge_of_subset_zero (zero_smul_set_subset _)]
ext
rw [gauge_def', Pi.smul_apply, gauge_def', ← Real.infₛ_smul_of_nonneg (inv_nonneg.2 ha)]
congr 1
ext r
simp_rw [Set.mem_smul_set, Set.mem_sep_iff]
constructor
· rintro ⟨hr, y, hy, h⟩
simp_rw [mem_Ioi] at hr⊢
refine' ⟨a • r, ⟨smul_pos ha' hr, _⟩, inv_smul_smul₀ ha'.ne' _⟩
rwa [smul_inv₀, smul_assoc, ← h, inv_smul_smul₀ ha'.ne']
· rintro ⟨r, ⟨hr, hx⟩, rfl⟩
rw [mem_Ioi] at hr⊢
have := smul_pos ha' hr
refine' ⟨smul_pos (inv_pos.2 ha') hr, r⁻¹ • x, hx, _⟩
rw [smul_inv₀, smul_assoc, inv_inv]
#align gauge_smul_left_of_nonneg gauge_smul_left_of_nonneg
theorem gauge_smul_left [Module α E] [SMulCommClass α ℝ ℝ] [IsScalarTower α ℝ ℝ]
[IsScalarTower α ℝ E] {s : Set E} (symmetric : ∀ x ∈ s, -x ∈ s) (a : α) :
gauge (a • s) = (|a|)⁻¹ • gauge s :=
by
rw [← gauge_smul_left_of_nonneg (abs_nonneg a)]
obtain h | h := abs_choice a
· rw [h]
· rw [h, Set.neg_smul_set, ← Set.smul_set_neg]
congr
ext y
refine' ⟨Symmetric _, fun hy => _⟩
rw [← neg_neg y]
exact Symmetric _ hy
· infer_instance
#align gauge_smul_left gauge_smul_left
end LinearOrderedField
section IsROrC
variable [IsROrC 𝕜] [Module 𝕜 E] [IsScalarTower ℝ 𝕜 E]
theorem gauge_norm_smul (hs : Balanced 𝕜 s) (r : 𝕜) (x : E) : gauge s (‖r‖ • x) = gauge s (r • x) :=
by
rw [@IsROrC.real_smul_eq_coe_smul 𝕜]
obtain rfl | hr := eq_or_ne r 0
· simp only [norm_zero, IsROrC.of_real_zero]
unfold gauge
congr with θ
refine' and_congr_right fun hθ => (hs.smul _).mem_smul_iff _
rw [IsROrC.norm_of_real, norm_norm]
#align gauge_norm_smul gauge_norm_smul
/-- If `s` is balanced, then the Minkowski functional is ℂ-homogeneous. -/
theorem gauge_smul (hs : Balanced 𝕜 s) (r : 𝕜) (x : E) : gauge s (r • x) = ‖r‖ * gauge s x :=
by
rw [← smul_eq_mul, ← gauge_smul_of_nonneg (norm_nonneg r), gauge_norm_smul hs]
infer_instance
#align gauge_smul gauge_smul
end IsROrC
section TopologicalSpace
variable [TopologicalSpace E] [ContinuousSMul ℝ E]
theorem interior_subset_gauge_lt_one (s : Set E) : interior s ⊆ { x | gauge s x < 1 } :=
by
intro x hx
let f : ℝ → E := fun t => t • x
have hf : Continuous f := by continuity
let s' := f ⁻¹' interior s
have hs' : IsOpen s' := hf.is_open_preimage _ isOpen_interior
have one_mem : (1 : ℝ) ∈ s' := by simpa only [s', f, Set.mem_preimage, one_smul]
obtain ⟨ε, hε₀, hε⟩ := (Metric.nhds_basis_closedBall.1 _).1 (isOpen_iff_mem_nhds.1 hs' 1 one_mem)
rw [Real.closedBall_eq_Icc] at hε
have hε₁ : 0 < 1 + ε := hε₀.trans (lt_one_add ε)
have : (1 + ε)⁻¹ < 1 := by
rw [inv_lt_one_iff]
right
linarith
refine' (gauge_le_of_mem (inv_nonneg.2 hε₁.le) _).trans_lt this
rw [mem_inv_smul_set_iff₀ hε₁.ne']
exact
interior_subset
(hε ⟨(sub_le_self _ hε₀.le).trans ((le_add_iff_nonneg_right _).2 hε₀.le), le_rfl⟩)
#align interior_subset_gauge_lt_one interior_subset_gauge_lt_one
theorem gauge_lt_one_eq_self_of_open (hs₁ : Convex ℝ s) (hs₀ : (0 : E) ∈ s) (hs₂ : IsOpen s) :
{ x | gauge s x < 1 } = s :=
by
refine' (gauge_lt_one_subset_self hs₁ ‹_› <| absorbent_nhds_zero <| hs₂.mem_nhds hs₀).antisymm _
convert interior_subset_gauge_lt_one s
exact hs₂.interior_eq.symm
#align gauge_lt_one_eq_self_of_open gauge_lt_one_eq_self_of_open
theorem gauge_lt_one_of_mem_of_open (hs₁ : Convex ℝ s) (hs₀ : (0 : E) ∈ s) (hs₂ : IsOpen s) {x : E}
(hx : x ∈ s) : gauge s x < 1 := by rwa [← gauge_lt_one_eq_self_of_open hs₁ hs₀ hs₂] at hx
#align gauge_lt_one_of_mem_of_open gauge_lt_one_of_mem_of_open
theorem gauge_lt_of_mem_smul (x : E) (ε : ℝ) (hε : 0 < ε) (hs₀ : (0 : E) ∈ s) (hs₁ : Convex ℝ s)
(hs₂ : IsOpen s) (hx : x ∈ ε • s) : gauge s x < ε :=
by
have : ε⁻¹ • x ∈ s := by rwa [← mem_smul_set_iff_inv_smul_mem₀ hε.ne']
have h_gauge_lt := gauge_lt_one_of_mem_of_open hs₁ hs₀ hs₂ this
rwa [gauge_smul_of_nonneg (inv_nonneg.2 hε.le), smul_eq_mul, inv_mul_lt_iff hε, mul_one] at
h_gauge_lt
infer_instance
#align gauge_lt_of_mem_smul gauge_lt_of_mem_smul
end TopologicalSpace
theorem gauge_add_le (hs : Convex ℝ s) (absorbs : Absorbent ℝ s) (x y : E) :
gauge s (x + y) ≤ gauge s x + gauge s y :=
by
refine' le_of_forall_pos_lt_add fun ε hε => _
obtain ⟨a, ha, ha', hx⟩ :=
exists_lt_of_gauge_lt Absorbs (lt_add_of_pos_right (gauge s x) (half_pos hε))
obtain ⟨b, hb, hb', hy⟩ :=
exists_lt_of_gauge_lt Absorbs (lt_add_of_pos_right (gauge s y) (half_pos hε))
rw [mem_smul_set_iff_inv_smul_mem₀ ha.ne'] at hx
rw [mem_smul_set_iff_inv_smul_mem₀ hb.ne'] at hy
suffices gauge s (x + y) ≤ a + b by linarith
have hab : 0 < a + b := add_pos ha hb
apply gauge_le_of_mem hab.le
have := convex_iff_div.1 hs hx hy ha.le hb.le hab
rwa [smul_smul, smul_smul, ← mul_div_right_comm, ← mul_div_right_comm, mul_inv_cancel ha.ne',
mul_inv_cancel hb.ne', ← smul_add, one_div, ← mem_smul_set_iff_inv_smul_mem₀ hab.ne'] at this
#align gauge_add_le gauge_add_le
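/- The key step above: with `x ∈ a • s` and `y ∈ b • s`, the point
`(a + b)⁻¹ • (x + y)` is the convex combination `(a / (a + b)) • (a⁻¹ • x) +
(b / (a + b)) • (b⁻¹ • y)` of two elements of `s`, hence lies in `s` by
`convex_iff_div`, which gives `gauge s (x + y) ≤ a + b`. -/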
section IsROrC
variable [IsROrC 𝕜] [Module 𝕜 E] [IsScalarTower ℝ 𝕜 E]
/-- `gauge s` as a seminorm when `s` is balanced, convex and absorbent. -/
@[simps]
def gaugeSeminorm (hs₀ : Balanced 𝕜 s) (hs₁ : Convex ℝ s) (hs₂ : Absorbent ℝ s) : Seminorm 𝕜 E :=
Seminorm.of (gauge s) (gauge_add_le hs₁ hs₂) (gauge_smul hs₀)
#align gauge_seminorm gaugeSeminorm
variable {hs₀ : Balanced 𝕜 s} {hs₁ : Convex ℝ s} {hs₂ : Absorbent ℝ s} [TopologicalSpace E]
[ContinuousSMul ℝ E]
theorem gaugeSeminorm_lt_one_of_open (hs : IsOpen s) {x : E} (hx : x ∈ s) :
gaugeSeminorm hs₀ hs₁ hs₂ x < 1 :=
gauge_lt_one_of_mem_of_open hs₁ hs₂.zero_mem hs hx
#align gauge_seminorm_lt_one_of_open gaugeSeminorm_lt_one_of_open
theorem gaugeSeminorm_ball_one (hs : IsOpen s) : (gaugeSeminorm hs₀ hs₁ hs₂).ball 0 1 = s :=
by
rw [Seminorm.ball_zero_eq]
exact gauge_lt_one_eq_self_of_open hs₁ hs₂.zero_mem hs
#align gauge_seminorm_ball_one gaugeSeminorm_ball_one
end IsROrC
/-- Any seminorm arises as the gauge of its unit ball. -/
@[simp]
protected theorem Seminorm.gauge_ball (p : Seminorm ℝ E) : gauge (p.ball 0 1) = p :=
by
ext
obtain hp | hp := { r : ℝ | 0 < r ∧ x ∈ r • p.ball 0 1 }.eq_empty_or_nonempty
· rw [gauge, hp, Real.infₛ_empty]
by_contra
have hpx : 0 < p x := (map_nonneg _ _).lt_of_ne h
have hpx₂ : 0 < 2 * p x := mul_pos zero_lt_two hpx
refine' hp.subset ⟨hpx₂, (2 * p x)⁻¹ • x, _, smul_inv_smul₀ hpx₂.ne' _⟩
rw [p.mem_ball_zero, map_smul_eq_mul, Real.norm_eq_abs, abs_of_pos (inv_pos.2 hpx₂),
inv_mul_lt_iff hpx₂, mul_one]
exact lt_mul_of_one_lt_left hpx one_lt_two
refine' IsGLB.cinfₛ_eq ⟨fun r => _, fun r hr => le_of_forall_pos_le_add fun ε hε => _⟩ hp
· rintro ⟨hr, y, hy, rfl⟩
rw [p.mem_ball_zero] at hy
rw [map_smul_eq_mul, Real.norm_eq_abs, abs_of_pos hr]
exact mul_le_of_le_one_right hr.le hy.le
· have hpε : 0 < p x + ε := by positivity
refine' hr ⟨hpε, (p x + ε)⁻¹ • x, _, smul_inv_smul₀ hpε.ne' _⟩
rw [p.mem_ball_zero, map_smul_eq_mul, Real.norm_eq_abs, abs_of_pos (inv_pos.2 hpε),
inv_mul_lt_iff hpε, mul_one]
exact lt_add_of_pos_right _ hε
#align seminorm.gauge_ball Seminorm.gauge_ball
theorem Seminorm.gaugeSeminorm_ball (p : Seminorm ℝ E) :
gaugeSeminorm (p.balanced_ball_zero 1) (p.convex_ball 0 1) (p.absorbent_ball_zero zero_lt_one) =
p :=
FunLike.coe_injective p.gauge_ball
#align seminorm.gauge_seminorm_ball Seminorm.gaugeSeminorm_ball
end AddCommGroup
section Norm
variable [SeminormedAddCommGroup E] [NormedSpace ℝ E] {s : Set E} {r : ℝ} {x : E}
theorem gauge_unit_ball (x : E) : gauge (Metric.ball (0 : E) 1) x = ‖x‖ :=
by
obtain rfl | hx := eq_or_ne x 0
· rw [norm_zero, gauge_zero]
refine' (le_of_forall_pos_le_add fun ε hε => _).antisymm _
· have : 0 < ‖x‖ + ε := by positivity
refine' gauge_le_of_mem this.le _
rw [smul_ball this.ne', smul_zero, Real.norm_of_nonneg this.le, mul_one, mem_ball_zero_iff]
exact lt_add_of_pos_right _ hε
refine'
le_gauge_of_not_mem balanced_ball_zero.star_convex (absorbent_ball_zero zero_lt_one).Absorbs
fun h => _
obtain hx' | hx' := eq_or_ne ‖x‖ 0
· rw [hx'] at h
exact hx (zero_smul_set_subset _ h)
· rw [mem_smul_set_iff_inv_smul_mem₀ hx', mem_ball_zero_iff, norm_smul, norm_inv, norm_norm,
inv_mul_cancel hx'] at h
exact lt_irrefl _ h
#align gauge_unit_ball gauge_unit_ball
theorem gauge_ball (hr : 0 < r) (x : E) : gauge (Metric.ball (0 : E) r) x = ‖x‖ / r :=
by
rw [← smul_unit_ball_of_pos hr, gauge_smul_left, Pi.smul_apply, gauge_unit_ball, smul_eq_mul,
abs_of_nonneg hr.le, div_eq_inv_mul]
simp_rw [mem_ball_zero_iff, norm_neg]
exact fun _ => id
#align gauge_ball gauge_ball
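/- Numerical instance: in `ℝ`, `gauge_ball` with `r = 2` and `x = 3` gives
`gauge (Metric.ball 0 2) 3 = 3 / 2`; indeed `3 ∈ r • Metric.ball 0 2` exactly
when `2 * r > 3`. -/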
theorem mul_gauge_le_norm (hs : Metric.ball (0 : E) r ⊆ s) : r * gauge s x ≤ ‖x‖ :=
by
obtain hr | hr := le_or_lt r 0
· exact (mul_nonpos_of_nonpos_of_nonneg hr <| gauge_nonneg _).trans (norm_nonneg _)
rw [mul_comm, ← le_div_iff hr, ← gauge_ball hr]
exact gauge_mono (absorbent_ball_zero hr) hs x
#align mul_gauge_le_norm mul_gauge_le_norm
end Norm
|
import data.real.irrational
import topology.basic
import algebra.order.floor
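-- Informal sketch of the argument attempted below: if two fractional parts
-- `int.fract (α * ↑i)` and `int.fract (α * ↑j)` coincided for `i ≠ j`, then
-- `α = (⌊α * i⌋ - ⌊α * j⌋) / (i - j)` would be rational, contradicting
-- `hα_irrat`; the orbit is therefore an infinite subset of `[0, 1]`, so it has
-- a limit point, and arbitrarily small positive differences of orbit points
-- then spread multiples of the orbit densely through `[0, 1]`.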
--OUTPUT 1
theorem irrational_orbit_dense {α : ℝ} (hα_irrat : irrational α) : closure ((λ m : ℤ, int.fract (α * ↑m)) '' (@set.univ ℤ)) = set.Icc 0 1 :=
begin
have h1 : ∀ (i j : ℤ), i ≠ j → (int.fract (α * ↑i)) ≠ (int.fract (α * ↑j)), from
assume (i j : ℤ) (hij : i ≠ j),
have h2 : α * ↑i - ↑(int.floor (α * ↑i)) = int.fract (α * ↑i), from by {
rw int.fract_def,
ring,
},
have h3 : α * ↑j - ↑(int.floor (α * ↑j)) = int.fract (α * ↑j), from by {
rw int.fract_def,
ring,
},
have h4 : α * ↑i - ↑(int.floor (α * ↑i)) = α * ↑j - ↑(int.floor (α * ↑j)), from by {
rw h2, rw h3,
},
have h5 : α = (int.floor (α * ↑i) - int.floor (α * ↑j)) / (i - j), from by {
rw ← int.fract_add_floor_eq_of_lt (α * ↑i) (by {norm_num}),
rw ← int.fract_add_floor_eq_of_lt (α * ↑j) (by {norm_num}),
rw h4,
rw int.fract_add_floor_eq_of_lt (α * ↑i) (by {norm_num}),
rw int.fract_add_floor_eq_of_lt (α * ↑j) (by {norm_num}),
ring,
},
have h6 : α ∈ ℚ, from by {
rw ← h5,
apply int.cast_ne_zero.1 hij,
},
have h7 : irrational α, from hα_irrat,
have h8 : ¬(α ∈ ℚ), from h7,
have h9 : false, from by {
apply h8 h6,
},
show (int.fract (α * ↑i)) ≠ (int.fract (α * ↑j)), from by {
apply h9,
},
have h2 : ∀ (i : ℤ), int.fract (α * ↑i) ∈ set.Icc 0 1, from
assume (i : ℤ),
have h3 : 0 ≤ int.fract (α * ↑i), from by {
rw int.fract_def,
apply int.cast_nonneg,
},
have h4 : int.fract (α * ↑i) < 1, from by {
rw int.fract_def,
have h5 : α * ↑i - ↑(int.floor (α * ↑i)) < 1, from by {
apply int.cast_lt.2,
rw ← int.coe_nat_lt_coe_nat_iff,
apply int.fract_lt_one,
},
have h6 : 0 < α * ↑i - ↑(int.floor (α * ↑i)), from by {
apply int.cast_pos,
rw ← int.coe_nat_lt_coe_nat_iff,
apply int.fract_pos,
norm_num,
},
have h7 : 0 ≤ ↑(int.floor (α * ↑i)), from by {
apply int.cast_nonneg,
},
linarith,
},
show int.fract (α * ↑i) ∈ set.Icc 0 1, from by {
apply set.mem_Icc.2,
split,
exact h3,
exact h4,
},
have h3 : ∀ (i j : ℤ), i ≠ j → int.fract (α * ↑i) ≠ int.fract (α * ↑j), from h1,
have h4 : ∀ (i j : ℤ), i ≠ j → int.fract (α * ↑i) ∉ set.range (λ (m : ℤ), int.fract (α * ↑m)), from
assume (i j : ℤ) (hij : i ≠ j),
have h5 : int.fract (α * ↑i) ≠ int.fract (α * ↑j), from h3 i j hij,
show int.fract (α * ↑i) ∉ set.range (λ (m : ℤ), int.fract (α * ↑m)), from by {
rw set.mem_range,
rw set.mem_range at h5,
exact h5,
},
have h5 : ∀ (i : ℤ), int.fract (α * ↑i) ∉ set.range (λ (m : ℤ), int.fract (α * ↑m)), from
assume (i : ℤ),
have h6 : ∃ (j : ℤ), i ≠ j, from by {
have h7 : ∀ (j : ℤ), i ≠ j → ∃ (k : ℤ), i ≠ k, from by {
assume (j : ℤ) (hij : i ≠ j),
have h8 : ∃ (k : ℤ), k ≠ j, from by {
use j,
norm_num,
},
cases h8 with k hk,
use k,
exact hk,
},
have h9 : ∃ (j : ℤ), ∀ (k : ℤ), i ≠ k → i ≠ j, from by {
use i,
assume (k : ℤ) (hik : i ≠ k),
exact hik,
},
cases h9 with j hj,
use j,
exact hj j (hj j),
},
cases h6 with j hj,
exact h4 i j hj,
have h6 : ∀ (i : ℤ), int.fract (α * ↑i) ∈ closure (set.range (λ (m : ℤ), int.fract (α * ↑m))), from
assume (i : ℤ),
have h7 : ∀ (j : ℤ), i ≠ j → int.fract (α * ↑i) ∈ closure (set.range (λ (m : ℤ), int.fract (α * ↑m))), from by {
assume (j : ℤ) (hij : i ≠ j),
have h8 : int.fract (α * ↑i) ∈ set.range (λ (m : ℤ), int.fract (α * ↑m)), from by {
rw set.mem_range,
use i,
refl,
},
have h9 : int.fract (α * ↑i) ∉ closure (set.range (λ (m : ℤ), int.fract (α * ↑m))), from by {
rw set.mem_closure_iff,
rw set.mem_range at h8,
cases h8 with k hk,
rw hk,
use int.fract (α * ↑k),
assume h10,
have h11 : int.fract (α * ↑i) ∉ set.range (λ (m : ℤ), int.fract (α * ↑m)), from by {
rw set.mem_range at h10,
cases h10 with l hl,
rw hl,
end --Needs more than 2000 tokens!
--OUTPUT 2
theorem irrational_orbit_dense {α : ℝ} (hα_irrat : irrational α) : closure ((λ m : ℤ, int.fract (α * ↑m)) '' (@set.univ ℤ)) = set.Icc 0 1 :=
begin
have h1 : ∀ i j : ℤ, i ≠ j → int.fract (α * ↑i) ≠ int.fract (α * ↑j), from by {
assume i j h,
assume h2 : int.fract (α * ↑i) = int.fract (α * ↑j),
have h3 : (α * ↑i) - (int.floor (α * ↑i)) = (α * ↑j) - (int.floor (α * ↑j)),
from by {rw h2, ring},
have h6 : α = ((int.floor (α * ↑i)) - (int.floor (α * ↑j))) / (i - j),
from by {rw h3, ring},
have h7 : ((int.floor (α * ↑i)) - (int.floor (α * ↑j))) / (i - j) ∈ ℚ,
from by {apply int.cast_div,},
have h8 : α ∈ ℚ, from by {rw h6, exact h7,},
have h9 : irrational α, from hα_irrat,
contradiction,
},
have h2 : set.finite (set.range (λ (m : ℤ), int.fract (α * ↑m))), from by {
apply set.finite_range,
},
have h3 : ∀ x : ℤ, x ≠ 0 → int.fract (α * ↑x) ≠ 0, from by {
assume x h,
assume h2 : int.fract (α * ↑x) = 0,
have h3 : (α * ↑x) - (int.floor (α * ↑x)) = 0,
from by {rw h2, ring},
have h5 : α = (int.floor (α * ↑x)) / x,
from by {rw h3, ring},
have h6 : (int.floor (α * ↑x)) / x ∈ ℚ,
from by {apply int.cast_div,},
have h7 : α ∈ ℚ, from by {rw h5, exact h6,},
have h8 : irrational α, from hα_irrat,
contradiction,
},
have h4 : ∀ x : ℤ, x ≠ 0 → int.fract (α * ↑x) ≠ 1, from by {
assume x h,
assume h2 : int.fract (α * ↑x) = 1,
have h3 : (α * ↑x) - (int.floor (α * ↑x)) = 1,
from by {rw h2, ring},
have h5 : α = (int.floor (α * ↑x) + 1) / x,
from by {rw h3, ring},
have h6 : (int.floor (α * ↑x) + 1) / x ∈ ℚ,
from by {apply int.cast_div,},
have h7 : α ∈ ℚ, from by {rw h5, exact h6,},
have h8 : irrational α, from hα_irrat,
contradiction,
},
have h5 : ∀ x : ℤ, x ≠ 0 → int.fract (α * ↑x) ∈ set.Icc 0 1, from by {
assume x h,
have h2 : int.fract (α * ↑x) ≠ 0, from h3 x h,
have h3 : int.fract (α * ↑x) ≠ 1, from h4 x h,
rw set.mem_Icc,
split,
exact h2,
exact h3,
},
have h6 : ∀ x : ℤ, x ≠ 0 → int.fract (α * ↑x) ∈ set.range (λ (m : ℤ), int.fract (α * ↑m)), from by {
assume x h,
use x,
simp [h],
},
have h7 : ∀ x : ℤ, x ≠ 0 → int.fract (α * ↑x) ∈ (set.range (λ (m : ℤ), int.fract (α * ↑m))) ∩ set.Icc 0 1, from by {
assume x h,
split,
exact h6 x h,
exact h5 x h,
},
have h8 : ∀ x : ℤ, x ≠ 0 → int.fract (α * ↑x) ∈ closure ((λ (m : ℤ), int.fract (α * ↑m)) '' (@set.univ ℤ)), from by {
assume x h,
have h2 : int.fract (α * ↑x) ∈ (set.range (λ (m : ℤ), int.fract (α * ↑m))) ∩ set.Icc 0 1, from h7 x h,
have h3 : (set.range (λ (m : ℤ), int.fract (α * ↑m))) ∩ set.Icc 0 1 ⊆ closure ((λ (m : ℤ), int.fract (α * ↑m)) '' (@set.univ ℤ)), from by {
assume x h,
have h2 : x ∈ (set.range (λ (m : ℤ), int.fract (α * ↑m))) ∩ set.Icc 0 1, from h,
have h3 : x ∈ set.range (λ (m : ℤ), int.fract (α * ↑m)), from h2.left,
have h4 : x ∈ set.Icc 0 1, from h2.right,
rw set.mem_closure,
use (set.range (λ (m : ℤ), int.fract (α * ↑m))),
split,
exact h3,
use set.univ,
split,
exact set.mem_univ x,
exact h4,
},
exact h3 h2,
},
have h9 : ∀ x : ℤ, x ≠ 0 → int.fract (α * ↑x) ∈ closure ((λ (m : ℤ), int.fract (α * ↑m)) '' (@set.univ ℤ)), from h8,
have h10 : ∀ x : ℤ, x ≠ 0 → int.fract (α * ↑x) ∈ set.Icc 0 1, from by {
assume x h,
have h2 : int.fract (α * ↑x) ∈ closure ((λ (m : ℤ), int.fract (α * ↑m)) '' (@set.univ ℤ)), from h9 x h,
have h3 : closure ((λ (m : ℤ), int.fract (α * ↑m)) '' (@set.univ ℤ)) ⊆ set.Icc 0 1, from by {
assume x h,
have h2 : x ∈ closure ((λ (m : ℤ), int.fract (α * ↑m)) '' (@set.univ ℤ)), from h,
rw set.mem_Icc,
rw set.mem_closure at h2,
cases h2 with (h2_1 : x ∈ (λ (m : ℤ), int.fract (α * ↑m)) '' (@set.univ ℤ)) (h2_2 : ∀ ε > 0, ∃ (y : ℤ), y
end --Needs more than 2000 tokens!
--OUTPUT 3
theorem irrational_orbit_dense {α : ℝ} (hα_irrat : irrational α) : closure ((λ m : ℤ, int.fract (α * ↑m)) '' (@set.univ ℤ)) = set.Icc 0 1 :=
begin
have h1 : ∀ i j : ℤ, i ≠ j → (int.fract (α * ↑i)) ≠ (int.fract (α * ↑j)), from by {
assume (i j : ℤ) (h : i ≠ j),
have h1 : (int.fract (α * ↑i)) = (int.fract (α * ↑j)) → α = (i - j)⁻¹ * (int.nat_abs (α * ↑i) - int.nat_abs (α * ↑j)) ∈ ℚ, from by {
assume h2 : (int.fract (α * ↑i)) = (int.fract (α * ↑j)),
rw [h2,int.fract_mul,int.fract_mul] at h2,
rw [int.fract_eq_iff_nat_abs_sub_lt_one,int.fract_eq_iff_nat_abs_sub_lt_one] at h2,
have h3 : α = (i - j)⁻¹ * (int.nat_abs (α * ↑i) - int.nat_abs (α * ↑j)), from by {
rw [int.mul_sub, int.mul_sub, int.
end --Needs more than 2000 tokens!
--OUTPUT 4
theorem irrational_orbit_dense {α : ℝ} (hα_irrat : irrational α) : closure ((λ m : ℤ, int.fract (α * ↑m)) '' (@set.univ ℤ)) = set.Icc 0 1 :=
begin
have h1 : ∀ (i j : ℤ), i ≠ j → int.fract (α * ↑i) ≠ int.fract (α * ↑j), from by {
assume (i j : ℤ) (hne : i ≠ j),
have h2 : α ∉ set.range (λ (i : ℤ), ↑i / ↑(i - j)), from by {
assume h3 : α ∈ set.range (λ (i : ℤ), ↑i / ↑(i - j)),
cases h3 with i0 h4,
have h5 : α = ↑i0 / ↑(i0 - j), from by {rw ← h4, refl},
have h6 : ↑i0 / ↑(i0 - j) ∈ ℚ, from by {
have h7 : ↑i0 / ↑(i0 - j) = ↑i0 / ↑(i0 - j), from by refl,
have h8 : ↑i0 / ↑(i0 - j) ∈ ℝ, from by {rw h7, apply_instance},
have h9 : ↑i0 / ↑(i0 - j) ∈ ℚ, from by {rw h7, apply_instance},
exact h9,
},
have h10 : ↑i0 / ↑(i0 - j) = ↑i / ↑(i - j), from by rw h5,
have h11 : ↑i0 / ↑(i0 - j) = α, from by {rw h10, refl},
have h12 : α ∈ ℚ, from by {rw h11, exact h6},
have h13 : α ∉ ℚ, from by {exact hα_irrat},
contradiction,
},
have h3 : ∀ (i : ℤ), i ≠ j → α ≠ ↑i / ↑(i - j), from by {
assume (i : ℤ) (hne : i ≠ j),
have h4 : ↑i / ↑(i - j) ∈ set.range (λ (i : ℤ), ↑i / ↑(i - j)), from by {
use i,
refl,
},
have h5 : ¬(↑i / ↑(i - j) = α), from by {
assume h6 : ↑i / ↑(i - j) = α,
have h7 : α ∈ set.range (λ (i : ℤ), ↑i / ↑(i - j)), from by {rw ← h6, exact h4},
contradiction,
},
exact h5,
},
end --Needs more than 2000 tokens!
--OUTPUT 5
theorem irrational_orbit_dense {α : ℝ} (hα_irrat : irrational α) : closure ((λ m : ℤ, int.fract (α * ↑m)) '' (@set.univ ℤ)) = set.Icc 0 1 :=
begin
have h1 : ∀ (i j : ℤ), i ≠ j → int.fract (α * ↑i) ≠ int.fract (α * ↑j), from begin
assume (i j : ℤ) (hij : i ≠ j),
have h2 : (α * ↑i) - (int.floor (α * ↑i)) = int.fract (α * ↑i), from by {rw int.fract_def, ring},
have h3 : (α * ↑j) - (int.floor (α * ↑j)) = int.fract (α * ↑j), from by {rw int.fract_def, ring},
have h4 : (α * ↑i) - (int.floor (α * ↑i)) = (α * ↑j) - (int.floor (α * ↑j)), from by {rw [h2, h3]},
have h5 : α = (int.floor (α * ↑i) - int.floor (α * ↑j)) / (i - j), from by {rw [h4, div_sub_div_same, div_self hij], ring},
have h6 : α ∈ ℚ, from by {apply q_of_rat, exact h5},
have h7 : irrational α, from hα_irrat,
have h8 : ¬ (α ∈ ℚ), from h7,
have h9 : ¬ (α ∈ ℚ), from h8,
exact absurd h6 h9,
end,
have h2 : ∀ (i j : ℤ), i ≠ j → int.fract (α * ↑i) ≠ int.fract (α * ↑j), from h1,
have h3 : ∀ (i j : ℤ), i ≠ j → int.fract (α * ↑i) - int.fract (α * ↑j) ≠ 0, from begin
assume (i j : ℤ) (hij : i ≠ j),
have h4 : int.fract (α * ↑i) - int.fract (α * ↑j) = int.fract (α * ↑i) + - int.fract (α * ↑j), from by ring,
have h5 : int.fract (α * ↑i) - int.fract (α * ↑j) = int.fract (α * ↑i) + - (int.fract (α * ↑j)), from by {rw h4},
have h6 : int.fract (α * ↑i) - int.fract (α * ↑j) = int.fract (α * ↑i) + int.fract (-(α * ↑j)), from by {rw ← int.fract_neg, rw h5},
have h7 : int.fract (α * ↑i) - int.fract (α * ↑j) = int.fract (α * ↑i - (α * ↑j)), from by {rw h6},
have h8 : int.fract (α * ↑i) - int.fract (α * ↑j) = int.fract (α * (↑i - ↑j)), from by {rw h7},
have h9 : int.fract (α * ↑i) - int.fract (α * ↑j) = int.fract (α * (i - j)), from by {rw h8},
end --Needs more than 2000 tokens!
--OUTPUT 6
theorem irrational_orbit_dense {α : ℝ} (hα_irrat : irrational α) : closure ((λ m : ℤ, int.fract (α * ↑m)) '' (@set.univ ℤ)) = set.Icc 0 1 :=
begin
have h1 : ∀ i j : ℤ, i ≠ j → (int.fract (α * ↑i)) ≠ (int.fract (α * ↑j)), from by {
assume i j hne,
have h2 : (α * ↑i) - ↑(int.floor (α * ↑i)) = (int.fract (α * ↑i)), from by {rw int.fract_eq_sub_floor},
have h3 : (α * ↑j) - ↑(int.floor (α * ↑j)) = (int.fract (α * ↑j)), from by {rw int.fract_eq_sub_floor},
have h4 : (int.fract (α * ↑i)) = (int.fract (α * ↑j)), from by {rw h2, rw h3, linarith},
have h5 : (α * ↑i) - ↑(int.floor (α * ↑i)) = (α * ↑j) - ↑(int.floor (α * ↑j)), from by {rw h4},
have h6 : (α * ↑i) - (α * ↑j) = ↑(int.floor (α * ↑i)) - ↑(int.floor (α * ↑j)), from by {linarith},
have h7 : α = (↑(int.floor (α * ↑i)) - ↑(int.floor (α * ↑j))) / ↑(i - j), from by {rw h6, rw mul_comm α i, rw mul_comm α j, rw mul_sub, rw mul_sub, rw div_eq_mul_inv, rw mul_comm, rw mul_assoc, rw mul_comm
end --Needs more than 2000 tokens!
/- FEW SHOT PROMPTS TO CODEX(START)
/--`theorem`
Power Set is Closed under Intersection
Let $S$ be a set.
Let $\powerset S$ be the power set of $S$.
Then:
:$\forall A, B \in \powerset S: A \cap B \in \powerset S$
`proof`
Let $A, B \in \powerset S$.
Then by the definition of power set, $A \subseteq S$ and $B \subseteq S$.
From Intersection is Subset we have that $A \cap B \subseteq A$.
It follows from Subset Relation is Transitive that $A \cap B \subseteq S$.
Thus $A \cap B \in \powerset S$ and closure is proved.
{{qed}}
-/
theorem power_set_intersection_closed {α : Type*} (S : set α) : ∀ A B ∈ 𝒫 S, (A ∩ B) ∈ 𝒫 S :=
begin
assume (A : set α) (hA : A ∈ 𝒫 S) (B : set α) (hB : B ∈ 𝒫 S),
have h1 : (A ⊆ S) ∧ (B ⊆ S), from by {split,apply set.subset_of_mem_powerset,exact hA,apply set.subset_of_mem_powerset,exact hB},
have h2 : (A ∩ B) ⊆ A, from by apply set.inter_subset_left,
have h3 : (A ∩ B) ⊆ S, from by {apply set.subset.trans h2 h1.left},
show (A ∩ B) ∈ 𝒫 S, from by {apply set.mem_powerset h3},
end
/--`theorem`
Square of Sum
:$\forall x, y \in \R: \paren {x + y}^2 = x^2 + 2 x y + y^2$
`proof`
Follows from the distribution of multiplication over addition:
{{begin-eqn}}
{{eqn | l = \left({x + y}\right)^2
| r = \left({x + y}\right) \cdot \left({x + y}\right)
}}
{{eqn | r = x \cdot \left({x + y}\right) + y \cdot \left({x + y}\right)
| c = Real Multiplication Distributes over Addition
}}
{{eqn | r = x \cdot x + x \cdot y + y \cdot x + y \cdot y
| c = Real Multiplication Distributes over Addition
}}
{{eqn | r = x^2 + 2xy + y^2
| c =
}}
{{end-eqn}}
{{qed}}
-/
theorem square_of_sum (x y : ℝ) : (x + y)^2 = (x^2 + 2*x*y + y^2) :=
begin
calc (x + y)^2 = (x+y)*(x+y) : by rw sq
... = x*(x+y) + y*(x+y) : by rw add_mul
... = x*x + x*y + y*x + y*y : by {rw [mul_comm x (x+y),mul_comm y (x+y)], rw [add_mul,add_mul], ring}
... = x^2 + 2*x*y + y^2 : by {repeat {rw ← sq}, rw mul_comm y x, ring}
end
/--`theorem`
Identity of Group is Unique
Let $\struct {G, \circ}$ be a group. Then there is a unique identity element $e \in G$.
`proof`
From Group has Latin Square Property, there exists a unique $x \in G$ such that:
:$a x = b$
and there exists a unique $y \in G$ such that:
:$y a = b$
Setting $b = a$, this becomes:
There exists a unique $x \in G$ such that:
:$a x = a$
and there exists a unique $y \in G$ such that:
:$y a = a$
These $x$ and $y$ are both $e$, by definition of identity element.
{{qed}}
-/
theorem group_identity_unique {G : Type*} [group G] : ∃! e : G, ∀ a : G, e * a = a ∧ a * e = a :=
begin
have h1 : ∀ a b : G, ∃! x : G, a * x = b, from by {
assume a b : G, use a⁻¹ * b, obviously, },
have h2 : ∀ a b : G, ∃! y : G, y * a = b, from by {
assume a b : G, use b * a⁻¹, obviously, },
have h3 : ∀ a : G, ∃! x : G, a * x = a, from
assume a : G, h1 a a,
have h4 : ∀ a : G, ∃! y : G, y * a = a, from
assume a : G, h2 a a,
have h5 : ∀ a : G, classical.some (h3 a).exists = (1 : G), from assume a :G,
exists_unique.unique (h3 a) (classical.some_spec (exists_unique.exists (h3 a)))
(mul_one a),
have h6 : ∀ a : G, classical.some (h4 a).exists = (1 : G), from assume a : G,
exists_unique.unique (h4 a) (classical.some_spec (exists_unique.exists (h4 a))) (one_mul a),
show ∃! e : G, ∀ a : G, e * a = a ∧ a * e = a, from by {
use (1 : G),
have h7 : ∀ e : G, (∀ a : G, e * a = a ∧ a * e = a) → e = 1, from by {
assume (e : G) (hident : ∀ a : G, e * a = a ∧ a * e = a),
have h8 : ∀ a : G, e = classical.some (h3 a).exists, from assume (a : G),
exists_unique.unique (h3 a) (hident a).right
(classical.some_spec (exists_unique.exists (h3 a))),
have h9 : ∀ a : G, e = classical.some (h4 a).exists, from assume (a : G),
exists_unique.unique (h4 a) (hident a).left
(classical.some_spec (exists_unique.exists (h4 a))),
show e = (1 : G), from eq.trans (h9 e) (h6 _),
},
exact ⟨by obviously, h7⟩,
}
end
/--`theorem`
Squeeze Theorem for Real Numbers
Let $\sequence {x_n}$, $\sequence {y_n}$ and $\sequence {z_n}$ be sequences in $\R$.
Let $\sequence {y_n}$ and $\sequence {z_n}$ both be convergent to the following limit:
:$\ds \lim_{n \mathop \to \infty} y_n = l, \lim_{n \mathop \to \infty} z_n = l$
Suppose that:
:$\forall n \in \N: y_n \le x_n \le z_n$
Then:
:$x_n \to l$ as $n \to \infty$
that is:
:$\ds \lim_{n \mathop \to \infty} x_n = l$
`proof`
From Negative of Absolute Value:
:$\size {x - l} < \epsilon \iff l - \epsilon < x < l + \epsilon$
Let $\epsilon > 0$.
We need to prove that:
:$\exists N: \forall n > N: \size {x_n - l} < \epsilon$
As $\ds \lim_{n \mathop \to \infty} y_n = l$ we know that:
:$\exists N_1: \forall n > N_1: \size {y_n - l} < \epsilon$
As $\ds \lim_{n \mathop \to \infty} z_n = l$ we know that:
:$\exists N_2: \forall n > N_2: \size {z_n - l} < \epsilon$
Let $N = \max \set {N_1, N_2}$.
Then if $n > N$, it follows that $n > N_1$ and $n > N_2$.
So:
:$\forall n > N: l - \epsilon < y_n < l + \epsilon$
:$\forall n > N: l - \epsilon < z_n < l + \epsilon$
But:
:$\forall n \in \N: y_n \le x_n \le z_n$
So:
:$\forall n > N: l - \epsilon < y_n \le x_n \le z_n < l + \epsilon$
and so:
:$\forall n > N: l - \epsilon < x_n < l + \epsilon$
So:
:$\forall n > N: \size {x_n - l} < \epsilon$
Hence the result.
{{qed}}
-/
theorem squeeze_theorem_real_numbers (x y z : ℕ → ℝ) (l : ℝ) :
let seq_limit : (ℕ → ℝ) → ℝ → Prop := λ (u : ℕ → ℝ) (l : ℝ), ∀ ε > 0, ∃ N, ∀ n > N, |u n - l| < ε in
seq_limit y l → seq_limit z l → (∀ n : ℕ, (y n) ≤ (x n) ∧ (x n) ≤ (z n)) → seq_limit x l :=
begin
assume seq_limit (h2 : seq_limit y l) (h3 : seq_limit z l) (h4 : ∀ (n : ℕ), y n ≤ x n ∧ x n ≤ z n) (ε),
have h5 : ∀ x, |x - l| < ε ↔ (((l - ε) < x) ∧ (x < (l + ε))),
from by
{
intro x0,
have h6 : |x0 - l| < ε ↔ ((x0 - l) < ε) ∧ ((l - x0) < ε),
from abs_sub_lt_iff, rw h6,
split,
rintro ⟨ S_1, S_2 ⟩,
split; linarith,
rintro ⟨ S_3, S_4 ⟩,
split; linarith,
},
assume (h7 : ε > 0),
cases h2 ε h7 with N1 h8,
cases h3 ε h7 with N2 h9,
let N := max N1 N2,
use N,
have h10 : ∀ n > N, n > N1 ∧ n > N2 := by {
assume n h,
split,
exact lt_of_le_of_lt (le_max_left N1 N2) h,
exact lt_of_le_of_lt (le_max_right N1 N2) h,
},
have h11 : ∀ n > N, (((l - ε) < (y n)) ∧ ((y n) ≤ (x n))) ∧ (((x n) ≤ (z n)) ∧ ((z n) < l+ε)),
from by {
intros n h12,
split,
{
have h13 := (h8 n (h10 n h12).left), rw h5 (y n) at h13,
split,
exact h13.left,
exact (h4 n).left,
},
{
have h14 := (h9 n (h10 n h12).right),rw h5 (z n) at h14,
split,
exact (h4 n).right,
exact h14.right,
},
},
have h15 : ∀ n > N, ((l - ε) < (x n)) ∧ ((x n) < (l+ε)),
from by {
intros n1 h16, cases (h11 n1 h16);
split; linarith,
},
show ∀ (n : ℕ), n > N → |x n - l| < ε,
from by {
intros n h17,
cases h5 (x n) with h18 h19,
apply h19, exact h15 n h17,
},
end
/--`theorem`
Density of irrational orbit
The fractional parts of the integer multiples of an irrational number form a dense subset of the unit interval
`proof`
Let $\alpha$ be an irrational number. Then for distinct $i, j \in \mathbb{Z}$, we must have $\{i \alpha\} \neq\{j \alpha\}$. If this were not true, then
$$
i \alpha-\lfloor i \alpha\rfloor=\{i \alpha\}=\{j \alpha\}=j \alpha-\lfloor j \alpha\rfloor,
$$
which yields the false statement $\alpha=\frac{\lfloor i \alpha\rfloor-\lfloor j \alpha\rfloor}{i-j} \in \mathbb{Q}$. Hence,
$$
S:=\{\{i \alpha\} \mid i \in \mathbb{Z}\}
$$
is an infinite subset of $\left[0,1\right]$.
By the Bolzano-Weierstrass theorem, $S$ has a limit point in $[0, 1]$. One can thus find pairs of elements of $S$ that are arbitrarily close. Since (the absolute value of) the difference of any two elements of $S$ is also an element of $S$, it follows that $0$ is a limit point of $S$.
To show that $S$ is dense in $[0, 1]$, consider $y \in[0,1]$, and $\epsilon>0$. Then by selecting $x \in S$ such that $\{x\}<\epsilon$ (which exists as $0$ is a limit point), and $N$ such that $N \cdot\{x\} \leq y<(N+1) \cdot\{x\}$, we get: $|y-\{N x\}|<\epsilon$.
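Indeed, writing $x$ for the chosen element (which is its own fractional part), the choice of $N$ gives $0 \leq y - N x < (N+1) x - N x = x < \epsilon$.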
QED
-/
theorem irrational_orbit_dense {α : ℝ} (hα_irrat : irrational α) : closure ((λ m : ℤ, int.fract (α * ↑m)) '' (@set.univ ℤ)) = set.Icc 0 1 :=
FEW SHOT PROMPTS TO CODEX(END)-/
|
[STATEMENT]
lemma has_fps_expansion_tan' [fps_expansion_intros]:
"tan has_fps_expansion fps_tan (1 :: 'a :: {banach, real_normed_field, field_char_0})"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. tan has_fps_expansion fps_tan (1::'a)
[PROOF STEP]
using has_fps_expansion_tan[of 1]
[PROOF STATE]
proof (prove)
using this:
(\<lambda>x. tan ((1::?'b1) * x)) has_fps_expansion fps_tan (1::?'b1)
goal (1 subgoal):
1. tan has_fps_expansion fps_tan (1::'a)
[PROOF STEP]
by simp |
(** Coq coding by choukh, Feb 2021 **)
Require Export ZFC.Axiom.ZFC0.
Declare Scope class_scope.
Open Scope class_scope.
(* Classes *)
Notation Class := (set → Prop).
(* Class membership *)
Definition InClass := λ x (C : Class), C x.
Notation "x ⋵ C" := (InClass x C) (at level 70) : class_scope.
Global Hint Unfold InClass : core.
Notation "∀ x .. y ⋵ A , P" :=
(∀ x, x ⋵ A → (.. (∀ y, y ⋵ A → P) ..))
(at level 200, x binder, right associativity) : class_scope.
Notation "∃ x .. y ⋵ A , P" :=
(∃ x, x ⋵ A ∧ (.. (∃ y, y ⋵ A ∧ P) ..))
(at level 200, x binder, right associativity) : class_scope.
Notation "'∃' ! x .. y ⋵ A , P" :=
(∃! x, x ⋵ A ∧ (.. (∃! y, y ⋵ A ∧ P) ..))
(at level 200, x binder, right associativity,
format "'[' '∃' ! '/ ' x .. y ⋵ A , '/ ' P ']'") : class_scope.
(* Classes that can be sets *)
Definition is_set := λ C, ∃ A, ∀ x, x ∈ A ↔ x ⋵ C.
(* Subclass *)
Notation "C ⫃ D" := (∀ x, x ⋵ C → x ⋵ D) (at level 70) : class_scope.
(* Subset of a class *)
Notation "A ⪽ C" := (∀ x, x ∈ A → x ⋵ C) (at level 70) : class_scope.
(* Class function *)
Notation "F :ᶜ D ⇒ R" := (∀x ⋵ D, F x ⋵ R) (at level 60) : class_scope.
(* Class injection *)
Definition class_injective := λ (F : set → set) D,
∀ x y ⋵ D, F x = F y → x = y.
Definition class_injection := λ F D R,
F :ᶜ D ⇒ R ∧ class_injective F D.
Notation "F :ᶜ D ⇔ R" := (class_injection F D R) (at level 60) : class_scope.
(* Class surjection *)
Definition class_surjective := λ F D R,
∀y ⋵ R, ∃x ⋵ D, F x = y.
Definition class_surjection := λ F D R,
F :ᶜ D ⇒ R ∧ class_surjective F D R.
Notation "F :ᶜ D ⟹ R" := (class_surjection F D R) (at level 60) : class_scope.
(* Class bijection *)
Definition class_bijective := λ F D R,
class_injective F D ∧ class_surjective F D R.
Definition class_bijection := λ F D R,
F :ᶜ D ⇒ R ∧ class_bijective F D R.
Notation "F :ᶜ D ⟺ R" := (class_bijection F D R) (at level 60) : class_scope.
|
#pragma once
#include <vector>
#include <gsl/gsl>
#include "halley/utils/utils.h"
namespace Halley
{
class NetworkPacketBase
{
public:
size_t copyTo(gsl::span<gsl::byte> dst) const;
size_t getSize() const;
gsl::span<const gsl::byte> getBytes() const;
NetworkPacketBase(NetworkPacketBase&& other) = delete;
NetworkPacketBase& operator=(NetworkPacketBase&& other) = delete;
protected:
NetworkPacketBase();
NetworkPacketBase(gsl::span<const gsl::byte> data, size_t prePadding);
size_t dataStart;
std::vector<gsl::byte> data;
};
class OutboundNetworkPacket : public NetworkPacketBase
{
public:
OutboundNetworkPacket(const OutboundNetworkPacket& other);
explicit OutboundNetworkPacket(OutboundNetworkPacket&& other) noexcept;
explicit OutboundNetworkPacket(gsl::span<const gsl::byte> data);
explicit OutboundNetworkPacket(const Bytes& data);
void addHeader(gsl::span<const gsl::byte> src);
template <typename T>
void addHeader(const T& h)
{
addHeader(gsl::as_bytes(gsl::span<const T>(&h, 1)));
}
OutboundNetworkPacket& operator=(OutboundNetworkPacket&& other) noexcept;
};
class InboundNetworkPacket : public NetworkPacketBase
{
public:
InboundNetworkPacket();
explicit InboundNetworkPacket(InboundNetworkPacket&& other);
explicit InboundNetworkPacket(gsl::span<const gsl::byte> data);
void extractHeader(gsl::span<gsl::byte> dst);
template <typename T>
void extractHeader(T& h)
{
extractHeader(gsl::as_writeable_bytes(gsl::span<T>(&h, 1)));
}
InboundNetworkPacket& operator=(InboundNetworkPacket&& other);
};
}
|
Set Implicit Arguments.
Class ORDER A := Order {
LEQ : A -> A -> bool;
leqRefl: forall x, true = LEQ x x;
leqTransTrue : forall x y z,
true = LEQ x y ->
true = LEQ y z ->
true = LEQ x z;
leqTransFalse : forall x y z,
false = LEQ x y ->
false = LEQ y z ->
false = LEQ x z;
leqSymm : forall x y,
false = LEQ x y ->
true = LEQ y x
}.
(*Extract Inductive bool => "Prelude.Bool" ["Prelude.True" "Prelude.False"].*)
|
[STATEMENT]
lemma quote_all_empty [simp]: "quote_all p {} = {}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. quote_all p {} = {}
[PROOF STEP]
by (simp add: quote_all_def) |
Address correspondence and reprint requests to Eleanor Colle, MD, Room E-316, Montreal Children's Hospital, 2300 Tupper, Montreal, Quebec, H3H 1P3 Canada.
We describe the phenotypic characteristics of animals in the fifth backcross-intercross generation of a breeding program in which the RT1 u haplotype and the phenotypic trait responsible for the T-lymphopenia of BB rats have been transferred to the ACI background. In this generation of animals, 24% were lymphopenic with decreased numbers of PBL expressing CD5, TCRα, and RT6. The PBL of the lymphopenic animals had a decreased mitogenic response to ConA. All of the nonlymphopenic animals were homozygous for RT6.2. Phenotypic analysis of intestinal IEL revealed that this was also the case for the lymphopenic animals. Moreover, IEL of the lymphopenic animals exhibited a pattern of staining (increased numbers of TCRαβ+CD4+CD8+ and decreased numbers of TCRαβ+CD4−CD8+) similar to that of BB DP animals. The ACI.1U(BB)-lymphopenic animals, although having two of the genetic traits associated with the expression of spontaneous diabetes mellitus, uniformly fail to develop diabetes. Breeding studies in which these animals were crossed with BB and hBB rats suggest that other genes are necessary for development of overt diabetes.
Revision received July 9, 1992. |
MISSES' DRESSES: Wrap dresses have a loose-fitting bodice, front extending into back collar, front shoulder pleats, self-lined midriff, semi-fitted, pleated skirt, shaped hemline and hook & eye closure. A and B: topstitching.
|
[STATEMENT]
lemma f_Exec_Stream_Acc_Output_Nil[simp]: "
f_Exec_Comp_Stream_Acc_Output k output_fun trans_fun [] c = []"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. f_Exec_Comp_Stream_Acc_Output k output_fun trans_fun [] c = []
[PROOF STEP]
by (simp add: f_Exec_Comp_Stream_Acc_Output_def) |
If $f$ is a positive additive function on a ring of sets, and $A$ is an increasing sequence of sets in the ring, then $\sum_{i \leq n} f(A_i \setminus A_{i-1}) = f(A_n)$. |
// matrix/kaldi-blas.h
// Copyright 2009-2011 Ondrej Glembek; Microsoft Corporation
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
// THIS CODE IS PROVIDED ON AN *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
// WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
// MERCHANTABILITY OR NON-INFRINGEMENT.
// See the Apache 2 License for the specific language governing permissions and
// limitations under the License.
#ifndef KALDI_MATRIX_KALDI_BLAS_H_
#define KALDI_MATRIX_KALDI_BLAS_H_
// This file handles the #includes for BLAS, LAPACK and so on.
// It manipulates the declarations into a common format that kaldi can handle.
// However, the kaldi code will check whether HAVE_ATLAS is defined as that
// code is called a bit differently from CLAPACK that comes from other sources.
// There are three alternatives:
// (i) you have ATLAS, which includes the ATLAS implementation of CBLAS
// plus a subset of CLAPACK (but with clapack_ in the function declarations).
// In this case, define HAVE_ATLAS and make sure the relevant directories are
// in the include path.
// (ii) you have CBLAS (some implementation thereof) plus CLAPACK.
// In this case, define HAVE_CLAPACK.
// [Since CLAPACK depends on BLAS, the presence of BLAS is implicit].
// (iii) you have the MKL library, which includes CLAPACK and CBLAS.
// Note that if we are using ATLAS, no Svd implementation is supplied,
// so we define HAVE_Svd to be zero and this directs our implementation to
// supply its own "by hand" implementation which is based on TNT code.
#if (defined(HAVE_CLAPACK) && (defined(HAVE_ATLAS) || defined(HAVE_MKL))) \
|| (defined(HAVE_ATLAS) && defined(HAVE_MKL))
#error "Do not define more than one of HAVE_CLAPACK, HAVE_ATLAS and HAVE_MKL"
#endif
#ifdef HAVE_ATLAS
extern "C" {
#include <cblas.h>
#include <clapack.h>
}
#elif defined(HAVE_CLAPACK)
#ifdef __APPLE__
#include <Accelerate/Accelerate.h>
typedef __CLPK_integer integer;
typedef __CLPK_logical logical;
typedef __CLPK_real real;
typedef __CLPK_doublereal doublereal;
typedef __CLPK_complex complex;
typedef __CLPK_doublecomplex doublecomplex;
typedef __CLPK_ftnlen ftnlen;
#else
extern "C" {
// May be in /usr/[local]/include if installed; else this uses the one
// from the external/CLAPACK-* directory.
#include <cblas.h>
#include <f2c.h>
#include <clapack.h>
// get rid of macros from f2c.h -- these are dangerous.
#undef abs
#undef dabs
#undef min
#undef max
#undef dmin
#undef dmax
#undef bit_test
#undef bit_clear
#undef bit_set
}
#endif
#elif defined(HAVE_MKL)
extern "C" {
#include <mkl.h>
}
#else
#error "You need to define (using the preprocessor) either HAVE_CLAPACK or HAVE_ATLAS or HAVE_MKL (but not more than one)"
#endif
#ifdef HAVE_CLAPACK
typedef integer KaldiBlasInt;
#endif
#ifdef HAVE_MKL
typedef MKL_INT KaldiBlasInt;
#endif
#ifdef HAVE_ATLAS
// in this case there is no need for KaldiBlasInt-- this typedef is only needed
// for Svd code which is not included in ATLAS (we re-implement it).
#endif
#endif // KALDI_MATRIX_KALDI_BLAS_H_
|
module Decidable.Equality.Core
%default total
--------------------------------------------------------------------------------
-- Decidable equality
--------------------------------------------------------------------------------
||| Decision procedures for propositional equality
public export
interface DecEq t where
||| Decide whether two elements of `t` are propositionally equal
decEq : (x1 : t) -> (x2 : t) -> Dec (x1 = x2)
--------------------------------------------------------------------------------
-- Utility lemmas
--------------------------------------------------------------------------------
||| The negation of equality is symmetric (follows from symmetry of equality)
export
negEqSym : Not (a = b) -> Not (b = a)
negEqSym p h = p (sym h)
||| Everything is decidably equal to itself
export
decEqSelfIsYes : DecEq a => {x : a} -> decEq x x = Yes Refl
decEqSelfIsYes with (decEq x x)
decEqSelfIsYes | Yes Refl = Refl
decEqSelfIsYes | No contra = absurd $ contra Refl
||| If you have a proof of inequality, you're sure that `decEq` would give a `No`.
export
decEqContraIsNo : DecEq a => {x, y : a} -> Not (x = y) -> (p ** decEq x y = No p)
decEqContraIsNo uxy with (decEq x y)
decEqContraIsNo uxy | Yes xy = absurd $ uxy xy
decEqContraIsNo _ | No uxy = (uxy ** Refl)
|
using Images, ImageFeatures, CoordinateTransformations, ImageDraw
img1 = Gray.(load("cat-3417184_640.jpg"))
img2 = Gray.(load("cat-3417184_640_watermarked.jpg"))
# rotate img2 around the center and resize it
rot = recenter(RotMatrix(5pi/6), [size(img2)...] .÷ 2) # a rotation around the center
tform = rot ∘ Translation(-50, -40)
img2 = warp(img2, tform, size(img2))
img2 = imresize(img2, Int.(trunc.(size(img2) .* 0.7)))
features_1 = Features(fastcorners(img1, 12, 0.35));
features_2 = Features(fastcorners(img2, 12, 0.35));
brisk_params = BRISK()
desc_1, ret_features_1 = create_descriptor(img1, features_1, brisk_params);
desc_2, ret_features_2 = create_descriptor(img2, features_2, brisk_params);
matches = match_keypoints(Keypoints(ret_features_1), Keypoints(ret_features_2), desc_1, desc_2, 0.2)
# place img2 on a canvas the size of img1 and draw the matches side by side
img3 = zeros(Gray, size(img1))
img3[1:size(img2, 1), 1:size(img2, 2)] = img2
grid = hcat(img1, img3)
offset = CartesianIndex(0, size(img1, 2))
map(m -> draw!(grid, LineSegment(m[1], m[2] + offset)), matches)
imshow(grid)
|
example : (0 : Nat) + 0 = 0 :=
show 0 + 0 = 0 from rfl
example : (0 : Int) + 0 = 0 :=
show 0 + 0 = 0 from rfl
example : Int :=
show Nat from 0
example (x : Nat) : (x + 0) + y = x + y := by
rw [show x + 0 = x from rfl]
|
/-
Copyright (c) 2020 Markus Himmel. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
! This file was ported from Lean 3 source module category_theory.limits.shapes.strong_epi
! leanprover-community/mathlib commit 32253a1a1071173b33dc7d6a218cf722c6feb514
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathlib.CategoryTheory.Balanced
import Mathlib.CategoryTheory.LiftingProperties.Basic
/-!
# Strong epimorphisms
In this file, we define strong epimorphisms. A strong epimorphism is an epimorphism `f`
which has the (unique) left lifting property with respect to monomorphisms. Similarly,
a strong monomorphism is a monomorphism which has the (unique) right lifting property
with respect to epimorphisms.
## Main results
Besides the definition, we show that
* the composition of two strong epimorphisms is a strong epimorphism,
* if `f ≫ g` is a strong epimorphism, then so is `g`,
* if `f` is both a strong epimorphism and a monomorphism, then it is an isomorphism
We also define classes `StrongMonoCategory` and `StrongEpiCategory` for categories in which
every monomorphism or epimorphism is strong, and deduce that these categories are balanced.
## TODO
Show that the dual of a strong epimorphism is a strong monomorphism, and vice versa.
## References
* [F. Borceux, *Handbook of Categorical Algebra 1*][borceux-vol1]
-/
universe v u
namespace CategoryTheory
variable {C : Type u} [Category.{v} C]
variable {P Q : C}
/-- A strong epimorphism `f` is an epimorphism which has the left lifting property
with respect to monomorphisms. -/
class StrongEpi (f : P ⟶ Q) : Prop where
/-- The epimorphism condition on `f` -/
epi : Epi f
/-- The left lifting property with respect to all monomorphisms -/
llp : ∀ ⦃X Y : C⦄ (z : X ⟶ Y) [Mono z], HasLiftingProperty f z
#align category_theory.strong_epi CategoryTheory.StrongEpi
#align category_theory.strong_epi.epi CategoryTheory.StrongEpi.epi
theorem StrongEpi.mk' {f : P ⟶ Q} [Epi f]
(hf : ∀ (X Y : C) (z : X ⟶ Y)
(_ : Mono z) (u : P ⟶ X) (v : Q ⟶ Y) (sq : CommSq u f z v), sq.HasLift) :
StrongEpi f :=
{ epi := inferInstance
llp := fun {X Y} z hz => ⟨fun {u v} sq => hf X Y z hz u v sq⟩ }
#align category_theory.strong_epi.mk' CategoryTheory.StrongEpi.mk'
/-- A strong monomorphism `f` is a monomorphism which has the right lifting property
with respect to epimorphisms. -/
class StrongMono (f : P ⟶ Q) : Prop where
/-- The monomorphism condition on `f` -/
mono : Mono f
/-- The right lifting property with respect to all epimorphisms -/
rlp : ∀ ⦃X Y : C⦄ (z : X ⟶ Y) [Epi z], HasLiftingProperty z f
#align category_theory.strong_mono CategoryTheory.StrongMono
theorem StrongMono.mk' {f : P ⟶ Q} [Mono f]
(hf : ∀ (X Y : C) (z : X ⟶ Y) (_ : Epi z) (u : X ⟶ P)
(v : Y ⟶ Q) (sq : CommSq u z f v), sq.HasLift) : StrongMono f where
mono := inferInstance
rlp := fun {X Y} z hz => ⟨fun {u v} sq => hf X Y z hz u v sq⟩
#align category_theory.strong_mono.mk' CategoryTheory.StrongMono.mk'
attribute [instance] StrongEpi.llp
attribute [instance] StrongMono.rlp
instance (priority := 100) epi_of_strongEpi (f : P ⟶ Q) [StrongEpi f] : Epi f :=
StrongEpi.epi
#align category_theory.epi_of_strong_epi CategoryTheory.epi_of_strongEpi
instance (priority := 100) mono_of_strongMono (f : P ⟶ Q) [StrongMono f] : Mono f :=
StrongMono.mono
#align category_theory.mono_of_strong_mono CategoryTheory.mono_of_strongMono
section
variable {R : C} (f : P ⟶ Q) (g : Q ⟶ R)
/-- The composition of two strong epimorphisms is a strong epimorphism. -/
theorem strongEpi_comp [StrongEpi f] [StrongEpi g] : StrongEpi (f ≫ g) :=
{ epi := epi_comp _ _
llp := by
intros
infer_instance }
#align category_theory.strong_epi_comp CategoryTheory.strongEpi_comp
/-- The composition of two strong monomorphisms is a strong monomorphism. -/
theorem strongMono_comp [StrongMono f] [StrongMono g] : StrongMono (f ≫ g) :=
{ mono := mono_comp _ _
rlp := by
intros
infer_instance }
#align category_theory.strong_mono_comp CategoryTheory.strongMono_comp
/-- If `f ≫ g` is a strong epimorphism, then so is `g`. -/
theorem strongEpi_of_strongEpi [StrongEpi (f ≫ g)] : StrongEpi g :=
{ epi := epi_of_epi f g
llp := fun {X Y} z _ => by
constructor
intro u v sq
have h₀ : (f ≫ u) ≫ z = (f ≫ g) ≫ v := by simp only [Category.assoc, sq.w]
exact
CommSq.HasLift.mk'
⟨(CommSq.mk h₀).lift, by
simp only [← cancel_mono z, Category.assoc, CommSq.fac_right, sq.w], by simp⟩ }
#align category_theory.strong_epi_of_strong_epi CategoryTheory.strongEpi_of_strongEpi
/-- If `f ≫ g` is a strong monomorphism, then so is `f`. -/
theorem strongMono_of_strongMono [StrongMono (f ≫ g)] : StrongMono f :=
{ mono := mono_of_mono f g
rlp := fun {X Y} z => by
intros
constructor
intro u v sq
have h₀ : u ≫ f ≫ g = z ≫ v ≫ g := by
rw [←Category.assoc, eq_whisker sq.w, Category.assoc]
exact CommSq.HasLift.mk' ⟨(CommSq.mk h₀).lift, by simp, by simp [← cancel_epi z, sq.w]⟩ }
#align category_theory.strong_mono_of_strong_mono CategoryTheory.strongMono_of_strongMono
/-- An isomorphism is in particular a strong epimorphism. -/
instance (priority := 100) strongEpi_of_isIso [IsIso f] : StrongEpi f where
epi := by infer_instance
llp {X Y} z := HasLiftingProperty.of_left_iso _ _
#align category_theory.strong_epi_of_is_iso CategoryTheory.strongEpi_of_isIso
/-- An isomorphism is in particular a strong monomorphism. -/
instance (priority := 100) strongMono_of_isIso [IsIso f] : StrongMono f where
mono := by infer_instance
rlp {X Y} z := HasLiftingProperty.of_right_iso _ _
#align category_theory.strong_mono_of_is_iso CategoryTheory.strongMono_of_isIso
theorem StrongEpi.of_arrow_iso {A B A' B' : C} {f : A ⟶ B} {g : A' ⟶ B'}
(e : Arrow.mk f ≅ Arrow.mk g) [h : StrongEpi f] : StrongEpi g :=
{ epi := by
rw [Arrow.iso_w' e]
haveI := epi_comp f e.hom.right
apply epi_comp
llp := fun {X Y} z => by
intro
apply HasLiftingProperty.of_arrow_iso_left e z }
#align category_theory.strong_epi.of_arrow_iso CategoryTheory.StrongEpi.of_arrow_iso
theorem StrongMono.of_arrow_iso {A B A' B' : C} {f : A ⟶ B} {g : A' ⟶ B'}
(e : Arrow.mk f ≅ Arrow.mk g) [h : StrongMono f] : StrongMono g :=
{ mono := by
rw [Arrow.iso_w' e]
haveI := mono_comp f e.hom.right
apply mono_comp
rlp := fun {X Y} z => by
intro
apply HasLiftingProperty.of_arrow_iso_right z e }
#align category_theory.strong_mono.of_arrow_iso CategoryTheory.StrongMono.of_arrow_iso
theorem StrongEpi.iff_of_arrow_iso {A B A' B' : C} {f : A ⟶ B} {g : A' ⟶ B'}
(e : Arrow.mk f ≅ Arrow.mk g) : StrongEpi f ↔ StrongEpi g := by
constructor <;> intro
exacts [StrongEpi.of_arrow_iso e, StrongEpi.of_arrow_iso e.symm]
#align category_theory.strong_epi.iff_of_arrow_iso CategoryTheory.StrongEpi.iff_of_arrow_iso
theorem StrongMono.iff_of_arrow_iso {A B A' B' : C} {f : A ⟶ B} {g : A' ⟶ B'}
(e : Arrow.mk f ≅ Arrow.mk g) : StrongMono f ↔ StrongMono g := by
constructor <;> intro
exacts [StrongMono.of_arrow_iso e, StrongMono.of_arrow_iso e.symm]
#align category_theory.strong_mono.iff_of_arrow_iso CategoryTheory.StrongMono.iff_of_arrow_iso
end
/-- A strong epimorphism that is a monomorphism is an isomorphism. -/
theorem isIso_of_mono_of_strongEpi (f : P ⟶ Q) [Mono f] [StrongEpi f] : IsIso f :=
⟨⟨(CommSq.mk (show 𝟙 P ≫ f = f ≫ 𝟙 Q by simp)).lift, by aesop_cat⟩⟩
#align category_theory.is_iso_of_mono_of_strong_epi CategoryTheory.isIso_of_mono_of_strongEpi
/-- A strong monomorphism that is an epimorphism is an isomorphism. -/
theorem isIso_of_epi_of_strongMono (f : P ⟶ Q) [Epi f] [StrongMono f] : IsIso f :=
⟨⟨(CommSq.mk (show 𝟙 P ≫ f = f ≫ 𝟙 Q by simp)).lift, by aesop_cat⟩⟩
#align category_theory.is_iso_of_epi_of_strong_mono CategoryTheory.isIso_of_epi_of_strongMono
section
variable (C)
/-- A strong epi category is a category in which every epimorphism is strong. -/
class StrongEpiCategory : Prop where
/-- A strong epi category is a category in which every epimorphism is strong. -/
strongEpi_of_epi : ∀ {X Y : C} (f : X ⟶ Y) [Epi f], StrongEpi f
#align category_theory.strong_epi_category CategoryTheory.StrongEpiCategory
#align category_theory.strong_epi_category.strong_epi_of_epi CategoryTheory.StrongEpiCategory.strongEpi_of_epi
/-- A strong mono category is a category in which every monomorphism is strong. -/
class StrongMonoCategory : Prop where
/-- A strong mono category is a category in which every monomorphism is strong. -/
strongMono_of_mono : ∀ {X Y : C} (f : X ⟶ Y) [Mono f], StrongMono f
#align category_theory.strong_mono_category CategoryTheory.StrongMonoCategory
#align category_theory.strong_mono_category.strong_mono_of_mono CategoryTheory.StrongMonoCategory.strongMono_of_mono
end
theorem strongEpi_of_epi [StrongEpiCategory C] (f : P ⟶ Q) [Epi f] : StrongEpi f :=
StrongEpiCategory.strongEpi_of_epi _
#align category_theory.strong_epi_of_epi CategoryTheory.strongEpi_of_epi
theorem strongMono_of_mono [StrongMonoCategory C] (f : P ⟶ Q) [Mono f] : StrongMono f :=
StrongMonoCategory.strongMono_of_mono _
#align category_theory.strong_mono_of_mono CategoryTheory.strongMono_of_mono
section
attribute [local instance] strongEpi_of_epi
instance (priority := 100) balanced_of_strongEpiCategory [StrongEpiCategory C] : Balanced C where
isIso_of_mono_of_epi _ _ _ := isIso_of_mono_of_strongEpi _
#align category_theory.balanced_of_strong_epi_category CategoryTheory.balanced_of_strongEpiCategory
end
section
attribute [local instance] strongMono_of_mono
instance (priority := 100) balanced_of_strongMonoCategory [StrongMonoCategory C] : Balanced C where
isIso_of_mono_of_epi _ _ _ := isIso_of_epi_of_strongMono _
#align category_theory.balanced_of_strong_mono_category CategoryTheory.balanced_of_strongMonoCategory
end
end CategoryTheory
|
lemmas Cauchy_Im = bounded_linear.Cauchy [OF bounded_linear_Im] |
\documentclass[a4paper,12pt]{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{bbold}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage[left=3cm,right=3cm,top=3.5cm,bottom=3.5cm]{geometry}
\usepackage{pgfplots,pgfplotstable}
\usepackage{tikz}
%\usetikzlibrary{positioning, fadings, through}
\usepackage{fancyhdr}
\usepackage[locale=DE,output-decimal-marker={.}]{siunitx}
\sisetup{separate-uncertainty, per-mode=fraction,}
\usepackage{here}
\usepackage{hyperref}
\usepackage{setspace}
\onehalfspacing
\usepackage{comment}
\usepackage{circledsteps}
% Default fixed font does not support bold face
\DeclareFixedFont{\ttb}{T1}{txtt}{bx}{n}{12} % for bold
\DeclareFixedFont{\ttm}{T1}{txtt}{m}{n}{12} % for normal
% Custom colors
\usepackage{color}
\definecolor{deepblue}{rgb}{0,0,0.5}
\definecolor{deepred}{rgb}{0.6,0,0}
\definecolor{deepgreen}{rgb}{0,0.5,0}
\usepackage{listings}
% Python style for highlighting
\newcommand\pythonstyle{\lstset{
numbers=left,
language=Python,
basicstyle=\ttm,
otherkeywords={self}, % Add keywords here
% keywordstyle=\ttb\color{deepblue},
emph={MyClass,__init__}, % Custom highlighting
emphstyle=\ttb\color{deepred}, % Custom highlighting style
stringstyle=\color{deepgreen},
frame=tb, % Any extra options here
showstringspaces=false %
}}
% Python environment
\lstnewenvironment{python}[1][]{
\pythonstyle
\lstset{#1}
}{}
% Python for external files
\newcommand\pythonexternal[2][]{{
\pythonstyle
\lstinputlisting[#1]{#2}}}
% Python for inline
\newcommand\pythoninline[1]{{\pythonstyle\lstinline!#1!}}
\usepackage{booktabs}
\usepackage{multirow}
\usetikzlibrary{external}
\tikzexternalize[prefix=tikz/]
\pgfplotsset{compat=newest,
tick label style={font=\small},
label style={font=\small},
legend style={font=\footnotesize}
}
\tikzset{every mark/.append style={scale=0.3}}
\tikzset{>=stealth}
\usepackage{acronym}
\newlength{\plotheight}
\newlength{\plotwidth}
\newlength{\imgheight}
\setlength{\plotwidth}{14cm}
\setlength{\plotheight}{8cm}
\newcommand{\titel}{Finite Difference Time Domain}
\usepackage{fancyhdr}
\fancyhf{}
\pagestyle{fancy}
\cfoot{\thepage}
\fancyhead[L]{\leftmark}
\fancyhead[R]{\thepage}
\subject{Report}
\title{\titel}
\author{\large{Martin \textsc{Aleksiev}, Felix \textsc{Wechsler}, Mei \textsc{Yunhao}, Mingxuan \textsc{Zhang}}\\ \large{Group 3}}
\date{\large{\today}}
\publishers{\vspace{5.5cm}Abbe School of Photonics\\
Friedrich-Schiller-Universität Jena}
\newcommand\todo[1]{\textcolor{red}{\textbf{TODO: #1}}}
%proper integral typesetting
\newcommand{\dt}{\, \mathrm{d}t}
\newcommand{\dd}{\mathrm{d}}
\newcommand\ff[1]{ \mathbf #1(\mathbf r, \omega)}
\newcommand\lap{\mathop{{}\bigtriangleup}\nolimits}
\usepackage[%
backend=biber,
url=false,
style=alphabetic,
% citestyle=authoryear,
maxnames=4,
minnames=3,
maxbibnames=99,
giveninits,
uniquename=init]{biblatex}
\addbibresource{references.bib}
\pgfkeys{/csteps/inner color=transparent , /csteps/outer color=black}
%indentation to 0
\setlength\parindent{0pt}
\begin{document}
\maketitle
\thispagestyle{empty}
\newpage
\setcounter{page}{1}
\tableofcontents
\newpage
\section{Introduction}
In this report we present the basics of the Finite Difference Time Domain numerical method.
This method allows one to solve Maxwell's equations in full, without any analytical approximations.
After providing the physical background, we show how to implement the solution on
a Yee grid and present results for the simulation of a pulsed beam in the 1D and 3D cases.
\section{Physical Background}
The physical basis is Maxwell's equations (MWEQ):
\begin{align}
\nabla \times \ff{E} &= - \frac{\partial \ff B} {\partial t}\\
\nabla \times \ff{H} &= \frac{\partial \ff D} {\partial t} + \ff j \\
\nabla \cdot \ff D &= \rho(\mathbf r , t)\\
\nabla \cdot \ff B &= 0
\end{align}
with $\ff E$ being the electric field, $\ff H$ the magnetic field, $\ff D $ the dielectric flux density,
$\ff B$ the magnetic flux density, $\ff P$ the dielectric polarization, $\rho (\mathbf{r}, t)$ the external charge density and
$\ff j$ the macroscopic current density.
In an isotropic, dispersionless and non-magnetic media we furthermore obtain the following material equations:
\begin{align}
\ff D &= \epsilon_0 \epsilon(\mathbf r) \ff E\\
\ff B &= \mu_0 \ff H
\end{align}
In this case, MWEQ can be expressed as:
\begin{align}
\nabla \times \ff{E} &= - \mu_0 \frac{\partial \ff H} {\partial t}
\label{eq:final_}\\
\nabla \times \ff{H} &= \epsilon_0 \epsilon(\mathbf r) \frac{\partial \ff E} {\partial t} + \ff j
\label{eq:final}
\end{align}
Equations \ref{eq:final_} and \ref{eq:final} are the final equations we are going to solve in the next sections.
\section{Numerical Implementation}
To solve the equations, we can explicitly express one of the cross products:
\begin{equation}
\frac{\partial E_x}{\partial t} = \frac1{\epsilon_0 \epsilon(\mathbf r)} \left( \frac{\partial H_z}{\partial y} - \frac{\partial H_y}{\partial z} - j_x \right)
\end{equation}
Using a second order central-difference scheme for both the time and spatial
derivatives we obtain the resulting equations for a 1D case:
\begin{align}
E_x^{n+1} &= E_x^{n-1} + \frac1{\epsilon_0 \epsilon} \frac{\Delta t}{\Delta x} (H_{x+\Delta x}^n - H_{x-\Delta x}^n - j_x)\\
H_y^{n+1} &= H_y^{n-1} + \frac1{\mu_0} \frac{\Delta t}{\Delta x} (E_{x+\Delta x}^n - E_{x-\Delta x}^n)
\end{align}
It can be observed that $E$ and $H$ are updated in an alternating way.
Therefore, one can introduce the Yee grid with a Leapfrog time stepping scheme.
A visual representation of the scheme can be seen in \autoref{fig:yee}.
\begin{figure}[H]
\centering
\includegraphics[width=.5\textwidth]{figures/yee.png}
\caption{Yee grid with a Leapfrog time stepping. Figure taken from \cite{lecture}.}
\label{fig:yee}
\end{figure}
One can finally express the equations in the Yee grid as:
\begin{align}
E_i^{n+1} &= E_i^{n} + \frac1{\epsilon_0 \epsilon} \frac{\Delta t}{\Delta x} (H_{i+0.5}^{n+0.5} - H_{i-0.5}^{n+0.5} - \Delta x \cdot j_{i+0.5}^{n+0.5})\\
H_{i+0.5}^{n+1.5} &= H_{i+0.5}^{n+0.5} + \frac1{\mu_0} \frac{\Delta t}{\Delta x} (E_{i+1}^{n+1} - E_{i}^{n+1})
\end{align}
The current source $j_z$ is defined using a delta distribution, meaning it is zero everywhere, except for one spatial position. It has a gaussian temporal envelope and is described by the following equation:
\begin{align}
j_z &= \exp\left(-2\pi ift\right) \cdot \exp\left(-\frac{(t-t_0)^2}{\tau ^2}\right)
\end{align}
The fields are calculated for all spatial positions at once using array indexing, and for successive times via a for loop. The boundaries are assumed to be perfect conductors, so the $E$-field values there are set to zero. \\
For the 3D case, a few additional considerations are needed. The tangential $E$-fields and normal $H$-fields are stored in arrays of size $N$, whereas the tangential $H$-fields and normal $E$-fields are stored in arrays of size $N-1$. This is due to the Yee grid extending in three dimensions and having both integer and fractional indices. Because of these indices, the permittivity must also be interpolated.
A for loop is used to calculate the fields for successive times, analogously to the 1D case, with the addition of saving the calculated fields at every output step we choose. These fields are interpolated so that all fields end up on a common grid in space and time.
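The core of the 1D scheme fits in a few lines of Python. The following sketch is a simplified stand-in for our actual script; the grid size, material profile and source parameters ($f_0$, $t_0$, $\tau$) are illustrative placeholders.
\begin{python}
import numpy as np

# illustrative parameters -- placeholders, not the values used in the report
N, steps = 400, 1000
dx = 50e-9                      # spatial step
c0 = 3e8                        # speed of light
dt = dx / (2 * c0)              # time step, as in our setup
eps0 = 8.854e-12
mu0 = 4e-7 * np.pi
eps = np.ones(N)                # relative permittivity profile
src = N // 2                    # position of the current source
f0 = c0 / 1.0e-6                # carrier frequency (placeholder)
t0, tau = 60 * dt, 20 * dt      # pulse delay and width (placeholders)

E = np.zeros(N)                 # E on integer grid points
H = np.zeros(N - 1)             # H on half-integer (staggered) grid points

for n in range(steps):
    t = n * dt
    jz = np.exp(-2j * np.pi * f0 * t) * np.exp(-((t - t0) / tau) ** 2)
    # E update; boundary values stay zero (perfect electric conductor)
    E[1:-1] += dt / (eps0 * eps[1:-1] * dx) * (H[1:] - H[:-1])
    E[src] -= dt / (eps0 * eps[src]) * jz.real
    # H update on the staggered grid
    H += dt / (mu0 * dx) * (E[1:] - E[:-1])
\end{python}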
% \newpage
\section{Results}
In this section we present some results of the simulations.
\autoref{fig:1Dres} shows the electric and magnetic field for
a current being injected into the space.
We observe that two pulses propagate in opposite directions.
\begin{figure}[H]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Figure_1D1.png}
\caption{$ct = \SI{4.57}{\micro\meter}$}
\label{fig:1Dl}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Figure_1D2.png}
\caption{$ct = \SI{8.66}{\micro\meter}$}
\label{fig:1Dr}
\end{subfigure}
\caption{Electric and magnetic field for the 1D problem.}
\label{fig:1Dres}
\end{figure}
In \autoref{fig:1Dr} we can observe that in the medium on the right hand side of the dashed line (higher $\epsilon$) the pulse is
compressed. This is due to the slower group velocity. Furthermore, the amplitude is reduced. We notice that a third pulse has appeared due to reflection, in accordance with the Fresnel reflection laws.
In \autoref{fig:3Dres} we can see the electric field propagating
radially outward from the center. At the center we injected a current oriented in the $z$ direction.
\begin{figure}[H]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-Ez.png}
\caption{Electric field of the z-component}
\label{fig:3Dl}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-Hx.png}
\caption{Magnetic field of the x-component}
\label{fig:3Dr}
\end{subfigure}
\caption{Cross section of the 3D field with a current pulse originating from the center.}
\label{fig:3Dres}
\end{figure}
The electric field is radially symmetric, but the magnetic field is not.
This is because we display the $x$ component of the $H$ field: due to the divergence condition, this component
must vanish along the $x$ direction.
\subsection{Computational Performance}
In this section we provide some data for the computational performance.
Our setup for all measurements was Python 3.8.2, SciPy 1.3.2, NumPy 1.17.3 and a Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz.
The example scripts are attached to this report.
In \autoref{tab:res} we can see the computational time
for different datatypes and for 1D and 3D. Obviously 1D is much faster than the full 3D approach.
\begin{table}[h]
\centering
\setstretch{1.5}
\begin{tabular}{l r}
& Total time needed in \si{\second} \\
\hline
1D, single precision & $0.136\pm 0.001$ \\
1D, double precision & $0.187\pm 0.001$ \\
\hline
3D, single precision & $5.11\pm 0.05$ \\
3D, double precision & $6.413\pm 0.02$ \\
\end{tabular}
\caption{Results for the computing time}
\label{tab:res}
\end{table}
However, we can see that the datatype does not have a significant
impact on the time needed. This is because Python is an interpreted language, so source code relying on for loops is usually slow. Here this cannot be circumvented, since the core operations inside the for loop are rather trivial and do not involve external function calls that would dominate the runtime.
For better performance one should use a compiled language or a modern approach such as Julia.
\subsection{Convergence Rate}
In this part we show results of the convergence rate for different spatial and time resolutions.
For a quick estimate we compare the fields to more highly sampled numerical versions instead of an analytical one.
In our source code the spatial and temporal sampling are connected:
\begin{equation}
\Delta t = \frac{\Delta r}{2 c}
\end{equation}
Therefore by varying the spatial sampling we also vary the temporal one.
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{figures/index.png}
\caption{Convergence behaviour with different $\Delta r$ sampling.}
\label{fig:conv}
\end{figure}
In \autoref{fig:conv} we can see the convergence behaviour of the FDTD. Note that this plot is double logarithmic.
The slope of the blue curve is $\SI{1.79\pm0.16}{\per\micro\meter}$. The theoretical convergence order of the algorithm is 2, so our measured value comes close to it.
The deviation could be caused by the boundary handling and by artifacts due to the fact that we compare against a numerical solution rather than an analytical one.
\section{Conclusion}
In this report we explained the physical and numerical basics
behind the finite difference time domain method to solve Maxwell's equations. We observed several physical effects, such as different group velocities, and validated the divergence condition.
Finally, we presented several computational results, such as runtime and convergence.
\newpage
%to print all entries
\nocite{*}
\printbibliography
\end{document} |
module Main where
import Criterion
import Criterion.Main.Options
import Criterion.Types
import Data.Complex
import Data.Vector
import qualified Numeric.FFT as FFT
tstvec :: Int -> Vector (Complex Double)
tstvec sz = generate sz (\i -> let ii = fromIntegral i
in sin (2*pi*ii/1024) + sin (2*pi*ii/511))
main :: IO ()
main = do
p <- FFT.plan 256
benchmarkWith (defaultConfig { resamples = 1000 })
(nf (FFT.fftWith p) (tstvec 256))
|
theorem bad : ∀ (m n : Nat), (if m = n then Ordering.eq else Ordering.gt) = Ordering.lt → False := by
intros m n
cases (Nat.decEq m n) with -- an error as expected: "alternative `isFalse` has not been provided"
| isTrue h =>
set_option trace.Meta.Tactic.simp true in
simp [h]
theorem bad' : ∀ (m n : Nat), (if m = n then Ordering.eq else Ordering.gt) = Ordering.lt → False := by
intros m n
cases (Nat.decEq m n) with
| isTrue h =>
simp [h]
| isFalse h =>
simp [h]
|
lemmas landau_trans_lift [trans] = landau_symbols[THEN landau_symbol.lift_trans] landau_symbols[THEN landau_symbol.lift_trans'] landau_symbols[THEN landau_symbol.lift_trans_bigtheta] landau_symbols[THEN landau_symbol.lift_trans_bigtheta'] |
On campus, the Mann Laboratory, named for Louis Kimball Mann, completes construction and the Rec Pool is constructed using student funds.
UC Davis School of Medicine is established.
Crocker Nuclear Laboratory is established.
Tandem Properties is established.
Hillel House opens its doors to support Judaism in Davis.
|
Require Import GHC.Num.
(* Characters *)
Require Import NArith.
Definition Char := N.
Bind Scope char_scope with N.
(* Notation for literal characters in Coq source. *)
Require Import Coq.Strings.Ascii.
Definition hs_char__ : Ascii.ascii -> Char := N_of_ascii.
Notation "'&#' c" := (hs_char__ c) (at level 1, format "'&#' c").
Definition chr : Int -> Char := Z.to_N. |
(* Title: ZF/Induct/FoldSet.thy
Author: Sidi O Ehmety, Cambridge University Computer Laboratory
A "fold" functional for finite sets. For n non-negative we have
fold f e {x1,...,xn} = f x1 (... (f xn e)) where f is at
least left-commutative.
*)
theory FoldSet imports Main begin
consts fold_set :: "[i, i, [i,i]=>i, i] => i"
inductive
domains "fold_set(A, B, f,e)" \<subseteq> "Fin(A)*B"
intros
emptyI: "e\<in>B ==> <0, e>\<in>fold_set(A, B, f,e)"
consI: "[| x\<in>A; x \<notin>C; <C,y> \<in> fold_set(A, B,f,e); f(x,y):B |]
==> <cons(x,C), f(x,y)>\<in>fold_set(A, B, f, e)"
type_intros Fin.intros
definition
fold :: "[i, [i,i]=>i, i, i] => i" ("fold[_]'(_,_,_')") where
"fold[B](f,e, A) == THE x. <A, x>\<in>fold_set(A, B, f,e)"
definition
setsum :: "[i=>i, i] => i" where
"setsum(g, C) == if Finite(C) then
fold[int](%x y. g(x) $+ y, #0, C) else #0"
(** foldSet **)
inductive_cases empty_fold_setE: "<0, x> \<in> fold_set(A, B, f,e)"
inductive_cases cons_fold_setE: "<cons(x,C), y> \<in> fold_set(A, B, f,e)"
(* ad-hoc lemmas *)
lemma cons_lemma1: "[| x\<notin>C; x\<notin>B |] ==> cons(x,B)=cons(x,C) \<longleftrightarrow> B = C"
by (auto elim: equalityE)
lemma cons_lemma2: "[| cons(x, B)=cons(y, C); x\<noteq>y; x\<notin>B; y\<notin>C |]
==> B - {y} = C-{x} & x\<in>C & y\<in>B"
apply (auto elim: equalityE)
done
(* fold_set monotonicity *)
lemma fold_set_mono_lemma:
"<C, x> \<in> fold_set(A, B, f, e)
==> \<forall>D. A<=D \<longrightarrow> <C, x> \<in> fold_set(D, B, f, e)"
apply (erule fold_set.induct)
apply (auto intro: fold_set.intros)
done
lemma fold_set_mono: " C<=A ==> fold_set(C, B, f, e) \<subseteq> fold_set(A, B, f, e)"
apply clarify
apply (frule fold_set.dom_subset [THEN subsetD], clarify)
apply (auto dest: fold_set_mono_lemma)
done
lemma fold_set_lemma:
"<C, x>\<in>fold_set(A, B, f, e) ==> <C, x>\<in>fold_set(C, B, f, e) & C<=A"
apply (erule fold_set.induct)
apply (auto intro!: fold_set.intros intro: fold_set_mono [THEN subsetD])
done
(* Proving that fold_set is deterministic *)
lemma Diff1_fold_set:
"[| <C-{x},y> \<in> fold_set(A, B, f,e); x\<in>C; x\<in>A; f(x, y):B |]
==> <C, f(x, y)> \<in> fold_set(A, B, f, e)"
apply (frule fold_set.dom_subset [THEN subsetD])
apply (erule cons_Diff [THEN subst], rule fold_set.intros, auto)
done
locale fold_typing =
fixes A and B and e and f
assumes ftype [intro,simp]: "[|x \<in> A; y \<in> B|] ==> f(x,y) \<in> B"
and etype [intro,simp]: "e \<in> B"
and fcomm: "[|x \<in> A; y \<in> A; z \<in> B|] ==> f(x, f(y, z))=f(y, f(x, z))"
lemma (in fold_typing) Fin_imp_fold_set:
"C\<in>Fin(A) ==> (\<exists>x. <C, x> \<in> fold_set(A, B, f,e))"
apply (erule Fin_induct)
apply (auto dest: fold_set.dom_subset [THEN subsetD]
intro: fold_set.intros etype ftype)
done
lemma Diff_sing_imp:
"[|C - {b} = D - {a}; a \<noteq> b; b \<in> C|] ==> C = cons(b,D) - {a}"
by (blast elim: equalityE)
lemma (in fold_typing) fold_set_determ_lemma [rule_format]:
"n\<in>nat
==> \<forall>C. |C|<n \<longrightarrow>
(\<forall>x. <C, x> \<in> fold_set(A, B, f,e)\<longrightarrow>
(\<forall>y. <C, y> \<in> fold_set(A, B, f,e) \<longrightarrow> y=x))"
apply (erule nat_induct)
apply (auto simp add: le_iff)
apply (erule fold_set.cases)
apply (force elim!: empty_fold_setE)
apply (erule fold_set.cases)
apply (force elim!: empty_fold_setE, clarify)
(*force simplification of "|C| < |cons(...)|"*)
apply (frule_tac a = Ca in fold_set.dom_subset [THEN subsetD, THEN SigmaD1])
apply (frule_tac a = Cb in fold_set.dom_subset [THEN subsetD, THEN SigmaD1])
apply (simp add: Fin_into_Finite [THEN Finite_imp_cardinal_cons])
apply (case_tac "x=xb", auto)
apply (simp add: cons_lemma1, blast)
txt\<open>case @{term "x\<noteq>xb"}\<close>
apply (drule cons_lemma2, safe)
apply (frule Diff_sing_imp, assumption+)
txt\<open>* LEVEL 17\<close>
apply (subgoal_tac "|Ca| \<le> |Cb|")
prefer 2
apply (rule succ_le_imp_le)
apply (simp add: Fin_into_Finite Finite_imp_succ_cardinal_Diff
Fin_into_Finite [THEN Finite_imp_cardinal_cons])
apply (rule_tac C1 = "Ca-{xb}" in Fin_imp_fold_set [THEN exE])
apply (blast intro: Diff_subset [THEN Fin_subset])
txt\<open>* LEVEL 24 *\<close>
apply (frule Diff1_fold_set, blast, blast)
apply (blast dest!: ftype fold_set.dom_subset [THEN subsetD])
apply (subgoal_tac "ya = f(xb,xa) ")
prefer 2 apply (blast del: equalityCE)
apply (subgoal_tac "<Cb-{x}, xa> \<in> fold_set(A,B,f,e)")
prefer 2 apply simp
apply (subgoal_tac "yb = f (x, xa) ")
apply (drule_tac [2] C = Cb in Diff1_fold_set, simp_all)
apply (blast intro: fcomm dest!: fold_set.dom_subset [THEN subsetD])
apply (blast intro: ftype dest!: fold_set.dom_subset [THEN subsetD], blast)
done
lemma (in fold_typing) fold_set_determ:
"[| <C, x>\<in>fold_set(A, B, f, e);
<C, y>\<in>fold_set(A, B, f, e)|] ==> y=x"
apply (frule fold_set.dom_subset [THEN subsetD], clarify)
apply (drule Fin_into_Finite)
apply (unfold Finite_def, clarify)
apply (rule_tac n = "succ (n)" in fold_set_determ_lemma)
apply (auto intro: eqpoll_imp_lepoll [THEN lepoll_cardinal_le])
done
(** The fold function **)
lemma (in fold_typing) fold_equality:
"<C,y> \<in> fold_set(A,B,f,e) ==> fold[B](f,e,C) = y"
apply (unfold fold_def)
apply (frule fold_set.dom_subset [THEN subsetD], clarify)
apply (rule the_equality)
apply (rule_tac [2] A=C in fold_typing.fold_set_determ)
apply (force dest: fold_set_lemma)
apply (auto dest: fold_set_lemma)
apply (simp add: fold_typing_def, auto)
apply (auto dest: fold_set_lemma intro: ftype etype fcomm)
done
lemma fold_0 [simp]: "e \<in> B ==> fold[B](f,e,0) = e"
apply (unfold fold_def)
apply (blast elim!: empty_fold_setE intro: fold_set.intros)
done
text\<open>This result is the right-to-left direction of the subsequent result\<close>
lemma (in fold_typing) fold_set_imp_cons:
"[| <C, y> \<in> fold_set(C, B, f, e); C \<in> Fin(A); c \<in> A; c\<notin>C |]
==> <cons(c, C), f(c,y)> \<in> fold_set(cons(c, C), B, f, e)"
apply (frule FinD [THEN fold_set_mono, THEN subsetD])
apply assumption
apply (frule fold_set.dom_subset [of A, THEN subsetD])
apply (blast intro!: fold_set.consI intro: fold_set_mono [THEN subsetD])
done
lemma (in fold_typing) fold_cons_lemma [rule_format]:
"[| C \<in> Fin(A); c \<in> A; c\<notin>C |]
==> <cons(c, C), v> \<in> fold_set(cons(c, C), B, f, e) \<longleftrightarrow>
(\<exists>y. <C, y> \<in> fold_set(C, B, f, e) & v = f(c, y))"
apply auto
prefer 2 apply (blast intro: fold_set_imp_cons)
apply (frule_tac Fin.consI [of c, THEN FinD, THEN fold_set_mono, THEN subsetD], assumption+)
apply (frule_tac fold_set.dom_subset [of A, THEN subsetD])
apply (drule FinD)
apply (rule_tac A1 = "cons(c,C)" and f1=f and B1=B and C1=C and e1=e in fold_typing.Fin_imp_fold_set [THEN exE])
apply (blast intro: fold_typing.intro ftype etype fcomm)
apply (blast intro: Fin_subset [of _ "cons(c,C)"] Finite_into_Fin
dest: Fin_into_Finite)
apply (rule_tac x = x in exI)
apply (auto intro: fold_set.intros)
apply (drule_tac fold_set_lemma [of C], blast)
apply (blast intro!: fold_set.consI
intro: fold_set_determ fold_set_mono [THEN subsetD]
dest: fold_set.dom_subset [THEN subsetD])
done
lemma (in fold_typing) fold_cons:
"[| C\<in>Fin(A); c\<in>A; c\<notin>C|]
==> fold[B](f, e, cons(c, C)) = f(c, fold[B](f, e, C))"
apply (unfold fold_def)
apply (simp add: fold_cons_lemma)
apply (rule the_equality, auto)
apply (subgoal_tac [2] "\<langle>C, y\<rangle> \<in> fold_set(A, B, f, e)")
apply (drule Fin_imp_fold_set)
apply (auto dest: fold_set_lemma simp add: fold_def [symmetric] fold_equality)
apply (blast intro: fold_set_mono [THEN subsetD] dest!: FinD)
done
lemma (in fold_typing) fold_type [simp,TC]:
"C\<in>Fin(A) ==> fold[B](f,e,C):B"
apply (erule Fin_induct)
apply (simp_all add: fold_cons ftype etype)
done
lemma (in fold_typing) fold_commute [rule_format]:
"[| C\<in>Fin(A); c\<in>A |]
==> (\<forall>y\<in>B. f(c, fold[B](f, y, C)) = fold[B](f, f(c, y), C))"
apply (erule Fin_induct)
apply (simp_all add: fold_typing.fold_cons [of A B _ f]
fold_typing.fold_type [of A B _ f]
fold_typing_def fcomm)
done
lemma (in fold_typing) fold_nest_Un_Int:
"[| C\<in>Fin(A); D\<in>Fin(A) |]
==> fold[B](f, fold[B](f, e, D), C) =
fold[B](f, fold[B](f, e, (C \<inter> D)), C \<union> D)"
apply (erule Fin_induct, auto)
apply (simp add: Un_cons Int_cons_left fold_type fold_commute
fold_typing.fold_cons [of A _ _ f]
fold_typing_def fcomm cons_absorb)
done
lemma (in fold_typing) fold_nest_Un_disjoint:
"[| C\<in>Fin(A); D\<in>Fin(A); C \<inter> D = 0 |]
==> fold[B](f,e,C \<union> D) = fold[B](f, fold[B](f,e,D), C)"
by (simp add: fold_nest_Un_Int)
lemma Finite_cons_lemma: "Finite(C) ==> C\<in>Fin(cons(c, C))"
apply (drule Finite_into_Fin)
apply (blast intro: Fin_mono [THEN subsetD])
done
subsection\<open>The Operator @{term setsum}\<close>
lemma setsum_0 [simp]: "setsum(g, 0) = #0"
by (simp add: setsum_def)
lemma setsum_cons [simp]:
"Finite(C) ==>
setsum(g, cons(c,C)) =
(if c \<in> C then setsum(g,C) else g(c) $+ setsum(g,C))"
apply (auto simp add: setsum_def Finite_cons cons_absorb)
apply (rule_tac A = "cons (c, C)" in fold_typing.fold_cons)
apply (auto intro: fold_typing.intro Finite_cons_lemma)
done
lemma setsum_K0: "setsum((%i. #0), C) = #0"
apply (case_tac "Finite (C) ")
prefer 2 apply (simp add: setsum_def)
apply (erule Finite_induct, auto)
done
(*The reversed orientation looks more natural, but LOOPS as a simprule!*)
lemma setsum_Un_Int:
"[| Finite(C); Finite(D) |]
==> setsum(g, C \<union> D) $+ setsum(g, C \<inter> D)
= setsum(g, C) $+ setsum(g, D)"
apply (erule Finite_induct)
apply (simp_all add: Int_cons_right cons_absorb Un_cons Int_commute Finite_Un
Int_lower1 [THEN subset_Finite])
done
lemma setsum_type [simp,TC]: "setsum(g, C):int"
apply (case_tac "Finite (C) ")
prefer 2 apply (simp add: setsum_def)
apply (erule Finite_induct, auto)
done
lemma setsum_Un_disjoint:
"[| Finite(C); Finite(D); C \<inter> D = 0 |]
==> setsum(g, C \<union> D) = setsum(g, C) $+ setsum(g,D)"
apply (subst setsum_Un_Int [symmetric])
apply (subgoal_tac [3] "Finite (C \<union> D) ")
apply (auto intro: Finite_Un)
done
lemma Finite_RepFun [rule_format (no_asm)]:
"Finite(I) ==> (\<forall>i\<in>I. Finite(C(i))) \<longrightarrow> Finite(RepFun(I, C))"
apply (erule Finite_induct, auto)
done
lemma setsum_UN_disjoint [rule_format (no_asm)]:
"Finite(I)
==> (\<forall>i\<in>I. Finite(C(i))) \<longrightarrow>
(\<forall>i\<in>I. \<forall>j\<in>I. i\<noteq>j \<longrightarrow> C(i) \<inter> C(j) = 0) \<longrightarrow>
setsum(f, \<Union>i\<in>I. C(i)) = setsum (%i. setsum(f, C(i)), I)"
apply (erule Finite_induct, auto)
apply (subgoal_tac "\<forall>i\<in>B. x \<noteq> i")
prefer 2 apply blast
apply (subgoal_tac "C (x) \<inter> (\<Union>i\<in>B. C (i)) = 0")
prefer 2 apply blast
apply (subgoal_tac "Finite (\<Union>i\<in>B. C (i)) & Finite (C (x)) & Finite (B) ")
apply (simp (no_asm_simp) add: setsum_Un_disjoint)
apply (auto intro: Finite_Union Finite_RepFun)
done
lemma setsum_addf: "setsum(%x. f(x) $+ g(x),C) = setsum(f, C) $+ setsum(g, C)"
apply (case_tac "Finite (C) ")
prefer 2 apply (simp add: setsum_def)
apply (erule Finite_induct, auto)
done
lemma fold_set_cong:
"[| A=A'; B=B'; e=e'; (\<forall>x\<in>A'. \<forall>y\<in>B'. f(x,y) = f'(x,y)) |]
==> fold_set(A,B,f,e) = fold_set(A',B',f',e')"
apply (simp add: fold_set_def)
apply (intro refl iff_refl lfp_cong Collect_cong disj_cong ex_cong, auto)
done
lemma fold_cong:
"[| B=B'; A=A'; e=e';
!!x y. [|x\<in>A'; y\<in>B'|] ==> f(x,y) = f'(x,y) |] ==>
fold[B](f,e,A) = fold[B'](f', e', A')"
apply (simp add: fold_def)
apply (subst fold_set_cong)
apply (rule_tac [5] refl, simp_all)
done
lemma setsum_cong:
"[| A=B; !!x. x\<in>B ==> f(x) = g(x) |] ==>
setsum(f, A) = setsum(g, B)"
by (simp add: setsum_def cong add: fold_cong)
lemma setsum_Un:
"[| Finite(A); Finite(B) |]
==> setsum(f, A \<union> B) =
setsum(f, A) $+ setsum(f, B) $- setsum(f, A \<inter> B)"
apply (subst setsum_Un_Int [symmetric], auto)
done
lemma setsum_zneg_or_0 [rule_format (no_asm)]:
"Finite(A) ==> (\<forall>x\<in>A. g(x) $\<le> #0) \<longrightarrow> setsum(g, A) $\<le> #0"
apply (erule Finite_induct)
apply (auto intro: zneg_or_0_add_zneg_or_0_imp_zneg_or_0)
done
lemma setsum_succD_lemma [rule_format]:
"Finite(A)
==> \<forall>n\<in>nat. setsum(f,A) = $# succ(n) \<longrightarrow> (\<exists>a\<in>A. #0 $< f(a))"
apply (erule Finite_induct)
apply (auto simp del: int_of_0 int_of_succ simp add: not_zless_iff_zle int_of_0 [symmetric])
apply (subgoal_tac "setsum (f, B) $\<le> #0")
apply simp_all
prefer 2 apply (blast intro: setsum_zneg_or_0)
apply (subgoal_tac "$# 1 $\<le> f (x) $+ setsum (f, B) ")
apply (drule zdiff_zle_iff [THEN iffD2])
apply (subgoal_tac "$# 1 $\<le> $# 1 $- setsum (f,B) ")
apply (drule_tac x = "$# 1" in zle_trans)
apply (rule_tac [2] j = "#1" in zless_zle_trans, auto)
done
lemma setsum_succD:
"[| setsum(f, A) = $# succ(n); n\<in>nat |]==> \<exists>a\<in>A. #0 $< f(a)"
apply (case_tac "Finite (A) ")
apply (blast intro: setsum_succD_lemma)
apply (unfold setsum_def)
apply (auto simp del: int_of_0 int_of_succ simp add: int_succ_int_1 [symmetric] int_of_0 [symmetric])
done
lemma g_zpos_imp_setsum_zpos [rule_format]:
"Finite(A) ==> (\<forall>x\<in>A. #0 $\<le> g(x)) \<longrightarrow> #0 $\<le> setsum(g, A)"
apply (erule Finite_induct)
apply (simp (no_asm))
apply (auto intro: zpos_add_zpos_imp_zpos)
done
lemma g_zpos_imp_setsum_zpos2 [rule_format]:
"[| Finite(A); \<forall>x. #0 $\<le> g(x) |] ==> #0 $\<le> setsum(g, A)"
apply (erule Finite_induct)
apply (auto intro: zpos_add_zpos_imp_zpos)
done
lemma g_zspos_imp_setsum_zspos [rule_format]:
"Finite(A)
==> (\<forall>x\<in>A. #0 $< g(x)) \<longrightarrow> A \<noteq> 0 \<longrightarrow> (#0 $< setsum(g, A))"
apply (erule Finite_induct)
apply (auto intro: zspos_add_zspos_imp_zspos)
done
lemma setsum_Diff [rule_format]:
"Finite(A) ==> \<forall>a. M(a) = #0 \<longrightarrow> setsum(M, A) = setsum(M, A-{a})"
apply (erule Finite_induct)
apply (simp_all add: Diff_cons_eq Finite_Diff)
done
end |
import ..BifurcationsBase
BifurcationsBase.aliasof(::Codim1Sweep) = "Codim1Sweep"
BifurcationsBase.aliasof(::Codim1Solution) = "Codim1Solution"
BifurcationsBase.aliasof(::Codim1Solver) = "Codim1Solver"
|
{-# OPTIONS --with-K --safe --no-sized-types --no-guardedness
--no-subtyping #-}
module Agda.Builtin.Equality.Erase where
open import Agda.Builtin.Equality
primitive primEraseEquality : ∀ {a} {A : Set a} {x y : A} → x ≡ y → x ≡ y
|
DSGESV Example Program Results
Solution
1.0000 -1.0000 3.0000 -5.0000
Pivot indices
2 2 3 4
|
[STATEMENT]
lemma ordered_keys_Mapping [code]:
"Mapping.ordered_keys (Mapping xs) = sort (remdups (map fst xs))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Mapping.ordered_keys (AList_Mapping.Mapping xs) = sort (remdups (map fst xs))
[PROOF STEP]
by (simp only: ordered_keys_def keys_Mapping sorted_list_of_set_sort_remdups) simp |
import data.finset.nat_antidiagonal
import analysis.normed_space.basic
import analysis.specific_limits.basic
import laurent_measures.aux_lemmas
/- These lemmas seem to no longer be needed for Theorem 6.9 or anywhere else in LTE. I ([FAE])
wonder if they might be useful somewhere. -/
open aux_thm69
open metric finset normed_field
open_locale nnreal classical big_operators topological_space
def equiv_Ico_nat_neg {d : ℤ} (hd : d < 0) : {y : {x : ℤ // d ≤ x } // y ∉ T hd} ≃ ℕ :=
begin
fconstructor,
{ rintro ⟨⟨a, ha⟩, hx⟩,
exact int.to_nat a },
{ intro n,
refine ⟨⟨n, hd.le.trans (int.coe_zero_le n)⟩, _⟩,
apply (not_iff_not_of_iff mem_Ico).mpr,
simp only [subtype.mk_lt_mk, not_and, not_lt, int.coe_nat_nonneg, implies_true_iff] },
{ rintros ⟨⟨x, dx⟩, hx⟩,
simp [int.to_nat_of_nonneg (T.zero_le hd hx)] },
{ exact λ n, by simp only [int.to_nat_coe_nat] }
end
lemma equiv_Ico_nat_neg_apply {d : ℤ} (hd : d < 0) {y : {x : ℤ // d ≤ x}} (h : y ∉ T hd) : y.1 = (equiv_Ico_nat_neg hd) ⟨y, h⟩ :=
by { cases y, simp [equiv_Ico_nat_neg, T.zero_le hd h] }
/- This lemma seems to not be used anywhere. -/
lemma summable_iff_on_nat {f : ℤ → ℝ} {ρ : ℝ≥0} (d : ℤ) (h : ∀ n : ℤ, n < d → f n = 0) :
summable (λ n, ∥ f n ∥ * ρ ^ n) ↔ summable (λ n : ℕ, ∥ f n ∥ * ρ ^ (n : ℤ)) :=
iff.trans (summable_iff_on_nat_less d (λ n nd, by simp [h _ nd])) iff.rfl
/- This lemma seems to not be used anywhere. -/
lemma aux_summable_iff_on_nat {f : ℤ → ℝ} {ρ : ℝ≥0} (d : ℤ) (h : ∀ n : ℤ, n < d → f n = 0) :
summable (λ n, ∥ f n ∥ * ρ ^ n) ↔ summable (λ n : ℕ, ∥ f (n + d) ∥ * ρ ^ (n + d : ℤ)) :=
begin
have hf : function.support (λ n : ℤ, ∥ f n ∥ * ρ ^ n) ⊆ { a : ℤ | d ≤ a},
{ rw function.support_subset_iff,
intro x,
rw [← not_imp_not, not_not, mul_eq_zero],
intro hx,
simp only [not_le, set.mem_set_of_eq] at hx,
apply or.intro_left,
rw norm_eq_zero,
exact h x hx },
have h1 := λ a : ℝ,
@has_sum_subtype_iff_of_support_subset ℝ ℤ _ _ (λ n : ℤ, ∥ f n ∥ * ρ ^ n) _ _ hf,
have h2 := λ a : ℝ,
@equiv.has_sum_iff ℝ {b : ℤ // d ≤ b} ℕ _ _ ((λ n, ∥ f n ∥ * ρ ^ n) ∘ coe) _
(equiv_bdd_integer_nat d),
exact exists_congr (λ a, ((h2 a).trans (h1 a)).symm),
end
|
[GOAL]
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
⊢ Continuous fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd))
[PROOFSTEP]
exact
((map_continuous HSpace.hmul).comp
((continuous_fst.comp continuous_fst).prod_mk (continuous_fst.comp continuous_snd))).prod_mk
((map_continuous HSpace.hmul).comp
((continuous_snd.comp continuous_fst).prod_mk (continuous_snd.comp continuous_snd)))
[GOAL]
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
⊢ ↑(ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd))) ((e, e), e, e) = (e, e)
[PROOFSTEP]
simp only [ContinuousMap.coe_mk, Prod.mk.inj_iff]
[GOAL]
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
⊢ ↑hmul (e, e) = e ∧ ↑hmul (e, e) = e
[PROOFSTEP]
exact ⟨HSpace.hmul_e_e, HSpace.hmul_e_e⟩
[GOAL]
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
⊢ HomotopyRel
(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
(ContinuousMap.id (X × Y)) {(e, e)}
[PROOFSTEP]
let G : I × X × Y → X × Y := fun p => (HSpace.eHmul (p.1, p.2.1), HSpace.eHmul (p.1, p.2.2))
[GOAL]
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑eHmul (p.fst, p.snd.fst), ↑eHmul (p.fst, p.snd.snd))
⊢ HomotopyRel
(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
(ContinuousMap.id (X × Y)) {(e, e)}
[PROOFSTEP]
have hG : Continuous G :=
(Continuous.comp HSpace.eHmul.1.1.2 (continuous_fst.prod_mk (continuous_fst.comp continuous_snd))).prod_mk
(Continuous.comp HSpace.eHmul.1.1.2 (continuous_fst.prod_mk (continuous_snd.comp continuous_snd)))
[GOAL]
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑eHmul (p.fst, p.snd.fst), ↑eHmul (p.fst, p.snd.snd))
hG : Continuous G
⊢ HomotopyRel
(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
(ContinuousMap.id (X × Y)) {(e, e)}
[PROOFSTEP]
use!⟨G, hG⟩
[GOAL]
case map_zero_left
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑eHmul (p.fst, p.snd.fst), ↑eHmul (p.fst, p.snd.snd))
hG : Continuous G
⊢ ∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
x
[PROOFSTEP]
rintro ⟨x, y⟩
[GOAL]
case map_zero_left.mk
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑eHmul (p.fst, p.snd.fst), ↑eHmul (p.fst, p.snd.snd))
hG : Continuous G
x : X
y : Y
⊢ ContinuousMap.toFun (ContinuousMap.mk G) (0, x, y) =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
(x, y)
[PROOFSTEP]
exacts [Prod.mk.inj_iff.mpr ⟨HSpace.eHmul.1.2 x, HSpace.eHmul.1.2 y⟩]
[GOAL]
case map_one_left
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑eHmul (p.fst, p.snd.fst), ↑eHmul (p.fst, p.snd.snd))
hG : Continuous G
⊢ ∀ (x : X × Y), ContinuousMap.toFun (ContinuousMap.mk G) (1, x) = ↑(ContinuousMap.id (X × Y)) x
[PROOFSTEP]
rintro ⟨x, y⟩
[GOAL]
case map_one_left.mk
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑eHmul (p.fst, p.snd.fst), ↑eHmul (p.fst, p.snd.snd))
hG : Continuous G
x : X
y : Y
⊢ ContinuousMap.toFun (ContinuousMap.mk G) (1, x, y) = ↑(ContinuousMap.id (X × Y)) (x, y)
[PROOFSTEP]
exact Prod.mk.inj_iff.mpr ⟨HSpace.eHmul.1.3 x, HSpace.eHmul.1.3 y⟩
[GOAL]
case prop'
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑eHmul (p.fst, p.snd.fst), ↑eHmul (p.fst, p.snd.snd))
hG : Continuous G
⊢ ∀ (t : ↑I) (x : X × Y),
x ∈ {(e, e)} →
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
x =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
x ∧
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
x =
↑(ContinuousMap.id (X × Y)) x
[PROOFSTEP]
rintro t ⟨x, y⟩ h
[GOAL]
case prop'.mk
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑eHmul (p.fst, p.snd.fst), ↑eHmul (p.fst, p.snd.snd))
hG : Continuous G
t : ↑I
x : X
y : Y
h : (x, y) ∈ {(e, e)}
⊢ ↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
(x, y) =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
(x, y) ∧
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
(x, y) =
↑(ContinuousMap.id (X × Y)) (x, y)
[PROOFSTEP]
replace h := Prod.mk.inj_iff.mp (Set.mem_singleton_iff.mp h)
[GOAL]
case prop'.mk
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑eHmul (p.fst, p.snd.fst), ↑eHmul (p.fst, p.snd.snd))
hG : Continuous G
t : ↑I
x : X
y : Y
h : x = e ∧ y = e
⊢ ↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
(x, y) =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
(x, y) ∧
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (X × Y) (e, e)) (ContinuousMap.id (X × Y))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
(x, y) =
↑(ContinuousMap.id (X × Y)) (x, y)
[PROOFSTEP]
exact
⟨Prod.mk.inj_iff.mpr
⟨HomotopyRel.eq_fst HSpace.eHmul t (Set.mem_singleton_iff.mpr h.1),
HomotopyRel.eq_fst HSpace.eHmul t (Set.mem_singleton_iff.mpr h.2)⟩,
Prod.mk.inj_iff.mpr ⟨(HSpace.eHmul.2 t x h.1).2, (HSpace.eHmul.2 t y h.2).2⟩⟩
[GOAL]
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
⊢ HomotopyRel
(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
(ContinuousMap.id (X × Y)) {(e, e)}
[PROOFSTEP]
let G : I × X × Y → X × Y := fun p => (HSpace.hmulE (p.1, p.2.1), HSpace.hmulE (p.1, p.2.2))
[GOAL]
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑hmulE (p.fst, p.snd.fst), ↑hmulE (p.fst, p.snd.snd))
⊢ HomotopyRel
(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
(ContinuousMap.id (X × Y)) {(e, e)}
[PROOFSTEP]
have hG : Continuous G :=
(Continuous.comp HSpace.hmulE.1.1.2 (continuous_fst.prod_mk (continuous_fst.comp continuous_snd))).prod_mk
(Continuous.comp HSpace.hmulE.1.1.2 (continuous_fst.prod_mk (continuous_snd.comp continuous_snd)))
[GOAL]
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑hmulE (p.fst, p.snd.fst), ↑hmulE (p.fst, p.snd.snd))
hG : Continuous G
⊢ HomotopyRel
(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
(ContinuousMap.id (X × Y)) {(e, e)}
[PROOFSTEP]
use!⟨G, hG⟩
[GOAL]
case map_zero_left
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑hmulE (p.fst, p.snd.fst), ↑hmulE (p.fst, p.snd.snd))
hG : Continuous G
⊢ ∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
x
[PROOFSTEP]
rintro ⟨x, y⟩
[GOAL]
case map_zero_left.mk
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑hmulE (p.fst, p.snd.fst), ↑hmulE (p.fst, p.snd.snd))
hG : Continuous G
x : X
y : Y
⊢ ContinuousMap.toFun (ContinuousMap.mk G) (0, x, y) =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
(x, y)
[PROOFSTEP]
exacts [Prod.mk.inj_iff.mpr ⟨HSpace.hmulE.1.2 x, HSpace.hmulE.1.2 y⟩]
[GOAL]
case map_one_left
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑hmulE (p.fst, p.snd.fst), ↑hmulE (p.fst, p.snd.snd))
hG : Continuous G
⊢ ∀ (x : X × Y), ContinuousMap.toFun (ContinuousMap.mk G) (1, x) = ↑(ContinuousMap.id (X × Y)) x
[PROOFSTEP]
rintro ⟨x, y⟩
[GOAL]
case map_one_left.mk
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑hmulE (p.fst, p.snd.fst), ↑hmulE (p.fst, p.snd.snd))
hG : Continuous G
x : X
y : Y
⊢ ContinuousMap.toFun (ContinuousMap.mk G) (1, x, y) = ↑(ContinuousMap.id (X × Y)) (x, y)
[PROOFSTEP]
exact Prod.mk.inj_iff.mpr ⟨HSpace.hmulE.1.3 x, HSpace.hmulE.1.3 y⟩
[GOAL]
case prop'
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑hmulE (p.fst, p.snd.fst), ↑hmulE (p.fst, p.snd.snd))
hG : Continuous G
⊢ ∀ (t : ↑I) (x : X × Y),
x ∈ {(e, e)} →
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
x =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
x ∧
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
x =
↑(ContinuousMap.id (X × Y)) x
[PROOFSTEP]
rintro t ⟨x, y⟩ h
[GOAL]
case prop'.mk
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑hmulE (p.fst, p.snd.fst), ↑hmulE (p.fst, p.snd.snd))
hG : Continuous G
t : ↑I
x : X
y : Y
h : (x, y) ∈ {(e, e)}
⊢ ↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
(x, y) =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
(x, y) ∧
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
(x, y) =
↑(ContinuousMap.id (X × Y)) (x, y)
[PROOFSTEP]
replace h := Prod.mk.inj_iff.mp (Set.mem_singleton_iff.mp h)
[GOAL]
case prop'.mk
X : Type u
Y : Type v
inst✝³ : TopologicalSpace X
inst✝² : TopologicalSpace Y
inst✝¹ : HSpace X
inst✝ : HSpace Y
G : ↑I × X × Y → X × Y := fun p => (↑hmulE (p.fst, p.snd.fst), ↑hmulE (p.fst, p.snd.snd))
hG : Continuous G
t : ↑I
x : X
y : Y
h : x = e ∧ y = e
⊢ ↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
(x, y) =
↑(comp (ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
(x, y) ∧
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk G,
map_zero_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (0, x) =
↑(comp
(ContinuousMap.mk fun p => (↑hmul (p.fst.fst, p.snd.fst), ↑hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (X × Y)) (const (X × Y) (e, e))))
x),
map_one_left :=
(_ :
∀ (x : X × Y),
ContinuousMap.toFun (ContinuousMap.mk G) (1, x) =
↑(ContinuousMap.id (X × Y)) x) }.toContinuousMap
(t, x))
(x, y) =
↑(ContinuousMap.id (X × Y)) (x, y)
[PROOFSTEP]
exact
⟨Prod.mk.inj_iff.mpr
⟨HomotopyRel.eq_fst HSpace.hmulE t (Set.mem_singleton_iff.mpr h.1),
HomotopyRel.eq_fst HSpace.hmulE t (Set.mem_singleton_iff.mpr h.2)⟩,
Prod.mk.inj_iff.mpr ⟨(HSpace.hmulE.2 t x h.1).2, (HSpace.hmulE.2 t y h.2).2⟩⟩
[GOAL]
M : Type u
inst✝² : MulOneClass M
inst✝¹ : TopologicalSpace M
inst✝ : ContinuousMul M
⊢ comp (ContinuousMap.mk (Function.uncurry Mul.mul)) (prodMk (const M 1) (ContinuousMap.id M)) = ContinuousMap.id M
[PROOFSTEP]
ext1
[GOAL]
case h
M : Type u
inst✝² : MulOneClass M
inst✝¹ : TopologicalSpace M
inst✝ : ContinuousMul M
a✝ : M
⊢ ↑(comp (ContinuousMap.mk (Function.uncurry Mul.mul)) (prodMk (const M 1) (ContinuousMap.id M))) a✝ =
↑(ContinuousMap.id M) a✝
[PROOFSTEP]
apply one_mul
[GOAL]
M : Type u
inst✝² : MulOneClass M
inst✝¹ : TopologicalSpace M
inst✝ : ContinuousMul M
⊢ comp (ContinuousMap.mk (Function.uncurry Mul.mul)) (prodMk (ContinuousMap.id M) (const M 1)) = ContinuousMap.id M
[PROOFSTEP]
ext1
[GOAL]
case h
M : Type u
inst✝² : MulOneClass M
inst✝¹ : TopologicalSpace M
inst✝ : ContinuousMul M
a✝ : M
⊢ ↑(comp (ContinuousMap.mk (Function.uncurry Mul.mul)) (prodMk (ContinuousMap.id M) (const M 1))) a✝ =
↑(ContinuousMap.id M) a✝
[PROOFSTEP]
apply mul_one
[GOAL]
G G' : Type u
inst✝⁵ : TopologicalSpace G
inst✝⁴ : Group G
inst✝³ : TopologicalGroup G
inst✝² : TopologicalSpace G'
inst✝¹ : Group G'
inst✝ : TopologicalGroup G'
⊢ hSpace (G × G') = HSpace.prod G G'
[PROOFSTEP]
simp only [HSpace.prod]
[GOAL]
G G' : Type u
inst✝⁵ : TopologicalSpace G
inst✝⁴ : Group G
inst✝³ : TopologicalGroup G
inst✝² : TopologicalSpace G'
inst✝¹ : Group G'
inst✝ : TopologicalGroup G'
⊢ hSpace (G × G') =
{ hmul := ContinuousMap.mk fun p => (↑HSpace.hmul (p.fst.fst, p.snd.fst), ↑HSpace.hmul (p.fst.snd, p.snd.snd)),
e := (HSpace.e, HSpace.e),
hmul_e_e :=
(_ :
↑(ContinuousMap.mk fun p => (↑HSpace.hmul (p.fst.fst, p.snd.fst), ↑HSpace.hmul (p.fst.snd, p.snd.snd)))
((HSpace.e, HSpace.e), HSpace.e, HSpace.e) =
(HSpace.e, HSpace.e)),
eHmul :=
{
toHomotopy :=
{
toContinuousMap :=
ContinuousMap.mk fun p => (↑HSpace.eHmul (p.fst, p.snd.fst), ↑HSpace.eHmul (p.fst, p.snd.snd)),
map_zero_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p => (↑HSpace.eHmul (p.fst, p.snd.fst), ↑HSpace.eHmul (p.fst, p.snd.snd)))
(0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑HSpace.hmul (p.fst.fst, p.snd.fst), ↑HSpace.hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (G × G') (HSpace.e, HSpace.e)) (ContinuousMap.id (G × G'))))
x),
map_one_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p => (↑HSpace.eHmul (p.fst, p.snd.fst), ↑HSpace.eHmul (p.fst, p.snd.snd)))
(1, x) =
↑(ContinuousMap.id (G × G')) x) },
prop' :=
(_ :
∀ (t : ↑I) (x : G × G'),
x ∈ {(HSpace.e, HSpace.e)} →
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{
toContinuousMap :=
ContinuousMap.mk fun p =>
(↑HSpace.eHmul (p.fst, p.snd.fst), ↑HSpace.eHmul (p.fst, p.snd.snd)),
map_zero_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p =>
(↑HSpace.eHmul (p.fst, p.snd.fst), ↑HSpace.eHmul (p.fst, p.snd.snd)))
(0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑HSpace.hmul (p.fst.fst, p.snd.fst),
↑HSpace.hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (G × G') (HSpace.e, HSpace.e))
(ContinuousMap.id (G × G'))))
x),
map_one_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p =>
(↑HSpace.eHmul (p.fst, p.snd.fst), ↑HSpace.eHmul (p.fst, p.snd.snd)))
(1, x) =
↑(ContinuousMap.id (G × G')) x) }.toContinuousMap
(t, x))
x =
↑(comp
(ContinuousMap.mk fun p =>
(↑HSpace.hmul (p.fst.fst, p.snd.fst), ↑HSpace.hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (G × G') (HSpace.e, HSpace.e)) (ContinuousMap.id (G × G'))))
x ∧
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{
toContinuousMap :=
ContinuousMap.mk fun p =>
(↑HSpace.eHmul (p.fst, p.snd.fst), ↑HSpace.eHmul (p.fst, p.snd.snd)),
map_zero_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p =>
(↑HSpace.eHmul (p.fst, p.snd.fst), ↑HSpace.eHmul (p.fst, p.snd.snd)))
(0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑HSpace.hmul (p.fst.fst, p.snd.fst),
↑HSpace.hmul (p.fst.snd, p.snd.snd)))
(prodMk (const (G × G') (HSpace.e, HSpace.e))
(ContinuousMap.id (G × G'))))
x),
map_one_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p =>
(↑HSpace.eHmul (p.fst, p.snd.fst), ↑HSpace.eHmul (p.fst, p.snd.snd)))
(1, x) =
↑(ContinuousMap.id (G × G')) x) }.toContinuousMap
(t, x))
x =
↑(ContinuousMap.id (G × G')) x) },
hmulE :=
{
toHomotopy :=
{
toContinuousMap :=
ContinuousMap.mk fun p => (↑HSpace.hmulE (p.fst, p.snd.fst), ↑HSpace.hmulE (p.fst, p.snd.snd)),
map_zero_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p => (↑HSpace.hmulE (p.fst, p.snd.fst), ↑HSpace.hmulE (p.fst, p.snd.snd)))
(0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑HSpace.hmul (p.fst.fst, p.snd.fst), ↑HSpace.hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (G × G')) (const (G × G') (HSpace.e, HSpace.e))))
x),
map_one_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p => (↑HSpace.hmulE (p.fst, p.snd.fst), ↑HSpace.hmulE (p.fst, p.snd.snd)))
(1, x) =
↑(ContinuousMap.id (G × G')) x) },
prop' :=
(_ :
∀ (t : ↑I) (x : G × G'),
x ∈ {(HSpace.e, HSpace.e)} →
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{
toContinuousMap :=
ContinuousMap.mk fun p =>
(↑HSpace.hmulE (p.fst, p.snd.fst), ↑HSpace.hmulE (p.fst, p.snd.snd)),
map_zero_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p =>
(↑HSpace.hmulE (p.fst, p.snd.fst), ↑HSpace.hmulE (p.fst, p.snd.snd)))
(0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑HSpace.hmul (p.fst.fst, p.snd.fst),
↑HSpace.hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (G × G'))
(const (G × G') (HSpace.e, HSpace.e))))
x),
map_one_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p =>
(↑HSpace.hmulE (p.fst, p.snd.fst), ↑HSpace.hmulE (p.fst, p.snd.snd)))
(1, x) =
↑(ContinuousMap.id (G × G')) x) }.toContinuousMap
(t, x))
x =
↑(comp
(ContinuousMap.mk fun p =>
(↑HSpace.hmul (p.fst.fst, p.snd.fst), ↑HSpace.hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (G × G')) (const (G × G') (HSpace.e, HSpace.e))))
x ∧
↑(ContinuousMap.mk fun x =>
ContinuousMap.toFun
{
toContinuousMap :=
ContinuousMap.mk fun p =>
(↑HSpace.hmulE (p.fst, p.snd.fst), ↑HSpace.hmulE (p.fst, p.snd.snd)),
map_zero_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p =>
(↑HSpace.hmulE (p.fst, p.snd.fst), ↑HSpace.hmulE (p.fst, p.snd.snd)))
(0, x) =
↑(comp
(ContinuousMap.mk fun p =>
(↑HSpace.hmul (p.fst.fst, p.snd.fst),
↑HSpace.hmul (p.fst.snd, p.snd.snd)))
(prodMk (ContinuousMap.id (G × G'))
(const (G × G') (HSpace.e, HSpace.e))))
x),
map_one_left :=
(_ :
∀ (x : G × G'),
ContinuousMap.toFun
(ContinuousMap.mk fun p =>
(↑HSpace.hmulE (p.fst, p.snd.fst), ↑HSpace.hmulE (p.fst, p.snd.snd)))
(1, x) =
↑(ContinuousMap.id (G × G')) x) }.toContinuousMap
(t, x))
x =
↑(ContinuousMap.id (G × G')) x) } }
[PROOFSTEP]
rfl
[GOAL]
⊢ Continuous fun p => 2 * ↑p.fst
[PROOFSTEP]
continuity
[GOAL]
⊢ Continuous fun p => 1 + ↑p.snd
[PROOFSTEP]
continuity
[GOAL]
θ : ↑I
⊢ 2 * ↑(0, θ).fst / (1 + ↑(0, θ).snd) ≤ 0
[PROOFSTEP]
simp only [coe_zero, mul_zero, zero_div, le_refl]
[GOAL]
θ : ↑I
⊢ 1 * (1 + ↑(1, θ).snd) ≤ 2 * ↑(1, θ).fst
[PROOFSTEP]
dsimp only
[GOAL]
θ : ↑I
⊢ 1 * (1 + ↑θ) ≤ 2 * ↑1
[PROOFSTEP]
rw [coe_one, one_mul, mul_one, add_comm, ← one_add_one_eq_two]
[GOAL]
θ : ↑I
⊢ ↑θ + 1 ≤ 1 + 1
[PROOFSTEP]
simp only [add_le_add_iff_right]
[GOAL]
θ : ↑I
⊢ ↑θ ≤ 1
[PROOFSTEP]
exact le_one _
[GOAL]
t : ↑I
⊢ ↑(qRight (t, 0)) = if ↑t ≤ 1 / 2 then 2 * ↑t else 1
[PROOFSTEP]
simp only [qRight, coe_zero, add_zero, div_one]
[GOAL]
t : ↑I
⊢ ↑(Set.projIcc 0 1 qRight.proof_1 (2 * ↑t)) = if ↑t ≤ 1 / 2 then 2 * ↑t else 1
[PROOFSTEP]
split_ifs
[GOAL]
case pos
t : ↑I
h✝ : ↑t ≤ 1 / 2
⊢ ↑(Set.projIcc 0 1 qRight.proof_1 (2 * ↑t)) = 2 * ↑t
[PROOFSTEP]
rw [Set.projIcc_of_mem _ ((mul_pos_mem_iff zero_lt_two).2 _)]
[GOAL]
t : ↑I
h✝ : ↑t ≤ 1 / 2
⊢ ↑t ∈ Set.Icc 0 (1 / 2)
[PROOFSTEP]
refine' ⟨t.2.1, _⟩
[GOAL]
t : ↑I
h✝ : ↑t ≤ 1 / 2
⊢ ↑t ≤ 1 / 2
[PROOFSTEP]
tauto
[GOAL]
case neg
t : ↑I
h✝ : ¬↑t ≤ 1 / 2
⊢ ↑(Set.projIcc 0 1 qRight.proof_1 (2 * ↑t)) = 1
[PROOFSTEP]
rw [(Set.projIcc_eq_right _).2]
[GOAL]
case neg
t : ↑I
h✝ : ¬↑t ≤ 1 / 2
⊢ 1 ≤ 2 * ↑t
[PROOFSTEP]
linarith
[GOAL]
t : ↑I
h✝ : ¬↑t ≤ 1 / 2
⊢ 0 < 1
[PROOFSTEP]
exact zero_lt_one
[GOAL]
t : ↑I
⊢ qRight (t, 1) = Set.projIcc 0 1 (_ : 0 ≤ 1) ↑t
[PROOFSTEP]
rw [qRight]
[GOAL]
t : ↑I
⊢ Set.projIcc 0 1 qRight.proof_1 (2 * ↑(t, 1).fst / (1 + ↑(t, 1).snd)) = Set.projIcc 0 1 (_ : 0 ≤ 1) ↑t
[PROOFSTEP]
congr
[GOAL]
case e_x
t : ↑I
⊢ 2 * ↑(t, 1).fst / (1 + ↑(t, 1).snd) = ↑t
[PROOFSTEP]
norm_num
[GOAL]
case e_x
t : ↑I
⊢ 2 * ↑t / 2 = ↑t
[PROOFSTEP]
apply mul_div_cancel_left
[GOAL]
case e_x.ha
t : ↑I
⊢ 2 ≠ 0
[PROOFSTEP]
exact two_ne_zero
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
θ : ↑I
γ : Path x y
⊢ ContinuousMap.toFun (ContinuousMap.mk fun t => ↑γ (qRight (t, θ))) 0 = x
[PROOFSTEP]
dsimp only
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
θ : ↑I
γ : Path x y
⊢ ↑γ (qRight (0, θ)) = x
[PROOFSTEP]
rw [qRight_zero_left, γ.source]
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
θ : ↑I
γ : Path x y
⊢ ContinuousMap.toFun (ContinuousMap.mk fun t => ↑γ (qRight (t, θ))) 1 = y
[PROOFSTEP]
dsimp only
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
θ : ↑I
γ : Path x y
⊢ ↑γ (qRight (1, θ)) = y
[PROOFSTEP]
rw [qRight_one_left, γ.target]
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
⊢ delayReflRight 0 γ = trans γ (refl y)
[PROOFSTEP]
ext t
[GOAL]
case a.h
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
⊢ ↑(delayReflRight 0 γ) t = ↑(trans γ (refl y)) t
[PROOFSTEP]
simp only [delayReflRight, trans_apply, refl_extend, Path.coe_mk_mk, Function.comp_apply, refl_apply]
[GOAL]
case a.h
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
⊢ ↑γ (qRight (t, 0)) = if h : ↑t ≤ 1 / 2 then ↑γ { val := 2 * ↑t, property := (_ : 2 * ↑t ∈ I) } else y
[PROOFSTEP]
split_ifs with h
[GOAL]
case pos
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ↑t ≤ 1 / 2
⊢ ↑γ (qRight (t, 0)) = ↑γ { val := 2 * ↑t, property := (_ : 2 * ↑t ∈ I) }
case neg X : Type u inst✝ : TopologicalSpace X x y : X γ : Path x y t : ↑I h : ¬↑t ≤ 1 / 2 ⊢ ↑γ (qRight (t, 0)) = y
[PROOFSTEP]
swap
[GOAL]
case neg
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ¬↑t ≤ 1 / 2
⊢ ↑γ (qRight (t, 0)) = y
case pos
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ↑t ≤ 1 / 2
⊢ ↑γ (qRight (t, 0)) = ↑γ { val := 2 * ↑t, property := (_ : 2 * ↑t ∈ I) }
[PROOFSTEP]
conv_rhs => rw [← γ.target]
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ¬↑t ≤ 1 / 2
| y
[PROOFSTEP]
rw [← γ.target]
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ¬↑t ≤ 1 / 2
| y
[PROOFSTEP]
rw [← γ.target]
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ¬↑t ≤ 1 / 2
| y
[PROOFSTEP]
rw [← γ.target]
[GOAL]
case neg
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ¬↑t ≤ 1 / 2
⊢ ↑γ (qRight (t, 0)) = ↑γ 1
case pos
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ↑t ≤ 1 / 2
⊢ ↑γ (qRight (t, 0)) = ↑γ { val := 2 * ↑t, property := (_ : 2 * ↑t ∈ I) }
[PROOFSTEP]
all_goals apply congr_arg γ; ext1; rw [qRight_zero_right]
[GOAL]
case neg
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ¬↑t ≤ 1 / 2
⊢ ↑γ (qRight (t, 0)) = ↑γ 1
[PROOFSTEP]
apply congr_arg γ
[GOAL]
case neg
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ¬↑t ≤ 1 / 2
⊢ qRight (t, 0) = 1
[PROOFSTEP]
ext1
[GOAL]
case neg.a
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ¬↑t ≤ 1 / 2
⊢ ↑(qRight (t, 0)) = ↑1
[PROOFSTEP]
rw [qRight_zero_right]
[GOAL]
case pos
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ↑t ≤ 1 / 2
⊢ ↑γ (qRight (t, 0)) = ↑γ { val := 2 * ↑t, property := (_ : 2 * ↑t ∈ I) }
[PROOFSTEP]
apply congr_arg γ
[GOAL]
case pos
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ↑t ≤ 1 / 2
⊢ qRight (t, 0) = { val := 2 * ↑t, property := (_ : 2 * ↑t ∈ I) }
[PROOFSTEP]
ext1
[GOAL]
case pos.a
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ↑t ≤ 1 / 2
⊢ ↑(qRight (t, 0)) = ↑{ val := 2 * ↑t, property := (_ : 2 * ↑t ∈ I) }
[PROOFSTEP]
rw [qRight_zero_right]
[GOAL]
case neg.a
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ¬↑t ≤ 1 / 2
⊢ (if ↑t ≤ 1 / 2 then 2 * ↑t else 1) = ↑1
case pos.a
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
h : ↑t ≤ 1 / 2
⊢ (if ↑t ≤ 1 / 2 then 2 * ↑t else 1) = ↑{ val := 2 * ↑t, property := (_ : 2 * ↑t ∈ I) }
[PROOFSTEP]
exacts [if_neg h, if_pos h]
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
⊢ delayReflRight 1 γ = γ
[PROOFSTEP]
ext t
[GOAL]
case a.h
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
t : ↑I
⊢ ↑(delayReflRight 1 γ) t = ↑γ t
[PROOFSTEP]
exact congr_arg γ (qRight_one_right t)
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
⊢ delayReflLeft 0 γ = trans (refl x) γ
[PROOFSTEP]
simp only [delayReflLeft, delayReflRight_zero, trans_symm, refl_symm, Path.symm_symm]
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x y : X
γ : Path x y
⊢ delayReflLeft 1 γ = γ
[PROOFSTEP]
simp only [delayReflLeft, delayReflRight_one, Path.symm_symm]
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x✝ y x : X
⊢ ∀ (t : ↑I) (x_1 : Path x x),
x_1 ∈ {refl x} →
↑(ContinuousMap.mk fun x_2 =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk fun p => delayReflLeft p.fst p.snd,
map_zero_left := (_ : ∀ (γ : Path x x), delayReflLeft 0 γ = trans (refl x) γ),
map_one_left := (_ : ∀ (γ : Path x x), delayReflLeft 1 γ = γ) }.toContinuousMap
(t, x_2))
x_1 =
↑(comp (ContinuousMap.mk fun ρ => trans ρ.fst ρ.snd)
(prodMk (const (Path x x) (refl x)) (ContinuousMap.id (Path x x))))
x_1 ∧
↑(ContinuousMap.mk fun x_2 =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk fun p => delayReflLeft p.fst p.snd,
map_zero_left := (_ : ∀ (γ : Path x x), delayReflLeft 0 γ = trans (refl x) γ),
map_one_left := (_ : ∀ (γ : Path x x), delayReflLeft 1 γ = γ) }.toContinuousMap
(t, x_2))
x_1 =
↑(ContinuousMap.id (Path x x)) x_1
[PROOFSTEP]
rintro t _ (rfl : _ = _)
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x✝ y x : X
t : ↑I
⊢ ↑(ContinuousMap.mk fun x_1 =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk fun p => delayReflLeft p.fst p.snd,
map_zero_left := (_ : ∀ (γ : Path x x), delayReflLeft 0 γ = trans (refl x) γ),
map_one_left := (_ : ∀ (γ : Path x x), delayReflLeft 1 γ = γ) }.toContinuousMap
(t, x_1))
(refl x) =
↑(comp (ContinuousMap.mk fun ρ => trans ρ.fst ρ.snd)
(prodMk (const (Path x x) (refl x)) (ContinuousMap.id (Path x x))))
(refl x) ∧
↑(ContinuousMap.mk fun x_1 =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk fun p => delayReflLeft p.fst p.snd,
map_zero_left := (_ : ∀ (γ : Path x x), delayReflLeft 0 γ = trans (refl x) γ),
map_one_left := (_ : ∀ (γ : Path x x), delayReflLeft 1 γ = γ) }.toContinuousMap
(t, x_1))
(refl x) =
↑(ContinuousMap.id (Path x x)) (refl x)
[PROOFSTEP]
exact ⟨refl_trans_refl.symm, rfl⟩
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x✝ y x : X
⊢ ∀ (t : ↑I) (x_1 : Path x x),
x_1 ∈ {refl x} →
↑(ContinuousMap.mk fun x_2 =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk fun p => delayReflRight p.fst p.snd,
map_zero_left := (_ : ∀ (γ : Path x x), delayReflRight 0 γ = trans γ (refl x)),
map_one_left := (_ : ∀ (γ : Path x x), delayReflRight 1 γ = γ) }.toContinuousMap
(t, x_2))
x_1 =
↑(comp (ContinuousMap.mk fun ρ => trans ρ.fst ρ.snd)
(prodMk (ContinuousMap.id (Path x x)) (const (Path x x) (refl x))))
x_1 ∧
↑(ContinuousMap.mk fun x_2 =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk fun p => delayReflRight p.fst p.snd,
map_zero_left := (_ : ∀ (γ : Path x x), delayReflRight 0 γ = trans γ (refl x)),
map_one_left := (_ : ∀ (γ : Path x x), delayReflRight 1 γ = γ) }.toContinuousMap
(t, x_2))
x_1 =
↑(ContinuousMap.id (Path x x)) x_1
[PROOFSTEP]
rintro t _ (rfl : _ = _)
[GOAL]
X : Type u
inst✝ : TopologicalSpace X
x✝ y x : X
t : ↑I
⊢ ↑(ContinuousMap.mk fun x_1 =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk fun p => delayReflRight p.fst p.snd,
map_zero_left := (_ : ∀ (γ : Path x x), delayReflRight 0 γ = trans γ (refl x)),
map_one_left := (_ : ∀ (γ : Path x x), delayReflRight 1 γ = γ) }.toContinuousMap
(t, x_1))
(refl x) =
↑(comp (ContinuousMap.mk fun ρ => trans ρ.fst ρ.snd)
(prodMk (ContinuousMap.id (Path x x)) (const (Path x x) (refl x))))
(refl x) ∧
↑(ContinuousMap.mk fun x_1 =>
ContinuousMap.toFun
{ toContinuousMap := ContinuousMap.mk fun p => delayReflRight p.fst p.snd,
map_zero_left := (_ : ∀ (γ : Path x x), delayReflRight 0 γ = trans γ (refl x)),
map_one_left := (_ : ∀ (γ : Path x x), delayReflRight 1 γ = γ) }.toContinuousMap
(t, x_1))
(refl x) =
↑(ContinuousMap.id (Path x x)) (refl x)
[PROOFSTEP]
exact ⟨refl_trans_refl.symm, rfl⟩
|
## 2. Plotting the Residuals ##
library(readr)
williamsburg_north <- suppressMessages(read_csv("williamsburg_north.csv"))
library(dqanswerchecking)
condos_lm_fit <- lm(sale_price ~ gross_square_feet, data = williamsburg_north)
library(ggplot2)
residuals_df <- data.frame(condos_lm_fit$residuals)
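# data.frame() sanitizes the column name: condos_lm_fit$residuals becomes condos_lm_fit.residuals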
ggplot(data = residuals_df, aes(x = condos_lm_fit.residuals)) +
geom_histogram()
## 4. The t-statistic ##
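# t = (estimate - hypothesized value) / SE; 1926.6 and 169.5 are the slope
# estimate and its standard error read off summary(condos_lm_fit)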
t_statistic <- (1926.6 - 0) / 169.5
## 5. The p-value ##
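# column 4 of the coefficient table holds the p-values; [[2]] picks the slope row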
p_value <- coef(summary(condos_lm_fit))[, 4][[2]]
reject_null_hypothesis <- TRUE
## 6. Confidence Intervals ##
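# rule-of-thumb 95% CI: estimate +/- 2 * SE; confint() below gives the exact t-based interval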
slope_CI_lower <- condos_lm_fit$coefficients[[2]] - 2 *
coef(summary(condos_lm_fit))[, 2][[2]]
slope_CI_upper <- condos_lm_fit$coefficients[[2]] + 2 *
coef(summary(condos_lm_fit))[, 2][[2]]
slope_CI <- confint(condos_lm_fit)[2,]
## 7. Residual Standard Error ##
library(dplyr)
# Add residuals and squared-residuals
williamsburg_north <- williamsburg_north %>%
mutate(residuals = resid(condos_lm_fit)) %>%
mutate(resid_squared = residuals^2)
# Compute the residual sum of squares (RSS)
RSS <- williamsburg_north %>%
summarise(RSS = sum(resid_squared)) %>%
pull()
# Extract RSS from model output
RSS_from_lm <- deviance(condos_lm_fit)
# Optional: check RSS equality
near(RSS, deviance(condos_lm_fit))
# Manual RSE
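# RSE = sqrt(RSS / df), with df = n - 2 because two coefficients (intercept, slope) are estimated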
RSE <- sqrt(RSS / (nrow(williamsburg_north) - 2))
# Alternate method for RSE
RSE <- sqrt(1 / (nrow(williamsburg_north) - 2) * RSS)
lm_fit_sigma <- sigma(condos_lm_fit)
are_equal <- near(RSE, lm_fit_sigma)
## 8. The R-squared Statistic ##
# Add residuals and squared-residuals
williamsburg_north <- williamsburg_north %>%
mutate(residuals = resid(condos_lm_fit)) %>%
mutate(resid_squared = residuals^2)
# Compute the residual sum of squares (RSS)
RSS <- williamsburg_north %>%
summarise(RSS = sum(resid_squared)) %>%
pull()
TSS <- sum((williamsburg_north$sale_price -
mean(williamsburg_north$sale_price))^2)
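# R^2 = 1 - RSS/TSS: the proportion of variance in sale_price explained by the model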
r_squared <- 1 - RSS/TSS
lm_r_squared <- summary(condos_lm_fit)$r.squared
are_equal <- near(r_squared, lm_r_squared)
adj_r_squared <- summary(condos_lm_fit)$adj.r.squared |
IF (.NOT. in%is_complex .EQV. convert_to_complex) THEN
CALL MergeMatrixLocalBlocks(in, local_matrix)
CALL ConstructEmptyMatrix(out, in%actual_matrix_dimension, &
& process_grid_in=in%process_grid, is_complex_in=convert_to_complex)
CALL ConvertMatrixType(local_matrix, converted_matrix)
CALL SplitMatrixToLocalBlocks(out, converted_matrix)
END IF
|
module Plotting where
import qualified Data.Random.Distribution.Normal as DRDN
import qualified Data.Vector as V
import Graphics.Rendering.Chart.Backend.Cairo
import Graphics.Rendering.Chart.Easy
import qualified Numeric.LinearAlgebra.Data as LAD
import qualified Sample as S
import qualified Statistics.Distribution as SD
import qualified Statistics.Distribution.Normal as SDN
import qualified Statistics.Sample.Histogram as SSH
normGrid :: [Double]
normGrid = LAD.toList $ LAD.linspace 100 (-3, 3)
normDensityGrid :: [Double]
normDensityGrid = fmap (SD.density SDN.standard) normGrid
fillFromZero :: [(Double, Double)] -> EC m2 (PlotFillBetween Double Double)
fillFromZero points = liftEC $ do
plot_fillbetween_style .= solidFillStyle (grey `withOpacity` 0.25)
plot_fillbetween_values .= fmap (\(x, y) -> (x, (0, y))) points
histPlot :: FilePath -> [(Double, Double)] -> IO ()
histPlot filename points = toFile def filename $ do
setColors [opaque black]
color <- takeColor
plot (line "Density" [points])
plot (fillFromZero points)
pHistPlot :: FilePath -> [Double] -> IO ()
pHistPlot = undefined
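-- Bin the samples and convert counts to proportions. (Counts are divided by
-- the sample size, not the bin width, so these are relative frequencies
-- rather than true densities.)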
histogram :: Int -> [Double] -> [(Double, Double)]
histogram nBins xs = zip (V.toList grid) (V.toList densities)
where
(grid, counts) = SSH.histogram nBins (V.fromList xs)
densities = V.map (\x -> x / fromIntegral (length xs)) counts
xs :: [Double]
xs = take 1000 (S.sample 123 DRDN.StdNormal)
go = histogram 25 xs
|
/-
Copyright (c) 2022 Yury G. Kudryashov. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Yury G. Kudryashov
! This file was ported from Lean 3 source module analysis.complex.schwarz
! leanprover-community/mathlib commit f2ce6086713c78a7f880485f7917ea547a215982
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Analysis.Complex.AbsMax
import Mathbin.Analysis.Complex.RemovableSingularity
/-!
# Schwarz lemma
In this file we prove several versions of the Schwarz lemma.
* `complex.norm_deriv_le_div_of_maps_to_ball`, `complex.abs_deriv_le_div_of_maps_to_ball`: if
`f : ℂ → E` sends an open disk with center `c` and a positive radius `R₁` to an open ball with
center `f c` and radius `R₂`, then the absolute value of the derivative of `f` at `c` is at most
the ratio `R₂ / R₁`;
* `complex.dist_le_div_mul_dist_of_maps_to_ball`: if `f : ℂ → E` sends an open disk with center `c`
and radius `R₁` to an open disk with center `f c` and radius `R₂`, then for any `z` in the former
disk we have `dist (f z) (f c) ≤ (R₂ / R₁) * dist z c`;
* `complex.abs_deriv_le_one_of_maps_to_ball`: if `f : ℂ → ℂ` sends an open disk of positive radius
to itself and the center of this disk to itself, then the absolute value of the derivative of `f`
at the center of this disk is at most `1`;
* `complex.dist_le_dist_of_maps_to_ball`: if `f : ℂ → ℂ` sends an open disk to itself and the center
`c` of this disk to itself, then for any point `z` of this disk we have `dist (f z) c ≤ dist z c`;
* `complex.abs_le_abs_of_maps_to_ball`: if `f : ℂ → ℂ` sends an open disk with center `0` to itself,
then for any point `z` of this disk we have `abs (f z) ≤ abs z`.
## Implementation notes
We prove some versions of the Schwarz lemma for a map `f : ℂ → E` taking values in any normed space
over complex numbers.
## TODO
* Prove that these inequalities are strict unless `f` is an affine map.
* Prove that any diffeomorphism of the unit disk to itself is a Möbius map.
## Tags
Schwarz lemma
-/
open Metric Set Function Filter TopologicalSpace
open Topology
namespace Complex
section Space
variable {E : Type _} [NormedAddCommGroup E] [NormedSpace ℂ E] {R R₁ R₂ : ℝ} {f : ℂ → E}
{c z z₀ : ℂ}
/-- An auxiliary lemma for `complex.norm_dslope_le_div_of_maps_to_ball`. -/
theorem schwarz_aux {f : ℂ → ℂ} (hd : DifferentiableOn ℂ f (ball c R₁))
(h_maps : MapsTo f (ball c R₁) (ball (f c) R₂)) (hz : z ∈ ball c R₁) :
‖dslope f c z‖ ≤ R₂ / R₁ :=
by
have hR₁ : 0 < R₁ := nonempty_ball.1 ⟨z, hz⟩
suffices ∀ᶠ r in 𝓝[<] R₁, ‖dslope f c z‖ ≤ R₂ / r
by
refine' ge_of_tendsto _ this
exact (tendsto_const_nhds.div tendsto_id hR₁.ne').mono_left nhdsWithin_le_nhds
rw [mem_ball] at hz
  filter_upwards [Ioo_mem_nhdsWithin_Iio ⟨hz, le_rfl⟩] with r hr
have hr₀ : 0 < r := dist_nonneg.trans_lt hr.1
replace hd : DiffContOnCl ℂ (dslope f c) (ball c r)
· refine' DifferentiableOn.diffContOnCl _
rw [closure_ball c hr₀.ne']
exact
((differentiable_on_dslope <| ball_mem_nhds _ hR₁).mpr hd).mono (closed_ball_subset_ball hr.2)
refine' norm_le_of_forall_mem_frontier_norm_le bounded_ball hd _ _
· rw [frontier_ball c hr₀.ne']
intro z hz
have hz' : z ≠ c := ne_of_mem_sphere hz hr₀.ne'
rw [dslope_of_ne _ hz', slope_def_module, norm_smul, norm_inv, mem_sphere_iff_norm.1 hz, ←
div_eq_inv_mul, div_le_div_right hr₀, ← dist_eq_norm]
exact
le_of_lt
(h_maps
(mem_ball.2
(by
rw [mem_sphere.1 hz]
exact hr.2)))
· rw [closure_ball c hr₀.ne', mem_closed_ball]
exact hr.1.le
#align complex.schwarz_aux Complex.schwarz_aux
/-- Two cases of the **Schwarz Lemma** (derivative and distance), merged together. -/
theorem norm_dslope_le_div_of_mapsTo_ball (hd : DifferentiableOn ℂ f (ball c R₁))
(h_maps : MapsTo f (ball c R₁) (ball (f c) R₂)) (hz : z ∈ ball c R₁) :
‖dslope f c z‖ ≤ R₂ / R₁ :=
by
have hR₁ : 0 < R₁ := nonempty_ball.1 ⟨z, hz⟩
have hR₂ : 0 < R₂ := nonempty_ball.1 ⟨f z, h_maps hz⟩
cases' eq_or_ne (dslope f c z) 0 with hc hc
· rw [hc, norm_zero]
exact div_nonneg hR₂.le hR₁.le
rcases exists_dual_vector ℂ _ hc with ⟨g, hg, hgf⟩
have hg' : ‖g‖₊ = 1 := NNReal.eq hg
have hg₀ : ‖g‖₊ ≠ 0 := by simpa only [hg'] using one_ne_zero
calc
‖dslope f c z‖ = ‖dslope (g ∘ f) c z‖ :=
by
rw [g.dslope_comp, hgf, IsROrC.norm_of_real, norm_norm]
exact fun _ => hd.differentiable_at (ball_mem_nhds _ hR₁)
_ ≤ R₂ / R₁ :=
by
refine' schwarz_aux (g.differentiable.comp_differentiable_on hd) (maps_to.comp _ h_maps) hz
simpa only [hg', NNReal.coe_one, one_mul] using g.lipschitz.maps_to_ball hg₀ (f c) R₂
#align complex.norm_dslope_le_div_of_maps_to_ball Complex.norm_dslope_le_div_of_mapsTo_ball
/-- Equality case in the **Schwarz Lemma**: in the setup of `norm_dslope_le_div_of_maps_to_ball`, if
`‖dslope f c z₀‖ = R₂ / R₁` holds at a point in the ball then the map `f` is affine. -/
theorem affine_of_mapsTo_ball_of_exists_norm_dslope_eq_div [CompleteSpace E] [StrictConvexSpace ℝ E]
(hd : DifferentiableOn ℂ f (ball c R₁)) (h_maps : Set.MapsTo f (ball c R₁) (ball (f c) R₂))
(h_z₀ : z₀ ∈ ball c R₁) (h_eq : ‖dslope f c z₀‖ = R₂ / R₁) :
Set.EqOn f (fun z => f c + (z - c) • dslope f c z₀) (ball c R₁) :=
by
set g := dslope f c
rintro z hz
by_cases z = c
· simp [h]
have h_R₁ : 0 < R₁ := nonempty_ball.mp ⟨_, h_z₀⟩
have g_le_div : ∀ z ∈ ball c R₁, ‖g z‖ ≤ R₂ / R₁ := fun z hz =>
norm_dslope_le_div_of_maps_to_ball hd h_maps hz
have g_max : IsMaxOn (norm ∘ g) (ball c R₁) z₀ :=
is_max_on_iff.mpr fun z hz => by simpa [h_eq] using g_le_div z hz
have g_diff : DifferentiableOn ℂ g (ball c R₁) :=
(differentiable_on_dslope (is_open_ball.mem_nhds (mem_ball_self h_R₁))).mpr hd
have : g z = g z₀ :=
eq_on_of_is_preconnected_of_is_max_on_norm (convex_ball c R₁).IsPreconnected is_open_ball g_diff
h_z₀ g_max hz
simp [← this]
#align complex.affine_of_maps_to_ball_of_exists_norm_dslope_eq_div Complex.affine_of_mapsTo_ball_of_exists_norm_dslope_eq_div
theorem affine_of_mapsTo_ball_of_exists_norm_dslope_eq_div' [CompleteSpace E]
[StrictConvexSpace ℝ E] (hd : DifferentiableOn ℂ f (ball c R₁))
(h_maps : Set.MapsTo f (ball c R₁) (ball (f c) R₂))
(h_z₀ : ∃ z₀ ∈ ball c R₁, ‖dslope f c z₀‖ = R₂ / R₁) :
∃ C : E, ‖C‖ = R₂ / R₁ ∧ Set.EqOn f (fun z => f c + (z - c) • C) (ball c R₁) :=
let ⟨z₀, h_z₀, h_eq⟩ := h_z₀
⟨dslope f c z₀, h_eq, affine_of_mapsTo_ball_of_exists_norm_dslope_eq_div hd h_maps h_z₀ h_eq⟩
#align complex.affine_of_maps_to_ball_of_exists_norm_dslope_eq_div' Complex.affine_of_mapsTo_ball_of_exists_norm_dslope_eq_div'
/-- The **Schwarz Lemma**: if `f : ℂ → E` sends an open disk with center `c` and a positive radius
`R₁` to an open ball with center `f c` and radius `R₂`, then the absolute value of the derivative of
`f` at `c` is at most the ratio `R₂ / R₁`. -/
theorem norm_deriv_le_div_of_mapsTo_ball (hd : DifferentiableOn ℂ f (ball c R₁))
(h_maps : MapsTo f (ball c R₁) (ball (f c) R₂)) (h₀ : 0 < R₁) : ‖deriv f c‖ ≤ R₂ / R₁ := by
simpa only [dslope_same] using norm_dslope_le_div_of_maps_to_ball hd h_maps (mem_ball_self h₀)
#align complex.norm_deriv_le_div_of_maps_to_ball Complex.norm_deriv_le_div_of_mapsTo_ball
/-- The **Schwarz Lemma**: if `f : ℂ → E` sends an open disk with center `c` and radius `R₁` to an
open ball with center `f c` and radius `R₂`, then for any `z` in the former disk we have
`dist (f z) (f c) ≤ (R₂ / R₁) * dist z c`. -/
theorem dist_le_div_mul_dist_of_mapsTo_ball (hd : DifferentiableOn ℂ f (ball c R₁))
(h_maps : MapsTo f (ball c R₁) (ball (f c) R₂)) (hz : z ∈ ball c R₁) :
dist (f z) (f c) ≤ R₂ / R₁ * dist z c :=
by
rcases eq_or_ne z c with (rfl | hne); · simp only [dist_self, MulZeroClass.mul_zero]
simpa only [dslope_of_ne _ hne, slope_def_module, norm_smul, norm_inv, ← div_eq_inv_mul, ←
dist_eq_norm, div_le_iff (dist_pos.2 hne)] using norm_dslope_le_div_of_maps_to_ball hd h_maps hz
#align complex.dist_le_div_mul_dist_of_maps_to_ball Complex.dist_le_div_mul_dist_of_mapsTo_ball
end Space
variable {f : ℂ → ℂ} {c z : ℂ} {R R₁ R₂ : ℝ}
/-- The **Schwarz Lemma**: if `f : ℂ → ℂ` sends an open disk with center `c` and a positive radius
`R₁` to an open disk with center `f c` and radius `R₂`, then the absolute value of the derivative of
`f` at `c` is at most the ratio `R₂ / R₁`. -/
theorem abs_deriv_le_div_of_mapsTo_ball (hd : DifferentiableOn ℂ f (ball c R₁))
(h_maps : MapsTo f (ball c R₁) (ball (f c) R₂)) (h₀ : 0 < R₁) : abs (deriv f c) ≤ R₂ / R₁ :=
norm_deriv_le_div_of_mapsTo_ball hd h_maps h₀
#align complex.abs_deriv_le_div_of_maps_to_ball Complex.abs_deriv_le_div_of_mapsTo_ball
/-- The **Schwarz Lemma**: if `f : ℂ → ℂ` sends an open disk of positive radius to itself and the
center of this disk to itself, then the absolute value of the derivative of `f` at the center of
this disk is at most `1`. -/
theorem abs_deriv_le_one_of_mapsTo_ball (hd : DifferentiableOn ℂ f (ball c R))
(h_maps : MapsTo f (ball c R) (ball c R)) (hc : f c = c) (h₀ : 0 < R) : abs (deriv f c) ≤ 1 :=
(norm_deriv_le_div_of_mapsTo_ball hd (by rwa [hc]) h₀).trans_eq (div_self h₀.ne')
#align complex.abs_deriv_le_one_of_maps_to_ball Complex.abs_deriv_le_one_of_mapsTo_ball
/-- The **Schwarz Lemma**: if `f : ℂ → ℂ` sends an open disk to itself and the center `c` of this
disk to itself, then for any point `z` of this disk we have `dist (f z) c ≤ dist z c`. -/
theorem dist_le_dist_of_mapsTo_ball_self (hd : DifferentiableOn ℂ f (ball c R))
(h_maps : MapsTo f (ball c R) (ball c R)) (hc : f c = c) (hz : z ∈ ball c R) :
dist (f z) c ≤ dist z c :=
by
have hR : 0 < R := nonempty_ball.1 ⟨z, hz⟩
simpa only [hc, div_self hR.ne', one_mul] using
dist_le_div_mul_dist_of_maps_to_ball hd (by rwa [hc]) hz
#align complex.dist_le_dist_of_maps_to_ball_self Complex.dist_le_dist_of_mapsTo_ball_self
/-- The **Schwarz Lemma**: if `f : ℂ → ℂ` sends an open disk with center `0` to itself, then for
any point `z` of this disk we have `abs (f z) ≤ abs z`. -/
theorem abs_le_abs_of_mapsTo_ball_self (hd : DifferentiableOn ℂ f (ball 0 R))
(h_maps : MapsTo f (ball 0 R) (ball 0 R)) (h₀ : f 0 = 0) (hz : abs z < R) : abs (f z) ≤ abs z :=
by
replace hz : z ∈ ball (0 : ℂ) R; exact mem_ball_zero_iff.2 hz
simpa only [dist_zero_right] using dist_le_dist_of_maps_to_ball_self hd h_maps h₀ hz
#align complex.abs_le_abs_of_maps_to_ball_self Complex.abs_le_abs_of_mapsTo_ball_self
end Complex
|
/* cdf/binomial.c
*
* Copyright (C) 2004 Jason H. Stover.
* Copyright (C) 2004 Giulio Bottazzi
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 3 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <config.h>
#include <math.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_cdf.h>
#include <gsl/gsl_sf_gamma.h>
#include "error.h"
/* Computes the cumulative distribution function for a binomial
random variable. For a binomial random variable X with n trials
and success probability p,
Pr( X <= k ) = Pr( Y >= p )
where Y is a beta random variable with parameters k+1 and n-k.
The binomial distribution has the form,
prob(k) = n!/(k!(n-k)!) * p^k (1-p)^(n-k) for k = 0, 1, ..., n
The cumulated distributions can be expressed in terms of normalized
incomplete beta functions (see Abramowitz & Stegun eq. 26.5.26 and
eq. 6.6.3).
Reference:
W. Feller, "An Introduction to Probability and Its
Applications," volume 1. Wiley, 1968. Exercise 45, page 173,
chapter 6.
*/
#include <gsl/gsl_errno.h>
double
gsl_cdf_binomial_P (const unsigned int k, const double p, const unsigned int n)
{
double P;
double a;
double b;
if (p > 1.0 || p < 0.0)
{
CDF_ERROR ("p < 0 or p > 1", GSL_EDOM);
}
if (k >= n)
{
P = 1.0;
}
else
{
a = (double) k + 1.0;
b = (double) n - k;
P = gsl_cdf_beta_Q (p, a, b);
}
return P;
}
double
gsl_cdf_binomial_Q (const unsigned int k, const double p, const unsigned int n)
{
double Q;
double a;
double b;
if (p > 1.0 || p < 0.0)
{
CDF_ERROR ("p < 0 or p > 1", GSL_EDOM);
}
if (k >= n)
{
Q = 0.0;
}
else
{
a = (double) k + 1.0;
b = (double) n - k;
Q = gsl_cdf_beta_P (p, a, b);
}
return Q;
}
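/* A minimal usage sketch (illustrative, not part of GSL itself): for
   X ~ Binomial(n = 10, p = 0.5), the two tails computed by the functions
   above should sum to one.  Link against GSL, e.g.
   `cc example.c -lgsl -lgslcblas -lm` (compiler flags assumed). */
#include <stdio.h>
#include <gsl/gsl_cdf.h>

int
main (void)
{
  double P = gsl_cdf_binomial_P (4, 0.5, 10);   /* Pr(X <= 4) */
  double Q = gsl_cdf_binomial_Q (4, 0.5, 10);   /* Pr(X > 4)  */

  printf ("P = %g, Q = %g, P + Q = %g\n", P, Q, P + Q);
  return 0;
}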
|
[GOAL]
⊢ NoncompactSpace ℍ
[PROOFSTEP]
refine' ⟨fun h => _⟩
[GOAL]
h : IsCompact univ
⊢ False
[PROOFSTEP]
have : IsCompact (Complex.im ⁻¹' Ioi 0) := isCompact_iff_isCompact_univ.2 h
[GOAL]
h : IsCompact univ
this : IsCompact (Complex.im ⁻¹' Ioi 0)
⊢ False
[PROOFSTEP]
replace := this.isClosed.closure_eq
[GOAL]
h : IsCompact univ
this : closure (Complex.im ⁻¹' Ioi 0) = Complex.im ⁻¹' Ioi 0
⊢ False
[PROOFSTEP]
rw [closure_preimage_im, closure_Ioi, Set.ext_iff] at this
[GOAL]
h : IsCompact univ
this : ∀ (x : ℂ), x ∈ Complex.im ⁻¹' Ici 0 ↔ x ∈ Complex.im ⁻¹' Ioi 0
⊢ False
[PROOFSTEP]
exact absurd ((this 0).1 (@left_mem_Ici ℝ _ 0)) (@lt_irrefl ℝ _ 0)
|
PERFORMANCE BONDING SURETY & INSURANCE BROKERAGE, LP is a business entity registered in the state of New York under the legal form of FOREIGN LIMITED PARTNERSHIP. It can be found in the register under DOS ID 5195433. The company was established and entered into the register on 31st August 2017, and its current status is ACTIVE. The company's jurisdiction is the state of CALIFORNIA; its county is ALBANY. The process address of this entity is 2804 GATEWAY OAKS DRIVE #200, SACRAMENTO, 95833, CALIFORNIA.
NEW QINGTENG ELITE SERVICE INC.
24/7 CRANES & RIGGING INC.
BETA OMEGA CRI CHIROPRACTIC FRATERNITY, INC.
RHOMESTEAD CONSTRUCTION AND PROPERTY INC.
STUART H. MAYER, CPA, P.C.
NEW J & M CLEANERS INC.
HAMBURG SCHOOL CAR SERVICE, INC.
ROY J. MIRRO & ASSOCIATES, P.C.
NEW HING WONG RESTAURANT INC.
111 EAST MAIN STREET REALTY CORP.
BAISLEY ACHIEVERS CIVIC ASSOCIATION, INC.
THE PROVIDENCE CHARITABLE FOUNDATION, INC.
NYC PROPHECY & ME INC.
GRAND COUNCIL, DELTA PHI SIGMA SORORITY, INC.
BUILDING MANAGERS' ASSOCIATION OF ROCHESTER, INC. |
lemma measurable_gfp2_coinduct[consumes 1, case_names continuity step]:
  fixes F :: "('a \<Rightarrow> 'c \<Rightarrow> 'b) \<Rightarrow> ('a \<Rightarrow> 'c \<Rightarrow> 'b::{complete_lattice, countable})"
  assumes "P M s"
  assumes F: "inf_continuous F"
  assumes *: "\<And>M A s. P M s \<Longrightarrow>
    (\<And>N t. P N t \<Longrightarrow> A t \<in> measurable N (count_space UNIV)) \<Longrightarrow>
    F A s \<in> measurable M (count_space UNIV)"
  shows "gfp F s \<in> measurable M (count_space UNIV)"
library(ggplot2)
theme_set(theme_bw(18))
setwd("~/Dropbox/sinking_marbles/sinking-marbles/models/varied_alternatives/results/")
source("rscripts/helpers.r")
load("data/mp.RData")
mp = read.table("data/parsed_results.tsv", quote="", sep="\t", header=T)
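# re-read the parsed results; this overwrites the mp loaded from mp.RData above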
nrow(mp)
summary(mp)
mp$Proportion = mp$NumState / mp$TotalMarbles
summary(mp)
mp$OptTotal = as.factor(paste(mp$SpeakerOptimality,mp$TotalMarbles))
mp$QUDTotal = as.factor(paste(mp$QUD,mp$TotalMarbles))
save(mp, file="data/mp.RData")
isall = subset(mp, QUD=="is-all")
ggplot(isall, aes(x=Proportion,y=PosteriorProbability,shape=as.factor(SpeakerOptimality),color=as.factor(TotalMarbles),group=OptTotal)) +
geom_point() +
geom_line() +
facet_grid(Alternatives~PriorAllProbability)
ggsave("graphs/byproportion-qud-all.pdf",width=25,height=25)
howmany = subset(mp, QUD=="how-many")
ggplot(howmany, aes(x=Proportion,y=PosteriorProbability,shape=as.factor(SpeakerOptimality),color=as.factor(TotalMarbles),group=OptTotal)) +
geom_point() +
geom_line() +
facet_grid(Alternatives~PriorAllProbability)
ggsave("graphs/byproportion-qud-howmany.pdf",width=25,height=25)
ggplot(isall, aes(x=PriorAllProbability,y=PosteriorProbability,shape=as.factor(SpeakerOptimality),color=as.factor(TotalMarbles),group=OptTotal)) +
geom_point() +
geom_line() +
facet_grid(Alternatives~Proportion)
ggsave("graphs/byprior-qud-isall.pdf",width=40,height=15)
ggplot(howmany, aes(x=PriorAllProbability,y=PosteriorProbability,shape=as.factor(SpeakerOptimality),color=as.factor(TotalMarbles),group=OptTotal)) +
geom_point() +
geom_line() +
facet_grid(Alternatives~Proportion)
ggsave("graphs/byprior-qud-howmany.pdf",width=40,height=15)
allisall = subset(mp, Proportion == 1 & QUD == "is-all")
ggplot(allisall, aes(x=PriorAllProbability,y=PosteriorProbability,shape=as.factor(SpeakerOptimality),color=as.factor(TotalMarbles),group=OptTotal)) +
geom_point() +
geom_line() +
facet_wrap(~Alternatives)
ggsave("graphs/allprobability-qud-isall.pdf",width=12)
# Same plot under the "how-many" QUD.
allhowmany = subset(mp, Proportion == 1 & QUD == "how-many")
ggplot(allhowmany, aes(x=PriorAllProbability,y=PosteriorProbability,shape=as.factor(SpeakerOptimality),color=as.factor(TotalMarbles),group=OptTotal)) +
geom_point() +
geom_line() +
facet_wrap(~Alternatives)
ggsave("graphs/allprobability-qud-howmany.pdf",width=12)
# Compare QUDs directly at speaker optimality 1.
all = subset(mp, Proportion == 1 & SpeakerOptimality == 1)
ggplot(all, aes(x=PriorAllProbability,y=PosteriorProbability,shape=QUD,color=as.factor(TotalMarbles),group=QUDTotal)) +
geom_point() +
geom_line() +
facet_wrap(~Alternatives)
ggsave("graphs/allprobability-quds.pdf",width=12)
head(mp)
nrow(mp)
# Extract the how-many/basic-alternatives condition (optimality 2, all of 4 marbles) for reuse.
hmbasic2 = subset(mp, QUD=="how-many" & Alternatives == "basic" & SpeakerOptimality == 2 & Proportion == 1 & TotalMarbles == 4)
nrow(hmbasic2)
summary(hmbasic2)
hmbasic2 = unique(hmbasic2)
hmbasic2 = droplevels(hmbasic2)
row.names(hmbasic2) = as.character(hmbasic2$PriorAllProbability)
head(hmbasic2)
hmbasic2.gprior = hmbasic2
save(hmbasic2.gprior, file="data/hmbasic2.gprior.RData")
|
From Coq Require Import Arith.Arith.
From Coq Require Import Bool.Bool.
Require Export Coq.Strings.String.
From Coq Require Import Logic.FunctionalExtensionality.
From Coq Require Import Lists.List.
Import ListNotations.
Example apply_ex1:forall P Q:Prop,
(P->Q)->P->Q.
Proof.
intros P Q P_imply_Q P_holds.
(*apply P_imply_Q in P_holds.*) (* Forward reasoning: this would turn P_holds into Q. *)
apply P_imply_Q.
apply P_holds.
Qed.
Theorem silly1 : forall (n m o p : nat),
n = m ->
[n;o] = [n;p] ->
[n;o] = [m;p].
Proof.
intros n m o p eq1 eq2.
rewrite <- eq1.
apply eq2. Qed.
Theorem silly2 : forall (n m o p : nat),
n = m ->
(forall (q r : nat), q = r -> [q;o] = [r;p]) ->
[n;o] = [m;p].
Proof.
intros n m o p eq1 eq2.
apply eq2. apply eq1. Qed.
Theorem trans_eq : forall (X:Type) (n m o : X),
n = m -> m = o -> n = o.
Proof.
intros X n m o eq1 eq2. rewrite -> eq1. rewrite -> eq2.
reflexivity. Qed.
Example trans_eq_example' : forall (a b c d e f : nat),
[a;b] = [c;d] ->
[c;d] = [e;f] ->
[a;b] = [e;f].
Proof.
intros a b c d e f eq1 eq2.
apply trans_eq with (m:=[c;d]).
apply eq1. apply eq2. Qed.
Theorem trans_eq_2 : forall (X:Type) (n m o p : X),
n = m -> m = p -> p = o -> n = o.
Proof.
  intros X n m o p eq1 eq2 eq3.
  rewrite eq1. rewrite eq2. apply eq3.
Qed.
Example trans_eq_example'_2 : forall (a b c d e f h i: nat),
[a;b] = [c;d] ->
[c;d] = [e;f] ->
[e;f] = [h;i] ->
[a;b] = [h;i].
Proof.
intros a b c d e f h i eq1 eq2 eq3.
apply trans_eq_2 with (m:=[c;d]) (p:=[e;f]).
apply eq1. apply eq2. apply eq3.
Qed.
(* By writing injection H here, we instruct Coq to use the injectivity
   of constructors to derive all the equations it can deduce from H.
   Each derived equation is added as a premise of the goal;
   in this example, the premise n = m is added. *)
Theorem S_injective' : forall (n m : nat),
S n = S m ->
n = m.
Proof.
intros n m H.
injection H. intros Hnm. apply Hnm.
Qed.
(* The principle of explosion may puzzle you;
   remember that the proof above does not affirm its conclusion,
   but rather shows that if the absurd premise held,
   the absurd conclusion would follow. *)
Theorem discriminate_ex2 : forall (n m : nat),
false = true ->
[n] = [m].
Proof.
intros n m contra. discriminate contra. Qed.
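(* A further example (added for illustration): injection on an equation
   between two-element lists derives the componentwise equations. *)
Theorem injection_ex : forall (n m o p : nat),
  [n;m] = [o;p] ->
  n = o.
Proof.
  intros n m o p H.
  injection H. intros H1 H2. assumption.
Qed.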
|
(* Copyright (c) 2009-2012, Adam Chlipala
*
* This work is licensed under a
* Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
* Unported License.
* The license text is available at:
* http://creativecommons.org/licenses/by-nc-nd/3.0/
*)
(* begin hide *)
Require Import List.
Require Import DepList CpdtTactics.
Set Implicit Arguments.
(* end hide *)
(** printing $ %({}*% #(<a/>*# *)
(** printing ^ %*{})% #*<a/>)# *)
(** %\chapter{Universes and Axioms}% *)
(** Many traditional theorems can be proved in Coq without special knowledge of CIC, the logic behind the prover. A development just seems to be using a particular ASCII notation for standard formulas based on %\index{set theory}%set theory. Nonetheless, as we saw in Chapter 4, CIC differs from set theory in starting from fewer orthogonal primitives. It is possible to define the usual logical connectives as derived notions. The foundation of it all is a dependently typed functional programming language, based on dependent function types and inductive type families. By using the facilities of this language directly, we can accomplish some things much more easily than in mainstream math.
%\index{Gallina}%Gallina, which adds features to the more theoretical CIC%~\cite{CIC}%, is the logic implemented in Coq. It has a relatively simple foundation that can be defined rigorously in a page or two of formal proof rules. Still, there are some important subtleties that have practical ramifications. This chapter focuses on those subtleties, avoiding formal metatheory in favor of example code. *)
(** * The [Type] Hierarchy *)
(** %\index{type hierarchy}%Every object in Gallina has a type. *)
Check 0.
(** %\vspace{-.15in}% [[
0
: nat
]]
It is natural enough that zero be considered as a natural number. *)
Check nat.
(** %\vspace{-.15in}% [[
nat
: Set
]]
From a set theory perspective, it is unsurprising to consider the natural numbers as a "set." *)
Check Set.
(** %\vspace{-.15in}% [[
Set
: Type
]]
The type [Set] may be considered as the set of all sets, a concept that set theory handles in terms of%\index{class (in set theory)}% _classes_. In Coq, this more general notion is [Type]. *)
Check Type.
(** %\vspace{-.15in}% [[
Type
: Type
]]
Strangely enough, [Type] appears to be its own type. It is known that polymorphic languages with this property are inconsistent, via %\index{Girard's paradox}%Girard's paradox%~\cite{GirardsParadox}%. That is, using such a language to encode proofs is unwise, because it is possible to "prove" any proposition. What is really going on here?
Let us repeat some of our queries after toggling a flag related to Coq's printing behavior.%\index{Vernacular commands!Set Printing Universes}% *)
Set Printing Universes.
Check nat.
(** %\vspace{-.15in}% [[
nat
: Set
]]
*)
Check Set.
(** %\vspace{-.15in}% [[
Set
: Type $ (0)+1 ^
]]
*)
Check Type.
(** %\vspace{-.15in}% [[
Type $ Top.3 ^
: Type $ (Top.3)+1 ^
]]
Occurrences of [Type] are annotated with some additional information, inside comments. These annotations have to do with the secret behind [Type]: it really stands for an infinite hierarchy of types. The type of [Set] is [Type(0)], the type of [Type(0)] is [Type(1)], the type of [Type(1)] is [Type(2)], and so on. This is how we avoid the "[Type : Type]" paradox. As a convenience, the universe hierarchy drives Coq's one variety of subtyping. Any term whose type is [Type] at level [i] is automatically also described by [Type] at level [j] when [j > i].
In the output of our second [Check] query, we see that the type level of [Set]'s type is [(0)+1]. Here [0] stands for the level of [Set], and we increment it to arrive at the level that _classifies_ [Set].
In the third query's output, we see that the occurrence of [Type] that we check is assigned a fresh%\index{universe variable}% _universe variable_ [Top.3]. The output type increments [Top.3] to move up a level in the universe hierarchy. As we write code that uses definitions whose types mention universe variables, unification may refine the values of those variables. Luckily, the user rarely has to worry about the details.
Another crucial concept in CIC is%\index{predicativity}% _predicativity_. Consider these queries. *)
Check forall T : nat, fin T.
(** %\vspace{-.15in}% [[
forall T : nat, fin T
: Set
]]
*)
Check forall T : Set, T.
(** %\vspace{-.15in}% [[
forall T : Set, T
: Type $ max(0, (0)+1) ^
]]
*)
Check forall T : Type, T.
(** %\vspace{-.15in}% [[
forall T : Type $ Top.9 ^ , T
: Type $ max(Top.9, (Top.9)+1) ^
]]
These outputs demonstrate the rule for determining which universe a [forall] type lives in. In particular, for a type [forall x : T1, T2], we take the maximum of the universes of [T1] and [T2]. In the first example query, both [T1] ([nat]) and [T2] ([fin T]) are in [Set], so the [forall] type is in [Set], too. In the second query, [T1] is [Set], which is at level [(0)+1]; and [T2] is [T], which is at level [0]. Thus, the [forall] exists at the maximum of these two levels. The third example illustrates the same outcome, where we replace [Set] with an occurrence of [Type] that is assigned universe variable [Top.9]. This universe variable appears in the places where [0] appeared in the previous query.
The behind-the-scenes manipulation of universe variables gives us predicativity. Consider this simple definition of a polymorphic identity function, where the first argument [T] will automatically be marked as implicit, since it can be inferred from the type of the second argument [x]. *)
Definition id (T : Set) (x : T) : T := x.
Check id 0.
(** %\vspace{-.15in}% [[
id 0
: nat
Check id Set.
]]
<<
Error: Illegal application (Type Error):
...
The 1st term has type "Type (* (Top.15)+1 *)"
which should be coercible to "Set".
>>
The parameter [T] of [id] must be instantiated with a [Set]. The type [nat] is a [Set], but [Set] is not. We can try fixing the problem by generalizing our definition of [id]. *)
Reset id.
Definition id (T : Type) (x : T) : T := x.
Check id 0.
(** %\vspace{-.15in}% [[
id 0
: nat
]]
*)
Check id Set.
(** %\vspace{-.15in}% [[
id Set
: Type $ Top.17 ^
]]
*)
Check id Type.
(** %\vspace{-.15in}% [[
id Type $ Top.18 ^
: Type $ Top.19 ^
]]
*)
(** So far so good. As we apply [id] to different [T] values, the inferred index for [T]'s [Type] occurrence automatically moves higher up the type hierarchy.
[[
Check id id.
]]
<<
Error: Universe inconsistency (cannot enforce Top.16 < Top.16).
>>
%\index{universe inconsistency}%This error message reminds us that the universe variable for [T] still exists, even though it is usually hidden. To apply [id] to itself, that variable would need to be less than itself in the type hierarchy. Universe inconsistency error messages announce cases like this one where a term could only type-check by violating an implied constraint over universe variables. Such errors demonstrate that [Type] is _predicative_, where this word has a CIC meaning closely related to its usual mathematical meaning. A predicative system enforces the constraint that, when an object is defined using some sort of quantifier, none of the quantifiers may ever be instantiated with the object itself. %\index{impredicativity}%Impredicativity is associated with popular paradoxes in set theory, involving inconsistent constructions like "the set of all sets that do not contain themselves" (%\index{Russell's paradox}%Russell's paradox). Similar paradoxes would result from uncontrolled impredicativity in Coq. *)
(** ** Inductive Definitions *)
(** Predicativity restrictions also apply to inductive definitions. As an example, let us consider a type of expression trees that allows injection of any native Coq value. The idea is that an [exp T] stands for an encoded expression of type [T].
[[
Inductive exp : Set -> Set :=
| Const : forall T : Set, T -> exp T
| Pair : forall T1 T2, exp T1 -> exp T2 -> exp (T1 * T2)
| Eq : forall T, exp T -> exp T -> exp bool.
]]
<<
Error: Large non-propositional inductive types must be in Type.
>>
This definition is%\index{large inductive types}% _large_ in the sense that at least one of its constructors takes an argument whose type has type [Type]. Coq would be inconsistent if we allowed definitions like this one in their full generality. Instead, we must change [exp] to live in [Type]. We will go even further and move [exp]'s index to [Type] as well. *)
Inductive exp : Type -> Type :=
| Const : forall T, T -> exp T
| Pair : forall T1 T2, exp T1 -> exp T2 -> exp (T1 * T2)
| Eq : forall T, exp T -> exp T -> exp bool.
(** Note that before we had to include an annotation [: Set] for the variable [T] in [Const]'s type, but we need no annotation now. When the type of a variable is not known, and when that variable is used in a context where only types are allowed, Coq infers that the variable is of type [Type], the right behavior here, though it was wrong for the [Set] version of [exp].
Our new definition is accepted. We can build some sample expressions. *)
Check Const 0.
(** %\vspace{-.15in}% [[
Const 0
: exp nat
]]
*)
Check Pair (Const 0) (Const tt).
(** %\vspace{-.15in}% [[
Pair (Const 0) (Const tt)
: exp (nat * unit)
]]
*)
Check Eq (Const Set) (Const Type).
(** %\vspace{-.15in}% [[
Eq (Const Set) (Const Type $ Top.59 ^ )
: exp bool
]]
We can check many expressions, including fancy expressions that include types. However, it is not hard to hit a type-checking wall.
[[
Check Const (Const O).
]]
<<
Error: Universe inconsistency (cannot enforce Top.42 < Top.42).
>>
We are unable to instantiate the parameter [T] of [Const] with an [exp] type. To see why, it is helpful to print the annotated version of [exp]'s inductive definition. *)
(** [[
Print exp.
]]
%\vspace{-.15in}%[[
Inductive exp
: Type $ Top.8 ^ ->
Type
$ max(0, (Top.11)+1, (Top.14)+1, (Top.15)+1, (Top.19)+1) ^ :=
Const : forall T : Type $ Top.11 ^ , T -> exp T
| Pair : forall (T1 : Type $ Top.14 ^ ) (T2 : Type $ Top.15 ^ ),
exp T1 -> exp T2 -> exp (T1 * T2)
| Eq : forall T : Type $ Top.19 ^ , exp T -> exp T -> exp bool
]]
We see that the index type of [exp] has been assigned to universe level [Top.8]. In addition, each of the four occurrences of [Type] in the types of the constructors gets its own universe variable. Each of these variables appears explicitly in the type of [exp]. In particular, any type [exp T] lives at a universe level found by incrementing by one the maximum of the four argument variables. Therefore, [exp] _must_ live at a higher universe level than any type which may be passed to one of its constructors. This consequence led to the universe inconsistency.
Strangely, the universe variable [Top.8] only appears in one place. Is there no restriction imposed on which types are valid arguments to [exp]? In fact, there is a restriction, but it only appears in a global set of universe constraints that are maintained "off to the side," not appearing explicitly in types. We can print the current database.%\index{Vernacular commands!Print Universes}% *)
Print Universes.
(** %\vspace{-.15in}% [[
Top.19 < Top.9 <= Top.8
Top.15 < Top.9 <= Top.8 <= Coq.Init.Datatypes.38
Top.14 < Top.9 <= Top.8 <= Coq.Init.Datatypes.37
Top.11 < Top.9 <= Top.8
]]
The command outputs many more constraints, but we have collected only those that mention [Top] variables. We see one constraint for each universe variable associated with a constructor argument from [exp]'s definition. Universe variable [Top.19] is the type argument to [Eq]. The constraint for [Top.19] effectively says that [Top.19] must be less than [Top.8], the universe of [exp]'s indices; an intermediate variable [Top.9] appears as an artifact of the way the constraint was generated.
The next constraint, for [Top.15], is more complicated. This is the universe of the second argument to the [Pair] constructor. Not only must [Top.15] be less than [Top.8], but it also comes out that [Top.8] must be less than [Coq.Init.Datatypes.38]. What is this new universe variable? It is from the definition of the [prod] inductive family, to which types of the form [A * B] are desugared. *)
(* begin hide *)
(* begin thide *)
Inductive prod := pair.
Reset prod.
(* end thide *)
(* end hide *)
(** %\vspace{-.3in}%[[
Print prod.
]]
%\vspace{-.15in}%[[
Inductive prod (A : Type $ Coq.Init.Datatypes.37 ^ )
(B : Type $ Coq.Init.Datatypes.38 ^ )
: Type $ max(Coq.Init.Datatypes.37, Coq.Init.Datatypes.38) ^ :=
pair : A -> B -> A * B
]]
We see that the constraint is enforcing that indices to [exp] must not live in a higher universe level than [B]-indices to [prod]. The next constraint above establishes a symmetric condition for [A].
Thus it is apparent that Coq maintains a tortuous set of universe variable inequalities behind the scenes. It may look like some functions are polymorphic in the universe levels of their arguments, but what is really happening is imperative updating of a system of constraints, such that all uses of a function are consistent with a global set of universe levels. When the constraint system may not be evolved soundly, we get a universe inconsistency error.
%\medskip%
The annotated definition of [prod] reveals something interesting. A type [prod A B] lives at a universe that is the maximum of the universes of [A] and [B]. From our earlier experiments, we might expect that [prod]'s universe would in fact need to be _one higher_ than the maximum. The critical difference is that, in the definition of [prod], [A] and [B] are defined as _parameters_; that is, they appear named to the left of the main colon, rather than appearing (possibly unnamed) to the right.
Parameters are not as flexible as normal inductive type arguments. The range types of all of the constructors of a parameterized type must share the same parameters. Nonetheless, when it is possible to define a polymorphic type in this way, we gain the ability to use the new type family in more ways, without triggering universe inconsistencies. For instance, nested pairs of types are perfectly legal. *)
Check (nat, (Type, Set)).
(** %\vspace{-.15in}% [[
(nat, (Type $ Top.44 ^ , Set))
: Set * (Type $ Top.45 ^ * Type $ Top.46 ^ )
]]
The same cannot be done with a counterpart to [prod] that does not use parameters. *)
Inductive prod' : Type -> Type -> Type :=
| pair' : forall A B : Type, A -> B -> prod' A B.
(** %\vspace{-.15in}%[[
Check (pair' nat (pair' Type Set)).
]]
<<
Error: Universe inconsistency (cannot enforce Top.51 < Top.51).
>>
The key benefit parameters bring us is the ability to avoid quantifying over types in the types of constructors. Such quantification induces less-than constraints, while parameters only introduce less-than-or-equal-to constraints.
Coq includes one more (potentially confusing) feature related to parameters. While Gallina does not support real %\index{universe polymorphism}%universe polymorphism, there is a convenience facility that mimics universe polymorphism in some cases. We can illustrate what this means with a simple example. *)
Inductive foo (A : Type) : Type :=
| Foo : A -> foo A.
(* begin hide *)
Unset Printing Universes.
(* end hide *)
Check foo nat.
(** %\vspace{-.15in}% [[
foo nat
: Set
]]
*)
Check foo Set.
(** %\vspace{-.15in}% [[
foo Set
: Type
]]
*)
Check foo True.
(** %\vspace{-.15in}% [[
foo True
: Prop
]]
The basic pattern here is that Coq is willing to automatically build a "copied-and-pasted" version of an inductive definition, where some occurrences of [Type] have been replaced by [Set] or [Prop]. In each context, the type-checker tries to find the valid replacements that are lowest in the type hierarchy. Automatic cloning of definitions can be much more convenient than manual cloning. We have already taken advantage of the fact that we may re-use the same families of tuple and list types to form values in [Set] and [Type].
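*)

(** For instance (an added check, following the pattern just described), the same [list] family serves both for ordinary data and for types themselves: *)

Check (1 :: nil).
(** %\vspace{-.15in}% [[
1 :: nil
     : list nat
]]
*)

Check (Set :: nil).
(** %\vspace{-.15in}% [[
Set :: nil
     : list Type
]]
*)

(**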
Imitation polymorphism can be confusing in some contexts. For instance, it is what is responsible for this weird behavior. *)
Inductive bar : Type := Bar : bar.
Check bar.
(** %\vspace{-.15in}% [[
bar
: Prop
]]
The type that Coq comes up with may be used in strictly more contexts than the type one might have expected. *)
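(** For a quick added check, [bar] can indeed appear anywhere a proposition is expected: *)

Check (bar /\ True).
(** %\vspace{-.15in}% [[
bar /\ True
     : Prop
]]
*)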
(** ** Deciphering Baffling Messages About Inability to Unify *)
(** One of the most confusing sorts of Coq error messages arises from an interplay between universes, syntax notations, and %\index{implicit arguments}%implicit arguments. Consider the following innocuous lemma, which is symmetry of equality for the special case of types. *)
Theorem symmetry : forall A B : Type,
A = B
-> B = A.
intros ? ? H; rewrite H; reflexivity.
Qed.
(** Let us attempt an admittedly silly proof of the following theorem. *)
Theorem illustrative_but_silly_detour : unit = unit.
(** %\vspace{-.25in}%[[
apply symmetry.
]]
<<
Error: Impossible to unify "?35 = ?34" with "unit = unit".
>>
Coq tells us that we cannot, in fact, apply our lemma [symmetry] here, but the error message seems defective. In particular, one might think that [apply] should unify [?35] and [?34] with [unit] to ensure that the unification goes through. In fact, the issue is in a part of the unification problem that is _not_ shown to us in this error message!
The following command is the secret to getting better error messages in such cases:%\index{Vernacular commands!Set Printing All}% *)
Set Printing All.
(** %\vspace{-.15in}%[[
apply symmetry.
]]
<<
Error: Impossible to unify "@eq Type ?46 ?45" with "@eq Set unit unit".
>>
Now we can see the problem: it is the first, _implicit_ argument to the underlying equality function [eq] that disagrees across the two terms. The universe [Set] may be both an element and a subtype of [Type], but the two are not definitionally equal. *)
Abort.
(** A variety of changes to the theorem statement would lead to use of [Type] as the implicit argument of [eq]. Here is one such change. *)
Theorem illustrative_but_silly_detour : (unit : Type) = unit.
apply symmetry; reflexivity.
Qed.
(** There are many related issues that can come up with error messages, where one or both of notations and implicit arguments hide important details. The [Set Printing All] command turns off all such features and exposes underlying CIC terms.
For completeness, we mention one other class of confusing error message about inability to unify two terms that look obviously unifiable. Each unification variable has a scope; a unification variable instantiation may not mention variables that were not already defined within that scope, at the point in proof search where the unification variable was introduced. Consider this illustrative example: *)
Unset Printing All.
Theorem ex_symmetry : (exists x, x = 0) -> (exists x, 0 = x).
eexists.
(** %\vspace{-.15in}%[[
H : exists x : nat, x = 0
============================
0 = ?98
]]
*)
destruct H.
(** %\vspace{-.15in}%[[
x : nat
H : x = 0
============================
0 = ?99
]]
*)
(** %\vspace{-.2in}%[[
symmetry; exact H.
]]
<<
Error: In environment
x : nat
H : x = 0
The term "H" has type "x = 0" while it is expected to have type
"?99 = 0".
>>
The problem here is that variable [x] was introduced by [destruct] _after_ we introduced [?99] with [eexists], so the instantiation of [?99] may not mention [x]. A simple reordering of the proof solves the problem. *)
Restart.
destruct 1 as [x]; apply ex_intro with x; symmetry; assumption.
Qed.
(** This restriction for unification variables may seem counterintuitive, but it follows from the fact that CIC contains no concept of unification variable. Rather, to construct the final proof term, at the point in a proof where the unification variable is introduced, we replace it with the instantiation we eventually find for it. It is simply syntactically illegal to refer there to variables that are not in scope. Without such a restriction, we could trivially "prove" such non-theorems as [exists n : nat, forall m : nat, n = m] by [econstructor; intro; reflexivity]. *)
(** * The [Prop] Universe *)
(** In Chapter 4, we saw parallel versions of useful datatypes for "programs" and "proofs." The convention was that programs live in [Set], and proofs live in [Prop]. We gave little explanation for why it is useful to maintain this distinction. There is certainly documentation value from separating programs from proofs; in practice, different concerns apply to building the two types of objects. It turns out, however, that these concerns motivate formal differences between the two universes in Coq.
Recall the types [sig] and [ex], which are the program and proof versions of existential quantification. Their definitions differ only in one place, where [sig] uses [Type] and [ex] uses [Prop]. *)
Print sig.
(** %\vspace{-.15in}% [[
Inductive sig (A : Type) (P : A -> Prop) : Type :=
exist : forall x : A, P x -> sig P
]]
*)
Print ex.
(** %\vspace{-.15in}% [[
Inductive ex (A : Type) (P : A -> Prop) : Prop :=
ex_intro : forall x : A, P x -> ex P
]]
It is natural to want a function to extract the first components of data structures like these. Doing so is easy enough for [sig]. *)
Definition projS A (P : A -> Prop) (x : sig P) : A :=
match x with
| exist v _ => v
end.
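(** As a small added check, projecting from a concrete [sig] package just recovers the witness: *)

Eval compute in projS (exist (fun n => n = 2) 2 (eq_refl 2)).
(** %\vspace{-.15in}% [[
     = 2
     : nat
]]
*)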
(* begin hide *)
(* begin thide *)
Definition projE := O.
(* end thide *)
(* end hide *)
(** We run into trouble with a version that has been changed to work with [ex].
[[
Definition projE A (P : A -> Prop) (x : ex P) : A :=
match x with
| ex_intro v _ => v
end.
]]
<<
Error:
Incorrect elimination of "x" in the inductive type "ex":
the return type has sort "Type" while it should be "Prop".
Elimination of an inductive object of sort Prop
is not allowed on a predicate in sort Type
because proofs can be eliminated only to build proofs.
>>
In formal Coq parlance, %\index{elimination}%"elimination" means "pattern-matching." The typing rules of Gallina forbid us from pattern-matching on a discriminee whose type belongs to [Prop], whenever the result type of the [match] has a type besides [Prop]. This is a sort of "information flow" policy, where the type system ensures that the details of proofs can never have any effect on parts of a development that are not also marked as proofs.
This restriction matches informal practice. We think of programs and proofs as clearly separated, and, outside of constructive logic, the idea of computing with proofs is ill-formed. The distinction also has practical importance in Coq, where it affects the behavior of extraction.
Recall that %\index{program extraction}%extraction is Coq's facility for translating Coq developments into programs in general-purpose programming languages like OCaml. Extraction _erases_ proofs and leaves programs intact. A simple example with [sig] and [ex] demonstrates the distinction. *)
Definition sym_sig (x : sig (fun n => n = 0)) : sig (fun n => 0 = n) :=
match x with
| exist n pf => exist _ n (sym_eq pf)
end.
Extraction sym_sig.
(** <<
(** val sym_sig : nat -> nat **)
let sym_sig x = x
>>
Since extraction erases proofs, the second components of [sig] values are elided, making [sig] a simple identity type family. The [sym_sig] operation is thus an identity function. *)
Definition sym_ex (x : ex (fun n => n = 0)) : ex (fun n => 0 = n) :=
match x with
| ex_intro n pf => ex_intro _ n (sym_eq pf)
end.
Extraction sym_ex.
(** <<
(** val sym_ex : __ **)
let sym_ex = __
>>
In this example, the [ex] type itself is in [Prop], so whole [ex] packages are erased. Coq extracts every proposition as the (Coq-specific) type <<__>>, whose single constructor is <<__>>. Not only are proofs replaced by [__], but proof arguments to functions are also removed completely, as we see here.
Extraction is very helpful as an optimization over programs that contain proofs. In languages like Haskell, advanced features make it possible to program with proofs, as a way of convincing the type checker to accept particular definitions. Unfortunately, when proofs are encoded as values in GADTs%~\cite{GADT}%, these proofs exist at runtime and consume resources. In contrast, with Coq, as long as all proofs are kept within [Prop], extraction is guaranteed to erase them.
Many fans of the %\index{Curry-Howard correspondence}%Curry-Howard correspondence support the idea of _extracting programs from proofs_. In reality, few users of Coq and related tools do any such thing. Instead, extraction is better thought of as an optimization that reduces the runtime costs of expressive typing.
%\medskip%
We have seen two of the differences between proofs and programs: proofs are subject to an elimination restriction and are elided by extraction. The remaining difference is that [Prop] is%\index{impredicativity}% _impredicative_, as this example shows. *)
Check forall P Q : Prop, P \/ Q -> Q \/ P.
(** %\vspace{-.15in}% [[
forall P Q : Prop, P \/ Q -> Q \/ P
: Prop
]]
We see that it is possible to define a [Prop] that quantifies over other [Prop]s. This is fortunate, as we start wanting that ability even for such basic purposes as stating propositional tautologies. In the next section of this chapter, we will see some reasons why unrestricted impredicativity is undesirable. The impredicativity of [Prop] interacts crucially with the elimination restriction to avoid those pitfalls.
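*)

(** As a tiny added illustration, the classic impredicative encoding of truth is itself a [Prop] quantifying over all [Prop]s, including itself: *)

Definition True' : Prop := forall P : Prop, P -> P.
Definition True'_intro : True' := fun (P : Prop) (p : P) => p.

(**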
Impredicativity also allows us to implement a version of our earlier [exp] type that does not suffer from the weakness that we found. *)
Inductive expP : Type -> Prop :=
| ConstP : forall T, T -> expP T
| PairP : forall T1 T2, expP T1 -> expP T2 -> expP (T1 * T2)
| EqP : forall T, expP T -> expP T -> expP bool.
Check ConstP 0.
(** %\vspace{-.15in}% [[
ConstP 0
: expP nat
]]
*)
Check PairP (ConstP 0) (ConstP tt).
(** %\vspace{-.15in}% [[
PairP (ConstP 0) (ConstP tt)
: expP (nat * unit)
]]
*)
Check EqP (ConstP Set) (ConstP Type).
(** %\vspace{-.15in}% [[
EqP (ConstP Set) (ConstP Type)
: expP bool
]]
*)
Check ConstP (ConstP O).
(** %\vspace{-.15in}% [[
ConstP (ConstP 0)
: expP (expP nat)
]]
In this case, our victory is really a shallow one. As we have marked [expP] as a family of proofs, we cannot deconstruct our expressions in the usual programmatic ways, which makes them almost useless for the usual purposes. Impredicative quantification is much more useful in defining inductive families that we really think of as judgments. For instance, this code defines a notion of equality that is strictly more permissive than the base equality [=]. *)
Inductive eqPlus : forall T, T -> T -> Prop :=
| Base : forall T (x : T), eqPlus x x
| Func : forall dom ran (f1 f2 : dom -> ran),
(forall x : dom, eqPlus (f1 x) (f2 x))
-> eqPlus f1 f2.
Check (Base 0).
(** %\vspace{-.15in}% [[
Base 0
: eqPlus 0 0
]]
*)
Check (Func (fun n => n) (fun n => 0 + n) (fun n => Base n)).
(** %\vspace{-.15in}% [[
Func (fun n : nat => n) (fun n : nat => 0 + n) (fun n : nat => Base n)
: eqPlus (fun n : nat => n) (fun n : nat => 0 + n)
]]
*)
Check (Base (Base 1)).
(** %\vspace{-.15in}% [[
Base (Base 1)
: eqPlus (Base 1) (Base 1)
]]
*)
(** Stating equality facts about proofs may seem baroque, but we have already seen its utility in the chapter on reasoning about equality proofs. *)
(** * Axioms *)
(** While the specific logic Gallina is hardcoded into Coq's implementation, it is possible to add certain logical rules in a controlled way. In other words, Coq may be used to reason about many different refinements of Gallina where strictly more theorems are provable. We achieve this by asserting%\index{axioms}% _axioms_ without proof.
We will motivate the idea by touring through some standard axioms, as enumerated in Coq's online FAQ. I will add additional commentary as appropriate. *)
(** ** The Basics *)
(** One simple example of a useful axiom is the %\index{law of the excluded middle}%law of the excluded middle. *)
Require Import Classical_Prop.
Print classic.
(** %\vspace{-.15in}% [[
*** [ classic : forall P : Prop, P \/ ~ P ]
]]
In the implementation of module [Classical_Prop], this axiom was defined with the command%\index{Vernacular commands!Axiom}% *)
Axiom classic : forall P : Prop, P \/ ~ P.
(** An [Axiom] may be declared with any type, in any of the universes. There is a synonym %\index{Vernacular commands!Parameter}%[Parameter] for [Axiom], and that synonym is often clearer for assertions not of type [Prop]. For instance, we can assert the existence of objects with certain properties. *)
Parameter num : nat.
Axiom positive : num > 0.
Reset num.
(** This kind of "axiomatic presentation" of a theory is very common outside of higher-order logic. However, in Coq, it is almost always preferable to stick to defining your objects, functions, and predicates via inductive definitions and functional programming.
In general, there is a significant burden associated with any use of axioms. It is easy to assert a set of axioms that together is%\index{inconsistent axioms}% _inconsistent_. That is, a set of axioms may imply [False], which allows any theorem to be proved, which defeats the purpose of a proof assistant. For example, we could assert the following axiom, which is consistent by itself but inconsistent when combined with [classic]. *)
Axiom not_classic : ~ forall P : Prop, P \/ ~ P.
Theorem uhoh : False.
generalize classic not_classic; tauto.
Qed.
Theorem uhoh_again : 1 + 1 = 3.
destruct uhoh.
Qed.
Reset not_classic.
(** On the subject of the law of the excluded middle itself, this axiom is usually quite harmless, and many practical Coq developments assume it. It has been proved metatheoretically to be consistent with CIC. Here, "proved metatheoretically" means that someone proved on paper that excluded middle holds in a _model_ of CIC in set theory%~\cite{SetsInTypes}%. All of the other axioms that we will survey in this section hold in the same model, so they are all consistent together.
Recall that Coq implements%\index{constructive logic}% _constructive_ logic by default, where the law of the excluded middle is not provable. Proofs in constructive logic can be thought of as programs. A [forall] quantifier denotes a dependent function type, and a disjunction denotes a variant type. In such a setting, excluded middle could be interpreted as a decision procedure for arbitrary propositions, which computability theory tells us cannot exist. Thus, constructive logic with excluded middle can no longer be associated with our usual notion of programming.
Given all this, why is it all right to assert excluded middle as an axiom? The intuitive justification is that the elimination restriction for [Prop] prevents us from treating proofs as programs. An excluded middle axiom that quantified over [Set] instead of [Prop] _would_ be problematic. If a development used that axiom, we would not be able to extract the code to OCaml (soundly) without implementing a genuine universal decision procedure. In contrast, values whose types belong to [Prop] are always erased by extraction, so we sidestep the axiom's algorithmic consequences.
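*)

(** To make the contrast concrete (a small added sketch), the problematic variant would replace [Prop] with [Set], turning the disjunction into an informative one. Merely writing down the statement is harmless; asserting it as an axiom would amount to postulating a universal decision procedure: *)

Definition excluded_middle_Set := forall P : Set, P + (P -> False).

(**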
Because the proper use of axioms is so precarious, there are helpful commands for determining which axioms a theorem relies on.%\index{Vernacular commands!Print Assumptions}% *)
Theorem t1 : forall P : Prop, P -> ~ ~ P.
tauto.
Qed.
Print Assumptions t1.
(** <<
Closed under the global context
>>
*)
Theorem t2 : forall P : Prop, ~ ~ P -> P.
(** %\vspace{-.25in}%[[
tauto.
]]
<<
Error: tauto failed.
>>
*)
intro P; destruct (classic P); tauto.
Qed.
Print Assumptions t2.
(** %\vspace{-.15in}% [[
Axioms:
classic : forall P : Prop, P \/ ~ P
]]
It is possible to avoid this dependence in some specific cases, where excluded middle _is_ provable, for decidable families of propositions. *)
Theorem nat_eq_dec : forall n m : nat, n = m \/ n <> m.
induction n; destruct m; intuition; generalize (IHn m); intuition.
Qed.
Theorem t2' : forall n m : nat, ~ ~ (n = m) -> n = m.
intros n m; destruct (nat_eq_dec n m); tauto.
Qed.
Print Assumptions t2'.
(** <<
Closed under the global context
>>
%\bigskip%
Mainstream mathematical practice assumes excluded middle, so it can be useful to have it available in Coq developments, though it is also nice to know that a theorem is proved in a simpler formal system than classical logic. There is a similar story for%\index{proof irrelevance}% _proof irrelevance_, which simplifies proof issues that would not even arise in mainstream math. *)
Require Import ProofIrrelevance.
Print proof_irrelevance.
(** %\vspace{-.15in}% [[
*** [ proof_irrelevance : forall (P : Prop) (p1 p2 : P), p1 = p2 ]
]]
This axiom asserts that any two proofs of the same proposition are equal. Recall this example function from Chapter 6. *)
(* begin hide *)
Lemma zgtz : 0 > 0 -> False.
crush.
Qed.
(* end hide *)
Definition pred_strong1 (n : nat) : n > 0 -> nat :=
match n with
| O => fun pf : 0 > 0 => match zgtz pf with end
| S n' => fun _ => n'
end.
(** We might want to prove that different proofs of [n > 0] do not lead to different results from our richly typed predecessor function. *)
Theorem pred_strong1_irrel : forall n (pf1 pf2 : n > 0), pred_strong1 pf1 = pred_strong1 pf2.
destruct n; crush.
Qed.
(** The proof script is simple, but it involved peeking into the definition of [pred_strong1]. For more complicated function definitions, it can be considerably more work to prove that they do not discriminate on details of proof arguments. This can seem like a shame, since the [Prop] elimination restriction makes it impossible to write any function that does otherwise. Unfortunately, this fact is only true metatheoretically, unless we assert an axiom like [proof_irrelevance]. With that axiom, we can prove our theorem without consulting the definition of [pred_strong1]. *)
Theorem pred_strong1_irrel' : forall n (pf1 pf2 : n > 0), pred_strong1 pf1 = pred_strong1 pf2.
intros; f_equal; apply proof_irrelevance.
Qed.
(** %\bigskip%
In the chapter on equality, we already discussed some axioms that are related to proof irrelevance. In particular, Coq's standard library includes this axiom: *)
Require Import Eqdep.
Import Eq_rect_eq.
Print eq_rect_eq.
(** %\vspace{-.15in}% [[
*** [ eq_rect_eq :
forall (U : Type) (p : U) (Q : U -> Type) (x : Q p) (h : p = p),
x = eq_rect p Q x p h ]
]]
This axiom says that it is permissible to simplify pattern matches over proofs of equalities like [e = e]. The axiom is logically equivalent to some simpler corollaries. In the theorem names, "UIP" stands for %\index{unicity of identity proofs}%"unicity of identity proofs", where "identity" is a synonym for "equality." *)
Corollary UIP_refl : forall A (x : A) (pf : x = x), pf = eq_refl x.
intros; replace pf with (eq_rect x (eq x) (eq_refl x) x pf); [
symmetry; apply eq_rect_eq
| exact (match pf as pf' return match pf' in _ = y return x = y with
| eq_refl => eq_refl x
end = pf' with
| eq_refl => eq_refl _
end) ].
Qed.
Corollary UIP : forall A (x y : A) (pf1 pf2 : x = y), pf1 = pf2.
intros; generalize pf1 pf2; subst; intros;
match goal with
| [ |- ?pf1 = ?pf2 ] => rewrite (UIP_refl pf1); rewrite (UIP_refl pf2); reflexivity
end.
Qed.
(* begin hide *)
(* begin thide *)
Require Eqdep_dec.
(* end thide *)
(* end hide *)
(** These corollaries are special cases of proof irrelevance. In developments that only need proof irrelevance for equality, there is no need to assert full irrelevance.
Another facet of proof irrelevance is that, like excluded middle, it is often provable for specific propositions. For instance, [UIP] is provable whenever the type [A] has a decidable equality operation. The module [Eqdep_dec] of the standard library contains a proof. A similar phenomenon applies to other notable cases, including less-than proofs. Thus, it is often possible to use proof irrelevance without asserting axioms.
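*)

(** For example (an added demonstration), [Eqdep_dec.UIP_dec] derives [UIP] for [nat] from decidability of equality on [nat], with no axioms involved: *)

Require Import Arith.

Lemma UIP_nat : forall (n m : nat) (pf1 pf2 : n = m), pf1 = pf2.
  intros; apply Eqdep_dec.UIP_dec; apply eq_nat_dec.
Qed.

(**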
%\bigskip%
There are two more basic axioms that are often assumed, to avoid complications that do not arise in set theory. *)
Require Import FunctionalExtensionality.
Print functional_extensionality_dep.
(** %\vspace{-.15in}% [[
*** [ functional_extensionality_dep :
forall (A : Type) (B : A -> Type) (f g : forall x : A, B x),
(forall x : A, f x = g x) -> f = g ]
]]
This axiom says that two functions are equal if they map equal inputs to equal outputs. Such facts are not provable in general in CIC, but it is consistent to assume that they are.
A simple corollary shows that the same property applies to predicates. *)
Corollary predicate_extensionality : forall (A : Type) (B : A -> Prop) (f g : forall x : A, B x),
(forall x : A, f x = g x) -> f = g.
intros; apply functional_extensionality_dep; assumption.
Qed.
(** In some cases, one might prefer to assert this corollary as the axiom, to restrict the consequences to proofs and not programs. *)
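(** As a quick added demonstration, extensionality proves pointwise-equal functions equal, a fact that is not provable in plain Gallina: *)

Lemma ext_example : (fun n : nat => n + 0) = (fun n : nat => n).
  apply functional_extensionality; intro n; symmetry; apply plus_n_O.
Qed.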
(** ** Axioms of Choice *)
(** Some Coq axioms are also points of contention in mainstream math. The most prominent example is the %\index{axiom of choice}%axiom of choice. In fact, there are multiple versions that we might consider, and, considered in isolation, none of these versions means quite what it means in classical set theory.
First, it is possible to implement a choice operator _without_ axioms in some potentially surprising cases. *)
Require Import ConstructiveEpsilon.
Check constructive_definite_description.
(** %\vspace{-.15in}% [[
constructive_definite_description
: forall (A : Set) (f : A -> nat) (g : nat -> A),
(forall x : A, g (f x) = x) ->
forall P : A -> Prop,
(forall x : A, {P x} + { ~ P x}) ->
(exists! x : A, P x) -> {x : A | P x}
]]
*)
Print Assumptions constructive_definite_description.
(** <<
Closed under the global context
>>
This function transforms a decidable predicate [P] into a function that produces an element satisfying [P] from a proof that such an element exists. The functions [f] and [g], in conjunction with an associated injectivity property, are used to express the idea that the set [A] is countable. Under these conditions, a simple brute force algorithm gets the job done: we just enumerate all elements of [A], stopping when we find one satisfying [P]. The existence proof, specified in terms of _unique_ existence [exists!], guarantees termination. The definition of this operator in Coq uses some interesting techniques, as seen in the implementation of the [ConstructiveEpsilon] module.
Countable choice is provable in set theory without appealing to the general axiom of choice. To support the more general principle in Coq, we must also add an axiom. Here is a functional version of the axiom of unique choice. *)
Require Import ClassicalUniqueChoice.
Check dependent_unique_choice.
(** %\vspace{-.15in}% [[
dependent_unique_choice
: forall (A : Type) (B : A -> Type) (R : forall x : A, B x -> Prop),
(forall x : A, exists! y : B x, R x y) ->
exists f : forall x : A, B x,
forall x : A, R x (f x)
]]
This axiom lets us convert a relational specification [R] into a function implementing that specification. We need only prove that [R] is truly a function. An alternate, stronger formulation applies to cases where [R] maps each input to one or more outputs. We also simplify the statement of the theorem by considering only non-dependent function types. *)
(* begin hide *)
(* begin thide *)
Require RelationalChoice.
(* end thide *)
(* end hide *)
Require Import ClassicalChoice.
Check choice.
(** %\vspace{-.15in}% [[
choice
: forall (A B : Type) (R : A -> B -> Prop),
(forall x : A, exists y : B, R x y) ->
exists f : A -> B, forall x : A, R x (f x)
]]
This principle is proved as a theorem, based on the unique choice axiom and an additional axiom of relational choice from the [RelationalChoice] module.
In set theory, the axiom of choice is a fundamental philosophical commitment one makes about the universe of sets. In Coq, the choice axioms say something weaker. For instance, consider the simple restatement of the [choice] axiom where we replace existential quantification by its Curry-Howard analogue, subset types. *)
Definition choice_Set (A B : Type) (R : A -> B -> Prop) (H : forall x : A, {y : B | R x y})
: {f : A -> B | forall x : A, R x (f x)} :=
exist (fun f => forall x : A, R x (f x))
(fun x => proj1_sig (H x)) (fun x => proj2_sig (H x)).
(** %\smallskip{}%Via the Curry-Howard correspondence, this "axiom" can be taken to have the same meaning as the original. It is implemented trivially as a transformation not much deeper than uncurrying. Thus, we see that the utility of the axioms that we mentioned earlier comes in their usage to build programs from proofs. Normal set theory has no explicit proofs, so the meaning of the usual axiom of choice is subtly different. In Gallina, the axioms implement a controlled relaxation of the restrictions on information flow from proofs to programs.
However, when we combine an axiom of choice with the law of the excluded middle, the idea of "choice" becomes more interesting. Excluded middle gives us a highly non-computational way of constructing proofs, but it does not change the computational nature of programs. Thus, the axiom of choice is still giving us a way of translating between two different sorts of "programs," but the input programs (which are proofs) may be written in a rich language that goes beyond normal computability. This combination truly is more than repackaging a function with a different type.
%\bigskip%
The Coq tools support a command-line flag %\index{impredicative Set}%<<-impredicative-set>>, which modifies Gallina in a more fundamental way by making [Set] impredicative. A term like [forall T : Set, T] has type [Set], and inductive definitions in [Set] may have constructors that quantify over arguments of any types. To maintain consistency, an elimination restriction must be imposed, similarly to the restriction for [Prop]. The restriction only applies to large inductive types, where some constructor quantifies over a type of type [Type]. In such cases, a value in this inductive type may only be pattern-matched over to yield a result type whose type is [Set] or [Prop]. This rule contrasts with the rule for [Prop], where the restriction applies even to non-large inductive types, and where the result type may only have type [Prop].
In old versions of Coq, [Set] was impredicative by default. Later versions make [Set] predicative to avoid inconsistency with some classical axioms. In particular, one should watch out when using impredicative [Set] with axioms of choice. In combination with excluded middle or predicate extensionality, inconsistency can result. Impredicative [Set] can be useful for modeling inherently impredicative mathematical concepts, but almost all Coq developments get by fine without it. *)
(** ** Axioms and Computation *)
(** One additional axiom-related wrinkle arises from an aspect of Gallina that is very different from set theory: a notion of _computational equivalence_ is central to the definition of the formal system. Axioms tend not to play well with computation. Consider this example. We start by implementing a function that uses a type equality proof to perform a safe type-cast. *)
Definition cast (x y : Set) (pf : x = y) (v : x) : y :=
match pf with
| eq_refl => v
end.
(** Computation over programs that use [cast] can proceed smoothly. *)
Eval compute in (cast (eq_refl (nat -> nat)) (fun n => S n)) 12.
(** %\vspace{-.15in}%[[
= 13
: nat
]]
*)
(** Things do not go as smoothly when we use [cast] with proofs that rely on axioms. *)
Theorem t3 : (forall n : nat, fin (S n)) = (forall n : nat, fin (n + 1)).
change ((forall n : nat, (fun n => fin (S n)) n) = (forall n : nat, (fun n => fin (n + 1)) n));
rewrite (functional_extensionality (fun n => fin (n + 1)) (fun n => fin (S n))); crush.
Qed.
Eval compute in (cast t3 (fun _ => First)) 12.
(** %\vspace{-.15in}%[[
= match t3 in (_ = P) return P with
| eq_refl => fun n : nat => First
end 12
: fin (12 + 1)
]]
Computation gets stuck in a pattern-match on the proof [t3]. The structure of [t3] is not known, so the match cannot proceed. It turns out a more basic problem leads to this particular situation. We ended the proof of [t3] with [Qed], so the definition of [t3] is not available to computation. That mistake is easily fixed. *)
Reset t3.
Theorem t3 : (forall n : nat, fin (S n)) = (forall n : nat, fin (n + 1)).
change ((forall n : nat, (fun n => fin (S n)) n) = (forall n : nat, (fun n => fin (n + 1)) n));
rewrite (functional_extensionality (fun n => fin (n + 1)) (fun n => fin (S n))); crush.
Defined.
Eval compute in (cast t3 (fun _ => First)) 12.
(** %\vspace{-.15in}%[[
= match
match
match
functional_extensionality
....
]]
We elide most of the details. A very unwieldy tree of nested matches on equality proofs appears. This time evaluation really _is_ stuck on a use of an axiom.
If we are careful in using tactics to prove an equality, we can still compute with casts over the proof. *)
Lemma plus1 : forall n, S n = n + 1.
induction n; simpl; intuition.
Defined.
Theorem t4 : forall n, fin (S n) = fin (n + 1).
intro; f_equal; apply plus1.
Defined.
Eval compute in cast (t4 13) First.
(** %\vspace{-.15in}% [[
= First
: fin (13 + 1)
]]
This simple computational reduction hides the use of a recursive function to produce a suitable [eq_refl] proof term. The recursion originates in our use of [induction] in [t4]'s proof. *)
(** ** Methods for Avoiding Axioms *)
(** The last section demonstrated one reason to avoid axioms: they interfere with computational behavior of terms. A further reason is to reduce the philosophical commitment of a theorem. The more axioms one assumes, the harder it becomes to convince oneself that the formal system corresponds appropriately to one's intuitions. A refinement of this last point, in applications like %\index{proof-carrying code}%proof-carrying code%~\cite{PCC}% in computer security, has to do with minimizing the size of a%\index{trusted code base}% _trusted code base_. To convince ourselves that a theorem is true, we must convince ourselves of the correctness of the program that checks the theorem. Axioms effectively become new source code for the checking program, increasing the effort required to perform a correctness audit.
An earlier section gave one example of avoiding an axiom. We proved that [pred_strong1] is agnostic to details of the proofs passed to it as arguments, by unfolding the definition of the function. A "simpler" proof keeps the function definition opaque and instead applies a proof irrelevance axiom. By accepting a more complex proof, we reduce our philosophical commitment and trusted base. (By the way, the less-than relation that the proofs in question here prove turns out to admit proof irrelevance as a theorem provable within normal Gallina!)
One dark secret of the [dep_destruct] tactic that we have used several times is reliance on an axiom. Consider this simple case analysis principle for [fin] values: *)
Theorem fin_cases : forall n (f : fin (S n)), f = First \/ exists f', f = Next f'.
intros; dep_destruct f; eauto.
Qed.
(* begin hide *)
Require Import JMeq.
(* begin thide *)
Definition jme := (JMeq, JMeq_eq).
(* end thide *)
(* end hide *)
Print Assumptions fin_cases.
(** %\vspace{-.15in}%[[
Axioms:
JMeq_eq : forall (A : Type) (x y : A), JMeq x y -> x = y
]]
The proof depends on the [JMeq_eq] axiom that we met in the chapter on equality proofs. However, a smarter tactic could have avoided an axiom dependence. Here is an alternate proof via a slightly strange looking lemma. *)
(* begin thide *)
Lemma fin_cases_again' : forall n (f : fin n),
match n return fin n -> Prop with
| O => fun _ => False
| S n' => fun f => f = First \/ exists f', f = Next f'
end f.
destruct f; eauto.
Qed.
(** We apply a variant of the %\index{convoy pattern}%convoy pattern, which we are used to seeing in function implementations. Here, the pattern helps us state a lemma in a form where the argument to [fin] is a variable. Recall that, thanks to basic typing rules for pattern-matching, [destruct] will only work effectively on types whose non-parameter arguments are variables. The %\index{tactics!exact}%[exact] tactic, which takes as argument a literal proof term, now gives us an easy way of proving the original theorem. *)
Theorem fin_cases_again : forall n (f : fin (S n)), f = First \/ exists f', f = Next f'.
intros; exact (fin_cases_again' f).
Qed.
(* end thide *)
Print Assumptions fin_cases_again.
(** %\vspace{-.15in}%
<<
Closed under the global context
>>
*)
(* begin thide *)
(** As the Curry-Howard correspondence might lead us to expect, the same pattern may be applied in programming as in proving. Axioms are relevant in programming, too, because, while Coq includes useful extensions like [Program] that make dependently typed programming more straightforward, in general these extensions generate code that relies on axioms about equality. We can use clever pattern matching to write our code axiom-free.
As an example, consider a [Set] version of [fin_cases]. We use [Set] types instead of [Prop] types, so that return values have computational content and may be used to guide the behavior of algorithms. Besides that, we are essentially writing the same "proof" in a more explicit way. *)
Definition finOut n (f : fin n) : match n return fin n -> Type with
| O => fun _ => Empty_set
| _ => fun f => {f' : _ | f = Next f'} + {f = First}
end f :=
match f with
| First _ => inright _ (eq_refl _)
| Next _ f' => inleft _ (exist _ f' (eq_refl _))
end.
(* end thide *)
(** As another example, consider the following type of formulas in first-order logic. The intent of the type definition will not be important in what follows, but we give a quick intuition for the curious reader. Our formulas may include [forall] quantification over arbitrary [Type]s, and we index formulas by environments telling which variables are in scope and what their types are; such an environment is a [list Type]. A constructor [Inject] lets us include any Coq [Prop] as a formula, and [VarEq] and [Lift] can be used for variable references, in what is essentially the de Bruijn index convention. (Again, the detail in this paragraph is not important to understand the discussion that follows!) *)
Inductive formula : list Type -> Type :=
| Inject : forall Ts, Prop -> formula Ts
| VarEq : forall T Ts, T -> formula (T :: Ts)
| Lift : forall T Ts, formula Ts -> formula (T :: Ts)
| Forall : forall T Ts, formula (T :: Ts) -> formula Ts
| And : forall Ts, formula Ts -> formula Ts -> formula Ts.
(** This example is based on my own experiences implementing variants of a program logic called XCAP%~\cite{XCAP}%, which also includes an inductive predicate for characterizing which formulas are provable. Here I include a pared-down version of such a predicate, with only two constructors, which is sufficient to illustrate certain tricky issues. *)
Inductive proof : formula nil -> Prop :=
| PInject : forall (P : Prop), P -> proof (Inject nil P)
| PAnd : forall p q, proof p -> proof q -> proof (And p q).
(** Let us prove a lemma showing that a "[P /\ Q -> P]" rule is derivable within the rules of [proof]. *)
Theorem proj1 : forall p q, proof (And p q) -> proof p.
destruct 1.
(** %\vspace{-.15in}%[[
p : formula nil
q : formula nil
P : Prop
H : P
============================
proof p
]]
*)
(** We are reminded that [induction] and [destruct] do not work effectively on types with non-variable arguments. The first subgoal, shown above, is clearly unprovable. (Consider the case where [p = Inject nil False].)
An application of the %\index{tactics!dependent destruction}%[dependent destruction] tactic (the basis for [dep_destruct]) solves the problem handily. We use a shorthand with the %\index{tactics!intros}%[intros] tactic that lets us use question marks for variable names that do not matter. *)
Restart.
Require Import Program.
intros ? ? H; dependent destruction H; auto.
Qed.
Print Assumptions proj1.
(** %\vspace{-.15in}%[[
Axioms:
eq_rect_eq : forall (U : Type) (p : U) (Q : U -> Type) (x : Q p) (h : p = p),
x = eq_rect p Q x p h
]]
Unfortunately, that built-in tactic appeals to an axiom. It is still possible to avoid axioms by giving the proof via another odd-looking lemma. Here is a first attempt that fails at remaining axiom-free, using a common equality-based trick for supporting induction on non-variable arguments to type families. The trick works fine without axioms for datatypes more traditional than [formula], but we run into trouble with our current type. *)
Lemma proj1_again' : forall r, proof r
-> forall p q, r = And p q -> proof p.
destruct 1; crush.
(** %\vspace{-.15in}%[[
H0 : Inject [] P = And p q
============================
proof p
]]
The first goal looks reasonable. Hypothesis [H0] is clearly contradictory, as [discriminate] can show. *)
discriminate.
(** %\vspace{-.15in}%[[
H : proof p
H1 : And p q = And p0 q0
============================
proof p0
]]
It looks like we are almost done. Hypothesis [H1] gives [p = p0] by injectivity of constructors, and then [H] finishes the case. *)
injection H1; intros.
(* begin hide *)
(* begin thide *)
Definition existT' := existT.
(* end thide *)
(* end hide *)
(** Unfortunately, the "equality" that we expected between [p] and [p0] comes in a strange form:
[[
H3 : existT (fun Ts : list Type => formula Ts) []%list p =
existT (fun Ts : list Type => formula Ts) []%list p0
============================
proof p0
]]
It may take a bit of tinkering, but, reviewing Chapter 3's discussion of writing injection principles manually, it makes sense that an [existT] type is the most direct way to express the output of [injection] on a dependently typed constructor. The constructor [And] is dependently typed, since it takes a parameter [Ts] upon which the types of [p] and [q] depend. Let us not dwell further here on why this goal appears; the reader may like to attempt the (impossible) exercise of building a better injection lemma for [And], without using axioms.
How exactly does an axiom come into the picture here? Let us ask [crush] to finish the proof. *)
crush.
Qed.
Print Assumptions proj1_again'.
(** %\vspace{-.15in}%[[
Axioms:
eq_rect_eq : forall (U : Type) (p : U) (Q : U -> Type) (x : Q p) (h : p = p),
x = eq_rect p Q x p h
]]
It turns out that this familiar axiom about equality (or some other axiom) is required to deduce [p = p0] from the hypothesis [H3] above. The soundness of that proof step is neither provable nor disprovable in Gallina.
Hope is not lost, however. We can produce an even stranger looking lemma, which gives us the theorem without axioms. As always when we want to do case analysis on a term with a tricky dependent type, the key is to refactor the theorem statement so that every term we [match] on has _variables_ as its type indices; so instead of talking about proofs of [And p q], we talk about proofs of an arbitrary [r], but we only conclude anything interesting when [r] is an [And]. *)
Lemma proj1_again'' : forall r, proof r
-> match r with
| And Ps p _ => match Ps return formula Ps -> Prop with
| nil => fun p => proof p
| _ => fun _ => True
end p
| _ => True
end.
destruct 1; auto.
Qed.
Theorem proj1_again : forall p q, proof (And p q) -> proof p.
intros ? ? H; exact (proj1_again'' H).
Qed.
Print Assumptions proj1_again.
(** <<
Closed under the global context
>>
This example illustrates again how some of the same design patterns we learned for dependently typed programming can be used fruitfully in theorem statements.
%\medskip%
To close the chapter, we consider one final way to avoid dependence on axioms. Often this task is equivalent to writing definitions such that they _compute_. That is, we want Coq's normal reduction to be able to run certain programs to completion. Here is a simple example where such computation can get stuck. In proving properties of such functions, we would need to apply axioms like %\index{axiom K}%K manually to make progress.
Imagine we are working with %\index{deep embedding}%deeply embedded syntax of some programming language, where each term is considered to be in the scope of a number of free variables that hold normal Coq values. To enforce proper typing, we will need to model a Coq typing environment somehow. One natural choice is as a list of types, where variable number [i] will be treated as a reference to the [i]th element of the list. *)
Section withTypes.
Variable types : list Set.
(** To give the semantics of terms, we will need to represent value environments, which assign each variable a term of the proper type. *)
Variable values : hlist (fun x : Set => x) types.
(** Now imagine that we are writing some procedure that operates on a distinguished variable of type [nat]. A hypothesis formalizes this assumption, using the standard library function [nth_error] for looking up list elements by position. *)
Variable natIndex : nat.
Variable natIndex_ok : nth_error types natIndex = Some nat.
(** It is not hard to use this hypothesis to write a function for extracting the [nat] value in position [natIndex] of [values], starting with two helpful lemmas, each of which we finish with [Defined] to mark the lemma as transparent, so that its definition may be expanded during evaluation. *)
Lemma nth_error_nil : forall A n x,
nth_error (@nil A) n = Some x
-> False.
destruct n; simpl; unfold error; congruence.
Defined.
Implicit Arguments nth_error_nil [A n x].
Lemma Some_inj : forall A (x y : A),
Some x = Some y
-> x = y.
congruence.
Defined.
Fixpoint getNat (types' : list Set) (values' : hlist (fun x : Set => x) types')
(natIndex : nat) : (nth_error types' natIndex = Some nat) -> nat :=
match values' with
| HNil => fun pf => match nth_error_nil pf with end
| HCons t ts x values'' =>
match natIndex return nth_error (t :: ts) natIndex = Some nat -> nat with
| O => fun pf =>
match Some_inj pf in _ = T return T with
| eq_refl => x
end
| S natIndex' => getNat values'' natIndex'
end
end.
End withTypes.
(** The problem becomes apparent when we experiment with running [getNat] on a concrete [types] list. *)
Definition myTypes := unit :: nat :: bool :: nil.
Definition myValues : hlist (fun x : Set => x) myTypes :=
tt ::: 3 ::: false ::: HNil.
Definition myNatIndex := 1.
Theorem myNatIndex_ok : nth_error myTypes myNatIndex = Some nat.
reflexivity.
Defined.
Eval compute in getNat myValues myNatIndex myNatIndex_ok.
(** %\vspace{-.15in}%[[
= 3
]]
We have not hit the problem yet, since we proceeded with a concrete equality proof for [myNatIndex_ok]. However, consider a case where we want to reason about the behavior of [getNat] _independently_ of a specific proof. *)
Theorem getNat_is_reasonable : forall pf, getNat myValues myNatIndex pf = 3.
intro; compute.
(**
<<
1 subgoal
>>
%\vspace{-.3in}%[[
pf : nth_error myTypes myNatIndex = Some nat
============================
match
match
pf in (_ = y)
return (nat = match y with
| Some H => H
| None => nat
end)
with
| eq_refl => eq_refl
end in (_ = T) return T
with
| eq_refl => 3
end = 3
]]
Since the details of the equality proof [pf] are not known, computation can proceed no further. A rewrite with axiom K would allow us to make progress, but we can rethink the definitions a bit to avoid depending on axioms. *)
Abort.
(** Here is a definition of a function that turns out to be useful, though no doubt its purpose will be mysterious for now. A call [update ls n x] overwrites the [n]th position of the list [ls] with the value [x], padding the end of the list with extra [x] values as needed to ensure sufficient length. *)
Fixpoint copies A (x : A) (n : nat) : list A :=
match n with
| O => nil
| S n' => x :: copies x n'
end.
Fixpoint update A (ls : list A) (n : nat) (x : A) : list A :=
match ls with
| nil => copies x n ++ x :: nil
| y :: ls' => match n with
| O => x :: ls'
| S n' => y :: update ls' n' x
end
end.
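(** To see [update] in action, here are two quick tests (my own, not from the original text). The first overwrites an existing position and should yield [1 :: 7 :: 3 :: nil]; the second indexes past the end of the list and should yield [1 :: 2 :: 9 :: 9 :: 9 :: nil]. *)

Eval compute in update (1 :: 2 :: 3 :: nil) 1 7.
Eval compute in update (1 :: 2 :: nil) 4 9.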
(** Now let us revisit the definition of [getNat]. *)
Section withTypes'.
Variable types : list Set.
Variable natIndex : nat.
(** Here is the trick: instead of asserting properties about the list [types], we build a "new" list that is _guaranteed by construction_ to have those properties. *)
Definition types' := update types natIndex nat.
Variable values : hlist (fun x : Set => x) types'.
(** Now a bit of dependent pattern matching helps us rewrite [getNat] in a way that avoids any use of equality proofs. *)
Fixpoint skipCopies (n : nat)
: hlist (fun x : Set => x) (copies nat n ++ nat :: nil) -> nat :=
match n with
| O => fun vs => hhd vs
| S n' => fun vs => skipCopies n' (htl vs)
end.
Fixpoint getNat' (types'' : list Set) (natIndex : nat)
: hlist (fun x : Set => x) (update types'' natIndex nat) -> nat :=
match types'' with
| nil => skipCopies natIndex
| t :: types0 =>
match natIndex return hlist (fun x : Set => x)
(update (t :: types0) natIndex nat) -> nat with
| O => fun vs => hhd vs
| S natIndex' => fun vs => getNat' types0 natIndex' (htl vs)
end
end.
End withTypes'.
(** Now the surprise comes in how easy it is to _use_ [getNat']. While typing works by modification of a types list, we can choose parameters so that the modification has no effect. *)
Theorem getNat_is_reasonable : getNat' myTypes myNatIndex myValues = 3.
reflexivity.
Qed.
(** The same parameters as before work without alteration, and we avoid use of axioms. *)
|
% !TeX root = ../main.tex
\chapter{Future work}
\label{ch:futureWork}
% data preprocessing (already implemented); just testing
% boosting |
import tactic
-- import order (not needed)
universe u
class modular_lattice (α : Type u) extends lattice α :=
(modular_law : ∀ (x u v : α), x ≤ u → u ⊓ (v ⊔ x) = (u ⊓ v) ⊔ x)
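-- A quick sanity check (my own, not part of the original file): instantiating
-- u := x in the modular law, the hypothesis x ≤ u holds by reflexivity.
example {α : Type u} [modular_lattice α] (x v : α) :
  x ⊓ (v ⊔ x) = (x ⊓ v) ⊔ x :=
modular_lattice.modular_law x x v (le_refl x)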
theorem modular_lattice_isomorphism {α : Type u} [modular_lattice α] {u v x : α} :
x ≤ u →
x ≥ v →
x ≥ u ⊓ v →
x ≤ u ⊔ v →
u ⊓ ( v ⊔ x ) = x ∧ (u ⊓ x) ⊔ v = x
:=
begin
intros h1 h2 h3 h4,
split,
{
rw modular_lattice.modular_law,
exact sup_eq_right.mpr h3,
exact h1
},
{
rw inf_comm,
rw ← modular_lattice.modular_law,
exact inf_eq_left.mpr h4,
exact h2
}
end
|
theory HSV_tasks_2020 imports Complex_Main begin
section \<open>Task 1: proving that "3 / sqrt 2" is irrational.\<close>
(* In case it is helpful, the following theorem is copied from Chapter 3 of the worksheet. *)
theorem sqrt2_irrational: "sqrt 2 \<notin> \<rat>"
proof auto
assume "sqrt 2 \<in> \<rat>"
then obtain m n where
"n \<noteq> 0" and "\<bar>sqrt 2\<bar> = real m / real n" and "coprime m n"
by (rule Rats_abs_nat_div_natE)
hence "\<bar>sqrt 2\<bar>^2 = (real m / real n)^2" by auto
hence "2 = (real m / real n)^2" by simp
hence "2 = (real m)^2 / (real n)^2" unfolding power_divide by auto
hence "2 * (real n)^2 = (real m)^2"
by (simp add: nonzero_eq_divide_eq `n \<noteq> 0`)
hence "real (2 * n^2) = (real m)^2" by auto
hence *: "2 * n^2 = m^2"
using of_nat_power_eq_of_nat_cancel_iff by blast
hence "even (m^2)" by presburger
hence "even m" by simp
then obtain m' where "m = 2 * m'" by auto
with * have "2 * n^2 = (2 * m')^2" by auto
hence "2 * n^2 = 4 * m'^2" by simp
hence "n^2 = 2 * m'^2" by simp
hence "even (n^2)" by presburger
hence "even n" by simp
with `even m` and `coprime m n` show False by auto
qed
theorem "3 / sqrt 2 \<notin> \<rat>"
sorry (* TODO: Complete this proof. *)
section \<open>Task 2: Centred pentagonal numbers.\<close>
fun pent :: "nat \<Rightarrow> nat" where
"pent n = (if n = 0 then 1 else 5 * n + pent (n - 1))"
value "pent 0" (* should be 1 *)
value "pent 1" (* should be 6 *)
value "pent 2" (* should be 16 *)
value "pent 3" (* should be 31 *)
theorem "pent n = (5 * n^2 + 5 * n + 2) div 2"
sorry (* TODO: Complete this proof. *)
section \<open>Task 3: Lucas numbers.\<close>
fun fib :: "nat \<Rightarrow> nat" where
"fib n = (if n = 0 then 0 else if n = 1 then 1 else fib (n - 1) + fib (n - 2))"
value "fib 0" (* should be 0 *)
value "fib 1" (* should be 1 *)
value "fib 2" (* should be 1 *)
value "fib 3" (* should be 2 *)
thm fib.induct (* rule induction theorem for fib *)
(* TODO: Complete this task. *)
section \<open>Task 4: Balancing circuits.\<close>
(* Here is a datatype for representing circuits, copied from the worksheet *)
datatype "circuit" =
NOT "circuit"
| AND "circuit" "circuit"
| OR "circuit" "circuit"
| TRUE
| FALSE
| INPUT "int"
text \<open>Delay (assuming all gates have a delay of 1)\<close>
(* The following "delay" function also appeared in the 2019 coursework exercises. *)
fun delay :: "circuit \<Rightarrow> nat" where
"delay (NOT c) = 1 + delay c"
| "delay (AND c1 c2) = 1 + max (delay c1) (delay c2)"
| "delay (OR c1 c2) = 1 + max (delay c1) (delay c2)"
| "delay _ = 0"
(* TODO: Complete this task. *)
section \<open>Task 5: Extending with NAND gates.\<close>
(* TODO: Complete this task. *)
end |
------------------------------------------------------------------------
-- INCREMENTAL λ-CALCULUS
--
-- Congruence of application.
--
-- If f ≡ g and x ≡ y, then (f x) ≡ (g y).
------------------------------------------------------------------------
module Theorem.CongApp where
open import Relation.Binary.PropositionalEquality public
infixl 0 _⟨$⟩_
_⟨$⟩_ : ∀ {a b} {A : Set a} {B : Set b}
{f g : A → B} {x y : A} →
f ≡ g → x ≡ y → f x ≡ g y
_⟨$⟩_ = cong₂ (λ x y → x y)
|
-- WARNING: This file was generated automatically by Vehicle
-- and should not be modified manually!
-- Metadata
-- - Agda version: 2.6.2
-- - AISEC version: 0.1.0.1
-- - Time generated: ???
{-# OPTIONS --allow-exec #-}
open import Vehicle
open import Vehicle.Data.Tensor
open import Data.Rational as ℚ using (ℚ)
open import Data.Fin as Fin using (Fin; #_)
open import Data.List
module increasing-temp-output where
postulate f : Tensor ℚ (1 ∷ []) → Tensor ℚ (1 ∷ [])
abstract
increasing : ∀ (x : ℚ) → x ℚ.≤ f (x ∷ []) (# 0)
increasing = checkSpecification record
{ proofCache = "/home/matthew/Code/AISEC/vehicle/proofcache.vclp"
} |
-- Copyright (c) 2013 Radek Micek
module Algo
-- s - state, i - reason of interruption, r - result of whole computation.
data AlgoResult s i r
= Interrupt s i (s -> AlgoResult s i r)
| Finish s r
data Algo s i r a
= MkAlgo (s -> (a -> s -> AlgoResult s i r) -> AlgoResult s i r)
runAlgo : Algo s i r a -> s -> (a -> s -> AlgoResult s i r) -> AlgoResult s i r
runAlgo (MkAlgo x) = x
algoReturn : a -> Algo s i r a
algoReturn a = MkAlgo $ \s, k => k a s
algoBind : Algo s i r a -> (a -> Algo s i r b) -> Algo s i r b
algoBind x f = MkAlgo $ \s, k =>
runAlgo x s (\a, s' => runAlgo (f a) s' k)
instance Functor (Algo s i r) where
fmap f x = algoBind x (\x' => algoReturn (f x'))
instance Applicative (Algo s i r) where
pure = algoReturn
f <$> x =
algoBind f (\f' =>
algoBind x (\x' =>
algoReturn (f' x')))
instance Monad (Algo s i r) where
return = algoReturn
(>>=) = algoBind
run : s -> Algo s i r r -> AlgoResult s i r
run s x = runAlgo x s (\r, s' => Finish s' r)
interrupt : i -> Algo s i r ()
interrupt i = MkAlgo $ \s, k => Interrupt s i (k ())
resume : s -> (s -> AlgoResult s i r) -> AlgoResult s i r
resume s k = k s
put : s -> Algo s i r ()
put s = MkAlgo $ \_, k => k () s
get : Algo s i r s
get = MkAlgo $ \s, k => k s s
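-- A small usage sketch (my own addition, not part of the original module):
-- bump the state, interrupt once to report progress, then return the final
-- state as the overall result.
demo : Algo Nat String Nat Nat
demo = do
  n <- get
  put (n + 1)
  interrupt "checkpoint"
  get

-- "run 0 demo" evaluates to "Interrupt 1 "checkpoint" k"; resuming that
-- continuation with "resume 1 k" then yields "Finish 1 1".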
|
lemma continuous_on_homotopic_join_lemma:
  fixes q :: "[real,real] \<Rightarrow> 'a::topological_space"
  assumes p: "continuous_on ({0..1} \<times> {0..1}) (\<lambda>y. p (fst y) (snd y))"
    (is "continuous_on ?A ?p")
    and q: "continuous_on ({0..1} \<times> {0..1}) (\<lambda>y. q (fst y) (snd y))"
    (is "continuous_on ?A ?q")
    and pf: "\<And>t. t \<in> {0..1} \<Longrightarrow> pathfinish(p t) = pathstart(q t)"
  shows "continuous_on ({0..1} \<times> {0..1}) (\<lambda>y. (p(fst y) +++ q(fst y)) (snd y))" |
/-
Copyright (c) 2016 Jeremy Avigad. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Jeremy Avigad, Leonardo de Moura, Mario Carneiro, Johannes Hölzl
-/
import algebra.ordered_monoid
/-!
# Ordered groups
This file develops the basics of ordered groups.
## Implementation details
Unfortunately, the number of `'` appended to lemmas in this file
may differ between the multiplicative and the additive version of a lemma.
The reason is that we did not want to change existing names in the library.
-/
set_option old_structure_cmd true
universe u
variable {α : Type u}
/-- An ordered additive commutative group is an additive commutative group
with a partial order in which addition is strictly monotone. -/
@[protect_proj, ancestor add_comm_group partial_order]
class ordered_add_comm_group (α : Type u) extends add_comm_group α, partial_order α :=
(add_le_add_left : ∀ a b : α, a ≤ b → ∀ c : α, c + a ≤ c + b)
/-- An ordered commutative group is an commutative group
with a partial order in which multiplication is strictly monotone. -/
@[protect_proj, ancestor comm_group partial_order]
class ordered_comm_group (α : Type u) extends comm_group α, partial_order α :=
(mul_le_mul_left : ∀ a b : α, a ≤ b → ∀ c : α, c * a ≤ c * b)
attribute [to_additive] ordered_comm_group
/--The units of an ordered commutative monoid form an ordered commutative group. -/
@[to_additive]
instance units.ordered_comm_group [ordered_comm_monoid α] : ordered_comm_group (units α) :=
{ mul_le_mul_left := λ a b h c, mul_le_mul_left' h _,
.. units.partial_order,
.. (infer_instance : comm_group (units α)) }
section ordered_comm_group
variables [ordered_comm_group α] {a b c d : α}
@[to_additive ordered_add_comm_group.add_lt_add_left]
lemma ordered_comm_group.mul_lt_mul_left' (a b : α) (h : a < b) (c : α) : c * a < c * b :=
begin
rw lt_iff_le_not_le at h ⊢,
split,
{ apply ordered_comm_group.mul_le_mul_left _ _ h.1 },
{ intro w,
replace w : c⁻¹ * (c * b) ≤ c⁻¹ * (c * a) := ordered_comm_group.mul_le_mul_left _ _ w _,
simp only [mul_one, mul_comm, mul_left_inv, mul_left_comm] at w,
exact h.2 w },
end
@[to_additive ordered_add_comm_group.le_of_add_le_add_left]
lemma ordered_comm_group.le_of_mul_le_mul_left (h : a * b ≤ a * c) : b ≤ c :=
have a⁻¹ * (a * b) ≤ a⁻¹ * (a * c), from ordered_comm_group.mul_le_mul_left _ _ h _,
begin simp [inv_mul_cancel_left] at this, assumption end
@[to_additive]
lemma ordered_comm_group.lt_of_mul_lt_mul_left (h : a * b < a * c) : b < c :=
have a⁻¹ * (a * b) < a⁻¹ * (a * c), from ordered_comm_group.mul_lt_mul_left' _ _ h _,
begin simp [inv_mul_cancel_left] at this, assumption end
@[priority 100, to_additive] -- see Note [lower instance priority]
instance ordered_comm_group.to_ordered_cancel_comm_monoid (α : Type u)
[s : ordered_comm_group α] :
ordered_cancel_comm_monoid α :=
{ mul_left_cancel := @mul_left_cancel α _,
le_of_mul_le_mul_left := @ordered_comm_group.le_of_mul_le_mul_left α _,
..s }
@[priority 100, to_additive]
instance ordered_comm_group.has_exists_mul_of_le (α : Type u)
[ordered_comm_group α] :
has_exists_mul_of_le α :=
⟨λ a b hab, ⟨b * a⁻¹, (mul_inv_cancel_comm_assoc a b).symm⟩⟩
@[to_additive neg_le_neg]
lemma inv_le_inv' (h : a ≤ b) : b⁻¹ ≤ a⁻¹ :=
have 1 ≤ a⁻¹ * b, from mul_left_inv a ▸ mul_le_mul_left' h _,
have 1 * b⁻¹ ≤ a⁻¹ * b * b⁻¹, from mul_le_mul_right' this _,
by rwa [mul_inv_cancel_right, one_mul] at this
@[to_additive]
lemma le_of_inv_le_inv (h : b⁻¹ ≤ a⁻¹) : a ≤ b :=
suffices (a⁻¹)⁻¹ ≤ (b⁻¹)⁻¹, from
begin simp [inv_inv] at this, assumption end,
inv_le_inv' h
@[to_additive]
lemma one_le_of_inv_le_one (h : a⁻¹ ≤ 1) : 1 ≤ a :=
have a⁻¹ ≤ 1⁻¹, by rwa one_inv,
le_of_inv_le_inv this
@[to_additive]
lemma inv_le_one_of_one_le (h : 1 ≤ a) : a⁻¹ ≤ 1 :=
have a⁻¹ ≤ 1⁻¹, from inv_le_inv' h,
by rwa one_inv at this
@[to_additive nonpos_of_neg_nonneg]
lemma le_one_of_one_le_inv (h : 1 ≤ a⁻¹) : a ≤ 1 :=
have 1⁻¹ ≤ a⁻¹, by rwa one_inv,
le_of_inv_le_inv this
@[to_additive neg_nonneg_of_nonpos]
lemma one_le_inv_of_le_one (h : a ≤ 1) : 1 ≤ a⁻¹ :=
have 1⁻¹ ≤ a⁻¹, from inv_le_inv' h,
by rwa one_inv at this
@[to_additive neg_lt_neg]
lemma inv_lt_inv' (h : a < b) : b⁻¹ < a⁻¹ :=
have 1 < a⁻¹ * b, from mul_left_inv a ▸ mul_lt_mul_left' h (a⁻¹),
have 1 * b⁻¹ < a⁻¹ * b * b⁻¹, from mul_lt_mul_right' this (b⁻¹),
by rwa [mul_inv_cancel_right, one_mul] at this
@[to_additive]
lemma lt_of_inv_lt_inv (h : b⁻¹ < a⁻¹) : a < b :=
inv_inv a ▸ inv_inv b ▸ inv_lt_inv' h
@[to_additive]
lemma one_lt_of_inv_inv (h : a⁻¹ < 1) : 1 < a :=
have a⁻¹ < 1⁻¹, by rwa one_inv,
lt_of_inv_lt_inv this
@[to_additive]
lemma inv_inv_of_one_lt (h : 1 < a) : a⁻¹ < 1 :=
have a⁻¹ < 1⁻¹, from inv_lt_inv' h,
by rwa one_inv at this
@[to_additive neg_of_neg_pos]
lemma inv_of_one_lt_inv (h : 1 < a⁻¹) : a < 1 :=
have 1⁻¹ < a⁻¹, by rwa one_inv,
lt_of_inv_lt_inv this
@[to_additive neg_pos_of_neg]
lemma one_lt_inv_of_inv (h : a < 1) : 1 < a⁻¹ :=
have 1⁻¹ < a⁻¹, from inv_lt_inv' h,
by rwa one_inv at this
@[to_additive]
lemma le_inv_of_le_inv (h : a ≤ b⁻¹) : b ≤ a⁻¹ :=
begin
have h := inv_le_inv' h,
rwa inv_inv at h
end
@[to_additive]
lemma inv_le_of_inv_le (h : a⁻¹ ≤ b) : b⁻¹ ≤ a :=
begin
have h := inv_le_inv' h,
rwa inv_inv at h
end
@[to_additive]
lemma lt_inv_of_lt_inv (h : a < b⁻¹) : b < a⁻¹ :=
begin
have h := inv_lt_inv' h,
rwa inv_inv at h
end
@[to_additive]
lemma inv_lt_of_inv_lt (h : a⁻¹ < b) : b⁻¹ < a :=
begin
have h := inv_lt_inv' h,
rwa inv_inv at h
end
@[to_additive]
lemma mul_le_of_le_inv_mul (h : b ≤ a⁻¹ * c) : a * b ≤ c :=
begin
have h := mul_le_mul_left' h a,
rwa mul_inv_cancel_left at h
end
@[to_additive]
lemma le_inv_mul_of_mul_le (h : a * b ≤ c) : b ≤ a⁻¹ * c :=
begin
have h := mul_le_mul_left' h a⁻¹,
rwa inv_mul_cancel_left at h
end
@[to_additive]
lemma le_mul_of_inv_mul_le (h : b⁻¹ * a ≤ c) : a ≤ b * c :=
begin
have h := mul_le_mul_left' h b,
rwa mul_inv_cancel_left at h
end
@[to_additive]
lemma inv_mul_le_of_le_mul (h : a ≤ b * c) : b⁻¹ * a ≤ c :=
begin
have h := mul_le_mul_left' h b⁻¹,
rwa inv_mul_cancel_left at h
end
@[to_additive]
lemma le_mul_of_inv_mul_le_left (h : b⁻¹ * a ≤ c) : a ≤ b * c :=
le_mul_of_inv_mul_le h
@[to_additive]
lemma inv_mul_le_left_of_le_mul (h : a ≤ b * c) : b⁻¹ * a ≤ c :=
inv_mul_le_of_le_mul h
@[to_additive]
lemma le_mul_of_inv_mul_le_right (h : c⁻¹ * a ≤ b) : a ≤ b * c :=
by { rw mul_comm, exact le_mul_of_inv_mul_le h }
@[to_additive]
lemma inv_mul_le_right_of_le_mul (h : a ≤ b * c) : c⁻¹ * a ≤ b :=
by { rw mul_comm at h, apply inv_mul_le_left_of_le_mul h }
@[to_additive]
lemma mul_lt_of_lt_inv_mul (h : b < a⁻¹ * c) : a * b < c :=
begin
have h := mul_lt_mul_left' h a,
rwa mul_inv_cancel_left at h
end
@[to_additive]
lemma lt_inv_mul_of_mul_lt (h : a * b < c) : b < a⁻¹ * c :=
begin
have h := mul_lt_mul_left' h (a⁻¹),
rwa inv_mul_cancel_left at h
end
@[to_additive]
lemma lt_mul_of_inv_mul_lt (h : b⁻¹ * a < c) : a < b * c :=
begin
have h := mul_lt_mul_left' h b,
rwa mul_inv_cancel_left at h
end
@[to_additive]
lemma inv_mul_lt_of_lt_mul (h : a < b * c) : b⁻¹ * a < c :=
begin
have h := mul_lt_mul_left' h (b⁻¹),
rwa inv_mul_cancel_left at h
end
@[to_additive]
lemma lt_mul_of_inv_mul_lt_left (h : b⁻¹ * a < c) : a < b * c :=
lt_mul_of_inv_mul_lt h
@[to_additive]
lemma inv_mul_lt_left_of_lt_mul (h : a < b * c) : b⁻¹ * a < c :=
inv_mul_lt_of_lt_mul h
@[to_additive]
lemma lt_mul_of_inv_mul_lt_right (h : c⁻¹ * a < b) : a < b * c :=
by { rw mul_comm, exact lt_mul_of_inv_mul_lt h }
@[to_additive]
lemma inv_mul_lt_right_of_lt_mul (h : a < b * c) : c⁻¹ * a < b :=
by { rw mul_comm at h, exact inv_mul_lt_of_lt_mul h }
@[simp, to_additive]
lemma inv_lt_one_iff_one_lt : a⁻¹ < 1 ↔ 1 < a :=
⟨ one_lt_of_inv_inv, inv_inv_of_one_lt ⟩
@[simp, to_additive]
lemma inv_le_inv_iff : a⁻¹ ≤ b⁻¹ ↔ b ≤ a :=
have a * b * a⁻¹ ≤ a * b * b⁻¹ ↔ a⁻¹ ≤ b⁻¹, from mul_le_mul_iff_left _,
by { rw [mul_inv_cancel_right, mul_comm a, mul_inv_cancel_right] at this, rw [this] }
@[to_additive neg_le]
lemma inv_le' : a⁻¹ ≤ b ↔ b⁻¹ ≤ a :=
have a⁻¹ ≤ (b⁻¹)⁻¹ ↔ b⁻¹ ≤ a, from inv_le_inv_iff,
by rwa inv_inv at this
@[to_additive le_neg]
lemma le_inv' : a ≤ b⁻¹ ↔ b ≤ a⁻¹ :=
have (a⁻¹)⁻¹ ≤ b⁻¹ ↔ b ≤ a⁻¹, from inv_le_inv_iff,
by rwa inv_inv at this
@[to_additive neg_le_iff_add_nonneg]
lemma inv_le_iff_one_le_mul : a⁻¹ ≤ b ↔ 1 ≤ b * a :=
(mul_le_mul_iff_right a).symm.trans $ by rw inv_mul_self
@[to_additive neg_le_iff_add_nonneg']
lemma inv_le_iff_one_le_mul' : a⁻¹ ≤ b ↔ 1 ≤ a * b :=
(mul_le_mul_iff_left a).symm.trans $ by rw mul_inv_self
@[to_additive]
lemma inv_lt_iff_one_lt_mul : a⁻¹ < b ↔ 1 < b * a :=
(mul_lt_mul_iff_right a).symm.trans $ by rw inv_mul_self
@[to_additive]
lemma inv_lt_iff_one_lt_mul' : a⁻¹ < b ↔ 1 < a * b :=
(mul_lt_mul_iff_left a).symm.trans $ by rw mul_inv_self
@[to_additive]
lemma le_inv_iff_mul_le_one : a ≤ b⁻¹ ↔ a * b ≤ 1 :=
(mul_le_mul_iff_right b).symm.trans $ by rw inv_mul_self
@[to_additive]
lemma le_inv_iff_mul_le_one' : a ≤ b⁻¹ ↔ b * a ≤ 1 :=
(mul_le_mul_iff_left b).symm.trans $ by rw mul_inv_self
@[to_additive]
lemma lt_inv_iff_mul_lt_one : a < b⁻¹ ↔ a * b < 1 :=
(mul_lt_mul_iff_right b).symm.trans $ by rw inv_mul_self
@[to_additive]
lemma lt_inv_iff_mul_lt_one' : a < b⁻¹ ↔ b * a < 1 :=
(mul_lt_mul_iff_left b).symm.trans $ by rw mul_inv_self
@[simp, to_additive neg_nonpos]
lemma inv_le_one' : a⁻¹ ≤ 1 ↔ 1 ≤ a :=
have a⁻¹ ≤ 1⁻¹ ↔ 1 ≤ a, from inv_le_inv_iff,
by rwa one_inv at this
@[simp, to_additive neg_nonneg]
lemma one_le_inv' : 1 ≤ a⁻¹ ↔ a ≤ 1 :=
have 1⁻¹ ≤ a⁻¹ ↔ a ≤ 1, from inv_le_inv_iff,
by rwa one_inv at this
@[to_additive]
lemma inv_le_self (h : 1 ≤ a) : a⁻¹ ≤ a :=
le_trans (inv_le_one'.2 h) h
@[to_additive]
lemma self_le_inv (h : a ≤ 1) : a ≤ a⁻¹ :=
le_trans h (one_le_inv'.2 h)
@[simp, to_additive]
lemma inv_lt_inv_iff : a⁻¹ < b⁻¹ ↔ b < a :=
have a * b * a⁻¹ < a * b * b⁻¹ ↔ a⁻¹ < b⁻¹, from mul_lt_mul_iff_left _,
by { rw [mul_inv_cancel_right, mul_comm a, mul_inv_cancel_right] at this, rw [this] }
@[to_additive neg_lt_zero]
lemma inv_lt_one' : a⁻¹ < 1 ↔ 1 < a :=
have a⁻¹ < 1⁻¹ ↔ 1 < a, from inv_lt_inv_iff,
by rwa one_inv at this
@[to_additive neg_pos]
lemma one_lt_inv' : 1 < a⁻¹ ↔ a < 1 :=
have 1⁻¹ < a⁻¹ ↔ a < 1, from inv_lt_inv_iff,
by rwa one_inv at this
@[to_additive neg_lt]
lemma inv_lt' : a⁻¹ < b ↔ b⁻¹ < a :=
have a⁻¹ < (b⁻¹)⁻¹ ↔ b⁻¹ < a, from inv_lt_inv_iff,
by rwa inv_inv at this
@[to_additive lt_neg]
lemma lt_inv' : a < b⁻¹ ↔ b < a⁻¹ :=
have (a⁻¹)⁻¹ < b⁻¹ ↔ b < a⁻¹, from inv_lt_inv_iff,
by rwa inv_inv at this
@[to_additive]
lemma inv_lt_self (h : 1 < a) : a⁻¹ < a :=
(inv_lt_one'.2 h).trans h
@[to_additive]
lemma le_inv_mul_iff_mul_le : b ≤ a⁻¹ * c ↔ a * b ≤ c :=
have a⁻¹ * (a * b) ≤ a⁻¹ * c ↔ a * b ≤ c, from mul_le_mul_iff_left _,
by rwa inv_mul_cancel_left at this
@[simp, to_additive]
lemma inv_mul_le_iff_le_mul : b⁻¹ * a ≤ c ↔ a ≤ b * c :=
have b⁻¹ * a ≤ b⁻¹ * (b * c) ↔ a ≤ b * c, from mul_le_mul_iff_left _,
by rwa inv_mul_cancel_left at this
@[to_additive]
lemma mul_inv_le_iff_le_mul : a * c⁻¹ ≤ b ↔ a ≤ b * c :=
by rw [mul_comm a, mul_comm b, inv_mul_le_iff_le_mul]
@[simp, to_additive]
lemma mul_inv_le_iff_le_mul' : a * b⁻¹ ≤ c ↔ a ≤ b * c :=
by rw [← inv_mul_le_iff_le_mul, mul_comm]
@[to_additive]
lemma inv_mul_le_iff_le_mul' : c⁻¹ * a ≤ b ↔ a ≤ b * c :=
by rw [inv_mul_le_iff_le_mul, mul_comm]
@[simp, to_additive]
lemma lt_inv_mul_iff_mul_lt : b < a⁻¹ * c ↔ a * b < c :=
have a⁻¹ * (a * b) < a⁻¹ * c ↔ a * b < c, from mul_lt_mul_iff_left _,
by rwa inv_mul_cancel_left at this
@[simp, to_additive]
lemma inv_mul_lt_iff_lt_mul : b⁻¹ * a < c ↔ a < b * c :=
have b⁻¹ * a < b⁻¹ * (b * c) ↔ a < b * c, from mul_lt_mul_iff_left _,
by rwa inv_mul_cancel_left at this
@[to_additive]
lemma inv_mul_lt_iff_lt_mul_right : c⁻¹ * a < b ↔ a < b * c :=
by rw [inv_mul_lt_iff_lt_mul, mul_comm]
@[to_additive add_neg_le_add_neg_iff]
lemma div_le_div_iff' : a * b⁻¹ ≤ c * d⁻¹ ↔ a * d ≤ c * b :=
begin
split ; intro h,
have := mul_le_mul_right' (mul_le_mul_right' h b) d,
rwa [inv_mul_cancel_right, mul_assoc _ _ b, mul_comm _ b, ← mul_assoc, inv_mul_cancel_right]
at this,
have := mul_le_mul_right' (mul_le_mul_right' h d⁻¹) b⁻¹,
rwa [mul_inv_cancel_right, _root_.mul_assoc, _root_.mul_comm d⁻¹ b⁻¹, ← mul_assoc,
mul_inv_cancel_right] at this,
end
@[simp, to_additive] lemma div_le_self_iff (a : α) {b : α} : a / b ≤ a ↔ 1 ≤ b :=
by simp [div_eq_mul_inv]
@[simp, to_additive] lemma div_lt_self_iff (a : α) {b : α} : a / b < a ↔ 1 < b :=
by simp [div_eq_mul_inv]
/-- Pullback an `ordered_comm_group` under an injective map. -/
@[to_additive function.injective.ordered_add_comm_group
"Pullback an `ordered_add_comm_group` under an injective map."]
def function.injective.ordered_comm_group {β : Type*}
[has_one β] [has_mul β] [has_inv β] [has_div β]
(f : β → α) (hf : function.injective f) (one : f 1 = 1)
(mul : ∀ x y, f (x * y) = f x * f y)
(inv : ∀ x, f (x⁻¹) = (f x)⁻¹)
(div : ∀ x y, f (x / y) = f x / f y) :
ordered_comm_group β :=
{ ..partial_order.lift f hf,
..hf.ordered_comm_monoid f one mul,
..hf.comm_group f one mul inv div }
end ordered_comm_group
section ordered_add_comm_group
variables [ordered_add_comm_group α] {a b c d : α}
lemma sub_le_sub (hab : a ≤ b) (hcd : c ≤ d) : a - d ≤ b - c :=
by simpa only [sub_eq_add_neg] using add_le_add hab (neg_le_neg hcd)
lemma sub_lt_sub (hab : a < b) (hcd : c < d) : a - d < b - c :=
by simpa only [sub_eq_add_neg] using add_lt_add hab (neg_lt_neg hcd)
alias sub_le_self_iff ↔ _ sub_le_self
alias sub_lt_self_iff ↔ _ sub_lt_self
lemma sub_le_sub_iff : a - b ≤ c - d ↔ a + d ≤ c + b :=
by simpa only [sub_eq_add_neg] using add_neg_le_add_neg_iff
@[simp]
lemma sub_le_sub_iff_left (a : α) {b c : α} : a - b ≤ a - c ↔ c ≤ b :=
by rw [sub_eq_add_neg, sub_eq_add_neg, add_le_add_iff_left, neg_le_neg_iff]
lemma sub_le_sub_left (h : a ≤ b) (c : α) : c - b ≤ c - a :=
(sub_le_sub_iff_left c).2 h
@[simp]
lemma sub_le_sub_iff_right (c : α) : a - c ≤ b - c ↔ a ≤ b :=
by simpa only [sub_eq_add_neg] using add_le_add_iff_right _
lemma sub_le_sub_right (h : a ≤ b) (c : α) : a - c ≤ b - c :=
(sub_le_sub_iff_right c).2 h
@[simp]
lemma sub_lt_sub_iff_left (a : α) {b c : α} : a - b < a - c ↔ c < b :=
by rw [sub_eq_add_neg, sub_eq_add_neg, add_lt_add_iff_left, neg_lt_neg_iff]
lemma sub_lt_sub_left (h : a < b) (c : α) : c - b < c - a :=
(sub_lt_sub_iff_left c).2 h
@[simp]
lemma sub_lt_sub_iff_right (c : α) : a - c < b - c ↔ a < b :=
by simpa only [sub_eq_add_neg] using add_lt_add_iff_right _
lemma sub_lt_sub_right (h : a < b) (c : α) : a - c < b - c :=
(sub_lt_sub_iff_right c).2 h
@[simp] lemma sub_nonneg : 0 ≤ a - b ↔ b ≤ a :=
by rw [← sub_self a, sub_le_sub_iff_left]
alias sub_nonneg ↔ le_of_sub_nonneg sub_nonneg_of_le
@[simp] lemma sub_nonpos : a - b ≤ 0 ↔ a ≤ b :=
by rw [← sub_self b, sub_le_sub_iff_right]
alias sub_nonpos ↔ le_of_sub_nonpos sub_nonpos_of_le
@[simp] lemma sub_pos : 0 < a - b ↔ b < a :=
by rw [← sub_self a, sub_lt_sub_iff_left]
alias sub_pos ↔ lt_of_sub_pos sub_pos_of_lt
@[simp] lemma sub_lt_zero : a - b < 0 ↔ a < b :=
by rw [← sub_self b, sub_lt_sub_iff_right]
alias sub_lt_zero ↔ lt_of_sub_neg sub_neg_of_lt
lemma le_sub_iff_add_le' : b ≤ c - a ↔ a + b ≤ c :=
by rw [sub_eq_add_neg, add_comm, le_neg_add_iff_add_le]
lemma le_sub_iff_add_le : a ≤ c - b ↔ a + b ≤ c :=
by rw [le_sub_iff_add_le', add_comm]
alias le_sub_iff_add_le ↔ add_le_of_le_sub_right le_sub_right_of_add_le
lemma sub_le_iff_le_add' : a - b ≤ c ↔ a ≤ b + c :=
by rw [sub_eq_add_neg, add_comm, neg_add_le_iff_le_add]
alias le_sub_iff_add_le' ↔ add_le_of_le_sub_left le_sub_left_of_add_le
lemma sub_le_iff_le_add : a - c ≤ b ↔ a ≤ b + c :=
by rw [sub_le_iff_le_add', add_comm]
@[simp] lemma neg_le_sub_iff_le_add : -b ≤ a - c ↔ c ≤ a + b :=
le_sub_iff_add_le.trans neg_add_le_iff_le_add'
lemma neg_le_sub_iff_le_add' : -a ≤ b - c ↔ c ≤ a + b :=
by rw [neg_le_sub_iff_le_add, add_comm]
lemma sub_le : a - b ≤ c ↔ a - c ≤ b :=
sub_le_iff_le_add'.trans sub_le_iff_le_add.symm
theorem le_sub : a ≤ b - c ↔ c ≤ b - a :=
le_sub_iff_add_le'.trans le_sub_iff_add_le.symm
lemma lt_sub_iff_add_lt' : b < c - a ↔ a + b < c :=
by rw [sub_eq_add_neg, add_comm, lt_neg_add_iff_add_lt]
alias lt_sub_iff_add_lt' ↔ add_lt_of_lt_sub_left lt_sub_left_of_add_lt
lemma lt_sub_iff_add_lt : a < c - b ↔ a + b < c :=
by rw [lt_sub_iff_add_lt', add_comm]
alias lt_sub_iff_add_lt ↔ add_lt_of_lt_sub_right lt_sub_right_of_add_lt
lemma sub_lt_iff_lt_add' : a - b < c ↔ a < b + c :=
by rw [sub_eq_add_neg, add_comm, neg_add_lt_iff_lt_add]
alias sub_lt_iff_lt_add' ↔ lt_add_of_sub_left_lt sub_left_lt_of_lt_add
lemma sub_lt_iff_lt_add : a - c < b ↔ a < b + c :=
by rw [sub_lt_iff_lt_add', add_comm]
alias sub_lt_iff_lt_add ↔ lt_add_of_sub_right_lt sub_right_lt_of_lt_add
@[simp] lemma neg_lt_sub_iff_lt_add : -b < a - c ↔ c < a + b :=
lt_sub_iff_add_lt.trans neg_add_lt_iff_lt_add_right
lemma neg_lt_sub_iff_lt_add' : -a < b - c ↔ c < a + b :=
by rw [neg_lt_sub_iff_lt_add, add_comm]
lemma sub_lt : a - b < c ↔ a - c < b :=
sub_lt_iff_lt_add'.trans sub_lt_iff_lt_add.symm
theorem lt_sub : a < b - c ↔ c < b - a :=
lt_sub_iff_add_lt'.trans lt_sub_iff_add_lt.symm
end ordered_add_comm_group
/-!
### Linearly ordered commutative groups
-/
/-- A linearly ordered additive commutative group is an
additive commutative group with a linear order in which
addition is monotone. -/
@[protect_proj, ancestor add_comm_group linear_order]
class linear_ordered_add_comm_group (α : Type u) extends add_comm_group α, linear_order α :=
(add_le_add_left : ∀ a b : α, a ≤ b → ∀ c : α, c + a ≤ c + b)
/-- A linearly ordered commutative group is a
commutative group with a linear order in which
multiplication is monotone. -/
@[protect_proj, ancestor comm_group linear_order, to_additive]
class linear_ordered_comm_group (α : Type u) extends comm_group α, linear_order α :=
(mul_le_mul_left : ∀ a b : α, a ≤ b → ∀ c : α, c * a ≤ c * b)
section linear_ordered_comm_group
variables [linear_ordered_comm_group α] {a b c : α}
@[priority 100, to_additive] -- see Note [lower instance priority]
instance linear_ordered_comm_group.to_ordered_comm_group : ordered_comm_group α :=
{ ..‹linear_ordered_comm_group α› }
@[priority 100, to_additive] -- see Note [lower instance priority]
instance linear_ordered_comm_group.to_linear_ordered_cancel_comm_monoid :
linear_ordered_cancel_comm_monoid α :=
{ le_of_mul_le_mul_left := λ x y z, le_of_mul_le_mul_left',
mul_left_cancel := λ x y z, mul_left_cancel,
..‹linear_ordered_comm_group α› }
/-- Pullback a `linear_ordered_comm_group` under an injective map. -/
@[to_additive function.injective.linear_ordered_add_comm_group
"Pullback a `linear_ordered_add_comm_group` under an injective map."]
def function.injective.linear_ordered_comm_group {β : Type*}
[has_one β] [has_mul β] [has_inv β] [has_div β]
(f : β → α) (hf : function.injective f) (one : f 1 = 1)
(mul : ∀ x y, f (x * y) = f x * f y)
(inv : ∀ x, f (x⁻¹) = (f x)⁻¹)
(div : ∀ x y, f (x / y) = f x / f y) :
linear_ordered_comm_group β :=
{ ..linear_order.lift f hf,
..hf.ordered_comm_group f one mul inv div }
@[to_additive linear_ordered_add_comm_group.add_lt_add_left]
lemma linear_ordered_comm_group.mul_lt_mul_left'
(a b : α) (h : a < b) (c : α) : c * a < c * b :=
ordered_comm_group.mul_lt_mul_left' a b h c
@[to_additive min_neg_neg]
lemma min_inv_inv' (a b : α) : min (a⁻¹) (b⁻¹) = (max a b)⁻¹ :=
eq.symm $ @monotone.map_max α (order_dual α) _ _ has_inv.inv a b $ λ a b, inv_le_inv'
@[to_additive max_neg_neg]
lemma max_inv_inv' (a b : α) : max (a⁻¹) (b⁻¹) = (min a b)⁻¹ :=
eq.symm $ @monotone.map_min α (order_dual α) _ _ has_inv.inv a b $ λ a b, inv_le_inv'
@[to_additive min_sub_sub_right]
lemma min_div_div_right' (a b c : α) : min (a / c) (b / c) = min a b / c :=
by simpa only [div_eq_mul_inv] using min_mul_mul_right a b (c⁻¹)
@[to_additive max_sub_sub_right]
lemma max_div_div_right' (a b c : α) : max (a / c) (b / c) = max a b / c :=
by simpa only [div_eq_mul_inv] using max_mul_mul_right a b (c⁻¹)
@[to_additive min_sub_sub_left]
lemma min_div_div_left' (a b c : α) : min (a / b) (a / c) = a / max b c :=
by simp only [div_eq_mul_inv, min_mul_mul_left, min_inv_inv']
@[to_additive max_sub_sub_left]
lemma max_div_div_left' (a b c : α) : max (a / b) (a / c) = a / min b c :=
by simp only [div_eq_mul_inv, max_mul_mul_left, max_inv_inv']
@[to_additive max_zero_sub_eq_self]
lemma max_one_div_eq_self' (a : α) : max a 1 / max (a⁻¹) 1 = a :=
begin
rcases le_total a 1,
{ rw [max_eq_right h, max_eq_left, one_div, inv_inv], { rwa [le_inv', one_inv] } },
{ rw [max_eq_left, max_eq_right, div_eq_mul_inv, one_inv, mul_one],
{ rwa [inv_le', one_inv] }, exact h }
end
@[to_additive eq_zero_of_neg_eq]
lemma eq_one_of_inv_eq' (h : a⁻¹ = a) : a = 1 :=
match lt_trichotomy a 1 with
| or.inl h₁ :=
have 1 < a, from h ▸ one_lt_inv_of_inv h₁,
absurd h₁ this.asymm
| or.inr (or.inl h₁) := h₁
| or.inr (or.inr h₁) :=
have a < 1, from h ▸ inv_inv_of_one_lt h₁,
absurd h₁ this.asymm
end
@[to_additive exists_zero_lt]
lemma exists_one_lt' [nontrivial α] : ∃ (a:α), 1 < a :=
begin
obtain ⟨y, hy⟩ := exists_ne (1 : α),
cases hy.lt_or_lt,
{ exact ⟨y⁻¹, one_lt_inv'.mpr h⟩ },
{ exact ⟨y, h⟩ }
end
@[priority 100, to_additive] -- see Note [lower instance priority]
instance linear_ordered_comm_group.to_no_top_order [nontrivial α] :
no_top_order α :=
⟨ begin
obtain ⟨y, hy⟩ : ∃ (a:α), 1 < a := exists_one_lt',
exact λ a, ⟨a * y, lt_mul_of_one_lt_right' a hy⟩
end ⟩
@[priority 100, to_additive] -- see Note [lower instance priority]
instance linear_ordered_comm_group.to_no_bot_order [nontrivial α] : no_bot_order α :=
⟨ begin
obtain ⟨y, hy⟩ : ∃ (a:α), 1 < a := exists_one_lt',
exact λ a, ⟨a / y, (div_lt_self_iff a).mpr hy⟩
end ⟩
end linear_ordered_comm_group
section linear_ordered_add_comm_group
variables [linear_ordered_add_comm_group α] {a b c : α}
@[simp]
lemma sub_le_sub_flip : a - b ≤ b - a ↔ a ≤ b :=
begin
rw [sub_le_iff_le_add, sub_add_eq_add_sub, le_sub_iff_add_le],
split,
{ intro h,
by_contra H,
rw not_le at H,
apply not_lt.2 h,
exact add_lt_add H H, },
{ intro h,
exact add_le_add h h, }
end
lemma le_of_forall_pos_le_add [densely_ordered α] (h : ∀ ε : α, 0 < ε → a ≤ b + ε) : a ≤ b :=
le_of_forall_le_of_dense $ λ c hc,
calc a ≤ b + (c - b) : h _ (sub_pos_of_lt hc)
... = c : add_sub_cancel'_right _ _
lemma le_of_forall_pos_lt_add (h : ∀ ε : α, 0 < ε → a < b + ε) : a ≤ b :=
le_of_not_lt $ λ h₁, by simpa using h _ (sub_pos_of_lt h₁)
/-- `abs a` is the absolute value of `a`. -/
def abs (a : α) : α := max a (-a)
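-- A sanity check (my own, not part of mathlib): `abs` unfolds definitionally.
example (a : α) : abs a = max a (-a) := rfl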
lemma abs_of_nonneg (h : 0 ≤ a) : abs a = a :=
max_eq_left $ (neg_nonpos.2 h).trans h
lemma abs_of_pos (h : 0 < a) : abs a = a :=
abs_of_nonneg h.le
lemma abs_of_nonpos (h : a ≤ 0) : abs a = -a :=
max_eq_right $ h.trans (neg_nonneg.2 h)
lemma abs_of_neg (h : a < 0) : abs a = -a :=
abs_of_nonpos h.le
@[simp] lemma abs_zero : abs 0 = (0:α) :=
abs_of_nonneg le_rfl
@[simp] lemma abs_neg (a : α) : abs (-a) = abs a :=
begin unfold abs, rw [max_comm, neg_neg] end
@[simp] lemma abs_pos : 0 < abs a ↔ a ≠ 0 :=
begin
rcases lt_trichotomy a 0 with (ha|rfl|ha),
{ simp [abs_of_neg ha, neg_pos, ha.ne, ha] },
{ simp },
{ simp [abs_of_pos ha, ha, ha.ne.symm] }
end
lemma abs_pos_of_pos (h : 0 < a) : 0 < abs a := abs_pos.2 h.ne.symm
lemma abs_pos_of_neg (h : a < 0) : 0 < abs a := abs_pos.2 h.ne
lemma abs_sub (a b : α) : abs (a - b) = abs (b - a) :=
by rw [← neg_sub, abs_neg]
lemma abs_le' : abs a ≤ b ↔ a ≤ b ∧ -a ≤ b := max_le_iff
lemma abs_le : abs a ≤ b ↔ - b ≤ a ∧ a ≤ b :=
by rw [abs_le', and.comm, neg_le]
lemma neg_le_of_abs_le (h : abs a ≤ b) : -b ≤ a := (abs_le.mp h).1
lemma le_of_abs_le (h : abs a ≤ b) : a ≤ b := (abs_le.mp h).2
lemma le_abs : a ≤ abs b ↔ a ≤ b ∨ a ≤ -b := le_max_iff
lemma le_abs_self (a : α) : a ≤ abs a := le_max_left _ _
lemma neg_le_abs_self (a : α) : -a ≤ abs a := le_max_right _ _
lemma abs_nonneg (a : α) : 0 ≤ abs a :=
(le_total 0 a).elim (λ h, h.trans (le_abs_self a)) (λ h, (neg_nonneg.2 h).trans $ neg_le_abs_self a)
@[simp] lemma abs_abs (a : α) : abs (abs a) = abs a :=
abs_of_nonneg $ abs_nonneg a
@[simp] lemma abs_eq_zero : abs a = 0 ↔ a = 0 :=
not_iff_not.1 $ ne_comm.trans $ (abs_nonneg a).lt_iff_ne.symm.trans abs_pos
@[simp] lemma abs_nonpos_iff {a : α} : abs a ≤ 0 ↔ a = 0 :=
(abs_nonneg a).le_iff_eq.trans abs_eq_zero
lemma abs_lt : abs a < b ↔ - b < a ∧ a < b :=
max_lt_iff.trans $ and.comm.trans $ by rw [neg_lt]
lemma neg_lt_of_abs_lt (h : abs a < b) : -b < a := (abs_lt.mp h).1
lemma lt_of_abs_lt (h : abs a < b) : a < b := (abs_lt.mp h).2
lemma lt_abs : a < abs b ↔ a < b ∨ a < -b := lt_max_iff
lemma max_sub_min_eq_abs' (a b : α) : max a b - min a b = abs (a - b) :=
begin
cases le_total a b with ab ba,
{ rw [max_eq_right ab, min_eq_left ab, abs_of_nonpos, neg_sub], rwa sub_nonpos },
{ rw [max_eq_left ba, min_eq_right ba, abs_of_nonneg], rwa sub_nonneg }
end
lemma max_sub_min_eq_abs (a b : α) : max a b - min a b = abs (b - a) :=
by { rw [abs_sub], exact max_sub_min_eq_abs' _ _ }
lemma abs_add (a b : α) : abs (a + b) ≤ abs a + abs b :=
abs_le.2 ⟨(neg_add (abs a) (abs b)).symm ▸
add_le_add (neg_le.2 $ neg_le_abs_self _) (neg_le.2 $ neg_le_abs_self _),
add_le_add (le_abs_self _) (le_abs_self _)⟩
lemma abs_sub_le_iff : abs (a - b) ≤ c ↔ a - b ≤ c ∧ b - a ≤ c :=
by rw [abs_le, neg_le_sub_iff_le_add, @sub_le_iff_le_add' _ _ b, and_comm]
lemma abs_sub_lt_iff : abs (a - b) < c ↔ a - b < c ∧ b - a < c :=
by rw [abs_lt, neg_lt_sub_iff_lt_add, @sub_lt_iff_lt_add' _ _ b, and_comm]
lemma sub_le_of_abs_sub_le_left (h : abs (a - b) ≤ c) : b - c ≤ a :=
sub_le.1 $ (abs_sub_le_iff.1 h).2
lemma sub_le_of_abs_sub_le_right (h : abs (a - b) ≤ c) : a - c ≤ b :=
sub_le_of_abs_sub_le_left (abs_sub a b ▸ h)
lemma sub_lt_of_abs_sub_lt_left (h : abs (a - b) < c) : b - c < a :=
sub_lt.1 $ (abs_sub_lt_iff.1 h).2
lemma sub_lt_of_abs_sub_lt_right (h : abs (a - b) < c) : a - c < b :=
sub_lt_of_abs_sub_lt_left (abs_sub a b ▸ h)
lemma abs_sub_abs_le_abs_sub (a b : α) : abs a - abs b ≤ abs (a - b) :=
sub_le_iff_le_add.2 $
calc abs a = abs (a - b + b) : by rw [sub_add_cancel]
... ≤ abs (a - b) + abs b : abs_add _ _
lemma abs_abs_sub_abs_le_abs_sub (a b : α) : abs (abs a - abs b) ≤ abs (a - b) :=
abs_sub_le_iff.2 ⟨abs_sub_abs_le_abs_sub _ _, by rw abs_sub; apply abs_sub_abs_le_abs_sub⟩
lemma abs_eq (hb : 0 ≤ b) : abs a = b ↔ a = b ∨ a = -b :=
iff.intro
begin
cases le_total a 0 with a_nonpos a_nonneg,
{ rw [abs_of_nonpos a_nonpos, neg_eq_iff_neg_eq, eq_comm], exact or.inr },
{ rw [abs_of_nonneg a_nonneg, eq_comm], exact or.inl }
end
(by intro h; cases h; subst h; try { rw abs_neg }; exact abs_of_nonneg hb)
lemma abs_le_max_abs_abs (hab : a ≤ b) (hbc : b ≤ c) : abs b ≤ max (abs a) (abs c) :=
abs_le'.2
⟨by simp [hbc.trans (le_abs_self c)],
by simp [(neg_le_neg hab).trans (neg_le_abs_self a)]⟩
theorem abs_le_abs (h₀ : a ≤ b) (h₁ : -a ≤ b) : abs a ≤ abs b :=
(abs_le'.2 ⟨h₀, h₁⟩).trans (le_abs_self b)
lemma abs_max_sub_max_le_abs (a b c : α) : abs (max a c - max b c) ≤ abs (a - b) :=
begin
simp_rw [abs_le, le_sub_iff_add_le, sub_le_iff_le_add, ← max_add_add_left],
split; apply max_le_max; simp only [← le_sub_iff_add_le, ← sub_le_iff_le_add, sub_self, neg_le,
neg_le_abs_self, neg_zero, abs_nonneg, le_abs_self]
end
lemma eq_of_abs_sub_eq_zero {a b : α} (h : abs (a - b) = 0) : a = b :=
sub_eq_zero.1 $ abs_eq_zero.1 h
lemma abs_by_cases (P : α → Prop) {a : α} (h1 : P a) (h2 : P (-a)) : P (abs a) :=
sup_ind _ _ h1 h2
lemma abs_sub_le (a b c : α) : abs (a - c) ≤ abs (a - b) + abs (b - c) :=
calc
abs (a - c) = abs (a - b + (b - c)) : by rw [sub_add_sub_cancel]
... ≤ abs (a - b) + abs (b - c) : abs_add _ _
lemma abs_add_three (a b c : α) : abs (a + b + c) ≤ abs a + abs b + abs c :=
(abs_add _ _).trans (add_le_add_right (abs_add _ _) _)
lemma dist_bdd_within_interval {a b lb ub : α} (hal : lb ≤ a) (hau : a ≤ ub)
(hbl : lb ≤ b) (hbu : b ≤ ub) : abs (a - b) ≤ ub - lb :=
abs_sub_le_iff.2 ⟨sub_le_sub hau hbl, sub_le_sub hbu hal⟩
lemma eq_of_abs_sub_nonpos (h : abs (a - b) ≤ 0) : a = b :=
eq_of_abs_sub_eq_zero (le_antisymm h (abs_nonneg (a - b)))
end linear_ordered_add_comm_group
/-- This is not so much a new structure as a construction mechanism
for ordered groups, by specifying only the "positive cone" of the group. -/
class nonneg_add_comm_group (α : Type*) extends add_comm_group α :=
(nonneg : α → Prop)
(pos : α → Prop := λ a, nonneg a ∧ ¬ nonneg (neg a))
(pos_iff : ∀ a, pos a ↔ nonneg a ∧ ¬ nonneg (-a) . order_laws_tac)
(zero_nonneg : nonneg 0)
(add_nonneg : ∀ {a b}, nonneg a → nonneg b → nonneg (a + b))
(nonneg_antisymm : ∀ {a}, nonneg a → nonneg (-a) → a = 0)
namespace nonneg_add_comm_group
variable [s : nonneg_add_comm_group α]
include s
@[reducible, priority 100] -- see Note [lower instance priority]
instance to_ordered_add_comm_group : ordered_add_comm_group α :=
{ le := λ a b, nonneg (b - a),
lt := λ a b, pos (b - a),
lt_iff_le_not_le := λ a b, by simp; rw [pos_iff]; simp,
le_refl := λ a, by simp [zero_nonneg],
le_trans := λ a b c nab nbc, by simp [-sub_eq_add_neg];
rw ← sub_add_sub_cancel; exact add_nonneg nbc nab,
le_antisymm := λ a b nab nba, eq_of_sub_eq_zero $
nonneg_antisymm nba (by rw neg_sub; exact nab),
add_le_add_left := λ a b nab c, by simpa [(≤), preorder.le] using nab,
..s }
theorem nonneg_def {a : α} : nonneg a ↔ 0 ≤ a :=
show _ ↔ nonneg _, by simp
theorem pos_def {a : α} : pos a ↔ 0 < a :=
show _ ↔ pos _, by simp
theorem not_zero_pos : ¬ pos (0 : α) :=
mt pos_def.1 (lt_irrefl _)
theorem zero_lt_iff_nonneg_nonneg {a : α} :
0 < a ↔ nonneg a ∧ ¬ nonneg (-a) :=
pos_def.symm.trans (pos_iff _)
theorem nonneg_total_iff :
(∀ a : α, nonneg a ∨ nonneg (-a)) ↔
(∀ a b : α, a ≤ b ∨ b ≤ a) :=
⟨λ h a b, by have := h (b - a); rwa [neg_sub] at this,
λ h a, by rw [nonneg_def, nonneg_def, neg_nonneg]; apply h⟩
/--
A `nonneg_add_comm_group` is a `linear_ordered_add_comm_group`
if `nonneg` is total and decidable.
-/
def to_linear_ordered_add_comm_group
[decidable_pred (@nonneg α _)]
(nonneg_total : ∀ a : α, nonneg a ∨ nonneg (-a))
: linear_ordered_add_comm_group α :=
{ le := (≤),
lt := (<),
le_total := nonneg_total_iff.1 nonneg_total,
decidable_le := by apply_instance,
decidable_lt := by apply_instance,
..@nonneg_add_comm_group.to_ordered_add_comm_group _ s }
end nonneg_add_comm_group
namespace order_dual
instance [ordered_add_comm_group α] : ordered_add_comm_group (order_dual α) :=
{ add_left_neg := λ a : α, add_left_neg a,
sub := λ a b, (a - b : α),
..order_dual.ordered_add_comm_monoid,
..show add_comm_group α, by apply_instance }
instance [linear_ordered_add_comm_group α] :
linear_ordered_add_comm_group (order_dual α) :=
{ add_le_add_left := λ a b h c, @add_le_add_left α _ b a h _,
..order_dual.linear_order α,
..show add_comm_group α, by apply_instance }
end order_dual
namespace prod
variables {G H : Type*}
@[to_additive]
instance [ordered_comm_group G] [ordered_comm_group H] :
ordered_comm_group (G × H) :=
{ .. prod.comm_group, .. prod.partial_order G H, .. prod.ordered_cancel_comm_monoid }
end prod
section type_tags
instance [ordered_add_comm_group α] : ordered_comm_group (multiplicative α) :=
{ ..multiplicative.comm_group,
..multiplicative.ordered_comm_monoid }
instance [ordered_comm_group α] : ordered_add_comm_group (additive α) :=
{ ..additive.add_comm_group,
..additive.ordered_add_comm_monoid }
instance [linear_ordered_add_comm_group α] : linear_ordered_comm_group (multiplicative α) :=
{ ..multiplicative.linear_order,
..multiplicative.ordered_comm_group }
instance [linear_ordered_comm_group α] : linear_ordered_add_comm_group (additive α) :=
{ ..additive.linear_order,
..additive.ordered_add_comm_group }
end type_tags
|
import Lean
import IIT.Util
open Lean
open Elab
open Meta
namespace Lean
namespace Meta
def inversion (mVar : MVarId) (fVar : FVarId) (names : Array Name) :
MetaM (Array Name × Array FVarId × MVarId) :=
withMVarContext mVar do
checkNotAssigned mVar `inversion
let target ← getMVarType mVar
-- Get Prop sorted fields
let truesgs ← cases (← mkFreshExprMVar $ mkConst `True).mvarId! fVar
unless truesgs.size == 1 do throwTacticEx `inversion mVar "indices must determine constructor uniquely"
let trueMVar := truesgs[0].mvarId
let fields := truesgs[0].fields
let fields ← withMVarContext trueMVar do
let fields ← fields.mapM fun fv => do
let fv ← whnf fv
inferType fv
fields.filterM fun e => do return (← getLevel e).isZero
-- Prove fields
let mut mVar := mVar
let mut fieldFVars := #[]
let mut names := names
for (e : Expr) in fields do
let (names',fieldFVar, mVar') ← withMVarContext mVar do
let fieldMVar ← mkFreshExprMVar e
let fsgs ← cases fieldMVar.mvarId! fVar
assumption fsgs[0].mvarId
let name := if names.size > 0 then names[0] else Name.anonymous
let fMVar ← mkFreshExprMVar $ mkForall name BinderInfo.default e target
assignExprMVar mVar $ mkApp fMVar fieldMVar
let (fieldFVar, mVar') ← intro fMVar.mvarId! name
pure (names[1:], fieldFVar, mVar')
names := names'
mVar := mVar'
fieldFVars := fieldFVars.push fieldFVar
return (names[1:], fieldFVars, mVar)
end Meta
open Tactic
syntax (name := inversion) "inversion" (colGt ident)+ ("with" (colGt ident)+)? : tactic
@[tactic inversion] def elabInversion : Tactic
| `(tactic|inversion $fVars* with $names*) => do
let mut names := names.map getNameOfIdent'
for f in fVars do
let rnames ← withMainContext do
let fvarId ← getFVarId f
let (rnames, _, mVar) ← Meta.inversion (← getMainGoal) (← getFVarId f) names
replaceMainGoal [mVar]
pure rnames
names := rnames
| `(tactic|inversion $fVars*) => do
forEachVar fVars fun mVar fVar => do
let (_, _, mVar) ← Meta.inversion mVar fVar #[]
return mVar
| _ => throwUnsupportedSyntax
end Lean
/-
-- Examples
inductive Foo : Nat → Nat → Prop
| mk1 : Foo 5 3
| mk2 : (y : Foo 9 8) → (z : Foo 13 25) → Foo 1 2
example (n : Nat) (x : Foo 1 n) (A : Type) (p : (y : Foo 9 8) → A) : A := by
inversion x with y z
exact p y
example (n : Nat) (x : Foo (2 - 1) n) (A : Type) (p : (y : Foo 9 8) → A) : A := by
skip
inversion x
apply p
assumption
-/
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
This script is to conduct the main analysis of the UDN database:
* Download data using PIC SURE API
* Analysis of UDN database breaking down patients in adult and pediatric population
both diagnosed and undiagnosed
* Clustering of adult and pediatric network using phenotypic similarity with Louvain method
* Analysis of clusters from a statistical and a phenotypic standpoint
* Disease enrichment analysis using Orphanet database
example usage from CLI:
$ python Data_analysis_UDN.py --token personal_token
--json_file "path/to/file"
--genes_file "path/to/gene/info"
--variants_file "path/to/variant/info"
For help, run:
$ Data_analysis_UDN.py -h
"""
__author__ = "Josephine Yates"
__email__ = "[email protected]"
# # Data analysis of UDN patients
from UDN_utils import *
from UDN_utils_cluster_analysis import *
from UDN_utils_clustering import *
from UDN_utils_disease_enrichment import *
from UDN_utils_gene import *
from UDN_utils_HPO_analysis import *
from UDN_utils_parental_age import *
from UDN_utils_primary_symptoms import *
from UDN_utils_diagnostics import *
from UDN_utils_SVC import *
import argparse
import sys
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
import scipy.stats as st
import scipy.sparse as sp
from scipy.stats import fisher_exact
import networkx as nx
from community import community_louvain
from scipy.stats import kruskal
import seaborn as sns
import collections as collec
import os
import xml.etree.ElementTree as ET
import operator
import pandas
import csv
from scipy.stats import mannwhitneyu, chisquare
from sklearn.metrics.pairwise import pairwise_distances
from docx import Document
from docx.shared import Inches
import ast
import logging
import scipy.stats
from netneurotools import cluster
import PicSureHpdsLib
import PicSureClient
# ### Data
def download_data(resource, logger):
"""download patient information with the PIC SURE API
Parameters: resource: picsure api resource
logger (Logger): logger
Returns: phenotypes, status, genes, variants, primary_symptoms, clinical_site,
family_history, natal_history, demographics, diagnostics (pandas.core.DataFrame):
dataframes containing the information for UDN patients
"""
# download patient data from the server
phenotypes = get_data_df("\\04_Clinical symptoms and physical findings (in HPO, from PhenoTips)\\", resource)
status = get_data_df("\\13_Status\\", resource)
genes=get_data_df("\\11_Candidate genes\\", resource)
variants=get_data_df("\\12_Candidate variants\\", resource)
primary_symptoms=get_data_df("\\01_Primary symptom category reported by patient or caregiver\\", resource)
clinical_site=get_data_df('\\03_UDN Clinical Site\\', resource)
family_history=get_data_df("\\08_Family history (from PhenoTips)\\", resource)
natal_history=get_data_df("\\09_Prenatal and perinatal history (from PhenoTips)\\", resource)
demographics=get_data_df("\\00_Demographics\\", resource)
diagnostics=get_data_df('\\14_Disorders (in OMIM, from PhenoTips)\\', resource)
# select only the phenotypes, and not the prenatal phenotypes
columns_to_del=[]
for col in list(phenotypes.columns)[1:]:
if "Prenatal Phenotype" in col.split('\\'):
columns_to_del.append(col)
phenotypes=phenotypes.drop(columns_to_del,axis=1)
# information not in the gateway, example update
#demographics["\\00_Demographics\\Age at symptom onset in years\\"].loc[patient_not_in_gateway]=0.3
return phenotypes, status, genes, variants, primary_symptoms, clinical_site, \
family_history, natal_history, demographics, diagnostics
# ### HPO analysis
def phenotype_formatting(phenotypes, jsonfile, logger):
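    """get the phenotypes associated with each patient, removing negative
    terms known to be biased

    Parameters: phenotypes (pandas.core.DataFrame): HPO terms for UDN patients
                jsonfile: JSON file with the evaluation dates of the patients
                logger (Logger): logger
    Returns: patient_phen (dict): positively and negatively associated
             phenotypes for each patient
    """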
    # get a dictionary of patients with their positively and negatively associated phenotypes
patient_phen = get_patient_phenotypes(phenotypes)
    # retrieve the evaluation date of each patient (parsed from the JSON file),
    # keeping only the patients that have phenotype information
patient_eval_date = get_patient_eval_date(jsonfile, patient_phen)
    # delete the negative terms of patients evaluated before 2015
    # (cf. paper on a possible bias in the entry of negative terms before 2015)
patient_phen = patient_eval_before_2015(patient_eval_date, patient_phen)
# delete negative terms for patients with over 50 negative terms (cut-off for bias)
for pat in patient_phen:
if len(patient_phen[pat]["neg"])>=50:
patient_phen[pat]["neg"]=[]
return patient_phen
def patient_breakdown(patient_phen, demographics, status, phenotypes, logger):
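    """break patients down into adult/pediatric and diagnosed/undiagnosed
    groups, and log statistics on their HPO term counts

    Parameters: patient_phen (dict): phenotypes per patient
                demographics, status, phenotypes (pandas.core.DataFrame): patient data
                logger (Logger): logger
    Returns: phenotype dataframes and patient ID lists for each subpopulation,
             together with the HPO terms of all patients
    """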
# ### Breakdown into pediatrics, diagnosed or undiagnosed, and adults, diagnosed or undiagnosed
all_patients = list(patient_phen.keys())
age = "\\00_Demographics\\Age at symptom onset in years\\"
adult_patients = \
demographics[age][demographics[age]>=18.0].index.to_numpy()
pediatric_patients = \
demographics[age][demographics[age]<18.0].index.to_numpy()
# get the list of diagnosed and undiagnosed patients
list_diagnosed = status.loc[status["\\13_Status\\"] == "solved"].index.to_numpy()
list_undiagnosed = status.loc[status["\\13_Status\\"] != "solved"].index.to_numpy()
# get the list of diagnosed or undiagnosed patients that have at least one HPO term
list_diagnosed_phen=[patient for patient in list_diagnosed if(patient in patient_phen)]
list_undiagnosed_phen=[patient for patient in list_undiagnosed if(patient in patient_phen)]
# get the lists for breakdown (adults diag and undiag, pediatric diag and undiag)
list_adult_diagnosed=[patient for patient in patient_phen if \
(patient in adult_patients) and (patient in list_diagnosed_phen)]
list_adult_undiagnosed=[patient for patient in patient_phen if \
(patient in adult_patients) and (patient in list_undiagnosed_phen)]
list_pediatric_diagnosed=[patient for patient in patient_phen if \
(patient in pediatric_patients) and (patient in list_diagnosed_phen)]
list_pediatric_undiagnosed=[patient for patient in patient_phen if \
(patient in pediatric_patients) and (patient in list_undiagnosed_phen)]
HPO_terms = get_HPO_terms(patient_phen, all_patients)
HPO_list_pos, HPO_list_neg, HPO_list = get_HPO_count_list(patient_phen, all_patients)
HPO_list_pos_adult_diagnosed,HPO_list_neg_adult_diagnosed,HPO_list_adult_diagnosed = \
get_HPO_count_list(patient_phen, list_adult_diagnosed)
HPO_list_pos_adult_undiagnosed,HPO_list_neg_adult_undiagnosed,HPO_list_adult_undiagnosed = \
get_HPO_count_list(patient_phen, list_adult_undiagnosed)
HPO_list_pos_pediatric_diagnosed,HPO_list_neg_pediatric_diagnosed,HPO_list_pediatric_diagnosed = \
get_HPO_count_list(patient_phen, list_pediatric_diagnosed)
HPO_list_pos_pediatric_undiagnosed,HPO_list_neg_pediatric_undiagnosed,HPO_list_pediatric_undiagnosed = \
get_HPO_count_list(patient_phen, list_pediatric_undiagnosed)
# log the stats on the HPO counts
show_stats_HPO_counts(HPO_list,HPO_list_pos,HPO_list_neg, logger)
show_stats_HPO_counts(HPO_list_adult_diagnosed, \
HPO_list_pos_adult_diagnosed,HPO_list_neg_adult_diagnosed, logger)
show_stats_HPO_counts(HPO_list_adult_undiagnosed, \
HPO_list_pos_adult_undiagnosed,HPO_list_neg_adult_undiagnosed, logger)
show_stats_HPO_counts(HPO_list_pediatric_diagnosed, \
HPO_list_pos_pediatric_diagnosed,HPO_list_neg_pediatric_diagnosed, logger)
show_stats_HPO_counts(HPO_list_pediatric_undiagnosed,HPO_list_pos_pediatric_undiagnosed, \
HPO_list_neg_pediatric_undiagnosed, logger)
# plot HPO term distribution
logger.info("Plotting HPO distribution, for all, positive and negative terms.")
show_distrib_HPO(HPO_list,"Distribution of HPO terms")
show_distrib_HPO(HPO_list_neg,"Distribution of negative HPO terms")
show_distrib_HPO(HPO_list_pos,"Distribution of positive HPO terms")
# get the dataframes with the phenotypes of diagnosed or undiagnosed patients that have at least one HPO term
phenotypes_diagnosed=phenotypes.loc[list_diagnosed_phen]
phenotypes_undiagnosed=phenotypes.loc[list_undiagnosed_phen]
return phenotypes_diagnosed, phenotypes_undiagnosed, list_diagnosed_phen, list_undiagnosed_phen, \
list_adult_diagnosed, list_adult_undiagnosed, list_pediatric_diagnosed, list_pediatric_undiagnosed, \
adult_patients, pediatric_patients, HPO_terms, all_patients
# ### HPO large group stats
# get the list of large groups in the HPO hierarchy
def HPO_large_group_analysis(phenotypes, patient_phen, adult_patients, pediatric_patients, all_patients, logger):
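    """count the occurrences of the HPO large groups for all, adult and
    pediatric patients

    Parameters: phenotypes (pandas.core.DataFrame): HPO terms for UDN patients
                patient_phen (dict): phenotypes per patient
                adult_patients, pediatric_patients, all_patients: patient ID lists
                logger (Logger): logger
    Returns: list_phenotypes_unique (dict): large groups associated with each phenotype
             large_groups_HPO: large groups of the HPO hierarchy
    """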
large_groups_HPO = get_large_group_HPO(phenotypes)
# get the association between unique phenotypes and the large groups \
# they are related to in the HPO hierarchy
# list_phenotypes_unique is a dictionary with the phenotypes as keys, \
# and a list of associated large groups as value
list_phenotypes_unique = get_phen_to_lg(phenotypes)
logger.info("Getting count of large groups")
# get the HPO occurrences for all patients
large_groups_HPO_count=get_large_groups_HPO_count(list_phenotypes_unique, \
large_groups_HPO,patient_phen,all_patients)
logger.info("Total : neg : {}, pos : {}".format(np.sum(list(large_groups_HPO_count["neg"].values())), \
np.sum(list(large_groups_HPO_count["pos"].values()))))
# get the count of large groups for positive and negative terms of adult patients
large_groups_HPO_count_adult=get_large_groups_HPO_count(list_phenotypes_unique, \
large_groups_HPO,patient_phen,adult_patients)
logger.info("Total adult : neg : {}, pos : {}".format(np.sum(list(large_groups_HPO_count_adult["neg"].values())), \
np.sum(list(large_groups_HPO_count_adult["pos"].values()))))
# get the count of large groups for positive and negative terms of pediatric patients
large_groups_HPO_count_pediatric=get_large_groups_HPO_count(list_phenotypes_unique, \
large_groups_HPO,patient_phen,pediatric_patients)
logger.info("Total pediatric : neg : {}, pos : {}".format(np.sum(list(large_groups_HPO_count_pediatric["neg"].values())), \
np.sum(list(large_groups_HPO_count_pediatric["pos"].values()))))
return list_phenotypes_unique, large_groups_HPO
# ### Comparison HPO and Primary Symptoms
def HPO_and_PS(patient_phen, primary_symptoms, list_phenotypes_unique, logger):
# get the links between the primary symptoms and the HPO large groups
link_PS_HPO=get_link_between_PS_HPO(patient_phen,primary_symptoms,list_phenotypes_unique)
return link_PS_HPO
# ### Analysis of metadata
def stats_metadata(demographics, clinical_site, primary_symptoms,
family_history, natal_history,
patient_phen, list_adult_diagnosed, list_adult_undiagnosed,
list_pediatric_diagnosed, list_pediatric_undiagnosed,
list_diagnosed_phen, list_undiagnosed_phen,
logger):
    # get the dataframes for patients with at least one phenotype, for adult or pediatric,
    # diagnosed and undiagnosed
demographics = demographics.loc[list(patient_phen)]
demographics_adult_diagnosed = demographics.loc[list_adult_diagnosed]
demographics_adult_undiagnosed = demographics.loc[list_adult_undiagnosed]
demographics_pediatric_diagnosed = demographics.loc[list_pediatric_diagnosed]
demographics_pediatric_undiagnosed = demographics.loc[list_pediatric_undiagnosed]
clinical_site = clinical_site.loc[list(patient_phen)]
clinical_site_adult_diagnosed = clinical_site.loc[list_adult_diagnosed]
clinical_site_adult_undiagnosed = clinical_site.loc[list_adult_undiagnosed]
clinical_site_pediatric_diagnosed = clinical_site.loc[list_pediatric_diagnosed]
clinical_site_pediatric_undiagnosed = clinical_site.loc[list_pediatric_undiagnosed]
cscount_ad = clinical_site_adult_diagnosed.groupby('\\03_UDN Clinical Site\\')['Patient ID'].nunique()
cscount_and = clinical_site_adult_undiagnosed.groupby('\\03_UDN Clinical Site\\')['Patient ID'].nunique()
cscount_pd = clinical_site_pediatric_diagnosed.groupby('\\03_UDN Clinical Site\\')['Patient ID'].nunique()
cscount_pnd = clinical_site_pediatric_undiagnosed.groupby('\\03_UDN Clinical Site\\')['Patient ID'].nunique()
primary_symptoms = primary_symptoms.loc[list(patient_phen)]
primary_symptoms_ad = primary_symptoms.loc[list_adult_diagnosed]
primary_symptoms_and = primary_symptoms.loc[list_adult_undiagnosed]
primary_symptoms_pd = primary_symptoms.loc[list_pediatric_diagnosed]
primary_symptoms_pnd = primary_symptoms.loc[list_pediatric_undiagnosed]
pscount_ad = primary_symptoms_ad.groupby(
"\\01_Primary symptom category reported by patient or caregiver\\")['Patient ID'].nunique()
pscount_and = primary_symptoms_and.groupby(
"\\01_Primary symptom category reported by patient or caregiver\\")['Patient ID'].nunique()
pscount_pd = primary_symptoms_pd.groupby(
"\\01_Primary symptom category reported by patient or caregiver\\")['Patient ID'].nunique()
pscount_pnd = primary_symptoms_pnd.groupby(
"\\01_Primary symptom category reported by patient or caregiver\\")['Patient ID'].nunique()
family_history = family_history.loc[list(patient_phen)]
family_history_ad = family_history.loc[list_adult_diagnosed]
family_history_and = family_history.loc[list_adult_undiagnosed]
family_history_pd = family_history.loc[list_pediatric_diagnosed]
family_history_pnd = family_history.loc[list_pediatric_undiagnosed]
natal_history = natal_history.loc[list(patient_phen)]
natal_history = natal_history.replace(0, np.NaN)
natal_history_ad = natal_history.loc[list_adult_diagnosed]
natal_history_and = natal_history.loc[list_adult_undiagnosed]
natal_history_pd = natal_history.loc[list_pediatric_diagnosed]
natal_history_pnd = natal_history.loc[list_pediatric_undiagnosed]
# get the statistics for demographics for patient breakdown
logger.info("Getting the demographics statistics.")
logger.info("For all patients : {}".format(demographics.describe()))
logger.info("For adults diagnosed : {}".format(demographics_adult_diagnosed.describe()))
logger.info("For adults undiagnosed : {}".format(demographics_adult_undiagnosed.describe()))
logger.info("For pediatrics diagnosed: {}".format(demographics_pediatric_diagnosed.describe()))
logger.info("For pediatrics undiagnosed : {}".format(demographics_pediatric_undiagnosed.describe()))
logger.info("Getting the demographics statistics.")
# get the statistics for clinical sites for patient breakdown
logger.info("Getting the clinical site statistics.")
logger.info("For all patients : {}".format(clinical_site.describe()))
logger.info("For adults diagnosed : {}".format(clinical_site_adult_diagnosed.describe()))
logger.info("For adults undiagnosed : {}".format(clinical_site_adult_undiagnosed.describe()))
logger.info("For pediatrics diagnosed: {}".format(clinical_site_pediatric_diagnosed.describe()))
logger.info("For pediatrics undiagnosed : {}".format(clinical_site_pediatric_undiagnosed.describe()))
## can also be done for family and natal history
natalhist = "\\09_Prenatal and perinatal history (from PhenoTips)\\Maternal Age\\"
get_diff_parent_age(natal_history_ad, natal_history_pd, natalhist, logger)
natalhist = "\\09_Prenatal and perinatal history (from PhenoTips)\\Paternal Age\\"
get_diff_parent_age(natal_history_ad, natal_history_pd, natalhist, logger)
# show age distribution
show_age_distrib(demographics)
# show general statistics for demographics, clinical sites, etc...
ageeval, ageonset = "\\00_Demographics\\Age at UDN Evaluation (in years)\\", \
"\\00_Demographics\\Age at symptom onset in years\\"
logger.info("Age at UDN Evaluation, adult : {}".format(
mannwhitneyu(
np.array(demographics_adult_diagnosed[ageeval]),
np.array(demographics_adult_undiagnosed[ageeval]))
))
logger.info("Age at UDN Evaluation, pediatric : {}".format(
mannwhitneyu(
np.array(demographics_pediatric_diagnosed[ageeval]),
np.array(demographics_pediatric_undiagnosed[ageeval]))
))
logger.info("Age at symptom onset, adult : {}".format(
mannwhitneyu(
np.array(demographics_adult_diagnosed[ageonset]),
np.array(demographics_adult_undiagnosed[ageonset]))
))
logger.info("Age at symptom onset, pediatric : {}".format(
mannwhitneyu(
np.array(demographics_pediatric_diagnosed[ageonset]),
np.array(demographics_pediatric_undiagnosed[ageonset]))
))
logger.info("Primary symptoms, adults: {}".format(
mannwhitneyu(np.multiply(list(pscount_ad), 1/len(list_diagnosed_phen)*100),
np.multiply(list(pscount_and),1/len(list_undiagnosed_phen)*100))
))
logger.info("Primary symptoms, pediatric {}".format(
mannwhitneyu(np.multiply(list(pscount_pd), 1/len(list_diagnosed_phen)*100),
np.multiply(list(pscount_pnd),1/len(list_undiagnosed_phen)*100))
))
logger.info("Clinical sites, adults: {}".format(
mannwhitneyu(np.multiply(list(cscount_ad), 1/len(list_diagnosed_phen)*100),
np.multiply(list(cscount_and),1/len(list_undiagnosed_phen)*100))
))
logger.info("Clinical sites, pediatric {}".format(
mannwhitneyu(np.multiply(list(cscount_pd), 1/len(list_diagnosed_phen)*100),
np.multiply(list(cscount_pnd),1/len(list_undiagnosed_phen)*100))
))
logger.info("Female:male ratio adult diag vs undiag : {}".format(fisher_exact([[22,23],[102,85]])))
logger.info("Female:male ratio pediatric diag vs undiag : {}".format(fisher_exact([[113,81],[295,321]])))
return demographics, clinical_site, primary_symptoms, family_history, natal_history
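# Illustrative sketch, not part of the pipeline: every comparison in
# stats_metadata follows the same pattern -- collect a metadata value for
# the diagnosed and undiagnosed groups and run a Mann-Whitney U test.
# The arrays below are hypothetical, not UDN data.
def _example_mannwhitneyu_usage():
    example_diagnosed = np.array([34.0, 41.5, 29.0, 52.3, 45.1])
    example_undiagnosed = np.array([38.2, 47.0, 55.6, 33.9, 60.4, 42.8])
    # returns the U statistic and the p-value; a small p-value suggests
    # the two distributions differ
    return mannwhitneyu(example_diagnosed, example_undiagnosed)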
def parental_age_analysis(demographics, natal_history, logger):
matnatal, patnatal = \
"\\09_Prenatal and perinatal history (from PhenoTips)\\Maternal Age\\", \
"\\09_Prenatal and perinatal history (from PhenoTips)\\Paternal Age\\"
# mat_age is the maternal age without the NaN values
mat_age=np.array(natal_history[matnatal])
isnan_mat=np.isnan(mat_age)
mat_age=mat_age[[not(isnan_mat[i]) for i in range(len(isnan_mat))]]
# pat_age is the paternal age without the NaN values
pat_age=np.array(natal_history[patnatal])
isnan_pat=np.isnan(pat_age)
pat_age=pat_age[[not(isnan_pat[i]) for i in range(len(isnan_pat))]]
# distribution of paternal age in the US in 2009 (cf. article)
USA_dist_pat=[4.7,17.7,25.1,26.6,16.3,6.7,2.1,0.8]
tranches_pat=["0-19","20-24","25-29","30-34","35-39","40-44","44-50",">50"]
boundaries_pat=[[0,19],[20,24],[25,29],[30,34],[35,39],[40,44],[44,50],[50,100]]
# distribution of maternal age in the US in 2009 (cf. article)
USA_dist_mat = [3.1,6.9,24.4,28.2,23.1,11.5,2.8]
tranches_mat=["0-18","19","20-24","25-29","30-34","35-39",">39"]
boundaries_mat=[[0,18],[19,19],[20,24],[25,29],[30,34],[35,39],[39,100]]
# shows maternal age distribution as opposed to average US in 2009
dist_age_mat=distrib_age(mat_age,USA_dist_mat,tranches_mat,boundaries_mat,"Maternal")
    logger.info("Maternal age vs US 2009 distribution : {}".format(ttest_ind(dist_age_mat,USA_dist_mat)))
# shows paternal age distribution as opposed to average US in 2009
dist_age_pat=distrib_age(pat_age,USA_dist_pat,tranches_pat,boundaries_pat,"Paternal")
    logger.info("Paternal age vs US 2009 distribution : {}".format(ttest_ind(dist_age_pat,USA_dist_pat)))
# analysis for young versus old paternal age at birth
paternal_age_df=natal_history[patnatal].dropna()
patients_with_pat_age=list(paternal_age_df.index)
old_pat_age=[patients_with_pat_age[i] for i in range(len(patients_with_pat_age)) if \
paternal_age_df.loc[patients_with_pat_age[i]]>35]
young_pat_age=[patients_with_pat_age[i] for i in range(len(patients_with_pat_age)) if \
paternal_age_df.loc[patients_with_pat_age[i]]<=35]
# Mann Whitney U test for diff between young and old paternal age for Age at UDN evaluation
ageeval, ageonset = "\\00_Demographics\\Age at UDN Evaluation (in years)\\", \
"\\00_Demographics\\Age at symptom onset in years\\"
    logger.info("Age at UDN evaluation, young vs old paternal age : {}".format(
        mannwhitneyu(list(demographics[ageeval].loc[young_pat_age]),
                     list(demographics[ageeval].loc[old_pat_age]))))
    # Mann Whitney U test for diff between young and old paternal age for Age at symptom onset
    logger.info("Age at symptom onset, young vs old paternal age : {}".format(
        mannwhitneyu(list(demographics[ageonset].loc[young_pat_age]),
                     list(demographics[ageonset].loc[old_pat_age]))))
# ### Genomics
def load_genetic_data(variants_json, genes_json, status, patient_phen, logger):
variants=get_gene_data(patient_phen, variants_json,"Var")
genes=get_gene_data(patient_phen, genes_json,"Gene")
# get the list of patients that present a candidate gene or candidate variants
list_patient_genes=list(genes.keys())
list_patient_variants=list(variants.keys())
logger.info("Patients in both : {}".format(len([patient for patient in patient_phen if \
patient in list_patient_genes and patient in list_patient_variants])))
logger.info("Patients with only genes : {}".format(len([patient for patient in patient_phen if \
patient in list_patient_genes and not(patient in list_patient_variants)])))
logger.info("Patients with only variants: {}".format(len([patient for patient in patient_phen if \
not(patient in list_patient_genes) and patient in list_patient_variants])))
# count the number of solved cases for people with an indicated gene or an indicated variant
logger.info("Number of solved and unsolved cases for genes indicated : {}".format( \
collec.Counter(status.loc[list(genes.keys())]["\\13_Status\\"])))
logger.info("Number of solved and unsolved cases for variants indicated : {}".format( \
collec.Counter(status.loc[list(variants.keys())]["\\13_Status\\"])))
return genes, variants
def genomic_analysis(genes, variants, logger):
# get distribution for variant and gene data
variant_count=get_dist_genomic(variants,"Var")
logger.info("Variant distribution : {}".format(variant_count))
gene_count=get_dist_genomic(genes,"Gen")
logger.info("Gene distribution : {}".format(gene_count))
# plot distribution for variants and genes
plot_distribution_genomic_data(variants,"Count_dist_var_per_pat.png","variants")
plot_distribution_genomic_data(genes,"Count_genes_per_pat.png","genes")
# # Clustering
def perform_clustering(phenotypes, patient_phen, adult_patients, pediatric_patients, logger):
# get the index of unique phenotypes in the phenotype Dataframe
mat_phen_ind = get_unique_phenotype(phenotypes)
matrix_phen=phenotypes.drop("Patient ID",axis=1)
    # transform the phenotype dataframe to obtain a matrix of unique phenotypes,
    # with only patients that have been evaluated,
    # with 1 if the phenotype is positively present, 0 if negative or NaN
mat_phen_adult=matrix_phen.iloc[:,mat_phen_ind]
mat_phen_adult=mat_phen_adult.loc[adult_patients]
mat_phen_adult=mat_phen_adult.replace(to_replace={"Positive": 1, "Negative": 0, np.nan: 0})
mat_phen_pediatric=matrix_phen.iloc[:,mat_phen_ind]
mat_phen_pediatric=mat_phen_pediatric.loc[pediatric_patients]
mat_phen_pediatric=mat_phen_pediatric.replace(to_replace={"Positive": 1, "Negative": 0, np.nan: 0})
logger.info("Computing jaccard similarity matrix.")
# we compute the jaccard similarity matrix for the phenotypic matrix, adult patients
jac_sim_un_adult = 1 - pairwise_distances(mat_phen_adult, metric = "jaccard")
# we compute the jaccard similarity matrix for the phenotypic matrix, pediatric patients
jac_sim_un_pediatric = 1 - pairwise_distances(mat_phen_pediatric, metric = "jaccard")
# create networkx graphs for adult and pediatric network
logger.info("Creating networkx graphs (long step).")
# positions can be used to plot the graph in python environment
graph_un_adult,pos_un_adult=graph_of_patients_js(adult_patients,jac_sim_un_adult)
graph_un_pediatric,pos_un_pediatric=graph_of_patients_js(pediatric_patients,jac_sim_un_pediatric)
# writes the computed graph in a gml format, to be able to use Gephi to analyze it further
logger.info("Writing the graphs in a gml file for analysis with Gephi.")
nx.write_gml(graph_un_adult,"graph_un_adult.gml")
nx.write_gml(graph_un_pediatric,"graph_un_pediatric.gml")
    # perform clustering with the Louvain method, resolutions 2 (adult) and 1.2 (pediatric)
logger.info("Performing clustering (long step).")
consensus_matrix_ad, df_louvain_ad = get_consensus_matrix(adult_patients,graph_un_adult,2,10,logger)
consensus_clustering_labels_ad = cluster.find_consensus(consensus_matrix_ad.values, seed=1234)
consensus_matrix_ped,df_louvain_ped = get_consensus_matrix(pediatric_patients,graph_un_pediatric,1.2,10,logger)
consensus_clustering_labels_ped = cluster.find_consensus(consensus_matrix_ped.values, seed=1234)
clusters_ad = {cluster: [] for cluster in np.unique(consensus_clustering_labels_ad)}
for i,pat in enumerate(list(df_louvain_ad.index)):
clusters_ad[consensus_clustering_labels_ad[i]].append(pat)
clusters_ped = {cluster: [] for cluster in np.unique(consensus_clustering_labels_ped)}
for i,pat in enumerate(list(df_louvain_ped.index)):
clusters_ped[consensus_clustering_labels_ped[i]].append(pat)
# get indices of clusters for analysis and outliers
ind_groups_adult = [cluster for cluster in clusters_ad if len(clusters_ad[cluster])>5]
ind_groups_ped = [cluster for cluster in clusters_ped if len(clusters_ped[cluster])>5]
ind_outliers_adult = [cluster for cluster in clusters_ad if len(clusters_ad[cluster])<=3]
ind_outliers_ped = [cluster for cluster in clusters_ped if len(clusters_ped[cluster])<=3]
return clusters_ad, clusters_ped, ind_groups_adult, ind_groups_ped, \
ind_outliers_adult, ind_outliers_ped, consensus_clustering_labels_ad, consensus_clustering_labels_ped
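# Illustrative sketch, not part of the pipeline: how the Jaccard step in
# perform_clustering works on a toy binary matrix (3 patients x 4
# phenotypes, hypothetical data). pairwise_distances with metric="jaccard"
# returns dissimilarities, so 1 - distance recovers the similarity.
def _example_jaccard_similarity():
    toy = np.array([[1, 0, 1, 1],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]])
    sim = 1 - pairwise_distances(toy, metric="jaccard")
    # sim[0, 1] == 0.5 : patients 0 and 1 share 2 of the 4 phenotypes
    # present in either of them
    return sim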
# ### Cluster analysis
def odds_ratio_cluster(clusters_un_adult, clusters_un_pediatric, ind_groups_adult,
ind_groups_pediatric, status, logger):
OR_diag_adult,IC_adult=calculate_diag_OR(clusters_un_adult,ind_groups_adult,status)
OR_diag_pediatric,IC_pediatric=calculate_diag_OR(clusters_un_pediatric,ind_groups_pediatric,status)
return OR_diag_adult, IC_adult, OR_diag_pediatric, IC_pediatric
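# Illustrative sketch, not part of the pipeline: calculate_diag_OR is
# defined elsewhere. A plain diagnosis odds ratio for one cluster versus
# the rest of the network, with a Wald 95% confidence interval, can be
# computed from a 2x2 table as below (counts are hypothetical).
def _example_diag_odds_ratio():
    a, b = 20, 30    # in cluster: diagnosed, undiagnosed
    c, d = 50, 150   # out of cluster: diagnosed, undiagnosed
    odds_ratio = (a * d) / (b * c)
    se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
    ci = (np.exp(np.log(odds_ratio) - 1.96 * se_log_or),
          np.exp(np.log(odds_ratio) + 1.96 * se_log_or))
    return odds_ratio, ci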
def phenotype_clusters(clusters_un_adult, clusters_un_pediatric, patient_phen,
ind_groups_adult, ind_groups_pediatric,
ind_outliers_adult, ind_outliers_ped, HPO_terms, logger):
HPO_count_adult, avg_HPO_clusters_adult, CI_HPO_clusters_adult = \
get_HPO_count(clusters_un_adult,HPO_terms)
HPO_count_pediatric, avg_HPO_clusters_pediatric, CI_HPO_clusters_pediatric = \
get_HPO_count(clusters_un_pediatric,HPO_terms)
# get the ranked positively and negatively associated phenotypes for patients in each cluster (phen_ranked_pos
# and phen_ranked_neg)
phen_ranked_pos_adult,phen_ranked_neg_adult = \
get_phen_ranked(clusters_un_adult, patient_phen, ind_groups_adult)
phen_ranked_pos_pediatric,phen_ranked_neg_pediatric = \
get_phen_ranked(clusters_un_pediatric, patient_phen, ind_groups_pediatric)
# concatenate all outliers for adult and pediatric network
outlier_pat_adult,outlier_pat_ped = [],[]
for cl in ind_outliers_adult:
outlier_pat_adult+=clusters_un_adult[cl]
for cl in ind_outliers_ped:
outlier_pat_ped+=clusters_un_pediatric[cl]
# add the analysis of the outlier population
phen_ranked_outliers_adult = phenotype_enrichment_analysis(outlier_pat_adult,patient_phen,"pos")
phen_ranked_outliers_ped = phenotype_enrichment_analysis(outlier_pat_ped,patient_phen,"pos")
phen_ranked_pos_adult[len(list(phen_ranked_pos_adult.keys()))]=phen_ranked_outliers_adult
phen_ranked_pos_pediatric[len(list(phen_ranked_pos_pediatric.keys()))]=phen_ranked_outliers_ped
# heatmap for positive associations, adult
logger.info("Phenotype heatmap for adult population")
heatmap_phen(clusters_un_adult,phen_ranked_pos_adult,ind_groups_adult,
"adult",5,12,0,50,"heatmap_adult_clusters")
# heatmap for positive associations, pediatric
logger.info("Phenotype heatmap for pediatric population")
heatmap_phen(clusters_un_pediatric,phen_ranked_pos_pediatric,ind_groups_pediatric,
"pediatric",5,12,0,75,"heatmap_pediatric_clusters")
logger.info("Getting Kruskal Wallis test for HPO count.")
    # get the Kruskal Wallis for the distribution of HPO count between clusters // adult
kr_HPO_ad=kruskal(HPO_count_adult[0],HPO_count_adult[1],HPO_count_adult[2],HPO_count_adult[3])
logger.info("For adult network : {}".format(kr_HPO_ad))
    # get the Kruskal Wallis for the distribution of HPO count between clusters // pediatric
kr_HPO_ped=kruskal(HPO_count_pediatric[0],HPO_count_pediatric[1],HPO_count_pediatric[2],
HPO_count_pediatric[3],HPO_count_pediatric[4])
logger.info("For pediatric network : {}".format(kr_HPO_ped))
return phen_ranked_pos_adult, phen_ranked_neg_adult, phen_ranked_pos_pediatric, phen_ranked_neg_pediatric, \
avg_HPO_clusters_adult, CI_HPO_clusters_adult, avg_HPO_clusters_pediatric, CI_HPO_clusters_pediatric, \
kr_HPO_ad, kr_HPO_ped, outlier_pat_adult, outlier_pat_ped
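# Illustrative sketch, not part of the pipeline: get_phen_ranked and
# phenotype_enrichment_analysis are defined elsewhere. A minimal stand-in
# that ranks positive phenotypes by frequency within a cluster could look
# like this; the patient_phen layout assumed here (a dict mapping each
# patient to a dict with a "pos" list of phenotypes) is hypothetical.
def _example_rank_cluster_phenotypes(cluster_patients, patient_phen):
    counts = collec.Counter(
        phen for pat in cluster_patients for phen in patient_phen[pat]["pos"])
    # most frequent positive phenotypes first
    return counts.most_common()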
def metadata_clusters(clusters_un_adult, clusters_un_pediatric, patient_phen,
ind_groups_adult, ind_groups_pediatric, demographics, logger):
ageeval, ageonset = '\\00_Demographics\\Age at UDN Evaluation (in years)\\', \
'\\00_Demographics\\Age at symptom onset in years\\'
# get the demographics for the patient in the cluster
demographics_coll_adult=metadata_collection(clusters_un_adult,demographics)
demographics_coll_pediatric=metadata_collection(clusters_un_pediatric,demographics)
avg_onset_adult,CI_onset_adult = \
get_metadata_clusters(
ind_groups_adult,demographics_coll_adult,ageonset,
)
avg_UDN_eval_adult,CI_UDN_eval_adult = \
get_metadata_clusters(
ind_groups_adult,demographics_coll_adult,ageeval,
)
avg_onset_pediatric,CI_onset_pediatric = \
get_metadata_clusters(
ind_groups_pediatric,demographics_coll_pediatric,ageonset,
)
avg_UDN_eval_pediatric,CI_UDN_eval_pediatric = \
get_metadata_clusters(
ind_groups_pediatric,demographics_coll_pediatric,ageeval,
)
    # get distribution of gender
gender = '\\00_Demographics\\Gender\\'
gender_distrib_adult = get_distrib(gender,demographics_coll_adult)
gender_distrib_pediatric = get_distrib(gender,demographics_coll_pediatric)
logger.info("Getting chi square test to test independence of \
female:male ratio depending on cluster presence")
logger.info(chisquare(np.array([[61,46],[106,102],[21,19],[112,122],[97,106]]).T))
# KW test for Age at UDN evaluation // adult
kr_UDN_ad=get_stats_value(ageeval,"adult",ind_groups_adult,demographics_coll_adult, logger)
# KW test for Age at UDN evaluation // pediatric
kr_UDN_ped=get_stats_value(ageeval,"pediatric",ind_groups_pediatric,demographics_coll_pediatric, logger)
# KW test for Age at symptom onset // adult
kr_onset_ad=get_stats_value(ageonset,"adult",ind_groups_adult,demographics_coll_adult, logger)
# KW test for Age at symptom onset // pediatric
kr_onset_ped=get_stats_value(ageonset,"pediatric",ind_groups_pediatric,demographics_coll_pediatric, logger)
logger.info(
"Kruskal Wallis test for adult network, age at evaluation : {} and age at symptom onset : {}".format(
kr_UDN_ad, kr_onset_ad
)
)
logger.info(
"Kruskal Wallis test for adult network, age at evaluation : {} and age at symptom onset : {}".format(
kr_UDN_ped, kr_onset_ped
)
)
return avg_onset_adult, CI_onset_adult, avg_onset_pediatric, CI_onset_pediatric, \
avg_UDN_eval_adult, CI_UDN_eval_adult, avg_UDN_eval_pediatric, CI_UDN_eval_pediatric, \
kr_UDN_ad, kr_UDN_ped, kr_onset_ad, kr_onset_ped, gender_distrib_adult, gender_distrib_pediatric
# ## Creating tables
def word_tables(clusters_un_adult,ind_groups_adult,avg_HPO_clusters_adult,CI_HPO_clusters_adult,
gender_distrib_adult,OR_diag_adult,IC_adult,
avg_onset_adult,CI_onset_adult,avg_UDN_eval_adult,
CI_UDN_eval_adult, clusters_un_pediatric,ind_groups_ped,avg_HPO_clusters_pediatric,CI_HPO_clusters_pediatric,
gender_distrib_pediatric,OR_diag_pediatric,IC_pediatric,
avg_onset_pediatric,CI_onset_pediatric,avg_UDN_eval_pediatric,
CI_UDN_eval_pediatric, kr_HPO_ad,kr_HPO_ped,kr_UDN_ad,kr_UDN_ped,
kr_onset_ad,kr_onset_ped, logger):
create_table(ind_groups_adult,"A",clusters_un_adult,avg_HPO_clusters_adult,CI_HPO_clusters_adult,
gender_distrib_adult,OR_diag_adult,IC_adult,
avg_onset_adult,CI_onset_adult,avg_UDN_eval_adult,
CI_UDN_eval_adult,"adult_table")
create_table(ind_groups_pediatric,"P",clusters_un_pediatric,avg_HPO_clusters_pediatric,CI_HPO_clusters_pediatric,
gender_distrib_pediatric,OR_diag_pediatric,IC_pediatric,
avg_onset_pediatric,CI_onset_pediatric,avg_UDN_eval_pediatric,
CI_UDN_eval_pediatric,"pediatric_table")
create_stat_table(kr_HPO_ad,kr_HPO_ped,kr_UDN_ad,kr_UDN_ped,kr_onset_ad,kr_onset_ped,
"statistics adult and pediatric")
# # Diseases enrichment of clusters
def get_mappings(logger):
# get all the mapping needed from the HPO and Orphadata databases (with additional manual mappings)
all_diseases = get_all_diseases()
ORPHAmap, inverseORPHAmap = get_ORPHAmap()
mapping_HPO, syn_mapping = get_HPOmap()
Orphadata_HPO = get_Orphadata_HPO()
HPO_Orphadata = get_HPO_Orphadata(Orphadata_HPO)
return all_diseases, ORPHAmap, inverseORPHAmap, mapping_HPO, syn_mapping, Orphadata_HPO, HPO_Orphadata
def get_disease_enrichment_analysis(phen_ranked_pos_adult, phen_ranked_pos_pediatric,
mapping_HPO, syn_mapping, HPO_Orphadata,
clusters_un_adult, clusters_un_pediatric,
outlier_pat_adult, outlier_pat_ped, all_diseases, logger):
# get the diseases associated to the most representative phenotypes of the clusters
best_phenotypes_dis_analysis_adult = \
{i: phen_ranked_pos_adult[i][0][:5] for i in list(phen_ranked_pos_adult.keys())}
best_phenotypes_dis_analysis_ped = \
{i: phen_ranked_pos_pediatric[i][0][:5] for i in list(phen_ranked_pos_pediatric.keys())}
# transform into HPO ids
HPO_ids_cl_adult = get_HPO_from_cluster(best_phenotypes_dis_analysis_adult,mapping_HPO,syn_mapping)
HPO_ids_cl_ped = get_HPO_from_cluster(best_phenotypes_dis_analysis_ped,mapping_HPO,syn_mapping)
# get the diseases associated with the HPO ids
assoc_dis_adult = get_associated_diseases(HPO_ids_cl_adult,HPO_Orphadata)
assoc_dis_ped = get_associated_diseases(HPO_ids_cl_ped,HPO_Orphadata)
# concatenate at group level using the Orphadata hierarchy
assoc_groups_adult = get_associated_groups(assoc_dis_adult,all_diseases)
assoc_groups_ped = get_associated_groups(assoc_dis_ped,all_diseases)
# get the relative weight of the disease groups as their size over
# the total number of diseases in Orphadata
tot_weight = np.sum([len(all_diseases[group]) for group in all_diseases])
group_weight={group: len(all_diseases[group])/tot_weight for group in all_diseases}
    # get the weighted number of group associations
weighted_pop_adult = get_weighted_pop(assoc_groups_adult,group_weight)
weighted_pop_ped = get_weighted_pop(assoc_groups_ped,group_weight)
# plot as heatmap
heatmap_real(weighted_pop_adult,clusters_un_adult,len(outlier_pat_adult),
12,0,15,'heatmap_dis_adult')
heatmap_real(weighted_pop_ped,clusters_un_pediatric,len(outlier_pat_ped),
12,0,15,'heatmap_dis_ped')
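# Illustrative sketch, not part of the pipeline: get_weighted_pop is
# defined elsewhere. The weighting idea above -- dividing raw association
# counts by each group's share of all Orphadata diseases so that large
# disease groups do not dominate the heatmap -- reduces to something like
# this (the dict layouts are hypothetical).
def _example_weighted_group_counts(assoc_groups, group_weight):
    # assoc_groups: dict group -> raw number of disease associations
    # group_weight: dict group -> fraction of all diseases in that group
    return {g: assoc_groups.get(g, 0) / group_weight[g] for g in group_weight}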
def main(logger, token):
# ### Connect to the UDN data resource using the HPDS Adapter
# Connection to the PicSure Client w/ key
# token is the individual key given to connect to the resource
connection = PicSureClient.Client.connect("https://udn.hms.harvard.edu/picsure", token)
adapter = PicSureHpdsLib.Adapter(connection)
resource = adapter.useResource("8e8c7ed0-87ea-4342-b8da-f939e46bac26")
logger.info("Downloading data from server.")
phenotypes, status, genes, variants, primary_symptoms, clinical_site, \
family_history, natal_history, demographics, diagnostics = download_data(resource,logger)
logger.info("Getting the phenotypes for each patient.")
    patient_phen = phenotype_formatting(phenotypes, FLAGS.json_file, logger)
logger.info("Computing analysis for patient breakdown.")
phenotypes_diagnosed, phenotypes_undiagnosed, list_diagnosed_phen, list_undiagnosed_phen, \
list_adult_diagnosed, list_adult_undiagnosed, list_pediatric_diagnosed, list_pediatric_undiagnosed, \
adult_patients, pediatric_patients, HPO_terms, all_patients = \
patient_breakdown(patient_phen, demographics, status, phenotypes, logger)
logger.info("Computing analysis of HPO large groups.")
list_phenotypes_unique, large_groups_HPO = HPO_large_group_analysis(phenotypes,
patient_phen, adult_patients, pediatric_patients, all_patients, logger)
logger.info("Getting link between HPO terms and primary symptoms.")
link_PS_HPO = HPO_and_PS(patient_phen, primary_symptoms, list_phenotypes_unique, logger)
logger.info("Performing metadata analysis.")
demographics, clinical_site, primary_symptoms, family_history, natal_history = \
stats_metadata(demographics, clinical_site, primary_symptoms, family_history, natal_history,
patient_phen, list_adult_diagnosed, list_adult_undiagnosed, list_pediatric_diagnosed,
list_pediatric_undiagnosed, list_diagnosed_phen, list_undiagnosed_phen, logger)
logger.info("Performing parental age analysis.")
parental_age_analysis(demographics, natal_history, logger)
logger.info("Downloading genetic information.")
    genes, variants = load_genetic_data(FLAGS.variants_file, FLAGS.genes_file, status, patient_phen, logger)
logger.info("Performing genomic analysis.")
genomic_analysis(genes, variants, logger)
logger.info("Performing clustering on phenotypic data.")
clusters_un_adult, clusters_un_pediatric, ind_groups_adult, ind_groups_ped, \
ind_outliers_adult, ind_outliers_ped, consensus_clustering_labels_ad, \
consensus_clustering_labels_ped = perform_clustering(phenotypes, patient_phen,
adult_patients, pediatric_patients, logger)
logger.info("Starting cluster analysis.")
# odds ratio
OR_diag_adult, IC_adult, OR_diag_pediatric, IC_pediatric = odds_ratio_cluster(clusters_un_adult,
clusters_un_pediatric, ind_groups_adult, ind_groups_ped, status, logger)
# best phenotypes
phen_ranked_pos_adult, phen_ranked_neg_adult, phen_ranked_pos_pediatric, phen_ranked_neg_pediatric, \
avg_HPO_clusters_adult, CI_HPO_clusters_adult, avg_HPO_clusters_pediatric, CI_HPO_clusters_pediatric, \
kr_HPO_ad, kr_HPO_ped, outlier_pat_adult, outlier_pat_ped = \
phenotype_clusters(clusters_un_adult, clusters_un_pediatric, patient_phen, ind_groups_adult,
ind_groups_ped, ind_outliers_adult, ind_outliers_ped, HPO_terms, logger)
# metadata
avg_onset_adult, CI_onset_adult, avg_onset_pediatric, CI_onset_pediatric, \
avg_UDN_eval_adult, CI_UDN_eval_adult, avg_UDN_eval_pediatric, CI_UDN_eval_pediatric, \
kr_UDN_ad, kr_UDN_ped, kr_onset_ad, kr_onset_ped, \
gender_distrib_adult, gender_distrib_pediatric = \
metadata_clusters(clusters_un_adult, clusters_un_pediatric, patient_phen,
ind_groups_adult, ind_groups_ped, demographics, logger)
logger.info("Saving the analysis tables in word documents.")
word_tables(clusters_un_adult, ind_groups_adult, avg_HPO_clusters_adult, CI_HPO_clusters_adult, gender_distrib_adult,
OR_diag_adult, IC_adult, avg_onset_adult, CI_onset_adult, avg_UDN_eval_adult,
CI_UDN_eval_adult, clusters_un_pediatric, ind_groups_ped, avg_HPO_clusters_pediatric,
CI_HPO_clusters_pediatric, gender_distrib_pediatric, OR_diag_pediatric,
                IC_pediatric, avg_onset_pediatric, CI_onset_pediatric, avg_UDN_eval_pediatric,
CI_UDN_eval_pediatric, kr_HPO_ad, kr_HPO_ped, kr_UDN_ad, kr_UDN_ped,
kr_onset_ad, kr_onset_ped, logger)
logger.info("Get mappings from the HPO and the Orphadata databases.")
all_diseases, ORPHAmap, inverseORPHAmap, mapping_HPO, syn_mapping, Orphadata_HPO, HPO_Orphadata = \
get_mappings(logger)
logger.info("Performing diseases enrichment analysis.")
get_disease_enrichment_analysis(phen_ranked_pos_adult, phen_ranked_pos_pediatric, mapping_HPO,
syn_mapping, HPO_Orphadata, clusters_un_adult, clusters_un_pediatric,
outlier_pat_adult, outlier_pat_ped, all_diseases, logger)
logger.info("Finding link to diagnostics.")
diagnostics_ad = get_diagnoses(clusters_un_adult,diagnostics,logger)
diagnostics_ped = get_diagnoses(clusters_un_pediatric,diagnostics,logger)
log_diagnoses(diagnostics_ad,logger)
log_diagnoses(diagnostics_ped,logger)
logger.info("Performing SVC prediction.")
diseases_binary_HPO,mat_phen_pediatric = get_diseases_binary(phenotypes,mapping_HPO,syn_mapping,logger)
svc,y_pred = SVC_pred(ind_groups_ped,clusters_un_pediatric,mat_phen_pediatric,
diseases_binary_HPO,consensus_clustering_labels_ped,logger)
svc_hpo,y_pred_hpo = SVC_pred_hpo(ind_groups_ped,clusters_un_pediatric,mat_phen_pediatric,
diseases_binary_HPO,consensus_clustering_labels_ped,logger)
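# Illustrative sketch, not part of the pipeline: SVC_pred and SVC_pred_hpo
# are defined elsewhere. The core of training a classifier that maps binary
# phenotype vectors to consensus cluster labels might look like this
# (all names and parameters hypothetical).
def _example_svc(mat_phen, labels):
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    X_train, X_test, y_train, y_test = train_test_split(
        mat_phen, labels, test_size=0.2, random_state=0)
    clf = SVC(kernel="linear").fit(X_train, y_train)
    # held-out accuracy gives a rough sense of how separable the clusters are
    return clf, clf.predict(X_test), clf.score(X_test, y_test)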
if __name__=="__main__":
parser = argparse.ArgumentParser(
description="CLI args for path to files, individual key for resource"
)
parser.add_argument(
"--token",
"-t",
type=str,
required=True,
help="personal token granted to access resource",
)
parser.add_argument(
"--json_file",
"-json",
type=str,
required=True,
help="path to json file containing UDN patient information (might not be necessary if resource updated)",
)
parser.add_argument(
"--genes_file",
"-genes",
type=str,
required=True,
help="path to json file containing gene information (might not be necessary if resource updated)",
)
parser.add_argument(
"--variants_file",
"-variants",
type=str,
required=True,
help="path to json file containing variants information (might not be necessary if resource updated)",
)
FLAGS = parser.parse_args()
    # configure logging: debug-level messages go to the log file
logging.basicConfig(level=logging.DEBUG, filename="log_UDN_analysis.log")
logger = logging.getLogger("UDN_ANALYSIS")
# Create a second stream handler for logging to `stderr`, but set
# its log level to be a little bit smaller such that we only have
# informative messages
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.INFO)
# Use the default format; since we do not adjust the logger before,
# this is all right.
stream_handler.setFormatter(
logging.Formatter(
"[%(asctime)s] %(levelname)s [%(name)s.%(funcName)s:%(lineno)d] %(message)s"
)
)
logger.addHandler(stream_handler)
logger.info("Usage:\n{0}\n".format(" ".join([x for x in sys.argv])))
main(logger, FLAGS.token)
|
/-
Copyright (c) 2021 David Renshaw. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: David Renshaw
-/
import algebra.geom_sum
import data.rat.defs
import data.real.basic
import tactic.positivity
/-!
# IMO 2013 Q5
Let `ℚ>₀` be the positive rational numbers. Let `f : ℚ>₀ → ℝ` be a function satisfying
the conditions
1. `f(x) * f(y) ≥ f(x * y)`
2. `f(x + y) ≥ f(x) + f(y)`
for all `x, y ∈ ℚ>₀`. Given that `f(a) = a` for some rational `a > 1`, prove that `f(x) = x` for
all `x ∈ ℚ>₀`.
# Solution
We provide a direct translation of the solution found in
https://www.imo-official.org/problems/IMO2013SL.pdf
-/
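/- Outline of the formalization below: from `H1` and `H2` one derives
`H3` (`↑n * f x ≤ f (n * x)` for positive naturals `n`), then
`H4 : (n : ℝ) ≤ f n`, and finally `H5 : (x : ℝ) ≤ f x` for every rational
`x > 1`, using `le_of_all_pow_lt_succ'`. Powers `a ^ n` of the fixed point
are again fixed points, and squeezing any `x > 1` between `H5` and a large
power `a ^ N` gives `f x = x`; the case `0 < x ≤ 1` follows by writing
`x = (2 * x.num) / (2 * x.denom)` and using that `f` commutes with
multiplication by positive naturals. -/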
open_locale big_operators
lemma le_of_all_pow_lt_succ {x y : ℝ} (hx : 1 < x) (hy : 1 < y)
(h : ∀ n : ℕ, 0 < n → x^n - 1 < y^n) :
x ≤ y :=
begin
by_contra' hxy,
have hxmy : 0 < x - y := sub_pos.mpr hxy,
have hn : ∀ n : ℕ, 0 < n → (x - y) * (n : ℝ) ≤ x^n - y^n,
{ intros n hn,
have hterm : ∀ i : ℕ, i ∈ finset.range n → 1 ≤ x^i * y^(n - 1 - i),
{ intros i hi,
have hx' : 1 ≤ x ^ i := one_le_pow_of_one_le hx.le i,
have hy' : 1 ≤ y ^ (n - 1 - i) := one_le_pow_of_one_le hy.le (n - 1 - i),
calc 1 ≤ x^i : hx'
... = x^i * 1 : (mul_one _).symm
... ≤ x^i * y^(n-1-i) : mul_le_mul_of_nonneg_left hy' (zero_le_one.trans hx') },
calc (x - y) * (n : ℝ)
= (n : ℝ) * (x - y) : mul_comm _ _
... = (∑ (i : ℕ) in finset.range n, (1 : ℝ)) * (x - y) :
by simp only [mul_one, finset.sum_const, nsmul_eq_mul,
finset.card_range]
... ≤ (∑ (i : ℕ) in finset.range n, x ^ i * y ^ (n - 1 - i)) * (x-y) :
(mul_le_mul_right hxmy).mpr (finset.sum_le_sum hterm)
... = x^n - y^n : geom_sum₂_mul x y n, },
-- Choose n larger than 1 / (x - y).
obtain ⟨N, hN⟩ := exists_nat_gt (1 / (x - y)),
have hNp : 0 < N, { exact_mod_cast (one_div_pos.mpr hxmy).trans hN },
have := calc 1 = (x - y) * (1 / (x - y)) : by field_simp [ne_of_gt hxmy]
... < (x - y) * N : (mul_lt_mul_left hxmy).mpr hN
... ≤ x^N - y^N : hn N hNp,
linarith [h N hNp]
end
/-- Like `le_of_all_pow_lt_succ`, but with a weaker assumption for `y`. -/
lemma le_of_all_pow_lt_succ' {x y : ℝ} (hx : 1 < x) (hy : 0 < y)
(h : ∀ n : ℕ, 0 < n → x^n - 1 < y^n) :
x ≤ y :=
begin
refine le_of_all_pow_lt_succ hx _ h,
by_contra' hy'' : y ≤ 1,
-- Then there exists y' such that 0 < y ≤ 1 < y' < x.
let y' := (x + 1) / 2,
have h_y'_lt_x : y' < x,
{ have hh : (x + 1)/2 < (x * 2) / 2, { linarith },
calc y' < (x * 2) / 2 : hh
... = x : by field_simp },
have h1_lt_y' : 1 < y',
{ have hh' : 1 * 2 / 2 < (x + 1) / 2, { linarith },
calc 1 = 1 * 2 / 2 : by field_simp
... < y' : hh' },
have h_y_lt_y' : y < y' := hy''.trans_lt h1_lt_y',
have hh : ∀ n, 0 < n → x^n - 1 < y'^n,
{ intros n hn,
calc x^n - 1 < y^n : h n hn
... ≤ y'^n : pow_le_pow_of_le_left hy.le h_y_lt_y'.le n },
exact h_y'_lt_x.not_le (le_of_all_pow_lt_succ hx h1_lt_y' hh)
end
lemma f_pos_of_pos {f : ℚ → ℝ} {q : ℚ} (hq : 0 < q)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n) :
0 < f q :=
begin
have num_pos : 0 < q.num := rat.num_pos_iff_pos.mpr hq,
have hmul_pos :=
calc (0 : ℝ) < q.num : int.cast_pos.mpr num_pos
... = ((q.num.nat_abs : ℤ) : ℝ) : congr_arg coe (int.nat_abs_of_nonneg num_pos.le).symm
... ≤ f q.num.nat_abs : H4 q.num.nat_abs
(int.nat_abs_pos_of_ne_zero num_pos.ne')
... = f q.num : by rw [nat.cast_nat_abs, abs_of_nonneg num_pos.le]
... = f (q * q.denom) : by rw ←rat.mul_denom_eq_num
... ≤ f q * f q.denom : H1 q q.denom hq (nat.cast_pos.mpr q.pos),
have h_f_denom_pos :=
calc (0 : ℝ) < q.denom : nat.cast_pos.mpr q.pos
... ≤ f q.denom : H4 q.denom q.pos,
exact pos_of_mul_pos_left hmul_pos h_f_denom_pos.le,
end
lemma fx_gt_xm1 {f : ℚ → ℝ} {x : ℚ} (hx : 1 ≤ x)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H2 : ∀ x y, 0 < x → 0 < y → f x + f y ≤ f (x + y))
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n) :
(x - 1 : ℝ) < f x :=
begin
have hx0 :=
calc (x - 1 : ℝ)
< ⌊x⌋₊ : by exact_mod_cast nat.sub_one_lt_floor x
... ≤ f ⌊x⌋₊ : H4 _ (nat.floor_pos.2 hx),
obtain h_eq | h_lt := (nat.floor_le $ zero_le_one.trans hx).eq_or_lt,
{ rwa h_eq at hx0 },
calc (x - 1 : ℝ) < f ⌊x⌋₊ : hx0
... < f (x - ⌊x⌋₊) + f ⌊x⌋₊ : lt_add_of_pos_left _ (f_pos_of_pos (sub_pos.mpr h_lt) H1 H4)
... ≤ f (x - ⌊x⌋₊ + ⌊x⌋₊) : H2 _ _ (sub_pos.mpr h_lt) (nat.cast_pos.2 (nat.floor_pos.2 hx))
... = f x : by rw sub_add_cancel
end
lemma pow_f_le_f_pow {f : ℚ → ℝ} {n : ℕ} (hn : 0 < n) {x : ℚ} (hx : 1 < x)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n) :
f (x^n) ≤ (f x)^n :=
begin
induction n with pn hpn,
{ exfalso, exact nat.lt_asymm hn hn },
cases pn,
{ simp only [pow_one] },
have hpn' := hpn pn.succ_pos,
rw [pow_succ' x (pn + 1), pow_succ' (f x) (pn + 1)],
have hxp : 0 < x := by positivity,
calc f ((x ^ (pn+1)) * x)
≤ f (x ^ (pn+1)) * f x : H1 (x ^ (pn+1)) x (pow_pos hxp (pn+1)) hxp
... ≤ (f x) ^ (pn+1) * f x : (mul_le_mul_right (f_pos_of_pos hxp H1 H4)).mpr hpn'
end
lemma fixed_point_of_pos_nat_pow {f : ℚ → ℝ} {n : ℕ} (hn : 0 < n)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n)
(H5 : ∀ x : ℚ, 1 < x → (x : ℝ) ≤ f x)
{a : ℚ} (ha1 : 1 < a) (hae : f a = a) :
f (a^n) = a^n :=
begin
have hh0 : (a : ℝ) ^ n ≤ f (a ^ n),
{ exact_mod_cast H5 (a ^ n) (one_lt_pow ha1 hn.ne') },
have hh1 := calc f (a^n) ≤ (f a)^n : pow_f_le_f_pow hn ha1 H1 H4
... = (a : ℝ)^n : by rw ← hae,
exact hh1.antisymm hh0
end
lemma fixed_point_of_gt_1 {f : ℚ → ℝ} {x : ℚ} (hx : 1 < x)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H2 : ∀ x y, 0 < x → 0 < y → f x + f y ≤ f (x + y))
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n)
(H5 : ∀ x : ℚ, 1 < x → (x : ℝ) ≤ f x)
{a : ℚ} (ha1 : 1 < a) (hae : f a = a) :
f x = x :=
begin
-- Choose n such that 1 + x < a^n.
obtain ⟨N, hN⟩ := pow_unbounded_of_one_lt (1 + x) ha1,
have h_big_enough : (1:ℚ) < a^N - x := lt_sub_iff_add_lt.mpr hN,
have h1 := calc (x : ℝ) + ((a^N - x) : ℚ)
≤ f x + ((a^N - x) : ℚ) : add_le_add_right (H5 x hx) _
... ≤ f x + f (a^N - x) : add_le_add_left (H5 _ h_big_enough) _,
have hxp : 0 < x := by positivity,
have hNp : 0 < N,
{ by_contra' H, rw [le_zero_iff.mp H] at hN, linarith },
have h2 := calc f x + f (a^N - x)
≤ f (x + (a^N - x)) : H2 x (a^N - x) hxp (zero_lt_one.trans h_big_enough)
... = f (a^N) : by ring_nf
... = a^N : fixed_point_of_pos_nat_pow hNp H1 H4 H5 ha1 hae
... = x + (a^N - x) : by ring,
have heq := h1.antisymm (by exact_mod_cast h2),
linarith [H5 x hx, H5 _ h_big_enough]
end
theorem imo2013_q5
(f : ℚ → ℝ)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H2 : ∀ x y, 0 < x → 0 < y → f x + f y ≤ f (x + y))
(H_fixed_point : ∃ a, 1 < a ∧ f a = a) :
∀ x, 0 < x → f x = x :=
begin
obtain ⟨a, ha1, hae⟩ := H_fixed_point,
have H3 : ∀ x : ℚ, 0 < x → ∀ n : ℕ, 0 < n → ↑n * f x ≤ f (n * x),
{ intros x hx n hn,
cases n,
{ exact (lt_irrefl 0 hn).elim },
induction n with pn hpn,
{ simp only [one_mul, nat.cast_one] },
calc ↑(pn + 2) * f x
= (↑pn + 1 + 1) * f x : by norm_cast
... = ((pn : ℝ) + 1) * f x + 1 * f x : add_mul (↑pn + 1) 1 (f x)
... = (↑pn + 1) * f x + f x : by rw one_mul
... ≤ f ((↑pn.succ) * x) + f x : by exact_mod_cast add_le_add_right
(hpn pn.succ_pos) (f x)
... ≤ f ((↑pn + 1) * x + x) : by exact_mod_cast H2 _ _
(mul_pos pn.cast_add_one_pos hx) hx
... = f ((↑pn + 1) * x + 1 * x) : by rw one_mul
... = f ((↑pn + 1 + 1) * x) : congr_arg f (add_mul (↑pn + 1) 1 x).symm
... = f (↑(pn + 2) * x) : by norm_cast },
have H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n,
{ intros n hn,
have hf1 : 1 ≤ f 1,
{ have a_pos : (0 : ℝ) < a := rat.cast_pos.mpr (zero_lt_one.trans ha1),
suffices : ↑a * 1 ≤ ↑a * f 1, from (mul_le_mul_left a_pos).mp this,
calc ↑a * 1 = ↑a : mul_one ↑a
... = f a : hae.symm
... = f (a * 1) : by rw mul_one
... ≤ f a * f 1 : (H1 a 1) (zero_lt_one.trans ha1) zero_lt_one
... = ↑a * f 1 : by rw hae },
calc (n : ℝ) = (n : ℝ) * 1 : (mul_one _).symm
... ≤ (n : ℝ) * f 1 : mul_le_mul_of_nonneg_left hf1 (nat.cast_nonneg _)
... ≤ f (n * 1) : H3 1 zero_lt_one n hn
... = f n : by rw mul_one },
have H5 : ∀ x : ℚ, 1 < x → (x : ℝ) ≤ f x,
{ intros x hx,
have hxnm1 : ∀ n : ℕ, 0 < n → (x : ℝ)^n - 1 < (f x)^n,
{ intros n hn,
calc (x : ℝ)^n - 1 < f (x^n) : by exact_mod_cast fx_gt_xm1 (one_le_pow_of_one_le hx.le n)
H1 H2 H4
... ≤ (f x)^n : pow_f_le_f_pow hn hx H1 H4 },
have hx' : 1 < (x : ℝ) := by exact_mod_cast hx,
have hxp : 0 < x := by positivity,
exact le_of_all_pow_lt_succ' hx' (f_pos_of_pos hxp H1 H4) hxnm1 },
have h_f_commutes_with_pos_nat_mul : ∀ n : ℕ, 0 < n → ∀ x : ℚ, 0 < x → f (n * x) = n * f x,
{ intros n hn x hx,
have h2 : f (n * x) ≤ n * f x,
{ cases n,
{ exfalso, exact nat.lt_asymm hn hn },
cases n,
{ simp only [one_mul, nat.cast_one] },
have hfneq : f (n.succ.succ) = n.succ.succ,
{ have := fixed_point_of_gt_1
(nat.one_lt_cast.mpr (nat.succ_lt_succ n.succ_pos)) H1 H2 H4 H5 ha1 hae,
rwa (rat.cast_coe_nat n.succ.succ) at this },
rw ← hfneq,
exact H1 (n.succ.succ : ℚ) x (nat.cast_pos.mpr hn) hx },
exact h2.antisymm (H3 x hx n hn) },
-- For the final calculation, we expand x as (2*x.num) / (2*x.denom), because
-- we need the top of the fraction to be strictly greater than 1 in order
-- to apply fixed_point_of_gt_1.
intros x hx,
let x2denom := 2 * x.denom,
let x2num := 2 * x.num,
have hx2pos := calc 0 < x.denom : x.pos
... < x.denom + x.denom : lt_add_of_pos_left x.denom x.pos
... = 2 * x.denom : by ring,
have hxcnez : (x.denom : ℚ) ≠ (0 : ℚ) := ne_of_gt (nat.cast_pos.mpr x.pos),
have hx2cnezr : (x2denom : ℝ) ≠ (0 : ℝ) := nat.cast_ne_zero.mpr (ne_of_gt hx2pos),
have hrat_expand2 := calc x = x.num / x.denom : by exact_mod_cast rat.num_denom.symm
... = x2num / x2denom : by { field_simp [-rat.num_div_denom], linarith },
have h_denom_times_fx :=
calc (x2denom : ℝ) * f x = f (x2denom * x) : (h_f_commutes_with_pos_nat_mul
x2denom hx2pos x hx).symm
... = f (x2denom * (x2num / x2denom)) : by rw hrat_expand2
... = f x2num : by { congr, field_simp, ring },
have h_fx2num_fixed : f x2num = x2num,
{ have hx2num_gt_one : (1 : ℚ) < (2 * x.num : ℤ),
{ norm_cast, linarith [rat.num_pos_iff_pos.mpr hx] },
have hh := fixed_point_of_gt_1 hx2num_gt_one H1 H2 H4 H5 ha1 hae,
rwa (rat.cast_coe_int x2num) at hh },
calc f x = f x * 1 : (mul_one (f x)).symm
... = f x * (x2denom / x2denom) : by rw ←(div_self hx2cnezr)
... = (f x * x2denom) / x2denom : mul_div_assoc' (f x) _ _
... = (x2denom * f x) / x2denom : by rw mul_comm
... = f x2num / x2denom : by rw h_denom_times_fx
... = x2num / x2denom : by rw h_fx2num_fixed
... = (((x2num : ℚ) / (x2denom : ℚ) : ℚ) : ℝ) : by norm_cast
... = x : by rw ←hrat_expand2
end
|
[STATEMENT]
lemma cFalse: "A \<turnstile> {\<lambda>s. False} c {Q}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. A \<turnstile> {\<lambda>s. False} c {Q}
[PROOF STEP]
apply (rule cThin)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. ?A' \<turnstile> {\<lambda>s. False} c {Q}
2. ?A' \<subseteq> A
[PROOF STEP]
apply (rule hoare_relative_complete)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. \<Turnstile> {\<lambda>s. False} c {Q}
2. {} \<subseteq> A
[PROOF STEP]
apply (auto simp add: valid_def)
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done |
{-# LANGUAGE GADTs, DataKinds, PolyKinds #-}
{-# LANGUAGE FlexibleContexts #-}
module TensorDAG where
import Data.Vinyl
import Data.Vinyl.Functor
import Data.Singletons
import Numeric.LinearAlgebra (Numeric)
import TensorHMatrix
import DAGIO
import VarArgs
-- Lift a unary gradient function into the HList-based form used by DAG nodes.
makeGrad1 :: (a -> b -> a) -> HList '[a] -> Identity b -> HList '[a]
makeGrad1 g (Identity a :& RNil) (Identity b) = Identity (g a b) :& RNil
-- Lift a binary gradient function (both inputs plus the upstream value)
-- into HList form, producing gradients for both inputs.
makeGrad2 :: (a -> b -> c -> (a, b)) -> HList '[a, b] -> Identity c -> HList '[a, b]
makeGrad2 g (Identity a :& Identity b :& RNil) (Identity c) = Identity a' :& Identity b' :& RNil
  where (a', b') = g a b c
-- Dot product node with gradients for both vector inputs.
makeDot :: Numeric a => Node (Tensor '[n] a) -> Node (Tensor '[n] a) -> IO (Node a)
makeDot = makeNode (uncurry2 dot, makeGrad2 gradDot)
-- Matrix-vector product node.
makeMV :: (SingI n, IntegralN n, Usable a) => Node (Tensor '[n, m] a) -> Node (Tensor '[m] a) -> IO (Node (Tensor '[n] a))
makeMV = makeNode (uncurry2 mv, makeGrad2 gradMV)
-- Matrix-matrix product node.
makeMM :: (SingI n, IntegralN n, SingI k, Usable a) => Node (Tensor '[n, m] a) -> Node (Tensor '[m, k] a) -> IO (Node (Tensor '[n, k] a))
makeMM = makeNode (uncurry2 mm, makeGrad2 gradMM)
-- Selection node for component i; the type is left to inference.
makeSelect i = makeNode (uncurry1 $ select i, makeGrad1 $ gradSelect i)
|
/-
Copyright (c) 2022 David Loeffler. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: David Loeffler
! This file was ported from Lean 3 source module analysis.fourier.riemann_lebesgue_lemma
! leanprover-community/mathlib commit 3353f3371120058977ce1e20bf7fc8986c0fb042
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.MeasureTheory.Function.ContinuousMapDense
import Mathbin.MeasureTheory.Integral.IntegralEqImproper
import Mathbin.MeasureTheory.Group.Integration
import Mathbin.Topology.ContinuousFunction.ZeroAtInfty
import Mathbin.Analysis.Fourier.FourierTransform
/-!
# The Riemann-Lebesgue Lemma
In this file we prove a weak form of the Riemann-Lebesgue lemma, stating that for any
compactly-supported continuous function `f` on `ℝ` (valued in some complete normed space `E`), the
integral
`∫ (x : ℝ), exp (↑(t * x) * I) • f x`
tends to zero as `t → ∞`. (The actual lemma is that this holds for all `L¹` functions `f`, which
follows from the result proved here together with the fact that continuous, compactly-supported
functions are dense in `L¹(ℝ)`, which will be proved in a future iteration.)
## Main results
- `tendsto_integral_mul_exp_at_top_of_continuous_compact_support`: the Riemann-Lebesgue lemma for
continuous compactly-supported functions on `ℝ`.
-/
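/- Proof idea: for `t ≠ 0`, translating by half a period negates the
integrand, since
`exp (↑(t * (x + π / t)) * I) = exp (↑(t * x) * I) * exp (π * I) = -exp (↑(t * x) * I)`.
Averaging the original integral with its translate gives
`∫ x, exp (↑(t * x) * I) • f x = (1 / 2) • ∫ x, exp (↑(t * x) * I) • (f x - f (x + π / t))`,
and for large `t` the difference `f x - f (x + π / t)` is uniformly small on the
compact support of `f`, so the integral tends to zero. -/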
open MeasureTheory Filter Complex Set
open Filter Topology Real ENNReal
section ContinuousCompactSupport
variable {E : Type _} [NormedAddCommGroup E] [NormedSpace ℂ E] {f : ℝ → E}
/-- The integrand in the Riemann-Lebesgue lemma is integrable. -/
theorem fourierIntegrandIntegrable (hf : Integrable f) (t : ℝ) :
Integrable fun x : ℝ => exp (↑(t * x) * I) • f x :=
by
rw [← integrable_norm_iff]
simp_rw [norm_smul, norm_exp_of_real_mul_I, one_mul]
exacts[hf.norm, (Continuous.aeStronglyMeasurable (by continuity)).smul hf.1]
#align fourier_integrand_integrable fourierIntegrandIntegrable
variable [CompleteSpace E]
/-- Shifting `f` by `π / t` negates the integral in the Riemann-Lebesgue lemma. -/
theorem fourier_integral_half_period_translate {t : ℝ} (ht : t ≠ 0) :
(∫ x : ℝ, exp (↑(t * x) * I) • f (x + π / t)) = -∫ x : ℝ, exp (↑(t * x) * I) • f x :=
by
have :
(fun x : ℝ => exp (↑(t * x) * I) • f (x + π / t)) = fun x : ℝ =>
(fun y : ℝ => -exp (↑(t * y) * I) • f y) (x + π / t) :=
by
ext1 x
dsimp only
rw [of_real_mul, of_real_mul, of_real_add, mul_add, add_mul, exp_add, ← neg_mul]
replace ht := complex.of_real_ne_zero.mpr ht
have : ↑t * ↑(π / t) * I = π * I := by
field_simp
ring
rw [this, exp_pi_mul_I]
ring_nf
rw [this, integral_add_right_eq_self]
simp_rw [neg_smul, integral_neg]
#align fourier_integral_half_period_translate fourier_integral_half_period_translate
/-- Rewrite the Riemann-Lebesgue integral in a form that allows us to use uniform continuity. -/
theorem fourier_integral_eq_half_sub_half_period_translate {t : ℝ} (ht : t ≠ 0)
(hf : Integrable f) :
(∫ x : ℝ, exp (↑(t * x) * I) • f x) =
(1 / (2 : ℂ)) • ∫ x : ℝ, exp (↑(t * x) * I) • (f x - f (x + π / t)) :=
by
simp_rw [smul_sub]
rw [integral_sub, fourier_integral_half_period_translate ht, sub_eq_add_neg, neg_neg, ←
two_smul ℂ _, ← @smul_assoc _ _ _ _ _ _ (IsScalarTower.left ℂ), smul_eq_mul]
norm_num
exacts[fourierIntegrandIntegrable hf t, fourierIntegrandIntegrable (hf.comp_add_right (π / t)) t]
#align fourier_integral_eq_half_sub_half_period_translate fourier_integral_eq_half_sub_half_period_translate
/-- Riemann-Lebesgue Lemma for continuous and compactly-supported functions: the integral
`∫ x, exp (t * x * I) • f x` tends to 0 as `t` gets large. -/
theorem tendsto_integral_mul_exp_atTop_of_continuous_compact_support (hf1 : Continuous f)
(hf2 : HasCompactSupport f) :
Tendsto (fun t : ℝ => ∫ x : ℝ, exp (↑(t * x) * I) • f x) atTop (𝓝 0) :=
by
simp_rw [NormedAddCommGroup.tendsto_nhds_zero, eventually_at_top, ge_iff_le]
intro ε hε
-- Extract an explicit candidate bound on `t` from uniform continuity.
obtain ⟨R, hR1, hR2⟩ := hf2.exists_pos_le_norm
obtain ⟨δ, hδ1, hδ2⟩ :=
metric.uniform_continuous_iff.mp (hf2.uniform_continuous_of_continuous hf1) (ε / (1 + 2 * R))
(div_pos hε (by positivity))
refine' ⟨max π (1 + π / δ), fun t ht => _⟩
have tpos : 0 < t := lt_of_lt_of_le Real.pi_pos ((le_max_left _ _).trans ht)
-- Rewrite integral in terms of `f x - f (x + π / t)`.
rw [fourier_integral_eq_half_sub_half_period_translate
(lt_of_lt_of_le (lt_max_of_lt_left Real.pi_pos) ht).ne'
(hf1.integrable_of_has_compact_support hf2)]
rw [norm_smul, norm_eq_abs, ← Complex.ofReal_one, ← of_real_bit0, ← of_real_div,
Complex.abs_of_nonneg one_half_pos.le]
have : ε = 1 / 2 * (2 * ε) := by
field_simp
ring
rw [this, mul_lt_mul_left (one_half_pos : (0 : ℝ) < 1 / 2)]
have :
‖∫ x : ℝ, exp (↑(t * x) * I) • (f x - f (x + π / t))‖ ≤
∫ x : ℝ, ‖exp (↑(t * x) * I) • (f x - f (x + π / t))‖ :=
norm_integral_le_integral_norm _
refine' lt_of_le_of_lt this _
simp_rw [norm_smul, norm_exp_of_real_mul_I, one_mul]
-- Show integral can be taken over `[-(R + 1), R] ⊂ ℝ`.
let A := Icc (-(R + 1)) R
have int_Icc : (∫ x : ℝ, ‖f x - f (x + π / t)‖) = ∫ x in A, ‖f x - f (x + π / t)‖ :=
by
refine' (set_integral_eq_integral_of_forall_compl_eq_zero fun x hx => _).symm
rw [mem_Icc, not_and_or, not_le, not_le, lt_neg] at hx
suffices f x = 0 ∧ f (x + π / t) = 0 by rw [this.1, this.2, sub_zero, norm_zero]
have tp : 0 < t := real.pi_pos.trans_le ((le_max_left _ _).trans ht)
refine' ⟨hR2 x <| le_abs.mpr _, hR2 _ <| le_abs.mpr _⟩
· cases hx
· exact Or.inr ((le_add_of_nonneg_right zero_le_one).trans hx.le)
· exact Or.inl hx.le
· cases hx
· refine' Or.inr _
rw [neg_add, ← sub_eq_add_neg, le_sub_iff_add_le]
refine' le_trans (add_le_add_left _ R) hx.le
exact (div_le_one tp).mpr ((le_max_left _ _).trans ht)
· exact Or.inl (hx.trans <| lt_add_of_pos_right _ <| div_pos Real.pi_pos tp).le
rw [int_Icc]
-- Bound integral using fact that ‖f x - f (x + π / t)‖ is small.
have bdA : ∀ x : ℝ, x ∈ A → ‖‖f x - f (x + π / t)‖‖ ≤ ε / (1 + 2 * R) :=
by
simp_rw [norm_norm]
refine' fun x _ => le_of_lt _
simp_rw [dist_eq_norm] at hδ2
apply hδ2
rw [sub_add_cancel', Real.norm_eq_abs, abs_neg, abs_of_pos (div_pos Real.pi_pos tpos),
div_lt_iff tpos, mul_comm, ← div_lt_iff hδ1]
linarith [(le_max_right π (1 + π / δ)).trans ht]
have bdA2 := norm_set_integral_le_of_norm_le_const (measure_Icc_lt_top : volume A < ∞) bdA _
swap
· apply Continuous.aeStronglyMeasurable
exact
continuous_norm.comp <|
Continuous.sub hf1 <| Continuous.comp hf1 <| continuous_id'.add continuous_const
have : ‖_‖ = ∫ x : ℝ in A, ‖f x - f (x + π / t)‖ :=
Real.norm_of_nonneg (set_integral_nonneg measurableSet_Icc fun x hx => norm_nonneg _)
rw [this] at bdA2
refine' lt_of_le_of_lt bdA2 _
rw [Real.volume_Icc, (by ring : R - -(R + 1) = 1 + 2 * R)]
have hh : 0 < 1 + 2 * R := by positivity
rw [ENNReal.toReal_ofReal hh.le, div_mul_cancel _ hh.ne', two_mul]
exact lt_add_of_pos_left _ hε
#align tendsto_integral_mul_exp_at_top_of_continuous_compact_support tendsto_integral_mul_exp_atTop_of_continuous_compact_support
theorem tendsto_integral_mul_exp_atBot_of_continuous_compact_support (hf1 : Continuous f)
(hf2 : HasCompactSupport f) :
Tendsto (fun t : ℝ => ∫ x : ℝ, exp (↑(t * x) * I) • f x) atBot (𝓝 0) :=
by
have hg2 : HasCompactSupport (f ∘ Neg.neg) := by
simpa only [neg_one_smul] using hf2.comp_smul (neg_ne_zero.mpr <| one_ne_zero' ℝ)
convert(tendsto_integral_mul_exp_atTop_of_continuous_compact_support (hf1.comp continuous_neg)
hg2).comp
tendsto_neg_at_bot_at_top
ext1 t
simp_rw [Function.comp_apply, neg_mul, ← mul_neg]
rw [← integral_neg_eq_self]
#align tendsto_integral_mul_exp_at_bot_of_continuous_compact_support tendsto_integral_mul_exp_atBot_of_continuous_compact_support
theorem zero_at_infty_integral_mul_exp_of_continuous_compact_support (hf1 : Continuous f)
(hf2 : HasCompactSupport f) :
Tendsto (fun t : ℝ => ∫ x : ℝ, exp (↑(t * x) * I) • f x) (cocompact ℝ) (𝓝 0) :=
by
rw [Real.cocompact_eq, tendsto_sup]
exact
⟨tendsto_integral_mul_exp_atBot_of_continuous_compact_support hf1 hf2,
tendsto_integral_mul_exp_atTop_of_continuous_compact_support hf1 hf2⟩
#align zero_at_infty_integral_mul_exp_of_continuous_compact_support zero_at_infty_integral_mul_exp_of_continuous_compact_support
open FourierTransform
/-- Riemann-Lebesgue lemma for continuous compactly-supported functions: the Fourier transform
tends to 0 at infinity. -/
theorem Real.fourierIntegral_zero_at_infty_of_continuous_compact_support (hc : Continuous f)
(hs : HasCompactSupport f) : Tendsto (𝓕 f) (cocompact ℝ) (𝓝 0) :=
by
refine'
((zero_at_infty_integral_mul_exp_of_continuous_compact_support hc hs).comp
(tendsto_cocompact_mul_left₀
(mul_ne_zero (neg_ne_zero.mpr two_ne_zero) real.pi_pos.ne'))).congr
fun w => _
rw [Real.fourierIntegral_eq_integral_exp_smul, Function.comp_apply]
congr 1 with x : 1
ring_nf
#align real.fourier_integral_zero_at_infty_of_continuous_compact_support Real.fourierIntegral_zero_at_infty_of_continuous_compact_support
end ContinuousCompactSupport
|
[STATEMENT]
lemma [autoref_op_pat]:
"(\<forall>i<u. P i) \<equiv> OP List.all_interval_nat P 0 u"
"(\<forall>i\<le>u. P i) \<equiv> OP List.all_interval_nat P 0 (Suc u)"
"(\<forall>i<u. l\<le>i \<longrightarrow> P i) \<equiv> OP List.all_interval_nat P l u"
"(\<forall>i\<le>u. l\<le>i \<longrightarrow> P i) \<equiv> OP List.all_interval_nat P l (Suc u)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<forall>i<u. P i \<equiv> OP List.all_interval_nat P 0 u) &&& \<forall>i\<le>u. P i \<equiv> OP List.all_interval_nat P 0 (Suc u)) &&& (\<forall>i<u. l \<le> i \<longrightarrow> P i \<equiv> OP List.all_interval_nat P l u) &&& \<forall>i\<le>u. l \<le> i \<longrightarrow> P i \<equiv> OP List.all_interval_nat P l (Suc u)
[PROOF STEP]
by (auto intro!: eq_reflection simp: List.all_interval_nat_def) |
theory Prelude_SortingPrograms__E3
imports "$HETS_ISABELLE_LIB/MainHC"
uses "$HETS_ISABELLE_LIB/prelude"
begin
setup "Header.initialize
[\"ga_monotonicity\", \"ga_monotonicity_1\", \"ga_monotonicity_2\",
\"ga_monotonicity_3\", \"ga_monotonicity_4\",
\"ga_monotonicity_5\", \"ga_monotonicity_6\",
\"ga_monotonicity_7\", \"ga_monotonicity_8\",
\"ga_monotonicity_9\", \"ga_monotonicity_10\",
\"ga_monotonicity_11\", \"ga_monotonicity_12\",
\"ga_monotonicity_13\", \"ga_monotonicity_14\",
\"ga_monotonicity_15\", \"ga_monotonicity_16\",
\"ga_monotonicity_17\", \"ga_monotonicity_18\",
\"ga_monotonicity_19\", \"ga_monotonicity_20\",
\"ga_monotonicity_21\", \"ga_monotonicity_22\",
\"ga_monotonicity_23\", \"ga_monotonicity_24\",
\"ga_monotonicity_25\", \"ga_monotonicity_26\",
\"ga_monotonicity_27\", \"ga_monotonicity_28\",
\"ga_monotonicity_29\", \"ga_monotonicity_30\",
\"ga_monotonicity_31\", \"ga_monotonicity_32\",
\"ga_monotonicity_33\", \"ga_monotonicity_34\",
\"ga_monotonicity_35\", \"ga_monotonicity_36\",
\"ga_monotonicity_37\", \"ga_monotonicity_38\",
\"ga_monotonicity_39\", \"ga_monotonicity_40\",
\"ga_monotonicity_41\", \"ga_monotonicity_42\",
\"ga_monotonicity_43\", \"ga_monotonicity_44\",
\"ga_monotonicity_45\", \"ga_monotonicity_46\",
\"ga_monotonicity_47\", \"ga_monotonicity_48\",
\"ga_monotonicity_49\", \"ga_monotonicity_50\",
\"ga_monotonicity_51\", \"ga_monotonicity_52\",
\"ga_monotonicity_53\", \"ga_monotonicity_54\",
\"ga_monotonicity_55\", \"ga_monotonicity_56\",
\"ga_monotonicity_57\", \"ga_monotonicity_58\",
\"ga_monotonicity_59\", \"ga_monotonicity_60\",
\"ga_monotonicity_61\", \"ga_monotonicity_62\",
\"ga_monotonicity_63\", \"ga_monotonicity_64\",
\"ga_monotonicity_65\", \"ga_monotonicity_66\",
\"ga_monotonicity_67\", \"ga_monotonicity_68\",
\"ga_monotonicity_69\", \"ga_monotonicity_70\",
\"ga_monotonicity_71\", \"ga_monotonicity_72\",
\"ga_monotonicity_73\", \"ga_monotonicity_74\",
\"ga_monotonicity_75\", \"ga_monotonicity_76\",
\"ga_monotonicity_77\", \"ga_monotonicity_78\",
\"ga_monotonicity_79\", \"ga_monotonicity_80\",
\"ga_monotonicity_81\", \"ga_monotonicity_82\",
\"ga_monotonicity_83\", \"ga_monotonicity_84\",
\"ga_monotonicity_85\", \"ga_monotonicity_86\",
\"ga_monotonicity_87\", \"ga_monotonicity_88\",
\"ga_monotonicity_89\", \"ga_monotonicity_90\",
\"ga_subt_reflexive\", \"ga_subt_transitive\",
\"ga_subt_inj_proj\", \"ga_inj_transitive\",
\"ga_subt_Int_XLt_Rat\", \"ga_subt_Nat_XLt_Int\",
\"ga_subt_Pos_XLt_Nat\", \"GenSortT1\", \"GenSortT2\",
\"GenSortF\", \"SplitInsertionSort\", \"JoinInsertionSort\",
\"InsertionSort\", \"SplitQuickSort\", \"JoinQuickSort\",
\"QuickSort\", \"SplitSelectionSort\", \"JoinSelectionSort\",
\"SelectionSort\", \"SplitMergeSort\", \"MergeNil\",
\"MergeConsNil\", \"MergeConsConsT\", \"MergeConsConsF\",
\"JoinMergeSort\", \"MergeSort\", \"ElemNil\", \"ElemCons\",
\"IsOrderedNil\", \"IsOrderedCons\", \"IsOrderedConsCons\",
\"PermutationNil\", \"PermutationConsCons\", \"PermutationCons\",
\"Theorem01\", \"Theorem02\", \"Theorem03\", \"Theorem04\",
\"Theorem05\", \"Theorem06\", \"Theorem07\", \"Theorem08\",
\"Theorem09\", \"Theorem10\", \"Theorem11\", \"Theorem12\",
\"Theorem13\", \"Theorem14\"]"
typedecl Bool
typedecl ('a1) List
typedecl Ordering
typedecl Pos
typedecl Rat
typedecl Unit
typedecl X_Int
typedecl X_Nat
datatype ('a, 'b) Split = X_Split 'b "'a List List"
consts
EQ :: "Ordering"
GT :: "Ordering"
LT :: "Ordering"
X0X1 :: "X_Int" ("0''")
X0X2 :: "X_Nat" ("0''''")
X0X3 :: "Rat" ("0'_3")
X1X1 :: "X_Int" ("1''")
X1X2 :: "X_Nat" ("1''''")
X1X3 :: "Pos" ("1'_3")
X1X4 :: "Rat" ("1'_4")
X2X1 :: "X_Int" ("2''")
X2X2 :: "X_Nat" ("2''''")
X2X3 :: "Rat" ("2'_3")
X3X1 :: "X_Int" ("3''")
X3X2 :: "X_Nat" ("3''''")
X3X3 :: "Rat" ("3'_3")
X4X1 :: "X_Int" ("4''")
X4X2 :: "X_Nat" ("4''''")
X4X3 :: "Rat" ("4'_3")
X5X1 :: "X_Int" ("5''")
X5X2 :: "X_Nat" ("5''''")
X5X3 :: "Rat" ("5'_3")
X6X1 :: "X_Int" ("6''")
X6X2 :: "X_Nat" ("6''''")
X6X3 :: "Rat" ("6'_3")
X7X1 :: "X_Int" ("7''")
X7X2 :: "X_Nat" ("7''''")
X7X3 :: "Rat" ("7'_3")
X8X1 :: "X_Int" ("8''")
X8X2 :: "X_Nat" ("8''''")
X8X3 :: "Rat" ("8'_3")
X9X1 :: "X_Int" ("9''")
X9X2 :: "X_Nat" ("9''''")
X9X3 :: "Rat" ("9'_3")
XMinus__XX1 :: "X_Int => X_Int" ("(-''/ _)" [56] 56)
XMinus__XX2 :: "Rat => Rat" ("(-''''/ _)" [56] 56)
X_Cons :: "'a => 'a List => 'a List"
X_False :: "Bool" ("False''")
X_Nil :: "'a List" ("Nil''")
X_True :: "Bool" ("True''")
X__XAmpXAmp__X :: "Bool => Bool => Bool" ("(_/ &&/ _)" [54,54] 52)
X__XAtXAt__X :: "X_Nat => X_Nat => X_Nat" ("(_/ @@/ _)" [54,54] 52)
X__XCaret__XX1 :: "X_Int => X_Nat => X_Int" ("(_/ ^''/ _)" [54,54] 52)
X__XCaret__XX2 :: "X_Nat => X_Nat => X_Nat" ("(_/ ^''''/ _)" [54,54] 52)
X__XCaret__XX3 :: "Rat => X_Int => Rat partial" ("(_/ ^'_3/ _)" [54,54] 52)
X__XEqXEq__X :: "'a => 'a => Bool" ("(_/ ==''/ _)" [54,54] 52)
X__XExclam :: "X_Nat => X_Nat" ("(_/ !'')" [58] 58)
X__XGtXEq__XX1 :: "X_Int => X_Int => bool" ("(_/ >=''/ _)" [44,44] 42)
X__XGtXEq__XX2 :: "X_Nat => X_Nat => bool" ("(_/ >=''''/ _)" [44,44] 42)
X__XGtXEq__XX3 :: "Rat => Rat => bool" ("(_/ >='_3/ _)" [44,44] 42)
X__XGtXEq__XX4 :: "'a => 'a => Bool" ("(_/ >='_4/ _)" [54,54] 52)
X__XGt__XX1 :: "X_Int => X_Int => bool" ("(_/ >''/ _)" [44,44] 42)
X__XGt__XX2 :: "X_Nat => X_Nat => bool" ("(_/ >''''/ _)" [44,44] 42)
X__XGt__XX3 :: "Rat => Rat => bool" ("(_/ >'_3/ _)" [44,44] 42)
X__XGt__XX4 :: "'a => 'a => Bool" ("(_/ >'_4/ _)" [54,54] 52)
X__XLtXEq__XX1 :: "X_Int => X_Int => bool" ("(_/ <=''/ _)" [44,44] 42)
X__XLtXEq__XX2 :: "X_Nat => X_Nat => bool" ("(_/ <=''''/ _)" [44,44] 42)
X__XLtXEq__XX3 :: "Rat => Rat => bool" ("(_/ <='_3/ _)" [44,44] 42)
X__XLtXEq__XX4 :: "'a => 'a => Bool" ("(_/ <='_4/ _)" [54,54] 52)
X__XLt__XX1 :: "X_Int => X_Int => bool" ("(_/ <''/ _)" [44,44] 42)
X__XLt__XX2 :: "X_Nat => X_Nat => bool" ("(_/ <''''/ _)" [44,44] 42)
X__XLt__XX3 :: "Rat => Rat => bool" ("(_/ <'_3/ _)" [44,44] 42)
X__XLt__XX4 :: "'a => 'a => Bool" ("(_/ <'_4/ _)" [54,54] 52)
X__XMinusXExclam__X :: "X_Nat => X_Nat => X_Nat" ("(_/ -!/ _)" [54,54] 52)
X__XMinusXQuest__X :: "X_Nat => X_Nat => X_Nat partial" ("(_/ -?/ _)" [54,54] 52)
X__XMinus__XX1 :: "X_Int => X_Int => X_Int" ("(_/ -''/ _)" [54,54] 52)
X__XMinus__XX2 :: "X_Nat => X_Nat => X_Int" ("(_/ -''''/ _)" [54,54] 52)
X__XMinus__XX3 :: "Rat => Rat => Rat" ("(_/ -'_3/ _)" [54,54] 52)
X__XMinus__XX4 :: "'a => 'a => 'a" ("(_/ -'_4/ _)" [54,54] 52)
X__XPlusXPlus__X :: "'a List => 'a List => 'a List" ("(_/ ++''/ _)" [54,54] 52)
X__XPlus__XX1 :: "X_Int => X_Int => X_Int" ("(_/ +''/ _)" [54,54] 52)
X__XPlus__XX2 :: "X_Nat => X_Nat => X_Nat" ("(_/ +''''/ _)" [54,54] 52)
X__XPlus__XX3 :: "X_Nat => Pos => Pos" ("(_/ +'_3/ _)" [54,54] 52)
X__XPlus__XX4 :: "Pos => X_Nat => Pos" ("(_/ +'_4/ _)" [54,54] 52)
X__XPlus__XX5 :: "Rat => Rat => Rat" ("(_/ +'_5/ _)" [54,54] 52)
X__XPlus__XX6 :: "'a => 'a => 'a" ("(_/ +'_6/ _)" [54,54] 52)
X__XSlashXEq__X :: "'a => 'a => Bool" ("(_/ '/=/ _)" [54,54] 52)
X__XSlashXQuest__XX1 :: "X_Int => X_Int => X_Int partial" ("(_/ '/?''/ _)" [54,54] 52)
X__XSlashXQuest__XX2 :: "X_Nat => X_Nat => X_Nat partial" ("(_/ '/?''''/ _)" [54,54] 52)
X__XSlash__XX1 :: "X_Int => Pos => Rat" ("(_/ '/''/ _)" [54,54] 52)
X__XSlash__XX2 :: "Rat => Rat => Rat partial" ("(_/ '/''''/ _)" [54,54] 52)
X__XSlash__XX3 :: "'a => 'a => 'a" ("(_/ '/'_3/ _)" [54,54] 52)
X__XVBarXVBar__X :: "Bool => Bool => Bool" ("(_/ ||/ _)" [54,54] 52)
X__Xx__XX1 :: "X_Int => X_Int => X_Int" ("(_/ *''/ _)" [54,54] 52)
X__Xx__XX2 :: "X_Nat => X_Nat => X_Nat" ("(_/ *''''/ _)" [54,54] 52)
X__Xx__XX3 :: "Pos => Pos => Pos" ("(_/ *'_3/ _)" [54,54] 52)
X__Xx__XX4 :: "Rat => Rat => Rat" ("(_/ *'_4/ _)" [54,54] 52)
X__Xx__XX5 :: "'a => 'a => 'a" ("(_/ *'_5/ _)" [54,54] 52)
X__div__XX1 :: "X_Int => X_Int => X_Int partial" ("(_/ div''/ _)" [54,54] 52)
X__div__XX2 :: "X_Nat => X_Nat => X_Nat partial" ("(_/ div''''/ _)" [54,54] 52)
X__div__XX3 :: "'a => 'a => 'a" ("(_/ div'_3/ _)" [54,54] 52)
X__dvd__X :: "X_Nat => X_Nat => bool" ("(_/ dvd''/ _)" [44,44] 42)
X__elem__X :: "'a => 'a List => bool" ("(_/ elem/ _)" [44,44] 42)
X__mod__XX1 :: "X_Int => X_Int => X_Nat partial" ("(_/ mod''/ _)" [54,54] 52)
X__mod__XX2 :: "X_Nat => X_Nat => X_Nat partial" ("(_/ mod''''/ _)" [54,54] 52)
X__mod__XX3 :: "'a => 'a => 'a" ("(_/ mod'_3/ _)" [54,54] 52)
X__o__X :: "('b => 'c) * ('a => 'b) => 'a => 'c"
X__quot__XX1 :: "X_Int => X_Int => X_Int partial" ("(_/ quot''/ _)" [54,54] 52)
X__quot__XX2 :: "'a => 'a => 'a" ("(_/ quot''''/ _)" [54,54] 52)
X__rem__XX1 :: "X_Int => X_Int => X_Int partial" ("(_/ rem''/ _)" [54,54] 52)
X__rem__XX2 :: "'a => 'a => 'a" ("(_/ rem''''/ _)" [54,54] 52)
X_absX1 :: "X_Int => X_Nat" ("abs''/'(_')" [3] 999)
X_absX2 :: "Rat => Rat" ("abs''''/'(_')" [3] 999)
X_absX3 :: "'a => 'a" ("abs'_3/'(_')" [3] 999)
X_all :: "('a => Bool) => 'a List => Bool"
X_andL :: "Bool List => Bool" ("andL/'(_')" [3] 999)
X_any :: "('a => Bool) => 'a List => Bool"
X_concat :: "'a List List => 'a List" ("concat''/'(_')" [3] 999)
X_curry :: "('a * 'b => 'c) => 'a => 'b => 'c"
X_drop :: "X_Int => 'a List => 'a List"
X_dropWhile :: "('a => Bool) => 'a List => 'a List"
X_evenX1 :: "X_Int => bool" ("even''/'(_')" [3] 999)
X_evenX2 :: "X_Nat => bool" ("even''''/'(_')" [3] 999)
X_filter :: "('a => Bool) => 'a List => 'a List"
X_flip :: "('a => 'b => 'c) => 'b => 'a => 'c"
X_foldl :: "('a => 'b => 'a) => 'a => 'b List => 'a partial"
X_foldr :: "('a => 'b => 'b) => 'b => 'a List => 'b partial"
X_fromInteger :: "X_Int => 'a" ("fromInteger/'(_')" [3] 999)
X_fst :: "'a => 'b => 'a" ("fst''/'(_,/ _')" [3,3] 999)
X_gn_inj :: "'a => 'b" ("gn'_inj/'(_')" [3] 999)
X_gn_proj :: "'a => 'b partial" ("gn'_proj/'(_')" [3] 999)
X_gn_subt :: "'a => 'b => bool" ("gn'_subt/'(_,/ _')" [3,3] 999)
X_head :: "'a List => 'a partial" ("head/'(_')" [3] 999)
X_id :: "'a => 'a" ("id''/'(_')" [3] 999)
X_init :: "'a List => 'a List partial" ("init/'(_')" [3] 999)
X_insert :: "'d => 'd List => 'd List"
X_insertionSort :: "'a List => 'a List" ("insertionSort/'(_')" [3] 999)
X_isOrdered :: "'a List => bool" ("isOrdered/'(_')" [3] 999)
X_joinInsertionSort :: "('a, 'a) Split => 'a List" ("joinInsertionSort/'(_')" [3] 999)
X_joinMergeSort :: "('a, unit) Split => 'a List" ("joinMergeSort/'(_')" [3] 999)
X_joinQuickSort :: "('b, 'b) Split => 'b List" ("joinQuickSort/'(_')" [3] 999)
X_joinSelectionSort :: "('b, 'b) Split => 'b List" ("joinSelectionSort/'(_')" [3] 999)
X_last :: "'a List => 'a partial" ("last''/'(_')" [3] 999)
X_length :: "'a List => X_Int" ("length''/'(_')" [3] 999)
X_map :: "('a => 'b) => 'a List => 'b List"
X_maxX1 :: "X_Int => X_Int => X_Int" ("max''/'(_,/ _')" [3,3] 999)
X_maxX2 :: "X_Nat => X_Nat => X_Nat" ("max''''/'(_,/ _')" [3,3] 999)
X_maxX3 :: "Rat => Rat => Rat" ("max'_3/'(_,/ _')" [3,3] 999)
X_maxX4 :: "'a => 'a => 'a"
X_maximum :: "'d List => 'd partial" ("maximum/'(_')" [3] 999)
X_mergeSort :: "'a List => 'a List" ("mergeSort/'(_')" [3] 999)
X_minX1 :: "X_Int => X_Int => X_Int" ("min''/'(_,/ _')" [3,3] 999)
X_minX2 :: "X_Nat => X_Nat => X_Nat" ("min''''/'(_,/ _')" [3,3] 999)
X_minX3 :: "Rat => Rat => Rat" ("min'_3/'(_,/ _')" [3,3] 999)
X_minX4 :: "'a => 'a => 'a"
X_minimum :: "'d List => 'd partial" ("minimum/'(_')" [3] 999)
X_negate :: "'a => 'a" ("negate/'(_')" [3] 999)
X_null :: "'a List => Bool" ("null''/'(_')" [3] 999)
X_oddX1 :: "X_Int => bool" ("odd''/'(_')" [3] 999)
X_oddX2 :: "X_Nat => bool" ("odd''''/'(_')" [3] 999)
X_orL :: "Bool List => Bool" ("orL/'(_')" [3] 999)
X_permutation :: "'a List => 'a List => bool" ("permutation/'(_,/ _')" [3,3] 999)
X_pre :: "X_Nat => X_Nat partial" ("pre/'(_')" [3] 999)
X_product :: "'c List => 'c" ("product/'(_')" [3] 999)
X_quickSort :: "'a List => 'a List" ("quickSort/'(_')" [3] 999)
X_recip :: "'a => 'a" ("recip/'(_')" [3] 999)
X_reverse :: "'a List => 'a List" ("reverse/'(_')" [3] 999)
X_selectionSort :: "'a List => 'a List" ("selectionSort/'(_')" [3] 999)
X_sign :: "X_Int => X_Int" ("sign/'(_')" [3] 999)
X_signum :: "'a => 'a" ("signum/'(_')" [3] 999)
X_snd :: "'a => 'b => 'b" ("snd''/'(_,/ _')" [3,3] 999)
X_splitInsertionSort :: "'b List => ('b, 'b) Split" ("splitInsertionSort/'(_')" [3] 999)
X_splitMergeSort :: "'b List => ('b, unit) Split" ("splitMergeSort/'(_')" [3] 999)
X_splitQuickSort :: "'a List => ('a, 'a) Split" ("splitQuickSort/'(_')" [3] 999)
X_splitSelectionSort :: "'a List => ('a, 'a) Split" ("splitSelectionSort/'(_')" [3] 999)
X_sum :: "'c List => 'c" ("sum/'(_')" [3] 999)
X_tail :: "'a List => 'a List partial" ("tail/'(_')" [3] 999)
X_take :: "X_Int => 'a List => 'a List"
X_takeWhile :: "('a => Bool) => 'a List => 'a List"
X_toInteger :: "'a => X_Int" ("toInteger/'(_')" [3] 999)
X_unzip :: "('a * 'b) List => 'a List * 'b List" ("unzip/'(_')" [3] 999)
X_zip :: "'a List => 'b List => ('a * 'b) List"
break :: "('a => Bool) => 'a List => 'a List * 'a List"
compare :: "'a => 'a => Ordering"
concatMap :: "('a => 'b List) => 'a List => 'b List"
delete :: "'e => 'e List => 'e List"
divMod :: "'a => 'a => 'a * 'a"
foldl1 :: "('a => 'a => 'a) => 'a List => 'a partial"
foldr1 :: "('a => 'a => 'a) => 'a List => 'a partial"
genSort :: "('a List => ('a, 'b) Split) => (('a, 'b) Split => 'a List) => 'a List => 'a List"
merge :: "'a List => 'a List => 'a List"
notH__X :: "Bool => Bool" ("(notH/ _)" [56] 56)
otherwiseH :: "Bool"
partition :: "('a => Bool) => 'a List => 'a List * 'a List"
quotRem :: "'a => 'a => 'a * 'a"
scanl :: "('a => 'b => 'a) => 'a => 'b List => 'a List"
scanl1 :: "('a => 'a => 'a) => 'a List => 'a List"
scanr :: "('a => 'b => 'b) => 'b => 'a List => 'b List"
scanr1 :: "('a => 'a => 'a) => 'a List => 'a List"
select :: "('a => Bool) => 'a => 'a List * 'a List => 'a List * 'a List"
span :: "('a => Bool) => 'a List => 'a List * 'a List"
splitAt :: "X_Int => 'a List => 'a List * 'a List"
sucX1 :: "X_Nat => X_Nat" ("suc''/'(_')" [3] 999)
sucX2 :: "X_Nat => Pos" ("suc''''/'(_')" [3] 999)
uncurry :: "('a => 'b => 'c) => 'a * 'b => 'c"
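(* Editorial reading, inferred from the shape of the axioms: the
   ga_monotonicity axioms below come from the overloading relation of the
   original CASL specification.  Whenever two overloaded constants are
   related along the subsort embeddings, their images under the
   injections X_gn_inj must coincide. *)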
axioms
ga_monotonicity [rule_format] :
"(X_gn_inj :: (X_Int => X_Int) => X_Int => Rat) XMinus__XX1 =
(X_gn_inj :: (Rat => Rat) => X_Int => Rat) XMinus__XX2"
ga_monotonicity_1 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 0' =
(X_gn_inj :: X_Nat => X_Int) 0''"
ga_monotonicity_2 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 0' = (X_gn_inj :: Rat => Rat) 0_3"
ga_monotonicity_3 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 0'' = (X_gn_inj :: Rat => Rat) 0_3"
ga_monotonicity_4 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 1' =
(X_gn_inj :: X_Nat => X_Int) 1''"
ga_monotonicity_5 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 1' = (X_gn_inj :: Pos => X_Int) 1_3"
ga_monotonicity_6 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 1' = (X_gn_inj :: Rat => Rat) 1_4"
ga_monotonicity_7 [rule_format] :
"(X_gn_inj :: X_Nat => X_Nat) 1'' = (X_gn_inj :: Pos => X_Nat) 1_3"
ga_monotonicity_8 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 1'' = (X_gn_inj :: Rat => Rat) 1_4"
ga_monotonicity_9 [rule_format] :
"(X_gn_inj :: Pos => Rat) 1_3 = (X_gn_inj :: Rat => Rat) 1_4"
ga_monotonicity_10 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 2' =
(X_gn_inj :: X_Nat => X_Int) 2''"
ga_monotonicity_11 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 2' = (X_gn_inj :: Rat => Rat) 2_3"
ga_monotonicity_12 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 2'' = (X_gn_inj :: Rat => Rat) 2_3"
ga_monotonicity_13 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 3' =
(X_gn_inj :: X_Nat => X_Int) 3''"
ga_monotonicity_14 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 3' = (X_gn_inj :: Rat => Rat) 3_3"
ga_monotonicity_15 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 3'' = (X_gn_inj :: Rat => Rat) 3_3"
ga_monotonicity_16 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 4' =
(X_gn_inj :: X_Nat => X_Int) 4''"
ga_monotonicity_17 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 4' = (X_gn_inj :: Rat => Rat) 4_3"
ga_monotonicity_18 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 4'' = (X_gn_inj :: Rat => Rat) 4_3"
ga_monotonicity_19 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 5' =
(X_gn_inj :: X_Nat => X_Int) 5''"
ga_monotonicity_20 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 5' = (X_gn_inj :: Rat => Rat) 5_3"
ga_monotonicity_21 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 5'' = (X_gn_inj :: Rat => Rat) 5_3"
ga_monotonicity_22 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 6' =
(X_gn_inj :: X_Nat => X_Int) 6''"
ga_monotonicity_23 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 6' = (X_gn_inj :: Rat => Rat) 6_3"
ga_monotonicity_24 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 6'' = (X_gn_inj :: Rat => Rat) 6_3"
ga_monotonicity_25 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 7' =
(X_gn_inj :: X_Nat => X_Int) 7''"
ga_monotonicity_26 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 7' = (X_gn_inj :: Rat => Rat) 7_3"
ga_monotonicity_27 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 7'' = (X_gn_inj :: Rat => Rat) 7_3"
ga_monotonicity_28 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 8' =
(X_gn_inj :: X_Nat => X_Int) 8''"
ga_monotonicity_29 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 8' = (X_gn_inj :: Rat => Rat) 8_3"
ga_monotonicity_30 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 8'' = (X_gn_inj :: Rat => Rat) 8_3"
ga_monotonicity_31 [rule_format] :
"(X_gn_inj :: X_Int => X_Int) 9' =
(X_gn_inj :: X_Nat => X_Int) 9''"
ga_monotonicity_32 [rule_format] :
"(X_gn_inj :: X_Int => Rat) 9' = (X_gn_inj :: Rat => Rat) 9_3"
ga_monotonicity_33 [rule_format] :
"(X_gn_inj :: X_Nat => Rat) 9'' = (X_gn_inj :: Rat => Rat) 9_3"
ga_monotonicity_34 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Nat * X_Nat => X_Int)
(uncurryOp X__Xx__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => X_Int)
(uncurryOp X__Xx__XX2)"
ga_monotonicity_35 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => Pos * Pos => X_Int)
(uncurryOp X__Xx__XX1) =
(X_gn_inj :: (Pos * Pos => Pos) => Pos * Pos => X_Int)
(uncurryOp X__Xx__XX3)"
ga_monotonicity_36 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => Rat)
(uncurryOp X__Xx__XX1) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Int * X_Int => Rat)
(uncurryOp X__Xx__XX4)"
ga_monotonicity_37 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => X_Int)
(uncurryOp X__Xx__XX1) =
(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => X_Int)
(uncurryOp X__Xx__XX5)"
ga_monotonicity_38 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => Pos * Pos => X_Nat)
(uncurryOp X__Xx__XX2) =
(X_gn_inj :: (Pos * Pos => Pos) => Pos * Pos => X_Nat)
(uncurryOp X__Xx__XX3)"
ga_monotonicity_39 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => Rat)
(uncurryOp X__Xx__XX2) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Nat * X_Nat => Rat)
(uncurryOp X__Xx__XX4)"
ga_monotonicity_40 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => X_Nat)
(uncurryOp X__Xx__XX2) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => X_Nat)
(uncurryOp X__Xx__XX5)"
ga_monotonicity_41 [rule_format] :
"(X_gn_inj :: (Pos * Pos => Pos) => Pos * Pos => Rat)
(uncurryOp X__Xx__XX3) =
(X_gn_inj :: (Rat * Rat => Rat) => Pos * Pos => Rat)
(uncurryOp X__Xx__XX4)"
ga_monotonicity_42 [rule_format] :
"(X_gn_inj :: (Pos * Pos => Pos) => Pos * Pos => Pos)
(uncurryOp X__Xx__XX3) =
(X_gn_inj :: (Pos * Pos => Pos) => Pos * Pos => Pos)
(uncurryOp X__Xx__XX5)"
ga_monotonicity_43 [rule_format] :
"(X_gn_inj :: (Rat * Rat => Rat) => Rat * Rat => Rat)
(uncurryOp X__Xx__XX4) =
(X_gn_inj :: (Rat * Rat => Rat) => Rat * Rat => Rat)
(uncurryOp X__Xx__XX5)"
ga_monotonicity_44 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Nat * X_Nat => X_Int)
(uncurryOp X__XPlus__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => X_Int)
(uncurryOp X__XPlus__XX2)"
ga_monotonicity_45 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Nat * Pos => X_Int)
(uncurryOp X__XPlus__XX1) =
(X_gn_inj :: (X_Nat * Pos => Pos) => X_Nat * Pos => X_Int)
(uncurryOp X__XPlus__XX3)"
ga_monotonicity_46 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => Pos * X_Nat => X_Int)
(uncurryOp X__XPlus__XX1) =
(X_gn_inj :: (Pos * X_Nat => Pos) => Pos * X_Nat => X_Int)
(uncurryOp X__XPlus__XX4)"
ga_monotonicity_47 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => Rat)
(uncurryOp X__XPlus__XX1) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Int * X_Int => Rat)
(uncurryOp X__XPlus__XX5)"
ga_monotonicity_48 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => X_Int)
(uncurryOp X__XPlus__XX1) =
(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => X_Int)
(uncurryOp X__XPlus__XX6)"
ga_monotonicity_49 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * Pos => X_Nat)
(uncurryOp X__XPlus__XX2) =
(X_gn_inj :: (X_Nat * Pos => Pos) => X_Nat * Pos => X_Nat)
(uncurryOp X__XPlus__XX3)"
ga_monotonicity_50 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => Pos * X_Nat => X_Nat)
(uncurryOp X__XPlus__XX2) =
(X_gn_inj :: (Pos * X_Nat => Pos) => Pos * X_Nat => X_Nat)
(uncurryOp X__XPlus__XX4)"
ga_monotonicity_51 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => Rat)
(uncurryOp X__XPlus__XX2) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Nat * X_Nat => Rat)
(uncurryOp X__XPlus__XX5)"
ga_monotonicity_52 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => X_Nat)
(uncurryOp X__XPlus__XX2) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => X_Nat)
(uncurryOp X__XPlus__XX6)"
ga_monotonicity_53 [rule_format] :
"(X_gn_inj :: (X_Nat * Pos => Pos) => Pos * Pos => Pos)
(uncurryOp X__XPlus__XX3) =
(X_gn_inj :: (Pos * X_Nat => Pos) => Pos * Pos => Pos)
(uncurryOp X__XPlus__XX4)"
ga_monotonicity_54 [rule_format] :
"(X_gn_inj :: (X_Nat * Pos => Pos) => X_Nat * Pos => Rat)
(uncurryOp X__XPlus__XX3) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Nat * Pos => Rat)
(uncurryOp X__XPlus__XX5)"
ga_monotonicity_55 [rule_format] :
"(X_gn_inj :: (Pos * X_Nat => Pos) => Pos * X_Nat => Rat)
(uncurryOp X__XPlus__XX4) =
(X_gn_inj :: (Rat * Rat => Rat) => Pos * X_Nat => Rat)
(uncurryOp X__XPlus__XX5)"
ga_monotonicity_56 [rule_format] :
"(X_gn_inj :: (Rat * Rat => Rat) => Rat * Rat => Rat)
(uncurryOp X__XPlus__XX5) =
(X_gn_inj :: (Rat * Rat => Rat) => Rat * Rat => Rat)
(uncurryOp X__XPlus__XX6)"
ga_monotonicity_57 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Nat * X_Nat => X_Int)
(uncurryOp X__XMinus__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => X_Int) => X_Nat * X_Nat => X_Int)
(uncurryOp X__XMinus__XX2)"
ga_monotonicity_58 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => Rat)
(uncurryOp X__XMinus__XX1) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Int * X_Int => Rat)
(uncurryOp X__XMinus__XX3)"
ga_monotonicity_59 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => X_Int)
(uncurryOp X__XMinus__XX1) =
(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => X_Int)
(uncurryOp X__XMinus__XX4)"
ga_monotonicity_60 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Int) => X_Nat * X_Nat => Rat)
(uncurryOp X__XMinus__XX2) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Nat * X_Nat => Rat)
(uncurryOp X__XMinus__XX3)"
ga_monotonicity_61 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Int) => X_Nat * X_Nat => X_Int)
(uncurryOp X__XMinus__XX2) =
(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Nat * X_Nat => X_Int)
(uncurryOp X__XMinus__XX4)"
ga_monotonicity_62 [rule_format] :
"(X_gn_inj :: (Rat * Rat => Rat) => Rat * Rat => Rat)
(uncurryOp X__XMinus__XX3) =
(X_gn_inj :: (Rat * Rat => Rat) => Rat * Rat => Rat)
(uncurryOp X__XMinus__XX4)"
ga_monotonicity_63 [rule_format] :
"(X_gn_inj :: (X_Int * Pos => Rat) => X_Int * Pos => Rat)
(uncurryOp X__XSlash__XX1) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Int * Pos => Rat)
(uncurryOp X__XSlash__XX3)"
ga_monotonicity_64 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int partial) => X_Nat * X_Nat => X_Int partial)
(uncurryOp X__XSlashXQuest__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat partial) => X_Nat * X_Nat => X_Int partial)
(uncurryOp X__XSlashXQuest__XX2)"
ga_monotonicity_65 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XLt__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XLt__XX2)"
ga_monotonicity_66 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => bool) => X_Int * X_Int => bool)
(uncurryOp X__XLt__XX1) =
(X_gn_inj :: (Rat * Rat => bool) => X_Int * X_Int => bool)
(uncurryOp X__XLt__XX3)"
ga_monotonicity_67 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XLt__XX2) =
(X_gn_inj :: (Rat * Rat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XLt__XX3)"
ga_monotonicity_68 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XLtXEq__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XLtXEq__XX2)"
ga_monotonicity_69 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => bool) => X_Int * X_Int => bool)
(uncurryOp X__XLtXEq__XX1) =
(X_gn_inj :: (Rat * Rat => bool) => X_Int * X_Int => bool)
(uncurryOp X__XLtXEq__XX3)"
ga_monotonicity_70 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XLtXEq__XX2) =
(X_gn_inj :: (Rat * Rat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XLtXEq__XX3)"
ga_monotonicity_71 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XGt__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XGt__XX2)"
ga_monotonicity_72 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => bool) => X_Int * X_Int => bool)
(uncurryOp X__XGt__XX1) =
(X_gn_inj :: (Rat * Rat => bool) => X_Int * X_Int => bool)
(uncurryOp X__XGt__XX3)"
ga_monotonicity_73 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XGt__XX2) =
(X_gn_inj :: (Rat * Rat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XGt__XX3)"
ga_monotonicity_74 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XGtXEq__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XGtXEq__XX2)"
ga_monotonicity_75 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => bool) => X_Int * X_Int => bool)
(uncurryOp X__XGtXEq__XX1) =
(X_gn_inj :: (Rat * Rat => bool) => X_Int * X_Int => bool)
(uncurryOp X__XGtXEq__XX3)"
ga_monotonicity_76 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XGtXEq__XX2) =
(X_gn_inj :: (Rat * Rat => bool) => X_Nat * X_Nat => bool)
(uncurryOp X__XGtXEq__XX3)"
ga_monotonicity_77 [rule_format] :
"(X_gn_inj :: (X_Int * X_Nat => X_Int) => X_Nat * X_Nat => X_Int)
(uncurryOp X__XCaret__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => X_Int)
(uncurryOp X__XCaret__XX2)"
ga_monotonicity_78 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int partial) => X_Nat * X_Nat => X_Int partial)
(uncurryOp X__div__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat partial) => X_Nat * X_Nat => X_Int partial)
(uncurryOp X__div__XX2)"
ga_monotonicity_79 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Nat partial) => X_Nat * X_Nat => X_Nat partial)
(uncurryOp X__mod__XX1) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat partial) => X_Nat * X_Nat => X_Nat partial)
(uncurryOp X__mod__XX2)"
ga_monotonicity_80 [rule_format] :
"(X_gn_inj :: (X_Int => X_Nat) => X_Int => Rat) X_absX1 =
(X_gn_inj :: (Rat => Rat) => X_Int => Rat) X_absX2"
ga_monotonicity_81 [rule_format] :
"(X_gn_inj :: (Rat => Rat) => Rat => Rat) X_absX2 =
(X_gn_inj :: (Rat => Rat) => Rat => Rat) X_absX3"
ga_monotonicity_82 [rule_format] :
"(X_gn_inj :: (X_Int => bool) => X_Nat => bool) X_evenX1 =
(X_gn_inj :: (X_Nat => bool) => X_Nat => bool) X_evenX2"
ga_monotonicity_83 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Nat * X_Nat => X_Int)
(uncurryOp X_maxX1) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => X_Int)
(uncurryOp X_maxX2)"
ga_monotonicity_84 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => Rat)
(uncurryOp X_maxX1) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Int * X_Int => Rat)
(uncurryOp X_maxX3)"
ga_monotonicity_85 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => Rat)
(uncurryOp X_maxX2) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Nat * X_Nat => Rat)
(uncurryOp X_maxX3)"
ga_monotonicity_86 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Nat * X_Nat => X_Int)
(uncurryOp X_minX1) =
(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => X_Int)
(uncurryOp X_minX2)"
ga_monotonicity_87 [rule_format] :
"(X_gn_inj :: (X_Int * X_Int => X_Int) => X_Int * X_Int => Rat)
(uncurryOp X_minX1) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Int * X_Int => Rat)
(uncurryOp X_minX3)"
ga_monotonicity_88 [rule_format] :
"(X_gn_inj :: (X_Nat * X_Nat => X_Nat) => X_Nat * X_Nat => Rat)
(uncurryOp X_minX2) =
(X_gn_inj :: (Rat * Rat => Rat) => X_Nat * X_Nat => Rat)
(uncurryOp X_minX3)"
ga_monotonicity_89 [rule_format] :
"(X_gn_inj :: (X_Int => bool) => X_Nat => bool) X_oddX1 =
(X_gn_inj :: (X_Nat => bool) => X_Nat => bool) X_oddX2"
ga_monotonicity_90 [rule_format] :
"(X_gn_inj :: (X_Nat => X_Nat) => X_Nat => X_Nat) sucX1 =
(X_gn_inj :: (X_Nat => Pos) => X_Nat => X_Nat) sucX2"
ga_subt_reflexive [rule_format] :
"ALL (x :: 'a). ALL (y :: 'a). gn_subt(x, y)"
ga_subt_transitive [rule_format] :
"ALL (x :: 'a).
ALL (y :: 'b).
ALL (z :: 'c). gn_subt(x, y) & gn_subt(y, z) --> gn_subt(x, z)"
ga_subt_inj_proj [rule_format] :
"ALL (x :: 'a).
ALL (y :: 'b).
gn_subt(x, y) -->
y = (X_gn_inj :: 'a => 'b) x =
(makePartial x = (X_gn_proj :: 'b => 'a partial) y)"
ga_inj_transitive [rule_format] :
"ALL (x :: 'a).
ALL (y :: 'b).
ALL (z :: 'c).
gn_subt(x, y) & gn_subt(y, z) & y = (X_gn_inj :: 'a => 'b) x -->
z = (X_gn_inj :: 'a => 'c) x = (z = (X_gn_inj :: 'b => 'c) y)"
ga_subt_Int_XLt_Rat [rule_format] :
"ALL (x :: X_Int). ALL (y :: Rat). gn_subt(x, y)"
ga_subt_Nat_XLt_Int [rule_format] :
"ALL (x :: X_Nat). ALL (y :: X_Int). gn_subt(x, y)"
ga_subt_Pos_XLt_Nat [rule_format] :
"ALL (x :: Pos). ALL (y :: X_Nat). gn_subt(x, y)"
GenSortT1 [rule_format] :
"ALL (join :: ('a, 'b) Split => 'a List).
ALL (r :: 'b).
ALL (X_split :: 'a List => ('a, 'b) Split).
ALL (x :: 'a).
ALL (xs :: 'a List).
ALL (xxs :: 'a List List).
ALL (y :: 'a).
ALL (ys :: 'a List).
xs = X_Cons x (X_Cons y ys) & X_split xs = X_Split r xxs -->
genSort X_split join xs =
join (X_Split r (X_map (genSort X_split join) xxs))"
GenSortT2 [rule_format] :
"ALL (join :: ('a, 'b) Split => 'a List).
ALL (r :: 'b).
ALL (X_split :: 'a List => ('a, 'b) Split).
ALL (x :: 'a).
ALL (xs :: 'a List).
ALL (xxs :: 'a List List).
ALL (y :: 'a).
xs = X_Cons x (X_Cons y Nil') & X_split xs = X_Split r xxs -->
genSort X_split join xs =
join (X_Split r (X_map (genSort X_split join) xxs))"
GenSortF [rule_format] :
"ALL (join :: ('a, 'b) Split => 'a List).
ALL (X_split :: 'a List => ('a, 'b) Split).
ALL (x :: 'a).
ALL (xs :: 'a List).
xs = X_Cons x Nil' | xs = Nil' --> genSort X_split join xs = xs"
SplitInsertionSort [rule_format] :
"ALL (x :: 'a).
ALL (xs :: 'a List).
splitInsertionSort(X_Cons x xs) = X_Split x (X_Cons xs Nil')"
JoinInsertionSort [rule_format] :
"ALL (x :: 'a).
ALL (xs :: 'a List).
joinInsertionSort(X_Split x (X_Cons xs Nil')) = X_insert x xs"
InsertionSort [rule_format] :
"ALL (xs :: 'a List).
insertionSort(xs) =
genSort X_splitInsertionSort X_joinInsertionSort xs"
SplitQuickSort [rule_format] :
"ALL (x :: 'a).
ALL (xs :: 'a List).
splitQuickSort(X_Cons x xs) =
(let (ys, zs) = partition (% t. x <_4 t) xs
in X_Split x (X_Cons ys (X_Cons zs Nil')))"
JoinQuickSort [rule_format] :
"ALL (x :: 'a).
ALL (ys :: 'a List).
ALL (zs :: 'a List).
joinQuickSort(X_Split x (X_Cons ys (X_Cons zs Nil'))) =
ys ++' X_Cons x zs"
QuickSort [rule_format] :
"ALL (xs :: 'a List).
quickSort(xs) = genSort X_splitQuickSort X_joinQuickSort xs"
SplitSelectionSort [rule_format] :
"ALL (xs :: 'a List).
makePartial (splitSelectionSort(xs)) =
restrictOp
(makePartial
(let x = minimum(xs)
in X_Split (makeTotal x) (X_Cons (delete (makeTotal x) xs) Nil')))
(defOp x)"
JoinSelectionSort [rule_format] :
"ALL (x :: 'a).
ALL (xs :: 'a List).
joinSelectionSort(X_Split x (X_Cons xs Nil')) = X_Cons x xs"
SelectionSort [rule_format] :
"ALL (xs :: 'a List).
selectionSort(xs) =
genSort X_splitSelectionSort X_joinSelectionSort xs"
SplitMergeSort [rule_format] :
"ALL (X_n :: X_Nat).
ALL (xs :: 'a List).
defOp (length'(xs) div' (X_gn_inj :: X_Nat => X_Int) 2'') &
makePartial ((X_gn_inj :: X_Nat => X_Int) X_n) =
length'(xs) div' (X_gn_inj :: X_Nat => X_Int) 2'' -->
splitMergeSort(xs) =
(let (ys, zs) = splitAt ((X_gn_inj :: X_Nat => X_Int) X_n) xs
in X_Split () (X_Cons ys (X_Cons zs Nil')))"
MergeNil [rule_format] :
"ALL (xs :: 'a List).
ALL (ys :: 'a List). xs = Nil' --> merge xs ys = ys"
MergeConsNil [rule_format] :
"ALL (v :: 'a).
ALL (vs :: 'a List).
ALL (xs :: 'a List).
ALL (ys :: 'a List).
xs = X_Cons v vs & ys = Nil' --> merge xs ys = xs"
MergeConsConsT [rule_format] :
"ALL (v :: 'a).
ALL (vs :: 'a List).
ALL (w :: 'a).
ALL (ws :: 'a List).
ALL (xs :: 'a List).
ALL (ys :: 'a List).
(xs = X_Cons v vs & ys = X_Cons w ws) & v <_4 w = True' -->
merge xs ys = X_Cons v (merge vs ys)"
MergeConsConsF [rule_format] :
"ALL (v :: 'a).
ALL (vs :: 'a List).
ALL (w :: 'a).
ALL (ws :: 'a List).
ALL (xs :: 'a List).
ALL (ys :: 'a List).
(xs = X_Cons v vs & ys = X_Cons w ws) & v <_4 w = False' -->
merge xs ys = X_Cons w (merge xs ws)"
JoinMergeSort [rule_format] :
"ALL (ys :: 'a List).
ALL (zs :: 'a List).
joinMergeSort(X_Split () (X_Cons ys (X_Cons zs Nil'))) =
merge ys zs"
MergeSort [rule_format] :
"ALL (xs :: 'a List).
mergeSort(xs) = genSort X_splitMergeSort X_joinMergeSort xs"
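(* The remaining axioms define the correctness predicates: list membership
   (elem), sortedness (isOrdered), and the permutation relation used in
   the conjectures Theorem01-Theorem14 below. *)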
ElemNil [rule_format] : "ALL (x :: 'a). ~ x elem Nil'"
ElemCons [rule_format] :
"ALL (x :: 'a).
ALL (y :: 'a).
ALL (ys :: 'a List). (x elem X_Cons y ys) = (x = y | x elem ys)"
IsOrderedNil [rule_format] : "isOrdered(Nil')"
IsOrderedCons [rule_format] :
"ALL (x :: 'a). isOrdered(X_Cons x Nil')"
IsOrderedConsCons [rule_format] :
"ALL (x :: 'a).
ALL (y :: 'a).
ALL (ys :: 'a List).
isOrdered(X_Cons x (X_Cons y ys)) =
(x <=_4 y = True' & isOrdered(X_Cons y ys))"
PermutationNil [rule_format] : "permutation(Nil', Nil')"
PermutationConsCons [rule_format] :
"ALL (x :: 'a).
ALL (xs :: 'a List).
ALL (y :: 'a).
ALL (ys :: 'a List).
permutation(X_Cons x xs, X_Cons y ys) =
(x = y & permutation(xs, ys) |
x elem ys & permutation(xs, X_Cons y (delete x ys)))"
declare ga_subt_reflexive [simp]
declare ga_subt_Int_XLt_Rat [simp]
declare ga_subt_Nat_XLt_Int [simp]
declare ga_subt_Pos_XLt_Nat [simp]
declare JoinInsertionSort [simp]
declare JoinQuickSort [simp]
declare JoinSelectionSort [simp]
declare JoinMergeSort [simp]
declare ElemNil [simp]
declare IsOrderedNil [simp]
declare IsOrderedCons [simp]
declare PermutationNil [simp]
theorem PermutationCons :
"ALL (x :: 'a).
ALL (y :: 'a). permutation(X_Cons x Nil', X_Cons y Nil') = (x = y)"
apply(auto)
apply(simp add: PermutationConsCons)+
done
setup "Header.record \"PermutationCons\""
theorem Theorem01 :
"ALL (xs :: 'a List). insertionSort(xs) = quickSort(xs)"
apply(auto)
oops
setup "Header.record \"Theorem01\""
theorem Theorem02 :
"ALL (xs :: 'a List). insertionSort(xs) = mergeSort(xs)"
oops
setup "Header.record \"Theorem02\""
theorem Theorem03 :
"ALL (xs :: 'a List). insertionSort(xs) = selectionSort(xs)"
oops
setup "Header.record \"Theorem03\""
theorem Theorem04 :
"ALL (xs :: 'a List). quickSort(xs) = mergeSort(xs)"
oops
setup "Header.record \"Theorem04\""
theorem Theorem05 :
"ALL (xs :: 'a List). quickSort(xs) = selectionSort(xs)"
oops
setup "Header.record \"Theorem05\""
theorem Theorem06 :
"ALL (xs :: 'a List). mergeSort(xs) = selectionSort(xs)"
oops
setup "Header.record \"Theorem06\""
theorem Theorem07 :
"ALL (xs :: 'a List). isOrdered(insertionSort(xs))"
oops
setup "Header.record \"Theorem07\""
theorem Theorem08 : "ALL (xs :: 'a List). isOrdered(quickSort(xs))"
oops
setup "Header.record \"Theorem08\""
theorem Theorem09 : "ALL (xs :: 'a List). isOrdered(mergeSort(xs))"
oops
setup "Header.record \"Theorem09\""
theorem Theorem10 :
"ALL (xs :: 'a List). isOrdered(selectionSort(xs))"
oops
setup "Header.record \"Theorem10\""
theorem Theorem11 :
"ALL (xs :: 'a List). permutation(xs, insertionSort(xs))"
oops
setup "Header.record \"Theorem11\""
theorem Theorem12 :
"ALL (xs :: 'a List). permutation(xs, quickSort(xs))"
oops
setup "Header.record \"Theorem12\""
theorem Theorem13 :
"ALL (xs :: 'a List). permutation(xs, mergeSort(xs))"
oops
setup "Header.record \"Theorem13\""
theorem Theorem14 :
"ALL (xs :: 'a List). permutation(xs, selectionSort(xs))"
oops
setup "Header.record \"Theorem14\""
end
|
Formal statement is:
lemma open_prod_elim:
  assumes "open S" and "x \<in> S"
  obtains A B where "open A" and "open B" and "x \<in> A \<times> B" and "A \<times> B \<subseteq> S"
Informal statement is:
If $S$ is an open set and $x \in S$, then there exist open sets $A$ and $B$ such that $x \in A \times B$ and $A \times B \subseteq S$.
|
*----------------------------------------------------------------------*
logical function next_msgamdist_diag(first,
& msdis,gamdis,
& nn,occ,ms,gam,nsym)
*----------------------------------------------------------------------*
* increment an entire set of Ms and IRREP distributions
*----------------------------------------------------------------------*
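* usage sketch (editorial note, inferred from the control flow below):
*   call once with first=.true. to obtain the initial distributions,
*   then repeatedly with first=.false. until .false. is returned:
*
*     ok = next_msgamdist_diag(.true.,msdis,gamdis,
*    &                         nn,occ,ms,gam,nsym)
*     do while(ok)
*       ! ... process msdis, gamdis ...
*       ok = next_msgamdist_diag(.false.,msdis,gamdis,
*    &                           nn,occ,ms,gam,nsym)
*     end do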
implicit none
include 'opdim.h'
include 'multd2h.h'
logical, intent(in) ::
& first
integer, intent(in) ::
& nn, ms, gam, nsym,
& occ(nn)
integer, intent(inout) ::
& msdis(nn),
& gamdis(nn)
logical ::
& success
logical, external ::
& first_msdistn, first_gamdistn,
& next_msdistn, next_gamdistn
if (first) then
! initialize
success = first_msdistn(msdis,ms,occ,nn)
success = success.and.
& first_gamdistn(gamdis,gam,nsym,nn)
else
! innermost index: increment IRREP distribution
if (next_gamdistn(gamdis,gam,nsym,nn)) then
success = .true.
! increment Ms distribution
else if (next_msdistn(msdis,ms,occ,nn)) then
            ! it's the same call to first_gamdistn as above, so
! actually the result *must* be true (not checked)
success =
& first_gamdistn(gamdis,gam,nsym,nn)
else
success = .false.
end if
end if
next_msgamdist_diag = success
return
end
|
> module NonNegRational.BasicProperties
> import Syntax.PreorderReasoning
> import NonNegRational.NonNegRational
> import NonNegRational.BasicOperations
> import Fraction.Fraction
> import Fraction.BasicOperations
> import Fraction.Predicates
> import Fraction.BasicProperties
> import Fraction.Normalize
> import Fraction.NormalizeProperties
> import Fraction.EqProperties
> import Fraction.Normal
> import Subset.Properties
> import Unique.Predicates
> import Unique.Properties
> import Nat.Positive
> import Nat.Coprime -- used by the implementation of |not1Eq0|!
> import Nat.GCDAlgorithm -- used by the implementation of |not1Eq0|!
> import Num.Refinements
> import Pairs.Operations
> import PNat.PNat
> import PNat.Operations
> import PNat.Properties
> import Basic.Operations
> %default total
> -- %access export
> %access public export
* Properties of |toFraction|:
> ||| toFraction is injective
> toFractionInjective : {x, y : NonNegRational} -> (toFraction x) = (toFraction y) -> x = y
> toFractionInjective {x} {y} p = subsetEqLemma1 x y p NormalUnique
> ||| toFraction preserves equality
> toFractionEqLemma2 : {x, y : NonNegRational} -> x = y -> (toFraction x) = (toFraction y)
> toFractionEqLemma2 {x} {y} p = getWitnessPreservesEq p
* Properties of |fromFraction| and |toFraction|:
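The proof of |fromToId| below reduces to |normalizePreservesNormal|:
normalizing an already-normal fraction is the identity.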
> ||| fromFraction is left inverse of toFraction
> fromToId : (x : NonNegRational) -> fromFraction (toFraction x) = x
> fromToId (Element x nx) = ( fromFraction (toFraction (Element x nx)) )
> ={ Refl }=
> ( fromFraction x )
> ={ Refl }=
> ( Element (normalize x) (normalNormalize x) )
> ={ toFractionInjective (normalizePreservesNormal x nx) }=
> ( Element x nx )
> QED
> ||| Equivalence of fractions implies equality of non-negative rationals
> fromFractionEqLemma : (x, y : Fraction) -> x `Eq` y -> fromFraction x = fromFraction y
> fromFractionEqLemma x y xEqy = s7 where
> s1 : normalize x = normalize y
> s1 = normalizeEqLemma2 x y xEqy
> s2 : Normal (normalize x) = Normal (normalize y)
> s2 = cong s1
> s3 : Normal (normalize x)
> s3 = normalNormalize x
> s4 : Normal (normalize y)
> s4 = normalNormalize y
> s5 : s3 = s4
> s5 = uniqueLemma (\ f => NormalUnique {x = f}) (normalize x) (normalize y) s3 s4 s1
> s6 : Element (normalize x) (normalNormalize x) = Element (normalize y) (normalNormalize y)
> s6 = depCong2 {alpha = Fraction}
> {P = Normal}
> {gamma = Subset Fraction Normal}
> {a1 = normalize x}
> {a2 = normalize y}
> {Pa1 = normalNormalize x}
> {Pa2 = normalNormalize y}
> {f = \ ZUZU => \ ZAZA => Element ZUZU ZAZA} s1 s5
> s7 : fromFraction x = fromFraction y
> s7 = s6
> ||| Denominators of non-negative rationals are greater than zero
> denLTLemma : (x : NonNegRational) -> Z `LT` den x
> denLTLemma x = s2 where
> s1 : Z `LT` den (toFraction x)
> s1 = Fraction.BasicProperties.denLTLemma (toFraction x)
> s2 : Z `LT` den x
> s2 = replace {P = \ ZUZU => Z `LT` ZUZU} Refl s1
> |||
> toFractionFromNatLemma : (n : Nat) -> toFraction (fromNat n) = fromNat n
> toFractionFromNatLemma n =
> ( toFraction (fromNat n) )
> ={ Refl }=
> ( toFraction (fromFraction (fromNat n)) )
> ={ Refl }=
> ( toFraction (fromFraction (n, Element (S Z) MkPositive)) )
> ={ Refl }=
> ( toFraction (Element (normalize (n, Element (S Z) MkPositive)) (normalNormalize (n, Element (S Z) MkPositive))) )
> ={ cong (toFractionInjective (normalizePreservesNormal (n, Element (S Z) MkPositive) fromNatNormal)) }=
> ( toFraction (Element (n, Element (S Z) MkPositive) fromNatNormal) )
> ={ Refl }=
> ( (n, Element (S Z) MkPositive) )
> ={ Refl }=
> ( fromNat n )
> QED
* Implementations:
> ||| NonNegRational is an implementation of Show
> implementation Show NonNegRational where
> show q = if (den q == 1)
> then show (num q)
> else show (num q) ++ "/" ++ show (den q)
> ||| NonNegRational is an implementation of Num
> implementation Num NonNegRational where
> (+) = plus
> (*) = mult
> fromInteger = fromNat . fromIntegerNat
> ||| NonNegRational is an implementation of DecEq
> implementation DecEq NonNegRational where
> decEq x y with (Decidable.Equality.decEq (toFraction x) (toFraction y))
> | (Yes p) = Yes (toFractionInjective p)
> | (No contra) = No (\ prf => contra (toFractionEqLemma2 prf))
> ||| NonNegRational is an implementation of Eq
> implementation Eq NonNegRational where
> (==) x y with (decEq x y)
> | (Yes _) = True
> | (No _) = False
* One is not zero
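The strategy: compute the numerators of |fromInteger 0| and |fromInteger 1|
(0 and 1, respectively), so an equality between the two rationals would
yield 1 = 0 in |Nat|, contradicting |SIsNotZ|.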
> num0Eq0 : NonNegRational.BasicOperations.num (fromInteger 0) = 0
> num0Eq0 = ( NonNegRational.BasicOperations.num (fromInteger 0) )
> ={ Refl }=
> ( Fraction.BasicOperations.num (toFraction (fromInteger 0)) )
> ={ Refl }=
> ( fst (toFraction (fromInteger 0)) )
> ={ Refl }=
> ( fst (toFraction (fromNat (fromIntegerNat 0))) )
> ={ cong (toFractionFromNatLemma (fromIntegerNat 0)) }=
> ( fst (fromNat (fromIntegerNat 0)) )
> ={ Refl }=
> ( fromIntegerNat 0 )
> ={ Refl }=
> ( 0 )
> QED
> num1Eq1 : NonNegRational.BasicOperations.num (fromInteger 1) = 1
> num1Eq1 = ( NonNegRational.BasicOperations.num (fromInteger 1) )
> ={ Refl }=
> ( Fraction.BasicOperations.num (toFraction (fromInteger 1)) )
> ={ Refl }=
> ( fst (toFraction (fromInteger 1)) )
> ={ Refl }=
> ( fst (toFraction (fromNat (fromIntegerNat 1))) )
> ={ cong (toFractionFromNatLemma (fromIntegerNat 1)) }=
> ( fst (fromNat (fromIntegerNat 1)) )
> ={ Refl }=
> ( fromIntegerNat 1 )
> ={ Refl }=
> ( 1 )
> QED
> not1Eq0 : Not ((=) {A = NonNegRational} {B = NonNegRational} (fromInteger 1) (fromInteger 0))
> not1Eq0 prf = SIsNotZ s3 where
> s1 : (=) {A = Nat} {B = Nat}
> (NonNegRational.BasicOperations.num (fromInteger 1))
> (NonNegRational.BasicOperations.num (fromInteger 0))
> s1 = cong {f = NonNegRational.BasicOperations.num} prf
> s2 : (=) {A = Nat} {B = Nat} 1 (NonNegRational.BasicOperations.num (fromInteger 0))
> s2 = replace {P = \ X => (=) {A = Nat} {B = Nat} X (NonNegRational.BasicOperations.num (fromInteger 0))} num1Eq1 s1
> s3 : (=) {A = Nat} {B = Nat} 1 0
> s3 = replace {P = \ X => (=) {A = Nat} {B = Nat} 1 X} num0Eq0 s2
* Further properties of toFraction, fromInteger, etc.:
> |||
> toFractionFromIntegerLemma : (n : Integer) -> toFraction (fromInteger n) = fromInteger n
> toFractionFromIntegerLemma n =
> ( toFraction (fromInteger n) )
> ={ Refl }=
> ( toFraction (fromNat (fromIntegerNat n)) )
> ={ toFractionFromNatLemma (fromIntegerNat n) }=
> ( fromNat (fromIntegerNat n) )
> ={ Refl }=
> ( fromInteger n )
> QED
> ||| fromFraction is linear
> fromFractionLinear : (x, y : Fraction) -> fromFraction (x + y) = fromFraction x + fromFraction y
> fromFractionLinear x y =
> let s1 = Element (normalize (x + y)) (normalNormalize (x + y)) in
> let s2 = Element (normalize (normalize x + normalize y)) (normalNormalize (normalize x + normalize y)) in
> ( fromFraction (x + y) )
> ={ Refl }=
> ( Element (normalize (x + y)) (normalNormalize (x + y)) )
> ={ subsetEqLemma1 s1 s2 (sym (normalizePlusElim x y)) NormalUnique }=
> ( Element (normalize (normalize x + normalize y)) (normalNormalize (normalize x + normalize y)) )
> ={ Refl }=
> ( fromFraction (normalize x + normalize y) )
> ={ Refl }=
> ( fromFraction (toFraction (Element (normalize x) (normalNormalize x))
> +
> toFraction (Element (normalize y) (normalNormalize y))) )
> ={ Refl }=
> ( fromFraction (toFraction (fromFraction x) + toFraction (fromFraction y)) )
> ={ Refl }=
> ( fromFraction x + fromFraction y )
> QED
* Properties of addition:
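Each law below is lifted from the corresponding law on |Fraction|: unfold
the operation to |fromFraction| applied to fractions, rewrite with the
|Fraction| lemma, and fold back with |fromToId|.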
> ||| Addition is commutative
> plusCommutative : (x : NonNegRational) -> (y : NonNegRational) -> x + y = y + x
> plusCommutative x y =
> ( x + y )
> ={ Refl }=
> ( fromFraction (toFraction x + toFraction y) )
> ={ cong {f = fromFraction} (plusCommutative (toFraction x) (toFraction y)) }=
> ( fromFraction (toFraction y + toFraction x) )
> ={ Refl }=
> ( y + x )
> QED
> ||| 0 is neutral element of addition
> plusZeroRightNeutral : (x : NonNegRational) -> x + 0 = x
> plusZeroRightNeutral x =
> ( x + 0 )
> ={ Refl }=
> ( fromFraction (toFraction x + toFraction 0) )
> ={ cong {f = fromFraction} (plusZeroRightNeutral (toFraction x)) }=
> ( fromFraction (toFraction x) )
> ={ fromToId x }=
> ( x )
> QED
> ||| 0 is neutral element of addition
> plusZeroLeftNeutral : (x : NonNegRational) -> 0 + x = x
> plusZeroLeftNeutral x =
> ( 0 + x )
> ={ plusCommutative 0 x }=
> ( x + 0 )
> ={ plusZeroRightNeutral x }=
> ( x )
> QED
> ||| Addition is associative
> plusAssociative : (x, y, z : NonNegRational) -> x + (y + z) = (x + y) + z
> plusAssociative x y z =
> let x' = toFraction x in
> let y' = toFraction y in
> let z' = toFraction z in
> ( x + (y + z) )
> ={ Refl }=
> ( fromFraction (x' + toFraction (fromFraction (y' + z'))) )
> ={ Refl }=
> ( fromFraction (x' + normalize (y' + z')) )
> ={ Refl }=
> ( Element (normalize (x' + normalize (y' + z'))) (normalNormalize (x' + normalize (y' + z'))) )
> ={ toFractionInjective (normalizePlusElimRight x' (y' + z')) }=
> ( Element (normalize (x' + (y' + z'))) (normalNormalize (x' + (y' + z'))) )
> ={ toFractionInjective (cong (plusAssociative x' y' z')) }=
> ( Element (normalize ((x' + y') + z')) (normalNormalize ((x' + y') + z')) )
> ={ sym (toFractionInjective (normalizePlusElimLeft (x' + y') z')) }=
> ( Element (normalize (normalize (x' + y') + z')) (normalNormalize (normalize (x' + y') + z')) )
> ={ Refl }=
> ( fromFraction (normalize (x' + y') + z') )
> ={ Refl }=
> ( fromFraction (toFraction (fromFraction (x' + y')) + z') )
> ={ Refl }=
> ( (x + y) + z )
> QED
* Properties of multiplication:
> ||| Multiplication is commutative
> multCommutative : (x : NonNegRational) -> (y : NonNegRational) -> x * y = y * x
> multCommutative x y =
> ( x * y )
> ={ Refl }=
> ( fromFraction ((toFraction x) * (toFraction y)) )
> ={ cong {f = fromFraction} (multCommutative (toFraction x) (toFraction y)) }=
> ( fromFraction ((toFraction y) * (toFraction x)) )
> ={ Refl }=
> ( y * x )
> QED
> ||| 1 is neutral element of multiplication
> multOneRightNeutral : (x : NonNegRational) -> x * 1 = x
> multOneRightNeutral x =
> ( x * 1 )
> ={ Refl }=
> ( fromFraction ((toFraction x) * (toFraction 1)) )
> ={ Refl }=
> ( fromFraction ((toFraction x) * (toFraction (fromInteger 1))) )
> ={ cong {f = \ ZUZU => fromFraction ((toFraction x) * ZUZU)} (toFractionFromIntegerLemma 1) }=
> ( fromFraction ((toFraction x) * (fromInteger 1)) )
> ={ cong {f = fromFraction} (multOneRightNeutral (toFraction x)) }=
> ( fromFraction (toFraction x) )
> ={ fromToId x }=
> ( x )
> QED
> ||| 1 is neutral element of multiplication
> multOneLeftNeutral : (x : NonNegRational) -> 1 * x = x
> multOneLeftNeutral x =
> ( 1 * x )
> ={ multCommutative 1 x }=
> ( x * 1 )
> ={ multOneRightNeutral x }=
> ( x )
> QED
> |||
> multZeroRightZero : (x : NonNegRational) -> x * 0 = 0
> multZeroRightZero x =
> let x' = toFraction x in
> ( x * 0 )
> ={ Refl }=
> ( fromFraction (x' * 0) )
> ={ Refl }=
> ( Element (normalize (x' * 0)) (normalNormalize (x' * 0)) )
> ={ toFractionInjective (normalizeEqLemma2 (x' * 0) 0 (multZeroRightEqZero x')) }=
> ( Element (normalize 0) (normalNormalize 0) )
> ={ Refl }=
> ( fromFraction 0 )
> ={ fromToId 0 }=
> ( 0 )
> QED
> |||
> multZeroLeftZero : (x : NonNegRational) -> 0 * x = 0
> multZeroLeftZero x =
> ( 0 * x )
> ={ multCommutative 0 x }=
> ( x * 0 )
> ={ multZeroRightZero x }=
> ( 0 )
> QED
* Properties of addition and multiplication:
> |||
> multDistributesOverPlusRight : (x, y, z : NonNegRational) -> x * (y + z) = (x * y) + (x * z)
> multDistributesOverPlusRight x y z =
> let x' = toFraction x in
> let y' = toFraction y in
> let z' = toFraction z in
> ( x * (y + z) )
> ={ Refl }=
> ( fromFraction (x' * toFraction (fromFraction (y' + z'))) )
> ={ Refl }=
> ( fromFraction (x' * (normalize (y' + z'))) )
> ={ Refl }=
> ( Element (normalize (x' * (normalize (y' + z')))) (normalNormalize (x' * (normalize (y' + z')))) )
> ={ toFractionInjective (normalizeMultElimRight x' (y' + z')) }=
> ( Element (normalize (x' * (y' + z'))) (normalNormalize (x' * (y' + z'))) )
> ={ toFractionInjective (normalizeEqLemma2 (x' * (y' + z')) ((x' * y') + (x' * z')) multDistributesOverPlusRightEq) }=
> ( Element (normalize ((x' * y') + (x' * z'))) (normalNormalize ((x' * y') + (x' * z'))) )
> ={ toFractionInjective (sym (normalizePlusElim (x' * y') (x' * z'))) }=
> ( Element (normalize (normalize (x' * y') + normalize (x' * z')))
> (normalNormalize (normalize (x' * y') + normalize (x' * z'))) )
> ={ Refl }=
> ( fromFraction ((normalize (x' * y')) + (normalize (x' * z'))) )
> ={ Refl }=
> ( fromFraction ((toFraction (fromFraction (x' * y'))) + (toFraction (fromFraction (x' * z')))) )
> ={ Refl }=
> ( (x * y) + (x * z) )
> QED
> |||
> multDistributesOverPlusLeft : (x, y, z : NonNegRational) -> (x + y) * z = (x * z) + (y * z)
> multDistributesOverPlusLeft x y z =
> ( (x + y) * z )
> ={ multCommutative (x + y) z }=
> ( z * (x + y) )
> ={ multDistributesOverPlusRight z x y }=
> ( z * x + z * y )
> ={ cong {f = \ ZUZU => ZUZU + z * y} (multCommutative z x) }=
> ( x * z + z * y )
> ={ cong {f = \ ZUZU => x * z + ZUZU} (multCommutative z y) }=
> ( x * z + y * z )
> QED
* Implementations of refinements of |Num|:
> ||| NonNegRational is an implementation of NumPlusZeroNeutral
> implementation NumPlusZeroNeutral NonNegRational where
> plusZeroLeftNeutral = NonNegRational.BasicProperties.plusZeroLeftNeutral
> plusZeroRightNeutral = NonNegRational.BasicProperties.plusZeroRightNeutral
> ||| NonNegRational is an implementation of NumPlusAssociative
> implementation NumPlusAssociative NonNegRational where
> plusAssociative = NonNegRational.BasicProperties.plusAssociative
> ||| NonNegRational is an implementation of NumMultZeroOne
> implementation NumMultZeroOne NonNegRational where
> multZeroRightZero = NonNegRational.BasicProperties.multZeroRightZero
> multZeroLeftZero = NonNegRational.BasicProperties.multZeroLeftZero
> multOneRightNeutral = NonNegRational.BasicProperties.multOneRightNeutral
> multOneLeftNeutral = NonNegRational.BasicProperties.multOneLeftNeutral
> ||| NonNegRational is an implementation of NumMultDistributesOverPlus
> implementation NumMultDistributesOverPlus NonNegRational where
> multDistributesOverPlusRight = NonNegRational.BasicProperties.multDistributesOverPlusRight
> multDistributesOverPlusLeft = NonNegRational.BasicProperties.multDistributesOverPlusLeft
* Elementary arithmetic properties:
> multElimRight : (m : Nat) -> (n, d : PNat) -> fromFraction (m * (toNat n), d * n) = fromFraction (m, d)
> multElimRight m n d =
> let s1 = Element (normalize (m * (toNat n), d * n)) (normalNormalize (m * (toNat n), d * n)) in
> let s2 = Element (normalize (m, d)) (normalNormalize (m, d)) in
> ( fromFraction (m * (toNat n), d * n) )
> ={ Refl }=
> ( Element (normalize (m * (toNat n), d * n)) (normalNormalize (m * (toNat n), d * n)) )
> ={ subsetEqLemma1 s1 s2 (normalizeUpscaleLemma (m, d) n) NormalUnique }=
> ( Element (normalize (m, d)) (normalNormalize (m, d)) )
> ={ Refl }=
> ( fromFraction (m, d) )
> QED
> postulate sumOneLemma1 : {n: Nat} ->
> fromFraction (n, Element (S n) MkPositive)
> +
> fromFraction (1, Element (S n) MkPositive)
> =
> 1
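Note that |sumOneLemma1|, which states n/(n+1) + 1/(n+1) = 1, is
introduced as a |postulate| and hence assumed without proof.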
> {-
> ---}
|
lemma inj_upd: "inj_on upd {..< n}"
|
From mathcomp.ssreflect Require Import ssreflect seq ssrbool ssrnat.
Require Import compcert.common.Memory.
Require Import compcert.common.Globalenvs.
(* The concurrent machinery*)
Require Import VST.concurrency.concurrent_machine.
Require Import VST.concurrency.dry_machine. Import Concur.
Require Import VST.concurrency.scheduler.
Require Import VST.concurrency.lifting.
(* We lift to a whole-program simulation on the dry concurrency machine *)
Require Import VST.sepcomp.wholeprog_simulations. Import Wholeprog_sim.
Require Import VST.sepcomp.event_semantics.
(** The X86 DryConc Machine*)
Require Import VST.concurrency.dry_context.
(** The Clight DryConc Machine*)
Require Import VST.concurrency.DryMachineSource.
Require Import VST.concurrency.machine_simulation. Import machine_simulation.
Module lifting_safety (SEMT: Semantics) (Machine: MachinesSig with Module SEM := SEMT).
Module lftng:= lifting SEMT Machine. Import lftng.
Module foo:= Machine.
Import THE_DRY_MACHINE_SOURCE.
Import THE_DRY_MACHINE_SOURCE.DMS.
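(* The definitions below are projections of the machine simulation record
   concur_sim relating the two dry concurrency machines: the match
   relation, the running/halted-thread facts, and the well-founded core
   order. *)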
Definition match_st gT gS main psrc ptgt sch:=
Machine_sim.match_state
_ _ _ _ _ _ _ _
(concur_sim gT gS main psrc ptgt sch).
Definition running_thread gT gS main psrc ptgt sch:=
Machine_sim.thread_running
_ _ _ _ _ _ _ _
(concur_sim gT gS main psrc ptgt sch).
Definition halt_axiom gT gS main psrc ptgt sch:=
Machine_sim.thread_halted
_ _ _ _ _ _ _ _
(concur_sim gT gS main psrc ptgt sch).
Definition core_ord gT gS main psrc ptgt sch:=
Machine_sim.core_ord
_ _ _ _ _ _ _ _
(concur_sim gT gS main psrc ptgt sch).
Definition core_ord_wf gT gS main psrc ptgt sch:=
Machine_sim.core_ord_wf
_ _ _ _ _ _ _ _
(concur_sim gT gS main psrc ptgt sch).
Definition same_running gT gS main psrc ptgt sch:=
Machine_sim.thread_running
_ _ _ _ _ _ _ _
(concur_sim gT gS main psrc ptgt sch).
Definition same_halted gT gS main psrc ptgt sch:=
Machine_sim.thread_halted
_ _ _ _ _ _ _ _
(concur_sim gT gS main psrc ptgt sch).
(* THE_DRY_MACHINE_SOURCE.dmachine_state
Machine.DryConc.MachState
*)
(* This axiom comes from the new simulation*)
(*Axiom blah: forall Tg Sg main p U,
forall cd j Sm Tm Sds Tds,
(match_st Tg Sg main p U) cd j Sds Sm Tds Tm ->
forall sch,
DryConc.valid (sch, snd (fst Sds), snd Sds) <->
Machine.DryConc.valid (sch, snd (fst Tds), snd Tds). *)
Axiom halted_trace: forall U tr tr' st,
DryConc.halted (U, tr, st) ->
DryConc.halted (U, tr', st).
Axiom halted_trace': forall U tr tr' st,
Machine.DryConc.halted (U, tr, st) ->
Machine.DryConc.halted (U, tr', st).
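(* These two axioms state that whether a machine is halted does not depend
   on the trace component of its state, for the source and target machines
   respectively. *)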
(*Axiom determinismN:
forall U p,
forall ge sch st2 m2 st2' m2',
forall n0 : nat,
machine_semantics_lemmas.thread_stepN
(Machine.DryConc.new_MachineSemantics U p) ge n0 sch st2 m2 st2' m2' ->
forall U'0 : mySchedule.schedule,
Machine.DryConc.valid (U'0, [::], st2) ->
machine_semantics_lemmas.thread_stepN
(Machine.DryConc.new_MachineSemantics U p) ge n0 U'0 st2 m2 st2' m2'.*)
(*Lemma stepN_safety:
forall U p ge st2' m2' n,
forall (condition : forall sch : mySchedule.schedule,
Machine.DryConc.valid (sch, [::], st2') ->
Machine.DryConc.explicit_safety ge sch st2' m2'),
forall (st2 : Machine.DryMachine.ThreadPool.t)
(m2 : mem) (sch : mySchedule.schedule),
machine_semantics_lemmas.thread_stepN
(Machine.DryConc.new_MachineSemantics U p) ge n sch st2 m2 st2' m2' ->
forall U' : mySchedule.schedule,
Machine.DryConc.valid (U', [::], st2) ->
Machine.DryConc.explicit_safety ge U' st2 m2 .
Proof.
  (*Make this a separate lemma*)
induction n.
- intros ? ? ? ? stepN U' val.
inversion stepN; subst.
apply: (condition _ val).
- intros ? ? ? ? stepN U' val.
assert (DeterminismN: forall n,
machine_semantics_lemmas.thread_stepN
(Machine.DryConc.new_MachineSemantics U p) ge n sch st2
m2 st2' m2' ->
forall U', Machine.DryConc.valid (U', [::], st2) ->
machine_semantics_lemmas.thread_stepN
(Machine.DryConc.new_MachineSemantics U p) ge n U' st2
m2 st2' m2'
).
apply: determinismN. (*This is true by determinism. *)
eapply DeterminismN in stepN; eauto.
inversion stepN.
move: H => /= [] m_ [] istep stepN'.
eapply Machine.DryConc.internal_safety; eauto.
Qed.
Lemma stepN_safety':
forall U p ge st2' m2' n,
forall (st2 : Machine.DryMachine.ThreadPool.t)
(m2 : mem) (sch : mySchedule.schedule),
machine_semantics_lemmas.thread_stepN
(Machine.DryConc.new_MachineSemantics U p) ge n sch st2 m2 st2' m2' ->
forall (condition : forall sch : mySchedule.schedule,
Machine.DryConc.valid (sch, [::], st2') ->
Machine.DryConc.explicit_safety ge sch st2' m2'),
forall U' : mySchedule.schedule,
Machine.DryConc.valid (U', [::], st2) ->
Machine.DryConc.explicit_safety ge U' st2 m2 .
Proof. intros. apply: stepN_safety; eauto. Qed.
*)
(*Lemma safety_equivalence_stutter' {core_data: Type} {core_ord}:
@well_founded core_data core_ord ->
core_data ->
forall (ge : Machine.DryMachine.ThreadPool.SEM.G)
(U : mySchedule.schedule) (st : Machine.DryMachine.ThreadPool.t)
(m : mem),
(exists cd : core_data,
@Machine.DryConc.explicit_safety_stutter core_data core_ord ge cd U st m) ->
Machine.DryConc.explicit_safety ge U st m.
Proof. A dmitted.*)
Lemma safety_preservation'': forall main psrc ptgt U Sg Tg tr Sds Sm Tds Tm cd
(*HboundedS: DryConc.bounded_mem Sm*)
(*HboundedT: DryConc.bounded_mem Tm*)
(MATCH: exists j, (match_st Tg Sg main psrc ptgt U) cd j Sds Sm Tds Tm),
(forall sch, DryConc.new_valid ( tr, Sds, Sm) sch ->
DryConc.explicit_safety Sg sch Sds Sm) ->
(forall sch, Machine.DryConc.valid (sch, tr, Tds) ->
Machine.DryConc.stutter_stepN_safety ( core_ord:=core_ord Tg Sg main psrc ptgt U) Tg cd sch Tds Tm).
Proof.
move => main psrc ptgt U Sg Tg.
cofix CIH.
intros.
assert (H':=H).
specialize (H sch).
move: MATCH => [] j MATCH.
assert (equivalid: forall Tg Sg main psrc ptgt U,
forall cd j Sm Tm tr Sds Tds,
(match_st Tg Sg main psrc ptgt U) cd j Sds Sm Tds Tm ->
forall sch,
DryConc.valid (sch, tr, Sds) <->
Machine.DryConc.valid (sch, tr, Tds) ).
{ rewrite /DryConc.valid
/DryConc.correct_schedule
/DryConc.unique_Krun
/THE_DRY_MACHINE_SOURCE.SCH.schedPeek
/Machine.DryConc.valid
/Machine.DryConc.correct_schedule
/Machine.DryConc.unique_Krun
/mySchedule.schedPeek /=.
move => ? ? ? ? ? ? ? ? ? ? ? Sds' Tds' MATCH' sch0.
destruct (List.hd_error sch0); try solve[split; auto].
split.
- move => H1 j0 cntj0 q KRUN not_halted.
(*eapply H1.*)
(*pose (same_running Tg Sg main p U cd j Sds' Sm Tds' Tm).*)
pose ( machine_semantics.runing_thread (new_DMachineSem sch psrc)).
unfold new_DMachineSem in P; simpl in P.
unfold DryConc.unique_Krun in P.
eapply (same_running) in KRUN; eauto.
- move => H1 j0 cntj0 q KRUN not_halted.
(*eapply H1.*)
(*pose (same_running Tg Sg main p U cd j Sds' Sm Tds' Tm).*)
pose ( machine_semantics.runing_thread (new_DMachineSem sch psrc)).
unfold new_DMachineSem in P; simpl in P.
unfold DryConc.unique_Krun in P.
eapply (same_running) in KRUN; eauto.
}
move: (MATCH) => /equivalid /= AA.
move: (AA tr sch) => [A B].
assert (HH:DryConc.new_valid (tr, Sds, Sm) sch).
{ (* split.
- apply: B. auto.
- simpl; assumption. *)
apply: B; auto.
}
apply H in HH.
move: MATCH.
inversion HH; clear HH.
(*Halted case*)
- {
simpl in *; subst.
econstructor.
move: MATCH H1 => /halt_axiom /= HHH /(halted_trace _ nil nil Sds) AAA.
destruct (DryConc.halted (sch, nil ,Sds)) eqn:BBB; try solve [inversion AAA].
move: BBB=> /HHH [] j' [] v2 [] inv Halt.
rewrite Halt=> //.
Guarded.
}
- (*Internal StepN case*)
{ simpl in H2. pose (note2:=5). simpl in *; subst.
assert (my_core_diagram:= Machine_sim.thread_diagram
_ _ _ _ _ _ _ _
(concur_sim Tg Sg main psrc ptgt U)).
simpl in my_core_diagram.
intros MATCH.
eapply my_core_diagram (*with (st1':= (Tds))(m1':=Tm)*) in MATCH; eauto.
clear my_core_diagram.
move: MATCH => [] st2' [] m2' [] cd' [] mu' [] MATCH' [step_plus | [] [] n stepN data_step ].
(*Internal step Plus*)
- unfold machine_semantics_lemmas.thread_step_plus in step_plus.
destruct step_plus as [n step_plus].
apply (coinductive_safety.internal_safetyN_stut) with (cd':= cd')(y':=(st2',m2'))(n:=n).
+
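(* Auxiliary lemma, stated inline: a [thread_stepN] run of the target
machine semantics is also a [coinductive_safety.stepN] run of its
internal-step relation. *)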
Lemma thread_stepN_stepN:
forall Tg n sch Tds Tm st2' m2' U p,
machine_semantics_lemmas.thread_stepN
(Machine.DryConc.new_MachineSemantics U p) Tg n sch Tds
Tm st2' m2' ->
coinductive_safety.stepN
SC.Sch (Machine.DryMachine.ThreadPool.t * mem)
(fun (U0 : SC.Sch) (stm stm' : Machine.DryMachine.ThreadPool.t * mem)
=>
@Machine.DryConc.internal_step Tg U0 (fst stm) (snd stm)
(fst stm') (snd stm'))
sch (Tds, Tm) (st2', m2') n.
Proof.
move=> Tg n.
induction n; simpl.
- move=> sch Tds Tm st2' m2' U p HH; inversion HH; subst.
constructor 1; auto.
- move=> sch Tds Tm st2' m2' U p [] c2 [] m2 [] step PAST.
econstructor 2; eauto.
simpl. auto.
Qed.
eapply thread_stepN_stepN; eauto.
+ simpl. intros.
eapply CIH with (Sds:=fst y') (Sm:=snd y'); eauto.
(* * admit. (* By stepping! *) *)
(* * destruct H3; auto. (* By stepping! *) *)
* intros. destruct y' as [a b]; eapply H2.
auto.
(* * simpl in *.
destruct H3; eauto. *)
- (*Maybe stutter.... depends on n*)
destruct n.
+ (*Stutter case*)
inversion stepN; subst.
apply coinductive_safety.stutter with (cd':=cd'); auto.
apply CIH with (tr:=tr)(Sds:= (fst y') )(Sm:=(snd y')).
(* * admit. (* by stepping *) *)
(* * auto. *)
* exists mu'; eassumption.
* destruct y' as [a b]; apply H2; auto.
* assumption.
+ (*Fake stutter case*)
apply (coinductive_safety.internal_safetyN_stut) with (cd':= cd')(y':=(st2',m2'))(n:=n).
* eapply thread_stepN_stepN; eauto.
* {simpl. intros.
eapply CIH with (Sds:=fst y') (Sm:=snd y'). Guarded.
- exists mu'; assumption.
- destruct y' as [a b]; eapply H2.
- assumption.
}
}
- (*External step case *)
assert (my_machine_diagram:= Machine_sim.machine_diagram
_ _ _ _ _ _ _ _
(concur_sim Tg Sg main psrc ptgt U)).
simpl in my_machine_diagram.
intros MATCH.
eapply my_machine_diagram with (st1':= fst y')(m1':=snd y') in MATCH; eauto.
+ clear my_machine_diagram.
move: MATCH => [] st2' [] m2' [] cd' [] mu' [] MATCH' step.
apply coinductive_safety.external_safetyN_stut with (cd':=cd')(x':=x')(y':= (st2', m2')).
* apply step.
* intros; eapply CIH with (tr:=tr)(Sds:= (fst y') )(Sm:=(snd y')).
-- exists mu'; exact MATCH'.
-- destruct y' as [a b]; eapply H2.
-- assumption.
Guarded.
Unshelve.
auto.
Qed.
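(* Corollary: the same preservation result restated with [explicit_safety]
on the target side, via the stutter/stepN safety equivalence. *)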
Lemma safety_preservation': forall main psrc ptgt U Sg Tg tr Sds Sm Tds Tm
(MATCH: exists cd j, (match_st Tg Sg main psrc ptgt U) cd j Sds Sm Tds Tm),
(forall sch, DryConc.valid (sch, tr, Sds) ->
DryConc.explicit_safety Sg sch Sds Sm) ->
(forall sch, Machine.DryConc.valid (sch, tr, Tds) ->
Machine.DryConc.explicit_safety Tg sch Tds Tm).
Proof.
move=> main psrc ptgt U Sg Tg tr Sds Sm Tds Tm [] cd [] mu MATCH HH sch VAL.
apply @coinductive_safety.safety_stutter_stepN_equiv
with (core_ord:=core_ord Tg Sg main psrc ptgt U); auto.
+ apply (core_ord_wf Tg Sg main psrc ptgt U).
(* + split; auto; simpl. *)
+ exists cd.
apply safety_preservation'' with (tr:=tr)(Sds:=Sds)(Sm:=Sm); try exists mu; assumption.
Qed.
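(* Top-level statement: [safe_new_step] safety is preserved (with the empty
trace), using the safety equivalences on both source and target sides. *)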
Lemma safety_preservation: forall main psrc ptgt U Sg Tg Sds Sm Tds Tm
(MATCH: exists cd j, (match_st Tg Sg main psrc ptgt U) cd j Sds Sm Tds Tm),
(forall sch, DryConc.valid (sch, nil, Sds) ->
DryConc.safe_new_step Sg (sch, nil, Sds) Sm) ->
(forall sch, Machine.DryConc.valid (sch, nil, Tds) ->
Machine.DryConc.safe_new_step Tg (sch, nil, Tds) Tm).
Proof.
intros.
eapply Machine.DryConc.safety_equivalence2; auto.
intros.
eapply safety_preservation' with (tr:=nil); eauto.
intros.
eapply DryConc.safety_equivalence2; auto.
Qed.
End lifting_safety.
|
module CyclicModuleDependency where
import CyclicModuleDependency
|
-----------------------------------------------------------------------------
{- |
Module : Numeric.LinearAlgebra
Copyright : (c) Alberto Ruiz 2006-10
License : GPL-style
Maintainer : Alberto Ruiz (aruiz at um dot es)
Stability : provisional
Portability : uses ffi
This module reexports all normally required functions for Linear Algebra applications.
It also provides instances of standard classes 'Show', 'Read', 'Eq',
'Num', 'Fractional', and 'Floating' for 'Vector' and 'Matrix'.
In arithmetic operations one-component vectors and matrices automatically
expand to match the dimensions of the other operand.
-}
-----------------------------------------------------------------------------
module Numeric.LinearAlgebra (
module Numeric.Container,
module Numeric.LinearAlgebra.Algorithms
) where
import Numeric.Container
import Numeric.LinearAlgebra.Algorithms
import Numeric.Matrix()
import Numeric.Vector() |
State Before: α : Type u
β : Type v
γ : Type w
inst✝³ : TopologicalSpace α
inst✝² : LinearOrder α
inst✝¹ : OrderTopology α
inst✝ : SecondCountableTopology α
⊢ Set.Countable {x | ∃ y, y < x ∧ Ioo y x = ∅}
State After: no goals
Tactic: simpa only [← covby_iff_Ioo_eq] using countable_setOf_covby_left |
Thinking of buying a motorbike? Visit RM Motors on Bhagwati Hospital Road at Navagaon in Dahisar (west). All brands under one roof, low prices, good finance options and knowledgeable staff: they have it all.
Buy your next sparkling new bike from RM Motors, or refer the bike superstore to a friend and earn cash or a discount that you can pass on to others or use as you please.
Get a full petrol refill before you wheel the bike out of RM Motors. Enjoy your new bike, and when you are ready for a new one after many years of memorable riding, we will gladly accept your old bike; you can take either cash or a discount on your swanky new bike. |
------------------------------------------------------------------------
-- The Agda standard library
--
-- This module is DEPRECATED. Please use
-- Data.List.Relation.Binary.Permutation.Propositional directly.
------------------------------------------------------------------------
{-# OPTIONS --without-K --safe #-}
module Data.List.Relation.Binary.Permutation.Inductive where
{-# WARNING_ON_IMPORT
"Data.List.Relation.Binary.Permutation.Inductive was deprecated in v1.1.
Use Data.List.Relation.Binary.Permutation.Propositional instead."
#-}
open import Data.List.Relation.Binary.Permutation.Propositional public
|
Here is the list of the best neurosurgeons for spinal cord injuries in Multan. Find complete details, timings, patient reviews and contact information. Book an appointment or take a video consultation with the listed doctors. Call the Marham helpline at 042-32591427 to schedule your appointment. |
Require Import Coq.Init.Byte FunctionalExtensionality EqdepFacts List Ndigits ZArith Lia.
From Minirust.def Require Import ty int_encoding.
From Minirust.proof.lemma Require Import utils.
From Minirust.proof.int_raw Require Import low.
(* TODO: clean up this file. There are a lot of things that can be simplified using the basenum abstraction. *)
(* Every other number relevant for `int_in_range` can be expressed neatly using basenum. *)
Definition basenum (size: Size) := (2 ^ (Z.of_nat size*8-1))%Z.
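(* Illustrative sanity check (a sketch; it assumes [Size] reduces like
[nat], as its use above suggests): for a 1-byte integer,
basenum 1 = 2^(1*8-1) = 2^7 = 128, so signed bytes range over
[-128, 128) and unsigned bytes over [0, 256).
Compute (basenum 1). (* expected: 128%Z *)
Compute (basenum 2). (* expected: 32768%Z *) *)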
Lemma double_base {size: Size} (H: size > 0) :
(2 ^ (Z.of_nat size * 8))%Z = ((basenum size) * 2)%Z.
Proof.
unfold basenum.
declare x Hx (Z.of_nat size * 8)%Z.
assert (x > 0)%Z as B. { lia. }
rewrite Hx. clear - x B.
destruct x; try (simpl; lia).
assert (2^(Z.pos p) = 2 * 2^(Z.pos p-1))%Z; try lia.
rewrite <- (Z.pow_succ_r 2); try lia.
replace (Z.succ (Z.pos p - 1))%Z with (Z.pos p); lia.
Qed.
Lemma start_to_base (size: Size) (signedness: Signedness) :
int_start size signedness = match signedness with
| Unsigned => 0%Z
| Signed => (-(basenum size))%Z
end.
Proof.
destruct signedness; unfold int_start.
- unfold basenum. reflexivity.
- reflexivity.
Qed.
Lemma stop_to_base (size: Size) (signedness: Signedness) :
(size > 0) ->
int_stop size signedness = match signedness with
| Unsigned => ((basenum size) * 2)%Z
| Signed => basenum size
end.
Proof.
intros Hs.
destruct signedness; unfold int_stop,basenum.
- reflexivity.
- apply (double_base Hs).
Qed.
Lemma offset_to_base (size: Size) :
(size > 0) ->
signed_offset size = ((basenum size) * 2)%Z.
Proof.
apply double_base.
Qed.
Ltac to_base := repeat (
unfold int_in_range ||
(rewrite stop_to_base; try assumption) ||
(rewrite start_to_base; try assumption) ||
(rewrite offset_to_base; try assumption)
).
Ltac to_base_in x := repeat (
unfold int_in_range in x ||
(rewrite stop_to_base in x; try assumption) ||
(rewrite start_to_base in x; try assumption) ||
(rewrite offset_to_base in x; try assumption)
).
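(* [to_base] and [to_base_in] normalize range goals and hypotheses: they
unfold [int_in_range] and rewrite the start/stop/offset bounds into
[basenum] form, after which [lia] can discharge the arithmetic. *)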
Lemma lemma2 (int: Int) (size: Size) (H: int_in_range int size Signed = true) (H2 : (int >=? 0)%Z = true) :
(size > 0) ->
int_in_range int size Unsigned = true.
Proof.
intros Hs.
to_base_in H.
to_base.
lia.
Qed.
Lemma lemma3 (size: Size) (int: Int) (H1: (int >=? 0)%Z = false) (H2: int_in_range int size Signed = true) (Hs: size > 0) :
int_in_range (int + signed_offset size)%Z size Unsigned = true.
Proof.
to_base.
to_base_in H2.
lia.
Qed.
Lemma lemma4 (size: Size) (int: Int)
(Hs0 : (int >= - 2 ^ (Z.of_nat size * 8 - 1))%Z) :
true = (
int + 2 ^ (Z.of_nat size * 8)
>=? 2 ^ (Z.of_nat size * 8 - 1))%Z.
Proof.
rewrite (proj2 (Z.geb_le _ _)). { reflexivity. }
assert (2 ^ (Z.of_nat size * 8 - 1) <= (- 2 ^ (Z.of_nat size * 8 - 1))%Z + 2 ^ (Z.of_nat size * 8))%Z; cycle 1. { lia. }
declare x Hx (Z.of_nat size * 8)%Z. rewrite Hx. clear - x.
destruct x; try (simpl; lia).
assert (2^(Z.pos p) = 2 * 2^(Z.pos p-1))%Z; try lia.
rewrite <- (Z.pow_succ_r 2); try lia.
replace (Z.succ (Z.pos p - 1))%Z with (Z.pos p); lia.
Qed.
Lemma lemma5 (d: Int) (size: Size) :
(size > 0) ->
((d >=? int_stop size Signed)%Z = true) ->
(int_in_range d size Unsigned = true) ->
(int_in_range (d - signed_offset size)%Z size Signed) = true.
Proof.
intros Hs.
to_base.
lia.
Qed.
Lemma lemma6 (d: Int) (size: Size) :
(size > 0) ->
(int_in_range d size Unsigned = true) ->
(d - signed_offset size >=? 0)%Z = false.
Proof.
intros Hs.
to_base.
lia.
Qed.
Lemma lemma7 (size: Size) (l: list byte) :
(length l = size) ->
(decode_uint_le size l >=? 0)%Z = true.
Proof.
intros H.
destruct (destruct_int_in_range (uint_le_decode_valid size l H)) as [H0 _].
to_base_in H0.
to_base.
lia.
Qed.
Lemma lemma8 (d: Int) (size: Size) :
(size > 0) ->
(d >=? int_stop size Signed)%Z = false ->
(d >=? 0)%Z = true ->
(int_in_range d size Signed) = true.
Proof.
intros Hs.
to_base.
lia.
Qed.
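(* Round trip 1: every integer in range for [size]/[signedness] encodes to
some byte list that decodes back to the same integer, with length [size]. *)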
Lemma rt1_int_le (size: Size) (signedness: Signedness) (int: Int) (H: int_in_range int size signedness = true) :
(size > 0) ->
exists l, Some l = encode_int_le size signedness int /\
decode_int_le size signedness l = Some int /\ length l = size.
Proof.
intros Hs.
destruct signedness.
(* signed *)
- destruct (int >=? 0)%Z eqn:E.
(* signed, positive *)
-- exists (encode_uint_le size int).
unfold encode_int_le. rewrite H. simpl.
rewrite E.
split. { reflexivity. }
split.
++ unfold decode_int_le.
have HR (lemma2 _ _ H E Hs).
rewrite (uint_le_encode_valid); try apply HR.
f_equal.
rewrite Nat.eqb_refl.
replace (decode_uint_le size (encode_uint_le size int) >=?
int_stop size Signed)%Z with false. {
simpl. f_equal.
apply rt1_uint_le.
apply HR; try lia; try assumption.
}
rewrite rt1_uint_le; try apply HR.
destruct (destruct_int_in_range H).
to_base. to_base_in H0. to_base_in H1. to_base_in HR.
lia.
++ apply uint_le_encode_valid.
apply (lemma2 _ _ H); lia.
(* signed, negative *)
-- exists (encode_uint_le size (int + signed_offset size)%Z).
split.
--- unfold encode_int_le. rewrite H,E. simpl. reflexivity.
--- unfold decode_int_le.
rewrite rt1_uint_le.
destruct (destruct_int_in_range (lemma3 _ _ E H Hs)) as [H0 H1].
---- rewrite uint_le_encode_valid.
rewrite Nat.eqb_refl.
f_equal.
split; try reflexivity.
replace (int + signed_offset size >=?
int_stop size Signed)%Z with true.
destruct (destruct_int_in_range H) as [Hs0 Hs1].
to_base. to_base_in H0. to_base_in H1. to_base_in Hs0. to_base_in Hs1.
simpl. f_equal. lia.
destruct (destruct_int_in_range H) as [Hs0 Hs1]. (* this is redundant *)
to_base. to_base_in H0. to_base_in H1. to_base_in Hs0. to_base_in Hs1.
lia.
to_base. to_base_in H0. to_base_in H1. to_base_in Hs0. to_base_in Hs1.
lia.
---- apply (lemma3 _ _ E H Hs).
(* unsigned *)
- exists (encode_uint_le size int).
unfold encode_int_le. rewrite H. simpl.
split. { reflexivity. }
unfold decode_int_le.
rewrite (uint_le_encode_valid).
rewrite Nat.eqb_refl.
simpl.
f_equal. split; try reflexivity.
f_equal.
apply rt1_uint_le.
assumption.
assumption.
Qed.
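(* Round trip 2: every byte list of length [size] decodes to an in-range
integer that encodes back to the same byte list. *)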
Lemma rt2_int_le (size: Size) (signedness: Signedness) (l: list byte) (H: length l = size) :
(size > 0) ->
exists int, Some int = decode_int_le size signedness l /\
encode_int_le size signedness int = Some l /\
int_in_range int size signedness = true.
Proof.
intros Hs.
destruct signedness; unfold decode_int_le,encode_int_le; simpl; rewrite H; rewrite Nat.eqb_refl; simpl.
- destruct ((decode_uint_le size l) >=? int_stop size Signed)%Z eqn:E'.
(* signed, negative *)
-- exists ((decode_uint_le size l) - signed_offset size)%Z.
rewrite lemma5; try assumption; try (apply uint_le_decode_valid; assumption).
rewrite lemma6; try (apply uint_le_decode_valid; assumption); try assumption.
unfold int_stop in E'. rewrite E'.
split. { reflexivity. }
simpl. f_equal.
assert (forall x y, x - y + y = x)%Z as F. { lia. }
rewrite F.
split; try reflexivity.
f_equal.
apply rt2_uint_le.
assumption.
(* signed, positive *)
-- exists (decode_uint_le size l).
rewrite lemma7; try assumption.
rewrite lemma8; try (assumption || (apply lemma7; assumption)).
unfold int_stop in E'. rewrite E'.
split. { reflexivity. }
simpl.
rewrite rt2_uint_le.
--- split; reflexivity.
--- assumption.
(* unsigned *)
- exists (decode_uint_le size l).
split. { reflexivity. }
rewrite uint_le_decode_valid; try assumption.
simpl.
rewrite rt2_uint_le.
-- split; reflexivity.
-- assumption.
Qed.
|
lemma complex_cnj_i [simp]: "cnj \<i> = - \<i>" |